As more individuals and organizations adopt AI language models such as GPT-4 for content creation and natural language processing, it’s essential for users to understand the cost structure associated with these powerful tools.
In this blog post, we will discuss the updated pricing model for GPT-4, outlining the costs for both prompt tokens and output tokens across different context lengths.
GPT-4 Pricing Overview
The pricing model for GPT-4 considers the token count and model variant (8k and 32k context lengths) to determine the cost.
Here is the latest pricing for GPT-4, broken down into two categories: 8k context length models (e.g., gpt-4-0314) and 32k context length models (e.g., gpt-4-32k):

| Model | Prompt price per 1k tokens | Output price per 1k tokens |
|---|---|---|
| gpt-4 (8k context) | $0.03 | $0.06 |
| gpt-4-32k (32k context) | $0.06 | $0.12 |
As a rule of thumb, 1k tokens, whether prompt or output, correspond to roughly 750 words in English.
Examples: Cost Calculation
Let’s provide some examples to help you better understand how the pricing model works.
Example 1: Suppose you use gpt-4 with 100 words as a prompt and receive 500 words of output. In this case, you’ll have around 133 prompt tokens (100/750 * 1000) and 667 output tokens (500/750 * 1000).
The total cost would be (0.03 * 0.133) + (0.06 * 0.667) = $0.00399 + $0.04002 = $0.04401.
Example 2: Suppose you use gpt-4-32k with 500 words as a prompt and obtain 1000 words of output. In this case, you’ll have around 667 prompt tokens (500/750 * 1000) and 1333 output tokens (1000/750 * 1000).
The total cost for this usage would be (0.06 * 0.667) + (0.12 * 1.333) = $0.04002 + $0.15996 = $0.19998.
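The two calculations above can be wrapped into a small helper. This is an illustrative sketch, not an official tool: the rates come from the table above, the 750-words-per-1k-tokens conversion is the rough rule of thumb used in this post, and all function and variable names are made up for the example.

```python
# Rough GPT-4 cost estimator. Assumes the approximation used in this post:
# 1,000 tokens ≈ 750 English words. Names here are illustrative only.

# Price per 1k tokens (USD), as listed in the pricing table above.
PRICES = {
    "gpt-4":     {"prompt": 0.03, "output": 0.06},  # 8k context
    "gpt-4-32k": {"prompt": 0.06, "output": 0.12},  # 32k context
}

WORDS_PER_1K_TOKENS = 750


def words_to_tokens(words: int) -> float:
    """Convert a word count to an approximate token count."""
    return words / WORDS_PER_1K_TOKENS * 1000


def estimate_cost(model: str, prompt_words: int, output_words: int) -> float:
    """Estimate the USD cost of one request from word counts."""
    rates = PRICES[model]
    prompt_tokens = words_to_tokens(prompt_words)
    output_tokens = words_to_tokens(output_words)
    # Prices are quoted per 1k tokens, so divide the token counts by 1,000.
    return (rates["prompt"] * prompt_tokens / 1000
            + rates["output"] * output_tokens / 1000)


# Example 1 from this post: ≈ $0.044
print(round(estimate_cost("gpt-4", 100, 500), 4))
# Example 2 from this post: ≈ $0.20
print(round(estimate_cost("gpt-4-32k", 500, 1000), 4))
```

The tiny differences from the hand calculations above come from rounding: the worked examples round token counts to whole numbers first, while the helper keeps fractional tokens.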
🧮 For more accurate calculations, you can use a token-based pricing calculator.
I hope this concise overview of the GPT-4 pricing model helps you evaluate the costs and benefits of using GPT-4 in your projects.