GPT Image 1 vs 1.5 vs 2: Migration, Specs, and Pricing (2026)
Compare OpenAI's GPT Image 1, 1.5, and 2 on quality, pricing, and request shape. Plus the migration plan before gpt-image-1 shuts down on October 23, 2026.
OpenAI shipped three GPT Image generations in twelve months — gpt-image-1, gpt-image-1.5, and gpt-image-2. Each one rewrote the request shape, changed the pricing model, and left the previous generation hanging on a deprecation timer. If your codebase still references gpt-image-1, the deadline that actually matters is October 23, 2026 — the date OpenAI shuts the route down. This guide covers what changed at each step, how the three generations compare on quality and price, and how to migrate cleanly.
TL;DR:
gpt-image-1 is deprecated and shuts down October 23, 2026. gpt-image-1.5 (December 16, 2025) is a stable, conservative production target — same prompt behaviour as 1.0 with better instruction following. gpt-image-2 (April 21, 2026) is the current state of the art — best text rendering, no warm yellow color cast, supports 3:1 / 1:3 ultrawide aspect ratios, native 2K output. On Unifically: GPT Image 2 prices flat at $0.03 / $0.05 / $0.06 per image for 1K / 2K / 4K. The legacy gpt-image-1 and gpt-image-1.5 routes are hidden in the pricing UI but still callable for in-flight workloads.
Three generations at a glance
| Spec | gpt-image-1 | gpt-image-1.5 | gpt-image-2 |
|---|---|---|---|
| Status | Deprecated — shutdown October 23, 2026 | Stable, hidden in Unifically pricing UI | Current — recommended for new work |
| Released | Original GPT Image generation | December 16, 2025 | April 21, 2026 |
| Output resolution | 1024×1024 baseline | 1024×1024 baseline | Up to 2K, ultrawide aspect ratios |
| Aspect ratios | 1:1, 2:3, 3:2 | 1:1, 2:3, 3:2 | 1:1, 2:3, 3:2, plus 3:1 / 1:3 ultrawide |
| Request shape | quality (low / medium / high) | quality (low / medium / high) | resolution (1k / 2k / 4k) |
| Text rendering | Workable, occasional artifacts | Better instruction following | Near-perfect; clean small text and UI elements |
| Color cast | Warm yellow tint visible | Warm yellow tint partially fixed | No yellow cast; neutral output |
| Reasoning | None | None | Adds reasoning step before generation |
| OpenAI direct pricing (1024×1024 ref) | low/med/high (legacy) | $0.009 / $0.034 / $0.133 | $0.006 / $0.053 / $0.211 |
| Unifically pricing | hidden, in-flight only | hidden, in-flight only | $0.03 / $0.05 / $0.06 (1K / 2K / 4K) |
What changed between gpt-image-1 and gpt-image-1.5
gpt-image-1.5 shipped December 16, 2025 as an incremental upgrade. The headline changes:
- Better instruction following. Prompts that asked for specific layouts, text, or compositional rules landed more reliably than on 1.0.
- Same prompt behaviour overall. gpt-image-1.5 was positioned as a stability upgrade — drop-in for most workloads, with a gentler learning curve than jumping straight to 2.0.
- Same quality knob. Low / medium / high remained the request shape. No reformatting of the call payload was needed.
- Slightly cheaper at low and medium quality ($0.009 / $0.034 vs the legacy 1.0 rates).
For teams that wanted gpt-image-1's look but needed slightly tighter prompt adherence, 1.5 was the safe upgrade. It is still callable on Unifically (hidden in the pricing UI, but the API works) for in-flight workloads that haven't fully migrated.
What changed between gpt-image-1.5 and gpt-image-2
gpt-image-2 shipped April 21, 2026 and is the much bigger jump. Five things changed:
- Request shape pivoted from quality to resolution. The old low / medium / high quality knob is replaced with explicit 1K / 2K / 4K output sizes. Pricing follows resolution, so drafts at 1K and finals at 4K are budgetable.
- Text rendering is near-perfect. Small text, UI elements, manga panels, pixel art, packaging copy — all render cleanly where 1.0 / 1.5 produced artifacts on dense type.
- Warm yellow color cast eliminated. The 1.0 / 1.5 line shipped a recognisable warm tint on neutral scenes. 2.0 outputs are neutral.
- Wider aspect ratio support. 3:1 and 1:3 ultrawide ratios join 1:1, 2:3, 3:2.
- Reasoning step before generation. OpenAI added a planning pass — useful on prompts with complex spatial relationships, multi-element compositions, and dense scene descriptions.
The cost is that the old quality parameter doesn't exist on 2.0. Calls that hardcoded quality: 'high' need to be rewritten to use resolution: '4k' (or whichever tier matches the output target).
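The rewrite can be automated with a small payload transformer. A minimal sketch, assuming the task-payload shape used in the code samples later in this guide; migrateToGptImage2 and the fallback tier are our own choices, not an official helper:

```javascript
// Hypothetical migration helper (not part of any SDK): swaps the model id
// and converts the legacy quality knob into an explicit resolution tier.
const QUALITY_TO_RESOLUTION = { low: '1k', medium: '2k', high: '4k' };

function migrateToGptImage2(payload) {
  // Pull quality out so it never reaches the 2.0 route.
  const { quality, ...rest } = payload.input;
  return {
    ...payload,
    model: 'openai/gpt-image-2',
    input: {
      ...rest,
      // Unknown or missing quality falls back to the middle tier (our choice).
      resolution: QUALITY_TO_RESOLUTION[quality] ?? '2k',
    },
  };
}
```

Run it over stored request templates once, rather than branching on model name at call time.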
Pricing comparison
OpenAI direct rates are token-based on 2.0 and per-quality-tier on 1.5 / 1.0. Unifically prices flat per resolution tier for 2.0 — no token math.
| Model | Tier | OpenAI direct (1024×1024) | Unifically (per image) |
|---|---|---|---|
| gpt-image-1 | Low quality | legacy rate | hidden / in-flight only |
| gpt-image-1 | Medium quality | legacy rate | hidden / in-flight only |
| gpt-image-1 | High quality | legacy rate | hidden / in-flight only |
| gpt-image-1.5 | Low | $0.009 | hidden / in-flight only |
| gpt-image-1.5 | Medium | $0.034 | hidden / in-flight only |
| gpt-image-1.5 | High | $0.133 | hidden / in-flight only |
| gpt-image-2 | 1K | $0.006 (low quality ref) | $0.03 |
| gpt-image-2 | 2K | $0.053 (medium quality ref) | $0.05 |
| gpt-image-2 | 4K | $0.211 (high quality ref) | $0.06 |
Direct rates come from OpenAI's published pricing for gpt-image-1.5 and gpt-image-2 (1024×1024 reference points). Unifically mirrors the same 1K / 2K / 4K tier structure but charges a flat per-image rate, which keeps budgets predictable and undercuts the direct 2K and 4K reference rates.
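Because the Unifically tiers are flat per image, batch cost is plain multiplication. A sketch using the per-image rates from the table above (estimateBatchCost is an illustrative name, not an SDK function):

```javascript
// Per-image rates from the pricing table above (USD).
const UNIFICALLY_GPT_IMAGE_2_RATES = { '1k': 0.03, '2k': 0.05, '4k': 0.06 };

// counts: images per resolution tier, e.g. { '1k': 200, '4k': 20 }
function estimateBatchCost(counts) {
  return Object.entries(counts).reduce(
    (total, [tier, n]) => total + n * (UNIFICALLY_GPT_IMAGE_2_RATES[tier] ?? 0),
    0
  );
}
```

For example, 200 drafts at 1K plus 20 finals at 4K comes to $6.00 + $1.20 = $7.20, versus roughly $4.22 for the 20 finals alone at the OpenAI-direct 4K reference rate from the table.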
Migrating from gpt-image-1 or gpt-image-1.5
OpenAI's deprecations page lists gpt-image-1 shutdown for October 23, 2026. Calls to that route will start returning errors after that date. Two practical migration paths:
Option A — migrate to gpt-image-2 (recommended for new work)
Best for: workloads that benefit from the latest text rendering, neutral colour, ultrawide aspect ratios, and the resolution-tiered pricing shape.
What changes in your code:
- Switch model: 'openai/gpt-image-1' → model: 'openai/gpt-image-2'.
- Replace quality: 'low' | 'medium' | 'high' with resolution: '1k' | '2k' | '4k'.
- Optionally add ultrawide aspect ratios (3:1 / 1:3) where useful.
- Keep image_urls[] as-is (still up to 16 references at 100 MB each on 2.0).
Approximate quality mapping:
- quality: 'low' → resolution: '1k'
- quality: 'medium' → resolution: '2k'
- quality: 'high' → resolution: '4k'
Option B — migrate to gpt-image-1.5 (low-risk stopgap)
Best for: workloads where output stability matters more than the latest model behaviour. gpt-image-1.5 shares prompt behaviour with gpt-image-1 while shipping better instruction following at lower cost on the medium / high bands.
What changes in your code:
- Switch model: 'openai/gpt-image-1' → model: 'openai/gpt-image-1.5'.
- Keep the quality parameter as-is (low / medium / high still works).
- Keep image_urls[] and prompt structure as-is.
Note: gpt-image-1.5 is also slated for deprecation eventually. Treat Option B as a short-term stability bridge, not a long-term destination.
Code: migrating a gpt-image-1 call to gpt-image-2
Before (gpt-image-1)
```javascript
const start = await fetch(`${API}/v1/tasks`, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    model: 'openai/gpt-image-1',
    input: {
      prompt: 'A photorealistic packaging mockup of a matte black bottle on marble',
      quality: 'high',
      aspect_ratio: '2:3',
      image_urls: ['https://example.com/brand-reference.jpg'],
    },
  }),
}).then((r) => r.json());
```
After (gpt-image-2)
```javascript
const start = await fetch(`${API}/v1/tasks`, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    model: 'openai/gpt-image-2',
    input: {
      prompt: 'A photorealistic packaging mockup of a matte black bottle on marble',
      resolution: '4k',
      aspect_ratio: '2:3',
      image_urls: ['https://example.com/brand-reference.jpg'],
    },
  }),
}).then((r) => r.json());
```
The only meaningful change is quality: 'high' → resolution: '4k'. Polling on /v1/tasks/{task_id} is identical across all three model generations.
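Because the task lifecycle is shared, the polling loop can be written once and reused unchanged through the migration. A sketch under assumptions: the status values ('succeeded' / 'failed'), the error field, and the injectable fetchImpl are our choices for illustration, not a documented contract.

```javascript
// Generic poller for /v1/tasks/{task_id}; the route shape is identical
// for gpt-image-1, 1.5, and 2, so migration does not touch this code.
// Status field names ('succeeded' / 'failed') are assumed, not documented here.
async function pollTask(apiBase, headers, taskId, opts = {}) {
  const { fetchImpl = fetch, intervalMs = 2000, maxAttempts = 60 } = opts;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const task = await fetchImpl(`${apiBase}/v1/tasks/${taskId}`, { headers })
      .then((r) => r.json());
    if (task.status === 'succeeded') return task; // output URLs live on the task
    if (task.status === 'failed') throw new Error(task.error ?? 'generation failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`task ${taskId} still pending after ${maxAttempts} polls`);
}
```

The injectable fetchImpl keeps the loop testable with a stubbed transport, without hitting the live route.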
When to use each model
Use gpt-image-2 when
- Starting a new workload — there is no good reason to start on a deprecated model.
- Text rendering matters (UI mockups, packaging copy, posters, manga).
- You need neutral output without the warm yellow cast.
- You need ultrawide aspect ratios (3:1 or 1:3).
- You want explicit resolution tiers for budgeting (drafts at 1K, finals at 4K).
Use gpt-image-1.5 when
- You're mid-migration and need a low-risk stopgap.
- Output stability matters more than the latest model behaviour.
- Your codebase still passes quality and you can't refactor immediately.
- The cheaper medium-quality tier ($0.034 direct) hits the budget you need.
Avoid gpt-image-1 because
- It shuts down on October 23, 2026.
- gpt-image-1.5 is a drop-in stability bridge with better instruction following.
- gpt-image-2 is materially better on every quality dimension.
Common mistakes during migration
- Skipping the migration deadline. October 23, 2026 is the hard cutoff for gpt-image-1. Calls after that return errors. Don't leave it to the last week.
- Hardcoding quality: 'high' and forgetting to update on 2.0. The quality parameter does not exist on gpt-image-2. Set resolution instead.
- Migrating to 1.5 thinking it's safe forever. 1.5 is a stable bridge; OpenAI will eventually deprecate it too. Plan a 2.0 path even if you take the 1.5 stopgap.
- Assuming Unifically's quality mapping carries over. Unifically prices gpt-image-2 flat per resolution tier ($0.03 / $0.05 / $0.06). The OpenAI-direct token-based pricing isn't what you pay through Unifically.
- Forgetting the warm cast. If your downstream pipeline applies a warm-tint LUT to compensate for gpt-image-1's yellow cast, that LUT will over-warm gpt-image-2 output. Strip it on migration.
Frequently asked questions
When does gpt-image-1 shut down?
October 23, 2026, per OpenAI's deprecations page. Calls after that date will return errors. Migrate to gpt-image-2 (recommended) or gpt-image-1.5 (stable stopgap) before then.
What is the biggest difference between gpt-image-1.5 and gpt-image-2?
Three things. The request shape switched from quality (low / medium / high) to resolution (1K / 2K / 4K). Text rendering went from "workable" to "near-perfect". The warm yellow color cast that defined the 1.0 / 1.5 line is gone on 2.0. Plus ultrawide aspect ratios (3:1 / 1:3) and a reasoning step before generation.
How much does GPT Image 2 cost on Unifically?
$0.03 per image at 1K, $0.05 at 2K, and $0.06 at 4K. There is no subscription — billing is per generated image against the openai/gpt-image-2 price key. Approximate quality mapping from 1.5: low ≈ 1K, medium ≈ 2K, high ≈ 4K.
Can I still call gpt-image-1 on Unifically?
For now, yes — the route is hidden in the live pricing UI but the API is still callable for in-flight workloads. After OpenAI's October 23, 2026 shutdown, calls will return errors. Migrate before then.
Should I jump to GPT Image 2 directly or migrate to 1.5 first?
Jump to 2.0 if your team can absorb the prompt-shape change (quality → resolution) and re-test outputs. The quality and pricing benefits are material. Use 1.5 as a stability stopgap only if you cannot refactor before the October 2026 deadline.
Related reading
- GPT Image 2 deep dive — full GPT Image 2 specs and code samples
- GPT Image 2 model page — live playground with 1K / 2K / 4K toggle
- Nano Banana 2 — Google alternative with the same per-tier pricing shape
- Qwen Image vs Nano Banana — when Google's safety filter is too restrictive
- Flux.2 — Black Forest Labs alternative with Flex / Pro / Max tiers



