GPT-5.4 Mini and Nano Arrive for Faster AI Tasks

OpenAI has launched two new smaller models, GPT-5.4 mini and GPT-5.4 nano, positioning them as faster and more efficient options for coding, tool use and other high-volume AI tasks. The company says they are its most capable small models yet and are designed for workloads where latency matters just as much as raw intelligence.

According to OpenAI, GPT-5.4 mini improves on GPT-5 mini across coding, reasoning, multimodal understanding and tool use, while running more than twice as fast. GPT-5.4 nano is being pitched as the smallest and cheapest GPT-5.4 model for tasks where cost and speed matter most, including classification, ranking, data extraction and simpler coding subagent work.

Built for Speed, Coding and Subagents

OpenAI says the new models are aimed at the kinds of jobs where slower responses directly hurt the product experience. That includes coding assistants, multimodal apps reasoning over images in real time, computer-use systems interpreting screenshots, and subagents handling narrower supporting tasks in larger AI workflows.

The company makes a strong case that smaller models are becoming more strategically important, not less. In its example, a larger model like GPT-5.4 can handle planning and coordination, while GPT-5.4 mini subagents take care of narrower tasks such as searching a codebase, reviewing a file or processing supporting documents in parallel.
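The planner/subagent split described here can be sketched as a simple fan-out. This is a minimal illustration, not OpenAI's implementation: `run_subagent` and `plan_tasks` are stubs standing in for real model calls (a larger model planning, GPT-5.4 mini subagents executing), and the task names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def plan_tasks(goal: str) -> list[str]:
    # Stub: a larger planning model (e.g. GPT-5.4) would decompose the goal here.
    return [
        "search the codebase for the failing function",
        "review the changed file",
        "summarise the supporting documents",
    ]

def run_subagent(task: str) -> str:
    # Stub: a real workflow would call a small, fast model
    # (e.g. GPT-5.4 mini) with a narrow prompt for this one task.
    return f"result for {task!r}"

def run_workflow(goal: str) -> list[str]:
    tasks = plan_tasks(goal)
    # Fan the narrow tasks out to subagents in parallel; each runs
    # independently, so overall latency is bounded by the slowest task.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_subagent, tasks))

results = run_workflow("fix the flaky test")
```

The point of the pattern is that the cheap, fast subagent calls happen concurrently, so using smaller models for the narrow tasks cuts both cost and wall-clock time.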

OpenAI Is Chasing Better Performance per Dollar

OpenAI’s benchmark results suggest the main selling point here is not simply that mini and nano are cheaper. It is that they are supposed to offer a much stronger balance of speed, cost and capability than earlier small models. On the company’s published comparisons, GPT-5.4 mini outperforms GPT-5 mini on several coding, tool-calling and intelligence benchmarks, while GPT-5.4 nano also improves over GPT-5 nano in a number of supported use cases.

That makes this launch less about replacing flagship models and more about filling an increasingly important middle layer of the AI stack. Developers do not always want the biggest model possible. Often they want something good enough to reason, code and use tools reliably, but fast enough and cheap enough to run at scale. OpenAI’s launch is clearly aimed at that part of the market.

Where the New Models Are Available

OpenAI says GPT-5.4 mini is available immediately in the API, Codex and ChatGPT. In the API, it supports text and image inputs, tool use, function calling, web search, file search, computer use and skills, with a 400K-token context window. OpenAI lists pricing for GPT-5.4 mini at $0.75 per 1 million input tokens and $4.50 per 1 million output tokens.

The company says GPT-5.4 nano is available only in the API, priced at $0.20 per 1 million input tokens and $1.25 per 1 million output tokens. In ChatGPT, OpenAI says GPT-5.4 mini is available to Free and Go users through the Thinking feature in the plus menu, while other users will see it as a rate-limit fallback for GPT-5.4 Thinking.
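At those rates, per-request cost is simple arithmetic. A quick sketch using the prices quoted above (the token counts are made up for illustration):

```python
# Per-million-token prices quoted for the new models (USD).
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-million rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10,000-token prompt with a 1,000-token reply.
mini_cost = request_cost("gpt-5.4-mini", 10_000, 1_000)  # $0.012
nano_cost = request_cost("gpt-5.4-nano", 10_000, 1_000)  # $0.00325
```

At high volume the gap compounds: the same million requests of that shape would cost roughly $12,000 on mini versus $3,250 on nano, which is why OpenAI is pitching nano at classification and extraction workloads.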

Why this matters for Australia

For Australian developers, startups and businesses, smaller models like these could be more relevant than the flagship releases. Cost-sensitive teams often care less about squeezing out the absolute highest benchmark score and more about whether a model is fast, stable and affordable enough to use every day.

It also points to where the AI market is heading. The competitive battle is no longer only about who has the smartest large model. It is increasingly about who can offer the best mix of speed, capability and pricing across different layers of the stack, especially for coding and workflow automation.

For readers, the bigger takeaway is simple: AI companies are not just racing to build more powerful models. They are also racing to make smaller models useful enough to handle real work at scale.

Source: OpenAI
