Zee Live News, World's No.1 News Portal

OpenAI’s Faster GPT-5.4 Mini and Nano AI Models Are Here: Details

Author: admin_zeelivenews

Published: 18-03-2026, 11:02 AM

OpenAI introduced two new artificial intelligence (AI) models in the GPT-5.4 family on Tuesday. Dubbed GPT-5.4 mini and GPT-5.4 nano, the two smaller AI models are faster than the larger models in the family and are aimed at low-latency workloads. Their key strengths include coding proficiency, computer use, multimodal understanding, and subagent handling. For developers, the models are also cost-efficient, given their lower input and output token prices.

OpenAI Introduces GPT-5.4 Mini and GPT-5.4 Nano

In a blog post, the San Francisco-based AI giant announced the release of the two new models. GPT-5.4 mini is now available via the application programming interface (API), Codex, and ChatGPT. In the API, the model supports text and image inputs, tool use, function calling, web and file search, computer use, and skills, with a 400,000-token context window. It costs $0.75 (roughly Rs. 68) per million input tokens and $4.50 (roughly Rs. 416) per million output tokens.


Notably, GPT-5.4 mini is available to the free and Go tiers via the Thinking feature, while other tiers get it as a fallback model after hitting the rate limit for GPT-5.4 Thinking. GPT-5.4 nano, meanwhile, is currently only available via the API, with pricing set at $0.20 per million input tokens and $1.25 per million output tokens.
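Using the per-million-token prices quoted above, a developer can estimate request costs with simple arithmetic. A minimal sketch (the token counts in the example are illustrative, not from the article):

```python
# Per-million-token prices (USD) as quoted for the two models.
PRICING = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from per-million-token prices."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
mini_cost = estimate_cost("gpt-5.4-mini", 10_000, 2_000)  # $0.0165
nano_cost = estimate_cost("gpt-5.4-nano", 10_000, 2_000)  # $0.0045
```

At these rates, nano works out to roughly a quarter of mini's cost for the same workload.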

On capabilities, both models are optimised for coding-related tasks, particularly when deployed in fast, iterative environments. OpenAI says the models “handle targeted edits, codebase navigation, front-end generation, and debugging loops with low latency.” The company also says GPT-5.4 mini outperforms GPT-5-mini in most areas at similar latencies.

Another unique strength of the model is subagent handling. While the larger AI models in the family are suitable for more complex agentic tasks involving planning, coordination, and final judgment, the mini variant can handle subagents that take care of narrower subtasks in parallel.

OpenAI says these smaller models let developers compose systems in which no single model oversees every subtask in an agentic workflow. The company also claims that the mini variant excels at multimodal tasks around computer use; on the OSWorld-Verified benchmark, it approaches GPT-5.4.
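The pattern described above, where a larger model handles planning and judgment while the mini variant runs narrower subtasks in parallel, can be sketched as an async fan-out. This is an illustrative sketch only: `call_model` is a hypothetical stub standing in for a real API call, and the task strings are invented.

```python
import asyncio

# Hypothetical stand-in for a model call; a real system would hit the API here.
async def call_model(model: str, task: str) -> str:
    await asyncio.sleep(0)  # placeholder for network/inference latency
    return f"{model} finished: {task}"

async def run_workflow(subtasks: list[str]) -> list[str]:
    # A larger model would plan and render final judgment; the mini model
    # fans out over narrower subtasks concurrently, as the article describes.
    results = await asyncio.gather(
        *(call_model("gpt-5.4-mini", t) for t in subtasks)
    )
    return list(results)

results = asyncio.run(run_workflow(["edit file A", "search docs", "run tests"]))
```

The design point is that the low latency and low token cost of the mini model make it affordable to run many such subtasks side by side, with a larger model aggregating the results.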
