OpenAI Deep Research API explained: features, use cases, and pricing

OpenAI just released something that is going to change how research gets done. The new Deep Research API gives developers access to the same advanced models used inside ChatGPT for deep, multi-step research tasks.
This goes far beyond question answering. You give it a big question, and it figures out what to do: it breaks the question into smaller problems, searches online sources, runs code, and returns a full report with citations and source metadata.
Let’s walk through what makes it so important.
What makes this API different
The API is powered by two specialized models: o3-deep-research and o4-mini-deep-research. These models are designed specifically for planning and executing complex research tasks from end to end.
They do not just respond to a prompt. They map out the problem, explore the right sources, analyze results, and deliver structured output that feels more like a full research assistant than a chatbot.
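As a rough sketch, calling one of these models looks like a normal Responses API request with research tools attached. The helper below just assembles the request keyword arguments; the model names come from the announcement, while the exact field names (`input`, `tools`, `background`) follow OpenAI's Responses API docs and may evolve.

```python
# Sketch: assemble keyword arguments for client.responses.create(...).
# Model names are from the announcement; request fields are assumptions
# based on OpenAI's Responses API and may change.

def build_research_request(question: str, cheap: bool = False) -> dict:
    """Build kwargs for a Deep Research call via the Responses API."""
    return {
        "model": "o4-mini-deep-research" if cheap else "o3-deep-research",
        "input": question,
        "tools": [{"type": "web_search_preview"}],  # research runs need web search
        "background": True,  # long-running jobs run asynchronously
    }

req = build_research_request("How did EU battery imports change in 2023?")
print(req["model"])  # o3-deep-research
```

You would then pass these kwargs to `client.responses.create(**req)` with the official `openai` Python client and poll the background job until it completes.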
Web search, code execution, and private data access
This release also includes native support for tools that make the whole system more useful.
- It can search the web in real time
- It can run code to test ideas or calculate results
- It can connect to your private data using MCP servers
That last piece is huge. With MCP, the model can tap into your company’s internal APIs, documents, databases, or knowledge base. That makes it possible to combine internal knowledge with live public data in the same workflow.
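All three tools go into the same `tools` array on the request. The shapes below follow the Responses API tool schemas at the time of writing; `internal_kb` and the server URL are placeholder assumptions, not real endpoints.

```python
# Sketch of a combined tool list for one research request.
# Tool schemas follow OpenAI's Responses API docs; the MCP server
# label and URL below are illustrative placeholders.

tools = [
    {"type": "web_search_preview"},  # live public web search
    {"type": "code_interpreter", "container": {"type": "auto"}},  # run code
    {
        "type": "mcp",  # private data via a Model Context Protocol server
        "server_label": "internal_kb",
        "server_url": "https://mcp.example.com/sse",
        "require_approval": "never",
    },
]

print([t["type"] for t in tools])  # ['web_search_preview', 'code_interpreter', 'mcp']
```

With this list, a single query can cite both a public news source and a row from your internal database in the same report.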
Building agents is easier than ever
OpenAI released examples showing how to use the Deep Research API with the Agents SDK. In just 30 to 40 lines of code, you can build a multi-agent system where each agent plays a role.
You might have one agent that plans, another that researches, one that executes code, and one that writes the final output. These agents work together to deliver a full research response, all coordinated behind the scenes.
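The role split is easier to see in code. This is a plain-Python sketch of the coordination pattern only: each "agent" is a stub function standing in for a model call, whereas OpenAI's published examples wire the same roles together with the Agents SDK.

```python
# Plain-Python sketch of the planner / researcher / writer split.
# Each stub stands in for a model call; the real examples use the
# OpenAI Agents SDK to coordinate these roles.

def planner(question: str) -> list[str]:
    # Break the big question into focused sub-questions.
    return [f"background on: {question}", f"recent data on: {question}"]

def researcher(sub_question: str) -> str:
    # Would call a deep-research model with web search; stubbed here.
    return f"findings for '{sub_question}'"

def writer(findings: list[str]) -> str:
    # Assemble the findings into one report.
    return "Report:\n" + "\n".join(f"- {f}" for f in findings)

def run_pipeline(question: str) -> str:
    sub_questions = planner(question)
    findings = [researcher(s) for s in sub_questions]
    return writer(findings)

print(run_pipeline("EV battery supply chains"))
```

Swapping the stubs for real model calls is where the 30 to 40 lines of SDK code come in; the control flow stays this simple.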
Cost breakdown
This level of performance comes with a cost, and it is important to factor that in before deploying.
- o3-deep-research: $10 per million input tokens, $40 per million output tokens
- o4-mini-deep-research: $2 per million input, $8 per million output
- Web search tool: $10 per 1,000 tool calls
Some users have already reported spending over $100 on just 10 test queries. This is not for casual use. It is for moments where speed, accuracy, and depth really matter.
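A quick back-of-envelope calculator makes the economics concrete. The prices come from the table above; the token counts and search-call counts in the example are illustrative assumptions, not measurements.

```python
# Back-of-envelope per-query cost estimate.
# Prices are from the table above; token and search counts below
# are illustrative assumptions, not measured usage.

PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "o3-deep-research": (10.00, 40.00),
    "o4-mini-deep-research": (2.00, 8.00),
}
WEB_SEARCH_PER_CALL = 10.00 / 1000  # $10 per 1,000 tool calls

def estimate_cost(model: str, input_tokens: int,
                  output_tokens: int, search_calls: int) -> float:
    inp_rate, out_rate = PRICES[model]
    return ((input_tokens / 1e6) * inp_rate
            + (output_tokens / 1e6) * out_rate
            + search_calls * WEB_SEARCH_PER_CALL)

# Hypothetical heavy query: 200k input tokens, 100k output, 50 searches
print(round(estimate_cost("o3-deep-research", 200_000, 100_000, 50), 2))  # 6.5
```

Under these assumed numbers a heavy o3 query lands in the single-digit dollars, so a $100 bill for 10 queries implies substantially larger token volumes per run. Either way, it is worth metering usage before putting this in front of users.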
Rysysth insights
At Rysysth, we see this as a major shift in how businesses can operationalize research. This is no longer just about faster answers. It is about creating real-time research agents that work across both public and internal data.
We are exploring early use cases across strategy, reporting, and operations. The ability to run deep research using our clients' data and infrastructure opens up some exciting possibilities. We will be sharing real-world tests soon.
Until next time.