OpenAI’s Nick Turley on what comes after ChatGPT: proactive AI agents, GPT‑5, multimodal AI, new hardware with Jony Ive, and why human connection still matters.
What comes after ChatGPT? In a recent interview, OpenAI product lead Nick Turley shared a candid view of where AI is heading next. Read the original interview via t3n. His vision centers on a super-assistant that anticipates needs, acts on our behalf, and blends seamlessly into daily life.
The era of AI agents has begun
Chatbots changed how we work with AI. The next leap is AI agents. These agents do more than answer questions. They plan, take action, and complete tasks for you. Think bookings, orders, research, and complex workflows. Turley calls this shift essential to being truly useful: a smart assistant should be proactive, not only reactive. It should see the next steps and help you finish the job.
ChatGPT started as a short-term prototype to collect feedback. It now serves hundreds of millions of users every week. That growth shows real demand for a helpful, reliable assistant. The next wave will make that assistant more capable and more autonomous, while keeping users in control.
GPT-5 matters — but products matter more
Smarter foundation models such as GPT-5 will drive a visible step forward. But model quality is only half the story. The other half is real products that put those models to work. Turley highlights industry use cases as the true frontier. One example: Moderna uses OpenAI technology to help design safer medicines faster. The value appears when AI meets a sharp workflow and a clear outcome.
The takeaway is practical. Pair leading models with focused features and robust guardrails. Then ship them where they solve real problems for real users.
Multimodal AI is the new default
We will not talk to AI only through text. Multimodal AI is becoming the default now, not in 20 years. Many people prefer to speak and listen. Others want to show the problem with a photo or a live camera feed. Turley cites everyday moments: using ChatGPT's voice mode in the car to organize the day, or snapping a photo of a tricky assembly and getting step-by-step help. These are simple but powerful shifts.
As models get better with speech, vision, and context, the assistant feels closer to a helpful teammate. It can see what you see and act faster than you can type.
New devices will unlock the experience
We are using tomorrow’s AI on yesterday’s devices. That gap invites new hardware. OpenAI has announced a project with Jony Ive to explore fresh device ideas. Details remain private, but the goal is clear: design a form factor that fits proactive, multimodal agents. Touchscreens changed everything. The next shift could come from hardware made for AI-first interactions.
Expect wearables, ambient devices, or companion tools that keep assistants present but unobtrusive. The best device will reduce friction and expand what agents can do safely.
Ship fast, iterate safely, include everyone
Turley argues for iterative delivery. Release updates early and often, not once a year. Fast cycles let people adapt, give feedback, and flag risks. OpenAI followed this path with DALL·E, launching it before the model reached photorealism in order to invite a broader public debate about generated images. That dialogue helped shape safety measures and norms.
Progress should benefit everyone. Users and non-users alike deserve a voice in how AI evolves. Iteration, transparency, and public input help keep systems useful and safe.
AI’s limit: it cannot replace human connection
AI opens many doors, but it will not replace the people who matter to us. We choose friends and partners for meaning, not raw intelligence. That is a natural boundary. Turley hopes AI frees up time for what counts: family, creativity, and community. The right goal is not to replace humans, but to support them.
Key takeaways for leaders and builders
- Prioritize agents: Move from chat to proactive AI agents that plan and act.
- Model + product: Pair GPT‑5-class models with sharp features and guardrails.
- Go multimodal: Design for voice, vision, and live context from day one.
- Think hardware: Expect AI-first devices; watch OpenAI and Jony Ive’s work.
- Ship, learn, refine: Release early, listen widely, and improve safety continuously.
- Protect the human core: Use AI to return time to relationships and purpose.
Why this matters now
ChatGPT’s rise shows a deep need for clear, capable help. The next phase will be defined by AI agents that see context, coordinate steps, and deliver results. GPT-5 and future models will power this shift. Multimodal AI and new hardware will make it feel natural. And rapid, responsible iteration will keep it safe and useful for all. That is what comes after ChatGPT.