Beyond Chat: Why LLMs Should Render Interfaces, Not Conversations
date: Aug 6, 2025
slug: beyond-chat
status: Published
tags: AI, Product
type: Post
URL:
summary: A thesis on the future of human-AI interaction and the end of chat fatigue
By Liran Markin and Gal Wiernik
The Terminal Era of AI
Our current approach to AI interaction is a profound category error. We’ve built systems capable of understanding and generating any form of structured information, yet we confine them to conversational text interfaces. That choice is not just suboptimal; it reflects a basic misunderstanding of what these systems can do.
Andrej Karpathy captured this perfectly in his Software 3.0 keynote: “Whenever I talk to ChatGPT or some LLM directly in text, I feel like I’m talking to an operating system through the terminal.” His analogy is precise. Just as early computers required users to memorize commands and syntax, we’re forcing users to articulate their needs through prompts and parse responses through text.
The numbers reveal the scale of this mismatch. ChatGPT attracted 3.7 billion visits in October 2024, making it one of the world’s most visited websites, and Character.ai users spend an average of 93 minutes per day in chat interfaces. Yet chat-first products still account for only a small fraction of the traffic flowing through the internet’s most significant sites. According to Similarweb data, the top rankings are dominated by visual interfaces: Google commands 18.32% of global web traffic, YouTube holds 6.96%, and visual-first platforms like Facebook, Instagram, and Amazon round out the list. Search engines, social feeds, video platforms, and e-commerce sites dominate because rich visual interfaces convey information more efficiently than text alone.
This explosion of chat interfaces has created what we call ‘chat fatigue’ - the cognitive exhaustion that comes from constantly crafting prompts, interpreting text responses, and re-prompting for clarification. Every AI interaction becomes a writing exercise, forcing users to work at the speed of text rather than the speed of thought. That exhaustion points to the limits of current chat interfaces and to the need for a more efficient interaction model.
The Information Bandwidth Hierarchy
Human information processing follows a clear hierarchy:
- Visual pattern recognition: Near-instantaneous
- Reading: 250-300 words per minute
- Listening: 150 words per minute
- Writing: 40 words per minute (average typing speed)
Chat interfaces operate at the bottom of this hierarchy. Users must serialize their thoughts into text, wait for responses, and then deserialize the reply back into understanding. Traditional interfaces leverage the full stack - visual hierarchy, spatial relationships, color, animation - to communicate at the speed of perception rather than the speed of reading.
Only 10-14% of text is signal rather than noise; for audio the figure drops to around 8%, and video carries merely 5.5% signal. Yet we keep forcing AI interactions through typed and read text - the lowest-bandwidth channel available.
The Core Thesis: AI-Native Rendering as the Natural Interface
Here’s the central insight: Large Language Models are not just text generators - they are universal interface generators waiting to be unleashed.
Consider what LLMs can already do:
- Generate syntactically perfect code in any programming language
- Create complete web applications from descriptions
- Understand design patterns and user experience principles
- Adapt outputs based on context and user needs
The same model that can explain quantum physics can also generate the HTML, CSS, and JavaScript for an interactive quantum physics simulator. The same intelligence that can analyze financial data can create the optimal dashboard for visualizing that analysis.
The question is not whether AI can generate interfaces - it clearly can. The question is why we’re not letting it.
What’s truly needed is an AI-native rendering system - an infrastructure where AI doesn’t just power the backend or fill in templates, but generates the entire frontend experience in real-time. This approach is a significant departure from current methods that treat AI as a service within traditional architectures.
From Static to Dynamic: The Inevitable Evolution
Traditional interface design follows a waterfall from designer to developer to user. Designers create mockups, developers implement components, and users interact with fixed interfaces. This made sense when computational resources were scarce and user needs were predictable.
But this model breaks down in the age of AI:
- Infinite use cases cannot be anticipated by finite design teams
- Personalization at scale requires more than A/B testing
- Context-aware interfaces need to be generated, not selected
The solution is Generative UI - interfaces created in real-time by AI based on the user’s specific context, intent, and needs. Not templates filled with content, but novel interfaces generated for each interaction.
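To make this concrete, here is a minimal sketch of what a Generative UI request could look like. Everything in it is hypothetical: UiModel stands in for any LLM client that returns generated text, and the context fields and prompt wording are illustrative rather than a specification of any particular product.

```typescript
// Sketch: generating a novel interface per interaction instead of filling a template.
// UiModel is a stand-in for any LLM client that returns generated text.
interface UiModel {
  complete(prompt: string): Promise<string>;
}

// The context that a one-size-fits-all page throws away.
interface InteractionContext {
  intent: string;       // e.g. "compare trail-running shoes under $100"
  userProfile: string;  // e.g. "returning customer, prefers minimal layouts"
  device: string;       // e.g. "mobile, 390x844, dark mode"
}

async function generateInterface(model: UiModel, ctx: InteractionContext): Promise<string> {
  const prompt = [
    "Generate a complete, self-contained web page (HTML, CSS, and JavaScript).",
    `The page must serve this intent: ${ctx.intent}`,
    `Adapt it to this user: ${ctx.userProfile}`,
    `Target device: ${ctx.device}`,
    "Return only the HTML document, nothing else.",
  ].join("\n");

  // The model's output is the interface itself - no template is being filled in.
  return model.complete(prompt);
}
```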
Evidence: The Pond Experiment
To test this thesis, we built a proof of concept at Pond. It demonstrates a system in which:
- LLMs maintain complete control over rendered interfaces
- Every interaction generates new UI elements
- Context persists across interactions
- Real-time performance matches user expectations

Pond works by giving the AI direct control over what appears on screen. When you interact with Pond, you’re not chatting with an AI that returns text - you’re interacting with an AI that generates the entire interface in real-time. The engineering breakthrough lies in optimizing inference specifically for UI generation and streaming complete webpages as they’re created. This required rethinking how LLM outputs are processed and rendered, creating a pipeline that can stream HTML, CSS, and JavaScript components with minimal latency while maintaining the coherence and functionality of the generated interface.
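To make the shape of such a pipeline concrete, here is a minimal sketch of streaming a generated page to the browser as tokens arrive. It is an illustration only: generateUiTokens stands in for an LLM streaming client, and nothing here describes Pond’s actual implementation.

```typescript
// Sketch: stream a generated page to the browser as the model produces it.
import { createServer } from "node:http";

// Stand-in for an LLM streaming client; a real one would yield model tokens.
async function* generateUiTokens(prompt: string): AsyncGenerator<string> {
  const page = `<!doctype html><html><body><h1>${prompt}</h1></body></html>`;
  for (const chunk of page.match(/.{1,16}/gs) ?? []) {
    yield chunk;
  }
}

createServer(async (req, res) => {
  res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });

  // Browsers parse HTML incrementally, so flushing each chunk as it arrives
  // lets the interface appear while the model is still generating it.
  for await (const chunk of generateUiTokens(`Generated for ${req.url ?? "/"}`)) {
    res.write(chunk);
  }
  res.end();
}).listen(3000);
```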
Implications for Digital Experience
The Hyperpersonalized Web
Every website looks identical to every visitor. Whether you’re a power user or first-time visitor, shopping for yourself or someone else, researching or ready to buy - you see the same homepage, navigate the same menus, fill out the same forms. This one-size-fits-all approach made sense when personalization meant A/B testing button colors.
AI changes this fundamental constraint. Instead of showing everyone the same interface and hoping it works for most, AI can generate interfaces tailored to each person’s specific context and intent. An AI-native rendering system would create shopping interfaces tailored to each user’s context - not filtered results but dynamically generated stores.
Beyond Applications
The implications extend beyond web browsers. Any screen becomes a canvas for AI-generated interfaces:
- Digital advertising that generates unique creatives based on viewer context
- Home displays that create interfaces based on inhabitants’ needs
- Public screens that adapt to their audience in real-time
The End of Apps
Why navigate through fixed application interfaces when AI can generate the exact interface needed for each task? The app paradigm itself - discrete programs with predetermined interfaces - becomes obsolete when interfaces can be generated on demand.
Chat Fatigue and the Market Reality
The market is already signaling its exhaustion with chat interfaces. While 85% of customer service leaders plan to explore conversational AI in 2025, users are showing signs of fatigue: Character.ai users spend an average of 93 minutes a day in chat, yet they struggle with the cognitive load of constant prompting and text parsing.
This isn’t sustainable. 80% of users now resolve 40% of their queries directly through chatbots, bypassing websites entirely. But this creates a new problem: those users get their information through the narrowest possible channel, missing the rich interaction possibilities of visual interfaces.
Chat fatigue is not just user frustration - it’s a market signal that we need better interaction paradigms.
The Philosophical Shift
This represents more than a technical evolution. It’s a fundamental shift in how we conceive of human-computer interaction.
Traditional HCI assumes:
- Interfaces are designed by humans for humans
- Functionality is predetermined and fixed
- Personalization happens within constraints
AI-native interaction assumes:
- Interfaces are generated by AI for specific moments
- Functionality emerges from user needs
- Every interaction can be completely unique
Conclusion: The GUI Revolution, Redux
The transition from command-line interfaces to graphical user interfaces made computers accessible to billions. But we’re still using the equivalent of command lines to interact with AI.
The next revolution won’t be as visible as windows and mice replacing terminals. Interfaces will become smarter, more responsive, more personal. Users won’t necessarily know AI is generating their experience - they’ll notice that digital products finally understand them.
This is not about building better chatbots or adding AI features to existing interfaces. It’s about recognizing that we need AI-native rendering systems where the interface itself is dynamic, alive, and generated. The AI should not just answer within existing interfaces - it should create the interface itself.
The question facing technologists, designers, and entrepreneurs is simple: Will you continue building for the chat paradigm that’s already creating user fatigue, or will you help build the AI-native rendering systems of tomorrow?
If this vision excites you - if you see the potential for interfaces that truly understand and adapt to users - reach out to us or explore our work at Pond. The future of human-AI interaction is being written now.