What Designing AI-Centric Products Looks Like Now and in the (Near) Future

If you’ve clicked open this article because the title piqued your curiosity, that makes two of us. This article is the culmination of months of watching trends evolve. I’ve been curious and concerned about the direction of the digital design market, especially how the interface design landscape seems to be shifting away from traditional interactions and towards typing conversations into tools like Relume or ChartAI.
The global conversational AI market is projected to grow from £9.2B in 2024 to £46.4B by 2032, and 64% of CX leaders plan to increase bot budgets in 2025. With this trajectory seemingly locked in, what does the near future of AI-centric product experiences look like?
I want to be crystal clear about one thing: I am neither pro- nor anti-AI. I am pro-process; the kind of process that determines whether AI is right for your market, business model, organisation, and perhaps most importantly, your users. The AI hype train has been gaining steam for years now, but there’s been trouble along the way. Worryingly, analysts have estimated that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of generated texts. To name just one of thousands of examples, Canadian airline Air Canada was ordered by the Civil Resolution Tribunal to pay damages to a customer and honour a bereavement fare policy that its support chatbot had hallucinated. Yes, the technology is in its infancy, but I have no doubt it will improve in terms of performance and cost.
The trust challenge runs deeper than technical limitations. According to the 2025 KPMG Global Trust in AI Study, which surveyed over 48,000 people across 47 countries, only 46% of people globally are willing to trust AI systems, despite 66% using AI regularly. Even more concerning, 66% rely on AI output without evaluating its accuracy, and 56% report making mistakes in their work because of AI. In the United States specifically, only a quarter of adults trust AI to provide accurate information, and even fewer trust the technology to make unbiased or ethical decisions (I mean, why would they). This trust deficit creates a fundamental tension between AI adoption and user confidence, a chasm that designers must bridge carefully.
Unpacking Trends & Trajectories

Digital Intuition
The evolution we’re witnessing isn’t happening in a vacuum; it’s unfolding right before our eyes, reshaping how we interact with technology in fundamental ways. The shift towards conversational interfaces represents more than just a design trend; it’s a complete reimagining of the relationship between humans and design systems. Jesse Lyu, the founder of Rabbit, asserts that a natural language approach will be “so intuitive that you don’t even need to learn how to use it”, whilst Noah Levin, Figma’s VP of Product Design, contends that “it’s a very intuitive thing to learn how to talk to something”. Truth be told, I’m a fan of the sentiment, but the execution has me concerned. Not all friction is bad. Executed well, it becomes traction that keeps the user engaged. I think future interactions in AI-centric consumer products will have to intentionally keep well-designed friction in the experience to make sure users feel invested.
The “AI” makeover

Over the last few years, anyone who has spent more than five minutes on LinkedIn has seen the slew of “AI”-ification of existing products (witnessing this, by the way, has been a fantastic demonstration of companies misunderstanding their market fit). Consider how drastically products like Notion have transformed over recent years. What began as a relatively straightforward note-taking and database tool has morphed into an AI-powered workspace where users can simply describe what they want and watch it materialise… kinda. The traditional menu-driven interface hasn’t disappeared entirely, but it’s increasingly taking a backseat to natural language commands. This isn’t just feature creep; it’s a fundamental pivot towards anticipating user intent rather than forcing users to navigate predetermined pathways. Similar transformations are happening across platforms like Microsoft Office, where Copilot integration is reshaping how users interact with familiar tools.
The idea of being able to summon personalised knowledge is appealing and has true utility for most users, but most artificial intelligence is, at its core, a prediction engine that, as this article has shown, can get things wrong. There are already reports of dashboards that don’t align with what users want and that fail to recognise the new ecosystem of AI apps users lean on. Notion and products like it, that is, products with a solid base in their existing UI, UX and engineering, are perhaps leaning into what they think is useful AI rather than doing proper user research into what kind of AI their users would actually want and use within their product.
The Brain Dilemma

What happens when all of that learned behaviour becomes obsolete, virtually overnight? There’s a genuine concern about cognitive atrophy, not unlike how GPS has arguably diminished our natural navigation abilities. By handing us digital turn-by-turn directions, GPS navigation apps treat us as passive passengers rather than active explorers, removing our agency to make decisions. When every interaction becomes a conversation, do we lose our capacity for systematic, structured thinking about digital tasks?
The flip side, of course, is that conversational interfaces might actually align better with how our brains naturally process information. Perhaps we’ve been forcing ourselves into unnatural interaction patterns for so long that returning to conversation feels revolutionary when it’s really just a return to our roots. Active learning methods engage the learner by giving them the opportunity to control the information they experience, whereas information taught through passive methods is stored with fewer connections to existing schemas, making retrieval more cumbersome. This isn’t an excuse to design in intentional cognitive friction, but it is a reason to put some limiters on the knowledge output by the model, encouraging active engagement rather than passive consumption.
The UI of today has been bound to the same keyboard-and-mouse setup for the last thirty-plus years, a pattern that shifted drastically with smartphones in the 2000s. But all of those interactions were highly tactile: whether it was a mouse dragging things across the screen or thumbs zooming in on something, there was a hand-to-screen connection, and that created a deeper bond (when the UI and UX were designed well). I think we are losing something in this migration to chat interfaces, and we’re not getting something of equal or greater value in return.
The Personality Imperative

Humans are hardwired to be drawn to personality. That’s why, in my humble opinion, the most successful traditional B2C products are those that focus on weaving character into every corner of their UX and UI whilst remaining functional. This was a big driver behind Duolingo’s branding. I believe the AI-centric products that succeed will capitalise on this fundamental human tendency even more aggressively.
Consider the early success of ChatGPT versus more technically superior but personality-deficient alternatives. Users gravitated towards ChatGPT not just because of its capabilities, but because it felt like conversing with someone rather than querying a database. The most successful AI products going forward will be those that manage to feel genuinely personable without crossing into the uncanny valley.
Do you want to play a game?

A stronger wave of gamification is almost certainly what comes next for AI-centric consumer products. In 2023, researchers put together an experiment, developing AI-driven NPCs that formed social structures within an old-arcade-like interface. Two things stuck out to me from this study: 1) these social structures were emergent behaviour, and 2) it wasn’t so much about directly manipulating the environment as it was about watching it grow. What makes this particularly relevant to AI-centric products is the psychological foundation: we’re becoming increasingly wired for game-like interactions. 77% of Gen Z participants engage in daily mobile gaming. This suggests that gamified AI interfaces aren’t just a novelty; they’re tapping into deeply ingrained behavioural patterns.
I can see Sims-esque interfaces for AI agents that capitalise on how we are hardwired to interact with communities and conversation (good conversation, that is). Pair that with a unique personality for each agent, one that feels alive… how could traditional interfaces for traditional products compete against that? Gamification is a reward system paired with achievements in the product (think Duolingo’s gem system). But perhaps the reward system won’t be so transparent. Perhaps the reward system is the experience itself: AI agents with personalities, paired with the information and tasks they represent, reporting back what they did and awaiting instructions outside their autonomy parameters. Imagine not a personalised dashboard but a social circle, with avatars for all the knowledge you feed into it, and those bodies of knowledge talking to each other.
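To make the “autonomy parameters” idea concrete, here is a minimal sketch of how such an agent loop might be structured. Everything in it (the `Agent` class, the action names, the reporting fields) is hypothetical and illustrative, not any real product’s API: the agent acts on tasks inside its permitted scope and holds everything else back for explicit user instruction.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    personality: str                              # flavour text surfaced in the UI
    autonomy: set = field(default_factory=set)    # actions it may take on its own
    report: list = field(default_factory=list)    # what it did autonomously
    awaiting: list = field(default_factory=list)  # actions held for user sign-off

    def handle(self, action: str) -> None:
        # Act only within the autonomy parameters; queue everything else.
        if action in self.autonomy:
            self.report.append(f"{self.name}: done '{action}'")
        else:
            self.awaiting.append(action)

# Hypothetical agent representing a body of knowledge in the "social circle"
agent = Agent("Meri", "cheerful archivist", autonomy={"summarise", "tag"})
for task in ["summarise", "delete", "tag"]:
    agent.handle(task)

print(agent.report)    # actions taken within autonomy parameters
print(agent.awaiting)  # actions awaiting explicit instruction
```

The design choice worth noting is the split between `report` and `awaiting`: the agent narrates what it did rather than silently acting, which is exactly the kind of well-designed friction discussed earlier.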
So What’s Next?

While it’s chat interfaces for now, I think this trend will give way to a new, more hyper-targeted version of gamification wherever that model applies. Eventually, that model will be overtaken by a brain-chip interface (hopefully by then I will have saved enough money for the ad-free experience of that technology; I can already imagine the McDonald’s ads getting in the way of me visualising the necessary spreadsheet commands). This reality extends beyond mere model alignment, though that’s certainly part of the equation. For designers, indeed for anyone who wants to remain remotely relevant in this evolving landscape, it will take more than skimming headlines or absorbing surface-level trend pieces. It will require time to distinguish between what’s genuinely valuable and what’s just a waste of time. The most successful designers in this new paradigm won’t be those who can prompt-engineer their way through every problem, nor those who reflexively resist every AI innovation. They’ll be the ones who can thoughtfully evaluate when AI enhances the user experience versus when it merely adds complexity masquerading as sophistication. They’ll be the ones who can think and articulate their thinking.
Perhaps most importantly, we must resist the temptation to assume that because something is technically possible, it’s therefore desirable or necessary. With the power of AI there is real potential to solve problems for users, not just redress a symptom in a new way. Having an avatar attend a meeting for you, only to hear what the meeting was about later… yeah, I wish I was making this up. HeyGen’s Interactive Avatar can join one or multiple Zoom meetings simultaneously, 24/7. According to HeyGen, these avatars are designed not only to look and sound like the people they represent, but also to think, talk and make decisions like them, whilst Zoom founder Eric Yuan has shared his vision of “digital twins” that could attend virtual meetings for users. Are we designing a future where we interface with AI, or one where AI impersonates us?
❤️ Big thanks to Tamanna Rumee, Matheus Bertelli, Filipe Galvan, Gaspar Uhas, Spencer, Jakub Żerdzicki, Volodymyr Hryshchenko and Meo for the photos.