How Are People Really Using AI Today?

A few weeks ago, I was invited to join a conversation at an event focused on one simple but powerful question: How are people actually using AI in their everyday lives?
It struck me as the kind of question we don’t ask often enough. Most AI discussions focus on what's new, what's possible, or what might be coming next. But what about right now? Which tools are actually being used? What sticks? What fades?
Preparing for the event pushed me to reflect more critically on my own experience—and I realized something a bit unexpected.
The Tools We Talk About vs. the Tools We Actually Use
Over the past year, I’ve tested all sorts of AI tools. Assistants that summarize meetings. Plugins that automate workflows. Coding copilots. Agent-based IDEs. Custom RAG systems.
You name it—I’ve probably tried it.
But when I looked honestly at what I still use regularly, the list was surprisingly short: ChatGPT. That’s it.
Not because the other tools were bad. Many of them were genuinely impressive. But none of them managed to earn a lasting place in my daily routine. They were too brittle, too manual, too slow to adapt to real workflows. So they faded away.
That realization became a central theme for me: what would it take for these tools to stick? Why do some survive, while others—even those with lots of promise—don’t?
Three Waves of Practical AI
Looking at the bigger picture, I started seeing clear phases in how we’ve been approaching AI in practice. Not in terms of research milestones, but from the perspective of actual usage:
- LLM chat interfaces
The first big leap forward was the conversational interface. Tools like ChatGPT made it feel natural to interact with AI in plain language. You ask a question, you get an answer. This remains the entry point for most users—and for good reason. It’s powerful, intuitive, and doesn’t require setup or integration.
- Retrieval-Augmented Generation (RAG)
The second wave added grounding. Instead of relying solely on what a model learned during pretraining, RAG systems let it pull in fresh, factual data—documents, websites, or databases—to inform its responses. This adds depth and accuracy, especially for specialized or real-time use cases.
But in practice? Very few people use these systems daily. Outside of enterprise setups or developer tinkering, they remain under the radar.
- AI agents
The third and most ambitious stage is the rise of autonomous agents: systems that can reason, plan, and act with minimal human input. They can break down complex goals into tasks, navigate tools, write code, even execute it.
But again—this is still niche. Agent-based tools are mostly explored by developers. Tools like Cursor or AutoGPT are interesting, but haven’t reached a level of reliability or ease that would attract everyday users.
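To make the second wave concrete: the core RAG loop is just "retrieve relevant text, then ground the model's prompt in it." Here's a deliberately minimal sketch in Python—the word-overlap scoring is a toy stand-in for a real vector search, and the prompt would normally be handed to an LLM API, which I've left out entirely. Nothing here reflects any specific library's interface.

```python
# Toy sketch of the RAG loop: retrieve relevant documents, then build
# a grounded prompt. Word-overlap scoring stands in for real embedding
# search; the resulting prompt would be sent to a chat model.

def retrieve(query, documents, k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt: retrieved context, then the question."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

docs = [
    "The 2024 budget report projects 3% revenue growth.",
    "Our office cafeteria menu changes every Monday.",
]
prompt = build_prompt("What growth does the budget report project?", docs)
```

The point of even this toy version is that the model's answer is now anchored to text you chose, not just to whatever it memorized during training—which is exactly why RAG helps with specialized or fast-changing information.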
So Where Are We, Really?
To get a better sense of the landscape, I ran a short survey. I wanted to know how others were using AI—what tools they found valuable, what stuck with them.
The results weren’t shocking, but they were telling:
Most people are still firmly in the first phase. They interact with chat-based LLMs. That’s the extent of their AI usage. RAG-based tools are rare. Agent-based tools are practically unheard of.
Which brings me to a bigger reflection: we’re still very early in this journey. AI hasn’t reached anywhere near the adoption levels of mobile phones, search engines, or even voice assistants. Even ChatGPT—arguably the most recognizable AI tool today—is used regularly by only a small percentage of the general population.
What Would Need to Change?
This brings me back to my original question:
What would it take for these tools to actually stick?
For me, the answer lies in one word: autonomy.
Most current tools—even the impressive ones—still need to be micromanaged. You have to prompt them, supervise them, nudge them along. That’s fine for experimentation or one-off tasks, but not for sustained use.
If agent-based systems are to become truly useful, they need to act on their own. They need to be reliable, adaptable, and capable of making decisions without constant human input.
And right now? That’s still out of reach in most cases.
Final Thoughts
I went into this process expecting to talk about exciting AI breakthroughs. I came out of it realizing that I still mostly use ChatGPT—and that I’m not alone.
We’re in a moment of incredible potential, but the actual usage tells a more grounded story. We’re still at the beginning. The tools that work are the ones that are simple, accessible, and reliable. The more advanced ones—those with RAG, agents, or deep integration—haven’t yet earned our trust.
But maybe that’s the opportunity.
If we want AI to move from curiosity to infrastructure, it has to become invisible—something that just works, without us having to think about it.
That’s the real challenge.
And maybe, the real frontier.