When OpenAI released ChatGPT in November 2022, it redefined expectations for language models and pushed conversational AI into the mainstream. Over the past year, the story has shifted from assistants to agents.
Today’s systems don’t just generate responses. They plan tasks, take actions, and evaluate their outputs—operating across APIs, enterprise datasets, and full computing environments. The user experience has evolved from interacting with a helper to delegating to an operator.
Five Types of AI Agents
There are five main types of AI agents: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. All AI agents share the same loop—perceive, decide, act—but differ in sophistication: simple reflex agents react directly to inputs, model-based agents track the world’s state, goal-based agents act to achieve objectives, utility-based agents weigh trade-offs to maximize outcomes, and learning agents adapt and improve over time. Each type has distinct strengths and applications, ranging from basic automated systems to highly adaptable AI models. (We’ll dive deeper into these different types of AI agents in an upcoming blog post.)
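The perceive-decide-act loop can be made concrete with a minimal sketch. The thermostat scenario and all class names below are illustrative assumptions, not drawn from any particular agent framework; they show only the structural difference between the first two types—reacting to the current input versus tracking state over time.

```python
# Illustrative perceive-decide-act sketches for two agent types.
# The thermostat scenario and names here are hypothetical examples.

class SimpleReflexAgent:
    """Reacts directly to the current percept via condition-action rules."""

    def act(self, percept: dict) -> str:
        # Rule: if the room is cold, heat; otherwise idle.
        return "heat" if percept["temp"] < 20 else "idle"


class ModelBasedReflexAgent:
    """Maintains internal state (a simple model of the world) across percepts."""

    def __init__(self):
        self.last_temp = None  # remembered world state

    def act(self, percept: dict) -> str:
        temp = percept["temp"]
        # Decide using the trend, not just the snapshot: keep heating while
        # the temperature is still rising toward a comfort band.
        rising = self.last_temp is not None and temp > self.last_temp
        self.last_temp = temp  # update the internal model
        if temp < 20:
            return "heat"
        if temp < 22 and rising:
            return "heat"  # mid-warm-up per the model; avoids oscillation
        return "idle"
```

The same percept can yield different actions: at 21 degrees the reflex agent idles, while the model-based agent keeps heating if its state says the room is still warming up. Goal-based, utility-based, and learning agents extend this pattern with explicit objectives, scoring of trade-offs, and feedback-driven updates, respectively.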
Market Overview
The AI agent market is crystallizing into five strategic camps:
Frontier Model Vendors (OpenAI, Anthropic, Google). Driving the shift from assistants to agents. OpenAI pushes broad autonomy (ChatGPT Agents, CUA), Anthropic emphasizes safe enterprise deployment (Claude Computer Use), and Google bets on real-time multimodal ubiquity (Project Astra, Gemini Live).
Cloud & Enterprise Platforms (AWS, Salesforce, Databricks). Focused on trust and scale. AWS Bedrock Agents provide governance and orchestration, Salesforce Agentforce embeds agents in frontline SaaS, and Databricks Mosaic AI builds observability and evaluation into enterprise data workflows.
Open Frameworks (LangChain’s LangGraph, CrewAI). Supplying orchestration tools for stateful, multi-agent workflows, with an edge in developer adoption and ecosystem momentum.
Vertical Exemplars (Cognition/Devin). Proving ROI through deep specialization. Devin, for example, validates software engineering as a narrow but high-value use case.
Embodied/Multimodal (NVIDIA ACE, Audio2Face). Building the interface layer with real-time avatars and conversational digital humans—enhancing trust and engagement.
Who is Winning Today?
The AI agents market is still early and fragmented, but tech giants are best positioned: Microsoft and Google lead due to their deep integrations across productivity, search, and cloud; Amazon/AWS is strong in infrastructure and commerce; and NVIDIA dominates the underlying compute layer powering all agent ecosystems. IBM and others play niche enterprise roles. “Winning” today largely means distribution, ecosystem lock-in, and compute dominance—so incumbents hold the advantage, though the market could shift quickly if startups crack orchestration, trust, or vertical specialization.
What’s Changed vs. 1 Year Ago
The leap from “tool use” to autonomous task execution is real. Agents now plan, self-correct, and handle complex workflows end-to-end.
Enterprises are adopting faster, enabled by guardrails, memory, and observability—core building blocks for safety and scale. “Computer use” is now table stakes: agents click, type, and navigate across digital interfaces, moving toward real-time, multimodal interaction.
Orchestration frameworks have matured, enabling stateful, multi-agent collaboration. Meanwhile, embodiment is on the rise, with avatars and real-time engagement signaling a future where agents provide presence as well as function.
What to Watch Next (6–12 Months)
Horizontal agents are becoming proactive: orchestrating specialized sub-agents, integrating plug-ins, ensuring auditability, personalizing over time, and optimizing for cost.
Vertical agents are embedding compliance logic, simulations, and edge deployment for regulated, data-rich environments like healthcare, finance, and manufacturing—augmenting decisions as much as automating them.
Both depend on robust infrastructure: cross-tier execution (cloud and edge), plug-in frameworks, governance, low-latency pipelines, inter-agent reliability, and strong observability. These ensure enterprises can deploy agents safely at scale.
Bottom Line
In just one year, “LLM apps” have evolved into operational agents that plan, execute, and improve with feedback loops.
The winners will combine:
Acting capability (computer use)
Enterprise rails (security, observability, evaluation)
Everyday distribution (embedding into tools people already use)
If you’re placing bets:
Pick a governed platform.
Adopt a stateful orchestration framework.
Pilot one or two vertical agent workflows where you already have measurable success metrics.