Article · May 2026
A practical breakdown of the agents that power Applied's living map of real AI deployments.
Last month, I shared the State of Applied AI, a report based on 200+ real AI deployments from companies around the world. A few people asked how those cases were found, analyzed, and organized. I covered the methodology briefly in the report, since the focus was the findings.
This article is about the system behind it.
That system is Applied: a living map of real AI use cases. It tracks how companies adopt AI across industries, business functions, tools, and outcomes.
At its core, Applied is a business intelligence platform for enterprise AI adoption. What makes it interesting is not just the database, but how the database is built and updated using AI agents.
Today, I'll break down how a team of agents collects, structures, and transforms scattered stories into intelligence.
Behind the scenes, Applied is a team of agents with specific tasks and different levels of autonomy. Underneath them, there is an orchestration layer. This is usually a complex part of agentic systems, but I kept it dead simple on purpose.
I built the framework and agents with Claude Code, but there are other pieces of the stack that I won't be covering today.
The Scout Agent discovers and prioritizes new AI use cases to extract. It is the starting point of the system, and plays a key role in keeping Applied both deep and broad. It needs to cover as many industries and business functions as possible, while balancing established AI tools with new entrants.
The goal is to avoid building a living map that only covers three tools, or one where 80% of the cases come from engineering teams.
The Scout Agent: finds candidate cases across sources, decides which ones to extract first, and keeps coverage balanced across industries, business functions, and both established and emerging tools.
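As a rough sketch of that balancing act, here is how candidate prioritization could look in Python. The field names and scoring rule are illustrative, not the production logic: the idea is simply that candidates touching under-represented industries, functions, or tools move to the front of the extraction queue.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    industry: str
    function: str
    tool: str

def prioritize(candidates: list[Candidate], existing: list[Candidate]) -> list[Candidate]:
    """Rank candidates so that industries, functions, and tools that are
    under-represented in the existing map get extracted first."""
    industries = Counter(c.industry for c in existing)
    functions = Counter(c.function for c in existing)
    tools = Counter(c.tool for c in existing)

    def score(c: Candidate) -> float:
        # The rarer an attribute is in the map today, the higher the score.
        return sum(
            1.0 / (1 + counts[key])
            for counts, key in ((industries, c.industry), (functions, c.function), (tools, c.tool))
        )

    return sorted(candidates, key=score, reverse=True)
```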
The Extractor Agent is the most important agent in the system.
It reads each potential case, decides if it is worth including, and extracts the structured data behind it: companies, vendors, tools, business functions, outcomes, and other key entities.
This is the agent I have spent the most time tweaking. It burns the most tokens, but it is also where most of the quality control happens.
Over time, it has learned what makes a strong case: novel in terms of AI usage, not duplicated, tied to a clear company, connected to an end-to-end scenario, supported by tangible outcomes, and recent enough to matter — usually from 2025 or newer.
In practice, it checks whether the case is really about AI, whether it uses newer technologies like LLMs or autonomous systems, whether the outcomes are concrete, and whether the entities already exist in the map or need to be added.
The Extractor Agent: reads each candidate, decides whether it clears the bar, pulls out the structured entities behind it, and flags which of those entities already exist in the map and which need to be added.
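Here is a minimal sketch of what the structured output and the quality bar could look like. The fields and thresholds below are illustrative; the actual extraction happens through a model call, which I have left out.

```python
from dataclasses import dataclass

@dataclass
class ExtractedCase:
    # Structured entities pulled out of a source story (illustrative field names).
    company: str
    vendors: list[str]
    tools: list[str]
    business_function: str
    outcomes: list[str]
    year: int
    summary: str = ""

def passes_quality_bar(case: ExtractedCase) -> bool:
    """Mirror the checks described above: tied to a clear company,
    supported by tangible outcomes, and recent enough to matter."""
    if not case.company:
        return False           # no clear company, no case
    if not case.outcomes:
        return False           # outcomes must be concrete
    if case.year < 2025:
        return False           # recency threshold
    # Duplicate detection against existing cases would run here as well.
    return True
```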
The Enrichment Agent runs after the Extractor Agent. At this point, the case is already structured, so the agent has more context to work with.
Its job is to complete missing information about companies, tools, categories, and use cases using additional sources.
The Enrichment Agent: fills in missing details about companies, tools, categories, and use cases from additional sources.
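In code, the shape of that step is simple. The sketch below assumes two stand-in lookup functions for whatever external sources the agent actually queries, and the field names are again illustrative.

```python
def enrich(case: dict, lookup_company, lookup_tool) -> dict:
    """Fill gaps left by extraction using additional sources.
    `lookup_company` and `lookup_tool` are stand-ins for real searches or APIs."""
    enriched = dict(case)
    if not enriched.get("company_industry"):
        enriched["company_industry"] = lookup_company(enriched["company"]).get("industry")
    for tool in enriched.get("tools", []):
        categories = enriched.setdefault("tool_categories", {})
        if tool not in categories:
            categories[tool] = lookup_tool(tool).get("category")
    return enriched
```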
Applied use cases are available in both English and Spanish. The Translator Agent handles that translation layer and keeps the living map consistent across both languages.
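One way to keep both languages consistent is to translate only the free-text fields, so a Spanish case still points at the same companies, tools, and categories as its English counterpart. A rough sketch of that idea, with an assumed `translate` helper standing in for the model call:

```python
FREE_TEXT_FIELDS = {"title", "summary", "outcomes_description"}  # illustrative field names

def translate_case(case: dict, translate) -> dict:
    """Produce the Spanish version of a case while leaving structured
    entities untouched, so both languages stay linked to the same records."""
    es_case = dict(case)
    for key in FREE_TEXT_FIELDS.intersection(case):
        es_case[key] = translate(case[key], target_language="es")
    es_case["language"] = "es"
    return es_case
```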
Early on, the system did not have a QA Agent. That became a problem.
The data, taxonomy, and even parts of the UI started to feel inconsistent. The living map had the right ingredients, but it was not cohesive enough.
So I added a QA Agent to find and fix issues across Applied.
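A sketch of what "find issues" can mean in practice: cross-checking cases against the taxonomy and reporting anything inconsistent rather than fixing it silently. The checks and field names below are examples, not the full list.

```python
def qa_report(cases: list[dict], tools: dict, companies: dict) -> list[str]:
    """Return human-readable findings; nothing is changed automatically."""
    issues = []
    for case in cases:
        if case.get("company") not in companies:
            issues.append(f"Case {case.get('id')}: unknown company {case.get('company')!r}")
        for tool in case.get("tools", []):
            if tool not in tools:
                issues.append(f"Case {case.get('id')}: tool {tool!r} missing from the taxonomy")
    for name, tool in tools.items():
        if not tool.get("category"):
            issues.append(f"Tool {name!r} has no category")
    return issues
```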
The Match Maker Agent is the newest agent in Applied.
I added it to help users discover cases that match their interests. As the living map grows, simple filters on a webpage are not enough.
The agent uses each user's onboarding and settings preferences to select three relevant cases and deliver them by email or as an in-app notification.
The Match Maker Agent: picks three relevant cases per user, based on their preferences, and delivers them by email or in the app.
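The selection itself does not need to be sophisticated to be useful. A minimal sketch, assuming preferences and cases are plain dictionaries with illustrative keys:

```python
def pick_matches(prefs: dict, cases: list[dict], already_sent: set, k: int = 3) -> list[dict]:
    """Pick the k cases that best overlap with a user's stated interests."""
    def relevance(case: dict) -> int:
        return (
            (case.get("industry") in prefs.get("industries", []))
            + (case.get("function") in prefs.get("functions", []))
            + len(set(case.get("tools", [])) & set(prefs.get("tools", [])))
        )

    fresh = [c for c in cases if c.get("id") not in already_sent]  # never resend a case
    return sorted(fresh, key=relevance, reverse=True)[:k]
```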
In Applied's AI tools landscape, there is a category called Agentic Management. These tools help orchestrate agents, track their outputs, and monitor spending. This layer is particularly important when dealing with swarms of agents and mission-critical tasks.
So, how do agents coordinate in Applied?
There is no external orchestration tool. So far, a simple system that combines a data model and individual report logs per agent has worked well.
So, in practice, the orchestration layer is mostly the living map, the logs, and me.
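Concretely, every agent run reads the shared state and leaves behind a small report log. A minimal sketch of what that wrapper could look like, assuming each agent is just a function that returns a dictionary with a summary of what it did:

```python
import datetime
import json
import pathlib

def run_agent(name: str, agent_fn, state: dict, log_dir: str = "logs") -> dict:
    """Run one agent against the shared state and write a per-agent report log."""
    started = datetime.datetime.now(datetime.timezone.utc).isoformat()
    result = agent_fn(state)  # the agent reads and updates the living map
    report = {"agent": name, "started": started, "summary": result.get("summary", "")}
    path = pathlib.Path(log_dir)
    path.mkdir(exist_ok=True)
    (path / f"{name}-{started[:10]}.json").write_text(json.dumps(report, indent=2))
    return result
```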
That leads to my role in the system. I still guide the parts where judgment matters most: changes to the taxonomy and data model, what to do with the QA Agent's findings, and which feedback actually reaches the other agents.
There is still a lot to improve, especially in the Enrichment Agent. Applied could add more context around cases, vendors, tools, and categories: pricing, reviews, quality signals, outcomes, rollout time, required expertise, and more.
There aren't automatic feedback loops — those still go through me. The QA Agent reports issues, but those findings do not go back directly to other agents. I'm the bridge.
Closing that loop would let each agent improve itself, but the risk is that agents make sweeping decisions without enough evidence. Even with today's best models, they sometimes overreach. They could introduce drastic changes to the taxonomy or data model, or get stuck in loops where they change something, reverse it, and repeat.
That is why I would be careful about giving agents full control over the core structure of Applied.
Where closing the loop feels safer is in the Match Maker Agent. For example, it could learn from user feedback, email replies, platform comments, open rates, and click rates. Over time, it would learn which cases are actually valuable to different types of users.
Applied is focused on AI adoption, but the same framework can be used in many other areas. Any workflow that requires finding information, filtering what matters, extracting structured data, enriching it, and keeping it updated could benefit from a similar agentic setup.
You can probably think of your own version based on the problems you care about.
The pattern is the same: A) scout the sources, B) extract what matters, C) enrich the context, D) check quality, and E) deliver useful insights to the right person.
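That pattern is compact enough to write down. A minimal sketch, with each stage passed in as a function standing in for one agent:

```python
def pipeline(sources, scout, extract, enrich, check, deliver):
    """The generic shape, with each argument standing in for one agent."""
    candidates = scout(sources)                   # A) scout the sources
    cases = [extract(c) for c in candidates]      # B) extract what matters
    cases = [enrich(c) for c in cases if c]       # C) enrich the context
    issues = check(cases)                         # D) check quality
    if not issues:
        deliver(cases)                            # E) deliver insights to the right person
    return cases, issues
```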