How AppFolio Cut LLM Latency 80–90% With Datadog Observability
AppFolio is a real estate management platform serving 20,000+ customers and 8 million+ units under management. After building Realm-X Messages, an LLM-powered inbox for property managers, on Amazon Bedrock, AppFolio used Datadog LLM Observability to identify bottlenecks and cut latency by 80–90%. The faster product drove a 300% increase in adoption and now saves property managers an average of five hours per week.
Impact
80–90%
Latency reduction
300%
Product adoption increase
5 hours
Saved per property manager per week
Under 1 week
Time to production after QA setup
20,000+
Property managers on platform
Challenge
AppFolio needed to deploy and scale an LLM-powered messaging product built on Amazon Bedrock, but had no way to monitor response quality, identify latency bottlenecks, or detect model behavior changes without a purpose-built AI observability solution.
Solution
AppFolio deployed Datadog LLM Observability to instrument Realm-X Messages with trace-level visibility into the full LLM chain, enabling real-time monitoring of latency, token usage, model quality evaluations, and cluster analysis of resident topic categories.
What Leaders Say
“Datadog LLM Observability helped us ensure high model performance and quality, and allowed us to expand functionality quickly and safely.”
“With LLM Observability, our team can understand, debug, and evaluate the usage and performance of our GenAI applications. We can monitor response quality to prevent negative interactions and ensure we’re providing our users with a positive experience.”
Full Story
Property managers spend a disproportionate share of their working day on tenant correspondence. AppFolio, which provides the software platform used to run properties ranging from single-family rentals to large commercial portfolios, identified this friction point clearly: as many as 50% of a property manager’s working hours were going toward resident communications. For a platform that prides itself on maximizing productivity, that was a gap worth solving with AI.
AppFolio built Realm-X Messages on Amazon Bedrock—an LLM-powered inbox that organizes incoming messages, suggests responses, and flags action items. The technical ambition was clear, but so was the challenge: LLM applications are non-deterministic. Response quality varies. Latency spikes unpredictably. Scaling a generative AI product safely requires visibility that most observability tools were not designed to provide.
The team chose Datadog LLM Observability to monitor Realm-X Messages from the start. The integration with Amazon Bedrock required minimal code annotations, enabling AppFolio to instrument the full LLM chain—from function calls to document retrieval to model calls—without major engineering overhead. Datadog’s dashboards surfaced latency trends and token consumption over time, while out-of-the-box evaluations flagged toxicity and failure-to-answer rates. The cluster map helped the team understand which resident topics the model handled well and which needed improvement.
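The span-based view described above can be approximated with a minimal, stdlib-only sketch. Everything here is illustrative: the stage names, the `span` helper, and the stand-in retrieval are assumptions for demonstration, not Datadog's SDK or AppFolio's code, which used Datadog's own annotations against Amazon Bedrock.

```python
import time
from contextlib import contextmanager

# Illustrative span recorder; real instrumentation would come from
# Datadog's LLM Observability SDK rather than a hand-rolled list.
SPANS = []

@contextmanager
def span(name):
    """Record wall-clock duration for one stage of the LLM chain."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append({"name": name, "duration_s": time.perf_counter() - start})

def handle_message(text):
    # Mirrors the stages named in the article: the overall workflow,
    # document retrieval, then the model call.
    with span("workflow"):
        with span("retrieval"):
            docs = ["lease terms", "maintenance policy"]  # stand-in retrieval
        with span("llm_call"):
            reply = f"Suggested reply ({len(docs)} docs): acknowledged '{text}'"
    return reply

print(handle_message("When is rent due?"))
for s in SPANS:
    print(f"{s['name']}: {s['duration_s'] * 1000:.3f} ms")
```

Because inner spans close before the outer one, the recorded order is retrieval, then the model call, then the workflow total, which is the same nesting a trace waterfall displays.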
In the early stages of release, AppFolio spotted a strong correlation between lower latency and user adoption. Using LLM Observability to trace the slowest requests, the team identified specific bottlenecks: inefficient prompts and slow API calls within the LLM chain. Prompt optimization and architecture updates reduced latency by 80 to 90%, which corresponded with adoption climbing nearly 300%. After initial QA setup, AppFolio moved Realm-X Messages into production in under a week.
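The bottleneck hunt described here amounts to sorting traces by total latency and asking which stage dominates the slowest ones. A hedged sketch with hypothetical per-span timings (the stage names and numbers are invented, not AppFolio's data):

```python
from collections import defaultdict

# Hypothetical per-stage durations (seconds) from three traced requests.
traces = [
    {"prompt_build": 0.12, "api_call": 2.40, "model_call": 1.10},
    {"prompt_build": 0.10, "api_call": 0.30, "model_call": 1.05},
    {"prompt_build": 0.15, "api_call": 2.80, "model_call": 1.20},
]

# Rank traces by total latency and keep the slowest half (rounded up).
slowest = sorted(traces, key=lambda t: sum(t.values()), reverse=True)
slowest = slowest[: len(traces) // 2 + 1]

# Sum time per stage across the slow traces to find the dominant stage.
totals = defaultdict(float)
for trace in slowest:
    for stage, secs in trace.items():
        totals[stage] += secs

bottleneck = max(totals, key=totals.get)
print(f"dominant stage in slowest traces: {bottleneck}")
```

In this toy data the slow API call dominates, which matches the kind of finding the team acted on with prompt optimization and architecture updates.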
Today, Realm-X Messages saves property managers an average of five hours per week on resident communications—time redirected to leasing, maintenance coordination, and portfolio growth. The case illustrates a broader pattern emerging in enterprise software: companies building AI-powered features are discovering that the quality of the observability layer determines whether those features scale from pilot to product. AppFolio’s ability to move fast without sacrificing reliability came directly from knowing, in real time, what its LLM was doing.