Real Estate · Engineering

How AppFolio Cut LLM Latency 80–90% With Datadog Observability

AppFolio is a real estate management platform serving 20,000+ customers and 8 million+ units under management. After building Realm-X Messages—an LLM-powered inbox for property managers—on Amazon Bedrock, AppFolio used Datadog LLM Observability to identify bottlenecks and cut latency by 80–90%, which drove a 300% increase in product adoption and saves property managers an average of five hours per week.

Impact

80–90%

Latency reduction

300%

Product adoption increase

5 hours

Saved per property manager per week

Under 1 week

Time to production after QA setup

20,000+

Property managers on platform

Challenge

AppFolio needed to deploy and scale an LLM-powered messaging product built on Amazon Bedrock, but lacked a purpose-built AI observability solution to monitor response quality, identify latency bottlenecks, or detect changes in model behavior.

Solution

AppFolio deployed Datadog LLM Observability to instrument Realm-X Messages with trace-level visibility into the full LLM chain, enabling real-time monitoring of latency, token usage, model quality evaluations, and cluster analysis of resident topic categories.

What Leaders Say

Datadog LLM Observability helped us ensure high model performance and quality, and allowed us to expand functionality quickly and safely.

Teddy Ho, Principal Product Manager, AppFolio

With LLM Observability, our team can understand, debug, and evaluate the usage and performance of our GenAI applications. We can monitor response quality to prevent negative interactions and ensure we’re providing our users with a positive experience.

Kyle Triplett, Vice President of Product, AppFolio

Full Story

Property managers spend a disproportionate share of their working day on tenant correspondence. AppFolio, which provides the software platform used to run properties ranging from single-family rentals to large commercial portfolios, identified this friction point clearly: as many as 50% of a property manager’s working hours were going toward resident communications. For a platform that prides itself on maximizing productivity, that was a gap worth solving with AI.

AppFolio built Realm-X Messages on Amazon Bedrock—an LLM-powered inbox that organizes incoming messages, suggests responses, and flags action items. The technical ambition was clear, but so was the challenge: LLM applications are non-deterministic. Response quality varies. Latency spikes unpredictably. Scaling a generative AI product safely requires visibility that most observability tools were not designed to provide.

The team chose Datadog LLM Observability to monitor Realm-X Messages from the start. The integration with Amazon Bedrock required minimal code annotations, enabling AppFolio to instrument the full LLM chain—from function calls to document retrieval to model calls—without major engineering overhead. Datadog’s dashboards surfaced latency trends and token consumption over time, while out-of-the-box evaluations flagged toxicity and failure-to-answer rates. The cluster map helped the team understand which resident topics the model handled well and which needed improvement.

In the early stages of release, AppFolio spotted a strong correlation between lower latency and user adoption. Using LLM Observability to trace the slowest requests, the team identified specific bottlenecks: inefficient prompts and slow API calls within the LLM chain. Prompt optimization and architecture updates reduced latency by 80–90%, which corresponded with adoption climbing nearly 300%. After initial QA setup, AppFolio moved Realm-X Messages into production in under a week.

Today, Realm-X Messages saves property managers an average of five hours per week on resident communications—time redirected to leasing, maintenance coordination, and portfolio growth. The case illustrates a broader pattern emerging in enterprise software: companies building AI-powered features are discovering that the quality of the observability layer determines whether those features scale from pilot to product. AppFolio’s ability to move fast without sacrificing reliability came directly from knowing, in real time, what its LLM was doing.

Similar Cases

Fifth Dimension
Days or weeks → 30 minutes
investment memo drafting time

Fifth Dimension, a global AI platform for commercial real estate asset managers and owner-operators, built a multi-model workflow on Google Cloud using Gemini for large-scale document ingestion and Claude for high-precision reasoning. The platform compressed investment memo drafting from days or weeks to just 30 minutes and achieved 99.9% reliability for multi-hour workflows, driving deals with top-10 U.S. asset managers.

Real Estate · Vertex AI · BigQuery
UOL Group
80%
incident resolution time reduction

UOL Group is Brazil’s largest digital media, technology, and payments platform, serving eight out of ten Brazilian internet users monthly across more than 200 applications and thousands of cloud and on-premises resources. After migrating from Splunk to Elastic Security and deploying Elastic AI Assistant and Attack Discovery with Amazon Bedrock integration, UOL reduced security incident resolution time by 80% — from days to minutes — and cut false positive alert volume in half.

Media & Entertainment · Elastic Attack Discovery · Elastic Security
Vectorize.io
~2 hours
time to deploy AI solution for new client

Vectorize.io is a US-based software company that builds agentic and generative AI infrastructure, helping organizations in law, insurance, and finance make vast volumes of unstructured data usable by large language models. By integrating Elastic’s hybrid search and Elastic Cloud Serverless with Amazon Bedrock, Vectorize deploys production-ready AI solutions for clients in hours rather than weeks. One client whose developer community grew by a million users in a year relied on Vectorize’s real-time learning agent—built on Elasticsearch—to answer support queries and instantly index new answers for future use.

Amazon Bedrock · Elasticsearch
N26
70%
task automation in targeted processes

N26 deployed Claude via AWS Bedrock across 15+ internal use cases in its first year, automating up to 70% of tasks in targeted customer service processes and cutting manual processing by 50% across 24 European markets. New AI implementations now go from ideation to evaluation in 1–2 weeks.

Financial Services · Amazon Bedrock · Claude Enterprise
Tabnine
50%
improvement in response times

Tabnine integrated Claude 3.5 Sonnet via Amazon Bedrock into its AI coding assistant, serving over 1 million monthly developers. The migration delivered 50% faster response times, a 20% increase in free-to-paid conversions, and a 20-30% reduction in churn—while meeting strict security and compliance requirements for regulated industries.

Software · Amazon Bedrock · Claude
Intuit
Higher
helpfulness rating vs. non-Claude experiences

Intuit integrated Claude via Amazon Bedrock into its Intuit Assist feature within TurboTax to generate plain-language explanations of tax calculations. The integration combines Claude's natural language capabilities with Intuit's proprietary tax knowledge engine, serving millions of customers during peak tax season. The result was higher helpfulness ratings and improved completion rates for federal tax filings.

Financial Technology · Technology · Intuit Assist · Amazon Bedrock
Nomura Research Institute
50%
document review time reduction

Nomura Research Institute deployed Claude 3.5 Sonnet via Amazon Bedrock to automate complex Japanese document analysis, cutting review times by 50% for clients in financial, manufacturing, and distribution sectors.

Professional Services · Amazon Bedrock · Claude 3.5 Sonnet
Omnicom
90%
compute infrastructure cost reduction

Omnicom is one of the world’s largest marketing communications networks, with 75,000 employees serving over 5,000 clients across 70+ countries. The company migrated nine global data centers to AWS and built an AI-powered platform on Amazon Bedrock and Amazon SageMaker to deliver hyper-personalized campaigns at scale. The migration cut compute infrastructure costs by 90% while enabling real-time processing of 400 billion daily marketing events.

Advertising & Media · Amazon SageMaker · Amazon Bedrock