How Omnicom Built an AI Marketing Platform on AWS, Cutting Costs 90%

Omnicom is one of the world’s largest marketing communications networks, with 75,000 employees serving over 5,000 clients across 70+ countries. The company migrated nine global data centers to AWS and built an AI-powered platform on Amazon Bedrock and Amazon SageMaker to deliver hyper-personalized campaigns at scale. The migration cut compute infrastructure costs by 90% while enabling real-time processing of 400 billion daily marketing events.

Impact

90%

Compute infrastructure cost reduction

75 petabytes

Data migrated to AWS

400 billion

Daily marketing events processed

100+

Data sources integrated

Challenge

Nine separate data centers managing 9,000 data sources and 400 billion daily events across 70 countries created infrastructure overhead that diverted teams from creative work and blocked real-time campaign analytics.

Solution

Omnicom migrated 75 petabytes of data to AWS, building a unified analytics layer on Amazon Redshift and an AI platform on Amazon SageMaker and Amazon Bedrock for campaign optimization and hyper-personalized content generation.

What Leaders Say

We had nine separate data centers trying to support agencies across 70 countries. Our teams were spending too much time on infrastructure instead of creating the next breakthrough campaign for our clients.

Craig Cuyar, Senior Vice President and Global CIO, Omnicom

Using the suite of AWS generative AI tools, including Amazon Bedrock, Amazon Bedrock Guardrails, Amazon SageMaker AI, Amazon Nova, and soon Amazon Bedrock AgentCore, is accelerating our innovation and enabling us to deliver industry-leading real-time insights and campaign planning to our clients around the world, at scale.

Craig Cuyar, Senior Vice President and Global CIO, Omnicom

Full Story

Running one of the world’s largest marketing networks means managing creative at a scale that defies conventional infrastructure. Omnicom’s hundreds of agencies span 70 countries, producing campaigns for some of the world’s most recognized brands. But behind the creative output sat a fragile foundation: nine separate data centers struggling to support 9,000 data sources, 400 billion daily events, and the compliance requirements of dozens of global markets. The friction was measurable — teams spent time managing infrastructure rather than building campaigns.

The first phase of transformation was a full migration from on-premises data centers to AWS. Moving 75 petabytes of data required a methodical approach: Omnicom standardized data ingestion on Amazon S3 Intelligent-Tiering to handle 90+ petabytes across 100+ data sources, and adopted Amazon Redshift for processing 400 billion daily events. The migration eliminated data silos that had previously prevented data scientists from analyzing cross-agency signals at scale.
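To make the ingestion pattern concrete, here is a minimal sketch of the two building blocks named above: opting S3 objects into Intelligent-Tiering at upload time, and bulk-loading the landed files into Redshift with a COPY statement. The bucket, table, file format, and IAM role names are illustrative placeholders, not Omnicom's actual configuration.

```python
# Hypothetical sketch of an S3 + Redshift ingestion step. Names are
# placeholders; the S3 parameter dict would be passed to boto3's
# s3_client.put_object(**params), and the SQL run against Redshift.

def s3_put_params(bucket: str, key: str) -> dict:
    """Upload parameters that opt the object into S3 Intelligent-Tiering,
    so storage cost adjusts automatically to access patterns."""
    return {
        "Bucket": bucket,
        "Key": key,
        "StorageClass": "INTELLIGENT_TIERING",
    }

def redshift_copy_sql(table: str, s3_prefix: str, iam_role: str) -> str:
    """COPY statement that bulk-loads event files from an S3 prefix
    into a Redshift table for cross-agency analytics."""
    return (
        f"COPY {table} "
        f"FROM 's3://{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS PARQUET;"
    )

params = s3_put_params("agency-events", "2024/06/01/events.parquet")
sql = redshift_copy_sql(
    "events",
    "agency-events/2024/06/",
    "arn:aws:iam::123456789012:role/redshift-load",  # placeholder role
)
```

Loading via COPY from S3, rather than row-by-row inserts, is the standard way to feed Redshift at event-stream volumes.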

With infrastructure consolidated, Omnicom built its AI capabilities on top. Amazon SageMaker AI powers real-time campaign analytics and media optimization, enabling agencies to shift strategy mid-campaign based on live audience signals. Amazon Bedrock provides the generative AI layer for producing hyper-personalized messaging at omnichannel scale — from standard ad copy to pixel-precise automotive imagery. Amazon Bedrock Guardrails ensures responsible AI outputs across a global network of creative teams.
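The Guardrails layer described above attaches at request time. As a minimal sketch, this builds the keyword arguments for the Bedrock runtime Converse API with a guardrail configured; the model ID, guardrail identifier, and prompt are placeholders, not Omnicom's setup.

```python
# Hypothetical sketch: a Bedrock Converse request with a guardrail attached.
# Identifiers are placeholders; the dict would be passed to boto3's
# bedrock-runtime client as client.converse(**req).

def bedrock_converse_request(model_id: str, guardrail_id: str,
                             guardrail_version: str, prompt: str) -> dict:
    """Keyword arguments for converse(): the guardrailConfig block tells
    Bedrock to screen both the user prompt and the model's response."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,
            "guardrailVersion": guardrail_version,
        },
    }

req = bedrock_converse_request(
    "amazon.nova-pro-v1:0",    # placeholder model ID
    "gr-example123", "1",      # placeholder guardrail ID and version
    "Draft three subject lines for a spring tire promotion.",
)
```

Centralizing the guardrail in the request path is what lets a policy apply uniformly across many creative teams without per-team prompt changes.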

The results validated the architectural bet. Compute infrastructure costs dropped 90% compared to the on-premises footprint. The last of nine data centers is being closed. With the pending acquisition of Interpublic Group adding another 50 petabytes of data, Omnicom’s AWS foundation is already architected to absorb the scale without a rebuild.

The transformation positions Omnicom not just as a cost-efficient operation but as a technology differentiator. Real-time processing, responsible generative AI, and unified data access across hundreds of agencies create a compounding advantage — one that gets sharper as more client data flows into the platform.

Similar Cases

Class Editori
One of Italy’s first AI agents in the media landscape
MFGPT launch milestone

Class Editori, a leading Italian media company specializing in finance, fashion, and lifestyle with 40 years of content archives, partnered with Softlab to build MFGPT on Google Cloud — one of Italy’s first generative AI agents in the media industry. The system unified four decades of journalistic archives and real-time financial data into BigQuery, powered by Gemini and Vertex AI, converting trial users into paid subscribers and securing B2B enterprise agreements with major financial institutions.

Advertising & Media · BigQuery · Vertex AI
VideoAmp
90%
cost reduction

VideoAmp consolidated its entire data warehouse onto Snowflake, achieving 90% cost reduction, 10x performance improvement, and cutting data backfilling from 5 days to 13 hours.

Advertising & Media · Snowflake Cortex AI
ASAPP
91%
first-call resolution rate

ASAPP is an AI-native customer service platform that orchestrates large language models to automate contact center interactions for enterprise clients. By deploying Anthropic’s Claude through Amazon Bedrock, ASAPP eliminated its homegrown PII redaction layer and reduced call escalations by up to 40%, while helping clients achieve a 91% first-call resolution rate. The platform now automates more than 90% of contact center interactions, with human agents freed to handle three times the volume of complex cases.

Technology · Customer Support Technology · Amazon Bedrock · Claude (via Amazon Bedrock)
TaskUs
20%
average handle time reduction

TaskUs is a leading outsourced digital services company providing next-generation customer experience (CX) for innovative global brands. To move beyond flat-file embedding storage and scaling limitations, TaskUs built TaskGPT—a proprietary GenAI platform—with Pinecone as the core vector database for semantic search, RAG-based knowledge retrieval, and client-specific recommendations. The result: a 20% reduction in average handle time and a 5% increase in customer satisfaction across client deployments.

Business Process Outsourcing · Pinecone · Amazon Bedrock
Thomson Reuters
3,000+
subject matter experts' knowledge delivered via AI

Thomson Reuters integrated Claude via Amazon Bedrock into its AI platform, CoCounsel, to make the expertise of 3,000+ subject matter experts and 150 years of authoritative content accessible to legal and tax professionals. The solution combines Retrieval-Augmented Generation (RAG) architecture with multi-model deployment to deliver comprehensive, accurate professional analysis. Early adopters report dramatic efficiency gains, with some estimating task time cut in half or more.

Professional Services · Claude 3 Haiku · Claude 3.5 Sonnet
Cox Automotive
17 (from 57 evaluated)
production AI solutions

Cox Automotive deployed 17 production AI agent solutions using Amazon Bedrock AgentCore, reducing estimate completion from 48 hours to 30 minutes, achieving 3x consumer response rates, and projecting 17,000 hours saved.

Automotive · Amazon Bedrock AgentCore · Amazon Bedrock
Pinterest
17% YoY
revenue growth

Pinterest built an AI-powered discovery engine on AWS processing 18TB daily, delivering 10 million AI recommendations per second across 10,000+ GPU instances, driving 17% revenue growth and 70% AI-driven discovery.

Social Media · Amazon EKS · Amazon SageMaker AI
Postman
Up to 1,150/year
developer hours saved

Postman selected Claude Opus 4.6 as the default model for Agent Mode, saving developers up to 1,150 hours per year and nearly $1M annually for a 10-person team in API development automation.

Technology · Claude API · Amazon Bedrock