How Fujitsu Built Takane, a Japanese LLM for Regulated Industries, with Cohere

Fujitsu, the global IT and digital transformation company with 124,000 employees, partnered with Cohere to develop Takane — a state-of-the-art Japanese large language model built on the Cohere Command series. Designed for private deployment in regulated sectors such as finance, healthcare, and government, Takane delivers world-class performance on the JGLUE benchmark and is now integrated into Fujitsu’s AI service offerings and data intelligence platform.

Impact

World-class score

JGLUE benchmark performance

100+

Countries served by Fujitsu

124,000

Fujitsu employee count

Challenge

Fujitsu needed a highly accurate Japanese LLM capable of private deployment in regulated industries such as finance, healthcare, and government — sectors where data sovereignty, compliance, and language precision are non-negotiable, and where general-purpose public LLMs were inadequate.

Solution

Fujitsu partnered with Cohere to build Takane, a custom Japanese LLM trained on the Cohere Command series with private deployment architecture, optimized for complex document workflows, multilingual enterprise use cases, and high-stakes industries requiring strict compliance.

What Leaders Say

We are dedicated to supporting customers’ business transformation by bringing the most advanced AI to market, not only through our own innovations, but also by collaborating with our global partners like Cohere.

Yoshinami Takahashi, Corporate Vice President, COO, Fujitsu Limited

Full Story

Fujitsu serves customers across more than 100 countries as a digital transformation partner, with deep roots in sectors that demand the highest standards for data privacy and regulatory compliance: finance, government, healthcare, law, and manufacturing. As generative AI became a core transformation enabler, Fujitsu recognized that general-purpose LLMs — optimized primarily for English — created a fundamental accuracy gap for Japanese enterprise customers. In regulated industries, even minor language errors can have serious consequences, making reliability in Japanese-specific contexts non-negotiable.

The challenge Fujitsu faced was not simply building a Japanese-capable model, but building one secure enough for industries operating under strict data governance requirements. General LLMs hosted by public cloud providers could not meet these customers’ compliance and sovereignty requirements. Developing a robust, scalable model from scratch would also require access to advanced training infrastructure, extensive Japanese language datasets, and deep expertise in LLM development — capabilities Fujitsu needed to acquire or partner for.

Fujitsu chose Cohere as its strategic technology partner, leveraging the Cohere Command series as the base architecture. Working with Cohere’s engineers, Fujitsu built Takane as a privately deployed custom model optimized for the specific linguistic and domain requirements of Japanese enterprise customers. The model’s capabilities span complex document workflow processing, multilingual support (Japanese, German, Chinese, Portuguese), structured data extraction into formats like JSON and CSV, mathematical and logical reasoning for finance and engineering use cases, and high-accuracy summarization and sentiment analysis for enterprise decision-making.
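Takane is a private deployment and its API is not public, so as a rough illustration only: the structured-extraction workflow described above typically ends with the model returning JSON that downstream systems flatten into CSV. The sketch below shows that last step with the Python standard library; the `model_output` string is a hypothetical stand-in for a model response, not actual Takane output.

```python
import csv
import io
import json

# Hypothetical model response: an LLM asked to extract invoice line
# items as a JSON array of flat records (stand-in, not real output).
model_output = """
[
  {"item": "Server maintenance", "amount_jpy": 120000},
  {"item": "Cloud storage", "amount_jpy": 45000}
]
"""

def json_to_csv(json_text: str) -> str:
    """Parse a JSON array of flat records and render it as CSV."""
    records = json.loads(json_text)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(json_to_csv(model_output))
```

In practice the JSON schema would be dictated by the prompt or a structured-output constraint, and the CSV would feed whatever enterprise system consumes the extraction.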

The performance outcome was significant: Takane achieved a world-class score on the JGLUE benchmark, validating its accuracy for the demanding business contexts Fujitsu’s customers operate in. The model is now embedded in Fujitsu’s commercial AI infrastructure — integrated into the Fujitsu Kozuchi AI service and offered through the Fujitsu Data Intelligence PaaS (DI PaaS) — making it accessible to customers across Fujitsu’s client base in Japan’s most privacy-sensitive industries.

The Takane partnership demonstrates how an established global IT company can accelerate AI capability development by combining its domain expertise and distribution reach with a specialized AI partner’s model infrastructure. Rather than building its own LLM from zero, Fujitsu used Cohere’s training expertise and base models to reach production quality faster than an independent effort would have allowed, while retaining the ability to deploy privately and maintain full control over its customers’ data.

Similar Cases

Delphi
>100M
vectors stored

Delphi is an AI platform that enables coaches, creators, and experts to deploy interactive “Digital Minds”—always-on conversational agents trained on their unique content. Scaling from proof of concept to a commercial platform with thousands of customers required a vector database that could support millions of isolated namespaces, billions of vectors, and sub-second retrieval under variable load. Delphi selected Pinecone, achieving P95 query latency of 100ms and keeping retrieval under 30% of total response time—freeing the engineering team to build product rather than manage infrastructure.

Technology: Pinecone
Notion
Millions
Notion AI users reached

Notion, the connected workspace platform used by millions worldwide, integrated Cohere Rerank into its search pipeline to power Notion AI’s search accuracy across multilingual enterprise workspaces. Every search and Notion AI interaction now routes through Cohere Rerank, delivering dramatically improved relevance while cutting the cost and complexity of embedding-based retrieval for smaller workspaces.

Technology: Cohere Rerank
Palo Alto Networks
351,000 hours
employee productivity hours saved

Palo Alto Networks, the global cybersecurity leader with nearly 15,000 employees, deployed Moveworks as an AI Assistant named Sheldon to deliver autonomous support across Slack, email, and ServiceNow. The platform resolves 4,000 IT and HR issues per month while saving 351,000 employee hours, enabling the company to scale its hybrid FLEXWORK model without adding headcount.

Technology: Moveworks
Pure Storage
30+ minutes
time saved per search

Pure Storage, a Santa Clara-based enterprise data storage company, deployed Glean to unify knowledge access across Jira, GitHub, and internal wikis for teams spanning engineering, legal, and customer support. The AI-powered search platform cuts information-retrieval time by more than 30 minutes per search and enables employees to build custom GenAI applications in as little as 5 minutes, while boosting overall employee satisfaction scores by 39 points.

Technology: Glean
CoreWeave
2–5 days (down from 4–8 days)
mean time to resolution

CoreWeave, a global AI cloud provider serving top AI labs and enterprises, deployed Cohere’s North agentic AI platform to overhaul its Slack-based customer support workflow in 90 days. North automated ticket triage, context gathering, and routing recommendations, cutting mean resolution time from 4–8 days to 2–5 days while sustaining customer satisfaction scores between 4.9 and 5.0.

Technology: Cohere North
Salesforce
20%
productivity increase

Salesforce, the world’s leading CRM company, deployed Writer across more than 3,000 employees spanning marketing, communications, product, and customer success. Using Writer’s AI Studio no-code builder and Knowledge Graph RAG, teams create and launch custom agents in minutes without engineering support. Users report a 20% productivity gain—equivalent to reclaiming one full workday per week—with 78% saying the platform positively affects their daily work.

Technology: Writer
Fifth Dimension
50x
document processing capacity increase

Fifth Dimension, a UK-based AI analytics company serving the real estate industry, migrated to Google Cloud to overcome critical infrastructure bottlenecks. By adopting Vertex AI, Cloud Run, and serverless architecture, the company achieved 50x processing scalability, 6x revenue growth, and a 30% reduction in infrastructure costs — all within a rapid growth trajectory from founding in 2023 to global scale by 2025.

Technology: Vertex AI, Cloud Run
Adobe
30%
faster case resolutions

Adobe deployed the ServiceNow AI Platform across IT, HR, security, and workplace operations to streamline employee experiences for over 30,000 staff. Generative AI tools like Now Assist help more than 8,000 IT and HR team members resolve cases faster, reduce outage recovery time, and automate email triage. The result is a measurably faster, more connected workforce that frees employees to focus on high-value creative work.

Technology: ServiceNow AI Platform, Now Assist