How Tinexta Visura Uses Elasticsearch and Generative AI to Cut Legal Research by Two Days

Tinexta Visura is an Italian digital trust and technology company that built Lextel AI, a legal research platform for Italian law firms and corporate legal teams. Powered by Elasticsearch, Google Gemini, and retrieval-augmented generation across a repository of 4.8 million legal documents, the platform enables attorneys to locate relevant case law and automatically generate traceable legal opinions. The system reduces attorney research and drafting time by one hour to two full working days per task, depending on complexity.

Impact

1 hour to 2 full days

Legal research time saved per task

4.8 million

Legal documents in repository

Significant reduction

Token usage cost reduction

Challenge

Italian legal professionals needed to search hundreds of court rulings and statutes per case using tools that lacked semantic understanding, resulting in research tasks that routinely consumed one to two full working days before any drafting could begin.

Solution

Tinexta Visura built Lextel AI on Elasticsearch and Google Cloud, applying hybrid BM25 and semantic vector search across 4.8 million legal documents, then using Google Gemini to generate citeable summaries, legal opinions, and memos from the retrieved results.

What Leaders Say

With Elasticsearch, we can filter that content before it even reaches the generative layer, dramatically reducing token usage costs.

Giancarlo Facoetti, Head of AI Strategy, Tinexta Innovation Hub

Elasticsearch massively reduces the complexity around semantic search. You don’t have to stitch together multiple components, which translates to fewer systems to monitor, maintain, and troubleshoot—and fewer headaches for everyone from our innovation team to our client end users.

Giancarlo Facoetti, Head of AI Strategy, Tinexta Innovation Hub

I’ve worked with several major enterprise platforms, and I know that choosing a technology isn’t just about the product itself. It’s also about the service, the people behind it, and the trust they build with your team. Elastic on Google Cloud gave us that trust.

Andrea Vingolo, General Manager, Tinexta Visura

Full Story

Tinexta Visura operates at the intersection of digital trust, cybersecurity, and professional services innovation in Italy. As part of Tinexta Group, it serves law firms and corporate legal departments that handle vast volumes of court rulings, statutes, and precedents. The pressure to research and draft faster without sacrificing accuracy or citability was intensifying, and traditional keyword search tools were not keeping pace.

Legal professionals faced a specific and grinding bottleneck: when building a legal argument, an attorney might need to review hundreds of court rulings manually, reading through documents to find relevant passages. This process routinely consumed a full day or more, and complex cases could stretch to two days of pure research before any drafting began. Keyword search tools lacked semantic understanding, returning too many irrelevant results and missing contextually relevant material that used different terminology.

Tinexta Visura built Lextel AI on Elasticsearch running on Elastic Cloud and Google Cloud. The system applies hybrid search, combining BM25 keyword retrieval with vector-based semantic search, so attorneys can submit detailed natural-language queries and get contextually precise results from a corpus of 4.8 million legal documents averaging 15 pages each. Retrieved documents are then passed to Google Gemini, which generates structured summaries, draft legal opinions, and memos grounded in cited sources. A key engineering insight: by pre-filtering documents in Elasticsearch before they reach the generative model, the team dramatically reduced token usage and LLM API costs.
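The retrieval step described above can be expressed as a single hybrid request: BM25 keyword scoring and approximate kNN vector scoring submitted together, with only a trimmed top-k result set forwarded to the generative model. The sketch below is illustrative, not Tinexta Visura's actual code; the index name `legal-docs`, the `body` text field, and the 768-dimension `embedding` field are all assumptions.

```python
def build_hybrid_request(query_text, query_vector, size=10):
    """Build one Elasticsearch request combining BM25 and kNN retrieval.

    BM25 scores the `body` text field; approximate kNN scores the
    `embedding` dense_vector field. Limiting `size` and `_source` keeps
    the payload small before anything reaches the LLM, which is where
    the token-cost savings come from.
    """
    return {
        "query": {"match": {"body": query_text}},   # BM25 keyword retrieval
        "knn": {
            "field": "embedding",
            "query_vector": query_vector,
            "k": size,
            "num_candidates": 10 * size,            # ANN candidate pool
        },
        "size": size,                               # only top hits go to the LLM
        "_source": ["title", "body"],
    }

request = build_hybrid_request(
    "licenziamento per giusta causa",  # "dismissal for just cause"
    [0.12] * 768,                      # embedding from any 768-dim model
)
```

With the official Python client, this body would be passed as keyword arguments to `Elasticsearch.search(index="legal-docs", ...)`; Elasticsearch then merges the keyword and vector scores into one ranked hit list.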

The results are measurable and immediate. Attorneys using Lextel AI save anywhere from one hour to two full working days per research task, depending on the complexity of the case. For a task that previously meant manually reviewing hundreds of rulings, Lextel AI now retrieves the most relevant cases, highlights the critical passages, and produces a draft legal opinion in a fraction of the time. The combined BM25 and semantic retrieval approach gives greater precision and contextual awareness than either method alone.

Looking ahead, Tinexta Visura is positioned to consolidate its infrastructure further as Elasticsearch adds native embedding computation, eliminating the need for a separate vector database. The Ranking Evaluation API enables continuous quality tuning as Italian legal content evolves. For a legal technology company building AI into the core of professional workflows, this architecture offers both the performance and the simplicity needed to scale confidently.
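The continuous quality tuning mentioned above maps onto Elasticsearch's `_rank_eval` endpoint, which scores a set of test queries against human relevance judgments. A minimal sketch of one request body follows; the index name, query, document IDs, and rating values are all hypothetical.

```python
def build_rank_eval_request(query_text, rated_docs, k=10):
    """Build a _rank_eval body that measures nDCG@k for one test query.

    `rated_docs` maps document IDs to graded relevance labels
    (e.g. 0 = irrelevant ... 3 = highly relevant) assigned by reviewers.
    """
    return {
        "requests": [{
            "id": "test_query_1",
            "request": {"query": {"match": {"body": query_text}}},
            "ratings": [
                {"_index": "legal-docs", "_id": doc_id, "rating": rating}
                for doc_id, rating in rated_docs.items()
            ],
        }],
        "metric": {"dcg": {"k": k, "normalize": True}},  # normalized DCG
    }

body = build_rank_eval_request(
    "recesso dal contratto di locazione",  # "termination of a lease"
    {"doc_101": 3, "doc_205": 1, "doc_377": 0},
)
```

POSTing this body to `/legal-docs/_rank_eval` returns a metric score per query, so retrieval quality can be re-measured whenever new rulings are indexed or analyzers change.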

Similar Cases

WP Engine
~5 milliseconds
search query response time

WP Engine, the leading WordPress hosting platform serving more than 1.5 million users across 200,000 websites in 150+ countries, deployed Elastic’s Search AI Platform alongside Google Cloud Vertex AI and Gemini to build Smart Search AI and enable retrieval-augmented generation (RAG) capabilities for its customers. The integration allows WP Engine to deliver natural language search, context-aware product recommendations, and AI-powered chatbots to website owners without requiring them to stitch together multiple vendors. Response times dropped to as low as five milliseconds, and the platform handled traffic spikes from hundreds of thousands to tens of millions of queries per minute with zero downtime.

Technology · Gemini · Elasticsearch
Vectorize.io
~2 hours
time to deploy an AI solution for a new client

Vectorize.io is a US-based software company that builds agentic and generative AI infrastructure, helping organizations in law, insurance, and finance make vast volumes of unstructured data usable by large language models. By integrating Elastic’s hybrid search and Elastic Cloud Serverless with Amazon Bedrock, Vectorize deploys production-ready AI solutions for clients in hours rather than weeks. One client whose developer community grew by a million users in a year relied on Vectorize’s real-time learning agent—built on Elasticsearch—to answer support queries and instantly index new answers for future use.

Software and Technology · Amazon Bedrock · Elasticsearch
Chipper Cash
95%+
selfie verification accuracy

Chipper Cash, a fintech serving over five million customers across Africa, deployed a Pinecone-powered facial similarity search system to detect and block fraudulent duplicate sign-ups in real time. The solution slashed identity verification latency from up to 20 minutes down to under 2 seconds, and reduced fraudulent sign-ups by 10x across all markets.

Financial Services · Google Cloud · Snowflake
Millennium bcp
2.6x higher
conversion rate lift for owned media (BigQuery audiences vs. other first-party audiences)

Millennium bcp, Portugal's largest private bank, used Google Cloud's BigQuery machine learning tools to build predictive audience models for personal loan campaigns. By segmenting existing customers by propensity to borrow, the bank dramatically improved both owned and paid media performance. The result was a 2.6x higher conversion rate and a 36% drop in cost per acquisition.

Financial Services · Firebase · BigQuery
Super-Pharm
50% to 90%
inventory accuracy

Super-Pharm leveraged Google Vertex AI for ML-powered demand forecasting, improving inventory accuracy from 50% to 90% and making forecasting 10x more efficient.

Retail · Google BigQuery · Google Vertex AI
EVERSANA
30 minutes
campaign development time

EVERSANA built the first AI-powered marketing agency platform on Google Cloud using three master AI agents that create complete pharma campaigns in 30 minutes.

Pharmaceuticals · Google Vertex AI · Google Gemini