How Tinexta Visura Uses Elasticsearch and Generative AI to Cut Legal Research by Two Days
Tinexta Visura is an Italian digital trust and technology company that built Lextel AI, a legal research platform for Italian law firms and corporate legal teams. Powered by Elasticsearch, Google Gemini, and retrieval-augmented generation across a repository of 4.8 million legal documents, the platform enables attorneys to locate relevant case law and automatically generate traceable legal opinions. The system reduces attorney research and drafting time by one hour to two full working days per task, depending on complexity.
Impact
- 1 hour to 2 full days: legal research time saved per task
- 4.8 million: legal documents in repository
- Significant reduction in token usage costs
Challenge
Italian legal professionals needed to search hundreds of court rulings and statutes per case using tools that lacked semantic understanding, resulting in research tasks that routinely consumed one to two full working days before any drafting could begin.
Solution
Tinexta Visura built Lextel AI on Elasticsearch and Google Cloud, applying hybrid BM25 and semantic vector search across 4.8 million legal documents, then using Google Gemini to generate citeable summaries, legal opinions, and memos from the retrieved results.
What Leaders Say
“With Elasticsearch, we can filter that content before it even reaches the generative layer, dramatically reducing token usage costs.”
“Elasticsearch massively reduces the complexity around semantic search. You don’t have to stitch together multiple components, which translates to fewer systems to monitor, maintain, and troubleshoot—and fewer headaches for everyone from our innovation team to our client end users.”
“I’ve worked with several major enterprise platforms, and I know that choosing a technology isn’t just about the product itself. It’s also about the service, the people behind it, and the trust they build with your team. Elastic on Google Cloud gave us that trust.”
Full Story
Tinexta Visura operates at the intersection of digital trust, cybersecurity, and professional services innovation in Italy. As part of Tinexta Group, it serves law firms and corporate legal departments that handle vast volumes of court rulings, statutes, and precedents. The pressure to research and draft faster without sacrificing accuracy or citability was intensifying, and traditional keyword search tools were not keeping pace.
Legal professionals faced a specific and grinding bottleneck: when building a legal argument, an attorney might need to review hundreds of court rulings manually, reading through documents to find relevant passages. This process routinely consumed a full day or more, and complex cases could stretch to two days of pure research before any drafting began. Keyword search tools lacked semantic understanding, returning too many irrelevant results and missing contextually relevant material that used different terminology.
Tinexta Visura built Lextel AI on Elasticsearch running on Elastic Cloud and Google Cloud. The system applies hybrid search, combining BM25 keyword retrieval with vector-based semantic search, so attorneys can submit detailed natural-language queries and get contextually precise results from a corpus of 4.8 million legal documents averaging 15 pages each. Retrieved documents are then passed to Google Gemini, which generates structured summaries, draft legal opinions, and memos grounded in cited sources. A key engineering insight: by pre-filtering documents in Elasticsearch before they reach the generative model, the team dramatically reduced token usage and LLM API costs.
The results are measurable and immediate. Attorneys using Lextel AI save anywhere from one hour to two full working days per research task, depending on the complexity of the case. For a task that previously meant manually reviewing hundreds of rulings, Lextel AI now retrieves the most relevant cases, highlights the critical passages, and produces a draft legal opinion in a fraction of the time. The combined BM25 and semantic retrieval approach gives greater precision and contextual awareness than either method alone.
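The drafting step hinges on grounding: retrieved passages are packed into the prompt with source markers so every claim in the generated opinion can be traced back to a cited ruling. A sketch, with field names and prompt wording as illustrative assumptions rather than Lextel AI's actual prompt:

```python
def build_grounded_prompt(question: str, hits: list[dict]) -> str:
    """Format retrieved hits as numbered, citeable context for the LLM.

    Each hit is assumed to carry a "citation" (e.g. the ruling's
    reference) and a "passage" (the relevant text excerpt).
    """
    context = "\n\n".join(
        f"[{i}] {h['citation']}\n{h['passage']}"
        for i, h in enumerate(hits, start=1)
    )
    return (
        "You are drafting a legal opinion under Italian law.\n"
        "Answer strictly from the numbered sources below and cite them "
        "as [n] after each claim.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\n"
    )

# The prompt would then go to Gemini, e.g. via the google-generativeai
# Python SDK (model name assumed):
# import google.generativeai as genai
# model = genai.GenerativeModel("gemini-1.5-pro")
# draft = model.generate_content(build_grounded_prompt(q, hits)).text
```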
Looking ahead, Tinexta Visura is positioned to consolidate its infrastructure further as Elasticsearch adds native embedding computation, eliminating the need for separate vector database systems. The Ranking Evaluation API enables continuous quality tuning as Italian legal content evolves. For a legal technology company building AI into the core of professional workflows, this architecture offers both the performance and the simplicity needed to scale confidently.
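The Ranking Evaluation API works by scoring live queries against a set of graded relevance judgments. A sketch of a precision@k evaluation body, assuming hypothetical query IDs, document IDs, and attorney-assigned ratings:

```python
def build_rank_eval(index: str, labeled_queries: list[dict],
                    k: int = 10) -> dict:
    """Build an Elasticsearch _rank_eval request measuring precision@k.

    Each labeled query supplies an "id", a "question", and "ratings":
    a mapping of known document IDs to graded relevance (0 = not
    relevant, higher = more relevant).
    """
    return {
        "requests": [
            {
                "id": q["id"],
                "request": {"query": {"match": {"text": q["question"]}}},
                "ratings": [
                    {"_index": index, "_id": doc_id, "rating": rating}
                    for doc_id, rating in q["ratings"].items()
                ],
            }
            for q in labeled_queries
        ],
        # Documents rated >= 1 count as relevant for precision@k.
        "metric": {"precision": {"k": k, "relevant_rating_threshold": 1}},
    }

# Usage (client and index assumed):
# es.rank_eval(index="legal-docs",
#              body=build_rank_eval("legal-docs", labeled_queries))
```

Rerunning the same evaluation after each tuning pass gives a repeatable quality signal as the corpus and query mix evolve.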