Redis Enterprise Powers Docugami’s LLM, Transforming Documents into Actionable Data

Company: Docugami
Industry: Software, Document Processing


Customer

Docugami uses generative AI to transform how businesses create and manage documents. Its proprietary Business Document Foundation Model turns business documents into data. Docugami serves customers across many industries, from commercial insurance and health care to real estate and professional services.

Challenge

After experiencing performance and latency challenges with its Apache Spark data processing environment, Docugami sought a solution to store checkpoint data in a database cache to speed up processing. Docugami also needed a vector database that could accelerate essential generative AI tasks such as retrieval-augmented generation (RAG), in-context learning (ICL), and vector search.

Solution

Redis Enterprise provides a comprehensive data management platform that streamlines key aspects of the generative AI project lifecycle, from caching Apache Spark checkpoint data to enabling vector similarity search (VSS) on the document knowledge base. Redis also provides a vector store for Docugami’s ML pipelines, along with Auto Tiering to extend databases beyond DRAM, document indexing, and AI-powered search capabilities.

Results

Redis Enterprise makes it easy to store, search, and update vector embeddings at scale, improving the user experience by ensuring that Docugami’s foundation model receives the most timely, relevant, and up-to-date context. Redis also relieved the bottlenecks in Docugami’s Apache Spark processing pipeline, which had been slowed by heavy I/O.


According to an August 2023 report from McKinsey & Company, nearly 22 percent of today’s knowledge workers use generative AI systems for their work.1 Docugami is on the crest of this rising wave with a unique family of large language models (LLMs) that can be applied to corporate business documents.

Docugami’s proprietary Business Document Foundation Model unlocks the critical information in corporate documents and uses it to generate reports, uncover insights, create new documents, and develop data streams for enterprise applications—all without requiring clients to invest in machine learning, staff training, or IT development.

“We are in the business of converting documents into data, and creating documents using the power of AI,” explains Taqi Jaffri, co-founder and head of product at Docugami. “Redis Enterprise is at the heart of our operation.”

The process begins by ingesting a client’s internal data and business documents. For an insurance company, that might include policies and claims. For a commercial real estate firm, documents would include listing agreements, purchase agreements, and bills of sale. Docugami creates a hierarchical representation of the content of each document in its entirety, which allows its LLMs to assemble new documents, generate insights, and supply input to line-of-business systems. 
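
To make the idea concrete, here is a hypothetical, heavily simplified chunk hierarchy rendered as a Python structure. Docugami’s actual representation is an XML tree with far richer markup, so treat the labels and nesting below as illustrative only.

```python
# Hypothetical, simplified chunk hierarchy for a commercial lease.
# Docugami's real representation is an XML tree with much richer structure.
lease = {
    "label": "Lease Agreement",
    "children": [
        {"label": "Parties", "text": "Landlord and tenant names and addresses"},
        {
            "label": "Term",
            "children": [
                {"label": "Commencement Date", "text": "Date the lease begins"},
                {"label": "Renewal Option", "text": "Conditions for extending the term"},
            ],
        },
        {"label": "Rent", "text": "Base rent, escalations, and payment schedule"},
    ],
}
```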

Docugami’s AI algorithms convert the output of this process into “chunk embeddings” and store them in Redis Enterprise. Embeddings are numeric representations of unstructured data that capture semantic information. Redis Enterprise’s vector capabilities enable Docugami to store, search, and update these embeddings at scale.
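
As a rough illustration of how chunk embeddings can be stored and indexed in Redis, the sketch below uses the redis-py client to create a vector index over hash keys and write one embedding. The index name, key prefix, embedding dimension, and sample vector are assumptions for the example, not details of Docugami’s pipeline.

```python
import numpy as np
import redis
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType

r = redis.Redis(host="localhost", port=6379)

# Vector index over hash keys prefixed "chunk:" (names and sizes are assumed).
r.ft("chunk_idx").create_index(
    (
        TextField("content"),
        VectorField("embedding", "HNSW", {
            "TYPE": "FLOAT32",
            "DIM": 384,                  # must match the embedding model's output size
            "DISTANCE_METRIC": "COSINE",
        }),
    ),
    definition=IndexDefinition(prefix=["chunk:"], index_type=IndexType.HASH),
)

# Store one chunk and its embedding as raw float32 bytes.
vec = np.random.rand(384).astype(np.float32)   # stand-in for a real model's output
r.hset("chunk:1", mapping={"content": "The lease term is five years.",
                           "embedding": vec.tobytes()})
```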

Redis Enterprise is also used for chat-based retrieval from business documents, which are maintained as XML trees. This functionality not only improves Docugami’s ability to understand the relevance of each document, but also accelerates the feedback loop when users query the LLMs, enhancing the overall user experience. 
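
A chat-style retrieval against such an index might look like the following sketch: embed the user’s question, then run a K-nearest-neighbor query so the most relevant chunks come back ranked by vector distance. The `embed` callable is a placeholder for whatever embedding model is in use.

```python
import numpy as np
from redis.commands.search.query import Query

def retrieve_chunks(r, embed, question, k=5):
    """Return the k chunks most similar to the user's question."""
    vec = embed(question).astype(np.float32)
    q = (
        Query(f"*=>[KNN {k} @embedding $vec AS score]")
        .sort_by("score")                  # smaller cosine distance = more relevant
        .return_fields("content", "score")
        .dialect(2)
    )
    return r.ft("chunk_idx").search(q, query_params={"vec": vec.tobytes()}).docs
```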

Mike Palmer, co-founder and head of technologies at Docugami, explains the significance of these technologies in a recent blog post. “Redis Enterprise enables us to handle document sets more efficiently, improving the consistency and accuracy of our document processing efforts,” he writes. “Our transition to Redis Enterprise also extends to the persistence layer for hierarchical chunks identified in documents. These chunks, as well as user feedback on them, are critical to our operations.”2

Complementing and extending the Spark environment with Redis Enterprise

Foundation models are the cornerstone of generative AI applications because they enable companies like Docugami to build specialized, domain-specific systems that put today’s AI and ML technologies to work. Docugami uses Apache Spark for its document processing and analytics pipeline, but Spark’s “chatty” architecture required excessive I/O operations, which over-stressed the storage layer. 

“Large processing jobs introduced significant latencies due to the high-frequency access patterns of Spark,” says Jaffri. “The sheer scale of the problem required lots of compute resources. For example, a large company may have tens of thousands of documents and millions of pages of content.”

After experimenting with hosted storage layers from various cloud providers, Docugami adopted Redis Enterprise as the data layer underlying Spark, deploying it in Kubernetes containers for maximum flexibility and scalability. “The Redis Enterprise Kubernetes Operator has resulted in remarkable enhancements in performance and cost reduction, illustrating the power of this modern, high-performance database solution,” says Palmer.
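
The caching pattern itself is straightforward; a hedged sketch follows. Instead of re-reading checkpoint data from slow storage on every pass, a Spark stage first checks Redis for a previously computed result. The key scheme, TTL, hostname, and serialization here are illustrative assumptions, not Docugami’s actual implementation.

```python
import pickle
import redis

r = redis.Redis(host="redis-enterprise", port=6379)  # hostname is illustrative

def cached_stage(key, compute, ttl_seconds=3600):
    """Return a cached stage result from Redis, or compute and cache it.

    `key` identifies the checkpoint (e.g. a document batch ID), and `compute`
    is the expensive Spark action that would otherwise hammer the storage layer.
    """
    hit = r.get(key)
    if hit is not None:
        return pickle.loads(hit)                      # cache hit: skip recomputation
    result = compute()                                # cache miss: run the Spark job
    r.set(key, pickle.dumps(result), ex=ttl_seconds)  # expire stale checkpoints
    return result
```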

Redis Enterprise simplifies the Docugami architecture by supplying one software solution for many distinct technology problems. For example, Docugami uses Redis Enterprise as a vector store for its Spark ML pipelines, reclaiming space occupied by deleted or outdated data. Redis Enterprise’s Auto Tiering allows Docugami to efficiently process extremely large data sets that are too big to fit in memory.

The right vector database for LLMs


More than 80 percent of today’s business data is unstructured, stored as text, images, audio, video, and other formats. To discern the inherent patterns, terminology, and relationships in this data, Docugami’s generative AI solutions employ a variety of popular techniques such as retrieval-augmented generation (RAG), in-context learning (ICL), and few-shot prompting. 

Redis Enterprise complements and extends these generative AI techniques. For example, Docugami uses Redis as a persistent, hierarchical database for storing documents in domain-specific knowledge bases as part of the generative AI process. Redis Enterprise enables AI-powered search capabilities such as vector search, which uses deep learning and other advanced techniques to answer queries based on a contextual understanding of the content. In addition, Redis’ RAG capabilities enable Docugami’s foundation model to access up-to-date or context-specific data, improving the accuracy and performance of queries and searches. Redis Enterprise also provides powerful hybrid semantic search capabilities that infuse relevant contextual data into user prompts before they are sent to the LLM.
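
Put together, the RAG loop these capabilities support can be sketched in a few lines. The version below reuses the `retrieve_chunks` helper from the earlier sketch and treats `llm` as a placeholder callable; the prompt format is an assumption for illustration, not Docugami’s production prompt.

```python
def answer_with_rag(r, embed, llm, question, k=3):
    """Retrieve relevant chunks from Redis and prepend them to the LLM prompt."""
    hits = retrieve_chunks(r, embed, question, k=k)            # vector search in Redis
    context = "\n\n".join(doc.content for doc in hits)         # assemble grounding text
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)                                         # placeholder LLM call
```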


Finally, Redis Enterprise stores external domain-specific knowledge to improve the quality of search results from Docugami’s Document XML Knowledge Graph. This capability allows people to search unstructured data using natural language prompts. “Through Redis Enterprise, we’ve seen a dramatic increase in the performance of our Document XML Knowledge Graph and a notable reduction in costs,” Palmer says. “These operational improvements have facilitated a more efficient, reliable document processing workflow.”

In the future, Docugami intends to use Redis Enterprise for semantic caching, further enhancing the foundation model’s performance by enabling it to return responses more quickly. While standard caching stores and quickly retrieves pre-generated responses for recurring queries, semantic caching will allow the model to understand and leverage each query’s underlying semantics.

“Semantic caching will allow us to use Redis Enterprise not only for identical queries, but for similar queries as well,” Jaffri explains. “For example, ‘Show me all queries and questions that contain text strings that are similar to the text string in this document.’”
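
In outline, such a semantic cache is a second vector index keyed by embeddings of past queries, with each entry storing the LLM response alongside the query vector. The sketch below assumes a `semcache_idx` index built like the chunk index above (plus a `response` text field) and an arbitrary similarity cutoff; both are illustrative assumptions, not Docugami’s design.

```python
import numpy as np
from redis.commands.search.query import Query

SIM_THRESHOLD = 0.92  # assumed cutoff: how similar a query must be to reuse an answer

def semantic_lookup(r, embed, question):
    """Return a cached LLM response for a semantically similar past query, if any."""
    vec = embed(question).astype(np.float32)
    q = (
        Query("*=>[KNN 1 @embedding $vec AS dist]")
        .return_fields("response", "dist")
        .dialect(2)
    )
    docs = r.ft("semcache_idx").search(q, query_params={"vec": vec.tobytes()}).docs
    if docs and 1.0 - float(docs[0].dist) >= SIM_THRESHOLD:
        return docs[0].response            # close enough: serve the cached answer
    return None                            # miss: call the LLM and cache the result
```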

Improving performance, reliability, and scalability

Since standardizing on Redis Enterprise, Docugami has seen steady improvements in performance and reliability, overcoming the business problems they experienced with their previous database management systems. Jaffri believes other generative AI businesses that use Apache Spark will be forced to confront these same issues, leading to a growing number of implementations of Redis Enterprise.

“Lots of companies use Spark because it is battle-hardened, it scales well, and it is widely used,” he notes. “However, as more companies develop and run generative AI models, they are going to run into the same limitations with Spark that we have encountered. 

“We are very happy with Redis Enterprise because it allows us to do a better job faster and more reliably,” Jaffri continues. “This is a core business tenet for us. Redis Enterprise is a game-changer. It is a fast, high-performance vector database—and Redis is a wonderful partner.”

Palmer concurs. “Our adoption of Redis Enterprise has led to remarkable improvements in our ML pipeline, ML Ops, and overall document processing operations. Redis Enterprise is helping us deliver on our commitment to quality and efficiency through better chunking, a more efficient vector database, and dramatic advances in scalability.”

1 “The state of AI in 2023: Generative AI’s breakout year” (McKinsey & Company, August 1, 2023)
2 “New LLM Stack & ML Ops: Docugami Chooses Redis Enterprise to Scale Up Document Processing Pipeline” (Docugami blog)