Overcome Failing Document Ingestion & RAG Strategies with Agentic Knowledge Distillation

Introducing the pyramid search approach

Introduction

Many generative AI use cases still revolve around Retrieval Augmented Generation (RAG), yet consistently fall short of user expectations. Despite the growing body of research on RAG improvements and even adding Agents into the process, many solutions still fail to return exhaustive results, miss information that is critical but infrequently mentioned in the documents, require multiple search iterations, and generally struggle to reconcile key themes across multiple documents. To top it all off, many implementations still rely on cramming as much “relevant” information as possible into the model’s context window alongside detailed system and user prompts. Reconciling all this information often exceeds the model’s cognitive capacity and compromises response quality and consistency.

This is where our Agentic Knowledge Distillation + Pyramid Search Approach comes into play. Instead of chasing the best chunking strategy, retrieval algorithm, or inference-time reasoning method, my team (Jim Brown, Mason Sawtell, Sandi Besen, and I) takes an agentic approach to document ingestion.

We leverage the full capability of the model at ingestion time to focus exclusively on distilling and preserving the most meaningful information from the document dataset. This fundamentally simplifies the RAG process by allowing the model to direct its reasoning abilities toward addressing the user/system instructions rather than struggling to understand formatting and disparate information across document chunks. 

We specifically target high-value questions that are often difficult to evaluate because they have multiple correct answers or solution paths. These cases are where traditional RAG solutions struggle most and existing RAG evaluation datasets are largely insufficient for testing this problem space. For our research implementation, we downloaded annual and quarterly reports from the last year for the 30 companies in the DOW Jones Industrial Average. These documents can be found through the SEC EDGAR website. The information on EDGAR is freely accessible and can be downloaded or queried through EDGAR public searches. See the SEC privacy policy for additional details; information on the SEC website is “considered public information and may be copied or further distributed by users of the web site without the SEC’s permission”. We selected this dataset for two key reasons: first, it falls outside the knowledge cutoff for the models evaluated, ensuring that the models cannot respond to questions based on their knowledge from pre-training; second, it’s a close approximation for real-world business problems while allowing us to discuss and share our findings using publicly available data. 

While typical RAG solutions excel at factual retrieval where the answer is easily identified in the document dataset (e.g., “When did Apple’s annual shareholder’s meeting occur?”), they struggle with nuanced questions that require a deeper understanding of concepts across documents (e.g., “Which of the DOW companies has the most promising AI strategy?”). Our Agentic Knowledge Distillation + Pyramid Search Approach addresses these types of questions with much greater success compared to other standard approaches we tested and overcomes limitations associated with using knowledge graphs in RAG systems. 

In this article, we’ll cover how our knowledge distillation process works, key benefits of this approach, examples, and an open discussion on the best way to evaluate these types of systems where, in many cases, there is no singular “right” answer.

Building the pyramid: How Agentic Knowledge Distillation works

Image by author and team depicting the pyramid structure for document ingestion. Robots represent agents building the pyramid.

Overview

Our knowledge distillation process creates a multi-tiered pyramid of information from the raw source documents. Our approach is inspired by the pyramids used in deep learning computer vision-based tasks, which allow a model to analyze an image at multiple scales. We take the contents of the raw document, convert it to markdown, and distill the content into a list of atomic insights, related concepts, document abstracts, and general recollections/memories. During retrieval it’s possible to access any or all levels of the pyramid to respond to the user request. 
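As a rough illustration (the class and field names below are hypothetical, not our actual implementation), each layer of the pyramid can be thought of as the same simple record type: natural-language text plus an embedding and some provenance.

from dataclasses import dataclass, field

# Hypothetical sketch of the pyramid layers (names are illustrative only).
@dataclass
class PyramidRecord:
    text: str                            # natural-language content stored at this layer
    embedding: list[float]               # vector used for semantic retrieval
    source_document: str | None = None   # provenance, where applicable

@dataclass
class Insight(PyramidRecord):            # level 1: atomic, SVO-style facts per page
    page_number: int | None = None

@dataclass
class Concept(PyramidRecord):            # level 2: themes connecting related insights
    insight_ids: list[int] = field(default_factory=list)

@dataclass
class Abstract(PyramidRecord):           # level 3: one dense summary per document
    pass

@dataclass
class Recollection(PyramidRecord):       # level 4: cross-document memories
    originating_documents: list[str] = field(default_factory=list)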

How to distill documents and build the pyramid: 

  1. Convert documents to Markdown: Convert all raw source documents to Markdown. We’ve found that models process Markdown best for this task compared to other formats like JSON, and it is more token-efficient. We used Azure Document Intelligence to generate the markdown for each page of the document, but there are many other open-source libraries, like MarkItDown, that do the same thing. Our dataset included 331 documents and 16,601 pages. 
  2. Extract atomic insights from each page: We process documents using a two-page sliding window, which allows each page to be analyzed twice. This gives the agent the opportunity to correct any potential mistakes when processing the page initially. We instruct the model to create a numbered list of insights that grows as it processes the pages in the document. The agent can overwrite insights from the previous page if they were incorrect since it sees each page twice. We instruct the model to extract insights in simple sentences following the subject-verb-object (SVO) format and to write sentences as if English is the second language of the user. This significantly improves performance by encouraging clarity and precision. Rolling over each page multiple times and using the SVO format also solves the disambiguation problem, which is a huge challenge for knowledge graphs. The insight generation step is also particularly helpful for extracting information from tables since the model captures the facts from the table in clear, succinct sentences. Our dataset produced 216,931 total insights, about 13 insights per page and 655 insights per document.
  3. Distilling concepts from insights: From the detailed list of insights, we identify higher-level concepts that connect related information about the document. This step significantly reduces noise and redundant information in the document while preserving essential information and themes. Our dataset produced 14,824 total concepts, about 1 concept per page and 45 concepts per document. 
  4. Creating abstracts from concepts: Given the insights and concepts in the document, the LLM writes an abstract that appears both better than any abstract a human would write and more information-dense than any abstract present in the original document. The LLM-generated abstract packs incredibly comprehensive knowledge about the document into a small number of tokens. We produce one abstract per document, 331 total.
  5. Storing recollections/memories across documents: At the top of the pyramid we store critical information that is useful across all tasks. This can be information that the user shares about the task or information the agent learns about the dataset over time by researching and responding to tasks. For example, we can store the current 30 companies in the DOW as a recollection since this list is different from the 30 companies in the DOW at the time of the model’s knowledge cutoff. As we conduct more and more research tasks, we can continuously improve our recollections and maintain an audit trail of which documents these recollections originated from. For example, we can keep track of AI strategies across companies, where companies are making major investments, etc. These high-level connections are super important since they reveal relationships and information that are not apparent in a single page or document.
Sample subset of insights extracted from IBM 10Q, Q3 2024 (page 4)
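To make step 2 above more concrete, here is a minimal sketch of the two-page sliding window. The call_llm helper and the prompt wording are placeholders for illustration, not the prompts we actually use:

def extract_insights(pages: list[str], call_llm) -> list[str]:
    """Sketch of the step-2 sliding window (illustrative only).

    pages holds the markdown for each page; call_llm is an assumed helper that
    takes a prompt plus the running insight list and returns the updated list.
    """
    insights: list[str] = []
    for i in range(len(pages)):
        # Two-page window: every interior page appears in two consecutive windows,
        # which is what lets the agent revisit and correct earlier insights.
        window = "\n\n".join(pages[i : i + 2])
        prompt = (
            "Extract atomic insights as simple subject-verb-object sentences. "
            "Overwrite earlier insights if these pages show they were wrong.\n\n"
            + window
        )
        insights = call_llm(prompt, existing_insights=insights)
    return insights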

We store the text and embeddings for each layer of the pyramid (pages and up) in Azure PostgreSQL. We originally used Azure AI Search, but switched to PostgreSQL for cost reasons. This required us to write our own hybrid search function since PostgreSQL doesn’t yet natively support this feature. This implementation would work with any vector database or vector index of your choosing. The key requirement is to store and efficiently retrieve both text and vector embeddings at any level of the pyramid. 
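Because we had to roll our own hybrid search, the sketch below shows one minimal way to combine the two signals with reciprocal rank fusion. The ranking callables and the constant k=60 are illustrative assumptions, not our production code:

from typing import Callable

def hybrid_search(
    query: str,
    vector_rank: Callable[[str], list[str]],   # assumed: record ids ordered by embedding similarity
    keyword_rank: Callable[[str], list[str]],  # assumed: record ids ordered by full-text relevance
    top_k: int = 10,
    k: int = 60,
) -> list[str]:
    """Reciprocal rank fusion of a vector ranking and a keyword ranking (sketch only)."""
    scores: dict[str, float] = {}
    for ranking in (vector_rank(query), keyword_rank(query)):
        for rank, record_id in enumerate(ranking):
            # Records near the top of either ranking accumulate the largest scores.
            scores[record_id] = scores.get(record_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]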

This approach essentially creates the essence of a knowledge graph but stores information in natural language, the way an LLM natively wants to interact with it, and is more token-efficient at retrieval. We also let the LLM pick the terms used to categorize each level of the pyramid; this seemed to let the model decide for itself the best way to describe and differentiate between the information stored at each level. For example, the LLM preferred “insights” to “facts” as the label for the first level of distilled knowledge. Our goal in doing this was to better understand how an LLM thinks about the process by letting it decide how to store and group related information. 

Using the pyramid: How it works with RAG & Agents

At inference time, both traditional RAG and agentic approaches benefit from the pre-processed, distilled information ingested in our knowledge pyramid. The pyramid structure allows for efficient retrieval in both the traditional RAG case, where only the top X related pieces of information are retrieved, and the agentic case, where the Agent iteratively plans, retrieves, and evaluates information before returning a final response. 

The benefit of the pyramid approach is that information at any and all levels of the pyramid can be used during inference. For our implementation, we used PydanticAI to create a search agent that takes in the user request, generates search terms, explores ideas related to the request, and keeps track of information relevant to the request. Once the search agent determines there’s sufficient information to address the user request, the results are re-ranked and sent back to the LLM to generate a final reply. Our implementation allows a search agent to traverse the information in the pyramid as it gathers details about a concept/search term. This is similar to walking a knowledge graph, but in a way that’s more natural for the LLM since all the information in the pyramid is stored in natural language.
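Stripped of the framework details, the loop looks roughly like the outline below. This is a plain-Python paraphrase rather than our PydanticAI implementation, and every helper passed in is an assumed stand-in for an LLM call or a pyramid query:

def answer_request(
    user_request: str,
    generate_search_terms,   # assumed: LLM call proposing search terms from the request and notes so far
    search_pyramid,          # assumed: retrieves pyramid records for a single search term
    is_sufficient,           # assumed: LLM call deciding whether enough evidence has been gathered
    rerank,                  # assumed: re-orders the gathered records by relevance to the request
    write_final_answer,      # assumed: LLM call drafting the reply from the re-ranked evidence
    max_iterations: int = 10,
) -> str:
    """Plain-Python outline of the agentic search loop (illustrative, not the real agent)."""
    gathered = []
    for _ in range(max_iterations):
        for term in generate_search_terms(user_request, gathered):
            gathered.extend(search_pyramid(term))
        if is_sufficient(user_request, gathered):
            break
    return write_final_answer(user_request, rerank(user_request, gathered))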

Depending on the use case, the Agent could access information at all levels of the pyramid or only at specific levels (e.g. only retrieve information from the concepts). For our experiments, we did not retrieve raw page-level data since we wanted to focus on token efficiency and found the LLM-generated information for the insights, concepts, abstracts, and recollections was sufficient for completing our tasks. In theory, the Agent could also have access to the page data; this would provide additional opportunities for the agent to re-examine the original document text; however, it would also significantly increase the total tokens used. 

Here is a high-level visualization of our Agentic approach to responding to user requests:

Image created by author and team providing an overview of the agentic research & response process

Results from the pyramid: Real-world examples

To evaluate the effectiveness of our approach, we tested it against a variety of question categories, including typical fact-finding questions and complex cross-document research and analysis tasks. 

Fact-finding (spear fishing): 

These tasks require identifying specific information or facts that are buried in a document. These are the types of questions typical RAG solutions target but often require many searches and consume lots of tokens to answer correctly. 

Example task: “What was IBM’s total revenue in the latest financial reporting?”

Example response using pyramid approach: “IBM’s total revenue for the third quarter of 2024 was $14.968 billion [ibm-10q-q3-2024.pdf, pg. 4]”

Total tokens used to research and generate response

This result is correct (human-validated) and was generated using only 9,994 total tokens, with 1,240 tokens in the generated final response. 

Complex research and analysis: 

These tasks involve researching and understanding multiple concepts to gain a broader understanding of the documents and make inferences and informed assumptions based on the gathered facts.

Example task: “Analyze the investments Microsoft and NVIDIA are making in AI and how they are positioning themselves in the market. The report should be clearly formatted.”

Example response:

Response generated by the agent analyzing AI investments and positioning for Microsoft and NVIDIA.

The result is a comprehensive report that executed quickly and contains detailed information about each of the companies. 26,802 total tokens were used to research and respond to the request with a significant percentage of them used for the final response (2,893 tokens or ~11%). These results were also reviewed by a human to verify their validity.

Snippet indicating total token usage for the task

Example task: “Create a report on analyzing the risks disclosed by the various financial companies in the DOW. Indicate which risks are shared and unique.”

Example response:

Part 1 of response generated by the agent on disclosed risks.
Part 2 of response generated by the agent on disclosed risks.

Similarly, this task was completed in 42.7 seconds and used 31,685 total tokens, with 3,116 tokens used to generate the final report. 

Snippet indicating total token usage for the task

These results for both fact-finding and complex analysis tasks demonstrate that the pyramid approach efficiently creates detailed reports with low latency using a minimal number of tokens. The tokens used for the tasks carry dense meaning with little noise, allowing for high-quality, thorough responses across tasks.

Benefits of the pyramid: Why use it?

Overall, we found that our pyramid approach provided a significant boost in response quality and overall performance for high-value questions. 

Some of the key benefits we observed include: 

  • Reduced model’s cognitive load: When the agent receives the user task, it retrieves pre-processed, distilled information rather than the raw, inconsistently formatted, disparate document chunks. This fundamentally improves the retrieval process since the model doesn’t waste its cognitive capacity on trying to break down the page/chunk text for the first time. 
  • Superior table processing: By breaking down table information and storing it in concise but descriptive sentences, the pyramid approach makes it easier to retrieve relevant information at inference time through natural language queries. This was particularly important for our dataset since financial reports contain lots of critical information in tables. 
  • Improved response quality to many types of requests: The pyramid enables more comprehensive, context-aware responses to both precise, fact-finding questions and broad, analysis-based tasks that involve many themes across numerous documents. 
  • Preservation of critical context: Since the distillation process identifies and keeps track of key facts, important information that might appear only once in the document is easier to maintain. For example, noting that all tables are represented in millions of dollars or in a particular currency. Traditional chunking methods often cause this type of information to slip through the cracks. 
  • Optimized token usage, memory, and speed: By distilling information at ingestion time, we significantly reduce the number of tokens required during inference, are able to maximize the value of information put in the context window, and improve memory use. 
  • Scalability: Many solutions struggle to perform as the size of the document dataset grows. This approach provides a much more efficient way to manage a large volume of text by only preserving critical information. This also allows for a more efficient use of the LLM’s context window by only sending it useful, clear information.
  • Efficient concept exploration: The pyramid enables the agent to explore related information similar to navigating a knowledge graph, but does not require ever generating or maintaining relationships in the graph. The agent can use natural language exclusively and keep track of important facts related to the concepts it’s exploring in a highly token-efficient and fluid way. 
  • Emergent dataset understanding: An unexpected benefit of this approach emerged during our testing. When asking questions like “what can you tell me about this dataset?” or “what types of questions can I ask?”, the system is able to respond and suggest productive search topics because it has a more robust understanding of the dataset context by accessing higher levels in the pyramid like the abstracts and recollections. 

Beyond the pyramid: Evaluation challenges & future directions

Challenges

While the results we’ve observed when using the pyramid search approach have been nothing short of amazing, finding ways to establish meaningful metrics to evaluate the entire system both at ingestion time and during information retrieval is challenging. Traditional RAG and Agent evaluation frameworks often fail to address nuanced questions and analytical responses where many different responses are valid.

Our team plans to write a research paper on this approach in the future, and we are open to any thoughts and feedback from the community, especially when it comes to evaluation metrics. Many of the existing datasets we found were focused on evaluating RAG use cases within one document or precise information retrieval across multiple documents rather than robust concept and theme analysis across documents and domains. 

The main use cases we are interested in relate to broader questions that are representative of how businesses actually want to interact with GenAI systems. For example, “tell me everything I need to know about customer X” or “how do the behaviors of Customer A and B differ? Which am I more likely to have a successful meeting with?”. These types of questions require a deep understanding of information across many sources. The answers to these questions typically require a person to synthesize data from multiple areas of the business and think critically about it. As a result, the answers to these questions are rarely written or saved anywhere which makes it impossible to simply store and retrieve them through a vector index in a typical RAG process. 

Another consideration is that many real-world use cases involve dynamic datasets where documents are consistently being added, edited, and deleted. This makes it difficult to evaluate and track what a “correct” response is since the answer will evolve as the available information changes. 

Future directions

In the future, we believe that the pyramid approach can address some of these challenges by enabling more effective processing of dense documents and storing learned information as recollections. However, tracking and evaluating the validity of the recollections over time will be critical to the system’s overall success and remains a key focus area for our ongoing work. 

When applying this approach to organizational data, the pyramid process could also be used to identify and assess discrepancies across areas of the business. For example, uploading all of a company’s sales pitch decks could surface where certain products or services are being positioned inconsistently. It could also be used to compare insights extracted from data across various lines of business to help understand if and where teams have developed conflicting understandings of topics or different priorities. This application goes beyond pure information retrieval use cases and would allow the pyramid to serve as an organizational alignment tool that helps identify divergences in messaging, terminology, and overall communication. 

Conclusion: Key takeaways and why the pyramid approach matters

The knowledge distillation pyramid approach is significant because it leverages the full power of the LLM at both ingestion and retrieval time. Our approach allows you to store dense information in fewer tokens, which has the added benefit of reducing noise in the dataset at inference. Our approach also runs very quickly and is incredibly token-efficient: we are able to generate responses within seconds, explore potentially hundreds of searches, and on average use <40K tokens for the entire search, retrieval, and response generation process (this includes all the search iterations!). 

We find that the LLM is much better at writing atomic insights as sentences and that these insights effectively distill information from both text-based and tabular data. This distilled information written in natural language is very easy for the LLM to understand and navigate at inference, since it does not have to expend unnecessary energy reasoning about and breaking down document formatting or filtering through noise.

The ability to retrieve and aggregate information at any level of the pyramid also provides significant flexibility to address a variety of query types. This approach offers promising performance for large datasets and enables high-value use cases that require nuanced information retrieval and analysis. 


Note: The opinions expressed in this article are solely my own and do not necessarily reflect the views or policies of my employer.

Interested in discussing further or collaborating? Reach out on LinkedIn!

OCR-Free Document Data Extraction with Transformers (2/2)

Donut versus Pix2Struct on custom data

Image by author

How well do these two transformer models understand documents? In this second part I will show you how to train them and compare their results for the task of key index extraction.

Finetuning Donut

So let’s pick up from part 1, where I explain how to prepare the custom data. I zipped the two folders of the dataset and uploaded them into a new Hugging Face dataset. The colab notebook I used can be found [here](https://github.com/Toon-nooT/notebooks/blob/main/Donut_vs_pix2struct_2_Ghega_donut.ipynb). It will download the dataset, set up the environment, load the Donut model and train it.

After finetuning for 75 minutes I stopped it when the validation metric (which is the edit distance) reached 0.116:

Image by author
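For context, the validation metric here is a normalized edit distance between the predicted output sequence and the ground truth. A minimal from-scratch version (not the exact code from the training notebook) could look like this:

def normalized_edit_distance(prediction: str, ground_truth: str) -> float:
    """Levenshtein distance divided by the longer string's length (0.0 means an exact match)."""
    m, n = len(prediction), len(ground_truth)
    if max(m, n) == 0:
        return 0.0
    # Standard dynamic-programming Levenshtein distance, computed one row at a time.
    previous = list(range(n + 1))
    for i in range(1, m + 1):
        current = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if prediction[i - 1] == ground_truth[j - 1] else 1
            current[j] = min(previous[j] + 1,         # deletion
                             current[j - 1] + 1,      # insertion
                             previous[j - 1] + cost)  # substitution
        previous = current
    return previous[n] / max(m, n)

For example, normalized_edit_distance("Dczcmbci", "December") returns 0.625, so the 0.116 reached here suggests the predicted sequences are, on average, already quite close to the ground truth.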

On field level I get these results for the validation set:

Image by author

When we look at Doctype, we see Donut always correctly identifies the docs as either a patent or a datasheet, so we can say that classification reaches 100% accuracy. Also note that even though we have a class datasheet, the document doesn’t need to contain this exact word to be classified as such. It does not matter to Donut, as it was finetuned to recognize it that way.

Other fields score quite OK as well, but it’s hard to say with this graph alone what goes on under the hood. I’d like to see where the model goes right and wrong in specific cases. So I created a routine in my notebook to generate an HTML-formatted report table. For every document in my validation set I have a row entry like this:

Image by author

On the left is the recognized (inferred) data together with its ground truth. On the right side is the image. I also used color codes to have a quick overview:

Image by author

So ideally, everything should be highlighted green. If you want to see the full report for the validation set, you can see it here, or download this zip file locally.

With this information we can spot usual OCR errors like Dczcmbci instead of December, or GL42O instead of GL420 (0’s and O’s are hard to distinguish), that lead to false positives.

Let’s focus now on the worst performing field: Voltage. Here are some samples of the inferred data, the ground truth and the actual relevant document snippet.

Image by author

The problem here is that the ground truth is mostly wrong. There is no standard for whether or not to include the unit (Volt or V). Sometimes irrelevant text is taken along, sometimes just a (wrong!) number. I can see now why Donut had a hard time with this.

Image by author

Above are some samples where Donut actually returns the best answer while the ground truth is incomplete or wrong.

Image by author

Above is another good example of bad training data confusing Donut. The ‘I’ letter in the ground truth is an artifact of the OCR reading the vertical line in front of the information. Sometimes it’s there, other times not. If you preprocess your data to be consistent in this regard, Donut will learn this and adhere to this structure.

Finetuning Pix2Struct

Donut’s results are holding up; will Pix2Struct’s as well? The colab notebook I used for the training can be found here.

After training for 75 minutes I got an edit distance score of 0.197 versus 0.116 for Donut. It is definitely slower at converging.

Another observation is that so far every value that is returned starts with a space. This could be an error in the ImageCaptioningDataset class, but I did not investigate further into the root cause. I do remove this space when generating the validation results though.

Prediction: <s_DocType> datasheet</s_DocType></s_DocType> TSZU52C2 – TSZUZUZC39<s_DocType>
    Answer: <s_DocType>datasheet</s_DocType><s_Model>Tszuszcz</s_Model><s_Voltage>O9</s_Voltage>

I stopped the finetuning process after 2 hours because the validation metric went up again:

Image by author

But what does that mean on field level for the validation set?

Image by author

That looks a lot worse than the results of Donut! If you want to see the full HTML report, you can see it here, or download this zip file locally.

Only the classification between a datasheet and a patent seems to be quite OK (but not as good as Donut). The other fields are just plain bad. Can we deduce what’s going on?

For the patent docs, I see lots of orange lines which mean that Pix2Struct did not return those fields at all.

Image by author

And even for patents where it returns fields, they are completely made up. Whereas Donut’s errors stem from picking information from another region of the document or from minor OCR mistakes, Pix2Struct is hallucinating here.

Disappointed by Pix2Struct’s performance, I tried several new training runs in hopes of better results:

Image by author

I tried gradually lowering the accumulate_grad_batches from 8 to 1. But then the learning rate is too high and overshoots. Lowering that to 1e-5 makes the model not converge. Other combinations lead to the model collapsing. Even if with some specific hyperparameters the validation metric looked quite OK, the model was giving a lot of incorrect or unparseable lines, like:

<s_DocType> datasheet</s_DocType><s_Model> CMPZSM</s_Model><s_StorageTemperature> -0.9</s_Voltage><s_StorageTemperature> -051c 150</s_StorageTemperature>

None of these attempts gave me substantially better results, so I left it at that.

That was until I saw that a cross-attention bug was fixed in the Hugging Face implementation, so I decided to give it one last try. I trained for two and a half hours and stopped at a validation metric of 0.1416.

Image by author

This definitely looks better than all previous runs. Looking at the HTML report, it now seems to hallucinate less. Overall it’s still performing worse than Donut.

As for reasons why, I have two theories. Firstly, Pix2Struct was mainly trained on HTML web page images (predicting what is behind masked image parts) and has trouble switching to another domain, namely raw text. Secondly, the dataset used was challenging. It contains many OCR errors and non-conformities (such as including units, length, minus signs). In my other experiments it really came to light that the quality and conformity of the dataset is more important than the quantity. In this dataset the data quality is really subpar. Maybe that is why I could not replicate the paper’s claim that Pix2Struct exceeds Donut’s performance.

Inference speed

How do the two models compare in terms of speed? All training runs were done on the same T4 architecture, so the times can be readily compared. We already saw that Pix2Struct takes much longer to converge. But what about inference times? We can compare the time it took to run inference on the validation set:

Image by author

Donut takes on average 1.3 seconds per document to extract, while Pix2Struct takes more than double that.

Takeaways

  • The clear winner for me is Donut. In terms of ease-of-use, performance, training stability and speed.
  • Pix2Struct is challenging to train because it is very sensitive to training hyperparameters. It converges slower and doesn’t reach the results of Donut on this dataset. It may prove worthwhile to revisit Pix2Struct with a high(er) quality dataset.
  • Because the Ghega dataset contains too many inconsistencies, I will refrain from using it in further experiments.

Are there any alternative models?

  • Dessurt, which seems to share a similar architecture with Donut, should perform in the same league.
  • DocParser, whose paper claims it performs even a little better. Unfortunately, there is no plan to release this model in the future.
  • mPLUG-DocOwl, yet another OCR-free LLM for document understanding with promising benchmarks, will soon be released.

You may also like:

Hands-on: document data extraction with 🍩 transformer

References:

Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding

OCR-free Document Understanding Transformer

GitHub – Toon-nooT/notebooks

How to Chat With Any File from PDFs to Images Using Large Language Models – With Code

Complete guide to building an AI assistant that can answer questions about any file


Introduction

So much valuable information is trapped in PDF and image files. Luckily, we have these powerful brains capable of processing those files to find specific information, which in fact is great.

But how many of us, deep inside, wouldn’t like to have a tool that can answer any question about a given document?

That is the whole purpose of this article. I will explain step-by-step how to build a system that can chat with any PDFs and image files.

If you prefer to watch video instead, check the link below:

General Workflow of the project

It’s always good to have a clear understanding of the main components of the system being built. So let’s get started.

End-to-end workflow of the overall chat system (Image by Author)
  • First, the user submits the document to be processed, which can be in PDF or image format.
  • A second module is used to detect the format of the file so that the relevant content extraction function is applied.
  • The content of the document is then split into multiple chunks using the Data Splitter module.
  • Those chunks are finally transformed into embeddings using the Chunk Transformer before they are stored in the vector store.
  • At the end of the process, the user’s query is used to find relevant chunks containing the answer to that query, and the result is returned as a JSON to the user.

1. Detect document type

For each input document, specific processing is applied depending on its type, whether it is a PDF or an image.

This can be achieved with the helper function detect_document_type combined with the guess function from the filetype Python package.

# guess comes from the filetype package (pip install filetype)
from filetype import guess

def detect_document_type(document_path):

    guess_file = guess(document_path)
    file_type = ""
    image_types = ['jpg', 'jpeg', 'png', 'gif']

    if(guess_file.extension.lower() == "pdf"):
        file_type = "pdf"

    elif(guess_file.extension.lower() in image_types):
        file_type = "image"

    else:
        file_type = "unkown"

    return file_type

Now we can test the function on two types of documents:

  • transformer_paper.pdf is the Transformers research paper from Arxiv.
  • zoumana_article_information.png is the image document containing information about the main topics I have covered on Medium.
research_paper_path = "./data/transformer_paper.pdf"
article_information_path = "./data/zoumana_article_information.png"

print(f"Research Paper Type: {detect_document_type(research_paper_path)}")
print(f"Article Information Document Type: {detect_document_type(article_information_path)}")

Output:

File types successfully detected (Image by Author)

Both file types are successfully detected by the detect_document_type function.

2. Extract content based on document type

The [langchain](https://python.langchain.com/docs/get_started/introduction.html) library provides different modules to extract the content of a given type of document.

  • UnstructuredImageLoader extracts image content.
  • UnstructuredFileLoader extracts the content of any PDF and TXT files.

We can combine these modules with the above detect_document_type function to implement the end-to-end text extraction logic within the extract_file_content function.

Let’s see them in action! 🔥

from langchain.document_loaders.image import UnstructuredImageLoader
from langchain.document_loaders import UnstructuredFileLoader

def extract_file_content(file_path):

    file_type = detect_document_type(file_path)

    if(file_type == "pdf"):
        loader = UnstructuredFileLoader(file_path)

    elif(file_type == "image"):
        loader = UnstructuredImageLoader(file_path)

    documents = loader.load()
    documents_content = '\n'.join(doc.page_content for doc in documents)

    return documents_content

Now, let’s print the first 400 characters of each file content.

research_paper_content = extract_file_content(research_paper_path)
article_information_content = extract_file_content(article_information_path)

nb_characters = 400

print(f"First {nb_characters} Characters of the Paper: n{research_paper_content[:nb_characters]} ...")
print("---"*5)
print(f"First {nb_characters} Characters of Article Information Document :n {research_paper_content[:nb_characters]} ...")

Output:

The first 400 characters of each of the above documents are shown below:

  • The research paper content starts with Provided proper attribution is provided and ends with Jacod Uszkoreit* Google Research usz@google.com.
  • The image document’s content starts with This document provides a quick summary and ends with Data Science section covers basic to advance concepts.
First 400 characters of the Transformers paper and the Article Information document (Image by Author)

3. Chat Implementation

The input document is broken into chunks, then an embedding is created for each chunk before implementing the question-answering logic.

a. Document chunking

The chunks represent smaller segments of a larger piece of text. This process is essential to ensure that a piece of content is represented with as little noise as possible, making it semantically relevant.

Multiple chunking strategies can be applied. For instance, we have the NLTKTextSplitter, SpacyTextSplitter, RecursiveCharacterTextSplitter, CharacterTextSplitter, and more.

Each one of these strategies has its own pros and cons.

The main focus of this article is on the CharacterTextSplitter, which creates chunks from the input documents based on the "\n\n" separator and measures each chunk’s length (length_function) by its number of characters.

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator = "\n\n",
    chunk_size = 1000,
    chunk_overlap  = 200,
    length_function = len,
)

The chunk_size specifies that we want a maximum of 1000 characters in each chunk; a smaller value will result in more chunks, while a larger one will generate fewer chunks.

It is important to note that the way the chunk_size is chosen can affect the overall result. So, a good approach is to try different values and choose the one that better fits one’s use case.

Also, the chunk_overlap means that we want a maximum of 200 overlapping characters between consecutive chunks.

For instance, imagine that we have a document containing the text Chat with your documents using LLMs and want to apply the chunking using the Chunk Size = 10 and Chunk overlap = 5.

The process is explained in the image below:

Document chunking illustration (Image by Author)

We can see that we ended up with a total of 7 chunks for an input document of 35 characters (spaces included).
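To see where the 7 comes from, here is a tiny character-level sliding window. It is a simplification (CharacterTextSplitter actually splits on the separator first and then merges pieces up to chunk_size), but the counting works out the same for this example:

text = "Chat with your documents using LLMs"   # 35 characters, spaces included
chunk_size, chunk_overlap = 10, 5
step = chunk_size - chunk_overlap               # move forward 5 characters each time

chunks = [text[start:start + chunk_size] for start in range(0, len(text), step)]
print(len(chunks))
# 7
print(chunks)
# ['Chat with ', 'with your ', 'your docum', 'documents ', 'ents using', 'using LLMs', ' LLMs']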

But, why do we use these overlaps in the first place?

By including these overlaps, the CharacterTextSplitter ensures that the underlying context is maintained between the chunks, which is especially useful when working with long documents.

Similarly to the chunk_size, there is no fixed value for chunk_overlap. Different values need to be tested to choose the one with the best results.

Now, let’s see their application in our scenario:

research_paper_chunks = text_splitter.split_text(research_paper_content)
article_information_chunks = text_splitter.split_text(article_information_content)

print(f"# Chunks in Research Paper: {len(research_paper_chunks)}")
print(f"# Chunks in Article Document: {len(article_information_chunks)}")

Output:

Number of chunks in each document (Image by Author)

For a larger document like the research paper, we have a lot more chunks (51) compared to the one-page article document, which is only 2.

b. Create embeddings of the chunks

We can use the OpenAIEmbeddings module, which uses the text-embedding-ada-002 model by default, to create the embeddings of the chunks.

Instead of using text-embedding-ada-002, we can use a different model (e.g. gpt-3.5-turbo-0301) by changing the following parameters:

  • model = "gpt-3.5-turbo-0301"
  • deployment = "<DEPLOYMENT-NAME> " which corresponds to the name given during the deployment of the model. The default value is also text-embedding-ada-002

For simplicity’s sake, we will stick to using the default parameters’ value in this tutorial. But before that, we need to acquire the OpenAI credentials, and all the steps are provided in the following article.

from langchain.embeddings.openai import OpenAIEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "<YOUR_KEY>"
embeddings = OpenAIEmbeddings()

c. Create document search

To get the answer to a given query, we need to create a vector store that finds the closest matching chunk to that query.

Such a vector store can be created using the from_texts function from the FAISS module. The function takes two main parameters: text_splitter (the list of chunks) and embeddings, which are both defined previously.

from langchain.vectorstores import FAISS

def get_doc_search(text_splitter):

    return FAISS.from_texts(text_splitter, embeddings)

By running get_doc_search on the research paper chunks, we can see that the result is a vector store. The result would have been the same type if we had used the article_information_chunks.

doc_search_paper = get_doc_search(research_paper_chunks)
print(doc_search_paper)

Output:

Vector store of the research paper (Image by Author)

d. Start chatting with your documents

Congrats on making it that far! 🎉

The chat_with_file function is used to implement the end-to-end logic of the chat by combining all the above functions, along with the similarity_search function.

The final function takes two parameters:

  • The file we want to chat with, and
  • The query provided by the user
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(OpenAI(), chain_type = "map_rerank",  
                      return_intermediate_steps=True)

def chat_with_file(file_path, query):

    file_content = extract_file_content(file_path)
    # split the extracted content into chunks (avoid shadowing the global text_splitter)
    file_chunks = text_splitter.split_text(file_content)

    document_search = get_doc_search(file_chunks)
    documents = document_search.similarity_search(query)

    results = chain({
                        "input_documents":documents, 
                        "question": query
                    }, 
                    return_only_outputs=True)
    answers = results['intermediate_steps'][0]

    return answers

Let’s take a step back to properly understand what is happening in the above code block.

  • The load_qa_chain provides an interface to perform question-answering over a set of documents. In this specific case, we are using the default OpenAI GPT-3 large language model.
  • The chain_type is map_rerank. By doing so, the load_qa_chain function returns the answers based on a confidence score given by the chain. There are other chain_type values that can be used, such as map_reduce, stuff, refine, and more. Each one has its own pros and cons.
  • By setting return_intermediate_steps=True , we can access the metadata such as the above confidence score.

Its output is a dictionary of two keys: the answer to the query, and the confidence score.

We can finally chat with our files, starting with the image document:

  • Chat with the image document

To chat with the image document, we provide the path to the document, and the question we want the model to answer.

query = "What is the document about"

results = chat_with_file(article_information_path, query)

answer = results["answer"]
confidence_score = results["score"]

print(f"Answer: {answer}nnConfidence Score: {confidence_score}")

Output:

Result of a query on the image document (Image by Author)

The model is 100% confident in its response. By looking at the first paragraph of the original document below, we can see that the model response is indeed correct.

First two paragraphs of the original article image document (Image by Author)

One of the most interesting parts is that it provided a brief summary of the main topics covered in the document (statistics, model evaluation metrics, SQL queries, etc.).

  • Chat with the PDF file

The process with the PDF file is similar to the one in the above section.

query = "Why is the self-attention approach used in this document?"

results = chat_with_file(research_paper_path, query)

answer = results["answer"]
confidence_score = results["score"]

print(f"Answer: {answer}nnConfidence Score: {confidence_score}")

Output:

Once again we are getting a 100% confidence score from the model. The answer to the question looks pretty correct!

Result of a query on the PDF document (Image by Author)

In both cases, the model was able to provide a human-like response in a few seconds. Making a human go through the same process would take minutes, even hours depending on the length of the document.

Conclusion

Congratulations!!!🎉

I hope this article provided enough tools to help you take your knowledge to the next level. The code is available on my GitHub.

In my next article, I will explain how to integrate this system into a nice user interface. Stay tuned!

Also, If you enjoy reading my stories and wish to support my writing, consider becoming a Medium member. It’s $5 a month, giving you unlimited access to thousands of Python guides and Data science articles.

By signing up using my link, I will earn a small commission at no extra cost to you.

Join Medium with my referral link – Zoumana Keita

Feel free to follow me on Twitter, and YouTube, or say Hi on LinkedIn.

Let’s connect here for a 1–1 discussion

Before you leave, there are more great resources below you might be interested in reading!

Introduction to Text Embeddings with the OpenAI API

How to Extract Text from Any PDF and Image for Large Language Model
