{ "metadata": { "kernelspec": { "display_name": "Python (Pyodide)", "language": "python", "name": "python" }, "language_info": { "codemirror_mode": { "name": "python", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8" } }, "nbformat_minor": 4, "nbformat": 4, "cells": [ { "cell_type": "markdown", "source": "# Semantic Search\n\nThe Semantic search API helps you build AI systems that use data from documents. With the API, you can search for texts with similar semantic meaning, unlike the search where you search for keywords, and find relevant texts written in other languages.\n\nThe Semantic search API calculates Vector Embeddings for all PDF documents uploaded to Cognite Data Fusion (CDF). You can read more about [embeddings](https://platform.openai.com/docs/guides/embeddings) and [AIs multitool vector embeddings](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings).\n\nVector embeddings are created as part of the built-in Retrieval Augmented Generation (RAG) pipeline. All PDF documents you upload to CDF are parsed and OCRed, and the extracted text is divided into suitably sized passages which are indexed into a vector store.\n\nThis notebook shows you how to upload a PDF to CDF and use the Semantic search API to find relevant passages for a user question.", "metadata": {} }, { "cell_type": "code", "source": "from cognite.client import CogniteClient\n\n# Instantiate Cognite SDK client:\nclient = CogniteClient()", "metadata": { "trusted": true }, "outputs": [], "execution_count": null }, { "cell_type": "markdown", "source": "## Step 1. Upload PDF\n\nYou can upload a PDF file to CDF one of the following ways:\n\n* Go to **_CDF_** > **_Industrial tools_** > **_Canvas_** and drag your PDF file to the canvas or upload existing files by selecting **_+ Add data_**. 
\n If you don't have a good file to upload, try this [test file](./well_report.pdf).\n\n* Go to **_CDF_** > **_Industrial tools_** > **_Data explorer_** > **_Files_** and select **_Upload_**.\n\n* Use the Python SDK:", "metadata": {} }, { "cell_type": "code", "source": "# Upload the file and keep its id for the next steps:\nresponse1 = client.files.upload(path=\"./well_report.pdf\")\ndocument_id = response1.id\nprint(document_id)", "metadata": { "trusted": true }, "outputs": [], "execution_count": null }, { "cell_type": "markdown", "source": "## Step 2. Processing\n\nOnce you've uploaded the file, wait for it to pass through the RAG pipeline. You can use the Document status API to poll the status.", "metadata": {} }, { "cell_type": "code", "source": "import time\n\nstatus_path = f\"/api/v1/projects/{client.config.project}/documents/status\"\n\nbody = {\n \"items\": [\n {\n \"id\": document_id\n }\n ]\n}\n\n# Poll until the passage index is no longer waiting or running:\nwhile True:\n response2 = client.post(status_path, json=body, headers={\"cdf-version\": \"beta\"}).json()\n\n status = response2[\"items\"][0][\"passages\"][\"status\"]\n print(f\"status: {status}\")\n\n if status in {\"waiting\", \"running\"}:\n time.sleep(5)\n continue\n\n break", "metadata": { "trusted": true }, "outputs": [], "execution_count": null }, { "cell_type": "markdown", "source": "## Step 3. 
Search\n\nOnce the document is fully indexed, search for the relevant passages with the Python code below.", "metadata": {} }, { "cell_type": "code", "source": "import json\n\nsearch_path = f\"/api/v1/projects/{client.config.project}/documents/passages/search\"\n\n# Combine an exact filter on the document id with a semantic search on the passage content:\nbody = {\n \"limit\": 3,\n \"filter\": {\n \"and\": [\n {\n \"equals\": {\n \"property\": [\"document\", \"id\"],\n \"value\": document_id\n }\n },\n {\n \"semanticSearch\": {\n \"property\": [\"content\"],\n \"value\": \"Where is the Volve field located?\"\n }\n }\n ]\n }\n}\n\nresponse3 = client.post(search_path, json=body).json()\n\nprint(json.dumps(response3, indent=2))", "metadata": { "trusted": true }, "outputs": [], "execution_count": null }, { "cell_type": "code", "source": "", "metadata": { "trusted": true }, "outputs": [], "execution_count": null } ] }