This notebook guides you step by step through using Qdrant as a vector database for OpenAI embeddings. Qdrant is a high-performance vector search database written in Rust. It offers RESTful and gRPC APIs to manage your embeddings. There is an official Python qdrant-client package that eases integration with your apps.
This notebook presents an end-to-end process of:
Using precomputed embeddings created by OpenAI API.
Storing the embeddings in a local instance of Qdrant.
Converting raw text query to an embedding with OpenAI API.
Using Qdrant to perform the nearest neighbour search in the created collection.
Qdrant is an open-source vector database that allows storing neural embeddings along with metadata, a.k.a. the payload. Payloads are not only useful for keeping additional attributes of a particular point, but can also be used for filtering. Qdrant offers a unique filtering mechanism that is built into the vector search phase, which makes it really efficient.
Qdrant provides both RESTful and gRPC APIs, which makes integration easy no matter the programming language you use. There are also official clients for the most popular languages, and if you use Python then the Python Qdrant client library might be the best choice.
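As a small illustration of the payload filtering mentioned above, a search restricted by a payload field in the Python client might look like the sketch below. The function, collection name, and payload field here are illustrative only; the actual collection used in this notebook is created later.

from qdrant_client import QdrantClient
from qdrant_client.http import models as rest

def filtered_search(client: QdrantClient, query_vector, title: str):
    # The payload filter is applied during the vector search itself,
    # not as a post-processing step on the returned candidates
    return client.search(
        collection_name="Articles",
        query_vector=("content", query_vector),
        query_filter=rest.Filter(
            must=[rest.FieldCondition(key="title", match=rest.MatchValue(value=title))]
        ),
        limit=5,
    )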
We're going to use a local Qdrant instance running in a Docker container. The easiest way to launch it is to use the attached [docker-compose.yaml] file and run the following command:
! docker-compose up -d
qdrant_qdrant_1 is up-to-date
We can verify that the server was launched successfully by running a simple curl command:
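Assuming the docker-compose setup exposes Qdrant's default REST port 6333, checking the root endpoint should return the name and version of the running server:

! curl http://localhost:6333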
This notebook obviously requires the openai and qdrant-client packages, but there are a few additional libraries we will also use. The following command installs them all:
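(The package list below is an inferred sketch, based on the imports used later in this notebook: pandas and wget in addition to openai and qdrant-client.)

! pip install openai qdrant-client pandas wget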
Once you get your key, please add it to your environment variables as OPENAI_API_KEY by running the following command:
! export OPENAI_API_KEY="your API key"
# Test that your OpenAI API key is correctly set as an environment variable
# Note: if you run this notebook locally, you will need to reload your terminal and the notebook for the env variables to be live.
import os

# Note: alternatively you can set a temporary env variable like this:
# os.environ["OPENAI_API_KEY"] = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

if os.getenv("OPENAI_API_KEY") is not None:
    print("OPENAI_API_KEY is ready")
else:
    print("OPENAI_API_KEY environment variable not found")
In this section we are going to load the data prepared in advance, so you don't have to recompute the embeddings of Wikipedia articles with your own credits.
import wget

embeddings_url = "https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip"

# The file is ~700 MB so this will take some time
wget.download(embeddings_url)
The downloaded file then has to be extracted:
import zipfile

with zipfile.ZipFile("vector_database_wikipedia_articles_embedded.zip", "r") as zip_ref:
    zip_ref.extractall("../data")
And we can finally load it from the provided CSV file:
import pandas as pd
from ast import literal_eval

article_df = pd.read_csv('../data/vector_database_wikipedia_articles_embedded.csv')

# Read vectors from strings back into a list
article_df["title_vector"] = article_df.title_vector.apply(literal_eval)
article_df["content_vector"] = article_df.content_vector.apply(literal_eval)

article_df.head()
Qdrant stores data in collections, where each object is described by at least one vector and may contain additional metadata called a payload. Our collection will be called Articles and each object will be described by both title and content vectors. Qdrant does not require you to set up any kind of schema beforehand, so you can freely add points to the collection after a simple setup.
We will start by creating the collection, and then we will fill it with our precomputed embeddings.
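The client object used in the snippets below is connected to the running instance; a minimal sketch of creating it, assuming the local Docker container exposes the default REST port 6333:

import qdrant_client

# Connect to the local Qdrant instance launched with docker-compose
client = qdrant_client.QdrantClient(host="localhost", port=6333)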
from qdrant_client.http import models as rest

vector_size = len(article_df["content_vector"][0])

client.recreate_collection(
    collection_name="Articles",
    vectors_config={
        "title": rest.VectorParams(
            distance=rest.Distance.COSINE,
            size=vector_size,
        ),
        "content": rest.VectorParams(
            distance=rest.Distance.COSINE,
            size=vector_size,
        ),
    }
)
True
client.upsert(
    collection_name="Articles",
    points=[
        rest.PointStruct(
            id=k,
            vector={
                "title": v["title_vector"],
                "content": v["content_vector"],
            },
            payload=v.to_dict(),
        )
        for k, v in article_df.iterrows()
    ],
)
Once the data is in Qdrant we can start querying the collection for the closest vectors. We may provide an additional parameter vector_name to switch from title-based to content-based search. Since the precomputed embeddings were created with the text-embedding-3-small OpenAI model, we also have to use it during search.
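The query_qdrant helper used below embeds the raw text query with OpenAI and searches the chosen named vector in Qdrant. The following is only a minimal sketch of such a helper; the top_k parameter and the openai_client name are illustrative:

from openai import OpenAI

openai_client = OpenAI()

def query_qdrant(query, collection_name, vector_name="title", top_k=20):
    # Embed the raw text query with the same model used for the precomputed embeddings
    embedded_query = openai_client.embeddings.create(
        input=query,
        model="text-embedding-3-small",
    ).data[0].embedding

    # Search against the chosen named vector ("title" or "content")
    return client.search(
        collection_name=collection_name,
        query_vector=(vector_name, embedded_query),
        limit=top_k,
    )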
query_results = query_qdrant("modern art in Europe", "Articles")
for i, article in enumerate(query_results):
    print(f"{i + 1}. {article.payload['title']} (Score: {round(article.score, 3)})")
1. Museum of Modern Art (Score: 0.875)
2. Western Europe (Score: 0.868)
3. Renaissance art (Score: 0.864)
4. Pop art (Score: 0.86)
5. Northern Europe (Score: 0.855)
6. Hellenistic art (Score: 0.853)
7. Modernist literature (Score: 0.847)
8. Art film (Score: 0.843)
9. Central Europe (Score: 0.843)
10. European (Score: 0.841)
11. Art (Score: 0.841)
12. Byzantine art (Score: 0.841)
13. Postmodernism (Score: 0.84)
14. Eastern Europe (Score: 0.839)
15. Europe (Score: 0.839)
16. Cubism (Score: 0.839)
17. Impressionism (Score: 0.838)
18. Bauhaus (Score: 0.838)
19. Surrealism (Score: 0.837)
20. Expressionism (Score: 0.837)
# This time we'll query using content vector
query_results = query_qdrant("Famous battles in Scottish history", "Articles", "content")
for i, article in enumerate(query_results):
    print(f"{i + 1}. {article.payload['title']} (Score: {round(article.score, 3)})")
1. Battle of Bannockburn (Score: 0.869)
2. Wars of Scottish Independence (Score: 0.861)
3. 1651 (Score: 0.853)
4. First War of Scottish Independence (Score: 0.85)
5. Robert I of Scotland (Score: 0.846)
6. 841 (Score: 0.844)
7. 1716 (Score: 0.844)
8. 1314 (Score: 0.837)
9. 1263 (Score: 0.836)
10. William Wallace (Score: 0.835)
11. Stirling (Score: 0.831)
12. 1306 (Score: 0.831)
13. 1746 (Score: 0.83)
14. 1040s (Score: 0.828)
15. 1106 (Score: 0.827)
16. 1304 (Score: 0.827)
17. David II of Scotland (Score: 0.825)
18. Braveheart (Score: 0.824)
19. 1124 (Score: 0.824)
20. July 27 (Score: 0.823)