This notebook guides you step by step through using Qdrant
as a vector database for OpenAI embeddings. Qdrant is a high-performance vector search database written in Rust. It offers RESTful and gRPC APIs for managing your embeddings, and there is an official Python qdrant-client library that eases integration with your apps.
This notebook presents an end-to-end process of:
- Using precomputed embeddings created by OpenAI API.
- Storing the embeddings in a local instance of Qdrant.
- Converting raw text query to an embedding with OpenAI API.
- Using Qdrant to perform the nearest neighbour search in the created collection.
What is Qdrant
Qdrant is an open-source vector database that stores neural embeddings along with metadata, a.k.a. the payload. A payload is not only useful for keeping additional attributes of a particular point, but can also be used for filtering. Qdrant offers a unique filtering mechanism that is built into the vector search phase itself, which makes it really efficient.
Deployment options
Qdrant can be launched in various ways; depending on the target load, it might be hosted:
- Locally or on premise, with Docker containers
- On Kubernetes cluster, with the Helm chart
- Using Qdrant Cloud
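For the local Docker option, a typical way to start an instance looks like this (assumes Docker is installed; ports follow Qdrant's defaults):

```shell
# Pull and run the official Qdrant image.
# 6333 is the REST API port, 6334 the gRPC port.
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
```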
Integration
Qdrant provides both RESTful and gRPC APIs, which makes integration easy no matter which programming language you use. There are also official clients for the most popular languages; if you use Python, the official Python qdrant-client library is likely the best choice.