Neon is Serverless Postgres built for the cloud. It separates compute from storage to offer modern developer features such as autoscaling, database branching, scale-to-zero, and more.
Neon supports vector search using the pgvector open-source PostgreSQL extension, which enables Postgres as a vector database for storing and querying embeddings.
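As a sketch of what this looks like in practice, enabling pgvector and creating a table with a vector column takes a couple of SQL statements. The table name `documents` and the 1536-dimension column (the size of OpenAI `text-embedding-ada-002` vectors) are illustrative assumptions, not taken from the notebook:

```python
# Hypothetical setup SQL for storing OpenAI embeddings in Neon with pgvector.
# The "documents" table and the 1536 dimension are illustrative assumptions.
EMBEDDING_DIM = 1536

SETUP_SQL = f"""
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS documents (
    id serial PRIMARY KEY,
    content text,
    embedding vector({EMBEDDING_DIM})
);
"""

# pgvector's <=> operator computes cosine distance, so ordering by it
# returns the closest embeddings first.
SEARCH_SQL = """
SELECT content, embedding <=> %s::vector AS distance
FROM documents
ORDER BY distance
LIMIT %s;
"""
```

These statements can be run against a Neon database with any Postgres client, such as `psycopg2`.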
Check out the notebook in this repo for working with Neon Serverless Postgres as your vector database.
In this notebook you will learn how to:
- Use embeddings created with the OpenAI API
- Store embeddings in a Neon Serverless Postgres database
- Convert a raw text query to an embedding with the OpenAI API
- Use Neon with the pgvector extension to perform vector similarity search
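The similarity search in the last step relies on pgvector's `<=>` operator, which computes cosine distance. This pure-Python sketch shows the equivalent computation on toy 3-dimensional vectors (real OpenAI embeddings have 1536 dimensions), with no database or API key needed:

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two vectors, as pgvector's <=> operator computes it."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query, rows, k=3):
    """Return the k rows whose embeddings are closest to the query embedding."""
    return sorted(rows, key=lambda row: cosine_distance(query, row["embedding"]))[:k]

# Toy "embeddings" standing in for OpenAI API output:
docs = [
    {"content": "postgres", "embedding": [1.0, 0.0, 0.0]},
    {"content": "vectors", "embedding": [0.0, 1.0, 0.0]},
    {"content": "database", "embedding": [0.9, 0.1, 0.0]},
]
top_matches = nearest([1.0, 0.0, 0.0], docs, k=2)
```

In the notebook, the same ordering is performed inside Postgres with `ORDER BY embedding <=> query_embedding`, which lets pgvector use an index rather than scanning every row in Python.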
Neon enables you to scale your AI applications with the following features:
- Autoscaling: If your AI application experiences heavy load during certain hours of the day, Neon can automatically scale compute resources without manual intervention. During periods of inactivity, Neon can scale to zero.
- Instant read replicas: Neon supports instant read replicas, which are independent read-only compute instances that operate on the same data as your read-write compute. With read replicas, you can offload reads from your read-write compute to a dedicated read-only compute for your AI application.
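Each Neon read replica is reached through its own connection string. A minimal sketch of offloading reads, where the URLs and the routing heuristic are illustrative assumptions rather than part of the notebook:

```python
# Placeholder connection strings; Neon provides a distinct connection
# string for the primary and for each read replica.
PRIMARY_URL = "postgres://user:pass@ep-primary.neon.tech/neondb"
REPLICA_URL = "postgres://user:pass@ep-replica.neon.tech/neondb"

def connection_url(statement: str) -> str:
    """Route read-only statements to the replica, everything else to the primary.

    A simple heuristic for illustration: treat SELECT and WITH statements
    as reads. A real application would decide this at a higher level.
    """
    is_read = statement.lstrip().lower().startswith(("select", "with"))
    return REPLICA_URL if is_read else PRIMARY_URL
```

With this in place, similarity-search queries from the AI application connect via `connection_url(query)` and land on the read replica, while inserts of new embeddings go to the primary.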
More examples of AI applications built with Neon:
- Build an AI-powered semantic search application - Submit a startup idea and get a list of similar ideas that Y Combinator has invested in before
- Build an AI-powered chatbot - A Postgres Q&A chatbot that uses Postgres as a vector database
- Vercel Postgres pgvector Starter - Vector similarity search with Vercel Postgres (powered by Neon)