13 Best Vector Databases For AI Apps (Comparisons, Reviews, Demos, and Limitations)

JD Prater

August 1, 2023


Information today comes in infinite forms - from text and images to audio and video. This diversity once made connecting insights across data daunting.

But Modern AI is changing the game. Innovations like embedding models work magic under the hood, transforming data into vectors that capture meaning.

Suddenly your data speaks a universal language. Documents, logs, songs, images - they become encoded points in a vector space. Now surface insights by finding clustered vectors.

Vector search unlocks extraordinary experiences.

  • Discover similar products through images.
  • Identify customer needs in behavior data.
  • Power semantic search.

These breakthroughs are fueled by vector databases, the engines behind vector search. They efficiently store and process billions of vectors for blazing fast similarity matching and AI-driven insights.

In this post, I'll explore the capabilities of these pivotal Modern AI components, explain how vector databases uniquely unlock value from embeddings, and spotlight the top solutions leading the way.

Enter the World of Vector Databases

What exactly is a vector database, you ask? Think of vector databases as the masterminds fueling modern AI applications. They expertly handle a unique kind of data, termed 'vector embeddings,' akin to multi-dimensional fingerprints of information. These fingerprints, crafted by powerhouse AI models like Large Language Models (LLMs), typically span hundreds or thousands of dimensions, making them difficult to manage with conventional tools.

The vector database landscape (source: Aishwarya Naresh Reganti)

How do vector databases work?

This is where vector databases come in, acting as the rapid-response memory banks for these embeddings. They offer specialized storage and speedy look-up capabilities, ensuring efficient access to and comprehension of this data.

Transforming data into vectors

Here's the thing: traditional databases, adept at handling simpler, flat data, falter when confronted with the complexity and sheer scale of vector embeddings. Vector databases, on the other hand, are purpose-built for this challenge. They're designed to manage these high-dimensional fingerprints, delivering the speed, scalability, and adaptability needed to truly exploit your data.

Equipped with a vector database, you can supercharge your AI capabilities, enabling applications to unravel deeper meaning in data (hello, semantic information retrieval!) and retain long-term memory, driving them to new heights of sophistication. In a nutshell, vector databases are a critical cog in the Modern AI ecosystem, where managing high-dimensional, context-rich data is the new normal.
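To make that concrete, here is a minimal, illustrative sketch of the core idea: represent items as vectors and answer questions by finding the nearest ones. The documents, numbers, and brute-force loop below are toy placeholders; a real vector database swaps the loop for approximate nearest neighbor (ANN) indexes so the same lookup stays fast across billions of embeddings.

```python
import numpy as np

# Toy "embeddings" - in practice these come from an embedding model.
documents = ["refund policy", "shipping times", "reset my password"]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.1, 0.0, 0.9],
])

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means pointing the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend this is the embedding of the query "where is my refund?"
query = np.array([0.85, 0.15, 0.05])

# Brute-force nearest-neighbor search; vector databases replace this loop
# with ANN indexes (HNSW, IVF, etc.) to stay fast at scale.
scores = [cosine_similarity(query, e) for e in embeddings]
print(documents[int(np.argmax(scores))])  # -> "refund policy"
```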

The Rising Popularity of Vector Databases

The rising prominence of vector databases is closely linked to the rapid ascent of large-scale generative AI models.

This surge can be tied to three pivotal factors:

  1. AI models' appetite for complex, extensive data has led to a data explosion. Vector databases step in here, demonstrating their fundamental role in efficiently managing this vast data influx.
  2. AI models often generate intricate text that demands advanced similarity searches and matching. This is where traditional search methods falter, but vector databases rise to the occasion, providing unrivaled relevance and accuracy.
  3. The ability of AI models to handle multiple modalities of data, including text, images, and speech, emphasizes the need for robust systems like vector databases. Their proficiency in storing, indexing, and querying diverse data types bolsters their versatility.

In essence, the evolution of vector databases mirrors the progression of large-scale foundation models like GPT-4, RoBERTa, and LaMDA. As AI innovation continues to march forward, vector databases are set to play an increasingly vital role in harnessing the full power of Modern AI.

The Best Vector Databases in 2024 (Updated Feb 2024)

Let's explore the details of 13 vector databases - commercial and open source - that are currently shaping the AI landscape.

The landscape of vector databases

1) Milvus

Milvus is a highly flexible, reliable, and blazing-fast cloud-native, open-source vector database. It powers embedding similarity search and AI applications, and strives to make vector databases accessible to every organization. Milvus can store, index, and manage a billion-plus embedding vectors generated by deep neural networks and other machine learning (ML) models. This level of scale is vital for handling the volumes of unstructured data organizations generate, helping them analyze and act on that data to provide better service, reduce fraud, avoid downtime, and make decisions faster.

Milvus is a graduated-stage project of the LF AI & Data Foundation.
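As a rough illustration, here is what the basic insert-and-search loop looks like with the pymilvus client. The connection URI, collection name, and vectors are placeholders, and the client API has evolved across Milvus versions, so treat this as a sketch rather than canonical usage.

```python
from pymilvus import MilvusClient

# Assumes a Milvus instance is reachable at the default local address.
client = MilvusClient(uri="http://localhost:19530")

# Create a simple collection that stores 4-dimensional vectors.
client.create_collection(collection_name="demo_docs", dimension=4)

# Insert a few embeddings along with their source text.
client.insert(
    collection_name="demo_docs",
    data=[
        {"id": 0, "vector": [0.1, 0.2, 0.3, 0.4], "text": "refund policy"},
        {"id": 1, "vector": [0.9, 0.1, 0.0, 0.2], "text": "shipping times"},
    ],
)

# Find the vectors closest to a query embedding.
results = client.search(
    collection_name="demo_docs",
    data=[[0.1, 0.2, 0.3, 0.4]],
    limit=2,
    output_fields=["text"],
)
print(results)
```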

GitHub Stars: 25k

Architecture

Milvus Architecture

Review

“With its efficient and scalable architecture, Milvus can easily handle large-scale data sets, providing fast and accurate results even on complex queries.” - Morgen Z.

2) Qdrant

Qdrant is an open-source vector search database. It deploys as an API service providing search over the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more.
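For a sense of what that looks like in practice, here is a minimal sketch using the qdrant-client Python package with an in-memory instance; the collection name, vectors, and payload fields are made up for illustration, and details may differ slightly between client versions.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

# ":memory:" runs an ephemeral local instance; use a URL for a real deployment.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="demo_docs",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Each point carries a vector plus an arbitrary JSON payload for filtering.
client.upsert(
    collection_name="demo_docs",
    points=[
        PointStruct(id=1, vector=[0.1, 0.9, 0.4, 0.2], payload={"text": "refund policy"}),
        PointStruct(id=2, vector=[0.8, 0.1, 0.1, 0.6], payload={"text": "shipping times"}),
    ],
)

# Return the points nearest to a query embedding.
hits = client.search(collection_name="demo_docs", query_vector=[0.1, 0.8, 0.5, 0.2], limit=2)
print(hits)
```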

GitHub Stars: 16k

Architecture

Qdrant architecture

Review

“Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso

3) Pinecone

Pinecone is a fully managed vector database that makes it easy to add vector search to production applications. It combines state-of-the-art vector search libraries, advanced features such as filtering, and distributed infrastructure to provide high performance and reliability at any scale. No more hassles of benchmarking and tuning algorithms or building and maintaining infrastructure for vector search.
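As a rough sketch of the developer experience, the snippet below follows the pattern of Pinecone's v3-style Python client; the API key, index name, and vectors are placeholders, and you would create the index (with its dimension and metric) in the console or via the client before running this.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")        # placeholder credentials
index = pc.Index("demo-index")               # assumes this index already exists

# Upsert embeddings with optional metadata for filtering.
index.upsert(vectors=[
    {"id": "doc-1", "values": [0.1, 0.2, 0.3, 0.4], "metadata": {"topic": "billing"}},
    {"id": "doc-2", "values": [0.9, 0.1, 0.0, 0.2], "metadata": {"topic": "shipping"}},
])

# Query for the nearest neighbors of a query embedding.
results = index.query(vector=[0.1, 0.2, 0.3, 0.4], top_k=2, include_metadata=True)
print(results)
```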

GitHub Stars: N/A

Architecture

Pinecone architecture diagram

Review

“Thanks to the Pinecone vector database we can run our high-performance applications across 10+ billion records without breaking a sweat.” Ohad Parush, Chief R&D Officer at Gong

4) Supabase

Supabase is a managed PostgreSQL solution that supports storing embeddings via the pgvector extension.

Supabase is an open-source Firebase alternative from the company of the same name, based in Singapore. Every Supabase project is a dedicated PostgreSQL database. Supabase also provides an open-source object store with unlimited scalability for any file type, plus open-source authentication: every Supabase project comes with a complete user management system that works without any additional tools.
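Because Supabase is Postgres under the hood, embeddings can be stored and queried with plain SQL once pgvector is enabled. The sketch below uses psycopg2 with a placeholder connection string and a made-up table; Supabase also offers its own client libraries, but the underlying SQL is the same.

```python
import psycopg2

# Placeholder connection string for a Supabase Postgres database.
conn = psycopg2.connect("postgresql://postgres:password@db.example.supabase.co:5432/postgres")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        content text,
        embedding vector(3)      -- 3 dimensions just for the example
    );
""")
cur.execute(
    "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
    ("refund policy", "[0.9, 0.1, 0.0]"),
)

# "<->" is pgvector's distance operator; smallest distance = closest match.
cur.execute(
    "SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
    ("[0.85, 0.15, 0.05]",),
)
print(cur.fetchall())
conn.commit()
```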

GitHub Stars: 63k

Architecture

Supabase Architecture

Review

“Supabase is incredibly generous in their pricing, offering an amazing suite of tools to enthusiasts and hobbyists, without the fear of surprise bills at the end of the month.” Josh C. Developer

5) Weaviate

Weaviate is an open-source vector database used to store data objects and vector embeddings from ML models and scale into billions of data objects, from the company of the same name in Amsterdam. Users can index billions of data objects to search through, and combine multiple search techniques, such as keyword-based and vector search, to provide richer search experiences.

For hosted Weaviate, users start for free and pay for the vector dimensions stored and queried. Upgrading to one of Weaviate's unlimited-capacity plans starts at $0.05 per 1 million vector dimensions and scales as needs grow.
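As an illustrative sketch, here is the shape of the workflow with the v3-style weaviate-client Python package, bringing your own vectors; the class name, vectors, and local URL are placeholders, and the newer v4 client exposes a rather different API.

```python
import weaviate

# Assumes a local Weaviate instance; hosted Weaviate uses auth plus a cluster URL.
client = weaviate.Client("http://localhost:8080")

# A class with no built-in vectorizer: we supply our own embeddings.
client.schema.create_class({"class": "Document", "vectorizer": "none"})

client.data_object.create(
    data_object={"text": "refund policy"},
    class_name="Document",
    vector=[0.1, 0.2, 0.3, 0.4],
)

# GraphQL-style query builder: nearest vectors, limited to 3 results.
result = (
    client.query.get("Document", ["text"])
    .with_near_vector({"vector": [0.1, 0.2, 0.3, 0.4]})
    .with_limit(3)
    .do()
)
print(result)
```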

GitHub Stars: 9k

Architecture

Weaviate Architecture

Review

"Providing research teams with high quality search capabilities / semantic search in legal documents." Alexsei

6) Chroma

Chroma is the open-source embedding database. Chroma makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. It is free to use under an Apache 2.0 license.
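A minimal sketch of Chroma's Python API is below; by default Chroma runs in-process and can embed the documents for you with a built-in model, though you can also pass your own embeddings. The collection name and documents are just examples.

```python
import chromadb

client = chromadb.Client()                    # in-process, in-memory instance
collection = client.create_collection("demo_docs")

# Chroma embeds these documents with its default embedding model.
collection.add(
    ids=["1", "2"],
    documents=[
        "Vector databases store and search embeddings.",
        "Bananas are a yellow fruit.",
    ],
)

# Query by text; Chroma embeds the query and returns the closest documents.
results = collection.query(query_texts=["how are embeddings stored?"], n_results=1)
print(results["documents"])
```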

GitHub Stars: 11k

Architecture

Chroma architecture

7) Vald

Vald is a highly scalable, distributed, fast approximate nearest neighbor (ANN) dense vector search engine. It's designed and implemented on a cloud-native architecture and uses NGT, one of the fastest ANN algorithms, to search for neighbors.

Vald provides automatic vector indexing, index backup, and horizontal scaling built for searching across billions of feature vectors. It is easy to use, feature-rich, and highly customizable to your needs. Graph indexes usually require locking during indexing, which causes stop-the-world pauses.

Vald, however, uses distributed index graphs, so it keeps serving queries while indexing. It implements its own highly customizable ingress/egress filters, which can be configured to fit the gRPC interface, and it scales horizontally on memory and CPU to meet demand. Vald also supports automatic backups to object storage or persistent volumes, enabling disaster recovery.

GitHub Stars: 1.4k

Architecture

Vald architecture

8) KX - KDB.AI

KDB.AI is a powerful knowledge-based vector database and search engine that allows developers to build scalable, reliable, and real-time AI applications by providing advanced search, recommendation, and personalization capabilities.

GitHub Stars: N/A

Architecture

KDB.AI architecture

Review

“It's a very powerful, flexible, and high-performance system. It's really a programming language which also has a database, which brings a great deal of flexibility. The language, q, provides very succinct ways of expressing relatively complex operations.” - Jonny P.

9) Vespa

Apply AI to your data, online. At any scale, with unbeatable performance.

You'll need to co-locate vectors, metadata and content on the same item on the same node, run inference there to achieve scalable performance, and seamlessly scale this across nodes to handle any amount of data and traffic. Vespa does all this for you so you can focus on building your application.

Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. Integrated machine-learned model inference allows you to apply AI to make sense of your data in real time.

Recommendation, personalization, and targeting involve evaluating recommender models over content items to select the best ones. Vespa lets you build applications that do this online, typically combining fast vector search and filtering with evaluation of machine-learned models over the items.

GitHub Stars: 5.1k

Architecture

Vespa architecture

10) SingleStore

SingleStoreDB is a high-performance, scalable, modern SQL DBMS and cloud service that supports multiple data models, including structured data, semi-structured data based on JSON, time series, full text, spatial, key-value, and vector data. Its vector database subsystem, first made available in 2017 and subsequently enhanced, allows extremely fast nearest-neighbor search to find semantically similar objects, easily, using SQL. Moreover, so-called "metadata filtering" (billed as a virtue by specialty vector database providers) is available in SingleStoreDB in a far more powerful and general form, simply by using SQL filters, joins, and all other SQL capabilities.
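To illustrate the "just use SQL" point, here is a hedged sketch: because SingleStore speaks the MySQL wire protocol, a generic connector such as pymysql works, and similarity search can be expressed with the long-standing JSON_ARRAY_PACK and DOT_PRODUCT functions (newer releases also add a native vector type). The host, credentials, and table are placeholders.

```python
import pymysql  # works because SingleStore is MySQL wire-protocol compatible

conn = pymysql.connect(host="svc-example-host", user="admin",
                       password="password", database="demo")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id INT PRIMARY KEY,
            body TEXT,
            embedding BLOB          -- packed float32 vector
        )
    """)
    cur.execute(
        "INSERT INTO docs VALUES (1, 'refund policy', JSON_ARRAY_PACK('[0.9, 0.1, 0.0]'))"
    )
    conn.commit()

    # Nearest neighbors by dot product, combined with an ordinary SQL filter.
    cur.execute("""
        SELECT id, body,
               DOT_PRODUCT(embedding, JSON_ARRAY_PACK('[0.85, 0.15, 0.05]')) AS score
        FROM docs
        WHERE id < 100
        ORDER BY score DESC
        LIMIT 5
    """)
    print(cur.fetchall())
```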

GitHub Stars: N/A

Architecture

Review

“Since switching to SingleStore, there have been no worries or headaches about downtime from the data source, APIs being down, or having to re-collect data. SingleStore solved all the pain points that kept me up at night.” Guy Warner, Chief Technology Officer, MonitorBase

11) LanceDB

LanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings.

The key features of LanceDB include:

  • Production-scale vector search with no servers to manage.
  • Store, query and filter vectors, metadata and multi-modal data (text, images, videos, point clouds, and more).
  • Support for vector similarity search, full-text search and SQL.
  • Native Python and Javascript/Typescript support.
  • Zero-copy, automatic versioning: manage versions of your data without needing extra infrastructure.
  • Ecosystem integrations with LangChain 🦜️🔗, LlamaIndex 🦙, Apache-Arrow, Pandas, Polars, DuckDB and more on the way.

GitHub Stars: 2.1k
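Here is a small sketch of the embedded, serverless workflow with the lancedb Python package; the directory, table name, and vectors are placeholders, and exact result-conversion helpers vary a little between versions.

```python
import lancedb

# LanceDB is embedded: data lives in local (or object-store) files, no server needed.
db = lancedb.connect("./lancedb-data")

table = db.create_table(
    "demo_docs",
    data=[
        {"vector": [0.1, 0.2, 0.3, 0.4], "text": "refund policy"},
        {"vector": [0.9, 0.1, 0.0, 0.2], "text": "shipping times"},
    ],
)

# Vector similarity search, returned as a pandas DataFrame.
results = table.search([0.1, 0.2, 0.3, 0.35]).limit(2).to_pandas()
print(results)
```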

12) Marqo

An end-to-end, multimodal vector search engine. Store and query unstructured data such as text, images, and code through a single easy-to-use API.

Marqo is a tensor-based search and analytics engine that seamlessly integrates with your applications, websites, and workflows. It is versatile and robust, and because it scales horizontally it delivers lightning-fast query times even with millions of documents. Marqo lets you configure deep-learning models like CLIP to pull semantic meaning from images, and it seamlessly handles image-to-image, image-to-text, and text-to-image search and analytics.

Marqo stores your data in a fully schemaless manner and combines tensor search with a query DSL that provides efficient pre-filtering. Tensor search lets you go beyond keyword matching and search based on the meaning of text, images, and other unstructured data. Whether you are a contributor, a user, or simply have a question, the Marqo community has your back.
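As a rough sketch of Marqo's API, the snippet below uses the marqo Python client against a locally running Marqo container; the index name and documents are made up, and the tensor_fields argument follows recent client versions and may differ in older ones.

```python
import marqo

# Assumes Marqo is running locally (typically via its Docker image).
mq = marqo.Client(url="http://localhost:8882")

mq.create_index("demo-index")   # uses a default text embedding model

# tensor_fields tells Marqo which fields to embed for semantic search.
mq.index("demo-index").add_documents(
    [
        {"Title": "Vector search", "Description": "Search by meaning, not just keywords."},
        {"Title": "Bananas", "Description": "A yellow fruit."},
    ],
    tensor_fields=["Description"],
)

results = mq.index("demo-index").search(q="semantic retrieval")
print(results["hits"])
```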

GitHub Stars: 3.9k

Marqo vector database

Features

  • Embeddings stored in in-memory HNSW indexes, achieving cutting edge search speeds
  • Scale to hundred-million document indexes with horizontal index sharding
  • Async and non-blocking data upload and search
  • Use the latest machine learning models from PyTorch, Huggingface, OpenAI and more
  • Start with a pre-configured model or bring your own
  • Built in ONNX support and conversion for faster inference and higher throughput
  • CPU and GPU support

13) Deep Lake by Activeloop

Deep Lake is a Database for AI powered by a storage format optimized for deep-learning and Large Language Model (LLM) based applications.

GitHub Stars: 7.4k

Deep Lake can be used for:

  1. Storing data and vectors while building LLM applications
  2. Managing datasets while training deep learning models

Deep Lake simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, PDFs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more. Deep Lake works with data of any size, is serverless, and enables you to store all of your data in your own cloud, in one place. Deep Lake is used by Intel, Airbus, Matterport, ZERO Systems, Red Cross, Yale, and Oxford.

Deep Lake comparison to other vector databases

Vector Database Adoption and Implementation

Vector databases are starting to gain traction as a tool for managing AI implementations. However, adoption remains relatively low for now: fewer than 20% of respondents to Retool's State of AI Report 2023 currently use a vector database.

This lower uptake makes sense in the context that most companies are still in the early stages with AI. With hosted models being the norm, companies have not yet reached the point of needing the specialized data storage and retrieval that vector databases provide. But for those dipping their toes in, sentiment seems positive - vector database users rated their satisfaction level higher than average.

Popular vector databases: according to Retool's State of AI Report 2023, fewer than 20% of respondents are using a vector database at all.

Looking at current usage, Pinecone, MongoDB, and pgvector take the top spots, and no single player has yet taken a definitive lead. Adoption patterns also differ by company size, but small sample sizes make conclusions tricky.

Slow uptake doesn't necessarily mean vector databases lack value. Some write-in survey responses indicated uncertainty about whether vector databases are worth investing in, which suggests an awareness issue may be partly to blame. As AI implementations mature, the benefits of vector data storage may become more apparent.

In the coming years, we may see vector databases catch on more widely. But for now, most companies are still focused on early AI experiments with hosted solutions. As these projects evolve, the need for optimized data storage and retrieval will grow. Vector databases seem poised to step in and fill that need, but only time will tell if they move into the mainstream. Their current positive reception hints at a promising future if companies continue maturing their AI capabilities.

The Hidden Challenges of Integrating Vector Databases

Vector databases enable powerful AI applications. But integrating one into your stack introduces underappreciated complexities.

“For databases that currently lack vector search functionality, it is only a matter of time before they implement these features.” - Yingjun Wu, "Why You Shouldn't Invest In Vector Databases?"

What does it take to deploy a vector database into production?

By carefully considering these questions, you can make a more informed decision about whether integrating and maintaining a vector database is the right path for your organization's needs and resources.

  1. Expertise: Do you have the necessary expertise in-house to scale performance, configure appropriate indexes, and handle partitioning? How will you handle data modification tasks while maintaining index consistency? How will you manage the impact on vector representations of upstream model changes?
  2. Resources: Do you have the resources to ensure data integrity, consistency, and scalability, given that vector databases often lack mature data management capabilities? Are you prepared for the potential drain on resources that may be required to address the complex issues that come with integrating and maintaining a vector database?
  3. Monitoring and Alerting: How will you track dependencies and manage changes in vector data that could potentially impact downstream uses? Will you need to buy and piece together more point solutions?
  4. Security and Compliance: If you opt for an open-source solution, how will you ensure robust security, monitoring, and controls? Do you fully understand the compliance requirements and regulations related to data residency, privacy, copyright, and so on? Have you thought through what role a vector database plays in those considerations?
  5. Cost Implications: If you choose a commercial vector database, do you fully understand the cost implications and potential for price escalation? Is the pricing predictable and transparent to make budgeting decisions, or does cost vary wildly with usage? Will you need to buy more point solutions to reinforce this decision?
  6. Integration: How much custom work will integrating this vector database require, given the lack of standardized APIs? If you need to migrate systems in the future, do you have a plan for how to re-architect the system for a smooth transition?
  7. Data Distribution: How will you handle situations where related information ends up in different chunks or partitions? What strategies do you have to avoid inefficiencies in data retrieval and potential coherence issues?
  8. Foundation Models: If you only select one foundation model, is it always the best model for your use cases? And, if it's not, to what extent does the infrastructure that you've built support experimentation with others?
  9. Alternatives: Have you thoroughly examined all alternatives, including all-in-one AI platforms like Graft, which offer a range of features designed to work together seamlessly?

Graft: Your Shortcut to Production-Ready AI

Don't underestimate the effort required to set up and integrate a vector database.

A production AI system requires much more than just a vector store. You need extensive data pipelines, embeddings, foundation models, monitoring and alerting, and a significant allocation of engineering resources.

Graft is the AI platform where practicality meets possibility. Designed for the 99%, Graft empowers organizations of all sizes to leverage the transformative power of advanced AI technologies with unparalleled ease.

Our Modern AI Platform stands unique in its ability to simplify the complex, offering the fastest path from idea to impact in AI deployment. With Graft, cutting-edge AI is no longer the exclusive domain of companies with deep pockets and specialized expertise; it's an accessible tool for innovation, efficiency, and competitive advantage.

Graft's Full Production AI System

Whether it's through enhancing search capabilities, driving predictive analytics, or creating generative marvels, you can concentrate on your core use cases and expedite your ROI. This means no more piecing together insecure, inflexible components or replicating fragile pipelines.

We're democratizing access to production-ready AI, eliminating the necessity for patchwork solutions. Don't settle for duct-taped solutions.

The Graft Intelligence Layer integrates your company knowledge and expertise to streamline your enterprise operations.

  • All Your Use Cases - Advanced AI models for search, predictive, and generative.
  • Use All Your Data - Every data source, every modality, always current.
  • Customizable and Extensible - Leverage Graft's API to build custom AI-powered applications and workflows on top of the intelligence layer.

Last Updated

April 19, 2024


JD Prater

Head of Marketing

JD writes about his experience using and building AI solutions. Outside of work, you'll find him spending time with his family, cycling the backroads of the Santa Cruz mountains, and surfing the local sandbars. Say hi on LinkedIn.
