FerresDB is a high-performance vector search engine for semantic search, RAG pipelines, and recommendation systems. REST API, hybrid search, WAL persistence — production-ready from day one.
FerresDB is fully open source, licensed under MIT OR Apache-2.0.
Core Features
HNSW index with Cosine, Euclidean, and Dot Product metrics. P50 latency under 500μs even at scale.
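To make the three metrics concrete, here are plain scalar reference definitions. This is illustration only — FerresDB's actual kernels are SIMD-accelerated in the Rust core; the function names `dot`, `euclidean`, and `cosineSimilarity` are chosen for this sketch:

```typescript
// Scalar reference implementations of the three supported metrics.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

function euclidean(a: number[], b: number[]): number {
  // Straight-line (L2) distance between two vectors.
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function cosineSimilarity(a: number[], b: number[]): number {
  // Angle-based similarity: 1 for parallel vectors, 0 for orthogonal.
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}
```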
Combine vector similarity with full-text BM25 scoring in a single query. Best of both worlds for RAG.
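One common way to blend the two signals is a weighted linear combination of the vector similarity and a normalized BM25 score. FerresDB's exact fusion formula isn't documented in this overview, so the `hybridScore` function and the `alpha` weight below are illustrative assumptions, not the engine's internals:

```typescript
// Hypothetical score fusion: alpha weights vector similarity against
// BM25. BM25 is unbounded, so it is scaled by the best BM25 score in
// the result set to land in [0, 1] before blending.
function hybridScore(
  vectorSim: number,
  bm25: number,
  maxBm25: number,
  alpha = 0.7
): number {
  const bm25Norm = maxBm25 > 0 ? bm25 / maxBm25 : 0;
  return alpha * vectorSim + (1 - alpha) * bm25Norm;
}
```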
No GC pauses. Predictable latency. Thread-safe by design. SIMD acceleration (AVX2/SSE4.1) for distance kernels.
Write-Ahead Log, periodic snapshots every 1000 ops, and automatic crash recovery. Your data survives failures.
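Conceptually, the durability model works like this: every write is appended to the log first, a snapshot folds the log into the base state every N operations, and recovery loads the last snapshot and replays the log tail. The `Store` class below is a toy sketch of that idea, not FerresDB's storage engine:

```typescript
// Toy WAL + snapshot model (illustration only).
type Op = { id: string; vector: number[] };

const SNAPSHOT_EVERY = 1000; // matches the interval described above

class Store {
  wal: Op[] = [];
  snapshot: Map<string, number[]> = new Map();

  apply(op: Op): void {
    this.wal.push(op); // durable append happens before anything else
    if (this.wal.length >= SNAPSHOT_EVERY) this.compact();
  }

  compact(): void {
    // Fold the log into the snapshot, then truncate the log.
    for (const op of this.wal) this.snapshot.set(op.id, op.vector);
    this.wal = [];
  }

  recover(): Map<string, number[]> {
    // Crash recovery: last snapshot plus replay of the log tail.
    const state = new Map(this.snapshot);
    for (const op of this.wal) state.set(op.id, op.vector);
    return state;
  }
}
```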
Connect Claude Desktop directly to FerresDB via the Model Context Protocol. Enable the --mcp flag and start searching from your AI assistant.
Prometheus metrics at /metrics, health check at /health, query analytics with P95 latency, and a web dashboard included.
Benchmarks
| Dataset      | Insert time | Throughput     |
|--------------|-------------|----------------|
| 1K vectors   | ~10–20ms    | 50K–100K pts/s |
| 10K vectors  | ~150–300ms  | 30K–60K pts/s  |
| 100K vectors | ~2.5–5s     | 20K–40K pts/s  |
Reference hardware: Intel i7 / AMD Ryzen, 16GB RAM. Results vary with HNSW config and vector dimension.
Quick Start
docker-compose up -d
# Backend: http://localhost:8080
# Dashboard: http://localhost:3000

— or run individual containers —
docker pull ferresdb/ferres-db-core
docker run -d \
--name ferres-db-core \
-p 8080:8080 \
-e FERRESDB_API_KEYS=ferres_sk_your_key_here \
-v ferres-data:/data \
ferresdb/ferres-db-core

# Create a collection
curl -X POST http://localhost:8080/api/v1/collections \
-H "Content-Type: application/json" \
-d '{"name":"docs","dimension":384,"distance":"Cosine"}'# Insert
curl -X POST http://localhost:8080/api/v1/collections/docs/points \
-H "Content-Type: application/json" \
-d '{"points":[{"id":"doc-1","vector":[0.1,0.2,-0.1],"metadata":{"text":"Hello FerresDB"}}]}'
# Search
curl -X POST http://localhost:8080/api/v1/collections/docs/search \
-H "Content-Type: application/json" \
-d '{"vector":[0.1,0.2,-0.1],"limit":5}'Official SDKs
pick your language
pnpm add @ferres-db/typescript-sdk

import { VectorDBClient, DistanceMetric } from "@ferres-db/typescript-sdk";
const client = new VectorDBClient({
baseUrl: "http://localhost:8080",
apiKey: "ferres_sk_...",
});
await client.upsertPoints("documents", [
{ id: "doc-1", vector: [0.1, 0.2, 0.3], metadata: { text: "Hello" } },
]);
const results = await client.search("documents", {
vector: [0.1, 0.2, 0.3],
limit: 5,
});

Use Cases
Index documents, articles, or product descriptions. Find the most relevant results using vector similarity — no keyword matching required.
Connect FerresDB to your LLM pipeline. Retrieve the most relevant context chunks before generation. Works with LangChain, LlamaIndex, and custom pipelines.
Store user embeddings and item vectors. Query the nearest neighbors in microseconds to power real-time recommendations.
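The recommendation pattern above is just nearest-neighbor search over item vectors. A toy brute-force version makes the idea concrete (the names here are invented; the real engine answers this via the HNSW index, approximately and far faster):

```typescript
// Brute-force k-nearest-neighbor lookup by cosine similarity.
type Item = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, ai, i) => s + ai * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

function nearest(user: number[], items: Item[], k: number): Item[] {
  // Sort a copy by descending similarity to the user vector, keep top k.
  return [...items]
    .sort((x, y) => cosine(user, y.vector) - cosine(user, x.vector))
    .slice(0, k);
}
```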
Claude Desktop Integration
FerresDB supports the Model Context Protocol (MCP). Use Claude Desktop to search, upsert, and explore your vector collections — directly from your AI assistant.
# Build with MCP support
cargo build -p ferres-db-server --features mcp --release

# Or enable via environment variable
FERRESDB_ENABLE_MCP=true ./ferres-db-server --mcp

FerresDB is open source under the MIT OR Apache-2.0 license. Contributions, issues, and feedback are welcome.
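To register the server with Claude Desktop, add an entry to claude_desktop_config.json using the standard mcpServers format. The sketch below assumes the binary is launched with the --mcp flag as shown above; the binary path and the "ferresdb" key are placeholders to adapt:

```json
{
  "mcpServers": {
    "ferresdb": {
      "command": "/path/to/ferres-db-server",
      "args": ["--mcp"]
    }
  }
}
```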
Licensed MIT OR Apache-2.0 · Conventional Commits · DCO signed contributions