
Implementing Vector Search Using Qdrant and Next.js

Joseph Damiba

We live in exciting times: a variety of powerful free tools let us build applications that would have been impossible just a few years ago. To take just one example, the introduction of vector databases has revolutionized how we find similar content across domains, from images to text. In this post, I'll walk through implementing an image similarity search application using Next.js, TensorFlow.js, and Qdrant - a powerful vector database.

What We Are Building

We'll create a web application that:

  1. Allows users to upload ground-truth images
  2. Converts the ground-truth images into vector embeddings using TensorFlow.js and MobileNet
  3. Stores these embeddings in a Qdrant collection
  4. Allows users to upload target images
  5. Converts the target images into vector embeddings using TensorFlow.js and MobileNet
  6. Performs a vector search against the Qdrant collection
  7. Displays the results to the user

Understanding the Components

Before diving into the implementation, let's understand our key tools:

  • Next.js: Our React framework for building the application
  • MobileNet: A pre-trained convolutional neural network that can extract meaningful features from images while being lightweight enough to run in the browser.
  • Qdrant: A vector database that excels at storing and querying high-dimensional vectors, perfect for our image embeddings.

Understanding Vector Embeddings

Vector embeddings are numerical representations of data that capture semantic meaning. For images, these embeddings represent visual features like shapes, textures, and objects. When we run an image through MobileNet, it generates a 1280-dimensional vector that represents the image's features. Similar images will have vectors that are close to each other in this high-dimensional space.
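
To make "close" concrete, here's a minimal sketch of cosine similarity - the measure our Qdrant collection will use later - computed by hand in TypeScript. The vectors are tiny, made-up examples; real MobileNet embeddings have 1280 dimensions:

// cosine-similarity.ts (illustrative only - Qdrant computes this for us)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Nearly parallel vectors score close to 1; orthogonal vectors score 0
console.log(cosineSimilarity([0.9, 0.1, 0.3], [0.8, 0.15, 0.25])); // ≈ 0.997
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // 0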

Setting Up The Project

First, let's create a new Next.js project with TypeScript support, and install the necessary dependencies. We'll use the official Next.js project creation tool, which sets up all the basic configuration for us. The additional packages we're installing are:

  • @tensorflow/tfjs: The core TensorFlow.js library that allows us to run machine learning models in JavaScript
  • @tensorflow-models/mobilenet: A pre-trained neural network that can analyze images
  • @qdrant/js-client-rest: The official JavaScript client for interacting with our Qdrant vector database
  • sharp: A high-performance Node.js image processing library we'll use to decode and resize uploaded images
npx create-next-app@latest image-similarity-search
cd image-similarity-search
npm install @tensorflow/tfjs @tensorflow-models/mobilenet @qdrant/js-client-rest sharp

Next, we'll create the core functionality that converts images into vector embeddings. This is where the magic happens - we'll use the MobileNet model to analyze images and generate numerical representations (vectors) that capture their visual features.

Here's how the code works:

  • We first load the MobileNet model (only once) and keep it in memory
  • For each image, we:
    1. Convert it from base64 format to a buffer
    2. Resize it to 224x224 pixels (the input size MobileNet expects) and strip any alpha channel
    3. Convert it to a tensor (the multidimensional array format TensorFlow.js operates on)
    4. Generate predictions and embeddings using MobileNet
    5. Dispose of the tensors to prevent memory leaks
// utils/embedding-node.ts
import * as tf from "@tensorflow/tfjs";
import * as mobilenet from "@tensorflow-models/mobilenet";
import sharp from "sharp";

let model: mobilenet.MobileNet | null = null;

async function loadModel() {
  if (!model) {
    // Use CPU backend instead of WASM
    await tf.setBackend("cpu");
    model = await mobilenet.load({ version: 2, alpha: 1.0 });
  }
  return model;
}

export async function generateEmbedding(
  base64Image: string
): Promise<number[]> {
  try {
    const mobileNetModel = await loadModel();

    // Convert base64 to buffer
    const imageBuffer = Buffer.from(base64Image, "base64");

    // Process image with sharp: resize to MobileNet's input size and
    // strip any alpha channel so we always get exactly 3 channels
    const processedImage = await sharp(imageBuffer)
      .resize(224, 224, { fit: "contain" })
      .removeAlpha()
      .raw()
      .toBuffer();

    // Create tensor from raw pixel data
    const tfImage = tf.tensor3d(new Uint8Array(processedImage), [224, 224, 3]);

    const predictions = await mobileNetModel.classify(tfImage);
    console.log("predictions", predictions);

    // Generate embedding
    const embedding = mobileNetModel.infer(tfImage, true);
    const embeddingData = await embedding.data();
    console.log("embeddingData", embeddingData);

    // Cleanup
    tfImage.dispose();
    embedding.dispose();

    return Array.from(embeddingData);
  } catch (error) {
    console.error("Error generating embedding:", error);
    throw error;
  }
}
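
Before wiring this into an API route, you can sanity-check it with a throwaway Node script along these lines (the script name and image path are placeholders):

// scripts/test-embedding.ts (hypothetical) - run with: npx tsx scripts/test-embedding.ts
import fs from "fs";
import { generateEmbedding } from "../utils/embedding-node";

async function main() {
  // Any local JPEG or PNG will do
  const base64Image = fs.readFileSync("test-image.jpg").toString("base64");
  const embedding = await generateEmbedding(base64Image);
  console.log("Embedding length:", embedding.length); // expect 1280
}

main().catch(console.error);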

Now we need to create an API endpoint that will receive images from users and store their vector representations in our Qdrant database. This endpoint does several important things:

  • Checks if our Qdrant collection exists, creating it if it doesn't
  • Accepts image uploads through a FormData request
  • Converts the uploaded image to a format our embedding function can process
  • Generates the embedding using our previous function
  • Stores the embedding in Qdrant with a unique ID and the filename

The collection is configured to use "Cosine" distance, which measures how similar two vectors are by the angle between them - the closer the score is to 1, the more similar the vectors (and therefore the images).

// app/api/add-to-collection/route.ts
import { NextRequest, NextResponse } from "next/server";
import { QdrantClient } from "@qdrant/js-client-rest";
import { generateEmbedding } from "@/utils/embedding-node";

// Define custom error type
type ProcessingError = {
  message: string;
  status?: number;
  cause?: unknown;
};

const qdrant = new QdrantClient({
  url: process.env.QDRANT_URL,
  apiKey: process.env.QDRANT_API_KEY,
});

const COLLECTION_NAME = "image_vector_embeddings_20250310";
const VECTOR_SIZE = 1280; // MobileNet embedding size

async function ensureCollection() {
  try {
    // Check if collection exists
    const collections = await qdrant.getCollections();
    const exists = collections.collections.some(
      (collection) => collection.name === COLLECTION_NAME
    );

    if (!exists) {
      // Create collection if it doesn't exist
      await qdrant.createCollection(COLLECTION_NAME, {
        vectors: {
          size: VECTOR_SIZE,
          distance: "Cosine",
        },
      });
    }
  } catch (error) {
    console.error("Error ensuring collection exists:", error);
    throw error;
  }
}

export async function POST(request: NextRequest) {
  try {
    // Ensure collection exists before proceeding
    await ensureCollection();

    const formData = await request.formData();
    const file = formData.get("image") as File;

    if (!file) {
      return NextResponse.json(
        { error: "No image file provided" },
        { status: 400 }
      );
    }

    // Convert file to base64
    const bytes = await file.arrayBuffer();
    const buffer = Buffer.from(bytes);
    const base64Image = buffer.toString("base64");

    // Generate embedding
    const embedding = await generateEmbedding(base64Image);

    // Upload to Qdrant
    await qdrant.upsert(COLLECTION_NAME, {
      points: [
        {
          id: Date.now(),
          vector: embedding,
          payload: {
            filename: file.name,
          },
        },
      ],
    });

    return NextResponse.json({ success: true });
  } catch (error: unknown) {
    console.error("Error processing image:", error);

    const processingError: ProcessingError = {
      message:
        error instanceof Error ? error.message : "Error processing image",
      cause: error,
    };

    return NextResponse.json(
      { error: processingError.message },
      { status: 500 }
    );
  }
}
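
With the route in place, you can exercise it from the command line before building any UI. Assuming the dev server is running on localhost:3000 and cat.jpg is a local file (both placeholders), a successful request returns {"success":true}:

curl -X POST -F "image=@cat.jpg" http://localhost:3000/api/add-to-collection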

Finally, we need an endpoint that can take a new image and find similar ones in our database. This endpoint:

  • Accepts an uploaded image from the user
  • Converts it to an embedding using the same process as before
  • Searches our Qdrant collection for the most similar vectors
  • Returns the similarity score (between 0 and 1) and filename of the best match

A similarity score closer to 1 means the images are very similar, while a score closer to 0 means they're very different.

// app/api/compare-image/route.ts
import { NextRequest, NextResponse } from "next/server";
import { QdrantClient } from "@qdrant/js-client-rest";
import { generateEmbedding } from "@/utils/embedding-node";

// Define custom error type
type ProcessingError = {
  message: string;
  status?: number;
  cause?: unknown;
};

const qdrant = new QdrantClient({
  url: process.env.QDRANT_URL,
  apiKey: process.env.QDRANT_API_KEY,
});
const COLLECTION_NAME = "image_vector_embeddings_20250310";

export async function POST(request: NextRequest) {
  try {
    const formData = await request.formData();
    const file = formData.get("image") as File;

    if (!file) {
      return NextResponse.json(
        { error: "No image file provided" },
        { status: 400 }
      );
    }

    // Convert file to base64
    const bytes = await file.arrayBuffer();
    const buffer = Buffer.from(bytes);
    const base64Image = buffer.toString("base64");

    // Generate embedding for uploaded image
    const embedding = await generateEmbedding(base64Image);

    // Get similarity score from Qdrant
    const searchResult = await qdrant.search(COLLECTION_NAME, {
      vector: embedding,
      limit: 1,
      with_payload: true,
    });

    const similarity = searchResult[0]?.score || 0;

    return NextResponse.json({
      similarity,
      filename: searchResult[0]?.payload?.filename,
    });
  } catch (error: unknown) {
    console.error("Error processing image:", error);

    const processingError: ProcessingError = {
      message:
        error instanceof Error ? error.message : "Error processing image",
      cause: error,
    };

    return NextResponse.json(
      { error: processingError.message },
      { status: 500 }
    );
  }
}
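
This route can be smoke-tested the same way; the response contains the best match's similarity score and filename (the values below are illustrative):

curl -X POST -F "image=@target.jpg" http://localhost:3000/api/compare-image
# → {"similarity":0.93,"filename":"cat.jpg"}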

With our backend APIs ready, we can create the user interface. This React component includes:

  • A section for uploading multiple "ground truth" images to our collection
  • Status indicators to show the progress of processing each image
  • A section for uploading a single target image to compare
  • A display area for showing the similarity results

The interface uses React's useState hook to manage the various states of our application, and provides real-time feedback as images are processed and compared.


// app/page.tsx
"use client";

import { useState } from "react";
import Image from "next/image";

type UploadStatus = {
  filename: string;
  status: "pending" | "processing" | "complete" | "error";
  error?: string;
};

export default function Home() {
  const [collectionImages, setCollectionImages] = useState<File[]>([]);
  const [uploadStatuses, setUploadStatuses] = useState<UploadStatus[]>([]);
  const [targetImage, setTargetImage] = useState<File | null>(null);
  const [targetPreview, setTargetPreview] = useState<string | null>(null);
  const [similarityScore, setSimilarityScore] = useState<number | null>(null);
  const [isComparing, setIsComparing] = useState(false);

  const handleCollectionUpload = async (
    e: React.ChangeEvent<HTMLInputElement>
  ) => {
    const files = Array.from(e.target.files || []);
    setCollectionImages(files);
    setUploadStatuses(
      files.map((file) => ({
        filename: file.name,
        status: "pending",
      }))
    );
  };

  const processCollection = async () => {
    for (let i = 0; i < collectionImages.length; i++) {
      const file = collectionImages[i];
      setUploadStatuses((prev) =>
        prev.map((status, idx) =>
          idx === i ? { ...status, status: "processing" } : status
        )
      );

      try {
        const formData = new FormData();
        formData.append("image", file);

        const response = await fetch("/api/add-to-collection", {
          method: "POST",
          body: formData,
        });

        if (!response.ok) throw new Error("Failed to process image");

        setUploadStatuses((prev) =>
          prev.map((status, idx) =>
            idx === i ? { ...status, status: "complete" } : status
          )
        );
      } catch (error: unknown) {
        setUploadStatuses((prev) =>
          prev.map((status, idx) =>
            idx === i
              ? {
                  ...status,
                  status: "error",
                  error:
                    error instanceof Error ? error.message : "Unknown error",
                }
              : status
          )
        );
      }
    }
  };

  const handleTargetImageUpload = async (
    e: React.ChangeEvent<HTMLInputElement>
  ) => {
    const file = e.target.files?.[0];
    if (file) {
      setTargetImage(file);
      setTargetPreview(URL.createObjectURL(file));
      setSimilarityScore(null);
    }
  };

  const handleCompare = async () => {
    if (!targetImage) return;

    setIsComparing(true);
    try {
      const formData = new FormData();
      formData.append("image", targetImage);

      const response = await fetch("/api/compare-image", {
        method: "POST",
        body: formData,
      });

      const data = await response.json();
      setSimilarityScore(data.similarity);
    } catch (error) {
      console.error("Error comparing images:", error);
    } finally {
      setIsComparing(false);
    }
  };

  return (
    <div className="min-h-screen p-8">
      <main className="max-w-4xl mx-auto space-y-12">
        {/* Collection Building Section */}
        <section className="space-y-6">
          <h2 className="text-2xl font-bold">Add Image(s) to Collection</h2>
          <input
            type="file"
            multiple
            accept="image/*"
            onChange={handleCollectionUpload}
            className="block w-full text-sm text-gray-500
              file:mr-4 file:py-2 file:px-4
              file:rounded-full file:border-0
              file:text-sm file:font-semibold
              file:bg-violet-50 file:text-violet-700
              hover:file:bg-violet-100"
          />

          {uploadStatuses.length > 0 && (
            <div className="space-y-4">
              <button
                onClick={processCollection}
                className="px-4 py-2 bg-violet-500 text-white rounded-md"
              >
                Process Images
              </button>

              <div className="space-y-2">
                {uploadStatuses.map((status, idx) => (
                  <div key={idx} className="flex items-center gap-2">
                    <span>{status.filename}</span>
                    <span
                      className={`text-sm ${
                        status.status === "complete"
                          ? "text-green-500"
                          : status.status === "error"
                          ? "text-red-500"
                          : status.status === "processing"
                          ? "text-blue-500"
                          : "text-gray-500"
                      }`}
                    >
                      {status.status}
                    </span>
                  </div>
                ))}
              </div>
            </div>
          )}
        </section>

        {/* Image Comparison Section */}
        <section className="space-y-6">
          <h2 className="text-2xl font-bold">Compare Image</h2>
          <input
            type="file"
            accept="image/*"
            onChange={handleTargetImageUpload}
            className="block w-full text-sm text-gray-500
              file:mr-4 file:py-2 file:px-4
              file:rounded-full file:border-0
              file:text-sm file:font-semibold
              file:bg-blue-50 file:text-blue-700
              hover:file:bg-blue-100"
          />

          {targetPreview && (
            <div className="space-y-4">
              {/* unoptimized: the preview is a browser-only object URL, so we
                  skip Next.js server-side image optimization */}
              <Image
                src={targetPreview}
                alt="Preview"
                width={300}
                height={300}
                className="object-contain"
                unoptimized
              />

              <button
                onClick={handleCompare}
                disabled={isComparing}
                className="px-4 py-2 bg-blue-500 text-white rounded-md
                  disabled:bg-gray-300 disabled:cursor-not-allowed"
              >
                {isComparing ? "Comparing..." : "Get Similarity Score"}
              </button>
            </div>
          )}

          {similarityScore !== null && (
            <div className="mt-4 p-4 bg-gray-50 rounded-lg">
              <p className="text-lg text-black font-semibold">
                Similarity Score: {(similarityScore * 100).toFixed(2)}%
              </p>
            </div>
          )}
        </section>
      </main>
    </div>
  );
}
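
One last step: the Qdrant client reads its connection details from the environment, so we need a .env.local file with our own values - the ones below are placeholders:

# .env.local (placeholder values - substitute your own Qdrant instance)
QDRANT_URL=https://your-cluster.qdrant.io:6333
QDRANT_API_KEY=your-api-key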

With these pieces in place, we can run npm run dev and interact with our application.

The complete code for this project can be found on my GitHub.