Build a Men’s Fashion Recommendation System Using FastEmbed and Qdrant

Rindra RANDRIAMIHAMINA | Last Updated: 07 Jul, 2025

Recommendation systems are everywhere, from Netflix and Spotify to Amazon. But what if you wanted to build a visual recommendation engine, one that looks at the image itself rather than just the title or tags? In this article, you'll build a men's fashion recommendation system using image embeddings and the Qdrant vector database, going from raw image data to real-time visual recommendations.

Learning Objectives

  • How image embeddings represent visual content
  • How to use FastEmbed for vector generation
  • How to store and search vectors using Qdrant
  • How to build a feedback-driven recommendation engine
  • How to create a simple UI with Streamlit

Use Case: Visual Recommendations for T-shirts and Polos

Imagine a user clicks on a stylish polo shirt. Instead of relying on product tags, your fashion recommendation system will suggest T-shirts and polos that look similar, using the image itself to make that decision.

Let’s explore how.

Step 1: Understanding Image Embeddings

What Are Image Embeddings?

An image embedding is a vector: a list of numbers that represents the key features of an image. Two similar images have embeddings that are close together in vector space, which lets the system measure visual similarity.

For example, two T-shirts may look different pixel by pixel, but their embeddings will be close if they share similar colors, patterns, and textures. This is a crucial ability for a fashion recommendation system.
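To make "close together" concrete, here is a toy sketch using NumPy. The vectors are made up for illustration; real embeddings have hundreds of dimensions:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the two vectors after normalization
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings"
striped_polo = np.array([0.9, 0.1, 0.4, 0.2])
similar_polo = np.array([0.8, 0.2, 0.5, 0.1])
plain_tee = np.array([0.1, 0.9, 0.1, 0.8])

print(cosine_similarity(striped_polo, similar_polo))  # high score: visually close
print(cosine_similarity(striped_polo, plain_tee))     # lower score: less similar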


How Are Embeddings Generated?

Most embedding models use deep learning. Convolutional neural networks (CNNs) and, more recently, Vision Transformers (ViTs) extract visual patterns, and those patterns become the components of the vector.

In our case, we use FastEmbed, with Qdrant/Unicom-ViT-B-32 as the embedding model:

from fastembed import ImageEmbedding
from typing import List
from dotenv import load_dotenv
import numpy as np
import os

load_dotenv()
model = ImageEmbedding(os.getenv("IMAGE_EMBEDDING_MODEL"))

def compute_image_embedding(image_paths: List[str]) -> List[np.ndarray]:
    # model.embed() returns a generator yielding one vector per image; materialize it
    return list(model.embed(image_paths))

This function takes a list of image paths. It returns vectors that capture the essence of those images.
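For example, assuming two images exist under your data folder (the paths below are illustrative), you could call it like this:

embeddings = compute_image_embedding(["data/polo_01.jpg", "data/tshirt_07.jpg"])
print(len(embeddings))       # 2, one vector per image
print(embeddings[0].shape)   # the model's embedding dimensionality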

Step 2: Getting the Dataset

We used a dataset of around 2,000 men's fashion images, available on Kaggle (virat164/fashion-database). Here is how we load the dataset:

import shutil, os, kagglehub
from dotenv import load_dotenv

load_dotenv()
kaggle_repo = os.getenv("KAGGLE_REPO")
# Download the dataset from Kaggle (cached locally by kagglehub)
path = kagglehub.dataset_download(kaggle_repo)
target_folder = os.getenv("DATA_PATH")

def getData():
    # Copy the downloaded images into the project's data folder once
    if not os.path.exists(target_folder):
        shutil.copytree(path, target_folder)

This script checks if the target folder exists. If not, it copies the images there.
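For reference, the environment variables used so far might look like this in a .env file. The values are illustrative; the Kaggle repo matches the dataset referenced in the sidebar later on:

IMAGE_EMBEDDING_MODEL=Qdrant/Unicom-ViT-B-32
KAGGLE_REPO=virat164/fashion-database
DATA_PATH=data/fashion_images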

Step 3: Store and Search Vectors with Qdrant

Once we have embeddings, we need to store and search them. This is where Qdrant comes in. It’s a fast and scalable vector database.

Here is how to connect to Qdrant Vector Database:

import os
from dotenv import load_dotenv
from qdrant_client import QdrantClient

load_dotenv()
client = QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY"),
)

This is how to insert images, paired with their embeddings, into a Qdrant collection:

import uuid
from typing import List
from qdrant_client import models

class VectorStore:
    def __init__(self, embed_batch: int = 64, upload_batch: int = 32, parallel_uploads: int = 3):
        # ... (initializer code omitted for brevity; see the sketch below) ...

    def insert_images(self, image_paths: List[str]):
        def chunked(iterable, size):
            for i in range(0, len(iterable), size):
                yield iterable[i:i + size]

        for batch in chunked(image_paths, self.embed_batch):
            embeddings = compute_image_embedding(batch)  # Batch embed
            points = [
                models.PointStruct(id=str(uuid.uuid4()), vector=emb, payload={"image_path": img})
                for emb, img in zip(embeddings, batch)
            ]

            # Batch upload each sub-batch
            self.client.upload_points(
                collection_name=self.collection_name,
                points=points,
                batch_size=self.upload_batch,
                parallel=self.parallel_uploads,
                max_retries=3,
                wait=True
            )

This code takes a list of image file paths, turns them into embeddings in batches, and uploads those embeddings to a Qdrant collection. The omitted initializer checks whether the collection exists and creates it if needed. Each image gets a unique ID and is wrapped, together with its embedding and file path, into a "Point". These points are then uploaded to Qdrant in chunks, with several upload requests running in parallel to speed things up.
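For reference, here is a minimal sketch of what that omitted initializer might look like. The collection name matches the one used below; the vector size (512, which Unicom-ViT-B-32 produces) and the cosine distance are assumptions to verify against your model:

import uuid
from typing import List
from qdrant_client import models

class VectorStore:
    def __init__(self, embed_batch: int = 64, upload_batch: int = 32, parallel_uploads: int = 3):
        self.client = client                     # the QdrantClient created earlier
        self.collection_name = "fashion_images"
        self.embed_batch = embed_batch
        self.upload_batch = upload_batch
        self.parallel_uploads = parallel_uploads
        # Create the collection on first run; vector size must match the embedding model
        if not self.client.collection_exists(self.collection_name):
            self.client.create_collection(
                collection_name=self.collection_name,
                vectors_config=models.VectorParams(size=512, distance=models.Distance.COSINE),
            )
        # self.points, a list of {"id": ..., "image_path": ...} dicts,
        # would also be loaded here, since the Streamlit UI reads it later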

Search Similar Images

def search_similar(query_image_path: str, limit: int = 5):
    emb_list = compute_image_embedding([query_image_path])
    hits = client.search(
        collection_name="fashion_images",
        query_vector=emb_list[0],
        limit=limit
    )
    return [{"id": h.id, "image_path": h.payload.get("image_path")} for h in hits]

You give it a query image, and the system returns visually similar images ranked by cosine similarity.
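For instance, fetching the five closest matches for a query image looks like this (the path is illustrative):

for hit in search_similar("data/polo_01.jpg", limit=5):
    print(hit["id"], hit["image_path"])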

Step 4: Create the Recommendation Engine with Feedback

We now go a step further. What if the user likes some images and dislikes others? Can the fashion recommendation system learn from this?

Yes. Qdrant allows us to give positive and negative feedback. It then returns better, more personalized results.

class RecommendationEngine:
    def get_recommendations(self, liked_images: List[str], disliked_images: List[str], limit: int = 10):
        recommended = client.recommend(
            collection_name="fashion_images",
            positive=liked_images,
            negative=disliked_images,
            limit=limit
        )
        return [{"id": hit.id, "image_path": hit.payload.get("image_path")} for hit in recommended]

Here are the inputs of this function:

  • liked_images: A list of image IDs representing items the user has liked.
  • disliked_images: A list of image IDs representing items the user has disliked.
  • limit (optional): An integer specifying the maximum number of recommendations to return (defaults to 10).

This returns recommended clothes using the embedding-vector similarity presented earlier.

This lets your system adapt. It learns user preferences quickly.
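Putting it together, a call might look like this. The IDs are illustrative placeholders; in practice they are the point IDs returned by search_similar or collected in the UI's session state:

engine = RecommendationEngine()
recommendations = engine.get_recommendations(
    liked_images=["id-of-liked-polo", "id-of-liked-tee"],
    disliked_images=["id-of-disliked-jacket"],
    limit=10,
)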

Step 5: Build a UI with Streamlit

We use Streamlit to build the interface. It’s simple, fast, and written in Python.


Users can:

  • Browse clothing
  • Like or dislike items
  • View new, better recommendations

Here is the Streamlit code:

import streamlit as st
from PIL import Image
import os

from src.recommendation.engine import RecommendationEngine
from src.vector_database.vectorstore import VectorStore
from src.data.get_data import getData

# -------------- Config --------------
st.set_page_config(page_title="🧥 Men's Fashion Recommender", layout="wide")
IMAGES_PER_PAGE = 12

# -------------- Ensure Dataset Exists (once) --------------
@st.cache_resource
def initialize_data():
    getData()
    return VectorStore(), RecommendationEngine()

vector_store, recommendation_engine = initialize_data()

# -------------- Session State Defaults --------------
session_defaults = {
    "liked": {},
    "disliked": {},
    "current_page": 0,
    "recommended_images": vector_store.points,
    "vector_store": vector_store,
    "recommendation_engine": recommendation_engine,
}

for key, value in session_defaults.items():
    if key not in st.session_state:
        st.session_state[key] = value

# -------------- Sidebar Info --------------
with st.sidebar:
    st.title("🧥 Men's Fashion Recommender")

    st.markdown("""
    **Discover fashion styles that suit your taste.**  
    Like 👍 or dislike 👎 outfits and receive AI-powered recommendations tailored to you.
    """)

    st.markdown("### 📦 Dataset")
    st.markdown("""
    - Source: [Kaggle – virat164/fashion-database](https://www.kaggle.com/datasets/virat164/fashion-database)  
    - ~2,000 fashion images
    """)

    st.markdown("### 🧠 How It Works")
    st.markdown("""
    1. Images are embedded into vector space  
    2. You provide preferences via Like/Dislike  
    3. Qdrant finds visually similar images  
    4. Results are updated in real-time
    """)

    st.markdown("### ⚙️ Technologies")
    st.markdown("""
    - **Streamlit** UI  
    - **Qdrant** vector DB  
    - **Python** backend  
    - **PIL** for image handling  
    - **Kaggle API** for data
    """)

    st.markdown("---")
# -------------- Core Logic Functions --------------
def get_recommendations(liked_ids, disliked_ids):
    return st.session_state.recommendation_engine.get_recommendations(
        liked_images=liked_ids,
        disliked_images=disliked_ids,
        limit=3 * IMAGES_PER_PAGE
    )

def refresh_recommendations():
    liked_ids = list(st.session_state.liked.keys())
    disliked_ids = list(st.session_state.disliked.keys())
    st.session_state.recommended_images = get_recommendations(liked_ids, disliked_ids)

# -------------- Display: Selected Preferences --------------
def display_selected_images():
    if not st.session_state.liked and not st.session_state.disliked:
        return

    st.markdown("### 🧍 Your Picks")
    cols = st.columns(6)
    images = st.session_state.vector_store.points

    for i, (img_id, status) in enumerate(
        list(st.session_state.liked.items()) + list(st.session_state.disliked.items())
    ):
        img_path = next((img["image_path"] for img in images if img["id"] == img_id), None)
        if img_path and os.path.exists(img_path):
            with cols[i % 6]:
                st.image(img_path, use_container_width=True, caption=f"{img_id} ({status})")
                col1, col2 = st.columns(2)
                if col1.button("❌ Remove", key=f"remove_{img_id}"):
                    if status == "liked":
                        del st.session_state.liked[img_id]
                    else:
                        del st.session_state.disliked[img_id]
                    refresh_recommendations()
                    st.rerun()

                if col2.button("🔁 Switch", key=f"switch_{img_id}"):
                    if status == "liked":
                        del st.session_state.liked[img_id]
                        st.session_state.disliked[img_id] = "disliked"
                    else:
                        del st.session_state.disliked[img_id]
                        st.session_state.liked[img_id] = "liked"
                    refresh_recommendations()
                    st.rerun()

# -------------- Display: Recommended Gallery --------------
def display_gallery():
    st.markdown("### 🧠 Smart Suggestions")

    page = st.session_state.current_page
    start_idx = page * IMAGES_PER_PAGE
    end_idx = start_idx + IMAGES_PER_PAGE
    current_images = st.session_state.recommended_images[start_idx:end_idx]

    cols = st.columns(4)
    for idx, img in enumerate(current_images):
        with cols[idx % 4]:
            if os.path.exists(img["image_path"]):
                st.image(img["image_path"], use_container_width=True)
            else:
                st.warning("Image not found")

            col1, col2 = st.columns(2)
            if col1.button("👍 Like", key=f"like_{img['id']}"):
                st.session_state.liked[img["id"]] = "liked"
                refresh_recommendations()
                st.rerun()
            if col2.button("👎 Dislike", key=f"dislike_{img['id']}"):
                st.session_state.disliked[img["id"]] = "disliked"
                refresh_recommendations()
                st.rerun()

    # Pagination
    col1, _, col3 = st.columns([1, 2, 1])
    with col1:
        if st.button("⬅️ Previous") and page > 0:
            st.session_state.current_page -= 1
            st.rerun()
    with col3:
        if st.button("➡️ Next") and end_idx < len(st.session_state.recommended_images):
            st.session_state.current_page += 1
            st.rerun()

# -------------- Main Render Pipeline --------------
st.title("🧥 Men's Fashion Recommender")

display_selected_images()
st.divider()
display_gallery()

This UI closes the loop. It turns a collection of functions into a usable product.
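Assuming the script above is saved as app.py, you can launch it with Streamlit's standard command:

streamlit run app.py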

Conclusion

You just built a complete fashion recommendation system. It sees images, understands visual features, and makes smart suggestions.

Using FastEmbed, Qdrant, and Streamlit, you now have a powerful recommendation system. It works for T-shirts, polos, and other men's clothing, and it can be adapted to any other image-based recommendation task.

Frequently Asked Questions

Do the numbers in image embeddings represent pixel intensities?

Not exactly. The numbers in embeddings capture semantic features like shapes, colors, and textures—not raw pixel values. This helps the system understand the meaning behind the image rather than just the pixel data.

Does this recommendation system require training?

No. It leverages vector similarity (like cosine similarity) in the embedding space to find visually similar items without needing to train a traditional model from scratch.

Can I fine-tune or train my own image embedding model?

Yes, you can. Training or fine-tuning image embedding models typically involves frameworks like TensorFlow or PyTorch and a labeled dataset. This lets you customize embeddings for specific needs.

Is it possible to query image embeddings using text?

Yes, if you use a multimodal model that maps both images and text into the same vector space. This way, you can search images with text queries or vice versa.
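As a rough sketch, FastEmbed ships CLIP-style checkpoints whose text and image encoders share one vector space. The model names below are assumptions to verify against the FastEmbed docs before relying on them:

from fastembed import TextEmbedding, ImageEmbedding

# CLIP text and vision towers project into the same embedding space
text_model = TextEmbedding("Qdrant/clip-ViT-B-32-text")
image_model = ImageEmbedding("Qdrant/clip-ViT-B-32-vision")

# Embed a text query, then pass it as query_vector to client.search;
# note the collection must be built with the matching CLIP image model
text_vector = list(text_model.embed(["red striped polo shirt"]))[0]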

Should I always use FastEmbed for embeddings?

FastEmbed is a great choice for quick and efficient embeddings. But there are many alternatives, including models from OpenAI, Google, or Cohere. Choosing depends on your use case and performance needs.

Can I use vector databases other than Qdrant?

Absolutely. Popular alternatives include Pinecone, Weaviate, Milvus, and Vespa. Each has unique features, so pick what best fits your project requirements.

Is this system similar to Retrieval Augmented Generation (RAG)?

No. While both use vector searches, RAG integrates retrieval with language generation for tasks like question answering. Here, the focus is purely on visual similarity recommendations.

I am a Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), Computer Vision (CV), Predictive Modeling, Machine Learning, Recommendation Systems, and Cloud Computing.

I specialize in training ML/DL models tailored to specific use cases.

I build Vector Database applications to enable LLMs to access external data for more precise question answering.

I fine-tune LLMs on domain-specific data.

I leverage LLMs to generate structured outputs for automating data extraction from unstructured text.

I design AI solution architectures on AWS following best practices.

I am passionate about exploring new technologies and solving complex AI problems, and I look forward to contributing valuable insights to the Analytics Vidhya community.

