AI technologies like Deepfake and face swap are becoming more and more common in our everyday digital lives. We often scroll past reels and videos of some of our favourite celebrities or political leaders doing all kinds of funny things. While we know they are all fake and AI-generated, have you ever wondered how they are made? One of the tools used to make such content is Deep Live Cam. It’s an open-source tool that lets you swap faces in videos in real time and create Deepfakes using just a single image. In this blog, we will learn how Deep Live Cam works, how to set it up, and what to keep in mind when using real-time face swap tools responsibly.
Deep Live Cam is an AI-based application that enables real-time face swaps on live video feeds and supports one-click Deepfake video generation. Using machine learning models, it maps one person’s face onto another while preserving natural expressions, head movement, lighting, and angles. Designed with simplicity in mind, the tool requires just a single source image to produce realistic results.
Here are some of its key features:
Deep Live Cam combines a few different AI models to power its real-time face swap functions. These include:
This section guides you through installing Deep Live Cam. Follow these steps carefully for a successful setup. Proper installation prepares the software for real-time face swap and deepfake video generation.
Deep Live Cam recommends using Python version 3.10. Newer versions, like 3.12 or 3.13, may cause errors such as: ModuleNotFoundError: No module named 'distutils'. This happens because distutils was removed from Python 3.12 onward, so sticking with Python 3.10 avoids the problem.
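If you are unsure which interpreter a given command actually runs, a quick stdlib-only check can confirm you are on the 3.10 series before going further. The `python_ok` helper below is just an illustration, not part of Deep Live Cam:

```python
import sys

def python_ok(version_info=sys.version_info, required=(3, 10)):
    """Return True when the interpreter matches the recommended 3.10 series."""
    return tuple(version_info[:2]) == required

if __name__ == "__main__":
    print(sys.version)   # full interpreter version string
    print(python_ok())   # True only on Python 3.10.x
```

Run this with the same `python` command you plan to use for the install; if it prints False, point your commands at a 3.10 interpreter instead.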
Visit the official Python release page here.
Deep Live Cam relies on FFmpeg for video processing.
Download FFmpeg: We are running on Linux, so we’ll use a static build:
# Make a directory in your home for FFmpeg
mkdir -p ~/apps/ffmpeg && cd ~/apps/ffmpeg
# Download a static build of FFmpeg for Linux
wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
# Extract it
tar -xf ffmpeg-release-amd64-static.tar.xz
# Enter the extracted directory
cd ffmpeg-*-amd64-static
# Test it
ffmpeg -version
This prints the installed FFmpeg version. Next, add FFmpeg to your PATH. Note that a glob like ffmpeg-* is not expanded inside a quoted variable assignment, so use the resolved directory; since the previous step left us inside it, $PWD works:
export PATH="$PWD:$PATH"
This change lasts only for the current shell session. To make it permanent, append the same export line (with the full directory path written out) to your ~/.bashrc.
Next, get the Deep Live Cam project files.
Clone with Git: Open your terminal or command prompt. Navigate to your desired directory using cd /path/to/your/directory. Then, run:
git clone https://github.com/hacksider/Deep-Live-Cam.git
The terminal will show the cloning progress. Then change into the project directory:
cd Deep-Live-Cam
Deep Live Cam needs specific AI models to function.
Using a virtual environment (venv) is recommended. venv is a standard Python tool that creates isolated environments, so each project can keep its own package versions without conflicting with other projects or cluttering your main Python installation.
Create Virtual Environment: Open your terminal in the Deep-Live-Cam root directory. Run:
python -m venv deepcam
If you have multiple Python versions, specify Python 3.10 using its full path:
/path/to/your/python3.10 -m venv deepcam
Activate Virtual Environment:
On macOS/Linux:
source deepcam/bin/activate
Your command line prompt should now show (deepcam) at the beginning.
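If your shell customizes the prompt and the (deepcam) prefix is hard to spot, you can also confirm activation from inside Python. This small check relies on the standard behaviour that a venv redirects sys.prefix while sys.base_prefix keeps pointing at the system installation:

```python
import sys

def in_virtualenv():
    """A venv changes sys.prefix but not sys.base_prefix,
    so the two differ only when a venv is active."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if __name__ == "__main__":
    print(in_virtualenv())
```

Running `python -c "import sys; print(sys.prefix)"` with the venv active should likewise print a path inside your deepcam directory.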
Install Required Packages: With the virtual environment active, run:
pip install -r requirements.txt
This process may take a few minutes, as it downloads all the libraries the app requires.
After installing dependencies, you can run the program.
Execute the following command in your terminal (ensure venv is active):
python run.py
Note: The first time you run this, the program will download additional model files (around 300MB).
Your Deep Live Cam should now be ready for CPU-based operation:
Upload a source face and a target face, then click on “Start”. The app will swap the face from the source image onto the target.
Output:
We can see that the model performs well and produces a convincing output.
Testing the Live Feature
For testing the live feature, select a face and then click on live from the available options.
Output:
The model’s outputs in the live feature are also commendable, although the frame rate is quite low because of the expensive computation running in the background.
We also noticed that the model does not lose accuracy when we wear glasses. It is able to swap the face even when an object comes between the face and the camera.
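The choppiness comes down to per-frame cost: if each swapped frame takes, say, 200 ms to compute, the live feed tops out at about 5 FPS. Here is a minimal, hypothetical timing helper you could use to profile this yourself; the `process_frame` callback is a stand-in for whatever per-frame work (face detection plus swapping) you want to measure, and is not part of Deep Live Cam’s API:

```python
import time

def measure_fps(process_frame, frames, warmup=1):
    """Estimate frames-per-second for a per-frame processing callback.

    The first `warmup` frames are excluded so that one-time setup cost
    (model loading, memory allocation) does not skew the estimate.
    """
    for frame in frames[:warmup]:
        process_frame(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    timed = len(frames) - warmup
    return timed / elapsed if elapsed > 0 else float("inf")
```

For example, `measure_fps(lambda f: time.sleep(0.2), list(range(11)))` reports roughly 5 FPS, matching the intuition above.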
For faster performance, you can use GPU acceleration if your hardware supports it.
Install CUDA Toolkit: Ensure you have CUDA Toolkit 11.8 installed from NVIDIA’s website.
Install Dependencies:
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
Run with CUDA:
python run.py --execution-provider cuda
If the program window opens without errors, CUDA acceleration is working.
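You can also verify from Python which execution providers onnxruntime actually sees. The `available_providers` helper below is just a convenience wrapper around onnxruntime’s own API; it returns an empty list when the package isn’t installed:

```python
def available_providers():
    """List onnxruntime execution providers, or [] if it isn't installed."""
    try:
        import onnxruntime as ort
    except ImportError:
        return []
    return ort.get_available_providers()

if __name__ == "__main__":
    providers = available_providers()
    print(providers)
    print("CUDA ready:", "CUDAExecutionProvider" in providers)
```

If "CUDAExecutionProvider" is missing from the list, double-check that onnxruntime-gpu (not plain onnxruntime) is installed and that your CUDA Toolkit version matches.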
Executing python run.py launches the application window.
Face area showing a black block? If you experience this issue, try these commands within your activated venv environment:
For Nvidia GPU users:
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
Then, try running the program again:
python run.py
Also Read: How to Detect and Handle Deepfakes in the Age of AI?
I tested Deep Live Cam using clear photos of celebrities Sam Altman and Elon Musk, applying the real-time face swap feature to my live webcam feed. The results were quite good:
Deep Live Cam offers exciting uses, but it also brings significant risks, and its real-time face swap ability needs careful thought. Some of the major concerns include identity theft, financial fraud, misinformation, and privacy violations.
Users must understand these dangers and use Deep Live Cam responsibly. Implementing safeguards helps: watermarking deepfake content is one step, and obtaining consent before using someone’s likeness is crucial. These actions can reduce potential misuse.
Also Read: An Introduction to Deepfakes with Only One Source Video
Deep Live Cam makes real-time face swaps and Deepfake videos easy to create, even with minimal technical skills. While it’s a powerful tool for creators and educators, its ease of use also raises serious concerns. The potential for misuse, like identity theft, misinformation, or privacy violations, is real. That’s why it’s important to use this technology responsibly. Always get consent, add safeguards like watermarks, and avoid deceptive use. Deepfake tools can enable creativity, but only when used with care.
A. Deep Live Cam is an AI tool. It swaps faces in live video. It also creates deepfake videos from one image.
A. You need Python (version 3.10 is recommended) and specific libraries. Pre-trained AI models are also required. A capable computer (CPU, NVIDIA GPU, or Apple Silicon) is best.
A. It aims for user-friendliness for tasks like one-click deepfakes. However, initial setup might require some technical skill.
A. Yes, significant risks exist. These include identity theft, financial fraud, and misinformation. Ethical use is essential.
A. Yes. It uses models such as GFPGAN. These models enhance the swapped face, aiming for a more realistic appearance.