From d56b9c13f8563dfc3c944dc66e097b99481b7685 Mon Sep 17 00:00:00 2001
From: Satoshi Sato <36268688+ProtonSato@users.noreply.github.com>
Date: Sun, 11 Feb 2018 17:27:16 +0100
Subject: [PATCH] Documentation and grammar (#167)

* Fixed grammatical mistake

* Improved documentation and fixed spelling and grammar
---
 INSTALL.md | 26 +++++---------------------
 README.md  | 18 +++++++-----------
 USAGE.md   | 43 ++++++++++++++++++++++---------------------
 3 files changed, 34 insertions(+), 53 deletions(-)

diff --git a/INSTALL.md b/INSTALL.md
index b0f0a46..fb387aa 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -4,7 +4,6 @@ Machine learning essentially involves a ton of trial and error. You're letting a
 The type of computations that the process does is well suited for graphics cards, rather than regular processors. **It is pretty much required that you run the training process on a desktop or server capable GPU.** Running this on your CPU means it can take weeks to train your model, compared to several hours on a GPU.
 
 ## Hardware Requirements
-
 **TL;DR: you need at least one of the following:**
 
 - **A powerful CPU**
@@ -18,7 +17,6 @@ The type of computations that the process does are well suited for graphics card
 - **A lot of patience**
 
 ## Supported operating systems:
-
 - **Windows 10**
   Windows 7 and 8 might work. Your mileage may vary.
 - **Linux**
@@ -28,9 +26,7 @@ The type of computations that the process does are well suited for graphics card
 
 Alternatively there is a docker image that is based on Debian.
 
-
 # Important before you proceed
-
 **In its current iteration, the project relies heavily on the use of the command line. If you are unfamiliar with command line tools, you should not attempt any of the steps described in this guide.** Wait instead for this tool to become usable, or start learning more about working with the command line. This guide assumes you have intermediate knowledge of the command line.
 
 The developers are also not responsible for any damage you might cause to your own computer.
@@ -40,16 +36,13 @@ The developers are also not responsible for any damage you might cause to your o
 ## Installing dependencies
 ### Python 3.6
-
 Note that you will need the 64bit version of Python, especially to set up the GPU version!
 
 #### Windows
-
-Download the latest version of Python 3 from Python.org: https://www.python.org/downloads/release/python-364/
+Download the latest version of Python 3 from Python.org: https://www.python.org/downloads/release/python-364
 
 #### macOS
-
-By default, macOS comes with Python 2.7. For best usage, need Python 3.6. The easiest way to do so is to install it through `Homebrew`. If you are not familiar with `homebrew`, read more about it here: https://brew.sh/
+By default, macOS comes with Python 2.7. For this project, you need Python 3.6. The easiest way to get it is to install it through `Homebrew`. If you are not familiar with `Homebrew`, read more about it here: https://brew.sh
 
 To install Python 3.6:
@@ -58,11 +51,9 @@ brew install python3
 ```
 
 #### Linux
-
 You know how this works, don't you?
 
 ### Virtualenv
-
 Install virtualenv next. Virtualenv helps us make a contained environment for our project. This means that any python packages we install for this project will be compartmentalized to this specific environment. We'll install virtualenv with `pip`, which is Python's package/dependency manager.
 
 ```pip install virtualenv```
 
 Alternatively, if your Linux distribution provides its own virtualenv through apt or yum, you can use that as well.
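For readers on Linux or macOS, the virtualenv flow described above amounts to only a few commands. A minimal sketch, assuming Python 3.6 is available as `python3`, and using the `faceswap_env` directory name that the activation command later in this guide refers to:

```bash
# Install virtualenv with pip, Python's package manager
pip install virtualenv

# Create an isolated environment in ./faceswap_env using Python 3
virtualenv -p python3 faceswap_env/

# Activate it; the environment name will appear in the prompt
source faceswap_env/bin/activate

# When you are done working, leave the environment
deactivate
```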
 #### Windows specific:
-
 `virtualenvwrapper-win` is a package that makes virtualenvs easier to manage on Windows.
 
 ```pip install virtualenvwrapper-win```
 
-
 ## Getting the faceswap code
-
-Simply download the code from http://github.com/deepfakes/faceswap/ - For development it is recommended to use git instead of downloading the code and extracting it.
+Simply download the code from http://github.com/deepfakes/faceswap - for development, it is recommended to use git instead of downloading the code and extracting it.
 
 For now, extract the code to a directory where you're comfortable working with it. Navigate to it with the command line. For our example we will use `~/faceswap/` as our project directory.
 
 ## Setting up our virtualenv
-
 ### First steps
-
 We will now initialize our virtualenv:
 
 ```
@@ -103,14 +89,13 @@ mkvirtualenv faceswap
 setprojectdir .
 ```
 
-This will create a folder with python, pip, and setuptools all ready to go in its own little environment. It will also activate the Virtual Environment which is indicated with the (faceswap) on the left side of the prompt. Anything we install now will be specific to this project. And available to the projects we connect to this environment. 
+This will create a folder with python, pip, and setuptools all ready to go in its own little environment. It will also activate the virtual environment, which is indicated by the `(faceswap)` on the left side of the prompt. Anything we install now will be specific to this project, and available to the projects we connect to this environment.
 
 Let's say you're content with the work you've contributed to this project and you want to move on to something else in the command line. Simply type `deactivate` to deactivate your environment.
 
 To reactivate your environment on Windows, you can use `workon faceswap`. On Mac and Linux, you can use `source ./faceswap_env/bin/activate`. Note that the Mac/Linux command is relative to the project and virtualenv directory.
 
 ### Setting up for our project
-
 With your virtualenv activated, install the dependencies from the requirements files. Like so:
 
 ```bash
 pip install -r requirements.txt
 ```
@@ -123,7 +108,7 @@ If you want to use your GPU instead of your CPU, substitute `requirements.txt` w
 pip install -r requirements-gpu.txt
 ```
 
-Should you choose the GPU version, Tensorflow might ask you to install the CUDA Toolkit and the cuDNN libraries. Instructions on installing those can be found on Nvidia's website. (For Ubuntu, maybe all Linux, see: https://yangcha.github.io/Install-CUDA8/)
+Should you choose the GPU version, Tensorflow might ask you to install the [CUDA Toolkit](https://developer.nvidia.com/cuda-zone) and the [cuDNN libraries](https://developer.nvidia.com/cudnn). Instructions on installing those can be found on Nvidia's website. (For Ubuntu, and possibly other Linux distributions, see: https://yangcha.github.io/Install-CUDA8)
 
 Once all these requirements are installed, you can attempt to run the faceswap tools. Use the `-h` or `--help` options for a list of options.
 
 ```
 python faceswap.py -h
 ```
 
 Proceed to [../blob/master/USAGE.md](USAGE.md)
 
 ## Notes
-
 This guide is far from complete. Functionality may change over time, and new dependencies are added and removed as time goes on. If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.
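Taken together, the installation steps above boil down to a short command sequence. A minimal sketch for Linux/macOS, assuming git and Python 3.6 are already installed (the clone URL is inferred from the repository address mentioned above):

```bash
# Fetch the code (or download and extract it manually)
git clone https://github.com/deepfakes/faceswap.git ~/faceswap
cd ~/faceswap

# Create and activate an isolated environment
virtualenv faceswap_env/
source faceswap_env/bin/activate

# Install the CPU dependencies (substitute requirements-gpu.txt for GPU support)
pip install -r requirements.txt

# Check that the tools run and list the available options
python faceswap.py -h
```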
diff --git a/README.md b/README.md
index a611d72..4ccc55a 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,8 @@
-
 **Notice:** This repository is not operated or maintained by [/u/deepfakes](https://www.reddit.com/user/deepfakes/). Please read the explanation below for details.
 
 ---
 
 # deepfakes_faceswap
-
 Faceswap is a tool that utilizes deep learning to recognize and swap faces in pictures and videos.
 
 ## Overview
@@ -24,18 +22,17 @@ From your setup folder, run `python faceswap.py train`. This will take photos fr
 From your setup folder, run `python faceswap.py convert`. This will take photos from the `original` folder and apply new faces into the `modified` folder.
 
 #### General notes:
-- All of the scripts mentioned have `-h`/`--help` options with a arguments that they will accept. You're smart, you can figure out how this works, right?!
+- All of the scripts mentioned have `-h`/`--help` options that list the arguments they will accept. You're smart, you can figure out how this works, right?!
 
-Note: there is no conversion for video yet. You can use MJPG to convert video into photos, process images, and convert images back to video
+Note: there is no conversion for video yet. You can use [ffmpeg](https://www.ffmpeg.org) to convert video into photos, process the images, and convert the images back to video.
 
-## Training Data 
-**Whole project with training images and trained model (~300MB):** 
+## Training Data
+**Whole project with training images and trained model (~300MB):**
 https://anonfile.com/p7w3m0d5be/face-swap.zip or [click here to download](https://anonfile.com/p7w3m0d5be/face-swap.zip)
 
 ## How To set up and run the project
-### Setup
-
+### Setup
 Clone the repo and set up your environment. There is a Dockerfile that should kickstart you. Otherwise you can set up things manually; see the Dockerfiles for dependencies.
 
 Check out [../blob/master/INSTALL.md](INSTALL.md) and [../blob/master/USAGE.md](USAGE.md) for basic information on how to configure virtualenv and use the program.
@@ -82,7 +79,7 @@ Also note that it does not have a GUI output, so the train.py will fail on showi
 
 - **Notice** Any issue related to running the code has to be opened in the 'faceswap-playground' project!
 
 ### For haters
-Sorry no time for that
+Sorry, no time for that.
 
 # About github.com/deepfakes
@@ -90,7 +87,7 @@ Sorry no time for that
 It is a community repository for active users.
 
 ## Why this repo?
-The joshua-wu repo seems not active. Simple bugs like missing _http://_ in front of url has not been solved since days.
+The joshua-wu repo seems not to be active. Simple bugs, like a missing _http://_ in front of urls, have not been fixed for days.
 
 ## Why is it named 'deepfakes' if it is not /u/deepfakes?
 1. Because a typosquat would have happened sooner or later as the project grows
@@ -103,7 +100,7 @@ This is a friendly typosquat, and it is fully dedicated to the project. If /u/de
 
 # About machine learning
 
 ## How does a computer know how to recognise/shape a face? How does machine learning work? What is a neural network?
-
 It's complicated. Here's a good video that makes the process understandable:
 [![How Machines Learn](https://img.youtube.com/vi/R9OHn5ZF4Uo/0.jpg)](https://www.youtube.com/watch?v=R9OHn5ZF4Uo)
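The overview above reduces to three commands, shown here with the directory layout used in USAGE.md below. This is a sketch of the workflow, not a verbatim excerpt from the docs; the exact arguments are listed by each script's `-h` output:

```bash
# 1. Crop the faces out of the raw photos to build the training data
python faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
python faceswap.py extract -i ~/faceswap/photo/cage -o ~/faceswap/data/cage

# 2. Train the model on the two face sets (-p shows a preview)
python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/ -p

# 3. Swap the faces in new pictures using the trained model
python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~/faceswap/models/
```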
diff --git a/USAGE.md b/USAGE.md
index 7f16221..93a7012 100644
--- a/USAGE.md
+++ b/USAGE.md
@@ -1,7 +1,6 @@
-**Before attempting any of this, please make sure you have read, understood and completed [the installation instructions](../master/INSTALL.md). If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.**
+**Before attempting any of this, please make sure you have read, understood and completed the [installation instructions](../master/INSTALL.md). If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.**
 
 # Workflow
-
 So, you want to swap faces in pictures and videos? Well hold up, because first you gotta understand what this collection of scripts will do, how it does it and what it can't currently do.
 
 The basic operation of this script is simple. It trains a machine learning model to recognize and transform two faces based on pictures. The machine learning model is our little "bot" that we're teaching to do the actual swapping, and the pictures are the "training data" that we use to train it. Note that the bot is primarily processing faces. Other objects might not work.
@@ -9,17 +8,15 @@ The basic operation of this script is simple. It trains a machine learning model
 So here's our plan. We want to create a reality where Donald Trump lost the presidency to Nic Cage; we have his inauguration video; let's replace Trump with Cage.
 
 ## Gathering raw data
-
 In order to accomplish this, the bot needs to learn to recognize both face A (Trump) and face B (Nic Cage). By default, the bot doesn't know what a Trump or a Nic Cage looks like. So we need to show it some pictures and let it guess which is which. So we need pictures of both of these faces first.
 
-A possible source is Google, DuckDuckGo or Bing image search. There are scripts to download large amounts of images. Alternatively, if you have a lot of videos of the person you're looking for (like interviews, public speeches, movies), you can convert a video to still images/frames and use those.
+A possible source is Google, DuckDuckGo or Bing image search. There are scripts to download large amounts of images. Alternatively, if you have a video of the person you're looking for (from interviews, public speeches, or movies), you can convert this video to still images and use those. See [Extracting video frames](#extracting-video-frames) for more information.
 
 Feel free to list your image sets in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground), or add more methods to this file.
 
 So now we have a folder full of pictures of Trump and a separate folder of Nic Cage. Let's save them in the directory where we put the faceswap project. Example: `~/faceswap/photo/trump` and `~/faceswap/photo/cage`
 
 ## EXTRACT
-
 So here's a problem. We have a ton of pictures of both our subjects, but they're just pictures of them doing stuff or in an environment with other people. Their bodies are on there, they're on there with other people... It's a mess. We can only train our bot if the data we have is consistent and focuses on the subject we want to swap. This is where faceswap first comes in.
 
 ```bash
@@ -31,15 +28,20 @@ python faceswap.py extract -i ~/faceswap/photo/cage -o ~/faceswap/data/cage
 We specify our photo input directory and the output folder where our training data will be saved. The script will then try its best to recognize face landmarks, crop the images to a consistent size, and save them to the output folder.
 
 Note: this script will make grabbing test data much easier, but it is not perfect. It will (incorrectly) detect multiple faces in some photos and does not recognize if the face is the person we want to swap. Therefore: **Always check your training data before you start training.** The training data will influence how good your model will be at swapping.
 
-## TRAIN
+You can see the full list of arguments for extracting via the help flag, i.e.
+
+```bash
+python faceswap.py extract -h
+```
+
+## TRAIN
 The training process will take the longest, especially on CPU. We specify the folders where the two faces are, and where we will save our training model. It will start hammering the training data once you run the command. I personally like to watch the preview and quit the process once I'm happy with the results.
 
 ```bash
 python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/
 # or -p to show a preview
 python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/ -p
-````
+```
 
 If you use the preview feature, select the preview window and press Q to save your processed data and quit gracefully. Without the preview enabled, you might have to forcefully quit by hitting Ctrl+C to cancel the command. Note that it will save the model once it's gone through about 100 iterations, which can take quite a while. So make sure you save before stopping the process.
 
@@ -47,20 +49,11 @@ You can see the full list of arguments for training via help flag. i.e.
 
 ```bash
 python faceswap.py train -h
-````
+```
 
 ## CONVERT
-
 Now that we're happy with our trained model, we can convert our video. How does it work? Similarly to the extraction script, actually! The conversion script basically detects a face in a picture using the same algorithm, quickly crops the image to the right size, runs our bot on this cropped image of the face it has found, and then (crudely) pastes the processed face back into the picture.
 
-You can see the full list of arguments available for converting via help flag. i.e.
-
-```bash
-python faceswap.py convert -h
-````
-
-### Testing out our bot
-
 Remember those initial pictures we had of Trump? Let's try swapping a face there. We will use that directory as our input directory, create a new folder where the output will be saved, and tell it which model to use.
 
 ```bash
@@ -69,22 +62,30 @@ python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~
 
 It should now start swapping faces in all of these pictures.
 
-### Preparing a video
+You can see the full list of arguments available for converting via the help flag, i.e.
 
-A video is just a series of pictures (frames). You can export a video to still frames using `ffmpeg` for example. Below is an example command to process a video to frames.
+```bash
+python faceswap.py convert -h
+```
+
+## Videos
+A video is just a series of pictures in the form of frames. Therefore you can gather the raw images from one for your dataset, or combine your results back into a video.
+
+## Extracting video frames
+You can split a video into separate frames using [ffmpeg](https://www.ffmpeg.org), for instance. Below is an example command to process a video into separate frames.
 
 ```bash
 ffmpeg -i /path/to/my/video.mp4 /path/to/output/video-frame-%d.png
 ```
 
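One detail worth noting: ffmpeg only picks up frames whose names match the pattern you give it, so the numbering format used when extracting must match the one used when stitching. The extraction example above writes unpadded names (`video-frame-%d.png`), while the stitching command below expects zero-padded ones (`video-frame-%04d.png`). A variant of the extraction command that produces zero-padded names compatible with the stitching step:

```bash
# Writes video-frame-0001.png, video-frame-0002.png, ... so the
# %04d pattern in the stitching command below will find them
ffmpeg -i /path/to/my/video.mp4 /path/to/output/video-frame-%04d.png
```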
-If you then use the resulting directory with frames to faceswap, it will automatically go through all of those.
-And here's a command to stitch png frames to a single video again:
+## Generating a video
+If you split a video into frames, using [ffmpeg](https://www.ffmpeg.org) for example, and used those frames as the target for swapping faces onto, you can combine the processed frames again. The command below stitches the png frames back into a single video.
 
 ```bash
 ffmpeg -i video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
 ```
 
 ## Notes
-
 This guide is far from complete. Functionality may change over time, and new dependencies are added and removed as time goes on. If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.
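For video work, the pieces of this guide chain together into one pipeline. A rough end-to-end sketch, assuming a model has already been trained; the paths are illustrative, and it is assumed that `convert` mirrors the input frame names in its output folder:

```bash
# 1. Split the source video into zero-padded png frames
ffmpeg -i ~/faceswap/video.mp4 ~/faceswap/frames/video-frame-%04d.png

# 2. Swap the faces on every frame with the trained model
python faceswap.py convert -i ~/faceswap/frames/ -o ~/faceswap/swapped/ -m ~/faceswap/models/

# 3. Stitch the processed frames back into a single video
ffmpeg -i ~/faceswap/swapped/video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
```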