Unified Dependency Solution (#370)

* Unified install helper

Smart-load dependencies from requirements-docker.txt

Working GUI in Docker

Compile dlib with AVX & CUDA

* Improve TensorFlow version selection
DKing
2018-05-11 02:39:11 +08:00
committed by torzdf
parent 80cde77a6d
commit 431ff543d5
12 changed files with 508 additions and 164 deletions

33
Dockerfile.cpu Normal file → Executable file

@@ -1,29 +1,14 @@
FROM debian:stretch
FROM tensorflow/tensorflow:latest-py3
# install debian packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -qq \
&& apt-get install --no-install-recommends -y \
# install essentials
build-essential \
# install python 3
python3.5 \
python3-dev \
python3-pip \
python3-wheel \
# Boost for dlib
cmake \
libboost-all-dev \
# requirements for keras
python3-h5py \
python3-yaml \
python3-pydot \
python3-setuptools \
RUN apt-get update -qq -y \
&& apt-get install -y cmake libsm6 libxrender1 libxext-dev python3-tk\
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY ./requirements-python35.txt .
RUN pip3 --no-cache-dir install -r ./requirements-python35.txt
COPY requirements-docker.txt /opt/
RUN pip3 install cmake
RUN pip3 install dlib --install-option=--yes --install-option=USE_AVX_INSTRUCTIONS
RUN pip3 --no-cache-dir install -r /opt/requirements-docker.txt && rm /opt/requirements-docker.txt
WORKDIR /srv/
WORKDIR "/notebooks"
CMD ["/run_jupyter.sh", "--allow-root"]
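To verify that the GUI-related system packages the new Dockerfile installs (libsm6, libxrender1, libxext-dev, python3-tk) line up with the Python dependencies, a quick import check inside the running container is enough. This is a sketch for verification only, not a file in the repository:

```python
# Minimal sanity check to run inside the CPU container (illustrative only):
# the imports cover the Tk/GUI and OpenCV pieces added by this Dockerfile.
import tkinter          # provided by the python3-tk apt package
import cv2              # opencv-python from requirements-docker.txt

print("Tk version:", tkinter.TkVersion)
print("OpenCV version:", cv2.__version__)
```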

6
Dockerfile.gpu Normal file → Executable file

@@ -5,8 +5,10 @@ RUN apt-get update -qq -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY requirements-docker-gpu.txt /opt/
RUN pip3 --no-cache-dir install -r /opt/requirements-docker-gpu.txt && rm /opt/requirements-docker-gpu.txt
COPY requirements-docker.txt /opt/
RUN pip3 install dlib --install-option=--yes --install-option=USE_AVX_INSTRUCTIONS
RUN pip3 --no-cache-dir install -r /opt/requirements-docker.txt && rm /opt/requirements-docker.txt
WORKDIR "/notebooks"
CMD ["/run_jupyter.sh", "--allow-root"]
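Both images now compile dlib from source with the AVX install option, and the GPU image can additionally pick up CUDA at build time. Whether the resulting module was built against CUDA can be inspected from Python via the `dlib.DLIB_USE_CUDA` flag exposed by the bindings; the snippet is only an illustrative check, not part of the commit:

```python
# Report how the freshly compiled dlib was built (run inside either container).
import dlib

print("dlib version:", dlib.__version__)
print("dlib compiled with CUDA support:", dlib.DLIB_USE_CUDA)
```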

145
INSTALL.md Normal file → Executable file

@@ -33,86 +33,103 @@ The developers are also not responsible for any damage you might cause to your o
# Installation Instructions
Basically, you can follow the hints given by `install-guide.py` to finish the environment setup. The script provides instructions and links depending on the state of your system.
## Installing dependencies
- Python >= 3.2
- apt/yum install python3 (Linux)
- [Installer](https://www.python.org/downloads/) (Windows)
- [brew](https://brew.sh/) install python3 (macOS)
- [virtualenv](https://github.com/pypa/virtualenv) and [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io) may help when you are not using docker.
### Python >= 3.2
Note that you will need the 64bit version of Python, especially if you want to set up the GPU version!
#### Windows
Download the latest version of Python 3 from Python.org: https://www.python.org/downloads/release/python-364
#### macOS
By default, macOS comes with Python 2.7; for this project you need at least Python 3.2. The easiest way to get it is to install it through `Homebrew`. If you are not familiar with `Homebrew`, read more about it here: https://brew.sh/
To install Python 3.2 or higher:
```
brew install python3
```
#### Linux
You know how this works, don't you?
### Virtualenv
Install virtualenv next. Virtualenv creates an isolated environment for our project, which means that any Python packages we install for this project are compartmentalized to this specific environment. We'll install virtualenv with `pip`, Python's package/dependency manager.
```pip install virtualenv```
or
```pip3 install virtualenv```
Alternatively, if your Linux distribution provides its own virtualenv package through apt or yum, you can use that as well.
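As an aside (not part of the official instructions), Python 3.3+ ships a standard-library `venv` module that can create a comparable isolated environment; the directory name below is only an example:

```python
# Create an isolated environment without installing virtualenv (sketch only).
import venv

venv.create("faceswap_env", with_pip=True)  # roughly equivalent to `virtualenv faceswap_env/`
```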
#### Windows specific:
`virtualenvwrapper-win` is a package that makes virtualenvs easier to manage on Windows.
```pip install virtualenvwrapper-win```
## Getting the faceswap code
Simply download the code from http://github.com/deepfakes/faceswap - for development it is recommended to clone with git instead of downloading the code and extracting it.
For now, extract the code to a directory where you're comfortable working with it. Navigate to it with the command line. For our example we will use `~/faceswap/` as our project directory.
## Setting up our virtualenv
### First steps
We will now initialize our virtualenv:
## Setting up for our project
Information to help you decide on each option:
- CUDA: GPU acceleration. Requires a decent Nvidia graphics card with CUDA support.
- Docker: Provides a ready-made image, hides the trivial details, and gets you straight into the project.
- Nvidia-Docker: Gives the container access to the host machine's Nvidia GPU.
A sample session using CUDA with Docker, in about 20 minutes:
```
virtualenv faceswap_env/
INFO The tool provides tips for installation
and installs required python packages
INFO Setup in Linux 4.14.39-1-MANJARO
INFO Installed Python: 3.6.5 64bit
INFO Installed PIP: 10.0.1
Enable Docker? [Y/n]
INFO Docker Enabled
Enable CUDA? [Y/n]
INFO CUDA Enabled
INFO 1. Install Docker
https://www.docker.com/community-edition
2. Install latest CUDA
CUDA: https://developer.nvidia.com/cuda-downloads
3. Install Nvidia-Docker
https://github.com/NVIDIA/nvidia-docker
4. Build Docker Image For Faceswap
docker build -t deepfakes-gpu -f Dockerfile.gpu .
5. Mount faceswap volume and Run it
# without gui. tools.py gui working.
docker run -p 8888:8888 --hostname faceswap-gpu --name faceswap-gpu -v /opt/faceswap:/srv faceswap-gpu
# with gui. tools.py gui not working.
docker run -p 8888:8888 \
--hostname faceswap-gpu --name faceswap-gpu \
-v /opt/faceswap:/srv \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
-e AUDIO_GID=`getent group audio | cut -d: -f3` \
-e VIDEO_GID=`getent group video | cut -d: -f3` \
-e GID=`id -g` \
-e UID=`id -u` \
faceswap-gpu
6. Open a new terminal to interact with the project
docker exec faceswap-gpu python /srv/tools.py gui
```
On Windows you can use:
A successful setup log, without Docker:
```
mkvirtualenv faceswap
setprojectdir .
INFO The tool provides tips for installation
and installs required python packages
INFO Setup in Linux 4.14.39-1-MANJARO
INFO Installed Python: 3.6.5 64bit
INFO Installed PIP: 10.0.1
Enable Docker? [Y/n] n
INFO Docker Disabled
Enable CUDA? [Y/n]
INFO CUDA Enabled
INFO CUDA version: 9.1
INFO cuDNN version: 7
WARNING Tensorflow has no official prebuilt wheel for CUDA 9.1 currently.
To continue, you have to build and install your own tensorflow-gpu.
Help: https://www.tensorflow.org/install/install_sources
Are System Dependencies met? [y/N] y
INFO Installing Missing Python Packages...
INFO Installing tensorflow-gpu
INFO Installing pathlib==1.0.1
......
INFO Installing tqdm
INFO Installing matplotlib
INFO All python3 dependencies are met.
You are good to go.
```
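The `CUDA version` and `cuDNN version` lines in the log above are derived from the dynamic linker cache. A condensed sketch of what the helper does (the full logic lives in `Check_CUDA()` in setup.py, included later in this commit):

```python
# Condensed from Check_CUDA(): find libcudart in the ldconfig cache and
# strip the "libcudart.so." prefix to obtain the CUDA release, e.g. "9.1".
import os

libcudart = os.popen(
    'ldconfig -p | grep -P -o "libcudart.so.\\d.\\d" | head -n 1').read()
cuda_version = libcudart[len("libcudart.so."):].rstrip() if libcudart else ""
print(cuda_version if cuda_version else "CUDA not found")
```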
This will create a folder with python, pip, and setuptools, all ready to go in its own little environment. It will also activate the virtual environment, which is indicated by the `(faceswap)` prefix on the left side of the prompt. Anything we install now will be specific to this project, and available to any projects we connect to this environment.
Let's say you're content with the work you've contributed to this project and you want to move on to something else in the command line. Simply type `deactivate` to deactivate your environment.
To reactivate your environment on Windows, you can use `workon faceswap`. On Mac and Linux, you can use `source ./faceswap_env/bin/activate`. Note that the Mac/Linux command is relative to the project and virtualenv directory.
### Setting up for our project
With your virtualenv activated, install the dependencies from the requirements files. Like so:
```bash
pip install -r requirements.txt
```
If you want to use your GPU instead of your CPU, substitute `requirements.txt` with `requirements-gpu.txt`:
```bash
pip install -r requirements-gpu.txt
```
Should you choose the GPU version, Tensorflow might ask you to install the [CUDA Toolkit](https://developer.nvidia.com/cuda-zone) and the [cuDNN libraries](https://developer.nvidia.com/cudnn). Instructions on installing those can be found on Nvidia's website. (For Ubuntu, and possibly other Linux distributions, see: https://yangcha.github.io/Install-CUDA8)
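Once the CUDA Toolkit and cuDNN are in place, it is worth checking that the GPU build of TensorFlow can actually see the card before starting a training run. With the TensorFlow 1.x versions pinned in this commit, a check along these lines should work (sketch only):

```python
# Verify the tensorflow-gpu install can reach the GPU (TensorFlow 1.x API).
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())
```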
## Run the project
Once all these requirements are installed, you can attempt to run the faceswap tools. Use the `-h` or `--help` options for a list of options.
```bash

22
README.md Normal file → Executable file

@@ -33,9 +33,13 @@ https://anonfile.com/p7w3m0d5be/face-swap.zip or [click here to download](https:
## How To setup and run the project
### Setup
Clone the repo and set up your environment. There is a Dockerfile that should kickstart you. Otherwise you can set things up manually; see the Dockerfiles for dependencies.
Clone the repo and set up your environment.
Check out [../blob/master/INSTALL.md](INSTALL.md) and [../blob/master/USAGE.md](USAGE.md) for basic information on how to configure virtualenv and use the program.
Docker + Linux will get you straight to the point. Alternatively, you can set up every detail manually.
For more information, try the setup helper `install-guide.py`.
Check out [INSTALL.md](INSTALL.md) and [USAGE.md](USAGE.md) for basic information on how to configure virtualenv and use the program.
You also need a modern GPU with CUDA support for best performance
@@ -44,20 +48,6 @@ You also need a modern GPU with CUDA support for best performance
Reusing existing models will train much faster than starting from nothing.
If there is not enough training data, start with someone who looks similar, then switch the data.
#### Docker
If you prefer using Docker, you can start the project with:
- GPU:
- Prerequisite: Install [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) and a CUDA driver on the host machine.
- Build: `docker build -t deepfakes-gpu -f Dockerfile.gpu .`
- Run: `nvidia-docker run --name deepfakes-gpu -p 8888:8888 -v [src_folder]:/src -it deepfakes-gpu`
- Execute: `docker exec -it deepfakes-gpu bash`
- Tested working on training.
- CPU:
- Build: `docker build -t deepfakes -f Dockerfile.cpu .`
- Run: `docker run --rm --name deepfakes -v [src_folder]:/srv -it deepfakes bash`. `bash` can be replaced by the command you want to run.
- Note that Dockerfile.cpu does not include all required packages, so some Python 3 commands will fail.
- Also note that the container has no GUI output, so train.py will fail when it tries to display the preview image. You can comment that code out, or save the preview to a file instead (see the sketch after this list).
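One possible workaround for the missing GUI, assuming the preview is drawn with matplotlib (the actual preview code in the repository may differ): force a non-interactive backend and write the image to the mounted volume instead of displaying it.

```python
# Headless-friendly preview: select the Agg backend before pyplot is imported,
# then save the figure instead of calling plt.show().
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_title("training preview")
fig.savefig("/srv/preview.png")  # /srv is the mounted faceswap volume
```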
## How to contribute
### For people interested in the generative models

0
USAGE.md Normal file → Executable file


@@ -1,10 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
scikit-image
dlib
face_recognition
tqdm
matplotlib


@@ -1,12 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
tensorflow-gpu==1.4.0
ffmpy==0.2.2
scikit-image
dlib
face_recognition
tqdm
matplotlib


@@ -1,12 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
tensorflow-gpu==1.5.0
ffmpy==0.2.2
scikit-image
dlib
face_recognition
tqdm
matplotlib


@@ -1,12 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
tensorflow==1.4.1
ffmpy==0.2.2
scikit-image
dlib
face_recognition
tqdm
matplotlib


@@ -1,12 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
tensorflow==1.5.0
ffmpy==0.2.2
scikit-image
dlib
face_recognition
tqdm
matplotlib

18
requirements.txt Executable file

@@ -0,0 +1,18 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.1.2
opencv-python==3.3.0.10
scikit-image
cmake
dlib
face-recognition
tqdm
matplotlib
ffmpy==0.2.2
# tensorflow is included within the docker image.
# If you are looking for dependencies for a manual install,
# you may want to install tensorflow-gpu==1.4.0 for CUDA 8.0 or tensorflow-gpu>=1.5.0 for CUDA 9.0
# cmake needs to be installed before compiling dlib.
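The CUDA-to-TensorFlow mapping spelled out in the comments above is what the new setup helper automates. A condensed sketch of the selection rule (the real implementation is `Update_TF_Dep()` in setup.py below):

```python
# Pick a tensorflow package spec from the detected CUDA release (sketch only).
def pick_tensorflow(cuda_version):
    if cuda_version.startswith("8.0"):
        return "tensorflow-gpu==1.4.0"
    if cuda_version.startswith("9.0"):
        return "tensorflow-gpu"   # latest prebuilt wheel, >=1.5.0
    return None                   # e.g. CUDA 9.1: build tensorflow-gpu from source

print(pick_tensorflow("8.0"))     # -> tensorflow-gpu==1.4.0
```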

390
setup.py Executable file

@@ -0,0 +1,390 @@
#!/usr/bin/env python3
### >>> ENV
import os
import sys
import platform
OS_Version = (platform.system(), platform.release())
Py_Version = (platform.python_version(), platform.architecture()[0])
LD_LIBRARY_PATH = os.environ.get("LD_LIBRARY_PATH", None)
IS_ADMIN = False
IS_VIRTUALENV = False
CUDA_Version = ""
ENABLE_DOCKER = True
ENABLE_CUDA = True
COMPILE_DLIB_WITH_AVX_CUDA = True
Required_Packages = [
"tensorflow"
]
Installed_Packages = {}
Missing_Packages = []
# load requirements list
with open("requirements.txt") as req:
for r in req.readlines():
r = r.strip()
if r and (not r.startswith("#")):
Required_Packages.append(r)
### <<< ENV
### >>> OUTPUT
color_red = "\033[31m"
color_green = "\033[32m"
color_yellow = "\033[33m"
color_default = "\033[0m"
def __indent_text_block(text):
a = text.splitlines()
if len(a)>1:
b = a[0] + "\r\n"
for i in range(1, len(a)-1):
b = b + " " + a[i] + "\r\n"
b = b + " " + a[-1]
return b
else:
return text
def Term_Support_Color():
global OS_Version
return (OS_Version[0] == "Linux" or OS_Version[0] == "Darwin")
def INFO(text):
t = "%sINFO %s " % (color_green, color_default) if Term_Support_Color() else "INFO "
print(t + __indent_text_block(text))
def WARNING(text):
t = "%sWARNING%s " % (color_yellow, color_default) if Term_Support_Color() else "WARNING "
print(t + __indent_text_block(text))
def ERROR(text):
t = "%sERROR %s " % (color_red, color_default) if Term_Support_Color() else "ERROR "
print(t + __indent_text_block(text))
exit(1)
### <<< OUTPUT
def Check_Permission():
import ctypes, os
global IS_ADMIN
try:
IS_ADMIN = os.getuid() == 0
except AttributeError:
IS_ADMIN = ctypes.windll.shell32.IsUserAnAdmin() != 0
if IS_ADMIN:
INFO("Running as Root/Admin")
else:
WARNING("Running without root/admin privileges")
def Check_System():
global OS_Version
INFO("The tool provides tips for installation\nand installs required python packages")
INFO("Setup in %s %s" % (OS_Version[0], OS_Version[1]))
if not OS_Version[0] in ["Windows", "Linux", "Darwin"]:
ERROR("Your system %s is not supported!" % OS_Version[0])
def Enable_CUDA():
global ENABLE_CUDA
i = input("Enable CUDA? [Y/n] ")
if i == "" or i == "Y" or i == "y":
INFO("CUDA Enabled")
ENABLE_CUDA = True
else:
INFO("CUDA Disabled")
ENABLE_CUDA = False
def Enable_Docker():
global ENABLE_DOCKER
i = input("Enable Docker? [Y/n] ")
if i == "" or i == "Y" or i == "y":
INFO("Docker Enabled")
ENABLE_DOCKER = True
else:
INFO("Docker Disabled")
ENABLE_DOCKER = False
def Check_Python():
global Py_Version, IS_VIRTUALENV
# check if in virtualenv
IS_VIRTUALENV = (hasattr(sys, "real_prefix")
or (hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix))
if Py_Version[0].split(".")[0] == "3" and Py_Version[1] == "64bit":
INFO("Installed Python: {0} {1}".format(Py_Version[0],Py_Version[1]))
return True
else:
ERROR("Please run this script with Python3 64bit and try again.")
return False
def Check_PIP():
try:
try: # for pip >= 10
from pip._internal.utils.misc import get_installed_distributions, get_installed_version
except ImportError: # for pip <= 9.0.3
from pip.utils import get_installed_distributions, get_installed_version
global Installed_Packages
Installed_Packages = {pkg.project_name:pkg.version for pkg in get_installed_distributions()}
INFO("Installed PIP: " + get_installed_version("pip"))
return True
except ImportError:
ERROR("Import pip failed. Please Install python3-pip and try again")
return False
# only invoked in linux
def Check_CUDA():
global CUDA_Version
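# Look for libcudart.so.X.Y in the ldconfig cache first; if it is not there,
# fall back to scanning every directory listed on LD_LIBRARY_PATH.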
a=os.popen("ldconfig -p | grep -P -o \"libcudart.so.\d.\d\" | head -n 1")
libcudart = a.read()
if LD_LIBRARY_PATH and not libcudart:
paths = LD_LIBRARY_PATH.split(":")
for path in paths:
a = os.popen("ls {} | grep -P -o \"libcudart.so.\d.\d\" | head -n 1".format(path))
libcudart = a.read()
if libcudart:
break
if libcudart:
CUDA_Version = libcudart[13:].rstrip()
if CUDA_Version:
INFO("CUDA version: " + CUDA_Version)
else:
ERROR("""CUDA not found. Install and try again.
Recommended version: CUDA 9.0 cuDNN 7.1.3
CUDA: https://developer.nvidia.com/cuda-downloads
cuDNN: https://developer.nvidia.com/rdp/cudnn-download
""")
# only invoked in linux
def Check_cuDNN():
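# Same detection strategy as Check_CUDA(), but for libcudnn.so.X.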
a=os.popen("ldconfig -p | grep -P -o \"libcudnn.so.\d\" | head -n 1")
libcudnn = a.read()
if LD_LIBRARY_PATH and not libcudnn:
paths = LD_LIBRARY_PATH.split(":")
for path in paths:
a = os.popen("ls {} | grep -P -o \"libcudnn.so.\d\" | head -n 1".format(path))
libcudnn = a.read()
if libcudnn:
break
if libcudnn:
cudnn_version = libcudnn[12:].rstrip()
if cudnn_version:
INFO("cuDNN version: " + cudnn_version)
else:
ERROR("""cuDNN not found. Install and try again.
Recommended version: CUDA 9.0 cuDNN 7.1.3
CUDA: https://developer.nvidia.com/cuda-downloads
cuDNN: https://developer.nvidia.com/rdp/cudnn-download
""")
def Continue():
i = input("Are System Dependencies met? [y/N] ")
if i == "" or i == "N" or i == "n":
ERROR('Please install system dependencies to continue')
def Check_Missing_Dep():
global Missing_Packages, Installed_Packages
Missing_Packages = []
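# A requirement counts as missing if it is not installed at all, or if it is
# version-pinned ("pkg==x.y.z") and the installed version differs.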
for pkg in Required_Packages:
key = pkg.split("==")[0]
if not key in Installed_Packages:
Missing_Packages.append(pkg)
continue
else:
if len(pkg.split("=="))>1:
if pkg.split("==")[1] != Installed_Packages.get(key):
Missing_Packages.append(pkg)
continue
def Check_dlib():
global Missing_Packages, COMPILE_DLIB_WITH_AVX_CUDA
if "dlib" in Missing_Packages:
i = input("Compile dlib with AVX (and CUDA if enabled)? [Y/n] ")
if i == "" or i == "Y" or i == "y":
INFO("dlib Configured")
WARNING("Make sure you are using gcc-5/g++-5 and CUDA bin/lib in path")
COMPILE_DLIB_WITH_AVX_CUDA = True
else:
COMPILE_DLIB_WITH_AVX_CUDA = False
def Install_Missing_Dep():
global Missing_Packages
if len(Missing_Packages):
INFO("""Installing Required Python Packages. This may take some time...""")
try:
from pip._internal import main as pipmain
except:
from pip import main as pipmain
for m in Missing_Packages:
msg = "Installing {}".format(m)
INFO(msg)
# hide info/warning and fix cache hang
pipargs = ["install", "-qq", "--no-cache-dir"]
# install as user to solve perm restriction
if not IS_ADMIN and not IS_VIRTUALENV:
pipargs.append("--user")
# compile dlib with AVX ins and CUDA
if m.startswith("dlib") and COMPILE_DLIB_WITH_AVX_CUDA:
pipargs.extend(["--install-option=--yes", "--install-option=USE_AVX_INSTRUCTIONS"])
pipargs.append(m)
# pip install -qq (--user) (--install-options) m
pipmain(pipargs)
def Update_TF_Dep():
global CUDA_Version
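# Map the detected CUDA release to a prebuilt tensorflow-gpu wheel:
# CUDA 8.0 -> tensorflow-gpu==1.4.0, CUDA 9.0 -> latest tensorflow-gpu,
# anything else (e.g. 9.1) -> ask the user for a locally built wheel.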
Required_Packages[0] = "tensorflow-gpu"
if CUDA_Version.startswith("8.0"):
Required_Packages[0] += "==1.4.0"
elif not CUDA_Version.startswith("9.0"):
WARNING("Tensorflow has no official prebuilt wheel for CUDA 9.1 currently.\r\n"
"To continue, you have to build and install your own tensorflow-gpu.\r\n"
"Help: https://www.tensorflow.org/install/install_sources")
custom_tf = input("Location of custom tensorflow-gpu wheel (leave blank to manually install): ")
if not custom_tf:
del Required_Packages[0]
return
if os.path.isfile(custom_tf):
Required_Packages[0] = custom_tf
else:
ERROR("{} not found".format(custom_tf))
def Tips_1_1():
INFO("""1. Install Docker
https://www.docker.com/community-edition
2. Build Docker Image For Faceswap
docker build -t deepfakes-cpu -f Dockerfile.cpu .
3. Mount faceswap volume and Run it
# without gui. tools.py gui not working.
docker run -p 8888:8888 --hostname faceswap-cpu --name faceswap-cpu -v {path}:/srv faceswap-cpu
# with gui. tools.py gui working.
docker run -p 8888:8888 \\
--hostname faceswap-cpu --name faceswap-cpu \\
-v {path}:/srv \\
-v /tmp/.X11-unix:/tmp/.X11-unix \\
-e DISPLAY=unix$DISPLAY \\
-e AUDIO_GID=`getent group audio | cut -d: -f3` \\
-e VIDEO_GID=`getent group video | cut -d: -f3` \\
-e GID=`id -g` \\
-e UID=`id -u` \\
faceswap-cpu
4. Open a new terminal to run faceswap.py in /srv
docker exec -it faceswap-cpu bash
""".format(path=sys.path[0]))
INFO("That's all you need to do with a docker. Have fun.")
def Tips_1_2():
INFO("""1. Install Docker
https://www.docker.com/community-edition
2. Install latest CUDA
CUDA: https://developer.nvidia.com/cuda-downloads
3. Install Nvidia-Docker
https://github.com/NVIDIA/nvidia-docker
4. Build Docker Image For Faceswap
docker build -t deepfakes-gpu -f Dockerfile.gpu .
5. Mount faceswap volume and Run it
# without gui. tools.py gui working.
docker run -p 8888:8888 --hostname faceswap-gpu --name faceswap-gpu -v {path}:/srv faceswap-gpu
# with gui. tools.py gui not working.
docker run -p 8888:8888 \\
--hostname faceswap-gpu --name faceswap-gpu \\
-v {path}:/srv \\
-v /tmp/.X11-unix:/tmp/.X11-unix \\
-e DISPLAY=unix$DISPLAY \\
-e AUDIO_GID=`getent group audio | cut -d: -f3` \\
-e VIDEO_GID=`getent group video | cut -d: -f3` \\
-e GID=`id -g` \\
-e UID=`id -u` \\
faceswap-gpu
6. Open a new terminal to interact with the project
docker exec faceswap-gpu python /srv/tools.py gui
""".format(path=sys.path[0]))
def Tips_2_1():
INFO("""Tensorflow has no official prebuilt wheels for CUDA 9.1 currently.
1. Install CUDA 9.0 and cuDNN
CUDA: https://developer.nvidia.com/cuda-downloads
cuDNN: https://developer.nvidia.com/rdp/cudnn-download (Add DLL to %PATH% in Windows)
2. Install System Dependencies.
In Windows:
Install CMake x64: https://cmake.org/download/
In Debian/Ubuntu, try:
apt-get install -y cmake libsm6 libxrender1 libxext-dev python3-tk
3. Install PIP requirements
You may want to execute `chcp 866` in cmd line
to fix Unicode issues on Windows when installing dependencies
""")
def Tips_2_2():
INFO("""1. Install System Dependencies.
In Windows:
Install CMake x64: https://cmake.org/download/
In Debian/Ubuntu, try:
apt-get install -y cmake libsm6 libxrender1 libxext-dev python3-tk
2. Install PIP requirements
You may want to execute `chcp 866` in cmd line
to fix Unicode issues on Windows when installing dependencies
""")
def Main():
global ENABLE_DOCKER, ENABLE_CUDA, CUDA_Version, OS_Version
Check_System()
Check_Python()
Check_PIP()
# ask questions
Enable_Docker()
Enable_CUDA()
# warn if nvidia-docker on non-linux system
if OS_Version[0] != "Linux" and ENABLE_DOCKER and ENABLE_CUDA:
WARNING("Nvidia-Docker is only supported on Linux.\r\nOnly CPU is supported in Docker for your system")
Enable_Docker()
if ENABLE_DOCKER:
WARNING("CUDA Disabled")
ENABLE_CUDA = False
# provide tips
if ENABLE_DOCKER:
# docker, quick help
if not ENABLE_CUDA:
Tips_1_1()
else:
Tips_1_2()
else:
if ENABLE_CUDA:
# update dep info if cuda enabled
if OS_Version[0] == "Linux":
Check_CUDA()
Check_cuDNN()
else:
Tips_2_1()
WARNING("Cannot find CUDA on non-Linux system")
CUDA_Version = input("Manually specify CUDA version: ")
Update_TF_Dep()
else:
Tips_2_2()
# finally check dep
Continue()
Check_Missing_Dep()
Check_dlib()
Install_Missing_Dep()
INFO("All python3 dependencies are met.\r\nYou are good to go.")
if __name__ == "__main__":
Main()