first-order-model
This repository contains the source code for the paper First Order Motion Model for Image Animation
The first model we are going to make use of is a deepfake model called First Order Motion. Deepfakes allow you to create an artificial version of a person saying or doing something. I first found out about this particular model on Two Minute Papers (an awesome YT channel for lovers of AI ⚡) and wanted to try it for myself. The video below talks more about the model.
```dockerfile
FROM nvcr.io/nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN DEBIAN_FRONTEND=noninteractive apt-get -qq update \
 && DEBIAN_FRONTEND=noninteractive apt-get -qqy install python3-pip ffmpeg git less nano libsm6 libxext6 libxrender-dev \
 && rm -rf /var/lib/apt/lists/*

COPY . /app/
WORKDIR /app

RUN pip3 install --upgrade pip
RUN pip3 install \
    https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl \
    git+https://github.com/1adrianb/face-alignment \
    -r requirements.txt

ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
```
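Assuming this Dockerfile sits at the root of the project next to an `app.py`, building and running the image might look like the sketch below. The image tag `first-order-model` and the `--gpus all` flag are my assumptions (the latter requires the NVIDIA Container Toolkit on the host); they are not from the original post.

```shell
# Build the image from the directory containing the Dockerfile
# (hypothetical tag: first-order-model)
docker build -t first-order-model .

# Run the default CMD (app.py) with GPU access; --gpus all assumes
# the NVIDIA Container Toolkit is installed on the host
docker run --rm --gpus all first-order-model

# Because the ENTRYPOINT is python3, any trailing argument replaces
# CMD, e.g. to run a different script baked into the image:
docker run --rm --gpus all first-order-model some_other_script.py
```

Splitting `ENTRYPOINT` and `CMD` this way is a common pattern: the entrypoint fixes the interpreter while the command stays easy to override at run time.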