From d696753538fd378ce8fc1f90d359fa2fc44d3e5e Mon Sep 17 00:00:00 2001
From: Nataniel Ruiz <nruiz9@gatech.edu>
Date: Wed, 29 Nov 2017 13:56:50 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index f2c8e22..42bf9e8 100644
--- a/README.md
+++ b/README.md
@@ -8,11 +8,11 @@ To use please install [PyTorch](http://pytorch.org/) and [OpenCV](https://opencv.org/) (for video) - I believe that's all you need apart from usual libraries such as numpy. You need a GPU to run Hopenet (for now).
 
-To test on a video using dlib face detections (center of face will be jumpy):
+To test on a video using dlib face detections (center of head will be jumpy):
 ```bash
 python code/test_on_video_dlib.py --snapshot PATH_OF_SNAPSHOT --face_model PATH_OF_DLIB_MODEL --video PATH_OF_VIDEO --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO
 ```
 
-To test on a video using your own face detections (we recommend using [dockerface](https://github.com/natanielruiz/dockerface)):
+To test on a video using your own face detections (we recommend using [dockerface](https://github.com/natanielruiz/dockerface), center of head will be very smooth):
 ```bash
 python code/test_on_video_dockerface.py --snapshot PATH_OF_SNAPSHOT --video PATH_OF_VIDEO --bboxes FACE_BOUNDING_BOX_ANNOTATIONS --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO
 ```
 
--
Gitblit v1.8.0