To use, please install [PyTorch](http://pytorch.org/) and [OpenCV](https://opencv.org/) (for video); apart from common libraries such as numpy, that should be all you need. You need a GPU to run Hopenet (for now).

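Setting up the environment can look like the sketch below; the pip package names are assumptions, so check the official [PyTorch](http://pytorch.org/) instructions for the install command matching your CUDA version:

```bash
# Environment sketch (assumed pip package names; adjust to your setup)
pip install numpy opencv-python   # OpenCV is only needed for the video scripts
pip install torch torchvision     # pick the build matching your GPU/CUDA from pytorch.org
pip install dlib                  # only needed for the dlib-based test script below
```
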
To test on a video using dlib face detections (center of head will be jumpy):
```bash
python code/test_on_video_dlib.py --snapshot PATH_OF_SNAPSHOT --face_model PATH_OF_DLIB_MODEL --video PATH_OF_VIDEO --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO
```
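As a concrete example, assuming a downloaded Hopenet snapshot and dlib's CNN face detector weights, a call might look like this (the file names below are placeholders, not files shipped with this repo):

```bash
# Example invocation with placeholder file names
python code/test_on_video_dlib.py \
  --snapshot hopenet_snapshot.pkl \
  --face_model mmod_human_face_detector.dat \
  --video input.mp4 \
  --output_string demo \
  --n_frames 300 \
  --fps 30
```
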
To test on a video using your own face detections (we recommend using [dockerface](https://github.com/natanielruiz/dockerface); center of head will be very smooth):
```bash
python code/test_on_video_dockerface.py --snapshot PATH_OF_SNAPSHOT --video PATH_OF_VIDEO --bboxes FACE_BOUNDING_BOX_ANNOTATIONS --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO
```
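
For example, with detections exported by dockerface to a text file (the file names below are placeholders; point `--bboxes` at whatever annotation file your dockerface run produced):

```bash
# Example invocation with placeholder file names
python code/test_on_video_dockerface.py \
  --snapshot hopenet_snapshot.pkl \
  --video input.mp4 \
  --bboxes input_dockerface_detections.txt \
  --output_string demo \
  --n_frames 300 \
  --fps 30
```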