**Hopenet** is an accurate and easy-to-use head pose estimation network. Models have been trained on the 300W-LP dataset and have been tested on real data with good qualitative performance.

For details about the method and quantitative results please check the CVPR Workshop [paper](https://arxiv.org/abs/1710.00925).

<div align="center">
<img src="conan-cruise.gif" /><br><br>
</div>
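
To give a feel for how the network is used, here is a minimal inference sketch for a single cropped face. The `Hopenet` constructor arguments, the snapshot filename `hopenet_robust_alpha1.pkl`, and the preprocessing below are assumptions based on the released code, so treat this as illustrative rather than the official test script:

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms
from PIL import Image

import hopenet  # module from this repository's code/ directory (assumed importable)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ResNet-50 backbone with 66 bins per Euler angle (assumed constructor signature).
model = hopenet.Hopenet(torchvision.models.resnet.Bottleneck, [3, 4, 6, 3], 66)
model.load_state_dict(torch.load('hopenet_robust_alpha1.pkl', map_location=device))
model.to(device)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

face = Image.open('face_crop.jpg').convert('RGB')  # a tightly cropped face image
x = preprocess(face).unsqueeze(0).to(device)

idx = torch.arange(66, dtype=torch.float32, device=device)

def expected_angle(logits):
    # Bins are 3 degrees wide and cover [-99, 99]; the continuous angle is the
    # expectation of the softmax distribution over the bin centers.
    return ((F.softmax(logits, dim=1) * idx).sum(dim=1) * 3 - 99).item()

with torch.no_grad():
    yaw, pitch, roll = model(x)  # three [1, 66] bin-score tensors

print('yaw %.1f  pitch %.1f  roll %.1f' % (
    expected_angle(yaw), expected_angle(pitch), expected_angle(roll)))
```
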
For more information on what alpha stands for please read the paper. The first two models are for validating the paper results; if used on real data we suggest using the last model, as it is more robust to image quality and blur and gives good results on video.
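
In short, as described in the paper, each Euler angle is trained with a combined loss: a cross-entropy over the angle bins plus a mean-squared-error term on the continuous angle, and alpha is the weight on that regression term. A minimal sketch of this combined loss (the function and variable names below are illustrative, not the repo's training code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ce = nn.CrossEntropyLoss()
mse = nn.MSELoss()
idx = torch.arange(66, dtype=torch.float32)  # 66 bins of 3 degrees covering [-99, 99]

def hopenet_loss(logits, bin_labels, cont_labels, alpha=1.0):
    """Combined loss for one Euler angle.

    logits:      [N, 66] raw bin scores from the network
    bin_labels:  [N] long tensor of ground-truth bin indices
    cont_labels: [N] float tensor of ground-truth angles in degrees
    alpha:       weight on the regression (MSE) term
    """
    cls_loss = ce(logits, bin_labels)
    pred_angle = (F.softmax(logits, dim=1) * idx).sum(dim=1) * 3 - 99
    reg_loss = mse(pred_angle, cont_labels)
    return cls_loss + alpha * reg_loss
```

The total loss sums this term over yaw, pitch, and roll; a larger alpha pushes the expected angle harder toward the ground truth.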

**Please keep in mind that testing instructions to reproduce the paper results will be added very soon.**

This work is still in progress - we are obtaining better results and will also be updating this README with instructions. Please open an issue if you have a problem.

Some very cool implementations of this work on other platforms by some cool people:

[Gluon](https://github.com/Cjiangbpcs/gazenet_mxJiang)

[MXNet](https://github.com/haofanwang/mxnet-Head-Pose)

[TensorFlow with Keras](https://github.com/Oreobird/tf-keras-deep-head-pose)

Some things that will be added:
* Test script for images
* Docker image
* Instructions for all scripts
* Better and better models
* Videos and example images!

If you find Hopenet useful in your research please cite:

```
@InProceedings{Ruiz_2018_CVPR_Workshops,
author = {Ruiz, Nataniel and Chong, Eunji and Rehg, James M.},
title = {Fine-Grained Head Pose Estimation Without Keypoints},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}
```