----- If you use this gesture segmenter, please cite the following: -----

Ferstl, Ylva, Michael Neff, and Rachel McDonnell. "Adversarial gesture generation with realistic gesture phasing." Computers & Graphics 89 (2020): 117-130.

BibTeX:

@article{ferstl2020adversarial,
  title={Adversarial gesture generation with realistic gesture phasing},
  author={Ferstl, Ylva and Neff, Michael and McDonnell, Rachel},
  journal={Computers \& Graphics},
  volume={89},
  pages={117--130},
  year={2020},
  publisher={Elsevier}
}

--- Instructions for use:

You only need to edit "label_phases_with_models.py", which loads the data to be labelled and the trained models, and then labels everything. This file includes comments describing the format the data should be in.

You should only need to amend lines 16-22, provided your file naming follows this convention:

[takeNum]_motion_normed.csv and [takeNum]_pitch_normed.csv

If your file naming convention is different, you will also need to change line 197 (or lines 192-193 if using pitch data).

The default setting is to segment motion based solely on the motion signal (the recommended use), but pitch data can be included via the "use_pitch" flag. A sketch of the kind of settings involved appears at the end of this README.

The zip file contains all necessary files, as does the folder with the same name.
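For orientation, below is a minimal sketch of what the user-editable settings around lines 16-22 might look like. The variable names data_dir and take_nums are illustrative assumptions, not the script's actual identifiers (only the "use_pitch" flag and the file naming convention come from the instructions above); consult the comments in label_phases_with_models.py for the real ones.

import os

# Hypothetical settings block; names are assumptions for illustration.
data_dir = "path/to/your/data"  # folder holding the normalized CSV files
take_nums = ["001", "002"]      # take identifiers used in the file names
use_pitch = False               # True to include pitch data; default is motion only

# Check that each take's files follow the expected naming convention:
# [takeNum]_motion_normed.csv and, if use_pitch, [takeNum]_pitch_normed.csv
for take in take_nums:
    motion_path = os.path.join(data_dir, f"{take}_motion_normed.csv")
    if not os.path.exists(motion_path):
        print(f"Missing motion file: {motion_path}")
    if use_pitch:
        pitch_path = os.path.join(data_dir, f"{take}_pitch_normed.csv")
        if not os.path.exists(pitch_path):
            print(f"Missing pitch file: {pitch_path}")

A quick check like this can catch naming mismatches before running the labeller, which is useful if your data does not already follow the [takeNum]_*_normed.csv convention.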