Getting started
- /submission: Submit your model output
- /study: Study screen for human evaluation
- /user: User participation
- /result: Leaderboard
Summary
System naming convention
- natural mocap data (ground truth): NA
- submitted systems: SA, SB, ..., SZ
- baseline systems: BA, BB, BC, ...
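For reference, a minimal Python sketch of this convention; the helper and the regex are our own, not part of the challenge tooling:

```python
import re

# Two-letter codes: "NA" for ground truth, "S?" for submissions, "B?" for baselines.
SYSTEM_CODE = re.compile(r"^(NA|S[A-Z]|B[A-Z])$")

def system_type(code: str) -> str:
    """Classify a system code following the naming convention above."""
    if not SYSTEM_CODE.match(code):
        raise ValueError(f"unknown system code: {code}")
    if code == "NA":
        return "natural mocap data (ground truth)"
    return "submitted system" if code.startswith("S") else "baseline system"
```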
We provide the inputs and the expected output format to participants, and you generate a gesture .npy file for each input.
Then we recruit participants on Prolific to run user studies. Using pairwise comparison, we evaluate the generated gestures through human evaluation across four study types (a sketch of how such pairwise trials can be enumerated follows the list):
- emotion mismatching studies
- speech appropriateness studies
- pairwise emotion studies
- pairwise human-likeness studies
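As an illustration, here is a minimal Python sketch of enumerating pairwise trials; the condition set and the all-pairs scheme are assumptions for illustration, not the challenge's actual pairing:

```python
from itertools import combinations

systems = ["NA", "SA", "SB", "BA", "BB", "BC"]  # example condition set
studies = [
    "emotion mismatching",
    "speech appropriateness",
    "pairwise emotion",
    "pairwise human-likeness",
]

# Compare every unordered pair of systems in every study type.
trials = [(study, a, b) for study in studies for a, b in combinations(systems, 2)]
print(len(trials))  # 4 studies x C(6, 2) pairs = 60 pairwise trials
```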
Evaluation Process
Download the submission_inputs.csv file
```
input_code, video_file
234083450345, 234083450345.npy
346643424234, 346643424234.npy
443646423423, 443646423423.npy
```
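A minimal sketch for reading the input list with Python's standard csv module, assuming the file is plain comma-separated text as shown above:

```python
import csv

# Map each input code to the .npy file name your model must produce.
with open("submission_inputs.csv", newline="") as f:
    rows = list(csv.DictReader(f, skipinitialspace=True))

for row in rows:
    print(row["input_code"], "->", row["video_file"])
```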
Run your model inference

Run your model inference to produce an output .npy file for every input.
```
234083450345.npy
346643424234.npy
443646423423.npy
```
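A hypothetical inference loop producing these files; generate_gestures is a placeholder for your own model code, not a provided API:

```python
import csv
import numpy as np

def generate_gestures(input_code: str) -> np.ndarray:
    """Placeholder for your model's inference; return a (frames, dims) motion array."""
    raise NotImplementedError("plug your model in here")

with open("submission_inputs.csv", newline="") as f:
    for row in csv.DictReader(f, skipinitialspace=True):
        motion = generate_gestures(row["input_code"])
        # video_file already ends in .npy, so np.save writes exactly that name.
        np.save(row["video_file"], motion.astype(np.float32))
```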
Submit your model's .npy output

Go to /submission and upload all of your model's .npy files.
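If you want to script the upload instead of using the web form, a hypothetical sketch with the requests library; the endpoint URL and form field name are assumptions, so check the actual /submission page:

```python
from pathlib import Path
import requests

SUBMIT_URL = "https://<challenge-host>/submission"  # hypothetical endpoint

# Upload every generated .npy file from a local output folder (name assumed).
for npy_path in Path("outputs").glob("*.npy"):
    with npy_path.open("rb") as f:
        resp = requests.post(SUBMIT_URL, files={"file": (npy_path.name, f)})
    resp.raise_for_status()
```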
Convert & Split to Video
We will convert every submitted .npy file to a video and split each video into multiple segments.
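This conversion runs on our side, but for intuition, splitting a rendered video into fixed-length segments could look like this with ffmpeg's segment muxer; the 10-second segment length and the file names are assumptions for illustration:

```python
import subprocess

# Split one rendered video into fixed-length segments without re-encoding.
subprocess.run(
    [
        "ffmpeg", "-i", "234083450345.mp4",
        "-c", "copy",                 # stream copy: cuts happen on keyframes
        "-f", "segment",
        "-segment_time", "10",        # segment length in seconds (assumption)
        "-reset_timestamps", "1",
        "234083450345_%03d.mp4",
    ],
    check=True,
)
```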
Generate all pairwise comparison study screens
Sample study screen
- You can visit /study to get information about each study screen.
- You can visit /user to follow all user participation in our study.
Recruit participants on Prolific for the study
Each Prolific participant completes a study session. While they work through it, we record every action and the final result:
- selection result
- actions record
- final evaluation result
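As an illustration only, one plausible shape for a recorded trial; every field name here is our assumption, not the actual schema:

```python
from dataclasses import dataclass, field
import time

@dataclass
class TrialRecord:
    """One pairwise comparison answered by a participant (hypothetical schema)."""
    study: str                 # e.g. "pairwise human-likeness"
    left_system: str           # e.g. "SA"
    right_system: str          # e.g. "BA"
    selection: str             # which side the participant chose
    actions: list = field(default_factory=list)   # timestamped UI events
    answered_at: float = field(default_factory=time.time)
```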
Sample evaluation result