We are developing a heuristic model to accurately position the VR avatar body relative to the real-life human head position, where the HMD provides the only positional tracking for the entire player. This leaves us to make assumptions about how the avatar's pose changes relative to any given head position from a reference start point, how to support the body properly with the feet as stable bases under the center of gravity (CoG), and how the avatar body follows the head over time. Later models will be augmented with hand tracking, which will give us three source points of 6DoF data to draw from. The process of setting up a basic 3-point tracking rig and corresponding avatar mapping is covered here:
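As a rough illustration of the kind of heuristic involved (the function name, segment lengths, and lean ratio below are our illustrative assumptions, not values from this model): given only the HMD's 6DoF pose, one common starting point is to drop a neck pivot below the head, then place the hips so the CoG stays plausibly over the feet as the head pitches forward. A minimal sketch in Python:

```python
import math

# Assumed segment lengths in meters (illustrative only, not from this dataset).
HEAD_TO_NECK = 0.12
NECK_TO_HIP = 0.55

def estimate_body_from_head(head_pos, head_pitch_rad):
    """Place neck and hip below the head, leaning the spine with head pitch.

    head_pos: (x, y, z) HMD position; y is up, z is forward.
    head_pitch_rad: forward head pitch in radians (0 = upright).
    Returns (neck_pos, hip_pos) as (x, y, z) tuples.
    """
    x, y, z = head_pos
    # Neck sits directly below the head pivot.
    neck = (x, y - HEAD_TO_NECK, z)
    # Heuristic assumption: the spine leans at a fraction of the head pitch,
    # shifting the hips backward so the CoG stays over the feet.
    spine_pitch = 0.5 * head_pitch_rad
    hip = (neck[0],
           neck[1] - NECK_TO_HIP * math.cos(spine_pitch),
           neck[2] - NECK_TO_HIP * math.sin(spine_pitch))
    return neck, hip
```

A real solver would blend this guess with IK constraints and the measured data described below; this only shows the shape of the head-to-body inference.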
A complete solution is much needed: despite rich mocap systems like Perception Neuron, PrioVR, etc., the most common scenario with 2015 VR HMDs / VR rigs is a single positionally tracked head and a body-representative avatar that needs to be placed in a realistic pose based on that head position. Traditional inverse kinematics solutions tend to fall short here, so we are augmenting the IK solves with data from a real-world model, derived from cardinal-point video of real people in real HMDs, looking at targets and bending their bodies as real-world bodies do. From a series of controlled head movements, we are building computational models of what real players do and how their bodies move. We are building datasets for both standing and seated experiences, along with transition animations and detection triggers to smoothly transition between the two states.
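The seated/standing detection trigger mentioned above can be sketched as a two-state head-height classifier with hysteresis, so the avatar does not flicker between poses while the head hovers near one threshold. The class name and threshold values here are illustrative assumptions; a real detector would derive thresholds from the player's calibrated standing head height rather than constants:

```python
# Illustrative thresholds in meters (assumptions, not calibrated values).
SIT_BELOW = 1.10    # head drops below this -> classify as seated
STAND_ABOVE = 1.35  # head rises above this -> classify as standing

class PostureDetector:
    """Two-state seated/standing detector with hysteresis: the gap between
    the two thresholds prevents rapid state flicker near either boundary."""

    def __init__(self, initial="standing"):
        self.state = initial

    def update(self, head_height):
        if self.state == "standing" and head_height < SIT_BELOW:
            self.state = "seated"    # would fire the stand-to-sit transition animation
        elif self.state == "seated" and head_height > STAND_ABOVE:
            self.state = "standing"  # would fire the sit-to-stand transition animation
        return self.state
```

In use, `update` would be called once per tracking frame with the HMD's height, and a state change would trigger the corresponding transition animation.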
Freeform notes from the Internets, from which some of these images were sourced, mostly via Googling "Computational Model of the Human Spine for Animation" and "modeling head movement inverse kinematics for VR":
2. Computational Model of the Human Spine (PDF) — by Volkan Esat
3. How to Watch the Sky — by AmateurHuman
This dataset and model are components of the dSky VR.Engine.