Razer Hydra Input in Unity3D : Sixense Input control syntax

dsky-screenshot-lightsaber

We’ve been doing some fairly extensive development with the Razer Hydras in anticipation of the forthcoming Sixense STEM, as well as a bevy of other 6DoF controllers (Perception Neuron, Sony Move, PrioVR, etc). The Hydra input harness is somewhat convoluted, and it exists outside of and parallel to the standard Unity Input Manager.

razer_hydra_TIGHT

 

I’ve found scant documentation for this on the interwebs, so here is the result of our reverse engineering efforts. If you want to code for Hydra input in your Unity experiences, here are the hooks:

First, we map the primary axes and buttons as symbolic representations in the Unity Input Manager (i.e. P1-Horizontal, P1-Vertical, P1-Jump…); those handle basic keyboard, mouse, and standard gamepad input (Xbox, PlayStation). Then, inside our input-handler code, we write custom routines to detect the Hydras, read their values, and substitute those values into the aforementioned symbolic variables.

Our best recommendation is to install the Sixense plug-in from the Unity Asset Store, and to thoroughly examine the SixenseInputTest.cs that comes with it.

The basic streaming vars are :

• SixenseInput.Controllers[i].Position — Vector3 XYZ
• SixenseInput.Controllers[i].Rotation — Quaternion (XYZW)
• SixenseInput.Controllers[i].JoystickX — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].JoystickY — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].Trigger — analog float 0.0 to 1.0

Obtaining the button taps is a bit more obscure;
the calls look something like:

• SixenseInput.Controllers[i].GetButton(buttonObjectName)
where “buttonObjectName” is one of the button constants:
ONE, TWO, THREE, FOUR, START, BUMPER, JOYSTICK
representing which “switch” is being closed on that cycle.

There also appear to be two simpler methods,
if you want to trap explicit button press and release events:

• SixenseInput.Controllers[i].GetButtonDown(buttonObjectName)
• SixenseInput.Controllers[i].GetButtonUp(buttonObjectName)

This sample script has a bevy of (non-optimized?) methods for reading the controllers’ output in real time, from which you can (in code) map all buttons, thumbsticks, and 6DoF XYZ + YPR data to your app. Hopefully the STEM API will be far more integrated into the standard Unity Input Manager framework, and thus work in seamless parallel with standard controllers, without the need for custom code.
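To make this concrete, here is a minimal, hedged sketch of the pattern described above: read the Hydra when one is present, otherwise fall back to the standard Input Manager axes. The symbolic axis names (P1-Horizontal, P1-Vertical, P1-Jump) and the fallback logic are our own convention, and the SixenseButtons enum name is taken from the plug-in’s sample code; check SixenseInputTest.cs if your version differs.

```csharp
using UnityEngine;

// Sketch only: substitute Hydra values for our symbolic Input Manager axes.
// Assumes the Sixense plug-in (SixenseInput) is in the project; the axis names
// "P1-Horizontal" / "P1-Vertical" / "P1-Jump" are our own Input Manager entries.
public class HydraInputBridge : MonoBehaviour
{
    public float horizontal;          // -1.0 .. 1.0
    public float vertical;            // -1.0 .. 1.0
    public bool jump;
    public Vector3 wandPosition;      // 6DoF position of the right wand
    public Quaternion wandRotation;   // 6DoF orientation of the right wand

    void Update()
    {
        // Controllers[0] / Controllers[1] are the two wands once the base is calibrated.
        SixenseInput.Controller right = SixenseInput.Controllers[1];

        if (right != null)
        {
            horizontal   = right.JoystickX;                            // analog -1..1
            vertical     = right.JoystickY;                            // analog -1..1
            jump         = right.GetButtonDown(SixenseButtons.BUMPER);
            wandPosition = right.Position;
            wandRotation = right.Rotation;
        }
        else
        {
            // Fall back to keyboard / mouse / gamepad via the standard Input Manager.
            horizontal = Input.GetAxis("P1-Horizontal");
            vertical   = Input.GetAxis("P1-Vertical");
            jump       = Input.GetButtonDown("P1-Jump");
        }
    }
}
```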

Have any tips on Hydra input for Unity?
Pop ’em into the comments below:

VR tech 411 : 6DoF, XYZ + YPR, position + orientation in 3space

I’ve spent so many cycles describing this gesturally to so many people, I’m considering getting this tattooed on my chest. To avert that, here is the diagram, liberally adapted, corrected, and upgraded from the Oculus Developer Guide:

We present to you, the standard coordinate 3-space system:

dSky-Oculus-XYZ-YPR position orientation diagram

POSITION is listed as a set of coordinates :

  • X is left / right
  • Y is up / down
  • Z is forward / back

ORIENTATION is represented as a quaternion* (more on that below). Simply:

  • Pitch is leaning forward / back (X axis rotation)
  • Yaw is rotating left / right (Y axis rotation / compass orientation)
  • Roll is spinning clockwise / counterclockwise (Z axis rotation)

Now there, all clear. You’re welcome.


 

Further clarifications:

* a quaternion is a very special (and generally non-human-readable) way of representing a 3-dimensional orientation as a 4-dimensional number (X, Y, Z, W), in order to avoid the strange behaviours (gimbal lock being the classic one) encountered when rotating 3D objects with plain angles.

* 6DoF is an acronym for “six degrees of freedom”. It is generally used when talking about input devices which allow a user to control position and orientation simultaneously, such as head trackers, PlayStation Moves, Razer Hydras, Sixense STEMs, etc.
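If you’re in Unity and just want to see the two orientation representations side by side, here is a tiny, self-contained sketch (nothing project-specific) that converts a pitch / yaw / roll reading into a quaternion and back:

```csharp
using UnityEngine;

// Converts human-readable pitch / yaw / roll into a quaternion, and back again.
public class OrientationDemo : MonoBehaviour
{
    void Start()
    {
        // Pitch (X axis), Yaw (Y axis), Roll (Z axis), in degrees.
        float pitch = 15f, yaw = 90f, roll = 0f;

        // Euler angles -> quaternion (X, Y, Z, W): the form the tracking APIs hand you.
        Quaternion q = Quaternion.Euler(pitch, yaw, roll);
        Debug.Log("as quaternion (x, y, z, w): " + q.ToString("F3"));

        // ...and back to pitch / yaw / roll for human consumption.
        Vector3 e = q.eulerAngles;
        Debug.Log("back to pitch / yaw / roll: " + e.ToString("F1"));
    }
}
```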

 

Approaching Cinema : Lessons Learned in 360 Capture

dsky-wilderness-v011-title-splash-screen

We’ve created some very rapid prototypes in the past 10 days, just to test the waters (no pun intended) of cinematic 360 capture and playback within VR HMDs.


viewing the photosphere in-engine, with polygons. cool.

The tests have been mostly very rewarding.

Our findings are as follows:

  1. Initial 2D 360 still / video capture is easy. We started with the Google Photosphere app, free on Android, which takes about 5 minutes and 40 photos per sphere. We’ve since upgraded to the Ricoh Theta, which captures 360 video with a single button-press.
  2. consider EVERYTHING in the 360 field of view. It’s all in the shot. There’s no backstage. This concept takes a lot of getting used to if you’re used to working with lights, sound techs, and crews.
  3. Editing is time-consuming. It’s easier to clean up physical reality prior to the actual shot than to paint it out in post.
  4. A base plug at the foot of the shot is a nice touch, both visually and to cover the merge seam.
  5. Similarly, we use a lens flare to simulate the light dynamics of the sun.
  6. Audio engineering is key, time-consuming, fun, AND makes the difference between “just another photosphere” and the feeling of presence. You collect video at a single point; audio should be collected at all the local sound origination points, then placed into proper 3D positions in post, with filters.
  7. Since we’re authoring all this within the game engine, we’re having a lot of fun with the 3D positional audio: placing sounds, even animating them as, say, a bird flies across the forest canopy (a minimal sketch of the idea follows this list).
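As a tiny illustration of point 7 above, here is a hedged sketch of an animated 3D sound source in Unity. The path endpoints, timing, and clip are placeholder assumptions; the important detail is spatialBlend = 1, which tells the engine to pan and attenuate the sound by its position relative to the listener.

```csharp
using UnityEngine;

// Sketch of an animated 3D sound source (e.g. a bird crossing the canopy).
// Assumes an AudioClip is assigned to the AudioSource in the Inspector;
// the path endpoints and flight time are arbitrary placeholders.
[RequireComponent(typeof(AudioSource))]
public class FlyingSound : MonoBehaviour
{
    public Vector3 startPoint = new Vector3(-20f, 8f, 10f);
    public Vector3 endPoint   = new Vector3( 20f, 9f, 12f);
    public float flightSeconds = 12f;

    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1f;   // fully 3D: panned and attenuated by position
        src.loop = true;
        src.Play();
    }

    void Update()
    {
        // Sweep the source back and forth along a simple path; the listener hears it travel.
        float t = Mathf.PingPong(Time.time / flightSeconds, 1f);
        transform.position = Vector3.Lerp(startPoint, endPoint, t);
    }
}
```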

into the wild… virtually.

And finally, there are some things, some of the best parts of nature, which simply aren’t going to be in VR anytime soon. Those being, the elements. Wind in your hair, and clean running stream water on your bare feet… those will have to wait.


fresh water from the springs… yes please!.. but not in VR.

For a 2D sample of what’s being created and captured in the world, start with YouTube’s shiny new 360 video channel.

Are there places or experiences you’d like to see us model in VR? Do you see yourself capturing and publishing your own 360 experiences?

Continue the conversation in the comments below:

Cinema meets Videogames : Strange bedfellows, or Match made in Heaven?

We’re fresh back from the epic Digital Hollywood conference / rolling party in LA, and chock full of ideas of how to integrate classic cinema techniques with our native videogame tropes.

See, 80+% of the participants at the conference were from traditional media backgrounds — music, TV, film. And while VR was absolutely the hot topic of the show — as it was for CES, GDC, and NAB — there was as much confusion as there was excitement about the commercial and artistic promise of this brand spanking new medium.

One of the key findings on our part was a genuine need to integrate cinema techniques (linearity, composition, color, and storytelling) into our hyper-interactive realm of videogame design. Thus began our investigations. What exactly does it take to make full, HD, 3D, 360 captures of real-world environments?

We’ll get into more details later, but for now I want to spell it out, if only for technical humor: It takes a:

  • 12-camera
  • 4k
  • 360°
  • 3D stereoscopic
  • live capture stream
  • …stitched to form an:
  • equirectangular projection
  • over / under
  • 1440p
  • panoramic video

Say that 12 times fast. Oh, and be ready for handling the approximately 200GB / minute data stream that these rigs generate. Thank god for GPUs.

What does that look like in practice?
Like this:


A still frame from a 360 stereoscopic over/under video. Playback software feeds a warped portion of each image to each of the viewer’s eyes.

And how do you capture it? With something like this:

360Hero GoPro stereo 360 rig

12 camera GoPro 360Hero rig

Or, if you’re really high-budget, this:

Red Dragon 6k 360 3D stereoscopic capture rig by NextVR_Rig1

array of 10 Red digital cinema cameras (photo not showing top and bottom cam pairs)

Though personally, we really prefer the sci-fi aesthetic:

jaunt-sci-fi-rig-header

an early 3d 360 capture prototype by JauntVR

Then there’s the original 360 aerial drone capture device, circa 1980


Empire Viper Droid, Empire Strikes Back, c. 1980, Lucasfilm

Then, the ever-so-slightly more sinister, and agile version, circa 1999…


Sentinel Drone, The Matrix, 1999, via the Wachowski brothers

What do you think? Is the realism of live capture worth the trouble? Would you prefer “passive” VR experiences that transport you to hard-to-get-to real-world places and events, “interactive” experiences more akin to Xbox and PlayStation games, or some combination of the two?

Join the conversation below:

The Avatar Magna Carta : or, How to Puppeteer a 3D Humanoid with 6DoF head and hands tracking

In this post, we present the workflow required to enable a player to live-puppeteer a rigged, first-person 3D avatar in-game, by:

  1. driving the avatar’s in-game hands in a 1:1 relationship with the player’s actual physical hands, and
  2. animating the avatar’s in-game head orientation to match the precise orientation of the player’s physical, real-world head,
  3. via 3D trackers on the player’s head and hands, and the application of a simple inverse kinematic (IK) physics model.

We spent a long time figuring out this path, so I thought we’d share it with the community. Note: this is an entirely technical workflow / pipeline post for readers who are currently developing VR applications. It’s not for the general consumer. This tutorial is specifically crafted for a Unity3D pipeline. Also, it is specific to the Oculus Rift DK2 HMD and the Razer Hydra hand trackers, powered by Sixense. It should work with other tracking solutions, with modification.


Go ahead, wave hello… heads and hands fully tracked and puppeteered

First, the basic premise:

People want to relate to their own physical avatars in VR. They want to be able to look down at their feet, and see their body. They want to be able to wave their hands in front of their face, and see some representation of their appendages in front of them, superimposed on the virtual scene. In short, they want to feel like they are present in the experience, not just an ethereal viewer.

This problem proved a bit more difficult to solve in practice than one might imagine. So, in the interest of fostering community, we are sharing our technical solution with everyone. It isn’t perfect yet; we’ve posted a number of tips and follow-on research topics at the end of the tutorial, places where this needs to go before it’s fully “ready for prime time.” However, the solution presented here IS functional, leaps beyond the standard “avatar-less” VR being produced today, and should serve as a baseline from which improvements can and will be made.

Now, the actual tutorial:


  1. build and skin your Avatar Model
    1. we create humanoids in Mixamo Fuse
    2. or use the modeling software of your choice : Maya, Blender, etc.
    3. make sure that your final model is in T-pose
  2. rig the character with bones
    1. this can be done by uploading a t-pose character to mixamo.com
    2. …or manually in your software
    3. use the maximum resolution possible : we use a 65-bone skeleton, which includes articulated fingers
  3. give the character at least one animation
    1. we will use “idle” state
    2. you assign this online in Mixamo
  4. get the Avatar into Unity
    1. export from Mixamo to Unity FBX format
    2. import the resulting FBX (approx. 5-20MB) into Unity Assets : gChar folder
    3. this will generate a prefab in Unity along with 4 components in a subfolder:
      1. a mesh
      2. a bone structure
      3. an animation loop file
      4. an avatar object
    4. the prefab will have a name such as “YourModelName@yourAnimName”
  5. Configure the Avatar
    1. click on the prefab in the Assets folder
    2. In the inspector, select the “Rig” tab
      1. make sure that “Humanoid” is selected in the “Animation Type” pull-down
      2. if you selected that manually, hit “Apply”
    3. drag the prefab from the Assets into the Hierarchy
    4. select the prefab avatar in the Hierarchy
      1. In the Inspector:
      2. add an “Animator” component. We will fill in the details later
      3. add the g-IKcontrol.cs C# script. Again, we will fill in the details later (a minimal sketch of such a script appears just after this tutorial)
      4. you can copy the source of the script from here
  6. Add the latest Oculus SDK (OVR) to the project. 
    1. Download the latest Oculus SDK for Unity
    2. this is usually done by double-clicking “OculusUnityIntegration.unitypackage” in the OS, then accepting the import into your project by clicking “import all”
    3. You should now have a folder within Assets called “OVR”
  7. Add the latest Sixense Hydra / STEM SDK to the project
    1. Download the Hydra plug-in from the Unity Asset Store
    2. Import it into your project.
    3. You should now have a folder within Assets called “SixenseInput”
  8. Create a high level Empty in your hierarchy and name it “PLAYER-ONE”
    1. make your Avatar prefab a child of this parent
    2. Drag the OVR CameraRig from the OVR folder and also make it a child of PLAYER-ONE
  9. Properly position the Oculus camera in-scene
    1. The Oculus camera array should be placed just forward of the avatar’s eyes
    2. we typically reduce the forward clipping plane to around 0.15m
    3. If you’re using the OVRPlayerController, these Character Controller settings work well:
      1. Center Y = -0.84m (standing),
      2. Center Z = -0.1m (prevents the view from being “inside the head”)
      3. Radius = 0.23m
      4. Height = 2m
    4. This will require some trial and error. Make sure that you use the Oculus camera, and not the Oculus player controller. Experimentation will be required to bridge the spatial relationship between a seated player and a standing avatar; calibration software needs to be written. “Trial and error” is generally defined as a series of very fast cycles: build, test, make notes; modify, re-build, re-test, make notes; repeat until perfect. There are many gyrations, and you will become an expert at rapidly donning and removing the HMD, headphones, and hand controllers.
  10. Create the IK target for head orientation
    1. Right-click on the CenterEyeAnchor in the Hierarchy, and select “Create 3D Object > Cube”
    2. Name the cube “dristi-target”
    3. move the cube approx 18” (0.5m) directly outward from the Avatar’s third eye
    4. This will serve as the IK point at which the avatar’s head is “aimed”, i.e. where they are looking. In yoga, the direction of the gaze is called dristi.
  11. Get the Sixense code into your scene
    1. Open the SixenseDemoScene
    2. copy the HandsController and SixenseInput assets from the Hierarchy
    3. Re-open your scene
    4. paste the HandsController and SixenseInput assets into your Hierarchy
    5. drag both to make them children of OVRCameraRig
  12. Make sure the Sixense hands are correctly wired.
    1. Each hand should have the “SixenseHandAnimator” controller assigned to it
    2. Root Motion should be UNchecked
    3. Each hand should have the SixenseHand Script attached to it
    4. On the pull-down menu for the SixenseHand script, the proper hand should be selected (L/R)
  13. Properly position the Sixense hands in-scene
    1. They should be at about the Y-pos height of the belly button
    2. The wrists should be about 12” or 30cm in Z-pos forward of the abdomen
    3. In other words, they should be positioned as if you are sitting with your elbows glued to your sides, forearms extended parallel to the ground.
    4. You will want to adjust, tweak, and perfect this positioning. There is an intrinsic relationship between where you position the hands in the scene and the Sixense trackers’ position in the real world relative to the Oculus camera. Trial and error and clever calibration software solve this. That’s another tutorial.
  14. Make the Sixense hands invisible. 
    1. we do this because they will merely serve as IK targets for the avatar’s hands
    2. do this by drilling down into HandsController : Hand_Right : Hand_MDL and unchecking the “Skinned Mesh Renderer” in the Inspector panel
    3. do the same with the left hand.
    4. this leaves the assets available as IK targets, but removes their rendered polys from the game
  15. Create the Animator Controller
    1. create transitions from Entry to New State, and
    2. from New State to Idle (or whatever you named your animation)
    3. On the Base Layer, click the gear, and make sure that “IK Pass” is checked.
    4. this will pass the IK data from the animation controller on down the script chain
  16. Assign the new Animation Controller to the Avatar
    1. select the avatar in the Hierarchy
    2. assign the controller in the inspector
  17. Map the Avatar with Puppet IK targets for Hands and Head,
    1. drag the “Hand – Left” from the Sixense HandsController parent to the “Left Hand Obj” variable
    2. drag the “Hand – Right” from the Sixense HandsController parent to the “Right Hand Obj” variable
    3. drag the Look-At-Target from the OVRCameraRig to the “Look Obj” variable
      1. The Look-At-Target is nested deep:
      2. PLAYER-ONE : OVRCameraRig : TrackingSpace : CenterEyeAnchor : Look-at-Target
  18. THAT’S IT!
    1. Build a compiled runtime.
    2. Connect your Rift and Hydra
    3. launch the game
    4. activate the Hydras.
      1. grasp the left controller, aim it at the base, and squeeze the trigger.
      2. grasp the right controller, aim it at the base, and squeeze the trigger.
      3. hit the “start” button, just south of the joystick on the right controller
    5. When you tilt and rotate your head, the avatar’s head should also tilt and roll. When you move your hands, the avatar’s hands should move in a 1:1 ratio in-scene. Congratulations, you’re an Avatar Master.
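For reference, here is a minimal sketch of what a g-IKcontrol.cs-style script (steps 5.3 and 17, above) can look like. It follows Unity’s standard OnAnimatorIK pattern; the class name, weights, and field names are our own, and a production script will add calibration, smoothing, and offsets.

```csharp
using UnityEngine;

// Minimal IK puppeteering sketch; our g-IKcontrol.cs is a variation on this pattern.
// Requires "IK Pass" to be enabled on the Animator's Base Layer (step 15.3).
[RequireComponent(typeof(Animator))]
public class GIKControl : MonoBehaviour
{
    public bool ikActive = true;
    public Transform leftHandObj;    // Hand - Left  (the invisible Sixense hand)
    public Transform rightHandObj;   // Hand - Right (the invisible Sixense hand)
    public Transform lookObj;        // the Look-at / dristi target under CenterEyeAnchor

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void OnAnimatorIK(int layerIndex)
    {
        if (!ikActive) return;

        // Head: aim the avatar's gaze at the dristi target.
        if (lookObj != null)
        {
            animator.SetLookAtWeight(1f);
            animator.SetLookAtPosition(lookObj.position);
        }

        // Hands: snap the avatar's wrists to the (hidden) Sixense hand transforms, 1:1.
        if (leftHandObj != null)
        {
            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKRotationWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHandObj.position);
            animator.SetIKRotation(AvatarIKGoal.LeftHand, leftHandObj.rotation);
        }
        if (rightHandObj != null)
        {
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.RightHand, rightHandObj.position);
            animator.SetIKRotation(AvatarIKGoal.RightHand, rightHandObj.rotation);
        }
    }
}
```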
 

TIPS
and areas for further R&D
 
  1. Ideally, the avatar’s head should not be rendered for the player, yet it should still cast shadows and reflections (a minimal sketch of one approach follows these tips)
  2. the avatar’s head should also clearly be rendered for other players in multi-player scenarios, as well as for third-person camera observer positions.
  3. An in-game shadow is a great way to ground the player to the avatar in a convincing manner. Even when the hands are outside the field of view, seeing the shadows of the heads and hands triggers a very powerful sense of presence.
  4. While head rotation and orientation on the skull axis is fairly straightforward, head translation, i.e. significant leaning in or out or to the side, is a bit more abstract in terms of puppeteering choices. You may wish to explore either locomotion animations, or “from the hip” IK solutions to move the torso along with the head.
  5. RL : VR / 1:1 Spatial Calibration is KEY to great experiences.
    1. See 9.3, above : Properly position the Oculus camera
    2. and 13.4 : Properly position the Sixense hands 
  6. The built-in Unity IK leaves a lot to be desired when it comes to realistic approximations of elbow positions. We are investigating the FinalIK package and other professional-class solutions.
  7. This solution in its current form disables the ultra-cool “grasping” and “pointing” animations that are built into the Sixense template models. Investigate how to re-enable those animations on the rigged avatar’s Mecanim structure.
  8. You will also want to configure the remainder of the Hydra joysticks and buttons to control all  game functions, because it sucks to be reaching and fumbling for a keyboard when you are fully immersed in a VR world.
  9. The majority of this configuration starts in the Unity Input Manager
    1. Edit | Project Settings | Input…
  10. There should be keyboard fallback controls for users who do not own Hydras…
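On tip 1: a common trick, sketched below, is to leave the head renderer enabled but set it to cast shadows only, so the local player never sees the inside of their own skull while the shadow and grounding remain. The field wiring and the isLocalPlayer flag are assumptions about your own avatar setup, not part of any SDK.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: hide the local avatar's head from its own camera while keeping its shadow.
// "headRenderer" must be wired to the head's SkinnedMeshRenderer in the Inspector;
// remote copies of the avatar (other players' views) should keep normal rendering.
public class HideLocalHead : MonoBehaviour
{
    public Renderer headRenderer;
    public bool isLocalPlayer = true;

    void Start()
    {
        if (isLocalPlayer && headRenderer != null)
        {
            headRenderer.shadowCastingMode = ShadowCastingMode.ShadowsOnly;
        }
    }
}
```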
Have you tackled this same challenge? Have you created any solutions to the further investigations we’ve proposed above?
Share in the comments below,
because we’re all in this together! 

 

dynamic audio : wind in your hair

When we design spaces, we want our worlds to be *alive*.

A key component of this sense of vitality is dynamic audio. In a nutshell, dynamic audio is physics-based, player-generative audio signals. The two challenges we are working on are:

1. the sound of the lightsaber as the player swooshes it around them in the environment. buzzzzzzz…. hmmmmm… zap! Obviously, this is dynamic, based on the velocity and acceleration of the ‘blade’.

dsky sceneplay lightsaber ultimate

2. the sound of wind in the player’s hair as they fly high above the city… modulated by airspeed, gusts, and hopefully, near-miss objects. (A rough Unity-side sketch of this velocity-driven idea follows.)

_dsky-real-flying-SF
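Before reaching for Pure Data, here is a rough Unity-side sketch of the lightsaber half of the problem: drive the pitch and volume of a looping hum from the blade’s velocity. The component setup and the pitch / volume ranges are assumptions and merely starting points; the Pure Data approach below is far more expressive.

```csharp
using UnityEngine;

// Rough sketch: modulate a looping 'hum' clip by the blade's swing speed.
// Assumes a Rigidbody and an AudioSource (with a looping hum clip) on the blade.
[RequireComponent(typeof(AudioSource), typeof(Rigidbody))]
public class SaberHum : MonoBehaviour
{
    public float maxSwingSpeed = 8f;   // m/s at which the swoosh peaks

    private AudioSource hum;
    private Rigidbody body;

    void Start()
    {
        hum = GetComponent<AudioSource>();
        body = GetComponent<Rigidbody>();
        hum.loop = true;
        hum.Play();
    }

    void Update()
    {
        // 0 at rest, 1 at (or above) maxSwingSpeed.
        float swing = Mathf.Clamp01(body.velocity.magnitude / maxSwingSpeed);

        // Faster swings -> higher pitch and louder hum.
        hum.pitch  = Mathf.Lerp(0.9f, 1.6f, swing);
        hum.volume = Mathf.Lerp(0.4f, 1.0f, swing);
    }
}
```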

Programming Sound with Pure Data

Solution? Hard work, creative sample bases, and sweet code. I’ve found this awesome resource: Programming Sound with Pure Data, by Tony Hillerson.

 

Can’t wait!

Stay tuned…

 

PS – here’s a little extra on the actual components of the original lightsaber sounds, circa 1977: doppler microphone swinging :)

 

Avatar : Arrival

Finally, our project has gotten to Phase Zero.

The avatar has arrived.
Witness: 

VR feet - virtual world

My very first look at my feet in the VR world… perfection.

avatar-foreal

Comparing to my feet in RL — Real Life… pretty damn good. Same feet, same place. VR goggles on == VR goggles off. Perfect Calibration and Registration. Achievement : Unlocked.

v1 avatar tet

And then, suddenly… the whole Avatar is Manifest. Head, spine, hands… all are 1:1 with RL… and SL is born.


Go ahead, wave hello to the adoring fans…

first flights : lessons learned

Flying : Major accomplishments, and major lessons learned.


dSky VR : under the GGB!

The good news : we got Flying Adventure working in the Rift, and let me tell you : flying under the Golden Gate Bridge, through volumetric clouds, at 70mph about 3′ above deck… a total rush. I flew in and around the city for about 30 minutes, totally absorbed, free, in bliss. Further, there was a palpably sublime moment when, flying across the surface of the water at speed, I looked down and saw something on the face of the waves: it was my avatar’s reflection, distorted in real time. Spine tingling.


what’s that reflected on the face of the water? me, as avatar

The bad news : humans and birds are not at all built the same. We first modelled the simulation so that a human flyer would be belly down, in a sort of yoga cobra position, abs engaged. The challenge is : while a bird’s eyes are on the *sides* of its head, and its head is naturally aligned for the bird to view forward while prone, a human’s eyes are on the *front* of our heads, and while prone, we are naturally looking downward. Thus, if we are using “natural” forward propulsion, as one might imagine Superman or Iron Man doing, we humans are forced to *seriously* arc our necks back in order to see “forward” towards where we are headed, and to more naturally navigate our flying world.

Human neck : natural downward articulation while flying | avian neck : natural forward orientation



human v. eagle : very different animals.

The consequence : after 30 minutes of flying, my neck really hurts… and that’s coming from a trained acrobat, supposedly used to such contortions.

Fast conclusion: we are coming full circle to Palmer Luckey’s assertion that “present-day VR is a seated experience”. We’re going to start exploring alternate navigation metaphors, including:

  • levitating chair, a la Professor X
  • cockpit, a la an F-18
  • saddle riding, a la How to Train Your Dragon

quite possibly the best way to fly : on a saddle, atop a trained giant eagle.

  • we also might simply try rotating the camera 90° up
    relative to the avatar body :)

Until our next post, enjoy the screenshots.

flyin

Click here for downloadable demo.

 

 

we made it: your avatar awaits…

Well, it’s been a hard month of headbanging on the issues of inverse kinematics, first-person head cameras, and 1:1 perfect hand calibration.

And today, we made it:

head-and-hands-finally

As with many things, it took going back to square one.

Instead of trying and continually failing to integrate multi-player full avatar control + keyboard + mouse + gamepad + Oculus + Hydra all at once into our working game prototype, we created a barebones scene:

1 avatar, 1 wall, 1 light.

simple avatar solve

And went about tracking heads and hands, and making the avatar move along with the inputs. Finally, late tonight, everything came together, and we got it working. Thank the VR Gods. Now onto more important things. Like eating, drinking, sleeping, and… yes, that.

meanwhile, back at the ranch…

DT and I, fresh back from a great demo of the StarWars ‘ractive to Sixense in SF, are now branching the dev base. ScenePlay SWEp4 continues… FlyingAdventure begins.

Iron_Man_Flight

We’ll let you decide which film flying model we will best embody. Here are the theatrical flying scenes which inspire us most… click the links to play the movieclips:

So… Which flying model would you prefer to embody?

Comment below with your choice.