We don’t always use hand control at dSky, but when we do… we choose Hydra

Madame Hydra

Yes, Marvel did have video long before the Avengers re-start. Oh, good ole G.I. Joe…

With Hydra, we get to use BOTH our hands in VR,
just like Madame Hydra here…


Keeping it simple… because the best part of the hydras is… you move your hands, and your VR hands… well, they move precisely where they should. Buttons not included.

And here are the Hydras at play:

 

Razer Hydra Input in Unity3D : Sixense Input control syntax

dSky lightsaber demo screenshot

We’ve been doing some fairly extensive development with the Razer Hydras in anticipation of the forthcoming Sixense STEM, as well as a bevy of other 6DoF controllers (Perception Neuron, Sony Move, PrioVR, etc.). The Hydra input harness is somewhat convoluted, and it exists outside of and parallel to the standard Unity Input Manager.

Razer Hydra controllers

 

I’ve found scant documentation for this on the interwebs, so here is the result of our reverse engineering efforts. If you want to code for Hydra input in your Unity experiences, here are the hooks:

First, we map primary axes and buttons as symbolic representations in the Unity Input Manager (e.g. P1-Horizontal, P1-Vertical, P1-Jump…); those handle basic keyboard, mouse, and standard gamepad input (Xbox, PlayStation). Then, inside our input handler code, we write custom routines to detect the Hydras, read their values, and sub those values into the aforementioned symbolic variables.

Our best recommendation is to install the Sixense plug-in from the Unity Asset Store, and to thoroughly examine the SixenseInputTest.cs that comes with it.
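To make that idea concrete, here is a minimal sketch of the substitution layer: a wrapper that returns the Hydra thumbstick when a controller is present, and otherwise falls back to the symbolic Input Manager axes. The axis names (P1-Horizontal, P1-Vertical) are just the ones from our setup above; the null/Enabled guards are our assumptions, so verify the member names against SixenseInputTest.cs and keep the plugin’s SixenseInput component in the scene.

    using UnityEngine;

    // Sketch: route "P1-Horizontal" / "P1-Vertical" through the Hydra when one
    // is connected, otherwise fall back to the Unity Input Manager
    // (keyboard / mouse / standard gamepad).
    public static class HydraAxis
    {
        public static float Horizontal(int hand)    // hand: 0 = left, 1 = right
        {
            var c = SixenseInput.Controllers[hand];
            if (c != null && c.Enabled)
                return c.JoystickX;                 // analog, -1.0 to 1.0
            return Input.GetAxis("P1-Horizontal");  // symbolic fallback
        }

        public static float Vertical(int hand)
        {
            var c = SixenseInput.Controllers[hand];
            if (c != null && c.Enabled)
                return c.JoystickY;
            return Input.GetAxis("P1-Vertical");
        }
    }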

The basic streaming vars are :

• SixenseInput.Controllers[i].Position — Vector3 (X, Y, Z)
• SixenseInput.Controllers[i].Rotation — Quaternion (X, Y, Z, W)
• SixenseInput.Controllers[i].JoystickX — analog float, -1.0 to 1.0
• SixenseInput.Controllers[i].JoystickY — analog float, -1.0 to 1.0
• SixenseInput.Controllers[i].Trigger — analog float, 0.0 to 1.0
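Here’s a minimal sketch of what we do with those streaming vars each frame: copy the Hydra’s position and rotation onto a hand object, and read the trigger for grabbing. The positionScale factor is a hypothetical tuning knob; the plugin reports position in its own units relative to the base station, so you’ll want to scale into your world space (same guard assumptions as the sketch above).

    using UnityEngine;

    // Sketch: drive a VR hand transform from one Hydra controller every frame.
    public class HydraHand : MonoBehaviour
    {
        public int hand = 0;                   // 0 = left controller, 1 = right
        public float positionScale = 0.001f;   // hypothetical: plugin units -> world units

        void Update()
        {
            var c = SixenseInput.Controllers[hand];
            if (c == null || !c.Enabled) return;

            // Position: Vector3 (X, Y, Z), relative to the Hydra base station
            transform.localPosition = c.Position * positionScale;

            // Rotation: quaternion, applied directly to the hand
            transform.localRotation = c.Rotation;

            // Trigger: analog 0.0 to 1.0, e.g. to drive a grab animation
            // float grip = c.Trigger;
        }
    }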

Obtaining the button taps is a bit more obfuscated;
the calls look something like:

• SixenseInput.Controllers[i].GetButton(buttonObjectName)
where “buttonObjectName” is one of a set of named button constants:
ONE, TWO, THREE, FOUR, START, BUMPER, JOYSTICK
representing which “switch” is closed on that cycle.

It also appears that there are two simpler methods,
if you want to trap explicit button press events:

• SixenseInput.Controllers[i].GetButtonDown(buttonObjectName)
• SixenseInput.Controllers[i].GetButtonUp(buttonObjectName)
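Put together, button reads look something like the sketch below. As far as we can tell, those named constants live in a SixenseButtons enum in the plugin; verify against your copy of SixenseInputTest.cs.

    using UnityEngine;

    // Sketch: polled button state vs. edge-triggered press/release on controller 0.
    public class HydraButtons : MonoBehaviour
    {
        void Update()
        {
            var c = SixenseInput.Controllers[0];
            if (c == null || !c.Enabled) return;

            // Polled state: true on every frame the "switch" is held closed
            if (c.GetButton(SixenseButtons.BUMPER))
                Debug.Log("bumper held");

            // Edge events: fire only on the frame of the press / release
            if (c.GetButtonDown(SixenseButtons.ONE))
                Debug.Log("ONE pressed");
            if (c.GetButtonUp(SixenseButtons.ONE))
                Debug.Log("ONE released");
        }
    }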

That sample script has a bevy of (non-optimized?) methods for reading the controllers’ output in real time, from which you can map all buttons, thumbsticks, and 6DoF XYZ/YPR data into your app in code. Hopefully the STEM API will be far more integrated into the standard Unity Input Manager framework, and thus work in seamless parallel with standard controllers, without the need for custom code.

Have any tips on Hydra input for Unity?
Pop’em into the comments below:

VR tech 411 : 6DoF, XYZ + YPR, position + orientation in 3-space

I’ve spent so many cycles describing this gesturally to so many people, I’m considering getting this tattooed on my chest. To avert that, here is the diagram, liberally adapted, corrected, and upgraded from the Oculus Developer Guide:

We present to you, the standard coordinate 3-space system:

dSky-Oculus-XYZ-YPR position orientation diagram

POSITION is listed as a set of coordinates :

  • X is left / right
  • Y is up / down
  • Z is forward / back

ORIENTATION is represented as a quaternion* (more on that below). Simply:

  • Pitch is leaning forward / back (X axis rotation)
  • Yaw is rotating left / right (Y axis rotation / compass orientation)
  • Roll is spinning clockwise / counterclockwise (Z axis rotation)
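If you want to peek at those values in Unity terms, here’s a minimal sketch: Unity stores orientation as a quaternion, and eulerAngles hands back the rotation about X (pitch), Y (yaw), and Z (roll) in degrees. Sign and range conventions differ between SDKs, so treat the numbers as illustrative.

    using UnityEngine;

    // Sketch: read position (XYZ) and orientation (pitch / yaw / roll)
    // from any transform, e.g. a tracked head or hand.
    public class PoseReadout : MonoBehaviour
    {
        void Update()
        {
            Vector3 pos = transform.position;             // X left/right, Y up/down, Z forward/back
            Vector3 e   = transform.rotation.eulerAngles; // degrees, 0 to 360

            float pitch = e.x;  // rotation about X: leaning forward / back
            float yaw   = e.y;  // rotation about Y: compass heading
            float roll  = e.z;  // rotation about Z: clockwise / counterclockwise

            Debug.Log("pos " + pos + "  pitch " + pitch + "  yaw " + yaw + "  roll " + roll);
        }
    }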

Now there, all clear. You’re welcome.


 

Further clarifications:

* A quaternion is a very special (and generally non-human-readable) way of representing a 3-dimensional orientation as a 4-dimensional number (X, Y, Z, W), in order to avoid the strange behaviours (gimbal lock among them) encountered when rotating 3D objects.
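For the curious, here’s what one looks like in practice; a simple 90° turn about the vertical axis comes out as roughly (0, 0.707, 0, 0.707), which is exactly the kind of non-human-readable number we mean:

    using UnityEngine;

    public class QuaternionPeek : MonoBehaviour
    {
        void Start()
        {
            // A 90-degree yaw (rotation about the Y axis) expressed as a quaternion
            Quaternion q = Quaternion.AngleAxis(90f, Vector3.up);
            Debug.Log(q.x + ", " + q.y + ", " + q.z + ", " + q.w);
            // prints approximately: 0, 0.707, 0, 0.707
        }
    }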

* 6DoF is an acronym for “six degrees of freedom”. It is generally used when talking about input devices that let a user control position and orientation simultaneously, such as head trackers, PlayStation Moves, Razer Hydras, Sixense STEMs, etc.

 

Approaching Cinema : Lessons Learned in 360 Capture

dSky Wilderness v011 title splash screen

We’ve created some very rapid prototypes in the past 10 days, just to test the waters (no pun intended) of cinematic 360 capture and playback within VR HMDs.


viewing the photosphere in-engine, with polygons. cool.

The tests have been mostly very rewarding.

Our findings are as follows:

  1. Initial 2D 360 still / video capture is easy. We started with the Google Photosphere app, free on Android, which takes about 5 minutes and 40 photos per sphere. We’ve since upgraded to the Ricoh Theta, which captures 360 video with a single button press.
  2. Consider EVERYTHING in the 360 field of view. It’s all in the shot. There’s no backstage. This concept takes a lot of getting used to if you’re used to working with lights, sound techs, and crews.
  3. Editing is time-consuming. It’s easier to clean up physical reality before the shot than to paint it out in post.
  4. A base plug at the foot of the shot is a nice touch, both visually and to cover the merge seam.
  5. Similarly, we use a lens flare to simulate the light dynamics of the sun.
  6. Audio engineering is key, time-consuming, fun, AND makes the difference between “just another photosphere” and the feeling of presence. You collect video at a single point; audio should be collected at all the local sound-origination points, then placed into proper 3D positions in post, with filters.
  7. Since we’re authoring all this within the game engine, we’re having a lot of fun with the 3D positional audio: placing sounds, even animating them as, say, a bird flies across the forest canopy (see the sketch below).
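For that last item, here is a minimal sketch of the positional-audio idea: make a Unity AudioSource fully 3D and drift it across the scene like a bird crossing the canopy. The numbers are purely illustrative, and spatialBlend assumes Unity 5; on older versions the 3D flag lives on the AudioClip import settings instead.

    using UnityEngine;

    // Sketch: a fully 3D (positional) looping sound that drifts back and forth
    // across the scene, like a bird crossing the forest canopy.
    [RequireComponent(typeof(AudioSource))]
    public class FlyingSound : MonoBehaviour
    {
        public Vector3 from = new Vector3(-20f, 10f, 5f);   // illustrative endpoints
        public Vector3 to   = new Vector3( 20f, 12f, 5f);
        public float seconds = 15f;                         // one crossing

        void Start()
        {
            AudioSource src = GetComponent<AudioSource>();
            src.spatialBlend = 1f;   // 1 = fully 3D positional audio
            src.loop = true;
            src.Play();
        }

        void Update()
        {
            float t = Mathf.PingPong(Time.time / seconds, 1f);
            transform.position = Vector3.Lerp(from, to, t);
        }
    }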

into the wild… virtually.

And finally, there are some things, some of the best parts of nature, which simply aren’t going to be in VR anytime soon. Those being, the elements. Wind in your hair, and clean running stream water on your bare feet… those will have to wait.


fresh water from the springs… yes please!.. but not in VR.

For a 2D sample of what’s being created and captured in the world, start with YouTube’s shiny new 360 video channel.

Are there places or experiences you’d like to see us model in VR? Do you see yourself capturing and publishing your own 360 experiences?

Continue the conversation in the comments below:

Cinema meets Videogames : Strange bedfellows, or Match made in Heaven?

We’re fresh back from the epic Digital Hollywood conference / rolling party in LA, chock-full of ideas about how to integrate classic cinema techniques with our native videogame tropes.

See, 80+% of the participants at the conference were from traditional media backgrounds — music, TV, film. And while VR was absolutely the hot topic of the show — as it was for CES, GDC, and NAB — there was as much confusion as there was excitement about the commercial and artistic promise of this brand spanking new medium.

One of the key findings on our part was a genuine need to integrate cinema techniques (linearity, composition, color, and storytelling) into our hyper-interactive realm of videogame design. Thus began our investigations. What exactly does it take to make full, HD, 3D, 360 captures of real-world environments?

We’ll get into more details later, but for now I want to spell it out, if only for technical humor: It takes a:

  • 12-camera
  • 4k
  • 360°
  • 3D stereoscopic
  • live capture stream
  • …stitched to form an:
  • equirectangular projection
  • over / under
  • 1440p
  • panoramic video

Say that 12 times fast. Oh, and be ready to handle the roughly 200 GB per minute data stream that these rigs generate. Thank god for GPUs.

What does that look like in practice?
Like this:


A still frame from a 360 stereoscopic over/under video. Playback software feeds a warped portion of each image to each of the viewer’s eyes.
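One common way to do that inside a game engine (a sketch of the general technique, not any particular vendor’s player): map the same over/under video onto two inward-facing spheres, offset the UVs so one eye samples the top half of the frame and the other the bottom half, and show each sphere only to its eye’s camera via layers. Which half belongs to which eye depends on the footage.

    using UnityEngine;

    // Sketch: split an over/under stereo panorama between the eyes by sampling
    // the top half of the texture on one sphere and the bottom half on the other.
    // Assumes each eye camera culls to its own layer ("EyeLeft" / "EyeRight"),
    // each holding one inward-facing (flipped-normal) sphere with the video texture.
    public class OverUnderEye : MonoBehaviour
    {
        public bool isLeftEye = true;   // which half this sphere shows (swap if the footage is reversed)

        void Start()
        {
            Material m = GetComponent<Renderer>().material;
            m.mainTextureScale  = new Vector2(1f, 0.5f);                   // use half the frame height
            m.mainTextureOffset = new Vector2(0f, isLeftEye ? 0.5f : 0f);  // top half vs. bottom half
        }
    }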

And how do you capture it? With something like this:


12 camera GoPro 360Hero rig

Or, if you’re really high-budget, this:

Red Dragon 6K 360 3D stereoscopic capture rig by NextVR

array of 10 Red digital cinema cameras (photo not showing top and bottom cam pairs)

Though personally, we really prefer the sci-fi aesthetic:


an early 3D 360 capture prototype by JauntVR

Then there’s the original 360 aerial drone capture device, circa 1980:


Empire Viper probe droid, The Empire Strikes Back, 1980, Lucasfilm

Then, the ever-so-slightly more sinister, and agile version, circa 1999…


Sentinel Drone, The Matrix, 1999, via the Wachowski brothers

What do you think? Is the realism of live capture worth the trouble? Would you prefer “passive” VR experiences that transport you to hard-to-get-to real-world places and events, “interactive” experiences more akin to Xbox and PlayStation games, or some combination of the two?

Join the conversation below: