We’ve been very, very busy preparing the final build… now that the camera is free to roam, we thought we’d better upgrade the exterior. Here she is, v1.0:
…inspired by the original matte painting:
We’ve been doing some fairly extensive development with the Razer Hydras in anticipation of the forthcoming Sixense STEM, as well as a bevy of other 6DoF controllers (Perception Neuron, Sony Move, PrioVR, etc.). The Hydra input harness is somewhat convoluted and exists outside of, and parallel to, the standard Unity Input Manager.
I’ve found scant documentation for this on the interwebs, so here is the result of our reverse engineering efforts. If you want to code for Hydra input in your Unity experiences, here are the hooks:
First, we map primary axes and buttons as symbolic representations in the Unity Input Manager (e.g. P1-Horizontal, P1-Vertical, P1-Jump…); those handle basic keyboard, mouse, and standard gamepad input (Xbox, PlayStation). Then, inside our input handler code, we write custom routines to detect the Hydras, read their values, and sub those values into the aforementioned symbolic variables.
Our best recommendation is to install the Sixense plug-in from the Unity Asset Store, and to thoroughly examine the SixenseInputTest.cs that comes with it.
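To make that substitution pattern concrete, here is a minimal sketch, assuming the Asset Store plug-in is installed and its SixenseInput object is present in the scene; the "P1-…" axis names come from our own Input Manager setup, and the wrapper class name is just a placeholder:

using UnityEngine;

// Minimal sketch of the "sub Hydra values into our symbolic axes" pattern.
// Assumes the Sixense plug-in's SixenseInput object lives in the scene, and that
// "P1-Horizontal" / "P1-Vertical" are axes defined in our Input Manager.
public static class HybridInput
{
    // Prefer the left Hydra thumbstick; fall back to the standard Input Manager.
    public static float Horizontal()
    {
        SixenseInput.Controller hydra = SixenseInput.GetController(SixenseHands.LEFT);
        if (hydra != null && hydra.Enabled)
            return hydra.JoystickX;               // analog, -1.0 to 1.0
        return Input.GetAxis("P1-Horizontal");    // keyboard / mouse / gamepad
    }

    public static float Vertical()
    {
        SixenseInput.Controller hydra = SixenseInput.GetController(SixenseHands.LEFT);
        if (hydra != null && hydra.Enabled)
            return hydra.JoystickY;
        return Input.GetAxis("P1-Vertical");
    }
}

Gameplay code then calls HybridInput.Horizontal() instead of Input.GetAxis() directly, so swapping controllers never touches game logic.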
The basic streaming vars are:
• SixenseInput.Controllers[i].Position — Vector3 XYZ
• SixenseInput.Controllers[i].Rotation — Quaternion (four floats: X, Y, Z, W)
• SixenseInput.Controllers[i].JoystickX — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].JoystickY — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].Trigger — analog float 0.0 to 1.0
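For instance (a rough sketch, not production code): those streaming values can drive a hand object every frame. The hand reference and the scale factor below are our own assumptions; the raw Hydra position arrives at roughly millimeter scale, so we shrink it down to Unity units:

using UnityEngine;

// Rough sketch: drive a hand transform from the Hydra's streaming values each frame.
// The "hand" reference and positionScale are our own; the scale assumes the raw
// position is reported in millimeters.
public class HydraHandDriver : MonoBehaviour
{
    public Transform hand;
    public float positionScale = 0.001f;

    void Update()
    {
        SixenseInput.Controller c = SixenseInput.Controllers[0];
        if (c == null || !c.Enabled)
            return;

        hand.localPosition = c.Position * positionScale;   // Vector3 XYZ
        hand.localRotation = c.Rotation;                    // Quaternion

        float stickX  = c.JoystickX;   // -1.0 to 1.0
        float stickY  = c.JoystickY;   // -1.0 to 1.0
        float squeeze = c.Trigger;     //  0.0 to 1.0
        // ...feed these into locomotion / grab logic as needed.
    }
}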
Obtaining the button taps is a bit more obfuscated; they’re something like:
• SixenseInput.Controllers[i].GetButton(buttonObjectName)
where “buttonObjectName” is one of the plug-in’s SixenseButtons values:
ONE, TWO, THREE, FOUR, START, BUMPER, JOYSTICK
representing which “switch” is being closed on that cycle.
It also appears that there are two simpler methods if you want to trap explicit button press and release events:
• SixenseInput.Controllers[i].GetButtonDown(buttonObjectName)
• SixenseInput.Controllers[i].GetButtonUp(buttonObjectName)
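Putting those together, here is a hedged sketch of per-frame button handling; SixenseButtons is the enum the plug-in ships with, and the Grab()/Release() hooks are hypothetical placeholders of ours:

using UnityEngine;

// Sketch of trapping Hydra button events. SixenseButtons is the plug-in's enum;
// Grab()/Release() are hypothetical hooks standing in for real game logic.
public class HydraButtonHandler : MonoBehaviour
{
    void Update()
    {
        SixenseInput.Controller c = SixenseInput.Controllers[0];
        if (c == null || !c.Enabled)
            return;

        if (c.GetButton(SixenseButtons.BUMPER))     // true every cycle the switch is closed
            Debug.Log("bumper held");

        if (c.GetButtonDown(SixenseButtons.ONE))    // true only on the frame it was pressed
            Grab();

        if (c.GetButtonUp(SixenseButtons.ONE))      // true only on the frame it was released
            Release();
    }

    void Grab()    { /* hypothetical: begin a grab */ }
    void Release() { /* hypothetical: end the grab */ }
}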
The SixenseInputTest.cs sample script has a bevy of (non-optimized?) methods for reading the controllers’ output in real time, from which you can (in code) map all buttons, thumbsticks, and 6DoF position and orientation (XYZ and yaw/pitch/roll) data to your app. Hopefully the STEM API will be far more integrated into the standard Unity Input Manager framework, and thus work in seamless parallel with standard controllers, without the need for custom code.
Have any tips on Hydra input for Unity?
Pop’em into the comments below:
I’ve spent so many cycles describing this gesturally to so many people, I’m considering getting this tattooed on my chest. To avert that, here is the diagram, liberally adapted, corrected, and upgraded from the Oculus Developer Guide:
We present to you the standard 3-space coordinate system:
POSITION is listed as a set of coordinates:
ORIENTATION is represented as a quaternion* (explained below). Simply:
Now there, all clear. You’re welcome.
Further clarifications:
* A quaternion is a very special (and generally non-human-readable) way of representing a 3-dimensional orientation as a 4-dimensional number (X, Y, Z, W), in order to avoid the strange behaviours (gimbal lock, for instance) encountered when rotating 3D objects; see the small sketch after these notes.
* 6DoF is an acronym for “six degrees of freedom”. It is generally used when talking about input devices that allow a user to control position and orientation simultaneously, such as head trackers, PlayStation Moves, Razer Hydras, Sixense STEMs, etc.
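And if you’d rather see the quaternion/Euler relationship in code than in prose, here is a tiny sketch using Unity’s built-in Quaternion helpers (nothing Hydra-specific about it):

using UnityEngine;

// Tiny sketch: you rarely touch X/Y/Z/W directly; Unity's helpers convert to and
// from human-readable Euler angles (degrees about the X, Y, and Z axes).
public class QuaternionPeek : MonoBehaviour
{
    void Start()
    {
        // Build an orientation from readable angles: 30 deg pitch, 45 deg yaw, 0 deg roll.
        Quaternion q = Quaternion.Euler(30f, 45f, 0f);

        // Peek at the 4-dimensional representation hiding underneath.
        Debug.Log(string.Format("X={0:F3} Y={1:F3} Z={2:F3} W={3:F3}", q.x, q.y, q.z, q.w));

        // And round-trip back to Euler angles for display.
        Debug.Log("as Euler angles: " + q.eulerAngles);
    }
}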
We’ve created some very rapid prototypes in the past 10 days, just to test the waters (no pun intended) of cinematic 360 capture and playback within VR HMDs.
The tests have been mostly very rewarding.
Our findings are as follows:
And finally, there are some things, some of the best parts of nature, which simply aren’t going to be in VR anytime soon. Those being: the elements. Wind in your hair, and clean running stream water on your bare feet… those will have to wait.
For a 2D sample of what’s being created and captured in the world, start with YouTube’s shiny new 360 video channel.
Are there places or experiences you’d like to see us model in VR? Do you see yourself capturing and publishing your own 360 experiences?
Continue the conversation in the comments below:
We’re fresh back from the epic Digital Hollywood conference / rolling party in LA, and chock-full of ideas for how to integrate classic cinema techniques with our native videogame tropes.
See, 80+% of the participants at the conference were from traditional media backgrounds — music, TV, film. And while VR was absolutely the hot topic of the show — as it was for CES, GDC, and NAB — there was as much confusion as there was excitement about the commercial and artistic promise of this brand spanking new medium.
One of the key findings on our part was a genuine need to integrate cinema techniques (linearity, composition, color, and storytelling) into our hyper-interactive realm of videogame design. Thus began our investigations. What exactly does it take to make full, HD, 3D, 360 captures of real-world environments?
We’ll get into more details later, but for now I want to spell it out, if only for technical humor: It takes a:
Say that 12 times fast. Oh, and be ready to handle the approximately 200 GB/minute data stream that these rigs generate. Thank god for GPUs.
What does that look like in practice?
Like this:
A still frame from a 360 stereoscopic over/under video. Playback software feeds a warped portion of each image to each of the viewer’s eyes.
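As a rough illustration of that “warped portion per eye” idea (our own playback setup, not something the capture rigs dictate): with an over/under frame, each eye’s inward-facing sphere can simply sample one half of the video texture:

using UnityEngine;

// Rough sketch of splitting an over/under equirectangular frame between the eyes.
// Assumes two inward-facing spheres, each visible to only one eye's camera via
// layers / culling masks, and the common left-on-top layout (swap if yours differs).
public class OverUnderSplit : MonoBehaviour
{
    public Renderer leftEyeSphere;
    public Renderer rightEyeSphere;

    void Start()
    {
        // Each eye gets the full width but only half the height of the frame.
        leftEyeSphere.material.mainTextureScale   = new Vector2(1f, 0.5f);
        leftEyeSphere.material.mainTextureOffset  = new Vector2(0f, 0.5f);  // top half

        rightEyeSphere.material.mainTextureScale  = new Vector2(1f, 0.5f);
        rightEyeSphere.material.mainTextureOffset = new Vector2(0f, 0f);    // bottom half
    }
}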
And how do you capture it? With something like this:
Or, if you’re really high-budget, this:
Though personally, we really prefer the sci-fi aesthetic:
Then there’s the original 360 aerial drone capture device, circa 1980
Viper Probe Droid, The Empire Strikes Back, 1980, Lucasfilm
Then, the ever-so-slightly more sinister and agile version, circa 1999…
Sentinel Drone, The Matrix, 1999, via the Wachowski brothers
What do you think? Is the realism of live capture worth the trouble? Would you prefer “passive” VR experiences that transport you to hard-to-get-to real-world places and events, “interactive” experiences more akin to Xbox and PlayStation games, or some combination of the two?
Join the conversation below: