Keys : Publishing an App to the Oculus Store on GearVR

Or, how we crafted our AndroidManifest.xml superfile from shaky, half-hidden online documentation and rumor.

It took us waaaay too long to hunt down all the details of how to get our Android demo from a side-loaded APK into a fully functioning app that plays nice with the Oculus launcher app on GearVR. For those on the same journey, we're sharing the key resources:

 


  • Mobile Build Target: Android : SETTINGS DETAILS
  • Oculus Submission Validator : a nasty little piece of command-line software that will save you many, many headaches
  • Application Manifest Requirements : bits and pieces of what you need to insert into your XML manifest
  • Oculus Mobile Submission Guidelines PDF : outdated, but still informative
  • Unity VR Overview 5.1+ : more general knowledge
  • Porting Unity projects from the Oculus SDK (0.5.0 or prior) to Utilities for Unity (0.1.2+ on Unity 5.1+) : for those who have been building VR for a year or more and want to bring projects from legacy Oculus SDKs and Unity 4.6 into the present:
    1. what to delete when you port
    2. what's in the Utilities package
  • the general Unity forum for VR Q&A
  • and, if all else fails, a 45-minute detail video session

Your Golden Ticket:
SAMPLE AndroidManifest.xml
 
First, your custom Android manifest needs to live here:
Assets/Plugins/Android/AndroidManifest.xml
A good starting point is the manifest Unity generates when it builds your game, found in:
YourProjectname/Temp/StagingArea/AndroidManifest.xml
Copy it from there into Assets/Plugins/Android, then edit it to satisfy the Oculus requirements.
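
To spare you the scavenger hunt, here is a bare-bones sketch of the kinds of entries the Oculus docs and the Submission Validator look for. Treat it as illustrative only: the package and activity names are placeholders, and the exact requirements shift between SDK versions, so check the current Application Manifest Requirements before you ship.

<?xml version="1.0" encoding="utf-8"?>
<!-- Illustrative skeleton only; your package name, activity, and SDK levels will differ. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.yourstudio.yourapp"
          android:installLocation="auto">

  <application android:theme="@android:style/Theme.Black.NoTitleBar.Fullscreen">

    <!-- tells the Oculus / GearVR launcher that this is a VR-only application -->
    <meta-data android:name="com.samsung.android.vr.application.mode"
               android:value="vr_only"/>

    <!-- whatever activity your Unity version generates; shown here for 5.x builds -->
    <activity android:name="com.unity3d.player.UnityPlayerNativeActivity"
              android:screenOrientation="landscape"
              android:launchMode="singleTask">
      <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <!-- INFO rather than LAUNCHER: store-submitted apps launch from Oculus Home,
             not from the phone's regular app drawer -->
        <category android:name="android.intent.category.INFO"/>
      </intent-filter>
    </activity>

  </application>
</manifest>

Run the Submission Validator against your built APK before uploading; it will flag whatever this sketch gets wrong for your particular Unity and SDK versions.
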
Have fun, kids.

Oculus Connect 2 : Pixar v. Epic : Life in 11ms

There was so much to absorb at Oculus Connect 2, and now coming on the heels of Digital Hollywood, my brain is completely full. So instead of making my massive report as I did last year, we’re going to let the knowledge and wisdom trickle down in little pieces. Here’s the first one, from Max Planck, Sascha Unseld, and the team at Oculus Story Studio that produced Henry:

“At Pixar, with our rendering farm, we accepted the truism that it would take roughly 11 minutes to render each frame of animation, producing 24 frames per second.

With Henry, we have a hard wall of rendering each frame, in real time, at 90 frames per second, which translates to 11 milliseconds of rendering time per frame. We spent 6 months optimizing every single aspect of the models, lighting, renderers and animations to assure that we met that 11ms threshold for each and every frame, without compromise.”

11 minutes to 11 milliseconds.

Thank you Moore’s Law.
And Thank You Epic.

We accept this Truth…

…to be self-evident:

If it performs wonderfully on GearVR,
then it will be a dream on the Oculus Rift,
and take little or no effort to port, other than the input harness.*

If it performs adequately on the Rift + PC,
it may or may not work well on GearVR.
In fact, it may take both a massive refactoring and a
total rethinking of graphics, models, textures and code.

Conclusion?

Be smart : Mobile First.

Develop for the Gear. Port to the Rift. WIN.

 


 

* a few words on input harnesses:

Designing input mechanisms for the GearVR touchpad is a tricky business… it's a VERY limited input surface, and we tend to use both gestures AND a lot of “gaze detection” combined with taps…

For the Rift, we often take the easy way out : keyboard input. At dSky, we are especially fond of the following keys, which are easy to “find” for “blind” VR users (a minimal sketch follows the list):

  • space bar (easiest to find… big and central and edge)
  • cursor arrows at lower right (2nd easiest to find blind)
  • ESC key (far upper left, also “feelable”)
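
To make that concrete, here's a minimal sketch of the kind of keyboard fallback harness we mean. The class and method names are ours, purely illustrative, not from any shipping dSky code:

using UnityEngine;

// Hypothetical "blind-friendly" keyboard fallback for Rift builds.
public class KeyboardFallbackInput : MonoBehaviour
{
    void Update()
    {
        // space bar: big, central, on an edge -- the primary action
        if (Input.GetKeyDown(KeyCode.Space))
            DoPrimaryAction();

        // cursor arrows, lower right of the keyboard: movement / selection
        float h = (Input.GetKey(KeyCode.RightArrow) ? 1f : 0f) - (Input.GetKey(KeyCode.LeftArrow) ? 1f : 0f);
        float v = (Input.GetKey(KeyCode.UpArrow) ? 1f : 0f) - (Input.GetKey(KeyCode.DownArrow) ? 1f : 0f);
        Move(h, v);

        // ESC, far upper-left corner: back out / pause
        if (Input.GetKeyDown(KeyCode.Escape))
            DoBack();
    }

    void DoPrimaryAction() { /* fire, select, "tap" */ }
    void Move(float h, float v) { /* feed your locomotion / menu navigation */ }
    void DoBack() { /* pause menu, exit */ }
}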

Truth be told, we should ALL be designing for:
a) gamepad, and
b) natural hand-tracking devices,
c) with keyboard as a “fallback”
d) oh, did I fail to mention the venerable mouse? oops!

…as the long-term path toward natural and intuitive VR input streams.

 

Capturing Virtual Worlds to Virtual Cinema : How To

We’ve just read, once, twice, three times, this most excellent tutorial / thought piece by D Coetzee, entitled “Capturing Virtual Worlds: A Method for Taking 360 Virtual Photos and Videos”.

The article gets into the dirty details of how we might transform a high-powered, highly interactive VR experience into a compact* file sharable with all our friends on a multitude of platforms (Cardboard, MergeVR, Oculus, WebVR, etc.).

Having spent a great deal of time figuring out these strategies ourselves, it's good to see someone articulate the challenge, the process, and the future development paths so well.

Most accessible present-day solution: a script that renders thin vertical strips with a rotating stereo camera array, then stitches them into the final panorama.
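
As a rough illustration of the idea, here is a mono-only Unity sketch; the names, resolution, and field-of-view handling are ours, and Coetzee's actual method renders an offset left/right eye pair with each strip's horizontal field of view matched exactly to 360° / stripCount:

using UnityEngine;

// Thin-strip capture, mono sketch: rotate a narrow camera through 360 degrees,
// render a thin vertical slice at each step, and paste the slices side by side.
public class ThinStripPanorama : MonoBehaviour
{
    public Camera stripCamera;      // camera parented under this transform
    public int stripCount = 1024;   // number of vertical slices
    public int stripWidth = 4;      // pixels per slice
    public int height = 2048;       // output panorama height

    public Texture2D Capture()
    {
        var pano = new Texture2D(stripCount * stripWidth, height, TextureFormat.RGB24, false);
        var rt = new RenderTexture(stripWidth, height, 24);

        stripCamera.targetTexture = rt;
        stripCamera.aspect = (float)stripWidth / height;   // a tall, thin frustum
        stripCamera.fieldOfView = 120f;                    // vertical coverage, tune to taste

        for (int i = 0; i < stripCount; i++)
        {
            // aim the camera at this slice's heading, then render it
            stripCamera.transform.localRotation = Quaternion.Euler(0f, i * 360f / stripCount, 0f);
            stripCamera.Render();

            // copy the rendered slice into its column of the panorama
            RenderTexture.active = rt;
            pano.ReadPixels(new Rect(0, 0, stripWidth, height), i * stripWidth, 0);
        }

        RenderTexture.active = null;
        stripCamera.targetTexture = null;
        pano.Apply();
        return pano;
    }
}

For stereo, you would run two such cameras offset left and right of the rotation axis and write their slices into the top and bottom halves of an over/under panorama.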

Enjoy.

* the term “compact” is used here liberally. A typical 5-minute VR experience might weigh in at 500MB for the download. Transforming this into a 120MB movie might be considered lightweight… for VR. Time to beef up your data plans, kiddies. And developers, say it with me now : algorithmic content 🙂

 

VR Design : Best Practices

At GDC 2015, I had some informal conversations with some of the best videogame designers and engineers in the world, and inevitably they all centered around: “we know all about videogames, what do we need to look out for when creating VR?”

Across the course of the conference, I synthesized these key points, which together represent what we feel are the guiding principles of VR design, circa 2015.

From the trenches, to your eyes. Here’s your free guidance:

Best Practices in VR Design

1. It's got to be pure 3D
— 2D tricks no longer work: billboards, masks, overlays, etc…
— unless you want to make a stylistic choice
— even your UI is now 100% situated in 3space

2. your geometry, and your physics, must be seamless, watertight, and tight.
— when a player sees the world stereoscopically, small details stand out
— for instance, props that float 1cm above a surface
— and 2mm cracks at wall joins
— these were overlooked in flat, framed games, but are unforgivable in VR

3. really consider detail in textures / normals
— VR has a way of inviting players to inspect objects, props, surfaces and characters…
— up close. really close.
— at a much more intimate level than traditional games
— so be prepared for close inspection
— and make sure that your textures are tight
— along with your collision hulls

4. your collisions for near field objects must be perfect
— fingers can’t penetrate walls
— create detailed high resolution collision shells
. . . for all near-field set pieces, props, and characters

5. positional audio is paramount
— audio now has true perceptive 3d positioning, 360° sphere
— you can really effectively guide the user's attention and direction with audio prompts
— players will generally turn and look toward an audio call for attention.
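
A hedged Unity-flavored sketch of that kind of attention cue (class and field names are ours, not from any particular toolkit):

using UnityEngine;

// Play a fully spatialized one-shot at the point of interest so the player turns toward it.
public class AudioAttentionCue : MonoBehaviour
{
    public AudioClip cue;

    public void CallAttentionTo(Vector3 worldPoint)
    {
        var go = new GameObject("AttentionCue");
        go.transform.position = worldPoint;

        var src = go.AddComponent<AudioSource>();
        src.clip = cue;
        src.spatialBlend = 1f;   // 1 = fully 3D positioned, 0 = flat 2D
        src.Play();

        Destroy(go, cue.length);
    }
}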

6. locomotion is key. and hard.
— swivel chair seated experiences are currently optimal
— near-instant high velocity teleports are optimal
— strafing is out, completely : it generates total nausea
— 2 primary metaphors are
. . . a) cockpits — cars, planes, ships
. . . b) suited helmets — space suit, scuba mask, ski mask
— cockpits allow physical grounding and help support hard / fast movements
— helmets support HUDs for UI, maps, messaging
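
On the “near-instant teleport” point above, the mechanics really are that simple; a hedged sketch (the rig reference and any blink/fade around the snap are yours to wire up):

using UnityEngine;

// Snap the whole player rig to the destination in one frame; sliding it there is what causes nausea.
public class InstantTeleport : MonoBehaviour
{
    public Transform playerRig;   // parent of the VR camera

    public void TeleportTo(Vector3 destination)
    {
        // translate only, preserving the player's current heading
        playerRig.position = destination;
    }
}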

7. flying is fun
— a near optimal form of locomotion
— no concerns with ground contact, head bob
— good way to cover large geographies at moderate speed
— managing in-flight collisions:
— a whole ‘nother conversation : force fields and the skillful flying illusion
— speaking of collisions:

8. consider where to place UI
— fixed GUIs suggest a helmet
— local / natural GUIs usually work better
— consider the point of attachment : the primary options are:
—— head attachment, which is like a helmet
—— abdomen attachment, which is something you can look down and view
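
A hedged Unity sketch of those two attachment points (all names are illustrative; “bodyAnchor” is a hypothetical yaw-only torso transform you maintain yourself):

using UnityEngine;

// Parent a world-space UI either to the head (helmet / HUD style)
// or to an abdomen anchor the player looks down to read.
public class UiAttachment : MonoBehaviour
{
    public Transform hud;          // world-space canvas or quad
    public Transform headAnchor;   // the VR camera
    public Transform bodyAnchor;   // hypothetical torso transform that follows yaw only

    public void AttachToHead()
    {
        hud.SetParent(headAnchor, false);
        hud.localPosition = new Vector3(0f, 0f, 2f);        // floats in front of the eyes
        hud.localRotation = Quaternion.identity;
    }

    public void AttachToAbdomen()
    {
        hud.SetParent(bodyAnchor, false);
        hud.localPosition = new Vector3(0f, -0.5f, 0.4f);   // look down to view it
        hud.localRotation = Quaternion.Euler(60f, 0f, 0f);  // tilted up toward the face
    }
}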

9. graphics performance & frame rate are absolutely key
— the difference between 75fps and 30fps is night and day…
— you MUST deliver 75 fps at a minimum
— don’t ship until you hit this bar
— this isn't an average, it's a floor
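
One cheap way to police that floor during development (a sketch, not a profiler replacement):

using UnityEngine;

// Dev-build watchdog: log every frame that misses the budget,
// because an average frame rate hides the stalls that make players sick.
public class FrameBudgetWatchdog : MonoBehaviour
{
    public float targetFps = 75f;   // 75 on DK2-era Rifts, 90 on the consumer Rift, 60 on GearVR

    void Update()
    {
        float budgetMs = 1000f / targetFps;
        float frameMs = Time.unscaledDeltaTime * 1000f;

        if (frameMs > budgetMs)
            Debug.LogWarning(string.Format("Frame missed the floor: {0:F1} ms (budget {1:F1} ms)", frameMs, budgetMs));
    }
}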

10. consider the frustum / tracking volume
— generally, depending on the specific hardware, positional tracking only works within a limited volume
— design your game to optimize performance while in the volume
— and don’t do things that lead players outside the volume
— and gracefully handle what happens when they exit, and then re-enter, the tracking space
— this is similar to the “follow-cam” challenge in trad 3D videogames
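
A hedged sketch of one graceful-exit approach: fade the view as the head nears the edge of a known tracking volume, and fade back in on re-entry. The extents below are placeholders, not any headset's real numbers:

using UnityEngine;

// Fade a full-screen black overlay in proportion to how far the head has strayed
// outside the tracked volume, instead of letting tracking visibly break.
public class TrackingVolumeGuard : MonoBehaviour
{
    public Transform head;                                        // the tracked HMD camera
    public Vector3 volumeHalfExtents = new Vector3(0.5f, 0.4f, 0.5f);
    public float fadeMargin = 0.1f;                               // metres over which to fade
    public CanvasGroup fadeOverlay;                               // full-screen black overlay

    void LateUpdate()
    {
        Vector3 p = head.localPosition;                           // relative to the tracking origin
        float over = Mathf.Max(
            Mathf.Abs(p.x) - volumeHalfExtents.x,
            Mathf.Abs(p.y) - volumeHalfExtents.y,
            Mathf.Abs(p.z) - volumeHalfExtents.z);

        // 0 inside the volume, 1 once we're fadeMargin beyond its edge
        fadeOverlay.alpha = Mathf.Clamp01(over / fadeMargin);
    }
}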

11. pacing
— when designing the play experience, consider:
— VR currently favors exploratory experiences over fast-paced combat
— this is an absolutely new medium, with its own conventions and rules
— this is a KEY design principle
— be considerate of a user's comfort and joy

11+. test test test
— VR experiences are very subjective
— find out what works for your intended audience
— reward your players for their commitment

 


That’s your high level design direction.

There are also some great, more detailed technical docs on the web regarding the dirty details of VR dev & design, from the creators themselves. Here they are:

Got experience with VR dev / design?
Think we missed something? Want a job?
Comment below:

Typekit for Game Designers : Live Shadows for TextMeshes in Unity

YES, thanks to Typogenic, we now have live light baking and true shadow-casting from dynamic in-game text in Unity. Thank God!

real time shadows cast from dynamic TextMesh in Unity3d 5.1

THANK YOU Typogenic,
you’re TypoGenius!
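
For context, a hedged Unity-side sketch of what “casting shadows from text” amounts to: shadow casting is just a renderer flag, and the catch is that Unity's stock TextMesh font shader doesn't write depth, which is exactly the gap a shader-based text solution like Typogenic fills:

using UnityEngine;
using UnityEngine.Rendering;

// Enable shadow casting on a text object's renderer.
// Assumes the material uses a depth-writing, shadow-capable text shader.
public class TextShadowCaster : MonoBehaviour
{
    void Start()
    {
        var r = GetComponent<MeshRenderer>();
        r.shadowCastingMode = ShadowCastingMode.On;
        r.receiveShadows = true;
    }
}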

And, while we’re at it,
where have you been, Littera?
This web-based font rendering tool is awesome!

kvazars.com/littera/

What’s all this type about?

Exciting upgrades coming down the road for the dSky VRengine, specifically for our CreditsMachine component.

Stay tuned!

Razer Hydra Input in Unity3D : Sixense Input control syntax

dSky lightsaber demo screenshot

We've been doing some fairly extensive development with the Razer Hydras in anticipation of the forthcoming Sixense STEM, as well as a bevy of other 6DoF controllers (Perception Neuron, Sony Move, PrioVR, etc). The Hydra input harness is somewhat convoluted, and exists outside of, and parallel to, the standard Unity Input Manager.

 

I’ve found scant documentation for this on the interwebs, so here is the result of our reverse engineering efforts. If you want to code for Hydra input in your Unity experiences, here are the hooks:

First, we map primary axes and buttons as symbolic representations in the Unity Input Manager (e.g. P1-Horizontal, P1-Vertical, P1-Jump…); those handle basic keyboard, mouse, and standard gamepad input (Xbox, PlayStation). Then, inside our Input handler code, we write custom routines to detect the Hydras, read their values, and substitute those values into the aforementioned symbolic variables.

Our best recommendation is to install the Sixense plug-in from the Unity Asset Store, and to thoroughly examine the SixenseInputTest.cs that comes with it.

The basic streaming vars are :

• SixenseInput.Controllers[i].Position — Vector3 XYZ
• SixenseInput.Controllers[i].Rotation — Quaternion (four components, XYZW)
• SixenseInput.Controllers[i].JoystickX — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].JoystickY — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].Trigger — analog float 0.0 to 1.0

obtaining the button taps is a bit more obfuscated;
they're something like:

• SixenseInput.Controllers[i].GetButton(buttonObjectName)
where “buttonObjectName” is one of several button identifiers:
ONE, TWO, THREE, FOUR, START, BUMPER, JOYSTICK
representing which “switch” is closed on that cycle.

It also appears that there are two simpler methods,
if you want to trap explicit button press events:

• SixenseInput.Controllers[i].GetButtonDown(buttonObjectName)
• SixenseInput.Controllers[i].GetButtonUp(buttonObjectName)

The bundled SixenseInputTest.cs sample script has a bevy of (non-optimized?) methods for reading the controllers' output in real time, from which you can (in code) map all buttons, thumbsticks, and 6DoF XYZ + YPR data into your app. Hopefully the STEM API will be far better integrated into the standard Unity Input Manager framework, and thus work in seamless parallel with standard controllers, without the need for custom code.
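
Putting those pieces together, here is a minimal read-loop sketch. It assumes the Asset Store plug-in is installed with its SixenseInput prefab in the scene, and member names such as SixenseButtons and Enabled are as we recall them from SixenseInputTest.cs, so verify against that file:

using UnityEngine;

// Poll every connected Hydra each frame and surface its pose, stick, trigger, and buttons.
public class HydraReader : MonoBehaviour
{
    void Update()
    {
        for (int i = 0; i < SixenseInput.Controllers.Length; i++)
        {
            var c = SixenseInput.Controllers[i];
            if (c == null || !c.Enabled) continue;

            Vector3 pos = c.Position;      // XYZ, in the Hydra base station's frame
            Quaternion rot = c.Rotation;   // orientation quaternion
            float trigger = c.Trigger;     // analog, 0.0 .. 1.0

            // these are the values we substitute into our symbolic
            // Input Manager axes (P1-Horizontal / P1-Vertical)
            float h = c.JoystickX;
            float v = c.JoystickY;

            if (c.GetButtonDown(SixenseButtons.BUMPER))
                Debug.Log("Bumper pressed on controller " + i + " at " + pos);
        }
    }
}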

Have any tips on Hydra input for Unity?
Pop’em into the comments below:

VR tech 411 : 6DoF, XYZ + YPR, position + orientation in 3space

I’ve spent so many cycles describing this gesturally to so many people, I’m considering getting this tattooed on my chest. To avert that, here is the diagram, liberally adapted, corrected, and upgraded from the Oculus Developer Guide:

We present to you, the standard coordinate 3-space system:

dSky / Oculus diagram : position (X, Y, Z) and orientation (yaw, pitch, roll)

POSITION is listed as a set of coordinates :

  • X is left / right
  • Y is up / down
  • Z is forward / back

ORIENTATION is represented as a quaternion* (more on that below). Simply:

  • Pitch is leaning forward / back (X axis rotation)
  • Yaw is rotating left / right (Y axis rotation / compass orientation)
  • Roll is spinning clockwise / counterclockwise (Z axis rotation)

Now there, all clear. You’re welcome.
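
In Unity terms, a quick, hedged readout sketch of those same quantities (“hmd” here is whatever tracked camera transform your rig exposes):

using UnityEngine;

// Position is a Vector3; orientation is a Quaternion; eulerAngles hands back
// human-readable pitch / yaw / roll in degrees.
public class PoseReadout : MonoBehaviour
{
    public Transform hmd;   // the tracked camera transform

    void Update()
    {
        Vector3 position = hmd.localPosition;      // X right, Y up, Z forward/back
        Quaternion orientation = hmd.localRotation;

        Vector3 euler = orientation.eulerAngles;
        float pitch = euler.x;   // leaning forward / back
        float yaw   = euler.y;   // compass heading
        float roll  = euler.z;   // clockwise / counterclockwise tilt

        // note: Unity treats +Z as forward; the native Oculus SDK uses -Z forward,
        // so signs can differ between the two conventions.
    }
}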


 

Further clarifications:

* a quaternion is a very special (and generally non-human-readable) way of representing a 3-dimensional orientation as a 4-component number (X, Y, Z, W), in order to avoid the strange behaviours (gimbal lock among them) encountered when rotating 3D objects with plain angles.

* 6DoF is an acronym for “six degrees of freedom”. It is generally used when talking about input devices which allow a user to control position and orientation simultaneously, such as head trackers, PlayStation Moves, Razer Hydras, Sixense STEMs, etc.

 

Cinema meets Videogames : Strange bedfellows, or Match made in Heaven?

We’re fresh back from the epic Digital Hollywood conference / rolling party in LA, and chock full of ideas of how to integrate classic cinema techniques with our native videogame tropes.

See, 80+% of the participants at the conference were from traditional media backgrounds — music, TV, film. And while VR was absolutely the hot topic of the show — as it was for CES, GDC, and NAB — there was as much confusion as there was excitement about the commercial and artistic promise of this brand spanking new medium.

One of the key findings on our part was a genuine need to integrate cinema techniques (linearity, composition, color and storytelling) into our hyper-interactive realm of videogame design. Thus began our investigations. What exactly does it take to make full, HD, 3D, 360 captures of real-world environments?

We’ll get into more details later, but for now I want to spell it out, if only for technical humor: It takes a:

  • 12-camera
  • 4k
  • 360°
  • 3D stereoscopic
  • live capture stream
  • …stitched to form an:
  • equirectangular projection
  • over / under
  • 1440p
  • panoramic video

Say that 12 times fast. Oh, and be ready to handle the approximately 200GB-per-minute data stream that these rigs generate. Thank god for GPUs.

What does that look like in practice?
Like this:

A still frame from a 360 stereoscopic over/under video. Playback software feeds a warped portion of each image to each of the viewer's eyes.

And how do you capture it? With something like this:

12 camera GoPro 360Hero rig

Or, if you’re really high-budget, this:

Red Dragon 6K 360 3D stereoscopic capture rig by NextVR

array of 10 Red digital cinema cameras (photo not showing top and bottom cam pairs)

Though personally, we really prefer the sci-fi aesthetic:

an early 3d 360 capture prototype by JauntVR

Then there’s the original 360 aerial drone capture device, circa 1980

Imperial Viper probe droid, The Empire Strikes Back, 1980, Lucasfilm

Then, the ever-so-slightly more sinister, and agile version, circa 1999…

Sentinel Drone, The Matrix, 1999, via the Wachowski brothers

What do you think? Is the realism of live capture worth the trouble? Would you prefer “passive” VR experiences that transport you to hard-to-get-to real-world places and events, “interactive” experiences more akin to Xbox and PlayStation games, or some combination of the two?

Join the conversation below: