Keys : Publishing an App to the Oculus Store on GearVR

Or, how we crafted our AndroidManifest.xml superfile from shaky, half-hidden online documentation and rumor.

It took us waaaay too long to hunt down all the details of how to get our Android demo from a side-loaded APK into a fully functioning app that plays nice with the Oculus launcher on GearVR. For those on the same journey, we're sharing the key resources:

 


Mobile Build Target: Android : SETTINGS DETAILS
Oculus Submission Validator
a nasty little piece of command-line software that will save you many many headaches:
Application Manifest Requirements
bits and pieces of what you need to insert into your XML manifest
outdated but still informative:
Oculus Mobile Submission Guidelines PDF
more general knowledge:
Unity VR Overview 5.1+
porting Unity projects from Oculus SDK (0.5.0 or prior)
to Utilities for Unity (0.1.2+ on Unity 5.1+)
For those who have been building VR for a year or more, and want to update your projects from legacy Oculus SDKs and Unity 4.6 into the present:
1. what to delete when you port
2. what's in the Utilities package
general Unity forum for VR Q&A
a possible 45-minute detailed video session if all else fails:

Your Golden Ticket:
SAMPLE AndroidManifest.xml
 
First, you need a custom Android manifest at:
Assets/Plugins/Android/AndroidManifest.xml
A good starting point is the manifest that Unity generates when you compile your game; it lives in:
YourProjectname/Temp/StagingArea/AndroidManifest.xml
Copy it from there into Assets/Plugins/Android, then edit it to satisfy the Oculus requirements above.
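
To make that concrete, here's a hedged sketch of the GearVR-specific bits the requirements docs will have you add. The package and activity names are placeholders, and your Unity-generated manifest will contain plenty more that you should leave intact:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.yourstudio.yourapp">
  <application>
    <!-- tells the GearVR service / Oculus Home to launch this as a VR-only app -->
    <meta-data android:name="com.samsung.android.vr.application.mode"
               android:value="vr_only"/>
    <activity android:name="com.unity3d.player.UnityPlayerNativeActivity"
              android:screenOrientation="landscape"
              android:launchMode="singleTask"
              android:configChanges="screenSize|orientation|keyboardHidden|keyboard">
      <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <!-- store submissions use INFO rather than LAUNCHER,
             so the app launches from the Oculus launcher, not the phone's home screen -->
        <category android:name="android.intent.category.INFO"/>
      </intent-filter>
    </activity>
  </application>
</manifest>

Treat this as a starting point only; the Application Manifest Requirements page and the Submission Validator linked above remain the source of truth.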
Have fun, kids.

Capturing Virtual Worlds to Virtual Cinema : How To

We've now read, once, twice, three times, this most excellent tutorial / thought piece by D Coetzee, entitled "Capturing Virtual Worlds: A Method for Taking 360 Virtual Photos and Videos".

The article gets into the dirty details of how we might transform a high-powered, highly interactive VR experience into a compact* file sharable with all our friends across a multitude of platforms (Cardboard, MergeVR, Oculus, WebVR, etc.).

Having spent a great deal of time figuring out these strategies ourselves, it's good to see someone articulate the challenge, the process, and the future development paths so well.

360° 3D panorama: thin-strip stitching

Most accessible present-day solution: a script that renders thin vertical strips with a rotating stereo camera array, then stitches them into the final panorama.
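
If you want to experiment with the idea before reading the article, here's a minimal, mono-only Unity C# sketch of the same principle. The class name and strip counts are our own; a stereo version would run two cameras offset by half your IPD and build one panorama per eye.

using UnityEngine;

public class StripStitcher : MonoBehaviour
{
    public Camera captureCam;        // camera used to render each strip
    public int strips = 720;         // number of vertical strips (0.5° of yaw per strip)
    public int stripHeight = 1024;   // output panorama height in pixels

    public Texture2D CapturePanorama()
    {
        var pano = new Texture2D(strips, stripHeight, TextureFormat.RGB24, false);
        var rt = new RenderTexture(strips, stripHeight, 24);
        captureCam.targetTexture = rt;

        for (int i = 0; i < strips; i++)
        {
            // sweep the camera through a full 360° of yaw
            captureCam.transform.rotation = Quaternion.Euler(0f, i * 360f / strips, 0f);
            captureCam.Render();

            // read back only the centre column of this render and paste it into the panorama
            RenderTexture.active = rt;
            pano.ReadPixels(new Rect(rt.width / 2, 0, 1, stripHeight), i, 0);
        }

        RenderTexture.active = null;
        captureCam.targetTexture = null;
        pano.Apply();
        return pano;
    }
}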

Enjoy.

  * the term “compact” is used here liberally. A typical 5-minute VR experience might weigh in at 500MB for the download. Transforming this into a 120MB movie might be considered lightweight… for VR. Time to beef up your data plans, kiddies. And developers, say it with me now: algorithmic content 🙂

 

VR Design : Best Practices

At GDC 2015, I had informal conversations with some of the best videogame designers and engineers in the world, and they all inevitably centered on the same question: “we know all about videogames; what do we need to look out for when creating VR?”

Over the course of the conference, I synthesized these key points, which together represent what we feel are the guiding principles of VR design, circa 2015.

From the trenches, to your eyes. Here’s your free guidance:

Best Practices in VR Design

1. It's got to be pure 3D
— 2D tricks no longer work: billboards, masks, overlays, etc…
— …unless you want to make a stylistic choice
— even your UI is now 100% situated in 3space

2. your geometry and physics must be seamless, watertight, and tight.
— when a player sees the world stereoscopically, small details stand out
— for instance, props that float 1cm above a surface
— and 2mm cracks at wall joins
— these were overlooked in flat-screen (“framed”) games, but are unforgivable in VR

3. really consider detail in textures / normals
— VR has a way of inviting players to inspect objects, props, surfaces and characters…
— up close. really close.
— in a much more intimate level than traditional games
— so be prepared for close inspection
— and make sure that your textures are tight
— along with your collision hulls

4. your collisions for near field objects must be perfect
— fingers can’t penetrate walls
— create detailed high resolution collision shells
. . . for all near-field set pieces, props, and characters

5. positional audio is paramount
— audio now has true perceptual 3D positioning across a full 360° sphere
— you can very effectively guide the user's attention and direction with audio prompts (a minimal sketch appears after this list)
— they will generally turn and look toward audio calls for attention.

6. locomotion is key. and hard.
— swivel chair seated experiences are currently optimal
— near-instant high velocity teleports are optimal
— strafing is out, completely: it generates total nausea
— 2 primary metaphors are
. . . a) cockpits — cars, planes, ships
. . . b) suited helmets — space suit, scuba mask, ski mask
— cockpits allow physical grounding and help support hard / fast movements
— helmets support HUDs for UI, maps, messaging

7. flying is fun
— a near optimal form of locomotion
— no concerns with ground contact, head bob
— good way to cover large geographies at moderate speed
— managing in-flight collisions is a whole ‘nother conversation: force fields and the skillful flying illusion

8. consider where to place UI
— fixed GUIs suggest a helmet
— local / natural GUIs are generally preferable
— consider the point of attachment; the primaries are:
—— head attachment, which is like a helmet
—— abdomen attachment, which is something you can look down and view

9. graphics performance & frame rate are absolutely key
— the difference between 75fps and 30fps is night and day…
— you MUST deliver 75 fps at a minimum
— don't ship until you hit this bar
— this isn't an average, it's a floor

10. consider the frustum / tracking volume
— generally, depending on the specific hardware, positional tracking works only within a limited volume
— design your game to optimize performance while in the volume
— and don't do things that lead players outside the volume
— and gracefully handle what happens when they exit, and then re-enter, the tracking space (see the fade sketch after this list)
— this is similar to the “follow-cam” challenge in trad 3D videogames

11. pacing
— when designing the play experience, consider:
— VR currently favors exploratory experiences over fast-paced combat
— this is an absolutely new medium, with its own conventions and rules
— this is a KEY design principle
— be considerate of a user's comfort and joy

11+. test test test
— VR experiences are very subjective
— find out what works for your intended audience
— reward your players for their commitment
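
Two quick sketches for points 5 and 10 above. First, the audio call-for-attention, assuming stock Unity 5 3D audio (no third-party spatializer); the helper name is hypothetical:

using UnityEngine;

public static class AttentionCue
{
    // play a fully spatialized one-shot at a point of interest to draw the player's gaze
    public static void PlayAt(AudioClip clip, Vector3 worldPos)
    {
        var go = new GameObject("AttentionCue");
        go.transform.position = worldPos;

        var src = go.AddComponent<AudioSource>();
        src.clip = clip;
        src.spatialBlend = 1f;                      // 1 = fully 3D-positioned
        src.rolloffMode = AudioRolloffMode.Linear;
        src.Play();

        Object.Destroy(go, clip.length);            // clean up once the cue has played
    }
}

And second, one graceful way to handle tracking-volume exit and re-entry: fade the view out as the head leaves an assumed tracked volume and fade back in on return. This is a minimal sketch against Unity 5.1+'s built-in VR tracking API; the volume size is a placeholder, not a hardware spec.

using UnityEngine;
using UnityEngine.VR;

public class TrackingVolumeGuard : MonoBehaviour
{
    public Bounds trackingVolume = new Bounds(Vector3.zero, new Vector3(1.5f, 1.0f, 1.5f));
    public CanvasGroup fadeOverlay;   // full-screen black overlay (e.g. a canvas on the camera); alpha 0 = invisible

    void Update()
    {
        // head position relative to the tracking origin
        Vector3 head = InputTracking.GetLocalPosition(VRNode.Head);

        // fade out as the head leaves the tracked volume, fade back in on re-entry
        float target = trackingVolume.Contains(head) ? 0f : 1f;
        fadeOverlay.alpha = Mathf.MoveTowards(fadeOverlay.alpha, target, Time.deltaTime * 2f);
    }
}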

 


That’s your high level design direction.

There are also some great, more detailed technical docs on the web covering the dirty details of VR dev & design, from the creators themselves. Here they are:

Got experience with VR dev / design?
Think we missed something? Want a job?
Comment below:

Razer Hydra Input in Unity3D : Sixense Input control syntax

[screenshot: dSky lightsaber demo]

We've been doing some fairly extensive development with the Razer Hydras in anticipation of the forthcoming Sixense STEM, as well as a bevy of other 6DoF controllers (Perception Neuron, Sony Move, PrioVR, etc.). The Hydra input harness is somewhat convoluted and exists outside of, and parallel to, the standard Unity Input Manager.

[image: Razer Hydra controllers]

 

I’ve found scant documentation for this on the interwebs, so here is the result of our reverse engineering efforts. If you want to code for Hydra input in your Unity experiences, here are the hooks:

First, we map the primary axes and buttons as symbolic representations in the Unity Input Manager (i.e. P1-Horizontal, P1-Vertical, P1-Jump…); those handle basic keyboard, mouse, and standard gamepad input (Xbox, PlayStation). Then, inside our input-handler code, we write custom routines to detect the Hydras, read their values, and substitute those values into the aforementioned symbolic variables (a sketch of this pattern appears at the end of this section).

Our best recommendation is to install the Sixense plug-in from the Unity Asset Store, and to thoroughly examine the SixenseInputTest.cs that comes with it.

The basic streaming vars are :

• SixenseInput.Controllers[i].Position — Vector3 XYZ
• SixenseInput.Controllers[i].Rotation — Quaternion (Vector4-style: x, y, z, w)
• SixenseInput.Controllers[i].JoystickX — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].JoystickY — analog float -1.0 to 1.0
• SixenseInput.Controllers[i].Trigger — analog float 0.0 to 1.0
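
For example, to drive a hand proxy's transform directly from those streams (note: the position scale is an assumption — check the units the plug-in reports in your setup; the plug-in's own SixenseHand script already performs this kind of mapping for you):

using UnityEngine;

public class HydraHandProxy : MonoBehaviour
{
    public int controllerIndex = 0;       // which entry in SixenseInput.Controllers to follow
    public float positionScale = 0.001f;  // assumption: raw positions appear to be millimetres

    void Update()
    {
        SixenseInput.Controller hand = SixenseInput.Controllers[controllerIndex];
        if (hand == null || !hand.Enabled) return;

        transform.localPosition = hand.Position * positionScale;  // Vector3
        transform.localRotation = hand.Rotation;                  // Quaternion
    }
}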

Obtaining the button taps is a bit more obscure;
they look something like this:

• SixenseInput.Controllers[i].GetButton(buttonObjectName)
where “buttonObjectName” is one of several button constants:
ONE, TWO, THREE, FOUR, START, BUMPER, JOYSTICK
representing which “switch” is being held closed on that cycle.

It also appears that there are two simpler methods,
if you want to trap explicit button press events:

• SixenseInput.Controllers[i].GetButtonDown(buttonObjectName)
• SixenseInput.Controllers[i].GetButtonUp(buttonObjectName)

This sample script has a bevy of (non-optimized?) methods for reading the controllers' output in real time, from which you can (in code) map all buttons, thumbsticks, and 6DoF XYZ / YPR data to your app. Hopefully the STEM API will be far more integrated into the standard Unity Input Manager framework, and thus work in seamless parallel with standard controllers, without the need for custom code.
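
To tie it together, here's a hedged sketch of the substitution approach described earlier: read the symbolic Input Manager axes by default, then override them with Hydra values when a controller is present. It assumes the plug-in's SixenseInput prefab is in the scene and that SixenseButtons is the plug-in's button enum; the “P1-…” axis names are our own conventions from above.

using UnityEngine;

public class PlayerInputRouter : MonoBehaviour
{
    public float Horizontal { get; private set; }
    public float Vertical   { get; private set; }
    public bool  Jump       { get; private set; }

    void Update()
    {
        // default path: symbolic axes from the Unity Input Manager
        // (keyboard, mouse, Xbox / PlayStation gamepads)
        Horizontal = Input.GetAxis("P1-Horizontal");
        Vertical   = Input.GetAxis("P1-Vertical");
        Jump       = Input.GetButtonDown("P1-Jump");

        // if a Hydra is connected and enabled, substitute its values
        // (which array index maps to which hand depends on how the controllers were activated)
        SixenseInput.Controller hydra = SixenseInput.Controllers[0];
        if (hydra != null && hydra.Enabled)
        {
            Horizontal = hydra.JoystickX;
            Vertical   = hydra.JoystickY;
            Jump       = hydra.GetButtonDown(SixenseButtons.BUMPER);
        }
    }
}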

Have any tips on Hydra input for Unity?
Pop’em into the comments below:

VR tech 411 : 6DoF, XYZ + YPR, position + orientation in 3space

I've spent so many cycles describing this gesturally to so many people that I'm considering getting it tattooed on my chest. To avert that, here is the diagram, liberally adapted, corrected, and upgraded from the Oculus Developer Guide:

We present to you, the standard coordinate 3-space system:

[diagram: dSky / Oculus XYZ + YPR position and orientation]

POSITION is listed as a set of coordinates:

  • X is left / right
  • Y is up / down
  • Z is forward / back

ORIENTATION is represented as a quaternion* (more on that below). Simply:

  • Pitch is leaning forward / back (X axis rotation)
  • Yaw is rotating left / right (Y axis rotation / compass orientation)
  • Roll is spinning clockwise / counterclockwise (Z axis rotation)

Now there, all clear. You’re welcome.


 

Further clarifications:

* a quaternion is a very special (and generally non-human-readable) way of representing a 3-dimensional orientation as a 4-component number (X, Y, Z, W); it sidesteps the gimbal lock and interpolation problems encountered when rotating 3D objects with plain Euler angles.

* 6DoF is an acronym for “six degrees of freedom”. It is generally used when talking about input devices that let a user control position and orientation simultaneously: head trackers, the PlayStation Move, Razer Hydra, Sixense STEM, etc.
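
To make the quaternion note concrete, here's a tiny Unity illustration of the same orientation expressed both ways:

using UnityEngine;

public class OrientationExample : MonoBehaviour
{
    void Start()
    {
        // yaw 90° to the right, specified as pitch / yaw / roll in degrees...
        Quaternion q = Quaternion.Euler(0f, 90f, 0f);

        // ...but stored internally as a 4-component quaternion (x, y, z, w)
        Debug.Log(q);   // prints roughly (0.0, 0.7, 0.0, 0.7)

        // quaternions interpolate smoothly, which raw Euler angles do not
        transform.rotation = Quaternion.Slerp(Quaternion.identity, q, 0.5f);   // yaw 45°
    }
}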

 

The Avatar Magna Carta : or, How to Puppeteer a 3D Humanoid with 6DoF head and hands tracking

In this post, we present the workflow required to enable a player to live-puppeteer a rigged, first-person 3D avatar in-game, by:

  1. driving the avatar's in-game hands in a 1:1 relationship with the player's actual physical hands, and
  2. animating the avatar's in-game head orientation to match the precise orientation of the player's physical real-world head,
  3. via 3D trackers on the player's head and hands, and the application of a simple inverse kinematic (IK) physics model.

We spent a long time figuring out this path, so I thought we'd share it with the community. Note: this is an entirely technical workflow / pipeline post for readers who are currently developing VR applications; it's not for the general consumer. This tutorial is specifically crafted for a Unity3D pipeline. It is also specific to the Oculus Rift DK2 HMD and the Razer Hydra hand trackers, powered by Sixense. It should work with other tracking solutions, with modification.

Go ahead, wave hello… heads and hands fully tracked and puppeteered

First, the basic premise:

People want to relate to their own physical avatars in VR. They want to be able to look down at their feet, and see their body. They want to be able to wave their hands in front of their face, and see some representation of their appendages in front of them, superimposed on the virtual scene. In short, they want to feel like they are present in the experience, not just an ethereal viewer.

This problem proved a bit more difficult to solve in practice than one might imagine. So, in the interest of fostering community, we are sharing our technical solution with everyone. It isn't perfect yet; we've posted a number of tips and follow-on research topics at the end of the tutorial, places where this needs to go before it's fully “ready for prime time.” However, the solution presented here IS functional, leaps beyond the standard “avatar-less” VR being produced today, and should serve as a baseline from which improvements can and will be made.

Now, the actual tutorial:


  1. build and skin your Avatar Model
    1. we create humanoids in Mixamo Fuse
    2. or use any modeling software you prefer: Maya, Blender, etc.
    3. make sure that your final model is in T-pose
  2. rig the character with bones
    1. this can be done by uploading a t-pose character to mixamo.com
    2. …or manually in your software
    3. use the maximum resolution possible : we use a 65-bone skeleton, which includes articulated fingers
  3. give the character at least one animation
    1. we will use “idle” state
    2. you assign this online in Mixamo
  4. get the Avatar into Unity
    1. export from Mixamo to Unity FBX format
    2. import the resulting FBX (approx. 5-20MB) into Unity Assets : gChar folder
    3. this will generate a prefab in Unity along with 4 components in a subfolder:
      1. a mesh
      2. a bone structure
      3. an animation loop file
      4. an avatar object
    4. the prefab will have a name such as “YourModelName@yourAnimName”
  5. Configure the Avatar
    1. click on the prefab in the Assets folder
    2. In the inspector, select the “Rig” tab
      1. make sure that “Humanoid” is selected in the “Animation Type” pull-down
      2. if you selected that manually, hit “Apply”
    3. drag the prefab from the Assets into the Hierarchy
    4. select the prefab avatar in the Hierarchy
      1. In the Inspector:
      2. add an “Animator” component. we will fill in the details later
      3. add the g-IKcontrol.cs C# script. again, we will fill in details later
      4. you can copy the source of the script from here (a hedged reconstruction also appears just after the tutorial steps, below)
  6. Add the latest Oculus SDK (OVR) to the project. 
    1. Download the latest Oculus SDK for Unity
    2. this is usually done by double-clicking “OculusUnityIntegration.unitypackage” in the OS, then accepting the import into your project by clicking “import all”
    3. You should now have a folder within Assets called “OVR”
  7. Add the latest Sixense Hydra / STEM SDK to the project
    1. Download the Hydra plug-in from the Unity Asset Store
    2. Import it into your project.
    3. You should now have a folder within Assets called “SixenseInput”
  8. Create a high level Empty in your hierarchy and name it “PLAYER-ONE”
    1. make your Avatar prefab a child of this parent
    2. Drag the OVR CameraRig from the OVR folder and also make it a child of PLAYER-ONE
  9. Properly position the Oculus camera in-scene
    1. The Oculus camera array should be placed just forward of the avatar's eyes
    2. we typically reduce the near clipping plane to around 0.15m
    3. If you're using the OVRPlayerController, these Character Controller settings work well:
      1. Center Y = -0.84m (standing),
      2. Center Z = -0.1 (prevents the camera from being “inside the head”)
      3. Radius = 0.23m
      4. Height = 2m
    4. This will require some trial and error. Make sure that you use the Oculus camera, and not the Oculus Player controller. Experimentation will be required to bridge the spatial relationship between a seated player and a standing avatar; calibration software needs to be written. “Trial and error” here means a series of very fast cycles of: build, test, make notes; modify, re-build, re-test, make notes; repeat until perfect. There are many gyrations, and you will become an expert at rapidly donning and removing the HMD, headphones, and hand controllers.
  10. Create the IK target for head orientation
    1. Right-click on the CenterEyeAnchor in the Hierarchy, and select “Create 3D Object > Cube”
    2. Name the cube “dristi-target”
    3. move the cube approx 18” (0.5m) directly outward from the Avatar’s third eye
    4. This will serve as the IK target that the avatar's head is “aimed” at, i.e. where they are looking. In yoga, the direction of the gaze is called dristi.
  11. Get the Sixense code into your scene
    1. Open the SixenseDemoScene
    2. copy the HandsController and SixenseInput assets from the hierarchy
    3. Re-open your scene
    4. paste the HandsController and SixenseInput assets into your hierarchy
    5. drag both to make them children of OVRCameraRig
  12. Make sure the Sixense hands are correctly wired.
    1. Each hand should have the “SixenseHandAnimator” controller assigned to it
    2. Root Motion should be UNchecked
    3. Each hand should have the SixenseHand Script attached to it
    4. On the pull-down menu for the SixenseHand script, the proper hand should be selected (L/R)
  13. Properly position the Sixense hands in-scene
    1. They should be at about the Y-pos height of the belly button
    2. The wrists should be about 12” or 30cm in Z-pos forward of the abdomen
    3. In other words, they should be positioned as if you were sitting with your elbows glued to your sides, forearms extended parallel to the ground.
    4. You will want to adjust, tweak, and perfect this positioning. There is an intrinsic relationship between where you position the hands in the scene and where the Sixense trackers sit in the real world relative to the Oculus camera. Trial and error and clever calibration software solve this. That's another tutorial.
  14. Make the Sixense hands invisible. 
    1. we do this because they will merely serve as IK targets for the avatar's hands
    2. do this by drilling down into HandsController : Hand_Right : Hand_MDL and unchecking the “Skinned Mesh Renderer” in the Inspector panel
    3. do the same with the left hand.
    4. this leaves the assets available as IK targets, but removes their rendered polys from the game
  15. Create the Animator Controller
    1. create transitions from Entry to New State, and
    2. from New State to Idle (or whatever you named your animation)
    3. On the Base Layer, click the gear, and make sure that “IK Pass” is checked.
    4. this will pass the IK data from the animation controller on down the script chain
  16. Assign the new Animation Controller to the Avatar
    1. select the avatar in the hierarchy
    2. assign the controller in the inspector
  17. Map the Avatar with Puppet IK targets for Hands and Head
    1. drag the “Hand – Left” from the Sixense Handscontroller parent to the “Left Hand Obj” variable
    2. drag the “Hand – Right” from the Sixense Handscontroller parent to the “Right Hand Obj” variable
    3. drag the Look-At-Target from the OVRCameraRig to the “Look Obj” variable
      1. The Look-At-Target is nested deep:
      2. PLAYER-ONE : OVRCameraRig : TrackingSpace : CenterEyeAnchor : Look-at-Target
  18. THAT'S IT!
    1. Build a compiled runtime.
    2. Connect your Rift and Hydra
    3. launch the game
    4. activate the Hydras.
      1. grasp the left controller, aim it at the base, and squeeze the trigger.
      2. grasp the right controller, aim it at the base, and squeeze the trigger.
      3. hit the “start” button, just south of the joystick on the right controller
    5. When you tilt and rotate your head, the avatar's head should also tilt and roll. When you move your hands, the avatar's hands should move in a 1:1 ratio in-scene. Congratulations, you're an Avatar Master.
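
As referenced in step 5.3/5.4 above, here is a hedged reconstruction of what a “g-IKcontrol.cs”-style script can look like. This is not the original source; it's a minimal sketch built on Unity's standard OnAnimatorIK pass (which only runs because “IK Pass” was checked in step 15.3), using the same Inspector fields the tutorial wires up in step 17 (Left Hand Obj, Right Hand Obj, Look Obj):

using UnityEngine;

[RequireComponent(typeof(Animator))]
public class gIKControl : MonoBehaviour
{
    public bool ikActive = true;
    public Transform leftHandObj;    // "Hand - Left" from the Sixense HandsController
    public Transform rightHandObj;   // "Hand - Right" from the Sixense HandsController
    public Transform lookObj;        // the "dristi-target" / Look-At-Target cube

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // called by Mecanim each frame when "IK Pass" is enabled on the Base Layer
    void OnAnimatorIK(int layerIndex)
    {
        if (!ikActive)
        {
            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 0f);
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 0f);
            animator.SetLookAtWeight(0f);
            return;
        }

        // aim the head at the gaze target that rides on the CenterEyeAnchor
        if (lookObj != null)
        {
            animator.SetLookAtWeight(1f);
            animator.SetLookAtPosition(lookObj.position);
        }

        // pin the avatar's hands 1:1 to the (invisible) Sixense hand objects
        if (leftHandObj != null)
        {
            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKRotationWeight(AvatarIKGoal.LeftHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHandObj.position);
            animator.SetIKRotation(AvatarIKGoal.LeftHand, leftHandObj.rotation);
        }

        if (rightHandObj != null)
        {
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
            animator.SetIKPosition(AvatarIKGoal.RightHand, rightHandObj.position);
            animator.SetIKRotation(AvatarIKGoal.RightHand, rightHandObj.rotation);
        }
    }
}

Attach it to the avatar alongside the Animator; with the weights at 1.0, the hands follow the invisible Sixense hand objects 1:1 and the head aims at the dristi-target.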
 

TIPS
and areas for further R&D
 
  1. Ideally, the avatar's head should not be rendered for the player, yet it should still cast shadows and reflections
  2. the avatar's head should also clearly be rendered for other players in multi-player scenarios, as well as for third-person camera observer positions.
  3. An in-game shadow is a great way to ground the player to the avatar in a convincing manner. Even when the hands are outside the field of view, seeing the shadows of the head and hands triggers a very powerful sense of presence.
  4. While head rotation and orientation on the skull axis is fairly straightforward, head translation, i.e. significant leaning in or out or to the side, is a bit more abstract in terms of puppeteering choices. You may wish to explore either locomotion animations, or “from the hip” IK solutions, to move the torso along with the head.
  5. RL : VR / 1:1 Spatial Calibration is KEY to great experiences.
    1. See 9.3, above : Properly position the Oculus camera
    2. and 13.4 : Properly position the Sixense hands 
  6. The built-in Unity IK leaves a lot to be desired when it comes to realistic approximations of elbow positions. We are investigating the FinalIK package and other professional-class solutions.
  7. This solution in its current form disables the ultra-cool “grasping” or “pointing” animations that are built-in to the Sixense template models. Investigate how to re-enable those animations on the rigged avatar’s MecAnim structure.
  8. You will also want to configure the remainder of the Hydra joysticks and buttons to control all game functions, because it sucks to be reaching and fumbling for a keyboard when you are fully immersed in a VR world.
  9. The majority of this configuration starts in the Unity Input Manager
    1. Edit | Project Settings | Input…
  10. There should be keyboard fallback controls for users who do not own Hydras…
Have you tackled this same challenge? Have you created any solutions to the further investigations we’ve proposed above?
Share in the comments below,
because we’re all in this together!