We’re fresh back from the epic Digital Hollywood conference / rolling party in LA, and chock-full of ideas for how to integrate classic cinema techniques with our native videogame tropes.
See, 80+% of the participants at the conference were from traditional media backgrounds — music, TV, film. And while VR was absolutely the hot topic of the show — as it was for CES, GDC, and NAB — there was as much confusion as there was excitement about the commercial and artistic promise of this brand spanking new medium.
One of the key findings on our part was a genuine need to integrate cinema techniques (linearity, composition, color, and storytelling) into our hyper-interactive realm of videogame design. Thus began our investigations. What exactly does it take to make full HD, 3D, 360° captures of real-world environments?
We’ll get into more details later, but for now I want to spell it out, if only for technical humor: It takes a:
- 3D stereoscopic
- live capture stream
- …stitched to form an:
- equirectangular projection
- over / under
- panoramic video
Say that 12 times fast. Oh, and be ready to handle the approximately 200 GB/minute data stream that these rigs generate. Thank god for GPUs.
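To make that mouthful of jargon a bit more concrete, here’s a minimal Python sketch (illustrative only, not tied to any particular rig or camera) of what an equirectangular projection actually does — mapping each pixel of the flat panorama onto a direction on the viewing sphere — along with a back-of-envelope check on that ~200 GB/minute figure for an uncompressed stereo stream. The resolution, frame rate, and pixel format below are assumptions chosen for illustration.

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular frame to a unit
    direction vector on the viewing sphere.

    Longitude spans [-pi, pi] across the width; latitude spans
    [-pi/2, pi/2] down the height.
    """
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# In an over/under stereo frame, the top half holds one eye's
# panorama and the bottom half the other's, so each eye maps
# with height // 2 instead of the full frame height.

# Back-of-envelope data rate for an uncompressed stereo capture
# (hypothetical numbers, not a specific camera):
width, height = 4096, 4096          # 4K equirect, both eyes stacked
bytes_per_pixel = 3                 # 8-bit RGB
fps = 60
bytes_per_min = width * height * bytes_per_pixel * fps * 60
print(f"{bytes_per_min / 1e9:.0f} GB/min uncompressed")
```

Even at these modest (by capture-rig standards) settings, the raw stream lands in the same ballpark as the figure above — which is why the stitching and playback pipelines lean so heavily on GPUs.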
What does that look like in practice?
And how do you capture it? With something like this:
Or, if you’re really high-budget, this:
Though personally, we really prefer the sci-fi aesthetic:
Then there’s the original 360 aerial drone capture device, circa 1980…
Then, the ever-so-slightly more sinister, and agile version, circa 1999…
What do you think? Is the realism of live capture worth the trouble? Would you prefer “passive” VR experiences that transport you to hard-to-get-to real-world places and events, “interactive” experiences more akin to Xbox and PlayStation games, or some combination of the two?
Join the conversation below: