
Report from the VR World Congress 2017

Bristol hosted this three-day congress. What a good excuse to explore the west of England. Loved it! How to report on such a big event, packed with keynotes, talks, debates and demos about virtual and augmented reality, interactive storytelling and immersive art, architecture visualisation and video game development, to name just a few of the fields involved? I will start with the general trends, then some key points from the talks, and finish with what really got me hooked.

This event had solid backing from AMD. As far as I can remember, AMD has always had an edge over its best-known rival, Intel, when it comes to processing 3D. Well, it looks like they are still in the fight, now aiming to process VR wirelessly with their Nitero tech. And this is important because being in a virtual environment is pretty cool, but without any cable in our way it will be much better. HoloLens has taken that mobility path from the start. They gave quite a demo there, bringing forward the Mixed Reality concept. That being said, I am still not convinced by the small field of view and the basic interaction where you pinch things around.

Mk2 in Paris
SoReal in China

In the meantime, the big hype is around location-based VR experiences. Mk2 in Paris looks very neat, curating only high-quality content, and SoReal, a VR theme park in China, sounds epic. On the hardware side, I am curious to see what kind of headset the Chinese will bring to market with their DeePoon!

Another main trend is good old haptic feedback. They are working hard to bring this third sense into the game. We are not sure what shape it will take: gloves, waves, arms, sensors… but it was being explored in every corner of the congress.

Most important is the effort put into producing high-quality content. At this stage, only great content will bring VR to the masses.

What follows are bullet points from my tweets and comments on the different talks I attended:

On Wednesday:

  • “Vive and the Evolving Ecosystem of VR” with Graham Breen from HTC.
    HTC Vive – Haptic

    What’s next for Vive and its ecosystem? Not much on the hardware side, apart from the haptic gloves shown there. They are focused on helping and promoting content creators with their Viveport platform and the Vive X accelerator.

  • “Streaming VR/AR Content over the Web” with Jamieson Brettle from Google. That’s where it gets really exciting: AR/VR through the browser. He reported pretty good compression performance for 3D point clouds with Draco. For volumetric or spatial audio, they are using Opus with ambisonic compression. (A small, hedged Draco loading sketch follows these Wednesday notes.)
  • “Ghost in the Shell VR – A Deep Dive” with Sol Rogers from Rewind.
    He delivered a great talk about how he and his team made Ghost in the Shell. He gave no numbers and asked very nicely not to take any photos!

    That’s all I got from Ghost in the Shell
  • “The Importance of touch: Mid-air haptic feedback for VR & AR” with Tom Carter from Ultrahaptics.
    How cool is that, conveying emotions through mid-air haptic feedback? Because, in the end, it is the sense of touch that makes things real.
  • “Perception = Reality: Exploiting Perceptual Illusions for Design of Compelling AR & VR Interfaces” with Hrvoje Benko from Microsoft Research. Don’t try to be perfect. Use tricks and perceptual illusions.

Using perceptual illusions to extend the experience; passive haptics; perspective projection mapping; the rubber-hand effect revisited as haptic retargeting. Body warping and world warping are very promising techniques that make you believe you are interacting with objects and give you that haptic feedback.
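To make the haptic retargeting idea a bit more concrete, here is a minimal body-warping sketch written with three.js's Vector3 and entirely made-up positions. It is only my illustration of the principle, not code from the talk: the rendered hand is offset so that reaching the virtual object lands the real hand on a physical prop.

```typescript
import { Vector3 } from 'three';

// Hedged sketch of body warping for haptic retargeting (my illustration, not the
// speaker's code): as the tracked hand approaches a single physical prop, the
// rendered hand is offset so that grabbing the virtual object lands the real hand
// on the prop, which then provides the touch sensation.
function warpHand(
  trackedHand: Vector3,   // real hand position from the tracker
  physicalProp: Vector3,  // the one real object that provides touch
  virtualTarget: Vector3, // where the object appears to be in VR
  reachRadius = 0.6       // metres: distance at which warping starts
): Vector3 {
  const offset = new Vector3().subVectors(virtualTarget, physicalProp);
  // 0 when the hand is far from the prop, 1 when it touches the prop
  const progress = 1 - Math.min(trackedHand.distanceTo(physicalProp) / reachRadius, 1);
  return new Vector3().copy(trackedHand).addScaledVector(offset, progress);
}

// Hypothetical numbers: the prop sits on the real table, the virtual cube 20 cm to its left.
const renderedHand = warpHand(
  new Vector3(0.1, 1.0, -0.3),
  new Vector3(0.0, 1.0, -0.5),
  new Vector3(-0.2, 1.0, -0.5)
);
console.log(renderedHand.toArray());
```

When the real hand finally rests on the prop, the offset is fully applied, so what you see touching the virtual object is backed by something you can actually feel.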

  • “Virtual Futures Track: The Ethics of VR – Risks and Recommendations” with Michael Madary from the University of Mainz
    • Why should we care? Our environment influences our behaviour.
    • VR brings the illusions of place, embodiment and agency.
    • Beneficial vs manipulative in VR: a tricky balance to strike.
    • Are we going to create a rating system for VR experiences?
    • “Users should be made aware that there is evidence that experience in VR can have a lasting influence on behaviour after leaving the virtual environment.”
    • “Users should also be made aware that we do not yet know the effects of long-term immersion.”
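Back to Jamieson Brettle's talk for a moment: since Draco is an open library with a three.js loader, here is a minimal, hedged sketch of what decoding a Draco-compressed point cloud in the browser can look like. The file name and decoder path are placeholders, not anything shown at the congress.

```typescript
import * as THREE from 'three';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// Hedged sketch: decode a Draco-compressed point cloud and show it as THREE.Points.
// 'scan.drc' and the '/draco/' decoder path are placeholders; the Draco decoder
// files (draco_decoder.wasm etc.) are assumed to be served from /draco/.
const scene = new THREE.Scene();

const loader = new DRACOLoader();
loader.setDecoderPath('/draco/');

loader.load('scan.drc', (geometry) => {
  // vertexColors assumes the scan stored a colour per point; drop it otherwise.
  const material = new THREE.PointsMaterial({ size: 0.01, vertexColors: true });
  scene.add(new THREE.Points(geometry, material));
});
```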

On Thursday, I went to only one talk, from the lovely blue-haired Mary Kassin from Google, who explained her day-to-day process. She prototypes a lot; trial, user feedback and iteration make up most of her time. She also mentioned the Tango-ready phone to play with AR.

Alright, this is all very exciting and full of ideas; however, what made the biggest impression on me was the couple of VR visualisation agencies showing off their immersive, interactive architectural walk-throughs made with Unreal Engine. With this very realistic real-time rendering, we are getting closer to replacing long hours of rendering time for still images with a fully immersive experience in the early stages of the process. That is happening right now!

London VR Meetup Education Special Rocked.

Monday evening was my first VR meetup for developers, an education special; right up my alley. Really dynamic, full of enthusiastic developers and makers, with a few exciting demos on the first floor and loads of speakers on the ground floor. All this in a good old-fashioned London building, Hackney House. This post will mostly cover the demos I tried.

The speaker room

The first thing I tried was the full-body VR immersion kit. It took some time to adjust all the straps around every main bone. Once calibration is done, you fit the Oculus Rift on your head and there you are, looking in the mirror at your avatar, a character attached to your body and movements. Despite the lack of space (the usual black environment), a sense of scale was given by a couple of statues. A few tweaks using the UX interface designed by Dr Harry Brenton, and you have a tail, a gigantic arm or an alien head. The level of presence gets higher with every step. I can’t wait to have full hand and finger tracking.

Harry Brenton slide on Character Design

The next experience was a real-time hologram in VR. You put the headset on to meet the holographic projection of someone in Lithuania in real time. She couldn’t see me, though I could see her and talk to her over the phone: I would say “hello” and she would wave her hands. Not yet like in Ghost in the Shell, but it works: telepresence, yes! However, it would be nice to have a sense of geographic location in the environment, wouldn’t it?

The most exciting demo was about making VR in VR, by Constructive Labs. Still at an early stage, the demo lets you manipulate objects in VR using the HTC Vive and its controllers, the way you would use the mouse and keyboard in 3ds Max or Unity. On top of that, we were able to do it while interacting with another person. Their idea is to develop a VROS, a Virtual Reality Operating System. Pretty cool stuff! However, again, the environment was really poor; they just used a model of a random brick tower as a gimmicky surrounding.

The last demo I tried was more of an interactive storytelling treat, made by Bandits Collective. Following the heated panel discussion about 360 video a bit earlier in the evening, I think they nailed it quite well. Their intro for a short movie brings you into a computer-generated 360° environment, animation-based and stylish. It is the environment that makes you stay where you are and look where the action happens, though you can look all around and even move. There is no interaction; the action happens around you. Very promising!

There were a couple of other demos with Cardboard and other mobile VR. We know what we get there. I am much more interested in what we don’t know. There is still so much to explore, mostly for me, as you can tell, around spatial environments in VR. Of those four demos, only the 360 story had a designed spatial environment that supports the experience.

360 Panoramic Views with WebVR – Production Workflow

Optimising work is part of who I am. I hate repeating the same task again and again. We are using computers; that’s what they are good at, repeating tasks for us. When it comes to working in a team, the workflow gets even more important, so that two different people don’t overlap each other’s work or keep coming back with the same questions: “how did you do this?” or “where is this file?”. This is all good sense in theory, yet very far from being the norm in practice, at least in my experience working mostly with architects and designers. Furthermore, as a freelancer, months can pass without using a specific workflow, working on a different project with different software. Being able to look back at how we did something is a must if we don’t want to repeat the same mistakes.
On the Saydnaya project I was working on with Forensic Architecture for Amnesty International, we produced a series of interactive 360° panoramic views developed using the then-fresh WebVR. We didn’t get enough time to develop it for VR headsets, although it is real-time 3D online in a browser: no plugin, no viewer needed. Pretty awesome stuff! Here is the workflow we used to get those 8 interactive 360° panoramic views online, with a minimal viewing sketch after the steps:
Saydnaya Circulation in 3dsMax
  1. Some of the 3D models were made with Rhino and then exported to 3ds Max; most of the models were made directly in 3ds Max, avoiding any mistranslation between Rhino and 3ds Max, two very different modellers.
  2. All the texturing and lighting was done in 3ds Max, then baked to textures.
  3. Optimisation was a big part of the process. Models were exported as OBJ (meshes only) and optimised with MeshLab. Maps were optimised with Photoshop.
  4. Everything was tested in a WebGL sandbox using MAMP as a local Apache server, with Atom to edit the PHP and Python files.
Saydnaya Circulation WebGL
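To give a feel for the viewing side of step 4, here is a minimal, hedged three.js sketch: load one baked OBJ room with its baked texture as an unlit material and look around from a fixed viewpoint. The file names are placeholders and this is not the actual Saydnaya code.

```typescript
import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

// Hedged sketch of the viewing side: a baked OBJ room ('room.obj') with its baked
// texture ('room_baked.jpg'), viewed from a fixed point by rotating the camera.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const baked = new THREE.TextureLoader().load('room_baked.jpg');
new OBJLoader().load('room.obj', (room) => {
  // The lighting is already baked into the texture, so an unlit material is enough.
  room.traverse((child) => {
    if ((child as THREE.Mesh).isMesh) {
      (child as THREE.Mesh).material = new THREE.MeshBasicMaterial({ map: baked });
    }
  });
  scene.add(room);
});

// Simple look-around from a fixed viewpoint: drag to rotate the camera.
let yaw = 0, pitch = 0, dragging = false;
renderer.domElement.addEventListener('pointerdown', () => (dragging = true));
window.addEventListener('pointerup', () => (dragging = false));
window.addEventListener('pointermove', (e) => {
  if (!dragging) return;
  yaw -= e.movementX * 0.005;
  pitch = Math.max(-1.4, Math.min(1.4, pitch - e.movementY * 0.005));
});

renderer.setAnimationLoop(() => {
  camera.rotation.set(pitch, yaw, 0, 'YXZ');
  renderer.render(scene, camera);
});
```

Because all the lighting lives in the baked maps, the browser only has to draw textured meshes, which is what keeps it smooth even on modest machines.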

If you are interested and would like more details, let me know in the comments below or via email.

What is WebVR? It is three things; see this Reddit post for more details (a minimal setup sketch follows the list):
  • WebVR configures the VR HMDs
  • WebGL does the drawing, here mainly via three.js
  • HTML5 is the sandbox to play in.
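To make those three layers a bit more concrete, here is a minimal, hedged sketch using the original WebVR 1.1 API (since superseded by WebXR) together with three.js. Per-eye rendering is left out to keep it short.

```typescript
import * as THREE from 'three';

// Hedged sketch of the three layers above, using the original WebVR 1.1 API.
// Casts go through `any` because current TypeScript no longer ships WebVR typings.

// HTML5: the sandbox, a canvas element in the page.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);

// WebGL: the drawing, done here through three.js.
const renderer = new THREE.WebGLRenderer({ canvas });
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 1, 0.1, 100);
scene.add(new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshNormalMaterial()));
camera.position.z = 2;

// WebVR: configuring the HMD. Ask the browser for a connected display and start
// presenting the canvas to it (normally triggered by a user gesture, e.g. a button).
const nav = navigator as any;
if (typeof nav.getVRDisplays === 'function') {
  nav.getVRDisplays().then((displays: any[]) => {
    const display = displays[0];
    if (!display) return;
    display.requestPresent([{ source: canvas }]).then(() => {
      const loop = () => {
        // A real app would read display.getFrameData() and render one view per eye;
        // a single view is kept here for brevity.
        renderer.render(scene, camera);
        display.submitFrame();
        display.requestAnimationFrame(loop);
      };
      display.requestAnimationFrame(loop);
    });
  });
}
```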

Saydnaya Project link

Creating Memories in Virtual Reality.

I love Metaworld’s introduction about creating memories, despite the fact that they use social presence as their main attraction, which I completely disregard in my research… for now. Anyway, you may be wondering why we would want to create memories in a virtual environment in the first place. Isn’t it better to go out in the physical world and meet real people?

Playing chess inside Metaworld

Well, it doesn’t compare. The whole idea of using VR to create memories is based on a couple of principles that are much more powerful to set up in VR: associations and embodiment.

Firstly, memories love associations. It is by associating a new piece of information with a network of previously formed memories that we make sense of it and remember it. We will be able to retrieve the memory more efficiently later on by accessing the same network.

Secondly, memories are embodied. We don’t work like computers. We don’t have a hard drive or a cloud to engrave pieces of information one by one in a linear manner. We store information at a multimodal level within a mind-body system. So the more senses we use when accessing new information, which calls upon the previous principle of association, the better we will be able to form new memories.

By proposing a persistent online world that we enter by immersing ourselves with an HMD (Head-Mounted Display), headphones and a couple of controllers in our hands, we are already covering most of our senses. However, in VR we don’t have the same constraints as in the physical world. Anything that serves the purpose the participant is after is possible: we can fly, travel into space, swim in the ocean or go inside a volcano.

Now, one of the debates about embodiment is the question of the avatar. When we enter VR as ourselves, we can see our hands; we don’t want to see a cartoonish body of ours. However, if we want to interact with other people, we need to see them somehow. Metaworld solves this issue by using quite cartoonish avatars with detached, detailed hands. That way, you can see your own hands and other people’s hands along with their avatars. Let’s try it.

Giant VR: a new kind of storytelling

In the same spirit as the Saydnaya project mentioned in an earlier post, although with a much bigger budget, Giant seems to be a really promising VR experience. It uses VR to immerse the participant in a story based on real events. I haven’t had the chance to experience it yet. I just went through this in-depth article on the Unreal Engine website, and it is definitely worth the read. If reading is not your thing, Winslow Porter, Giant’s producer and co-creator, presents his work in this video.

This is the kind of VR experience we would like to hear more about.