Perspective Restriction and Dissipation

The antithesis of virtual reality is the restriction to a single point of view in a single direction. This is what “perspective” is. And it explains what has happened in Western culture since its invention: the reduction of our perception of the world to one perspective.

Georges Rousse

Georges Rousse captures this phenomenon in his installations with striking accuracy. His surreal photographs can only be taken from one precise location, where the painted fragments align into a perfect geometric shape. Move one inch to the side and depth breaks the geometry.

Through the lens of the camera, his primary material is space. He crystallises that architectural space by proposing a static reading of it through the frame of a photograph. I read this as a reaction to the dynamism, and the potential loss of perspective, that an immersive approach like VR has to offer.

Georges Rousse

The final photographic image perturbs our visual habits and convictions by presenting three kinds of space: the real space, where he makes his installations; an imaginary utopian space, which the artist invents and then carefully builds at his chosen site; and a new space that is visible from only one spot when he clicks the camera shutter, and exists only in the photo.

The convergence of these spaces goes beyond a visual game: like a hall of mirrors, enigmatic and dizzying, it questions the role of photography as a faithful reproduction of reality; it probes the distance between perception and reality, between the imaginary and the concrete.

What makes the work so intriguing is that the mind of the beholder (your mind) cannot stop trying to figure out all the work the artist did before the moment he pressed the shutter. It is as if we transpose ourselves into that space and recreate the work. This phenomenon is also called embodied simulation (Gallese 2011; more on this in a future post).

Georges Rousse

VR Demo for Techday at Dragon Hall

The team from the Dragon Hall Trust set up a #techday event in June to give young people who wouldn’t usually have the chance an opportunity to try out new technologies such as robotics, 3D printing, VR and AR. In charge of the VR side of the event, I presented a prototype of my research, as well as the Irrational Exuberance introduction, using the HTC Vive VR system. By offering these two very different VR experiences, visitors had the choice between being a builder or an explorer.
Their response was fantastic. Their enthusiasm and their reactions, positive and sceptical alike, have fuelled me to pursue my research into VR and architecture with renewed energy.
Techday at Dragon Hall Trust
I would like to take this opportunity to share the post-event response from the team, which gives a pretty good idea of the success of the event:
“I would like to say a huge thank you for coming to display, demonstrate and inspire the young people who attended Tech day. Your VR display was incredible, everyone came off it were amazed and enjoyed themselves. I still have flash backs from when one of the kids let go of the hand sensor. However, the great thing about your headset is that it can engage not only young people but adults and they can all have the same experience and enjoyments from it.
That for us is one of the great things about tech day. It is a place where young people are able to see first hand the developments in STEM and how technology can be used.
We reached 232 attendees, with young people and adults witnessing technology they haven’t previously seen. Anyone who we have spoken to has given us great feedback.”
The last thing I would like to add to this post is that I will be ready for the next event with a more advanced VR experience to try out.

VR or not VR?

On 20th June, I had the opportunity to present my work on Saydnaya Prison at the WebXR Meetup at Mozilla HQ London (Meetup VR AR WebGL). The original idea of the talk was to present the workflow used to produce the 8 interactive 360° panoramic views (in WebGL) made for the website, as already explained in an earlier post here. The kick was to explain why we didn’t go down the VR route! Here is how the talk went.

Memories from Saydnaya – Prezi Presentation

The Prezi presentation of that talk can be found here.

Let me start by giving you some background on where I am coming from. Back in the nineties, I studied architecture in Brussels. I learned how to encode drawings in CAD applications, then jumped into 3D modelling to give them some depth. At the same time, the Internet was growing. In response to an international competition, we designed an online interface for the web that would keep track of a user’s navigation through their hypertext trail. VRML (Virtual Reality Modeling Language) was the promising language that would support the design and the experience of this new sort of 3D graphical user interface for accessing the library of the information age – codename: “MetaLibrary”.

Metalibrary 1999 in VRML

Jumping forward to 2012, two events pulled me back in. Firstly, I read Joshua Foer’s book “Moonwalking with Einstein“. Secondly, Palmer Luckey started the Oculus Kickstarter. I didn’t need anything else to invest more time in that very same idea of designing three-dimensional interfaces to support knowledge acquisition. With the power of the method of loci described in Foer’s book and the potential of VR immersion from the Rift in mind, I dived back into the exploration of architecture-based immersive virtual environments and started a PhD in computing at Goldsmiths, University of London.

To make a long story short, that is where I met Eyal Weizman from Forensic Architecture and embarked on a black-mirror version of my project. Forensic Architecture’s core activity is the use of spatial analysis and evidence in legal and political forums. For the Saydnaya project, we rebuilt a secret prison, used to torture people, based mostly on witness testimonies and an aerial view of the site. As mentioned in the intro, I am not going to explain the workflow we used to produce the interactive 360° panoramic views, as it has been covered in this earlier post (where the Saydnaya project is also explained). The point I am getting at here is about using the best available technology to do the right job.

Saydnaya Circulation WebGL

So why didn’t we design a full VR experience for this prison?

This decision was actually taken early in the process. After watching and listening to the testimonies, or even, for some colleagues, sitting with the witnesses, we realised the intolerable horror of what they had been through. No one should relive this too closely. Immersing people in this kind of experience would not be acceptable. It would have the opposite effect: it would repel people from trying to understand the overall situation those prisoners were living in.

On top of that, to reach a mass audience today, a “VR experience” has to go through mobile 360°, which is not really VR to start with, nor is it comfortable enough for watching long minutes of video. From each interactive 360° panoramic view, the “visitor” is given the opportunity to click on specific props that bring up a video of a witness explaining the piece of his story related to that object. Taking all those constraints into account, the website is designed to keep visitors at some distance (on the screen) from the experience and, at the same time, to encourage them to follow a narrative through the entangled storylines of the witnesses. “Mobile VR” is not ready for that kind of experience yet.
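As a rough illustration of how clickable props in a panorama can be wired up (the hotspot names, directions and tolerance below are hypothetical, not the actual Saydnaya implementation), a viewer only needs to compare the clicked view direction against each prop’s direction on the sphere:

```javascript
// Hypothetical sketch: hit-testing clickable props inside a 360° panorama.
// Each hotspot stores the direction (yaw/pitch, in radians) where the prop
// appears on the sphere, plus the testimony video it should trigger.
const hotspots = [
  { id: "blanket", yaw: 0.8, pitch: -0.2, video: "witness-blanket.mp4" },
  { id: "door",    yaw: 2.4, pitch:  0.1, video: "witness-door.mp4" },
];

// Convert yaw/pitch to a unit vector on the panorama sphere.
function toVector(yaw, pitch) {
  return [
    Math.cos(pitch) * Math.sin(yaw),
    Math.sin(pitch),
    Math.cos(pitch) * Math.cos(yaw),
  ];
}

// Return the hotspot whose direction is within `toleranceRad` of the
// clicked view direction, or null if the click misses every prop.
function pickHotspot(clickYaw, clickPitch, toleranceRad = 0.15) {
  const c = toVector(clickYaw, clickPitch);
  for (const h of hotspots) {
    const v = toVector(h.yaw, h.pitch);
    const dot = c[0] * v[0] + c[1] * v[1] + c[2] * v[2];
    if (Math.acos(Math.min(1, dot)) < toleranceRad) return h;
  }
  return null;
}
```

In a real WebGL viewer the click direction would come from unprojecting the mouse position through the camera, but the matching logic stays this simple.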

Whatever the technology of the time, VRML in the ’90s or WebVR in the ’10s, what excites me is how to foster the potential of that technology through the use of architectural language to enhance human spatial abilities and storytelling.

Using Props to Boost the VR Experience.

Seeing your hands in VR is already a good step toward increasing the level of presence. Adding the sense of touch is a natural progression.
What better way to do so than bringing real props into the equation? Yes, we already have the controllers! Although they can appear as a sword or a shield, they are stuck in your hands and don’t really offer a feeling of affordance. If our hands were free instead, we could naturally grab a door handle and open a door physically and virtually at the same time. In that spirit, the Doraemon-inspired “Anywhere Door” is a promising experiment, as seen on Road to VR.
Doraemon ‘Anywhere Door’
The power of a door doesn’t stop at its affordance. The door is also a metaphor for bringing someone from one room to the next. What is exciting in VR is that those rooms don’t have to be physically adjacent: they can be any kind of place, anywhere. From a living room, the door can lead you to the beach or to Jupiter. Furthermore, by cleverly juxtaposing two doors as in a booth, you can redirect the participant to maximise the use of a constrained physical space.
Startracker Stickers
Doctor Who’s TARDIS finds new meaning in VR. At the Virtual Reality Show in April 2017, I was able to try a mobile VR setup, no strings attached, just a space delimited by a ceiling full of sticky stars (StarTrackerVR). Those stickers provided accurate positional tracking without the need for external cameras. The main trick of the experience was a set of virtual booths, each with one door to get in and one door to get out. Each booth was a gate from one virtual scene to the next, and it also redirected the participant to keep them inside the perimeter. In this case, however, there were no physical doors.
From the Virtual Reality Show 2017
Either way, physical or virtual, or both, the door is only one architectural element among dozens that can be used in VR to carry the participant through an experience or a story.
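A minimal sketch of the booth trick, under a simple yaw-offset model (the function names are my own, not from any shipping system): passing through the pair of doors adds a rotation to the next virtual scene so that walking “forward” virtually sends the participant back across the physical tracking space.

```javascript
// Normalise an angle to the range (-PI, PI].
function wrapAngle(a) {
  while (a <= -Math.PI) a += 2 * Math.PI;
  while (a > Math.PI) a -= 2 * Math.PI;
  return a;
}

// entryHeading: physical heading (radians) the participant had when
// entering the booth. desiredHeading: the physical heading we want them
// to leave with (e.g. back toward the centre of the play area).
// Returns the extra yaw to apply to the next virtual scene.
function doorRedirectionOffset(entryHeading, desiredHeading) {
  return wrapAngle(desiredHeading - entryHeading);
}

// Example: the participant entered heading "north" (0 rad) near the edge
// of the play area; we want them to exit heading "south" (PI rad), so the
// whole next scene is yawed by PI around the booth.
const offset = doorRedirectionOffset(0, Math.PI);
```

Because the rotation happens while the participant is enclosed in the booth, with no outside reference visible, the redirection goes unnoticed.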

Report from the VR World Congress 2017

Bristol hosted this three-day congress. What a good excuse to explore the west of England. Loved it! How do you report on such a big event, packed with keynotes, talks, debates and demos about virtual and augmented reality, interactive storytelling and immersive art, architectural visualisation and video game development, to name just a few of the fields involved? I will start with the general trends, then some key points from the talks, and finish with what really got me hooked.

This event had a solid push from AMD. As far as I can remember, AMD has always had an edge in 3D processing over its best-known rival, Intel. It looks like they are still in the fight, but now processing VR wirelessly with their Nitero tech. And this is important because being in a virtual environment is pretty cool, but without any cable in our way it will be much better. HoloLens has taken that mobility path from the start. They made quite a demo there, pushing forward the Mixed Reality concept. That said, I am still not convinced by the small field of view and the basic interaction where you pinch things around.

Mk2 in Paris
SoReal in China

In the meantime, the big hype is around location-based VR experiences. Mk2 in Paris looks very neat, curating only high-quality content, and SoReal, a VR theme park in China, sounds epic. On the hardware side, I am curious to see what kind of headset the Chinese will bring to market with their DeePoon!

Another main trend is good old haptic feedback. Companies are working hard to bring the sense of touch into the game. We are not sure what shape it will take: gloves, ultrasound waves, arms, sensors… but it was being explored in every corner of the congress.

Most important is the effort going into producing high-quality content. At this stage, only great content will bring VR to the masses.

Here follow bullet points from my tweets and comments on the different talks I attended:

On Wednesday:

  • “Vive and the Evolving Ecosystem of VR” with Graham Breen from HTC.
    HTC Vive – Haptic

    What’s next for Vive and the evolution of its ecosystem? Not much on the hardware side, apart from the haptic gloves shown there. They are focused on helping and promoting content creators with their Viveport platform and the Vive X accelerator.

  • “Streaming VR/AR Content over the Web” with Jamieson Brettle from Google. That’s where it gets really exciting: AR/VR through the browser. He reported pretty good compression performance for 3D point clouds with Draco. For volumetric or spatial audio, they are using Opus with ambisonic compression.
  • “Ghost in the Shell VR – A deep Dive” with Sol Rogers from Rewind.
    He delivered a great talk about how he and his team made Ghost in the Shell VR. He gave no numbers and asked very nicely that no photos be taken!

    That’s all I got from Ghost in the Shell
  • “The Importance of touch: Mid-air haptic feedback for VR & AR” with Tom Carter from Ultrahaptics.
    How cool is that: conveying emotions through mid-air haptic feedback? Because, in the end, it is the sense of touch that makes things real.
  • “Perception = Reality: Exploiting Perceptual Illusions for Design of Compelling AR & VR Interfaces” with Hrvoje Benko from Microsoft Research. Don’t try to be perfect; use tricks and perceptual illusions.

Using perceptual illusions to extend the experience: passive haptics; perspective projection mapping; the rubber-hand effect revisited as haptic retargeting. Body warping and world warping are very promising techniques that make you believe you are interacting with objects and that give you haptic feedback.

  • “Virtual Futures Track: The Ethics of VR – Risks and Recommendations” with Michael Madary from the University of Mainz
    • Why should we care? Our environment influences our behaviour.
    • VR brings the illusions of place, embodiment and agency.
    • Beneficial vs manipulative in VR: a tricky balance to strike.
    • Are we going to create a rating system for VR experiences?
    • “Users should be made aware that there is evidence that experience in VR can have a lasting influence on behaviour after leaving the virtual environment.”
    • “Users should also be made aware that we do not yet know the effects of long-term immersion.”
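The haptic retargeting shown in the Microsoft Research talk can be sketched in a few lines: a warp that offsets the rendered hand progressively, so the real hand lands on a physical prop exactly when the virtual hand reaches the virtual target. This is my own simplification of the idea, not the published implementation.

```javascript
// Euclidean distance between two [x, y, z] positions (metres).
function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Hypothetical body-warping sketch: as the physical hand travels from its
// start pose toward a real prop (physTarget), the rendered hand is offset
// so that it arrives at the *virtual* target at the same moment the real
// hand touches the physical one.
function retargetHand(physHand, physStart, physTarget, virtTarget) {
  // Progress of the reach: 0 at the start pose, 1 at the physical prop.
  const total = dist(physStart, physTarget);
  const progress = total === 0 ? 1 : Math.min(1, dist(physStart, physHand) / total);
  // The warp grows with progress, so the offset is invisible at the start
  // and exactly (virtTarget - physTarget) on contact.
  return physHand.map((p, i) => p + progress * (virtTarget[i] - physTarget[i]));
}
```

Because the offset ramps up gradually during the reach, the participant never notices that their rendered hand has drifted away from their real one.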

On Thursday, I went to only one talk, from the lovely blue-haired Mary Kassin from Google, who explained her day-to-day process. She prototypes a lot: trial, user feedback and iteration make up most of her time. She also mentioned the Tango-ready phone for playing with AR.

Alright, this is all very exciting and full of ideas; however, what made the biggest impression on me was the couple of VR visualisation agencies showing off their immersive and interactive architectural walk-throughs made with Unreal Engine. With this very realistic real-time rendering, we are getting closer to replacing the long hours of rendering time for still images with a fully immersive experience at an early stage of the design process. That is happening right now!