The team from the Dragon Hall Trust set up a #techday event in June to give young people who wouldn’t usually have the chance the opportunity to try out new technologies such as robotics, 3D printing, VR and AR. As I was in charge of the VR side of the event, I presented a prototype of my research, as well as the Irrational Exuberance Introduction, using the HTC Vive VR system. With those two very different VR experiences on offer, visitors had the choice between being a builder or an explorer.
Their response was fantastic. Their enthusiasm and their reactions, both positive and sceptical, have fuelled me to pursue my research into VR and architecture with renewed energy.
I would like to take this opportunity to share the post-event response from the team, which gives a pretty good idea of the success of the event:
“I would like to say a huge thank you for coming to display, demonstrate and inspire the young people who attended Tech day. Your VR display was incredible; everyone who came off it was amazed and had enjoyed themselves. I still have flashbacks from when one of the kids let go of the hand sensor. However, the great thing about your headset is that it can engage not only young people but adults, and they can all get the same experience and enjoyment from it.
That for us is one of the great things about Tech day. It is a place where young people are able to see first-hand the developments in STEM and how technology can be used.
We reached 232 attendees, with young people and adults witnessing technology they hadn’t previously seen. Everyone we have spoken to has given us great feedback.”
The last thing I would like to add to this post is that I will be ready for the next event with a more advanced VR experience to try out.
On 20th June, I had the opportunity to present my work on Saydnaya Prison at the WebXR Meetup (VR, AR, WebGL) at Mozilla HQ London. The original idea of the talk was to present the workflow used to produce the 8 interactive 360 panoramic views (in WebGL) made for the website, as already explained in an earlier post here. The kicker was to explain why we didn’t go down the VR route! Here is how the talk went.
The Prezi presentation of that talk can be found here.
Let me start with some background to explain where I am coming from. Back in the nineties, I studied architecture in Brussels. I learned how to encode drawings in CAD applications, then jumped into 3D modelling to give them some depth. At the same time, the Internet was growing. In response to an international competition, we designed an online interface for the web that would keep track of users’ navigation through their hypertext trails. VRML (Virtual Reality Modeling Language) was the promising language that would support the design and the experience of this new sort of 3D graphical user interface giving access to the library of the information age – codename: “MetaLibrary”.
Jumping forward to 2012, two events pulled me back in. Firstly, I read Joshua Foer’s book “Moonwalking with Einstein“. Secondly, Palmer Luckey started the Oculus Kickstarter. I didn’t need anything else to invest more time in that very same idea of designing three-dimensional interfaces to support knowledge acquisition. With the power of the method of loci described in J. Foer’s book and the potential of VR immersion from the Rift in mind, I dived back into the exploration of architecture-based immersive virtual environments and started a PhD in computing at Goldsmiths University.
To make a long story short, that is where I met Eyal Weizman from Forensic Architecture and embarked on a black-mirror version of my project. Forensic Architecture’s core activity is the use of spatial analysis and evidence for legal and political forums. In the case of the Saydnaya project, we rebuilt a secret prison, used to torture people, based mostly on witnesses’ testimonies and an aerial view of the site. As mentioned in the intro, I am not going to explain the workflow we used to produce the interactive 360 panoramic views, as it has been covered in this earlier post. The point I am getting at here is about using the best available technology to do the right job. The Saydnaya project is also explained in this previous post.
So why didn’t we design a full VR experience for this prison?
This decision was actually taken early in the process. After having watched and listened to the testimonies, or even, for some colleagues, having been present with the witnesses, we realised the intolerable horror of what they had been through. No one wants to live through this too closely. Immersing people in this kind of experience would not be acceptable. It would have the opposite effect, repelling people from getting to understand the overall situation those prisoners were living in.
On top of that, to reach a mass audience today, a “VR experience” has to go through mobile 360, which is not really VR to start with, nor is it comfortable enough for watching long minutes of video. Indeed, from each interactive 360 panoramic view, the “visitor” is given the opportunity to click on specific props that bring up a video of the witness explaining the piece of his story related to that object. Taking all those constraints into account, the website is designed to keep visitors at some distance from the experience (on the screen) and, at the same time, to encourage them to follow a narrative through the entangled storylines of the witnesses. “Mobile VR” is not ready for that kind of experience yet.
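The prop-picking mechanic can be sketched in a few lines of plain JavaScript. This is a minimal illustration, not the code used for the Saydnaya website: the hotspot record (name, yaw, pitch, video) and the angular-threshold test are my own assumptions about how one might wire clickable props onto a panorama sphere.

```javascript
// Minimal sketch: picking a clickable prop ("hotspot") in a 360 panorama.
// A hotspot is stored as yaw/pitch angles on the panorama sphere; the
// current view (or click) direction is compared against it by angle.

// Convert yaw (around the vertical axis) and pitch (up/down), in radians,
// into a unit direction vector.
function directionFromAngles(yaw, pitch) {
  return {
    x: Math.cos(pitch) * Math.sin(yaw),
    y: Math.sin(pitch),
    z: Math.cos(pitch) * Math.cos(yaw),
  };
}

// Angle in radians between two unit vectors.
function angleBetween(a, b) {
  const dot = a.x * b.x + a.y * b.y + a.z * b.z;
  return Math.acos(Math.min(1, Math.max(-1, dot)));
}

// Return the first hotspot within `threshold` radians of the view direction,
// or null if nothing is close enough to trigger its witness video.
function pickHotspot(viewDir, hotspots, threshold) {
  for (const h of hotspots) {
    const dir = directionFromAngles(h.yaw, h.pitch);
    if (angleBetween(viewDir, dir) < threshold) return h;
  }
  return null;
}

// Hypothetical example: a prop slightly to the right of straight ahead.
const hotspots = [{ name: 'blanket', yaw: 0.1, pitch: 0.0, video: 'blanket.mp4' }];
const lookingAhead = directionFromAngles(0.0, 0.0);
const picked = pickHotspot(lookingAhead, hotspots, 0.2); // within ~11 degrees
```

In a real WebGL viewer the view direction would come from the camera and the threshold would depend on the hotspot’s on-screen size, but the spherical-coordinate test stays the same.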
Whatever the available technology of the time, VRML in the ’90s or WebVR in the ’10s, what is exciting for me is how to foster the potential of that technology through the use of architectural language to enhance human spatial abilities and storytelling.
Seeing your hands in VR is already a good step towards increasing the level of presence. Adding the sense of touch is just a natural progression.
What better way to do so than bringing real props into the equation? Yes, we have the controllers already! Although they can appear as a sword and a shield, they are stuck in your hands and don’t really bring a feeling of affordance. Instead, if our hands were free, we could naturally grab a door handle to open a door physically and virtually at the same time. In that spirit, a promising experiment with Doraemon’s “Anywhere Door” can be seen on Road to VR.
The power of a door doesn’t stop with its affordance. The door is also a metaphor for bringing someone from one room to the next. What is exciting with VR is that those rooms don’t have to be physically adjacent. They can be any kind of place, anywhere. From a living room, the door can lead you to the beach or to Jupiter. Furthermore, by using two doors cleverly juxtaposed as in a booth, you can redirect the participant to maximise the use of the constrained physical space.
Doctor Who’s TARDIS finds new meaning with VR. At the Virtual Reality Show in April 2017, I was able to try a mobile VR setup, no strings attached, just a space delimited by a ceiling full of sticky stars (StarTrackerVR). Those stickers provided an accurate location without the need for cameras. The main trick of the experience was a set of virtual booths, each with one door to get in and one door to get out. Each booth was a gate from one virtual scene to the next; it also redirected the participant to keep them inside the perimeter. In this case, however, there was no physical door.
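The geometry of that booth trick is simple enough to sketch. This is my own simplified reading of the idea, not StarTrackerVR’s actual algorithm: positions are 2D floor coordinates in metres, and the booth turns the participant by some angle at the exit door so the next scene folds back into the tracked area.

```javascript
// Minimal sketch of the two-door booth redirection: when the participant
// steps through the exit door, the upcoming virtual scene is rotated around
// that door, so walking "forward" in VR folds back into the physical space.

// Rotate a 2D point p around a pivot by `angle` radians.
function rotateAround(p, pivot, angle) {
  const dx = p.x - pivot.x;
  const dy = p.y - pivot.y;
  const c = Math.cos(angle), s = Math.sin(angle);
  return {
    x: pivot.x + dx * c - dy * s,
    y: pivot.y + dx * s + dy * c,
  };
}

// Remap every point of the next scene around the exit door, effectively
// turning the participant without them noticing a teleport.
function redirectScene(scenePoints, exitDoor, angle) {
  return scenePoints.map((p) => rotateAround(p, exitDoor, angle));
}

// Hypothetical example: a corridor that would run 4 m beyond the tracked
// area is folded back into it by a half-turn at the exit door.
const exitDoor = { x: 2, y: 0 };
const corridor = [{ x: 2, y: 1 }, { x: 2, y: 4 }]; // heads out of the space
const folded = redirectScene(corridor, exitDoor, Math.PI); // now heads back in
```

In practice the angle per booth can be much smaller than a half-turn; chaining several booths lets an arbitrarily long virtual route stay inside a small perimeter.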
Either way, physical or virtual, or both, the door is only one architectural element among a dozen that can be used in VR to guide the participant through an experience or a story.
Bristol was hosting this three-day congress. What a good excuse to explore the west coast of England. Loved it! How to report on such a big event, packed with keynotes, talks, debates and demos about virtual and augmented reality, interactive storytelling and immersive art, architecture visualisation and video game development, to name just a few of the fields involved? I will start with the general trends, then some key points from the talks, and finish with what really got me hooked.
This event had a solid push from AMD. As far as I can remember, AMD has always had an edge over its best-known rival, Intel, when it comes to processing 3D. Well, it looks like they are still in the fight, but now to process VR wirelessly with their Nitero tech. And this is important because being in a virtual environment is pretty cool, but without any cables in our way it will be much better. HoloLens has taken the mobility path from the start. They gave quite a demo there, bringing forward the Mixed Reality concept. That being said, I am still not convinced by the small field of view and the basic interaction where you pinch things around.
In the meantime, the big hype is around VR location-based experiences. Mk2 in Paris looks very neat, curating only high-quality content, and SoReal, a VR theme park in China, sounds epic. On a hardware note, I am curious to see what kind of headset the Chinese will bring to the market with their DeePoon!
Another main trend is good old haptic feedback. Companies are working hard to bring the third sense into the game. We are not sure what shape it will take: gloves, waves, arms, sensors… but it was being explored in every corner of the congress.
Most important is the effort going into producing high-quality content. At this stage, only great content will bring VR to the masses.
What follows are bullet points from my tweets and comments on the different talks I attended:
“Vive and the Evolving Ecosystem of VR” with Graham Breen from HTC.
What’s next for Vive and the evolution of its ecosystem? Not much on the hardware side, apart from the haptic gloves shown there. They are focused on helping and promoting content creators with their Viveport platform and the ViveX accelerator.
“Streaming VR/AR Content over the Web” with Jamieson Brettle from Google. That’s where it gets really exciting: AR/VR through the browser! He reported pretty good compression performance for 3D point clouds with Draco. For volumetric or spatial audio, they are using Opus with ambisonic compression.
“Ghost in the Shell VR – A deep Dive” with Sol Rogers from Rewind.
He delivered a great talk about how he and his team made Ghost in the Shell. He gave no numbers and asked very nicely that no photos be taken!
“The Importance of touch: Mid-air haptic feedback for VR & AR” with Tom Carter from Ultrahaptics.
How cool is that: conducting emotions through mid-air haptic feedback? Because, in the end, it is the sense of touch that makes things real.
“Perception = Reality: Exploiting Perceptual Illusions for Design of Compelling AR & VR Interfaces” with Hrvoje Benko from Microsoft Research. Don’t try to be perfect. Use tricks and perception illusion.
Using perceptual illusions to extend the experience; passive haptics; Perspective Projection Mapping; the rubber hand effect revisited as haptic retargeting. Body warping and world warping are very promising techniques that make you believe you are interacting with objects and that give you this haptic feedback.
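The body-warping idea behind haptic retargeting can be sketched with a little vector maths. This is a deliberately simplified illustration of the principle, not the Microsoft Research implementation: as the physical hand travels towards a single real prop, the rendered hand is progressively offset so that it lands on a virtual object placed somewhere else, and the two touches coincide.

```javascript
// Minimal sketch of haptic retargeting by body warping. One physical prop
// can stand in for several virtual objects: the virtual hand is warped
// towards the virtual object in proportion to reach progress, so it arrives
// exactly when the physical hand touches the real prop.

function subtract(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function add(a, b) { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; }
function scale(v, s) { return { x: v.x * s, y: v.y * s, z: v.z * s }; }
function distance(a, b) {
  const d = subtract(a, b);
  return Math.hypot(d.x, d.y, d.z);
}

// progress is 0 at the start of the reach and 1 when the physical hand
// touches the physical prop; the warp offset is blended in with it.
function warpedHand(physicalHand, start, physicalProp, virtualObject) {
  const total = distance(start, physicalProp);
  const progress = total === 0
    ? 1
    : Math.min(1, 1 - distance(physicalHand, physicalProp) / total);
  const offset = subtract(virtualObject, physicalProp);
  return add(physicalHand, scale(offset, progress));
}

// Hypothetical example: the real prop sits 1 m ahead, but the virtual
// object is drawn half a metre further to the side.
const start = { x: 0, y: 0, z: 0 };
const physicalProp = { x: 1, y: 0, z: 0 };
const virtualObject = { x: 1, y: 0, z: 0.5 };
const midReach = warpedHand({ x: 0.5, y: 0, z: 0 }, start, physicalProp, virtualObject);
// halfway through the reach, half of the warp offset has been blended in
```

The warp stays below the perceptual threshold precisely because it is spread over the whole reach, which is what makes the trick invisible to the participant.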
“Virtual Futures Track: The Ethics of VR Risks and Recommendations” with Michael Madary from the University of Mainz
Why should we care? Our environment influences our behaviour.
VR brings the illusions of place, embodiment and agency.
Beneficial vs manipulative in VR, a tricky balance to strike.
Are we going to create a rating system for VR experiences?
“Users should be made aware that there is evidence that experience in VR can have a lasting influence on behaviour after leaving the virtual environment.”
“Users should also be made aware that we do not yet know the effects of long-term immersion.”
On Thursday, I went to only one talk, by the lovely blue-haired Mary Kassin from Google, who explained her day-to-day process. She prototypes a lot. Trials, user feedback and iteration make up most of her time. She also mentioned the Tango-ready phone to play with AR.
Alright, this is all very exciting and full of ideas; however, what made the biggest impression on me was the couple of VR visualisation agencies showing off their architectural immersive and interactive walkthroughs made with Unreal Engine. With this very realistic real-time rendering, we are getting closer to replacing the long hours of rendering time for still images with a fully immersive experience in the early stages of the process. That is happening right now!
Monday evening was my first VR meetup for developers, an education special; right up my alley. Really dynamic, full of enthusiastic developers and makers, a few exciting demos on the first floor and loads of speakers on the ground floor. All this in a good old-fashioned London building, the Hackney House. This post will mostly cover the demos I tried.
The first thing I tried was the full-body VR immersion kit. It took some time to adjust all the straps around the main bones. Once calibration is done, you fit the Oculus Rift on your head, and there you are, looking in a mirror at your avatar, a character mapped to your body and movements. Despite the lack of space (the usual plain black environment), a sense of scale was given by a couple of statues. A few tweaks using the UX interface designed by Dr Harry Brenton, and you have a tail, a gigantic arm or an alien head. The level of presence gets higher with every step. I can’t wait to have full hand and finger tracking.
The next experience was a real-time hologram in VR. You put the headset on to meet the holographic projection of someone in Lithuania, in real time. She couldn’t see me, though I could see her and talk to her over the phone; I would say “hello” and she would wave her hands. Not yet like Ghost in the Shell, but it works. Telepresence, yes! However, it would be nice to have a sense of the geographic location in the environment, wouldn’t it?
The most exciting demo was about making VR in VR, by Constructive Labs. Still at an early stage, the demo lets you manipulate objects in VR using the HTC Vive and its controllers the way you would use the mouse and keyboard in 3ds Max or Unity. On top of that, we were able to do it in interaction with another person. Their idea is to develop a VROS, a Virtual Reality Operating System. Pretty cool stuff! However, again, the environment was really poor; they just used a model of a random brick tower as a gimmicky surrounding.
The last demo I tried was more of an interactive storytelling treat, made by Bandits Collective. Following the heated panel discussion about 360 video a bit earlier in the evening, I think they nailed it quite well. Their intro for a short movie brings you into a computer-generated 360 environment, animation-based and stylish. It is the environment that makes you stay where you are and look where the action happens, though you can look all around and even move. There is no interaction; the action happens around you. Very promising!
There were a couple of other demos with cardboard and other mobile VR. We know what we get there. I am much more interested with what we don’t know. There is still so much to explore, mostly for me, as you can tell, about spatial environment in VR. On those four demos, only the 360 story has a designed spatial environment that support the experience.