- The idea of considering AI more from a human-centered point of view
- On the difference between education and mind control.
– education needs practical awareness of what we learn
– education is an infinite game: you adapt what you learn to your own goals, not someone else’s.
- Human tails: ownership and control of extended humanoid avatars
- VR and AR will actually make us more aware of reality, by letting us look at it from a new perspective.
- Rapid adaptation to a non-human body in VR is really cool. Check the human tail experiment.
On 20th June, I had the opportunity to present my work on Saydnaya Prison at the WebXR Meetup at Mozilla HQ, London (a meetup about VR, AR and WebGL). The original idea of that talk was to present the workflow used to produce the 8 interactive 360 panoramic views (in WebGL) made for the website, as already explained in an earlier post here. The kick was to explain why we didn’t go down the VR route! Here is how the talk went.
The Prezi presentation of that talk can be found here.
Let me start by giving you some background to understand where I am coming from. Back in the nineties, I studied architecture in Brussels. I learned how to encode drawings in CAD applications, then jumped into 3D modelling to give them some depth. At the same time, the Internet was growing. In response to an international competition, we designed an online interface for the web that would keep track of the user’s navigation through their hypertext trail. VRML (Virtual Reality Modeling Language) was the promising language that would support the design and the experience of this new sort of 3D graphical user interface giving access to the library of the information age – codename: “MetaLibrary”.
Jumping forward to 2012, two events pulled me back in. Firstly, I read Joshua Foer’s book “Moonwalking with Einstein“. Secondly, Palmer Luckey launched the Oculus Kickstarter. I didn’t need anything else to invest more time in that very same idea of designing three-dimensional interfaces to support knowledge acquisition. With the power of the method of loci described in J. Foer’s book and the potential of VR immersion from the Rift in mind, I dived back into the exploration of architecture-based immersive virtual environments and started a PhD in computing at Goldsmiths University.
To make a long story short, that is where I met Eyal Weizman from Forensic Architecture and embarked on a black-mirror version of my project. Forensic Architecture’s core activity is the use of spatial analysis and evidence in legal and political forums. In the case of the Saydnaya project, we rebuilt a secret prison, used to torture people, based mostly on witnesses’ testimonies and an aerial view of the site. As mentioned in the intro, I am not going to explain the workflow we used to produce the interactive 360 panoramic views, as it has been covered in this earlier post. The point I am getting at here is more about using the best available technology to do the right job. The Saydnaya project itself is also explained in this previous post.
So why didn’t we design a full VR experience for this prison?
This decision was actually taken early in the process. After having watched and listened to the testimonies, or even, for some colleagues, having been present with the witnesses, we realised the intolerable horror of what they had been through. No one wants to live this too closely. Immersing people in this kind of experience would not be acceptable. It would have the opposite effect, repulsing people from trying to understand the overall situation those prisoners were living in.
On top of that, to reach a mass audience today, a “VR experience” has to go through mobile 360, which is not really VR to start with, nor is it comfortable enough for watching long minutes of video. Indeed, from each interactive 360 panoramic view, the “visitor” is given the opportunity to click on specific props that bring up a video of the witness explaining a piece of his story related to that object. Taking all those constraints into account, the website is designed to keep visitors at some distance (on the screen) from the experience and, at the same time, to encourage them to follow a narrative through the entangled storylines of the witnesses. “Mobile VR” is not ready for that kind of experience yet.
Whatever the available technology of the time, VRML in the 90s or WebVR in the 10s, what excites me is how to foster the potential of that technology through the use of architectural language to enhance human spatial abilities and storytelling.
Bristol hosted this three-day congress. What a good excuse to explore the west coast of England. Loved it! How to report on such a big event, packed with keynotes, talks, debates and demos about virtual and augmented reality, interactive storytelling and immersive art, architecture visualisation and video game development, to name just a few of the fields involved? I will start with the general trends, then some key points from the talks, and finish with what really got me hooked.
This event was a solid push from AMD. As far as I can remember, AMD always had an edge over its best-known rival, Intel, when it came to processing 3D. Well, it looks like they are still in the fight, but now processing VR wirelessly with their Nitero tech. And this is important because being in a virtual environment is pretty cool, but without any cable in our way it will be much better. HoloLens has taken that mobility path from the start. They made quite a demo there, bringing forward the mixed reality concept. That being said, I am still not convinced by the small field of view and the basic interaction where you pinch things around.
In the meantime, the big hype is around VR location-based experiences. MK2 in Paris looks very neat, curating only high-quality content, and SoReal, a VR theme park in China, sounds epic. On a hardware note, I am curious to see what kind of headset the Chinese will bring to the market with their DeePoon!
Another main trend is good old haptic feedback. Everyone is working hard to bring the third sense into the game. We are not sure what shape it will take: gloves, waves, arms, sensors… but it was being explored in every corner of the congress.
Most important is the effort put into producing high-quality content. At this stage, only great content will bring VR to the masses.
Here follow bullet points from my tweets and comments on the different talks I attended:
- “Vive and the Evolving Ecosystem of VR” with Graham Breen from HTC.
What’s next for Vive and its evolving ecosystem? Not much on the hardware side, apart from those haptic gloves shown there. They are focused on helping and promoting content creators with their Viveport platform and the Vive X accelerator.
- “Streaming VR/AR Content over the Web” with Jamieson Brettle from Google. That’s where it gets really exciting: AR/VR through the browser. He reported pretty good compression performance for 3D point clouds with Draco. For volumetric or spatial audio, they are using Opus with ambisonic compression.
- “Ghost in the Shell VR – A Deep Dive” with Sol Rogers from Rewind.
He delivered a great talk about how he and his team made Ghost in the Shell VR. He gave no numbers and asked very nicely that we take no photos!
- “The Importance of touch: Mid-air haptic feedback for VR & AR” with Tom Carter from Ultrahaptics.
How cool is that: conducting emotions through mid-air haptic feedback? Because, in the end, it is the sense of touch that makes things real.
- “Perception = Reality: Exploiting Perceptual Illusions for Design of Compelling AR & VR Interfaces” with Hrvoje Benko from Microsoft Research. Don’t try to be perfect; use tricks and perceptual illusions.
Using perceptual illusions to extend the experience: passive haptics; perspective projection mapping; the rubber hand effect revisited as haptic retargeting. Body warping and world warping are very promising techniques that make you believe you are interacting with objects and give you that haptic feedback.
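To give a feel for the body-warping idea, here is my own simplified sketch (invented names, a plain linear blend; not the talk’s actual formulation): the virtual hand is nudged a little further towards a virtual target as the real hand travels towards a physical prop, so both arrive at the same time and the prop delivers the touch.

```javascript
// Hedged sketch of body warping for haptic retargeting.
// realHand: real hand position [x, y, z]
// startDist / currentDist: distance to the physical prop at the start
// of the reach and now; offset: virtual-minus-physical target offset.
function warpHand(realHand, startDist, currentDist, offset) {
  // progress goes from 1 (reach just started) to 0 (prop reached)
  const progress = Math.min(Math.max(currentDist / startDist, 0), 1);
  // the warp is blended in gradually enough to go unnoticed
  const alpha = 1 - progress;
  return realHand.map((x, i) => x + alpha * offset[i]);
}
```

At the start of the reach the virtual hand coincides with the real one; by the time the real hand touches the prop, the full offset has been applied.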
- “Virtual Futures Track: The Ethics of VR – Risks and Recommendations” with Michael Madary from the University of Mainz.
- Why should we care? Our environment influences our behaviour.
- VR brings the illusions of place, embodiment and agency.
- Beneficial vs manipulative uses of VR: a tricky balance to strike.
- Are we going to create a rating system for VR experiences?
- “Users should be made aware that there is evidence that experience in VR can have a lasting influence on behaviour after leaving the virtual environment.”
- “Users should also be made aware that we do not yet know the effects of long-term immersion.”
On Thursday, I went to only one talk, from the lovely blue-haired Mary Kassin from Google, who explained her day-to-day process. She prototypes a lot. Trials, user feedback and iteration make up most of her time. She also mentioned the Tango-ready phone to play with AR.
Alright, this is all very exciting and full of ideas; however, what made the biggest impression on me was the couple of VR visualisation agencies showing off their architectural immersive and interactive walk-throughs made with Unreal Engine. With this very realistic real-time rendering, we are getting closer to eliminating the long hours of rendering time for still images in favour of a fully immersive experience in the early stages of the process. That is happening right now!
Monday evening was my first VR meetup for developers, this one focused on education; right up my alley. Really dynamic, full of enthusiastic developers and makers, a few exciting demos on the first floor and loads of speakers on the ground floor. All this in a good old-fashioned London building, Hackney House. This post will mostly cover the demos I tried.
The first thing I tried was the full-body VR immersion kit. It took some time to adjust all the straps around every main bone. Once calibration is done, you fit the Oculus Rift on your head and there you are, looking in the mirror at your avatar, a character attached to your body and movements. Despite the lack of space (the usual black environment), a sense of scale was given by a couple of statues. A few tweaks using the UX interface designed by Dr Harry Brenton, and you have a tail, a gigantic arm or an alien head. The level of presence gets higher with every step. I can’t wait to have full hand and finger tracking.
The next experience was a real-time hologram in VR. You put the headset on to meet the holographic projection of someone in Lithuania, in real time. She couldn’t see me, though; I could see her and talk to her over the phone. I would say “hello” and she would wave her hands. Not yet like Ghost in the Shell, but it works. Telepresence, yes! However, it would be nice to have a sense of geographic location in the environment, wouldn’t it?
The most exciting demo was about making VR in VR, by Constructive Labs. Still at an early stage, the demo lets you manipulate objects in VR using the HTC Vive and its controllers, the way you would use the mouse and keyboard in 3dsMax or Unity. On top of that, we were able to do it in interaction with another person. Their idea is to develop a VROS, a virtual reality operating system. Pretty cool stuff! However, again, the environment was really poor: they just used a model of a random brick tower as a gimmicky surrounding.
The last demo I tried was more of an interactive storytelling treat, made by Bandits Collective. Following the heated panel discussion about 360 video a bit earlier in the evening, I think they nailed it quite well. Their intro for a short movie brings you into a computer-generated 360 environment, animation-based and stylish. It is the environment that makes you stay where you are and look where the action happens, though you can look all around and even move. There is no interaction; the action happens around you. Very promising!
There were a couple of other demos with Cardboard and other mobile VR. We know what we get there; I am much more interested in what we don’t know. There is still so much to explore, mostly for me, as you can tell, about spatial environments in VR. Of those four demos, only the 360 story had a designed spatial environment that supports the experience.
- Some of the 3D models were made with Rhino then exported to 3dsMax; most of the models were made only with 3dsMax, avoiding any mistranslation between Rhino and 3dsMax, two very different modellers.
- All the texturing and lighting was done in 3dsMax, then baked to texture.
- Optimisation was a big part of the process. Models were exported as OBJ (meshes only) and optimised with MeshLab. Maps were optimised with Photoshop.
- Everything was tested in a WebGL sandbox, using MAMP as a local Apache server and Atom to edit the PHP and Python files.
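As a rough illustration of what the sandbox had to handle, here is a minimal toy parser (my own sketch, not the project’s code; in practice three.js’s OBJLoader does this job) reading an OBJ export, meshes only, into flat vertex and face arrays:

```javascript
// Toy OBJ reader for mesh-only exports like the ones described above.
function parseObj(text) {
  const vertices = [];
  const faces = [];
  for (const line of text.split('\n')) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === 'v') {
      // vertex position: "v x y z"
      vertices.push(parts.slice(1, 4).map(Number));
    } else if (parts[0] === 'f') {
      // face indices are 1-based and may carry /uv/normal suffixes
      faces.push(parts.slice(1).map(p => parseInt(p, 10) - 1));
    }
  }
  return { vertices, faces };
}
```

Keeping only `v` and `f` records is exactly why the OBJ files stayed small: no normals, no UVs, just the geometry that MeshLab then decimated.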
If you are interested and would like more details, let me know in the comments below or via email.
- WebVR configures the VR HMDs.
- WebGL is about drawing inside WebVR, mainly using three.js.
- HTML5 is the sandbox to play in.
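To make that division of labour concrete, here is a minimal sketch (my own illustration, assuming a 2017-era browser exposing the WebVR API) of how a page might pick between the WebVR path and a plain WebGL fallback before handing the scene to three.js:

```javascript
// WebVR exposes navigator.getVRDisplays(); if it is missing, we fall
// back to drawing the same WebGL scene on a regular canvas.
function chooseRenderPath(nav) {
  if (nav && typeof nav.getVRDisplays === 'function') {
    return 'webvr'; // hand the rendered frames to the headset
  }
  return 'webgl'; // draw on screen instead
}
```

In the browser this would be called as `chooseRenderPath(navigator)`; either way, the drawing itself stays plain WebGL/three.js inside the HTML5 sandbox.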