On 20 June, I had the opportunity to present my work on the Saydnaya Prison project at the WebXR Meetup (VR, AR, WebGL) at Mozilla HQ, London. The original idea of the talk was to present the workflow used to produce the eight interactive 360° panoramic views (in WebGL) made for the website, as already explained in an earlier post here. The twist was to explain why we didn’t go down the VR route! Here is how the talk went.
The Prezi presentation of the talk can be found here.
Let me start by giving you some background, to understand where I am coming from. Back in the nineties, I studied architecture in Brussels. I learned how to encode drawings in CAD applications, then jumped into 3D modelling to give them some depth. At the same time, the Internet was growing. In response to an international competition, we designed an online interface for the web that would keep track of the user’s navigation through their hypertext trail. VRML (Virtual Reality Modeling Language) was the promising language that would support the design and the experience of this new sort of 3D graphical user interface giving access to the library of the information age – codename: “MetaLibrary”.
Jumping forward to 2012, two events pulled me back into it. Firstly, I read Joshua Foer’s book, “Moonwalking with Einstein“. Secondly, Palmer Luckey launched the Oculus Kickstarter. I didn’t need anything else to invest more time in that very same idea of designing three-dimensional interfaces to support knowledge acquisition. With the power of the method of loci described in J. Foer’s book and the potential of VR immersion from the Rift in mind, I dived back into the exploration of architecture-based immersive virtual environments and started a PhD in computing at Goldsmiths, University of London.
To make a long story short, that is where I met Eyal Weizman from Forensic Architecture and embarked on a Black Mirror version of my project. Forensic Architecture’s core activity is the use of spatial analysis and evidence for legal and political forums. In the case of the Saydnaya project, we rebuilt a secret prison, used to torture people, based mostly on witnesses’ testimonies and an aerial view of the site. As mentioned in the intro, I am not going to explain the workflow we used to produce the interactive 360° panoramic views, as it has been covered in this earlier post. The point I am getting at here is about using the best available technology to do the right job. The Saydnaya project is also explained in this previous post.
So why didn’t we design a full VR experience for this prison?
This decision was actually taken early in the process. After having watched and listened to the testimonies, or even, for some colleagues, having been present with the witnesses, we realised the intolerable horror of what they had been through. No one wants to live this too closely. Immersing people in this kind of experience would not be acceptable. It would have the opposite effect and instead repulse people from trying to understand the overall situation those prisoners were living in.
On top of that, to reach a mass audience today, a “VR experience” has to go through mobile 360°, which is not really VR to start with, nor is it comfortable enough for watching long minutes of video. Indeed, from each interactive 360° panoramic view, the “visitor” can click on specific props, each of which brings up a video of a witness explaining the piece of his story related to that object. Taking all those constraints into account, the website is designed to keep visitors at some distance (on the screen) from the experience and, at the same time, to encourage them to follow a narrative through the entangled storylines of the witnesses. “Mobile VR” is not ready for that kind of experience yet.
Whatever the available technology of the time, VRML in the ’90s or WebVR in the ’10s, what is exciting for me is how to foster the potential of that technology, through the use of architectural language, to enhance human spatial abilities and storytelling.