
IVAS eXPeriment

Thirty-five people took part in the Immersive Virtual Architecture Studio (IVAS) between 26th February and 9th March 2018. The experiment was simply presented as a room-scale VR experience in which you had to solve a few jigsaw puzzles inside different rooms. This was one of the most important and exciting parts of my PhD research project. A short explanation of the context, purpose and approach follows.

Soapbox VR room running IVAS with Dean.


Homo Sapiens’ biggest achievement as a global civilization has been, to a certain extent, to transform and adapt the environment to its needs. The main strength behind this outcome is Sapiens’ “spatial awareness”: the ability to perceive and make sense of the spatial environment and the intrinsic sense of agency that it affords. Sapiens developed this ability along different traits, the most recognizable being what we know as “architecture”. For more than five thousand years, using bricks and mortar, humans have built places to fulfill all the different functions required by society: services, religion, politics and other cultural activities. In the 21st century, Virtual Reality (VR), an inherently spatial technology, offers us the perfect medium to test and apply some of the architectural principles developed over the centuries to structure and navigate today’s overwhelming digital landscape.


The built environment has a significant effect on human behavior in the physical world (1). How does that translate into VR? The overall aim of this project is to establish the foundations of a framework to support the design of Immersive Virtual Environments. Such a framework will benefit not only scientists but every field VR is disrupting, such as game design, industrial design, data visualization and learning applications, to name just a few.


This study explores ways to evaluate how different architectural elements affect human spatial cognition performance, using the IVAS. The next step will be to apply those findings to support specific cognitive tasks for specific users. This particular iteration of the project looks at two architectural elements arranged following two spatial characteristics. Those four conditions are tested using three cognitive tasks. A short description of the setup follows.

Physical Space – Hardware – Mode of Interaction

For most of our history, natural movement has been the only way to navigate our environment and to experience “architecture”; it is therefore the primary mode of interaction used in this experiment. To support this principle, a room-scale VR environment is set up with a minimum of 9 m² (3 m × 3 m) of navigable space. In this instance, the IVAS experiment took place in two different rooms, at two different sites: Goldsmiths, Hatcham House, 1 and Soapbox, Old Street 68.
The second mode of interaction is the VR system, composed of an HTC Vive head-mounted display and two wireless hand-held controllers, together tracking 18 degrees of freedom (18 DOF) of movement. The headset is tethered to a powerful laptop that runs the simulation.

Screenshot from Unity running IVAS A1

Virtual Space – Software – 3D Models

A room with approximately the same dimensions as the physical room was modeled in 3D and serves as the base for the different conditions (architectural scenes) to be tested. All other 3D assets are modeled in 3ds Max before being imported into Unity3D, where the interactivity is programmed.

Spatial Conditions

Two architectural elements, walls and columns, were studied following two spatial characteristics: enclosure and complexity (3):

  • A1: Closed Columns
  • A2: Open Columns
  • B1: Closed Walls
  • B2: Open Walls
Layout of the different conditions.

Three Tasks involving Spatial Cognition

Solving a Jigsaw Puzzle

This task was designed to encourage participants to navigate the space in search of all the items needed to solve the puzzle. A stopwatch encouraged them to do so as fast as possible, providing a way to measure performance. VR makes it easy to track a user’s movement: time, position and rotation. Everybody seemed to enjoy solving the jigsaw puzzle and was very focused on the task; I had to remind them to explore the space before starting it. Once the puzzle was solved, the participant was automatically transferred to a transition area where they had to rate two experiential qualities.
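As a rough illustration of the kind of tracking data this yields, here is a minimal Python sketch, with invented sample values and logging format (not the actual IVAS code), that derives two performance measures, path length and completion time, from per-frame headset positions:

```python
import math

# Hypothetical per-frame tracking samples: (time_s, x, y, z) of the headset.
# The values and the record layout are illustrative only.
samples = [
    (0.0, 0.0, 1.6, 0.0),
    (0.5, 0.3, 1.6, 0.1),
    (1.0, 0.6, 1.6, 0.4),
    (1.5, 0.6, 1.6, 0.9),
]

def path_length(samples):
    """Total distance travelled, summed over consecutive positions."""
    total = 0.0
    for (_, x0, y0, z0), (_, x1, y1, z1) in zip(samples, samples[1:]):
        total += math.dist((x0, y0, z0), (x1, y1, z1))
    return total

def completion_time(samples):
    """Elapsed time between first and last sample (the 'stopwatch')."""
    return samples[-1][0] - samples[0][0]

print(round(path_length(samples), 2), completion_time(samples))
```

The same two numbers can then be compared across the four architectural conditions.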

Rating of Experiential Qualities (REQ)

The spatial analysis can only be meaningful in relation to an equivalent evaluation from a human-experience point of view. Evaluating “lived space” (2) can be done by asking participants to rate their experience of each spatial characteristic. This task brings the qualitative human evaluation into the equation. Using a semantic differential scaling technique, subjects were able to differentiate their appraisal on a five-step Likert-like scale. The rating categories were selected to represent the previously mentioned properties: enclosure and complexity.
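A minimal sketch, with hypothetical scene labels and made-up ratings, of how such semantic-differential scores could be aggregated per scene and per quality:

```python
from statistics import mean

ratings = {
    # (scene, quality): list of participant ratings on the 1-5 scale
    # (all numbers below are invented for illustration).
    ("A1", "enclosure"):  [4, 5, 4, 3],
    ("A1", "complexity"): [4, 4, 5, 5],
    ("B2", "enclosure"):  [2, 1, 2, 3],
    ("B2", "complexity"): [2, 2, 1, 2],
}

# Mean appraisal per (scene, quality), to be set against the objective
# Space Syntax measures of the same scenes.
means = {key: mean(vals) for key, vals in ratings.items()}
for (scene, quality), m in sorted(means.items()):
    print(f"{scene} {quality}: {m:.2f}")
```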

Perspective Taking Task (PTT)

Once out of the IVAS, participants had to answer a few questions in an online questionnaire before completing this last cognitive task. Its main purpose is to measure the memorability of each scene (4). It consists of a sequence of 16 pairs of images. In each pair, one image was taken from one of the explored rooms; the other was taken from a room not visited. The participant had to identify which image related to one of the scenes they had experienced.
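Scoring this task reduces to counting, over the 16 pairs, how often the participant picks the image from a visited scene. A toy sketch with a made-up answer key and response set:

```python
# Hypothetical answer key and participant responses for the 16 pairs;
# both sequences are invented for illustration.
correct_answers = ["left", "right", "right", "left"] * 4   # 16 pairs
participant     = ["left", "right", "left",  "left"] * 4

# One point per pair where the participant picked the visited-scene image.
score = sum(a == b for a, b in zip(correct_answers, participant))
accuracy = score / len(correct_answers)
print(score, accuracy)  # 12 correct out of 16 -> 0.75
```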

Perspective Taking Task – All the views
Perspective Taking Task – Pair 03
Perspective Taking Task – Pair 11

Space Syntax Design Analysis

The design analysis using the Space Syntax approach gives us an objective measure of each spatial characteristic considered. By combining both “Isovist” and “Visibility Graph” techniques, we obtain a number of measurands (3). In this case, we will use the measurands that best predict each spatial characteristic considered.

The spatial qualities and their related measurands are:

  • Enclosure: “isovist openness” and “jaggedness”;
  • Complexity: “number of vertices”, “vertex density”, “roundness” and “jaggedness”.
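For illustration, “jaggedness” is commonly computed as the isovist’s squared perimeter divided by its area (after Wiener and Franz (3)). A minimal sketch for a polygonal isovist follows; the example polygon is invented, and “openness” is omitted since it requires classifying each boundary edge as solid or open:

```python
import math

def shoelace_area(poly):
    """Polygon area via the shoelace formula (vertices as (x, y) tuples)."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def perimeter(poly):
    """Sum of edge lengths around the polygon boundary."""
    n = len(poly)
    return sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))

def jaggedness(poly):
    """Perimeter squared over area; larger for more convoluted isovists."""
    return perimeter(poly) ** 2 / shoelace_area(poly)

# Simplest case: a 3 m x 3 m square isovist (values are illustrative).
square = [(0, 0), (3, 0), (3, 3), (0, 3)]
print(len(square), round(jaggedness(square), 2))  # 4 vertices, 16.0
```

The “number of vertices” measurand falls out of the same representation for free, as `len(poly)`.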

Early Observations

A quick glance at the data shows that participants experienced spatial complexity as intended in the scenes designed with columns. On average, their best performance came from the closed room with columns. However, the feeling of openness doesn’t seem to be related to the number of windows in the room. The most likely explanation is that there was a texture on the glass: it wasn’t completely transparent. A participant even said: “I didn’t realize that there were glass panel walls!”
This is just a short preview of the kind of conclusions I am working on. This experiment has produced plenty of good data to dig into, some results positive, some negative. I have a few pages to fill with that discussion (check further posts).

  1. Arthur E. Stamps. Mystery, complexity, legibility and coherence: A meta-analysis. Journal of Environmental Psychology, 24(1):1–16, 2004.
  2. Annemarie S. Dosen and Michael J. Ostwald. Lived space and geometric space: comparing people’s perceptions of spatial enclosure and exposure with metric room properties and isovist measures. Architectural Science Review, 60(1):62–77.
  3. Jan M. Wiener and Gerald Franz. Isovists as a Means to Predict Spatial Experience and Behavior. Pages 42–57. Springer Berlin Heidelberg, 2005.
  4. Barbara Tversky and Bridgette Martin Hard. Embodied and disembodied cognition: Spatial perspective-taking. Cognition, 110(1):124–129, 2009.


Creating Memories in Virtual Reality.

I love Metaworld’s introduction about creating memories, despite the fact that they use social presence as their main attraction, which I completely disregard in my research… for now. Anyway, you may be wondering: why would we want to create memories in a virtual environment in the first place? Isn’t it better to go out into the physical world and meet real people?

Playing chess inside Metaworld

Well, it doesn’t compare. The whole idea of using VR to create memories is based on a couple of principles that are much more powerful to set up in VR: associations and embodiment.

Firstly, memories love associations. It is by associating a new piece of information with a network of previously formed memories that we make sense of it and remember it. We can then retrieve the memory more efficiently later on by accessing the same network.

Secondly, memories are embodied. We don’t work like computers: we don’t have a hard drive or a cloud where information is engraved one item at a time, in a linear manner. We store information on a multi-modal level, within a mind-body system. So the more senses we engage when encountering a new piece of information, which calls upon the previous principle of association, the better we will be able to form new memories.

By proposing a persistent online world that we enter by immersing ourselves with an HMD (Head-Mounted Display), headphones and a pair of controllers in our hands, we are already engaging most of our senses. However, in VR we don’t have the same constraints as in the physical world: anything is possible that serves the purpose the participant is after. We can fly, travel into space, swim in the ocean or go inside a volcano.

Now, one of the debates around embodiment is the question of the avatar. When we enter VR as ourselves, we can see our hands; we don’t want to see a cartoonish version of our body. However, if we want to interact with other people, we need to see them somehow. Metaworld solves this issue by using rather cartoonish avatars with detached, detailed hands. That way, you can see your own hands, and other people’s hands along with their avatars. Let’s try it.

Is Augmented Reality the next design tool?

What role will Augmented Reality (AR) play in architectural practice and the design of our built environment? How will companies use this technology to attract more consumers? Let’s look at two very different ways to use the power of AR: as a design tool and as a consumerist device.

It looks like Microsoft is taking the first route with the HoloLens. With a variety of polished video presentations, they present the HoloLens as an amazing new tool to help design interiors, buildings or even a whole new world. It even lets you communicate in real time with teammates on another continent. As an architect, Greg Lynn seems enthusiastic after having used the HoloLens at the Venice Architecture Biennale 2016 (interview from Dezeen here). For interior designers lacking vision, this video shows the potential. And to be really blown away and transported into the future, check out this TED talk from the visionary Alex Kipman.

Of course, I couldn’t agree more with him that all those 2D interfaces, displays and screens that have invaded our lives are born obsolete. We are moving creatures, always exploring around us, using our whole body and all our senses to interact with our environment. We are not supposed to be locked behind a screen all day. But I am digressing onto my research topic here. Even though this is a very exciting topic, we still don’t really know how we actually interact with those virtual objects embedded in our physical surroundings. I will look into VR and user experience in another post.

In the meantime, looking at the other side of the coin, the consumerist version of AR, perhaps Keiichi Matsuda’s vision in his “Hyper-Reality” production is closer to what will happen sooner rather than later.

HYPER-REALITY from Keiichi Matsuda on Vimeo.

If VR is not the answer, neither is AR. What I am really looking forward to is the day those two technologies merge into a Mixed Reality (MR) integrated into some sort of contact lenses.

Testing Oculus DK2 – Leap Motion Virtual Reality Blocks Demo.

Last week, I had the opportunity to try out the Oculus Rift – Leap Motion demo in Goldsmiths Digital Studio. Here is the setup: with the Oculus Rift on your head, you stand in the middle of the room, a couple of wires climbing up to the ceiling. There is enough space around you to take a few steps in every direction. A small camera on the front wall captures the movement of your head in real time: rotating, tilting and panning. Stuck on the face of the Head-Mounted Display (HMD) is the Leap Motion sensor, which captures your hand movements in front of you. Ambient music plays in the background for everyone in the room to hear. Sounds triggered by your movements, as well as your reactive expressions, give any observer present in the room a great incentive to try the device for themselves. That’s all they perceive from the outside, though.

From your point of view (the participant), it is a very different situation. You are immersed in a simple world made of a grid-like grey plane level with your chest, surrounded by a black environment. A variety of solids (3D geometric shapes) like cubes, cuboids and other polyhedra, randomly spread over the grid surface, attract your attention. Right in front of you is a little character guiding you through the gestures you can use to interact with those solids. By pinching with both hands, you can create more solids; by opening one hand, you can point with a finger of your other hand at different shapes to choose from. You can actually interact with all those solids around you: pick them up, build walls or even throw them away. Then, with both hands flat out, palms up, moving gently upwards, gravity goes away and all the polyhedra float above you. The opposite gesture, palms down, brings gravity back and all the solids down to the ground.
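The gesture-to-action mapping described above can be sketched as a simple dispatch. The gesture names and the state representation here are invented for illustration, not the demo’s actual code:

```python
# Toy sketch of a gesture-to-action dispatch for the demo's interactions.
def on_gesture(state, gesture):
    """Return a new world state after handling one recognized gesture."""
    actions = {
        "pinch_both_hands": lambda s: {**s, "solids": s["solids"] + 1},
        "palms_up":         lambda s: {**s, "gravity": False},
        "palms_down":       lambda s: {**s, "gravity": True},
    }
    # Unknown gestures leave the world unchanged.
    return actions.get(gesture, lambda s: s)(state)

world = {"solids": 5, "gravity": True}
world = on_gesture(world, "pinch_both_hands")  # create one more solid
world = on_gesture(world, "palms_up")          # polyhedra start floating
print(world)
```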

This interaction gives you a sense of power and control over things you can otherwise only dream of. Because the interactions with those solids are so subtle, after just a few minutes of play your sense of touch rapidly becomes more accurate, to the point of suggesting a level of haptic feedback.

So, yes, a great experience overall: being able to see your hands in Virtual Reality definitely adds to the feeling of immersion. It is not just looking around at a 360° panoramic view. When you can interact with your environment directly, using your hands and moving things around, the sensation of presence in another space is undeniable.

Hustling Space-Body Phenomenon

AVPD Hitchcock-hallway
Experimenting with space, placing the participant in an environment that shifts their perspective, is a powerful way to question who we are and how much we relate to our surroundings. AVPD, an artistic spatial laboratory from Copenhagen, uses architectural language to explore and “rethink the triangular constellation of the subject, the object and the context”. Their work evolves by hustling the space-body phenomenon. It makes you realise how much our perception of the world relies on our acquired experiences and emotions.
Once aware of this phenomenon, we can use it to our advantage by reconfiguring our spatial understanding of the world. What I am interested in is extracting from those experiments and art installations data that could help quantify how useful specific architectural features are in helping someone navigate the physical world as well as the virtual one.