Tag Archives: Education

IVAS eXPeriment

Thirty-five people took part in the Immersive Virtual Architecture Studio (IVAS) between 26th February and 9th March 2018. The experiment was presented simply as a room-scale VR experience in which you had to solve a few jigsaw puzzles inside different rooms. This was one of the most important and exciting parts of my PhD research project. What follows is a short explanation of its context, purpose and approach.

Soapbox VR room running IVAS with Dean.

Context

To a certain extent, Homo sapiens’ biggest achievement as a global civilization has been to transform and adapt the environment to its needs. The main strength behind this outcome is our “spatial awareness”: the ability to perceive and make sense of the spatial environment, and the intrinsic sense of agency it affords. Sapiens developed this ability along different paths, the most recognizable one being the field of “architecture”. For more than five thousand years, using bricks and mortar, we have built places to fulfill all the different functions required by society: services, religion, politics and other cultural activities. In the 21st century, Virtual Reality (VR), an inherently spatial technology, offers the perfect medium to test and apply some of the architectural principles developed over the centuries to structure and navigate today’s overwhelming digital landscape.

Purpose

The built environment has a significant effect on human behavior in the physical world (1). How does that translate to VR? The overall aim of this project is to establish the foundations of a framework to support the design of Immersive Virtual Environments. Such a framework would benefit not only scientists but every field VR is disrupting, such as game design, industrial design, data visualization and learning applications, to name just a few.

Approach

This study explores ways to evaluate how different architectural elements affect human spatial cognition performance, using the IVAS. The next step will be to apply those findings to support specific cognitive tasks for specific users. This particular iteration of the project looks at two architectural elements arranged following two spatial characteristics. These four conditions are tested using three cognitive tasks. What follows is a short description of the setup.

Physical Space – Hardware – Mode of Interaction

For most of our history, natural movement has been the only way to navigate our environment and to experience “architecture”; therefore, it is the primary mode of interaction used in this experiment. To support this principle, a room-scale VR environment is set up with a minimum of 9 sqm (3 m x 3 m) of navigable space. In this instance, the IVAS experiment took place in two different rooms, at two different sites: Goldsmiths, Hatcham House 1, and Soapbox, Old Street 68.
The second mode of interaction is the VR system, composed of an HTC Vive head-mounted display and two wireless hand-held controllers, together allowing eighteen degrees of freedom (18 DOF) of tracked movement. The headset is tethered to a powerful laptop that runs the simulation.

Screenshot from Unity running IVAS A1

Virtual Space – Software – 3D Models

A room with approximately the same dimensions as the physical room is modeled in 3D and serves as the base for the different conditions (architectural scenes) to be tested. All other 3D assets are modeled in 3ds Max before being imported into Unity3D, where the interactivity is programmed.

Spatial Conditions

Two architectural elements, walls and columns, were studied following two spatial characteristics: enclosure and complexity (3).

  • A1: Closed Columns
  • A2: Open Columns
  • B1: Closed Walls
  • B2: Open Walls
Layout of the different conditions.

Three Tasks involving Spatial Cognition

Solving a Jigsaw Puzzle

This task was designed to encourage participants to navigate the space in search of all the items needed to solve the puzzle. A stopwatch encouraged them to do so as fast as possible, providing a way to measure performance. VR makes it easy to track users’ movement: time, position and rotation. Everybody seemed to enjoy solving the jigsaw puzzle and was very focused on the task; I had to remind them to explore the space before starting. Once the puzzle was solved, the participant was automatically transferred to a transition area where they had to rate two experiential qualities.
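To give an idea of what that tracking data affords, here is a minimal sketch of how a recorded movement track could be summarised into task time and distance walked. The (t, x, y, z) log format and the function name are my own assumptions for illustration, not the actual IVAS logging code (which lives in Unity).

```python
import math

def summarise_track(samples):
    """Summarise a movement track recorded during the task.

    `samples` is a list of (t, x, y, z) tuples: a timestamp in seconds
    and the head position in metres (hypothetical log format).
    Returns (total task time, distance walked).
    """
    total_time = samples[-1][0] - samples[0][0]
    distance = 0.0
    # Sum the straight-line distance between consecutive samples.
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        distance += math.dist((x0, y0, z0), (x1, y1, z1))
    return total_time, distance

# Example: three samples along a straight 2 m walk taking 4 s.
track = [(0.0, 0.0, 1.6, 0.0), (2.0, 1.0, 1.6, 0.0), (4.0, 2.0, 1.6, 0.0)]
print(summarise_track(track))  # (4.0, 2.0)
```

The same per-sample data can later be replayed as heat maps or trajectories over the floor plan of each condition.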

Rating of Experiential Qualities (REQ)

The spatial analysis is only meaningful alongside an equivalent evaluation from the human experience point of view. Evaluating “lived space” (2) can be done by asking participants to rate their experience of each spatial characteristic. This task brings the qualitative human evaluation into the equation. Using a semantic differential scaling technique, subjects differentiated their appraisal on a five-step Likert-like scale. The rating categories were selected to represent the previously mentioned properties: enclosure and complexity.
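For readers unfamiliar with semantic differential data, a minimal sketch of how such ratings aggregate per condition might look as follows. The condition labels, quality names and numbers are invented for illustration, not actual study data.

```python
from statistics import mean

# Hypothetical ratings on a 1-5 semantic differential scale
# (1 = enclosed ... 5 = open; 1 = simple ... 5 = complex), per condition.
ratings = {
    "A1": {"openness": [2, 1, 2], "complexity": [4, 5, 4]},
    "A2": {"openness": [4, 5, 4], "complexity": [4, 4, 5]},
}

def mean_ratings(ratings):
    """Average each experiential quality per condition."""
    return {cond: {q: mean(vals) for q, vals in quals.items()}
            for cond, quals in ratings.items()}

print(mean_ratings(ratings))
```

These per-condition means are what get compared against the objective Space Syntax measures later on.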

Perspective Taking Task (PTT)

Once out of the IVAS, participants had to answer a few questions in the online questionnaire before completing this last cognitive task. Its main purpose is to measure the memorability of each scene (4). It consists of a sequence of 16 pairs of images. In each pair, one image was taken from one of the explored rooms; the other was taken from a room not visited. The participant had to identify which image related to a scene they had experienced.
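Scoring this task reduces to a recognition accuracy over the 16 pairs. Here is a small sketch of that scoring, assuming each pair is answered by picking the left or right image; the data layout is hypothetical, not the actual questionnaire export.

```python
def ptt_score(responses, answer_key):
    """Proportion of image pairs answered correctly (memorability proxy).

    `responses` and `answer_key` map pair number -> chosen/correct
    image ("left" or "right"); the field names are assumptions.
    """
    correct = sum(responses[p] == answer_key[p] for p in answer_key)
    return correct / len(answer_key)

# A hypothetical key for 16 pairs, and a participant with two mistakes.
key = {p: "left" if p % 2 else "right" for p in range(1, 17)}
answers = dict(key)
answers[3] = answers[11] = "right"  # pairs 3 and 11 answered wrongly
print(ptt_score(answers, key))  # 0.875
```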

Perspective Taking Task – All the views
Perspective Taking Task – Pair 03
Perspective Taking Task – Pair 11

Space Syntax Design Analysis

The design analysis using the Space Syntax approach gives us an objective measure of each spatial characteristic considered. By combining the “Isovist” and “Visibility Graph” techniques, we obtain a number of measurands (3). In this case, we will use the measurands that are the best predictor variables for each characteristic.

The spatial qualities and their related measurands are:

  • Enclosure: “isovist openness” and “jaggedness”;
  • Complexity: “number of vertices”, “vertex density”, “roundness” and “jaggedness”.
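To make these measurands concrete, here is a minimal sketch computing a few of them from a single 2D isovist polygon. I take jaggedness as perimeter² / area, a common compactness-style measure in the isovist literature; the exact definitions used in the study (and the visibility-graph measures) may differ, and openness and roundness are omitted because they depend on which isovist edges are open boundaries.

```python
import math

def isovist_metrics(polygon):
    """Basic measurands from a 2D isovist polygon.

    `polygon` is a list of (x, y) vertices in order (metres).
    """
    n = len(polygon)
    area = 0.0
    perimeter = 0.0
    for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
        area += x0 * y1 - x1 * y0          # shoelace formula
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    return {
        "area": area,
        "perimeter": perimeter,
        "jaggedness": perimeter ** 2 / area,   # assumed definition
        "n_vertices": n,
        "vertex_density": n / area,
    }

# A 3 m x 3 m square isovist: area 9, perimeter 12, jaggedness 16.
print(isovist_metrics([(0, 0), (3, 0), (3, 3), (0, 3)]))
```

More jagged, concave isovists score higher on this measure than compact convex ones of the same area, which is what makes it a candidate predictor of perceived complexity.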

Early Observations

A quick glance at the data shows that participants experienced spatial complexity as intended in the scenes designed with columns. Their best average performance came from the most enclosed room with columns. However, the feeling of openness doesn’t seem to be related to the number of windows in the room. The most likely explanation is that the glass had a texture applied and wasn’t completely transparent. One participant even said: “I didn’t realize that there were glass panel walls!”
This is just a short preview of the kind of conclusions I am working on. The experiment has produced plenty of good data to dig into, some with positive results and some negative. I have a few pages to fill with that discussion (check further posts).


References:
  1. Arthur E. Stamps. Mystery, complexity, legibility and coherence: A meta-analysis. Journal of Environmental Psychology, 24(1):1–16, 2004.
  2. Annemarie S. Dosen and Michael J. Ostwald. Lived space and geometric space: comparing people’s perceptions of spatial enclosure and exposure with metric room properties and isovist measures. Architectural Science Review, 60(1):62–77, 2017.
  3. Jan M. Wiener and Gerald Franz. Isovists as a Means to Predict Spatial Experience and Behavior. In Spatial Cognition IV, pages 42–57. Springer Berlin Heidelberg, 2005.
  4. Barbara Tversky and Bridgette Martin Hard. Embodied and disembodied cognition: Spatial perspective-taking. Cognition, 110(1):124–129, 2009.

 

PhD Upgrade Success!

Success!

Celebration time: progress has been made. On 3rd October 2017, I passed my upgrade. It took me a while to get there; more than 540 hours of mostly writing, in fact, to be awarded an MPhil (Master of Philosophy). I can now work towards obtaining a PhD (Doctor of Philosophy).

What follows is an extract from the upgrade report, to give a better picture of what I am working on. This is the introduction.

We live in exciting times, where technologies developed over the last 50 years are converging. Mobile computers, with the smartphone as the catalyst, are spreading into the market faster than any technology before, almost reaching saturation in western countries. Today, most high-end smartphones offer a basic level of immersion in Virtual Reality. This brings a completely different medium of communication and interaction with our digital environment. We are on the verge of breaking through the frame, the screen, the window… By coming back to a more natural interaction with the digital realm, we have the opportunity to rediscover one of humanity’s great strengths: spatial awareness.

Virtual Reality has been used as a laboratory test environment across many different fields for the last five decades. Cognitive processes such as visuospatial perception, memory, spatial awareness and navigation are just a few of the areas of investigation that the scientific community has placed under the lens of different types of Virtual Reality systems. Each of those systems comes with its own hardware and software specifications. These in turn influence the quality of the computer graphics and of the human-computer interaction, which plays a huge role in the level of presence experienced by participants. Despite these variables, Virtual Reality is a very convenient research tool, because most of them can potentially be controlled and modified, whereas “real” world settings are more complicated, or at least far more expensive, to customise.

However, despite the groundbreaking work and progress made in those fields, most of those virtual environments contain many inconsistencies by design. These design inconsistencies make it difficult to replicate studies from one lab to another, or even from one study to the next. Comparisons between studies are hampered by too many confounding variables. In particular, the variables describing the spatial environment are usually barely reported at all. This research will focus on isolating those spatial qualities and evaluating how they affect human cognitive performance in VR.

Next post will present the architectural side of this project and how it will be implemented.

Let me know if you have any comments on the writing, the comprehension and/or the content.

VR Demo for Techday at Dragon Hall

The team from the Dragon Hall Trust set up a #techday event in June to give young people who wouldn’t usually have the chance the opportunity to try out new technologies such as robotics, 3D printing, VR and AR. Being in charge of the VR side of the event, I presented a prototype of my research, as well as the Irrational Exuberance Introduction, using the HTC Vive VR system. With those two very different VR experiences on offer, visitors had the choice between being a builder or an explorer.
Their response was fantastic. Their enthusiasm and their positive and sceptical reactions have fuelled me to pursue my research into VR and architecture with renewed energy.
Techday at Dragons Hall Trust
I would like to take this opportunity to share the post-event response from the team, which gives a pretty good idea of the success of the event:
“I would like to say a huge thank you for coming to display, demonstrate and inspire the young people who attended Tech day. Your VR display was incredible; everyone who came off it was amazed and had enjoyed themselves. I still have flashbacks from when one of the kids let go of the hand sensor. However, the great thing about your headset is that it can engage not only young people but adults, and they can all have the same experience and enjoyment from it.
That for us is one of the great things about tech day. It is a place where young people are able to see first hand the developments in STEM and how technology can be used.
We reached 232 attendees, with young people and adults witnessing technology they haven’t previously seen. Anyone who we have spoken to has given us great feedback.”
The last thing I would like to add is that I will be ready for the next event with a more advanced VR experience to try out.

London VR Meetup Education Special Rocked.

Monday evening was my first VR meetup for developers, an education special; right up my alley. Really dynamic, full of enthusiastic developers and makers, with a few exciting demos on the first floor and loads of speakers on the ground floor. All this in a good old-fashioned London building, Hackney House. This post will mostly cover my experience of the demos I tried.

The speaker room

The first thing I tried was the full-body VR immersion kit. It took some time to adjust all the straps around every main bone. Once calibration was done, I fitted the Oculus Rift on my head and there I was, looking in the mirror at my avatar, a character attached to my body and movements. Despite the lack of space (and the usual plain black environment), a sense of scale was given by a couple of statues. A few tweaks using the UX interface designed by Dr Harry Brenton, and you have a tail, a gigantic arm or an alien head. The level of presence got higher with every step. I can’t wait for full hand and finger tracking.

Harry Brenton slide on Character Design

The next experience was real-time holography in VR. You put the headset on to meet the holographic projection of someone in Lithuania, in real time. She couldn’t see me, but I could see her and talk to her over the phone: I would say “hello” and she would wave her hands. Not yet Ghost in the Shell, but it works; telepresence, yes! Still, it would be nice to have a sense of geographic location in the environment, wouldn’t it?

The most exciting demo was about making VR in VR, by Constructive Labs. Still at an early stage, it lets you manipulate objects in VR using the HTC Vive and its controllers, much as you would use the mouse and keyboard in 3ds Max or Unity. On top of that, we were able to do this while interacting with another person. Their idea is to develop a VROS, a Virtual Reality Operating System. Pretty cool stuff! Again, however, the environment was really poor: just a model of a random brick tower as a gimmicky surrounding.

The last demo I tried, by Bandits Collective, was more of an interactive storytelling treat. After the heated panel discussion about 360 video earlier in the evening, I think they nailed it. Their intro for a short movie places you in a computer-generated 360 environment, animation-based and stylish. It is the environment that makes you stay where you are and look where the action happens, though you can look all around and even move. There is no interaction; the action happens around you. Very promising!

There were a couple of other demos with Cardboard and other mobile VR. We know what we get there; I am much more interested in what we don’t know. There is still so much to explore, especially for me, as you can tell, about spatial environments in VR. Of these four demos, only the 360 story had a designed spatial environment that supported the experience.