exp_08

GAZING FIGURES lets go of two-eyed vision and confronts performers with a multi-eyed perspective inspired by the jumping spider Phidippus regius.

The experiment equips the performers with self-constructed VR suits, VR headsets and a camera on each of their wrists and knees.

performer wearing a VR suit that attaches webcams to their wrists and knees

Photo: Mario Strahl

research questions

  1. If the way humans move is shaped by how they see or do not see, will modifying the configuration of their vision change the way they move?

  2. Is the default of stereoscopic vision so elemental to VR that an experiment embracing other styles of vision must be called something other than VR?

  3. How do human testers react when their point of view, usually closely linked to their heads, becomes technologically multiplied and spread over their bodies?

development

  1. The basis for GAZING FIGURES was research on animals with more than two eyes. Spiders came to mind first, and some further reading drew our interest to jumping spiders of the genus Phidippus.
Phidippus regius is about 15 mm (females) or 12 mm (males) in size. While many other spiders use vibration to detect prey, jumping spiders rely on their relatively good vision. Phidippus audax, for example, can see sharply within a range of about 30 cm. Jumping spiders have spectral sensitivities in the green and ultraviolet ranges of light.
jumping spider Phidippus regius
  2. For our first test we wrote the code to feed multiple webcam streams into the game engine Unity. It was unusual to see oneself as seen by cameras from different angles on a computer screen.

unity screenshots with different webcam images
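The multi-webcam input step can be sketched in Python with OpenCV as a stand-in for our Unity code, which is not shown here. The device indices are assumptions that depend on the machine, and the `tile_2x2` preview helper is an illustrative addition, not part of the original setup.

```python
import numpy as np

def open_cameras(indices=(0, 1, 2, 3)):
    """Open one VideoCapture per webcam; call only on a machine with cameras attached."""
    import cv2  # imported lazily so the tiling helper below also works without OpenCV
    caps = [cv2.VideoCapture(i) for i in indices]
    return [c for c in caps if c.isOpened()]

def grab_frames(caps):
    """Read one frame from every open camera, skipping failed reads."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    return frames

def tile_2x2(frames):
    """Tile four equally sized HxWx3 frames into one 2H x 2W preview image."""
    a, b, c, d = frames
    top = np.hstack([a, b])
    bottom = np.hstack([c, d])
    return np.vstack([top, bottom])
```

On a machine with four webcams, `tile_2x2(grab_frames(open_cameras()))` yields a single image showing all four viewpoints at once, similar to what we saw on the computer screen.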

  3. In our university’s VR lab, we connected several webcams to the main computer. Because the HTC Vive headset worked with Windows only, we faced some complications getting all webcams detected. We had to power the cameras via different USB ports of the computer, because one port would not power more than one camera. Finally, we managed to access all four cameras at once.
  4. We attached four webcams to our hands and knees using cable ties and fabric. Unfortunately, the person wearing the webcams had to stand close to the main computer, since the cameras’ cables had to remain connected to the heavy machine.

a camera is attached to a hand

  5. We combined the four webcam images into a collage in which the margins of the images softly blend into one another, and displayed the collage on the HTC Vive’s headset.

a screenshot of the collage as seen through the HTC Vive’s headset
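The soft blending between neighbouring images can be sketched as a linear cross-fade over a fixed overlap strip. This is a minimal numpy sketch, not our actual compositing code; the overlap width and the 2x2 layout are illustrative assumptions.

```python
import numpy as np

def blend_pair(left, right, overlap):
    """Cross-fade two HxWx3 float images over `overlap` shared columns.

    The last `overlap` columns of `left` fade out linearly while the first
    `overlap` columns of `right` fade in, so the margin blends softly.
    """
    h, w, ch = left.shape
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the strip
    seam = left[:, w - overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate([left[:, :w - overlap], seam, right[:, overlap:]], axis=1)

def collage_2x2(frames, overlap):
    """Blend four frames into a 2x2 collage with soft horizontal and vertical seams."""
    top = blend_pair(frames[0], frames[1], overlap)
    bottom = blend_pair(frames[2], frames[3], overlap)
    # Blend the two rows vertically by reusing blend_pair on transposed images.
    return blend_pair(top.transpose(1, 0, 2), bottom.transpose(1, 0, 2),
                      overlap).transpose(1, 0, 2)
```

`collage_2x2` produces one image slightly smaller than the naive 2x2 grid, because each pair of neighbours shares an overlap strip instead of meeting at a hard edge.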

  6. Because movement was severely restricted while testers were tethered to a big computer, we ported the experiment to small, handy Raspberry Pi computers and mobile VR headsets.

try or catch

Associating the camera positions on the body with the positions of the camera images in the collage was generally easy, but moving turned into a challenge for us and our two testers. Movements slowed down as the testers had to coordinate arms and legs in an unfamiliar way.

Their first reaction was to lower their bodies closer to the ground and remain still for a minute to just look; then they initiated slow movements, starting with the arms and following carefully with the knees. When they later moved on their feet and hands, belly up, or stood with hands and knees stretched as far from their core as possible, their poses and motions looked arachnoid. Watching them was strange, because we had not used GAZING FIGURES long enough ourselves to comprehend what they saw when they gazed at us through multiple cameras. They chose to hold postures that seemed exhausting in order to test their new vision properly, and while they had talked to us a lot during the first minutes of the experiment, they later sank deeper into the experience without communicating verbally.

After the experiment, the testers told us that they had paid more attention to objects close to the ground, which were usually too far from their head and eyes to catch their interest. They described the experience as ‘navigating through your own perception’ and liked that they could choose what to look at and from which angle(s). They had fun trying new poses and movements.

After these first tests, we constructed VR suits that allowed performers to move more freely. In exhibitions, we transferred the collage of webcam images that the performers saw inside their VR headsets to monitors.

performer in the exhibition moves according to their modified vision
performer in the exhibition moves according to their modified vision

Photos: Mario Strahl

conclusion

That human testers change their movements and poses due to a multi-eyed process of perception is remarkable to us. The experiment shows that changes to the structure of the human body, caused for example by technology, lead to different body movements. Testers easily adjusted to receiving visual input from areas of the body that usually cannot see.

In contrast to GAZING FIGURES’ arranged views, which require testers to initiate large movements to make sense of their surroundings, human eyes can give an overview of a room simply by rotating. Yet even though the testers of GAZING FIGURES technically looked at a flat screen in a VR headset, their actions and verbal feedback emphasized that they felt closely connected to their surroundings. Instead of watching objects from a distance, they were sometimes next to, under, above and far from an object with four cameras at once.

Most contemporary VR games and experiences already define how the player has to move or not move to make progress. VR content producers in a way design choreographies that become visible to those who watch others act in VR.

To us, the experiment proposes something other than what producers of VR systems like Oculus, HTC, PlayStation and Google promote as VR, because it serves to connect users to a space rather than to displace them. In the process of developing GAZING FIGURES we dropped the original plan to use the HTC Vive, because the area in which we could have moved with it was too restricted. Our university’s HTC Vive and the matching PC cost around 2500€; a Raspberry Pi computer and a cheap mobile VR headset cost less than 50€ and can be used with a three-year-old Android phone that we already have.

Can VR, as a technology that is built upon stereoscopic vision and that promotes an escape from reality, be reinvented and reframed by artists and activists?