All is surface is an application and installation that loops a gaze between human eye and webcam.

Running in a browser or in a gallery, it merges and recreates structures that stem from the designs of operating systems, simulated 3D objects and human bodies.

visitor interacting with the installation

research questions

  1. Looking at the design of modern operating systems (OS) and of human bodies – how are they interactive, how are they interacted with, and how do they interact?

  2. Can we loop a gaze between an OS and a human participant in a way that questions the human participant’s role as sole viewer?

  3. Can this loop reveal something about the gaze circulating in the loop rather than about the entities drawn into it?


  1. First, we looked at our computers, our OS (Is it ours? Can it return our gaze?) and 3D worlds from games and talked about what we saw.
Computer and OS: Although I may be able to touch a screen and trigger actions by touching it, I definitely can’t reach through the screen or make it feel and look less flat, cool and delicate – at least not without destroying it.
The surface of the modern operating system macOS Sierra is greyish and transparent, allowing me to look through paper-like layers (behind my browser bar I see my desktop which I personalized with a photo). These layers are two-dimensional: When diving deeper into my folder system I move from left to right or switch layers, but I never move on the z-axis. Even though everything looks tidy and simple to me, I did put a piece of tape on my laptop’s webcam to make sure that my gaze is the only gaze in touch with my machine.
Games: Simulated three-dimensional worlds with high resolution and photorealistic graphics are often produced by the games industry, which treats me less as a user than as a player – someone who dives into a world, overcomes obstacles, and wins or loses. Many commercially successful studios (AAA studios) believe that immersion, a state of mind in which the player feels part of the displayed world, can be achieved by designing this world so that it looks indistinguishable from what they believe the non-digital world looks like. The avatars are often white, strong males or fantasy figures. More abstract 3D worlds can be found among alternative games that strive not for realism but for the telling of personal experiences. Three-dimensional games let me walk into their virtual world, but am I the one intruding, or is their world enclosing me?
  2. In the game engine Unity, a virtual sky (skybox) can be set up to make the virtual surroundings look more realistic. Unity’s default shader simulates material, light and reflections.
In Unity, a surface may reflect the room it is embedded in, so a silver ball on green grass under a blue sky looks silvery but also shows some green and blue. Such a simulation can make me as a viewer/player believe that the ball looks realistic, but leaves me as a person with a body in front of a screen out of the picture: I am not reflected on the ball.
We used a live webcam video input from our computer in Unity and set it up as the texture of our virtual sky. A smooth metallic surface, being a conductive material, reflects light very easily, so we used that as the surface of the object we wanted our image to be reflected on: a can. When hitting the Play button we could see ourselves reflected on the can.
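The webcam-as-skybox setup described above can be sketched as a small Unity C# component. This is a minimal sketch under assumptions: the class name, the public `skyboxMaterial` field and the choice of skybox shader are ours, not taken from the original project.

```csharp
using UnityEngine;

// Hypothetical sketch: feeds the default webcam into the scene's skybox,
// so metallic (reflective) surfaces pick up the live image as their
// environment reflection.
public class WebcamSky : MonoBehaviour
{
    // A material using a skybox shader, assigned in the Inspector (assumption).
    public Material skyboxMaterial;

    private WebCamTexture webcam;

    void Start()
    {
        webcam = new WebCamTexture();          // default camera device
        webcam.Play();                         // start streaming frames
        skyboxMaterial.mainTexture = webcam;   // webcam becomes the sky
        RenderSettings.skybox = skyboxMaterial;
    }
}
```

With a smooth, fully metallic material on the can, Unity’s physically based shading reflects this sky – and so the webcam image – on the can’s surface.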
screenshot allissurface displaying six rotated cans
  3. We started to multiply the cans and to set them in motion. 3D objects can collide with each other without causing permanent destruction to their forms. Watching the cans merge was a pleasure, so we experimented with movements that would over time lead to collision.

gif with merging cans
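One way to produce the slow drift toward collision described above can be sketched as a hypothetical Unity component (the class name and parameters are ours; without colliders attached, Unity simply renders the overlapping meshes, so the cans appear to merge rather than bounce apart):

```csharp
using UnityEngine;

// Hypothetical sketch: moves each can toward a shared point so that,
// over time, the meshes interpenetrate and visually "merge" on screen.
public class DriftToCenter : MonoBehaviour
{
    public Vector3 center = Vector3.zero;  // assumed meeting point
    public float speed = 0.1f;             // drift speed in units/second

    void Update()
    {
        transform.position = Vector3.MoveTowards(
            transform.position, center, speed * Time.deltaTime);
    }
}
```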

  4. Opening the application we exported from Unity made our live images appear on the cans in a distorted way. As the cans moved, the image of our bodies was deformed and, like the cans themselves, merged with the images on other cans, creating body images far from realistic proportions and easily identifiable body parts. Putting a finger on the webcam made the surface of the cans red.
the cans appear red when the camera lens is touched

try or catch

All is surface was exhibited at Panke Gallery Berlin in February 2017. Visitors walking by the screen would often stop when they saw their reflection on the cans. Watching the cans move, collide and merge, they started moving themselves to see more of their bodies and movements mirrored and liquefied.

Seeing parts of their bodies on the 3D cans was a new experience for many visitors. They said they were used to seeing themselves on screen only in video chat, on an always flat surface such as a program’s interface.


People were not only interacting with All is surface but were also being interacted with – they were used as an input to an OS and they changed their behaviour. They were looking at something presented to them as a media art piece, but the piece instantiated a loop of gazes instead of being an object merely looked at.

Visitors were looking at a screen, but the application on the screen had access to a webcam and was thereby simultaneously looking at the visitors. The application channeled the visitors’ gaze and displayed it as part of its visuals, thereby visualizing less a mirror image than an image created by gazes.

Regarding the perceived dichotomy of real and virtual worlds, we found that the design of an OS both separates its users from and binds them to what they see displayed on a screen. Users can’t reach through, but they personalize what they see. Most OS display a flat surface that disregards the three-dimensional design of the human body and admit only human fingers as suitable for interactions such as clicking, dragging and pointing. In games, human players are mostly represented by avatars that can sometimes be personalized. The world on the human users’ side of the screen is, however, very seldom included, regenerated or reimagined in the game world.

For further research, we are interested in what happens when we actually look ‘through’ OS-driven devices. Virtual Reality (VR) headsets combined with VR-compatible computers generate their content right in front of the human user’s eyes. How is the design of the human present, embedded or hidden in virtual worlds?