Doing Nothing with AI 2.0 is the second iteration in a series of robotic installations that use EEG measurements and a GAN machine-learning model to optimize their movement, sound and visuals, with the aim of making me watch and Do Nothing in 2020.
In times of constant busyness, technological overload and the demand for permanent receptivity to information, doing nothing is rarely accepted; it is often seen as provocative and associated with wasting time. People seem to always be in a rush, stuffing their calendars, seeking distraction and a subjective feeling of control, unable to tolerate even short periods of inactivity.
The multidisciplinary project Doing Nothing with AI addresses the common misconception of confusing busyness with productivity, or even effectiveness. On closer inspection, there is not much substance in checking our emails every ten minutes or in unfocused screen scrolling whenever there is a five-minute wait at the subway station. Enjoying a moment of inaction and introspection while letting our minds wander and daydream may be more productive than constantly keeping ourselves busy.
Nowadays, however, this seems to be getting more and more difficult, as we live in a highly stimulating digital culture. Our brains adapt to sequential tasking, attention spans decline and our cognitive modes seem to leave deep attention behind.
To promote a state of doing nothing, Emanuel Gollob and his team created a neuroreactive installation comprising live EEG measurements, a real-time-adapting robotic choreography, a parametric soundscape and parametric visuals. Over time, the AI increasingly learns to move the installation in a way that best supports the spectator's mind-wandering process, while a parametric control system opens up a space of more than 4 million possible choreographies.
"Doing Nothing with AI" is animated by a generative adversarial network (GAN) and a KUKA KR 6 R900 robot. The real-time robot control system mxAutomation, used together with the Grasshopper plugin "KUKA|prc" and Max/MSP, creates a space of more than 4 million possible combinations of robotic movement, sound and visuals.
Every time a spectator puts on the EEG headband, the GAN generates a choreography based on data collected from previous spectators. After 30 seconds of EEG data, the current choreography is evaluated. If it brought the spectator closer to a state of doing nothing, the choreography is saved and begins to mutate slightly; if it didn't, the GAN generates the next choreography.
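The evaluate-then-mutate-or-regenerate loop described above can be sketched as follows. This is a minimal illustration, not the installation's actual code: the choreography representation (a flat parameter list), the mutation scheme and the `generate_next` callback standing in for the GAN are all assumptions.

```python
import random

MUTATION_RATE = 0.05  # hypothetical: chance and scale of per-parameter perturbation

def mutate(choreography, rate=MUTATION_RATE):
    """Slightly perturb a saved choreography's parameters (hypothetical scheme)."""
    return [p + random.gauss(0, rate) if random.random() < rate else p
            for p in choreography]

def selection_step(current, score_before, score_after, generate_next):
    """One evaluation step, run after 30 seconds of EEG data.

    If the doing-nothing score rose, keep the current choreography and
    let it mutate slightly; otherwise discard it and ask the GAN
    (represented here by the `generate_next` callback) for a new one.
    """
    if score_after > score_before:
        return mutate(current)   # success: keep and vary
    return generate_next()       # failure: sample a fresh choreography
```

Over many spectators, this acts like a simple evolutionary search: choreographies that support mind-wandering persist and drift, while unsuccessful ones are replaced wholesale.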
For public interactive settings, I use a Muse 2016 EEG headband, measuring the relative change of alpha and beta waves at the prefrontal cortex. For more advanced settings, I use an Enobio EEG cap with 8 electrodes and source localisation to calculate the relative change in the non-task brain network.
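The relative alpha/beta measurement used in the public setting can be sketched as a band-power computation over a short EEG window. This is an illustrative stand-in, assuming the Muse's 256 Hz sampling rate and a simple FFT periodogram; the band edges and the 1-40 Hz normalisation range are assumptions, not the project's actual pipeline.

```python
import numpy as np

FS = 256  # assumed Muse 2016 sampling rate in Hz

def band_power(signal, fs, lo, hi):
    """Summed periodogram power of `signal` between `lo` and `hi` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def relative_alpha_beta(signal, fs=FS):
    """Alpha (8-12 Hz) and beta (13-30 Hz) power as fractions of 1-40 Hz power."""
    total = band_power(signal, fs, 1, 40)
    return (band_power(signal, fs, 8, 12) / total,
            band_power(signal, fs, 13, 30) / total)
```

Tracking how these two fractions change between windows gives a scalar signal of the kind the installation could use to judge whether a choreography is moving the spectator toward a doing-nothing state.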
Emanuel Gollob – design, concept & research
Magdalena May – concept & research
Conny Zenk – visual art
Veronika Mayer – sound art
Advice and support
Johannes Braumann – Laboratory for Creative Robotics
Dr Orkan Attila Akgün – Neuroscientist
Magdalena Akantisz & Pia Plankensteiner – graphic design
Doing Nothing with AI is a project supported by the Vienna Business Agency.