How do we interact with neural networks, and how do we perceive and use the data they generate? Can we navigate high-dimensional spaces and manipulate their outputs in a more human, tacit, natural way? Can we explore 3D worlds and control their properties with concrete physical movements in real time? These questions gave the impulse to critically examine the way we give commands to computers by converting analog data to digital, and led to the intention of creating a wearable interface that would allow an alternative form of human-machine interaction by means of hand gestures.
"Unseen and unfelt, the mouse has to disappear in order to work. It has to be both a part of my body and part of the computer, binding two organisms into one, allowing the electrical signals in the nervous system to stimulate and be stimulated by the electrical signals in the computer. The role of the mouse is to simply attach a thin wire to the hand, linking our organic and inorganic circuits." Mark Wigley, "The Architecture of the Mouse," Architectural Design 80(6): 50-57, 2010.
The presented work-in-progress enables gesture-based navigation through an audiovisual scene filled with latent-space imagery: moving 2D textures, generated with the StyleGAN2 neural model trained on a dataset of Google Earth satellite images, are wrapped back onto 3D objects, allowing a speculative workflow for interpreting the network's inner space. The 3D landscape geometry, audio filtering, navigation, and export of generated "snapshots" of the illusionary planet are all controlled by hand gestures, as an experimental approach towards alternative interfaces for interaction with AI. Like all technologies, interfaces will pass through "inter-phases", in-transition stages of development that emerge from the broader scope of intuitive human intention towards more natural ways of communication: with technologies, artefacts, and other intelligences.
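The text does not specify how hand position is mapped into the generator's latent space. One plausible scheme, sketched below under assumptions not stated in the original (normalized 2D hand coordinates from a tracker, StyleGAN2's 512-dimensional latent space, and four freely chosen "anchor" latents at the corners of the interaction plane), is bilinear interpolation between anchor latent vectors, so that moving the hand glides smoothly through the model's latent space:

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN2's default latent dimensionality


def hand_to_latent(hand_pos, anchors):
    """Map a normalized hand position (x, y), each in [0, 1], to a latent
    vector by bilinear interpolation between four corner anchor latents.

    `anchors` is a dict with keys "tl", "tr", "bl", "br" (top-left,
    top-right, bottom-left, bottom-right), each a (LATENT_DIM,) array.
    """
    x, y = hand_pos
    top = (1.0 - x) * anchors["tl"] + x * anchors["tr"]
    bottom = (1.0 - x) * anchors["bl"] + x * anchors["br"]
    return (1.0 - y) * top + y * bottom


# Hypothetical setup: random anchor latents standing in for curated ones.
rng = np.random.default_rng(0)
anchors = {k: rng.standard_normal(LATENT_DIM) for k in ("tl", "tr", "bl", "br")}

# A hand at the center of the plane yields the mean of the four anchors;
# the resulting z would then be fed to the generator to produce a texture.
z = hand_to_latent((0.5, 0.5), anchors)
```

Bilinear interpolation is only one choice; the actual project could equally map gestures to latent directions or to interpolation along a precomputed path.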