- METABODY project
- Hyperbody update 02-2014
- 2013 - 2018
- Project Leader from Hyperbody
- Dr. Nimish Biloria
- Project Team from Hyperbody
- Dr. Nimish Biloria, Ir. Kas Oosterhuis, Jia Rey Chang, Dieter Vandoeren, Veronika Laszlo
- Partners
1. Italy - Infomus - Università di Genova - A. Camurri
2. Germany - Trans-Media-Akademie Hellerau - T. Dumke
3. UK - DAP_Lab - J. Birringer
4. France - K-Danse - J-M. Matos
5. Netherlands - STEIM - M. Baalman
6. Portugal - Fabrica de Movimentos - A. Magno
7. Germany - Palindrome - R. Wechsler
8. Spain - Universidad Autónoma de Madrid - E. Botella-Ordinas
9. Spain - Kouros - P. Palacio, M. Romero
10. Spain - Innovalia - C. Maza
- Associated Partners
1. Netherlands - Hyperbody - Delft University - K. Oosterhuis, N. Biloria
2. Denmark - CAVI - Aarhus University - J. Fritsch
3. Czech Republic - New Technologies Research Centre - University of West Bohemia - J. Romportl
4. France - IRCAM - A. Gerzso
5. UK - Goldsmith's University - L. Parisi
6. Germany - Leuphana Universität - Y. Foerster-Beuthan
7. Spain - Medialab Prado - M. García
8. Spain - Esmuc - Rubén López Cano
9. Canada - SenseLab - Concordia University - E. Manning, B. Massumi
10. USA - Duke University - K. Hayles - Literature Program
11. USA - Duke University - T.F. DeFrantz - Corporeality Working Group
12. USA - UC Berkeley - Theatre, Dance & Performance Studies - L. Wymore
13. USA - UC Santa Cruz - E. Stephens - Art Department
14. USA - NYIT - New York Institute of Technology - K. LaGrandeur
15. Colombia - Facultad de Artes ASAB - A. Gómez
16. Chile - INTERFACE - Arte, Cuerpo, Ciencia y Tecnología - Brisa MP
17. Chile - FIDET - Festival Internacional de Escena y Transdiciplina - S. Valenzuela
18. Chile - Moodlab - F. Ocampo
19. Korea - Myongji University - R. Beuthan
- Spain - Asociación Transdisciplinar REVERSO - Jaime del Val
- Advisory Board
- Donna Haraway, Katherine Hayles, Allucquiére Rosanne (Sandy) Stone, Karen Barad, Stelarc, Brian Massumi, Erin Manning, Annie Sprinkle, Elizabeth Stephens, Luciana Parisi, Federica Frabetti, Liana Borghi, Harmony Bench, Claudia Giannetti, Stefan Lorenz Sorgner, Francesca Ferrando, Yunus Tuncel, Marlon Barrios Solano
Hyperbody-based METABODY developments since January 2014
Experiment 01 (EX 1): Light Scapes
Developed a first operational version of the generalized software framework, based on the Integration.04 project (Dieter Vandoren). Adapted the Max/MSP system to the Protospace lab situation and modularized its components to allow for quick prototyping and experimentation with different spatial configurations, hardware interfaces, software tools and audiovisual synthesis methods.
A first demo environment using the framework was developed for the introduction of the MSc2 course 'Inter-Performing Environments' at Hyperbody. After the introductory talks the students got to try out the demo environment, which gradually built up its level of motion, tracked interaction and projected light architectures. This environment served as a quick test case for the technical framework mentioned above. It was by no means a finished, polished work, but rather a teaser to get the students' inspiration flowing.
- Students enter the dark space in small groups of 2 or 3 at a time.
- Upon entering, visitors encounter one static light plane dividing the space, luring them to the center to examine it (a slight reference to Dick Raaijmakers' fantastic essay The Great Plane: http://v2.nl/archive/articles/the-great-plane, though the original Dutch version's phrasing is better).
Sound: A drone sound fills the space with a slightly alienating ambience.
- When the first visitor to approach 'touches' or crosses the light plane, it suddenly disappears, together with the sound. The visitors are left in surprise.
- Two seconds later, light rays emerge from all three projectors, tracking the center of mass of every visitor within the tracked area. The visitors realize the 'system' is tracking them.
- The tracking rays give way to jittery planar connections that randomly jump between the hands, feet and heads of all visitors within tracking range. The visitors notice the shapes connecting to their body parts and morph them by moving around.
Sound: noise bursts accompany the creation of each new connection, underscoring its jittery, angular geometry.
- The connections give way to vertical planes randomly tracking some of the visitors' hands. The positions of the large, space-dividing planes lock on to the visitors' hands.
Sound: a sine tone links to each plane, emphasizing its solidity.
- The vertical planes give way to a square grid structure filling and dividing the entire space volume. After 10 seconds it starts spinning around the vertical axis, creating vertically scanning planes travelling through the space. 10 seconds later it starts spinning around a second axis, pitching as well as rotating vertically. Another 10 seconds later it starts rotating around the third axis as well, resulting in a slightly disorienting, morphing spatial structure. (There is no motion-tracking interaction here.)
Sound: silence; only visual contemplation.
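The per-visitor ray targeting described above boils down to averaging the tracked joint positions into one aiming point. A minimal plain-Java sketch (the joint positions are hypothetical stand-ins for the tracker's output; the projector mapping itself is not shown):

```java
public class CenterOfMass {
    // Average a visitor's tracked joint positions into one ray target.
    static double[] centerOfMass(double[][] joints) {
        double[] c = new double[3];
        for (double[] j : joints) {
            c[0] += j[0]; c[1] += j[1]; c[2] += j[2];
        }
        c[0] /= joints.length; c[1] /= joints.length; c[2] /= joints.length;
        return c;
    }

    public static void main(String[] args) {
        // Hypothetical head/shoulder/hand/foot positions for one visitor.
        double[][] joints = { {0, 2, 0}, {0, 1, 0}, {1, 1, 0}, {-1, 0, 0} };
        double[] c = centerOfMass(joints);
        System.out.printf("ray target: (%.2f, %.2f, %.2f)%n", c[0], c[1], c[2]);
    }
}
```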
- Notable reactions from the students:
- - "it feels like being tickled" (referring to the tracking rays)
- - "I expected my voice to echo in the projected structure"
Experiment 02 (EX 2): SWARM INTERACTIONS
SwarmmyBody is an interactive swarm-simulation experiment using Processing and the Kinect. By combining basic swarm logic in Processing with the Kinect's skeleton-tracking system, users can manipulate a swarm-based particle system with body movements in real time in a virtual environment. During the interaction, the swarm emerges from the user's body as an extension of his/her own self, and begins to wander from the center of the body mass into the spatial surroundings. Although the swarm has its own degree of freedom to float around in the virtual environment, the rules of cohesion, alignment and separation make it tend to follow the user's position, continually tracking and interacting with his/her motions.
The swarm emerges from the center of the body. The swarm follows the user's position in 3D space via body tracking. The size of the swarm boundary is defined by the distance between the user's hands.
The distance between the user's hands defines the boundary of the swarm: the greater the distance, the wider the space the swarm can float in. With certain postures, the user is also able to influence the swarm's behavior. For example, if the user raises one hand and puts the other one down, the joint coordinates of the raised hand are automatically translated into an attraction point, forcing the swarm to steer towards that hand.
An attraction point is created once the user raises one of his/her hands.
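The cohesion/alignment/separation update with an optional attraction point can be sketched as follows. This is plain Java rather than Processing, updates boids in place for brevity, and the weights are illustrative, not the project's actual values:

```java
public class SwarmStep {
    static class Boid {
        double[] p, v;  // position and velocity
        Boid(double[] p, double[] v) { this.p = p; this.v = v; }
    }

    // One simulation step; attractor is the raised hand's joint position
    // (or null when no hand is raised). Assumes at least two boids.
    static void step(Boid[] boids, double[] attractor, double dt) {
        for (Boid b : boids) {
            double[] coh = new double[3], ali = new double[3], sep = new double[3];
            int n = 0;
            for (Boid o : boids) {
                if (o == b) continue;
                for (int k = 0; k < 3; k++) {
                    coh[k] += o.p[k];          // gather neighbor positions
                    ali[k] += o.v[k];          // gather neighbor velocities
                    sep[k] += b.p[k] - o.p[k]; // push away from neighbors
                }
                n++;
            }
            for (int k = 0; k < 3; k++) {
                double steer = 0.01 * (coh[k] / n - b.p[k])  // cohesion
                             + 0.05 * (ali[k] / n - b.v[k])  // alignment
                             + 0.02 * sep[k];                // separation
                if (attractor != null)
                    steer += 0.1 * (attractor[k] - b.p[k]);  // raised hand
                b.v[k] += steer;
                b.p[k] += b.v[k] * dt;
            }
        }
    }

    public static void main(String[] args) {
        Boid[] flock = {
            new Boid(new double[]{0, 0, 0}, new double[]{0, 0, 0}),
            new Boid(new double[]{2, 0, 0}, new double[]{0, 0, 0})
        };
        double[] hand = {10, 0, 0};  // attraction point from the raised hand
        for (int i = 0; i < 10; i++) step(flock, hand, 0.1);
        System.out.printf("boid 0 x: %.2f, boid 1 x: %.2f%n",
                          flock[0].p[0], flock[1].p[0]);
    }
}
```

With the attractor set, both boids drift towards the hand position while the three rules keep the flock coherent.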
Different render modes are implemented as well. The "Swarm-Mode" shows only the vertices of the swarm.
Render Modes: The Swarm mode (above) and the "Tails-Mode" (below), coloring the traces of the swarm.
The "Tails-Mode" displays the swarm's motion traces by coloring its tracks like tails. The "Line-Between-Mode" shows the connections between swarm particles within a certain distance as a network of surfaces.
Buttons in the user interface switch the different modes on and off according to different conditions. Three sliders for cohesion, alignment and separation control the basic parameters of the swarm behavior.
The "Line-Between-Mode": Creating connection lines between swarm particles within a specified distance.
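The neighbor test behind the "Line-Between-Mode" is a pairwise distance check. A minimal plain-Java sketch (the positions and the 1.5-unit threshold are illustrative; each returned index pair would be drawn as one connection line):

```java
import java.util.ArrayList;
import java.util.List;

public class LineBetween {
    // Return index pairs of particles closer than maxDist; comparing
    // squared distances avoids a square root per pair.
    static List<int[]> connections(double[][] pts, double maxDist) {
        List<int[]> pairs = new ArrayList<>();
        double max2 = maxDist * maxDist;
        for (int i = 0; i < pts.length; i++)
            for (int j = i + 1; j < pts.length; j++) {
                double dx = pts[i][0] - pts[j][0];
                double dy = pts[i][1] - pts[j][1];
                double dz = pts[i][2] - pts[j][2];
                if (dx*dx + dy*dy + dz*dz <= max2)
                    pairs.add(new int[]{i, j});
            }
        return pairs;
    }

    public static void main(String[] args) {
        double[][] pts = { {0, 0, 0}, {1, 0, 0}, {5, 0, 0} };
        for (int[] p : connections(pts, 1.5))
            System.out.println("line " + p[0] + " -- " + p[1]);  // prints "line 0 -- 1"
    }
}
```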
In the current nascent state of this experiment, the focus, apart from establishing behavioral modes of the swarm in virtual environments, is to set up a robust communication protocol between Processing and Max/MSP. In this first phase, an OSC (Open Sound Control) link between Processing and Max/MSP has been set up: the x-y-z coordinates of each swarm vertex can be sent synchronously to Max/MSP as reference input for the immersive projected environments (EX 1). After receiving the input data from Processing, the Max/MSP patches developed by Dieter Vandoren can feed it directly into the render modes for projection.
From left to right: Processing code, Processing simulation and Max/MSP communication patch.
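At the byte level, each vertex update is an OSC message: a NUL-padded address, a type-tag string, then big-endian floats. A minimal sketch of that packing in plain Java (the address "/swarm/vertex" is a hypothetical name, not necessarily the project's; in practice the Processing side would typically use a library such as oscP5 rather than packing bytes by hand):

```java
import java.nio.ByteBuffer;

public class OscVertex {
    // Pad a string to the next 4-byte boundary with NULs, per the OSC spec.
    static byte[] oscString(String s) {
        int len = (s.length() + 4) & ~3;  // +1 NUL terminator, rounded up
        byte[] out = new byte[len];
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Build one OSC message carrying a vertex's x-y-z coordinates.
    static byte[] message(String address, float x, float y, float z) {
        byte[] addr = oscString(address);
        byte[] tags = oscString(",fff");  // three float arguments
        ByteBuffer buf = ByteBuffer.allocate(addr.length + tags.length + 12);
        buf.put(addr).put(tags);          // ByteBuffer is big-endian by default
        buf.putFloat(x).putFloat(y).putFloat(z);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] msg = message("/swarm/vertex", 0.5f, 1.0f, -2.0f);
        System.out.println("packet length: " + msg.length);  // prints "packet length: 36"
    }
}
```

The resulting byte array is what travels over UDP to the Max/MSP patch, where a `udpreceive` object unpacks it.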
Parallel Physical and Ambient experimentations:
Apart from EX 1 and 2, experiments in setting up ambient interfaces and their interconnection with physical systems are also being conducted in parallel. These predominantly deal with controlling light, physical actuators and motor mechanisms via gestural control.
Gesture-based actuation of linear actuators, lights and stepper motors.
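The core of such gestural control is rescaling a tracked body value into an actuator's command range. A minimal sketch under assumed ranges (the 0.5–2.0 m tracking span and 0–180° actuator range are illustrative; the actual hardware interface, e.g. serial or DMX, is not shown):

```java
public class GestureMap {
    // Linearly map v from [inLo, inHi] to [outLo, outHi], clamped to the range.
    static double map(double v, double inLo, double inHi, double outLo, double outHi) {
        double t = (v - inLo) / (inHi - inLo);
        t = Math.max(0.0, Math.min(1.0, t));
        return outLo + t * (outHi - outLo);
    }

    public static void main(String[] args) {
        // A hand height of 1.2 m within a 0.5-2.0 m tracking range,
        // mapped to a 0-180 degree actuator command.
        double angle = map(1.2, 0.5, 2.0, 0, 180);
        System.out.printf("actuator angle: %.0f%n", angle);  // prints "actuator angle: 84"
    }
}
```

The same mapping drives dimmable lights or stepper positions by swapping the output range; Processing's built-in `map()` does the unclamped version of this.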
- The next phase will involve combining the two experiments in such a way that the technical communication platforms of EX 1 and EX 2 are synchronized. This will allow spatial swarm systems to be activated in three-dimensional physical space, fully integrated with body-tracking and interaction capabilities. The various rendering modes of the swarm systems shall thus be interfaced with human interaction and real-time manipulation, so that multi-dimensional interconnected networks and topologies can be generated on the fly through the interplay between human and technological agencies. Emergent networks and connections can be visualized through multi-user interaction within the performance space. The ability to create architectures via swarm-based networks in real time, with the human as an integral part of the interactive system, shall take us one step closer to synergizing different agencies.
- Subsequently, generative sound shall be interwoven as an integral part of the system itself. Any interaction, any new network connection, any new topological formation in combination with body gestures shall thus trigger emergent soundscapes. Creating sequential triggers to draw people in, e.g. starting with sound as a psychologically attracting medium and then subtly involving them with other agencies, is an option to be discussed. The idea is not to push an observer into complete chaos, but to develop leads and subtly build up the complexity so that the user is absorbed and united with the system (much like meeting a new being).
- Parallel experiments with a network of connected air-filled cushions as a continuous surface (floor, wall, ceiling) are currently underway. We would like to extract and map data sets that emerge from the interaction space generated via EX 1 and EX 2: sample a section of this data scape and then develop a feedback loop to the physical material space. In our opinion this will still be a semi-proactive system, which collects data from the virtual interactions and feeds it into physical systems. Therefore, stage 4:
- We will start conducting experiments on how physical augmentation of the material system, via touch, gesture, body movement, etc., can initiate a looped sequence that connects with the virtual interaction space: sound, light, swarms. Can a connection be established that not only engulfs users but also prompts them to think about and decipher the triggers that lead to multi-modal immersive experiences? We will look critically at such issues in an attempt to bind the physical and virtual platforms, establishing seamless information integration and working with adaptations of material systems.