Research exploring novel methods of artistic expression through motion-responsive technology. Developed at UC Berkeley's Berkeley Center for New Media (BCNM) in Dr. Kimiko Ryokai's INFO C262: Tangible User Interfaces course, this project investigates how unconventional input loops can turn digital interfaces into generative platforms for personal and collective emotional expression.
Read the Full Research Paper
This research examines:
- How digital movement representation through inertial tracking creates new movement qualities when translated into media art
- How technological interfaces alter participants' spatial experiences, particularly focusing on flow states and collective movement patterns
- How different kinesthetic tracking methods influence participants' affective responses through ludic, gesture-reactive visualization
- Methods for disrupting participants' daily repertoire of mobility through digital art
- Exploration of non-verbal engagement with media in public and private spaces
Our post-structural approach to interaction design is characterized by:
- Non-hierarchical interaction models emphasizing emergent experiences
- Emphasis on flowing, participant-driven experiences that break traditional structures
- Rejection of binary conceptualizations of "correct" interaction
- Integration of vulnerability and collective expression through movement
- Prioritization of fluid, interpretative experiences over prescriptive interactions
A motion-responsive visualization system that runs on RGB images and transforms physical movements into artistic expression. Features two interaction modes:
- Exploratory Mode: Designed for general users, enabling intuitive engagement with flowing visuals
- Advanced Mode: Targeted at professional movement practitioners, providing detailed biomechanical feedback
- MoveNet pose estimation with DepthAI hardware (OAK-1, OAK-D)
- Real-time movement tracking using a Kinect camera
- Accelerometer (ADXL345) integration for precise motion data
- Multi-person tracking capabilities for collective interaction
- Edge detection and gesture recognition systems
- TouchDesigner for real-time visual feedback
- Point-cloud visualization with reactive harmonics
- Gesture-reactive visual elements, including:
  - Seed, period, and amplitude modulation
  - Dynamic roughness and exponent mapping
  - Real-time color theory integration
- Fluid design approaches that deliberately break conventional interaction structures
- Support for both single and multi-user interactions
- Real-time average value computation for group movement visualization
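As a rough illustration of the group-average computation, the sketch below (a hypothetical helper; it assumes each tracked body is reduced to an array of MoveNet's 17 (x, y) keypoints) averages keypoint positions across all detected participants so a single set of values can drive the shared visualization:

```python
import numpy as np

def group_average_keypoints(bodies):
    """Average corresponding keypoints across all tracked bodies.

    `bodies` is assumed to be a list of (17, 2) arrays, one per detected
    participant (MoveNet reports 17 keypoints). Returns a (17, 2) array
    of averaged positions, or None when nobody is tracked.
    """
    if not bodies:
        return None
    return np.mean(np.stack(bodies), axis=0)

# Example: two participants, with random values standing in for real tracking data
people = [np.random.rand(17, 2), np.random.rand(17, 2)]
avg = group_average_keypoints(people)  # feeds the collective visualization
```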
python3 -m pip install -r requirements.txt
- Kinect Camera
- ADXL345 accelerometer (see the serial-read sketch after this list)
- Arduino Nano
- DepthAI compatible device (OAK-1, OAK-D)
- Optional: LED lighting for ambient feedback
- Projection system for visual output
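The ADXL345 data is assumed to reach the host from the Arduino Nano over a serial link. The sketch below is a minimal illustration only; the port name, baud rate, and the comma-separated "ax,ay,az" message format are assumptions rather than the project's actual protocol:

```python
import serial  # pyserial

# Port name and baud rate are assumptions; adjust to your setup.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            # Assumed message format: "ax,ay,az" sent by the Arduino sketch
            ax, ay, az = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip malformed or partial lines
        print(ax, ay, az)  # feed these values into the visualization layer
```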
python3 demo.py
| Option | Description |
|---|---|
| `-e`, `--edge` | Enable Edge processing mode |
| `-m MODEL` | Select model (`thunder`/`lightning`) |
| `-i INPUT` | Input source (`rgb`/`rgb_laconic`/file path) |
| `-f FPS` | Set internal camera FPS |
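For example, two illustrative invocations using the options above (the flag combinations are sketches, not prescribed configurations):

python3 demo.py -e -m lightning
python3 demo.py -i rgb_laconic -f 30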
| Key | Function |
|---|---|
| Space | Pause |
| c | Toggle crop region |
| f | Show/hide FPS |
| q / ESC | Exit |
from MovenetDepthaiEdge import MovenetDepthai
from MovenetRenderer import MovenetRenderer

# MoveNet "thunder" model running the pipeline in Edge mode
pose = MovenetDepthai(input_src="rgb",
                      model="thunder",
                      score_thresh=0.2)
renderer = MovenetRenderer(pose)

while True:
    # Grab the next frame and the detected body keypoints
    frame, body = pose.next_frame()
    if frame is None:
        break
    # Overlay the detected skeleton on the frame
    frame = renderer.draw(frame, body)
    if renderer.waitKey(1) == 27:  # exit on ESC
        break
The cropping algorithm uses the body detected in frame N to determine the region of frame N+1 on which inference will run. The mode (Host or Edge) describes where this algorithm runs:
- In Host mode, the cropping algorithm runs on the host CPU. Only this mode allows images or video files as input. The flow of information between the host and the device is bi-directional: in particular, the host sends frames or cropping instructions to the device.
- In Edge mode, the cropping algorithm runs on the MyriadX. So, in this mode, all the functional bricks of MoveNet (inference, determination of the cropping region for next frame, cropping) are executed on the device. The only information exchanged are the body keypoints and optionally the camera video frame.
Note: in either mode, when using the color camera, you can choose to disable the sending of the video frame to the host, by specifying "rgb_laconic" instead of "rgb" as input source.
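A minimal sketch of how a script might switch between the two modes, mirroring the `-e/--edge` flag (the conditional import below assumes that `MovenetDepthai.py` and `MovenetDepthaiEdge.py` expose the same class interface, as the project structure suggests):

```python
import argparse

from MovenetRenderer import MovenetRenderer

parser = argparse.ArgumentParser()
parser.add_argument("-e", "--edge", action="store_true",
                    help="Run the cropping algorithm on the MyriadX (Edge mode)")
parser.add_argument("-i", "--input", default="rgb",
                    help="'rgb', 'rgb_laconic', or a file path (files require Host mode)")
args = parser.parse_args()

# Only the location of the cropping algorithm differs between the two classes.
if args.edge:
    from MovenetDepthaiEdge import MovenetDepthai
else:
    from MovenetDepthai import MovenetDepthai

pose = MovenetDepthai(input_src=args.input, model="thunder")
renderer = MovenetRenderer(pose)
```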
Our approach leverages:
- Multiple motion capture technologies
- Real-time visualization systems
- User-centered design principles
- Speculative design methodology
- Extensive scenario testing
- IDEO Model Cards for scenario development
- Tarot Cards of Tech for obstacle anticipation
- Body storming and interactive prototyping
embodied-movement/
├── src/
│   ├── demo.py
│   ├── MovenetDepthai.py
│   ├── MovenetDepthaiEdge.py
│   └── MovenetRenderer.py
├── docs/
│   └── paper.pdf
├── examples/
└── demo_scenarios/
- "The Inheritance" (2015) - Movement in large spaces and aesthetics of wide-ranging movements
- "Farbklangschichten" - Kinesthetic experiences with colors and sounds
- "Colored Shadows" - Movement representation through dancing shadows
- "The Treachery of Sanctuary" (2012) - Interactive shadow transformation
- Sonja Thiel, UC Berkeley
- Gyuyeon Jung, UC Berkeley
- Jared Mantell, UC Berkeley
- Marissa Maldonado, UC Berkeley
- Paloma Bondi, UC Berkeley
- Prof. Kimiko Ryokai, UC Berkeley
- Vivian Chan, UC Berkeley
- Christopher Sakuma, University of Pennsylvania
Thiel, S., Jung, G., Mantell, J., Maldonado, M., & Bondi, P. (2024). Embodied Movement: Exploring Digital Responsive Interactions. UC Berkeley School of Information.
This project is licensed under the MIT License - see the LICENSE file for details.
- Google Next-Generation Pose Detection with MoveNet and TensorFlow.js
- Katsuya Hyodo (Pinto) for model conversion expertise