
2024

Towards Multimodal Interaction with AI-Infused Shape-Changing UIs

Chenfeng (Jesse) Gao*, Wanli (Michael) Qian*, Richard Liu, Rana Hanocka, Ken Nakagaki

About

We present a proof-of-concept system exploring multimodal interaction with AI-infused Shape-Changing Interfaces. Our prototype integrates inFORCE, a 10x5 pin-based shape display, with AI tools for 3D mesh generation and editing. Users can create and modify 3D shapes through speech, gesture, and tangible inputs. We demonstrate potential applications including AI-assisted 3D modeling, adaptive physical controllers, and dynamic furniture. Our implementation, which translates text to point clouds for physical rendering, reveals both the potential and challenges of combining AI with shape-changing interfaces. This work explores how AI can enhance tangible interaction with 3D information and opens up new possibilities for multimodal shape-changing UIs.


*This project was a collaboration with Richard Liu and Prof. Rana Hanocka from 3DL (UChicago CS).
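A minimal sketch of one way a text-to-shape pipeline could hand generated geometry to pin-display hardware, assuming the AI model has already produced a point cloud: the points are binned into a 10x5 grid and the tallest point per cell sets that pin's height. The grid size, height range, and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): downsample an (N, 3) point cloud
# into a 10x5 height map that a pin-based shape display could render.
import numpy as np

PIN_COLS, PIN_ROWS = 10, 5     # assumed inFORCE grid resolution
MAX_PIN_HEIGHT_MM = 50.0       # hypothetical pin travel range

def point_cloud_to_pin_heights(points: np.ndarray) -> np.ndarray:
    """Map (x, y, z) points onto a PIN_ROWS x PIN_COLS grid of pin
    heights, keeping the maximum z value that falls in each cell."""
    xy = points[:, :2]
    z = points[:, 2] - points[:, 2].min()   # shift heights to start at 0

    # Normalize x/y into grid cell indices.
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    norm = (xy - mins) / np.maximum(maxs - mins, 1e-9)
    cols = np.minimum((norm[:, 0] * PIN_COLS).astype(int), PIN_COLS - 1)
    rows = np.minimum((norm[:, 1] * PIN_ROWS).astype(int), PIN_ROWS - 1)

    # Keep the tallest point per cell, then scale to the pin travel.
    heights = np.zeros((PIN_ROWS, PIN_COLS))
    np.maximum.at(heights, (rows, cols), z)
    if heights.max() > 0:
        heights = heights / heights.max() * MAX_PIN_HEIGHT_MM
    return heights
```

In practice the resulting height map would be streamed to the display's pin controllers and refined by the speech, gesture, and tangible inputs described above.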

Exhibitions
AxLab Members

Chenfeng (Jesse) Gao

Wanli (Michael) Qian

Ken Nakagaki

Publication
ACM UIST 2024 Poster

Towards Multimodal Interaction with AI-Infused Shape-Changing Interfaces

Chenfeng Gao, Wanli Qian, Richard Liu, Rana Hanocka, and Ken Nakagaki

Gallery