Coupled with knowledge of a remote partner's schedule and routine, this system provides a rich new background signal that can be absorbed without ever having to look at a smartphone screen.
The objective is to provide timely adaptations that maximize communication efficiency while minimizing the annoyance and disruption associated with today's notifications, ringtones, and alerts.
The haptic cues will be rendered at four different points on the soles of the feet through vibrotactile actuators. The shoes will also measure the force profile and movement dynamics using force-sensitive resistor (FSR) sensors and inertial measurement units, respectively.
This involves research into the integrated and interchangeable use of the haptic and auditory modalities in floor interfaces, and into the synergy of perception and action in capturing and guiding human walking.
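As an illustration of how such a sensing-and-actuation loop might work, the sketch below detects heel strikes from a normalized FSR force profile and assigns each strike to one of the four actuator sites. The function names, threshold value, and round-robin cue mapping are purely illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch: detect heel strikes from FSR readings and choose
# which of four vibrotactile actuators to pulse. Threshold and mapping
# are illustrative assumptions only.

HEEL_STRIKE_THRESHOLD = 0.6  # normalized force; illustrative value


def detect_heel_strikes(heel_fsr, threshold=HEEL_STRIKE_THRESHOLD):
    """Return sample indices where the heel FSR crosses the threshold upward."""
    strikes = []
    for i in range(1, len(heel_fsr)):
        if heel_fsr[i - 1] < threshold <= heel_fsr[i]:
            strikes.append(i)
    return strikes


def cue_site(strike_count, num_actuators=4):
    """Map successive strikes to one of the four actuator sites (round-robin)."""
    return strike_count % num_actuators


# Example: a normalized force profile covering two heel strikes
profile = [0.1, 0.2, 0.4, 0.7, 0.9, 0.5, 0.2, 0.1, 0.3, 0.8]
print(detect_heel_strikes(profile))  # → [3, 9]
```

In a real system the threshold crossing would run on streaming samples and the cue choice would depend on gait phase rather than a simple counter; the sketch only conveys the sense-then-actuate structure.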
rtER offers access to high-quality "live" data that may be visualized effectively both by responders in situ and by remote operators in dedicated control rooms. Its components will include multimodal data registration, interactive visualization capabilities, and live streaming of the integrated contents.
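One small aspect of multimodal registration is temporal alignment: interleaving timestamped samples from different sensors onto a common timeline before they are visualized together. The sketch below merges two such streams by timestamp; the stream contents and field layout are illustrative assumptions, not rtER's actual data model.

```python
import heapq

def merge_streams(*streams):
    """Merge timestamped (t, label, value) streams into one time-ordered list.

    Each input stream must already be sorted by timestamp; heapq.merge then
    interleaves them lazily without re-sorting everything.
    """
    return list(heapq.merge(*streams, key=lambda sample: sample[0]))


# Hypothetical example streams: video frames at 25 fps and a GPS fix
video = [(0.00, "video", "frame0"), (0.04, "video", "frame1")]
gps = [(0.02, "gps", (45.5, -73.6))]
print(merge_streams(video, gps))
# → [(0.0, 'video', 'frame0'), (0.02, 'gps', (45.5, -73.6)), (0.04, 'video', 'frame1')]
```

A production system would also need clock synchronization across devices and spatial registration; this sketch covers only the ordering step.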
Although other systems (e.g., HumanWare's Trekker and standard GPS tools) emphasize navigation from one specific location to another, typically accomplished by explicit turn-by-turn instructions, our goal is to use ambient audio to reveal the kind of information that visual cues such as neon signs provide to sighted users. Once users notice a point of interest, additional details are available on demand.
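The ambient-audio idea can be sketched as a proximity model: points of interest within earshot each contribute a cue whose level falls off with distance, rather than an explicit instruction. All names, the audible radius, and the linear falloff below are assumptions for illustration only.

```python
import math

AUDIBLE_RADIUS_M = 50.0  # illustrative "earshot" radius in meters


def ambient_cues(user_xy, pois, radius=AUDIBLE_RADIUS_M):
    """Return (name, gain) pairs for POIs within earshot.

    Gain is a simple linear falloff in [0, 1]: loud when near, silent at
    the audible radius. A real renderer would also spatialize each cue.
    """
    cues = []
    for name, (x, y) in pois.items():
        d = math.hypot(x - user_xy[0], y - user_xy[1])
        if d < radius:
            cues.append((name, round(1.0 - d / radius, 2)))
    return cues


# Hypothetical POIs in meters relative to the user
pois = {"cafe": (10.0, 0.0), "bookstore": (30.0, 30.0), "bank": (200.0, 0.0)}
print(ambient_cues((0.0, 0.0), pois))
# → [('cafe', 0.8), ('bookstore', 0.15)]
```

On-demand detail would then be a separate query triggered once the user orients toward a cue; the sketch covers only the ambient layer.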
The software has been used for a range of demanding applications including live concert streaming, remote mixing, collaborative performance, distance master classes, remote video interpreting of sign language, and rendering of a multi-screen uncompressed high-definition video "shared space".
This video (from January 2021) provides a brief overview of some of our current and planned research activities.