
Northwestern University engineers have developed MobilePoser, a full-body motion-capture system that leverages sensors already embedded in consumer mobile devices, eliminating the need for specialized rooms, expensive equipment, or bulky camera arrays. Developed by Karan Ahuja and his team, MobilePoser combines sensor data, machine learning, and physics to track a person’s full-body pose and global translation in real time. The technology opens new possibilities in gaming, fitness, and indoor navigation by making immersive experiences accessible without specialized hardware.

At the 2024 ACM Symposium on User Interface Software and Technology, Ahuja will unveil MobilePoser, a significant step toward practical mobile motion capture. Running in real time on mobile devices, MobilePoser achieves state-of-the-art accuracy through advanced machine learning and physics-based optimization. Ahuja, an expert in human-computer interaction and an assistant professor of computer science, believes MobilePoser can open up applications that were previously out of reach due to the high cost and complexity of traditional motion capture systems.

Traditional motion capture techniques involve actors wearing form-fitting suits with sensors in specialized rooms to create CGI characters such as Gollum in “Lord of the Rings” or the Na’vi in “Avatar.” However, these setups can cost upwards of $100,000, making them impractical for many applications. Systems like Microsoft Kinect rely on stationary cameras, limiting their use for mobile or on-the-go applications. To address these limitations, Ahuja’s team turned to inertial measurement units (IMUs) within smartphones and other devices, developing MobilePoser to accurately track body movement and orientation in real time with minimal equipment requirements.
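To illustrate how an IMU tracks orientation, the sketch below integrates gyroscope readings (angular velocity) into a rotation matrix using the Rodrigues rotation formula. This is a generic, simplified illustration of inertial orientation tracking, not MobilePoser's actual sensor-fusion code; real systems also fuse accelerometer and magnetometer data to correct gyroscope drift.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def integrate_gyro(R, omega, dt):
    """Advance orientation R by one gyroscope sample omega (rad/s)
    over dt seconds, via the Rodrigues rotation formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return R
    K = skew(omega / np.linalg.norm(omega))
    R_delta = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R @ R_delta

# Example: spin at 90 degrees/second about the vertical axis for one second,
# sampled at 100 Hz; the device's x-axis should end up along the y-axis.
R = np.eye(3)
for _ in range(100):
    R = integrate_gyro(R, np.array([0.0, 0.0, np.pi / 2]), 0.01)
print(R @ np.array([1.0, 0.0, 0.0]))
```

Because each step applies an exact rotation, the only error here is floating-point; with real gyroscopes, sensor noise and bias accumulate, which is why drift correction matters.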

MobilePoser uses a multi-stage artificial intelligence (AI) algorithm trained on a large dataset of synthesized IMU measurements to predict joint positions, rotations, walking speed, direction, and contact with the ground. By combining sensor data with AI algorithms and a physics-based optimizer, MobilePoser achieves a tracking error of just 8 to 10 centimeters, accuracy comparable to camera-based systems like Microsoft Kinect. The system adapts to whatever hardware is available, letting users move freely with different combinations of devices, such as a smartphone alone or paired with a smartwatch, with additional devices improving accuracy.
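The multi-stage structure described above can be sketched as a minimal pipeline: one stage maps a window of IMU features to joint positions, another predicts foot-ground contact, and a physics-inspired step uses contact to stabilize global translation. Everything here is a stand-in assumption, random matrices in place of trained networks, an arbitrary 24-joint skeleton and two-device setup, and a simple zero-velocity update rather than MobilePoser's actual optimizer.

```python
import numpy as np

NUM_DEVICES = 2           # assumed: phone + watch
CHANNELS_PER_DEVICE = 12  # assumed: accel + flattened orientation matrix
NUM_JOINTS = 24           # assumed SMPL-style skeleton

rng = np.random.default_rng(0)
# Random matrices stand in for the learned network stages.
W_pose = rng.normal(size=(NUM_DEVICES * CHANNELS_PER_DEVICE, NUM_JOINTS * 3))
W_contact = rng.normal(size=(NUM_DEVICES * CHANNELS_PER_DEVICE, 2))

def predict_pose(imu_window):
    """Stage 1: map flattened IMU features to per-joint 3D positions."""
    feats = imu_window.reshape(-1)
    return (feats @ W_pose).reshape(NUM_JOINTS, 3)

def predict_foot_contact(imu_window):
    """Stage 2: probability that each foot is planted on the ground."""
    logits = imu_window.reshape(-1) @ W_contact
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

def physics_refine(translation, velocity, contact_probs, dt=1 / 30):
    """Stage 3: dead-reckon global translation; zero the velocity when a
    foot is planted (a simple zero-velocity update to curb drift)."""
    if contact_probs.max() > 0.5:
        velocity = np.zeros(3)
    return translation + velocity * dt, velocity

# One frame of synthetic sensor data: (devices, channels)
imu_window = rng.normal(size=(NUM_DEVICES, CHANNELS_PER_DEVICE))
pose = predict_pose(imu_window)
contacts = predict_foot_contact(imu_window)
trans, vel = physics_refine(np.zeros(3), np.array([0.0, 0.0, 1.0]), contacts)
print(pose.shape, contacts, trans)
```

The zero-velocity update is the key physics idea: whenever a foot is known to be on the ground, integrated velocity error can be reset, which is one common way inertial systems keep global translation drift bounded.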

Beyond gaming, MobilePoser could transform health and fitness by letting users view their full-body posture and check their form while exercising. Physicians could use the technology to analyze patients’ mobility, activity level, and gait, providing valuable insights for monitoring and treatment. MobilePoser also offers possibilities for indoor navigation, where GPS, which works reliably only outdoors, falls short. By releasing their pre-trained models, data pre-processing scripts, and model training code as open-source software, Ahuja’s team aims to encourage other researchers to build on their work.

As MobilePoser becomes available for iPhone, AirPods, and Apple Watch users, Ahuja envisions a future where mobile devices serve as proactive assistants capable of detecting different activities and determining user poses. By leveraging the power of sensor data, machine learning, and physics-based optimization, MobilePoser represents a significant leap forward in mobile motion capture technology, offering a wealth of new opportunities for immersive experiences and innovative applications across various industries.
