First-of-its-kind wristband can track body’s posture in 3D using a tiny camera
Cornell researchers have developed the world's first wristband that can track full-body posture in 3D, thanks to a miniature camera and a customized deep neural network.
Called BodyTrak, the wearable could be a "game-changer" in "monitoring user body mechanics in physical activities where precision is critical," Cheng Zhang, assistant professor of information science and the paper’s senior author, said in a press release.
"Since smartwatches already have a camera, technology like BodyTrak could understand the user’s pose and give real-time feedback," Zhang said. "That’s handy, affordable, and does not limit the user’s moving area."
BodyTrak is one of many systems developed by the SciFiLab at Cornell. Previously, the group had developed deep-learning models to track silent-speech recognition, facial expressions, and hand and finger movements.
The team published their findings in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
The deep neural network supplements the miniature camera
BodyTrak comprises a dime-sized camera on the wrist, along with a deep neural network. The latter reads the camera's images, or "silhouettes," of the user's body as they move and reconstructs the 3D positions of 14 body joints in real time.
In effect, the model fills in what the partial and blurry images captured by the camera leave out.
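The paper does not publish its network architecture in this article, but the input/output contract it describes (a wrist-camera silhouette in, 3D coordinates for 14 joints out) can be sketched minimally. The single linear layer below is a hypothetical stand-in for the customized deep model, shown only to illustrate the data shapes:

```python
import numpy as np

N_JOINTS = 14  # arms, legs, torso, and head, per the paper

def infer_pose(silhouette: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for BodyTrak's deep network: map a flattened
    wrist-camera silhouette to the 3D positions of 14 body joints.

    A single linear layer replaces the real customized deep neural
    network; only the input/output shapes match the system described.
    """
    features = silhouette.astype(np.float32).ravel()  # (H*W,)
    joints = weights @ features                       # (42,) = 14 joints * xyz
    return joints.reshape(N_JOINTS, 3)

# Example: a 32x32 binary silhouette and randomly initialized weights.
rng = np.random.default_rng(0)
silhouette = (rng.random((32, 32)) > 0.5).astype(np.uint8)
weights = rng.normal(size=(N_JOINTS * 3, 32 * 32)).astype(np.float32)

pose = infer_pose(silhouette, weights)
print(pose.shape)  # (14, 3): one xyz position per joint
```

In the real system this mapping is learned from training data, so partial views of the body are enough to recover the full pose.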
The researchers conducted a user study with nine participants. Each of them performed 12 daily activities in different scenarios, all with multiple camera settings on the wrist. The results revealed that the BodyTrak system could indeed comprehend the entire body (3D positions of 14 joints) with an average error of 6.9 cm using only one miniature RGB camera on the wrist pointing towards the body.
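The 6.9 cm figure is an average positional error across the 14 tracked joints. A minimal sketch of such a metric, assuming Euclidean distance averaged over frames and joints (a common convention in pose-estimation work, not confirmed by this article), could look like:

```python
import numpy as np

def mean_per_joint_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Average Euclidean distance between predicted and ground-truth
    joint positions, in the same units as the inputs.

    pred, truth: arrays of shape (frames, n_joints, 3).
    """
    return float(np.linalg.norm(pred - truth, axis=-1).mean())

# Example: predictions offset from the ground truth by 6.9 cm on one axis.
truth = np.zeros((100, 14, 3))
pred = truth.copy()
pred[..., 0] += 6.9

print(mean_per_joint_error(pred, truth))  # 6.9
```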
"Our research shows that we don’t need our body frames to be fully within camera view for body sensing. If we are able to capture just a part of our bodies, that is a lot of information to infer to reconstruct the full body," said Hyunchul Lim, a doctoral student in the field of information science and the paper’s lead author.
The researchers also took privacy into account, which is imperative for body-worn sensing devices. According to Zhang and Lim, "BodyTrak mitigates privacy concerns for bystanders since the camera is pointed toward the user's body and collects only partial body images of the user."
The researchers acknowledge that today's smartwatches lack cameras powerful enough, and the battery life required, for full-body sensing, but they expect future devices will.
Study abstract: In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full body poses on a wristband. It only requires one miniature RGB camera to capture the body silhouettes, which are learned by a customized deep learning model to estimate the 3D positions of 14 joints on arms, legs, torso, and head. We conducted a user study with 9 participants in which each participant performed 12 daily activities such as walking, sitting, or exercising, in varying scenarios (wearing different clothes, outdoors/indoors) with a different number of camera settings on the wrist. The results show that our system can infer the full body pose (3D positions of 14 joints) with an average error of 6.9 cm using only one miniature RGB camera (11.5mm x 9.5mm) on the wrist pointing towards the body. Based on the results, we discuss the possible applications, challenges, and limitations to deploy our system in real-world scenarios.