World’s largest multimodal dataset for living space AI development


The FINANCIAL — Panasonic Corporation and the Stanford Vision and Learning Lab (SVL) in the US have compiled the world’s largest multimodal dataset for living-space AI development, called Home Action Genome, and made it available to researchers. The two parties are also hosting the International Challenge on Compositional and Multimodal Perception (CAMP), a competition for developing action recognition algorithms using the dataset.

Home Action Genome is an image and measurement dataset built from multiple sensor streams, including camera and heat-sensor data, recorded in residential settings where everyday activities are acted out. Each scene is annotated with labels describing the human actions it contains.
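To make the idea of a multimodal, per-scene-annotated sample concrete, here is a minimal sketch of how a researcher might assemble one. It is illustrative only: the directory layout, file names (annotations.json, sensors.csv), and the HomeActionSample class are assumptions for this example, not the dataset’s actual release format or API.

```python
import json
from dataclasses import dataclass
from pathlib import Path

# Assumed layout (hypothetical, not the official release format):
# each scene directory holds camera frames, an auxiliary sensor log
# (e.g. heat-sensor readings), and a JSON file of action annotations.

@dataclass
class HomeActionSample:
    scene_id: str
    rgb_frames: list[Path]   # paths to camera frames
    sensor_log: Path         # time series from non-camera sensors
    actions: list[dict]      # e.g. [{"label": ..., "start": ..., "end": ...}]

def load_scene(scene_dir: Path) -> HomeActionSample:
    """Assemble one multimodal sample from the assumed directory layout."""
    with open(scene_dir / "annotations.json") as f:
        actions = json.load(f)["actions"]
    return HomeActionSample(
        scene_id=scene_dir.name,
        rgb_frames=sorted(scene_dir.glob("rgb/*.jpg")),
        sensor_log=scene_dir / "sensors.csv",
        actions=actions,
    )

if __name__ == "__main__":
    root = Path("home_action_genome")  # placeholder dataset root
    for scene_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        sample = load_scene(scene_dir)
        print(sample.scene_id, len(sample.rgb_frames), len(sample.actions))
```

The point of the sketch is the pairing: every scene bundles several synchronized sensor modalities with action labels, which is what lets algorithms in the CAMP challenge learn to recognize actions from more than camera footage alone.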

Most living-space datasets released to date have been small and composed largely of audio and image data. The new dataset combines Panasonic’s data measurement technology with SVL’s annotation expertise to produce the world’s largest multimodal living-space dataset.

AI researchers can apply the dataset to machine learning and use it in research on AI that supports people in the home.

To realize individualized Lifestyle Updates that make daily living better, Panasonic will accelerate AI development for the home by promoting collaborative use of the dataset.

