Facebook captured more than 2,000 hours of first-person video to train next-generation A.I.

  • Facebook has announced a research project in which it has collected 2,200 hours of first-person footage from around the world to train the next generation of AI models.
  • The project is called Ego4D, and could prove crucial to Facebook’s Reality Labs division, which is working on smart glasses, augmented reality, and virtual reality projects.
  • Facebook said it would make the Ego4D data set publicly available to researchers in November.

Facebook on Thursday announced a research project in which it collected 2,200 hours of first-person footage from around the world to train the next generation of artificial intelligence models.

The project is called Ego4D, and it could prove crucial to Facebook’s Reality Labs division, which is working on a number of projects that could benefit from AI models trained on video footage shot from a human’s perspective. This includes smart glasses, such as the Ray-Ban Stories that Facebook released last month, and virtual reality, in which Facebook has invested heavily since its $2 billion acquisition of Oculus in 2014.

The footage can teach artificial intelligence to understand or recognize things in the real or virtual world as viewed from a first-person perspective, such as through a pair of smart glasses or an Oculus headset.

Facebook said it would make the Ego4D data set publicly available to researchers in November.

“This release, which is an open data set and research challenge, is going to catalyze progress internally for us, but also externally and broadly in the academic community, [allowing] other researchers to get behind these new problems, but now be able to do it in a more meaningful way and on a larger scale,” Kristen Grauman, Facebook’s lead research scientist, told Businesshala.

Grauman said the data set can be used in AI models that train robot-like technology to understand the world more quickly.

“Traditionally a robot learns by doing stuff in the world or is literally shown by hand how to do things,” Grauman said. “There are opportunities to let them learn from the videos only from our own experience.”

A consortium of Facebook and 13 university partners relied on more than 700 participants in nine countries to capture first-person footage. Facebook says Ego4D contains 20 times more footage than any other data set of its kind.

Facebook’s university partners include Carnegie Mellon in the US, the University of Bristol in the UK, the National University of Singapore, the University of Tokyo in Japan and the International Institute of Information Technology in India.

The footage was captured in the US, UK, Italy, India, Japan, Singapore and Saudi Arabia. Facebook said it hopes to expand the project to other countries, including Colombia and Rwanda.

“An important design decision for this project is that we want the partners to be the foremost experts in the field, interested in these problems and motivated to pursue them but also having geographic diversity,” Grauman said.

The announcement of Ego4D comes at an interesting time for Facebook.

The company continues to intensify its efforts in hardware. Last month, it released the $299 Ray-Ban Stories, its first smart glasses. And in July, Facebook announced the formation of a product team specifically to work on “Metaverse,” a concept that involves creating a digital world in which multiple people can reside at the same time.

However, over the past month, Facebook has been hit by a flurry of news stemming from internal company research that was leaked by Frances Haugen, a former Facebook product manager turned whistleblower. Among the research released were slides that showed Instagram was harmful to teens’ mental health.

Footage was captured using off-the-shelf devices such as GoPro cameras and Vuzix smart glasses.

For the sake of privacy, Facebook said participants were instructed to avoid capturing personally identifying details when collecting footage inside the home, including people’s faces, conversations, tattoos and jewelry. Facebook said it removed personally identifiable information from the videos and blurred bystanders’ faces and vehicle license plate numbers. The company said audio has also been removed from many videos.

“Step No. 1 was a very thorough and important process for all of the university partners who archived this video to create a policy for proper archiving,” Grauman said.
