New York, Aug 26 (IANS): In a bid to help developers advance the field of artificial intelligence (AI)-powered vision, Facebook has made its latest computer vision research available to all for free.
Facebook AI Research (FAIR) is putting three sets of open-source code on the web-based repository hosting service GitHub.
"We're making the code for DeepMask SharpMask as well as MultiPathNet - along with our research papers and demos related to them - open and accessible to all, with the hope that they'll help rapidly advance the field of machine vision," the social network giant said in a post on Friday.
"As we continue improving these core technologies, we'll continue publishing our latest results and updating the open source tools we make available to the community," it added.
DeepMask figures out if there's an object in the image.
SharpMask refines the output of DeepMask, generating higher-fidelity masks that more accurately delineate object boundaries.
MultiPathNet attempts to identify what those objects are.
Together, the three make up a visual-recognition system that is able to understand images at the pixel level.
"When humans look at an image, they can identify objects down to the last pixel. At Facebook AI Research (FAIR) we're pushing machine vision to the next stage - our goal is to similarly understand images and objects at the pixel level," the post further added.
In summary, the Facebook object detection system follows a three-stage procedure.
DeepMask generates initial object masks, SharpMask refines these masks, and finally MultiPathNet identifies the objects delineated by each mask.
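For readers who want a concrete picture of how the three stages fit together, the following is a minimal Python sketch of the pipeline's data flow. The function and class names here are hypothetical placeholders for illustration only; they are not the API of the code Facebook released on GitHub.

```python
# Illustrative sketch of the three-stage detection pipeline described above.
# All names (Mask, deepmask_propose, sharpmask_refine, multipathnet_classify)
# are hypothetical placeholders, not FAIR's released interfaces.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Mask:
    pixels: set    # (row, col) positions covered by the mask
    score: float   # objectness confidence


def deepmask_propose(image) -> List[Mask]:
    """Stage 1 (DeepMask): propose coarse masks for regions likely to contain objects."""
    return []  # placeholder: a real model would emit candidate masks with scores


def sharpmask_refine(image, masks: List[Mask]) -> List[Mask]:
    """Stage 2 (SharpMask): refine coarse masks so they follow object boundaries."""
    return masks  # placeholder: a real model would sharpen each mask at the pixel level


def multipathnet_classify(image, masks: List[Mask]) -> List[str]:
    """Stage 3 (MultiPathNet): assign a category label to the object in each mask."""
    return ["unknown"] * len(masks)  # placeholder: a real model would name each object


def detect_objects(image) -> List[Tuple[Mask, str]]:
    coarse = deepmask_propose(image)                 # where might objects be?
    refined = sharpmask_refine(image, coarse)        # tighten the mask boundaries
    labels = multipathnet_classify(image, refined)   # what are those objects?
    return list(zip(refined, labels))
```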
There are wide-ranging potential uses for the visual-recognition technology.
"Building off this existing computer vision technology and enabling computers to recognize objects in photos, for instance, it will be easier to search for specific images without an explicit tag on each photo," Facebook said.
People with vision loss, too, will be able to understand what is in a photo their friends share because the system will be able to tell them, regardless of the caption posted alongside the image, it added.
The next challenge is to apply these techniques to video, where objects are moving, interacting and changing over time.