How do you account for occlusions or movement away from the camera?

Our algorithms are trained on our own data sets of individually annotated frames that are representative of images encountered in real test scenarios - what we call 'in the wild'. By collecting our data sets in the wild, we maximize the chances that the algorithms can read faces even in difficult situations: low-quality images or cameras; poor lighting; parts of the face heavily shadowed or hidden; people too far from or too close to the camera; large or thick glasses; or a lot of facial hair. An algorithm trained on 'wild' data will perform more reliably 'in the wild' than it otherwise would.

Additionally, we enforce strict quality filters to ensure that only data from high-quality sessions is included in the final data set. If occlusions or movement away from the camera interfere with the quality of the recording, the session is excluded.
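For illustration, the session-level filtering described above might look like the following sketch. The function name, data layout, and the 90% threshold are assumptions for the example, not details of the actual pipeline.

```python
# Hypothetical sketch of a session-level quality filter: a session is kept
# only if enough of its frames show an unoccluded face inside the frame.
# Field names and the 0.9 threshold are illustrative assumptions.

def is_high_quality(session):
    """Return True if the session passes the frame-level quality checks."""
    frames = session["frames"]
    usable = [
        f for f in frames
        if not f["face_occluded"] and f["face_in_frame"]
    ]
    # Exclude the session if occlusions or movement out of view
    # affect more than 10% of its frames.
    return len(usable) / len(frames) >= 0.9

# Example: one clean session and one heavily occluded session.
sessions = [
    {"frames": [{"face_occluded": False, "face_in_frame": True}] * 10},
    {"frames": [{"face_occluded": True, "face_in_frame": True}] * 10},
]
kept = [s for s in sessions if is_high_quality(s)]
```

In this toy example only the first session survives the filter; the occluded one is dropped before the data set is finalized.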
