Our technology tracks 49 key points on the face, mainly around the eyes, nose and mouth. These 49 points are used to build a 3D mesh from the 2D webcam image. More information about how we do this can be found in our white paper.
Where automated facial coding really shines in comparison to traditional market research methods is in providing the unbiased, unfiltered reactions of consumers anywhere in the world, cost effectively and efficiently.
- Emotions are universal: our algorithms detect the same six basic emotions consistently across people, regardless of demographic.
- Facial coding is an immediate and passive measurement: the response measured is an organic, spontaneous and authentic evaluation of emotions. In traditional market research, consumers often can't describe how they felt accurately, can't remember how they felt at exact moments, or just don't care to share how they felt! Emotion measurement sidesteps these issues.
- A webcam and a connected device are the only requirements: our industry leading classifiers for automated facial coding are robust enough to work through a standard webcam in people's homes. This means that we collect results at scale and tap into a huge number of markets at the click of a button. This approach makes our services quick and efficient - for most markets, our turnaround time is 48 hours.
With over 90% of human decision making being driven by the subconscious, getting to the bottom of how your audience really feels is invaluable. Learn more about the value of examining emotional responses here.
Yes, here is the link to download our white paper. Our team of acclaimed scientists, experts and researchers, led by our internationally-renowned scientific advisors, Professors Maja Pantic and Jeffrey Cohn, are constantly working on pushing the boundaries of facial coding technology.
Long-standing research has established that there are six basic emotions that are universally recognised by humans - happiness, surprise, anger, sadness, disgust, and fear. The basic emotions don't vary in their expression, so we can accurately train computer algorithms to recognise them using computer vision and machine learning techniques.
The computer is fed a series of training instances – sets of images representing each emotion. Using these data sets, the computer learns to associate the image with the emotion label, and is thus able to assign these labels to unseen data - images it hasn't encountered before, such as new recordings of people watching video content.
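As a rough sketch of this supervised-learning step, the toy nearest-centroid classifier below learns to associate feature vectors with emotion labels and then labels an unseen sample. The synthetic feature vectors and cluster layout are illustrative stand-ins for features extracted from real labelled images, not our production classifiers.

```python
# Toy sketch of the training step: synthetic feature vectors stand in
# for features extracted from labelled emotion images.
import numpy as np

EMOTIONS = ["happiness", "surprise", "anger", "sadness", "disgust", "fear"]
rng = np.random.default_rng(0)

# Training instances: one cluster of feature vectors per emotion label.
train = {e: rng.normal(loc=i * 3.0, scale=0.5, size=(50, 10))
         for i, e in enumerate(EMOTIONS)}

# "Learning" here is simply storing the mean feature vector per emotion.
centroids = {e: X.mean(axis=0) for e, X in train.items()}

def classify(sample):
    # Assign the label whose learned centroid is nearest in feature space.
    return min(centroids, key=lambda e: np.linalg.norm(sample - centroids[e]))

# Unseen data: a new sample drawn near the "surprise" cluster.
new_sample = rng.normal(loc=1 * 3.0, scale=0.5, size=10)
print(classify(new_sample))  # surprise
```

Real systems use far richer features and models, but the principle is the same: labelled examples in, labels for unseen data out.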
Here's how it works: viewers' responses are recorded through their webcams, with their consent, while they watch video content. These recordings are streamed to our cloud servers, where they're securely processed - low quality recordings are filtered out, facial expressions are analysed using our algorithms, and the results are aggregated and reported on our online dashboard in near-real time.
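The filter-then-aggregate stages of that pipeline can be sketched as follows. The field names, quality threshold and 0-100 score scale are assumptions for illustration, not our platform's actual schema.

```python
# Hedged sketch of the processing pipeline: drop low-quality recordings,
# then aggregate per-frame emotion scores across the remaining viewers.
import statistics

recordings = [
    {"id": "r1", "quality": 0.9, "happiness": [10, 40, 70]},
    {"id": "r2", "quality": 0.2, "happiness": [0, 0, 10]},   # too dark/blurry
    {"id": "r3", "quality": 0.8, "happiness": [30, 50, 90]},
]

QUALITY_THRESHOLD = 0.5  # assumed cut-off for usable footage

usable = [r for r in recordings if r["quality"] >= QUALITY_THRESHOLD]

# Aggregate: mean happiness across viewers at each moment of the video.
timeline = [statistics.mean(scores)
            for scores in zip(*(r["happiness"] for r in usable))]
print(timeline)
```

The aggregated timeline is what ultimately appears on the dashboard, aligned frame by frame with the video content.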
To see a simplified version of emotion tracking technology in action, download our Emotion Booth App here.
For full details on the ins-and-outs of the science behind our technology, download our whitepaper here.
Yes, the validity of the science behind our platform is at the core of what we do. We have some of the most acclaimed scientists in the field of affective computing working on our shape-based approach to tracking emotions, based on Dr Ekman's widely-accepted theory of the cross-cultural universality of emotions. You can read both our technological white paper and Ekman's research on the subject.
Yes, the platform creates an individual 'mean face shape' for each respondent based on their data during the session, and uses that personalised definition of a neutral face as a base from which to measure any relevant variation.
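A small numpy sketch of that personalised-baseline idea: average a respondent's landmark positions over the session to obtain a neutral "mean face shape", then measure each frame as deviation from that baseline. The random 49x2 landmark arrays below are stand-ins for real tracked key points.

```python
# Illustrative per-respondent baseline: mean of 49 (x, y) face key points
# over a session, with per-frame deviation measured against it.
import numpy as np

rng = np.random.default_rng(1)
session = rng.normal(size=(300, 49, 2))   # 300 frames of 49 (x, y) key points

mean_shape = session.mean(axis=0)          # the respondent's neutral baseline

# Per-frame expressiveness: total landmark displacement from the baseline.
deviation = np.linalg.norm(session - mean_shape, axis=(1, 2))
print(deviation.shape)  # one deviation score per frame
```

Because the baseline is computed per respondent, a naturally downturned mouth or heavy brow is not mistaken for an expression; only movement away from that individual's neutral counts.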
Yes, in addition to the six basic emotions - happiness, surprise, fear, sadness, disgust and anger - we also calculate measures of engagement and valence, and a general negative emotion classifier.
- Engagement: a measure of whether participants have any expressive reaction to a stimulus.
- Valence (Net Positivity): a measure of whether a reaction is more positive or negative, calculated by subtracting Negative from Positive. This measure helps to elucidate the emotional “tenor” of the viewing experience.
- Negative (Net Negativity): a measure of whether participants are showing an emotion classified as negative.
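The Valence calculation described above - Positive minus Negative at each time point - can be illustrated in a few lines. The field names and score scale are assumptions for the example, not our platform's actual schema.

```python
# Illustrative Valence (Net Positivity): Positive minus Negative per frame.
frames = [
    {"positive": 0.6, "negative": 0.1},  # a broadly positive moment
    {"positive": 0.2, "negative": 0.5},  # a broadly negative moment
]

valence = [f["positive"] - f["negative"] for f in frames]
print(valence)  # roughly 0.5 then -0.3: positive tenor, then negative
```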
For further detail on each of our metrics, visit our Glossary.
Yes, in addition to the 6 basic emotions we report on 3 proprietary emotion metrics: Engagement, Valence, and Negative.
We also report on four proprietary EmotionAll® Score metrics: Attraction, Retention, Engagement and Impact. These are derived from academic literature, including the work of Daniel Kahneman, Teixeira et al., and our own extensive experience. Download the PDFs:
- Emotion-induced engagement in internet video ads (Thales Teixeira, Michel Wedel and Rik Pieters)
- Why, When and How Much to Entertain Consumers in Advertisements? (Thales Teixeira, Rosalind Picard and Rana el Kaliouby)
- Evaluation by Moments: Past and Future (Daniel Kahneman)
For further detail on each of our proprietary emotion metrics and EmotionAll® Score metrics, visit our EmotionAll® FAQ.
No, there is no specific required threshold of evoked emotion. Expressive signal thresholds do vary between individuals, but by aggregating the data over a large sample size, we can give a reliable account of people's emotional response.
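A toy simulation makes the aggregation point concrete: individual expressiveness varies around an underlying response, but the mean over a large sample stays close to that underlying value. The numbers below are illustrative, not measured data.

```python
# Toy illustration: individual expressive thresholds vary, but the mean
# over a large sample recovers the underlying response reliably.
import numpy as np

rng = np.random.default_rng(2)
true_response = 0.4
# Each respondent's measured signal = underlying response + individual noise.
individual = true_response + rng.normal(scale=0.3, size=500)

# Single respondents can be far off, but the sample mean lands close.
print(round(individual.mean(), 2))
```

This is the usual law-of-large-numbers argument: noise with per-person spread of 0.3 shrinks to a standard error of roughly 0.3/√500 ≈ 0.013 at the sample level.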
Yes, we are increasing the number of metrics to best meet the specific needs of each client. Beyond the basic emotions, we are developing metrics for cognitive-emotional states such as confusion and boredom. We are also working on metrics that predict sales and other product performance measures directly from measures of facial movement or appearance.