
Study Knowhow - Redirect Studies


  • Guide to Redirect Testing

    A full redirect method can be used to send participants from your survey or platform to our emotion measurement tests, and back again.

    Have Your Test Link Created

    The first step to running a redirect study is to have a test link created for you. All of our test URLs take the form https://collect.realeyesit.com/XXXXXX, where XXXXXX is replaced with a unique, study-specific hash. To have your test link created, log in to your dashboard account and upload your video(s) to our Media Library. From there, reach out to our team with the name of the video in the Media Library and any other test requirements. We will then create the test link and send it back to you.

    Pass Unique Participant IDs

    When redirecting participants to the test URL, a unique participant ID must be attached and passed as a URL variable called 'CustomParticipantID'. Find the instructions for passing unique participant IDs here.
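
    As a minimal sketch (the XXXXXX hash and the participant ID value below are placeholders), the link can be built like this:

    // Minimal sketch: append a unique participant ID to the Realeyes test link.
    // "XXXXXX" stands in for your study-specific hash; the ID value is illustrative.
    function buildTestLink(participantId: string): string {
      const url = new URL("https://collect.realeyesit.com/XXXXXX");
      url.searchParams.set("CustomParticipantID", participantId); // name is case sensitive
      return url.toString();
    }

    console.log(buildTestLink("panelist-12345"));
    // => https://collect.realeyesit.com/XXXXXX?CustomParticipantID=panelist-12345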

    Redirect Participants Back After Emotions Measured

    Once participants either complete or fail to complete the emotion measurement test, they will be redirected back to your platform. Between 1 and 3 redirect URLs can be used to reflect the 3 potential participant flows in an emotion measurement test. These are Complete, Failed, & Declined, and the URLs can be edited directly via our Dashboard.

    1. Visit: https://delivery.realeyesit.com/Collections/Index
    2. This page will show all the Collections associated with your user account.
    3. Select the Collection of interest by clicking on the corresponding ‘Edit’ button on the right-hand side.
    4. The edit page will open, allowing you to input or edit various aspects of the study, such as the redirect URLs.
    5. Once finished, click the ‘Save’ button in the lower left-hand corner.


    As mentioned, we can accept up to 3 redirects, based on the 3 participant end states: Complete, Failed & Declined.

    • Complete: This refers to a participant who fully completed the test. They granted access to their webcam, watched all of the test media, and had their results saved to our servers.
    • Failed: This refers to a participant who failed to complete the test. This is usually due to a hardware or software issue, such as not having a webcam or using an outdated, incompatible browser.
    • Declined: This refers to a participant who declined to take the test. They either did not grant access to their webcam, or they quit the test before it had finished.

    In terms of redirects, this functionality is useful for monitoring participants' end states and retention rates. For instance, if a higher-than-average number of participants are declining to take the test, it might be wise to change the language of the invitation or of the page that precedes the emotion measurement test in the survey.

    Passing URL Variables

    There is an instructional panel on the right-hand side of the Collection’s ‘Edit’ page that explains which Realeyes variables are available to pass back, as well as how to do so via the redirects. It is also possible to send and have passed back any other variables, or to pass back static variables.

    However, any variables sent to us beyond CustomParticipantID will NOT be recorded permanently. They are stored temporarily and deleted once the participant’s session has terminated. Therefore, we can accept and pass back any number of URL variables, but CustomParticipantID is the only variable that we store permanently.

    Imagine that the following URL variables were required:

    • CustomParticipantID
    • Survey Platform ID (SPI)
    • State (1 – 3 depending on whether they completed, failed or declined the test)

    To have the three variables above passed back, the redirects will need to be as follows:

    • Complete: http://survey.com?ID={CustomParticipantID}&SPI={SPI}&State=1
    • Failed: http://survey.com?ID={CustomParticipantID}&SPI={SPI}&State=2
    • Declined: http://survey.com?ID={CustomParticipantID}&SPI={SPI}&State=3
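
    On the receiving end, the survey platform reads these variables back out of the redirect URL. A minimal sketch, using the example names (ID, SPI, State) from the URLs above:

    // Sketch: read the variables Realeyes passes back on redirect.
    // The names ID, SPI, and State are the illustrative ones used above.
    const endStates: Record<string, string> = { "1": "Complete", "2": "Failed", "3": "Declined" };

    function readRedirect(redirectUrl: string) {
      const params = new URL(redirectUrl).searchParams;
      return {
        participantId: params.get("ID"),
        spi: params.get("SPI"),
        state: endStates[params.get("State") ?? ""] ?? "Unknown",
      };
    }

    console.log(readRedirect("http://survey.com?ID=abc123&SPI=789&State=1"));
    // => { participantId: 'abc123', spi: '789', state: 'Complete' }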

    The variables could have any name; the ID, SPI and State variables in the above three URLs are simply examples. However, there are a few points to remember:

    • All variables are case sensitive when being referenced within {curly brackets}.
    • The Realeyes variable CustomParticipantID must be named ‘CustomParticipantID’ in the collection URL with which participants arrive at the Realeyes test.

    Any additional variables that need to be passed and passed back must be named identically when sent and when referenced. So if the collection URL has the variable ‘SPI’, then to reference it in the redirect it must be ‘variableName={SPI}’. Using ‘variableName={spi}’ will not work.

    Before launching your study, be sure to run a couple of tests to ensure the redirects are working correctly and any URL variables are being passed. Once you have tested, reach out to us for confirmation that we have collected the correct test CustomParticipantID value(s).

    If all testing is successful, you are ready to launch! Be sure to send a quick note to us just before launching so we can remove any test sessions and set the test to 'Live'.

  • Guide to Monitoring Data Collection Progress

    You can monitor the data collection progress directly from the 'Collections' page of our dashboard.

    First, find the name of the collection that you are interested in looking at. This can be done by searching for the specific collection name in the first column. 


    Once you have found the collection of interest, you can view the collection stats directly in the table. The table includes:

    • Collection: Name of your collection
    • URL: The unique hash used in your test link
    • Campaign: Name of your campaign (Live measurement only) 
    • Media: Name(s) of the media the participant watched
    • Level: Allows you to filter based on the Campaign, Collection, or Media level
    • State: The state of the collection (Live, Completed etc.)
    • Seen: Number of sessions that landed on the Realeyes platform
    • Capable: Number of sessions that passed the capability checks
    • Prompted: Number of sessions that were successfully prompted to give webcam access
    • Agreed: Number of sessions that gave webcam access
    • Playback Started: Number of sessions for which the media started playing
    • Recorded: Number of sessions for which the participant completed watching the video(s)
    • Processed: Number of sessions for which the participant's recording has been processed
    • Delivered: Number of sessions for which the participant's emotion results were included in the final data set
    • Created: Date the collection was created

     

    You can switch between counts and percentages using the radio buttons in the upper right corner.


    It is also possible to check the final status of each individual session ID to see whether it was successful or whether it failed. To do so, click the three dots on the far right side of the table.


    From there, select one of the export options depending on your needs:

    • Export Sessions: Exports all individual sessions. A single participant ID may have multiple sessions if they interacted with the test multiple times (such as if they failed to enable their webcam when they first landed on the test link).
    • Export Participants: Exports the final list of participants. Only 1 ID per participant will be listed, regardless of whether the participant had multiple attempts at viewing the video.

    The exports include:

     

    Participant Export

    • DataCollectionStructureID - Realeyes ID for your collection
    • DataCollectionStructureName - Name of your collection
    • CampaignID - Realeyes ID for your campaign (Live measurement only)
    • CampaignName - Name of your campaign (Live measurement only)
    • SourceMediaNames - Name(s) of the media the participant watched
    • ParticipantID - The Realeyes participant ID
    • IdentityProviderKey - The external ID provided by you
    • FirstSeen - Date/time the participant was first seen on the platform
    • LastSeen - Date/time the participant was last seen on the platform
    • Sessions - Number of sessions associated with the participant
    • Live - 1 or 0 designating if the participant watched the collection in the 'live' state or not
    • Capable - 1 or 0 designating if the participant passed the capability checks
    • Prompted - 1 or 0 designating if the participant was successfully prompted to give webcam access
    • AgreedToRecord - 1 or 0 designating if the participant gave webcam access or not
    • PlaybackStarted - 1 or 0 designating if the media started playing
    • Recorded - 1 or 0 designating if the participant completed watching the video(s) or not
    • Processed - 1 or 0 designating if the participant's recording has been processed or not
    • EmotionsRead - 1 or 0 designating if emotions were detected in the participant's recording
    • IncludedInAnalysis - 1 or 0 designating if the participant's emotion results were included in the final data set or not
    • SessionStates - The end state(s) of the participant's session(s)
    • EndReasons - The description of the participant's end state(s)
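
    Because the 1/0 flags above form a funnel, a participant export can be turned into retention percentages in a few lines of code. A minimal sketch, assuming a simple comma-separated export named participants.csv (hypothetical file name) with a header row matching the column names above and no quoted fields containing commas:

    // Compute funnel percentages from an 'Export Participants' CSV.
    import { readFileSync } from "fs";

    const steps = ["Capable", "Prompted", "AgreedToRecord", "PlaybackStarted",
                   "Recorded", "Processed", "IncludedInAnalysis"];

    const [header, ...rows] = readFileSync("participants.csv", "utf8").trim().split("\n");
    const cols = header.split(",");
    const records = rows.map(r => r.split(","));

    for (const step of steps) {
      const i = cols.indexOf(step);
      const hits = records.filter(r => r[i] === "1").length;
      console.log(`${step}: ${hits}/${records.length} (${(100 * hits / records.length).toFixed(1)}%)`);
    }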

     

    Session Export

    • DataCollectionStructureID - Realeyes ID of your collection
    • DataCollectionStructureName - Name of your collection
    • CampaignID - Realeyes ID for your campaign (Live measurement only)
    • CampaignName - Name of your campaign (Live measurement only)
    • SourceMediaNames - Name(s) of the media the participant watched
    • SessionID - The Realeyes session ID
    • ParticipantID - The Realeyes participant ID
    • IdentityProviderKey - The external ID provided by you
    • IdentityProviderName - The source name of the external ID provided by you
    • RecorderBackend - The name of the backend source of the recorder used to record the webcam video
    • Browser - The detected browser of the participant
    • AgeGroup - The age group of the participant (if this information has been provided to Realeyes)
    • Gender - The gender of the participant (if this information has been provided to Realeyes)
    • Country - The detected country of the participant
    • Seen - Date/time the participant was first seen on the platform
    • Live - 1 or 0 designating if the participant watched the collection in the 'live' state or not
    • Capable - 1 or 0 designating if the participant passed the capability checks
    • Prompted - 1 or 0 designating if the participant was successfully prompted to give webcam access
    • AgreedToRecord - 1 or 0 designating if the participant gave webcam access or not
    • PlaybackStarted - 1 or 0 designating if the media started playing
    • Recorded - 1 or 0 designating if the participant completed watching the video(s) or not
    • Processed - 1 or 0 designating if the participant's recording has been processed or not
    • EmotionsRead - 1 or 0 designating if emotions were detected in the participant's recording
    • IncludedInAnalysis - 1 or 0 designating if the participant's emotion results were included in the final data set or not
    • SessionStates - The end state(s) of the participant's session(s)
    • EndReasons - The description of the participant's end state(s)
  • How do you track participants between platforms?

    In order to track participants between platforms we must receive and pass back unique Participant IDs. This helps us to tag participants from outside our systems in order to match up the emotion data we collect with any external demographic or survey data that you collect.

    The ID must be passed as a variable in the URL called 'CustomParticipantID' and should be sent as a regular URL encoded GET query string. It can be any number or string up to 255 Unicode characters.

    For example, in the following test URL, 'YYY' is the external tracking ID:  https://collect.realeyesit.com/XXXXXX?CustomParticipantID=YYY 

    The ‘YYY’ at the end of the test link will need to be dynamically replaced by a unique participant ID, which allows us to upload any associated information that you send us after the data collection has been completed.
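
    Since the ID travels as a GET query string, any reserved characters in it (spaces, '&', '=') must be URL encoded. A minimal sketch (the hash and ID values are illustrative):

    // Replace the 'YYY' placeholder with a URL-encoded participant ID.
    const base = "https://collect.realeyesit.com/XXXXXX";
    const rawId = "panel A/user&42"; // illustrative ID containing reserved characters
    const link = `${base}?CustomParticipantID=${encodeURIComponent(rawId)}`;
    console.log(link);
    // => https://collect.realeyesit.com/XXXXXX?CustomParticipantID=panel%20A%2Fuser%2642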

  • Are there any best practices when using our own panel?

    Usually, we see that webcam penetration rates in online panels are around 70-80% when webcam ownership cannot be targeted, and about 70% of people are willing to allow access to their webcam. There are a number of best practices to help optimize these rates:

    • Only targeting participants that meet all technical requirements.
    • Implementation of our Environment Detection API by panel(s).
    • Optimizing the study invites to maximize the acceptance rate amongst respondents. 

    Ensuring technical requirements are met

    Because we use webcams to read people's facial expressions, there are a number of technical requirements that must be met in order to participate. For instance:

    • The respondent must have at least one webcam 
    • The respondent must have either no firewall or a firewall that allows outgoing RTMP/HTTP-encapsulated RTMP traffic
    • The respondent must be using a relatively modern browser

    When possible, targeting for any of the above factors will help to significantly increase incidence rates. 

     

    Environment Detection API

    Although most new devices rolling out already have a webcam, there is still a notable portion of users without webcams whose facial expressions cannot be read. We have an automatic detector in place within our tests to ensure participants' computers meet all technical requirements; however, the detector only runs once our opt-in page is reached.

    The recommended route is to avoid participants entering the test and answering your survey questions, only to be disqualified at the webcam opt-in page for not meeting technical requirements. To this end, we have created a client-side component that can easily be included in survey routers, web pages or survey scripts themselves. The component checks that all technical requirements are met and returns a code which can be used to make better decisions in routing respondents. The result of the script is not stored or recorded anywhere; it is simply returned so that you can decide on the next steps for participants who do not qualify.

    Learn more about implementing our Environment Detection API here. 

    Maximizing acceptance rates

    Lastly, maximizing the number of participants that grant access to their webcams will help increase IRs.

    To maximize this group, we suggest not adding additional facial coding screens or opt-in language to your survey invites. Rather, stick to using our current opt-in page only, or minimal additions when necessary. We have found our current opt-in leads to the highest participation rates while still abiding by all privacy policies and security concerns.

    [Screenshot: example of the opt-in page participants see]

  • Can we detect whether participants' computers meet all technical requirements?

    Yes, we have a client-side component that can be implemented to check that all technical requirements are met. Please note, the checker should NOT be downloaded to your own servers; rather, it is run from our servers. The full details on our Environment Detection API are below.

    Environment Detection API

    Realeyes uses webcams to read people's facial expressions and behaviour. Although most new devices rolling out already have a webcam, there is still a notable portion of users without webcams whose facial expressions cannot be read.

    This can lead to frustrating scenarios for users:

    • Spending tens of minutes going through a lengthy screener or survey, only to discover when they reach the facial coding part that they can't do the study because they don't have a webcam
    • Being routed to studies which require webcams several times in a row and being disqualified repeatedly
    • Being asked to take part in a survey via a web page pop-up even though they couldn't possibly do the study in the first place

    And the following scenarios for sample providers:

    • Sample burn
    • Low incidence rates

    It's clear that this is an annoyance for the users and causes unnecessary burn for sample providers.

    One option is to ask about webcam ownership manually as a separate question, but in our experience running webcam-based studies we've noted that this doesn't work well. People sometimes don't even know they have webcams, or don't care to answer correctly. This leads to a reduction in the number of people successfully completing the test.

    Fortunately, all the capability checks can be performed automatically, without user intervention, using our Environment Detection API. This makes it possible to run a quick and transparent check of whether a respondent is capable of doing a Realeyes test, and to only invite or route capable people into the study.

    Our Environment Detector, a client-side script that performs the required checks, is lightweight and easy to insert in webpages or surveys. The environment checker itself is distributed via the Amazon CDN to ensure it loads and executes at speeds which will not affect the user experience.

    This component checks that:

    • The respondent is using a compatible web browser
    • The respondent has at least one webcam

    These checks can be done fully automatically without user intervention and are unnoticeable to the respondent.
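
    The component's actual interface is covered in the documentation linked below. Purely as a generic illustration of how such silent checks are possible (this is NOT the Realeyes component), standard browser APIs can detect a webcam without prompting the user:

    // Generic illustration: detect a webcam with standard browser APIs,
    // without prompting the user for access.
    async function hasWebcamAndModernBrowser(): Promise<boolean> {
      // A browser without mediaDevices support fails the compatibility check outright.
      if (!navigator.mediaDevices?.enumerateDevices) return false;
      const devices = await navigator.mediaDevices.enumerateDevices();
      return devices.some(d => d.kind === "videoinput");
    }

    hasWebcamAndModernBrowser().then(capable => {
      // Route the respondent accordingly, e.g. skip webcam studies if not capable.
      console.log(capable ? "capable" : "not capable");
    });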

    Possible usage scenarios:

    • Integrated into a survey router before screeners, so that only capable people are routed into webcam studies or studies with a part that strictly requires webcam access
    • Inside a script that captures users from websites or other river sampling scenarios such that only capable people are asked to participate in surveys
    • To automatically update data about the participant capabilities when they are updating their panel profile
    • Integrated as the first step in the screener or survey script to make sure they don't unnecessarily waste time answering questions if they are not capable

    Our full documentation on the Environment Detection API can be found here.

    If you have any questions about how to use the component, please contact us at support@realeyesit.com

  • What is required in order to upload our survey data to the dashboard?

    For redirect studies, should you want your demographic or survey data to be displayed on the dashboard, we will need you to send us a data file that includes all participant IDs and any associated data. This way we can easily match the survey data you have collected with the recorded emotion data. The following protocol outlines best practices when sending a survey data file in order to ensure a smooth upload:

    • The file must be saved in .CSV format
    • The file must contain a column for the Custom Participant IDs
    • The file should contain columns for age and gender (whenever possible) for comparison to our demographic norms
    • Our platform uses numerical values to translate data. If possible, please use number responses only in the data file and send a key indicating what the numbers in each column represent and how they should be labelled on the dashboard (see attached example)
    • Any survey question columns that are left blank for certain IDs will be labelled as “unknowns” on the dashboard
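
    For illustration only (the column names and number codes below are hypothetical; the attached example remains the template to follow), such a file might look like this:

    CustomParticipantID,Age,Gender,Q1
    panelist-001,2,1,3
    panelist-002,3,2,1
    panelist-003,1,1,

    Key: Age 1=18-24, 2=25-34, 3=35-44; Gender 1=Male, 2=Female; Q1 1=Low, 2=Medium, 3=High. The blank Q1 value for panelist-003 would be labelled as "unknown" on the dashboard.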

    Please see the attached example of a standard survey data file.

    If you have any other questions, please reach out to support@realeyesit.com.

  • How to merge data from different ads?

    It is possible to merge data from different ads; however, we strongly advise that this is only done if your data merge meets one of the scenarios below:

    • You have run two or more separate facial coding tests using the exact same media file in both instances
    • You have run two or more separate facial coding tests using very similar ads with only minor differences, such as a different language overlaid across the same video. 
    • You have run two or more separate facial coding tests using similar ads that have a few seconds of different content, but are otherwise the same. 

    In all of the above cases, the ads must be practically identical in length to ensure the data remains meaningful post-merging. If one ad is a few seconds longer than the other, this can give a false impression of the audience response towards the end of the ad, as the sample suddenly drops by 50% for the final X seconds, where X is the difference in length between the two ads.

    There are four methods for merging the data: 

    Grouping ads into a single study

    Use Case: You have tested two or more media which you wish to compare and contrast on the same study page.

    Summary: This method isn't a true merging of data; rather, it allows you to view the results of multiple ads alongside each other. So rather than having two separate studies, each with their own charts, you can instead have a single study that depicts the trend lines for both ads on the same chart, allowing for ease of comparison.

    Procedure: This can be performed either by you, if you control the setup of the study via API, or by Realeyes at any point before, during, or after collection. If you require us to create the grouped study for you, simply send a ticket to support@realeyesit.com asking us to create a new study that contains the data for X, Y and Z ads, including the original study names the ads come from if possible. In order to compare the two (or more) ads together on a single chart, simply:

    1. Open the study page and click the 'Select media' button in the top left of the page. Select all media you wish to compare, and click 'Done'
    2. Once the data for all of the ads you selected has loaded, ensure the 'Compare' option is selected in the 'Media' filter. 


    Auto 'Summing' ads within a single study

    Use Case: You have tested two or more very similar media (length and content) and wish to see their combined results.

    Summary: As with the previous method, this isn't true merging. It does combine the results of two or more ads together into a single chart line, but not permanently. If you were to refresh the page or deselect 'Sum' from the media filter (see Procedure below), then the data would no longer be merged.

    Procedure: This can be performed by you once two or more ads in a study have enough data (10+ sessions) to display the chart results, or at any other point during or after collection. If you require us to create the grouped study for you, simply send a ticket to support@realeyesit.com asking us to create a new study that contains the data for X, Y and Z ads, including the original study names the ads come from if possible. Once you have a study with all the required media in it, then in order to 'Sum' the results together, please do the following:

    1. Open the study page and click the 'Select media' button in the top left of the page. Select all media you wish to 'Sum' the results together for and click 'Done'
    2. Once the data for all of the ads you selected has loaded, ensure the 'Sum' option is selected in the 'Media' filter. 


    NB! Now that the chart depicts the combined ('Summed') results for multiple ads, the exports available on this page will also be for the combined results. 

    Manual 'Summing' ads from different studies

    Use case: You have tested two or more very similar ads (length and content) in different studies, and wish to view the combined results without creating a new study with all of the ads included. This would normally be if you wanted to quickly check what the merged results would look like, i.e. the task isn't worth the time/effort to submit a ticket for the 'Auto summing' option listed above.

    Summary: As with the previous two methods, this isn't a true merging of the data. You are downloading the separate data exports for each ad, and then averaging them together to create your own version of the 'Sum' media filter result.

    Procedure: Open each study that contains one or more of the ads you wish to merge the data for, then perform the following steps: 

    1. Select all the media that is relevant to you in each study using the 'Select media' button. Ensure the desired metrics (Happy, Engagement, etc.) that you wish to manually combine are selected. Furthermore, if you have multiple media selected within a single study, ensure the 'Compare' option is selected in the media filter. 
    2. Click the 'Export' button just above the chart, and select 'Chart Data'. A CSV export will download via your browser that contains one column of chart data for each ad/emotion pair you selected for the chart. Repeat this step in every study you opened in step 1 until you have a column of results data for each ad you wish to combine.
    3. Copy/paste all columns of data side-by-side into a single Excel spreadsheet. Create a new column at the end of the data, named 'Combined Data' or something else appropriate. You can then use Excel's 'AVERAGE' function to average the data across each row (each row being a second's worth of data) to get the average for that second. Do this for every row of data. 
    4. You now have a combined data column from all of the included ads, which you can turn into a chart via Excel's charting functions. To make the chart line identical to what it would look like in the 'Automated Sum' option above, ensure the 'Smoothed line' option is ticked in the chart settings. This is not the same as the smoothing functionality we provide on all charts pages, it's merely a chart style option that makes the trend lines a little nicer to look at. 

    NB! When you export the data, it will have already been 'Smoothed' to the level you set on the charts page. If you didn't specifically set the level, then it will be at the default setting.
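
    Step 3 above can also be done in a few lines of code instead of Excel. A minimal sketch, assuming each 'Chart Data' export is a simple CSV with one header row and the metric values in the last column, one row per second, and that the ads are equal in length (the file names are illustrative):

    // Average the per-second chart columns from two 'Chart Data' exports.
    import { readFileSync } from "fs";

    function column(file: string): number[] {
      const [, ...rows] = readFileSync(file, "utf8").trim().split("\n");
      return rows.map(r => parseFloat(r.split(",").pop() ?? "0"));
    }

    const ads = [column("ad_A_chart.csv"), column("ad_B_chart.csv")];
    const combined = ads[0].map((_, second) =>
      ads.reduce((sum, ad) => sum + ad[second], 0) / ads.length // row-wise AVERAGE
    );
    console.log(combined.slice(0, 5)); // first five seconds of the merged trend line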


    Full database data merge

    Use case: An issue with data collection resulted in you having two data sets for the same ad; however, it's imperative that the data is merged so that it can be presented to your client as a single results set.

    Summary: The data needs to be manually combined by Realeyes developers. This is a big task and could take up to several days to plan and execute. The result will be a single study/single ad, but the results combined from two or more collections. This is a true merging of the data. 

    Procedure: A formal request needs to be submitted to support@realeyesit.com that includes the following information: 

    • A summary of what you are trying to achieve and why
    • A list of the ads/studies you wish to combine
    • The urgency of the request

    To reiterate: This is a large, complex, manual process. It's not the type of task that can be done instantly or quickly - it must be planned into our developers' schedule. We may refuse the request if we do not agree that the justification for doing this is valid over the other merging options listed above. 

    NB! If you have used *the exact same media file* for two or more collections, then you can simply request that we create a new study using those two (or more) collections. If the media file is the same in all collections, our system will automatically merge the results and depict them as a single chart line. If you uploaded multiple copies of the same ad, and therefore used different files, we would need to do the full database data merge in order for the study to depict the results as a single results set.
