Exploring the Potential for Fusing Technology with Behavior Recognition
Reallaer has previously created solutions that aid in marking up videos with behaviors and other facial cues. Our experience managing this data, alongside these annotation tools, paved the way for researching a platform that supports an assessment process tailored to each client's needs. One of our most frequently requested features is an application that is not locked to a local PC. In response, we have created a web application to fill that need.
Custom questionnaires can be created through our admin portal, assembled from existing questionnaires or from entirely new questions. Videos from our extensive behavior libraries can be associated with a questionnaire, and questionnaires can be composed together to form an overall assessment. User management modules support standing up a full assessment workflow: evaluated users log in to the web application and interact with videos using on-screen tools to submit answers. Standard text and multiple-choice questions still have a home alongside our video-based questions. Together, these features create a comprehensive video-based assessment.
While manual markup of video is useful for evaluating users' detection skills, there will always be a need to perform this markup automatically. We have worked alongside many universities to develop classifiers that detect various behaviors using the Facial Action Coding System (FACS). While integrating these classifiers into an overall detection system, we collected additional training data and accelerated training through parallelization and cloud infrastructure. However, fielding this solution can require an extremely large physical footprint when multiple sensors are used, and it is not portable enough for more constrained problems such as evaluating marketing content.
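To give a sense of how FACS output feeds behavior detection: FACS decomposes facial movement into numbered action units (AUs), and commonly cited EMFACS-style combinations of AUs correspond to prototype emotions. The sketch below assumes an upstream AU detector (not shown) and simply matches its output against those well-known combinations; it is an illustration, not Reallaer's classifier.

```python
# EMFACS-style prototype combinations of FACS action units (AUs).
# The per-AU detector that produces `detected_aus` is assumed.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def match_emotions(detected_aus: set[int]) -> list[str]:
    """Return prototype emotions whose AU combination is fully present."""
    return [name for name, aus in EMOTION_PROTOTYPES.items()
            if aus <= detected_aus]

print(match_emotions({6, 12, 25}))  # ['happiness']
```

Real systems score AU intensities rather than making binary presence calls, but the mapping from AU combinations to behaviors is the same idea.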
Reallaer researched a web-application-based solution that uses a consumer-grade web camera to record a user's face while the user interacts with a visual stimulus, such as a marketing commercial. Rather than collecting a user's session and scoring it manually, a set of classifiers was trained and deployed to automatically grade these videos across a range of emotions. The output of this process provides fast, concise emotional feedback on users' reactions, linked to the associated video.
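One way to turn per-frame classifier output into the concise feedback described above is to average scores over fixed time windows aligned to the stimulus timeline. The following is a minimal sketch under that assumption; the function name, window size, and score format are illustrative, not the deployed system's API.

```python
from statistics import mean

def summarize_emotions(frame_scores, fps=30, window_s=5):
    """Aggregate per-frame classifier scores into windowed averages.

    frame_scores: one dict per video frame, mapping emotion -> score in [0, 1].
    Returns one averaged dict per `window_s`-second window of the stimulus.
    """
    win = fps * window_s
    summaries = []
    for start in range(0, len(frame_scores), win):
        chunk = frame_scores[start:start + win]
        emotions = chunk[0].keys()
        summaries.append({e: round(mean(f[e] for f in chunk), 3) for e in emotions})
    return summaries

# Two seconds of synthetic scores at 2 fps, summarized per second
scores = [{"joy": 0.8, "surprise": 0.1}, {"joy": 0.6, "surprise": 0.3},
          {"joy": 0.2, "surprise": 0.7}, {"joy": 0.4, "surprise": 0.5}]
print(summarize_emotions(scores, fps=2, window_s=1))
# → [{'joy': 0.7, 'surprise': 0.2}, {'joy': 0.3, 'surprise': 0.6}]
```

Windowed summaries like these can be overlaid on the stimulus video so a reviewer sees, for example, which moments of a commercial drew the strongest reactions.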