The ICT2013 event (#ICT2013eu), organized by the European Commission, took place on November 6-8 in Vilnius, Lithuania. ICT is Europe's biggest digital technology event, and the ILHAIRE team was there to exhibit and demonstrate its work on incorporating laughter into human-avatar interactions.
5,000 people attended the event, and many of them stopped at our booth to make our avatar system laugh automatically. ICT was a great opportunity to demonstrate how our avatar system uses laughter to engage online users.
For the ILHAIRE project, we use virtual characters designed to display facial emotions and models of expressive gestures, which we combine according to the valence and activation dimensions of emotion. To prepare a virtual character, the 3D artist separates the face from the body, since their animation systems differ.
- The face is first modeled in a neutral position. We then create different morphing possibilities of the face to simulate muscle and facial movements. The main parts of the face that we "morph" are the lips, chin, cheeks, eyes, eyebrows, nose and forehead. Very often, small incremental modifications make a noticeable difference in perception.
- For the rest of the body, we insert a 3D "skeleton" into the 3D mesh, which allows us to manipulate the avatar as if it were a puppet. We prepare series of simple and complex movements for the different body parts, which the avatar can then execute with agility.
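The facial "morphing" described above is commonly implemented as blend-shape (morph-target) interpolation: each target stores per-vertex offsets from the neutral face, and the final pose is the neutral mesh plus a weighted sum of those offsets. The following is a minimal sketch of that idea; the vertex data and target names are illustrative, not Living Actor's actual assets.

```python
# Hedged sketch of blend-shape ("morph target") facial animation.
# A real face mesh has thousands of vertices; three 2D points stand in here.

NEUTRAL = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # neutral face vertices (x, y)

# Each morph target stores per-vertex offsets from the neutral pose.
# Target names are hypothetical examples.
MORPH_TARGETS = {
    "smile":      [(0.1, 0.2), (-0.1, 0.2), (0.0, 0.0)],
    "brow_raise": [(0.0, 0.0), (0.0, 0.0), (0.0, 0.3)],
}

def blend(weights):
    """Return vertices = neutral + sum(weight * offset) over active targets."""
    out = [list(v) for v in NEUTRAL]
    for name, w in weights.items():
        for i, (dx, dy) in enumerate(MORPH_TARGETS[name]):
            out[i][0] += w * dx
            out[i][1] += w * dy
    return [tuple(v) for v in out]

# A half-strength smile combined with a slight brow raise:
print(blend({"smile": 0.5, "brow_raise": 0.2}))
```

Because weights vary continuously between 0 and 1, small incremental weight changes produce exactly the kind of subtle perceptual differences mentioned above.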
With Living Actor™ Avatar Maker, we prepare several combinations of facial and postural expressions ("states") so the avatar can perform actions matching the variants of laughter we want. Living Actor™ then synchronizes these combination sets with the audio files: when laughing, the virtual character moves its body in synchrony with the different facial expressions, following the sound of the laughter. All of the body language, facial expressions, and essence of laughter are created within Living Actor™ Avatar Maker.
Living Actor™ Avatar Maker Graph of States
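A graph of states like the one above can be sketched as a small directed graph: each node pairs a facial expression with a body posture, and edges declare which states may follow. The sketch below illustrates that structure with hypothetical state names; Living Actor's internal format is proprietary and certainly richer (timing, audio sync, blending).

```python
import random

# Hedged sketch of a "graph of states" for laughter animation.
# State names, expressions and postures are made-up examples.
STATES = {
    "idle":      {"face": "neutral",    "body": "rest",         "next": ["giggle"]},
    "giggle":    {"face": "smile",      "body": "slight_shake", "next": ["burst", "idle"]},
    "burst":     {"face": "open_mouth", "body": "torso_bounce", "next": ["wind_down"]},
    "wind_down": {"face": "smile",      "body": "rest",         "next": ["idle"]},
}

def random_walk(start, steps, seed=0):
    """Follow transitions to produce a sequence of (state, face, body) keyframes."""
    rng = random.Random(seed)  # seeded for reproducibility
    seq, state = [], start
    for _ in range(steps):
        s = STATES[state]
        seq.append((state, s["face"], s["body"]))
        state = rng.choice(s["next"])
    return seq

for name, face, body in random_walk("idle", 5):
    print(f"{name:10s} face={face:12s} body={body}")
```

In the real system, each traversed state would be time-aligned with a segment of the laughter audio file, so face and body stay in sync with the sound.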
The Social Signal Interpretation (SSI) framework, developed by the ILHAIRE partners at the University of Augsburg and used in the project for real-time multi-modal laughter detection, received an Honorable Mention at the ACM MM Open Source Software Competition of this year's ACM Multimedia Conference (http://sigmm.org/Resources/software/ossc).

ACM Multimedia (MM) is the flagship conference of the multimedia community, profiling cutting-edge scientific developments and showcasing innovative industrial multimedia technologies and applications. Since 2004, ACM Multimedia has hosted an annual Open Source competition that awards the best open-source computer program(s). To qualify, software must be provided with source code and licensed in such a manner that it can be used free of charge in academic and research settings.

By providing tools to record, analyse and recognize human behaviour, SSI supports the development of automated computer systems that are sensitive to a user's verbal and non-verbal behaviour. In the future, such technology will help establish more natural human-computer interaction. Within the ILHAIRE project, SSI is used to collect large-scale multi-modal laughter corpora and to build an online laughter detector, which combines facial and vocal laughter cues to output a continuous decision on whether a user is laughing and at what intensity.
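SSI's actual detection pipeline is described in the paper cited below; as a generic illustration only, combining facial and vocal cues into a continuous laughter decision can be sketched as a weighted late fusion of per-modality probabilities. The weights, threshold, and sample scores here are illustrative assumptions, not SSI's parameters.

```python
def fuse_laughter(face_prob, voice_prob, w_face=0.5, w_voice=0.5, threshold=0.5):
    """Weighted late fusion of per-modality laughter probabilities.

    Returns (is_laughing, intensity), where intensity is the fused score
    in [0, 1]. Weights and threshold are illustrative, not SSI's values.
    """
    intensity = w_face * face_prob + w_voice * voice_prob
    return intensity >= threshold, intensity

# Simulated per-frame classifier outputs (face, voice) for a short stream:
stream = [(0.1, 0.2), (0.6, 0.4), (0.9, 0.8), (0.3, 0.1)]
for face_p, voice_p in stream:
    laughing, intensity = fuse_laughter(face_p, voice_p)
    print(f"laughing={laughing!s:5s} intensity={intensity:.2f}")
```

A continuous intensity value, rather than a hard yes/no, is what lets the avatar respond with laughter of matching strength.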
Johannes Wagner, Florian Lingenfelser, Tobias Baur, Ionut Damian, Felix Kistler, and Elisabeth André. 2013. The Social Signal Interpretation (SSI) framework: multimodal signal processing and recognition in real-time. In Proceedings of the 21st ACM International Conference on Multimedia (MM '13). ACM, New York, NY, USA, 831-834. http://hcm-lab.de/projects/ssi/wp-content/uploads/2013/11/ssi-acm13-camera.pdf