Benoit Morel, CEO of Cantoche, participated in the STEAM3 conference in Austin, TX on March 1-2. STEAM3, short for Science + Tech + Engineering + Art + Math “cubed”, is a public event presenting a comprehensive look into the future of experiential learning, to which several experts in art and education were invited.
During this conference, Benoit presented the work done in the ILHAIRE project, which was one of the hits of his presentation, since creating emotions, and especially laughter, is an important way to engage learners.
The ILHAIRE project was recently featured on the New Scientist website. The article describes the work done at UCL in capturing natural laughter-related body movements and its application to visual laughter synthesis, such as the Living Actor™ characters developed by ILHAIRE partner Cantoche. This welcome piece of publicity came about from the “CS Unveiled” open day at the UCL Computer Science Department, which invited visitors to come and see the work and innovations at a world-class Computer Science department. As part of the open day, Harry Griffin presented the ILHAIRE project alongside UCLIC colleagues who presented the EMO+PAIN project. Researchers on these projects at UCLIC use a similar methodology (motion capture for recognition of emotion), but the contrasting applications (laughter for ILHAIRE; chronic pain for EMO+PAIN) demonstrate the flexibility and power of this approach. You can read an award-winning paper on laughter detection and perception, authored by members of both projects, here.
The relationship between scientists and the media is an interesting one. The value of public engagement in science is increasingly recognised, and researchers are encouraged (and funded) to dedicate time to it. Articles in popular science publications and the general media are fantastic tools for this. They invite readers to further investigate scientific areas in which they are interested. Blogs such as this one are often the next port of call, since they are a direct channel of communication from researchers to those members of the public who want information straight from the horse’s mouth. Open days and science festivals, such as the Royal Society Summer Science Exhibition, bring researchers and the public face-to-face and can be hugely rewarding for both sides. The public also have increasing access to peer-reviewed articles, for example through the Open Access movement, which aims to make previously costly journals freely available to researchers and the public in order to foster better research and greater public involvement in science.
Unfortunately, the first contact in this journey of engagement can be hazardous. There are many good science writers out there, but science as presented to the public in non-academic publications is sometimes inappropriately cut up into bite-size ideas and framed around a simple, attention-grabbing headline (although this is often the responsibility of an editor rather than of the journalist who writes the main text). The increased impact this approach offers can come at the cost of accuracy and subtlety. Journalists work to deadlines that seem remarkably fast compared to the duration of a research project. The information given to journalists, sometimes in a brief interview, is filtered through their own scientific experience when they come to reproduce it for publication. Caveats and qualifications that are vital in science are sometimes omitted. Results and quotations can be presented out of the context of the previous research and the wider project. Scientists, like members of any profession, don’t wish to see their work or themselves misrepresented, so articles targeted at the general public are often anticipated with some trepidation.
Happily, this article gives a good indication of the work within the small part of the ILHAIRE project that it describes, and the authors were quick to correct minor technical errors. The headline, “Motion-captured laughs make animations more amusing”, is catchy but perhaps over-generalised, since simply making the animations amusing is not the specific aim of this aspect of the project, nor of the project as a whole. The main body of the article does accurately describe the methodology and the overall aim: to endow avatars with laughter-related behaviour (in this case body movements) to enhance their perceived naturalness and efficacy in human-avatar interactions. As mentioned in the article, contagion – both in the sense of laughter inducing laughter and of people mimicking each other’s laughter behaviour – is a major research focus in the project’s final year, particularly for UCL. We hope that adding natural body movements may make the animations more amusing in the sense of being more contagious. Hopefully, with further exposure in the press, more people will see these results and the many others to come in ILHAIRE’s final year. Watch this space!
When designing a laughing virtual agent, the agent needs to detect the context (multimodal analysis) and to synthesize laughter. Between these two modules lies a decision problem handled by a laugh manager: should the agent laugh, and with what type of laughter? Moreover, what the agent does has an effect on the human interacting with it, modifying the context and calling for a new reaction from the agent. In other words, this decision problem takes place in a perception/action loop. Laughing can therefore be framed as a sequential decision-making problem.
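The perception/action loop described above can be sketched as follows. Note that the class and function names, and the toy decision rule, are invented for illustration; they are not the actual ILHAIRE modules.

```python
# Minimal sketch of the perception/action loop: sense the context,
# let a laugh manager decide, synthesize the laugh, repeat.
# All names and thresholds here are hypothetical illustrations.

class LaughManager:
    """Decides, from the detected context, whether and how to laugh."""

    def decide(self, context):
        # Toy rule standing in for the real decision module.
        amusement = context["amusement"]
        if amusement > 0.7:
            return "hearty_laugh"
        if amusement > 0.3:
            return "smile"
        return "no_laugh"

def interaction_loop(manager, sense, synthesize, steps=10):
    """Perception -> decision -> action: each synthesized laugh affects
    the human, which changes the context sensed at the next step."""
    for _ in range(steps):
        context = sense()                 # multimodal analysis
        action = manager.decide(context)  # laugh manager (decision)
        synthesize(action)                # laughter synthesis
```

The key point of the loop is that the agent's own output feeds back into its next observation, which is exactly what makes this a sequential problem rather than a one-shot classification.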
A standard machine-learning approach to such a problem is reinforcement learning. In this paradigm, an agent learns to behave optimally in its environment by interacting with it. At each time step, given the current context, it chooses an action and applies it, resulting in a new context. The agent receives a reward for this transition, quantifying the local quality of the control. The ultimate goal of the agent is to maximize the cumulative reward over the long term (not just the next reward). While designing a reward is easy for some tasks (for example, if the agent learns to play a game, the reward could be +1 for winning, -1 for losing and 0 otherwise), it is a very hard problem for the task of laughing (which is in any case quite subjective). Even a human cannot make explicit the cumulative reward they are optimizing when laughing.
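As a concrete (and deliberately toy) illustration of this paradigm, here is tabular Q-learning on an invented two-state problem, where laughing in a “lively” context is rewarded and laughing in a “quiet” one is penalised. The states, actions and rewards are made up for the sketch; this is the generic algorithm, not the ILHAIRE system.

```python
import random

# Toy environment: two contexts, two actions, hand-designed reward.
STATES = ["quiet", "lively"]
ACTIONS = ["stay_silent", "laugh"]

def step(state, action):
    """One environment transition: returns (next context, reward)."""
    if action == "laugh":
        reward = 1.0 if state == "lively" else -1.0
    else:
        reward = 0.0
    # The context drifts randomly between quiet and lively.
    next_state = random.choice(STATES)
    return next_state, reward

def q_learning(steps=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Learn action values by interaction, maximizing cumulative reward."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "quiet"
    for _ in range(steps):
        # Epsilon-greedy choice: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        # Move the estimate toward reward + discounted best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state
    return q
```

After enough interaction, the learnt values favour laughing in the lively context and staying silent in the quiet one. The hard part for laughter, as noted above, is that no such reward function can be written down by hand.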
Yet it is easy to record expert behaviour: it suffices to collect data of a human (standing in for the virtual agent) interacting with other humans (playing their own roles). A natural approach is then to use supervised learning (more precisely, classification) to generalize the mapping between contexts and actions recorded from the human. However, this does not take into account the sequential nature of the problem.
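The supervised approach can be sketched with any classifier; here a simple nearest-neighbour rule stands in, and the features and recordings are invented for illustration.

```python
# Sketch of the classification approach: generalise the recorded
# mapping from contexts to the human expert's laughter actions.

def nearest_neighbour_policy(examples):
    """examples: list of (context_features, action) pairs recorded
    from a human. Returns a policy mapping a new context to an action."""
    def policy(context):
        def sq_dist(features):
            return sum((a - b) ** 2 for a, b in zip(features, context))
        # Predict the action of the closest recorded context.
        _, action = min(examples, key=lambda ex: sq_dist(ex[0]))
        return action
    return policy

# Invented recordings: (amusement level, partner laughing?) -> action.
recordings = [
    ((0.9, 1.0), "laugh"),
    ((0.6, 0.0), "smile"),
    ((0.1, 0.0), "stay_silent"),
]
policy = nearest_neighbour_policy(recordings)
```

Each prediction here depends only on the current context, which is precisely the limitation raised above: the classifier ignores how the current decision shapes future contexts.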
To handle this issue, a principled approach is inverse reinforcement learning. Given some examples of expert behaviour (the human laughing), the aim is to estimate the reward that the expert is optimizing. This allows the human to be imitated, by optimizing the learnt reward, but it could also provide some insight into the principles of laughter, through semantic analysis of the learnt reward. However, inverse reinforcement learning is a very hard problem (for diverse reasons), much less studied than supervised learning. A contribution of the ILHAIRE project has been to draw connections between supervised learning and inverse reinforcement learning, leading to new algorithms that advance the state of the art, motivated by the laughing problem [1-4].
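A deliberately simplified sketch of the inverse-RL idea: assume the reward is linear in state features, reward(s) = w · φ(s), and search for weights w under which the expert's observed behaviour scores at least as well as the alternatives. This perceptron-style toy only illustrates the principle; it is not one of the algorithms of [1-4], and the feature vectors are invented.

```python
# Toy inverse reinforcement learning: recover linear reward weights
# that make the expert's behaviour look better than alternatives.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def irl_weights(expert_mu, other_mus, iters=100):
    """expert_mu: average feature vector along the expert's trajectories;
    other_mus: average feature vectors of alternative behaviours."""
    w = [0.0] * len(expert_mu)
    for _ in range(iters):
        for mu in other_mus:
            # If an alternative scores at least as well as the expert
            # under w, nudge w toward the expert's features.
            if dot(w, mu) >= dot(w, expert_mu):
                w = [wi + e - o for wi, e, o in zip(w, expert_mu, mu)]
    return w

# Invented features: "laughs when the partner laughs" vs "laughs at random".
w = irl_weights(expert_mu=[0.8, 0.1], other_mus=[[0.1, 0.5]])
```

The learnt weights can then be inspected: a large weight on a feature suggests that feature matters to the expert, which is the sense in which the learnt reward can be analysed semantically.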
[1] E. Klein, M. Geist, B. Piot and O. Pietquin. “Inverse Reinforcement Learning through Structured Classification”. In Advances in Neural Information Processing Systems (NIPS), 2012.
[2] E. Klein, B. Piot, M. Geist and O. Pietquin. “A cascaded supervised learning approach to inverse reinforcement learning”. In European Conference on Machine Learning (ECML), 2013.
[3] B. Piot, M. Geist and O. Pietquin. “Learning from demonstrations: Is it worth estimating a reward function?”. In European Conference on Machine Learning (ECML), 2013.
[4] B. Piot, M. Geist and O. Pietquin. “Boosted and Reward-regularized Classification for Apprenticeship Learning”. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2013.