HI & NI: James Pustejovsky Visit, November 6
On Monday, November 6, James Pustejovsky from Brandeis University will visit the VU.

We would like to get a sense of how many people will attend events throughout the day to reserve appropriate spaces. Please fill out this form at your earliest convenience, and no later than Wednesday, October 25.

Additionally, if you would like to meet with James individually and have not already signed up, please email Lucia (l.e.donatelli@vu.nl).

Many thanks, and hope to see you on the 6th! 

Lucia & Piek
Email
James will give a talk from 13:30 to 15:00 on "Modeling Theory of Mind in Multimodal Dialogue." Do you plan to attend?

Abstract: Theory of Mind (ToM) refers to the cognitive capacity humans have to attribute mental states such as beliefs (true or false), desires, and intentions to themselves and others, thereby predicting and explaining behavior. Within the domain of Human-Computer Interaction (HCI), this concept has recently become more relevant for computational agents, especially in the context of multimodal communication. As multimodal interactions involve not only speech but also gestures, haptics, eye movement, and other types of input, each modality introduces subtleties that can be misinterpreted without a deeper understanding of the agent's mental state. In this talk, I argue that Simulation Theory of Mind (SToM), encoded as an evidence-based dynamic epistemic logic (EB-DEL), can help model these complexities. Specifically, I apply this model to the problem of Common Ground Tracking (CGT) in task-oriented interactions.

Unlike dialogue state tracking (DST), which updates representations of the speaker's needs at each turn based on past dialogue moves and history, common ground tracking (CGT) identifies the shared belief space held by all participants in a task-oriented dialogue. Within the framework of SToM, I present a method for automatically identifying the current set of shared beliefs and questions under discussion (QUDs) of a group with a shared goal. We annotate a dataset of multimodal interactions in a shared physical space with speech transcriptions, prosodic features, gestures, actions, and facets of collaboration, and operationalize these features for use in a deep neural model to predict moves toward construction of common ground. Model outputs cascade into a set of formal closure rules derived from situated evidence and belief axioms and update operations. We empirically assess the contribution of each feature type toward successful construction of common ground relative to ground truth.

We will host drinks for James at NENI around 17:30. Would you like to attend?
Do you have any questions or comments about the visit? You may also write Lucia directly. 