RL-Emo-Per Data Compliance Form
"Empathetic Persuasion: Reinforcing Empathy and Persuasiveness in Dialogue Systems"

a) This work is accepted for publication in Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, Washington, July 10–15, 2022.
b) This dataset is intended for non-commercial, educational, and/or research purposes only.
c) For access to the Empathetic Persuasion dataset and any associated queries, please reach us at iitpainlpmlresourcerequest@gmail.com / mishra.kshitij07@gmail.com.
d) The dataset may be used in any publication only with the following citation.

Please use the following BibTeX to cite this work:

@inproceedings{samad-etal-2022-empathetic,
    title = "Empathetic Persuasion: Reinforcing Empathy and Persuasiveness in Dialogue Systems",
    author = "Samad, Azlaan Mustafa  and
      Mishra, Kshitij  and
      Firdaus, Mauajama  and
      Ekbal, Asif",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-naacl.63",
    doi = "10.18653/v1/2022.findings-naacl.63",
    pages = "844--856",
    abstract = "Persuasion is an intricate process involving empathetic connection between two individuals. Plain persuasive responses may make a conversation non-engaging. Even the most well-intended and reasoned persuasive conversations can fall through in the absence of empathetic connection between the speaker and listener. In this paper, we propose a novel task of incorporating empathy when generating persuasive responses. We develop an empathetic persuasive dialogue system by fine-tuning a maximum likelihood estimation (MLE)-based language model in a reinforcement learning (RL) framework. To design feedback for our RL agent, we define an effective and efficient reward function considering consistency, repetitiveness, emotion, and persuasion rewards to ensure consistency, non-repetitiveness, empathy, and persuasiveness in the generated responses. Due to the lack of emotion-annotated persuasive data, we first annotate the existing Persuasion For Good dataset with emotions, then build transformer-based classifiers to provide emotion-based feedback to our RL agent. Experimental results confirm that our proposed model increases the rate of generating persuasive responses as compared to the available state-of-the-art dialogue models while making the dialogues empathetically more engaging and retaining the language quality in responses.",
}
Email *
Name *
Affiliation (Department/Institute/University you belong to) *
You are *
Address of correspondence *
Contact Information *
How did you come to know about the dataset? *
Briefly describe how you intend to use this dataset (minimum character count = 350) *
Accept Terms *
Date (MM/DD/YYYY) *