YINS Seminar Archives: Nazneen Rajani (Mar. 3, 2020)

Talk Summary: 

Using Natural Language Explanations to Incorporate Commonsense Reasoning in Neural Networks

Speaker: Nazneen Rajani, Salesforce Research

Deep learning models perform poorly on tasks that require commonsense reasoning, which often calls for world knowledge or for reasoning over information not immediately present in the input. In the first part of the talk, I will discuss how language models can be leveraged to generate natural language explanations that are not only interpretable but can also be used to improve performance on downstream tasks such as CommonsenseQA, and I will show empirically that explanations are a way to incorporate commonsense reasoning into neural networks. I will also discuss how explanations can be transferred to other tasks without fine-tuning. In the second part of the talk, I will describe how we can train neural networks to do commonsense reasoning for qualitative physics, demonstrated on simulations involving physical laws such as collision, friction, and gravity. Our proposed framework first detects salient collisions and then generates natural language reasoning about the events in those salient frames.
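The explain-then-predict pipeline sketched in the abstract can be illustrated schematically as follows. This is a minimal sketch, not the authors' actual implementation: `generate_explanation` and `predict` are hypothetical stand-ins for, respectively, a fine-tuned language model that produces a free-text rationale and a classifier that conditions on the question plus that rationale.

```python
# Hedged sketch of a two-stage "explain-then-predict" pipeline for a
# multiple-choice commonsense QA task. All function names and the toy
# logic below are illustrative assumptions, not the talk's actual code.

def generate_explanation(question: str, choices: list[str]) -> str:
    # Stand-in for a language model fine-tuned to emit a natural language
    # rationale; a real system would decode from a generative LM here.
    return f"A plant needs sunlight, so a {choices[0]} near a window fits."

def predict(question: str, choices: list[str], explanation: str) -> str:
    # Stand-in for a classifier that conditions on question + explanation.
    # Here we simply pick the answer choice mentioned in the rationale.
    for choice in choices:
        if choice in explanation:
            return choice
    return choices[0]

question = "Where would you put a plant so it gets sunlight?"
choices = ["windowsill", "basement", "closet"]

explanation = generate_explanation(question, choices)
answer = predict(question, choices, explanation)
```

The key design point is that the explanation is generated first and then fed to the prediction stage as extra input, so the rationale can supply world knowledge absent from the question itself.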

This presentation was held jointly with Computer Science and took place on March 3, 2020.

Nazneen Rajani

I have been a Research Scientist at Salesforce Research since January 2019. My primary research interests are Natural Language Understanding, Machine Learning, and Explainable Artificial Intelligence (XAI). I am currently working on projects in language modeling with commonsense reasoning, language grounding with vision, gender bias in language modeling, and explainable reinforcement learning. Website: http://www.nazneenrajani.com/