All events are in Central time unless specified.

M.S. Defense: Mostafa Rafaiejokandan

Wednesday, November 30, 2022
3:00 – 4:00 PM
Zoom: https://unl.zoom.us/j/91344168310

“Attention in the Faithful Self-Explanatory NLP Models”

Deep neural networks (DNNs) perform impressively on many natural language processing (NLP) tasks, but their black-box nature makes them inherently difficult to explain or interpret. Self-explanatory models are a recent approach to this challenge: alongside the task objective, such as answering a question, they generate explanations in human-readable language. The main focus of this thesis is the explainability of NLP tasks and how attention mechanisms can help improve performance. Three attention modules are proposed: SimpleAttention, CrossSelfAttention, and CrossModality. The thesis also introduces a dataset transformation method, Two-Documents, that converts any dataset into the two separate documents required by the proposed attention modules. These ideas are incorporated into a faithful architecture in which a module both produces an explanation and prepares the information vector for the subsequent layers. Experiments were run on the ERASER benchmark's CoS-E dataset, restricted to the transformer used in the baseline and to only the dataset's own training data, even though the task requires commonsense knowledge to improve accuracy. Based on the results, the proposed solution produced explanations that outperformed the baseline by about 4% in Token F1 and about 12% in IOU F1, while also being about 1% more accurate on the task.
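For context, the proposed modules build on standard scaled dot-product self-attention. The sketch below is a minimal, illustrative NumPy implementation of that standard mechanism only; it is not code from the thesis, and all names here are chosen for illustration.

    # Minimal sketch of scaled dot-product self-attention, the standard
    # building block that attention modules like those above extend.
    # Illustrative only; not the thesis's implementation.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
        weights = softmax(scores, axis=-1)        # per-token attention distribution
        return weights @ V                        # weighted sum of value vectors

    # Example: 5 tokens with 16-dim embeddings, 8-dim attention projections
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))
    Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)           # shape (5, 8)

The attention weights computed inside such a module are what faithful self-explanatory architectures can expose as human-readable evidence for a prediction.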


Committee members:
Advisor: Dr. Stephen Scott
Dr. Ashok Samal
Dr. Vinodchandran Variyam
