Author: Hu, Hai
ProQuest Information and Learning Co
Indiana University. Linguistics
Title: Symbolic and Neural Approaches to Natural Language Inference
Publication: 2021
Description: 1 online resource (250 pages)
Content type: text
Media type: computer
Carrier type: online resource
Notes: Source: Dissertations Abstracts International, Volume: 83-01, Section: A
Advisor: Kuebler, Sandra; Moss, Lawrence
Thesis (Ph.D.)--Indiana University, 2021
Includes bibliographical references
Natural Language Inference (NLI) is the task of predicting whether a hypothesis text is entailed by (or can be inferred from) a given premise. For example, given the premise that two dogs are chasing a cat, it follows that some animals are moving, but it does not follow that every animal is sleeping. Previous studies have proposed logic-based, symbolic models and neural network models to perform inference. However, in the symbolic tradition, relatively few systems are designed based on monotonicity and natural logic rules; in the neural network tradition, most work is focused exclusively on English.
Thus, the first part of the dissertation asks how far a symbolic inference system can go relying only on monotonicity and natural logic. I first designed and implemented a system that automatically annotates monotonicity information on input sentences. I then built a system that utilizes the monotonicity annotation, in combination with hand-crafted natural logic rules, to perform inference. Experimental results on two NLI datasets show that my system performs competitively with other logic-based models, with the unique feature of generating inferences as augmented data for neural-network models.
The second part of the dissertation asks how to collect NLI data that are challenging for neural models, and examines the cross-lingual transfer ability of state-of-the-art multilingual neural models, focusing on Chinese. I collected the first large-scale NLI corpus for Chinese, using a procedure that is superior to what has been done with English, along with four types of linguistically oriented probing datasets in Chinese. Results show the surprising transfer ability of multilingual models, but overall, even the best neural models still struggle on Chinese NLI, exposing the weaknesses of these models.
Electronic reproduction. Ann Arbor, Mich. : ProQuest, 2021
Mode of access: World Wide Web
Subject: Linguistics
Chinese natural language processing
Natural language inference
Natural language understanding
Neural models
Recognizing textual entailment
Symbolic models
Electronic books.
ISBN/ISSN: 9798516947674
Related link: click for full text (PQDT)