Transfer Learning and Uncertainty Quantification in Natural Language Processing for Political Science and Cyber Security






Recent advancements in Natural Language Processing (NLP) driven by pretrained language models have revolutionized fields that rely on large-scale text-based research through transfer learning. This dissertation presents efficient, reliable computational NLP applications that address real-world challenges, with a focus on political science, cyber security, and uncertainty quantification.

The dissertation begins with interdisciplinary research in political science, where advanced NLP models are developed to track and analyze the dynamics of global political conflict. ConfliBERT, the first domain-specific sociological language model, improves performance on 18 downstream tasks, particularly in scenarios with limited data availability. Moreover, by leveraging transfer learning and existing expert knowledge, specific tasks such as political event extraction and classification are further optimized. One approach, Confli-T5, is a text generation model that augments labeled data by incorporating achievable templates derived from political science knowledge bases. Another, the Zero-Shot fine-grained relation classification model for the PLOVER ontology (ZSP), eliminates the need for labeled data by relying solely on an annotation codebook to classify intricate interactions between political actors. These strategies combine the power of transfer learning with domain-specific expertise to reduce the dependence on extensive labeled data, making them valuable tools in the field.

In the field of cyber security, text generation techniques are employed for cyber deception: multiple fake versions of critical documents are generated to deter malicious intrusion. Fake Document Infilling (FDI), a context-aware model, addresses the limitations of existing approaches by taking document context into account. FDI produces highly believable fake documents, protecting critical information and effectively deceiving adversaries.
Finally, uncertainty quantification techniques are explored to enhance the reliability of NLP models in such interdisciplinary and cross-domain applications. A novel model, BERT-ENN, employs evidential theory to quantify multidimensional uncertainty in the data and to calibrate uncertainty estimation in text classifiers. This approach achieves state-of-the-art out-of-distribution detection performance, thereby improving the reliability of NLP models.



Computer Science