Natarajan, Sriraam

Permanent URI for this collection: https://hdl.handle.net/10735.1/6766

Sriraam Natarajan is a Professor of Computer Science and the head of the StARLinG (Statistical Artificial Intelligence and Relational Learning Group) Lab. His research interests span Artificial Intelligence and Machine Learning as applied to healthcare problems, and include:

  • Relational Learning
  • Reinforcement Learning
  • Graphical Models
  • Planning
  • Statistical Relational AI
  • Health Informatics/Precision Health


Recent Submissions

  • Item
    Planning with Actively Eliciting Preferences
    (Elsevier Science BV, 2018-11-22) Das, Mayukh; Odom, Phillip; Islam, Md Rakibul; Doppa, Janardhan Rao (Jana); Roth, Dan; Natarajan, Sriraam
    Planning with preferences has been employed extensively to quickly generate high-quality plans. However, it may be difficult for the human expert to supply this information without knowledge of the reasoning employed by the planner. We consider the problem of actively eliciting preferences from a human expert during the planning process. Specifically, we study this problem in the context of the Hierarchical Task Network (HTN) planning framework as it allows easy interaction with the human. We propose an approach where the planner identifies when and where expert guidance will be most useful and seeks the expert's preferences accordingly to make better decisions. Our experimental results on several diverse planning domains show that the preferences gathered using the proposed approach improve the quality and speed of the planner, while reducing the burden on the human expert. (A hedged sketch of this query-when-uncertain idea follows the listing below.)
  • Item
    Human-Guided Learning for Probabilistic Logic Models
    (Frontiers Media S.A.) Odom, P.; Natarajan, Sriraam
    Advice-giving has long been explored in the artificial intelligence community to build robust learning algorithms when the data is noisy, incorrect, or even insufficient. While logic-based systems were used effectively in building expert systems, the role of the human has been restricted to being a "mere labeler" in recent times. We hypothesize and demonstrate that probabilistic logic can provide an effective and natural way for the expert to specify domain advice. Specifically, we consider different types of advice-giving in relational domains where noise could arise due to systematic errors or class imbalance inherent in the domains. The advice is provided as logical statements or privileged features that are then explicitly considered by an iterative learning algorithm at every update. Our empirical evidence shows that human advice can effectively accelerate learning in noisy, structured domains where so far humans have been merely used as labelers or as designers of the (initial or final) structure of the model. (A hedged sketch of such an advice-aware update also follows the listing below.)
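The query-when-uncertain idea described in "Planning with Actively Eliciting Preferences" can be pictured with a small sketch. Everything here (Method, choose_method, ask_expert, the uncertainty threshold) is an illustrative assumption rather than the authors' implementation: the planner ranks the applicable HTN decomposition methods with its own heuristic and only asks the expert when its top candidates are too close to tell apart.

```python
# Hypothetical sketch: deciding when to query the expert during HTN decomposition.
# All names below are illustrative assumptions, not the paper's actual code.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Method:
    """One applicable decomposition method for the current HTN task."""
    name: str
    heuristic_score: float  # planner's own estimate of quality (higher is better)


def choose_method(
    applicable: List[Method],
    ask_expert: Callable[[List[Method]], Method],
    uncertainty_threshold: float = 0.1,
) -> Method:
    """Pick a decomposition method, querying the expert only when the planner
    is genuinely unsure, i.e., its top candidates score nearly the same."""
    ranked = sorted(applicable, key=lambda m: m.heuristic_score, reverse=True)
    if len(ranked) == 1:
        return ranked[0]

    # Uncertainty proxy: gap between the two best-scoring methods.
    gap = ranked[0].heuristic_score - ranked[1].heuristic_score
    if gap < uncertainty_threshold:
        # Guidance is most useful here: the planner cannot distinguish the
        # candidates on its own, so elicit the expert's preference.
        return ask_expert(ranked[:3])  # show only a few top candidates

    # Confident case: decide alone and spare the expert a query.
    return ranked[0]
```

The "gap between the top two scores" test is a stand-in for whatever uncertainty or utility measure the planner actually uses to decide that guidance would be valuable at a given decision point.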
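Similarly, the advice-at-every-update idea in "Human-Guided Learning for Probabilistic Logic Models" can be sketched as a blended per-example gradient. The names (Example, Advice, advice_signal, alpha) and the specific blending rule are assumptions for illustration only; the paper's actual algorithm operates over relational probabilistic logic models.

```python
# Hypothetical sketch of an advice-aware iterative update: at every step the
# per-example gradient blends the usual data signal with a term reflecting
# whether expert advice prefers or disfavours labelling that example positive.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    features: dict  # relational facts / features describing the example
    label: int      # observed label in {0, 1}


@dataclass
class Advice:
    applies: Callable[[Example], bool]  # does the advice clause cover this example?
    prefers_positive: bool              # covered examples should be labelled positive


def advice_signal(x: Example, advice: List[Advice]) -> float:
    """Roughly +1 if advice pushes the example toward positive, -1 toward negative,
    0 if no advice applies."""
    pushed = [(+1.0 if a.prefers_positive else -1.0) for a in advice if a.applies(x)]
    return sum(pushed) / len(pushed) if pushed else 0.0


def gradient(x: Example, predicted_prob: float, advice: List[Advice],
             alpha: float = 0.5) -> float:
    """Blend the data gradient (label - P(y=1|x)) with the advice signal.
    alpha controls how much weight the expert's advice receives at each update."""
    data_grad = x.label - predicted_prob
    return (1.0 - alpha) * data_grad + alpha * advice_signal(x, advice)
```

With alpha set high, advice can override noisy or systematically mislabeled examples wherever it applies; alpha = 0 recovers a purely data-driven update.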

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works held by the creator(s) or their heirs, or other third parties may apply. All rights are reserved unless otherwise indicated by the copyright owner(s).