Efficient Neural-Symbolic Reasoning Via Reinforcement Learning


Cancelled: a recording will be uploaded, but the live tutorial will not take place


An ICAPS'22 Tutorial

(half day)

June 17, 2022

Description

Neural-symbolic reasoning has a wide range of applications, including natural language understanding, explainable inference and reasoning, and medical treatment. Integrating reinforcement learning (RL) with neural-symbolic reasoning has a long history and has developed along several major directions. The first direction applies neural operators such as tensor calculus to simulate logical reasoning: binary tensors over constant domains represent the predicates, and tensor chain products simulate logic clauses (see the sketch below). This family of methods generalizes well across different tasks, but the reasoning path remains completely opaque. The second direction starts from a predefined set of task-specific logic clauses and reduces the task to a subset-selection problem: a subset of clauses is selected from the predefined set, and deep RL is used to search for a relaxed solution. This family yields an explicit solution path through the selected clauses, but the task-specificity of the predefined clause set restricts generalizability. The third direction is neural relational RL, which integrates RL with statistical relational learning and connects RL with classical AI approaches to knowledge representation and reasoning. More recently, researchers have proposed incorporating relational inductive biases, deep relational RL, and strategic object-oriented RL, and others have likewise adopted deep learning approaches for dealing with relations and reasoning.
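To make the tensor-based first direction concrete, here is a minimal sketch (an illustrative example of ours, not code from any particular system): a binary predicate over a small constant domain is stored as a 0/1 matrix, and a two-literal clause is simulated by a tensor chain product.

```python
import numpy as np

# Domain of constants: {alice, bob, carol}.
constants = ["alice", "bob", "carol"]
n = len(constants)
idx = {c: i for i, c in enumerate(constants)}

# A binary predicate is a |D| x |D| 0/1 tensor: parent[i, j] = 1 iff parent(i, j) holds.
parent = np.zeros((n, n), dtype=np.int8)
parent[idx["alice"], idx["bob"]] = 1    # parent(alice, bob)
parent[idx["bob"], idx["carol"]] = 1    # parent(bob, carol)

# The clause grandparent(X, Z) <- parent(X, Y), parent(Y, Z) becomes a tensor
# chain product: matrix multiplication sums out the shared variable Y, and
# thresholding restores a Boolean truth table.
grandparent = (parent @ parent > 0).astype(np.int8)

assert grandparent[idx["alice"], idx["carol"]] == 1  # grandparent(alice, carol) is derived
```

The matrix product performs the existential quantification over the shared variable Y in the clause body; differentiable systems typically replace the hard threshold with a soft nonlinearity so that clause weights can be learned end to end.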

Another important area of neural-symbolic reasoning and inference using RL is multi-agent planning and scheduling, which introduces additional challenges such as scalability with multiple decision-makers, partial observability, and uncertainty. Several important research challenges remain unresolved. The first is efficiently compiling domain knowledge into a compact form using knowledge compilation engines such as logic-based decision diagrams. The second is developing modular techniques to integrate the compiled domain knowledge with deep multi-agent RL algorithms. The third is developing computationally efficient algorithms to query the compiled knowledge for accelerated episode simulation in RL (see the sketch below).
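As a toy illustration of these challenges (a minimal sketch under strong simplifying assumptions: a real knowledge compilation engine would build a decision diagram such as a BDD or SDD rather than an explicit set of models), the snippet below "compiles" a domain constraint offline into its set of satisfying joint actions, which an RL simulator can then query in constant time on every step.

```python
from itertools import product

def compile_models(n_agents, constraint):
    """Offline 'knowledge compilation': enumerate the models of a constraint
    over joint binary actions. Stands in for a decision-diagram compiler."""
    return {a for a in product((0, 1), repeat=n_agents) if constraint(a)}

# Example domain rule: at most one agent may use the shared resource at a time.
mutex_models = compile_models(3, lambda a: sum(a) <= 1)

def valid_joint_action(action):
    """Online query during episode simulation: O(1) set membership against the
    compiled knowledge, instead of re-evaluating the logic on every step."""
    return tuple(action) in mutex_models

assert valid_joint_action((0, 1, 0))        # one agent on the resource: allowed
assert not valid_joint_action((1, 1, 0))    # two agents collide: ruled out
```

The design point is the split between a one-time compilation cost and cheap repeated queries; with many agents, the explicit model set used here would blow up, which is exactly why compact logic-based decision diagrams are the subject of the first challenge.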

This tutorial aims to present state-of-the-art progress in single-agent and multi-agent neural-symbolic reasoning and inference, identify existing challenges and bottlenecks, and point to promising future research directions and application potential.

Target audience

The primary target audience for this tutorial is students and researchers interested in neural-symbolic reasoning and planning. It should also be of interest to researchers working on applied RL. We also anticipate that the tutorial will appeal to industry participants interested in applied work and to decision-making practitioners who want to learn more about the specific techniques employed in this important class of decision-making problems.

The tutorial aims to be self-contained: it provides all the background needed to understand the topics discussed. However, a basic understanding of neural-symbolic reasoning, RL, and planning will aid comprehension.

Outline

  • Motivations
    • Motivating domains: control, robotics, e-commerce, autonomous driving, medical treatment, etc.
    • Goal: provide algorithms for neural-symbolic reasoning with RL
    • What are the challenges of designing and deploying RL-based neural-symbolic reasoning algorithms?
  • Introduction to neural-symbolic reasoning and RL
    • Neural-symbolic reasoning
    • Single-agent RL and Multi-agent RL
    • Relational RL
  • Single-agent RL-based neural-symbolic reasoning
    • Definition and examples
    • Some representative algorithms
    • Experimental results
  • Multi-agent RL-based neural-symbolic reasoning
    • Definition and examples
    • Some representative algorithms
    • Experimental results
  • Emerging directions and challenges
    • Natural language understanding and reasoning
    • Explainable logic inference and reasoning
    • Medical treatment
  • General discussion/questions

Bios

Bo Liu

Bo Liu is a tenure-track assistant professor in the Computer Science Department at Auburn University. He obtained his Ph.D. in 2015 from the Autonomous Learning Lab at the University of Massachusetts Amherst, co-directed by Drs. Sridhar Mahadevan and Andrew Barto. His research areas cover decision-making under uncertainty, human-aided machine learning, symbolic AI, trustworthiness and interpretability in machine learning, and their applications to big data, autonomous driving, and healthcare informatics. He is a recipient of the Tencent Faculty Research Award (2017) and the Amazon Faculty Research Award (2018). He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), an editorial board member of Machine Learning (MLJ), an IEEE senior member, and a regular Area Chair/Senior PC member of several flagship AI conferences, including UAI, AAAI, and IJCAI.

Daoming Lyu

Daoming Lyu is a Ph.D. student in the Dept. of Computer Science and Software Engineering at Auburn University. He works with Prof. Bo Liu in the Computational Autonomous Learning Lab. His research interests include reinforcement learning, trustworthy decision-making, symbolic AI, and healthcare informatics.

Jianshu Chen

Jianshu Chen is a principal researcher at Tencent AI Lab, Bellevue, WA. Before that, he was a researcher at Microsoft Research, Redmond, WA. He completed his Ph.D. in Electrical Engineering at the University of California, Los Angeles (UCLA) in 2014. His primary research interests cover different aspects of machine learning, including self-supervised learning, reinforcement learning, neuro-symbolic reasoning, and natural language processing.

Akshat Kumar

Akshat Kumar is an Associate Professor and Lee Kong Chian Fellow at the School of Computing and Information Systems at Singapore Management University (SMU). His research interests lie at the intersection of Artificial Intelligence (AI) and Machine Learning (ML). His work has received the best dissertation award at ICAPS-14 and from the School of Computer Science, University of Massachusetts Amherst, a best application paper award at ICAPS-14, the best paper award in the AAAI-17 computational sustainability track, and the best demo paper award at AAMAS-21. He was also featured in IEEE Intelligent Systems magazine's 2018 "AI's 10 to Watch" list. He was the co-chair of the ICAPS-19 doctoral consortium, co-chair of the ICAPS-18 planning and learning track, and part of the AAMAS-16 local organizing committee. He is also one of the conference chairs of ICAPS 2022.

Jiajing Ling

Jiajing Ling is a Ph.D. candidate in Computer Science at Singapore Management University. He is supervised by Prof. Akshat Kumar. His research fields include reinforcement learning, large-scale multi-agent decision making, urban system optimization, and propositional logic.