AAAI 2022 Tutorial:
Reasoning on Knowledge Graphs: Symbolic or Neural?
This is the website of the AAAI 2022 tutorial, Reasoning on Knowledge Graphs: Symbolic or Neural?
Introduction
Knowledge graphs encode real-world facts and are critical in a variety of applications and domains such as natural language understanding, recommender systems, drug discovery, and image understanding. A fundamental problem on knowledge graphs is to predict missing facts by reasoning with existing facts, a.k.a. knowledge graph reasoning. This problem has been extensively studied in different communities of AI, including the general AI community (AAAI, IJCAI), the machine learning community (ICML, NeurIPS, ICLR), the data mining community (KDD, WSDM, WWW), and the NLP community (ACL, EMNLP, NAACL), which focus either on the development of fundamental methodology or on solutions to important real-world problems. A systematic introduction to knowledge graph reasoning that summarizes the progress across these communities would therefore benefit a broad audience. In this tutorial, we give a comprehensive introduction to different methods for knowledge graph reasoning, including traditional symbolic logic rule-based methods, neural methods, neural-symbolic methods, logic rule induction approaches, and different applications. This tutorial will benefit both junior and senior researchers, whether they are interested in methodology development or in applications.
The slides of the tutorial are available at this link.
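As a concrete illustration of the link-prediction view of knowledge graph reasoning described above, here is a minimal sketch that scores candidate facts with a TransE-style distance (one of the embedding models covered in Part 1 of the outline). The entities, the relation, and the embedding values are made-up toy examples for illustration only, not material from the tutorial slides.

```python
import numpy as np

# Toy embeddings (made-up values) for a TransE-style example.
# TransE models a true fact (head, relation, tail) as head + relation ≈ tail,
# so a smaller distance means a more plausible triple.
entity_emb = {
    "Montreal": np.array([0.9, 0.1]),
    "Canada":   np.array([1.0, 1.0]),
    "Paris":    np.array([0.1, 0.9]),
}
relation_emb = {
    "located_in": np.array([0.1, 0.9]),
}

def transe_score(head, relation, tail):
    """Negative L2 distance; a higher score means a more plausible triple."""
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return -np.linalg.norm(h + r - t)

# Rank candidate tail entities for the query (Montreal, located_in, ?).
candidates = ["Canada", "Paris"]
ranking = sorted(candidates,
                 key=lambda t: transe_score("Montreal", "located_in", t),
                 reverse=True)
print(ranking)  # "Canada" ranks above "Paris" under these toy embeddings
```

In practice the embeddings are learned from the observed triples rather than set by hand, and the other models in Part 1 (ComplEx, RotatE, QuatE) replace the translational scoring function with different operations in complex or quaternion space.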
Outline

Introduction [slides]
 Part 1: Neural Methods [slides]
 Knowledge graph embeddings
 TransE, ComplEx, RotatE, QuatE
 Graph neural networks
 RGCN, GraIL
 Benchmark datasets and evaluation
 Part 2: Symbolic Logic Methods [slides]
 Logic programming
 Forward chaining, backward chaining (a toy forward-chaining sketch appears after this outline)
 Bayesian logic programming
 ProbLog, DeepProbLog
 Markov logic programming
 Markov logic networks
 Stochastic logic programming
 TensorLog
 Part 3: Neural-Symbolic Methods [slides]
 Markov logic programming based
 pLogicNet
 Stochastic logic programming based
 ProPPR
 Part 4: Logic Rule Induction Methods [slides]
 Inductive logic programming
 Neural-powered ILP
 Differentiable ILP, Neural Theorem Provers, RNNLogic
 Neural ILP with SLP
 NeuralLP, DeepPath, NBFNet
 Part 5: Summary and Future Directions [slides]
 Summary
 Complex logical query answering
 Natural language processing
 Recommendation
 Drug discovery
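To complement the embedding example above, the following is a minimal sketch of the forward-chaining idea listed under Part 2, applied to a toy knowledge graph. The single rule (grandparent(X, Z) <- parent(X, Y), parent(Y, Z)) and the facts are illustrative assumptions, not content from the tutorial material.

```python
# Minimal forward-chaining sketch over a toy knowledge graph.
# Facts are (head, relation, tail) triples; the single illustrative rule is
# grandparent(X, Z) <- parent(X, Y), parent(Y, Z).
facts = {
    ("alice", "parent", "bob"),
    ("bob", "parent", "carol"),
}

def forward_chain(facts):
    """Repeatedly apply the rule until no new facts can be derived."""
    derived = set(facts)
    while True:
        new = set()
        for (x, r1, y) in derived:
            for (y2, r2, z) in derived:
                if r1 == "parent" and r2 == "parent" and y == y2:
                    triple = (x, "grandparent", z)
                    if triple not in derived:
                        new.add(triple)
        if not new:
            return derived
        derived |= new

print(forward_chain(facts))
# The output includes ("alice", "grandparent", "carol"), derived from the rule.
```

The symbolic systems covered in Part 2 (ProbLog, Markov logic networks, TensorLog, and others) build on this kind of rule application but attach probabilities or differentiable weights to the rules and facts.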
Speakers
 Meng Qu
 meng.qu at umontreal.ca
 Meng Qu is a 4th-year Ph.D. candidate at Mila - Quebec AI Institute, supervised by Prof. Jian Tang. His research focuses on reasoning over graph-structured data such as knowledge graphs. He obtained his M.Sc. in computer science from the University of Illinois at Urbana-Champaign and his B.Sc. in computer science from Peking University. He has published several papers on combining deep learning and statistical relational learning for knowledge reasoning in top-tier venues, including GMNN (ICML 2019), pLogicNet (NeurIPS 2019), and RNNLogic (ICLR 2021).
 Zhaocheng Zhu
 Zhaocheng.zhu at umontreal.ca
 Zhaocheng Zhu is currently a 4th-year Ph.D. candidate at Mila - Quebec AI Institute, advised by Prof. Jian Tang. His research mainly focuses on large-scale knowledge graphs, including algorithms and systems for knowledge graph reasoning, as well as applications of knowledge graphs in drug discovery. He received his B.Sc. in computer science (with honors) from Peking University in 2018. He has been actively working on knowledge graphs and graph representation learning since 2018, with two publications in top-tier venues. He has led two open-source projects, GraphVite and TorchDrug, which have received wide recognition in the graph representation learning community. He is also a contributor to PyTorch Geometric and Gensim.
 Jian Tang
 Jian.tang at hec.ca
 Jian Tang has been an assistant professor at Mila - Quebec AI Institute, a research institute focusing on deep learning and reinforcement learning led by Turing Award winner Yoshua Bengio, since December 2017. His research focuses on graph representation learning, graph neural networks, drug discovery, and knowledge graphs. He was named to the first cohort of Canada CIFAR Artificial Intelligence Chairs (CIFAR AI Research Chair). He was a research fellow at the University of Michigan and Carnegie Mellon University, and a researcher at Microsoft Research Asia for two years. He received the best paper award at ICML’14 and was nominated for the best paper award at WWW’16. Most of his papers are published in top-tier venues across artificial intelligence, machine learning, and data mining conferences (ICML, NeurIPS, ICLR, AAAI, IJCAI, KDD, WWW, and WSDM). He co-organized tutorials on graph representation learning at KDD 2017 and AAAI 2019, and organized workshops on graph representation learning at SDM 2019, CIKM 2019, AAAI 2020, and ICML 2020. He has published several representative works on graph representation learning, including LINE, LargeVis, and RotatE.