The Third Wave of AI: Neuro-symbolic AI


By Houbing Herbert Song, Ph.D., IEEE Fellow, AAIA Fellow, ACM Distinguished Member|Associate Professor of Information Systems, University of Maryland, Baltimore County

AI is advancing rapidly. According to The Impact of Technology in 2024 and Beyond: an IEEE Global Study, AI helps detect and predict events quickly, such as outbreaks, unauthorized or unsafe drone operations, bias, cybersecurity threats, and malicious activities, driving innovation and competition across a range of application domains including environmental sustainability, space technology and exploration, smart cities, manufacturing, agriculture, energy, healthcare and medicine, and transportation. On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. However, state-of-the-art (SOTA) AI algorithms face three major challenges: they lack generalizability (i.e., AI models are only as good as the data they are trained on), transparency and interpretability (i.e., AI models are “black box” models: opaque, non-intuitive, and difficult for people to understand), and robustness (i.e., imperceptible perturbations to AI inputs can alter their outputs). AI systems of the future will need to be strengthened so that humans can understand and trust their behaviors, and so that they generalize to new situations and deliver robust inferences. Neuro-symbolic AI, which integrates neural networks with symbolic representations, has emerged as a promising approach to address these challenges of generalizability, interpretability, and robustness.

“Neuro-symbolic” bridges the gap between two distinct AI approaches: “neuro” and “symbolic.” On the one hand, the word “neuro” in its name implies the use of neural networks, especially deep learning, which is sometimes also referred to as sub-symbolic AI. This technique is known for its powerful learning and abstraction abilities, allowing models to find underlying patterns in large datasets or learn complex behaviors. On the other hand, “symbolic” refers to symbolic AI, which is based on the idea that intelligence can be represented using symbols, such as logic-based rules or other representations of knowledge.
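The division of labor described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (all function names, labels, and rules are invented for this example, not taken from any particular system): a stubbed neural perception stage maps raw input to class probabilities, and a symbolic rule layer then reasons over the perceived symbols to reach a decision that a human can inspect and trace.

```python
# Minimal neuro-symbolic sketch (illustrative only; the "neural" stage
# is stubbed with fixed scores in place of a trained deep network).

def neural_perception(image_features):
    """Stand-in for a trained neural network: returns class probabilities."""
    # A real system would run a deep model here; fixed scores keep the
    # example self-contained and runnable.
    return {"stop_sign": 0.92, "speed_limit": 0.05, "yield": 0.03}

# Symbolic knowledge: human-readable (condition, action) rules.
SYMBOLIC_RULES = [
    (lambda symbols: "stop_sign" in symbols, "halt_vehicle"),
    (lambda symbols: "yield" in symbols, "slow_down"),
]

def neuro_symbolic_decide(image_features, threshold=0.5):
    # Neural stage: learn soft predictions from data.
    probs = neural_perception(image_features)
    # Lift sub-symbolic scores into discrete symbols.
    symbols = {label for label, p in probs.items() if p >= threshold}
    # Symbolic stage: apply explicit rules, yielding a traceable decision.
    for condition, action in SYMBOLIC_RULES:
        if condition(symbols):
            return action, symbols
    return "proceed", symbols

action, evidence = neuro_symbolic_decide(None)
print(action, evidence)  # the returned symbols serve as a justification
```

The interpretability payoff is visible in the return value: alongside the action, the system reports which symbols fired, so a reviewer can audit exactly which rule produced the decision rather than probing an opaque network.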

Neuro-symbolic AI has the potential to create safe, secure, and trustworthy AI systems across domains including healthcare and medicine, finance, criminal justice, autonomous and cyber-physical systems, and high-performance computing. However, transformative advances are needed to enable its safe, secure, and trustworthy development and use.

In the history of AI, the first wave emphasized handcrafted knowledge: computer scientists focused on constructing expert systems that captured the specialized knowledge of experts in rules the system could then apply to situations of interest. The second wave emphasized statistical learning: computer scientists focused on developing deep learning algorithms based on neural networks to perform a variety of classification and prediction tasks. The third wave emphasizes the integration of symbolic reasoning with deep learning, i.e., neuro-symbolic AI, and computer scientists now focus on designing, building, and verifying safe, secure, and trustworthy AI systems.

The Security and Optimization for Networked Globe Laboratory (SONG Lab, http://www.songlab.us/), directed by an IEEE Fellow and a Clarivate/Web of Science Highly Cited Researcher, has focused on investigating neuro-symbolic approaches to strengthening AI towards safe, secure, and trustworthy AI systems. The SONG Lab discovered and mathematically proved the role of “rotating memories” in enabling AI, a finding verified by Princeton neuroscientists’ study published in Nature Neuroscience. This breakthrough has triggered a revolution in AI models. Testing and evaluation, including post-deployment performance monitoring, help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and comply with applicable Federal laws and policies. In response, the SONG Lab has investigated the verification, validation, testing, and evaluation of neuro-symbolic AI systems. To overcome the challenge of generalizability, the lab has focused on transfer learning (particularly distant-domain transfer learning and cross-modality transfer learning), a key to artificial general intelligence (AGI), and its applications in healthcare and medicine, smart buildings, waste classification, and stress and sentiment classification. Recently, the lab has been investigating the transfer from imprecise and abstract models to autonomous technologies, working towards an emerging subfield of neuro-symbolic AI: neuro-symbolic transfer learning. To overcome the challenge of interpretability, the lab has developed an explainable AI (XAI) approach for smart manufacturing. To overcome the challenge of robustness, it has developed robust AI algorithms that handle both environmental and adversarial perturbations.
The SONG Lab has also investigated neuro-symbolic reinforcement learning algorithms and architectures that combine the strengths of neuro-symbolic AI and reinforcement learning for optimal decision-making. The lab is currently making progress in both foundational neuro-symbolic AI and use-inspired application domains, aiming to benefit society, promote the communication of research findings, and foster the exchange of best practices among neuro-symbolic AI researchers and practitioners, thereby building a greater neuro-symbolic AI community.