Revolution of AI Models?

By Houbing Herbert Song, Associate Professor of Computer Science, Embry-Riddle Aeronautical University

Since the term artificial intelligence (AI) was coined in 1956, AI research has gone through three booms and two winters. The third boom, which began in 2012, has brought a massive resurgence of AI research and applications. AI has had impressive successes in solving specific problems with specific solutions, spanning application domains such as agriculture, aeronautics, civil infrastructure, energy, healthcare, manufacturing, the military, and smart cities. These applications can be categorized into three waves: describe (handcrafted knowledge), categorize (statistical learning), and explain (contextual adaptation); or into four waves: Internet AI, business AI, perception AI, and autonomous AI.

The future of AI research lies in Artificial General Intelligence (AGI), which is on par with human capabilities, and Artificial Superintelligence (ASI), which surpasses them. AGI should be explainable, transparent, and transferrable, particularly for safety-critical applications. Explainable AI (XAI), which contrasts with the concept of the "black box," is AI whose results humans can understand. Transparent AI is AI that expresses its reasoning, in some form humans can follow, before arriving at its conclusion. Transferrable AI, based on the concept of transfer learning, is AI whose learned knowledge can be reused in new situations, tasks, and environments.

One promising path to AGI is to model artificial neurons so that they capture the detailed cellular behavior of biological neurons and their decision-making process. Two approaches to AI have evolved over the years: symbolic AI, which attempts to represent human knowledge explicitly in a declarative form (i.e., facts and rules), and connectionist AI, which is biological in nature and attempts to understand how the human brain works at the neural level using artificial neural networks (ANNs).
Integrating connectionist AI with symbolic AI, which requires a deep understanding of both the biological foundations of the human brain and the mathematical foundations of AI, has the potential to achieve AGI by enabling explainable, transparent, and transferrable AI. Unifying the mathematical and biological foundations of AI is a grand challenge for AGI and ASI, and computer scientists and neuroscientists must collaborate to bridge the gap between them. Addressing this grand challenge will trigger the revolution of AI models.
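The contrast between the two approaches can be made concrete with a toy sketch (a hypothetical illustration, not any specific system): the same concept is expressed once as an explicit declarative rule and once as a single artificial neuron that learns it from examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Symbolic AI: knowledge represented explicitly as a declarative rule.
def symbolic_classify(point):
    # Rule (a stated "fact" about the domain): a point is positive
    # iff its second coordinate is above zero.
    return point[1] > 0

# Connectionist AI: the same concept learned implicitly from labeled
# examples by a single artificial neuron (a perceptron).
X = rng.normal(size=(200, 2))
X = X[np.abs(X[:, 1]) > 0.2]         # keep a margin so learning converges
y = (X[:, 1] > 0).astype(int)

w, b = np.zeros(2), 0.0
for _ in range(50):                  # classic perceptron learning rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

# The learned neuron and the hand-written rule agree on the data.
agreement = np.mean([(w @ xi + b > 0) == symbolic_classify(xi) for xi in X])
```

The symbolic version is transparent by construction; the connectionist version acquires the same behavior without the rule ever being written down, which is precisely why integrating the two is attractive.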

The Security and Optimization for Networked Globe Laboratory (SONG Lab), which I direct, developed a unique AI system for drone detection. After our system was featured by popular news media outlets, including IEEE GlobalSpec's Engineering360, the Association for Uncrewed Vehicle Systems International (AUVSI), Security Magazine, Fox News, U.S. News & World Report, The Washington Times, New Atlas, Battle Space, and Defense Daily, our AI research attracted wide attention from government, the military, and industry. I immediately recognized the need to develop an AGI system for the quickest possible event detection to meet their needs.

The first challenge was that such an AGI system should be domain agnostic, i.e., applicable to multiple application domains such as threat detection, intrusion detection, vulnerability detection, malware detection, anomaly detection, bias detection, and signal detection. The second challenge was explainability: the system should be able to explain why it arrives at a specific decision while maintaining a high level of learning performance (prediction accuracy and latency). I decided to start with research on the mathematical foundations of deep learning. Fortunately, we made a breakthrough: we discovered "memory orthogonality," in which an ANN rotates the neural representations of new inputs to eliminate interference with one another, and we mathematically proved the role of "rotating" memories in enabling AI, particularly incremental learning, a method of machine learning in which input data is continuously used to extend an existing model's knowledge, i.e., to train the model further. Our discovery was published in the IEEE Internet of Things Journal, and our mathematical proof was published on TechRxiv.
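The intuition behind orthogonal memories can be sketched in a few lines; this is a toy illustration of the general idea, not the model from our papers. If each new representation is stored only after projecting out its overlap with existing memories (a Gram-Schmidt step, one simple way to "rotate" it into an unused direction), the stored traces end up mutually orthogonal, so recalling one cannot interfere with the others.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonalize(new_vec, stored):
    """Remove from new_vec every component lying along an already-stored
    memory, then normalize. The result is orthogonal to all stored
    traces, so it adds zero interference to them."""
    v = new_vec.astype(float).copy()
    for m in stored:
        v -= (v @ m) * m            # project out overlap with memory m
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-12 else v

# Store three (generally correlated) inputs as orthogonal memory traces.
memories = []
for _ in range(3):
    x = rng.normal(size=8)
    memories.append(orthogonalize(x, memories))

# Interference between any two distinct memories is numerically zero.
overlaps = [abs(memories[i] @ memories[j])
            for i in range(3) for j in range(3) if i != j]
```

In an incremental-learning setting, this is why rotated representations help: new knowledge lands in directions the old knowledge does not occupy, so extending the model need not overwrite it.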
At about the same time, Princeton University research in mice showed that neural representations of sensory information are rotated 90 degrees to transform them into memories; the findings were published in Nature Neuroscience. It is interesting to note that the idea of "rotating memories" resembles the historical practice of "cross-writing," in which lines of penmanship were written both horizontally and vertically to keep them legible while conserving paper and minimizing postage expenses. The coincidence of the SONG Lab's mathematical discovery and Princeton University's biological discovery of "memory orthogonality" unlocks the possibility of unifying the mathematical and biological foundations of AI. More discoveries on these foundations will trigger the revolution of AI models, towards AGI and ASI.

The third challenge was transfer learning. AI has met with impressive success when learning from massive amounts of data, i.e., big data. However, many applications do not have big data available, which necessitates data-efficient machine learning. We decided to focus on transfer learning, i.e., modifying an ANN trained for one set of tasks to perform a new task using only a small amount of training data. Our understanding and perspectives on transfer learning were published in IEEE Transactions on Artificial Intelligence. We have investigated distant-domain transfer learning and cross-modality transfer learning, and have applied transfer learning in bioinformatics and smart cities. We are now investigating transfer learning of control policies and developing similarity metrics to determine whether transfer between two situations, tasks, or environments is likely to succeed.
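The basic transfer-learning recipe described above can be sketched as follows. This is a hedged, minimal illustration, assuming a frozen "pretrained" feature extractor (here just a fixed random matrix standing in for layers trained on a large source task) and a new linear head trained on only a handful of target-task examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a feature extractor pretrained on a large source task;
# its weights stay frozen during transfer.
W_frozen = rng.normal(size=(16, 4))

def features(x):
    return np.tanh(x @ W_frozen)

# Small target-task dataset: only 20 labeled examples.
X = rng.normal(size=(20, 16))
y = (features(X) @ rng.normal(size=4) > 0).astype(float)

# Transfer step: train only a new linear head on the frozen features,
# by gradient descent on the logistic loss.
w = np.zeros(4)
F = features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))     # sigmoid predictions
    w -= 0.5 * F.T @ (p - y) / len(y)      # logistic-loss gradient step

train_acc = np.mean((F @ w > 0) == (y > 0.5))
```

Because only the small head is trained, the few target examples suffice; this data efficiency is the point of reusing the source-task representation.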

AI holds the potential to transform our society through increased economic prosperity and improved quality of life. At the same time, a deep understanding of AI, including its mathematical and biological foundations, is required to build trustworthy and responsible AI. The revolution of AI models will be triggered by more discoveries, similar to "memory orthogonality," that unify the mathematical and biological foundations of AI.

Houbing Herbert Song, Ph.D.
Web of Science Highly Cited Researcher
ACM Distinguished Speaker
Director, Security and Optimization for Networked Globe Laboratory (SONG Lab)
Associate Professor of Computer Science, Embry-Riddle Aeronautical University