Optimisation in Neurosymbolic Learning Systems
In the last few years, Artificial Intelligence (AI) has reached the public consciousness through high-profile applications such as chatbots, image generators, speech synthesis and transcription. These are all due to the success of deep learning: Machine learning algorithms that learn tasks from massive amounts of data. The neural network models used in deep learning involve many parameters, often on the order of billions. These models often fail on tasks that computers are traditionally very good at, like calculating arithmetic expressions, reasoning about many different pieces of information, planning and scheduling complex systems, and retrieving information from a database. These tasks are traditionally solved using symbolic methods in AI based on logic and formal reasoning.
Neurosymbolic AI instead aims to integrate deep learning with symbolic AI. This integration holds many promises, such as decreasing the amount of data required to train the neural networks, improving the explainability and interpretability of answers given by models, and verifying the correctness of trained systems. We mainly study neurosymbolic learning, where we have, in addition to data, background knowledge expressed using symbolic languages. How do we connect the symbolic and neural components to communicate this knowledge to the neural networks?
We consider two answers: Fuzzy and probabilistic reasoning. Fuzzy reasoning studies degrees of truth. A person can be very or somewhat tall: Tallness is not a binary concept. By contrast, probabilistic reasoning studies the probability that something is true or will happen. A coin has a 0.5 probability of landing heads. We never say it landed on "somewhat heads". What happens when we use fuzzy (part I) or probabilistic (part II) approaches to neurosymbolic learning? Moreover, do these approaches use the background knowledge we expect them to?
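The contrast between the two semantics can be made concrete in a few lines. The sketch below is illustrative only: it assumes the product and Gödel t-norms for fuzzy conjunction, and independence for the probabilistic case; these are choices to be examined, not fixed conventions.

```python
def fuzzy_and_product(a, b):
    """Product t-norm: conjunction of two degrees of truth in [0, 1]."""
    return a * b

def fuzzy_and_godel(a, b):
    """Goedel t-norm: an alternative fuzzy conjunction."""
    return min(a, b)

def prob_and_independent(p, q):
    """Probability that two independent events both occur."""
    return p * q

# "Somewhat tall" (0.7) and "somewhat heavy" (0.6) combine to a degree of truth:
print(fuzzy_and_product(0.7, 0.6))  # 0.42 under the product t-norm
print(fuzzy_and_godel(0.7, 0.6))    # 0.6 under the Goedel t-norm

# Two fair coins both landing heads is a probability, not a degree of truth:
print(prob_and_independent(0.5, 0.5))  # 0.25
```

Note that the product t-norm and independent-event probability compute the same number while meaning different things: a degree of truth of one statement versus the chance of a crisp event.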
Our first research question studies how different forms of fuzzy reasoning combine with learning. We find surprising results, such as a connection to the Raven paradox, which states that we confirm "all ravens are black" when we observe a green apple. In this study, we give our neural network a training objective created from the background knowledge. However, we do not use the background knowledge when we deploy our models after training. Our second research question therefore studies how to use background knowledge in deployed models. To this end, we develop a new neural network layer based on fuzzy reasoning.
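How a logical rule can become a training objective may be sketched as follows. This is a hypothetical illustration assuming the Reichenbach implication I(a, b) = 1 - a + a*b, one of many fuzzy implication operators such a study compares; the function names are introduced here purely for illustration.

```python
def reichenbach_implies(a, b):
    """Fuzzy truth of 'a implies b' under the Reichenbach operator."""
    return 1.0 - a + a * b

def rule_loss(raven_degree, black_degree):
    """Loss that is 0 when the rule 'raven -> black' is fully satisfied."""
    return 1.0 - reichenbach_implies(raven_degree, black_degree)

# A confident raven prediction that is not black is penalised heavily:
print(rule_loss(0.9, 0.1))  # high loss, about 0.81

# A green apple (low raven degree) satisfies the rule almost vacuously,
# echoing the Raven paradox: it "confirms" the rule without any raven.
print(rule_loss(0.1, 0.1))  # low loss, about 0.09
```

Because the loss is differentiable in both degrees, gradient descent can lower it either by making ravens blacker or by making inputs less raven-like, which is one way the Raven paradox surfaces during learning.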
The remaining research questions study probabilistic approaches to neurosymbolic learning. Probabilistic reasoning is a natural fit for neural networks, which we usually train to be probabilistic. However, probabilistic approaches come at a cost: They are expensive to compute and do not scale well to large tasks. In our third research question, we study how to connect probabilistic reasoning with neural networks by sampling to estimate averages. Sampling circumvents computing reasoning outcomes for all input combinations. In the fourth and final research question, we study scaling probabilistic neurosymbolic learning to much larger problems than was previously possible. Our insight is to train a neural network to predict the result of probabilistic reasoning. We perform this training process with just the background knowledge: We do not collect data.
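The sampling idea can be illustrated with a toy version of a common neurosymbolic setup: two digit classifiers whose predictions must sum to a target. The setup and function names below are hypothetical; the point is that sampling estimates the probability of the symbolic outcome without enumerating all 100 digit pairs.

```python
import random

def sample_digit(probs, rng):
    """Draw one digit from a categorical distribution over 0..9."""
    return rng.choices(range(10), weights=probs, k=1)[0]

def estimate_sum_probability(probs_a, probs_b, target, n_samples=10_000, seed=0):
    """Monte Carlo estimate of P(digit_a + digit_b == target)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        if sample_digit(probs_a, rng) + sample_digit(probs_b, rng) == target:
            hits += 1
    return hits / n_samples

# Uniform beliefs over both digits: the exact answer is 10/100 = 0.1,
# since exactly ten ordered pairs (0,9), (1,8), ..., (9,0) sum to 9.
uniform = [0.1] * 10
print(estimate_sum_probability(uniform, uniform, target=9))  # close to 0.1
```

The estimate converges as the sample count grows, whereas exact enumeration grows with the number of input combinations; that trade-off is what makes sampling attractive for larger tasks.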
How is this related to optimisation? All research questions are related to optimisation problems. Within neurosymbolic learning, optimisation with popular methods like gradient descent undertakes a form of reasoning. There is ample opportunity to study how this optimisation perspective improves our neurosymbolic learning methods. We hope this dissertation provides some of the answers needed to make practical neurosymbolic learning a reality: Where practitioners provide both data and knowledge that the neurosymbolic learning methods use as efficiently as possible to train the next generation of neural networks.