Combining Deep Inductive Logic Programming with Reinforcement Learning (arXiv:2308.16210v1 [cs.LG])
by instadatahelp | Sep 1, 2023 | AI Blogs

One way to expose hierarchical levels of understanding in a machine learning model is inductive logic programming (ILP), a data-efficient approach that learns logical rules capturing the behavior of the data. A variant of ILP, differentiable Neural Logic (dNL) networks, can learn Boolean functions and incorporate symbolic reasoning directly into a neural architecture. In this context, we propose applying dNL to Relational Reinforcement Learning (RRL) to tackle dynamic continuous environments. This builds on previous work using dNL-based ILP in RRL, but our model introduces architectural updates to handle problems in continuous RL environments. The aim of our research is to extend current ILP methods for RRL with non-linear continuous predicates, enabling RRL agents to reason and make decisions in dynamic and continuous environments.
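
To make the idea of differentiable rule learning over continuous inputs more concrete, here is a minimal sketch of a dNL-style conjunction neuron combined with a learnable continuous threshold predicate. The class and parameter names (GreaterThanPredicate, ConjunctionNeuron, sharpness) are illustrative assumptions, not the paper's actual architecture or code.

```python
# Illustrative sketch only: a dNL-style differentiable AND neuron plus a soft
# continuous predicate. Names and details are assumptions, not the paper's code.

import torch
import torch.nn as nn


class GreaterThanPredicate(nn.Module):
    """Soft continuous predicate gt(x) ~ [x > theta], with a learnable threshold."""

    def __init__(self, init_threshold: float = 0.0, sharpness: float = 10.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))
        self.sharpness = sharpness  # larger -> closer to a hard step function

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns a differentiable truth value in (0, 1) for "x > threshold".
        return torch.sigmoid(self.sharpness * (x - self.threshold))


class ConjunctionNeuron(nn.Module):
    """Differentiable AND over fuzzy truth values, with learnable memberships."""

    def __init__(self, n_inputs: int):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(n_inputs))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # m_i in (0, 1) decides whether input i participates in the conjunction.
        m = torch.sigmoid(self.weights)
        # If m_i ~ 0 the factor is ~1 (input ignored); if m_i ~ 1 it is ~x_i.
        return torch.prod(1.0 - m * (1.0 - x), dim=-1)


if __name__ == "__main__":
    # Toy continuous state: e.g. a position and a velocity feature.
    state = torch.tensor([[0.7, -0.2]])
    predicates = nn.ModuleList([GreaterThanPredicate(0.5), GreaterThanPredicate(0.0)])
    truth_values = torch.cat(
        [p(state[:, i : i + 1]) for i, p in enumerate(predicates)], dim=-1
    )
    rule = ConjunctionNeuron(n_inputs=2)
    print(rule(truth_values))  # soft truth value of a learned conjunctive rule
```

Because every step (threshold comparison, membership selection, conjunction) is differentiable, such rules can in principle be trained end-to-end with the rest of an RL agent, which is the property the proposed continuous-predicate extension relies on.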