Artificial Intelligence has experienced a surge of interest over the last decade, largely driven by advances in Deep Learning. Deep learning researchers have achieved great success in image processing with Convolutional Neural Networks (CNNs). However, CNNs embed their representations in flat Euclidean space, which imposes no inductive bias on the geometry of the embedding space and can limit their effectiveness on hierarchically structured data. The same limitation applies to Graph Convolutional Neural Networks (GCNNs).

To address these limitations, researchers have started exploring non-Euclidean embedding spaces, such as hyperbolic space. Hyperbolic space offers several advantages: its volume grows exponentially with radius, so it can accommodate more data in fewer dimensions, and its geometry closely resembles that of a tree. Previous studies have demonstrated the benefits of hyperbolic space for building hierarchical embeddings with shallow models, as well as with Multi-Layer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs).
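To make the tree-like geometry concrete, the following sketch (an illustration of the underlying idea, not code from any of the cited works) computes distances in the Poincaré ball, a standard model of hyperbolic space used by many hierarchical-embedding methods. Points near the boundary of the unit ball are exponentially far apart in hyperbolic distance, which is what allows a low-dimensional ball to host a deep hierarchy with low distortion:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball model of hyperbolic space.

    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Both points must lie strictly inside the unit ball.
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

# A point at Euclidean radius r sits at hyperbolic distance 2*artanh(r)
# from the origin, so distances blow up as r approaches the boundary:
for r in (0.5, 0.9, 0.99):
    print(r, poincare_distance([0.0, 0.0], [r, 0.0]))
```

Because the distance diverges as points approach the boundary, the leaves of a hierarchy can be spread out near the edge of the ball while their common ancestors sit near the origin, mimicking shortest-path distances in a tree.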

However, there is currently no established approach for building Hyperbolic Convolutional Neural Networks (HCNNs) for structured data, even though structured data is common across many domains. The objective of this research is therefore to develop a general method for constructing HCNNs. We hypothesize that the ability of hyperbolic space to capture hierarchy in the data will translate into improved performance. This capability is especially valuable for data with a tree-like structure, which characterizes many existing datasets such as WordNet, ImageNet, and FB15k. We believe that the development of HCNNs would offer significant advantages for both practical applications and future research.