Neural Networks have become enormously popular in recent years thanks to their effectiveness and ease of use in Pattern Recognition and Data Mining. Deep Learning's success on tasks such as object detection and speech recognition, through techniques such as CNNs, RNNs, and autoencoders, has driven a massive amount of research and development in Neural Networks.

Images, text, and video are easy to analyse with Deep Learning because they are Euclidean data. It is also important to consider applications where data is represented as graphs (non-Euclidean), with intricate relationships between items.

This is where Graph Neural Networks (GNNs) come into play. This article covers the concepts and principles of graphs and GNNs, as well as some of the most recent applications of Graph Neural Networks.

This article was published as a part of the Data Science Blogathon.

As the term Graph Neural Networks implies, the most essential component of a GNN is a graph.

A graph is a data structure in computer science that consists of two components: vertices and edges, written G = (V, E). Each edge (E) connects a pair of vertices (V). The terms vertex and node are commonly used interchangeably. A directed graph has arrows on its edges, indicating directional dependence. Otherwise, the graph is undirected.
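As a concrete illustration of G = (V, E), here is a minimal sketch of an undirected graph stored as an adjacency list in Python; the node names are purely hypothetical:

```python
# An undirected graph G = (V, E) as an adjacency list:
# keys are vertices, values are the neighbours each edge connects to.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}

# In an undirected graph every edge appears in both directions.
edges = {(u, v) for u in graph for v in graph[u]}
assert all((v, u) in edges for (u, v) in edges)
```

A directed graph would simply drop this symmetry requirement, listing only the arrow's target under its source vertex.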

A graph may be used to represent a variety of structures, including social media networks, city networks, and molecules. Consider a graph depicting a city network: cities are the nodes, and the highways linking them are the edges.

We may use the graph network to solve a variety of issues relating to these cities, such as determining which cities are well-connected or determining the shortest distance between two cities.
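One such problem, the shortest distance (in hops) between two cities, can be sketched with a breadth-first search; the city graph below is an illustrative stand-in, not taken from the article:

```python
from collections import deque

def shortest_hops(graph, start, goal):
    """Breadth-first search: the fewest edges between two nodes
    of an unweighted, undirected graph."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # goal unreachable

cities = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(shortest_hops(cities, "A", "D"))  # 3
```

For highways with real distances rather than hop counts, Dijkstra's algorithm would replace the plain queue with a priority queue.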

Due to their extraordinarily powerful expressive capabilities, graphs are attracting significant interest in the field of Machine Learning. Each node is associated with an embedding, which establishes the node's location in the data space. Graph Neural Networks are neural network architectures that operate on graphs.

A GNN architecture’s primary goal is to learn an embedding that contains information about its neighborhood. We may use this embedding to tackle a variety of issues, including node labeling, node and edge prediction, and so on.

In other words, Graph Neural Networks are a subclass of Deep Learning techniques that are specifically built to do inference on graph-based data. They are applied to graphs and are capable of performing prediction tasks at the node, edge, and graph levels.

The primary benefit of GNNs is that they can do tasks that Convolutional Neural Networks (CNNs) cannot. CNNs handle tasks such as object detection, image classification, and recognition, which they accomplish through hidden convolutional layers and pooling layers.

Running a CNN on graph data is computationally challenging because the topology is arbitrary and complicated, meaning there is no spatial locality. There is also no fixed node ordering, which further complicates the use of CNNs.

Thus, as the name implies, a GNN is a neural network that is directly applied to graphs, giving a handy method for performing edge, node, and graph level prediction tasks. Graph Neural Networks are classified into three types:

- **Recurrent Graph Neural Network**
- **Spatial Convolutional Network**
- **Spectral Convolutional Network**

One of GNN’s fundamental intuitions is that nodes are defined by their neighbors and connections. To visualize this, consider that if all of a node’s neighbors are removed, the node will lose all of its information. Thus, the idea of a node’s neighbors and the connections between them describe a node.

With this in mind, let us assign a state (x) to each node to represent its concept. We can use the node state (x) to produce an output (o), i.e., a decision about the concept. The node's final state (x_n) is referred to as the "node embedding." The primary purpose of all Graph Neural Networks is to discover the node embedding of each node by examining the information of its neighbouring nodes.
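This neighbour-driven update can be sketched in a few lines of numpy. The graph, the 2-d node states, and the averaging rule below are all illustrative simplifications; real GNNs apply learned transformations at each step:

```python
import numpy as np

# Toy undirected graph on 4 nodes, with a random 2-d state per node.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = {i: np.random.rand(2) for i in adj}

def propagate(x, adj):
    """One update step: each node's new state averages its own state
    with the mean of its neighbours' states (a simplified choice;
    real GNNs insert learnable weights here)."""
    return {i: (x[i] + np.mean([x[j] for j in adj[i]], axis=0)) / 2
            for i in adj}

# Repeating the step mixes information from ever-larger neighbourhoods;
# the state after the final step serves as the node embedding x_n.
for _ in range(3):
    x = propagate(x, adj)
```

After k steps, each node's state reflects its k-hop neighbourhood, which is exactly the intuition that a node is defined by its neighbours and connections.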

Let's begin with the earliest GNN variant, the Recurrent Graph Neural Network, or RecGNN.

RecGNN is based on the Banach fixed-point theorem, which states: let (X, d) be a complete metric space and T : X → X a contraction mapping. Then T has a unique fixed point x*, and for every x ∈ X, the sequence T^n(x) converges to x* as n → ∞. This suggests that if we apply the mapping T to x for k times, x^k should be close to x^(k−1).
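The theorem in action can be shown with a tiny fixed-point iteration; the particular mapping T below is a made-up one-dimensional contraction chosen only to make the convergence visible:

```python
def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    """Iterate a contraction mapping T until x_k is (numerically)
    equal to x_{k-1} -- the stopping rule the Banach fixed-point
    theorem justifies, and the rule RecGNN uses for its node states."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = x/2 + 1 is a contraction on the reals; its unique fixed
# point is x* = 2, since 2/2 + 1 = 2.
print(fixed_point(lambda x: x / 2 + 1, 0.0))  # ≈ 2.0
```

RecGNN applies the same idea with T being the neighbourhood-aggregation update: states are propagated repeatedly until they stop changing, and the converged states are the node embeddings.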

**Architecture of RecGNN**

Spatial Convolutional Networks follow an idea similar to CNNs. In a CNN, convolution combines the pixels surrounding a centre pixel using a filter with learnable weights. Spatial Convolutional Networks operate on the same principle, aggregating the features of neighbouring nodes into the centre node.
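A single spatial graph-convolution layer can be sketched as a matrix product; the sizes and random weights below are illustrative stand-ins, not a trained model:

```python
import numpy as np

# One spatial graph-convolution layer: H' = ReLU(D^-1 A_hat H W),
# where A_hat = A + I adds self-loops so each node keeps its own
# features, and D^-1 turns the sum over neighbours into a mean.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # adjacency matrix
H = np.random.rand(4, 3)                   # node features (4 nodes, 3-d)
W = np.random.rand(3, 2)                   # learnable weight matrix

A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H_next = np.maximum(D_inv @ A_hat @ H @ W, 0)  # ReLU activation
print(H_next.shape)  # (4, 2)
```

Stacking such layers lets features flow across multi-hop neighbourhoods, just as stacked convolutions grow the receptive field in a CNN.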

In comparison to other types of Graph Neural Networks, the Spectral Convolutional Network is built on a solid mathematical foundation: the theory of Graph Signal Processing. It keeps computation tractable by approximating graph filters with Chebyshev polynomials.
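The Chebyshev approximation avoids the expensive eigendecomposition of the graph Laplacian by building the filter from a simple recurrence. The sketch below, on a hypothetical 3-node path graph, computes the polynomial basis terms (ChebNet-style); a real layer would combine them with learned coefficients:

```python
import numpy as np

def chebyshev_terms(L_tilde, X, K):
    """Chebyshev basis terms T_k(L~) X, k = 0..K-1, used by spectral
    GNNs to approximate graph filters. L~ is the Laplacian rescaled
    so its eigenvalues lie in [-1, 1]. Requires K >= 2."""
    terms = [X, L_tilde @ X]           # T_0(L~)X = X, T_1(L~)X = L~X
    for _ in range(2, K):
        # Recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        terms.append(2 * L_tilde @ terms[-1] - terms[-2])
    return terms

# Tiny illustrative example on a 3-node path graph.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                    # graph Laplacian
L_tilde = 2 * L / np.linalg.eigvalsh(L).max() - np.eye(3)  # rescale
X = np.random.rand(3, 2)                          # node signals
print(len(chebyshev_terms(L_tilde, X, K=3)))  # 3
```

Because each term only multiplies by the sparse L~, the filter costs O(K·|E|) instead of the O(n³) an eigendecomposition would require.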

The challenges that a GNN can solve are divided into three categories:

- Node Classification
- Link Prediction
- Graph Classification

**Node Classification**

This task entails predicting the label of each node in a network by learning its node embedding. In such cases only a portion of the graph is labelled, making it a semi-supervised learning problem. Examples include classifying YouTube videos, Facebook friend recommendation, and other applications.

**Link Prediction**

The primary objective is to understand the relationship between two entities in a graph and to predict whether they are connected. Consider a recommender system in which a model is given a set of user reviews of various products: the goal is to predict users' preferences and tune the recommender system to promote products that align with the users' interests.
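A common way to turn node embeddings into link predictions is to score each candidate edge by the dot product of its endpoints' embeddings. In the sketch below the embeddings are random stand-ins for GNN outputs, and the scoring rule is one standard choice among several:

```python
import numpy as np

# Link-prediction sketch: score a candidate edge (u, v) by the dot
# product of the two node embeddings, squashed to [0, 1] by a sigmoid.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))   # 5 nodes, 8-d embeddings (stand-ins)

def edge_score(u, v):
    """Predicted probability that an edge exists between u and v."""
    return 1.0 / (1.0 + np.exp(-emb[u] @ emb[v]))

print(0.0 < edge_score(0, 3) < 1.0)  # True
```

In the recommender-system example, users and products would be the nodes, and the highest-scoring unseen user–product edges become the recommendations.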

**Graph Classification**

This task entails sorting entire graphs into categories. It is much like an image classification problem, except that the targets are graphs rather than images. An example from chemistry is classifying a chemical structure into one of several categories.
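Graph classification typically pools all node embeddings into one graph-level vector (a "readout") before classifying. The mean readout and random classifier weights below are illustrative stand-ins, not a trained model:

```python
import numpy as np

# Graph-classification sketch: mean-pool node embeddings into a
# single graph embedding, then apply a linear classifier head.
node_emb = np.random.rand(6, 4)            # 6 nodes, 4-d embeddings
W, b = np.random.rand(4, 3), np.zeros(3)   # head for 3 graph classes

graph_emb = node_emb.mean(axis=0)  # permutation-invariant readout
logits = graph_emb @ W + b
pred = int(np.argmax(logits))
print(pred in (0, 1, 2))  # True
```

The mean readout is permutation-invariant, which matters because a graph has no canonical node ordering; sum and max readouts are common alternatives.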

Many real-world GNN applications have emerged as the field has matured. A few of the most notable are outlined below.

**Natural Language Processing**

A wide range of NLP tasks can benefit from GNNs, including sentiment classification, text classification, and sequence labelling, to name just a few; they are employed in NLP for their convenience. Social network analysis also uses them to find similar posts and recommend relevant content to users.

**Computer Vision**

Computer Vision is a large discipline that has grown rapidly in recent years thanks to Deep Learning, in areas such as image classification and object detection, where Convolutional Neural Networks are the most common approach. Recently, GNNs have been employed in this field as well. Though GNN applications in Computer Vision are in their infancy, they demonstrate enormous potential for the coming years.

**Science**

GNNs are also used in science, for example to predict adverse drug effects and to classify diseases. Chemical and molecular graph structure is likewise being studied with GNNs.

**Other Domains**

Beyond the functions described above, GNNs have a wide variety of uses; recommender systems and social network research are just two of the areas where they have been explored.

| Application | Deep Learning Model | Description |
| --- | --- | --- |
| Text classification | Graph convolutional network / graph attention network | Text classification is a well-known use of GNNs in NLP. GNNs infer document labels by analyzing the relationships between documents or words. GCN and GAT models solve this task: they transform text into a graph of words and then convolve the word graph using graph convolution methods. Experiments show that representing texts as a graph of words has the benefit of capturing non-consecutive and long-distance semantics. |
| Neural machine translation | Graph convolutional network / gated graph neural network | NMT is a sequence-to-sequence task. One of the most common uses of GNNs here is incorporating semantic information into the NMT task, accomplished by applying the Syntactic GCN to syntax-aware NMT tasks. A GGNN can also be used in NMT: it transforms the syntactic dependency graph by converting the edges into extra nodes, allowing edge labels to be represented as embeddings. |
| Relation extraction | Graph LSTM / graph convolutional network | Relation extraction is the process of extracting semantic relationships between two or more entities from text. Traditionally this has been treated as a pipeline of two distinct tasks, named entity recognition (NER) and relation extraction, but recent research indicates that end-to-end modelling of entities and relations is critical for high performance, as relations are intimately connected with entity information. |
| Image classification | Graph convolutional network / gated graph neural network | Classification of images is a fundamental task in computer vision. Given a large training set of labelled classes, most models perform well. The goal now is to improve performance on zero-shot and few-shot learning challenges, where GNNs are appealing: knowledge graphs may contain sufficient information to assist the zero-shot learning (ZSL) task. |
| Object detection, interaction detection, region classification, semantic segmentation | Graph attention network / graph neural network / graph CNN / graph LSTM / gated graph neural network | GNNs also serve computer vision tasks such as object detection, interaction detection, and region classification. In object detection, GNNs derive ROI features; in interaction detection, they operate as communication brokers between people and objects; in region classification, they perform reasoning on graphs connecting regions and classes. |
| Physics | Graph neural network / graph networks | Modeling real-world physical systems is a fundamental part of understanding human intelligence. By modelling objects as nodes and relations as edges, we can perform effective GNN-based reasoning about objects, relations, and physics. Interaction networks can be trained to reason about the interactions of objects in a complicated physical system and can make predictions and inferences about diverse systems in fields such as collision dynamics. |
| Molecular fingerprints | Graph convolutional network | Molecular fingerprints are vector representations of molecules. Machine learning algorithms estimate the properties of a new molecule using examples of existing molecules and fixed-length fingerprints as inputs. GNNs can replace standard methods that generate a fixed encoding of the molecule, allowing differentiable fingerprints tailored to the task at hand. |
| Graph generation | Graph convolutional network / graph neural network / LSTM / RNN / relational-GCN | Generative models for real-world graphs have garnered considerable interest for critical applications such as simulating social interactions, discovering novel chemical structures, and generating knowledge graphs. The GNN-based model learns and matches node embeddings for each graph individually, a strategy that outperforms conventional relaxation-based strategies. |

Since GNNs were first introduced, they have proven to be an effective tool for solving problems that can be represented as graphs, thanks to their adaptability, expressive power, and ease of visualization. As a result, GNNs are an intuitive solution for unstructured data that can be applied in a variety of real-world settings.

**Q1. What is a graph neural network (GNN)?**

A. A graph neural network (GNN) is a type of neural network designed to operate on graph-structured data, which is a collection of nodes and edges that represent relationships between them. GNNs are especially useful in tasks involving graph analysis, such as node classification, link prediction, and graph clustering.

**Q2. How does a graph neural network work?**

A. A graph neural network works by propagating information along the edges of a graph to update the node representations. In other words, the network iteratively aggregates the information from a node's neighboring nodes and uses this information to update its own representation. This process is repeated for multiple iterations until the nodes converge to a stable representation.

**Q3. What are the applications of graph neural networks?**

A. Graph neural networks have a wide range of applications, including social network analysis, recommendation systems, drug discovery, natural language processing, and computer vision. They can be used to model complex relationships between entities and to make predictions based on these relationships.

**Q4. What are the advantages of graph neural networks?**

A. The main advantage of using graph neural networks is their ability to handle complex graph-structured data. They can capture non-linear relationships between nodes and can generalize to unseen data. Additionally, GNNs can be used for both supervised and unsupervised learning tasks, making them a versatile tool for many applications.

**Q5. What are the limitations of graph neural networks?**

A. One limitation of graph neural networks is that they can be computationally expensive, especially for large graphs. Additionally, GNNs can suffer from overfitting, especially when the graph structure is noisy or incomplete. Finally, the interpretability of GNNs can be a challenge, as it is often difficult to understand how the network arrives at its predictions.

**The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.**
