What are Graph Neural Networks, and how do they work?
Introduction
Neural Networks have acquired enormous popularity in recent years due to their usefulness and ease of use in the fields of Pattern Recognition and Data Mining. Deep Learning’s application to tasks such as object identification and voice recognition through the use of techniques such as CNN, RNN, and autoencoders has resulted in a massive amount of effort in the study and development of Neural Networks.
Images, text, and video are straightforward to analyse with Deep Learning because they live in Euclidean data spaces. But many applications represent data as graphs (non-Euclidean), with intricate relationships between items, and these deserve attention too.
This is where the notion of Graph Neural Networks (GNNs) comes into play. This article covers the concepts and principles of graphs and GNNs, as well as some of the most recent applications of Graph Neural Networks.
What is a Graph?
As implied by the term – Graph Neural Networks – the most essential component of GNN is a graph.
A graph is a data structure in computer science that consists of two components: vertices and edges. A graph can be defined as G = (V, E), where V is the set of vertices and E is the set of edges, each edge connecting a pair of vertices. The terms vertex and node are commonly used interchangeably. A directed graph has arrows on its edges, indicating directional dependence; otherwise, the graph is undirected.
A graph may be used to depict a variety of objects, including social media networks, city networks, and molecules. Consider the following graph, which depicts a city network. Cities are depicted as nodes, while the highways that link them are depicted as edges.
We may use the graph network to solve a variety of issues relating to these cities, such as determining which cities are well-connected or finding the shortest distance between two cities.
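The city-network example can be sketched in plain Python with an adjacency list; the city names below are illustrative placeholders, and breadth-first search finds the fewest highway segments between two cities.

```python
from collections import deque

# Hypothetical city network: nodes are cities, edges are highways.
# The city names are illustrative, not from any real dataset.
city_graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_hops(graph, start, goal):
    """Breadth-first search: fewest highway segments between two cities."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        city, dist = queue.popleft()
        if city == goal:
            return dist
        for neighbour in graph[city]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # goal unreachable

print(shortest_hops(city_graph, "A", "E"))  # 3
```

For weighted highways (actual distances rather than hop counts), Dijkstra's algorithm would replace the BFS, but the graph representation stays the same.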
What are Graph Neural Networks?
Due to their extraordinarily powerful expressive capabilities, graphs are attracting significant interest in the field of Machine Learning. Each node is associated with an embedding, which establishes the node’s location in the data space. Graph Neural Networks are neural network architectures that operate on graphs.
A GNN architecture’s primary goal is to learn, for each node, an embedding that captures information about that node’s neighborhood. We may use this embedding to tackle a variety of issues, including node labelling, node and edge prediction, and so on.
In other words, Graph Neural Networks are a subclass of Deep Learning techniques specifically built to perform inference on graph-based data. They are applied to graphs and are capable of performing prediction tasks at the node, edge, and graph levels.
Why not CNN?
The primary benefit of GNN is that it is capable of doing tasks that Convolutional Neural Networks (CNN) are incapable of performing. Convolutional neural networks are used to handle tasks such as object identification, picture categorization, and recognition. CNN accomplishes this through the use of hidden convolutional layers and pooling layers.
Applying a CNN to graph data is computationally challenging because the topology is arbitrary and complex, meaning there is no spatial locality. There is also no fixed node ordering, which further complicates the use of CNNs.
Types of Graph Neural Networks
Thus, as the name implies, a GNN is a neural network that is directly applied to graphs, giving a handy method for performing edge, node, and graph level prediction tasks. Graph Neural Networks are classified into three types:
1. Recurrent Graph Neural Network
2. Spatial Convolutional Network
3. Spectral Convolutional Network
One of GNN’s fundamental intuitions is that nodes are defined by their neighbors and connections. To visualize this, consider that if all of a node’s neighbors are removed, the node will lose all of its information. Thus, the idea of a node’s neighbors and the connections between them describe a node.
With this in mind, let us assign a state (x) to each node to represent its concept. We may use the node state (x) to produce an output (o), a decision about the concept. The node’s final state (x_n) is referred to as the “node embedding.” The primary purpose of all Graph Neural Networks is to determine the “node embedding” of each node by examining the information of its neighboring nodes.
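One round of this neighbour aggregation can be sketched in plain Python. The tiny graph and state values are illustrative, and the update rule (averaging a node’s own state with its neighbourhood mean) is one simple choice, not the definitive GNN update.

```python
# One round of neighbour aggregation, the core GNN idea:
# each node's new state mixes its own state with the mean of its
# neighbours' states. Graph and state values are illustrative.
graph = {0: [1, 2], 1: [0], 2: [0]}
state = {0: 1.0, 1: 3.0, 2: 5.0}

def aggregate(graph, state):
    new_state = {}
    for node, neighbours in graph.items():
        neighbour_mean = sum(state[n] for n in neighbours) / len(neighbours)
        # Simple update rule: average of own state and neighbourhood mean.
        new_state[node] = 0.5 * state[node] + 0.5 * neighbour_mean
    return new_state

state = aggregate(graph, state)
print(state[0])  # 0.5*1.0 + 0.5*((3.0+5.0)/2) = 2.5
```

Stacking several such rounds lets information travel several hops, which is how the final node embedding comes to summarise a whole neighbourhood.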
Let’s begin with the earliest GNN variant, the Recurrent Graph Neural Network, or RecGNN.
Recurrent Graph Neural Network
RecGNN is based on the Banach fixed-point theorem, which asserts: let (X, d) be a complete metric space and T : X → X a contraction mapping. Then T has a unique fixed point x∗, and for every x ∈ X the sequence T^n(x) converges to x∗ as n → ∞. This suggests that if we apply the mapping T to x k times, x^(k) should be close to x^(k-1).
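The contraction behaviour is easy to demonstrate numerically. The mapping T(x) = 0.5x + 1 below is an illustrative contraction on the real line (Lipschitz constant 0.5 < 1), not part of any actual GNN; repeated application converges to the unique fixed point x∗ = 2.

```python
# Banach fixed-point theorem in action: T is a contraction, so iterating
# it from any starting point converges to the unique fixed point x* = 2.
def T(x):
    return 0.5 * x + 1.0

x = 10.0  # arbitrary starting point
for _ in range(50):
    x = T(x)

print(x)  # ~2.0; at the fixed point, T(x*) == x*
```

RecGNN exploits the same principle: node states are updated repeatedly with a contraction-constrained update function until they converge to stable embeddings.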
Architecture of a RecGNN
Spatial Convolutional Network
Spatial Convolutional Networks have a similar idea to CNNs. As is well known in CNN, convolution is performed by summing the neighboring pixels around a central pixel using a filter and learnable weights. Spatial Convolutional Networks operate on a similar principle, aggregating the properties of neighboring nodes toward the centre node.
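This aggregation can be sketched with NumPy. The adjacency matrix, features, and weights below are illustrative, and the update rule H' = ReLU(D⁻¹(A + I)HW), a mean over each node’s own and neighbouring features followed by a learnable linear map, is one common variant, not the only possibility.

```python
import numpy as np

# One spatial graph-convolution layer: average own + neighbour features,
# then apply a learnable linear map and a ReLU. Values are illustrative.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)    # adjacency matrix (3 nodes)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                # node features (3 nodes, 2 dims)
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])               # weights (random in practice)

A_hat = A + np.eye(3)                     # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalise (mean aggregation)
H_next = np.maximum(D_inv @ A_hat @ H @ W, 0.0)  # ReLU activation
print(H_next.shape)  # (3, 2): new 2-dim embedding per node
```

Stacking such layers widens each node’s receptive field by one hop per layer, mirroring how stacked convolutions widen the receptive field in a CNN.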
Spectral Convolutional Network
In comparison to other types of Graph Neural Networks, this sort of GNN is built on a solid mathematical foundation: the theory of Graph Signal Processing. In practice, it relies on a Chebyshev polynomial approximation to keep the computation tractable.
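A minimal sketch of that foundation, with an illustrative three-node path graph: spectral methods operate on the normalised graph Laplacian L = I − D^(−1/2) A D^(−1/2), whose eigenbasis plays the role the Fourier basis plays for regular signals; Chebyshev polynomials of L then approximate spectral filters without an explicit eigendecomposition.

```python
import numpy as np

# Normalised graph Laplacian of a small path graph (illustrative).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

# Its eigenvalues are the graph's "frequencies"; for the normalised
# Laplacian they always lie in [0, 2].
eigenvalues = np.linalg.eigvalsh(L)
print(eigenvalues)
```

The bounded spectrum is what makes the Chebyshev approximation convenient: the polynomials are defined on a fixed interval, so the Laplacian only needs rescaling to [-1, 1].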
What Functions Can A GNN Perform?
The challenges that a GNN can solve are divided into three categories:
1. Node Classification
2. Link Prediction
3. Graph Classification
Node Classification
This task entails predicting a label for each node in a network. Typically only a portion of the graph is labeled, making this a semi-supervised problem. Classifying YouTube videos and Facebook friend recommendation are example applications.
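The semi-supervised setup can be sketched in NumPy: the classification loss is computed only over the labelled nodes, selected by a boolean mask. All scores, labels, and the mask below are illustrative.

```python
import numpy as np

# Semi-supervised node classification: only some nodes carry labels,
# so the loss averages over a labelled-node mask. Values illustrative.
logits = np.array([[2.0, -1.0],   # per-node class scores
                   [-1.0, 2.0],
                   [0.5, 0.5],
                   [3.0, 0.0]])
labels = np.array([0, 1, 0, 0])
labelled_mask = np.array([True, True, False, True])  # node 2 unlabelled

def masked_cross_entropy(logits, labels, mask):
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_node = -log_probs[np.arange(len(labels)), labels]
    return per_node[mask].mean()  # average only over labelled nodes

loss = masked_cross_entropy(logits, labels, labelled_mask)
print(loss)
```

Gradients from this masked loss still flow through the unlabelled nodes’ features during aggregation, which is how the unlabelled part of the graph contributes to learning.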
Link Prediction
The primary objective is to understand the relationship between two entities in a graph and to forecast whether they are connected. Consider a recommender system in which a model is given a collection of user reviews for various items. The objective is to forecast users’ preferences and optimize the recommender system so that it promotes goods that align with the users’ interests.
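A common way to score candidate links from learned node embeddings is the sigmoid of their dot product; a minimal sketch, with hypothetical user names and illustrative embedding values:

```python
import numpy as np

# Link prediction from node embeddings: score a candidate edge (u, v)
# as sigmoid(z_u . z_v). Names and embedding values are illustrative.
embeddings = {
    "user_1": np.array([1.0, 0.5]),
    "user_2": np.array([0.9, 0.6]),
    "user_3": np.array([-1.0, -0.4]),
}

def link_score(z_u, z_v):
    """Probability-like score that an edge exists between u and v."""
    return 1.0 / (1.0 + np.exp(-z_u @ z_v))

close = link_score(embeddings["user_1"], embeddings["user_2"])
far = link_score(embeddings["user_1"], embeddings["user_3"])
print(close > far)  # similar embeddings score higher
```

In a recommender setting, the highest-scoring unconnected user-item pairs become the recommendations.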
Graph Classification
It entails sorting entire graphs into a variety of categories. It is much like an image classification problem, except that the items being classified are graphs. An example from chemistry is classifying a chemical structure into one of several categories.
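Graph classification needs a “readout” step that pools per-node embeddings into one graph-level vector before a classifier labels it. Mean pooling, sketched below with illustrative values and a toy linear classifier, is the simplest choice; sum or max pooling are common alternatives.

```python
import numpy as np

# Readout for graph classification: pool node embeddings into a single
# graph embedding, then classify it. All values are illustrative.
node_embeddings = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [1.0, 1.0]])

graph_embedding = node_embeddings.mean(axis=0)  # mean-pool readout
W = np.array([[1.0], [-0.5]])                   # toy linear classifier
score = (graph_embedding @ W).item()
label = int(score > 0)
print(graph_embedding, label)
```

Because pooling is permutation-invariant, the predicted label does not depend on the (arbitrary) order in which the graph’s nodes are listed.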
GNN Applications in Real Time
Many real-time GNN applications have emerged since GNNs rose to prominence. A few of the most notable are outlined below.
Natural Language Processing
A wide range of NLP tasks can benefit from the use of GNNs, including sentiment classification, text classification, and sequence labelling, to name just a few. Social network analysis also uses them to identify similar posts and provide users with relevant content recommendations.
Computer Vision
Computer Vision is a large discipline that has grown rapidly in recent years thanks to Deep Learning, in areas such as image classification and object detection, where Convolutional Neural Networks are the most common tool. Recently, GNNs have been employed in this field as well. Though GNN applications in Computer Vision are in their infancy, they show enormous potential for the coming years.
Science
GNNs are also used in science to predict pharmacological adverse effects and to categorize diseases. Chemical and molecular graph structures are being studied with GNNs as well.
Other Domains
In addition to the areas described above, GNNs have been explored in many other domains, including recommender systems and social network research.
The main applications, the GNN variants used for each, and brief descriptions are summarised below.

Text classification (graph convolutional network / graph attention network): Text classification is a well-known use of GNNs in NLP. GNNs infer document labels by analyzing the relationships between documents or words. GCN and GAT models solve this task by transforming text into a graph of words and then convolving the word graph with graph convolution methods. Experiments demonstrate that representing texts as a graph of words has the benefit of capturing non-consecutive and long-distance semantics.

Neural machine translation (graph convolutional network / gated graph neural network): NMT is a sequence-to-sequence task. One of the most common uses of GNNs here is incorporating semantic information into the NMT task, accomplished by applying the Syntactic GCN to syntax-aware NMT tasks. The GGNN may also be used in NMT: it transforms the syntactic dependency graph into a new structure by converting the edges to extra nodes, allowing edge labels to be represented as embeddings.

Relation extraction (graph LSTM / graph convolutional network): Relation extraction is the process of extracting semantic relationships between two or more entities from text. Traditionally, this task has been treated as a pipeline of two distinct tasks, namely named entity recognition (NER) and relation extraction, but recent research indicates that end-to-end modelling of entities and relations is critical for high performance, as relations are intimately connected with entity information.

Image classification (graph convolutional network / gated graph neural network): Classification of images is a fundamental task in computer vision. Most models produce favourable results when given a large training set of labelled classes; the goal now is to improve their performance on zero-shot and few-shot learning challenges, and GNNs seem appealing in this regard. Knowledge graphs may include sufficient information to assist the zero-shot learning (ZSL) task.

Object detection, interaction detection, region classification, and semantic segmentation (graph attention network / graph neural network / graph CNN / graph LSTM / gated graph neural network): GNNs serve several further computer-vision tasks. In object detection, GNNs are used to derive ROI features; in interaction detection, they operate as communication brokers between people and objects; and in region classification, they perform reasoning on graphs connecting regions and classes.

Physics (graph neural network / graph networks): Modelling physical systems in the real world is a fundamental part of comprehending human intelligence. By modelling objects as nodes and relations as edges, we can perform effective GNN-based reasoning about objects, relations, and physics. Interaction networks can be trained to reason about the interactions of objects in a complicated physical system, making predictions and inferences about the characteristics of diverse systems in fields such as collision dynamics.

Molecular fingerprints (graph convolutional network): Molecular fingerprints are vector representations of molecules. Machine learning algorithms estimate the characteristics of a new molecule using examples of existing molecules and fixed-length fingerprints as inputs. GNNs can replace standard methods that generate a fixed encoding of the molecule, allowing for the development of differentiable fingerprints tailored to the task at hand.

Graph generation (graph convolutional network / graph neural network / LSTM / RNN / relational GCN): Generative models for real-world graphs have garnered considerable interest for critical applications such as modelling social interactions, identifying novel chemical structures, and generating knowledge graphs. GNN-based models learn and match node embeddings for each graph individually; this strategy outperforms conventional relaxation-based strategies.
Conclusion
Since GNNs were first introduced a few years ago, they have proven to be an effective tool for solving problems that can be represented as graphs, thanks to their adaptability, expressive power, and ease of visualization. As a result, GNNs offer an approachable solution for unstructured data that can be applied in a variety of real-world settings.