Auto-encoders are neural networks that consist of two sub-networks joined by a bottleneck layer: an encoder, which downsamples the input (for images, typically through convolutional filters) into a compact feature representation, and a decoder, which takes that representation and tries to reconstruct the original input from it. Graph auto-encoders apply the same idea to graphs: the encoder learns a compact representation of the graph, and the decoder reconstructs the graph from it. They can be used to learn graph embeddings, and hence to predict embeddings for unseen nodes and to classify new nodes into existing categories within the graph.
Image source: https://www.frontiersin.org/articles/10.3389/fdata.2019.00002/full
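As a concrete illustration, here is a minimal numpy sketch of this idea, assuming a single GCN-style encoder layer and an inner-product decoder (in the spirit of Kipf and Welling's graph auto-encoder); the graph, features, and weights below are toy placeholders, not a trained model:

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gae_forward(A, X, W):
    """One GCN-style encoder layer, then an inner-product decoder."""
    A_norm = normalize_adjacency(A)
    Z = np.maximum(A_norm @ X @ W, 0.0)           # encoder: node embeddings (ReLU)
    A_rec = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))      # decoder: reconstructed edge probabilities
    return Z, A_rec

# Toy graph: 4 nodes on a path 0-1-2-3, with random features.
rng = np.random.default_rng(0)
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
X = rng.normal(size=(4, 5))                       # 5-dim input features
W = rng.normal(size=(5, 2))                       # project down to 2-dim embeddings
Z, A_rec = gae_forward(A, X, W)
print(Z.shape, A_rec.shape)                       # (4, 2) (4, 4)
```

In a real model, W would be trained so that A_rec matches the observed adjacency matrix, and the learned Z would serve as the node embeddings.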
Real-Life Applications of Graph Neural Networks
Although the original GNN model dates back to 2009 (Scarselli et al., see the references below), the field has taken off only recently, and GNNs already have many real-life applications because their architecture matches the irregular structure of data collected from diverse sources. GNNs are currently a hot topic for:
Social Network Analysis — predicting similar posts, predicting tags, and recommending content to users.
Natural Sciences — GNNs have also gained popularity for modeling molecular interactions, such as protein-protein interactions.
Recommender Systems — a heterogeneous graph can capture the relationships between users and items and be used to recommend relevant items to a buyer (a minimal sketch follows this list).
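The sketch below shows one common way to represent such a user-item graph: a bipartite adjacency matrix built from a hypothetical interaction matrix R. The item-item co-occurrence computed at the end is only a crude two-hop signal; a GNN would refine it with learned aggregation:

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are items.
R = np.array([[1, 0, 1, 0],    # user 0 interacted with items 0 and 2
              [1, 1, 0, 0],    # user 1 interacted with items 0 and 1
              [0, 0, 1, 1]],   # user 2 interacted with items 2 and 3
             dtype=float)

# A bipartite graph stored as a block adjacency matrix [[0, R], [R^T, 0]].
n_users, n_items = R.shape
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R      # user -> item edges
A[n_users:, :n_users] = R.T    # item -> user edges

# Item-item co-occurrence (a two-hop walk through users) gives a simple
# "users who liked X also liked Y" signal.
print(R.T @ R)
```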
The intuition behind GNNs is that a node is naturally defined by its neighbors and connections: if we removed the neighbors and connections around a node, the node would lose all of its information. The neighbors of a node, and its connections to them, therefore define the concept of the node.
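The simplest realization of this intuition is neighbor aggregation. Here is a minimal numpy sketch (toy graph and features, purely illustrative) in which each node's representation is replaced by the mean of its neighbors' features:

```python
import numpy as np

def mean_aggregate(A, X):
    """Replace each node's features with the mean of its neighbors' features."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.clip(deg, 1.0, None)   # clip avoids division by zero for isolated nodes

# Path graph 0-1-2: node 1's new feature is the average of nodes 0 and 2.
A = np.array([[0,1,0],[1,0,1],[0,1,0]], dtype=float)
X = np.array([[1.0],[0.0],[3.0]])
print(mean_aggregate(A, X))   # node 1 -> (1 + 3) / 2 = 2.0
```

Real GNN layers follow the same pattern but apply a learned transformation before or after the aggregation and stack a few such layers.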
Open Problems
Graph neural networks still have many open problems that need to be solved:
Shallow structure — Traditional deep learning gains performance by stacking more layers, but graph neural networks typically have only two or three. Stacking too many layers causes over-smoothing: the representations of all nodes converge toward the same value (illustrated in the sketch after this list).
Dynamic graphs — Static graphs are stable and can be modeled effectively, but when edges or nodes appear or disappear, most GNNs cannot adapt to the change.
Unstructured scenarios — There is no principled way to generate a graph from unstructured data.
Scalability — The time complexity is high, making GNNs difficult to apply in big-data environments.
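To make the over-smoothing point concrete, here is a small numpy sketch (toy graph, random features, no learned weights) that repeatedly applies the GCN propagation matrix and shows the spread of node representations shrinking as depth grows:

```python
import numpy as np

def normalized_adjacency(A):
    """D^-1/2 (A + I) D^-1/2, the propagation matrix used by GCN layers."""
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    return A_hat / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
P = normalized_adjacency(A)
X = rng.normal(size=(4, 3))

for depth in (1, 2, 8, 32):
    H = np.linalg.matrix_power(P, depth) @ X
    # The standard deviation across nodes shrinks with depth: over-smoothing.
    print(depth, np.round(H.std(axis=0).mean(), 4))
```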
Conclusion
In this post, we went on a journey through graphs and graph neural networks. Graph neural networks are an intuitive way to make sense of unstructured data and are useful in many real-world applications. Over the past few years, they have become powerful and practical tools for machine learning tasks in the graph domain, thanks to advances in expressive power, model flexibility, and training algorithms. For a comprehensive review, see the survey by Wu et al. in the references below.
References
F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” IEEE Transactions on Neural Networks, 2009.
T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Proc. of ICLR, 2017.
Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” arXiv:1901.00596, 2019.
D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei, “Scene graph generation by iterative message passing,” in Proc. of CVPR, 2017.
J. Johnson, A. Gupta, and L. Fei-Fei, “Image generation from scene graphs,” in Proc. of CVPR, 2018.
X. Wang, Y. Ye, and A. Gupta, “Zero-shot recognition via semantic embeddings and knowledge graphs,” in Proc. of CVPR, 2018.