Graph Machine Learning

Take graph data to the next level by applying machine learning techniques and algorithms

Product type: Paperback
Published: June 2021
Publisher: Packt
ISBN-13: 9781800204492
Length: 338 pages
Edition: 1st
Authors (3): Aldo Marzullo, Claudio Stamile, Enrico Deusebio
Table of Contents (15)

Preface
1. Section 1 – Introduction to Graph Machine Learning
2. Chapter 1: Getting Started with Graphs
3. Chapter 2: Graph Machine Learning
4. Section 2 – Machine Learning on Graphs
5. Chapter 3: Unsupervised Graph Learning
6. Chapter 4: Supervised Graph Learning
7. Chapter 5: Problems with Machine Learning on Graphs
8. Section 3 – Advanced Applications of Graph Machine Learning
9. Chapter 6: Social Network Graphs
10. Chapter 7: Text Analytics and Natural Language Processing Using Graphs
11. Chapter 8: Graph Analysis for Credit Card Transactions
12. Chapter 9: Building a Data-Driven Graph-Powered Application
13. Chapter 10: Novel Trends on Graphs
14. Other Books You May Enjoy

Summary 

In this chapter, we have learned how unsupervised machine learning can be effectively applied to graphs to solve real problems, such as node and graph representation learning.

In particular, we first analyzed shallow embedding methods: a family of algorithms that can learn and return embeddings only for the input data seen during training.
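
To make this concrete, here is a minimal sketch of a shallow embedding workflow using the node2vec package on a toy networkx graph (the package choice, graph, and hyperparameters are illustrative, not necessarily those used in the chapter). It highlights the key limitation: the result is a lookup table, so nodes not present during training have no embedding.

# Shallow embedding sketch; assumes `networkx` and `node2vec` are installed.
import networkx as nx
from node2vec import Node2Vec

# Toy graph: embeddings will exist only for these nodes.
G = nx.barbell_graph(m1=5, m2=2)

# Learn node embeddings via biased random walks + skip-gram.
n2v = Node2Vec(G, dimensions=16, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)

# The result is a lookup table: one vector per node seen during training.
print(model.wv[str(0)].shape)   # (16,)
# A node that was not in the training graph has no embedding:
print(str(999) in model.wv)     # False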

We then learned how autoencoder algorithms can be used to encode the input while preserving its important information in a lower-dimensional space. We also saw how this idea can be adapted to graphs by learning embeddings that allow us to reconstruct pairwise node/graph similarity.
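
As a rough illustration of that idea, the following sketch encodes each node's adjacency row into a low-dimensional code and trains a decoder to reconstruct it, so nodes with similar neighborhoods end up with similar codes. The architecture, loss, and use of Keras are assumptions for illustration, not the chapter's exact model.

# Graph autoencoder sketch; assumes `networkx` and `tensorflow` are installed.
import networkx as nx
import tensorflow as tf

G = nx.barbell_graph(m1=5, m2=2)
A = nx.to_numpy_array(G).astype("float32")   # n x n adjacency matrix
n = A.shape[0]

inputs = tf.keras.Input(shape=(n,))
code = tf.keras.layers.Dense(8, activation="relu")(inputs)        # encoder: n -> 8
outputs = tf.keras.layers.Dense(n, activation="sigmoid")(code)    # decoder: 8 -> n
autoencoder = tf.keras.Model(inputs, outputs)
encoder = tf.keras.Model(inputs, code)

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Each adjacency row is both the input and the reconstruction target.
autoencoder.fit(A, A, epochs=100, verbose=0)

embeddings = encoder.predict(A, verbose=0)   # one 8-dimensional vector per node
print(embeddings.shape)                      # (n, 8)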

Finally, we introduced the main concepts behind GNNs, and we saw how well-known concepts, such as convolution, can be applied to graphs.
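
One common way to make "convolution on graphs" precise is the GCN propagation rule of Kipf and Welling, where each layer mixes a node's features with those of its neighbors through a normalized adjacency matrix. The NumPy sketch below shows a single propagation step; the toy graph, features, and weights are illustrative assumptions.

# One graph-convolution step (GCN-style), assuming `networkx` and `numpy`.
import networkx as nx
import numpy as np

G = nx.barbell_graph(m1=5, m2=2)
A = nx.to_numpy_array(G)
n = A.shape[0]

A_hat = A + np.eye(n)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

H = np.eye(n)                                # initial node features (one-hot)
W = np.random.randn(n, 4) * 0.1              # layer weights: n -> 4 dimensions

H_next = np.maximum(0, A_norm @ H @ W)       # neighborhood aggregation + ReLU
print(H_next.shape)                          # (n, 4)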

In the next chapter, we will revisit these concepts in a supervised setting, where a target label is provided and the objective is to learn a mapping between the input and the output.
