Codeless Deep Learning with KNIME
Build, train, and deploy various deep neural network architectures using KNIME Analytics Platform

Product type: Paperback
Published: Nov 2020
Publisher: Packt
ISBN-13: 9781800566613
Length: 384 pages
Edition: 1st
Authors (3): Kathrin Melcher, KNIME AG, Rosaria Silipo

Table of Contents (16)

Preface
Section 1: Feedforward Neural Networks and KNIME Deep Learning Extension
Chapter 1: Introduction to Deep Learning with KNIME Analytics Platform
Chapter 2: Data Access and Preprocessing with KNIME Analytics Platform
Chapter 3: Getting Started with Neural Networks
Chapter 4: Building and Training a Feedforward Neural Network
Section 2: Deep Learning Networks
Chapter 5: Autoencoder for Fraud Detection
Chapter 6: Recurrent Neural Networks for Demand Prediction
Chapter 7: Implementing NLP Applications
Chapter 8: Neural Machine Translation
Chapter 9: Convolutional Neural Networks for Image Classification
Section 3: Deployment and Productionizing
Chapter 10: Deploying a Deep Learning Network
Chapter 11: Best Practices and Other Deployment Options
Other Books You May Enjoy

Encoder-Decoder Architecture

In this section, we will first introduce the general concept of an encoder-decoder architecture. Afterward, we will focus on how the encoder is used in neural machine translation. In the last two subsections, we will concentrate on how the decoder is applied during training and deployment.

One of the possible structures for neural machine translation is the encoder-decoder network. In Chapter 5, Autoencoder for Fraud Detection, we introduced the concept of a neural network consisting of an encoder and a decoder component. Remember, in the case of an autoencoder, the task of the encoder component is to extract a dense representation of the input, while the task of the decoder component is to recreate the input based on the dense representation given by the encoder.
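
Although the book builds these networks codelessly with KNIME's Keras integration, a short plain-Keras sketch may help to picture the two components. It is only an illustration under assumed layer sizes, not a workflow from the book:

from tensorflow import keras
from tensorflow.keras import layers

n_features = 30          # number of input columns (assumed for illustration)
bottleneck_size = 8      # size of the dense representation (assumed)

inputs = keras.Input(shape=(n_features,))
# Encoder: compress the input into a dense (bottleneck) representation
encoded = layers.Dense(16, activation="relu")(inputs)
encoded = layers.Dense(bottleneck_size, activation="relu")(encoded)
# Decoder: reconstruct the original input from the dense representation
decoded = layers.Dense(16, activation="relu")(encoded)
decoded = layers.Dense(n_features, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

Training such a network means fitting it against its own input, for example autoencoder.fit(X, X, epochs=10), so that the bottleneck layer is forced to learn a compressed representation of the data.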

In the case of encoder-decoder networks for neural machine translation, the task of the encoder is to extract the context of the sentence in the source language (the input sentence) into...

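To make the role of the encoder more concrete, here is a hypothetical Keras sketch of the encoder in a sequence-to-sequence translation network: an embedding layer followed by an LSTM whose final hidden and cell states summarize the source sentence. The vocabulary and layer sizes are assumptions for illustration, not values from the book:

from tensorflow import keras
from tensorflow.keras import layers

source_vocab_size = 10000   # assumed size of the source-language vocabulary
embedding_dim = 128         # assumed embedding size
state_size = 256            # assumed LSTM state size

# The encoder reads the index-encoded source sentence...
encoder_inputs = keras.Input(shape=(None,), name="source_tokens")
x = layers.Embedding(source_vocab_size, embedding_dim)(encoder_inputs)
# ...and returns its final hidden and cell states, which act as a fixed-size
# summary (the "context") of the input sentence
_, state_h, state_c = layers.LSTM(state_size, return_state=True)(x)
encoder_states = [state_h, state_c]

The decoder, whose use during training and deployment is discussed in the following subsections, would be initialized with encoder_states and trained to generate the target-language sentence token by token.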