Hands-On High Performance with Go

Boost and optimize the performance of your Golang applications at scale with resilience

Product type: Paperback
Published: March 2020
Publisher: Packt
ISBN-13: 9781789805789
Length: 406 pages
Edition: 1st
Author: Bob Strecansky
Table of Contents

Preface
Section 1: Learning about Performance in Go
1. Introduction to Performance in Go
2. Data Structures and Algorithms
3. Understanding Concurrency
4. STL Algorithm Equivalents in Go
5. Matrix and Vector Computation in Go
Section 2: Applying Performance Concepts in Go
6. Composing Readable Go Code
7. Template Programming in Go
8. Memory Management in Go
9. GPU Parallelization in Go
10. Compile Time Evaluations in Go
Section 3: Deploying, Monitoring, and Iterating on Go Programs with Performance in Mind
11. Building and Deploying Go Code
12. Profiling Go Code
13. Tracing Go Code
14. Clusters and Job Queues
15. Comparing Code Quality Across Versions
Other Books You May Enjoy

What this book covers

Chapter 1, Introduction to Performance in Go, discusses why performance is important in computer science and why it matters specifically in the Go language.

Chapter 2, Data Structures and Algorithms, deals with data structures and algorithms, which are the basic units of building software, notably complex, performant software. Understanding them will help you think about how to organize and manipulate data most effectively in order to write performant software. Iterators and generators are also essential to Go. This chapter includes explanations of different data structures and algorithms, as well as how their big O notation is affected.
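
As a quick illustration of the big O trade-offs the chapter explores (this sketch is mine, not the book's), compare a linear scan of a slice with a hash map lookup for the same membership test:

```go
// Contrasting O(n) and O(1) membership tests in Go.
package main

import "fmt"

// containsSlice walks the whole slice in the worst case: O(n).
func containsSlice(items []string, target string) bool {
	for _, item := range items {
		if item == target {
			return true
		}
	}
	return false
}

// containsMap performs a single hash lookup: O(1) on average.
func containsMap(items map[string]struct{}, target string) bool {
	_, ok := items[target]
	return ok
}

func main() {
	words := []string{"alpha", "beta", "gamma"}
	index := map[string]struct{}{"alpha": {}, "beta": {}, "gamma": {}}

	fmt.Println(containsSlice(words, "gamma")) // true, after scanning up to n items
	fmt.Println(containsMap(index, "gamma"))   // true, after one hash lookup
}
```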

Chapter 3, Understanding Concurrency, will talk about utilizing channels and goroutines for parallelism and concurrency, which are idiomatic in Go and are the best ways to write high-performance code in your system. Being able to understand when and where to use each of these design patterns is essential to writing performant Go.
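
A minimal sketch (my own example, not taken from the chapter) of the goroutine-and-channel pattern it refers to: fan work out to goroutines and collect the results over a channel.

```go
// Fan out work to goroutines and gather results over a channel.
package main

import "fmt"

func square(n int, results chan<- int) {
	results <- n * n
}

func main() {
	inputs := []int{1, 2, 3, 4}
	results := make(chan int, len(inputs))

	// Each unit of work runs concurrently in its own goroutine.
	for _, n := range inputs {
		go square(n, results)
	}

	// Collect exactly one result per goroutine; order is not guaranteed.
	for range inputs {
		fmt.Println(<-results)
	}
}
```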

Chapter 4, STL Algorithm Equivalents in Go, discusses the standard template library (STL), a concept familiar to many programmers coming from other high-performance languages, namely C++. The STL provides common data structures and functions in a generalized library so that you can iterate rapidly and write performant code at scale.
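
As a hedged illustration of what a standard-library equivalent can look like (this is my example, not the book's), sort.Slice plays a role similar to std::sort with a comparator in C++:

```go
// Sorting a slice in place with a comparison function, analogous to std::sort.
package main

import (
	"fmt"
	"sort"
)

func main() {
	prices := []float64{9.99, 2.50, 5.75}

	// sort.Slice sorts in place using the provided less function.
	sort.Slice(prices, func(i, j int) bool { return prices[i] < prices[j] })

	fmt.Println(prices) // [2.5 5.75 9.99]
}
```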

Chapter 5, Matrix and Vector Computation in Go, deals with matrix and vector computations in general. Matrices are important in graphics manipulation and AI, namely image recognition. Vectors can hold a myriad of objects in dynamic arrays. They use contiguous storage and can be manipulated to accommodate growth.
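
A small sketch, under my own assumptions rather than the chapter's code, showing how a Go slice behaves like the vectors described above: contiguous storage that grows to accommodate new elements.

```go
// Observing slice length and capacity as append grows the backing array.
package main

import "fmt"

func main() {
	v := make([]int, 0, 2) // length 0, capacity 2

	for i := 1; i <= 5; i++ {
		v = append(v, i)
		// When length would exceed capacity, append allocates a larger
		// contiguous backing array and copies the existing elements over.
		fmt.Printf("len=%d cap=%d %v\n", len(v), cap(v), v)
	}
}
```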

Chapter 6, Composing Readable Go Code, focuses on the importance of writing readable Go code. Understanding the patterns and idioms discussed in this chapter will help you write Go code that is more readable and easier for teams to work on together. Also, being able to write idiomatic Go will raise the quality of your code and help your project maintain velocity.

Chapter 7, Template Programming in Go, focuses on template programming in Go. Metaprogramming allows the end user to write Go programs that produce, manipulate, and run Go programs. Go has clear, static dependencies, which helps with metaprogramming. It lacks some metaprogramming facilities that other languages offer, such as __getattr__ in Python, but we can still generate Go code and compile the resulting code if it's deemed prudent.
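
A minimal sketch, under my own assumptions, of the kind of code generation the chapter describes: using the standard text/template package to emit Go source text from a data model (a real generator would write the output to a file, run it through gofmt, and compile it).

```go
// Rendering a Go source stub from a template.
package main

import (
	"os"
	"text/template"
)

const stub = `package shapes

// {{.Name}} is generated from a template.
type {{.Name}} struct {
	Width  float64
	Height float64
}
`

func main() {
	t := template.Must(template.New("stub").Parse(stub))

	// Write the rendered Go source to stdout.
	if err := t.Execute(os.Stdout, struct{ Name string }{Name: "Rectangle"}); err != nil {
		panic(err)
	}
}
```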

Chapter 8, Memory Management in Go, discusses how memory management is paramount to system performance. Being able to utilize a computer's memory footprint to the fullest allows you to keep highly functioning programs in memory so that you don't often have to take the large performance hit of swapping to disk. Being able to manage memory effectively is a core tenet of writing performant Go code.
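
A hedged example (not from the chapter) of one memory-management lever available in Go: reusing buffers with sync.Pool to reduce allocations and garbage-collector pressure on hot paths.

```go
// Reusing bytes.Buffer instances via sync.Pool instead of allocating each call.
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()      // clear contents before returning the buffer
		bufPool.Put(buf) // make it available for reuse instead of a fresh allocation
	}()

	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher"))
}
```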

Chapter 9, GPU Parallelization in Go, focuses on GPU accelerated programming, which is becoming more and more important in today's high-performance computing stacks. We can use the CUDA driver API for GPU acceleration. This is commonly used in topics such as deep learning algorithms.

Chapter 10, Compile Time Evaluations in Go, discusses minimizing dependencies and having each file declare its own dependencies when writing a Go program. Go's regular syntax, module support, and interface satisfaction model also help to improve compile times. These factors, alongside using containers for building Go code and utilizing the Go build cache, help to make Go compilation quicker.

Chapter 11, Building and Deploying Go Code, focuses on how to deploy new Go code. To elaborate further, this chapter explains how we can push this code out to one or multiple places in order to test against different environments. Doing this will allow us to push the envelope of our system's throughput.

Chapter 12, Profiling Go Code, focuses on profiling Go code, which is one of the best ways to determine where bottlenecks live within your Go functions. Profiling will help you deduce where you can make improvements within a function and how much time its individual pieces take with respect to the overall system.
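
A minimal sketch, assuming the standard net/http/pprof package, of how a long-running service typically exposes profiles for this kind of analysis:

```go
// Exposing the standard pprof endpoints on a debug HTTP listener.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// With the server running, `go tool pprof http://localhost:6060/debug/pprof/profile`
	// collects a CPU profile showing where time is spent per function.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```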

Chapter 13, Tracing Go Code, deals with a fantastic way to check interoperability between functions and services within your Go program, also known as tracing. Tracing allows you to pass context through your system and evaluate where you are being held up. Whether it's a third-party API call, a slow messaging queue, or an O(n²) function, tracing will help you to find where this bottleneck resides.
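
A small sketch, my own example rather than the chapter's, using the standard runtime/trace package to record an execution trace that can be inspected with `go tool trace`:

```go
// Recording an execution trace with a named region around a block of work.
package main

import (
	"context"
	"log"
	"os"
	"runtime/trace"
)

func main() {
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	defer trace.Stop()

	// Regions mark spans of work so slow sections stand out in the trace viewer.
	trace.WithRegion(context.Background(), "expensiveWork", func() {
		sum := 0
		for i := 0; i < 1_000_000; i++ {
			sum += i
		}
		_ = sum
	})
}
```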

Chapter 14, Clusters and Job Queues, focuses on the importance of clustering and job queues in Go as good ways to get distributed systems to work synchronously and deliver a consistent message. Distributed computing is difficult, and it becomes very important to watch for potential performance optimizations within both clustering and job queues.
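
A hedged, in-process sketch (not the chapter's distributed setup) of the job-queue idea: a buffered channel acts as the queue, and a fixed pool of workers drains it concurrently.

```go
// A worker pool draining a buffered channel of jobs.
package main

import (
	"fmt"
	"sync"
)

func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		fmt.Printf("worker %d processed job %d\n", id, job)
	}
}

func main() {
	jobs := make(chan int, 10)
	var wg sync.WaitGroup

	// Three workers pull from the same queue until it is closed.
	for id := 1; id <= 3; id++ {
		wg.Add(1)
		go worker(id, jobs, &wg)
	}

	for j := 1; j <= 10; j++ {
		jobs <- j
	}
	close(jobs)

	wg.Wait()
}
```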

Chapter 15, Comparing Code Quality Across Versions, deals with what you should do after you have written, debugged, profiled, and monitored Go code: monitoring your application in the long term for performance regressions. Adding new features to your code is fruitless if you can't continue to deliver the level of performance that other systems in your infrastructure depend on.
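
A minimal sketch, under my own naming, of the benchmark-based regression checking this alludes to: run `go test -bench=.` against each version of the code and compare the results (for example, with a tool such as benchstat).

```go
// concat_test.go: a benchmark tracked across releases to catch regressions.
package concat

import (
	"strings"
	"testing"
)

// join is a hypothetical function whose performance we track across versions.
func join(parts []string) string {
	return strings.Join(parts, ",")
}

func BenchmarkJoin(b *testing.B) {
	parts := []string{"a", "b", "c", "d"}
	for i := 0; i < b.N; i++ {
		_ = join(parts)
	}
}
```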
