Haskell High Performance Programming

Write Haskell programs that are robust and fast enough to stand up to the needs of today

Author: Samuli Thomasson
Product type: Paperback
Published: Sep 2016
Publisher: Packt
ISBN-13: 9781786464217
Length: 408 pages
Edition: 1st Edition
Table of Contents (16 chapters)

Preface
1. Identifying Bottlenecks
2. Choosing the Correct Data Structures
3. Profile and Benchmark to Your Heart's Content
4. The Devil's in the Detail
5. Parallelize for Performance
6. I/O and Streaming
7. Concurrency and Performance
8. Tweaking the Compiler and Runtime System (GHC)
9. GHC Internals and Code Generation
10. Foreign Function Interface
11. Programming for the GPU with Accelerate
12. Scaling to the Cloud with Cloud Haskell
13. Functional Reactive Programming
14. Library Recommendations
Index

Running with the CUDA backend

To compile using the CUDA backend, install the accelerate-cuda package from Hackage. The CUDA platform is also required; refer to the accelerate-cuda package documentation and the CUDA platform documentation for further information:

cabal install accelerate-cuda -fdebug

The Haskell dependencies require some additional build tools in scope, including alex, happy, and c2hs; install those first if necessary. The debug flag enables some additional debugging facilities in our Accelerate CUDA programs, at no extra runtime cost compared to building without the flag. Note, however, that the extra command-line flags it recognizes could interfere with flag handling in the user program.
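If cabal reports that those build tools are missing, they can be installed the same way as any other Hackage package beforehand (the exact invocation below assumes the same classic cabal install workflow used above):

```shell
cabal install alex happy c2hs
```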

In principle, the only code change needed to use the CUDA backend instead of the interpreter is to import the run function from Data.Array.Accelerate.CUDA rather than from the Interpreter module:

import Data.Array.Accelerate.CUDA

The program below executes our matrix product of 100x100 matrices on the GPU using CUDA. Note that swapping back to the interpreter is a...
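As a sketch of what such a program might look like: the matMul below follows the standard replicate-and-fold formulation from the accelerate library's examples, and the fromList generator filling the 100x100 inputs is a placeholder for real data (running this end-to-end assumes a working CUDA installation; swapping the CUDA import for the Interpreter one runs the same code on the CPU):

```haskell
{-# LANGUAGE TypeOperators #-}

import Data.Array.Accelerate      as A
import Data.Array.Accelerate.CUDA (run)  -- swap for Data.Array.Accelerate.Interpreter to run on the CPU
import Prelude                    as P

-- Matrix product via replicate, zipWith (*), and fold (+):
-- every row of 'a' is paired with every column of 'b'.
matMul :: A.Acc (A.Array A.DIM2 Float)
       -> A.Acc (A.Array A.DIM2 Float)
       -> A.Acc (A.Array A.DIM2 Float)
matMul a b = A.fold (+) 0 (A.zipWith (*) aRep bRep)
  where
    A.Z A.:. rowsA A.:. _     = A.unlift (A.shape a) :: A.Z A.:. A.Exp Int A.:. A.Exp Int
    A.Z A.:. _     A.:. colsB = A.unlift (A.shape b) :: A.Z A.:. A.Exp Int A.:. A.Exp Int
    aRep = A.replicate (A.lift (A.Z A.:. A.All  A.:. colsB A.:. A.All)) a
    bRep = A.replicate (A.lift (A.Z A.:. rowsA A.:. A.All  A.:. A.All)) (A.transpose b)

main :: IO ()
main = do
  -- Two 100x100 input matrices; the list generator is a stand-in for real data.
  let dim = A.Z A.:. 100 A.:. (100 :: Int)
      m1  = A.fromList dim [P.fromIntegral i | i <- [0 :: Int ..]] :: A.Array A.DIM2 Float
      m2  = A.fromList dim [P.fromIntegral i | i <- [0 :: Int ..]]
  -- 'run' compiles the expression for the GPU and copies the result back.
  print (run (matMul (A.use m1) (A.use m2)))
```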
