Understanding LLVM and JIT

The LLVM project started as a research project at the University of Illinois, with the aim of providing a modern, Static Single Assignment (SSA)-based compilation strategy that supports type safety, low-level operations, flexibility, and the ability to represent all high-level languages cleanly. LLVM is a collection of modular and reusable compiler and toolchain technologies and, despite its name, has little to do with traditional virtual machines. Some of its main sub-projects are:

  • The LLVM Core libraries were created to provide a modern, source- and target-independent optimizer, along with code generation support for many popular CPUs. These libraries are built around a well-specified code representation known as the LLVM intermediate representation (LLVM IR); a short Julia sketch for inspecting this IR appears after this list.
  • Clang is the LLVM-native C/C++/Objective-C compiler. It aims to deliver amazingly fast compiles (for example, about 3x faster than GCC when compiling Objective-C code in a debug configuration), extremely useful error and warning messages, and a platform for building great source-level tools.
  • DragonEgg integrates the LLVM optimizers and code generator with the GCC parsers. This lets LLVM compile Ada, Fortran, and the other languages supported by the GCC frontends, and gives access to C features not supported by Clang.
  • The LLDB project builds on libraries provided by LLVM and Clang to provide a great native debugger. It uses the Clang ASTs and expression parser, the LLVM JIT, the LLVM disassembler, and so on, to deliver an experience that just works. It is also blazingly fast and much more memory-efficient than GDB at loading symbols.
  • The SAFECode project is a memory safety compiler for C/C++ programs. It instruments code with runtime checks that detect memory safety errors (for example, buffer overflows). It can be used to protect software from security attacks and, like Valgrind, as a memory safety error debugging tool.
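
The LLVM IR mentioned above is what Julia hands to LLVM for every method it compiles, and it can be inspected interactively. The snippet below is a minimal sketch (the function square is illustrative, not from the book); it assumes a recent Julia version in which the @code_llvm and @code_native macros are exported by the InteractiveUtils standard library (in the 0.x releases contemporary with this book they were available directly from Base):

    using InteractiveUtils   # loaded automatically in the REPL

    square(x) = x * x

    @code_llvm square(2)     # prints the LLVM IR generated for square(::Int64)
    @code_native square(2)   # prints the machine code produced by the LLVM backend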

In computing, JIT compilation, also known as dynamic translation, is compilation performed during the execution of a program, at runtime, rather than prior to execution. Most often this means translation to machine code, which is then executed directly, but it can also refer to translation to another format. A system implementing a JIT compiler typically analyses the code being executed continuously and identifies the parts where the speedup gained from compilation would outweigh the overhead of compiling that code.

The LLVM JIT compiler can optimize unnecessary static branches out of a program at runtime, which makes it useful for partial evaluation in cases where a program has many options, most of which can easily be ruled out in a specific environment. This feature was used in the OpenGL pipeline of Mac OS X Leopard (v10.5) to provide support for missing hardware features: graphics code within the OpenGL stack was left in intermediate representation and compiled only when run on the target machine. On systems with high-end graphics processing units (GPUs), the resulting code was quite thin, passing the instructions on to the GPU with minimal changes; on systems with low-end GPUs, LLVM compiled optional procedures that ran on the local central processing unit (CPU) and emulated instructions the GPU could not execute. LLVM improved performance on low-end machines using Intel GMA chipsets. A similar system was developed under the Gallium3D LLVMpipe and incorporated into GNOME Shell to allow it to run without a proper 3D hardware driver loaded.
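
The effect of JIT compilation is easy to observe from Julia itself. The following is a hedged sketch (the functions add and branchy are illustrative, not from the book): the first call to a function pays the one-time cost of compiling a specialization for the concrete argument types, and a branch that is statically decidable for that specialization is optimized out of the generated IR, much like the unnecessary static branches described above:

    using InteractiveUtils   # for @code_llvm; loaded automatically in the REPL

    add(x, y) = x + y

    @time add(1, 2)   # first call: reported time includes JIT compilation of add(::Int64, ::Int64)
    @time add(3, 4)   # second call: reuses the cached machine code, so it is far faster

    # A branch that type inference can resolve for this specialization is removed entirely:
    branchy(x) = x isa Int ? x + 1 : x - 1
    @code_llvm branchy(10)   # the emitted IR contains only the x + 1 path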
