The LLVM project was started as a research project at the University of Illinois. Its aim was to provide a modern, Static Single Assignment (SSA)-based compilation strategy combining type safety, low-level operations, flexibility, and the capability to represent all high-level languages cleanly. Today it is a collection of modular and reusable compiler and toolchain technologies, and despite its name it has little to do with traditional virtual machines. Some of the primary subprojects of LLVM are:
- The LLVM Core libraries were created to provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs. These libraries are built around a well-specified code representation known as the LLVM intermediate representation (LLVM IR); a small sketch of building IR with these libraries appears after this list.
- Clang is an LLVM-native C/C++/Objective-C compiler that aims to deliver amazingly fast compiles (for example, about 3x faster than GCC when compiling Objective-C code in a debug configuration), extremely useful error and warning messages, and a platform for building great source-level tools.
- DragonEgg integrates the LLVM optimizers and code generator with the GCC parsers. This allows LLVM to compile Ada, Fortran, and other languages supported by the GCC compiler frontends, and provides access to C features not supported by Clang.
- The LLDB project builds on libraries provided by LLVM and Clang to provide a great native debugger. It uses the Clang ASTs and expression parser, the LLVM JIT, the LLVM disassembler, and so on, to deliver an experience that just works. It is also blazingly fast and much more memory-efficient than GDB at loading symbols.
- The SAFECode project is a memory safety compiler for C/C++ programs. It instruments code with runtime checks to detect memory safety errors (for example, buffer overflows). It can be used to protect software from security attacks and can also serve as a memory safety error debugging tool, much like Valgrind.
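As a concrete illustration of the LLVM IR point above, the following is a minimal sketch, not taken from any particular project, that uses the LLVM Core libraries' C++ API to construct a small function in IR and print its textual form. It assumes a reasonably recent LLVM installation; the function name mul_add and the build command in the comment are illustrative only.

```cpp
// build_ir.cpp - a minimal sketch of building LLVM IR with the Core libraries.
// Build with something like:
//   clang++ build_ir.cpp $(llvm-config --cxxflags --ldflags --libs core) -o build_ir
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("demo", Ctx);

  // Equivalent C source: int mul_add(int x, int y, int z) { return x * y + z; }
  Type *Int32 = Type::getInt32Ty(Ctx);
  FunctionType *FT = FunctionType::get(Int32, {Int32, Int32, Int32}, false);
  Function *F = Function::Create(FT, Function::ExternalLinkage, "mul_add", &M);

  BasicBlock *Entry = BasicBlock::Create(Ctx, "entry", F);
  IRBuilder<> B(Entry);
  Value *X = F->getArg(0), *Y = F->getArg(1), *Z = F->getArg(2);
  Value *Mul = B.CreateMul(X, Y, "tmp");
  B.CreateRet(B.CreateAdd(Mul, Z, "result"));

  verifyFunction(*F, &errs()); // sanity-check the generated IR
  M.print(outs(), nullptr);    // dump the module as textual LLVM IR
  return 0;
}
```

Running the program prints the module in textual LLVM IR, the same source- and target-independent representation that the optimizer passes and code generators operate on.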
In computing, JIT compilation, also known as dynamic translation, is compilation done during the execution of a program, at runtime, rather than prior to execution. Most often this consists of translation to machine code, which is then executed directly, but it can also refer to translation to another format. A system implementing a JIT compiler typically analyzes the code being executed continuously and identifies the parts where the speedup gained from compilation would outweigh the overhead of compiling that code.

The LLVM JIT compiler can optimize unnecessary static branches out of a program at runtime, and is therefore useful for partial evaluation in cases where a program has many options, most of which can easily be deemed unnecessary in a specific environment. This feature is used in the OpenGL pipeline of Mac OS X Leopard (v10.5) to provide support for missing hardware features. Graphics code within the OpenGL stack was left in intermediate representation and then compiled when run on the target machine. On systems with high-end graphics processing units (GPUs), the resulting code was quite thin, passing the instructions on to the GPU with minimal changes. On systems with low-end GPUs, LLVM would compile optional procedures that run on the local central processing unit (CPU) and emulate instructions that the GPU cannot run internally. LLVM improved performance on low-end machines using Intel GMA chipsets. A similar system was developed under the Gallium3D LLVMpipe and incorporated into the GNOME shell to allow it to run without a proper 3D hardware driver loaded.
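To make the idea of runtime compilation concrete, here is a minimal sketch using LLVM's ORC LLJIT API: it builds an in-memory IR module, hands it to the JIT, and calls the natively compiled code. The ORC API has changed across LLVM releases, so treat this as an illustration for a recent LLVM rather than a version-exact recipe; the function name add1 and the build command are illustrative.

```cpp
// jit_demo.cpp - a minimal sketch of JIT compilation with LLVM's ORC LLJIT API.
// Build with something like:
//   clang++ jit_demo.cpp $(llvm-config --cxxflags --ldflags --libs orcjit native) -o jit_demo
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/TargetSelect.h"
#include <cstdio>

using namespace llvm;
using namespace llvm::orc;

// Build a module containing: int add1(int x) { return x + 1; }
static ThreadSafeModule makeDemoModule() {
  auto Ctx = std::make_unique<LLVMContext>();
  auto M = std::make_unique<Module>("demo", *Ctx);

  Type *Int32 = Type::getInt32Ty(*Ctx);
  Function *F = Function::Create(FunctionType::get(Int32, {Int32}, false),
                                 Function::ExternalLinkage, "add1", M.get());
  IRBuilder<> B(BasicBlock::Create(*Ctx, "entry", F));
  B.CreateRet(B.CreateAdd(F->getArg(0), ConstantInt::get(Int32, 1)));

  return ThreadSafeModule(std::move(M), std::move(Ctx));
}

int main() {
  // Prepare native code generation for the host machine.
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  ExitOnError ExitOnErr;

  // Create the JIT, hand it the IR module, and look up the compiled symbol.
  auto J = ExitOnErr(LLJITBuilder().create());
  ExitOnErr(J->addIRModule(makeDemoModule()));
  auto Sym = ExitOnErr(J->lookup("add1"));

  // Call the machine code that was generated at runtime.
  int (*Add1)(int) = Sym.toPtr<int(int)>();
  std::printf("add1(41) = %d\n", Add1(41));
  return 0;
}
```

In a deployment along the lines of the OpenGL example above, the code would similarly be kept in IR form and only specialized and compiled once the capabilities of the target machine are known.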