Mastering Code Generation: Exploring the LLVM Backend

This article is an excerpt from the book, Learn LLVM 17 - Second Edition, by Kai Nacke, Amy Kwan. Learn how to build your own compiler, from reading the source to emitting optimized machine code. This book guides you through the JIT compilation framework, extending LLVM in a variety of ways, and using the right tools for troubleshooting.

Introduction 

Generating optimized machine code is a critical task in the compilation process, and the LLVM backend plays a pivotal role in this transformation. The backend translates the LLVM Intermediate Representation (IR), derived from the Abstract Syntax Tree (AST), into machine code that can be executed on target architectures. Understanding how to generate this IR effectively is essential for leveraging LLVM's capabilities. This article delves into the intricacies of generating LLVM IR using a simple expression language example. We'll explore the necessary steps, from declaring library functions to implementing a code generation visitor, ensuring a comprehensive understanding of the LLVM backend's functionality. 

Generating code with the LLVM backend 

The task of the backend is to create optimized machine code from the LLVM IR of a module. The IR is the interface to the backend and can be created using a C++ interface or in textual form. Again, the IR is generated from the AST. 

Textual representation of LLVM IR 

Before trying to generate the LLVM IR, it should be clear what we want to generate. For our example expression language, the high-level plan is as follows: 

1. Ask the user for the value of each variable. 

2. Calculate the value of the expression. 

3. Print the result. 

To ask the user to provide a value for a variable and to print the result, two library functions are used: calc_read() and calc_write(). For the with a: 3*a expression, the generated IR is as follows: 

1. The library functions must be declared, as in C, and the syntax also resembles C. The type before the function name is the return type. The type names surrounded by parentheses are the argument types. The declaration can appear anywhere in the file: 

declare i32 @calc_read(ptr)
declare void @calc_write(i32)

2. The calc_read() function takes the variable name as a parameter. The following construct defines a constant holding the name a and the null byte used as a string terminator in C: 

@a.str = private constant [2 x i8] c"a\00" 

3. Next comes the main() function. The parameter names are omitted because they are not used. 

Just as in C, the body of the function is enclosed in braces: 

define i32 @main(i32, ptr) { 

4. Each basic block must have a label. Because this is the first basic block of the function, we name it entry: 

entry: 

5. The calc_read() function is called to read the value for the a variable. The nested getelementptr instruction performs an index calculation to compute the pointer to the first element of the string constant. The function result is assigned to the unnamed %2 variable. 

  %2 = call i32 @calc_read(ptr @a.str) 

6. Next, the variable is multiplied by 3: 

  %3 = mul nsw i32 3, %2 

7. The result is printed on the console via a call to the calc_write() function: 

  call void @calc_write(i32 %3) 

8. Last, the main() function returns 0 to indicate a successful execution: 

  ret i32 0
}

Each value in the LLVM IR is typed, with i32 denoting the 32-bit integer type and ptr denoting a pointer. 

Note: Previous versions of LLVM used typed pointers. For example, a pointer to a byte was expressed as i8* in LLVM. Since LLVM 16, opaque pointers are the default. An opaque pointer is just a pointer to memory, without carrying any type information about it. The notation in LLVM IR is ptr.
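
For comparison, the IR above corresponds roughly to the following C++ program for the with a: 3*a expression. This listing is only an illustration added for this article; the runtime functions calc_read() and calc_write() are merely declared here, and a possible implementation is sketched at the end of the article:

// Hypothetical C++ equivalent of the generated IR for "with a: 3*a".
// calc_read() and calc_write() use C linkage because the IR calls the
// unmangled names @calc_read and @calc_write.
extern "C" int calc_read(const char *VarName);
extern "C" void calc_write(int Value);

int main(int, char **) {    // parameter names omitted, as in the IR
  int a = calc_read("a");   // %2 = call i32 @calc_read(ptr @a.str)
  calc_write(3 * a);        // %3 = mul nsw i32 3, %2; call void @calc_write(i32 %3)
  return 0;                 // ret i32 0
}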

Since it is now clear what the IR looks like, let’s generate it from the AST. 

 Generating the IR from the AST 

The interface, provided in the CodeGen.h header file, is very small: 


#ifndef CODEGEN_H
#define CODEGEN_H

#include "AST.h"

class CodeGen
{
public:
  void compile(AST *Tree);
};
#endif
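
The code generator builds on the AST classes declared in AST.h, which is not part of this excerpt. Purely for orientation, the following hypothetical sketch reconstructs that interface from the way the classes are used below; the book's actual definitions may differ in detail:

// Hypothetical sketch of AST.h, reconstructed from its usage in CodeGen.cpp.
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"

class AST;
class Expr;
class Factor;
class BinaryOp;
class WithDecl;

// Visitor interface: the code generator overrides the last three methods.
class ASTVisitor {
public:
  virtual ~ASTVisitor() {}
  virtual void visit(AST &) {}
  virtual void visit(Expr &) {}
  virtual void visit(Factor &) = 0;
  virtual void visit(BinaryOp &) = 0;
  virtual void visit(WithDecl &) = 0;
};

// Base class of all AST nodes.
class AST {
public:
  virtual ~AST() {}
  virtual void accept(ASTVisitor &V) = 0;
};

// Base class of all expression nodes.
class Expr : public AST {};

// A variable reference or a number literal.
class Factor : public Expr {
public:
  enum ValueKind { Ident, Number };

private:
  ValueKind Kind;
  llvm::StringRef Val;

public:
  Factor(ValueKind Kind, llvm::StringRef Val) : Kind(Kind), Val(Val) {}
  ValueKind getKind() { return Kind; }
  llvm::StringRef getVal() { return Val; }
  void accept(ASTVisitor &V) override { V.visit(*this); }
};

// A binary arithmetic expression.
class BinaryOp : public Expr {
public:
  enum Operator { Plus, Minus, Mul, Div };

private:
  Expr *Left, *Right;
  Operator Op;

public:
  BinaryOp(Operator Op, Expr *L, Expr *R) : Left(L), Right(R), Op(Op) {}
  Expr *getLeft() { return Left; }
  Expr *getRight() { return Right; }
  Operator getOperator() { return Op; }
  void accept(ASTVisitor &V) override { V.visit(*this); }
};

// The 'with' declaration: the declared variable names plus the expression.
class WithDecl : public AST {
  using VarVector = llvm::SmallVector<llvm::StringRef, 8>;
  VarVector Vars;
  Expr *E;

public:
  WithDecl(VarVector Vars, Expr *E) : Vars(Vars), E(E) {}
  VarVector::const_iterator begin() { return Vars.begin(); }
  VarVector::const_iterator end() { return Vars.end(); }
  Expr *getExpr() { return E; }
  void accept(ASTVisitor &V) override { V.visit(*this); }
};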

Because the AST contains the information, the basic idea is to use a visitor to walk the AST. The CodeGen.cpp file is implemented as follows: 

1. The required includes are at the top of the file: 

#include "CodeGen.h"

#include "llvm/ADT/StringMap.h"

#include "llvm/IR/IRBuilder.h"

#include "llvm/IR/LLVMContext.h"

#include "llvm/Support/raw_ostream.h" 

2. The namespace of the LLVM libraries is used for name lookups: 

using namespace llvm; 

3. First, some private members are declared in the visitor. Each compilation unit is represented in LLVM by the Module class and the visitor has a pointer to the module called M. For easy IR generation, the Builder (of type IRBuilder<>) is used. LLVM has a class hierarchy to represent types in IR. You can look up the instances for basic types such as i32 from the LLVM context. 

These basic types are used very often. To avoid repeated lookups, we cache the needed type instances: VoidTy, Int32Ty, PtrTy, and Int32Zero. The V member is the current calculated value, which is updated through the tree traversal. And last, nameMap maps a variable name to the value returned from the calc_read() function: 

namespace {
class ToIRVisitor : public ASTVisitor {
  Module *M;
  IRBuilder<> Builder;
  Type *VoidTy;
  Type *Int32Ty;
  PointerType *PtrTy;
  Constant *Int32Zero;
  Value *V;
  StringMap<Value *> nameMap;

4. The constructor initializes all members: 

public:
  ToIRVisitor(Module *M) : M(M), Builder(M->getContext())
  {
    VoidTy = Type::getVoidTy(M->getContext());
    Int32Ty = Type::getInt32Ty(M->getContext());
    PtrTy = PointerType::getUnqual(M->getContext());
    Int32Zero = ConstantInt::get(Int32Ty, 0, true);
  }

5. For each function, a FunctionType instance must be created. In C++ terminology, this is a function prototype. A function itself is defined with a Function instance. The run() method defines the main() function in the LLVM IR first: 

  void run(AST *Tree) {
    FunctionType *MainFty = FunctionType::get(
        Int32Ty, {Int32Ty, PtrTy}, false);
    Function *MainFn = Function::Create(
        MainFty, GlobalValue::ExternalLinkage,
        "main", M);

6. Then we create the BB basic block with the entry label, and attach it to the IR builder: 

    BasicBlock *BB = BasicBlock::Create(M->getContext(),
                                        "entry", MainFn);
    Builder.SetInsertPoint(BB);

7. With this preparation done, the tree traversal can begin: 

    Tree->accept(*this); 

8. After the tree traversal, the computed value is printed via a call to the calc_write() function. Again, a function prototype (an instance of FunctionType) has to be created. The only parameter is the current value, V: 


    FunctionType *CalcWriteFnTy =
        FunctionType::get(VoidTy, {Int32Ty}, false);
    Function *CalcWriteFn = Function::Create(
        CalcWriteFnTy, GlobalValue::ExternalLinkage,
        "calc_write", M);
    Builder.CreateCall(CalcWriteFnTy, CalcWriteFn, {V});

9. The generation finishes by returning 0 from the main() function:

    Builder.CreateRet(Int32Zero);
  }

10. A WithDecl node holds the names of the declared variables. First, we create a function prototype for the calc_read() function: 


  virtual void visit(WithDecl &Node) override {
    FunctionType *ReadFty =
        FunctionType::get(Int32Ty, {PtrTy}, false);
    Function *ReadFn = Function::Create(
        ReadFty, GlobalValue::ExternalLinkage,
        "calc_read", M);

11. The method loops through the variable names: 


    for (auto I = Node.begin(), E = Node.end(); I != E;
         ++I) {

12. For each variable, a string containing the variable name is created: 


      StringRef Var = *I;
      Constant *StrText = ConstantDataArray::getString(
          M->getContext(), Var);
      GlobalVariable *Str = new GlobalVariable(
          *M, StrText->getType(),
          /*isConstant=*/true,
          GlobalValue::PrivateLinkage,
          StrText, Twine(Var).concat(".str"));

13. Then the IR code to call the calc_read() function is created. The string created in the previous step is passed as a parameter: 


      CallInst *Call =
          Builder.CreateCall(ReadFty, ReadFn, {Str});

14. The returned value is stored in the nameMap map for later use: 


      nameMap[Var] = Call;
    }

15. The tree traversal continues with the expression: 


    Node.getExpr()->accept(*this);
  };

16. A Factor node is either a variable name or a number. For a variable name, the value is looked up in the nameMap map. For a number, the value is converted to an integer and turned into a constant value: 

  virtual void visit(Factor &Node) override {
    if (Node.getKind() == Factor::Ident) {
      V = nameMap[Node.getVal()];
    } else {
      int intval;
      Node.getVal().getAsInteger(10, intval);
      V = ConstantInt::get(Int32Ty, intval, true);
    }
  };

17. And last, for a BinaryOp node, the correct calculation operation must be chosen: 


  virtual void visit(BinaryOp &Node) override {
    Node.getLeft()->accept(*this);
    Value *Left = V;
    Node.getRight()->accept(*this);
    Value *Right = V;
    switch (Node.getOperator()) {
    case BinaryOp::Plus:
      V = Builder.CreateNSWAdd(Left, Right); break;
    case BinaryOp::Minus:
      V = Builder.CreateNSWSub(Left, Right); break;
    case BinaryOp::Mul:
      V = Builder.CreateNSWMul(Left, Right); break;
    case BinaryOp::Div:
      V = Builder.CreateSDiv(Left, Right); break;
    }
  };
};
}

18. With this, the visitor class is complete. The compile() method creates the LLVM context and the module, runs the tree traversal, and dumps the generated IR to the console: 


void CodeGen::compile(AST *Tree) {
  LLVMContext Ctx;
  Module *M = new Module("calc.expr", Ctx);
  ToIRVisitor ToIR(M);
  ToIR.run(Tree);
  M->print(outs(), nullptr);
}

We have now implemented the frontend of the compiler, from reading the source up to generating the IR. Of course, all these components must work together on user input, which is the task of the compiler driver. We also need to implement the functions needed at runtime. Both are topics of the next section of the book. 
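
Although the runtime is covered in detail in the next section, a minimal sketch helps to see how the generated code can be executed. The version below is only an assumption using plain C standard I/O; it is not the book's implementation:

// Hypothetical runtime library for the calc language, matching the declared
// signatures i32 @calc_read(ptr) and void @calc_write(i32).
#include <cstdio>

extern "C" int calc_read(const char *VarName) {
  int Value = 0;
  std::printf("Enter a value for %s: ", VarName);
  if (std::scanf("%d", &Value) != 1)
    Value = 0; // fall back to 0 on malformed input
  return Value;
}

extern "C" void calc_write(int Value) {
  std::printf("The result is: %d\n", Value);
}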

Conclusion 

In conclusion, the process of generating LLVM IR from an AST involves multiple steps, each crucial for producing efficient machine code. This article highlighted the structure and components necessary for this task, including function declarations, basic block management, and tree traversal using a visitor pattern. By carefully managing these elements, developers can harness the power of LLVM to create optimized and reliable machine code. The integration of all these components, alongside user input and runtime functions, completes the frontend implementation of the compiler. This sets the stage for the next phase, focusing on the compiler driver and runtime functions, ensuring seamless execution and integration of the compiled code. 

Author Bios

Kai Nacke is a professional IT architect currently residing in Toronto, Canada. He holds a diploma in computer science from the Technical University of Dortmund, Germany, and his diploma thesis on universal hash functions was recognized as the best of the semester. With over 20 years of experience in the IT industry, Kai has extensive expertise in the development and architecture of business and enterprise applications. In his current role, he evolves an LLVM/clang-based compiler. For several years, Kai served as the maintainer of LDC, the LLVM-based D compiler. He is the author of D Web Development and Learn LLVM 12, both published by Packt. In the past, he was a speaker in the LLVM developer room at the Free and Open Source Software Developers’ European Meeting (FOSDEM).

Amy Kwan is a compiler developer currently residing in Toronto, Canada. Originally from the Canadian prairies, Amy holds a Bachelor of Science in Computer Science from the University of Saskatchewan. In her current role, she leverages LLVM technology as a backend compiler developer. Previously, Amy was a speaker at the LLVM Developer Conference in 2022 alongside Kai Nacke.