
Preparing Optimizations

  • 11 min read
  • 04 Jun 2015


In this article by Mayur Pandey and Suyog Sarda, authors of LLVM Cookbook, we will look into the following recipes:

  • Various levels of optimization
  • Writing your own LLVM pass
  • Running your own pass with the opt tool
  • Using another pass in a new pass


Once the source code transformation completes, the output is in LLVM IR form. This IR serves as a common platform for conversion into assembly code, depending on the backend. However, before it is converted into assembly code, the IR can be optimized to produce more effective code. The IR is in SSA form, where every new assignment to a variable is a new variable itself, which is the defining property of an SSA representation.
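As a minimal illustration (not taken from the example used later), re-assigning a C variable, as in x = x + 1; x = x * 2;, defines a fresh SSA value for each assignment in the IR instead of overwriting the previous one:

    %x1 = add nsw i32 %x0, 1   ; first assignment defines the new value %x1
    %x2 = mul nsw i32 %x1, 2   ; second assignment defines %x2 rather than overwriting %x1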

In the LLVM infrastructure, a pass serves the purpose of optimizing LLVM IR. A pass runs over the LLVM IR, processes the IR, analyzes it, identifies the optimization opportunities, and modifies the IR to produce optimized code. The command-line interface opt is used to run optimization passes on LLVM IR.

Various levels of optimization

There are various levels of optimization, starting at 0 and going up to 3 (there is also s for space optimization). The code gets more and more optimized as the optimization level increases. Let's explore the various optimization levels.

Getting ready

Various optimization levels can be understood by running the opt command-line interface on LLVM IR. For this, an example C program can first be converted to IR using the Clang frontend.

  1. Open an example.c file and write the following code in it:
    $ vi example.c
     int main(int argc, char **argv) {
       int i, j, k, t = 0;
       for (i = 0; i < 10; i++) {
         for (j = 0; j < 10; j++) {
           for (k = 0; k < 10; k++) {
             t++;
           }
         }
         for (j = 0; j < 10; j++) {
           t++;
         }
       }
       for (i = 0; i < 20; i++) {
         for (j = 0; j < 20; j++) {
           t++;
         }
         for (j = 0; j < 20; j++) {
           t++;
         }
       }
       return t;
     }
  2. Now convert this into LLVM IR using the clang command, as shown here:
     $ clang -S -O0 -emit-llvm example.c

    A new file, example.ll, will be generated, containing LLVM IR. This file will be used to demonstrate the various optimization levels available.

How to do it…

Do the following steps:

  1. The opt command-line tool can be run on the generated IR file, example.ll:

     $ opt -O0 -S example.ll

     The -O0 option specifies the least optimization level.

  2. Similarly, you can run other optimization levels:

     $ opt -O1 -S example.ll
     $ opt -O2 -S example.ll
     $ opt -O3 -S example.ll
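     The s level mentioned earlier, which optimizes for code size, can be exercised in the same way (assuming your build of opt accepts -Os, as current releases do):

     $ opt -Os -S example.ll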

How it works…

The opt command-line interface takes the example.ll file as input and runs the series of passes specified for each optimization level; some passes may be run more than once within the same level. To see which passes are used at each optimization level, add the --debug-pass=Structure command-line option to the previous opt commands.
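For instance, assuming example.ll is in the current directory, the pass pipeline run at -O2 can be printed like this (the listing itself is omitted here because it varies between LLVM versions):

    $ opt -O2 -S example.ll --debug-pass=Structure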


Writing your own LLVM pass

All LLVM passes are subclasses of the Pass class, and they implement functionality by overriding virtual methods inherited from Pass. Through these passes, LLVM applies a chain of analyses and transformations on the target program.

Getting ready

Let's see how to write a pass. We will name the pass function block counter; once done, it will simply display the name of each function and count the basic blocks in that function when run. First, a Makefile needs to be written for the pass. Follow the given steps to write one:

  1. Open a Makefile in the llvm lib/Transforms folder:

    $ vi Makefile
  2. Specify the path to the LLVM root folder and the library name, and make this pass a loadable module by specifying it in the Makefile, as follows:

    LEVEL = ../../..
    LIBRARYNAME = FuncBlockCount
    LOADABLE_MODULE = 1
    include $(LEVEL)/Makefile.common

This Makefile specifies that all the .cpp files in the current directory are to be compiled and linked into a shared object.

How to do it…

Do the following steps:

  1. Create a new .cpp file called FuncBlockCount.cpp:

    $ vi FuncBlockCount.cpp
  2. In this file, include some header files from LLVM:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"
  3. Include the llvm namespace to enable access to LLVM functions:

    using namespace llvm;
  4. Then start with an anonymous namespace:

    namespace {
  5. Next declare the pass:

    struct FuncBlockCount : public FunctionPass {
  6. Then declare the pass identifier, which will be used by LLVM to identify the pass:

    static char ID;
    FuncBlockCount() : FunctionPass(ID) {}
  7. This step is one of the most important steps in writing a pass: writing a run function. Since this pass inherits from FunctionPass and runs on functions, a runOnFunction method is defined:

     bool runOnFunction(Function &F) override {
       errs() << "Function " << F.getName() << '\n';
       return false;
     }
     };
     }

    This function prints the name of the function that is being processed.

  8. The next step is to initialize the pass ID:

    char FuncBlockCount::ID = 0;
  9. Finally, the pass needs to be registered, with a command-line argument and a name:

    static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);
    Putting everything together, the entire code looks like this:
    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;
    namespace {
    struct FuncBlockCount : public FunctionPass {
    static char ID;
    FuncBlockCount() : FunctionPass(ID) {}
    bool runOnFunction(Function &F) override {
       errs() << "Function " << F.getName() << 'n';
       return false;
    }
               };
           }
           char FuncBlockCount::ID = 0;
           static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);
    

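The Getting ready section above mentions counting the basic blocks of each function, whereas this listing only prints the function name. As a possible extension (not part of the original listing), runOnFunction could also report the count, for example through Function's size() method:

     bool runOnFunction(Function &F) override {
       // F.size() is the number of basic blocks in the function.
       errs() << "Function " << F.getName() << " has " << F.size()
              << " basic blocks\n";
       return false;   // the IR is not modified
     }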
How it works…

A simple gmake command compiles the file, and a new file, FuncBlockCount.so, is generated at the LLVM root directory. This shared object file can be dynamically loaded into the opt tool to run the pass on a piece of LLVM IR code. How to load and run it will be demonstrated in the next recipe.
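A minimal sketch of the build step, assuming the Makefile and FuncBlockCount.cpp sit together in their own folder under the LLVM source tree (as the LEVEL = ../../.. setting implies; the path below is only an example):

    $ cd <llvm_source_root>/lib/Transforms/FuncBlockCount
    $ gmake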


Running your own pass with the opt tool

The pass written in the previous recipe, Writing your own LLVM pass, is ready to be run on the LLVM IR. This pass needs to be loaded dynamically for the opt tool to recognize and execute it.

How to do it…

Do the following steps:

  1. Write the C test code in the sample.c file, which we will convert into an .ll file in the next step:

    $ vi sample.c
     
     int foo(int n, int m) {
       int sum = 0;
       int c0;
       for (c0 = n; c0 > 0; c0--) {
         int c1 = m;
         for (; c1 > 0; c1--) {
           sum += c0 > c1 ? 1 : 0;
         }
       }
       return sum;
     }
  2. Convert the C test code into LLVM IR using the following command:

     $ clang -O0 -S -emit-llvm sample.c -o sample.ll

    This will generate a sample.ll file.

  3. Run the new pass with the opt tool, as follows:

     $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

    The output will look something like this:

    Function foo

How it works…

As seen in the preceding output, the shared object is loaded dynamically into the opt command-line tool, which then runs the pass. The pass goes over each function and displays its name; it does not modify the IR. A further enhancement of the new pass is demonstrated in the next recipe.


Using another pass in a new pass

A pass may require another pass to get some analysis data, heuristics, or any such information to decide on a further course of action. The pass may just require some analysis such as memory dependencies, or it may require the altered IR as well. The new pass that you just saw simply prints the name of the function. Let's see how to enhance it to count the basic blocks in a loop, which also demonstrates how to use other pass results.

Getting ready

The code used in the previous recipe remains the same. Some modifications are required, however, to enhance it, as demonstrated in the next section, so that it counts the number of basic blocks in the loops of the IR.

How to do it…

The getAnalysis function is used to retrieve the results of another pass:

  1. Since the new pass will be counting the basic blocks inside loops, it requires loop information. This is retrieved with a getAnalysis call (the analysis dependency it relies on is described in the note after these steps):

     LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
  2. This will call the LoopInfo pass to get information on the loops in the function. Iterating through this object gives the basic block information:

     unsigned num_Blocks = 0;
     Loop::block_iterator bb;
     for (bb = L->block_begin(); bb != L->block_end(); ++bb)
       num_Blocks++;
     errs() << "Loop level " << nest << " has " << num_Blocks
            << " blocks\n";
  3. This will go over the loop to count the basic blocks inside it. However, it counts only the basic blocks of the outermost loops. To get information on the inner loops as well, retrieve the sub-loops with getSubLoops and recurse into them. Putting the logic in a separate function and calling it recursively makes more sense:

     void countBlocksInLoop(Loop *L, unsigned nest) {
       unsigned num_Blocks = 0;
       Loop::block_iterator bb;
       for (bb = L->block_begin(); bb != L->block_end(); ++bb)
         num_Blocks++;
       errs() << "Loop level " << nest << " has " << num_Blocks
              << " blocks\n";
       std::vector<Loop*> subLoops = L->getSubLoops();
       Loop::iterator j, f;
       for (j = subLoops.begin(), f = subLoops.end(); j != f; ++j)
         countBlocksInLoop(*j, nest + 1);
     }
     virtual bool runOnFunction(Function &F) {
       LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
       errs() << "Function " << F.getName() << "\n";
       for (Loop *L : *LI)
         countBlocksInLoop(L, 0);
       return false;
     }
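Note that, with the legacy pass manager, a getAnalysis<LoopInfoWrapperPass>() call only succeeds if the pass declares its dependency on that analysis. A minimal sketch of that declaration follows; the override goes inside the FuncBlockCount struct and the header include at the top of the file (neither is shown in the listing above):

     #include "llvm/Analysis/LoopInfo.h"   // declares LoopInfoWrapperPass and Loop

     void getAnalysisUsage(AnalysisUsage &AU) const override {
       // Ask the pass manager to run LoopInfo before this pass, and
       // promise that this pass does not invalidate any analyses.
       AU.addRequired<LoopInfoWrapperPass>();
       AU.setPreservesAll();
     }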

How it works…

The newly modified pass now needs to run on a sample program. Follow the given steps to modify and run the sample program:

  1. Open the sample.c file and replace its content with the following program:

     int main(int argc, char **argv) {
       int i, j, k, t = 0;
       for (i = 0; i < 10; i++) {
         for (j = 0; j < 10; j++) {
           for (k = 0; k < 10; k++) {
             t++;
           }
         }
         for (j = 0; j < 10; j++) {
           t++;
         }
       }
       for (i = 0; i < 20; i++) {
         for (j = 0; j < 20; j++) {
           t++;
         }
         for (j = 0; j < 20; j++) {
           t++;
         }
       }
       return t;
     }
  2. Convert it into a .ll file using Clang:

     $ clang -O0 -S -emit-llvm sample.c -o sample.ll
  3. Run the new pass on the previous sample program:

     $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

    The output will look something like this:

    Function main
    Loop level 0 has 11 blocks
    Loop level 1 has 3 blocks
    Loop level 1 has 3 blocks
    Loop level 0 has 15 blocks
    Loop level 1 has 7 blocks
    Loop level 2 has 3 blocks
    Loop level 1 has 3 blocks
    

There's more…

LLVM's pass manager provides a debug pass option that gives us the chance to see which passes interact with our analyses and optimizations, as follows:

$ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll -disable-output -debug-pass=Structure

Summary

In this article, you explored the various optimization levels and the optimization techniques that kick in at each level. We also saw a step-by-step approach to writing our own LLVM pass.
