Setting up the development environment

When you work in the embedded systems field, there are terms you must be familiar with before even setting up your environment. They are as follows:

  • Target: This is the machine for which the binaries resulting from the build process are produced. This is the machine that will run the binary.
  • Host: This is the machine where the build process takes place.
  • Native compilation: This is also called a native build. It happens when the target and the host are the same; that is, when you build, on machine A (the host), a binary that is going to be executed on the same machine (A, the target) or a machine of the same kind. Native compilation requires a native compiler; that is, a compiler where the target and the host are the same.
  • Cross-compilation: Here, the target and the host are different. This is when you build, on machine A (the host), a binary that is going to be executed on machine B (the target). In this case, the host (machine A) must have a cross-compiler that supports the target architecture installed. Thus, a cross-compiler is a compiler where the target is different from the host. The sketch after this list contrasts the two approaches.
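
As a quick illustration, here is a minimal sketch contrasting the two builds on an x86 host. The source file hello.c is hypothetical, and the ARM cross-toolchain shown here is the one we install later in this chapter:

$ # Native compilation: host and target are both x86
$ gcc -o hello hello.c
$ file hello        # should report an x86-64 executable
$ # Cross-compilation: built on x86 (host), runs on ARM (target)
$ arm-linux-gnueabihf-gcc -o hello-arm hello.c
$ file hello-arm    # should report a 32-bit ARM executable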

Because embedded computers have limited or reduced resources (CPU, RAM, disk, and so on), it is common for the hosts to be x86 machines, which are much more powerful and have far more resources to speed up the development process. However, over the past few years, embedded computers have become more powerful, and they tend to be used for native compilation (thus used as the host). A typical example is the Raspberry Pi 4, which has a powerful quad-core CPU and up to 8 GB of RAM.

In this chapter, we will be using an x86 machine as the host, both for native builds and for cross-compilation. So, the term "native build" will refer to an "x86 native build." This host machine runs Ubuntu 18.04.

To quickly check this information, you can use the following command:

$ lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:    18.04
Codename:   bionic

My computer is an ASUS RoG with a 16-core AMD Ryzen CPU (you can use the lscpu command to pull this information out), 16 GB of RAM, a 256 GB SSD, and a 1 TB magnetic hard drive (information you can obtain using the df -h command). That said, a quad-core CPU and 4 or 8 GB of RAM would be enough, at the cost of an increased build duration. My favorite editor is Vim, but you are free to use the one you are most comfortable with. If you are using a desktop machine, you could use Visual Studio Code (VS Code), which is becoming widely used.
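
If you want to inspect your own host in the same way, the two commands mentioned above are all you need:

$ lscpu    # CPU model, core/thread count, and architecture
$ df -h    # mounted filesystems with human-readable sizes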

Now that we are familiar with the compilation-related keywords we will be using, we can start preparing the host machine.

Setting up the host machine

Before you can start the development process, you need to set up an environment. The environment dedicated to Linux development is quite simple, at least on Debian-based systems, which is what we are using.

On the host machine, you need to install a few packages, as follows:

$ sudo apt update
$ sudo apt install gawk wget git diffstat unzip \
       texinfo gcc-multilib build-essential chrpath socat \
       libsdl1.2-dev xterm ncurses-dev lzop libelf-dev make

In the preceding code, we installed a few development tools and some mandatory libraries (such as ncurses, which provides the text-based user interface that's used when configuring the Linux kernel).

Now, we need to install the compiler and the tools (linker, assembler, and so on) for the build process to work properly and produce an executable for the target. This set of tools is called Binutils, and the compiler + Binutils (+ other build-time dependency libraries, if any) combination is called a toolchain. With this vocabulary in mind, you can make sense of sentences such as "I need a toolchain for <this> architecture."
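
Beyond the compiler driver itself, Binutils supplies the assembler, the linker, and several binary inspection tools. Assuming a toolchain is already installed, the harmless version queries below confirm they are present:

$ as --version | head -1         # the GNU assembler
$ ld --version | head -1         # the GNU linker
$ objdump --version | head -1    # object file disassembler/inspector
$ readelf --version | head -1    # ELF file inspector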

Understanding and installing toolchains

Before we can start compiling, we need to install the necessary packages and tools for native or ARM cross-compiling; that is, the toolchains. GCC is the compiler that's supported by the Linux kernel. A lot of macros that are defined in the kernel are GCC-related. Due to this, we will use GCC as our (cross-)compiler.

For a native compilation, you can use the following toolchain installation command:

sudo apt install gcc binutils

When you need to cross-compile, you must identify and install the right toolchain. Unlike a native compiler's, cross-compiler executables are prefixed with a string that identifies the target they produce code for. Thus, to identify architecture-specific toolchains, a naming convention has been defined: arch[-vendor][-os]-abi. Let's look at what the fields in the pattern mean:

  • arch identifies the architecture; that is, arm, mips, x86, i686, and so on.
  • vendor identifies the toolchain supplier (company); for example, Bootlin or Linaro. It is none if there is no provider, or the field is simply omitted.
  • os is the target operating system; that is, linux or none (bare metal). If omitted, bare metal is assumed.
  • abi stands for application binary interface. It describes what the resulting binary will look like: the function call convention, how parameters are passed, and more. Possible conventions include eabi, gnueabi, and gnueabihf. Let's look at these in more detail:
    • eabi means the compiled code will run on a bare-metal ARM core.
    • gnueabi means the code will be compiled for Linux.
    • gnueabihf is the same as gnueabi, but hf stands for hard float, which indicates that the compiler and its underlying libraries use hardware floating-point instructions rather than a software implementation of floating point (such as a fixed-point software implementation). If no floating-point hardware is available, the instructions are trapped and performed by a floating-point emulation module instead. When software emulation is used, the only functional difference is slower execution. A sketch showing how to verify the float ABI of a compiled object follows this list.
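
As an aside, here is a hedged way to check which float ABI a toolchain emits: compile an object and inspect its ARM attributes. The file hello.c is hypothetical, and this assumes the gnueabihf toolchain installed later in this section:

$ arm-linux-gnueabihf-gcc -c hello.c -o hello.o
$ readelf -A hello.o | grep Tag_ABI_VFP_args    # "VFP registers" indicates hard float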

The following are some toolchain names to illustrate the use of the pattern:

  • arm-none-eabi: This is a toolchain that targets the ARM architecture. It has no vendor, targets a bare-metal system (does not target an operating system), and complies with the ARM EABI.
  • arm-none-linux-gnueabi or arm-linux-gnueabi: This is a toolchain that produces objects for the ARM architecture to be run on Linux with the default configuration (ABI) provided by the toolchain. Note that arm-none-linux-gnueabi is the same as arm-linux-gnueabi because, as we have seen, when no vendor is specified, we assume there isn't one. The variant of this toolchain supporting hardware floating point would be arm-linux-gnueabihf or arm-none-linux-gnueabihf.

Now that we are familiar with toolchain naming conventions, we can determine which toolchain can be used to cross-compile for our target architecture.

To cross-compile for a 32-bit ARM machine, we would install the toolchain using the following command:

$ sudo apt install gcc-arm-linux-gnueabihf binutils-arm-linux-gnueabihf

Note that the 64-bit ARM backend/support in the Linux tree and GCC is called aarch64. So, the cross-compiler must be called something like gcc-aarch64-linux-gnu*, while Binutils must be called something like binutils-aarch64-linux-gnu*. Thus, for a 64-bit ARM toolchain, we would use the following command:

$ sudo apt install make gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu

Note

Note that aarch64 toolchains are hard-float only. Thus, there is no need to specify hf at the end.

Note that not all versions of the compiler can compile a given Linux kernel version. Thus, it is important to pay attention to both the Linux kernel version and the compiler (GCC) version. While the previous commands installed the latest version that's supported by your distribution, it is possible to target a particular version. To achieve this, you can use gcc-<version>-<arch>-linux-gnu*.

For example, to install version 8 of GCC for aarch64, you can use the following command:

$ sudo apt install gcc-8-aarch64-linux-gnu
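
With a versioned package like this, the compiler driver is suffixed with the version as well; assuming the preceding command succeeded, you would invoke it as follows:

$ aarch64-linux-gnu-gcc-8 --version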

Now that our toolchain has been installed, we can look at the version that was picked by our distribution package manager. For example, to check which version of the aarch64 cross-compiler was installed, we can use the following command:

$ aarch64-linux-gnu-gcc --version
aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
[...]

For the 32-bit ARM variant, we can use the following command:

$ arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
[...]

Finally, for the native version, we can use the following command:

$ gcc --version
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.

Now that we have set up our environment and made sure we are using the right tool versions, we can start downloading the Linux kernel sources and dig into them.

Getting the sources

In the early kernel days (until 2003), an odd-even versioning scheme was used, where even minor numbers denoted stable releases and odd minor numbers denoted unstable (development) releases. When the 2.6 version was released, the versioning scheme switched to X.Y.Z. Let's look at this in more detail:

  • X: This was the actual kernel's version, also called major. It was incremented when there were backward-incompatible API changes.
  • Y: This was the minor revision. It was incremented after functionality was added in a backward-compatible manner.
  • Z: This was also called PATCH. It represented versions related to bug fixes.

This is called semantic versioning and was used until version 2.6.39, when Linus Torvalds decided to bump the version to 3.0, which also meant the end of semantic versioning in 2011. At that point, an X.Y scheme was adopted.

When it came to what would have been version 3.20, Linus argued that he could no longer increase Y. Therefore, he decided to switch to an arbitrary versioning scheme, incrementing X whenever Y got so big that he ran out of fingers and toes to count it. This is why the version moved directly from 3.19 to 4.0.

Now, the kernel uses an arbitrary X.Y versioning scheme, which has nothing to do with semantic versioning.

According to the Linux kernel release model, there are always two latest releases of the kernel out there: the stable release and the long-term support (LTS) release. All bug fixes and new features are collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into his Linux tree, which is called the mainline Linux tree, also known as the master Git repository. This is where every stable release originates from.

Before each new kernel version is released, it is submitted to the community through release candidate tags so that developers can test and polish all the new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go. When he is convinced that the new kernel is ready, he makes the final release. We call this release "stable" to indicate that it is not a "release candidate"; such releases are tagged vX.Y.

There is no strict timeline for making releases, but new mainline kernels are generally released every 2-3 months. Stable kernel releases are based on Linus releases; that is, the mainline tree releases.

Once a stable kernel is released by Linus, it also appears in the linux-stable tree (available at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/), where it becomes a branch. Here, it can receive bug fixes. This tree is called a stable tree because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman. However, all fixes must go into Linus's tree first, which is the mainline repository. Once the bug has been fixed in the mainline repository, it can be applied to previously released kernels that are still maintained by the kernel development community. All the fixes that have been backported to stable releases must meet a set of important criteria before they are considered – one of them is that they "must already exist in Linus's tree."

Note

Bugfix kernel releases are considered stable.

For example, when the 4.9 kernel is released by Linus, the stable kernel is released based on the kernel's numbering scheme; that is, 4.9.1, 4.9.2, 4.9.3, and so on. Such releases are called bugfix kernel releases, and the sequence is usually shortened with the number "4.9.y" when referring to their branch in the stable kernel release tree. Each stable kernel release tree is maintained by a single kernel developer, who is responsible for picking the necessary patches for the release and going through the review/release process. Usually, there are only a few bugfix kernel releases until the next mainline kernel becomes available – unless it is designated as a long-term maintenance kernel.

Every subsystem and kernel maintainer repository is hosted at https://git.kernel.org/pub/scm/linux/kernel/git/. Here, we can also find both the Linus tree and the stable tree. In the Linus tree (https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/), there is only one branch; that is, the master branch. Its tags are either stable releases or release candidates. In the stable tree (https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/), there is one branch per stable kernel release (named linux-<A.B>.y, where <A.B> is the release version in the Linus tree), and each branch contains its bugfix kernel releases.
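
As a quick, hedged illustration, you can list a stable branch remotely without cloning anything; here, we ask the stable tree for the 5.10 series branch:

$ git ls-remote --heads \
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git \
    linux-5.10.y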

Downloading the source and organizing it

In this book, we will be using Linus's tree, which can be downloaded using the following commands:

$ git clone --depth 1 --branch v5.10 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux
$ ls

In the preceding commands, we used --depth 1 to avoid downloading the whole history (only the most recent commit is fetched), which considerably reduces the download size and saves time. Because such a shallow clone does not contain the other tags, we also passed --branch v5.10 so that the v5.10 tag is checked out directly at clone time; with a full clone, a separate git checkout v5.10 would achieve the same result.
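
If you later need another tag in this shallow clone, one hedged approach (v5.11 is used here purely as an illustration) is to fetch that tag shallowly before checking it out:

$ git fetch --depth 1 origin tag v5.11
$ git checkout v5.11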

Note

In this book, we will be dealing with Linux kernel v5.10.
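
A simple way to confirm which version a checked-out tree corresponds to is to ask the top-level Makefile; for the v5.10 tag, this prints 5.10.0:

$ make kernelversion
5.10.0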

Let's look at the content of the main source directory:

  • arch/: To be as generic as possible, architecture-specific code is separated from the rest. This directory contains processor-specific code that's organized in a subdirectory per architecture, such as alpha/, arm/, mips/, arm64/, and so on.
  • block/: This directory contains the code for the block layer, which deals with block storage devices.
  • crypto/: This directory contains the cryptographic API and the encryption algorithm's code.
  • certs/: This directory contains certificates and signing files that support module signature verification, allowing the kernel to load signed modules.
  • Documentation/: This directory contains descriptions of the APIs that are used for different kernel frameworks and subsystems. You should look here before asking any questions on the public forums.
  • drivers/: This is the heaviest directory since it is continuously growing as device drivers get merged. It contains every device driver, organized into various subdirectories.
  • fs/: This directory contains the implementations of the different filesystems that the kernel supports, such as NTFS, FAT, Ext{2,3,4}, sysfs, procfs, NFS, and so on.
  • include/: This directory contains kernel header files.
  • init/: This directory contains the initialization and startup code.
  • ipc/: This directory contains the implementation of the inter-process communication (IPC) mechanisms, such as message queues, semaphores, and shared memory.
  • kernel/: This directory contains architecture-independent portions of the base kernel.
  • lib/: Library routines and some helper functions live here. This includes generic kernel object (kobject) handlers and cyclic redundancy code (CRC) computation functions.
  • mm/: This directory contains memory management code.
  • net/: This directory contains the networking protocol implementations (regardless of the network type).
  • samples/: This directory contains device driver samples for various subsystems.
  • scripts/: This directory contains scripts and tools that are used alongside the kernel. There are other useful tools here.
  • security/: This directory contains the security framework code.
  • sound/: Guess what falls here: audio subsystem code.
  • tools/: This directory contains Linux kernel development and testing tools for various subsystems, such as USB, vhost test modules, GPIO, IIO, and SPI, among others.
  • usr/: This directory currently contains the initramfs implementation.
  • virt/: This is the virtualization directory, which contains the Kernel-based Virtual Machine (KVM) module for a hypervisor.

To enforce portability, any architecture-specific code is confined to the arch/ directory. Moreover, kernel code related to the user space API (system calls, /proc, /sys, and so on) does not change, as doing so would break existing programs.
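
To start poking around these directories yourself, a couple of harmless commands run from the top of the source tree give a feel for its layout (the exact counts vary by kernel version):

$ ls arch/              # one subdirectory per supported architecture
$ ls drivers/ | wc -l   # a rough count of driver subdirectories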

In this section, we have familiarized ourselves with the Linux kernel's source content. After going through all the sources, it seems quite natural to configure them to be able to compile a kernel. In the next section, we will learn how kernel configuration works.
