Making the majority of kernel locks preemptible is the most intrusive change that PREEMPT_RT makes, and this code remains outside of the mainline kernel.
The problem occurs with spin locks, which are used for much of the kernel's locking. A spin lock is a busy-wait mutex that does not require a context switch in the contended case, and so it is very efficient as long as the lock is held for a short time: ideally, for less than the time it would take to reschedule twice. The following diagram shows threads running on two different CPUs contending for the same spin lock. CPU 0 gets it first, forcing CPU 1 to spin, waiting until it is unlocked:
The thread that holds a spin lock cannot be preempted, since the thread that preempts it might enter the same code path and deadlock when it tries to take the same lock. Consequently, in mainline Linux...