Spinlock

From Wikipedia, the free encyclopedia

In software engineering, a spinlock is a lock that causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (the one that holds the lock) blocks or "goes to sleep".

Because they avoid overhead from operating system process rescheduling or context switching, spinlocks are efficient if threads are likely to be blocked for only short periods. For this reason, operating-system kernels often use spinlocks. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require rescheduling. The longer a thread holds a lock, the greater the risk that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock) while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (the allocated time during which a thread can run) spinning until the thread that holds the lock is finally finished.

Implementing spinlocks correctly is challenging because programmers must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such an implementation is possible only with special assembly language instructions, such as atomic (i.e. uninterruptible) test-and-set operations, and cannot be easily implemented in programming languages that do not support truly atomic operations.[1] On architectures without such operations, or if a high-level-language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. However, such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.
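
For illustration, the same acquire-by-atomic-test-and-set idea can be written in a portable high-level language. The sketch below uses C11's <stdatomic.h> atomic_flag, whose test-and-set and clear operations are guaranteed to be atomic; the function and variable names are chosen here to mirror the assembly example that follows and are not taken from any particular implementation.

#include <stdatomic.h>

static atomic_flag locked = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void spin_lock(void)
{
    /* Atomically set the flag and obtain its previous state.  If it was
       already set, another thread holds the lock, so keep spinning. */
    while (atomic_flag_test_and_set_explicit(&locked, memory_order_acquire))
        ;   /* busy-wait */
}

void spin_unlock(void)
{
    /* Clearing the flag with release ordering makes the data written
       inside the critical section visible to the next lock holder. */
    atomic_flag_clear_explicit(&locked, memory_order_release);
}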

Example implementation

The following example uses x86 assembly language to implement a spinlock. It will work on any Intel 80386-compatible processor.

; Intel syntax

locked:                      ; The lock variable. 1 = locked, 0 = unlocked.
     dd      0

spin_lock:
     mov     eax, 1          ; Set the EAX register to 1.
     xchg    eax, [locked]   ; Atomically swap the EAX register with
                             ;  the lock variable.
                             ; This will always store 1 to the lock, leaving
                             ;  the previous value in the EAX register.
     test    eax, eax        ; Test EAX with itself. Among other things, this will
                             ;  set the processor's Zero Flag if EAX is 0.
                             ; If EAX is 0, then the lock was unlocked and
                             ;  we just locked it.
                             ; Otherwise, EAX is 1 and we didn't acquire the lock.
     jnz     spin_lock       ; Jump back to the MOV instruction if the Zero Flag is
                             ;  not set; the lock was previously locked, and so
                             ; we need to spin until it becomes unlocked.
     ret                     ; The lock has been acquired, return to the calling
                             ;  function.

spin_unlock:
     xor     eax, eax        ; Set the EAX register to 0.
     xchg    eax, [locked]   ; Atomically swap the EAX register with
                             ;  the lock variable.
     ret                     ; The lock has been released.

Significant optimizations

The simple implementation above works on all CPUs using the x86 architecture. However, a number of performance optimizations are possible:

On later implementations of the x86 architecture, spin_unlock can safely use an unlocked MOV instead of the slower locked XCHG. This is due to subtle memory ordering rules which support this, even though MOV is not a full memory barrier. However, some processors (some Cyrix processors, some revisions of the Intel Pentium Pro (due to bugs), and earlier Pentium and i486 SMP systems) will do the wrong thing and data protected by the lock could be corrupted. On most non-x86 architectures, explicit memory barrier or atomic instructions (as in the example) must be used. On some systems, such as IA-64, there are special "unlock" instructions which provide the needed memory ordering.
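
In higher-level code the same unlock can be expressed as a release store, letting the compiler pick the cheapest correct instruction for the target. A minimal sketch using the GCC/Clang __atomic built-ins (the pointer parameter is illustrative, not part of the example above):

/* Release the lock.  On later x86 processors this compiles to a plain
   MOV of 0, because ordinary x86 stores already have release semantics;
   on other architectures the compiler emits whatever barrier or special
   store instruction the memory model requires. */
void spin_unlock(volatile int *lock)
{
    __atomic_store_n(lock, 0, __ATOMIC_RELEASE);
}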

To reduce inter-CPU bus traffic, code trying to acquire a lock should loop reading without trying to write anything until it reads a changed value. Because of MESI caching protocols, this causes the cache line for the lock to become "Shared"; then there is remarkably no bus traffic while a CPU waits for the lock. This optimization is effective on all CPU architectures that have a cache per CPU, because MESI is so widespread. On Hyper-Threading CPUs, pausing with rep nop gives additional performance by hinting to the core that it can work on the other hardware thread while this thread spins waiting for the lock.[2]

Transactional Synchronization Extensions and other hardware transactional memory instruction sets serve to replace locks in most cases. Although locks are still required as a fallback, they have the potential to greatly improve performance by having the processor handle entire blocks of atomic operations. This feature is built into some mutex implementations, for example in glibc. Hardware Lock Elision (HLE) in x86 is a weakened but backwards-compatible version of TSX, and it can be used here for locking without losing any compatibility. In this particular case, the processor can choose not to lock until two threads actually conflict with each other.[3]
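
GCC exposes HLE through its __atomic built-ins, so lock elision can be requested without writing the XACQUIRE/XRELEASE prefixes by hand. The following is a rough sketch modelled on the example in the GCC manual; it assumes an x86 compiler with HLE support enabled (e.g. -mhle), and the variable and function names are illustrative.

#include <immintrin.h>          /* _mm_pause */

static int locked = 0;          /* 0 = free, 1 = held */

void hle_lock(void)
{
    /* The HLE_ACQUIRE hint makes the exchange XACQUIRE-prefixed, so the
       processor may elide the lock and run the critical section
       transactionally; on a conflict it falls back to really taking it. */
    while (__atomic_exchange_n(&locked, 1,
                               __ATOMIC_ACQUIRE | __ATOMIC_HLE_ACQUIRE))
        while (locked)
            _mm_pause();        /* read-only wait until the lock looks free */
}

void hle_unlock(void)
{
    /* The XRELEASE-prefixed store ends the elided (or really locked) region. */
    __atomic_store_n(&locked, 0, __ATOMIC_RELEASE | __ATOMIC_HLE_RELEASE);
}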

A simpler version of the test can use the cmpxchg instruction on x86, or the __sync_bool_compare_and_swap built into many Unix compilers.

With the optimizations applied, a sample would look like:

; In C: while (!__sync_bool_compare_and_swap(&locked, 0, 1)) while (locked) __builtin_ia32_pause();
spin_lock:
    mov     ecx, 1             ; Set the ECX register to 1.
retry:
    xor     eax, eax           ; Zero out EAX, because cmpxchg compares against EAX.
    XACQUIRE lock cmpxchg [locked], ecx
                               ; atomically decide: if locked is zero, write ECX to it.
                               ;  XACQUIRE hints to the processor that we are acquiring a lock.
    je      out                ; If we locked it (old value equal to EAX: 0), return.
pause:
    mov     eax, [locked]      ; Read locked into EAX.
    test    eax, eax           ; Perform the zero-test as before.
    jz      retry              ; If it's zero, we can retry.
    rep nop                    ; Tell the CPU that we are waiting in a spinloop, so it can
                               ;  work on the other thread now. Also written as the "pause".
    jmp     pause              ; Keep check-pausing.
out:
    ret                        ; All done.

spin_unlock:
    XRELEASE mov [locked], 0   ; Assuming the memory ordering rules apply, release the 
                               ;  lock variable with a "lock release" hint.
    ret                        ; The lock has been released.

On any multi-processor system that uses the MESI cache-coherence protocol, such a test-and-test-and-set lock (TTAS) performs much better than the simple test-and-set lock (TAS) approach.[4]
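
The difference can be seen in a short C sketch using the GCC __sync_lock_test_and_set built-in, an atomic exchange similar in spirit to the XCHG used above (the function names are illustrative): the plain TAS lock performs an atomic write on every attempt, while the TTAS lock spins on ordinary reads and only attempts the atomic write when the lock appears free.

/* Simple test-and-set (TAS): every attempt is an atomic read-modify-write,
   so the cache line ping-pongs between waiting CPUs. */
void tas_lock(volatile int *lock)
{
    while (__sync_lock_test_and_set(lock, 1))
        ;
}

/* Test-and-test-and-set (TTAS): spin on plain reads, which keep the cache
   line in the Shared state, and only do the atomic write when the lock
   looks free. */
void ttas_lock(volatile int *lock)
{
    for (;;) {
        while (*lock)
            ;                                   /* read-only spin */
        if (!__sync_lock_test_and_set(lock, 1))
            return;                             /* acquired */
    }
}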

With large numbers of processors, adding a random exponential backoff delay before re-checking the lock performs even better than TTAS.[4][5]
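
A backoff loop can be sketched on top of the __sync_bool_compare_and_swap and pause idioms already shown; the starting delay, the cap of 1024 pause iterations and the use of rand() are arbitrary illustrative choices, not taken from any particular implementation.

#include <stdlib.h>                     /* rand */

static volatile int locked = 0;         /* 0 = free, 1 = held, as above */

void spin_lock_backoff(void)
{
    unsigned cap = 1;                   /* current backoff ceiling */

    while (!__sync_bool_compare_and_swap(&locked, 0, 1)) {
        /* Wait a random number of pause iterations, then raise the
           ceiling so that colliding CPUs spread their retries out. */
        unsigned delay = rand() % cap + 1;
        while (delay--)
            __builtin_ia32_pause();     /* the "rep nop" hint */
        if (cap < 1024)
            cap *= 2;

        /* Test-and-test-and-set: wait on plain reads until the lock
           looks free before trying another atomic write. */
        while (locked)
            __builtin_ia32_pause();
    }
}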

A few multi-core processors have a "power-conscious spin-lock" instruction that puts a processor to sleep, then wakes it up on the next cycle after the lock is freed. A spin-lock using such instructions is more efficient and uses less energy than spin locks with or without a back-off loop.[6]

Alternatives

The primary disadvantage of a spinlock is that, while waiting to acquire a lock, it wastes time that might be productively spent elsewhere. There are two ways to avoid this:

  1. Do not acquire the lock. In many situations it is possible to design data structures that do not require locking, e.g. by using per-thread or per-CPU data and disabling interrupts.
  2. Switch to a different thread while waiting. This typically involves attaching the current thread to a queue of threads waiting for the lock, followed by switching to another thread that is ready to do some useful work. This scheme also has the advantage that it guarantees that resource starvation does not occur as long as all threads eventually relinquish locks they acquire and scheduling decisions can be made about which thread should progress first. Spinlocks that never entail switching, usable by real-time operating systems, are sometimes called raw spinlocks.[7] A minimal sketch of this sleep-while-waiting approach follows the list.
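
As a sketch of the second approach on Linux, the futex system call lets a waiting thread sleep in the kernel until the lock word changes and lets the releaser wake one sleeper. This is a deliberately simplified version (no FUTEX_PRIVATE_FLAG, and it always issues a wake on unlock); real implementations track whether any waiters exist.

#include <linux/futex.h>        /* FUTEX_WAIT, FUTEX_WAKE */
#include <sys/syscall.h>        /* SYS_futex */
#include <unistd.h>             /* syscall */

static int locked = 0;          /* 0 = free, 1 = held */

static long futex(int *uaddr, int op, int val)
{
    /* glibc provides no futex() wrapper, so invoke the system call directly. */
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void sleep_lock(void)
{
    while (__sync_lock_test_and_set(&locked, 1)) {
        /* The lock is held: sleep in the kernel until the value changes
           (the kernel re-checks that it is still 1 before sleeping). */
        futex(&locked, FUTEX_WAIT, 1);
    }
}

void sleep_unlock(void)
{
    __sync_lock_release(&locked);       /* release store of 0 */
    futex(&locked, FUTEX_WAKE, 1);      /* wake one sleeping waiter */
}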

Most operating systems (including Solaris, Mac OS X and FreeBSD) use a hybrid approach called "adaptive mutex". The idea is to use a spinlock when trying to access a resource locked by a currently running thread, but to sleep if the thread is not currently running. (The latter is always the case on single-processor systems.)[8]
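
A rough user-space approximation of this idea is to spin for a bounded number of iterations and then give up the rest of the time slice; the spin count of 1000 is arbitrary, and a real adaptive mutex would also check whether the lock owner is currently running on another CPU before choosing to spin.

#include <sched.h>                      /* sched_yield */

static volatile int locked = 0;         /* 0 = free, 1 = held */

void adaptive_lock(void)
{
    for (;;) {
        /* Spin briefly in the hope that the current holder, if it is
           running on another CPU, releases the lock very soon. */
        for (int i = 0; i < 1000; i++) {
            if (!locked && __sync_bool_compare_and_swap(&locked, 0, 1))
                return;
            __builtin_ia32_pause();     /* x86 pause hint, as above */
        }
        /* The holder is probably descheduled (always so on a single
           processor): yield instead of burning the rest of the quantum. */
        sched_yield();
    }
}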

OpenBSD attempted to replace spinlocks with ticket locks, which enforced first-in-first-out behaviour; however, this resulted in more CPU usage in the kernel and larger applications, such as Firefox, becoming much slower.[9][10]

See also

  * Synchronization (computer science)
  * Busy spin
  * Deadlock (computer science)
  * Seqlock
  * Ticket lock

References

  1. ^ Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (Fourth ed.). Addison-Wesley. pp. 176–179. ISBN 0-201-59292-4.
  2. ^ "gcc - x86 spinlock using cmpxchg". Stack Overflow.
  3. ^ "New Technologies in the Arm Architecture" (PDF). Archived (PDF) from the original on 2019-04-02. Retrieved 2019-09-26.
  4. ^ a b Maurice Herlihy and Nir Shavit. "The Art of Multiprocessor Programming". "Spin Locks and Contention".
  5. ^ "Boost.Fiber Tuning: Exponential back-off".
  6. ^ John Goodacre and Andrew N. Sloss. "Parallelism and the ARM Instruction Set Architecture". p. 47.
  7. ^ Jonathan Corbet (9 December 2009). "Spinlock naming resolved". LWN.net. Archived from the original on 7 May 2013. Retrieved 14 May 2013.
  8. ^ Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (Fourth ed.). Addison-Wesley. p. 198. ISBN 0-201-59292-4.
  9. ^ Ted Unangst (2013-06-01). "src/lib/librthread/rthread.c - Revision 1.71". Archived from the original on 2021-02-27. Retrieved 2022-01-25.
  10. ^ Ted Unangst (2016-05-06). "tedu comment on Locking in WebKit - Lobsters".