Searched refs:smp_mb__after_spinlock (Results 1 – 19 of 19) sorted by relevance
6 * This litmus test demonstrates how smp_mb__after_spinlock() may be
27 smp_mb__after_spinlock();
6 * Do spinlocks combined with smp_mb__after_spinlock() provide order
18 smp_mb__after_spinlock();
70 Protect the access with a lock and an smp_mb__after_spinlock()
145 As above, but with smp_mb__after_spinlock() immediately
18 #define smp_mb__after_spinlock() smp_mb() macro
14 #define smp_mb__after_spinlock() smp_mb() macro
12 #define smp_mb__after_spinlock() smp_mb() macro
70 #define smp_mb__after_spinlock() RISCV_FENCE(iorw,iorw) macro
173 #ifndef smp_mb__after_spinlock
174 #define smp_mb__after_spinlock() do { } while (0) macro
33 'after-spinlock (*smp_mb__after_spinlock*) ||
25 smp_mb__after_spinlock() { __fence{after-spinlock}; }
160 of smp_mb__after_spinlock():
174 smp_mb__after_spinlock();
187 This addition of smp_mb__after_spinlock() strengthens the lock acquisition
160 o smp_mb__after_spinlock(), which provides full ordering subsequent
2501 smp_mb__after_spinlock(). The LKMM uses fence events with special
2513 smp_mb__after_spinlock() orders po-earlier lock acquisition
204 smp_mb__after_spinlock(); // Order updates vs. GP. in rcu_tasks_kthread()
911 smp_mb__after_spinlock(); /* Timer expire before wakeup. */ in do_nocb_deferred_wakeup_timer()
1409 smp_mb__after_spinlock(); in kthread_unuse_mm()
498 smp_mb__after_spinlock(); in exit_mm()
634 smp_mb__after_spinlock();
660 been able to write-acquire the lock otherwise. The smp_mb__after_spinlock()
1414 smp_mb__after_spinlock(); in uclamp_sync_util_min_rt_default()
4004 smp_mb__after_spinlock(); in try_to_wake_up()
6168 smp_mb__after_spinlock(); in __schedule()
Completed in 39 milliseconds