- 03 May, 2021 40 commits
-
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We don't actually need to rely on the oob stall bit, provided hard irqs are off in the sections deemed interrupt-free: that alone is sufficient as long as the code does not traverse a pipeline synchronization point (sync_current_irq_stage()) while holding a lock, which would be a bug in and of itself anyway.

Remove the stall/unstall operations from the evl_spinlock implementation, fixing the few locations which were still testing the oob stall bit. That bit is still set by Dovetail on entry to IRQ handlers, which is ok: we will neither use nor affect it anymore, relying only on hard disabled irqs (see the sketch below).

This temporary alignment of the evl_spinlock on the hard spinlock is a first step in revisiting the lock types in the core, before the evl_spinlock is changed again to manage the preemption count.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
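A minimal sketch of the locking discipline this implies, assuming hard_local_irq_save()/hard_local_irq_restore() and DEFINE_HARD_SPINLOCK are the Dovetail primitives they appear to be (the lock name and section body are illustrative only):

    static DEFINE_HARD_SPINLOCK(some_hard_lock);

    static void enter_protected_section(void)
    {
            unsigned long flags;

            /* Hard irqs off is enough; the oob stall bit is left alone. */
            flags = hard_local_irq_save();
            raw_spin_lock(&some_hard_lock);

            /*
             * Interrupt-free section: must not traverse a pipeline
             * synchronization point (sync_current_irq_stage()) here.
             */

            raw_spin_unlock(&some_hard_lock);
            hard_local_irq_restore(flags);
    }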
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Prioritization of timers in timer queues dates back to the Dark Ages of Xenomai 2.x, when multiple time bases would co-exist in the core, some of which represented date values as a count of periodic ticks. In such a case, multiple timers might elapse on the very same tick, hence the need for prioritizing them. With a single time base indexing timers on absolute date values, which are expressed as a 64bit monotonic count of nanoseconds, the likelihood of observing identical trigger dates is very low.

Furthermore, the formerly defined priorities were assigned as follows:

1) high priority to the per-thread periodic and resource timers
2) medium priority to the user-defined timers
3) low priority to the in-band tick emulation timer

It turns out that forcibly prioritizing 1) over 2) is at least debatable, if not questionable: resource timers have no high priority at all, they merely tick on the (unlikely) timeout condition. On the other hand, user-defined timers may well deal with high priority events only some EVL driver code may know about. Finally, handling 3) is a fast operation on top of Dovetail, which is already deferred internally whenever the timer management core detects that some oob activity is running/pending.

So we may remove the logic handling the timer priority, relying only on the trigger date for dispatching. This should save precious cycles in the hot path without any actual downside.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
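A date-only sorted insertion might then reduce to a sketch like this (all names hypothetical, not the actual EVL timer queue implementation):

    struct toy_timer {
            struct list_head link;
            u64 date;               /* absolute trigger date, nanoseconds */
    };

    /* Insert by trigger date only: no priority tie-breaking anymore. */
    static void toy_timer_enqueue(struct list_head *tq, struct toy_timer *t)
    {
            struct toy_timer *it;

            list_for_each_entry(it, tq, link) {
                    if (t->date < it->date) {
                            /* list_add_tail() links t right before it. */
                            list_add_tail(&t->link, &it->link);
                            return;
                    }
            }

            list_add_tail(&t->link, tq);
    }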
-
Philippe Gerum authored
This change first asserts that the FIFO class is the topmost scheduling class by design. From this point, we may check this class upfront when looking for the next runnable thread to pick, without going through the indirection of its .sched_pick handler.

This allows the compiler to fold most of the FIFO picking code into the generic __pick_next_thread() routine, saving an indirect call. This is nicer to the I-cache in all cases, and spares the cycles which would otherwise be consumed by some vulnerability mitigation code like retpolines. On a highly cache-stressed i.mx6q, the worst case latency figures dropped by about 5% with this change in.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
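The resulting fast path might be shaped like this sketch (only __pick_next_thread() and the .sched_pick handler come from the commit; the FIFO picker and fallback iterator names are made up):

    static struct evl_thread *__pick_next_thread(struct evl_rq *rq)
    {
            struct evl_sched_class *sched_class;
            struct evl_thread *next;

            /*
             * FIFO is the topmost class by design: try it directly,
             * letting the compiler inline the picking code.
             */
            next = evl_sched_fifo_pick(rq);
            if (likely(next))
                    return next;

            /* Fall back to the lower classes via the indirect handler. */
            for_each_lower_sched_class(sched_class) {
                    next = sched_class->sched_pick(rq);
                    if (next)
                            return next;
            }

            return NULL;    /* switch to the idle thread */
    }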
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
A thread might resume oob from a CPU which is excluded from the set dedicated to oob scheduling (evl_cpu_affinity), due to a migration which occurred while in-band (e.g. sched_setaffinity()). Make sure the thread is allowed to run briefly on the target CPU nevertheless, so that it can exit cleanly before returning from evl_switch_oob(). The previous implementation would leave it hanging in limbo.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Observables enable the observer design pattern, in which any number of observer threads can be notified of updates to any number of observable subjects, in a loosely coupled fashion. In the same move, an EVL thread becomes in and of itself an observable which can be monitored for events.

As a by-product, provide support for channeling SIGDEBUG notifications to threads through their own observable on request, instead of, or in addition to, issuing SIGDEBUG. The built-in runtime error detection capabilities the core provides can thus feed health monitors, based on the observability of threads.

The core switches to ABI 23 as a result of these changes.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
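From userland, an observer loop might look like this sketch, assuming the libevl observable calls (evl_create_observable(), evl_subscribe(), evl_read_observable()) behave as their names suggest; handle_event() is a made-up application hook, and error checking is omitted:

    #include <evl/observable.h>

    static void handle_event(unsigned int tag, long long value); /* app-defined */

    void observe(const char *name)
    {
            struct evl_notification nf;
            int ofd, ret;

            /* Attach to the observable and start buffering updates. */
            ofd = evl_create_observable(EVL_CLONE_PUBLIC, "%s", name);
            evl_subscribe(ofd, 16 /* backlog */, 0);

            for (;;) {
                    /* Block until the next notification arrives. */
                    ret = evl_read_observable(ofd, &nf, 1);
                    if (ret == 1)
                            handle_event(nf.tag, nf.event.lval);
            }
    }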
-
Philippe Gerum authored
Upon requests involving the SCHED_TP or SCHED_QUOTA policies when these have been compiled out, we should return -EOPNOTSUPP instead of -EINVAL to the caller, so that lack of kernel support can be distinguished from a mere invalid argument issue.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
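The distinction might be sketched like this (config symbol, helper and field names illustrative):

    switch (attrs->sched_policy) {
    case SCHED_QUOTA:
    #ifdef CONFIG_EVL_SCHED_QUOTA
            return set_quota_attrs(thread, attrs);
    #else
            return -EOPNOTSUPP;     /* valid policy, compiled out */
    #endif
    default:
            return -EINVAL;         /* genuinely invalid request */
    }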
-
Philippe Gerum authored
All timespec values passed from/to user-space are now y2038-compliant (i.e. tv_sec is 64bit wide), using the __evl_timespec and __evl_itimerspec type definitions at the kernel boundary. Conversions happen back and forth between these types and the timespec64 and itimerspec64 types used internally.

Invariant: __evl_timespec and __evl_itimerspec are bitwise compatible with __kernel_timespec and __kernel_itimerspec respectively. libevl does assume so.

Also:

- The sanitization fixes the ABI so that timespec and itimerspec structs are always passed by address, ensuring -EFAULT on an invalid pointer received from the user, instead of putting the latter at risk of SIGSEGV by forcing it to copy/dereference these arguments.

- What EVL_CLKIOC_ADJ_TIME should do was never specified in the context of an EVL clock, and no defined use case ever existed. However, this service caused a y2038 problem due to the legacy timex struct argument. This service was removed from the ABI.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
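The boundary conversion implied here might be sketched as follows (the helper name is made up; it relies on the bitwise-compatibility invariant stated above):

    /* Import a y2038-safe user timespec into the kernel-side type. */
    static struct timespec64 evl_ts_to_ts64(const struct __evl_timespec *u_ts)
    {
            struct timespec64 ts64 = {
                    .tv_sec  = (time64_t)u_ts->tv_sec,  /* 64bit seconds */
                    .tv_nsec = u_ts->tv_nsec,
            };

            return ts64;
    }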
-
Philippe Gerum authored
Synchronous breakpoints make sure to keep a ptrace-stepped thread synchronized with its siblings from the same process running in the background, as follows:

- as soon as a ptracer (e.g. gdb) regains control over a thread which just hit a breakpoint or received SIGINT, sibling threads from the same process which run out-of-band are immediately frozen.

- all sibling threads which have been frozen are set to wait on a common barrier before they can be released. Such release happens once all of them have joined the barrier in out-of-band context, after the (single-)stepped thread resumed.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
When a huge number of CPUs is available (e.g. CONFIG_MAXSMP/x86), we might overflow the stack with cpumask_t variables, for instance with stack-based thread init descriptors. Since such a descriptor only needs to refer to a constant cpumask, we can maintain a reference to a global mask instead of embedding a private object there.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
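The change might be sketched like this (struct and field names illustrative; cpu_possible_mask stands in for whatever constant global mask the descriptor refers to):

    /* Before: a full mask embedded in a stack-based descriptor.
     * With CONFIG_MAXSMP, cpumask_t may weigh hundreds of bytes. */
    struct toy_init_attr_old {
            cpumask_t affinity;
    };

    /* After: a mere pointer to a constant, globally-defined mask. */
    struct toy_init_attr {
            const struct cpumask *affinity;
    };

    static struct toy_init_attr iattr = {
            .affinity = cpu_possible_mask,
    };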
-
Philippe Gerum authored
With the new fine-grained locking model fully in place, there are no more users of the ugly big (nk)lock originally inherited from Xenomai's Cobalt core. Drop it and its API altogether. Good riddance.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This is the last step to get rid of the ugly lock, which was still required for serializing accesses to the runqueue information.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
There is no point in expecting the EVL spinlock API to be usable across calls to rescheduling points, so disabling preemption while holding such a lock is useless. The core should call evl_schedule() explicitly when it has to reschedule, and leaking a rescheduling opportunity there is definitely a bug which has to be fixed, not papered over.

With respect to managing the signal vs wakeup issue in event-based waits, EVL-based drivers should use the evl_waitqueue* API for synchronizing on event receipt, which does the right thing on their behalf (see the sketch below).
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
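A waitqueue-based producer/consumer might look like this sketch, assuming evl_wait_event() and evl_wake_up_head() work as their names suggest (struct and helper names are made up; locking around the list is elided for brevity):

    struct toy_channel {
            struct evl_wait_queue wq;
            struct list_head pending;
    };

    /* Consumer: sleeps until data shows up; the waitqueue logic
     * handles the signal vs wakeup race on behalf of the caller. */
    static int toy_pull(struct toy_channel *c)
    {
            return evl_wait_event(&c->wq, !list_empty(&c->pending));
    }

    /* Producer: may run from an oob IRQ handler. */
    static void toy_push(struct toy_channel *c, struct list_head *item)
    {
            list_add_tail(item, &c->pending);
            evl_wake_up_head(&c->wq);
    }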
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This new lock will gradually replace the ugly one in the relevant sections of code, particularly:

- anything related to PI/PP management for mutexes
- changing the scheduling parameters of a thread
- updating a thread's shared state and/or info bits

This is a crucial stage in phasing out the ugly lock. Mutexes won't work properly after this change is in, until the related code is converted as well.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We cannot reuse the rq determined by evl_schedule() in its inner helper, since a migration might have taken place if the former was called from in-band context. Re-fetch the current rq pointer in __evl_schedule() instead.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
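The fix might reduce to something like this sketch (this_evl_rq() is assumed to return the runqueue of the CPU we are currently running on):

    static void __evl_schedule(void)
    {
            /*
             * Do not trust any rq pointer computed by the caller: if
             * we got here from the in-band stage, the thread may have
             * migrated since then. Re-fetch it locally.
             */
            struct evl_rq *rq = this_evl_rq();

            /* ... pick the next runnable thread from rq ... */
    }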
-
Philippe Gerum authored
Manual round-robin is a relic from the dark ages, in most cases used to paper over an implementation issue, which eventually leads to sub-optimal scheduling or even malfunctioning systems. Drop this.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
evl_call_mayday() can only apply to a running thread (i.e. current and non-blocked EVL-wise), called from an interrupt handler which preempted it. Since the watchdog handler is the only place where triggering the mayday signal makes sense, fold the latter action directly into that code. While at it, fix a potential race by holding the lock protecting the ->info flags when raising T_KICKED.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
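The resulting handler might be shaped like this sketch; the lock and notification details are assumptions, and dovetail_send_mayday() is used here on the belief that it is the Dovetail service diverting a task to its mayday trap:

    static void watchdog_handler(struct evl_timer *timer)
    {
            struct evl_thread *curr = evl_current();
            unsigned long flags;

            /* Serialize updates to the ->info bits. */
            raw_spin_lock_irqsave(&curr->lock, flags);
            curr->info |= T_KICKED;
            raw_spin_unlock_irqrestore(&curr->lock, flags);

            /* Force the preempted thread through the mayday trap. */
            dovetail_send_mayday(current);
    }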
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Propagating the proxy tick to the inband stage is a low priority task: postpone this until the very end of the core tick interrupt, saving a branch to a trivial handler in the same move.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Since we want to be able to reschedule immediately from interrupt handlers, the pipeline no longer fires the {enter, exit}_oob_irq() notifiers, which became useless. Interrupt handlers must now call explicitly:

- evl_enter_irq() on entry, to block rescheduling attempts
- evl_leave_irq() on exit, to trigger a call to evl_schedule()
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
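A conforming handler might look like this sketch (the handler body is illustrative; only evl_enter_irq()/evl_leave_irq() come from the commit):

    static irqreturn_t toy_oob_handler(int irq, void *dev_id)
    {
            evl_enter_irq();        /* block rescheduling attempts */

            /* ... acknowledge the device, wake up waiters ... */

            evl_leave_irq();        /* fires evl_schedule() if needed */

            return IRQ_HANDLED;
    }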
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
SCHED_FIFO is the most critical and frequently used scheduling policy with EVL, so there is a net gain in inlining the small helpers manipulating the thread list for this one.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We want the policy to appear clearly in the name.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The new hierarchy of priority scales is as follows:

    EVL_CORE_MIN_PRIO == EVL_WEAK_MIN_PRIO
      ...
    EVL_FIFO_MIN_PRIO == EVL_QUOTA_MIN_PRIO == EVL_TP_MIN_PRIO (== 1)
      ...
    EVL_FIFO_MAX_PRIO == EVL_QUOTA_MAX_PRIO == EVL_TP_MAX_PRIO == EVL_WEAK_MAX_PRIO (< MAX_USER_RT_PRIO)
      ...
    EVL_CORE_MAX_PRIO (> MAX_RT_PRIO)

We reserve a couple of priority levels above the highest inband kthread priority (MAX_RT_PRIO..MAX_RT_PRIO+1), which are guaranteed to be higher than the highest inband user task priority (MAX_USER_RT_PRIO-1) we use for SCHED_FIFO. Those extra levels can be used for EVL kthreads which must top the priority of any userland thread.

SCHED_EVL was dropped in the process, since userland is now constrained to EVL_FIFO_MAX_PRIO by construction.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Add missing policy accessors and init bits.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-