- 15 Jun, 2021 12 commits
-
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We may be running an SMP kernel on a uniprocessor machine whose interrupt controller supports no IPIs. We should attempt to hook IPIs only if the hardware can support multiple CPUs; otherwise doing so is unnecessary and bound to fail.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
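A rough sketch of the guard being described, assuming a hypothetical arch_hook_ipis() helper standing in for the actual pipeline-specific IPI setup code (evl_hook_ipis() is likewise a made-up name):

    #include <linux/cpumask.h>

    /* Hypothetical arch-side routine performing the actual IPI request. */
    extern int arch_hook_ipis(void);

    static int evl_hook_ipis(void)  /* hypothetical name */
    {
            /*
             * An SMP kernel may run on a uniprocessor machine whose
             * interrupt controller provides no IPI: only try to hook
             * IPIs when the hardware can support several CPUs.
             */
            if (num_possible_cpus() == 1)
                    return 0;

            return arch_hook_ipis();
    }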
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We have only very few syscalls, so prefer a plain switch statement over a pointer indirection, which ends up being fairly costly due to exploit mitigations.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
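A minimal sketch of the dispatch pattern, with made-up syscall numbers and handler names rather than the actual EVL ones:

    #include <linux/errno.h>

    /* Made-up handlers, for illustration only. */
    extern long do_oob_read(void *args);
    extern long do_oob_write(void *args);
    extern long do_oob_ioctl(void *args);

    static long dispatch_oob_syscall(unsigned int nr, void *args)
    {
            /*
             * With only a handful of syscalls, a plain switch compiles
             * to a short compare/branch sequence, avoiding the cost of
             * an indirect call through a handler table once retpolines
             * and other exploit mitigations are factored in.
             */
            switch (nr) {
            case 0:
                    return do_oob_read(args);
            case 1:
                    return do_oob_write(args);
            case 2:
                    return do_oob_ioctl(args);
            default:
                    return -ENOSYS;
            }
    }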
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
EVL_HIGH_PERCPU_CONCURRENCY optimizes the implementation for applications with many real-time threads running concurrently on any given CPU core (typically when eight or more threads may be sharing a single CPU core). This combines the scalable scheduler and rb-tree timer indexing behind a single configuration switch, since both aspects are normally coupled.

If the application runs only a few EVL threads per CPU core, this option should be turned off in order to minimize the cache footprint of the queuing operations performed by the scheduler and timer subsystems. Otherwise, it should be turned on in order to get constant-time queuing operations for a large number of runnable threads and outstanding timers.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
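Expressed at the source level, the coupling could look like the sketch below; apart from EVL_HIGH_PERCPU_CONCURRENCY itself, the symbol names used here are assumptions, not the actual options selected:

    /*
     * One switch selects both coupled aspects: the scalable scheduler
     * and rb-tree timer indexing on one side, the linear runqueue and
     * list-based timer queue on the other.
     */
    #ifdef CONFIG_EVL_HIGH_PERCPU_CONCURRENCY
    #define EVL_USE_SCALABLE_SCHED  1       /* multi-level runqueue */
    #define EVL_USE_TIMER_RBTREE    1       /* rb-tree timer indexing */
    #else
    #define EVL_USE_SCALABLE_SCHED  0       /* linear runqueue */
    #define EVL_USE_TIMER_RBTREE    0       /* list-based timer queue */
    #endif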
-
Philippe Gerum authored
For applications with only a few runnable tasks at any point in time, a linear queue ordering those tasks for scheduling delivers better performance on low-end systems due to a smaller CPU cache footprint, compared to the multi-level queue used by the scalable scheduler. Allow users to select between the lightning-fast and scalable scheduler implementations depending on the runtime profile of the application.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
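The trade-off can be sketched as follows, using a simplified thread type rather than the actual EVL scheduler structures; the scalable variant instead keeps one list per priority level plus a bitmap of occupied levels, trading a larger cache footprint for O(1) insertion:

    #include <linux/list.h>

    struct toy_thread {                     /* simplified stand-in */
            int prio;
            struct list_head next;
    };

    /*
     * Linear runqueue: a single priority-ordered list. Insertion is
     * O(n) in the number of runnable threads, but the data touched
     * per operation stays minimal, which wins when n is small.
     */
    static void linear_enqueue(struct list_head *runq, struct toy_thread *t)
    {
            struct toy_thread *pos;

            list_for_each_entry(pos, runq, next) {
                    if (t->prio > pos->prio) {
                            /* Insert right before the first lower-priority thread. */
                            list_add_tail(&t->next, &pos->next);
                            return;
                    }
            }
            list_add_tail(&t->next, runq);  /* lowest priority so far: append */
    }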
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Add (back) the ability to index timers either in an rb-tree or in a basic linked list. The latter delivers lower latency for application systems with very few active timers at any point in time (typically fewer than 10 active timers, e.g. no more than a couple of timed loops and very few timed syscalls).

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
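For contrast, a sketch of what rb-tree timer indexing looks like, with a simplified timer type (not the actual EVL one); the list-based alternative boils down to a date-ordered list walk, analogous to the linear runqueue sketch above:

    #include <linux/rbtree.h>
    #include <linux/ktime.h>

    struct toy_timer {                      /* simplified stand-in */
            ktime_t date;
            struct rb_node rb;
    };

    /*
     * Rb-tree indexing keeps insertion O(log n), which pays off once
     * many timers are outstanding, at the cost of deeper pointer
     * chasing and a larger per-timer footprint when only a handful
     * of timers are active.
     */
    static void toy_timer_insert(struct rb_root *root, struct toy_timer *timer)
    {
            struct rb_node **link = &root->rb_node, *parent = NULL;
            struct toy_timer *pos;

            while (*link) {
                    parent = *link;
                    pos = rb_entry(parent, struct toy_timer, rb);
                    if (ktime_before(timer->date, pos->date))
                            link = &parent->rb_left;
                    else
                            link = &parent->rb_right;
            }
            rb_link_node(&timer->rb, parent, link);
            rb_insert_color(&timer->rb, root);
    }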
-
- 15 May, 2021 2 commits
-
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The pipelined interrupt entry code must always run the common work loop before returning to user mode on the in-band stage, including after the preempted task was demoted from oob to in-band context as a result of handling the incoming IRQ. Failing to do so may cause in-band work to be left pending in this particular case, like _TIF_RETUSER and other _TIF_WORK conditions.

This bug caused the smokey 'gdb' test to fail on x86: https://xenomai.org/pipermail/xenomai/2021-March/044522.html

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
- 03 May, 2021 26 commits
-
-
Philippe Gerum authored
Since #ae18ad28, MAX_RT_PRIO should be used instead.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
A process is now marked for COW-breaking on fork() upon the first call to dovetail_init_altsched(), and must ensure its memory is locked via a call to mlockall(MCL_CURRENT|MCL_FUTURE) as usual. As a result, force_commit_memory() became pointless and was removed from the Dovetail interface.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
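From the application side, the requirement boils down to the usual locking sequence sketched below; evl_attach_self() is assumed here as the libevl attach point, and the error handling convention may not match the exact API:

    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <evl/thread.h>         /* assumed libevl header */

    int main(void)
    {
            int efd;

            /*
             * Lock current and future mappings: COW-breaking on fork()
             * is armed by the first call to dovetail_init_altsched()
             * on the kernel side, so the application only has to keep
             * its memory resident, as usual.
             */
            if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
                    perror("mlockall");
                    return EXIT_FAILURE;
            }

            efd = evl_attach_self("cow-demo:%d", getpid());
            if (efd < 0) {
                    fprintf(stderr, "evl_attach_self: %s\n", strerror(-efd));
                    return EXIT_FAILURE;
            }

            return EXIT_SUCCESS;
    }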
-
Zhang Kun authored
evl/factory.h is included more than once; remove the duplicate inclusion.

Signed-off-by: Zhang Kun <zhangkun@cdjrlc.com>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
An EVL lock is now distinct from a hard lock in that it tracks and disables preemption in the core when held. Such a spinlock may be useful when only EVL threads running out-of-band can contend for the lock, to the exclusion of out-of-band IRQ handlers. In this case, disabling preemption before attempting to grab the lock may be substituted for disabling hard irqs. There are gotchas when using this type of lock from the in-band context; see the comments in evl/lock.h.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
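A sketch of the intended usage pattern; the lock type and data names below are made up (only evl_spin_lock()/evl_spin_unlock() appear elsewhere in this series), see evl/lock.h for the real API and the in-band caveats:

    #include <linux/list.h>
    #include <evl/lock.h>                   /* assumed in-tree header */

    static evl_spinlock_t pending_lock;     /* assumed type/init, EVL threads only */
    static LIST_HEAD(pending_list);

    void queue_pending(struct list_head *item)
    {
            /*
             * Only out-of-band EVL threads take this lock, never
             * out-of-band IRQ handlers: holding the EVL lock disables
             * EVL preemption, so hard irqs can stay enabled across
             * the critical section.
             */
            evl_spin_lock(&pending_lock);
            list_add_tail(item, &pending_list);
            evl_spin_unlock(&pending_lock); /* may trigger a reschedule */
    }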
-
Philippe Gerum authored
Very short sections of code outside of any hot path are protected by such a lock. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
For the most part, the gate lock is nested with a wait queue hard lock - which requires hard irqs to be off - to access the protected sections. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
For the most part, a thread hard lock - which requires hard irqs to be off - is nested with the mutex lock to access the protected sections. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Sleeping voluntarily with EVL preemption disabled is a bug. Add the proper assertion to detect this.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Given the semantics of an evl_flag, disabling preemption manually around the evl_raise_flag(to_flag) -> evl_wait_flag(from_flag) sequence does not make sense.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
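A hedged sketch of the sequence as it stands, assuming a struct evl_flag type and an in-tree header for the kernel-side flags named above:

    #include <evl/flag.h>                   /* assumed in-tree header */

    static struct evl_flag to_flag, from_flag;      /* assumed type, init elided */

    void kick_and_wait(void)
    {
            /*
             * No manual preemption-disable/enable pair around this
             * sequence: evl_wait_flag() may sleep, and sleeping with
             * EVL preemption disabled is a bug, per the assertion
             * added in a related commit of this series.
             */
            evl_raise_flag(&to_flag);
            evl_wait_flag(&from_flag);
    }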
-
Philippe Gerum authored
The subscriber lock is shared between both execution stages, but accessed from the in-band stage for the most part, which implies disabling hard irqs while holding it. Meanwhile, out-of-band IRQs and EVL threads may compete for the observable lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock in any case. Make these hard locks to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The data protected by the inbound (oob -> in-band traffic) buffer lock is frequently accessed from the in-band stage by design, where hard irqs should be disabled. Conversely, the out-of-band sections are short enough to bear with interrupt-free execution. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts would usually compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The data protected by the file table lock is frequently accessed from the in-band stage where holding it with hard irqs off is required. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Now that the inclusion hell is fixed with evl/wait.h, we may include it from mm_info.h, in order to define the ptsync barrier statically in the out-of-band mm state.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts would usually compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such a lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We don't actually need to rely on the oob stall bit, provided hard irqs are off in the deemed interrupt-free sections: keeping hard irqs disabled is sufficient as long as the code does not traverse a pipeline synchronization point (sync_current_irq_stage()) while holding a lock, which would be a bug in and of itself in the first place.

Remove the stall/unstall operations from the evl_spinlock implementation, fixing the few locations which were still testing the oob stall bit. The oob stall bit is still set by Dovetail on entry to IRQ handlers, which is ok: we will neither use nor affect it anymore, relying only on hard disabled irqs.

This temporary alignment of the evl_spinlock on the hard spinlock is a first step toward revisiting the lock types in the core, before the evl_spinlock is changed again to manage the preemption count.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Checking the oob stall bit in __evl_enable_preempt() to block the rescheduling is obsolete. It relates to a nested locking construct which is long gone, back when the evl_spinlock managed the preemption count and the big lock was still in, i.e.:

    lock_irqsave(&ugly_big_lock, flags);      /* stall bit raised */
    evl_spin_lock(&inner_lock);               /* +1 preempt */
    wake_up_high_prio_thread();
    evl_spin_unlock(&inner_lock);             /* -1 preempt == 0, NO schedule because stalled */
    unlock_irqrestore(&ugly_big_lock, flags); /* stall bit restored */

This was a way to prevent a rescheduling from taking place inadvertently while holding the big lock.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This is a simple synchronization mechanism allowing an in-band caller to pass a point in the code while making sure that no out-of-band operations which might traverse the same crossing are in flight. Out-of-band callers delimit the danger zone by down-ing and up-ing the barrier at the crossing; the in-band code asks for passing the crossing.

CAUTION: the caller must guarantee that evl_down_crossing() cannot be invoked _after_ evl_pass_crossing() is entered for a given crossing.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
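A sketch of the intended pairing; the struct evl_crossing type, the header and the evl_up_crossing() counterpart to the down operation are assumed names (only evl_down_crossing() and evl_pass_crossing() are named above):

    #include <evl/crossing.h>               /* assumed header */

    static struct evl_crossing target_crossing;     /* assumed type name */

    /* Out-of-band side: delimit the section which may touch the target. */
    void oob_touch_target(void)
    {
            evl_down_crossing(&target_crossing);
            /* ... safely dereference the shared target here ... */
            evl_up_crossing(&target_crossing);  /* assumed name for the up operation */
    }

    /*
     * In-band side: wait until no out-of-band caller is inside the
     * crossing anymore, after which the target can be dismantled.
     * CAUTION: no evl_down_crossing() may happen past this point.
     */
    void inband_dismantle_target(void)
    {
            evl_pass_crossing(&target_crossing);
            /* ... free/teardown the target ... */
    }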
-
Philippe Gerum authored
Returns the current kthread descriptor, or NULL if another thread context is running. CAUTION: does not account for IRQ context.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-