Bringing the Multicore Revolution to Safety-Critical Cyber-Physical Systems

Shared hardware resources like caches and memory introduce timing unpredictability for real-time
systems. Worst-case execution time (WCET) analysis with shared hardware resources is often so pessimistic
that the extra processing capacity of multicore systems is negated. We propose techniques to
improve performance and schedulability for multicore systems.

When certifying the real-time correctness of a system running on m cores, excessive analysis pessimism
can negate the processing capacity of the additional m-1 cores. To address this problem, two
orthogonal approaches have been investigated previously: mixed-criticality (MC) allocation techniques
and hardware-management techniques. Recent work involving a mixed-criticality framework called MC2
(mixed-criticality on multicore) has shown that, by combining both approaches, capacity loss can be significantly
reduced when supporting real-time workloads on multicore platforms. However, the ability to
support real-world workloads has not been realized due to a lack of support for sharing among tasks. In
this work, we consider two types of sharing: shared buffers and shared libraries.

This work presents a new version of MC2 that allows tasks to share data within and across criticality
levels through shared memory. Several techniques are presented for mitigating capacity loss due to data
sharing. The effectiveness of these techniques is demonstrated by means of a large-scale, overhead-aware
schedulability study driven by micro-benchmark data.
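
To make the data-sharing model concrete, the sketch below shows how a task might expose a buffer to other tasks through ordinary POSIX shared memory; the object name and size are hypothetical, and this is only an illustration of cross-task sharing through shared memory, not the MC2 implementation itself.

    /* Minimal sketch of tasks sharing a buffer through POSIX shared memory.
     * The name "/mc2_demo_buf" and the buffer size are illustrative only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BUF_NAME "/mc2_demo_buf"   /* hypothetical shared-memory object */
    #define BUF_SIZE 4096

    int main(void)
    {
        /* Producer side: create and map the shared buffer.  A consumer task,
         * possibly at a different criticality level, would shm_open() the
         * same name and mmap() it to see the data. */
        int fd = shm_open(BUF_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, BUF_SIZE) < 0)
            return 1;

        char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
            return 1;

        strcpy(buf, "sensor sample");           /* writer deposits data   */
        printf("shared buffer holds: %s\n", buf); /* reader would see it  */

        munmap(buf, BUF_SIZE);
        close(fd);
        shm_unlink(BUF_NAME);
        return 0;
    }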

Another source of sharing, shared libraries, can be eliminated by statically linking all libraries. However,
this solution can degrade schedulability by exhausting memory capacity. An alternative approach is proposed
herein that allows library pages to be shared while preserving isolation properties.
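
As a rough illustration of the underlying idea, read-only shared mappings already let multiple tasks reference the same physical page frames without being able to alter them; the sketch below maps a hypothetical library file read-only and should not be taken as the specific mechanism proposed in this work.

    /* Sketch: map a library file's pages read-only so that several tasks can
     * reference the same physical frames without modifying them.  The path
     * is hypothetical; this only illustrates read-only page sharing. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *lib = "/usr/lib/libexample.so";  /* hypothetical library */
        int fd = open(lib, O_RDONLY);
        if (fd < 0)
            return 1;

        struct stat st;
        if (fstat(fd, &st) < 0)
            return 1;

        /* PROT_READ without PROT_WRITE keeps the mapping immutable, so one
         * task's use of the library pages cannot perturb another task's. */
        void *text = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (text == MAP_FAILED)
            return 1;

        printf("mapped %lld bytes of %s read-only\n",
               (long long)st.st_size, lib);
        munmap(text, st.st_size);
        close(fd);
        return 0;
    }
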
Memory access latencies vary significantly depending on which NUMA (non-uniform memory access)
node data resides on and on how memory banks are shared, so execution times can become highly unpredictable
in a multicore real-time system. This results in overly conservative scheduling with low utilization due to
loose bounds on the WCET of tasks.

This work contributes a controller/node-aware memory coloring (CAMC) allocator, implemented inside the
Linux kernel, that comprehensively colors all segments of the address space. To our knowledge, this work
is the first to (a) consider memory controllers in real-time systems, (b) combine memory-controller and bank
coloring, and (c) color the entire memory space, not just the heap. This reduces conflicts in memory
accesses, lowers latency, and effectively isolates a task's timing behavior from that of other tasks via
software-based partitioning of controller and bank accesses.
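
The coloring idea can be sketched as follows: a page's physical address determines the memory controller and DRAM bank it maps to, and a coloring allocator hands a task only pages whose colors match the task's assignment. The bit positions below are assumed purely for illustration; real positions depend on the platform's DRAM address mapping and are not taken from CAMC itself.

    /* Sketch of physical-address coloring: extract assumed controller and
     * bank bits and test whether a free page matches a task's colors. */
    #include <stdint.h>
    #include <stdio.h>

    #define CTRL_SHIFT 16u  /* assumed bit selecting the memory controller */
    #define CTRL_BITS  1u
    #define BANK_SHIFT 13u  /* assumed bits selecting the DRAM bank */
    #define BANK_BITS  3u

    static unsigned controller_color(uint64_t paddr)
    {
        return (paddr >> CTRL_SHIFT) & ((1u << CTRL_BITS) - 1);
    }

    static unsigned bank_color(uint64_t paddr)
    {
        return (paddr >> BANK_SHIFT) & ((1u << BANK_BITS) - 1);
    }

    /* A coloring allocator would give a task only those free pages whose
     * (controller, bank) colors match the task's assignment. */
    static int page_matches(uint64_t paddr, unsigned ctrl, unsigned bank)
    {
        return controller_color(paddr) == ctrl && bank_color(paddr) == bank;
    }

    int main(void)
    {
        uint64_t paddr = 0x12345000ULL;  /* example physical page address */
        printf("controller color %u, bank color %u, match(0,2)=%d\n",
               controller_color(paddr), bank_color(paddr),
               page_matches(paddr, 0, 2));
        return 0;
    }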

Results from a multicore platform indicate that CAMC improves performance by reducing memory
latency, avoids inter-task conflicts, and increases timing predictability, which makes CAMC suitable for
mixed-criticality, weakly hard, and soft real-time systems. CAMC further outperforms the standard buddy
allocator as well as prior coloring methods on an x86 platform. When just one core per memory controller
is used, CAMC provides single-core equivalence.
