Efficient Scheduling and Analysis Techniques for Supporting Real-Time Cyber-Physical Applications and Systems
To guarantee predictable temporal correctness in multicore systems, a difficult problem is to analyze the execution behaviors of tasks that may access heterogeneous resources, including both CPU cores and shared resources (e.g., memory or accelerators). A traditional approach is to analyze schedulability from a core-centric perspective. This perspective is sound for traditional embedded systems, where computing resources may be insufficient while contention on shared resources is often light. However, it is not applicable to many CPS supporting resource-demanding workloads. Due to the large number of physical components involved, the operation of CPS includes sensing, processing, and storage of massive data. In many CPS, such as autonomous driving systems, shared resources such as memory may become the actual scheduling bottleneck, causing the worst-case latency on the shared resource to be rather pessimistic or even impossible to bound. Motivated by this observation, I argue that it may be much more viable to resolve this multi-resource scheduling problem from the counter-intuitive shared-resource-centric perspective, since tasks may experience much lighter contention on CPU cores.

In my PhD dissertation, I resolve the above multi-resource scheduling problem from the shared-resource-centric perspective, which brings new challenges in analyzing real-time task systems. For instance, traditionally, each task executes sequentially on one processor, but on shared resources the number of processing units simultaneously used by a task can be greater than one (i.e., parallelism). Moreover, tasks can execute continuously and preemptively on CPUs, but a task may experience suspension delays on the CPU cores while accessing shared resources, and may need to execute on those resources in a non-preemptive or limited-preemptive manner. My research seeks to resolve these hard challenges.
Specifically, my techniques focus on judiciously scheduling tasks that may exhibit parallel execution and self-suspension behaviors on a limited-preemptive shared resource. My research methodology is counter-intuitive yet efficient: I treat the shared resource as the first-class scheduling entity and bound the worst-case latency a task may experience on the CPU cores (i.e., treating CPU cores as “I/O”). Interestingly, many practical issues in CPS can be addressed using real-time scheduling theory. In my PhD dissertation, I have developed several scheduling techniques to analyze CPS in different scenarios, providing predictable latency to end users.
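To make the resource-centric viewpoint concrete, the following is a minimal toy sketch, not the dissertation's actual analysis. It assumes a single, preemptively scheduled shared resource under EDF with implicit deadlines, and pessimistically folds each task's CPU-side delay into its resource demand (the classic suspension-oblivious treatment, with the roles of CPU and resource swapped). The task model `(resource_demand, cpu_delay, period)` and the utilization-style test are illustrative assumptions.

```python
# Toy illustration of the resource-centric, suspension-oblivious idea:
# the shared resource is the scheduled entity, and the time a task
# spends on CPU cores is treated as suspension that is pessimistically
# converted into extra resource demand. The task tuple format and the
# EDF utilization bound are assumptions for illustration only.

def suspension_oblivious_schedulable(tasks):
    """tasks: list of (resource_demand, cpu_delay, period) tuples.

    Fold each task's CPU-side delay into its demand on the shared
    resource, then apply the classic EDF utilization bound (total
    utilization at most 1) on that single resource.
    """
    return sum((a + c) / t for (a, c, t) in tasks) <= 1.0

# Two tasks with light CPU-side delays fit on the resource...
print(suspension_oblivious_schedulable([(2, 1, 10), (3, 1, 10)]))  # True
# ...but inflating demand with large CPU delays can overload it.
print(suspension_oblivious_schedulable([(5, 3, 10), (4, 2, 10)]))  # False
```

Note how this framing mirrors traditional suspension-oblivious analysis on a CPU: the pessimism now lies in over-charging the resource for CPU delay, which is precisely the cost that tighter resource-centric analyses aim to reduce.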