Embedded Real-Time Operating Systems: Differences Between an RTOS and a General-Purpose OS

The operating system used in an embedded real-time system is called an embedded real-time operating system (RTOS). It is both an embedded operating system and a real-time operating system. As an embedded operating system, it shares the features common to embedded software: it can be trimmed down, and it consumes few resources and little power. As a real-time operating system, it differs greatly from general-purpose operating systems such as Windows, Unix, and Linux. (This article discusses only hard real-time operating systems; "real-time operating system" below refers to hard real-time systems.) By comparing these two kinds of operating systems, we will gradually bring out the main features of real-time operating systems.

The systems we meet most often in daily work and study are general-purpose operating systems, which evolved from time-sharing operating systems. Most of them support multiple users and processes, managing numerous processes and allocating system resources among them. The basic design principle of a time-sharing operating system is to minimize the system's average response time and maximize its throughput, serving as many user requests as possible per unit of time. In other words, a time-sharing operating system cares about average performance, not individual performance: for the system as a whole, it cares about the average response time of all tasks rather than the response time of any single task; for a single task, it cares about the average response time over many executions rather than the response time of one particular execution. Many of the strategies and techniques used in general-purpose operating systems embody this design principle.
For example, with the LRU page-replacement algorithm used in virtual memory management, most memory accesses complete quickly through physical memory and only a small fraction must go through paging, so the average memory access time is not much worse than without virtual memory, while the virtual address space can be far larger than physical memory. These benefits are why virtual memory has been widely adopted in general-purpose operating systems. There are many similar examples, such as the indirect index lookup of file storage locations in the Unix file system; even the cache in hardware design and the CPU's dynamic branch prediction reflect the same principle. The design principle of optimizing average, that is, statistical, performance clearly runs very deep.



For a real-time operating system, as mentioned above, what matters is not only meeting an application's functional requirements but, more importantly, meeting its real-time requirements. The many real-time tasks that make up an application usually have different real-time requirements, and there may also be complex dependencies and synchronization relationships among them, such as ordering constraints and mutually exclusive access to shared resources, all of which make real-time guarantees difficult. Therefore, the most important design principle of a real-time operating system is that all of its algorithms and policies must guarantee predictable system behavior. Predictability means that at any moment during operation, and under any circumstances, the resource allocation policy can reasonably allocate resources (CPU, memory, network bandwidth, and so on) among the real-time tasks competing for them, so that the real-time requirements of every task are met. Unlike a general-purpose operating system, a real-time operating system does not focus on average system performance; it requires that every real-time task meet its real-time requirements even in the worst case. That is, a real-time operating system focuses on individual performance, or more precisely, on individual worst-case performance. For example, if a real-time operating system used standard virtual memory, the worst case for a real-time task would be that every memory access triggers paging; the task's worst-case running time would then be unpredictable and its real-time behavior could not be guaranteed. Hence the virtual memory techniques widely used in general-purpose operating systems should not be adopted directly in real-time operating systems.
Because the basic design principles of real-time and general-purpose operating systems differ so much, they also differ greatly in the choice of resource scheduling strategies and in how those strategies are implemented. These differences are mainly reflected in the following points:

(1) Task scheduling strategy:

General-purpose operating systems usually adopt priority-based preemptive scheduling, with round-robin scheduling among processes of equal priority. User processes can adjust their own priorities through system calls, and the operating system may also adjust the priorities of some processes according to circumstances. The task scheduling strategies widely used in real-time operating systems fall mainly into two types: static table-driven scheduling and fixed-priority preemptive scheduling. In the static table-driven approach, engineers generate a run schedule before the system starts, based on the real-time requirements of each task, possibly with the help of auxiliary tools. Much like a train timetable, this schedule fixes the start time and duration of each task and never changes once generated; at runtime, the scheduler simply starts the corresponding task at the specified time according to the table. The main advantages of the static table-driven approach are:

Because the schedule is generated before the system runs, more sophisticated search algorithms can be used to find a better schedule, and the runtime overhead of the scheduler is small;

The system has very good predictability, and real-time verification is also relatively easy.

The main drawback of this approach is inflexibility: once requirements change, the whole schedule must be regenerated. Because of its excellent predictability, it is used mainly in fields with very strict real-time requirements, such as aerospace and military systems. Fixed-priority preemptive scheduling is broadly similar to the priority-based scheduling used in general-purpose operating systems, except that priorities are fixed and are assigned before the system runs by some priority-assignment policy (Rate-Monotonic, Deadline-Monotonic, etc.). Its advantages and disadvantages are exactly the opposite of those of the static table-driven approach. It is mainly applied in simpler, more independent embedded systems, but as scheduling theory continues to mature it is gradually being applied in fields with very strict real-time requirements as well. Most real-time operating systems currently on the market use this scheduling method.

(2) Memory Management:

We have already touched on virtual memory management above. To eliminate the unpredictability that virtual memory introduces, real-time operating systems generally adopt one of the following two methods:

Keep the original virtual memory mechanism but add a page-locking facility, so that users can lock critical pages in memory and the swapper can never page them out. The advantage of this approach is that it retains the software-development benefits of virtual memory while improving predictability. The disadvantage is that mechanisms such as the TLB are still designed around average performance, so the predictability of the system still cannot be fully guaranteed.

Use static memory partitioning, assigning each real-time task a fixed memory region. The advantage of this method is that the system has good predictability. The disadvantages are inflexibility, since any change in a task's memory needs requires repartitioning, and the loss of the benefits of virtual memory management.

Currently, real-time operating systems on the market generally use the first management method.

(3) Interrupt processing:

In a general-purpose operating system, most external interrupts are enabled and interrupt handling is generally done by device drivers. Since user processes in a general-purpose operating system usually have no real-time requirements, while interrupt handlers interact directly with hardware devices and may have real-time requirements, interrupt handlers are given priority over every user process. This interrupt-handling scheme is not suitable for a real-time operating system, for two reasons. First, external interrupts are the environment's input to the system: their frequency depends on the rate at which the environment changes and is independent of the operating system. If the interrupt rate is unpredictable, the time a real-time task loses to interrupt handlers at runtime is also unpredictable, so the task's real-time behavior cannot be guaranteed. If the interrupt rate is predicted, then whenever interrupts arrive faster than predicted (for example, because of a spurious signal from a hardware fault, or because the prediction itself was wrong), the predictability of the entire system can be destroyed. Second, every user process in a real-time operating system generally has real-time requirements, so it is not appropriate for interrupt handlers to have higher priority than all user processes. An interrupt-handling approach better suited to real-time operating systems is to mask all interrupts except the clock interrupt and turn interrupt handling into a periodic polling operation, performed by kernel-mode device drivers or user-mode device-support libraries. The main advantage of this method is that it fully preserves the predictability of the system.
Its main disadvantages are that the response to environmental changes may be slower than with interrupt-driven handling, and that polling reduces the effective utilization of the CPU to some extent. Another possible approach is to use interrupts only for those external events that polling cannot serve fast enough, and polling for everything else. In that case, however, interrupt handlers are given the same kind of priorities as ordinary tasks, and the scheduler schedules ready tasks and interrupt handlers uniformly by priority. This speeds up the response to external events and avoids the second problem above, but the first problem remains. In addition, to keep the response time of the clock interrupt predictable, a real-time operating system should mask interrupts as little as possible.

(4) Exclusive access to shared resources:

General-purpose operating systems usually use semaphore mechanisms to arbitrate exclusive access to shared resources. In a real-time operating system that uses static table-driven scheduling, exclusive access to shared resources is already accounted for when the run schedule is generated, so nothing needs to be done at runtime. With priority-based scheduling, however, the traditional semaphore mechanism can easily cause priority inversion: when a high-priority task tries to acquire a semaphore already held by a low-priority task, and that low-priority task is meanwhile preempted by medium-priority tasks while it holds the resource, the high-priority task ends up blocked by many tasks of lower priority, and its real-time behavior is hard to guarantee. Real-time operating systems therefore often extend the traditional semaphore mechanism with protocols such as the Priority Inheritance Protocol, the Priority Ceiling Protocol, and the Stack Resource Policy, which solve the priority inversion problem well.

(5) Time spent on system calls and system internal operations:

Processes obtain operating system services through system calls, and the operating system performs internal management work (such as context switching) through internal operations. To guarantee predictability, the time overhead of every system call and every internal operation in a real-time operating system must be bounded, and the bound must be a concrete, quantified value. General-purpose operating systems place no such limits on these overheads.

(6) Reentrancy of the system:

In a general-purpose operating system, kernel-mode system calls are often not reentrant: when a low-priority task is executing a system call, a high-priority task arriving during that period cannot get the CPU until the low-priority task's system call completes, which reduces the predictability of the system. Kernel-mode system calls in real-time operating systems are therefore often designed to be reentrant.

(7) Auxiliary tools:

Real-time operating systems additionally provide auxiliary tools, such as tools for estimating the worst-case execution time of real-time tasks and tools for real-time verification, which help engineers verify the system's real-time behavior. In addition, real-time operating systems place some requirements on hardware design, including:

(1) DMA. DMA is a data-transfer mechanism whose main function is to move data between memory and external devices without CPU involvement. The most common DMA implementation is cycle stealing: the DMA controller competes with the CPU for the bus through the bus arbitration protocol and, once it gains control, performs the transfer according to preset operating instructions. Because cycle stealing imposes unpredictable extra blocking on user tasks, real-time operating systems often require that the system either avoid DMA or adopt a DMA implementation with better predictability, such as the time-slice method.

(2) Cache. The main role of a cache is to use a relatively small amount of fast storage to bridge the performance gap between a high-performance CPU and relatively slow memory. Because it greatly improves average system performance, it is widely used in hardware design. A real-time operating system, however, cares about individual worst-case performance rather than average performance, so real-time verification must assume the worst case for each task, namely that every memory access misses the cache. Consequently, when auxiliary tools are used to estimate a task's worst-case execution time, all caches in the system should be temporarily disabled, and re-enabled only when the system actually runs. Another, more extreme, approach is to avoid caches in the hardware design altogether.
