Automatic Controls: White Paper
Implementing Automatic Controls Using Data Acquisition Processor Boards

Though Data Acquisition Processor (DAP) boards are primarily data acquisition devices, the onboard processing power that provides rapid service to data conversion and buffering devices can also provide rapid service to external control processes. There are certain limitations, but for systems that can stay within them, Data Acquisition Processors provide a reliable platform for implementing control methods that would be very difficult or very expensive to deliver any other way. Such systems might implement advanced control methods, require special data analysis, or operate on a scale beyond the capabilities of ordinary control devices. This white paper surveys the challenges and strategies for implementing automatic control systems on Data Acquisition Processors.
Applicability
Data Acquisition Processors provide device-level controls. They can operate in stand-alone workstation nodes, or as low-level remote processing nodes within a multi-level control hierarchy. Because they can capture a large number of measurements accurately, they are also useful as data server nodes in SCADA networks, with the host supporting the SCADA protocols and network connections. They can apply local process knowledge to implement a control policy downloaded from a higher-level node, receive and respond to feedback, and intelligently analyze data so that higher-level controls receive the best information.

Perhaps it is easier to say when Data Acquisition Processors are not appropriate. Applications with relatively low measurement rates, moderate accuracy requirements, one or very few measurement channels, and minimal processing needs will probably find alternative low-cost devices that serve the purpose well. Most feedback regulator controls have pre-packaged solutions available at a reasonable cost. Even though this means that the great majority of measurement and control applications do not need the high-end capabilities of Data Acquisition Processors, this still leaves many difficult measurement and control applications for which Data Acquisition Processors are well-suited but for which few alternatives are available.

In addition to the basic requirements for signal capture, control applications must process those measurements, apply a control policy, and generate control responses, meeting real-time constraints while doing all of this; the sketch below illustrates this cycle. The simplicity of pre-packaged control solutions offers very little in the way of extensibility or flexibility. For those cases in which the most basic control solutions are not sufficient, the options are few.
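To make these requirements concrete, here is a minimal sketch in C of the capture / policy / response cycle of a single-channel feedback regulator. The device-access functions and the first-order plant model standing in for real hardware are illustrative assumptions, not any particular DAP or vendor API.

    /* Skeleton of a single-channel feedback regulator (illustrative sketch).
     * The "device" is simulated by a first-order plant model so the program
     * runs stand-alone; on real hardware the three hypothetical functions
     * below would be replaced by actual device access. */

    #include <stdio.h>

    #define DT       0.001   /* sample period in seconds (assumed 1 kHz) */
    #define SETPOINT 1.0     /* desired process value                    */

    static double plant = 0.0;   /* simulated process state */

    static double read_sensor(void)          { return plant; }
    static void   write_actuator(double u)   { plant += DT * (u - plant); }
    static void   wait_for_sample_tick(void) { /* timing source on real hw */ }

    int main(void)
    {
        const double kp = 2.0, ki = 5.0;   /* PI gains, arbitrary values */
        double integral = 0.0;

        for (int n = 0; n < 5000; n++) {
            wait_for_sample_tick();                     /* real-time constraint */
            double error = SETPOINT - read_sensor();    /* process measurement  */
            integral += error * DT;                     /* apply control policy */
            write_actuator(kp * error + ki * integral); /* control response     */

            if (n % 1000 == 0)
                printf("t=%.3fs  process=%.4f\n", n * DT, plant);
        }
        return 0;
    }

Every pass through the loop must finish within the sample period; the entire question addressed by this paper is what platform can honor that constraint at high rates.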
The Role of Architecture

Data Acquisition Processors are designed for measurement processing, where speed means many channels and bulk data movement. Devices that best facilitate bulk data movement are not the ones that best facilitate responses to individual items. Consequently, there are limitations on the speed and regularity of process timing to be considered.
When processing captured data streams at very high rates, a delay of a millisecond causes backlogs of thousands of samples in hardware devices; at an aggregate rate of a million samples per second, for example, a one-millisecond delay leaves a thousand samples waiting. Both conversion inaccuracy and loss of data are unacceptable. This suggests the following criterion for evaluating real-time performance: response times must be in fractions of a millisecond. In contrast, workstation systems must operate on a human time scale, where delays much longer than 50 milliseconds make system response seem sluggish; this establishes a comparable criterion for workstation real-time performance.

Delays for processing and moving data make the problems of converter device operation more difficult. Processing is interleaved, with some processing time applied to device operation, some to data management, some to data communication, and some to higher-level processing. It takes time for data to propagate through these stages. As a general rule of thumb, you cannot depend on a Data Acquisition Processor to deliver a completed result in response to an individual sample in less than 100 microseconds (and you must verify this by testing).

Contrast this with the architecture of a typical workstation host. It is optimized for moving bulk data to support graphical data presentations and multimedia. After operations are set up, data movement is fast, but the chip sets for memory management, bus control, and peripheral control require setup times in preparation for the data movement. There is even a setup time for an operation as simple as reading the current system time from an external timer chip. Workstation systems typically host an array of diverse devices, and these devices compete for service. There is no assurance about how much time the devices will take, or how much time device driver code will take to operate them. Add to these difficulties the heavy computational loading of a typical workstation operating system, and perhaps a GUI environment specialized for data presentation, and in the end you cannot depend on delivery of any data in less than about 10 milliseconds. It might be 100 milliseconds or longer, depending on a variety of uncontrolled conditions; the timing probe at the end of this section shows one way to observe this variability. Even if you replace the operating system with a fine-tuned embedded microkernel specialized for real-time systems, disable all unused devices, eliminate user interaction, and devise a way to load your control software as an embedded application, you will still have a very difficult time getting best-case responses in less than a millisecond or two, and even then you cannot trust it. The architecture simply isn't designed for this purpose.

The Role of Processing

On a Data Acquisition Processor, the hardware processes for measurement and the low-level software processes to capture and move that data are pre-programmed. Control algorithms reside in higher-level processing organized as independent processing tasks, which are scheduled for execution by a system task scheduler. There isn't much overhead in any one of these operations, but there is some in each, and the effects on delay are additive. Data movement mechanisms called data pipes transfer data between tasks or to external devices. The schedulability of the tasks depends on the availability of data.
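As a minimal sketch of this task-and-pipe organization: the pipe_read() and pipe_write() functions below are hypothetical stand-ins for the system's data pipe mechanism (not the actual DAPL API), simulated with plain arrays so the example runs stand-alone. The control task becomes runnable only when a full block of input data is available, processes the block, and forwards results downstream.

    /* Sketch of a processing task connected by data pipes (illustrative).
     * pipe_read()/pipe_write() are hypothetical stand-ins for the system's
     * data pipe mechanism, simulated here with plain arrays. */

    #include <stdio.h>

    #define BLOCK 8   /* samples transferred per scheduling of the task */

    /* Simulated pipes: fixed buffers plus read/write counts. */
    static short in_pipe[64], out_pipe[64];
    static int in_count = 0, in_taken = 0, out_count = 0;

    static int pipe_read(short *dst, int n)    /* returns samples read */
    {
        if (in_count - in_taken < n) return 0; /* task not schedulable */
        for (int i = 0; i < n; i++) dst[i] = in_pipe[in_taken + i];
        in_taken += n;
        return n;
    }

    static void pipe_write(const short *src, int n)
    {
        for (int i = 0; i < n; i++) out_pipe[out_count++] = src[i];
    }

    /* The control task: scheduled whenever a full block is available. */
    static void control_task(void)
    {
        short x[BLOCK], y[BLOCK];
        while (pipe_read(x, BLOCK) == BLOCK) {
            for (int i = 0; i < BLOCK; i++)
                y[i] = (short)(-x[i]);        /* trivial "policy": invert   */
            pipe_write(y, BLOCK);             /* forward results downstream */
        }
    }

    int main(void)
    {
        for (short s = 0; s < 20; s++) in_pipe[in_count++] = s;  /* capture */
        control_task();                                          /* schedule */
        printf("%d samples in, %d responses out (remainder waits for data)\n",
               in_count, out_count);
        return 0;
    }

Note that the last partial block stays in the pipe: the task is not scheduled again until enough data arrives, which is the sense in which schedulability depends on the availability of data.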
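Finally, to illustrate the earlier point about workstation timing: the short POSIX C program below repeatedly requests a one-millisecond sleep and measures how late the wakeups actually arrive, using the standard clock_gettime() and nanosleep() calls. On a general-purpose operating system the worst-case overshoot can reach many milliseconds under load and varies between runs, which is exactly why host-side response times cannot be trusted for fast control loops.

    /* Measures scheduling jitter on a general-purpose OS: request a 1 ms
     * sleep repeatedly and record how late the wakeups actually are.
     * Build with: cc -O2 jitter.c -o jitter   (POSIX system assumed) */

    #include <stdio.h>
    #include <time.h>

    static double elapsed_us(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) * 1e-3;
    }

    int main(void)
    {
        struct timespec req = { 0, 1000000 };   /* request: 1 ms */
        double worst = 0.0, total = 0.0;
        const int trials = 1000;

        for (int i = 0; i < trials; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double us = elapsed_us(t0, t1);     /* actual sleep duration */
            total += us;
            if (us > worst) worst = us;
        }
        printf("requested 1000 us; mean %.0f us, worst %.0f us\n",
               total / trials, worst);
        return 0;
    }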