Automatic Controls: White Paper
Timing and Regularity

Basically, you do not need to worry about obtaining samples consistently and accurately on a Data Acquisition Processor. A hardware-based sampling clock controls the timing of sample capture when an input sampling configuration is started.
Sampled data systems depend on frequent and regular sample values. While the sampling is hardware controlled, the control processing is not directly tied to the hardware oscillator. Collected data are moved out of the hardware devices and into a memory buffer, where they can be managed by the onboard processor. The pipe system then makes the data available to processing tasks, and control processing can begin to respond to the new data.

Collecting data in hardware and managing it in blocks with a general-purpose processor has big advantages for general data acquisition, which is of course what Data Acquisition Processors are primarily about. But data collection and buffering can be an undesirable level of "baggage handling" for high-speed control applications. Collecting samples in memory means delays, which are undesirable in many ways. For applications where these delays are small compared to the time scale of system operation, however, the ability to collect a block of data spanning many signal channels, and then respond block by block instead of sample by sample, delivers a large boost in efficiency.

Ideally, all previous processing is completed and tasks are idling at the time that a new sample becomes available. This sample is plucked immediately from the hardware buffering devices. Processing of that value is completed and responses are delivered before additional sample values have time to accumulate. There are delays, but these are bounded, and there are not many of them. If the interval between samples is longer than the maximum possible delay plus the processing time, responses for each sample are guaranteed to be delivered before the next sample is captured. Under these conditions, no supplementary synchronization mechanism is required. To respond to new data in minimum time, all that is necessary is to attempt to read new data; that request will be completed as soon as possible after the data become available.
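The read-and-respond pattern described above can be sketched as follows. This is a generic illustration, not DAPL code: the `queue.Queue` stands in for whatever pipe or buffer interface the system provides, and the doubling operation is a placeholder for the real control computation.

```python
import queue
import threading

def control_loop(samples: queue.Queue, outputs: list, n: int) -> None:
    """Block on each new sample, process it, and deliver the response
    before the next sample arrives (hypothetical pipe interface)."""
    for _ in range(n):
        x = samples.get()     # blocks until the next sample is available
        y = 2.0 * x           # stand-in for the real control computation
        outputs.append(y)     # response delivered sample by sample

samples = queue.Queue()
outputs = []
worker = threading.Thread(target=control_loop, args=(samples, outputs, 3))
worker.start()
for v in (1.0, 2.0, 3.0):
    samples.put(v)            # simulated sampling clock delivering data
worker.join()
print(outputs)                # -> [2.0, 4.0, 6.0]
```

Note that the task never polls in a busy loop: the blocking read completes as soon as data arrive, which is exactly the minimum-delay behavior described above.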
Sampling and Bandwidth

Before worrying about what the Data Acquisition Processor will do with data, it is important to determine the application needs first. The sampling rate is important.

For digital systems, you will want to watch hold times closely. When update cycles are fast, you might want to deliberately hold digital output levels for an extra update cycle. Otherwise, delay variability might cause very late activation in the current cycle but very early deactivation in the next, leaving a very narrow pulse that an external system might miss. For input data, there is a similar hazard: if a digital level is not held for a full sampling interval or more, it is possible for sampling to miss a pulse completely.

For continuous signals, there is the fundamental Nyquist limit. Approaching this limit, the fundamental one-sample delay inherent in digital systems starts to become as important as the real content of the signals. Even if all of the important frequencies are below the Nyquist bound, the aliasing phenomenon can still track higher harmonic frequencies if they are present, producing misleading effects in the sample sequence. For digital-to-analog output conversions, the latching converters act as a zero-order hold, and operating them too slowly produces a very poor stepped approximation to the desired output signal. You can perform separate simulation studies to optimize timing rates, but generally, for output signals that are reasonably smooth, the sample rate should be at least 10 times larger than the highest relevant frequency in the response spectrum.

Pipelining Delays

There is always an inherent one-sample delay in a discrete control loop. Given that an output was applied at the current time, the control loop will not see the feedback results until the next sample is captured.
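The inherent one-sample feedback delay can be seen in a small simulation. This is a generic sketch with an idealized unit plant and a simple integrating controller, chosen only for illustration; it is not DAPL code or a real plant model.

```python
def simulate_loop(steps: int, gain: float = 0.5, target: float = 1.0) -> list:
    """Discrete control loop with the inherent one-sample delay:
    the output applied now is not seen in the feedback until the
    next sample is captured."""
    y = 0.0          # measured plant value
    u = 0.0          # control output
    history = []
    for _ in range(steps):
        y = u                         # this sample reflects the PREVIOUS output
        u = u + gain * (target - y)   # controller acts on one-sample-old data
        history.append(y)
    return history

print(simulate_loop(4))   # -> [0.0, 0.5, 0.75, 0.875]
```

Even with a perfect plant, the measured value always lags the applied output by one sample, which is why this delay sets a hard floor under the loop's responsiveness.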
Additional delays include the conversion time, the time for processing to detect new samples and access the hardware devices, and the data transfer time to place the sample into memory and attach that memory to the pipe system. These delays are fundamental limitations of the Data Acquisition Processors and the DAPL system. They occur before processing can begin.

Task Scheduling Delays

The existence of data for control processing does not in itself guarantee that the control processing begins execution immediately. Task scheduling introduces additional delays. In a high-performance control application, there should be a minimum number of other tasks interfering with the execution of the critical control sequence. System-level tasks for device control are always present, and these you cannot control. There are application-specific delays associated with completion of the control processing, and you do have some control over these. Some means for controlling scheduling delays are:
Real-time constraints are less about raw speed than about what delays can occur, how often they occur, and how corrections are applied when they do. Applications will usually fall into one of these categories:
When more than one sample has time to collect before control processing is scheduled, your control application will receive more than one new input value. Your processing can spin through the backlog of values quickly to "catch up" to current time, losing no information and bringing its state up to date, but it cannot go back to generate missed outputs. For applications where these delays are not long enough or frequent enough to matter, this is a natural means of recovery, and you won't need to take any other corrective action.

If you are willing to tolerate the delays, you might be able to operate at higher sampling rates, but a consistent backlog of data will result in a consistent output delay. Sometimes this backlog merely cancels out any speed increase you might have thought you were getting from the higher sampling rate. At other times, the increased processing overhead will mean a net increase in average delay time, and this could increase the phase lag enough to compromise the stability margin. You might have to operate the loop at a lower gain setting to compensate, with reduced disturbance rejection at the lowered gain.

Time Delay Benchmarks
There is a limited amount that you can do to avoid delays, but the DAPL system can provide you with information about what those delays are. You will need to simulate system timing with executable processing tasks. These do not need to be the actual processing tasks, particularly not in early project stages while feasibility is being evaluated. This is actually a rather important point: you do not need to fully implement your system to study critical timing.
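A placeholder task for such a timing study can be very simple. The sketch below is generic Python, not a DAPL processing task: it burns roughly a fixed amount of CPU time per invocation, runs at a nominal period, and records the actual intervals so you can see the mean and worst-case spacing before any real processing code exists.

```python
import statistics
import time

def placeholder_task(work_s: float = 0.0002) -> None:
    """Stand-in for real control processing: burns a comparable CPU time."""
    end = time.perf_counter() + work_s
    while time.perf_counter() < end:
        pass

period_s = 0.001              # nominal 1 ms update period (assumed for the demo)
intervals = []
next_t = time.perf_counter()
prev = next_t
for _ in range(50):
    placeholder_task()
    next_t += period_s
    while time.perf_counter() < next_t:
        pass                  # crude wait for the next nominal sample time
    now = time.perf_counter()
    intervals.append(now - prev)
    prev = now

print(f"mean {statistics.mean(intervals) * 1e3:.3f} ms, "
      f"max {max(intervals) * 1e3:.3f} ms")
```

The gap between the mean and the maximum interval is the scheduling jitter you care about; on the actual hardware, the built-in statistics monitor described next reports the equivalent figures per task.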
Once you have a processing configuration to test, you can activate the built-in task statistics monitor. Documentation on this is provided in the DAPL Manual. In DAPstudio, you can right-click the Processing configuration tab and, in the Options dialog, activate the Sequencing option. This will give you a new Sequencing sub-tab under the Processing tab. In the sequencing editor dialog, you can add the following lines to your processing sequence:

    statistics on
    pause 10000
    statistics
    statistics off

This will monitor task timing for a period of 10 seconds, displaying the timing statistics for all tasks. The most important statistics:
Multitasking Control

Some control activities are critical for delivering a response as quickly as possible, while other activities perform the analytical work to prepare for this. It is possible to take advantage of the multitasking features of DAPL to separate these activities so that there is minimum interference with time-critical processing. If you can use tasking priorities, you can place the time-critical processing at a high priority level and leave the preparation and analysis processing at lower priority. When you do this, the lower-priority tasks have negligible effects on process timing, yet they will use any available time for completing their processing in the background. The disadvantage of this scheme is that the tasks are completely independent, so sharing state information can be complicated. You might need to keep some of the preparatory state-management code in the high-priority task, but perform those computations after the time-critical processing is finished.
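The division of labor can be sketched as follows. DAPL's actual priority mechanism is configured on the board, so this is only a generic Python illustration of the pattern: the time-critical function does the minimum work needed to produce a response, then hands the sample off (here via a `queue.Queue`, a stand-in for whatever hand-off the system provides) to a background task that does the expensive analysis whenever spare time is available.

```python
import queue
import threading

analysis_in = queue.Queue()   # hand-off from time-critical path to background task
results = []

def time_critical(sample: float, gain: float) -> float:
    """Minimal work on the critical path: compute and return the response,
    then defer the expensive analysis."""
    response = gain * sample
    analysis_in.put(sample)    # background task picks this up later
    return response

def background_analysis() -> None:
    """Lower-priority work: runs whenever spare time is available."""
    while True:
        x = analysis_in.get()
        if x is None:
            break              # shutdown sentinel
        results.append(x * x)  # stand-in for expensive preparatory analysis

worker = threading.Thread(target=background_analysis)
worker.start()
outputs = [time_critical(s, 2.0) for s in (1.0, 2.0, 3.0)]
analysis_in.put(None)          # shut the background task down
worker.join()
print(outputs, results)        # -> [2.0, 4.0, 6.0] [1.0, 4.0, 9.0]
```

The critical path never waits on the analysis; the cost of this independence, as noted above, is that any state the two halves share must be handed off explicitly.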