
Automatic Controls: White Paper




Timing and Regularity

On a Data Acquisition Processor, you do not need to worry about capturing samples consistently and accurately. A hardware-based sampling clock controls the timing of sample capture once an input sampling configuration is started.

[Figure: DAPstudio interface. DAPstudio software acts as a no-coding-required development environment for creating applications with onboard intelligence.]

Sampled data systems depend on frequent and regular sample values. While the sampling is hardware controlled, the control processing is not directly tied to the hardware oscillator. Collected data are moved out of the hardware devices into a memory buffer, where the onboard processor can manage them. The pipe system then makes the data available to processing tasks, and control processing can begin to respond to the new data.

Collecting data in hardware and managing it in blocks with a general purpose processor has big advantages for general data acquisition, which is of course what Data Acquisition Processors are primarily about. For high-speed control applications, however, data collection and buffering can be an undesirable level of "baggage handling": accumulating samples in memory means delays, which are undesirable in many ways. But for applications where these delays are small compared to the time scale of system operation, the ability to collect a block of data spanning many signal channels, and then respond block by block instead of sample by sample, delivers a large boost in efficiency.

Ideally, all previous processing is completed and tasks are idling at the time that a new sample becomes available. This sample is plucked immediately from the hardware buffering devices. Processing of that value is completed and responses are delivered before additional sample values have time to accumulate. There are delays, but these are bounded, and there are not many of them. If the interval between samples is longer than the maximum possible delay plus the processing time, responses for each sample are guaranteed to be delivered before the next sample is captured. Under these conditions, no supplementary synchronization mechanism is required. To respond to new data in minimum time, all that is necessary is to attempt to read new data, and that request will be completed as soon as possible after the data become available.
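
As a conceptual sketch (not DAPL API), the resulting control loop reduces to the shape below. The names read_sample(), compute_control(), and write_output() are hypothetical stand-ins; on a Data Acquisition Processor the read would be a blocking pipe read that completes as soon as a new sample arrives.

  #include <stdint.h>

  int16_t read_sample(void);           /* blocks until a new sample is ready */
  int16_t compute_control(int16_t x);  /* assumed to finish within one interval */
  void    write_output(int16_t y);     /* delivers the response */

  void control_loop(void)
  {
      for (;;) {
          int16_t x = read_sample();       /* task idles here between samples */
          int16_t y = compute_control(x);
          write_output(y);                 /* done before the next sample */
      }
  }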

Sampling and Bandwidth

Before worrying about what the Data Acquisition Processor will do with the data, determine what the application needs. The sampling rate is the place to start.

For digital signals, watch hold times closely. When update cycles are fast, you might want to deliberately hold digital output levels for an extra update cycle. Otherwise, delay variability might cause very late activation in the current cycle but very early deactivation in the next, leaving a very narrow pulse that an external system might miss. For input data, there is a similar hazard: if a digital level is not held for a full sampling interval or more, sampling can miss a pulse completely.
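
A minimal sketch of the extra-cycle output hold, assuming a hypothetical set_digital_output() hardware write and an update() function that runs once per output update cycle:

  #include <stdbool.h>

  static void set_digital_output(bool level)  /* hypothetical hardware write */
  {
      (void)level;                            /* write the level to the port */
  }

  void update(bool request_pulse)
  {
      static int hold = 0;           /* update cycles left to keep the line high */
      if (request_pulse)
          hold = 2;                  /* assert for at least two update cycles */
      set_digital_output(hold > 0);
      if (hold > 0)
          --hold;
  }

Stretching the pulse this way guarantees that delay jitter cannot shrink it below one full update interval.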

For continuous signals, there is the fundamental Nyquist limit. Approaching this limit, the fundamental one-sample delay inherent in digital systems starts to become as important as the real content of the signals. Even if all of the important frequencies are below the Nyquist bound, aliasing can still fold higher harmonic frequencies, if present, back into the sample sequence, producing misleading effects. For digital-to-analog output conversions, the latching converters act as a zero-order hold, and operating them too slowly produces a very poor stepped approximation to the desired output signal. You can perform separate simulation studies to optimize timing rates, but generally, for output signals that are reasonably smooth, the sample rate should be at least 10 times the highest relevant frequency in the response spectrum. For example, if the response content extends to 50 Hz, update at 500 samples per second or faster.

Pipelining Delays

There is always an inherent one-sample delay in a discrete control loop. Given that an output was applied at the current time, the control loop will not see the feedback results until the next sample is captured.

Additional delays include the conversion time, the time for processing to detect new samples and access the hardware devices, and the data transfer time to place the sample into memory and attach this memory to the pipe system. These delays are fundamental limitations of the Data Acquisition Processors and the DAPL system. They occur before processing can begin.

Task Scheduling Delays

The existence of data for control processing does not in itself guarantee that the control processing begins execution immediately. Task scheduling introduces additional delays. In a high-performance control application, there should be a minimum number of other tasks interfering with the execution of the critical control sequence. System-level tasks for device control are always present, and these you cannot control. There are also application-specific delays associated with completing the control processing, and over these you do have some control.

Some means for controlling scheduling delays are:

  • Use the minimum number of tasks consistent with application requirements and acceptable software complexity.
  • Deliver responses as soon as possible. If any processing can be deferred until after the result is delivered, do so (see the sketch following this list).
  • Optimize the execution path between the detection of new data and the delivery of the control result.
  • Break up processing of long numerical sequences with OPTION QUANTUM=200.
  • Where you can, use lower priority processing for tasks that might interfere with critical time response.
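
The second item deserves a sketch. Reusing the hypothetical names from the earlier loop (plus update_statistics() and prepare_next_cycle(), also hypothetical), a control cycle that defers its bookkeeping might look like this:

  #include <stdint.h>

  int16_t read_sample(void);          /* blocks until new data arrive */
  int16_t compute_control(int16_t x); /* minimum work needed for the response */
  void    write_output(int16_t y);
  void    update_statistics(int16_t x, int16_t y);
  void    prepare_next_cycle(void);

  void control_cycle(void)
  {
      int16_t x = read_sample();
      int16_t y = compute_control(x);
      write_output(y);                /* deliver the response first */

      /* Deferred work: runs after the response is already out, so its
         execution time no longer adds to the response latency. */
      update_statistics(x, y);
      prepare_next_cycle();
  }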

Real-time constraints are less about raw speed than about what delays might happen, how often they happen, and how corrections are applied when they happen. Applications will usually fall into one of these categories:

  1. very short delays never matter,
  2. short delays do not matter as long as they occur only occasionally and recovery is fast after each,
  3. delays beyond the fundamental one-sample delay are not tolerated.

When more than one sample has time to accumulate before control processing is scheduled, your control application will receive more than one new input value. Your processing can spin through the backlog of values quickly to "catch up" to current time, losing no information and bringing its state up to the present, but it cannot go back to generate the missed outputs. For applications where these delays are not long enough or frequent enough to matter, this is a natural means of recovery, and you won't need to take any other corrective action.
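
A sketch of this catch-up recovery, again with hypothetical names; read_available() stands in for a nonblocking read that returns every sample accumulated so far:

  #include <stddef.h>
  #include <stdint.h>

  size_t  read_available(int16_t *buf, size_t max);  /* hypothetical backlog read */
  void    update_state(int16_t x);                   /* advance controller state */
  int16_t control_from_state(void);
  void    write_output(int16_t y);

  void catch_up_and_respond(void)
  {
      int16_t buf[64];
      size_t  n = read_available(buf, 64);    /* possibly more than one sample */
      for (size_t i = 0; i < n; ++i)
          update_state(buf[i]);               /* no information is lost */
      if (n > 0)
          write_output(control_from_state()); /* one response, at current time;
                                                 the missed outputs are gone */
  }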

If you are willing to tolerate the delays, you might be able to operate at higher sampling rates, but a consistent backlog of data will result in a consistent output delay. Sometimes this backlog merely cancels out any speed increases you might have thought you were getting because of the higher sampling rate. At other times, the increased processing overhead will mean that there is a net increase in average delay time, and this could increase the phase lag enough to compromise the stability margin. You might have to operate the loop at a lower gain setting to compensate, with reduced disturbance rejection at the lowered gain.

Time Delay Benchmarks

[Figure: DAP 5200a. Benchmarks are available for the DAP 5200a and other products.]

There is a limited amount that you can do to avoid delays, but the DAPL system can provide you with information about what those delays are. You will need to simulate system timing with executable processing tasks. These do not need to be the actual processing tasks, particularly not in early project stages while feasibility is being evaluated. This is actually a rather important point: you do not need to fully implement your system to study critical timing.

  • To simulate processor loading, you will need a custom processing command that wastes a known amount of processor time for each sample that it reads. You can get this using a loop that writes to a hardware device repeatedly, for example using a dac_out function. (Putting a hardware function in the loop prevents the compiler from "optimizing away" your processing code and eliminating the processor loading that you want.) Suppose, for example, that you observe that on your DAP model the loop can execute 2.5 million times per second. Then invoking the loop for 2500 cycles will produce a simulated 1 millisecond of processing activity. (A sketch follows this list.)
  • To simulate system processing delays, determine how many processing tasks you will have, and produce a network of tasks that move data through the pipes in much the same configuration that your application will use. Lay out your application data flow sequence, and substitute arbitrary processing for actual processing that is not yet available. For tasks that are primarily data movement, use COPY commands. For tasks that are primarily data selection, use SKIP commands. For tasks that are primarily numerical processing, substitute FIRFILTER or FFT commands.
  • To simulate delays from transfers to the host, you can use a COPY(IPIPE0,$Binout) processing command, and configure DAPstudio software to receive and discard the data.
  • Try to use a realistic number of samples per second in each data transfer.
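
For the first item, the time-wasting loop might look like the sketch below. The dac_out function is the one named above, but its signature here is an assumption, and 2.5 million iterations per second is only the example figure; calibrate the constant for your own DAP model.

  /* Calibration: if the loop body executes 2.5 million times per second,
     then 2500 iterations consume 1 millisecond. */
  #define CYCLES_PER_MS 2500L

  void dac_out(int channel, int value);   /* assumed signature */

  void waste_time_ms(long ms)
  {
      for (long i = 0; i < ms * CYCLES_PER_MS; ++i)
          dac_out(0, 0);   /* hardware access keeps the compiler from
                              optimizing the loop away */
  }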

Once you have a processing configuration to test, you can activate the built-in task statistics monitor. Documentation on this is provided in the DAPL Manual. In DAPstudio, you can right-click the Processing configuration tab, and, in the Options dialog, activate the Sequencing option. This will give you a new Sequencing sub-tab under the Processing tab. In the sequencing editor dialog, you can add the following lines to your processing sequence, after the start command.

  statistics  on
  pause 10000
  statistics
  statistics  off

This will monitor task timing for a period of 10 seconds, displaying the timing statistics for all tasks. The most important statistics to examine:

  1. Compare the amount of time used by the task to the total run time. This will verify assumptions about the amount of processor loading in each task.
  2. Check the worst-case latency. All tasks in one processing loop will have the same latency, so the worst case for one is the worst case for all. If this time interval is less than the sampling interval, there will never be a data backlog, and the most data received at any time will be one new sample. For example, at 1000 samples per second the sampling interval is 1 millisecond, so a worst-case latency below 1 millisecond guarantees no backlog.
  3. If the worst-case time is less than two sample intervals, and the average processing time is much less than one time interval, there will be infrequent backlogs of no more than 1 extra sample.

Multitasking Control

Some control activities are critical for delivering a response as quickly as possible, while other activities perform the analytical work to prepare for this. It is possible to take advantage of multitasking features of DAPL to separate these activities so that there is minimum interference with time-critical processing.

If you can use tasking priorities, you can place the time-critical processing at a high priority level and leave the preparation and analysis processing at lower priority. When you do this, the lower-priority tasks have a negligible effect on response timing, yet they will use any available time to complete their processing in the background. The disadvantage of this scheme is that the tasks are completely independent, so sharing state information can be complicated. You might need to keep some of the preparatory state-management code in the high-priority task, but perform those computations after the time-critical processing is finished.
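
One conceptual way to pass prepared state between the two priority levels (an illustration only, not DAPL API) is a single-writer double buffer: the low-priority task fills a spare slot and publishes it atomically, while the high-priority task only ever reads the published copy. This sketch assumes one writer, one reader, and that the reader finishes each read before the writer publishes twice.

  #include <stdatomic.h>

  typedef struct { double gain; double bias; } Params;

  static Params     params[2];      /* double buffer */
  static atomic_int published = 0;  /* index of the slot safe to read */

  /* Low-priority analysis task: fill the spare slot, then publish it. */
  void analysis_step(double gain, double bias)
  {
      int spare = 1 - atomic_load(&published);
      params[spare].gain = gain;
      params[spare].bias = bias;
      atomic_store(&published, spare);
  }

  /* High-priority control task: read only the published parameters. */
  double control_step(double x)
  {
      const Params *p = &params[atomic_load(&published)];
      return p->gain * x + p->bias;
  }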

