Speed is relative, especially in the world of industrial control. One-millisecond look-ahead features in the machine control world used to be considered “cutting edge” (pun intended).
The programmable controller, the standard of industrial control, has a speed-of-execution metric, generally specified in thousands of instructions per millisecond. At 1,000 instructions every 2 milliseconds, for example, it is easy to assume that no application will ever be a problem.
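To see what that rating means in practice, the arithmetic is simple: scan time grows linearly with program size. A minimal sketch, assuming the 1,000-instructions-per-2-ms rating above and an illustrative program size (not any vendor's spec):

```python
# Rough scan-time estimate for a PLC rated at 1,000 instructions
# per 2 milliseconds. The rating and the 5,000-instruction program
# size are illustrative assumptions.

RATED_INSTRUCTIONS = 1000   # instructions executed...
RATED_PERIOD_MS = 2.0       # ...in this many milliseconds

def scan_time_ms(program_instructions: int) -> float:
    """Time for one pass of the logic, ignoring I/O update overhead."""
    return program_instructions * RATED_PERIOD_MS / RATED_INSTRUCTIONS

print(scan_time_ms(5000))   # a 5,000-instruction program -> 10.0 ms per scan
```

Ten milliseconds per scan sounds fast until you compare it with the sampling rates of the devices downstream.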
Programmable controllers, PLCs, are the very essence of dependability. In fact, they are one of the few controller technologies recognized by all the major safety agencies. PLC code execution is very robust and in recent years has even migrated to fault-tolerant systems, the next level of high reliability.
As these systems have migrated to faster and less costly processors, users have benefited from falling prices and increasing performance. Enhanced features like Ethernet communications, math functions, importing values from Excel spreadsheets, and even motion control have made their way to the PLC platform as a common industrial hardware solution.
However, when you add motion control to any control system application, you must ask and answer the question: how fast is fast? How “real time” does my solution have to be in order to behave correctly?
This comes down to understanding digital sampling and analog-to-digital conversion. You can look at a 12-bit analog command signal and think that 4096 increments of the command voltage to the drive is plenty of resolution. But you could also be wrong.
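The step size is easy to compute. A quick sketch, assuming a ±10 V command range (a common servo-drive interface, but an assumption here, since the original does not state the range):

```python
# Step size of a 12-bit command over a +/-10 V range.
# The voltage range is an illustrative assumption.
BITS = 12
V_SPAN = 20.0                  # -10 V to +10 V
step_v = V_SPAN / (2 ** BITS)  # volts per count
print(step_v)                  # about 0.00488 V, i.e. roughly 4.9 mV per step
```

Each increment is a tiny but real voltage step, and a fast-sampling drive can see every one of them.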
If the drive is a servo, it is entirely possible for the drive’s 2-kilohertz sampling rate to catch the little steps between output values and actually try to follow the step function, which was never intended. The resulting current inrushes will either cause nuisance trips in the drive or gradually shut it down from overheating.
Think it’s far-fetched? Not at all. It actually happened to me during a project on a PC-board plating line at Hewlett Packard some years ago. The fix for this might be output smoothing of the analog command, but that function doesn’t exist in PLCs.
But one of the very things that ensures the reliable performance of the PLC also creates problems for time-sensitive applications. PLCs update all of their inputs first, execute all of their logic, and then set all of their outputs. Latency is built into the process intentionally to protect the application from certain types of failure.
If you have to read a value from a sensor, the analog value has to be stored in a register. Then it has to go through a read cycle before it can be retrieved and operated on by the math calculation in the program. So there can be several scan cycles of difference between when the data was read, when the program calculates a value, and when the result is sent to the output to be updated.
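The worst case under this read-execute-write model is easy to bound. A rough sketch, assuming an illustrative 10 ms scan time: an input that changes just after the read phase waits almost a full scan to be seen, and the computed result is not written until the output phase of the following scan.

```python
# Worst-case input-to-output latency under the classic PLC scan model
# (read all inputs -> execute logic -> write all outputs).
# The 10 ms scan time is an illustrative assumption.

def worst_case_latency_ms(scan_ms: float) -> float:
    # ~1 scan waiting for the next input read, plus
    # ~1 scan to execute and reach the output write phase
    return 2 * scan_ms

print(worst_case_latency_ms(10.0))  # -> 20.0 ms for a 10 ms scan
```

Twenty milliseconds of input-to-output delay is invisible in a conveyor interlock and fatal in a tight motion loop.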
These little timing anomalies creep into the execution, appear entirely random, and are very difficult to debug. And often you don’t know ahead of time that the problem is going to affect your project. Until it’s too late.
Industrial PCs, as an alternative, have migrated from the old 25-megahertz 486 to the current 1.8-gigahertz Celeron chips. These systems run hundreds if not thousands of times faster than PLCs. For industrial applications, the operating system is often Linux, to increase reliability.
So ask yourself how fast your system really needs to be to meet its requirements. Sometimes the fastest PLC on the block isn’t going to be the right choice.