The common definition of mechatronics does not include testing. Perhaps it should.
By Sugato Deb, Ph.D., MBA, Director of Emerging Markets / Partnerships
In the traditional design process of parts and assemblies, engineers produce models, analyze their behaviors under operating conditions, and pass physical prototypes “over the wall” for test engineers to evaluate in a pass-fail mode. Any problems that come to light are “thrown back” for design changes that, though necessary, come at the cost of additional prototypes and development time.
If that wall could be broken down, with analysis and testing working together in a closed-loop cycle, both groups would reap benefits from the use of test-based input values to drive analysis models, the use of analysis results to recommend sensor locations and test scenarios, and faster and better product development cycles.
Typically, once you have created a solid model of a part or assembly, the next steps are to define boundary and operating conditions, then perform a finite element analysis (FEA) to determine how the part behaves in response to those conditions. The accuracy of such an analysis depends on the mathematical algorithms and actual coding of the analysis software, and on the assumptions, geometric and physical, made throughout the problem-definition process. With today's desktop computing power and the refined FEA algorithms in newer analysis packages, the software side is rarely the limiting factor. Accuracy now stems chiefly from the assumptions made in determining material properties, boundary conditions, geometry idealization, and physics simplifications such as flexible versus rigid behavior and linear versus non-linear behavior. Ideally, you would have continually improving sources of data on which to base the values for such input conditions.
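As a minimal sketch of how a single problem-definition assumption moves the answer, the script below (all values invented for illustration, not taken from any package named here) solves a two-element axial-bar FEA with the support modeled first as ideally rigid and then as a finite-stiffness spring:

```python
import numpy as np

# Hypothetical two-element axial bar: steel, 1 m long, loaded at the tip.
E, A, L = 200e9, 1.0e-4, 1.0     # Young's modulus (Pa), area (m^2), length (m)
k = E * A / (L / 2)              # axial stiffness of each half-length element (N/m)
F = 1000.0                       # tip load (N)

def tip_displacement(k_support):
    """Solve K u = f for a bar grounded through a support spring k_support."""
    K = np.array([[k + k_support, -k,     0.0],
                  [-k,             2 * k, -k ],
                  [0.0,           -k,      k ]])
    f = np.array([0.0, 0.0, F])
    return np.linalg.solve(K, f)[-1]

rigid = tip_displacement(1e15)     # idealized fixed support
compliant = tip_displacement(1e7)  # support modeled as a 1e7 N/m spring
print(rigid, compliant)            # the softer support adds F/k_support of travel
```

Same geometry, same load: only the boundary-condition assumption changed, yet the predicted tip displacement trebles, which is exactly the kind of input a correlated test could pin down.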
Factors in mechanical test procedures
Figure 1. To record the shape of the vibration response in this hollow aluminum 50-cm diameter wheel in the shape of the Euro symbol, accelerometers were attached around the rim and parallel bars, and connected to dynamic signal acquisition devices on a PXI (PCI eXtensions for Instrumentation) platform.
Test set-ups for a mechanical part or assembly involve best practices, years of experience, flexible hardware measurement systems, test and control software and input from the actual mechanical designer. Typically, testing takes a pass/fail approach, verifying failure at some maximum load value or confirming in-spec temperatures at locations throughout a part. If the measured values don’t match the predictions, it’s back to the drawing board with revisions and more tests. However, it can be difficult to tell if the test itself generated inaccurate data, since the following experimental parameters can lead to errors: sensor locations, sensor and system calibration, sensor adhesion, sensor mass loading, test fixturing (free-free or constrained), excitation or loading locations, and load cycle. With better sources of specific information for choosing these variables, the test results would be more reliable and provide better feedback to verify and improve the analyses.
Analysis results can be viewed on the original 3D CAD models, displayed as color maps that reveal even small changes; you can rotate, zoom, and select any point on the model to read its corresponding value, such as stress.
In the testing world, it’s not as easy to look at the results and draw conclusions about physical behavior. For example, the output of a series of strain gauges is a stream of data, plotted as a set of superimposed curves on an x-y graph, with each curve tracking the measured values from a single sensor over time. An experienced viewer can pick out significant peak values or identify a trend of measurements from a sub-set of physically clustered sensors. However, it’s still a challenge to sort out a hundred or a thousand sensors, and track them back to their corresponding locations on the physical model to fully understand their relevance.
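One way to tame such streams is to reduce each channel to a peak magnitude and time of occurrence before mapping anything back to geometry. The sketch below uses synthetic data standing in for strain-gauge streams; the sensor count, sampling rate, and event are all invented:

```python
import numpy as np

# Synthetic stand-in for 100 sensor streams; one channel carries a real event.
rng = np.random.default_rng(0)
fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
signals = 0.1 * rng.standard_normal((100, t.size))     # 100 sensors x samples
signals[42] += 5.0 * np.exp(-((t - 0.3) ** 2) / 1e-3)  # sensor 42 sees an event

peaks = np.abs(signals).max(axis=1)          # peak magnitude per sensor
peak_times = t[np.abs(signals).argmax(axis=1)]
hottest = int(peaks.argmax())                # sensor index to map back to geometry
print(hottest, peak_times[hottest])
```

Ranking channels this way turns a hundred superimposed curves into a short list of sensor indices that can be tracked back to their physical locations on the model.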
What if the test results could directly, point-by-point, help calibrate and verify the approach to the analysis? You could compare an analysis with the test values to see when and where they differed. If a subset of values were quite off the mark, this might indicate that a nonlinear instead of linear analysis would provide a more accurate approach.
Conversely, what if the analysis could help test engineers determine the best locations for sensors and decide where and how to place the loads? Overlaying test locations on a stress distribution model would better support decisions of where to place the sensors – targeting key expected stress points – instead of attaching them in a simple grid pattern that might miss local areas of unusual activity.
Bringing in test data
To integrate test with design and analysis, four disparate types of information must be correlated: the 3D part geometry from the FEA mesh or the CAD model, the analysis data, the physical location of each sensor, and the measured values taken from each sensor over time. Test data are usually sparse, since they come from discrete sensor locations, while FEA data are computed over millions of individual elements. It would be useful to interpolate between the sensors to generate test values for every physical point on the model at a resolution comparable to that of an FEA mesh. Then a color-shaded image would let you “see” the test data in the same graphic style as the analysis results, overlaid on the exact geometry, with animations showing behavior over time.
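A simple stand-in for that interpolation step is inverse-distance weighting from sensor locations to mesh nodes. The coordinates and readings below are invented for illustration; a production tool would use the actual FEA mesh and calibrated sensor data:

```python
import numpy as np

# Four hypothetical sensors on a unit square, with made-up measured values.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # (x, y)
values = np.array([10.0, 20.0, 30.0, 40.0])       # measured strain, say

def idw(nodes, sensors, values, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of the measured field at each node."""
    d = np.linalg.norm(nodes[:, None, :] - sensors[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                  # eps keeps coincident points finite
    return (w * values).sum(axis=1) / w.sum(axis=1)

# A coarse stand-in for FEA mesh nodes on the same unit square:
gx, gy = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21))
nodes = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(nodes, sensors, values)               # one "test" value per mesh node
print(field.min(), field.max())                   # stays within the sensor range
```

The resulting per-node field can then be color-mapped onto the geometry exactly like an FEA result, which is the basis for the side-by-side and error-map views described next.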
Since every node on an FEA mesh can have a calculated and a measured value, correlated data sets would also let you generate error-map images comparing both values. Here is an example of a project that mapped these requirements into a common view, based on integrating software from SolidWorks (SolidWorks 3D CAD and Cosmos Analysis software) and National Instruments (NI).
Figure 3. Mapping test data to the geometry and deforming it let the engineers see the test mode shape without a high-speed camera. The differences between test and analysis were displayed in the same view, along with a simple camera-image of the device under test for comparison. The views could be used to calibrate and improve the analysis prediction.
Modal frequencies and mode shapes are often evaluated for structures operating in a dynamic environment, such as an automobile or industrial machinery. The main concern is that the structure may vibrate excessively, causing it or other parts to fail prematurely. Vibrations may also transmit to other parts of the structure, affecting the perceived quality of the system. The historical challenge in vibration testing is that, in addition to requiring expensive measurement systems with high resolution (24-bit) and high sampling rates (greater than 100k samples per second), the short, dynamic nature of the event requires synchronized, simultaneously sampled measurements at all the sensors (accelerometers).
Another issue is where to place the sensors. A sensor placed at a node of a mode shape (a point of zero displacement) will register no displacement or acceleration for that mode. Exciting the structure with a force hammer through a trial-and-error process to capture all the mode shapes can also produce inaccurate data. Often the test engineer does not know whether the tests have succeeded until all the data are analyzed off-line, possibly several days later; if the mode shapes have not been sufficiently captured, the tests must be redone. Lastly, the test design must account for mass loading from the accelerometers, since this factor can often distort the results for light or hollow structures. Usually, the sensor density is reduced step by step to limit the effect; unfortunately, this also decreases the amount of captured test data.
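The idea of letting the model steer sensor placement can be sketched on a lumped three-mass chain (all values invented): an eigenvalue solve yields the mode shapes, and near-zero entries in a shape mark the locations where a sensor for that mode would read nothing:

```python
import numpy as np

# Hypothetical three-mass chain, fixed at both ends: equal masses and springs.
m, k = 1.0, 1000.0                       # kg and N/m, assumed
M = np.diag([m, m, m])
K = np.array([[2 * k, -k,     0.0],
              [-k,     2 * k, -k ],
              [0.0,   -k,     2 * k]])
# Equal masses keep M^-1 K symmetric, so eigh applies directly.
w2, phi = np.linalg.eigh(np.linalg.solve(M, K))
freqs = np.sqrt(w2) / (2 * np.pi)        # natural frequencies (Hz), ascending

mode2 = phi[:, 1] / np.abs(phi[:, 1]).max()
dead_spots = np.where(np.abs(mode2) < 0.05)[0]   # a sensor here reads ~zero
print(freqs.round(2), dead_spots)
```

For this chain the second mode has its node at the middle mass, so an accelerometer placed only there would miss that mode entirely, which is precisely the trap the analysis can flag in advance.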
In the example, the unit under test was a hollow aluminum 50-cm diameter wheel in the shape of the Euro symbol. The structure was fixed at two locations but otherwise free to vibrate. To record the shape of the vibration response, accelerometers were attached around the rim and parallel bars, then connected to the appropriate NI dynamic signal acquisition (DSA) devices on the PXI (PCI eXtensions for Instrumentation) platform (figure 1).
Engineers analyzed the same structure in the identical constrained mode for the natural frequency response in CosmosWorks (figure 2).
An instrumented force hammer was used to excite the structure at the free end of the shorter straight cross-bar; the response at all the accelerometers was recorded over 100 milliseconds, at a sampling rate of 10,000 Hz, until the vibrations had died down. The accelerometer data were recorded and analyzed with the NI LabVIEW Sound and Vibration Toolset and transformed from the time domain to the frequency domain for easier analysis.
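In outline, that time-to-frequency step looks like the sketch below. The ring-down signal is synthetic: the 10 kHz rate and 100 ms record match the test described above, but the 240 Hz mode is invented for illustration:

```python
import numpy as np

fs = 10_000.0                            # sampling rate (Hz), as in the test
t = np.arange(0, 0.1, 1 / fs)            # 100 ms record
# Synthetic decaying response standing in for one accelerometer channel:
accel = np.exp(-20 * t) * np.sin(2 * np.pi * 240 * t)

spectrum = np.abs(np.fft.rfft(accel))    # time domain -> frequency domain
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)
peak_hz = freqs[spectrum.argmax()]       # dominant modal frequency estimate
print(peak_hz)
```

A spectral peak like this, extracted per channel, is what gets compared against the natural frequencies predicted by the analysis.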
The resulting mode shape was brought up in NI Insight, side by side with the CosmosWorks analysis results and the comparable normalized test values interpolated from the sensors. The animation option showed the mode shape in motion. The ability to map test data to the geometry and deform it accordingly let the engineers see the test mode shape, a task that would otherwise require a high-speed camera. The differences between test and analysis were displayed in the same view, along with a simple camera image of the device under test for comparison (figure 3), and could be used to calibrate and improve the analysis prediction.
The analysis results helped guide the test engineers to optimize the sensor locations and change the placement of the excitation strike.
For sensor mass loading, you can model the accelerometer masses in the analysis, and then calibrate the mass-loaded analysis with the similar mass-loaded physical test results to improve the analysis fidelity. Then, the accelerometer masses can be unloaded in the analysis (which is not possible in the physical world) and the true modal frequency and mode shape predicted without mass loading.
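A single-degree-of-freedom sketch of that unloading step follows; the masses and measured frequency are invented, but the mechanism is the one described above: calibrate the model against the mass-loaded test, then remove the accelerometer mass in the model only:

```python
import numpy as np

m_struct = 0.200            # structure modal mass (kg), assumed
m_accel = 0.010             # total accelerometer mass on the structure (kg)
f_measured = 95.0           # Hz, from the (mass-loaded) physical test, assumed

# Calibrate the modal stiffness so the loaded model matches the loaded test:
k = (2 * np.pi * f_measured) ** 2 * (m_struct + m_accel)
# Unload the sensors in the model (impossible on the bench):
f_true = np.sqrt(k / m_struct) / (2 * np.pi)
print(round(f_true, 2))     # predicted modal frequency without mass loading
```

Even this toy case shows the expected direction of the correction: removing the sensor mass raises the predicted frequency above the measured, loaded value.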
This approach is only possible by integrating the analysis with the physical test; neither alone can accomplish the task, which underscores the real value of integration.
Validation for integrated motion control
Another area that could benefit from feedback between software analysis and actual testing is control-system design, whether for mechanical, thermal, or fluid-solid systems. Today’s high-speed electromechanical systems often include a servo-driven actuator that must operate with microsecond response times. Incorrect motion-control configuration settings, such as Proportional-Integral-Derivative (PID) gain parameters, can lead to long settling times or excessive over- or undershoot.
If the motion dynamics of the plant could be analyzed, accounting for forces, friction, gravity, mass or thermal inertia, the information could be fed back to the controller analysis to improve motion dynamics. This design validation capability exists through the combination of CosmosMotion dynamics analysis software and NI LabVIEW Control Design along with the NI SoftMotion Development Module software for motion controller analysis. CosmosMotion helps simulate mechanism motion by taking into account mechanism dynamics, such as forces and friction, and generates such information as position and kinetic energy.
NI LabVIEW with NI SoftMotion helps simulate a custom motion controller, with functions such as trajectory generation and spline interpolation, and control algorithms such as PID. The first round of control parameters calculated in NI LabVIEW is fed back to CosmosMotion to verify how the plant will react to that stimulus; depending on how large the feedback error is, the control parameters are tuned until acceptable system performance is reached.
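In outline, that verify-and-tune loop resembles the sketch below, with an invented one-mass plant and PD gains standing in for the CosmosMotion model and the LabVIEW controller:

```python
import numpy as np

def step_response(kp, kd, m=1.0, dt=1e-3, t_end=2.0):
    """Step a PD-controlled mass (toy plant) and return the position history."""
    x = v = 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        e = 1.0 - x                      # error against a unit step setpoint
        f = kp * e - kd * v              # PD force command
        v += (f / m) * dt                # semi-implicit Euler plant integration
        x += v * dt
        xs.append(x)
    return np.array(xs)

def score(xs, dt=1e-3, band=0.02):
    """Overshoot and 2%-band settling time, the metrics being tuned against."""
    overshoot = max(xs.max() - 1.0, 0.0)
    outside = np.where(np.abs(xs - 1.0) > band)[0]
    settle = (outside[-1] + 1) * dt if outside.size else 0.0
    return overshoot, settle

for kd in (2.0, 10.0, 20.0):             # crude manual tuning sweep
    print(kd, score(step_response(kp=100.0, kd=kd)))
```

Each pass through the loop plays the role of one CosmosMotion/LabVIEW round trip: simulate the plant under the candidate gains, score the response, and adjust until overshoot and settling time are acceptable.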
Such closed-loop analysis between mechanical motion and control development environments can help drive design decisions for both the mechanical and controller aspects of the design. For example, engineers may choose to replace a ball-screw stage with a linear motor when they discover the given load cannot be moved at the rate they want.
Using analysis results to refine tests, and using test data to improve analysis models, offers a win-win approach to increasing company-wide productivity and gaining a competitive advantage in the marketplace.
National Instruments Corp. (www.ni.com)