The Sensors Expo & Conference will include discussions of how sensors are being used in multiple industries, as well as with robotics and IoT. The host of the Sensors Expo explains his take on the sensors market.
Drones and robots need good data to navigate and operate, and the Industrial Internet of Things depends on high-quality data to deliver insights about connected systems. Underlying them all are the sensors that gather the data. Attendees at the Sensors Expo & Conference will be able to learn about market-specific technologies, new applications, and how sensors are being used across vertical markets.
The Sensors Expo will be held from June 26 to 28, 2018, in San Jose, Calif. It will feature the IoT Connectivity Ecosystem and will be co-located with the Medical Sensors Design Conference and the Autonomous Vehicle Sensing Conference.
The conference will include 10 tracks of sessions, which will explore topics including machine learning, wearable technologies, and designing for industrial and embedded IoT.
The Sensors Expo will also include programming in expo theaters, startup and university zones, and a Women in Sensors Engineering Program.
In advance of the Sensors Expo, Robotics Business Review recently communicated with Mat Dirjish, executive editor of Sensors Online, for his observations on how sensors and their market are evolving.
What recent developments in sensor technology are directly affecting robots and drones? What hurdles remain?
Robots and drones have several things in common, heavy use of sensors being just one. There is constant chatter about how humans and robots will be working together, with the usual heightened concern for human safety.
A sensor technology that will make robots safer to work with is touch sensing. Advancements in this area have been making touch sensors more sensitive to pressure, torque, and temperature.
One viable application for mobile robots is recovery and rescue. Robots can be used to enter dangerous environments to recover important items such as flight recorders (black boxes) and to retrieve, or rescue if you prefer, trapped and/or injured humans.
The ability to sense what force is needed to move debris out of the way, and to differentiate the force necessary to lift a boulder from that needed to hold a fragile being, is critical. Hence, touch and pressure sensors are being developed that are capable of precisely evaluating these parameters.
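To make that force differentiation concrete, here is a minimal Python sketch of a grip controller driven by touch/pressure feedback. The sensor interface, force limits, and ramp logic are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of force-aware grip control, assuming a hypothetical
# force reading in newtons; the thresholds are illustrative only.

FRAGILE_LIMIT_N = 15.0   # never exceed this when handling a person or fragile object
DEBRIS_TARGET_N = 400.0  # force budget for shifting heavy debris


def grip_command(measured_force_n: float, handling_fragile: bool) -> float:
    """Return the next commanded grip force based on touch-sensor feedback."""
    limit = FRAGILE_LIMIT_N if handling_fragile else DEBRIS_TARGET_N
    if measured_force_n >= limit:
        # Back off: the touch/pressure sensor says we are at the allowed force.
        return measured_force_n * 0.9
    # Otherwise ramp up gradually toward the limit.
    return min(limit, measured_force_n + 0.1 * limit)


print(grip_command(12.0, handling_fragile=True))    # eases toward the 15 N cap
print(grip_command(120.0, handling_fragile=False))  # keeps ramping toward 400 N
```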
For drones, which are generally more compact than robotic systems, sensor fusion shows great promise in extending drone functionality as well as making control more precise and versatile. Just some of the sensors found on a drone include temperature, pressure, humidity/moisture, speed, altitude, and optical sensors.
Sensor fusion is the term for integrating several sensors on a single device, in the form of a silicon (or other material) chip. Each integrated sensor, however, performs a different function, e.g., measuring pressure, temperature, velocity, etc.
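As a rough illustration of that definition, the following Python sketch models a single fused package whose integrated elements each report a different parameter over one read. The FusedSensor class, channel names, and values are hypothetical stand-ins for a real part's driver.

```python
# Minimal sketch of reading a multi-function (fused) sensor package.
from dataclasses import dataclass
import random


@dataclass
class FusedSample:
    pressure_kpa: float
    temperature_c: float
    velocity_mps: float


class FusedSensor:
    """Stand-in for a single-package device integrating several sensing elements."""

    def read(self) -> FusedSample:
        # Each integrated element measures a different parameter,
        # but the host reads them over one interface in one transaction.
        return FusedSample(
            pressure_kpa=101.3 + random.uniform(-0.5, 0.5),
            temperature_c=24.0 + random.uniform(-0.2, 0.2),
            velocity_mps=3.1 + random.uniform(-0.1, 0.1),
        )


sample = FusedSensor().read()
print(f"P={sample.pressure_kpa:.1f} kPa, "
      f"T={sample.temperature_c:.1f} C, "
      f"v={sample.velocity_mps:.2f} m/s")
```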
Strides in 3D optical sensing offer great possibilities for both drones and robots. The ability not only to see the immediate environment, but also to have peripheral vision capability, is a huge step forward. And since numerous such optical devices can be integrated in either a drone or a robot, the vision capabilities can far exceed those of several humans.
The hurdle right now for robots and drones is the regular triad of techno tribulations: size, power consumption, and cost. Touch sensing, to be accurate and viable, requires several devices, including sensors, processors, and software. These all consume power and money.
Optical sensors have some size limitations in that the smaller the sensing area, the less it can see, and miniaturization can get expensive. Sensor fusion holds great promise. However, the more items we squeeze onto silicon, the less functional the silicon becomes. The next advancement will come with the development of other semiconductor materials, like gallium nitride (GaN).
How is the emerging Industrial Internet of Things (IIoT) dependent on sensors? What should users know about big data requirements as we get ready for the Sensors Expo?
The Industrial Internet of Things would not be the IIoT without sensors. Basic industrial processes use a wide range of sensors, e.g., temperature, pressure, leak/level, position, etc., for everyday operations.
Networking those processes via the IIoT concept, to enable remote control, security, maintenance (active and proactive), and data collection on a wide range of parameters from productivity to waste management, just to name the most basic functions, will require yet another layer of sensors. These may include many wireless sensors with or without energy-harvesting capabilities. Naturally, with that come sensor interfaces, embedded systems, and software.
Now, not to make things look bigger than they really are, the application will determine the extent of sensor deployment and access. Small remote prototyping applications on a factory floor may require anywhere from one sensor to many to monitor parameters of the device under test. Interfacing the operation to the web can require as little as one sensor. In brief, no sensor equals no IIoT.
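For a sense of how thin that web interface layer can be, here is a minimal Python sketch that pushes a single factory-floor sensor reading to an ingest endpoint using only the standard library. The URL, payload fields, and sensor ID are hypothetical placeholders, not a real IIoT service.

```python
# Minimal sketch of publishing one sensor reading to a web endpoint,
# the first step in putting a process "on the IIoT."
import json
import urllib.request

READING = {
    "sensor_id": "press-07",   # hypothetical device-under-test sensor
    "pressure_kpa": 212.4,
    "temperature_c": 41.8,
}


def publish(reading: dict, url: str = "https://example.com/iiot/ingest") -> int:
    """POST the reading as JSON and return the HTTP status from the ingest service."""
    req = urllib.request.Request(
        url,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# publish(READING)  # uncomment once a real endpoint is available
```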
About “big data” and the requirements thereof: I’ve always liked that term, big data, because it’s self-explanatory, yet made vague by the countless definitions out there for it. I personally prefer the Wikipedia definition of “big data,” which I quote: “Big data is data sets that are so voluminous and complex that traditional data-processing application software are inadequate to deal with them.”
I would label “big data” as those parts of the “voluminous and complex data sets” that actually impact my application. Everything else I would label “small potato data.”
What should one know about big data requirements? Be able to discern which of the data your application collects are relevant to your end goal, then either acquire or develop the software tools that can extract that data, compare it to the lesser and irrelevant data just to verify you’re not missing anything, and analyze the final collection for possible corrections, maintenance, or new strategies.
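A toy Python sketch of that workflow might look like the following: keep the readings that matter to the application, set the rest aside, and scan the relevant slice for anything needing attention. The field names, sensor IDs, and threshold are hypothetical.

```python
# Minimal sketch of separating the data that matters to one application
# from the "small potato data," then checking the relevant slice for problems.

records = [
    {"sensor": "temp-01", "value": 71.2},
    {"sensor": "vibration-03", "value": 0.8},
    {"sensor": "temp-01", "value": 96.5},
    {"sensor": "humidity-02", "value": 44.0},
]

RELEVANT_SENSORS = {"temp-01"}   # what this application actually cares about
ALARM_THRESHOLD = 90.0           # illustrative limit for the relevant readings

relevant = [r for r in records if r["sensor"] in RELEVANT_SENSORS]
everything_else = [r for r in records if r["sensor"] not in RELEVANT_SENSORS]

alarms = [r for r in relevant if r["value"] > ALARM_THRESHOLD]

print(f"{len(relevant)} relevant readings, {len(everything_else)} set aside")
print("needs attention:", alarms)
```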
How competitive is the machine vision space, and how are vendors differentiating themselves? Who are the leaders and upstarts in this market?
If you consider the established research firms forecasting the machine vision market hitting between $14 billion and $20 billion by 2025, I’d say there’s some serious competition on the horizon. However, you can’t omit the current political environment where one day we trade with foreign countries for materials necessary for producing vision devices, then the next day we face a different tax and tariff structure. So those numbers may change significantly.
Also, traditionally defined, machine vision is an industrial application mainly for inspecting products and controlling factory processes. Obviously, different factories produce different goods and thereby require different types of machine vision tools, the primary ones being cameras. Vendors differentiate themselves by the applications they cater to.
For example, a company making inspection systems for aerospace, automotive, and medical applications will most likely produce high-resolution, high-accuracy, precision cameras and related peripherals, as opposed to a company specializing in vision systems for makers of pan-head screws, where high-resolution, costly systems are not necessary.
There are several major and highly respected contributors in the machine vision arena. Off the top, I’d have to say Basler, Baumer, Hamamatsu, Sony, OmniVision, Framos, Laser Components, FLIR Systems, ON Semiconductor, and SICK. These companies are regulars in the daily tech news.
In terms of upstarts, you probably meant to say “disruptive.” Machine vision is a precise application and precision, accuracy, and speed of measurement are constant goals. And the top companies are on top of it.
As systems become more autonomous, how much are they relying on modeling and other machine learning techniques? Are there shortcuts?
There will be a major upswing in the need for machine learning and deep learning across the board for just about every mechanical device, mobile or stationary. The trend now seems to be “we automate, therefore we are”, with no slowdown expected.
Obviously, autonomous vehicles and robotics are the primary targets for machine learning, and they have quite a way to go before being engineered to a better-than-acceptable form.
For robotics, engineers are trying to create electrical networks that mimic and perform like the complex array of neural networks found in the human brain and body. That involves the ability to recognize and react to all situations a human may encounter. And that requires the ability to learn.
Currently, shortcuts are few if any. However, the big shortcut will come when the devices acquire the ability to learn without intervention. Then, the devices will acquire the ability to teach other devices, after which, we will be finished — for a few moments.
For mobile robots and autonomous vehicles, is vision sufficient? How many cameras is enough?
Basic, straight-ahead vision is not enough now, nor will it be in the future. For autonomous vehicles to be safe, they need vision capabilities similar to, if not beyond, those of humans.
I mentioned earlier that 3D optical sensing can enable peripheral vision like the human eye. When humans drive a modern vehicle with rear cameras and a dashboard display, they have one point of vision, meaning they are seated in one spot and have sight via several devices: front windshield, rear windshield, rear view and side view mirrors, and whatever the external cameras display on the dashboard.
In an autonomous vehicle, all points of vision need to be acquired and sent to a processing unit for interpretation. The brains of the system will not be centrally seated, but spread across various parts of the vehicle. These remote parts of the system will tell the vehicle how to react to various situations. Again, there will be machine learning involved.
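One way to picture that architecture is the short Python sketch below, where each camera viewpoint is interpreted near the sensor and a central step acts on the combined detections. The Camera class, detections, and braking rule are hypothetical simplifications, not a production perception stack.

```python
# Minimal sketch of combining several camera viewpoints into one decision,
# with per-camera (distributed) processing before central interpretation.
from dataclasses import dataclass


@dataclass
class Detection:
    viewpoint: str      # e.g., "front", "rear-left"
    obstacle: bool
    distance_m: float


class Camera:
    def __init__(self, viewpoint: str):
        self.viewpoint = viewpoint

    def detect(self) -> Detection:
        # In a real vehicle this would run vision inference near the sensor.
        sample = {"front": (True, 12.0),
                  "rear-left": (False, 0.0),
                  "rear-right": (False, 0.0)}[self.viewpoint]
        return Detection(self.viewpoint, *sample)


def decide(detections: list) -> str:
    """Central step: react to the nearest obstacle reported by any viewpoint."""
    obstacles = [d for d in detections if d.obstacle]
    if obstacles and min(d.distance_m for d in obstacles) < 20.0:
        return "brake"
    return "proceed"


cameras = [Camera(v) for v in ("front", "rear-left", "rear-right")]
print(decide([c.detect() for c in cameras]))  # -> "brake"
```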
How many cameras are necessary for effective and safe autonomous driving? One might say the same as if a human were driving: as many as possible.
However, camera tech is evolving rapidly, and it would be feasible to create cameras that move faster than the human eye, thereby reducing the number needed for total vision. Rather than increasing the number of cameras, better processing, data analysis, and reaction times will solve the vision challenges.
Are there some novel applications of sensor technology in say, healthcare, pick-and-place, or other operations?
If you are expecting an exciting response filled with enthusiasm for the “next big thing,” I think you may be disappointed. Sensors are standard components like resistors, capacitors, inductors, etc., and there are few, if any, electronic designs that do not contain at least one sensor. Most analog electric guitars have one or two transducers, a.k.a. pickups.
Autonomous vehicles, which we’ve bounced around a bit here, are still considered novel designs, and they use a lot of sensors. But there are no dedicated “autonomous vehicle sensors.”
There are pressure, temperature, level, and optical sensors used in these vehicles, which would fit very comfortably in a smart-home design or a toy. A sensor is a sensor.
Novel applications being worked on, or hinted at, that use a lot of sensors (and some that use just a few) include therapy robots. We’ve all heard about therapy dogs, cats, and other pets, but therapy bots are on the horizon.
These devices are forecast to be able to adapt to more than a patient’s basic needs, like aiding in ambulation, reminding the patient to take medications at the proper time, or moving obstacles out of the way.
Some of these therapy robots can be programmed to learn the patient’s moods and behaviors and react with empathy via human-like responses. Here, and in other robotics apps, the innovative effort will be in speech recognition, processing, and language learning. This will be a true test of sensors interfacing with processors and software. Optical sensors may be employed that are capable of translating human mouth movements (via lip reading) into tangible speech.
Another novel and controversial app on the horizon is robotic surgery. Enough said.
What are you looking forward to the most at the Sensors Expo & Conference and why?
As the executive editor of Sensors Online and host of Sensors Expo, I have both a personal and business interest in every aspect of the event. I look forward to meeting with every exhibitor and seeing in action all the products and technologies I’ve written about over the past year.
Also, the exhibitors and vendors are a wellspring of information about what will be coming in the immediate and distant future. Those insights are priceless.
Editor’s Note: Robotics Business Review subscribers can use code RT100 to receive $100 off passes or a Free Sensors Expo Hall Pass. To register, visit: https://www.sensorsexpo.com/register.