CEOs from IBM and Intel, in keynote presentations at CES in Las Vegas last week, described and gave examples of disruptive changes in how consumers and businesses transact and interact with their purchases.
Intel’s Brian Krzanich described the use of embedded chips and cloud services to enable experience-based transactions, many of which lead directly to sales while others simply enhance the user’s experience. Krzanich offered many examples, but four stood out and made his case:
- In a new Guinness World Record for the most unmanned aerial vehicles airborne simultaneously, 100 drones flew and shone their colored lights in sync with an orchestra playing Beethoven’s Fifth Symphony. Because the display was so colorful and timed to the music, Krzanich characterized it as a potentially safer, reusable replacement for fireworks shows.
- Using augmented reality, the ModiFace mirror lets women try out different looks and makeup choices (blush, eye shadow, lipstick color, foundation, glosses, etc.) and then buy the resulting colors and product selections. Trying on different color combinations is often an embarrassing process done in public at department-store counters. An augmented reality product such as this one disrupts that process and provides a better experience.
- Krzanich also demonstrated a Yuneec Typhoon H with an onboard RealSense 3D camera and chip, a follow-me, collision-avoiding drone that will be available later this year as a consumer product.
- A pair of Oakley sunglasses provided the sensors and communication to track the wearer’s progress, compare it against set goals, and deliver spoken coaching and responsive reports along the way, turning a pair of sunglasses into an experience.
As an aside, Intel recently announced its acquisition of Ascending Technologies, the German developer of the collision-avoidance autopilot system used in the 100-drone performance above. Last year Intel also made large investments in Yuneec, whose drones now include onboard Intel RealSense cameras and chips, and in Airware, a competing autopilot developer. And this week Intel invested in robotics startup Savioke, maker of the Relay robot, which autonomously navigates around hotels delivering toothpaste, towels, Starbucks coffee and other items.
IBM’s CEO, Ginni Rometty, told a packed CES audience that Watson and IBM are changing the nature of data processing from transactional to cognitive. Last year Robert High, CTO of IBM’s Watson Group, described the emerging era of embodied cognitive computing as leading to cognition as a service: as a third hand such as a lab technician might need, as a concierge of the kind Jibo and Echo (and human concierges) offer, as an office assistant, and in field settings like search and rescue. The cognition process involves machines interacting with humans in writing, verbally, with tactile and visual cues, and with gestures. Cognitive algorithms and cloud computing are the keys to Watson’s feats, and they also happen to be where Rometty has pushed IBM since her tenure began.
Rometty described Watson’s progress thus far and demonstrated on stage many areas where applying Watson is changing the nature of the experience. It’s not enough for the system to hear what a user says; Watson must understand what they want, make that happen, and then respond in as conversational a tone as possible. SoftBank’s Pepper robot is an example of how this works, and a rough sketch of that hear/understand/act/respond loop appears below. In this excerpt from Rometty’s presentation, she and SoftBank Robotics’ Kenichi Yoshida announced that IBM will provide global distribution and support for SoftBank’s Watson-powered Pepper robot as they scale up to begin selling into China and the U.S. Pepper has already sold 7,000 units in Japan and is in 300 bank branches and 100 stores.
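To make that loop concrete, here is a minimal, hypothetical Python sketch of a conversational agent that hears an utterance, guesses the intent, performs an action, and replies conversationally. The intent names, keyword matching, and handlers are illustrative assumptions only; they are not IBM’s Watson APIs or Pepper’s actual software.

```python
# Hypothetical hear -> understand -> act -> respond loop (illustrative only).
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple


@dataclass
class Intent:
    name: str
    keywords: Tuple[str, ...]            # words that suggest this intent
    handler: Callable[[str], str]        # performs the action, returns a result


def open_account(utterance: str) -> str:
    # Placeholder for a real back-end call (e.g., a banking system).
    return "I've started a new account application for you."


def recommend_product(utterance: str) -> str:
    # Placeholder for a real recommendation service.
    return "Based on what you've told me, I'd suggest our travel rewards card."


INTENTS: Dict[str, Intent] = {
    "open_account": Intent("open_account", ("open", "account"), open_account),
    "recommend": Intent("recommend", ("recommend", "suggest", "card"), recommend_product),
}


def understand(utterance: str) -> Optional[Intent]:
    """Very rough intent detection: pick the intent whose keywords match best."""
    words = utterance.lower().split()
    best, best_score = None, 0
    for intent in INTENTS.values():
        score = sum(1 for kw in intent.keywords if kw in words)
        if score > best_score:
            best, best_score = intent, score
    return best


def respond(utterance: str) -> str:
    """Hear -> understand -> act -> reply in a conversational tone."""
    intent = understand(utterance)
    if intent is None:
        return "Sorry, I didn't catch that. Could you rephrase?"
    result = intent.handler(utterance)
    return f"Sure. {result} Is there anything else I can help with?"


if __name__ == "__main__":
    print(respond("I'd like to open a new account"))
    print(respond("Can you recommend a card for travel?"))
```

In a real deployment the keyword matcher would be replaced by a trained language-understanding service and the handlers by calls into business systems, but the shape of the loop stays the same.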
Bottom line:
These two CES keynote presentations illustrate how artificial intelligence and data synthesis will provide the backbone for meaningful and productive interaction between humans and machines: not only on screens, but through gestures, visual cues and understandable spoken communication to and from smart devices and robots of all types. And it’s not just a near-term future they foretell; they gave examples of where it is already happening.