The CA Accelerator encourages technical innovation and entrepreneurship in-house, while CA’s Strategic Research team is working on three projects to advance the state of AI and IoT.
Startups aren’t the only companies actively exploring the frontiers of artificial intelligence, robotics, and the Internet of Things. Yesterday, CA said that its Strategic Research team will participate in three projects, including one partly funded by the European Union’s Horizon 2020 initiative. In addition, CA Technologies has been working on internal development through its CA Accelerator.
Three projects for AI, IoT, and GDPR
“To realize the massive promise of an IoT-driven world, we must solve complex challenges,” said Otto Berkes, CA Technologies chief technology officer. “These hurdles must be overcome before we can deliver IoT systems that can provide valuable and trusted data, be adaptable and open to new technologies — systems that haven’t even been invented yet.”
The Adaptive Learning on Heterogeneous Architectures, or ALOHA, project plans to take advantage of the increasing compute capacity of IoT edge devices such as routers and servers for more human-like deep learning. CA Security will advise ALOHA on how to create “bias-free AI.”
“CA’s goal through ALOHA is to learn from experience and react autonomously to a surrounding environment, while avoiding AI bias,” stated Victor Muntés, vice president of Strategic Research at CA Technologies.
The ENACT project will work to develop smart-home “e-health applications” and apply AI to IoT self-diagnosis. By finding vulnerabilities early in the software development process with simulation models, ENACT plans to tackle the challenge of security for connected systems.
The Privacy and Data Protection for Engineers (PDP4E) project will create tools and processes to make it easier for developers to comply with the EU’s new General Data Protection Regulation (GDPR). All three of these programs include researchers across Europe.
Pitches become part of CA Accelerator
“The CA Accelerator serves as an internal VC [venture capitalist], with about a dozen projects at any given time,” Berkes told Robotics Business Review. “The idea is for them to exit internally, keeping the innovations in-house.”
The accelerator is looking for projects with a high likelihood of commercialization, said Berkes. Business-unit strategists and CA’s product-development organization are on hand to support applied research proposals.
CA launched its accelerator about two and a half years ago and added a pitching approach in the past year.
“Our angel team evaluates pitches for potential applicability and uniqueness,” said Berkes. “We’re checking on projects monthly, and they’re multi-year efforts.”
The vetting of CA Accelerator projects is a collaborative effort, involving universities in Melbourne, Australia; Barcelona, Spain; and Santa Clara, Calif. The efforts are vendor-agnostic, so in some cases, CA could even be working alongside competitors.
The CA Accelerator measures success in terms of patents generated, products commercialized, and thought leadership, Berkes said.
“We have a ‘heat map’ of which technology areas are attracting the most pitches,” he added. “Not surprisingly, the biggest overlap was around data science, machine learning, human-machine interaction, and blockchain.”
Cobotics and AI ethics
This year, the research group is focusing on collaborative robotics, or cobotics, and the ethics of AI.
The next-generation use of data is for automation to improve over time with machine learning and human-aided training, Berkes explained. The combination of human judgment, robots tackling repetitive tasks, and AI insights into big data is especially promising.
“Cobotics is a particularly interesting area,” he said. “How do people and learning agents coexist, and how can the whole be greater than the sum of its parts?”
“Some research projects are making sure we inject a human element into the art of the possible,” Berkes said. “We’re making research a core part of what CA’s about.”
Training for the future of work
Other CA Accelerator projects are tackling challenges that have not yet been well defined. Machines that are less rigid in behavior are more likely to learn and be helpful to human users.
“AI and ethics are not necessarily quantifiable — software implements algorithms following a well-defined syntax,” said Berkes. “We need to let go of the notion of predictability from traditional programming algorithms.”
“CA is focused on the future of work in the context of the enterprise,” Berkes said. “There’s a tremendous opportunity to use intelligent automation to shift workloads and to shift human capabilities to value-creating work.”
At the same time, the emergence of racist chatbots demonstrates that machines are only as good as the data they use. The handoff of control between humans and machines is a major challenge for AI.
“Things get tricky when you reach points in a process where you give up decision-making control to an agent,” he explained. “For example, in payment security, we can use data science to detect if a pending transaction is likely to be fraudulent.”
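The handoff Berkes describes can be sketched as a score-and-route policy: an agent decides alone at the extremes of a risk score and hands ambiguous cases back to a human. Everything below, including the model, weights, and thresholds, is invented for illustration and is not CA's actual system.

```python
import math

def fraud_score(amount, is_foreign, txns_last_hour):
    """Hypothetical hand-tuned logistic model; the weights are made up."""
    z = 0.002 * amount + 1.5 * is_foreign + 0.4 * txns_last_hour - 4.0
    return 1.0 / (1.0 + math.exp(-z))

def route(score, auto_block=0.9, escalate=0.5):
    """The agent acts autonomously at the extremes; ambiguous
    mid-range cases are escalated to a human reviewer."""
    if score >= auto_block:
        return "block"
    if score >= escalate:
        return "human review"
    return "approve"

routine = fraud_score(40, is_foreign=0, txns_last_hour=1)    # small local purchase
suspect = fraud_score(2500, is_foreign=1, txns_last_hour=6)  # large foreign burst
print(route(routine), route(suspect))  # prints: approve block
```

The thresholds encode where decision-making control changes hands, which is exactly the "tricky" boundary Berkes points to: tightening `escalate` keeps humans in the loop more often at the cost of more manual review.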
“We can fine-tune decisions over time. If a system is down, it could be a unique, one-time event,” said Berkes. “We have to build trust that the agent can find and fix the problem on its own.”
“For self-driving cars, we’ll need to aggregate and analyze massive amounts of data, but we can’t brute-force it,” he observed. “We need systems that understand what data is interesting — we need self-awareness built into systems.”
“Over the next several decades, understanding data science and deep learning will be more important than traditional computer science,” Berkes said. “We’re looking further ahead than the next product release.”