Returning from vacation, I found my inbox overflowing with e-mails announcing robot “firsts.” At the same time, my relaxed post-vacation disposition was quickly shaken by the latest tragic news and by recent discussions of the extent of AI bias within New York’s financial system. Though unrelated, these incidents are connected: together they capture the paradox of today’s accelerating inventions.
Last Friday, the University of Maryland Medical Center (UMMC) became the first hospital system to safely transport a live organ, via drone, to a waiting transplant patient with kidney failure. The demonstration illustrates the huge opportunity for unmanned aerial vehicles (UAVs) to reduce the time and cost of organ transport, and to improve transplant outcomes, by removing human-piloted helicopters from the equation.
As Dr. Joseph Scalea, UMMC project lead, explained, “There remains a woeful disparity between the number of recipients on the organ transplant waiting list and the total number of transplantable organs. This new technology has the potential to help widen the donor organ pool and access to transplantation.”
Last year, the body that manages America’s organ transplant system reported a waiting list of approximately 114,000 people, with 1.5% of deceased-donor organs expiring before reaching their intended recipients. The losses are largely due to unanticipated transportation delays of up to two hours in close to 4% of recorded shipments.
Based upon this data, unmanned systems could save more than 1,000 lives. “Delivering an organ from a donor to a patient is a sacred duty with many moving parts,” said Scalea. “It is critical that we find ways of doing this better.”
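For rough intuition, the article’s percentages can be turned into a back-of-the-envelope estimate. Only the 1.5% expiry rate and the 4% delay rate come from the text; the annual deceased-donor transplant volume used below is a placeholder assumption, not a sourced figure, and the article’s “more than 1,000 lives” presumably also counts organs degraded by delays rather than lost outright.

```python
# Back-of-the-envelope estimate of organs affected in transit.
# EXPIRY_RATE and DELAY_RATE come from the article;
# ANNUAL_TRANSPLANTS is a hypothetical placeholder, NOT a sourced figure.
ANNUAL_TRANSPLANTS = 30_000  # placeholder assumption
EXPIRY_RATE = 0.015          # 1.5% of organs expire before reaching recipients
DELAY_RATE = 0.04            # ~4% of shipments see delays of up to two hours

organs_expired = ANNUAL_TRANSPLANTS * EXPIRY_RATE
shipments_delayed = ANNUAL_TRANSPLANTS * DELAY_RATE

print(f"Organs expiring in transit per year: ~{organs_expired:.0f}")
print(f"Shipments delayed per year: ~{shipments_delayed:.0f}")
```

Even under conservative assumptions, shaving hours off transport times touches hundreds of organs a year, which is the scale behind the article’s claim.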
Unmentioned in the UMMC announcement are the ethical safeguards required to support autonomous delivery, in particular ensuring that the rush to extract organs in the field does not override the goal of first saving the donor’s life.
Drone deliveries prep for takeoff
As May brings clear skies and the songs of birds, the prospect of non-life-saving drones crowding the airspace above is a haunting image. Last month, last-mile delivery by UAV came one step closer to reality when Google’s subsidiary, Wing Aviation, became the first drone operator approved by the U.S. Federal Aviation Administration and the Department of Transportation. According to the company, consumer deliveries will commence within the next couple of months in rural Virginia.
“It’s an exciting moment for us to have earned the FAA’s approval to actually run a business with our technology,” declared James Ryan Burgess, CEO of Wing.
The regulations still ban drones in urban areas and limit Wing’s autonomous missions to farmlands, but they allow the company to start charging customers for UAV deliveries.
While the rural community’s administrators are excited “to be the birthplace of drone delivery in the U.S.,” what is unknown is how its citizens will react to a technology prone to noise and privacy complaints.
“Across the board, everybody we’ve spoken to has been pretty excited,” stated Mark Blanks, director of the Virginia Tech Mid-Atlantic Aviation Partnership. However, he acknowledged, “We’ll be working with the community a lot more as we prepare to roll this out.”
Google’s terrestrial autonomous-driving tests have received less-than-stellar reviews from locals in Chandler, Ariz., culminating earlier this year in one resident pulling a gun on a car (notably, one-third of all Virginians own firearms). Understanding the rights of citizens to police the skies above their properties is an important policy and ethical issue as unmanned operators move from testing to live deployments.
AI and trust
The rollout of advanced computing technologies is not limited to aviation; artificial intelligence is being rapidly deployed across every enterprise and organization in the U.S.
Last Friday, McKinsey & Co. released a report on the widening penetration of deep learning systems within corporate America. While it is still early in the development of such technologies, almost half of the respondents to the study said that their departments have embedded such software within at least one business practice this past year. “Forty-seven percent of respondents say their companies have embedded at least one AI capability in their business processes — compared with 20% of respondents in a 2017 study,” stated McKinsey.
This dramatic increase in adoption is driving tech spending, with 71% of respondents expecting large portions of their digital budgets to go toward the implementation of AI. The study also tracked the perceived value of AI, with “41% reporting significant value, and 37% reporting moderate value,” compared with just 1% “claiming a negative impact.”
Before embarking on a journey south of the border, I participated in a discussion about AI bias at one of New York’s largest financial institutions. The output of this think tank became a suggested framework for administering AI throughout an organization to protect its employees from bias. We listed three principal requirements:
- Defining bias (as it varies from institution to institution)
- Policies for developing and installing technologies (from hiring to testing to reporting metrics)
- Employing a chief ethics officer who would report to the board, not the chief executive officer (since a CEO focused on profit could potentially override ethics for the bottom line)
These conclusions were supported by a 2018 Deloitte survey that found that 32% of executives familiar with AI ranked ethical issues as one of the top three risks of deployments. At the same time, Forbes reported that the idea of engaging an ethics officer is a hard sell for most blue-chip companies.
In response, Prof. Timothy Casey of California Western School of Law recommended repercussions for malicious software similar to those in other licensed professions.
“In medicine and law, you have an organization that can revoke your license if you violate the rules, so the impetus to behave ethically is very high,” he said. “AI developers have nothing like that.”
Casey suggested that building a value system through such endeavors could counter the prevailing atmosphere, in which “being first in ethics rarely matters as much as being first in revenues.”
While AI adoption accelerates like a train going downhill, some forward-thinking organizations are starting to take ethics very seriously. For example, this past January Salesforce became one of the first companies to hire a “chief ethical and humane use officer,” empowering Paula Goldman “to develop a strategic framework for the ethical and humane use of technology.”
Writing this article, I am reminded of the words of Winston Churchill, who in the 1930s cautioned his generation about balancing morality with the speed of scientific discovery, as the pace of innovation even then far exceeded humankind’s own development:
“Certain it is that while men are gathering knowledge and power with ever-increasing and measureless speed, their virtues and their wisdom have not shown any notable improvement as the centuries have rolled. The brain of modern man does not differ in essentials from that of the human beings who fought and loved here millions of years ago. The nature of man has remained hitherto practically unchanged. Under sufficient stress—starvation, terror, warlike passion, or even cold intellectual frenzy—the modern man we know so well will do the most terrible deeds, and his modern woman will back him up.”
Join RobotLab on May 16, when we dig deeper into ethics and technology with Alexis Block, inventor of HuggieBot, and Andrew Flett, partner at Mobility Impact Partners, discussing “Society 2.0: Understanding The Human-Robot Connection In Improving The World” at SOSA’s Global Cyber Center in NYC – RSVP today!