March 16, 2018

The development of AI ethics must keep pace with innovation

Author : Vrajesh Bhavsar / Publisher: VentureBeat

The ability of artificial intelligence to make ethically sound decisions is a hot topic in debates around the world. The issue is particularly prevalent in discussions on the future of autonomous cars, but it extends to ethical conundrums similar to those depicted in sci-fi flicks like Blade Runner.

These high-level debates are mainly about a future that's still years away, but AI is already becoming part of our lives. Think Siri, Amazon's Alexa, and the photo-sorting function on many smartphones. The popularity of technologies like these already influences how people think about machine intelligence. In a recent survey, 61 percent of respondents said society would be "better" or "much better" off as a result of increased automation and AI.

But getting to truly humanlike intelligence will take time. And the science must go hand-in-hand with thinking about the implications of creating intelligent machines -- possibly to the level of robot rights and the nature of consciousness.

As AI innovation continues to move forward at warp speed, it will be important for the development of ethics surrounding the technology to keep up.

Distributed computing brings bots closer to humans

Complex AI topics are hard to grasp, but the debate won't wait. Many of the technologies required to make replicants like those seen in Blade Runner are currently in development. Machine learning, a subset of AI, already gives early robots the ability to interact with humans through touch and speech. These capabilities are vital to mimic human behaviors. Natural language processing has also improved to the point where it enables some robots to respond to complex voice commands and identify multiple speakers, even in the middle of a noisy room.

However, to get to the next level, AI will need to move beyond today's technologies, where intelligence in devices is still relatively immature, and offer far more advanced processing. That processing will require major engineering advances, especially around efficient compute power, data storage, and independence from the cloud (replicants can't be reliant on an internet connection to think). The challenge is to make a machine capable of "close learning," which is similar to how humans learn from experience.

It starts with the idea of distributed computing. AI algorithms mainly run in huge data centers. While a smart speaker, for example, may recognize keywords and wake up, the real brainwork happens in a data center that may be thousands of miles away. In the future, this will change as researchers enhance the ability of AI algorithms to run locally on devices. Discrete learning accelerators could support these algorithms where necessary, giving machines a new level of independent thinking and learning ability and making them more resilient against any disruption on the internet. It also means a machine may not need to share sensitive information with the cloud.
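The split between on-device and data-center processing described above can be pictured as a simple dispatch loop: answer locally when the on-device model is confident, and offload only the hard cases. The sketch below is purely illustrative; the class names, methods, and confidence threshold are assumptions for demonstration, not any vendor's actual API.

```python
# Hypothetical sketch of hybrid edge/cloud inference for a smart speaker.
# All class and method names here are illustrative assumptions.

LOCAL_THRESHOLD = 0.9  # assumed confidence needed to trust the on-device model

class LocalModel:
    """Tiny stand-in for an on-device keyword classifier."""
    def classify(self, frame):
        # Pretend loud frames are the wake word, heard with high confidence.
        return ("wake", 0.95) if frame > 0.5 else ("noise", 0.4)

class Cloud:
    """Stand-in for a heavyweight model in a distant data center."""
    def __init__(self, reachable=True):
        self.reachable = reachable
    def classify(self, frame):
        return "wake" if frame > 0.3 else "noise"

def handle_audio(frame, local, cloud):
    label, confidence = local.classify(frame)
    if confidence >= LOCAL_THRESHOLD:
        return label                  # resolved on-device; no network needed
    if cloud.reachable:
        return cloud.classify(frame)  # offload ambiguous cases to the cloud
    return label                      # degraded but resilient offline answer
```

Note that the final fallback is what makes the device resilient: when the internet is disrupted, it still returns its own best local answer, and sensitive audio never has to leave the device in the confident case.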

Many tech companies believe in this vision and are pushing machine learning capabilities in devices with the aim of one day enabling advanced intelligence in all devices, from a tiny sensor to a supercomputer. This is distributed AI computing, a process resembling the workings of the human brain, which developed independent cognitive abilities on top of super-efficient computing. While the human brain is still tens of thousands of times more efficient than any chip in existence today, we can see how AI is following the evolutionary path to a close learning state.

In recent advances, researchers have applied distributed computing concepts to create bidirectional brain-computer interfaces (BBCIs) -- networks of computer chips designed to be implanted in human brains. The purpose is to help improve brain function in people with neurological conditions or brain injuries. But the technology has implications for advanced AI, too.

As in the human brain, each node in a distributed network can store and consume its own energy. These mini-computers could have the ability to run on electromagnetic radiation harvested from the environment, such as cellphone and Wi-Fi signals. With massive computing power in small, self-charging packages, a distributed computing network could, in theory, perform advanced AI processing without relying on bulky battery packs or distant server farms.

Machines learn ethics through programming

The advent of distributed computing may one day make artificial intelligence comparable to human intelligence. But how do we make sure real-life replicants have the complex decision-making and moral reasoning they'll need to safely interact with humans in challenging real-world environments?

Louise Dennis, a post-doctoral researcher at the University of Liverpool, sees a path. For her, it's all about programming AI with values rigid enough to guarantee human safety, but flexible enough to accommodate complicated situations and sometimes contradictory ethical principles.

While we're far from dealing with replicants making choices as complex as K does in Blade Runner 2049, AI ethicists are already grappling with some tough questions.

Take, for example, the debate in the U.K. around regulations for pilotless planes. Dennis' group suggested that companies program automated planes to follow the Civil Aviation Authority's rules of the air. But the CAA raised a concern: while it trains pilots to follow the rules, pilots are also expected to break them when necessary to preserve human life. Programming a robot to know when to follow the rules and when to break them is no easy task.
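One way to picture this tension is as a priority-ordered set of principles, where an ordinary rule of the air yields to a higher-priority safety principle. This is a hypothetical sketch of that idea, not Dennis' group's actual system; the rules and scenario fields are invented for illustration.

```python
# Hypothetical sketch of prioritized rules for an automated aircraft.
# The rules and scenario fields are illustrative assumptions, not the
# CAA's actual rules of the air.

def preserve_life(scenario):
    """Highest-priority principle: break the ordinary rules if needed."""
    if scenario.get("collision_imminent"):
        return "emergency_climb"
    return None  # no override needed; defer to lower-priority rules

def rules_of_the_air(scenario):
    """Default behaviour: e.g. hold the assigned altitude and course."""
    return "maintain_course"

# Ordered from highest to lowest priority.
PRINCIPLES = [preserve_life, rules_of_the_air]

def decide(scenario):
    """Return the action from the highest-priority principle that applies."""
    for principle in PRINCIPLES:
        action = principle(scenario)
        if action is not None:
            return action
```

The hard part, of course, is not the dispatch loop but deciding what belongs in each principle and when an override is genuinely warranted, which is exactly the judgment pilots are trained to exercise.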

To make things even more complicated, values aren't universal. Many of our ethical priorities depend on our communities. For a machine to function like a human, AI ethics must be context-specific about some values and non-negotiable about others.

Mistakes in AI decision-making are inevitable, and some of them will have consequences. Still, says Dennis, AI promises a net increase in safety, and we will ultimately have to decide what level of risk is acceptable.

"It's never going to be perfect," Dennis said. "I think it's just a matter of people deciding what due diligence is and accepting that accidents will happen."

Fears of an AI uprising are overblown

Popular culture is awash with cautionary tales of technological creations rising up against humans, and there will always be naysayers who want to keep the genie in the bottle. But it's too late. Many of us already interact with AI in our daily lives, and that integration will undoubtedly become more entrenched. Yet many experts like Dennis aren't losing sleep over a replicant revolution. In fact, she thinks such concerns often distract people from issues raised by AI in the here and now.

"We should stop worrying about robots as an existential threat and start worrying about how society is going to adapt to an information revolution that's likely to be quite disruptive," Dennis said.

Developing robots capable of making moral decisions will be a key part of that adaptation process. While distributed computing could make a real-life K possible, the replicants of the future are unlikely to hunt down humanity. And if the field of AI ethics advances just as quickly as AI itself, it seems likely that the machines of the future will be designed and engineered to make complex, yet ethical, decisions for which we humans will set the rules.

Vrajesh Bhavsar works on the machine learning ecosystem and partnership development teams at Arm, a company that produces processor designs for silicon chips that power products like sensors, smartphones, and supercomputers.

This article was written by Arm and Vrajesh Bhavsar from VentureBeat and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.