Monday, June 12, 2017

MEDIA MONDAY / QUESTIONING ARTIFICIAL INTELLIGENCE’S DECISIONS



GUEST BLOG / By John Frank Weaver, Esq., Contributor, New America.
My family has grown very attached to our Amazon Echo, particularly for music. We can access Prime Music by asking Alexa for an artist, song, or station. Even my young kids can navigate the verbal interface to request “Can’t Stop the Feeling!” from the movie Trolls or the soundtrack to the musical Hamilton.

As part of the smart speaker’s artificial intelligence, the program picks up on our tastes and preferences, so when we simply say “Alexa, play,” the device will queue up suggested tracks. In theory, what it picks should have some obvious relationship to music we chose ourselves. And songs it selects usually do. Usually.
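To give a flavor of what that preference learning might involve, here is a minimal sketch of play-count-weighted track selection in Python. It is purely illustrative: Amazon has not published how Alexa actually chooses tracks, and every name and number in it is invented.

    from collections import Counter
    import random

    # Invented household play history; a real system would aggregate this
    # from voice requests over time.
    play_history = ["The Beatles", "The Beatles", "Hamilton", "Pink Martini"]

    def suggest_track(history, catalog):
        """Weight each artist by how often the household has played them."""
        counts = Counter(history)
        # A small base weight lets artists the household has never played
        # occasionally surface anyway.
        weights = [counts.get(artist, 0) + 0.1 for artist in catalog]
        return random.choices(catalog, weights=weights, k=1)[0]

    catalog = ["The Beatles", "Pink Martini", "Hamilton", "Sir Mix-a-Lot"]
    print(suggest_track(play_history, catalog))

Notice that even this toy picker throws its weights away after choosing; without extra bookkeeping, it cannot say why a given track came up.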

ABOUT THE AUTHOR:
John Frank Weaver is an attorney in Boston who works on artificial intelligence law. He is the author of “Robots Are People Too.” 

But recently, Alexa considered our diet of kids’ music, show tunes, the Beatles, the Rat Pack, and Pink Martini, and decided to cue up … Sir Mix-a-Lot.

After we stopped laughing, I wanted desperately to ask, “Alexa, why?” What in the name of Weird Al Yankovic was Alexa thinking when it determined that we needed to listen to a one-hit wonder hip-hop artist from the 1990s?

Sadly, Alexa currently isn’t built to provide answers to such pressing questions about its judgment. Scores of current and potential autonomous devices and A.I.-powered programs—including personal A.I. assistants like Alexa, self-driving cars, chatbots, and smart appliances that learn our preferences—provide little to no transparency for their decisions, which are made with no direct human control or input.

Whether through user error, poor design, profit-driven manufacturer decisions, or any number of other factors, technology can make suspect decisions. Remember how Google Home sometimes spouted ludicrous conspiracy theories, or how Tay, Microsoft’s A.I.-enabled Twitter bot, learned abhorrent racism in less than 24 hours?

ABOUT NEW AMERICA:
New America is a think tank and civic enterprise committed to renewing American politics, prosperity, and purpose in the Digital Age. It generates big ideas, bridges the gap between technology and policy, and curates broad public conversation. Structurally, it combines the best of a policy research institute, technology laboratory, public forum, media platform, and venture capital fund for ideas. New America is a distinctive community of thinkers, writers, researchers, technologists, and community activists who believe deeply in the possibility of American renewal. It generously shares its findings and reports with the public and the media. More on New America: https://www.newamerica.org/.

And as A.I. programs and autonomous devices continue to expand into decisions that have more serious consequences, including those affecting justice, health, well-being, and even life and death, the stakes become much higher than an out-of-place rap tribute to the backside.

Enter the right to an explanation, a movement to combat the broad move to a “black box society”: a culture that largely accepts that we have no way to understand how technology makes many basic decisions for us, like when self-driving cars choose particular routes home or autonomous shopping assistants generate our grocery lists. As you can probably guess, the right to an explanation would require that autonomous devices and programs tell consumers how the A.I. reached a decision: Why did you play that song? Why did you get off the highway? Why are you burning my cookies?

This emerging right is another form of algorithmic transparency, which seeks to ensure that the algorithms consumers interact with do not enable discrimination, exert hidden political pressure, or engage in other unfair or illegal business practices. Many of those efforts are focused on public policy. For example, the Federal Trade Commission’s Office of Technology Research and Investigation conducts independent studies related to algorithmic transparency and provides training and technical expertise to FTC consumer protection investigators and attorneys. The right to an explanation is focused on providing consumers with personalized, easy-to-understand algorithmic transparency.

One of the most prominent moves in the direction of the right to an explanation comes from the European Union. In 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation, a new data protection regime that promises to usher in major changes to how companies handle the personal data they gather about EU-based consumers. It’s a sprawling document: you have to get through 173 nonbinding preambulatory paragraphs before you even get to the regulation itself. But once you do, you’ll find several new rules directly responding to the question of how artificial intelligence technologies, like Amazon’s Alexa, should be allowed to access and use personal data. Among the most noteworthy: When companies collect personal data related to their consumers, they are required to inform individuals whether “automated decision-making, including profiling” is involved in processing that data and to provide them with “meaningful information about the logic involved” in that processing.
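To make that duty concrete, here is a hypothetical sketch of what a machine-readable version of such a disclosure might look like. Every field name and value below is invented; the regulation mandates the substance of the notice, not any particular code structure.

    # A hypothetical GDPR-style disclosure attached to a data-collection
    # flow. The company name, fields, and wording are all invented.
    disclosure = {
        "controller": "ExampleMusic, Inc.",
        "data_collected": ["play history", "voice requests", "time of day"],
        "automated_decision_making": True,
        "profiling": True,
        "logic_summary": (
            "Play history is aggregated per household and used to rank "
            "catalog tracks; higher-ranked tracks are more likely to be "
            "suggested when no specific track is requested."
        ),
    }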

In other words, come the May 2018 deadline when the regulation kicks in, if you’re in the EU, A.I. owes you an explanation every time it uses your personal data to choose a particular recommendation or action. Forcing the A.I. to explain its decisions, advocates say, could provide an important check on unintentional or unsavory algorithmic bias. It would put consumers in a better position to evaluate and potentially correct these kinds of decisions, and it would deter companies, fearful of embarrassment or legal action, from allowing inappropriate bias into A.I. decisions. It would also give consumers the opportunity to see how their personal data is used to generate results.

Sounds great, right? But it’s not yet clear exactly what would happen after I ask my Echo, “Alexa, why did you just play ‘Baby Got Back’?” The General Data Protection Regulation does not provide any specific format or content requirements, leaving experts in the field to make educated guesses and recommendations. For example, Bryce Goodman from the Oxford Internet Institute and Seth Flaxman from the University of Oxford suggest that “any adequate explanation would, at a minimum, provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?”
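For a simple linear model, that kind of account can be read straight off the model itself: each feature’s contribution to a single prediction is just its coefficient times its value. The sketch below uses an invented loan-scoring model to show the idea; opaque models like deep networks need approximation techniques such as LIME or SHAP to produce a comparable account.

    import numpy as np

    # An invented, already-trained linear loan model: score = weights . inputs.
    feature_names = ["income", "debt_ratio", "years_at_job", "prior_defaults"]
    weights = np.array([0.8, -1.5, 0.3, -2.0])
    applicant = np.array([1.2, 0.6, 0.5, 1.0])  # standardized applicant inputs

    # Each feature's contribution to this applicant's score, largest first.
    contributions = weights * applicant
    ranked = sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1]))
    for name, value in ranked:
        print(f"{name:15s} {value:+.2f}")
    # Here prior_defaults dominates, answering "which features play the
    # largest role in prediction?" for this one decision.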

Of course, the usefulness of any such disclosure depends on the ability of the A.I. to appropriately analyze personal data and give the consumer choices based on it. Some people think that cannot be done the way businesses treat data now. John Havens, executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, argues that consumers should be able to review their personal data and how A.I. relies on it, functions that are anathema to how businesses buy and sell data today. He also believes that you should be able to provide guidance to the A.I., correcting mistaken data and telling it which pieces of personal data matter more than others.

So with a self-driving car, the right to an explanation might look something like this: Upon purchase, the car would ask for basic user information (home location, age, sex, etc.) and offer the user a menu of options to prioritize: speed, scenery, preference for certain types of driving (urban, highway), avoiding traffic, etc. Based on those responses, the car would have a better idea of the personal data to rely on when making decisions and could use the preset priorities to answer questions like, “Why did you get off the highway an exit early?” During the lifetime of the car, the user could review the personal data collected by the car and adjust the preset priorities.
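Here is one way that loop might look in code. Every route, attribute score, and priority name below is invented for illustration; no manufacturer’s system works this way as far as I know.

    # User presets captured at purchase; weights sum to 1 for simplicity.
    priorities = {"speed": 0.2, "scenery": 0.1, "avoid_traffic": 0.7}

    # Invented attribute scores for two candidate routes, on a 0-to-1 scale.
    routes = {
        "stay on highway": {"speed": 0.9, "scenery": 0.2, "avoid_traffic": 0.3},
        "exit one early": {"speed": 0.7, "scenery": 0.4, "avoid_traffic": 0.9},
    }

    def choose_and_explain(routes, priorities):
        def score(attrs):
            return sum(priorities[k] * attrs[k] for k in priorities)
        best = max(routes, key=lambda r: score(routes[r]))
        # The explanation cites the user's own highest-weighted priority,
        # which is what makes it personalized and easy to understand.
        top = max(priorities, key=priorities.get)
        return best, f"I chose '{best}' mainly because you rank '{top}' highest."

    route, why = choose_and_explain(routes, priorities)
    print(route)  # "exit one early", driven by the avoid_traffic preset
    print(why)

Because the presets are data the user supplied, the same structure supports Havens’ other suggestions: the user can inspect them, correct them, and reweight them at any time.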

In this way, the right to an explanation also functions as a data protection tool much more powerful than rules currently on the books in the United States, such as laws that require individuals to consent before a third party can use or disclose their data and that require businesses to notify users if their data may have been breached by hackers. Both ideas affirmatively grant each person greater control of his or her data. But the right to an explanation lets you see how the information you’re handing over is used in context and can be used to grant you greater control of what you want to input and how it’s processed.

The right to an explanation is not without tradeoffs. Thomas Burri, an assistant professor of international and European law at the University of St. Gallen, told me via Skype that though he believes there should be something like these required disclosures, he’s concerned the requirements could go too far, hindering developers and infringing on their rights.

“If the first thing you need to consider when designing a new program is the explanation, does that stifle innovation and development?” he said. “Some decisions are hard to explain, even with full access to the algorithm. Trade secrets and intellectual property could be at stake.” The fear is that broad language creating the right to an explanation could discourage companies, entrepreneurs, and developers from fully exploring the possibilities of A.I. because they don’t want to reveal sensitive information or navigate the complexities of complying with ambiguous requirements.

In the U.S., such broad language and potentially onerous requirements, plus legislators’ recent lack of concern for constituents’ personal data, make it unlikely that Congress or state legislatures will adopt a similar right to an explanation anytime soon. But that doesn’t mean we American users shouldn’t lobby for it.

As A.I. and autonomous devices become more adept at interpreting our personal data, and as our reliance on those devices increases, it becomes more important for every user to know what that personal data is and how it’s being used. Policymakers can work with experts in the field to draft balanced regulations that give companies enough room to innovate while giving consumers the kind of meaningful control that ensures companies can’t use our personal data in objectionable, dangerous, or discriminatory ways.

And it would also help explain how Sir Mix-a-Lot was cued up for my family. Amazon, I understand Alexa doesn’t operate under the right to an explanation, but call me if you have one.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.


Source: https://www.newamerica.org/
