Future Tense
The U.K. Wants to Become the World Leader in Ethical A.I.
But what does that actually mean? And is it possible?
By Joelle Renstrom
Aug 01, 2018, 7:30 AM
A principled-looking robot holds up a U.K. flag.
Photo illustration by Slate. Photos by Thinkstock.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
In 2013, an algorithm determined Eric Loomis’ six-year prison sentence in Wisconsin for attempting to flee a traffic officer and operating a motor vehicle without the owner’s consent. No one knew how the software, Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, worked—not even the judge who delivered the sentence. Analyses conducted by ProPublica later found the predictive artificial intelligence used in this case, which attempts to gauge the likelihood of an offender committing another crime, to be racially biased: A two-year study involving 10,000 defendants found that the A.I. routinely overestimated the likelihood of recidivism among black defendants and underestimated it among whites. The U.S. Supreme Court declined to review Eric Loomis’ case, so the sentence stands.
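ProPublica's method boiled down to comparing error rates across racial groups: among defendants who did not go on to reoffend, how often was each group nonetheless flagged as high-risk? Here is a minimal sketch of that kind of disparity check, written in Python with invented data and hypothetical column names; it is not ProPublica's actual code.

```python
# Minimal sketch of a false-positive-rate disparity check, in the spirit
# of ProPublica's COMPAS analysis. The data and column names are invented.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of defendants who did NOT reoffend but were flagged high-risk."""
    no_reoffense = df[df["reoffended"] == 0]
    return (no_reoffense["high_risk"] == 1).mean()

# One row per defendant: race, a binary high_risk flag derived from the
# risk score, and whether they reoffended within the follow-up period.
df = pd.DataFrame({
    "race":       ["black", "black", "white", "white", "black", "white"],
    "high_risk":  [1, 1, 0, 1, 0, 0],
    "reoffended": [0, 1, 0, 0, 0, 0],
})

for race, group in df.groupby("race"):
    print(race, round(false_positive_rate(group), 2))
# A markedly higher false positive rate for one group is the kind of
# imbalance ProPublica reported: non-reoffending black defendants were
# flagged high-risk far more often than non-reoffending white defendants.
```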
Increasingly, A.I. has the power to alter the course of people’s lives. It’s becoming part of decisions about who gets hired, who gets fired, who goes to prison, which students schools pursue, and how doctors treat patients. It’s going to affect foreign affairs, the economy (particularly through job automation), transportation, and infrastructure. Each new application represents an economic opportunity, which is part of the reason why the rush to develop has been dubbed the “new space race.” The current front-runners are China, with billions of dollars in investments and an ambitious national plan for establishing global dominance over the industry, and the U.S., where advancements come primarily from the private sector and academia. Other countries are trying to figure out how they can keep up, even if they can’t compete with the U.S. or China in A.I. funding and development.
The U.K. has settled on another path to become a leader in the A.I. game. At the World Economic Forum in Davos, Switzerland, in January, U.K. Prime Minister Theresa May announced her country’s goal to become a world leader in “ethical A.I.” Three months later, the U.K. unveiled its A.I. Sector Deal, a comprehensive policy that establishes a partnership between government, academia, and industry to address residents’ and businesses’ goals and concerns with respect to A.I.
And there’s lots to be concerned about, like technological unemployment, a homogeneous A.I. workforce creating products that have very human biases, the dissemination of misinformation, military applications, and a widening wealth gap. Crime-prediction software focuses on nonwhite neighborhoods, perpetuating profiling, resentment, and potential due process violations. LinkedIn’s search algorithms make it easier to find prospective male employees than female ones. China’s facial-recognition programs threaten privacy and suppress freedom. (China’s willingness to bypass data-protection concerns may be an advantage in the A.I. race.) Due to backlash, Google recently opted out of a contract to work on A.I. for weapons, though it will continue to do military work.
The U.K. deal is designed to address many of these worries. If it succeeds in balancing economic growth with concerns about privacy, trust, and access, it could demonstrate that ethical behavior is good for business. That in turn could influence policies in the European Union (regardless of what happens with Brexit) and around the world.
At a roundtable discussion at the British Consulate in Cambridge, Massachusetts, in May, Matthew Gould, U.K. director general for digital and media, said the goal is to have researchers consider ethics every step of the way, rather than relegate it to an afterthought. That sounds wonderful. But it is also abstract and seems destined to become overwhelming, even Sisyphean, given the industry’s size and growth. What does it mean to bake in ethics? Is it even possible?
The 21-page deal “establishes the beginning of [the] partnership” among business, academia, and government by responding to recommendations about “how the government and industry can work together on skills, infrastructure and implement a longterm strategy for AI in the UK.” Its key policies revolve around research and development, skills and digital literacy of human workers, infrastructure, and the business environment. The four “grand challenges” noted in the deal address the A.I. and data economy, clean growth, future mobility of goods and services, and the needs of an aging society. The deal covers many significant aspects and implications of A.I., all of which require attention to ethics. “A revolution in AI technology is already emerging,” reads the deal. “If we act now, we can lead it from the front.”
Much of the ethical discussion comes down to the role played by data—how A.I. uses it and how it's collected. Data sets contain information that can be isolated by specific variables—such as age, gender, race, or education—or organized and analyzed by A.I. in a way that provides insight and identifies trends. They're also used in training A.I. to better perform these tasks. Data sets gathered by private companies or researchers often contain far more information (medical records, bank information, purchase histories), but when that data stays siloed, it goes unused by A.I. in other sectors, impeding disease research, algorithm programming and implementation, understanding of financial trends, and insight into other far-reaching issues. Worse, data can also be misused. Imagine the Cambridge Analytica scandal with far more powerful technology.
Since 2010, the British government has granted open access to its public data sets and mandated other public bodies to do the same. The U.K. deal advocates for the use of data trusts (also called data collaboratives)—frameworks that facilitate mutually beneficial data-sharing between the government and other sectors. The idea is to provide access to new data, incentives to share data, and assurance that such data are being used for the public good. For example, a data trust could provide invaluable geographical, health, demographic, political, and other information about migration trends to various researchers, companies, and the government to help shape policies. Any data strategy must comply with the recently passed EU General Data Protection Regulation (the U.K. has its own similar Data Protection Act, which it recently strengthened) as well as maintain consumer trust.
"The likelihood that the ethics focus will survive is slim." — Kentaro Toyama
Maintaining trust also includes taking on the problem of programmer homogeneity, which contributes to cases such as Eric Loomis’. “A diverse group of programmers reduces the risk of bias embedding into the algorithm and enables a fairer and higher quality output,” computer science professor Dame Wendy Hall and industry expert Jérôme Pesenti wrote in the recommendations they passed on to the U.K. government. “Currently, the workforce is not representative of the wider population. In the past, gender and ethnic exclusion have been shown to affect the equitability of results from technology processes. If UK AI cannot improve the diversity of its workforce, the capability and credibility of the sector will be undermined.” A report from the Chartered Institute for IT found that the vast majority of IT specialists in the U.K. are male, able-bodied, and under 50 years old. Seventeen percent are female, 21 percent are older than 50, and 8 percent are disabled. Digital, media, and creative sectors have similarly disproportionate demographics. Ultimately, according to Hall, this creates a pervasive problem: “bias in, bias out.” While some U.S. companies have managed to increase diversity, they’ve only made a tiny dent: As of 2017, black workers filled only 3.1 percent of jobs in the eight largest American tech companies.
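Hall's "bias in, bias out" dynamic is easy to demonstrate: a model trained on skewed historical decisions reproduces the skew, even when no one intends it to. Here is a toy illustration in Python, with wholly invented data; the feature standing in for a demographic proxy is hypothetical.

```python
# Toy illustration of "bias in, bias out": a model trained on skewed
# historical decisions reproduces the skew. All data here is invented.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Features: [years_experience, attended_certain_school]; the second
# column stands in for any proxy that correlates with demographics.
X = np.array([[5, 1], [6, 1], [4, 1], [5, 0], [6, 0], [4, 0]])
y = np.array([1, 1, 1, 0, 0, 0])  # past hiring decisions, biased on the proxy

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates, differing only in the proxy:
print(model.predict([[5, 1], [5, 0]]))  # -> [1 0]: the historical bias carries over
```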
Thus, the U.K. deal aims to create a more heterogeneous workforce. The newly established Ada Lovelace Institute will work with the government to promote diversity, and the Alan Turing Institute’s new fellowship program will offer 1,000 government-supported A.I.-related Ph.D. placements. The U.K. will also double the number of Tier 1 exceptional talent visas and make it easier for visa holders to apply for long-term settlement. Diversity in programmers and researchers leads to products and services for different demographics, as well as algorithms that account for skin color and gender.
Investing in education for both children and adults also helps make opportunities in A.I. available to more people. The U.K. will spend 406 million pounds (about $533 million) to boost STEM education, train up to 8,000 computer science teachers, and create the National Centre of Computing Education. Adults can participate in the National Retraining Scheme, which will put 36 million pounds (roughly $47 million) toward digital-skills training and 40 million pounds (about $52 million) toward construction training—a savvy combination for workers and for national infrastructure. The U.K. is already experiencing housing shortages, and jobs such as bricklaying and roofing are predicted to be particularly hard hit by technological unemployment.
The deal also addresses concerns about A.I. supplanting human workers, particularly lower-skilled ones. Gould said they’re “confident new jobs will materialize,” but much debate remains about whether that will happen, how many and what types of new positions A.I. might create, and who will be qualified for those jobs. “New jobs don’t always go to those who’ve lost them,” Gould acknowledged. “We’re trying to avoid a haves and have-nots situation.” That wealth gap already exists. Income inequality isn’t quite as bad in the U.K. as it is in the U.S., but it’s getting worse. Perhaps it can be narrowed, or at least not exacerbated, by improved access. The U.K. is moving forward with 5G and fiber networks, as well as plans to provide high-speed broadband access to everyone. (Currently, 95 percent of U.K. residents have access.) Internet connectivity delivered at 10 megabits per second or more will be a legal right in the U.K. by 2020.
Crafting ethical policies regarding A.I. also requires addressing the notoriously complicated issue of liability, especially in the event of a malfunction or an autonomous A.I. decision or action. Areiel Wolanow, managing director of Finserv Experts, a consulting firm that customizes A.I. for businesses, attended evidence sessions—meetings in which committees of experts provide relevant data and perspectives to various governmental departments—that helped guide the deal. The moral of those sessions was that accountability is the most important aspect of A.I. regulations.
Slow progress on legal standards means laws and precedents must perpetually play catch-up with A.I. advancements. Many laws don’t have specific provisions for A.I., especially advanced A.I. capable of making autonomous decisions. However, “A.I. doesn’t get people around existing laws,” Wolanow said. As an example, he mentioned airlines’ “dynamic pricing,” which uses personal information to individualize fares. Germany’s Federal Cartel Office investigated Lufthansa after its ticket prices jumped 25 to 30 percent in 2017, just after the shuttering of rival Air Berlin, and rejected Lufthansa’s defense that its prices were generated by algorithms: An algorithm can’t be held responsible for following the law, but the company that deploys it can. Although the FCO ultimately didn’t open a formal case, “the company that owns the A.I. makes the decision to declare themselves accountable, even if the A.I. makes unethical decisions autonomously,” Wolanow said. In other words, Spider-Man’s first lesson applies.
While A.I.—and those who use it—can’t circumvent existing regulations, a lack of guidelines raises significant challenges. Who makes the rules that apply to technology so nascent and powerful that we don’t understand all it can do? Governments and industries set the standards for technologies used for health care, medical devices, and banking procedures, but that hasn’t happened yet with A.I.: “Even if you have something ethical, you can’t get it approved for use because there were no standards to develop it,” said Wolanow, who is in a group working to develop those standards. This critical step dictates the pace at which companies can roll out A.I.; it’s similar to private companies twiddling their thumbs for years while the FAA devised commercial drone regulations. “Certification for use is more time consuming and expensive than building A.I. solutions,” Wolanow said. Without standards, IT architects who ensure the functionality, safety, and compliance of technological systems “can’t really say an A.I. solution is safe for use.” Wolanow cited Bank of America’s security blockchain, which took 18 months to advance from prototype to pilot program because it lacked a basis for approval.
To address these problems, the deal calls for the creation of an A.I. Global Governance Commission, which recently met for its first planning and strategy session. The commission will “provide a point of auditability against which existing solutions can be measured,” said Wolanow. All guidance published by the commission will be testable: A.I. developers will be able to assess and verify that their solutions follow the guidelines, speeding up the process by which solutions can be implemented.
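The commission's guidance has not been published, so any example is speculative, but "testable" guidance could look like an automated check a developer runs before deployment. Here is a hypothetical sketch in Python; the fairness metric and threshold are invented for illustration.

```python
# Hypothetical sketch of "testable" guidance: an automated compliance
# check a developer runs before deployment. The metric and threshold
# are invented; the commission's actual guidance is not yet published.
def selection_rate(decisions):
    """Share of favorable (1) outcomes in a list of binary decisions."""
    return sum(decisions) / len(decisions)

def passes_parity_check(group_a, group_b, max_gap=0.2):
    """Flag a model if the gap in favorable-outcome rates between two
    demographic groups exceeds the allowed threshold."""
    return abs(selection_rate(group_a) - selection_rate(group_b)) <= max_gap

# Binary decisions (1 = favorable) produced by a model for two groups:
print(passes_parity_check([1, 1, 0, 1], [1, 0, 1, 0]))
# -> False: rates 0.75 vs. 0.5, a gap of 0.25, so the model is flagged for review
```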
Promoting a diverse workforce, setting standards and a procedure for regulation, addressing the impact of technological unemployment, providing broadband access to all residents, protecting private data, incentivizing research and development, and fortifying the economy—the deal ticks many boxes when it comes to the concerns about and implications of A.I. In fact, it’s enough boxes that it seems too good to be true. Can ethical practice coexist with rampant capitalism in the world’s most lucrative and dynamic industry?
Kentaro Toyama, associate professor at the University of Michigan School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of Technology, said the ethical focus “sounds great in theory … [but] when policies start off well-intentioned, the good intentions tend to erode under the constant efforts of lobbyists and less-than-noble politicians.” Certainly, the deal warrants skepticism. It’s not difficult to imagine companies seeking ways to exploit or circumvent the system. “The likelihood that the ethics focus will survive is slim,” Toyama said.
Optimism about the deal seems as naïve as it is reassuring. The ethical pieces could get jettisoned over time, and perhaps the reality won’t match the vision, but as Gould put it, the deal is a “declaration of intent,” a crucial first step. The American government isn’t talking about ethical A.I., and in an interview in Wired, Michael Kratsios, the president’s deputy assistant for technology policy, indicated that the Trump administration plans to intervene as little as possible in the development of A.I. so as not to inhibit its growth. The U.S. Office of Science and Technology Policy now has only 45 employees, compared with 135 under President Barack Obama, which slows down policy implementation and, more worryingly, inhibits the administration’s ability to understand and act on trends in science and technology, as well as their implications. Ignoring the ethical questions and consequences of A.I. could lead to a future most of us would rather avoid. “We in the United States should be equally up in arms about these issues,” Toyama said, “but apart from a few voices, many of which are appropriated by large tech companies, little is being done.”
Given that the U.S. has practically abdicated its moral responsibility here, and China doesn’t seem terribly interested either, it seems that we should be rooting for the U.K. to succeed. Fusing ethics and economic growth seems both obvious and ingenious because it doesn’t matter which holds more sway. Some people switch to solar power because it’s cheaper, not because they care about the environment, but the outcome is the same. Perhaps after it becomes evident that combining ethics and economics amounts to a win-win, we’ll see more of that.
Knowing what we know now about the implications of A.I., maybe we’d make different decisions about data protections, social media, and privacy if we could go back in time. But since each generation of technology gives rise to the next and irrevocably affects society, moving backward when it comes to A.I. is next to impossible. We decide what happens and how, which includes figuring out how to develop and use A.I. for the good of everyone and accepting responsibility for our mistakes. The U.K. has the opportunity to lead the next generation of tech corporations, research, and governmental policies. Here’s hoping the deal gets it right.