April 8, 2019


By MATT O'BRIEN and RACHEL LERMAN ~ Associated Press
People stand in front of the Google tent during preparations for CES International on Jan. 5 in Las Vegas. Google employees have had more success than other tech workers at demanding change at the company. Google dropped a contract with the Pentagon after employees pushed back on the ethical implications of using company technology to analyze drone video. John Locher ~ Associated Press, file

The biggest tech companies want you to know they're taking special care to ensure their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn't spill over to the dark side.

But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see them as mere window dressing by corporations more interested in profit than in society's best interests.

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?

Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.

The outside board fell apart in stages. One of the board's eight inaugural members quit within days and another quickly became the target of protests from Google employees who said her conservative views don't align with the company's professed values.

As thousands of employees called for the removal of Heritage Foundation president Kay Coles James, Google disbanded the board last week.

"It's become clear that in the current environment, (the council) can't function as we wanted," the company said in a statement.

That environment is one of increasing concern that corporate AI ethics campaigns lack teeth.

"I think (Google's decision) reflects a broader public understanding that ethics involves more than just creating an ethics board without an institutional framework to provide for accountability," AI researcher Ben Wagner said.

Google's original initiative fell into a tech industry trend Wagner calls "ethics-washing," which he describes as a superficial effort that's mostly a show for the public or lawmakers.

"It's basically an attempt to pretend like you're doing ethical things and using ethics as a tool to reach an end, like avoiding regulation," said Wagner, an assistant professor at the Vienna University of Economics and Business. "It's a new form of self-regulation without calling it that by name."

Big companies have made increasingly visible efforts to discuss their AI work in recent years.

Microsoft, which often tries to position itself as an industry leader on ethics and privacy issues, published its principles around developing AI, released a short book discussing the societal implications of the technology and has called for some government regulation of AI technologies.

The company's president even met with Pope Francis earlier this year to discuss industry ethics. Amazon recently announced it is helping fund federal research into "algorithmic fairness," and Salesforce employs an "architect" for ethical AI practice, as well as a "chief ethical and human use" officer. It's hard to find a brand-name tech firm without similar initiatives.


It's a good thing companies are studying the issue and seeking perspectives on industry ethics, said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research organization. But ultimately, he said, a company's CEO is the one who decides which suggestions on AI ethics to incorporate into business decisions.

"I think overall it's a positive step rather than a fig leaf," he said. "That said, the proof is in successful implementation. I think the jury is still out on that."

The impact artificial intelligence can have on society has never been more clear, Etzioni said, and companies are reacting to studies about the power of recommendation algorithms and gender bias in AI.

But as Google's attempt shows, discussing the issues in the public eye also invites public scrutiny.

Google employees have had more success than other tech workers at demanding change at their company. The internet search behemoth dropped a contract with the Pentagon after employees pushed back on the ethical implications of using the company's AI technology to analyze drone video.

And after more than 2,400 Google employees signed a petition calling for James to be taken off the board, Google scrapped the board altogether. Employees said James has made past comments that were anti-trans and anti-immigrant and should not be on an ethics panel. The Heritage Foundation did not respond to a request for comment.

Google had also faced dissent from its chosen councilmembers. Alessandro Acquisti, a professor at Carnegie Mellon University, announced on Twitter that he was declining the invitation. He wrote that he is devoted to grappling with fairness and inclusion in AI but that this was not "the right forum for me to engage in this important work." He declined to comment further.

One expert who had committed to staying on the council is Joanna Bryson, associate professor in computing at the University of Bath. A self-described liberal, she said before the dissolution that it makes sense to have political diversity on the panel, and she didn't agree with those who think it's just for show.

"I just don't think Google is that stupid," Bryson said. "I don't think they're there just to have a poster on a wall."

She said, however, companies such as Google and Microsoft do have a real concern about liability -- meaning they want to make sure they show themselves, and the public, they've tried their best to build products the right way before releasing them.

"It's not just the right thing to do, it's the thing they need to do," she said. Bryson said she was hopeful Google actually wanted to brainstorm hard problems and should find another way to do so after the council dissolved.

It's unclear what Google will do next. The company said it was "going back to the drawing board" and would find other ways of getting outside opinions.

Wagner said now would be the time for Google to set up ethics principles that include binding commitments, external oversight and other checkpoints to hold the company accountable.

Even if companies keep setting up external boards to oversee AI responsibility, government regulation will still be needed, said Liz O'Sullivan, a tech worker who left AI company Clarifai over the company's work in the Pentagon's Project Maven -- the same project Google dropped after its employees protested.

O'Sullivan is wary of boards that can make suggestions companies are under no legal obligation to stick to.

"Every company of that size that states they're interested in having some sort of oversight that has no ability or authority to restrict or restrain company behavior seems like they're doing it for the press of it all," she said.
