How Google Is Addressing AI Ethics — Google I/O 2023

AI is powerful and risky. It can spread misinformation, violate privacy and create deepfakes. Learn how Google is being responsible and ethical with their AI offerings. By Sandra Grauschopf.


At Google I/O 2023, the company spent a lot of time previewing how they’re building AI into many of their products. However, AI use has a dark side, too. It raises ethical considerations like how to fairly compensate people for the work that feeds large language models and how to reduce harm from misinformation that can be quickly, easily and cheaply created and spread with AI’s help. So what is Google doing to address the ethical questions swirling around AI?

James Manyika, who leads Google’s new Technology and Society team, dedicated his keynote speech at Google I/O 2023 (it starts around the 35-minute mark) to talking about the ethics of new AI features. As he said, it’s “an emerging technology that is still being developed, and there is still so much to do”. In his view, companies must be both bold and responsible when creating new AI tools.

Google is taking steps to create amazing AI products ethically. Image by Bing Image Creator.


In this article, you’ll see some of the ways Google is addressing the ethical considerations. But first, take a moment to learn about why AI ethics is such a big topic right now.

Why Ethical AI Is So Important — Especially Now

When ChatGPT exploded on the digital scene at the end of November 2022, it kicked off what the New York Times called “an AI arms race.” Its incredible popularity, and its ability to transform — or disrupt — nearly everything we do online, caught everyone off guard. Including Google.

It’s not that AI is new; it’s not. It’s that it’s suddenly incredibly usable — for good purposes and for bad.

For example, with AI, a company can automatically generate hundreds of suggested LinkedIn posts on its chosen subjects in its brand voice at the click of a button. Nifty. On the other hand, bad actors can just as easily create hundreds of pieces of propaganda to spread online. Not so nifty.

Similarly, AI-generated images can look incredibly realistic — even when portraying things that are blatantly false, like a staged moon landing.

Google has been using, and investing in, AI for a long time. AI powers its search algorithms, its Google Assistant, the movies Google Photos automatically creates from your photos and much more. But now, Google is under pressure to do much more, much faster, if they want to keep up with the competition.

The AI field is an area with huge opportunities, but also huge risks. So much so that many industry leaders are asking for a pause in AI development to let the ethics catch up to the technology.

One reason why Google didn’t go public with AI earlier is that they wanted to ensure that the ethics questions were answered first. However, the surge in AI popularity has forced their hand, and they need to move forward to stay in the game. Not everyone agreed with that decision. For example, Geoffrey Hinton, called the “godfather of AI”, left the company over concerns about ethical AI usage.

Perhaps that’s one reason why Google dedicated time in their keynote speeches to talk about AI. Here are the concerns they shared and how they are addressing them.

Google’s 7 AI Responsibility Principles

To make sure they stay on the right side of the AI ethics questions, Google has developed a series of seven AI responsibility principles to follow. The principles state that any AI products they release must:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available [only] for uses that accord with these principles.

How Google Is Putting Their Ethical AI Principles to Work

So what do these guidelines mean in practical terms? They guide how Google releases products — and sometimes mean that they can’t release them at all. For example, Manyika said that Google decided against releasing their general-purpose facial recognition API to the public when they created it, because they felt there weren’t enough safeguards in place to ensure it was safe. That followed the final principle of making AI-driven tools available only for purposes that align with the guidelines.

Fighting Misinformation

AI makes it easier than ever to spread misinformation. It’s the work of a few seconds to use an AI image generator to create a convincing image that “shows” the moon landing was staged, for example. Google is working to make AI more ethical by giving people tools to help them evaluate the information they see online.

This moon landing picture is fake — and Google wants to ensure you know that. Image by Bing Image Creator.

To do this, they’re building a way to get more information about the images you see. With a click, you can find out when an image was created, where else it has appeared online (such as on fact-checking sites) and when and where similar information appeared. So if someone shows you a staged moon landing image they found on a satire site, you can see the context and realize it wasn’t meant to be taken seriously.

Google is also adding features to its AI-generated images to distinguish them from natural ones. They’re adding metadata that will appear in search results marking an image as AI-generated, plus watermarks to ensure its provenance is obvious when the image is used on non-Google properties.
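Manyika didn’t share the exact markup Google will embed, but provenance labels like this usually live in an image’s embedded XMP/IPTC metadata, which is stored as plain text inside the file. Here’s a minimal sketch, assuming the label uses the IPTC “digital source type” term for AI-generated media (trainedAlgorithmicMedia) and using a naive byte scan rather than a real metadata parser:

```python
from pathlib import Path

# IPTC's "digital source type" vocabulary uses this term for media produced by a
# generative model. It's an illustrative marker here, not Google's confirmed markup.
AI_GENERATED_MARKER = b"trainedAlgorithmicMedia"


def looks_ai_generated(image_path: str) -> bool:
    """Scan a file's raw bytes for an embedded AI-generated provenance label.

    XMP/IPTC metadata is embedded as text inside JPEG and PNG files, so a simple
    byte search is enough for a quick check; a production tool would parse the
    metadata blocks properly.
    """
    return AI_GENERATED_MARKER in Path(image_path).read_bytes()


if __name__ == "__main__":
    # "moon_landing.jpg" is a hypothetical file name for illustration.
    print(looks_ai_generated("moon_landing.jpg"))
```

Metadata like this can be stripped when an image is re-encoded, which is presumably why Google is pairing it with watermarks baked into the image itself.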

Reducing Problematic Content

Aside from “fake” images, AI can also create problematic text. For example, someone could ask “tell me why the moon landing is fake” to get realistic-sounding claims to back up conspiracy theories. Because an AI model produces answers that sound like the right result for whatever you ask, it should, theoretically, be very good at exactly that.

However, Google is combating problematic content using a tool they originally created to fight toxicity in online platforms.

Their Perspective API originally used machine learning and automated adversarial testing to identify toxic comments in places like the comments section of digital newspapers or in online forums so that publishers could keep their comments clean.
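The Perspective API is publicly available, so you can try this kind of toxicity scoring yourself. Here’s a minimal sketch that sends a piece of text to the API’s comments:analyze endpoint and reads back a toxicity probability; YOUR_API_KEY is a placeholder, and you’d need to request a real key through Google Cloud:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: request a real key through Google Cloud
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key="
    + API_KEY
)


def toxicity_score(text: str) -> float:
    """Return the Perspective API's estimate (0.0-1.0) that `text` reads as toxic."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```

A publisher might, for example, hold any comment scoring above a chosen threshold for human review rather than publishing it automatically.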

Now, it’s been expanded to identify toxic questions asked of AI and improve the results. According to Google, it’s currently being used by every major large language model, including ChatGPT. If you ask ChatGPT to tell you why the moon landing was fake, it will answer: “There is no credible evidence to support the claim that the moon landing was fake” and back up that statement.