How Google Is Addressing AI Ethics — Google I/O 2023

AI is powerful and risky. It can spread misinformation, violate privacy, and create deepfakes. Learn how Google is being responsible and ethical with its AI offerings. By Sandra Grauschopf.


Working With Publishers to Use Content Ethically

While users might be very excited about some of Google's AI integrations, authors and publishers are rightly concerned. After all, large language models are trained on their content, but the creators are neither compensated for it nor even asked whether they consent to the way their content is being used. So making sure that authors and publishers can both consent to and be compensated for the use of their work is a major ethical consideration with AI.

A robot and a human shaking hands. Ethical AI means that the AI creator and the publisher are working together. Image by Bing Image Creator.

Google says they're working with publishers to ensure that AI is trained only on work that publishers allow, much as publishers can already opt out of having their work indexed by Google's search engine, and that they are working toward fair compensation for authors and publishers. However, it was disappointing that they didn't share any details about how they plan to accomplish this.

Restricting Problematic Products

Sometimes, a product can be both hugely beneficial and hugely harmful. In these instances, Google is heavily restricting access to those products to limit malicious uses.

For example, Google is introducing a tool that can automatically translate a video from one language to another, even matching the original speaker's tone and mouth movements. This has clear and obvious benefits, such as making learning materials more accessible.

On the other hand, the same technology can be used to create deepfakes that make people appear to say things they never did.

Because of this huge potential downside, Google will make the product available only to approved partners, limiting the risk of it falling into the hands of a bad actor.

Where to Go From Here?

If you’d like to learn more, here’s some suggested reading (or watching):

Do you have any thoughts on ethical AI you’d like to share? Do you think that Google will be able to live up to their promises? Click the “Comments” link below to join our forum discussion!