
Big Issues for Corporate Giants: Content, AI and Privacy

At Web Summit in Portugal last week, large corporations took to the stage to tackle what they believe are the issues of the day, from Microsoft President Brad Smith declaring that AI could be used as a weapon, to David Graff, Google’s Head of Content Moderation, admitting that false information will still be indexed in the company’s organic search.

In a bid to shock the audience, Smith used a clip from 2001: A Space Odyssey in which the AI HAL refuses to open a door to let the crew return, fully overriding the humans’ decision. The idea of machines thinking for themselves has until now belonged to the world of science fiction, but Smith used his time on stage to send a warning to the audience, calling for “guard rails on the technology we are creating to protect against their abuse or their misuse or their unintended consequences.”

The public perception of AI, bolstered by every Terminator movie, he quipped, meant the technology would only be embraced if it was built to put the public first. He added that Microsoft was already using AI for humanitarian relief, environmental management and cultural heritage preservation.

He concluded, “Any tool can become a weapon. When we look to the decade ahead, in many respects AI will be a tool of the sort the world has seldom seen before. And hence it can become a weapon as well.”


This view was shared on another stage by Richard Edelman, CEO of global PR firm Edelman, who acknowledged the fear felt by blue-collar workers, particularly in the US, who see AI as something that can replace them. He said, “Tech has to have a soul, tech can’t just make money. Tech has to somehow recognise that AI is powerful, if it’s used well. You have to face forward on this and make sure that people see tech for good, not as a job killer.”

As for Graff, with election fever gripping the UK and soon the US, content and its moderation have become something of a two-headed monster. He said, “At Google we are constantly reflecting on the creation of the internet and find that the biggest challenge is that most content platforms have few barriers to entry. This means information can rapidly disseminate, allowing people to find communities, which is a tremendous strength as they can amplify the voice of the marginalised. But these tools and features can also be used by bad actors to sow division and discord. Essentially, tools can be used for good or evil.”

Graff spoke of how Google has somehow become a reluctant gatekeeper, forced by some governments and agencies to withdraw information that they believe violates local laws. While some efforts have been a success, such as Germany’s hate speech act, Graff was less forthcoming on the Right to be Forgotten. The company won a court case just last month ruling that removal requests need only be applied within the European Union.

Pressed on Google’s relationship with countries such as China, which has banned apps on the Google Play store, Graff said that apps present unique concerns due to the ability to marry user-generated content with developer-based content. However, he ended by saying he had not noticed an increase in requests from the country.

As for the Right to be Forgotten, he believes that his department is responding to requests in a timely manner, but that the topic will have to be revisited in a few years to see whether it has been successful.

Author

  • Gina is a fintech journalist (BA, MA) who works across broadcast and print. She has written for most national newspapers and started her career in BBC local radio.
