
Can you really trust AI? Davos crowd treads with caution

January 17, 2024

Artificial intelligence is the hot topic at the World Economic Forum this year, with political and business leaders discussing how the technology can be used responsibly in the job market, health care and education.

Image: A man walks past a shop window bearing the words "Let's bring trust and AI together." AI is the buzzword at the WEF gathering in Davos this year (Gian Ehrenzeller/KEYSTONE/picture alliance)

The buzz around artificial intelligence is palpable on the promenade in Davos. Much of the real estate here has been plastered with posters extolling the virtues of AI.

There is even an entire pavilion dedicated to the technology. It's aptly called the AI House and is among the most sought-after addresses at this year's World Economic Forum as business leaders debate the risks and opportunities presented by AI and figure out how to adopt the technology effectively.

A sense of optimism prevails about the possibilities AI promises to open up in fields such as health care and education. However, the enthusiasm is often preceded by qualifiers such as "AI, if done responsibly" or followed by "but we should be careful."

In its annual risks survey, the World Economic Forum assesses AI-driven misinformation and disinformation as the biggest danger over the next two years. The survey said the "nexus between falsified information and societal unrest will take center stage" this year, when more than 2 billion people go to the polls in countries such as the US and India.

Video: Deepfakes: Manipulating elections with AI (01:10)

The International Monetary Fund has warned that the technology revolution will affect almost 40% of jobs globally, including high-skilled jobs. In developed economies, the figure could be as high as 60%.

While emerging and developing economies might face fewer immediate disruptions from AI, the IMF warns that many of these countries would struggle to harness the benefits of AI due to a lack of infrastructure and skilled workers, raising the risk that over time AI could worsen inequality among nations.

"In most scenarios, AI will likely worsen overall inequality, a troubling trend that policymakers must proactively address to prevent the technology from further stoking social tensions," IMF chief Kristalina Georgieva said at the start of the annual WEF meeting.

Risks galore as AI rolls on

Image: Notwithstanding a sense of optimism about AI's potential, there are also warnings about the technology's pitfalls (Lian Yi/Xinhua News Agency/picture alliance)

Among the biggest concerns are the quality of the data powering the various AI models and the way the technology has made it possible to produce convincing manipulative content such as deepfakes at scale and at relatively low cost.

Critics say that generative AI companies haven't been transparent about the sources of the data powering their large language models, such as the ones behind ChatGPT, raising concerns about how reliable that underlying data is.

"Did they take data from 4chan (a discussion site known for coordinating harassment attacks as well as distributing illegal and offensive content — the ed.)? Or did they take data from a certain part of Reddit? You can only assume," James Landay, a computer science professor at Stanford University, told DW.

"What we do know is that they are mainly coming from a Western perspective. The cultural values embedded in this data are not appropriate for other cultures. It's almost a form of imperialism," he said.

Landay, who specializes in human–computer interaction, points to the "triple D" threats posed by AI models: disinformation, deepfakes and discrimination.

Endorsement for AI

Despite its current shortcomings, AI is being touted as a major game changer across industries. Tech leaders have been raving about how the technology has led to a major jump in productivity.

Nigel Vaz, CEO of Publicis Sapient — the digital arm of the French ad agency Publicis — said AI had led to 30-40% productivity gains in software development.

"It's allowing them [developers] to focus more, not so much on generation of code, but on actual ideation," he said during a panel discussion.

Experts also underscore the potential benefits AI promises, especially in fields like education, where children with limited access to schools could one day have personal tutors, and health care, where the technology is already helping to enhance the quality of patient care.

European Commission President Ursula von der Leyen, a self-confessed tech optimist, agrees.

"AI can boost productivity at unprecedented speed," she told the Davos crowd. "Europe must up its game and show the way to responsible use of AI. That is artificial intelligence that enhances human capabilities, improves productivity and serves society."

Video: New AI technology conquers all at CES 2024 (04:53)

The hype around generative AI seen last year might cool a little this year as companies grapple with finding workable use cases for the technology and with uncertainty around how its use will be regulated, says Alexandra Mousavizadeh, CEO of Evident, a platform that specializes in benchmarking and tracking AI adoption across the banking sector.

"There is a huge amount of hype and exploration as to what generative AI can do for businesses, but it's extremely difficult to implement," Mousavizadeh told DW. "There is clear understanding what the large language models can do but the hesitancy is whether it is reliable enough for the use cases that we have inside our organizations."

How to use AI responsibly?

A lot of the chatter in Davos has been around how to harness the benefits of the tech revolution while minimizing the risks.

For example, Ramayya Krishnan, an expert on digital transformation at Carnegie Mellon University in the US, says the risk of job losses can be reduced by real-time monitoring of local labor markets so that employers' changing needs can be quickly identified.

"It's very unlikely that an AI is going to essentially be a substitute for the all the tasks in a given occupation," said Krishnan, who is also a member of the US Department of Commerce's National Artificial Intelligence Advisory Committee.

Getting situational awareness of what's going on in the marketplace will help identify skill gaps, and workers can then be reskilled to help them transition out of jobs that are declining, he said.

When it comes to misinformation and disinformation, companies are making efforts to better inform users. Google, for instance, has developed SynthID, a tool for watermarking and identifying AI-generated images.

But currently there is no industry standard. 

"The requirement should be that any AI model when it develops content should have provenance associated with that and, alongside the content, release a tool via which the watermark or the content provenance can be processed so that the citizen who's interacting with the content can be informed whether they're interacting with an AI content or not," Krishnan told DW.

Edited by: Rob Mudge
