AI doomerism is overblown

Published: Jun 5, 2023

Many of the leading voices in AI have co-signed yet another ominous open letter warning that we should be “mitigating the risk of extinction from AI.” However, the voices shouting for regulation the loudest have us wondering how much of the AI fear-mongering is warranted, and how much is self-serving theater. This week, I’m joined by Devin Coldewey to talk about why AI doomerism is overblown, and why the blowhards doing the blowing want it that way.

When you hear the phrase “artificial intelligence,” it may be tempting to imagine the kinds of intelligent machines that are a mainstay of science fiction or extensions of the kinds of apocalyptic technophobia that have fascinated humanity since Dr. Frankenstein’s monster.

But the kinds of AI that are rapidly being integrated into businesses around the world are not of this variety — they are very real technologies that have a real impact on actual people.

While AI has already been present in business settings for years, the advancement of generative AI products such as ChatGPT, ChatSonic, Jasper AI and others will make these tools dramatically easier for the average person to use. As a result, the American public is deeply concerned about the potential for abuse of these technologies. A recent ADL survey found that 84% of Americans are worried that generative AI will increase the spread of misinformation and hate.

Leaders considering adopting this technology should ask themselves tough questions about how it may shape the future — both for good and ill — as we enter this new frontier. Here are three things I hope all leaders will consider as they integrate generative AI tools into organizations and workplaces.

Make trust and safety a top priority

While social media companies are used to grappling with content moderation, generative AI is being introduced into industries that have no previous experience dealing with these issues, such as healthcare and finance. Many organizations may soon find themselves suddenly faced with difficult new challenges as they adopt these technologies. If you are a healthcare company whose frontline AI-powered chatbot is suddenly rude or even hateful to a patient, how will you handle that?

Author: Samson