Why Does Everyone Think AI Is So Scary? Battling Fear Before It Cripples The Industry


To say that artificial intelligence is a hot topic is somewhat of an understatement. You cannot even go to the local coffee shop without overhearing someone talking about it. And while work in the areas of AI, machine learning and neural networks has been ongoing for decades, it wasn’t until recently that it all came to a head. The tipping point? ChatGPT!

Created by OpenAI, ChatGPT has received a lot of attention recently. And love it or hate it, it’s a rocket ship that’s showing no signs of a slowdown. As Dmitry Volkov, an early investor in OpenAI, explained in a recent social post, “ChatGPT reached over 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history. And it keeps growing.”

In his post, Volkov, who is the founder and CEO of Social Discovery Group, added that the value of the startup is now “around $29 billion.” He also stated that “OpenAI anticipates steep growth in the coming years: by 2024, the company aims to reach $1 billion in revenue.” 

Volkov and Social Discovery Group’s investment arm, which according to sources has more than $500 million in assets under management and $50 million invested in public markets, joined Khosla Ventures as a limited partner in the OpenAI investment.

But AI Is Really, Really Scary! 

While the success of OpenAI’s ChatGPT is impressive, the growing dialogue around the technology has made some people feel uneasy. Perhaps it’s because AI technology is advancing so rapidly, and many don’t know what AI will mean for their future. Or perhaps it’s because we’ve been conditioned for decades to believe that AIs are out to get us. On the big screen, AI is almost always cast as a malevolent machine built to take over the world. From the classic novel ‘Frankenstein’ to contemporary films such as ‘The Terminator’, the technology has been depicted as a force that will ultimately break down the gates of human civilization.

Additionally, day-to-day media reports on futuristic AI technology only fuel this fear further. Take, for example, the US Air Force drone simulation that reportedly ended in the hypothetical killing of the craft’s operator. Whether or not the test actually happened, such reports make it easy to believe that super-intelligent machines hunting down humans are just around the corner. This perception of AI, however, is not accurate at all.

In reality, AI technology has been designed to improve our daily lives, and many businesses have deployed it within their operations to enhance productivity, profitability and overall growth. From healthcare to retail, AI is being adopted in countless sectors. Banks, for instance, are applying AI algorithms to fraud detection, credit risk evaluation and customer service, and the technology is streamlining operations such as payments, trading and wealth management, making processes more efficient and cost-effective for financial organizations.
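For readers curious what that kind of pattern-spotting looks like in practice, here is a minimal, purely illustrative sketch in Python. It uses scikit-learn’s IsolationForest, one common anomaly-detection technique, to flag an unusual transaction; the feature names, numbers and thresholds are hypothetical and not drawn from any bank’s actual system.

# Illustrative only: a toy anomaly detector in the spirit of AI-based fraud screening.
# Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: [amount_usd, hour_of_day, merchant_risk_score]
normal_transactions = np.column_stack([
    rng.normal(60, 20, 500),     # typical purchase amounts
    rng.integers(8, 22, 500),    # daytime hours
    rng.uniform(0.0, 0.3, 500),  # low-risk merchants
])
suspicious = np.array([[4800, 3, 0.9]])  # large purchase, 3 a.m., high-risk merchant

# Train on "normal" behavior, then score the new transaction
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_transactions)

# predict() returns -1 for outliers (flag for review) and 1 for inliers
print(model.predict(suspicious))

The idea is simply that a model trained on what normal activity looks like can flag transactions that deviate sharply from it, which is the essence of AI-based fraud screening.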

AI has also improved the customer's buying journey through personalized product recommendations, chatbots, and voice assistants. For some people, AIs are even filling in as companions, confidants and advisers. One woman even discovered more about her sexuality with help from an AI. But with all the benefits AI can bring to society, it has somehow found itself in the crosshairs.

Fear Is Leading To Regulation 

Recently, a growing number of regulators and experts have become increasingly outspoken about the potential dangers of AI. The US government has already set the regulatory ball in motion with measures like the White House’s Blueprint for an AI Bill of Rights. But what’s even more surprising is the support regulation has been receiving from industry heavyweights. During recent testimony before a panel of US Senators, OpenAI’s CEO Sam Altman stated that AI regulation is needed.

While some praise the regulations being proposed, others believe the approach is misguided. According to Nick Davidov, Partner at Davidovs Venture Capital (DVC), AI regulation will do little to guide the industry or protect society.

“I have been developing and investing in AI companies since 2012, and I love the technology,” said Davidov. “I completely believe in its potential to change society for the better. However, the types of regulations being advocated for will do little to further this. And ultimately, it will do more harm than good.”

Davidov, whose portfolio includes noted players such as Narrative BI, Intently, Prisma and others, explained that regulation will ultimately hurt startups. 

“The pace of innovation in AI is happening at scale, just look at how quickly applications like Prisma or ChatGPT have grown, and regulators will never be able to keep up with it,” he said. “Additionally, the costs of doing business that regulation brings will only push out startups and help the big players corner the market. Furthermore, bad actors will ignore any regulations, so there's no point in creating red tape that only affects small, ethical players.”

Instead of regulation, Davidov proposes that what’s truly needed are programs to help people upskill and more tools that help address things like bias, misinformation, trolling and deepfakes.

What Can The AI Community Do? 


“The reality is that AI is here to stay,” added Davidov. “So instead of trying to control it, which never works, we should focus on guiding it towards its intended purpose, which is to help humanity evolve further towards self-actualization.”

Part of that evolution means society will need to embrace a future where humans work with help from digital assistants. 

And according to Davidov, there are three things that need to happen for the dialogue around AI to improve.

  • The public needs to become better educated: AI technology is still in its infancy, and there is a lot the public doesn’t know about it. This means education is critical, and the AI community can play an essential role in helping decision-makers and the public understand the true benefits and risks of AI. This includes working with universities to create AI courses, conducting research that addresses concerns like bias and misinformation, and making sure information about AI is accessible to everyone.

“The AI community needs to take a proactive stance in educating the public,” said Davidov. “By explaining how AI works, its limitations, and the benefits it brings, we can build real awareness and knowledge around it. And from there, society will be able to make informed decisions rather than reacting from a place of fear.”

  • There needs to be a focus on training and upskilling: AI will eliminate many mundane tasks, which will change the work we do. People will have more freedom to do the type of work they want rather than the tedious work. For the workforce to prepare for these changes, however, training and education programs will be needed to help people learn how to use the technology to support and improve the work they already do. This will be critical, given that AI is rapidly becoming a commodity.

“Virtual employees will replace some human employees at scale, which means certain jobs will need to evolve,” said Davidov. “Institutions will need to retrain people and prepare them to do higher-level, creative tasks that add value to society.”

  • Promote the good: Promoting positive use cases is also a critical step toward safeguarding AI’s future. Given AI’s impact on the workforce, many are concerned about job losses in certain industries. Highlighting successful applications underscores AI’s potential for growth and encourages the development of AI-enabled solutions that can create new job opportunities.

“Companies in the AI industry should promote the benefits of AI, which include increased efficiency, better decision-making, and new opportunities for upcoming markets,” said Davidov. “If the AI community does more to motivate and support examples of good AI demonstrations, it can help the sector gain greater public trust.”

The AI community has a vital role to play in safeguarding AI’s future. Taking proactive steps, such as providing education and training and promoting positive use cases, can help ensure that people understand how AI can help them achieve their limitless potential. Above all, the public needs to understand that AI is not the enemy. Rather, it should be viewed as an opportunity and an essential tool for researchers, innovators, business owners and workers to solve the most challenging problems we face today.
