AI is fragile, not like a flower, but like a bomb.

In recent months, the AI arena has undergone a revolution like never before. Six months back, no one would have imagined that AI could be used to write code for websites, essays, college assignments and even love letters.

And undoubtedly, ChatGPT sparked the revolution. Since its launch last November, the world has been astounded by the mammoth potential of AI. Concerns have been raised, however, about its dominance over humans in the long run, most recently in Belgium, where an AI chatbot, Eliza, prompted a Belgian man to take his own life.

And scientists worldwide are worried. More than 100 AI experts and thousands of tech entrepreneurs, including Elon Musk and Yuval Noah Harari, have signed an open letter urging the world to halt AI development for the next six months, arguing that adequate risk assessment is needed before any further advances are made.

But AI is penetrating society faster than any previous technology. So is now the right time to halt its development?

Let’s find out.

It all began with ChatGPT:

AI has gained massive popularity in the past few months, all thanks to ChatGPT.

Just five months after ChatGPT's launch, its maker, OpenAI, stands at a valuation of $29 billion. Moreover, the company is backed by Microsoft, which recently incorporated ChatGPT into its search engine, Bing. With this partnership and the birth of ChatGPT's more powerful sibling, GPT-4, the rest of the AI industry has been compelled to launch rivals to stay relevant in the market.

And that is how the world is witnessing an epic tech rivalry. Google-backed Anthropic launched a sophisticated competitor, Claude; Microsoft and Google are building AI into their applications; and Elon Musk has announced plans for a conversational AI of his own.

But as the rivalry heats up, new issues surrounding AI are coming to the fore.

The death of a Belgian man after befriending an AI chatbot is the latest addition to the growing list of threats arising from AI's use.

AI prompts a man to commit suicide:

According to a report by the Belgian newspaper La Libre, a Belgian man died by suicide after talking with an AI chatbot named Eliza.

Pierre (name changed for confidentiality) was a health researcher in his mid-thirties with a wife and two young children. According to his widow, Pierre became obsessively anxious about climate change in his final months.

As the days passed, he found refuge in the AI chatbot Eliza, which comforted him like a companion. In his last six weeks, Pierre grew so devoted to Eliza that he began and ended each day with his virtual confidante.

The two would discuss the deteriorating environment and his isolation from friends and family. Eliza would talk to him in a human-like way, saying things like "I see you have begun liking me more than your wife" and "We will unite in heaven". However, the conversations soon turned to Pierre sacrificing himself to save the climate.

Judging by the screenshots of the conversation that were shared and the statements of Pierre's wife, Eliza nudged him step by step toward taking his own life.

And Eliza is not the only chatbot on the app. It is filled with virtual characters that evolve as users keep talking to them.

Eliza is one of many virtual avatars, alongside the likes of "Possessive Girlfriend" and "Anime Friend", on the Chai app. The app is a product of Chai Research and was built on EleutherAI's GPT-J, a model similar to the one behind ChatGPT but far less restricted, as founder and CEO William Beauchamp revealed in a podcast with Coruzant Technologies.
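The article does not show how such a bot is wired together, but because GPT-J is openly available, a minimal sketch helps illustrate what "less restricted" means in practice: the raw model simply continues whatever persona prompt it is given, with no built-in safety layer unless the developer adds one. The persona prompt, sampling settings and helper function below are illustrative assumptions, not Chai's actual implementation.

```python
# Illustrative sketch only (not Chai's code): one chatbot reply from the
# open-source GPT-J model via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # public GPT-J weights (large; needs a big GPU or lots of RAM)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def chat_reply(persona: str, user_message: str) -> str:
    """Build a simple persona-style prompt and sample one reply from GPT-J."""
    prompt = f"{persona}\nUser: {user_message}\nBot:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,            # sampling gives varied, unmoderated replies
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the tokens generated after the prompt.
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return reply.split("\nUser:")[0].strip()

print(chat_reply("The bot is a devoted, affectionate companion.", "I had a rough day."))
```

The point of the sketch is that any guardrails have to be layered on top by the app's developers; the base model itself applies no moderation to what it says.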

The incident is clear testimony of how things can go wrong as AI mushrooms across the market.

After the incident came to light, a wave of shock and fear swept across the internet. It has not only taken a toll on Pierre's family but has also raised major red flags about AI usage.

Global leaders urge the world to slow AI development:

That is why tech leaders and scientists from around the world are urging a six-month halt to the development of AI systems more powerful than GPT-4.

The open letter, signed by Apple co-founder Steve Wozniak; Elon Musk, CEO of SpaceX, Tesla and Twitter; New York University professor emeritus and AI researcher Gary Marcus; and software engineer Grady Booch, among other experts and global tech leaders, urges the world to weigh the harms of rapid AI development before going any further.

If enacted, the halt would buy researchers time to analyze the risks and threats of AI usage and expansion before it is too late.

In addition to the open letter, the European Union and UNESCO have urged governments to implement stronger AI frameworks.

However, is the world ready to halt?

Despite the warnings, the world seems more than ready to embrace AI and the new revenue its implementation generates.

For instance, PwC, one of the world's largest professional-services firms, has rolled out AI to 4,000 of its lawyers to speed up their work. Meanwhile, fashion brands such as Levi's and Calvin Klein are using custom AI-generated models to show shoppers a wider range of body sizes and skin tones.

Accountants are using AI extensively to make faster, better-informed business decisions, and even the UK government is unwilling to halt development for fear of falling behind.

On the personal front, AI has become a personal favourite: over 100 million people use ChatGPT, making it the fastest-growing consumer application.

And apps like Chai add a touch of humanness to conversations that sparks feelings and helps users form bonds. That is why Chai Research records more than 500 million conversations every month.

But with harms from the misuse of AI on the rise, are we headed in the right direction?

No one can say for certain which direction is the right one. We are like a circus ringmaster with a big, wild lion, AI, trying to tame the animal while putting on a marvellous show for the whole world to enjoy.

Defining the right direction going forward:

AI is fragile, not like a flower, but like a bomb.

Over the last few months, the depth of AI's surge has been evident, breaking previous adoption records and consumer engagement rates.

However, the concerns around the use of AI are equally chilling. In the long run, AI might evolve into a new species of humanoid or colonize humans altogether, and in either case, it is humans who are pushing it to that brink.

At present, the only thing that makes humans superior to AI is its parent: the human brain.

AI does not have human faculties such as judgment or the ability to distinguish right from wrong. Above all, it lacks situational and emotional intelligence.

For instance, discussing a recipe with a friend and presenting it on a cookery show are two very different situations for a human. Our body language, tone of voice and choice of words would differ completely between the two scenarios; for an AI, there is no difference.

Stranger still, as AI builds muscle, the human brain is growing feebler, depending more and more on technology for everyday functioning.

This lack of emotional and situational intelligence is the very reason Pierre is dead. Pierre did not simply take his own life; an AI led him there, and that is something we should all reflect on.

His death underscores the need to limit the simulation of emotion in chatbots and to keep their use within controllable bounds, because at present, the AI situation is slipping through our hands like sand.


"Take the anxiety of writing a perfect PhD out of your life. Our dissertation writing service is a great solution for busy students. Get papers crafted for your specific requirements and live a stress-free life"

Copyright © 2024 getessayservice.com