On Monday, May 22, 2023, a verified Twitter account called "Bloomberg Feed" shared a tweet claiming there had been an explosion at the Pentagon, accompanied by an image. If you're wondering what this has to do with artificial intelligence (AI), the image was AI-generated. The tweet quickly went viral and sparked a brief stock market dip. Things could have been a lot worse, and it was a stark reminder of the dangers of artificial intelligence.

The dangers of artificial intelligence

It’s not just fake news that we have to worry about. There are many immediate or potential risks associated with AI, from privacy and security concerns to bias and copyright issues. We’ll dive into some of these artificial intelligence risks, see what’s being done to mitigate them now and in the future, and ask whether the risks of AI outweigh the benefits.

Fake news

Back when deepfakes first landed, there were concerns that they could be used for nefarious purposes. The same can be said for the new wave of AI image generators such as DALL-E 2, Midjourney, and DreamStudio. In March 2023, fake AI-generated images of Pope Francis wearing a white Balenciaga puffer jacket and enjoying various adventures, including skateboarding and playing poker, went viral. Unless you studied the images closely, it was hard to distinguish them from the real thing.

While the example with the Pope was fairly amusing, the Pentagon image (and its accompanying tweet) was anything but. Fake AI-generated images have the power to damage reputations, end marriages or careers, create political unrest, and, if wielded by the wrong people, even start wars. In short, these images have the potential to be hugely dangerous if misused.

With AI image generators now freely available for anyone to use, and Photoshop adding an AI image generator to its popular software, the opportunity to manipulate images and create fake news is greater than ever.

Privacy, security and hacking

Privacy and security are major concerns when it comes to the risks of AI, with a number of countries already banning OpenAI's ChatGPT. Italy has banned the model over privacy concerns, believing it doesn't comply with Europe's General Data Protection Regulation (GDPR), while the governments of China, North Korea, and Russia have banned it over fears it could spread misinformation.

So why should we care about privacy when it comes to AI? AI applications and systems collect large amounts of data to learn and make predictions. But how is this data stored and processed? There is a real risk of data breaches, hacking and data falling into the wrong hands.

And it's not just our personal data that's at risk. The hacking of AI itself is a genuine risk; it hasn't happened yet, but if those with malicious intent could hack into AI systems, the consequences could be severe. For example, hackers could take control of driverless vehicles, hack AI security systems to gain entry to highly secure locations, or even disable AI-secured weapons systems.

Experts at the US Department of Defense's Defense Advanced Research Projects Agency (DARPA) are working on the Guaranteeing AI Robustness Against Deception (GARD) project to address these risks. The project's aim is to ensure that resistance to hacking and tampering is built into algorithms and AI from the start.

Copyright infringement

Another danger of AI is copyright infringement. This may not sound as serious as some of the other risks we've mentioned, but the emergence of AI models like GPT-4 puts everyone at increased risk of infringement.

Every time you ask ChatGPT to create something for you, whether that's a blog post or a new name for your business, you're feeding it information that it then uses to answer future queries. The information it feeds back to you could be infringing somebody else's copyright, which is why it's so important to use a plagiarism checker and edit any AI-generated content before publishing it.

Social and data bias

AI isn't human, so it can't be biased, right? Wrong. People and data are used to train AI models and chatbots, which means biased data or biased personalities will result in a biased AI. There are two types of bias in AI: social bias and data bias.

With so many biases present in everyday society, what happens when those biases become part of an AI? The programmers responsible for training a model may have biased expectations, which then make their way into AI systems.

Alternatively, the data used to train and develop an AI could be incorrect, biased, or collected in bad faith. This leads to data bias, which can be just as dangerous as social bias. For example, if a facial recognition system is trained using mainly the faces of white people, it may struggle to recognize those from minority groups, perpetuating their oppression.

Robots are taking our jobs

The development of chatbots like ChatGPT and Google Bard has opened up a new worry surrounding AI: the risk that robots will take our jobs. We're already seeing writers in the tech industry being replaced by AI, software developers worried they'll lose their jobs to bots, and companies using ChatGPT to create blog and social media content rather than hiring human writers.

According to the World Economic Forum's 2020 Future of Jobs report, AI is expected to replace 85 million jobs worldwide by 2025. Even where AI hasn't replaced writers outright, many are already using it as a tool. Those in jobs at risk of being replaced by AI may need to adapt to survive; for example, writers may become AI prompt engineers, enabling them to work with tools like ChatGPT for content creation rather than being replaced by these models.

Possible future AI threats

These are all immediate or looming risks, but what about some of the less likely but still possible dangers of AI we could see in the future? These include things like AI being programmed to harm humans, for example, autonomous weapons trained to kill during war.

Then there's the risk that an AI could focus single-mindedly on its programmed goal, developing destructive behaviors as it attempts to accomplish that goal at all costs, even when humans try to stop it from happening.

Skynet taught us what happens when an AI becomes sentient. And while Google engineer Blake Lemoine may have tried to convince the world back in June 2022 that LaMDA, Google's artificially intelligent chatbot generator, was sentient, thankfully there's no evidence to date to suggest that's true.

The challenges of AI regulation

In May 2023, OpenAI CEO Sam Altman attended the first congressional hearing on artificial intelligence, warning that "if this technology goes wrong, it could go very wrong." OpenAI made clear that it supports AI regulation and brought many of its own ideas to the hearing. The problem is that AI is evolving at such a pace that it's difficult to know where to start with regulation.

Congress wants to avoid making the same mistakes it made early in the social media era, and a team of experts is working with Senate Majority Leader Chuck Schumer on regulations that would require companies to disclose which data sources they used to train their models and who trained them. It may be some time before exactly how AI will be regulated becomes clear, but there is no doubt there will be a response from AI companies.

The threat of artificial general intelligence

There's also the risk of creating an artificial general intelligence (AGI) that could perform any task a human (or animal) can. A staple of sci-fi movies, such a creation is probably still decades away, but if and when we do develop AGI, it could pose a threat to humanity.

Many public figures already endorse the belief that AI poses an existential threat to humanity, including Stephen Hawking and Bill Gates, while former Google CEO Eric Schmidt has warned that artificial intelligence could pose existential risks and that governments need to ensure the technology isn't misused by bad people.

So is artificial intelligence dangerous, and do its risks outweigh its benefits? The jury's still out on that one, but we're already seeing evidence of some of the risks around us right now. Other dangers are unlikely to come to fruition anytime soon, if at all. One thing is clear, though: the dangers of AI shouldn't be underestimated. It's of the utmost importance that we ensure AI is properly regulated from the outset, to minimize and hopefully mitigate any future risks.