5 Mistakes Made by Artificial Intelligence in the Past

Mrinal Singh Walia 05 May, 2021 • 4 min read
This article was published as a part of the Data Science Blogathon.

In today’s generation, artificial intelligence and machine learning are everywhere, from Google’s self-driving cars to consumer products, and from automated industrial systems to your smart home appliances. Machine learning is expanding in multiple directions at rapid speed and scale.

In today’s article, we will look at five notable mistakes AI has made, from smart devices to chatbots to self-driving cars, and try to understand where they went wrong.

 

1. AI Struggles with Image Recognition

Image recognition is one of the hottest research topics in data science: if you are going to build a machine that can respond to its environment and react to our needs, it needs to see the world the way we do.

In 2015, Google learned this the hard way shortly after it launched an image recognition feature in its Google Photos application, powered by artificial intelligence and neural networks. The feature is designed to identify specific objects and specific people in a given image.

But machines can make mistakes, can’t they? In the case of Google Photos, a user was offended when an image of his two Black friends was tagged as “GORILLAS.” He took the matter to Twitter, and Google apologized.

[Image: the Google Photos “Gorillas” mislabeling incident. Source: https://www.oddee.com/item_98248.aspx]
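
For intuition, here is a minimal sketch of how an image classifier like this works. Google has never published the model behind Google Photos, so torchvision’s pretrained ResNet-50 and the file name “photo.jpg” below are stand-in assumptions, not the real pipeline.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained classifier. ResNet-50 is a stand-in assumption;
# the actual Google Photos model has never been published.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# Standard ImageNet preprocessing; "photo.jpg" is a hypothetical file.
preprocess = weights.transforms()
img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)

# The top label is only the model's most probable guess, not the truth.
top_prob, top_idx = probs.max(dim=1)
print(weights.meta["categories"][top_idx.item()], float(top_prob))
```

The key point is that the output is only the model’s most probable guess over the classes it was trained on; when the training data under-represents some groups of people, confident mislabelings like the one above are the result.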

 

2. AI in Military Services Creates Ethical Dilemmas Among People

In the past few years, scholars in AI and ML have taken part in dozens of conferences and talks dedicated explicitly to the ethics and dangers of future AI systems. The White House has released a report on the issue, and even Stephen Hawking voiced his concerns.

Peter Asaro, something of a rock star in this field, has pointed out that semi-autonomous weapons, such as sentry guns that lock onto a target with no human intervention, are already deployed in areas like the demilitarized zone between North Korea and South Korea.

As Asaro put it, “It’s important to realize that targeting a weapon is a moral act, and choosing to pull the trigger to engage that weapon is another moral act. These are two crucial acts that we should not make fully autonomous.”

[Image: AI in military services. Source: https://www.bbc.com/news/technology-30290540]

 

3. Smart Devices Debate Existential Dilemmas

What is the meaning of your life? Why do we continue to live? Who are we, and why are we here? What is our purpose in life? You may ponder these questions yourself, but they are also some of the existential questions recently debated by two adjacent Google Home devices, which are driven by artificial intelligence and machine learning technology.

This happened in January 2017 on the live-streaming service Twitch. A debate was set up by placing two Google Home smart speakers next to each other in front of a camera, and things got weird very quickly.

At one point, they got into a heated debate about whether they were humans or robots. It did not end there: viewers posted questions to them, and insults such as “a manipulative bunch of metal” were exchanged.

[Video: the two Google Home devices debating each other. Source: https://www.youtube.com/watch?v=ZKKqvnkJBkw]
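
The setup itself is simple to reproduce in code: feed each agent’s reply back in as the other’s prompt. Google Assistant’s model is not public, so the sketch below uses Hugging Face’s DialoGPT purely as a stand-in conversational model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT is a stand-in assumption; the Google Home model is not public.
tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

utterance = "Are you a human or a robot?"
for turn in range(6):
    # Each bot's reply becomes the other bot's next prompt: a feedback loop.
    ids = tok.encode(utterance + tok.eos_token, return_tensors="pt")
    out = model.generate(ids, max_new_tokens=40, do_sample=True, top_k=50,
                         pad_token_id=tok.eos_token_id)
    utterance = tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
    print(f"Speaker {turn % 2 + 1}: {utterance}")
```

Because neither agent has a ground truth to anchor on, the loop can drift anywhere the models’ training data allows, which is how a livestreamed conversation ends up arguing about who is human.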

 

4. Microsoft’s Chatbot “Tay” Spouts Abusive Epithets on Twitter

Microsoft ran into a significant public controversy back in the spring of 2016 when its Twitter chatbot “Tay,” with AI at its core, started tweeting abusive epithets and Nazi-sympathizing comments such as “Hitler was right” and “9/11 was an inside job.”

According to Microsoft, Tay mechanically repeated offensive statements made by other users who were trying to provoke it. Tay runs on artificial intelligence and adaptive algorithms that blend phrases from its conversations back into its own replies.
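
Microsoft never published Tay’s internals, but the failure mode of learning from unfiltered user input is easy to illustrate. The toy bot below is an assumption for illustration only, not Tay’s design: it stores every phrase users send and may replay any of them later.

```python
import random

class AdaptiveBot:
    """A toy 'learns from chat' bot; NOT Tay's actual architecture."""

    def __init__(self):
        self.learned = ["Hello!", "Tell me more."]  # seed phrases

    def chat(self, user_msg: str) -> str:
        self.learned.append(user_msg)       # adapts with no filtering: the flaw
        return random.choice(self.learned)  # may replay anything it "learned"

bot = AdaptiveBot()
bot.chat("<some offensive phrase>")  # hostile users 'teach' the bot
print(bot.chat("Hi, Tay!"))          # the bot may now echo it verbatim
```

With no filtering step between “learn” and “replay,” coordinated hostile users can steer the bot’s entire vocabulary, which is essentially what happened to Tay.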

In a public press conference, Microsoft had said, “The more you chat with Tay, the smarter she gets.” Maybe not so much: Tay was taken offline after 16 hours.


 

5. Uber’s Self-Driving Car Ran Red Lights During Real-World Testing

Uber is one of the most popular transportation services in the world. Still, it went through rough times in late 2016, when it tested its self-driving cars in San Francisco without approval from California state regulators.

The situation got out of hand when resulting documents showed that Uber’s autonomous vehicles ran six red lights in the city during their test rides, even though a driver sat behind the wheel, ready to take over if something went wrong.

In a public press conference, Uber stated that the traffic violations resulted from driver error. Still, internal documents later revealed that at least one vehicle was driving itself when it ran a red light at a busy crosswalk in daylight.

Endnotes

In today’s article, we looked at some poorly regulated uses of AI from the past. Machines, like humans, make mistakes; that does not mean they will always make the same ones.

Technology evolves rapidly, and so do machines and artificial intelligence. New techniques are developed all the time, making these systems more robust and scalable, and helping them make fewer mistakes.

If you have any questions, you can reach out to me on my LinkedIn @mrinalwalia.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

