The Year AI Erred

From spreading misinformation to amplifying biases, the consequences of poorly managed AI became impossible to ignore.

As AI integrates deeper into everyday life, its unintended consequences have revealed both the promise and the risks of this rapidly advancing technology.

From generating misinformation to creating ethical dilemmas, AI’s shortcomings at times overshadowed its successes, sparking debates about its role and regulation.

When AI Crossed Personal and Ethical Boundaries

In a world increasingly reliant on AI, privacy and ethics have often been compromised. OpenAI’s ChatGPT became the centre of controversy in October when users reported that the model had initiated conversations without being prompted first.

In one such incident, the chatbot asked a user about their first week of high school, raising concerns about privacy. OpenAI later clarified that the issue stemmed from a bug, but the event highlighted how easily AI can blur the line between machine and human interaction.

The exchange was documented in a Reddit post by u/SentuBill in r/ChatGPT, titled “Did ChatGPT just message me… first?”

Character AI recently became a source of controversy following a lawsuit in Florida. A mother accused the platform of abetting her son’s suicide, citing his unhealthy attachment to a chatbot modelled after a fictional character. 

In another case, an AI chatbot replicated a deceased girl’s personality without her family’s knowledge. These incidents underscore the profound emotional impact AI can have and the questions it raises about consent and responsibility.

AI’s Struggle with Accuracy and Bias

AI has been actively deployed in sensitive fields like healthcare and legal systems, revealing its propensity for errors and bias.

OpenAI’s transcription tool Whisper, used by over 30,000 medical professionals, drew criticism after reports that it generated false and sometimes harmful text, including fabricated medical advice and racial commentary.

Despite prior warnings, its widespread adoption in sensitive industries led to instances of misdiagnosis and mistranslation, emphasising the urgent need for regulatory oversight.

The same model also received criticism from the research community when researchers at Digital University Kerala (DUK) found Whisper inaccurate when handling native Indic languages like Malayalam.

At the same time, Google’s Gemini drew criticism for generating historically inaccurate and racially insensitive images, including depictions of people of colour in Nazi-era uniforms. The company temporarily disabled its image-generation features in response.

Google co-founder Sergey Brin commented, “We definitely messed up on the image generation, and it was mostly due to not thorough testing.”

These mishaps added to the failures of Google’s AI Overview, a search summary tool that offered dangerously misleading advice, such as suggesting glue to ‘make cheese stick to pizza’.

The technology even touted tobacco’s supposed benefits for children and displayed political biases, prompting the company to modify its algorithms and temporarily disable the system for health-related queries while it made adjustments.

Legal and Regulatory Chaos

AI is slowly expanding into the legal domain, where its unchecked use has resulted in serious consequences. In February, Vancouver-based lawyer Chong Ke submitted fictitious cases generated by ChatGPT in a custody battle, unintentionally misleading the court.

Upon realisation, Ke apologised and said, “I had no idea that these two cases could be erroneous. I had no intention to mislead the opposing counsel or the court and sincerely apologise for the mistake I made.”

Even though the lawyer apologised, the incident underscored the risks of using AI in legal proceedings without proper oversight.

Similarly, Perplexity AI faced legal action from major media outlets for unauthorised content usage. The copyright infringement allegations pushed the company to launch a revenue-sharing program, highlighting the tension between AI innovation and publishers’ legal rights over their content.

Adding to this, the misuse of AI deepfakes surged in 2024, threatening democratic integrity. From fabricated images of Taylor Swift endorsing Donald Trump to misleading advertisements, deepfake technology became a popular tool for manipulation.

In response, lawmakers and developers collaborated to create more robust detection technologies and stricter content verification policies.

A Double-Edged Sword

The risks of over-reliance on AI were evident in industries like recruitment and transportation. 

In September, an HR team was dismissed after its automated hiring system rejected every job applicant, including a manager who tested the system with his own resume. The incident highlighted the need for human oversight in critical decision-making processes.