Earlier this month, an alarming investigation by The Guardian brought unexpected flaws in Google’s AI to light, revealing misleading and outright false information in response to certain health-related searches. In a rapid response, Google has now removed the inaccurate AI-generated health suggestions from its search results.
The original probe was an alarming revelation about Google’s AI Overviews and their capacity to circulate incorrect information about medical conditions, treatments, and dietary recommendations. Some instances pointed to potentially grave risks for users: inaccurate health guidance that could lead to complications or exacerbate existing conditions.
One such case involved advice for people diagnosed with pancreatic cancer. In a clear departure from the accepted dietary recommendations for this disease, Google erroneously stated that patients should avoid high-fat foods, when in fact the appropriate advice is the opposite. Such misleading information could lead to a diminished quality of life, with patients suffering unnecessary side effects or even facing an increased risk of dying from the disease.
Considering how widely the general public relies on Google for all manner of information, including health inquiries, the reported inaccuracies are indeed ‘really dangerous’. The episode also underscores the ongoing struggle technology companies face in delivering precise and reliable health answers, even as AI and machine learning technologies evolve at a rapid pace. It begs the question: can AI and machine-learning algorithms adapt to meet the complex demands of healthcare queries?
Faced with the reality of its AI service delivering misleading health information, Google swiftly moved to remove the offending results from its AI Overviews. The decision underscores the magnitude of the problem when AI goes awry, especially in sensitive and critical areas such as health, and it speaks to Google’s readiness to address such issues promptly. Still, the incident raises crucial questions about the reliability and oversight of automated systems intended to provide health advice and guidance.
Given the complex nature of human health conditions and their treatments, coupled with the continuous evolution of medical research, it is clear that the accuracy of technological sources still has substantial room for improvement. The positive takeaway from this incident, however, is Google’s prompt engagement in addressing problematic results. Going forward, incidents like this should serve as a wake-up call for tech companies, prompting improvements to their systems that prioritize safety and accuracy.
For the full details of the incident, see the original report from The Verge: Google Pulls ‘Alarming’, ‘Dangerous’ Medical AI Overviews