Grammarly Accused of Using Experts’ Identities Without Consent

In a striking development, Grammarly’s “expert review” feature recently came under scrutiny when users noticed AI-generated advice presented as coming from industry experts. Several prominent individuals had not consented to having their names associated with the function, sparking a conversation about the ethics of AI use in writing tools.

Grammarly, the spelling and grammar checker popular with students and professionals alike, introduced the “expert review” feature back in August. The feature promised to provide users with writing advice “inspired by” subject matter experts. That claim backfired spectacularly when it emerged that some of these “experts” were completely unaware of their supposed contribution.

In a surprising twist, writers testing the feature discovered feedback that appeared to come from individuals within their own professional circles. Notably, one writer found comments that seemed to emulate the advice of his colleagues at The Verge, including editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren. Grammarly, it appears, had not sought permission from any of these individuals before using their names in the feature.

This oversight by Grammarly raises important questions about data privacy and the ethical use of AI. Notably, it highlights the need for tech companies to adopt transparent processes when developing features that reference or replicate living individuals.

The incident also points to a disconnect between tech developers and users over expectations of privacy and consent. Innovations like the “expert review” function may seem ground-breaking in theory, but if executed carelessly, they can cross boundaries and erode trust.

As AI continues to shape our lives and workplaces, these conversations are becoming more critical. Companies like Grammarly have a responsibility not just to build helpful tools for their users, but to foster a culture of respect for the individuals their products reference. The incident is a reminder that proper care must be taken in navigating the complex boundaries between AI use, privacy, and ethics.

For more details, see the original news piece at The Verge: https://www.theverge.com/ai-artificial-intelligence/890921/grammarly-ai-expert-reviews

