Brazil’s National Data Protection Authority (ANPD) has issued a preventive measure halting Meta’s processing of personal data for the training of artificial intelligence (AI) systems.
The action comes in response to concerns over the company’s updated privacy policy, which permits the use of publicly available data and user-generated content from platforms like Facebook, Messenger and Instagram for AI system development.
Effective immediately, the ANPD directive suspends Meta’s policy, citing potential violations of Brazil’s General Data Protection Law (LGPD).
The authority’s decision, enforceable by a daily fine of R$ 50,000 (roughly US$9,000) for non-compliance, cites concerns about an inadequate legal basis for the processing, a lack of transparency around the policy changes and potential infringements of user rights, particularly those of children and adolescents.
In its preliminary assessment, described in a blog post published on Tuesday, the ANPD found that Meta’s reliance on legitimate interest as a legal basis did not sufficiently account for the sensitivity of the personal data involved or adequately inform users about the scope and implications of its use for AI training.
Moreover, the authority highlighted barriers that hindered users’ ability to access information and exercise their rights effectively.
According to the blog post (in Portuguese), the regulatory intervention aims to safeguard data subjects from potentially irreversible harm and to enforce compliance while a more detailed investigation is carried out.
The ANPD described the preventive measure as a pivotal step in ensuring corporate accountability and protecting individuals’ data privacy rights amid rapid technological advancement.
The development signals growing regulatory scrutiny over AI data processing practices, reflecting broader concerns about privacy and the ethical use of personal data in AI development globally.