Exclude Your Facebook & Instagram Posts from Meta’s AI Training

PLEASE NOTE: I have written this as an informed opinion piece, and I haven’t slowed down to add references and sources. However, you can find a great deal of information and context on the EU’s Artificial Intelligence Act website, which shows from a legislative body’s perspective why the concerns I have listed below are legitimate risks to our digital safety and human identity.


From June 26, 2024, Meta, the company behind Facebook, Instagram, and Threads, will be using your photographs, posts, and videos to train their future AI models. Photos of your children, your family, your pets - your stories, your posts, your conversations.

Here’s the notification they sent:

“We’re getting ready to expand our AI at Meta experiences to your region. AI at Meta is our collection of generative AI features and experiences, like Meta AI and AI Creative Tools, along with the models that power them.

To help bring these experiences to you, we'll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means you have the right to object to how your information is used for these purposes. If your objection is honored, it will be applied going forward.

We’re including updates in our Privacy Policy to reflect these changes. The updates go into effect on June 26, 2024.”

This is setting an extremely dangerous precedent.

Allowing companies like Meta to freely use our biometric data without strong ethical guidelines, government regulation, or societal consent sets a dangerous precedent: it normalises intrusive data practices and erodes our collective expectations of privacy and autonomy.

When our faces and voices are used to train AI models, we are in extremely dystopian territory, where we no longer have autonomy over our personhood and how our biometric data is processed, used, and stored.

This is also a major child safety issue, given that social media content already makes up a huge proportion of the material found by police on the computers and hard drives of offenders. I don’t want to say too much about that in this blog post, but please research it if you are skeptical.

If you live in Europe, you have the Right to Object.

It is extremely important, for your own human rights and for those of any minors whose faces you have posted on your Facebook feed, that you use your right to object if you have it.

You have to do this for BOTH your Instagram AND your Facebook accounts separately - and if that doesn’t tell you everything you need to know, nothing will.

  1. Go to Settings -> Privacy Policy

  2. You will see a text box under the Privacy Policy title about the updates to their data privacy.

  3. Click on “Right to Object” and you will be taken to a new page.

  4. Fill in the details. There’s a template message below you can use to outline why you object.

Here is the page >>

You must fill it in for the email addresses associated with your Instagram account and your Facebook account separately if they don’t use the same email. 

Right to Object Template

In the box for “Please tell us how this processing impacts you. (Required)” copy and paste the following:


Formal Objection to the Use of my Biometric Data for AI Training

I am writing to formally object to the processing of my biometric data, including but not limited to any photographs and videos of myself and of my family members, for the purpose of training Meta’s visual AI and other related technologies. This objection is made pursuant to Article 21 of the General Data Protection Regulation (GDPR).

Grounds for Objection:

Concerns of Legality: I strongly question the legality of the use of my biometric data for such purposes. The extraction and use of private data without explicit, informed consent from users appears to be in potential violation of GDPR principles including data minimisation, purpose limitation, and lawfulness of processing.

Privacy and Autonomy: My biometric data is highly personal and sensitive information. Its use in AI training undermines my right to privacy and individual autonomy. I object to my data and the data of my dependents being exploited in this manner.

Copyright and Intellectual Property: I retain copyright and control over my content, and I object to any unauthorised usage or extraction of this data. This includes, but is not limited to, images and videos that I have uploaded as well as my likeness and any biometric data that is uniquely mine.

Lack of Benefit: I do not see how the use of my biometric data for AI training will provide any tangible benefits to me as a user. Instead, it poses significant risks concerning privacy and misuse.

I hereby request that Meta:

- Immediately cease the processing of my biometric data for AI training or any related purposes.

- Permanently delete any biometric data of mine that has already been processed or incorporated into AI datasets.

- Forward this objection to Meta's Data Protection Officer (DPO).

- Provide a comprehensive reply to this objection within the legal timeframe stipulated under GDPR.

In the event that my concerns are not addressed satisfactorily, I reserve the right to escalate this matter to the relevant data protection authorities. I would like to draw your attention to ongoing legal cases concerning similar issues, underscoring the seriousness of my request.

Thank you for your prompt attention to this matter. I look forward to your reply within the stipulated legal limits.


noyb has a very helpful article about it here, which is extremely eye-opening. Here’s a quote:

All non-public data for some undefined future "AI technology". Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future "artificial intelligence technology". [emphasis mine]

This includes the many "dormant" Facebook accounts users hardly interact with anymore – but which still contain huge amounts of personal data. In addition, Meta says it can collect additional information from any "third party" or scrape data from online sources.

The only exception seems to be chats between individuals – but even chats with a company are fair game. Users aren't given any information about the purposes of the "AI technology" – which is against the requirements of the GDPR.

Meta's privacy policy would theoretically allow for any purpose. This change is particularly worrying because it involves the personal data of about 4 billion Meta users, which will be used for experimental technology essentially without limit. At least users in the EU/EEA should (in theory) be protected from such abuse by the GDPR.

About noyb: noyb uses best practices from consumer rights groups, privacy activists, hackers, and legal tech initiatives and merges them into a stable European enforcement platform.


Human rights? Aren’t you being dramatic?

No, I’m not. In the next few paragraphs, I’m going to highlight some of the issues we face when we give our biometric data (and that of our children) to tech companies that have very little governance, and that don’t listen to or care about legislation even when it does exist.

Please remember that when you see their announcements, you are seeing a marketing message. They are positioning it in the best light possible so that you are reassured enough to quiet your objections - but they are not giving you the whole story. They’re telling you what you want to hear so you’ll keep using their platform and feel like they’re doing you a great big favour.

Anyway, on with just a few of the MYRIAD ISSUES:

Intellectual Property & the Commodification of Identity:

Biometric data, including facial features, is deeply personal and unique to each individual. Sharing this data with tech companies erodes intellectual privacy (your ability to control personal information) and is a huge security risk when you consider the ways in which your facial features may be used to access your most private information, including banking, credit, and medical data.

Reducing your face and voice to data points for a tech company’s algorithm commodifies your identity and diminishes your human dignity, treating you as a means to improving an AI algorithm rather than as an end in yourself.

You are not being reimbursed for the use of your data. You are not being given your share of this company’s current $1.25 TRILLION valuation in exchange for training their models. They are taking something from you that they have no right to, because they think you’re numb, exhausted, and busy enough not to notice.

Informed Consent

European users of Meta’s products can opt out because the European Union has forcibly ensured that Meta cannot use our data if we don’t want them to.

This is not true for many, many countries/geographies. Informed consent about the genuine risks associated with the use of your data is not provided - we have no idea how our biometric information is going to be used, processed, or shared - who will have access to it, what it will be used for, and what we should do if a third-party Meta provider recreates our face or voice for their own ends.

This is a *child safety issue*. I cannot overstate this.

Children cannot consent to their faces being used to train AI. They cannot fully understand or consent to the use of their biometric data.

We have no idea what the world is going to be like 20 years from now, and Facebook using their photographs to train AI now violates their human rights and exposes them to unknown future risks - unknown to you, unknown to Facebook, and certainly unknown to the child who has to live with the consequences.

Data Security

Data breaches happen all the time at tech companies. Remember when the DNA data of around 1 million 23andMe customers was sold on the dark net? Exactly. There are huge risks to data security at the best of times, and the consequences and risks are multiplied many times over when the data being harvested without our consent is our faces, our voices, and our children.

Identity theft, deepfakes, scams, and more malicious misuse than the average person can imagine become possible - made easy because the elements of your embodied humanity that make you who you are have been exploited by a trillion-dollar tech company.

On top of that, your data is likely to be grossly misused beyond the purposes you think you agreed to - as the history of how AI models have been trained already shows. It could be sold to third parties, law enforcement, or governments without any checks or balances, because right now there is no legal framework to stop them: the law moves more slowly than emerging technology.

The European Union is the furthest behind in the race to AI, while leading the world in data privacy legislation - coincidence? No.

Surveillance Capitalism, Control, and Manipulation

For decades, data has been collected and then used to influence your thoughts, behaviours, choices, and perceptions. Whether it’s advertising, suggested posts, or physical habits like how you swipe and click, your online environment is designed to give you the perception of free will and autonomy while manipulating you into taking the actions its designers want you to take.

There are entire books about this, and entire departments dedicated to predictive analytics and behavioural design.

You’re not part of some big charitable enterprise to improve AI technologies for the good of humanity. You’re being exploited and controlled by companies so big and so profitable that there is no free market left in capitalism, and convenience is the Trojan horse for observing you and then wringing you for whatever profit they can - whether that means selling you stuff, or selling you (… your data) to their third-party contacts.
