Users upset over LinkedIn's AI policy update

Here's how to opt out.

Photo Credit: Unsplash/Greg Bulla

Users are upset over a policy update that lets LinkedIn use their data for AI training. Unfortunately, for data already used, it's too late.

According to the Straits Times, users in Singapore have cried foul over a recent policy update that lets LinkedIn use their content for AI training.

What are you talking about?

Here are the main complaints, according to the report.

  • No notification about the policy change.
  • Opt-out button was added without fanfare.
  • A feeling of not being consulted.

I previously wrote about how AI is the new gold rush, and the tech giants are sponging up whatever publicly accessible data they can.

Hint: It will only get worse, not better.

In the wake of Meta's admission

I have no evidence that the two are related, but I noticed that LinkedIn's update came a week after Meta was grilled by a Select Committee in Australia three weeks ago.

Meta confirmed:

  • All public posts since 2007 were scraped.
  • Users aged 18 and over are affected.
  • Includes videos and photos of your children.

And while Europeans can opt out under the GDPR, Meta currently has no plans to extend that option to users elsewhere.

In that sense, by offering an opt-out to everyone, LinkedIn is actually the gentler of the two social media giants.

Opting out

To be clear, your LinkedIn data has already been scraped. Opting out now "does not affect training that has already taken place".

You can opt out by going to:

-> Settings & privacy
-> Data privacy
-> Data for Generative AI Improvement

Note that the opt-out above doesn't cover LinkedIn's non-generative use of machine learning, such as personalisation and moderation.

As explained by The Verge, you need to fill out a separate form for that - link in comments.

We need to talk

As I wrote yesterday, the world is changing. I opted out the moment I learned about the update, but the reality is that almost everyone training AI models is busy scraping your data.

I thought Matt Johnson summed it up beautifully in his comment:

"Our current laws are not adequate to deal with the challenges created by generative AI and the gathering of personal data. I think this is something we need to work on. Transparency and consent are sorely needed."

I personally think the pre-GenAI status quo will never return. But wholesale copying can't be right either - we need to settle somewhere in the middle.

But before that can happen, we must first have that conversation.