In January 2025, LinkedIn, the professional networking platform owned by Microsoft, faced a class-action lawsuit in California alleging that it used private messages from Premium subscribers to train generative AI models without proper user consent. This development raises serious concerns about data privacy and the ethical use of user-generated content in artificial intelligence (AI) training.
Allegations Against LinkedIn
The lawsuit claims LinkedIn violated the federal Stored Communications Act and breached its contract with users by using private messages for AI training without explicit permission. It further accuses the platform of violating California’s Unfair Competition Law. The plaintiffs allege that in August 2024, LinkedIn introduced a new privacy setting that automatically opted Premium members into sharing their data for AI training, without adequate transparency or notification. Following backlash, the company updated its privacy policy in September 2024 to reflect these practices.
The lawsuit seeks $1,000 per affected user for the alleged Stored Communications Act violations, plus additional damages for the contract and unfair-competition claims. These allegations highlight the growing tension between AI innovation and individuals’ rights over their data.
LinkedIn’s Response
LinkedIn has denied the claims, stating that the allegations are unfounded. At the same time, the platform has acknowledged user concerns by introducing an option for members to opt out of data sharing for AI training. To disable it, open your account settings, go to the “Data privacy” tab, select “Data for Generative AI Improvement,” and switch the toggle to “off.” Note that opting out is not retroactive: data already used for AI training is unaffected.
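If you prefer to verify or change this setting programmatically, the short Python sketch below walks through the same steps with Selenium. It is a minimal sketch under stated assumptions, not an official LinkedIn tool: the settings URL and the checkbox selector are guesses about LinkedIn’s current web UI and may need adjusting, so confirm both with your browser’s developer tools before relying on it.

# Minimal sketch, assuming Selenium and Chrome are installed (pip install selenium).
# The settings URL and the CSS selector below are assumptions about LinkedIn's
# web UI at the time of writing and may change without notice.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Assumed URL of the "Data for Generative AI Improvement" settings page.
SETTINGS_URL = "https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement"

driver = webdriver.Chrome()
try:
    # Pause for a manual login rather than scripting credentials.
    driver.get("https://www.linkedin.com/login")
    input("Log in to LinkedIn in the browser window, then press Enter here...")

    driver.get(SETTINGS_URL)

    # Hypothetical selector for the on/off toggle on the settings page.
    toggle = WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "input[type='checkbox']"))
    )

    if toggle.is_selected():
        # The setting is on (opted in): click the toggle to opt out.
        # A JavaScript click avoids failures when the checkbox is visually
        # hidden behind a styled toggle control.
        driver.execute_script("arguments[0].click();", toggle)
        print("Toggled 'Data for Generative AI Improvement' off.")
    else:
        print("Already opted out; nothing to do.")
finally:
    driver.quit()

The sketch deliberately waits for a manual login instead of hardcoding credentials, which would be brittle and poor security practice; it only clicks the toggle when the setting is currently enabled.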
Ethical Implications
This controversy underscores tech companies’ ethical challenges when integrating AI into their services. While AI requires vast amounts of data to improve and refine its capabilities, companies must ensure that their data collection and usage practices align with legal requirements and user expectations.
Transparency and consent are critical in fostering trust. If companies fail to provide clear and accessible options for users to control how their data is utilized, they risk eroding their customer base and facing legal consequences. LinkedIn’s situation is a cautionary tale for other organizations looking to leverage user data for AI advancements.
Protecting Your Privacy
For LinkedIn users concerned about their privacy, opting out of data sharing for AI is a step in the right direction. Beyond LinkedIn, users should regularly review their platforms’ privacy policies and settings. As technology evolves, staying informed and proactive about data privacy is essential.
The Bigger Picture
The LinkedIn lawsuit highlights a broader debate about how technology companies balance innovation with user privacy. As regulatory bodies and consumers demand greater accountability, the tech industry must adapt by prioritizing ethical practices and transparent policies. The outcome of this case could establish an important precedent for the treatment of user data in the age of AI.
In a time when data is often described as the “new oil,” protecting user rights while promoting innovation will be a defining challenge of the 21st century. The LinkedIn case is a reminder that the future of AI should not come at the expense of privacy.
Sources:
The Times. (2025). “LinkedIn ‘used private messages illegally’ to train AI.”
Reuters. (2025). “Microsoft’s LinkedIn sued for disclosing customer information to train AI models.”
BBC. (2025). “LinkedIn accused of using private messages to train AI.”
Read more:
https://www.linkedin.com/help/linkedin/answer/a6278444
https://www.theverge.com/2024/9/18/24248471/linkedin-ai-training-user-accounts-data-opt-in