
LinkedIn, the Microsoft-owned professional network, has come under fire for training AI models on user data without first informing users. Following in the footsteps of other tech giants such as Meta and X (whose user data feeds its Grok model), LinkedIn is opting users into training its own AI, as well as models belonging to unnamed “affiliates.” The decision has raised concerns about privacy and data protection.

Microsoft, which holds a major stake in ChatGPT developer OpenAI, will also train its AI on information from LinkedIn. After facing criticism, LinkedIn clarified that user data will not be used to train OpenAI’s base models, but will be shared with Microsoft for Microsoft’s own OpenAI-based software. The company stated that the models powering its generative AI features may be trained by LinkedIn or by another provider, such as Microsoft’s Azure OpenAI Service.

LinkedIn spokesperson Greg Snapper explained that when the platform uses models made available through Microsoft’s Azure OpenAI Service, it does not send data back to OpenAI for OpenAI to train its own models. The company also said it seeks to minimize the personal data in the sets used to train its models, including by using privacy-enhancing technologies to redact or remove personal data from training datasets. LinkedIn added that it is not training “content-generating AI models” on data from users in the EU, EEA, or Switzerland.

Users who do not want their data used for AI training can stop it by opening the Data Privacy section of LinkedIn’s settings and switching off the option that allows their data to be used for training content-creation AI models. Privacy activists, however, are not satisfied with this opt-out model, calling it inadequate to protect users’ rights. Mariano delli Santi, a legal and policy officer at the U.K.-based privacy nonprofit Open Rights Group, argued that opt-in consent is not only legally mandated but a matter of common sense.

Delli Santi urged the U.K. privacy watchdog, the Information Commissioner’s Office, to take urgent action against LinkedIn and other companies that opt users into AI training without their explicit consent. The concerns raised by privacy advocates underscore the importance of transparency and informed consent whenever user data is used for AI training. As tech companies continue to leverage user data for AI models, it falls to regulators to enforce strict guidelines that protect user privacy and uphold data protection standards.
