Slack AI Training Messages — Facing Privacy Backlash

Slack Scraping Customer Data

Slack, the workplace collaboration tool, is now under fire after revelations emerged that the company has been scraping customer data to develop new AI and machine learning (ML) models. 

The company has admitted to analyzing customer data and usage information by default, without requiring users to opt in. 

The disclosure has sparked significant concerns among corporate users and IT admins, particularly regarding the lack of transparency and the burdensome process for opting out of the data collection. 

Slack says it has technical controls in place to prevent access to the underlying message content and promises that data will not leak across workspaces. 

Despite these assurances, many Slack admins are scrambling to opt out, worried about the implications for their organization’s privacy. 

A particularly disturbing aspect of Slack’s communication is the process required for opting out. Customers must send an email request to Slack’s Customer Experience team with specific details to exclude their data from the global models. 

This opt-out procedure has been criticized for being cumbersome and for placing the responsibility on the customer rather than offering an easier opt-in approach. 

On social media, users expressed their frustration and concern, particularly regarding the use of direct messages and other sensitive content. 

Slack attempted to clarify its stance in response to the backlash, explaining that while it uses platform-level ML model features like channel and emoji recommendations, customers can exclude their data from these non-generative models. 

Generative AI 

However, the company emphasizes that its generative AI tool, Slack AI, which uses Large Language Models (LLMs), is a separate add-on and does not use customer data for training. 

Security researchers were not surprised by Slack’s data practices, noting that many large tech companies engage in similar activities. 

However, they criticized the inclusion of customer data by default, arguing that customers should not have to bear the burden of opting out. 

Slack’s documentation states that data will not leak across workspaces and assures that models used broadly across customers do not learn, memorize, or reproduce customer data. Yet, the inconsistency between Slack’s privacy principles and its actual practices has led to confusion and distrust among users. 

Corey Quinn, an executive at the DuckBill Group, highlighted the policy on social media, expressing shock and dismay at Slack’s data practices. The company’s response to Quinn’s concerns reiterated their existing policies but did little to assuage fears. 

Slack’s conflicting statements about its AI and ML practices have only added to the controversy. While the company insists that its premium generative AI tools do not use customer data for training, the broader machine learning model training policy seems to contradict this, causing confusion among users. 

The Lack of Transparency

Slack’s approach has drawn criticism for its lack of transparency and for not providing an easier way for individual users to opt out. Legal experts and privacy advocates have called the situation a “privacy mess,” urging Slack to adopt an opt-in policy instead. 

In response to the outcry, Slack has promised to update its privacy principles to better explain the relationship between customer data and generative AI in Slack. The update will clarify that Slack does not develop LLMs or other generative models using customer data.


Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing and online entrepreneurship and all things about social media. She also has a slight addiction to pizza and coffee.
