Artificial intelligence startup Aware is analyzing messages on popular workplace apps to monitor employee sentiment and behavior.

The Rise of Workplace Surveillance

Cue the George Orwell reference. Depending on where you work, there's a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.

Monitoring Chatter at Major Companies

Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file.

Understanding Employee Sentiment in Real Time

Jeff Schumann, co-founder and CEO of Aware, says the AI helps companies "understand the risk within their communications," getting a read on employee sentiment in real time rather than depending on an annual or twice-per-year survey.
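
Aware doesn't publish how its models work. Purely as an illustration, the idea of a continuous sentiment read, as opposed to an annual survey snapshot, can be sketched in Python with a toy word-list scorer averaged over a sliding window of recent messages (the lexicon, window size and example messages below are all made up):

```python
from collections import deque
from statistics import mean

# Hypothetical illustration only -- not Aware's actual model. A tiny
# lexicon-based scorer kept over a sliding window approximates the idea of
# reading sentiment continuously instead of once or twice a year.
POSITIVE = {"great", "thanks", "love", "excited", "helpful"}
NEGATIVE = {"frustrated", "angry", "unfair", "burnout", "quit"}

def score_message(text: str) -> float:
    """Return a crude sentiment score in [-1, 1] for a single message."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

class SentimentTracker:
    """Rolling average over the most recent messages -- the 'real-time read'."""
    def __init__(self, window: int = 1000):
        self.scores = deque(maxlen=window)

    def add(self, text: str) -> None:
        self.scores.append(score_message(text))

    def current_sentiment(self) -> float:
        return mean(self.scores) if self.scores else 0.0

tracker = SentimentTracker()
tracker.add("Really excited about this launch, thanks team!")
tracker.add("I'm frustrated, this process feels unfair.")
print(round(tracker.current_sentiment(), 2))  # 0.0 -- one positive and one negative message cancel out
```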

Identifying Toxic Behavior

Aware's AI models can identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors. However, its analytics tool cannot flag individual employee names.
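
As a purely hypothetical sketch, with crude keyword rules standing in for whatever classifiers Aware actually uses, the output described here would take the shape of category counts aggregated per team, with no employee names attached:

```python
import re
from collections import Counter

# Hypothetical sketch -- simple keyword rules stand in for real classifiers.
# The point is the shape of the output: category counts per team, with no
# individual names attached.
CATEGORY_PATTERNS = {
    "bullying": re.compile(r"\b(stupid|worthless|shut up)\b", re.IGNORECASE),
    "harassment": re.compile(r"\b(creep|unwanted)\b", re.IGNORECASE),
}

def flag_categories(text: str) -> list[str]:
    """Return the behavior categories a message appears to match."""
    return [name for name, pattern in CATEGORY_PATTERNS.items() if pattern.search(text)]

def aggregate(messages: list[dict]) -> Counter:
    """Count flagged categories per team; author identities are never stored."""
    counts = Counter()
    for msg in messages:
        for category in flag_categories(msg["text"]):
            counts[(msg["team"], category)] += 1
    return counts

messages = [
    {"team": "sales", "text": "Shut up, your idea is worthless."},
    {"team": "sales", "text": "Great work on the demo!"},
]
print(aggregate(messages))  # Counter({('sales', 'bullying'): 1})
```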

Potential Dystopian Consequences

"A lot of this becomes thought crime," says Jutta Williams co founder of AI accountability nonprofit Humane Intelligence. She adds "This is treating people like inventory in a way I've not seen."

Growing Industry and Competition

Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that has exploded in the past year. Top competitors to Aware include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.

Privacy Concerns and Debunked Anonymization

Privacy concerns arise even when data is aggregated or anonymized. Research shows that personal identifiers can still be inferred from aggregate data, and experts say the idea that anonymization alone guarantees privacy has been debunked.
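
A toy example with made-up numbers shows one way this happens: when two "anonymous" averages differ by a single person, simple arithmetic recovers that person's individual value.

```python
# Made-up numbers -- a classic differencing attack on "anonymized" averages.
# Two published team averages that differ by one person are enough to
# recover that person's individual score.
avg_without_alex = 0.40   # average sentiment of 9 teammates
avg_with_alex = 0.31      # average after Alex's messages are included (10 people)

total_without = avg_without_alex * 9
total_with = avg_with_alex * 10
alex_score = total_with - total_without
print(round(alex_score, 2))  # -0.5 -- Alex's score is exposed despite the averages being "anonymous"
```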

Employee Recourse and AI Explainability

If an interaction is flagged and a worker faces disciplinary action, it's difficult for them to defend themselves without access to all of the data involved. AI explainability is still immature, making it challenging to question the decisions made by AI models.

