Nasty Language Now Detected in Email and Teams
In March, I wrote about the update to Office 365 supervision policies to support monitoring of Teams communications in personal and channel conversations. Supervision policies are an Office 365 E5 feature, and not every organization feels the need to check email and Teams for compliance with company or industry regulations, but such monitoring is an important part of data governance in some industries.
Recently, in another example of how Microsoft uses the cloud to bring machine learning and artificial intelligence into its products, supervision policies gained the ability to use data models to check messages. The first data model is “Offensive Language,” which covers a wide range of conditions, including slurs, taunts, racism, homophobia, profanities, and taboo terms, and is designed to help organizations implement anti-harassment and anti-cyberbullying policies in the workplace.
Adding the Offensive Language Data Model to a Supervision Policy
Adding the Offensive Language data model to a supervision policy is easy. When creating or editing a policy, you choose which communications to review, including the conditions used to select messages. All you need to do is set the Use match data model condition checkbox (Figure 1).
Testing the Policy
After saving the policy, the next step is to test its effectiveness. This is more easily done with email, because Office 365 captures copies of email messages for supervision immediately, while it can take up to 24 hours for Teams messages to be captured.
Messages selected for supervision are kept in special mailboxes, where reviewers process them using OWA or the Supervision section of the Security and Compliance Center (Figure 2). Reviewers must decide whether the messages picked up by a policy are compliant or non-compliant. Anyone who sends a message containing offensive language to other people needs some counseling. Ideally, the organization should have clear, well-documented procedures for reporting issues detected through supervision policies to line managers and HR for further action.
Blatant examples of grossly offensive language are reliably picked up (for instance, calling someone a f***ing idiot in email), as are messages containing specific keywords (like “homos”). Other messages that some might find offensive get through (you’ll have to do your own testing to find out which), but these might be caught over time as the learning model is refined to understand the kind of language used in the organization. The great promise of artificial intelligence and machine learning, after all, is that administrators don’t have to keep updating policies to take account of changing circumstances (new forms of insults, for instance). We shall see over time.
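To see why a learned model is more attractive than a hand-maintained keyword condition, consider this toy sketch (my own illustration, not Microsoft’s implementation): a static blocklist with hypothetical sample terms catches exact matches but misses obfuscated variants, so administrators would have to keep extending it by hand.

```python
import re

# Toy illustration only -- NOT how the Offensive Language data model works.
# A static keyword list catches exact terms but misses variants, which is
# the maintenance burden a trained model aims to remove.
BLOCKLIST = {"idiot", "homos"}  # hypothetical sample terms

def flag_message(text: str) -> bool:
    """Return True if the message contains a blocklisted word."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(flag_message("You are a f***ing idiot"))  # exact keyword: flagged
print(flag_message("You utter id1ot"))          # obfuscated variant: missed
```

The obfuscated spelling slips through the list entirely, which is exactly the kind of drift a model trained on real organizational language is meant to absorb without administrator intervention.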
The current data model only handles English-language terms. It will take time for Microsoft to build models to handle offensive language in the many languages supported by Office 365, including regional variations used in the 240+ markets where Office 365 is sold. In the meantime, any local patois won’t be detected by the policy, no matter how offensive it is.
Along with a ton of other information about auditing, supervision policies are covered in Chapter 21 of the Office 365 for IT Pros eBook.