Facebook’s Adding New Warning Prompts in Messenger Threads to Protect People from Scammers

Facebook is adding two new warning tools designed to protect users from scams and unwanted attention, while it’s also looking to increase its focus on protecting minors within the app.

First off, Facebook’s adding a new safety notice within Messenger chats that will help users identify when they may be interacting with potential scammers.

As explained by Facebook:

“We’re introducing safety notices in Messenger that will pop up in a chat and provide tips to help people spot suspicious activity and take action to block or ignore someone when something doesn’t seem right.”

When a user accepts a chat request from somebody they’re not otherwise connected with, Facebook’s machine learning systems will use a range of signals to detect whether the discussion could be suspicious, and will display a warning prompt if certain parameters are met.

Those parameters could include a user sending a large number of messages to different people they’ve not interacted with before, the presence of specific keywords within threads, and more.

A particular focus here will be on interactions with minors – as explained by Facebook:

“We developed these safety tips with machine learning that looks at behavioral signals like an adult sending a large amount of friend or message requests to people under 18.”
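For illustration only, a behavioral-signal check of the kind Facebook describes might look something like the much-simplified sketch below. The names, thresholds, and rule-based logic here are all invented assumptions; Facebook’s actual systems are machine-learning-based and far more sophisticated.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    """Hypothetical, minimal representation of a message sender."""
    age: int
    # Ages of the recipients of this account's recent friend/message requests
    recent_request_recipient_ages: list = field(default_factory=list)


def should_show_safety_notice(sender: Account, threshold: int = 20) -> bool:
    """Hypothetical rule mirroring the signal Facebook describes: flag an
    adult account that has sent a large number of friend or message
    requests to people under 18. The threshold is an invented placeholder."""
    if sender.age < 18:
        return False
    requests_to_minors = sum(
        1 for age in sender.recent_request_recipient_ages if age < 18
    )
    return requests_to_minors >= threshold
```

Note that a check like this only looks at behavioral metadata (who is messaging whom, and how often), not message content, which is consistent with Facebook's stated approach.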

Facebook says that Messenger already has special protections in place for minors, which limit contact from adults they aren’t connected to. This new feature will seek to educate people under 18 to be cautious when interacting with adults they may not know, while also providing means for them to take action before responding to a message.

The idea, says Facebook, is for these tools to keep people safe without Facebook ever having to access message content, which could significantly reduce users’ exposure to scams within the app.

In addition to this, Facebook’s also adding new warnings to help users identify potential imposters in their message streams.

Facebook Messenger warning

As per Facebook:

“Too often people interact with someone online they think they know or trust, when it’s really a scammer or imposter. These accounts can be hard to identify at first and the results can be costly.”

It’s concerning that this type of activity is so prevalent that Facebook needs a dedicated warning tool for it, but you’ve likely seen or heard of similar schemes, with people imitating others online in order to worm their way into their networks for malicious purposes.

Ideally, you would hope, Facebook would tag these accounts as fake and remove them – in a recent update, Facebook said that the number of fake accounts on its platform was around 5% of its total user count. That sounds fairly low, but at Facebook’s scale, 5% equates to some 125 million active fake profiles on the platform at present.

And it may be difficult to ever get lower than that, given the way scammers and fake profiles operate. So while Facebook’s systems are getting better at blocking fakes at the sign-up stage, many still exist – and some are imitating real accounts in this way, so users need to be wary.

Given this, both of these new warning tools could prove very beneficial – and beyond immediate action, they may also prompt more users to approach such interactions with added skepticism in future, improving digital literacy.

Facebook notes that the new warnings have been designed to work with full encryption, which it’s in the process of enabling by default across all of its messaging tools.

Facebook’s movement towards full encryption has raised the ire of regulators in various regions, who claim that it will end up providing cover for criminals by better enabling them to hide their activity. But Facebook’s pushing ahead regardless.

“People should be able to communicate securely and privately with friends and loved ones without anyone listening to or monitoring their conversations. As Messenger becomes end-to-end encrypted by default, we will continue to build innovative features that deliver on safety while leading on privacy.”

As noted, internal detection tools like these should essentially help users help themselves, but they won’t appease regulators, who have been calling for special access to message streams for police and investigators.

Going by this statement from Facebook, that seems unlikely, though you can expect the debate to carry on for some time yet.

The new Messenger features have already begun rolling out on Android, and are launching on iOS from next week.