Imagine this happening to you: you log onto Facebook and you see someone else, with your name and photo. The person is pretending to be you. Pretty creepy, right? What would you even do about it?
While this kind of thing does happen, it's not something that affects a large number of users. Right now, though, Facebook is making a big push to crack down on hate speech and harassment, so the company is taking action to stop this kind of identity theft.
For the last few months the company has been working on a new feature which will automatically alert users if someone is trying to impersonate them, according to a report from Mashable on Thursday.
If such a thing occurs, the user will get an alert and can then tell Facebook whether the other account is actually impersonating them.
Facebook first began testing this in November, and it will soon be available to everybody.
This does not seem to be a very widespread problem. In its 10-K filing from earlier this year, Facebook said that duplicate accounts, which it defined as "an account that a user maintains in addition to his or her principal account," made up less than 5 percent of its 1.55 billion monthly active users. (At the high end, that would be roughly 77.5 million accounts.)
It also said that fewer than 2 percent of accounts were "false" accounts, meaning either those that are misclassified, such as personal pages for businesses, or "undesirable accounts, which represent user profiles that we determine are intended to be used for purposes that violate our terms of service, such as spamming."
The percentage of accounts that are duplicate or false is higher in developing markets, such as India and Turkey.
So why tackle such a seemingly small problem? Because Facebook has been under fire lately to curb hate speech on the platform and to strengthen its anti-harassment features, especially in the wake of the Syrian refugee crisis, which led to heated rhetoric on the site.
Late last year, Facebook, Google, and Twitter all agreed to delete hate speech that violates German law from their websites within 24 hours, following an investigation by the German government into whether Facebook had failed to remove hate speech.
The new agreement made it easier for users and anti-racism groups to report hate speech by creating specialist teams to deal with these incidents at the three companies.
In January, Facebook launched a new anti-hate speech initiative in Europe and pledged over 1 million euros ($1.09 million) to support non-governmental organizations in their efforts to rid its platform of racist and xenophobic posts.
Last month the company began offering incentives to users who stood up to hate speech on the site.
In addition to the new anti-impersonation feature, Facebook is also said to be testing a better way to handle the posting of inappropriate pictures. While nudity is already banned on Facebook (something that has gotten the company in trouble in the past), the new feature would point victims of that kind of harassment to other resources they could use, including support groups and legal options.
Keeping users safe has to be a top priority for social networks; we've seen what can happen to a company that gains a reputation for failing to protect its users.
VatorNews reached out to Facebook to confirm these new features. We will update this story if we learn more.
(Image source: screenshadowsgroup.com)