Here’s What Facebook Says It’s Doing to Protect Election Security


Earlier today, members of Facebook’s staff held a small press event to give a status update on efforts to prevent the platform from being weaponized to influence major national events like elections.

Last year, Facebook came under fire when it was revealed that it had been weaponized by foreign actors to spread misinformation and divisive content in hopes of influencing the 2016 U.S. presidential election.

Facebook published a transcript of today’s remarks, where VP of Product Management Guy Rosen indicated the network would be focusing on four core areas of election protections:

  1. “Combating foreign interference”
  2. “Removing fake accounts”
  3. “Increasing ads transparency”
  4. “Reducing the spread of false news”

Here’s a look at the work Facebook says it’s doing in each area.

1. “Removing Fake Accounts”

This might be the most complex and far-reaching area where Facebook will be putting new efforts into place. In order to remove fake accounts, Facebook’s Chief Security Officer Alex Stamos explained, the network will have to identify fake identities and fake audiences, as well as false facts and false narratives.

Doing so begins by identifying motives, which boil down to three main areas: influencing public debate, money, and what Stamos called the “classic internet ‘troll.’”

Fake accounts motivated by the first item on the list range from what Stamos called “ideologically motivated groups” to state intelligence agencies, whose target audiences could exist within their own countries or others.

The second motivator, money, is the most common one. Often, these bad actors stand to profit financially by driving traffic to their sites, even if that means linking to false or divisive content.

Countering that, Stamos said, will require decreasing the account’s profits by increasing its operational costs — which is how Facebook has previously curbed activity from spammers. Facebook has made similar efforts in the past to penalize content with “clickbait” link titles that don’t necessarily lead to quality or genuine content.

These motivations can vary or even be combined according to the event the actor is trying to influence. That’s why Stamos said Facebook will be enlisting the help of external experts who are familiar with the various geographical or cultural factors that could play a role in what different actors are trying to accomplish.

2. “Combating Foreign Interference”

Samidh Chakrabarti, a product manager at Facebook, spoke about how proactive measures against foreign bad actors relate to the efforts to combat fake accounts, which, he said, are one of the most common ways such bad actors “hide.”

At this point, Chakrabarti explained, Facebook blocks “millions” of fake accounts on a daily basis as they’re being created, which can help stop them before they can create and distribute content. Machine learning is said to play a major role here: models have been trained to identify suspect activity without having to scan actual content.

Previously, members of the Facebook community were responsible for reporting what looked like suspicious activity, especially anything that might pertain to an election. Now, Chakrabarti said, Facebook will deploy an “investigative tool” that proactively looks for this kind of activity, like the creation…

3. “Increasing Ads Transparency”

A new transparency feature allows users to view any ads a Page is currently running under the Page’s “About” section. And once the verification process is complete, Leathern explained, ads pertaining to an election will be clearly labeled as such in both Facebook and Instagram feeds, including the individual, business, or organization that paid for them.

4. “Reducing the Spread of False News”

To determine which content needs to be fact-checked, Lyons said, the platform will use various “signals,” including reports from Facebook users themselves. From there, fact-checkers can rate a story as false; if they do, its ranking in the News Feed will be dropped, which leads to an average of 80% fewer views.

Any Page that habitually shares false news, Lyons said, will face reduced distribution, and lose its advertising and monetization privileges, “stopping them from reaching, growing, or profiting from their audience.”