Tuesday Feb 20

Technically Speaking: Facebook: From Russia With Love

Facebook, Twitter, and Google were called before the Senate to respond to their role in the spread of misinformation during the 2016 election. In her opening statement, California Senator Dianne Feinstein outlined how the Russian government used modern technology to influence the election results through cyber-attacks and the creation and distribution of false information on Twitter and Facebook.

As part of the hearing, all three companies provided documentation on the number of accounts with ties to Russia and their overall reach:

• The Internet Research Agency, a group Senator Feinstein described as professional trolls reportedly financed by a close Putin ally, created 470 Facebook accounts and 2,752 Twitter accounts
• Twitter alone had over 37,000 Russian-linked accounts auto-generating content
• Over 3,000 Facebook ads were purchased, targeting both conservative and progressive audiences; in a blog post, Facebook shared that those ads were purchased for around $100,000
• Facebook also acknowledged that over a two-year period, posts from accounts tied to Russia reached millions of US citizens.

While this hearing may have been the most politicized setting in which Facebook was asked to explain its role in the distribution of misinformation during the 2016 election, it is not the first time the company has been asked to take responsibility for its role in the election results.

To its credit, the company is now making a public effort to understand how its platform could be misused so easily and what measures can be put in place to prevent this from happening again.

In November, in a blog post called ‘Continuing Transparency on Russian Activity’, Facebook announced a portal that will let users find out whether they liked or followed any pages or Instagram accounts created by the Internet Research Agency between January 2015 and August 2017. The portal is expected to launch at the end of 2018.

In addition to the portal, Facebook has hired new employees to identify and remove fake accounts and posts, and is creating stronger policies to prevent fake ads from being re-shared. The company is also investing in machine learning to better identify these false ads as they are created.

Facebook has also made changes to its newsfeed in an effort to reduce the number of false stories shared through the platform. A common tactic for getting someone to click a link is to add a fake video play button that takes the user to a static image and a false news article. Facebook now blocks tactics like this to reduce spam and false information.

The election and this highly politicized time have changed the way people use Facebook entirely. Political discussions, debates on hot-button issues, and of course news and information are shared, posted, liked, and commented on. Facebook can better facilitate this type of activity by ensuring its content is accurate and is created and developed in an authentic, non-politically motivated way.

This is a great responsibility Mark Zuckerberg likely did not anticipate or ask for in Facebook's early development, but now that the platform has grown into a mainstay of our digital culture, the only thing we can ask is that it does better.

Noorin Ladhani is a freelance writer in Toronto. She blogs about travel and technology at www.noorinladhani.com and writes about Canadian start-ups and tech news at http://www.techvibes.com/global/author/noorin-ladhani. Follow her on Twitter at @NoorinLadhani.
