A week after Facebook banned several NYU researchers from examining its political ad practices, a second group is claiming Facebook tried to block its research into the social network’s recommendation algorithms.
On Friday, Berlin-based nonprofit AlgorithmWatch claimed it had to terminate a project analyzing Instagram’s recommendation algorithms over fears Facebook would retaliate with a lawsuit. “Ultimately, an organization the size of AlgorithmWatch cannot risk going to court against a company valued at one trillion dollars,” the nonprofit says.
The project tapped volunteers to install a browser add-on that collected data from their Instagram feeds. Data gathered from 1,500 volunteers allegedly showed that Instagram’s recommendation algorithms favored pictures of scantily clad people.
AlgorithmWatch says it routinely deleted the information it received from volunteers. But according to the nonprofit, Facebook was no fan of the research. In May, the company held a meeting with AlgorithmWatch and claimed the data collection through the browser add-on violated Facebook’s terms of service and European data privacy laws.
“They (Facebook) would have to ‘mov[e] to more formal engagement’ if we did not ‘resolve’ the issue on their terms — a thinly veiled threat,” AlgorithmWatch claims. “Facebook’s reaction shows that any organization that attempts to shed light on one of their algorithms is under constant threat of being sued.”
As a result, the nonprofit ended the project. However, AlgorithmWatch has decided to speak out after Facebook also claimed a terms-of-service violation in banning researchers at New York University from studying the social network’s political advertising practices. (Last week, the FTC sent a letter to Facebook slamming the company’s justification for the NYU researchers’ ban.)
AlgorithmWatch is now calling on European regulators to intervene. The nonprofit is circulating a letter to EU lawmakers demanding they help protect third-party researchers’ ability to study social media platforms.
“Without independent public interest research and rigorous controls from regulators, it is impossible to know whether Instagram’s algorithms favor specific political opinions over others,” the group says. “The European Parliament and EU Member States must act now to prevent further bullying.”
However, Facebook dismissed the allegations from AlgorithmWatch as inaccurate. “We believe in independent research into our platform and have worked hard to allow many groups to do it, including AlgorithmWatch — but just not at the expense of anyone’s privacy,” a company spokesperson tells us.
“We had concerns with their practices, which is why we contacted them multiple times so they could come into compliance with our terms and continue their research, as we routinely do with other research groups when we identify similar concerns. We did not threaten to sue them,” the spokesperson adds.
Facebook also claims it repeatedly asked the German nonprofit for an informal meeting about the research to explain why it violated the company’s terms of service. After several refusals, Facebook says it had no choice but to call for a “formal meeting.”
The social network also points out that it offers several tools to help researchers examine the company’s practices. Nevertheless, AlgorithmWatch contends Facebook can’t be trusted to supply accurate data on its own algorithms.