By Michael Bürgi, Published by Digiday
It’s fair to say the major social media platforms that take up so much of people’s time and attention have made some progress in making their environments safer for consumers and advertisers. But it’s probably more accurate to say they still have a long way to go in cleaning up their acts.
Those shortcomings were a major motivator for IPG’s Mediabrands and MAGNA units to issue their Media Responsibility Index (MRI) report every six months, starting in early 2021. The report offers what IPG says is an unbiased assessment of the social media platforms’ efforts in areas such as data collection and use, mis- and disinformation levels, advertising transparency, promoting respect and diversity, monitoring and limiting hate speech, policy enforcement and accountability. It relies on responses from the platforms as well as some original reportage and social listening.
Digiday got a first look at the third installment of the MRI, which is being released today.
New areas of focus within the MRI questionnaire sent to Facebook, Instagram, Pinterest, Reddit, Snapchat, TikTok, Twitch, Twitter and YouTube (LinkedIn was sent a questionnaire, but declined to respond) include: biometric data collection and storage; gender identity and ad targeting; hate-speech policies; reporting of BIPOC and underrepresented creators; misinformation labels; and others.
As evidence of the need for the MRI, the report states upfront that “64 percent of Americans say social media has a mostly negative effect on the way things are going in the U.S. today.”
But since social media isn’t the only way Americans digest content and opinions, future installments of the MRI will aim to expand to other media, according to Elijah Harris, executive vp of global digital partnerships & media responsibility at MAGNA, who oversees the report. “In order for us to scale it and make a bigger impact, it’s going to require us to expand the media types we look at … We’re still covering a minority of the investment pie,” he said.
Harris added that another expansion plan for the MRI is to enable IPG’s various countries and regions to customize elements of the report to their local needs and challenges. “The goal is to, at some point, get everywhere our clients are spending, and that’s a big pie,” added Dani Benowitz, MAGNA’s U.S. president.
IPG is not alone in its efforts to move the industry forward on issues of brand safety, representation, enforcement against bad-actor behavior, data accuracy and eliminating bias; each of the holding companies devotes significant energy to at least a few of these areas. But the MRI is arguably the broadest, evaluating the social platforms’ media responsibility efforts across five elements:
- advertising controls
- enforcement
- policy
- reporting
- user controls
“We’re constantly adapting it to change with what’s going on around us,” said Benowitz. “Our clients are 100 percent asking for this from us, they’re expecting it from us. They’re leaning on us to hold our media partners accountable and hold them accountable — and they want guidance from us on how to do that.”
“Safety is a constantly evolving topic,” said David Byrne, TikTok’s global head of brand safety and industry relations. “What may have been ‘best in class’ last year quickly becomes industry-standard, highlighting the importance of being proactive.”
Of all the platforms that responded, Twitter emerged with the best overall performance, noted Harris. The platform improved on its showing in prior reports in every element of evaluation except advertising controls.
Caitlin Rush, Twitter’s head of global brand safety strategy, said the report has helped Twitter keep better track of its own progress in areas beyond brand safety. “As we get into a year-plus of MRI under our belts, we’re able to see and measure the progress we’re making. Having this longevity of ups and downs has been really helpful for us,” she said.
Rush also noted the widened scope of brand safety as an important factor of the MRI. “Some of the topics that have evolved in the report are starting to emphasize bigger-picture things like how is your company supporting DE&I goals, and what are you doing to support responsible machine learning and algorithmic transparency?”
Citing the release of the Facebook Papers and Frances Haugen’s whistleblower testimony last fall, the report also shows that Facebook, while making progress, has been caught obfuscating its darker elements.
As Harris explained: “When it comes to the foundation of their policies, the controls Meta’s platforms use and the detection techniques they leverage, there’s absolute industry leadership within those systems. What hinders them is the consistency by which they enforce their policies and rules, which is where things start to go awry. We and other industry bodies have been encouraging that platform in particular to work with independent parties, specifically when it comes to how they report on prevalence of harmful or violating content.”
Ultimately, the third MRI makes the following recommendations to the platforms:
- Expand the labeling of all policy-violating content
- Platforms should all audit for algorithmic bias
- Platforms should also be more careful
- Industrywide adoption of a violative view rate, a measure recently developed by YouTube and Snap that “contextualizes, as a percentage, views of offending content relative to all content views on a platform”; a simple sketch of the calculation follows this list.
- Platforms should work with one another to limit “harmful content,” as suggested in a memorandum of understanding TikTok proposed to the other social platforms.
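To make the violative view rate concrete, here is a minimal sketch of how such a figure could be computed; the function and sample numbers are hypothetical illustrations, not YouTube’s or Snap’s actual methodology.

```python
def violative_view_rate(violative_views: int, total_views: int) -> float:
    """Return views of policy-violating content as a percentage of all views.

    Hypothetical sketch of the metric described in the MRI report,
    not YouTube's or Snap's actual calculation.
    """
    if total_views <= 0:
        raise ValueError("total_views must be positive")
    return 100 * violative_views / total_views

# Made-up example: 14 views of violating content per 10,000 total views
print(f"{violative_view_rate(14, 10_000):.2f}%")  # -> 0.14%
```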
“We regularly partner with experts, industry organizations, and brand partners to help inform our policies, practices, and solutions,” said TikTok’s Byrne. “As an industry, it’s vital to be transparent in order to build and maintain trust among our community of users, creators and brands.”