NPR recently reported that Facebook agreed to a $52 million settlement of a class action lawsuit alleging it failed to provide a safe working environment for its content moderators. To maintain a safe and "sanitized platform," Facebook contracts third-party content moderators to review user-generated content uploaded to its platform. Posts found to violate the corporation's terms of use, laid out in its Community Standards, are removed.
Much of what content moderators witness consists of "videos, images, and livestreamed broadcasts of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder." The suit claims that as a result of repeated exposure to extreme graphic content, many content moderators suffered "debilitating physical and psychological harm," including post-traumatic stress disorder (PTSD). The lawsuit claimed that "Facebook helped draft workplace safety standards to protect content moderators," which included "providing moderators with robust and mandatory counseling and mental health supports; altering the resolution, audio, size, and color of trauma-inducing images; and training moderators to recognize the physical and psychological symptoms of PTSD." However, the suit alleged, Facebook fell short of actually implementing the safety standards it helped create, and thereby breached its duty under California labor law to protect employees from unsafe working conditions.
The settlement gives class members, consisting of current and former content moderators, $1,000 each and up to $50,000 for medical treatment. Although the social media giant denies any wrongdoing, it agreed to provide mental health counseling to its moderators. A Facebook statement read, "We are grateful to the people who do this important work to make Facebook a safe environment for everyone. We're committed to providing them additional support through this settlement and in the future." Steve Williams, a lawyer representing the workers, said, "We are so pleased that Facebook worked with us to create an unprecedented program to help people performing work that was unimaginable even a few years ago. The harm that can be suffered from this work is real and severe."
While this litigation served as a step in the right direction for moderators at the job site, the recent coronavirus outbreak has raised new challenges for third-party moderators working from home. First, remote moderators will only be able to view a restricted amount of the usual content due to security and privacy concerns. When employees are at a worksite, they work behind layers of preventive security controls; home networks and personal devices lack those safeguards and introduce new vulnerabilities.
Another issue is the availability of wellness procedures and support for those working remotely. In a BBC report, Mark Zuckerberg said that decisions regarding the most disturbing content "would be taken over by Facebook's full-time staff because the infrastructure was not in place to support the mental health repercussions of the contractors dealing with the posts." Still, companies will need to find a way to provide remote moderators with some level of psychological support and health monitoring.
Third, there is the question of whether content moderators can still hit accuracy goals for removing toxic and disturbing content. Living in quarantine and isolation may worsen depression and other mental health issues, increasing the amount of inappropriate or harmful posts. Although Facebook employs 15,000 third-party content moderators, the sheer volume could lead to some questionable content being overlooked.
The last issue is the use of artificial intelligence to help evaluate exploitative content on the social network. "Facebook has been working on algorithms for several years to automatically spot and remove content that violates its policies," but it recently found that some content was mistakenly censored from the platform due to "a spam-filter bug." Glitches like this will emerge, but AI can help fill some of the gaps. It may take over some of the work, though not all of it; humans remain better suited to interpret content and language in its cultural and political context.
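To make the division of labor concrete, here is a minimal, hypothetical sketch of how an AI-assisted pipeline can triage posts: high-confidence violations are removed automatically, ambiguous cases where context matters are escalated to a human reviewer, and the rest are approved. The function names, thresholds, and toy scoring logic are illustrative assumptions, not a description of Facebook's actual systems.

```python
# Hypothetical triage sketch: route posts by a classifier's confidence.
# Everything here (names, thresholds, toy scorer) is illustrative only.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Stand-in for a trained classifier returning the probability
    that a post violates policy (0.0 = benign, 1.0 = clear violation)."""
    banned_terms = {"beheading", "torture"}  # toy placeholder for a real model
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, hits * 0.6)


def triage(post: Post, remove_at: float = 0.9, review_at: float = 0.5) -> str:
    """Auto-remove clear violations, send ambiguous cases to a human moderator,
    and approve everything else."""
    score = violation_score(post)
    if score >= remove_at:
        return "auto_remove"
    if score >= review_at:
        return "human_review"
    return "approve"


if __name__ == "__main__":
    print(triage(Post("1", "Weekend photos with the family")))  # approve
    print(triage(Post("2", "Graphic torture footage")))         # human_review
```

In a setup like this, the human-review queue is exactly where cultural and political judgment comes in, which is why automation reduces but does not eliminate moderators' exposure to disturbing material.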
Overall, tech companies will need to re-examine current practices to protect their workers not only from a physical standpoint but also from a mental health perspective, as many employees now work from home.