As tech companies have grown increasingly prevalent, so has content moderation, particularly of hateful or discriminatory content.
While most tech companies have policies that clearly delineate free speech from hate speech, for some, such as LinkedIn, enforcing those policies has proven problematic.
Edward Hsieh, chief operating officer of the Asian American Civic Association (AACA), said he received a harassing message from a LinkedIn burner profile accusing him of being a “spy from a foreign country” and threatening to “report [him] to the officials.”
In its community policies, LinkedIn states that it prohibits content that “incites or threatens hatred, violence … or discriminatory action” because of “race, ethnicity, national origin, caste, gender, gender identity, sexual orientation, religious affiliation, or disability status.”
Additionally, LinkedIn prohibits creating false profiles, clearly stating, “We don’t allow fake profiles or entities.”
When Hsieh reached out to LinkedIn to report the two violations, however, LinkedIn dismissed the case, stating, “There’s nothing there.”
Hsieh said that after receiving LinkedIn’s initial response, he reached out to its Safety Center to further clarify why the message was harassing.
“I gave him some context, and I think the biggest problem is that there were no clear words in there saying ‘Asian’ because the person was smart enough not to say anything about ‘Asian,’” Hsieh said. “Instead, they abbreviated AACA and used the term foreign.”
Hsieh also pointed out the current “high levels” of anti-Asian racism due to the pandemic, but even after this second message, LinkedIn closed and dismissed the case again.
“Even when I took the step of explaining to him that this is not just me being mad at a potential spam [message],” Hsieh said, “they still close it down.”
Hsieh reached out one last time, now pointing to the current state of racism on social media, specifically with sites like Parler becoming widespread.
However, unlike the other two times, Hsieh said he received a message from LinkedIn asking for “as much details” as possible.
Despite LinkedIn finally acknowledging the hateful message, Hsieh still expressed disappointment at how long it took to get there.
“The fact is that it took me three messages before they even acknowledged there might be a problem,” Hsieh said.
According to Amy Zhang, an assistant professor at the University of Washington who focuses on social computing and human-computer interaction, most tech companies rely on a combination of algorithms and human moderation to detect hate speech.
Zhang emphasized, however, that companies most likely rely on human moderators over basic algorithms, because detecting hate speech is “highly subjective, contextual and cultural.”
Zhang also said that determining what counts as free speech versus hate speech is a difficult matter for companies to resolve.
“Where do we care more about societal harm and helping marginalized people versus respecting people’s individual freedoms?” Zhang said.
In Hsieh’s case, his final report, to which LinkedIn responded by asking for more details, most likely went to a human moderator rather than a default algorithm designed to quickly determine whether content is hateful.
Despite this response, Hsieh said his biggest problem with the company was their inability to enforce their own policies.
“They are, by all measures, [individuals] of private corporations, meaning they can set their own terms,” Hsieh said. “But the fact is, they’re not even following their own terms, based on what I saw.”