According to Monika Bickert, head of policy management at Facebook, the company receives over one million reports of user violations every day. She spoke about the often blurry line between free speech and hate speech.
However, she told CNNMoney that she doesn't have data on what percentage of those reports is serious or how many are ultimately removed from the site.
She made these comments on Saturday at SXSW's first Online Harassment Summit, where the panel focused on how much tech companies can and should do to remove potentially damaging content from their platforms.
"You can criticize institutions, religions, and you can engage in robust political conversation," said Bickert, referring to where her company draws the line.
"But what you can't do is cross the line into attacking a person or a group of people based on a particular characteristic," she added.
The criterion Facebook uses to determine whether content is hateful is essentially whether it attacks people based on their actual or perceived race, ethnicity, sex, religion, national origin, disease, disability, or sexual orientation. Such content is not allowed on the site.
The social media giant encourages respectful behavior. Because people from many different cultural backgrounds use the platform, Facebook must balance their needs, safety, and interests. As a result, the company may remove sensitive content or limit the audience that sees it.
But the social network finds it hard to enforce this rule.
"When it comes to hate speech, it's so contextual ... We think it's really important for people to be making that decision," Bickert said.
She hinted, however, that automation will someday play a bigger role in this area. She also noted that the number of reported violations is "steadily increasing" now that Facebook allows users to flag what they believe is hate speech directly from their devices.