Content moderation on the Internet is an incredibly difficult challenge. Despite years of image recognition research at universities and corporations, much of that work is still done more accurately by real people. Even determining whether a piece of text is offensive, threatening, or soliciting services is something software struggles to do accurately. It really does require the full breadth of human experience and knowledge to make judgement calls like that, and even then, false positives and false negatives will still slip through.

So when reading the article I link to below, try to read it with the mindset that artificial intelligence will not be here anytime soon, and that these jobs simply cannot be automated in time to save people from having to do this work. It’s rather unsettling.

I think this is a difficult topic to talk about, and it certainly isn’t glamorous. I usually like to link to interesting and (what I feel are) unique things, and this article qualifies: not because I take some strange enjoyment in reading about these bizarre things, but because, according to the article, the truly bizarre is normal for many of these employees. This work is something any large website that allows user-generated content and still wants to maintain a positive public image has to do.

Twitter gets constant demands from its users to step up its content moderation, as too many of them are subjected to hate, threats, and abuse. Users want Twitter to be an open platform where anyone can immediately join, post, and start conversations. They also want it to be a safe place to engage in those activities. That need for a safe space is especially critical when discussing topics that are progressive in nature (not liberal in the political sense, but running against the norms of society). How can we find that balance? Should Twitter be the de facto middleman arbitrating all discussions and content made by users? That would be impossible given the amount of content generated. So most systems are reactive, responding to user reports of misuse of the service. Clearly, Twitter’s users find this solution ineffective.

Further, I would like to add that this model of content moderation is the exact opposite of Reddit’s. At Reddit, users create their own subreddits and self-moderate to the degree their community agrees to. Users can join any public subreddit, be banned from it, or have their posts downvoted to the point where they are no longer seen. There are still moderators for each subreddit, of course, but users are generally able to “hide” bad content with enough votes, without requiring direct intervention from a moderator. This isn’t always the case, but it usually happens before a moderator removes the content, sparing some people from the low-quality post.

But that democratized version of self-moderation with downvotes doesn’t work without substantial intervention and banning in subreddits that demand a high level of post quality (such as r/AskHistorians or r/AskScience). It also doesn’t appear to work to outsiders when they see subreddits dedicated to some of the shadiest corners of desire, or blatantly sharing the private photos of celebrities. To outsiders, and even many Reddit users, that is a clear failure of self-moderation. While Reddit is a space where free speech generally has free rein, it is still subject to the laws and morals of people (however contradictory those may be across countries or even county lines). Reddit itself very rarely moderates the content of its users, for better or worse. So this isn’t a good solution either, both because it has proven ineffective and because it still requires manual human intervention.

More importantly, I don’t think these kinds of problems are best solved with technology. Many things happen that lead people to do these things, let alone decide to share them. Once again, we will have to come to terms with the fact that technology cannot solve these problems, because they are cultural and societal in nature. People will do whatever they can to harass others, to do awful things and share them, or to threaten others. What we can do is create environments where those people are simply turned away. Where their acts are not glorified or supported. Where there is a general understanding, both in the real world and online, that we hold ourselves to a higher standard, because everyone is better off for it. All these problems stem from real-world acts and people, not technology and not websites. Until AI is sophisticated enough to automate this work away, jobs like the one described below will only become more common.

The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed