Platforms & Content

I think we’ve all been a little bothered over the past week by what seems to be an increase in censorship by some of the platform companies. The companies appear to be taking steps to reduce certain types of content based on what their users are demanding. I understand it, but I don’t think it is right, given the current Section 230 protections available to tech companies.

While I can see the concern about content that includes specific threats or clearly shows that someone is planning something illegal, I don’t see the problem with content that is simply passionate. Many people are passionate about social or political issues and can get pretty heated, but they have the right to express themselves. Censoring that content is not right, and the decision should not rest in the hands of a few tech executives.

Biden and Trump have both said they want Section 230 removed. Why? Neither was very specific beyond saying they felt it should go. But if platform providers claim they are neutral platforms and therefore have no control over the content on them, yet at the same time operate large manual and automated content-monitoring systems, something is off.

It is a difficult problem. A news distribution service, for example, isn’t writing the news; it is just distributing it. A press release may contain statements that are incorrect or are later shown to be false, but that doesn’t mean the distribution service should be sued and put out of business. Having systems in place to monitor content is important, but determining the liability of either or both parties is challenging.

I’m thinking through this as I write. It is important to monitor content for false information, spam, or ill intent. On that, I am clear.