Draft:Online semi-censorship

Online semi-censorship is Internet censorship carried out largely or entirely by non-governmental entities that does not remove content outright, but substantially limits or eliminates its reach.

Deboosting
Deboosting refers to hiding or demoting content to reduce its visibility. For example, a reply can be moved toward the end of the replies section, which may be hidden behind a collapsible panel that requires the user to press a "show more" button instead of the usual infinite scrolling. Deboosting may also include excluding content from a platform's notification system. In principle, such visibility restrictions can also be personalised toward specific audiences, hiding an item from certain cohorts, IP ranges, or demographics but not from others. Various techniques can be deployed to deboost specific posts, or all posts by a specific user, such as introducing delivery delays.
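A ranking-stage deboost of the kind described above can be sketched as follows. This is an illustrative Python example only; the scoring function, deboost factor, and fold limit are hypothetical and not drawn from any actual platform's implementation:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    engagement_score: float
    deboosted: bool = False

DEBOOST_FACTOR = 0.1   # hypothetical multiplier applied to flagged content
FOLD_LIMIT = 3         # hypothetical number of replies shown before "show more"

def effective_score(reply: Reply) -> float:
    # A deboosted reply is not removed; it merely loses most of its
    # ranking weight, pushing it toward the end of the thread.
    return reply.engagement_score * (DEBOOST_FACTOR if reply.deboosted else 1.0)

def rank_replies(replies: list[Reply]) -> tuple[list[Reply], list[Reply]]:
    ordered = sorted(replies, key=effective_score, reverse=True)
    # Replies past the fold remain accessible but are hidden behind
    # a collapsible "show more" control.
    return ordered[:FOLD_LIMIT], ordered[FOLD_LIMIT:]
```

In this sketch, a deboosted reply with high engagement can still rank below unremarkable replies, illustrating how reach is limited without any content being deleted.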

Targeted drowning out of content
Direct censorship can be supplemented or supplanted by private "troll armies" – bands of individuals and/or bots programmed to drown out disfavoured speech. Unfair amplification of posts from select demographics, groups, or individuals may likewise drown out other content.

Removal of posts relating to specific subjects or claims
For example, in the early 2020s, many online platforms such as Facebook censored or suppressed posts and information relating to the COVID-19 lab leak hypothesis, which had not been scientifically refuted. These policies were later reversed and caused substantial controversy.

Research
A 2018 study found growing calls to regulate the companies behind major online platforms and to make them more accountable, and outlined an interdisciplinary research agenda for platform governance. Algorithmic accountability concerns both the intrinsic opacity of computational processes and the lack of transparency in platform governance. According to a 2023 book, the news and public information which citizens perceive "is no longer solely determined by journalists, but increasingly by algorithms", which "affect exposure to diverse information and misinformation and shape the behavior of professional communicators". Key recommendations of an OSCE report include to "Make certain that robust remedy mechanisms against censorship and surveillance power are in place, including through human review and independent appeal mechanisms". Many reports assume that suppressed posts largely consist of misinformation or similarly malicious content. Research may also investigate alternatives to some of the more widely accepted rationales for semi-censorship, such as computer-assisted classification of, for example, misinformation-containing claims to enable large-scale correction – via approaches like Twitter Community Notes or replies beneath the posts – without outright hiding or removing them. Semi-censorship has also been found in the scientific literature itself, where ideas that "challenge current scientific or publishing paradigms" are sometimes suppressed.
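The correction-instead-of-removal alternative mentioned above can be sketched as follows. This is a minimal illustrative Python example under stated assumptions: the keyword lookup stands in for a real claim classifier (such as a trained model), and the claim text and note are hypothetical:

```python
from typing import Optional

# Hypothetical table mapping known disputed claims to corrective notes,
# standing in for the output of a large-scale claim classifier.
DISPUTED_CLAIMS = {
    "claim-x": "Context note: this claim is disputed; see independent sources.",
}

def classify_claim(post_text: str) -> Optional[str]:
    # Stand-in for computer-assisted classification of claims.
    for claim, note in DISPUTED_CLAIMS.items():
        if claim in post_text.lower():
            return note
    return None

def moderate(post_text: str) -> dict:
    note = classify_claim(post_text)
    # The post stays visible and ranked normally; if a disputed claim is
    # detected, a corrective note is attached rather than the post being
    # hidden or removed.
    return {"text": post_text, "visible": True, "note": note}
```

The design point is that visibility is never reduced: moderation output only ever adds context, in the spirit of Community Notes-style correction.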

Reddit
Reddit deploys a decentralized approach to content moderation, combining an algorithmic spam filter with volunteer moderators who can approve or remove any post, and ban any user from their community – or "subreddit" – for any reason, although users can message the moderators to ask for an explanation, which they may sometimes receive. As on Twitter, a standardized process for appealing content removal is lacking. This means that moderators of large subreddits, often with no notable alternative, can approve or remove posts, set narrow subreddit rules, or ban users based on their personal political views, commercial interests, or any other undisclosed reason. A study noted that such "moderation brings with it the inevitable questions of notice and proportionality of sanction, not to mention the question of who adjudicates".