Stack Exchange moderators on strike over AI-related issues

A bunch of Stack Exchange moderators have gone on strike because the company is telling them not to moderate AI-generated posts.

There are a lot of complicated and nuanced aspects to the situation. Here’s my understanding of what’s going on, but I’m not an expert (and I’m biased toward believing the moderators):

  • AI-generated material often looks superficially good, so readers tend to upvote it; but it frequently includes fabricated material that isn’t obviously wrong at first glance.
  • Moderators have been using various signals/indicators to help them identify material as AI-generated. For example, if a user submits several multi-paragraph posts within a few seconds, they get suspicious.
  • Moderators have not been relying much (if at all) on automated AI detectors, because those detectors aren’t very accurate.
  • Some moderators say that after seeing enough AI-generated material, they start to be able to identify it easily. They’ve suggested that the company could test them on this, using known-AI-generated and known-human-generated material, but so far the company has declined to do so.
  • Moderators have been suspending users who appear to be posting AI-generated material.
  • The company declared that moderators must stop suspending users for posting AI-generated material. (The company didn’t consult with the moderators before issuing this policy.)
  • Moderators have gone on strike, refusing to do moderation, in protest of that policy.
  • The company has been posting initially-reasonable-seeming explanations of where it’s coming from, but moderators have been responding by pointing out the flaws in the company’s reasoning and methodology. For example, the company keeps pointing out that automated detectors are inaccurate; the moderators keep replying that they aren’t relying on those detectors.
  • The company receives appeals of suspensions, and because the moderators don’t have concrete evidence that the material was definitely AI-generated, the company is having a hard time justifying the suspensions.
  • The company says that certain demographics, including users from certain countries, are more likely to get suspended by moderators for supposedly posting AI-generated material.
  • The company seems to be indirectly claiming that a lot of the material that moderators are saying is AI-generated really isn’t. This seems to imply that the company believes it has a method of identifying AI-generated material that’s more accurate than what the moderators are doing, but it hasn’t explained what that method is.

…And there’s lots more to the situation than that; in particular, there’s background tension between moderators and the company over past situations and actions. But I feel like the above covers the core of the current situation (bearing in mind again that I’m not an expert in any of this).
