
Stack Exchange Strike – Now AI is dangerous? Does Stack Exchange know what it’s doing? · Ponderings of an Andy


Introduction

My earlier posts about the ongoing moderator and curator strike on the Stack Exchange network can be found linked at the bottom
of this post, or by visiting the Stack Exchange Strike category on this site. I would post a summary about what’s happened in the last
ten days, but there is nothing to report. There are discussions, but no agreements. The appointed Stack Exchange employee empowered to
talk with moderators stepped back and isn’t participating any longer.

Tomorrow marks the one month point. We’re hours away from 10,000 pending moderator flags on Stack Overflow. That is up from 78 (yes,
two digits, in mid-May). The way this has gone down, the lack of progress, and the continued mischaracterization of moderators to the press
hasn’t motivated me to spend my free time volunteering, though. I still have this feeling that Stack Exchange is looking
at the recent reddit protests, with their demand that moderators return to the community, and wondering if they can replicate that here.

New confusion

On July 3, 2023 Stack Overflow published a blog post entitled: “Do large language models know what they’re talking about?”. Spoiler:
the conclusion of the article is “Nope.”

But that’s not the interesting thing. The interesting thing is how this answer is presented. The last paragraph of the post cuts to
the heart of the matter that moderators on Stack Overflow raised in December when we banned ChatGPT.

Treating AI-generated information as completely actionable may be the biggest danger of LLMs, especially as more and more web content gets generated by GPT and others: we’ll be awash in information that nobody understands. The original knowledge will have been vacuumed up by deep learning models, processed into vectors, and spat out as statistically correct answers. We’re already in a golden age of misinformation as anyone can use their sites to publish anything they please, true or otherwise, and none of it gets vetted. Imagine when the material doesn’t even have to pass through a human editor.

We saw this in action with ChatGPT. We still see it in action with ChatGPT, and it’s still a problem users have become more aware of as the
strike continues. We saw it when Stack Exchange tried their formatting assistant on Stack Overflow. What I see here is Stack Overflow
admitting, in public, that the moderators are correct.

The other interesting thing about that paragraph is that it links to an article from The Verge that quotes the Stack Overflow moderators
on the decision to ban AI. It also has this dig at Stack Exchange executives:

The mods say AI output can’t be trusted, but execs say it’s worth the risk.

Their own post explains why it isn’t worth the risk.

What does this mean?

I see this as more communication failure on Stack Exchange’s part. In an update I posted weeks ago, I linked to internal emails that
were leaked.

How are we messaging this? Who is allowed to post and respond to questions and comments on Meta, chat, social media, etc.?

The Community Leadership Team ([redacted]) are working together in close coordination with Marketing ([redacted]) on comms. They will post and respond to questions on-site. Unless you are specifically tapped to respond to something, please do not engage. You should avoid commenting on anything related to this action on site, even if you think you have something helpful to add. Please get review and approval from Philippe prior to posting on site, or from [redacted] if you are approached off-site.

Somebody, somewhere, didn’t realize what this blog post was about or what it linked to.

However, nothing changes with this. The company has dug in so hard on forcing GenAI onto the sites and is marching toward an announcement
of some kind about AI in late July 2023. In the meantime, I can only see blog posts like this one as a sign that Stack Exchange
doesn’t know what they’re trying to build toward, while at the same time having come to the conclusion (or at least a team within Stack
Exchange has) that GenAI isn’t to be trusted.

Just like the community said back in December and continues to say now.
