This post proposes that comments on Lobsters that merely accuse a story of being LLM-generated should be flagged as off-topic to maintain the quality of discussion.
# "This is written by an LLM" comments should be flagged as off-topic
Source: [https://lobste.rs/s/wee21u/this_is_written_by_llm_comments_should_be](https://lobste.rs/s/wee21u/this_is_written_by_llm_comments_should_be)
There've been endless discussions about whether we should ban LLM-generated text, change the ai/vibecoding tags, etc. The general consensus seems to be (???) to flag low-effort/uninformative stories as spam and move on.

My proposal here is that *comments* on these stories that just say "this is LLM slop" or something equivalent should be flagged as off-topic. Clearly everyone has different thresholds for what triggers their "slop-o-meter", but at least 80% of the reason I read Lobsters is for the quality of the commentary here, and it's frustrating to have to wade through arguments about whether the story under discussion is LLM slop or not. It's also frustrating to *submit* a story that I thought was interesting and (for whatever reason) didn't trip my slop-o-meter, and then have the only comment be "would have been a nice article if it weren't written by an LLM". It's *even more frustrating*, and frankly kind of demoralizing, to have an article that I wrote (without an LLM) get submitted and then get accused of being LLM-generated [1].

I get that LLMs are polarizing and frustrating to everyone in the community; at this point I don't think anybody is going to change their mind about anything here, so this proposal is "in addition to 'flag low-effort articles as spam and move on', we should also 'flag this meta commentary as off-topic and move on'".
---
[1] To be clear, that hasn't happened here for articles I've written, but I have gotten that reaction on other platforms, and I've witnessed it happen to other authors on Lobsters.