Is this another thing that the rest of the world didn’t know the US doesn’t have?
Every right-wing accusation is a confession.
many years now
This appears to be an escalating fraud, affecting newer models more than older ones. So I’d guess that’s ^^ the answer.
It’s not just a Reuters investigation; they’ve been fined by a few jurisdictions, and they absolutely do have the ability to pay lawyers to defend those charges if they’re false.
They don’t seem to list the instances they trawled (just the top 25 on a random day, with a link to the site they got the ranking from, but no list of the actual instances that I can see).
We performed a two day time-boxed ingest of the local public timelines of the top 25 accessible Mastodon instances as determined by total user count reported by the Fediverse Observer…
That said, most of this seems to come from the Japanese instances which most instances defederate from precisely because of CSAM? From the report:
Since the release of Stable Diffusion 1.5, there has been a steady increase in the prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with increasing levels of realism.[17] This content is highly prevalent on the Fediverse, primarily on servers within Japanese jurisdiction.[18] While CSAM is illegal in Japan, its laws exclude computer-generated content as well as manga and anime. The difference in laws and server policies between Japan and much of the rest of the world means that communities dedicated to CG-CSAM—along with other illustrations of child sexual abuse—flourish on some Japanese servers, fostering an environment that also brings with it other forms of harm to children. These same primarily Japanese servers were the source of most detected known instances of non-computer-generated CSAM. We found that on one of the largest Mastodon instances in the Fediverse (based in Japan), 11 of the top 20 most commonly used hashtags were related to pedophilia (both in English and Japanese).
Some history for those who don’t already know: Mastodon is big in Japan. The reason why is… uncomfortable
I haven’t read the report in full yet, but it seems to be a perfectly reasonable set of recommendations to improve the ability of moderators to prevent this stuff from being posted (beyond defederating from dodgy instances, which most if not all non-dodgy instances already do).
It doesn’t seem to address the issue of some instances existing largely so that this sort of stuff can be posted.
There are exceptions to the rule, and this is one of them.
The rule works so well because journalists who can make a statement of fact, make a statement of fact. When they can’t stand the idea up, they use a question mark for cover, eg “China is in default on a trillion dollars in debt to US bondholders. Will the US force repayment?”
This is an opinion piece which is asking a philosophical question. The rule does not apply.
tbf this is not very much different from how many flesh’n’blood journalists have been finding content for years. The legendary crack squirrels of Brixton was nearly two decades ago now (yikes!). Fox was a little late to the party with U.K. Squirrels Are Nuts About Crack in 2015.
Obviously, I want flesh’n’blood writers getting paid for their plagiarism-lite, not the cheapskates who automate it. But this kind of embarrassing error is a feature of the genre. And it has been gamed on social media for some time now (eg “Lib Dem leader Jo Swinson forced to deny shooting stones at squirrels after spoof story goes viral”).
I don’t know what it is about squirrels…
This lil robot was trained to know facts and communicate via natural language.
Oh stop it. It does not know what a fact is. It does not understand the question you ask it, nor the answer it gives you. It’s a very expensive magic 8ball. It’s worse at maths than a 1980s calculator because it does not know what maths is, let alone how to do it, not because it’s somehow emulating how bad the average person is at maths. Get a grip.
It’s OK. Ordinary people will have no trouble at all making sure they use a different vehicle every time they drive their kid to college or collect an elderly relative for the holidays. This will only inconvenience serious criminals.
It will almost always be detectable if you just read what is written. Especially for academic work. It doesn’t know what a citation is, only what one looks like and where one appears. It can’t summarise a paper accurately. It’s easy to force laughably bad output just by asking the right sort of question.
The simplest approach for setting homework is to give students the LLM output and get them to check it for errors and omissions. LLMs can’t critique their own work, and students probably learn more from chasing down errors than from filling a blank sheet of paper for the sake of it.
They’re circular. If the text is too predictable, it was written by an LLM*, but LLMs are designed to regurgitate the next word most commonly used by humans in any given context.
*AI is a complete misnomer for the hi-tech magic 8ball
Yes, they are. Probably not in the country that calls it transit, mind. And lots of people would like to be able to have more private conversations in public, whether or not they’re travelling at the time.
Plus, I’ve seen a lot of threads over the years from gamers, or the people who have to live with them, looking for something exactly like this.