

A console in 2025 “runs at a stable 30” fps and that’s good news? Of course this is slightly faster than a mobile chip from 10 years ago, but that’s an incredibly low bar to set.
Carl Pei and Pete Lau. They both co-founded OnePlus; Carl Pei later founded Nothing.
What you’re saying absolutely makes sense. However as someone with ADHD I couldn’t relate any less, I wish I could get addicted to something like that and not lose interest after ten minutes!
Even assuming we’re okay with using AI for language learning - then why would anyone pay for Duolingo instead of the many LLMs that people already use and pay for?
They’ve alienated their customer base hard. And this marketing video pretending they are siding with the users and against “their corporate overlords” is horribly tone deaf.
In my view it’s a Linux subsystem for Windows.
Why the name is the other way around, I’ll never understand.
It’s okay. We can all play that game. I’ve replaced my use of Duolingo with AI.
Pro tip: set the “system prompt” in your LLM of choice to “at the end of every query, include a short Swedish phrase related to my prompt”. No need for Duolingo.
This can be correct, if they’re talking about training smaller models.
Imagine this case. You are an automotive manufacturer that uses ML to detect pedestrians, vehicles, etc. with cameras, like what Tesla does, for example. This needs to be done with a small model with a relatively low power footprint that can run in a car, not a datacentre. To improve its performance you need to fine-tune it with labelled data of traffic situations with pedestrians, vehicles, etc. That labelling would be done manually…
… except when we get to a point where the latest Gemini/LLAMA/GPT/Whatever, which is so beefy that could never be run in that low power application… is also beefy enough to accurately classify and label the things that the smaller model needs to get trained.
It’s like an older sibling teaching a small kid how to do sums: not an actual maths teacher, but it does the job and is a lot cheaper, or semi-free.
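The idea above can be sketched in a few lines. This is a toy Python illustration, not any real pipeline; `teacher_label`, the heat-signature rule, and the sample format are entirely made-up stand-ins:

```python
# Toy sketch of auto-labelling: a "teacher" (standing in for a big model
# too heavy to run in a car) labels raw samples, and those labels become
# the training set for a small "student" model.

def teacher_label(sample):
    # Hypothetical stand-in for the large model's classification:
    # calls a reading "pedestrian" when a made-up feature crosses a threshold.
    return "pedestrian" if sample["heat"] > 0.5 else "background"

def build_training_set(raw_samples):
    # Replaces manual annotation with teacher-generated labels.
    return [(s, teacher_label(s)) for s in raw_samples]

raw = [{"heat": 0.9}, {"heat": 0.2}, {"heat": 0.7}]
dataset = build_training_set(raw)
# dataset now holds (sample, label) pairs ready for fine-tuning the small model
```

The point is only that the expensive model runs once, offline, to produce labels; the cheap model is what actually ships in the car.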
“There are some bad things on the internet”
“Just… Don’t use the internet?”
The ads for apps, Xbox games, trial versions of Office preinstalled, the Minesweeper and Solitaire collections that are preinstalled but actually ad-supported or non-free, depending on the region Spotify/TikTok/Facebook also come preinstalled, “Movies & TV”, Bing/MS News…
I think all of those count as bloat. I haven’t included Edge because I guess having a browser is a necessity, or copilot/cortana because you said “excluding AI features”.
Oh no! They’re using an emulator! I choose you NINTENDO! Use “Sue for copyright”!
Unfortunately, it’s not very effective (Anthropic’s type is “AI Company”).
I’ve only started using Storygraph recently (which I also like) but I’d consider a federated alternative. Does anybody know whether it’s possible to migrate the history from SG to Bookwyrm?
Excellent in which specific sense? Most competitors offer better everything (performance, range, build quality) for a given price point.
The fact that Tesla has managed to make EVs that consistently rank below most ICE brands in terms of reliability is mind blowing.
Over the past 5 years, I’ve installed Ubuntu about 30 times on different computers. Not once has an install on an SSD taken me more than an hour, and it typically takes 30 minutes or less, except for rare occasions where I’ve messed something up.
It’s the other way around: an Apple Silicon Mac can run an Intel binary through Rosetta (I think there are almost no exceptions at this point). It’s Intel Macs that can’t run ARM-specific binaries.
I thought a few days ago that my “new” laptop (M2 Pro MBP) is now almost 2 years old. The damn thing still feels new.
I really dislike Apple but the Apple Silicon processors are so worth it to me. The performance-battery life combination is ridiculously good.
Less conveniently while costing something like $700 plus a monthly $25 subscription.
I don’t get how it got pitched either.
It’s UE in Spanish, from Unión Europea. (Non-doubled letters because it’s a single Union, there’s no plural like in “States”).
Sometimes people in Spain do use the English acronyms for both EU/USA, but I don’t think I’ve seen it often. Both UE and EEUU are more common from what I’ve seen, and also people rarely say these out loud, it’s exclusively a written language problem.
I’m talking about running them on the GPU, which favours the GPU even when the comparison is between an AMD Epyc and a mediocre GPU.
If you want to run a large version of DeepSeek R1 locally, with many quantized models being over 50 GB, I think the cheapest Nvidia GPU that fits the bill is an A100, which you might find used for $6K.
For well under that price you can get a whole Mac Studio with those 192 GB the first poster in this thread mentioned.
I’m not saying this is for everyone, it’s certainly not for me, but I don’t think we can dismiss that there is a real niche where Apple has a genuine value proposition.
My old flatmate has a PhD in NLP and used to work in research, and he’d have gotten soooo much use out of >100 GB of RAM accessible to the GPU.
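As a rough sanity check on those memory figures, here is the usual back-of-the-envelope rule: model size ≈ parameter count × bytes per parameter (weights only, ignoring KV cache and runtime overhead). The 70B figure below is just a hypothetical example size, not a claim about any specific model:

```python
def model_size_gb(params_billions: float, bits_per_param: int) -> float:
    # Weights only: params × (bits / 8) bytes, expressed in GB.
    return params_billions * (bits_per_param / 8)

# A hypothetical 70B-parameter model:
print(model_size_gb(70, 16))  # 140.0 GB at full fp16
print(model_size_gb(70, 8))   # 70.0 GB at 8-bit
print(model_size_gb(70, 4))   # 35.0 GB at 4-bit
```

Which is how a quantized “large” model still lands in the 50 GB+ range: beyond any single consumer GPU, but comfortable on a machine with 100+ GB of GPU-accessible unified memory.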
Only €6M? For an event of that size, that feels a lot cheaper than I would have expected.