AI trained to do that job? Sure, yeah. LLM AI? Fuck no.
Hah, I tend to make huge minecart networks, so I used the gold for the tracks. I know I could technically duplicate them, but that just feels too cheaty to me.
And again, I know some other crops can be generated faster, but pumpkins and melons are, for me, the sweet spot for density. With 4 farmers I can trade 3 stacks of melons and 5 stacks of pumpkins for enough emeralds to get what I need in a day. With paper that’d be 6 stacks of paper per librarian, and I’d need 6 librarians to get the same amount of emeralds, so 36 stacks total. I’d rather not click back and forth all the time between my chests and the villagers to do my trading.
Farmers are IMO much better for getting emeralds than librarians, because you can trade both pumpkins and watermelon (which the crafter block now makes less of a pain to store). In terms of auto farmers with chests for overflow storage, watermelon + pumpkin are much more emerald dense than paper too (6 pumpkin/4 melon per emerald vs 24 paper per emerald). Plus you can trade for golden carrots with maxed farmers, which are one of the best foods for hunger saturation and can be used to breed horses.
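The density claim is easy to sanity-check with a bit of arithmetic. This is just a sketch using the trade rates quoted above (6 pumpkin / 4 melon / 24 paper per emerald, stack size 64); actual in-game prices shift with villager discounts:

```python
STACK = 64  # items per inventory stack

def emeralds_per_stack(items_per_emerald, stack_size=STACK):
    """Whole emeralds obtained by trading away one full stack."""
    return stack_size // items_per_emerald

# Rates from the comment above (maxed farmer/librarian, no discounts)
pumpkin = emeralds_per_stack(6)   # 10 emeralds per stack
melon   = emeralds_per_stack(4)   # 16 emeralds per stack
paper   = emeralds_per_stack(24)  # 2 emeralds per stack

print(pumpkin, melon, paper)
```

So a chest slot of melons is worth roughly 8x what a slot of paper is, which is why overflow storage fills with emeralds instead of junk.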
Pumpkins can also be used to craft jack-o-lanterns, which are convenient early in the game as lit blocks, and the seeds can go straight into a composter (which you’ll conveniently have right next to the farmer).
Also you need leather to craft books (unless you buy bookshelves and chop them down, which I find annoying), which brings us back to cows, and if you have cows you may as well have sheep and pigs, and a butcher.
By the time you can craft sugar cane auto farmers you can craft pumpkin auto farmers, which are more emerald efficient. Until then I would recommend a meat farm (pigs/sheep/cows) to level butchers, and sweet berries (which grow insanely fast and just need dirt and light) for emeralds.
What games do they know? Can you draw any analogies?
I will never forgive JSON for not allowing commas after the last element in a list.
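For anyone who hasn’t been bitten by this: strict parsers really do reject the trailing comma, as Python’s stdlib shows:

```python
import json

# Valid JSON: no comma after the last element
print(json.loads('[1, 2, 3]'))

# Trailing comma after the last element: rejected by the spec
try:
    json.loads('[1, 2, 3,]')
except json.JSONDecodeError as err:
    print("rejected:", err)
```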
The article’s author mentioned that the problem is not limited to Samsung TVs; someone reported the same issue on their phone.
The article does not mention a root cause, but my theory is that it’s a malformed subtitle track. I tend to watch with subtitles on, so I run into related issues every once in a while. Most of the time it’s one of two things:
The latter can have multiple effects depending on what format the subs are in, but most of the time it’s a missing end time, meaning the subtitle stays on screen. However, some formats also have cues for who the speaker is, and those come with start and end tags like in HTML. I suspect that in this case the end tag is either missing or misplaced in the syntax tree, causing this one line of dialogue to be displayed over and over whenever the player reaches other lines that match its cue but don’t get shown because the user has turned subtitles off.
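To make the missing-end-time failure mode concrete, here’s a toy WebVTT-style timing check. The regex and function are illustrative, not a real subtitle parser:

```python
import re

# A WebVTT-ish timing line: "HH:MM:SS.mmm --> HH:MM:SS.mmm"
# The end timestamp is optional in this pattern so we can detect its absence.
CUE_RE = re.compile(
    r"(?P<start>\d\d:\d\d:\d\d\.\d\d\d) --> (?P<end>\d\d:\d\d:\d\d\.\d\d\d)?"
)

def cue_end(timing_line):
    """Return the cue's end time, or None if it's malformed/missing."""
    m = CUE_RE.match(timing_line)
    return m.group("end") if m else None

print(cue_end("00:01:02.000 --> 00:01:05.000"))  # well-formed cue
print(cue_end("00:01:02.000 --> "))              # no end time: sub never clears
```

A player that trusts the end time to clear the subtitle would leave a cue like the second one on screen indefinitely.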
As to why this is bleeding into other shows: I suspect it’s an issue with how the software clients are caching the subtitle files. This would also explain why going back into the episode that caused this fixes things, because it would reset the cached file. Which in turn brings me back to pointing the finger at Amazon, not Samsung, because Samsung would just be loading Amazon’s software client to play the video and subtitles.
Transparent vs translucent.
Since FF 6 and 7 have already been mentioned, I’m going to give an honorable mention to Shining Force.
User-serviceable switches would be so nice…
Deep fried AI.
The patients or their families don’t even get the gift card; that goes to the hospital.
They do have a bpf sensor. It’s still shite, managing to periodically peg a CPU core on an idle system. They just lifted and shifted their legacy code into the bpf sensor, they don’t actually make good use of eBPF capabilities.
If the sensor was using eBPF (as any modern sensor on Linux should) then the faulty update would have made the sensor crash, but the system would still be stable. But CrowdStrike has a long history of using stupid forms of integration, so I wouldn’t put it past them to also load a kernel module that fucks things up unless it’s blacklisted in the bootloader. Fortunately that kind of recovery is, if not routine, at least well documented and standardized.
The London Underground is actually kind of a dumb use-case because it’s fixed infrastructure.
On the other hand it’s a perfect test bed, because there are enough changes of direction and speed, and the fixed infrastructure lets you measure drift. Plus, being underground helps simulate GPS signal being weak or unavailable.
When I was younger and more naïve, I used to think a case was useless. I kept my phones in my pocket most of the time, and didn’t feel particularly clumsy or reckless. Then I got a phone that happened to have a glass back, and it broke not because I fumbled it, but because it slid out of my pocket onto the floor while I was sitting down. Glass backs on phones are bullshit.
You can, but you’ll need to increase the microwave’s power accordingly.
That’s LLM AI, but the type I’m talking about is the machine learning kind. I can envision a system that takes e.g. a sample’s test data and provides a summary, which is not far from what doctors do anyway. If you ever get a blood test’s results explained to you, it’s “this value is high, which would be concerning except that this other value is not high, so you’re probably fine regarding X. However, I notice that this other value is low, and this can be an indicator of Y. I’m going to request a follow-up test regarding that.” Yes, I would trust an AI to give me that explanation, because those are very strict parameters to work with, and the input comes from a trusted source (lab results and medical training data) and not “Bob’s shrimping and hula hoop dancing blog”.
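That kind of strict, cross-referenced reasoning is basically a rule system. A toy sketch (marker names and thresholds are entirely made up, and this is obviously not medical advice):

```python
def interpret(results):
    """Toy rule-based lab summary: each rule cross-references markers,
    mirroring the 'high, but only concerning if...' reasoning above."""
    notes = []
    if results["marker_a"] > 10:
        if results["marker_b"] > 5:
            notes.append("marker_a and marker_b both high: possible X, follow up")
        else:
            notes.append("marker_a high but marker_b normal: likely fine for X")
    if results["marker_c"] < 1:
        notes.append("marker_c low: can indicate Y, request follow-up test")
    return notes

print(interpret({"marker_a": 12, "marker_b": 3, "marker_c": 0.5}))
```

The point is that every output is traceable to an explicit rule over trusted inputs, which is exactly what makes it more auditable than free-form LLM output.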