That fork seems like a cash grab considering it already has a Patreon.
Have they learned nothing from the lawsuit?
Eh, the built-in speakers on most TVs are pretty trash across the board these days. You pretty much need a sound bar at the very least.
IBM is still just as active, just not in the consumer markets anymore. They’re big into industry research and more specialized computing these days.
I like to imagine that this whole event was the result of the first truly rogue AI that generated its own plans for an event, sent out the necessary emails to hire the people to put it together, and everything in secret under its creator’s nose.
It probably isn’t that, though. Because even AI wouldn’t fuck up this badly.
Weird, maybe the Pixel build is slightly different, because that’s not happening on mine and I believe I’ve already got the latest updates for it.
It’s default? When I downloaded it, I had to manually choose Gemini to be the default. It wasn’t set like that for me.
I don’t believe USPS can open packages without a warrant (which is why they’re the preferred courier for drugs), and I don’t think “multiple packages going to a wrong address” counts as probable cause. But it’s been a minute since I’ve been involved in that end of things, so I dunno if that’s still current protocol.
That’s why you use a fake return address that doesn’t exist. Allowing your product to get into real people’s hands was just asking for trouble.
It was always AI, what are you talking about?
It’s copyright infringement to do so. No need to get the Beehaw admins in trouble; Google paywall bypassing tools and read away.
I think implying that it has a bias is giving the Advanced Auto Prediction Engine a bit too much credit.
Just don’t search that if you’ve also been searching for any flights recently.
The fleet of cars is summoned back to the HQ to have the update installed, so it causes a temporary service shutdown until cars are able to start leaving the garage with the new software. They can’t do major updates over the air due to the file size; pushing out a multi-gigabyte update to a few hundred cars at once isn’t great on the cellular network.
It’s pretty handy for things like being able to just say “hey Google, unlock the door” when I’m carrying a dozen bags of groceries.
I use automations as well, but sometimes I need something done outside of my otherwise-considered parameters. And it’s easier to just yell your wish into being than to take out your phone, open an app, select the device, then pick your command.
They’ve already been testing on private tracks for years. There comes a point where, eventually, something new is used for the first time on a public road. Regardless, even with idiotic crashes like this one, they’re still safer than human drivers.
I say my tax-dollar-funded DMV should put forth a significantly more stringent driving test and auto-revoke the licenses of anybody who doesn’t pass, before I’d want SDCs off the roads. Inattentive drivers are one of the most lethal things in the world, and we all just kinda shrug our shoulders and ignore that problem, but then we somehow take issue when a literal supercomputer on wheels with an audited safety history far exceeding any human driver has two hiccups over the course of hundreds of millions of driven miles. It’s just a weird outlook, imo.
After an investigation, Waymo found that its software had incorrectly predicted the future movements of the pickup truck due to “persistent orientation mismatch” between the towed vehicle and the one towing it.
Having worked at Waymo for a year troubleshooting daily builds of the software, this sounds to me like they may be trying to test riskier, “human” behaviors. Normally, the cars won’t accelerate at all if the lidar detects an object in front of it, no matter what it thinks the object is or what direction it’s moving in. So the fact that this failsafe was overridden somehow makes me think they’re trying to add more “What would a human driver do in this situation?” options to the car’s decision-making process. I’m guessing somebody added something along the lines of “assume the object will have started moving by the time you’re closer to that position” and forgot to set a backup safety mechanism for the event that the object doesn’t start moving.
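To make my guess concrete, here’s a toy sketch of that failure mode: a prediction-based override added without a fallback for when the prediction turns out wrong. Every name here is hypothetical; this is just an illustration of the speculation, not Waymo code.

```python
# Hypothetical illustration only -- not actual Waymo logic or names.

def safe_to_accelerate(object_ahead: bool,
                       predicted_clear: bool,
                       actually_clear: bool) -> bool:
    """Buggy version: trusts the motion prediction without verifying it."""
    if not object_ahead:
        # Nothing detected by lidar; free to accelerate.
        return True
    if predicted_clear:
        # "Assume the object will have started moving by the time we're
        # closer" -- but with no re-check against reality, a wrong
        # prediction means accelerating into a stationary object.
        return True  # <-- missing backup check on `actually_clear`
    return False


def safe_to_accelerate_fixed(object_ahead: bool,
                             predicted_clear: bool,
                             actually_clear: bool) -> bool:
    """Fixed version: keep the human-like prediction, but verify it."""
    if not object_ahead:
        return True
    # Only accelerate if the prediction AND current sensor data agree.
    return predicted_clear and actually_clear
```

With a towed pickup that never moves as predicted (`predicted_clear=True`, `actually_clear=False`), the buggy version says go and the fixed one says stop, which is the kind of one-line safety gap I’m imagining slipped through.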
I’m pretty sure the dev team also has safety checklists that they go through before pushing out any build, to make sure that every failsafe is accounted for, so that’s a pretty major fuckup to have slipped through the cracks (if my theory is even close to accurate). But luckily, a very easily fixed fuckup. They’re lucky this situation was just “comically stupid” instead of “harrowing tragedy”.
I wasn’t asking about the car’s logic algorithm; we all know that the SDC made an error, since it [checks notes] hit another car. We already know it didn’t do the correct thing. I was asking how else you think the developers should be working on the software other than one thing at a time. That seemed like a weird criticism.
So their plan is to fix one accident at a time…
Well how else would you do it?
This is a really awesome article that explains the technical aspects in a way that makes sense to non-coders, without having to oversimplify. I feel like this sort of writing should be much more appreciated. Also, the graphic at the top has no business being that good; this whole piece is a banger.
This only hides content locally for Threads users, it doesn’t affect visibility from any other fedi platform. It’s not that different from a Lemmy instance downvoting a comment to the point of being auto-hidden; it still exists but requires an extra click to see from your instance, and the rest of the fediverse can access it normally.