

  • Cory Doctorow actually goes more in depth on the radiologist example in a post from last year:

    'If my Kaiser hospital bought some AI radiology tools and told its radiologists: “Hey folks, here’s the deal. Today, you’re processing about 100 x-rays per day. From now on, we’re going to get an instantaneous second opinion from the AI, and if the AI thinks you’ve missed a tumor, we want you to go back and have another look, even if that means you’re only processing 98 x-rays per day. That’s fine, we just care about finding all those tumors.”

    If that’s what they said, I’d be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market’s bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists’ job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it’s catastrophically wrong.

    “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

    This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.’

    In short, we definitely could (and indeed should) be using tools like tumor-detecting machine vision to help humans build a better world for humans. But we’ve seen, time and time again across countless fields, that it never works out that way.

    That’s because this isn’t a problem with the technology of AI, but with the fucked-up sociotechnical and economic systems that govern how this tech is used: who gets to use it, who it gets used on, whose consent is required for those uses, and most significant of all, who gets to profit?

    Not us, that’s for sure!






  • I get like this often. I’m fortunate enough to have friends with whom I have enough of a rapport that we have shorthands for stuff like this (usually established after a couple of attempts to articulate this sentiment, or more commonly, months after it would have been ideal to articulate it, after many guilt-ridden apologies for not replying).

    For example, with one of my friends, we use a black heart emoji to relay the sentiment in the OP (black to distinguish it from the generic heart react). I like having this shorthand because it means that when I receive a message from a friend and I’m too low brain to have any hope of replying, I’m more likely to be able to at least read the message: being able to trivially communicate both my affection and my burnout makes reading the message feel less overwhelming, and disrupts the cycle of shame. And sometimes reading that message from a friend is exactly what I needed if I’m feeling low.





  • The idea of copyright is to protect the financial rights of creatives, thus incentivising people to make more stuff, right?

    Well even before AI, it wasn’t doing its job very well on that front. The only ones with the power and money to be able to leverage copyright to protect their rights are those who are already so powerful that they don’t need those protections — big music labels and the like. Individual creatives were already being fucked over by the system long before AI.

    If you haven’t read the article, I’d encourage you to give it a try. Or perhaps this one, which goes into depth on the intrinsic tensions within copyright law.


  • Something that I’m super chuffed with is that a few years back, one of my most cheapskate friends asked me for advice on buying a new laptop. When I presented their options to them, they were reluctant to cheap out and get a mediocre laptop that wouldn’t last them very long, but they also balked at the price of even the midrange laptops (they weren’t keen on spending more than £250 on a laptop, which wasn’t enough to get anything that they’d consider to be decent and worth the effort/cost).

    As a long-shot offer, I told them that I could always try installing Linux to wring another couple of years out of their existing laptop. I was a tad surprised when they opted for this, and even more surprised at how well they took to it; I jokingly call them one of my “normie” friends, because they’re one of the people whose perspective I ask for when I’m trying to calibrate what non-techie people know/think. I only had limited experience with Linux myself at that point, having just played around with things on live USBs before. I had heard that Linux could give new life to slow computers, but I was surprised at just how effectively it did this.

    (A small amusing aspect of this anecdote is that while I was installing it, I said that one of the side benefits of running Linux is that it could boost their nerd cred amongst folk like me. They laughed and said that they didn’t expect this would ever end up being relevant. Later that year, they got a girlfriend who saw that my friend was running Linux and expressed approval, which is quite funny to me.)







  • So many features like this have gotten so much worse over the years. Google Assistant is the big standout one for me. I first switched to Android in 2014ish, and I got heavily into tinkering and automating stuff. I could say “Okay Google, make a coffee”, or “pop a coffee on please”, and Google Assistant would hear this, parse it, and recognise that this wasn’t a command it knew. The input would then be passed over to Tasker, the app I used for automating stuff, which would do the behind-the-scenes magic of turning on the coffee brewer as I was on my way home. (It was very funny, because I didn’t have a fancy smart coffee pot or anything: I just used a ball bearing on a track to hit the on button.)

    Nowadays, I say something simple like “Okay Google, make a note” and it will say “I’m sorry, I don’t understand that” more often than not. The speech recognition used to be so good, especially after training it on your voice for a while. Now it’s just shit.

    It makes me disproportionately sad. Like, enshittification is everywhere, but this is something distinct, even if it is linked to enshittification. If they were gating better voice recognition behind paywalls, I’d be annoyed, but much less sad, because at least that functionality would still exist. Modern software, especially that produced by the tech giants, has gotten so complex that I wonder whether even the most proficient engineers at Google understand their software nowadays.