• 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: July 31st, 2023



  • It’s rough. Children shriek, scream, and make a lot of noise while playing. I don’t really blame the children or even really their parents. Children need freedom to explore existence and sadly that includes making noise.

    However, that kind of noise can be extremely difficult for some people. Though I can deal with it, it upsets me. My partner cannot deal with it. It may seem trivial to some, but just knowing that for most of the day and evening a blood-curdling scream might randomly happen right outside our apartment door or window (just from neighborhood children playing) raises the base level of anxiety a few notches. When it happens - which it absolutely does - it can startle the hell out of us and destroy our nerves for quite a while. …and it’s a strong possibility virtually anywhere we go.

    I don’t know how else to protect ourselves other than to discriminate against families with children. I have nothing against them, but we deserve to have a decent quality of life too and them just existing nearby makes that impossible for us.


  • I think it would be infinitely better for an LLM to walk a user through the use of the formula in their specific use case rather than do it for them… but that won’t sell as well, because most people don’t want to learn to use a spreadsheet - they just want to do a thing and move on to something else. This is how it is sold, and this is why it is used, in most cases. It’s not a hammer that people misused despite there being nothing in the sales material about its usefulness as a bludgeoning device against other humans. LLMs, spreadsheet copilots included, are commonly packaged and sold as a magic solution that will just do the work for you, with an asterisk and fine print stating that it’s for entertainment purposes only and that they aren’t liable for any false information, or whatever bullshit clause they come up with. People use it as it is sold to them, and that’s what worries me.

    another optional tool at users’ disposal.

    I just had my place of work upgrade me to Windows 11 this week. In order to install Office, I was directed by Microsoft to download the “Office 365 Copilot” app, which downloaded the Office installer. Copilot is not subtle. It may be technically optional, but good lord does it want you to know about it and use it for everything.

    And no, I haven’t tried it yet. I will likely be trying it and Gemini soon out of curiosity. Last time I used it, I was given hallucinated, nonexistent Python modules and PowerShell commands that wasted my time. That was a year or so ago, though.


  • it’s not a replacement for a human brain, it’s an assistant.

    This is what I think AI and automation are generally good at and should be used for - mitigating unpleasant or repetitive work so that the user can focus on productivity and creativity.

    This is what this integration is for - it’s not a replacement for a human brain, it’s an assistant. As are all LLMs.

    The context is something we disagree on wholeheartedly. Those funding and fundraising for AI, and an enormous subset of those using it, are not looking to use AI in the way we are talking about. The former are hoping to extract value from it at the expense of people who would otherwise need to be paid, and they claim it can do anything and everything. Those using it - many of them - do not have sufficient understanding to comprehend the solution. They are basically “vibe coding”: tell the LLM to do something they aren’t knowledgeable about, then keep telling it to fix the problems until they don’t see problems anymore. Yes, spreadsheet formulas are likely simpler than an app, but I know people who use AI for Google Sheets and they rarely test any results, let alone rigorously.

    Anecdotal, sure, but I don’t have enough faith in humanity to presume everyone else is doing something wildly different.

    Edit: To expand, LLMs specifically are what I consider to be the worst side of “AI”. You can use ML and neural networks to create “AI” (self-altering, alien black-box algorithms) that becomes proficient at analyzing information and solving problems. LLMs create a situation where the model appears intelligent because it knows how to mimic language… and so now we pretend it can do whatever people can do.



  • For me, it’s hard to say one or the other is worse. It might depend on circumstances. Is someone making a false claim of rape worse than someone raping someone, or worse than a child rapist, serial rapist, etc.? In damage collectively done to all victims, quite possibly. However, the intention of the false witness isn’t usually to delegitimize the claims of other victims. It’s usually shortsighted and desperate - a desire to avoid social consequences or to punish the accused. I suspect such false claims are also more common among a younger, naive, immature population. It’s fucked up and selfish, surely. As fucked up as feeling you have the right to use another person’s body for your own physical pleasure or to assert dominance or whatever shit goes through a rapist’s mind? I don’t know.

    Frankly, I’ve never been the victim or perpetrator of either crime, nor have I even been in a situation where either crossed my mind. All I’m qualified to say is that I wish neither crime would happen to anyone.



  • I switched from Manjaro to Bazzite on my gaming PC. I don’t have time to read changelogs.

    Things went fantastically, so I put Kinoite on my laptop. I do a lot more random shit on the laptop, so it’s a bit more complicated, but so far so good. Atomic distros take some getting used to, but it still feels less stressful than coming back to my computer after a few days, digging through 100+ package updates, and eventually saying “fuck it” and updating blindly.


  • Clearly the author doesn’t understand how capitalism works. If Apple can pick you up by the neck, turn you upside down, and shake whatever extra money it can from you then it absolutely will do so.

    The problem is that one indie developer doesn’t have any power over Apple… so they can go fuck themselves. The developer is granted the opportunity to grovel at the feet of their betters (richers) and pray that they are allowed to keep enough of their own crop to survive the winter. If they don’t survive… then some other dev will probably jump at the chance to take part in the “free market” and demonstrate their worth.



  • I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters as they process data. At least as far as I know, the base algorithm and hyperparameters for the model are set in stone.

    The base algorithm for “living” things is limited basically only by chemistry/physics and evolution. I doubt anyone could create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we are using ML to create new molecules because we don’t understand the chemistry well enough ourselves.
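    The parameter/hyperparameter distinction above can be sketched with a toy gradient-descent loop (a minimal illustration I made up for this point, not any particular framework’s API): the weight is a parameter the model adjusts while “learning”; the learning rate and epoch count are hyperparameters fixed before training ever starts.

```python
# Toy illustration: parameters are adjusted during "learning",
# while hyperparameters are chosen up front and never change.
def train(data, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0                             # parameter: the model adjusts this
    for _ in range(epochs):             # epochs: hyperparameter, fixed
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d(error)/dw for this sample
            w -= lr * grad              # lr: hyperparameter, fixed
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])     # data sampled from y = 2x
print(round(w, 3))                      # prints 2.0
```

    The loop only ever rewrites `w`; nothing in it can change `lr`, `epochs`, or the update rule itself - which is the sense in which the base algorithm is “set in stone”.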


  • I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs or at least a little generous if not mystical in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.

    I don’t think I’m doing either of those things. I respect the scale and speed of the models and I am well aware that I’m little more than a machine made of meat.

    Babies start out mimicking. The thing is, they learn.

    Humans learn so much more before they start communicating. They start learning reason, logic, etc as they develop their vocabulary.

    The difference is that, as I understand it, these models are often “trained” on very, very large sets of data. They have built a massive network of the ways words are used in communication - likely built from more text than a human could process in several lifetimes. They come out of the gate with an enormous vocabulary and an understanding of how to mimic and replicate its use. If they had been trained on just as much data, but data unrelated to communication, would you still think them capable of reasoning without the ability to “sound” human? They have the “vocabulary” and references to mimic a deep understanding, but because we lack the ability to understand the final algorithm, it seems like an enormous leap to presume actual reasoning is taking place.

    Frankly, I see no reason for models like LLMs at this stage. I’m fine putting the brakes on this shit - even if we disagree on the reasons why. ML can be and has been employed to achieve far more practical goals: use it alongside humans for a while until it is verifiably more reliable at some task - recognizing cancer in imaging, or generating molecules likely to achieve a desired effect. LLMs are just a lazy shortcut to look impressive and sell investors on the technology.

    Maybe I am failing to see reality - maybe I don’t understand the latest “AI” well enough to give my two cents. That’s fine. I just think it’s being hyped because these companies desperately need VC money to stay afloat.

    It works because humans have an insatiable desire to see agency everywhere they look. Spirits, monsters, ghosts, gods, and now “AI.”


  • Yes, both systems - the human brain and an LLM - assimilate and organize human written language in order to use it for communication. An LLM is very little else beyond this. It is then given rules (using those written languages) and designed to create more related words when given input. I just don’t find it convincing that an ML algorithm designed explicitly to mimic human written communication in response to given input “understands” anything. No matter how convincingly an algorithm might reproduce a human voice - perfectly matching intonation and inflection when given text to read - if I knew it was an algorithm designed to do so as convincingly as possible, I wouldn’t say it was capable of the feeling it is able to express.

    The only thing in favor of sentience is that the ML algorithms modify themselves and end up being black boxes - so complex, and with no comprehensible way to represent them, that they are impossible for humans to fully understand. Could one somehow have achieved sentience? Technically, yes, because we don’t understand how they work. We are just meat machines, after all.




  • Thanks for being so detailed!

    I use Caddy for straightforward HTTPS, but every time I try to use it for a service that isn’t just a reverse_proxy entry, I really struggle to find resources I understand… and most of the time the “solutions” I find are outdated and don’t seem to work. The most recent example of this for me would be Baikal.
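    For reference, here’s the kind of entry I mean - a sketch of serving Baikal directly through Caddy’s php_fastcgi directive. The domain, web root, and PHP-FPM socket path are assumptions for my setup and will vary:

```
baikal.example.com {
    # Baikal's public web root (path is an assumption)
    root * /var/www/baikal/html
    php_fastcgi unix//run/php/php-fpm.sock
    file_server

    # CalDAV/CardDAV clients probe these well-known paths
    redir /.well-known/caldav /dav.php 301
    redir /.well-known/carddav /dav.php 301
}
```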

    Do you have any recommendations for where I might find good examples and learn more about how to troubleshoot and improve my Caddyfile entries?

    Thanks!


  • I replied to the following statement:

    I could look up my dad’s name and all I get are articles about a serial killer who just happened to have the same name

    I countered this dismissal by quoting the article, which explains that it was more than just a coincidental name mix-up.

    Your response is not really relevant to mine, unless you are assuming I’m arguing for one side or the other. I’m just informing someone who dismissed the article’s headline with an explanation that demonstrated they didn’t bother to read the article.

    Nothing is wrong with the tech (except it doesn’t seem very useful when you firmly know what it can’t do), but everything is wrong with that tech being called artificial intelligence.

    If the owners of the technology call it artificial intelligence and hype or sell it as a potential replacement for intelligent human decision-making, then it should absolutely be judged on those grounds.



  • There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely still be forced to deal with whatever AI bots are pushed on you while within the walls, but better an enemy you know, I guess?