

More than enough people who claim to know how it works think it might be “evolving” into a sentient being inside its little black box. Example from a conversation I gave up on… https://sh.itjust.works/comment/18759960
I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters as they process data. At least as far as I know, the base algorithm and hyperparameters for the model are set in stone.
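To make that concrete, here’s a minimal sketch (toy numbers, plain Python) of what “learning” means in that sense - the parameters change, but the rules for changing them never do:

```python
# Minimal sketch of ML "learning": a one-feature linear model fit by
# gradient descent. Only the parameters w and b ever change; the update
# rule and the learning rate (a hyperparameter) are fixed up front.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # made-up (x, y) pairs

w, b = 0.0, 0.0        # parameters: adjusted during "learning"
learning_rate = 0.01   # hyperparameter: set in stone before training

for _ in range(1000):
    for x, y in data:
        error = (w * x + b) - y         # prediction minus target
        w -= learning_rate * error * x  # gradient step for w
        b -= learning_rate * error      # gradient step for b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near w=2, b=0
```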
The base algorithm for “living” things is basically only limited by chemistry/physics and evolution. I doubt anyone could create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we are using ML to create new molecules precisely because we don’t understand the underlying chemistry well enough ourselves.
I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs, or at least a little generous, if not mystical, in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.
I don’t think I’m doing either of those things. I respect the scale and speed of the models and I am well aware that I’m little more than a machine made of meat.
Babies start out mimicking. The thing is, they learn.
Humans learn so much more before they start communicating. They start learning reason, logic, etc. as they develop their vocabulary.
The difference is that, as I understand it, these models are often “trained” on very, very large sets of data. They have built a massive network of the ways words are used in communication - likely built from more text than a human could process in several lifetimes. They come out of the gate with an enormous vocabulary and an understanding of how to mimic and replicate its use. If they had been trained on just as much data, but data unrelated to communication, would you still think them capable of reasoning without the ability to “sound” human? They have the “vocabulary” and references to mimic a deep understanding, but because we lack the ability to understand the final algorithm, it seems like an enormous leap to presume actual reasoning is taking place.
Frankly, I see no reason for models like LLMs at this stage. I’m fine putting the brakes on this shit - even if we disagree on the reasons why. ML can be, and has been, employed to achieve far more practical goals. Use it alongside humans for a while until it is verifiably more reliable at some task - recognizing cancer in imaging, or generating molecules likely to achieve a desired effect. LLMs are just a lazy shortcut to look impressive and sell investors on the technology.
Maybe I am failing to see reality - maybe I don’t understand the latest “AI” well enough to give my two cents. That’s fine. I just think it’s being hyped because these companies desperately need VC money to stay afloat.
It works because humans have an insatiable desire to see agency everywhere they look. Spirits, monsters, ghosts, gods, and now “AI.”
Yes, both systems - the human brain and an LLM - assimilate and organize written human language in order to use it for communication. An LLM is very little else beyond this. It is then given rules (using that written language) and designed to produce more related words when given input. I just don’t find it convincing that an ML algorithm designed explicitly to mimic human written communication in response to given input “understands” anything. No matter *how convincingly* an algorithm might reproduce a human voice - perfectly matching intonation and inflection when given text to read - if I knew it was an algorithm designed to do that as convincingly as possible, I wouldn’t say it was capable of the feeling it is able to express.
The only thing in favor of sentience is that the ML algorithms modify themselves and end up being a black box - so complex, with no way to represent them, that they are impossible for humans to comprehend. Could it somehow have achieved sentience? Technically, yes, because we don’t understand how they work. We are just meat machines, after all.
LLMs (Large Language Models, like Claude) are not AGIs (Artificial General Intelligence). LLMs generate convincing text by mapping the relationships between words scraped from their training data. Even if they are given “tools” that give them interfaces to reference new data or output data into other systems, they still don’t really learn, understand, comprehend, gain actual awareness, or feel… they just mimic their training data.
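A toy illustration of that “mapping the relationships between words” idea (nothing like a real transformer, just the principle in miniature, in plain Python):

```python
# Toy version of "mapping relationships between words": count which
# word follows which in the training text, then generate by sampling
# from those counts. It can only ever remix what it was trained on.
import random
from collections import defaultdict

text = "the cat sat on the mat and the dog slept on the mat".split()

follows = defaultdict(list)        # word -> words observed after it
for a, b in zip(text, text[1:]):
    follows[a].append(b)

word, output = "the", ["the"]
for _ in range(8):
    if word not in follows:        # dead end: no observed successor
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the mat and the cat sat on the mat"
```

It will happily produce fluent-looking strings, but every word choice is just a replay of statistics from the training text.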
To be fair… since the Xbox One and PS4, those console families have used the same architecture as PCs. If I remember correctly, Xbox consoles actually run a custom version of Windows and use DirectX.
Thanks for being so detailed!
I use Caddy for straightforward HTTPS, but every time I try to use it for a service that isn’t just a reverse_proxy entry, I really struggle to find resources I understand… and most of the time the “solutions” I find are outdated and don’t seem to work. The most recent example of this for me would be Baikal.
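For context, here’s roughly where I’ve ended up for Baikal, mostly stitched together from its nginx sample config - the hostname, webroot, and PHP socket path are placeholders, so treat this as a sketch of my attempt rather than a known-good config:

```
dav.example.com {
	root * /var/www/baikal/html
	php_fastcgi unix//run/php/php-fpm.sock
	file_server

	# Baikal's docs say these paths shouldn't be reachable from outside
	@blocked path /Specific/* /config/*
	respond @blocked 403

	# well-known redirects so clients can auto-discover CalDAV/CardDAV
	redir /.well-known/caldav /dav.php 301
	redir /.well-known/carddav /dav.php 301
}
```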
Do you have any recommendations for where I might find good examples and learn more about how to troubleshoot and improve my Caddyfile entries?
Thanks!
I replied to the following statement:
I could look up my dad’s name and all I get are articles about a serial killer who just happened to have the same name
I countered this dismissal by quoting the article, which explains that it was more than just a coincidental name mix-up.
Your response is not really relevant to my response, unless you are assuming I’m arguing for one side or the other. I’m just informing someone who dismissed the article’s headline with an explanation that demonstrated they didn’t bother to read the article.
Nothing is wrong with the tech (except that it doesn’t seem very useful once you firmly know what it can’t do), but everything is wrong with that tech being called artificial intelligence.
If the owners of the technology call it artificial intelligence and hype or sell it as a potential replacement for intelligent human decision making, then it should absolutely be judged on those grounds.
ChatGPT’s “made-up horror story” not only hallucinated events that never happened, but it also mixed “clearly identifiable personal data”—such as the actual number and gender of Holmen’s children and the name of his hometown—with the “fake information,” Noyb’s press release said.
There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better the enemy you know, I guess?
The only AI company that responded to Ars’ request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.
Ah yes. It’s extremely common for one of the top companies in an industry to spitefully expend resources fighting the irrelevant efforts of…
One or two people
Please, continue to grace us with your unbiased wisdom. Clearly you’ve read the article and aren’t just trying to simp for AI or start flame wars like a petulant child.
Fundraisers and charities, when you have a lot of money, are rarely acts of charity. They tend to be PR campaigns and power plays.
Honestly, even when the acts have good intentions, they are often quite damaging. The involvement of the wealthy in charity is very similar to their involvement in politics. Their wealth buys influence and gives them a disproportionate say that allows them to ignore and overrule the will of the people and sometimes even reality.
For example, look into the impact of Bill Gates’s “acts of charity” in the education space. He poured money into charter programs that negatively impacted public education. Later studies showed that his programs were not particularly effective.
Let’s say, hypothetically, that a very rich person is convinced by some charlatan that they have found a means to produce free energy. The wealthy person throws tons of money at the idea. How many talented people will be pulled away from legitimate programs because the paycheck at Bullshit Energy Nonprofit is better? These rich people are successful and think they know best. Their money ensures they get treated like experts, because money makes things happen whether or not those things are helpful.
Sadly, old Google doesn’t work either, thanks to SEO and AI-generated garbage.
The problem with search is that the motives of those being searched aren’t to provide you with the most helpful answer. The motives are to get you to visit their website then stay/click/buy as much as possible. They’ll tailor their content to match whatever algorithm the engine is using.
That’s why Google’s new plan is to collect all of the information ahead of time and skip the “visit other websites” step. Then you can stay/click/buy on their website as much as possible.
Seriously though. Just skip all this nonsense, you selfish piece of shit, and open your wallet so the hungry corpos can feast on its contents - they have poor, innocent, starving shareholders to feed… you monster.
I’m still using my HL-5280DW. The wireless (and later the wired networking) stopped working, so I connected it to an old Pi I had lying around to print to it over the network.
Only downside is no Windows 11 driver support (thanks, new work laptop) if I connect directly via USB.
I think I changed my toner for the first time like 2 years ago. The high capacity toner I bought with the printer worked just fine (after 16 years in my closet) when I installed it. I don’t expect it to run out of toner until I’m long dead.
Absolutely. Dumb TVs going forward. It’s unfortunate that the best screens, like those made by Samsung, are ruined by surveillance and by hardware that can’t run the “smart” OS for more than a few years without eventually running like dog shit.
More like they need to have everyone use the app so that they can offer “AI Assistant” features through it.
Apparently our typical installer for Visio 2016 and our 365 license use “incompatible installers,” so it is going to be a pain in the ass for me to have both installed at the same time. Thankfully I’m trusted by IT, so I might be able to just do it myself.
Edit: Looks like I’ll need IT after all. https://learn.microsoft.com/en-us/deployoffice/use-the-office-deployment-tool-to-install-volume-licensed-editions-of-visio-2016
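For anyone who finds this later: the gist of that doc is a small XML config fed to the Office Deployment Tool’s setup.exe. Something like the following, going off my skim of the page - the product ID and placeholder key are my best reading of it, so double-check against the doc before using:

```xml
<Configuration>
  <Add OfficeClientEdition="64">
    <!-- "VisioProXVolume" should be the volume-licensed Visio 2016
         product, per the linked doc; PIDKEY is your org's MAK/KMS key -->
    <Product ID="VisioProXVolume" PIDKEY="#####-#####-#####-#####-#####">
      <Language ID="en-us" />
    </Product>
  </Add>
</Configuration>
```

Then it gets applied with `setup.exe /configure config.xml` from the ODT.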
Now if only it would stop dropping leading zeros unless you ask it to.
That appears to actually be a feature.
I think they actually want to drive Fitbit Premium subscriptions with the watch. Not required, but recurring user spending…
Clearly the author doesn’t understand how capitalism works. If Apple can pick you up by the neck, turn you upside down, and shake whatever extra money it can from you then it absolutely will do so.
The problem is that one indie developer doesn’t have any power over Apple… so they can go fuck themselves. The developer is granted the opportunity to grovel at the feet of their betters (richers) and pray that they are allowed to keep enough of their own crop to survive the winter. If they don’t survive… then some other dev will probably jump at the chance to take part in the “free market” and demonstrate their worth.