

I don’t have a specific service to recommend, but you might look at lowendbox.com, which specializes in listing inexpensive VPS services.
Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.




“Restraint’s probably not the perfect word,” but the president may start exhibiting “a little more contemplation and thoughtfulness,” Dolen suggested.
I mean, theoretically, yes…


even at JEDEC speeds.
My last Intel motherboard couldn’t handle all four slots filled with 32GB of memory at rated speeds. Any two sticks yes, four no. From reading online, apparently that was a common problem. Motherboard manufacturers (who must have known that there were issues, from their own testing) did not go out of their way to make this clear.
Maybe it’s not an issue with registered/buffered memory, but with plain old unregistered DDR5, I think that manufacturers have really been selling product above what it can realistically do.


Anecdotal evidence, but I had both a 13th gen and a 14th gen Intel CPU with the bug that caused them, over time, to destroy themselves internally.
The most-user-visible way this initially came up, before the CPUs had degraded too far, was Firefox starting to crash, to the point that I initially used Firefox hitting some websites as my test case when I started the (painful) task of trying to diagnose the problem. I suspect that it’s because Firefox touches a lot of memory, and is (normally) fairly stable — a lot of people might not be too surprised if some random game crashes.


The problem is that ECC is one of the things used to permit price discrimination between server (less price sensitive) and PC (more price sensitive) users. Like, there’s a significant price difference, more than cost-of-manufacture would warrant. There are only a few companies that make motherboard chipsets, like Intel, and they have enough price control over the industry that they can do that. You’re going to be paying a fair bit more to get into the “server” ecosystem, as a result of that.
Also…I’m not sure that ECC is the right fix. I kind of wonder whether the problem is actually broken memory, or people manually overclocking, running memory at too high a rate that would be stable at a lower one, which will cause exactly these symptoms. Or whether BIOSes, which can automatically detect a viable rate by testing memory, are simply being too aggressive in choosing high rates.
EDIT: If it is actually broken memory and only a region of memory is affected, both Linux and Windows have the ability to map around detected bad regions in memory, if you have the bootloader tell the kernel about them and enough of your memory is working to actually get your kernel up and running during initial boot. So it is viable to run systems that actually do have broken memory, if one can localize the problem.
https://www.gnu.org/software/grub/manual/grub/html_node/badram.html
Something like MemTest86 is a more-effective way to do this, because it can touch all the memory. However, you can even do runtime detection with Linux up and running, using something like memtester, so hypothetically someone could write a software package to detect this, update GRUB to be aware of the bad memory location, and after a reboot just work correctly (well, with a bit less memory available to the system…)
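As a sketch of what feeding that into GRUB looks like (the fault address here is made up for the example): GRUB’s badram filter takes addr,mask pairs, where an address is treated as bad when its bits match addr on every bit set in mask, so a mask covering everything above the page offset excludes exactly one page:

```python
PAGE = 0x1000  # 4 KiB pages

def badram_entry(bad_addr: int) -> str:
    """Build a GRUB badram addr,mask pair that excludes the single
    page containing bad_addr. A mask with all bits set above the page
    offset means only addresses in that one page match."""
    base = bad_addr & ~(PAGE - 1)             # page-align downward
    mask = ~(PAGE - 1) & 0xFFFFFFFFFFFFFFFF   # keep all bits above the offset
    return f"0x{base:x},0x{mask:x}"

# Suppose memtester reported a fault at physical address 0x12345678;
# the line to drop into /etc/default/grub would be:
print(f'GRUB_BADRAM="{badram_entry(0x12345678)}"')
# → GRUB_BADRAM="0x12345000,0xfffffffffffff000"
```

A tool automating this would just regenerate that line, run update-grub, and reboot.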


It wouldn’t be effective, because it’s trivial to bypass. There are many ways one can do a DNS lookup elsewhere and get access to the response, as the information isn’t considered secret. Once you’ve done that, you can reach a host. And any Computer A participating in a DDoS such that Computer B can see the traffic from the DDoS has already resolved the domain name anyway.
It’s sometimes been used as a low-effort way for a network administrator to try to block Web browser users on that network from getting access to content, but it’s a really ineffective mechanism even for that. The only reason that I think it ever showed up is that it’s very easy to deploy in that role. Browsers often use DNS-over-HTTPS to an outside server today rather than plain DNS, so it won’t affect users of those browsers at all.
In general, if I can go to a website like this:
https://mxtoolbox.com/DNSLookup.aspx
And plonk in a hostname to get an IP address, I can then tell my system about that mapping so that it will never go to DNS again. On Linux and most Unixy systems, an easy way to do this would be in /etc/hosts:
5.78.97.5 lemmy.today
On Windows systems, the hosts file typically lives at C:\Windows\System32\drivers\etc\hosts
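If you want to sanity-check a pinned entry programmatically, the hosts-file format is simple enough to parse; this little helper (my own, not any standard API) returns the first address mapped to a name:

```python
def hosts_lookup(name: str, hosts_text: str):
    """Return the first address mapped to `name` in hosts-file text,
    or None. Each non-comment line is: address, whitespace, then one
    or more names; '#' starts a comment."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and padding
        if not line:
            continue
        addr, *names = line.split()
        if name in names:
            return addr
    return None

print(hosts_lookup("lemmy.today", "5.78.97.5 lemmy.today\n"))
# → 5.78.97.5
```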
EDIT: Oh, maybe I misunderstood. You don’t mean as a mechanism to block Computer A from reaching Computer B itself, but just as a transport mechanism to hand information to routers? Like, have some way to trigger a router to do a DNS lookup for a given IP, the way we do a PTR lookup today to resolve an IP address to a hostname, but obtain blacklist information?
That’s a thought. I haven’t spent a lot of time on DNSSEC, but it must have infrastructure to securely distribute information.
DNS is public — I don’t know if that would be problematic or not, to expose to the Internet at large the list of blacklists going to a given host. It would mean that it could be easier to troubleshoot problems, since if I can’t reach host X, I can check to see whether it’s because that host has requested that my traffic be blacklisted.


I’m not sure when we lost that, but oh boy, it’s a festival.
I remember when it was considered outrageous that Flash would phone home and report its version, because that would leak the fact that a given machine was running a given version of Flash.
We sure don’t live in that world today.
I mean, it’s telling you to update the Address Library mod. If Fallout 4 gets an update, it may take them a bit, but they should roll it out.
https://www.nexusmods.com/fallout4/mods/47327
You probably want the current version of that. If Fallout 4 just updated, then you may need to wait for it to be updated.
That thing is required to know where in memory other mods need to fiddle to do stuff outside of the APIs that Bethesda provides, so it’s extremely dependent on the version of Fallout 4 — basically, it being updated means that all the other mods can rely on it knowing what the relevant addresses are and don’t have to be updated themselves. If Fallout 4 gets updated, you’re probably going to need to update it.
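To illustrate the idea (all version strings, IDs, and offsets below are invented for the example, not real Address Library data): mods look addresses up by a stable ID, and only the per-version table has to change when the game updates:

```python
# Hypothetical per-game-version tables mapping stable IDs to raw
# memory offsets. Only this table needs a new entry after a game patch.
ADDRESS_TABLE = {
    "1.10.163": {1001: 0x005ADC40, 1002: 0x012FF7A0},
    "1.10.980": {1001: 0x005B1220, 1002: 0x01332D90},  # offsets shifted by the update
}

def resolve(game_version: str, stable_id: int) -> int:
    """A mod asks for stable ID 1001 and never hard-codes an offset,
    so it keeps working across game versions unchanged."""
    return ADDRESS_TABLE[game_version][stable_id]
```

That’s why one updated Address Library release can unbreak a whole pile of mods at once.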
EDIT: Some people who run heavily-modded Fallout just buy the thing via GOG to avoid Steam pushing updates at all, so that they don’t have a window where an update comes out but relevant mods haven’t been updated. You can prevent Steam from doing updates by breaking the update mechanism, but as of 2026, Steam normally updates games automatically, whereas GOG’s mode of operation is to only update manually.
My own view is that the real issue is that Bethesda really should (a) expose some of the stuff that Address Library provides via supported APIs to mods, so that Address Library isn’t necessary, and (b) should use Steam’s beta branch functionality to establish a “stable” branch for people running mods to have an option for. Not really what Steam intends the feature for, but there isn’t really a better way to both permit updates and deal with the “mod authors need time to update their mods” issue in 2026 in Steam.
EDIT2: I don’t think that you can directly blame Bethesda for Fallout 4 updates breaking compatibility there, aside from maybe not providing APIs for some stuff that mods would like to do so that they don’t need Address Library. Address Library is twiddling values in memory. It’d be unreasonable for Bethesda to commit to having a fixed memory layout across versions; no game developer is likely going to do that.


Not what you asked, but regardless of whatever else you’re doing, I would take any really critical data you need, encrypt it, put it on a laptop or other portable device, and bring it with you. Trying to throw together some last-minute setup that you rely on and can’t easily resolve remotely is asking for trouble.
Another fallback option, if you have a friend who you trust and can call and ask them to type stuff in – give 'em a key before you go and call 'em and ask 'em to type whatever you need if you get into trouble.


Of course, another option is for people to dramatically curb their use of social media, or at a minimum, regularly delete posts after a set time threshold.
Deletion won’t deal with someone seriously-interested in harvesting stuff, because they can log it as it becomes available. And curbing use isn’t ideal.
I mentioned before the possibility of poisoning data, like, sporadically adding some incorrect information about oneself into one’s comments. Ideally something that doesn’t impact the meaning of the comments, but would cause a computer to associate one with someone else.
There are some other issues. My guess is that it’s probably possible to fingerprint someone to a substantial degree by the phrasing that they use. One mole in the counterintelligence portion of the FBI, Robert Hanssen, was found because on two occasions he used the unusual phrase “the purple-pissing Japanese”.
FBI investigators later made progress during an operation where they paid disaffected Russian intelligence officers to deliver information on moles. They paid $7 million to KGB agent Aleksander Shcherbakov[48] who had access to a file on “B”. While it did not contain Hanssen’s name, among the information was an audiotape of a July 21, 1986, conversation between “B” and KGB agent Aleksander Fefelov.[49] FBI agent Michael Waguespack recognized the voice in the tape, but could not remember who it was from. Rifling through the rest of the files, they found notes of the mole using a quote from George S. Patton’s speech to the Third Army about “the purple-pissing Japanese”.[50] FBI analyst Bob King remembered Hanssen using that same quote. Waguespack listened to the tape again and recognized the voice as Hanssen’s. With the mole finally identified, locations, dates, and cases were matched with Hanssen’s activities during the period. Two fingerprints collected from a trash bag in the file were analyzed and proved to be Hanssen’s.[51][52][53]
That might be defeated by passing text through something like an LLM to rewrite it. So, for example, to take a snippet of my above comment:
Respond with the following text rephrased sentence by sentence, concisely written as a British computer scientist might write it:
Deletion won’t deal with someone seriously-interested in harvesting stuff, because they can log it as it becomes available. And curbing use isn’t ideal.
I mentioned before the possibility of poisoning data, like, sporadically adding some incorrect information about oneself into one’s comments. Ideally something that doesn’t impact the meaning of the comments, but would cause a computer to associate one with someone else.
I get:
The deletion of data alone will not prevent a determined party from gathering information, as they may simply record the information as it becomes available prior to its deletion. Moreover, restricting usage is not an ideal solution to the problem at hand.
I previously mentioned the possibility of introducing deliberate errors or misinformation into one’s own data, such as periodically inserting inaccurate details about oneself within comments. The goal would be to include information that does not significantly alter the meaning of the comment, but which would cause automated systems to incorrectly associate that individual with another person.
That might work. One would have to check the comment to make sure that it doesn’t mangle the thing to the point that it is incorrect, but it might defeat profiling based on phrasing peculiarities of a given person, especially if many users used a similar “profile” for comment re-writing.
A second problem is that one’s interests are probably something of a fingerprint. It might be possible to use separate accounts related to separate interests — for example, instead of having one account, having an account per community or similar. That does undermine the ability to use reputation generated elsewhere (“Oh, user X has been providing helpful information for five years over in community X, so they’re likely to also be doing so in community Y”), which kind of degrades online communities, but it’s better than just dropping pseudonymity and going 4chan-style fully anonymous and completely losing reputation.


Eh, I disagree with them on the “illegal most places” thing, but I don’t know if I’d just ignore it. Like, okay, say they want to be pseudonymous. In their shoes, I’d probably:
See if I can get the relevant mods/admins to ban the user for doxxing. It’s not illegal to do so, but many services do have policies against it. I doubt that this will be incredibly efficacious, since the user in question can probably just use a throwaway account to do this.
Delete my account, which I believe on the Threadiverse also deletes posts and comments. You can’t really “wipe” stuff reliably that you’ve posted and commented on — someone can always be running an instance that is logging it. But it’ll at least increase the barrier to someone reading them.
Make a new account and try to avoid leaking information this time that ties you to your identity or old account. Rotating accounts periodically might not be a terrible idea if you’re really concerned about pseudonymity. Sucks from a community standpoint for everyone here, because it prevents people from building up reputation associated with a handle, but it is what it is.


It’s certainly not illegal in the US. Heck, Google Street View has images of most addresses online.
EDIT: Well, okay. There are forms in which it could be illegal, like if someone was, oh, trying to convince someone to kill a person and provided the images as part of that, where the act of trying to get someone to do so might be incitement or conspiracy or something. But it’s not ordinarily illegal in-and-of-itself to post it.


When I looked at condensers in the past, they weren’t incredibly energy-efficient. I suspect that it’s cheaper in the long run to do desalination and build a pipeline to wherever inland you want freshwater, unless your needs are very limited in scale.


Nah. He just has a very high willingness to lie or misrepresent anything. Like, no regard for reputation or consistency or anything. He’ll make entirely-contradictory statements depending upon occasion. If he thinks that it will buy the slightest iota of political oomph to say that Anthropic is violating the Constitution, he’ll do so, even if he doesn’t have the slightest grounds to make that statement.


It doesn’t matter if you’re, say, Debian, because they’ll just put up some symbolic “not intended for use in state X” notice and then continue doing whatever they were doing, but if you’re Red Hat and actually selling something like Red Hat Enterprise Linux to companies in the state, stuff like this is actually a pain in the ass.
And to reiterate a previous comment, the Democrats have a trifecta in both California and Colorado, and the legislation here is something that they are squarely to blame for. I’d really rather that they knock this kind of horseshit off so that I can go back to being upset with the Republican Party.


The Democrats have a trifecta in both Colorado and California. I really wish that they’d knock bullshit like this off so that I could go back to being upset about Trump and company.


If it happens again and you have Magic Sysrq enabled, you can do Magic Sysrq-t, which may give you some idea of what the system is doing, since you’ll get stack traces. As long as the kernel can talk to the keyboard, it should be able to get that.
https://en.wikipedia.org/wiki/Magic_sysrq
You maybe can’t see anything on your monitor, but if the system is working enough to generate the stack traces and log them to the syslog on disk (i.e., your kernel’s filesystem and disk subsystems are still functional), you’ll be able to view them on reboot.
If it can’t even do that, you might be able to set up a serial console and then, using another system running screen or minicom or something like that linked up to the serial port, issue Magic Sysrq to that and view it on that machine.
Some systems have hardware watchdogs, where if a process can’t constantly ping the thing, the system will reboot. That doesn’t solve your problem, but it may mitigate it if you just want it to reboot if things wedge up. The watchdog package in Debian has some software to make use of this.
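The userspace half of a watchdog daemon is tiny; roughly this (a sketch of the Linux watchdog device interface, with the device path and stop condition made injectable so it’s testable — in real use the path would be /dev/watchdog and the loop would run until shutdown):

```python
import time

def pet_watchdog(device_path: str, interval_s: float, should_continue) -> None:
    """Write to the watchdog device periodically; if the process wedges
    and stops writing, the hardware reboots the machine. Writing 'V'
    before closing is the Linux "magic close" telling the driver the
    shutdown was deliberate, so it should not reboot."""
    with open(device_path, "wb", buffering=0) as dev:
        while should_continue():
            dev.write(b"\0")        # any write resets the hardware countdown
            time.sleep(interval_s)
        dev.write(b"V")             # magic close: disarm cleanly
```

The Debian watchdog package layers health checks (load, ping targets, file staleness) on top of exactly this kind of loop.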


I’m not sure that memory optimized for running neural nets on parallel compute hardware and memory optimized for conventional CPUs overlap all that much (and I’m speaking more broadly than HBM here). I think that, setting aside the HBM question, if we long-term wind up with dedicated parallel compute hardware running neural nets, we may very well wind up with different sorts of memory optimized for different things.
So, if you’re running neural nets, you have extremely predictable access patterns. Software could tell you what its next 10 GB of accesses to the neural net are going to be. That means that latency is basically a total non-factor for neural net memory, because the software can request it in huge batches and do other things in the meantime.
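A toy sketch of that latency-hiding idea (the function and parameter names are mine, not any real API, and in synchronous Python the fetch doesn’t actually overlap — it just shows the access pattern): double-buffer the streaming, issuing the fetch of the next chunk before computing on the current one:

```python
def stream_compute(weights, chunk_size, fetch, compute):
    """Walk `weights` in fixed-size chunks. fetch(seq, start, n)
    returns a chunk; the next fetch is issued *before* compute runs
    on the current chunk, so with a real asynchronous fetch its
    latency would be hidden behind the compute."""
    total = 0
    nxt = fetch(weights, 0, chunk_size)        # issue the first fetch up front
    for start in range(0, len(weights), chunk_size):
        cur = nxt
        if start + chunk_size < len(weights):
            nxt = fetch(weights, start + chunk_size, chunk_size)  # prefetch next
        total += compute(cur)                  # compute overlaps the prefetch
    return total
```

Because the whole schedule is known ahead of time, the hardware only needs bandwidth, not low latency.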
That’s not the case for a lot of the memory used for, say, playing video games. Aside from hardware vendors using the distinction for price discrimination, part of the reason that PCs (as opposed to servers) don’t use registered memory (which makes it easier to handle more memory) is that it increases latency a little bit, which is bad when you’re running software where you don’t know what memory you’re going to need next and have a critical path that relies on that memory.
On the other hand, parallel compute hardware doing neural nets is extremely sensitive to bandwidth. It wants as much as it can possibly get, and that’s where the bottleneck is today. Back on your home computer, a lot of software is oriented around doing operations in serial, and that’s less likely to saturate the memory bus.
I’d bet that neural net parallel compute hardware does way more reading than writing of memory, because edge weights don’t change at runtime (on current models! That could change!).
searches
Yeah.
https://arxiv.org/html/2501.09605v1
AI clusters today are one of the major uses of High Bandwidth Memory (HBM). However, HBM is suboptimal for AI workloads for several reasons. Analysis shows HBM is overprovisioned on write performance, but underprovisioned on density and read bandwidth, and also has significant energy per bit overheads. It is also expensive, with lower yield than DRAM due to manufacturing complexity.
But there are probably a lot of workloads where your CPU wants to do a ton of writes.
I’d bet that cache coherency isn’t a huge issue for neural net parallel compute hardware, because it’s going to be a while before any value computed by one part of the hardware is needed again, at least until we reach the point where we can parallel-compute an entire layer in one go (which…I suppose we could theoretically do. Someone just posted something that I commented on about an ASIC with Llama edge weights hard-coded into the silicon, which is probably a step in that direction). But with CPUs, a big problem is making sure that a value written by one CPU core reaches another CPU core, so that the second doesn’t use a stale value. That’s gonna impact the kind of memory controller design that’s optimal.


I think that it’s fair to say that AI is not the only application for that hardware, but I also think that carpelbridgesyndrome’s point was that they aren’t really well-suited to replace conventional servers, where all local computing just moves to a server, which is the sort of thing that ouRKaoS was worried about. Maybe for some very specialized use cases, like cloud gaming in some genres. I’d also add that the physical buildings have way more cooling capacity than is necessary for conventional servers, so they probably wouldn’t be the most-cost-effective approach even if you replaced the computing hardware in the buildings.
https://play.google.com/store/apps/details?id=com.hanaGames.LifeOfBlackTigerFREE&hl=en-US
1M+ downloads on Android
Huh.