Something to be said for the wfh movement too.
I thought I read somewhere that when they were making one of the Toy Story movies, there was some catastrophic data loss that nearly tanked the whole production. But then one of the animators came back from maternity leave and said wait, I think I have most of it synced to my home server? And the next thing you know, John Lasseter himself is barrelling down the highway to her place, and it turned out yeah, she did have it.
Cool! I can see how optical media could, in theory, be very long-lasting as long as you don’t use materials that oxidize or otherwise degrade over time.
Well I guess I’m picturing DNA encoding like a RAID billion in terms of redundancy, so with some checksumming, you ought to be able to sort out any mutations? But I’m no geneticist.
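Just to make the hand-waving concrete, here's a toy sketch of the idea (all names and numbers made up, obviously): majority-vote each position across the redundant copies, then verify against a stored hash.

```python
from collections import Counter
from hashlib import sha256

def recover(copies, checksum):
    """Majority-vote each byte position across redundant copies,
    then verify the winner against a stored checksum."""
    voted = bytes(
        Counter(copy[i] for copy in copies).most_common(1)[0][0]
        for i in range(len(copies[0]))
    )
    return voted if sha256(voted).digest() == checksum else None

original = b"ATTACK AT DAWN"
checksum = sha256(original).digest()
copies = [original, b"ATTACK AT DUWN", original]  # middle "strand" mutated
assert recover(copies, checksum) == original
```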
That’s why I back up my data on stone tablets in cuneiform.
Seriously though, if you wanted data to last for centuries, what would be your best bet? Would it be some sort of 3D-printed mechanical storage? At least plastics are generally not biodegradable, though they are photodegradable, so I guess you’d want to stick your archive in a dry cave somewhere?
Or what about this idea of encoding the data in the DNA of some microbe and cutting it loose? What could possibly go wrong?
You know, I’m not actually quite sure what I’m doing, but I can tell you I am not looking at the keyboard. I suppose it’s similar to how I play violin? I don’t look at where my hand is but it shifts to different positions depending on what makes the most sense for the pattern I’m trying to play, and yes, a different position does imply a different fingering to reach the same notes.
When learning to program, I initially tried to follow the touch typing guidelines, but they say that you should use the right pinky to reach every key towards the upper right end of the keyboard, which gets old fast given how frequently you need to access them. And just as with music, there are patterns. In programming, you may frequently need to type `{}`, `:=`, or even something like `\{\}`, and flailing around with the pinky is a good way to give yourself carpal tunnel. So your right hand learns to shift to hit those keys using a combination of fingers.
As a Gen Xer, I think my typing speed peaked around late high school/early university? I tried to teach myself touch typing and got moderately proficient. Then I got into programming, where you need to reach all of those punctuation marks. So my right hand has drifted further to the right over the years, which is better for code but suboptimal for regular text.
One thing that’s really tanked for me though is writing in cursive. I used to be able to take notes in class as fast as the prof could speak. Now I can scarcely sign my own name.
I suppose it depends on how you look at it. Take solar, for example. On the one hand, you could argue that if your primary goal is to generate heat, you might as well use a solar thermal plant with lots of focusing mirrors over photovoltaics. The conversion to electricity first would inevitably be far less efficient.
On the other hand, if you’ve got your PV plants for electricity already but they are overproducing at times, there is the question of what to do with the excess power, and using it to run heat pumps may actually be a pretty efficient application at that point?
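A back-of-envelope with made-up round numbers (none of these figures are from anywhere authoritative) suggests the two routes may be closer than you'd think:

```python
pv_efficiency = 0.20   # sunlight -> electricity (assumed)
heat_pump_cop = 3.5    # heat delivered per unit of electricity (assumed)
solar_thermal = 0.60   # sunlight -> heat via direct collection (assumed)

print(pv_efficiency * heat_pump_cop)  # ~0.7 units of heat per unit of sunlight
print(solar_thermal)                  # 0.6 going the direct route
```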
I paid a visit to Green Bank WV once out of an interest in astronomy. The giant radio telescopes are truly a sight to behold!
Less impressive were the people camped out nearby who saw the place as the promised land where they could cast off their tinfoil hats in the radio quiet zone surrounding the complex.
The thing about the MPW Shell is it was sort of the only game in town if you actually wanted a command line with the classic Mac OS. (There’s an awesome little emulator called SheepShaver if you ever want to explore it btw.) Well, I suppose there was A/UX. I thought it was a miracle when that came out. You have to realize in those early days a good chunk of the operating system itself was actually baked into ROM. (You had to do desperate things to squeeze a GUI out of such limited resources as existed back then!) So to this day I have no idea how they managed to spin off a 'nix despite that.
Anyways. I wonder, if you made some sort of template format today, to what extent you could write some sort of conversion tool that would scrape a man page or whatever to rough it in, and then you could tweak it to get what you want? man pages aren’t super standardized in their format I guess, so it’s probably more trouble than it’s worth. I like to use Python’s `argparse` when rolling out scripts myself, and its `--help` format is pretty rigid given that it’s algorithmically generated. Might be more plausible with something like that? I had a quick look just now to see if you can drill down into the `argparse.ArgumentParser` class itself to pull out the info more directly, but it seems a rather opaque thing that doesn’t expose public APIs for that. Oh well…
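For what it's worth, there is a private `_actions` attribute on the parser holding the option metadata; it's undocumented and could change between Python versions, but a quick-and-dirty scrape might look like this (the parser itself is just a made-up example):

```python
import argparse

# A throwaway parser purely for demonstration
parser = argparse.ArgumentParser(prog="mytool", description="demo tool")
parser.add_argument("-v", "--verbose", action="store_true", help="chatty output")
parser.add_argument("-o", "--output", metavar="FILE", help="where to write results")

# _actions is private API, so this is fragile, but it exposes the
# structured data that --help output is generated from.
for action in parser._actions:
    print(action.option_strings, action.dest, action.help)
```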
This reminds me of something from my ancient past. Back in the early-ish days of Apple, there was a development system called MPW (Macintosh Programmer’s Workshop) which included its own little kludgy shell.
The weird thing about it though was while you could enter commands on the command line like in any shell, you could prefix them with the word `commando` (presumably a portmanteau of “command” and “window”) and this window would pop up showing various buttons, checkboxes, etc. corresponding to command line options. When you OK’d the window, it would generate the command line for you.

I’m rather hazy about how all this worked, but I think there was some sort of template language to define the window layout if you wanted to add commando support for your own tool? And presumably, as you say, you could restrict what’s possible with the window interface as you deemed fit?
You mean like the comment fields we’re using right here on lemmy?
As others have pointed out, it’s usually some markdown that’s embedded within the text. Lemmy is using a format that’s actually called “markdown” if I’m not mistaken, or a slight variation/subset thereof.
I’ve gotten used to the double-star for bold and what not to the point that it annoys me when some message client or whatever doesn’t support it. I share code snippets with people fairly often, and the code markdown is particularly useful to maintain its legibility.
You can always combine integer operations in smaller chunks to simulate something that’s too big to fit in a register. Python even does this transparently for you, so your integers can be as big as you want.
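Here's what that chunking looks like if you spell it out by hand (a toy sketch; Python just does the equivalent under the hood):

```python
MASK32 = 0xFFFFFFFF

def add64_on_32(a_lo, a_hi, b_lo, b_hi):
    """Add two 64-bit values held as (low, high) 32-bit halves."""
    total = a_lo + b_lo
    lo = total & MASK32
    carry = total >> 32
    hi = (a_hi + b_hi + carry) & MASK32
    return lo, hi

print(add64_on_32(0xFFFFFFFF, 0x0, 0x1, 0x0))  # (0, 1): the carry propagates

# And Python's own ints are arbitrary precision, so this just works:
print(2**128 + 1)
```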
The fundamental problem that forced the move to 64-bit was the need to address more than 4 GB of RAM. It’s kind of similar to the problem with the Internet, where 4 billion unique IP addresses fall rather short of what we need. IPv6 has a host of improvements, but the massively expanded address space is what gets talked about the most, since that’s what is desperately needed.
Going back to RAM though, it’s sort of interesting that at the lowest levels of accessing memory, it is done in chunks that are larger than 8 bits, and that’s been the case for a long time now. CPUs have to provide the illusion that an 8-bit byte is the smallest addressable unit of memory, since software would break badly were this not the case, but it’s somewhat amusing to me that we still shouldn’t really need more than 32 bits to address RAM at the lowest levels, even with the 16 GB I have in my laptop right now. I’ve worked with 32-bit microcontrollers where the byte size is > 8 bits, and yeah, you can have plenty of addressable memory in there if you wanted.
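To put a number on it: if the smallest addressable unit were, say, a 64-byte cache line rather than a byte (the 64 is just an illustrative figure), 32-bit addresses would go a long way:

```python
line_size = 64                     # bytes per addressable unit (illustrative)
addressable = 2**32 * line_size    # what a 32-bit address could then reach
print(addressable / 2**30, "GiB")  # 256.0 GiB -- plenty for a 16 GiB laptop
```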
I know a Google engineer who was saying they’re having to update their code bases to handle > 16 exabytes of storage, if you can imagine. But yeah, that’s storage, not RAM.
I guess the MAC address guy is up next. 48 bits may not go so far if every light bulb is going to want its own.
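The raw numbers, for the curious:

```python
print(2**64 / 2**60)  # 16.0 EiB: the 64-bit storage ceiling
print(2**48)          # 281474976710656 (~281 trillion) MAC addresses
```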
Imagine if you were the guy who made the call on IPv4 addresses…
Falsehoods About Time
Having a background in astronomy, I knew going into programming that time would be an absolute bitch.
Most recently, I thought I could code a script that could project when Easter would land every year to mark it on office timesheets. After spending an embarrassing amount of…er…time on it, I gave up and downloaded a table of pre-calculated dates. I suppose at some point, assuming the code survives that long, it will have a Y2K-style moment, but I didn’t trust my own algorithm over the table. I do think it is healthy, if not essential, to not trust your own code.
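For the curious, the well-known “anonymous Gregorian” computus goes something like this, though after my experience I’d still double-check any transcription of it against the table:

```python
def easter(year):
    """Anonymous Gregorian computus; returns (month, day)."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2024))  # (3, 31): March 31, which checks out against the table
```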
Falsehoods About Text
I’d like to add “Splitting at code-point boundary is safe” to your list. Man, was I ever naive!
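For anyone who hasn’t been burned by this yet, a quick demo of why it isn’t safe:

```python
s = "e\u0301"       # 'e' plus a combining acute accent: one visible character
print(len(s))       # 2 code points
print(s[:1])        # split between them and you're left with a bare 'e'

flag = "\U0001F1E8\U0001F1E6"  # two regional indicators forming one flag emoji
print(flag[:1])     # half a flag is just a lone indicator symbol
```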
So the next captcha will be a list of AI-generated statements and you have to decide which are bat shit crazy?
“Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds,”
Seems like a lot of extra disk thrashing that would shorten the life expectancy of an SSD? Like it would be considerably more than your usual background chatter of daemons writing to log files and what not. Unless I’m misunderstanding this?
That’s interesting. The article they link gives a bit more detail:
The wide range for nuclear apparently comes from difficulties in estimating the carbon footprint of mining and processing the uranium, but it’s good to know that nuclear sits somewhere in the middle of the pack relative to renewables in spite of those fueling costs.
I suppose these sorts of numbers may change dramatically in years to come. Take solar. A lot of focus seems to be on the efficiency of panels, which would almost certainly lower the carbon cost per unit of energy as it improves, but a breakthrough in panel longevity would also do that, in an amortized-emissions sort of way.
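The amortization point is easy to see with some made-up round numbers (purely illustrative, not from the article):

```python
embodied_kg_co2 = 500   # emissions to manufacture one panel (assumed)
annual_kwh = 400        # yearly output of that panel (assumed)

for lifetime_years in (25, 50):
    g_per_kwh = embodied_kg_co2 * 1000 / (annual_kwh * lifetime_years)
    print(f"{lifetime_years} yr lifetime: {g_per_kwh:.0f} g CO2/kWh")
# 25 yr lifetime: 50 g CO2/kWh
# 50 yr lifetime: 25 g CO2/kWh -- doubling longevity halves the footprint
```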