Yeah sure, a distro could start spying on users. How easy it would be would depend on their distribution model, and how willing they are to violate the GPL.
Linux is a tool that big corporate entities have profited greatly from for many years, and will continue to. Same with BSD, Apache, Docker, MySQL, Postgres, SSH…
Valve, System76, Framework, etc., are proving that using Linux to serve an end-user market is also profitable, and that they’re capable of supporting enterprise use cases.
I understand that there may be specific problems to solve wrt adoption, usability, compatibility, etc., but Linux is doing more than OK within the context of the FOSS ecosystem (and increasingly outside it).
Your thinking is slightly skewed, IMHO. Linux doesn’t have an inherent incentive to compete with macOS or Windows, and if it did, it would be subject to the same pressures that encourage bad behavior like spying on users, creating walled gardens, and so forth.
Fixed it for you: VSCode, Red Star OS, and sh
Fugg yeah, thanks for the tip! I was not there for krohnkite the first time around, but I’m here for it now.
Whatever you do, steer clear of Plasma Wayland right now. Polonium has a lot of issues.
You haven’t provided any info about your partition scheme for either drive, but I assume you’ve got your bootloader installed in an EFI partition on the newer drive. You will still have an EFI partition on the old drive, created by the Ubuntu installer, so just be sure you know which bootloader you’re actually booting from.
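If you’re not sure, a quick way to check (standard commands, nothing here is specific to your setup):

```
lsblk -f            # look for the vfat EFI System partitions on each disk
sudo efibootmgr -v  # lists firmware boot entries and which disk/partition each points at
findmnt /boot/efi   # shows which ESP the running system actually has mounted
```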
Options 1 and 2 aren’t functionally any different. It’s not clear what issues you’re worried about, but if you’re nervous about breaking the Ubuntu installation, you might just want to wait until you can get the new drive.
You also don’t give any indication of how much data you want to keep. If the 2 TB drive is almost full, you have fewer options than if it is mostly empty or half full. You could resize your ext4 partition and create a new partition, for example, allowing you to mount a fresh, clean filesystem to a subfolder in your home directory. Once the data migration is finished, you can format the old partitions and mount them somewhere else, or resize the newer partition over them. Be aware that your HDD will eventually fail mechanically, however. Maybe five years from now, maybe next week, but they all fail someday.
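If you go the resize route, the rough shape of it looks like this. I’m assuming, purely hypothetically, that the HDD is /dev/sdb with a single ext4 partition /dev/sdb1; your device names and sizes will differ, so double-check with lsblk and have a backup before touching partitions:

```
# Everything below must run against an unmounted filesystem.
sudo umount /dev/sdb1
sudo e2fsck -f /dev/sdb1                   # mandatory check before shrinking
sudo resize2fs /dev/sdb1 900G              # shrink the filesystem first...
sudo parted /dev/sdb resizepart 1 1000GB   # ...then shrink the partition around it
sudo parted /dev/sdb mkpart primary ext4 1000GB 100%
sudo mkfs.ext4 /dev/sdb2                   # the new partition will typically show up as sdb2
mkdir -p ~/migration
sudo mount /dev/sdb2 ~/migration           # mount it under your home dir for the copy
```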
It’s not clear to me what the goal of option 3 is, but it depends on how you use your machine. If you want to install a lot of applications or games that need to run fast, you don’t want to fill your newer SSD with migrated data. If you just want a temporary place to store the data you’re keeping until you can format the old drive, I guess this is a fine approach, but creating a dedicated user for it just adds unnecessary complexity, IMHO.
I would recommend they follow the full installation guide instead, which is probably one of the best pieces of technical documentation in existence at the moment. The amount of detail, context, and instruction provides both an invaluable learning experience and an introduction to Linux.
archinstall is not foolproof; that’s why I wouldn’t recommend it to an absolute beginner. IMHO, it’s more valuable for people who are familiar with the process and want a shortcut.
As great as archinstall is, it can’t possibly account for every contingency. Troubleshooting a bootloader issue, for example, is easy if you’ve installed one before. If a noob managed to navigate the TUI (with all of its confusing questions and settings) and complete the installation, only to have something go wrong there, they’re off it, maybe for good.
Two DEs enter the steel cage… Only one will emerge.
It seems to shortcut implementations that require more than one block, and mimics parameters from other functions.
One of the first things I noticed when I asked ChatGPT to write some Terraform for me a year ago was that it used modules that don’t exist.
I’d recommend just scripting with rsync and running it with cron or whatever scheduling automation you prefer. Back up locally to an external drive, or orchestrate with cloud provider CLI tools for something like S3.
There are some tools that can probably assist with this, but there are so few moving parts that it’s easy to roll your own. Clonezilla seems like overkill and harder to automate, but I’ll admit I’m not an expert with it.
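A minimal sketch of the roll-your-own version (the paths are hypothetical placeholders, adjust for your setup):

```
#!/bin/sh
# Mirror home to an external drive; -a preserves permissions and timestamps,
# --delete keeps the mirror exact, --exclude skips cache churn.
SRC="$HOME/"
DEST="/mnt/backup/home/"   # wherever your external drive mounts
rsync -a --delete --exclude='.cache/' "$SRC" "$DEST"
```

Drop a line like `0 2 * * * /path/to/backup.sh` into your crontab and you’re done.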
I have Arch on a 2013 MacBook Pro and it has served me very well for years. I think I had to do a little work getting the backlight controls bound to some hotkey combos, but that might depend more on the DE than the distro. I’m probably going to put NixOS on it, since I’m not using it as my work laptop anymore. Use whatever you want! Debian is always a pleasure too, in my experience.
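For the backlight, one common approach (not necessarily what I did back then) is binding brightnessctl to the brightness keys in your DE’s shortcut settings or a hotkey daemon:

```
# Bind these to XF86MonBrightnessUp / XF86MonBrightnessDown
brightnessctl set +5%   # raise backlight
brightnessctl set 5%-   # lower backlight
```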
I sync important files to S3 from a folder with awscli. Dotfiles and projects are in private git repos. That’s it.
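The whole thing is basically one command (bucket and folder names are placeholders, obviously):

```
# One-way sync; only uploads files that changed since the last run
aws s3 sync ~/important s3://my-backup-bucket/important
```

Stick that in cron and forget about it.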
If I maintained a server, I would do something more sophisticated, but installation is so dead simple these days that I could get a daily driver in working order very quickly.