100%.
I see a CLA or a goofy “source-available” license, I just assume it’s going to be a rugpull and that I should move on. I very much do not give anyone the benefit of the doubt anymore.
Also, if you’ve never seen it, lazydocker might be up your alley.
It’s a TUI, but it gives you easy access to your docker containers and their logs, plus updating/restarting/stopping them and so on.
My comment was more FDM vs resin support removal, and that it’s not like resin is all sunshine and rainbows.
If anything, modern tree supports for FDM have fixed the giant-blob-of-plastic problem with supports you’d previously get on smaller models, where you’d end up with, uh, well, a giant blob of plastic stuck to an arm or a sword or whatever.
Still not fantastic, but until someone figures out antigravity, it’s what it is.
And it doesn’t mean they can take away anything.
Not if they’re able to monetize your small bugfix
The problem is that they can, and money isn’t the point: I don’t care if you make money with something I spent my time on willingly, I care that you’re forcing me to say you’re the full and sole owner of my contributions and can do whatever you want with them at any point in the future.
Signing a CLA puts full ownership of the code in the hands of whomever you’ve signed the CLA with, which means they have the full ability and legal right to do any damn thing they want. That often includes telling you to fuck yourself, changing the license, and running off to make a commercial product while killing the AGPLed version and fucking over everyone who spent any time on it.
If you have a CLA, I don’t care if your project gives out free handjobs: I don’t want it anywhere near anything I’m going to either be using or have to maintain.
And sure, you can fork from before the license change, but I’m unwilling to put a major piece of software into my workflows and just hope that, if something happens, someone will come along and keep working on it.
Frankly, I’m of the opinion that if you’re setting up a project and make the very, very involved decision to go with a CLA and spend the time implementing one, you’re spending that time because you’ve already determined it’s probably in your interests later to do a rugpull. If you’re not going to screw everyone, you don’t go to the store and buy a gallon of baby oil.
I’ve turned into the person who doesn’t really care about new shit until it’s been around a decade, has no CLAs, is under a standard GPL/AGPL license (none of this source-available business-license nonsense), and has a proven track record of the developers not being shitheads.
Also, if you like htop, you’re going to love btop.
take a few extra taps and swipes than they would on Android
I’ve swapped from iOS to Android and I very much have the opposite experience.
Everything in Android feels just a little bit like someone somewhere went ‘well we have to put this option SOMEWHERE’ and just shoved it in, which leads to me fiddling in apps and system settings a lot more than I was on iOS.
I’m happy to chalk it up to having much more experience with iOS than with modern Android, but it’s been kind of a pervasive experience.
And, also related and annoying: googling ‘how do I change a thing’ routinely makes me nuts because how you do something seems to vary from manufacturer to manufacturer and even like, model to model.
I guess it’s just… maybe iOS needs more button presses, but Android is utterly inconsistent about where something might be, which means you spend a little more time digging for a specific thing than you would on iOS. That leaves the impression that you’re hitting a lot more buttons to do something, even if the actual number of presses would be lower if you knew exactly how to do it.
Give me a new version of the 5c, but use the G3 iMac colors as your color options, including a transparent look into the guts of the phone.
Would buy at least one.
Quickest spike and then utter vanishing of any interest in a project I’ve had in a while.
Wouldn’t mind something a little more open than SearXNG in that it owns its own database, but requiring that they be the sole owner of anything anyone contributes AND having the ability to yank the rug any time they feel like it pretty much puts it in the meh-who-cares category.
Had enough stupid shit yanked over the past few years that I really just don’t care about, or have time to deal with, anything that’s already prepping for its eventual enshittification.
print with supports, but removing supports from such thin, fragile bits of a model is nigh impossible without doing damage
Removing resin supports is worse, if anything.
They leave little bumps where they’re cut off that you have to then try to VERY VERY gently sand off without bending or breaking said fiddly models.
You could also use nginx if you wanted; it’ll do arbitrary tcp data with the stream plugin.
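If it helps, a hedged sketch of what that looks like with the stream module (it has to be compiled in or loaded via load_module; the addresses and ports below are placeholders):

```nginx
# Placeholder example: forward raw TCP on port 5432 to a backend.
stream {
    upstream backend {
        server 10.0.0.5:5432;  # placeholder backend address
    }
    server {
        listen 5432;
        proxy_pass backend;
    }
}
```

Note this goes at the top level of nginx.conf, alongside (not inside) the http block.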
contrast to their desktop offerings
That’s because server offerings are real money, which is why Intel isn’t fucking those up.
AMD is in the same boat: they make pennies on client and gaming (including gpu), but dumptrucks of cash from selling Epycs.
IMO, the Zen 5(%) and Arrow Lake bad-for-gaming results are because uarch development from Intel and AMD are entirely focused on the customers that pay them: datacenter and enterprise.
Both of those CPU families clearly show that efficiency and a focus on extremely threaded workloads were the priorities, and what do you know, that’s enterprise workloads!
end of the x86 era
I think it’s less the era of x86 is ended and more the era of the x86 duopoly putting consumer/gaming workloads first has ended because, well, there’s just no money there relative to other things they could invest their time and design resources in.
I also expect this to happen with GPUs: AMD has already given up, and Intel is absolutely going to do that as soon as they possibly can without it being a catastrophic self-inflicted wound (since they want an iGPU to use). nVidia has also clearly stopped giving a shit about gaming - gamers get a GPU a year or two after enterprise has cards based on the same chip, and now they charge $2000* for them - and the cards are often crippled in firmware/software so they won’t compete with the enterprise cards, on top of the drivers legally not being allowed to be used in that kind of deployment anyway.
ARM is probably the consumer future, but we’ll see who and with what: I desperately hope that nVidia and MediaTek end up competitive so we don’t end up in a Qualcomm oops-your-cpu-is-two-years-old-no-more-support-for-you hellscape, but, well, nVidia has made ARM SoCs for, like, decades, and at no point would I call any of the ones they’ve shipped high-performance desktop replacements.
Yeah, DNS is, in general, just goofy and weird, and there are a lot of interactions I wouldn’t expect someone who’s done it for years to necessarily know.
And besides, the round-robin thing is my favorite weird DNS fact so any excuse to share it is great.
I mean, recovery from parity data is how all of this works; this just doesn’t require you to have a controller, use a specific filesystem, have matching-sized drives, or anything else. Recovery is mostly like any other raid option I’ve ever used.
The only drawback is that the parity data is mostly equivalent in size to the actual data you’re making parity for, and you need to keep a couple of copies of the index, since if you lose the index or the parity data, no recovery for you.
In my case, I didn’t care: I’m using the oldest drives I’ve got as the parity drives, and the newer, larger drives for the data.
If I were doing the build now and not 5 years ago, I might pick a different solution, but there’s something to be said for an option that’s dead simple (looking at you, zfs) and likely to be reliable because it’s not doing anything fancy (looking at you, btrfs).
From a usage (not technical) standpoint, the most equivalent commercial/prefabbed solution would probably be something like unraid.
A tool I’ve actually found way more useful than actual raid is snapraid.
It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit is a big deal, because I can scrub all the data in the array and it’ll happily tell me if something funky has happened.
It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.
There’s a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and linux iso collection somewhat protected (use a 3-2-1 backup strategy, for the love of god), it’s a fairly viable option.
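For reference, a hedged sketch of what a minimal snapraid.conf for this kind of setup might look like (the paths, disk names, and drive count are placeholders, not anyone’s actual config):

```conf
# Placeholder layout: two data drives, one parity drive.
parity /mnt/parity1/snapraid.parity

# Keep the content (index) file on more than one drive: lose the index
# AND the parity, and there's no recovery.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

With that in place, 'snapraid sync' updates the parity, 'snapraid scrub' does the validation pass, and 'snapraid fix' handles repair/restore.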
Yeah, I noticed that.
It’s on my to-do list for next week since I should have time to sort that out/file a bug if it’s something that’s not a configuration error.
Uh, don’t do that if you expect your mail to be delivered.
Multiple PTRs, depending on how the DNS service is set up, may be returned in round-robin fashion, and if you return a PTR that doesn’t match what your HELO claims you are, then congrats: your mail is likely getting tossed in the trash.
Pick the most accurate name (that is, match your HELO domain), and only set one PTR.
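As a sketch of the check a receiving server effectively does (stdlib only; the names in the comparison are illustrative, and real mail servers do more than this):

```python
import socket

def names_match(ptr_name, helo_name):
    """DNS names compare case-insensitively; a trailing dot is ignored."""
    return ptr_name.rstrip(".").lower() == helo_name.rstrip(".").lower()

def ptr_matches_helo(ip, helo_name):
    """Look up the PTR for ip and see if it agrees with the HELO name."""
    try:
        ptr, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no PTR at all is also bad for deliverability
    return names_match(ptr, helo_name)
```

If your server HELOs as mail.example.com, the single PTR for its IP should resolve to exactly that name.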
(Useless fact of the day: multiple A records behave the same way and you can use that as a poverty-spec version of a load balancer.)
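To make the poverty-spec load balancer concrete: if a name has several A records, the resolver hands all of them back and the client just picks one. A minimal stdlib sketch (the hostname you’d pass in is whatever name carries the multiple A records):

```python
import random
import socket

def pick_backend(hostname, port):
    """Resolve all A records for hostname and pick one at random."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET,
                               socket.SOCK_STREAM)
    addrs = sorted({info[4] for info in infos})  # de-dupe (ip, port) pairs
    return random.choice(addrs)
```

No health checks, no weighting, no failover: that’s why it’s poverty-spec.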
sudo smartctl -a /dev/yourssd
You’re looking for the Media_Wearout_Indicator, which is a percentage starting at 100% and going to 0%, with 0% meaning no more spare sectors available and thus “failed”. A very important note here, though, is that a 0% drive isn’t always going to result in data loss.
Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only and all your data will be there, you just won’t be able to write more data to the drive.
Also, you’ll probably be interested in the Total_LBAs_Written variable, which you can (usually) convert to gigabytes to see how much data has been written to the drive.
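The conversion is just arithmetic: the raw value is a count of logical blocks, and most consumer drives report 512-byte LBAs (check your drive’s reported sector size rather than assuming):

```python
def lbas_to_gib(total_lbas, lba_size=512):
    """Convert a raw Total_LBAs_Written count to GiB written.

    Assumes 512-byte LBAs by default; some drives use 4096.
    """
    return total_lbas * lba_size / 2**30
```

So a raw value of 2097152 with 512-byte blocks works out to exactly 1 GiB written.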
thought more than 2%
What confuses me is that a survey earlier this year had it at 2.32%, so why the actual regression?
I’d have expected it to go up with more time to sell steam decks and whatnot, not regress by 15%.
Hell, maybe not since 1997!
Office 2000 was peak office: it had the definitive version of Clippit, and every actually useful feature you’ll probably ever need to type and edit any sort of document.
…I will say, though, that Excel has improved for the weirdos that want 100,000 row spreadsheets since then, but I mean, that’s a small group of people who need serious help.
This has nothing to do with anything, but whatever.
Assuming the rates go down, which, heh, the last fed cut had them going up so who the fuck knows.