Akkoma actually, but same basic thing.
The_Decryptor
spoiler: made you look
- 0 Posts
- 31 Comments
The_Decryptor@aussie.zone to Fediverse memes@feddit.uk • Anybody else kinda annoyed with this, or am I just being ridiculous lol • English · 7 · 22 days ago
And PeerTube ideally, though it doesn’t seem to work properly at the moment. You can see channels (as a community), but posts don’t seem to be resolved.
The_Decryptor@aussie.zone to memes@lemmy.world • We are way overdue for an open source 2d printer • English · 511 · 1 month ago
> the drivers and communications to talk to printers are all closed source
That hasn’t stopped CUPS
The_Decryptor@aussie.zone to memes@lemmy.world • What would you do if you had 32GB RAM? • English · 1 · 2 months ago
So one of the problems is the size of a “physical page”: on a stock x86 system that’s only 4KiB. If you allocate just 1MiB of RAM you need to back that with 256 “page table entries”, and to then load a virtual address within that allocation you need to walk that list of 256 entries to find the physical address in RAM that the CPU needs to request.
Of course these days an app is more likely to use 1 GiB of RAM; that’s a mere 262,144 page table entries to scan through on each memory load.
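To put rough numbers on that, here’s a quick back-of-the-envelope sketch (assuming plain 4 KiB pages and a flat list of entries, ignoring the multi-level structure described below):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const uint64_t page_size = 4096;   /* stock x86 page size, 4 KiB */
    uint64_t alloc_1mib = 1ULL << 20;  /* 1 MiB allocation */
    uint64_t alloc_1gib = 1ULL << 30;  /* 1 GiB allocation */

    /* Each 4 KiB of the allocation needs its own page table entry */
    printf("1 MiB -> %llu PTEs\n", (unsigned long long)(alloc_1mib / page_size)); /* 256 */
    printf("1 GiB -> %llu PTEs\n", (unsigned long long)(alloc_1gib / page_size)); /* 262,144 */
    return 0;
}
```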
Oh, but then we’re also not running just a single process; there are multiple processes on the system, so there will be several million of these entries, each one indexed by address (Which can be duplicated, since each process has its own private view of the address space) and then by process ID to disambiguate which entry belongs to which process.
That’s where the TLB comes in handy, to avoid the million or so indexing operations on each and every memory load.
But caching alone can’t solve everything; you need a smarter way to perform the bookkeeping than simply using a flat list for when you don’t have a cached result. So the OS breaks those mappings down into smaller chunks and then provides a table that maps address ranges to those chunks. An OS might cap a list of PTEs at 4096 and have another table index that, so to resolve an address the CPU checks the first table to see which block of PTEs to load and then only has to scan the list it points to.
Like this: a 2-level scheme that Intel CPUs used before the Pentium Pro (iirc). The top 10 bits of an address select an entry in the “page directory”; the CPU loads that and uses the next 10 bits to select the group of PTEs from that list. Following that link, it finds the actual PTEs that describe the mappings, and it can then scan that list to find the specific matching entry that describes the physical address to load (And it then promptly caches the result to avoid doing all that again).
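As a concrete sketch of that split (assuming the classic 32-bit 10/10/12 layout; the address and resulting index values are just illustrative):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t vaddr = 0x12345678;                /* an arbitrary 32-bit virtual address */

    /* Classic two-level x86 split: 10 + 10 + 12 bits */
    uint32_t pde_index = (vaddr >> 22) & 0x3FF; /* top 10 bits: which page directory entry */
    uint32_t pte_index = (vaddr >> 12) & 0x3FF; /* next 10 bits: which PTE in that table */
    uint32_t offset    = vaddr & 0xFFF;         /* low 12 bits: offset within the 4 KiB page */

    printf("PDE %u, PTE %u, offset 0x%03x\n", pde_index, pte_index, offset);
    /* The CPU reads page_directory[pde_index] to find a page table, then
       page_table[pte_index] to find the physical frame, then adds the offset. */
    return 0;
}
```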
So yes, for a given page size and CPU you have a fixed number of steps per walk regardless of where the address lives in memory, but we also have more memory now. And much like a hoarder, the more space we have to store things, the more things we do store, and the more disorganised it gets. And even if you do clear a spot, the next thing you want to store might not fit there, so you end up storing it someplace else. If you end up bouncing around looking for things, you end up thrashing the TLB, throwing out cached entries you still need, so you now have to perform the entire table walk again (Just to invariably throw that result away soon after).
Basically, you need to defrag your RAM periodically so that the mappings don’t get too complex and slow things down (Same is true for SSDs btw, you still need to defrag them to clean up the filesystem metadata itself, just less often than for HDDs). Meta have been working on improvements to how Linux handles all this (page table layout and memory compaction) for a while, because they were seeing some of their long-lived servers waste about 20% of CPU time on repetitive table walks due to a highly fragmented address space.
The_Decryptor@aussie.zone to memes@lemmy.world • What would you do if you had 32GB RAM? • English · 5 · 2 months ago
> The RA in RAM stands for random access; there is no seeking necessary.
Well, there is: CPUs need to map virtual addresses to physical ones. And the more RAM you have, the more management of that memory you need to do (e.g. modern Intel and AMD CPUs have 5 levels of indirection between a virtual and physical address)
But it also caches those address mappings; as long as your TLB is happy, you’re happy. An alternative is to use larger page sizes (A page being the smallest unit of RAM you can map): the larger the page, the less you need to recurse into the page tables to actually find said page, but you can also end up wasting RAM if you’re not careful.
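As a rough sketch of that trade-off (assuming x86-64’s 9-bits-per-level layout; the allocation size is illustrative, not a measurement):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* x86-64 consumes 9 bits of virtual address per page table level, plus a
       12-bit page offset: 4 levels cover 48-bit addresses, 5 levels cover 57. */
    printf("4-level paging: %d-bit virtual addresses\n", 4 * 9 + 12); /* 48 */
    printf("5-level paging: %d-bit virtual addresses\n", 5 * 9 + 12); /* 57 */

    /* Larger pages mean shorter walks, but can waste memory: a 2 MiB huge
       page backing a 100 KiB allocation leaves most of the page unused. */
    uint64_t huge_page = 2ULL << 20;   /* 2 MiB */
    uint64_t alloc     = 100ULL << 10; /* 100 KiB */
    printf("Wasted in one huge page: %llu KiB\n",
           (unsigned long long)((huge_page - alloc) >> 10)); /* 1948 KiB */
    return 0;
}
```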
The_Decryptor@aussie.zone to memes@lemmy.world • 4chan... A part of internet (history) for sure • English · 5 · 3 months ago
moot created the site in response to lowtax banning hentai on Something Awful; I’m not sure I’d call that “innocent”
The lossy mode is a dirty hack, but the lossless mode is genuinely good.
Shame it only supports a subset of what PNG does though.
The_Decryptor@aussie.zone to Fediverse memes@feddit.uk • Bluesky already doing enshittification • English · 2 · 3 months ago
True, but at the same time it’s their app. They already know what profiles you’re looking at, what posts you’re viewing, and the images you view; knowing what links you’re clicking on is just another event handler.
The_Decryptor@aussie.zone to Fediverse memes@feddit.uk • Bluesky already doing enshittification • English · 36 · 3 months ago
Then you have to scan every single existing known post every time a new link is blocked; if you redirect it through a bouncer, it’s a single endpoint to block any link, regardless of the source of the post (since Bluesky is in theory decentralized)
The_Decryptor@aussie.zone to Fediverse memes@feddit.uk • Bluesky already doing enshittification • English · 1011 · 3 months ago
They already know your IP address; you’re using their website/app.
It’s either to track outbound clicks (And potentially block them if they’re harmful; YouTube and Steam do that), or, much less likely, to hide the referrer from the target site (Since browsers have better ways to handle that now, but old ones don’t)
Probably ~15 years ago I knew a guy who used to help run a large local forum. One day, without warning, they got cut off entirely by Google, which decided some of their content wasn’t suitable to run ads against, so that was it: the entire site got blocked.
Ended up having to break the site into 2 separate domains, one advertiser friendly, and one they wouldn’t touch.
The_Decryptor@aussie.zone to Technology@beehaw.org • AI Slop Is a Brute Force Attack on the Algorithms That Control Reality • English · 8 · 4 months ago
For a while Google let you blacklist domains from search results. Fantastic feature, so of course they killed it off.
The_Decryptor@aussie.zone to Self-hosting@slrpnk.net • Mastodon to GoToSocial Migration • English · 2 · 4 months ago
A single-user Mastodon instance has the same issue: any hashtags you can see will be from users you already follow. You need to subscribe to a relay to see more posts (Which GTS currently doesn’t support, so you need to follow tags from something like https://relay.fedi.buzz/)
The_Decryptor@aussie.zone to Technology@beehaw.org • Photographers Are on a Mission to Fix Wikipedia's Famously Bad Celebrity Portraits [404 Media] • English · 10 · 4 months ago
They’d run afoul of the whole “editing your own article” restrictions.
> I’m not sure if there is just some “point of diminishing returns” or whatever where JPG actually becomes more efficient or what.
There is, but it’s at high quality levels. If you’re using WebP for thumbnails or other lower quality situations (Which was the original intended use) then WebP will give you better quality than JPEG for a given filesize.
For lossless use it’s even better: the format is much more limited than PNG, but in the common cases it beats it.
The_Decryptor@aussie.zone to Free and Open Source Software@beehaw.org • Debian Trixie raises x86 minimum requirements to i686. • English · 1 · 5 months ago
> I take that there isn’t much motivation in moving to 128 because it’s big enough; it’s only 8 cycles (?) to fill a 512 (that can’t be right?).
8 cycles would be an eternity on a modern CPU; they can achieve multiple register-sized loads per cycle.
If we do see a CPU with 128 bit addresses anytime soon, it’ll be something like CHERI, where the extra bits are used for flags.
The_Decryptor@aussie.zone to Free and Open Source Software@beehaw.org • Debian Trixie raises x86 minimum requirements to i686. • English · 3 · 5 months ago
I think CHERI is the only real attempt at a 128 bit system, but it uses the upper 64 bits for metadata, so the address space is still 64 bits.
He named the recovery barges after sci-fi spaceships (modern sci-fi, not old Nazi stuff)
It’s pretty clear to me that Elon’s never read a Culture novel, they’re antithetical to him.
https://tardis.wiki/wiki/Skin
Why are they all in past-tense?