The Internet is not a “series of tubes” … it’s a packet-switched messaging network. The fact that billions of computers are “connected” to a single address doesn’t really mean much other than that they’ve exchanged some messages within the last several minutes (or some other arbitrary window of time).
You’re not wrong: any sizeable web service must distribute across several servers and data centers for performance (e.g. response times and data throughput) and for resiliency (e.g. if a server fails, another one can take over). But the difference is that those data centers have a financial incentive to maximize efficiency in both hardware costs and electricity usage (which includes cooling, etc.). Folks self-hosting Lemmy/Mastodon/etc. servers in their basement have much less incentive, so less effort goes into eking out every ounce of capability per dollar. Even hosting on AWS/Google/Azure/etc. is never going to beat a bespoke data center dedicated to one particular application.
Although they don’t necessarily publish this information, at least a data center can accurately measure its energy usage (which tends to dwarf hardware costs…). Also, newer hardware will always outperform old hardware per watt. I can’t say the same on either count for the server in my basement … it’s 10-year-old hardware running on the same circuit as the beer fridge next to it. I have no idea how much electricity it uses to handle like 2 users. It’s a glorified space heater.
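If you wanted a ballpark, a back-of-envelope estimate is easy enough. The wattage and electricity price below are made-up assumptions for illustration, not measurements of my actual box:

```python
# Rough back-of-envelope estimate of what an always-on home server costs to run.
# All numbers here are assumptions for illustration, not measurements.

AVG_POWER_WATTS = 80      # assumed average draw of a ~10-year-old tower, mostly idle
PRICE_PER_KWH = 0.15      # assumed electricity price in USD per kWh
HOURS_PER_YEAR = 24 * 365

kwh_per_year = AVG_POWER_WATTS * HOURS_PER_YEAR / 1000
cost_per_year = kwh_per_year * PRICE_PER_KWH

print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")
# -> ~701 kWh/year, roughly $105/year (for these assumed numbers)
```

A cheap plug-in power meter would give real numbers, but even this rough math shows the per-user cost of a 2-user instance is nowhere near what a big provider spends per user.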
It’s all about trade-offs. Fediverse applications value open standardization, availability, and long-term resiliency over efficiency, performance, and short-term profits.
The Fediverse is great, but in the short/mid-term, efficiency and ecological impact aren’t things I would expect it to excel at.
If you’ve got some scientific papers handy, I’d love to see them!
The point I’m trying to make is that YouTube has an incentive to design their system so that traffic travels no further than it has to (users closer to a data center hosting the content they want will get it faster). They build data centers close to where their users live. Even so, delivery is likely less energy-intensive than video transcoding, which means large, specialized data centers make a lot of sense for that task. They then distribute the transcoded content to smaller, regional servers to improve the user experience … again, specialized systems for a specialized task.
This means YouTube has already distributed its system across many servers in many regions around the world. In many ways it already captures the efficiency benefits of p2p, but it can centrally coordinate to reduce overall costs in a way that p2p can’t (yet).
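To make the “don’t let traffic travel further than it has to” idea concrete, here’s a toy sketch of routing a user to the closest region. The region names and coordinates are invented, and a real CDN routes on measured latency, load, and cost rather than straight-line distance:

```python
import math

# Toy sketch of "serve users from the nearest region" -- the core idea behind
# putting data centers close to where users live. Region names and coordinates
# are invented for illustration.

REGIONS = {
    "us-east": (39.0, -77.5),
    "eu-west": (53.3, -6.3),
    "ap-southeast": (1.35, 103.8),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_region(user_location):
    """Pick the region whose data center is geographically closest to the user."""
    return min(REGIONS, key=lambda r: haversine_km(user_location, REGIONS[r]))

print(nearest_region((48.9, 2.4)))  # a user near Paris -> "eu-west"
```

The point isn’t the code itself: it’s that YouTube gets to make this placement decision globally and continuously, while a single self-hosted instance serves everyone from wherever its one box happens to sit.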
But the Fediverse will lag in efficiency for exactly the reason you pointed out: it’s running on low-tech, general-purpose hardware. Energy usage has the largest environmental impact by far, and hardware that is specialized (like Application-Specific Integrated Circuits) or newer will always outperform general-purpose or older hardware.