I responded above, but my point kind of was that it doesn’t work that way, but as we rethink content delivery we should also rethink hosting distribution. What I was saying is not a “well gee we should just do this…” type of suggestion, but more an extremely high-level idea for server orchestration over a public/private swarm that may or may not ever be feasible, but definitely doesn’t really exist today.
Imagine if it were somewhat akin to BitTorrent, only the user could voluntarily give remote control to the instance for orchestration management. The orchestration server toggles each node’s contents so that, let’s say, 100% of them carry the most-accessed data (hot content, <100 GB), and the rest is sharded so they each carry 10% of the archived data, keeping each node under ~1 TB total. And the node client is given X number of pinned CPUs that can be used for additional server compute tasks to offload various queries.
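Just to make the sharding idea concrete, here’s a rough sketch (purely illustrative, no such system exists, and every name in it is made up) of what the orchestration server’s assignment step could look like: every volunteer node mirrors the hot set, the archive is cut into ten slices handed out round-robin, and each volunteer pins a couple of CPUs for offloaded work.

```python
# Hypothetical sketch of the orchestration server's assignment step.
# Nothing here is a real API; it only illustrates the "everyone mirrors the
# hot set, each node takes one ~10% archive shard" idea described above.
from dataclasses import dataclass

ARCHIVE_SHARDS = 10  # each node ends up holding ~10% of the archived data


@dataclass
class NodeAssignment:
    node_id: str
    mirrors_hot_set: bool   # every node carries the hot content (<100 GB)
    archive_shard: int      # which 10% slice of the archive this node stores
    pinned_cpus: int        # cores the volunteer allows for offloaded queries


def assign(volunteer_nodes: list[str], cpus_per_node: int = 2) -> list[NodeAssignment]:
    """Hand out archive shards round-robin so coverage stays roughly even."""
    return [
        NodeAssignment(
            node_id=node,
            mirrors_hot_set=True,
            archive_shard=i % ARCHIVE_SHARDS,
            pinned_cpus=cpus_per_node,
        )
        for i, node in enumerate(volunteer_nodes)
    ]


if __name__ == "__main__":
    for a in assign([f"volunteer-{i}" for i in range(5)]):
        print(a)
```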
See, I’m fully aware this doesn’t really exist in this form. But thinking of it like a Kubernetes cluster or an HA web client, it seems like it should be possible somehow to build this in a way where the client really only needs to install and say yes to contribute. If we could cut it down to that level, then you could start serving the site like a P2P BitTorrent swarm, and these power-user clients could become nodes.