Volunteer Computing Could Help Expand Data Centers Without Large Negative Effects
Recently, data centers have come under fire for the negative side effects of their operation. Critics claim that they consume large amounts of water, drive up energy prices, produce excessive noise, and more, and as a result of these complaints, work is now being done to limit their construction.
This would be all well and good were it not for the hugely positive impact that data centers have on the economy. Ever since the digital revolution, the internet has depended on servers housed in data centers to function at all. This means that if we were to limit their construction because of these negative side effects, real as they are, we would also forgo the progress that additional computation makes possible. In economics, this forgone benefit is known as opportunity cost. And while many would argue that this opportunity cost is too high to justify restricting construction, I argue something else entirely: we can expand effective data center capacity without relying on building new ones.
In simple terms, data centers house large computers that store, process, and transmit information just like any other computer might, only with far more memory and processing power than your typical desktop. Other than their aggregated capacity, there isn’t anything particularly special about these machines; they still have storage, RAM, CPUs, GPUs, and all of the other parts that constitute a personal computer. This means we could feasibly assemble the equivalent of a data center out of the computers we already use every day. So why don’t we?
It turns out that, since a computer almost never runs at maximum capacity all the time, its excess capacity can be offloaded to a sort of decentralized data center. By pooling this spare computing power, we can construct a network made up of thousands of personal computers. This is known as volunteer computing, and it takes advantage of the positive network effects that arise when you chain together a large set of machines. It is already used in several fields, such as 3D rendering with the SheepIt render farm and academic research with Berkeley’s BOINC platform. The most famous of these systems is Folding@home, a distributed protein-folding project; in 2020, it became the first computing system to reach exaflop performance, making it the most powerful computing system in the world at the time.
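To make the idea concrete, here is a minimal sketch of the loop a volunteer client might run: donate cycles only while the machine is idle, pull a work unit, do the work, and report back. Everything here is hypothetical, and the in-process queue stands in for the real coordinating server a project like BOINC would use.

```python
import hashlib
import queue
import time

# Hypothetical stand-in for the coordinating server: in a real system,
# work units would arrive over the network, not from a local queue.
work_units = queue.Queue()
for n in range(5):
    work_units.put({"id": n, "payload": f"block-{n}"})

def machine_is_idle() -> bool:
    """Placeholder idleness check. A real client would look at CPU load
    and user activity before volunteering any cycles."""
    return True

def compute(payload: str) -> str:
    """Stand-in for real scientific work (say, one protein-folding step);
    hashing the payload keeps the example runnable anywhere."""
    return hashlib.sha256(payload.encode()).hexdigest()

# Core volunteer loop: only donate compute while the owner isn't using
# the machine, then report each result back to the coordinator.
while not work_units.empty():
    if not machine_is_idle():
        time.sleep(60)  # back off until the owner's own work is done
        continue
    unit = work_units.get()
    result = compute(unit["payload"])
    print(f"work unit {unit['id']} -> {result[:16]}...")
```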
There are a few ways to incentivize someone to enroll in a system like this. As in the examples above, a user could receive the benefit of direct access to the system and its pooled computing power. This, however, is a niche incentive that applies to only a small number of people. A simpler approach is to pay people for the computing power they donate. Smaller systems like the Akash and Golem networks already do this, paying users for the compute they provide, but given their small client bases, they have yet to prove especially effective at earning consumers extra income.
It seems strange, then, that if these systems offer so many benefits, they aren’t more popular. Much of this has to do with the fact that decentralized systems tend to lack standardization and consistency. A computer enrolled in a volunteer computing service can only offer its power when its owner allows it to, and when it “donates” compute, it brings an entirely unique set of specifications that the system must then find a way to mold into something the rest of the network can use.
A simple analogy would be a railroad network built entirely from donated pieces of wood and metal. These have not been vetted beforehand, and thus need to be reformatted and reshaped before they can be properly used. Without this reshaping, there is no guarantee that a train can run on them safely. At this point, why would the railroad company want the donated material when it could source its own? The reformatting process, the costs of not knowing when people will donate, and the general uncertainty involved make the system almost not worth it.
But certainly, the railroad companies would be unwise to reject all donations; not everything offered is junk. Pieces that need only to be cleaned of rust and sanded down carry lower costs than other, more mangled bits. The same is true of computers: not all are created equal, not all are used equally, and as a result, running the system is costly. For example, it is standard practice in volunteer computing to perform every calculation twice to verify that the first result was not mistaken. This, among other problems, already hinders efficiency considerably, reducing the incentives the systems can afford to offer users for enrolling.
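As a rough sketch of that overhead, the snippet below issues the same work unit to two simulated volunteers, one of which may return a corrupted answer, and accepts the result only when the two copies agree. The names and failure rate are invented for illustration.

```python
import hashlib
import random

def run_on_volunteer(payload: str, *, faulty: bool = False) -> str:
    """Simulated volunteer node. The `faulty` flag stands in for the
    flipped bits, failing hardware, or outright cheating that real
    projects have to guard against."""
    result = hashlib.sha256(payload.encode()).hexdigest()
    if faulty and random.random() < 0.3:
        return result[::-1]  # corrupted answer
    return result

def verified_result(payload: str) -> str | None:
    """Run every calculation twice and accept it only when both copies
    agree -- the duplication overhead described above."""
    first = run_on_volunteer(payload)
    second = run_on_volunteer(payload, faulty=True)
    return first if first == second else None  # mismatch: reissue the unit

for unit in ["block-0", "block-1", "block-2"]:
    outcome = verified_result(unit)
    print(unit, "verified" if outcome else "mismatch, redoing")
```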
The best way to reconcile this would be a set of standards for participants in the system: minimum hardware requirements, uptime guarantees, and software that can properly handle whatever variability remains. By interfacing with a centralized network, these volunteer computing systems could then take on the highly parallelizable computations that larger systems should not be wasting their capacity on, freeing data centers from some of their load and passing a share of their benefits to consumers.
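Here is a sketch of what such an enrollment gate might look like in code. The thresholds are assumptions chosen for illustration, not figures from any real project.

```python
from dataclasses import dataclass

# Illustrative minimum standards; a real project would tune these.
MIN_RAM_GB = 8
MIN_CORES = 4
MIN_UPTIME = 0.95  # fraction of pledged hours actually delivered

@dataclass
class Volunteer:
    ram_gb: int
    cores: int
    uptime: float  # observed fraction of time the node honored its pledge

def meets_standards(v: Volunteer) -> bool:
    """Gate enrollment on minimum hardware and demonstrated reliability,
    so the scheduler faces far less heterogeneity."""
    return (v.ram_gb >= MIN_RAM_GB
            and v.cores >= MIN_CORES
            and v.uptime >= MIN_UPTIME)

candidates = [
    Volunteer(ram_gb=16, cores=8, uptime=0.99),
    Volunteer(ram_gb=4, cores=2, uptime=0.99),    # underpowered
    Volunteer(ram_gb=32, cores=12, uptime=0.60),  # unreliable
]
eligible = [v for v in candidates if meets_standards(v)]
print(f"{len(eligible)} of {len(candidates)} candidates meet the standard")
```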
It is important to note, however, that this would not replace data centers, but bolster them. Not all tasks are suited to volunteer computing; some need a greater degree of centralization, whether because of how the computation must be structured or because of the sheer size of the data set. In the latter case, the cost of transferring the data to thousands of volunteers disqualifies the task outright, making local data centers necessary. Volunteer computing would only take on the simple, parallel tasks that data centers should not be wasting their time on.
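A quick back-of-envelope calculation shows why. With illustrative (assumed) bandwidth figures, shipping a 10 TB dataset to a volunteer over a residential connection takes over a week, while moving it across a data center’s internal network takes minutes:

```python
# Back-of-envelope on why data-heavy tasks stay centralized.
# All figures are assumptions for illustration, not measurements.
dataset_tb = 10          # size of the input data
volunteer_mbps = 100     # typical residential downlink
datacenter_gbps = 100    # internal data center link

dataset_bits = dataset_tb * 8e12  # 1 TB = 8e12 bits

hours_to_volunteer = dataset_bits / (volunteer_mbps * 1e6) / 3600
hours_in_datacenter = dataset_bits / (datacenter_gbps * 1e9) / 3600

print(f"ship to one volunteer:  ~{hours_to_volunteer:,.0f} hours")  # ~222
print(f"move within the center: ~{hours_in_datacenter:.2f} hours")  # ~0.22
```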
From an economic perspective, the benefits are twofold:
1. Effective data center capacity expands without a proportional increase in the local negative externalities associated with new construction.
2. Consumers are paid for the computing power they volunteer, meaning the social cost is further offset by benefits that accrue directly to consumers.

