Worker pools
Let’s say you have a large number of data elements (for instance, image files) and you want to apply the same logic to each of them. You can write a function that processes one instance of the input, and then call this function in a for loop. Such a program processes the input elements sequentially, so if each element takes t seconds to process, all inputs will be completed in n.t seconds, n being the number of inputs.
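As a concrete baseline, here is a minimal sketch of the sequential version. The process function and the integer inputs are placeholders standing in for whatever per-element logic and data you actually have:

```go
package main

import (
	"fmt"
	"time"
)

// process stands in for the per-element work (e.g., transforming an image file).
func process(input int) int {
	time.Sleep(100 * time.Millisecond) // simulate t seconds of work
	return input * 2
}

func main() {
	inputs := []int{1, 2, 3, 4, 5}
	start := time.Now()
	// Sequential version: elements are processed one after another,
	// so the total time grows as n.t.
	for _, in := range inputs {
		fmt.Println(process(in))
	}
	fmt.Println("elapsed:", time.Since(start))
}
```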
If you want to increase throughput by using concurrent programming, you can create a pool of worker goroutines. You feed the next input to an idle member of the worker pool, and while that input is being processed, you assign the subsequent input to another member. If you have p logical processors (which can be cores of physical processors) running in parallel, the results can be available in as little as n.t/p seconds (this is a theoretical best case, because the distribution of load among parallel processes is not always perfect, and...