Summary
In this chapter, we covered the two key developments that have resulted from the recent growth in accessible computational power at all levels. First, we looked at the importance of models and how this growth has enabled ML to see considerably wider practical usage, with increases in computational power continuously producing stronger models that surpass manually created white-box systems. We called this the what of FL – ML is what we are trying to perform using FL.
Then, we took a step back to look at how edge devices have reached a stage where complex computations can be performed within timeframes reasonable for real-world applications, such as the text recommendation models on our phones. We called this the where of FL – the setting where we want to perform ML.
From the what and the where, we get the intersection of these two developments – the use of ML models directly on edge devices. Remember that the standard central training approach...