Accelerating the grapevine effect
Gossip is an efficient way to share information across large networks and has unexpected applications in solving other mathematical and machine-learning problems.
By looking at classical gossip algorithms from a novel perspective, KAUST Professor Peter Richtarik has found a way to significantly speed up gossip-based information sharing, and in the process, he discovered new applications for this efficient mathematical approach.
Gossip involves the sharing of information between individuals in a network and can be applied mathematically in both human social networks and data networks, such as distributed sensors.
“A network is a collection of nodes, each connected to other nodes via links,” explains Richtarik. “In social networks, for instance, individuals are connected to others via friendship links. In sensor networks, sensors could be connected if they are close enough to communicate over a wireless connection.”
In many real-world applications, it is useful to perform calculations based on the data stored by all nodes across a network, such as computing the average of the private data stored by each node, which is known as the average consensus problem. However, because communication is limited to direct links between nodes, this is very challenging in practice.
“The idea of gossip algorithms is to perform this calculation by pairwise communication among randomly selected friends and to repeat this process until all individuals learn the result,” says Richtarik. “This mimics the way gossip works among humans. It’s surprising that it is possible to show mathematically that this simple communication strategy can solve a global, networkwide problem.”
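The pairwise protocol Richtarik describes can be sketched in a few lines of Python. The ring network, node count, and round count below are illustrative choices, not details from the study:

```python
import random

def randomized_gossip(values, edges, rounds, seed=0):
    """Each round, one randomly chosen linked pair replaces
    both of its values with their average."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# A ring network of 8 nodes, each holding one private number.
n = 8
values = [float(k) for k in range(n)]          # network average is 3.5
edges = [(k, (k + 1) % n) for k in range(n)]   # links between neighbors

result = randomized_gossip(values, edges, rounds=2000)
# After enough rounds, every node holds a value close to the network
# average, even though no node ever communicated beyond its own links.
```

Note that each averaging step preserves the sum of all values, which is why the protocol can only converge to the true network average.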
In collaboration with Nicolas Loizou from the University of Edinburgh in Scotland, Richtarik has been studying randomized gossip algorithms and their connections to other branches of mathematics and computer science.
Their theoretical study revealed a deep connection between randomized gossip algorithms and a branch of mathematics called linear algebra, which involves solving systems of many equations with many unknowns. They also established a direct link with one of the most famous algorithms in machine learning, the stochastic gradient descent method, which is used to train the deep-learning models employed in almost all industrial applications. These insights helped the researchers develop new and much faster gossip protocols.
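The stochastic gradient descent connection can be made concrete: a pairwise-averaging gossip step between nodes i and j is exactly a stochastic gradient step, with step size 1/2, on a least-squares term of the form f(x) = ½(x_i − x_j)². The two node values below are an illustrative example:

```python
# Values at the two chosen nodes i and j.
x = [5.0, 1.0]

# Plain gossip step: both nodes take the average.
gossip = [(x[0] + x[1]) / 2.0] * 2

# SGD step on f(x) = 0.5 * (x_i - x_j)**2 with step size 1/2:
# the gradient is g with respect to x_i and -g with respect to x_j.
g = x[0] - x[1]
sgd = [x[0] - 0.5 * g, x[1] + 0.5 * g]

# Both updates produce the same result: [3.0, 3.0].
```

Because the two updates coincide, the full machinery of stochastic gradient methods, including acceleration tricks, can be carried over to gossip.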
“We were able to develop an accelerated gossip algorithm that needs many fewer gossip rounds to reach the average consensus value,” says Richtarik. “Our method needs only the square root of the number of rounds needed for a classical gossip algorithm, that’s 100 rounds instead of 10,000. We proved this mathematically and observed this acceleration in practice as well.”
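The acceleration in the referenced paper builds on the heavy ball momentum idea. A minimal sketch of that style of update adds a fraction of each node's previous change to the usual gossip step; the momentum value `beta`, the ring network, and the round count here are illustrative assumptions, not the tuned parameters from the paper:

```python
import random

def heavy_ball_gossip(values, edges, rounds, beta=0.5, seed=0):
    """Gossip with heavy-ball-style momentum: after the usual pairwise
    averaging, every node also adds beta times its previous change."""
    rng = random.Random(seed)
    x = list(values)
    prev = list(values)                 # values from the previous round
    for _ in range(rounds):
        i, j = rng.choice(edges)
        new = list(x)
        new[i] = new[j] = (x[i] + x[j]) / 2.0
        # Momentum term: beta * (current - previous), applied per node.
        for k in range(len(new)):
            new[k] += beta * (x[k] - prev[k])
        prev, x = x, new
    return x

n = 8
values = [float(k) for k in range(n)]          # network average is 3.5
edges = [(k, (k + 1) % n) for k in range(n)]   # ring links
result = heavy_ball_gossip(values, edges, rounds=2000)
```

The momentum term sums to zero across the network whenever the previous round preserved the sum, so the protocol still converges to the true average, only faster.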
References

Loizou, N. & Richtarik, P. Accelerated gossip via stochastic heavy ball method. 56th Annual Allerton Conference on Communication, Control, and Computing, 25 October 2018, IEEE, 927–934.