In distributed learning, a central server trains a model according to updates provided by nodes holding local data samples. In the presence of one or more malicious nodes sending incorrect information (a Byzantine adversary), standard algorithms for model training, such as stochastic gradient descent (SGD), fail to converge.
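
The failure mode can be seen with a minimal sketch (not from the source; the synthetic gradients, dimensions, and the coordinate-wise median used as a robust alternative are illustrative assumptions): when the server averages worker updates, a single Byzantine worker can pull the aggregate arbitrarily far from the true gradient, whereas a robust aggregator is far less affected.

```python
# Illustrative sketch: averaging vs. a robust aggregator under one Byzantine worker.
import numpy as np

rng = np.random.default_rng(0)
d, n_honest = 5, 9

# Honest workers report noisy estimates of the true gradient.
true_grad = np.ones(d)
honest = true_grad + 0.1 * rng.standard_normal((n_honest, d))

# One Byzantine worker reports an arbitrary (here, huge) vector.
byzantine = np.full((1, d), 1e6)
reports = np.vstack([honest, byzantine])

mean_agg = reports.mean(axis=0)          # standard SGD aggregation (averaging)
median_agg = np.median(reports, axis=0)  # a simple robust alternative (assumed here)

print("mean aggregate:  ", mean_agg)     # dominated by the single attacker
print("median aggregate:", median_agg)   # stays close to the true gradient
```

Running the sketch shows the averaged update blown up to the order of 1e5 in every coordinate, while the median-based aggregate remains near the honest gradient; this is the intuition behind why plain SGD fails and why Byzantine-resilient aggregation rules are needed.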