{"forum": "B1edvs05Y7", "submission_url": "https://openreview.net/forum?id=B1edvs05Y7", "submission_content": {"title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", "abstract": "Currently, progressively larger deep neural networks are trained on ever growing data corpora. In result, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general. \n%These challenges become even more pressing, as the number of computation nodes increases. \nTo mitigate this problem we propose Sparse Binary Compression (SBC), a compression framework that allows for a drastic reduction of communication cost for distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits. By doing so, our method also allows us to smoothly trade-off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. \n%We use tools from information theory to reason why SBC can achieve the striking compression rates observed in the experiments.\nOur experiments show, that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet in the same number of iterations to the baseline accuracy, using $\\times 3531$ less bits or train it to a $1\\%$ lower accuracy using $\\times 37208$ less bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client. 
Our method also achieves state-of-the-art compression rates in a Federated Learning setting with 400 clients.", "keywords": [], "authorids": ["felix.sattler@hhi.fraunhofer.de", "simon.wiedemann@hhi.fraunhofer.de", "klaus-robert.mueller@tu-berlin.de", "wojciech.samek@hhi.fraunhofer.de"], "authors": ["Felix Sattler", "Simon Wiedemann", "Klaus-Robert M\u00fcller", "Wojciech Samek"], "pdf": "/pdf/79a831ab8097889e3fd0194e2ca435da6c069550.pdf", "paperhash": "sattler|sparse_binary_compression_towards_distributed_deep_learning_with_minimal_communication", "_bibtex": "@misc{\nsattler2019sparse,\ntitle={Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication},\nauthor={Felix Sattler and Simon Wiedemann and Klaus-Robert M\u00fcller and Wojciech Samek},\nyear={2019},\nurl={https://openreview.net/forum?id=B1edvs05Y7},\n}"}, "submission_cdate": 1538087775593, "submission_tcdate": 1538087775593, "submission_tmdate": 1545355387235, "submission_ddate": null, "review_id": ["rJgh-xo53X", "Skgb2YE5n7", "SkeeEbivo7"], "review_url": ["https://openreview.net/forum?id=B1edvs05Y7&noteId=rJgh-xo53X", "https://openreview.net/forum?id=B1edvs05Y7&noteId=Skgb2YE5n7", "https://openreview.net/forum?id=B1edvs05Y7&noteId=SkeeEbivo7"], "review_cdate": [1541218308237, 1541192105414, 1539973415987], "review_tcdate": [1541218308237, 1541192105414, 1539973415987], "review_tmdate": [1541534136543, 1541534136333, 1541534132461], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1edvs05Y7", "B1edvs05Y7", "B1edvs05Y7"], "review_content": [{"title": "important problem, limited novelty, significance not clearly established because reporting focuses on means (bit/communication metric) not ends (optimization time)", "review": "Lowering costs for communicating weights between workers is an important intermediate goal for distributed optimization, since presumably it can limit the parallelization achievable once available bandwidth is saturated. This work reports reasonable approaches to try to overcome this through a mix of techniques, though none in particular seem especially novel or surprising. For example, the abstract claims a novel binarization method, but what is described in the paper does not seem especially novel (e.g. zero the negative weights and replace positives with their mean, if the negatives' mean < the positives' mean, else vice versa); but more importantly, the experiments don't explore/support why this approach is any better (or when worse) than other schemes.\n\nTo its credit, the paper provides experiments on data (ImageNet and CIFAR, not just MNIST) and models (e.g. ResNet50) that can support reasonable claims of being representative of modern optimization tasks. What the paper is most lacking, though, is a clear and convincing argument that the large bit compression rates claimed actually lead to significant time speedups of the resulting optimization. The paper seems to just assume that lowering communication costs is inherently good and this goodness is proportional to the rate of compression. But as Table 3 shows, there IS some degradation in accuracy for this reduction in communication overhead. 
Whether this is worth it depends critically on whether the lower overhead actually allows optimization to speed up significantly, but the training time seems not to be mentioned anywhere in this paper. Thus, in its current form, this paper does not clearly establish the significance and limits of the authors' approach. Given that the novelty does not appear high, the value of this paper is mainly as an engineering analysis of some design tradeoffs. And viewed that way, this paper is a bit disappointing in that tradeoffs are not acknowledged/examined much. E.g., readers cannot tell from these experiments when the proposed approach will fail to work well -- the limitations are not clearly established (all results provided are cast in a positive light).", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "I do not see this paper having enough contribution and novelty to be accepted", "review": "The paper once again looks at the problem of reducing the communication requirement for implementing distributed optimization techniques, in particular SGD. This problem has been looked at from multiple angles by many authors. And although there are many unanswered questions in this area, I do not see the authors providing any compelling contribution to answering those questions. A big chunk of the paper is devoted to expressing some shallow theorems, for which in some cases I do not even see the importance or connection to the main point of the paper; see my comments below. In terms of the techniques for reducing the communication burden, the authors seem to just put all the other approaches together with minimal novelty.\n\nMore detailed comments:\n- I do not really understand what the authors mean by noise damping. I would appreciate it if they could clarify that point. This seems to be a very important point, as the model they propose for the noise in the process is basically based on this notion. It is a great failure on the authors' part that such a crucial notion in their paper is not clearly described. \n- The model that is proposed for the noise is too strong and too simplistic. Do the authors have any evidence to back this up?\n- Theorem 2.1 is not a theorem. The result is super shallow and relatively trivial. \n- In Corollary 2.1 it seems that no matter what the randomness in the system is, the algorithm is going to converge to the same solution. This is not true even for non-strongly convex objectives, let alone non-convex problems, where there are so many stationary solutions and whatnot.\n- With regard to Fig. 3 (and other related figures in the appendix) and the discussion on the multiplicative nature of compression: the figure does not seem to suggest a multiplicative nature in all the regimes. It seems to hold in the high-compression / low-frequency-communication regime, but on the other side of the spectrum it does not seem to hold very strongly.\n- The residual accumulation only applies when all the nodes update in all the iterations. I do not believe this would generalize to federated learning, where nodes do not participate in all the updates. I do not know if the authors have noted this point in their federated learning experiments.\n- Theorem 3.1 is very poorly stated. Other than that, it is shallow and, in my opinion, irrelevant. 
What argument in favor of the authors' claims could be built based on the result of Theorem 3.1?\n- One major point that is missing in the experiments (and probably in the experiments of other papers on the same topic) is to see how much all these compressions affect the speed of learning in realistic scenarios. Note that in realistic scenarios many things other than communication could affect the convergence time.", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Good results, however, I have questions about the algorithm", "review": "In the paper, the authors combine the federated averaging method, sparse compression, and quantization, and propose the Sparse Binary Compression method for deep learning optimization. Going beyond previous methods, the method in this paper achieves excellent results in the experiments. The paper is written very clearly and is easy to follow. \n\nThe following are my concerns:\n1. In the introduction, the authors emphasize that there is a huge compression in the upstream communication. What about the downstream communication? The server should also send gradients to the clients, and the averaged gradient is not compressed anymore. \n\n2. I think the method used in the paper is not federated learning. Federated learning averages the models from multiple clients; however, in the paper, the proposed method averages gradients instead. This is called local updates, and it is a well-known tradeoff between communication and computation in convex optimization.\n\n3. I want to point out that a similar local update (federated learning) technique has already been explored and shown not to work well. In [1] the authors showed that naively deploying local updates may lead to divergence. Therefore, the number of local update iterations is constrained to be very small, e.g. less than 64; otherwise, it leads to divergence. I also got similar results in my own experience. The temporal sparsity in the paper looks very small. I am curious why it works in this paper.\n\n4. Another issue is the results in the experiments. It is easy to find that ResNet50 can get 76.2% on ImageNet according to [2]; however, the baseline is 73.7% in the paper. I didn't check the results for ResNet18 on CIFAR-10 or ResNet34 on CIFAR-100, because people usually don't use bottleneck blocks for CIFAR.\n\n5. In Table 2, federated averaging always has worse results than the other compared methods. Could you explain the reason? If using federated averaging is harmful to the accuracy, it should also affect the results of the proposed method. \n\n\n[1] Zhang, Sixin, Anna E. Choromanska, and Yann LeCun. \"Deep learning with elastic averaging SGD.\" Advances in Neural Information Processing Systems. 2015.\n[2] https://github.com/D-X-Y/ResNeXt-DenseNet", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["B1lrCrFN3Q"], "comment_cdate": [1540818381157], "comment_tcdate": [1540818381157], "comment_tmdate": [1540818381157], "comment_readers": [["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper275/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}], "comment_content": [{"title": "Answer to Comment", "comment": "We thank you for your comment and for your interest in our paper. 
We believe that we made an important contribution by demonstrating that the distinction between Federated Learning and Parallel Training, formerly treated as two separate worlds, is somewhat arbitrary and misleading, and that better results can be achieved by combining the best approaches from both of these worlds. For instance, contrary to the paradigms suggested in previous literature, communication delay is not a well-suited approach for communication reduction in the Federated Learning setting. Much higher compression gains are achievable if sparsification is applied instead. On the other hand, communication delay can speed up parallel training as it allows the individual computation devices to perform multiple steps of SGD without interruption. Our proposed method can adapt smoothly to these different settings and achieves much higher compression gains than previously reported (e.g. \u00d737208 on ImageNet).\nMoreover, it can also adapt to and perform optimally under different constraints that limit the communication structure, such as network bandwidth and latency, (SGD-)computation time, as well as temporal inhomogeneities therein.\nOn top of that, we propose a novel binarization method and Golomb encoding. Combined, they provide another \u00d76 compression on top of what is achieved by communication delay and sparsification alone, without harming the convergence speed."}], "comment_replyto": ["S1gEi0Cxh7"], "comment_url": ["https://openreview.net/forum?id=B1edvs05Y7&noteId=B1lrCrFN3Q"], "meta_review_cdate": 1544778658775, "meta_review_tcdate": 1544778658775, "meta_review_tmdate": 1545354523145, "meta_review_ddate ": null, "meta_review_title": "Lack of novelty and strong empirical results; no rebuttal", "meta_review_metareview": "This paper proposes a sparse binary compression method for distributed training of neural networks with minimal communication cost. Unfortunately, the proposed approach is neither novel nor supported by strong experiments. The authors did not provide a rebuttal to the reviewers' concerns. \n", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper275/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1edvs05Y7&noteId=SJgsomgWl4"], "decision": "Reject"}
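
The abstract, Reviewer 1's parenthetical summary of the binarization step, and the authors' reply above together describe the upstream compression pipeline: accumulate weight updates locally (communication delay), keep only the largest-magnitude entries (sparsification), replace the surviving entries by a single signed mean value (binarization), and encode the positions of the non-zero entries (Golomb encoding). The following minimal NumPy sketch reconstructs that pipeline from those descriptions only; the function names, the sparsity level, and the power-of-two Golomb parameter are hypothetical choices, not the authors' reference implementation.

```python
# Illustrative sketch of the compression steps described above (assumptions,
# not the authors' code): top-k sparsification, mean-magnitude binarization,
# and Golomb/Rice coding of the non-zero positions.
import numpy as np


def sparsify_top_k(residual, sparsity=0.001):
    """Keep only the largest-magnitude fraction of entries; zero the rest."""
    k = max(1, int(sparsity * residual.size))
    flat = residual.ravel()
    threshold = np.partition(np.abs(flat), -k)[-k]
    sparse = np.where(np.abs(flat) >= threshold, flat, 0.0)
    return sparse.reshape(residual.shape)


def binarize(sparse_update):
    """Binarization as summarized by Reviewer 1: keep either the positive or
    the negative part (whichever has the larger mean magnitude) and replace
    its entries by that mean; everything else becomes zero."""
    pos = sparse_update[sparse_update > 0]
    neg = sparse_update[sparse_update < 0]
    mu_pos = pos.mean() if pos.size else 0.0
    mu_neg = np.abs(neg).mean() if neg.size else 0.0
    out = np.zeros_like(sparse_update)
    if mu_pos >= mu_neg:
        out[sparse_update > 0] = mu_pos
    else:
        out[sparse_update < 0] = -mu_neg
    return out


def golomb_encode_positions(indices, b=16):
    """Golomb-encode the gaps between consecutive non-zero positions.
    Assumes b is a power of two (Rice coding); returns a '0'/'1' string
    purely for illustration."""
    m = b.bit_length() - 1                               # remainder bits
    bits, prev = [], -1
    for idx in indices:
        gap = int(idx) - prev - 1
        prev = int(idx)
        bits.append("1" * (gap // b) + "0")              # unary quotient
        if m:
            bits.append(format(gap % b, "0" + str(m) + "b"))  # binary remainder
    return "".join(bits)


# Example: compress one accumulated residual vector.
rng = np.random.default_rng(0)
residual = rng.normal(size=10_000)
binary_update = binarize(sparsify_top_k(residual, sparsity=0.01))
bitstream = golomb_encode_positions(np.flatnonzero(binary_update))
# The update is now fully described by one signed mean value plus the encoded
# positions, which is where the large upstream compression rates quoted in
# the abstract come from.
```

A power-of-two Golomb parameter is used here only because it keeps the encoder a few lines long; the authors' reply mentions Golomb encoding without specifying how the parameter is chosen, and the downstream path and partial client participation raised by the reviewers are outside the scope of this sketch.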