{"forum": "EiT7GQAj-T", "submission_url": "https://openreview.net/forum?id=sdSOFa3Y09", "submission_content": {"authorids": ["blendowski@imi.uni-luebeck.de", "heinrich@imi.uni-luebeck.de"], "abstract": "Current deep learning methods are based on the repeated, expensive application of convolutions with parameter-intensive weight matrices. In this work, we present a novel concept that enables the application of differentiable random ferns in end-to-end networks. It can then be used as multiplication-free convolutional layer alternative in deep network architectures. Our experiments on the binary classification task of the TUPAC'16 challenge demonstrate improved results over the state-of-the-art binary XNOR net and only slightly worse performance than its 2x more parameter intensive floating point CNN counterpart. ", "paper_type": "methodological development", "authors": ["Max Blendowski", "Mattias P. Heinrich"], "track": "short paper", "keywords": ["End-To-End Trainable Ferns", "Network Efficiency", "Binary Embedding"], "title": "Learning to map between ferns with differentiable binary embedding networks", "paperhash": "blendowski|learning_to_map_between_ferns_with_differentiable_binary_embedding_networks", "TL;DR": "We demonstrate how to implement differentiable random ferns as energy & parameter efficient alternative to convolutional layers.", "pdf": "/pdf/6619c86530b4e7f9240efb49efb6050aa164992c.pdf", "_bibtex": "@inproceedings{\nblendowski2020learning,\ntitle={Learning to map between ferns with differentiable binary embedding networks},\nauthor={Max Blendowski and Mattias P. Heinrich},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=sdSOFa3Y09}\n}"}, "submission_cdate": 1579955689643, "submission_tcdate": 1579955689643, "submission_tmdate": 1587172199340, "submission_ddate": null, "review_id": ["TbetNRIrCwQ", "Pann-l2fWF", "iEZkK0sQE6", "Kym0pN-yb-"], "review_url": ["https://openreview.net/forum?id=sdSOFa3Y09&noteId=TbetNRIrCwQ", "https://openreview.net/forum?id=sdSOFa3Y09&noteId=Pann-l2fWF", "https://openreview.net/forum?id=sdSOFa3Y09&noteId=iEZkK0sQE6", "https://openreview.net/forum?id=sdSOFa3Y09&noteId=Kym0pN-yb-"], "review_cdate": [1584587282369, 1584134729330, 1584071092602, 1583798156978], "review_tcdate": [1584587282369, 1584134729330, 1584071092602, 1583798156978], "review_tmdate": [1585229444741, 1585229444226, 1585229443709, 1585229443201], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper133/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper133/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper133/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper133/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["EiT7GQAj-T", "EiT7GQAj-T", "EiT7GQAj-T", "EiT7GQAj-T"], "review_content": [{"title": "Differentiable ferns for end-to-end training of architectures", "review": "The paper proposes embedding ferns as an alternative to convolutional layers for deep learning architectures. It departs from standard random ferns and variants in that it is a drop-in replacement and allows for end-to-end training of architectures.\n\nThe abstract is well structured and relatively easy to follow, and overall it can be interesting for MIDL.\n\nWhat is gained from moving from a convolutional layer to a fern? 
Ultimately it looks like the computational complexity of the proposed layer, and the memory footprint is going to be similar to that of a standard convolutional layer. There are $c_{out}\\times \\text{#ferns} \\times 2^{depth}$ trainable parameters (plus the fixed ones) where a convolutional layer would have at worst, $c_{out}\\times c_{in}\\times k^2$.\nIt is not clear whether there are settings of the depth and number of ferns such that the number of parameters, computational complexity and/or memory usage will be reduced by an order of magnitude at little cost in performance, since even a depth of $3$ leads to similar number of parameters as a $3\\times 3$ kernel.\n\nIn terms of operations, floating point multiplications from convolutions are replaced by $\\Vert \\, \\cdot \\Vert^2$, which expands one way or the other to similar floating point multiplication or squaring, plus the $tanh$.\n\nWhat is the energy consumption gain attributable to? Also why not report other metrics that relate to computational/memory complexity?", "rating": "3: Weak accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Not so clear...", "review": "Summary: Authors propose to replace the matrix-multiplication part of a convolutional layer with a differentiable random fern (as defined in \u00d6zuysal et al. IEEE TPAMI 2009). It is shown that this method reduces by two the number of parameters, with respect to a standard CNN, and preserves almost the same performance in terms of classification accuracy.\n\nRemarks:\n1- the paper is quite difficult to read and understand. Many concepts such as ferns, IM2COL, UFM and EmbeddingBag-Layer are not sufficiently explained in the paper. \n\n2- A fern, as introduced by \u00d6zuysal and colleagues, is a small set of binary tests that is used with a Semi-Naive Bayesian approach in a problem of classification. It's not clear how exactly ferns can replace matrix-multiplication. Authors should better explain this point.\n\n3- Many choices are not well motivated or explained such as: the use of tanh, the offset s^k, the definition of w_u^k\n\n4- Results seem interesting and it's a pity that the paper is not so clear. Authors probably need more space to better explain their algorithm and all related concepts. ", "rating": "2: Weak reject", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Learning to map between ferns", "review": "The main contribution of the paper is a novel approach of taking advantage of ferns and achieving the (almost) same performance of a Vanilla Net at TUPAC challenge with only half of network size and 1/60th of energy consumption. Removal of floating point multiplications are attributed to using a Look-Up Table that holds the trainable weights, whose indices come from binary string comparison between feature vectors, hence introducing non-linearity to the system. Overall, the paper is concise and benefits great from the fact that the authors implemented their novel approach on TUPAC challenge, which is a publicly known/studied dataset contest. \nThe authors results are tested on a public challenge, and their findings on the public challenge validate their methodological improvements. \nThe authors reduce the net size substantially by removing multiplications with a Look-Up Table and improves the accuracy by learning the feature embeddings instead of using histogram. 
Their approach allows their implementation to achieve the (almost) same accuracy as Vanilla Net and outperform the XNOR net while having a substantially small network/parameter size. \n\nSome implementation details were given without explanation. This could be attributed to the fact that the paper was submitted for a short paper track and did only have a 3-page limit. Nevertheless, the authors don't explain why they chose tanh for the threshold subtraction. They also don't explain the reasoning behind choosing ferns 3 for the depth of the ferns they use at every layer. \n\nSpatial convolutions are integrated thanks to IM2COL operator. The authors might be valuable to explain the benefits that they got from that operator more to help the reader with her understanding. \n\nThe LUT size is #ferns * m. This LUT becomes a hash table where the lookup is in constant time, but the storage is still needed. The paper was unclear whether they included the size of the look-up table in their parameter calculation in Table 1 since building that LUT will still require space. One still might say choosing 24 ferns at every layer with a depth 3 will not consume substantial space for the generation of the lookup table, but greater number of layers and ferns might make this approach converge to the vanilla net in terms of size if one wants to scale it up. \n\nAlso is there a particular reason this method is used or benefits medical machine learning?\n", "rating": "3: Weak accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "interesting topic but vague presentation", "review": "This paper relies on Fig. 1 to convey most parts of the important ideas. Unfortunately, the figure doesn't show all the key components clearly.\n\nthe main innovation seems to be replacing a multiplication operation with a lookup table + trainable weighted sums. Since convolution can be implemented as im2col followed by a matrix multiplication, I feel it's more appropriate to claim it's a fast convolution by caching some binary encoded results. It's not clear how much memory it takes and how the energy consumption calculated in terms of both memory access and arithmetic operations. The paper also claims \"without using floating-point multiplications\", but there're floating number weighted sums as shown in Fig. 1.", "rating": "1: Strong reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1585282164618, "meta_review_tcdate": 1585282164618, "meta_review_tmdate": 1585282164618, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper133 by AreaChair1", "meta_review_metareview": "The majority of reviewers acknowledge that the idea of using random ferns as a replacement for convolutions is worthwhile to investigate. There are some concerns regarding the presentation and the lack of clarity where gains in energy consumption come from. 
Overall, it seems the methodological idea is interesting and suitable to be discussed at MIDL 2020.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper133/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=sdSOFa3Y09&noteId=8AUnttgUZjH"], "decision": "reject"}
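The reviews in the record above describe the core mechanism as a lookup-table alternative to convolution: each fern applies a few binary threshold tests to an (im2col-unfolded) patch feature vector, packs the test outcomes into an index, and gathers a trainable weight vector from a LUT (an embedding table), so the feature transform needs no floating-point matrix multiplication. Below is a minimal illustrative PyTorch sketch of that idea, not the authors' implementation; the class and parameter names, the fixed random projections, the sigmoid-based soft confidence (the paper reportedly uses tanh), and the mean-pooling over ferns are all assumptions made for the example.

import torch
import torch.nn as nn

class FernLookupLayer(nn.Module):
    """Illustrative sketch (assumed names/shapes, not the authors' code):
    `depth` binary tests per fern index a trainable look-up table."""

    def __init__(self, c_in, c_out, n_ferns=24, depth=3):
        super().__init__()
        self.depth, self.n_ferns = depth, n_ferns
        # fixed random projections deciding which input features each binary test reads
        self.register_buffer("proj", torch.randn(n_ferns, depth, c_in))
        # trainable per-test offsets (the s^k thresholds mentioned by the reviewers)
        self.thresh = nn.Parameter(torch.zeros(n_ferns, depth))
        # trainable LUT: one c_out vector per (fern, bit-pattern) cell
        self.lut = nn.Embedding(n_ferns * 2 ** depth, c_out)

    def forward(self, x):
        # x: (B, c_in) patch descriptors, e.g. columns produced by an im2col/unfold step
        tests = torch.einsum("bc,fdc->bfd", x, self.proj) - self.thresh   # (B, F, D)
        bits = (tests > 0).float()                                        # hard binary codes
        powers = 2 ** torch.arange(self.depth, device=x.device)
        idx = (bits * powers).sum(-1).long()                              # cell index per fern
        idx = idx + torch.arange(self.n_ferns, device=x.device) * 2 ** self.depth
        # soft "confidence" of the chosen cell keeps the thresholds trainable end-to-end
        soft = torch.sigmoid(tests)
        conf = torch.where(bits.bool(), soft, 1.0 - soft).prod(dim=-1)    # (B, F)
        out = self.lut(idx) * conf.unsqueeze(-1)                          # gather, no matmul
        return out.mean(dim=1)                                            # (B, c_out)

# toy usage: 64 patch vectors with 27 features (e.g. 3x3x3 im2col columns) -> 16 output channels
layer = FernLookupLayer(c_in=27, c_out=16)
y = layer(torch.randn(64, 27))

With depth 3 and 24 ferns this LUT holds n_ferns * 2^depth * c_out trainable entries, which matches the reviewers' point that the parameter count is comparable to a 3x3 convolution; the saving claimed in the record comes from replacing multiply-accumulates with index lookups, not from fewer parameters.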