{"forum": "B1euhoAcKX", "submission_url": "https://openreview.net/forum?id=B1euhoAcKX", "submission_content": {"title": "DppNet: Approximating Determinantal Point Processes with Deep Networks", "abstract": "Determinantal Point Processes (DPPs) provide an elegant and versatile way to sample sets of items that balance the point-wise quality with the set-wise diversity of selected items. For this reason, they have gained prominence in many machine learning applications that rely on subset selection. However, sampling from a DPP over a ground set of size N is a costly operation, requiring in general an O(N^3) preprocessing cost and an O(Nk^3) sampling cost for subsets of size k. We approach this problem by introducing DppNets: generative deep models that produce DPP-like samples for arbitrary ground sets. We develop an inhibitive attention mechanism based on transformer networks that captures a notion of dissimilarity between feature vectors. We show theoretically that such an approximation is sensible as it maintains the guarantees of inhibition or dissimilarity that makes DPP so powerful and unique. Empirically, we demonstrate that samples from our model receive high likelihood under the more expensive DPP alternative.", "keywords": ["dpp", "submodularity", "determinant"], "authorids": ["zelda@csail.mit.edu", "jsnoek@google.com", "yovadia@google.com"], "authors": ["Zelda Mariet", "Jasper Snoek", "Yaniv Ovadia"], "TL;DR": "We approximate Determinantal Point Processes with neural nets; we justify our model theoretically and empirically.", "pdf": "/pdf/00280754d08b102a91067f3ba3553bc9d89999a2.pdf", "paperhash": "mariet|dppnet_approximating_determinantal_point_processes_with_deep_networks", "_bibtex": "@misc{\nmariet2019dppnet,\ntitle={DppNet: Approximating Determinantal Point Processes with Deep Networks},\nauthor={Zelda Mariet and Jasper Snoek and Yaniv Ovadia},\nyear={2019},\nurl={https://openreview.net/forum?id=B1euhoAcKX},\n}"}, "submission_cdate": 1538087855731, "submission_tcdate": 1538087855731, "submission_tmdate": 1545355412280, "submission_ddate": null, "review_id": ["B1lR8paT3X", "r1x_WCGan7", "HylaX8OF2m"], "review_url": ["https://openreview.net/forum?id=B1euhoAcKX&noteId=B1lR8paT3X", "https://openreview.net/forum?id=B1euhoAcKX&noteId=r1x_WCGan7", "https://openreview.net/forum?id=B1euhoAcKX&noteId=HylaX8OF2m"], "review_cdate": [1541426518356, 1541381632334, 1541142052899], "review_tcdate": [1541426518356, 1541381632334, 1541142052899], "review_tmdate": [1541533741697, 1541533741494, 1541533741288], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1euhoAcKX", "B1euhoAcKX", "B1euhoAcKX"], "review_content": [{"title": "comparison with faster algorithms for sampling from DPPs", "review": "Determinantal Point Processes provide an efficient and elegant way to sample a subset of diverse items from a ground set. This has found applications in summarization, matrix approximation, minibatch selection. However, the naive algorithm for DPP takes time O(N^3), where N is the size of the ground set. 
The authors provide an alternative model, DppNet, for sampling diverse items that preserves the elegant mathematical properties (closure under conditioning, log-submodularity) of DPPs while having faster sampling algorithms.\n\nThe authors need to compare the performance of DppNet against faster alternatives for sampling from DPPs, e.g., https://arxiv.org/pdf/1509.01618.pdf, as well as compare on applications where there is a significant gap between uniform sampling and DPPs (because these are the applications where DPPs are crucial). The examples in Table 2 and Table 3 do not address this.", "rating": "3: Clear rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Interesting paper with good ideas but limited applicability (in its current form)", "review": "This paper proposes a scalable algorithm for sampling from DppNets, a proposed model which approximates the distribution of a DPP. The approach builds upon a proposed inhibitive attention mechanism and transformer networks.\n\nThe proposed approach and focus on sampling are original as far as I can tell. The problem is also important to parts of the community, as DPPs (or similar distributions) are used more and more frequently. However, the applicability of the proposed approach is limited, as it is unclear how to deal with varying ground set sizes \u2014 the authors briefly discuss this issue in their conclusion, proposing to circumvent the problem by subsampling (this can however be problematic, either requiring sampling from a DPP or incurring a high probability of missing \u201cimportant\u201d items).\n\nFurthermore, the evaluation method used is \u201cbiased\u201d in favor of DppNets, as the numerical results evaluate the likelihood of samples under the very DPP that the DppNet is trained to approximate. This makes it difficult to draw conclusions from the presented results. I understand that this evaluation is used because there is no standard way of measuring the diversity of a subset of items, but it is also clear that \u201cno\u201d baseline can be competitive. One possibility to overcome this bias would be to consider a downstream task and evaluate performance on that task.\n\nFurthermore, I suggest making certain aspects of the paper more explicit and providing additional details. For instance, I would suggest spelling out a training algorithm and providing equations for the training and evaluation criteria. Please comment on the cost of training (constantly computing the marginal probabilities for training should be quite expensive) and the convergence of the training (maybe show a training curve; this would be interesting in the light of Theorem 1 and Corollary 1).\n\nCertain parts of the paper are unclear or details are missing:\n* Table 3: What is \u201cDPP Gao\u201d?\n* How are the results for k-medoids computed (including the standard error)? Are these results obtained by computing multiple k-medoids solutions with differing initial conditions?\n* In the paper you say: \u201cFurthermore, greedily sampling the mode from the DPPNET achieves a better NLL than DPP samples themselves.\u201d What are the implications of this? What is the NLL of an (approximate) mode of the original DPP?
Is the point you want to make that the greedy approximation works well?", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "This paper proposes DppNet, which approximates determinantal point processes with deep networks via an inhibitive attention mechanism. The authors provide a theoretical analysis showing that, under certain conditions, the DppNet is log-submodular. Further, experiments are conducted to demonstrate its performance.", "review": "Quality (5/10): This paper proposes DppNet, which approximates determinantal point processes with deep networks via an inhibitive attention mechanism. The authors provide a theoretical analysis showing that, under certain conditions, the DppNet is log-submodular.\n\nClarity (9/10): This paper is well written and provides a clear figure to demonstrate the network architecture.\n\nOriginality (6/10): This paper is mainly based on the work [Vaswani et al., Attention Is All You Need, 2017]. It computes dissimilarities by subtracting the attention weights of the original work from one, and then samples a subset with an unrolled recurrent neural network.\n\nSignificance (5/10): This paper uses negative log-likelihood as the measure to compare DppNet with other methods. Without a downstream application, it is difficult to gauge the improvement of this method over the others.\n\nPros:\n(1) This paper is well written and provides a figure that clearly demonstrates the network architecture.\n\n(2) This paper provides a deep-learning approach to sampling a subset from a whole data set while reducing the computational complexity.\n\nSome comments:\n(1) Figure 4 shows the sampled digits from the Uniform distribution, DppNet (with Mode), and DPP. How about the sampled digits from k-Medoids? Providing the sampled digits from k-Medoids would make the experiments more complete.\n\n(2) The objective of DppNet is to minimize the negative log-likelihood. The DPP and k-Medoids have other motivations and do not directly optimize the negative log-likelihood. This may be why DppNet performs better on negative log-likelihood, even better than the DPP itself. Could the authors provide some other measures (like the visual comparison in Figure 4) to compare these methods?\n\n(3) Does GenDpp Mode in Table 2 mean the greedy mode in Algorithm 1?
An explicit definition would make this clearer.", "rating": "5: Marginally below acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["SJgXOvI90m", "r1gyaOndAQ", "r1gBYd2O07", "HJeUZuh_07", "BkxkYv2ORX"], "comment_cdate": [1543296875027, 1543190710975, 1543190653087, 1543190526331, 1543190390530], "comment_tcdate": [1543296875027, 1543190710975, 1543190653087, 1543190526331, 1543190390530], "comment_tmdate": [1543296875027, 1543190710975, 1543190653087, 1543190526331, 1543190390530], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper723/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper723/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper723/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper723/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper723/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Additional experiments have been included", "comment": "Following the recommendation of the reviewers, we have added two experiments to our paper:\n\n- A timing comparison between MCMC sampling and DppNet, which shows that DppNet is significantly faster than MCMC sampling.\n- An evaluation of DppNet on kernel reconstruction via the Nystrom method (a downstream task for which DPPs are known to be successful), compared against standard and MCMC sampling. In practice, we see that DppNet's performance on this task matches or exceeds that of the other baselines.\n\nPut together, these experiments show that DppNet is significantly faster than MCMC while being competitive with (or outperforming) MCMC on downstream DPP tasks."}, {"title": "Clarifications", "comment": "Thank you for your comments; we hope the clarifications below answer your questions.\n\n(1) Sampled digits from k-medoids: Thank you for catching this oversight; we will include the k-medoids samples in the updated paper.\n(2) Other measures of performance: we will add an additional evaluation on a downstream task, using DppNet and other baseline methods to sample columns to reconstruct a large kernel with the Nystrom method.\n(3) GenDpp mode: Yes, this indicates the greedy mode of Algorithm 1; we will clarify this."}, {"title": "Several clarifications", "comment": "Thank you for your suggestions regarding the clarity of the paper; we will augment our work with all suggested algorithms and equations.\n\nLimited applicability to variable ground set sizes: this is a drawback of our current approach. However, one can easily circumvent this problem in cases where an upper bound N_max on ground set sizes is known: train a DppNet with ground set size N_max and, in all cases where N <= N_max, pad the missing items with placeholder 0 vectors. Algorithm 1 can be trivially modified to take this into account and ensure that these dummy items are not selected.\n\nEvaluation biased towards DppNets: our goal is to show that DppNet approximates DPP-like samples (much) better than other reasonable approximations. We did not originally include downstream tasks, as DPPs have been accepted as a state-of-the-art method for diverse sampling in ML applications (see e.g. recent work such as https://dl.acm.org/citation.cfm?id=3272018 for real-world applications).
However, as mentioned to Reviewer 1, we will include a downstream task (kernel reconstruction via the Nystrom method) to further support our claims.\n\nCost of evaluating the marginal probabilities: indeed, this is the costly part of our algorithm (which only impacts training time); this cost is mitigated by two aspects:\n- Given S, we can compute the probability P(S U {i} | S) for all i simultaneously with no overhead (Eq. 2).\n- When training a DppNet over varying ground sets, this cost is offset by the fact that we are not learning one DPP but a whole class of DPPs simultaneously.\n\n* \u201cDPP Gao\u201d: This is a typo; it should read \u201cDPP Goal\u201d.\n* K-medoids: Yes, we run the algorithm multiple times with different initializations.\n* Greedy sampling: Yes, we are stating that greedy sampling with DppNet yields realistic DPP samples, as evidenced by the high DPP log-likelihoods. This is a significant advantage over standard DPPs, since the greedy DPP mode algorithm is costly even with recent improvements [NIPS \u201818, Hulu].\n"}, {"title": "Expected runtimes of various DPP sampling methods; only DppNet benefits from hardware acceleration.", "comment": "Thank you for your feedback. We will include a comparison to other approximate sampling methods, such as the coresets method you mentioned, in an updated version of the paper.\n\nHowever, we would like to emphasize the following:\n- The runtime of the coreset sampling method is O(Nk^3), which is the same as the runtime of the dual DPP sampling discussed in Section 2.\n- The coreset approach does not have a hardware accelerator-friendly implementation, as it requires iteratively computing elementary symmetric polynomials and many sequential operations; the same holds for MCMC sampling methods.\nFor this reason, we expect DppNet to have a drastically faster runtime even when compared to such methods on small datasets.\n\nRegarding comparing DppNet to other methods on applications where there is a gap between uniform and DPP sampling, we agree that doing so will increase the impact of our paper. We are planning to augment our experimental section with an evaluation of all methods on the task of reconstructing large kernels via the Nystrom method."}, {"title": "Clarification re: experimental section", "comment": "We thank the reviewers for their detailed comments. We would like to clarify the aim of our experimental section: over the past years, DPPs have proven crucial to modeling diversity and quality trade-offs in subset selection problems (recommender systems, kernel reconstruction, \u2026). For this reason, our experiments aim to show that DppNets approximate DPPs better than other reasonable baselines (which we show by comparing NLLs under the true DPP).
Crucially, our experiments do not aim to show that DppNet generates more diverse subsets: showing that DppNet is close to DPP samples is sufficient.\n\nHowever, based on the feedback we have received, we plan on incorporating additional experiments into an updated version of the paper to show that DppNet\u2019s DPP-like samples imply good performance on downstream tasks where DPPs have been shown to be valuable."}], "comment_replyto": ["BkxkYv2ORX", "HylaX8OF2m", "r1x_WCGan7", "B1lR8paT3X", "B1euhoAcKX"], "comment_url": ["https://openreview.net/forum?id=B1euhoAcKX&noteId=SJgXOvI90m", "https://openreview.net/forum?id=B1euhoAcKX&noteId=r1gyaOndAQ", "https://openreview.net/forum?id=B1euhoAcKX&noteId=r1gBYd2O07", "https://openreview.net/forum?id=B1euhoAcKX&noteId=HJeUZuh_07", "https://openreview.net/forum?id=B1euhoAcKX&noteId=BkxkYv2ORX"], "meta_review_cdate": 1544632786412, "meta_review_tcdate": 1544632786412, "meta_review_tmdate": 1545354502150, "meta_review_ddate ": null, "meta_review_title": "Limited applicability", "meta_review_metareview": "The paper addresses the complexity issue of Determinantal Point Processes via generative deep models.\n\nThe reviewers and AC note the critical limitation of the paper's applicability to variable ground set sizes, and the authors' rebuttal is not convincing enough.\n\nThe AC thinks the proposed method has potential and is interesting, but decided that the work needs further development before publication.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper723/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1euhoAcKX&noteId=H1gcAF3AkV"], "decision": "Reject"}
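
The thread above leans on two technical points: the O(Nk^3) cost of exact (dual or coreset) DPP sampling, and the authors' remark that, given a set S, the marginals P(S U {i} | S) can be computed for all i simultaneously (their Eq. 2). To make both concrete, here is a minimal NumPy sketch of the standard greedy mode (MAP) approximation for an L-ensemble DPP. This is textbook greedy selection via Schur complements, not the paper's Algorithm 1 (which samples from a learned network), and the function name is hypothetical.

```python
import numpy as np

def greedy_dpp_mode(L, k):
    """Hypothetical helper: greedy MAP approximation for a DPP with
    L-ensemble kernel L (N x N, PSD). At each step, the log-det gain of
    adding item i to the current set S is
        log det(L_{S+i}) - log det(L_S) = log(L_ii - L_{iS} L_S^{-1} L_{Si}),
    a Schur complement that is evaluated for all i at once below."""
    selected = []
    for _ in range(k):
        if selected:
            L_S_inv = np.linalg.inv(L[np.ix_(selected, selected)])
            C = L[:, selected]  # (N, |S|) cross-kernel between all items and S
            # Diagonal of the quadratic form C L_S^{-1} C^T: one gain per item i.
            gains = np.diag(L) - np.einsum("ij,jk,ik->i", C, L_S_inv, C)
        else:
            gains = np.diag(L).copy()
        gains[selected] = -np.inf  # forbid re-selecting chosen items
        selected.append(int(np.argmax(gains)))
    return selected
```

Each round costs O(|S|^3) for the inverse plus O(N|S|^2) for the quadratic forms, so k rounds give the O(Nk^3) figure quoted in the reviews; the vectorized `gains` line is the "all marginals at once" computation the authors describe.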
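
The downstream task promised in the rebuttals, kernel reconstruction via the Nystrom method, has a simple closed form: given landmark columns idx chosen by any sampler (uniform, k-medoids, exact DPP, MCMC, or DppNet), the kernel is approximated as C W^+ C^T with C = K[:, idx] and W = K[idx, idx]. A minimal sketch under the same NumPy assumption; the function name is again hypothetical.

```python
import numpy as np

def nystrom_reconstruction(K, idx):
    """Hypothetical helper: Nystrom approximation of an N x N PSD kernel K
    from landmark columns idx, i.e. K_hat = C @ pinv(W) @ C.T."""
    C = K[:, idx]             # (N, k) landmark columns
    W = K[np.ix_(idx, idx)]   # (k, k) landmark submatrix
    return C @ np.linalg.pinv(W) @ C.T

# Samplers can then be compared by reconstruction error, e.g.:
# idx = greedy_dpp_mode(K, k)  # or uniform / k-medoids / MCMC indices
# err = np.linalg.norm(K - nystrom_reconstruction(K, idx), ord="fro")
```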
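
Finally, the workaround for variable ground set sizes sketched in the author response (train at a fixed size N_max and fill missing items with placeholder 0 vectors) is ordinary padding plus a validity mask that the sampler consults to avoid the dummy items. A sketch under those stated assumptions; the helper is not from the paper.

```python
import numpy as np

def pad_ground_set(X, n_max):
    """Hypothetical helper: pad an (N, d) feature matrix with zero rows up
    to (n_max, d), returning a boolean mask that marks the real items so a
    sampler can exclude the padding from selection."""
    n, d = X.shape
    assert n <= n_max, "ground set exceeds the trained size N_max"
    padded = np.zeros((n_max, d), dtype=X.dtype)
    padded[:n] = X
    mask = np.zeros(n_max, dtype=bool)
    mask[:n] = True
    return padded, mask
```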