{"forum": "B1gdkxHFDH", "submission_url": "https://openreview.net/forum?id=B1gdkxHFDH", "submission_content": {"title": "Training individually fair ML models with sensitive subspace robustness", "authors": ["Mikhail Yurochkin", "Amanda Bower", "Yuekai Sun"], "authorids": ["mikhail.yurochkin@ibm.com", "amandarg@umich.edu", "yuekai@umich.edu"], "keywords": ["fairness", "adversarial robustness"], "TL;DR": "Algorithm for training individually fair classifier using adversarial robustness", "abstract": "We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases. ", "pdf": "/pdf/c58026f0eb4878500263d20e9fb3ceb1ba26c7ca.pdf", "code": "https://github.com/IBM/sensitive-subspace-robustness", "paperhash": "yurochkin|training_individually_fair_ml_models_with_sensitive_subspace_robustness", "_bibtex": "@inproceedings{\nYurochkin2020Training,\ntitle={Training individually fair ML models with sensitive subspace robustness},\nauthor={Mikhail Yurochkin and Amanda Bower and Yuekai Sun},\nbooktitle={International Conference on Learning Representations},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gdkxHFDH}\n}", "full_presentation_video": "", "original_pdf": "/attachment/07792d74600d9211881f43803eb5646f637e5dbe.pdf", "appendix": "", "poster": "", "spotlight_video": "", "slides": ""}, "submission_cdate": 1569439712236, "submission_tcdate": 1569439712236, "submission_tmdate": 1583912043284, "submission_ddate": null, "review_id": ["SJlzds52FS", "r1xmBTKkqH", "Syx0-bXNcB"], "review_url": ["https://openreview.net/forum?id=B1gdkxHFDH¬eId=SJlzds52FS", "https://openreview.net/forum?id=B1gdkxHFDH¬eId=r1xmBTKkqH", "https://openreview.net/forum?id=B1gdkxHFDH¬eId=Syx0-bXNcB"], "review_cdate": [1571756906180, 1571949882980, 1572249861786], "review_tcdate": [1571756906180, 1571949882980, 1572249861786], "review_tmdate": [1575301774173, 1572972387261, 1572972387216], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper2068/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper2068/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper2068/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gdkxHFDH", "B1gdkxHFDH", "B1gdkxHFDH"], "review_content": [{"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #3", "review": "This paper proposes a new definition of algorithmic fairness that is based on the idea of individual fairness. They then present an algorithm that will provably find an ML model that satisfies the fairness constraint (if such a model exists in the search space). 
One needed ingredient for the fairness constraint is a distance function (or \"metric\") in the input space that captures the fact that some features should be irrelevant to the classification task. That is, under this distance function, inputs that differ only in sensitive attributes like race or gender should be close by. The idea of the fairness constraint is that by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased. Thus, this fairness constraint is very much related to robustness.\n\n---\n\nOverall, I like the basic idea of the paper but I found the presentation lacking.\n\nI do think their idea for a fairness constraint is very interesting, but it gets too bogged down in the details of the mathematical theory. They mention Dwork et al. at the beginning but don't really compare it to their idea in detail, even though I think there would be a lot of interesting things to say about this. For example, the definition by Dwork et al. seems to imply that some labels in the training set might be incorrect, whereas the definition in this paper does not seem to imply that (which I think is a good thing).\n\nThe main problem in section 2 is that the choice of distance function is barely discussed, although that's what's most important to make the result fair. For all the mathematical rigor in section 2, the paragraph arguing that the defined constraint encourages fairness is somewhat weak. Here, a comparison to other fairness definitions and an in-depth discussion of the distance function would help.\n\n(In general I felt that this part was more trying to impress the reader than trying to explain, but I will try to not hold it against this paper.)\n\nAs it is, I feel the paper cannot be completely understood without reading the appendix.\n\nThere is also this sentence at the bottom of page 5: \"A small gap implies the investigator cannot significantly increase the loss by moving samples from $P_*$ to comparable samples.\" This should have been at the beginning of section 2 in order to motivate the derivation.\n\nIn the experiments, I'm not sure how useful the result of the word embedding experiment really is. Either someone is interested in the sentiment associated with names, in which case your method renders the predicted sentiments useless, or someone is not interested in the sentiment associated with names and your method doesn't even have any effect.\n\nFinal point: while I like the idea of the balanced TPR, I think the name is a bit misleading because, for example, in the binary case it is the average of the TPR and the TNR. Did you invent this terminology? If so, might I suggest another name like balanced accuracy?\n\nI would change the score (upwards) if the following things are addressed:\n\n- make it easier to understand the main point of the paper\n- make more of a comparison to Dwork et al. or other fairness definitions\n- fix the following minor mistakes\n\nMinor comments:\n\n- page 2, beginning of section 2: you use the word \"regulator\" here once but everywhere else you use \"investigator\"\n- equation 2.1: as far as I can tell $M$ is not defined anywhere; you might mean $\\Delta (\\mathcal{Z})$\n- page 3, sentence before Eq 2.3: what does the $\#$ symbol mean?\n- page 3, sentence before Eq 2.3: what is $T$? 
is it $T_\\lambda$?\n- Algorithm 2: what is the difference between $\\lambda^*_t$ and $\\hat{\\lambda}_t$?\n- page 7: you used a backslash between \"90%\" and \"10%\" and \"train\" and \"test\". That would traditionally be a normal slash.\n- in appendix B: the explanation for what $P_{ran(A)}$ means should be closer to the first usage\n- in the references, you list one paper twice (the one by Zhang et al.)\n\nEDIT: changed the score after looking at the revised version", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I do not know much about this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I made a quick assessment of this paper.", "review_assessment:_checking_correctness_of_experiments": "I did not assess the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I did not assess the derivations or theory.", "review": "\nSummary\nThe authors propose training to optimize individual fairness using the sensitive subspace robustness (SenSR) algorithm.\n\nDecision\nOverall, I recommend borderline, as the paper seems legitimate in formulating the individual fairness problem as a minimax robust optimization problem. The authors show improvements on gender and racial biases compared to approaches that are not individually fair. However, I think some sections are hard to follow for people not in the field.\n\nSupporting arguments:\n1. At the end of p. 3, it is not clear to me why solving the worst case is better.\n2. Though this paper studies individual fairness, can it also work for group fairness? I am not sure whether this is the only work in this direction (baselines are not for individual fairness).\n3. Some of the metrics in the experiments are not precisely defined, such as Race gap, Cuis. gap, S-Con, and GR-Con. It is hard to follow from the text description.\n4. Some baseline models are not clearly defined, such as \u201cProject\u201d in Table 1.\n5. Not sure how Section 3 connects with the rest of the paper.\n\nAdditional feedback:\n1. Missing reference: https://arxiv.org/abs/1907.12059\n2. What is the TV distance in the introduction?\n"}, {"experience_assessment": "I have read many papers in this area.", "rating": "8: Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "General:\nThe authors propose a method to train individually fair ML models by pursuing robustness of the similarity loss function among comparable data points. The main algorithmic tool for training is borrowed from recent work on adversarial training, and the paper also gives theoretical analyses of the convergence properties of their method.\n\nPros:\n1. They make the point that individual fairness is important.\n2. The paper proposes a practical algorithm for achieving robustness and individual fairness. Formulating the main criterion for checking fairness as Eq. (2.1), the paper takes a sensible route of using duality and the minimax optimization problem (2.4).\n3. 
The experimental results are compelling \u2013 while the proposed method loses a bit of accuracy, it shows very good individual fairness under their metric.\n\nCons & Questions:\n1. What is the empirical convergence property of the algorithm? How long does it take to train for the experiments given?\n2. It seems like the main tools for the algorithm and theory are borrowed from other papers in adversarial training, e.g., (Madry 2017). Are there any algorithmic alternatives for solving (2.4)?\n3. Why do you use d_z^2 instead of d_z for defining c(z_1,z_2)?\n4. What happens when you use more complex models than a 1-layer neural net?"}], "comment_id": ["B1xTGh_ziB", "ryxUvjOMoH", "HJx13oOfjS", "HyeYKndMoS"], "comment_cdate": [1573190676691, 1573190494371, 1573190567347, 1573190784785], "comment_tcdate": [1573190676691, 1573190494371, 1573190567347, 1573190784785], "comment_tmdate": [1573252339345, 1573191013961, 1573190987458, 1573190784785], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper2068/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2068/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2068/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper2068/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to Reviewer 3", "comment": "Thank you for the feedback. We address the key issues you mentioned below, and we have updated the draft accordingly.\n\nYou are correct that the fairness constraint is not exactly Dwork et al.'s notion of individual fairness, but it is very similar. We added an explicit statement of our definition in section 2 (see (2.2)). We also added a passage to section 2 comparing the two notions. In summary, we modify Dwork et al.'s definition in two ways: (i) instead of requiring the output of the ML model to be similar on all inputs comparable to a training example, we require the output to be similar to the training label; (ii) we use the increase in loss value to measure the difference between the outputs of a predictor on the different training sets instead of a metric on the output space of the predictor. The main benefits of these modifications are (i) this modified notion of individual fairness encodes not only (individual) fairness but also accuracy (as you noted in your comments), (ii) it is possible to optimize the fairness constraint efficiently, and (iii) we can show this modified notion of individual fairness generalizes (see section 3 for formal statements). The unfortunate side effect is the additional mathematical details.\n\nThe detailed description of the metric is in Appendix B. To help readers find the description, we added references to it where necessary. We also added a summary of how we learn the metric near the beginning of section 2.\n\nThe resume screening example at the beginning of section 2 is our motivation for the subsequent derivations; we added a bit to the first paragraph of section 2 to make the connection between the example and the derivations clear.\n\nIn the word embedding experiment, the application we have in mind is when someone needs to evaluate the sentiment of sentences that can contain negative/positive sentiment words and names at the same time. The sentiment of a sentence can be evaluated by averaging the sentiments of the corresponding words. 
This application is motivated by the paper \"Mining and summarizing customer reviews\" by Hu, M. and Liu, B. (2004). The training and testing datasets of positive and negative words also originate in their paper. From the perspective of individual fairness, when summarizing customer reviews, our sentiment prediction for two hypothetical restaurant reviews \"My friend Adam liked their pizza\" and \"My friend Tashika liked their pizza\" should be the same. As our experiment shows, this is achieved with SenSR. The resulting classifier is good at identifying the sentiment of words and at the same time does not discriminate against names. It also reduces discrimination beyond names, e.g. \"Let\u2019s go get Italian food\" and \"Let\u2019s go get Mexican food\" have almost identical sentiment predictions with SenSR but are severely biased in favor of the Italian food when using the baseline classifier.\n\nWe borrowed the term balanced TPR from Romanov et al. (2019), but we are not particularly tied to the term. We changed all instances of balanced TPR to balanced accuracy.\n\nWe corrected the minor mistakes you mentioned.\n\nRefs: Romanov et al., What's in a Name? Reducing Bias in Bios without Access to Protected Attributes, NAACL 2019."}, {"title": "Response to Reviewer 1", "comment": "Thank you for the feedback. We address the Cons & Questions in what follows.\n\n1. On a laptop without a GPU, training SenSR on the sentiment data (experiment in Section 4.1) takes about 6 minutes.\n\n2. You are correct that the proposed algorithm is similar to adversarial training. We consider this a benefit of our approach because it allows practitioners to borrow algorithms for adversarial training to train fair ML models. Theoretically speaking, the main distinction of our approach is a generalization error bound for data-driven Wasserstein distributionally robust optimization (DRO). In most prior work on Wasserstein DRO, the metric is known, so there is no need to study the effect of error in the metric on generalization. In our application, the metric is learned from data, and we show that generalization degrades gracefully with error in the metric (see the third term on the right side of (3.2)).\n\n3. We use d_z^2 instead of d_z because it is a common choice in Wasserstein DRO. For example, Sinha et al. also use the squared Euclidean distance.\n\n4. To answer the question about more complex models, we trained a deep neural network with 10 hidden layers (100 neurons each) on the sentiment prediction task (using exactly the same hyperparameters as in the paper). SenSR continues to be effective: test accuracy is 94.3% and race gap is 0.2.\n\nRefs: Sinha et al., Certifying Some Distributional Robustness with Principled Adversarial Training, ICLR 2018."}, {"title": "Response to Reviewer 2", "comment": "Thank you for the feedback. We address your concerns below.\n\n1. The objective that we minimize is the worst-case performance of a predictor on hypothetical training sets that are similar (only differ in irrelevant features) to the observed training set. This leads to fairness because it penalizes predictors that perform well on the observed training set but poorly on similar hypothetical training sets. For example, an unfair resume screening model may perform very well on a set of training resumes from mostly white men, but poorly on resumes from women or minorities. By considering hypothetical sets of resumes from women or minorities during training, the objective we minimize penalizes models that only perform well on white men.\n\n2. 
You can certainly encode group fairness by picking a metric that declares a pair of inputs similar whenever they are from the same group, but this is tangential to our goal of operationalizing individual fairness. We have baselines and metrics for group fairness because group fairness is the prevalent notion in the literature.\n\n3. Each of the experiments has a dedicated \"Comparison metrics\" paragraph. We clarified the definitions of the race and gender gaps in the corresponding paragraph. They are the differences between the average logits output by the classifier evaluated at Caucasian vs. African-American names for the Race gap and at Male vs. Female names for the Gender gap. The Cuisine gap is the difference between the logits of the embedded sentences \"Let\u2019s go get Italian food\" and \"Let\u2019s go get Mexican food\". Spouse Consistency (S-Con.) and Gender and Race Consistency (GR-Con.) quantify the individual fairness intuition, i.e. how often the classifier prediction remains unchanged when we evaluate it on a hypothetical \"counterfactual\" example created by changing features such as gender while keeping all other features unchanged. For these individual fairness metrics we did not write a mathematical definition, but we are happy to add one if the reviewer believes it would improve clarity.\n\n4. In our experiments we discuss all baselines in the corresponding \"Results\" paragraphs. Project is the pre-processing baseline where we project the data onto the orthogonal complement of the sensitive subspace and then train a regular classifier on the projected data. SenSR outperforms this baseline, suggesting that simply projecting out the sensitive subspace is not sufficient and that robustness to unfair perturbations through SenSR gives better results in terms of fairness. This is analogous to the observation made in the group fairness literature that simply excluding the protected attribute is not sufficient to achieve fairness.\n\n5. The main point of section 3 is to show that the fairness constraint generalizes; i.e. if you train a model with SenSR, and it performs well on all hypothetical training sets that are similar to the observed training set (i.e. it seems fair on the training data), then it also performs well with high probability (WHP) on all hypothetical test sets that are similar to a test set (i.e. it is fair WHP at test time).\n\nWe added the missing reference and clarified what the TV distance is in the introduction."}, {"title": "General Response", "comment": "We thank all the reviewers for the thoughtful comments. We answer each reviewer\u2019s questions individually and we have updated the draft according to the feedback."}], "comment_replyto": ["SJlzds52FS", "Syx0-bXNcB", "r1xmBTKkqH", "B1gdkxHFDH"], "comment_url": ["https://openreview.net/forum?id=B1gdkxHFDH&noteId=B1xTGh_ziB", "https://openreview.net/forum?id=B1gdkxHFDH&noteId=ryxUvjOMoH", "https://openreview.net/forum?id=B1gdkxHFDH&noteId=HJx13oOfjS", "https://openreview.net/forum?id=B1gdkxHFDH&noteId=HyeYKndMoS"], "meta_review_cdate": 1576798739682, "meta_review_tcdate": 1576798739682, "meta_review_tmdate": 1576800896607, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper addresses the individual fairness scenario (treating similar users similarly) and proposes a new definition of algorithmic fairness that is based on the idea of robustness, i.e. 
by perturbing the inputs (while keeping them close with respect to the distance function), the loss of the model cannot be significantly increased.\nAll reviewers and the AC agree that this work is clearly of interest to ICLR; however, the reviewers have noted the following potential weaknesses: (1) presentation clarity -- see R3\u2019s detailed suggestions, e.g., the comparison to Dwork et al., and see R2\u2019s comments on how to improve; (2) empirical evaluations -- see R1\u2019s question about using more complex models and R3\u2019s question on the usefulness of the word embeddings.\nWe are pleased to report that, based on the authors' response with extra experiments and explanations, R3 has raised the score to weak accept. All reviewers and the AC agree that the most crucial concerns have been addressed in the rebuttal, and the paper can be accepted - congratulations to the authors! The authors are strongly urged to improve presentation clarity and to include the supporting empirical evidence when preparing the final revision.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gdkxHFDH&noteId=qWLi0UczvQ"], "decision": "Accept (Spotlight)"}