review_id,review,rating,decision graph20_1_1,"""The paper presents the design and implementation of a web-based online learning platform that allows its users to share their feedback in real-time. The platform was used to teach an online HCI class and some of the results from class assignments were presented as examples to demonstrate the learning outcomes. Although the platform was nicely designed and incorporated instant feedback and participator design elements, the research question is not clearly articulated and the evaluation does not sufficiently demonstrate the effect of instant feedback and participatory design. First of all, the research question is not explicitly articulated. The paper claims the novelty of their approach to online learning is the introduction of the participatory design component and instant online feedback. It seems that the participator design component was that students took turns to be users of others designs and offered feedback during an online HCI class. The instant online feedback referred to the feedback that students offered to student presenters in real-time during the class. If the research question is to examine whether having instant feedback and participatory design can help students learn concepts and methods better in the class online, then the evaluation should be designed to demonstrate the effect of the instant feedback mechanism on students learning experiences. For example, such evaluation can be an experimental study that compares the learning outcomes of the same class offered with and without an instant feedback mechanism or participatory design component. If a comparative experiment is too costly to conduct, then an alternative study design is to show how the instant feedback mechanism allowed students to better incorporate feedback into their learning processes (e.g., their design assignments). What challenges did the student presenters encounter when incorporating the feedback? How did the platform allow them to better incorporate the feedback? How did they handle the challenges? For example, it is possible that the audience provided contradictory/competing suggestions that would require student presenters to think about the tradeoffs when incorporating the feedback. The answers to the above questions can show whether and how instant feedback with the participatory design affected students learning outcomes. However, the examples shown in Figures 3, 4, and 5 only showed students final learning outcomes but did not demonstrate the effect of the instant feedback and participatory design on these students learning experiences because we do know how students would perform without using participatory design and instant feedback. It is also unclear how participatory design affected students learning experiences. For example, what were the practices and challenges that students had when playing the role of users of others designs? How had playing the role of users promoted or hindered learning? Unfortunately, the Results on the Projects only provides some students feedback on others designs but has not provided information to help evaluate the effect of participatory design. In sum, although the online learning platform seems to be nicely built and participatory design and instant feedback has been incorporated into the platform, the research question is clearly articulated and sufficiently evaluated. """,1,0 graph20_1_2,"""The paper presents a promising web-based system for collecting instant feedback on design artefacts. A case study describes nicely how the system works for artefacts of different complexity. Visual material further supports the understanding of the interface and study. The paper is easy to read even if at times too verbose. While certainly interesting, I encourage the authors to improve: Related Work: This work should be put in context of current literature. While the summary on page 2 forecasts a literature overview, it is missing in the paper. Several work in the HCI and CSCW community has focused on digital classrooms and peer/crowd feedback on design artefacts (e.g. work by A. Xu or Lan Li among others) in the recent years. Contrasting existing approaches to the chosen one, e.g. benefits of peer-novices compared to crowd-novices or experts, would support the motivation of the contribution. Motivation: The authors refer to a common HCI process. HCI as a field includes several perspectives on the interaction between human and machines, which ranges from social studies to hardware improvements to psychological impacts on creativity or work, to name a few. The source used in this paper discusses HCI on the example of a usability engineering process of user interfaces. Similarly, the process described in the current submission rather reflects the interaction design process. The authors should iterate over the wording (e.g. HCI process, interactive HCI communication) in their paper and clarify them. Methodology: - The paper describes three different project for evaluating the system. It is not clear why the feedback process was changed between the first two and the last project, i.e. not all feedback was collected after the presentations. - The description of the evaluation method is missing. Results: - The results do not reveal the benefits of using instant online feedback in comparison to face-to-face feedback of a professional (teacher) or group feedback in the classroom. How did the presented tool impacted these results? - The selected quotes are referring to common problems of interaction design novices, which would have likely be pointed out by teachers or peers in classroom situations. The authors should prefer quotes that underline the benefits of the selected approach. - The work would benefit from a more thorough and detailed analysis. Discussion: - The authors conclude that the presented tool helps to 'improve participation and inclusion in-class setting' and to give 'more honest and direct critics and dive deep into analyzing' the interfaces. The evidence for this statement is not provided by the study results described earlier. - The authors further explain that 'the quantity of feedback has increased significantly'. Please clarify how feedback quantity was measured and what data source was used for the comparison. - The contribution would further benefit from discussing limitations of the study and the system. - Finally, the larger impact of this work on the HCI and CSCW community is not clear. Due to the major limitations outlined above, I recommend to reject this contribution. """,2,0 graph20_1_3,"""This paper presents a web-based interactive presentation system that incorporates anonymous peer evaluation into HCI education for computer science students. The authors evaluated the systems in a project-based HCI course and reported the outputs of the student projects. The authors discussed the value of the system and the concept of immediate anonymous feedback in HCI education. The paper focuses on an important topic. Complimentary to lecture and other instruction forms, engaging students in real-time feedback is a smart, novel choice. It also expedites the feedback loop, which is critical for iterative design ideation and rapid prototyping. As the authors stated at the end of the paper, instant online feedback in a class setting can be scaled to other disciplines that regard crowd wisdom in the development process That being said, a few issues prevent me from recommending the acceptance of the paper as-is. First is the many misuses of terminologies and HCI concepts. It seems like what the authors meant by participatory design throughout the paper (sharing users voices with developers about the design of the product. considering users voices to improve the design) is rather just the idea of human-centered computing (See the difference between the two here [1]). Across the paper, the authors also suggest a dichotomous view of prototyping versus evaluation, which misaligns with the mainstream HCI and design research [2]. The authors stated that the goal of the proposed system is such that the developers can adequately communicate their accomplishments to end-users. This proposition, I think, undermines the generative value of iterative prototyping and user testing. Second is research methodology choice and clarity. The paper currently does not have a Related Work section, making it difficult to assess the proposed systems novelty. The paper would be stronger if the authors can more explicitly articulate how the proposed system improves on previously proposed crowd-sourced, real-time feedback systems for formal learning (e.g. [3]). The paper also did not report the feedback users provided to the proposed system. For example, what evidence supports the finding that Compared to face-to-face feedback, the online feedback gives more honest and direct critics and dive deep into analyzing which part of the interface should be improved (page 6)? I encourage the authors to revise the papers organization and add these details, which would help clarify and highlight the contributions the proposed system makes. [1] From user-centered to participatory design approaches, Sanders 2002 pseudo-url [2] The anatomy of prototypes: Prototypes as filters, prototypes as manifestations of design ideas. Lim et al. pseudo-url [3] PeerPresents. Shannon et al. DIS16 pseudo-url """,2,0 graph20_2_1,"""This paper looks to address the challenge of target selection in mixed and augmented reality applications. Target selection is difficult in MAR because targets may be moving, occluded, and vary in size based on distance. Providing target assistance to MAR users is proposed as a means to help improve MAR system usability. Five techniques are investigated (baseline, bubble, sticky, gravity, touch). Target assistance is not a novel idea and the authors leverage existing methods that have been applied in cursor-based applications in 2D. These methods are applied with some adaptations. The paper does a good job at reviewing the related work particularly in cursor-based techniques and 3D. However, I felt that less was discussed about the work in MAR. It also wasnt too clear on differences with target assistance in 3D games. Making this distinction would be helpful. A user study is conducted to asses how target assistance techniques may help increase speed and accuracy of selection. For the study task, targets are placed in a diverse type of scenarios e.g. co-planar, varying distances from user, mobile, occluded, etc. I found the discussion interesting. The authors discuss the strengths and weaknesses of each technique and provide design recommendations for different applications. E.g. Target gravity is recommended in scenarios where selection objects are sparse and bubble cursor may not be as appealing when the visuals may interfere. How do these findings compare to prior work either in MAR or other contexts? Have these target assistance techniques shown similar strengths weaknesses across comparable scenarios? Additional Questions: How was the target size selected? Did participants walk around during the evaluation? Beyond applying target assistance techniques that have been shown beneficial in other scenarios, could there be unique ways to leverage the MAR context to reduce the high error rates? Grammatical errors in: Typo in last sentence of last paragraph in Related Work > Target Selection in 3D Typo in first sentence of last paragraph in ""Are the different Target Types Scenarios useful? Overall, this was a good paper, sound implementation and grounded design choices, a thorough evaluation with outcomes that may be of interest to the community. I recommend for accepting this work.""",3,1 graph20_2_2,"""This submission compared different target selection techniques in mobile augmented reality (AR). It is a timely exploration and evaluation of the choices available on the mobile AR environment. The paper is overall well-written and easy to read. I especially like the subsection titles in the discussion, which highlight the topic and attract the reader with the most exciting research questions. However, I do have concerns about some strong claims, as well as some of the study designs. My primary concern is that, compared to the carefully designed cursor-based techniques, the touch selection in the user study is a complete naive implementation. Some simple ""assistance"" should be able to improve the performance of the touch condition significantly. For example, instead of only using the single touchpoint for selection, using the contact area (or even a small-sized sphere around the touchpoint) for selection could facilitate the selection in the user study. There are also many advanced techniques for improving touch selection, for example [1,2]. I suggest the authors add a discussion about this limitation, and, more importantly, DO NOT claim touch selection in mobile AR is worse than cursor-based selection. As a matter of face, touch should be faster in ""targeting,"" since the user only needs to put the finger on the screen while moving the cursor requires more body movement in mobile AR. [1] Wang, F., & Ren, X. (2009, April). Empirical evaluation for finger input properties in multi-touch interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1063-1072). [2] Benko, H., Wilson, A. D., & Baudisch, P. (2006, April). Precise selection techniques for multi-touch screens. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 1263-1272). I am also slightly concerning about the recorded time. During the trails, after a participant answering one trial, the next target will be randomly chosen. If the next target is close to the previous one, then less time is required for moving the cursor. I am also curious about the physical movement of the participants. As stated by the authors, they allowed participants to move around, but advise them that more time may be required. I consider this as discouraging physical movement, while physical movement is quite essential when using a real-world mobile AR application. The visual feedback in the bubble cursor condition can be improved. As discussed by the authors, the visual feedback in the bubble cursor is essential for the user to understand the selection area. However, from the provided video, it seems the implementation in this user study did not have the ""morphing"" feature which is used to contain the target when it is not completely contained by the main bubble. The authors include a discussion of the pros and cons of each technique in the discussion. However, it is hidden in the text. Since such discussion can be unrelated to the experiment results, the authors can discuss this before introducing the study and give the reader a general sense of the advantages and disadvantages of each technique. The first three paragraphs in the RELATED WORK section should be moved to the beginning of the discussion section. I appreciate that the authors tested five scenarios. However, I am also curious if other factors may affect the tested task performance from the literature (e.g., from the VR studies). For example, I can consider the screen size of the mobile devices can be one possible factor. Also, the study only includes only regular layouts of targets, which is impractical. Density may also need to be discussed. I suggest the authors add a discussion of the mentioned limitations in the study design. Some minor things: * The bubble cursor was mentioned many time to be ""well-known,"" I like the technique too, but there is no need to emphasize many times. * When removing outliers, were the standard (time above 3 s.d. than the mean) considered within each participant, or across all participants? * At the end of the first paragraph in the STUDY section, you mentioned the five scenarios in a different order than anywhere else. Overall, I think this study provides preliminary insights into object selection techniques in mobile AR. I would argue to accept this submission.""",3,1 graph20_2_3,"""Overall, I have very few complaints about this submission: the motivation is convincing, the related work has the proper width and depth, the rationale for choosing selection techniques is reasonable, the experimental setup follows standard procedures, the data analysis seems correct, and the discussion provides some insights. My first point of criticism is that the work in its current state is mostly incremental. As the authors have pointed out, there has been a plethora of published work in 2D target acquisition (mostly static desktop UIs, highly dynamic video games, etc.), and the results presented in this submission, while confirming previous research, do not go far beyond what we already knew. My second point of criticism is that the authors are vague about the contribution of their work. I would have generally preferred to see a deeper theory-based discussion on what performance the authors expect using well-established techniques in the different context of MAR. Currently, I feel that the authors approach was lets test some selection techniques in MAR and see what happens; this is not a scientific approach, this is product testing. I would still argue for accepting this submission for two reasons: first, I believe that this work can be a great first step towards a more detailed analysis of what makes MARs different from desktops and video games (e.g., unlike desktops, they are not static, but unlike video games, only users introduce dynamics). Second, I also believe that the threshold for accepting work at GI is somehow lower than at other conferences (e.g., CHI), and we should accept work, especially if it has the potential to become more impact-full in the future.""",3,1 graph20_3_1,"""This paper describes a series of overlay techniques appropriate for an augmented reality device used in concert with a shared public display, in particular for the task of graph traversal. The techniques are designed to help a user do two things: select and follow a path, and find their way back to a selected path/node when it is out of their immediate field of view. The authors describe two studies on path following tasks that make use of the system, demonstrating that for certain kinds of tasks and in particular configurations it is superior to a baseline system. Overall, I enjoyed this paper, but it did take a couple of readthroughs to orient myself (and I am very grateful for the video figure). The words personal and context are used throughout the paper in a variety of different ways, and I found this confusing to follow (for example, on page 6 the additional context we bring to the personal view sort of collapses the distinction between these words for me); it would be very helpful if the authors had an explicit sentence upfront which said context is the view that ... , personal is a view that and used the words consistently throughout the document. In addition, the four experimental conditions tested in the user studies are never all laid out until the study design is described; in the intro the authors mention that they explore variations of *two* navigation techniques, when in reality its more of a 2x2 design? A figure demonstrating the 4 designs tested all together might be helpful? (And perhaps the authors could invert the colors in their figures to make them more legible?) I do believe that the previous reviewers comments about selection vs tracing and the way in which weights were used for the user studies have been adequately addressed in the revised text. The added (orange) text on page 7 could use a thorough proofread, the authors use hypothesis instead of hypotheses, leave unclosed parentheses, etc. I dont know what Procrustes analysis is, mentioned on page 5. Another specific location where I had trouble was the Questionnaire section on page 10 for the Path Tracing Experiment, whose first paragraph gives a lot of data in a challenging fashion but doesnt sum up what it means. In general, I found the variety of ways the authors refer to specific components of their work challenging to keep up with, but the technical meat appears to be there. Clarity of presentation aside, the authors have described an interesting set of techniques that seem to be quite promising! I have no qualms with the methodology of the user studies performed, and the results are definitely interesting. I believe one piece of related work is missing: Parallel Reality Displays by Paul Dietz and Matt Lathrop, presented at SIGGRAPH 19. While theyre a technology rather than an interaction technique, they also provide personal/private information in public display scenarios.""",3,1 graph20_3_2,"""I appreciate that the authors of this submission produced a rebuttal and highlighted in orange the changes compared to their previous submission. I am R3 for both submissions. I am satisfied with the changes applied to most of the concerns that I had raised: pseudo-url In the new Related Work paragraph about eye-tracking, I am not sure that sentence ""We thus conducted our study using head-tracking."" is fair use, as we (authors and reviewers) know that this paragraph on eye-tracking has been added after the choice of using head-tracking, so the decision was not informed as it appears to reads now. I honestly think that this minor lack of clarity can be addressed. The companion video seems identical to the first submission, based on file date. In conclusion, I thus keep my rating unchanged.""",3,1 graph20_3_3,"""Thank you for submitting a revised version of this submission, and addressing concerns raised in the previous round of reviews. I reviewed the previous submission as R2. pseudo-url The submitted modifications show a marked improvement in the exposition of the work. In particular, clarifications around the motivation behind the path tracing task, and additional related work that have utilized path tracing to determine endpoints (e.g., [17], [18]) and to mark or detect features along a path (e.g., [66]) were helpful in positioning the contributions of this work in relation to prior work. I am satisfied with the changes in the modified manuscript, and changing am my recommendation to accept. However, I noted that there are several typos throughout the text, and I recommend a thorough editing pass for the camera ready. For example, page 3: HoloLense -> Hololens. """,3,1 graph20_4_1,"""This is a well-written paper that makes a meaningful contribution to the face modeling research area. It proposes a piecewise morphable model for human face meshes, with well-justified and reasonable methods for initial (manual) decomposition into ""semantic"", artist-friendly pieces, eigenvector basis selection for each piece, fitting to a target shape (but see comment on explaining how this is done, below), and blending independently reconstructed pieces together. It also proposes a mapping between anthropometric measurements of the face (e.g. lip length, nose width...) and the parameters of the model (building upon [ACP03] but with part-specific measurement saliency), in order to synthesize and edit faces with desired attributes. The authors evaluate many aspects of their method and compare to a good set of baselines. I am not an expert in the specific area of face modeling, which has a pretty extensive history and lots of active work. With that caveat, I think this paper makes a solid contribution and I support acceptance. - they control mechanism --> the control mechanism - ""Furthermore, we compared our approach"": At this point the existence of ""our approach"" has not even been mentioned. Maybe move the rest of this para after the next one? - Section 4 needs to clearly state right at the beginning that all faces in the dataset are assumed to have the same mesh topology, with vertices in semantic correspondence. Otherwise the rest of the section (e.g. the average per-vertex distance) is not justified. - ""we rely on the eigenvalues to sort the eigenvectors for each part"": How is it sorted? The ordering of the eigenvectors from P_1 to P_{n_e} is not explained. - ""19 validation faces that were not part of the training data set"": The process of fitting the model to a novel (non-training) face (presumably by projecting each part onto the current eigenvector basis followed by blending) is not actually described anywhere I think? - zero eigenvector --> zero eigenvectors - In Table 1 the column headers should probably be something like ""Facial part"", ""#EVs for females"", ""#EVs for males"" - ""Face Generation through Parts Blending"" suggests a probabilistic generative model (as in statistics/ML) is developed from which novel faces are sampled. Since this is not what is described here, I would suggest renaming this section to ""Blending Synthesized Parts into a Complete Face"" or something of the sort. - ""Fig. 4 shows an example of blended face"": Missing ""a"". - ""doing independently for each of the d parts"" --> ""doing this independently for each of the 5 parts"" (d is the index of part, not the part count) - At the end of Sec 5, I suggest showing some straight reconstruction examples on test faces. I.e. after selecting the final sets of per-part eigenvectors, try reconstructing a novel face mesh and measure the reconstruction error. This is a standard experiment in prior 3DMM papers, so might be useful to examine the fitting quality of the model ignoring the whole anthropometric measurement part. - ""6. Significance of Anthropometric Measurements"": This is the title of an entire section, so should cover the whole content of that section. I'd suggest ""Synthesizing Faces from Anthropometric Measurements"" or something of the sort. - Eq 8: Presumably P is a matrix stacking all eigenvectors, but this not actually defined anywhere I think. - The math in Sec 6.1 is identical to [ACP03]. While this paper is cited here (so I don't accuse the authors of any plagiarism), the fact that the method is the same should be stated clearly. Maybe rephrase the intro sentence as: ""We adopt the general approach of Allen et al. [ACP03]. However, while that method learns a global mapping that adjusts the whole body, we will learn per part local mappings. Furthermore..."" - ""B_d is a (n_e ) (n_s + 1) matrix containing the corresponding eigenvector of the related facial part"": Do you mean containing the corresponding eigenvector weights? I think it would be clearer to first define the regression problem for each part in each shape: M_d [ f_1...f_{n_m} 1]^T = b. This is what is done in [ACP03]. - I may have missed something but what is ""our global approach"" in Fig 9? - In Fig 15, does it make sense to also check that the other measurements _don't_ change? Since measurements are correlated, I'm not sure if this is a good or bad thing. Maybe the authors can comment.""",4,1 graph20_4_2,"""This paper describes a method of part-based morphable facial model allowing for localized user control. The method first splits a 3D face into pre-defined semantic parts. Then a PCA-based morphable model is constructed for each part. The best subset of anthropometric measurements is also selected, forming a mapping matrix. During the online stage, the user edits the facial model by prescribing the anthropometric measure values. The parts are then reconstructed with the mapping matrices. Finally, parts are fused together to form the final face model. At first sight, the method design is ad-hoc. It relies on a predefined partition of facial models and involves a final step of part stitching. However, the fact that the existing models with localized control cannot handle well variations in terms of identity makes this paper well motivated and the solution respectful. The method is, albeit heuristic, practically useful and it demonstrates good results. The evaluation is quite extensive and satisfying. Overall, I am happy to see it published at GI. I wonder if the part-based model can support realistic wrinkling. The authors should consider adding some discussion on this point. For the two limitations pointed out in the paper, some failure examples should be provided. In addition, I'd like to see to what degree the localized edit (in terms of the anthropometric measurements) can be supported by the model in order to avoid issues of part fusing.""",4,1 graph20_4_3,"""The paper proposes a method for improved editing of a morphable 3D face model (3DMM), leveraging a part-based decomposition of the face mesh. Edits are performed through a set of anthropometric measurement parameters selected using an iterative geometric reconstruction error-guided selection process. These selected anthropometric measurements are then mapped onto a set of per-part Eigenvectors which were obtained through per-part PCA. The proposed scheme improves the locality of edits and reduces the redundancy between different edit parameters, compared to prior work which typically employs method with global support. The method is evaluated using a dataset of 135 training face scans and 19 validation face scans. Qualitative results compare the proposed method with adapted versions of global SPLOCS (Neumann et al. 2013) and clustered PCA (Tena et al. 2011), demonstrating the improved locality of the approach. Quantitate results show that the set of selected anthropometric measurements leads to lower overall reconstruction error on the validation faces, compared to using the full set of measurements as parameters, and that edits using the proposed method have lower error on validation faces compared to SPLOCS and clustered PCA. The paper is relatively clear, with a few minor issues that could be fixed to improve clarity (see list below). Though I am not a domain expert, I believe the work presents an original method. On the positive side, this is a simple, seemingly technically sound approach that addresses practical problems with existing methods for editing morphable face models (namely locality of edits, and interpretable low-dimension edit parameters). On the negative side, the presented method is quite straightforward, and the evaluation is carried out at a fairly shallow level. It would have been nice to see the effectiveness of the proposed method be evaluated in practical use through a user study as well. Given the above, I am borderline to slightly positive with respect to acceptance, as I believe the paper would be interesting to the community. Minor issues with exposition p2 and they control mechanism -> and the control mechanism perform worst than -> perform worse than to constructs realistic 3DMMs -> to construct realistic 3DMMs decouple the rigid pose -> decouples the rigid pose p3 geometry of the parts is presented -> geometry of the parts is represented p4 Table 1: ""46 steps"" is unclear, should likely be explained in caption p5 Figure 5: Add labels for the axes, and/or explain in caption p6 Figure 6: Label for y axis. Also, it is unclear exactly how the given reconstruction errors are aggregated for the given parts. Good to explain in caption/text p7 Figure 9: The caption could be improved by stating a clear conclusion to be drawn from the comparisons p9 leading to an improve reconstruction -> leading to improved reconstruction p10 The last paragraph contains a couple of sentences that are disjointed and need to be rewritten: ""... different parts that."" and ""Using a Generative Adversarial Network..."" """,3,1 graph20_5_1,"""The paper proposes a method for identifying the elements and their connections in a circuit diagram based on graph representation. The method contains several modules responsible for sketch analysis, circuit component classification and circuit state and attribute calculation. The method seems solid. The circuit state and attribute calculation module has some theoretic analysis. The method is tested with a user study. The user study was thoroughly conducted and carefully analyzed. The only missing part seems a study on the robustness of the method on really noisy and poorly drawn sketches. Although it is hard to quantify the quality of sketches, some visual results and associated analysis would be good enough. About the recognition of the circuit elements, I wonder if it would be sufficient to just use sketch-based 2D shape retrieval, such as a 2D version of this method: pseudo-url The paper writing: vertexes -> vertices In abstract: is not apparent for machines as for people - > is not *as* apparent for machines as for people In intro: ""due to the architecture of computer itself, there is a fundamental difference between the way humans and machines work. Specifically, it is still challenging for computers to divide a series of input strokes into several components ..."" It is weird to say that the difficulty of sketch recognition is due to the architecture of computer. The reason sounds too vague and rough. Overall, the paper presents a solid algorithm with satisfying evaluations. I think it could be accepted to GI.""",3,0 graph20_5_2,"""The lack of a summary of changes makes it hard to compare the two revisions. This paper proposed a way to convert user-drawn sketches into graphs that support circuit recognition. The target use case is physics education where circuits are present. In the previous submission, my main concerns were. 1) abstract is not informative. 2) the academic impact since this is mostly a straightforward system building. 3) missing real-world measurement. 4) the lack of comparison in the evaluation section. I will start by revisiting these 4 concerns in the updated revision. 1) The abstract is improved. Overall I think the clarity of the paper is better. 2) I am still concerned about the novel contribution to the research community. Re-reading section 3/4, it remains unclear to me how others would benefit broader research themes. I think LS4D is the primary contribution; however, its highly specialized in the proposed problem and can the authors comment on the broader influence? 3) The last paragraph in section 4 shows some real-world test time. But the examples drawings are fairly simple. I am not sure if it serves the purpose of validating the running time. 4) There is still no comparison. Based on a quote from the user study, the authors claimed the proposed method ""has a higher practical value"". I would like to see more supporting evidence for this argument beyond one subjective sample point. """,2,0 graph20_5_3,"""This is the second time I review this paper. As far as I can see, the authors added a rather complicated circuit model from the previous submission. In the previous review, a reviewer pointed out that there are a number of highly related work that also tackles the same handwritten circuit recognition task. It is not sure why these are not included in this revision. Just in case the authors missed it, I copy & pasted the previous comment again. >>>>>> There is a lot of similar circuit diagram recognition works this paper is not aware of: + Ruwanee de Silva, David Tyler Bischel, WeeSan Lee, Eric J. Peterson, Robert C. Calfee, and Thomas F. Stahovich. 2007. Kirchhoffs Pen: a pen-based circuit analysis tutor. In Proceedings of the 4th Eurographics workshop on Sketch-based interfaces and modeling (SBIM 07). Association for Computing Machinery, New York, NY, USA, 7582. DOI:pseudo-url + Edwards, Brett and Vinod Chandran. Machine recognition of hand-drawn circuit diagrams. 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100) 6 (2000): 3618-3621 vol.6. + Dreijer, Janto F. ""Interactive recognition of hand-drawn circuit diagrams."" PhD diss., Stellenbosch: University of Stellenbosch, 2006. Furthermore, the paper needs to cite studies of the sketch-based interface to teach scientific concepts (physics, math equations) such as: + Lee C, Jordan J, Stahovich T and Herold J. Newtons Pen II Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling, (57-65) <<<<< I don't think it is necessary to compare the performance with these previous systems, but I do believe the difference from these previous works should be discussed in the related work section. The paper still doesn't answer my questions (1) what exactly is input (2) how the system tell difference between connected and unconnected trace for crossing lines. The paper made some changes and improved a bit, but they're still a lot of room for improvement. I recommend rejecting this paper for now. """,2,0 graph20_6_1,"""The paper offers a practical account of how-to better design video conferencing platforms for patient-doctor appointments. Using a scenario-based design method and a range of user interviews, it points to important issues about how accessibility, relationality, privacy and information disclosure concerns, and humanely interaction can be incorporated into the thinking and design of telemedicine systems. A well-motivated and situated work that made emphasis on the socio-technical challenges and potential design needs and opportunities of a patient and not some abstract or idealised understanding of telemedicine systems. Methodologically, the paper clearly outlines and provided some relevant justification for the chosen researchdesign approach; the sampling approach, size and characterisation; and the sensitivity towards various interactive scenarios. Generally, the study method, the method of data collection and visualisation, and the level of sensitivity and relativeness of the authors towards not only being reflexive but also practising some form of relational accountability and openness shows the originality of the methodology section. The analysis might be considered providing a thick description of what was conducted, how, why and to what extent the findings can be considered representative of the 22 participants therefore considered a clearly detailed paper. From my understanding, this suggests originality, not only methodologically, but of its situatedness within the context of the literature. The few reservations I had relates to the recommendation for future work at the start of the paper, specifically Future work should consider narrowing in on particular populations and types of visits this should have been at the end as it might tip off the focus of the narrative having such statement in the introduction. It makes me wonder why not focus on a particular population, and not broadly. In conclusion, Dourish's 'implications for design' ought to be cited where necessary. It will boost the paper to include a few lines about the implication of the methodological sensitivity, the design of the research, and how the methodological/analytical aspect of the empirical contribution might reframe our thinking and understanding of telemedicine systems. I believe this will exemplify the significance of the work to the audience. Generally, I believe the paper makes a significant contribution to our understanding of designing video-conference platforms (and those that are compatible with smartphones) specifically tailored to the need of patients and in consideration of important factors that commercially available systems might have taken for granted. It also points to specific challenges issues of privacy, accessibility, visuals, camera work, trust, and relationality and provides possible insights for designing telemedicine tools specific for a virtual appointment. """,4,1 graph20_6_2,"""The paper explores the design challenges of video conferencing for doctor appointments. This work combined semi-structured interviews with a scenario-based interview to elicit patients comments on their actual doctor appointments and the reactions to a set of staged video-based appointment scenarios. Interviews and comments on the scenarios were analyzed using open coding, and the results are organized into four themes. Video-conferencing for doctor appointments could potentially be beneficial for patients who live far away from hospitals or have chronical diseases. Thus, it is important to understand its design challenges. The method is appropriate; the results are well-organized and the discussion highlights key design challenges derived from the findings. Overall, the paper is well written and easy to follow. Although I am positive about the paper in general, I do have the following concerns: The paper briefly talks about some limitation of the video-conferencing based doctor appointments. For example, doctors could not physically touch patients. However, the paper does not discuss the limitations of video-conferencing based doctor appointments based on the study findings and what could be done to mitigate such limitations or the design implications for such limitations. In the discussion of Camera Work and Visuals of the Patient, the paper mostly focuses on issues with manipulating a first-person camera (e.g., mobile phone camera). However, the scenarios provided two camera views (both the first-person and the third-person). What is the camera work for operating a third-person camera? What are the design implications for the third-person camera? Typo: they are design opportunities for exploring systems -> there are """,4,1 graph20_6_3,"""This paper presents the results of exploratory contextual interviews around the use of video calling for health appointments with a particular focus on patients views on privacy issues. Using six scenarios, increasing in degree of privacy concerns, participants shared their thoughts around accessibility, privacy, and other issues. They find that, rather than relying on existing video calling tools, new systems need to be designed to consider the privacy and practical issues specific to this space. This exploration of perspectives on video calling for medical appointments focuses on input from patients in order to present an important point of view on the issues surround this technology. The in-depth focus on patients allows for a clear understanding of their perceived benefits and issues, beyond the potential practicality, and reveals patients specific needs. The qualitative study design allows participants to react to the increasing privacy needs in context and based in their own experience. The paper is overall well-written, and I have only one suggestion. While the discussion is well situated in the existing literature, having more clear design suggestions would give the paper a stronger take away. What needs to be done specifically to support this area of video calling? Are all the areas for improvement equally important or do patients see some as having higher priority? Providing a more concrete presentation of the design implications found in this study would help readers to understand the overall contribution. Generally, this is very interesting research into a particularly sensitive area of research, and the focus on the voices of patients provides a clear view of their valid concerns and what needs to be done to support them.""",4,1 graph20_7_1,"""This paper reports on the results of three user studies that explored the effects of temporal delays, spatial gaps, and the combination of delays and gaps on discrete pointing using a mouse. When evaluated independently, temporal delays and spatial gaps were found to increase movement times, however, when evaluated concurrently, an increase in temporal delays resulted in more pop-up window closing errors when gap sizes were small. Overall, this paper was well written and contains a number of experiments, metrics, statistical results, and Figures. It is quite dense and, although the language is clear, it is difficult to pull out the overarching narrative and contribution to the community. Some of the confusion arises because the full experimental results are not presented. Using Experiment 1 as an example, the text presents the interaction and main effect results for some of the factors but not all (e.g., MT is missing the tDelay x W and tDelay x W x A, ERClick is missing tDelay x W x A, and ERclose is missing tDelay x W x A). For those factors for which interactions were found (for example MT), there are no post-hoc results presented in the text (or details to describe which post-hoc tests were conducted) to explain the tDelay x A and A x W results, yet Figure 4 contains denotations of some post-hoc testing for tDelay and appears to have wrongly collapsed across the A and W factors in the graph (even though there is an A x W interaction effect and there was a tDelay x A but perhaps not a tDelay x W interaction). Oddly enough, the text itself actually identifies that it is necessary to look at the interaction between factors, i.e., For example, for Gap = 24 pixels in Figure 11, the ERclose rates are already small (<5%), and thus the ERclose rates did not significantly change even if Tdelay increased. This finding could not have been observed if we had tested only the effect of Tdelay; as shown in Figure 10e, the ERclose rate of Tdelay = 0.4 sec was significantly different from the other three values. Figure 11 shows the results of the tDelay x Gap interaction for ERclose, which only collapses across the A and W factors (I assume there is no A x W or other interactions, however the text doesnt describe them), however Figure 10e shows results that have been aggregated over A, W, and Gap which as Figure 11 shows Gap is important at some tDelay levels so it cannot be aggregated. So, because an incomplete analysis is presented for each metric in each Experiment, an incomplete picture of the actual influence of spatial gaps and temporal delays is presented in this paper and therefore the contribution of this paper is unclear (and likely wrong). (Note that the reference used for the statistical analysis and importance of reporting interaction effects and post-hoc tests comes from Fields book one version of which can be found here: pseudo-url ). Aside from the issues with the experimental results, I did have a few other questions about the paper. In the description of the third experiment, the text states that tDelay infinity was removed to evaluate the interaction between tDelay and Gap and then tDelay 0.8 was removed because it was not significantly different from tDelay infinity. In Figure 4, however, there was a significant difference between tDelay 0 and 0.8 on both MT and ER_close and 0.2 and 0.8 on ER_close, so perhaps it should not have been eliminated because it did influence some of the MT and ER_close results. I will also note that the same argument was not applied to the 216 and infinity pixel gaps (which also show a similar pattern of non-significance between 216 and infinity and significance between 8, 24, 72 and 216 for MT and 0, 8, 24 and 216 for ER_close). I wonder what the results of this third experiment would have been, and thus the 400 ms recommendation, if the same exclusion criteria were applied to all the levels of the gap and tDelay factors. I also was unclear as to the way latency was measured with the mouse what object was the mouse hit with, how was the velocity of the object controlled over trials, and where did this methodology originate from? The Figure 2 use case also really isnt ever referenced in the text outside the first few paragraphs of the Introduction so did some text get accidently deleted in the paper or was the inclusion of this Figure in error? I didnt see any mention of the post-experiment questionnaires that were administered outside of the Participants Strategy section it would be beneficial to include these details in the methodology and is quotes are available, them as well. How long was the third experiment? Lastly, the text refers to both seconds and milliseconds and I found it cumbersome to have to mentally switch between both units using one unit throughout the text (and one standard abbreviation for milliseconds) would improve readability. In summary, while I do appreciate the effort and time that has gone in to conducting these three experiments, the analysis that is presented does not provide the level of detail or use an aggregation method necessary to make the types of conclusions that the paper argues is necessary. Because the contribution of the paper comes from the findings, I am thus unable to recommend it for acceptance at this time. I strongly encourage the author(s) to redo the statistical analysis and resubmit their paper because there is value in the experiments they have ran. """,1,1 graph20_7_2,"""This paper studies the effects of gaps around and time delays on UI elements that are sensitive to overshooting. These elements were studied as the targets of mouse-pointing tasks and evaluated using Fitt's law. The works is very well motivated by two common usage scenarios: popup elements and scrolling type targeting. Overshooting of hover-based popup elements is a relevant problem as it closes the element and the targeted button becomes unavailable. The same goes for scrolling type interactions, as it is are very tedious to get back to the target once overshot. The authors did a great job in motivating their work, explaining their research question. They clarify that the right gap and delay metrics is not obvious as e.g. larger gaps might reduce overshoot but increase distance and longer delays mitigate overshoot effects but introduce wait times. The authors conducted three user studies. They evaluate the gap and delay individually, but with the same 12 participants. Their third study investigated interactions between gap and delay and was conducted with mostly new participants, 3 of 12 participants did the previous 2 studies as well. In general, all studies are sound, Bonferroni correction and sphericity tests are reported and the procedure is clearly layed out. However, one question I had is if the order of studies 1 and 2 were counterbalanced to learning effects. The other question was why 8% of the trials were removed for study 2 (page 7)? This should be justified. In summary, I would recommend accepting this paper as it is very well written, well motivated, relevant, and sound. I also very much appreciate that the authors clearly and openly state that the results cannot be generalized. All design decisions are justified leaving very little open questions. """,3,1 graph20_7_3,"""This paper studies the effect of delayed closing and/or gap to mitigate against overshoot effects in interfaces. The idea is that if targets are sticky (i.e. they don't disappear immediately) or if targets have some tolerance beyond them, then users can increase speed in their movements toward targets, thus improving Fitts's Law performance for targeting tasks. This is a well-written paper with carefully run experiments. Interestingly, the challenge with this paper is that, in broad strokes, it confirms many of our hypotheses -- that spatial gap around target allows users to be slightly more agressive, that temporal dealy of around 0.4s seems to encourage more agressive pointing behavior that ""tops-out"" at around this value (see the speed profiles). However, due to between participant speed variances and a small number of participants, the experiments did not show statistically significant effects. Normally, when I review a paper, I start with big picture ideas, then drill down to some more focused commentary. However, in this case, I actually want to invert my review, talking about some detail through the paper before discussing my current perspective on the paper. Starting with Related Work, the authors include a section on Latency in Fitts's Law Tasks, but this section is somewhat misleading. Cursor latency is a very different phenomenon from the one explored in this paper. I'd call T_delay in this paper a ""deactivation delay"", where users have some temporal grace period if they overshoot a transient on-screen target before they are penalized by the target disappearing and then them needing to return to the beginning of the task again, re-activating the transient target, and reaquiring it. I'm not sure why the latency section was included in this paper -- it's probably worth mentioning, but the only thing I can conclude is that someone in the past might have mentioned this phenomenon, but it is very different, and this section almost seemed a distractor. I would encourage the authors to address this a bit earlier if it was ever a concern, in the intro, and leave it out of related work because it is not directly relevant -- except insofar as cursor latency can result in overshoot. The studies were largely well-conducted, but I do have some small issues with study design. In particular, the authors state, in Experiment 1, ""Even if an erroneous operation was performed, the participants had to immediately aim for the target again; the task was not restarted from the beginning."" From my perspective, this actually penalizes their study design, limiting the effects they are looking to identify versus real-world, ecologically valid costs in interfaces. In real interfaces, if you exceed a delay and/or a gap, then the cost increases because you need to restart the task. Perhaps a friction sound was sufficient for the participants in the study such that their response was appropriately biased simply due to this friction sound, but in the real world the response bias in these tasks is created via temporal cost. This commentary brings up another point, that of response bias. In psychology, one reading that I might suggest to the authors of this paper is Swetts and Green's work on signal detection in psychological experiments, and in particular the phenomenon of psychological response bias. The idea is that a factor like deactivation delay or gap creates a pre-existing bias in users toward care of agression, and it is this pre-existing user bias that you are trying to explore with your experiment. It isn't necessary to incorporate this into the paper, but it's a useful thing to think -- how psychologists think about these biases. Another small point in experimental design. When the authors write that ""The order of the six T_delay values was balanced among the 12 participants,"" I assume some sort of Latin Square was used (it's definitely not a full-factorial!). Finally, in your results and discussion, I'm trying to understand the use of the word suboptimal. It seems to me that, of the values measured, 0.4s was the optimal, trading off performance in various ways -- e.g. in its interaction with gap in experiment 3, for example. The challenge, here, is that there is not statistical significance beyond 0.1s, and so, given our standards in HCI, it is hard to conclude anything strong from this paper. However, as a demonstration of work-in-progress, I actually think there is something here regarding 0.4s delays. Obviously gap is going to be an advantage if sufficiently large ... It almost seems to me that the ""no benefit"" beyond 0.1s should be softened to something like ""While we see no statistical benefit beyond 0.1s in terms of movement time, it is informative to look at figure 6, where we see that, for delays of 0.4 seconds or larger, the speed profiles differ in peak speed. In fact, one thing I would consider if I were the authors was to include an analysis of peak speed from both studies, to see if there is a difference of peak speed at 0.4s. However, even if we aren't seeing differences, one problem that the authors may be facing is statistical power due to either the tests they are running or due to the small number of participants. I would encourage the authors to try a linear mixed effects model to see if they can discriminate better with participants as a random effect. However, even in the absence of statistical effect here, I see value in this paper. I would enourage the authors to do a bit more exploration before it is published (analysis of peak speed, try a more powerful statistical test the RM-ANOVA -- I strongly suggest LME) However, even if these don't result in effects, I would soften some of the language. There is some qualitative evidence here that 0.4s is a good delay (not sub-optimal, perhaps not optimal, but appropriate based on interactions in experiment 3 and speed profiles in experiment 1. At the very least, this work gives a roadmap for more accurately measuring and inferring these effects. As a result, I'm somewhat positive on this paper. I think it's well written, just a bit too absolute given the somewhat ambiguous results. I am leaning toward accept.""",3,1 graph20_8_1,"""This paper proposed a complicated system for driving simulation, and apparently a great a mount of work was involved. I found it interesting to read the technical details and interview comments. The system contribution of UniNet was clearly illustrated. My major concern was the user study. I am a bit confusing about why ""MR"" and ""Triple Monitor"" conditions are using the ""car-crash"" event, while ""VR with hand"" and ""VR without hand"" conditions are using the ""jump-scare"" event. This setting make it less convincing to compare the response time between conditions, not to mention that each condition only involved one event during the experiment. In addition, the writing of this paper can be condensed, e.g., the material of the camera mount and working voltage/current seem to be less related to this task. Other minor comments can be found below. (Other minor comments) Pros: Great amount of details were provided to help readers understand the technical difficulties, such as the map projection distortion, VR HMD and camera synchronization, ""lessons learnt"" section, etc. These tips would also help other researchers & developers in this field. The qualitative findings from the interview & the discussion section were interesting to read. Cons: The ""Introduction"" section can be condensed, e.g., the ""Motivation"" and ""Purpose"" parts can be merged. In general, the related work section was interesting to read, but many paragraphs are redundant and can be condensed. E.g., the ""Mixed Reality"" section, and the 3rd paragraph of the ""Immersion and Presence"" section. The structure of the system architecture can be improved -- now readers have to combine ""System Architecture"" on p5, ""Hardware"" on p7, and ""Apparatus"" on p8. Some paragraphs also provided duplicate information about the system, which can be better worded as well. """,3,1 graph20_8_2,"""This paper presents a mixed reality driving simulator setup. The objective is to enhance the sensation of presence. The description of the system is detailed and interesting. Overall this is an interesting read. I am surprised that the introduction says that these modern forms of VR are relatively new"". As far as I understand it, they refer to AR and MR. However authors cite Witmer's 1994 paper, which makes it not modern at all. However I agree there is a recent rise of interest in these techniques in the past few years. Overall the paper is long (14 pages + refs + appendix). It is lengthy at parts. For example it details the full history of Virtual Reality. While it is interesting, it moves the focus out of the scope of the paper in my opinion. My few remarks are essentially about the user study. Systems are hard to evaluate overall, especially as this one is intended to be a generic tool rather than a particular application. The study about presence is a good choice. Embodiment would be another good option (Gonzalez-Franco & Peck 2018). However, the studies has only 24 participants, spread on 3 conditions in a between-subjects design, and only one trial per condition. By the way, there sees to be a vocabulary confusion in the paper. I believe that what authors name trial is actually a condition. This is is tricky because there is only one trial per condition, with only one event. This makes it difficult to make a fair statistical analysis. I suggest authors to remove everything related to reaction time because of that. Then I wonder why different conditions had different routes and events? As far as I understand it is unnecessary because of the between-subject design, and it complicates the analysis. In summary, There is an interesting implementation effort, and such a tool can be useful for research on interactions with cars. I am just wondering if GI is the best venue for such a work. AutoUI or VR conferences would be a better match. Other details: On the presentation of results, Figures 11 and 12 should be bars with 95% intervals, because of the ANOVA analysis. Figure 13 should be grouped by factor rather than condition, because we would rather like to compare conditions, not factors. Some references have formatting issues: 8, 21, 28, 39, 51, 34 has initials instead of first names, and last name should appear before first names. """,3,1 graph20_8_3,"""The designed system presents a serious engineering effort and addresses a genuine problem area which can be very useful. The system design is described clearly, and so is the user study and the various designs that were evaluated. However, a video would have helped here to understand the differences. I do not have an expertise in driving simulation work and cannot judge the novelty of the MR approach for driving simulation. However, assuming it is novel, this work does present a significant contribution to the literature. Given that the contribution of the paper is in the MR system that leads to enhanced presence for a driving simulator and in the inclusion of the traffic simulation, it is unclear if GI is the right venue for such a work. While there are elements of HCI here, the authors would benefit more from presenting this work at a venue which looks at driving simulation more carefully. Some minor points- 1. The paper is highly redundant in its descriptions and writing and can be compressed significantly. """,3,1 graph20_9_1,"""This article investigates emphasis effects in information visualisation. It focuses on three emphasis strategies : varying colour, size, and blur/focus. The authors conducted two studies, the first one seeks to establish the effect of varying levels of salience of one emphasised element among many. Based on the measured effects the authors built a model for predicting the relationship between the emphasis effects. A second study was conducted with more realistic visualisations and is meant to test the model in more complex conditions. The authors found that blur was the most effective strategy to improve prominence followed by size variation and colour rating the worst. The level of the emphasis has an impact on prominence as well. The model built from study one managed to accurately predicting perceived emphasis in the 2nd study (particularly subjective ratings). As a non specialist, I found the paper interesting, and the studies tackling a complex and relevant problem. The emphasis effect chosen appear to be justified for a first study. Not being familiar with the use of distractors, I would have expected slightly more control in their design, or a discussion of the distractor design and default selected. While I may have missed it, I could not find the scale and question used for the subjective rating, which makes it particularly challenging to interpret the results. The results of study 1 are insightful. They provide a basis to build upon in future research, and they can already be used to provide design guidelines. I appreciated the reporting of effect size using general eta squared. Study 2 was conducted with more realistic visualisations. It provides more nuanced results on the effect of emphasis strategies. It would have been really useful to have a description of all the visualisations selected to better interpret the results. I would have expected that some visualisations would have been more suited for one type of emphasis or another. I regret the (relative) lack of discussion of emphasis management in real-life visualisations, whether some emphasis strategies can be more easily integrated in existing visualisations, or whether some types of emphasis strategies are more suited for one visualisation idiom or another (e.g. for online articles or for visual analytics tools). Overall I found the paper insightful and the implications for design derived from Study 1 actionable. The methods section of both studies could be reformatted to make the study design stand out more clearly. And the discussion could open up to broader reflections on emphasis strategies in visualisation design.""",3,1 graph20_9_2,"""In their revisions the authors have addressed a number of the problems identified by the reviewers, but they have neglected to attend to several important flaws that should be corrected. These remaining issues continue to weaken the paper. The authors do not really tackle the difficult issue of distance in the users' reaction metrics to the different effects and magnitudes. There are both perceptual (as in foveal acuity) and interaction influences. At the very least, they could have considered standard Fitts' law models of reaction time to click the target. There should be a more substantive discussion of the weaknesses inherent in this lack. (By the way, i am not sure where the authors base their comment on ""area is more perceptually noticeable than size"". Noticeable aside, we are much poorer at making area quantity judgments as opposed to length judgments.) Similarly, the authors claim they clarified how visualizations were chosen. They used 16 visualizations. Which ones??? I'd like to know exactly which forms were used. Saying they are :""like"" maps and scatterplots is insufficient. This complicates the still unsatisfactory experimental design description of Study 2 in the paper. For example, they claim this is a repeated measures study with 3 effects x 3 levels. This suggests 9 distinct conditions. BUt in fact there are 16 visualizations, so now there are 9x16 = 144 distinct conditions. They say they used a repeated measures design, indicating participants did MORE than one repetition of each condition. How many? Did all participants carry out 144 trials in complete random order? Unclear. a randomly generated order will handle only first-order effects and not second-order effects (where factors repeat.) I don't want to be too picky, but if the authors made these compromises in design, they should DISCUSS how these may be limitations of the study design. It's almost impossible to claim robust results from this kind of study with these kinds of questions and conflating factors incompletely described. """,2,1 graph20_9_3,"""The new revision of the paper has clarified several issues. For example the points that were well addressed are the addition of information on participants, the reporting of p-values, and for the most part the equivalence of perceptual levels (see minor 2-3 things to fix in the end). I still believe this paper makes a good contribution, has interesting findings and is of interest to our community. I am overall in favour of acceptance at this point if the list of issues mentioned next are addressed for the camera ready: 1. The addition of past work was not well integrated. For example, the paper added a reference on past work by Haley & Enns [24] but did not explain how this influences/informs designs or explains results. Similarly, the addition of Duncan & Humphreys [18] has implications about the study for creating the model (choice of distractors), but are not really discussed. A deeper reflection as to the relationship of this work to these included papers work would be good (although many more references were mentioned by reviewers and do not seem to be added). 2. I am overall happy with how the differences in perceptual magnitude were treated in the reporting of the paper. I would suggest the following minor additions: - p2: make the following its own paragraph => It is important to note that the magnitude scales - p2: remove the sentence (since area is more perceptually noticeable) this is not true, differences in area are notoriously harder to see than in length (diameter). - p11: (section 8.1) Please stress here again the goal and lack of perceptual magnitude as some readers may not read the details (but will read the summary). 3. It would be good to further add the number of trials seen by participants to make sure all factors are reported (for example was it 3x8 in study 1 and 16x3x3 in study 2 as the text implies). 4. Finally, the paper now reports that the choice of visualizations in study 2 was based on their resemblance of scatterplots. This makes sense given the creation of the model, but it would be good to have the list of them (and levels) in supplementary material or an appendix. (More generally, for such a study that raises opportunities for future research replication is important, so I would strongly encourage the authors to share their experimental material, results, and analysis scripts). It would also be good to have a comment about possible differences across visualizations in study 2. This is a point that likely requires additional analysis and space to report formally, so I am personally ok if it is not discussed in detail or provided in supplementary material. It is more of a wish and research curiosity (if the materials are shared this would help).""",3,1 graph20_10_1,"""This submission reports on two user studies on path selection and tracing, by combining a large shared display for visualizing node-link diagrams and a personal augmented-reality display overlaying visual cues on tasks with various new techniques with metaphors (Elastic, Magnetic, Ring, Sliding) inspired from related works that were applied to other types of interactive displays (desktop, multitouch). Quality: - Pros: Related work review and discussion of experimental results are both weighted with honest observations. The empirical studies follow a classic structure that is easy to follow. - Cons: The small sample size of participants and tasks for the first ""pilot"" experiment may have contributed to its ""negative"" results (BaseLine outperforms new techniques). Clarity: - Pros: Presentation is generally very clear with crisp text and illustrations. - Cons: A few clarifications are needed, most notably: is BaseLine dependent on Path conditions (Weighted/Homogeneous)? (I assume independence.) Originality: - Pros: Techniques are inspired by well identified sources from the literature. - Cons: It may be useful to check related work on eye tracking for interaction and evaluation. Signifiance: - Pros: Newly introduced path selection and tracing techniques show evidence of efficiency when task precision is required. - Cons: Controlled lab experiments would benefit to be complemented with longitudinal studies in real life settings. I am in favor of accepting this submission. Here follow detailed comments per section. INTRODUCTION ""The techniques use only AR view-tracking as a means of user input, as gesture recognition and hand-held devices are not supported by all AR technologies and may be awkward to use in public settings [53]. Our hands-free interaction techniques help the viewer maintain a visual connection between their personal AR view, that may shift (for example due to small head movements), and their preferred route on the network on the shared display."" + ""Our results show that persistent coupling works well for high-precision path-following tasks, where controlling the view through the AR headset is hard; while more flexible transient coupling works best for low-precision tasks, in particular when following paths of (personal) high weight."" + ""Prouzeau et al. [48] compare two techniques to select elements in a graph using multi-touch."" I would suggest to anchor this work in a larger subset of the computer graphics and human-computer interaction communities. It would be useful to compare outcomes from research on pointing with other input modalities, not only on AR/VR and multitouch/large displays, but also with gaze/eye/head tracking. Such research was already possible to conduct before the recent technological progress that enabled the current wave of AR/VR devices. Gaze/eye tracking is an alternative to the proposed solution for interaction, but also a solution for behavioral measurement for evaluating the proposed solution. To illustrate my claim, I suggest one reference per application of eye tracking (interaction, evaluation), both obtained with ACM Digital Library and IEEE Explore by querying ""eye tracking pointing"". Vildan Tanriverdi and Robert J. K. Jacob. 2000. Interacting with eye movements in virtual environments. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI 00). Association for Computing Machinery, New York, NY, USA, 265272. DOI:pseudo-url R. Netzel, M. Burch and D. Weiskopf, ""Comparative Eye Tracking Study on Node-Link Visualizations of Trajectories,"" in IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 12, pp. 2221-2230, 31 Dec. 2014. doi: 10.1109/TVCG.2014.2346420 RELATED WORK ""Network and shared/collaborative visualization are well-studied topics."" This claim could be directly supported by citing references in lists, from research already overviewed in the introduction. The related work review: - is very well grounded, - discusses their applicability vs the constraints of AR displays (Link-Sliding vs free head orientation in AR), - clearly attributes inspirations (Sliding metaphor inspired by Link Sliding [43], magnet metaphor inspired by area cursors [29] and bubble cursors [19], Magnetic Area inspired by Rope Cursor [20], but second technique (Magnetic Elastic, according to order in EXPERIMENTS introduction?) inspired by RouteLens [3]), - and positions this work honestly (""Combining AR with 2D displays is not new.""). ""This work does not consider interaction."" ""This work considers external displays only as input."" The antecedent of ""this"" is unclear particularly within a review of related works. ""Their"", or ""Our"" as in the next paragraph ""Our work focuses on a different visualization context.""? DESIGN GOALS 2 design goals G1 and G2 are clearly explained: preserve coupling between personal and context views, and overcome AR technology limitations (narrow field of view and accidental head movements). TECHNIQUES Sliding Elastic ""With this variation, the ring is always attached to the viewers gaze-cursor, due to the deformed path curve that remains within the personal view."" I understood that the Sliding Ring *also* remains always attached to the viewer's gaze-cursor by a dashed trail from the center of the personal view, even if the ring is outside their AR field of view? ""In practice, we found that c 1 = 0.1 and c a = 0.75 worked well for our setup."" How were these coefficients determined? The value identified with the letter 'v' serves as weights to give priorities to links in path selection and following. The various expressions depending on techniques are sometimes homogeneous to a product of two distances (becoming representative to the ""area of influence""), sometimes to a linear combination of distances and areas, with coefficients 'c' that do not appear to be inspired by physical metaphors. But ""value"" 'v' is never clearly defined as ""area of influence"". For future work, it would be interesting to model weights based on physical metaphors, particularly when one such physical metaphor is used to name techniques: ""magnet"". BaseLine: I did not find a written confirmation whether or not BaseLine is designed to exhibit interactive behavior based on path (weighted or homogeneous). Comparing B task times between both Path conditions (very similar time pairs) for both Graph conditions in Figure 11 leads me to think BaseLine is Path-independent, Figure 13 the opposite (very dissimilar time pairs). What is the right explanation? EXPERIMENTS The experiment is a within-participants design with classical explanation structure. ""A simple attachment (or touch with the gaze-cursor for BaseLine) of nodes and links is enough to consider that part of the path selected."" ""Touch"" is ambiguous. ""Hover""? ""Collision""? ""For experimental proposes, the path to follow was shown in red, with the starting node highlighted with a green halo (Figure 9)."" With the paper zoomed fullwidth to a 16"" screen, I see the red node with a green halo as an orange node. A different visual encoding than the halo for the starting node may be preferred. Note that ""Some color sequences will not be perceived by people who suffer from the common forms of color blindness: protanopia and deuteranopia. Both cause an inability to discriminate red from green."" Colin Ware: ""Information visualization: perception for design"", Morgan Kaufmann (Elsevier), 3rd edition, 2013. Luckily, all participants to evaluations reported ""normal or corrected-to-normal vision"". ""To be able to generalize our results, we considered two graphs with different characteristics"" ""our goal is not to compare the performance between graphs"" Both sentences are a bit contradicting. ""Our main measure is the time to complete the task"" Why not have measured/recorded the path of the center of the personal view, since its position is already used for interaction? It would have been interesting to compare recorded path deviation with reported perceived accuracy. Figure 11. ""Note: A CI that does not cross 0 shows evidence of a difference between the two techniques"" is useful. Similarly, for the top part, a ""lower is better"" note would be useful. ""We see that BaseLine and MagneticArea exhibit, overall, the best performances with very similar mean task completion times across conditions (no evidence of difference)."" The phrasing is very positive, while a more honest first interpretation could be ""In the path selection experiment, none of our techniques outperformed the baseline in terms of completion time."" Probably 6 training trials and 6 measured trials provides not enough experience for participants new to the techniques to pass a learning phase. DISCUSSION AND PERSPECTIVES ""In an explanatory path-following task where precision may be less important, (...) a-priori no specific technique is needed (BaseLine)."" ""When precision is important, (...), a technique that is tightly coupled to the graph like SlidingRing, and to a lesser degree a snapping technique like MagneticArea, are clearly better."" ""The elastic variations of our techniques performed poorly. However, they can potentially be interesting when the viewer has identified a path of interest and needs to keep its local structure in their field of view."" These elements of discussion are fair and well balanced. ""AR technology still cannot overwrite reality"" I personally hope that this never happens. Why not consider a longitudinal study to understand which techniques are efficient and effective and satisfactory over time in real settings? CONCLUSION The conclusion is effective and the last paragraph weights the scope of the contributions fairly (depending on task precision). REFERENCES Missing citation ids in [23, ?, 62], [?], [2, ?, 34, 65] SUPPLEMENTARY VIDEO The companion video illustrates well the interaction techniques and the examples of study trials. """,3,0 graph20_10_2,"""=CONTRIBUTION= This paper explores AR-based visual feedback techniques to help guide users in path following and path tracing when navigating node-link visualizations presented on large public displays. Designed for use with a head-orientation-based cursor, two approaches are presented, with two variants each, based on extensions of previously published methods. These approaches are empirically evaluated against a baseline without visual feedback through two user studies that evaluate user performance on coarse-grain and fine-grain selection and tracing tasks. The results suggest that the visual feedback methods offer little user performance improvement for coarse-grain path selection, but significantly improve performance for fine-grain path tracing. =SUMMARY OF JUSTIFICATION= The topic area of augmenting public displays with personal AR content is an interesting and topical area. The paper is well written and the topic area is clearly motivated. In designing the techniques, clear design goals are stated and it is clear that a lot of work went into refining and iterating on the designs. The user studies are well described and reported, and I applaud the use of confidence interval testing and clear reporting of study results. However, a fundamental issue with the work lies in that is does not clearly describe concrete usage scenarios or user interaction problems that the techniques seek to address. While the work is framed in terms of facilitating private content and discrete interactions when interacting with public displays, such as navigating maps in public spaces, it is not clear how the selected atomic graph-navigation tasks address a tangible user problem in this space. The tasks that are evaluated in the user studies are very niche and since it is not clear what usage scenarios would benefit from these approaches, it is not clear how the results generalize or guide future work in the area. In light of these concerns, I do not feel the contribution of the work is sufficient for publication at this time. STRENGTHS - Topical area, well motivated, well written - Good reporting of study results, use of confidence interval testing WEAKNESSES - Lack of concrete usage scenarios - Unclear generalizability of study results =THE REVIEW= SIGNIFICANCE I applaud the presentation of Design Goals, however, a preceding description of the interaction problem and description of the usage scenario are missing. What is the usage scenario? What is the user trying to accomplish? How do path following / tracing atomic tasks aid the user in accomplishing their goals? How are weighted links in the graph related to the goals a user aims to accomplish via path following / tracing? There is a gap here, which makes it hard to understand why the proposed coupling mechanisms are being developed, and how and why evaluation criteria are being selected. This, in turn, impacts the significance and generalizability of the user study results. NOVELTY The work takes inspiration from prior work on desktop navigation methods and extends two ideas into AR. As a result, two approaches to coupling (persistent and transient) and two variants to off-screen feedback (magnetic and elastic) are developed and evaluated. The techniques themselves appear to be relatively novel, however, I return to my comments regarding the significance of the work. Without a usage scenario, it is challenging to evaluate novelty of the proposed methods. VALIDITY Path-following is a common task cited in the visualization literature. However, the goal of path-following is to understand the connections that the graph is representing by visually tracing a path through links. The interaction mechanics of path tracing are not typically evaluated. To do so is more of a steering task, which requires users to perform very precise movements through 2D tunnels in order to complete actions. In particular, given the task of the user study, which presents a highlighted path to the participant to follow, does not require the participant to make any interpretations about the node-link representation at all, merely perform steering movements along the highlighted path. Precision -- Why was precision selected as the core factor for evaluation? In what situations would a user need to coarsely select links, or carefully trace a set of links in a node-link diagram? Abstractly, these methods are all approaches to path snapping in the paper it is even suggested that certain cases were ignored because they would not work well (weighted path that crossed homogenous paths). Persistent vs Transient -- It is clear that a user will need to integrate the two sources (local, private & distal, public) to understand the information and augmentations. How were these two types of connections determined: persistent and transient? What is the definition of each? How do they differ from each other? Are they mutually exclusive? Are these two types exhaustive are there other types of connections not considered in the present work? Weighted edges In the user studies, the weights are used to make the snapping behaviour of the coupling mechanisms more pronounced and less prone to errors. However, I question the validity of the task itself -- if the system knows the path the user wants to trace, then why would it require a user to trace the path with high-precision? In Study 2, was the visual thickness of the links increased in the weighted condition, since there appears to be a significant improvement in baseline performance between Homogenous and Weighted conditions. To me, the formulation of this task and its results resemble a steering task using a head-orientation-based cursor, where the different coupling mechanisms reduced the Index of Difficulty of the task. PRESENTATION Page 2, Related work, first paragraph, sequel => not sure intended meaning Page 2, Navigating Paths in Graphs, first paragraph personal view may include _less_ elements => fewer Page 3, Design Goals, missing references: ? Page 10, Discussion and Perspectives, last paragraph, missing reference: ? References formatting is inconsistent: Not all authors names are listed for all references. """,2,0 graph20_10_3,"""This paper presents techniques for Head-mounted AR-based personal navigation on a graph displayed on a shared large display. A total of four techniques are presented and evaluated for objectively for efficiency, along with subjective evaluation for user preferences. The text is overall well-written, modulo some issues I'll outline later in the review. I'm not an expert in this specific domain of immersive network visualization/navigation, but the cited references seem appropriate to me, and aid in the understanding of the text. My biggest concern is the lack of motivation for the steering-like task, the so-called ""high-precision"" task in the text. The vertices and edges of the graph (along with possible weights on them) are the only pieces of information revealed by the graph. Therefore, I do not understand why a user would want to follow the edges like a steering task? In what scenarios would the ""low-precision"" select-and-move-on interaction be inadequate? In my mind, this missing motivation for one of the central tenets of the work puts it below the acceptance threshold. Some minor issues: 1. The word ""quasi-planar"" is not defined. Given that the graph used is a subway map, I think I understand what the authors are going at: a graph whose given spatial embedding in pseudo-formula makes it almost-planar, with a small number of intersecting edges. But this should be stated. 2. What does ""link density"" mean? It seems close to the average degree of the vertices of the graph, but not exactly. 3. The subsection on ""Personal Navigation"" talks about indicating viewer preferences in the headset. I would like to see a short discussion on rendering concerns here. One cannot modify physical reality, but visualizations in the device can be adapted to how the graph is rendered on the shared display. 4. Some citations are missing (rendered as pseudo-formula ). """,2,0 graph20_11_1,""" AffordIt! is an VR authoring tool for adding translational and rotational affordances to components of an object. It consists of a mesh cutting step used to select the component and an interaction step for defining translational and rotational affordances, both of which can be done in VR without explicit coding. User study is performed to evaluate the usability and workload of the different aspects of the tool. This paper addresses a problem that to my knowledge hasnt been tackled. As the authors acknowledged, there are tools that could automate some of the processes like segmenting parts of an object, but there still are some benefits in enabling real-time VR authoring as algorithms may not always be available or perfect. The study results suggest that the interface and interaction techniques are relatively well-received as well. However, the tool itself seems quite preliminary as it only allows two different geometries for mesh cutting and only translation + rotation affordances. While the study provides an evaluation of the tool and some useful insights, the study design could have been improved to obtain more useful results. For instance, some of the questions in the post questionnaire do not have any meaningful contribution and it would have been useful to report performance-related measures like completion time, breakdown of time used, or number of adjustments during segmentation. Overall, this paper provides a preliminary solution to a problem that is of importance. Thus, I lean towards accepting it. Minor: - In Figure 4, it is difficult to see which one is the new vertex. Highlighting it would be helpful. - Perhaps Translation is a better term for the Perpendicular interaction. - A picture of the study setup would be useful. Are the participants standing or sitting? How big is the VR space? (e.g., are they given the equivalent amount of space as the kitchen shown in VR?) - I am curious as to how participants reacted when the mesh they created didnt perfectly overlap with the intended component of the object as it seems like there is no automatic snap function. For instance, I can imagine it being a little unattractive to perfectionists if the created mesh was noticeably larger or smaller than the washer door, - In Figure 1c, it is difficult to see the yellow dots demonstrating the rotation of the door. Making it bigger would be helpful. Also, I think it would be useful to visualize the door in the along with the yellow dots to preview the final rotation, which is suggested by participants. - Very minor, it seems a little incongruous to have a washer in the kitchen, at least to me. Missing references: Deering, Michael F. ""HoloSketch: a virtual reality sketching/animation tool."" ACM Transactions on Computer-Human Interaction (TOCHI) 2.3 (1995): 220-238. """,3,1 graph20_11_2,"""AffordIt! is a tool for authoring object component behaviour within VR. With this, users can select part of a VR object, assign an animation behaviour, and preview it. The tool is a very useful and novel contribution, although I have some questions about the validity of the use case scenario. The system requires that the virtual objects are implemented in a way that they do not only present an outside facade but also contain primitives of its components not displayed on the outside (i.e., ""internal faces""). This is briefly addressed in the limitations, but I would have found some discussion of this aspect very helpful, especially earlier when introducing the research motivation. How likely are designers of 3D objects to include such ""internal faces""; is this common? The paper further assessed the tool in an exploratory study looking at usability and induced workload, with promising results. This consisted of a small user study (N=16) featuring qualitative and quantitative measures. The latter assessed usability (SUS) and workload (NASA TLX) and custom miscellaneous items. Some issues in the study reporting: - What was the scale range for the prior experience questions? - The quantitative data is described as ""qualitative"" for some reason, even when referring to barplots in Figure 9. - ""Finally we saw a high rating for the perception of realism and feelings of immersion in the environment (Q10) ( = 5.88, = 0.78)."" Q10 only refers to realism - where is the immersion aspect coming from here? For some reason, the actual qualitative aspects of the study are then reported as a subsection in the discussion (6.3 - Comment Observations). I strongly recommend that this be moved to a subsection of the previous section, i.e., the Results section. The actual discussion of the results unfortunately is very limited (especially because large parts of it consist of qualitative reporting), and are mostly a summary, rather than a contextualization of the results within existing work, or statements on implications of the results. The paper does discuss limitations, but I think that this section should also address the fact that the study was largely preliminary / exploratory in nature; there was no comparison condition, nor a discussion of what a baseline condition might look like for this context. Despite these weaknesses with regards to the study reporting and discussion, the paper is interesting and showcases good and novel work and I think the GI community would benefit from its presentation (albeit with some changes as suggested above). General minor issues: - ""users authoring process"" -> ""users' authoring process""""",3,1 graph20_11_3,"""This paper is about AffordIt, a VR authoring system that helps users segment existing 3D models using geometric cutters. The system also allows users to add two basic functionalities to the objects, which are one axis rotations and translations. Finally, AffordIt shows the added functionalities as animations. The main advantage of AffordIt is that it allows users to add affordances to the objects of a scene directly in VR, which removes the need to use third-party software. In general, I found the proposed system novel and interesting, and a good addition to the conference. The authors did a good job describing how their system works, and they discuss the limitations of their paper, which I think is important in these types of papers. My main critique of the paper is that the authors do not specify the general design decisions they took. For example, to which type of users (novices or experts), and types of products (concept vs final) they designed AffordIt. Also, the justifications for their interactions are in the related work section, which makes them difficult to follow. Clearly stating all these design decisions in their own section, will make the paper stronger. There are also problems with the figures, which are missing some elements. This makes the description of the mesh cutting and the interactions difficult to follow. Here are specific comments: Fig 3 does not have labels for Pv and nR. Also, an extra model showing how PV is projected to l might help with clarity. There is also a figure missing to show the cuboid and cylinder manipulation widgets. Even if some of them are the same as P1, P2 and P3. Fig 5 is not clear. It is missing an image of a 3D object with the interaction points and lines over imposed. Finally, the description of the user study is repetitive. Especially the paragraph under section 4 title and the other sections. Also, the previous work is only referenced using the citation number. I think is better to use the system name or the author's name, so the text it's easy to follow. Overall, this paper will be a good inclusion to the GI conference.""",4,1 graph20_12_1,"""This paper is a re-submission of the first GI 2020 deadline. The contribution of the paper is exploring visualization of non-Euclidean spaces in VR using RTX GPUs. This work can lead new research and applications into non-Euclidean space visualization. The primary problem of the paper is the same: little technical novelty. The authors compensate by giving detailed explanations on their shader algorithm. Compared to the last submission, the paper structure is improved and some missing details are added. Topology explanation is reduced, increasing readability. Although as a result, some topological concepts like fundamental domains are used without explanation or reference. I would suggest the authors notify users of which references are needed to understand those unexplained concepts. Some minor points: - I think the abstract should include more about the paper, like the contribution and experiment, not the NVIDIA's product announcement. I understand Mr. Huang gave a wonderful speech and I too was excited by this. But I would prefer more professionalism in writing research publications. - Section 6 is missing in Section 1.3 Structure of the Paper. - An idea for demonstrating the topology: adding animations of objects (e.g. planes) traveling between cells. I think observing how objects move through the space should help user understanding. - An interesting future work is how to optimize global rendering in those spaces made by recurring cells. Typical rendering methods don't consider such cases and therefore existing algorithms may not help. Overall I think this is better than the last submission, enough for acceptance into GI after some more polish.""",3,1 graph20_12_2,"""I reviewed the previous version of this paper submitted to GI's first review cycle. I'm fairly happy with the changes: the unnecessary portions of the math section have been removed, the implementation is described in more detail, and a new supplemental video provides a better view of the system. However, the central problem I, along with the other reviewers, had pointed out still remains: there is very little technical novelty in the paper. The main effort is the RTX implementation of [3]. I would reiterate to the authors that I appreciate all the changes, but the main concerns have not been addressed. I would encourage looking at the old metareview and following the major recommendations noted in there: 1. User evaluation of the system. Did it help understand the visualized spaces? 2. Exploring novel applications arising from the implementation. A minor point, with the math subsections removed, is that the term ""orbifold"" is never defined. I don't think a precise definition is needed, but it should be introduced before the term is used.""",2,1 graph20_12_3,"""This paper presents a method for visualizing ""non Euclidean spaces"", based on ray tracing using RTX. It is sound and well engineered, and the paper is well written and well illustrated. The presentation has been improved since last submission (in particular, the technical level of the introduction is more appropriate). However, I still believe that the paper itself presents limited novelty: this type of visualization has been proposed already in previous works, and the contribution seems to be limited to the use of the RTX pipeline to obtain faster rendering times than before. This amount of technical contribution is in my opinion below the bar for GI. I rated the previous submission as ""slightly below acceptance level"", and my concerns were not really addressed by this revision. As already written in the previous round of reviews: I think that the presented technique might allow for nice new applications, and demonstrating this might strengthen the paper in a future submission in a Computer Graphics conference. Without new applications (requiring non-trivial technical work), the RTX implementation might not be enough to justify publication in a CG conference. Another option would be to study more in depth how useful is this type of visualization in practice, in which case the paper could probably be presented in a Scientific Visualization conference. pros: - well illustrated, well written - sound engineering work cons: - technical contribution is too small and too incremental (RTX implementation of an existing visualization technique)""",2,1 graph20_13_1,"""This paper compares the performance of the learning-based pressure solvers of the Tompson et al. against the commonly used choices of interactive solvers for fluids simulation. Although this paper is not proposing a new method, rather it compares the different approaches, it still gives some useful insight. Hence, I think it is worth publishing paper. The convergent graph shows similar convergence behavior for different problem settings (in Fig. 5 and in Fig 6). So the number of the setting is enough. The paper is more informative if it has 3D examples and comparisons between different resolutions. The iterative solvers are implemented on GPU, not on CPU. This is reasonable because the approach by Topmoson et al. also use GPU. The performance of GPU solver is affected by its implementation so it would be nicer if the paper describes more how they are actually implemented using Nvidia's library. The only downside of this paper is that it doesn't describe the learned-based solver well. The iterative solver described in Section 3 is fairly standard and doesn't require a lot of explanation. The paper should focus on illustrating how the learned-based solver was implemented (network architecture, training data, and training parameters). Nevertheless, the paper did a nice job in comparison and it demystified the performance of the learned-based poison solver, I believe it worths publishing. """,3,0 graph20_13_2,"""Paper Summary: The paper compares some existing methods for solving the pressure projection step (Poisson problem) in a standard 2D fluid simulation, including a CNN-based approach, and concludes that the Jacobi method is preferable in terms of cost vs. error for a small fixed time budget. Review: From my perspective, lack of originality/significance is the main shortcoming of the paper. Its level of novelty is very low: it simply re-implements (or possibly just executes) a few existing techniques/codes and compares plots of their performance vs. error tradeoff. For the work to be broadly useful, it would need to compare a wider range of state-of-the-art alternative solver strategies, and preferably consider the three-dimensional case. There is simply not enough of a novel contribution in the current submission. For example, the paper should have compared alternative CG preconditioners besides incomplete Cholesky, since IC is not very parallelizable, while using Jacobi or RBGS as a preconditioner to CG is more readily parallelizable. Likewise, solvers based on FFT and multigrid (e.g., ""Low Viscosity Flow Simulations for Animation"") should probably be considered. Similarly, since all the test domains are fixed in time and the problem is strictly two-dimensional and not huge, another useful point of comparison would be a direct solver approach, where the matrix A is prefactored once, so that only the forward/backward substitution phase needs to be executed at each step of the animation. One potentially interesting insight from my perspective is that the CNN approach fares poorly compared to simpler traditional techniques, at least for the scenes considered. However, the authors don't really investigate/discuss why this (presumably) disagrees with the earlier findings of Tompson et al., who claimed that their method accelerates fluid simulation. Was it the GPU implementation? The focus on 2D? The time budget? Something else? In addition, one important technical issue is that the use of regularization for the Poisson system is unnecessary. The null space in the system is a known one-dimensional space consisting of a constant offset of all the pressure values (since only the gradient matters to the subsequent velocity update). This null space can be eliminated directly by picking one active cell and assigning it a constant pressure value (e.g., zero). Therefore the regularization should be eliminated, and the experiments re-done. It is hard to know for certain a priori what effect this will have on the observed results. Clarity: The paper is mostly clearly written aside from a few minor typos and grammatical errors. (By the way, the proper expression is ""bang for the buck"") Section 3.3. can be removed and replaced with an appropriate reference to a linear algebra textbook, since it describes well-known existing linear solvers. This space might have been better spent reviewing Tompson's CNN-based scheme, since it is far less well-known/standard. Additional comments: The fact that RBGS converged more slowly than Jacobi even in terms of *iteration counts* is suspicious, since these are both standard techniques whose convergence behaviors are usually thought of as well-understood. Has the RBGS solver been verified on simpler model problems (e.g., Poisson on a square) to confirm that its implementation is indeed correct? Did the authors simply reuse the existing code from Tompson et al.? I would have liked to have seen a video comparing the visual results for the various solvers (unless I overlooked one in the submission?), since for a very small time-budget the remaining errors may yield quite different behavior, as seen in Figure 1. This is really mandatory for a paper in which the focus is computer animation. """,1,0 graph20_13_3,"""This is a well-written paper that benchmarks the performance of different pressure solvers for 2D fluid simulation, including the interesting case of learned (CNN) vs traditional solvers. I am not an expert in this area at all, so my comments are of a general nature. For the specific examples studied, the evaluation seems to be reasonably thorough and touches upon important aspects of interest. However, I do wonder how representative the set of test scenarios is. To my understanding, Tompson et al. also test against Jacobi and PCG, measure the divergence norm, and report conclusions that are (unsurprisingly) roughly opposite to those in this paper. Asuming both sets of authors are reporting their findings accurately, we can assume the difference boils down to specifics of the testing scenarios (2D vs 3D, initial and boundary conditions, hyperparameters, training data, etc). In which case, it is unclear what the practitioner should conclude from the current study, especially since it focuses only on the less realistic case of 2D flows. I think such studies are very valuable in general, but I am not sure whether the current one is broad enough. Separately, this is obviously not a paper that proposes any new technical components, it is purely a benchmarking study of existing algorithms. I am not in a position to judge its suitability for GI on that score, I'll leave that to the meta-reviewer(s). There are a few typos, e.g. ""The scripted mouse interactions used to inject a consist disturbance in each test, and the result of the disturbance, can be seen in Figure Figure 7(a)-(b)""""",2,0 graph20_14_1,"""This paper presents a computational technique, W-graphs, that visualizes the different steps multiple people perform to complete the same task. The example task demonstrated in the paper is that of 3D modelling using TinkerCAD software. The contribution of the paper is threefold: an explanation for how W-graphs can be constructed, descriptions of potential applications for W-graph, and preliminary evaluation of W-Suggest (an application that suggests alternative workflows). Overall, the paper is well written and presents an interesting idea. There is value in learning about how others perform the same task and, it can be useful to gain insights about the higher-level categories of sub-steps and the sequence in which people complete them to achieve the end result. The idea of representing workflows, especially in the context of software tool uses, isn't necessarily new, but the approach of synthesizing other people's sequences of higher-level actions is an interesting variant in my opinion. The paper also did a good job explaining their explorations including alternative algorithms they considered for implementation. While I liked the paper, there are a few questions I had: I wondered if the potential benefit of W-graphs can be further refined. At a software level, there is a relatively low-cost associated with engaging in trial and error, so I wondered who would really benefit from seeing the sequence of high-level actions taken by others? For example, an expert may be more willing to try out a few options by themselves (and still not compromise on the time taken) without requiring to reference other people's workflows. Alternatively, if the W-graphs were more useful for novices, then is it enough to only know the sequence in which high-level actions were completed, or would they also need to know how the actual steps were performed? I was also curious to know how the algorithm handled mistakes made by people. There is mention of treating erase and undo as one type of operation (mentioned in the future work), but it is unclear how they are currently represented in the W-graphs. I wondered if the algorithm in any way distinguishes between useful information and those that are simply redundant or less valuable. The pre-processing stage seems to somewhat tackle this issue by collapsing reducant information, but I wondered if there was even a need to represent such information. For example, if a person rotated or used zoom several times to explore the model to view it from different angles, is that useful information for others to know about or is that simply information overload? I suppose that it depends on the application, but I wondered if the authors had considered some kind of data cleaning or filtering option. Lastly, since the paper is primarily a technical algorithm contribution, it may help to provide more implementation details to enables others to implement or use the W-graphs in their applications. Perhaps the authors can consider sharing their code as an open-source resource? Some minor comments: There seem to be a few typos (E.g., SD=107 ?) """,3,1 graph20_14_2,"""The paper ""Workflow Graphs: A Computational Model of Collective Task Strategies for 3D Design Software"" presents the concept of workflow graphs (W-graphs) for representing multiple divergent solutions to 3D design tasks, a technique to generate these, and three suggestions for application areas. Overall I enjoyed reading the paper. The paper is well written and presents a compelling idea. The authors do a good job covering related work and positioning their contribution to it. The three application examples present a good validation of the potentials of the idea presented. That said, there are some things that I would like the authors to address: - The motivation of the paper is as it is weak, and could easily be strengthened as I believe the contribution as stronger potential than what is presented in the abstract and introduction. The abstract doesn't at all mention what motivated the work, and the introduction is vague and indirect. I would suggest the authors to emphasise the potential for computational support in 3D modeling tutorials as the main motivation. - It is unclear how well the idea scales beyond fixed examples. The authors address this a bit in the discussion, but I would have liked to see some technical details. - It isn't clear to me how Tinkercad was instrumented, as I assume everything didn't happen through Autodesk Screencast? How was, e.g., the 3D model snapshots made? - I would like the authors to discuss how labour intensive it is to create a w-graph where saturation is reached. Also, what is the increase in complexity and time when going from creating a w-graph for the mug task to the standing table? How much of this work can be crowd-sourced and how much requires expert supervision/curation? - The first sentence of ""Workflow graphs"" seems out of place in relation to the contribution as it is described in the introduction. Overall I would recommend accepting the paper for GI 2020.""",3,1 graph20_14_3,"""# Summary This paper presents W-graphs, a method to capture, classify, and simplify the multiple users' workflow of designing a 3D model. To construct the graph representation, the paper describes how to process the data and how to collapse and merge the similar nodes in the graph. Particularly, the autoencoder approach to determine the similarity of the 3D model in each edge seems effective and clever. I also appreciate the design rationale behind it. Based on this W-graphs engine, the authors suggest three possible applications and use scenarios. # Review Overall, I enjoyed reading this paper. I think the paper is well-motivated and well-written. It also reviews the literature well. As the authors argue, I am also not aware of any prior work that investigates data-driven workflow analysis for 3D modeling tasks. Thus, I think the three contributions raised by the authors are valid, and these contributions should be strong enough. Therefore, I would like to recommend this for acceptance. However, I have some comments or concerns that the authors should be able to address in revision. # Suggestive user interfaces Although the authors described three possible applications, these are essentially the same, in terms of providing the feature of ""suggesting"" the next step. I think there are a lot of papers related to this suggestion feature in 3D modeling tasks. For example, - A Suggestive Interface for 3D Drawing - Data-Driven Suggestions for Creativity Support in 3D Modeling - Autocomplete 3D Sculpting - Autocomplete Textures for 3D Printing - Guided Exploration of Physically Valid Shapes for Furniture Design - A Suggestive Interface for Image Guided 3D Sketching (if we expand the application domain, I think there should be much more related work, such as Shadow Draw or Interactive Beautification.) I still can see the novelty and benefits of the presented approach, as many of them provide suggestions based on the pre-defined algorithm or heuristics, but I also think there should be some more that use the data-driven approach for suggestion feature in 3D modeling (e.g., similar work as Chaudhuri and Koltun's Data-driven Suggestions). Honestly, I think these works are more relevant to the current ""Learning at Scale"" of the Related Work. At least, the authors should expand the Discussion section about what are benefits and advantages, and what are and limitations or disadvantages, if any, when compared with these existing approaches. # Scalability I just wondered how this system can work in real-world scenarios. For example, in the paper's settings, the authors specifically confined the participants to model a mug or a desk. However, in the real world scenarios, the user would create a lot of variety of models, and the system need to learn and construct a graph based on all of them, if the system captures all of these workflows. Even in the simplest case, for example, if one provides the mixed data of both mug and desk workflows as an input, can the system still successfully construct a nice workflow-graph? Or, alternatively, do the authors think to collect every possible model in advance in the same approach (e.g., chair, airplane, lamp, etc)? It was not sure how it works in these scenarios. It is useful to clarify this point. Also, it might be related to the above point (i.e., suggestive interfaces), but in the real-world W-suggest scenario, what the users want to create can be very diverse --- it is more likely that the user wants to create other than the mug. In that case, I was not sure how the proposed high-level workflow suggestion can work well. Obviously, in the presented user study, the participants were asked to create either a mug or a desk. However, it is not the case in real-world scenarios, particularly for professional users. Low-level feature suggestions (e.g., the suggestion of repetition, aligning, etc, similar to the related work listed above) can be generalizable for different use cases, but for the high-level feature suggestions proposed in this paper, I wonder if this lack of generalizability could become one of the disadvantages or not. I also want to hear the authors' opinions and ask the authors to clarify this point. Given this point, I partly agree with the authors that this might be more useful for software learnability particularly for novice users (thus, I think the use of Tinkercad makes sense). I think it is fair for the authors to frame that this tool is intended for novices to learn how to model some pre-defined models. But, still, it is nice to see the discussion about the scalability and real-world deployment issue.""",4,1 graph20_15_1,"""The paper presents a technique to design shapes that can tile space seamlessly. The authors build on the theory of weaving patterns and use them to define a space partitioning via Voronoi diagrams with 1-dimensional sources. While the theory of wallpaper groups in 2d is well known, I was not aware of the analogous 3d problem. The paper does an excellent job at introducing the problem and also describes the theory of weaving patterns in a very compelling way. The practical algorithm is well motivated and seems to be very solid. I also like the FEA analysis part which gives an idea of the physical properties of the designs. It would have been nice to see some more practical examples where such patterns could be employed. From a practical perspective I would have liked to see a clearer focus on patterns that form an interlocking assembly without the need of compliant elements. I believe this paper should be accepted due to its novel content, thorough evaluation and convincing presentation. This paper could also inspire further research leading to practical applications in architecture, design and fabrication in general.""",4,1 graph20_15_2,"""This paper proposed an approach to design tiles that interlock and fill space in two and a half dimensions (height field). It is claimed that prior work (primarily theoretical) focused on interlocking shapes, or space filling shapes, but not both. The space filling property is achieved by building on Delaunay's Stereohedra and Delaunay Lofts. The interlocking property is achieved by building on the theory of genus-0 2-way 2-fold (biaxial weave) fabric. Combining these two theories appears to me to be novel, and exciting. An interface allows the user to create 3D curve segments that are closed under the symmetry operations of a 2-way 2-fold genus-0 fabric. The generated designs are also actually fabricated (in various materials and with various methods), and the structural mechanical behaviour is studied for the different weave patterns (nice!). Physical locking is demonstrated in the fabricated examples. The role of chirality is briefly explored and demonstrated. The paper builds on (and nicely attributes) Voronoi's Stereohedra, space-filling polyhedra designed by applying symmetry operations to generating points of a Voronoi diagram. Building on the recent ""Delaunay Lofts,"" the present paper allows for any line/curve/surface to serve as the Voronoi site, provided certain reasonableness conditions, e.g., that the initial configuration of the replicated shape is closed under symmetry operations. P. 4 L. 69 and P.5 L. 7 the writing is confusing because they both seem to introduce Delaunay Lofts for the first time. I found the demonstration of choice of chirality neat. I found the discussion summarizing the work of Grunbaum and Shephard and genus-1 fabric theory to be very illuminating. Neat stuff! I'm glad I learned that. Fig 2: Since the point of this figure is to show the beautification benefit of including the top and bottom lines, it would help to show these results side by side with those obtained without including the top and bottom lines. Note that I am unclear what that means... to exclude the top and bottom lines.... since then the regions would be infinite. The first mention of ""2-way 2-fold weaving patterns"" merits one or more citations, and an explanation. line 23: ""under under"" For twill, you mention that there was no need for a flexible piece in the fabrication process. Do that mean that the material can still come apart? Presumably (from the theory) it cannot. So, maybe this can be explained further. For instance, is the twill easy to assemble / disassemble without any forces or flexibility because the boundary conditions at the border are not being imposed, but then when the boundary conditions are improved it gains its strength and stays together? What does this tell us about transfer of stress to the boundary I wonder? This is a very thought provoking paper! """,4,1 graph20_15_3,"""Originality and significance: Woven fabrics have been heavily studied in computer graphics. This paper provides a different framework to classify such woven structures, considering base template curves, how to express the over-under pattern, and symmetry properties. I find the framework a bit oversold, but it does offer a clean way to parameterize weaving. The paper describes a pipeline to create space filling tiles by applying 3D voronoi decomposition, using the template curves as voronoi sites. This idea is effective but a minor contribution as it is a straightforward application of voronoi cells. I found the structural applications compelling. The tiles could be practical for pre-fab construction, e.g. interlocking bricks rather than cast-in-place slabs. As mentioned below, this application domain should be described much earlier in the paper to better motivate the space filling tiles. FEM tests for different weave patterns are interesting to see how stress distributions are affected (see questions below). The paper mentions reinforced slab blocks, but as far as I can tell reinforcement was not tested in the paper. It would be helpful to see further discussion (or future work) on assembly order of the tiles, and how to make guarantees on locking/stability behavior. Clarity: The exposition could use improvement. - The introduction is heavy in terminology (e.g. 2-fold 2-way biaxial structures) making it a dense read. I appreciated the helpful background given in Sec. 2.3 ""Geometry and Topology of Fabric Weaves"". Please re-summarize definitions of terminology immediately in the introduction to improve clarity. - The practical implications of the paper did not become clear to me until Sec. 6 ""Structural Evaluation"". I encourage discussing applications of structural building blocks in the Introduction. Also since Sec. 6 is also a major piece of the contribution. - Pg 4, please define ""relatively prime"" - Pg 4 lin 32: typo: b=n-b The paper is quite long. Writing could be made more concise overall. Some other areas to cut: - Fig 8-10: The difference between figures for overall configuration and union of surrounding curves is very minor. I'd suggest removing the union of surrounding curves images to shorten the paper. It's also not very illustrative for understanding the mold structure since only the curves are shown, not the 3D geometry. - Fig 14: The aluminum casts are nicely made, but I don't see the need for including these in the paper. What is the relevance to the research contributions? - Sec 5.1 should be significantly reduced. Locking ability is a compelling problem, but there are no guarantees or other formal conclusions given here. Validation: Questions regarding the structural testing: - Are the planar and normal load testing using the same amount of material between all pattern types? I see that an 8x8 grid is used, but there may still be variations in volume. It also looks as though extra material surrounding the 8x8 grid was included in the analysis for twill and satin. - How do the woven structures compare to a regular flat slab of the same volume? This should be included in the results. Overall I find this a strong paper for GI with interesting contributions at the intersection of weaving and structural analysis. But exposition should be improved to properly reflect the contributions and improve readability.""",4,1 graph20_16_1,"""The authors present a new visual analytic system called Gaggle, which aims to enable non-expert users to interactively navigate a model space by using a demonstration-based approach. An evaluation with 22 non-experts support the claim to simplify the complex model and hyperparameter search by using such an interaction paradigm. The system is well motivated, its structure is sufficiently described and the overall paper is well written. However, some open questions and comments remain: 1) The usage scenario is helpful to better understand the application of Gaggle. However, the difference between the scenario and the example data presented in Figure 1 makes it unnecessary complicated to understand the described scenario in context. I recommend the authors to align these two to improve readability. 2) The last paragraph of the usage scenario is not clear. I recommend to reformulate and clarify this part of the paper. 3) The authors claim that the presented technique guards against possible model overfitting incurred due to adjusting the models confirm to specified user preferences."" (p.2) However, the authors declare later that the risk of overfitting is high with such aggressive model space search approaches like used in Gaggle. While the authors argue further that overfitting is less problematic in an exploratory context, it would strengthen the contribution to discuss potential solutions to this common issue. 4) The author describe the aim of Gaggle as to help users to explore data and gain insights. However, the process described rather helps users to faster or more accurate build a model that produce intended outcome, similar to active learning approaches. The contribution would benefit from a reflection and discussion on the active training vs. exploration trade-off. This could take the form of a more detailed related work analysis regarding active learning and similar approaches and also as part of a larger discussion about the benefits of the presented approach over them. 5) Regarding the presented model, two main question occur: 1) How does the model acts if the users selection is not coherent? In the paper it is described that if a feature satisfies one interaction but fails on another, they are left out. Only the common features across interacted items get selected. The set of selected features Fs are then used to build the random forest model. While no or a very small set of common features might represent an edge case, it is still important to evaluate the robustness and generalizability of the model. 2) The weight selection is described as The weights are set based on the model accuracy on various datasets. It would help the reader if the author could elaborate on this aspect and would make the presented approach more replicable by the research community. 6) I encourage the authors to an elaborated discussion of the potential generalizability to other models, contexts and real world scenarios. The current study takes a rather small dataset and exclusively random forest algorithms as an example case, which is quite limited in its application. To open the contribution of this work to a larger audience, a discussion should include details about necessary changes, limitations of applicability and Gaggle's potential over known approaches. 7) The qualitative evaluation should be described in more detail. This would include which likert scale questions were asked, their results and what did other participants report to present a more comprehensive picture of the overall results (currently only 6/22 referenced). I encourage the authors to consider the above mentioned comments to improve their submission, especially regarding the difference to other active learning approaches as well as the generalizability to other models and scenarios. In conclusion, the authors present an interesting approach to help non-experts in ML to consider a diverse set of model parameters, without the burden of setting them manually. The system is well designed for the use case and the study reflects its applicability in this case. Therefore, I recommend to rather accept this paper, under the condition that the before mentioned comments are considered and addressed. Spelling mistakes: - p.2: domainstration-based - p.9: might require different different model""",3,1 graph20_16_2,"""This paper presents Gaggle, a visual analytics system that helps novice analysts navigate model space in performing classification and ranking tasks. The system has many features and is probably useful and effective. But there is not much contribution in terms of the visual analytics research or understanding how humans use these types of systems. There is no doubt in my mind that a lot of work and thoughts have gone into the development of this system. However, mixed initiative systems have been studied for quite a long time. There seems not sufficient novelty in terms of the technical contribution or visualization design in this paper. First, it is unclear about the effectiveness of the proposed Bayesian based model searching technique. Auto ML has been a hot topic in the machine learning community, e.g., pseudo-url. This paper does not compare their approach with any other existing methods. It is not convincing that there exists sufficient novelty or contribution. It is also unclear if the proposed method works for navigating any ML model space (e.g., SVM, neutral networks) or just Random Forests (as described in the paper). If this is a limitation of the method, it needs to be discussed. Moreover, it is not clear who are the end users of Gaggle and whether Gaggle is useful in real world. The presentation of usage scenario is nice. However, it does not come from a real-world use case and I can hardly imagine how Gaggle would contribute to analytical process. It would be fine if the authors collect requirements from target users. The design goals seem to be distilled without involving end users in the loop. This would be okay if there was an insightful section on how human users would interact with such a system based on real user interviews. But the evaluation just uses standard techniques to confirm the usability of this system. """,2,1 graph20_16_3,"""In this submission the authors describe Gaggle, a system that takes input from user to facilitate model space navigation in VA contexts. The paper is overall well written (a couple of typos here and there, including some I will report below) and the topic is relevant to GI and the visualization community. While I was not an expert at all in this domain, I found the paper relatively easy to follow and understand. Not being an expert, I cannot judge whether or not all appropriate previous approaches are cited, but I trust that other reviewers would be able to point out missing references if there are any. I would overall argue that the work should be accepted provided that the authors can address my (relatively small) concerns (and the ones from other reviewers). I will list my concerns and questions below. Given the scalability issue that the authors currently highlight, it seems that such a complex system might be an overkill for small datasets. Especially in the way they write about this limitation. I would argue that the authors should somehow justify that their system can be useful in real scenarios despite that limitation and give concrete examples to avoid leaving the reader with this feeling. This is currently my main concern about the submission and the reason why I put a rating a bit lower than 7. It would be nice to have access to the full set of questions that were asked during the semi-structured interviews, as well as the likert scale questions that were given to participants after each trial. Currently the likert-scale results do not make much sense without being able to see what questions were asked. I overall found the qualitative evaluation to be not correctly reported. Linked to this, I would argue that the datasets used by the authors do not seem very interesting or complicated. I, so far, fail to see why users would need to use Gaggle to make use of this data. I dont know if these datasets are representative of the datasets that the authors envision for Gaggle, and I surely hope that they are not, but in this case I would argue that the authors should properly justify why they chose these specific datasets. This is currently missing and it hinders the work that the authors have conducted and reported on previously. Typos: Page 3, second column The found They found Page 8 In future In future work """,3,1 graph20_17_1,"""The paper presents three heuristic techniques to improve the rendering performance of the View Independent Rendering presented previously in reference [9] in this manuscript. These techniques include: (i) A practical suggestion of not requiring to buffer the point cloud, instead send it directly to desired off-screen buffers. This would avoid buffer overflow, but I expect this could make it slower. I would have liked to see some empirical evidence that it does not reduce speed. Table 1 shows some time improvements, but the larger gain seems to be due to stochastic culling of sub-pixel size triangles (technique iii). (ii) The second technique is to use orthographic sampling over perspective sampling used in the original paper. This eliminates non-uniform sampling due to the perspective, and yields smaller point clouds. But again I suspect it will take longer as the model size increases. It would be good to see how this affects speed as the model size increases. (iii) The third technique is stochastic culling of small triangles which span less than 1/8 of a pixel . This is a heuristic which certainly speeds up the method, but has the danger of leaving holes. Their claim is that in their example renderings it did not show holes! This is not convincing. Also, how was the factor 1/8 chosen? Why not use, say, 1/4 or 1/32? If this was chosen based on empirical studies, then it would be good to have presented them. The results do not seem to show much improvement either in speed or in rendered image quality. The paper claims that they expect improvement over [9] in renderings requiring more demanding shading loads, such as environment mapping, diffuse global illumination and defocus or motion blur. However this is not implemented and not demonstrated in any way. Overall I find the technical contribution rather limited, though there are some practical improvement techniques. This paper could fit into a short contribution category, if there is one. """,2,0 graph20_17_2,"""The work proposes adjustments to an existing algorithm. One of the contributions seems to be based on a misunderstanding of the original algorithm. The other two are rather minor adjustments. The paper is short and the text states that it was submitted as a ""short paper"". This category does not seem to exist though. For a full paper, the work does not seem ready for publication. This paper presents a modification of the existing work by Marrs et al., which converts a mesh into a point cloud that can then be used for various applications (soft shadows, depth of field...). The conversion process is fast by relying on the rasterization capabilities of the graphics cards. In principle, each triangle is rendered at a suitable resolution and the resulting fragments are interpreted as point primitives. The paper claims to make three contributions. 1) The storage of the points in a buffer is avoided and the points are instead directly rendered into additional views. 2) The authors claim to correct a mistake in the original work that leads to incorrect sampling precision. 3) A culling algorithm is proposed to stochastically ignore small triangles. The first and third contribution are small modifications. The second would be of interest but it seems that the original algorithm was potentially misunderstood. The submission mentions that Marrs et al. setup an orthographic matrix and therefore the sampling rate cannot be correct. Nevertheless, this is only the first step. Marrs et al. state after the ortho matrix was applied: ""Next, we apply a default view-projection transformation to each polygon that positions the camera at the world origin looking down the positive z-axis."" Their first ortho matrix rotates the triangle such that its normal aligns with the z-axis and shifts it along this z-Axis according to the wanted scale factor, which is determined involving the various views. Next, the default perspective camera is applied along the z-Axis, which therefore produces the appropriate amount of pixels. This mechanism is also illustrated in Fig. 1 of their paper. With this observation, the actual contribution of the submission is reduced significantly, as the other two elements are not sufficient to grant acceptance. Additionally, the submission would benefit from a careful rewrite. The paper assumes the knowledge of Marrs et al.'s work, which makes it less stand-alone. Furthermore, the mathematical formulas are not entirely sound. Equation 1 is a recursive definition and also contains a forall operator that is misplaced. The initial value is left out. The same holds for Equation 2. The pseudocode hints at what was meant but it unnecessarily complicates the reading flow. A very positive element in this work is the evaluation. Effort was spent on implementing the soft-shadow application. The resulting shadows look convincing and even a comparison to an existing competing solution is included. Unfortunately, the current application is not performing better than Marrs et al.'s work. Another interesting evaluation concerns the influence of triangle sizes on the performance. The paper distinguishes several cases and shows the high efficiency when sub-pixel triangles occur - where the culling mechanism has its opportunity to shine. It would have been good to show the limitations of the culling step. In principle, with a plane that is tessellated, there should be visible gaps for higher resolution levels. Overall, it is clear that quite some work went into the submission but given the above flaws, the paper is not ready for publication. Still, the work is a nice starting point. The authors could address the above issues, add the promising new applications that they give an outlook on, and resubmit in the future.""",2,0 graph20_17_3,"""Summary: This paper suggests two improvements to the View-independent rasterization (VIR) algorithm of Marrs et al. [9]. First, it eliminates the point buffer to avoid buffer overflow issues, and second, it replaces the projection matrix utilized in the original method with an orthographic projection computed per-polygon. Further, a stochastic sampling technique is introduced to speed up the algorithm. Overall, the paper reads well and problems are stated clearly. However, I feel that the paper lacks comparisons which would really show the impact of the proposed improvements to VIR. Comparisons are performed, but only against other methods such as PCSS and MVR. However, comparisons against the original version of VIR are missing. In my opinion, the comparisons provided only show how VIR compares to other methods, irrespective of the improvements suggested here. Runtime comparisons against the original VIR will show that the unbuffered implementation is faster than [9] (or just as fast as it). Visual comparisons against [9] will show that the orthographic projection improves visual quality. > Currently, the paper only offers the conjecture that these improvements will show up for more demanding effects. > With ref. to the buffer-free implementation, it's not clear to me how the presented implementation is different from the alternative proposed by Marrs et al. [9, Figure 4 and Section 4]. They further mention that the chose the buffered technique with compute shaders since it was observed to be faster. > I like Fig. 5 which shows that the stochastic sampling provides performance benefits without a perceptible visual degradation. To summarize, I think this paper needs an ablation study. I would like to see comparisons of unbuffered vs. original VIR, then a visual comparisons b/w [9]'s perspective projection vs. the proposed per-polygon orthographic projection. It would also be good to include more result figures. Some possible typos: > Eq. 1 should be pseudo-formula mv = v V area_p {area_{P,v,p} pseudo-formula > Eq. 2 has the same issue. > pseudo-formula and pseudo-formula are not initialized in Algorithm 1. > The use of the term ""watertight"" is confusing. I would associate watertight with topologically structured data such as triangle meshes, and not with point clouds.""",2,0 graph20_18_1,"""The paper is written well and covers an interesting technology and experiment. The authors designed and tested an embodied experience in which they enable a human to navigate environments with artificial cat whiskers which can sense the environment. Paper raises interesting questions about replacing sensory experiences with wearables to better match those of animals, in an attempt to understand animals experiences. They had 6 participants navigate a small maze blindfolded while using the whiskers. They found similarities in behavior to the participants and animals in similar scenarios. The paper would be helpful for other researchers looking to explore embodied experiences. Motivation Embodied studies with pets is an intriguing and open problem, and there does seem to be a lack of papers that explore animals and especially pets in particular. The related work section covers a lot of interesting and relevant information but is missing some important animal embodiment papers: Arque: artificial biomimicry-lnspired tail for extending innate body functions (pseudo-url) A mobile pet wearable computer and mixed reality system for humanpoultry interaction through the internet (pseudo-url) The motivation to use cat whiskers specifically was not really there. It jumps directly in to whisker design and implementations. What other options did the authors consider that could enable this embodied experience? After all, RQ1 is ""In what ways can we create technologies and environments that remediate human experiences to be like those of non-humans?"" I would either better motivate why cat whiskers was the chosen approach OR make RQ1 less general since the paper does not really dive in to any other approaches. The study hinges on sensory deprivation but does not adequately address how that deviates from the animal experience, and most importantly how despite the eye-sight deprivation the reader should still be able to draw conclusions on the animal experience from the papers findings. As mentioned in the paper, eyesight is one of the predominant senses in cats as well. The lack of eyesight seems to heavily influence nearly every interaction in the study, which would call into question how relatable this task is to a general cat-like experience (except for perhaps a blind cat). Motivation is too broad on pet embodiment, when it focuses on a particular pet and on a particular interaction. Study: In the paper, the authors write: ""cats have difficulty seeing objects very close to their faces, so whiskers help them sense objects close to them, as well as protecting their face from harmful objects"". The authors also mention that cats use both vision and whiskers to navigate. However, the study involves that people remove their sight by blindfolding them instead of alternatives such as limiting their vision. For a short-term focused study, six (6) users seems like a small count. Results and discussion appear closer to a pilot study in their breadth and depth. Clarity Well-written. Strong collection and range of cited works. Originality Very original and novel research. The experiment is interesting and the approach to analyzing the data is strong. Focuses on increasing empathy. The paper would be helpful for other researchers looking to explore embodied experiences. Significance Presented device seems well designed and appears to function well. Study and evaluation appear correct and sound. However, ultimately papers motivation is weak, research questions are too broad, the number of users is too low, and the study hinging on eyesight deprivation ultimately might influence the animal experience far too much for meaningful conclusions to be derived. Recommendations Make research questions less broad and more specific to what you actually did Motivation needs to be MUCH stronger. The Motivation mentions feminist critique and feminist science and education. It is unclear how these relate to the motivation at large and the project as a whole - Recommend csquotes package for quotes instead of bullet list items (looks nicer) - Add implications for how technology like this can benefit people. """,2,0 graph20_18_2,"""# Summary This paper describes the development of ""Whisker Beard"", a wearable interface that helps people shift perspectives to animals, such as cats. The device includes flex sensors attached to prosthetic whiskers, with flex values mapped to vibrotactile feedback on the scalp. Six participants were blindfolded and tasked with finding as many toy mice as possible (maximum of 10 were placed) in 10 minutes in a maze. Think aloud, video and focus group feedback was analyzed using qualitative coding techniques. # Review Overall, I find this paper is borderline but lean towards accept, as I don't have a strong reason to reject. The positives: The design and methods seem principled and valid, it seems quite novel to me, and most of the related work is reasonably covered. There is some takeaway, as the authors report finding animal like behaviours from the participants, as well as some quotations that suggest a shifted perspective. Limitations: I find that this paper could have several improvements that prevent me from more strongly arguing for accept. 1) The main concern I have is the limited takeaways. While the results seem valid, especially the coded results of how people behaved, I'm not really sure what to do with this information. Yes, it sounds like there is some evidence that this system can help people frame themselves as an animal, but I think the results would need more analysis before I would trust these as strong results. 2) I think the methods and results, while mostly valid, need additional analysis and reporting to be persuasive. The inter-rater reliability score is nice - it indicates that a principled coding process was followed - but I hoped to see more grounding in qualitative methodologies, which would help me trust the results of the coding process more. The quotations provided by participants are barely analyzed - simply sorted into lists - and here I think the results could be substantially stronger. The main goal of this system, if I understand the paper correctly, is to convey experience, not behaviour. As such, I think that a more thorough analysis of the participants' quotations, possibly using techniques like phenomenology and thick description, would really benefit the paper by interpreting and conveying participant experience. If this were done, then I could much more strongly argue that the paper should be accepted. 3) The system needs more description. I would have liked to see a full system diagram, and more description of the actuators used and how the voltage was mapped to vibrotactile feedback. I also would have liked to see a more rigorous calibration method - for this type of system, we don't need decibel values, but I would like to make sure the vibrotactile feedback is felt at similar levels of deflection for different participants, rather than just whether participants felt the feedback at all. (This is a minor point.) 4) The related work or design section should probably mention bio-inspired work on whisker sensors for robots, and whether those informed or inspired the work (or why not). Examples of such works: pseudo-url, pseudo-url Ultimately, after reading this paper - I'm intrigued, I think I've learned something, but I don't know how much I've learned. As such, I won't argue for rejection, but I cannot champion the paper.""",3,0 graph20_18_3,"""In this paper, the authors present a whisker-like wearable device (Whisker Beard) and conducted a study to investigate how we can create technologies and environments that remediate non-human experiences (cat in this case), and humans impressions towards them. Main findings include participants did exhibit navigation behaviour similar to whiskers-bearing animals, and drew connections to prior experiences with pets. Overall I find this paper clear-written and original (Im not aware of any studies like this). The study procedures are also well-explained. The motivation of enabling people to have an empathetic understanding of a situation through the sensory feelings of the affected animals is well-articulated and admirable. However, I think the significance/impact of the current stage of this work is lacking, and the findings dont warrant the empathetic understanding that I believe would be the most valuable contribution of this work. Therefore I do not recommend accepting this paper. Pros -Good structuring of the paper. -Made a good case of motivating this work with empathy and perspective-taking. -Good visuals to illustrate results (though Figure 7 is just an instance of the analysis and the authors didnt use it that much afterwards) -Good summary of limitations on hardware implementation and study design, and what can be done to mitigate that. Cons -The linking between scientific investigation and empathy & perspective-taking is somewhat thin. Im actually fine with just letting people sense like their pet and establish empathy on animals. In the same vein, relating this work with impairment also seems a bit of a stretch. -The design/implementation of the whisker is unclear. Including details like the mapping between intensity of vibration and bend angles (not just they are proportional), where on the scalp (relative positions for each motor), on which development platform it is built (Arduino?) would make it better. -Some findings are hard to verify (e.g., some used their whiskers to gauge the width of the passage by moving back and forth brushing each side of their whiskers on the opposing walls.) as there appears to have no other ways participants were blindfolded and explicitly told to use their whiskers to navigate. -The connection to prior experiences of pets drew by the participants feels like forced, because the researchers actually asked the participants this question. As mentioned in the Cons, I find some the claims made in the Discussion not well-supported. As another example, the authors discussed sight being a dominant sense for obtaining information about an environment. So why did they blindfolded the participants? It does not make sense to strip that away and force people to rely in the whisker senses. A pair of tinted glasses or dimmed lighting would be more appropriate. In the end, what this work really answered is just how human beings use a whisker-like sensory feature to navigate an unfamiliar environment, but not whether empathy is established nor the research questions being asked in the beginning of the paper.""",2,0 graph20_19_1,"""This paper is well-written, an easy read that clearly describes three experiments. I also think that the quantitative data from the experiments was well-done. Finally, the idea of creating a system to simplify emoji use would definitely be an aid -- as anyone who has tried to search through the many emojis available on mobile keyboards. I wish I knew more about past work in emojis prior to writing this review. While, overall, I am aware of work that explores the benefits of emojis in CMC, I am not fully informed about any studies that look, specifically, at lexical emojis and their overall purpose. I want to start this review grounded in the paper. At one point, one participant notes that they primary use lexical emojis in an ironic sense, as a humor mechanism, and I believe that there is some truth to that. If one examines past work in emoji use (both cited and via a quick google scholar search), most of the work of which I am aware or that I can find highlights -- as does this paper -- that emojis are primarily useful for their affective nature, the emotional component that they convey during communication. I actually think that this result is also mirrored in the data collected in this paper, particularly some of the qualitative data, as highlighted by the above quote. Lexical emojis like the 'school' emoji serve a limited purpose, particularly as suggestions, when I decide that I'm going to pick up my kids after school. The replacement of school with an emoji, to me, makes little sense in this context, as, as noted in the paper, the word has already been typed. This is at odds with semantic emojis, where the affect hasn't been explicitly indicated, and the emoji can serve as a useful tool to add this affective channel to communication. What is most interesting to me in this work is the 'crossed wires' effect that I see, where it almost seems like lexical emojis foster increased use of semantic emojis. Table 6 and Figure 5 highlight this for me. In Table 6, we see emoji use jump for both semantic and lexical, but it doesn't necessarily seem that people are selecting from the lexical emojis, and, in Figure 5, we also see this confound where people select the semantic emoji, but the selected emoji percentage is really low (might be a ""why would I replace the word I just typed with an emoji thing""). I can't help but wonder what would happen if the system simply suggested random semantic emojis (e.g. one happy and one sad) for every message, just to see what happens. Basically, I'm wondering if showing emoji suggestions makes a writer remember to consider adding emojis, and, for that, they simply add the emojis in. All this being said, what I feel is missing in the paper is this completeness in results. We see that both lexical and semantic emojis increase emoji use, but many other measures are unrevealing, and there is this question, even with users who a priori used lexical emojis, of whether those lexical suggestions helped. I would love to see more information, particularly from the lexical emoji users, that seeked to probe why their use of emojis increased even when their selection from suggested emojis did not. Were they typing semantic emojis? Overall, my take-away at this point from the paper is this mixed message, but not an analysis of this mixed message of lexical suggestion but not selection confounded with increased emoji use. This observation is interesting, but I'm not sure, without some drilling down, that it is enough to carry the paper in its present form. If I were to make a suggestion to the authors, my suggestion, depending on whether this paper is accepted or not, would be, if unsuccessful, to spend some time on the qualitative data. I would really like to have this data set more fully developed in the paper, beyond the relatively surface treatment of quotes in the paper. I would also suggest that the authors consider a study with random semantic emojis (as opposed to the ones produced algorithmically) to see if it is, perhaps, the prompting that emojis are possible that changes emoji use.""",2,0 graph20_19_2,"""The paper compares two emoji suggestion systems (i.e., lexical and semantic) in three ways: a small-scale crowd-sourcing study, a lab study, and a field deployment study. Results show that semantic emoji suggestions were perceived as more relevant than lexical emoji suggestions. However, the suggestion type did not affect chat experience significantly. The strengths of the paper are as follows: the paper is well-written and easy to follow. The paper contains three types of evaluations: a small-scale crowd-sourcing study, a lab study, and a field deployment study. Details of each study are presented sufficiently. The results of the three studies are complementary to each other. The weaknesses of the paper are as follows: First, the motivation of the work is the effect of the two emoji suggestion mechanisms on chat experience is unknown. One key finding is that the two emoji suggestion mechanisms did not affect chat experience significantly. The paper only provides a brief conjecture in DISCUSSION that the two types of emoji suggestion mechanisms might only affect the ease of inputting an emoji. Such an explanation (conjecture) is unconvincing. The study should have designed to gain more qualitative data to understand what factors affect their chat experience. Moreover, the studies found that users entered roughly the same amount of emoji with and without any suggestion mechanisms. This finding challenges the significance of the research question. Why is it an important research question to study in the first place (other than no one has studied this question)? Second, the frequency of lexical and semantic suggestions is different. One is at word-level and the other is at sentence-level. Even though the authors acknowledge this limitation, such design did introduce a confounding variable into the experiment. Perhaps the word-level suggestion is so frequent that participants felt overwhelmed and tended to ignore them. Third, the connection between the design guidelines and user studies is ambiguous. For example, it is unclear what the second design guideline is based on even though it seems to be reasonable. In sum, the paper explores a research question with three types of studies and presents the details of the findings. At the same, Im concerned about the significance of the research question, some aspects of the experimental design, and the foundations on which the design guidelines are based on. I'm on the fence for the paper toward rejection. """,2,0 graph20_19_3,"""This paper presents several studies of emoji suggestion systems for online conversations. The paper includes a preliminary crowdsourcing study, an in-lab study, and a 15-day longitudinal field deployment. In particular, the studies compare a lexical suggestion system (which suggest emoji that match words in the text that has been typed) and a semantic suggestion system (which suggests popular emoji that match the sentiment of the message). The paper also presents design guidelines for emoji suggestion systems. I'm somewhat torn on this paper. On the one hand, the paper is well written and easy to read; the studies are described in a good amount of detail, with justifications for the different decisions that were made; and the analysis is clear and well presented. On the other hand, I'm not sure that the studies achieve their intended goal of revealing deeper insights into emoji suggestion mechanisms, which weakens the contribution of the work. Expanding on the latter point above, the main stated contribution of the work is in comparing the lexical and semantic suggestion approaches. However, because the semantic suggestion system suggested popular emoji (while the lexical did not), it's not clear how much of the preference for the semantic suggestion system has to do with the semantic approach itself, versus just suggesting popular emoji (which may be popular because they are relevant in many different contexts). This weakens the main finding of the paper. A second criticism I have of the paper is that it is unclear how some of the design guidelines follow from the study results. The results seem to favor the semantic approach, but the first design guideline recommends suggestion diversity. The second guideline, personalization based on user and usage context, seems entirely unrelated to the studies that were conducted and their findings. Finally, the finding that the chat experience was not measurably affected by the suggestion systems is a negative result. This isn't necessarily bad, but it's hard to say what we can take away from it to inform further research or practice. As a result of the above, I'm on the fence on this paper and have given it a neutral score. If the paper is accepted, I think it should acknowledge that the popularity of the emoji suggested in the semantic condition (as compared to the lexical condition) may have played a role in the results, and the design guidelines should be modified to align more closely with the study findings. Smaller points: Page 9 - ""We then conducted two experiments, finding that emoji usage had a strong effect on senders than receivers, "" I went back and looked for this finding in the earlier sections, but I couldn't find it. What is this referring to? Page 5 - ""most commonly selected tweet per emoji"" - I think this should be ""most commonly selected emoji per tweet"" Page 5 - Q4 in Table 4 is cut off. """,3,0 graph20_20_1,"""Paper summary ================================ The paper presents a new method for the automatic colorization of pattern-based images using other colored graphics as a reference. The authors show in a study that their colorization method is superior to others. References ================================ The references are good. Implementation ================================ The algorithm is relatively simple and is explained well. Writing ================================ The writing is good, and easy to follow. Furthermore, the paper is short, and still manages to discuss the method in full details, which is very welcome. Equations (4) and (6) should specify (as a subscript) over which variable the argmax / argmin is performed. Minor writing issues: - p1: one keyword is ""G-r-a-phic-arts"" - p1: ""It is considered a valuable ingredient of our artistic abilities, so and so, that [...]"" - unusual use of ""so and so"" - p2: ""If the template and reference image has [...]"" should be ""have"" - p3: ""because the content in natural images have"" singular / plural mismatch - p3: ""collected from Colourlovers(an online community"" space missing between Colourlovers and opening bracket - p4: ""Each element of the vector represents size of corresponding color group"" article missing - p4: ""The objective is to propagate colors of reference image to the input template"" article missing - p5: ""We created a Graphical User Interface(GUI)"" space missing between Interface and opening bracket Novelty ================================ I am not versed enough in the field of image processing and colorization to judge the novelty of this paper. Given the discussion of previous works presented in this article, this article seems sufficiently novel to warrant publication. General ================================ This article addresses an interesting problem in image processing with an easy and compelling solution. The authors apply their method to a variety of images to be colored, and perform a user study to quantify their method. This paper is ready for publication after the minor issues in this review have been addressed. """,3,1 graph20_20_2,"""This paper proposes an interesting approach to colorize grayscale graphics arts by using the color scheme in reference images. Given input templates, it firstly searches similar reference images from a colored image dataset. Then colors can be transferred from reference images to input templates. The colorization pipeline is validated by two different user studies. The results are impressive in the paper and the supplementary materials. But the exposition of the paper should be improved. There are many typos and grammatical errors in the paper, which hinder understanding. I incline to weakly accept the paper if the authors can improve the exposition and address my concerns in the following. Major issues: The authors claim that they use an analytic approach other than a combinatorial or an iterative one. But they use the Hungarian algorithm approach to solve the matching problem, which is an iterative method essentially. The method requires the template and the reference images to have the same number of color groups. It is unclear what are the olor groups distribution in CRID and VG dataset respectively. What are the lower bound and the upper bound of the number of color groups in the implementation based on those datasets? How to select a reference image from the KNN search from the dataset? Are all k candidates feasible for final results? Are all results in Figure 1 generated using k reference images? What is the value of k in the implementation? It is not mentioned in the paper. Since the idea of graphic arts colorization is inspired by [5], it would be better to show the results of [5], the network of which can be trained on CRID and VG dataset using a similar way presented in their paper. Minor issues: - Prepositions in the title should not be capitalized; - In 4.1, a reference images are -> ""a reference image is"" or ""reference images are""; - In Eq.4, Pargmax -> argmax_P; - In 4.2, the subject is missing after Composition Matching (Mcmp):; - eq. eq:maximization -> eq.4; - In Eq.6, Pargmax -> argmax_P; - eq. eq:minimization -> eq.6; - In Figure 7, Result demonstrate -> ""Results demonstrate""""",3,1 graph20_20_3,"""The paper idea is pretty much summarized in the title: given a greyscale template image, they find the closest colored reference image in some custom metric, and match the greyscale values to the colors using a weighted graph matching. The technical contribution of the paper is formulating the problem as an instance of graph matching and solving it using a polynomial-time approximation introduced in [22]. The overall idea of the paper is quite clean and straightforward: in order to color a template, it makes sense to find a geometrically similar image and use its colors, matching geometrically similar elements. However, this is exactly where the paper falls a little short on its promise: instead of some notion of geometric similarity, the two out of three features the paper uses to compare images are purely pixel statistics ('Composition' and 'Sum of gradient'). I guess I would have expected some more geometric features like shape context or at least some histograms of gradient. Moreover, the paper doesn't really demonstrate how well the reference image search works. I would have expected some validation of just that stage: perhaps, a few closest images and a few far images in the collection, at the very least. Otherwise it's hard to judge whether it works at all (Fig. 4 doesn't help here: template doesn't look like the ref image). Finally, I'm a little surprised to see the discussion of Experiment-2. Do I understand correctly that the authors infer their method is superior to PERCEPT based on 10% out of 10 students' opinions, i.e. one person on average? I do realize it is not a completely formal user study, but I would not call this statistically significant. So while the paper is written well and clearly, while I appreciate the novelty of formulating the problem as a graph matching and using an approximation algorithm, even while the final colorizations look pretty, the issues above make me a little less optimistic about the paper. If the other reviewers think those issues are minor, I won't argue though.""",2,1 graph20_21_1,"""In this submission, the authors present their results of a comparative study on scalar-data fields between 5 different visual representations. The paper is well written (except a few typos mentioned at the end of this review) and easy to follow. I am not an expert in the topic but I would argue that enough related work was covered. The results are interesting and relevant to the community and I would overall argue for accepting the submission. I nonetheless have a couple of comments and issues with the submission that I detail below. I particularly appreciate that the authors made their stimulus and data analysis available. I would recommend however to put all files in a safe online repository (such as osf.io) --- instead of relying on supplementary materials --- and then link to the online repository. I also enjoyed the bayesian analysis of the results that was quite easy to understand (I would nonetheless remove the 2nd paragraph of page 6, which seems a bit odd in a research paper). I would also argue that the caption of figure 5 should mostly be in the text and not used as a caption. The caption should mostly describe the figure. It was somewhat annoying that the results for experiment 2 and 3 were not close to the text. Perhaps the authors could envision splitting their figures differently to make it easier to read the figures and the text together. The authors mention in the limitation Our main motivation was to give all techniques the best chance; there is a growing body of evidence that large displays are beneficial (e.g.,[23, 22]) but I was expecting there to get a quick summary of what they mean by this. Minor: - Make sure all references are in order (e.g., in the introduction, [32,7] or [44,27]). - There is a comment from one author left in the manuscript I believe ""CUT:that we judged (page 3, bottom left) -preavious previous (page 3, right) -underlie underline (page 4, left) """,4,1 graph20_21_2,"""This paper reports on a study on scalar data fields (SDFs) comparing 5 techniques (Digits, Color, ToolTip, Digits+Color, FatFonts variant juxtaposed not embedded) through 3 tasks (Locate Value, Find Extrema, Cluster) on a large display in a controlled lab experiment with 25 participants recruited from their university. With various reported metrics (time, errors), the main outcome is that Digits+Color (a table of digits overlaid with a heatmap) is recommended for increased accuracy with a small trade-off in time when spatial resolution is not a constraint. I would recommend for acceptance of this paper. Evaluation based on criteria suggested for reviews Quality Some claims in the introduction could benefit from references. See details thereafter. The technique named FatFont in the paper is not identical to previous related works: digits are embedded in [26,31], juxtaposed in this submission; so comparing results requires exercising caution. Clarity Figure 1 would have gained to clearly show the 5 techniques as named thereafter in the study. Figures 4,5,6 demand some effort to be interpreted, particularly since color mapping is not consistent across Figures: technique (4), column/row (5), value (6). I am not convinced by the need of a statistical method (Bayesian vs Frequentist) that is ""less familiar to readers and other researchers"" that diverts attention and requires lengthy explanations (Figure 5 caption and disclaimer on page 6). Originality This work builds upon previous studies on FatFonts presented at GI'17 [26] by introducing a straightforward baseline in the comparison: Digits (table). Signifiance While this is not clearly addressed in this submission, I believe that overlaying heatmaps over tables, the recommended technique overall, also directly brings benefits to presentation in scientific papers. Comments organized by appearance over the paper Abstract I would suggest to align presentation orders and descriptions of techniques in Abstract vs Figure 1 for faster understandability: "" 1) a state-of-the-art heatmap: 2) regular tables of digits, 3) an interactive tooltip showing the value under the cursor, 4) a heatmap with the digits overlapped over it, 5) and FatFonts. "" vs Figure 1 "" a) digits (table), b) red-blue static diverging color scale, c) color scale with digits (conditional formatting) d) and FatFonts "" Is the following mapping correct? 1=b? 2=a 3=? 4=c 5=d Why not use the names of techniques as in the study (Digits, Color, ToolTip, Digits+Color, FatFonts)? INTRODUCTION ""A large corpus of research"" Could references be cited to support this claim? ""appearance of artefacts that are due to the representation "" Are artefacts also present for techniques using text-based representations? Marcos Serrano, Anne Roudaut, and Pourang Irani Investigating Text Legibility on Non-Rectangular Displays. CHI 16 DOI:pseudo-url ""value of a continuous (or almost continuous) variable"" What is an almost continuous variable? ""Available techniques to address the problems of heat mapsinclude a cursor-controlled tooltip that renders the correspond-ing values digits (if the media is interactive)."" Any reference? Is it the same tooltip used in [26]? "" The results also refute earlier results about FatFonts being the best representation [26]"" The last sentence of the abstract of [26] reads ""The FatFonts technique showed better speed and accuracy for reading and value comparison, and high accuracy for the extrema finding task at the cost of being the slowest for this task."" [26] does not claim that FatFonts are the best representation. RELATED WORK Color Scales How does color blindness affect the choice of color scales? Text-Based Graphical Representation ""Third,the color scales that they used are not currently considered state-of-the-art, or the best current ones for continuous data"" Why? Would you have a proof to support your claim? EMPIRICAL STUDY ""In our study, we used the blue-red diverg-ing color scale implemented in D3 2"" In addition to footnote 2, why not cite at least one article by the authors of d3? M. Bostock, V. Ogievetsky and J. Heer, ""D Data-Driven Documents,"" in IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2301-2309, Dec. 2011. ""ColorBrewer color scales (including divergingcolor schemes) have been used in preavious studies [6, 11].This choice also has the advantage that it is not significantlyimpacted by the most common non-typical vision anomalies"" Not all color scales proposed by colorbrewer.org are colorblind safe. Try to tick/untick the related checkbox in the website. In addition to footnote 4, why not cite the article by the authors of Colorbrewer referenced in the information popup on Number of data classes in their website? ColorBrewer: An online tool for selecting color schemes for maps. The Cartographic Journal 40(1): 27-37. 2003 pseudo-url Please typeset URLs in footnotes 1,2,3,4 correctly with or {} as the document already uses LaTeX hyperref according to document properties. ""We selected a state-of-the-art FatFont variant that is slightlydifferent from the original versions by Nacenta et al. [31].Instead of putting the second digit (second order of magnitude)inside the first one as in the original versions of FatFonts,in this variant the second digit, which is still1~10thof thearea, appears to the right of the first one (see Figure 2)."" The FatFont variant is interesting, but in that case comparison with previous work does not apply. "" such as ""cluster"" in Amar and Stackos [2] and ""identify clusters""in Lees [25]."" [2] has 3 authors (not 2) and [25] has more than 1. ""We recruited 30 participants with a variety of backgrounds(finance, physics, art-history, administrative staff) from thelocal university. "" Is the variety of backgrounds desired or emergent from recruitment? ""If the experimenter notedconfusion or an unintended error (e.g., unintended tap on thescreen), he marked the trial as invalid, which automaticallyadded a new identical trial at the end of that trials block."" Plural ""they"" vs ""he"" would be more inclusive for all ""25 participants (9 female)"". ""We simulated SDFs for the study using MATLAB and R."" Why both environments and not just one? ""6 Strictly speaking, our data and representations are discretizationsof a continuous scale (101 levels)"" Why 101? ""We ran MCMC simulations through JAGS 4.2.0 [34]"" It would be great to explain both acronyms at their first occurrence in the document here. ""(variance cannot be reliably estimated when it is zero)"" In that case variance is then reliably equal to zero? I guess that the formulation of this sentence needs editing. ""a) it is less prone to some ofthe serious reliability and interpretation problems that haveled to the replicability crisis in Psychology and other areas"" Any reference to support this claim? ""For this reason we ask for additional patienceand effort from the reader to read the result table summaries offigures 5, 7, and the unusually long caption of Figure 5, whichguides the reader on how to interpret the results."" Figure 5 does not seem to be cited elsewhere in the paper other than here in this disclaimer. When should readers check Figure 5? EXPERIMENT 1: LOCATE ""Tooltip is preferable to FatFonts"" To avoid overgeneralization I would suggest rephrasing into: ""Participants preferred Tooltip over FatFonts"". Figure 5: ""Darker means higher, which is worse for time and error, but better for correctness and accuracy."" So ""lighter is better"" for subfigures 1 and 2, but ""darker is better"" for subfigure 3. This seems prone to confusion. EXPERIMENT 2: FIND EXTREMA ""For minima,DigitsColor and FatFatfonts are most accurate (although statisically indistinguishable from each other)"" How are both statistically indistinguishable? Error SD (Minimum) values are different between both in Figure 4, row 3 col 3. I have an issue with the lack of consistency in assignment of colors across Figures. - Figure 4: colors (red, blue, green, violet, yellow/orange) are categorically mapped to techniques - Figure 5: colors are mapped to rows (blue) and columns (red) for pairwise comparison - Figure 6: colors are mapped quantitatively to values between 1 (blue) and 5 (red) Meta question: how do the results of the study described in the paper inform how Figures in the paper could be optimized so that readers can easily locate values and extrema across metrics, and cluster techniques to better understand results of the study? EXPERIMENT 3 CLUSTER ""We did not implement pen-and-touchinput for this because most displays do not allow this doubleinteraction style."" Large displays? Because many displays, like Apple iPad + Pencil, the Microsoft Surface series and Wacom Cintiq series support pen and touch. LIMITATIONS AND FUTURE WORK ""there is a growingbody of evidence that large displays are beneficial (e.g.,[23,22])."" I have a few more reference to suggest: - Xiaojun Bi and Ravin Balakrishnan. Comparing usage of a large high-resolution display to single or dual desktop displays for daily work. CHI 09 DOI:pseudo-url - Fateme Rajabiyazdi, Jagoda Walny, Carrie Mah, John Brosz, and Sheelagh Carpendale. Understanding Researchers Use of a Large, High-Resolution Display Across Disciplines. ITS 15 DOI:pseudo-url """,3,1 graph20_21_3,"""This article compares five representations of scalar fields: number table, fatfonts, color map, color map+tooltips, color map+numbers. The experiment consisted of three tasks: locate a value, find an extrema, and identify a cluster, on synthetic data. It involved 25 participants conducting the tasks on a large display (84""). Based on a bayesian analysis of the data, the authors discuss the trade-offs between accuracy and speed involved with the techniques. The paper is clearly written and nice discusses the related work (albeight too cursorily when it comes to data tables). The analysis method is interesting and appear to be solid. Like in many experimental studies of the sort, I was left wondering to which extent some simple design improvements on the techniques would not have changed the outcome of the experiment. And the techniqu studied are not particularly original. This does not dimish the results, but rather I would encourage the author to further frame their experimental questions in real-life visualization work, and discuss them in term of design choices. Alternatively, broad, generic questions such as symbolic vs. visualization can be asked, but in such a case, a bit more modeling or theorization would be expected, in order to draw some lessons from the experiment. We are left somewhat in between precision generality and precision. On one hand, the paper provides some useful insights on the pros and cons of the techniques for differents tasks, but the techniques are somewhat rough (the tooltip technique is not very subtle and much richer forms of interaction could be imagined, color inversion of text in the color+digits could help readability, the number table layout could be improved for better legibility...) On the other hand, we are left with the experimental results without a model of speed/accuracy tradeoffs that could help pick the best representation, or some explanations on why some techniques perform better in one task. I commend the authors for their analysis. I am not expert enough to assess the quality of the bayesian analysis, but the presentation is clear, and I could follow from beginning to end. The supplementary material is detailed and useful to reproduce the analysis (I haven't tried to run the notebooks though). Regarding the presentation, I would suggest to split the figures so that the figures associated to each task appear in the relevant section. On figure 4, it feels like starting all y axes at 0 would give a better sense of the distribution of the results. Regarding fig.5 the caption is not legible, as VIS/HCI researchers I hope we can do better than warn the reader that it's going to be challenging. -> why not annotate the figure directly, rather than adding an indirection (A, B, C...) ? -> why not give a bit more salience to the interesting results and ""fade"" the other ones ? """,4,1 graph20_22_1,"""### Summary This submission presents a method for upsampling point clouds using a GAN. The method operates on height maps over local patches of a sparse point cloud; for each height map from the sparse cloud, the GAN outputs a corresponding dense map, which can then be sampled to yield a dense cloud. The GAN is trained by sampling heightmaps from a collection of high-quality meshes, as well as downsampled versions of those same meshes. The method yields visually plausible results, and is shown to be competitive-with or outperform recent work according to several evaluation criteria. The submission also demonstrates applications to upsampling extremely sparse point clouds, and upsampling scalar fields. ### Feedback Overall, the method seems sound, and the presentation is acceptable. The basic idea of performing upsampling in height-map space complements other neighborhood-based strategy, and it is makes sense that this allows simple and powerful image-based networks to be used. The presentation of the method as ""domain translation using GANs"" strikes me as odd, because it is not really the GAN that is doing domain translation---the GAN operates exclusively on images. It is the raycaster/rasterizer which translate from mesh/point cloud to image/heightmap domains. Is this terminology typical? The experimental evidence is barely-adequate. Although the submission compares to two recent methods, the comparison is only on a small, manually collected dataset presented in this work. At least one recent work is unmentioned, which also uses GANs to perform upsampling (though the rest of the methodology is quite different): > PU-GAN: a Point Cloud Upsampling Adversarial Network. (ICCV 2019) Given how recent this competing work is, I'm okay with not having a comparison, but it should be cited. ### Questions: - Section 4, paragraph 2 indicates that normals for the sparse cloud are computed using 30-neighbor PCA. Are these neighbors in the sparse cloud, or the original dense cloud? If it is the dense cloud, this would seem to be propagating information from the ground truth via the normals. - The last paragraph of section 4.1 is not clear to me. When the text says ""This variation is much larger..."", which method does ""this"" refer to? Does the methodology perform multiple runs of this method, 3PU, or both? - I'm surprised by the argument that PointNet++-based methods are not permutation-invariant (Section 5, list point 1). Choosing a random initial point yields a permutation invariant distribution of outputs (though this is not deterministic), and choosing the point nearest the cloud centroid gives full permutation invariance. This seems like a very small detail as a criticism of all PointNet++-based methods. - The dataset sounds very similar to the Sketchfab dataset used in [19]. Is it the same? If so this is an important detail.""",3,1 graph20_22_2,"""This paper presents a data-driven point cloud upsampling method using conditional GANs. The key idea is to process local oriented point cloud patches, instead of the entire point cloud, and represent the local patch as a height map image in order to leverage the power of image-based network architectures. Such technique generalizes well to low-res input point clouds and shows superior performance comparing to previous works. Overall, the results are convincing and the network seems to generalize well, but I still have some concerns and questions. 1. How does the approach performs on real-world scanned point clouds? It seems like the training data comes from uniformly sampled (Poisson disk) points on meshes. I wonder how does the approach generalize to scanned point clouds with missing regions due to occlusions. 2. How is the orientation of each height-map image determined? 3. A follow-up question is how does the overlapped region between adjacent patches be handled? Two adjacent height-map images may result in different predictions within the overlapped region, due to inconsistent orientations or the network. I wonder how does the approach handle such inconsistency. In short, the results are convincing. The ability to generate details from low-resolution point cloud is promising. My main concern is on the applicability of the method to real-world scenarios because this method seems requiring the input to be a uniformly sampled oriented point cloud. """,3,1 graph20_22_3,"""This paper addresses the point cloud upsampling problem by viewing it as an domain translation problem and using a conditional GAN framework. More concretely, the proposed method first constructs sparse heightmaps from local patches of point clouds and feeds them as inputs to a generative adversary network to translate them into denser heightmaps. The resulted heightmaps are then compared with the ground-truth heightmaps, which are generated from the ground-truth meshes using raycasting. Finally, the upsampled point clouds can be obtained by reprojecting the 2D heightmaps into point clouds. The authors claimed that their results are superior to the state-of-the-art methods and their method is faster. 1) It seems that the proposed method handles each patches of an input point cloud independently, does the number of these local patches constructed affect the final reconstruction result? How to determine this number? 2) The training details of the compared methods EC-Net [25] and 3PU [19] are missing. Are they trained with the same dataset with the proposed method? 3) More quantitative comparison results on common datasets, like ModelNet10 used in 3PU, should be provided. 4) The proposed method is very similar with Pointpronets [17], except that it leverages a conditional GAN architecture. An ablation study should be conducted to show its necessity and effectiveness. 5) It seems that the comparison is based on the reconstructed meshes using the screened Poisson surface reconstruction, which requires the vertex normals as input. While the proposed method also predicts vertex normal and uses them in surface reconstruction, the vertex normal of the compared methods are estimated in a relatively simple way. This may cause the comparison unfair. Since the main contribution of the paper is on upsampling of point clouds, the direct comparison of the output point clouds should be provided. 6) According to the authors, different random samplings of 625 input points for the same testing mesh give slightly different resulting meshes. How did the numbers in quantitative comparison calculated? Are they average value of several different runs or you just chose the best one among them? """,2,1 graph20_23_1,"""The paper addresses an important area - how can AR authoring tools support training of assembly line systems? The approach taken by the paper is promising. However, this seems like a work in progress. The paper describes a very generic system and it is unclear how this design separates it from other works. Why can't one design the 3D models in Solidworks and then place those 3D models in AR using another AR tool. Why does everything need to be done with a single tool? The 3D modelling tool described seems nowhere close to professional 3D modelling tools. How does the system address the specific challenges of the assemble line? What interaction flows can it support? The paper presents no evaluations of the system as well. """,1,0 graph20_23_2,"""The paper ""WAAT: a Workstation AR Authoring Tool for Industry 4.0"" presents an AR guidance system for industrial assembly lines that allows for on-site authoring of AR content. The system allows for the placement of 3D models, guidance widgets, and calibration of placement in relation to physical counterparts. The system is not evaluated in-situ, but a small validation was done in the lab where the task was installing a graphics card in a computer. The topic addressed by the paper is important and timely as the technology for augmented reality is maturing and the big commercial players such as Apple, Microsoft and Google are starting to pick it up. However, augmented reality for assembly tasks has been a core vision in AR research for over two decades and manufacturing is one of the domains highlighted on the Vuforia website (Vuforia is the software development kit for AR used by the authors). Therefore, I would have liked to see the authors present the current state of the art in AR for assembly line tasks and how their work relates. A quick Google Scholar search on ""augmented reality assembly line"" reveals dozens of papers some of which dating back to the early nineties. The paper provides no real evaluation of the system and technique. The authors have made a brief preliminary evaluation in the lab on an artificial task. The authors state in the conclusion that for future work the system needs to be tested with elements corresponding to the workstation at the factory and it should be tested with real operators. I agree with this sentiment, but without this evaluation, the paper remains a work-in-progress paper. Therefore my conclusion is that the work is in a too preliminary state for publication and that the authors will need to position their work better towards previous research on AR for assembly lines.""",1,0 graph20_23_3,"""This paper presents a system that allows users to place and position 3D objects in augmented reality. The system is motivated by a need to train factory workers at an increased rate, and demonstrated through a system which helps users align a graphics card into a computer. This paper, while it addresses an important area (improving training for skilled workers), doesn't appear to make a clear research contribution. Using augmented reality for training workers is a widely studied area that has been examined for over a decade. Boud, Andrew C., et al. ""Virtual reality and augmented reality as a training tool for assembly tasks.""1999 IEEE International Conference on Information Visualization (Cat. No. PR00210). IEEE, 1999. This paper does not contrast itself to the vast array of prior work, or make any distinguishable contribution. The idea of using AR to guide workers is very well explored. There is also no evaluation, simply a presentation of ideas and screenshots of an early prototype. The paper reads more like a technical report for a company than an academic research paper. Much more work needs to go into this before it is ready for publication. """,1,0 graph20_24_1,"""This paper presents QCue, a tool to assist mind-mapping through suggested context related to existing nodes and through question that expand on less developed branches, including two studies, a detailed description of the algorithm design, and rater evaluation of their results. The first study explores how users respond to new node ideas suggested by the tool and whether that creates more detailed maps. The second study expands on those findings to balance the depth and breadth of mind maps creation. Both studies compare the new mind mapping tool to digital options without computer assistance. They find that QCue produces more balanced and detailed mind maps and that some mind mapping tasks may be better suited to this type of computer intervention than others. Overall, this paper is an interesting exploration of a novel area of computer supported brainstorming. The two studies are well-described and designed studies. The level of detail in the algorithm description is a particular strength, giving a clear picture of how it works and why those choices were made. One small point that could be clarified is why a between subjects design was chosen over a counterbalanced within subjects. Finally, the discussion would benefit from some more general discussion, before the limitations, on the overall findings and what they mean for mind mapping and similar applications moving forward. The results are individually compelling, but what does it mean all together? This research is well-written and a good contribution to the area of brainstorming, and it would be interesting to get more of a complete sense of the results.""",4,1 graph20_24_2,"""This paper presents an approach for assisting people with mindmapping tasks by prompting them to expand particular nodes (to add additional depth), or to consider different aspects of the problem (to add additional breadth). These suggestions are powered by ConceptNet -- an online semantic network of concepts. In addition to the design of an interface to support mindmapping, and the algorithmic approaches to power these features, the paper contributes a evaluative study demonstrating that the QCue approach enabled users to explore diverse aspects of a given topic, and make non-obvious relationships across ideas. The paper also provides insights into future directions for developing this approach. Overall, I enjoyed reading this paper and I believe it should be accepted. Though the idea of providing support for ideation and brainstorming is not new, I am unaware of other work that has looked at mind-mapping specifically, or the application of semantic networks such as ConceptNet to this purpose. I appreciated the detail with which the paper described the process through which QCue was developed, including a preliminary study, the design of the interface and underlying algorithms, the rationale for how cues are generated, and a detailed evaluation that includes quantitative measures, ratings by experts, and subjective feedback. Moreover, I found the writing and presentation throughout the paper to be generally clear and easy to follow. Though I'm generally quite positive on this work, I do have a few small criticisms: First, I felt that the paper could do more to acknowledge and summarize some of the other approaches that have been used to support brainstorming activities, outside of the area of mindmapping. Notable work that comes to mind is Siangliulue et al.'s IdeaHound paper from UIST 2016, which includes a solid review of creativity enhancing interventions, which may contain other relevant work to cite. Second, I found some of the reporting of questionnaire results to be confusing. In particular, it was not clear the exact question that was asked of participants and what the responses were. For example, on page 3 the paper reads ""We did not find consensus regarding self-reported satisfaction with the mind-maps created by participants in pen-paper mind-mapping."", and does not provide further detail than that. It would be good to specify clearly the questions asked (including the wording), and the counts of participants that provided different ratings or responses. This applies to both the preliminary study and to the larger evaluation study. Finally, there are some minor grammatical and wording issues that could be corrected: - pg. 1 - ""Asking of one question"" - awkward - pg. 1 - The sentence that begins ""We apply this tenet"" is a run-on sentence. - pg. 2 - ""and limit the"" should be ""and limits the"" - pg. 2 - ""during idea generation process"" should be ""during the idea generation process"" - pg. 2 - ""from note-taking to information integration in areas"" - ""in areas"" can be deleted - pg. 2 - The sentence beginning ""Few works that have considered this idea [7, 25] have"" is awkward and should be rephrased. - pg. 3 - ""18 students (8 females)"" - you should report # of males, # of females, and any participants who declined to answer or specified something else, rather than assuming total = males + females - pg. 4 - ""Once a valid answer recorded, the"" - missing word - pg. 5 - ""Tesniere et al [66] note that continuous thoughts can only be expressed with built connections."" - I think it's worth unpacking the meaning of this a bit more, for people who aren't familiar with this work. - pg. 5 - ""at regular intervals of the computational cycle."" - I'm not sure what is trying to be expressed by this -- that CPU ticks were used? - pg. 5 - ""node that added to a given node"" - awkward - pg. 5 - ""using ConceptNet semantic network"" - missing ""the"" - pg. 6 - ""As user queries a word or phrase in natural language"" - ""As *the* user queries"" - pg. 6 - ""Section 5.1"" - There are no section numbers in this format - pg. 6 - ""On the other hand"" - this is informal language, and also you didn't set it up with ""On one hand"" earlier, so it shouldn't be used. - pg. 6 - ""rather a more abstract topic."" - I don't think that it's more abstract, it's just that it isn't something people usually think about. - pg. 6 - ""and the two central topics were randomized across the participants."" - This suggests that each participant was only assigned one topic. Do you mean that the order of the two topics was randomized for each participant? - pg. 7 - ""P M Q"" are used for each phase, but then are never referred to again. Unless you have a good reason to create new terminology or assign symbols, it's better to not. - pg. 8 - ""after creation of one mind map"" - Why only one and not both? Do you mean ""each""? - pg. 10 - ""generating new directions of ideas respected to the central topics"" - awkward - pg. 10 - ""The rationale behind cue comes from"" - The rationale behind ""providing cues"", or ""cuing users""""",4,1 graph20_24_3,""" This manuscript considers the problem of how to enhance the practice of mindmapping, where people construct a visual representation to support a brainstorming process. The authors' approach is to design a new workflow/tool (called QCue) that can support different aspects of mindmapping through the generation of suggestions (where the suggestions are driven by an online ontology called ConceptNet). The authors evaluate this tool through two phases, the first which considers suggestions for terms to put into the mindmap; the second of which considers the full-blown tool (suggestions for terms in the mindmap; suggestions for new nodes in the mindmap based on relationships). The evaluation suggests that the mindmaps generated with QCue are better than those generated without this tool support. The contribution of this work is in illustrating how to enhance a cognitive brainstorming task with computational support. The manuscript is tidily written, and provides a nicely motivated contribution to the community. Although I am not an expert in the particular domain of work that the research is focused on, there are some nice ideas in the design of the tool. The introduction is excellent -- it introduces the problem well, describes the approach, the evaluation and provides a clear roadmap for the manuscript. I think the related work section is also pretty good. It does an excellent job of laying out the field, and setting the scene. References seem to be great starting points for others. The only thing that I wondered about was why the digital mind mapping stuff was last, since this was the focus of the manuscript. The linkage between this and the problem exploration and computer-based cognitive support could be made even more explicit (i.e. that digital mind-mapping is an instance of these). The only bummer is that the review of digital mind-mapping tools is a bit on the thin side -- for instance, the two references that describe the problems with digital mind mapping (65,5) are written well before [25], and so it's not clear if these problems have since been resolved. For the remainder of the manuscript, I provide below some smaller points for the authors to clarify. While I think these are executed reasonably well, there are perhaps some opportunities for the authors to consider in future work, or at least to reflect on in the current manuscript (say, in the discussion). Overall, I think the manuscript should be accepted. Some larger issues that the authors should address are as follows: * In phase 2, the authors describe their model of a mindmapping process to include ""choosing a smaller subset to refine and detail"", yet my interpretation of the algorithm is that it treats all nodes of equivalent depth equally. Thus, the tool support does not account for this in a semantic way; rather, it simply says (through the cues), ""Hey, you haven't looked at this for a while,"" if it is a subtree the user has chosen to ignore. I think this shortcoming of the algorithm should be noted clearly, since the algorithm seems mainly to push for areas of the mindmap has not been worked on for a while. Similarly, when constructing mindmaps, we know that some nodes are not worthwhile progressing further; how can we inform the tool of this? Or, how could the tool account for this in the future? * The UI for the tool is not illustrated well in the manuscript. While the video does a lot to alleviate this, my belief is that the manuscript should still stand on its own in this regard. I would recommend taking space from Figures 2, 6 as space for illustration or description of the UI. ## Phase 1: Points of clarification: * Perhaps providing a clearer picture of the UI for the tool participants used in the query-expansion version would be useful. E.g. Is this what is illustrated in the video? As-is, readers need to guess what the experience was like. * For the ""pen and paper"" version, were participants really using a pen and paper? * 6/18 participants were familiar with mind-mapping; does this mean the other 12 had no experience with mind-mapping? Was there effort to familiarize these participants to the technique? [This is somewhat addressed in the discussion; it may be useful to speculate on how the tool would perform with different populations -- those familiar with mindmapping vs those not familiar with it] * Was this a within-subjects design? Is it counterbalanced? * Do the authors consider both prompts to be essentially equivalent, or not? * This sentence is awkward: ""Moreover, while pen-paper mind-mapping participants agreed that the time for map creation was suffi- cient, nearly 50% did not agree with being able to span their ideas properly."" -- how do participants know whether they are able to span well or not? * Awkward (partly because we don't know what the UI looks like): ""... we observed two main limitations in our query-expansion workflow. First, the addition of a new idea required the query of the word."" ## Phase 2: Perhaps it would be useful to clarify this point: * ""we subdivide each list categorically based on the 25 relationship types provided by ConceptNet. Subsequently, we select one subdivision which has the highest sum of relation weights and use it as basis for a new cues content (Figure 2(b)). "" -- what are the relation-weights? does this vary over time? does it vary somehow? or, does ""Car"" always result in the same subdivision? (where do the relation weights come from??) ## Main Study * The main effect for the topic type is not clear (as a reader, we don't know why this is important or relevant). It might be useful to provide an explanation or interpretation. * Figures 4 and 5 are a bit hard to read. For Figure 4, perhaps the error bars (or variance lines) aren't necessary (they obscure the message). For Figure 5, I just am not sure what this means. Are these for both groups of participants? Are they stacked charts? My guess is that they are only for QCue participants (this should be made explicit), and that they are stacked, I think these are reporting on averages across participants. If so, then the variation from minute to minute seems pretty low. I'm not sure how meaningful this chart is. - It might instead be useful or interesting to consider what suggestions/queries were made, and whether they were used (or not). Also what about entries that were developed based on those the results of those queries? * For me, I would have preferred to see a bit more qualitative description of what participants did/how they used the QCue interface. The current minute-by-minute breakdown does not seem to illustrate this well for me. * The main result comes from expert evaluation. I buy this on face value, but it might be useful to understand the rubric the authors (or the evaluators) used for themselves. For instance, what does a 2, 3 or 4 rating on these metrics mean for them? Is the difference between a 2.1 score and a 2.9 score meaningful? ## Discussion * It may be worthwhile for the authors to consider whether the end result (of using QCue or TMM) to be different for the participants themselves. While I am not an expert in this space, I seem to recall hearing from some professional development workshop that mindmapping was a good tool to help brainstorm, but the mindmap itself was not a useful artefact in the end -- rather it was the *act of construction* that help to create these ""mental structures"" that would provide utility later on. If we buy this interpretation, then how can we evaluate how rich the participants' own mental structures of the problem spaces are? (Since, this is the main goal, I would guess -- i.e. it's not really about enhancing the creation of a mindmap artefact; it's really about enhancing one's cognitive understanding of the problem domain). * Perhaps for future work, might the authors consider an altogether different type of UI presentation? My current read of the cues is that they appear over time (kind of like an alert prompt). Personally, I think I would find this stressful (in the same way that an unread email count in my mailbox bothers me). What if we considered an on-demand workflow -- e.g. when I am stuck, I get to ask the tool, ""Hey, where should I work now?"" * Something the manuscript made me think about was how teachers support mindmapping activities among students. It might be interesting (perhaps not for these authors, but in the future) to explore how it is that teachers decide when and how to prompt students that are stuck in their mindmapping activities, and then to understand how the algorithm could be modified to model this type of suggestion-making approach that teachers use. Overall, I like this manuscript. But, it is still possible to push it further. For instance, the tool is cool, but I am left without a clear direction for future work. Do we think the design was perfect? Is there something we could improve on? Either in the algorithm, or the UI? Or, should the study be improved somehow? Are we convinced of the result? Are there other places we could apply this idea? These things might be interesting points for the authors to consider. """,3,1 graph20_25_1,"""This paper presents an image stylization technique that produces a different super-pixel effect where pixel boundaries are highly irregular and aligned with the feature of the original image. Overall, I like the artistic style and it has different characteristics compared to previous methods. The paper is well-written and limitations of the method are clearly carved out. Some captions (e.g., Figure 4,5,7,8,10) could be improved to have self-contained explanations. My major concern is the runtime. It seems that the proposed stylization is mainly served as a design tool. Having runtimes in the order of minutes may hinder practical usage. I would like to see more discussion on the computational bottleneck and how to improve the performance. Another concern is the motivation of this particular style. It would be nicer to motivate the proposed style with either applications that prefer to have this style or some art pieces that exhibit this style. I admit that I am not an expert in this field, it is a bit difficult for me to access the technical contributions compared to a plethora of image stylization techniques. But the results are plausible, extensions and limitations are well-discussed, thus I am leaning towards accepting this paper.""",3,1 graph20_25_2,"""The seed placement algorithm seems to make sense is there any rationale behind it? If this is original, please give more details. Otherwise, if it is inspired by some other prior work, please reference them accordingly. Section 4 discusses quite a few applications. Most of them would be better demonstrated if compared with an existing method. For example, the reduced color palettes to black and white indeed seem impressive. It would be nice to show what a naive method would produce as well as how a strong existing baseline method would perform. Figure 12 shows several comparisons with [DP73]. Overall I prefer the ones in the middle column. However, regardless of the algorithmic details, as I zoom in, it seems like the level of details is quite different in the [DP73] column and the middle column. For example, there are a lot of details in the Chimney sky and the leaves. Is epsilon of 4 a comparable parameter? If [DP73] does not abstract the image so far, would the result be comparable to the proposed method? How do the flat regions change with a video or an animated scene? One known issue with video stylization is to keep the regions somewhat stable across frames. I wonder if the proposed method has some insight in terms of temporal consistency.""",3,1 graph20_25_3,"""The paper proposes an ""image abstraction"" (i.e. simplification of image content for non-photorealistic rendering) algorithm based on region growing. The approach starts with a seed placement stage where each region is initialized from a set of connected components of a difference image between the original and a blurred version. Then, a region growing mechanism is used to grow up to a per-region local threshold. The grown regions are then rendered in order of size to produce the output. The paper is appropriate to the GI community, is clearly written, and to my knowledge presents an original algorithm for image abstraction. Strengths: + Straightforward algorithm with interpretable parameters should allow for ease of user-direction of the results + Qualitative results show good quality relative to other traditional over-segmentation based approaches from prior work. Weaknesses: - No quantitative evaluation or user study to validate benefit of proposed algorithm relative to baselines Despite the lack of quantitative evaluation, I believe the paper proposes a technically sound algorithm which appears to produce good quality image abstractions. I am therefore in favor of acceptance. """,3,1 graph20_26_1,"""Summary: In this paper the authors outline a new drone control interface StarHopper that they have developed, that is combines automated and manual piloting into a new hybrid navigation interface. The automated part of the interface builds upon existing object-centric techniques, but gets rid of the assumption that the target object is already in the drones FOV by using an additional overhead camera. The interface itself consists of four modes 360 degree viewpoint widget, delayed through-the-lens control, object-centric joysticks, and full manual joystick controls. This hybrid interface was compared to a fully manual drone interface in a user study, and flight times for navigation tasks were compared between the two methods. Additionally, subjective user parameters such as effort, frustration, mental demand, etc. were compared. It was found that StarHopper outperformed the manual controls for this given task by a significant margin. Review: I think the most exciting contribution from this paper is the way that each of the four modes included with StarHopper has a different strength, and they work very well used in series when completing a specific task, as is evidenced in the Navigation Patterns section. Shows that StarHopper is well-suited for the task for which the user study was run. It is also encouraging that many of the users converged to a similar work flow after using the controls for such a short period of time shows that they work together intuitively. It seems as though using well-known gestures such as pinch-and-zoom for the fine adjustments worked well and was intuitive for the users, based on the ratings in the subjective user preferences. That being said, part of the motivation/design guidelines for StarHopper was to create an interface that performs well in a changing dynamic environment such as a warehouse. I think that future studies performed in a more realistic environment, including obstacles moving around in real-time would be a more useful assessment of the interface. I also wonder how well StarHopper would work in a typical warehouse environment that has tall aisles and other things that would obstruct the drones view to register an object of interest it is necessary for it to not only be in the view of the overhead camera, but also within the drones FOV minus a rotation. This means that fully manual control would be necessary to approach a potential object of interest. And then if it moves in the dynamic environment, fully manual location may be necessary again. This could significantly reduce the speed improvements that are seen in the user study. The paper claims that previous object-centric solutions assume a subject that is already in the camera field-of-view, but it seems as if this is also true here minus a rotation. Another concern is that StarHopper was compared against fully manual piloting rather than something more analogous (one of the other automatic techniques that had been mentioned in the paper). It seems almost self-evident that it will be more efficient to have the drone automatically aligned to the correct side of an object (using 360 viewpoint) with only fine tuning required. It makes sense that in general navigation to the front of an object took less time than navigation to the sides, and navigation to the sides took less time than navigation to the back (ie. This result showed that the participants performance decreased as the navigation route complexity increased.). But I found it surprising that the speed benefits of StarHopper were not more pronounced for more complex routes such as navigating to the back of an object (StarHopper demonstrated a consistent efficiency advantage over manual control (31% ~ 39%, Figure 11), across the four sides.). Since the purpose of the 360 degree viewing widget is to increase the efficiency of the object-centric navigation rather than just flying to the object in the first place, why werent such benefits seen? Next, the paper isnt totally clear about the drone collisions it mentions that a few took place, but doesnt specify whether they were mostly under manual or automatic control. If StarHoppers automatic controls lead to decreased collisions, this would be a big benefit over manual. NASA-TLX responses along six dimensions. StarHopper was ranked significantly higher for mental demand, physical demand, performance, and effort. doesnt seem to be what the chart shows for performance, although I may just be misinterpreting how it is presented. To me the graph reads as StarHoppers performance being rated lower. Overall, I think that the proposed suite of navigation techniques alone are enough to make this contribution useful. But in future user studies I would prefer the technology compared to more analogous drone interfaces than pure manual, and also be used in a more natural environment. """,4,1 graph20_26_2,"""################## BY EXTERNAL REVIEWER ################## This paper presents StarHopper, a system for semi-automatic drone navigation in the context of remote inspection. By using an external camera in addition to the drone camera, a set of interaction techniques for navigation are presented. Those techniques balance automatic navigation and manual input. In a user study, the authors show that their system is significantly faster than a manual 2-joystick control, and preferred by participants. The paper is very well written, and the related work comprehensively covered and clear. While I liked the paper, there are a set of challenges that decrease my excitement, mostly in terms of conceptual and technical novelty, outlined below. In general, I am learning slightly positively towards the paper, mostly since the authors quantitatively show the superiority of advanced semi-automatic navigation techniques for drones, and that those can be combined in a single easy-to-use system. On the positive side, the system and interaction techniques are very well described and clear. The paper provides a nice rational (design guidelines) for the system, and analyses it in terms of its levels of automation. The study is sound, and very well described. The results show that StarHopper, as a combination of multiple interaction techniques, is 35% faster than a conventional manual control, and preferred by users. The analysis of flight traces is interesting as well. Lastly, the paper provides an interesting discussion on the balance between automation and manual control, and how users would not interfere while the drone would perform a maneuver, even though they were explicitly instructed that this would be okay. My main concern is regarding the novelty of the system. For me, the paper has two main contributions: 1) the idea of using an additional external camera for interaction, and 2) a set of interaction techniques. While the first contribution is novel but not used extensively, the interaction techniques are very similar to what is implemented in DJI's ActiveTrack system. The 360 Viewing Widget (circle around target) is included in DJI ActiveTrack (Trace mode + Joystick input). Object-centric joystick navigation is the same as DJI ActiveTrack (Spotlight mode). I acknowledge that 360 Viewing Widget is quasi-autonomous after the circle is set, this however I do not see as a major difference. This leaves the Delayed interaction technique, which is nice. The authors mention ActiveTrack in the related work section, and state that it assumes a single subject within the FoV and that it is less flexible in terms of navigation. I am not convinced by this argument. Circling with ActiveTrack is arguably more flexible (because less autonomy), and Spotlight and Object-centric joystick navigation are similar. The authors should make more explicit how the interaction techniques are different; and potentially condense the description of the techniques are less novel. I think the paper has the right amount of technical description given its part in the contribution. On thing that should be clarified is if the authors only calibration the camera intrinsics and extrinsics, or also the 3D position of the external camera. Overall, I think the triangulation is a clever idea and can be replaced by some advanced techniques in the future. In terms of presentation, while the paper is well written, it is quite verbose on some parts. The introduction focusses a lot on telepresence, but this is not addressed in the paper. I think the focus on remote inspection is fine and does not require such a lengthy intro. Similarly, parts of the interaction techniques could be condensed (manual navigation, managing the object of interest list), as well as the automation analysis. The focus on a touch interface is fine as well, but could be toned down. While touch is a dominant input modality, the interface would work equally well with a classical WIMP interface. Therefore, it is actually more general than 'only touch'. In terms of the study results, it would have been interesting to see a break-down of how often the different modes were used. While the authors noted that a combination was used, I would have liked to read a more detailed analysis. If this data is available, I think it would be a valuable addition to understand the difference and usage between the individual modes better. Overall, while this is a nice paper, I am somewhat on the fence due to the limited novelty, especially since large parts of the interaction techniques are already implemented in a commercially available and widely-spread system. Interesting directions would be to highlight the difference between outdoor navigation in nearly unconstrained space and very constrained outdoor space; and exploiting the external camera even more, eg for multiple or moving targets. """,3,1 graph20_26_3,"""The paper is clear and well written. Authors did a good job at motivating their work and the need for alternatives to current control methods for uavs used for video inspection tasks. However, I believe the introduction can be shortened since several parts do not bring much. For instance, the beginning of the introduction's second paragraph is not very relevant within the context of this paper. Only the last sentence adds to the argument I think. The related work section covers and discusses well prior research in the field. I really appreciated the design guidelines section even if it sounds more like design requirement to me than design guidelines per se. I have some concerns regarding the simple touch guideline argument. Authors state that it should be simple with already known gestures (I assume pinch, drag and swipe) and for more complex tasks (advanced path planning) an automated system should be able to do it. This should either be removed or clearly be defined since I do not believe that a ""magic"" algorithm will be able to infer users intentions. I would argue instead for basic and advanced interactions, possibly using another model that the object centric model for other type of tasks related to path planning. Finally, the last guideline (respect physical constraints) seems a bit in contradiction with minimizing the reliance on environmental data. I would suggest author to discuss more on how to combine these two guidelines. The star hopper description is clear and easy to follow. The interactions and technical details are well presented. I have some concerns on how users can edit or delete an object of interest if they did not select the correct position or radius on the user interface. The size of the screen and the possibly large view of the scene seems very error prone to me. I would suggest authors to add some details on this issue. The experiment is well presented and seems adequate to compare the StarHopper navigation techniques to a standard one. The results confirm that the systems helps user navigating efficiently in scenes to find and inspect objects. Regarding the discussion, authors did a god job at highlighting limitations of their work. In particular, I also believe that making the state of the system visible and helping users to understand that they are able to act while the drone is operating is a very interesting direction. Overall, I believe this is a very good paper that blends computer vision techniques, path planning and interaction. As such it clearly deserves an audience at GI. I would also suggest authors to submit a proposal to the CHI2020 workshop on human drone interaction to present this work or other related material. pseudo-url minor comments: - no keywords provided - Many drone systems are already use in industry such as monitoring paintings on wind turbines or aircrafts. Existing solutions are both manual and fully automated. It would be valuable for authors to include some examples in the paper. - figure 13 is not understandable and grey print. authors can use colors with different luminance to counterbalance the problem.""",4,1 graph20_27_1,""" This submission explores the question of how to design gaze-based interaction techniques. The authors propose and refine a gaze-based dwell+gesture technique through several micro-studies and simulation. They demonstrate that the technique is effective for what seems to be an acceptable amount of time without too many erroneous activations through a final study (which curiously is not described as an experiment). The contribution here is a technique that ameliorates accidental dwell activations through a more extensive activation step. I recommend this manuscript be accepted. While I am not an expert in the gaze-based interaction space, it seems here that there is something interesting that will be of benefit to the community. If nothing else, it contributes to the discussion of how to design these interaction techniques, and provides an example of a fairly extensive refinement process that helps the authors arrive at the well-defined model at the end of the manuscript. I do think that there are some significant presentation problems in the manuscript that we see submitted here. At some points, the presentation problems interfere with the clarity of the ideas, and this is problematic. I have provided several opportunities for how I think the manuscript's clarity can be improved by approaching the presentation slightly differently. # Abstract * Would be better to clarify what these three experiments are and/or what was revealed about eye movement in these experiments. * Do you evaluate the technique that you design here? * It doesn't look like the last clause in the last sentence is written correctly. # Introduction * First two paragraphs are very well written. The third one is a bit confusing. * The fourth paragraph begins in a very confusing way: ""combination of dwell-based manipulation and gesture- based manipulation shows potential for gaze-based command activation"", since the previous sentence says: ""Although this scheme works in mouse-based manipulation well, in gaze- based interaction, it does not because gaze-based interaction still faces the problem of unintentional manipulations."" These two seem to contradict one another. * I feel it would be useful in the introduction to describe the experiments (why were they done? what do we learn? how does it motivate the design of your technique?) # Experiment 1 * It would be useful to clarify the purpose of this study in the lede of the study description. Is this study to assess what is easy to do? Or is it to train a gesture recognition system? * Where are the instructions (e.g. UR) displayed? Does a participant receive the instructions verbally from the experimenter before staring at the black circle? Or are the instructions illustrated on the black circle somehow? * Analysis: It seems surprising that so many trials have these errors. Are these slips, or mistakes, or miscalibrations of the equipment? etc. * In Figure 2, this seems to represent UR. If the participant had gone slightly downward first (in the trial illustrated, they go slightly upward first), would the trial have been discarded? Reading the analysis description, it looks like it would have. * I am a little confused by the description of what happens next, but it looks like overall, you are trying to develop a model that has parameters that are tunable that can account for slight variations in the trajectories as people travel from one point to the end of a stroke. If this is the case, then this needs to be clarified as the goal of the experiment in the lede. * One thing that isn't clear is what happened to the single- and two-level stroke data. Were they treated the same? Were they separated in the analysis? # Gaze Detection System * This might be clearer if D_thld was renamed to W_dwell -- i.d. the width of the dwell point (since there is a large threshold before it is considered not to be a ""dwell"" anymore) # Experiment 2 * Lede is much clearer * I'm really puzzled why the design has participants having to do mapping between a letter and a gesture path. Wouldn't it be more prudent to simply have them look at an arrow that tells them which gesture to perform? (this would negate the necessity of a cheat sheet) It's also unclear to me why the study design provides the detection result -- the participants are simply creating data; it's not necessary for them to know how the system performed, right? * Why does the experimenter do this? What is the purpose: ""In description phase of the practice sessions, an experimenter told the participants their results and the reasons for their fail- ures."" * I *think* what is happening is that there are a series of trial rounds where the participant is ""learning"" how to do the gestures well, and getting feedback from the system and the experimenter. In the subsequent ""train the computer"" rounds, there is no feedback given, and THIS is treated as real data for a later phase. If this is the case, then this needs to be explained a little more clearly. * The basic idea of simulation I understand. In this case though, is the idea to use some of the earlier data collected in the simulation? Would this be appropriate? It cannot properly simulate how a person might react, because if the system provides feedback of an early activation, the person may behave differently than they do for the collected simulation data (since this was collect without visual feedback) # Experiment 3 I understand the basics of this # Conclusion * Be careful of the final claims you make here. You *do not* make it equal to mouse-based interaction. """,3,1 graph20_27_2,"""This submission mostly reports on the technical description of a gesture recognition algorithm for non-guided gaze-based command activation in gaze-only controlled interfaces. It describes four main studies. The first one characterizes one- and two-level gaze gestures when performed without guidance and from a point located at the center of the display. The trajectories collected in this study are then used to design a gaze-gesture recognition system for detecting two-level gaze gestures, and to set the initial values of all its parameters. The second study attempts to fine tune the parameters of the gaze-gesture recognition system for two-level gaze gestures performed from different locations on screen. In that respect, participants were instructed to perform a gaze-gesture from one of 5 different locations on screen, and were then notified wether the correct gesture was recognize. The third study is focused on fine-tuning the parameters of the gesture recognition algorithm in order to optimize its performance regarding the correct recognition/accidental activation trade-off. Finally, a fourth study investigates correct command selection rate with fine-tuned parameters. This is a very dense paper, with lots of content and reporting (to varying level of details) four user studies. The scope of the paper, that is the design of a gaze-gesture recognition algorithm is interesting. However, the work suffers in my opinion from the two following main limitations. #Scope and positioning Gaze-gesture has been studied for years in the accessibility community, and also explored more recently in the context of head-mounted displays as an alternate method notably for command selection. However, the paper quickly dismiss previous gaze-based gesture systems to quickly conclude that non-guided gaze-based selection will be explored, and jump into building a gesture recognition algorithm without carefully explaining why existing algorithms are limited and would not work. More precisely, several gaze-gesture methods have been explored, from unidirectional gestures [19,20] to pie menus [24,A] to smooth-pursuit based gestures [3]. While the current submission cite these works, it right away decides to investigate the very specific instance of *non-guided* *two-level* gaze gestures not relying on smooth-pursuit, and to investigate in depth a recognition algorithm for this specific case without providing any rationale abut how this design is sound. Guidance is a critical component of both command selection [B] and gesture execution [C] and dismissing these aspects for a gaze-based command selection mechanisms would require to carefully discuss these aspects (earlier than a paragraph in the discussion) and possibly to investigate their impact on user performance using the proposed system. Moreover, command selection is a task that involves several component that need to be discussed (see tables in [B]). Otherwise, if users cannot browse the commands, it is mostly a shortcut mechanism. Similarly, gesture recognition can be achieved in many ways, and different methods have been employed to recognize gestures, both in general and in gaze-based interfaces (1$ recognizer, DTW, Rubine, classification using SVM, [E], Knn, etc.). Current submission barely discusses why these methods cannot be applied directly. One specificity of gesture recognitionin gaze-only controlled interfaces is that the gestures must be self-delimited thus requiring most of existing algorithm to be adapted to this context. However, DTW and Knn have already been demonstrated as efficient in the context of self-delimited 3D gesture recognition [E,F]. Unfortunately, current submission does not carefully explain why existing gesture recognition method are not investigated, and the proposed solution is not compared to any baseline either. Therefore, the submission falls between two stools. It does not convincingly present or validate a novel gaze-based command selection technique; it does not convincingly present or validate a novel gesture recognition algorithm for gaze-based command selection. Instead, it swiftly starts to investigate the design and fine-tuning of an ad-hoc gesture recognition algorithm, which is fine, but need to be more carefully motivated and validated. #Validation of the proposed technique The four studies are interesting, but several aspects are highly questionable. First, it could be anticipated (notably because it is raised by authors in studies 2 and 3) that on-screen location as well as the objects displayed on screen can influence the accuracy of gaze gestures. Therefore I was surprised that study 1 characterizes gestures from the center of the screen only, leaving other locations to study 2 (with different goals and procedure). A more generic characterization of gaze gestures should probably use location and displayed content as factors from the start. Instead users iterate on this characterization (or more precisely, on the inferred parameters) along the studies. Then, study 3 follows a surprising and unconventional experimental procedure to investigate the robustness of the algorithm against unintentional activations. Indeed, it asked users to perform reading and text entry tasks in which dwells were used specifically for activating commands. Previous work investigating the Midas-touch problem [E,F,G] tests accidental/unintentional activation by collecting data (in this case, eye-tracker information) while using the device in real-world situation for a given duration (e.g. 24h in E). In contrast, authors decided to collect data from artificial tasks while users interacted using a ad-hoc dwell-based command selection technique. As a result, it is unclear how collected data is valid to test for unintentional activation. Finally, and combing back to previous step, the proposed method is not compared against other gaze-gesture command selection mechanisms # Main recommendations/suggestion That being said, the work has still value, but the lack of careful positioning and motivation, combined with questionable validation makes me reluctant to accept this paper. My main suggestion for alleviating this problem would be as follow First, carefully describe the context, motivation and research questions. Describing an algorithm for gaze-based command selection requires to 1-carefully explain why the chosen command-selection mechanism is sound (why two levels gestures, why no visual aid, why no smooth pursuit such as in G3 [3]) and why existing gesture recognition algorithm cannot work or be adapted. So far, this aspect is briefly summarized at the end of the related work as ""In our technique, a user can activate a command by performing a dwell and then a simple two-level stroke."" but does not explain why it would be more efficient than G3 [3] for instance. Second, assuming that the work carefully motivates non-guided gaze-based command activation in gaze-only controlled interfaces, I would suggest to rearrange the experiment 1, ""gesture detection system"", experiment 2 and ""fine-tuned parameters"" sections. It currently uses an iterative design methodology without having the corresponding section. Gestures are characterized in experiment 1, then gesture detection system is explained, and then experiment 2 tests it in different conditions, which yields that parameter values are not adequate and uses a simulation to update their values, and so on. Rather, the submission should first more carefully and comprehensively characterize the gestures, and then describe a gesture-detection system. Fine tuning the parameters afterwards is sound and makes sense, but iteratively doing it after each experiment using simulation makes it harder to follow. Third, compare the performance of the proposed gesture-recongition system both in term of true and false positive, with a baseline from the literature. [A]- pseudo-url [B]- pseudo-url [C]- Gordon Kurtenbach. PhD Thesis. The design andevaluation of marking menus.University of Toronto. 1993 [D]- pseudo-url [E]- pseudo-url [F]- pseudo-url [G]- pseudo-url ======================================================= Minor comments - p1, intro: [3] uses Dwell but as far as I remember, I believe it is not literally gaze dwell, but a cursor that is at the center of the Head-mounted display - p2, RW: [27] tests two different pursuits technique. Therefore, saying that dwell ""performed as well or better than THE pursuit technique"" is misleading - p2, RW: there might be a sentence or transition missing before the last paragraph of the ""Dwell-based command activation"" subsection - p2, RW: why characterizing gestures in ""one-level"" or ""two-level"" strokes? maybe saying that the more complex the gesture, the less likely it is to result in false positive would be sufficient? - p2, RW: I would cite [3] regarding guiding gesture, even though smooth-pursuit is not exactly gesture - p2, Xp1: why using a chin rest if the idea is to characterize gesture execution, it does not sound extremely sound if the idea is to implement a system that could be used in real world. - p3, Xp1: for reproductive research, how were directions counter-balanced? - p3, Xp1: I find 9.8% of outliers really high. I would suggest a stronger justification for removing so much data - p4, GDS: ""In our technique, a dwell was detected when the gaze stays in 5 mm (t1) for the dwell time (Tdwell)."". Why using 5mm if this is the accuracy of the Tobii? (that is an offset of 5mm can be measured). Wouldn't it be better to make it more permissive? - p5, Xp2: Was there any interaction position x gesture direction interaction effect? No effect of group at this stage? - p7, Xp3: where the 26 unintentional dwell-then-gestures gesture recognized after a dwell on a button? - p7, Xp3: why simulating by changing parameters, while authors acknowledged that users may ""learn"" the technique. Therefore, simulation would miss this aspect. - p7, Xp3: I did not understand how longer dwell can result in a higher unintentional detection rate. - p9, Discussion: ""To fine-tune parameters more, we need to conduct more experiments to obtain gaze trajectories in various environments and more varied situations,"" and ""In the experiments, no object unrelated to the experiment was displayed.""; Yes and Yes; In my opinion, these points are critical to characterize gesture and test the robustness of the algorithm - p10, Discussion: ""Using Other Gesture Detection Algorithms""; Yes, and this should actually frame the contribution. - p10: Why increasing even more the delay to display the menu? Given that users have to dwell regardless to activate a command, length of the gesture is used for recognition, and participants complained about the lack of visual landmark guiding the gesture, then why not displaying the menu in order to provide guidance and prevent error from poor recall? see [Henderson et al. Investigating The Necessity Of Delay In Marking Menu Invocation. CHI 2020] """,2,1 graph20_27_3,"""This paper presents the results of studies that examined using dwell and stroke gaze-based gestures. Through a series of experiments, the paper presents an analysis of how well dwell-then-gesture-based gaze interactions can perform. Overall, the paper is well written. It also makes progress in an emerging area that is especially important to accessibility applications. Regarding drawbacks: #1 There are some language and phrasing issues but I think those are fixable. #2 There are some issues with the study, regarding possible confounds due to lack of counterbalancing, but these may be minor. #3 Lastly, and as the Abstract acknowledges, this can be seen as a relatively minor improvement and contribution. #1 - Language While the majority of the paper is clear and easy to read, there are some sections that are difficult to understand which impacts the readability and comprehension of the contribution. For instance, what does it mean to 'enrich the vocabulary', and what does 'as much as mouse-based interaction'. I cannot understand this, and it is supposed to be the succinct definition of the contribution. #2 - Study Execution and Reporting In Experiment 3, which is potentially the most interesting, the task order was not counterbalanced. There may be a good reason for this, but it should be better detailed in the paper. Additionally, can more support be added as to why the particular study design and order of tasks was chosen? In statistical reporting, please include the actual p-value, as well as the test statistic. For example on page 4, a 'Wilcoxon signed-rank test showed no significant difference (p < 0.05)' - this sentence is confusing because p < 0.05 would be considered significant by most, so its unclear if the wrong test was done, or the wording is confusing. #3 - Contribution The paper acknowledges that this can be seen as a combination of prior works. I think that is true to some degree. I think the contribution is a bit minor - I'm concerned that such large, eye-fatiguing gestures were used. I'm concerned that such precise parameters are tested and reported on. I'm concerned that the results will not be generalizable. To me, the paper looks, for the most part, to be methodologically sound, complete in its reporting, and if some changes can be made, I do believe it will contribute to our global body of knowledge around eye gestures with the data it presents. """,3,1 graph20_28_1,"""This paper presents a projection system to help unexperienced people to draw latte art on a cappuccino. There is a user study comparing participants performance with the system, and with watching explanatory videos only. The results suggest that participants perform better with the system. This is overall an interesting idea of interactive system supporting skill acquisition. The system remains simple. This will not be a revolution, but it might be of interest. To begin with, there is little details about the design rationale. What are the design choices? The system does not seem to follow a particular rationale. The fact that participants complained about the lack of information about syrup pouring reveals that this is more a trial and error approach than an informed design procedure. There is no clue about scalability neither. To which extent the system supports other patterns? For example between the hears and the leaf the syrup is either a series of dots or a continuous line. This inevitably has an effect on syrup pouring. Are there other patters with features not presented in these three? Looking at table 1 makes me think these instructions are quite clear on how to make these 3 patterns. I wish there was a condition with these schematics only. But it also makes me think about the actual difficulty of performing such art (I never tried myself). I expected more discussion on this point in the paper. It would have been a good start for a design rationale. The experiment procedure give little details about participants background. How did authors ensure homogeneity of the groups? Last, I would like to talk about the results. First of all I am unsure a pixel comparison metric is fair. The projection method inevitably show the precise spot for pouring syrup. But in the other condition, participants could have perform just as well, with a slight rotation or translation. This might have affected the metric, with no real impact on the perceived result. The discussion mentions participants who felt the drawing were similar while the metric showed they were not. What is the objective: people's perception or a metric? Also, how many times could participants practice? The results presented in appendix do not seem so different, and I think the result will be even more similar with a little practice. In summary, the idea is interesting, but the design rationale is unclear, and it is unclear the results justify using this system.""",2,1 graph20_28_2,"""This submission describes a projector-based system to help non-baristas create latte art. The design of the system is presented, along with the results of a user study. The study found that most creations made with the system were more visually similar to templates than those created without the use of the system (as determined using image processing and questionnaire data from the public). Participants also found it easier to create latte art when using the projected system animations compared to watching a video of the process. The contribution of this submission comes from the system itself and from the results of the user study. It fits within the scope of the call for papers for GI 2020. Overall, I enjoyed reading this submission and found that the domain of latte art presented some new opportunities for exploration that are not typically found within the skill acquisition literature. I also appreciated that the study included data not only from the participants but also from (what I assume to be) the general publics opinions of the resulting latte art creations and a computer vision-based comparison. While the submission is hard to read in some places and some details about the system and study are missing, I think it is above the bar and should be accepted. Readability: If possible, it would be for the submission to be copy-edited. There are quite a few places where it was difficult to understand what the intention of a paragraph or sentence was so the messaging was lost (e.g., Even though making etching latte art while watching making videos which show the procedure, it is difficult to keep balance. -> Although one can watch a video that shows how to make etching latte art, videos make it difficult to maintain a learn how to balance different two fluids.). I also found the organization and presentation of the results to be confusing. In Section 4.2, to understand the text, one needs to reference Table 4, but Table 4 is found two pages later and it is unclear what the difference between the four sub-figures are. Moving the table closer to the text would provide clarity and adding annotations to the figures in the table would help call out the specific features of the figures that the text is referring to (i.e., I was not able to understand this description by looking at the figures in the table because they all appear to be very similar: Participants E, F, and K were not able to draw a spiral with the certain space.). I also found Section 4.3 difficult to understand because participant quotations, Likert ratings, and the result synthesis use some odd paragraphing and dont provide clear takeaway messages. Perhaps organizing the results thematically, with one theme or main result per paragraph would improve clarity. Lastly when looking at Figure 4 without the text, it is unclear what questionnaire question was asked. Changing the caption to include the question would improve readability it might also be a good idea to shrink this Figure because its a bit too large and takes up space that could be allotted towards describing the results in more detail. Missing Details: The submission would be strengthened if there were details about the inexperienced peoples questionnaire process (e.g., where were they recruited from, how old were they, did they have experience with latte art, were they paid, etc.) and the participants in the study (same questions were lingering about experience, recruitment, age, payment, etc.). I was also unclear how long the study took to complete (i.e., length of time making each design), if participants were able to replay animations / the video while they were creating their designs, and how the system advanced from the syrup placement to pick manipulation steps. More generally, after reading the submission I was left wondering about the generalizability of the system and what the unique challenges are to learners when they are learning to control and use fluids or gels. The Introduction touches a bit on this, but I was left wanting more of a discussion about the unique aspects of this domain in the Introduction, System Design, and possibly a new Discussion section about this. One unique challenge seems to be that there is no room for error or mechanism to correct mistakes if a design is messed up, the entire drink needs to be created again. This differs from other skill acquisition tasks where one could restart a task or remove what they have added. Because latte art is a form of food decoration, I was also wondering how the system could be appropriated or applied to decorating cookies or cakes with icing, which also use materials of different viscosity for decoration but dont have the added element of heat / cooling. I encourage the author(s) to consider adding a Discussion section that highlights the findings of their study and discusses the generalizability of their work. Lastly, for the Likert scale data, because Likert scales are ordinal rather than interval, the median instead of the average should be provided. """,3,1 graph20_28_3,"""This paper presents a system designed to help latte art novices create etched patterns and images in the surface of coffee foam. The authors implemented the system and performed a user study (n=12). Using the system seems to help participants create art that is more similar to the presented templates. Overall, this paper seems to be reasonably well-executed, with good figure support throughout the text (very important for a visual task like this!) and a nice implementation of a creation support tool. I find myself with some questions in the end, detailed below, but I think the paper is just about at the bar. Goal of system: I am not clear if this system is intended to teach people how to create etching latte art unsupported, or if it is for constant use by a barista. The care that went into simulation of the two different liquid viscosities suggests that part of the goal may be to give intuition to the baristas about how the syrup and milk foam might interact, and the authors discuss in the intro that an earlier system which printed onto lattes cannot make latte art with milk foam like one in free pour latte art. Therefore, baristas still have to practice latte art to make other kinds of latte art, suggesting that their system could be used for bootstrapping knowledge into other kinds of latte art. However, the study only had users create two pieces (one supported and one unsupported, counterbalanced), and did not examine learning effects between the two conditions. On the same note, I find the simulation aspect of this work very interesting: I wonder how users would have done if they had been presented with a third condition that was a non-projection video of the etching creation (i.e., I wonder what effect the co-location of instruction and execution has vs. the effect of just simplifying the instructional video). Mechanics of etching latte art creation: As a non-expert in latte art, I am curious about the other variables that go into creating successful etchings. Does the depth, angle, or speed of the tool affect the final outcome? Were other variables like these controlled or measured in the user study? I am also curious about the definition of well-balanced latte art. The authors use this term frequently throughout the text, but I dont know what exactly it means. System implementation: The authors describe the use of a small projector for displaying (and include a figure of it), but I am not clear how its video is projected onto the latte surface. Is there a depth camera component? Or is the projector simply adjusted to point at the latte each time? Does the animation loop over and over until the user completes the etching task, or what happens if a user misses the animation component of the training? I also do not completely understand how the fluid viscosities are simulated. Are there values that are plugged into After Effects to allow it to create this simulation? If so, what were the values, and how were they chosen? If it isnt as simple as putting a viscosity value in for each fluid, how was this simulation done? The authors describe that making a template for this system takes about 30 minutes. What does a template author need to do or know in order to create templates for this system? Study: I wish I knew a bit more about the study participants. How old were they? Were any of them aspiring baristas? What did they get for participating in the study (free coffee? money? extra credit?)? Did any of them have art backgrounds or other experience that might make their results different from others? I also think it would be beneficial to reformat the questionnaire results into a table or figure (for example, a box and whisker plot showing the distribution of answers on each question) to aid in exposition of that section. I like the two forms of post-hoc evaluation used for this study (both having humans look at the designs and doing background subtraction for creating a more numerical baseline of similarity. The fact that the background subtraction is so sensitive to shifting the design slightly to the side is also interesting. Overall: As I said, I like this paper in general and find the topic pretty fun. While there are some lingering questions, they could be cleaned up by another revision pass (I would also suggest to the authors that they get a native English speaker to thoroughly proofread the paper). As I said, I think this paper is just about at the bar. Nitpicking: I think in Table 1 the spiderweb design template is actually created using concentric circles, rather than a spiral (as the table seems to suggest). The spiderweb design elsewhere is correct. Structurally, it doesnt make a lot of sense to have making by oneself, making with our system, and notes as making etching latte art as parallel pieces; I would bump notes on making etching latte art to a different level, since it is somewhat unrelated to the other two.""",3,1 graph20_29_1,"""This paper presents two variations of the standard Fitts' law study, to understand the effect of (1) a situation where targets initially appear with a given size (called the ""visual width"" in the paper) but are revealed to have a larger clickable size revealed once the cursor gets close (called the ""motor width"") or vice versa; and (2) different gaps between targets arranged side-by-side. Models are fit which account for these differences, on both new data gathered from 12 participants, and data sets gathered from several past studies. Overall, I found the design of the study to be sound, as is the data analysis and modeling methodology. I also think that the overall motivation of understanding whether interfaces with distinct visual and motor widths (to use the paper's terms) is interesting. Despite the above, I am not very enthusiastic about this paper. While I appreciate the overall motivation, I'm not sure if a Fitts' law study is the right approach for going about understanding the effects of these kinds of interfaces. Or, put in a different way, I'm not sure if the study results are all that valuable for designers (given that it's looking at 1D pointing), or whether this type of interface is common enough that it's useful to have a new Fitts' law formula to account for it. The situation in which motor width differs from visual width seems fairly niche overall, and the examples cited in the introduction where visual width is greather than motor width seems like a situation that will almost always be due to poor interface implementation, rather than a conscious design decision. In addition to the above concerns about the contribution of the paper, the term ""motor size"" is already used in Blanch et al.'s CHI 2004 work to refer to the situation where the control-display gain is manipulated to create objects with a higher or lower size in motor space as compared to their visual space on screen, work which is not cited in this paper. It seems awkward to use such a similar term here, when C-D manipulation is not the focus. Finally, I found the study results to be difficult to interpret, as many of the results subsections are ANOVA output with little interpretation and commentary to help the reader understand what was found. Based on the above, I feel the paper is marginally below the acceptance threshold.""",2,0 graph20_29_2,"""This paper presents, develops and tests a series of models of pointing time in situations where the apparent size of a target does not represent its actual clickable area. The models include both visible and clickable widths in the visual space, as well as the distance between the clickable areas. In two studies, the authors show that their models do better than classic Fitts' law. They also validate some of their models using data from similar past studies. This paper is generally well written, the studies well described. The related work is clear and well documented. I particularly appreciated that the models are tested on prior data, even though the experimental setups seem very similar and therefore generalization remains an open questionstill, few papers bother to do that. However I also have a number of issues with this paper. In particular, I think the choice of experimental tasks needs to be justified before conclusions are drawn, as well as how these results would apply in real use. It is also unclear how designers could use these models in their work, which is left unexplained. Finally, a number of rationales need more justification to be convincing, and clarifications are required throughout the paper. # APPLICABILITY I feel like some discussions are missing to justify some of the design choices made in this work, and how it can be applied in real life as claimed. ## To Interface Design Throughout the paper, the authors claim that these models will help interface designers. I propose that this is not an obvious outcome, and this paper would have benefited from e.g. a quick how-to. Interaction designers already work under a number of constraints: label text lengths, available space, graphic charters, item positioning, readability, discoverability, aesthetics, etc. Some of these elements may share some parameters with the proposed models, but cannot simply become secondary to these models. With all that in mind, how can one integrate these models in a realistic design process? ## To Real Interfaces This is mentioned a few times but never really addressed: how generalizable are the tasks performed in the presented studies? The examples provided at the beginning of the paper include mostly menus, for which one would expect that the difference between V and M would remain constant. How long can one use an interface where e.g. V = a+M, and not quickly get used to it? Yet the studies in this paper made this difference constantly changing and unknown, putting participants in a constant state of ""discovering the interface"". What scenario does that represent? In that perspective, most of the presented results would only apply to the first few selections in a given targets set before its design flaws are understood and integrated by the user. ## To Realistic Mouse Interaction >> ""Following previous studies [14,15], we asked the participants to avoid clutching (replacing the mouse on the mouse pad)."" >> ""We did this because clutching may reduce the fitness of pointing models [14]. If we had allowed clutching and obtained poor regression fitness, it would have been unclear whether the results were due to clutching or experimental conditions such as the difference between the motor and visual widths."" (p. 4) - This seems like an argument for the sake of fitness reporting, not of empirical evidence. People do clutch quite a lot with mice in real life, so the presented results only apply to a subset of real-life pointing actions.Shouldn't a realistic model hold when clutching occurs? This is an artificial instruction that hurts the results more than they reveal an empirical truth. ## Should It Apply? Overall I think there is something missing regarding the general phenomenon of having clickable objects that do not disclose their full shape, except perhaps in video games. There is probably a discussion to be had about whether this constitutes bad design (especially considering the ""U-shape"" of performance briefly mentioned in the paper), and in this case, whether this paper should provide guidelines to implement it. # RATIONALES / REASONING A number of rationales in this paper seem based on fragile bases, which makes me wonder if some of them are not just here to justify the models a posteriori. For instance, it is entirely unclear to me why the transition between Equations (10) and (11) was at all necessary, at least _prior_ to performing the tests. Equation (11) is not mathematically equivalent to (10) and requires approximations, while (10) does not and was rather well justified. What made (11) an interesting model to explore beforehand? >> ""Usuba el al. found that dwell time and movement time are U-shaped functions whose origin point is located where the motor and visual widths are the same when users click on a target with different motor and visual widths [35]."" (p. 6) - Several things here. 1) ""origin"" -> ""minimum"" 2) reformulate: either visual and motor widths are the same, or they are different 3) More importantly, knowing this, it appears strange to have suggested models in which the components involving V are simply added to the components involving M. A model of the form pseudo-formula or pseudo-formula , with x > 0 will increase with pseudo-formula when pseudo-formula is held constant, not form a U-shape around pseudo-formula . Of course, the quoted sentence does not specify the input parameter of that U-shape, so my interpretation could be wrong. This needs to be clarified, but otherwise it would seem that some informations were ignored when forming the model in Equation (10). >> ""As shown in Figure 15, we found that increasing the I between the target and distractors decreased MT."" (p. 8) >> ""As shown in Figure 15, we found that enlarging I between the target and distractors decreased MT."" (p. 9) - Fig. 15 shows a significant difference in *one* pair of I values, with a difference of perhaps 20 ms. It seems difficult to justify anything strongly, with such a specific and small difference. This 'result' is used twice in the paper, and it makes me question the arguments it introduces. >> ""However, enlarging I increased the spread of the clicked positions (SDx) and error rate (Figures 17 and 19)."" (p. 9) - Fig. 17 shows no significant difference and a rather ""flat"" curve. Fig. 19 shows a significant difference in one pair, by about 2%. The arguments that follow therefore have brittle bases. >> ""We constructed a model (Model #4 in Table 2) that [...] showed the highest adjusted R2 and lowest AIC."" (p. 9) - Compared to the 2nd and 3rd models in Table 2, the difference is 0 or 0.003 in adjR2, and 1 or 7 AIC units, which seems rather small to confidently select a model over another. >> ""Thus, a UI designer can provide wide intervals to allow users to perform operations more quickly"" (p. 9) - Wide intervals can also mean more distance to the intended target, in certain configurations, which probably does not make pointing faster. # CLARITY / INACCURACIES Generally, ""motor width"" would be understood as ""the size of the target in motor space"", i.e. the distance a physical limb or device would have to travel to go through it (typically with fixed CD gain). In many interactive situations the target width in motor space is not equal to the area in which it can be clicked in the virtual space, whether that full visual width is visible or not. I believe other terms should be used throughout, e.g. ""visible width"" vs. ""clickable width"". Conversely... >> ""Thus, changing the C-D gain allows users to feel as if the motor width is enlarged without changing the visual width."" (p. 3) ...this sentence uses the ""limb motor"" definition of motor width, not the one used in the rest of the paper. With a different gain, this paper's definitions of visual and motor widths both change. Throughout the paper, a number of r-squared values are deemed ""sufficient"". What is ""sufficient"", and why, should be clarified. R-squared are not one-size-fits-all metrics of goodness, their interpretation can be contextual. Sentences like ""a model for considering the difference between the motor and visual widths and the intervals between the target and distractors"" (several times throughout the paper, in slightly different phrasings) also need to mention *what* is being modeled, i.e. pointing time. >> ""In graphical user interfaces (GUIs), users move a cursor and then click"" (p. 1) - While prominent, this does not apply to all GUIs. >> ""passing through two goals called crossing [1]"" (p. 1) - You mean ""steering""? Figure captions should be proofread. >> ""In normal Fitts tasks, the target has a certain width and practically infinite height, i.e., a 1D pointing task."" (p. 3) - Define ""normal."" A large number of Fitts' pointing studies in HCI are performed on 2D and even 3D environments and target placement. Figure 2 is referred to quite often, including near the end. This requires the reader to go back quite a lot, and quite often, while Figure 2 is not that necessary early in the paper. Figure 4 should also show the feedback when M < V. >> ""The error rate was lower than those in previous studies [...]. participants[...] performed the pointing operation while watching the highlight of the motor width. Thus, we believe that because the highlight allows the participants to operate more accurately, a lower error rate was observed"" (p. 4) - I do not understand, how else could participants have known the actual (""motor"") size of the target anyway? Of course highlights make people more accurate, if there is no other way to tell where the real target is. Also, for consistency and in relation to the point above, it would be interesting to also comment on possible differences in MT between studies, since many parameters were the same. >> ""On the basis of a previous study [33], we added 0.0049, which can be rounded to 0.00 due to preventing division by zero."" (p. 8) - That value seems random in the absence of more detailed explanations, even if they were directly taken from [33].""",2,0 graph20_29_3,"""This paper reports models for predicting pointing time on targets that have a different visual and motor width. Visual width is how the targets visually appears to users, and motor width is the actual areas on which users can acquire the target. The goal of the paper is to provide an all in one Fitts like model, factoring in amplitude, visual width, motor width, as well as interval between the target and potential adjacent distractors. Two experiments varying these factors were conducted. I would like to start my review by thanking the authors for their paper. It is an interesting piece of work that offered new perspectives on approaching pointing tasks. That being said, I'm afraid the current state of the writing prevent me from arguing for acceptance. Throughout my reading, I struggled with the concept definitions. Although defined, at the beginning, a clear context is lacking (on which situations the difference between motor and visual width appears in). In particular for situation where there is a the shorter motor width, which seems uncommon. Figure 1 provides an example of it, but it seems that it is a UI/ergonomic problem which deems the need for a model describing this situation. I understand my comment might not be relevant, as one could argue that we could still study a broader picture. However, in several papers from the related work [35,36], compelling situation (like resizing windows) are given. Therefore I do believe, reshaping the introduction using these examples early on would sweep away doubts and get the reader on board from the get-go. I would also shorten drastically the first paragraph to get quicker to the ""meat"". Neat related work. For the study design, only one repetition of each factor combination for each participant seems really odd. I understand the need to keep the xp time under 45min, but could 1M and 1V could have been dropped? Also why not choosing amplitudes that were drastically different? And having only one distractor on either side with drastically different color schemes seems quite far from realistic settings. One could argue this helped participants to simply ignore distractors. Apart from, the ANOVA results, no figures are reported in the results which makes it hard to grasp which hypothesis could be drawn or not. Given the size of the graphs and the difficulty to read them, a lot of effort is required to jump back and forth from graphs to results. I struggled interpreting the results, and was left having to decide whether I trust or not the authors assessments, which is generally not a good sign. Similarly, curves (eg. ""U"" shapes) are being referred to and compared to related work ones, however, never presented to the reader. The model fitting parts are very straightforward and tend to be text explanation of equation rather than explanation of rational. A lot of information are given on 8 variations of a model, with a clear presentation in the end that only one or two can be kept. Since all models are coming from the authors themselves, there is no particular need for comparison. I understand the method used by the authors could have been to test every possible candidates, but I would argue, presenting the reasoning behind the last model as well as the results. But no need for intermediate models. Another minor point. I understand the need for a '0.0049' but explanations are seriously lacking to justify the value as well. As it currently reads, I feels somehow parachuted in. A neat step to compare to the related work xps was taken, how did the authors check their models with these studies? Did they contacted the authors to get the data? I'm also not entirely sure how the model would benefit UI designers. A use case on how to use these results could be useful. ""Our IDmvi2 (combined) can provide UI designers with the optimal motor width, visual width, and intervals in terms of movement time; "" What does it mean? Are UI designers fixing a time and then look at what sizes targets should be? If so, it seems pretty hard to use as is. This papers presents models for predicting pointing time on targets that have a different visual and motor width. There is a value for the community. However the current state of the writing prevents me from arguing acceptance. Minor comments: - English can be off sometimes, I would suggest getting the paper thoroughly re-read - ""We observed the main effects for XX"" -> ""We observed main effects of XX"" - ""On the basis of a previous study"" -> ""Based on a previous study"" - Titles of the graph figures are not explicit at all. I would suggest using full sentences to have self-explanatory captions.""",2,0 graph20_30_1,"""The authors describe the design and implementation of a shape-based brushing technique targeted at selecting a particular type of data - trajectories. These are notoriously difficult to select directly due to issues of occlusion and the ""hairball"" effect when there are many trajectories intertwines, as is the case with eye tracking, network, or flight trails data. The authors do an excellent job of describing the problem and grounding the approach in previous work. The approach is interesting and the use cases described demonstrate the technique well. However, the paper is weakened by several writing and organizational aspects, and by an odd off-hand report of user feedback. The basics of the technique are well-described: the user draws a shape that the system then selects matches for, based on two similarity metrics (one calculated by Pearson's coefficient and the other by a PCA algorithm). As these two metrics deliver different candidates, the resulting set of trajectories is provided to the user in a set of small multiples illustrating the selected trajectories and sorted by similarity; the user can refine selections, although it was not clear how. There appears to be a set of small multiples for each of the two metrics. One main weakness of the paper is manifested here: I found the description of the bins, and how they are calculated, quite confusing. I had to re read the paper back and forward to finally tease out what I think is the way it works. Overall, the writing and the organization of the paper suffered from similar issues. A similar problem occurred with a critical aspect of the brushing technique: direction. The authors state directionality is a critical advantage of their brushing technique, but never actually stipulate how direction is specified in the original share definition. I assuming - as one would consider the obvious choice - that directionality is taken from the direction of the sketched brush at the time the user draws it. But this is not clear. IN fact, the whole way the user draws the shape is poorly described. The nice video provided was helpful in showing this technique. However, the video alludes to something not mentioned in the paper about directionality: only the Pearson algorithm identifies direction, and even from the video it was not clear how the user selected it. These critical areas of confusion around how the process actually unfolds from start to finish should have been more clearly described. I found it odd that at the authors retained both metrics, delivering different results, without trying some blended version that might reduce complexity for the user. One would expect that trying some combination would be an obvious step, especially given the unclear feedback from the expert review. The last point leads me to what I see as *the* major weakness of the paper. Having reviewed this approach with experts, the authors state that the experts did not get it, and so they choose to describe the system with a use-case method. In fact, this reads as if the feedback from the experts was so bad that they did not want to describe it. Why dont they include the feedback? Surely they found out useful information. It sounds like a classic case of theres nothing wrong with our system, just change the user. Because of that last point, I am somewhat on the fence about this paper, but am willing to consider that it might be acceptable. Id like to see an inclusion of the user review. """,3,1 graph20_30_2,"""This paper presents a technique for sketch-based brushing of multiple trails (""trail sets"") to select one or more trails with precision. The technique uses two different measures (FPCA and Pearson) to find trails that are similar in shape or similar in direction to the sketched path. The applications discussed include selecting GPS-driven paths of aircraft or cars, or selecting scanpaths from eye-tracking for further analysis. While the paper has many minor typos and errors, they did not detract from my understanding, and the work is generally well-written and clear (with one major exception - see next paragraph) and discusses the relevant literature in a thorough and balanced manner. The figures illustrate the technique appropriately, though some of them are low resolution and hard to follow - in particular the small multiples in Figures 5 and 7. Please embed high-resolution figures so an interested reader can zoom in. The small multiples view and how it works took me a while to figure out. And I'm still not sure how someone would ""refine"" a query other than redrawing a trajectory or changing the size of an existing brush (and that's really just my guess). The end of the paper suggests that the queries can't be combined, so it seems to be a bit strange to say that query refinement is possible at all. The use case discussion on page 8 bottom left, it says that FPCA and small multiples can be used first to find matching shapes, then Pearson can be used to take into account directionality. Given that the queries cannot be combined to allow for iterative refinement of selections, it was not clear to me how the Pearson measure would help if it cannot match the shape well. If you have to start over again with a new query, isn't all the shape matching lost? Wouldn't it make sense to first refine by shape, then further refine by direction? The way the measures can be used together (and the types of things they capture) should be further clarified with a figure. Furthermore, the way the range slider is used in the small multiples isn't clear when it is first introduced on page 4. It becomes clearer in the interaction discussion on page 5, but I'm not sure these two sections are needed and I would advise to consider merging them into a single discussion of the small multiples view and how to interact with it. Also, a better depiction of the range slider in use would help. I liked the use of the color scale atop the multiples to clarify the uneven bin sizes that result from the specific distribution of the measure of interest. The video really helped clarify this (though, a narration would be more interesting than the subtitles, if possible). The technique is not evaluated with any sort of user study or formal analysis, but rather illustrated with some use cases created by the authors. That said, to me the utility is clear and convincing, and I do not see the need for a study other than perhaps to better understand if people can figure out how to use the small multiples appropriately (or if they know when to use each measure). For usability, it may be better to call the similarity modes ""shape"" and ""direction"" rather than using their formal names on the interface itself. Overall, with some editing, I think this would be a good contribution to GI. The following minor issues should be fixed: - Remove references to section numbers as there are no numbers in this format - Order grouped references so the orders appear sequential [13][41] rather than [41][13] - The rendering of ""Shape"" is odd and inconsistent on page 5. Sometimes the S is bold, sometimes not. I suggest just use normal font and don't worry about constantly calling out the relation to the S vector of shape points - the relationship is clear. - Left of page 5 trial should be trail - The abstract is too long - Some spacing issues, e.g. space before period on page 1, no space before reference 44 on page 2 (there are more throughout) - In the final scene of the video the small multiples extend beyond the boundaries of the frames - is that a bug? - Page 6 ""ML 3?"" seems to be an editing note - Figure 8 c it appears to be 2 trails not one that appear between ND and FCU. Also, the caption has spacing issues - Page 8 top right ""Fr???et distance"" (looks like a special character error) - Figure 10 caption needs to be clarified to explain what is meant by different event sequences - that is means the green complete paths have different patterns - - """,4,1 graph20_30_3,"""This paper presents a technique for brushing trajectories based on two different metrics, and provides case studies illustrating the efficiency of the technique. I am not an expert in brushing per se, but I find the related work accessible enough for a non-expert to grasp the research landscape in the area. The authors do a great job at identifying limitations of related work to position their contribution with clarity. One thing I find lacking in the introduction, however, is the motivation for this work. It comes through later, but should be clearer upfront. Why is brushing 3D trajectories (in large datasets) important? In which domain/context is it useful? What are the current techniques being used and what are their limitations? Clarifying these points (possibly through an example) would strengthen the argument. Along these lines, the requirements make sense to me, but they seem rather arbitrary too. Tying them to the motivation and related work/background will also make a stronger case. The proposed algorithms make sense. I have a few comments though: - why using Pearson in 2D while the focus of the paper is supposed to be on 3D trajectories? - The FCPA section is difficult to read and would benefit from a rewriting. The figures help though. - Did the authors consider establishing a metric that would be a weighted mean of the two metrics? This would be worth mentioning. The binning and small multiple filtering is not very well explained. It i difficult to grasp how it works. How are the 5 small multiples selected, when the two metrics are on different scales? How does one selects, filters, or adjusts one of the brushes? Is it possible to weigh the different small multiples? Answers to these questions come later in the scenario, but make the explanation confusing at first. I recommend moving these explanations before the scenario.s It is disappointing to read that the authors collected feedback from domain experts, to only ignore this feedback after all. Given that the experts misunderstood the filtering parameters, I would expect the authors to at least iterate over a few alternatives and go back to the experts with those. What I read is that the authors assume that the default parameters are good enough thus they do not worry that domain experts do not understand how to change the parameters; thus decide to not try further to design a system that would allow the experts to benefit from the full power/flexibility of the technique. I have a hard time understanding why that is. A straightforward improvement could be to explain the metrics and their parameters in understandable terms, rather than technical ones. For example, replacing ""Pearson"" by ""Direction-aware"" or something similar would certainly make its purpose clearer. The example with flight trajectories is quite interesting to read. The case study with eye tracking data, on the other hand, is hard to follow. It is an interesting case study because the approach is clearly different from what is being used currently to analyze gaze data, however, it requires some heavy re-writing. Another point of concern I have is that the authors state that they ""found two reasonable similarity measures that fulfill our shape-based brushing method"", but do not show evidence that they tried others. This is intriguing, because the authors argue that their pipeline is easily adaptable to other metrics, thus I would expect them to implement a bunch of metrics and show a comparison of the results. This would provide much more valuable knowledge (which gives which result for which dataset and which brush) than just reporting the results for two somewhat arbitrarily chosen metrics. Overall, this work clearly has merits: it is an interesting problem, a good implementation of a reasonable solution, and some interesting discussion of the technique. However, the paper lack clarity and focus throughout. The writing is not great, and some of the arguments are difficult to follow because the information is scattered throughout the paper. There are also some more fundamental issues, like having arbitrary design requirements, ignoring domain expert feedback, and not exploring further metrics although the pipeline was designed with this in mind. So, while the paper is fine in its current state, it would likely become a much better paper after a round of revisions. Last, the writing is fine but the language could be improved. There are also a few typos or grammatical mistakes, including: - This figure shows the interaction pipeline to filtered items - Followed by the binning process and its filtering, the resulting data is presented to the user. - our technique provides initial good selection result - Discrete Frlchet distance There is also a leftover comment in the paper: USE CASES, second paragraph (ML: 3?). """,2,1 graph20_31_1,"""Good study paper, with in-depth analysis This paper presented a comprehensive experiment in VR, focusing on various types of menu placements, shapes, and selection techniques. Both quantitative results (incl. task completion time, error rates, # of re-entries) and post-study comments were reported. Pros: I enjoyed reading this paper. The topic domain was nicely chosen -- I agree that this work made a contribution by analyzing the experimental results and proposing categorized recommendations for developers in this field. The discussions of each independent variables and their interactions were inspiring to read. Cons: As for the clarity of presentation, I have a few concerns: In Section 3.4, it was mentioned that the participants got instructions ""in a system message"". But where is this system message displayed? In the 4 different menu placement conditions (arm/hand/waist/spatial), obviously the target UI element was displayed at different locations. Then it would be important to know where the participant got the instruction, because this would have an effect on the task completion time. Since Menu Placement is a variable with 4 levels (within-subject), and there were 24 participants in total, the degrees of freedom for F values before sphericity correction shall be (3, 69) instead of (3, 23), isn't it? Please also check the degrees of freedom reported in other parts of this paper. For Fig. 3: The legends and x-axis groups could be re-designed to better convey the message illustrated in Section 4.2. Now the figure focused more on the comparison of menu placements for each type of selection technique, rather than comparing the selection techniques. In summary, I would like to accept this paper, and I am looking forward to seeing your answers to these questions. """,3,1 graph20_31_2,"""Motivation: 1. Solidly written motivation and well-cited literature review. 2. The authors seem to provide a strong justification and explanation for exploring optimal interaction experiences for placement of graphical menus on body parts in VR body. 3. While there has been extensive studies on menu selection in VR, I am not familiar with one that is specific to menus located on virtual body parts. The proposed work seems to have at least reasonable novelty in this regard. 4. The proposed work of study on VR body-referenced menus seems to be an appropriate fit to the conference's focus on graphics and on interactions. 5. Hypotheses are solid and provide a good frame for the rest of the study 6. One of the concerns with the motivation is the fact that the authors found body-referenced graphical menus to be insufficiently explored. However, I found the implementation of body-referenced options in the papers methodology to be lacking in a few key ways (see Approach section). 7. Its possible that the lack of body-referenced graphical menu studies is due to the technological limits. This factor does not seem to be explored in the evaluation sections beyond VR technology will get better in the future. 8. The proposed work's contributions listed at the end of page 1 seem valid, but could be better condensed. That is, the two bulleted items are most redundant since most of the text is repeated except for the first word. Perhaps revise the text so that the two bulleted items are instead more succinct, such as merging them into a single merged sentence. 9. The authors were very thorough in citing and discussing references that strongly covered domains that both directly and peripherally related to the proposed work. It is clear that the authors put a lot of effort and planning to provide a detailed literature review and discuss how they differ or support the authors' proposed work. 10. Although the authors were quite thorough with their 25 references, one concern was that only one reference was from the past year and only three references were from the past two years. That is, most of the references were seminal but not state-of-the-art. If space permits, I recommend that the authors explore and include more recent related references of importance, and discuss why those recent works still do not address the challenges that remain open to the authors. 11. Suggested References: Monteiro et al. Comparison of Radial and Panel Menus in Virtual Reality. IEEE Access. 2019. Park et al. HandPoseMenu: Hand Posture-Based Virtual Menus for Changing Interaction Mode in 3D Space. ACM ISS. 2019 Approach: 1. Study design is well-written, well-motivated, and the questionnaire adequately captures the user experience 2. The three metrics of task completion times, error rates, and number of target re-entries seem appropriate and informed given the length of the work, and were also appropriately discussed. 3. I particularly appreciate the attention put into whether participants short-answer feedback matches with that of the usage metrics. For example, I like that the paper points out differences between what users think feels better for them despite lower accuracy and/or higher time taken. 4. Several references to radial menus were made but the one figure in the paper seems to depict linear menu placements. Radial menus should be shown in a second figure, especially since the paper is primarily UI-focused 5. Paper only explores a completely flat menu hierarchy. While exploring multiple hierarchies and different forms of input (e.g. drop-down menus) could be understandably out of scope, the authors should still mention justification on using the most basic of menu designs. The chief concern here is that most menus are not simple pick from list single-tier affairs, and that could make the papers findings not readily applicable to VR UI development. 6. My main concerns on the approach is the robustness of the system. Despite the fact that the papers main motivation is in body-referenced menus, the implementation does not integrate any kind of body-tracking. -The effects of this are seen primarily in the arm-menu. Many participants reported the VR arm did not match their own; they felt it was too short, long, or was not in the location they anticipated relative to their head. -The hardware used was an HTC-vive, which only directly tracks hand movement through the location and orientation of the controllers. This study then extrapolates an arm, but it appears this implementation was insufficient for several participants. -Perhaps the system did not account for differences in height, which directly correlates to forearm length If this problem can be solved via software, it did not appear that the research study took this issue into account or attempted to solve it. If the problem can only be solved via hardware, the study did not appear to integrate body-tracking hardware to bridge this accuracy gap. -This does not invalidate the research study. In fact, its easy to argue that using an off-the-shelf HTC Vive implementation without custom hardware reflects real-world usage better. However, I would like to see that explicitly laid out in the Discussion section. I dont think deferring to better full-body tracking technology in the future quite captures the level of discussion needed for this particular point. 7. Input errors could potentially be due to software implementation. Its unclear whether the authors explored pointer smoothing (for ray-casting and eye-tracking), padding between control elements, a buffer for pointer leniency (so that the pointer does not have to be exactly inside the button). -If these were not implemented, could this be a confounder? This should be discussed in the Discussion or Limitations section 8. One mystery about the paper was in regards to the orientation and direction of the body parts. That is, does the study take into account different orientations of the limbs such as the rotation of the parts, the facing direction of the hands, and the angle of the arms? 9. What was the starting position of the limbs for each stage? Were the users' limbs at rest on their side and were instructed to position them to reveal the menus, or were the users' limbs always in the necessary starting position to trigger the menu display? This information seemed to have been skipped over. 10. How were the menus triggered? Were the limbs recognized and the menus automatically placed on them? This information may be useful to readers who may not be familiar with how menus work in VR given the context of body parts. The only discussion was brief mentions of full-body tracking in the latter discussion sections, when the paper would benefit from earlier mention in the study approach's sections. 11. How was the amount of menu content for placement on the limbs determined? The approach described states a total of six, but how was that number decided? I believe that content amount in the menus may cause an effect, but it seemed like the menu content was arbitrarily selected or lacked justification? 12. One of the major limitations of the approach is the singular task of menu interactions in terms of selecting books. This is not a detriment to the study since this task does appear to cover quite a number of similar VR interactions. However, I was curious to know how generalizable this type of study is for VR menu selection in general. I believe that the study is appropriate for static standing context to perform precise selection of options, but I am unsure if this type of interaction is appropriate for more dynamic VR interactions. That is, maybe more dynamic VR interactions may prefer more immersive experiences where the user is okay with performing more physical actions to view VR menus on their body parts for more ""cinematic"" experiences. Perhaps optimal menu selections may be different when the user is more actively moving. It is hard to tell on this type of generalizability with the one task that was conducted. The authors may want to consider revising their approach to reflect this more constrained interaction scenario---which is still important---so that readers are not misled by the paper's actual evaluation. Alternatively, the authors should defend why their interaction scenario is actually generalizable. With the paper's current state, this question remains unaddressed. Evaluation: 1.This is probably personal preference, but I felt most of the text reporting completion times and error rates could be much more succinctly summarized in a table or integrated into Figs 2, 3, and 4. Doing this would likely free up space for additional UI figures that are much needed. 2.The evaluation provided detailed quantitative and qualitative outcomes that were appropriate for the study that they conducted. 3. It is general practice to remove the leading zero before the decimal point when listing p-values, since these leading zeroes are redundant. Please remove leading zeros in your listed p-values in the paper. 4. Table 2 presents the data with detailed results, however the table header values (i.e., TCT, ERR, TRE) are not too intuitive. I suggest replacing these initials with more intuitive names, such as Completion, Error, and Re-Entries. 5. One discussion area that I was interested in knowing about but did not see was the authors' observations and the participants' remarks regarding how they posed their body parts to prompt and view the menus. Specifically, I was curious if participants were consistent in how they posed their body parts or if people biased their body parts uniquely from other participants. The only remarks that I saw was on isolated observations that were consequences of participants failing to trigger them in the latter discussion sections. 6. The results that were provided show valid and relevant statistical significance that does highlight which menu interactions were better than others given the context of the study. """,3,1 graph20_31_3,"""Evaluation of Body-Referenced Graphical Menus in Virtual Environments In this paper, the authors present a study to evaluate different aspects of menu selection task in Virtual Reality (VR). Four different types of menu placement locations, two menu shapes, and three menu selection techniques were evaluated. Overall, the paper is well motivated and the study design attempts to answer key questions about menu design in VR. The study and the associated discussion of the findings are the key contributions of this paper. The study design, with 24 conditions, is quite extensive and the stats are appropriate. I have some concerns about the study design and the thoroughness and clarity of describing the study setup. I think these concerns need to be address to help contextualize the results better and improve the readability and replicability of the study. Primary concern: - The primary issue is the design of the task itself. Each trial goes something like this: the rectangular panel in the centre of the viewer's field of view displays the next target (instruction panel). The participant then either clicks the 'start' button or goes ahead and selects the indicated menu item. ( It is unclear whether the participant has to press start and then press the requested menu item.) Nonetheless, we can assume that for atleast eye tracking and head movement conditions the pointer travel distance from the instruction panel to the menu varies for different menu placement locations. For example, the menu placed on the arm is quite far away compared to the spatial menu. Based on Fitts' law, it is not surprising that the arm placement fared the worst in task completion times. Since the dependent variable is task completion time, the distance travelled from the pointer resting position to the target menu position can affect the study significantly. This implies that some of the conclusions drawn at the end of the paper would need to be updated. Ideally, the study design should have attached the instruction panel close to each type of menu to homogenize the distance from the instructions to the target. Alternatively, the paper has to acknowledge this confounding error and revise the findings. - Following up from the above point, I want to note that the description of different conditions could be improved through detailed explanation and visuals. For example, was the laser selection pointer and the menu selection visually displayed for all conditions? if so, how? The inserted image only shows visuals for the ray-tracing condition. Images are needed for all the task conditions in the study. Without this, there are several lingering assumptions on how people may have actually used the different menu selection techniques. Other minor concerns: - Actual task for the study is unclear from the study design and procedure. For example, do the participants click on the 'start' before making the next menu selection? It appears so from the images. But, this has to be explicitly mentioned in the procedure. Also, were the participants seated or could they be standing and/or walking around? - How were the hypotheses reached? Although some of this is implied from the related works section, it is better to repeat the information in the study hypothesis. Overall, I think the study findings could be improved by fixing the issues pointed out. """,2,1 graph20_32_1,"""This paper seeks a temporally evolving word-cloud visualization. Given frequency data for a set of words, the method computes target sizes and has the words grow and shrink. It uses physical simulation (collisions and Hookean springs) to manipulate word placement. A brief informal user study is inconclusive about the usefulness of time-varying word clouds. The paper identifies an excellent problem and makes a nice first attempt to address it. I am not convinced by the technique and the user study adds little value. Overall, I would encourage the authors to continue work in this direction but I think the work is not quite mature enough for publication. The design presented does not quite crack the problem: the physical forces are a nice ingredient but as presented are not sufficient to deal with the layout, packing, and alignment issues that the problem conjures. The user study does not show the visualization to be superior to a conventional line graph, and the data mostly points in the opposite direction. How do the authors envision the visualization being used? The questions asked for Table 2 do not seem like ones that users would really be interested in. The discussion is a bit too tied to the specific libraries and web technologies used. These are not details of great interest. They should be disclosed, yes, but it would be better to concentrate on the design aspects and algorithmic aspects independent of implementation. The interactive aspects of the method are not much justified or explored, leaving this reader wondering why they were discussed at all. The core problem seemed to be to generate a temporal visualization of a dataset. Adding interaction might make a more useful tool for data exploration, but seems external to the initial algorithmic problem. Since the interactions described have little algorithmic interest and were not evaluated, they can safely be omitted from the discussion. This is not to say that the researchers should give up on interactions, only that there should be a clear purpose and contribution if they are in the paper. Specific comments: ""Kane [5] uses a physics engine in Python called Box2D to create a live depiction of how the words grow and shrink over time. This is a very similar project to ours, with the exception that it is not on the web, not interactive, and the words do not stack up onto each other, they simply hover in the air"" : Some of the criticisms here seem unfair. ""Not on the web"" is an extremely weak point, not addressing the method at all. ""[the words] simply hover"": this is a design decision, and it is not clear (no case is made) that stacking is better. ""dynamic word cloud ... may even have a slight advantage for detecting the most constant words."" Why do you think so? Table 2 reports a 22% success rate for the word cloud, vs. 67% for the graph. Is there an error in how the data is reported, or is my interpretation of the data mistaken somehow? In my reading, this is a clear win for the graph. Section 5.3 is largely about a failed design direction and can be cut. The lack of broad-phase collision detection in Matter.js is not broadly interesting and the researchers should not feel restricted to using Matter (many other physics engines exist). """,2,0 graph20_32_2,"""This paper presents time-varying word-clouds that uses physically based simulation. The words in the word cloud change their size based on the data at a given time point. By using a time slider users can see the evolution of the frequency of words over time. The color of each element is selected based on the variance of word frequency over the time period of the dataset. The authors allow the user to interactively select and move words around. I really liked the introduction and how it introduced the problem. The introduction made me want to continue reading. However, the rest of the paper did not follow on that promise. The design decisions are unfortunately not well documented and the reason for certain decisions are vague. For example the first sentence about the design says ""... effective dynamic word cloud"", but there are no requirements of what effective should be or what an effective dynamic word cloud could be. I'm intrigued by the authors decision of making their tool interactive to move words around. There is no clear argument and reasoning of why this could be helpful. What could this be helpful for? Are maybe any task that would benefit from such a functionality? The comparison between interactive word clouds and line graphs in the discussion is vague and not substantiated. Why are line graphs not visual and concrete? The section on implementation details could be made much more succinct. The issues mentioned are not very relevant and interesting. Especially the section on optimization hangs in the air and does not connect to any other parts of the paper. Why is this relevant to the reader? I would suggest to use any gained space to describe and analyze the ""user study"" in more details. Because as it s now the study is not giving much arguments for using time varying word clouds. The informal ""user study"" is described very minimalistically and the results suggest that line graphs are better suited for the described tasks. If I would just look at the two tables I would choose the line graph over the dynamic word cloud. The line graph shows all the data at once where as a dynamic word cloud needs manual interaction and a user only gets a moment view of a specific time point. Using the slider can probably show which words stay and that some words are appearing and disappearing if the set of words stays constant but this is not always the case. A dynamic word cloud might be useful for some settings of data but which one? Finally, the conclusion and future work could be directed more towards what are the questions that are still unanswered. What about improving and making a formal and correct study? Current suggested future work is engineering but I am missing interesting research question that might have come up during this project.""",1,0 graph20_32_3,"""When I saw the video for this paper, my reaction was ""this is so fun! I wish I had done this!"" I feel like the *concept* is in itself an extremely strong part of this submission. Independent of implementation details, I feel like the concept itself is an exciting point of merit for publication of this submission. It's interesting that when the paper discusses prior work, a lot of that prior work is clearly targeting printed media (or other non-dynamic media). The reason that this paper works is because the medium is dynamic. If I were limited to a hardcopy printed paper, I couldn't use this method. So in some sense, the novelty here is ""what can we do in an interactive setting."" It's not a ""we can do better than the previous work"" rather a ""we consider a different problem than the prior work."" The actual implementation is maybe a bit more ""permissive"" than it needs to be. In principle, to visualize a time varying word cloud I need a temporally coherent display of the words as they are born, change in prominence, and die. The physical simulation is a very good metaphor for getting that coherence and getting all the words to fit together. On the other hand, it goes too far, in a sense, since the momentum, wiggling, and ability to shuffle the words around are all distractions from studying the temporal trend. Were these latter features just an artifact of using a physical simulation? How could we keep the benefits of the simulation while factoring out these distractions?""",4,0 graph20_33_1,"""The paper presents insights from a literature review of computer vision and risk factors in the global south. The paper is well written, timely and provides limited, but interesting insights and a good framework to discuss differences in computer vision use between regions. The limitations are primarily in the approach: A literature review will only reveal research systems published and neglect commercial efforts or research prototypes that were never presented in a publication. I think it is important to understand the work that this limitation is the lens the results need to be interpreted through. The authors express these concerns in the limitation, but the paper would benefit from addressing the limitation of the collected data early and assure that the presented insights are framed accordingly. Overall, I think the manuscript holds merit and the work would spawn interesting conversation at GI. --- //Abstract How did were the papers reviewed selected? I personally dont like the use of the acronym throughout the paper. Its not a large issue, but it decreases readability for a very small gain. //Introduction I am not sure if the dimensions mentioned in the abstract (i.e.,[] three principal dimensions: where the technology was designed, the addressed problems and the potential ethical risks arising following deployment.) at the end of the abstract are supposed to line-up. Currently there is some similarity, but the authors might want to align both to strengthen the relevance of the perspective taken. //Framework I wonder if literature on ethics in AI in general would be informative for understanding ethics in regard to computer vision specifically. I agree with the use of the frame, but see opportunity for improvement regarding the background literature. While the scenarios might not apply to the global south, the underlying ethical principle would translate. I find the paragraph 218-229 difficult to parsethe ethical standards should be the same even, but there might just at this moment just not be an applicable scenario in certain regionsa differentiation that also hold in the global north; for example, poverty and a lack of health care in the US might lead to a similar situation. //Methodology While I dont see any immediate flaws with the mythology, it seems that the authors decided to define their own approach instead of following existing guidelines for systemic reviews. //Discussion I wonder if not many of the current AI applications that are have potential risk are not on the commercial side. While the insights about research produced is most interesting the real impact on lives using AI and CV is currently made in commercial settings, e.g., Sesame Credit in China. Further, concluding that just because there were no publications that point to risks of computer vision in certain areas, doesnt imply that these risk dont exist, but just that nobody has published on the results. An important distinction that the authors should make clear to avoid confusion by the reader. """,3,1 graph20_33_2,"""The paper conducts a survey of computer vision (CV) applications in the global south (i.e. developing countries). The aim is to identify the main applications of CV and related ethical risks that exist in developing countries, and determine how these applications and ethical risks differ from the same in the global north (i.e. developed countries). The authors identify 55 research papers and manually code them on the basis of location, data topic, application domain, and ethical risk type. From this coding, the authors draw conclusions regarding the prevalence of different applications and risk types in the global south vs global north, and speculate as to why these differences emerge. There is a foundational, and to me somewhat questionable assumption, that a relatively small number (55) of research papers are indicative of the actual use and deployment of CV systems. Moreover, these papers are selected by the authors especially to be related to the global south, so conclusions contrasting trends in the global south vs the global north seem to be inappropriate. In addition, the paper's contribution seems to be out of scope of GI -- there is no technical contribution to computer graphics or HCI methods. It could be of interest to researchers interested in the overlap between policy and computer vision applications, but most likely not to the GI community. """,1,1 graph20_33_3,"""This survey paper discusses an analysis of the application areas for computer vision technology in the Global South and their potential risks. The authors analyzed 55 research papers using open and axial coding to identify application ideas and used the moral compass framework to categorize the types of risks associated with the CV-related research projects. Overall I found the paper to be interesting and think it clearly identifies the goal, contributions and limitations of this work. The methodology is well explained and the rationales provided make sense. The informal comparison with results from the Global North was also very interesting to read about. That said, a few things could have been better clarified and the discussion further enhanced. Below are some questions and thoughts I had from reading the paper: - It would have been good to see a definition of what the authors consider CV systems or what does it encapsulate early on in the paper. For example, would systems that used publically shared media data be considered in the scope of CV systems? I may have missed it, but did not see any references to algorithms that may have caused ethical issues using social media data for example. - The authors covered several variables such as country, data topic, and application area. I wondered if the authors considered the location of where the technology was deployed, not in terms of country, but more specifically where was it used e.g., offices, streets, home, hospital? This may have helped shed more light on the types of application areas explored and may have provided more context for the risk assessment. - Results -- currently as written the results are a mix of somewhat obvious or expected results (e.g., all technologies have a potential second, third...n order effect which could be harmful to people) and those that are surprising (e.g., important trends in application areas). Readability can be improved if the paper clearly identified the most surprising results helping the readers easily learn about the takeaways. - Design considerations -- I found them to be less effective given the nature of the study. I am not sure design considerations are even necessary for this paper. A rich discussion is perhaps a good conclusion for this paper. Perhaps some of the system work discussion in the design considerations section can be used instead to provide more context for the types of systems the research papers included to help readers better understand how the risk was estimated. - Minor comment -- it would have been nice to read an explicit motivation for this work.""",3,1 graph20_34_1,"""This paper describes the use of Line-Storm, an interactive system augmenting a mechanical pencil that produces sounds as the writer writes on a paper pad, to enhance creativity. No significance was found in doing so, but the authors stated that some participants wrote more in the study and reported greater presence and engagement, indicating a correlation. I had a hard time understanding this paper. It jumps between creativity, popular believes, and performance, with little cohesiveness. I dont see any particular reason to read through arguments of augmentation or replacement just state that you want to augment a common, familiar tool to improve the creative process is sufficient. Similarly, the comparison between Line-Storm and puzzle (Section 6.5) as well as other art work (Section 6.6) adds very little to the discussion of creativity, and feel very out of place. I would argue rejecting this paper. Pros -Draws an interesting connection between creativity and presence and engagement. -Quite detailed description of the sensor-fob construction, except all the figures are missing. Cons -Inconsistent reference style (e.g., Steven Jesse Bernstein 1992, Csikszentmihalyi 2014). -Related work has a lot of random concepts (e.g., play, gamification, privacy) and unnecessary quotes (e.g., Tod Machover, Heidegger). It is also packed. Consider breaking it into similar groups like Music-augment objects, Handwriting & learning, etc., and pick those that are most relevant. -Figure references in Section 3.2 are nowhere to be found, making it almost impossible to visualize how the sensor-fob looks like. -Description of the study methodology is missing. What did the participants do during the study? Information about dependent variables (sense of presence and engagement) are only briefly mentioned in the summary of results. It is also unclear whether the study was a within- or between-participant study. -Section 6.3 is highly speculative and it is unclear why attention is discussed there. -Section 6.4 is very abstract and I find many of the descriptions superficial it is really just adding some sound to a writing tool without even hiding the electronics to make it more aesthetically pleasing or non-intrusive. Minor things -Where were the two sites the study took place? Laboratory? Classrooms? -The one-sentence first paragraph for Sections 3-6 reads a bit odd and obvious. Typically it has more content to highlight the main points of the section. It is quite apparent that this paper is a trimmed down version of a longer thesis of sorts. As indicated by words like committee, incorrect figure references (e.g., Figure 17), inconsistent/missing references (e.g., [FirstSecondAuthor]), and justification of SPSS (most researchers in this community know what it is and have used it). The authors might be able to better articulate their intention by reporting comments/answers made by the participants, particularly the preservers. Currently the only findings are abstracted to significances and correlations, which are not that helpful. Lastly, I find it disturbing to include LSD as a way to enhance creativity especially when this work is unrelated to drug use.""",1,0 graph20_34_2,"""This paper describes Line-Storm, a stylus-based system that augments the writing process with audio. The paper describes the philosophical underpinnings and inspiration behind the system, the design of the stylus itself, and reports on the results of a user study with line storm. The results did not find a significant difference between Line-Storm and a control condition but found that some participants were more responsive to the study and stylus than others. I found this submission very difficult to review for a few reasons, and each of these reasons prevent me from recommending this paper for acceptance in its current form. First, it appears that some pages of the submission may be missing or were accidentally omitted. In many places there are references to Figure 17, 21, 16, 22, etc. however there is only one figure in the paper, Figure 1 (first page). This makes it difficult to understand the relevance of some of details in the system description because the level of detail seems unnecessarily fine-grained, but the figure citations suggest that such details could be crucial to replicate the device in the future (e.g., does the reader really need to know that five wires were soldered between the PCBs to replicate the system?, most likely no but without the figures it is impossible to know). It also seems like a Section 3.3, which may have detailed the software side of Line-Storm, is missing. The paper doesnt have any details on the actual sound augmentations that are performed and which user interactions cause them to be created / played / rendered. Second, there is no description of the study methodology, i.e., equipment / apparatus, tasks, measures recorded and questionnaires administered, task order or counterbalancing, etc. (perhaps the rest of Section 4 was accidentally deleted?). Because of these omissions, it is impossible to understand or replicate the study that was conducted, determine if the statistics that were performed are correct, if the conclusions correctly follow from the results, or what the contribution of the study is (i.e., Section 5.3 and 6 cannot be properly reviewed without the missing information about the study). There is also confusion about the number of participants (13? 12? 10?) that needs to be fixed, and a power analysis to account for the small number of participants (if there were actually 10) is missing. Thirdly, I found the Introduction and Section 2 much too focused on the philosophical notions of play, flow, and creativity. While I can appreciate that there was inspiration drawn from a great deal of scholarship and others writings, it overshadows and obfuscates the research questions this paper was attempting to explore. This is especially clear in 2.0.2 Previous and Related Work, where the reader does not get a clear picture of how this work extends, supplants, contrasts, or complements the HCI literature (it is unfortunately that of the 64 references, only 3 are from HCI venues (TEI, CHI, OzCHI)). Lastly, there is one citation in the paper that suggests this paper may be an extension of prior work or possibly a concurrent submission, i.e., More programming and other details can be found in [FirstSecondAuthor]. While I do respect the desire to maintain anonymity, providing this citation in this way is problematic because (i) those details are not included in this submission and (ii) reviewers cannot read this other citation to determine how similar (if at all) this paper and the citation are. It would have been beneficial to provide the actual citation and refer to the citations work in the third person (e.g., We used the programming approach / method proposed by X et al. in [Y].). Other Notes: - Musc Grip is misspelled - Two different citation styles are used i.e., [X] and (X, XXXX). - Section 5.2 can be removed because it does not add anything to the submission and the description in 3.1 can do away with a description of MAX/MSP unless it is important for the reader to know such small details as the different types of inputs and outputs - Effect sizes are missing in Section 5, as are the test statistics and degrees of freedom - No details were provided about the k-means clustering tests - What is a creative worker?""",1,0 graph20_34_3,"""This paper describes a system for augmented writing and drawing, which the authors evaluated with a user study and which they also place in context of various philosophical writings. The outcome of the study seems to indicate that there are two distinct groups of users: those who perceived the system as just another way of writing/drawing, and those who engaged with the system as a creative tool. Overall, this paper was somewhat of a challenge to review, as I believe there may have been a technical problem or incorrect document submission. The paper contains only one figure (at the top of page 1), though it does reference many others within the text. The paper contains one or two unrendered references (e.g., [FirstSecondAuthor] in section 3.2.1) and a mixture of citation styles (e.g., (Springob, 2015) vs [34] vs calling out works by name within the text as in Taskscape is a complex term from Ingolds The Temporality of the Landscape). That said, most of the prose is easy to follow, if a bit repetitive and overly-detailed in some places. I have detailed comments and questions below about the study, system, and philosophical components of the paper, but unfortunately its readability makes it hard for me to recommend it for publication without substantial rewriting and a re-review with figures included. Study: What was the prompt that was given to users? The authors mention that they removed data related to two users who used the system for drawing rather than writing, but I am not certain how to interpret that: were the users going against the prompt, or were they within the boundaries that were set by the experimenters? What was the control condition (presumably writing or drawing without LineStorm turned on)? Was the order of control and experimental condition randomized across participants to reduce ordering effects? What was the payoff for users who participated? It seems to be implied that extra credit was given to participants, although this isnt explicitly stated. Were they still given extra credit if they chose not to continue with the study at any point? Did the authors track which users had more experience in creative writing/drawing with any kind of intake survey? In general, I found this section challenging to understand: I would encourage the authors to read it again with the idea that they want another researcher to be able to re-do their study in the same way. The data analysis from the study is also a bit hard to understand (due to the missing figures). I would like to see a bit more exposition on the k-means clustering, since the authors argue that is their primary result. Was the k-means clustering performed on all the answers to all the survey questions, or only a subset? What were the questions on the survey? Was there a particular question that separated the Preservers of Line-Storm most clearly from the other group of users, or were their results just slightly different on all questions? System: The authors include a lot of technical details of parts and wiring for the Line-Storm system prototype. I believe that the physical components of the system would be easily replicable from their description (though I would encourage the use of a wiring diagram, e.g., from Fritzing pseudo-url to cut down on the needed exposition). The digital component, though, would be more challenging for me to replicate. The details of how the system reacts to user input are not included in the implementation section; its possible that they are in the unrendered [FirstSecondAuthor] paper? In an earlier section, the authors mention The sounds made, while writing or drawing, are captured using a contact microphone and are played through headphones. Sounds of natural phenomenathe sounds of a thunderstormaugment the writing or drawing experience. Im not clear what triggers the inclusion of additional sounds: is it related to accelerometer readings? When the authors later mention a comment from a committee member who suggested allowing rapid-fire triggerings of thunderstorm sounds, it suggests to me that there is also some kind of time-delay currently implemented in Line-Storm which limits sounds, though I am not certain how it works. Given MaxMSPs graphical programming nature it could be appropriate to include a figure of the developed patch here, as well, to aid in understanding and replication, or alternatively a system flow diagram. Philosophical underpinnings of Line-Storm: The authors clearly have a good grasp of the ludic literature to which their work relates, and walk the reader through some interesting references. As my background unfortunately does not include many of these, I found myself looking up a lot of terminology (e.g., thingly character and part of a matrix of all equipment). It would be helpful to include a few definitions for these sorts of terms: the audience of this paper may be very diverse in their backgrounds, and likely not everyone who reads it will be familiar with Heideggers essay. Overall: This paper presents an interesting system with some intriguing first results from a user study. However, because of missing figures, it is challenging to understand, and some sections would make replication difficult.""",1,0 graph20_35_1,"""This paper presents an interactive visualization system for exploring genomic conservation. The design is based on a few existing visualizations such as dot plot and parallel plot. The system was developed by involving domain experts and evaluated through deployment studies. Overall, it seems a strong paper. It is clearly written. The visual design looks appealing. One issue I had with this paper is its contributions. It says that Sybteny has ""novel visualizations such as stacked plots and hive plots,"" however, I didn't find sufficient novelty in its visualization design. These charts exist and have been widely used in many applications. A better story of this paper should focus on the design study itself, including the requirement gathering, design process, and deployment. While I think this system can be useful by putting together different views to serve the purpose, I don't think it is novel enough, at least not the selling point of this paper. Focusing on the actual design study, I found several details are missing. I'd like to know the design process and how the requirements were gathered, via interviews, or focus groups? The paper just briefly mentions ""our discussion with the three research teams led to..."" But how? Further, I was very confused about the so-called taxonomy of design space created by prior visualizations. It lists a few existing plots, but I don't think it can be called a design space. Design space is a multidimensional combination and interaction of factors under consideration. The content of this section can be integrated into the actual system description and added with justification for fulfilling the design requirements. Another big concern is that the case studies lack depth. While presenting three case studies look good, none of them reveal interesting insights. The system has been up running for a year. There must be a lot of data to collect. I'm disappointed with the three toy examples, which lacks sufficient details in how the analysts use the system. I suggest removing one or two case studies and describing one in greater detail, through a step by step demonstration of the usage of the system with figures illustrating the insights found. As the system is deployed, some interaction log analysis is necessary, in addition to just the summary web traffic. In summary, the evaluation part of this paper needs to be strengthened. This is because the visualization design is not that novel and I view whole design study as the core contributions. """,3,1 graph20_35_2,"""This design study paper describes a tool to interactively visualize conserved genomic regions. It describes the problem domain and provides a data characterization. The paper provides 6 requirements on which the tool is build on. The visualization system consists of multiple coordinated views including known plots such as Dot Plots and Parallel Coordinate plots, but also includes two novel visualizations Tree plot and Hive plot. The paper ends with a description of a user evaluation based on semi-structured interviews conducted with 5 domain experts of 3 different research groups the authors collaborated with. The paper is well structured and well written. The images nicely supports the narrative of the paper and provides a better understanding. The mix of insights from the biological domain area and the description of the tool and the decisions underlying the tool make it a nice reading experience. Some of the section might be made more succinct such as the requirements which could be a little bit more memorable so its easier to follow them through the paper. Additionally, the two novel visualizations could have taken a little bit more space as I found it to be very short to catch the details and understand it correctly. THe user evaluation start a slow. I would make these cases studies a little bit shorter and add some more analysis of the tool's usage. The web traffic logs analysis make a nice point of showing that the tool is being used not only by the research group it was designed for and therefore fills a gap. However, this could be said in one sentence in the summary to gain some space for describing the two novel plots. I would also suggest to share the interview questions and anonymized responses as this would help other researchers doing work in the same or similar domain. Making this paper great would be to add some details about how these two new visualizations were developed and validated. And adding for example some more details about the process of the collaboration such as how did the authors engage with the biologist, was it workshop based, were the authors embedded in the research groups in some way or another, and how did the authors manage the interests of the different research group. - Wheat case studies (p.9) first word on the page, should it not be ""... through ..."" - Figure is sometimes capitalized sometime not - p5: Dot Plot: 2nd sentence should probably be ""Dot plots .... """"",4,1 graph20_35_3,"""This submission described its visual analytics system for exploring genomic conservation data. I appreciate the detailed description of the proposed system. However, I think the system lacks novelty, and the evaluation was insufficient as evidence for its effectiveness. Meanwhile, I believe the qualitative research part should be a strength of the paper. However, the current description is not enough. One of the most exciting parts of this submission to me is how the authors understand the domain expectations of a visual analytics system. However, the part is only briefly covered. For example, I would like to know how the authors discussed with the domain experts (e.g., interview, workshop, focus group), what data collected in these processes, and how the authors analyzed the collected data (coding, etc.). Qualitative evaluation is considered as a standard way for domain-specific visualization systems. However, the same as the final evaluation interviews as to the requirement analysis, a scientific method to plan the evaluation, collect the data, and analysis are needed. Just mentioning the experts think your system is sound is not enough. Some of the terminologies used by the authors are inconsistent with the visualization community. For example, the parallel plot and the tree plot from the authors are considered as sankey diagram in the visualization community. The dot plot is a scatterplot matrix. The authors also claimed the tree plot (sankey diagram) and the hive plot as the novel, but they are not. The proposed system basically is a multiple coordinated view system with existing visualization views and common interactions. I do not think there is too much novelty in the design, but I also understand this is not the focus of this submission. I have a few comments for the proposed system as well: * The authors barely mentioned the histograms appear in the teaser figure. * The filter can only work on one property, is that sufficient? * From the video, the snapshot is presented as a labeled button, and I suspect anyone can recall useful information from a button like that. A thumbnail image may be more useful. * The figure of the Hive plot is not informative. It is too cluttered, while the one presented in the video seems to be a better example. Overall, I think the authors are contributing to the field of genomic conservation by providing a good visual analytics tool. From the statements from the authors, the tool seems to be quite popular in the field with real users, which is quite amazing. However, I think the current presentation of the work is not ready for publication. I think the authors may reduce some descriptions of the system and add more detail of the qualitative research. I would suggest not accept it at its current stage.""",2,1 graph20_36_1,"""The paper proposes to design natural interaction for secondary tasks where primary task requires bimanual interaction. The goal is to make sure that the task performance of a complex primary task is not impacted when simultaneously executing the associated secondary task. The paper selected interventional radiology to extract actions and abstract them for designing final interactions. The final designs (body movement, facial expressions, head movement, and voice commands) were evaluated by non-experts. The abstracted action and the interactions within the context of interventional radiology form the original contribution of the paper. Overall, the paper is well structured, easy to read, and follow. However, the reasons for abstracting the task and the usefulness of the evaluation outcomes for the abstracted task are not evident. This affects the quality and significance of the work. The reason for abstracting the original complex task is not sufficiently justified. One explanation given is that this allows for evaluating the interactions by people outside the domain and possibly applying the findings in other domains. The final interactions were not evaluated by radiologists. Also, the findings were not applied and verified in any other domains. Therefore, it appears that the paper fails to establish both the validity of the interactions for Radiologists and the generalizability of the findings for other domains. The benefits of the abstraction process need to be clearer. The primary abstracted task was designed with the goal of maintaining a participant's ""concentration"". However, there are other factors that may impact a radiologist's performance: steadyness of the hand and the head or even the head orientation of the Radiologist in relation to the patient. There is no clear indication why only concentration was selected. Were the factors omitted on purpose? The final design may potentially be unsuitable for a radiologist as shaking and nodding their heads could be unsuitable when handling a catheter. This brings me to my next point. Details on the task performance is missing. Without this data, we do not know how many errors occurred in the primary task and when. As a minor note, the choice of using head movement for pointing was not supported by any of the observations of the expert. I suggest three possible ways to strengthen the evaluation of the system. - get experts' feedback on the final design. This will help confirm whether the interaction works. Feedback may also indicate whether the abstraction process needs to be tweaked. - apply insights to a different domain to establish generalizability of the findings. - if the need for abstraction cannot be clearly established, choose a generic concurrent multitasking scenario that truly doesn't require domain knowledge to design & evaluate interaction. """,2,1 graph20_36_2,"""This paper describes a study on multimodal interaction, investigating the effects of a primary task on a secondary (navigation) task. Conditions also look at different modalities for selection zooming, drawn from observations of the target domain based on a medical task. I enjoyed reading this paper, which is quite well written. The motivation and method are clearly explained. In particular, the introduction does a good job of explaining how the goals are approached by including an exemplary task for observation, which is then abstracted for use in a controlled study. The approach seems like a suitable way to address the research questions about the relationship between the primary and secondary task. The discussion of multi-modal interaction is well-grounded in the literature. The study design and analysis are well done. My only comment is that the choices of interaction modalities seem somewhat arbitrary. Although I like the idea of basing these from observations, there is no clear connection made to the facial and head gestures, which seem to have been introduced ad-hoc. There is nothing particularly bad about this either, but there are many more potential options that could then be explored as well, so the choices need to be clearly justified. The authors may also be interested to look at work on proxemic interaction, which introduces similar concepts to the leaning. This and other prior work could provide alternate motivations for choosing the modalities. My only other comment is that is would be good to include additional figures of the task, to better demonstrate the various steps, which are a bit challenging to follow from the text alone.""",4,1 graph20_36_3,"""The paper is not a candidate for acceptance. The first issue is that the paper is not sure whether the work is on solving the specific problem of image navigation and zooming while doing catheter insertion or a generic multitasking scenario. The paper's argumentation is focused on the medical use-case, but then it takes liberties with the inherent assumptions in the medical use-case to argue for a more generic use-case. In the process, the contribution do not make sense either for the specific medical use-case or for the generic multitasking use-case. Looking at the task from the perspective of the medical use-case, the catheter navigation task is a highly specialized one is the forward-backward simplification is not justified. Further, the authors disregard the most prominent alternative - using the foot, with some argument about the foot being extremely busy. That is more true for the modalities the paper explores - facial expressions, head pointing, speech, and full body movement are all things that the doctor would need to do in the OR. The explicit start/stop does not make sense for head gestures, since the doctor may nod for other reasons. Further, these gestures run the risk of affecting the steadiness of the catheter - shaking the head can dangerously affect the catheter positioning. A good, physical foot based system seems like a clear and better option any day. Without the medical usecase, the task specifics do not make sense for generalization to other tasks. Other points - 1. Why wasn't counterbalancing not done in the evaluation? 2. The charts need to show the performance of all four conditions in one graph. """,1,1 graph20_37_1,"""This submission presents a new VR-based interaction technique, warped virtual surfaces. With this technique, the user holds a stylus and moves it along the surface of a tablet, however, instead of the stylus having a 1:1 mapping with its movement along the table surface, the mapping is warped to allow the user to interact with objects that are much further away / in a larger planar interaction volume. This technique was evaluated via a 24-person user study using a Fitts Law reciprocal tapping task. The results demonstrated that regardless of the scale factor, throughput and error rate were consistent between the new technique and the use of a traditional 1:1 mapping technique. This submission is quite well written. The technique is interesting and I can see how it could be useful for some VR-based tasks. It would be useful to talk a bit more what applications the technique may apply to and how the author(s) think that the technique could extend to non-table-based interaction (the tablet was on the table in this study, but this likely won't be the case for many applications in the future). I found the related work section to be through and the included references seem complete. However, the related work would benefit from a summary paragraph (either at the end of each subsection or the end of the entire section) that delineates how the experiment / technique / findings from this work differ from the cited work. In Section 2.2, for example, I was left wondering what was important and relevant about the JND and 2AFC methodologies that were mentioned and how the results from these studies related to the design of this study or the proposed technique (note that these are methodologies so the word methodology should be included in the text and the JND result summary needs to explain what the referent level was). Ditto for Sections 2.1 and 2.3. Section 2.4 also seems unnecessary given that Fitts Law is well known within the HCI literature and I suspect the readership of this paper is already familiar with the derivation of throughput. Section 2.4 can probably be removed. In terms of the presented technique itself and the methodology used in the study, I found the explanations to be clear and easy to follow. The discussion noted that not all participants held the stylus the same way, although they all held it at an angle, and in some instances small movements of the stylus led the cursor to fall outside a target. This comment led me to wonder about the calibration and accuracy of the tablet specifically where the electromagnetic receiver was located in the stylus (i.e., right at the tip of the nib or was it further up in the barrel) and the role that this had on the accuracy of the pointing (i.e., the further away the receiver is from the nib, the more of an influence holding the stylus at an angle will have on the detected tip position, and thus the cursor and warped positions). It would be useful to include a discussion on this and detail how the tablet was calibrated (i.e., with the stylus perpendicular to the tablet or at an angle). One challenge I have with this paper is that there are a lot of metrics that are reported and graphs / figures that are shown without a clear message. Because of this, it is very easy to get bogged down in trying to understand what the results of each metric actually mean and how various results related to each other and what the overall findings of this work are. To make it clearer why each metric is important, it would be a good idea to remove the metric discussion from 4.3.2 Software and create a specific metric section that justifies each metric and highlights the hypotheses of the experiment. Moving this statement from the Discussion up into a metrics section would also help the reader while they parse through all of the findings in Section 5: Similar to other authors [22, 44, 45, 57, 64], we argue that throughput gives a better idea of overall selection performance than either movement time or error rate. This is because the Accuracy adjustment Used to derive throughput incorporates speed and accuracy together, making throughput constant regardless of participant biases towards speed or accuracy. It is thus a better point of comparison to other studies, and more representative of performance than speed or accuracy alone [45]. While the discussion does try to tie the results together, it is very confusing, especially when there are statements that refer back to Figures but the statements dont actually detail what about the figure is important or participant comments are mentioned without the actual quoted text. For example, Also, the tablet placed on the table caused participants to experience some neck fatigue, as indicated in the post-questionnaire results (see Figure 12) and participant comments. This statement doesnt actually refer to what the results were, so I have to go back to the Figure. Because there is no synthesis of the results in the text in Section 5 and no quotes are given, the reader is left to make their own guesses about what the author(s) intend. To fix this, I recommend adding a summary sentence at the end of each subsection in Section 5 to identify what each result means at a high level (e.g., So these throughput results suggest that the WVS technique performs no worse than the use of traditional 1:1 mappings) . Adding in participants comments would be beneficial - especially any comments about the haptic feedback that was given by the hand and stylus on the tablet during interaction (which the paper touts as the motivation for using this technique to begin with). It would also be useful to include more sub-sectioning in Section 6 and moving Section 7.2 into Section 6 so that all the limitations can be discussed together (i.e., paragraph 2 of Section 6 already discusses some limitations). Overall, the bones of this paper seem to be quite good, however, the organization and clarity of the conclusions need quite a bit of work. Fixing the related work, applications, and stylus accuracy is quite easy, but I am on the fence as to how much transformation is needed within Section 5 and 6 (which really form the meat of the paper) to make the paper easier to digest and the findings clearer. Because of this, I am assigning a marginal rating.""",3,1 graph20_37_2,"""#== SUMMARY ==# The authors present a warping technique for tablet-based VR input. With their technique, the CD ratio of pen input changes the further away the user moves the pen from the tablet's center. The main part of the paper is a Fitts' Law test in which participants had to select virtual targets as quickly and accurately as possible. The authors conducted statistical significance tests like ANOVA. In addition, they used non-inferiority tests to find out whether a warped CD ratio can be seen as equal in user performance compared to a constant CD ratio. The authors found that while movement time increases when using a warped CD ratio, the throughput can be seen as statistically equal between conditions. #== REVIEW ==# The presented idea is nice and simple. The authors support it well based on related work and position their work quite clearly. The study is well motivated and executed. The system implementation is well described, which improves reproducability. Unfortunately, it is hard to clearly see what to learn from the paper. While the results provide a good indication that users would be able to effectively interact in such a warped interface, the findings are not overly surprising and the technique itself is not very fleshed out. One problem I see is the lack of a strong use case to support the study design. There are many use cases for general warping and redirecting in VR and the authors described those well. However, as opposed to examples from previous research, the authors did not provide strong and more specific use cases for their particular technique. While warping is a common theme, previous researchers in that area designed their techniques with specific use cases, applications and constraints in mind. For instance, NaviFields by Murillo et al. warp the user's locomotion based on points of interest (i.e., pre-defined regions in the virtual environment where more accuracy is desired). The Go-Go technique has the user as the warping origin to allow fine grained selection and manipulation close to the user. Other examples argue with fatigue (e.g., Feuchtner et al. as cited in the presented work). In the presented work, such an argument or reason for warping in this particular way is lacking. More specifically, why is the center of the tablet chosen as the warping origin? Do envisioned applications require more accuracy in the center than in the periphery, i.e., is the point of interest always in the center? Is that the most ergonomic area of tablet input? The authors mainly argue with enabling interactions beyond the limits of the tablet, but this can be trivially achieved by uniformly scaling the input coordinates, i.e., right now there is no argument supporting why not only more space, but also more accuracy is needed at the origin compared to the periphery. Only then warping is necessary in the first place. The task design in the study is then becoming questionable, because the Fitts' Law targets are arranged in a rotationally symmetric layout around the center, which is also the warping origin. As far as I understand, for each movement, the cursor is slowing down, passing through the origin and then speeding up again. There is no variation of different changes of the CD ratio across selections. While the study is still insightful, I believe that adjusting the task to the technique would have been interesting, even if it would not have been a conventional Fitts' Law task. For instance, placing the targets to random positions while adjusting their target size to the CD ratio at that point would potentially reflect the actual use of the technique better, as the path from one target to the a other would not always pass though the origin and users would need to visually adapt to different changes of the CD ratio (again, such ""use"" of the technique and how this study design would reflect it would then also need to be described). Furthermore, different ""regions of interest"" could be defined to assign different CD ratios to different regions to then, e.g., test moving from one region of interest to another. I do not insist on those particular examples, but I would have liked to see a better connection between the task and the proposed technique - possibly in context. As of now, I am unsure how to apply the findings. In summary, there are some shortcomings. A concrete use case and/or application would have supported the proposed technique and study. However, the work is well focused and executed. A better framing can potentially mitigate the shortcomings. I therefore slightly lean towards acceptance of this work. #== MINOR ISSUES ==# -The figures in the paper and their layout can be improved. For instance: -Figure 12 should be moved to the top of the subsequent page -Figure 1 has a lot of empty space and does not visually summarize the paper. The depiction itself might be useful, but not as a teaser. The easiest fix would be to display the tablet and overlay in a one-column figure instead of in a teaser figure. If an additional teaser figure is used, then the content and caption should describe the paper as a whole. -As described in the review, a strong use case or example applications are lacking. In connection to this, I did not understand why a virtual scene with various assets was created and then described in the Software section within Apparatus (4.3.2). Was that virtual environment shown to users? According to the procedure, they entirely focused on the Fitts' Law task. In addition, was that demo scene interactive or what was the general purpose of it? If this is an envisioned example application with selection and manipulation techniques (with varying levels of accuracy?) then this should be re-contextualized and described in one of the introductory sections. -The related work is thorough with a large amount of references and I only have minor suggestions for improvements. First, I suggest to have sub section 2.4 (Fitts' Law) as a dedicated section or as a sub section in methodology. Second, a small summarizing paragraph at the end of the related work section (which would then be right after 2.3) might be beneficial to emphasize the gap in literature and transition to later sections. -Having the limitations section as the very last sub section seemed a little bit odd to me. I suggest to add a dedicated Limitations and Future Work section before the Conclusion or to remove the sections and incorporate the contents in the discussion.""",3,1 graph20_37_3,"""WWS is a gain-based selection method that allows the use of traditional inputs in VR. In particular, the paper proposes the use of a pen + tablet to select objects in any size of virtual panels. To evaluate this interaction technique the authors ran a traditional Fitts Law study, where they evaluated different scale factors and IDs. Their goal was to see if user performance when using WWS is like 1:1 mapping. I have a couple of problems with the paper, and I will discuss them next: 1) Previous work: The introduction and related work section discuss papers from too many different areas. I recommend the authors to remove the mention of other haptic feedback devices and redirect walking. And to focus less on the theory behind visual illusions and detection thresholds for illusions, as the paper contribution is not in this area. 2) Previous work: I suggest the authors to better explain the differences of other gain-based selection methods and WWS, as here is where the novelty of their interaction technique relies. 3) Study design: There is no discussion about the selected W and A, and if they might cofound the results. See [10.1145/3173574.3173770]. Regardless if this was considered or not, it is important to mention this. 4) Study design: why was the virtual environment populated with space objects? Was there an expectation of this affecting the result? 5) Study design: Pointing performance is affected by the muscles used to reach the target. See [10.1145/238386.238534]. However, in the used tablet, all scale factors use similar muscles. This might be one of the reasons behind the results, but there is no mention of this cofound in the discussion 6) Results: Even if the effect of ID is well known, I think it is important to include the analysis of ID in the paper. Specially to see if there is an interaction between ID and SF. 7) Results: The authors should calculate the movement paths. See [10.1109/MCG.2009.82]. And discuss any difference between SF. This might show interesting results that will make the paper stronger. I think the WWS interaction technique is novel and interesting, but the paper is not ready for publishing. The introduction and related work need work, and there are a some considerations in the study design and analysis that need to be addressed. """,2,1 graph20_38_1,"""The paper describes the technical details of the system well. But, it can be further improved by talking about how the design guidelines and design space are derived and improving its validation framework. This paper presents a design for an augmented reality (AR) based authoring tool for assembly tutorials. It proposes a system which attempts to generate tutorials with mixed media, and also allows the author to generate the tutorial while performing the assembly in-situ requiring minimal to no post-production. Following presenting an overview of the current literature in AR based tutorials and authoring, a set of design guidelines and the relevant design space is described. The use of these elements to guide the design of the system is a good justification made by the authors of the paper. Following which the system design and the tutorial authoring process is described. Finally, the authors discuss some initial user feedback and possible ways in which the system can be extended. While the approach to the problem is interesting, some of the concerns I had with the overall system and the study following are as follows: How the guidelines and the design space are derived is not clear; this isnt discussed in any of the previous work in any form? or is it based on some empirical evidence? How does these guidelines and design space described compare to the design choices made in previous work? How is the design choices for the heads-up-display (HUD) made, are they grounded on any other previous work? and how does it influence the overall experience of authoring a tutorial? The validation of the system is weak. It is unclear as to what is implied here by ""validation"", is it simply assessing that the system works, is it being compared to another system or it being validated using a study? How successfully does the system address the limitations of prior work? how does it compare to the previous systems? What were the approaches taken by authors of the previous systems to validate their system? are any of them applicable here? if not, why? How is the generated tutorial compare to a tutorial generated by other systems or done manually? In the user feedback, the background of the users is not provided. Do they have any experience in authoring tutorials? do they have any related expertise to the assembly process? Did they have anything to compare their experience against? Since colours used in the interface, has the design taken into consideration colour blindness or were the participants tested for colour blindness. Ideally for a system of this nature, it would require two sets of validations, one by the authors of a tutorial, another by end-users who use the tutorial, since there a generated components in the tutorial. Alternatively, if the proposed system is going to have a middle phase to generate the tutorial for the end-user which is not in the scope of this paper, it is not made clear. The paper can benefit if it had a separate section for limitations and future work, rather than bundling it into the discussion. Also the ""Tutorial playback/walk-through"" sub-section seems like a better fit for the section ""The AuthAR System"" as what the output is and how it is generated is also a component of the system. Other general fixes in the paper: On page 5, under the ""Interaction Paradigm"" sub-section, there is a paragraph that is not formatted correctly. The citations are inconsistent in formatting. There are multiple citations that only have the authors and a title, what are they? Also what is the citation number 48?""",3,1 graph20_38_2,"""The paper is well written and easy to follow. The system seems to be well implemented and the video is well made. However, it is not clear, what the research contribution of this work is. In the following, I would like to elaborate on the main issues of this work. -- Technical contribution Even though well implemented, the system is based on simple components and a simple overall architecture. Therefore, there is no contribution from a technical standpoint. -- Design contribution The design of the system and the interactions are straightforward (e.g., based on standard HoloLens interaction techniques). The authors claim the following: ""Our design of the AuthAR system was grounded by our exploration of the assembly task design space"". It is unclear, in which way the authors ""explored"" the assembly task design space. All design considerations are fairly obvious and not based on, e.g., domain experts in assembly lines or a thorough identification of challenges in the authoring process. -- Scope The topic that the authors chose is relevant and the challenges that the authors identified make sense from a generic high-level perspective. However, the authors did not clearly define a scope and the paper and ideas remain on a superficial level. Primarily, the authors do not clearly define a target group. Is the goal to enable tutorial authors without video editing knowledge to create tutorials, i.e., to allow a broader audience to generate tutorials? Or is it about reducing cost? The introduction seems to imply the latter. More importantly, is the system intended to be used for anything from furniture assembly (like, e.g., hinted in the video) to more professional assembly lines (PCB assembly, assembly of machinery etc.)? Even though those are all assembly tasks superficially speaking, their requirements, authors, end-users etc. vastly differ. The authors need to specify, for which scenarios their system meets the requirements. For instance, if a furniture assembly tutorial is authored so that end-customers can be guided through the assembly at home, then this tutorial will be used by thousands or even millions of customers. This means that, making a properly produced tutorial seems a lot more valuable than saving comparatively small costs in the creation process. On the other hand, for professional assembly lines, the tasks might be more specialized, subject to regular change and tutorials will only be used by few employees. In this case, a cost-saving quick creation process of tutorials like proposed by the authors might be very beneficial. But with that, some other, potentially interesting challenges arise. For instance, what if a step needs to be changed? Furthermore, products become increasingly personalized with an increased demand for customization. A system like the one proposed by the authors could potentially allow for authoring dynamic tutorials that play back the correct steps depending on the variant of the product. As of now, the paper entirely lacks reflection and a clear definition of goals. -- Dependency on predefined 3D content The currently supported co-located augmentations are limited to skrews. The authors mention that representations for other parts and tools can be implemented in the future. However, I think the problem is a bit deeper than that. While it is not a requirement for a research prototype to support a lot of different types of content, it is questionable whether the overall principle meets the requirements of low-cost and easy creation of AR tutorials. The reason is that assembly steps often require specific 3D content that represent parts and specific procedures for attaching them. All of those would need to be programmed and then accessed by the author, similar to the skrews and the tracked skrewdriver that served as an example in the paper. Technically, the authoring process already starts with generating and programming such virtual representations. Recording the actual steps in the end might be low-cost, but generating the needed 3D content beforehand and selecting them from a large database of virtual parts and pre-programmed behavior increase the cost heavily. To meet the requirements and claims of the authors, it might therefore be preferable to devise a solution that does not rely on predefined specific 3D content. For this matter (and also more generally), the authors might want to seek inspiration from remote collaboration or telepresence research systems, that do not rely on predefined 3D content, but instead utilize 3D reconstructions. Example: Gao et al. 2016 ""An oriented point-cloud view for MR remote collaboration"". Analogously, an authoring system could use 3D video capture to fully generate co-located instructions. In summary, due to the very basic design, implementation and evaluation I cannot recommend acceptance of this work. The clarity and good presentation increases the value of this work, but due to the lack of a research contribution, I think that publication cannot be warranted at this point. For future iterations, I suggest that the authors define a clear scope and identify challenges a lot more thoroughly. Lastly, here are some minor issues: -The ""design guidelines"" section header should be renamed, since the design decisions are based on observations and not guidelines. Maybe ""design rationale"" ? -The authors occasionally jump to implementation details and terminology (e.g., ""renderer component""). Furthermore, the descriptions are oddly phrased (e.g., using an ""invisible object with the same shape as the physical object""). At the same time, those details are not important for reproducibility and can be removed. -Playback is only described in a sub section towards the end. However, I consider this to be a rather crucial part of the overall system that should be discussed earlier and in conjunction with the authoring process. -The ""Discussion"" section is not structured well. The sub sections (validation, playback, future work) have a very weak connection. One possible fix might be to remove the ""Discussion"" header and paragraph, make the user feedback its own section, move playback to earlier parts of the paper and create a dedicated ""Limitations and Future Work"" section (in connection to this, a discussion about limitations is currently lacking as well).""",2,1 graph20_38_3,"""The paper presents a system for authoring augmented-reality (AR) tutorials for manual construction tasks, such as assembling IKEA furniture. The system combines multiple instruction representations, in particular, video, images and text. It allows tutorial authors to record and then review their physical interactions from different perspectives (first view + third view). The paper does not present any informative user study or formal evaluation but discusses results of a preliminary evaluation with two participants (acting as authors). The submission further includes a video that demonstrates key features of the system. The work is still in progress, and the system definitely requires further design iterations before being usable. On the other hand, building a system that combines tracking from multiple sources (motion tracking system + tablet + HoloLens) is challenging and can be considered as a contribution on its own. The presented scenario provides insights about the technical but also interaction-design challenges for building such systems and their limitations. Therefore, I would recommend (weak) acceptance. That said, I also believe that this work is not complete yet, and the paper can be significantly improved. These are some major limitations of the current work and some suggestions for thought and improvement: - Although I understand the the focus of the paper are the authoring tools, it is hard to think about the design of such a system without carefully considering the quality of the produced tutorials from the perspective of a novice maker who needs to assembly an object. The paper provides very little discussion about the needs of users who follow a tutorial and how the system tries to meet them. - The lack of a formal evaluation is definitely an important weakness of the current work. The informal study indicates several limitations and I wonder how easy it is to use such as a system, in particular when tracking fails or is not well precise. - I am further concerned about the quality of the final tutorials. How easy it is to create a comprehensive tutorial, which is at least better than a simple video showing a person assembling the object? I agree that providing multiple representations is valuable, but at the same time, this complicates the authoring process. Unfortunately, the paper does not provide clear insights about effective techniques of combining different representations. Are there any standards or recommendations about how to create successful tutorials or individual assembly instructions? I encourage the authors to check related documentation or research work and come with some specific (well argued) guidelines. If there is no related work (I am not very familiar with this topic), I think that it is worth running a design study with both experts and novice assemblers. - The paper presents some interesting ideas about how to guide users to choose and drive screws. It is further inspired by some common techniques, as the ones shown in Figure 6, on how to draw a user's attention. I encourage the authors to look into such techniques in more detail and try to identify some more generic principles for their tutorial designs. Similarly, the authors may need to identify concrete sub-tasks in such assembly scenarios and their challenges (e.g., choosing the correct screw, fitting a piece in the correct direction) and then propose alternative techniques to guide them. - The paper claims that generalizing the approach could be extended to any physical task. However, it seems to me that the models of individual tracked pieces is preconfigured, and the current system does not provide any tools for facilitating this process. Furthermore, tracking can become problematic in several scenarios, in particular when parts are visually occluded. I think that the paper would need to provide further evidence to make such claims. - The first sections of the paper are very well written. However, the later ones that describe the actual system are not easy to follow. It would be probably better to start with a walkthrough scenario to explain the use of the system. Then, the paper could summarize key interaction principles, justify their design, and further emphasize their contributions.""",3,1 graph20_39_1,"""Overall, this is a clever and interesting idea which takes the idea of a physical key and applies it to a capacitive touchscreen. The analogy to a key is quite apt - the proposed devices produce inputs that have combinatorially many possibilities, that are physically distinct and need to be physically protected, and whose inputs are hard to forge by hand (without simply designing a replacement key). The idea of printing conductive sheets to interact with a screen is not new (as evidenced by the related work, and also [A] below), but this is a nice physical-key based technique for authentication that could complement existing capacitive-biometric techniques (e.g. [B]) as a primary or secondary authentication factor. I have a few concerns about the paper. The accuracy as demonstrated in the paper seems somewhat low (~80%), which could significantly impair usability, and it's not clear why the accuracy is low (touch sensing on the screen itself is in excess of 99% accurate). Details on the recognition algorithm are not clear; I wonder if the authors are matching on the absolute positions of the touch points, or on their positions relative to other touch events. The keys are large, a bit unwieldy, and seem potentially fragile due to the exposed conductive ink, which might make them hard to carry around; this issue makes them much less attractive than e.g. smartcards or fobs for authentication. Some ideas on how they can be made more robust and easy to use, and not simply ""trashed and reissued"" when they fail (which would require security procedures at many institutions equivalent to reissuing a normal key) would be strongly welcome. Finally, the user has to hold the key to the screen and thereby inject their own touches onto the screen, while also being somewhat unergonomic. Some ideas are given but these seem to assume that users will carry around additional hardware, which might be onerous. In terms of the evaluation, I would have liked to see some more details from the fabrication side of the project - how easy is it to design these keys, how much it costs to print them and how long it takes, etc. There is some anecdotal information in the introduction but no hard numbers. The keyboard is a very interesting and cute idea! I like that it can be used to input a secure password/PIN which users do not have to memorize. This paper is also missing a few references: [A] Yuntao Wang, Jianyu Zhou, Hanchuan Li, Tengxiang Zhang, Minxuan Gao, Zhuolin Cheng, Chun Yu, Shwetak Patel, and Yuanchun Shi. 2019. FlexTouch: Enabling Large-Scale Interaction Sensing Beyond Touchscreens Using Flexible and Conductive Materials. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 3, Article 109 (September 2019), 20 pages. DOI:pseudo-url [B] Anhong Guo, Robert Xiao, and Chris Harrison. 2015. CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens. In Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces (ITS 15). Association for Computing Machinery, New York, NY, USA, 5962. DOI:pseudo-url [A] discusses a method for fabricating flexible films for capacitive interaction in the same fashion as described here, and proposes a number of use cases. [B] describes a technique which similarly uses the touchscreen to authenticate users, using their palmprint. Overall, I think this paper is above the bar for publication to GI, pending a better discussion of the current limitations and a more thorough comparison with related work. """,3,1 graph20_39_2,"""This paper contributes to the development of SheetKey, a tangible electronics-based system used for password/pin authentication. The paper includes a discussion of the implementation of SheetKey and a preliminary evaluation of 13 participants acting as valid users who use sheetkey for password authentication and as attackers who try to guess the password. The paper also discussed several future directions for improving the system. Overall, the paper is well written and easy to read. The development of a relatively accessible solution i.e., printing sheetkeys using a regular printer is promising. The current draft of the paper seems to raise more questions than it answers, which I think is fine for exploratory research, but that said, I found it difficult to fully understand the contributions of this work and below I elaborate on my main concerns. (A) Technical implementation: the paper mentions that the main contribution of this work is the technical development of SheetKey. I think there is some novelty in the use of conductive ink, however, I am not convinced that the silver nano-particle ink is as widely accessible as the paper argues to emphasize the benefit for mass production. Also, I found it difficult to understand what the technical differences are compared to the CipherCard project cited in the paper i.e., what additional features does this technique offer? The paper mentions the use of sheetkeys on alphabetical keyboards but I missed seeing any evaluation for that. (B) Evaluation: the basic usability testing makes sense, however, because the paper claims that one of the novel aspects of this method is that it can be easily reproduced by people, I was hoping to actually see participants go through an end-to-end evaluation i.e., set a password, print the sheetkey, and use it for authentication. Also, the paper indicates that 27% of the attackers were successful in identifying the pattern. This seems like a relatively high number for critical tasks concerned with security, making me wonder what then is the benefit of this approach? (C) User Experiences: a majority of the user experiences related questions were indicated as future work. For me, this is slightly problematic for an HCI publication. While the engineering efforts are extremely important, it is difficult to fully assess the benefits of the work without a sense of how and when would people use such techniques. In summary, I think the work is interesting, but as currently presented it is difficult to fully assess what the contribution of this work is. Just based on reading the paper, I think a short paper or a poster might be a better fit for this work. """,2,1 graph20_39_3,"""The authors present Sheetkey, a sheet with conductive patterns that allow the users to quickly enter complicated touch pattern passwords (<0.3s). I like the idea of this prototype, its simple, straightforward, and powerful. The application is appropriate. Of course, the big downside to this method is the necessity for yet again another thing for users to carry, and that it can easily be stolen. The authors do discuss this in the paper. Introduction lists a number of future work to clarify where the paper stands. This is an interesting writing technique, but it leaves me with more questions than I had before, mainly why werent some of these done here? They seem very basic and critical to evaluating the usability of the method. None the less, the effect is that the papers aim and scope is clear and I very much appreciate that. Similarly, I think that the paper is well organized with regards to its content: there are small studies, but the improvement sections clearly outlines issues and how to fix them, its not left to future work, which strengthens the work. One argument that is weaker is that this sheet is easily produced with a consumer printer and ink the ink is specialized, so its not really something easy to produce. However, I understand that it might be in comparison to complex security key systems. Study design: - Were all 30 tasks for each level of difficulty different, or were there repetitions? - Was the randomization done per block (easy-difficult or difficulty-easy) or were the easy and hard tasks mixed? A few more details would enhance the paper: - Add the width of the disk in preliminary study 1 - Can you have an overlapping figure with the sheet and the phone, as to compare the sizes? I understand why the sheetkey must be larger but not why it must be taller than the screen. - Please add details regarding compensation of participants and whether this study received institutional ethics approval. - interviewed their impressions -> interviewed THEM ABOUT their impressions Overall, I recommend the acceptance of this work, its interesting and appropriate. """,4,1 graph20_40_1,"""I have reviewed this paper earlier as a SIGGRAPH submission. The paper presents a novel method based on deep learning to convert a rough stick figure, with joints drawn in a fixed pre-determined order, into a 3D human character in the intended shape. As a representation for the 3D shape, the authors chose a point cloud. The model has been trained on a tiny dataset (SCAPE), augmented with rotations. The architecture of the network is based on an autoencoder: a variational autoencoder learns the latent representation of the 3D point clouds for various poses, and a regular encoder learns a mapping from the 2D stick figure into the latent space. The paper is validated in various ways: doing the sketch-based posing of the character, pose interpolation, etc. Since the SIGGRAPH submission, I was glad to see that the most important non-technical bits were done, such as missing references etc. Overall, the paper is in a good shape and easy to read. The results are not stellar, but certainly a worthy investigation. The model clearly has issues learning the latent space, because the dataset is so small, but it's still an interesting approach. Moreover, I think this area of sketch-based posing is largely underdeveloped, so I welcome contributions like this. My only comment is really the lack of details in 4.4: how the models are visualized. Since the network outputs a point cloud, the authors combine it with the meshes from SCAPE, but I would like more details on how. Also, L797: ""fat"" -> ""obese"". I would also rethink the intro sentence ""Despite the increasing number of talented artists"" - not sure if that's what the authors intend to say really. Other than that, I think the paper is ready for GI. """,3,1 graph20_40_2,"""The paper presents a system to obtain posed meshes of a human, based on input sketches of a stick figure. The method employs a neural network (VAE) to learn a latent space relating stick figures and meshes. The latent space can also be used to interpolate between poses. Since I am not an expert on neural networks, I can not judge the network architecture, however, it seems to follow well-established design principles. The paper is well written and good to follow. There are only a few grammar issues that can easily be resolved. The figures are a little hard to see. Maybe less intermediate steps (e.g. Figure 8) but larger images would be a good idea to make the differences between results more apparent. The method relies on the SCAPE dataset and is limited to a specific character at a time given at different poses. The posed models have to be meshed consistently and specific landmark points defining a skeleton have been identified beforehand. I argue that this is enough information to easily find blend skinning weights and pose the character directly. In other words, the main problem has already been solved in this dataset. Of course one can argue that a stick figure is easier to draw and that automatic skinning has to be performed, however, compared to the 2 days of network training for just one specific model, this seems to be the better option. Moreover, the method shows severe artifacts, e.g. the head in Fig. 5, second row. Furthermore, if skinning weights are available, the matching could also be performed between 2d and 3d stick figures which appears to be much simpler. That said, while quite limited, the method is still an innovative approach to a real problem and the paper could stimulate more research in this direction. Therefore I wouldn't argue against accepting the paper. """,3,1 graph20_40_3,"""This paper proposes a method to convert single-view 2D skeleton sketches to 3D human body poses based on VAE network architecture. The interface is easy to use. The paper was generally well-written and clear. But in my opinion, the results are not of high quality and the comparisons are not convincing enough. I intend to weakly reject the paper. Some comments - The input sketch/skeleton should be exactly the same when compared to [Kanazawa et al. 2018] in Section 5.2. - The output of the sketch-based face modeling system [Han et al. 2017] is with fine details. So the production time is not comparable to this paper. And it is noticeable that the model inference based on their deep regression network takes 50ms, which is much faster than 1 second in this paper. - The training dataset is quite small (only 72 poses) and the network produces incompatible 3D models with sketch inputs that are not closely represented in the dataset. And the interface provides no additional editing function to achieve the desired model when the direct output is not satisfactory. The application of this paper is relatively limited. """,2,1 graph20_41_1,"""This paper reports on the results of an experiment that evaluated the influence of two layout techniques on retrieval and revisitation activities when working with large visual datasets. The outcome of the study was that the spatially-based interface lead to faster revisitation activity and the use of fewer filters compared to a paginated interface. The study also found that the spatial interface also led participants to be slower when initially attempting to find content in the dataset. I thought that this was an interesting paper to read. I found the focus on a more classical HCI problem to be refreshing and well motivated. The experimental methodology was also quite straightforward (i.e., a clear and simple design was used) and aside from issues with the focus of the metrics (mentioned below), the paper sets up a nice comparison between the two interfaces. There are some concerns that I have with the paper, however. First, the contribution of this research is the finding that participants used fewer filters and were faster as use with an interface increased. However, the paper presents three research questions and at least seven different metrics to these questions to arrive at the above conclusion. Because the research questions are not referred to again in the latter half of the paper, there is no justification for why these research questions or this number of metrics were used, some of the findings were not significant or only significant for the Block factor or were significant but had a small effect size, and there were no corrections applied to account for the number of metrics that were evaluated, it is difficult to understand if the research questions actually informed the comparison and subsequently why 10 pages are necessary to report on these results. There needs to be a clear focus and mapping between research questions and metrics so that the reader can find the contribution within the presented data and not get weighed down in data that ultimately doesnt add to the overall story of the paper. In addition to the focus challenges, I also found that the paper was too long and could benefit from a condensing pass to make it better fit the depth of the contribution (which I see as about 6 or 7 pages excluding references). In the introduction, for example, many parts can be trimmed or removed without losing clarity (e.g., the second paragraph, the explanation of the experiment, and all of the Figures). The findings are actually best summarized in the single sentence in the conclusion so the contribution statement in the Introduction can probably be condensed into one sentence as well. Other sections have a lot of repetition (e.g., Design first paragraph) and circular self-referencing (e.g., referring forward to a forthcoming section or backwards to a previous section). Given the main finding of the paper, the discussion and limitations are too long (close to 2 pages) and I dont see the relevance of devoting a whole page to the Generalizing the Findings to Real-World Systems section. Lastly, there are a number of claims, decisions, and terms made in the paper that are unclear. For example, why were 700 items chosen for each dataset and how representative is this of large datasets (a reference would be beneficial), how was 19 mm x 12 mm close to the limit of what would be acceptable to users, how participants using Paged were more reliant on filters overall given that this did not occur on Block 1 as per Figure 7, what are location memory and lowlighting, and where does Figure 1 come from? While the experiment is well explained, it also seems that a time limit was imposed on trials after the experiment was completed. It is unclear how this time limit influenced the retrieval time metric given that means and standard errors are given (i.e., were these trials removed from the dataset or were any values over 90 seconds scaled down to 90 seconds). Although I appreciate the overall goal of this paper, the paper is too long given the contribution and the important results are hidden amongst too many unnecessary metrics, exposition, and discussion. If others are strongly in favor of accepting this paper then would recommend that it be shepherded (if possible) so that it can be condensed and better focused. Other: - Figure 3 refers to in this trial but the figure and caption do not refer to trials but rather an interface - Page 4: little is known about how the approach works with large item sets spatial memory is not an approach - Page 5: previous work suggests that the spatial interface will lead to faster retrieval . citation needed - The appropriate scientific symbols should be used for mean and std - It is common to first present all of the statistical results and then provide some analysis of them in the text rather than present the main effects and interactions, synthesis, and then the post-hoc analysis. Dividing the main effects and interactions from the post-hoc tests makes it difficult for the reader to understand the synthesis given that in many cases the post-hoc tests explain the interactions and thus synthesis that is presented. - Figure 12 is missing an error measure - There is quite a bit of use of bulleted lists in the text. These should be converted into sentences to allow for easier readability and better organization of the themes / findings (Page 2, 5, 8, and 10). - An inconsistent citation format is used in the reference list""",3,1 graph20_41_2,"""The paper ""Testing the Limits of the Spatial Approach"" presents a study comparing a spatial technique for finding and revisiting objects in a visual content collection with a standard paginated layout common on the web. The tasks in the study involve searching in 700 graphics cards and computer cases, filtering on various attributes such as memory size, etc. Results show that the spatial layout performs better for refinding items and filtering was more reliant, but the traditional paginated layout outperforms on the initial search. The paper presents a well designed and well-executed study with a good and nuanced analysis of the results. It is clear that the spatially-stable technique has potential, but that it is not purely black-and-white, which the authors carefully detail. The paper does not present groundbreaking research, but it is a solid piece of work providing evidence that it is worthwhile exploring the use of spatially-stable organization techniques for certain search and filtering tasks. While the paper is generally well-written I find it to be too long. The introduction and related work could (and should) be shortened to around two pages. I also found the discussion overly verbose and I would suggest shortening it as well to make the paper more on point. I missed a better description of the tasks, it was not clear to me what the participants were supposed to find, and why and under what conditions they needed to refind items. Also, do the tasks reflect typically search and filter tasks (e.g., on the web)? Overall, I find the results of the paper to be interesting, and the study well designed and executed. I missed a bit of clarity around the tasks, and I would strongly recommend the authors to shorten the paper significantly.""",3,1 graph20_41_3,"""This work investigates the effectiveness of a spatially-stable layout for finding items within a large item set (700). The authors conduct a within-subject study (n=20) to compare paged and spatial layout performance with two datasets. The results show that while spatially-stable layouts are faster for revisitation (from spatial cognition), they were less accurate. The overall performance times were similar across both types of interfaces. While the contribution of this work is somewhat limited, the main finding that end-users are able to leverage spatial memory for larger datasets both supports and extends prior work (e.g., [9] evaluates thumbnails of text documents with 300 pages). The specific findings presented in the paper (interface, navigation, finding items for the first time) offer useful insights for future research on spatial-layouts. Pros: The paper is well written and the main arguments are easy to follow Related work is quite comprehensive, but can be synthesized better with a stronger connection to this work. The study design is sound and follows the protocol from prior research (repeated-measure factorial design) The analysis is also clear and the authors do a good job of summarizing the findings. The discussion does a good job of expanding on the findings, and the generalization section offers directions for future research. (Although more discussion about (1) the advantages and tradeoffs between spatial memory and remembering ""attributes"" about items, and (2) time vs. accuracy tradeoffs, and (3) dataset characteristics would have been helpful.) Comments and clarifications: I had a set of comments that can potentially be addressed in the final version of the paper: Besides accounting for popout effects, were there other reasons for selecting the datasets used in the study? Did participants have familiarity/ domain understanding of the datasets? Would familiarity change the current findings? I was a bit unclear about how were the blocks*targets presented, in what order..block1-target1, block1-target 2... block8-target 1... or was it random? What was the spatial distribution of the targets selected in the study? How far apart were they? The text description labels the cues as full, partial, and minimal which is inconsistent with the axis labels in the figures: All, medium, least. There is no discussion about visual cues on recently selected items (blue outline) and recency list on participants' performance. Same for the use of popup animation to see a larger image in the spatial view. """,3,1 graph20_42_1,"""This paper is a novel visualization of parallel coordinate plots by condensing lines/flows when there exists a cluster or trend in a subspace of the data appearing in 2D scatter plots. To make this visualization possible, some extra axes are duplicated when a clusterpresents in the data. The paper is very well-written, visualizations are good looking and the method seems mathematically robust. However, I have serious problems with the usefulness of the approach in practice. In this novel representation, as shown in Fig 2, values of data points are omitted. One of the good properties of parallel coordinate plots is that following the chart, we can understand the value of each data point and also its scale. However, this is not possible in the proposed visualization. Inparallel coordinate plots, a line takes the data from one dimension to another. In this new approach, the line is replaced by a Hermite curve trying to respect the angles that the line makes withthe vertical axes. However, I find it a bit confusing specially because Hermite curves sometimes produce peaks in the curve that do not help the clarity of visualization. It also took me some time to understand the role of the extra stacked axes in these plots.The same is true for the fuzzy color coding. I cannot make sure that these replacements can actually increase the readability of the plots. Considering the above issues, I think that it is necessary to perform thorough user studies and quantitative studies that why these design decisions help the readability of parallel coordinate plots. Although performing such studies is left for the future work by the authors, I think the usefulness of this new visualization is under question without having such studies. The examples that are provided in the paper along with their interpretations and comparisons are interesting but I am not sure that an external user or observer can understand the data as nicely and interpret them similarly. Since the paper is well written and it has some merits, I am okay with publishing the paper if other reviewers find it a good fit. I believe the paper is on the border-line. Minor issue: The paper would be more clear if the method was discussed using a simple numerical data instead of only visual data (Fig 5). """,3,1 graph20_42_2,"""The paper proposes a new visualization scheme that combines the properties of scatterplots and parallel coordinates plots (PCPs): the Cluster-Flow Parallel Coordinates Plot (CF-PCP). The visualization represents clusters of data points in multivariate data by duplicating axes from the canonical PCP visualization to represent 2D subspaces of the multivariate data. This approach preserves the readability of correlational patterns from the original PCP while making cluster assignments more obvious than alternatives relying on edge bundling and on just the use of line color. The implementation of the proposed visualization requires tackling several interesting aspects including a scheme to connect lines between duplicated axes by drawing Hermite spline segments that preserve the line slopes at the axes and a layout optimization based on an A* algorithm to compute the shortest path ordering of duplicated axes. The results are demonstrated on several example datasets and contrasted against visualizations using traditional PCP and scatterplots. This is a nice paper that I believe proposes and novel and useful visualization scheme. However, there is one key weakness which prevents me from being more positive with respect to acceptance: an evaluation of the proposed visualization in practical use through a user study is absent. The benefits of the visualization are only demonstrated through qualitative results. The paper would have been significantly stronger if the expected benefits were measured in a practical scenario.""",3,1 graph20_42_3,"""The authors present a cluster-centric visualization of high-dimensional data using parallel coordinates. It duplicates axes for each data to show data flow between subspaces. The authors also present a description of a layout algorithm. Uses cases of visualization are provided. The paper claims that the technique is an improvement over traditional approaches but without a user study this is difficult to know. I would have rated the paper higher if there was at least a preliminary evaluation. So this is a borderline paper. On the positive side, the presentation of work is of high quality. """,3,1 graph20_43_1,"""This work reports on a qualitative investigation of how visualization can support patients to share personal health data with health care providers. It entails a formative study with healthcare providers, interviews and prototype design of health data visualization for 8 patients, and follow-up interviews with physicians. The work concludes with a set of design guidelines that summarize findings from all three studies. I found this a very interesting read. The detailed discussion of individual patient stories is very illustrative of the design challenges around this critically relevant domain. The study methodology is thorough, with iterations that consistently build on previous findings, and covers perspectives on both the patient and physician sides. This methodology could inspire similar studies in patient/doctor communication. The paper is generally well written, with a solid motivation and grounding in related work. It also provides a clear outline of the design guidelines; while most of them are derived from the formative study, they are fairly actionable, and contextually validated by the deep dive on patient stories. The work also has a number of issues, although they can be largely addressed with further clarifications and better restructuring. The most critical one is that while the paper is framed as covering BOTH patient and healthcare provider's perspectives, there is a disproportionate focus on the healthcare provider side. The designs were not discussed with patients, not their perspectives on the design contrasted with physician's feedback. Patients and physicians often have competing objectives, and this work stopped short of informing those. While I find there's sufficient contribution to the paper in its current form, the focus on the healthcare provider side should be much more clearly communicated from the Introduction onwards. Second, I missed details on the overall design rationale behind the choices of visualization design beyond a general approach of ""taking user's data collection & sharing needs into account"". There were some cases that could arguably be supported by the same visualization type/style, and yet, design were really diverse and it wasn't clear to what extent they *had* to be. Was that a deliberate choice? How did you narrow down what particular use cases to address for each patient? How many designers were involved? What made you discard design alternatives? Were you explicitly seeking variety in the designs to enrich discussion with physicians? A biased focus on design diversity during the design process weakens the paper's criticism against ""one-size-fits all"" approaches, specially given the lack of validation with patients. Finally, this work has fairly limited generalizability. While (a) this is less of a concern with qualitative studies and (b) recruitment in the medical space is famously challenging, a sample of 3 physicians is a bit too minimal to provide authoritative design guidance. This is evident enough with the lack of consensus on a number of topics in the interview discussions (e.g. whether or not allow for correlations to be drawn for patient views), and the limited impact of physician interviews on the design guidelines. I appreciate the statement that ""these guidelines are not exhaustive"", but again, I missed a more candid discussion of such challenges in a Limitations section. In summary, this is an interesting read, albeit with some issues and limited generalizability of which many can be addressed with restructuring and clarifications. And while findings are not exactly new or ground breaking, the contextual richness provides a deeper level of understanding on these well known issues, and may serve as a good reference for those seeking concrete examples of patient cases. ***** Other Improvements - *Significantly* improve resolution and usability of Figure 1. I would strongly recommend to bring each individual patient column closer to their respective visualization designs subsection, ideally on the same page. - I missed more concrete details on the coding strategy. E.g., did you use open + selective coding, how many independent coders, etc. - It seems likely but unclear that C1 and C2 from interviews were the same as two physicians in the as formative study. Clarify. - It is unclear who the third ""healthcare provider with 9y experience that supports patients monitoring data"" is. A physician, a nurse, a nutritionist, health counsellor? Clarify role. Also important to note that these healthcare providers may have different objectives compared to physicians when it comes to leveraging patient data, which might be worth discussing. - Fix typos: (-remove) (+add) -- Page 2: ""In (-the) this section..."" -- Page 3: ""...we interviewed eight patients (+who?) actively collect health data..."" ""...we use (-the) grounded theory [46] to (-analysis)(+analyse) the data"" -- Page 7: ""...in form of seven rings (days) (Fig. 1 - column P#6 - (-first) (+second) row)"" """,3,1 graph20_43_2,"""The paper explores the possibilities of reviewing and visualising patient-generated data from a range of stakeholders consisting mainly of healthcare providers and patients. The authors utilise a range of methods in order to better understand the attitude and perspective of both participants to provide relevant and appropriate design insights for developing tools to support the visualisation of data collected during a clinical visit. First, the authors attempted to identify a gap in the literature concerning how visualisation designs can support the review and analysis of user-generated data. What is missing is a clear articulation of the research problem and question within the literature provided. It read like some form of a haphazard account of few studies that point to the relevance of tracking and visualising patient data in order to inform better health decisions, and ultimately a better lifestyle. Although sections 2 attempts to situate the research question into the context of varied perspectives, a better justification of the stake for the field would have been made clearer had it being the section doesn't read as if its an analysis of prior data, and not of related works. Second, I particularly appreciate the authors' use of different methods (focus group, interviews, and observation) but fail to see an understanding of the needed sensitivity towards participants with some form of a chronic condition. Its well known that 'chronic conditions might take a different form and thus interpreted within a particular context; this makes the contribution of the paper marginal, as one would expect a clear articulation of how the method is chosen to fit into the context of the wider literature on similar issues and ultimately the nature of the study participants. We need more detail to determine whether what the data suggest reflect the subjective perspective of the different users that participated in the study. Thirdly, from the discussion of the findings, quotes appear unpacked. How representative is it, whats the bigger picture, can it be generalised to other not known scenarios? The analysis of the patient's interview provided a bigger picture of the different perspectives, and which makes the different factors more relational and understandable. Overall, the analysis lacks clarity, rigour and situated in the literature. With a few grammatical typos, it reads as a thread of different perspective, with little grounding in HCI and related field. Lastly, in HCI, there is a movement towards ideas about participatory design, user-centred design, value-sensitive design and so on. A utilisation of these perspectives in framing the research ideas would have done more good to the paper than proposing a new design space for visualisation of user-generated data. From the guidelines outlined in section 9, it is hard to pinpoint new learning that the paper provides to the visualisation of subsequent design practices apart from restating well-known design insights. There is the question of how the data and the proposed guidelines might bring about some implications for design (Dourish, 2006) and practice. Although the issues of implication for design has been misunderstood and widely misrepresented, what the proposed design guideline sought to point to might be regarded as some form of outlining implications for a design practice that is minimal and non-representative. This makes the paper weak, lacking impactful significance, and thus leaning would not argue strongly towards acceptance. I would encourage the authors to situate the research questions into the broader literature and determine whether they fit into some of the well-established methods informing the designing of health-related technologies. """,3,1 graph20_43_3,"""This paper describes the exploration of designing data visualizations of daily medical records by patients, and what kinds of visualizations may assist providers in best keeping track with an patients medical status. The authors perform three phases: An interview with providers to assess their needs, sessions with patients to gather their unique medical history and develop several visualizations for their data, and going back to providers with these visualizations to gather their ideas of how well these visualizations would assist them. The authors then suggest some design guidelines at the end for developing usable patient data visualizations. I enjoyed the paper. It is a qualitatively-driven paper, but I believe it provides much insight into what providers would like in patient visualizes, and takes into account how patients already record their information. The writing is clear and the paper is easy to read. There are a few comments I have about the paper that I describe below. The description of each patient drags on a little long, and much of it does not become useful after in the later sections, since particular medical history is not referenced in later sections. While identifying the uniqueness of each patients medical conditions and how/why they record information is important, I think this could be greatly shortened to the most pertinent points to demonstrate the differences. I would have also liked to see some of the images of the visualizations for myself. Another concern I have is about the disparity between the emphasis on how each patients medical history (and in turn, visualization) is unique, and then the proposal of general design guidelines for creating patient visualizations. It seemed that the initial statement was that general guidelines were not useful because of the uniqueness at each patient. I would have liked a little more discussion on the limitations of the authors proposed guidelines at the end and how did or did not mitigate this issue. I think these changes/clarifications can be made easily, and therefore I would argue for the acceptance of this paper pending these changes. """,3,1 graph20_44_1,"""The paper could use a more extensive engagement with prior research on non-visual authentication mechanisms for accessibility, e.g. Azenkot (cited, but not extensively discussed beyond simply a passing mention), or following surveys of similar literature (e.g. Helkala, K., 2012. Disabilities and authentication methods: Usability and security), or drawing from other research on alternative models of authentication (e.g. Aly, Y., 2016. Spin-lock gesture authentication for mobile devices), or in general from work on security/authentication in the context of accessibility (e.g. work by Shirali-Shahreza on accessible CAPTCHA). In particular I would have liked to see closely-related work such as that by Azenkot being engaged with more critically and more deeply, including in terms of anchoring the finding of the study in this (and other related) research. This is not present in the paper to the extend it could be to enhance the value of the contribution. The motivation for the actual solution is very limited, and not grounded in prior work. This is not to say that this is not a clever design idea, but rather that the way the design is justified is not sufficient. The evaluation study is rather limited, and does not compare this with other prior solutions for accessible authentication (for example, the PassChords system by Azenkot). To a minimum, I expect some very detailed / introspective analysis of the proposed system (and the results of the evaluation study) in comparison to such prior work. Similar to the research done by Azenkot, why were no other baselines considered? It seems that the baseline used had some accessibility features (the use of VoiceOver) but it's not clear how was this combined with the external pad? Why was a baseline fully contained within the phone that offered accessible authentication not considered? (Azenkot used the iPhone's built-in Passcode Lock with VoiceOver). I suggest that the authors clarify this part of the paper. The participants' details are also lacking some crucial data. For example, how many are familiar with alternative authentication methods? How many use for example a gesture-based physical lock or other form of tactile lock in real life? What was their overall phone use, and mobile proficiency in general? There are no details presented related any ethical considerations about this study. (and I consider that simply stating that the study was approved by an ethics office is not sufficient) The interview data was analyzed through grounded theory -- more details are needed about how this was carried out (how many independent coders, how were the codes reconciled, how many codes, how were these grouped in themes, what was the resulting thematic map?) I am actually surprised that the authors deemed grounded theory to be necessary vs simply complementing their quantitative findings with quotes from participants. The discussion should also include some consideration of the level of security afforded by the proposed method. While a full threat analysis model may be too much for a paper focused on usability, some discussion is still warranted -- after all, if the proposed method is significantly weaker security-wise, then there is not much point in proposing it as an alternative to current mechanisms. While the statistical analysis seems correct, I would suggest some limitations are mentioned, as 18 participants is on the low end for such parametric tests to be meaningful. The reporting of statistical results does not include details provided about any corrections or any other distribution-based tests that may have been employed, and a justification for the selection of the analysis method. Finally, there are no hypotheses being provided, nor any justification for why this was the case. """,2,1 graph20_44_2,"""The submission presents evaluation of BendyPass, a prototype based on Bend Passwords design [33], with visually impaired people. The prototype is a simplified version of Bend Passwords [33] geared towards users who are visually impaired. The evaluation consisted of two sessions (taking place one week apart) in which participants first created their passwords and then used them to sign in. The experiment compared BendyPass with standard PIN security feature on touchscreen devices. The results show that although it took longer for participants to create their passwords with BendyPass, they were able to recall and enter them quicker with BendyPass than with PIN. This submission contributes new knowledge about how users who are visually impaired can enter passwords. The main strength of the paper is the experimental user study design with users who are visually impaired. It is particularly important to evaluate technology with target stakeholders. The paper is well written: the work is motivated well, the related work is mostly comprehensive, and the design and evaluation sections are clear and have enough detail for others to attempt to reproduce/replicate the study. However, there are two main weaknesses: 1) the submission narrowly focuses on bend passwords, and 2) the evaluation compares BendyPass against only one baseline. The paper never justifies why Bend Passwords [33] is the best design to adapt for users who are visually impaired. There are many other potential designs out there and the paper does not fully explore the potential design space before picking Bend Passwords [33]. For example, an equally feasible alternative is a design that uses a small physical numerical keyboard that users can carry with them and enter passwords even from their pockets (the haptic feedback that such a keyboard would enable would allow such interaction). Such alternative design is similar to BendyPass along many dimensions (e.g., users need to carry an additional device, but offers a more familiar interface). Other designs exist (e.g., work by Das et al. (2017) is just one example. Thus, the paper should better position the proposed design/prototype within this design space. This brings up another issue: the PIN baseline is the current de facto standard, but other baselines (e.g., physical PIN from the previous paragraph) would position the work better and help justify use of BendyPass very different and unfamiliar interaction modality. Also, entering PIN on touchscreen devices is notoriously difficult for people who are visually impaired, so it is no wonder that BendyPass outperforms it. Thus, ideally the evaluation would compare other ways that participants can enter PIN passwords. In summary, this is an interesting paper that will contribute to the GI community. Thus, I look forward to seeing this paper as part of the program. REFERENCES Sauvik Das, Gierad Laput, Chris Harrison, and Jason I. Hong. 2017. Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI 17). Association for Computing Machinery, New York, NY, USA, 37643774. DOI:pseudo-url """,3,1 graph20_44_3,"""This is an interesting and well-written paper which proposes a hardware device for entering tactile passwords (""Bendypass""). The target audience is blind or vision-impaired individuals that might find entering a traditional PIN or password too visible (and thus less secure) on their screens. I liked the physical design of the device, and appreciated the short exploration into the device's design process. I found the Bendypass to be a clever and relatively simple idea built on top of this platform. Users found value in it, and it by and large performed about on par with traditional PINs. One concern I had was with the analysis of the login time. The paper notes that ""we selected the fasted login time"", and yet the average (fastest) PIN time is 18.27s. This seems very high - if these were all 6-character passwords, that's an average of 3s per key. I would encourage the authors to double check this result, and if it holds, to explain if any outliers significantly affected the data. It seems cumbersome to have to carry around a separate device just to unlock your phone. This could be made palatable if there were more ways to use the device, or if the functionality could be baked into the phone. Either way, the authors could expand on the practical considerations a bit more. Overall, nifty idea, good prototype and good evaluation with the target population. I am therefore in favour of accepting this work.""",3,1 graph20_45_1,"""This paper reports empirical results of a study investigating how viewable forward distance affects steering error and speed in a steering tasks where a corner turn is present. The paper is articulated well and the study and analysis are sound. Overall I think it makes a small but original and useful contribution in terms of modeling performances for steering tasks. I do however think the introduction is a bit over-promising without adequate backings. For example, it talks about possible utility in lasso selection and VR environments, but the study in the paper used mouse only and the steering task was a 2D movement with just one turn. Nevertheless if the authors could elaborate more on its applications, or provide a more reasonable scope, I'm inclined in accepting this paper. Pros: -The explanation of models and the study is well-written. The inclusion of apparatus choice and system setting is useful for replication. -The authors did a good job in discussing other experimental design choices, which I think warrant further investigation like left-hand masks and for people who use a different hand. -It is good for the authors to state and discuss inconsistent results with prior work. Cons: -There are several parts in the paper that require clarifications: --In related work the authors claimed that ""If Bateman et al. had tested longer S values like 100 m, S might have significantly affected the results"", but offered no explanation. Please back this claim. --It seems to me that the authors used two different models to estimate MT for the first and second path segments. If that's the case, why? Could one use the same model (eq. 10) for the second path as well? Please elaborate the rationale behind, or make it clear which model was used. --Choice of some experimental parameters were not explained well. Base on what did the authors ""set a narrow W2 requiring careful movements to safely turn at the corner""? What makes it narrow enough? Why did the corner always turn downwards in the study? --Some data were left unexplained. Why the slightly out-of-order jump of error in S for 4th and 6th widths? I'd have expected the wider S gets the less error one makes so the trend will be monotonous. It is also strange to have some of the results (error rate analysis) attached as supplementary materials. --In the discussion of limitation the authors said they tested only no-PK conditions, but during the model development they said the participants had the previous knowledge of W2 being fixed. It appears that here the authors were refereing to A instead W2. I'd suggest clarifying what was known and what was not known by the participants. -Lastly, I think the driving simulators and racing games examples provided in the future work aren't appropriate. This is because braking (pressing/releasing a button) is different from slowing down wrist movements, and the way the player avoids running to the sides is different from the pointer hitting the walls. I think lassoing would be a better example. There is nothing wrong with limiting the scope to what the study actually reveals. Minor points: -missing example in a few places such as abstract and related work (do a search for ""(e.g.)""). -it would be easier for readers to understand what ""viewable forward distance"" is if Figure 1a is annotated. -Equation 14 in the sentence ""Equation 14 can be simplified further"" should be Equation 13? -I'm not certain what the authors mean by ""human online response skills"" when discussing future work. Please explain. """,3,1 graph20_45_2,"""This paper proposes a steering speed model that takes path visibility into account. It provides a theoretical discussion to justify specific aspects of the model, and reports a controlled study in which various combinations of parameters were tested as an extension of the steering law. The results can inform the design of steering tasks in which parts of the path is occluded, and overshoot and clutching are forbidden. Overall I am in favor of accepting this paper, which I try to explain below # Strengths of the paper The paper is well written and addresses a specific but interesting aspect of steering models. The ""hand occlusion when tracing"" example that is shown on figure 1 is a nice example use-case. The related work section clearly describe existing steering law models, as well as discussing research on other yet related research area (and typically peephole pointing or scrolling models), making it interesting and rather complete regarding the scope of the problem explored in the paper. The theoretical discussion of models that can play a role in speed limitations for steering performances with restricted view is very clear, easy to follow, and it is easy to understand why the proposed model might be adapted. The experiment reported in the paper, while testing a limited number of parameters and conditions, is well described and sounds relevant. Finally, the discussion section provides a short but neat discussion of the results, and I especially appreciate that the authors discuss few of the their results who were inconsistent with the literature. The final model, while useful for a very niche task (see below), is clearly good. # Weaknesses and limitations While sound and interesting, the proposed model remains focused on a very specific use case that is, in my humble opinion, not so frequent in everyday interaction with computing systems (or at least, not in the way it was tested in this submission). As mentioned above, I especially liked the ""hand occlusion when tracing"" use case example, but I would not necessarily consider all other examples (namely, 3D drawing or steering with head-mounted displays or racing games/driving simulators) provided as relevant in this context. Typically, in the context of HMDs, users often have distinct and integrated control of viewport and pointing device through two dedicated devices. Regarding racing/driving apps, they tend to rely on rate control rather than position control. In addition, when reading the introduction of the paper I was always thinking of when I import a photo in illustrator, zoom on it, and then draw contours on top of it using my mouse which ""autoscrolls"" the viewport so I can move all around the figure I am contouring. All these scenarios, while similar, differ from the experimental setup in term of how users control scrolling and cursor position (when relevant). As such, the model may or may not apply. I would recommend to remove these examples from the paper and would focus the examples on lassoing and hand occlusion when tracing uses cases, which would avoid potential overclaims. Another weakness in my opinion lies in how experimental results are reported. While clear and easy to follow, authors on several occasions provide null hypothesis testing result, reporting significant differences, but without providing post-hoc tests (e.g ErrSteer results). Such post-hoc tests, while not the core of the paper, would still be interesting. In its current form, the results are not complete and significant differences between conditions must be inferred from figures. That being said, while of limited contribution, the contribution remains significant and well explained which is why I remain in favor of accepting this paper.""",3,1 graph20_45_3,"""This paper proposes a statistical model for steering control. Understanding user behavior in a restricted visibility setting is definitely a topic of interest in the HCI community. While I am excited about this paper, I would like to raise a few points for the authors to further clarify to strengthen the presentation and the readability of the paper. I hope the authors find them useful. Theoretically, this is an interesting cognition scenario. While we are known to employ intermittent control, we cannot plan as much as we like when view is limited. However, in practice, even with more devices with different screen estates make the scenario more common than before, it would help the readers if the authors provided more examples of when occluded steering scenarios occur. The experiments are solid. Although somewhat limited in task setup - like fixing segment widths. However, it would help the readers if some of the choices were elaborated more, for example, what makes a path ""narrow""? It would also help the readers if the results were unpacked more and contextualized. For example, I am still having difficulty understanding what it means to observe the interaction of S W1 being statistically significant. Lastly, I would like the authors to draw more implications for designers. For example, what do the results of this study provide for building computational models of steering with prediction capabilities? What are the takeaways for building a simulator? The paper is well written and well structured. """,4,1 graph20_46_1,"""This paper presents a case study in crisis communications, examining the differences in the way news outlets and a company (in this case, Equifax) report about a large scale data breach, and how these different styles may affect the interpretations of the public. The authors use several different techniques to analyze the data, including Image Repair Theory and Narrative Semiotics. These were not theories I was familiar with, and I found the paper introduced them well and the analysis was thorough and well-written. The theories present the press releases of Equifax in a very different light from the news reporting from a variety of sources. The coding was carried out by one researcher, but confirmed by two others, so while we don't have a measure of interannotator agreement, I think the method is suitable. The paper includes a lengthy narrative of the Equifax data breach, including the events leading up to it, the impact, and the response. The two theories are then applied to the statements of the company and the statements of others as listed in the news. The IRT framing helped me to understand better the strategies used by the company (table 2) in such a situation, and I think in future I will read and understand such statements with a more critical eye. The paper also includes a survey of 100 Turkers who were asked to read statements from the company and the news (in random order) and complete a 10 item questionnaire. The researchers were looking at whether the framing by either the company or the news (or the order) would affect things such as if the reader feels the customer or the company was the victim. The results were interesting, in particular, that the framing may affect the way people interpret whether the company acted responsibly. I have some concerns about the study design, which should be addressed in a revision. In particular, how were the specific passages selected for the survey? There were two pairs of passages - were they matched somehow to be similar in length and topic? Why not use many more different passages? The biggest negative for this paper from the point of view of the GI conference is the topic match and relevance. This is only tangentially an HCI paper. For example, none of the references are from HCI literature. The paper presents some ideas for HCI takeaways such as the need for tools to assist people in tracking the use of their data. To me this is weakly related to this work. The stronger takeaways as presented are for communications and legal professionals around crafting communications so that they are truthful and not manipulative. One HCI implication that I see is that perhaps we could automate some of the analysis here to provide tools for consumers to ""see through"" the manipulation of corporate (or news) messaging to support critical readings. Overall, I think this is a solid paper but borderline out of scope for the conference. Minor: some issues with spacing around punctuation - bottom left of p3""",3,1 graph20_46_2,"""In this paper, study how organization communications about breaches affect customer perceptions. The authors presented a study of the nature of the communication using image repair theory by analyzing press releases on company websites. It was revealed that the press releases were consistent with approaches to reduce reputational damage. Narrative semiotics were used to examine the stories structures in the media. It was found that the company was not depicted the same way. The authors next explored how the strategies used in the company press releases influence the public understanding of data breaches. This was done through a questionnaire which collected responses to see if data breach incident descriptions from different sources, the companys and the news media, result in different perceptions of the incident. Participants were recruited through TurkPrime. The statistical test chosen (Wilcoxon) is the correct inferential statistical method for this study to the best of my knowledge. Findings are as expected: companies tend to present themselves in a better light that the news media. The quality of presentation in this paper is very high. The main issue with the paper is that there is virtually no relevance to HCI. I am very surprised why authors chose this conference to submit a piece of work like this. The subject matter that is being explored fits much better with a business conference. It is an interesting paper to read though. I did not rate this paper high because of its low relevance to the theme of this conference. It does not reflect the quality of work. """,3,1 graph20_46_3,"""This paper presents a study of user perceptions to communications about data breaches, based on a use case from a real example. The method applies Image Repair Theory to understand the strategies applied in the communications and narrative semiotics to model the examples of individual strategies. A user study conducted on mechanical turk shows a difference in perceptions for different communications sources - company or external sources. This paper is very well written and quite interesting to read. The method and study are described in detail and the analysis and interpretation of the results are clear. The figures do a good job of demonstrating each of the examples discussed. Overall the study is sound and the results provide a contribution in an area that has apparently received little direct attention. However, I do have a few concerns worth noting: - my main concern is that the paper seems to have little relevance to HCI. The study primarily concerns communications and user perceptions, and the link to HCI is not evident. This comes a little more clear in the final pages of the paper. It would be better to make the connection very evident early in the motivation, however it is only mentioned very tangentially. Despite the later discussion, I am still not fully convinced that the paper would not be a better fit for a business or marketing conference, thus my rating does not reflect the overall high quality of the work. - The study contains several random elements and could be better controlled. In general it is better to control variables rather than randomise them, for instance to ensure the intended balancing is achieved. I understand that this was probably done for practical reasons but these should be discussed along with the potential effects on the results. - The paper does not mention whether participants were paid for the study. I note that the ethics review was passed, however, given that turk recruits paid workers, not volunteers, it is important to ensure that the participants are not exploited and paid a fair wage. - the boxplots in figure 3 are difficult to interpret. I assume these show the median not the mean, but it would be good to clarify this. The whiskers add visual clutter rather than provide much additional information, so a more standard visualisation method would be preferred.""",3,1 graph20_47_1,"""The paper presents a method to find ""hollow bodies"" in point clouds by voxelizing the space of point clouds and the adjacency information of the voxels. For the following reasons, I suggest to reject the paper: 1) The motivation of the paper is questionable. The main motivation discussed in the introduction is to use the proposed method to improve the quality of 3D displays. However, no visual result of such a system is presented in the paper to back up this claim. 2) The voxelization technique in the paper, while it seems like a reasonable method, is straightforward. There is no clear contribution in the proposed method for the area of Computer Graphics. 3) There is no clear mathematical definition for hollow bodies. ""Above, upper, and outer"" terminologies in the definition of the hollow body make this definition unclear and coordinate dependent. I was looking for a better mathematical definition which could be considered as a contribution. 4) The results' rendering and presentation are poor. 5) The paper is full of grammar mistakes and typos. I suggest that the paper gets thoroughly edited by an English native speaker. These are *some* of the *many* mistakes: *) space is needed after comma, periods, etc. *) Page 1: ""-This paper proposes a novelty voxel connectivity"" novelty---> novel *) Page 1: ""Yongtae Jun [ 8 ] rstly determine"" determine---> determines. It is a single author paper. *) Page 1: ""The human brain need to"" need----> needs *) Page 2: ""depth ration to describe their"" ration---> ratio *) Figure 2: All the sentences need to be re-written. *) Page 3: ""model as the volume radio in order"" radio---> ratio ..... """,1,0 graph20_47_2,"""The problem statement is unclear and informal throughout most of the paper. Towards the end the reader gets an idea in retrospect, but the involved concepts remain vague. The definition of ""hollow body"" is essentially via an algorithm (that involves quite a number of rather arbitrary engineering choices) rather than via a mathematical concept (that would then be realized or approximated by an algorithm / implementation). The motivation remains unclear. No plausible use case for the orientation dependent definition of ""hollow body"" is given. Then paper mentions (quite extensively!) 3D display technology as well as CAD systems, but ultimately no connection to these is made. There are numerous language and grammar errors. At times this makes it hard to even make sense of the text, or to be certain things are understood as intended. The paper spells out algorithmic parts in detail that are standard knowledge (like determination of connected components in a labeled voxel grid) and therefore could be reduced to a short sentence (and perhaps a reference). Robustness (with respect to non-uniforn sampling, noise, etc.) is not addressed or discussed. Presented results apparently use synthetic point clouds rather than real-world data. The algorithm makes use of multiple thresholds. Their choice is not discussed. This, together with the various formal clarity issues, hinders reproducibility of the method and the results. In the result section successes and fails are reported; it remained unclear what even constitutes a ""fail"". Overall, the paper's presentation is of a quality and clarity that is not acceptable for publication. The paper's contribution is quite limited and its potential significance remained open. """,1,0 graph20_47_3,"""This submission addresses (I believe...) the problem of finding empty regions in pointsets in a specific setup. The presented technique relies on a voxelization of the input pointset, and defines a hollow voxel as ""a voxel that is closed to the upper area in y-axis (has voxels above itself) and connects with the outer area in x-axis or z-axis"". The set of hollow regions is depicted in Fig 2b in blue. The hollow regions are then partitioned into connected components, that finally define the hollow bodies. Each hollow body is then characterized by volume, ""normal line"" (I do not understand the definition of the normal line as described in the manuscript but I assume, based on Fig6, that it could be the direction of smallest variance given by the PCA passing through the center of mass?) and ""depth ratio"" (I did not understand this definition either...). The characteristics of the hollow bodies are not really used in the present work, but ""could be used for further research"". I am actually not entirely sure of what is described in the manuscript, because I have extreme difficulties in making sense of the phrasing... However, if the authors address the consistent volumetrization of pointsets (or consistent in/out segmentation), there are a number of existing techniques that the authors could compare to. For example: A Global Parity Measure for Incomplete Point Cloud Data, Seversky and Yin, Pacific Graphics 2012 (does not assume that the input pointset contains normals). Maybe there are extremely specific constraints that guide the proposed approach, which make the technique not possible to compare to existing work, but then I did not understand them... (and it may be partly my fault). I believe that given the current state of the manuscript in terms of clarity of exposition, this submission should be rejected. I encourage the authors to proof-read the manuscript before submitting it again.""",1,0 graph20_48_1,"""While it is absolutely true that there has been a lot of work in gestures to control carat position on soft keyboards (Fuccella et al. have a great review of past techniques for discrete carat movement) and Fuccella et al.'s work includes a similar interaction for edit commands (cut/copy/paste). To me, this is a really nice piece of follow-on work that avoids the need to clutch multiple times to perform gestures, that integrates well with a WGK (unlike Fuccella where, afaik, assumes tap-based typing), and has a nice use of bezel to access command mode. So, at a high level, this represents a well-engineered enhancement to past work in this area, and I recommend its acceptance. Keeping with the positive, the prose is great, with no typos I noticed and no grammar issues either. While the interaction technique is, in my opinion, neat and well-engineered, I have two concerns with the paper. One concern with the paper itself has to do with the evaluation. I assume that the edit gestures could co-exist with traditional touch-screen carat placement? I assume that participants could skip copy-paste and just keep on typing? I would have loved to see how frequently participants used the edit gestures, what an optimal use of edit gestures (for select/copy/paste) would have been, and how close participants came to this optimal. To be clear, I don't consider this important from the perspective of user preference, but it would let me know what percentage of participants used these edit gestures and how frequently. Unless I missed it, it may be that edit time was relatively rare? Are there any participants that just transcribed? Was there something in the participant brief that pointed to copy-paste editing to ensure participants used this option, or was it self-directed? A second (larger) concern with the paper came to light for me when I read the phrase, directly under Table 3, and then I went back to look at the one hand and two hand modes. Essentially, I am a bit confused by what one-handed and two-handed modes are. Unless I missed something, I am unclear whether these are explicit modes, or do they co-exist? Clarifying: my definition of mode follows Raskin's definition (Humane Interface): moded systems provide a different interpretation of an identical action given the state of the system. From this perspective, I understand what edit mode is, but I don't exactly understand whether one-hand vs two-hand is a separate mode or is it just a separate input constraint? I think it is the former (i.e. in one-handed mode, edit mode becomes an explicit mode, whereas in two-handed mode, edit mode is a quasi-mode, again as per Raskin, meaning that a bezel swipe gesture in one-hand mode behaves differently than in two-hand mode, but I could be wrong). I would really wish for some clarification on this. This is a larger concern for me because, if you need to explicitly switch between one and two hand style keyboard input, it means that you have this moded keyboard. I'm actually not even sure it is needed; might it be possible to enter the mode if the user users a bezel swipe from the bottom of the keyboard (the most comfortable for one-hand bezel swipe) and a quasi-mode otherwise? In summary, because of the novelty of the technique, I think this paper is a clear accept for Graphics Interface. If it were ever considered below the bar for a conference, I might suggest: - That the authors clarify the one-hand vs two-hand modes (this should be done anyway) so that the design has good clarity. I admit that it is possible that I missed something in the paper about whether handed use was an explicit mode, but I don't think so. I have read and reread the section on design, and I don't think I missed anything. If there were a rebuttal period, this is something I would look for in rebuttal. - That the authors consider a slighly different experiment where the phrases are transcribed with errors and the participants need to reposition the cursor and perform explicit editing operations to increase the frequency of the use of the gesture-based commands. It may be the case that some additional detail regarding experimental procedure would alleviate these concerns for me, simplifying the acceptance of the study data. Again, additional details are something I would request in a rebuttal to enhance the paper. Smaller details: - The log-transformed time was for the obvious reason of highly skewed data, I assume? Perhaps articulate those. Also, how skewed was the time, and was this a result of inaccurate typing or of more or less use of edits? """,4,1 graph20_48_2,"""This paper reports the design and evaluation of the Gedit interaction techniques. Gedit is a gesture based interaction on keyboard for cursor placement, text selection and text edition (copy, cut, paste, undo). Gedit can be used in a one or two handed manner. It was compared to the standard interaction technique used in Android Phones, and showed better performances and preferences in the one-handed condition compared to the standard interaction technique, while the two-handed condition had similar results but importantly not worst. I would like to start my review by commending the authors for their work. This paper is easy to read, flows nicely, is precise and concise. The contribution is a neat technique that works well, is easy to implement in existing OSs. It tackles a concrete problem and provide a simple yet elegant and efficient solution. The length is appropriate for the contribution. I usually am lengthy in my reviews but after reading the paper, I only have two suggestions, which are no major concerns: - adding an explicit sentence in the ""Editing Mode"" section stating that once the editing mode has been entered every cursor movement results in a text selection operation (as pressing shift on a computer keyboard would do). - adding a short sentence explaining why authors ""log-transformed time value"" avoiding referencing back to [2]. I would therefore argue for accepting this paper. The topic is relevant and this work is well executed.""",4,1 graph20_48_3,"""The paper presents a new set of touch gestures to perform seamless transition between text entry and text editing in mobile devices. The authors expose their design rationales and the corresponding technique, then report a controlled experiment in which their technique was tested against a baseline in one- and two-handed text input tasks. Participants were overall faster with the candidate technique, which was preferred by participants in the one-handed condition. The technique is well motivated and reasonably well described. I appreciate the addition of Undo to mobile text editing, and the frequent update of the ellipse parameters for the wheel gesture. The 500-ms delay is however kind of a bummer, but if the participants did not complain, then why not. I am overall positive about this submission, but there are elements that would need to be clarified before I fully commit to it. ---------- # Study p. 3: ""For fairness between conditions, we did not include the undo function, because it was unavailable for touch+widget."" >> I actually have a problem with this. ""Undo"" with Gedit could in some case replace basic commands like backspace, which would then be counted for T+W but not for Gedit. Results without Undo could be reported to illustrate a point, but only in addition to the overall results (with Undo). Why wasn't ""number of hands"" counterbalanced? This experiment protocol systematically ""pre-trains"" for one-hand in each condition. In particular, the results presented in the paragraph before [Conclusion] could be due entirely to order effects. Plus, in the absence of counterbalancing, ""number of hands"" cannot be tested for order effect, and therefore the sentence ""condition order had no significant effect on any of our dependent variables"" (p. 4) is incorrect. Why not compare the candidate technique to Fuccella et al.'s [7, 8]? This paper claims to build upon their work, but right now we don't know if the candidate technique actually does improve upon it. At the end of this paper, an interaction designer wouldn't know if they should pick [8] or this technique for their new system. # Technique How does one perform right-, up-, and down-swipes in the one-handed mode? Sounds like rather drastic angles are needed, e.g. left from the edge then backwards for a right swipe, that might not be correctly classified by an out-of-the-box recognizer. These specific gestures could use some more discussion and illustration for the one-handed case. The inverted 'V' (right to left) also seems counter-intuitive, but for the user. # Related work This paper is missing some relevant references, notably regarding gesture augmentation of virtual keyboards ([A] and the work that followed), and circular gestures to avoid clutching [B, C]. [A] Jessalyn Alvina, Joseph Malloch, Wendy Mackay. Expressive Keyboards: Enriching Gesture-Typing on Mobile Devices. In UIST 2016. [B] Grham Smith, m. c. schraefel, and Patrick Baudisch. Curve dial: eyes-free parameter entry for GUIs. In CHI 05 EA. [C] Sylvain Malacria, Eric Lecolinet, and Yves Guiard. Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach. In CHI 10. # Clarity p. 1: ""cursor movement action"" is unclear at this stage. p. 2: ""(4) we provide text editing gestures in both one- and two-handed modes, a significant design achievement given the constraints of one-handed use."" >> At this stage we don't know what the gestures are, and have little reason to believe that they should be different with one or two hands; the ""significant design achievement"" is not yet assessable (and perhaps a bit overclaimed?). p. 3: ""We conducted the study in both one- and two-handed modes. For each mode, the study was a within-subjects design, with a Technique factor having two levels"" >> This is a strange formulation, as it seems to imply that ""modes"" is not a within-subject variable (the text later confirms that it was). Fig. 4: I know SD is symmetric, but the error bars should also be readable within the bar chart. Fig. 5 should also display error bars. Fig. 4: Why explain Touch+widget again in the caption? The notion of ""marginal effect"" seems used rather generously (p>0.09), the paper should explain what made (and did not make) the cut. Also, result sections should also report effect sizes, e.g. as differences of means.""",3,1 graph20_49_1,"""In this submission, the authors investigate the influence of the visual appearance of crowd members on people's traversal of said crowd in a virtual reality environment. The crowd appearance (IV) varies between neutral, human, cartoon, zombie, and fantasy; the authors measure people's traversal (DV) in terms of walk speed, path length, deviation from from average path, and average distance between participant and (virtual) crowd characters. The authors reported multiple significant differences, mostly between fantasy and cartoon, such that, broadly speaking, participants traversed cartoon crowds slower than fantasy crowds. The authors presented multiple, in parts contradictory, explanations for this result (e.g., cuteness and eeriness). Overall, I find this research topic could be of interest to the HCI community, especially CHIPLAY. As a minor note, however: I wish the authors would have motivated their research with some potential real-world applications; just because something is an ""overlooked factor"" does not necessarily make it an relevant research topic. A more severe issue for me is the lack of a theoretical foundation for the the expected results of the experiment. It seems to me that the authors simply ran a study in hope of finding some effect, without much a priori hypothesizing of possible outcomes. This sets of the usual chain of doubt about the validity of the results: finding some significant differences between conditions, struggling to explain some of the results, skipping over an analyses of effect sizes, and uncertainty about the reproducibility of the results. This becomes evident in the discussion section, where the authors struggle finding concise explanations for the experiment results. Almost all explanations sound contrived to me, and could be easily reversed if the data would have been different (e.g., cartoons are usually simplified abstractions, so people need less time to visually analyze them compared to complex fantasy characters). While I find the research idea compelling, I ultimately cannot argue for accepting this paper as it does not explain the results in a satisfactory way: even if the results are reproducible, how can other researchers or practitioners generalize them and apply to their work?""",2,0 graph20_49_2,"""This paper presents a study analyzing participants walking behaviour amongst five different types of virtual crowds in VR. They find differences in participants behaviours depending on which type of virtual character the participants were walking alongside. I don't believe this paper is ready for publication in its current form, though it might be if two concerns can be addressed: The analysis and presentation of results, and the motivation and relevant use cases that these findings might have. Additionally, I think the effects of the findings are actually pretty small, but if the paper can be better motivated than perhaps the small findings may be valuable. Regarding the analysis, there are several errors that should be corrected, and could potentially be without impacting the findings of the study. First, a one-way ANOVA was used when an RM-ANOVA should have been used. Participants performed all five conditions, so the samples and data should be treated as related samples. Secondly, the paper states that a bonferroni test was used, but that is a *correction* not a test. The Bonferroni correction should be *applied* to a test, such as a paired samples t-test. Additionally, meaningful effect sizes should be presented. Partial eta-squared is difficult to interpret, and something like Cohen's d, or the natural effects (e.g., in m or m/s) should be presented to let the reader know the magnitude of these effects. I think the magnitude is actually pretty small. There are only slight differences in the means for many of the measures, and with 5 factors, multiple metrics, and 18 participants running 25 trials each, the probability of some significant factors is pretty high. Regarding motivation, I just don't understand what these results add. Are they actionable for any use case (current, or imagined?), or do they tell us something interesting about human psychology? The discussion presents some potential explanations as to the results, but I'm still left wondering what to do with this information now that I have it. I have no expertise in understanding crowd simulation.""",2,0 graph20_49_3,""" This manuscript considers the question of whether people in room-scale VR walk differently in crowds depending on the different types of avatars in the VR environment. The authors approach this problem by designing an experiment where participants don head-mounted VR displays and physically walk about 8m while viewing a simulated crosswalk (with other avatars). The authors find that based on the type of avatar, there are some differences in people's walking speeds, and some other variables (interpersonal) distance that the authors are measuring. This manuscript is straightforward, but I think the main challenge for me is that the authors have not done a good enough job of motivating the particular work. Why are we concerned specifically with the avatars? What does the prior work tell us about this that makes it important to study this problem? And, what hypotheses should we have about these differences a priori? I think right now, the explanations and interepretations feel very ad hoc, speculative, and unsubstantiated. It also means that it's difficult to necessarily be confident extrapolating from the results. The other challenge is that, even though the authors have taken considerable time explaining the experimental setup, there are still points of uncertainty. While the clarity issues may seem pedantic, I think for me, they stem from the uncertainty about *why* we might see results. For a piece of research the reports on a study, it is important to be able to replicate the study set up for reproducibility and verification. I do not think the manuscript is ready for publication. What I like about the manuscript is that the research question is straightforward (even if not well-motivated), and so the study design is also fairly straightforward. While I do not agree with the choices the authors made in the measures they use, these are understandable choices. So in this sense, the manuscript is easy to understand at a high level. Below, I detail some of the more substantive issues that I think the authors should consider moving forward: * The authors need to provide a clearer explanation of the motivation: why is answering this research question important or valuable? Has prior work suggested that this is something important or valuable to study? * The related work is extensive, but there isn't a really great narrative that brings us through it. It mostly feels like a laundry list. Suggestion: for each paragraph, what is the message that is being conveyed? What's the purpose of the paragraph? How does it help or set up the context for the present work? is it related? is it peripheral? * While the setup of the study is straightforward, it isn't clear why we should expect there to be differences between these avatar types a priori. * To me, Figure 2 would make more sense to show the backs of these characters, I think, since this is what participants were looking at Within the study: * Clearer description of the participants would be valuable. Have these people had experience with VR? Have they had experience with room-scale VR? * Do the virtual characters move in a straight line? Do they disappear when they reach the other side? Or, does the crowd continue to accumulate? Based on the manuscript, I think each character moves at a different rate; do they vary in their own individual movement speed (i.e. variance/deviation), or is their individual movement rate constant? * Did the characters similarly avoid the participant? * How much delay time was there before participants were allowed to start? How was the start signaled? * On the metrics: * ""Speed would tell us if the participants either felt more at ease or on edge in their walk,"" -- why would you say this? I'm not sure if I agree with this. * "" Deviation: The average deviation (absolute value) between the global trajectory of the virtual crowd and the trajectory of the participant. The average deviation was measured in meters."" -- Is this a speed measure? Or a distance measure? Where is the trajectory measured? Wouldn't this be angular difference? * I'm not sure I understand the ""length"" measure. Is this the *distance traveled* by the participant? How is this measured? How far do participants travel in the virtual world if they were going in a straight line? * There are statistical differences between these measures, but the explanations feel speculative. It might have been valuable to inform these by including some qualitative reports from participants. Did participants have explanations for this? Did participants spend time looking at the avatars? Maybe some qualitative description of what they did would be useful. Did participants avoid the characters? Did they just barrel through them? And, so on. * The 3m interpersonal distance seems a lot, given that the characters held a distance of 0.76cm min ... how far apart were characters from one another? """,2,0 graph20_50_1,"""This paper proposes a follow-up idea to RepulsionPak, extending the static packing to a dynamic packing animation. The idea of treating the time domain as the 3rd dimension in physical simulation is neat. Similar ideas have been studied in space-time optimization. Watching the video, I think the animation quality needs more improvement. Most of my concerns are around quality. 1) The generation of the guided elements is to interpolate between a small set of target positions. Since the driving animations from these guided elements are simple, the final movement of all the shapes is not very dynamic. 2) Watching Fig1 video and Fig13 video, it seems like to perturb a few key shapes and everything else follows the repulsion force. The most interesting video to me is Fig17 since there is consistent and large motion. All other videos seem like random perturbation. 3) I dont know if this is out of scope for this paper, but I do find it hard to define What is a good AnimationPak?. The motivating example from Unilever is static. I tried to search online to find a baseline on what is a good animation packing for abstract shapes and its unclear. I found this video to animate the Unilever logo, but its not 2D packing, as described in this paper. pseudo-url 4) Animation packing is new and its probably hard to find direct comparisons with previous work. One possibility is to run a 2D simulation in the time domain as opposed to the current 3D space-time formulation. For Fig11 snake_birb example, I am curious how it would perform with a 2D simulator. """,2,1 graph20_50_2,"""This paper extends the work of Saputra et al (GI 2018, TVCG 2019) on static packing to animated packings where each element potentially has some sort of scripted behaviour. The method creates a static optimization over a timespace cube, which then can be interpreted as an 2D animation by taking cross-sections perpendicular to the cube's time dimension. It is quite well written and has worthwhile results. It is a clear accept, somewhere between 7 and 8 on OpenReview's 10-point scale. The paper does an excellent job of describing the method in detail. In one respect, though, more discussion would be warranted. I am not sure about the directability of these animations -- i.e., how the scripting is meant to be done. There are tantalizing statements such as ""The artist can optionally specify trajectories for a subset of the elements"". How are these trajectories specified, and how time-consuming is it to specify constraints? I understand that the authors have not built an animation tool. Still, it would be nice if the paper gave some indication of the amount of human effort needed to create the animations shown. The animations are mostly not very sophisticated, and I wondered what could be done with them. Some examples, such as the lion's mane animation, make good use of the limited movements. The technique works nicely and the animations seem decent. It is hard to judge success in work like this with no clear antecedent. Controlling smoothness, probably with higher-order continuity of the spacetime worms, will be an obvious direction for future work. The paper shows a variety of examples demonstrating some of the possible uses. In their closing remarks, the authors suggest that artists will find additional imaginative uses for this work. I concur. The paper addresses the problem posed but also opens the door to future unknown possibilities. Minor points: Figure 15 compares RepulsionPak and AnimationPak, and judges RepulsionPak to have higher ""packing quality"". RepulsionPak packs elements more tightly, which I suppose is what is meant by ""quality"". It might be better to use a less judgemental term and say something like ""packing density"". The authors might like to compare with the abstract animations of gMotion, GI 2018. """,3,1 graph20_50_3,"""The paper presents a new system for a new application: packing animated elements within a given shape ('container'). The overall idea of the system is to model this as a physics-inspired system. Given shape elements are first picked randomly from the collection, their centroids distributed as a blue noise over the container shape. Then the elements repulse, deform each other, move etc.; at each time step the system models the necessary forces, and the final animation is the time integration of that. In addition, to prevent overlaps, the elements are initialized small, and are gradually grown, whenever possible. The final results look quite appealing. Even though I can not think of any application beyond pure graphics, I don't mind. The text is written clearly, and even though it often answers the question 'what we do' instead 'why', the 'why' is almost everywhere easy to decipher. I have a few reservations about the paper: - I don't really understand the value of introducing spacetime here. I do understand that it's a convenient visualization or another point of view that this process can be viewed as packing a spacetime, but I don't think the authors really use it beyond pure visualization. I would appreciate if the authors could clarify that in the text. - While I do understand that this is a new and purely graphics-related application, the quickly varying size of elements during the animation seems... suboptimal. I would have expected the animation to be as-rigid-as-possible. I guess it would make the problem more complex, but I would nevertheless appreciate some justification in the text. - Also a clarification point: the repulsion force seems to care about the vertices on the convex hull of each element, but in the final results it seems like the elements don't intersect, but their convex hulls do (which is great, that's what I'd expect). So it is because the weight of the repulsion force is small, so it's mildly unhappy? - This is very reminiscent of the spacetime in 'Vector Graphics Complexes' by Dalstein et al. (SIGGRAPH 2014). I think it would be a good addition to the related work. A few minor comments: - While I can guess the electric repulsion inspiration in the repulsion force, edge force is a little surprising. Why is it quadratic? Which physical model of elasticity is it inspired by? - L89: CAVDs -- abbreviation without explanation - 3.2 seems a little bizarre from a UI standpoint: why isn't it possible to just take an existing animation in standard format? Other than that, I think the results look nice, the technique makes sense; the contribution, albeit modest, is still quite interesting.""",3,1 graph20_51_1,"""This work considers mobile devices augmented with 3 side-buttons to be used in chords, and investigates if there is interference when learning new mappings for these chords (in different applications) and how easy it is to remember them after one week. Studying additional input modalities in mobile devices (for shortcuts or to enhance existing navigation) is an important and well studied subtopic in interaction research. But as the paper indicates, there is little work looking at how overloading shortcuts could affect learning and memorisation. As such I believe it is of interest to the GI community. I am generally positive about the paper. I particularly enjoyed how the study progresses, starting from a simple training task, moving to a complex usage task that resembles real use of mobile phones where one hand holds it (and does the chords) while the other may do detailed actions on the screen, to the two memorisation tasks (one right after the training and complex usage, one a week later). The study is well reported, with the design of experimental conditions, the procedure, counterbalancing, and results reported in detail. An additional benefit is that the software and hardware prototype (although a bit bulky) could be used in practice. I still nevertheless believe there are some aspects in the paper that require clarification / discussion: 1. The paper shows that there is no interference in learning, on the contrary overloading mappings (learning new mappings for the same chords) seem to happen faster than learning the first mapping. Nevertheless, inn the memorization part 1 (the one done after training and usage), I felt there was an analysis missing that would shed more light to interference. The paper tests overall memorisation after training for three mappings, but does not consider the order of presentation when reporting result. It is possible the results hide memorisation interference: for example is it possible that that commands learned or used later in the study tend to be remembered more, but this difference is hidden due to counterbalancing. 2. While the paper decided to not study a base-case condition (where no augmentation is possible), I believe this is a fair decision as augmentation is provided as a means to augment simple touch interaction. Nevertheless, what is less clear is (i) why this specific augmentation was chosen, and (ii) why it was not compared to other augmentations to see if they are similar or different in terms of interference and memorability. An additional study for such a comparison is not possible in a review round, and I believe the findings of this study are interesting enough to stand on their own. But I would like for the paper to at least expand on the specific augmentation choice (3 buttons at the side) compared to all the other possible augmentations mentioned in the related work. 3. More generally, design choices should be clarified better. At the very least it should be clarified: - (the choice of augmentation mentioned before) - Why 3 buttons and not two or four? How was the specific location of buttons selected? Is there prior research that suggests this design is better or is the choice based on author intuition or observation of users holding the phone? I believe the 3 fingers make sense (holding my own phone), but clearly stating if any specific methodology informed (or not) the choice would be good. - Where does the 200msec wait time to recognize the cord come from? Is it from pilot studies, previous research? This is particularly important given the timeout issues when recognising the chords. I appreciated the authors honesty in reporting the timeout issue when detecting the chords. The paper reports accuracy rates that are a bit low (around 80%) but it seems this is due to how the chord is captured (rather than true accuracy which likely is higher). The reported accuracy in the discussion though seems to be wrongly reported at 80% (instead of 80%). As a side note, I find the motivation (abstract, intro) making a somewhat broad claim about providing results on augmentation in general. There all kinds of possible augmentations as the paper admits and saying that the findings involve augmented input in general (end of Abstract, Intro) is stretching the contribution - it is not clear these findings will hold for example in bimanual interaction or pressure sensor augmentation. I suggest the authors adjust their language and use the term chorded shortcut buttons throughout when they talk about their contribution and findings (as they do in their title and conclusions). Finally, I enjoyed the extensive Related Work. Nevertheless, the connection to the current work are not always clear. Eg. what have previous studies not considered or found that can be useful in the context of chorded buttons? Are there any indications that chords may be a better augmentation? Having listed some concerns, I still believe this paper makes a good contribution to GI and should get accepted with minor clarifications (and ideally reporting of possible order effects in memorisation). """,4,1 graph20_51_2,"""The submission presents a design and evaluation of corded input for mobile devices. The proposed prototype mounts three button interface onto the side of an existing touchscreen mobile device. The user can then press on any combination of the buttons with the hand holding the device to provide input. The evaluation tests participants ability to provide input using the prototype to select items from predefined sets (Apps, Colors, and Commands) over 10 blocks. The results report selection time and accuracy, and workload. The submission contributes knowledge about how users can use corded input to interact with their mobile devices. The main strength of the paper is a comprehensive investigation of the proposed design and prototype. The paper is well written and the proposed technique is interesting. However, the paper has two main weaknesses: 1) the evaluation lacks a baseline, and 2) the accuracy of the proposed method is low. The paper presents a thorough investigation of the proposed cording interaction technique, but it remains unclear how it compares to the existing ways users select items from a set (e.g., App icons from a menu). For example, although one would expect an expert user to be able to select applications using cording quicker than using a home menu, it is likely that they would also make more selection errors using cording. It is not clear what this tradeoff is and if the speed of selection would justify potentially lower accuracy. Thus, the paper should compare against a baseline. The accuracy of the proposed interaction is low. Although the accuracy reaches 90% this also means that the users would make at least 1 error out of every 10 selections. This would likely frustrate the user. Thus, the paper should discuss this limitation. In summary, the submission presents an interesting interaction technique that could potentially expand the expressiveness of interactions with mobile touchscreen devices. However, the current iteration is not ready for publication. Thus, I encourage the authors to continue this work and improve the accuracy of the interaction and show comparison with a baseline. """,2,1 graph20_51_3,"""This paper presents a study of how mobile phone users can transfer and overload gestural inputs between different contexts: in particular, the authors investigate chorded inputs for specifying colors, apps, and textual characteristics. The study described is tidy and interesting, with well-thought-out implications for design. The authors also briefly describe the physical prototype system they created for testing chorded button interactions on an Android phone. I really enjoyed reading this paper, which asked and answered a very precise, important research question in the space of mobile interactions. While mobile interactions arent my specialization, I believe this work to be novel. The study design was well-suited to the task at hand (although I do have a small question about one of the mappings tested, see below). The authors use of plots clarified and emphasized their analysis points, making the work easy to follow and evaluate. It was really exciting to me that the authors determined that multiple mappings did not reduce accuracy! I enjoyed the discussion sections discourse on how to understand and apply this result. The technical system that supported the study was detailed at a level appropriate for replication. I am curious how the authors chose the 200ms timeout: was there a pilot study run to determine how long the timeout should be? Did they record data during the studies which would allow them to empirically determine a superior timeout? Im also interested in how this compares to the timeout used for e.g., Twiddlers or other chorded input devices. The mappings described by the authors were interesting, as one of them was semi-modal (colors) and two were simple inputs. The colors mapping was especially interesting to me, as it has a natural component to it: i.e., red + blue = magenta, just as in colour theory. I was surprised to note that this didnt seem to affect its learnability versus the other two mappings. While I understand that the design of mappings was not a part of the goal of this study, I found myself wondering whether natural mappings of this kind would have different levels of memorability than arbitrary mappings. Overall, I liked the paper, and I would encourage the authors to continue studying further outcomes from their work.""",4,1 graph20_52_1,"""This well-written, clear paper presents the design of a mobile application to support the sharing of personal stories using templates that are contextually customized to prompt easy curation and sharing of stories. The design is grounded in a formative study that motivates the requirements. The resulting design responds to the findings of the formative study to present an app that allows users to quickly choose a template and customize it to create a visual story that can be shared a social website of the user's choice. For example, templates are pre-populated with images and prompt questions based on their theme, and the user can customize the image or add a new one, answer the questions or write something else. Some data is automatically populated (e.g. date) or made available (e.g. map of run). Then the user shares the combined image with friends. The initial prototype was populated with templates related to running (getting started to run, hard day, etc.) The application was deployed in a pair of deployment studies with real runnings for 4 weeks each. The participants all had an active running lifestyle. Rich field study data about the use of Yarn and the participant feedback including quotes were provided. Some interesting findings, such as visual templates constrain creativity (while providing other functions) were surprising and informative. The description sentences preformatted into the box were also not well subscribed as users preferred flexibility. Finally, there was an unexpected finding that Yarn is best used for personal notes rather than shared. Both the formative study and field study are clearly reported with extensive evidence of participant feedback. Overall, I liked this paper. The accompanying video was interesting. as were the supplied supplemental materials. The paper is a bit longer than necessary to tell the story, but there were no parts I felt were egregious as the longest sections included quite a few participant quotes which was helpful. The negative of this paper is the level of contribution in terms of what new, generalizable HCI is coming from the work. The results may not generalize to other story-sharing support systems. """,4,1 graph20_52_2,"""The submission presents a user-centered investigation of how to help people involved in long-term goals (running training, DYI projects), create and share stories of their progress. The goal for creating these stories can be either personal, sharing and receiving feedback, or both. The paper follows a very thorough design methodology, starting with interviews with interested parties (23 participants) and deploying the designed mobile app (Yarn) in a field study (21 participants). Yarn as far as I can tell, goes beyond existing social media apps in that: it allows the creation of persistent stories, story posts are divided in goals (rather than one unified feed), and there is support to use different templates for structuring the story depending on the authors needs (goal may be to get feedback, request emotional support, to inform about progress or of an achievement). The paper discusses in detail several findings of the use (or not) of the templates and the response from social peers of the authors. The strongest point of this work is the methodology followed. The choice to involve end-users in the form of interviews and a field study is appropriate, the process is well described, and the analysis well reported and discussed. This is really admirable. I was also very pleasantly surprised by the honesty of the work. The results, including negative ones (like the fact that participants worked with few templates and found them generally restrictive) are objectively presented and discussed. The creation of the story templates that are goal driven is not something that I have seen in such tools before. This choice was well grounded from the interviews. It was thus fairly disappointing to see that in the end very few of these templates were used in practice. Some of the explanations provided in the paper (templates too rigid, their visuals may prevent people from showing the image, they did not highlight important milestones, etc.) make sense. Given that the most popular templates are todays progress and nothing to report I wander how much of this is the actual template and how much it is that the story is still being built and as such harder to reflect upon- i.e., other templates may make more sense at the end of the story. I would have liked to see a summary (in a table?) of the types of templates used per participant and ideally when they were used in the story (chronologically). I would expect for example that posts like long run or my journey are naturally less frequent because they happen at the end of the goal /story (or are just rare). I believe the authors should provide this information to contextualise the use of the templates. I also appreciated the attempt to ask authors specific questions in order to suggest templates and text for their posts in the story. While this seems to be in practice unsuccessful in determining what msg the authors want to pass, I found it very interesting that some participants became more reflective of their stories nevertheless. I believe it is worth stressing that these questions may be a practice worth adopting, not for detecting/suggesting the text but to help authors distill what is important about their post they are sharing. It is indeed surprising that despite all this effort, the audience did not react to the posts as much as the authors of the stories wanted them to. The reasons given in the paper (audience not sharing/understanding the goal) is very convincing. But I am wandering if it is also related to the lack of use of the question and emotional support templates, (or if this is unrelated and they gave up on the use of the template). Again, some information about how often and when the templates were used would have helped. I would also have liked for the paper to stress these more surprising findings (discussed above). Now they are lost in the numerous observations reported (I may have even missed some - these are the ones that resonated to me). A summary, in the end of the observations, in the discussion or in the conclusions, could help readers with some key take-aways. I would have also liked for the paper to clearly state the technical/feature contribution over existing work and systems. In my summary (1st paragraph) I have listed the ones I felt are novel, but I may be wrong, it would be good if the paper highlights these. Finally, the paper should acknowledge other forms of expressing personal stories in the related work. For example there is extensive work in data visualization for personal data and reflection through personalized visuals [a][b][c], work on visual templates for stories [d]) (that comes from a long line of work in data storytelling, a recent book on the topic [e]). While this submission deals with stories that mix numerical and pictorial information, and focuses on construction, I believe this work is relevant and worth mentioning. Overall, my recommendation would be to accept this work with the following additions: - A table or visual showing when and how often the different templates were used in a story (possibly per participant). - Highlight the most surprising findings (for me it was the list of all the items/findings I present in the paragraphs above, but there may be others I missed). I believe these are worth iterating in condensed/summarised form somewhere (in the conclusions or end of the discussion). - Stress what is novel (in terms of design/features) in Yarn. I have provided my interpretation in the first paragraph of the review but may have missed something or misunderstood something as being novel. - Add in related work relevant papers from data storytelling. I provide some that I believe are very relevant but encourage the authors to read more on the topic. Minor ==== - p7-8 I found the need for the templates a hard time and I am back a bit hard to understand from the interviews, it may be worth expanding a bit more - p1 allow people to share _with_ others who - p12 and p18 at the bottom of the page is the title of a section that should be in the next page - p15 connec tions => connections [a] Nam Wook Kim, Hyejin Im, Nathalie Henry Riche, Alicia Wang, Krzysztof Gajos, and Hanspeter Pfister. 2019. DataSelfie: Empowering People to Design Personalized Visuals to Represent Their Data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 19). Association for Computing Machinery, New York, NY, USA, Paper 79, 112. DOI:pseudo-url [b] Alice Thudt, Uta Hinrichs, Samuel Huron, and Sheelagh Carpendale. 2018. Self-Reflection and Personal Physicalization Construction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 18). Association for Computing Machinery, New York, NY, USA, Paper 154, 113. DOI:pseudo-url [c] pseudo-url [d] E. Segel and J. Heer, ""Narrative Visualization: Telling Stories with Data,"" in IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1139-1148, Nov.-Dec. 2010. [e] Data-Driven Storytelling (AK Peters Visualization Series) 1st Edition by Nathalie Henry Riche (Editor), Christophe Hurter (Editor), Nicholas Diakopoulos (Editor), Sheelagh Carpendale (Editor) """,3,1 graph20_52_3,"""This article presents Yarn a documentation and sharing application for personal activities. Yarn focuses on running and DIY activities. The authors based the design of Yarn on interviews with local people interested in the project. They iteratively developed and deployed an iOS based on their insights and user feedback. And finally conducted a study of Yarn with 21 participants. The amount of work that went into designing and evaluating Yarn is impressive. # Related Work The project touches on many sub-fields of HCI, from personal informatics, to research on reflection and reminiscence, to research on social aspects of sharing. The related work does a good job of treating all these topics and drawing relevant insights from the literature. # Study 1 Section 4 presents an interview study focusing on what type of stories people would want to tell based on personal data. The study is insightful and the four design guidelines are clearly presented. # Yarn The design of Yarn is clearly presented. # Study 2 Section 6 conducted a field study of Yarn with 21 participants over four weeks. Section 6 would benefit from a more structured presentation of the method as is classically done in HCI papers (Participants, Study Protocol, Data collected, Data Analysis Method). The tables are somewhat overwhelming and hard to parse. Diagrams and graphics could help to better understand the scope of the deployment, and the profile of the participants. # Remarks and spaces for improvement - The Introduction raises one overall research question, split into 3 sub-sections, and followed by five contribution (with sub-contributions). This leads to some confusion on the message of the article. I would encourage the authors to structure the questions and contributions to match each other, and to focus on what are the most relevant ones. For instance the type of ""data to tell"" risks to be heavily influenced by the recruiting strategy, in term of social experiences of participants that may not really be representative of the country the study was conducted in, and much less of other cultures internationally. - Section 3 is not really needed and could be cut altogether, human-centred design is the de-facto approach in HCI, there is no need to present it. The half of section 3 announcing the paper structure could be integrated to the introduction. # Synthesis Overall, the article is methodologically sound, and while the results are not ground-breaking they will be of interest to people working on the topic. The richness of the application, and the diverse audiences it caters to might have led to challenges in framing the precise contribution of the paper. Insights relevant to people working on related topics are peppered throughout the article, but a big overarching view of the main contribution is missing. On the , the authors would probably benefit from getting more familiar with the (Interaction) Design literature, especially on the modes of knowledge production in Design (e.g. designing for the particular instead of the generic [1]) and how to go beyond human-centred-design (see Will Odoms work for instance). Given the situatedness of in the wild deployment it is quite normal not to find significant effects, or minimal adoption/appropriation. The value of design inquiry lies in reflecting on the detailed idiosyncratic practices participants develop, regardless of generalisability. [1] Olav W. Bertelsen, Susanne Bdker, Eva Eriksson, Eve Hoggan, and Jo Vermeulen. 2018. Beyond Generalization: Research for the Very Particular. Interactions 26, 1 (Dec. 2018), 3438. DOI: pseudo-url""",3,1 graph20_53_1,"""The paper presents a new method to incorporate user constrains in improving cross-surface mappings. After growing regions around user point constraints and parameterizing them on both involved shapes into 2D, they perform a 2D optimization to align the boundaries and satisfy the points constraints while keeping the distortion low. As distortion, they chose LSCM energy. In order to prevent flips, they use an iterative energy where once flips are detected, they lower the weight of the soft constraints. Clarity: The paper is written well and is easy to read. A few things: in 3.2, ""we fill any artificial internal boundaries"" -- unclear at this point. It's explainer further in the text, but the intuition should be as soon as those are introduced. Missing Related Work: [A] Jason Smith and Scott Schaefer. 2015. Bijective parameterization with free boundaries. ACM Trans. Graph. 34, 4, Article 70 (July 2015) Notes and Questions to be addressed in the final publication: As far as I understand, the contribution of SLIM [15] is not to introduce energies that go to infinity once a triangle degenerates, those were introduced before ([A], [Schuller et al. 2013; Fu et al. 2015]). Rather, they came up with an strategy how to minimize such energies using the local-global method. I'm not exactly clear how the initial regions are selected in 3.1. Is the method robust to user error here, as long as the different sets of constraints are in different regions? If so, why not use a clustering mechanism? ""LIM and SLIM often produce ..."" -- needs a reference to the comparison figure(s). In 3.2: ""... and around each corresponding position pseudo-formula ."" -- so if the point is inside the triangle, all the adjacent triangles are taken? ""One disadvantage ... Is that it can contain concavities"" -- I'm not sure what's the issue with concavities. I would understand if the authors were using Tutte's embedding, but it's ABF++, so it's unclear. Have the authors considered an iterative region growing strategy instead of their potentially unstable heuristic 3.2 with a fixed threshold of twice the geodesic distance? One can imagine, e.g. Parameterizing the patch, measuring the distortion, adjusting the patch. In 3.3.1, I'm not totally sure which energies were tested in SLIM. Exponential Dirichlet via their quadratic proxy? Symmetric Dirichlet? In any case, they don't guarantee injectivity because of their choice of energy, it's the line search strategy that they use. Which brings me to the question: instead of adjusting the weight in the optimization, have the authors considered, say, a first-order optimization method (e.g. Gradient descent/projected gradient descent/augm lagrangian, etc.), with the line search constraints like in [A]? Some implementation details are missing: how is LSCM energy minimized? CG, like in the original paper? Fig 16. Is not great, the texture is too smooth and even to see issues clearly. Maybe replace with the number grid? Typos: Panozzoet al. -> Panozzo et al. Conclusion: I think it's not a large, but a contribution worthy of publication. Before publication, however, the aforementioned issues and questions should be addressed in the text.""",3,1 graph20_53_2,"""This paper presents a method to improve upon existing maps between surfaces by a supervised method: the user provide landmarks of places where they deem to have a bad map, which are supposedly localized. The method then automatically devises a local region around these benchmarks, and optimizes for an LCSM-like energy where the landmarks are iteratively moved (with slow enough steps) to the desired positions. This is a nice idea that I think is worth publishing, and the method is decently---though minimally---evaluated and compared. i am convinced it is generally a good idea to work this way. However, I have a few reservations about the method and the evaluation that should be resolve in a revision: 1. Did I miss where the bound on the geodesic distance is? I mean for the computation of the size of the patch. Also, why is geodesic a good idea? it would seem that one would like to grow the patch until they reach boundaries that are already good enough. 2. What is the distortion measured in Figures 5 and 6 exactly? 3. There are definite cases where the user can make things worse but ""ruining"" the map, albeit not making the constraints impossible, which is mentioned in Appendix B. how would you converge then? 4. To be a fair comparison, since the method is supervised, one would have to compare the automatic method with the full set of landmarks obtained (both original and new input). Is this the case? 5. I don't understand Figure 18: why is the deformation in (c) not interpolating the green landmarks? 6. Sometimes it's hard to see the difference in the maps, for instance in Figure 2. The authors say that visualization is a good way to see conformal distortion, but a scalar color map of the local distortion in some cases would help much to see the contribution. """,3,1 graph20_53_3,"""This paper presents a technique to incrementally improve dense surface-to-surface maps by minimizing conformal distortion in local parameterized patches and ensuring injectivity during optimization. Unlike previous work that updates the entire surface correspondence on insertion of landmarks, the key emphasis of this work is to only edit the map locally. The following can be viewed as contributions of the paper: (1) The idea of locally editing the surface-to-surface map via construction of local cross parameterizations. (2) A different approach to planar deformation that minimizes conformal distortion and preserves local injectivity. While the techniques proposed in this paper might be useful in parts for a surface map editing tool, I think the paper has a number of weaknesses. In particular: (1) The authors should clarify what energy they minimize with SLIM/LIM - symmetric Dirichlet or conformal AMIPS? If the former is used, then it is not fair to claim that SLIM & LIM demonstrate larger overall distortion compared to ILSCM when a conformal distortion metric is used for comparison. It is expected that SLIM & LIM will demonstrate larger angle distortion than LSCM inspired approaches if they minimize an isometric energy. (2) The authors should provide timing comparisons against related work. While the primary cost of the method appears to be a numerical factorization for every iteration of the deformation scheme on each patch, not much can be said (or is said) about the convergence behavior (such as number of iterations) of ILSCM. However, the lack of theoretical guarantees can be compensated with performance benchmarks of just the segmentation and deformation schemes against related work on a large dataset of meshes (e.g., pseudo-url). (3) Most of the textures used in the paper are fairly smooth, and as a result, its difficult to ascertain whether the resulting mappings are smooth, for example, near the patch boundaries. Figure 14 by itself is not very particularly convincing of the robustness of the approach. (4) ABF++ is used to find the initial parameterization of the patches, but it is not guaranteed to be injective. Presumably, other methods such as SLIM with conformal AMIPS can be used to get an initial injective mapping. Overall, I can see parts of this framework being used in a production pipeline. However, in my opinion a better evaluation is required to make up for the lack of theoretical guarantees (in particular, via testing the performance characteristics of the deformation scheme and the quality of the final mapping in terms of distortion on a larger dataset, rather than just the 13 meshes in the paper). """,2,1 graph20_54_1,"""This paper introduces an augmented-reality (AR) application in the form of a heads-up display helping both designers and novice users to create and visualize a gallery wall of art items (paintings), with suggestive features on spatial and color compatibility. It also reports a usability study evaluating the usage of the system. I think this work proposes a niche feature (recommendation and application on interior design) to an already quite saturated arena of AR interior design tools (as mentioned in the Related Work, the idea of using AR to for viewing paintings is not new) and I'm not sure if it is enough for a full publication. There is about 1 page on the mathematics behind the art selection, and another 2 pages on the system functionalities. While informative, it further diminishes the research contribution of this paper. Especially when the technical approach doesn't seem to be a contribution this paper is making. The evaluation is rather straight-forward and does not provide deeper insights on how the mixed reality mode of interior art gallery design helps the activity. One of the interview results from interior designers is it takes about 40 minutes to complete a conventional design process. Why time wasn't measured? And why not asking the designers in Group 1 more probing questions (the paper says ""a company"", I assume its the same design company)? I think their feedback (even with a few of them) would be useful for qualitative analysis and could be more insightful. Overall, I think the content is disproportional to the contributions claimed by the authors. I'd expect reading a more in-depth qualitative analysis and reflection of the design than details of the system. My suggestion is to swap the supplementary materials with the technical approach, and condense the user interaction section. Pros: -Adequate coverage of related work. -Writing is clear and easy to follow. Description of the implementation and user interaction is very detailed. -It is really helpful to have most of the data included in the supplementary materials. Not many submissions do that. Cons: -Some clarifications needed: --Workflow needs a bit more details. How does the user select one (the word ""filter"" was used but that's all I could find)? Base on what criteria is the color palette generated? Is it just one palette? Can the user freely adjust the weighs wc and ws? --The 4 templates provided in the system seems to be limited to a ""cluster"" style. How about those common ones like an array of images, or art pieces of the same sizes side-by-side (the Align All function seems to do that but it's not clear how the user chooses between templates and other alignment functionalities)? -A big portion of the paper is used to describe how arts are selected based on similarities and wall color. I understand it is useful to provide details like these. But they are not really the contributions claimed by the authors. -Some of the sections feel repetitive. For example, the workflow and details of the functionalities, the description of the conditions. Consider combining each of them using subsections. -Several shortcomings of the study methodology: --The performance metrics (# clicks, # movements) need justification. I know these are common ones for task analysis, but why are they useful in interior design? One could argue more trial and errors could help generating better designs, which could result in more clicks and movements. --The Usage evaluation reads more like the evaluation of the effectiveness of the template & suggestion, which seems to deviate from the focus of this paper. Minor points: -Some of the technical details can be moved to the workflow discussion for clarity. For example, the system retrieves the top-20 focal arts, that the user can choose between color and style compatibility when choosing auxiliary art items. -I wonder if the user interface allows the user to only show the design canvas (without the other panels) so they can get a better sense of how the design will look like. The demo video doesn't really show how the mixed reality interface look like in action. -Figure 11a is rather trivial for describing how ""replace"" works, perhaps it does more than just replacing an art item like auto-resizing?""",2,0 graph20_54_2,"""The paper presents an AI powered mixed reality interface to compose image galleries. The abstract and introduction could be improved. Specifically, the problem could be introduced more carefully and a clear line of argumentation that highlights how the authors solution, i.e., the mixed-reality gallery interface, addresses the outlined problem could be provided. The related literature section suggests a lack in literature for gallery wall design. While this is not my area of expertise a database search reveals several paper that address virtual reality gallery design. Besides that just a lack in literature is not a sufficient motivation. Further, there has been plenty of work done on augmented and mixed reality for gallery design that there is no sufficient argument for a novel area. The background section focuses on interior design applications and automated layout design, which diverges from the introduction of the paper. See [1,2] for work on AR and galleries. Even if the work produced for art galleries is not what the authors intended to do, differentiating their work clearly from the work done before would be appreciated. The approach, process, and design of the AI powered gallery recommender is the focal piece of the manuscript and sound in development. I am not an expert in machine learning, but the interface to use AR to modify and create wall galleries is well constructed. I can see that this part could be of interest to the GI community. The user evaluation presents two groups, but within each group you find different conditions. I seems the others conducted two different studies. Considering that expertise or interest might affect the presented data, it would be good to know from what kind of company participants were recruited from. The generated insights are inconclusive. The confusion around the study design and the lack of insights make it very challenging to interpret the results from the user study. This section is not ready for publication. In summary, I think the manuscript has too many flaws in the current stage. The introduction lacks clarity, the related work section is incomplete and partially disconnected from the introduction. The system is sound, but the user evaluation incomplete and could be better presented. I believe that the work can be turned into a valuable contribution, but , from my perspective, more work is required. 1. Leue, M. C., Jung, T., & tom Dieck, D. (2015). Google glass augmented reality: Generic learning outcomes for art galleries. In Information and communication technologies in tourism 2015 (pp. 463-476). Springer, Cham. 2. tom Dieck, M. Claudia, Timothy Hyungsoo Jung, and Dario tom Dieck. ""Enhancing art gallery visitors learning experience using wearable augmented reality: generic learning outcomes perspective."" Current Issues in Tourism 21, no. 17 (2018): 2014-2034. """,2,0 graph20_54_3,"""The paper has been written clearly. It addresses the specific problem of gallery wall design and how to ease the process for non-professional designers. This is an interesting problem and the paper presents an intuitive and effective solution. It follows a user-centric design process, talking to designers first, that effectively motivates the design for the workflow. It further uses computational approaches to drive focal and auxiliary art suggestions. It then describes the interface features available to the users. The system design is methodical and it is delightful to read through. The paper presents a unique contribution both from an MR perspective as well as from a design perspective in easing gallery wall design for regular users. However, I have a few questions and concerns - 1. The paper does not ground its decisions with references in multiple places. Some instances - a. Why select the neighbor colors and complementary colors? Are there any references that suggest that this is the optimal way to get compatible colors? b. Is the database public or described anywhere? c. Are there any references or literature on gallery wall clusters that refer to how they are always based on this idea of focal+auxiliary? 2. The user evaluation is surprisingly sparse in user's subjective feedback and usability scores. While it may be difficult to compare the MR usecase due to the FOV constraints and arm fatigue issues, some insights here will be very useful. As it stands right now, the evaluation does not inform much beyond showing that users are able to use all three interfaces, which is not saying much. 3. What were the instructions for time for the study? Were the users asked to finish each design within a particular time or were they asked to do the best job possible with flexible timings? 4. Why weren't tags used for auxiliary item suggestion, only color and visual features?""",3,0 graph20_55_1,"""This paper presents the design and evaluation of visualization concepts using transparent displays to assist mobile crane operators to perform safe operations. The contribution of the work is the visualization concepts and the feedback of the visualization concepts from mobile crane operators. The three research questions are well motivated and logically connected. The method used is appropriate to generate low-fidelity paper prototypes of visualization concepts and to understand how the concepts were understood and could be improved with target users---mobile crane operators. The details of the method are well presented and the illustrations are helpful for readers to comprehend the method. The paper is well written and easy to follow. That said, the paper does have limitations. The visualization concepts were only tested with a low-fidelity prototype and the focus was more on the understandability of the concepts. However, there remain questions to explore. For example, the contexts of operation tasks are not considered in all tasks. Operating a mobile crane on a typical construction site is different from operating it on a city road. The types and intensity of information that mobile crane workers need to attend to and take care of are different. Such contextual challenges are hard to simulate and construct with low-fidelity paper prototypes, which are known to be limited in their interactivity. It is necessary to acknowledge this limitation and other similar context-induced challenges that the current method is unable to handle. In terms of visualization concepts, the visualizations of swinging of the lifted load (figure 14) have only considered the swinging degree but not the direction of swinging. Swinging back and forth can cause a different effect on a mobile crane than swinging left and right. Was the swinging direction intentionally left out in the visualizations? If yes, what was the rationale? It is interesting that mobile crane workers commented that some visualized information was not necessary simply because they could directly observe such information (e.g., swinging of the lifted load) or such information has already been shown (e.g., on head-down display). This makes me wonder there are other types of visual information (e.g., directly observable information and information on the head-down display) that must be taken into consideration when designing the visualization concepts. This point should be discussed in the paper. In sum, the submission makes a valid contribution. With the comments incorporated in the revision, I would feel confident to support the acceptance of the submission. """,3,1 graph20_55_2,"""The paper is well-motivated, but I do not think this paper presents sufficient contributions for acceptance. First of all, the contributions are weak. The formative study and the low-fi design prototype may not be strong enough for this conference. I understand the actual deployment is challenging, as there are many safety concerns, but still, the authors could develop and evaluate the proposed design through VR head-set with 360 photos of construction site + visualization, or something similar. It is also uncertain how this visualization can be integrated with the other sensor values, such as wind, proximity sensing, etc. I know the authors argue this is beyond the scope of the paper, but in that case, the contributions seem a bit weak. Second, the authors did not sufficiently review the literature. There are tons of work of see-through display or projection mapping for information visualization or instructions. For example, from the classic one like Feiner's ""Knowledge-based Augmented Reality"" which has over 1300 citations, to relatively recent ones like Willet's ""Embedded Data Representations"". The authors should review these prior works that explore visualizations to increase awareness with augmented reality. There are also many missing works in the immersive environment for safety purposes. Even limited in this specific application of crane operations, I was able to quickly find: - Development of user interface for teleoperated cranes - Attention-based user interface design for a teleoperated crane - Using affective humanmachine interface to increase the operation performance in virtual construction crane training system: A novel approach - SimCrane 3D +: A crane simulator with kinesthetic and stereoscopic vision - Multiuser virtual safety training system for tower crane dismantlement Although I am not an expert in this domain (AR/VR for safety applications), it sounds like the literature review of this paper may miss some important prior works. There are also some minor points, such as the presentation of the work can be improved (e.g., Figure 4 is almost unreadable) or concerns about how the findings of the paper can be generalizable beyond this specific application (given the author's claim about the design guidelines is the main contribution), but my major concern is the first point (i.e., the significance of the contribution). Therefore, I would not recommend for acceptance. """,2,1 graph20_55_3,"""The authors have prototyped potential transparent display based solution to bringing critical information closer to the person operating a mobile crane. The authors identified a problem of information disconnect wherein the operators are looking out through the glass to move the crane and all the supplementary information is presented on a screen in the bottom corner of their field of view. The proposed solution identifies critical information and surfaces it in the field of view by leveraging transparent displays, so as to minimize any obstructions to the view. Three HCI researchers designed various visualizations to present the critical information, the authors created low-fidelity prototypes to conduct initial testing of the usefulness of the visualization, and recruited six crane operators to undergo an interactive session where they evaluated the different visualizations and its utility in their own workflow. The paper presents an interesting new domain to the HCI community and provides an initial study toward what can be a longer term research project aimed at building out such critical, real-time systems. Although the paper was written well-enough to comprehend, I found the sectioning of the content to be non-traditional. Few examples: - Research questions which would ideally be mentioned near the end of introduction section were first introduced in the methods section. - Parts of method and design of the study was mentioned in the results section. I would ideally prefer to see the system design and rationale before diving into the results. I would point the authors to Professor Wobbrock's guide on writing HCI research papers for structural guidelines which I am talking about[1]. While I appreciated reading about the study protocol which was something different and new for me, there were some limitations which could have been addressed to further improve the paper. - Figure 7. I understand developing working prototype was difficult. However, it would have been easy to print a post with the view that is being shown to the operator to get a 1:1 ratio of the view and the displays. In the current set-up the operator has to do additional mental calculation of scaling the displays for potential use in their workflow. - Some of the findings and discussion around preferences or what information operators would like to see and where, could have been gathered earlier through some formative interviews or surveys. Some additional comments to further improve the paper: - Figure 4 needs to be redone. Increase the contrast even if it reduces the quality of the image, as currently, the image is quite illegible. You may even consider shortlisting 3-4 of these images and enlarging them for added clarity. - Section 3.1 In coming up with the list of information which is critical, was any expert consulted? Is any member of the research team and expert in the area? If so, highlighting that may alleviate any concerns regarding the validity of the choice of parameters. - Section 3.2 Are the three researchers mentioned here also the authors of this paper? If so, call it out for added clarity. - Section 3.2 Given the heavy restrictions on transparent displays (only available in two colours, can only present static visualizations) why did the authors not consider other alternatives such as AR glasses (e.g., Google Glass, Microsoft Hololens). A rationale of this would be helpful to have in the paper. - Table 1, unless information of age and experience is highly critical, I would recommend presenting them as ranges so as to further obfuscate the participants and avoid any unnecessary identification. (e.g., age between 30-40, experience between 10-15 years) - Section 3.3 It was not clear what authors meant by ""forms"" on page 4. (There were ten forms for each concept ...) Overall, even though I point of several ways in which the current submissions could have been stronger, the paper does have merits. It presents a novel problem area and potential ways of addressing them. As such, I do not see a strong reason to reject the paper in its current form. There are several small changes I suggested above which the authors can make in the camera ready submission to improve it. Reference: 1- Catchy Titles Are Good: But Avoid Being Cute. Jacob O. Wobbrock """,3,1 graph20_56_1,"""The paper is a little hard to follow due to poor grammar and wording as well as a general lack of structure and focus. It is not really clear what the authors want to achieve and why it is relevant. As far as I understand, the idea of the paper is to use the output of a greedy sphere packing algorithm for 3d volumes to define a rotation invariant shape descriptor. To this end a voxelization of the object is used and spheres centered at voxels are greedily added such that they are inside the object, have maximal radius and don't intersect with previous spheres. The distances of consecutive spheres are then used to compare shapes. I see several problems with this approach. I agree that for an arbitrarily fine discretization the method could identify different rotated versions of the same object. Since the discretization level will be limited, it can always happen, that a rotation of the object will lead to a completely different relative sphere placement, affecting all subsequent spheres arbitrarily. So there will be no guarantees for finding pairs of rotated shapes. This instability can also lead to vastly different signatures for slightly deformed objects, which is highly undesirable for a shape signature. Just checking 30 datasets does not convince me that this is not an issue. Another problem is the complete lack of comparison with respect to competing methods. At least a comparison to a very basic algorithm, like alignment using PCA or ICP, is necessary here. The implementation section on the other hand is far to detailed and presents some details that are almost trivial in my opinion and do not contribute to the understanding of the algorithm. Figures 9-11: It is not a good idea to put the patient ids on the x-axis and plotting results as a piecewise linear graph. The shape of this graph does not convey any information. A histogram plot should be used here. In conclusion, I do not think that the presented approach is a useful shape descriptor. Unfortunately the evaluation could not convince me of the opposite. Therefore the paper should not be accepted.""",1,0 graph20_56_2,"""The problem statement of this paper is very vague. Apparently, authors have already published a greedy sphere packing and they try to make it rotation invariant or experiment its properties under an arbitrary 3D rotation. Reading the paper, I could not figure out what are the challenges of this problem. Why if we have a set of spheres, we cannot simply rotate all of them against a point or axis and preserve the desired properties such as distance between the sphere centers (apparently this was the key in defining the feature). Is it due to re-voxelization? Why sphere packing at the first place? Why not simply using voxels? ... The paper has not been motivated well. Aside form the lack of clarity of the intention of the paper, it suffers from presentation issues. Teaser has no informative caption and it is confusing to see the first Figure with graphs that are not fully understandable and in the text there is no reference to this figure. The same for many other figures in the paper: no reference in the text, no informative caption (e.g., Figure 4, 5, 6, 8, etc). Introduction is a mix of related work and motivation and the paper suffers from the lack of having an actual related work and also a good motivation. There are many papers related to shape descriptors and also sphere packing and they have not been cited in the paper. Unnecessary/obvious information is given. Providing rotation along different axis is not necessary. These are common knowledge in the Graphics community and they are not needed to be repeated in the paper. Instead the algorithm (which does not have a number in the paper) needs more explanation and a better presentation. Aside from many problems about presentation and also technical issues, the contribution of the paper is unknown and marginal and I do not believe that it advances Graphics state of the art. The paper is clearly below the GI acceptance bar.""",1,0 graph20_56_3,"""The paper's exposition is somewhat confuse. It is not easy to figure out what the paper actually aims to ultimately solve or show. Nowhere does it state unambiguously what it aims to achieve, what problem is to be solved or what question is to be answered, and in what sense the state of the art is supposed to be advanced. At some points in the paper (such as ""In this paper we considered the set of spheres as shape descriptors"") it seems that the paper's main proposal is to use the packed spheres (their radius and distance sequences?) as shape descriptor. If this was the key intent of the paper, comparisons with some of the many previous 3D shape descriptors would be necessary (which the paper does not deliver) -- and the entire SRS-based motivation and context would make no obvious sense. In terms of contribution, it appears that the paper proposes no novel algorithm. The greedy ('largest sphere first') sphere packing algorithm listed on page 5 was apparently described in other work by the author(s) (Anonymous 2019), and also mentioned in various previous papers on sphere packing problems, such as [2] or [1]. The paper's contribution therefore boils down to an empirical evaluation - of this sphere packing approach's rotation invariance (in a discrete voxelized setting). This evaluation provides only limited insight. This is due to two reasons: 1) It is clear already from the very definition of the procedure that the result can be highly unstable with respect to tiny perturbations of the input (as, e.g., induced by the rediscretization when rotating). 2) The analysis performed is of very questionable quality: the paper does not state in detail what the analysis is supposed to evaluate, and does not justify the choice of measures reported and compared in the context of SRS. Concretely, what the paper evaluates is the amount of change in (the ordered list of) sphere radii and in (the ordered list of) distances of consecutive sphere's centers. It remains unclear why invariance of these two measures is of relevance - whether in the SRS context or otherwise. After all, two geometrically identical (just differently ordered) sphere packings can differ largely in these measures, while two geometrically very different packings can be indistinguishable from these measures alone. The paper's claim that this is a ""useful"" ""approximation"" is not underpinned. The choice of distance ratios instead of distances is not justified either. Furthermore, the paper only considers whether identical shapes (up to rotation) have similar values, not whether different shapes have dissimilar values; this is not a reasonable way to evaluate a shape descriptor (a trivial 0-descriptor would be considered perfect under the paper's methodology). There are various language issues. Most parts remain understandable, but several sentences are hard to make sense of. At several points it is unclear whether there is a language issue or there is a formally incorrect statement. One example is the sentence ""There are three major techniques to prove the rotation invariance""; what follows is not a list of proof techniques. It did not become clear how the following landmarking/alignment discussion is related to the paper's content. There are further formally imprecise or incorrent statements, such as ""our algorithm ... is defined as a set of ... spheres"", ""sphere centers ... respresent a spatial template as a graph"", ""intersection of the sphere's centers"", or ""a measure called epsilon-rotation invariant"". ""Experimental results demonstrate the effectiveness and efficiency of the proposed method"": neither the effectiveness nor the efficiency relative to the state of the art is demonstrated in the paper. It furthermore did not become clear what precisely is actually meant by ""the proposed method"" in the context of this paper. ""... are actually within epsilon value criteria"": this appears to be a trivial statement as epsilon is nowhere specified; it appears that any outcome could be considered to meet this criterion; there is always some value such that all observed errors are smaller. ""high probability that the 3D volumes are similar"": the paper does not establish this. The difference in radii before and after rotation is put in relation to the ""risk of developing recurrent disease""; it remained unclear what basis this statement is made on. The title is inadequate because the paper does not describe a novel spheres packing algorithm - and the algorithm it describes is not epsilon-rotation invariant in any reasonable sense of the term (and a formal definition is not given in the paper). The paper motivates the sphere packing problem that it considers using SRS. The description of that technique and the problem's relation to it are, however, not clear; a reader not familiar with the detailed principle of SRS will not understand the description in the paper. Previous work in the field, such as [1] or [2] does a much better job at concisely conveying the relation and relevance. The conclusion starts with ""Our novel medical visualization technique"". It remains unclear how that is related to the shape descriptor focused paper content. Comparison to previous approaches to the sphere packing problem, such as [1], [2], or previous work on shape descriptors, is missing. The detailed (and very toolset specific) description of the implementation is unnecessarily verbose. E.g., there is no need to spell out standard 3D rotation matrices, etc. The discussion of related work is insufficient. It focuses on shape descriptors but almost entirely ignores SRS and previous work on sphere packing and related problems. There is no good reason to mention ""Slicer3D"" in the title, considering that the described method and evaluation are not Slicer3D-specific in any way. The paper does not discuss the fact that the rotation variance comes from discretization artifacts. It does not explain why invariance (in particular invariance of radii and ratios of center distances) to rotation-and-rediscretization induced shape perturbations is relevant in any context. [1]: ""Packing of Unequal Spheres and Automated Radiosurgical Treatment Planning"" by Wang (1999). [2]: ""Use of Shape for Automated, Optimized 3D Radiosurgical Treatment Planning"" by Bourland and Wu (1996).""",1,0 graph20_57_1,"""The paper presents a modular annotation, visualization, and inference software combining machine learning approaches from computational language and vision research. The software leverages ML visualization approaches as a canvas for annotating events. This is a timely topic considering the need for high quality annotation and need for convenient systems that make use of multi-model approaches. While aspects of the presented system have been presented in previous software, the presented system is novel in the completeness and maturity of the development. I believe that this work has potential to support many who use annotated video data as a source for their research or product. The paper is well written and structured and provides a sufficient level of detail to follow the approach taken. A section on the limitations and future work would have been appreciated. """,4,0 graph20_57_2,"""Id like to thank the authors for submitting this work to GI2020. The paper describes a comprehensive data annotation tool that has components and features for language, vision, and relations labeling that are customizable and stitchable to suit various labeling needs. Taking a perspective of technical HCI systems research, I focus on design rationales and evaluation of the newly proposed tool. While the technical and engineering effort involved is not getting unnoticed, I would like to raise a number of issues to contextualize this research to help the readers understand the contribution more clearly. I find it difficult to position this work. Theres a specific area with much history and a large body of work in microtasking, crowdsourcing and crowd workflow that focus on how to annotate ""well"". I suggest the authors consult this area to strengthen the narrative around labeling task component of the work. (for example, Michael Bersteins group and Dan Welds group have a long list of publications that the authors can reference - apologize for namedropping, but there are just too many relevant papers to list.) The general direction suggests the tool as a human-ai collaboration tool, Does it mean the authors want this tool to be seen as a machine-assisted labeling tool? That is a very specific type of human-ai collaboration, and is not the first thing that pops into my mind when I hear ""human-ai collaboration"". The use of terminology can be more concrete and specific throughout. Similarly, ""XAI"" also is a very catchy term, but can mean multiple things in academic discourse. The first sentence in the abstract, Artificial Intelligence (AI) research, including machine learning, computer vision, and natural language processing, requires large amounts of annotated data. is false. There are specific approaches (possibly suitable for specific problems) that researchers take that require annotated data. There are other approaches in reinforcement learning, self-supervised learning, probabilistic programming that do not need labels to work. Its an incorrect claim to argue the whole field requires annotated data. I urge the author to correct this sentence. While the authors claim their tool facilitates efficient annotation, the evidence isnt to be found in the submission. What evaluations were done to prove its efficiency? And what other tools are considered baseline, and other candidates? I find the biggest potential impact of this system to be the synergy of one place for all label brings(i.e. Figure 8) However, the use cases (Application section) list a number of different applications the system can be / has been used for without really demonstrating what clear benefits there were over other possible options. The current submission is a good presentation of what the system can do. To help the readers understand the contribution better, I would like the authors to highlight the arguments to why certain design choices of the component were meaningful, why it makes the tool more usable than others, or what benefits the tool brings to the users that were previously difficult to attain. """,2,0 graph20_57_3,"""This paper presents an annotation tool to assist the labeling of language or vision datasets as used for machine learning. The claimed contribution of this tool are modularity and, in a way, interoperability. While I can see the merits of a unified tool, I am not sure what is proposed in this paper qualifies as a novel contribution to the field of HCI. The tool may indeed be useful, but the only claim to this comes from the use for some of the authors' own projects -- while this is presented instead as a universal tool addressing the issue of different research labs using different tools. Given this claim, one would expect some external validation, grounded in HCI methodology. Secondly, I am not sure why there is a problem in the first place. Surely over the past many decades, if indeed the lack of a standardized tool is such a dire situation, there would be attempts at creating such a solution. The motivation for this being a problem is not argued convincingly enough in the paper. One possible approach to this would be again grounded in HCI methodology, such as conducting interviews with current users of annotation tools. Finally, in my almost 30 years of experience, I have used various tools for these purposes, and none of these was proprietary to the labs where I conducted research. In fact, some annotations tools were provided open source or free from various universities or institutes -- this suggests that there aren't any insurmountable barriers to sharing (and potentially, standardizing) such tools. Given this, it is not clear to me whether the proposed interface actually solves a real problem. This doesn't mean that the proposal tool is not good, but it means that its value-proposition may not be that one claimed by the authors in the abstract. One minor point that I would encourage the authors to consider in a subsequent revision of the paper: the ""integration of language and vision in machine learning applications"" is not necessarily new, and more important, it is the other way around, especially for language (it is recently that ML has started being the de facto approach to language processing). Overall, in my view, this paper may be of interest to the community in terms of becoming aware of a potentially useful tool. However, there is not enough in terms of research contribution for a full-length paper (a demo or poster may be more suitable).""",1,0 graph20_58_1,"""This paper explores whether Bi and Zhai's dual Gaussian model of touch accuracy, originally specified wrt off screen initiation, also works with on-screen initiation. They find that it does, albeit with slightly different fitting parameters (Figure 12). There is relatively little that I can say about this work that is constructive. My read-through of the paper is that it is a relatively straightforward replication of Bi and Zhai but using an on-screen start location instead of an off-screen start location to measure touch point accuracy. Overall, the experiments are similar to Bi and Zhai, the results track Bi and Zhai, and some rationale for measured differences (e.g. in fitting parameters) are suggested. The only question that I struggle with is whether this contribution is that suprising. I would love it if the authors could have queried Bi and Zhai to determine why the off-screen starting position was suggested in their work. Do they have data that contradicts the data in this paper? Does the difference in fitting parameters pointed to in Figure 12 result not from an age difference in participants but from on-screen versus off-screen start? Given the results synthesized from this work, it seems strange that Bi and Zhai were so clear as to restrict their model to off-screen start. Bi and Zhai's ""Off Screen Start Target Acquisition"" does little besides clarify that they focus on off-screen start. I truly wonder why. Perhaps the authors could, prior to publication, contact Xiaojun or Shumin and ask them? In summary, given the restriction in Bi and Zhai's original paper, and given the similarity in execution of this paper, I find this contribution worthwhile, if minor.""",3,0 graph20_58_2,"""The paper reproduces existing studies to validate applicability of an existing touch accuracy model [10] in an on-screen-start setting. The paper describes in detail the existing model. It then presents a series of experiments (some which reproduce existing studies). The results show the model offers a decent estimate of touch accuracy in on-screen-start setting even when it disregards the distance to target and only accounts for its width. The main strength of the paper is a comprehensive set of experiments. The motivation for the work is good and the related work mostly comprehensive. The paper reproduces existing experiments, which should be treated as a hallmark of good science. However, there is one main weakness of this paper: the reasoning behind removing an important parameter (A), which has been shown in many existing studies to be a factor in pointing, from the model cannot simply be that the resulting estimate of touch accuracy is good enough. Models should not simply disregard parameters that are fundamental to a behavior. The experiments in the paper do not necessarily show that A plays no role in touch accuracy. Leaving out A could reduce the accuracy of the estimateit is just that the experiments do not show it. Also, it is possible that the model still predicts touch accuracy somewhat well because the range of possible A values on such small screens as smartphone screens is relatively small. But even then, this is not a valid reasoning for removing an important parameter. Instead, the paper should try to integrate A parameter into the dual gaussian touch accuracy model. The submission should at least discuss (if not include into their model) other factors (e.g., time-based cost of missing the target (Banovic et al., 2013)) that could affect the error rate or the probability of hitting/missing the target. In summary, the paper tackles an interesting topic and the reproduced studies could help strengthen our understanding of touch accuracy. However, the modeling approach requires significant changes prior to publication. Thus, I encourage the authors to continue this interesting work and I look forward to next iterations of this work. REFERENCES Nikola Banovic, Tovi Grossman, and George Fitzmaurice. 2013. The effect of time-based cost of error in target-directed pointing tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 13). Association for Computing Machinery, New York, NY, USA, 13731382. DOI:pseudo-url """,2,0 graph20_58_3,"""Through four studies, this paper proposes to lift a theoretical limitation in the application range of the Dual Gaussian Distribution Model, namely that it could also work when touch acquisition occurs from a touchscreen to that same touchscreen. This paper is well written and shows good experiment design and consistent analyses. However I found the theoretical argument to use the DGDM in screen-to-screen pointing quite hard to follow, even though it is the main point of this article. I also have a number of concerns that I would like to see addressed in a revision. # BLAMING AGE Honestly, I found it quite a weak argument to put the lack of generalization of the approach on age (p. 10). Age difference is one among many possible explanations, but one in which this paper rushes in nevertheless, at the expense of any other. The paper doesn't even acknowledge that this lack of success could simply be due to a lower external validity than the authors hoped for. As the authors state themselves p. 9, ""A common way to check external validity is to apply obtained parameters to data from different participants."" Checking can also come up negative, and that is ok. These results remain valid, even if the proposed approach is not as context-independent as hoped. Perhaps worse, the paper immediately jumps from this patched-together explanation, straight to calling it a ""novel finding"", and then to suggesting design guidelines from it, as if it was now a proven fact. I think this part needs to be drastically shortened or even removed, in favor of a more realistic discussion about generalization---and possible lack thereof. # ""UNLIMITING"" I found it quite hard to understand the point of Bi et al. for rejecting screen-to-screen pointing, at least the way it is explained in this paper. That, in turn, makes it quite difficult to understand the counter-argument developed in this paper---and especially since ""The evidence comes from a study by Bi et al."" (p. 4), which makes one wonder why Bi et al. put that ""limitation"" up in the first place. One example, in the last paragraph before EXPERIMENTS (p. 4), a point is made that goes like this: - a lack of effect might be due to A values that are too close to each other, - even if A should in fact have an effect according to some model (Eq. 12), - and for some reason that makes it ok to consider that screen-to-screen pointing is compatible with Bi et al.'s model (which does not consider A). # DESIGN APPLICATIONS I am not sure that the possible applications of this model are well described or argued for in this paper. The described examples feel rather artificial. - In the example given in p. 1 (choosing between 5 or 7-mm circular icons), it is unclear why the designer would need a model, or to know by how much a 7-mm icon would improve accuracy. It seems that this sort of design issues can be solved using threshold values under which users simply cannot accurately acquire a target. I assume that strong design guidelines already exist for this? - Similar argument about the second and third paragraphs in p. 9. The level of detail argued here seems quite artificial, e.g. ""If designers want a hyperlink to have a 77% success rate"". I doubt many designers would consider a clickable, 2.4-mm high font or icon on a touch screen in any case. I might be wrong. - ""by reducing the time and cost of conducting user studies, our model will let them focus on other important tasks such as visual design and backend system development, which will indirectly contribute to implementing better, novel UIs."" (p. 2) That seems quite a stretched ""contribution"", at least in the absence of actual data about how long designers do spend on testing width values today. # AMOUNT OF ERROR Throughout the paper, prediction errors (additive) up to 10% are described as small, and that is surprising (5% in Exp 1, 10% in Exp 2, 7% in Exp 3, 10% in Exp 4). To the best of my understanding, these are not percentages of prediction error (e.g. going from 50 to 55 is a 10% increase), which would be more ok. These are differences between values that are already expressed in percents. In my experience, many pointing studies have error rates ranging from 0 to, say, 15%, perhaps more when the tasks or input devices make it particularly difficult. 2-mm targets on a touch device could definitely count as difficult. However, that still makes a 10% prediction error quite high in my book, and worthy of contextualization. Perhaps I misunderstood something. >> ""the error rate difference was |29 38| = 9%. Similarly, their 2D tasks showed only small differences in error rate, up to 2% at most."" - First, for a metric that can often be between 0 and 15%, 2 and 9% are not ""similar"" values. Second, 29% and 38% error seems alarmingly high. # CLARITY Removing tap points that are further than a fixed distance away from the target center will likely affect W levels differently. I imagine that more of these errors occurred in the W=10mm condition. This would be good to report, either way, even though only a small number of trials was removed overall. Fig. 12 should also show the actual success rates measured in these studies.""",2,0 graph20_59_1,"""This paper explores the design of user interfaces (UIs) for visual guidance in the context of interactive and navigate-able 360 degree virtual reality (VR) systems. They present and describe a software system that enables the interactive and navigate-able 360 degree VR environment, within which they implemented four visual guidance UI techniques. Overall the research objective seems interesting, but I would have appreciated a clearer overall presentation of the research and the specific contributions, and a clearer motivation of the research gap. I also believe that the the study design holds some potential flaws that need addressing. The introduction mixes in some lists of related work that would be better suited in the respectively named section; instead, a clear and focused motivation of the research gap being explored here would benefit the paper. The authors motivate the system a little bit, but not so much the visual guidance UI. I think it would be very helpful as well if the introduction provided a summary of the paper's contributions. This would also help to assess whether the number of pages is well matched to the size of the contribution. Although the end of the introduction seems to focus more on the study (as does the Discussion), I am pretty sure that the authors are presenting the RealNodes software system as part of the main contributions, as the paper goes into a lot of detail on this - to clarify, I agree that this can represent a contribution, however the section in which this is described could use some more general / abstract overview information before deep-diving into the specifics, and some more ""sign-posting"" in the text's structure. In particular, I think that ""3.4 - Visual Guidance User Interface Methods"" should get more focus compared to the other aspects, as they represent the focus of the study presented in the paper, as opposed to the interaction objects and scenarios that are then described in detail in 3.5 & 3.6. A clearer summary of the systems' overall benefits as well as a bit more motivation on why *these* four UI visual guidance techniques were chosen for the system would also be helpful. The related work addresses navigation techniques, wayfinding guidance, visual transitions, and interactive elements. These are largely presented as as a list; the paper would be stronger if it provided a bit more of a summary that outlines the research gap addressed by the authors. Study: In an experimental within-subjects user study (N=24), the paper explores effects of four different UI techniques on engagement, simulator sickness, and completion time. Some general notes: - participants' ages are not reported. gender distribution could be a bit better. prior VR experience not reported. - UI techniques were counterbalanced, but were the hiding spots counter-balanced as well? - effect sizes are not reported - some of the descriptive data reported in-text would be better suited for display in a table - what is the scale of the SSQ, does it only go up to 3.5 as indicated by Fig.17? If so, then the nausea scores aren't that great for any of the conditions. - similarly, what is the scale range of the UES SF? Finally, and most importantly, the discussion then states: ""Arrow is unique compared to the other UIs in that it stays active on screen."" Before this statement, it was not clear to me that the Path technique was not visible continuously. If only Arrow is continuously shown, then the study design seems unbalanced, comparing 1 continuous wayfinding techniques with 3 discrete target marking techniques. That Arrow then leads to significantly faster completion times hardly seems surprising. Overall, I find that the potentially unbalanced design and the missing information in the study (particularly the effect sizes) make it very hard to judge what is being investigated here. Without addressing these issues, I do not think that this submission is quite ready for publication. Minor points: - ""reignited"" -> ""re-ignited"" - ""[21] [2]"" -> bibentry21, bibentry2 -> [2, 21] (can use a parameter for the package to tell LaTeX to auto-sort the citations) - ""demonstrates how we build upon and differentiates from prior research"" -> ""and differentiate our work from..."" - Fig. 1 is a nice overview but also to a degree redundant with Fig.5, Fig.6, Fig. 7, and Fig.8""",2,0 graph20_59_2,"""#== SUMMARY ==# The authors present RealNodes. VR users can freely look around in a pre-recorded 360 video. Multiple 360 videos are shot at different key locations. Users can navigate between them using gaze direction and walking in place to simulate moving to different locations in space. For each 360 video, particular gaze directions are mapped to different pre-defined paths available for navigation. The system provides visual guidance to communicate, which directions the users can move to from their position. The authors compared four different visual aids and found significant differences between the techniques in terms of completion time and preference. In the paper, the authors describe the implementation of the system and report the study results. #== REVIEW ==# The authors present a well made system with a nicely executed study. The paper is well written and easy to follow. However, there are issues with the novelty and generalizability of the results. In the following, I would like to elaborate on those issues. The amount of related work is somewhat sparse, but there are some places in which related work is incorporated well to inform the design. For instance, the guidance techniques are inspired by previous work. However, other parts, like the claimed ""novel additions"" or ""novel changes"" in section 3 are not backed by related work. In fact, those claimed ""novelties"" are just specific ways of implementing the desired system by customizing existing assets. Therefore, they have no novelty from a research perspective and, again, not positioning them in literature makes those claims entirely unfounded. For instance, using three textures as the environment map and combining them, e.g., through blending is described as novel, but this is just a specific way of implementing the desired effect. Therefore, while the implementation is well made, there is no contribution from a technical perspective. The descriptions about the implementation details can be shortened and the concept can be explained in more general terms than describing it around Unity3D assets. Furthermore, implementation details like for instance preloading videos into memory are not essential for reproducibility. That said, the description about synchronizing the walking speed (i.e., the oscillation of the HMD) with the video playback is the most interesting of the presented ideas. However, this part is not fleshed out and ends up as another minor addition or implementation detail, even though it might have some potential to be explored more in depth. The study is well designed and executed. However, it is hard to generalize or draw conclusions from the results. While there are significant differences, those are bound to the specific design of the four techniques and are prone to many confounding variables. For instance, ""path"" almost looks like an incomplete implementation of a visual aid compared to arrow. The significant factor might be the fact that the arrow is more integrated and curved while the path is always straight along the viewing direction, which might be misleading. The arrow (i.e., including the arrow tip) in itself might have a very insignificant role, i.e., having a curved path might have yielded the same results. One simple way of mitigating this would be to rename the conditions, e.g., curved versus non-curved. However, the issue remains that there are many confounding variables and that the results are hard to generalize from. A closely related issue is that the techniques provide different amounts of information, making them hard to compare. This is again most apparent in the arrow versus path case. The arrow has all information to guide the user precisely to the closest waypoint, whereas the path only provides binary information, i.e., whether there is a waypoint currently viewed. With the increasing number of waypoints, it might even never be inactive, basically showing no information. As of now, the arrow implementation looks like a heavily improved path visualization as it contains a curved path plus an arrow tip. One approach to have those two techniques comparable would be to compare a curved path with a rotated arrow so as to make them display the same amount of information. Alternatively, the path could be aligned with the footage (e.g., like in Google Street View) instead of just being overlayed as an image. As of now, I believe that the path condition invalidates most of the results as we cannot derive that paths are in fact inferior to arrows as claimed in the paper. Lastly, the ripple visualization is very specific and distracting (as also pointed out by participants). Instead of a subtle ripple effect, it is very opaque and occludes the actual region of interest. While the other techniques can potentially be made comparable, the ripple technique has a lot of parameters (opacity, size, shape, amount of distortion etc) making it hard to compare with the other techniques, which are more basic. In more general terms, the goal of the study has to be made more clear. What are preferences based on? Is it a trade-off of the amount of information, distraction and aesthetics? In summary, the system is well implemented and the study is well executed. However, it is hard to tell what the contribution of this paper is, because neither is there technical novelty, nor do the results have clear implications. Therefore, I slightly lean towards rejection of this work. """,2,0 graph20_59_3,"""This paper proposes a VR video system which uses ""nodes"" to enable navigation in a video-based 3D environment. The user sees real 360-degree imagery captured at each node and can navigate between them. The authors present their implementation and a user study on different navigation/cueing techniques. Foremost, I think this idea is immensely cool and I appreciate the authors for realizing it into a fully functional system. Like old adventure games (e.g. Myst), this approach enables users to navigate a virtual environment consisting of pre-rendered content (in this case: real videos which would be hard or impossible to replicate as pure renderings). The system comprises a lot of neat ideas - walking-in-place locomotion (WIP), composited 3D videos (green-screen), interactivity with video cut-outs, and navigation between nodes. It really feels like the design of an old adventure game updated for VR. On the negative side, I found the study somewhat lacking. It lacks external validity as the authors did not compare against any existing baseline approaches (e.g. direct teleportation between nodes)/ Since teleportation is one of the most common VR locomotion techniques, omitting it for comparison seems like a significant limitation. It's a bit odd to state that Arrow was ""more than two times faster"" in the introduction; it rather seems like the Path condition is the outlier which performed significantly worse than the other conditions. The study also misses a good opportunity to gather subjective feedback from users on the nodes and the navigation scheme in general, i.e. whether it was intuitive or easy for users to understand and use. The paper should also cite some existing work on nodal VR experiences, for example: - M. P. Jacob Habgood, D. Moore, D. Wilson and S. Alapont, ""Rapid, Continuous Movement Between Nodes as an Accessible Virtual Reality Locomotion Technique,"" 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, 2018, pp. 371-378. Despite the study, I think the system itself is useful and novel, and I would argue for acceptance provided the authors tone down their claims about the study and its results.""",3,0 graph20_60_1,"""The submission presents an empirical user study with 24 participants that compares the participants' text entry performance (speed and error rate) in VR in two conditions: 1) using a Vive controller with raycasting, and 2) using a stylus on a touch-sensitive surface (tablet+stylus). The results of the study report that participants had higher text entry speeds and lower error rates using the Vive controller than tablet+stylus. The submission contributes to existing knowledge about people's text entry performance with two existing VR text entry techniques. The main strength of the paper is a good user study design, which follows most of the best practices. The study had 24 participants, which is more than most existing text entry studies in the lab. The work tackles an interesting and timely topic. However, there are a number of weaknesses that must be addressed prior to publication: 1) the contribution is unclear, 2) study design and data analysis have potential issues, and 3) there is no clear future work or takeaway points. The submission motivates a need for new VR text entry techniques that address existing challenges, but it does not describe how the two techniques or the empirical study address these challenges. The submission compares two text entry techniques that have already been studied. It seems like the intention was to show that tablet+stylus has a potential to outperform the controller condition (the baseline). However, it does not. This could be en interesting negative finding; however, it is possible that this is due to design issues with the tablet+stylus and the limitations that the submission points out in the limitations section. The information in the limitations section is not a contribution either because such design considerations are already known to the community (e.g., providing visual feedback that would map the user's hand to a virtual hand). Having a more robust implementation of tablet+stylus could provide more evidence that this technique is an alternative to text entry using raycasting. Although the study design is mostly commendable, it seems like participants performed only a small number of trials in a single session. In this case, it is usually better to average the typing speeds over trials and to compare the mean typing speeds and their variances using a parametric test (e.g., a t-test). However, ideally, the study would have multiple sessions over days to show the learning effect of each technique. Also, the statistical analysis could be improved. To analyze speeds over sessions (or trials which I do not suggest), the repeated measures ANOVA requires a post-hoc test to find where the differences are following evidence of a main effect of trial on text entry speed. It is a bit odd that error rates passed normalcy and sphericity tests because error rates (hopefully) tend to be skewed towards 0. The submissions should report the tests. The submission does not contain clear takeaway points that would suggest what the reader should learn from the work. This is reflected in a very vague conclusion and future work that only suggests minor improvement to the tablet+stylus technique based on the existing knowledge about typing in VR. In summary, this work has the potential to uncover new knowledge about how people entry text in VR that could inform the design of future text entry techniques. However, there are a number of weaknesses that need to be addressed prior to publication. Thus, I encourage the authors to continue their work and I look forward to their next iteration. """,2,0 graph20_60_2,"""This work presented a study which investigated the text input performance in VR. Two input method was compared: 1) using a tracked tablet & stylus, and 2) using a VR controller with ray-casting enabled. In general, the writing was smooth and I found the literature review (incl. Table 1) valuable and would certainly help researchers in this field. I also found some qualitative results very illustrating, e.g., some participants mentioned that extra fatigue was caused by moving their head up and down during the tablet & stylus session. Citing several existing work which used similar approach, this work proposed that some prior work failed to report text entry speed or error rate (Sec. 2.1). I doubt whether the motivation is convincing (unless the interaction techniques involved is novel or the study is designed and conducted thoroughly). In addition, several important details about the study conditions were missing: dimensions of the virtual keyboard on the tablet and the ray-casting keyboard. Only dimensions of the tablet were reported, but given Fig. 1, the keyboard size is still hard to get. The size & depth of the ray-casting keyboard (or FOV of each key) also affects the difficulty of typing in VR. Without these details, it would be hard to compare the results with other work and thus the entry speed or error rate would provide limited value. In summary, the literature review and the study analysis was good, but given the issues found in the study procedure, I found the contribution was rather incremental. I would argue for a weak reject. """,2,0 graph20_60_3,""" This paper presents a novel mechanism of providing text input in VR using a stylus on a tracked tablet. The paper describes their method of input and implementation, as well as the results of a user study comparing traditional keyboard input to their new approach, as well as an existing technique of raycasting to a keyboard. Overall, I like the aim of this work. Text entry in VR is far from a solved problem. The paper was also very easy to read and the methods were well described. I have a couple of concerns preventing me from recommending acceptance. The success of the technique and its benefit over prior approaches, the methods used for testing and analysis, and the presentation of the paper. Regarding the technique itself, it seems like a straight-forward idea. However, it does require the user to hold a stylus and tablet, which can be bulky and may not be compatible with a lot of VR tasks that require more unencumbered interactions. I would also be curious to see how this approach compares to a direct stylus-based text-input mechanism. I have not seen studies comparing the relative impact of being able to see the tablet directly, the impact of having to hold the tablet in 3D space vs. Resting on a surface, or even stylus vs. finger input on these traditional interfaces. Instead, the study presented compared the novel technique against a pretty common, commercial baseline and found little difference in terms of resulting effect on text entry speed. Regarding the methods, while the results are presented well, but it wasn't clear if it was a multi-factor repeated measures ANOVA that was run. It seems like there were independent one-way RM-ANOVAs run on trial and condition. There are no interaction effects mentioned, and so it's not clear exactly how the analysis was conducted, and if all multiple-comparison effects were accounted for. Lastly, the presentation of the paper was a bit troublesome in some places. In the introduction, the reader doesn't need a paragraph explaining the overall structure of the paper. Similarly, when presenting the results, figures should be referenced in parentheses, with the focus of the sentences emphasizing the synthesis of the results. E.g., 'there was no significant effect of text entry type on error (Figure 8)', rather than 'Figure 8 shows'. This is a minor issue, but something that will improve the readability of the paper. """,2,0 graph20_61_1,"""Summary: In this paper a study is conducted to analyze the performance of target acquisition on handheld virtual panels based on four variables: the width of the targets, distance from start position to desired target, direction of dominant hand movement with respect to gravity, and the angle of approach from start point to target. The results of this study claim that all of the above variables do significantly impact the time taken to select a given target. Based on these findings, some suggestions for more functional and user-friendly panel designs are made. Review: Overall the paper is good ergonomics in virtual reality is a worthwhile and interesting area of research, and the authors make some good suggestions for implementation. The related work section was concise and relevant, and for the most part got me up to speed on the technology relevant to the study. The related work mentions other kinds of virtual panels such as surrounding fixed and display fixed. It would be nice if there were a control condition with one of these other panel types in order to a) see if the results are generalizable, and b) see if any of the effects observed are due to also having control of the panel with the non-dominant hand. Confirmation that starting from the bottom and working against gravity is more fatiguing is an interesting result my assumption was that it would be negligible in such a study since regardless of the direction travelled for completing the selection, the user still would need to return to the start point after each trial, effectively retracing the same line in the other direction. Can it be confirmed that this effect is actually due to the effects of gravity, and not due to the fact that the angle of motion itself relative to the body is simply more ergonomic? Trying this myself in midair, the rotation angle of the shoulder joint is completely different between these two types of motion, and my immediate assumption for an explanation would be biomechanical rather than due to external forces. Also, it was noted that the user could move the non-dominant hand as a way to move the target panel closer to the pointer. If gravity were so fatiguing I would expect the users to do this more often when their dominant hand was starting at the bottom for economy of motion was this ever observed? My critique for angle is similar interesting, but are we sure this is because of gravity and not economy of joint rotation? The suggested solution to minimize the effects of gravity by having HVPs with larger horizontal widths than vertical, with high frequency targets being placed along the bottom is a good one and is easily implementable. Regardless of whether the observed effects are actually due to gravity or not, this suggestion should still result in a more ergonomic panel, as long as users typically approach such panels from the bottom when they havent been instructed to do so. Finally, the discovery of the existence of an ideal distance is very useful, but only if the typical starting point for a user in a non-laboratory setting is well-defined. In the real world would the user have some semi-constant resting point that they return to in between selections rather than just hovering above the previous selection until another one is to be made? If the latter then perhaps the ideal distance could be used for the spacing between tiles rather than where to place the most important tiles. It was found that larger width corresponds with a shorter time, but this was perhaps an obvious result. Since we cannot make tiles arbitrarily large, there doesnt exist some ideal panel size other than making them as large as possible based on the amount of necessary tiles/size of the HVP itself. """,3,0 graph20_61_2,"""Pros As the authors clearly state, handheld virtual panels are becoming ubiquitous in VR applications, yet no other paper has study how users interact with them. So, there is a need for a user study that evaluates different selection conditions. And this paper is at the correct time to start a discussion about the design of these menus. Clarity The description of the independent variables is difficult to understand, and this affects the understanding of the participant's actions. The paper will benefit from additional figures. For example, an image of the participant view during the experiment. Or a diagram that explains the angles and the arm movements using a human figure. Cons There are some problems with the experiment design. In specific with the control of the non-dominant hand movement. As far as I understand, participants were free to move both hands in any direction they wanted. Yet, the authors only require participants to bring the dominant hand to a start position, but not the non-dominant hand. If this is correct, then every try could be different, as the participants could have used different muscles. The authors explain some of their results by using the non-dominant hand-wrist movement, but they do not discuss how having the menu in different positions can affect the selection. See Paul Lubos' work on pointing and joint movement, e.g 10.1145/2983310.2985753. Also, the authors need to better justify some of their decisions regarding the experiment design and data analysis. First, why Width is part of the Latin-square, instead of Angle. Usually, in pointing experiments, distance and width are random variables as their effect are well know on the user performance. Second, the authors removed the incorrect trials for time analysis, but they do not explain why. Finally, in the whole-handed 3D movements in the air section, the authors mix previous literature about the virtual-hand selection and ray-casting selection. For example, 7 and 23 are Fitts Law extensions for 3D virtual-hand pointing. At the same time, there is previous work missing related to 3D ray-casting pointing, i.e. 10.1016/j.ijhcs.2010.05.001. Summary I think the topic is relevant and well-timed. However, I think the paper needs more work, both to make it easier to understand and to better explain some of the decision regarding the study design. My main concern is that the authors discuss the importance of the second-hand movement and position, yet they did not control it on the user study. Based on that, I don't think I can recommend it for acceptance.""",2,0 graph20_61_3,"""The paper studies target acquisition in VR environments when users select menus from handheld virtual panels controlled with the non-dominant hand. In such scenarios, pointing performance can be affected not only by the size and distance of the targets but also by the angle of the movement and gravity. The paper presents the result of an experiment that studies the interactions of these factors. This is a short 4-page paper with the appropriate depth, related work, and size of contribution. The paper is very well-written. Unfortunately, the submission does not include any video, thus some aspects of the task are not easy to understand. Overall, the paper is interesting and parts of the work are well executed. However, as I have major concerns about the experimental design as well as about the validity of some of the paper's conclusions/interpretations, I hesitate to argue for acceptance. My main concerns are as follows: The use of both hands (asymmetric bimanual input) is a key aspect that makes this task interesting and worth studying further. This is also part of the motivation of the paper (see Introduction). Unfortunately, the experiment does not clarify how the bimanual dynamics of this task affect user performance. I acknowledge that the paper discusses this limitation (Section 5.2) but I am not convinced. The paper mentions that participants could ""also move the non-dominant hand to move the target on the panel closer to the pointer."" But no further information is provided. I would at least need to know about the strategy that users employed to perform the tasks, as this largely determines the interpretation of the findings. Did participants move their the non-dominant hand to facilitate pointing? Notice that movement gravities are inverse for the two hands, so if gravity is the factor of interest, the aspects needs further investigation. Furthermore, the experimental design does not seem to control for the initial positioning of the non-dominant hand. I would expect some additional information about this issue. The effects that the authors observe may be due to physiological constraints that are specific to the angles and movement directions that they tested. Gravity may be less important. To support their interpretation, I think that the authors need to explore a wider range of movement directions, such as ones that move from left-top (or left-bottom) to the right. I wonder why the authors did not test a circular pointing task with additional angles (which eliminates the need for the Top/Bottom conditions). I am not sure if I fully understand the experimental design from the schematic in Figure 3 and the text description. In particular, the angles do not seem to have a clear common definition for TOP and BOTTOM starting positions. I am further perplexed about the interactions shown in Figure 4. I cannot explain them, and the paper does not provide any clear intuition. Although the gravity interpretation seems to hold if we examine TOP and BOTTOM separately, the comparison between BOTTOM and TOP does not support this interpretation. Why isn't TOP always faster? As I explained above, physiological constraints may better explain the observed results. Targeting performance across the two angles may have been further affected by the rectangular shape of the targets. The experiment should have tested circular targets instead. For the same reasons, I don't feel comfortable with the design takeaways (Section 5.1), as they are not clearly supported by the findings. For example, the paper mentions that ""he design needs to minimize motion when the user is performing tasks below the panel."" Again, how is this supported by the results? These are additional comments about the analysis of the results: - Don't reporting on main effects (because of interaction effects) is a weird non-justified decision. Even if these effects are not ""statistically"" significant, the paper needs to report on them. I would also expect some analysis of the authors' expectations or prediction before running the experiment. As I mentioned above, one could expect that TOP is faster than BOTTOM. I would also expect that smaller distances would result in faster times. Disregarding the analysis of these main effects is not justified. - I recommend reporting precise p-values rather than inequalities of the form p < .05 or p < .01. See: pseudo-url""",2,0 graph20_62_1,"""This paper investigates how the visual distinctiveness of icons in toolbars influences the speed with which users can learn and subsequently retrieve the locations of icons. The results suggest that color and shape distinctiveness in themselves are not very helpful with learning and retrieval of icons, particularly compared to icons expressing a meaning congruent with their associated commands. Overall, I thought this was an interesting paper to read, and I am recommending that it be accepted. I think the work is well motivated by the debate about minimalist visual designs for icons; the studies are well designed to test the effects of different kinds of visual distinctiveness; and the paper is well written, with good justifications for the methods used, and a clear analysis and presentation of the study results. I think the biggest weakness of this work is that it shows that visual distinctiveness doesn't really matter that much, but there's a value in publishing negative results. Suggestions for revisions: - The paper enumerates the main hypotheses (H1, H2, ) in the method section for each study, and I was expecting the results section to revisit these hypotheses with a clear statement of how they were confirmed or rejected. Doing so would more clearly communicate to the reader the results of the study. - In the charts showing trial completion time and hover amounts, it would be good to add a slight dodge in the x-axis, so the points with error bars do not overlap one another and can be more easily compared. - On page 6, there is a typo ""Our fourth comparison (H4) compared Mixed and Concrete to see whether having two distinctive visual variable would improve performance (i.e., Mixed is more differentiable both in terms of colour and shape than Concrete)."" should be ""variables"".""",4,1 graph20_62_2,"""# Summary This paper presents two studies that look at how visual distinctiveness (color and shape) and meaning (meaningless, contextual, and familiar) impact the learnability of an interface. In the studies, participants are asked to recall a set of target icons for each of a number of icon sets with different meaning, color distinctiveness, and shape distinctiveness. # Overview At a high level, the paper studies an interesting and relevant topic (to more than just icon design) and has interesting findings (including some rather surprising). It is well-written, has helpful figures/tables, and is thorough in its structure and study design. I appreciate the details such as considering how to determine a suitable number of colors for an icon set. My main concern is that the task seems to be slightly far from the real-world task (which the authors also note in their limitations), but makes me wonder a bit if the results would be able to be replicated in a more realistic scenario. I would encourage the authors to discuss why this task is a reasonably proxy for the task of using icons in software. # Task Continuing the discussion of the task I think what concerns me the most is the density of icons being presented to the participant all together in one location on the screen this isnt a common design in current interfaces. As the authors note, it also results in not really having any landmarks because the visual differences werent enough for them to be an anchor for the participants. I also think the description of the tasks/procedure were maybe the more confusing sections in the otherwise very clear paper, and might be worth going back to clarify a bit. I wondered if the positions of the icons (and targets) were randomized between participants? My impression is no, which is slightly concerning as it is possible that certain target locations mightve been easier to remember. # Meaning and Distinctiveness The authors provided a very interesting breakdown of types/levels of meaning and shape/color distinctiveness. Were the visual distinctiveness levels defined by the authors or from related work? Is there a conventional way to determine these levels (eg. how much difference in shape is counts as high versus medium distinctiveness)? While the selected icon sets do seem to fall clearly into the categories as defined in Table 1, such a dictionary of levels would be more useable in different future scenarios if this were further clarified. # Results Results were clearly presented the hypotheses helped make it easier to process the statistics. I appreciated that authors used hover as a proxy for proficiency of the system rather than just speed (in particular given my concern with the task design). I wouldve been interested in more discussion of qualitative feedback to gain more insight into why some of these results occurred. For instance, is there a point at which too much visual difference (Mixed) is distracting and makes the icons harder to recall? Or when participants selected which sets they preferred, what qualities influenced that choice. In the qualitative comments, please also add numbers for more context, eg. Several (#) participants stated that they # Related Work The paper does a thorough job of covering all the important related topics for to motivate their work and approach. The initial motivation discusses how the visual consistency of flat/subtle designs might make them harder for users to distinguish, and therefore harder to learn. They also state that learning commands is important because this is a way in which users can improve their performance and transition from novice to expert. Here I had two main questions as I was reading and they address both in their related work: 1) how do you define and measure visual distinctiveness? (though measurement could be more precise as I discuss above) 2) how does learning commands help improve users performance? # Minor - Figure/Table labels seem to have an extra space before the number - meaning they posses -> possess - in Subjective Responses and Comments for Study 1: Concret, Concrete+Colour -> Concrete - Study 2 - Design, missing period after planned comparisons - I believe usually the Chinese characters are referred to as Simplified Chinese characters and Mandarin is more in reference to the spoken rather than written language. Also, worth double-checking, but all characters might fit under this category, so you might not need to say Kanji and Mandarin everywhere (other than when referring to participants existing knowledge of the language)""",3,1 graph20_62_3,"""This paper presents the results from two studies to investigate the effect of icon distinctiveness on how people learn and retrieve icons from GUI. They found no evidence that increasing the distinctiveness of colours or shapes can improve learning, however, they also found that adding concrete imagery to icons makes them easier to learn. The paper is well-written, the methodology is sound, and it is highly relevant for the GI community. My only reservation for the work is the lack of clear contributions. Although this is not my area of expertise, however, it came as a surprise to me that there is little known about the effect of visual distinctiveness on GUI usability and learnability as claimed by the authors. The paper by Bateman et. Al. (2010) which shows that adding some visual embellishment improves the memorability of charts is somehow related and should be reviewed. Bateman, S., Mandryk, R. L., Gutwin, C., Genest, A., McDine, D., & Brooks, C. (2010, April). Useful junk? The effects of visual embellishment on comprehension and memorability of charts. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2573-2582). I would also like to see more background on the participants in this study. What is their age groups, technical experience/expertise? I do believe these factors could have some effect on the obtained results, hence its necessary to provide the information. Do studies one and two involve the same set of participants? (within or between subject?) """,3,1 graph20_63_1,"""This work investigates interface presentation styles (visual clues and recommendation lists) for auditing group biases in machine learning (ML) models. Through an in-lab within-subject study with 16 ML engineers, the authors evaluate performance measures while using the two types of interfaces. The paper contributes to interface ""design dimensions"" for bias detection and auditing tasks, i.e., information load, and comprehensiveness. Creating usable bias detection tools is an important and urgent problem in ML research. I commend the authors for taking on this problem. Overall this paper is very well written with adequate details about motivation and study design. The related work cites many key papers and does a good job of synthesizing prior literature to situate this work. The findings from this study (design dimensions) offer interesting insights for future research on auditing tools. However, I do have a few concerns about the design of the interfaces used in the study. I find that overall, the interface design lacks justification about how it affords/supports different types of auditing tasks. For example, I see that end-users need to scroll quite a bit to compare measures across different sub-groups. This may have influenced the number of measures they select. It would also be nice to synthesize insights based on different sub-tasks for bias auditing (even just looking at the ""foraging"" and ""sensemaking"" tasks). Further, the highlighting feature in the visual cues interface is not salient enough (both in the video and the screenshots in the paper are hard to see). And in the recommendation list interface, the ""see all"" option is not discoverable. As the authors reported, they had to remind participants to click on the group name to see all measures. This confounds the results to some extent. I would have preferred an *always visible* button instead of clicking on the group header. The time for each prototype was set at 10 minutes (from a few hours in the pilot). A sentence about this choice might be helpful. Additionally, better measures on cognitive load (e.g., NASA TLX, CLS questionnaire) could strengthen the study findings. In summary, while there are some flaws in the study, the results are useful and provide directions for future research. I advise the authors to discuss the above-mentioned limitations of the paper. I recommend accepting this paper for publication. """,3,1 graph20_63_2,"""The paper examines different visual representations of algorithmic biases and how they affect the behavior of detection. Biases of ML algorithms are of increasing interest in the HCI due to a growing number of decision support systems in everyday processes. This problem is well-motivated and sufficiently outlined. Two prototypes were developed to investigate this bias. These were well designed and seem sufficient to investigate the problem at hand. The paper introduces guidelines for designing bias-detection interfaces based on the comprehensiveness and information load necessary. This analysis further reveals current research gaps in the context of bias investigation tools. Recommendations for minor improvements: - The introduction as well as the discussion feels in some parts repetitive. - Figure 1 and 2 are hard to read as a printed version. - The caption of Figure 1 does not match the figure. - Typo: 'For example, A model may' (page 4) Final comments: The paper is informative, addresses a very timely topic and is well conducted. The final results open new opportunities for future research. Hence, I would recommend to accept this paper.""",3,1 graph20_63_3,"""This paper describes a lab study with 16 participants that investigate the effect of presentation style (recommendation list or visual cues) on user behaviors in reviewing algorithmic bias reports. Through this study, the authors provided guidance in the design of semi-automated bias detection tools. This paper addresses a timely and important topic. I think the hybrid (qualitative and quantitative) method the authors chose is not the easiest choice but the authors executed it very well. The paper is thoughtfully written and very easy to follow. I also appreciate that the resulting design guidelines can potentially generalize to many critical AI application domains beyond hiring. The outlined design space (Figure 4) could serve as a valuable instrument for designers and researchers working in visualizations and information design for ML outputs. I do have two critiques, specifically concerning 1) the definition of algorithmic fairness/bias and 2) novelty of the findings around information overload/comprehensiveness tradeoff. I think the paper would benefit from a clearer definition of algorithmic biases. This paper focuses almost exclusively on the *outputs* of algorithmic systems (e.g. accuracy disparity, classification rate, etc.). Algorithmic bias and/or biases in the training data can result in biased system outputs. In this sense, I suspect what the authors meant by algorithmic biases is actually biases in system outputs (including intrinsic biases in training data). Prior work addresses these two kinds of biases quite differently [1]. The other opportunity for improvement is in articulating the novelty of the design guidelines more explicitly. One way to achieve so can be adding a section in the Related Work on related existing data visualization and information design research (e.g. [2] and many more), which could help frame and highlight the novelty of this papers findings. [1] Consider Microsofts data card (pseudo-url) versus modeling card (pseudo-url) [2] Designing Theory-Driven User-Centric Explainable AI, CHI19 pseudo-url """,4,1 graph20_64_1,"""This paper presents a design study of a visualization system for tracking and reviewing resident physicians performance in a medical training program. The design of the system was through four focus groups and the evaluation was through a four-month deployment study. User behavior changes were observed after introducing the system. Overall, I think this is a great paper. I appreciate the authors effort in designing and deploying a visualization system in real-world scenarios. The topic is definitely critical, because effective training programs are essential in providing high-quality medical care. Although there is not much novelty in the visualization design, I think this paper focuses on the process, insights, and patterns learned during the design and evaluation process. However, this paper can be stronger if there is more meat in the requirement analysis and design implication. For the requirement analysis that was achieved through focus groups, this paper lists five questions to solve (four in the resident view, and one in the reviewer view). Id like to see more regarding how these questions are derived. For example, this can be benefited from a summary table of the data that the focus groups want to visualize, a few figures of the sketches, and quotes of the users in the focus groups. Currently, it is still unclear whether these questions cover all the needs or are cherry picked. The design implication section lacks depth. The insights gained are mainly on usability, which in my view are not very insightful. I wish the authors can distill a set of higher level principles to inform the design of similar systems in the future. Moreover, I think the reviewer view is too simplistic. The system allows the reviewer to compare residents in different time period, but does not have the ability to show why certain residents are underperformed. As a good system to support educational training, it should help the reviewer/educator identify problems of the weaker learners and help them. In other words, the comparison should inform actions. Id like to see some discussion on this aspect in the paper. """,3,1 graph20_64_2,"""This submission reports on the creation of a system to help medical residents and their reviewers to assess their learning using an information visualization dashboard, designed for and with them in a participatory process, deployed in their setting, and evaluated with them through a longitudinal study. Quality The methodology employed for conducting this research sources methods from diverse fields and is relevant. Clarity The presentation is very clear, with pertinent textual and visual explanations. Originality The review of related work is varied across relative disciplines and well positioned. Signifiance The system has been designed and developed and evaluated so that it ended up being useful to domain experts (medical residents and their reviewers). I advocate for accepting this submission. ABSTRACT Abstract provides information that is ideally expected: one sentence of context, summary of contribution, explanation of system and methodology. I would suggest to use active voice instead of passive to clarify who contributed what (""The system was developed"", ""...was installed""). INTRODUCTION The motivation and context is sound, with references on how information visualization and dashboards support learning analytics or educational data visualization. The proposed methodology of design and development relies on well established practices: eliciting requirements through focus groups, designing using action design research framework, implementation through agile development, evaluating the system through uncontrolled longitudinal studies and feedback sessions. Obtained results are supported with clear metrics. RELATED WORK The related work is well balanced with a review on visualization dashboards and visualization in medical training with references from diverse related research communities. ""One reason for this gap seems to be the lack of collaboration among the developers, end-users and visualization experts."" The passive voice of the sentence does not help to identify who posited this reason: the authors of the submission or Vieira et al. [36]? Also, before initiating collaborations, I would say that all parties must first be aware of each others contributions, so I would rephrase the reason as a ""lack of communication"" among them. APPLICATION BACKGROUND This section conveniently introduces domain-specific terms and thus contributes to make the paper standalone in understanding the context. Requirement analysis was conducted through focus groups including active participation of domain experts (including involving them in sketching their desired features for data presentation). Data characterization is assorted with visibly clear understanding and explanation of the domain. Q1 can be reformulated with plural to avoid gender bias (so that this is harmonized with similar efforts along the paper). VISUALIZATION DESIGN The rationale for visualization design is clearly explained and illustrated. The choice for visualizing rotation schedules using an interval chart rather than a more space-consuming Gantt chart widespread in time/project management is smart. The decisions on color scales adjustments to highlight under-performance while shadowing over-performance on EPA count per rotation is well motivated by contextual needs. Figure 4: I would suggest to split the figure into 2 rows (3.5 and 3.6) and annotate columns in black font over white paper background, instead of white font over blue application background: with a low zoom level on my PDF reader, I had first confused these annotations with potential widgets in the application. For further inspiration on visualization for comparing (resident) profiles, I'd suggest to browse other works by Plaisant et al. in addition to [29]: pseudo-url pseudo-url IMPLEMENTATION DETAILS The implementation details report on constraints that may be too project-specific (with occurrences of ""project"" or ""the University"") and would gain to be generalized. Congratulations for opensourcing the code to potentially help other institutions with medical programs (""across Canada"", or beyond?). The responsive design choice is great for multiple device access with various form factors. Rendering in SVG with d3 might pose issues regarding accessibility, where efforts for compliance are left at the discretion of application developers rather than library developers. See pseudo-url USER EVALUATION AND FEEDBACK The user evaluation and feedback proposes analysis of user logs that informed changes in metrics for measuring improvement in learning program once their system was adopted by residents and reviewers; and their feedback. I would suggest the following references to inform analysis of user logs: - H. Guo, S. R. Gomez, C. Ziemkiewicz and D. H. Laidlaw, ""A Case Study Using Visualization Interaction Logs and Insight Metrics to Understand How Analysts Arrive at Insights,"" in IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 1, pp. 51-60, 31 Jan. 2016. doi: 10.1109/TVCG.2015.2467613 - Papers from the IEEE VIS'16 Workshop: Logging Interactive Visualizations & Visualizing Interaction Logs pseudo-url DESIGN CHOICES AND INSIGHTS GAINED I found the design considerations to be mostly obvious and known to designers and developers of user interfaces and information visualization. LIMITATIONS AND FUTURE WORK The limitations are mainly focused on the specificity of project requirements to one University in Canada, the small sample size of participants to evaluations. SUPPLEMENTARY VIDEO The video introduces the application domain and showcases diverse tasks supported by the tool presented in the submission. Audio quality of the voice over could be improved with a proper microphone and recording settings. """,3,1 graph20_64_3,"""The authors have built a platform for assisting medical residents and their supervising doctors in keeping track of their progress toward different milestones. The system was designed over multiple iterations with feedback from the residents and supervisors and was finally deployed and monitored for a period of four months to collect in-the-wild usage data. The goal here was to understand the scope of visualizations to improve the training and learning process in a residency program. The manuscript communicates the overall design process and several design decisions in-depth providing the readers with sufficient information to appreciate the efficiency and simplicity of the end-product. The authors have done a good job explaining the context and the relevant terminology. The requirement analysis, research questions, and resident/supervisor priorities are well explained and flow well in to the final design of the system. Finally, building such a system, completing the data pipeline, and maintaining it for over four months is considerable effort. Kudos in taking it beyond requirement gathering and building the system. That said, there are some areas which can be improved upon: - Although the authors did evaluate the performance of their system against historical data, there were a few things which I would have liked to see addressed in that comparison. There was no mention of the previous system or set-up which was used to review residents before the new system was implemented. Even if it was just manually parsing logs, that information would be vital in appreciating what was developed even more. Another thing that stood out in quantitative evaluation was the comparison of metrics from current deployment compared against those of the prior year. Given it was the same cohort, there would be some amount of learning or improvement which would have happened naturally over their first year and would have influenced their performance in the second year when the system was deployed. Some discussion around this would be useful to better appreciate the system. - The design choices section on pg. 11 seems a bit superficial to be considered as generalized take-aways for future research. Maybe, instead of just focusing on what design choices worked, dedicating some space to discuss failure points would also be beneficial. What are some potential flaws or issues which future researchers should be aware of when building similar systems? Overall, I think this is a good first step towards building visualization systems to aid in medical professional training. """,4,1 graph20_65_1,"""This paper considers which visual highlighting is perceived faster in data visualization; and how different highlighting methods compare to each other. The paper compares three types of highlighting, changing the size or color of the target datapoint, or blurring the non-targets in two studies: one scatterplot visualizations that helps create a model that is then tested in a second study on 16 different visualisations. Different levels of each highlighting are tested in the two studies (8 in the first, 3 in the second). The results indicate blurring is the superior method. The paper has many positive aspects. The research question asked in the paper is important for the visualization community. Emphasis effects are used in data-storytelling, infographics, but also visual exploration when viewers select and highlight datapoints (e.g., to show connections in coordinated views/dashboards). Moreover, the paper starts with a more controlled study (on scatterplots) and then moves to a broader set of visualizations and real world examples that are more ecologically valid. This approach of isolating effects and then testing them in a broader context is very appreciated and often ignored in similar types studies. The paper is also well written and easy to follow for a general HCI/Visualization audience. I started out very positive about the work reported here given the importance of the question. Nevertheless, I have some concerns regarding the choice of magnitude levels of each highlighting technique and the fact that these levels are compared across techniques across both experiments (details provided next). There are other smaller concerns relating to experimental design and reporting. 1. Choice of magnitude levels. I have some trouble with the different levels of magnitude that are compared across highlights - this is stated as major need and contribution. What concerns me is the choice of levels across highlights techniques as the perceptual distance of these levels are very different and change at a different rate for each technique. The paper argues well for the choice of color levels (perceptually equidistant in terms of based of a perceptual model). I am wandering why this was not applied for the other visual highlights. For example it is known that area perception is not linear and follows Stephens power law (Wagner [a] has a meta analysis of several studies that provide power law exponents for area). The paper could have (similar to color) used similar distances for area. It is likely equivalent exponents can be found for blurriness/depth of field. [a] M. Wagner. The Geometries of Visual Space. Lawrence Erlbaum Associates, Mahwah, NJ, USA, 2006. I feel the above choice to not use perceptual distances has the following drawback. The perceptual difference between levels, for example 1 and 4 and 4 and 7 (chosen for the 2nd study) are now very different between color and the other variables, thus it becomes hard to interpret the results. For example it is possible the results are trivial in the sense that the perceptual distance 1-4 in color is smaller than that of area/blurriness (or it is possible the results are very surprising, but there is no way of knowing). And we fail to understand the importance of other factors if there is a concern that performance differences are easily explained (e.g., sensitivity of each of the highlights to using complex backgrounds and/or many points, as suggested as an explanation in the discussion of study 2). Or fail to understand what is the meaning of sentences such as with an increasing perceived visual prominence as the difference increased. I believe there is still value in the final results and the discussion, in particular in the main finding that blurring is the best technique. Nevertheless claims of comparisons of levels need to be carefully reconsidered. First of all, the paper has to provide a more detailed explanation about the level choices (and issues) in section 3.1 given past work on perceptual distances and why it was not used for area (and if relevant literature for blurriness exists); a more detailed discussion of the issue in the limitation section; and very importantly toning-down or rephrasing when reporting contributions/findings related to magnitude levels and their comparisons across the paper (which is extensive). Related to this, it seems that the paper reports multiple times that it tries to match equivalence (as the authors state in the abstract is difficult for designers to know [] what level of one effect is equivalent to what level of another, and then have several mentions of equivalence later in the paper). This equivalence is not studied nor reported in the paper (at least I could not find anything stating that level X of one highlight is equivalent to level Y of another). What the paper reports is a comparison of performance on a specific level (say level 7) across highlights, not equivalence. And of course any discussion of equivalence needs to be considered carefully given the caveat of levels that are not really representing perceptual differences as mentioned before. It is best to rephrase such statements. 2. Explaining experimental design choices As mentioned, I really appreciated the use of different visualizations in study 2. I would have liked more information about how these visualizations were selected and if the same ones were used across highlighting conditions. More generally, there is some lack of information about aspects of the experiment design, especially in study 2. Were highlighting conditions counterbalanced? Were there measures taken to have consistent difficulty across highlights but avoid memorisation? I would have also liked to see more information about the per-visualisation results. Were the overall trends reported in the paper consistent across visualization types or are some visualizations particularly problematic for some highlights (for instance from seeing the images I would expect maps to be particularly challenging for color). Finally, the paper reports that the experiment also included tasks without any highlighting to see the salience of existing features, which is indeed very important. Previous studies have shown that position, existence of distractors and size may affect perception. So taking into account the a-priori effect/existence of salient features could explain some results. How were these trials with no highlights used in the analysis / reporting (I was not able to find any reference to them later on). 3. Previous work and choice of highlighting techniques The paper mentions a distinction of possible highlighted choices in related work, I believe it would be good to acknowledge the existence of several pieces of work on motion highlighting (see Lyn Bartrams work). Moreover, the paper should argue/explain better the choice of highlighting techniques. For example, explain the focus on non-time varying variations and why these three. Text in 3.1 mentions an informal survey, it would be good to have some references of the tools visited (when the paper states widely used, several visualization tools, etc.). I was actually surprised that border highlighting was not mentioned anywhere in the paper as a choice, did the authors not come across it in their informal survey or do they consider it less effective than the designs tested here? Overall, I like this paper and believe it can make a contribution to the field. But I believe there is need for a considerable amount of re-writing to address my main concern, thus my borderline positive score.""",3,0 graph20_65_2,"""The authors of this paper carried out two studies on the efficacy of emphasis effects, one basic in assessing the levels of useful differences and one more applied using actual different visualizations for a more ecologically valid investigation. They considered three so-called emphasis effects: blurring the background so that only the target is in focus (""blur""); colour change; and size change. How to design good highlighting techniques has been an active area of interest in user interfaces and more recently in visualization for decades, and I welcome work in the area as there are clear issues with highlighting that is poorly designed. I also laud the authors for the challenge of trying to run both a perceptually basic study and then apply their results in a much more complex - but more ecologically valid - experiment see how -and whether -they could predict results from the previous study. These are hard studies to design, and when well done, can really contribute to design knowledge and better practice in the field. I very much wanted to like and promote this paper for the above reasons - but there are two essential fundamental flaws with the work that concern me enough to recommend rejection, because they really show a lack of understanding of both the problem space and of how you design studies to explore it. 1. The first issue I have already alluded to: research in this area has been underway for DECADES and yet the authors seem unaware of it. The design of notifications and alerts in visual environments has been a challenge in large control interfaces for a long time, and leaders in the human factors field like David Woods and Nadine Sarter were exploring the attributes of what Woods called ""cognitive signals"" in the 90s. The authors say research in the areas in visualization have ""recently emerged"": this is not true. Healey, Ware, Bartram and McCrickard were all researching techniques for perceptually efficient highlighting and alerting in the early 2000s. Bartram and Ware, in particular, specifically looked at colour, size and motion, in both basic perceptual studies and more applied ecological task contexts. If the authors of this paper had reviewed this research they might have structured their own studies differently and avoided some of the pitfalls they encountered in both study design and reporting. It does indeed lessen how the authors can situate their current work in the larger area of salient highlighting techniques if they are unaware of any of the precious work. There are also, by the way, other perceptual references about salient signals and visual search to do with distractors that should at least be acknowledged (see Duncan and Humphreys.) I have provided a list of relevant references at the end of this review should the authors be interested. 2. The experimental design and analysis is very poorly reported, leading to many unanswered questions about the validity of the results. At best this is poor reporting, at worst it is poor design, but we are unable to distinguish. Let me detail my oncenrs. A. It is good practice when describing (or specifying) an experiment design to identify the factors of interest, the resulting combinations (""conditions""), the repeated or not measures, and the the block balance and counter-balancing to ensure validity. So in Ex 1 we have 3 effects x 8 levels = 24 conditions with 5 repeats = 110 trials. The authors then tell us these were arranged in an ""order-balanced presentation"". What is that?? Did they just order the 3 effects but keep levels ordered? did they order all trials (so a random order of 24)? In either case,there are first-order and second-order balancing effects to be considered. They should at least acknowledge these. But we have no indication of this. B. These design problems propagate when we consider the other non-control random factors that enter the design: distance (in Ex 1 and 2) and then visualization type (Ex 2). These introduce more levels of balancing complexity. Let me stay with Ex 1 for a minute. Distance matters in visual salience. Outside a very small foveal angle, perception acuity diminishes. So having a target fixation and target selection(by mouse click) time is meaningless without normalizing this for distance. (I am assuming that the authors fixed starting visual and mouse locations at the center of the screen between trials, but this is not stated). How did these results stand up over distance? (For an example of how to design for these factors, see Bartram, Calvert and Ware 2002). That means that the metrics used to evaluate emphasis performance were incompletely evaluated. The authors state that they chose to randomize distance for more ecological validity but not including it in the analysis reduces the validity of any findings enormously. The same occurs with the choice of visualization type. Apparently in Ex2 there are 16 visualization types x 3 effects x3 levels = 144 trials (not clear if this was a repeated measures study). Context has a huge effect on visual search (see Duncan and Humphreys). Not considering these factors in the analysis again reduces any ""ecological"" claims the authors make. Incorporating the standard ""Threats to Validity"" discussion common to experimental reporting would have addressed some-of these questions and made the results more of interest. As it stands, this is very nicely motivated work that is insufficiently grounded in previous research and therefore runs afoul of experimental design pitfalls that could have been better addressed. Suggested references: -Sarter, Nadine B., and David D. Woods. ""How in the world did we ever get into that mode? Mode error and awareness in supervisory control."" Human factors 37.1 (1995): 5-19. -Healey, Christopher, and James Enns. ""Attention and visual memory in visualization and computer graphics."" IEEE transactions on visualization and computer graphics 18.7 (2011): 1170-1188. -Woods (1995). Woods, D., 1995. The alarm pro/blem and directed attention in dynamic fault management. Ergonomics. -Bartram, Lyn, Colin Ware, and Tom Calvert. ""Moticons:: detection, distraction and task."" International Journal of Human-Computer Studies 58.5 (2003): 515-545. -McCrickard, D. Scott, Mary Czerwinski, and Lyn Bartram. ""Introduction: design and evaluation of notification user interfaces."" International Journal of Human-Computer Studies 58.5 (2003): 509-514. -Liang, Jie & Huang, Mao. (2010). Highlighting in Information Visualization: A Survey. Proceedings of the International Conference on Information Visualisation. 79-85. 10.1109/IV.2010.21. -Duncan J, Humphreys G. Visual search and stimulus similarity. Psychological Review 1989; 96: 433 458. """,2,0 graph20_65_3,"""In this submission, the authors present a set of two studies to better understand the emphasis effect in visualizations. The topic of the manuscript is relevant to GI and the submission constitutes a very compelling read. The related work is very thorough and seems, to the best of my knowledge, quite complete. It is an easy read that helps introducing the concepts used in the rest of the paper. Similarly, I appreciate the level of details for the study design choices and the thorough report on how the study was conducted to facilitate future replications of this work. The discussion highlights the limitations of the study really well and help put the results in context. I would argue that results offered by the two studies are really interesting to the visualization community and the submission should be accepted. I nonetheless have some comments and issues that I will list below. 1/ Some of the p-values are not reported, going against recommendations to report all p-values as exactly as possible and avoid dichotomous interpretations of them but instead think in terms of strength of evidence [A--D]. I would appreciate if the authors could report the p-values of non significant results so that readers could interpret them by themselves. 2/ The authors do not precisely explain how they selected the 16 visualizations used in study 2. Similarly the manuscript currently does not make it clear that the participants of the second experiment are different from the one of the first experiment. Overall, I would argue that there are not enough details on the second experiment to properly understand what the authors did. Did all participants see all 16 visualizations? How many of them were maps? How many were bar charts? Why did the authors feel that they should combine different visualization types together for the second experiment. In particular since the results they have obtained for their models were on a scatterplot, I wouldnt expect a completely different visualization to exhibit the same behaviour and I would therefore expect to see results categorised by visualization type. 3/ The model made by the authors surprisingly did not predict the anomalous values for size 7 in study 2 while a similar pattern could already be observed in study 1 (with size magnitude of difference 5 and 6 performed apparently worse than 4 and 8 worse than 7). I wonder how much of these anomalies are actually just possible noise (which is an option that the authors did not seem to consider, but the differences are quite small between levels in both studies) or if the authors could explain why their models did not include these anomalies. 4/ Will the code and data be released? It would encourage replications and facilitate future works from other groups. Minor comments: In the third paragraph of the introduction the words we know little about are repeated three times in a row. Perhaps the authors would like to change the phrasing slightly. I would argue that the introduction goes into too much details about the study designs and the authors might want to shorten it a bit. In section 5.2, the authors wrote p < 0.13 which I believe to be a typo. Did they mean p = 0.13? Participant comments for the three emphasis effects reflect the empirical findings, favouring blur/focus. I would tend to disagree with that statement from the authors. The data they obtained just slightly favours blur/focus, while the quantitative data strongly favours it. REFs: [A] pseudo-url [B] pseudo-url [C] pseudo-url [D] pseudo-url """,3,0