AMSR / conferences_cleaned /graph20_papers.csv
paper_id,title,keywords,abstract,meta_review
1,"""Participatory Design with Instant Online Feedback in Teaching Human-Computer Interaction""","['Human-computer interaction', 'participatory design', 'instant online feedback', 'interface design', 'peer evaluation', 'web presentation system']","""Human-computer interaction focuses on the principles of effective communication between human and machine. From this point of view, computer science students may greatly benefit from engaging in hands-on experience in human-computer interaction subjects in terms of broadening the vision and expanding understanding when it comes to application and interface design. Participatory design has been widely known as a method for sharing users' voices with developers about the design of the product. Evaluation plays an immensely important role in revising and developing the application interface and, together with design and implementation, makes an iterative process of refinement and development. We combined the advantage of participatory design with instant online feedback in teaching human-computer interaction, including evaluation for application interface design projects. In this study, we introduce a web-based interactive presentation framework for 1) a presentation session using one main computer with automatic session switching, incorporating interactive features for receiving instant online feedback, and 2) peer evaluation from the class audience. To evaluate the effectiveness of the interactive peer-review approach on the progress of students, this paper presents a case study of computer science students throughout a human-computer interaction course. ""","""All reviewers have mentioned the strengths of the paper. For example, the paper focuses on an important topic (R2); the platform was nicely built (R1); the case study is helpful to understand how the system works (R3); examples (visual materials) further enhance the understanding of the learning outcomes of the system (R1, R3). 
However, all reviewers unanimously agreed that the paper has critical limitations that make it hard to be accepted. Misuses of HCI concepts: the process used in the research is at best an example of human-centered computing (R2) or interaction design process (R3), which is NOT participatory design. The paper also confuses the concepts of prototyping and evaluation. R2 has provided reference papers to help distinguish these concepts. Unclear Research Question: it is unclear what research question the paper tries to answer. R1 articulates possible research questions and how the evaluation can be designed to answer these possible research questions. Missing Related Work: R2 and R3 have pointed out that it is hard to evaluate the contribution of the work without a comparison of the current work with other online learning methods. R2 and R3 each have provided some references. Insufficient Evaluation: Both R2 and R3 argue that the paper does NOT provide sufficient evidence to show that, compared to face-to-face feedback, the online feedback gives more honest and direct critiques. Similarly, R1 argues it is hard to understand the benefit of the proposed method without any comparative experiment. Furthermore, R1 provides potential evaluation methods that can be used. Reviewers also have provided other valuable comments, which I encourage authors to refer to when revising the paper. Although all reviewers appreciate that the paper addresses an important research topic and the proposed system seems to be nicely implemented, they share a common set of concerns. Thus, it is hard to recommend the paper for acceptance in its current shape. However, I hope the authors will not feel discouraged and would encourage them to carefully consider the valuable feedback and to make the contribution of the work clearer and stronger. """
2,"""Assistance for Target Selection in Mobile Augmented Reality""","['Mobile augmented reality', 'target assistance', 'augmented reality', 'mobile devices', 'pointing assistance']","""Mobile augmented reality, where a mobile device is used to view and interact with virtual objects displayed in the real world, is becoming more common. Target selection is the main method of interaction in mobile AR, but is particularly difficult because targets in AR can have challenging characteristics such as moving or being occluded (by digital or real world objects). To address this problem, we conduct a comparative study of target assistance techniques designed for mobile AR. We compared four different cursor-based selection techniques against the standard touch-to-select interaction, finding that a newly adapted Bubble Cursor-based technique performs consistently best across a range of five target characteristics. Our work provides new findings demonstrating the promise of cursor-based target assistance in mobile AR. ""","""Meta by R1: Assistance for Target Selection in Mobile Augmented Reality - Meta Review Overall, all three reviewers agree that this submission is at a level that warrants acceptance into GI 2020. There are three main issues that I believe the authors should improve in future iterations of this draft. First, they should address the limitations of their work (=> limitation section). The touch condition was implemented in a way that does not represent the current state-of-the-art (R2). Using such an inferior implementation in a comparative study is bad scientific practice; the rationale for this decision has to be laid out. The study layout actively discourages physical movement of the participants (R2). Again, this is a design decision that has to be better explained, as this implies that the results might change in a different scenario. Second, the authors should be clearer about their contribution (=> introduction, discussion, and conclusion). 
While reviewers generally lauded the background research (R1, R3), the contribution was perceived as vague (R1) and some claims as too strong (R2). The authors should be as precise as possible about what their work adds to the body of knowledge: ""present[ing] the first study"" does not warrant publication, and providing ""a reference implementation"" implies that the authors will provide some source code. Third, the authors should fix all the minor problems pointed out, particularly by R2 and R3. """
3,"""Personal+Context navigation: combining AR and shared displays in Network Path-following""","['Networks', 'Path following', 'Link sliding', 'Personal views', 'Augmented Reality', 'Shared Public displays', 'Lab experiments']","""Shared displays are well suited to public viewing and collaboration, however they lack personal space to view private information and act without disturbing others. Combining them with Augmented Reality (AR) headsets allows interaction without altering the context on the shared display. We study a set of such interaction techniques in the context of network navigation, in particular path following, an important network analysis task. Applications abound, for example planning private trips on a network map shown on a public display. The proposed techniques allow for hands-free interaction, rendering visual aids inside the headset, in order to help the viewer maintain a connection between the AR cursor and the network that is only shown on the shared display. In two experiments on path following, we found that adding persistent connections between the AR cursor and the network on the shared display works well for high precision tasks, but more transient connections work best for lower precision tasks. More broadly, we show that combining personal AR interaction with shared displays is feasible for network navigation.""",""" This paper is a resubmission; two reviewers provided reviews on the original submission. Both reviewers report that their concerns regarding the paper have been addressed in the revised version of the manuscript. The reviewers agree that the paper presents a promising set of techniques to tackle a topical problem for augmented reality. Based on first- and second-round reviews, the highlights of this paper include clear definition of design goals and thorough reporting of study results. Based on the reviewer ratings, I recommend acceptance of the paper at GI 2020, with minor corrections: 1. 
Improve clarity around the description to use head-tracking instead of eye-tracking [R3] 2. Use consistent terminology to describe concepts, techniques, and methods in the paper [R1] 3. Proof-read added text (orange) and overall manuscript for typos [R1, R2] """
4,"""Part-Based 3D Face Morphable Model with Anthropometric Local Control""","['Shape modeling', '3D facial morphable models', 'anthropometric measurements']","""We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMM suffer from the lack of local control. We thus create a 3DMM by combining local part-based 3DMM for the eyes, nose, mouth, ears, and facial mask regions. Our local PCA-based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive while allowing accurate reconstruction. The editing controls we provide to the user are intuitive as they are extracted from anthropometric measurements found in the literature. Out of a large set of possible anthropometric measurements, we filter the ones that have meaningful generative power given the face data set. We bind the measurements to the part-based 3DMM through mapping matrices derived from our data set of facial scans. Our part-based 3DMM is compact yet accurate, and compared to other 3DMM methods, it provides a new trade-off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part-based 3DMM approach has excellent generative properties and allows intuitive local control to the user. ""","""All 3 reviewers were positive about the paper, with two clear accepts and one weak accept. Hence, I recommend that the paper be accepted to GI, and request the authors to address all the issues identified by the reviewers in their final revision."""
5,"""Graph-Based Locality-Sensitive Circuit Sketch Recognizer""","['Pen Interaction', 'Graph', 'Circuit Sketch']","""The understanding of circuit diagrams is very important for the study of electrical engineering. Existing circuit diagram simulation tools are mostly based on GUI interfaces and rely on users to click or drag icons with a mouse, which requires them to be familiar with the software and distracts a great deal of their attention from the circuit diagram itself. This paper constructs a prototype pen-based circuit diagram system. It enables users to draw circuit diagrams directly on the digital screen without learning how to use it. At the same time, a graph-based sketch recognition algorithm is proposed to recognize diagram components efficiently, and it is not sensitive to different drawing habits. Our approach has achieved 93.04% recognition accuracy on an experiment of 158 samples collected from 17 users and 4.53 out of 5 on average for user satisfaction. Theoretical derivation and experiments have demonstrated that our algorithm and prototype system are efficient as well as stable with high value in practice compared with previous state-of-the-art methods. The same approach can also be applied to other general sketch recognition scenarios. To facilitate future research and applications, we publish our source code, model, and training data pseudo-url . ""","""Thanks for resubmitting this paper with upgrades. Unfortunately, both reviewers agreed that the paper needs more improvements, e.g., adding references, a broader technical contribution, and evaluation with a comparison. We reached the conclusion to reject this paper. """
6,"""Exploring Video Conferencing for Doctor Appointments in the Home: A Scenario-Based Approach from Patients' Perspectives""","['Mobile video communication', 'doctor appointments', 'domestic settings', 'computer-mediated communication']","""We are beginning to see changes to health care systems where patients are now able to visit their doctor using video conferencing appointments. Yet we know little of how such systems should be designed to meet patients' needs. We used a scenario-based design method with video prototyping and conducted patient-centered contextual interviews with people to learn about their reactions to futuristic video-based appointments. Results show that video-based appointments differ from face-to-face consultations in terms of accessibility, relationship building, camera work, and privacy issues. These results illustrate design challenges for video calling systems that can support video-based appointments between doctors and patients with an emphasis on providing adequate camera control, support for showing empathy, and mitigating privacy concerns.""","""Reviewers acknowledged that the paper is interesting, well-written, well-motivated, and addresses an important problem in an area that is relevant to the HCI community. It is also methodologically sound. They also highlighted the thoroughness of the methodology employed. However, they also highlighted some issues with the paper that the authors need to address. Below, I summarize the key issues; however, I encourage the authors to read through the reviews carefully and address all issues highlighted by individual reviewers: - Missing citations and some reflections on implications of the methodological sensitivity for future designers [R1]. - Some misplaced sections, e.g., some future work included at the beginning instead of in the future work section, which breaks the flow [R1]. 
- Lack of clear and concrete design suggestions for future research in the area of video conferencing design and related areas [R2]. - Lack of discussion of the limitations of video-conferencing-based doctor appointments, especially the third-person camera and how to overcome them [R3]. Despite the shortcomings and highlighted weaknesses, the reviewers believe the paper holds some potential. I also believe that, although the issues highlighted are very important and must be addressed in the final version, they do not require significant changes to the paper. Hence, I recommend that the paper be accepted. """
7,"""Evaluating Temporal Delays and Spatial Gaps in Overshoot-avoiding Mouse-pointing Operations""",[],"""For hover-based UIs (e.g., pop-up windows) and scrollable UIs, we investigated mouse-pointing performance for users trying to avoid overshooting a target while aiming for it. Three experiments were conducted with a 1D pointing task in which overshooting was accepted (a) within a temporal delay, (b) via a spatial gap between the target and an unintended item, and (c) with both a delay and a gap. We found that, in general, movement times tended to increase with a shorter delay and a smaller gap if these parameters were independently tested. Therefore, Fitts' law cannot accurately predict the movement times when various values of delay and/or gap are used. We found that 0.4 sec is a sub-optimal delay for densely arranged targets, but we found no optimal gap.""","""This paper has three detailed reviews, each of which highlights strengths and weaknesses of the paper. Despite the deviation in decision between the three reviewers, there is broad overall agreement that: - The paper is largely well-written. - The experiments appear to be conducted appropriately. As well, all reviewers have suggestions in clarifying language and presentation of results including: - Some suggestions for additional analyses to clarify whether there truly are no differences between factors (R1). - Some additional analyses that should be more fully developed in the paper (R3). - Some additional details on experimental design (R2). - Some better treatment of latency (R1/3). Rather than summarize detailed reviews by the external reviewers, I encourage the authors to read the reviews with care and to incorporate the constructive suggestions made by reviewers. In summary, while I am leaning toward accepting this paper, the authors should carefully consider the points made in each review, perform the additional analyses and report the results with greater care. 
However, I believe that the authors can modify the paper as per the suggestions of all reviewers such that the paper would rise to the level of acceptability for Graphics Interface."""
8,"""UniNet: A Mixed Reality Driving Simulator""","['Driving Simulator', 'Mixed Reality', 'Virtual Reality', 'Passthrough', 'Green Chamber', 'SUMO', 'Procedural City Generation', 'Traffic Generation']","""Driving simulators play an important role in vehicle research. However, existing virtual reality simulators do not give users a true sense of presence. UniNet is our driving simulator, designed to allow users to interact with and visualize simulated traffic in mixed reality. It is powered by SUMO and Unity. UniNet's modular architecture allows us to investigate interdisciplinary research topics such as vehicular ad-hoc networks, human-computer interaction, and traffic management. We accomplish this by giving users the ability to observe and interact with simulated traffic in a high fidelity driving simulator. We present a user study that subjectively measures users' sense of presence in UniNet. Our findings suggest that our novel mixed reality system does increase this sensation. ""","""There is a concern about whether GI is the right venue for such a contribution and whether the authors would benefit more from submitting it to a more apt venue. That said, all reviewers agree that the contribution has merit for acceptance. """
9,"""A Baseline Study of Emphasis Effects in Information Visualization""","['Human-centered computing', 'Visualization', 'Visualization techniques', 'Perception', 'Visualization design and evaluation methods']","""Emphasis effects (visual changes that make certain elements more prominent) are commonly used in information visualization to draw the user's attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and how to ensure that the user's experience with one effect will be similar to that with another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses in our first study show that there are significant differences between different kinds of effects and between levels. Our second study tested the effects in realistic visualizations taken from the MASSVIS dataset, and saw similar results. We developed a simple predictive model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. Our studies and empirical models provide new information for designers who want to understand how emphasis effects will be perceived by users.""","""The reviewers found that the article tackles a problem of relevance in information visualisation, and that the studies bring relevant insights to the community. 
They nonetheless highlight a number of problems remaining in the article that should be addressed before being published. 1. Clarify the presentation of the study conditions (all reviewers). 2. Better integrate the literature in relation to the problem tackled in the article, both in the related work section (R1) *and* in the discussion (R2). 3. Clarify the visualizations used in study 2 (all reviewers). 4. Expand on the limitations of the experiments and their analysis (R2). 5. Reflect on and contextualise how the results may apply in the wild (R3). """
10,"""Personal+Context navigation: combining AR and shared displays in Network Path-following""","['Networks', 'Path following', 'Link sliding', 'Personal views', 'Augmented Reality', 'Shared Public displays', 'Lab experiments']","""Shared displays are well suited to public viewing and collaboration, however they lack personal space to view private information and act without disturbing others. Combining them with Augmented Reality (AR) headsets allows interaction without altering the context on the shared display. We study a set of such interaction techniques in the context of network navigation, in particular path following, an important network analysis task. Applications abound, for example planning private trips on a network map shown on a public display. The proposed techniques allow for hands-free interaction, rendering visual aids inside the headset, in order to help the viewer maintain a connection between the AR cursor and the network that is only shown on the shared display. In two experiments on path following, we found that adding persistent connections between the AR cursor and the network on the shared display works well for high precision tasks, but more transient connections work best for lower precision tasks. More broadly, we show that combining personal AR interaction with shared displays is feasible for network navigation.""","""The reviewers highlight strengths of the paper, including topical research area, well-written paper, defining design goals, reporting of study results, and use of confidence interval testing. However, key issues are identified that raise concerns about the significance of the contribution (R2) and the validity of the study results (R2, R4). 
Although there is some disagreement between the reviewers (R2 and R4 recommend rejection and R3 recommends acceptance), given that the overall score remains low and the concerns raised by R2 and R4 are significant, my recommendation is that the paper is not yet ready for publication at Graphics Interface 2020."""
11,"""AffordIt!: A Tool for Authoring Object Component Behavior in Virtual Reality""","['Virtual Reality', 'Usability Study', 'HCI for Development', 'Interactive Systems']","""In this paper we present AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, users find themselves with complete virtual objects, but no intrinsic behaviors have been assigned, forcing them to use unfamiliar Desktop-based 3D editing tools. AffordIt! offers an intuitive solution that allows a user to select a region of interest for the mesh cutter tool, assign an intrinsic behavior and view an animation preview of their work. To evaluate the usability and workload of AffordIt! we ran an exploratory study to gather feedback. In the study we utilize two mesh cutter shapes that select a region of interest and two movement behaviors that a user then assigns to a common household object. The results show high usability with low workload ratings, demonstrating the feasibility of AffordIt! as a valuable 3D authoring tool. Based on these initial results we also present a road-map of future work that will improve the tool in future iterations.""","""Despite some issues (that will hopefully be largely improved for a camera-ready) and limitations, all three reviewers either tend towards acceptance or outright recommend it. I think this has the potential to spark some interesting discussions and future research at the GI conference. """
12,"""Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing in VR""","['Ray tracing', 'VR', 'Non-Euclidean geometry']","""This paper presents a system for immersive visualization of the Classical Non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the latest generation of GPUs based on NVIDIA's Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.""",""" The reviewers all appreciated the quality of the writing of the paper, but all pointed out that the technical contribution is extremely limited for the work to be presented at GI. Though the concepts presented in the paper are well known in the CG community, it might still be of interest to the audience at GI. For these reasons, we recommend rejecting the work in its current state, but suggest that a Poster presentation might be adequate. If the authors choose to resubmit the work to another (Computer Graphics, or Scientific Visualization) conference, the reviewers encourage the authors to increase originality/novelty by either: 1. Performing a thorough user evaluation of the system. Did it help understand the visualized spaces? 2. Presenting novel, non-trivial applications arising from the implementation. Main pros and cons noted by the reviewers: Pros: + System to visualize non-Euclidean spaces in VR with RTX + Novel design to show non-Euclidean spaces (by rendering the edges of the fundamental domain with different colors) Cons: - Lack of technical novelty"""
13,"""Comparing Learned and Iterative Pressure Solvers for Fluid Simulation""","['Fluid simulation', 'Pressure projection', 'Convolutional neural network', 'Jacobi', 'RBGS', 'PCG']","""This paper compares the performance of the neural network based pressure projection approach of Tompson et al. to traditional iterative solvers. Our investigation includes the Jacobi and preconditioned conjugate gradient solver comparison included in the previous work, as well as a red-black Gauss-Seidel method, all running with a GPU implementation. Our investigation focuses on 2D fluid simulations and three scenarios that present boundary conditions and velocity sources of different complexity. We collect convergence of the velocity divergence norm as the error in these test simulations and use plots of the error distribution to make high-level observations about the performance of iterative solvers in comparison to the fixed time cost of the neural network solution. Our results show that Jacobi provides the best bang for the buck with respect to minimizing error using a small fixed time budget.""","""Reviewer recommendations ranged from Clear Reject to Marginally Acceptable. Pros: -The paper performs comparisons of a few solvers for 2D fluid animation, including basic iterative solvers and a relatively recent learning-based method. Reviewers agreed that there can be value in benchmark studies of this kind. Cons: -The paper offers no novel simulation techniques. -The range of solvers considered does not include state-of-the-art techniques, e.g., multigrid, direct solvers (Cholesky), etc. -Evaluation was restricted to 2D simulations. -Minimal insight is offered as to why the findings disagree with those of Tompson et al. -One reviewer raised technical concerns regarding unnecessary regularization and the observed convergence behavior of the authors' RBGS implementation. 
Since none of the reviewers advocated strongly for this work and several criticisms were raised, I am recommending rejection. """
14,"""Workflow Graphs: A Computational Model of Collective Task Strategies for 3D Design Software""","['Software learning', 'Workflow analysis', 'Peer learning']","""This paper introduces workflow graphs, or pseudo-formula , which encode how the approaches taken by multiple users performing a fixed 3D design task converge and diverge from one another. The graph's nodes represent equivalent intermediate task states across users, and directed edges represent how a user moved between these states, inferred from screen recording videos, command log data, and task content history. The result is a data structure that captures alternative methods for performing sub-tasks (e.g., modeling the legs of a chair) and alternative strategies of the overall task. As a case study, we describe and exemplify a computational pipeline for building pseudo-formula using screen recordings, command logs, and 3D model snapshots from an instrumented version of the Tinkercad 3D modeling application, and present graphs built for two sample tasks. We also illustrate how pseudo-formula can facilitate novel user interfaces with scenarios in workflow feedback, on-demand task guidance, and instructor dashboards.""","""Meta by R2: Overall the reviewers were positive in their reviews; they all agree that the paper is well written and presents an interesting idea. R1 finds the paper well-motivated, and both R1 and R2 point out that the paper has good coverage of related work. As for improvements to the paper, R3 would like some more details on how mistakes are handled, and both R1 and R2 miss some technical details regarding the implementation. They also ask for more elaboration on the scalability of the approach. In contrast to R1, R2 finds that the motivation could be strengthened in the introduction. The paper is positively received by the reviewers, who all recommend accepting it. My recommendation is therefore to accept the paper. 
For a final version of the paper, the authors should carefully consider all reviewers' constructive comments. """
15,"""Bi-Axial Woven Tiles: Interlocking Space-Filling Shapes Based on Symmetries of Bi-Axial Weaving Patterns""","['Space-Filling shapes', '3D Tile Design', 'Weaving patterns', '3D printing', 'Computational Fabrication']","""In this paper, we introduce a geometric design and fabrication framework for a family of interlocking space-filling shapes which we call bi-axial woven tiles. Our framework is based on a unique combination of (1) Voronoi partitioning of space using curve segments as the Voronoi sites and (2) the design of these curve segments based on weave patterns closed under symmetry operations. The underlying weave geometry provides an interlocking property to the tiles, and the closure property under symmetry operations ensures that a single tile can fill space. In order to demonstrate this general framework, we focus on specific symmetry operations induced by bi-axial weaving patterns. We specifically showcase the design and fabrication of woven tiles by using the most common 2-fold fabrics called 2-way genus-1 fabrics, namely, plain, twill, and satin weaves.""","""This paper is likely to inspire and spur additional work. Including it clearly strengthens GI. Please incorporate the feedback noted in the reviews."""
16,"""Gaggle: Visual Analytics for Model Space Navigation""","['visual analytics', 'interactive machine learning', 'classification', 'ranking']","""Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems. However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models. Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from. This makes it complex to navigate the model space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space. Further, by translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection without requiring any technical expertise from users.""","""This paper receives the scores of 6, 5, 6, which is borderline positive. In general, this paper is acceptable, while there are several places to improve, especially demonstrating the usefulness and generalization of the system. This paper also misses some details and has several clarity issues. I suggest the authors take the reviewers' comments and revise the paper accordingly. In summary, addressing these concerns should be doable within the review cycle, and thus I recommend accepting this paper. """
17,"""Improving View Independent Rendering: Towards Robust, Practical Multiview Effects""","['Point-Based Rendering', 'Real-Time Rendering', 'Sampling and Reconstruction', 'Shadow Algorithm', 'Stochastic Sampling', 'Multiview Rendering']","""This paper describes improvements to view independent rendering (VIR) designed to make its immediate application to soft shadows more practical, and its future application to other multiview effects such as reflections and depth of field more promising. Realtime rasterizers typically realize multiview effects by rendering a scene from multiple viewpoints, requiring multiple passes over scene geometry. VIR avoids this necessity by crafting a watertight point cloud and rendering it from multiple viewpoints in a single pass. We make VIR immediately more practical with an unbuffered implementation that avoids possible overflows, and improve its potential with more efficient sampling achieved with orthographic projection and stochastic culling. With these improvements, VIR continues to generate higher quality real time soft shadows than percentage-closer soft shadows (PCSS), in comparable time.""","""The paper presents three extensions to an earlier method by Marrs et al. None of them is very significant, and they do not seem to show much improvement over the earlier work in terms of quality of results, as shown in the examples in the paper. The authors are recommended to work on the anticipated improvements over Marrs et al.'s work in renderings requiring more demanding shading loads, such as environment mapping, diffuse global illumination and defocus or motion blur. That could help make a case for these extensions."""
18,"""Becoming Cat People: Animal-like Human Experiences with a Sensory Augmenting Whisker Wearable""","['Wearables', 'biosensing', 'empathy', 'exploration', 'exploitation']","""Humans have a natural curiosity to imagine what it feels like to exist as someone or something else. This curiosity becomes even stronger for the pets we care for. Humans cannot truly know what it is like to be our pets, but we can deepen our understanding of what it is like to perceive and explore the world like them. We investigate how wearables can offer people animal perspective-taking opportunities to experience the world through animal senses that differ from those biologically natural to us. To assess the potential of wearables in animal perspective-taking, we developed a sensory-augmenting wearable that gives wearers cat-like whiskers. We then created a maze exploration experience where blindfolded participants utilized the whiskers to navigate the maze. We draw on animal behavioral research to evaluate how the whisker activity supported authentically cat-like experiences, and discuss the implications of this work for future learning experiences.""","""In this paper, the authors present an interesting paper that explores what appears to be a novel wearable system for people to achieve a closer perspective to pets, specifically to cats in their particular system called Whisker Beard. The work involves the use of flex sensors to serve as prosthetic whiskers that are placed on the user's face, which then provides vibrotactile feedback to the user's scalp to better inform them of their surroundings as they navigate. The reviewers were clearly consistent in their thoughts on the Whisker Beard system, particularly praising the strong, articulate writing style and the novelty of a wearable system that was designed for people to better empathize with their pets by attempting to emulate the physical sensory tools of a house cat.
However, the reviewers also shared other concerns about the paper, such as inadequate motivation, lack of implementation details, and a weak study evaluation. As a result, we believe that the paper is not in a ready-enough state to be accepted into this year's conference. Please see below for a list of the reviewers' expanded explanations of the paper's strengths and weaknesses. We believe that addressing these concerns will better prepare the work for submission to a future venue, and we wish the authors the best of luck in further improving this intriguing work.
Pros:
1. The reviewers pointed out that the paper appeared to be original, novel research.
2. The reviewers praised the paper for being well written and strongly articulated.
3. With some exceptions, the reviewers felt that the related work was thoroughly reviewed.
4. The reviewers expressed that the proposed wearable system seemed well designed and operated as intended. They also shared that the study and evaluation appeared to be correct and sound.
5. The reviewers expressed that the authors provided a reasonable summary of the limitations of their wearable system's hardware implementation and study design, and followed up with ways to mitigate these limitations.
Cons:
1. The reviewers had concerns regarding the planning and execution of the study, specifically the low count of six participants and the bias introduced by blindfolding the participants during the study.
2. The reviewers expressed some issues with the overly broad nature of the research questions, since the actual study in the paper does not fully address them.
3. Reviewers 2 and 3 communicated concerns that the wearable system description lacks sufficient details on how it actually works.
4. Reviewers 1 and 2 noted that the related work section still omits several prior studies similar to the study described in the paper.
5. Reviewers 1 and 3 stated their concerns about one execution aspect of the paper involving the blindfolding of the study participants. Specifically, the reviewers explained that the users' reliance on the whisker sensors did not seem to reflect how cats use whiskers.
6. The reviewers had varying concerns regarding weak or lacking support for the paper's motivation claims."""
19,"""A Comparative Study of Lexical and Semantic Emoji Suggestion Systems""","['Emoji suggestion', 'text entry', 'text messaging', 'mobile chat']","""Emoji suggestion systems based on typed text have been proposed to encourage emoji usage and enrich text messaging; however, such systems' actual effects on the chat experience remain unknown. We built an Android keyboard with both lexical (word-based) and semantic (meaning-based) emoji suggestion capabilities and compared these in two different studies. To investigate the effect of emoji suggestion in online conversations, we conducted a laboratory text-messaging study with 24 participants, and also a 15-day longitudinal field deployment with 18 participants. We found that lexical emoji suggestions increased emoji usage by 31.5% over a keyboard without suggestions, while semantic suggestions increased emoji usage by 125.1%. However, suggestion mechanisms did not affect the chatting experience significantly. From these studies, we formulate a set of design guidelines for future emoji suggestion systems that better support users' needs.""","""All of the reviewers appreciated the clear presentation of this work, finding it to be well-written with clear justifications for the design of the presented studies. The reviewers also appreciated that the paper includes a number of different studies to probe its research questions. Though the presentation was appreciated, all of the reviewers also raised concerns about the paper. R2 and R1 expressed concern that there may be confounds in the comparison of semantic and lexical systems (due to the popularity of the emoji in the semantic system, and the different mechanisms through which suggestions are made). R2 expressed concerns about the contribution of the work being thin, in part because it is unclear what is driving the preference for the semantic suggestion system.
Finally, R1 and R2 both pointed out that the design guidelines are not clearly linked to the results of the studies. In terms of how this paper could be improved, R1 and R3 suggested that the paper would benefit from a more in-depth analysis of qualitative data, to gain deeper insights into the different suggestion methods. I agree with this suggestion, and I think that if it were done well, the insights from such an analysis would strengthen the qualitative results, and also enable the paper to make design recommendations that are more grounded in findings (that is, it could address many of the criticisms raised above). As it stands, my recommendation is Reject, because I think that the paper does not make enough of a contribution, and the addition of a more in-depth qualitative analysis would represent a major change that would need a new review cycle to evaluate."""
20,"""ColorArt: Suggesting Colorizations For Graphic Arts Using Optimal Color-Graph Matching""","['Graphic arts', 'Colorization', 'Infographics', 'Color Graph Matching', 'Automatic colorization', 'Reference based colorization']","""Colorization is a complex task of selecting a combination of colors and arriving at an appropriate spatial arrangement of the colors in an image. In this paper, we propose a novel approach for automatic colorization of graphic arts like graphic patterns, info-graphics and cartoons. Our approach uses the artist's colored graphics as a reference to color a template image. We also propose a retrieval system for selecting a relevant reference image corresponding to the given template from a dataset of reference images colored by different artists. Finally, we formulate the problem of colorization as an optimal graph matching problem over color groups in the reference and the template image. We demonstrate results on a variety of coloring tasks and evaluate our model through multiple perceptual studies. The studies show that the results generated through our model are significantly preferred by the participants over other automatic colorization methods.""","""All the reviewers agree that the results are impressive, and the problem is interesting. The exposition and validation of the paper, however, need some work (R1, R3). Since overall the reviewers are positive about the paper, I recommend accepting it with the following provisions: - Clarify or rephrase the analysis of the second user study (R1) - Add missing details on kNN (R3) and, if possible, a figure demonstrating how well the reference search works (R1) - Fix typos (R2, R3)."""
21,"""The Effect of Visual and Interactive Representations on Human Performance and Preference with Scalar Data Fields""","['Perception', 'Scalar data field']","""2D scalar data fields are often represented as heatmaps because color can help viewers perceive structure without having to interpret individual digits. Although heatmaps and color mapping have received much research attention, there are alternative representations that have been generally overlooked and might overcome heatmap problems. For example, color perception is subject to context-based perceptual bias and high error, which can be addressed through representations that use digits to enable more accurate value reading. We designed a series of three experiments that compare four alternative techniques to a state-of-the-art heatmap: regular tables of digits, an interactive tooltip showing the value under the cursor, a heatmap with the digits overlapped over it, and FatFonts. Data analysis from the three experiments, which test locating values, finding extrema, and clustering tasks, shows that overlapping digits on color offers a substantial increase in accuracy (between 10 and 60 percentage points of improvement over the plain heatmap, depending on the task) at the cost of extra time when locating extrema or forming clusters, but none when locating values. The interactive tooltip offered a poor speed-accuracy tradeoff, but participants preferred it to the color-only or digits-only representations. We conclude that hybrid color-digit representations of scalar data fields could be highly beneficial for uses where spatial resolution and speed are not the main concern.
""","""This submission has received the following scores: - Average Rating: 7.33 (Min: 6, Max: 8) - Average Confidence: 4 (Min: 4, Max: 4) Reviews highlighted the following strengths: - clearly written presentation (R1+R3) - related work well covered (R1+R3) and built upon (R2) - supplementary material detailed (R1) and available (R3) - insight on pros/cons of evaluation techniques provided (R1) - chosen baseline technique (Digits) is realistic (R2) Reviews pointed out the following weaknesses: - challenging visual presentation and positioning of figures and/versus related text (R1+R2+R3) - references (R2) or details (R3) missing for some claims: particularly on large displays - weak research question framing (R1) - lack of design choice rationale (R1) I recommend acceptance of this paper."""
22,"""Fine Feature Reconstruction in Point Clouds by Adversarial Domain Translation""","['deep learning', 'point clouds', 'surface reconstruction']","""Point cloud neighborhoods are unstructured and often lacking in fine details, particularly when the original surface is sparsely sampled. This has motivated the development of methods for reconstructing these fine geometric features before the point cloud is converted into a mesh, usually by some form of upsampling of the point cloud. We present a novel data-driven approach to reconstructing fine details of the underlying surfaces of point clouds at the local neighborhood level, along with normals and locations of edges. This is achieved by an innovative application of recent advances in domain translation using GANs. We ""translate"" local neighborhoods between two domains: point cloud neighborhoods and triangular mesh neighborhoods. This allows us to obtain some of the benefits of meshes at training time, while still dealing with point clouds at the time of evaluation. By resampling the translated neighborhood, we can obtain a denser point cloud equipped with normals that allows the underlying surface to be easily reconstructed as a mesh. Our reconstructed meshes preserve fine details of the original surface better than the state of the art in point cloud upsampling techniques, even at different input resolutions. In addition, the trained GAN can generalize to operate on low resolution point clouds even without being explicitly trained on low-resolution data. We also give an example demonstrating that the same domain translation approach we use for reconstructing local neighborhood geometry can also be used to estimate a scalar field at the newly generated points, thus reducing the need for expensive recomputation of the scalar field on the dense point cloud.""","""It is recommended that the paper be accepted for presentation at GI'20. 
The merits of the submission include the importance of the topic and convincing results. But the paper needs to be revised to improve its validation in terms of using larger data sets and comparing with (or citing) other related works. Please see the detailed review comments to improve the presentation of the paper. """
23,"""WAAT: a Workstation AR Authoring Tool for Industry 4.0""","['VR', 'AR', 'XR', 'AR Authoring', '3D Authoring', 'Industry 4.0']","""The use of AR in an industrial context could help with the training of new operators. To be able to use an AR guidance system, we need a tool to quickly create a 3D representation of the assembly line and of its AR annotations. This tool should be very easy to use by an operator who is not an AR or VR specialist: typically the manager of the assembly line. This is why we proposed WAAT, a 3D authoring tool allowing users to quickly create 3D models of the workstations, and also test the AR guidance placement. WAAT makes on-site authoring possible, which should really help to have an accurate 3D representation of the assembly line. The verification of AR guidance should also be very useful to make sure everything is visible and doesn't interfere with technical tasks. In addition to these features, our future work will be directed toward the deployment of WAAT into a real boiler assembly line to assess the usability of this solution.""","""The reviewers agree that the paper addresses an important research area, but they also agree that what the paper reports on is work-in-progress and lacks evaluation and a proper positioning in relation to previous work on AR for assembly tasks. The reviewers unanimously agree that the paper should be rejected."""
24,"""QCue: Queries and Cues for Computer-Facilitated Mind-Mapping""","['mind-mapping', 'computer-supported creativity tools', 'idea generation', 'ConceptNet']","""We introduce a novel workflow, QCue, for providing textual stimulation during mind-mapping. Mind-mapping is a powerful tool whose intent is to allow one to externalize ideas and their relationships surrounding a central problem. The key challenge in mind-mapping is the difficulty in balancing the exploration of different aspects of the problem (breadth) with a detailed exploration of each of those aspects (depth). Our idea behind QCue is based on two mechanisms: (1) computer-generated automatic cues to stimulate the user to explore the breadth of topics based on the temporal and topological evolution of a mind-map and (2) user-elicited queries for helping the user explore the depth for a given topic. We present a two-phase study wherein the first phase provided insights that led to the development of our work-flow for stimulating the user through cues and queries. In the second phase, we present a between-subjects evaluation comparing QCue with a digital mind-mapping work-flow without computer intervention. Finally, we present an expert rater evaluation of the mind-maps created by users in conjunction with user feedback.""","""This paper received three positive reviews, all of which recommend that the paper be accepted. Based on this, I am recommending 'Accept' as well. Though all of the reviews were positive, they also provided suggestions on how to improve the paper, and strengthen its presentation and contribution. I encourage the authors to take these suggestions to heart, and to try and integrate them into the final version of the paper."""
25,"""Image Abstraction through Overlapping Region Growth""","['Non-photorealistic rendering', 'Image stylization', 'Segmentation', 'Abstraction']","""We propose a region-based abstraction of a photograph, where the image plane is covered by overlapping irregularly shaped regions that approximate the image content. We segment regions using a novel region growth algorithm intended to produce highly irregular regions that still respect image edges, different from conventional segmentation methods that encourage compact regions. The final result has reduced detail, befitting abstraction, but still contains some small structures such as highlights; thin features and crooked boundaries are retained, while interior details are softened, yielding a painting-like abstraction effect.""","""All the reviews are positive about the results demonstrated in the paper. It would be good to address the quantitative evaluation concern, clarify the runtime/bottleneck, and add corresponding comparisons. """
26,"""StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation""","['Robot', 'Input Techniques', 'Navigation']","""Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation, in which a user directly specifies the navigation of a drone camera relative to a specified object of interest. The system relies on minimal environmental information and combines both manual and automated control mechanisms to give users the freedom to remotely explore an environment with efficiency and accuracy. A lab study shows that StarHopper offers an efficiency gain of 35.4% over manual piloting, complemented by an overall user preference towards our object-centric navigation system.""","""All reviewers were convinced by this work, by the results of the study showing that this drone interface outperforms existing techniques. All reviewers recommend acceptance. Congratulations!"""
27,"""Gaze-based Command Activation Technique Robust Against Unintentional Activation using Dwell-then-Gesture""","['Gaze-based interaction', 'gaze gesture', 'human behavior', 'eye tracking', 'gaze movement', 'GUI', 'user study']","""We show a gaze-based command activation technique that is robust to unintentional command activations using a sequence of dwelling on a target and then performing a gesture (dwell-then-gesture manipulation). The gesture we adopt is a simple two-level stroke, which consists of a sequence of two orthogonal strokes. To achieve robustness against unintentional command activations, we design and fine-tune a gesture detection system based on how users move their gaze, as revealed through three experiments. Although our technique may seem to merely combine well-known dwell-based and gesture-based manipulations and its success rate may not seem sufficient, ours is the first work to enrich the gaze-based command vocabulary to a level comparable to mouse-based interaction.""",""" Based on the reviews, I recommend that this manuscript be accepted to Graphics Interface. While there are some issues with the presentation, both R1 and R3 are confident that there is a small, interesting contribution that is worth publishing for the benefit of the community. All reviewers noted several points where the authors can clarify their motivation, their contribution, and relationship to prior work. R2 and R1 provide several specific spots where this can be done, and I want to draw the authors' attention to R2's appeal to provide a slightly more thorough treatment of prior work--specifically to clarify the contribution of this work, and the motivation and justification for the approach given prior work."""
28,"""Support System for Etching Latte Art by Tracing Procedure Based on Projection Mapping""","['Projection Mapping', 'Etching Latte Art', 'Learning Support System']","""It is difficult for beginners to create well-balanced etched latte art patterns using two fluids with different viscosities, such as foamed milk and syrup. However, it is not easy to create well-balanced etched latte art even while watching process videos that show procedures. In this paper, we propose a system that supports beginners in creating well-balanced etched latte art by projecting the etching procedure directly onto a cappuccino. In addition, we examine the similarity between etched latte art and design templates using background subtraction. The experimental results show the progress in creating well-balanced etched latte art using our system.""","""All of the reviewers found that the paper explores an interesting topic (latte art) although there are details about the study participants, scaling of the system, and system design choices that should be added to improve clarity. It would be beneficial for the author(s) to include such details and make the explanation of their target audience much clearer. Overall, it seems to be just above the bar and good enough for acceptance at GI."""
29,"""Time Prediction Model for Pointing at Target Having Different Motor and Visual Width with Distractors""","['Difference between motor and visual widths', 'distractor', 'pointing', ""Fitts' law"", 'GUIs']","""In this study, we extend Fitts' law to enable it to predict the movement time of pointing operations in interfaces, such as in navigation bars whose items have different motor and visual widths and intervals between a target and distractors. For this, we conduct two experiments to investigate the presence or absence of the distractors that affect pointing operations and how increasing the size of the intervals changes user performance. We found that the movement time is strongly affected by the motor width and intervals and slightly by the visual width. On the basis of the results, we constructed a model for considering the difference between the motor and visual widths and the intervals between the target and distractors. The model allows user-interface designers to configure these factors on the basis of movement time. Our model also showed a good fit for not only the data of our two experiments but also those of three previous studies. We also discuss future work for making our model more practical.""","""All reviewers found the paper interesting, with sound study design and methodology. However, the ratings are all somewhat negative, with various reasons: - the paper is not very convincing as to why this phenomenon needs to be modeled at all (All Rs). The presented examples either feel ""niche"" or simply like bad design, which the reviewers are not sure needs a model. - it is unclear how exactly this model will benefit designers (All Rs), how it would integrate into their existing processes. - R2 and R3 question the studies' approach and parameters.
- R2 and R3 offer comments on how to make the results more readable, and easier to interpret - R1 and R3 criticize the use of the term ""motor size/width"", R1 and R2 feel like the 0.0049 value should be rationalized. Other comments can be found in the individual reviews, all worthy of consideration for a future resubmission. In its current state, the consensus is to reject this submission until these issues are addressed."""
30,"""Interactive Shape Based Brushing Technique for Trail Sets""","['Input Techniques', 'Visualization', 'Content Analysis', 'Eye Tracking', 'Interaction Design', 'Trajectory data']","""Brushing techniques have a long history with the first interactive selection tools appearing in the 1990s. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and overlap. Existing techniques rely on trial and error combined with many view modifications such as panning, zooming, and selection refinements. For moving object analysis, recorded positions are connected into line segments forming trajectories and thus creating more occlusions and overplotting. As a solution for selection in cluttered views, this paper investigates a novel brushing technique which relies not only on the actual brushing location but also on the shape of the brushed area. The process can be described as follows. Firstly, the user brushes the region where trajectories of interest are visible (standard brushing technique). Secondly, the shape of the brushed area is used to select similar items. Thirdly, the user can adjust the degree of similarity to filter out the requested trajectories. This brushing technique encompasses two types of comparison metrics, the piecewise Pearson correlation and the similarity measurement based on information geometry. To show the efficiency of this novel brushing method, we apply it to concrete scenarios with datasets from air traffic control, eye tracking, and GPS trajectories.""","""The reviewers were split on this paper, raising significant concerns about the understandability of some aspects, including the query refinement, small multiples, and the way the PCA works.
They have made concrete suggestions about sections that need attention and potential reorganization which could improve this manuscript immensely. I suggest the authors pay careful attention to these recommendations as well as the minor edits in order to improve the paper before presentation. Two reviewers raised concerns about the treatment of the expert feedback (that it was collected, then disregarded). Two reviewers mentioned that the challenges may have come from the usability of a system with names like ""Pearson"" on the tools rather than more semantically meaningful names, but it's hard to say what the reasons could be as the feedback is not reported at all. We recommend that at least some of the expert feedback (even if negative) be discussed, to help readers know what to take away from the feedback in case they wish to reimplement the reported techniques and build on what was learned from the study. Overall, the reviews lean weakly towards ""accept""."""
31,"""Evaluation of Body-Referenced Graphical Menus in Virtual Environments ""","['3D Menus', 'Menu Placements', 'Menu Selection Techniques', 'Menu Shapes', 'Virtual Reality']","""Graphical menus have been extensively used in desktop applications and widely adopted and integrated into virtual environments (VEs). However, while desktop menus are well evaluated and established, adopted 2D menus in VEs are still lacking a thorough evaluation. In this paper, we present the results of a comprehensive study on body-referenced graphical menus in a virtual environment. We compare menu placements (spatial, arm, hand, and waist) in conjunction with various shapes (linear and radial) and selection techniques (ray-casting with a controller device, head, and eye gaze). We examine task completion time, error rates, number of target re-entries, and user preference for each condition and provide design recommendations for spatial, arm, hand, and waist graphical menus. Our results indicate that the spatial, hand, and waist menus are significantly faster than the arm menus, and the eye gaze selection technique is more prone to errors and has a significantly higher number of target re-entries than the other selection techniques. Additionally, we found that a significantly higher number of participants ranked the spatial graphical menus as their favorite menu placement and the arm menu as their least favorite one.""","""Overall, all the reviewers acknowledge that the paper is well motivated (R1,R3) with strong justification (R3), and a comprehensive experiment (R1,R2). R3 provides a thorough overview of the key strengths and issues with the paper. R1 and R2 raise specific concerns about aspects of the study design. I'm highlighting some of the key concerns here. Firstly there are questions about the placement of the instruction/system message (R1,R2). 
R1 gives a detailed explanation of why the design of the system message and its placement needs to be clarified and the discussion section updated. R3 asks the authors to address the issues related to the robustness of body tracking. R1 and R3 also request several key pieces of information related to the study task. R3 suggests commenting on the generalizability of the study tasks. R2 suggests checking the degrees of freedom for some of the reported F values. R2 & R3 suggest edits for the figures to improve clarity. R1 & R3 request including images for different task conditions for clarity. Overall, I think the paper can be accepted with minor revisions addressing the issues raised by the three reviewers."""
32,"""Time-Varying Word Clouds""","['word cloud', 'time-varying', 'real time', 'dynamic']","""We visualize time-varying text information with physically based simulation. Word-clouds are a popular means of visualizing the frequency of different concepts in a text document, but there is little work using text that has a time component, for instance, news feeds, twitter, or abstracts of papers published in a given journal or conference by year. We use physically simulated words that grow and shrink with time as an interactive web-based visualization tool. We choose to use an existing 2D simulation framework, Matter.js, to develop the interface, with carefully designed forces to ensure a robust animation. We perform an informal user study to understand the ability of users to understand information presented in this dynamic way.""","""R1 and R3 are for rejecting the paper, R2 finds it a clear accept. However, R2 is very uncertain about his review. All the reviewers think the problem is interesting and worth looking at. However, R1 and R3 have concerns with the study, find the motivation of the technique lacking, and miss a clear elaboration of the design decisions employed to arrive at this technique. In the study, time-varying word clouds are compared to line graphs. There is no evidence for the technique being superior to line graphs, even though the authors suggest otherwise. Furthermore, R3 hints at possible distractions from the physically based simulations, but no discussion in the paper takes up this point. In summary, the cons are major and I therefore recommend rejecting this paper."""
33,"""Computer Vision Applications and their Ethical Risks in the Global South""","['Computer Vision', 'Ethics', 'Privacy', 'ICTD', 'Global South']","""We present a study of recent advances in computer vision (CV) research for the Global South to identify the main uses of modern CV and its most significant ethical risks in the region. We review 55 research papers and analyze them along three principal dimensions: where the technology was designed, the needs addressed by the technology, and the potential ethical risks arising following deployment. Results suggest: 1) CV is most used in policy planning and surveillance applications, 2) privacy violation is the most likely and most severe risk to arise from modern CV systems designed for the Global South, and 3) researchers from the Global North differ from researchers from the Global South in their uses of CV to solve problems in the Global South. Results of our risk analysis also differ from previous work on CV risk perception in the West, suggesting locality to be a critical component of each risk's importance.""","""The paper received mixed scores from the reviewers (7, 7, 3). I believe that a survey paper is a valid contribution type for GI's HCI track, as it helps further our knowledge about designs that have been explored and highlights what can be done in the future. While the paper has shortcomings (addressed in the paper) and offers limited results (as acknowledged by all reviewers), the results are timely (R3) and interesting (R1, R3). Based on this, I recommend that the paper be accepted. Below I summarize the key issues identified by the reviewers and encourage the authors to read through the individual reviews carefully to address other concerns.
- Address the methodological limitations early in the paper (R2, R3)
- More directly define the scope of this work (e.g., what does CV-systems encapsulate (R1), acknowledge that risks not identified in the analysis may still exist (R3))
- Provide more details about background literature (R1, R3)
- Clearly highlight the surprising and new results (R1)
Recommendation: Accept """
34,"""Line-Storm Ludic System: An Interactive Augmented Stylus and Writing Pad for Creative Soundscape""","['Ludic system', 'Creativity', 'Interactive soundscape']","""We present Line-Storm, an interactive computer system for creative performance. The context we investigated was writing on paper using Line-Storm. We used self-report questionnaires as part of research involving human participants, to evaluate Line-Storm. Line-Storm consisted of a writing stylus and writing pad, augmented with electronics. The writing pad was connected to a contact microphone, and the writing stylus had a small micro-controller board and peripherals attached to it. The signals from these electronic augmentations were fed into the audio-synthesis environment Max/MSP to produce an interactive soundscape. We attempted to discover whether Line-Storm enhanced a self-reported sense of being present and engaged during a writing task, and we compared Line-Storm to a non-interactive control condition. After performing statistical analysis in SPSS, we were unable to support our research hypothesis, that presence and engagement were enhanced by Line-Storm. Participants reported they were, on average, no more present and engaged during the experimental condition than during the control condition. As creativity is subtle, and varies with person, time, context, space and so many other factors, this result was somewhat expected by us. A statistically significant result of our study is that some participants responded to Line-Storm more positively than others. These Preservers of Line-Storm were a group, distinct from other participants, who reported greater presence and engagement and who wrote more words with Line-Storm and during the control condition. We discuss the results of our research and place Line-Storm in an artistic-technological context, drawing upon writings by Martin Heidegger when considering the nature of Line-Storm. 
Future work includes modifying interactive components, improving aesthetics and using more miniaturized electronics, experimenting with a drawing task instead of a writing task, and collaborating with a composer of electronic music to make a more interesting, immersive, and engaging interactive soundscape for writing or drawing performance.""","""All three reviewers unfortunately felt that this paper was not above the bar in its current state; the paper is currently hard to review because it seems incomplete (i.e., missing figures / sections) and much of the related work / introduction concepts don't seem related to the stylus design, motivation, or study. In the current form, it is unclear what the digital aspect of the system does (i.e., which sounds are generated and when) and difficult to evaluate the results due to missing details about the study methodology. """
35,"""Interactive Exploration of Genomic Conservation""","['Interactive visualization', 'genomic visualization', 'coordinated and multiple views', 'synteny', 'synvisio']","""Comparative analysis in genomics involves comparing two or more genomes to identify conserved genetic information. These duplicated regions can indicate shared ancestry and can shed light on an organism's internal functions and evolutionary history. Due to rapid advances in sequencing technology, high-resolution genome data is now available for a wide range of species, and comparative analysis of this data can provide insights that can be applied in medicine, plant breeding, and many other areas. Comparative genomics is a strongly interactive task, and visualizing the location, size, and orientation of conserved regions can assist researchers by supporting critical activities of interpretation and judgment. However, visualization tools for the analysis of conserved regions have not kept pace with the increasing availability of genomic information and the new ways in which this data is being used by biological researchers. To address this gap, we gathered requirements for interactive exploration from three groups of expert genomic scientists, and developed a web-based tool called SynVisio with novel interaction techniques and visual representations to meet those needs. Our tool supports multi-resolution analysis, provides interactive filtering as researchers move deeper into the genome, supports revisitation to specific interface configurations, and enables loosely-coupled collaboration over the genomic data. An evaluation of the system with five researchers from three expert groups provides evidence about the success of our system's novel techniques for supporting interactive exploration of genomic conservation.""","""While the reviewers applauded the topic and presentation of the paper, they raised some serious concerns.
In particular, R1 and R2 thought that the system lacks novelty, and all reviewers found that the evaluation could be stronger. R1 asked for fewer case studies but more depth. R2 requested a more rigorous qualitative evaluation. R3 demanded shorter case studies and more analysis of the web traffic logs (R1 shared the same view). Overall, this paper is acceptable, but it needs ""shepherding"". Given this is the last deadline of GI, it might not be possible to allow such a revision within the review cycle. But I keep a positive attitude. """
36,"""Lean-Interaction: passive image manipulation in concurrent multitasking""","['Multimodal Interaction', 'Multitasking', 'Hands-free Interaction', 'Radiology', 'Medical Domain']","""Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect the main task performance. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a hands-free image manipulation task during a primary manual task with an upright stance. Essential tasks were extracted from the example of clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the users' performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture and speech input. We found that leaning as an interaction technique facilitates significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.""","""This paper has mixed reviews, with R2 in favour of acceptance, R1 against, and R3 marginal but slightly against. R3 provides some particularly good feedback about some improvements that can be made to the work. However, R1 and R3 both focus their reviews on particular details and do not discuss their impact on the overall contribution, which R2 sees as lying in the abstract study of complementary modalities, based on observations of a domain task. The primary difficulty outlined by R1 and R3 is that the motivation for the connection between the domain task and the abstract task is not clearly explained, leading to confusion about where the paper contribution is intended to focus.
I feel this key issue and other issues raised can be addressed with minor revisions. In addition, R2 and R3 both indicate the paper is well structured and written, so overall I lean on the side of acceptance."""
37,"""Selection Performance Using a Scaled Virtual Stylus Cursor in VR""",[],"""We propose a surface warping technique we call warped virtual surfaces (WVS). WVS is similar to applying CD gain to a mouse cursor on a screen and is used with traditionally 1:1 input devices, in our case, a tablet and stylus, for use with VR head-mounted displays (HMDs). WVS allows users to interact with arbitrarily large virtual panels in VR while getting the benefits of passive haptic feedback from a fixed-sized physical panel. To determine the extent to which WVS affects user performance, we conducted an experiment with 24 participants using a Fitts' law reciprocal tapping task to compare different scale factors. Results indicate there was a significant difference in movement time for large scale factors. However, for throughput (ranging from 3.35 - 3.47 bps) and error rate (ranging from 3.6 - 5.4%), our analysis did not find a significant difference between scale factors. Using non-inferiority statistical testing (a form of equivalence testing), we show that performance in terms of throughput and error rate for large scale factors is no worse than a 1-to-1 mapping. Our results suggest WVS is a promising way of providing large tactile surfaces in VR, using small physical surfaces, and with little impact on user performance.""","""All reviewers identified several issues with the study. Especially R2 lists several shortcomings, some of which can be considered major. At the same time, reviewers appreciate the simple, but nice idea of the technique and acknowledge the novelty of the technique and study. While the paper is mostly easy to follow, some of the negative points are due to a lack of clarity in the paper. R1 states that a strong use case is lacking. R1 and R2 were confused about the fact that the authors created an immersive scene and populated it with 3D content, while a description of its actual use and role is lacking.
R3 raises several points about the discussion and how results should be presented less confusingly. In addition, R3 has several other detailed suggestions to improve the writing, some of which overlap with the suggestions by R1. R2 suggests focusing the review of related literature more strongly. Overall, this is a borderline paper and could go either way. However, provided that the authors incorporate the writing-related feedback from the reviewers in a minor revision, this paper reaches the bar for acceptance."""
38,"""AuthAR: Concurrent Authoring of Tutorials for AR Assembly Guidance""","['Augmented Reality', 'content authoring', 'assembly tutorials', 'gaze input', 'voice input']","""Augmented Reality (AR) can assist with physical tasks such as object assembly through the use of situated instructions. These instructions can be in the form of videos, pictures, text or guiding animations, where the most helpful media among these is highly dependent on both the user and the nature of the task. Our work supports the authoring of AR tutorials for assembly tasks with little overhead beyond simply performing the task itself. The presented system, AuthAR, reduces the time and effort required to build interactive AR tutorials by automatically generating key components of the AR tutorial while the author is assembling the physical pieces. Further, the system guides authors through the process of adding videos, pictures, text and animations to the tutorial. This concurrent assembly and tutorial generation approach allows for authoring of portable tutorials that fit the preferences of different end users.""","""2 out of 3 reviewers recommend (weak) acceptance of this work, as such I will stick with the majority. Reviewers found the paper to be well written and that the solution dealt with several design challenges. All the reviewers also think that there are major limitations and a lack of focus, and provide detailed feedback on how to improve. I hope the authors will find the insightful and constructive criticism provided by the reviewers beneficial for their future work."""
39,"""SheetKey: Generating Touch Events by a Pattern Printed with Conductive Ink for User Authentication""","['Mobile authentication', 'touchscreens', 'conductive ink', 'capacitive touch panel']","""Personal identification numbers (PINs) and grid patterns have been used for user authentication, such as for unlocking smartphones. However, they carry the risk that attackers will learn the PINs and patterns by shoulder surfing. We propose a secure authentication method called SheetKey that requires complicated and quick touch inputs that can only be accomplished with a sheet that has a pattern printed with conductive ink. Using SheetKey, users can input a complicated combination of touch events within 0.3 s by just swiping the pad of their finger on the sheet. We investigated the requirements for producing SheetKeys, e.g., the optimal disc diameter for generating touch events. In a user study, 13 participants passed through authentication by using SheetKeys at success rates of 78-87%, while attackers using manual inputs had success rates of 0-27%. We also discuss the degree of complexity based on entropy and further improvements, e.g., entering passwords on alphabetical keyboards.""","""Three expert reviewers reviewed this paper. R2 noted that the paper could have gone further with the evaluation, to test the full usability of the system, and was concerned that the contribution of the paper was unclear. R1 is a bit more positive, with some comments on the technical aspects of the work. Finally, R3 is the most positive, and is strongly in favor of acceptance, with some requests for additional details. Overall, I believe the paper is above the bar for publication to GI, based on the originality and quality of this work. 
While the evaluation does not go very far into the usability aspects, some of which are left to future work, the system as a proof of concept more than meets the bar (a completed implementation, an evaluation of the technical aspects, and some sample application areas). In revisions to the paper, I ask that the authors resolve the following issues prior to acceptance:
- (all Rs) Add some details substantiating the claim of the conductive ink as being easily/widely accessible (e.g. cost, availability), or remove this claim
- (R1, R2) Add some details on the fabrication process, e.g. fabrication design and print time, costs
- (R3) Add several details about the evaluation to the paper: details on study design (type/randomization of tasks, compensation of participants, ethics approval), details on SheetKey (size of disc)
The authors should also strongly consider the following suggestions:
- (R1) Consider including a few additional references in related work
- (R1) Consider discussing reasons why the accuracy is low (~80%)
- (R3) If space allows, add a figure of the SheetKey overlaid on the phone to show scale """
40,"""Generation of 3D Human Models and Animations Using Simple Sketches""","['Sketch-based shape modeling', 'deep learning', '2D sketches', '3D shapes', 'static and dynamic 3D human models', 'computer graphics']","""Generating 3D models from 2D images or sketches is a widely studied and important problem in computer graphics. We describe the first method to generate a 3D human model from a single sketched stick figure. In contrast to the existing human modeling techniques, our method requires neither a statistical body shape model nor a rigged 3D character model. We exploit Variational Autoencoders to develop a novel framework capable of transitioning from a simple 2D stick figure sketch to a corresponding 3D human model. Our network learns the mapping between the input sketch and the output 3D model. Furthermore, our model learns the embedding space around these models. We demonstrate that our network can generate not only 3D models, but also 3D animations through interpolation and extrapolation in the learned embedding space. Extensive experiments show that our model learns to generate reasonable 3D models and animations.""","""The reviewers have agreed that the paper has results of borderline quality and limited applications (R1,R2,R3). However, the technique and the attempt itself are new and rather interesting and might inspire new research (R1,R3). I recommend accepting this paper, encouraging the authors to correct, if possible, the input skeletons in Section 5.2, and correct the comparison text with [Han et al. 2017] to address the concerns raised by R2. Similarly, I encourage the authors to correct the minor spelling mistakes and add the missing details, as suggested by R1, R3."""
41,"""Testing the Limits of the Spatial Approach: Comparing Retrieval and Revisitation Performance of Spatial and Paged Data Organizations for Large Item Sets""","['Filtering', 'spatial memory', 'revisitation']","""Finding and revisiting objects in visual content collections is common in many analytics tasks. For large collections, filters are often used to reduce the number of items shown, but many systems generate a new ordering of the items for every filter update and these changes make it difficult for users to remember the locations of important items. An alternative is to show the entire dataset in a spatially-stable layout, and show filter results with highlighting. The spatial approach has been shown to work well with small datasets, but little is known about how spatial memory scales to tasks with hundreds of items. To investigate the scalability of spatial presentations, we carried out a study comparing finding and re-finding performance with two data organizations: pages of items that re-generate item ordering with each filter change, and a spatially-stable organization that presents all 700 items at once. We found that although overall times were similar, the spatial interface was faster for revisitation, and participants used fewer filters than in the paged interface as they gained familiarity with the data. Our results add to previous work by showing that spatial interfaces can work well with datasets of hundreds of items, and that they better support a transition to fast revisitation using spatial memory.""","""All reviewers agree that the paper presents some interesting findings, although they are somewhat limited in scope. They all find that the experimental methodology is sound and that the paper is well written. The reviewers agree that the paper in its current state is too long and there are a number of sections that can be condensed. Reviewer 2 has concrete suggestions for how to shorten the paper.
There was some lack of clarity regarding the tasks, and regarding how blocks × targets were presented, that needs to be addressed in a final version. Also, reviewer 2 points out that the authors do not refer back to the research questions in the latter half of the paper. Overall, I lean towards accepting the paper. However, the authors should carefully consider the suggestions made by the reviewers, particularly in regard to shortening the paper and clarifying the questions they have raised."""
42,"""Cluster-Flow Parallel Coordinates: Tracing Clusters Across Subspaces""","['information visualization', 'multivariate data visualization', 'parallel coordinates', 'fuzzy clustering', 'subspace clustering']","""We present a novel variant of parallel coordinates plots (PCPs) in which we show clusters in 2D subspaces of multivariate data and emphasize flow between them. We achieve this by duplicating and stacking individual axes vertically. On a high level, our cluster-flow layout shows how data points move from one cluster to another in different subspaces. We achieve cluster-based bundling and limit plot growth through the reduction of available vertical space for each duplicated axis. Although we introduce space between clusters, we preserve the readability of intra-cluster correlations by starting and ending with the original slopes from regular PCPs and drawing Hermite spline segments in between. Moreover, our rendering technique enables the visualization of small and large data sets alike. Cluster-flow PCPs can even propagate the uncertainty inherent to fuzzy clustering through the layout and rendering stages of our pipeline. Our layout algorithm is based on A*. It achieves an optimal result with regard to a novel set of cost functions that allow us to arrange axes horizontally (dimension ordering) and vertically (cluster ordering).""","""All the reviewers agree that this is a good piece of work but all have concerns that the usefulness of the approach in practice has not been proven due to the absence of evaluation. All reviewers also agree that the quality of the presentation of the work is one of its strong merits."""
43,"""Exploring the Design of Patient-Generated Data Visualizations""","['patient-generated data', 'chronic conditions', 'visualization designs']","""We were approached by a group of healthcare providers who are involved in the care of chronic patients looking for potential technologies to facilitate the process of reviewing patient-generated data during clinical visits. Aiming at understanding the healthcare providers' attitudes towards reviewing patient-generated data, we (1) conducted a focus group with a mixed group of healthcare providers. Next, to gain the patients' perspectives, we (2) interviewed eight chronic patients, collected a sample of their data and designed a series of visualizations representing patient data we collected. Last, we (3) sought feedback on the visualization designs from healthcare providers who requested this exploration. We found four factors shaping patient-generated data: data & context, patient's motivation, patient's time commitment, and patient's support circle. Informed by the results of our studies, we discussed the importance of designing patient-generated visualizations for individuals by considering both patient and healthcare provider rather than designing with the purpose of generalization and provided guidelines for designing future patient-generated data visualizations.""","""Reviewers acknowledged that the paper is interesting, well-written, and relevant to the community. They also highlighted the thoroughness of the methodology employed. However, they also highlighted some issues with the paper that the authors need to address. Below, I summarize the key issues; however, I encourage the authors to read through the reviews carefully and address all issues highlighted by individual reviewers:
- The paper claims to cover both patient and healthcare provider's perspectives, however, there is a disproportionate focus on the healthcare provider side [R1].
- Lack of a clear rationale behind the choice of visualization presented in this paper [R1]
- Limited generalizability of the findings [R1, R2]
- Lack of clear articulation of the research problem and question in line with how they are situated within the literature [R2]
- Failed to demonstrate how the authors showed some sensitivity towards participants with some form of a chronic condition [R2]
- Some redundant and/or long-winded discussions [R3]
- Some conflicting narratives and recommendations [R3]
Despite the shortcomings and highlighted weaknesses, the reviewers believe the paper holds some potential. I also believe that, although the issues highlighted are very important and must be addressed in the final version, they do not require significant changes in the paper. Hence, I recommend that the paper be accepted. """
44,"""Bend or PIN: Studying Bend Password Authentication with People with Vision Impairment""","['Deformable user interaction', 'bend gestures', 'user authentication methods', 'vision impairment', 'low vision', 'accessibility']","""People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more observable. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, limited work has evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants who live with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer greater opportunity over PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.""","""Reviewers agree that this paper has the potential to bring an interesting contribution, and that the research is mostly well executed. There are some concerns expressed by reviewers, mainly:
- Limited engagement with literature (R2) and with prior work in terms of authentication types (R2, R3)
- Limitations of the study re: baseline (R2, R3)
- Concerns re: analysis of data (R1, R2) and need to discuss limitations of analysis (R2, R3)
- Lack of discussion of ethical considerations (R2)
- Motivation for chosen design not well argued (R2, R3)
I believe some of these can be addressed with some editing."""
45,"""Peephole Steering: Speed Limitation Models for Steering Performance in Restricted View Sizes""","['Steering law', 'graphical user interfaces', 'motor performance modeling', ""Fitts' law"", 'anticipation strategy']","""The steering law is a model for predicting the time and speed for passing through a constrained path. When people can view only a limited range of a path forward, they have to limit their speed in preparation of possibly needing (e.g.) to turn at a corner. However, only a few studies have focused on how limited views affect steering performance, and no quantitative models have been established. The results of a mouse steering study showed that speed was linearly limited by the path width and limited by the square root of viewable forward distance as well. While a baseline model showed an adjusted R^2 = 0.144 for predicting the speed, our best-fit model showed an adjusted R^2 = 0.975 with only one additional coefficient, which demonstrated a comparatively higher prediction accuracy among given viewable forward distances.""","""Overall, reviewers were in agreement about the quality of the paper and the fact that it is well written and structured, investigating an interesting topic and describing a solid experiment. Reviewers (R1,R2) also praised that while the contribution is small, it remains useful to the HCI community. That being said, reviewers also mentioned several weaknesses in the current submission that the authors should address, as detailed below.
# Use cases
Reviewers had mixed feelings regarding the use-case examples presented in the paper. On one hand, R1 and R2 found that some examples should be removed and that the paper should be more focused on the use cases to which it is clear that the proposed model applies. On the other hand, R3 would like the authors to provide more examples. I would recommend the authors to go with the former and to focus the paper on a very specific, yet existing, use case.
As R2 says, ""There is nothing wrong with limiting the scope to what the study actually reveals."".
# Clarify decisions
R2 points out that the authors should clarify why two different models are used to estimate MT for the different path segments. Authors should clarify why this specific use requires these two models (due to the cornering). Also, R2 would like the authors to better justify some experimental design decisions. Typically, why was W2 sufficiently narrow for this task?
# Study results
Reviewers also expressed some concerns regarding how results are reported. R1 and R2 raised that post-hoc tests are not mentioned in the paper, and that pairwise significant differences are only present on some figures (or not present at all, only in supplementary materials). The authors should provide all these results for the sake of clarity and completeness. R3 mentions that it would help the readers if the results were unpacked more and contextualized, providing the interaction effect as an example. Personally, I believe that it is not necessary for clarity, but agree that it provides a better reading experience. Finally, R2 regrets that some data were left unexplained (typically the slightly out-of-order jump of error in S), and the authors should provide more explanation regarding this discussion.
# Discussion section
Given the above modifications, the authors should reshape the discussion section to better insist on the limitations of their study. More precisely, reminding what was known or not by the participants, and to which extent different variables were tested or not (such as the fact that only one W2 value was tested). Also, the authors should clearly acknowledge that future work is needed to confirm that the model would work in a similar way with other use cases, such as racing games or HMD interaction.
# Other recommendations
- Complete or remove the claim regarding ""Bateman et al."" work.
- Fix the typo with (e.g.) in the abstract.
- Clarify the sentence ""Equation 14 can be simplified further"" as pointed out by R2.
- Explain what ""human online response skills"" refers to for readers who are not familiar with this concept.
- Update figure 1 to remove examples the study does not apply to, and highlight ""viewable forward distance"" on it."""
46,"""We're Here to Help: Company Image Repair and User Perception of Data Breaches""","['Privacy', 'Security', 'Storytelling']","""Data breaches involve information being accessed by unauthorized parties. Our research concerns user perception of data breaches, especially issues relating to accountability. A preliminary study indicated many people had weak understanding of the issues, and felt they themselves were somehow responsible. We speculated that this impression might stem from organizational communication strategies. We therefore compared texts from organizations with those from external sources, such as the news media. This suggested that organizations use well-known crisis communication methods to reduce their reputational damage, and that these strategies align with repositioning of the narrative elements involved in the story. We then conducted a quantitative study, asking participants to rate either organizational texts or news texts about breaches. The findings of this study were in line with our document analysis, and suggest that organizational communication affects the users' perception of victimization, attitudes in data protection, and accountability. Our study suggests some software design and legal implications to support users protecting themselves and developing better mental models of security breaches.""","""All the reviewers have agreed that the relevance of this work to Graphics Interface is very hard to justify. R3 has some concerns about the study design which should be addressed. R1 asks to clarify if participants got paid and gives some further minor suggestions to improve the paper."""
47,"""Determination and quantitative description of hollow body in point cloud""","['point cloud', 'voxel', 'connectivity', 'hollow body']","""When volume 3d display system deals with point cloud data with hollow bodies, the hollow body area cannot be determined correctly causing the lack of color information in responding area. Existing researches lack a solution to determine hollow bodies. This paper firstly gives a quantitative description of hollow body and defines a set of parameters to describe size,shape,position of hollow bodies. Then this paper proposes a voxel connectivity regional-growth hollow body determination algorithm(VCRHD) to determine the hollow bodies in 3D point cloud. The algorithm has two phases. The first phase is to use a small amount of voxels to realizes the voxelization of the point cloud and calculate the approximate volume ratio of hollow bodies based on the voxel connectivity regional-growth principle. Then this paper uses the approximate volume ratio to determine the optimal number of voxels based on the experimental result. The second phase is to use the optimal number of voxels to determine the hollow bodies and calculate hollow body parameters which is proved to be efficient and accurate. In addition,this paper establishes a data set containing 287 different point cloud files in 7 different categories to test algorithm. The experimental results prove the feasibility of the algorithm. Finally,this paper analyzes the limitations of the algorithm and looks forward to the application prospect in the future.""","""All reviewers agree that there are major clarity issues in this paper. The concept of ""hollow body"", which is the central point of this paper, is defined only informally. The motivation for this definition remained unclear. The overall motivation of the paper is partly unclear, partly questionable. 
The technical and algorithmic aspects are mostly straightforward, and no significant novel insights are provided; it remained unclear what could be considered the actual contribution. Reproducibility is limited because parameter choices are not discussed."""
48,"""Gedit: Keyboard Gestures for Mobile Text Editing""","['Text entry', 'text editing', 'gestures', 'touch screen', 'ring gesture', 'smartphone', 'mobile devices']","""Text editing on mobile devices can be a tedious process. To perform various editing operations, a user must repeatedly move his or her fingers between the text input area and the keyboard, making multiple round trips and breaking the flow of typing. In this work, we present Gedit, a system of on-keyboard gestures for convenient mobile text editing. Our design includes a ring gesture and flicks for cursor control, bezel gestures for mode switching, and four gesture shortcuts for copy, paste, cut, and undo. Variations of our gestures exist for one and two hands. We conducted an experiment to compare Gedit with the de facto touch+widget based editing interactions. Our results showed that Gedits gestures were easy to learn, 24% and 17% faster than the de facto interactions for one- and two-handed use, respectively, and preferred by participants.""","""All reviewers are in agreement that this paper represents an innovative approach to cursor placement and editing during text entry on soft keyboards. While R3 has few concerns -- mainly of a minor typographical nature -- other reviewers note some potential problems with the experimental validation, including (R2): - Lack of undo - Counterbalancing - Baseline (Fuccella's might have been better) My own read of the paper leads me to believe that baseline could be addressed by a small expansion of the discussion of the shortcomings of Fuccella's work in regards to modern soft keyboards and WGKs, that the edit to correct for overstatement is a minor revision, and that the lack of undo, while a fair comment, does not rise to the level of invalidating the potential contribution of this work. R2's concerns about technique clarification, and R1's concerns regarding moding, can primarily be addressed via an editing pass. 
I encourage the authors to revise their manuscript as needed to address these reviewer concerns. """
49,"""Walking within a Crowd Full of Virtual Characters: Effects of Virtual Character Appearance on Human Movement Behavior""","['virtual crowd', 'human-crowd interaction', 'character appearance', 'human movement behavior', 'virtual reality']","""This paper is a study on the effects that a virtual crowd composed of virtual characters with different appearance has on human motion in a virtual environment. The study examines five virtual crowd conditions that include the following virtual characters: neutral, realistic, cartoon, zombies, and fantasy virtual characters. Participants were instructed to cross a virtual crosswalk and each time, one of the examined crowd conditions shown. The movement behavior of participants was captured and objectively analyzed based on four measurements (speed, deviation, trajectory length, and interpersonal distance). From the results, it was found that the appearance of the virtual characters significantly affected the movement behavior of participants. Specifically, participants walked slower when exposed to a virtual crowd composed of cartoon characters and faster when exposed to fantasy characters. It was also found participants deviated more when exposed to a crowd composed of fantasy characters compared to a crowd composed of cartoon and zombie characters. Finally, the interpersonal distance between participants and fantasy characters was significantly greater compared to human and zombie virtual characters. Our findings, limitations and future directions are discussed in the paper.""","""Based on the reviews, the reviewers are unanimous that the manuscript is not quite ready to be published. In general, while the reviewers thought the idea was interesting and straightforward, they found the work to be poorly motivated. I'd recommend the authors examine these reviews for more details. """
50,"""AnimationPak: Packing Elements with Scripted Animations""","['Element Distribution', 'Packing', 'Physical Simulation', 'Shape Deformation', 'Animation', 'Spacetime domain']","""We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.""","""This paper presents a method for animating 2D shapes packed into a container space. Reviews were split, with two of three recommending acceptance. The majority view saw the work as solid and clearly explained. Reviewers raised concerns about animation quality and control, and revisions could expand the discussion on these issues, space permitting. Some design decisions could be further explained. Pros: - clear improvement over past results on animation packings - thorough explanation of technical aspects Cons: - animations are not convincing to all viewers - unclear to easy it is to control the animations; possibly linked to above issue - some design decisions lack clear motivation """
51,"""Learning Multiple Mappings: an Evaluation of Interference, Transfer, and Retention with Chorded Shortcut Buttons""","['Augmented interaction', 'modes', 'chording interfaces']","""Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, there is still little known about the effects of learning multiple sets of augmented interactions that are mapped to different applications. To better understand whether multiple command mappings can interfere with one another, or affect transfer and retention, we developed a prototype with three pushbuttons on a smartphone case that can be used to provide augmented input to the system. The buttons can be chorded to provide seven possible shortcuts or transient mode switches. We mapped these buttons to three different sets of actions, and carried out a study to see if multiple mappings affect learning and performance, transfer, and retention. Our results show that all of the mappings were quickly learned and there was no reduction in performance with multiple mappings. Transfer to a more realistic task was successful, although with a slight reduction in accuracy. Retention after one week was initially poor, but expert performance was quickly restored. Our work provides new information about the design and use of chorded buttons for augmenting input in mobile interactions.""","""Scores for this submission were somewhat divergent (with R2 suggesting borderline reject and R1&3 suggesting accept). All reviewers agree that the investigation is very thorough. It considers the learning of chord commands on 3-side buttons on the side of a phone: in particular it studies learning performance when overloading the mappings (ie using them for different applications), then tests them in usage tasks that are realistic, and finally considers their memorability. 
All reviewers found the paper well written and the topic of interest to the HCI community. The main concerns from the most negative reviewer are the lack of a comparison to a baseline (using traditional icons + menus) and the somewhat low accuracy of the approach (80%). These are indeed valid concerns. As the paper considers for the first time the question related to learning interference and memorization of chords, there is arguably enough novelty without a baseline comparison. As for the accuracy, it is low and this should be acknowledged (although it seems that participants' responses were occasionally correctly memorised, but not correctly detected, so error rates may be a bit inflated). Given the novelty of the question asked and the good study design and reporting, I would recommend accepting the paper. The following list of changes/clarifications would improve the paper: - Consider whether order may affect/interfere with memorisation (R1) - if it is possible to add these results it would be great, else consider discussing this (in limitations/discussion). - Explain choice of augmentation (R1). Moreover, if possible, explain why it was not tested against a baseline (R2) or other augmentations (R1) - at the very least acknowledge these as limitations. - Explain choice of 200 msec (R1,3) - Comment on accuracy limitations (R2) and fix accuracy reported in discussion (R1). - Adjust the language a bit in the abstract + intro (R1) when it comes to reporting findings."""
52,"""Yarn: Adding Meaning to Shared Personal Data through Structured Storytelling""","['Personal data', 'personal informatics', 'sharing', 'storytelling', 'authoring', 'self-tracking', 'social networking sites']","""People often do not receive the reactions they desire when they use social networking sites to share data collected through personal tracking tools like Fitbit, Strava, and Swarm. Although some people have found success sharing with close connections or in finding online communities, most audiences express limited interest and rarely respond. We report on findings from a human-centered design process undertaken to examine how tracking tools can better support people in telling their story using their data. 23 formative interviews contribute design goals for telling stories of accomplishment, including a need to include relevant data. We implement these goals in Yarn, a mobile app that offers structure for telling stories of accomplishment around training for running races and completing Do-It-Yourself projects. 21 participants used Yarn for 4 weeks across two studies. Although Yarns structure led some participants to include more data or explanation in the moments they created, many felt like the structure prevented them from telling their stories in the way they desired. In light of participant use, we discuss additional challenges to using personal data to inform and target an interested audience.""","""The reviewers appreciated the paper. They found the design study to be well conducted and insightful, but lacking to clear take-aways. The main point to tackle while finalising the paper is to better distill the most interesting findings from the deployment. R2 suggests to put forward the most surprising ones and R3 notes the contributions s/he found most relevant. 
The reviewers further encourage the authors to improve the visuals related to the deployment: - A table or visual showing when and how often the different templates were used in a story (possibly per participant). - A figure to clarify the deployment. - Reduce the importance of the deployment tables Both R2 and R1 suggest relevant papers on data storytelling and design research. Finally, while this may be more challenging, there is a wealth of literature on generalizability in Design Research (see R for a good entry point); considering this literature could help frame the contribution and its insights in a more situated and reflexive manner, rather than seeking generalisable findings."""
53,"""Local Editing of Cross-Surface Mappings with Iterative Least Squares Conformal Maps""","['Conformal mapping', 'Cross-surface mapping', 'Non-rigid registration', 'Injective mapping']","""In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-distortion region-growing process based on flattening the segmented parts; (iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and (iv) aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new method to deform the mesh in order to meet constraints (in our case, the landmark alignment of phase (iii)). We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects.""","""The three reviewers found that the paper makes a small contribution within the domain on cross-surface mapping, however the paper would have benefited from deeper evaluation / verification. 
There are also a number of questions brought up by the reviewers that should be addressed before publication (e.g., how initial regions are selected, where the bound on the geodesic distance is, if an iterative growing region strategy was considered, the energies tested in SLIM, if the resulting mappings are smooth near patch boundaries, and so on). In light of the review scores and concerns, it appears that this paper is slightly above the bar and may be acceptable for publication."""
54,"""Interactive Design of Gallery Walls via Mixed Reality""","['Design interfaces', 'mixed reality', 'spatial computing']","""We present a novel interactive design tool that allows users to create and visualize gallery walls via a mixed reality device. To use our tool, a user selects a wall to decorate and chooses a focal art item. Our tool then helps the user complete their design by optionally recommending additional art items or automatically completing both the selection and placement of additional art items. Our tool holistically considers common design criteria such as alignment, color, and style compatibility in the synthesis of a gallery wall. Through a mixed reality device, such as a Magic Leap One headset, the user can instantly visualize the gallery wall design in situ and can interactively modify the design in collaboration with our tool's suggestion engine. We describe the suggestion engine and its adaptability to users with different design goals. We also evaluate our mixed-reality-based tool for creating gallery wall designs and compare it with a 2D interface, providing insights for devising mixed reality interior design applications.""","""All reviewers agreed that the design and implementation approach is sound and described in good details, but have different opinions on problem specifications and motivation: R2 believes those are presented effectively, while R3 believes those need more clarity and completeness. R1 also pointed out a disproportion between the presented content and claimed contributions. Most reviewers expressed concerns about inclusion of sufficient related work, so as to ground design decision (R2) and differentiate with prior work (R3). A main weakness pointed out by all reviewers is that the evaluation is lacking, that it is sparse (R2) and does not provide enough insights (R1&3). 
At its current stage I think the paper is not ready for publication, but could be improved by stronger linkage with related work (R2&3), providing clarification in study procedures (R1,2&3), and a more in-depth evaluation to provide more information and insight (R1,2&3). """
55,"""Presenting Information Closer to Mobile Crane Operators Line of Sight: Designing and Evaluating Visualisation Concepts Based on Transparent Displays""","['human-machine interface', 'transparent display', 'mobile cranes', 'heavy machinery', 'design process']","""We have investigated safety information visualisation for mobile cranes operators utilising transparent displays, where the information can be presented closer to operators line of sight with minimum obstruction on operators view. The intention of the design is to help operators in acquiring supportive information provided by the machine without requiring them to divert their attention far from operational areas. We started the design process by reviewing mobile crane safety guidelines to determine which information that operators need to perform safe operations. Using the findings from the safety guidelines review, we conducted a design workshop to generate design ideas and visualisation concepts, and to delineate their appearances and behaviour based on the capability of transparent displays. We transformed the results of the workshop to a low-fidelity prototype, and then interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, as information will be presented closer to operators' line of sight, we need to be selective on what kind of information and how much information that should be presented to operators. However, all the operators appreciated having information presented closer to their line of sight, as an approach that has the potential to improve safety in their operations.""","""R1 and R2 agree that the paper has merits and can be improved to be well-above the acceptance threshold with small modifications. R3 brings up several good critiques for the work. I would recommend reading their feedback and incorporating the changes in the related work section. 
R1 and R3 both bring up the relevant area of AR and how that can not just enhance but bypass certain restrictions of transparent displays. Maybe adding a discussion around that point would be fruitful to have. R2 brings up some good questions for the visualization design rationales. Consider addressing them. """
56,"""epsilon-Rotation Invariant Euclidean Spheres Packing in Slicer3D""","['Rotation Invariant', 'Slicer3D', 'Sphere Packing', 'Distance Transformation', 'Stereotactic Radiosurgery']","""Sometimes SRS (Stereotactic Radio Surgery) requires using sphere packing on a Region of Interest (ROI) such as cancer to determine a treatment plan. We have developed a sphere packing algorithm which packs non-intersecting spheres inside the ROI. The region of interest in our case are those voxels which are identified as cancer tissues. In this paper, we analyze the rotational invariant properties of our sphere-packing algorithm which is based on distance transformations. Epsilon-Rotation invariant means the ability to arbitrary rotate the 3D ROI while keeping the volume properties remaining (almost) same within some limit of epsilon. The applied rotations produce spherical packing which remains highly correlated as we analyze the geometrically properties of sphere packing before and after the rotation of the volume data for the ROI. Our novel sphere packing algorithm has high degree of rotation invariance within the range of +/- epsilon. Our method used a shape descriptor derived from the values of the disjoint set of spheres form the distance-based sphere packing algorithm to extract the invariant descriptor from the ROI. We demonstrated by implementing these ideas using Slicer3D platform available for our research. The data is based on sing MRI Stereotactic images. We presented several performance results on different benchmarks data of over 30 patients in Slicer3D platform.""","""The reviewers agree that the problem statement is unclear; the paper lacks structure, focus, and motivation; the contribution is marginal; the discussion of and comparison with previous work is insufficient. 
The reviewers were not convinced that, considering the state of the art, the proposed use of voxel-based sphere packing to define a shape descriptor is a reasonable or fruitful idea, given its inherent instability."""
57,"""A Modular Interface for Multimodal Data Annotation and Visualization with Applications to Conversational AI and Commonsense Grounding""","['HCI', 'explainable AI', 'conversational AI', 'commonsense grounding', 'multimodal annotation', 'language and vision']","""Artificial Intelligence (AI) research, including machine learning, computer vision, and natural language processing, requires large amounts of annotated data. The current research and development (R&D) pipeline involves each group collecting their own datasets using an annotation tool tailored specifically to their needs, followed by a series of engineering efforts in loading other external datasets and developing their own interfaces, often mimicking some components of existing annotation tools. We present a modular annotation, visualization, and inference software framework for computational language and vision research. Our framework enables researchers to set up a web interface for efficiently annotating language and vision datasets, visualizing the predictions made by a machine learning model, and interacting with an intelligent system. In addition, the tool accommodates many of the standard and popular visual annotations such as bounding boxes, segmentation, landmark points, temporal annotation and attributes, as well as textual annotations such as tagging and free form entry. These annotations are directly represented as nodes and edges as part of the graph module, which allow linking visual and textual information. Extensible and customizable as required by individual projects, the framework has been successfully applied to a number of research efforts in human-AI collaboration, including commonsense grounding of language and vision, conversational AI, and explainable AI.""","""I want to thank the authors for their submission. The manuscript has received three high quality reviews with two reviewers arguing for reject and one reviewer to accept the paper. 
The reviewers see merit in the tool (R1,R3), and recognize the underlying engineering efforts (R2). However, the reviewers also highlight several limitations of the work. The main issues, highlighted by the reviewers, are: - The paper lacks external validation, grounded in HCI methodology (R1, R2). - The problem is not sufficiently established (R1), and overgeneralized (R2) - The value-proposition of the proposed solution needs to be strengthened (R1) - Missing background literature on labeling to position the work clearly in the literature (R2) - The specificity in terminology, especially around XAI, needs improvement (R2) R1 and R2 agree that the lack of validation renders many claims in the manuscript unsupported. Considering the outlined limitations of the presentation, I would suggest clearly identifying the gap in the literature (R1, R2), strengthening the value-proposition of the presented solution (R1), and ensuring that claims around potential increases of labeling efficiency are supported through user studies (R1, R2). In summary, while the work has potential, the issue is not new and has seen lots of potential solutions, resulting in a high bar to establish usefulness beyond what already exists and to be perceived as novel. Keeping novelty and usefulness in mind, the manuscript is not ready for publication. """
58,"""Unlimiting the Dual Gaussian Distribution Model to Predict Touch Accuracy in On-screen-start Pointing Tasks""","['Dual Gaussian distribution model', 'touchscreens', 'finger input', 'pointing', 'graphical user interfaces']","""The dual Gaussian distribution hypothesis has been utilized to predict the success rate of target acquisition in finger touching. Bi and Zhai limited the applicability of their success-rate prediction model to off-screen-start pointing. However, we found that their doing so was theoretically over-limiting and their prediction model could also be used to on-screen-start pointing operations. We discuss the reasons why and empirically validate our hypothesis in a series of four experiments with various target sizes and distances. Bi and Zhai's model showed high prediction accuracy in all the experiments, with 10% prediction error at worst. Our theoretical and empirical justifications will enable designers and researchers to use a single model to predict success rates regardless of whether users mainly perform on- or off-screen-start pointing and automatically generate and optimize UI items on apps and keyboards.""","""All reviewers appreciated the comprehensive set of experiments, and effort in replicating previous findings. Reviews were mixed, ranging 5-7. Each reviewer brought forward different issues: - R2 disagrees that A should be taken out of a pointing model; they point that the lack of effect could be due to experimental conditions, and to other parameters that could affect the studied phenomenon. - R3 would have appreciated a more thorough description of Bi et al.'s initial rationale for excluding screen-to-screen pointing from their model. R1 makes a similar point: this argument is not explained clearly in this submission, which makes its own argument difficult to assess. This is problematic when it is the core novelty of the paper. - R1 criticizes the ""age"" argument, also mentioned in R3's review. 
This was not a straightforward decision. I think that each reviewer raises relevant points. Most of them could be solved with a reasonable amount of extra work and/or careful discussion, but put together I believe they amount to something that would be difficult to address in a single minor pass. I strongly encourage the authors to resubmit this work once these points have been addressed or discussed."""
59,"""RealNodes: Interactive and Explorable 360 pseudo-formula VR System with Visual Guidance User Interfaces""","['Immersive / 360° video', '3D user interaction', 'Nonfatiguing 3DUIs', 'Locomotion and navigation', '3DUI metaphors', 'Computer graphics techniques']","""Emerging research expands the idea of using 360-degree panoramas of the real-world for 360 VR experiences beyond video and image viewing. However, most of these are strictly guided, with few opportunities for interaction or exploration. There is a desire for experiences with cohesive virtual environments with choice in navigation, versus scripted experiences with limited interaction. Unlike standard VR with the freedom of synthetic graphics, there are challenges in designing user interfaces (UIs) for 360 VR navigation within the limitations of fixed assets. We designed RealNodes, a novel software system that presents an interactive and explorable 360 VR environment. We also developed four visual guidance UIs for 360 VR navigation. The results of a comparative study of these UIs determined that choice of user interface (UI) had a significant effect on task completion times, showing one of the methods, Arrow, was best. Arrow also exhibited positive but non-significant trends in preference, user engagement, and simulator-sickness. RealNodes and the comparative study contribute preliminary results that inspire future investigation of how to design effective visual guidance metaphors for navigation in applications using novel 360 VR environments.""","""This submission is very much a borderline one, making it very hard for me to make a clear recommendation. R1 and R2 tended towards rejection due to limitations with the study design and lack of clarity in terms of contribution. In contrast, R3 argues for acceptance due to the novel and exciting overall idea (but also criticises the study design). 
Overall, being forced to make a choice, I am recommending rejection, however I encourage the authors to iterate on and continue this work: with a clarified contribution and a more polished study, I do think that this work would benefit this (and related) research communities."""
60,"""Text Input in Virtual Reality Using a Tracked Drawing Tablet ""","['Text Entry', 'VR', '3D UI']","""We present an experiment evaluating the effectiveness of a tracked drawing tablet for use in virtual reality (VR) text input. Participants first completed a text input pre-test, entering several phrases using a physical keyboard. Participants then entered text in VR using an HTC Vive, with a tracker mounted on a drawing tablet with a QWERTY soft keyboard overlaid on the virtual tablet. This was similar to text input using stylus-supported mobile devices. Our results indicate that not only did participants prefer the Vive controller, it also offered superior entry speed (16.31 wpm vs. 12.79 wpm with the tablet and stylus) and error rates (4.1% vs. 6.4%). Pre-test scores were also correlated to measured entry speeds, and reveal that user typing speed on physical keyboards provides a modest predictor of VR text input speed (R2 of 0.6 for the Vive controller, 0.45 for the tablet). ""","""All reviews found the work interesting and timely, study design to be mostly correct, and literature review commendable. However, all reviews ask about the contribution of the work given that neither of the text entry techniques are novel. Although, conducting empirical studies that compare different text entry techniques is valuable (even if they are reproducibility studies), all reviews point to issues with the tablet_stylus technique that might have disadvantaged it in various ways. Also, reviews point to potential issues with statistical analysis. Thus, all reviews recommend rejecting this submission at this time."""
61,"""Target Acquisition for Handheld Virtual Panels in VR""","['VR', 'Target Selection', 'Pointing']","""The Handheld Virtual Panel (HVP) is the virtual panel attached to the non-dominant hands controller in virtual reality (VR). The HVP is the go-to technique for enabling menus and toolboxes in VR devices. In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach. Our results show that all four factors have significant effects on user performance. Based on the results, we propose guidelines towards the ergonomic and performant design of the HVP interfaces.""","""On the positive note, the reviewers see a study on the ergonomics of handheld virtual panels in VR applications as valuable. However, 2 of 3 Reviewers found the experimental design to be unclear and have issues, rendering the results unreliable. I would refer the authors to the reviews for details on how they can improve this work for re-submission. """
62,"""Effects of Visual Distinctiveness on Learning and Retrieval in Icon Toolbars""","['icon design', 'visual consistency', 'learnability', 'selection']","""Learnability is important in graphical interfaces because it supports the user's transition to expertise. One aspect of GUI learnability is the degree to which the icons in toolbars and ribbons are identifiable and memorable, but current flat and subtle designs that promote strong visual consistency could hinder learning by reducing visual distinctiveness within a set of icons. Little is known, however, about the effects of the visual distinctiveness of icons on selection performance and memorability. To address this gap, we carried out two studies using several icon sets with different degrees of visual distinctiveness, and compared how quickly people could learn and retrieve the icons. Our first study found no evidence that increasing colour or shape distinctiveness improved learning, but found that icons with concrete imagery were easier to learn. Our second study found similar results: there was no effect of increasing either colour or shape distinctiveness, but there was again a clear improvement for icons with recognizable imagery. Our results show that visual characteristics appear to affect UI learnability much less than the meaning of the icons' representations.""","""This paper received three high quality reviews, all of which expressed appreciation for the work and recommended that the paper be accepted. As such, my recommendation is that the paper be accepted. Though all of the reviews were positive, the reviewers made a number of recommendations on how to improve the paper. All of these are minor and would be easy to do in the revision cycle, so I recommend the authors integrate them into the submission. 
The authors can check the individual reviews for details, but a short summary of the recommended changes is as follows: - R1 suggests some text to revisit the enumerated hypotheses (H1, H2, ...) in the results or discussion sections. - R1 made some small recommendations on presentation. - R2 asked for more detail or justification on why the task is a reasonable proxy for the task of using icons in software. - R2 asked whether the positions of the icons (and targets) were randomized between participants. - R2 asked if there was any prior research that informed the breakdown of types/levels of meaning and shape/color distinctiveness. If there is, it would be good to mention and cite it. - R2 asked if there was more qualitative feedback from participants, which might give deeper insights into their behavior, and also asked that the number of participants citing particular themes in qualitative components should be added (e.g., ""Several (#) participants stated that""). - R3 expressed surprise that little is known about the effect of visual distinctiveness on GUI usability and learnability, and recommended a few papers that might be worth reviewing and potentially integrating into the Related Work, or used to further contextualize the paper's findings. - R3 asked for more details on the background of the participants in the study, and whether the same participants were used in both studies."""
63,"""The Impact of Presentation Style on Human-In-The-Loop Detection of Algorithmic Bias""","['algorithmic bias', 'machine learning fairness', 'lab study']","""While decision makers have begun to employ machine learning, machine learning models may make predictions that are biased against certain demographic groups. Semi-automated bias detection tools often present reports of automatically-detected biases using a recommendation list or visual cues. However, there is a lack of guidance concerning which presentation style to use in which scenarios. We conducted a small lab study with 16 participants to investigate how presentation style might affect user behaviors in reviewing bias reports. Participants used both a prototype with a recommendation list and a prototype with visual cues for bias detection. We found that participants often wanted to investigate the performance measures that were not automatically detected as biases. Yet, when using the prototype with a recommendation list, they tended to give less consideration to such measures. Grounded in the findings, we propose information load and comprehensiveness as two axes for characterizing bias detection tasks and illustrate how the two axes could be adopted to reason about when to use a recommendation list or visual cues.""","""All reviewers agree that the paper addresses the timely topic of algorithmic bias detection. The problem is well-motivated and sufficiently outlined. The paper is very well written with adequate details about motivation and study design."""
64,"""Scope and Impact of Visualization in Training Professionals in Academic Medicine""","['Visualization in Education', 'Qualitative Evaluation', 'Task and Requirements Analysis', 'Visual Design', 'Visualization System and Toolkit Design']","""Professional training often requires need-based scheduling and observation-based assessment. In this paper, we present a visualization platform for managing such training data in a medical education domain, where the learners are resident physicians and the educators are certified doctors. The system was developed through four focus groups with the residents and their educators over six major development iterations. We present how the professionals involved, the nature of training, the choice of display devices, and the overall assessment process influenced the design of the visualizations. The final system was deployed at the department of emergency medicine, and evaluated by both the residents and their educators in an uncontrolled longitudinal study. Our analysis of four months of user logs revealed interesting usage patterns consistent with real-life training events and showed an improvement in several key learning metrics when compared to historical values during the same study period. The users' feedback showed that both educators and residents found our system to be helpful in real-life decision making. ""","""The reviewers are in agreement that this is a well-motivated paper and should be accepted. As R1 mentioned, the contribution does not lie in a novel visualization but rather in the process, insights, and patterns learned during the design and evaluation process. The reviewers also agreed that the design implications section lacked depth. This is the one area with the biggest scope for improvement. The reviewers have offered suggestions for different approaches to addressing this shortcoming. 
Some other comments worth highlighting: R1 has raised some concerns regarding how the 5 questions were selected and would like added details regarding the process. R2 would like some discussion around the prior approach or set-up that this system replaced. R3 has provided detailed feedback on minor changes which will improve the overall readability of the paper and can be accomplished prior to the camera ready submission."""
65,"""A Baseline Study of Emphasis Effects in Information Visualization""","['Human-centered computing', 'Visualization', 'Visualization techniques', 'Perception', 'Visualization design and evaluation methods']","""Emphasis effects (visual changes that make certain elements more prominent) are commonly used in information visualization to draw the user's attention or to indicate importance. Although theoretical frameworks of emphasis exist (that link visually diverse emphasis effects through the idea of visual prominence compared to background elements), most metrics for predicting how emphasis effects will be perceived by users come from abstract models of human vision which may not apply to visualization design. In particular, it is difficult for designers to know, when designing a visualization, how different emphasis effects will compare and what level of one effect is equivalent to what level of another. To address this gap, we carried out two studies that provide empirical evidence about how users perceive different emphasis effects, using three visual variables (colour, size, and blur/focus) and eight strength levels. Results from gaze tracking, mouse clicks, and subjective responses show that there are significant differences between visual variables and between levels, and allow us to develop an initial understanding of perceptual equivalence. We developed a model from the data in our first study, and used it to predict the results in the second; the model was accurate, with high correlations between predictions and real values. 
Our studies and empirical models provide valuable new information for designers who want to understand and control how emphasis effects will be perceived by users.""","""The reviewers are generally in favour of much of the motivation of this paper, but the two most expert reviewers express common concerns about the lack of grounding in previous research (of which there is a substantial amount) and about issues in the experimental design and subsequent analysis. For these reasons I recommend rejection in this cycle. I encourage the authors to build on the strengths of the paper, take the reviewers' concerns into account, and consider the recommended major rewrite for a later revision."""