{"forum": "B14rPj0qY7", "submission_url": "https://openreview.net/forum?id=B14rPj0qY7", "submission_content": {"title": "RETHINKING SELF-DRIVING : MULTI -TASK KNOWLEDGE FOR BETTER GENERALIZATION AND ACCIDENT EXPLANATION ABILITY", "abstract": "Current end-to-end deep learning driving models have two problems: (1) Poor\ngeneralization ability of unobserved driving environment when diversity of train-\ning driving dataset is limited (2) Lack of accident explanation ability when driving\nmodels don\u2019t work as expected. To tackle these two problems, rooted on the be-\nlieve that knowledge of associated easy task is benificial for addressing difficult\ntask, we proposed a new driving model which is composed of perception module\nfor see and think and driving module for behave, and trained it with multi-task\nperception-related basic knowledge and driving knowledge stepwisely. Specifi-\ncally segmentation map and depth map (pixel level understanding of images) were\nconsidered as what & where and how far knowledge for tackling easier driving-\nrelated perception problems before generating final control commands for difficult\ndriving task. The results of experiments demonstrated the effectiveness of multi-\ntask perception knowledge for better generalization and accident explanation abil-\nity. With our method the average sucess rate of finishing most difficult navigation\ntasks in untrained city of CoRL test surpassed current benchmark method for 15\npercent in trained weather and 20 percent in untrained weathers.", "keywords": ["Autonomous car", "convolution network", "image segmentation", "depth estimation", "generalization ability", "explanation ability", "multi-task learning"], "authorids": ["mr.zhihao.li@gmail.com", "motoyoshi@idr.ias.sci.waseda.ac.jp", "ssk.sasaki@suou.waseda.jp", "ogata@waseda.jp", "sugano@waseda.jp"], "authors": ["Zhihao LI", "Toshiyuki MOTOYOSHI", "Kazuma SASAKI", "Tetsuya OGATA", "Shigeki SUGANO"], "TL;DR": "we proposed a new self-driving model which is composed of perception module for see and think and driving module for behave to acquire better generalization and accident explanation ability.", "pdf": "/pdf/71d4407475c16d3cd272915f752a8b6af7ce9efc.pdf", "paperhash": "li|rethinking_selfdriving_multi_task_knowledge_for_better_generalization_and_accident_explanation_ability", "_bibtex": "@misc{\nli2019rethinking,\ntitle={{RETHINKING} {SELF}-{DRIVING} : {MULTI} -{TASK} {KNOWLEDGE} {FOR} {BETTER} {GENERALIZATION} {AND} {ACCIDENT} {EXPLANATION} {ABILITY}},\nauthor={Zhihao LI and Toshiyuki MOTOYOSHI and Kazuma SASAKI and Tetsuya OGATA and Shigeki SUGANO},\nyear={2019},\nurl={https://openreview.net/forum?id=B14rPj0qY7},\n}"}, "submission_cdate": 1538087773297, "submission_tcdate": 1538087773297, "submission_tmdate": 1545355419110, "submission_ddate": null, "review_id": ["BylLHPgO6Q", "BJlG-jX5nX", "BylEIZsUhX"], "review_url": ["https://openreview.net/forum?id=B14rPj0qY7¬eId=BylLHPgO6Q", "https://openreview.net/forum?id=B14rPj0qY7¬eId=BJlG-jX5nX", "https://openreview.net/forum?id=B14rPj0qY7¬eId=BylEIZsUhX"], "review_cdate": [1542092605528, 1541188346049, 1540956491805], "review_tcdate": [1542092605528, 1541188346049, 1540956491805], "review_tmdate": [1542092605528, 1541534146684, 1541534146480], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": 
["B14rPj0qY7", "B14rPj0qY7", "B14rPj0qY7"], "review_content": [{"title": "The paper showed the benefit of the proposed multi-task architecture, but not novel enough for ICLR level", "review": "This paper presents one end-to-end multi-task learning architecture for depth & segmentation map estimation and the driving prediction. The whole architecture is composed of two components, the first one is the perception module (segmentation and depth map inference), the second one is the driving decision module. The training process is sequential, initially train the perception module, then train the driving decision task with freezing the weights of the perception module. The author evaluated the proposed approach on one simulated dataset, Experimental results demonstrated the advantage of multi-task compared to the single task. \n\nAdvantages:\nThe pipeline is also easy to understand, it is simple and efficient based on the provided results.\nThe proposed framework aims to give better understanding of the application of deep learning in self-driving car project. Such as the analysis and illustration in Figure 3. \n\nQuestions:\nThere are several typos needed to be addressed. E.g, the question mark in Fig index of section 5.1. There should be comma in the second sentence at the last paragraph of section 5.2. \nMulti-task, especially the segmentation part is not novel for self-driving car prediction, such as Xu et al. CVPR\u2019 17 paper from Berkeley. The experiment for generalization shows the potential advancement, however, it is less convincing with the limited size of the evaluation data, The authors discussed about how to analyze the failure causes, however, if the perception learning model does not work well, then it would be hard to analyze the reason of incorrectly prediction.\n\nIn general, the paper has the merits and these investigations may be helpful for this problem, but it is not good enough for ICLR.\n\n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "The paper is not bad technically, but the contributions is not good enough", "review": "Major Contribution:\nThis paper details a method for a modified end-to-end architecture that has better generalization and explanation ability. The paper outlines a method for this, implemented using an autoencoder for an efficient feature extractor. By first training an autoencoder to ensure the encoder captures enough depth and segmentation information and then using the processed information as a more useful and compressed new input to train a regression model. The author claimed that this model is more robust to a different testing setting and by observing the output of the decoder, it can help us debug the model when it makes a wrong prediction.\n\nOrganization/Style:\nThe paper is well written, organized, and clear on most points. A few minor points:\n1) On page 5, the last sentence, there is a missing table number.\n2) I don't think the last part FINE-TUNE Test is necessary since there are no formal proofs and only speculations.\n\nTechnical Accuracy:\nThe problem that the paper is trying to address is the black-box problem in the end-to-end self-driving system.\nThe paper proposes a method by constructing a depth image and a segmentation mask autoencoder. 
Though the paper demonstrates that the method is effective in making the right prediction and that it has cause-explanation ability for possible prediction failures, I have a few points:\nThe idea makes sense, and the model will always perform better when the given input captures more relevant and saturated representations. The paper lists two important features: depth information and segmentation information. But other important features are missing. In other words, when the decoder performs badly, it means the encoder does not capture good depth and segmentation features, and it is then highly likely that the model performs badly as well. However, when the model performs badly, it does not necessarily mean the decoder will perform badly, since other information might be missing, for example, failure to detect objects, lane lines, traffic lights, etc.\n\nIn conclusion, the question is really how to get a good representation of a self-driving scene. I don't think designing two simple autoencoders for depth-image reconstruction and image segmentation is enough. It apparently works, but it is not good enough.\n\nAdequacy of Citations: \nGood coverage of the self-driving literature.", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "End-to-end driving with perceptual auxiliary tasks similar to Xu et al. CVPR'17", "review": "# Summary\n\nThis submission proposes a multi-task convolutional neural network architecture for end-to-end driving (going from an RGB image to controls), evaluated using the CARLA open-source simulator. The architecture consists of an encoder and three decoders on top: two for perception (depth prediction and semantic segmentation), and one for driving-control prediction. The network is trained in a two-step supervised fashion: first training the encoder and perception decoders (using depth and semantic segmentation ground truth), second freezing the encoder and training the driving module (imitation learning on demonstrations). The network is evaluated on the standard CARLA benchmark, showing better generalization performance in new driving conditions (town and weather) compared to the CARLA baselines (modular pipeline, imitation learning, RL). Qualitative results also show that failure modes are easier to interpret by looking at predicted depth maps and semantic segmentation results.\n\n\n# Strengths\n\nSimplicity of the approach: the overall architecture described above is simple (cf. Figure 1), combining the benefits of the modular and end-to-end approaches into a feed-forward CNN. The aforementioned two-stage learning algorithm is also explained clearly. Predicted depth maps and semantic segmentation results are indeed more interpretable than attention maps (as traditionally used in end-to-end driving).\n\nEvaluation of the driving policy: the evaluation is done with actual navigation tasks using the CARLA (CoRL'17) benchmark, instead of just offline behavior-cloning accuracy (often used in end-to-end driving papers, easier to overfit to, and not guaranteed to transfer to actual driving).\n\nSimple ablative analysis: Table 2 quantifies the generalization performance benefits of pretraining and freezing the encoder on perception tasks (esp. 
going from 16% to 62% of completed episodes in the new-town-and-weather dynamic navigation scenario).\n\n\n# Weaknesses\n\n## Writing\n\nI have to start with the most obvious one. The paper is littered with typos and grammatical errors (way too many to list). For instance, the usage of \"the\" and \"a\" is almost non-existent. Overall, the paper is really hard to read and needs a thorough pass of proof-reading and editing. Also, please remove the acknowledgments section: I think it is borderline breaking the double-blind submission policy (I don't know these persons, but if I did, that would be a breach of ICLR submission policy). Furthermore, I think its contents are not very professional for a submission at a top international academic venue, but that is just my opinion. \n\n\n## Novelty\n\nThis is the main weakness for me. The architecture is very close to at least the following works:\n- Xu, H., Gao, Y., Yu, F. and Darrell, T., End-to-end learning of driving models from large-scale video datasets (CVPR'17): this reference is missing from the paper, whereas it is very closely related, as it also shows the benefit of a segmentation decoder on top of a shared encoder for end-to-end driving (calling it privileged training);\n- Codevilla et al.'s Conditional Imitation Learning (ICRA'18): the only novelty in the current submission w.r.t. CIL is the addition of the depth and segmentation decoders;\n- M\u00fcller, M., Dosovitskiy, A., Ghanem, B., & Koltun, V., Driving Policy Transfer via Modularity and Abstraction (CoRL'18): the architecture also uses a shared perception module and segmentation (although in a mediated way instead of as an auxiliary task) to show better generalization performance (including from sim to real).\n\nAdditional missing related works include:\n- Kim, J. and Canny, J.F., Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention (ICCV'17): uses post-hoc attention interpretation of \"black box\" end-to-end networks;\n- Sauer, A., Savinov, N. and Geiger, A., Conditional Affordance Learning for Driving in Urban Environments (CoRL'18): also uses a perception module in the middle of the CIL network, showing better generalization performance in CARLA (although a bit lower than the results in the current submission).\n- Pomerleau, D.A., ALVINN: An autonomous land vehicle in a neural network (NIPS'89): the landmark paper for end-to-end driving with neural networks!\n\n\n## Insights / significance\n\nIn light of the aforementioned prior art, I believe the claims are correct but already reported in other publications in the community (cf. references above). In particular, the proposed approach uses a lot more strongly labeled data (depth and semantic segmentation supervision in a dataset of 40,000 images) than the competing approaches mentioned above. For instance, the modular pipeline in the original CARLA paper uses only 2,500 labeled images, and I am sure its performance would be vastly improved with 40,000 images, but this is not evaluated; hence the comparison in Table 1 is, in my opinion, unfair. 
This matters because the encoder in the proposed method is frozen after training on the perception tasks, and the main point of the experiments is to convince the reader that this results in a strong (fixed) intermediate representation, which is in line with the aforementioned works doing mediated perception for driving.\n\nThe fine-tuning experiments also confirm what is known in the literature, namely that simple fine-tuning can lead to catastrophic forgetting (Table 3).\n\nFinally, the qualitative evaluation of failure cases (5.3) leads to a trivial conclusion: a modular approach is indeed more interpretable than an end-to-end one. This is actually by design and is the main advocated benefit of modular approaches: failure in the upstream perception module yields failure in the downstream driving module that builds on top of it. As the perception module, by design, outputs a human-interpretable representation (e.g., a semantic segmentation map), this leads to better interpretability overall.\n\n\n## Reproducibility\n\nThere are not enough details in Section 3.1 about the deep-net architecture to enable re-implementation (\"structure similar to SegNet\", no detailed description of the number of layers, non-linearities, number of channels, etc.).\n\nWill the authors release the perception training dataset collected in CARLA described in Section 4.2?\n\n\n\n# Recommendation\n\nAlthough the results of the proposed multi-task network on the CARLA driving benchmark are good, this is probably due to using almost two orders of magnitude more labeled data for semantic segmentation and depth prediction than prior works (which is only practical because the experiments are done in simulation). Prior work has confirmed that combining perception tasks like semantic segmentation with end-to-end driving networks yields better performance, including using a strongly related approach (Xu et al.). In addition to the lack of novelty or new insights, the writing needs serious attention.\n\nFor these reasons, I believe this paper is not suitable for publication at ICLR.", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["SylJKf0bqm"], "comment_cdate": [1538544247068], "comment_tcdate": [1538544247068], "comment_tmdate": [1538544247068], "comment_readers": [["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper262/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}], "comment_content": [{"title": "Clarification", "comment": "Firstly, thanks for your comments. I agree that the performance gain is mainly due to the extra segmentation and depth map annotations, which we refer to as 'multi-task knowledge' in the manuscript. \n\nRegarding your points of confusion:\n1. The perception training dataset and the driving training dataset were not collected simultaneously, which means we do not have 'input RGB image, segmentation map, depth map, driving controls' tuples for training the driving module. There are two reasons for this: firstly, there are many published real-world driving datasets consisting of RGB images and driving commands that could be reused, but only a few of them have corresponding segmentation and depth maps. Secondly, we wanted to focus on the effectiveness of 'multi-task knowledge' rather than a 'new combination of driving training datasets'. 
\n\nWe tried to fine-tune the whole model after each module was trained, using iterative training on the 'perception dataset' and the 'driving dataset'; however, we did not get better results. \n\n2. We used binary cross-entropy for the depth loss: we normalized the depth to 0-1 and treated it as a two-category classification during training, and found that this worked better than other losses such as MSE.\n"}], "comment_replyto": ["SJeblljgqm"], "comment_url": ["https://openreview.net/forum?id=B14rPj0qY7&noteId=SylJKf0bqm"], "meta_review_cdate": 1544050340137, "meta_review_tcdate": 1544050340137, "meta_review_tmdate": 1545354496026, "meta_review_ddate": null, "meta_review_title": "Simple design to address generalizability and interpretability, but needs more work", "meta_review_metareview": "The paper presents a unified system for perception and control that is trained in a step-wise fashion, with visual decoders to inspect scene parsing and understanding. Results demonstrate improved performance under certain conditions, but reviewers raise several concerns that must be addressed before the work can be accepted.\n\nReviewer Pros:\n+ Simple, elegant design that is easy to understand\n+ Provides some insight into system behavior during failure conditions (error in perception vs. control)\n+ Improves performance under a subset of tested conditions\n\nReviewer Cons:\n- Concern about lack of novelty\n- Evaluation is limited in scope\n- References incomplete\n- Missing implementation details, hard to reproduce\n- Paper still contains many writing errors", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper262/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B14rPj0qY7&noteId=SJehiI0ryN"], "decision": "Reject"}
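The reviews and the author reply above describe a two-stage scheme: a shared encoder with depth and segmentation decoders is trained first, then frozen while a driving branch is trained by imitation, with the depth decoder supervised by binary cross-entropy on 0-1-normalized depth. The PyTorch sketch below is a minimal illustration of that scheme under those assumptions only; the module names (encoder, depth_decoder, seg_decoder, driving_head) and the L1 imitation loss are hypothetical, since the submission does not specify architectural details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDrivingNet(nn.Module):
    """Illustrative sketch: shared encoder, perception decoders, and a driving head."""

    def __init__(self, encoder, depth_decoder, seg_decoder, driving_head):
        super().__init__()
        self.encoder = encoder            # SegNet-like encoder; exact layers unspecified in the paper
        self.depth_decoder = depth_decoder
        self.seg_decoder = seg_decoder
        self.driving_head = driving_head  # maps encoder features to control commands

    def perception_loss(self, rgb, depth_gt, seg_gt):
        # Stage 1: train encoder + perception decoders on depth and segmentation labels.
        z = self.encoder(rgb)
        # Depth normalized to [0, 1] and supervised with binary cross-entropy,
        # as the authors state in their reply (reported to work better than MSE).
        depth_pred = torch.sigmoid(self.depth_decoder(z))
        depth_loss = F.binary_cross_entropy(depth_pred, depth_gt)
        seg_loss = F.cross_entropy(self.seg_decoder(z), seg_gt)
        return depth_loss + seg_loss

    def driving_loss(self, rgb, controls_gt):
        # Stage 2: encoder is frozen; only the driving head is trained by imitation.
        # The L1 regression loss here is an assumption for illustration.
        with torch.no_grad():
            z = self.encoder(rgb)
        return F.l1_loss(self.driving_head(z), controls_gt)
```

Freezing the encoder in the second stage is the design choice that Table 2 of the submission ablates, according to the third review (16% vs. 62% completed episodes in the hardest generalization setting).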