Project Submission Guidelines
Table of Contents
- Submission Deadline & Website
- Corrections and Clarifications
- Submission Guidelines
- Review Guidelines
- Review Questions
1. Submission Deadline & Website
Submission Deadline: Jun 06 (Sun), 23:59 KST
Review Deadline: Jun 17 (Thu), 23:59 KST
Rebuttal & Final Decision deadline: Jun 20 (Sun), 23:59 KST
OpenReview submission website:
https://openreview.net/group?id=kaist.ac.kr/KAIST/Spring2021/CS492H
2. Corrections and Clarifications
- [Report Guidelines] The review will be single-blind. Add author names in the report (uncomment \cvprfinalcopy in the CVPR template).
- [Report Guidelines] Add the project type (Development Track / Research Track) in the second line of the report title. In LaTeX, you can add it like this: \title{Project Title \\ {Development/Research} Track}
- [Report Guidelines] The Acknowledgments section is required even when you have no other team members, no external collaborators, and no data/code borrowed from other places (state that fact).
- [Report Guidelines] For the development track, it is allowed (and actually highly recommended) to use qualitative result figures from the original paper for comparisons, but clearly mention that the figures are from the original paper.
- [Code Submission] In the code repository, provide links to the pretrained models (stored in Google Drive or any other place) and also to any datasets you created, so that anyone can easily check replicability.
- [Review Guideline] Guidelines about “Infeasible Experiments” have been added. The review question about experimental results has also been revised to refer to the guidelines about “Infeasible Experiments”.
- [Poster Guidelines] The poster must only include the figures and experimental results shown in the report. Any new figures and new experimental results must not be shown on the poster. It is allowed to edit the figures and tables.
3. Submission Guidelines
OpenReview submission website:
https://openreview.net/group?id=kaist.ac.kr/KAIST/Spring2021/CS492H
If you don’t have an account at OpenReview, sign up here:
https://openreview.net/signup
Please use your KAIST email address.
- A submission must include a report, a poster, and a code repository link. (Supplementary materials are optional.)
- Submissions in the wrong format or with any missing items may receive a zero score.
- Make a submission for each team (not for each person). Add all the team members’ names in the submission.
- Development Track: Make the title the same as that of the original paper.
- In the OpenReview webpage, the deadline is written as Jun 06 2021 02:59PM (UTC-0), which is the same as Jun 06 (Sun), 23:59 KST.
- Even in the case when OpenReview does not block the submission after the deadline (due to some system errors), any submissions after the deadline will be rejected.
Report Guidelines
Format
- A single PDF file (maximum 20MB). Supplementary materials are allowed.
- Must be in CVPR format:
LaTeX/Word Templates Zip file (CVPR 2021 format)
Overleaf template (CVPR 2018 format)
- Add the project type (Development Track / Research Track) in the second line of the report title. In LaTeX, you can add it like this: \title{Project Title \\ {Development/Research} Track}
- Must be up to four pages long, including figures, tables, and acknowledgments, but not references. Additional pages containing only cited references are allowed.
Structure
Please use the same section names below to facilitate the review process.
Additional sections/subsections are allowed. Also, the detailed instructions in each section are recommendations, and you do NOT need to exactly follow them, except for the highlighted parts.
The review will be single-blind. Add author names in the report (uncomment \cvprfinalcopy in the CVPR template).
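As a sketch, the relevant preamble lines in a CVPR-style template look like the following (the exact comment text and surrounding lines may differ between template versions; the author names are placeholders):

```latex
% Uncomment this line so that author names are shown
% (the review is single-blind):
\cvprfinalcopy

% Two-line title: project title, then the project track.
\title{Project Title \\ {Development/Research} Track}
\author{Member One \and Member Two}
```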
[Development Track]
[Research Track]
- Abstract
- one-sentence TL;DR,
- 1-2 sentences for the motivation of the work,
- a few sentences summary of the problem statement and key contributions, and
- 1-2 sentences summary of the experimental results.
- In the end, add “Code is available at: {link}.”
- Introduction
- What is the motivation for the work? Why is it important to solve the problem? What would be the potential impacts of this work in the research field, or what would be the potential applications?
- What are the challenges in the problem? How has previous work approached this problem, and what are the limitations (add citations)? Or, why hasn’t the problem been addressed while it is important?
- What is the problem statement? What are the input and desired output? What information is given at training time, and what assumptions are made? A teaser figure that effectively illustrates the problem setup or the desired output would be great. (E.g., show the best result with the input.)
- What are your key ideas for solving the challenges? It would also be great if your teaser figure describes the key ideas. (E.g., how do your key ideas make a difference in the results?)
- Briefly describe the experimental setups. Which datasets/benchmarks do you use? How do you evaluate the results? What is the conclusion? (E.g. our method outperformed SotA methods by huge margins.)
- A summary of contributions at the end is also recommended (in a bullet point list). You can start the summary with (for example) ‘In summary, our contributions are:’.
- Related Work:
- Consider making two or three groups of related papers and writing a short paragraph for each of them, with a paragraph title. E.g., if your project is about ‘3D GAN’, you can consider the groups ‘Deep Learning for 3D Data’ and ‘Deep Generative Models’. Also, an intro sentence briefly describing the groups is recommended (but not necessary).
- Summarize each work with 1-2 sentences while focusing on the aspects in which your work differs from it or overcomes its limitations.
- Method
- Describe the problem setup again, but in detail and also in a formal way. Introduce essential math notations for the following formulations. A figure describing the overall method is recommended.
- The report needs to be self-contained; the readers must be able to understand the ideas without reading the other papers. If needed, provide background knowledge (the ‘minimal’ background to understand your work).
- Think about the most effective way to explain the key ideas. One option is to prioritize the components of your method by importance rather than by their order in the pipeline. Less important details can go at the end or in the supplementary.
- Use visual aids in your exposition. More would be better.
- Make the exposition as clear and specific as possible. Formulations can make the exposition clearer and more concise.
- Experimental Results:
- Clearly describe the experiment setups, including benchmarks/datasets, evaluation metrics, and baseline methods.
- Provide both quantitative results (tables) and qualitative results (figures). Provide comparisons with the baseline methods. Explain in which aspects your method is better. Think about the best way to demonstrate the advantages of your method over the baseline methods.
- Conduct the ablation study and show the results. If you have multiple technical contributions, demonstrate how each component affects the results.
- Do not miss any well-known benchmarks/datasets, standard evaluation metrics, or previous methods. Any lack of experiments, evaluations, or comparisons will be penalized in the evaluation.
- Make fair and reasonable (apples-to-apples) comparisons. (Or, if the baseline methods have advantages such as stronger supervision, describe this clearly.) Unfair benefits to the proposed method in the experiments will also be penalized.
- Here or in the conclusion, consider showing some remarkable failure cases that explain the limitations of the proposed method or motivate future work.
- Conclusion
- Briefly summarize the project, particularly the key ideas and the experimental results.
- Describe the limitation of the proposed work and potential future research directions (if you have space in the report).
- Acknowledgments
- Make acknowledgments a subsection without a section number, or a paragraph.
- Describe the role of each team member and the external collaborators and their contributions.
- Hiding external collaborators will be considered academic misconduct.
- The Acknowledgments section is required even when you have no other team members, no external collaborators, and no data/code borrowed from other places (state that fact).
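One way to typeset the unnumbered acknowledgments in LaTeX is sketched below (the starred sectioning command suppresses the number; the comment text is only an illustrative placeholder):

```latex
% Unnumbered subsection, as required:
\subsection*{Acknowledgments}
% State contributions explicitly, even if there is nothing to report,
% e.g., "This is a single-member team with no external collaborators.
% No external data or code was used beyond the cited works."
```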
Poster Guidelines
Format
- A single PNG file (maximum 3MB).
- Must have 2560x1280 resolution.
- Poster template: Link (design adapted from Zoya Bylinskii)
You do not need to use this template.
- Must include project title, project track, team member names, and acknowledgments.
- Must only include the figures and experimental results shown in the report. Any new figures and new experimental results must not be shown on the poster. It is allowed to edit the figures and tables.
Resources
Code Submission
Supplementary Materials
- Supplementary materials are optional.
- Any formats (PDF document, video, web page) are allowed (maximum 100MB).
- Submit as a ZIP file.
Plagiarism / Academic Misconduct
Plagiarism consists of appropriating someone else’s ideas, text, figures, or results from another work without giving credit or correctly citing the work.
If plagiarism or other academic misconduct is discovered in your submission (in any of your report, poster, code, or supplementary), you will receive an F grade, and it will also be reported to the university.
4. Review Guidelines
- You will be asked to review four projects, and each project will be reviewed by at least four peer reviewers.
- The review will be single-blind and will be performed in OpenReview.
(Both the submissions and reviews will NOT be open to the public.)
- Reviews can be rejected based on the assessment of the review quality.
- Each unsubmitted or rejected review will be penalized by 20% in the project evaluation.
Infeasible Experiments
In the report, you may argue that experiments expected to be shown in the results are infeasible.
For example,
- Some experiments in the original work (Development Track) may not be reproducible if they require multiple GPUs or too much disk/memory space (note that only one GPU is provided to each student), or if some important details of the experiment are missing.
- An exact comparison with the original work (Development Track) or previous work (Research Track) may not be possible when the data used in the experiment is not released.
For such infeasible experiments, the authors must consider substituting them with similar but feasible experiments (or argue why it is not possible even to substitute them).
For example,
- If an experiment is computationally infeasible, try to simplify the experiment (use a smaller dataset, use a simplified neural network, reduce the batch size, etc).
- If data for replication is not provided, consider using other, similar data.
- If some experiment details for replication are not provided, use any common parameters or approaches.
As a reviewer, you need to judge whether the argument about infeasibility is valid or not and whether the authors tried to substitute the infeasible experiments.
For the experiments considered infeasible, do not take them into account in the evaluation.
5. Review Questions
Project Summary
Please summarize the project idea in your own words (3+ sentences).
Please read the report thoroughly. Your review can be rejected based on the assessment of the review quality. Each unsubmitted or rejected review will be penalized by 20% in the project evaluation.
Report Evaluation
[All] (Introduction) Are the motivation and problem statement clearly described?
- Yes.
- No. (Please describe the details in the detailed comments.)
[All] (Introduction) Are the implementation challenges (Development) or the technical contributions/novelties (Research) clearly described?
- Yes.
- No. (Please describe the details in the detailed comments.)
[Development] (Method Summary) Is the method summary clearly described (including input, desired output, supervision provided in the network training, representation of the data, etc)?
- Yes.
- No. (Please describe the details in the detailed comments.)
- Research track.
[Development] (Implementation Details) Are the implementation details clearly described (including the parts where the existing codes are used)?
- Yes.
- No. (Please describe the details in the detailed comments.)
- Research track.
[Research] (Related Work) Is the related work adequate, and is the relation to prior work well explained? (If not, is there any missing prior work?)
- Yes.
- No. (Please describe the details in the detailed comments.)
- Development track.
[Research] (Method) Is the method clearly described (including input, desired output, supervision provided in the network training, representation of the data, main technical ideas, etc)?
- Yes.
- No. (Please describe the details in the detailed comments.)
- Development track.
[All] (Experimental Results) Are the experiment setups clearly described (including benchmarks/datasets, evaluation metrics, baseline methods, quantitative/qualitative results, comparisons, failure cases)?
- Yes.
- No. (Please describe the details in the detailed comments.)
[All] (Acknowledgments) Are 1) the role of each team member and 2) the external collaborators and their contributions clearly described?
- Yes.
- No. (Please describe the details in the detailed comments.)
[All] Overall, is the report well-organized and clearly written? If not, should there be additional explanations or illustrations?
- Clear, only minor flaws.
- Mostly clear, but improvements needed. (Please describe the details in the detailed comments.)
- Difficult to understand in places. (Please describe the details in the detailed comments.)
- Very hard to understand. (Please describe the details in the detailed comments.)
Overall Evaluation
[Development] (Degree of Challenges) How do you assess the difficulty of the implementation (considering the number of team members, the external collaborators, and the parts where the submitters used existing code)?
- Very hard, requires extraordinary efforts and/or skills.
- Challenging.
- Moderate. Doable.
- Easy.
[Research] (Novelty) Does it present a new concept or idea?
- Proposes novel and interesting views, ideas, or problems.
- Worthy contributions with some novel (but small) ideas.
- Minor variations of existing techniques.
- Does not advance the state of knowledge, or closely duplicates existing work.
[Research] (Technical quality) Is the approach technically sound?
- Technically very strong and solid.
- Technically adequate.
- A few minor flaws.
- Claims not completely supported, assumptions or simplifications unrealistic, fatal mistakes.
[All] (Experimental Results) Are the experiments well designed, sufficient, and clearly described?
(Refer to the guidelines about “Infeasible Experiments” before answering this question.)
- Extensive, detailed, and informative evaluations without any missing benchmarks/datasets, evaluation metrics, or comparisons.
- Solid and sufficient to convince the merits of the proposed method (Research) or the performance of the implementation (Development) (although lacking some minor experiments or simplifying the experiment setup a bit).
- Lacking some important experiments (in terms of benchmarks/datasets, evaluation metrics, comparisons, quantitative/qualitative results, etc).
- Insufficient, unfair, inadequate, or uninformative comparisons with related work (Research) or the original work (Development).
[Development] (Reproduction) Does the implementation reproduce the results of the original work?
(Refer to the guidelines about “Infeasible Experiments” before answering this question.)
- Yes, and even improved/outperformed the original work, or showed more results in different applications.
- Almost, the results are similar.
- A bit worse but comparable.
- Significantly worse, or couldn’t reproduce the results at all.
[All] (Strengths) Describe the strengths of the work in bullet points (with 2-3 bullets).
[All] (Weaknesses) Describe the weaknesses of the work in bullet points (with 2-3 bullets).
[All] (Detailed Comments) Describe the detailed comments here and how you decided the ‘Overall Score’.
Please be constructive, respectful, and detailed in your comments.
[All] (Feedback) Provide questions or comments to the submitters if you have any.
[All] (Overall Score) Please provide an “overall score” for this submission (in [1-5] range). One decimal is allowed (e.g., 3.5).
(Refer to the “Rating Criteria” before answering this question.)
Rating Criteria
Criteria of score 4 (‘Great!’) (AND conditions)
- [All] (Report Writing) The report is well-organized and clearly written.
- [All] (Experiment Designs) The experiments are well-designed and convincing.
- [Development] (Experimental Results) Achieved similar results in a challenging development task with the original or improved/outperformed the original work in a doable task.
- [Research] (Experimental Results) Achieved promising results. Worth pursuing the direction for future paper publication (although not ready for now).
- [All] (Etc) No significant flaws in the report writing, experimental result demonstration, and poster presentation.
Criteria of score 5 (‘Outstanding!’)
- Better than ‘Great!’. Probably the best project in the class.
Criteria of score 3 (‘Good!’)
- Has one significant reason not to be ‘Great!’.
Criteria of score 2 (‘Not Good’) (OR conditions)
- Has two significant reasons not to be ‘Great!’.
- Shows poor experimental designs, experimental results, or technical contributions (while everything else is OK).
Criteria of score 1 (‘Poor’) (OR conditions)
- Has three or more significant reasons not to be ‘Great!’.
- Shows poor experimental designs or experimental results or technical contributions, and has one more significant reason not to be ‘Great!’.