WILDS Leaderboard Submission Form
To submit to the WILDS leaderboard, please fill out this form after creating a submission bundle in CodaLab. Read more about submission rules at http://wilds.stanford.edu/submit. Thank you!
WILDS version *
Please verify the version of the WILDS package that you used.
Contact information
Contact name *
Please provide the contact name that we will list on the leaderboard.
Contact email *
The email address people can use to contact you with questions. This will be made public on the leaderboard.
Contact affiliation
Your institution or organization, if you'd like to provide this information.
Method information
Official vs. unofficial *
Is your implementation official (i.e., you proposed this method, or you made substantial modifications to an existing method and have written that up), or unofficial (i.e., you re-implemented a method that others had proposed)?
Algorithm *
The name of the algorithm you used (e.g., "ERM", "Group DRO").
Validation sets *
Which validation sets did you use? If "other", please describe how you did model/hyperparameter selection.
Submission link *
Please provide a publicly-accessible URL to the .tar.gz or .zip file containing your submission.
Paper *
Please link to the original paper that describes your method. Include only the URL.
Code *
Please link to a GitHub or other public repository that contains all of the code and scripts required to reproduce your results. Do not put a placeholder.
Submission type *
Indicate if the submission is standard or non-standard. Please read the rules concerning standard vs. non-standard submissions at http://wilds.stanford.edu/submit.
Non-standard submission elaboration
If the submission is non-standard, please describe what changes you made from the standard protocol.
Amazon
Please specify the model and hyperparameters used for Amazon if predictions for Amazon were included in your submission.
Model for Amazon
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "BERT-base-uncased").
Unlabeled data used for Amazon
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for Amazon
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
Camelyon17
Please specify the model and hyperparameters used for Camelyon17 if predictions for Camelyon17 were included in your submission.
Model for Camelyon17
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "ResNet50").
Unlabeled data used for Camelyon17
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for Camelyon17
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
CivilComments
Please specify the model and hyperparameters used for CivilComments if predictions for CivilComments were included in your submission.
Model for CivilComments
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "BERT-base-uncased").
Unlabeled data used for CivilComments
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for CivilComments
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
FMoW
Please specify the model and hyperparameters used for FMoW if predictions for FMoW were included in your submission.
Model for FMoW
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "ResNet50").
Unlabeled data used for FMoW
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for FMoW
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
GlobalWheat
Please specify the model and hyperparameters used for GlobalWheat if predictions for GlobalWheat were included in your submission.
Model for GlobalWheat
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "Faster R-CNN").
Unlabeled data used for GlobalWheat
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for GlobalWheat
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
iWildCam
Please specify the model and hyperparameters used for iWildCam if predictions for iWildCam were included in your submission.
Model for iWildCam
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "ResNet50").
Unlabeled data used for iWildCam
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for iWildCam
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
OGB-MolPCBA
Please specify the model and hyperparameters used for OGB-MolPCBA if predictions for OGB-MolPCBA were included in your submission.
Model for OGB-MolPCBA
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "GIN").
Unlabeled data used for OGB-MolPCBA
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for OGB-MolPCBA
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
PovertyMap
Please specify the model and hyperparameters used for PovertyMap if predictions for PovertyMap were included in your submission.
Model for PovertyMap
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "ResNet50").
Unlabeled data used for PovertyMap
If you used any unlabeled data for training your model or for doing hyperparameter search, please indicate so below. If you've used other sources of unlabeled data that aren't from the official WILDS dataset (e.g., from other existing datasets or scraped from the Internet) and that are not part of the official WILDS default model pretraining, please describe them briefly.
Tuned hyperparameters for PovertyMap
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
Py150
Please specify the model and hyperparameters used for Py150 if predictions for Py150 were included in your submission.
Model for Py150
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "CodeGPT").
Tuned hyperparameters for Py150
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
RxRx1
Please specify the model and hyperparameters used for RxRx1 if predictions for RxRx1 were included in your submission.
Model for RxRx1
If you used the default model in the WILDS package, select "Default". Otherwise, provide the name of the model you used (e.g., "ResNet50").
Tuned hyperparameters for RxRx1
Please disclose the hyperparameters you selected and how you tuned them. If you did a grid search, please follow this format: "lr: [0.001*, 0.01*], dropout: [0*, 0.5], etc.", where the asterisks denote the hyperparameters that you eventually selected based on validation performance. If you did a random search, please let us know what ranges you searched over and how many hyperparameter combinations you tried. This information is not currently displayed on the leaderboards, but it is important for record-keeping.
Additional comments