Leaderboard Submission Form for Temporal Graph Benchmark
This is the form for submitting your method's results to the Temporal Graph Benchmark. If you have any questions, please reach out to shenyang.huang@mail.mcgill.ca
Contact Email *
Please provide an email address we can contact regarding your submission, method, or code.
Primary Contact Name *
Please provide your own name and a short affiliation name in parentheses, e.g., Geoffrey Hinton (UToronto)
TGB Package Version *
Please provide the TGB package version you used to produce the reported results. Check the TGB website for the latest version.
Name of Your Method *
Please provide the name of your method, e.g., "TGN", "CAWN", "EdgeBank" (maximum 30 characters).
External data *
When building your model, did you use external data (e.g., external pre-trained models, raw text, or external unlabeled/labeled data)? If "Yes", please clearly indicate this in your method name above, e.g., TGN (pre-trained on GPT4).
Dataset *
Please provide the name of the dataset, with version number if possible (e.g., "tgbl-wiki-v2"), for which you would like to report the performance.
Test Performance *
For the chosen dataset, please report the raw test performance output by the TGB evaluator. Please use the following format when reporting the average and unbiased standard deviation: "0.3661198375, 0.01374213057". Here, the average is 0.3661198375 and the standard deviation is 0.01374213057.
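If you aggregate results over multiple runs yourself, the expected statistics are the mean and the unbiased (Bessel-corrected) sample standard deviation. A minimal sketch with NumPy, using purely illustrative scores (not real TGB results):

```python
import numpy as np

# Hypothetical test scores from five independent runs (illustrative only).
scores = [0.351, 0.372, 0.360, 0.381, 0.366]

avg = np.mean(scores)
# ddof=1 applies Bessel's correction, giving the unbiased sample std.
std = np.std(scores, ddof=1)

print(f"{avg:.10f}, {std:.10f}")
```

Note that `np.std` defaults to `ddof=0` (the biased estimator), so `ddof=1` must be passed explicitly to match the requested unbiased standard deviation.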
Validation Performance *
For the chosen dataset, please report the raw validation performance output by the TGB evaluator. Please follow the same format as the test performance.
Code Access *
Please provide a link to the GitHub repository or directory that contains all the code/instructions/commands needed to reproduce your submitted result. Please ensure the link works correctly; placeholder links are NOT allowed.
Paper Link *
Please provide the link to the original paper that describes the method. If your method has any original component (e.g., even just combining existing methods XXX and YYY), you must write a technical report describing it (e.g., exactly how you combined XXX and YYY).
Tuned Hyper-parameters *
Please disclose all the hyper-parameters you tuned and the search range for each of them. Please use the following format: "lr: [0.001*, 0.01], num_layers: [4*, 5], hidden_channels: [128, 256*], dropout: [0*, 0.5], epochs: early-stop*", where the asterisks denote the hyper-parameter values you eventually selected (based on validation performance) to report the test performance. This information will not appear on the leaderboard for the time being, but it is important for us to keep a record and to encourage fair model selection.
Implementation *
Is the implementation official (an implementation by the authors who proposed the method) or unofficial (a re-implementation of the method by non-authors)?
# of Parameters *
The number of parameters of your model.
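One way to obtain this count is to sum the sizes of all parameter tensors. A sketch using NumPy arrays as stand-ins for model tensors (the names and shapes below are purely hypothetical); in PyTorch the analogous idiom is `sum(p.numel() for p in model.parameters())`:

```python
import numpy as np

# Hypothetical parameter tensors of a small model (illustrative shapes only).
params = {
    "embed.weight": np.zeros((100, 128)),
    "gru.weight": np.zeros((128, 128)),
    "out.bias": np.zeros(128),
}

# Total parameter count = sum of element counts over all tensors.
num_params = sum(p.size for p in params.values())
print(num_params)
```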
Hardware *
The hardware accelerator (GPU, TPU, etc.) used for the experiments, e.g., GeForce RTX 2080 (11GB GPU). If multiple accelerators (e.g., multiple GPUs) were used, please specify so.