Template code for participating in Topcoder Marathon Matches
## Submission format
Our template supports both the "submit data" and "submit code" submission styles. Your submission should be a single ZIP file not larger than 500 MB, with the following content:
```
/solution
    solution.csv
/code
    Dockerfile
    <your code>
```
Here /solution/solution.csv is the output your algorithm generates on the provisional test set. The format of this file is described above in the Output file section.
/code contains a dockerized version of your system that will be used to reproduce your results in a well-defined, standardized way. This folder must contain a Dockerfile that will be used to build a docker container that will host your system during final testing. How you organize the rest of the contents of the /code folder is up to you, as long as it satisfies the requirements listed below in the Final testing section.
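For illustration, a submission ZIP with this layout could be assembled as follows. This is only a sketch: the folder names come from the layout above, but the file contents here are placeholders, and using Python's stdlib zipfile CLI for packaging is just one option.

```shell
# Sketch: build a submission archive with the required layout.
# The solution.csv and Dockerfile contents below are placeholders.
mkdir -p submission/solution submission/code
echo "id,prediction" > submission/solution/solution.csv
printf 'FROM ubuntu:20.04\n' > submission/code/Dockerfile
# Zip the two top-level folders into a single archive.
cd submission
python3 -m zipfile -c ../submission.zip solution code
cd ..
```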
4. Your trained model file(s). Alternatively, your build process may download your model files from the network. Either way, you must make it possible to run inference without having to execute training first.
The tester tool will unpack your submission, and the

```
docker build -t <id> .
```
command will be used to build your docker image (the final . is significant), where <id> is your TopCoder handle.
The build process must run out of the box, i.e. it should download and install all necessary 3rd-party dependencies and obtain all necessary external data files, your model files, etc., either by downloading them from the internet or by copying them from the unpacked submission.
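As a sketch, a Dockerfile meeting these requirements might look like the following, assuming a Python-based system; the base image, package names, and file layout are illustrative assumptions, not requirements:

```dockerfile
# Example only: base image, packages, and paths are assumptions.
FROM ubuntu:20.04

# Install all 3rd-party dependencies during the build.
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt /work/
RUN pip3 install -r /work/requirements.txt

# Copy your code and prebuilt model files from the unpacked submission,
# so inference can run without training first.
COPY . /work/
WORKDIR /work
```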
Your container will be started by the

```
docker run -v <local_data_path>:/data:ro -v <local_writable_area_path>:/wdata -it <id>
```
command (single line), where the -v parameters mount the contest's data to the container's /data folder and a local writable area to /wdata. This means that all the raw contest data will be available for your container within the /data folder. Note that your container will have read-only access to the /data folder. You can store large temporary files in the /wdata folder.
To validate the template file supplied with this repo, you can execute the following command:
```
docker run -it <id>
```
## Training and test scripts
Your container must contain a train script and a test (a.k.a. inference) script with the following specifications:
train.sh <data-folder> should create any data files that your algorithm needs for running test.sh later. The supplied <data-folder> parameter points to a folder containing training data in the same structure as is available for you during the coding phase. The allowed time limit for the train.sh script is 3 days. You may assume that the data folder path will be under /data.
As its first step, train.sh must delete the self-created models shipped with your submission.
Some algorithms may not need any training at all. It is a valid option to leave train.sh empty, but the file must exist nevertheless.
Training must be possible using only the contest's own training data and publicly available external data. This means that this script should do all the preprocessing and training steps that are necessary to reproduce your complete training workflow.
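The train.sh requirements can be sketched as follows; the models/ directory name is an assumption, and the echo lines are placeholders standing in for your real preprocessing and training commands:

```shell
#!/bin/bash
# train.sh sketch: reproduce the complete training workflow from scratch.
# Usage: ./train.sh <data-folder>
set -euo pipefail

DATA_FOLDER="${1:-/data/training/}"

# First step: delete the self-created models shipped with the submission,
# so the steps below provably recreate them. (models/ name is an assumption)
rm -rf models
mkdir -p models

# Placeholder steps -- replace with your real pipeline; large temporary
# files should go to /wdata.
echo "preprocessing data from $DATA_FOLDER"
echo "training: writing fresh model files into models/"
```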
A sample call to your training script (single line):
```
./train.sh /data/training/
```
In this case you can assume that the training data looks like this:
```
data/
    training/
        TODO fill after structure fixed
```
test.sh <data-folder> <output_path> should run your inference code using new, unlabeled data and should generate an output CSV file, as specified by the problem statement. The allowed time limit for the test.sh script is 24 hours. The testing data folder contains similar data in the same structure as is available for you during the coding phase. The final testing data will be similar in size and in content to the provisional testing data. You may assume that the data folder path will be under /data.
Inference must be possible without running training first, i.e. using only your prebuilt model files.
It should be possible to execute your inference script multiple times on the same input data or on different input data. You must make sure that these executions don't interfere and that each execution leaves your system in a state in which further executions are possible.
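The test.sh requirements can likewise be sketched as a placeholder; the echo line stands in for your real inference pipeline, and writing only to the given output path keeps repeated executions from interfering:

```shell
#!/bin/bash
# test.sh sketch: run inference using prebuilt model files only.
# Usage: ./test.sh <data-folder> <output_path>
set -euo pipefail

DATA_FOLDER="${1:-/data/test/}"
OUTPUT_PATH="${2:-solution.csv}"

# Placeholder inference -- replace with your real pipeline. The output
# file is rewritten from scratch so re-runs do not interfere.
echo "id,prediction" > "$OUTPUT_PATH"
echo "ran inference on $DATA_FOLDER, wrote $OUTPUT_PATH"
```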
A sample call to your testing script (single line):
```
./test.sh /data/test/ solution.csv
```
In this case you can assume that the testing data looks like this: