Commit a9dcd18

docs: update wording
1 parent df48bfe commit a9dcd18

1 file changed (+3, −6 lines)
README.md

Lines changed: 3 additions & 6 deletions
@@ -49,14 +49,11 @@ cargo scaffold <day>
 
 Individual solutions live in the `./src/bin/` directory as separate binaries. _Inputs_ and _examples_ live in the `./data` directory.
 
-Every [solution](https://github.com/fspoettel/advent-of-code-rust/blob/main/src/template/commands/scaffold.rs#L9-L35) has _tests_ referencing its _example_ file in `./data/examples`. Use these tests to develop and debug your solutions against the example input.
+Every [solution](https://github.com/fspoettel/advent-of-code-rust/blob/main/src/template/commands/scaffold.rs#L9-L35) has _tests_ referencing its _example_ file in `./data/examples`. Use these tests to develop and debug your solutions against the example input. In VS Code, `rust-analyzer` will display buttons for running / debugging these unit tests above the unit test blocks.
 
 > [!TIP]
 > If a day has different example inputs for both parts, you can use the `read_file_part()` helper in your tests instead of `read_file()`. For example, if this applies to day 1, you can create a second example file `01-2.txt` and invoke the helper like `let result = part_two(&advent_of_code::template::read_file_part("examples", DAY, 2));` to read it in `test_part_two`.
 
-> [!TIP]
-> when editing a solution, `rust-analyzer` will display buttons for running / debugging unit tests above the unit test blocks.
-
 ### Download input & description for a day
 
 > [!IMPORTANT]
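The added line and the tip in this hunk both describe the scaffolded test blocks. As a rough sketch only (the exact code generated by `cargo scaffold` may differ; `part_one`, `part_two`, and the `None` placeholders are assumptions here), such a test module might look like this:

```rust
// Hypothetical sketch of a scaffolded solution's test block; the file actually
// generated into ./src/bin/ may differ. `DAY` is assumed to be provided by the template.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_part_one() {
        // Reads this day's shared example file under ./data/examples.
        let result = part_one(&advent_of_code::template::read_file("examples", DAY));
        assert_eq!(result, None); // replace None with the expected example answer
    }

    #[test]
    fn test_part_two() {
        // If part two has its own example file (e.g. 01-2.txt), use read_file_part
        // as shown in the tip above.
        let result = part_two(&advent_of_code::template::read_file_part("examples", DAY, 2));
        assert_eq!(result, None);
    }
}
```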
@@ -99,7 +96,7 @@ For example, running a benchmarked, optimized execution of day 1 would look like
 #### Submitting solutions
 
 > [!IMPORTANT]
-> This command requires [installing the aoc-cli crate](#configure-aoc-cli-integration).
+> This requires [installing the aoc-cli crate](#configure-aoc-cli-integration).
 
 In order to submit part of a solution for checking, append the `--submit <part>` option to the `solve` command.
 
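As an illustration of the `--submit` option described above (the day and part numbers are placeholders, and this assumes the aoc-cli integration from the note is configured), submitting part one of day 1 might look like:

```sh
# Hypothetical invocation: solve day 1 and submit part 1 for checking.
cargo solve 1 --submit 1
```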

@@ -123,7 +120,7 @@ This runs all solutions sequentially and prints output to the command-line. Same
 
 #### Update readme benchmarks
 
-The template can output a table with solution times to your readme. In order to generate a benchmarking table, run `cargo all --release --time`. If everything goes well, the command will output "_Successfully updated README with benchmarks._" after the execution finishes and the readme will be updated.
+The template can output a table with solution times to your readme. In order to generate a benchmarking table, run `cargo time`. If everything goes well, the command will output "_Successfully updated README with benchmarks._" after the execution finishes and the readme will be updated.
 
 Please note that these are not "scientific" benchmarks, understand them as a fun approximation. 😉 Timings, especially in the microseconds range, might change a bit between invocations.
 
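For completeness, the benchmark workflow described above comes down to a single invocation; the success message is quoted from the text above:

```sh
# Regenerate the README benchmark table.
# Expected on success: "Successfully updated README with benchmarks."
cargo time
```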
