As a rule of thumb, parsing in a compiled language should be I/O bound. In other words, it shouldn't take much longer to parse the file than it takes to just copy it by any method.
Here's how I optimize code. I'm not looking for "slow routines". I want a much finer level of granularity.
I'm looking for individual statements or function call sites that are on the call stack a substantial fraction of the time, because if any such call could be replaced by code that did the same thing in much less time, total running time would drop by roughly that fraction.
It's extremely easy to find them. All you have to do is pause the program a few times, and each time get out your magnifying glass and ask what it's doing and why.
Don't stop after you've found and fixed just one problem. Since you've reduced the total time, other problems now take a larger percentage of the time, and are thus easier to find. So, the more you do it, the more you find, and each speedup compounds on the previous ones. That's how you really make it fast.
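In practice the pause-and-look step is just a debugger session. One round might look like this, assuming gdb and a program named `parser` (the name is made up):

```
$ gdb -p $(pgrep parser)   # attach to the running program
(gdb) bt                   # the magnifying glass: read every level of the stack
(gdb) continue
^C                         # let it run, interrupt again; repeat 5-10 times
(gdb) bt
```

Each backtrace is one sample. Any call site that shows up on several of them is on the stack a corresponding fraction of the time, and that's your candidate.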
If I had to guess (which I really shouldn't do), I'd bet you'll find that calls to fscanf are on the stack a large fraction of the time. For parsing numbers, I roll my own lexer, and I can be pretty sure it can't be beat.
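A hand-rolled number scanner of the kind described can be as simple as walking a buffer and converting digits inline. This is a sketch under my own assumptions, not the original author's code; `scan_ints` is a made-up helper that handles only optionally-signed base-10 integers:

```c
/* Sketch of a hand-rolled integer lexer: scan buf[0..len) once,
   converting digit runs inline instead of calling fscanf per value. */
#include <ctype.h>
#include <stddef.h>

/* Store up to max integers found in buf into out; return how many. */
size_t scan_ints(const char *buf, size_t len, long *out, size_t max) {
    size_t i = 0, n = 0;
    while (i < len && n < max) {
        /* skip anything that can't start a number */
        while (i < len && !isdigit((unsigned char)buf[i]) && buf[i] != '-') i++;
        if (i >= len) break;
        int neg = (buf[i] == '-');
        if (neg) i++;
        if (i >= len || !isdigit((unsigned char)buf[i])) continue;
        long v = 0;
        while (i < len && isdigit((unsigned char)buf[i]))
            v = v * 10 + (buf[i++] - '0');   /* accumulate the digit run */
        out[n++] = neg ? -v : v;
    }
    return n;
}
```

Called on a buffer read in one gulp, this skips fscanf's per-call format-string interpretation and locale machinery, which is typically where the time goes.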