I am having performance problems with the following Python function:
def remove_comments(buffer):
    new_buffer = ''
    lines = buffer.split('\n')
    for line in lines:
        # Keep everything before the first '--' and restore the newline.
        line_wo_comments = line.split('--')[0] + '\n'
        new_buffer = new_buffer + line_wo_comments
    return new_buffer
When buffer is very large (thousands of lines or more), the function gets progressively slower as it works through the input.
What techniques could I use to speed this function up?
Assume that the input is a source code file with lines of roughly 1 to 120 characters, which may or may not contain comments. The files can be many lines long; the especially problematic ones are machine generated (1-10k+ lines long).
Update: The intention is to use this as a "pre-processing" step on the buffer contents (a file). I am not really interested in possible ways to completely refactor this (i.e., methods that avoid iterating through all the lines multiple times), but rather in making the essential buffer-in / buffer-out operation as fast as possible.
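For reference, this is the kind of rewrite I have been experimenting with: collecting the stripped lines and joining them once, on the assumption that the repeated new_buffer + line_wo_comments concatenation is the quadratic bottleneck. A minimal sketch (the name remove_comments_join is just mine; the output should match the original, as far as I can tell):

def remove_comments_join(buffer):
    # Illustrative sketch, not profiled beyond small files.
    # Strip everything after the first '--' on each line, then
    # stitch the pieces back together with a single join.
    # maxsplit=1 avoids splitting on any later '--' occurrences.
    stripped = (line.split('--', 1)[0] for line in buffer.split('\n'))
    return '\n'.join(stripped) + '\n'

Since str.join builds the result string in one pass, this should scale roughly linearly with the number of lines rather than quadratically, but I would still be interested in other techniques.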