Code Review

Josh
I am having performance problems with the following Python function:

def remove_comments(buffer):
    new_buffer = ''
    lines = buffer.split('\n')
    for line in lines:
        line_wo_comments = line.split('--')[0] + '\n'
        new_buffer = new_buffer + line_wo_comments
    return new_buffer

When buffer is very large (thousands of lines or more), the function gets slower and slower as it processes the buffer.

What techniques could I use to speed this function call up?

Assume that the input is a source code file. Lines range from 1 to roughly 120 characters and may or may not contain comments. The files can be many lines long; the especially problematic ones are machine generated (1-10k+ lines).

Update: The intention is to use this as a "pre-processing" step for the buffer contents (a file). I am not really interested in possible ways to refactor this completely (i.e. methods that avoid iterating through all the lines multiple times), but rather in making the essence of it, buffer in / buffer out, as fast as possible.
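For context, the dominant cost in the function above is the repeated new_buffer = new_buffer + ..., which re-copies the entire growing string on every iteration, giving quadratic behavior overall. One common technique that keeps the same buffer-in / buffer-out contract is to collect the pieces and join them once. A sketch (the function name remove_comments_fast is mine, not from the original post; it preserves the original's behavior, including the trailing newline added to every line):

```python
def remove_comments_fast(buffer):
    # Keep only the part of each line before the first '--', then join
    # all pieces in a single pass instead of concatenating strings in a
    # loop (which re-copies the growing result on every iteration).
    return '\n'.join(line.split('--', 1)[0] for line in buffer.split('\n')) + '\n'
```

The split('--', 1) call also stops at the first separator rather than splitting the whole line, which saves a little work on lines with several '--' occurrences.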


Post Reopened by Ethan Bierlein, Mast, Community Bot, Malachi, Simon Forsberg
Post Closed as "Not suitable for this site" by 200_success