Here is a simple program I wrote to print a file, breaking each line after a fixed number of columns. I feel it can be optimized, but I'm not sure how. Any feedback is welcome.
#This is a simple program that takes a file and limits the column length to
#the number passed in as the second argument
if ($#ARGV < 1) {
    printf("Usage: perl %s <name of the input file> <column length>\n", 0ドル);
    exit(-1);
}
$file = $ARGV[0];
open(INFO, $file) or die("Could not open file.");
$LINE_LENGTH = $ARGV[1];
$count = 0;
foreach $line (<INFO>) {
    #print $line;
    if (length($line) <= $LINE_LENGTH) {
        printf("%s\n", $line);
    } else {
        $remaining_chars = length($line);
        $remaining_line = $line;
        while ($remaining_chars >= $LINE_LENGTH) {
            $cur_str = substr($remaining_line, 0, $LINE_LENGTH - 1);
            printf("%s\n", $cur_str);
            $remaining_line = substr($remaining_line, $LINE_LENGTH - 1);
            $remaining_chars = $remaining_chars - length($cur_str);
        }
        printf("%s", $remaining_line);
    }
    if ($++counter == 2){
        last;
    }
}
printf("\n");
close(INFO);
Comment (Håkon Hægland, Oct 23, 2015): See also Text::Fold and Text::Wrap
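For reference, Text::Wrap ships with core Perl. A minimal sketch of hard-wrapping a long run of characters with it; the column width and sample string here are just for illustration:

```perl
use strict;
use warnings;
use Text::Wrap;    # core module

# Wrap at 10 columns; by default $Text::Wrap::huge = 'wrap',
# so words longer than the limit are broken rather than overflowing.
$Text::Wrap::columns = 10;

my $text    = "abcdefghijklmnopqrstuvwxyz";
my $wrapped = wrap('', '', $text);    # no indent on first or subsequent lines

print "$wrapped\n";
```

Note that Text::Wrap prefers to break on whitespace, so for prose it behaves like a word-wrapper rather than the fixed-column chopper in the question.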
3 Answers
I am not sure it can be optimized in the sense of better performance, but it can be streamlined. The initial test for length($line) <= $LINE_LENGTH is redundant. Compare:
while (length($line) > $LINE_LENGTH) {
    $prefix = substr($line, 0, $LINE_LENGTH - 1);
    $line = substr($line, $LINE_LENGTH - 1);
    printf("%s\n", $prefix);
}
printf("%s", $line);
PS: I am not familiar enough with Perl. If there is a split-at-position function, you may use it instead of calling substr twice.
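There is such an operation: Perl's four-argument form of substr returns the extracted prefix and replaces it in the original string in one call. A minimal sketch, with the chunk width and sample string chosen just for illustration:

```perl
use strict;
use warnings;

my $line  = "abcdefghij";
my $width = 4;

my @chunks;
# 4-arg substr extracts the first $width chars and removes them from $line
while (length($line) > $width) {
    push @chunks, substr($line, 0, $width, '');
}
push @chunks, $line;    # whatever is left over

print join("|", @chunks), "\n";    # abcd|efgh|ij
```

This replaces the two substr calls per iteration with one, at the cost of mutating the string in place.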
You can use a regular expression instead of substr():
use strict;
use warnings;

if (@ARGV != 2) {
    die("Usage: perl 0ドル <name of the input file> <column length>\n");
}
my ($file, $LINE_LENGTH) = @ARGV;
open(my $INFO, "<", $file) or die($!);
my $count = 0;
while (my $line = <$INFO>) {
    print 1,ドル "\n" while $line =~ /( .{1,$LINE_LENGTH} )/xg;
    # ...
    if (++$count == 2) {
        last;
    }
}
print "\n";
close($INFO);
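A quick way to see what that regex produces: in list context, a global match returns every captured chunk at once. The sample string and width below are just for illustration:

```perl
use strict;
use warnings;

my $LINE_LENGTH = 4;
my $line = "abcdefghij";

# /g in list context returns each (.{1,N}) capture in turn;
# the greedy quantifier takes N characters until fewer remain.
my @chunks = $line =~ /(.{1,$LINE_LENGTH})/g;

print join("|", @chunks), "\n";    # abcd|efgh|ij
```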
use strict;
^^^ always.
You will be saved from embarrassing things, like this:
if ($++counter == 2){ last; }
Seriously, what is that? ;-)
With Perl it's often best to put required arguments first, and then use the @ARGV magic to handle the input files. In other words, put the column width first, and then the input files. The @ARGV magic is powerful: it will read all the files given on the command line, or read STDIN if no files are given. By shifting the column width off @ARGV you can let the magic do its thing.
chomp is another nice trick: it removes the trailing newline, if any. The lines can then be printed with brute-force substrings in a while loop.
I would write your code as:
#!/usr/bin/perl
my $cols = shift @ARGV;
# Set a default, if empty
$cols ||= 80;

while (my $line = <>) {
    chomp $line;
    while (length($line) > $cols) {
        print substr($line, 0, $cols) . "\n";
        $line = substr($line, $cols);
    }
    print $line . "\n";
}
Note: assuming this is saved as an executable file called "narrow", it can now be called in many ways:
narrow 80 *.java
curl http://example.com | narrow
curl http://example.com | narrow 120
etc.
Comment (mpapec, Oct 15, 2015): for my $line (<>) => while (my $line = <>)
Comment (rolfl, Oct 15, 2015): @mpapec - hmm.. I should run things before answering.... checking now.
Comment (mpapec, Oct 15, 2015): while doesn't slurp a whole file into memory: stackoverflow.com/questions/585341/…
Comment (rolfl, Oct 15, 2015): @mpapec - yeah, and even the for should have been a foreach, I think, without the slurp (actually, not so much a slurp, but a non-scalar context).