Friday, February 25, 2011
Example 8.27: using regular expressions to read data with variable number of words in a field
A more or less anonymous reader commented on our last post, where we were reading data from a file with a varying number of fields. The format of the file was:
1 Las Vegas, NV --- 53.3 --- --- 1
2 Sacramento, CA --- 42.3 --- --- 2
The complication in the number of fields related to spaces in the city field (which could vary from one to three words).
The reader's elegant solution took full advantage of R's regular expressions: a powerful and concise language for processing text.
file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
file <- gsub("^([0-9]* )(.*),( .*)$", "\1円'\2円'\3円", file)
tc <- textConnection(file)
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)
The main work is done by the gsub() function, which processes each line of the input file and puts the city values in quotes (so that each city is seen as a single field when read.table() is run).
While not straightforward to parse, the regular expression pattern can be broken into parts. The string ^([0-9]* ) matches any digits (characters 0-9) at the beginning of the line (indicated by the "^"), followed by a space; the "*" means that any number of such 0-9 characters may appear. The string (.*), matches any number of characters followed by a comma, while the last clause, ( .*)$, matches a space and everything after it up to the end of the line (indicated by the "$"). The second argument to gsub() gives the text to replace the matched string with. To reproduce the text captured between each pair of parentheses, we use the backreference syntax "\\1", "\\2", and "\\3"; because the comma in the second clause "(.*)," falls outside the parentheses, it is not reproduced.
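To see the backreferences at work, here is a minimal sketch that applies the same pattern to a single string (the value is copied from the example data above).
line <- "1 Las Vegas, NV --- 53.3 --- --- 1"
# \\1 captures "1 ", \\2 captures "Las Vegas", \\3 captures the rest of the line
gsub("^([0-9]* )(.*),( .*)$", "\\1'\\2'\\3", line)
# [1] "1 'Las Vegas' NV --- 53.3 --- --- 1"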
It may be slightly easier to understand the code if we note that the third clause is unnecessary and split the remaining clauses into two separate gsub() commands, as follows.
file <- readLines("http://www.math.smith.edu/r/data/ella.txt")
file <- gsub("^([0-9]* )", "\1円'", file)
file <- gsub("(.*),", "\1円'", file)
tc <- textConnection(file)
processed <- read.table(tc, sep=" ", na.strings="---")
close(tc)
The first two elements of the file vector become:
"1 'Las Vegas' NV --- 53.3 --- --- 1"
"2 'Sacramento' CA --- 42.3 --- --- 2"
The use of the na.strings option to read.table() is a more appropriate approach to recoding the missing values than the one we used previously. Overall, we're impressed with the commenter's use of regular expressions in this example, and are thinking more about Nolan and Temple Lang's focus on them as part of a modern statistical computing curriculum.
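As a small sketch of what na.strings does (using a single quoted line like those produced above), the "---" fields are read in as NA values rather than as text.
tc <- textConnection("1 'Las Vegas' NV --- 53.3 --- --- 1")
read.table(tc, sep=" ", na.strings="---")
#   V1        V2 V3 V4   V5 V6 V7 V8
# 1  1 Las Vegas NV NA 53.3 NA NA  1
close(tc)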
Monday, July 20, 2009
Example 7.6: Find Amazon sales rank for a book
In honor of Amazon's official release date for the book, we offer this blog entry. Both SAS and R can be used to find the Amazon Sales Rank for a book by downloading the desired web page and ferreting out the appropriate line. This code is likely to break if Amazon's page format is changed (but it worked as of October 2010). [Note: as of spring 2010 Amazon changed the format of their web pages, and the appropriate text to search for changed from "Amazon.com Sales Rank" to "Amazon Bestsellers Rank". We've updated the blog code with this string. As of October 9, 2010 they added a number of blank lines to the web page, which we also now address.]
In this example, we find the sales rank for our book; some interesting discussions of how to interpret the rank are available online.
Both SAS and R code below rely on section 1.1.3, "Reading more complex text files." Note that in the displayed SAS and R code, the long URL has been broken onto several lines, while it would have to be entered on a single line to run correctly.
In SAS, we assign the URL an internal name (section 1.1.6), then input the file using a data step. We exclude all the lines which don’t contain the sales rank, using the count function (section 1.4.6). We then extract the number using the substr function (section 1.4.3), with the find function (section 1.4.6) employed to locate the number within the line. The last step is to turn the extracted text (which contains a comma) into a numeric variable.
SAS
filename amazon url "http://www.amazon.com/
SAS-Management-Statistical-Analysis-Graphics/
dp/1420070576/ref=sr_1_1?ie=UTF8&s=books
&qid=1242233418&sr=8-1";
data test;
infile amazon truncover;
input @1 line $256.;
if count(line, "Amazon Bestsellers Rank") ne 0;
rankchar = substr(line, find(line, "#")+1,
find(line, "in Books") - find(line, "#") - 2);
rank = input(rankchar, comma9.);
run;
proc print data=test noobs;
var rank;
run;
R
# grab contents of web page
urlcontents <- readLines("http://www.amazon.com/
SAS-Management-Statistical-Analysis-Graphics/
dp/1420070576/ref=sr_1_1?ie=UTF8&s=books
&qid=1242233418&sr=8-1")
# find line with sales rank
linenum <- suppressWarnings(grep("Amazon Bestsellers Rank:",
urlcontents))
newline <- linenum + 1  # work around October 2010 blank lines
while (urlcontents[newline] == "") {
  newline <- newline + 1
}
# split line into multiple elements
linevals <- strsplit(urlcontents[newline], ' ')[[1]]
# find element with sales rank number
entry <- grep("#", linevals)
# snag that entry
charrank <- linevals[entry]
# kill '#' at start
charrank <- substr(charrank, 2, nchar(charrank))
# remove commas
charrank <- gsub(',','', charrank)
# turn it into a numeric object
salesrank <- as.numeric(charrank)
cat("salesrank=",salesrank,"\n")
The resulting output (on July 16, 2009) is
SAS
rank
23476
R
salesrank= 23467