
Tuesday, July 29, 2008

Using Sysctl To Change Kernel Tunables On Linux

Hey again,

After correcting a few of the train-wreck sentences I wrote for yesterday's post (note to self: must not think while typing ;) I figured I'd flip 180 degrees and move on to the subject of modifying kernel tunable parameters on Linux (specifically tested on RedHat) using sysctl. For the Solaris enthusiasts out there (who haven't worked with Linux all that much), /etc/sysctl.conf on RedHat Linux can be thought of as a rough equivalent of /etc/system on Solaris (the last time we came close to drawing a comparison between these two files was back in May, in a post regarding safe patching of a Veritas root disk). The sysctl command on Linux doesn't translate quite so well to some older versions of Solaris (some things could be done with ndd, but less of some and more of the other - a vague resemblance at best). Newer versions of Solaris that make use of the "project" based configuration are more directly relatable (projadd, projdel, projmod and projects are four commands that all modify, or report on, kernel tunables on-the-fly like sysctl can, although they make use of a completely separate configuration file - /etc/project - to store values), but I digress... (After yesterday's train-wreck sentences we have a run-on... coincidence? :)

As noted, sysctl is a very versatile command and can be used either in its standalone form, or through the modification of the /etc/sysctl.conf file. First, we'll take a brief look at what the sysctl standalone command can do. It doesn't have too many options, so explaining them quickly upfront will make the rest seem like it makes more sense ;) You can run sysctl with the following flags (maybe more, depending on your distro):

-a to display all the tunable key values currently available
-A to display all the tunable key values currently available, as well as table values
-e to ignore errors about unknown keys
-n to "not" print the key names when printing out values
-N to "only" print the key names and forgo printing their values
-p (sometimes -P) to import and apply settings from a specified file. This option will use /etc/sysctl.conf as the default if no file name argument is provided on the command line
-q for your standard quiet mode
-w to change kernel tunable (sysctl) settings - This changes the running value in real time only; it does not write anything to /etc/sysctl.conf, so if you want the change to survive a reboot, add it to that file as well

and two more "special" arguments:

variablename <-- Use this on its own to read a key from sysctl matching your variablename
variablename=value <-- Use this to set a variablename (key) to a specific value. Note that this needs to be used with the -w flag (which changes sysctl settings)

Some basic examples of sysctl's use would include:

host # sysctl -a <-- This will produce a huge list of output. The basic format would be: NAME TYPE CHANGEABLE - with each column's name accurately depicting what it represents. For instance, you could get an entry with a NAME of "kernel.hostid", a TYPE of "u_int" (note that this is the datatype) and a notation on whether or not it's CHANGEABLE - in this instance, "yes." The changeable field can also return "no" and "raise only." It seems logical to assume that it could return "lower only" as well, but I've yet to see it.

SPECIAL ERRATA NOTICE: If sysctl -a spews a lot of kernel warnings, check out Advisory RHBA-2008:0020-4 on RedHat's website for a patch to fix that issue.

host # sysctl -p /etc/mytestsysctl.conf <-- This will read in and enact all the kernel changes specified in your special /etc/mytestsysctl.conf file. It's a good idea to use a different filename when testing out new sysctl.conf settings, especially if you're making broad changes, since, if you completely screw the pooch and your machine reboots, it will come back up looking for the default /etc/sysctl.conf, which will still be good to go.

host # sysctl -p <-- Use the -p option without any arguments if you've made adjustments to your sysctl.conf file and want to reload it, or just to be sure that it is actually being read (if you have serious doubts, you can run "strace -f /sbin/sysctl -p" to get more granular information). If you find that you do need to use strace to run down a problem with sysctl, hopefully our previous post on using strace to debug application issues will help get you off on the right foot and on to an expedited solution.

host # sysctl -w kernel.hostname="Error.Dumping.core" <-- this will set the hostname of your machine to something that might possibly be amusing. Please ensure that your superiors (or the folks you work for) have a sense of humor before pulling a stunt like this and walking away ;)

It's interesting, also, to note that, while sysctl will work just fine with an /etc/sysctl.conf file that includes nothing but comments (or is completely non-existent), your /proc filesystem "must" be of the type "procfs" in order for it to function correctly. This is picking a nit, really, since you'd have to go out of your way to build your RedHat Linux box to use (for instance) ext3 for the /proc filesystem, but a bit of information that's good to know (maybe... at some point in the future ;) /proc/sys is the base directory for sysctl. In fact, if you wanted to emulate "sysctl -a", you could just do an ls in that directory.
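To make that last point concrete - a minimal sketch, assuming a stock Linux /proc - any sysctl key name maps to a file under /proc/sys by turning the dots into slashes:

```shell
#!/bin/sh
# kernel.ostype <-> /proc/sys/kernel/ostype, net.ipv4.ip_forward
# <-> /proc/sys/net/ipv4/ip_forward, and so on.
key="kernel.ostype"
path="/proc/sys/$(echo "$key" | tr '.' '/')"
echo "$key -> $path"
# Reading the file is the moral equivalent of "sysctl -n kernel.ostype":
[ -r "$path" ] && cat "$path"
```

The same mapping works in reverse, which is why an ls in /proc/sys looks so much like the key list from "sysctl -a".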

Tomorrow, or sometime later this week, we'll take a look at some of the kernel tunables you'll probably want to change, or may have to modify, most often with sysctl and, with as even a hand as possible, debate the pros and cons of some of the more "impactful" values that you can mess with.

Cheers,

, Mike

Friday, June 6, 2008

Piped Variable Scoping In The Linux Or Unix Shell

Hey There,

Today we're going to look at variable scoping within a piped-while-loop in a few Linux and Unix shells. We're actually almost at this point in our series of ongoing posts regarding bash, Perl and awk porting.

Probably the most interesting thing about zsh (and shells that share its characteristic, in this sense) is that the scope of variables passed through a pipe is slightly different than in other shells, like bash and ksh (note that not all vendors' versions are equal even if they have the same name! For instance, HP-UX's sh is the POSIX shell, while Solaris' is not). I'm taking care to separate the piping construct from the "is the while loop running in a subshell?" argument, as I don't want to get too far off course. And, given this material, that can happen pretty fast.

For a very simple demonstration of whether the scoping issue is a "problem" (defining problem as either a bug or a feature ;) with the while-loop or pipes, we'll look at a very simple "scriptlet" that sticks to using a while-loop, without any piping, like this:

while true
do
bob=joe
echo " $bob inside the while"
break
done
echo " $bob outside the while"


And we can see, easily, that the value of the "bob" variable stays the same, even after the while loop breaks, for all 3 of our test shells. If the while loop, alone, was the issue, bob shouldn't be defined when the while loop breaks:

host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while



If we change this scriptlet slightly to make it "pipe" an echo to the while-loop, the behaviour changes dramatically:

echo a|while read a
do
bob=joe
echo " $bob inside the while"
break
done
echo " $bob outside the while"


Now, if we use zsh, the value assigned to the "bob" variable inside our while loop (which has been created on the other side of the pipe) actually maintains its state when coming out of the loop, like this:

host # zsh ./test.sh
joe inside the while
joe outside the while


On most other shells, because of variable scope issues with the pipe, an empty value of the "bob" variable is printed after they break out of the while loop, even though it does get correctly defined within the while loop. This is because (and here's where the technicality, and subtle differences between myriad shells, usually becomes a hotbed of raging debate ;) after the pipe, the entire while loop (not just the read command) runs in a subshell, like so:

host # bash ./test.sh
joe inside the while
outside the while
host # ksh ./test.sh
joe inside the while
outside the while


Notice, again, that the "echo $bob outside the while" statement in these two executions prints an empty value when $bob is referenced outside the while loop, even though it is set within the while loop.

For most shells, this is easy to get around in one respect. The main problem stems from the fact that the value is being piped to the while loop; it's not a direct fault of the while loop itself. Therefore, a fix like the following should work, and does. Unfortunately, with the command-pipe (such as an echo statement), you won't be able to use a while-loop in many cases, and would have to substitute a for-loop, like so (in most older shells, feeding the while loop with a here-string redirection - <<< - at the end of the loop will either result in an error or clip the script at that line):

for x in 1
do
bob=joe
echo " $bob inside the while"
done
echo " $bob outside the while"


host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while


This gets worse (usually hangs) if you try to get around the pipe by doing some inline subshelling with backticks, like:

while read `echo 1`

However, the following solution (awkward though it may be) does actually do the trick (substitute any other fancy i/o redirection you want, as long as you "avoid the pipe"):

exec 7<>/tmp/bob
echo -n "a" >&7
while read -r line <&7
do
bob=joe
echo " $bob inside the while"
done
echo " $bob outside the while"
exec 7<&-
exec 7>&-
rm /tmp/bob

host # zsh ./test.sh
joe inside the while
joe outside the while
host # ksh ./test.sh
joe inside the while
joe outside the while
host # bash ./test.sh
joe inside the while
joe outside the while


For more examples of input/output redirection, check out our older post on bash networking using file descriptors.

Now, when it comes to reading in files, the case is a bit easier to remedy. If you're in the habit of doing:

cat SOMEFILE |while read x
...


You'll run into the same scoping problem. This is also easily fixed by using i/o redirection, which would change our script to this:

while read x
do
bob=joe
echo " $bob inside the while"
done < SOMEFILE
echo " $bob outside the while"


Assuming the file SOMEFILE had one line of content, you'd get the same results as we got above with the for loop.

And that's about all there is to that (minus the highly probable ensuing arguments ;). There are, I'm sure, a couple more ways to do this, but using the methods that fixed the "problem" of variable scope in bash and ksh is probably better practice, since zsh (and shells that share its distinction in this case) is a rare exception to the rule (even though zsh may very well be doing things the "proper" way) and the bash/ksh fix works in zsh, while the opposite is not true.
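One more modern footnote (these are assumptions about your shell version, so verify before relying on them): bash, ksh93 and zsh support process substitution, which feeds the loop without a pipe and so keeps it in the current shell, and bash 4.2 adds "shopt -s lastpipe" to run the last segment of a real pipe in the current shell:

```shell
#!/bin/bash
# Process substitution replaces "echo a | while ..." - the loop
# stays in the current shell, so bob keeps its value afterward.
while read a
do
 bob=joe
 echo " $bob inside the while"
done < <(echo a)
echo " $bob outside the while"
```

With lastpipe enabled (bash 4.2+, with job control off, as in a script), even the plain "echo a | while ..." form keeps the loop in the current shell.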

At long last, good evening :)

, Mike


Thanks for this comment from Douglas Huff, which helps to clarify the underbelly of the process:

A friend of mine pointed me to this article and the
previous one in the series that you wrote [on variable scoping]...

I had two comments on these articles but you seem to have
comments disabled, so I figured I'd email them to you.

First, calling it a "scoping" issue is a bit misleading.
While technically true, understanding the underlying
reasons why this doesn't work as "expected" is key to
understanding how you can work around it in POSIX sh or in
ksh without the zsh/bash syntactical sugar for doing so.

What's going on is that a process cannot modify the
environment of its parent.

When you do:

something | while read blah; do blah; done

What the shell is doing is first executing a subshell
(separate process) that runs the while with stdin
redirected to read from the unnamed pipe. Then in another
subshell it runs "something" with standard out redirected
to the unnamed pipe.

Knowing this it's quite easy to replicate the behaviour
from bash 2/3 and zsh in POSIX sh and ksh with a bit of
understanding of the underlying mechanics. The trick is to
keep the while inside of the original process (since it is
run by the interpreter and does not require a separate
process) and execute the other command in a subshell.
Which is exactly what the syntactical sugar does for you
behind the scenes in bash2&3/zsh.

Monday, May 12, 2008

String Variables In Bash, Perl, C and Awk on Linux or Unix - Porting

Hey There,

Once again, we're back to porting. In this series of posts that's run the gamut from a somewhat brief explanation of the shebang line (The rightful starting place) to shell, Perl and C porting examples of a fully functional useradd program , we're finally coming back around to the beginning and getting back to basics. Trust me; this will eventually all fall together. When it does, I'll be sure to put up a road map so no one has to try the hit-and-miss blog-search method of information nesting ;)

Today, we're going to look at the simple variable (also referred to as scalar, string, etc); defining, assigning value to and extracting value from it on both Unix and Linux based systems. Our approach is going to be concept-based. That is to say that, for each post on cross-language porting, we'll be hitting on a single concept (such as the simple variable) and showing how each can be applied to our chosen four languages: bash (or shell), Perl, C and Awk (Some folks think Awk isn't a programming language for some reason, but we'll demonstrate, over time, that it must be, since it contains all the constructs that generally define a language as a "programming" language; as opposed to a "markup" language like HTML). I'm going to try and keep this linear, so we're starting out with the very basics and will, eventually, work toward more complex programming constructs.

Here we go:

The simple variable is relatively accurately described by its name. This is one of the simplest forms of variables, as it can be realistically thought of as having only two parts: part 1 is the variable itself, and part 2 is that variable's definition or value. If we say x=y, then the variable x (itself) equals y (its definition/value). Simple enough.

1. Defining, Initializing or Declaring the simple variable. This part is going to be simple for every kind of variable (simple and otherwise) because, except in C, no explicit declaration of most variables is necessary. For the simple variable, Bash, Perl and Awk allow you to define the variable when you assign it value. C requires that you define your variable before you use it. In Bash, Perl and Awk, you have the option to define your variable before use if you wish. Examples below (Note that all beginning and ending double quotes are for emphasis only and not actual code):

Ex: Defining a variable called MySimpleVariable.

In Bash: Just type "declare MySimpleVariable" (you can also use "typeset," and both have options to specify what type of variable you want your simple variable to be. For instance, you could type "declare -i MySimpleVariable" if you wanted your variable to be limited to only being an integer. For now, we're not imposing any restrictions.)

In Perl: Just type "my $MySimpleVariable;" (the "my" is required if you "use strict," and good practice even if you don't)

In Awk: You don't need to type anything - Awk variables spring into existence the first time they're referenced, so simply using "MySimpleVariable" is declaration enough (it starts life as an empty string or zero, depending on context)

In C: You "need" to declare/initialize your variables before you can use them. For this post, we'll stick to numbers and strings for the simple variable (even though, technically, a char string is an array in C). Pretty much everything else isn't simple ;)

For a simple integer variable, just type: "int MySimpleVariable;"
For a simple string variable, just type: "char *MySimpleVariable;" (This generally needs to be followed by a declaration of the size/memory-allocation-requirement of the string, like "MySimpleVariable = (char *)malloc(9*sizeof(char));" for an 8 character string - note the extra ninth byte to hold the terminating '\0')

2. Assigning values to the simple variable. This is very straightforward in all of our four languages:

Ex: We want to assign the value "MySimpleValue" to the simple variable named MySimpleVariable (Note that any values that contain spaces should be quoted).

In Bash: Just type "MySimpleVariable=MySimpleValue" - Spaces between the variable, "=" sign and value are not permitted.

In Perl: Just type "$MySimpleVariable = "MySimpleValue";" - Spaces between the variable, "=" sign and value are optional. Note the double quotes around the value: an unquoted bareword happens to work if you aren't running under "use strict," but quoting the string is the correct form.

In Awk: Just type "MySimpleVariable = "MySimpleValue"" - Spaces between the variable, "=" sign and value are not, technically, necessary, but recommended. Also, note that "MySimpleValue" is placed within double quotes in the assignment. This is necessary for string values but not for numeric ones: an unquoted "a = b" assigns the value of the variable b (empty, if b was never set), while "a = "b"" assigns the literal string "b," and "a = 1" assigns the number 1.

In C: Just type: "MySimpleVariable = MySimpleValue;" for an integer assignment. For a string, it depends on how the variable was set up: you can point a "char *" directly at a double quoted literal (e.g. "MySimpleVariable = "MySimpleValue";"), but to fill a malloc'd buffer you'd copy into it instead (e.g. "strcpy(MySimpleVariable, "MySimpleValue");" - after making sure the buffer is large enough to hold the value and its terminating '\0').

3. Extracting the value from your simple variable. Finally, it's all going to pay off :)

Ex: We want to print the value of the MySimpleVariable variable. This is also fairly simple in all four languages (Okay, C is always going to be a bit more of a pain ;)

In Bash: Just type "echo $MySimpleVariable" - Note that the $ character needs to precede the variable name when you want to get the value.

host # echo $MySimpleVariable
MySimpleValue


In Perl: Just type "print "$MySimpleVariable\n";" - Note that the $ character needs to precede the variable name when you want to get the value - The \n, indicating a newline, isn't necessary, but is nice if you don't want your output on the same line as your next command prompt:

host # perl -e '$MySimpleVariable = "MySimpleValue";print "$MySimpleVariable\n";'
MySimpleValue


In Awk: Just Type "print MySimpleVariable" - Note that the $ character "must not" precede the variable name when you want to get the value.

host # echo|awk '{MySimpleVariable="MySimpleValue";print MySimpleVariable}'
MySimpleValue


In C: Just type "printf("%d\n", MySimpleVariable);" for an integer assignment. For a character, or string assignment, you would type: "printf("%s\n", MySimpleVariable);" -- The %s in printf indicates a "string" value and the %d indicates a simple decimal (or integer) value. Note that, for this post, we're going to skip the whole compile part of getting your C program to get you output. You can just take the examples from the preceding steps as guidance. There is a bit more to making a standalone C program than there is to making a standalone program with our other three languages.

host # ./c_program
MySimpleValue


And that's all there is to the simple variable (for the most part ;)

Enjoy, and Cheers,

, Mike

Saturday, May 10, 2008

Finding the Number of Characters In A Variable Regardless Of Your Linux Or Unix Shell

Hey there,

Here's a simple enough task: Finding (or determining) the amount of characters in a string variable. In most Unix or Linux shells, this has become a trivial exercise. For instance, in bash or ksh, you can easily tell how many characters are in a particular string variable by either writing one line of code, or skipping right to the end and printing out the answer. By way of example:

host # x=thisIsMyVariable
host # echo ${#x}
16


And it's that simple. The variable x, which contains the string value "thisIsMyVariable", does, indeed, contain 16 elements (or characters). As you can see, in the more advanced shells, taking care of this is no problem.

However, in certain circumstances, you may be required to use a less-advanced, but more highly-portable, shell, like the original Bourne shell. In that case, this same sequence of commands would result in the following:

host # x=thisIsMyVariable
host # echo ${#x}
host #
<--- At best, you'll get zero here. Expect nothing. You might possibly get an error.

Our script attached today uses the old-style expr (Well, it uses the new style expr with the old format and options, again, for maximum portability between systems and shells). It does the exact same thing that echo'ing ${#VARIABLE_NAME} does, but goes about it in a grueling and confusing manner. Remember back when you had to write scripts that took the shell's needs into consideration before your own? ;) Nowadays, things are much more intuitive.

Given a string, like "thisIsMyVariable," the following script will produce the exact same results. It will spit out how many characters are in that string, but take a longer, more confusing route to the answer. This is also a good example of obfuscation in action. In general, having to use the expr command can be a hurdle for folks who've never used a shell that can't do arithmetic or string comparison on its own :)
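For the curious, the heart of the script is the classic expr pattern-match form, which you can try right at the prompt - a quick sketch:

```shell
#!/bin/sh
# expr "STRING" : 'REGEX' prints the number of characters the
# regex matched; since .* greedily matches the whole string,
# that number is the string's length.
x=thisIsMyVariable
expr "$x" : '.*' # prints 16
```

This form works even in the original Bourne shell, which is the whole point of the exercise.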

Sample output (as boring as it is ;):

host # ./string.sh thisIsMyVariable
16


And now, you can get the string length no matter how limited your shell is!

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/sh

#
# string.sh - print out how many characters are in a string variable
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

if [ $# -ne 1 ]
then
echo "Usage: 0ドル string"
exit 1
fi

string=1ドル
echo $string|while read x
do
chars=`expr "$x" : '.*'`
echo $chars
done


, Mike

Saturday, April 26, 2008

Shell Special Variables In Bash

Hello again,

Today, we're going to take a look at the Bash shell (for Linux or Unix) and go over some of (I find) the most useful aspects of it. I could be talking about quite a lot of things, actually. It's hard to say what I really like the "best" about anything, but the built in special variables in Bash come in handy an awful lot (Especially if you get stuck in the shell on a crashing system or have to access the network under equally hopeless conditions).

Today, we're going to run down these very basic variables, one by one. And, the reason that they're called "special" isn't an indication of their relative value. These variables are "special" because they can only be referenced and never directly assigned values to :)

1. * : The * variable expands to all the parameters on a command line. Since we're talking about built in variables today, I don't mean *, like in "ls *", but * as in "echo $*", which produces nothing. However if there are other parameters on the command line, expanding this variable equals all of the command line parameters, like 1,ドル 2,ドル 3ドル, etc. If $* is surrounded by quotes ("$*"), it equals all of the parameters as one value, separated by the default field separator (IFS - usually a space, tab or newline), like "1ドル 2ドル 3ドル"

2. @ : The @ variable expands the same as the * variable when called without quotes as $@. When called between double quotes, as "$@", it expands into all the command line parameters, but each parameter is separate (rather than all together in one giant double quoted string, separated by spaces, as with "$*"), like "1ドル", "2ドル", "3ドル", etc.
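The difference between the two is easiest to see when an argument actually contains a space. A small sketch (the word-counting loops are just for illustration):

```shell
#!/bin/sh
# With one argument that contains a space, "$@" preserves it as a
# single parameter, while "$*" joins everything into one string.
set -- "one two" three

count=0
for arg in "$@"; do count=`expr $count + 1`; done
echo "\"\$@\" expands to $count separate parameters"

count=0
for arg in "$*"; do count=`expr $count + 1`; done
echo "\"\$*\" expands to $count joined parameter"
```

With those arguments, the first loop runs twice ("one two" and "three") while the second runs once ("one two three").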

3. # : The # variable expands to the number of parameters on a line. It's most often used to check to see if the proper amount of arguments have been passed to a script. For example, this would show how the $# variable could be used to test that a script is being called with only 2 arguments:

if [ $# -ne 2 ]
then
echo "Usage: 0ドル arg1 arg2. Quitting"
exit
fi


4. ? : The ? variable expands (as $?) to the exit status (return code) of the last executed command. In a pipe-chain, it equals the exit status of the last executed command in the chain.

5. - : The - variable expands to the current shell's options. For instance, if you were logged into a shell and executed "echo $-", you'd probably see something similar to this:

host # echo $-
himBH


Which (of course ;) would mean that the shell had been invoked with the -i (forces the shell to interactive mode, so it reads the .bashrc when it starts), -h (remembers where commands are when they get looked up), -m (enables job control, so you can run background processes), -B (enables brace expansion in the shell whereby, for instance, "file{a,b}" would equal "filea fileb") and -H (enables ! character history substitution) flags.
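A practical use of $- (a sketch of a common startup-file idiom, not anything specific to this post) is testing whether the current shell is interactive before doing interactive-only setup:

```shell
#!/bin/sh
# If "i" appears in $-, the shell is interactive; scripts run
# non-interactively, so running this as a script prints the
# second branch.
case $- in
 *i*) echo "interactive shell - safe to set prompts, aliases, etc." ;;
 *) echo "non-interactive shell - skipping interactive setup" ;;
esac
```

You'll often see exactly this test at the top of a .bashrc to bail out early for non-interactive invocations.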

6. $ : The $ variable expands to the process ID of the shell, or subshell (as happens when a script is executed, for instance), in which it's invoked (as $$). It's generally used to determine the process ID of a shell in programming and it should be noted that, if it's used within a subshell that is generated with parentheses (e.g. ($$)), it will actually retain the process ID of the parent shell, rather than the subshell.
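To see that parent-PID behavior for yourself, here's a small sketch ($BASHPID is a bash 4 feature, so treat that part as an assumption about your bash version):

```shell
#!/bin/bash
# $$ is unchanged inside a ( ) subshell; it still names the parent.
parent=$$
child=$( (echo $$) ) # runs in a subshell, yet $$ is still the parent PID
if [ "$parent" = "$child" ]
then
 echo "\$\$ in the subshell matched the parent shell's PID"
fi
# In bash 4+, $BASHPID reports the subshell's own PID instead:
# ( echo $BASHPID ) # differs from $$
```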

7. ! : The ! variable (which you might remember from your options list when checking $-) expands to the process ID of the last run background command. This is different than $?, which reports on the return code of the last run command. For instance, this is one way to demonstrate using $!:

host # echo $! <--- No value because we have no jobs running in the background
host # sleep 200000 &
[1] 23902
host # echo $!
23902
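One handy follow-on (a generic sketch, not from the post): capturing $! right away lets you wait for, or kill, that specific background job later:

```shell
#!/bin/sh
# Capture the background job's PID immediately - $! changes as
# soon as another background command is launched.
sleep 1 &
pid=$!
echo "waiting on background PID $pid"
wait "$pid"
echo "background job exited with status $?"
```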


8. 0 : 0ドル expands to the name of the shell you're in, or the shell script that it's being called from. It's generally found in usage messages, like in example 3 in the Usage message from the test against $#'s value. From within a script called blackhat.sh,

"Usage: 0ドル arg1 arg2. Exiting."

would print something like:

"Usage: ./blackhat.sh arg1 arg2. Exiting."

In certain circumstances it can resolve, or expand, to the first argument after the string set to execute when a shell is invoked with the "-c" option, or it can be set to the file name used to invoke Bash, if Bash is called by another name (like "rbash").

9. _ : The _ variable is set to the absolute (not relative) file name of your shell when you start it up (e.g. $_ = /bin/bash) or the script being executed if it's passed in an argument list when the shell is invoked. After that, it expands to the last argument of the previously executed command. For instance:

host # vmstat 1 1
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m1 m1 m1 m1 in sy cs us sy id
0 0 0 37857184 2413576 16 152 0 0 0 0 0 0 1 1 0 1204 996 483 1 2 97
host # echo $_
1


As a side note, when you're using /bin/mail, the $_ variable is equal to the mail file you're checking.

And that's it, as far as I know (The basic ones, anyway ;)

Hopefully being aware of these Bash built in variables, and knowing what they all do and mean, will help you out or, at the very least, make your shell scripting easier and/or more productive in the future :)

Best wishes,

, Mike

Tuesday, April 22, 2008

Functions Vs. Subroutines In Perl And Bash - Palindromes Revisited

Hey There,

As the clever title of today's post suggests, this is the follow up to our post on scripting out a way to determine if a string is a palindrome using Perl.

One of the main things to note (which isn't the only difference, of course) is the way in which the Bash shell and Perl deal differently with "routines." In our Perl script, we did the meat of our recursive work inside a "subroutine." In today's Bash script, that same work is handled inside a "function."

Functionally speaking, both a Perl "subroutine" (identified by the "sub" declaration) and a Bash "function" (identified by the "function" declaration) do the same things. They allow you to create a reusable block of code for use within your script. Naturally, functions and/or subroutines are a great help if you find that your script consists of typing the same set of instructions more than once (if you have to type the same block of code more than twice, they're even better ;)

Technically speaking, there are some major differences between the two that should be noted. These should be platform independent and true for both Bash and Perl on Linux or Unix.

In Perl, since all of the information in the script is processed before the script is run, you can include your "subroutine" anywhere in the script, even if it's after the line on which you call it. It's common practice to put subroutines at the bottom of the script, but you can put them anywhere within the script that you like, if you're so inclined.

In Bash, since the script is parsed from top-to-bottom in a left-to-right fashion, "functions" absolutely need to appear in the script before the line on which they are called. If you put a function definition at the bottom of your Bash shell script and call that function 15 lines prior, you'll receive something along the order of a "command not found" error.

In Perl, when you pass arguments to a subroutine, you have a few different ways you can do it (basically, deprecated methods still work), but the most elegant way to pass simple variables to your Perl subroutine is to include them within parentheses, separated by commas, like so:

MySub( $var1, $var2, $var3);

The subroutine that would accept, and process, all of those arguments may look something like this (note that Perl delivers the arguments in the special @_ array, which we unpack into named variables - listing variables in parentheses after the subroutine name is not how Perl argument passing works):

sub MySub {
my ( $var1, $var2, $var3 ) = @_;
print "$var1 $var2 $var3\n";
}


In Bash, when you pass arguments to a function, you can just pass them as if they were arguments to a regular command, like this:

MyFunction $var1 $var2 $var3

The function that would accept, and process, these arguments may look something like this (inside a Bash function, the arguments arrive as the positional parameters 1,ドル 2ドル and 3,ドル not under the caller's variable names):

function MyFunction {
echo "1ドル 2ドル 3ドル"
}


Interesting side note: In Bash, if you declare a function, you can leave out the reserved word "function" if you want to, but this requires that you use parentheses, like with Perl. So you could write your function like this:

MyFunction () {
echo "1ドル 2ドル 3ドル"
}


However, if you opt to use the reserved word "function" when declaring your function (like we do), the parentheses following the function name are optional :)

You may have noted that we haven't mentioned anything about "scope" with regards to variables and functions or subroutines. This is on purpose. We'll inevitably tackle that in a later post, but it's beyond the "scope" of this one (that was a truly painful pun for all of us... but utterly unavoidable ;)

On that note (for the palindromes), in our Bash script we've taken advantage of that "lack of scope" and simply set the "$status" variable globally within the palindrome function. In our Perl script, we used "return" to pass back either a 0 or a 1 to the calling procedure. The major difference here is that our Perl script was returning a code back to a variable which was defining its value based on the outcome of the subroutine process, so:

$status = palindrome( @string, $chars, $count );

was giving the "$status" variable a value of 0 or 1, depending on what the "palindrome" subroutine returned. In Bash, we jumped right over the middle man and just set the global "$status" variable from within the function (This isn't generally recommended, but okay for our purposes here). Assuming there are no bugs in your version of Bash that mangle the scope of variables within functions, the only risk you're taking, by defining a global variable from within a function, is that you'll forget about it and redefine it outside the function, thereby overwriting the value. But, again, that's for another day, as I can feel an essay coming on ;)

Hope you enjoy the Bash version of our simple palindrome script and that it helps you see the correlation between not only functions and subroutines, but some other aspects of coding that are either common, or unique, to both Bash and Perl.

Cheers,


Creative Commons License


This work is licensed under a
Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License

#!/bin/bash

#
# bashpal.sh
#
# 2008 - Mike Golvach - eggi@comcast.net
#
# Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
#

function palindrome {
    if [ "${string:${count}:1}" == "${string:${chars}-${count}-1:1}" ]
    then
        let count=$count+1
        palindrome "$string" "$chars" "$count"
    fi
    if [ $count -eq $chars ]
    then
        status=1
    else
        status=0
    fi
}

printf "Enter A String: "
read string

chars=${#string}
count=0
palindrome "$string" "$chars" "$count"

if [ $status -eq 1 ]
then
    printf "\n That String Is A Palindrome.\n\n"
else
    printf "\n That String Is Not A Palindrome.\n\n"
fi


, Mike





Wednesday, March 5, 2008

Beginning Your Spec File For Building Linux RPMs.

Hello Again,

This is a continuation of yesterday's post on doing the initial build of your software as the first step toward creating your own Linux RPM. Today we're going to continue on that path with our software package (cleverly named PACKAGE-3.2-1 ;) and move on to the next step in the RPM creation process: the creation of the specification file (which I'll refer to as the "spec file" from here on out) and its basic structure.

For the purposes of our example, we won't be including every single option you can include in a spec file, but we'll touch on all the required ones, and a few that might make your life a bit easier :)

We'll create our spec file by opening it up as a new file in our favorite editor. The spec file is simply a plain text file, describing our RPM's attributes, that we'll be supplying to the "rpmbuild" command later on. We'll look at the spec file, line by line, for our soon-to-be-completed package. We'll call it PACKAGE.spec

The first section is what I like to call the "header section," although I don't know that it's ever actually referred to as that. It looks like the following:

Summary: The World Famous PACKAGE software
Name: PACKAGE
Version: 3.2
Release: 1
Copyright: GNU General Public License
Group: Applications
Source0: PACKAGE-3.2-1.tar.gz
Prefix: /usr/local
Provides: PACKAGE3, PACKAGE-devel, bunk
Requires: PACKAGE1, db
URL: http://xyz.com
Packager: Mike Tremell
# %define _topdir /users/me/softbuilds/rpm
%description
PACKAGE 3.2-1 Standard Build
%define PACKAGEDIR %{prefix}


The lines in this section are defined as follows:

Summary - This line should be a short, descriptive summary of your RPM package. It shouldn't be too wordy, although there's no hard limit on its length.

Name - This is simply the name of your package. It must match the source package's name (which we created in /usr/src/packages/SOURCES/ in yesterday's post on setting up the initial build for your RPM). It will later be used (with the Version and Release keyword values appended) by rpmbuild to determine what source file it should look for in /usr/src/packages/SOURCES.

Version - The version of your software. Again, this must exactly match the version of the source package you're building from.

Release - The release of the software. This, again, must be exactly the same as the release you're building from. Using the "Name," "Version," and "Release" variables, rpmbuild will surmise that we want to build our RPM from PACKAGE-3.2-1.tar.gz in /usr/src/packages/SOURCES. Note: The Release variable is required and (for whatever reason) cannot be an empty value. Thus we put in the obligatory 1 for our new build of the generic PACKAGE software. Whatever you're building will probably have a release number.

Copyright - Just attribution for the package and/or the package source's author(s).

Group - This identifies our package as belonging to the "Applications" package group in Linux.

Source0 - The name (Yes, this is redundant ;) of the source package in /usr/src/packages/SOURCES. As the name implies, you can have more than one source in certain circumstances. Note that there is also a "Patch0," etc, declaration that you can use to specify diff files you may need to include, but that's beyond the scope of our exercise.

Prefix - The root under which the RPM will install itself. Note: You need to include the Prefix declaration if you want your RPM package to be "relocatable." If your package is not relocatable, all installs will always install to the same place. If your package is relocatable, it allows the user or administrator to change the prefix, or root install directory, of the RPM when they run the "rpm" command to install it!

Provides - What your RPM will "provide" once it is installed. This is mainly for rpm's dependency checking. You can have your package provide anything, but it's best to have it provide only useful values. This variable isn't necessary if your software package isn't providing anything significant and/or doesn't have another package that's dependent on it.

Requires - The RPM packages your RPM package needs to have installed on your system before it will install correctly. In our example, PACKAGE will not install correctly (it will fail the dependency check) if some version of the "db" (Berkeley Database) RPM isn't already installed on our system.

URL - The location where a user can download the latest version of the RPM source. Not necessary, but can be helpful.

Packager - The name of the guy (or gal) who put together the RPM. Again, not necessary unless you're seeking notoriety ;)

# %define _topdir /users/me/softbuilds/rpm - Note that this line is commented out for our build, since we're going with the system defaults. However, comments in spec files work just like comments in shell scripts, so if you wanted to change the top level directory for your RPM structure, you could do it by redefining the _topdir variable (In this case, simply by removing the comment). You may need to create its required BUILD, BUILDROOT, RPMS, SOURCES, SPECS and SRPMS subdirectories manually. This option is useful if you're trying to do a build as a regular user and don't have the system privilege that will allow you to manipulate your server's main RPM package working directories :)
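If you do redefine _topdir, creating those subdirectories is a quick one-liner. This sketch uses a throwaway directory under ${TMPDIR:-/tmp} rather than a real build area, just so it's self-contained; substitute whatever path you set _topdir to:

```shell
# Build out the directory tree rpmbuild expects under a custom _topdir.
# The path below is hypothetical, chosen only for illustration.
topdir="${TMPDIR:-/tmp}/rpmtree"
mkdir -p "$topdir"/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
ls "$topdir"
```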

%description - Just like the summary above, but should be more descriptive. Also, you should add your description on a new line following the %description declaration (unlike all the others). "rpmbuild" will spew error messages if you put your description's value on the same line as the variable. You can actually include "one" word on the same line as the %description variable, but the rest of your description must be below that line. It should also be noted that you can have multiple %description declarations, which is useful if you're building multiple RPM's from one spec file (although that's deeper than we want to go for now).

%define - This directive can be used over and over again. Its simple format is: %define variable_name value. You can call the variable name you define here using the %{} notation. For instance:

%define mydir /usr/local/bin

Would allow you to reference /usr/local/bin, at any point (from the point of definition on) in your spec file, as %{mydir}.
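As a slightly fuller (hypothetical) sketch, a pair of %define lines and their later use might look like this in a spec file; the macro names here are purely illustrative:

```spec
# hypothetical %define usage
%define mydir /usr/local/bin
%define myconf %{mydir}/etc

# later in the spec file, both macros expand wherever they're referenced:
#   install -d %{buildroot}%{mydir}
#   install -d %{buildroot}%{myconf}
```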

Tomorrow, we'll check out the next, and final section of the RPM spec file and use "rpmbuild" to create our RPM package!

Cheers,

, Mike





Wednesday, February 27, 2008

Simplifying File Renaming Using Bash Without Sed

Hello again,

This is a little tip I picked up that I really like. It's not new; just something I never gave any real thought to. It has to do with renaming files (a common enough task) en masse, in as few keystrokes as possible. Putting it that way makes the trick sound more limited than it really is, though; it can be used for many other purposes.

I was in the bad habit (Well, it's not a bad habit, really, since it'll work on almost any distro of Linux or Unix) of typing extremely long command lines to change file names. For instance, if I had a directory full of script files that looked like this:

host # ls
script1 script2 script3


and I wanted to copy them all off to files with a .bak extension (In case I screwed up my edits on the originals), I would almost always type something like the following (Note that this command line is a perfectly acceptable way to rename your files if you can't do it the way I'm going to explain here):

host # ls -1 script*|while read name;do newname=`echo $name|sed 's/$/\.bak/'`;cp $name $newname;done

which would work perfectly well, and leave me with:

host # ls
script1 script1.bak script2 script2.bak script3 script3.bak


In Bash (on Linux or Unix) you can get around this with brace expansion. Like I said, especially in this instance, this may not seem like much, but it does save a lot of typing (and can be used to help you save time in a lot of different ways if you use your imagination :)

So, for purposes of demonstration, we'll put everything back the way it was:

host # rm *.bak
host # ls
script1 script2 script3


Now we're going to look at Bash's expansion operators and see how much easier they can make this whole process. The expansion operators are, basically, curly brackets - which can be nested - containing values. The two most important things to note about them are that:

1. The values must be separated by commas (e.g. {1,2,3})
2. The values can't contain any spaces between themselves and the commas - on either side - unless the values are quoted, like so (there can also be no space between the quotation marks and the comma separators):

Correct = {"1 "," 2 "," 3"}
Who Knows? = {1 , 2 , "3"}
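A quick sketch of those two rules in action (the strings here are arbitrary):

```shell
# Rule 1: values separated by commas, with no stray spaces around them
echo pre{A,B,C}post   # → preApost preBpost preCpost

# Rule 2: quoted values may contain spaces, but the commas themselves
# must still touch the quotes
echo x{"1 ","2 ","3"}y
```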


So, now we can do the exact same thing we did above (add the .bak extension to our 3 script files), but do it a lot more cleanly and quickly - not to mention the fact that your input shouldn't bleed over a simple 80 column tty display any more ;)

host # ls -1 script*|while read name;do cp $name{,.bak};done
host # ls
script1 script1.bak script2 script2.bak script3 script3.bak


Here's an even shorter line, but it might cause you problems, since "for x" splits its input on any whitespace, not just on newlines. This wouldn't work very well if you were trying to operate on files with spaces in their names (you can read more on that in our previous post on working with Windows generated file names):

host # for x in `ls -1 script*`;do cp $x{,.bak};done

Not only do Bash's built-in brace expansion operators shorten my line; once you get used to using them, they also make it more readable. If you noted that my expansion, inside the curly brackets, was missing a value before the first comma, that was intentional. Although you can't have spaces between the values in your expansion set, you can have an empty value (no spaces). This way the first "append" that the expansion does is just like doing no append at all.

For example:

host # ls script{1,2}
script1 script2


And now, with a blank first variable (You can put your blank variable anywhere - more than once - but it makes the most sense to use it first in the "cp" command above since I want to copy from the original file name to the new name):

host # ls script{,2}
ls: script: No such file or directory
script2
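And, since the curly brackets can be nested (as noted above), here's a small hypothetical nested example:

```shell
# The inner {d,e} expands within the outer list, so the outer
# values become: b, cd and ce
echo a{b,c{d,e}}   # → ab acd ace
```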


Like I said, this little trick (well, technically not a trick, since it's built in to Bash ;) has saved me a lot of time and I hope you find it useful as well!

Cheers,

, Mike





Posted by Mike Golvach at 12:15 AM  
