Saturday, May 19, 2007

Displaying the Full Path Name in the BASH Shell Prompt

> vi /etc/bashrc
> PS1="[\u@\h \w]\\$ "

Bash Prompt HOWTO

http://tldp.org/HOWTO/Bash-Prompt-HOWTO/

Saturday, January 27, 2007

BASH Quick Guide

http://pegasus.rutgers.edu/~elflord/unix/bash-tute.html

A quick guide to writing scripts using the bash shell

A simple shell script

A shell script is little more than a list of commands that are run in sequence. Conventionally, a shell script should start with a line such as the following:
#!/bin/bash
This indicates that the script should be run in the bash shell regardless of which interactive shell the user has chosen. This is very important, since the syntax of different shells can vary greatly.

A simple example

Here's a very simple example of a shell script. It just runs a few simple commands:
#!/bin/bash
echo "hello, $USER. I wish to list some files of yours"
echo "listing files in the current directory, $PWD"
ls # list files

Firstly, notice the comment on line 4. In a bash script, anything following a pound sign # (besides the shell name on the first line) is treated as a comment, i.e. the shell ignores it. It is there for the benefit of people reading the script.

$USER and $PWD are variables. These are standard variables defined by the bash shell itself, they needn't be defined in the script. Note that the variables are expanded when the variable name is inside double quotes. Expanded is a very appropriate word: the shell basically sees the string $USER and replaces it with the variable's value then executes the command.

We continue the discussion on variables below ...

Variables

Any programming language needs variables. You define a variable as follows:
X="hello"
and refer to it as follows:
$X
More specifically, $X is used to denote the value of the variable X. Some things to take note of regarding semantics:
  • bash gets unhappy if you leave a space on either side of the = sign. For example, the following gives an error message:
    X = hello
  • While I have quotes in my example, they are not always necessary. Where you do need quotes is when your variable's value includes spaces. For example,
    X=hello world # error
    X="hello world" # OK
This is because the shell essentially sees the command line as a pile of commands and command arguments separated by spaces. foo=bar is considered a command. The problem with foo = bar is that the shell sees the word foo on its own, separated by spaces, and interprets it as a command. Likewise, the problem with the command X=hello world is that the shell interprets X=hello as a command, and the word "world" does not make any sense (since the assignment command doesn't take arguments).

Single Quotes versus double quotes

Basically, variable names are expanded within double quotes, but not single quotes. If you do not need to refer to variables, single quotes are good to use, as the results are more predictable.

An example

#!/bin/bash
echo -n '$USER=' # -n option stops echo from breaking the line
echo "$USER"
echo "\$USER=$USER" # this does the same thing as the first two lines
The output looks like this (assuming your username is elflord):
$USER=elflord
$USER=elflord
so double quotes still offer a workaround: escaping the dollar sign gets you the literal text. Double quotes are more flexible, but less predictable. Given the choice between single quotes and double quotes, use single quotes.

Using Quotes to enclose your variables

Sometimes, it is a good idea to protect variable names in double quotes. This is usually most important if your variable's value either (a) contains spaces or (b) is the empty string. An example is as follows:

#!/bin/bash
X=""
if [ -n $X ]; then # -n tests to see if the argument is non empty
echo "the variable X is not the empty string"
fi

This script will give the following output:
the variable X is not the empty string
Why? Because the shell expands $X to the empty string. The expression [ -n ] returns true (since it is not provided with an argument). A better script would have been:
#!/bin/bash
X=""
if [ -n "$X" ]; then # -n tests to see if the argument is non empty
echo "the variable X is not the empty string"
fi

In this example, the expression expands to [ -n "" ] which returns false, since the string enclosed in inverted commas is clearly empty.

Variable Expansion in action

Just to convince you that the shell really does "expand" variables in the sense I mentioned before, here is an example:
#!/bin/bash
LS="ls"
LS_FLAGS="-al"

$LS $LS_FLAGS $HOME

This looks a little enigmatic. What happens with the last line is that it actually executes the command
ls -al /home/elflord
(assuming that /home/elflord is your home directory). That is, the shell simply replaces the variables with their values, and then executes the command.

Using Braces to Protect Your Variables

OK. Here's a potential problem situation. Suppose you want to echo the value of the variable X, followed immediately by the letters "abc". Question: how do you do this? Let's have a try:
#!/bin/bash
X=ABC
echo "$Xabc"
This gives no output. What went wrong? The answer is that the shell thought that we were asking for the variable Xabc, which is uninitialised. The way to deal with this is to put braces around X to separate it from the other characters. The following gives the desired result:
#!/bin/bash
X=ABC
echo "${X}abc"

Conditionals, if/then/elif

Sometimes, it's necessary to check for certain conditions. Does a string have zero length? Does the file "foo" exist, and is it a symbolic link, or a regular file? Firstly, we use the if command to run a test. The syntax is as follows:
if condition
then
statement1
statement2
..........
fi
Sometimes, you may wish to specify an alternate action when the condition fails. Here's how it's done.
if condition
then
statement1
statement2
..........
else
statement3
fi
Alternatively, it is possible to test for another condition if the first "if" fails. Note that any number of elifs can be added.
if condition1
then
statement1
statement2
..........
elif condition2
then
statement3
statement4
........
elif condition3
then
statement5
statement6
........
fi

The statements inside the block between if/elif and the next elif or fi are executed if the corresponding condition is true. Actually, any command can go in place of the conditions, and the block will be executed if and only if the command returns an exit status of 0 (in other words, if the command exits "successfully"). However, in the course of this document, we will be only interested in using "test" or "[ ]" to evaluate conditions.
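Since any command can serve as the condition, a test need not involve "[ ]" at all. A minimal sketch (grep -q exits with status 0 exactly when it finds a match):

#!/bin/bash
if grep -q "^root:" /etc/passwd
then
echo "found a root entry in /etc/passwd"
fi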

The Test Command and Operators

The command used in conditionals nearly all the time is the test command. Test returns true or false (more accurately, exits with 0 or non-zero status) depending respectively on whether the test is passed or failed. It works like this:
test operand1 operator operand2
For some tests, only one operand is needed (operand2). The test command is typically abbreviated in this form:
[ operand1 operator operand2 ]
To bring this discussion back down to earth, we give a few examples:
#!/bin/bash
X=3
Y=4
empty_string=""
if [ $X -lt $Y ] # is $X less than $Y ?
then
echo "\$X=${X}, which is greater than \$Y=${Y}"
fi

if [ -n "$empty_string" ]; then
echo "empty string is non_empty"
fi

if [ -e "${HOME}/.fvwmrc" ]; then # test to see if ~/.fvwmrc exists
echo "you have a .fvwmrc file"
if [ -L "${HOME}/.fvwmrc" ]; then # is it a symlink ?
echo "it's a symbolic link
elif [ -f "${HOME}/.fvwmrc" ]; then # is it a regular file ?
echo "it's a regular file"
fi
else
echo "you have no .fvwmrc file"
fi

Some pitfalls to be wary of

The test command needs to be in the form "operand1 operator operand2" or "operator operand2"; in other words, you really need the spaces, since the shell considers the first block containing no spaces to be either an operator (if it begins with a '-') or an operand (if it doesn't). So, for example, this

if [ 1=2 ]; then
echo "hello"
fi
gives exactly the "wrong" output (i.e. it echoes "hello", since it sees an operand but no operator.)

Another potential trap comes from not protecting variables in quotes. We have already given an example as to why you must wrap anything you wish to use for a -n test in quotes. However, there are a lot of good reasons for using quotes all the time, or almost all of the time. Failing to do this when you have variables expanded inside tests can result in very weird bugs. Here's an example:

#!/bin/bash
X="-n"
Y=""
if [ $X = $Y ] ; then
echo "X=Y"
fi
This will give misleading output since the shell expands our expression to
[ -n = ]
and the string "=" has non-zero length.
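The cure is to quote both sides so that each expansion stays a single (possibly empty) word. A fixed version of the script above:

#!/bin/bash
X="-n"
Y=""
if [ "$X" = "$Y" ] ; then
echo "X=Y"
fi

Now the expression expands to [ "-n" = "" ], a proper three-argument string comparison, which is false.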

A brief summary of test operators

Here's a quick list of test operators. It's by no means comprehensive, but it's likely to be all you'll need to remember (if you need anything else, you can always check the bash manpage ...):

operator   produces true if...                                            operands
-n         operand has non-zero length                                    1
-z         operand has zero length                                        1
-d         there exists a directory whose name is operand                 1
-f         there exists a file whose name is operand                      1
-eq        the operands are integers and they are equal                   2
-ne        the opposite of -eq                                            2
=          the operands are equal (as strings)                            2
!=         opposite of =                                                  2
-lt        operand1 is strictly less than operand2 (both integers)        2
-gt        operand1 is strictly greater than operand2 (both integers)     2
-ge        operand1 is greater than or equal to operand2 (both integers)  2
-le        operand1 is less than or equal to operand2 (both integers)     2
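Here is a short sketch exercising a few of the operators from the table (the values are arbitrary):

#!/bin/bash
count=5
if [ "$count" -ge 1 ] && [ "$count" -le 10 ]; then
echo "count is between 1 and 10"
fi
if [ -d "$HOME" ]; then
echo "$HOME is a directory"
fi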

Loops

Loops are constructions that enable one to reiterate a procedure or perform the same procedure on several different items. There are the following kinds of loops available in bash:
  • for loops
  • while loops

For loops

The syntax for the for loops is best demonstrated by example.
#!/bin/bash
for X in red green blue
do
echo $X
done
The for loop iterates over the space-separated items. Note that if some of the items have embedded spaces, you need to protect them with quotes. Here's an example:
#!/bin/bash
colour1="red"
colour2="light blue"
colour3="dark green"
for X in "$colour1" $colour2" $colour3"
do
echo $X
done
Can you guess what would happen if we left out the quotes in the for statement? (See the sketch below.) This indicates that variable names should be protected with quotes unless you are pretty sure that they do not contain any spaces.
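To spell out the answer: without the quotes, each space-separated word becomes its own iteration. A sketch of the unquoted behaviour:

#!/bin/bash
colour2="light blue"
for X in $colour2 # unquoted on purpose
do
echo $X
done
# prints "light" and "blue" on two separate lines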

Globbing in for loops

The shell expands a string containing a * to all filenames that "match". A filename matches if and only if it is identical to the match string after replacing the stars * with arbitrary strings. For example, the character "*" by itself expands to a space-separated list of all files in the working directory (excluding those that start with a dot ".") So

echo *
lists all the files and directories in the current directory.
echo *.jpg
lists all the jpeg files.
echo ${HOME}/public_html/*.jpg
lists all jpeg files in your public_html directory.

As it happens, this turns out to be very useful for performing operations on the files in a directory, especially used in conjunction with a for loop. For example:

#!/bin/bash
for X in *.html
do
grep -L '<UL>' "$X" # -L prints the names of files with no matching lines
done

While Loops

While loops iterate "while" a given condition is true. An example of this:

#!/bin/bash
X=0
while [ $X -le 20 ]
do
echo $X
X=$((X+1))
done

This raises a natural question: why doesn't bash allow the C-like for loops

for (X=1; X<10; X++)

As it happens, this is discouraged for a reason: bash is an interpreted language, and a rather slow one for that matter. For this reason, heavy iteration is discouraged.
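For what it's worth, later versions of bash do accept a C-style arithmetic for loop, so the counting loop above can also be written as follows (the performance caveat still applies):

#!/bin/bash
for (( X=0; X<=20; X++ ))
do
echo $X
done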

Command Substitution

Command Substitution is a very handy feature of the bash shell. It enables you to take the output of a command and treat it as though it was written on the command line. For example, if you want to set the variable X to the output of a command, the way you do this is via command substitution.

There are two means of command substitution: $() expansion and backtick expansion.

$() expansion works as follows: $(commands) expands to the output of commands. This permits nesting, so commands can include $() expansions themselves.

Backtick expansion expands `commands` to the output of commands.

An example:

#!/bin/bash
files="$(ls )"
web_files=`ls public_html`
echo $files
echo $web_files
X=`expr 3 \* 2 + 4` # expr evaluates arithmetic expressions. man expr for details.
echo $X

Note that even though the output of ls contains newlines, the echoed output does not: the variables are expanded without quotes, so the shell splits their contents into words and echo prints the words separated by single spaces. (The newlines are in fact still stored in the variables; echo "$files" would show them.) Anyway, the advantage of the $() substitution method is almost self-evident: it is very easy to nest. It is supported by most Bourne shell variants (the POSIX shell or better is OK). However, the backtick substitution is slightly more readable, and is supported by even the most basic shells (any #!/bin/sh version is just fine).
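As a quick illustration of how easily $() nests (the variable name is arbitrary), the inner substitution runs first and its output feeds the outer command:

#!/bin/bash
parent="$(basename "$(dirname "$PWD")")"
echo "the parent of the current directory is named $parent"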

Friday, January 26, 2007

BASH Tutorial

BASH Programming - Introduction HOW-TO
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html

Conditional Parts of Makefiles

Example of a Conditional

libs_for_gcc = -lgnu
normal_libs =

ifeq ($(CC),gcc)
libs=$(libs_for_gcc)
else
libs=$(normal_libs)
endif

foo: $(objects)
	$(CC) -o foo $(objects) $(libs)
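Assuming the fragment above is saved as a Makefile (with $(objects) defined appropriately), you can exercise both branches of the conditional from the shell by overriding CC on the command line:

make CC=gcc # links with $(libs_for_gcc), i.e. -lgnu
make CC=cc # links with $(normal_libs), i.e. nothing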

Syntax of Conditionals

The syntax of a simple conditional with no else is as follows:

conditional-directive
text-if-true
endif

The text-if-true may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead.

The syntax of a complex conditional is as follows:

conditional-directive
text-if-true
else
text-if-false
endif

or:

conditional-directive
text-if-one-is-true
else conditional-directive
text-if-true
else
text-if-false
endif

Conditional-directive

ifeq, ifneq, ifdef, and ifndef
ifeq (arg1, arg2)
ifeq 'arg1' 'arg2'
ifeq "arg1" "arg2"
ifeq "arg1" 'arg2'
ifeq 'arg1' "arg2"
ifneq (arg1, arg2)
ifneq 'arg1' 'arg2'
ifneq "arg1" "arg2"
ifneq "arg1" 'arg2'
ifneq 'arg1' "arg2"
ifdef variable-name
ifndef variable-name

Setting Up NFS

[Server]

> vi /etc/exports

/jannyroot *(rw,async,nohide,no_auth_nlm,no_root_squash)

> service nfs restart

[Client]

mount -t nfs 192.168.0.2:/jannyroot /home/nfs/public
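Before mounting, it is worth checking from the client that the server really exports the directory; showmount queries the NFS server's export list (192.168.0.2 is the server address used above):

> showmount -e 192.168.0.2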

Configuring SELinux

Sometimes, when SELinux is turned on, some dynamic link libraries cannot be used.
How to turn off SELinux:

> cd /etc/selinux
> vi config

Change SELINUX=enforcing
to SELINUX=disabled
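Editing this file only takes effect after a reboot. On a running system you can check the current mode and switch to permissive mode immediately (setenforce 0 does not survive a reboot, and does not fully disable SELinux the way the config file does):

> getenforce
> setenforce 0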

Makefile Pattern Rule Examples

Here are some examples of pattern rules actually predefined in make. First, the rule that compiles `.c' files into `.o' files:

%.o : %.c
	$(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@

defines a rule that can make any file x.o from x.c. The command uses the automatic variables `$@' and `$<' to substitute the names of the target file and the source file in each case where the rule applies (see Automatic Variables).

Here is a second built-in rule:

% :: RCS/%,v
	$(CO) $(COFLAGS) $<

defines a rule that can make any file x whatsoever from a corresponding file x,v in the subdirectory RCS. Since the target is `%', this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule terminal, which means that its prerequisite may not be an intermediate file (see Match-Anything Pattern Rules).

This pattern rule has two targets:

%.tab.c %.tab.h: %.y
	bison -d $<

This tells make that the command `bison -d x.y' will make both x.tab.c and x.tab.h. If the file foo depends on the files parse.tab.o and scan.o and the file scan.o depends on the file parse.tab.h, when parse.y is changed, the command `bison -d parse.y' will be executed only once, and the prerequisites of both parse.tab.o and scan.o will be satisfied. (Presumably the file parse.tab.o will be recompiled from parse.tab.c and the file scan.o from scan.c, while foo is linked from parse.tab.o, scan.o, and its other prerequisites, and it will execute happily ever after.)
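If you want to see the complete set of pattern rules built into your own copy of make, one way (a generic trick, not specific to these examples) is to dump make's internal rule database without reading any makefile:

make -p -f /dev/null 2>/dev/null | less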

Makefile Implicit Rules

Compiling C programs
n.o is made automatically from n.c with a command of the form `$(CC) -c $(CPPFLAGS) $(CFLAGS)'.
Compiling C++ programs
n.o is made automatically from n.cc, n.cpp, or n.C with a command of the form `$(CXX) -c $(CPPFLAGS) $(CXXFLAGS)'. We encourage you to use the suffix `.cc' for C++ source files instead of `.C'.
Compiling Pascal programs
n.o is made automatically from n.p with the command `$(PC) -c $(PFLAGS)'.
Assembling and preprocessing assembler programs
n.o is made automatically from n.s by running the assembler, as. The precise command is `$(AS) $(ASFLAGS)'.

n.s is made automatically from n.S by running the C preprocessor, cpp. The precise command is `$(CPP) $(CPPFLAGS)'.

Linking a single object file
n is made automatically from n.o by running the linker (usually called ld) via the C compiler. The precise command used is `$(CC) $(LDFLAGS) n.o $(LOADLIBES) $(LDLIBS)'.

This rule does the right thing for a simple program with only one source file. It will also do the right thing if there are multiple object files (presumably coming from various other source files), one of which has a name matching that of the executable file. Thus,

x: y.o z.o

when x.c, y.c and z.c all exist will execute:

cc -c x.c -o x.o
cc -c y.c -o y.o
cc -c z.c -o z.o
cc x.o y.o z.o -o x
rm -f x.o
rm -f y.o
rm -f z.o

In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking.

Each kind of file automatically made into `.o' object files will be automatically linked by using the compiler (`$(CC)', `$(FC)' or `$(PC)'; the C compiler `$(CC)' is used to assemble `.s' files) without the `-c' option. This could be done by using the `.o' object files as intermediates, but it is faster to do the compiling and linking in one step, so that's how it's done.

Tuesday, January 16, 2007

Colour ls

http://www.ibiblio.org/pub/Linux/docs/HOWTO/unmaintained/Colour-ls

[ 12 September 1999
The Linux Colours with Linux terminals mini-HOWTO is not being maintained by
the author any more. If you are interested in maintaining the
Colours-ls mini-HOWTO, please get in touch with me at
. ]

Colours with Linux terminals
Thorbjørn Ravn Andersen, ravn@dit.ou.dk
v1.4, 7 August 1997

Most Linux distributions have a 'ls' command for listing the contents
of a directory that can visually enhance their output by using
different colours, but configuring this to taste may not be a trivial
task. This document explains the various aspects and approaches of
altering the setup by configuring existing software, plus locations of
alternative software usually not included with Slackware or RedHat,
which may be used on most versions of Unix. The HTML version is also
available from my own source at .

1. Introduction

In recent years colour displays have become very common, and users are
beginning to exploit this by using programs that utilizes colours to
give quick visual feedback on e.g. reserved keywords in programming
languages, or instant notification of misspelled words.

As the Linux text console supports colour, the original GNU ls was
quickly modified to output colour information and included in
Slackware around version 2.0. Improved versions of these patches have
now migrated into the standard GNU distribution of ls, and should
therefore be a part of all new Linux distributions by now.

This revision is an update on a major rewrite from the initial
release, including information on xterms and kernel patching.

The information in this document has been confirmed on Redhat 4.1, and
was originally compiled with the 2.0.2 release of Slackware, and the
1.1.54 kernel. The kernel patch information was retrieved on
slackware 2.2.0 with the 1.2.13 kernel, and tcsh as the default shell,
and later confirmed with a 2.0.27 kernel. If you use any other
configuration, or unix version, I would appreciate a note stating your
operating system and version, and whether colour support is available
as standard.

2. Quickstart for the impatient

If you have a new distribution of Linux, do these modifications to
these files in your home directory. They take effect after next
login.

~/.bashrc:
alias ls="ls --color"

~/.cshrc:
alias ls 'ls --color'

That's it!

You may also want to do an ``eval `dircolors $HOME/.colourrc`'', to
get your own colours. This file is created with ``dircolors -p
>$HOME/.colourrc'' and is well commented for further editing.

3. Do I have it at all?

First of all you need to know if you have a version of ls which knows
how to colourize properly. Try this command in a Linux text console
(although an xterm will do):

% ls --color

(the % is a shell prompt):

If you get an error message indicating that ls does not understand the
option, you need to install a new version of the GNU fileutils
package. If you do not have an appropriate upgrade package for your
distribution, just get the latest version from your GNU mirror and
install directly from source.

If you do not get an error message, you have a ls which understands
the command. Unfortunately, some of the earlier versions included
previously with Slackware (and possibly others) were buggy. The ls
included with Redhat 4.1 is version 3.13 which is okay.

% ls --version
ls - GNU fileutils-3.13

If you ran the ``ls --color'' command on a Linux text-based console,
the output should have been colourized according to the defaults on
the system, and you can now decide whether there is anything you want
to change.

If you ran it in an xterm, you may or you may not have seen any colour
changes. As with ls itself, the original xterm-program did not have
any support of colour for the programs running inside of it, but
recent versions do. If your xterm doesn't support colours, you should
get a new version as described at the end of this document. In the
meantime just switch to textmode and continue from there.

4. Which colours are there to choose from?

This shell script (thanks to the many who sent me bash versions) shows
all standard colour combinations on the current console. If no
colours appear, your console does not support ANSI colour selections.

#!/bin/bash
# Display ANSI colours.
#
esc="\033["
echo -n " _ _ _ _ _40 _ _ _ 41_ _ _ _42 _ _ _ 43"
echo "_ _ _ 44_ _ _ _45 _ _ _ 46_ _ _ _47 _"
for fore in 30 31 32 33 34 35 36 37; do
    line1="$fore "
    line2=" "
    for back in 40 41 42 43 44 45 46 47; do
        line1="${line1}${esc}${back};${fore}m Normal ${esc}0m"
        line2="${line2}${esc}${back};${fore};1m Bold ${esc}0m"
    done
    echo -e "$line1\n$line2"
done

The foreground colour number is listed to the left, and the background
number in the box. If you want bold characters you add a "1" to the
parameters, so bright white on blue would be "37;44;1". The whole
ANSI selection sequence is then

ESC [ 3 7 ; 4 4 ; 1 m

Note: The background currently cannot be bold, so you cannot have
yellow (bold brown) as anything but foreground. This is a hardware
limitation.

The colours are:
0 - black     4 - blue      3# is foreground
1 - red       5 - magenta   4# is background
2 - green     6 - cyan
3 - yellow    7 - white     ;1 is bold

5. How to configure colours with ls

If you wish to modify the standard colour set built into ls, you need
your personal copy in your home directory, which you get with

cd ; dircolors -p > .colourrc

After modifying this well-commented file you need to have it read into
the environment string LS_COLORS, which is usually done with

eval `dircolors .colourrc`

You need to put this line in your .bashrc/.cshrc/.tcshrc (depending on
your shell), to have it done at each login. See the dircolors(1)
manual page for details.

6. How to change the text-mode default from white-on-black

You will need to tell the terminal driver code that you want another
default. There exists no standard way of doing this, but in case of
Linux you have the setterm program.

"setterm" uses the information in the terminal database to set the
attributes. Selections are done like

setterm -foreground black -background white -store

where the "-store" besides the actual change makes it the default for
the current console as well. This requires that the current terminal
(TERM environment variable) is described "well enough" in the termcap
database. If setterm for some reason does not work, here are some
alternatives:

6.1. Xterm

One of these xterms should be available and at least one of them
support colour.

xterm -fg white -bg blue4
color_xterm -fg white -bg blue4
color-xterm -fg white -bg blue4
nxterm -fg white -bg blue4

where 'color_xterm' supports the colour version of 'ls'. This
particular choice resembles the colours used on an SGI.

6.2. Virtual console.

You may modify the kernel once and for all, as well as providing a
run-time default for the virtual consoles with an escape sequence. I
recommend the kernel patch if you have compiled your own kernel.

The kernel source file is /usr/src/linux/drivers/char/console.c around
line 1940, where you should modify

def_color = 0x07; /* white */
ulcolor = 0x0f; /* bold white */
halfcolor = 0x08; /* grey */

as appropriate. I use white on blue with

def_color = 0x17; /* white */
ulcolor = 0x1f; /* bold white */
halfcolor = 0x18; /* grey */

The numbers are the attribute codes used by the video card in
hexadecimal: the most significant digit (the "1" in the example
colours above) is the background; the least significant the
foreground. 0 = black, 1 = blue, 2 = green, 3 = cyan, 4 = red, 5 =
purple, 6 = brown/yellow, 7 = white. Add 8 to get "bright" colours.
Note that, in most cases, a bright background is displayed as blinking
characters on a dull background. (From sjlam1@mda023.cc.monash.edu.au.)

You may also supply a new run-time default for a virtual console, on a
per-display basis with the non-standard ANSI sequence (found by
browsing the kernel sources)

ESC [ 8 ]

which sets the default to the current fore- and background colours.
Then the Reset Attributes string (ESC [ m) selects these colours
instead of white on black.

You will need to actually echo this string to the console each time
you reboot. Depending on what you use your Linux box for, several
places may be appropriate:

6.2.1. /etc/issue

This is where "Welcome to Linux xx.yy" is displayed under Slackware,
and that is a good choice for stand-alone equipment (and would probably
be a pestilence for users logging in with telnet). This file is created
at boot time (Slackware in /etc/rc.d/rc.S; Redhat in /etc/rc.d/rc.local),
and you should modify lines looking somewhat like

echo ""> /etc/issue
echo Welcome to Linux `/bin/uname -a | /bin/cut -d\ -f3`. >> /etc/issue

to

ESCAPE=""
echo "${ESCAPE}[H${ESCAPE}[37;44m${ESCAPE}[8]${ESCAPE}[2J"> /etc/issue
echo Welcome to Linux `/bin/uname -a | /bin/cut -d\ -f3`. >> /etc/issue

This code will home the cursor, set the colour (here white on blue),
save this selection and clean the rest of the screen. The
modification takes effect after the next reboot. Remember to insert
the _literal_ escape character in the file with C-q in emacs or
control-v in vi, as apparently the sh used for executing this script
does not understand the \033 syntax.

6.2.2. /etc/profile or .profile

if [ "$TERM" = "console" ]; then
echo "\033[37;44m\033[8]" #
# or use setterm.
setterm -foreground white -background blue -store
fi

6.2.3. /etc/login or .login

if ( "$TERM" == "console" ) then
echo "\033[37;44m\033[8]"
# or use setterm.
setterm -foreground white -background blue -store
endif

6.3. Remote login

You should be able to use the setterm program as shown above. Again,
this requires that the remote machine knows enough about your
terminal, and that the terminal emulator providing the login supports
colour. In my experience the best vt100 emulation currently available
for other platforms are:

· MS-DOS: MS-Kermit (free, not a Microsoft product)

· Windows 95/NT: Kermit/95 (shareware)

· OS/2: Kermit/95 (shareware). Note though that the
standard telnet understands colours and can be customized locally.

See for details about Kermit.

7. Software

All the information described here is assuming a GNU/Linux
installation. If you have something else (like e.g. a Sun running X
or so) you can get and compile the actual software yourself.

The colour version of 'xterm' is based on the standard xterm source
with a patch available from any X11R6 site. The xterm distributed
with R6.3 is rumoured to have native colour support, but is untested
by me.

ftp://ftp.denet.dk/pub/X11/contrib/utilities/color-xterm-R6pl5-patch.gz

See the documentation if you use an older version of X. Note: I
haven't tried this myself!

The colour version of 'ls' is part of the GNU fileutils package,
available from the main GNU ftp site or one of the several mirrors.
Get at least version 3.13.

ftp://ftp.denet.dk/pub/gnu/fileutils-3.XX.tar.gz

I have myself successfully compiled color-ls on Solaris, SunOS and
Irix.

I would appreciate feedback on this text. My e-mail address is
ravn@dit.ou.dk

--

Thorbjørn Ravn Andersen

Monday, January 15, 2007

Removing Windows Line-Break Characters

cat [old_file] | tr -d '\r' > [new_file]

Friday, January 05, 2007

GNU `make'

http://www.gnu.org/software/make/manual/make.html

Detailed documentation of everything about Make and Makefiles

A Simple Makefile Tutorial

http://palantir.swarthmore.edu/maxwell/classes/tutorials/maketutor/

Makefiles are a simple way to organize code compilation. This tutorial does not even scratch the surface of what is possible using make, but is intended as a starter's guide so that you can quickly and easily create your own makefiles for small to medium-sized projects.
A Simple Example

Let's start off with the following three files, hellomake.c, hellofunc.c, and hellomake.h, which would represent a typical main program, some functional code in a separate file, and an include file, respectively.
hellomake.c:

#include <hellomake.h>

int main() {
// call a function in another file
myPrintHelloMake();

return(0);
}

hellofunc.c:

#include <stdio.h>
#include <hellomake.h>

void myPrintHelloMake(void) {

printf("Hello makefiles!\n");

return;
}

hellomake.h:

/*
example include file
*/

void myPrintHelloMake(void);

Normally, you would compile this collection of code by executing the following command:

gcc -o hellomake hellomake.c hellofunc.c -I.

This compiles the two .c files and names the executable hellomake. The -I. is included so that gcc will look in the current directory (.) for the include file hellomake.h. Without a makefile, the typical approach to the test/modify/debug cycle is to use the up arrow in a terminal to go back to your last compile command so you don't have to type it each time, especially once you've added a few more .c files to the mix.

Unfortunately, this approach to compilation has two downfalls. First, if you lose the compile command or switch computers you have to retype it from scratch, which is inefficient at best. Second, if you are only making changes to one .c file, recompiling all of them every time is also time-consuming and inefficient. So, it's time to see what we can do with a makefile.

The simplest makefile you could create would look something like:
Makefile 1

hellomake: hellomake.c hellofunc.c
	gcc -o hellomake hellomake.c hellofunc.c -I.

If you put this rule into a file called Makefile or makefile and then type make on the command line it will execute the compile command as you have written it in the makefile. Note that make with no arguments executes the first rule in the file. Furthermore, by putting the list of files on which the command depends on the first line after the :, make knows that the rule hellomake needs to be executed if any of those files change. Immediately, you have solved problem #1 and can avoid using the up arrow repeatedly, looking for your last compile command. However, the system is still not being efficient in terms of compiling only the latest changes.
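A quick shell session to convince yourself of both behaviours (a sketch; it assumes the example files above are in the current directory):

make # builds hellomake the first time
make # reports that hellomake is up to date; nothing is rebuilt
touch hellofunc.c # update the file's timestamp, as an edit would
make # the rule runs again, because a prerequisite changed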

One very important thing to note is that there is a tab before the gcc command in the makefile. There must be a tab at the beginning of any command, and make will not be happy if it's not there.

In order to be a bit more efficient, let's try the following:
Makefile 2

CC=gcc
CFLAGS=-I.

hellomake: hellomake.o hellofunc.o
	$(CC) -o hellomake hellomake.o hellofunc.o -I.


So now we've defined some constants CC and CFLAGS. It turns out these are special constants that communicate to make how we want to compile the files hellomake.c and hellofunc.c. In particular, the macro CC is the C compiler to use, and CFLAGS is the list of flags to pass to the compilation command. By putting the object files--hellomake.o and hellofunc.o--in the dependency list and in the rule, make knows it must first compile the .c versions individually, and then build the executable hellomake.

Using this form of makefile is sufficient for most small scale projects. However, there is one thing missing: dependency on the include files. If you were to make a change to hellomake.h, for example, make would not recompile the .c files, even though they needed to be. In order to fix this, we need to tell make that all .c files depend on certain .h files. We can do this by writing a simple rule and adding it to the makefile.
Makefile 3

CC=gcc
CFLAGS=-I.
DEPS = hellomake.h

%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

hellomake: hellomake.o hellofunc.o
	gcc -o hellomake hellomake.o hellofunc.o -I.

This addition first creates the macro DEPS, which is the set of .h files on which the .c files depend. Then we define a rule that applies to all files ending in the .o suffix. The rule says that the .o file depends upon the .c version of the file and the .h files included in the DEPS macro. The rule then says that to generate the .o file, make needs to compile the .c file using the compiler defined in the CC macro. The -c flag says to generate the object file, the -o $@ says to put the output of the compilation in the file named on the left side of the :, the $< is the first item in the dependencies list, and the CFLAGS macro is defined as above.

As a final simplification, let's use the special macros $@ and $^, which are the left and right sides of the :, respectively, to make the overall compilation rule more general. In the example below, all of the include files should be listed as part of the macro DEPS, and all of the object files should be listed as part of the macro OBJ.
Makefile 4

CC=gcc
CFLAGS=-I.
DEPS = hellomake.h
OBJ = hellomake.o hellofunc.o

%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

hellomake: $(OBJ)
	gcc -o $@ $^ $(CFLAGS)

So what if we want to start putting our .h files in an include directory, our source code in a src directory, and some local libraries in a lib directory? Also, can we somehow hide those annoying .o files that hang around all over the place? The answer, of course, is yes. The following makefile defines paths to the include and lib directories, and places the object files in an obj subdirectory within the src directory. It also has a macro defined for any libraries you want to include, such as the math library -lm. This makefile should be located in the src directory. Note that it also includes a rule for cleaning up your source and object directories if you type make clean. The .PHONY rule keeps make from doing something with a file named clean.
Makefile 5

IDIR =../include
CC=gcc
CFLAGS=-I$(IDIR)

ODIR=obj
LDIR =../lib

LIBS=-lm

_DEPS = hellomake.h
DEPS = $(patsubst %,$(IDIR)/%,$(_DEPS))

_OBJ = hellomake.o hellofunc.o
OBJ = $(patsubst %,$(ODIR)/%,$(_OBJ))


$(ODIR)/%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

hellomake: $(OBJ)
	gcc -o $@ $^ $(CFLAGS) $(LIBS)

.PHONY: clean

clean:
	rm -f $(ODIR)/*.o *~ core $(IDIR)/*~

So now you have a perfectly good makefile that you can modify to manage small and medium-sized software projects. You can add multiple rules to a makefile; you can even create rules that call other rules. For more information on makefiles and the make function, check out the GNU Make Manual, which will tell you more than you ever wanted to know (really).

Thursday, December 28, 2006

Use OpenSSL to Get Hash Values

echo "12345" | openssl sha1

echo "12345" | openssl md5

echo "12345" | openssl md2

Linux Hostname

Show the current hostname:
hostname

Change the hostname temporarily:
hostname new_name

Change the hostname permanently:
vi /etc/sysconfig/network
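On this kind of system (Red Hat style), /etc/sysconfig/network holds the permanent name in a line like the following (new_name is a placeholder); it is read at the next boot:

HOSTNAME=new_name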

Monday, December 25, 2006

Linux File Access Permissions

From
http://www.linuxfocus.org/English/January1999/article77.html


Abstract:

This article is divided into two parts:

* The first part (Basic file access permissions) is a very short introduction to the basic file permission concept under Unix.
* The second part (T-bit, SUID and SGID) covers more advanced features of Linux that go beyond the basic "read-write-execute" flags.



Basic file access permissions

Linux is a multiuser system where users can assign different access permissions to their files. Every user has a user-ID, a unique number that identifies her/him. Users also belong to one or more groups. Groups can be used to restrict access to a number of people, a good feature for making team work easier. To check your user-ID and see the group(s) to which you belong, just type the command id:

>id
uid=550(alice) gid=100(users) groups=100(users),6(disk)

Access permissions can be set per file for owner, group and others on the basis of read (r), write (w) and execute (x) permissions. You can use the command ls -l to see these permissions.

>ls -l /usr/bin/id
-rwxr-xr-x 1 root root 8632 May 9 1998 /usr/bin/id

The file /usr/bin/id is owned by user root and belongs to a group called root. The

-rwxr-xr-x

shows the file access permissions. This file is readable (r), writable (w) and executable (x) for the owner. For the group and all others it is readable (r) and executable (x).

You can imagine the permissions as a bit vector with 3 bits each allocated to owner, group and others. Thus r-x corresponds to 101 as a bit pattern, or 4+1=5. The r-bit corresponds to 4, the w-bit to 2 and the x-bit to 1.

sst                  rwx            rwx      rwx
421                  421            421      421
(discussed later)    user (owner)   group    others

The command chmod can be used to change these permissions. For security reasons only root or the file owner may change the permissions. chmod takes either the numeric (octal) representation of the permissions or a symbolic representation. The symbolic representation is [ugoa][+-][rwx]. This is one of the letters u (user=file owner), g (group), o (others), a (all=u and g and o), followed by + or - to add or remove permissions, and then the symbolic representation of the permissions in the form of r (read), w (write), x (execute). To make the file "file.txt" writable for all you type:

>chmod a+w file.txt
or
>chmod 666 file.txt
>ls -l file.txt
-rw-rw-rw- 1 alice users 79 Jan 1 16:14 file.txt

chmod 644 file.txt would set the permissions back to "normal" permissions with owner writable+readable and only readable for everyone else.

Changing into a directory (with the command cd) is equivalent to executing the directory. "Normal" permissions for a directory are therefore 755 and not 644:

>chmod 755 mydir
>ls -ld mydir
drwxr-xr-x 2 alice users 1024 Dec 31 22:32 mydir

The umask defines your default permissions. The default permissions are applied when new files (and directories, etc ...) are created. As argument it takes those bits in octal representation that you do NOT want to have set.

umask 022 is, e.g., a good choice. With 022 everybody can read your files and "cd" into directories, but only you can modify things. To print the current umask settings just type umask without arguments.

Here is an example of how umask and chmod are used:

The umask is set to a good standard value
>umask
22

Take your editor and create a file called myscript:
>nedit myscript (or vi myscript ...)
Put the following code into it:

#!/bin/sh
#myscript
echo -n "hello "
whoami
echo "This file ( $0 ) has the following permissions:"
ls -l $0 | cut -f1 -d" "

Save the script.
Now it has 644 permissions:
>ls -l myscript
-rw-r--r-- 1 alice users 108 Jan 1 myscript
To run it you must make it executable:
>chmod 755 myscript
or
>chmod a+x myscript
Now run it:
>./myscript

Note that a script must be readable and executable in order to run, whereas a normal compiled binary needs only to be executable. This is because the script must be read by the interpreter (the shell). Running the script should produce:


hello alice
This file ( ./myscript ) has the following permissions:
-rwxr-xr-x


T-bit, SUID and SGID

After you have worked for a while with Linux you will probably discover that there is much more to file permissions than just the "rwx" bits. When you look around in your file system you will see "s" and "t":

>ls -ld /usr/bin/crontab /usr/bin/passwd /usr/sbin/sendmail /tmp

drwxrwxrwt 5 root root 1024 Jan 1 17:21 /tmp
-rwsr-xr-x 1 root root 0328 May 6 1998 /usr/bin/crontab
-r-sr-xr-x 1 root bin 5613 Apr 27 1998 /usr/bin/passwd
-rwsr-sr-x 1 root mail 89524 Dec 3 22:18 /usr/sbin/sendmail

What is this "s" and "t" bit? The vector of permission bits is really 4 * 3 bits long. chmod 755 is only a shortcut for chmod 0755.

The t-bit

The t-bit (sometimes referred to as "sticky bit") is only useful in combination with directories. It is used with the /tmp directory as you can see above.

Normally (without the t-bit set on the directory) files can be deleted if the directory holding the files is writable for the person deleting files. Thus if you have a directory where anybody can deposit files, then anybody can also delete everybody else's files.

The t-bit changes this rule. With the t-bit set only the owner of the file or the owner of the directory can delete the files. The t-bit can be set with chmod a+tw or chmod 1777. Here is an example:

Alice creates a directory with t-bit set:
>mkdir mytmp
>chmod 1777 mytmp

now Bob puts a file into it:
>ls -al
drwxrwxrwt 3 alice users 1024 Jan 1 20:30 ./
-rw-r--r-- 1 bob users 0 Jan 1 20:31 f.txt

This file can now be deleted by Alice (directory owner) and Bob (file owner) but it can not be deleted by Tux:

>whoami
tux
>rm -f f.txt
rm: f.txt: Operation not permitted


S-bit set on the user

Under Linux, processes run under a user-ID. This gives them access to all resources (files etc...) that this user would have access to. There are 2 user-IDs: the real user-ID and the effective user-ID. The effective user-ID is the one that determines the access to files. Save the following script under the name idinfo and make it executable (chmod 755 idinfo).


#!/bin/sh
#idinfo: Print user information
echo " effective user-ID:"
id -un
echo " real user-ID:"
id -unr
echo " group ID:"
id -gn

When you run the script you will see that the process that runs it gets your user-ID and your group-ID:

effective user-ID:
alice
real user-ID:
alice
group ID:
users

When Tux runs your idinfo program then he gets a similar output that shows the process now running under the ID of tux. The output of the program depends only on the user that runs it and not the one who owns the file.

For security reasons the s-bit works only when used on binaries (compiled code) and not on scripts (perl scripts are an exception). Therefore we create a C-program that will call our idinfo program:

/*suidtest.c*/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(){
/*secure SUID programs MUST
 *not trust any user input or environment variable!! */

char *env[]={"PATH=/bin:/usr/bin",NULL};
char prog[]="/home/alice/idinfo";
if (access(prog,X_OK)){
fprintf(stderr,"ERROR: %s not executable\n",prog);
exit(1);
}
printf("running now %s ...\n",prog);
setreuid(geteuid(),geteuid());
execle(prog,prog,(char *)NULL,env);
perror("suidtest");

return(1);
}

Compile the program with "gcc -o suidtest -Wall suidtest.c" and set the s-bit on the owner:

>chmod 4755 suidtest
or
>chmod u+s suidtest

Run it! What happens? Nothing? Run it as a different user!

The file suidtest is owned by alice and has the s-bit set where normally the x is for the owner of the file. This causes the file to be executed under the user-ID of the user that owns the file rather than the user that executes the file. If Tux runs the program then this looks as follows:

>ls -l suidtest
-rwsr-xr-x 1 alice users 4741 Jan 1 21:53 suidtest
>whoami
tux
>./suidtest
running now /home/alice/idinfo ...
effective user-ID:
alice
real user-ID:
alice
group ID:
users

As you can see this is a very powerful feature especially if root owns the file with s-bit set. Any user can then do things that normally only root can do. A few words on security. When you write a SUID program then you must make sure that it can only be used for the purpose that you intended it to be used. Always set the path to a hard-coded value. Never rely on environment variables or functions that use environment variables. Never trust user input (config files, command line arguments....). Check user input byte for byte and compare it with values that you consider valid.

When a SUID program is owned by root then both the effective and the real user-ID can be set (with setreuid() function).

Set-UID programs are often used by "root" to give ordinary users access to things that normally only "root" can do. As root you can, e.g., modify the suidtest.c to allow any user to run the ppp-on/ppp-off scripts on your machine.

Note: It is possible to switch off Suid when mounting a file system. If the above does not work then check your /etc/fstab. It should look like this:
/dev/hda5 / ext2 defaults 1 1
If you find the option "nosuid" there then this Suid feature is switched off. For details have a look at the man-page of mount.

S-bit set on the group

Executable files that have the s-bit set on the group run under the group-ID of the file owner. This is very similar to the s-bit on user in the paragraph above.

When the s-bit is set on the group for a directory, then the group is also set for every file that is created in that directory. Alice belongs to 2 groups:

>id
uid=550(alice) gid=100(users) groups=100(users),6(disk)

Normally files are created for her with the group set to users. But if a directory is created with group set to disk and the s-bit set on the group, then all files that alice creates in it also have the group ID disk:

>chmod 2775 .
>ls -ld .
drwxrwsr-x 3 tux disk 1024 Jan 1 23:02 .

If alice now creates a new file in this directory, the group of that file will be set to disk:
>touch newfile
>ls -l newfile
-rw-r--r-- 1 alice disk 0 Jan 1 23:02 newfile

This is a good feature when you want to work with several people in a team and ensure that the group IDs of the files are set to the right group for the team's working directory, especially in an environment where users normally have a 027 umask that makes files inaccessible to people outside the group.

Saturday, December 23, 2006

LDAP with TLS

LDAP + SSL = port 636
LDAP + TLS = port 389 (no change)

Server:
(1) Create a cert for the LDAP server
Note that the cn field inside MUST match the hostname
(one way to generate such a cert is sketched after these server steps)

(2) Remove the DES3 password on private key
openssl rsa -in private.des3 -out slapd.pem

(3) add an empty line at the end of slapd.pem
echo >> slapd.pem

(4) append the server's cert at the end of slapd.pem
cat newcert.pem >> slapd.pem

(5) cp slapd.pem /etc/openldap/cacerts/
chown root.ldap /etc/openldap/cacerts/slapd.pem
chmod 644 /etc/openldap/cacerts/slapd.pem

(6) cp cacert.pem /etc/openldap/cacerts/

(7) Modify slapd.conf
TLSCACertificateFile /etc/openldap/cacerts/cacert.pem
TLSCertificateFile /etc/openldap/cacerts/slapd.pem
TLSCertificateKeyFile /etc/openldap/cacerts/slapd.pem

(8) service ldap restart

(9) netstat -ntlp
Then, we can see that both ports 636 and 389 are being listened on
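For step (1), one possible way to generate a self-signed certificate is sketched below (adjust days and names to your setup; when prompted for the Common Name, answer with the server's hostname, e.g. linux.kirika.idv.tw). The passphrase you enter for the key is the one removed again in step (2):

openssl req -new -x509 -days 365 -keyout private.des3 -out newcert.pem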

Client:
(1) vi /etc/openldap/ldap.conf
TLS_REQCERT never

(2) search
TLS: ldapsearch -x -ZZ -b "dc=osa,dc=com" -h linux.kirika.idv.tw
SSL: ldapsearch -x -b "dc=osa,dc=com" -H ldaps://linux.kirika.idv.tw
No encryption: ldapsearch -x -b "dc=osa,dc=com" -h linux.kirika.idv.tw
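To confirm that the SSL port really presents the server certificate, openssl's built-in client is a handy check, independent of ldapsearch:

openssl s_client -connect linux.kirika.idv.tw:636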

Linux:
Call authconfig, check "use TLS" in LDAP options
This will change /etc/ldap.conf
We can also change from TLS to SSL by changing
ssl start_tls
to
ssl on

Home Directory Solution for LDAP Linux Users

LDAP server: 10.0.1.11
User directory: 10.0.1.11:/rhome
LDAP user: kevin

Solution 1: use nfs to mount to another machine

Server:
(1) vi /etc/exports
/rhome 10.0.1.0/24(rw)

(2) service nfs restart


Client:
(1) use root's account
mount 10.0.1.11:/rhome /home

(2) use ldap user kevin to login



Solution 2: use autofs with LDAP server

Server:
(1) vi /etc/exports
/rhome 10.0.1.0/24(rw)

(2) service nfs restart

(3) Add LDAP Data
homeDirectory: /home/rhome/kevin
nisMapEntry: -w,hard,intr 10.0.1.11:/rhome/kevin
nisMapName: auto.misc
objectClass: nisObject

Client:
(1) vi /etc/auto.master
/home/rhome ldap:10.0.1.11:dc=osa,dc=com --timeout=60

This will automatically make a virtual directory whose name
is the same as the cn, namely, /home/rhome/kevin

(2) service autofs restart

(3) use kevin to login


P.S. To add the nisObject entries along with the /etc/passwd migration results,
we may need to change the schema /etc/openldap/schema/nis.schema:

objectclass ( 1.3.6.1.1.1.2.10 NAME 'nisObject'
DESC 'An entry in a NIS map'
SUP top AUXILIARY
MUST ( cn $ nisMapEntry $ nisMapName )
MAY description )

The 3rd line changed from "SUP top STRUCTURAL" to "SUP top AUXILIARY".