LINUX GAZETTE

February 2003, Issue 87       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2003 Specialized Systems Consultants, Inc.

LINUX GAZETTE
...making Linux just a little more fun!
The Mailbag
From The Readers of Linux Gazette


HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.


H/W detection in Debian ?

Sat, 11 Jan 2003 19:06:15 +0530
Joydeep Bakshi (joy12 from vsnl.net)

Hi all,

  1. kudzu is the default H/W detection tool in RH, and harddrake in MDK. Is there anything in Debian?
  2. I have installed kudzu in Debian 3.0, but it is not running as a service; I need to execute the kudzu command manually. Moreover, it couldn't detect my Epson C21SX printer, though under MDK 9.0 kudzu detected it. Any solution, please?

Thanks in advance.


ppp over nullmodem cable - no response to (LCP ConfReq ...)

Tue, 31 Dec 2002 16:45:02 +0100
Josef Angermeier jun. (josef.angermeier from web.de)

hi linux gazette

first thanks for your great work.

I'd like to connect over a serial cable to a Windows 2000 RAS server. I already know the problem isn't the null modem cable, because I could remote-control my second computer using getty on one side and Windows' HyperTerminal on the other. (BTW, I first tried GNU/Linux's minicom instead of HyperTerminal, but it seemed to me that minicom only works with a modem at the end of the cable. Am I wrong, or is there another program I should give a try?) I've already read the Serial-* and PPP* HOWTOs, but I probably missed something. I also set the same baud rate on the RAS server side. So, any idea why I don't get any reply to my LCP ConfReq?

greets

josef

melee:/home/josef/tmp# pppd /dev/ttyS0 nodetach
Serial connection established.
using channel 1
Using interface ppp0
Connect: ppp0 <--> /dev/ttyS0
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x143c91f8> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x143c91f8> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x143c91f8> <pcomp> <accomp>]
sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x143c91f8> <pcomp> <accomp>]
....

my /etc/ppp/options.ttyS0

connect 'chat -v -f /etc/ppp/scripts/winserver.chat'
19200
debug
crtscts
local
user josef
noauth


How can we block mails from users using ipchains

Sun, 5 Jan 2003 22:33:43 -0800
Dan Wilder (SSC sysadmin)
Question by linux-questions-only@ssc.com, vinod (vinod from globaledgesoft.com)

Hi,

I would like to know how to block mail from other users on the same system. I tried using 'ipchains' and a port number, but it didn't work. Please help me with this.

Thanks

Perhaps you could be more specific about what you're trying to accomplish.

I'll take this one to the readership as a general request for more articles about setting up mail systems to do interesting things. In fact, some things that aren't really about spam could be a fun read :) -- Heather


dual boot problem

Mon, 6 Jan 2003 14:29:33 -0500
Faber Fedor (faber from linuxnj.com)
Question by Phil Harold (Lazybum from sio.midco.net)

I installed Red Hat 8.0 on an existing system that has XP Pro on it. XP is on ide0 and Red Hat is on ide1; the XP hard drive is a FAT filesystem. When it boots it asks whether to go to Red Hat or DOS... I don't have DOS. How do I get back to Windows? What needs to be done to change the boot loader? I thought I had set it up so Linux only booted with a floppy... I guess not. Thanks for any help. Phil Harold

Go ahead and choose "DOS". That will boot into the other partition which is set up (hopefully) to boot XP.

Looks normal so far. Hardly worthy of the "help wanted" section here at the Gazette ... but nope, it's a stumper. -- Heather

Just before the other symbols it says:

rootnoverify (hd2,0)
chainloader +1

Hitting the Enter key is when the symbols come; it looks like Greek and Chinese.
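One thing worth checking: XP is on ide0, so the (hd2,0) above points GRUB at the wrong disk, which would explain the garbage. A sketch of the stanza in /boot/grub/menu.lst, assuming XP boots from the first partition of the first drive:

	title Windows XP
	rootnoverify (hd0,0)
	chainloader +1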


Custom kernel, not so custom modules

Fri, 10 Jan 2003 18:25:16 +0100
Eduardo (edlm from wanadoo.es)
An old question - he had said this relates to issue 64 #16 in The Answer Gang (http://www.linuxgazette.com/issue64/tag/16.html) - but still a stumper. We have a lot more readers now; maybe one of you knows what happened here? -- Heather

Hello all,

I have exactly the same problem described by Michael Hansen: modules don't load after recompiling the kernel. I'm also a newbie in Linux, but I see that (if you are using Red Hat, at least) it creates a directory /lib/modules/2.4.xcustom (in fact the kernel version becomes 2.4.18custom in my case), but when you do make modules_install it copies them to the 2.4.x directory. If you rename the directories, the problem comes when you try to install a new driver that uses the uname -r command during installation to find the modules directory (the uname -r result is 2.4.xcustom). I don't know how to solve this problem.

Best regards
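One likely explanation, for readers hitting the same wall: the mismatch usually traces back to the EXTRAVERSION line in the top-level kernel Makefile, which controls both what uname -r reports and which /lib/modules/ directory make modules_install writes to. A minimal sketch, assuming a 2.4.18 source tree in /usr/src/linux:

	cd /usr/src/linux
	grep '^EXTRAVERSION' Makefile   # e.g. EXTRAVERSION = custom
	make dep bzImage modules
	make modules_install            # installs to /lib/modules/2.4.18custom

If the Makefile and the directory name agree, driver installers that call uname -r will find the right place.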


ipchains vs iptables

Wed, 22 Jan 2003 14:39:11 +0100
Dean Buhrmann (d.buhrmann from chello.nl)

Dear Answer Gang members,

I have a Linux home network which is connected to the Internet through a gateway. This computer runs Linux with a 2.2.18 kernel. I use ipchains to block some unwanted incoming traffic. One of the machines runs mldonkey. This program needs ports 4661 and 4662. I get the following error from the server I contact:

ERROR: Your port 4662 is not reachable. You have a LOWID.

This port is open. The solution to this problem seems to be to redirect incoming packets from the internet for port 4662 directly to the machine where mldonkey runs.

The following iptables rule should do this:

iptables -A PREROUTING -t nat -p tcp -d $4 --dport 4662 -j DNAT --to 192.168.1.100

$4 is the gateway
192.168.1.100 runs mldonkey

I use a 2.2.18 kernel with ipchains on the gateway. In HOWTOs and other documentation I can't find a way to do this with ipchains. Do you know if it's possible, and how?

I'd appreciate your help.

greetings Dean Buhrmann.

Articles about travails, with details, are always welcome when you solve a strange problem. Of course there are HOWTOs for ipchains and for netfilter, but perhaps we could see an article about doing something complicated enough to illustrate the differences that might make you prefer one interface or the other. -- Heather
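For what it's worth, on a 2.2 kernel the usual tool for this kind of redirect is ipmasqadm rather than ipchains itself. A sketch, assuming the kernel has masquerading port forwarding compiled in (CONFIG_IP_MASQUERADE_PORTFW) and $EXTIP holds the gateway's external address:

	# forward incoming TCP connections on port 4662 to the mldonkey box
	ipmasqadm portfw -a -P tcp -L $EXTIP 4662 -R 192.168.1.100 4662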

GENERAL MAIL


Re: your mail

Mon, 20 Jan 2003 11:45:17 -0500
Ben Okopnik (the LG Answer Gang)
Question by Larry Leeds (lleeds from cableone.net)

I had an IBM 20G hard drive that had a lot of bogus information in the master boot record, due to formatting it with the 2G jumper on and then formatting with the jumper off. It wouldn't load an OS, and it locked up every time I tried to run fdisk; Norton Disk Doctor couldn't fix it either. But your little DOS assembly program http://www.linuxgazette.com/issue63/okopnik.html saved my hard drive.

Thank you!

Glad you found it useful, Larry. A number of people have written in with comments like yours about that particular article; I find it very pleasant that my work has benefitted that many people.

I appreciate your letting me know.


POS Software in Linux

Tue, 21 Jan 2003 21:07:25 -0500
Ben Okopnik (the LG Answer Gang)
Question by Stelian Iancu (stelian.iancu from gmx.net)

Hi!

I was reading the December 2002 issue of LG (btw, great magazine! I've just re-discovered it, and it's fantastic!) and I saw the PC-MOS thread originated by Reilly Burke.

I remembered that I saw something like a prototype for POS software on the net, and I went searching for it. The address is http://www.dnalounge.com/backstage/src/pos and the author is none other than jwz (Jamie Zawinski).

As far as I can see, there is only an idea and a "little prototype" (as the author describes it), but maybe this prototype can be used for further development by somebody else.

HTH!

If you need a restaurant-specific POS and don't mind going commercial (for a very small fee as compared to other POSes, actually), I have only good things to say about the ViewTouch POS <http://www.viewtouch.com> in spite of its closed-source nature. The interface is very well thought-out and beautifully done; the layout, menu, employee, and ingredient list configuration is a snap. It supports all the popular touchscreens, industry-standard narrow printers, and all the standard cash drawers. Despite the documentation that insists on "RedHat-only" compatibility, I've run it under Debian from day one (three years or so ago), and it works fine.

My biggest concern with it, of course, is that it is closed-source. I would have liked to tweak some minor features for the client I had who was interested; as well, I wonder what would happen if the developer disappeared off into the ether... but that's the nature of that particular beast. It is, however, an interesting and well-executed option. Interestingly enough, I spotted a major restaurant near Baltimore (a Brazilian steakhouse in Columbia, MD) using it about a year ago. The employees using it didn't have any negative comments, either.

P.S. Keep up the good work!

Thanks, Stelian. That's the reality and the plan. :)


GAZETTE MATTERS


Wanted: Proofreaders

Thu Jan 30 11:24:03 PST 2003
LG Editor Iron (gazette from ssc.com)

LG is looking for proofreaders. The main qualifications are a good command of English grammar, a native or near-native sense of English word usage, and the ability to recognize and clarify phrases that are too academic, not understood outside their own country, or unnecessarily difficult for those with limited English ability to read.

Depending on the number of proofreaders, the workload would be at most one article per month, but more likely one article every 2-3 months. Of course, you would be able to refuse articles you don't have time to proofread or that don't interest you.

If interested, send gazette@ssc.com some samples or URLs of stuff you've written or proofread (any topic, any length) that demonstrates your wording style.


Compilation Problem in Writing Your Own Toy OS (PART II)

Thu, 19 Dec 2002 07:15:59 -0800 (PST)
Mohammad Moghal (riazdat from yahoo.com)

Dear Sir,

"Writing Your Own Toy OS" is a Great Contribution towards knowledge.

I have tried PART I successfully.

But after compiling Part II, I booted my system from drive A. The system checked drive A and then hung. There was no output of the string.

Could you please help me out?

Best Regards

M. R. Moghal

Forwarding to the author, Krishnakumar R.
He fixed one of the programs somewhere in the series after it was published, but I don't remember exactly where. If you're reading on a mirror, check the main site, and see whether that program has been changed. http://www.linuxgazette.com/issue79/krishnakumar.html -- Mike


publishing

Fri, 27 Dec 2002 03:37:09 -0500
Mike Orr (Linux Gazette Editor)
Question by Felix F. (felix from pz4.org)
Readers, please note that this was actually an exchange of mails back and forth between Mike and Felix, rather than one message which Mike responded to in gory detail. If anyone out there takes on, in whole or in part, the Herculean task of providing paper editions of LG, please let us know - we will very happily spread the word! -- Heather

Have you ever thought of publishing the Gazette and requiring subscriptions? I would sure like to get a monthly magazine rather than browsing the Gazette online.

We've had several requests for a print version of LG. However, the cost of producing it would be prohibitive. (Printing, postage, software to track subscriptions, customer service staff, etc. And if you want a glossy magazine rather than just a xeroxed copy, there are layout costs, more printer's fees, etc.) Commercial magazines like our Linux Journal can do it because most of their revenue comes from advertising, but Linux Gazette does not accept advertising (except sponsorships).

We have repeatedly asked if any readers would be willing to set up their own print-and-distribute service for LG, but nobody has offered.

What kind of equipment would be required to run a print-and-distribute service?

At minimum: a laser printer, envelopes, stamps, and a list of subscribers. That's how small do-it-yourself zines work. You'd want some kind of cover or binding unless you're just going to send a stack of loose sheets.

But mailing costs alone will soak you, especially since a single issue of LG is something like fifty printed pages. (I've never printed an issue, so that's an estimate.) Sending fifty pages via first-class mail within the US is $3-4, so that's $48/year. Would you pay $48 for LG? You may be able to get a better deal with book rate or presorted rate but you'd have to check with the post office. But how will you recoup your cost for toner cartridges, paper, printer repair/replacement (since it will wear out sooner), envelopes, and the time to write the addresses or attach labels, not to mention the time dealing with subscription requests, complaints about "I didn't receive my issue", etc?

Today many free magazines carry ads to pay for publication. It might be a good idea to advertise, but I'm not sure if LG has a high enough number of subscribers. I can see where the management would be a problem (billing, distributing, etc.). Hopefully one day, maybe. :)

LG has a huge number of readers all over the world. I don't know the number because people who read via mirrors or off-line are uncountable. But there are mirrors in fifty countries, and I figure any country with a mirror must have a substantial LG readership. Either that, or it at least has one LG fanatic.... :)

You bring up an interesting point. LG itself is not interested in running ads, at least not at present. I like to think of LG as an ad-free zone, a safe haven from ads. But since LG content is freely redistributable, there's nothing prohibiting a print-and-deliver service from inserting ads in their version.

Actually, our author Alan Ward in Andorra said he's seen a Spanish print version of LG on the newsstands there. I assume it was the Spanish translation of Linux Journal, which may include some LG articles.

I've seen a few sites publishing their work as a magazine (including ads), and the subscribers did not get angry at the ads, because they understood that publishing costs money, and if the work is quality it's worth subscribing for.

HAPPY NEW YEAR and good luck.

There are a few articles in LG that may not be redistributed in a commercial print publication (where "commercial" means you're charging any amount of money for it). Those articles have a message to that effect at the bottom of the article.

In those cases, you will have to contact the author for permission.


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
More 2¢ Tips!
By The Readers of Linux Gazette

See also: The Answer Gang's Knowledge Base and the LG Search Engine


Two Sound Cards Under Linux

Tue, 14 Jan 2003 03:02:07 -0500
N4FWD - Tom Kocourek (tko from atempest.net)


The Need

As an Amateur Radio Operator, I wanted to use "QSSTV" under Linux. This program uses the DSP in a sound card to decode pictures being transmitted on Amateur Radio. However, I did not wish to give up the basic sound ability available under KDE. Thus I started reading about dual sound cards.


Research

Searches via Google did not turn up much information on dual sound cards, just the usual HOWTO references on getting one sound card running. But one key piece of information did turn up: multiple sound drivers can coexist!


Some experimentation and...

Multiple sound cards can work together provided:

  1. Each additional sound card must use a different chip set (i.e., different drivers)
  2. Each sound card must have its own IRQ and a distinct control register address space


Installation checkup

At this point, you have physically installed the additional sound card and have verified that the BIOS has assigned different IRQs to the cards.

Now you have booted Linux and logged in. In Mandrake Linux there is an integrated program called MCC (the Mandrake Control Center). You can either use MCC, or you can execute in a terminal window:

	$ /sbin/lsmod | less

You are verifying that different drivers have been assigned to each sound card. If you are not using one of the more recent distributions of Linux (such as Red Hat, Mandrake, or SuSE), you may have to edit the configuration files by hand to get the proper sound card drivers loaded.
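If you do have to wire it up by hand, the usual OSS convention is a pair of sound-slot aliases in /etc/modules.conf. A sketch -- the driver names here are only examples; use whatever lsmod shows for your cards:

	alias sound-slot-0 es1371
	alias sound-slot-1 cmipci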

Next, run a mixer-setting program like KMix. If all is OK, the program should display two distinct mixers. If not, you need to recheck the configuration files.


Now for the tough part...

Many sound programs are not well written. That is to say, the program assumes that only one sound card exists in your system. These sloppy programs will lock up Linux and require using the reset button.

Well-written programs allow you to set which sound card is to be used. XMMS is a well-written program: while it assumes that sound card 0 is the only sound card in the system, it does not lock up Linux. QSSTV is an even better-written program, in that it allows you to configure which sound card is to be accessed.

"ARTSD" is a poorly written program and MUST be disabled when you run dual sound cards in your system. Otherwise, you will be reaching for the reset button!


Lastly...

I am able to play my music via XMMS and sound card 0 while QSSTV simultaneously decodes pictures using sound card 1 under Linux!


rpm in debian ?

Tue, 7 Jan 2003 14:17:47 +0530
Kapil Hari Paranjape (kapil from imsc.res.in)
Question by Joydeep Bakshi (joy12 from vsnl.net)

Hi, I am a Debian user, interested in installing rpm packages (from RH or MDK CDs) in Debian. Is it possible to do so? If yes, how?

[Kapil] A debian package:
Package: alien
Section: admin
Architecture: all
Description: install non-native packages with dpkg
 Alien allows you to convert LSB, Red Hat, Stampede and Slackware Packages
 into Debian packages, which can be installed with dpkg.
 .
 It can also generate packages of any of the other formats.
 .
 This is a tool only suitable for binary packages.
This suggests that "apt-get install alien" would do the trick for you.
This works as follows. You run
   fakeroot alien --to-deb <package.rpm>
This produces a .deb which can be installed.
It is a good idea to read the documentation first. In particular, please heed the warning about not installing any critical packages this way. IF (and this is a big if) some mission-critical package you absolutely must have is not in Debian (stable, testing, or unstable), then it is generally better to run "debmake" on the unpacked source tree to build a proper Debian package. (Of course, to do this you should generally have installed "build-essential".)
[JimD]
... and created a debian/rules file (a makefile starting with
#!/usr/bin/make -f).
[Kapil] The "alien" package is largely for (boo-hiss) non-free stuff that is only available as binaries packaged as RPMs.
[JimD] It is also possible to install the Debian "rpm" package. You can then use RPM commands directly. However, there won't be any dependency database (dbm files), so all dependency checks will fail.
At some point someone may come up with a very clever (and probably difficult to maintain) adapter that will generate a reasonable RPM/DBM database set from a Debian /var/lib/dpkg/info tree. Alas, that is not in the cards for now.
'alien' is probably the best way to go in most cases.

Thanks a lot for your valuable hints. alien is excellent, but the *alien -i* command didn't check any dependencies when I installed OpenOffice (making a .deb from a Mandrake CD), hence it could not be started due to missing libraries.

[Kapil] Dependencies are certainly a problem for alien. The way I understand it, if you have the correct libraries installed then the dependencies are included in the .deb package produced by "alien". Otherwise "alien" only produces error messages about unmet dependencies...
... a bit of a Catch-22, all right!
But if you create the .deb files and install them in the "correct" order (and assuming there are no cross-dependencies!), the binary dependencies should work out correctly. What "alien" does (I'm guessing here) is run "ldd" on the executables and look for the package that supplied the relevant library. This is how it is often done during .deb creation.
Non-binary dependencies are probably unresolvable unless you can lay your hands on an LSB package---whatever that is.
The Linux Standard Base is an industry-wide effort to make life easier for companies that want to produce commercial shrink-wrapped products. If they adhere to the filesystem layout and principles described there, then the package should be installable on any Linux distro which also claims to be LSB compliant.
The installers haven't quite perfected this as far as handling everybody's slight differences in initscript setup, but other than that it's not too bad. At the very least, a knowledgeable system admin has no problem grafting such applications into the company-wide server. -- Heather

1) Is it possible to let kpackage handle this type of converted .deb packages and their dependencies?

[Kapil] I don't know anything about kpackage but I would guess that if the information is not in the .deb file there is not much kpackage can do.

2) If I have a particular directory to store all these converted .deb packages, how do I get kpackage to display those packages in its tree view? (If it is possible at all.)

[Kapil] There are some debian packages that allow you to create your private repositories - there is a sledge-hammer called "apt-move" but there may be something simpler for your requirement.
When the .deb file is installed, if it has no section it will be placed in the "Obsolete and Locally Created Packages" section under aptitude. I assume kpackage has a similar feature, although I've been a bit shy of the X-based apt front-ends, since I prefer to have a minimum of processes running when updating my systems. -- Heather

Once again, thanks for your solution.

[Kapil] As far as openoffice and other such packages are concerned your best bet is the "unofficial apt repositories" (which I forgot to mention in my list of stable/testing/unstable). You can find these unofficial repositories at:
http://www.apt-get.org
I seem to remember that this site lists a site for OpenOffice. You can add that site to the list in /etc/apt/sources.list, and you should then be able to use apt-get (or probably kpackage) to install OpenOffice with dependencies resolved.
Be warned that the unofficial repositories contain unsigned packages, which could include trojans and other such!
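A sources.list entry is a single line; a sketch with a purely hypothetical host:

	# /etc/apt/sources.list -- hypothetical unofficial repository
	deb http://openoffice.example.org/debian woody main

followed by "apt-get update" and then installing the package by whatever name that repository uses.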

Thanks for all your technical info.

best regards


propagating ownership and permissions

Mon, 30 Dec 2002 08:30:09 -0500
Ben Okopnik (the LG Answer Gang)

A while back, I wrote a utility that propagates ownership and permissions from a sample file to a group of files. Imagine a situation where you have, say, several dozen documents with a scattershot list of permissions and owners/groups (since they were created by different people.) The way to "bring them into line" would be to pick a file that already has The Right Stuff - it doesn't even have to be in the same directory - and say:

cpmod /path/to/example/file *

Note that this utility is self-documenting. Its internal "man page" can be read (as long as "cpmod" is somewhere in your path) with

perldoc cpmod

If you want an actual man page, one can be easily created with

pod2man cpmod | gzip -c > cpmod.1.gz

Put the resulting file somewhere in your man directory structure (/usr/share/man/man1, perhaps).

See attached cpmod.pl.txt

[JimD] In newer GNU utils you can use something like:
	#!/bin/sh
	reference="$1"; shift
	for i in "$@"; do
		chown --reference="$reference" "$i"
		chmod --reference="$reference" "$i"
	done

[Ben] Very cool, Jim! I hadn't seen that one before; I was only familiar with the older versions.

[JimD] (Technically I think you can just make that for i; do ... since I think that for loops default to being in "$@" if you don't specify an explicit list. I know they default, but I'm not sure if they default to $* or "$@" --- if you care about the distinction; as usual the subtleties of soft-quoting are there to protect degenerate filenames containin whitespace!).
In other GNU utils you can use a little trickery like:
	#!/bin/sh
	reference="$1"; shift
	# query the reference file (not "$1", which shift has changed), and
	# avoid assigning to UID, which is a read-only variable in bash
	owner=$(find "$reference" -maxdepth 0 -printf "%U")
	mode=$(find "$reference" -maxdepth 0 -printf "%m")
	for i in "$@"; do
		chown "$owner" "$i"
		chmod "$mode" "$i"
	done
Ben, am I missing some subtleties here? (Other than the obvious argument counting, error checking and messages, and some getopts to provide --help, --owner-only, --mode-only, etc.)

[Ben] Not so far as I can see. However, the Perl version is shorter (if you ignore the included man page.) :)


boot to windows by default

9 Jan 2003 05:16:50 -0000
David Mandala, Jim Dennis (the LG Answer Gang)
Question by anurag sahay (anuragsahay from rediffmail.com)

Hi Answer Guy, I have two questions.

1. I have Linux and Windows both loaded on my system. I want to boot to Windows by default. How can I change the lilo.conf file? What are the changes to be made there?

[David] The answer to your question about lilo is to edit the /etc/lilo.conf file.
Your file might look something like this:

See attached linux-and-dos.lilo-conf.txt

Cheers, Davidm
[JimD] Essentially, add a default= directive to your /etc/lilo.conf (or edit your /boot/grub/menu.lst file if you're using GRUB). Read the lilo.conf man page (and/or the GRUB info pages) for more detail on that.
The Linux Documentation Project (http://www.tldp.org ) has an entire section of HOWTOs on boot loaders and related topics (about a dozen of them):
http://www.tldp.org/HOWTO/HOWTO-INDEX/os.html#OSBOOT
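With LILO the change is two steps: name a default label, then rerun the map installer. A sketch, assuming your Windows stanza is labelled "windows":

	default=windows        # near the top of /etc/lilo.conf
	...
	other=/dev/hda1        # the existing Windows entry
		label=windows

Then run /sbin/lilo (as root), or the change won't take effect at the next boot.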


network programming - accepting data

9 Jan 2003 05:16:50 -0000
Kapil Hari Paranjape, Jim Dennis (the LG Answer Gang)
Question by anurag sahay (anuragsahay from rediffmail.com)

Hi Answer Guy, I have two questions.

2. This is about Unix network programming: how do I accept data from any given port?

Thanking you,
yours, anurag

[Kapil] Have a look at the utilities "netcat" and "socat".
[JimD] You could use netcat (often named /usr/bin/nc) or socat directly (from shell scripts, etc) to listen on arbitrary TCP or UDP ports. Note: the process has to have 'root' privileges to listen on "privileged" ports -- those from 1 to 1023 inclusive (or maybe it's 1024 inclusive --- I never remember that one).
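For instance, a minimal listener -- a sketch in the traditional Hobbit netcat syntax (option spellings vary between netcat versions):

	# listen on TCP port 2000 and print whatever arrives to stdout
	nc -l -p 2000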
More to the point, you can read the source code to netcat or socat (included with most distributions on the "Source Code" disc, or readily downloadable from many archive sites on the net). As a Debian user, I find it most convenient to get most sources with a simple 'apt-get source' command. Debian tracks, indexes, and automatically fetches, unpacks, and patches the sources for me. With an 'apt-get build-dep' command I can also have Debian fetch and install all of the packages that are required to build almost any other package from its sources (they're still working on that feature).
It makes me reluctant to hunt down the upstream sources, which would be suitable for other distros and other forms of UNIX.
These things change far too frequently, but Google is our friend. It appears that the current canonical location for finding Hobbit's netcat sources is at:
http://www.atstake.com/research/tools/network_utilities
... where he (Hobbit) seems to have an e-mail address. Perhaps he works at @Stake.
As for socat, its author, Gerhard Rieger, conveniently lists the package's home page in the man page that comes with the package (at least with the Debian package): http://www.dest-unreach.org/socat
Reading the sources to these will teach you a lot about UNIX network programming. In particular, netcat has been around for a very long time and has had VERY FEW bugs reported against it. It's been scrutinized by thousands, probably tens of thousands, of programmers.
You should also buy Richard Stevens' seminal textbook on UNIX Network Programming (Prentice Hall). Read more about that at:
http://www.kohala.com/start


Key bindings in X

Wed, 22 Jan 2003 07:51:49 +0800
jamie sims (jaymz from operamail.com)

Here's the fix I finally hit upon to get those F keys working in xterm. I edited a copy of /usr/X11R6/lib/X11/app-defaults/XTerm and added the following:

See attached XTerm.app-defaults.txt

I then saved it as .Xdefaults and it works very well.

You can use the .Xdefaults file in your home directory to add or override X resources for any application - so if you already have some settings stored there, make sure you add this into the file instead of replacing it. -- Heather
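Resource overrides are one-per-line "name: value" pairs; a quick sketch with hypothetical values:

	! ~/.Xdefaults -- comment lines start with !
	XTerm*font: 9x15
	XTerm*scrollBar: true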


alsa in debian

Sun, 19 Jan 2003 12:52:21 +0530
Kapil Hari Paranjape (kapil from imsc.res.in)
Question by Joydeep Bakshi (joy12 from vsnl.net)

Hi there. You know ALSA is not built into Debian 3.0 by default, but the alsa-utils, driver, and header files are present in the 7-CD set. Could anyone please tell me how to build the ALSA modules in Debian, and the required packages for this?

Note: there are some alsa-modules packages (on the CDs) based on the 2.4.16 kernel, but mine is 2.4.18.

Where you got the kernel-image-2.4.18, you should also find the relevant alsa-modules-2.4.18. Anyway, here is the procedure to build ALSA modules for Debian.

1. Use apt-get to install the relevant alsa-source package. You could also download the sources from the alsa ftp site --- I haven't tried that but it should work.

2. Install the relevant kernel source package, and the package kernel-package.

3. Unpack the kernel source and alsa-modules in /usr/src.

4. Run "make-kpkg --config=menuconfig" configure in the kernel source directory.

5. Run "make-kpkg kernel_image" and "make-kpkg modules_image" (condensed in the sketch below).

6. This should build a pair of compatible kernel-image and alsa-modules package files which you can install with dpkg.

7. Of course you need to edit your grub menu or lilo conf file and so on to run this kernel.

8. You can then configure alsa with alsa-conf alsa-base and so on.

Remember to set and save the mixer settings so that the /etc/init.d/alsa script (which is part of alsa-base) can restore them.
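Condensing steps 3-6 into commands -- a sketch only, since the exact package and directory names vary with the kernel version:

	cd /usr/src/kernel-source-2.4.18
	make-kpkg --config=menuconfig configure
	make-kpkg kernel_image
	make-kpkg modules_image
	dpkg -i ../kernel-image-2.4.18*.deb ../alsa-modules-*.deb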


pppd

Fri, 3 Jan 2003 11:24:26 -0800
Mike Iron Orr, Ben Okopnik (the LG Answer Gang)
Question by Joydeep Bakshi (joy12 from vsnl.net)

The pppd command shows a few strange characters in RH, but in Debian it shows an error

"remote system needs to authenticate itself" and then disconnects.

[Ben] Ah, I'd missed this part. Neil is right - you don't have the "noauth" option defined in your "/etc/ppp/peers/provider" or whatever options file you're using.
[Iron] I haven't used PPP for years (but I will soon, when I set up my mom's computer), but yes, if you're dialing into an ISP you want "noauth". Otherwise your Linux box will require authentication from the server, which the server won't do. The server thinks *it's* trusted and *you're* the one who has to authenticate yourself. And even if it were willing to authenticate itself, how could it? It doesn't have a password to authenticate itself with. The (nonexistent) password the server would authenticate itself with is different from the user password you authenticate yourself with.
If people are dialing into your Linux system, then you do want authentication for those calls.
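The fix itself is one word in the options file; a sketch:

	# /etc/ppp/peers/provider (or whatever options file you use)
	noauth    # don't require the peer to authenticate itself to us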

Thanks for the solution; it is working now.


Is that your FIN_WAIT Answer?

Mon, 13 Jan 2003 19:00:25 -0800
Jim Dennis (the LG Answer Guy)

I am using Red Hat Advanced Server 2.1, kernel 2.4.9, and am having the following problem:

If I log on as userA via a telnet session, run Test_pgm, and then disconnect the telnet session by closing the window instead of properly logging out, this is what the ps command shows:

UID    PID  PPID  C STIME TTY          TIME CMD
userA 8505     1  0 14:00 ?        00:00:00 login -- userA
userA 8506  8505  0 14:00 ?        00:00:00 -bash
userA 8540  8506 87 14:00 ?        00:00:42 Test_pgm

Notice that there is no longer a TTY associated with the running program or the original login, and the PPID of the login has been inherited by process ID 1. Furthermore, if I run top, the results show that the CPU idle % is zero, with Test_pgm using up all of the CPU. The load average goes through the roof; I've seen it up close to 30.0. However, the system's performance does not seem to be affected, for me or for any of the users. These processes are not listed as zombies and are never cleaned up by the system unless I kill the login process or restart the server.

Most of this seems normal (for a program that's ignoring SIGHUP). The loadavg number seems odd.

This scenario happens whether the user is running an in-house C program or an operating system utility such as Red Hat's setup. Within our own C programs, I have tried to capture a terminating signal using the signal() call, but I am not seeing any of the signals that I would expect, such as SIGTERM or SIGHUP.

Does anyone have any ideas as to how to tell Red Hat to take down the processes associated with a telnet session when its tty disappears?

Thanks in advance.
DP

in.telnetd should be sending a SIGHUP to the process when the TCP connection is closed (including when the keepalive fails?).
Run 'netstat -na' and see if the TCP connection is lingering in the FIN_WAIT state. This could be a case where your (probably MS Windows) telnet client is failing to properly perform the three-way disconnection handshaking required of TCP. (I recall problems with some MS Windows FTP clients resulting in similar symptoms on high-volume public FTP servers.)
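A quick way to check -- a sketch:

	# list connections stuck in a FIN_WAIT state, with their peers
	netstat -na | grep FIN_WAIT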
Try it with a UNIX telnet client.
Try it with ssh.
If it works with ssh, perhaps you can use that as leverage with your users and management to abandon this insecure and deprecated protocol! (PuTTY is a very good, and free, ssh client for MS Windows operating systems. There are many others.)
Other than that, I would try upgrading the kernel (2.4.9 was pretty miserable under memory load) and watching one of these sessions with tcpdump and strace (so you can correlate what's happening on the wire with what's happening in the process). Upgrading to RH 7.3 might also be good, since the compilers and libraries in 7.1 and 7.2 had ... issues.
Without knowing more about what Test_pgm is supposed to do, I can't immediately suggest any other workarounds.


direct rendering for nvidia RIVA 128

Sun, 19 Jan 2003 00:13:51 +0100
Yann Vernier (yann from algonet.se)
Question by linux-questions-only@ssc.com, Scott Frazier (rscottf from ieee.org)

I have an nVidia Velocity 128 video card, which uses the RIVA 128 accelerator chip. I'm running Mandrake 9.0, which sets it up with GLX (3D capability), but with no direct rendering (it uses software rendering). Needless to say, this REALLY slows it down for games. Does anyone know how I might resolve this? I've tried changing an entry in the XF86Config file, in the Modules section: I added the line Load "dri", to no avail. I'm pretty sure the card is DRI-capable, as it is able to do bus mastering, which is a must for this.

Sorry to disappoint you, but the last time I checked there was no DRI driver for the Riva 128. It's among the earliest nVidia chips, and nVidia's own binary-only driver only supports the TNT or later (two models newer). There was a partly accelerated Mesa-based GLX implementation for XFree86 3 that supported it, however, called Utah-GLX. You may be able to run that, but you'd obviously lose out on all the other new features of XFree86 4.


xcdroast post cdrom mount problem

Fri, 10 Jan 2003 17:32:51 -0500
Question by Brian (bbertsch from surfside.net)

Hello, I'm a recovering OS/2 user. I used it today, and I may have to tomorrow... but I can stop any time I want to... but my modem....

Anyway, after I use xcdroast (which I am getting used to, under RH8/KDE), I am unable to check the CD-ROM just made, because the CD-ROM will not mount. (IDE double-cheapo brand 48x, works great.) I have to use the newly-made CD on my OS/2 machine to check it. My friends laugh at me.

thanks, brian

[JimD] You probably need to change /dev/cdrom to be a symlink to /dev/scd0 or something like that.
Linux normally handles your ATAPI CD-R drive via a SCSI emulation layer. Once this layer is active (possibly via a loadable module), all access to the CD has to go through the SCSI device nodes (/dev/sg* for writing, and /dev/scd0 for mounting CDs).
Try that. Try this command first:
mount -t iso9660 -o ro /dev/scd0 /mnt/cdrom
... from a root shell prompt.
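The symlink change itself is a one-liner (a sketch, as root):

	ln -sf scd0 /dev/cdrom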
[John] Greetings from another former OS/2 user - although I used it for about two years or so, then switched to Linux.
Anyway, have you read CDs made from that cooker before? It could be a hardware issue. Some of those really cheap devices lack features, but the chances of that seem a bit slim with a 48x drive, because those compatibility problems are usually more common with older drives. I wouldn't rule it out as a possibility, though.


iptables: What They Are and What They Do

Tue, 7 Jan 2003 04:18:33 -0800
Jim Dennis (the LG Answer Guy)
Question by peter collins (collin_sq2003 from yahoo.com)

Could you please explain to me what iptables are and what they do?

IPTables are tables (lists) of packet-filtering rules in the Linux kernel. They are added (passed into the kernel's address space) and manipulated using a command named 'iptables', and they are interpreted by various kernel modules written to the "netfilter" APIs (primarily by Paul "Rusty" Russell).

Each rule is a pattern matching some sort of network traffic based on many criteria: IP source or destination addresses, TCP or UDP source and destination ports, ICMP type, IP or other options (flags), connection status (correlated from other, previous packets), even MAC addresses, which interface and direction they're coming from or destined to, which local processes are generating them, etc. Part of each rule is a "disposition" like DROP, REJECT, ACCEPT, or a "jump" to another ruleset (table).
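The pattern/disposition split is easy to see in a pair of rules; a sketch (not a complete firewall, and assuming a 2.4 kernel with the state match module):

	# refuse incoming telnet; allow replies to established connections
	iptables -A INPUT -p tcp --dport 23 -j DROP
	iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT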

The ability to conditionally process different packets in various ways, and even to conditionally "call" some rulesets, makes iptables into a very specialized programming language. IPChains was a somewhat different, simpler packet-filtering language (also by Rusty), and ipfwadm was a much simpler packet-filtering system back in the 2.0 kernel days.

It looks like the 2.6 kernel, probably due out sometime this year, will be the first one since 1.3 that hasn't had a major overhaul of the packet-filtering language. IP tables was released with 2.4 and has only undergone minor bug fixes and refinement since then.

Note that most of the packet-filtering rules relate to whether to allow a packet through the system; to DROP it (with no notice) or REJECT it (providing an ICMP error back to its sender, as appropriate); to MASQUERADE or TRANSLATE it (change its apparent source address and port, usually setting up some local state to dynamically capture and rewrite any response traffic related to it); to REDIRECT it (change its destination address and/or port); or to change its "ToS" (type of service) bits. It's also possible to attach an FWMARK to a packet, which can be used by some other parts of the Linux TCP/IP subsystem.

What IPTables is NOT:

There is another subsystem, similarly complex and seemingly related --- but distinct from netfilter (the kernel code that supports IP tables). This is the "policy routing" code, which is controlled with the tersely named 'ip' command (the core of the iproute2 package).

Policy routing is different from packet filtering. Where packet filtering is about whether the packets go through, and whether some parts of a packet are rewritten, policy routing is purely about how they are sent towards their destination. Under normal routing, every outbound and forwarded packet is sent to its next hop based exclusively on its destination address. Under policy routing it's possible to send some traffic through one router based on its source address, port, or protocol characteristics, etc. This is different from the iptables "REDIRECT", because this doesn't change the packet --- it just sends it to a different router based on the policy rules.
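In 'ip' terms, a policy is a rule that selects an extra routing table; a sketch (the addresses and table number are hypothetical):

	# send everything sourced from 192.168.1.0/24 out via a second router
	ip rule add from 192.168.1.0/24 table 100
	ip route add default via 10.0.0.2 table 100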

The two subsystems can interact, however. For example, policy routing does include options to match on the ToS or FWMARK that might be attached to a packet by the iptables rules. (These FWMARKs are just identifiers that are kept in the kernel's internal data structure about the packet --- they never leave the system and can't go over the wire with the packet. ToS is only a couple of bits in the header, hints that traditionally distinguish between "expedited" (telnet) and "bulk" (ftp) traffic.)

The iproute2 package and its 'ip' command replace the ifconfig command and provide considerable control over interfaces. The package also allows one to set "queueing disciplines" on interfaces, which determine which packets get to "go first" when more than one of them is waiting to be sent over a given interface.

There is a lot more I could tell you about Linux routing and network support. For example, none of this relates to dynamic routing table management. There are user-space programs like routed, gated, and various GNU Zebra modules that can listen to various dynamic routing protocols, such as RIP, RIPv2, OSPF, BGP, etc., to automatically add and remove entries in the kernel's routing tables. Some of these might also be able to dynamically set policies as they do so. There is also a Linux compile-time option called "Equal Cost Multi-path" which is not part of policy routing. Normally, if you added two static routes of "equal cost", then the first one (of the lowest cost) would always be used, unless the system was getting "router unavailable" ICMP messages from somewhere on the LAN. However, with Equal Cost Multi-path the system will distribute the load among such routes. This can be used to balance the outbound traffic from a very busy system (such as a popular web server or busy mail gateway) among multiple routers (connected to multiple ISPs over multiple T1s or whatever).

(This is similar to a trick with policy routing --- assigning a couple of IP "aliases" --- different IP addresses --- to one interface, one from one ISP and another from a different one, and using policy routing to ensure that all response/outbound packets from each of these sources go through the appropriate router. DNS round robin will balance the incoming load, and policy routing will balance the response load. Equal Cost Multi-path will balance traffic initiated from that host.)

Again, all of these last paragraphs are NOT IP tables. I'm just trying to give you a flavor of the other networking stuff in Linux, and to let you know that, if you don't find what you need in the iptables documentation, it might be somewhere else.

To learn more about netfilter and IP tables, please read through the appropriate HOWTOs:

http://www.tldp.org/LDP/nag2/x-087-2-firewall.future.html
http://www.netfilter.org


Code folding in Vim

12 Jan 2003 23:53:53 +0530
Ashwin N (ashwin_n from gmx.net)

Vim versions 6.0 and later support a new feature called code folding. Using code folding, a block of code can be "folded" up into a single line, thus making the overall code easier to grasp.

The Vim commands to use code folding are quite simple.

To create a fold, position the cursor at the start of the block of code and type: zfap

To open a fold: zo

To close a fold: zc

To open folds throughout the file: zr (zR opens all folds completely)

To close folds throughout the file: zm (zM closes all folds completely)

For more commands and information on code folding, use Vim's built-in help: :help folding
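If you would rather have folds created automatically, a one-line sketch for ~/.vimrc:

	" fold according to syntax structure; zi toggles folding on and off
	set foldmethod=syntax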

[John Karns] You're quite right. Folding is particularly useful for long sections of code that contain loops, etc. I use it extensively in this context.
Other uses include long paragraphs of prose.
But make sure you are in command mode! If you are in text-entry mode, typing "zfap" will literally embed that string in your text!
If you're in text entry mode, press Escape to get back into command mode.
Vi has two command modes and a text-entry mode. When you start, you are in ordinary command mode. When you type a colon (such as the one preceding the word "help" above), you get a small colon prompt. The above commands are NOT colon-mode commands, except for help. But you do need your cursor at the right location.
The colon prompt is also called "ex mode" by old hands at vi, but I'm not entirely sure that all the commands that use it are really old ex commands. Some are surely long words allowing you to access enhanced features, too, because there are only so many letters in the alphabet.
To get out of the help mode you may need to type :q to quit the extra window it created. Your original textfile is still around, don't worry. -- Heather


Debian "Woody" boot error

Tue, 21 Jan 2003 16:30:32 -0600
Robos (the LG Answer Gang)
Question by Rich Price (rich from gandalf.ws)

After installing the Woody release of Debian using the idepci kernel I noticed the following boot message

modprobe: Can't locate module char-major-10-135

Some Google searching led me to the following factoid:

"char-major-10-135" refers to the character device, major 10, minor 135,

which is /dev/rtc. It provides access to the BIOS clock, or RTC, the Real Time Clock.

[Robos] OH MY GOSH! REINSTALL! (Just kidding)
This doesn't actually mean that your computer has no sense of time at all; it just means you won't be able to access the additional precision it has available, without extra code in the kernel. If you have SMP, the kernel docs warn that it's important to compile this in. Otherwise, very few things actually care.
But in a new enough kernel, with devfs support, any app which is curious about it (that is, would use the extra support if you have it, but ignore it if you don't) will provoke a complaint when the userland devfsd attempts to autoload the module. You can tell it to ignore that stuff, detailed in devfsd's man page. -- Heather

So, fine, I want it.

[Robos] Hmm, ok

I looked around on the distro CDs, but I couldn't find the char-major-10-135 module. No luck at the Debian site either. Where can I find a copy of this module compiled for the Debian Woody idepci kernel?

[Robos] Actually, it has to be compiled into the kernel, either built in or as a loadable module. It seems the Debian kernel package maintainer did neither. So either you bake your own kernel and tick the appropriate box in make xconfig, or you look (grep) through the configs of packaged kernels to find one which has RTC set to y or m. BTW, I get this message too on all my machines with hand-made kernels, and it hasn't bothered me a bit so far...
[Iron] char-major-10-135 is a generic name; the module itself won't be called that. Take a look in /etc/modules.conf. The "alias" lines map the generic name to a specific module that provides it, for instance:
alias char-major-10-175 agpgart
In this case, some program or daemon is trying to access the real time clock. You can also create your own aliases; e.g., I name my Ethernet cards according to their interfaces:
alias eth0 3c59x
alias eth1 eepro100
So when my network initialization script does:
modprobe eth0
modprobe eth1
I know eth0 will connect to the 3C905 card (DSL) and eth1 will connect to the EE PRO card (LAN). And if I have to change cards later, I can just change the alias lines and leave everything else alone. (The only thing I can't do is use two cards of the same brand, because then I would have no control over their initialization order except by seeing which PCI slot has the lowest base address: "cat /proc/ioports". If eth0 and eth1 get reversed, the network won't work because packets will get routed to the wrong network.)
Anyway, the easiest way to "fix" your problem is to add an alias:
alias char-major-10-135 off
That tells modprobe to shut up because there is no module for that service. So whatever is asking for that module will abort or do without. Whether that's a problem or not depends on what the program is trying to do and whether you need it. I have never run into problems aliasing char-major-*-* modules off.
Of course, the "correct" solution is to find out what's using the module and disable it if you don't need it.
In my Linux 2.4.17 source, "make menuconfig", "character devices", "Enhanced Real Time Clock support", "Help" (press Help while the cursor is on the RTC line) says the module file is "rtc.o". You can also guess that from the config option name at the top: CONFIG_RTC. That's the file you want from your distribution disk. On Debian it would be in a kernel modules package.
Note that Debian has a configurator for /etc/modules.conf. Instead of editing that file directly, edit /etc/modutils/aliases and then run "update-modules". See "man 8 update-modules".
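On Debian that works out to two commands (a sketch, as root):

	echo 'alias char-major-10-135 off' >> /etc/modutils/aliases
	update-modules    # regenerates /etc/modules.conf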


Proxying with MAC address

Sun, 12 Jan 2003 05:00:20 -0800
Jim Dennis (the LG Answer Guy)
Question by Ganesh M (gansh from rediffmail.com)

Thanks to Karl-Heinz Herrmann for bearing with me; just one little question, please.

Is it possible to restrict Internet access by private LAN PCs based on their MAC addresses instead of their IP addresses, by any means (masquerading, proxying, etc.)? Can masquerading and proxying co-exist, and if so, what is the advantage?

Thanks
M Ganesh

It should be possible (though very cumbersome) to configure your networks so that only registered MAC addresses are routed from one internal network to another (including via the border router to the Internet).

Under Linux you could write scripts to do this using the MAC Address Match option/module in the Linux kernel configuration (*) (named: CONFIG_IP_NF_MATCH_MAC in the .config file).


*(Networking Options --> Netfilter Configuration --> IP Tables)
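With that option compiled in, the rules might look like this sketch (the MAC address here is hypothetical):

	# forward traffic only from registered cards; drop everything else
	iptables -A FORWARD -m mac --mac-source 00:50:8B:12:34:56 -j ACCEPT
	iptables -A FORWARD -j DROP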

However, it's probably an ill-advised strategy. Many people try to limit this by setting up their DHCP servers with known MAC addresses and refusing to give out IP addresses to unknown systems. They might then couple this with monitoring, using the 'arpwatch' package to detect new ARP (MAC/IP address) combinations and 'snort' to warn them of other suspicious network activity.

As for co-existence of IP masquerading and application-layer proxying: yes, they can co-exist --- and are even sensible in some cases. In fact, it's common to use IP masquerading with the Squid caching web proxy in its "transparent proxy" configuration.

In general, you might use proxies for those protocols that support it, and for inbound connections, while letting systems fall back on IP masquerading for other work (subject to your packet filtering, of course).

The advantages of application-proxy systems lie largely in three dimensions: they can be quite simple, and run in user space, often as a non-privileged process (security and simplicity); they can reflect higher-level policies, because they have access to the application and session layers of the protocol being proxied (flexibility and control); and they may be able to provide better performance, especially via caching (performance).

However, any particular proxy might not provide real advantages in all (or even ANY) of these areas. In particular, the Delegate proxy system seems to be riddled with buffer overflows, for example. Squid is a nice caching proxy for web and some other services, and it has some security and policy-management features and optional modules. However, Squid configuration and administration can be quite complicated. It's far too easy to inadvertently make your Squid system into a free anonymizing proxy for the whole Internet, or into an unintentional inbound proxy to your own intranet systems.

While a proxy might have access to the application/session layer data (in the payloads of the IP packets) --- it might not have a reasonable means for expressing your policies regarding acceptable use of these protocols.

Also, there are always new protocols for which no proxies have been written. There will frequently be considerable demand from your users and their management to provide access to the latest and greatest new toys on the Internet (Pointcast was an historic example; Internet radio is, perhaps, a more recent one).

These issues are very complex, and I can't do them justice at 5am after staying up all night ;)


fwd: Re: [TAG] wrestling with postfix...

Sun, 19 Jan 2003 09:01:44 -0800
Dan Wilder (the LG Answer Gang)
Question by Radu Negut (rnegut from yahoo.com)

Hi! After going through the Postfix documentation twice, I still couldn't figure out whether it is possible to configure mail for groups (e.g. sales_managers@domain.com) other than by aliasing all group members to that address in /etc/postfix/aliases. Does postfix reread the aliases as well when 'postfix reload' is issued, or only the .cf file? Does 'service postfix restart' reset all mail queues, resulting in dropped/lost mail? I've looked

For alias lists, add entries to /etc/aliases, then run

postalias /etc/aliases

If you don't care whether the new aliases are effective instantly, you're done. Very shortly Postfix will notice the aliases file is updated and will reload it.
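The entry for the group in the question is a single line; a sketch with hypothetical members:

	# /etc/aliases
	sales_managers: alice, bob, carol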

You may keep aliases in additional files. See the

alias_maps =

parameter in main.cf. You can add as many alias files as you like.
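For example (the second file here is hypothetical):

	# in /etc/postfix/main.cf
	alias_maps = hash:/etc/aliases, hash:/etc/postfix/group-aliases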

For bigger lists, or frequently changing ones, investigate mailing list software. I use Mailman or majordomo myself. See the URL below.

around, but couldn't find whether postfix can be configured to use accounts other than those from /etc/passwd (and I'm not talking about aliases). What I mean is normal mail spools, but for users who are specified in a separate file and who do not have any permissions on the system whatsoever.

Briefly: you can't do normal UNIX mail delivery except to users from /etc/passwd. However, you can do POP3/IMAP delivery to software that maintains its own list of users. You're looking for something like Cyrus. You'll find it under the POP3/IMAP servers section of

http://www.postfix.org/addon.html

Take the time to browse the other pages of the postfix.org site.

-- Dan Wilder


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
(?) The Answer Gang (!)
By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and... (meet the Gang) ... the Editors of Linux Gazette... and You!


We have guidelines for asking and answering questions. Linux questions only, please.
We make no guarantees about answers, but you can be anonymous on request.
See also: The Answer Gang's Knowledge Base and the LG Search Engine



Contents:

¶: Greetings From Heather Stern
(!)LILO problem with dual Linux boot on separate drives
(!)filter out spam and viruses
(!)The One Remaining (non-Deprecated) Use for rsh

(¶) Greetings from Heather Stern

Greetings, everyone. It's another day, another penguin over here at The Answer Gang. I'm sorry there are only three messages this time, but I think you'll find them juicy.

Statistics: there were about 460 messages, and almost none of that was spam, thanks to Dan Wilder's hard work keeping the list on a leash. I'd say the most common way to not get an answer, or to merely get grumped at instead of seeing a useful one, is to combine the twin errors of using HTML-based mail and not telling us what few things you've looked up first. We can do much better at translating technese to English than we can at translating confused fuzziness into a technical question.

You folks had a gazillion good tips out there and I'm digging myself out from under them right now. [Imagine: a computer workroom filled with little grey envelopes filled with pennies all gabbing about little Linux tidbits. It's quite a chatterbox.]

But that's hardly fair. The real reason I'm running late and a few pennies short is that I've been working really hard on the upcoming LNX-BBC. It's gonna be this year's membership card for the Free Software Foundation. I mean, if you're not a member then perhaps you should be anyway... but this is a definite plus. It's still a toy for experts though. More on cool toys for "the rest of us" in upcoming months. There are lots and lots of good projects out there.

[It wasn't all Heather's fault. Our FTP server played a game of "let's not but pretend we did", accepting Heather's Answer Gang upload but not storing it. Bad FTP daemon, bad! It also has been dying the past few days, which Dan has been combating via upgrades and logfile analysis. At one point logrotate was dying and taking the daemon down with it. -Iron.]

Have fun!



Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/


(!) LILO problem with dual Linux boot on separate drives

From Rich Price

Answered By Matthias Posseldt, Jim Dennis, Mike "Iron" Orr, John Karns, Heather Stern, Benjamin A. Okopnik

(?) I recently bought a new IDE disk drive and installed it as /dev/hdb in my server. While leaving my current [Slackware] distribution on /dev/hda, I wish to install the Debian distribution on /dev/hdb.

After completing the basic Debian install, I edited the lilo.conf file to include a second image. The original file was:

See attached rich-price.slack.lilo-conf.txt

The newly modified file is:

See attached rich-price.slack-debian.lilo-conf.txt

When I tested this config file, I got:

See attached rich-price.slack-debian.lilo-complains.txt

/boot/vmlinuz-2.2.20-idepci does exist on /dev/hdb1 but not of course on /dev/hda1. Is this the problem? If so, how do I access an image on a different hard drive?

I downloaded the "LILO User's Guide" and read about the alternate image format:

   image=/dev/hdb1
      range=sss-eee

where sss-eee is the starting and ending sector range of the image, but I don't know how to find out what to use for sss-eee.

Rich Price

(!) [Matthias] Just mount the corresponding partition and use this path then, e.g.
image = /mnt/newdebianroot/boot/vmlinuz-2.2.20-idepci
root = /dev/hdb1
label = Debian
A different option is to separate the boot and root partitions and mount the /boot partition in both Slackware and Debian, keeping /etc/lilo.conf in sync, so that both the /boot/vmlinuz-debian-2.a.b and /boot/vmlinuz-slackware.2.x.y kernel images can be referred to by their /boot paths. An easy way is to symlink /etc/lilo.conf to /boot/lilo.conf in both installations; then you can happily run lilo from either Debian or Slackware.
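For example, the first suggestion boils down to something like this (the mount point is arbitrary):

   mount /dev/hdb1 /mnt/newdebianroot
   vi /etc/lilo.conf           # add the image/root/label stanza above
   /sbin/lilo                  # reads the kernel through the mounted path
   umount /mnt/newdebianroot   # safe once the map file has been written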
(!) [JimD] I'd personally avoid the esoterica of any "alternate image format" (if possible) and simply put the desired kernel and any required initrd (compressed initial RAMdisk) images onto the /boot partition (or into the /boot directory of the root filesystem) back on /dev/hda.
There is no problem sharing one /boot directory among multiple Linux distributions --- and it's the easiest way to do it.

(?) Thanks to both of you for your answers.

I have sidestepped the problem for now by booting off of a floppy. But I think Jim's suggestion will make a better long-term solution.

(!) [Iron] Jim's method is the easiest and most convenient. However, there's no reason the other kernel has to be in /boot as long as it's mounted somewhere when "lilo" is run. Older Linux distributions used to put the kernel in / by default.

(?) I am not a programmer [any more] but I think that an enhancement to LILO which would allow the use of different file systems for different boot images would be good. Something like this:

image = /boot/vmlinuz-2.2.20-idepci
root = /dev/hdb1
imagefs=/dev/hdb1
label = Debian

Where imagefs is a new parameter used to specify the file system that contains the boot image file.

(!) [Jim] Unfortunately this suggestion exhibits a fundamental misunderstanding of how LILO works. The "image" files are accessed as regular files, and thus they must reside on some locally mounted filesystem when you run /sbin/lilo. /sbin/lilo then issues ioctl()s to get the low-level block address information about where the image file's parts are located. Those raw device/block addresses are written into the map file (usually found in /boot). The address of the map file is written into the boot block (usually in the MBR of the hard drive).
Your hypothetical imagefs= would require that /sbin/lilo either incorporate all the code to directly access the device/partition as a filesystem (which is infeasible for a large number of filesystems and is just bad engineering --- code duplication for even a single type), or it would have to do something like: make a temporary mount point, mount the imagefs, use this temp mount as a relative chroot point, then proceed as before. It's VASTLY easier for you to mount the fs yourself and simply point the appropriate entries in your /etc/lilo.conf at the kernel image (and initrd images, etc.) before running /sbin/lilo.
In my MANY discussions about LILO I find it convenient to distinguish between LILO (the whole package) and /sbin/lilo (the utility that reads the /etc/lilo.conf file and various command line options and produces/writes a map file and a bootloader --- into the MBR, onto a floppy, or into a filesystem superblock or "logical boot record").
Run strace on /sbin/lilo some time and you may find enlightenment.
(!) [John] Yes, Linux is nirvana! :^)
(!) [Ben] I've found that running "strace" often precedes enlightenment. Also, like reading the dictionary (who the heck can stop at just one entry?), it's usually enlightenment on topics far beyond the original one.
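For the curious, the incantation might look like this (the trace file name is arbitrary):

   strace -f -o /tmp/lilo.trace /sbin/lilo -v
   grep ioctl /tmp/lilo.trace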
(!) [Iron] What would the information be useful for? "lilo" uses the image= path to determine the kernel's physical location, the bootstrapper uses the physical location, and at no time is /boot required to be mounted (except when running "lilo").
However, a few programs use /boot/System.map (or /boot/System.map-VERSION), and these may behave funny if it's not accessible or is out of sync with the running kernel. Currently I see that klogd (the kernel logging daemon) has it open while it's running. But stopping klogd, unmounting /boot and restarting klogd does not cause any errors, although it does generate a log message of:
Jan 10 14:55:28 rock kernel: Cannot find map file.
Jan 10 14:55:28 rock kernel: No module symbols loaded.
"man klogd" says it uses System.map to translate the numeric traceback of a kernel error to a list of functions that were active at the time, which makes it easier for kernel developers to track down what caused the problem.
Dan says modprobe also uses System.map. "strings /sbin/modprobe | grep System.map" shows that word exists in the code, although the manpage doesn't mention it. So you may need /boot mounted when loading modules.
Is there anything else that likes to have System.map around?
(!) [Ben] Oddly enough, Netscape. I remember doing some complicated messing around with multiple kernels, way back when, where I'd hosed System.map in some way or another. It didn't seem to affect too many things, but the annoying error message I got every time I fired up Netscape finally got me to straighten it all out. I was a young Linux cub then... :)
(!) [John] For a time I used to unmount the /boot partition in the init scripts to avoid risking corruption of the ext2fs there during normal operation. Then I noticed the above errors (didn't seem to affect loading modules though), and switched to remounting as ro instead, which rid me of the error, and avoids the problem of having it mounted rw. Alternatively I suppose that one might be able to change the fstab entry to mount it ro. Not sure if there is a requirement to have it rw in the early boot process.
(!) [Iron] I have had /boot mounted read-only for years and have had no problem.
(!) [Heather] On my multi-distro setup, I also mount /boot read-only; depmod tries to run during every boot, and complains that it cannot write. As long as I deliberately run depmod while my /boot is read-write whenever I'm adding modules or new kernels, then this is an ignorable warning because I already did that. When running depmod by hand on a kernel which you do not yet have booted, you definitely need a copy of its System.map on hand, for use with the -F parameter. If I fail to do this, the distro that wants this is a very unhappy camper, because with no depmod information at all, it cannot load any modules.
I occasionally build monolithic kernels deliberately, but that's barely viable with today's huge list of kernel features.
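For reference, that remount-and-depmod routine boils down to something like this (the kernel version shown is only an example):

   mount -o remount,rw /boot
   depmod -a                                      # for the running kernel
   depmod -a -F /boot/System.map-2.4.18 2.4.18    # for one not yet booted
   mount -o remount,ro /boot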

(?) Thanks, Jim.

This information makes LILO much more understandable to me. It enables me to see why my suggestion doesn't make any sense. It also makes the light bulb go on about Matthias's original answer which I admit I didn't understand until now. This is great! I now have two ways to solve my problem and enough understanding about what I am doing to finally be dangerous ;-}>

I think that adding something similar to your comments to the LILO User's Guide would be helpful to part-time Linux hackers like me. Perhaps in section 3.3.1 a second paragraph could be added saying:

"The image file is accessed as a regular file, and thus it must reside on a locally mounted filesystem at the time that /sbin/lilo is run.

(!) [JimD] ... kernel and initrd images files are accessed by /sbin/lilo ...

(?) /sbin/lilo will then issue ioctl()s to get the low-level block address information which shows where the image file's parts are located in the file system. This file system does not have to be on the same physical drive as the root file system."

(!) [JimD] ... but must be accessible to the bootloader code (generally via BIOS functions).

(?) Did I get it right? Do you think I should suggest this to the maintainers?

(!) [JimD] I've touched it up a bit --- the maintainers would undoubtedly tweak it more to their liking if they choose to incorporate it.
Please feel free to send this to John Coffman and to the maintainers of the appropriate HOWTOs (as referenced in my earlier post).
I'd also highly recommend pointing them at the years of Linux Gazette Answer Guy/Gang material on this topic --- so they can understand how frequently these questions come up and glean some ideas for how we people in the "support trenches" have been trying to dispel the confusion that plagues so many LILO users. (Did I mix too many metaphors there?)
In particular if they explain LILO as analogous to programming: /etc/lilo.conf is the "program source", /sbin/lilo is the compiler and the bootloader and map files are "objects" --- then a large number of people will "get it." Even people with the barest modicum of (compiler) programming experience understand why changing the source code doesn't change the program until it's recompiled.

(!) filter out spam and viruses

From Jonathan Becerra

Answered By Faber Fedor, Neil Youngman, Kapil Hari Paranjape, Heather Stern

(?) I'm very new to Linux but like what I see

The object here is to install software that will filter all my e-mails and keep out viruses

(!) [Faber] Look into Amavis (www.amavis.org) and your favorite anti-virus software (Sophos, McAfee, etc.). If you're using Postfix as your MTA, drop me a line and I can help you get the three of them working.

(?) and catch re-occurring spam.

(!) [Faber] Check out SpamAssassin (www.spamassassin.org). It rocks!
(!) [Heather] Since the list which all Answer Gang members are on uses SpamAssassin as one among several defenses, I think yes - it does :) but it is not infallible. With any mail filtering answers I encourage you to take a look at its principles, and decide if you like them, rather than just take someone else's word on what is or isn't spam.
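For those delivering through procmail, the usual way to hook SpamAssassin in looks something like this (not from this thread; the spam folder name is arbitrary):

   :0fw
   | spamassassin

   :0:
   * ^X-Spam-Status: Yes
   caught-spam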
(!) [Neil] LWN seem to rate bogofilter, see http://lwn.net/Articles/9186. I haven't used it myself.
(!) [Kapil] I currently use "bogofilter" and am very happy with it. There are also alternatives such as "spamoracle" and "spamprobe". All these three programs implement Paul Graham's suggestions in "A Plan for Spam".
As far as I can make out, "spamassassin" is a much more general tool that can easily incorporate the measurements used by Paul Graham.
The neatness of Paul Graham's approach is that it is entirely "Bayesian" --- spam mails self-select themselves once we have a sufficiently large database of spam and non-spam messages. Moreover, this division is entirely in the hands of the end-user.
On the other hand, since this measurement is made after the mail enters the system, it is not very useful if you want to reduce bandwidth consumption.
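For the curious, training bogofilter is just a matter of feeding it pre-sorted mail (the mbox names here are made up):

   bogofilter -s < spam.mbox     # register known spam
   bogofilter -n < good.mbox     # register known non-spam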

(?) I have 2 NIC cards, etho1 and etho2. Both were picked up by my install and both work, I can get out to the Internet with either one. I need help configuring etho1 to be the incoming route for my e-mails which my software will then pick up and process and then I want etho2 to send it out to my users.

I have been all over the Internet and in the book stores, I even had to break down and buy a Linux book for dummies which was no help at all.

(!) [Kapil] I think what you need is to take a hard look at Firewall-HOWTO.
(!) [Faber] (at http://www.tldp.org, in case you didn't know).

(?) Sound possible?

(!) [Faber] With Linux, almost anything is possible.
(!) [Heather] Though it may take a while to finish coding... no wait, that's "the impossible takes a little longer" :D

(?) I would be so appreciative and so would my head (because then I can quit banging it against my desk) for any and all help you could provide.

(!) [Heather] On behalf of the Gang, we hope you heal up soon! You're following good principles; make all emails have to follow one path into your site, then place some guardians upon that path to nail the miscreants as they come through.
Tune up your firewall to show the outside world only those services you really provide, plus whatever is needed for your inside people to get to the outside services they use (generally, using IP masquerading will make this automatic and nearly invisible). If you've got specific hosts pestering you with spam, get your MTA to blow them off with a "551 too much spam, site blocked" so your mailbox guardians don't have to waste CPU time on those bozos. Best of luck in the battle against spam.
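If your MTA happens to be Postfix, one sketch of such a block (the host name is made up) is an access map entry:

   /etc/postfix/access:
       spammer.example.com    551 too much spam, site blocked

   /etc/postfix/main.cf:
       smtpd_client_restrictions = check_client_access hash:/etc/postfix/access

followed by "postmap /etc/postfix/access" to rebuild the lookup table.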

(!) The One Remaining (non-Deprecated) Use for rsh

From Dave Falloon

Answered By Jim Dennis, Mike "Iron" Orr, Kapil Hari Paranjape

(?) Hi Answer Guy,

I have a 32-node cluster running Debian 3.0 (Woody). The primary way we use these machines is in a batch-type submission, kind of a fire-and-forget thing, via rsh "<command>".

(!) [JimD] These days the knee jerk response would be: "Don't run rsh; use ssh instead."

(?) Agreed, the reason for rsh is that this little cluster is all by itself, accessed through a "choke host" that is pretty well locked down, only a handful of users can access it on the external interface.

(!) [JimD] However, compute clusters, on an isolated network segment, (perhaps with one or more multi-homed ssh accessible cluster controller nodes) are still a reasonable place for the insecure r* tools (rsh, rlogin, rcp). (rsync might still be preferable to rcp for some workloads and filesets).

(?) I crippled PAM a little to allow this (changed one line to be "sufficient"). This cluster is not a super-critical farm, so if things go haywire it's not a big deal, but it would be nice to figure out why sometimes you can't connect to the nodes. Here is the output from one such attempt:

(503)[dave@snavely] ~$ rsh ginzu
Last login: Thu Jan 16 16:37:22 2003 from snavely on pts/1
Linux ginzu 2.4.18 #1 SMP Fri Aug 2 11:20:55 EDT 2002 i686 unknown
rlogin: connection closed.
(504)[dave@snavely] ~$

This happened once then when I repeated the command it succeeded, with no error.

(!) [Kapil] One possible reason for the problem is the assignment of a free pty.
1. You may be running out of pty's if many processes unnecessarily open them.

(?) This is a definite possibility, and I am recompiling a kernel as we speak to up this limit to 2048.
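For reference, a rough way to count the Unix98 ptys currently allocated on a node (assuming devpts is mounted, as on a stock Debian 3.0 system):

   ls /dev/pts | wc -l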

(!) [Kapil] 2. Your tweaking of rsh and PAM was not sufficient to give rsh permission to open a pty.

(?) Would this produce an intermittent connection drop, or would it prevent any connection at all?

(!) [Kapil] This would also explain the "unable to get TTY name" error.

(?) So how does the chain of events happen? Is this correct: I rsh to a machine; PAM looks over its rules, sees that it is crippled and should allow this connection with no passwd, and passes this on to login, which then tries to assign a pty but the ptys are all currently used; then it tries to assign a TTY because there are no ptys, and in my logs I get the "can't get TTY name" error?

(!) [Kapil] No, there is no separate "TTY" assignment. The "pty/tty" pair is what is assigned for interactive communication.
Let's see if we can track the sequence of events (the Gang please post corrections, I am sure I'll go wrong somewhere!):
Client "rsh" request is usually handled on the server by "inetd" which then passes this request to "tcpd" which then passes the request to "rshd".
O. However, tcpd may refuse the connection if its host_access rules do not allow the connection. This refusal could be intermittent depending on whether the name service system is responding (NIS/DNS whatever). (This possibility has already been mentioned on the list in greater detail).
At this point, I looked up the Sun Solaris man page for rshd (none of the Linux machines here has "rsh" installed!). The following steps are carried out and failure leads to closing the connection.
A. The server tries to create the necessary sockets for a connection.
B. The server checks the client's address which must be resolvable via the name service switch specification (default NIS+/etc/hosts).
C. The server checks the server user name which must be verifiable via the name service switch specification (default NIS+/etc/passwd).
D. The server checks via PAM that either the client is in /etc/hosts.equiv and the client user name is the same as the server user name, or the client username is in .rhosts.
E. The server tries to acquire the necessary pty/tty's and connects them to the sockets and the server user's shell (which must exist).
I am a bit confused about the use of PAM but I think it is also used in steps C and E through the "account" and "session" entries. The "auth" entry for PAM is used in "D".
So it seems like O,A-E need to be checked on your system. My own earlier suggestion was only about E but the failure could be elsewhere.
Temporary failure of the NIS server to respond could affect B and C; it could even affect E as the "passwd" entry is required to find the user's shell. Thus, in such situations it is a good idea to run the name service caching daemon.
If NFS is used for home directories then temporary failure of the NFS server to respond could affect D as well.
Hope this helps,
Kapil.
(!) [JimD] So it was a transient (or is an intermittent) problem.

(?) Yup

I have adjusted /etc/inetd.conf by adding the .500 to nowait on the rsh line:

shell           stream  tcp     nowait.500      root    /usr/sbin/tcpd /usr/sbin/in.rshd

in order for these machines to allow more jobs to be run at a time.

(!) [JimD] This adjusts inetd's tolerance/threshold to frequent connections on a given service. It simply means that inetd won't throttle back the connections as readily --- it will try to service them even if they are coming in fast and furious. In this case it will allow up to 500 attempted rsh connections per minute (about 8 per second).
(!) [JimD] That really doesn't adjust anything about the number of concurrent jobs that a machine can run --- just the number of times that the inetd process will accept connections on a given port before treating it as a DoS (denial of service) attack or networking error, and throttling the connections.

(?) I adjusted this because we ran into lots of problems with inetd dropping connections. I just wanted to make sure that it behaved like it was supposed to, i.e. that you didn't know of some immediately relevant bug in this line.

(!) [JimD] In your example this is clearly NOT the problem. It made the connection and then disconnected you. Thus it wasn't inetd refusing the connection, but the shell process exiting (or being killed by the kernel).
(!) [Iron] Just to clarify, I think Jim is saying that it's not inetd or tcpd refusing you, because otherwise rlogin wouldn't have started at all, and it (rlogin) wouldn't have been able to print the "last login:" and kernel version lines.
By the way, when tcpd doesn't like me, it waits a couple seconds (usually doing reverse DNS lookup), and then I see "Connection closed by foreign host" with no other messages.

(?) One possibility is that we have everyone's home drive on NFS and if the NFS was slow to respond that may cause rlogin to find no home directory and refuse the connection. Is that a realistic possibility?

One interesting turn of events is the message you get in auth.log :

Jan 20 15:41:31 ginzu PAM_unix[31073]: (login) session opened for user dave by (uid=0)
Jan 20 15:41:31 ginzu login[31073]: unable to determine TTY name, got /dev/tty6

These machines have no video cards/keyboards/other input devices --- really they are just processor/hard drive/RAM/NIC and that's all --- so it would make sense to comment out the getty lines in inittab for these boxes ... correct?

That would at the very least stop the auth.log and daemon.log spamming, I think

(!) [Iron] If inetd is not listening to the port at all and no other daemon is, you'll get an immediate "Connection refused" error. This is confusing because it doesn't mean it doesn't like you, it means there's nobody there to answer the door.
(!) [JimD] I'd run vmstat processes on the affected nodes (or all of them) for a day or two --- redirect their output to local files or over the network (depending one which will have the least impact on your desired workload) and then write some scripts to analyze and/or graph them.
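A minimal sketch of such a collector (interval and log path are arbitrary):

   nohup vmstat 60 > /var/tmp/vmstat.`hostname`.log &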

(?) I have started collecting info on these machines.

Can you think of why these machines behave like this? Could it be a load average problem, maybe its network related, is it a setup problem? Any ideas would be appreciated

(!) [JimD] It's not likely to be a networking or setup issue. Your networking seems to work. Things seem to be configured properly for moderate workloads, so we have to find out which host resources are under the most pressure. So it's probably a loading problem.

(?) It's not a loading issue; the system is pretty good at evening out load across the pool of machines

(!) [JimD] (Note I did NOT say "load average" problem. "load average" is simply a measure of the average number of processes that were in a runnable (non-blocked) state during each context switch over the last one, five and fifteen minutes. A high load average should NOT result in processes dying as you've described --- but often indicates a different resource loading issue. Sorry to split hairs on that point, but this is a case where understanding that distinction is important).

(?) These machines can get a little bagged at times but the login failure happens regardless of the load of a given host.

(!) [JimD] As always, you should be checking your system logs. Hopefully there'll be messages therein that will tell you if the kernel killed your process and why. Otherwise you can always write an "strace" wrapper around these executables. It will kill your performance, but if you can reproduce the problem you'll be able to see why the process died.
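A sketch of such a wrapper (the job binary's path is hypothetical):

   mv /usr/local/bin/job /usr/local/bin/job.real
   cat > /usr/local/bin/job <<'EOF'
   #!/bin/sh
   # trace the real binary and its children; one log per invocation
   exec strace -f -o /tmp/job.$$.trace /usr/local/bin/job.real "$@"
   EOF
   chmod +x /usr/local/bin/job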

(?) After a look in the logs (I can't believe I didn't do this earlier), I found a lot of messages about getty trying to use /dev/tty* --- no such device --- which makes sense considering they have no input/output hardware like video/keyboard, etc.

(!) [JimD] Some tweaks to the setup might help.
(!) [JimD] There are basically four resources we're concerned about here: memory, CPU, process table, and file descriptor table (space and contention). (I'm not concerned about I/O contention in this case since that usually causes processes to block --- performance to go very slowly. It doesn't generally result in processes dying like you've described here).
(!) [JimD] vmstat's output will tell you more. You can probably make some guesses based on your workload profile.
(!) [JimD] If you're running many small jobs spawning from one (or a small number of) dispatcher processes (on each node) you might be bumping into rlimit/ulimit issues. Read the man page for your shell's ulimit built-in command, and the ulimit(3) man page for more details on that.

(?) Ulimits have been adjusted already we ran into file descriptor limits before

(!) [JimD] If you're running a few large jobs then it's more likely to be a memory pressure problem --- though we'd expect you'd run into paging/thrashing issues first. There are cases where you can run out of memory without doing any significant paging/swapping (where the memory usage is in non-swappable kernel memory rather than normal process memory).
(!) [JimD] By the way, you might want to eliminate tcpd from your configuration (remove the references to /usr/sbin/tcpd from your inetd.conf file). This will save you an extra fork()/exec() and a number of file access operations on each new job dispatched. (The use of rsh already assumed you've physically isolated this network segment with very restrictive packet filters and anti-spoofing --- so TCP Wrappers is not useful in your case and is only costing you some capacity, albeit small).
(!) [JimD] You might even eliminate rsh/rlogin and go with the even simpler rexec command!

(?) Some times people will run an interactive job on this cluster, so rsh/rlogin is still nice to have. We have no real policy about what can or cannot be run on these machines, like I had said it is more of a playground for our researchers, than a critical cluster.

(!) [JimD] It goes without saying that you may wish to eliminate, renice, or reconfigure any daemons you're running on these nodes. For example, you can almost certainly eliminate cron and atd from the nodes (since your goal is to dispatch the jobs from one or a few central cluster control nodes). They could run a small number of cron/atd processes and dispatch jobs across the cluster as appropriate.

(?) True, but really it doesn't seem related, I can't see an interaction between login and cron that would drop your connection. Although it is nice to cut down bloat where you can.

(!) [JimD] The klogd/syslogd daemons are worth extra consideration. I'd strongly consider running syslog under 'nice' and giving it the lowest possible priority. I'd also consider tweaking the syslog.conf to add a leading "-" (dash) to any local log file names (so that they will be written asynchronously rather than with fsync() calls after every write to the logs).
(!) [JimD] I'd even consider eliminating all local log files from these configurations and having these nodes do all their logging over the net (which, being UDP based, might result in some lossage of log messages). However, that depends heavily on your workload and network topology and capacity. Basically you might have bandwidth to burn (Gig ethernet, for example) and this might be a reasonable tradeoff.
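In syslog.conf terms, both suggestions look something like this (the loghost name is made up; the leading dash is what suppresses the fsync):

   mail.*        -/var/log/mail.log
   *.*           @loghost.example.com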

(?) We are on a switched, full-duplex 100Base-TX network. The logs on the switches report that in the last 3 months we have only gone above 80% of the switches' bandwidth once, so I think we have enough bandwidth to support logs over the net.

(!) [JimD] I'd also consider setting the login shell for these job handling accounts to ash (or the simplest, smallest shell that can successfully process your jobs). bash, particularly with version 2.x is a pretty "resourceful" (read "bloated") program which may not be necessary unless you're doing some fairly complex shell scripting.

(?) A possibility, but as I had said some of our researchers will run an interactive job so they want a full shell.

(!) [JimD] Also in your shell/jobs you might want to make some strategic use of the exec built-in command. Basically in any case where the shell or subshell doesn't have a command subsequent to one of your external binaries --- exec the binary. This saves a fork() system call, and means that the shell processes are NOT taking up memory, file descriptors, and entries in the process table just waiting for other executables to exit.
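For instance (the job binary here is hypothetical):

   #!/bin/sh
   cd /var/tmp/work
   # final command: replace the shell rather than forking a child
   exec /usr/local/bin/compute "$@"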
(!) [JimD] I'd also eliminate PAM and look for the older r* and login suite. The traditional /bin/login program does an exec*() system call to run your shell. The PAM based suite performs a fork() and then an exec() --- and the /bin/login program remains in order to perform post logout cleanup. It is quite likely that you are not interested in these more advanced features provided by PAM's approach.

(?) PAM is overkill but I don't think it is the culprit.

(!) [JimD] Incidentally, another point to consider is your local filesystems. You may want to mount as many of them as possible in "read-only" mode and all of them with the noatime option. Both of these tweaks can considerably reduce the amount of work the system is doing to maintain your filesystem consistency and the (rarely used) access time stamps.
(!) [JimD] You may also want to consider using the older ext2 filesystem rather than any of the journaling filesystem choices. This depends on your data integrity requirements, of course, but the journaling done by ext3, reiserfs, XFS and others does come at a significant cost.
(Note: In some other cases, where intensive use of local filesystems is part of the workload, XFS or reiserfs might be VASTLY better than ext2 --- for various complicated reasons).
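In /etc/fstab terms, those options look something like this (devices and mount points are illustrative):

   /dev/hda2   /usr       ext2   ro,noatime          0  2
   /dev/hda3   /scratch   ext2   defaults,noatime    0  2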

(?) Reiser is working fine on these nodes. I have found a significant improvement over ext2 for the majority of tasks run on these boxes.

(!) [JimD] Depending on your application, you might even want to consider recompiling it using older, simpler versions of libc/libm (since many of the advanced features of GNU glibc 2.x may be useless for your computations). Of course, if the application is multi-threaded then you may need glibc 2.x's re-entrancy.

(?) Not really possible, in a lot of cases we are running some third party commercial software which is very closed source.

(!) [JimD] It's possible that you need to do some kernel tuning. This might involve writing some magic values into the sysctl nodes under /proc/sys (or running the systune or Linux powertweak utilities). It might also involve rebuilding your kernel, possibly with a few static variables changed or a few scalability patches applied.
(!) [JimD] (In this case, "scalability" is a loaded term --- since it means much different things to differing workloads).
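For instance, a typical poke into /proc/sys (the value is purely illustrative):

   echo 16384 > /proc/sys/fs/file-max    # raise the system-wide fd limit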

(?) I think the real problem is that we are having a bad interaction with some piece of software and rlogin/login/getty?/init or something and that causes the connection to be dropped.

(!) [JimD] You can find numerous hints about Linux kernel performance tweaking using Google! http://www.google.com/linux
Here's a few links:
The C10K problem
http://www.kegel.com/c10k.html
Written and maintained by Dan Kegel, originally in response to the infamous "Mindcraft" fiasco wherein Microsoft paid an "independent" lab to prove that MS Windows was "faster" or "more scalable" than Linux.
Dan is/was one of the advocates for improving the Linux kernel in a number of key areas, regardless of Mindcraft's credibility.
CITI: Projects: Linux scalability - University of Michigan
http://www.citi.umich.edu/projects/linux-scalability
Run by Peter Honeyman (a legendary UNIX programmer).
SGI - Developer Central Open Source: Scalability Project
http://oss.sgi.com/projects/linux-scalability
IBM developerWorks: Linux: Linux Kernel Performance & Scalability
http://www-106.ibm.com/developerworks/linux/library/l-kperf
The problem with all of these links is that they are not focused on the set of requirements specific to your needs. They are more concerned with webserver, database, SMP, and single-server scalability rather than Beowulf style cluster performance.
(!) [JimD] Of course you could read the information at http://www.beowulf.org Strictly speaking it doesn't sound like you're really running a Beowulf cluster --- you're dispatching jobs via rsh rather than distributing computation load using MPI, PVM or similar libraries. However some of the same configuration suggestions and performance observations might still apply.

(?) Most of the tweaking described has already been implemented

(!) [JimD] In general there isn't any silver bullet to increasing the capacity of your cluster. You have to find out which resources are being hit hardest (the bottlenecks), review what is using those resources, find ways to eliminate as much of that utilization as possible (removing tcpd, using a simpler/smaller shell, running terminal processes via exec, changing to a non-journaling filesystem, eliminating unneeded daemons) and try various tradeoffs that shift the utilization of a constrained resource (local filesystem I/O vs. pushing things out to the network, memory/cache and indexed data structures vs CPU and linear searches).

(?) Capacity is not the objective; I would say reliability (not necessarily 100%, but better than 50%, for sure :) is more my goal.

(!) [JimD] That's really all there is to performance tuning. Finding what's using which resources. Finding what you can "not" do. Finding ways to tradeoff one form of resource consumption with another. Of course the black magic is in the details (especially when it comes to poking new values into nodes under /proc/sys/vm/ --- read the Documentation/sysctl* text files in your Linux kernel sources for some hints about that)!

(?) Perhaps a more detailed view of the cluster will give you more to work on. There are really 34 machines in this cluster: one choke node that stands between the outside world and the inside nodes; 32 machines (identical hardware) --- dual Pentium III 550MHz, 256 megs SDRAM (133MHz), single Maxtor 12GB hard drive (7200 RPM ATA/66), 3com 3c590 ethernet card, identical kernels across the board, 2.4.18, SMP; and one single Sun machine that serves NIS, NFS, and DNS, plus a home-brewed batch server (keeps track of jobs and host loads, and assigns jobs to hosts via rsh). After searching some more websites I found that some people have a problem with the services.byname map in NIS. Could that be an issue here? I have adjusted the inittab by commenting out the lines:

#  <id>:<runlevels>:<action>:<process>
#1:2345:respawn:/sbin/getty 38400 tty1
#2:23:respawn:/sbin/getty 38400 tty2
#3:23:respawn:/sbin/getty 38400 tty3
#4:23:respawn:/sbin/getty 38400 tty4
#5:23:respawn:/sbin/getty 38400 tty5
#6:23:respawn:/sbin/getty 38400 tty6

Because these machines have no video/input.

Thanks for all your help so far. I hope all this new info helps you get a better idea of whats going on.

(!) [JimD] You could build a new kernel, setting CONFIG_UNIX98_PTY_COUNT=2048 (apparently the maximum value).

(?) Running a compile with this right now, thanks

(!) [JimD] I'd also try to eliminate NIS from these systems. I'd look at using rsync to replicate the various /etc configuration files across the cluster (passwd, group, hosts, services, et al.). Failing that, make sure that you have nscd (the name service cache daemon) properly configured on the client nodes.
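A one-liner per node might look like this (the node name is made up; -e rsh matches the cluster's existing transport):

   rsync -a -e rsh /etc/passwd /etc/group /etc/hosts /etc/services node01:/etc/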

(?) I have been waffling on moving this farm over to Cfengine and losing NIS for about three months, I think I am going to get started on that right away. Have you guys used cfengine, or do you have any suggestions for config management tools?

(!) [JimD] I'd also try to eliminate NFS, or at least minimize its use, especially for home directories. I'd also eliminate the automounting if at all possible. This requires that the users work a little smarter, manually transferring their data/input files down to the proper nodes, and pulling the results back therefrom.

(?) I have suggested this, but the tools (third party, very badly designed) some of the research guys use need to write to the home drive, and in order to take advantage of more than one node that would require the home drive to be in two places at once (NFS, SMB or whatever). The way it works is that one node will modify a model file, another will immediately pick up the change and adjust what it is doing and modify the file more, until the proper mathematical model for a given project is found. Then they use that model to figure out a whole range of useful information; at least that's how it's supposed to work.

(!) [JimD] If that's not feasible, at least configure these systems so that the home directories are not automounted, replicate the basic suite of "dot files" out to them and have a lower mount point provide the shared data.

(?) I don't think I can.

(!) [JimD] I'd also be quite wary of configuring the systems to allow NFS to cross the isolated segment out to filers on your network. This sounds like a supremely bad idea, allowing anyone with local root access on any node on the outer network to impersonate any user, dropping files into their directories which will be executed/sourced by shell sessions in the inner network.

(?) NFS traffic never leaves the clusters subnet, think of it as a hole in my network covered by one node that runs ssh with 6 local accounts. Once you log in to that firewall node you need to then rsh or ssh out the other interface to either a node in the cluster or the old sun machine serving NIS/NFS. All traffic on the local subnet stays on the local subnet. Once a researcher has a proper model defined they have to rcp/scp that file to the firewall machine, and then scp (rsync over ssh, or whatever) it to their destination (rcp is not enabled outside of my protected subnet).

(!) [JimD] However, I think you're getting closer to the real heart of the problem by looking into Kapil's suggestion regarding your PTY availability.

(?) I'll know shortly, thanks everyone you guys rock!

Dave



Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

LINUX GAZETTE
...making Linux just a little more fun!
News Bytes
By Michael Conry

News Bytes

Contents:

Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@ssc.com


 February 2003 Linux Journal

[issue 106 cover image] The February issue of Linux Journal is on newsstands now. This issue focuses on enterprise computing. Click here to view the table of contents, or here to subscribe.

All articles older than three months are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Legislation and More Legislation


 You win some...

Jon Johansen, a Norwegian programmer who had been facing criminal charges as a result of his involvement in the creation of the DeCSS computer code for playing CSS-encoded DVDs, has been acquitted on all counts. Jon was charged under a law that relates to breaking into other people's property, a law usually invoked in cases where attackers have attempted to break into another party's computer system. The law had never before been applied to prosecute a defendant for breaking into his own property, and in this case the Norwegian court ruled against the prosecution on all charges, citing Norwegian law protecting a consumer's right to use his own property. An English translation of the judgement has been made available online by the EFF.

The war is not over yet, however, and Norwegian prosecutors are set to appeal the verdict. If the request for an appeal is granted, the case will be heard again before the Norwegian appeal courts. Film industry lawyer Charles Sims was keen to assert that a US resident would have been breaking the law by doing what Jon Johansen did.


 You lose some

The United States Supreme Court has ruled to uphold the 20-year extension of copyright terms granted by the US Congress in 1998. The balance of opinion went 7-2, with dissenting opinions coming from Justices Stevens and Breyer.

The constitutional challenge began when Eric Eldred, who distributes public-domain books online, found that he would have to remove some of these works as their copyrights had been reactivated by an extension granted by the US Congress. There is a large amount of information on the case available at eldred.cc. Lisa Rein has also compiled a selection of reports and resources related to the case.

The issue at stake in the Eldred case was whether it was constitutional for Congress to extend copyrights in this way. There are compelling arguments on both sides (some more compelling if you own billions of dollars in copyrighted works and want your business to be subsidised by the public), but the court has ruled that Congress had (and has) the right to make this extension. This does not mean that all is lost. Governments in democratic countries are supposed to be responsive to the desires of citizens, and to act accordingly. Thus, it is important for citizens to make their opinions on these issues apparent to their elected representatives. Simply because a government can pass a law does not mean that it will, especially if it can expect to pay a steep price at the ballot box come the next election.

This is particularly relevant to European readers. European copyrights last for 50 years. What makes this significant is that about 50 years ago was the beginning of the modern era of music recording, so from now on a steady stream of high-quality recordings by still-popular artists will be entering the public domain. Industry bodies are lobbying to have the terms of copyrights extended, and are bandying words like piracy around to muddy the waters. As pointed out by Dean Baker, extending copyrights retrospectively does nothing to encourage creativity or "To promote the Progress of Science and useful Arts". Instead, it "raises costs to consumers and leads to increased economic inefficiency". This straightforward truth will not stop industry monopolists and their quislings from attempting to steal the labour of humanity from the public commons, and then telling us it was all for our own good.


 DMCA

Your monthly serving of DMCA madness this time involves garage doors. It would appear that at least one firm believes that making universal garage door remotes is a breach of the DMCA and is prepared to spend some legal money on the idea. That wasn't enough? Well, here's a second helping: Lexmark is invoking the DMCA in an attempt to hobble the printer cartridge remanufacturing industry. Edward Felten has concisely explained that a major issue here is the whole principle of interoperability. Interestingly, the European Parliament has voted in a new law banning such "smart" printer cartridges as they make recycling more difficult and expensive. Bruce Schneier predicts a trade war, but even if it does not come to that, it will be interesting to see where the story goes. Also highlighted by Bruce, and worth reading, is the EFF's guide Unintended Consequences: Four Years under the DMCA.


Linux Links

Linux Magazine article on journaling filesystems.

Linux Planet article discussing basic Linux network security.

Some links highlighted by Linux Today:

Linux Job Market.

Lawrence Lessig discusses whether derivative works are always a bad thing for the owners of the original work. Japanese experience indicates they may be beneficial.

The Register has a report on businesses gathering to fight Hollings' copy controls

Some links from NewsForge:

Dave's Desktop is one Linux user's quest to share information on some of the helpful apps for Linux he has come across recently.

Howard Wen at O'Reilly is on a quest to find good Linux games. On the way, he found Falcon's Eye and talked to the game's creator

Linux Server Hacks: Backups

Both Linux Journal and DesktopLinux have dealt with Linux's relevance to senior citizens.

Some links from Linux Weekly News:

The Chinese Linux Documentation Project (CLDP) has taken documents from the LDP and GNU and translated them into Chinese. It also includes the Linux Gazette.

Some links from Slashdot:

Wikipedia, the free, contributor-maintained on-line encyclopedia, has reached its second birthday and 100,000 articles.


Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

O'Reilly Bioinformatics Technology Conference
February 3-6, 2003
San Diego, CA
http://conferences.oreilly.com/

Desktop Linux Summit
February 20-21, 2003
San Diego, CA
http://www.desktoplinux.com/summit/

Game Developers Conference
March 4-8, 2003
San Jose, CA
http://www.gdconf.com/

SXSW
March 7-11, 2003
Austin, TX
http://www.sxsw.com/interactive

CeBIT
March 12-19, 2003
Hannover, Germany
http://www.cebit.de/

Software Development Conference & Expo
March 24-28, 2003
Santa Clara, CA
http://www.sdexpo.com/

Linux Clusters Institute (LCI) Workshop
March 24-28, 2003
Urbana-Champaign, IL
http://www.linuxclustersinstitute.org/

4th USENIX Symposium on Internet Technologies and Systems
March 26-28, 2003
Seattle, WA
http://www.usenix.org/events/

PyCon DC 2003
March 26-28, 2003
Washington, DC
http://www.python.org/pycon/

Linux on Wall Street Show & Conference
April 7, 2003
New York, NY
http://www.linuxonwallstreet.com

AIIM
April 7-9, 2003
New York, NY
http://www.advanstar.com/

FOSE
April 8-10, 2003
Washington, DC
http://www.fose.com/

LinuxFest Northwest 2003
April 26, 2003
Bellingham, WA
http://www.linuxnorthwest.org/

Real World Linux Conference and Expo
April 28-30, 2003
Toronto, Ontario
http://www.realworldlinux.com

USENIX First International Conference on Mobile Systems, Applications, and Services (MobiSys)
May 5-8, 2003
San Francisco, CA
http://www.usenix.org/events/

USENIX Annual Technical Conference
June 9-14, 2003
San Antonio, TX
http://www.usenix.org/events/

CeBIT America
June 18-20, 2003
New York, NY
http://www.cebit-america.com/

The Fourth International Conference on Linux Clusters: the Linux HPC Revolution 2003
June 18-20, 2003
Las Vegas, NV
http://www.linuxclustersinstitute.org/Linux-HPC-Revolution

O'Reilly Open Source Convention
July 7-11, 2003
Portland, OR
http://conferences.oreilly.com/

12th USENIX Security Symposium
August 4-8, 2003
Washington, DC
http://www.usenix.org/events/

LinuxWorld Conference & Expo
August 5-7, 2003
San Francisco, CA
http://www.linuxworldexpo.com

Linux Lunacy
Brought to you by Linux Journal and Geek Cruises!
September 13-20, 2003
Alaska's Inside Passage
http://www.geekcruises.com/home/ll3_home.html

Software Development Conference & Expo
September 15-19, 2003
Boston, MA
http://www.sdexpo.com

PC Expo
September 16-18, 2003
New York, NY
http://www.techxny.com/pcexpo_techxny.cfm

COMDEX Canada
September 16-18, 2003
Toronto, Ontario
http://www.comdex.com/canada/

LISA (17th USENIX Systems Administration Conference)
October 26-30, 2003
San Diego, CA
http://www.usenix.org/events/lisa03/

HiverCon 2003
November 6-7, 2003
Dublin, Ireland
http://www.hivercon.com/

COMDEX Fall
November 17-21, 2003
Las Vegas, NV
http://www.comdex.com/fall2003/


News in General


 MEN Micro's New M-Modules

Two new digital input M-Modules from MEN Micro have been released. They have been designed to meet tough environmental and safety specifications and were developed specifically for railway applications, but they can be deployed in a broad range of industrial systems where shock, vibration, temperature and harsh environments are a concern.

The M-Modules, which are designated M31 and M32, each provide 16 binary channels to a control platform. Because they conform to the ANSI-approved M-Module standard, they can be installed in a number of standard bus-based systems, including CompactPCI, PXI, VMEbus and PCI, or they can be used in small busless systems.

Software drivers for the M31 and M32 are available for Windows, Linux, VxWorks, QNX, RTX and OS-9.


Distro News


 Ark

Ark Linux is a new distribution, led by former Red Hat employee Bernhard Rosenkraenzer. It is based on Red Hat 7.3/8.0, and free alpha downloads are available.


 Debian

Debian Weekly News reported the announcement by Steve McIntyre that he has created a set of update CD images that contain new and updated packages from 3.0r1.


Also from Debian Weekly News is a report on the availability of an RSS feed of new Debian packages.


Bdale Garbee, current Debian project leader, has been interviewed by Australian newspaper The Age.


 Eagle

Eagle Linux is a how-to based Linux distribution offering full open source documentation assisting users in creating personal embedded, floppy, and CD based bootable distributions.


 Gentoo

Gentoo Linux has announced the second release candidate for the upcoming 1.4 version of Gentoo Linux. New in 1.4_rc2 is the Gentoo Reference Platform: a suite of binary tarballs that allow for faster initial installation. Currently X, GNOME, KDE, Mozilla, and OpenOffice.org are available as binary installations for the x86 and PPC architectures, with others to follow.


 Mandrake

Mandrake 9.0 has been reviewed recently by The Register/NewsForge and by Open for Business.


It has been widely reported in the past month that Mandrake is currently experiencing acute financial problems. This has led company management to apply for Chapter 11-style protection. The purpose of this is to give the company some respite, allowing it to reorganise its finances without pressure from creditors. The French courts have approved the plan, and hopefully the company will be in a better position to make positive progress after this period.


 SCO

The SCO Group have announced plans to work with Wincor Nixdorf to provide Linux-based retail point-of-sale (POS) solutions to retailers in North America. This relationship gives retail customers an economical, reliable choice by combining the functionality and flexibility of Wincor Nixdorf hardware with the stability and reliability of SCO operating systems. The joint retail solutions will rely on Wincor Nixdorf's BEETLE POS family and SCO's Linux POS solution, SmallFoot.


 SuSE

SuSE Linux has announced the availability of a desktop Linux product that gives users the full functionality of the Microsoft Office suite of applications. SuSE Linux Office Desktop, available from January 21, is intended for small companies looking for an easy, preconfigured desktop -- as well as for personal users with little or no Linux experience.


 UnitedLinux

UnitedLinux has announced plans to integrate the full OSDL Carrier Grade Linux (CGL) 1.1 feature set for UnitedLinux 1.0, delivering enhanced abilities to develop and deploy carrier-grade applications in a standardized Linux environment.

Developed by UnitedLinux integration partner SuSE Linux with HP, IBM and Intel, the features -- targeted initially for use on Intel-based hardware platforms -- enable telecommunications providers to develop and deploy new products and services on standards-based, modular communications platforms.


LPI, a professional certification program for the Linux community, and UnitedLinux LLC have signed a cooperative agreement to market a UnitedLinux professional certification program.


Software and Product News


 KDE

KDE 3.1 has been released.


 Understanding the Linux Kernel, 2nd Edition

O'Reilly & Associates has released a new edition of Understanding the Linux Kernel which has been updated to cover version 2.4 of the kernel. 2.4 differs significantly from version 2.2: the virtual memory system is new, support for multiprocessor systems is improved, and whole new classes of hardware devices have been added.


 Aqua Data Studio 1.5

AquaFold has announced the release of Aqua Data Studio 1.5, a free database tool supporting all major database platforms, including Oracle 8i/9i, DB2 7.2/8.1, Microsoft SQL Server 2000/7.0, Sybase ASE 12.5, MySQL, PostgreSQL and generic JDBC drivers. Aqua Data Studio also supports all major operating systems designed to run Sun Microsystems' Java Platform, such as Microsoft Windows, Linux, OS X and Solaris. Aqua Data Studio is designed to speed the work of database and application developers by providing them with an elegant and consistent interface to all databases on all platforms. Free downloads and screenshots of Aqua Data Studio are available online.


 OpenMFG

OpenMFG, a company using open source software to bring enterprise resource planning (ERP) applications to small manufacturers, has welcomed the first ten members of its Open Partners Program.


Copyright © 2003, Michael Conry. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
The Ultimate Editor
By Stephen Bint

Somewhere, out there, is a C++ programmer whom destiny has chosen to be our liberator.

The Ultimate Editor's Time Has Come

How can it be that Windows users are spoiled for choice of excellent text editors, while not one decent one exists for the Linux console? Linux is a better OS, supported by better programmers who are dedicated to producing better tools for each other's benefit. The text editor is the programmer's most important and most frequently used tool. Yet Linux console editors are rubbish. How can this be?

Those of us who migrate from Windows to Linux expect a text editor, at the very least, to allow selection of text with the mouse and to have mouse-sensitive menus and dialogs. Of all the editors, only mcedit, the editor built into the Midnight Commander disk navigator, has these features. The rest have no dialogs, and either no mouse interface or a very limited, stupid one.

Yet even mcedit has a fatal flaw. If there is anything about its behaviour you don't like, or a function it lacks which you would like to add, you will find that reverse-engineering the source to solve that problem is more difficult than writing your own text editor from scratch. Unfortunately mcedit is quite basic, so it really needs added functionality and there is no easy way to add it.

What is the point of Open Source being open, if it is so complicated and poorly documented as to be impenetrable to anyone but the author?

Let's face it, we are all the same. We love writing code and hate writing docs. Writing slick algorithms is fun, but explaining how they work to newbies is a bore. Yet if someone were to take the trouble to write an editor with maintenance in mind and build in a simple way to add C++ functions to menus, it might be the last editor ever written. No one would bother to write a text editor if one existed whose behaviour was easy to change and to which any function could be added.

Blasphemy

Stallmanist Fundamentalists may say at this point that emacs is extensible. So it is, but you need to learn a second language to extend it. Besides that, the basic editor has a crude and confusing user interface which cannot be improved by adding lisp modules.

Some of us who aspire to use Linux are ordinary people, not software supermen. It is cruel and unnecessary to tell someone struggling to learn their first language that they must simultaneously learn a second language in order to make their editor work the way they want it to.

It will never do. Emacs isn't a tool. It's an intelligence test. It is time stupid people fought back against the elitists who are so clever, they find learning emacs a breeze. Notice that you do not have to learn how to use mcedit. It does what you expect so there is nothing to learn.

The Ultimate Editor would be what emacs should have been: an extensible editor with an intuitive mouse-and-menu interface. [Editor's note: emacs was born before mice and pulldown menus were invented.] Instead of complicating the picture with a second language, the extensions would be written in C++. It would come with a programmer's guide, explaining how to install your own menu commands and also describing the anatomy of the source so that you can easily locate the module you are after if you want to change something about its basic behaviour. It would be a do-it-yourself editor kit.

O, Beautiful Tool

If the Ultimate Editor existed, this is what it would be like. You would download it and build it and find it has the basic functionality of mcedit. It would have mouse selection, mouse-sensitive menus and a file open dialog box that allows you to navigate the disk by double-clicking on directories.

It would have few functions: File Open, File Save, File Save As, Exit, Cut, Copy, Paste, Delete and Help. At first there would be no search function, but the README would explain that the source file for the search function is included and would give simple instructions for how to add it. The lines to be added to the source would already be there, but commented out, to make it easy to add the search function.

To add the search function you would have to:

1. Move its source file to the editor's src directory

2. Declare the function at the top of main.cc like this:

   int show_search_dlg();

3. Add a line to main() (actually uncomment a line) like this:

   ed.add_menu_cmd( show_search_dlg, "Search", "Edit", F2_key, SHIFT_PRESSED );

...which installs a command labelled "Search" on the "Edit" menu, which can be activated directly by pressing Shift-F2.

4. In the Makefile, add (uncomment) a compile rule for the source file and add its name to the list of objects to be linked.

5. Run make and find that the search function is now on the menu.

Having followed this procedure, even a complete newbie will know how to write their own menu functions. The editor will be a global variable (a C++ object), accessible in any source file the user writes, through its header file. Its member functions will report the states of all its internal variables, such as cursor position and area selected. The text array containing the file being edited will be accessible as a member variable, so that the file can be scanned and modified within the user function.
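
Here is a minimal sketch of what such a user-written menu function might look like. Everything in it is hypothetical - the Editor class, the global ed object and all its member names are invented for illustration, since the Ultimate Editor does not exist yet:

   #include <cctype>
   #include "editor.h"  // hypothetical header exposing the global editor object

   extern Editor ed;    // the one global editor object

   // A user-written menu command: upper-case the selected text.
   int upcase_selection()
   {
       long begin, end;
       if( !ed.get_selection( begin, end ) )  // member reports area selected
           return 0;                          // nothing selected, nothing to do
       for( long i = begin; i < end; i++ )    // text array is a member variable
           ed.text[i] = toupper( (unsigned char)ed.text[i] );
       ed.refresh();
       return 1;
   }

Installing it would then be one more ed.add_menu_cmd() line in main(), exactly as in the search example above.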

Living Colour

Usually, the logic of colourization is imposed on users. Some editors offer a dialog to change the colours and to add keywords, but the logic is dictated by the author.

The Ultimate Editor will offer an easy way for users to write their own colourization routines. Apart from enabling people to colourize rare and eccentric languages, this feature will unlock the hidden potential of colourization.

Think how many ways you could choose to colour source and what an aid to reverse engineering it could be. Depending on your purpose, you might want to colour identifiers according to which header file they are declared in, or whether they are automatic or allocated, or use colours to indicate their scope. You might choose to have several colouring schemes installed and switch between them with hot keys.

To make colourizing simple, the Ultimate Editor will store its files in file arrays which contain two arrays of strings - one for the text and another for the colours. The file array will keep the sizes of the strings in these arrays synchronized so that, for every character stored in the text array, there is always a byte representing its colour at the same co-ordinates in the colour array.

The editor will always draw on the colour array when it refreshes, so all the programmer has to do in order to colour a character at certain co-ordinates is change the value in the colour array at those same co-ordinates and refresh the display.
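
As a sketch (again with invented names - FileArray, 'colour', 'text' and 'refresh' are hypothetical), colouring a single character would boil down to something like this:

   // Parallel text/colour arrays, kept size-synchronized by the file array.
   void colour_char( FileArray &file, int row, int col, unsigned char attr )
   {
       file.colour[row][col] = attr;  // same co-ordinates as file.text[row][col]
       file.refresh();                // the editor redraws from the colour array
   }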

Ninety Percent Widgets

From the user's point of view, dialog boxes appear to be a small part of a text editor. From the programmer's perspective, it is the other way round. The editable fields which appear in dialogs are fully functional editing windows with a couple of features disabled. So to write the Ultimate Editor is really to write the Ultimate Widget Library.

A well-written widget library with good docs is more than an accessory to an extensible editor. If users become familiar with the library in order to improve the editor, they can use it to produce configuration dialogs which assist non-experts in configuring other software, by asking simple questions and writing out their wishes in a config file.

Linuxconf is a very important configuration tool, but it is fading like a dead language because it is hard to use. Because it is hard to use, it is hard to get enthusiastic about improving it. Users and programmers both drift instead towards other, distribution-specific configuration programs. If linuxconf were rewritten to show mouse-sensitive dialogs that behave like proper dialogs (like X Window System dialogs), it might grow to include modules that enable clueless newbies to configure any popular package.

Do you not agree that the main obstacle to the popularity of Linux is esotericism? I mean, no one bothers to write software for newbies because only software experts ever use Linux. The growth of Linux is being prevented by an elitist Catch-22. If idiot-friendly configuration programs were not important to the popularity of an OS, would Microsoft have lavished so much time and money on them?

Rewriting linuxconf with a simple but modern widget library would be the first step to making it what it should be - a project that never ends. It should be continually growing as more modules are added, until it becomes the one-stop shop through which all Linux software can be configured by children.

A Little Help

I want this challenge to be open to anyone who knows C++. Because interfacing with the mouse, keyboard and colour-text screen under Linux is a low-level nightmare, I have produced an interface library which makes it as simple under Linux as it is under DOS. I recommend it over Slang for the purpose of writing an editor for several reasons.

First, the Slang source (including docs and demo programs) zipped is 740k, whereas my library's source zips to 42k. Second, Slang does not report mouse movement, so a Slang program cannot drag-select with the mouse. Third, the colouring system in Slang is complicated, but mine represents the screen as an EGA-style buffer of character/colour byte pairs.

I wrote my library after an attempt to use Slang myself drove me to the conclusion that its all-platform capability generated an unacceptable overhead and took less than full advantage of the potential of the Linux console. I don't doubt that the author of Slang is a better programmer than me, but I have produced a library specifically to serve programmers who want to produce the first adequate editor for the Linux console.

You can download it here: http://members.lycos.co.uk/ctio/

And now that interfacing with the console is as simple under Linux as it ever was under DOS, the obstacle to Linux editors having the same basic features as DOS editors has been removed. Now anyone who knows C++ can do something great. To produce the editor and widget library I have described might change the course of the history of free software, by rolling out a red carpet to entry-level programmers.

Invent the Wheel

I am constantly being told that there is no need to reinvent the wheel. A ship could sail the Atlantic, powered only by my sighs. Let me assure you, I will march up and down the High Street blowing a trumpet and proclaiming at the top of my voice, "NO NEED TO REINVENT THE WHEEL!" on the day that someone actually produces a ROUND WHEEL.

In theory, any Open Source editor can be hacked and made perfect, but we are still waiting for a mouse-aware console editor which can be hacked and improved by programmers with I.Q.s under 170. Without adequate documentation, Open Source is a Closed Book to ordinary mortals.

Destiny

What are you, C++ programmer? Someone with the power to build abstract machines, an inventor that has transcended the limitations of the material world that crushed the dreams of human inventors of every generation before this? The citizens of the beautiful city of Free Software scrape along on square wheels and you could solve their problem.

If you are sitting on your flabby backside thinking, "Nyaahh. It's not for me", then who is it for? Not me, I'm homeless. I have had access to a computer long enough to write the interface library, but now I am living in a tent and the closest I get to a computer is occasional internet access at a day centre for the unemployed. That is why it can't be me. Why can't it be you?

It might be your destiny to be the author of that Ultimate Editor, the last editor ever written. Perhaps no more than a month after the importance of free software has been recognised and Stallman's face is carved on Mount Rushmore, they may have to blow it off with dynamite and carve yours on there instead.

Reference

Slang, by John E. Davis. Slang appears to have eclipsed curses as the keyboard/mouse/colour text interface library most programmers would recommend. If you are dead clever, you might find a way to use the subset of Slang purely concerned with the console interface, which is part of the Midnight Commander source. It is smaller and allows text selection at the Linux console, while still offering limited functionality on less capable terminals, even telnet windows!

CTIO, by Stephen Bint. By far the simplest and best console interface library I have ever written. Only works at the Linux console and DOS, not in rxvt/xterm nor telnet windows (but it's only 42k). Read about my struggle to write it here.

emacs, by Richard Stallman. A millstone in the history of free software.

 

[BIO] Stephen is a homeless Englishman who lives in a tent in the woods. He eats out of bins and smokes cigarette butts he finds on the road. Though he once worked for a short time as a C programmer, he prefers to describe himself as a "keen amateur".


Copyright © 2003, Stephen Bint. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
HelpDex
By Shane Collinge

These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.

[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]

Recent HelpDex cartoons are at Shane's web site, www.shanecollinge.com, on the Linux page.

 

[BIO] Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.


Copyright © 2003, Shane Collinge. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
Ecol
By Javier Malonda

These cartoons were made for es.comp.os.linux (ECOL), the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author. Text commentary on this page is by LG Editor Iron. Your browser has shrunk the images to conform to the horizontal size limit for LG articles. For better picture quality, click on each cartoon to see it full size.


All Ecol cartoons are at tira.escomposlinux.org (Spanish) and comic.escomposlinux.org (English).

These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier.

 


Copyright © 2003, Javier Malonda. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 87 of Linux Gazette, February 2003

LINUX GAZETTE
...making Linux just a little more fun!
Quick-Start Networking
By Edgar Howell

Quick-Start Networking

Contents

Introduction
1. Ethernet
2. Ssh
3. Screen
4. File Transfer
5. Nfs
6. Samba
7. PCMCIA
8. Further Reading
9. A Future without Windows?

Introduction

Over the holidays I finally had a block of time large enough to work on a network at home. But getting started is always my biggest problem and it took a while to understand what had to be done on which machine. In retrospect it was quite easy to get started.

This article is essentially little more than my notes, taken during the experience, less false starts. To the best of my knowledge it documents what I had to do and will be my reference if the need arises to repeat any of this.

To avoid inflating this unnecessarily and because I'm really just an experienced newbie, almost nothing is explained. There are references to some relevant articles but I assume you know how to find the standard documentation.

To my mind there is no reason for anyone with two or more computers not to have them networked. My first step was an Ethernet card for the PC, a cross-over cable, and a PCMCIA Ethernet card, all for 87.50 euro. Once that was working, another PCMCIA card (I should have known by the price that it was Windows-only), an 8-port switch and three 3-meter cables cost 67.50 euro. Roughly $160 wasn't bad. And it shouldn't cost much more than $25 to connect 2 PCs point-to-point.

The current status of this home office network is as follows:

  • Toshiba 486 500MB/24MB, SuSE 8.0 (kernel 2.4.18-4GB) without X
  • PC Pentium 166 2x4GB/32MB, SuSE 6.3 (kernel 2.2.13)
  • Toshiba AMD 4GB/64MB, SuSE 8.0 (kernel 2.4.18-4GB) or Windows 98

    By the way, the asymmetry in the following is not due to anything inherent in networking or the different Linux kernels. Rather, the 486 will one day be my portal to the Internet. It shouldn't be able to do much of anything other than responding to someone it knows. On the other hand the other two should have no restrictions.

    Other than that, be careful: this is merely intended to get up and running as quickly as possible. Everything else has been pretty much ignored. Consider this just a small but important first step. Your next step has to be the relevant documentation because this is quite superficial!

    1. Quick-Start - Ethernet

    Other than a PCMCIA problem (see below), installing and configuring Ethernet is rather straightforward. To keep things simple I started out with a cross-over cable, i.e. point-to-point, and moved on to a switch only after everything else was known to work.

    Rather than having each machine connect to the network at boot, there are scripts in /root to run when it is time to connect. Here are the relevant scripts and files from two of the machines (less comments and stuff not relevant here):

    Toshiba 486

         /etc/hosts:       127.0.0.1      localhost
                           192.168.0.99   Toshiba486.Lohgo  Lohgo486
                           192.168.0.100  ToshibaAMD.Lohgo  LohgoAMD
                           192.168.0.101  PC.Lohgo          LohgoPC
    
         /etc/hosts.allow: sshd: 192.168.0.100, 192.168.0.101
    
         /root/eth-up:     #!/bin/bash
                           /sbin/ifconfig eth0 192.168.0.99 \
                                          broadcast 192.168.0.255 \
                                          netmask 255.255.255.0 up
    

    Pentium 166

         /etc/hosts:       127.0.0.1      localhost         PC
                           192.168.0.99   Toshiba486.Lohgo  Lohgo486
                           192.168.0.100  ToshibaAMD.Lohgo  LohgoAMD
                           192.168.0.101  PC.Lohgo          LohgoPC
    
         /etc/hosts.allow: sshd:      192.168.0.100
                           portmap:   192.168.0.100
                           lockd:     192.168.0.100
                           rquotad:   192.168.0.100
                           mountd:    192.168.0.100
                           statd:     192.168.0.100
    
         /root/eth-up:     #!/bin/bash
                           /sbin/insmod rtl8139
                           /sbin/ifconfig eth0 192.168.0.101 \
                                          broadcast 192.168.0.255 \
                                          netmask 255.255.255.0 up
    

    The following are the same on all 3 machines:

         /etc/hosts.deny:  ALL : ALL
    
         /root/eth-down:   #!/bin/bash
                           /sbin/ifconfig eth0 down
    
         /root/eth-stat:   #!/bin/bash
                           /sbin/ifconfig eth0; /bin/netstat -r
    

    The extra entries for the P166 in /etc/hosts.allow are there to support nfs. And the insmod in /root/eth-up is needed because the PC has a plug-in Ethernet card, whereas the notebooks use PCMCIA.
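
    A quick sanity check of the link is to run /root/eth-up on both machines and then ping one from the other, using the addresses above:

        ping -c 3 192.168.0.101     # e.g. on the 486, aimed at the PC

    If you get replies, the card, the cable and the interface configuration are all working.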

    Be aware that SuSE at installation has an option to "re-organize" /etc/hosts that defaults to CHECK_ETC_HOSTS=yes in /etc/rc.config. My suspicion is that this is what can cause the 192-IP-address to be replaced by a 127-address for the host itself in /etc/hosts on reboot. I don't reboot often enough to feel like checking this out. But if you get an inexplicable inability to access the network, do verify the contents of this file.

    2. Quick-Start - Ssh

    Without a doubt this is the most complex of the Linux facilities described here, but it is the key to a couple of things that are extremely useful, and it certainly should be set up, for both convenience and security.

    Prerequisites/definitions:

  • "local" is the machine whose keyboard you want to use
  • "remote" is the machine whose keyboard you don't want to use
  • "<user>" has been set up on both machines
  • "<host>" is the 3rd column of the entry for the "remote" host in /etc/hosts on the "local" machine
  • the entries in /etc/hosts.allow and /etc/hosts.deny on the "remote" machine permit use of sshd from the "local" machine
  • use of the mount command does mean playing disk-jockey between the two machines as appropriate.
  • the following is based on SuSE 6.3 (2.2.13) and 8.0 (2.4.18-4GB)

    This is what you have to do if you don't bother to set ssh up:

    Remote        Local          Comment
    
                  <logon as user also known to remote host>
                  ssh <host>
                                 warning:... SOMETHING NASTY
                  yes            accept it
                  <password>
    

    This is setup:

    Remote        Local          Comment
    
                  <logon as user also known to remote host>
                  /usr/bin/ssh-keygen
                                 accept default: .ssh/identity
                                 no passphrase
                  mount /floppy
                  cp .ssh/identity.pub /floppy/
                  umount /floppy
    
    logon as <the same user>
    mkdir .ssh                   if necessary
    mount /floppy
    cp /floppy/identity.pub .ssh/authorized_keys
    cp /etc/ssh/ssh_host_key.pub /floppy/known_hosts
    umount /floppy
    
                  mount /floppy
                  cp /floppy/known_hosts .ssh/
                  umount /floppy
                  vi .ssh/known_hosts
                                 add <host> at start of line and
                                 remove root@<host> at end
    

    And this is what you have to do to logon after setting things up:

    Remote        Local          Comment
    
                  <logon as user also known to remote host>
                  ssh <host>
    

    Note that the host key is generated as part of system installation (with SuSE anyhow). And there can be differences in directory structure (SuSE's kernel 2.2 didn't have 'ssh' under 'etc'). Also note that this is just intended to get someone unfamiliar with ssh up and running. Do not blindly follow these steps if you have used ssh before! In particular most 'cp's certainly ought to be 'cat ... >>'. In the office at home I don't want a passphrase to begin work on a different machine, but you might.
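
    For example, the append form mentioned above would replace the 'cp' on the remote machine with:

        cat /floppy/identity.pub >> ~/.ssh/authorized_keys

    so that an authorized_keys file which already holds other keys is extended rather than overwritten.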

    3. Quick-Start - Screen

    Although it has been mentioned in Linux Gazette several times and I actually did play with it briefly, the need for screen wasn't at all obvious to me. Given 6+ vt's and X running on at least two others with unlimited windows under whatever window manager one has running, it seemed just another level of complexity.

    The need became obvious as the network at home began taking shape. The rationale behind screen boils down to this: if you start sessions on remote machines under screen, they remain available to you as long as the remote machine isn't shut down -- independent of what happens on the communication link or your local machine. For example, one of my PCMCIA Ethernet cards only works under Windows, so I can connect only one of the notebooks to the PC at a time. If the AMD is also running Linux, as it usually is, there is no need to shut the 486 down: just eject the card, pop it into the AMD, and screen keeps sessions active on the 486 for later access.

    To start screen:

        screen -R   restart session if available, otherwise start one
    

    Within screen (not at all apparent, it hides well) use Ctrl-a followed by:

        ?   help
        w   show list of windows
        n   switch to next window
        c   create new window
        d   disconnect
        A   assign title to window
    
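
    Putting ssh and screen together, a typical round trip looks like this (the host name is the 3rd column of /etc/hosts, as in section 2):

        ssh Lohgo486      # from the AMD to the 486
        screen -R         # start (or resume) a session there
                          # ... work, then Ctrl-a d to disconnect
        ssh Lohgo486      # later, perhaps after moving the PCMCIA card
        screen -R         # and the session is right where you left it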

    4. Quick-Start - File Transfer

    If you are using ssh, you can get rid of rsh -- and telnet and ftp as well for that matter. Here are a couple of alternatives that to me are more convenient than the lot.

    Netcat is a nifty little tool, analogous to cat. You start it to receive a file on one machine

        netcat -vv -l -p <port> > <file>
    

    and then tell the other machine what to send

        netcat -vv -w 10 <host> <port> < <file>
    or
        tar -czvf - <directory> | netcat -vv -w 10 <host> <port>
    

    Use netstat and /etc/services to find an available port. The option "-w 10" tells the sender to terminate the connection after 10 seconds of inactivity and the option "-vv" lets you verify that the correct number of bytes was sent and received.
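
    The receiving end of the tar variant just reverses the pipe. With a hypothetical free port of 5555, the pair would be:

        netcat -vv -l -p 5555 | tar -xzvf -                        # receiver
        tar -czvf - <directory> | netcat -vv -w 10 <host> 5555     # sender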

    While netcat holds promise for scripts to backup to a different machine as the network at home gradually takes shape, Midnight Commander has amazing facilities for the things one simply has to do by hand.

    If ssh has been set up properly, the following entered in the command line makes mc's active panel point to the same user on the "other" machine -- yes, "#sh" not "#ssh", unfortunately

        cd /#sh:<host>
    

    And if the other side has anonymous ftp running, the following should be fairly self-explanatory

        cd /#ftp:www.tldp.org/
    

    5. Quick-Start - NFS

    I played around with nfs and it works, but unfortunately my notes are non-existent (basically just check-marks in the printout of the HOWTO). As I recall, besides installing the relevant package on client and server, all that was needed was to edit /etc/exports on the PC (server) as follows:

    /home	192.168.0.100(rw,root_squash,sync,insecure)
    /tmp	192.168.0.100(rw,root_squash,sync,insecure)
    
    See also /etc/hosts.allow under 1. Ethernet, above.

    At installation SuSE has a number of options to be selected, many (all?) of which wind up in /etc/rc.config. Here is an excerpt of those relevant to nfs:

    START_PORTMAP="yes"
    NFS_SERVER="yes"
    USE_KERNEL_NFSD="yes"
    USE_KERNEL_NFSD_NUMBER="4"
    NFS_SERVER_UGID="no"
    REEXPORT_NFS="no"
    

    On the AMD (client) I added the following to /etc/fstab:

    192.168.0.101:/home	/Rhome	nfs	noauto,users,sync 0 0
    192.168.0.101:/tmp	/Rtmp	nfs	noauto,users,sync 0 0
    

    At that point the mount command works with /Rhome etc. just as well as /floppy or any other entry in fstab. One minor annoyance is that user ID's must be the same on all machines using nfs. This was not a problem for me because, when installing Linux, I create the few users in the same order.
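
    With the fstab entries above in place, attaching and detaching the PC's /home looks like this:

        mount /Rhome      # the 'users' option lets ordinary users do this
        ls /Rhome
        umount /Rhome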

    6. Quick-Start - Samba

    Given the difficulty of keeping track of what one is doing under Windows, particularly with false starts and things that turn out to be wrong or simply irrelevant, this needs to be taken with a large grain of salt. It assumes that the driver for the PCMCIA card has been installed, if relevant. And if the terminology is slightly obscure, that is due to my translating from the German versions of Windows.

    The following is what was necessary to enable logon to the PC from the AMD under Samba, i.e. from Windows 98 to Linux 2.2.13 (SuSE 6.3). With appropriate adjustments the same steps worked in the other direction, i.e. from Windows 95 to Linux 2.4.18-4GB (SuSE 8.0). But note these differences:

  • encrypt passwords: 98: yes; 95: no
  • path to smb.conf: 2.4: /etc/samba; 2.2: /etc
  • path to smbpasswd: 2.4: /etc/samba; 2.2: /etc
  • path to netlogon: 2.4: /usr/local/samba; 2.2: /var/lib/samba
    Part 1 - Linux
                                 edit /etc/smb.conf
    [global]
       workgroup = Lohgo
       encrypt passwords = yes
       smb passwd file = /etc/smbpasswd
       password level = 8
       username level = 8
       socket options = TCP_NODELAY
       domain logons = yes
       domain master = yes
       os level = 65
       preferred master = yes
       wins proxy = no
       wins support = yes
       hosts allow = 192.168.0.100 127.
    [homes]
       comment = Home Directories
       read only = no
       browseable = no
    [netlogon]
       comment = Network Logon Service
       path = /usr/local/samba/netlogon
       public = no
       writeable = no
       browseable = no
    [profiles]
       path = /home/%U/profile
       guest ok = yes
       browseable = no
                                 confirm validity, should show no errors
    testparm | less
                                 create user w/password
    smbpasswd -a web
                                 verify user enabled
    smbpasswd -e web
                                 start Samba
    smbd -D
    nmbd -D
                                 at this point from the client -- under
                                 Linux, not Windows -- the following
                                 should give a meaningful response
    smbclient -L LohgoPC
                                 and the following should give you
                                 ftp-like access
    smbclient //LohgoPC/web
    
    Part 2 - Windows98
    
    control panel | network | configuration
      add | client for microsoft network
      properties
        Windows NT-domain: Lohgo
        quick logon
      add | protocol | microsoft | tcp/ip
      properties | set IP-address
        IP-address:     192.168.000.100
        Subnet address: 255.255.255.000
      primary network logon: client for Microsoft network
    control panel | network | identification
      computer name: LohgoAMD
      workgroup:     Lohgo
      description:   ToshibaAMD.Lohgo
    control panel | passwords | user profiles
      users can customize: both
    reboot
                                 if using PCMCIA the following puts
                                 a symbol on the task bar with which
                                 the PCMCIA card can be removed
    <insert PCMCIA Ethernet card and wait for lights to settle down>
                                 the following works ONLY after TCP/IP
                                 has been set up, shows configuration
    start | run | winipcfg
                                 test connection from within a dos-box
    ping -n 5 192.168.0.101
                                 edit c:\windows\hosts.sam
    127.0.0.1       localhost
    192.168.0.101   lohgopc
                                 edit c:\windows\lmhosts.sam
    192.168.0.101   lohgopc
    

    At this point after booting, Windows will ask you to log on, which you can either do with a user known to Samba, or cancel to use Windows without the network as before. Now, however, the pop-up window opened by Ctrl-Esc includes, near the bottom, a line to log off that afterwards provides the same logon prompt as booting. And the entries in the task bar -- in the home directory, anyhow -- tell you who and where you are, as in

    "Explorer - <user> at <host>"

    where "<host>" is the 3rd column of the entry for the Linux machine in /etc/hosts on the Linux machine.

    Symbolic links work quite nicely. The following executed within the home directory of some user makes a directory -- even on a different partition -- on the Linux machine available to that user on the Windows machine:

    ln -s /dos/f/pictures pictures

    Due to a shortage of resources on the PC and the fact that I have no real use for Windows anyhow, I use the following scripts to start and stop the Samba daemons on the PC as needed:

    /root/samba-up:     #!/bin/bash
                        /usr/sbin/smbd -D -d3    -l /tmp/sbd.log
                        /usr/sbin/nmbd -D -d0 -o -l /tmp/sbd.log
    
    /root/samba-down:   #!/bin/bash
                        kill -s SIGTERM $(ps aux | grep mbd \
                            | grep -v grep | awk '{print $2}')
    
    Once you have this working, it won't take you 5 minutes to set up a network printer. Uncomment (or add) the following to smb.conf:
    [printers]
       comment = All Printers
       browseable = no
       printable = yes
       public = no
       read only = yes
       create mode = 0700
       directory = /tmp
    
    And then spend some time with the archaic data entry system on the Windows machine:
    control panel | printer | new printer
      network printer | search
        network environment | Pc
          hpdj-a4-raw
        manufacturer: HP
        printer:      HP OfficeJet
    
    Shut down and re-start Samba and you're in business.
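
    Using the scripts shown earlier, that restart is just:

        /root/samba-down
        /root/samba-up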

    7. Quick-Start - PCMCIA

    To be honest I have no idea whether this is generally applicable or is specific to SuSE (8.0). And it was only the 2.4 kernel that had problems with PCMCIA, not 2.2 strangely enough. Also, it has nothing to do with networking per se. But if you're going to connect a notebook to your network, you'll probably have to confront the alphabet monster. And a PCMCIA Ethernet card makes a delightful docking station.

    Omitting many details, I initially failed to note an inconsistency with references to irq 5 and 10 that later led to tons of error messages. But this was due to having inserted the PCMCIA card before starting the installation of Linux.

    In my case at least, when the PCMCIA card was not inserted before starting installation, there was a reference to only one irq, which led to my doing the following.

    After initial boot in /etc/sysconfig/pcmcia add

    	PCMCIA_PCIC="i82365"
    	PCMCIA_PCIC_OPTS="irq_list=10"
    
    and then run /sbin/SuSEconfig and reboot.

    However, installing the PCMCIA software before doing this causes the notebook to hang irrevocably on boot. The only way to boot is by giving LILO the parameter NOPCMCIA=yes. Instead, I installed the PCMCIA software after SuSEconfig and before reboot.

    After that, inserting the PCMCIA card produces a couple of beeps and it works as advertised. Since this is my first personal experience with Ethernet, I can't comment on alternatives, but the D-Link DFE-650TXD PCMCIA Ethernet card works well, Linux-to-Linux anyhow (a couple of hours sending stuff over the network before risking the wretched "Recovery CD-ROM" to make Windows 98 work again), and it has lots of LEDs to let you know what is going on.

    Here is the output from /sbin/cardctl config and ident.

    CONFIG:

    Socket 0:
      not configured
    Socket 1:
      Vcc 5.0V  Vpp1 0.0V  Vpp2 0.0V
      interface type is "memory and I/O"
      irq 10 [exclusive] [level]
      function 0:
        config base 0x0400
          option 0x60 status 0x00 copy 0x00
        io 0x0300-0x031f [auto]
    

    IDENT:

    Socket 0:
      no product info available
    Socket 1:
      product info: "D-Link", "DFE-650TXD", "Fast Ethernet", "Rev. A1"
      manfid: 0x0149, 0x0230
      function: 6 (network)
    

    8. Further Reading

    See also the following articles in the issue of Linux Gazette indicated:
    36: Introducing Samba by John Blair
    39: Expanding Your Home Network by J.C. Pollman
    44: DNS for the Home Network by J.C. Pollman and Bill Mote
    47: Backup for the Home Network by J.C. Pollman and Bill Mote
    48: SAMBA, Win95, NT and HP Jetdirect by Eugene Blanchard
    50: Sharing your Home by J.C. Pollman and Bill Mote
    57: Making a Simple Linux Network Including Windows 9x by Juraj Sipos
    61: Using ssh by Matteo Dell'Omodarme
    64: ssh suite: sftp, scp and ssh-agent by Matteo Dell'Omodarme
    67: Using ssh-agent for SSH1 and OpenSSH by Jose Nazario
    74: Play with the Lovely Netcat by zhaoway

    The Linux Gazette Answer Gang Knowledge Base under Network Configuration has numerous relevant tidbits among which Routing and Subnetting 101 is mandatory reading.

    And the Linux Focus Index by Subject under System Administration has several articles well worth looking at, e.g.:
    Replacing a Windows/NT/2000 server using Linux and Samba by Sebastian Sasias
    Through the Tunnel by Georges Tarbouriech
    Samba Configuration by Eric Seigne
    Network File System (NFS) by Frederic Raynal
    Home Networking, glossary and overview by Guido Socher

    9. A Future without Windows?

    Coming from pre-TRS-80 days, I've used DOS, various versions of Windows, at least 3 releases of OS/2, Coherent, and now 5 releases of SuSE Linux over at least 5 years. I am convinced that anyone in a position to "compare and contrast" would agree that at best Windows is unstable junk. One of my goals for quite some time had been to gain complete independence from Windows.

    But consider: our ISDN phone system has an RS-232 connector with which it can be programmed via -- yeah, you got it. One of the printers is USB for the notebook and guess whose drivers are available. Our digital camera uses smart media and the USB smart media reader... Oh, well, you get the picture.

    I've only had Samba working for a week and actually hadn't even intended to check it out but everything else worked so well that it seemed worth a try. And it's so slick that I question whether it would really be worth my effort to try to find replacement drivers for this legacy stuff. How many hours, how many experiments, what guarantee of success? Doesn't it make more sense to boot the notebook under the "silly system" (I hope Monty Python put that under GPL) and use the Samba connection to the rest of the network? At least until the last Windows-legacy device eats it.

     

    [BIO] Edgar is a consultant in the Cologne/Bonn area in Germany. His day job involves helping a customer with payroll, maintaining ancient IBM Assembler programs, some occasional COBOL, and otherwise using QMF, PL/1 and DB/2 under MVS.


    Copyright © 2003, Edgar Howell. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    A Keep-Alive Program You Can Run Anywhere
    By Graham Jenkins

    The Poodle and the Labrador

    You are halfway through typing a new program into a remote machine connected over a dial-up line, and you get called to intervene in a fight between your partner's miniature poodle and the neighbour's ugly yellow Labrador. When you get back, your connection has timed-out.

    Is this something that has happened to you? Or perhaps you had to drag your kids away from a particularly offensive episode of Jerry Springer, then found you had to stick around to make sure they didn't come back?

    The Blonde Solution

    The traditional procedure for maintaining activity on your line during an interruption of the type outlined above was to use a 'fortune' program in a small loop so that a random saying got written to your screen every half-minute. This could present some real problems if a person with fair hair looked at your screen and saw something like:

     Q:  How do you make a blonde's eyes light up?
     A:  Shine a flashlight in her ear.
    

    You could of course have used a '-i' or equivalent parameter restricting 'fortune' to generating inoffensive material like:

      Computing Definition
      Chaining - A method of attaching programmers to desk, to speed up output.
    

    The more recent incarnations of the 'fortune' program offer their users a more specific set of options. You can choose between offending those who are Irish, gay or intellectually retarded!

    For The Terminally-Challenged

    If you are just using a browser to read your Hotmail messages, you probably won't want to open a terminal window just so you can run a 'fortune' program. If you are using an X11-compliant window-manager, you could start a clock program with something like:

     xclock -digital -update 1 &

    But that's not going to work on your vintage Windows 95 machine unless you also happen to be running something like PC-Xware.

    The 'KeepAlive.java' program listed here is designed to work anywhere. It's written in Java 1.1 so that even the 'jview' virtual machine on your basic Microsoft machine can handle it. It doesn't rely on finding a 'fortune', 'xclock' or other program on a remote machine. And you don't have to change anything when you connect via a different ISP.

    Finding A Partner

    But you have to send traffic somewhere, right? So how do you find a partner machine which will receive your traffic? If we were writing this program as a shell script, we might work out where our gateway was, and ping it at appropriate intervals. That's not so easy to do in a Java program which might run on any number of platforms. And in any case, it would be nice if we could send traffic somewhere beyond the gateway machine.

    In almost every sort of networking arrangement, the participating machines have knowledge of one or more nameserver addresses. So what we can do from our Java program is make periodic requests to those nameservers. We need to ensure that any hosts whose addresses we request cannot be found locally in a hosts table. And we need to also ensure that the answers to our nameserver requests are not cached locally. If you take a look now at the program, you will see that the names of the hosts whose addresses we are requesting are generated by examining the clock-time in milliseconds at the time of each request. This results in names like A1040689223909, A1040689229448, etc.

    That's really all we need to do. But it's nice to be able to see something happening. So our program defines a 'MessageFrame' class which displays two colored buttons in a GUI window. The colors of these are changed at each iteration. We also set the title on the GUI window, and change it at each iteration - so we can still see something happening when the window is minimized. And we set up a listener to detect 'window closing' events and perform a graceful shutdown.

    Getting It Together

    Here's the program. You need to compile it with a command like:

     javac KeepAlive.java
    This will generate three 'class' files which contain code that can be executed on a Java virtual machine. So you can copy those class files to a directory on another machine, then execute the program with a command like:
     java KeepAlive
    To use the Microsoft 'jview' virtual machine on a Windows box, use:
     jview KeepAlive

    /* @(#) KeepAlive.java  Trivial keep-alive program. Tries at 5-second intervals
     *                      to find addresses for hosts with generated names. This
     *                      ensures that messages are sent to nameserver(s).
     *                      Copyright (c) 2002 Graham Jenkins <grahjenk@au1.ibm.com>
     *                      All rights reserved. Version 1.06, August 15, 2002.
     */
    import java.io.*;
    import java.net.*;
    import java.awt.*;
    import java.awt.event.*;
    import java.util.Date;
    public class KeepAlive {
      public static void main(String[] args) {
        MessageFrame f=new MessageFrame();  // Change button colours each iteration.
        int flag=0;                         // Also switch frame-title so we can see
        while ( true ) {                    // activity whilst iconified.
          f.statusMess(Color.red,Color.red); f.setTitle("==X==");
          try {InetAddress addr=InetAddress.getByName("A"+(new Date()).getTime());}
          catch (UnknownHostException ioe) {}
          if(flag==0) {f.statusMess(Color.yellow,Color.green); f.setTitle("1.06");}
          else {f.statusMess(Color.green,Color.yellow); f.setTitle("KeepAlive");}
          flag=1-flag;
          try {Thread.sleep(5000L);} catch (InterruptedException e) {}
        }
      }
    }
    
    class MessageFrame extends Frame implements ActionListener {
      private Button b1, b2;                // Displays two coloured buttons.
      public MessageFrame() {
        Panel p=new Panel(); p.setLayout(new FlowLayout());
        b1=new Button() ; b2=new Button(); p.add(b1); p.add(b2);
        this.add("South",p); this.setSize(150,50); this.show();
        this.addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) { System.exit(0); }
        });
      }
      public void statusMess(Color left, Color right) {
        b1.setBackground(left); b2.setBackground(right);
      }
      public void actionPerformed(ActionEvent e) {}
    }
    

    If you have Java 1.1 or later, and no requirement to use the Microsoft virtual machine, you can assemble the class files into a single 'jar' file, then execute it using the '-jar' option thus:

      echo "Main-Class: KeepAlive\015" >/tmp/MyManifest
      jar cmf /tmp/MyManifest /tmp/KeepAlive.jar *.class
      
      java -jar /tmp/KeepAlive.jar
    

    If You Don't Have It

    If your machine doesn't have Java, you can get it from Sun Microsystems. And if you need to know more about network programming with Java, you could take a look at "Java Network Programming and Distributed Computing" by David Reilly and Michael Reilly.

     

    [BIO] Graham is a Unix Specialist at IBM Global Services, Australia. He lives in Melbourne and has built and managed many flavors of proprietary and open systems on several hardware platforms.


    Copyright © 2003, Graham Jenkins. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Linux-Based Voice Recognition
    By Janine M Lodato

    Let's look at Linux-based voice recognition software from the perspective of China. It would behoove Linux computer makers to begin manufacturing their computers in China: China offers low-cost manufacturing and provides a large market for their hardware, which can also be exported to other important markets around the world.

    Linux computers have the capacity to accommodate voice recognition systems, such as IBM ViaVoice. This is especially advantageous to Chinese speakers because both Mandarin and Cantonese are very complex in the written form, so documents could be more easily produced through voice recognition software running on a Linux platform. Using a keyboard is next to impossible for Chinese languages because so many characters are involved in typing a document.

    Other languages will also benefit from using voice recognition software for purposes of speed. Hands-busy, eyes-busy professionals can benefit greatly from voice recognition so they don't have to use a mouse and keyboard to document their findings. Voice-activated, easily-used telephone systems will benefit all walks of life. Anyone driving a car will find voice recognition a much more effective way of manipulating a vehicle and communicating from the vehicle.

    The health-care market alone may justify the Linux-based voice recognition project. Health-care services are the largest expense of the Group of Ten nations, and the fastest-growing sector as well. Health-care workers would benefit from using their voices to document the treatment of patients. Voice recognition allows them a hands-free environment in which to analyze, treat and write about particular cases easily and quickly.

    Medical devices connected electronically via wireless LAN can benefit as well.

    In this life sciences field, the simplicity, reliability and low cost of Linux for servers, tablets, embedded devices and desktops is paramount. Only about 10% of the documents in the health-care field in the USA are produced electronically, due to the cumbersome and unreliable nature of the Windows environment. 30% of the cost of health-care is a direct result of manual creation of documents, and many of the malpractice cases are also due to the imprecision of transcriptions of manually scribbled medical records and directives, as anybody who looks at a prescription can attest.

    Obviously, the market for these new technologies exists. What remains is for a hungry company with aggressive sales people to tap into that market. Once those sales people get the technology distributed, the needs of many will be met and a new mass market will open up that Microsoft isn't filling: assistive technology (AT). Actually, the field already exists but needs to be expanded to include both physically disabled and functionally disabled.

    Yes, voice recognition offers great promise for the future. However, it isn't perfect and needs to be improved. One improvement could use lip reading to bolster its accuracy. Still another is multi-tonal voice input. Another is directional microphones. Every generation of voice recognition software will improve as the hardware for Linux gets bigger and stronger.

     

    [BIO]


    Copyright © 2003, Janine M Lodato. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Perl One-Liner of the Month: The Adventure of the Arbitrary Archives
    By Ben Okopnik

    Spring was in full bloom, and Woomert Foonly was enjoying another perfect day. It had featured a trivially easy configuration of a 1,000-node Linux cluster, and had been brought to an acme by lunching on Moroccan b'stila with just a touch of ras el hanout curry and a fruited couscous on the side, complemented by a dessert of sweet rice with cinnamon. All was at peace... until Frink Ooblick burst in, supporting - almost carrying - a man who seemed to be in the last extremity of shock. Frink helped him to the couch, then dropped into the easy chair, clearly exhausted by his effort.

     -- "Woomert, it's simply scandalous. This is Resolv Dot Conf, a humble... erm, well, a sysadmin, anyway. Recently, he was cruelly forced to install some kind of a legacy OS on his manager's computer - can you imagine? - and now, he's being asked to do something that sounds nearly impossible, although I could only get scant details. He had heard of your reputation (who hasn't, these days?), and was coming to see if you could help him, but collapsed in the street just outside your door due to the residual shock and a severe Jolt Cola deficiency. As to the problem... well, I'll let him tell you."

    Woomert had been tending to their guest while listening, with the result that the latter now looked almost normal. Indeed, Woomert's "sysadmin-grade coffee" was well-known among the cognoscenti for its restorative powers, although the exact recipe (it was thought to have Espresso Alexander and coffee ice cream somewhere in its ancestry, but the various theories diverged widely after that point) remained a deep secret.

    Now, though, the famous detective's eyes sharpened to that look of concentration he habitually wore while working.

     -- "Please state your problem clearly and concisely."

    The quickly recovering sysadmin shook his head mournfully.

     -- "Well, Mr. Foonly... you see, what I have is a script that processes the data submitted to us by our satellite offices. The thing is, it all comes in various forms: we're a health insurance data processor, and every company's format is different. Not only that, but the way everyone submits the data is different: some just send us a plain data file, others use 'gzip', or 'compress', or 'bzip', or 'rar', or even 'tar' and 'compress' (or 'gzip'), and others - fortunately, all of those are just plain data - hand us a live data stream out of their proprietary applications. Our programmers handled the various format conversions as soon as they got the specs, but this arbitrary compression problem was left up to me, and it's got me up a tree!"

    He stopped to take a deep breath and another gulp of Woomert's coffee, which seemed to revive him further, although he still sat hunched over, his forehead resting on his hand.

    "Anyway, at this point, making it all work still requires human intervention; we've got two people doing nothing but sorting and decompressing the files, all day long. If it wasn't for that, the whole system could be completely automated... and of course, management keeps at me: 'Why isn't it fixed yet? Aren't you computer people supposed to...' and so on."

    When he finally sat up and looked at Woomert, his jaw was firmly set. He was a man clearly resigned to his fate, no matter how horrible.

    "Be honest with me, Mr. Foonly. Is there a possibility of a solution, or am I finished? I know The Mantra [1], of course, but I'd like to go on if possible; my users need me, and I know of The Dark Powers that slaver to descend upon their innocent souls without a sysadmin to protect them."

    Woomert nodded, recognizing the weary old warrior's words as completely true; he, too, had encountered and battled The Dark Ones, creatures that would completely unhinge the minds of the users if they were freed for even a moment, and knew of the valiant SysAdmin's Guild (http://sage.org) which had sworn to protect the innocent (even though it was often protection from themselves, via the application of the mystic and holy LART [2]).

     -- "Resolv, I'm very happy to say that there is indeed a solution to the problem. I'm sure that you've done your research on the available tools, and have heard of 'atool', an archive manager by Oskar Liljeblad..."

    At Resolv's nod, he went on.

    "All right; then you also know that it will handle all of the above archive formats and more. Despite the fact that it's written in Perl, we're not going to use any of its code in your script - that would be a wasteful duplication of effort. Instead, we're simply going to use 'acat', one of 'atool's utilities, as an external filter - a conditional one. All we have to do is insert it right at the beginning of your script, like so:


    #!/usr/bin/perl -w
    # Created by Resolv Dot Conf on Pungenday, Chaos 43, 3166 YOLD
    @ARGV = map { /\.(gz|tgz|zip|bz2|rar|Z)$/ ? "acat $_ '*' 2>/dev/null|" : $_ } @ARGV;
    # Rest of script follows
    ...
    
    
    "Perl will take care of the appropriate magic - and that will take care of the problem."

    The sysadmin was on his feet in a moment, fervently shaking Woomert's hand.

     -- "Mr. Foonly, I don't know how to thank you. You've saved... well, I won't speak of that, but I want you to know that you've always got a friend wherever I happen to be. Wait until they see this!... Uh, just to make sure I understand - what is it? How does it work?"

    Woomert glanced over at Frink, who also seemed to be on the edge of his seat, eager for the explanation.

     -- "What do you think, Frink - can you handle this one? I've only used one function and one operator; the rest of it happened automagically, simply because of the way that Perl deals with files on the command line."

    Frink turned a little pink, and chewed his thumb as he always did when he was nervous.

     -- "Well, Woomert... I know you told me to study the 'map' function, but it was pretty deep; I got lost early on, and then there was this new movie out..."

    Woomert smiled and shook his head.

     -- "All right, then. 'map', as per the info from 'perldoc -f map', evaluates the specified expression or block of expressions for each element of a list - sort of like a 'for' loop, but much shorter and more convenient in many cases. I also used the ternary conditional operator ('?:') which works somewhat like an "if-then-else" construct:


    # Ternary conditional op - sets $a to 5 if $b is true, to 10 otherwise
    $a = $b ? 5 : 10;

    # "if-then-else" construct - same action
    if ( $b ){
        $a = 5;
    }
    else {
        $a = 10;
    }
    "Both of the above do the same thing, but again, the first method is shorter and often more convenient. Examining the script one step at a time, what I have done is test each of the elements in @ARGV, which initially contains everything on the command line that follows the script name, against the following regular expression:

    /\.(gz|tgz|zip|bz2|rar|Z)$/

    This will match any filename that ends in a period (a literal dot) followed by any of the specified extensions.

    Now, if the filename doesn't match the regex, the ternary operator returns the part after the colon, '$_' - which simply contains the original filename. Perl then processes the filename as it normally does the ones contained in @ARGV: it opens a filehandle to that file and makes its contents available within the script. In fact, there are a number of ways to access the data once that's done; read up on the diamond operator ('<>'), the STDIN filehandle, and the ARGV filehandle (note the similarity and the difference, Frink!) for information on some of the many available methods of doing file I/O in Perl."

    "On the other hand, if the current element does match, the ternary operator will return the code before the colon, in this case

    "acat $_ '*' 2>/dev/null|"

    Perl will then execute the above command for the current filename. The syntax may seem a little odd, but it's what 'acat' (or, more to the point, the archive utilities that it uses) requires to process the files and ignore the error messages. Note that the command ends in '|', the pipe symbol; what happens here is much like doing a pipe within the shell. The command will be executed, the output will be placed in a memory buffer, and the contents of that buffer will become available on the filehandle that Perl would normally have opened for that file - presto, pure magic! [3]"
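
    [Editor's note: you can watch that piped-open magic in isolation; the command in this two-liner is invented purely for demonstration:

        @ARGV = ( 'ls -l |' );  # the trailing '|' makes Perl run the command...
        while ( <> ) { print }  # ...and read its output as if it were a file

    ]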

    "So, to break it all out in long form, here's what I did:


    @ARGV = map {                              # Use the BLOCK syntax of 'map'
        if ( /\.(gz|tgz|zip|bz2|rar|Z)$/ ){    # Look for archive extensions
            "acat $_ '*' 2>/dev/null|";        # Uncompress/pipe out the contents
        }
        else {
            $_;                                # Otherwise, return original name
        }
    } @ARGV;                                   # This is the list to "walk" over
    "Perl handles it from that point on. Once you pass it something useful on the command line or standard input, it knows just what to do. In fact," he glanced sternly over at Frink, who once again looked abashed, "studying 'perldoc perlopentut' is something I recommend to anyone who wants to understand how Perl does I/O. This includes files, pipes, forking child processes, building filters, dealing with binary files, duplicating file handles, the single-argument version of 'open', and many other things. In some ways, this could be called the most important document that comes with Perl. Taking a look at 'perldoc perlipc' as a follow-up would be a good idea as well - it deals with a number of related issues, including opening safe (low privilege) pipes to possibly insecure processes, something that can become very important in a hurry."

     -- "Now, Resolv, I believe that you have a bright new future stretching out ahead of you; your problem will be solved, your management will be pleased, and your users will remain safe from Those Outside The Pale. If you would care to join us in a little celebration, I've just finished boiling a Spotted Dog, and - oh. Where did he go?... It's a very fine English pudding with currants, after all. Well, I suppose he wanted to implement that change as soon as possible..."


    Footnotes

    [1] "Down, Not Across." For those who need additional clues on the grim meaning of The Sysadmin Mantra, search the archives of alt.sysadmin.recovery at <http://groups.google.com>, and all will become clear. If it does not, then you weren't meant to know. :)

    [2] From The Jargon File:

      Luser Attitude Readjustment Tool. ... The LART classic is a 2x4 or
      other large billet of wood usable as a club, to be applied upside the
      head of spammers and other people who cause sysadmins more grief than
      just naturally goes with the job. Perennial debates rage on
      alt.sysadmin.recovery over what constitutes the truly effective LART;
      knobkerries, semiautomatic weapons, flamethrowers, and tactical nukes
      all have their partisans. Compare {clue-by-four}.


    [3] See "perldoc perlopentut" for a tutorial on opening files, the 'magic' in @ARGV, and even "Dispelling the Dweomer" for those who have seen too much magic already. :)

     

    Ben is a Contributing Editor for Linux Gazette and a member of The Answer Gang.

    [BIO] Ben was born in Moscow, Russia in 1962. He became interested in electricity at age six--promptly demonstrating it by sticking a fork into a socket and starting a fire--and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory. He would gladly pay good money to any psychologist who can cure him of the resulting nightmares.

    Ben's subsequent experiences include creating software in nearly a dozen languages, network and database maintenance during the approach of a hurricane, and writing articles for publications ranging from sailing magazines to technological journals. Having recently completed a seven-year Atlantic/Caribbean cruise under sail, he is currently docked in Baltimore, MD, where he works as a technical instructor for Sun Microsystems.

    Ben has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


    Copyright © 2003, Ben Okopnik. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Fun with Simputer and Embedded Linux
    By Pramode C.E

    The Simputer is a StrongArm CPU-based handheld device running Linux. Originally developed by professors at the Indian Institute of Science, Bangalore, the device has a social objective of bringing computing and connectivity within the reach of rural communities. This article provides a tutorial introduction to programming the Simputer (and similar ARM-based handheld devices - there are lots of them on the market). The reader is expected to have some experience programming on Linux. Disclaimer: I describe things which I have done on my Simputer without any problem - if following my instructions leads to your handheld going up in smoke, I should not be held responsible!

    Hardware/Software

    The device is powered by an Intel StrongArm (SA-1110) CPU. The flash memory size is either 32MB or 16MB and the RAM is 64MB or 32MB. The peripheral features include:

    1. USB master as well as slave ports.
    2. Standard serial port.
    3. Infra Red communication port.
    4. Smart card reader.

    Some of these features are enabled by using a `docking cradle' provided with the base unit. Power can be provided either by rechargeable batteries or external AC mains.

    The Simputer is powered by GNU/Linux - kernel version 2.4.18 (with a few patches) works fine. The unit comes bundled with binaries for the X Window System and a few simple utility programs. More details can be obtained from the project home page at www.simputer.org.

    Powering up

    There is nothing much to it, other than pressing the `power button'. You will see a small tux picture coming up, and within a few seconds, you will have X up and running. The LCD screen is touch sensitive and you can use a small `stylus' (geeks use finger nails!) to select applications and move through the graphical interface. If you want keyboard input, be prepared for some agonizing manipulations using the stylus and a `soft keyboard', which is nothing but a GUI program from which you can select individual letters and other symbols.

    Waiting for bash

    GUIs are for kids. You are not satisfied till you see the trusty old bash prompt. Well, you don't have to try a lot. The Simputer has a serial port - attach the provided serial cable to it - the other end goes to a free port on your host Linux PC (in my case, /dev/ttyS1). Now fire up a communication program (I use `minicom') - you have to first configure the program so that it uses /dev/ttyS1 with communication speed set to 115200 (that's what the Simputer manual says - if you are using a similar handheld, this need not be the same), 8N1 format, and hardware and software flow controls disabled. Doing this with minicom is very simple - invoke it as:

    minicom -m -s

    Once configuration is over - just type:

    minicom -m

    and be ready for the surprise. You will immediately see a login prompt. You should be able to type in a user name/password and log on. You should be able to run simple commands like `ls', `ps' etc - you may even be able to use `vi'.

    If you are not familiar with running communication programs on Linux, you may be wondering what really happened. Nothing much - it's standard Unix magic. A program sits on the Simputer watching the serial port (the Simputer serial port, called ttySA0) - when you run minicom on the Linux PC, you establish a connection with that program, which sends you a login prompt over the line, reads in your response, authenticates you and spawns a shell with which you can interact over the line.
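
    On the Simputer side, this watcher is typically spawned from /etc/inittab. As a sketch (the exact getty variant and flags on your device may differ - check the inittab that ships with it), the entry would look something like:

    T0:23:respawn:/sbin/getty -L ttySA0 115200 vt100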

    Once minicom initializes the serial port on the PC end, you can `script' your interactions with the Simputer. You are exploiting the idea that the program running on the Simputer is watching for data over the serial line - the program does not care whether the data comes from minicom itself or a script. You can try out the following experiment:

    1. Open two consoles (on the Linux PC)
    2. Run minicom on one console, log on to the simputer
    3. On the other console, type `echo ls > /dev/ttyS1'
    4. Come to the first console - you will see that the command `ls' has executed on the Simputer.

    Setting up USB Networking

    The Simputer comes with a USB slave port. You can establish a TCP/IP link between your Linux PC and the Simputer via this USB interface. Here are the steps you should take:

    1. Make sure you have a recent Linux distribution - Red Hat 7.3 is good enough.
    2. Plug one end of the USB cable onto the USB slave slot in the Simputer, then boot the Simputer.
    3. Boot your Linux PC. DO NOT connect the other end of the USB cable to your PC now. Log in as root on the PC.
    4. Run the command `insmod usbnet' to load a kernel module which enables USB networking on the Linux PC. Verify that the module has been loaded by running `lsmod'.
    5. Now plug the other end of the USB cable onto a free USB slot of the Linux PC. The USB subsystem in the Linux kernel should be able to register a device attach. On my Linux PC, immediately after plugging in the USB cable, I get the following kernel messages (which can be seen by running the command `dmesg'):
    usb.c: registered new driver usbnet
    hub.c: USB new device connect on bus1/1, assigned device
    number 3
    usb.c: ignoring set_interface for dev 3, iface 0, alt 0
    usb0: register usbnet 001/003, Linux Device
    

    After you have reached this far, you have to run a few more commands:

    1. Run `ifconfig usb0 192.9.200.1' - this will assign an IP address to the USB interface on the Linux PC.
    2. Using `minicom' and the supplied serial cable, log on to the Simputer as root. Then run the command `ifconfig usbf 192.9.200.2' on the Simputer.
    3. Try `ping 192.9.200.2' on the Linux PC. If you see ping packets running to and fro, congrats. You have successfully set up a TCP/IP link!
    You can now telnet/ftp to the Simputer through this TCP/IP link.

    Hello, Simputer

    It's now time to start real work. Your C compiler (gcc) normally generates `native' code, ie, code which runs on the microprocessor on which gcc itself runs - most often, an Intel (or clone) CPU. If you wish your program to run on the Simputer (which is based on the StrongArm microprocessor), the machine code generated by gcc should be understandable to the StrongArm CPU - your `gcc' should be a cross compiler. If you download the gcc source code (preferably 2.95.2) together with `binutils', you should be able to configure and compile it in such a way that you get a cross compiler (which could be invoked like, say, arm-linux-gcc). This might be a bit tricky if you are doing it for the first time - your handheld vendor should supply you with a CD which contains the required tools in a precompiled form - it is recommended that you use it (but if you are seriously into embedded development, you should try downloading the tools and building them yourself).

    Assuming that you have arm-linux-gcc up and running, you can write a simple `Hello, Simputer' program, compile it into an `a.out', ftp it onto the Simputer and execute it (it would be good to have one console on your Linux PC running ftp and another one running telnet - as soon as you compile the code, you can upload it and run it from the telnet console - note that you may have to give execute permission to the ftp'd code by doing `chmod u+x a.out' on the Simputer).
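
    As a concrete example, here is the kind of test program I mean (the file name hello.c is just my choice):

    /* hello.c - any small C program will do for testing the toolchain */
    #include <stdio.h>
    
    int 
    main(void) 
    { 
       printf("Hello, Simputer\n");
       return 0;
    }

    Compiling it with `arm-linux-gcc hello.c' gives you the `a.out' mentioned above; running `file a.out' on the PC should report an ARM executable.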

    A note on the Arm Linux kernel

    The Linux kernel is highly portable - all machine dependencies are isolated in directories under the `arch' subdirectory (which is directly under the root of the kernel source tree, say, /usr/src/linux). You will find a directory called `arm' under `arch'. It is this directory which contains ARM CPU specific code for the Linux kernel.

    The Linux ARM port was initiated by Russell King. The ARM architecture is very popular in the embedded world and there are a LOT of different machines with fantastic names like Itsy, Assabet, Lart, Shannon etc, all of which use the StrongArm CPU (there also seem to be other kinds of ARM CPUs - now that makes up a really heady mix). There are minor differences in the architecture of these machines which make it necessary to perform `machine specific tweaks' to get the kernel working on each one of them. The tweaks for most machines are available in the standard kernel itself, and you only have to choose the actual machine type during the kernel configuration phase to get everything in order. But things are a bit confusing with the Simputer: the tweaks for the initial Simputer specification got into the ARM kernel code, but the vendors who are actually manufacturing and marketing the device seem to be building according to a modified specification, and the patches required for making the ARM kernel run on these modified configurations are not yet integrated into the main kernel tree. But that is not really a problem, because your vendor will supply you with the patches - and they might soon get into the official kernel.

    Getting and building the kernel source

    You can download the 2.4.18 kernel source from the nearest Linux kernel ftp mirror. You will need the file `patch-2.4.18-rmk4' (which can be obtained from the ARM Linux FTP site ftp.arm.linux.org.uk). You might also need a vendor supplied patch, say, `patch-2.4.18-rmk4-vendorstring'. Assume that all these files are copied to the /usr/local/src directory.

    1. First, untar the main kernel distribution by running `tar xvfz kernel-2.4.18.tar.gz'
    2. You will get a directory called `linux'. Change over to that directory and run `patch -p1 < ../patch-2.4.18-rmk4'.
    3. Now apply the vendor supplied patch. Run `patch -p1 < ../patch-2.4.18-rmk4-vendorstring'.

    Now, your kernel is ready to be configured and built. Before that, you have to examine the top level Makefile (under /usr/local/src/linux) and make two changes - there will be a line of the form:

    ARCH := <lots-of-stuff>

    near the top. Change it to

    ARCH := arm

    You need to make one more change. You observe that the Makefile defines:

    AS = $(CROSS_COMPILE)as
    LD = $(CROSS_COMPILE)ld
    CC = $(CROSS_COMPILE)gcc
    

    You note that the symbol CROSS_COMPILE is equated with the empty string. During normal compilation, this will result in AS getting defined to `as', CC getting defined to `gcc' and so on which is what we want. But when we are cross compiling, we use arm-linux-gcc, arm-linux-ld, arm-linux-as etc. So you have to equate CROSS_COMPILE with the string arm-linux-, ie, in the Makefile, you have to enter:

    CROSS_COMPILE = arm-linux-

    Once these changes are incorporated into the Makefile, you can start configuring the kernel by running `make menuconfig' (note that it is possible to do this without modifying the Makefile - just run `make menuconfig ARCH=arm'). It may take a bit of tweaking here and there before you can actually build the kernel without error. You will not need to modify most things - the defaults should be acceptable.

    1. You have to set the system type to SA1100 based ARM system and then choose the SA11x0 implementation to be `Simputer(Clr)' (or something else, depending on your machine). I had also enabled SA1100 USB function support, SA11x0 USB net link support and SA11x0 USB char device emulation.
    2. Under Character devices->Serial drivers, I enabled SA1100 serial port support, console on serial port support and set the default baud rate to 115200 (you may need to set differently for your machine).
    3. Under Character devices, SA1100 real time clock and Simputer real time clock are enabled.
    4. Under Console drivers, VGA Text console is disabled.
    5. Under General Setup, the default kernel command string is set to `root=/dev/mtdblock2 quiet'. This may be different for your machine.

    Once the configuration process is over, you can run

    make zImage

    and in a few minutes, you should get a file called `zImage' under arch/arm/boot. This is your new kernel.

    Running the new kernel

    I describe the easiest way to get the new kernel up and running.

    Just as you have LILO or Grub acting as the boot loader for your Linux PC, the handheld too has a bootloader stored in its non-volatile memory. In the case of the Simputer, this bootloader is called `blob' (which I assume is the boot loader developed for the Linux Advanced Radio Terminal project, `Lart'). As soon as you power on the machine, the boot loader starts running. If you start minicom on your Linux PC, keep the `enter' key pressed and then power on the device, the bootloader, instead of continuing with booting the kernel stored in the device's flash memory, will start interacting with you through a prompt which looks like this:

    blob>

    At the bootloader prompt, you can type:

    blob> download kernel

    which results in blob waiting for you to send a uuencoded kernel image through the serial port. Now, on the Linux PC, you should run the command:

    uuencode zImage /dev/stdout > /dev/ttyS1

    This will send out a uuencoded kernel image through the COM port - which will be read and stored by the bootloader in the device's RAM. Once this process is over, you get back the boot loader prompt. You just have to type:

    blob> boot

    and the boot loader will run the kernel which you have just compiled and downloaded.

    A bit of kernel hacking

    What good is a cool new device if you can't do a bit of kernel hacking? My next step after compiling and running a new kernel was to check out how to compile and run kernel modules. Here is a simple program called `a.c':

    #include <linux/module.h>
    #include <linux/init.h>
    
    /* Just a simple module */
    
    int 
    init_module(void) 
    { 
       printk("loading module...\n");
       return 0;
    }
    
    void 
    cleanup_module(void) 
    { 
       printk("cleaning up ...\n");
    }
    

    You have to compile it using the command line:

    arm-linux-gcc -c -O -DMODULE -D__KERNEL__ a.c -I/usr/local/src/linux-2.4.18/include

    You can ftp the resulting `a.o' onto the Simputer and load it into the kernel by running:

    insmod ./a.o

    The printk output can be seen by running `dmesg'. You can remove the module by running:

    rmmod a

    Handling Interrupts

    After running the above program, I started scanning the kernel source to identify the simplest code segment which would demonstrate some kind of physical hardware access - and I found it in the hard key driver. The Simputer has small buttons which when pressed act as the arrow keys - these buttons seem to be wired onto the general purpose I/O pins of the ARM CPU (which can also be configured to act as interrupt sources - if my memory of reading the StrongArm manual is correct). Writing a kernel module which responds when these keys are pressed is a very simple thing - here is a small program which is just a modified and trimmed down version of the hardkey driver - you press the button corresponding to the right arrow key - an interrupt gets generated which results in the handler getting executed. Our handler simply prints a message and does nothing else. Before inserting the module, we must make sure that the kernel running on the device does not incorporate the default button driver code - checking /proc/interrupts would be sufficient.

    Compile the program shown below into an object file (just as we did in the previous program), load it using `insmod', check /proc/interrupts to verify that the interrupt line has been acquired. Pressing the button should result in the handler getting called - the interrupt count displayed in /proc/interrupts should also change.
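
    Assuming you save the code as, say, `b.c' (the name is my choice), the compile command is the same as for the earlier module:

    arm-linux-gcc -c -O -DMODULE -D__KERNEL__ b.c -I/usr/local/src/linux-2.4.18/include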

    
    #include <linux/module.h>
    #include <linux/ioport.h>
    #include <linux/sched.h>
    #include <asm-arm/irq.h>
    #include <asm/io.h>
    
    static void 
    key_handler(int irq, void *dev_id, struct pt_regs *regs)
    {
      printk("IRQ %d called\n", irq);
    }
    
    int
    init_module(void)
    {
      unsigned int res = 0;
      printk("Hai, Key getting ready\n");
      set_GPIO_IRQ_edge(GPIO_GPIO12, GPIO_FALLING_EDGE);
      res = request_irq(IRQ_GPIO12, key_handler, SA_INTERRUPT,
      "Right Arrow Key", NULL);
      if(res) {
         printk("Could Not Register irq %d\n", IRQ_GPIO12);
         return res;
       }
      return res ;
    }
    
    void
    cleanup_module(void)
    {
      printk("cleanup called\n");
      free_irq(IRQ_GPIO12, NULL);
    }
    

    Future work

    A Linux based handheld offers a lot of opportunities for serious fun - as I learn more about the device, I shall try to share my findings with the readers.

    References

    1. Simputer Project Home Page.
    2. Simputerland and PicoPeta - information about Simputer development activities from companies which are manufacturing and marketing the product.
    3. Arm Linux Project Home Page
    4. Lart Project Home Page. Lots of cool stuff here. You might like to check out the `Clock Scaling' link on this site. Clock scaling allows you to change the clock speed of the running processor on the fly - useful for saving battery power.

     

    [BIO] I am an instructor working for IC Software in Kerala, India. I would have loved becoming an organic chemist, but I do the second best thing possible, which is to play with Linux and teach programming!


    Copyright © 2003, Pramode C.E. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Qubism
    By Jon "Sir Flakey" Harsem

    These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.

    [cartoon]
    [cartoon]

    All Qubism cartoons are here at the CORE web site.

     

    [BIO] Jon is the creator of the Qubism cartoon strip and current Editor-in-Chief of the CORE News Site. Somewhere along the early stages of his life he picked up a pencil and started drawing on the wallpaper. Now his cartoons appear 5 days a week on-line, go figure. He confesses to owning a Mac but swears it is for "personal use".


    Copyright © 2003, Jon "Sir Flakey" Harsem. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Yacc/Bison - Parser Generators - Part 1
    By Hiran Ramankutty

    Introduction

    Yacc ("Yet Another Compiler Compiler") is used to parse a language described by a context-free grammar. Not all context-free languages can be handled by Yacc or Bison - only those that are LALR(1) can be parsed. To be specific, this means that it must be possible to tell how to parse any portion of an input string with just a single token of look-ahead. I will explain that more clearly later in this article.

    Bison is also a parser generator in the style of Yacc. It was written primarily by Robert Corbett; Richard Stallman made it Yacc-compatible. There are differences between Bison and Yacc, but they are not the subject of this article.

    Languages and Context-Free Grammars

    Grammar, in the English language, is a set of rules for constructing meaningful sentences. We can say the same for context-free grammars: almost all programming languages are based on them. The set of rules in any grammar deals with syntactic groupings that help in the construction of semantic structures. To be specific, we specify one or more syntactic groupings and give rules for constructing them from their parts. For example, in C, `expression' is one kind of grouping. One rule for making an expression is, "An expression can be made of a minus sign and another expression". Another would be, "An expression is an integer". You must have noticed that the rules are recursive. In fact, every such grammar must also have a rule which leads out of the recursion.

    The most common formal system for representing such rules is the Backus-Naur Form or "BNF". All BNF grammars are context-free grammars.

    In the grammatical rules for a language, we give each grouping a name, called a symbol. Those symbols which can be sub-divided into smaller constructs are called non-terminals and those which cannot be subdivided are called terminals. If a piece of input is a single terminal it is called a token, and if it is a single nonterminal it is called a grouping. For example, `identifier', `number' and `string' are tokens, whereas `expression', `statement', `declaration' and `function definition' are groupings in the C language. The full grammar may use additional language constructs with another set of nonterminal symbols.

    Basic Parsing Techniques

    A parser for a grammar G determines whether an input string, say `w', is a sentence of G or not. If `w' is a sentence of G then the parser produces the parse tree for `w'; otherwise, an error message is produced. By parse tree we mean a diagram that represents the syntactic structure of the string `w'. There are two basic types of parsers for context-free grammars - bottom-up and top-down, the former being the one of interest to us here.

    Bottom-Up Parsing

    It is also known as Shift-Reduce Parsing. Here, attempts to construct a parse tree for an input begin at the leaves (bottom) and work up towards the root (top). In other words this will lead to a process of `reduction' of the input string to the start symbol of the grammar based on its production rules. For example, consider the grammar:

    S -> aAcBe
    A -> Ab/b
    B -> d
    

    Let w = "abbcde". Our aim is to reduce this string `w' to S, where S is the start symbol. We scan "abbcde" looking for substrings that match the right side of some production. The substrings `b' and `d' qualify. Again, there are 2 b's to be considered. Let us proceed with the leftmost `b'. We replace it by `A', the left side of the production A -> b. The string has now become "aAbcde". We now see that `Ab', `b' and `d' each match the right side of some production. This time we will choose to replace `Ab' by `A', the left side of the production A -> Ab. The string now becomes "aAcde". Then, replacing `d' by `B', the left side of the production B -> d, we obtain "aAcBe". The entire string can now be replaced by S.

    Basically, we are replacing the right side of a production by the left side, the process being called a reduction. Quite easy! Not always. It sometimes so happens that the substring we choose to reduce produces a string which cannot be reduced to the start symbol S.

    A substring that matches the right side of a production, and whose replacement by the left side of that production in the input string leads eventually to the start symbol, is called a `handle'. The process of bottom-up parsing may thus be viewed as one of finding and reducing `handles', the reduction sequence being known as `handle pruning'.

    Stack Implementation of Shift-Reduce Parsing

    A convenient way to implement a shift-reduce parser is to use a stack and an input buffer. Let the `$' symbol mark the bottom of the stack and the right end of the input.

    The main concept is to shift input symbols onto the stack until a handle b is on top of the stack. We then reduce b to the left side of the appropriate production. The parser repeats this cycle until it has detected an error or until the stack contains the start symbol and the input is empty.

    In fact, there are four possible actions that a shift-reduce parser can make; they are:

    1. In a shift action, the next input symbol is shifted to the top of the stack.
    2. In a reduce action, the parser knows that the right end of the handle is at the top of the stack. It must then locate the left end of the handle within the stack and decide with what nonterminal to replace the handle.
    3. In an accept action, the parser announces successful completion of parsing.
    4. In an error action, the parser discovers that a syntax error has occurred and calls an error recovery routine.

    Let us see how these concepts are put into action in the example below.

    Consider the grammar below:

    E -> E + E
    E -> E * E
    E -> (E)
    E -> id
    

    Let the input string be id1 + id2 * id3

    Figure 1
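
    (Figure 1 is an image in the original article. The table below is a reconstruction of the kind of shift-reduce trace it shows, with yacc's default choice of shifting whenever a shift-reduce conflict arises; the step numbers are referred to again later in the text.)

    Step  Stack            Input               Action
    1     $                id1 + id2 * id3$    shift
    2     $ id1            + id2 * id3$        reduce by E -> id
    3     $ E              + id2 * id3$        shift
    4     $ E +            id2 * id3$          shift
    5     $ E + id2        * id3$              reduce by E -> id
    6     $ E + E          * id3$              shift (a reduce by E -> E + E was also possible)
    7     $ E + E *        id3$                shift
    8     $ E + E * id3    $                   reduce by E -> id
    9     $ E + E * E      $                   reduce by E -> E * E
    10    $ E + E          $                   reduce by E -> E + E
    11    $ E              $                   accept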

    Constructing a Parse Tree

    The bottom-up tree construction process has two aspects.

    1. When we shift an input symbol a onto the stack we create a one-node tree labeled a. Both the root and the yield of this tree are a, and the yield truly represents the string of terminals "reduced" (by zero reductions) to symbol a.
    2. When we reduce X1X2...Xn to A, we create a new node labeled A. Its children, from left to right, are the roots of the trees for X1,X2,...,Xn. If, for each i, the tree for Xi has yield xi, then the yield for the new tree is x1x2...xn. This string has in fact been reduced to A by a series of reductions culminating in the present one. As a special case, if we reduce the empty string ε to A, we create a node labeled A with one child labeled ε.

    LR Parsing Algorithm

    Construction of an LALR parser requires a basic understanding of constructing an LR parser. The LR parser gets its name from the fact that it scans the input from left-to-right and constructs a rightmost derivation in reverse.

    A parser generator produces a parsing table for a grammar. The parsing table consists of two parts, a parsing action function ACTION and a goto function GOTO.

    An LR parser has an input, a stack, and a parsing table. The input is read from left to right, one symbol at a time. The stack contains a string of the form s0X1s1...Xmsm where sm is on top. Each Xi is a grammar symbol and each si is a symbol called a state. Each state symbol summarizes the information contained in the stack below it and is used to guide the shift-reduce decision.

    The function ACTION is indexed by sm, the state on top of the stack, and ai, the current input symbol. The entry ACTION[sm, ai] can have one of four values:

    1. shift s
    2. reduce A -> B
    3. accept
    4. error

    The function GOTO takes a state and a grammar symbol as arguments and produces a state. It is somewhat analogous to the transition table of a deterministic finite automaton whose input symbols are the terminals and nonterminals of the grammar.

    A configuration of an LR parser is a pair whose first component is the stack contents and whose second component is the unexpended input:

    (s0 X1 s1 . . . Xm sm, ai ai+1 . . . an$)

    The next move of the parser is determined by reading ai, the current input symbol, and sm, the state on top of the stack, and then consulting the action table entry ACTION[sm, ai]. The four values mentioned above for the action table entry produce four different configurations, as follows:

    1. If ACTION[sm, ai] = shift s, the parser executes a shift move, entering the configuration

      (s0 X1 s1 . . . Xm sm ai s, ai+1 . . . an$)

      Here the parser has shifted the current input symbol ai and the next state s = GOTO[sm, ai] onto the stack; ai+1 becomes the new current input symbol.
    2. If ACTION[sm, ai] = reduce A -> B, then the parser executes a reduce move, entering the configuration

      (s0 X1 s1 . . . Xm-r sm-r A s, ai ai+1 . . . an$)

      where s = GOTO[sm-r, A] and r is the length of B, the right side of the production. Here the parser first popped 2r symbols off the stack (r state symbols and r grammar symbols), exposing state sm-r. The parser then pushed both A, the left side of the production, and s, the entry for GOTO[sm-r, A], onto the stack. The current input symbol is not changed in a reduce move. Specifically, Xm-r+1 . . . Xm, the sequence of grammar symbols popped off the stack, will always match B, the right side of the reducing production.
    3. If ACTION[sm, ai] = accept, parsing is completed.
    4. If ACTION[sm, ai] = error, the parser has discovered an error and calls an error recovery routine.

    The LR parsing algorithm is simple. Initially the LR parser is in the configuration (s0, a1a2...an$) where s0 is a designated initial state and a1a2...an is the string to be parsed. Then the parser executes moves until an accept or error action is encountered.

    I mentioned earlier that the GOTO function is essentially the transition table of a deterministic finite automaton whose input symbols (terminals and nonterminals) and a state, when taken as arguments, produce another state. Hence the GOTO function can be represented by a directed-graph-like scheme, where each node or state is a set of items whose elements are productions in the grammar. These elements comprise the core of the items. The edges, representing the transitions, are labeled with the symbol on which the transition from one state to another is made.

    In the LALR (lookahead-LR) technique, LR items with a common core are coalesced, and the parsing actions are determined on the basis of the new GOTO function generated. The tables obtained are considerably smaller than the LR tables, yet most common syntactic constructs of programming languages can be expressed conveniently by an LALR grammar.

    I am not going deep into construction of tables. Instead, I would like to explain the use of a tool called Yacc for parser generation.

    Calculator Using Yacc

    Input to Yacc can be divided into three sections:

    1. the definitions section, which consists of token declarations and C code bracketed by "%{" and "%}"
    2. the BNF grammar in the rules section
    3. and user subroutines in the subroutines section.

    We shall illustrate that by designing a small calculator that can add and subtract numbers. Let us start with the lex input file, which supplies the parser with tokens:

    /* File name - calc1.l*/
    %{ 
    	#include "y.tab.h"
    	#include <stdlib.h>
    	void yyerror(char *);
    %}
    
    %%
    
    [0-9]+	{
    		yylval = atoi(yytext);
    		return INTEGER;
    	}
    
    [-+\n]	{
    		return *yytext;
    	}
    
    [ \t]	;	/*skip whitespace*/
    
    .	yyerror("Unknown character");
    
    %%
    
    int yywrap(void) {
    	return 1;
    }
    

    When run with the -d option, Yacc generates a parser in the file y.tab.c, alongside which another file, y.tab.h, is also generated. Lex includes this file and utilizes the definitions for the token values. Lex returns the value associated with each token in the variable yylval. To get tokens, yacc calls yylex, whose return value is an integer.

    The yacc input specification is given below:

    /* file name calc1.y */
    %{
        int yylex(void);
        void yyerror(char *);
    %}
    
    %token INTEGER
    
    %%
    
    program:
            program expr '\n'         { printf("%d\n", $2); }
            |
            ;
    
    expr:
            INTEGER
            | expr '+' expr           { $$ = $1 + $3; }
            | expr '-' expr           { $$ = $1 - $3; }
            ;
    
    %%
    
    void yyerror(char *s) {
        fprintf(stderr, "%s\n", s);
    }
    
    int main(void) {
        yyparse();
        return 0;
    }
    

    Here, the grammar is specified using productions. The left hand side of a production is a nonterminal, followed by a colon and then the right hand side of the production. The contents of the braces show the action associated with the production. So what do the rules say?

    They say that a program consists of zero or more expressions, each terminated by a newline. When a newline is detected, we print the value of the expression.

    Now execute yacc as shown:

    yacc -d calc1.y
    

    You get a message "shift/reduce conflict". A shift/reduce conflict arises when the grammar is ambiguous and there is a possibility of more than one derivation tree. To understand this, consider the trace given in the stack implementation of shift-reduce parsing above. In step 6, instead of shifting, we could have reduced as per the grammar; then addition would have had higher precedence than multiplication.

    Before proceeding, you must know about another kind of conflict, the reduce-reduce conflict. This arises when there is more than one option for reducing a stack symbol. For example, in the grammar below, id can be reduced to T or E.

    E -> T
    E -> id
    T -> id
    

    Yacc takes a default action when conflicts arise: when there is a shift-reduce conflict, yacc will shift, and when there is a reduce-reduce conflict, it will use the first rule in the listing. Yacc also issues a warning message when conflicts arise. Warnings can be eliminated by making the grammar unambiguous.

    Coming back: yacc produces two files, y.tab.c and y.tab.h. Some lines to notice in y.tab.h are:

    #ifndef YYSTYPE
    typedef int YYSTYPE;
    #endif
    #define INTEGER 257
    extern YYSTYPE yylval;
    

    Internally, yacc maintains two stacks in memory: a parse stack and a value stack. The current parsing state is determined by the terminals and/or non-terminals that are present in the parse stack. The value stack is always synchronized and holds an array of YYSTYPE elements, which associates a value with each element in the parse stack. So, for example, when lex returns an INTEGER token, yacc shifts this token to the parse stack. At the same time, the corresponding yylval is shifted to the value stack. This makes it easy to find the value of a token at any given time.

    So when we apply the rule

    expr: expr '+' expr	{ $$ = $1 + $3; }
    

    we pop "expr '+' expr" and replace it by "expr". In other words we replace the right hand side of a production by left hand side of the same production. Here we pop three terms off the stack and push back one term. The value stack will contain "$1" for the first term on the right-hand side of the production, "$2" for the second and so on. "$$" designates the top of the stack after reduction has taken place. The above action adds the values associated with two expressions, pops three terms off the value stack, and pushes back a single sum. Thus the two stacks remain synchronized and when a newline is encountered, the value associated with expr is printed.

    The last function that we need is a 'main' - it is already included at the end of the specification above. The grammar is ambiguous, so yacc will issue shift-reduce warnings and will process the grammar using shift as the default operation.
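
    With that in place, one possible build-and-run sequence is the following (file names as in the listings above; on some systems the tools are flex and bison, the latter invoked as `bison -y -d'):

    lex calc1.l
    yacc -d calc1.y
    gcc y.tab.c lex.yy.c -o calc1
    ./calc1
    3 + 4 - 2
    5

    The last two lines are a sample session: you type an expression, and the parser prints its value.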

    There is more to learn, and I shall come up with that in the next part. I shall also explain how to remove the ambiguity from the grammar and then design the calculator for it. In fact, some more functionality shall be incorporated into the grammar, to give a better understanding.

     

    [BIO] I am a final year student of Computer Science at Government Engineering College, Trichur, Kerala, India. Apart from Linux I enjoy reading books on theoretical physics.


    Copyright © 2003, Hiran Ramankutty. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    I Built a Custom Debian CD from Knoppix
    By Sunil Thomas Thonikuzhiyil

    Introduction

    Knoppix is a live CD distribution based on Debian GNU/Linux. It contains a large number of applications which can come in handy even on minimal hardware. Knoppix supports a large number of graphics cards, sound cards, SCSI and USB devices. It can be used as a Linux demo, an educational CD, a rescue system, or adapted and used as a platform for a commercial product demo. It is not necessary to install anything to the hard disk, but there is an installation program which can install the entire CD to a hard disk if you like. This means that you can have a full-fledged Debian installation in 20 minutes. This document describes how I built a custom live CD from Knoppix. My primary motivation in building this CD was to include some of my favorite applications which are missing from the stock Knoppix CD. If you find any errors in this document please drop me a mail here.

    Requirements

    a) Software

    To start with, you have to download a Knoppix image from the Knoppix site. There is a release every one or two weeks. There are 2 images: German and English. I did my setup based on the 31-10-2002-EN release.

      If you already have an image you can try to rsync it to the most recent version as below. However, don't expect much bandwidth saving, since the Knoppix image is compressed. (If you have a Knoppix CDROM, create an image with #dd if=/dev/cdrom of=knoppix.iso)

    Rename the Knoppix iso image to reflect the current release name. For example:
      I have downloaded KNOPPIX_V3.1-23-10-2002-EN.iso
      I want to update it to KNOPPIX_V3.1-31-10-2002-EN.iso
      Rename KNOPPIX_V3.1-23-10-2002-EN.iso to KNOPPIX_V3.1-31-10-2002-EN.iso
     Then
     rsync -P --stats ftp.leo.org::Knoppix/KNOPPIX_V3.1-31-10-2002-EN.iso  .
    (You can use any other Knoppix rsync site. Always check the site for the latest release.)

    b) Hardware

    A computer with tons of free hard disk space and memory. I did this on a Pentium 3 950MHz machine with 128 MB RAM.


    Initial setup

    Make a lot of disk space free. You need a lot of real estate for re-mastering the KNOPPIX CD.
    I made two fresh partitions on my 20 GB hard disk:
            hda2 with 2 GB for swap
            hda3 with 5 GB for re-mastering work (you can also use an existing Linux partition if it has sufficient free space)

    Now boot the machine with the Knoppix CD. (You can also do the re-mastering after a hard disk install of Knoppix. A howto for hard disk install can be found here.)

    At the boot prompt press enter. Knoppix now boots into the GUI. The default is KDE; you can change it at the boot prompt if you want. I did the re-mastering while booted into KDE. It is assumed that you are somewhat familiar with Knoppix. Read the Knoppix cheat codes on the CD for more information on booting.

    a) Configure networking from the KDE menu

      Click on
    K/Knoppix/network-Internet/Network-card-configuration

    I am connected to a LAN, and I configured the IP address, netmask, name server and gateway. This step is very important, since you have to get the custom stuff to be installed from elsewhere.

    b) Setup partitions

    Open a root shell from the KDE menu (K/Knoppix/Root-shell). You will get a # prompt.

    Run cfdisk

    Next you have to make the necessary partitions. I created two partitions: hda2 with 2 GB and hda3 with 5 GB.

    Make the 2 GB partition's type swap (hda2 in my case)

    Make the 5 GB partition's type Linux native (ext2) (hda3 here)
    Save the modified partition information

    Quit cfdisk

    For creating the compressed file system we need a lot of swap space. I created the swap with:
     # mkswap /dev/hda2
     # swapon /dev/hda2               

    (It is also possible to use a swap file.)

    Create an ext2 file system on the 5GB partition
      # mke2fs /dev/hda3

    Mount the 5GB  partition to the Knoppix file system
    # mount /dev/hda3 /mnt/hda3

    The basic setup for re-mastering is ready


    Installing and Removing Software

    The Knoppix CD is organized somewhat like the figure below. (Correct me if I am wrong; it may look different when you look at it from Windows or another Linux distro.)

         
    /--demos
    |--talks
    |--index.html
    |--autorun.bat
    |--autorun.inf
    |--knoppix.ico
    |--KNOPPIX
          |--KNOPPIX
          |--boot.img
          |--background.gif
          |- (Some more files here)
    

    The file KNOPPIX in the /KNOPPIX directory on the CD is approximately 700MB. The file contains a compressed image of the file system. We have to modify this file alone, and can leave the rest of the CD intact (unless you want to modify the boot image, startup files, etc.).

    a) Copy Knoppix file system to hard disk

    When the Knoppix CD is booted, the compressed image file is mounted at /KNOPPIX. You have to copy it to your target partition. I did a
    # cp -Rp  /KNOPPIX    /mnt/hda3/ 
    (-R is for recursive copying, -p preserves ownership, time stamps, etc.) This places all the files you need to make a custom CD on your hard disk, in the /mnt/hda3/KNOPPIX/ directory. Take a look at it.

    b) Chroot

    You have to install/uninstall software under this tree. (If you don't have networking, copy your sources to (say) /mnt/hda3/KNOPPIX/root/, and if you have debs copy them to /mnt/hda3/KNOPPIX/var/cache/apt/archives.)
    Now we are going to change the root of the file system to /mnt/hda3/KNOPPIX:
    #chroot /mnt/hda3/KNOPPIX

    You will get back a # prompt. (If you get a /dev/null permission denied message here, just press control-C.)
    You are now at / (chrooted to /mnt/hda3/KNOPPIX).

    Next mount the proc file system:
    #mount -t proc proc /proc

    c) Setup networking

    Add the following to /etc/resolv.conf:
    nameserver ip-of-your-nameserver

    (I had a curious problem: /etc/resolv.conf was a symlink to /etc/dhcp/resolv.conf, and ping did not work. I removed the symlink and created /etc/resolv.conf afresh, and it worked. Make sure that you restore the symlink once you are finished.)

    Verify your IP address now with ifconfig. (It should be the same as what you have outside the chroot.) Then try ping google.com. If you can ping google.com, your network setup is OK under chroot. Do an apt-get update.

    d) Install/Uninstall

    You can install/uninstall whatever software you need using apt. Since the original CD has a lot of software installed, it may not be an easy task. The following is a partial list of packages I removed.

     Games

    falconseye-data
    rocks-n-diamonds
    amor
    nethack-x11
    gnome-games-locale
    xboard
    gnocatan-client
    imaze
    kmahjongg
    gnome-gnibbles
    freeciv-gtk
    ktuberling
    gnocatan-help
    ksirtet
    gnome-gnobots2
    jumpnbump
    ksnake
    xgalaga
    lskat
    katomic
    kshisen
    konquest
    chromium
    ktux
    moon-buggy
    kmoon
    ksame
    gnuchess
    ktron
    frozen-bubble
    kjumpingcube
    fortune-mod
    kodo
    gnocatan-ai
    gnocatan-server-console
    gnocatan-server-data
    nethack
    821
    fortunes
    searchandrescue
    xbill
    kspaceduel
    libkdegames
    tipptrainer-data-de
    xconq
    gcompris
    gnome-chess
    tuxracer-data
    abuse-frabs
    gnome-gnotski
    frotz
    kblackbox
    gnome-games
    gnome-gtali
    gnome-iagno
    gnome-stones
    gnocatan-server-gtk
    lxdoom-x11
    maelstrom
    kabalone
    gnome-gnotravex
    fortunes-min
    chromium-data
    kdegames
    pingus-data
    task-kde-games
    stax
    gnome-card-games
    xtris
    xtux
    kjezz
    lxdoom

    Non-free

    x3270
    xanim
    festlex-oald
    netscape-java-477
    j2re1.3
    3270-common
    tgif
    giflib-bin
    frotz
    xfractint
    giflib3g
    communicator-smotif-477
    netscape-base-477
    maelstrom
    communicator-base-477
    gimp1.2-nonfree
    acroread
    lha
    unarj
    xsnow

    Misc  

    tetex-base
    tetex-extra
    j2re1.3  
    lyx
    acroread
    qcad
    rocks-n-diamonds 
    kde-i18n-da
    kde-i18n-it
    kde-i18n-de
    kde-i18n-fr
    kde-i18n-ru
    kde-i18n-nl
    kde-i18n-ja
    kde-i18n-es 
    kde-i18n-cs
    kde-i18n-pl
    kde-i18n-tr
    xfonts-intl-chinese
    kword
    kpresenter
    abiword-gtk
    karbon
    kchart
    kformula
    kivio
    koffice-libs
    kontour
    koshell
    kspread

    I copied the above list to a file (say, kicklist), then did
       #dpkg -P `cat kicklist`
    It removed all the packages listed. (Notice the back quotes above.)

    If you are looking for big installed packages,
          #  dpkg-awk "Status: .* installed$" -- Package Installed-Size | awk '{print $2}' | egrep -v '^$' | xargs -n2 echo | perl -pe 's/(\S+)\s(\S+)/$2 $1/' | sort -rg
    will list the packages with their sizes, in descending order.

    Finally, run deborphan to check if there are any orphaned packages:
    #deborphan > /tmp/orphanlist
    #dpkg -P `cat /tmp/orphanlist`
    # rm /tmp/orphanlist 

    An alternate method is to use synaptic and add/remove packages from a GUI. Synaptic is a good graphical front end to apt.

    For this, do
    # apt-get install synaptic
    You have to export the DISPLAY environment variable for synaptic to work properly:
    #DISPLAY=myip:0.0    (replace myip with your actual IP)
    #export DISPLAY
    #synaptic

    It will start synaptic
    Enjoy apt through synaptic

    Once you are finished with synaptic you can re-master the CD. (If you are working from a hard disk install of Knoppix and want synaptic to work, look in /etc/X11/xinit/xserverrc and see that -nolisten tcp is removed. Also do xhost + as a non-root user.)

    Unmount proc (this is very important):
     #umount /proc

    Press control-D to leave the chrooted environment.

    Further Customization  

    1) Installing applications compiled from source

    Download the software source inside the chroot environment. Compile and install as usual. If it is an X11 application, export DISPLAY before you test.
    I use checkinstall (asic-linux.com.mx/~izto/checkinstall/) to install and maintain home-brew debs.
    Remember to remove the sources once you are finished (they will take up space on your CD).

    2)  Changing user settings    

    It is possible to set passwords for users. Just set them under the chrooted environment.

    3) Changing backgrounds

    /usr/local/lib/knoppix.gif is the default background image in X

    4) Modifying Boot Screen  

    The Knoppix CD uses syslinux to boot. If you want to change the boot screen/messages, do the following. Make a temporary directory on your hard disk (I did mkdir /mnt/hda3/image).
    Copy the boot.img file from the KNOPPIX directory of your Knoppix CD:
          #cp /KNOPPIX/boot.img /mnt/hda3
    Mount the image as follows:
         # mount -t msdos -o loop /mnt/hda3/boot.img /mnt/hda3/image
    Now look in the image directory you created. There are a number of interesting files in this directory.

    a) Boot logo
    logo.16 is the image displayed on the boot screen. It is encoded in a special format. To replace it, grab a 640x400, 16-color image. I downloaded an image from gnu.org. Convert the image to a png file (call it logo.png), then:

           #pngtopnm <logo.png >logo.pnm
          #ppmtolss16 <logo.pnm >logo.16
        # cp logo.16 /mnt/hda3/image/logo.16

    (Keep the size of the final logo.16 around 50k.) Unmount the image directory. Copy the boot.img to a floppy:
       #dd if=boot.img of=/dev/fd0
    Boot the machine from the floppy you have made. If it boots up properly, you are done.

    b) syslinux.cfg
    By modifying syslinux.cfg you can change a number of parameters passed on to the kernel. Read the man pages of syslinux for more details.

    5) Modifying the kernel (**** Untested ****)

    Make a new custom kernel using kernel-package. Keep the kernel size small. Copy the kernel and modules to the boot.img file. Replace /lib/modules/2.4.19-xfs with the modules of your new kernel.
    Replace the files in /boot

    6) Changing the default GUI to Gnome/icewm
    Changing the default GUI to something else is quite easy.
    Under the chrooted environment, open the file
         /etc/init.d/knoppix-autoconfig

      Look for the following lines
    ---------------------------------------

    #Also read desired desktop, if any

    DESKTOP="$(getbootparam desktop 2>/dev/null)"

    # Allow only supported windowmanagers

    case "$DESKTOP" in gnome|kde|larswm|xfce|windowmaker|wmaker|icewm|fluxbox|twm) ;; *)

      DESKTOP="KDE"; ;; esac
    --------      ^^   --------------------------------------

    Change the KDE above to gnome, and that is all.

    7) Remove any temporary files
       a) Look in /root for hidden files such as .bash_history and .viminfo
       b) Nuke all deb files in /var/cache/apt/archives
       c) Run the knoppix.clean script (be careful, and run it only from the chrooted environment)

    (* link to the script goes here*)

    Re-mastering the CD

    a) Make an ISO image

    1) Make a new directory on /mnt/hda3. I called it NewCd.
       Copy everything except the compressed image file (KNOPPIX) from the Knoppix CD (look at /cdrom). You can safely delete the directories demos and talks.
    2) Create the compressed image:
     #mkisofs -R /mnt/hda3/KNOPPIX | create_compressed_fs - 65536 > /mnt/hda3/NewCd/KNOPPIX/KNOPPIX

    3) Recreate the bootable CD:
     #cd /mnt/hda3

     #mkisofs -r -J -b KNOPPIX/boot.img -c KNOPPIX/boot.cat -o myknoppix.iso NewCd

    b)Testing the image

    Create a boot floppy
    # dd if=/mnt/hda3/KNOPPIX/boot.img of=/dev/fd0 
    Copy the compressed file you created to a directory /KNOPPIX on any partition. The boot floppy will look for /KNOPPIX/KNOPPIX on the hard disk partitions. This makes your testing easy. Once you are satisfied with your image, burn it to a CD.

    FAQ

    1) How do I stop Konqueror at startup?
     To stop Konqueror you have to modify
           /etc/X11/Xsession.d/45xsession
        Look for the following lines

    -------------------------------------------
    if [ -e "$INDEXFILE" ]; then
    cat >> $HOME/Desktop/KNOPPIX.desktop <<EOF
    [Desktop Entry]
    Name=KNOPPIX
    Exec=kfmclient openProfile webbrowsing $INDEXFILE
    Type=Application
    Icon=html
    Terminal=0
    EOF
    ln $HOME/Desktop/KNOPPIX.desktop $HOME/.kde/Autostart/showindex.desktop
    fi

    ----------------------------------------- 

    It makes an autostart file. Comment it out.

    2) I have booted the Knoppix CD and mounted a hard disk. How do I copy something via scp to the hard disk?
          Open a shell
          Set a password for user knoppix
          Start ssh (/etc/init.d/ssh start)
          Then copy with scp

    3) I am at a $ prompt and I want to su.
          Do sudo passwd
          Set a root password
          Then su

    4) The default text mode boots up in a frame buffer and the characters are very small. How do I fix it?
         Mount boot.img
         Look for syslinux.cfg
         Under the default vmlinuz entry,
         change vga=791 to vga=normal

    5) My keyboard layout is German. How do I change it to English?
     Open the KDE control center, select System -> Keyboard, and change it to US English.

    References

    I have adapted a lot of material from the following links. Also, #knoppix on irc.freenode.net is a good source of information.
    1) Tech2k home page
    Ken Burk helped me a lot on IRC to improve this document. His site has excellent information which you can always rely on. His kix remastering page is also very good.
    2) Knoppix.net
    The unofficial Knoppix site is a great source of information. Lots of new stuff regarding re-mastering appears there regularly.
    3) Knoppix forum at Linuxtag
    This site is a mixture of German and English. A very good source on Knoppix.

     

    [BIO] I work as an information technology consultant at the Kerala Legislative Assembly, Trivandrum, India. I have been hooked on Linux since 1996. I have a Masters in Computer Science from Cochin University. I am interested in all sorts of operating systems. In my free time I love to listen to Indian classical music.


    Copyright © 2003, Sunil Thomas Thonikuzhiyil. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    Encryption using OpenSSL's crypto libraries
    By Vinayak Hegde

    Motivation for the article

    Linux has already made quite a few inroads into the corporate world. One of the persistent demands of the corporate world has been a need for better data security. This is where encryption comes in, to hide sensitive data from a third party intruder. Open-source software has a reputation for secure programming. This article is another step in that direction.

    OpenSSL's libcrypto is a really good library if you want to use encryption without bothering with the details of the underlying implementation of the algorithm. The problem is that the documentation is really minimal. You can obviously read the source and figure out what's going on; the fact that the function names are intuitive also helps to some extent. Another way of getting help is joining the various mailing lists on the OpenSSL website. The command line tools of OpenSSL, however, are pretty well documented and easy to use. I shall explain in this article how to use the blowfish algorithm for encryption, using OpenSSL's crypto libraries.

    Some Background Information

    During the early days of cryptography, algorithms as well as keys were secret. That trend has now changed: algorithms are publicly known and keys are kept secret. The best example of this is the RSA algorithm, which is widely known and implemented. The public keys are known to the world, but the private keys are kept secret. RSA is an asymmetric algorithm, as it does not use the same key for encryption and decryption. Also, it is generally not advisable to use RSA for encrypting large amounts of data, as it is computationally intensive.

    For encrypting large amounts of data, less computationally intensive algorithms are generally preferred. In this article we use the blowfish algorithm for encrypting and decrypting data. Blowfish is a symmetric algorithm, which means it uses the same key for encryption and decryption. It was designed by the famous cryptographer Bruce Schneier, and is a fast algorithm for encryption/decryption.

    Generating the key

    For the purposes of demonstration we shall use a 128-bit key, stored as a character array in the program. We also generate a 64-bit initialization vector (IV). For our program we will use Cipher Block Chaining (CBC) mode. Also, we will not use the blowfish functions directly, but will use them through the higher-level EVP interface.

    An initialization vector is a bit of random information that is used as an input in chained encryption algorithms, that is, when each stage of encrypting a block of input data provides some input to the encryption of the next block (blowfish uses 64-bit blocks for encryption). The IV provides the first bit of input for encrypting the 1st block of data, which then provides input for the 2nd block, and so on. The bit left over at the end is discarded.

    The random bits are generated from the character special file /dev/random, which provides a good source of random numbers. (Note that reads from /dev/random can block until the kernel has gathered enough entropy.) See the manpage for more information.

    
    int
    generate_key ()
    {
    	int i, fd;	/* key[16] and iv[8] are global arrays in the full program */
    	if ((fd = open ("/dev/random", O_RDONLY)) == -1)
    		perror ("open error");
    
    	if ((read (fd, key, 16)) == -1)
    		perror ("read key error");
    
    	if ((read (fd, iv, 8)) == -1)
    		perror ("read iv error");
    	
    	printf("128 bit key:\n");
    	for (i = 0; i < 16; i++)
    		printf ("%d \t", key[i]);
    	printf ("\n ------ \n");
    
    	printf("Initialization vector\n");
    	for (i = 0; i < 8; i++)
    		printf ("%d \t", iv[i]);
    
    	printf ("\n ------ \n");
    	close (fd);
    	return 0;
    }
    
    

    The Encryption routine

    The encryption routine takes two parameters - the file descriptors of the input file and of the output file to which the encrypted data is to be saved. It is always a good idea to zero-fill your buffers using the memset or bzero functions before using them with data. This is especially important if you plan to reuse the buffers. In the program below, the input data is encrypted in blocks of 1K each.

    The steps for encryption are as follows:

    1. Create a cipher context
    2. Initialize the cipher context with the values of the key and IV
    3. Call EVP_EncryptUpdate to encrypt successive blocks of 1K each
    4. Call EVP_EncryptFinal to encrypt the "leftover" data
    5. Finally, call EVP_CIPHER_CTX_cleanup to discard all sensitive information from memory

    You may be wondering what "leftover" data is. As mentioned earlier, Blowfish encrypts information in blocks of 64 bits each. Sometimes there are not enough bytes left over to make up a complete block; this happens whenever the buffer size in the program below, or the file/input data size, is not an integral multiple of 8 bytes (64 bits). The remaining data is therefore padded, and the resulting partial block is encrypted by EVP_EncryptFinal. Note that padding is always added, even to a complete chunk: a full 1024-byte chunk grows to 1032 encrypted bytes, which is why the output buffer must be one block larger than the input buffer. The length of this final encoded block is stored in the variable tlen and added to the final length.

    
    int
    encrypt (int infd, int outfd)
    {
    	unsigned char outbuf[OP_SIZE];	/* OP_SIZE >= IP_SIZE + 8 */
    	unsigned char inbuff[IP_SIZE];
    	int olen, tlen, n;
    	EVP_CIPHER_CTX ctx;

    	EVP_CIPHER_CTX_init (&ctx);
    	EVP_EncryptInit (&ctx, EVP_bf_cbc (), key, iv);

    	for (;;)
    	  {
    		  bzero (inbuff, IP_SIZE);

    		  if ((n = read (infd, inbuff, IP_SIZE)) == -1)
    		    {
    			    perror ("read error");
    			    break;
    		    }
    		  else if (n == 0)	/* end of input */
    			  break;

    		  /* encrypt this chunk of up to IP_SIZE bytes */
    		  if (EVP_EncryptUpdate (&ctx, outbuf, &olen, inbuff, n) != 1)
    		    {
    			    printf ("error in encrypt update\n");
    			    EVP_CIPHER_CTX_cleanup (&ctx);
    			    return 0;
    		    }

    		  /* pad and flush; each chunk is finalized (and hence
    		     padded) separately, and the decryption routine
    		     below mirrors this chunking */
    		  if (EVP_EncryptFinal (&ctx, outbuf + olen, &tlen) != 1)
    		    {
    			    printf ("error in encrypt final\n");
    			    EVP_CIPHER_CTX_cleanup (&ctx);
    			    return 0;
    		    }
    		  olen += tlen;

    		  if ((n = write (outfd, outbuf, olen)) == -1)
    			  perror ("write error");
    	  }
    	EVP_CIPHER_CTX_cleanup (&ctx);
    	return 1;
    }
    

    The Decryption routine

    The decryption routine basically follows the same steps as the encryption routine. The following code shows how the decryption is done.
     
    
    int
    decrypt (int infd, int outfd)
    {
    	unsigned char outbuf[IP_SIZE];
    	unsigned char inbuff[OP_SIZE];
    	int olen, tlen, n;
    	EVP_CIPHER_CTX ctx;

    	EVP_CIPHER_CTX_init (&ctx);
    	EVP_DecryptInit (&ctx, EVP_bf_cbc (), key, iv);

    	for (;;)
    	  {
    		  bzero (inbuff, OP_SIZE);
    		  if ((n = read (infd, inbuff, OP_SIZE)) == -1)
    		    {
    			    perror ("read error");
    			    break;
    		    }
    		  else if (n == 0)	/* end of input */
    			  break;

    		  bzero (outbuf, IP_SIZE);

    		  /* decrypt one OP_SIZE chunk, matching the chunking
    		     done by the encryption routine */
    		  if (EVP_DecryptUpdate (&ctx, outbuf, &olen, inbuff, n) != 1)
    		    {
    			    printf ("error in decrypt update\n");
    			    EVP_CIPHER_CTX_cleanup (&ctx);
    			    return 0;
    		    }

    		  /* strip the padding from this chunk */
    		  if (EVP_DecryptFinal (&ctx, outbuf + olen, &tlen) != 1)
    		    {
    			    printf ("error in decrypt final\n");
    			    EVP_CIPHER_CTX_cleanup (&ctx);
    			    return 0;
    		    }
    		  olen += tlen;

    		  if ((n = write (outfd, outbuf, olen)) == -1)
    			  perror ("write error");
    	  }

    	EVP_CIPHER_CTX_cleanup (&ctx);
    	return 1;
    }
    

    The complete code

    A minimal interactive program implementing the above routines can be downloaded from here. The command for compiling the program is:
    # gcc -o blowfish sym_funcs.c -lcrypto
    
    The program takes three file names from the command line:

    1. File to be encrypted
    2. File in which the encrypted data is to be stored
    3. File in which the decrypted data is to be stored
    Don't forget to generate a key before encrypting ;). An example invocation is shown below.
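
    Assuming the compile command above, a run might look like this (the file names are purely illustrative):

    	# ./blowfish plain.txt cipher.bin decrypted.txt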

    An Example Application - A Secure Instant Messenger

    Consider an instant-messenger (IM) client which wants to communicate with another IM client securely. The following approach could be used.

    1. Each IM client has its own public and private key.
    2. Each IM client has the public keys of all the clients it wants to communicate with (say, its friends' clients).
    3. The client which initiates the connection generates a session key. This session key is used for encrypting the messages between the two clients.
    4. The session key is encrypted and exchanged between the two (or more) clients using public-key encryption (e.g. the RSA algorithm). Thus authentication is also taken care of.
    5. The exchange of data encrypted with Blowfish then takes place between the clients after this "security handshake". A sketch of this pattern follows the list.
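
    For the curious, OpenSSL's EVP "envelope" routines implement essentially steps 3 and 4 above in a single call: EVP_SealInit generates a random session key, encrypts it under the recipient's public key, and initializes the symmetric cipher with it. Below is a minimal sketch, not part of the downloadable program. It assumes pubkey has been loaded elsewhere (e.g. with PEM_read_PUBKEY), that the caller has allocated ek with EVP_PKEY_size(pubkey) bytes and iv with EVP_MAX_IV_LENGTH bytes, and that out has room for msglen plus one cipher block:

    	#include <openssl/evp.h>
    	#include <openssl/pem.h>

    	/* Envelope-encrypt msg for a single recipient.  The
    	   RSA-encrypted session key (ek, *ekl) and the IV must be
    	   sent along with the ciphertext so that the peer can
    	   recover the message with its private key. */
    	int
    	seal_message (EVP_PKEY *pubkey, unsigned char *msg, int msglen,
    	              unsigned char *out, int *outlen,
    	              unsigned char *ek, int *ekl, unsigned char *iv)
    	{
    		EVP_CIPHER_CTX ctx;
    		int len, tlen;

    		EVP_CIPHER_CTX_init (&ctx);

    		/* generate session key + IV; encrypt the session key
    		   under the recipient's public key */
    		if (EVP_SealInit (&ctx, EVP_bf_cbc (), &ek, ekl, iv,
    		                  &pubkey, 1) <= 0)
    			return 0;

    		/* bulk-encrypt the message with Blowfish in CBC mode */
    		EVP_SealUpdate (&ctx, out, &len, msg, msglen);
    		EVP_SealFinal (&ctx, out + len, &tlen);
    		*outlen = len + tlen;

    		EVP_CIPHER_CTX_cleanup (&ctx);
    		return 1;
    	}

    The receiving client would use the matching EVP_OpenInit/EVP_OpenUpdate/EVP_OpenFinal calls with its private key to recover the session key and then the plaintext.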

    Resources

    1. OpenSSL Homepage
    2. The Blowfish Algorithm
    3. Handbook of Applied Cryptography

     

    [BIO] My life changed when I discovered Linux. Suddenly computers became interesting, as I could try out lots of stuff on my Linux box thanks to the easy availability of source code. My interests are predominantly in the fields of networking, embedded systems and programming languages. I currently work for Aparna Web Services, where we make Linux accessible for academia and corporations by configuring remote-boot stations (thin clients).


    Copyright © 2003, Vinayak Hegde. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003

    LINUX GAZETTE
    ...making Linux just a little more fun!
    The Back Page


    World of Spam


    The Nigerian scams riled up one person enough to write a parody spam.

    X-Spam-Status: No, hits=4.3 required=5.0
    	tests=BILLION_DOLLARS,DEAR_SOMETHING,IN_REP_TO,ITS_LEGAL,
    	      LINES_OF_YELLING,LINES_OF_YELLING_2,LINES_OF_YELLING_3,
    	      NIGERIAN_TRANSACTION_1,NIGERIAN_TRANSACTION_2,REFERENCES,
    	      SIGNATURE_SHORT_DENSE,SPAM_PHRASE_05_08,SUPERLONG_LINE,
    	      UPPERCASE_75_100,USER_AGENT,USER_AGENT_MUTT,US_DOLLARS_3
    	version=2.43
    X-Spam-Level: ****
    
    HIGHLY CONFIDENTIAL 
    
    FROM: GEORGE WALKER BUSH 
    DEAR SIR / MADAM, 
    
    I AM GEORGE WALKER BUSH, SON OF THE FORMER PRESIDENT OF THE UNITED STATES OF
    AMERICA GEORGE HERBERT WALKER BUSH, AND CURRENTLY SERVING AS PRESIDENT OF THE
    UNITED STATES OF AMERICA. THIS LETTER MIGHT SURPRISE YOU BECAUSE WE HAVE NOT
    MET NEITHER IN PERSON NOR BY CORRESPONDENCE. I CAME TO KNOW OF YOU IN MY SEARCH
    FOR A RELIABLE AND REPUTABLE PERSON TO HANDLE A VERY CONFIDENTIAL BUSINESS
    TRANSACTION, WHICH INVOLVES THE TRANSFER OF A HUGE SUM OF MONEY TO AN ACCOUNT
    REQUIRING MAXIMUM CONFIDENCE. 
    
    I AM WRITING YOU IN ABSOLUTE CONFIDENCE PRIMARILY TO SEEK YOUR ASSISTANCE IN
    ACQUIRING OIL FUNDS THAT ARE PRESENTLY TRAPPED IN THE REPUBLIC OF IRAQ. MY
    PARTNERS AND I SOLICIT YOUR ASSISTANCE IN COMPLETING A TRANSACTION BEGUN BY MY
    FATHER, WHO HAS LONG BEEN ACTIVELY ENGAGED IN THE EXTRACTION OF PETROLEUM IN
    THE UNITED STATES OF AMERICA, AND BRAVELY SERVED HIS COUNTRY AS DIRECTOR OF THE
    UNITED STATES CENTRAL INTELLIGENCE AGENCY. 
    
    IN THE DECADE OF THE NINETEEN-EIGHTIES, MY FATHER, THEN VICE-PRESIDENT OF THE
    UNITED STATES OF AMERICA, SOUGHT TO WORK WITH THE GOOD OFFICES OF THE PRESIDENT
    OF THE REPUBLIC OF IRAQ TO REGAIN LOST OIL REVENUE SOURCES IN THE NEIGHBORING
    ISLAMIC REPUBLIC OF IRAN. THIS UNSUCCESSFUL VENTURE WAS SOON FOLLOWED BY A
    FALLING OUT WITH HIS IRAQI PARTNER, WHO SOUGHT TO ACQUIRE ADDITIONAL OIL
    REVENUE SOURCES IN THE NEIGHBORING EMIRATE OF KUWAIT, A WHOLLY-OWNED
    U.S.-BRITISH SUBSIDIARY. 
    
    MY FATHER RE-SECURED THE PETROLEUM ASSETS OF KUWAIT IN 1991 AT A COST OF
    SIXTY-ONE BILLION U.S. DOLLARS ($61,000,000,000). OUT OF THAT COST, 
    THIRTY-SIX BILLION DOLLARS ($36,000,000,000) WERE SUPPLIED BY HIS PARTNERS IN
    THE KINGDOM OF SAUDI ARABIA AND OTHER PERSIAN GULF MONARCHIES, AND SIXTEEN
    BILLION DOLLARS ($16,000,000,000) BY GERMAN AND JAPANESE PARTNERS. 
    
    BUT MY FATHER'S FORMER IRAQI BUSINESS PARTNER REMAINED IN CONTROL OF THE
    REPUBLIC OF IRAQ AND ITS PETROLEUM RESERVES. 
    
    MY FAMILY IS CALLING FOR YOUR URGENT ASSISTANCE IN FUNDING THE REMOVAL OF THE
    PRESIDENT OF THE REPUBLIC OF IRAQ AND ACQUIRING THE PETROLEUM ASSETS OF HIS
    COUNTRY, AS COMPENSATION FOR THE COSTS OF REMOVING HIM FROM POWER. 
    
    UNFORTUNATELY, OUR PARTNERS FROM 1991 ARE NOT WILLING TO SHOULDER THE BURDEN OF
    THIS NEW VENTURE, WHICH IN ITS UPCOMING PHASE MAY COST THE SUM OF 100 BILLION
    TO 200 BILLION DOLLARS ($100,000,000,000 - $200,000,000,000), BOTH IN THE
    INITIAL ACQUISITION AND IN LONG-TERM MANAGEMENT. 
    
    WITHOUT THE FUNDS FROM OUR 1991 PARTNERS, WE WOULD NOT BE ABLE TO ACQUIRE THE
    OIL REVENUE TRAPPED WITHIN IRAQ. THAT IS WHY MY FAMILY AND OUR COLLEAGUES ARE
    URGENTLY SEEKING YOUR GRACIOUS ASSISTANCE. OUR DISTINGUISHED COLLEAGUES IN THIS
    BUSINESS TRANSACTION INCLUDE THE SITTING VICE-PRESIDENT OF THE UNITED STATES OF
    AMERICA, RICHARD CHENEY, WHO IS AN ORIGINAL PARTNER IN THE IRAQ VENTURE AND
    FORMER HEAD OF THE ALLIBURTON OIL COMPANY, AND CONDOLEEZA RICE, WHOSE
    PROFESSIONAL DEDICATION TO THE VENTURE WAS DEMONSTRATED IN THE NAMING OF A
    CHEVRON OIL TANKER AFTER HER. 
    
    I WOULD BESEECH YOU TO TRANSFER A SUM EQUALING TEN TO TWENTY-FIVE PERCENT
    (10-25 %) OF YOUR YEARLY INCOME TO OUR ACCOUNT TO AID IN THIS IMPORTANT
    VENTURE. THE INTERNAL REVENUE SERVICE OF THE UNITED STATES OF AMERICA WILL
    FUNCTION AS OUR TRUSTED INTERMEDIARY. I PROPOSE THAT YOU MAKE THIS TRANSFER
    BEFORE THE FIFTEENTH (15TH) OF THE MONTH OF APRIL. 
    
     
    I KNOW THAT A TRANSACTION OF THIS MAGNITUDE WOULD MAKE ANYONE APPREHENSIVE AND
    WORRIED. BUT I AM ASSURING YOU THAT ALL WILL BE WELL AT THE END OF THE DAY. A
    BOLD STEP TAKEN SHALL NOT BE REGRETTED, I ASSURE YOU. PLEASE DO BE INFORMED
    THAT THIS BUSINESS TRANSACTION IS 100% LEGAL. IF YOU DO NOT WISH TO CO-OPERATE
    IN THIS TRANSACTION, PLEASE CONTACT OUR INTERMEDIARY REPRESENTATIVES TO FURTHER
    DISCUSS THE MATTER. 
    
    I PRAY THAT YOU UNDERSTAND OUR PLIGHT. MY FAMILY AND OUR COLLEAGUES WILL BE
    FOREVER GRATEFUL. PLEASE REPLY IN STRICT CONFIDENCE TO THE CONTACT NUMBERS
    BELOW. 
    
    SINCERELY WITH WARM REGARDS, 
    GEORGE WALKER BUSH 
    

    Happy Linuxing!

    Mike ("Iron") Orr
    Editor, Linux Gazette, gazette@ssc.com

     


    Copyright © 2003. Copying license http://www.linuxgazette.com/copying.html
    Published in Issue 87 of Linux Gazette, February 2003