LINUX GAZETTE

Mid-February (EXTRA) 2001, Issue 63       Published by Linux Journal




-------------------------------------------------------------

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2001 Specialized Systems Consultants, Inc.

Contents:

¶: Greetings From Heather Stern
(?)trouble with diff (fwd) --or--
Why is diff so crazy? -- Perseverance pays off at last.
(?)modem --or--
State of the Art in softmodems
winmodem(tm), HSP, ACP, DSP, whatever. Just call my ISP already
(?)Distros
(!) RE Solaris UNIX?
(?)linux anti virus?
(?)Por favor Ayuda ! It said "OK" a lot, but...
(?)LINUX for SGI Visual Workstation

(¶) Greetings from Heather Stern

Welcome to the Extra issue this February. We had a few extra things in the queue here this time and figured you just couldn't wait, so here they are!

I have to say I am really very impressed by the Human Genome Project's results. There seem to be two sides to the camp... but we can't call them "the Cathedral and the Bazaar":

  1. That's already taken :-)
  2. The fellow the cathedrals are for already did all the hard work... these folks are just studying the results.

Um, let's call them the College and the Commerce. Mr. James Kent in Santa Cruz wrote, over a fairly short time, a program to have about 100 Pentiums help him assemble the genome data out of public and academic fragments. (I'm not sure which he used more of: ice packs or Jolt Cola.)

Meanwhile, Celera was pouring lots of hours and corporate resources into doing the same thing. They both succeeded to an announceable degree, within days of each other. We're not quite at curing cancer yet, but maybe there are enough resources now to start nailing some of the more clearly genetic diseases. There's certainly a lot of work to be done. Anyways, you can read a lot about all this in the New York Times -- I did.

Of course, to read the New York Times online at http://www.nytimes.com, they want you to register. Sigh. To access Celera's database, you have to pay for access (but, they might have more than they published about, too, so maybe you're at least paying for some serious R&D). Still, what the school system paid for is available to all of us... though not terribly readable unless you're into genetics:

http://genome.ucsc.edu

Hey wait a minute, I hear you cry. This isn't Linux! Well, I don't know. It could have been. It doesn't matter. (Gasp! Linux doesn't matter? What can you mean!?) What's more important is what gets done with a computer.

The sheer number of people who have contributed to figuring out how we really tick, and the time they continue to put in (since we aren't nearly at the point where we can run a make script, have the waldos get out a petri dish, and create even so tiny a creature as a mouse "from scratch"), are just amazing. (Let's see, if the Creator writes and debugs one line of code a year in us, we're as big as... er, never mind.) Compared to that, my effort every month on LG seems like a breeze.


(?) The Answer Gang (!)


By Jim Dennis, Ben Okopnik, Dan Wilder, the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to linux-questions-only@ssc.com


(?) Why is diff so crazy?

Perseverance pays off at last.

From sbcs

Answered By Heather Stern

This is the reformatting, and basically the kick-in-the-pants, of a question that's been in the mill for a few months. For three months this fellow patiently re-sent his message, certain that someday we would get to him.
Before I get started with the actual question, I'd like to make it completely clear to our readers... we do enjoy answering questions. For some strange reason, that is part of what is fun about Linux for those of us here in the Answer Gang. The Gazette exists to make Linux a little more fun, so here we are.
However, we're all volunteers, we all have day jobs and most of us have families and pillows we like to visit with once in a while. There is no guarantee that anyone who sends us mail gets an answer.
[ He also had some problems that made his mail a good candidate to get ignored. Since we had another thread elsewhere on features that will help you get an answer, I moved my comments there, and you saw those in Creed of the Querent earlier this month. ]
I've added paragraphs, and hit it with my best shot, and maybe the Gang can help out a bit further. Comments from you, gentle readers, are always welcome too!
So now, on to the tasty question.

(?) I wanted to build a fresh installation on my portable (Red Hat 5.2 upgraded to 6.0), but I didn't want to just erase the old one.

So I pulled the notebook's hard drive, plugged it into my server (Red Hat 6.2) and archived the contents with cp -a file file. The -a (archive) tells cp to preserve links, preserve file attributes if possible and to copy directories recursively. The copy process didn't return any errors... so far so good.

(!) [Heather] Okay. So far, we have that you wanted to upgrade, so you planned to back it up. That's a good idea, but the method isn't so hot.
cp -a really only works if you're root, and I can't tell if you were, or not. But it's not the method I would use to do a proper backup of everything. I normally use GNU tar:
cd /mnt/otherbox
tar czvfpS /usr/local/otherbox-60-backup.tgz .
The options stand for (in order) [c]reate, use g[z]ip compression, be [v]erbose, send the output to a [f]ile instead of a tape (this option needs a parameter), save the [p]ermissions, be careful about [S]parse files if they exist. The file parameter given has a tgz extension to remind myself later that it's a tar in gzip format, and I put it in /usr/local because that usually has lots of free space. The very last parameter is a period, so that I'm indicating the current directory. I do not want to accidentally mix up any parts from my server into my otherbox backup.
Among other things it properly deals with backing up device files... all those strange things you'd normally use mknod to create.
(!) [Mike] Before untarring, you MUST do "umask 000" or risk having /dev/null and other stuff not be world-writable.
(!) [Heather] I haven't encountered that (I think this is what the p flag for tar solves) but good catch! Now this works okay for most circumstances, and the nice thing is that you have a very easy way to check that it is okay:
tar dzvfpS /usr/local/otherbox-60-backup.tgz .
Where the d stands for diff and all the rest is the same. Diff does have a glitch, and will complain about one special kind of file called a socket. X often has at least one of these, the log system usually uses one, and the mouse often uses one too. It's okay to ignore that because a socket depends on the context of the program that owns it, and right now, there's no program running from that disk to give it the right context anyway. (Okay, I'm guessing, but that is a theory I have which seems to fit all the ways I see sockets used.)
Now my husband Jim doesn't always trust tar, and sometimes uses cpio. I'll let him or one of the rest of the Gang answer with a better description of using cpio correctly. What I will tell you is why. When you are about to do a full restore of a tarball, it checks to see if it can assign the permissions back to the correct original owners. However, a complete restore will be to an empty disk, which won't have correct passwd and group files yet. Oops.
But there is a fix for this too, and I use it all the time when restoring:
cd /mnt/emptydisk
tar --numeric-owner -xzvpSf /usr/local/otherbox-60-backup.tgz .
It's just as valid to use Midnight Commander to create /mnt/emptydisk/etc, open up the backup tgz file, and copy across the precious /etc/shadow, /etc/group, and /etc/passwd files before issuing your restore command.
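Meanwhile, for the impatient, here's a minimal cpio sketch of the same backup and restore. I'm going from memory here, so check the flags against your man page before trusting it with real data:
cd /mnt/otherbox
find . -depth -print | cpio -o -H newc > /usr/local/otherbox-60-backup.cpio
...and to restore:
cd /mnt/emptydisk
cpio -i -d -m < /usr/local/otherbox-60-backup.cpio
The -o and -i select copy-out and copy-in mode, -H newc picks the portable SVR4 archive format, -d creates leading directories as needed, and -m preserves modification times. Run as root, cpio keeps ownership and modes, and it copes with device files much as tar does.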

(?) But when I ran diff -r file file, I got screen-fulls of errors. The most obvious problem was that diff was stuck in a loop with "/usr/bin/mh", a symbolic link pointing back to "/usr/bin". :-) Make a pair of directories, each containing a symbolic link pointing back at the directory it resides in, and then run diff -r on those two directories and you can see what I mean.

The diff program doesn't fail on all symbolic links... just those that lead into loops and some others (I didn't take time to explore what it was about the others). I removed "/usr/bin/mh" (I'd have preferred not to have had to, but I wanted to move along and see what other snags I could hit), ran diff again with output redirected to a file and started taking the file apart with grep and wc to find out what general classes of error I was dealing with... turns out diff was failing on "character special" files, sockets and "block special" files.

I don't know what any of those three are, but I used find and wc again on the file system and discovered that diff was failing on every single one that it tried to compare. Does anybody know what to do about these problems?

Update: After a week of trying, I'm unable to duplicate the event above. I installed Red Hat 6.0 on a pair of Gateways... basically the same procedure as I did for my disk usage article at the other end of that link.

When I ran the diff, it seemed to start looping somewhere in "/tmp/orbit-root"... I let it run for about 24 hours and the hard drive light was still flashing the next day, no error message.

I tried 6.0 transplanted into a 6.2 box... same thing. I put 6.0 on my portable, pulled the drive and attached it to my server, and got the same thing. I put 5.2 on my portable, upgraded it to 6.0, pulled the drive and attached it to my server... same circumstances as the original event... and diff looped somewhere in "/tmp/.X11-unix" instead of "/tmp/orbit-root".

(!) [Heather] I simply don't recommend that full backups ever waste any time capturing /tmp. The point of this directory is to have a big place where programs can create the files they need. Make the programs do their own dirty work making sure they have the right parts. In my case, /tmp is a separate partition, and I wouldn't even mount it in rescue mode.
While we're mentioning filesystems to skip, make sure not to bother grabbing the /proc filesystem, either. The -l (little ell) switch to tar (--one-file-system) will make sure a backup won't wander across filesystem borders unless you specify them on the command line.
cd /mnt/otherdisk
(mount its subvolumes here ... skip tmp, proc, and devfs if you have it)
tar czvfpSl /usr/local/otherbox-60-backup.tgz . usr var home

(?) The diff program definitely has issues with most types of non-regular files (directories excluded), as well as the problem of sometimes looping without ever generating an error message (which could, of course, be related to the same basic problem with non-regular files). Suggestion(s), anyone? :-)

(!) [Heather] If any of you kind readers have other interesting ways to make sure your backups work when you do a restore... their only reason for existence, after all... let us know!
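Here's one to get the ball rolling, a sketch only: compare just the regular files between the original tree and the copy, sidestepping diff's complaints about devices and sockets entirely. GNU find substitutes {} even in the middle of an argument, so:
cd /mnt/otherbox
find . -type f -exec cmp {} /mnt/original/{} \;
cmp prints a line for every file that differs, and complains if a file is missing on the other side. The /mnt paths are made up for the example; point them at wherever your copy and your original really live.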

(?) State of the Art in softmodems

winmodem(tm), HSP, ACP, DSP, whatever. Just call my ISP already

From Marcelo Henrique Gonalves

Answered By Heather Stern

Hi I have a PCTel HSP Micromodem 56! Yes! Onboard :]

Is this modem compatible with Conectiva Linux? On Conectiva's site it says "no", and yours does too! :]

But can I configure my modem anyway, if I download an rpm or other file?!?!
Thankx

by
BM
OS PIRATA!!!

(!) [Heather] HSP means "Host Signal Processing" and that means the host, your computer, has to do all the work. It's a software driven modem.
There used to be only two of these kinds of modems with any hope for them whatsoever, in both cases very tricky because the vendors had created binary drivers and then orphaned them. The only way you can get more unsupported than that is to not have drivers at all.
(Can someone out there please spin up a new buzzword for "software released on the basis that you get no tech support" so we can go back to using "unsupported" to mean "doesn't work"?)
Normally for software-modems or controllerless modems (what's the difference? which chip out of three is missing. sigh) we of the Answer Gang simply point querents at the Linmodems site (http://www.linmodems.org) and shake our shaggy heads. It's a lot of work to go through just to use a modem that borrows so much CPU effort and buckles just when the dataflow gets good anyway.
However, I've been watching and it looks like the number of types that can work (whether "supported" or not) has grown to four.
I'll start with yours first because that's what you need. PCTel was one of the early ones to let a driver sneak out, maintained by PCCHIPS. Corel made a .deb of their driver for 2.2.16, and some unknown hero named Sean turned that into a .tgz; he has also made available a download point for Thomas Wright's effort toward the same chipset... a driver for 2.4 :)
Sean's site:
http://walbran.org/sean/linux/stodolsk
Download point for Thomas' 2.4 PCTel driver:
http://www.geocities.com/tom_in_rc/pctel
Hopefully that does it for you!
Now as for good news for everyone else :)
Anyone using Lucent controllerless modems will also want to take a look at Sean's site, because he keeps a decent listing of useful scripts and kernel parts that you'll find handy.
For those who prefer code built completely from scratch, Richard's LTmodem diagnostic tool moved up to version 0.9.9... it can now answer the phone and handle voice-grade work, so you can use it for mgetty setup (where you want to be able to dial straight home), but I think it still isn't good for ppp. Anyone's welcome to let us and the linmodem crowd know if you get ppp working with it:
Richard's LTModem Page
http://www.close.u-net.com/ltmodem.html
IBM has a project they're calling "Mwave":
http://oss.software.ibm.com/developer/opensource/linux/projects/mwave
...which is a driver for the ACP modem found in the IBM Thinkpad 600E. They say they are working on some licensing issues, but plan to release the source for it as soon as they can. Meanwhile, they have updated it at least once, so we know they're fixing bugs.
And lastly, Mikhail Moreyra wrote a driver for the Ambient Tech chipset... that's a DSP-based modem that used to be from Cirrus Logic, just in case one of you gentle readers has an older box. In theory this may work for your software-driven modem if it claims to be a "sound card modem" since that's what the DSP chip really is. Linmodems only points to his tgz but don't worry, it's source code :) However, it's not exactly a speedy modem even once you use the driver, since he's only gotten 14.4, v.34 (32 Kbps) and v.8 working so far.
To the rest of you, sorry. Maybe you should go out and buy a solid external modem with its very own power supply and line-noise reduction features, or a Cardbus modem that isn't afraid to use a little real estate to offer a complete-chipset modem.

(?) Distros

From Gerald Saunders

Answered By Don Marti, Mike Orr, Heather Stern

How about an (objective) article on the relative differences between the different GNU/LINUX distributions? Newbies have to rely on the "propaganda" generated by those distributions in order to make decisions as to which distribution serves their needs best. A really informed guide is what is needed. I, for example, think that Mandrake is best for newbies and Debian is a more stable platform (and more philosophically correct!). Am I right though?

(!) [Don] No, you're utterly wrong! Fool! Infidel! Liar! Your mother runs OS/2! (That's pretty mild for a response to a distribution comparison actually.)
How to pick the best Linux distribution to run:
If you would be most likely to ask on a mailing list for Linux help, read the list for a while and see what the most helpful posters run.
Any technology distinguishable from magic is insufficiently advanced.
(!) [Mike] Most people would say this, although the stability of Debian is in the eye of the beholder. As far as an "objective" comparison of distributions goes, there are already many out there.
(!) [Heather] Debian's "potato" aka "stable" is actually quite good for stability... but I've seen some exciting side effects in "woody" aka "testing", and "unstable" just plain lives up to its moniker.
(!) [Mike] Oops, I didn't mean to put Mandrake so far ahead of most of the other distros. Many people find Mandrake easier to install than most distributions. However, it's far from being "the only distribution that matters", although Mandrake marketing would like to think so.
Different users have different expectations and requirements, so finding one distribution that's the "best" is as futile as the emacs/vi wars.
(!) [Heather] Recently in The Answer Gang (TAG, we're it :) ... issue 60) someone asked us what was the best distribution for newbies, and we answered in a great deal of detail about some of the relative strengths and weaknesses, plus points to consider, like scoring the installer and the features you need in it separately from behaviors during use of the system, and even whether what you want to do is put your Linux setup together from loose parts.
It might be worthwhile to come up with some major criteria, and then attempt to map distributions against those criteria, so that people get a lot more useful data than "4 penguins, looks great to me."
You're right about propaganda, but a "really informed guide" is going to be written by... whom? Surely not a newbie. Can a newbie really trust someone wise enough to write a book to have any idea what sort of things are easy or difficult for a newbie anymore?

(?) Hi Heather!

Yeah, I suppose you are right on that. If someone is advanced enough to know heaps about Linux, then they probably wouldn't relate well to an abject newbie! I just thought it was a good idea, as when I was trying to find info on the different Linux distros I ended up guessing. There was not much out there to let me make an informed decision. I ended up trying Debian, which was a steep learning curve. I would probably have tried Mandrake or Red Hat if I had known how steep. I now use Mandrake because of HPT366 (ATA66) support, ReiserFS, and CUPS printing right out of the box. But my heart is still with Debian!

Cheers, Gerald.

(!) [Heather] Well, then, keep an eye on the Progeny project - Ian Murdock himself and a handful of trusted friends are working on putting together a new Debian-based distro which is really aimed at desktop users more than the server and hardcore-linuxer crowd, yet is aware of the "standard" Debian project enough to allow a smooth transition.

Progeny is presently at Beta Two, with downloadable CD-ROM images available. See http://www.progenylinux.com/news/beta2release.html for the PR, release notes, and a list of discs.

You might also try Storm Linux, LibraNet, or Corel Linux; all are Debian-based commercial distros, so at least their installer is a bit smoother. GUI installers drive me crazy, so of the three, I prefer LibraNet.

(?) Thanks Heather!

I will check those out!

Thanks again, Gerald.

(!) RE Solaris UNIX?

From Mitchell Bruntel

Gee: I have a question, and an answer. I sent the question earlier about LILO and not booting hda5 (ugh)

Is AIX or Solaris or SunOS or HP-UX a UNIX?

(!) [Heather] AIX and Solaris are blessed with this trademark under "UNIX 98"; HP-UX and Tru64, among others, are blessed under "UNIX 95". (You can see the Open Group's Registered Product Catalog if you care:
http://www.opengroup.org/regproducts/catalog.htm )
I don't think SunOS ever got so blessed; it was a BSD derivative, after all. You can read some about the confusion between SunOS and Solaris in this handy note:
http://www.math.umd.edu/~helpdesk/Online/GettingStarted/SunOS-Solaris.html
(!) [Mitchell] Here's the deal on SOLARIS. IT IS UNIX. It still has portions that were (c) 198X by AT&T and USL.
Solaris (as opposed to SUN-OS) IS UNIX System V.4
AT&T merged Sun-OS back in with System V and got a bunch of stuff, especially the init systems we love, as well as the package-add format.
btw, is there any demand in LINUX for users/programmers with SUN system V package-add experience?
(!) [Heather] I imagine that places which are using Linux to mix Suns and PCs, with Solaris on their Sparcs, and Linux on the PCs... possibly even a more SysV-ish distro like Slackware... would find such programmers handy.
(!) [Mitchell] (so because SUN is STILL the UNIX reference "port" for UNIX System V.4, --from the old days) it IS UNIX, and therefore doesn't need the blessing.
BTW, Solaris also has the distinction of being BOTH BSD and System V.4 in one.
A great deal of the product is now engineered to favor V.4, but there are still some BSD roots and compatibility there!
Mitch Bruntel
(ATT Labs-- maybe that's why I know this trivia?)

(?) linux anti virus?

From Jugs

Answered By Mike Orr, Heather Stern

On Sat, Sep 16, 2000 at 03:59:53PM +0200, jugs wrote:

(?) hi
I wonder if you could help?
I am running a mail/internet server with the Red Hat Linux (6.2) operating system. Viruses are getting through to the end users via email and are spreading over my local area network.

1) Is there any antivirus software that I can get for the Linux box?

(!) [Mike] Yes, but I don't know the names offhand. Check previous issues of The Answer Gang, News Bytes, the LG search page, and www.securityportal.com.
(!) [Heather] Yes. I'm operating on the assumption that your linux box is the hub through which all mail is received, maybe even the only place that mail really comes to, because the typical Windows or Mac client uses POP.
You could use:
AMaViS (A Mail Virus Scanner) ...note, they have a bunch of great links too!
http://www.amavis.org
Freshmeat has a whole section on antivirus daemons
http://freshmeat.net/appindex/Daemons/Anti-Virus.html
Mind you, most of these require that you have the Linux version of one of the commercial vendors' antivirus apps, or they're meant to deal with problems which usually break the clients (e.g. poor MIME construction). At least one of the commercial vendors has a complete solution for us though... and for a handful of other 'Ix flavors too:
Trend Micro's Interscan VirusWall
http://www.antivirus.com/products/isvw
...and in case anyone is wondering whether it only works on RH, I have a few clients who got it working on SuSE and seem pretty happy with it.
For those who prefer to go with all free parts, I have to note that VACina (a SourceForge project) isn't very far along, and anti-spam stuff can be twisted only so far if you aren't planning to become an antivirus engineer on your own.

(?) 2) The option of buying software for each machine wipes out my budget. Preferably, the solution I would like would stop the virus at the server.

(!) [Heather] That shouldn't be a problem; the stuff I described above works at the server level. I have to warn you, though, that I used to work in the antivirus field, and until those macro viruses (yeah, viruses... the biological ones are virii) came around, the vast percentage of infections were from accidental boots off a floppy. There's also a type of virus that is carried in programs, but as soon as it's given a chance, hits the boot sector too. So going without some sort of resident checker, or if that's too much, then a downtime window where your staff goes through and checks all the machines, is not really doing a complete job.
A school I did a bunch of work with solved the problem in their labs this way: every evening when the lab closed, they'd go around with a spot checker and take notes on what was found. They didn't waste time cleaning any; they just reformatted and reinstalled the OS from a network image. (Among other things, that way they didn't have to worry if they missed some new breed.) But they posted the note on the wall, how many viruses were found the night before. They also made it easy for students to spot check their disks. Of course, the school had an educational license for the AV software. You can think of this as the "free clinic" style of solving it, if you like... though real illnesses, sadly, can't be solved by reformatting the human.
But, I can't say what your budget really is. In the end, you'll have to decide if you want to spend more time or more money.

(?) If you could suggest a solution I would be grateful.

thanking you
jugs

(!) [Heather] Everyone else wondering about solutions for their virus ills in a mixed environment surely thanks you for asking, Jugs. Good luck in the battle!

(?) Por favor Ayuda !

From Carlos Moreno

Answered By Felipe E. Barousse Boue (one of our translators)

This message was sent to the gazette mailbox in Spanish. Mike Orr forwarded it to Felipe for translation. Sadly, Carlos' accent marks got mangled in the transition.

Here it goes: (I decided to add a reply to Carlos...by the way, I can tell by his email that he seems to live in Mexico)

(?) Hi Linux friends, I need your help desperately. I just got a Linux disk (Red Hat 6.0) and the installation process was very painful.

After finding out how Disk Druid worked, I started to partition my disk. It took around 15 minutes to install and configure; at the end I assigned a password for "root" and rebooted the machine.

A huge amount of text was displayed with many "OK"s. Finally, it printed the "login" prompt, where I typed "root" and my password; then it displayed the following message:

[Bash: root@localhost /root]#

I don't know what to do; I can't initialize Linux and it's very disappointing. I want to start on a KDE graphical interface and I can't. Please help me, I need it a lot.

Regards.

Carlos Moreno.

(!) [Felipe] Hola Carlos:
Welcome to the world of Linux. It seems that you installed Linux correctly, since you are getting all those "OK" prompts after rebooting your system and at the end you get a "login" prompt. When you type your "root" id and your password, in fact you ARE now in Linux, but you are in text mode.
If you want to initiate the graphical user interface, first you have to ensure that the graphical environment was installed in the first place and, second, that it is configured correctly for your equipment. To start, give the "startx" command after you have logged in and gotten the shell command prompt:
[Bash: root@localhost /root]# startx
This will attempt to initialize the graphics environment, if, and only if, it is installed and configured correctly. Otherwise, you will have to install it and set it up. (That is a long question/reply, so let's first find out if you are OK at this point.)
I will suggest that you take a look at http://www.gacetadelinux.com which contains the Spanish edition of Linux Gazette and there we have a new user forum -in Spanish- where you can get a lot of help related to Linux, including installations, configurations and Linux use.
I can assure you that once you learn a little bit about Linux, you will have a great experience with this operating system.
Greetings for now.
Felipe Barousse


(?) SGI Visual Workstation

From Beth H.

Answered By Jim Dennis

Hi Answer Guys!

I must install LINUX on an SGI 540 Visual Workstation running WinNT4.0. I'm using RedHat 6.2 and want to make a dual boot system with LINUX on the 2nd SCSI drive.

My problem is that I can't boot off the RedHat install floppy or the CD. I can't get past SGI's system initialization/setup to boot off either device. I have installed this software on several INTEL Pentium platforms without any trouble.

Please help me get started and/or provide some useful websites that will help. I find SGI/MIPS stuff, but can't seem to find anything else. I checked your answers too so if I missed it, my apologies.

Thanks,
Beth H.
[From a U.S. Navy Address]

(!) [JimD] Well, Beth, you've discovered a fundamental truth about the SGI Visual Workstation: it's not a PC. It uses an x86 CPU, but from what the SGI folks told me when I was teaching advanced Linux courseware to some of their tech support and customer service people, the resemblance pretty much ends at the CPU pins.
So you can't boot one with a typical Linux distribution CD or floppy. I guess we have to find a custom made kernel and prepare a custom boot floppy or CD therefrom. Did you call SGI's technical support staff? They probably have a floppy image tucked away on their web site somewhere; and SGI is certainly not hostile to Linux. Give them a call; if that doesn't work I'll try to dig up the e-mail addresses of some of the employees that took my class and see if I can get a personal answer.
After writing this I got to a point where my Ricochet wireless link could "see bits." (I do all of my e-mail from my laptop these days, and much of it during business meetings or at coffee shops and restaurants; so it's a little harder to do searches --- I write most of my TAG articles from memory and locally cached LDP docs --- an 18 GB disk is good for that.)
So I did a Google!/Linux (http://www.google.com/linux) search on the string: 'SGI "visual workstation" boot images' and found:
Linux for SGI Visual Workstations:
Linux 2.2.10 + Red Hat 6.0
Updated: July 28, 1999
http://oss.sgi.com/www.linux.sgi.com/intel/visws/flop.html
Luckily, the main differences are in the kernel and (possibly) in the boot loader and a few hardware utilities. I wouldn't expect programs like hwclock, lspci, isapnp etc. to work --- though some of them might. I've seen lspci used on PowerPC (PReP) systems, and I've used it on SPARC Linux. I seem to remember that hwclock was modified to use a /proc interface and that most of its core functionality is now in the kernel.
The other software element that is very hardware-dependent is the video driver. As more accelerated framebuffer drivers are added to the kernel, this becomes less of an issue (it folds back into the earlier statement: "MOST of the differences are IN THE KERNEL").
So, once you get the boot disk/CD and an X server working, most other software and all of your applications should work just fine.
Of course updates to RH6.2 or later, and to newer kernels might be a bit of a challenge. However, I'll leave those as exercises to the readership. The source is all out there!

More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


I've got an IDE CD-RW...

Wed Feb 14 15:52:09 PST 2001
Heather Stern (The Editor Gal)

Joey Winchester asked:

I need to use the ide-scsi.o module, but NONE of the 'HOWTOs' help; they just give the code to check 'cdrecord -scanbus'. Okay, my CD-RW's not listed, so WHAT DO I DO?? The HOWTOs aren't helpful at ALL. So how can I recompile the kernel to use my IDE drive as a CD-RW and NOT just a CD-ROM?

You need to give the kernel an option to warn it that it not only requires the hardware-level driver (IDE CD-ROM in this case) but also the IDE-SCSI interface driver. This applies just as much whether your ATAPI device is built-in or attached via Cardbus (as mine is). The easiest place to put this is in your /etc/lilo.conf:
append="hdc=ide-scsi"
If you have more than one kernel option, though, they have to be in one big append statement. What will happen is that you'll get some sort of warning message about a missing driver as the drive initializes. Ignore it; your device should be able to read fine without that. But when you want to write, modprobe ide-scsi and your virtual SCSI host will be established. After that, the normal instructions for cdrecord and all its brethren will work.
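For instance, a lilo.conf stanza might end up looking something like this (hdc and the kernel paths are examples only; substitute the device your burner actually sits on):
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    read-only
    append="hdc=ide-scsi"
Re-run lilo, reboot, and then:
modprobe ide-scsi
cdrecord -scanbus
Your drive should now show up in the scanbus list with a bus,target,lun triple such as 0,0,0, which is what you hand to cdrecord's dev= option when you burn.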


3 legged router: FreeSco

Mon, 5 Feb 2001 10:50:42 -0500
Ray (taylor864 from yahoo.com)

First, let me comment that I think TAG is exceptional. I stumbled upon it by accident, ended up reading every question and answer, and after this email will be going to look for previous months. Anyhow, in reference to one of the questions asked about a firewall (Firewall for a SOHO, from Tom Bynum), you suggested a 3-legged Linux box to do his routing / firewalling. There is a free router / firewall called FreeSco (stands for FREE ciSCO) (http://www.freesco.org) that is essentially a firewall on a floppy, with support for a DMZ. It uses (I believe) IP masquerading and ipchains, and runs a minimum of services, etc. You most likely already knew about it, but I thought I'd pass this along (since the guy lives by your mom and all).

Have a good 'un.

-Ray.

"Linux Gazette...making Linux just a little more fun!"


Linux On Your Desktop: Multimedia

By Marius Andreiana


Many people still have the impression that Linux is about servers and typing commands in a console. Well, that isn't all; Linux is being used on the desktop more and more. Why? Here are some reasons.

Everybody likes music. Technology lets people listen to high-quality music on audio CDs. But if they aren't using a computer, they are missing a lot. Why change the CD just because you want to listen to another album? Lots of songs can be stored on the hard drive or on CD-ROMs.

To do that, you'll have to transform the songs from audio CD to computer files. My favourite tool for that is grip. Download bladeenc (rpm) too, which compresses audio data to mp3 files. Launch grip, set Config -> MP3 -> Encoder to bladeenc and let it rip!

However, you should forget about mp3. A new, open format is available. mp3 is quite old and has limitations; the encoding software uses patented algorithms. The alternative, Ogg Vorbis, is intended for unrestricted private, public, non-profit and commercial use and does not compromise quality for freedom. You can already start using the ogg encoder instead of mp3. See this article for an introduction to Ogg Vorbis.
Download and install the following RPMs: the XMMS plugin, the encoder and the libraries. Next, set oggenc as the encoder in grip.
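If you want to try the encoder by hand before wiring it into grip, oggenc works straight from the command line. A quick sketch (the exact options may vary with your oggenc version):

  oggenc -b 128 track01.wav -o track01.ogg

This reads a ripped WAV file and writes a 128 kbps Ogg Vorbis file next to it.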

You've taken your whole audio CD collection and encoded it into computer files. Now you'd like to listen to it, wouldn't you? Fire up X Multimedia System (I really did :-)

X Multimedia System fired up near the Christmas tree

I use a very nice XMMS plugin for crossfading; when the current song is near the end, it fades out while the next song fades in. Let the music play!

While you listen, how about painting? The GNU Image Manipulation Program, or GIMP for short, is the best in digital art on Linux. It can be used as a simple paint program, an expert-quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, etc.

I'm not into digital art, but look how Michael Hammel managed to transform his cousin into an alien:

at first look, seems a normal human being

but not everything is what it seems to be...

Never trust relatives :). Visit the Graphics Muse site for lots of materials about GIMP and Linux Artist for more resources. For an overview of 3D graphics programs, see this article.

After music and painting, how about making the computer speak? Try festival, a free speech synthesizer. As almost always, rpmfind provides links to rpms.
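Once festival is installed, you can test it right from the command line; for example:

  echo "Linux is just a little more fun" | festival --tts

The --tts switch tells festival to speak the text on standard input instead of starting its interactive prompt.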

A free speech recognition engine is available from IBM: ViaVoice. It can be used, for example, to voice-control XMMS and to issue simple desktop commands, but writing entire documents from speech is still something for the future.

Besides music, I also watch movies on my computer. smpeg is a nice GPL MPEG player.

'Yes, that is MS Windows', she says

Having a movie in 320x240 resolution using up more than 1 GB isn't so great, though. DivX is the choice for now; a movie in 700x400 takes less than 700 MB. DivX requires a better processor; 300 MHz is a good start. Another problem with it is that there isn't yet a native Linux player; avifile uses Windows DLLs to be able to play.

Thanks to the open-sourcing of DivX at Project Mayo, a Linux player will be available too.

I don't have a TV; in the last year I saw more movies on my PC than on TV. I do have a TV tuner with remote control, which I use from time to time.

Tux on TV

If you don't wanna miss a show/movie and you're busy, set it to record a channel at a certain time (make sure you have plenty of space) and watch it later. Or do some Movie Making on your Linux Box.

I'll let you now enjoy your Linux desktop. Maybe you'll even show it to a friend. You can happily use it and forget about Windows (read On becoming a total Linux user). English knowledge isn't a requirement, as you can see from my screenshots. We continue to improve GNOME support for Romanian. Visit the GNOME translation project to see how well your language is supported.

If you are new to Linux, see my previous article showing how to customize GNOME and stay tuned to Linux as a Video Desktop.

And finally, don't forget that Linux and applications like the ones I've talked about were done by volunteers. Feel free to join ;-)


Copyright © 2001, Marius Andreiana.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Taming rand() and random()

By Ken O. Burtch


In the lower levels of the Ontario Science Center in Toronto, Canada, there is a wide circular device made of thin rods of steel. Curious bystanders can take billiard balls, put there for that purpose, and let them loose on the machine.  The balls whiz along their rails, ricocheting off pins, clanging through wind chimes, grabbed by counterweighted arms and lifted towards the ceiling.  At several places the balls choose one rail or another purely at random.  How is it that a construct not powered in any way, laid out in a rigid pattern, still produces unexpected results?

Writing programs that use random numbers requires an understanding of error estimation, probability theory, statistics and other advanced numeric disciplines.

Bunk.

Random numbers are about getting your programs to do the unexpected without a core dump being involved.  They're about having fun.

Random But Not Really Random


Computers do not use "real world" random numbers.  Like the billiard-ball machine, computers are rigid, constrained by rules and logical behaviour.  For a computer to generate truly random numbers, it would have to choose numbers by examining real world events.  In the early days, people might roll some 10-sided dice and compose a list of digits for a program to use.

Unfortunately, real-world random numbers can be unexpectedly biased.  As the old saying goes, "the real world is a special case."  Instead, computers rely on mathematics to generate uniformly distributed (that is, random but not too random) numbers.  They are "pseudo-random", generated by mathematical functions which create a seemingly non-repeating sequence.  Over time, the numbers in the sequence will reliably occur equally often, with no one number being favoured over another.

The Linux standard C library (stdlib.h) has two built-in random number functions.  The first, rand(), returns a random integer between 0 and RAND_MAX.  If we type

  printf( " rand() is %d\n", rand() );
  printf( " rand() is %d\n", rand() );
  printf( " rand() is %d\n", rand() );

rand() will return values like

  rand() is 1750354891
  rand() is 2140807809
  rand() is 1844326400

Each invocation will return a new, randomly chosen positive integer number.

The other standard library function, random(), returns a positive long integer.  On Linux, both integer and long integer numbers are the same size.  random() has some other properties that are discussed below.

There are also older, obsolete functions to produce random numbers:

  * drand48/erand48 return a random double between 0..1.
  * lrand48/nrand48 return a random long between 0 and 2^31.
  * mrand48/jrand48 return a signed random long.

These are provided for backward compatibility with other flavours of UNIX.

rand() and random() are, of course, totally useless as they appear and are rarely called directly.  It's not often we're looking for a number between 0 and a really big number: the numbers need to apply to actual problems with specific ranges of alternatives.  To tame rand(), its value must be scaled to a more useful range, such as between 1 and some specific maximum.  The modulus (%) operator works well: when a number is divided by max, the remainder is between 0 and max - 1.  Adding 1 to the modulus result gives the range we're looking for.

  int rnd( int max ) {
    return (rand() % max) + 1;
  }

This one line function will return numbers between 1 and a specified maximum.  rnd(10) will return numbers between 1 and 10, rnd(50) will return numbers between 1 and 50.  Real life events can be simulated by assigning numbers for different outcomes.  Flipping a coin is rnd(2)==1 for heads, rnd(2)==2 for tails.  Rolling a pair of dice is rnd(6)+rnd(6).

The rand() discussion in the Linux manual recommends that you take the "upper bits" (that is, use division instead of modulus) because they tend to be more random.  However, the rnd() function above is suitably random for most applications.

The following test program generates 100 numbers between 1 and 10, counting how often each number comes up in the sequence.  If the numbers were perfectly uniform, they would appear 10 times each.

  int graph[11];
  int i;

  for (i=1; i<=10; i++)
      graph[i] = 0;
  for (i=1; i<=100; i++)
      graph[ rnd(10) ]++;
  printf( "for rnd(), graph[1..10] is " );
  for (i=1; i<=10; i++)
      printf( "%d " , graph[i] );
  printf( "\n" );

When we run this routine, we get the following:

  for rnd(), graph[1..10] is 7 12 9 8 14 9 16 5 11 9
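If you'd like to compile and run the test yourself, here is the same routine wrapped into a complete program (the wrapper is mine; the interesting parts are unchanged):

  #include <stdio.h>
  #include <stdlib.h>

  /* scale rand() to 1..max, as above */
  int rnd( int max ) {
    return (rand() % max) + 1;
  }

  int main( void ) {
    int graph[11];
    int i;

    /* without srand(), the sequence is the same on every
       run; see "Controlling the Sequence" below */
    for (i=1; i<=10; i++)
        graph[i] = 0;
    for (i=1; i<=100; i++)
        graph[ rnd(10) ]++;
    printf( "for rnd(), graph[1..10] is " );
    for (i=1; i<=10; i++)
        printf( "%d ", graph[i] );
    printf( "\n" );
    return 0;
  }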

Linux's rand() function goes to great efforts to generate high-quality random numbers and therefore uses a significant amount of CPU time.  If you need to generate a lot of mediocre-quality random numbers quickly, you can use a function like this:

unsigned int seed = 0;

int fast_rnd( int max ) {
  unsigned int offset = 12923;
  unsigned int multiplier = 4079;

  seed = seed * multiplier + offset;
  return (int)(seed % max) + 1;
}

This function sacrifices accuracy for speed: it will produce random numbers not quite as mathematically uniform as rnd(), but it uses only a few short calculations.  Ideally, the offset and multiplier should be prime numbers so that fewer numbers will be favoured over others.
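To see the arithmetic in action: starting from seed = 0, the first call computes seed = 0 * 4079 + 12923 = 12923, so fast_rnd(10) returns (12923 % 10) + 1 = 4; the second call computes seed = 12923 * 4079 + 12923 = 52725840, returning (52725840 % 10) + 1 = 1; and so on down the chain.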

Replacing rnd() with fast_rnd() in the test program still gives a reasonable approximation of rand():

  for fast_rnd(), graph[1..10] is 11 4 4 1 8 8 5 7 6 5

Controlling the Sequence


A seed is the initial value given to a random number generator to produce the first random number.  If you set the seed to a certain value, the sequence of numbers will always repeat, starting with the same number.  If you are writing a game, for example, you can set the seed to a specific value and use fast_rnd() to position enemies in the same place each time without actually having to save any location information.

  seed = room_number;
  num_enemy = fast_rnd( 5 );
  for ( enemy=1; enemy<=num_enemy; enemy++ ) {
      enemy_type[enemy] = fast_rnd( 6 );
      enemy_horizontal[enemy] = fast_rnd( 1024 );
      enemy_vertical[enemy] = fast_rnd( 768 );
  }

The seed for the Linux rand() function is set by srand(). For example,

  srand( 4 );

will set the rand() seed to 4.

There are two ways to control the sequence with the other Linux function, random().  First, srandom(), like srand(), will set a seed for random().

Second, if you need greater precision, Linux provides two functions to control the speed and precision of random().  With initstate(), you can give random() both a seed and a buffer for keeping the intermediate function result.  The buffer can be 8, 32, 64, 128 or 256 bytes in size.  Larger buffers will give better random numbers but will take longer to calculate as a result.

  char state[256];                 /* 256 byte buffer */
  unsigned int seed = 1;         /* initial seed of 1 */

  initstate( seed, state, 256 );
  printf( "using a 256 byte state, we get %d\n", random() );
  printf( "using a 256 byte state, we get %d\n", random() );
  initstate( seed, state, 256 );
  printf( "resetting the state, we get %d\n", random() );

gives

  using a 256 byte state, we get 510644794
  using a 256 byte state, we get 625058908
  resetting the state, we get 510644794

You can switch random() states with setstate(), followed by srandom() to initialize the seed to a specific value.
setstate() always returns a pointer to the previous state.

  oldstate = setstate( newstate );

Unless you change the seed when your program starts, your random numbers will always be the same.  To create changing random sequences, the seed should be set to some value outside of the program's or the user's control.  Using the time code returned by time.h's time() is a good choice.

  srand( time( NULL ) );

Since the time is always changing, this will give your program a new sequence of random numbers each time it begins execution.

Randomizing Lists


One of the classic gaming problems that seems to stump many people is shuffling, changing the order of items in a list.  While I was at university, the Computer Center there faced the task of randomly ordering a list of names.  Their solution was to print out the names on paper, cut the paper with scissors, and pull the slips of paper from a bucket and retype them into the computer.

So what is the best approach to shuffling a list?  Cutting up a print out?  Dubious.  Exchanging random items a few thousand times?  Effective, but slow and it doesn't guarantee that all items will have a chance to be moved.  Instead, take each item in the list and exchange it with some other item.  For example, suppose we have a list of 52 playing cards represented by the numbers 0 to 51.  To shuffle the cards, we'd do the following:

  int deck[ 52 ];
  int newpos;
  int savecard;
  int i;

  for ( i=0; i<52; i++ )
      deck[i] = i;
  printf( "Deck was " );
  for ( i=0; i<52; i++ )
      printf( "%d ", deck[i] );
  printf( "\n" );
  for ( i=0; i<52; i++ ) {
      newpos = rnd(52)-1;
      savecard = deck[i];
      deck[i] = deck[newpos];
      deck[newpos] = savecard;
  }
  printf( "Deck is " );
  for ( i=0; i<52; i++ )
      printf( "%d ", deck[i] );
  printf( "\n" );

The results give us a before and after picture of the deck:

  Deck was 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
  Deck is 35 48 34 13 6 11 49 41 1 32 23 3 16 43 42 18 28 26 25 15 7 27 5 29 44 2 47 38 39 50 31 17 8 14 22 36 12 30 33 10 45 21 46 19 24 9 51 20 4 37 0 40
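One footnote for the statistically picky: swapping every position with a partner chosen from the whole deck is close to uniform, but not exactly so.  The textbook shuffle (usually credited to Fisher and Yates) draws the swap partner only from the positions at or after i, and the change to the loop above is tiny:

  for ( i=0; i<51; i++ ) {
      newpos = i + rnd( 52-i ) - 1;   /* somewhere in i..51 */
      savecard = deck[i];
      deck[i] = deck[newpos];
      deck[newpos] = savecard;
  }

For a card game nobody will notice the difference, but it costs nothing to get it exactly right.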

Different Types of Randomness


People acquainted with statistics know that many real-life events do not happen with a uniform pattern.  The first major repair for a car, for example, might happen between 5 and 9 years after purchase, but it might be most common around the 7th year.  Any year in the range is likely, but it's most likely to be in the middle of the range.

Small unexpected events like these occur in a bell curve shape (called a normal distribution in statistics).  Creating random numbers that conform to such a complex shape may seem like a daunting task, but it really isn't.  Since our rnd() function already produces nicely uniform "unexpected" events, we don't need a statistics textbook formula to generate normally distributed random numbers.  All we need to do is call rnd() a few times and take the average (the central limit theorem at work), simulating a normal distribution.

int normal_rnd( int max ) {
  return (rnd( max ) + rnd( max ) + rnd( max ) + rnd( max ) +
         rnd( max ) + rnd( max ) + rnd( max ) + rnd( max ) ) / 8;
}

Using normal_rnd() in the test program, we get values that are clustered at the mid-point between 1 and max:

  for normal_rnd(), graph[1..10] is 0 0 4 26 37 23 10 0 0 0

Normal random numbers can be used to make a game more life-like, making enemy behaviour less erratic.

For numbers skewed toward the low end of the range, we can create a low_rnd() which favours numbers near 1.

  int low_rnd( int max ) {
    int candidate;

    candidate = rnd( max );
    if ( rnd( 2 ) == 1 )
       return candidate;
    else if ( max > 1 )
       return low_rnd( max / 2 );
    else
       return 1;
  }

In each recursion, low_rnd() splits the range in half, favouring the lower half of the range.  By deducting a low random number from the top of the range, we can write a corresponding high_rnd() favouring numbers near the max:

  int high_rnd( int max ) {
    return max - low_rnd( max ) + 1;
  }

The skewing is easily seen when using the test program:

  for low_rnd(), graph[1..10] is 36 15 11 8 9 3 4 3 3 8
  for high_rnd(), graph[1..10] is 4 5 8 5 4 10 6 10 14 34

Random If Statements


Arbitrary branches in logic can be done with an odds() function.

  int odds( int percent ) {
       if ( percent <= 0 )
          return 0;
       else if ( percent > 100 )
          return 1;
       else if ( rnd( 100 ) <= percent )
          return 1;
       return 0;
  }

This function is true the specified percentage of the time, making it easy to incorporate into an if statement.

  if ( odds( 50 ) )
      printf( "The cave did not collapse!\n" );
  else
      printf( "Ouch! You are squashed beneath a mountain of boulders.\n" );

The standard C library rand() and random() functions provide a program with uniformly distributed random numbers.  The sequence and precision can be controlled by other library functions and the distribution of numbers can be altered by simple functions.  Random numbers can add unpredictability to a program and are, of course, the backbone to exciting play in computer games.


Copyright © 2001, Ken O. Burtch.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

Merchant Empires: Coding your own PHP Universe

By Bryan Brunton


Bryan Brunton is the creator of the Merchant Empires Project. Merchant Empires is a multiplayer, web-based game of space exploration and economic competition. It is a game of strategy, role-playing, combat, and diplomacy. Merchant Empires is based on the venerable BBS game Tradewars. In the article below, Bryan Brunton is interviewed about his experiences in bringing Merchant Empires to life.

Q: Why did you write ME?

A: A number of reasons. First, I wanted to see if it could be done. I have always been a fan of space-based strategy games and I have always wanted to write one. Although I knew that the effort of bringing the idea to completion would be at times tedious, I didn't care. Secondly, I ran across a game called Space Merchant, which is a closed-source, ASP-based implementation of Tradewars, and I was appalled at how badly it had been done. In my opinion, there are many things wrong with the Space Merchant implementation, but one thing really struck me as ridiculous: when playing Space Merchant, occasionally an error screen would pop up that said, "Command not processed due to an Error Storm. Please log out and try again." The utter inanity of the concept of an "Error Storm", and that someone was attempting to pass that explanation off as rational, was, to me, hilarious. I said to myself, "Tradewars deserves better than this." However, at the same time, I don't want to overly disregard the thought and effort that went into Space Merchant. The developers of SM deserve a lot of credit for their work.

Q: What software have you used to bring ME to life?

A: Here is a brief summary of the open source software used in ME:

Apache
Any webserver that supports PHP could be used.
PostgreSQL
PHP
PHPLIB
This library provides classes that simplify PHP database access and session tracking.
Python
The first version of ME was written entirely in Python. Due to performance considerations, I switched to PHP. Parts of ME remain in Python.
PygreSQL
The ME event processor and map creator gather and update ME data that is located on a PostgreSQL server using these libraries.
Medusa Asynchronous Network Libraries
Medusa is used in the ME event processor. These libraries provide telnet access to the ME event processor.
KDevelop
KDevelop is a great editor for HTML/PHP code. I will probably be purchasing the new PHP IDE from Zend.
Gimp
Almost every ME image has been created with this excellent tool.

Q: Many of the ME players tell me that the ME site has been, at times, less than stable. What problems have you run across while developing ME?

A: I ran across a number of bugs and gotchas. The pre-configured scalability of the operating system itself, and of applications such as Apache and PostgreSQL, in most Linux distributions is really quite horrible. In my opinion, pre-configured Linux does not provide a stable platform for a medium-traffic, database-backed website (Apache + PHP + PHPLIB + PostgreSQL). And when I say pre-configured, I mean as installed on the average PC from any of the popular distro CDs.

Here are a few of the problems that I have run across (most of these caused major headaches):

Q: Why on earth would anyone want to put away one of today's state-of-the-art games like Quake III in order to open up a web browser to play ME? Just how interactive can your game be when it doesn't require the CPUs on your players' computers to make even a single gigaflop of floating point calculations?

A: The stateless void of HTML is certainly the last place a player wants to be when, potentially, an enemy vessel could be pounding him into space dust. But a browser-based gaming environment has advantages that I value. I looked at a number of similarly directed projects before writing ME. Many of them had stalled, or the developers had spent six months' time writing a server and client with no playable game to show for their efforts. I wanted to spend my time immediately writing game code. Spending untold hours writing a scalable multiplayer game server was (1) beyond my ability and (2) boring. Also, I like the lowest-common-denominator factor involved in playing ME. All you need is a web browser that supports JavaScript. You can have access to and play ME from a far greater number of places than a game that requires client installation and configuration. As far as what makes a good game, I have always enjoyed intelligent turn-based game play, not frames per second.

Q: The gaming industry as a whole has been very silent concerning Merchant Empires. Recently, when questioning one industry representative about ME and his company's initiatives in bringing games like ME to the marketplace, we received nothing but silence and utter denials of any involvement. What commercial interest has been shown in ME and what future do you see for the "resurrected-from-the-dead, BBS2HTML" gaming market?

A: There is no commercial interest. I despise banner ads. The Merchant Empires site that I run will never use banner ads. This means that I can probably never afford to purchase additional bandwidth to host ME (it is currently run on a friend's 768K DSL line). There is always the chance that a well-funded organization that wants the honor and privilege of sponsoring ME could provide additional bandwidth. One side note on DSL: while it is great that such cheap bandwidth can be brought to the masses, the reliability of DSL (as provided by Qwest in the Colorado Springs, US area) is atrocious. Only a company in monopolistic control of the market, as Qwest is, can afford to provide such lousy service.

Q: How popular is ME?

A: Over 7,000 people have created user accounts. ME has a loyal group of a couple hundred players who play very regularly. In my opinion, the game is somewhat limited in its playability due to its simplistic economic and political models. I would like to flesh out these areas so it might have a greater appeal. The possibility for role-playing is very limited beyond pirating and player-killing.

I enjoy hosting ME because there is something that is just cool about writing a piece of software that gets frequent use and can potentially generate lots of data. I don't know why but I just like lots of data. The ME database can grow to over 100 megs before I delete data from old games and players.

Q: What do the ME players most enjoy about the game?

A: The players seem to most enjoy the politics of planning ways to kill each other. The same is true for most online games that involve combat. In ME, players pick sides and then organize toward the goal of conquering galaxies and then the entire game universe. It is fascinating to watch the organizational approaches that different alliances take along an autocratic-to-democratic continuum. Many of the ME players are also programmers who provide development assistance. The players definitely enjoy watching the game grow and improve.

Q: What plans do you have for improving ME?

A: IMO, Scalable Vector Graphics (SVG) is the future of the web. SVG is essentially an open implementation of Flash. SVG could potentially be more powerful because it is based on open standards such as XML and JavaScript. It is unfortunate that browser-based SVG support on Linux is limited to some barely functional code in the MathML-SVG build of Mozilla. On the Windows and Mac side, Adobe provides a high-quality SVG plug-in. But as Linux is my current desktop of choice, I am currently caught in this SVG dilemma.

There are a few big features that I want to put into ME. I'd like to implement a Java applet that could provide real-time game information. I would also like to introduce computer-controlled ships and planets. Eventually, a computer-controlled Imperium (the police in ME) will play a larger part in the game.

I would also like to remove ME's dependency on PostgreSQL. I have nothing against PostgreSQL but other people have inquired about running ME with MySQL. Currently most of ME's database access is through data classes provided by PHPLIB so removing the few PostgreSQLisms in the code wouldn't require much work.

I am planning a few major changes in ME 2.0. I want to have hexagon-based maps (currently sectors are square), but to do this right, I need SVG. I want to implement a whole new trading model where there are literally hundreds of different goods and contract-based trading agreements. I'd like to do away with ports as separate entities, making ports simply a feature of planets. I would like to replace ME's current simple experience-point advancement model with one that is skill-based. These and other ideas are discussed on the ME Wish List over at SourceForge.

Q: It has been noted by your players that your code sucks. Please don't take this the wrong way, but I really must agree. Before this interview, I was looking through the code to your event processor, the server-side Python process that handles important game events, and I noticed that all of the program's intelligence is crammed into your networking loop.

A: You should first consider that I wrote Merchant Empires as fast as I possibly could. My approach was very simple: look at a Space Merchant screen shot and reproduce it as quickly as possible. Also, writing Merchant Empires was quite intentionally a learning process for myself. Parts of Merchant Empires use C++, PHP, and Python. While I had limited C++ experience, I had never used, and knew nothing about, either PHP or Python. I wanted to learn both of these languages. Parts of Merchant Empires, such as the inconsistent use of CSS and the combat functionality, are, from a coding standpoint, barely at the proof-of-concept stage. At the time I wrote the event processor, I barely understood what a select networking loop was. Today, I have forgotten everything that I learned on that subject, and now I am just pleased that that particular piece of code still works.

Q: So your code is pretty rough around the edges. Have you considered using any recursive programming techniques to spruce it up?

A: Recursion, if properly used, is an awesomely powerful programming tool. However, I have never actually used it. I thought that by interviewing myself for this article (which is a somewhat recursive process), I could introduce myself to the concept of recursion, and if I like it, consider using it in the future.
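For the terminally curious, here is what recursion looks like in Python (one of the languages ME is written in). This is the classic textbook illustration, not anything from the ME code: a function that calls itself on a smaller input until it reaches a base case.

def factorial(n):
    ### Base case: nothing left to multiply.
    if n <= 1:
        return 1
    ### Recursive case: the function calls itself on a smaller input.
    return n * factorial(n - 1)

print factorial(5)   ### prints 120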


Copyright © 2001, Bryan Brunton.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


HelpDex

By Shane Collinge


barbie.jpg
concentrate.jpg
prettyprettypls.jpg
creditcard.jpg
dawn.jpg
elephant.jpg
oops.jpg
weekend.jpg
deposit.jpg
rootpassword.jpg

Courtesy Linux Today, where you can read all the latest Help Dex cartoons.

[Shane invites your suggestions for future HelpDex cartoons. What do you think is funny about the Linux world? Send him your ideas and let him expand on them. His address is shane_collinge@yahoo.com Suggesters will be acknowledged in the cartoon unless you request not to be. -Mike.]


Copyright © 2001, Shane Collinge.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Installing dict - An On-Line Dictionary

By Chris Gibbs


Purpose of this Paper

To advertise the efforts of http://www.dict.org and to provide the means by which any Linux user, regardless of experience, can install a functional dictionary system for either local or network use.

CONTENTS

  1. Introduction
  2. The DICT Development Group (www.dict.org)
  3. Available Dictionaries
  4. Installation
    1. dictd, dict and dictzip
    2. Webster's
    3. WordNet (r) 1.6
    4. Jargon File, FOLDOC, The Elements, Easton's Bible Dictionary & Hitchcock's Bible Names Dictionary
    5. More up-to-date Jargon File
    6. US Gazetteer
    7. The Devil's Dictionary
    8. Who Was Who: 5000 B. C. to Date
    9. Language Dictionaries
  5. Configuring dictd
  6. Using dict
  7. Kdict
  8. Conclusion

Introduction

I have been using Linux exclusively as my operating system for over three years now. One of the very few things I miss about "that other operating system" is the easy availability of cheap or even free versions of commercial encyclopedias and dictionaries.

So when I installed a recent version of S.u.S.E. Linux I was both surprised and happy to find a package called Kdict had been installed on my machine. Reading the documentation that came with the package revealed that the program was only a front end to another program, and that though it is possible to install a dictionary server locally, if I wanted to do so I would have to get everything else I needed from the Internet.

The DICT Development Group (www.dict.org)

Note:- This section paraphrases the contents of ANNOUNCE in the dict distribution.

The DICT Development Group (www.dict.org) has developed a Dictionary Server Protocol (described in RFC 2229), written client/server software in C as well as clients in other languages such as Java and Perl, and converted various freely available dictionaries for use with their software.

The Dictionary Server Protocol (DICT) is a TCP transaction based query/response protocol that allows a client to access dictionary definitions from a set of natural language dictionary databases.
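For the curious, a DICT session is a simple exchange of text commands and numbered status replies, much like SMTP or NNTP. The sketch below is paraphrased from RFC 2229 (the exact banner and status text varies from server to server; "wn" is the WordNet database described later in this article):

   C: DEFINE wn linux
   S: 150 1 definitions retrieved
   S: 151 "linux" wn "WordNet (r) 1.6"
   S: ...the text of the definition, ending with a line containing a single "."...
   S: 250 ok
   C: QUIT
   S: 221 Closing Connection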

dict(1) is a client which can access DICT servers from the command line.

dictd(8) is a server which supports the DICT protocol.

dictzip(1) is a compression program which creates compressed files in the gzip format (see RFC 1952). However, unlike gzip(1), dictzip(1) compresses the file in pieces and stores an index to the pieces in the gzip header. This allows random access to the file at the granularity of the compressed pieces (currently about 64kB) while maintaining good compression ratios (within 5% of the expected ratio for dictionary data). dictd(8) uses files stored in this format.
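To make the random-access idea concrete, here is a small Python sketch of compressing data in independent pieces while keeping an index of where each compressed piece starts. Be warned that this is not dictzip's actual on-disk format (dictzip hides its index inside the gzip header, as described above); it merely illustrates why piecewise compression permits seeking:

import zlib, string

CHUNK = 64 * 1024   ### dictzip uses pieces of roughly this size

def compress_chunks(data):
    ### Compress the data in independent pieces and build a seek index.
    index = []      ### one (offset, compressed_length) pair per piece
    pieces = []
    offset = 0
    for start in range(0, len(data), CHUNK):
        piece = zlib.compress(data[start:start + CHUNK])
        index.append((offset, len(piece)))
        pieces.append(piece)
        offset = offset + len(piece)
    return string.join(pieces, ""), index

def read_chunk(blob, index, n):
    ### Decompress only piece n; the other pieces are never inflated.
    offset, length = index[n]
    return zlib.decompress(blob[offset:offset + length])

data = "example " * 100000
blob, index = compress_chunks(data)
assert read_chunk(blob, index, 1) == data[CHUNK:2 * CHUNK]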

Available in separate .tar.gz files are the data, conversion programs, and formatted output for several freely-distributable dictionaries. For any single dictionary, the terms for commercial distribution may be different from the terms for non-commercial distribution -- be sure to read the copyright and licensing information at the top of each database file. Below are approximate sizes for the databases, showing the number of headwords in each, and the space required to store the database:

Database     Headwords    Index      Data       Uncompressed
web1913      185399       3438 kB    11 MB      30 MB
wn           121967       2427 kB    7142 kB    21 MB
gazetteer    52994        1087 kB    1754 kB    8351 kB
jargon       2135         38 kB      536 kB     1248 kB
foldoc       11508        220 kB     1759 kB    4275 kB
elements     131          2 kB       12 kB      38 kB
easton       3968         64 kB      1077 kB    2648 kB
hitchcock    2619         34 kB      33 kB      85 kB
www          587          8 kB       58 kB      135 kB

All of these compressed databases and indices can be stored in approximately 32MB of disk space.

Additionally there are a number of bi-lingual dictionaries to help with translation. Though I have not looked at these, judging from their different sizes some will be more useful than others (e.g. English to Welsh is unfortunately not very good, whereas English to German is probably quite useful).

All the dictionaries seem to be under constant development, so interested people should keep up with the latest developments.

Available Dictionaries


Webster's Revised Unabridged Dictionary (1913)

The Oxford English Dictionary this is not! It is however a very pleasant dictionary. It seems to be an American version of one of those Dictionary/Encyclopedias, so common at the time of its writing. Quite often in a definition you will find a poetic quote and it really is very informative and pleasant to use.



WordNet (r) 1.6

This dictionary seems to be under constant development. The aim seems to be to provide definitions of all the words people want to have definitions for! In practice it seems to miss some obvious words such as "with" and "without". I guess the idea is to simply provide necessary updates to the definitions found in Webster's. Unfortunately this dictionary is neither as informative nor as pleasant as Webster's, but if you need a more up-to-date dictionary it is necessary.



The Free On-line Dictionary of Computing (15Feb98)

FOLDOC is a searchable dictionary of acronyms, jargon, programming languages, tools, architecture, operating systems, networking, theory, conventions, standards, mathematics, telecoms, electronics, institutions, companies, projects, products, history, in fact anything to do with computing. The dictionary is Copyright Denis Howe 1993, 1997.



U.S. Gazetteer (1990)

This is probably only of interest to people wanting information about America. The original U.S. Gazetteer Place and Zipcode Files are provided by the U.S. Census Bureau and are in the Public Domain.



Easton's 1897 Bible Dictionary

These Dictionary topics are from M.G. Easton M.A., D.D., Illustrated Bible Dictionary, Third Edition, published by Thomas Nelson, 1897. Due to the nature of etext, the illustrated portion of the Dictionary has not been included.



Hitchcock's Bible Names Dictionary (late 1800's)

This dictionary is from "Hitchcock's New and Complete Analysis of the Holy Bible," published in the late 1800s. It contains more than 2,500 Bible and Bible-related proper names and their meanings. Some Hebrew words of uncertain meaning have been left out. It is out of copyright, so feel free to copy and distribute it. I pray it will help in your study of God's Word. --Brad Haugaard



The Elements (22Oct97)

This dictionary database was created by Jay Kominek <jfk at acm.org>.



The CIA World Factbook (1995)

This somewhat typically short-sighted view of the world (sorry, I love America - I lived there for a while and it's great - but it is not ALL THE WORLD!) really only becomes useful if you look in the index file and see that there are Appendixes. These, though, are of limited use to normal people, who think that the world ends at their keyboard.



Jargon File (4.2.0, 31 JAN 2000)

The Jargon File is a comprehensive compendium of hacker slang illuminating many aspects of hackish tradition, folklore, and humor. This bears remarkable similarity to FOLDOC above.



THE DEVIL'S DICTIONARY ((C)1911 Released April 15 1993)

_The Devil's Dictionary_ was begun in a weekly paper in 1881, and was continued in a desultory way at long intervals until 1906. In that year a large part of it was published in covers with the title _The Cynic's Word Book_, a name which the author had not the power to reject or happiness to approve. Users of the fortune program will already have some familiarity with this ;-).



Who Was Who

Who Was Who: 5000 B. C. to Date: Biographical Dictionary of the Famous and Those Who Wanted to Be, edited by Irwin L. Gordon

OTHER DICTIONARIES

A number of other dictionaries have been made available; see the dict home page for details. In many cases you may find that the program to convert the dictionary data to the format dict requires has not been written yet ;-(

As mentioned elsewhere, there are a number of translation dictionaries also available (see below).

Installation

The links given here were correct at the time of writing. If it is a long time since this paper was published you should visit http://www.dict.org to see what has changed.

Unfortunately installation of the above mentioned software did not go quite as easily as it should have, which partly explains why I am writing this;-).

The first thing you will need is plenty of disk space. The largest dictionary available is Webster's 1913 dictionary, which needs about 85MB of free space to be rebuilt in.

dictd, dict and dictzip

Unarchive dictd-1.5.5.tar.gz in the normal manner.

IMPORTANT:- The HTML support has been turned off in this version of dict. You need to turn it back on if you want to take advantage of Kdict.

Load the file dict.c into your favorite editor and remove the comments from line 1069:-


      { "raw",        0, 0, 'r' },
      { "pager",      1, 0, 'P' },
      { "debug",      1, 0, 502 },
         { "html",       0, 0, 503 },    //Remove comments from this line
      { "pipesize",   1, 0, 504 },
      { "client",     1, 0, 505 },

so the file becomes as above.

Now you can run ./configure; make; make install. You will see a great many warnings produced by the compiler, but at the end you should have a working client, server and compression program installed.

Webster's

Unpack the files dict-web1913-1.4.tar.gz and web1913-0.46-a.tar.gz:


     $ tar xvzf dict-web1913-1.4.tar.gz
     $ tar xvzf web1913-0.46-a.tar.gz
     $ cd dict-web1913-1.4 
     $ mkdir web1913
     $ cp ../web1913-0.46-a/* web1913
     $ ./configure
     $ make
     $ make db

Now go and make a cup of tea; this takes over an hour on my 133MHz box. When done, decide on a place for your dictionaries to live and copy them there. I use /opt/public/dict-dbs as suggested:-


     $ mkdir /opt/public/dict-dbs
     $ cp web1913.dict.dz /opt/public/dict-dbs
     $ cp web1913.index /opt/public/dict-dbs

WordNet (r) 1.6

Grab dict-wn-1.5.tar.gz

It is a great shame that one of the most useful dictionaries is also the one that refuses to compile correctly. To create a viable dictionary, the original data must be parsed by a program, and when you run make it is this program that is created. Unfortunately this package uses a Makefile created by ./configure which does not work. I am unable to correct the automake procedure, but can assure you that the following will work:


   $ tar xvzf dict-wn-1.5.tar.gz
   $ cd dict-wn-1.5 
   $ ./configure
   $ gcc -o wnfilter wnfilter.c
   $ make db

Again, this process takes a considerable amount of time (over an hour on my 133MHz box). Once complete, if you have not already created a directory for your dictionaries, do so now and copy the dictionary and its index there:


   $ cp wn.dict.dz /opt/public/dict-dbs
   $ cp wn.index /opt/public/dict-dbs

Jargon File, FOLDOC, The Elements, Easton's Bible Dictionary & Hitchcock's Bible Names Dictionary

Grab dict-misc-1.5.tar.gz


   $ tar xvzf dict-misc-1.5.tar.gz
   $ cd  dict-misc-1.5
   $ ./configure
   $ make
   $ make db
   
   $ cp easton.dict.dz /opt/public/dict-dbs
   $ cp easton.index /opt/public/dict-dbs
   $ cp elements.dict.dz /opt/public/dict-dbs
   $ cp elements.index /opt/public/dict-dbs
   $ cp foldoc.dict.dz /opt/public/dict-dbs
   $ cp foldoc.index /opt/public/dict-dbs
   $ cp hitchcock.dict.dz /opt/public/dict-dbs
   $ cp hitchcock.index /opt/public/dict-dbs
   $ cp jargon.dict.dz /opt/public/dict-dbs
   $ cp jargon.index /opt/public/dict-dbs

More up-to-date Jargon File

Grab dict-jargon-4.2.0.tar.gz


   $ tar xvzf dict-jargon-4.2.0.tar.gz
   $ cd dict-jargon-4.2.0
   $ ./configure
   $ make
   $ make db

   $ cp jargon.dict.dz /opt/public/dict-dbs
   $ cp jargon.index /opt/public/dict-dbs

US Gazetteer

Grab dict-gazetteer-1.3.tar.gz


   $ tar xvzf dict-gazetteer-1.3.tar.gz
   $ cd dict-gazetteer-1.3
   $ ./configure
   $ make
   $ make db

   $ cp gazetteer.dict.dz /opt/public/dict-dbs
   $ cp gazetteer.index /opt/public/dict-dbs

The Devil's Dictionary

Grab devils-dict-pre.tar.gz

As with the language dictionaries below, the dictionary has already been created for you. Simply unpack this file in your dictionary directory.

Who Was Who: 5000 B. C. to Date

Grab http://www.hawklord.uklinux.net/dict/www-1.0.tgz


   $ tar xvzf www-1.0.tgz
   $ cd www-1.0
   $ ./configure
   $ make
   $ make db

   $ cp www.dict.dz /opt/public/dict-dbs
   $ cp www.index /opt/public/dict-dbs

Language Dictionaries

Visit ftp://ftp.dict.org/pub/dict/pre/www.freedict.de/20000906

Installing a language dictionary does not involve re-building the dictionary from original data, so you just need to unpack each file into your dictionary directory.

Configuring dictd

dictd expects to find the file /etc/dictd.conf, though an alternative file may be specified on the command line. Each dictionary needs to be specified in this file so dictd can find the dictionary and its index. For example if you just want to use Webster's, WordNet and The Devils Dictionary, then the following entries will be required (assuming you use /opt/public/dict-dbs as your dictionary directory):


    database Web-1913  { data "/opt/public/dict-dbs/web1913.dict.dz"
			index "/opt/public/dict-dbs/web1913.index" }
    database wn        { data "/opt/public/dict-dbs/wn.dict.dz"
			index "/opt/public/dict-dbs/wn.index" }
    database devils    { data "/opt/public/dict-dbs/devils.dict.dz"
			index "/opt/public/dict-dbs/devils.index" }

Advanced Configuration

It seems it is possible to implement user access control and other security measures. I have not tried this; if I were into security issues, the current state of the software would give me no reason to trust any security feature it might have. But why anyone would want to restrict access to these dictionaries is completely beyond me - this is stuff any user has a right to use.

You should be aware of a number of security issues if you intend to make dictd available over a local network, since ignoring them will leave your server vulnerable to a number of possible attacks.

Unless you are installing dictd on a server for a school/college or for some other large network these issues will probably be of no concern to you. If you are installing on such a network then you should already be aware of the issues below.

Server Overload, Denial of Service, Heavy Swapping

All these symptoms can occur if a number of users send queries like MATCH * re . at the same time. Such queries return the whole database index, and each instance will require around 5MB of buffer space on the server.

Possible solutions include limiting the number of connections to the server, limiting the amount of data that can be returned for a single query or limiting the number of simultaneous outstanding searches.

Denial of Service

The server can be driven to a complete standstill by any evil-minded cracker who cares to connect to the server 1,000,000 times.

To prevent such anti-social behavior, simply limit the number of connections based on IP address or netmask.
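As a toy illustration of connection limiting - this is a Python sketch of the general technique, not dictd's actual mechanism - a counting semaphore can cap the number of clients served at once (2628 is the DICT port, and the 220/420 status codes are borrowed from RFC 2229):

import socket, threading

MAX_CLIENTS = 10
slots = threading.Semaphore(MAX_CLIENTS)

def handle(conn):
    try:
        conn.send("220 toy dict server ready\r\n")
        ### A real server would read and answer queries here.
        conn.close()
    finally:
        slots.release()     ### free the slot for the next client

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 2628))
server.listen(5)
while 1:
    conn, addr = server.accept()
    if slots.acquire(0):    ### non-blocking: did we get a free slot?
        threading.Thread(target=handle, args=(conn,)).start()
    else:
        conn.send("420 Server temporarily unavailable\r\n")
        conn.close()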

Buffer Overflow

If you experience this kind of problem, you should make your logging routines more robust, use strlen and examine daemon_log.

Using dict

dict expects to find the file /etc/dict.conf. This file should contain a line with the name of the machine you wish to use as your dictd server, though this can be overridden at the command line.
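For example, a minimal /etc/dict.conf that points dict at a server running on the same machine can contain nothing more than the single line below (see dict(1) for the full syntax; substitute your own server's hostname as required):

   server localhost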

The current version of dict is a little disappointing as a user's front-end for dictd. If all you have is a console and you can't use Kdict, then you will just have to get used to dict. The worst thing about dict is that it can trash your console, and you will need to take action (such as logging out and back in) to restore the keyboard to normal! This typically occurs if there is a problem with dictd, such as when it is not running and you try to use dict.

Since dict is a console program, it simply sends its output to less. So unless you have a very good memory you will need to use `cut and paste' to transfer referenced words or phrases back to the command line.

There is an option to send output to a pager program. I tried the command dict -html -P lynx luser, and the result was not a happy one! Lynx went mad, referencing random help and configuration files in a manner that reminded me of certain viruses in MS operating systems.

Personally I would say if you can avoid using dict directly, avoid it! It is necessary to have it if you want to use Kdict, and you do want to use Kdict.

Kdict

Kdict.gif

To take full advantage of dict you really need Kdict from http://www.rhrk.uni-kl.de/~gebauerc/kdict. I have used version 0.2 and cannot speak for any other version.

To use Kdict you must turn HTML support back on for dict as described above.

The screen shot above shows Kdict in use. Kdict makes good use of the limited HTML tags provided by dict, and inserts extra tags so that you can easily cross-reference words. Any phrase or word shown in red can be clicked on with the mouse to show its definition.

What makes Kdict so good is the fact that you can use the clipboard to highlight a word from any window on the desktop and paste it into Kdict as a query.

Conclusion

This is a great project that can only get better, so it is a lot like Linux and GNU software in general... Give it your full support!

If you get xscrabble from Matt Chapman's homepage, you can enhance your enjoyment of the game by looking up the definitions of words you don't know - as the computer beats the sh*t out of you ;-).


Copyright © 2001, Chris Gibbs.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Downloading LinuxToday links and Linux Gazette's TOC with Python (and Perl)

By Mark Nielsen


Contents

  1. Introduction
  2. The Python Script
  3. Setting up a cron job
  4. A Perl Script I wrote to download Linux Gazette TOC.
  5. A Perl Script I wrote to download Debian Weekly News
  6. Conclusion
  7. References

Introduction

I wanted to add Linux Today's links to my website GNUJobs.com, just for the fun of it. Later, I want to add more headlines from other websites, and perhaps LG's latest edition. I had a choice of Perl or Python. I chose Python because I have been using it for quite a while for a mathematical project, and it has proven quite useful. I want to make a habit of using Python now. It tends to be easier for me to program in Python than Perl. Also, in the future, I wish to use threading to download many webpages at the same time, which Python does very well. I might as well do it in Python now since I know I will use it later.

Both Perl and Python will let you download webpages off the Internet. You can do more than just download webpages: the same modules also handle protocols such as FTP and Gopher. Downloading a webpage is just one thing these languages can do.

There are several things the script has to do: download the webpage, parse out each entry (header, link, and date), write the entries to a new HTML file, and move the new file into place only if valid entries were found.

This article isn't going to be too long. I commented the Python code.

The Python Script

If you want to include the output of this script to a webpage, then you can use the Server-Side Include (SSI) module in the Apache webserver and use a command like:
<!--#include virtual="/lthead.html" -->
in your webpage. Various programming languages (like PHP, Perl ASP, Perl Mason, etc) can also include files.

It is assumed you are using a GNU/Linux operating system. Also, I was using Python 1.5.2, which is not the latest version. You might have to do a

chmod 755 LinuxToday.py
on the script to make it executable. [Text version of this listing.]
#!/usr/bin/python

# One obvious thing to do is apply error checking for url download,
# download must contain at least one entry, and we are able to create the
# new file. This will be done later.

  ### import the web (urllib) module, string module, regular expression module,
  ### and the os module
import urllib, string, re, os

  ### define the new webpage we create and where to get the info
Download_Location = "/tmp/lthead.html"
Url = "http://linuxtoday.com/backend/lthead.txt"

#-----------------------------------------------------------
  ### Create a web object with the Url
LinuxToday = urllib.urlopen( Url )
  ### Grab all the info into an array (if big, change to do one line at a time)
Text_Array =  LinuxToday.readlines()

New_File  = open(Download_Location + "_new", 'w');
New_File.write("<ul>\n") 
  ### Set the default to be invalid
Valid = 0
  ### Record the number of valid entries
Entry_No = 0;
Entry_Valid = 0
  ### Setup the defaults
Date = ""
Link = ""
Header = ""
Count = 0
  ### Create the pattern-matching expression
Match = re.compile ("^\&\&")

  ### Append && to make sure we parse the last entry
Text_Array.append('&&')
  ### For each line, do the following
for Line in Text_Array :
    ### If && exists, start from scratch, add last entry
  if Match.search(Line) :
      ### If the current entry is valid and we have skipped the first one, 
    if (Entry_No > 1) and (Entry_Valid > 0) :
	### One thing that Perl does better than Python is the print command. I
	### don't like how Python prints (no variable interpolation).
      New_File.write('<li> <a href="' + Link + '">' + Header + '</a>. ' + Date + "</li>\n")
      ## Reset the values to nothing.
    Header = ""; Link = ""; Date = ""; Entry_Valid = 0
    Count = 0 
    
    ### Delete whitespace at end of line
  Line = string.rstrip(Line)

    ### If count is equal to 1, header, 2 link, 3 date
  if Count == 1:    Header = Line
  elif Count == 2:  Link = Line
  elif Count == 3:  
    Date = Line
      ### If at least one field is filled in, we have a valid entry
    if  (Header != "") or (Link != "") or (Date != "") :
      Entry_No = Entry_No + 1
      Entry_Valid = 1  

    ### Add one to Count
  Count = Count + 1

New_File.write("</ul>\n")

New_File.close()

  ### If we have valid entries, move the new file to the real location
if Entry_No > 0 :
    ### We could just do:
    ### os.rename(Download_Location + "_new", Download_Location)
    ### But here's how to do it with an external command.
  Command = "mv " + Download_Location + "_new " + Download_Location
  os.system( Command )

Setting up a cron job

Not the best crontab file, but it will do.
### Crontab file
### Name the file "Crontab" and install it with "crontab Crontab"

  ### Download every two hours, on the hour
0 */2 * * *   /www/Cron/LinuxToday.py >> /www/Cron/out  2>&1

A Perl Script I wrote to download Linux Gazette TOC

Just so you can compare the Python script to a Perl script, I created a Perl script which downloads the TOC of LG's latest edition. [Text version of this listing.]
#!/usr/bin/perl
# Copyright Mark Nielsen January 2001
# Copyrighted under the GPL license.

# I am proud of this script.
# I wrote it from scratch with only 2 minor errors when I first tested it.

system ("lynx --source http://www.linuxgazette.com/ftpfiles.txt > /tmp/List.txt");

  ### Open up the webpage we just downloaded and put it into an array.
open(FILE,'/tmp/List.txt'); my @Lines = <FILE>; close FILE; 
  ### Filter out lines that don't contain magic letters.
@Lines = grep(($_ =~ /lg\-issue/) || ($_ =~ /\.tar\.gz/), @Lines );

my @Numbers = ();
foreach my $Line (@Lines)
  {
    ## Throw away the stuff to the left
  my ($Junk,$Good) = split(/lg\-issue/,$Line,2);
    ## Throw away the stuff to the right
  ($Good,$Junk) = split(/\.tar\.gz/,$Good,2);
    ## If it is a valid number (greater than 0), save it
  if ($Good > 0) {push (@Numbers,$Good);}
  }

   ### Sort the numbers and pop off the highest
@Numbers = sort {$a<=>$b} @Numbers;
my $Highest = pop @Numbers;
   ## Create the url we are going to download
my $Url = "http://www.linuxgazette.com/issue$Highest/index.html"; 
   ## Download it
system ("lynx --source $Url > /tmp/LG_index.html");

   ### Open up the index.
open(FILE,"/tmp/LG_index.html"); my @Lines = <FILE>; close FILE;
   ### Extract out the parts that are between beginning and end of TOC.
my @TOC = ();
my $Count = 0;
my $Start = '<!-- *** BEGIN toc *** -->';
my $End = '<!-- *** END toc *** -->';
foreach my $Line (@Lines) 
  {
  if ($Line =~ /\Q$End\E/) {$Count = 2;}
  if ($Count == 1) {push(@TOC, $Line);}
  if ($Line =~ /\Q$Start\E/) {$Count = 1;}
  }

  ### Relink all the links to point to the Linux Gazette magazine
my $Relink = "http://www.linuxgazette.com/issue$Highest/";
grep($_ =~ s/HREF\=\"/HREF\=\"$Relink/g, @TOC);

  ### Save the output
open(FILE,">/tmp/TOC.html"); print FILE @TOC; close FILE;

  ### Done!

A Perl Script I wrote to download Debian Weekly News

I like to keep track of Debian Weekly News, so I wrote this one also. One bad thing about programming is that when you get really good at programming in a certain way, it is hard to switch to another programming language. These two Perl scripts I wrote without looking at any code. The Python code took me a while, because I am still not used to it. [Text version of this listing.]
#!/usr/bin/perl
# Copyright Mark Nielsen January 2001
# Copyrighted under the GPL license.

system ("lynx --source http://www.debian.org/News/weekly/index.html > /tmp/List2.txt");

  ### Open up the webpage we just downloaded and put it into an array.
open(FILE,'/tmp/List2.txt'); my @Lines = <FILE>; close FILE; 
   ### Extract out the parts that are between beginning and end of TOC.
my @TOC = ();
my $Count = 0;
my $Start = 'Recent issues of Debian Weekly News';
my $End = '</p>';
foreach my $Line (@Lines) 
  {
  if (($Line =~ /\Q$End\E/i) && ($Count > 0)) {$Count = 2;}
  if ($Count == 1) {push(@TOC, $Line);}
  if ($Line =~ /^\Q$Start\E/i) {$Count = 1;}
  }

  ### Relink all the links to point to the DWN
my $Relink = "http://www.debian.org/News/weekly/";
grep($_ =~ s/HREF\=\"/HREF\=\"$Relink/ig, @TOC);
grep($_ =~ s/\"\>/\" target=_external\>/ig, @TOC);

  ### Save the output
open(FILE,">/tmp/D.html"); print FILE @TOC; close FILE;

  ### Done!

Conclusion

The Python script is actually more complex than it needs to be. The reason I made it longer was to introduce various modules and to be flexible in case LinuxToday's format changes someday. The only thing the script lacks is error detection in case it can't download the web page, write the new file or rename it. Also, watch the regular-expression modules in Python, because they have been changing in recent versions to increase efficiency and incorporate Unicode support.

Python rules as a programming language. I found it very easy to use the Python modules. It seems like the Python module for handling webpages is easier to use than the LWP module in Perl. Because of the many possibilities of Python, I plan on creating a Python script which will download many webpages at the same time using Python's threading capabilities.
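As a taste of what that might look like, here is a sketch of the idea (this is not the finished script, and the URLs are just examples):

import threading, urllib

def fetch(url, results):
    ### Store each page's contents under its URL.
    results[url] = urllib.urlopen(url).read()

urls = ["http://linuxtoday.com/", "http://www.linuxgazette.com/"]
results = {}
threads = []
for url in urls:
    t = threading.Thread(target=fetch, args=(url, results))
    threads.append(t)
    t.start()
for t in threads:
    t.join()    ### wait for all the downloads to finish
for url in urls:
    print url, len(results[url])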

References

  1. LinuxToday's links
  2. Python's urllib module
  3. Original site for this article (any updates will be here)


Copyright © 2001, Mark Nielsen.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Securely Erasing a Hard Drive with Perl

By Mark Nielsen


Contents

  1. Introduction
  2. What problems I have.
  3. The Perl Script
  4. Conclusion
  5. References

Introduction

When GNUJobs.com moved from Ohio to California, it had some hard drives (along with other hardware and software) which were to be donated to COLUG (Central Ohio Linux Users Group). They needed to be wiped before they were donated. Two of the three hard drives had bad sectors, and the third I ended up using as a test drive (for things like creating this article), so I ended up not giving any away. Still, I will need to wipe a hard drive in the future, so I created this Perl script (which I will later convert to Python and give more options).

The goal of this Perl script is just to wipe the hard drive at /dev/hdb (the slave drive on the primary IDE controller), since I have a hard drive removable kit there. I want it to delete all partitions, create one partition that takes up the whole hard drive, and then fill up the hard drive with garbage data (including some random encrypted data, just to ruin a hacker's day trying to figure out what the data is).

The Problems

Here is a list of problems I had and how I solved them:
  1. How do I get it to delete all the partitions?

    I remember researching many different options to alter partitions on a hard drive, and doing it manually yielded the best results. I had used a Perl Expect script to automate the fdisk program (fdisk partitions hard drives in Linux) in the past, and I decided to continue to do it that way. I believe there are better alternatives for the simple task of deleting all the partitions, like sfdisk and others, but if one solution covers all possibilities with full power and flexibility, I usually stick to that one way of doing things so that I don't have to remember too much, and if the task ever gets more complicated, I don't have to learn anything new.

    Thus, I used Expect code to simulate a user typing in the commands for fdisk. The Expect code deleted all the partitions and then it created one big partition.

  2. How do I fill up the hard drive with garbage data?

    Just deleting partitions isn't enough to delete the data. I want to overwrite all the old data with garbage to make sure it is really gone. I used sfdisk to get the size of the partition that fdisk created. Then I created a loop which continuously prints garbage data until the amount printed is equal to or greater than the size of the partition.

  3. How do I put binary data on the hard drive to confuse a hacker?

    I created random binary data using the random function and "chr" function in Perl. Then, I encrypted the random data using the Perl Blowfish module. If someone manages to decrypt the data, it will still look like garbage and confuse them. I wanted to encrypt the data so that it didn't look purely random in a mathematical sense.

  4. How do I reformat the big partition?

    This was easy. I just used a simple "mkfs" command.

The Perl Script

The version of Perl I was using for this Perl script was out of date. I was using Perl 5.005_03 and I believe Perl is up to 5.6 as of 1/2001.

There are a lot of things I need to enhance to make this script more user-friendly. There should be a lot more error checking, considering how dangerous this script is, and prompts to ask users if they really want to go through with it. I am waiting until I restart my MILAS project (which will be written in Python) before I make this script better. It was only meant to get me through moving from Columbus to the Bay Area.

I have commented a lot of the code, so hopefully a novice Perl programmer can understand most of what I am trying to do. (Text version of this listing)

#!/usr/bin/perl

##### Things to do
# 1. Make sure we create a brand new directory for temporary mounting
#     in order to avoid security risks in case someone is logged in.
# 2. Use perl functions to handle a lot of the system calls. 
# 3. Let it autodetect hard drives, and floppy drives, and only perform
#    actions on unmounted hard drives and floppy drives.
##### 

use strict;
use Expect;
use Crypt::Blowfish;

#-----------------------------------------------
my $Junk;
  ### Set the drive to the slave drive on the Primary IDE controller.
my $Drive = "hdb";

  ### Let us do a lot of random stuff, and get the last line from the
  ### /etc/passwd file to make it really random, assuming one person
  ### has been added to the computer. 
my $time = time();
my $Ran = rand($time);
$Ran = rand(10000000000000);
my $LastLine = `tail -n 1 /etc/passwd`; chomp $LastLine;
$LastLine = substr ($LastLine,0,30);
my $Blowfish_Key = $LastLine . $Ran . $time;
$Blowfish_Key = substr ($Blowfish_Key,0,20);
while (length ($Blowfish_Key) < 56) 
  {
  $Blowfish_Key .= $Ran = rand($time);
  }
$Blowfish_Key = substr ($Blowfish_Key,0,56);

  ### Done making up random key, now create Blowfish Encryption object.
my $Blowfish_Cipher = new Crypt::Blowfish $Blowfish_Key;

#------------------------------------
system "clear";
print "This will wipe out the hard drive on Drive /dev/$Drive\n";
print "Press enter to continue\n";
my $R = <STDIN>;

  ### Get the list of mounted partitions on the drive we want to wipe out
my @Mounted = `df`;
@Mounted = grep($_ =~ /\/dev\/$Drive/, @Mounted);
  ### Foreach mounted partition, umount it
foreach my $Mount (@Mounted)
  {
  my ($Partition,$Junk) = split(/\s+/, $Mount,2);
  print "Unmounting $Partition\n";
  my $Result = system ("umount $Partition");
  if ($Result > 0) 
    {
    print "ERROR, unable to umount $Partition, aborting Script, Error = $Result\n";
    exit;
    }
  }

  ### Start the expect script, which will simulate someone doing this
  ### commands manually.
my $Fdisk = Expect->spawn("/sbin/fdisk /dev/$Drive");

  ### Get a list of mounted partitions by printing the partition table 
print $Fdisk "p\n";
my $match=$Fdisk->expect(30,"Device Boot    Start");

my $Temp = $Fdisk->exp_after();
my @Temp = split(/\n/, $Temp);
  ## Get the lines that tell us about the partitions
my @Partitions = grep($_ =~ /^\/dev\//, @Temp);

  ## Foreach line, delete the partition
foreach my $Line (reverse @Partitions)
  {
    ## Get the /dev/hdb part, and its number
  my ($Part,$Junk) = split(/[\t ]/, $Line,2);
  my $No = $Part;
  $No =~ s/^\/dev\/$Drive//;

  print "Deleting no $Drive $No\n";     

    ## Delete command
  print $Fdisk "d\n";    
  $match=$Fdisk->expect(30,"Partition number");
   
    ## Which partition number to delete
  print $Fdisk "$No\n";
  $match=$Fdisk->expect(30,"Command (m for help):");
  }

$Fdisk->clear_accum();

  ### If we had partitions, write changes, or otherwise, just end it
if (@Partitions < 1) {print $Fdisk "q\n"; $Fdisk->expect(2,":");}
else 
  {
  print $Fdisk "w\n";
  $Fdisk->expect(30,"Command (m for help):");
  }

#-------------------------------
  ## Get the geometry of the hard drive
my $Geometry = `/sbin/sfdisk -g /dev/$Drive`;
my ($Junk, $Cyl, $Junk2, $Head, $Junk3, $Sector,@Junk) = split(/\s+/,$Geometry);
if ($Cyl < 1) 
   {print "ERROR: Unable to figure out cylinders for drive. aborting\n"; exit;}

  ### Create a new expect script to simulate a person using fdisk
my $Fdisk = Expect->spawn("/sbin/fdisk /dev/$Drive");

   #### Tell fdisk to create new partition
print $Fdisk "n\n";
$Fdisk->expect(5,"primary");

  ### Tell it the new partition should be a primary partition
print $Fdisk "p\n";
$Fdisk->expect(5,":");

  ### Which partition, number 1
print $Fdisk "1\n";
$Fdisk->expect(5,":");

  ### Start at cylinder 1
print $Fdisk "1\n";
$Fdisk->expect(5,":");

  ### Go to the end
print $Fdisk "$Cyl\n";
$Fdisk->expect(5,":");

  ### Write and save
print $Fdisk "w\n"; 
$Fdisk->expect(30,"Command (m for help):");

#------------------------------------------
### Format the partition and mount it

my $Partition = "/dev/$Drive" . "1";
my $Result = system ("mkfs -t ext2 $Partition");
if ($Result > 0) {print "Error making partition, aborting.\n"; exit;}

   ### There should be better error checking here
system "umount /tmp/WIPE_IT";
system "rm -rf /tmp/WIPE_IT";
system "mkdir -p /tmp/WIPE_IT";
system "chmod 700 /tmp/WIPE_IT";

  ## See if we can mount the new partition.
my $Result = system ("mount $Partition /tmp/WIPE_IT");
if ($Result > 0) {print "Error mounting drive, aborting.\n"; exit;}
system "chmod 700 /tmp/WIPE_IT";

#--------------------------------
### Now create the file and stop when we hit the size.

my $Count = 0;
my $Written_Size = 0;  

  ### Open up a new file.
open(FILE,">>/tmp/WIPE_IT/Message.txt");
   ### If someone actually wants to screw around with your hard drive, 
   ### let us play with them and waste their time by adding a teaser. 
my $Ran = rand 259200000;   # between now and roughly eight years ago
($Ran, $Junk) = split(/\./, $Ran, 2);
   ## New date minus random number of seconds
my $Date = `date --date '-$Ran seconds'`;

print FILE "DATE CREATED $Date\n";
my $Ran = rand 50;
($Ran, $Junk) = split(/\./, $Ran, 2);
$Ran = $Ran + 10;
print FILE "This document is extremely secure. It is a violation to let 
any unauthorized persons read it. Known password holders need to 
apply Method $Ran in order to decrypt binary data.\n"; 

  ### Create random number plus 25000
my $Ran = rand 25000;
($Ran, $Junk) = split(/\./, $Ran, 2);
$Ran = $Ran + 25000;

  ### Create an array of numbers which we will use most of the time.
my @Blank =  (1..$Ran);
  ### Take the array and make into a string.
my $Blank = "@Blank";
  ### Empty the array to free up memory.
@Blank = ();
my $B_Length = length $Blank;

  ### Let us get the amount of real space we have for the partition
my @Temp = `df`;
@Temp = grep($_ =~ /^$Partition/, @Temp);
my $Line = $Temp[0];
my ($Junk,$Blocks,@Junk) = split(/\s+/, $Line,4);
  ### df reports 1k (1024-byte) blocks.
my $Size = $Blocks*1024;

  ## While the file we have written is less than the size of the
  ## partition, print some more data. 
while ($Written_Size < $Size)
  {
  $Count++;

        ### 9 out of ten times, we just want to print blank spaces to hurry
        ### up printing. One out of ten times, print garbage binary.
     my $Ran = rand (10);
     if ($Ran > 1) 
       {
       print FILE $Blank; 
       $Written_Size = $Written_Size + $B_Length; 
       }  
     else 
       {
         ## This part makes a long string (up to 10000 bytes) of random data. 
       my $Garbage = "";
       my $Length = rand(10000);
       ($Length, $Junk) = split(/\./, $Length, 2);
       for (my $i = 0; $i < $Length; $i++)
         {
         my $Ran = rand 256;
         ($Ran, $Junk) = split(/\./, $Ran, 2);
         $Garbage .= chr $Ran;
         }
         ## This part encrypts the random data 8 bytes at a time. 
       my $Temp = $Garbage;
       my $Encrypted = "";
       while (length $Temp > 0)  
         {
         while (length $Temp < 8) {$Temp .= "\t";}
         my $Temp2 = $Blowfish_Cipher->encrypt(substr($Temp,0,8));
         $Encrypted .= $Temp2; 
         if (length $Temp > 8) {$Temp = substr($Temp,8);} else {$Temp = "";}
         }

         ### Print the encrypted random data to file. 
       print FILE $Encrypted;
       $Length = length $Encrypted;

       $Written_Size = $Written_Size + $Length;
       my $Rest = $Size - $Written_Size;
       print "$Size - $Written_Size = $Rest to go\n";
       }

   ### At every 500 prints, start saving to a new file.  
  if ($Count =~ /500$/) 
    {
    close FILE;
    open(FILE,">>/tmp/WIPE_IT/$Count");
    }
  }

close FILE;
#----------------------------------------------------

my $Result = system ("umount $Partition");
if ($Result > 0) {print "Error unmounting partition $Partition, aborting.\n"; exit; }

  ### Let us reformat the partition. Doesn't delete data, just removes it
  ### from the directory. 
my $Result = system ("mkfs -t ext2 $Partition");
if ($Result > 0) {print "Error making partition, aborting.\n"; exit;}


Conclusion

Using Expect was not necessary (other programs could have solved the simple problems I had). Using Blowfish was not necessary. As a matter of fact, the whole darn script is way too long if you just want to wipe a hard drive and fill it with blanks. However, I wanted to use fdisk because I always use fdisk; Expect is such a powerful tool that it is good to let people see how it works; and putting in random encrypted binary garbage to confuse a hacker is just an extra touch.

I don't understand the complete complexity of hard drives, so I am not sure whether there is residual data left on the hard drive. For my purposes, and my level of security, it does exactly what I need. As I develop MILAS more, I am sure there will be tighter checks and enhancements for deleting all data off of a hard drive.

I tend to look forward in time, trying to anticipate things which might be needed in the future, which always causes a programmer to work more than the project at hand requires. However, the mood struck me, and I like the direction the script is going, so it doesn't bother me to write up this article on an airplane flight. Making something cool doesn't wear me out, unlike having to do work for someone else, which is real work.

References

  1. Perl.com website.
  2. Expect Perl Module
  3. Blowfish Perl Module
  4. Original site for this article. - http://www.gnujobs.com/Articles/14/Wipe_It.html (any updates will be placed here)
[You can also use /dev/random or /dev/urandom to overwrite a disk. See
The Answer Gang: "Classified Disk - Low-level Format" in issue 60. But it doesn't do encryption. -Mike.]
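A minimal Python sketch of the /dev/urandom approach the editors mention follows. DANGER: /dev/hdb below names the disk to be destroyed, exactly as in the script above; point this at the wrong device and your data is gone.

CHUNK = 1024 * 1024     ### copy a megabyte at a time

src = open("/dev/urandom", "rb")
dst = open("/dev/hdb", "wb")    ### DANGER: everything on this device dies
try:
    while 1:
        dst.write(src.read(CHUNK))
except IOError:
    pass    ### "No space left on device" means we are done
try:
    dst.close()     ### the final flush may also hit the end of the device
except IOError:
    pass
src.close()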


Copyright © 2001, Mark Nielsen.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Installing USB, PCMCIA and Kernel 2.2.18 On My Laptop

By Mark Nielsen


[Editor's note: Linux users are currently migrating from kernel series 2.2 to 2.4. Linux 2.4 includes vastly improved USB support. Most distributions and users have not yet made the switch, but will during the next several months. The steps below were written for kernel 2.2.18. See the links in the References section below (especially the Linux-USB Guide) for the latest information on getting USB to work with Linux.

Also, the 2.4 kernel includes PCMCIA support, so try that first. Those drivers don't work for everyone; if you're one of the unlucky few, get the pcmcia-cs package.]

Contents

  1. Introduction
  2. Installing 2.2.18
  3. Configuring Lilo to use the old and new kernels
  4. Setting up USB
  5. Changing the Ricochet modem from serial to USB.
  6. Some problems with laptop and resources.
  7. Conclusion
  8. References

Introduction

I wanted to use my Ricochet modem on my laptop using the USB port. I had successfully downloaded kernel 2.2.18 and used USB with my other computers. I didn't feel like getting kernel 2.4 at the time.

The problem with my laptop was the fact that it was using PCMCIA devices. I found out later that I had to download pcmcia-cs and install it after I installed the new kernel 2.2.18.

Getting USB to work on my laptop meant I had to do several things:

  1. Install the new kernel 2.2.18.
  2. Install the pcmcia drivers.
  3. Configure Lilo to use the old kernel and the new kernel.
  4. Make sure the usb modules are loaded at boot time.
  5. Create a node under /dev/usb for the Ricochet modem.
  6. Reconfigure my ppp settings.
  7. When I was confident the new kernel was working well, make it the default when the computer boots.
  8. Unfortunately, because of the stupidity of the BIOS on my laptop and because of this stupid plug-and-pray garbage, I can only have my USB port working when neither of the PCMCIA slots is in use. This means I can't hook up my laptop to my local network using my PCMCIA ethernet card. This isn't the Linux kernel's fault, but the dumb computer's.

Installing 2.2.18

Here are the steps I used to install the kernel and the pcmcia drivers.
  1. Configure and install the new kernel with console drivers, usb support, and pcmcia. I also selected a bunch of other options.
  2. Download pcmcia-cs and install using the src directory from the new kernel.
Here are the commands I used to install the new kernel:
   ## change to the src directory for the linux kernel
   ## for xconfig, I selected the usb options and VESA VGA graphics console
   ## under console drivers for my laptop
make xconfig
make clean
make dep
make bzImage
make install
make modules
make modules_install
Here are the steps I used to install pcmcia-cs:
tar -zxvf pcmcia-cs-3.1.23.tar.gz
  ### Make sure you specify the root directory for the new kernel
  ### mine was /usr/src/linux-2.2.18/linux
  ### I didn't change the other default options.
make config
make all
  ### This puts the modules under /lib/modules/2.2.18
make install

Configuring lilo to use the old and new kernel.

Here is the old and new configuration I had for /etc/lilo.conf. I highly recommend that you do not use this for yourself, as I customized lilo.conf for my own needs. After I edited /etc/lilo.conf to the new configuration, I just typed "lilo" at the command prompt. Then, when I rebooted my computer, I had a choice of "linux_new" or "linux". After I was confident the new kernel was working, I changed it to be the default.

Old configuration.

### Configuration for GNUJobs.com test laptop
vga=791 
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=linux

image=/boot/vmlinuz-2.2.12-32
	label=linux
	initrd=/boot/initrd-2.2.12-32.img
	read-only
	append="hdc=ide-scsi"
#        ramdisk_size=40000
	root=/dev/hda5

New lilo.conf configuration.
### Configuration for GNUJobs.com test laptop 
### New kernel installed. Remember to install console drivers
### into new kernels otherwise vga=791 doesn't work.

vga=791 
#vga=ask
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
default=linux_new

image=/boot/vmlinuz-2.2.18
        label=linux_new
        read-only
        append="hdc=ide-scsi"
              ### /dev/hda5 is root for GNUJobs.com laptop
        root=/dev/hda5

image=/boot/vmlinuz-2.2.12-32
	label=linux
	initrd=/boot/initrd-2.2.12-32.img
	read-only
	append="hdc=ide-scsi"
              ### /dev/hda5 is root for GNUJobs.com laptop
	root=/dev/hda5

Setting up USB

In order to setup USB, I had to put these commands into my /etc/rc.d/rc.local file.
   ### This command mounts the filesystem for usb to /proc/bus/usb. 
mount -t usbdevfs none /proc/bus/usb
   ### Load a generic usb module -- choose one of these three depending on your
   ### motherboard or USB card.  I have been able to use
   ### uhci or usb-uhci on all my motherboards so far.  If you aren't sure 
   ### which module to use, see "Basic USB Configuration" in the Linux-USB Guide
   ### at http://www.linux-usb.org/USB-guide/c122.html#AEN124
insmod /lib/modules/2.2.18/usb/uhci.o
# insmod /lib/modules/2.2.18/usb/usb-uhci.o
# insmod /lib/modules/2.2.18/usb/usb-ohci.o
   ### Load the module for modems, like Ricochet
insmod /lib/modules/2.2.18/usb/acm.o

Changing the Ricochet modem from serial to USB

In order to change my Ricochet modem to use the new usb, I had to load the modules described in the previous section, and then create a new node and make my ppp configuration use the new node.
mkdir /dev/usb
mknod /dev/usb/ttyACM0 c 166 0 

Again, I changed my modem from using /dev/ttyS0 to /dev/usb/ttyACM0. Now my Ricochet modem is working, and it seems to be going faster than it did as a serial modem, as it should, but it could be my imagination. Note that these two commands are permanent: you only need to run them once. Also, this is /dev/usb, not /proc/bus/usb (explained in the Linux-USB Guide). Kernel files magically appear and disappear in /proc/bus/usb as devices are plugged in and unplugged, but that's not what this is. USB Ricochet modems require a /dev entry; some other USB devices don't. The usbdevfs manages /proc/bus/usb, not /dev/usb.

Some problems with laptop and resources

I installed the new kernel on my laptop from DELL. I did have some problems. It seems my stupid laptop doesn't have enough IRQs to handle using the USB port. Thus, I now have to buy a USB mouse (and maybe keyboard) to free up some IRQs. I also had this problem when I tried to use my PCMCIA modem and PCMCIA ethernet card at the same time. I haven't been able to solve this problem. Now when I use the USB port, I can't use either PCMCIA card. It is extremely annoying that I can't get anything to use IRQ 10, and that I can't disable the parallel port, serial port, and internal PS/2 mouse. The DELL computer was by far the best Linux laptop I had seen, but it will become outdated soon. I imagine that with kernel 2.4, there will be a lot more commercial support for Linux. Why on earth the software evil empire and the hardware evil empire came together to create this user-friendly plug-and-pray nonsense is beyond me. I know my laptop has free resources, but I cannot force it to use those resources. Very annoying. I am extremely unimpressed with the BIOS of the particular DELL laptop I got.

I bought another laptop for one of my employees at GNUJobs.com from Emperor Linux, and it was properly configured, and I grilled the salesperson to make sure I got everything working without any problems. I am much happier with the laptop I got from Emperor Linux.

Another goofy thing I did was to forget to install the iso9660 filesystem format into the kernel (or as a module). Now I can't read CD-ROMs. I will have to compile the kernel one more time and include the iso9660 filesystem as a module.

Conclusion

I am extremely impressed with the USB support in the Linux kernel 2.2.18. After Kernel 2.4.1 comes out, I will most likely upgrade my kernel to 2.4. I have read all the new features about kernel 2.4, and it looks exciting!

Overall, I am impressed with the fact that it was pretty painless to install the new kernel. Installing one kernel didn't blow away earlier kernels, which made it so I could test out the new kernel without getting rid of the old one. This is helpful if I want to revert to the old kernel. For example, before I installed pcmcia-cs for the new kernel, my laptop's ethernet card didn't work, and hence it was helpful that I could boot to the old kernel where the ethernet card would still work. Had this happened in a lame operating system which just forces upgrades and wouldn't let you choose how to control your system, I might have been screwed.

Even though the installation was fairly easy for me, it might be easier for other people to just use RPMs and rely on their favorite Linux distribution to help them out. This is the easiest installation of the kernel and PCMCIA drivers for a laptop that I have ever experienced. It is nice to see the installation getting easier and easier. After years of having to fight with the kernel for one reason or another, it is nice to see all these technologies come together.

I don't see how the evil empire will be able to resist its downfall considering the fact that GNU/Linux (and OpenBSD and FreeBSD) are technologically superior and are providing user friendliness with GNOME and KDE. The evil empire has never cared about technology, but marketing and user-friendliness. Some of the evil commercial UNIX vendors only cared about technology and did not care about making their environment pleasant to use or user-friendly. Since GNU/Linux is merging technology with user-friendliness, which is the way people want it, we get the best of both worlds, instead of having evil empires dictate to us what they think is best (or how to control us so that they can milk us).

References

  1. Taken directly from linux/Documentation/usb/usb-help.txt
    
    2000-July-12
    
    For USB help other than the readme files that are located in
    linux/Documentation/usb/*, see the following:
    
    Linux-USB project:  http://www.linux-usb.org
      mirrors at        http://www.suse.cz/development/linux-usb/
             and        http://usb.in.tum.de/linux-usb/
    
    Linux USB Guide:    http://www.linux-usb.org/USB-guide/book1.html
       READ THIS!          (or other Linux-USB mirrors)
    
    Linux-USB device overview (working devices and drivers):
                        http://www.qbik.ch/usb/devices/
        
    The Linux-USB mailing lists are:
      linux-usb-users@lists.sourceforge.net   for general user help
      linux-usb-devel@lists.sourceforge.net   for developer discussions
    
    
    
  2. Linux Kernel 2.2.18
  3. PCMCIA-CS source
  4. Using the wireless modem Ricochet
  5. Original site for this article - http://www.gnujobs.com/Articles/15/USB.html. (any updates will be here)


Copyright © 2001, Mark Nielsen.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


Clearing out the Master Boot Record (MBR)

--Danger, Will Robinson!

By Ben Okopnik


Experimentation is fun. After all, one of the things that makes Linux so interesting to a number of people is the ability to twiddle settings and see what happens - I'll admit that it's a major factor for me. One of the problems with that, though, is that some kinds of twiddling can lead to serious trouble. A bit like sawing off the branch you're sitting on, in fact...

A number of people write to the Answer Gang with a query that goes something like this:

"Dear TAG: I have a stick of dynamite strapped to the CPU, and I'm not afraid to use it. Now that I have your undivided attention: I ran into a problem while trying to reinstall..."

What it turns out to be - after the police, the fire department, and the burly men in the white coats have come and gone - is that they've run into the classic "fried MBR" problem: install Linux, realize that Windows will screw up the boot record, delete the Linux partition, try to install Windows first... OOPS. The Windows setup runs into a problem and stops.

The reason for all of the above is that they forgot to uninstall LILO, which would have written out the original MBR; as it is, the boot code in the MBR is trying to pass control to the Linux kernel - and that's no longer there.

Nothing helps. The undocumented "fdisk /mbr" option that is supposed to write a clean Master Boot Record seems to have no effect; "fdisk" in interactive mode refuses to delete the "Non-DOS" partition; even the detonator fails to explode. What to do, what to do...

By the way, a factor in the first two problems might be the Windows "lock" command - by default, 'raw writes' to the disk are disallowed, and "lock c:" 'locks' the drive to allow writing to it. (For the last problem, stick to the bridge-wire type detonators from Dynamit Nobel, and store them properly. :)
 

    * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
    Note: The following advice will completely wipe your Master Boot Record,
    which contains all your partition information. DO NOT DO THIS unless you
    know that this is exactly the result you want - it will leave your HD in
    an unbootable state, in effect bringing it back to "factory-fresh", i.e.,
    empty of data and requiring partitioning and formatting.
    * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
 

Linux-based solution

If you can still somehow fire up Linux - say, via Tom's Root-Boot floppy - you can simply invoke "dd", like so:

dd if=/dev/zero of=/dev/hda bs=512 count=1

Yep, that's it. That MBR is gone. Obviously, you have to be root to do this.
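Incidentally, if there's any chance you'll want the old MBR back someday, "dd" can save a copy of it first - a sketch, assuming the first IDE disk and a mounted floppy to hold the backup:

dd if=/dev/hda of=/mnt/floppy/mbr.bin bs=512 count=1    # save the current MBR
dd if=/mnt/floppy/mbr.bin of=/dev/hda bs=512 count=1    # restore it later

Just don't store the backup on the disk you're about to wipe.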
 

DOS-based solution

Boot with a DOS floppy that has "debug" on it; run "debug". At the '-' prompt, "block-fill" a 512-byte chunk of memory with zeroes:

f 9000:0 200 0

Start assembly mode with the 'a' command, and enter the following code:

mov dx,9000
mov es,dx
xor bx,bx
mov cx,0001
mov dx,0080
mov ax,0301
int 13
int 20

Press <Enter> to exit assembly mode, take a deep breath - and press "g" to execute, then "q" to quit "debug". Your HD is now in a virgin state, and ready for partitioning and installation.
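A quick anatomy of the above, for the curious: the first three instructions point ES:BX at the zeroed buffer at 9000:0000; CX=0001 selects cylinder 0, sector 1 (the MBR); DX=0080 selects head 0 of the first hard disk; and AX=0301 invokes function 03h ("write sectors") of BIOS interrupt 13h with a count of one sector. The final "int 20" returns control to DOS. Note that "debug"'s little assembler doesn't accept comments, so type the code exactly as shown.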

Obviously, you have to be root... oh, oops. Anybody that walks up with a DOS floppy can do this to your system in about a minute, including boot time. Let's see; where was that article about securing your box, again?...

References

The "dd" man page.

DOS-based fix: Original idea and code by Mark Minasi, used for clearing infected/damaged MBRs in a course of his that I used to teach; all code/command modifications mine.


Copyright © 2001, Ben Okopnik.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


IP Spoofing

By Kapil Sharma


A spoofing attack involves forging one's source address; it is the act of using one machine to impersonate another. Many UNIX applications and tools rely on source IP address authentication, and many developers have used host-based access controls to secure their networks. A source IP address is a unique identifier, but not a reliable one: it can easily be spoofed.
To understand the spoofing process, I will first explain the TCP/IP connection establishment process, and then how an attacker can spoof your network.

The client system begins by sending a SYN message to the server. The server acknowledges it by sending a SYN-ACK message back to the client. The client then finishes establishing the connection by responding with an ACK message. The connection between the client and the server is then open, and service-specific data can be exchanged.

TCP uses sequence numbers. When a virtual circuit is established between two hosts, TCP assigns each packet a sequence number as an identifying index; both hosts use these numbers for error checking and reporting.
Rik Farrow, in his article "Sequence Number Attacks", explains the sequence number system as follows:

"The sequence number is used to acknowledge receipt of data. At the beginning of a TCP connection, the client sends a TCP packet with an initial sequence number, but no acknowledgment. If there is a server application running at the other end of the connection, the server sends back a TCP packet with its own initial sequence number, and an acknowledgment; the initial number from the client's packet plus one. When the client system receives this packet, it must send back its own acknowledgment; the server's initial sequence number plus one."

Thus an attacker has two problems:
1) He must forge the source address.
2) He must maintain a valid sequence number exchange with the target.

The second task is the more complicated of the two: once the target sets its initial sequence number, the attacker must answer with the correct acknowledgment. Since the target's reply goes to the forged address, the attacker usually never sees it and must predict the sequence number instead. Once he guesses it correctly, he can synchronize with the target and establish a valid session.

Services vulnerable to IP spoofing:
Any configuration or service that authenticates clients by source address is at risk. Classic examples are the BSD r-services (rsh, rlogin, rcp), NFS exports restricted by host name or address, the X Window System when access is granted with xhost, and router ACLs or TCP-wrapper rules based solely on source IP.

TCP and IP spoofing Tools:
1) Mendax for Linux
Mendax is an easy-to-use tool for TCP sequence number prediction and rshd spoofing.

2) spoofit.h
spoofit.h is a nicely commented library for including IP spoofing functionality into your programs. [Current URL unknown. -Ed.]

3) ipspoof
ipspoof is a TCP and IP spoofing utility.

4) hunt
hunt is a sniffer which also offers many spoofing functions.

5) dsniff
dsniff is a collection of tools for network auditing and penetration testing. dsniff, filesnarf, mailsnarf, msgsnarf, urlsnarf, and webspy passively monitor a network for interesting data (passwords, e-mail, files, etc.). arpspoof, dnsspoof, and macof facilitate the interception of network traffic.


Measures to prevent IP spoofing attacks:
1) Do not build trust relationships on source IP addresses alone; use encrypted, key-based authentication (ssh, for example) instead of the r-services.
2) Filter at the router: drop packets arriving on the outside interface that claim an internal source address (ingress filtering), and drop packets leaving your network with source addresses that don't belong to it (egress filtering; see RFC 2267).
3) Use operating systems that pick hard-to-predict initial sequence numbers, which makes sequence guessing far more difficult. A minimal Linux example of the filtering piece follows.
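On a Linux gateway, the filtering piece can be as simple as this (a sketch; the interface name and network numbers are assumptions, and the ipchains rule applies to 2.2.x kernels):

# enable reverse-path (anti-spoofing) filtering on all interfaces
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 1 > $f; done

# drop packets arriving on the external interface (eth1) that claim
# to come from the internal network (192.168.1.0/24)
ipchains -A input -i eth1 -s 192.168.1.0/24 -j DENY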

Conclusion:
Spoofing attacks are very dangerous and difficult to detect, and they are becoming more and more common. The only way to prevent them is to implement security measures, such as encrypted authentication and router-level filtering, instead of trusting source addresses.


Copyright © 2001, Kapil Sharma.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


XML parsing in AOLserver

By Irving Washington


AOLserver

AOLserver is an open-source, multi-threaded, high-performance web server. AOLserver is less well known than Apache, but it has a few features that put it ahead: a rich and well-thought-out extension API, a superior database connectivity API, and an embedded, tightly integrated Tcl interpreter. Read my previous LG article to learn more about AOLserver.

XML

If you're going to do serious work with XML you'll have to learn about it, and you'll have to do it somewhere else. The best summary of XML I've seen is: XML is an (inefficient) way to represent data in tree form as text (ASCII) files. Text is good because it's simple. Tree is good because a lot can be represented as trees (e.g., a non-circular list is just a degenerate tree, and a circular list can be described with multiple trees). Inefficient is bad, but it usually makes engineering sense to trade inefficiency for the extensibility and wide adoption that XML enjoys (lots of tools, lots of information).

XML support in AOLserver

XML processing (parsing and modification of XML documents) in AOLserver is possible thanks to the ns_xml module written by ArsDigita. This module is a wrapper around version 2.x (>2.2.5) of the libxml library and adds an ns_xml command to the embedded Tcl interpreter. You can download the source or get it directly from the CVS repository by doing:
cvs -d:pserver:anonymous@cvs.aolserver.sourceforge.net:/cvsroot/aolserver login
cvs -z3 -d:pserver:anonymous@cvs.aolserver.sourceforge.net:/cvsroot/aolserver co nsxml
You need to press Enter after the first command, since CVS is waiting for a password (which is empty).

As of Dec. 2000, Linux distributions usually come with version 1.x of the libxml library, so chances are that you'll need to install 2.x yourself (this will change in the future, since everyone is migrating to 2.x). To install the nsxml module, go into the nsxml directory and optionally edit a path in the Makefile to point to the AOLserver source directory. Then run make. You should get an nsxml.so module, which should be placed in the AOLserver bin directory (the same one that has the main nsd executable). Add the following to your nsd.tcl config file:

ns_section "ns/server/${servername}/modules"
ns_param   nsxml           ${bindir}/ns_xml.so
and restart AOLserver. You can verify that the module gets loaded by watching server.log; I usually use a shell window with:
tail -f $AOLSERVERDIR/log/server.log
This is also a great way to debug Tcl scripts since AOLserver will dump detailed debug information every time there is an error in the script.
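To sum up the build-and-install steps as commands (a sketch; the AOLserver install directory is an assumption):

cd nsxml
# if necessary, edit Makefile to point at your AOLserver source tree
make
cp nsxml.so /usr/local/aolserver/bin/
# add the ns_section/ns_param lines to nsd.tcl and restart AOLserver

(Note that the config snippet above loads the module as ns_xml.so; use whichever file name your build actually produced.)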

XML Quick reference

Here's a quick reference of all commands available through ns_xml.

set doc_id [ns_xml parse ?-persist? $string]
Parse the XML document in $string and return a document id (a handle to the in-memory parsed tree). If you don't provide the ?-persist? flag, the memory will be freed automatically when the script exits; otherwise you'll have to free it yourself by calling ns_xml doc free. You need the -persist flag if you want to share parsed XML docs between scripts.
set doc_stats [ns_xml doc stats $doc_id]
Return the document's statistics.
ns_xml doc free $doc_id
Free a document. This should only be called on a document if the ?-persist? flag was passed to ns_xml parse or ns_xml doc create.
set node_id [ns_xml doc root $doc_id]
Return the node id of the document root (you start traversal of the document tree from here).
set children_list [ns_xml node children $node_id]
Return a list of the children of a given node.
set node_name [ns_xml node name $node_id]
Return the name of a node.
set node_type [ns_xml node type $node_id]
Return the type of a node. Possible types: element, attribute, text, cdata_section, entity_ref, entity, pi, comment, document, document_type, document_frag, notation, html_document.
set content [ns_xml node getcontent $node_id]
Get the content (text) of a given node.
set attr [ns_xml node getattr $node_id $attr_name]
Return the value of an attribute of a given node.
set doc_id [ns_xml doc create ?-persist? $doc_version]
Create a new document in memory. If the -persist flag is given you'll have to explicitly free the memory taken by the document with ns_xml doc free; otherwise it'll be freed automatically after execution of the script. $doc_version is the version of the XML doc; if not specified, it defaults to "1.0".
set xml_string [ns_xml doc render $doc_id]
Generate XML from the in-memory representation of the document.
set node_id [ns_xml doc new_root $doc_id $node_name $node_content]
Create a root node for a document.
set node_id [ns_xml node new_sibling $node_id $name $content]
Create a new sibling of a given node.
set node_id [ns_xml node new_child $node_id $name $content]
Create a child of a given node.
ns_xml node setcontent $node_id $content
Set the content of a given node.
ns_xml node setattr $node_id $attr_name $value
Set the value of an attribute in a given node.

A simple example

An educational and simple thing to do is to parse a document and print out its tree structure. Stripped to bare bones, the process is: parse the document with ns_xml parse, find the root node with ns_xml doc root, then recursively walk each node's children, printing each node's name, type, and content. If you provide the -persist flag to ns_xml parse you'll have to explicitly call ns_xml doc free $doc_id to free the memory associated with the document; otherwise it will be freed automatically after execution of the script.

In code it could look like this:

# print one node as an HTML list item
proc dump_node {node_id} {
    set name [ns_xml node name $node_id]
    set type [ns_xml node type $node_id]
    set content [ns_xml node getcontent $node_id]
    ns_write "<li>"
    ns_write "node id=$node_id name=$name type=$type"
    if { [string compare $type "attribute"] != 0 } {
	ns_write " content=$content\n"
    }
}

# recursively dump a list of sibling nodes as a nested HTML list
proc dump_tree_rec {children} {
    ns_write "<ul>\n"
    foreach child_id $children {
	dump_node $child_id
	set new_children [ns_xml node children $child_id]
	if { [llength $new_children] > 0 } {
	    dump_tree_rec $new_children
	}
    }
    ns_write "</ul>\n"
}

proc dump_tree {node_id} {
    dump_tree_rec [list $node_id]
}

proc dump_doc {doc_id} {
    ns_write "doc id=$doc_id<br>\n"
    set root_id [ns_xml doc root $doc_id]
    dump_tree $root_id
}

set xml_doc "<test version=\"1.0\">this is a
<blind>test</blind> of xml</test>"
set doc_id [ns_xml parse $xml_doc]
dump_doc $doc_id
The ns_xml parse command will throw an error if the XML document is not valid (e.g., not well formed), so in production code we should catch it and display a meaningful error message, e.g.:
if { [catch {set doc_id [ns_xml parse $xml_doc]} err] } {
    ns_write "There was an error parsing the following XML document: "
    ns_write [ns_quotehtml $xml_doc]
    ns_write "Error message is:"
    ns_write [ns_quotehtml $err]
    ns_write "\n"
    return
}
Code like this takes more time to write but some day it may save a lot of debugging time (and a day like this always comes).

See how the code works in practice [external site running AOLserver] and get the full source [included in Linux Gazette]. It's a bit more complex than the above snippet. You can see the structure of an arbitrary XML document by typing it in the provided text area. The script also shows how to parse form data and has more robust error handling.

Real life example

XML is better than other similar formats because it is a standard; it has gained wide acceptance and its usage is growing rapidly. One of the possible usages of XML is as a way for web sites to communicate (web services). The simplest scenario is that of one web server grabbing information in XML format from another web server. A popular example of such communication is an aggregation of headlines: e.g., if you go to freshmeat.net you'll see that they provide current headlines from linuxtoday.com. We'll do the same thing (vive l'originalite!).

In the past this could have been done in a rather distasteful way, by grabbing the whole HTML page and trying to extract the relevant information. That would be hard to program and fragile (a change in the way the HTML page is generated would most likely break the parsing).

Today a site that wants to provide headlines for others can publish the data in an easy-to-parse XML format under some URL. In our case the data are provided at http://www.linuxtoday.com/backend/linuxtoday.xml. See the format of this file (using the previously developed script).

As you can see, the XML document represents the headlines on the LinuxToday site. It is a set of stories, each story having a title, URL, author, etc. We know that after parsing the XML document we would like to have a way to easily extract the information. Let's use the "wishful-thinking" (in other words, top-down) method of writing code advocated in Structure and Interpretation of Computer Programs (a truly great CS book). Let's assume that we've converted the XML representation into an object. To build an HTML table showing the data we need the following procedures: headlines_get_stories_count, headlines_get_story, story_get_url, and story_get_title (all defined below).

For simplicity I only use URL and title but extending this to more attributes should be trivial.

Having those procedures we can generate the simplest (but rather ugly) table:

proc story_to_html_table_row { story } {
    set url [story_get_url $story]
    set title [story_get_title $story]
    return "- <a href=\"$url\"><font color=#000000>$title</font></a><br>\n"
}

# given headlines generate HTML code of the table with this data
proc headlines_to_html_table { headlines } {
    set to_return "<table border=0 cellspacing=1 cellpadding=3>"
    append to_return "<tr><td><small>"

    set stories_count [headlines_get_stories_count $headlines]
    for {set i 0} {$i < $stories_count} {incr i} {
	set story [headlines_get_story $headlines $i]
	append to_return [story_to_html_table_row $story]
    }

    append to_return "</td></tr></table>\n"
    return $to_return
}
Tcl doesn't give us much choice for representing this object; we'll use lists.
proc headlines_get_stories_count { headlines } {
    return [llength $headlines]
}

proc headlines_get_story { headlines story_no } {
    return [lindex $headlines $story_no]
}

proc story_get_url { story } {
    return [lindex $story 0]
}

proc story_get_title { story } {
    return [lindex $story 1]
}
Note that if we forget about purity (just for a while) we can rewrite the following part of headlines_to_html_table:
set stories_count [headlines_get_stories_count $headlines]
for {set i 0} {$i < $stories_count} {incr i} {
    set story [headlines_get_story $headlines $i]
    append to_return [story_to_html_table_row $story]
}
in a bit more terse way:
foreach story $headlines {
    append to_return [story_to_html_table_row $story]
}
Now the most important part: converting XML doc into the representation we've chosen.
# string_equal_p is not a Tcl built-in; here is a minimal version
# (an assumption - the original code probably relied on a site-wide
# utility library):
proc string_equal_p { s1 s2 } {
    return [expr {[string compare $s1 $s2] == 0}]
}

# does the name of the node identified by $node_id equal $name?
proc is_node_name_p { node_id name } {
    set node_name [ns_xml node name $node_id]
    if { [string_equal_p $name $node_name] } {
	return 1
    } else {
	return 0
    }
}

# does the type of the node identified by $node_id equal $type?
proc is_node_type_p { node_id type } {
    set node_type [ns_xml node type $node_id]
    if { [string_equal_p $type $node_type] } {
	return 1
    } else {
	return 0
    }
}

# is this a node of type "attribute"?
proc is_attribute_node_p { node_id } {
    return [is_node_type_p $node_id "attribute"]
}

# raise an error if node name is different than $name
proc error_if_node_name_not {node_id name} {
    if { ![is_node_name_p $node_id $name] } {
	set node_name [ns_xml node name $node_id]
	error "node name should be $name and not $node_name"
    }
}

# raise an error if node type is different than $type
proc error_if_node_type_not {node_id type} {
    if { ![is_node_type_p $node_id $type] } {
	set node_type [ns_xml node type $node_id]
	error "node type should be $type and not $node_type"
    }
}

# given url and title construct a story object with
# those attributes
proc define_story { url title } {
    return [list $url $title]
}

# convert a node of name "story" into an object
# that represents story
proc story_node_to_story {node_id} {
    set url ""
    set title ""
    # go through all children and extract content of url and title nodes
    set children [ns_xml node children $node_id]
    foreach node_id $children {
	# we're only interested in nodes whose name is "url" or "title"
	if { [is_attribute_node_p $node_id]} {
	    if { [is_node_name_p $node_id "url"] || [is_node_name_p $node_id "title"]} {
		set node_children [ns_xml node children $node_id]
		# those should have exactly one child node, with
		# the name "text" and type "cdata_section"
		if { [llength $node_children] != 1 } {
		    set name [ns_xml node name $node_id]
		    error "$name node should only have 1 child"
		}
		set one_node_id [lindex $node_children 0]
		error_if_node_type_not $one_node_id "cdata_section"
		error_if_node_name_not $one_node_id "text"
		set txt [ns_xml node getcontent $one_node_id]
		if { [is_node_name_p $node_id "url"] } {
		    set url $txt
		}
		if { [is_node_name_p $node_id "title"]} {
		    set title $txt
		}
	    }
	}
    }
    return [define_story $url $title]
}

# convert XML doc to headlines object
proc xml_to_headlines { doc_id } {
    set headlines [list]
    set root_id [ns_xml doc root $doc_id]
    # root node should be named "linuxtoday" and of type "attribute"
    error_if_node_name_not $root_id "linuxtoday"
    error_if_node_type_not $root_id "attribute"
    set children [ns_xml node children $root_id]
    foreach node_id $children {
	# only interested in attribute type nodes whose name is "story"
	if { [is_node_name_p $node_id "story"] && [is_attribute_node_p $node_id]} {
	    set story [story_node_to_story $node_id]
	    lappend headlines $story
	}
    }
    return $headlines
}
The code is rather straightforward; we use our knowledge of the structure of the XML file. In this case we know that the root node is named linuxtoday and should have children named story; each story node should have children named url and title, etc. The previous script that dumps the general structure of the tree helped me a lot in writing this function. Note the usage of the error command to abort the script if the XML doesn't look good to us.

Having an intermediate representation of the data might look like overkill, given that it costs us more code and some performance, but there are very good reasons to have it. We could have written a proc xml_to_html_table that would create the HTML table directly from the XML document, but such code would be more complex, buggier and harder to modify. The separation we've made provides an abstraction that reduces complexity, which is always good. It also gives us more flexibility: we can easily imagine writing another headlines_to_html_table procedure that produces a slightly different table.

See how it works in practice [external site running AOLserver] and get the source [included in Linux Gazette]. It should produce something like this:

linuxtoday
- Kernel Cousin Debian Hurd #73 By Paul Emsley And Zack Brown
- Zope 2.2.5 b1 released
- O'Reilly Network: Insecurities in a Nutshell: SAMBA, pine, ircd, and More
- ZDNet: Linux Laptop SuperGuide
- ComputerWorld: Think tank warns that Microsoft hack could pose national security risk
 

One thing missing in this code is caching. As it is, it will grab the XML file from other people's server every time it is invoked. This is not nice. It would be fairly easy to add logic to cache the XML file (or its in-memory representation) and only fetch a new version if, say, an hour has passed since it was last retrieved.
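A minimal sketch of such caching, assuming AOLserver's nsv shared arrays and its ns_httpget helper (the proc and array names here are made up for illustration):

# return the LinuxToday XML, fetching it at most once per hour
proc get_headlines_xml {} {
    set url "http://www.linuxtoday.com/backend/linuxtoday.xml"
    set now [clock seconds]
    # reuse the cached copy if it is less than an hour old
    if { [nsv_exists lt_cache xml]
         && ($now - [nsv_get lt_cache fetched]) < 3600 } {
        return [nsv_get lt_cache xml]
    }
    set xml [ns_httpget $url]
    nsv_set lt_cache xml $xml
    nsv_set lt_cache fetched $now
    return $xml
}

The parsed document could be cached the same way, as long as ns_xml parse was called with the -persist flag.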

Conclusion about XML as a data exchange language

Is this data-exchange thing between web servers a novel idea? No. You could do everything described here with the first generation of web servers. You would probably use different technologies (C code running inside a web server or a CGI script instead of an embedded scripting language; some ad-hoc text or binary format instead of XML), but the idea would be the same: one web server acts as a client, grabs the data from the other server using the HTTP protocol, and does something useful with it. The other web server acts as a server, providing data for others. It's just another implementation of the client-server paradigm; it's nothing new. It is a sign, though, that web programming is maturing. After 5+ years we've finally solved most of the problems with presenting static HTML pages or generating dynamic web pages from data kept on the server (e.g., in a database). Now we enter the times of providing services and data for other web sites. Today the state of the art is pretty much limited to exchanging headlines and similar trivia, but the possibilities are bigger, ranging from simple things like providing stock quotes or dictionary definitions to executing complex (e.g., financial) transactions following an agreed-upon protocol.

Conclusion about XML parsing in AOLserver

Besides parsing, you can also create and manipulate XML documents in memory and convert them to an XML (ASCII) representation. This is not covered in this article, but it's so straightforward that you should be able to do it just by looking at the API.

The ns_xml module provides the basics of XML processing. Although you can do quite a bit with it, one could wish to do more, and one could also imagine alternative approaches to the module's design.

If you have comments or suggestions, send them in.


Copyright © 2001, Irving Washington.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001

"Linux Gazette...making Linux just a little more fun!"


The Back Page


About This Month's Authors


Marius Andreiana

Marius is 20 years old, a second-year student at Politehnica Bucharest, Romania, and works as a web developer. Besides Linux, he also loves music (from rock to dance), dancing, having fun, and spending time with friends. He is also interested in science in general (and that spooky quantum connection :)

Bryan Brunton

With a degree in philosophy and as a reformed Visual Basic programmer, Bryan Brunton is a software engineer who wants to work with whatever tools allow him to never again use pointers while remaining platform agnostic.

Shane Collinge

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.

Matteo Dell'Omodarme

I'm a student at the University of Pisa and a Linux user since 1994. I'm now working on the administration of Linux boxes at the Astronomy section of the Department of Physics, with particular experience in security. My primary email address is matt@martine2.difi.unipi.it.

Chris Gibbs

I'm a mad, sad geek with three dogs, a network and a cat, living on top of a mountain in a Welsh valley. So I'm a contradiction! All that is good is either caffeine, chocolate or Linux, so send me lots of chocolate or just point me at more great stuff to run on a salvaged 486! chocolate@hawklord.uklinux.net

Mark Nielsen

Mark works at ZING (www.genericbooks.com) and GNUJobs.com. Previously, Mark founded The Computer Underground. Mark works on non-profit and volunteer projects which promote free literature and software. To make a living, he recruits people for GNU-related jobs and also provides solutions for web/database problems using Linux, FreeBSD, Apache, Zope, Perl, Python, and PostgreSQL.

Ben Okopnik

A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building networks and hacking on hardware and software whenever he runs out of cruising money. He's been playing and working with computers since the Elder Days (anybody remember the Elf II?), and isn't about to stop any time soon.

Kapil Sharma

Kapil is a Linux and Internet security consultant. He has been working on various Linux/Unix systems and Internet security for more than two years. He maintains a web site (http://linux4biz.net) providing free as well as commercial support for web, Linux and Unix solutions.

Irving Washington

Irving Washington (pseud.), as witnessed by his personal server www.fifthgate.org, is mostly interested in creating useful web services.


Not Linux


Here are some screenshots of lesser-known features of Microsoft Office. David Deckert made the last one; neither of us knows where the other three came from.

[Word menu with options for 'Read Boss's Mind', 'Adjust Subordinate's Attitude', etc.]
[Office Assistant that looks like a superhero dog offering to hump your leg, chase a car or shit on the carpet.]
[Default Settings dialog with options to crash every 2 hours, create incredibly large files, and to disable the ability to type during auto saves.]
[The ever-helpful Office Assistant says, 'It looks like you're writing a suicide note!  Office Assistant can help you write your suicide note.  First, tell us how you plan to kill yourself.  Pills, Jump, Pastry?']

[Linux Gazette mini-logo]
We have changed the Linux Gazette logo at the top of the page to an 8K PNG image for your fast-downloading pleasure. Thanks to Karl-Heinz Herrmann for the suggestion and the image. Here is also a small version (200 pixels wide) for those who want an icon on their site to link to LG with.

Happy Linuxing!

Michael Orr
Editor, Linux Gazette, gazette@ssc.com


Copyright © 2001, the Editors of Linux Gazette.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 63 of Linux Gazette, Mid-February (EXTRA) 2001