LINUX GAZETTE

March 2003, Issue 88       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2003 Specialized Systems Consultants, Inc.

LINUX GAZETTE
...making Linux just a little more fun!
The Mailbag
From The Readers of Linux Gazette


HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.


devfs problem

30 Jan 2003 08:22:53 +0200
Stelian Iancu (stelian.iancu from gmx.net)

Hello!

I just switched from Gnome 2.0 to KDE 3.1 and I noticed that the settings for the devices created by devfsd aren't saved between reboots. So I read through the docs and saw that I have to create a dev-state directory. Well, I already have that directory in /lib, and devfsd is set to save the settings (in /etc/devfsd.conf). And if I change the permissions on some device (/dev/dsp for example), the change is also visible in the /lib/dev-state directory. However, after I reboot, the same problem: I don't have the permissions. And this is really annoying me.

So any suggestions are greatly appreciated!

P.S. I am using Mandrake 9.0 with the default kernel.

Thanks!

Regards,
Stelian I.
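For anyone else chasing this: the save-state behaviour hangs on a few lines in /etc/devfsd.conf. Below is a sketch assembled from the devfsd documentation, not from Stelian's actual config; the pty exclusion and the ordering are conventional, so check them against Mandrake's shipped file.

```
# Don't bother saving state for pseudo-terminals
REGISTER        ^pty/.*         IGNORE
CHANGE          ^pty/.*         IGNORE
# Restore saved permissions/ownership when a node (re)appears...
REGISTER        .*              COPY    /lib/dev-state/$devname $devpath
# ...and save them whenever they are created or changed
CREATE          .*              COPY    $devpath /lib/dev-state/$devname
CHANGE          .*              COPY    $devpath /lib/dev-state/$devname
```

If the config already looks like this, the usual culprit is a boot script (or another devfsd.conf entry) that resets permissions after devfsd restores them, so it is worth checking what runs after devfsd at boot.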


traffic shaping for the internal network; tc filter for source addresses?

Sun, 2 Feb 2003 07:55:01 -0800 (PST)
Radu Negut (rnegut from yahoo.com)

Hi!

I've got my home LAN behind a cable modem, masqueraded to the outside world. The masquerading machine runs RedHat 7.3. What I'm trying to achieve is to share the bandwidth equally between the machines (about 7), following this algorithm: if only one host is making a connection at a given time, it gets the whole bandwidth; when a second connection from a second masqueraded machine arrives at the gateway, the bandwidth is divided equally between the two machines; if a third machine makes a connection, the bandwidth is split into three equal shares, and so on. Now if a machine that has already opened a connection makes a second one, I would want this connection to be allocated inside that machine's share, not treated as a separate participant in the bandwidth division. Following this idea, if someone has 4 open downloads, someone else 7, and a third machine only 1, then the bandwidth should be divided three ways, not 12.

I've already read about SFQ, qdiscs and tc filter from the 'Advanced routing HOW-TO' but I couldn't find any info on how to shape/police traffic dynamically and based on ip source addresses. I do not want to split the bandwidth into seven slices from the beginning since not everybody is online all the time and this would waste available bandwidth for the others. I'd rather have the traffic shaped depending on how many internal hosts wish to access the internet at a given time.

I'm not really interested in providing differentiated traffic based on content (interactive, bulk, etc.) just a fair sharing of bandwidth, ignorant of how many download managers/ftp's each and everyone is running, and not allowing anyone to suffocate the shared internet connection with his/her requests.

Thank you very much in advance for the time taken to
answer this,

Radu Negut
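No Answer Gang reply made it in this month, but as a hedged sketch: an HTB class (HTB is in stock kernels from 2.4.20; earlier kernels need the patch) capped just under the real upstream rate, with a fair-queueing leaf, gets most of the way there. The interface name and rate below are placeholders, not Radu's real numbers.

```shell
# Cap upstream just below the modem's rate so queueing happens
# on the Linux box, where we can control it (400kbit is a placeholder):
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 400kbit ceil 400kbit
# Stock SFQ shares the class per *flow*, so seven downloads get
# seven shares.  The ESFQ patch (not in the stock kernel) adds a
# hash option to make fairness per source address instead:
tc qdisc add dev eth0 parent 1:10 handle 10: esfq perturb 10 hash src
```

With "hash src" each internal IP becomes one queue, which is exactly the one-share-per-host policy Radu describes; with stock SFQ, substitute "sfq perturb 10" and accept per-connection fairness.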


PS/2 port still live after shutdown

Wed, 26 Feb 2003 21:43:00 +1300
D & E Radel (radel from inet.net.nz)

Hi everyone.

When Linux shuts down with halt -p, my PC will turn off, but Linux won't switch off the power to my PS/2 port. It is turned on when X starts, but when X shuts down, or the PC is shut down, the port remains on - and my optical mouse stays on, light still glowing, etc.

.... However, Windows 98SE will shut this down properly every time. I have kernel 2.4.20 and have tried enabling ACPI and APM. And of course I have an ATX PSU, and nothing weird enabled either in cmos or jumpered.

I know that some boards just have power going through PS/2 ports after soft shutdown as a feature/bug, but Win98SE manages to shut down this one ok.

If someone knows how to fix this, I would really appreciate your help.

Thanks in advance.
D.Radel.

PS. Sorry for mentioning that other OS in this email.


Only USB mouse/keyboard recognized by KDE.

Thu, 30 Jan 2003 04:14:27 -0800 (PST)
Stephen (anonymous)

Greetings. I installed Red Hat Linux 8.0 on my desktop computer. I used my PS/2 keyboard and mouse to install the software from CD images downloaded from Red Hat. After the software installation completed, my computer rebooted to the KDE login screen. My PS/2 keyboard and mouse do not work; only a USB keyboard and mouse work. When I boot my system into runlevel 3, my PS/2 keyboard works. How do I configure my system so that I can use my PS/2 mouse and keyboard with KDE?

Any information is appreciated. Thanks.
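One likely cause (an assumption, since we haven't seen the machine): Red Hat's installer wrote only the USB devices into /etc/X11/XF86Config. Adding PS/2 InputDevice sections along the lines below, and referencing them in the ServerLayout, may help. The identifiers are invented, and X allows only one CoreKeyboard, so use SendCoreEvents instead if the USB keyboard already holds that role.

```
Section "InputDevice"
        Identifier  "PS2Mouse"
        Driver      "mouse"
        Option      "Protocol"  "PS/2"
        Option      "Device"    "/dev/psaux"
EndSection

Section "InputDevice"
        Identifier  "PS2Keyboard"
        Driver      "keyboard"
EndSection

# ...then reference both in the existing ServerLayout section:
#     InputDevice "PS2Mouse"    "SendCoreEvents"
#     InputDevice "PS2Keyboard" "CoreKeyboard"
```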


linux statistics

Tue, 25 Feb 2003 13:48:32 +0600
Sanjaya Singharage (SanjayaS from jkcs.slt.lk)

Hi all,
are there any reputable statistics available on the web comparing Linux, other *nixes and Windows in the enterprise server market? Can somebody give me some pointers or links? Any reputable articles would also be welcome. I've been rummaging around the web the whole day but couldn't find anything useful.
Thanks.


GENERAL MAIL


What a great service you have done me

Tue, 04 Feb 2003 13:05:37 +0100
Joe Programmer (ctio from lycos.co.uk)

Dear Mike,

After my first article was published, about thirty people downloaded my console interface library. In the few days since you published my second, over ninety people have come for it. If only ten percent of those try to write an editor like I described, you will have turned my dream into a reality.

When I cycled into the city to log on at the daycentre this morning, I had been in the countryside for a week. I had no idea I had been published because I expected it would be in the March edition. I agreed with your comments about C++ not being the universal language I made it out to be and was going to rewrite it with your suggestions in mind.

Unless the author says he plans to do a revision, I assume the article is finished when I receive it. -- Mike

Now that it's gone out and I've seen the response, I don't care how bigoted people think I am :)

I cannot thank you enough.

Yours faithfully, Stephen Bint

We have encouraged Stephen to write or be involved in more articles; you'll see some of the results when they're ready for publication. -- Heather

Thanks for the encouragement. It was good to hear what the article is doing for you. -- Mike


Mike,

Thank you for pointing out that I gave the misleading impression in my article, The Ultimate Editor (LG#87), that C++ is the first language of all Linux users. Obviously Linux users vary widely in their choice of first language.

It would be a boon to the users of any language, especially beginners, to have an editor which is extensible in their own language. C++ users seem to be the only group who do not have one yet.

Stephen Bint


The Ultimate Editor

Sun, 23 Feb 2003 18:25:46 +0800
Jim Dennis (Linux Gazette Sr. Contributing Editor)
Question by Peter (pfheiss from philonline.com)

Dear Editor,

I cannot fully understand the article "The Ultimate Editor" in the Feb. LG. Having migrated from DOS to Linux without passing through MSWindooze, I have to ask what is wrong with the Linux text editors such as joe, xedit, gedit, gxedit, xeditplus, kedit, kwrite, kate, vim, gvim, cooledit -- any more? Yes, I am sure there are.

I have seen the text editor in Windooze and thought it a joke compared with some of the Linux text editors mentioned.

Maybe Stephen Bint should try them all first before picking up more cigarette butts in the gutter, thus damaging his lungs and consequently his brain.

Regards
Peter Heiss

Well, I can understand the article. I can also disagree with it, but first I have to understand it. The title seems destined to invite flames (perhaps he's asking for a light for those soggy gutter butts).

He doesn't like the Linux text/console editors he's tried. He doesn't bother to lay out the criteria against which he's rating them. Other than that it's simply an announcement of a library which is built on top of SLang which, of course, is built on top of ncurses.

It would be easy to cast aspersions, even to question my fellow editors on the merits of including this article. However, I'll just let the article speak for itself. I'll ask, why doesn't xemacs support mouse on the console or within some form of xterm (xemacs does support ncurses color, and menus)? How about vim?

Personally I mostly use vim or xemacs in viper (vi emulation) mode. There are about 100 other text editors for Linux and UNIX text mode (and more for X --- nedit being the one I suggest for new users who don't want to learn vi --- or who decide they hate it even after they learn it).

-- Jim Dennis

I hope that Stephen's comment in the previous portion clarifies what he was really thinking. On the cigarette analogy, he has roll-your-own papers in his pocket, of a C++ variety, but needs someone to share loose tobacco. Then everyone sharing this particular vice can enjoy having a smoke together... downwind of folk who already like their text-editors :D Yes, folk who are used to seeing their brand down at the liquor store are likely to think making your own cigarettes is either quaint or nutty. But it's a big world out here, and the open source world is built by folk who like to roll their own... -- Heather
Let's remember that when Stephen complains, he doesn't just whine and expect others to do things his way. Rather, he takes it upon himself to contribute code that does whatever it is he's complaining about. See I Broke the Console Barrier in issue 86. That was the main reason I published The Ultimate Editor, even though I strongly objected to his assumptions that (1) C/C++ are the only worthwhile languages and (2) emacs should be flogged over the head for not using menus and keystrokes à la DOS edit. The second bothered me enough to insert an Editor's note saying there are other issues involved. The first didn't bother me quite as much, so I sent the author a private e-mail listing the C/C++ objections and asked him to consider a follow-up article or Mailbag letter that took them into account. And it worked: we had a great discussion between Stephen and the Editors' list about C/C++ vs scripting languages, and that led to some excellent article ideas.

Also remember that Stephen is homeless, and his Internet access is limited to an hour here, an hour there on public-access terminals. A far cry from simply sitting in front of your computer that happens to be already on. So he is putting a high level of commitment into writing these articles and programs, higher than many people would be willing to do. It's unfortunate that his limited Internet access prevented me from knowing at press time that he had decided on a last-minute revision to tone down the article and make it more balanced, but c'est la vie. -- Iron


editor's comment...

Tue, 4 Feb 2003 11:16:59 +0100
james (jamiergroberts from mailsnare.net)

In Linux Gazette ( a most excellent ongoing effort, btw):

On behalf of the staff and the Gang, thanks! -- Heather

http://www.linuxgazette.com/issue87/bint.html

there's an editorial aside:

The Ultimate Editor would be what emacs should have been: an extensible editor with an intuitive mouse-and-menu interface. [Editor's note: emacs was born before mice and pulldown menus were invented.]

AFAIK, nope :-) Or at least, not exactly! This would be better:

...............

[Editor's note: emacs was born before mice and pulldown menus were *widely known outside research institutes*.]

...............

Though of course, RMS was at a research institute, so may have known of mice by then :-)

For mouse references, see (amongst many other possibilities):

http://www.digibarn.com/friends/butler-lampson/index.html

or any of the Engelbart stuff. Mice were pretty well known by '72, Emacs dates from '76: TECO (Emacs' predecessor) does however date back almost to the invention of the mouse - I haven't found out exactly when TECO was initiated, around '64 I guess (but see

http://www.ibiblio.org/pub/academic/computer-science/history/pdp-11/teco/doc/tecolore.txt

if the question is really of interest).

I think, strictly speaking, that the editor macros were by their nature trapped in the environment of the editor they were macros for: TECO. So it isn't precisely right to say that TECO was emacs' predecessor; "parent" or "original environment" maybe, but I don't believe TECO was intended to be a general purpose editor ... much less the incredible power beyond that, that the emacs environment grew into after taking off on its own.
Not all menus are pull-down, nor should a mouse be required to reach pull-down menus... a matter of style and usability. For my own opinion, I feel that emacs does have menus; they just don't always look the part. -- Heather

This is all, I agree, excessively pedantic - I've also offered my services as occasional proofreader :-)

JR

Thanks to everybody who offered to proofread. We now have some twenty volunteers. -- Iron


wordsmithing in Gibberish

Mon, 24 Feb 2003 08:10:15 -0800 (PST)
Raj Shekhar (lunatech3007 from yahoo.com)

Dear Ben,

This is with reference to "Perl One-Liner of the Month: The Case of the Evil Spambots", which was published in LG#86. I especially enjoyed your definition of Gibberish.

Here is something I found in my fortune files. I am pretty sure wordsmithing in the Marketroid language is done using this procedure. Please keep up the good work of giving underhand blows to the Marketroid.

...............

Column 1		Column 2		Column 3

0. integrated		0. management		0. options
1. total		1. organizational	1. flexibility
2. systematized		2. monitored		2. capability
3. parallel		3. reciprocal		3. mobility
4. functional		4. digital		4. programming
5. responsive		5. logistical		5. concept
6. optional		6. transitional		6. time-phase
7. synchronized		7. incremental		7. projection
8. compatible		8. third-generation	8. hardware
9. balanced		9. policy		9. contingency

The procedure is simple. Think of any three-digit number, then select the corresponding buzzword from each column. For instance, number 257 produces "systematized logistical projection," a phrase that can be dropped into virtually any report with that ring of decisive, knowledgeable authority. "No one will have the remotest idea of what you're talking about," says Broughton, "but the important thing is that they're not about to admit it."

- Philip Broughton, "How to Win at Wordsmanship"

...............

Cheers Raj Shekhar
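Broughton's table practically begs to be scripted. Here is a toy POSIX-shell version of our own; the three word lists are copied from the table above, and everything else is embellishment.

```shell
#!/bin/sh
# Buzzword generator after Philip Broughton: each digit of a
# three-digit number indexes one column of the table.
c1="integrated total systematized parallel functional responsive optional synchronized compatible balanced"
c2="management organizational monitored reciprocal digital logistical transitional incremental third-generation policy"
c3="options flexibility capability mobility programming concept time-phase projection hardware contingency"

# pick LIST INDEX -- print the INDEXth (0-based) word of LIST
pick() { echo "$1" | tr ' ' '\n' | sed -n "$(($2 + 1))p"; }

n=${1:-257}     # default to Broughton's own example number
echo "$(pick "$c1" $((n / 100))) $(pick "$c2" $((n / 10 % 10))) $(pick "$c3" $((n % 10)))"
```

Run with no argument, it prints "systematized logistical projection", just as the article promises.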


Point of Sale

Tue, 11 Feb 2003 14:44:29 -0800
Gene Mosher (gene from viewtouch.com)
Gene's HTML-only email barely escaped the spam trap when Mike recognized that it was a followup to Issue 87, Mailbag #2
Folks, while our main publication form is HTML, we have our own style guidelines and pre-processing to do; if you're not submitting a full article, we greatly prefer plain text. -- Heather

ViewTouch is genuine killer app. My life's work resulted in the sales of millions of computers in the 26 years since I first started writing and using POS software. I invented many of the concepts in use today worldwide in retail software, including virtual touchscreen graphics to represent the universe of retail business operations. Much of what we are doing today will become standard in the future. ViewTouch is the original and longest-lived. Thanks for your comments.

Gene Mosher

Hello, Gene - I remember talking to you when I wanted to install VT for a client in Florida a few years back (they backed out of the deal by trying to rip me off, but, erm, I had the root password. We parted ways, and they're still without a POS last I heard. :) As I'd mentioned, I really like the look and feel of your app; however, good as it is, not being Open Source limits its applicability in the Linux world. If I remember correctly, that was the upshot of our discussion here.

Just for the record, folks - Gene was very friendly and very helpful despite the fact that the client had not yet bought a license from him; given his help, the setup (at least the part that I got done before the blow-up) was nicely painless.

Ben Okopnik

We also got a request for aid finding a POS from a fellow with a pizza parlor; luckily, Linux folk have already dealt with Pizza, although it's worth following the old articles over at LJ and seeing how that project moved along. We're still looking for news or articles from people using or developing open source Point of Sale, and I re-emphasize, we mean physical cash registers, not just e-commerce. E-commerce apps we've got by the boatload, on sale and in "AS IS" condition. -- Heather

GAZETTE MATTERS


April/May/June schedule

Thu, 27 Feb 2003 13:08:57 -0800
Mike ("Iron") Orr (Linux Gazette Editor)

I will be out of town March 18 - April 3 at the Python conference and Webware sprint (and visiting New York, Chicago, and Columbus [Ohio]), Heather will be busy the week before Memorial Day (May 26), and I'll be gone Memorial Day weekend.

This means I'll need to finalize the April issue by March 14, so the article deadline is March 10. I've let the recent authors know.

May's issue will be normal.

For June, the article deadline will be May 19 (a week early).


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

More 2¢ Tips!
By The Readers of Linux Gazette

See also: The Answer Gang's Knowledge Base and the LG Search Engine


make in future? I'm confused *now*

Tue, 28 Jan 2003 00:35:59 -0600
Mike Orr (Linux Gazette Editor)
Question by Subramanian Varadarajan (hi_subri from hotmail.com)

hai,

I am Subbu and I encountered this problem when I ran

make - filename.

How do I fix this problem? Can you help me?

make: *** Warning: File `makefile.machine' has modification time in
the future (2003-01-28 07:07:00 > 2003-01-28 00:09:19)
make: Nothing to be done for `all'.
make: warning:  Clock skew detected.  Your build may be incomplete.

I guess that my real-time clock is set incorrectly. How do I correct it?

I appreciate your time.

thanks,
subbu

Ugly HTML had to be beaten up and reformatted. Please send messages to The Answer Gang in text format. -- Heather
[Mike] The message means what it says: 'make' found a file that "was" modified in the future. That may or may not be a problem, and if it is, it may or may not be significant. Do you know by other means whether 'makefile.machine' should have been updated? I.e., did you modify any file related to it?
How did that file get on your machine in the first place? Did you copy or untar it from another computer in a way that would have preserved the foreign timestamp? If so, then the clock on the other computer may be wrong.
To check your own computer's clock, see the 'date' and 'hwclock' commands. 'date' shows and sets Linux's time; 'hwclock' shows and sets the real-time clock. First set Linux's time correctly, then use 'hwclock --utc --systohc' to reset the hardware clock.
If your hardware clock is pretty unreliable (as many are), you can use 'hwclock --adjust' periodically (see "man hwclock"), run ntp or chrony to synchronize your time with an Internet time server, or put the kernel in "eleven-minute mode" where it resets the hardware clock every eleven minutes. (Answer Gang, how do you activate eleven-minute mode anyway?)
[Ben] In the "hwclock" man page:
This mode (we'll call it "11 minute mode") is off until something turns it on. The ntp daemon xntpd is one thing that turns it on. You can turn it off by running anything, including hwclock --hctosys, that sets the System Time the oldfashioned way.
Also, see the "kernel" option under "man ntpd".
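Pulling the thread together, a typical repair sequence looks like this (run as root; the date shown is only an example, so treat this as a sketch rather than something to paste blindly):

```shell
date -s "28 Jan 2003 00:15"   # set the system (Linux) time
hwclock --utc --systohc       # copy it into the hardware clock
touch makefile.machine        # bring the offending file back to the present
```

The touch matters: even with the clock fixed, the file's future mtime stays until something rewrites it, so make keeps warning.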


H/W detection in Debian ?

Sun, 02 Feb 2003 02:49:05 +0000
Jimmy O'Regan (jimregan from dol.ie)
In reference to: Issue87, help wanted #1 -- Heather

You could try installing libdetect, and then running /usr/sbin/detect (detect is also used by Mandrake). Aside from that, the only thing I can suggest is filing bugs with Debian.


ppp over nullmodem cable - Linux client, win2k RAS server

Sun, 02 Feb 2003 03:13:36 +0000
Jimmy O'Regan (jimregan from dol.ie)
In reference to: Issue87, help wanted #2 -- Heather

The problem is the authentication on the Win2K side. Check out http://msdn.microsoft.com/library/default.asp?url=/library/en-us/apcguide/htm/appdevisv_8.asp Basically, since I assume the RAS server is running etc, you just need to enter this command on NT:

netsh ras set authmode NODCC

Last month Linux Magazine (UK - http://www.linux-magazine.com/issue/26/index_html) ran an article on setting up Direct Cable Connections with NT. I'll send on the details when I find where I left the magazine. You may try searching http://linux-magazin.de since Linux Magazine is a translated version of that.

http://www.tldp.org/HOWTO/Modem-Dialup-NT-HOWTO-9.html

This page may also be of use.


kernels? make your own!

Thu, 6 Feb 2003 02:08:18 -0800 (PST)
j l (cs25x from yahoo.com)
In reference to: Issue87, help wanted #1 -- Heather

Solution: stop using Red Hat, Debian or Mandrake kernels; download a fresh kernel from kernel.org and build with that.

The other answer is to look in your Makefile and check the line beginning with "EXTRAVERSION=". If you add your own name to that line and run make, you brand the kernel and modules with that name. Hope that fixes your problem.



"Sean Shannon" <sean@dolphins.org>
Tue, 4 Feb 2003 10:48:20 -0500

The hardest part in compiling a kernel is making the ".config" file. Some things to check:

  1. modify the Makefile, changing the EXTRAVERSION variable (looks like you did this, since the 2.4.18custom directory is made)
  2. make sure when you configure (I use make menuconfig) that you select "enable module support"
  3. make sure to run "make dep" after "make menuconfig"

[Thomas Adams] Yep -- good idea.

  4. Are you sure that "make modules" and "make bzImage" completed successfully?

[Thomas Adams]
Well, I usually do something like:
alias beep='echo -e "\a"'

make modules && for i in $(seq 10); do beep; done &&
make bzImage && for i in $(seq 10); do beep; done

  5. here is the install procedure I use (System.map is a symbolic link; RedHat 7.0; I use the raid 1 disk device /dev/md0, you'll probably use /dev/hda)

[Thomas Adams] or /dev/sda if s/he has a SCSI

To install the new kernel:

Copy the new kernel and system map to the "boot" directory

cp  /usr/src/linux/arch/i386/boot/bzImage   /boot/vmlinuz-2.2.16-22-custom
cp  /usr/src/linux/System.map  /boot/System.map-2.2.16-22-custom

Edit the file "/etc/lilo.conf". Add a new "image" section (add everything below)

See attached customkernel.lilo.conf.txt

[Thomas Adams] Often called a "stanza". Be careful though. I'd be more inclined to "label" this as "linux-test" so that it doesn't infringe on the "old" version of the kernel. Remember that up until this point, you're still testing (a trial run) the new kernel.

Activate the change as of next re-boot

/sbin/lilo

Install new System.map

rm  /boot/System.map
ln -s  /boot/System.map-2.2.16-22-custom  /boot/System.map

Reboot the system to build module.dep file

shutdown -r now
[Thomas Adams] Hmmm, deprecated. "init 6" is a better way.

Reboot the system after the login prompt appears: enter the "alt-ctrl-del" key combination

Reboot performed because modules.dep is created on first boot (if not, try running the "depmod" command manually then reboot)

[Thomas Adam] Not necessary. "depmod" is run through all of the init levels on a modern Linux system...

Good luck. Sean Shannon

[Jim Dennis] Most of this can be automated down to just two lines:
make menuconfig
make clean dep bzImage modules modules_install install
... note the list of multiple targets all on one line. "make install" will look for an executable (usually a shell script) named /sbin/installkernel (or even ~/bin/installkernel) and call that with a set of arguments as documented in ... (/usr/src/linux) arch/i386/boot/install.sh
Here's a relevant excerpt:
# Copyright (C) 1995 by Linus Torvalds
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
# "make install" script for i386 architecture
# Arguments:
#   $1 - kernel version
#   $2 - kernel image file
#   $3 - kernel map file
#   $4 - default install path (blank if root directory)
#
# User may have a custom install script

if [ -x ~/bin/installkernel ]; then exec ~/bin/installkernel "$@"; fi
if [ -x /sbin/installkernel ]; then exec /sbin/installkernel "$@"; fi
So this can put the appropriate files into the appropriate places and run /sbin/lilo or whatever is necessary on your system.
I like to copy .config into /boot/config-$KERNELVERSION. Also, in my case the script has to mount -o remount,rw /boot since I normally keep /boot mounted in read-only mode. The script remounts it back to ro mode after running /sbin/lilo.
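To make that concrete, here is a minimal /sbin/installkernel sketch of our own, following only the argument order documented in install.sh above; the destination names and the lilo call are assumptions to adapt to your own system.

```shell
#!/bin/sh
# Minimal installkernel sketch.  "make install" invokes it as:
#   installkernel VERSION IMAGE SYSTEM.MAP [INSTALL-PATH]
installkernel() {
    ver="$1"; image="$2"; map="$3"; dir="${4:-/boot}"
    cp "$image" "$dir/vmlinuz-$ver"         # the kernel itself
    cp "$map"   "$dir/System.map-$ver"      # its symbol map
    # keep the build config alongside the kernel
    if [ -f /usr/src/linux/.config ]; then
        cp /usr/src/linux/.config "$dir/config-$ver"
    fi
    # re-run the boot loader if present
    if [ -x /sbin/lilo ]; then /sbin/lilo; fi
}
if [ "$#" -ge 3 ]; then installkernel "$@"; fi
```

A remount-rw step (and a matching remount-ro afterwards) would go around the cp lines for a read-only /boot.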
For new kernels you can save some time in menuconfig by preceding that make with:
 	cp /boot/config-$RECENTKERNELVERSION ./.config.old
	make oldconfig
... which will set all the new config options to match any corresponding settings in the old config. Then you can focus on the new stuff in menuconfig.
Another useful tweak for some people is to edit ... (/usr/src/linux) .../scripts/Menuconfig and find the single_menu_mode variable:
# Change this to TRUE if you prefer all kernel options listed
# in a single menu rather than the standard menu hierarchy.
#
single_menu_mode=
... for those that don't like to have to expend extra keystrokes popping in and out of subsections of the menuconfig dialogs.
Sadly this particular feature has changed (at least by 2.5.59) with the inclusion of a new kconfig system (instead of menuconfig).
You can get a collapsible tree of menu options in the new system using: make MENUCONFIG_MODE=single_menu menuconfig (However, it starts with all branches collapsed. <grump!> ;)


Ipchains vs. Iptables

Fri, 14 Feb 2003 11:06:01 +0100
De Groote, Patrick (Patrick.DeGroote from NETnet.be)
In reference to: Issue87, help wanted #6 -- Heather

If you use ipchains, then you should look at masquerading and port-forwarding.

The following command

ipmasqadm portfw -a -P tcp -L $4 4662 -R 192.168.1.100 4662

should do the trick.

rgds Patrick De Groote



Bruce Ferrell <bferrell@baywinds.org>
Sat, 22 Feb 2003 17:30:34 -0800

if you're using ipchains you need something like this:

/usr/sbin/ipmasqadm portfw -a -P tcp -L <EXTERNAL ADDRESS> 11900 -R <INTERNAL ADDRESS> 11900
The point is, whether you use a variable or hardwire in an address, you need to specify both sides of the forwarding connection. Also note that the two examples selected a different port to play on, but the principle is the same. I hope that leaving both examples in makes it all clearer to readers. -- Heather


Jim Kielman <jimk@midbc.com>
05 Feb 2003 23:30:27 -0800

I ran into a similar problem with a client that had to have PCAnywhere access to one of the computers on his network. My solution was to use "ipmasqadm portfw" to forward the ports PCAnywhere needed to access. The server is running Debian potato with a stock 2.2.20 kernel. Here is what I use:

ipmasqadm portfw -a -P tcp -L <internet IP> 4162 -R <mldonkey IP> 4162
ipmasqadm portfw -a -P udp -L <internet IP> 4162 -R <mldonkey IP> 4162
ipmasqadm portfw -a -P tcp -L <internet IP> 4161 -R <mldonkey IP> 4161
ipmasqadm portfw -a -P udp -L <internet IP> 4161 -R <mldonkey IP> 4161

internet IP = the IP address of the computer connected to the internet.

mldonkey IP = the IP address of the computer running mldonkey.

I don't know if you need both udp and tcp, but it works for me. Hope this helps.

Regards
Jim Kielman


a new language

Mon, 10 Feb 2003 00:16:49 -0800
Rick Moen (the LG Answer Gang)
In reference to: Issue 86, 2c Tips #3 -- Heather

John Karns:

Cool - thanks for the pointer. I think I'll check it out. I knew that some IDEs exist for Linux, but never really took the time to look at one.

Why look at only one when you can look at over a hundred?
http://linuxmafia.com/~rick/linux-info/applications-ides.html

Note we're pointing to his gathered list of numerous "integrated development environments" - the previous entry pointed to his description answering that (1) yes, we have them, lots and lots; and (2) if you think you're seeking one, you should make sure you are solving the right problem first. -- Heather


Key Remapping....

Thu, 30 Jan 2003 13:37:20 -0800
Mike Orr (Linux Gazette Editor)
Question by James Scott (jscott from sangoma.com)

Is it possible to remap the <tab> key to another key on the keyboard? One of my co-workers has a broken left pinky and is going insane not being able to use the tab key to complete commands.

I've done a fair amount of searching to no avail... any help would be greatly appreciated.

[Mike] Grr, I just read yesterday about somebody turning Scroll Lock into another Escape key, now where was it...?
You can remap any key using the "loadkeys", "showkey" (singular) and "dumpkeys" commands. That's on the console. You have to do additional steps for X. See the Keyboard and Console HOWTO
http://www.tldp.org/HOWTO/Keyboard-and-Console-HOWTO.html
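As a worked example (ours, not from the HOWTO): turning the rarely-missed Caps Lock into a second Tab. Keycode 58 is Caps Lock on ordinary PC consoles and X keycode 66 on many XFree86 setups, but verify yours with showkey and xev first.

```shell
# On the text console (as root):
echo "keycode 58 = Tab" | loadkeys -
# Under X -- clear the Lock modifier first, then remap:
xmodmap -e "clear lock" -e "keycode 66 = Tab"
```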

Thanks for the quick reply. Helps a lot.

James


Palm magic

Sun, 2 Feb 2003 10:21:46 +0100
Ben Okopnik (the LG Answer Gang)
Question by Huibert Alblas (huibert_alblas from web.de)

I was desperately trying to use my Palm with the Evolution mailer. I recompiled everything but the kitchen sink and got the Gnome2 and Gnome1.4 capplets and applets totally mixed up; in the end it was working, but Gnome was broken. So now I'm repairing Gnome2, and then I'll try to write the appropriate spells for my Palm connection

Halb uses the Sorceror distro, which refers to compiling its scripts and packages as "casting spells". -- Heather
[Ben] I've found the "appropriate spells" for the Palm - for my M-125 with the USB cable, at least - to be "jpilot" and "coldsync". "jpilot" is really well done, except for the selection interface in the "Install" menu (select a file, click "Add". Select next file, click "Add". And so on for, say, 50 files.) "coldsync" works at a lower level - it's great for reinitializing user info, a quick install with or without synching, and generally tweaking Palm comms. As an example, among the files that I carry on the Palm, I have The Moby Shakespeare collection (all of The Bard in one file) and Gibbon's "Decline and Fall of the Roman Empire", volumes 1-6; both rather large (~5MB). "jpilot" refused to load them (segfaulted). So did my brother's Wind*ws Palm desktop. "coldsync", however, when used with the "slow sync" option, managed it just fine. KDE's palm app, though, is severely broken (to its credit, it mentions that in the initial screens); it hosed my Palm so hard that I had to do a hard reset, and re-init the user (another thing that "jpilot" couldn't handle.)

Yes, well, thanks for the info. Jpilot and stuff works like a charm (Palm M105, the small one), but I wanted to sync my mail addresses in Evolution... which is based upon Gnome 1.4 (c)applets, which are horrible to get to play nice with the Gnome 2.0 install.

Good to know about the big files though...


Initial thoughts on ratpoison

Mon, 24 Feb 2003 09:40:55 -0700
Jason Creighton (androflux from softhome.net)

For those of you who don't know, ratpoison is a light (very light) window manager. (http://ratpoison.sourceforge.net). The basic scheme is to have all apps fullscreen, using screen-like key bindings to switch between windows. I've been using it for about an hour or so now (Hint: Look at the sample.ratpoisonrc in the doc directory. Don't end up hacking the source code to change the prefix key like I did.), and I'm liking it. The best thing, of course, is the tons of screen real estate you get without any window title bars, borders, etc.

If you like doing everything with the keyboard, or you just want tons of screen real estate, give ratpoison a whirl.
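The prefix key Jason mentions can be changed without hacking the source. A hedged sketch (this assumes ratpoison reads ~/.ratpoisonrc at startup and that its escape command sets the prefix; C-z is just an example binding):

```shell
# Append a prefix-key binding to ratpoison's startup file.
# "escape C-z" makes Ctrl-z the command prefix instead of the default Ctrl-t.
echo 'escape C-z' >> ~/.ratpoisonrc
```

Check sample.ratpoisonrc in the doc directory for the other commands it accepts.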

Also see this article on freshmeat: http://freshmeat.net/articles/view/581


rc.local in debian

Sun, 9 Feb 2003 15:42:07 +0530
Rick Moen (the LG Answer Gang)
Question by Joydeep Bakshi (joy12 from vsnl.net)

If you aren't using RHL, simply edit /etc/rc.d/rc.local

Atul

But there is no such file in Debian. What file should I edit in Debian?

Thanks in advance.

The Linux Oracle has pondered your question deeply.
And in response, thus spake the Oracle:
echo '#!/bin/sh' > /etc/rc.local
chmod 744 /etc/rc.local
RL=`grep ':initdefault:' /etc/inittab | cut -d: -f2`
echo "LO:$RL:once:/etc/rc.local" >> /etc/inittab
killall -HUP init
You owe the Oracle a better understanding of why subverting the SysVInit architecture is fundamentally a bad idea in the first place.
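The grep/cut pipeline in the Oracle's recipe pulls the default runlevel out of the initdefault line of /etc/inittab. A minimal demonstration on a sample line (the real recipe reads the live file, and needs root to edit it):

```shell
# A sample initdefault line, as typically found in /etc/inittab.
line='id:2:initdefault:'
# Field 2 of the colon-separated record is the default runlevel.
RL=$(printf '%s\n' "$line" | grep ':initdefault:' | cut -d: -f2)
echo "$RL"    # prints "2"
```

The "LO:$RL:once:..." line then registers /etc/rc.local to run once at that runlevel, and the HUP tells init to re-read inittab.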


recording sounds on linux for windows

Wed, 19 Feb 2003 02:19:49 +0800
Huibert Alblas (huibert_alblas from web.de)
Question by 50258246 (50258246 from student.cityu.edu.hk)

Hi! I'm rayho. I would like to ask how to record sound from the microphone and then transmit the sound from the Linux OS to the Windows OS. Also, I don't understand which file on the Linux system the recorded sound is stored in, and what hardware and software I need to do this transmission. Thank you for your help!!

[Halb] Hi there,
This may sound a bit simple but I would do it like this:
needed Hardware:
needed software:
On the other hand, you might not want to transport single files, but want to do some kind of Internet audio broadcasting or something. You might want to look into
What did you have in mind?


Secure Password Authorization - NTCR ?

Fri, 14 Feb 2003 01:18:41 +0000
Jimmy O'Regan (jimregan from dol.ie)
Question by Neil Belsky (ursine90 from msn.com)

Neil Belsky wrote:

Were you ever able to solve this problem?
http://www.linuxgazette.com/issue64/lg_mail64.html#wanted/1

NTCR is another name for NTLM, which is supported by fetchmail.


Fwd: Terminating Misbehaving Programs

Wed, 12 Feb 2003 22:48:50 -0800 (PST)
Raj Shekhar (lunatech3007 from yahoo.com)

I received this tip for inclusion in my HOWTO:

http://geocities.com/lunatech3007/doing-things-howto.html

However, as it is a bit advanced for a newbie's HOWTO, I did not include it. I am forwarding it to you.

Regards
Raj

[C.R. Bryan III] Subject: Doing Things in GNU/Linux
Good stuff :) Something I can put on a firewall machine when I put it onsite (since I leave Apache in for a status.cgi page anyway)
In the section "Terminating Misbehaving Programs":
If the afflicted machine is on a network with another Linux machine, or a Windows machine with PuTTY, there are additional steps that can be taken before hitting the Big Red Two-by-Four switch. (My network runs RHL 6.2 on older boxes, old as in P133, so I get practice in this every time Netscape walks into a Java site and freezes.)
  1. Shell into the afflicted machine. Use ssh if you've got it, telnet otherwise. If VNC is installed at both ends, maybe you can use that. Just because the local desktop is frozen doesn't always mean that all desktop functioning is frozen. If the machine won't log you in, obviously it's game-over, so at that point you have to reset the box. Often, though, especially on older boxen, it's just X that's either frozen or in a really deep thrashing session, and you can get a shell prompt. Root-to-root ssh is most convenient.
  2. Get root on the afflicted box with su.
  3. Try to kill off just the program that's freezing things, and try to do it nicely.
     
    a. If you can get X apps to forward, or you can get a VNC window open, you can bring up kpm (the KDE process manager), which, with all the information presented, allows you to pinpoint just the app to kill with a right-click. Try several times to get it to go away, starting with Hangup, then Terminate, then Kill. The more of a chance you give the program to clean up its exit, the less garbage you'll leave lying around in the system.
     
    b. If you know the name of the program that has gotten hung, and only one instance of it is running, use killall. Let's assume for example that it's netscape:
     
    # killall -HUP netscape
    # killall -TERM netscape
    # killall -KILL netscape

     
    Killall does just that, kills off every instance of a program that it finds. That's appropriate for netscape, since it has a session-manager core which is usually the part that's locked up. If you've got a dozen xterms open, and ytree running in half of them, though, killing off every ytree might not be what you want; often, it's the helper-app that ytree launched that's frozen up (lynx, for instance) and you can killall that.
     
    c. Use top and other shell tools to zero in on which process to kill, then use kill. (Here I don't have that much experience: when I need to use top and kill, it's on a firewall without X, where all the running processes fit in an xterm/ssh window, so it's simple to fish out the pid to kill.)
  4. If it won't kill, or you can't figure out who to kill, or things just seem hosed at the X level, as long as you can get root on a shell command-line, you can tell it:
     
    # init 3;init 5
     
    ...and that'll do what Ctrl-Alt-Backspace would do: restart X to a graphical login. Your underlying filesystem will have core files and DEADJOEs left lying around from the X-level programs that had to abort, but you won't have to fsck everything on a dirty boot.
  5. If you think you might have stuck ports and locks from the killed X-level processes, and the machine doesn't have duties that would prevent it, or if X won't come back up, you can do a clean reboot to put things back in order, probably in less time than it'd take to find and free the stuck resources...
     
    # shutdown -r now
     
    That'll take down the X level, giving the X programs a chance to clean up after themselves, then the rest of the machine; your filesystems will be unmounted cleanly before the reboot.
Bottom line: if you can shell or VNC into the frozen machine, there are things you can do to avoid losing data in the innocent processes you're running in X or corrupting your filesystem. You can even do some of these things from Windows if you have the right tools (telnet, ssh, PuTTY, VNC), as long as you have two or more machines on the same network.
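Step 3c above can be sketched with ps and kill; "netscape" is just a stand-in for whatever program is hung, and the escalation mirrors the killall sequence in step 3b:

```shell
prog=netscape                              # hypothetical: name of the hung program
pid=$(ps -C "$prog" -o pid= | head -n 1)   # first matching PID, if any
if [ -n "$pid" ]; then
    # Escalate gently: hangup first, then terminate, then the unblockable KILL.
    kill -HUP "$pid"; sleep 2
    if kill -0 "$pid" 2>/dev/null; then kill -TERM "$pid"; sleep 2; fi
    if kill -0 "$pid" 2>/dev/null; then kill -KILL "$pid"; fi
else
    echo "no $prog process found"
fi
```

The kill -0 checks send no signal at all; they only test whether the process is still around before escalating.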
How much of this you think might be appropriate to a newbie-help, I don't know, but that's my experience, anyway :)


Two sound cards

Wed, 5 Feb 2003 11:52:43 +0100
Dean Buhrmann (d.buhrmann from chello.nl)
In reference to: Issue 87, 2c Tips #1 -- Heather

Hello,

It's great how you tackled this problem. I have a simple SoundBlaster 16 card. This card (with this chipset) turned out to be multichannel.

I play online games on the Internet (Tribes 2), and for communication we use a voice communication program (TeamSpeak 2). I also want to hear the sound of the game. TeamSpeak 2 is able to use a different channel (dsp0/dsp1).

So I send the game sound to /dev/dsp1 and the voice communication to /dev/dsp0. I couldn't get it working with the ALSA drivers, though others with different sound cards can, so I used the OSS driver. It works great with only one sound card.

If a program insists on using the default /dev/dsp (dsp0) and you want it to use /dev/dsp1, you can change the symlink so that /dev/dsp points to /dev/dsp1.
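Dean's symlink trick, spelled out. The sketch below rehearses it in a scratch directory so you can see the mechanics; on the real system the directory is /dev and you need root:

```shell
# Rehearsal of the symlink trick in a scratch directory; on the real
# system the directory is /dev and the commands must be run as root.
dev=$(mktemp -d)
touch "$dev/dsp0" "$dev/dsp1"
ln -s dsp0 "$dev/dsp"     # the usual default: dsp -> dsp0
ln -sfn dsp1 "$dev/dsp"   # the trick: repoint dsp at dsp1 in one step
readlink "$dev/dsp"       # prints "dsp1"
```

The -f/-n flags make ln replace the existing link in place instead of following it into the old target.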

More information on http://www.teamspeak.org

Linux is a very stable platform for games, and now there is a (free) voice communication program too.


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Contents:

¶: Greetings From Heather Stern
(?)How can I turn one PC into two (effectively)?
(?)H/W detection in Debian ? --or--
There's More Than One Way To Detect It
TMT1WTDI: not just for perl hackers anymore
(?)Regarding the paper entitled "Maintainability of the Linux Kernel" --or--
Linux Kernel Maintainability: Bees Can't Fly
but a Hurd of them might go Mach speed...
(?)routing to internet from home . Kernel 2.4

(¶) Greetings from Heather Stern

Whew! One thing I can say: there was a lot of good stuff this month. There are so many good things to say, and I just can't edit them all.

But don't you worry. We've got something for everyone this month. Newbies can enjoy a list of apps designed to make setup a little more fun (or at minimum, a little less of a headache). The intelligentsia can see what the Gang thinks of some academic notions for the future of kernels. And everyone hungering for more about routing has something keen to get their teeth into. Experimenters... nice trick with two monitors, here.

In the world of Linux there's more to politicking than just the DMCA guys trying to get us to stop ever looking at "their" precious copyrighted works ever again. Among the Linux kernel folk there's snatches here and there of an ongoing debate about source code control systems. You see, BitKeeper has the power to do grand things... but for people who have not decided that they hate CVS, it's a bit of a pain to pull out small patches. For people who don't qualify to use BitKeeper under their only-almost-free license (you can't use it if you work for someone who produces a competing sourcecode control system, if I read things right ... thus anyone who works for RH shouldn't, et al.) this is a bad thing.

For that matter, I'm a bit of a programmer myself, but if I'm going to even glance in the kernel's direction, I need much smaller pieces to chew on, and I really didn't want to spend the better part of a month learning yet another source control system. (Not being paid for doing so being a guiding factor in this case.) I had to thrash around the net quite a bit to get a look at a much smaller portion of the whole.

So some of the kernel gang wrote some scripts to help them with using the somewhat friendly web interface (folks, these definitions of "friendly" still need a lot of work) and Larry threatened to close down bkweb if that bandwidth hit got too high. In my opinion, just about the worst thing he could have said at that moment - it highlights why people are trying to escape proprietary protocols - they want features, but Linux folk, having tasted the clean air of freedom, don't want to be locked indoors just because a roof over their code's head is good to have at times.

Don't get me wrong. Giant public mirrors of giant public projects are great things, and as far as I can tell BitKeeper is still committed to a friendly hosting of the 2.5.x kernel source tree, among a huge number of other projects. Likewise SourceForge. But we also need ways to be sure that the projects themselves can outlast the birth and death of companies, friendships, or the interest of any given individual to be a part of the project. The immortality of software depends on the right to copy it as much as you need to and store it anywhere or in any form you like. If the software you are using isn't immortal in this sense then neither are the documents, plans, hopes, or dreams that you store in it. More than the "viral freedom" clauses in the GPL or the "use it anywhere, just indemnify us for your dumb mistakes" nature of the MIT and BSDish licenses, this is the nature of the current free software movement. And you can quote me on that.

Readers, if you have any tales of your own escapes from proprietary environments into Linux native software, especially any where it has made your life a little more fun, then by all means, we'd love to see your articles and comments. Thank you, and have a great springtime.


(?) How can I turn one PC into two (effectively)?

From Chris Gibbs

Answered By Jimmy O'Regan, Jim Dennis

(?) Hi ya,

I have a dual-headed system. I am not really happy with Xinerama, because having a different resolution on each monitor does not make sense for me, and having two separate desktops for a single X session seems limiting. Neither solution works well for apps like kwintv.

But this is Linux! I don't just want to have my cake and eat it - I want the factory that makes it! What I really want is to have a PS/2 mouse and keyboard associated with one monitor, a USB mouse and keyboard associated with the other monitor, and the ability not just to run X from each, but to have text mode available also.

The idea being that I could have a text-mode session and an X session at the same time; that way I can have kwintv fullscreen and play advmame in SVGA mode fullscreen at the same time ;-)

So how do I initialise the second video card (one PCI, one AGP) so I can make it the monitor for tty2 or similar?

(!) [Jimmy] Google
http://www.google.com/linux?hl=en&lr=&ie=UTF-8&oe=utf-8&q=two+keyboards+two+mice+two+keyboards&btnG=Google+Search
came up with these links: http://www.ssc.com/pipermail/linux-list/1999-November/028191.html http://www.linuxplanet.com/linuxplanet/tutorials/3100/1

(?) Am I greedy or wot?

(!) [Jimmy] Nah, cost effective. "Able to maximise the potential of sparse resources". Some good CV-grade B.S.

These links are to articles about X. I already know I can have X however I want it across the monitors. That's easy...

What I want is separate text-mode consoles. So, at the risk of repeating myself: how do I initialise the second video card for text mode (not for X), and how do I associate it with specific ttys?

(!) [Jimmy] Well, you could set up the first set for the console and use the second for X. Okay, not what you asked :). So, to your actual question.
The device should be /dev/fb1, or /dev/vcs1 and /dev/vcsa1 on older kernels. You should have better luck with a kernel with Framebuffer support - according to the Linux Console Project (http://linuxconsole.sourceforge.net) there's hotplug support & multiple monitor support. The Framebuffer HOWTO has a section on setting up two consoles (http://www.tldp.org/HOWTO/Framebuffer-HOWTO-14.html). The example focuses on setting up dual headed X again, but it should contain what you need - "an example command would be "con2fb /dev/fb1 /dev/tty6" to move virtual console number six over to the second monitor. Use Ctrl-Alt-F6 to move over to that console and see that it does indeed show up on the second monitor."
(!) [JimD] It's serendipitous that you should ask this question, since I just came across a slightly dated article on how to do this:
http://www.linuxplanet.com/linuxplanet/tutorials/3100/1
Some of the steps in this process might be unnecessary in newer versions of XFree86 and the kernel. I can't tell you for sure as I haven't tried this. Heck, I haven't even gotten around to configuring a dual headed Xinerama system, yet.

(?) There's More Than One Way To Detect It

TMT1WTDI: not just for perl hackers anymore

From Joydeep Bakshi

Answered By Rick Moen, Dave Bechtel, Heather Stern

(!) [Heather] All this is in response to last month's Help Wanted #1

(?) 1) kudzu is the DEFAULT H/W detection tool in RH & harddrake in MDK. Is there anything in Debian?

(!) [Rick] As usual, the Debian answer is "Sure, which ones do you want?"
discover
Hardware identification system (thank you, Progeny Systems, Inc.), for various PCI, PCMCIA, and USB devices.
(!) [Dave]
apt-get update; apt-get install discover

(Output of 'apt-cache search discover':)
discover - hardware identification system
discover-data - hardware lists for libdiscover1
libdiscover-dev - hardware identification library development files
libdiscover1 - hardware identification library
(!) [Heather] Worthwhile to also search on the words "detect" and "config" and "cfg" since many of the configurators or their helper apps have those words in their package names.

(?) discover only detects the h/w, but kudzu does one extra task: it also configures the h/w. Do you have any info on whether the latest version of discover does this auto-config? (I am on Debian 3.0.)

(!) [Rick] I'm unclear on what you mean by "configure the hardware". Discover scans the PCI, USB, IDE, PCMCIA, and SCSI buses. (Optionally, it scans ISA devices, and the parallel and serial ports.) It looks (by default) for all of these hardware types at boot time: bridge cdrom disk ethernet ide scsi sound usb video. Based on those probes, it does appropriate insmods and resetting of some device symlinks.
What problem are you trying to solve?
(!) [Heather] For many people there's a bit of a difference between "the machine notices the hardware" and "my apps which want to use a given piece of hardware work without me having to touch them." In fact, finishing up the magic that makes the second part happen is the province of various apps that help configure XFree86 (SaX2/SuSE, Xconfigurator/RedHat, XF86Setup and their kindred) - some of which are better at having that magical "just works" feeling than others. Others are surely called on by the fancier installation systems too. Thus Rick has a considerable list below.
For ide, scsi, cdrom it all seems rather simple; either the drives work, or they don't. I haven't seen any distros auto-detect that I have a cd burner and do any extra work for that, though.
PCMCIA and USB are both environments that are well aware of the hot-swapping uses they're put to - generally once your cardbus bridge and USB hub types are detected, everything else goes well. Or your device is too new to have a driver for its part of the puzzle. You must load up (or have automatically loaded by runlevels) the userland half of the support, though. (Package names: pcmcia-cs, usbmgr.)
There are apps to configure X, and one can hope that svgalib "just works" on its own, since it has some video detection effort built in. If you don't like what you get, try using a framebuffer-enabled kernel, then tell X to use the framebuffer device - slower, but darn near guaranteed to work. svgalib will spot your framebuffer and use it. My favorite svgalib app is zgv, and there are some games that use it too.
I know of no app which is sufficiently telepathic to decide what your network addresses should be, the first time through. However, if you're a mobile user, there are a number of apps that you can train to look for your few favorite hosting gateways and configure the rest magically from there, using data you gave them ahead of time. PCMCIA schemes can also be used to handle this.
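Heather's framebuffer fallback above boils down to a tiny device section in the XFree86 4.x config file. A hedged sketch (the section layout and "fbdev" driver name are XFree86 4.x conventions; the Identifier string is arbitrary, and the kernel must have framebuffer support with /dev/fb0 present):

```
Section "Device"
    Identifier "Framebuffer card"
    Driver     "fbdev"
EndSection
```

Point the matching Screen section at this Device and X will draw through the kernel's framebuffer instead of its own card-specific driver.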
(!) [Rick]
kudzu, kudzu-vesa
Hardware-probing tool (thank you, Red Hat Software, Inc.) intended to be run at boot time. Requires hwdata package. kudzu-vesa is the VBE/DDC stuff for autodetecting monitor characteristics.
mdetect
Mouse device autodetection tool. If present, it will be used to aid XFree86 configuration tools.
printtool
Autodetection of printers and PPD support, via an enhanced version of Red Hat Software's Tk-based printtool. Requires the pconf-detect command-line utility for detecting parallel-port, USB, and network-connected printers (which can be installed separately as package pconf-detect).
read-edid
Hardware information-gathering tool for VESA PnP monitors. If present, it will be used to aid XFree86 configuration tools.
(!) [Heather] Used alone, it's an extremely weird way to ask the monitor what its preferred modelines are. Provided your monitor is bright enough to respond with an EDID block, the results can then be used to prepare an optimum X configuration. I say "be used" for this purpose because the results are very raw and you really want one of the apps that configure X to deal with this headache for you. Trust me - I've used it directly a few times.
(!) [Rick]
sndconfig
Sound configuration (thank you, Red Hat Software, Inc.), using isapnp detection. Requires kernel with OSS sound modules. Uses kudzu, aumix, and sox.
(!) [Dave] BTW, Knoppix also has excellent detection, and is also free and Debian-based: ftp://ftp.uni-kl.de/pub/linux/knoppix
(!) [Heather] Personally, I found its sound configuration to be the best I've encountered; SuSE does a pretty good job if your card is supported under ALSA.
When you decide to roll your own kernel, it's critical to doublecheck which of the three available methods for sound setup you're using - ALSA, OSS, or kernel-native drivers - so that you can compile the right modules in. Debian's make-kpkg facility keeps extra packages that depend directly on kernel parts - like pcmcia and alsa - in sync with your customizations, by making it easy to prepare a modules .deb file to go with your new kernel.
(!) [Rick]
hotplug
USB/PCI device hotplugging support, and network autoconfig.
nictools-nopci
Diagnostic and setup tools for many non-PCI ethernet cards
nictools-pci
Diagnostic and setup tools for many PCI ethernet cards.
mii-diag
"A little tool to manipulate network cards" (examines and sets the MII registers of network cards).

(?) 2) I have installed kudzu in Debian 3.0, but it is not running as a service. I need to execute the kudzu command manually.

(!) [Rick] No, pretty much the same thing in both cases. You're just used to seeing it run automatically via a System V init script in Red Hat. If you'd like it to be done likewise in Debian, copy /etc/init.d/skeleton to /etc/init.d/kudzu and modify it to do kudzu stuff. Then, use update-rc.d to populate the /etc/rc?.d/ runlevel directories.

(?) Finally, the exact solution. I was searching for this for a long time. Rick, I can't figure out how to thank you. Take care.

(?) Moreover, it couldn't detect my Epson C21SX printer, but under MDK 9.0 kudzu detected the printer.

(!) [Heather] Perhaps it helpfully informed you what it used to get the printer going? Many of the RPM-based systems use CUPS as their print spooler; CUPS is a little smoother than some of its competitors at auto-configuring printers by determining what weird driver they need under the hood. My own fancy Epson color printer needed gimp-print, which I happily linked into my boring little lpd environment using the linuxprinting.org "foomatic" entries. Some printers are supported directly by ghostscript... which you will need anyway, since many GUI apps produce PostScript within their "print" or "print to file" features.
(!) [Rick] Would that be an Epson Stylus C21SX? I can't find anything quite like that name listed at:
http://www.linuxprinting.org/printer_list.cgi?make=Epson
I would guess this must be a really new, low-end inkjet printer.
The version of kudzu (and hwdata) you have in Debian's stable branch (3.0) is probably a bit old. That's an inherent part of what you always get on the stable branch. If you want versions that are a bit closer to the cutting edge, you might want to switch to the "testing" branch, which is currently the one named "sarge". To do that, edit /etc/apt/sources.list like this:
deb http://http.us.debian.org/debian testing main contrib non-free
deb http://non-us.debian.org/debian-non-US testing/non-US main contrib non-free
deb http://security.debian.org testing/updates main contrib non-free
deb http://security.debian.org stable/updates main contrib non-free
Then, do "apt-get update && apt-get dist-upgrade". Hilarity ensues. ;->
(OK, I'll be nice: This takes you off Debian-stable and onto a branch with a lower commitment on behalf of the Debian project to keep everything rock-solid, let alone security-updated. But you might like it.)

(?) a nice discussion. thanks a lot.

(!) [Rick] All of those information items are now in my cumulative Debian Tips collection, http://linuxmafia.com/debian/tips . (Pardon the dust.)

(?) OK, thanks a lot. You have clarified everything very well. Now I shouldn't have any problems with auto-detection in Debian.

Great site !

(!) [Heather] For anyone looking at this and thinking "Oy, I don't already have Debian installed, can I avoid this headache?" - yes, you probably can, for a price. While Debian users from commercial and homegrown computing environments alike get the great upgrade system, this is where getting one of the commercial variants of Debian can be worth the bucks for some people. Note that commercial distros usually come with a bunch of software which is definitely not free - and not legal to copy for your pals. How easy they make it to separate out what you could freely tweak, rewrite, or give away varies widely.
Libranet
http://www.libranet.com
Canadian company, text based installer based on but just a little more tuned up than the generic debian one. Installs about a 600 MB "base" that's very usable then offers to add some worthwhile software kits on your first boot.
Xandros
http://www.xandros.com
The current bearer of the torch that Corel Linux first lit. Reviews about it sing its newbie-friendly praises.
Lindows
http://www.lindows.com
Mostly arriving pre-installed in really cheap Linux machines near you in stores that you just wouldn't think of as computer shops. But it runs MSwin software out of the box too.
Progeny
http://www.progenylinux.com
These days they're more about offering professional services for your corporate or even industrial Linux needs than about being a distribution; they committed their installer program to the auspices of the Debian project. So it should be possible for someone to whip up install discs that use it instead of the usual geek-friendly text-menu installer.
If you find any old Corel Linux or Stormix discs lying around, they'll make an okay installer, provided your video card setup is old enough for them to deal with. After they succeed, you'll want to poke around, see what they autodetected, take some notes, then upgrade the poor beasties to current Debian.
In a slightly less commercial vein,
Knoppix
http://www.knopper.net/knoppix
[Base page in German, multiple languages available] While not strictly designed as a distro for people to install, it has great hardware detection of its own accord, and a crude installer program is available. At minimum, you can boot from its CD, play around a bit, and take notes once it has detected and configured itself. A runs-from-CD distribution. If you can't take the hit of downloading a 700 MB CD all at once - it takes hours and hours on my link, and I'm faster than most modems - he lists a few places that will sell a recent disc and ship it to you.
Good-Day GNU-Linux
http://ggl.good-day.net
LWN's pointer went stale, but this is where it moved to; the company produced Sylpheed and has some interesting things bundled in this. It also looks like they preload notebooks, but I can't read Japanese to tell you more.
And of course the usual Debian installer discs.
Anytime you can get a manufacturer to preload Linux - even if you plan to replace it with another flavor - let them. You'll be telling them that you're a Linux user and not a Windows user, and you'll get to look at the preconfiguration they put in. If they had to write any custom drivers, you can preserve them for your new installation. Likewise whatever time they put into the config files.
There's a stack more at the LWN Distributions page (http://old.lwn.net/Distributions) if you search on the word Debian, although many are localized, some are specialty distros, and a few are based on older forms of the distro.

(?) Linux Kernel Maintainability: Bees Can't Fly

but a Hurd of them might go Mach speed...

From Beth Richardson

Answered By Jim Dennis, Jason Creigton, Benjamin A. Okopnik, Kapil Hari Paranjape, Dan Wilder, Pradeep Padala, Heather Stern

Hello,

I am a Linux fan and user (although a newbie). Recently I read the paper entitled "Maintainability of the Linux Kernel" (http://www.isse.gmu.edu/faculty/ofut/rsrch/papers/linux-maint.pdf) in a course I am enrolled in at Colorado State University. The paper is essentially saying that the Linux kernel is growing linearly, but that common coupling (if you are like me and cannot remember what kind of coupling is what - think global variables here.) is increasing at an exponential rate. Side note, for what it is worth - the paper was published in what I have been told is one of the "most respected" software journals.

I have searched around on the web and have been unable to find any kind of a reply to this paper from a knowledgeable Linux supporter. I would be very interested in hearing the viewpoint from the "other side" of this issue!

Thanks for your time, Beth Richardson

(!) [JimD] Basically it sounds like they're trying to prove that bees can't fly.
(Traditional aerodynamic theories and the Bernoulli principle can't be used to explain how bees and houseflies can remain aloft; this is actually proof of some limitations in those theories. In reality the weight of a bee or a fly relative to air density means that insect can do something a little closer to "swimming" through the air --- their mass makes air relatively viscous to them. Traditional aerodynamic formulae are written to cover the case where the mass of the aircraft is so high vs. air density that some factors can be ignored.).
I glanced at the article, which is written in typically opaque academic style. In other words, it's hard to read. I'll admit that I didn't have the time to analyze (decipher) it; and I don't have the stature of any of these researchers. However, you've asked me, so I'll give my unqualified opinion.
Basically, they're predicting that maintenance of the Linux kernel will grow increasingly difficult over time, because a large number of new developments (modules, device drivers, etc.) are "coupled" to (depend on) a core set of global variables.

(?) [Jason] Wouldn't this affect any OS? I view modules/device drivers depending on a core as a good thing, compared to the alternative, which is depending on a wide range of variables. (Or perhaps the writers have a different idea in mind. But what alternative to depending on a core would there be, other than depending on a lot of things?)

(!) [Ben] You said it yourself further down; "micro-kernel". It seems to be the favorite rant of the ivory-tower CS academic (by their maunderings shall ye know them...), although proof of this world-shattering marvel seems to be long in coming. Hurd's Mach kernel's been out, what, a year and more?
(!) [Kapil] Here comes a Hurd of skeletons out of my closet! Being a very marginal Hurd hacker myself, I couldn't let some of the remarks about the Hurd pass. Most of the things below have been better written of elsewhere by more competent people (the Hurd Wiki for example, http://hurd.gnufans.org) but here goes anyway...
The Mach micro-kernel is what the Hurd runs on top of. In some ways Hurd/Mach is more like Apache/Linux. Mach is not a part of the Hurd. The newer Hurd runs on top of a version of Mach built using Utah's "oskit". Others have run the "Hurd" over "L4" and other micro-kernels.
The lack of hardware and other support in the existing micro-kernels is certainly one of the things holding back common acceptance of the Hurd. (For example, neither "mach" nor "oskit" has support for my video card - i810 - for which support came late in the Linux 2.2 series.)
Now, if only Linux were written in a sufficiently "de-coupled" way to allow stripping away the file-system and execution system, we would have a good micro-kernel already! The way things are, the "oskit" guys are perennially playing catch-up to incorporate Linux kernel drivers. Since these drivers are not sufficiently de-coupled, they are harder to incorporate.
(!) [JimD] This suggests that the programming models are too divergent in some ways. For each class of device there are a small number of operations (fops, bops, ioctl()s) that have to be supported (open, seek, close, read, write, etc). There are relatively few interactions with the rest of the kernel for most of this (which is why simple device driver coding is in a different class from other forms of kernel hacking).
The hardest part of device driver coding is getting enough information from a vendor to actually implement each required operation. In some cases there are significant complications for some very complex devices (particularly in the case of video drivers; which, under Linux sans framebuffer drivers, are often implemented in user space by XFree86.)
It's hard to imagine that any one device driver would be that difficult to port from Linux to any other reasonable OS. Of course the fact that there are THOUSANDS of device drivers, and variants within each device driver, does make it more difficult. It suggests the HURD needs thousands (or at least hundreds) of people working on the porting. Obviously, if five hundred HURD hackers could crank out a device driver every 2 months for about a year --- they'd probably be caught up with Linux device driver support.
Of course I've only written one device driver for Linux (and that was a dirt-simple watchdog driver for a NAS system motherboard) and helped on a couple more (MTD/flash drivers, same hardware). It's not so much "writing a driver" as plugging a few new values into someone else's driver, and reworking a few bits here or there.
One wonders if many device drivers could be consolidated into some form of very clever table-driven code. (Undoubtedly what the UDI movement of a few years ago was trying to foist on everyone.)
(!) [Kapil] On the other side, Linux "interferes too much" with user processes, making Hurd/Linux quite hard and probably impossible---but one can dream...
(!) [JimD] Linux was running on Mach (mkLinux) about 5 years ago. I seem to recall that someone was running a port of Linux (or mkLinux) on an L4 microkernel about 4 years ago (on a PA RISC system if I recall correctly).
It's still not HURD/Linux --- but, as you say, it could (eventually) be.
Linux isn't really monolithic, but it certainly isn't a microkernel. This bothers purists; but it works.
Future releases of Linux might focus considerably more on restructuring the code, providing greater modularity and massively increasing the number of build-time configuration options. Normal users (server and workstation) don't want more kernel configuration options. However, embedded systems and hardware engineers (especially for the big NUMA machines and clustering systems) need them. So the toolchain and build environment for the Linux kernel will have to be refined.
As for features we don't have yet (in the mainstream Linux kernel): translucent/overlay/union filesystems, transparent process checkpoint and restore, true swapping (in addition to paging; it might come naturally out of checkpointing), network console, SSI (single system image) HA clustering (something like VAX clusters would be nice, from what I hear), and the crashdump, interactive debuggers, trace toolkit, dprobes and other stuff that was "left out" of 2.5 in the later stages before the feature freeze last year.
I'm sure there are things I'm forgetting and others that I've never even thought of. With all the journaling, EAs and ACLs, and the LSM hooks and various MAC (mandatory access control) mechanisms in LIDS, SELinux, LOMAC, RSBAC and other patches, we aren't missing much that was ever available in other forms of UNIX or other server operating systems. (The new IPSec and crypto code will also need considerable refinement.)
After that, maybe Linux really will settle down to maintenance; to optimization, restructuring, consolidation, and dead code removal. Linus might find that stuff terminally boring and move on to some new project.

(?) [JimD] What else is there to add to the kernel?

(!) [Pradeep] Like my advisor says, "Everything that has never been thought of before." :-) A lot of people feel the same way about systems research. I am planning to specialize in systems. What do you guys think about systems research? Is it as pessimistic as Rob Pike sounds? http://www.cs.bell-labs.com/who/rob/utah2000.pdf
(!) [Dan] Some would say, "streams". (he ducks!)
(!) [JimD] LiS is there for those that really need it. It'll probably never be in the mainstream kernel. However, I envision something like a cross between the Debian APT system and the FreeBSD ports system (or LNX-BBC's GAR or Gentoo's source/package systems) for the kernel.
In this case some niche, non-mainstream kernel patches would not be included in Linus' tarball, but hooks would be found in a vendor augmented kbuild (and/or Makefile collection) that could present options for many additional patches (like the FOLK/WOLK {Fully,Working} OverLoaded Kernel). If you selected any of these enhancements then the appropriate set of patches would be automatically fetched and applied, and any submenus to the configuration dialog would appear.
Such a system would have the benefit of allowing Linus to keep working exactly as he does now, keeping pristine kernels, while making it vastly easier for sysadmins and developers to incorporate those patches that they want to try.
If it was done right it would be part of UnitedLinux, Red Hat, and Debian. There would be a small independent group that would maintain the augmented build system.
The biggest technical hurdle would be patch ordering. In some cases portions of some patches might have to be consolidated into one or more patches that exist solely to prevent unintended dependency loops. We see this among Debian GNU/Linux patches fairly often --- though those are binary package dependencies rather than source code patch dependencies. We'd never want a case where you had to include LiS patches because the patch maintainer applied it first in his/her sequence and one of its changes became the context for another patch --- some patch that didn't functionally depend on LiS but only seemed to for context.
I think something like this was part of Eric S. Raymond's vision for his ill-fated CML2. However, ESR's work wasn't in vain; a kbuild system in C was written and will be refined over time. Eventually it may develop into something with the same features that he wanted to see (though it will take longer).
As examples of more radical changes that some niches might need or want in their kernels: there used to be a suite of 'ps' utilities that worked without needing /proc. The traditional ps utils worked by walking through /dev/kmem, traversing a couple of data structures there. I even remember seeing another "devps" suite, which provided a simple device interface alternative to /proc. The purpose of this was to allow deeply embedded, tightly memory-constrained kernels to work in a smaller footprint. These run applications that have little or no need for the introspection that is provided by /proc trees, and have only the most minimal process control needs. It may be that /proc has become so interwoven into the Linux internals that a kernel simply can't be built without it (and that the build option simply affects whether /proc is visible from userspace). These embedded systems engineers might still want to introduce a fair number of #defines to optionally trim out large parts of /proc. Another example is the patch I read about that effectively redefines the printk macro as a C comment, thus making a megabyte (uncompressed) of printk() calls disappear in the pre-processor pass.
These are things that normal users (general purpose servers and workstations) should NOT mess with. Things that would break a variety of general purpose programs. However, they can be vital to some niches. I doubt we'll ever see Linux compete with eCOS on the low end; but having a healthy overlap would be good.

(?) [JimD] Are there any major 32 or 64 bit processors to which Linux has not been ported?

(!) [Ben] I don't mean to denigrate the effort of the folks that wrote Hurd, but... so what? Linux serenely rolls on (though how something as horrible, antiquated, and useless as a monolithic kernel can hold its head up given the existence of The One True Kernel is a puzzle), and cooked spaghetti still sticks to the ceiling. All is (amazingly) still right with the world.
(!) [Jason] You know, every time I get to thinking about what the Linux kernel should have, I find out it's in 2.5. Really. I was thinking, Linux is great but it needs better security, more than just standard Linux permissions. Then I look at 2.5: Linux Security Modules. Well, we need a generic way to assign attributes to files, other than the permission bits. 2.5 has extended attributes (name:value pairs at the inode level) and extended POSIX ACLs.
(!) [Ben] That's the key, AFAIC; a 99% solution that's being worked on by thousands of people is miles better than a 100% solution that's still under development. It's one of the things I love most about Linux; the amazing roiling, boiling cauldron of creative ideas I see implemented in each new kernel and presented on Freshmeat. <grin> The damn thing's alive, I tell ya.
(!) [Kapil] What you are saying is true and is (according to me) the reason why people will be running the Hurd a few years from now!
The point is that many features of micro-kernels (such as a user-process running its own filesystem and execution system a la user-mode-linux) are becoming features of Linux. At some point folks will say "Wait a minute! I'm only using the (micro) kernel part of Linux as root. Why don't I move all the other stuff into user space?" At this point they will be running the Hurd/Linux or something like it.
Think of the situation in '89-'91 when folks on DOS or Minix were jumping through hoops in order to make their boxes run gcc and emacs. Suddenly, the hoops could be removed because of Linux. In the same way, the "coupled" parts of Linux are preventing some people from doing things they would like to do with their system. As more people are obstructed by those parts---voila, Linux becomes (or gives way to) a micro-kernel based system.
Didn't someone say "... and the state shall wither away"?
(!) [Heather] Yes, but it's been said:
"Do not confuse the assignment of blame with the solution to the problem. In space, it is far more vital to fix your air leak than to find the man with the pin." - Fiona L. Zimmer
Problems as experienced by sysadmins and users are not solely the fault of designs or languages selected to write our code in.
...and also:
"Established technology tends to persist in the face of new technology." - G. Blaauw, one of the designers of System 360
...not coincidentally, at least in our world, likely to persist inside "new" technology as well, possibly in the form of "intuitive" keystrokes and "standard" protocols that would not be the result if designs were started fresh. Of course, truly fresh implementations take a while to complete, which brings us back to the case of the partially completed Hurd environment very neatly.
(!) [JimD] Thus any change to the core requires an explosion of changes to all the modules which depended upon it. They are correct (to a degree). However they gloss over a few points (lying with statistics).
First point: no one said that maintaining and developing kernels should be easy. It is recognized as one of the most difficult undertakings in programming (whether it's an operating system kernel or an RDBMS "engine" --- kernel). "Difficult" is subjective. It falls far short of "impossible."
Second point: They accept it as already proven that "common" coupling leads to increasing numbers of regression faults (giving references to other documents that allege to prove this) and then they provide metrics about what they are calling common coupling. Nowhere do they give an example of one variable that is "common coupled" and explain how different things are coupled to it. Nor do they show an example of how the kernel might be "restructured with common coupling reduced to a bare minimum" (p.13).
So, it's a research paper that was funded by the NSF (National Science Foundation). I'm sure the authors got good grades on it. However, like too much academic "work" it is of little consequence to the rest of us. They fail to show a practical alternative and fail to enlighten us.
Mostly this paper sounds like the periodic whining that used to come up on the kernel mailing list: "Linux should be re-written in C++ and should be based on an object-oriented design." The usual response amounts to: go to it; come back when you want to show us a working prototype.

(?) [Jason] Couldn't parts of the kernel be written in C, and others in C++? (okay, technically it would probably all be C++ if such a shift did occur, but you can write C in a C++ compiler just fine. Right? Or maybe I just don't know what I'm talking about.)

(!) [Pradeep] There are many view points to this. But why would you want to rewrite parts of it in C++?
The popular answer is: C++ is object-oriented; it has polymorphism, inheritance, etc. Umm, I can do all that in C, and kernel folks have used those methods extensively. The function pointers and gotos may not be as clean as real virtual functions and exception handling. But those C++ features come with a price. The compilers haven't progressed enough to deliver performance equivalent to hand-written C code.
(!) [Dan] At one point, oh, maybe it was in the 1.3 kernel days, Linus proposed moving kernel development to C++.
The developer community roundly shot down the idea. What you say about C++ compilers was true in spades with respect to the g++ of those days.
(!) [Pradeep] What is the status of g++ today? I still see a performance hit when I compile my C programs with g++. Compilation time is also a major factor. g++ takes a lot of time to compile, especially with templates.
(!) [JimD] I'm sure that the authors would argue that "better programming and design techniques" (undoubtedly on the agenda for their next NSF grant proposal) would result in less of this "common" coupling and more of the "elite" coupling. (Personally I have no problem coupling with commoners --- just do it safely!)
As for writing "parts" of Linux in C++ --- there is the rather major issue of identifier mangling. In order to support polymorphism, and especially function overloading and over-riding, C++ compilers have to modify the identifiers in their symbol tables in ways that C compilers never have to do. As a consequence of this it is very difficult to link C and C++ modules. Remember, loadable modules in Linux are linkable .o files. It just so happens that they are dynamically loaded (a little like some .so files in user space, through the dlopen() API --- but different, because this is kernel space and you can't use dlopen() or anything like it).
I can only guess about how bad this issue would be, but a quick perusal of the first C++ FAQ that I could find on the topic:
http://users.utu.fi/sisasa/oasis/cppfaq/mixing-c-and-cpp.html
... doesn't sound promising.
(!) [JimD] I'm also disappointed that the only quotations in this paper were the ones of Ken Thompson claiming that Linux will "not be very successful in the long run" (repeated TWICE in their 15 page paper) and that Linux is less reliable (in his experience) than MS Windows.

(?) [Jason] I'm reminded of a quote: "Linux is obsolete" -- Andrew Tanenbaum. He said this in the (now) famous flame-war between himself and Linus Torvalds. His main argument was that micro-kernels are better than monolithic kernels, and thus Linux was terribly outdated. (His other point was that Linux wasn't portable.) BTW, I plan to get my hands on some Debian/hurd (Or is that "GNU/hurd"? :-) ) CDs so I can see for myself what the fuss over micro-kernels is all about.

(!) [JimD] Run MacOS X --- it's a BSD 4.4 personality over a Mach microkernel.
(And is more mature than HURD --- in part because a significant amount of the underpinnings of MacOS X are NeXT Step which was first released in the late '80s even before Linux).
(!) [Ben] To quote Debian's Hurd page,

...............

The Hurd is under active development, but does not provide the performance and stability you would expect from a production system. Also, only about every second Debian package has been ported to the GNU/Hurd. There is a lot of work to do before we can make a release.

...............

Do toss out a few bytes of info if you do download and install it. I'm not against micro-kernels at all; I'm just slightly annoyed by people whose credentials don't include the Hard Knocks U. screaming "Your kernel sucks! You should stab yourself with a plastic fork!" My approach is sorta like the one reported in c.o.l.: "Let's see the significant benefits."
(!) [JimD] These were anecdotal comments in a press interview --- they were not intended to be delivered with scholastic rigor. I think it weakens the paper considerably (for reasons quite apart from my disagreement with the statements themselves).
What is "the long run"? Unix is a hair over 30 years old. The entire field of electronic computing is about 50 or 60 years old. Linux is about 12 years old. Linux is still growing rapidly and probably won't peak in market share for at least 5 to 10 years. Thus Linux could easily last longer than proprietary forms of UNIX did. (This is not to say that Linux is the ultimate operating system. In 5 to 10 years there is likely to be an emerging contender like EROS (http://www.eros-os.org) or something I've never heard of. In 15 to 20 years we might be discussing a paper that quotes Linus Torvalds as saying: "I've read some of the EROS code, and it's not going to be a success in the long run.")
(We won't even get into the criteria for "success" in Ken Thompson's comment --- because I think that Linux's current status is already a huge success by the standards of its advocates, and to the chagrin of its detractors. By many accounts Linux is already more "successful" than UNIX --- having been installed on more systems than all its UNIX predecessors combined --- an installation base that has only recently been rivaled by MacOS X in the UNIX world.)

(?) routing to internet from home . Kernel 2.4

From Jose Avalis

Answered By Faber Fedor, Jason Creighton, Benjamin A. Okopnik, John Karns

Hi guys, and thanks in advance for your time. I'm Joe from Toronto.

I have this scenario at home.


3 WS with Winxx
1 Linux redhat 7.3
1 DSL Connection (Bell / Sympatico)

I would like to use the Linux machine as a router for the internal PCs. Could you help me with that, please?

(!) [Ben] OK, I'll give it a shot. You have read and followed the advice in the IP-Masquerade HOWTO, right? If not, it's always available at the Linux Documentation Project <http://www.linuxdoc.org>, or possibly on your own system under /usr/doc/HOWTO or /usr/share/doc/HOWTO.

(?) The Linux Machine has 2 NIC eth0 (10.15.1.10 | 16 ) is connected to the internal net (hub) , while the other ETH1 (10.16.1.10 | 16) is connected to the DSL Modem.

(!) [Ben] You have private IPs on both interfaces. Given a DSL modem on one of them, it would usually have an Internet-valid address, either one that you automatically get via DHCP or a static one that you get from your ISP (that's become unusual for non-commercial accounts.) Looks like you have a PPPoE setup - so you're not actually going to be hooking eth0 to eth1, but eth0 to ppp0.

(?) as you can see in the following text, everything is Up and run and I can access internet from the Linux machine.

(!) [Jason] This may seem like a stupid question, but do the internal PCs have valid internet addresses? (i.e., ones outside the 10.*.*.*, 172.16.*.*-172.31.*.*, or 192.168.*.* ranges) If they don't, you need to do IP masquerading. This is not all that hard; I could give a quick & dirty answer as to how to do it (or you could look at the IP-Masquerading-HOWTO for the long answer), but I'd like to know if that's your situation first. Yes, I am that lazy. :-)

(?) ifconfig says

See attached jose.ifconfig-before.txt

See attached jose.ping-before.txt

The problem is that when I try to access the internet from the internal LAN, I can't.

(!) [Ben] Yep, that's what it is. That MTU of 1492 is a good hint: that's the correct setting for PPPoE, and that's your only interface with a Net-valid IP.
(!) [John] The adjusted MTU for PPPoE (from the usual 1500 to 1492) is necessary, but can cause problems with the other machines on the LAN unless they too are adjusted for MTU.
(!) [Ben] Right - although not quite as bad as the gateway's MTU (that one can chase its own tail forever - looks like there's no connection!)
(!) [John] I've been stuck with using PPPoE for about a month now, and have found the Roaring Penguin pkg (http://www.roaringpenguin.com) to work quite well, once it's configured. I seem to remember reading that it does the MTU adjustment internally, and alleviates the headache of having to adjust the rest of the machines on the LAN to use the PPPoE gateway (see the ifconfig output below).
(!) [Ben] Oh, _sweet._ I'm not sure how you'd do that "internally", but I'm no network-programming guru, and that would save a bunch of headaches.
(!) [John] Especially nice if one of the LAN nodes is a laptop that gets carried around to different LAN environments - would be a real PITA to have to reset the MTU all the time.
# ifconfig eth1

eth1      Link encap:Ethernet  HWaddr 00:40:F4:6D:AA:3F
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14201 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:4568502 (4.3 Mb)  TX bytes:1093173 (1.0 Mb)
          Interrupt:11 Base address:0xcc00
Then I just tacked on the firewall / masq script I've been using right along, with the only change being the external interface from eth0 to ppp0. PPPoE is also a freak in that the NIC that connects to the modem doesn't get an assigned IP.
(!) [Ben] Yep, that's what got me thinking "PPPoE" in the first place. Two RFC-1918 addresses - huh? An MTU of 1492 for ppp0 and reasonably short ping times to the Net - oh. :)

(?) all the PCs in the net have as Default gateway 10.15.1.10 (Linux internal NIC )

(!) [Ben] That part is OK.

(?) Linux's default gateway is the ppp0 adapter

[root@linuxrh root]# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
64.229.190.1    0.0.0.0         255.255.255.255 UH       40 0          0 ppp0
10.16.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth1
10.15.0.0       0.0.0.0         255.255.0.0     U        40 0          0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U        40 0          0 lo
0.0.0.0         64.229.190.1    0.0.0.0         UG       40 0          0 ppp0
[root@linuxrh root]#
(!) [Ben] Yep, that's what "netstat" says. I've never done masquerading with PPP-to-Ethernet, but it should work just fine, provided you do the masquerading correctly.
(?) Can you guys give me some clues as to what my problem is?
I don't have any firewall installed.
Thanks a lot. JOE
(!) [Ben] That's probably the problem. Seriously - a firewall is nothing more than a set of routing rules; in order to do masquerading, you need - guess what? - some routing rules (as well as having it enabled in the kernel.) Here are the steps in brief - detailed in the Masquerading HOWTO:
  1. Make sure that your kernel supports masquerading; reconfigure and recompile it if necessary.
  2. Load the "ip_masq" module if necessary.
  3. Enable IP forwarding (ensure that /proc/sys/net/ipv4/ip_forward is set to 1.)
  4. Set up the rule set (the HOWTO has good examples.)
That's the whole story. If you're missing any part of it, go thou and fix it until it cries "Lo, I surrender!" If you run into problems while following the advice in the HOWTO, feel free to ask here.
(!) [Faber] One thing you didn't mention doing is turning on forwarding between the NICs; you have to tell the Linux to forward packets from one NIC to the other. To see if it is turned on, do this:
cat /proc/sys/net/ipv4/ip_forward
If it says "0", then it's not turned on. To turn it on, type
echo "1" > /proc/sys/net/ipv4/ip_forward
And see if your Win boxen can see the internet.
If that is your problem, once you reboot the Linux box you'll lose the setting. There are two ways not to lose the setting. One is to put the echo command above into your /etc/rc.local file. The second and Approved Red Hat Way is to put the line
net.ipv4.ip_forward = 1
in your /etc/sysctl.conf file. I don't have any Red Hat 7.3 boxes lying around, so I don't know if Red Hat changed the syntax between 7.x and 8.x. One way to check is to run
/sbin/sysctl -a | grep forward
and see which one looks most like what I have.

(?) Hey Faber in NJ ... thanks for your clues. In fact it was 0; I changed it to 1, I've restarted the box and it is 1 now, but it is still not working.

(!) [Faber] Well, that's a start. There's no way it would have worked with it being 0!

(?) First of all, am I right with this setup method? I mean, using Linux as a router only? Or should I set up masquerading and use the NAT facility to present all my internal addresses to the Internet?

(!) [Faber] Whoops! Forgot that piece! Yes, you'll have to do masquerading/NAT (I can never keep the two distinct in my head).
(!) [Jason] It seems to me that you would want the NIC connected to the DSL modem (eth1) to be the default route to the internet, not the PPP link (ppp0).

(?) Because maybe the problem is that I'm trying to route my internal net to the DSL net and Internet, and maybe that is not a valid procedure.

(!) [Faber] Well, it can be done, that's for sure. We just have to get all the t's dotted and the i's crossed. :-)
(!) [Jason] IP-Masquerading. Here's the HOWTO:
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO
And here's a script that's supposed (I've never used it) to just be a "fill in the blanks and go":
http://www.tldp.org/HOWTO/IP-Masquerade-HOWTO/firewall-examples.html#RC.FIREWALL-2.4.X
Note this is in the HOWTO; it just comes later on, after explaining all the gory details of NATing.

(?) Hey, thanks for your mail, the thing is working now. I didn't know that the NAT functions in Linux are called Masquerading.

(!) [Ben] Yeah, that's an odd one.
Masquerading is only a specific case (one-to-many) of NAT. As an example of other stuff that NAT can do, IBM had an ad for the Olympics a while back (their equipment handled all the traffic for the website); they did "many-to-many" NAT to split up the load.

(?) Thanks again for your help; since I'm new to Linux, it took me a while to learn the terminology on this platform.

Too many NOSes in my head.

I have everything working now, including the firewall. I had to compile the kernel again, but it was OK.

C U.

(!) [Ben] <grin> You're welcome! Glad we could help.


Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

LINUX GAZETTE
...making Linux just a little more fun!
News Bytes
By Michael Conry

News Bytes

Contents:

Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to gazette@ssc.com


 March 2003 Linux Journal

[issue 107 cover image] The March issue of Linux Journal is on newsstands now. This issue focuses on On-Line Fora. Click here to view the table of contents, or here to subscribe.

All articles older than three months are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Legislation and More Legislation

Fox News report on the US Congress's tech agenda.

ITworld.com reported that the European Commission has presented a draft directive that punishes copyright infringement for commercial purposes, but spares home music downloaders, irritating music industry lobby groups (from Slashdot). Also reported at The Register.

Washington Post reports that Howard Schmidt, the new cybersecurity czar, is a former Microsoft security chief. (from Slashdot).

The Register reports how US gov reps "defanged" pro open source declaration.

EFF's comments on the "German DMCA" as part of its ongoing effort to avoid the worldwide export of overbroad DMCA-type legislation. The German judicial commission is currently holding hearings on draft German legislation to implement the 2001 European Union Copyright Directive (EUCD).

After winning the initial judgement in a trademark suit, the German software firm MobiliX had to give up its name after all to the publishers of a similarly-named comic book character. MobiliX.org is now TuxMobil.org. (NewsForge announcement.)

The EFF has announced the release of "Winning (DMCA) Exemptions, The Next Round", a succinct guide to the comment-making process written by Seth Finkelstein, who proposed one of the only two exemptions granted in the last Library of Congress Rule-making.

The Register reports that the DMCA has been invoked in DirecTV hack. 17 have been charged.

Slashdot Interview with Prof. Eben Moglen who has been the FSF's pro bono general counsel since 1993.


Linux Links

The Happypenguin Awards for the 25 Best Linux Games (courtesy NewsVac).

Lindows launches $329 mini PC, ubiquity beckons.

ZDNet UK reports on how KDE's new version has responded to government needs.

NewsForge report: The rise of the $99 'consumer' Linux distribution

A look at installing Bayesian filtering using Bogofilter and Sylpheed Claws.

Doc Searls at Linux Journal writes on the value shifts underpinning the spread of Linux and free software.

PC-To-Phone calls available for GNU/Linux.

Wired reports on getting an iPod to run on linux.

Jay Beale (of Bastille Linux) with an article on computer security, and how to tell if you've been hacked (from NewsVac).

Monthly Monster Machines back in action: LinuxLookup.com has announced a monthly feature, Monthly Monster Machines. This is a monthly updated spec of budget, workstation, and dream Linux machines.

It has been reported that starting this year, the Swiss State of Geneva will mail all tax forms with a CD which includes OpenOffice and Mozilla. This replaces an Excel sheet.

Recent NewsForge IRC chat with OpenOffice.org publicist/activist Sam Hiser.

Open Source security manual. There is a report on this at NewsForge.

Possible data write corruption problem on Linux.

NewsForge report on the launch of the new Linux in Education portal.

Slashdot highlighted a recent Business Week feature on Linux comprising 9 articles.

Slashdotters respond to an article about the Microsoft Home of Tomorrow by speculating what an AppleHouse, SunHouse and LinuxHouse would look like.

How to do things in GNU/Linux

Interview with Dennis Ritchie, a founding father of Unix and C.

Some Links from Linux Weekly News:

For the Chinese readers among you, a Chinese translation of the Peruvian refutation of Microsoft FUD.


Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

Game Developers Conference
March 4-8, 2003
San Jose, CA
http://www.gdconf.com/

SXSW
March 7-11, 2003
Austin, TX
http://www.sxsw.com/interactive

CeBIT
March 12-19, 2003
Hannover, Germany
http://www.cebit.de/

City Open Source Community Workshop
March 22, 2003
Thessaloniki, Greece
http://www.city.academic.gr/cosc

Software Development Conference & Expo
March 24-28, 2003
Santa Clara, CA
http://www.sdexpo.com/

Linux Clusters Institute (LCI) Workshop
March 24-28, 2003
Urbana-Champaign, IL
http://www.linuxclustersinstitute.org/

4th USENIX Symposium on Internet Technologies and Systems
March 26-28, 2003
Seattle, WA
http://www.usenix.org/events/

PyCon DC 2003
March 26-28, 2003
Washington, DC
http://www.python.org/pycon/

Linux on Wall Street Show & Conference
April 7, 2003
New York, NY
http://www.linuxonwallstreet.com

AIIM
April 7-9, 2003
New York, NY
http://www.advanstar.com/

FOSE
April 8-10, 2003
Washington, DC
http://www.fose.com/

MySQL Users Conference & Expo 2003
April 8-10, 2003
San Jose, CA
http://www.mysql.com/events/uc2003/

LinuxFest Northwest 2003
April 26, 2003
Bellingham, WA
http://www.linuxnorthwest.org/

Real World Linux Conference and Expo
April 28-30, 2003
Toronto, Ontario
http://www.realworldlinux.com

USENIX First International Conference on Mobile Systems, Applications, and Services (MobiSys)
May 5-8, 2003
San Francisco, CA
http://www.usenix.org/events/

USENIX Annual Technical Conference
June 9-14, 2003
San Antonio, TX
http://www.usenix.org/events/

CeBIT America
June 18-20, 2003
New York, NY
http://www.cebit-america.com/

ClusterWorld Conference and Expo
June 24-26, 2003
San Jose, CA
http://www.linuxclustersinstitute.org/Linux-HPC-Revolution

O'Reilly Open Source Convention
July 7-11, 2003
Portland, OR
http://conferences.oreilly.com/

12th USENIX Security Symposium
August 4-8, 2003
Washington, DC
http://www.usenix.org/events/

LinuxWorld Conference & Expo
August 5-7, 2003
San Francisco, CA
http://www.linuxworldexpo.com

Linux Lunacy
Brought to you by Linux Journal and Geek Cruises!
September 13-20, 2003
Alaska's Inside Passage
http://www.geekcruises.com/home/ll3_home.html

Software Development Conference & Expo
September 15-19, 2003
Boston, MA
http://www.sdexpo.com

PC Expo
September 16-18, 2003
New York, NY
http://www.techxny.com/pcexpo_techxny.cfm

COMDEX Canada
September 16-18, 2003
Toronto, Ontario
http://www.comdex.com/canada/

LISA (17th USENIX Systems Administration Conference)
October 26-30, 2003
San Diego, CA
http://www.usenix.org/events/lisa03/

HiverCon 2003
November 6-7, 2003
Dublin, Ireland
http://www.hivercon.com/

COMDEX Fall
November 17-21, 2003
Las Vegas, NV
http://www.comdex.com/fall2003/


News in General


 IBM and Smallpox

IBM, United Devices and Accelrys have announced a project supporting a global research effort that is focused on the development of new drugs that could potentially combat the smallpox virus post infection. The Smallpox Research Grid Project is powered by an IBM infrastructure, which includes IBM eServer[tm] p690 systems and IBM's Shark Enterprise Storage Server running DB2[r] database software using AIX and Linux.


 Opera and the Swedish Chef Go After Microsoft

Opera Software has released a very special Bork edition of its Opera 7 for Windows browser. The Bork edition behaves differently on one Web site: MSN. Users accessing the MSN site http://www.msn.com/ will see the page transformed into the language of the famous Swedish Chef from the Muppet Show: Bork, Bork, Bork! This is retaliation for MSN's apparent targeting of Opera users: Opera users were served a different stylesheet from the one supplied to MSIE users, which made the site display in a less appealing way.


Distro News


 Debian

Debian Weekly News reported the announcement of the new archive key for 2003. This is used to sign the Release file for the main, non-US and security archives, and can be used with apt-check-sigs to improve security when using mirrors.


Also from DWN, and of use to many Debian users, is Adrian Bunk's announcement of the backport of OpenOffice.org 1.0.2 to woody. Packages are available online.


Debian powers PRISMIQ MediaPlayer home entertainment gateway device.


 Knoppix

IBM developerWorks has published a recent article on Knoppix.


 Mandrake

Part 4 of DistroWatch's review of Mandrake 9.1 is online.


 SuSE

Open For Business has published a review of SuSE Linux 8.1.

NewsForge has reviewed SuSE Linux Office Desktop.


 Vector

LinuxHardware.org have reviewed Vector Linux 2.5 SOHO edition, while Mad Penguin reports the release of version 3.2.


Software and Product News


 C.O.L.A software news

A small selection of announcements from February:


 Candy Cruncher

LGP is pleased to announce that Candy Cruncher has arrived from the replicators and is available immediately.


 CourseForum 1.3

CourseForum Technologies today introduced CourseForum 1.3, its web-based software for e-learning content creation, sharing and discussion. CourseForum can be hosted on MacOS X, Windows 98/ME/NT/2000/XP, Linux or other Unixes, while users need only a standard web browser.


 ProjectForum 1.3

CourseForum Technologies today introduced ProjectForum 1.3, web-based software for flexible workgroup collaboration and coordination of projects and teams. ProjectForum can be hosted on MacOS X, Windows 98/ME/NT/2000/XP, Linux or other Unixes, while users need only a standard web browser. Licenses start at US$199, and a free version is also available.


 AquaFold

AquaFold, Inc have announced the latest version of Aqua Data Studio, a universal database tool for building, managing and maintaining enterprise relational databases. Aqua Data Studio includes support for all major database platforms such as Oracle 8i/9i, IBM DB2, Sybase Adaptive Server, Microsoft SQL Server and the open source databases MySQL and PostgreSQL. Developed with the Java programming language, Aqua Data Studio supports all major operating systems, including Linux, Microsoft Windows, Mac OSX, and Solaris. Screenshots and downloads available online.


This page written and maintained by the Editor of the Linux Gazette,
gazette@ssc.com
Copyright © 2003, Michael Conry. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

HelpDex
By Shane Collinge

These cartoons are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.

[cartoon]

Tux continues his career as an Eminem wannabe.
[cartoon]
[cartoon]
[cartoon]

And other topics...
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]

All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.

 

[BIO] Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.


Copyright © 2003, Shane Collinge. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Ecol
By Javier Malonda

The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author. Text commentary on this page is by LG Editor Iron. Your browser has shrunk the images to conform to the horizontal size limit for LG articles. For better picture quality, click on each cartoon to see it full size.



Ecol is now available in three languages: English, Spanish and Catalan.

Your Editor couldn't resist getting Javier to do the same cartoon in Esperanto.


All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.

These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.

 


Copyright © 2003, Javier Malonda. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Fonts for the Common Desktop Environment (or: How To Alias Your Fonts)
By Graham Jenkins

Yet Another New Desktop Environment?

Actually, it's not. The Common Desktop Environment (CDE) has been around for as long as some of those who will be reading this article. It was jointly developed by IBM, Sun, HP and Novell so as to provide a unified "look and feel" to users of their systems. It was also adopted by other companies (notably Digital Equipment). You can find further details at "IBM AIX: What is CDE?".

[Screenshot: a typical CDE screen]

The early versions of KDE appear to have been based on CDE, and the more recent releases of XFce have a look-and-feel which is very similar to that of CDE. A key difference here is that both KDE and XFce are Open Source developments.

DeskTop Terminal (dtterm)

One of the most-used CDE applications is probably its desktop terminal 'dtterm' which was based on 'xterm' with some extra menu-bar capabilities; its look is not unlike that of 'gnome-terminal'. There are also image-viewer, performance-monitoring, mail-reader and other useful tools.

Why Should I Care?

I work in an environment where I am required to access and manage a number of Solaris and HP-UX servers. Most of my work is done at a NetBSD-based Xterminal, managed by a remote Solaris machine so that I have a CDE desktop. There are times it is managed instead by a remote Linux machine so that I have a Gnome desktop. And there are times (too many of them!) when I work from home, using a Linux machine with a locally-managed Gnome desktop.


It matters little where I am working; as soon as I open up a CDE utility such as 'dtterm', my Xserver starts looking for CDE-specific fonts. It seems that a number of vendor-supplied backup and other utilities also make use of these fonts.

DeskTop Terminal without Font Support

In the case of 'dtterm' the end-result is that an attempt to select a different-sized font produces a selection list containing eight fonts, and seven of these can't be found. It is actually possible to get around this by redefining on the Solaris or HP host the names of the fonts which are used for the 'dtterm' application. This can be done at either a system-wide or a user-specific level; either way, it's hardly an elegant solution.

In the case of a splash-screen produced at CDE-login time, the result can be quite dire: the user is unable to read the login prompts or error messages! More recent versions of both Solaris and HP-UX get around this by attempting to append an entry like 'tcp/hpmachine:7100' to the font-path at login time. That's fine unless your site security policy prohibits the activation of font service on your Solaris and HP servers.

Why Not Use Specialised Font Server Machines?

You can designate a couple of machines as font-servers for your site. These can be small dedicated machines, or they can offer other services (such as DHCP, NTP, etc.) as well. That's actually the way that it's done with 'thin' Xterminals from companies like IBM, NCD and HP.

There are several issues. First up, you have to actually install the CDE-fonts on the font-server machines; there may be some copyright issues here if you are installing (for instance) HP CDE fonts on Linux machines.

Something we noticed in practice is that the Xserver software we are using doesn't seem smart enough to do a transparent fail-over in the event of a single server disconnection. So what happens is that a user suddenly finds himself presented with a blank screen.

If you are working from home with a modem connection to the LAN on which your font-servers reside, it can take some time for required fonts to arrive when you start a 'dtterm' application.

Install on Every Xserver?

This is certainly a possibility, and if you can live with the copyright issues, it will solve most of the problems outlined above. But it will require an extra 10MB of file space on each system.

There is Another Way!

The good news is that you don't have to lose sleep over the copyright issues. And you don't have to install strange fonts all over your font directories.

All you need do is identify some commonly-available fonts which closely match the CDE-specific fonts, and create one 'fonts.alias' file. Place it in an appropriate directory (e.g. '/usr/X11R6/lib/X11/fonts/local'), and run 'mkfontdir' in that directory. Then ensure that the directory name is included in your font-server configuration file (e.g. '/usr/X11R6/lib/X11/fs/config'). If your version of Linux (or NetBSD, or FreeBSD ..) doesn't include a term like 'unix/:7100' in its 'XF86Config' (or similar) server configuration file, you should place the name of your selected font directory in that configuration file.

The Intimate Details

Here's what the 'fonts.alias' file looks like. For clarity, I've shown just the first two and the last alias hereunder, and I've broken each line at the whitespace between the alias-name and its corresponding real font. There wasn't a great deal of science involved in the development of this file, although I did use a couple of simple scripts to assemble it. It was just a matter of finding, for each alias, a font having similar characteristics and size.

 ! XFree86-cdefonts-1.0-2
 ! Font Aliases for Common Desktop Environment using XFree86 fonts.
 ! Graham Jenkins <grahjenk@au1.ibm.com> October 2001.
 -dt-application-bold-i-normal-serif-11-80-100-100-m-60-iso8859-1
   "-adobe-courier-bold-o-normal--11-80-100-100-m-60-iso8859-1"
 -dt-application-bold-i-normal-serif-14-100-100-100-m-90-iso8859-1
   "-adobe-courier-bold-o-normal--14-100-100-100-m-90-iso8859-1"
  ...
 "-dt-interface user-medium-r-normal-xxl serif-21-210-72-72-m-140-hp-roman8"
   "-b&h-lucidatypewriter-medium-r-normal-sans-24-240-75-75-m-140-iso8859-1"

Read My Lips

OK, so you've read this far, and you're still asking "Why Should I Care?". My guess is that eighty percent of you have never used CDE and are unlikely to use it in the future.

But what I can guarantee is that most of you are going to run an application one day, and wonder why its fonts don't display or scale properly. My hope is that when that happens, you'll recall what you've read here - and apply it to the creation of an appropriate 'fonts.alias' file as outlined above.

 

[picture] Graham is a Unix Specialist at IBM Global Services, Australia. He lives in Melbourne and has built and managed many flavors of proprietary and open systems on several hardware platforms.


Copyright © 2003, Graham Jenkins. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Linux-Based Telecom
By Janine M Lodato

Because the baby-boom generation will soon be the senior population, the market for voice-activated telephone services will be tremendous. An open-minded company such as IBM or Hewlett-Packard will surely find a way to meet the market demand. What is needed by this aging population is a unified messaging system -- preferably voice-activated -- that lets the user check for caller ID, receive short messages, check for incoming and outgoing e-mail, access address books for both telephone numbers and e-mail addresses, and place telephone calls.

Everything that is now done by typing and text will be more quickly and easily performed with voice recognition. That is, a voice will identify a caller, read short messages aloud, provide e-mail service in both directions (text-to-voice reading of incoming e-mail and voice-to-text composition of outgoing e-mail), give voice access to address books, and place phone calls (and end them when you're done). Once users are able to answer, make and end a call using just their voices, working with the telephone will be a breeze and seniors will not feel isolated and lonely. What a boon to society voice-activated telephone services will be. Whether or not users are at all computer-savvy, e-mail will also be an option on the telephone; it is, after all, a form of communication, just as the telephone is. What is described here is, in short, a Linux-based unified communication system.

Of great value to the user would be e-mail and its corresponding address book. As e-mail comes in, messages could be read by way of a text-to-voice method. Also of great value would be a telephone system with its corresponding address book and numbers. Short messaging could be read through text-to-voice technology and short messages can be left using voice-to-text methodology.

One of the most advanced and productive uses of such simple Linux-based communication devices is to search the web without going on-line to a search engine. Instead, one can just send an e-mail to Agora in Japan and do multiple Google searches with a single e-mail; you do not even need a browser. For example, suppose we are interested in how Linux has recently been doing in the press in connection with life sciences and medical applications. Just send a single e-mail to a free service such as Agora at dna.affrc.go.jp. In the body of that single e-mail one can put a number of searches (and, of course, modify the search terms):

Send http://www.google.com/search?q=Linux+press+2003+life*sciences&num=50

Send http://www.google.com/search?q=Linux+press+2003+medical*devices&num=50

Send http://www.google.com/search?q=Linux+press+2003+telemedicine&num=50

Within thirty minutes or so, depending on the time of day and the load the Agora server is under, you get a number of e-mails back, one for each Send command in your e-mail. Each e-mail lists the URLs that match the keywords you specified in the Send command, each accompanied by a one-paragraph review of the corresponding web site. Then simply select the reference numbers next to the URLs you are interested in, list them in a reply e-mail back to Agora, and it will send the web pages you have selected. Or you can use the deep command to get the entire web site for a URL. To learn more, send a Help e-mail to the Agora server for details.

How productive one can get! But do not abuse these fine services, since they are intended for researchers. Use them when it's nighttime in Japan: after 7pm on the US west coast, after 4pm on the US east coast, and after 11am in western Europe.

Anything that allows independence for the user is bound to be helpful to every aspect of society.

With the attractive price of a Linux-based unified communication device encompassing all the applications mentioned above, users can be connected and productive without the need for an expensive Windows system.

Resources

There's a list of Agora and www4mail servers at http://www.expita.com/servers.html. Two other (less reliable) Agora servers are agora at kamakura.mss.co.jp (Japan) and agora at www.eng.dmu.ac.uk (UK).

Www4mail is a very modern type of server that works similarly to Agora. Two servers the author has tested are www4mail at kabissa.org and www4mail at web.bellanet.org. Send an e-mail with the words "SEND HELP" in the body for instructions.

 


Copyright © 2003, Janine M Lodato. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

The Sushi Pub at the World Internet Center
By Janine M Lodato

The most important capital of an alliance: people, successfully collaborating via the Internet.
Hope springs eternal at the World Internet Center (The Center) in Palo Alto, California. Located in an upstairs suite at the historic Stanford Barn, The Center hosts a weekly social event on Thursdays from 5 to 7 PM called "the Pub". Aside from sushi and wine provided by The Center at nominal cost to those who attend, the networking that takes place at the Pub offers hope to millions of people.

The Center brings hope by connecting Silicon Valley entrepreneurs, corporate executives and technologists, all wanting to forge a start-up company that will make a mark on today's economy using info-tech such as the Internet. From the business opportunities that develop, the beneficiaries of such opportunities are not just the businessmen putting together the deal. In the long run, the beneficiaries may also include people around the world afflicted with physical malfunctions and illnesses.

The Pub allows people to put together start-up firms of varying interests. Small, narrowly-focused companies such as those concentrating on life sciences, soon to be headquartered in Singapore, rely on larger businesses to disseminate their services and capabilities. These larger businesses are called systems integrators.

Visitors to The Center come from as far away as Russia, Australia, Iran, Europe, China, Japan, Chile, Brazil and, of course, from Silicon Valley. California's Silicon Valley is the Mecca of high technology: telecom, multimedia telecom, computers, Internet and e-commerce, attracting countries wanting to ride the high-tech wave of the future because of its potential for financial gain.

Because people forming a team and working well together as a group make for the success of a new company, the elbow-rubbing they do socially at the Pub is an indication of how things will work out in the long run. People make the deal work, not technology, not ideas, not money, but people with those things. If new businesses can speed along medical help for people with all sorts of physical malfunctions, The Center will have achieved a major milestone: lowering the cost of medicine and improving the lives of the needy.

The main theme of The Center is to connect its current and past large corporate sponsors such as Amdocs, Deutsche Telekom, HP, IBM, SAP and Sun with small high-tech companies and expert individuals in the form of a series of focused think-tanks.

Because my husband, Laszlo Rakoczi, a Hungarian revolutionary who emigrated to the USA after the revolution in Hungary was crushed by the Evil Empire (the Soviet Union), is a member of the Advisory Board of The Center, many small companies seek him out to discuss the potential of collaborative, strategic-alliance-type business arrangements. One such high-tech company that recently approached him is Sensitron.net; its founder and CEO, Dr. Rajiv Jaluria, met Laszlo through The Center. Sensitron is a small high-tech firm which has built an end-to-end system to connect medical instruments to monitoring stations and databases, thus improving the productivity of medical professionals and increasing the quality of medical care. Of course, the question of what type of platform the application should run on came up. Laszlo immediately introduced the idea of embedded Linux-based systems for the medical instruments as well as for the professionals' PDAs and Tablets, and even the potential of Linux-based servers and databases. Laszlo suggested these since Linux would allow...

Laszlo could not resist pointing out that the real Evil Empire -- the one holding down and fighting the real revolution, the simple and low-cost collaboration of all peoples via the Internet, not just those who can pay the high cost of a Windows-based PC -- is Microsoft, with its monopolistic pressure tactics. One such evil practice of Microsoft's is the campaign under which they embrace a small company like Sensitron, enhance that company's application, then extinguish the original team. Embrace, enhance, extinguish. The Soviets were never that good and imaginative in their tyrannical approach. Maybe that is the reason they failed.

As the biotech and IT arenas converge, IT enables life sciences companies to accelerate the development, testing and marketing of their intellectual properties, products and services. Life sciences encompass the fields of biotechnology, medical equipment and pharmaceutical products and services. Such companies include many small, as well as large entities like Pfizer, Chiron, Philips and Agilent.

It is hard to believe such a sophisticated, practical idea could come from people socializing over wine and sushi, but that is indeed the case. Many future start-up companies in the Silicon Valley will have the World Internet Center and its weekly Pub to thank for their conception.

One such important think tank, currently in its formation stages and looking for corporate sponsors, is an NIH-funded project for the disabled, aging and ailing. This proposed think tank is planning to investigate the potential of collaborative telemedicine. For example, due to the shortage of medical professionals, China must use telemedicine to connect the small clinics in 260,000 communities to the 100 large teaching hospitals via VSAT-type Internet linkage. NeuSoft of China is putting together such a system, and of course they do not want to fall prey to Microsoft's overpriced systems. In fact, Linux is the major platform China wants for all their applications, supported by the Red Flag project.

Telemed systems of this type apply to a very large group, including disabled, aging and ailing people as well as the professionals supporting them. The sum of these people account for half the population of the world and very few of them can afford the artificially high cost of Windows-based systems. Telemed can lower the cost of medicine, improve the capabilities of the medical professionals and at the same time improve the quality of life of the patients.

Sensitron, with the support of NeuSoft, will propose that NIH provide a grant to their strategic alliance, under which a disabled female investigator will do a clinical study of the potential for significantly improved health via Internet-based, collaborative, virtual-community-style involvement. This significant upgrade of self-supported health improvement can be achieved using assistive technologies (AT) connected via the web. However, such AT technologies must be upgraded to allow collaboration between the health service professionals and their patients linked via the virtual community. The AT-based virtual community needs functions such as...

An important point: the AT technologies we would apply to the disabled and the aging can also be used for the eyes-busy, hands-busy professionals. It could be sold via a for-profit Internet company, with some of the profits paid back to the non-profit think tank in the form of grants and matching grants.

Melbourne, Singapore, Dalian, Shanghai, Hong Kong, Kuala Lumpur, Munich, Budapest, Vienna, Lund, Bern, Helsinki, Shenyang, Dublin, London, Stuttgart, Hawaii, Vancouver, Toronto, etc., would all love to come to Silicon Valley in this virtual-community manner, through a club equipped with a standard wireless local area network (WLAN) connected to a virtual private network (VPN). This cross-oceanic virtual private network will have kiosk-based unified messaging (UM) between the clubs. It will also include very low-cost voice over Internet protocol (VoIP), connecting all major APEC (Asia-Pacific Economic Cooperation) cities with VoIP and UM over IP, as well as, through carrier allies with IP backbone, 120 of the important cities in the USA and Canada and many in the EU.

Those of us with neurological dysfunctions such as MS, ALS, ALD, Parkinson's, Alzheimer's and myriad more have a very special personal stake in the networking that goes on over sushi and wine. Life sciences and information technology working together can aid these patients in a very effective way. For example, techniques like neuroprosthetics -- interaction with devices using voice and eye signals -- can develop.

As I sit in the only quiet spot at The Center during its weekly, after-hours social event, I notice the networking that takes place. The Center provides a great opportunity for people to share ideas for business. Everyone from the original architects of the Valley to new entrepreneurs is there. Investors look for good investment opportunities, and start-up companies look for anyone wanting to put money into their new venture. Basically, it's a people-to-people scene and is exciting to observe.

Then there are those who find the allure of the event as a singles bar irresistible. Where else can they find stimulating company, fresh sushi and good wine at such a fair price? Personally, having attended the weekly occasion for so many months now because my husband, Laszlo, is a member of The Center's Advisory Board, I couldn't care less if I ever see sushi again in my life!

By now I have my own circle of friends at this gathering. And I find those wanting to do business with my important husband very courteous and attentive to me. In general, the entire encounter is an "upper" for me, a technology midget among giants.

Nibbling on the cheese set before me, my taste for sushi having long since expired, I fulfill my role as a mouse in the boardroom to the max. I overhear conversations of businessmen from the already-mentioned countries exchanging e-mail addresses to further negotiate via the Internet. The Center has achieved its goal.

I smile a little inward smile, realizing medical researchers around the world have been sharing ideas and breakthroughs on the Internet for years. A medical Manhattan Project has been globalized thanks to the Internet. I know a lot of afflicted people who were ready for medical help yesterday.

What can we do besides raise money to hurry things along? Hope the convergence of biotechnology and IT accelerates treatments for physical malfunctions worldwide and promotes the free exchange of intellectual property among biotechnology companies and research institutions, that's what. And keep that sushi and wine readily available for the Thursday night Pub at the World Internet Center.

 


Copyright © 2003, Janine M Lodato. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Perl One-Liner of the Month: Good Enough For Government Work
By Ben Okopnik

When Woomert Foonly answered the door in response to an insistent knocking, he found himself confronted by two refrigerator-sized (and shaped) men in dark coats who wore scowling expressions. He noted that they were both reaching into their coats, and his years of training in the martial arts and razor-sharp attention to detail resulted in an instant reaction.

- "Hello - you're obviously with the government, and you're here to help me, even if I didn't call you. May I see those IDs?... Ah. That agency. Do come in, gentlemen. Feel free to remove your professional scowls along with your coats, you won't need them. Pardon me while I call your superiors just to make sure everything is all right; I need to be sure of your credentials. Please have a seat."

Some moments later, he put down the phone.

- "Very well; everything seems right. How may I help you, or, more to the point, help your associates who have a programming problem? I realize that security is very tight these days, and your organization prefers face-to-face meetings in a secure environment, so I'm mystified as to your purpose here; I don't normally judge people by appearances, but you're clearly not programmers."

The men glanced at each other, got up without a word, and began a minute security survey of Woomert's living room - and Woomert himself - using a variety of expensive-looking tools. When they finished a few minutes later, they once again looked at each other, and nodded in unison. Then, each of them reached into the depths of their coat and extracted a rumpled-looking programmer apiece, both of whom they carefully placed in front of Woomert. The look-and-nod ritual was repeated, after which they each retired to the opposite corners of the room to lurk like very large shadows.

Woomert blinked.

- "Well. The requirements of security shall be served... no matter what it takes. Have a seat, gentlemen; I'll brew some tea."

A few minutes later, after introductions and hot tea - the names of the human cargo turned out to be Ceedee Tilde and Artie Effem - they got down to business. Artie turned out to be the spokesman for the pair.

- "Mr. Foonly, our big challenge these days is image processing. As you can imagine, we get a lot of surveillance data... well, it comes to us in a standardized format that contains quite a lot of information besides the image: the IP of the originating site, a comment field, position information, etc. The problem is, both of us are very familiar with text processing under Perl but have no idea how to approach extracting a set of non-text records - or, for that matter, how to avoid reading in a 200MB image file when all we need is the header info... I'll admit, we're rather stuck. Our resident C++ guru keeps trying to convince us that this can only be done in his language of choice - it wouldn't take him more than a week, or so he says, but we've heard that story before." After an enthusiastic nod of agreement from Ceedee he went on. "Anyway, we thought we'd consult you - there just has to be something you can do!"

Woomert nodded.

- "There is. One thing, though: since we're not dealing with actual classified data or anything that requires a clearance - I assume you've brought me a carefully-vetted specification sheet, yes? - I want my assistant, Frink Ooblick, to be in on the discussion. This is, in fact, similar to the kind of work he's been trying to do lately, so he should find it useful as well."

Frink was brought in and debugged by the pair Woomert had dubbed Strong and Silent, although "perl -d" [1] was nowhere in evidence. After introductions all around, he settled into his favorite easy chair from which he could see Woomert's screen.

- "All right, let's look at the spec sheet. Hmmm... the header is 1024 bytes; four bytes worth of raw IP address, a forty-byte comment field, latitude and longitude of top left and bottom right, each of the four measurements preceded by a length-definition character... well, that'll be enough for a start; you should be able to extrapolate from the solution for the above."

"What do you think, Frink? Any ideas on how to approach this one?"

Frink was already sitting back in his chair, eyes narrowed in thought.

- "Yes, actually - at least the first part. Since they're reading a binary file, ``read'' seems like the right answer. As for the second... well, ``substr'', maybe..."

- "Close, but not quite. ``read'' is correct: we want to get a fixed-length chunk of the file. However, "substr" isn't particularly great for non-text strings - and hopeless when we don't know what the field length is ahead of time, as is the case with the four lat/long measurements. However, we do have a much bigger gun... whoa, boys, calm down!" he added as Strong and Silent stepped out of their corners, "it's just a figure of speech!"

"Anyway," he continued, with a twinkle in his eye that hinted at the "slip" being not-so-accidental, "we have a better tool we can use for the job, one that's got plenty of pep and some to spare: ``unpack''. Here, try this:


# Code fragment only; actually processing the retrieved data is left as an
# exercise, etc.  :)
...
$A = "file.img";
open A or die "$A: $!";
read A, $b, 1024;
@c = unpack "C4A40(A/A)4", $b;
...
The moment of silence stretched until Ceedee cleared his throat.

- "Ah... Mr. Foonly... what the heck is that? I can understand the ``open'' function, even though it looks sort of odd... ``read'' looks reasonable too... but what's that ``unpack'' syntax? It looks as weird as snake suspenders."

Woomert glanced around. Artie was nodding in agreement, and even Frink looked slightly bewildered. He smiled and took another sip of tea.

- "Nothing to worry about, gentlemen; it's just an ``unpack'' template, a pattern which tells it how to handle the data. Here, I'll walk through it for you. First, though, let's expand this one-liner into something a bit more readable, maybe add a few comments:


$A = "file.img";                # Set $A equal to the file name
open A or die "$A: $!";         # Open using the "new" syntax
read A, $b, 1024;               # Read 1kB from 'A' into $b
@c = unpack "C4A40(A/A)4", $b;  # Unpack $b into @c per template
The new syntax of "open" (starting with Perl 5.6.0) allows us to "combine" the filehandle name and the filename, as I did in the first two lines; the name of the variable (without the '$' sigil) is used as the filehandle. If you take a look at ``perldoc -f pack'', it contains a longish list of template specifications, pretty much anything you might want for conversions; you can convert various types of data, move forward, back up, and in general dance a merry jig. The above was rather simple, really:

C4       An unsigned "char" value, repeated 4 times
A40      An ASCII string 40 characters long
(A/A)4   An ASCII string preceded by a "length" argument which is
         itself a single ASCII character, repeated 4 times
The resulting output was assigned to @c, which now contains something like this:

$c[0]   The first octet of the IP quad
$c[1]   The second octet of the IP quad
$c[2]   The third octet of the IP quad
$c[3]   The fourth octet of the IP quad
$c[4]   The comment field
$c[5]   The latitude of the upper left corner
$c[6]   The longitude of the upper left corner
$c[7]   The latitude of the lower right corner
$c[8]   The longitude of the lower right corner
Obviously, you can extend this process to your entire data layout. What do you think, gentlemen - does this fit your requirements?"
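The same fixed-layout parsing can also be sketched outside Perl; here is a hypothetical Python equivalent (the header layout is the one from the spec sheet above; the example values are made up). Python's struct module has no counterpart to Perl's length-prefixed (A/A) fields, so those are peeled off by hand - which rather makes Woomert's point about unpack's one-line template:

```python
import struct

def parse_header(header):
    """Parse the story's header: 4 raw IP octets, a 40-byte comment,
    then four length-prefixed lat/long strings (Perl's (A/A)4)."""
    ip = struct.unpack_from("4B", header, 0)          # like Perl's C4
    comment = struct.unpack_from("40s", header, 4)[0].rstrip().decode()  # A40
    coords, offset = [], 44
    for _ in range(4):                                # (A/A)4, done by hand:
        n = int(header[offset:offset + 1])            # one ASCII length char...
        coords.append(header[offset + 1:offset + 1 + n].decode())  # ...then data
        offset += 1 + n
    return ip, comment, coords

# Build a matching record (made-up values) to try it on:
rec = (bytes([192, 168, 0, 1]) + b"test image".ljust(40)
       + b"7" + b"38.8977" + b"8" + b"-77.0365"
       + b"7" + b"38.8975" + b"8" + b"-77.0363")
print(parse_header(rec))
```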

After the now-enthusiastic Artie and Ceedee had been bundled off by their hulking keepers and the place was once again as roomy as it had been before their arrival, Woomert opened a bottle of Hennessy's "Paradise" cognac and brought out a pair of small but fragrant cigars which proved to be top-grade Cohibas.

- "Well, Frink - that's another case solved; something that never fails to make me feel cheery and upbeat. As for you - hit those books, young man! - at least when we get done with this little treat. ``perldoc perlopentut'' will make a good introduction to the various ways to open a file, duplicate a filehandle, etc.; ``perldoc -f pack'' and ``perldoc -f unpack'' will explain those functions in detail. When you think you've got it, find a documented binary file format and write a parser that will pull out the data for examination. By this time tomorrow, you should be quite an expert in the use of these tools..."



[1] Perl comes with a very powerful built-in debugger; see "perldoc perldebtut" and "perldoc perldebug" for more information. Note, however, that it's not very good at locating hidden transmitters or wiretaps...

 

Ben is a Contributing Editor for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity at age six--promptly demonstrating it by sticking a fork into a socket and starting a fire--and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory. He would gladly pay good money to any psychologist who can cure him of the resulting nightmares.

Ben's subsequent experiences include creating software in nearly a dozen languages, network and database maintenance during the approach of a hurricane, and writing articles for publications ranging from sailing magazines to technological journals. Having recently completed a seven-year Atlantic/Caribbean cruise under sail, he is currently docked in Baltimore, MD, where he works as a technical instructor for Sun Microsystems.

Ben has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


Copyright © 2003, Ben Okopnik. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Optimizing GCC
By Justin Piszcz

I have a Pentium 3 866MHz CPU. After reading the freshmeat article on optimizing GCC a few days ago, I got to thinking: how much faster would gcc compile the kernel if gcc itself were optimized? I chose to benchmark kernel compilation times because I think it is a good benchmark, and many other people also use it to benchmark system performance. Also, at one point or another most Linux users will have to take the step and compile the kernel, so I thought I'd benchmark something that is useful and that people have a general idea of how long it takes without optimizations. My test consists of the following:

  1. Run 10 kernel compilations and calculate the average time.
  2. The kernel in question is the latest stable Linux kernel.
  3. The GCC used with this test is the latest stable gcc.

With a non-optimized compiler (configure; make; make install),
Average of 10 'make bzImage':
TIME: 12 minutes 42 seconds (762 seconds)

With an optimized compiler, I specifically used:

-O3 -pipe -fomit-frame-pointer -funroll-loops -march=pentium3 -mcpu=pentium3
-mfpmath=sse -mmmx -msse
In case you are wondering how to do this, it is in the FAQ of the gcc tarball. The following line is what I used:
   ./configure ; make BOOT_CFLAGS="optimization flags" bootstrap ; make install
Average of 10 'make bzImage':
TIME: 9 minutes 31 seconds (571 seconds)

I compile almost everything I run on my Linux box. I use a package manager called relink to manage all of my installed packages.

Optimizing the compiler alone offers a speed increase of 33% (3 minutes 11 seconds, or 191 seconds, faster per build). This may not seem like a lot, but for compiling big programs it will significantly reduce compile times, making those Qt and Mozilla builds that much faster :) The actual test consisted of this:

cd /usr/src/Linux
for i in `seq 1 10`
do
  make dep
  make clean
  /usr/bin/time make bzImage 2>> /home/war/log
done
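As a quick sanity check of the quoted figures, the speedup follows directly from the two averages (a throwaway calculation, not part of the benchmark):

```python
# Average 'make bzImage' times, in seconds, from the runs above:
baseline, optimized = 762, 571

saved = baseline - optimized                      # seconds saved per build
gain = (baseline - optimized) / optimized * 100   # speed increase, percent
print(f"{saved} seconds saved per build, about {gain:.0f}% faster")
```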
In case you're wondering about the time elapsed per build and how much the CPU was utilized, here they are:
No Optimization (Standard GCC-3.2.2):

   720.88user 34.54system 12:43.97elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
   719.06user 35.69system 12:42.09elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   719.14user 34.37system 12:39.64elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   720.52user 36.42system 12:46.68elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
   721.07user 33.86system 12:41.59elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   718.95user 35.65system 12:41.31elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   721.83user 36.26system 12:51.54elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
   720.29user 34.18system 12:40.63elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   719.14user 34.80system 12:39.19elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   721.16user 33.88system 12:41.93elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k


Optimized Compiler (GCC-3.2.2 w/ "-O3 -pipe -fomit-frame-pointer -funroll-loops
-march=pentium3 -mcpu=pentium3 -mfpmath=sse -mmmx -msse")

   532.09user 33.62system 9:32.76elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
   531.57user 32.92system 9:29.25elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   532.99user 33.12system 9:31.18elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   532.58user 33.16system 9:30.57elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   533.18user 32.96system 9:31.34elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   534.01user 32.21system 9:32.50elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
   532.59user 33.41system 9:31.56elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   532.76user 33.68system 9:32.01elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   534.19user 32.54system 9:31.92elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
   534.11user 32.76system 9:32.40elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
Note: I realize that some of the optimizations, most specifically -fomit-frame-pointer, may not be good optimization features, especially for debugging. However, my goal is to increase compiler performance, not to worry about debugging.

 


Copyright © 2003, Justin Piszcz. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Programming the SA1110 Watchdog timer on the Simputer
By Pramode C.E

In last month's article (Fun with Simputer and Embedded Linux), I had described the process of developing programs for the Simputer, a StrongArm based handheld device. The Simputer can be used as a platform for learning microprocessor and embedded systems programming. This article describes my attempts at programming the watchdog timer unit attached to the SA1110 CPU which powers the Simputer. The experiments should work on any Linux based handheld which uses the same CPU.

The Watchdog timer

Due to obscure bugs, your computer system is going to lock up once in a while - the only way out would be to reset the unit. But what if you are not there to press the switch? You need to have some form of `automatic reset'. The watchdog timer presents such a solution.

Imagine that your microprocessor contains two registers - one which gets incremented every time there is a low-to-high (or high-to-low) transition of a clock signal (generated internal to the microprocessor or coming from some external source), and another one which simply stores a number. Let's assume that the first register starts out at zero and is incremented at a rate of 4,000,000 per second, and that the second register contains the number 40,000,000. The microprocessor hardware compares these two registers every time the first register is incremented and issues a reset signal (which has the result of rebooting the system) when the values of these registers match. Now, if we do not modify the value in the second register, our system is sure to reboot in 10 seconds - the time required for the values in both registers to become equal.

The trick is this - we do not allow the values in these registers to become equal. We run a program (either as part of the OS kernel or in user space) which keeps on moving the value in the second register forward before the values of both become equal. If this program does not execute (because of a system freeze), then the unit would be automatically rebooted the moment the value of the two registers match. Hopefully, the system will start functioning normally after the reboot.
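This register dance can be sketched as a toy simulation (hypothetical Python, purely illustrative - on real hardware the counter and match register live in the CPU):

```python
class WatchdogSim:
    """Toy model of the up-counter / match-register pair described above."""
    def __init__(self, timeout_ticks):
        self.counter = 0                    # free-running up counter
        self.timeout_ticks = timeout_ticks
        self.match = timeout_ticks          # match register

    def kick(self):
        """Move the match register forward, postponing the reset."""
        self.match = self.counter + self.timeout_ticks

    def tick(self):
        """Advance the clock one step; return True if a reset would fire."""
        self.counter += 1
        return self.counter == self.match

# A process that kicks regularly never triggers a reset:
wd = WatchdogSim(timeout_ticks=10)
fired = False
for t in range(100):
    if t % 5 == 0:          # keepalive runs well inside the timeout
        wd.kick()
    fired = fired or wd.tick()
print("reset fired:", fired)
```

If the keepalive stops (remove the `wd.kick()` call), the counter reaches the match value after `timeout_ticks` steps and the simulated reset fires - exactly the behaviour the watchdog hardware provides.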

Resetting the SA1110

The Intel StrongArm manual specifies that a software reset is invoked when the Software Reset (SWR) bit of a register called RSRR (Reset Controller Software Register) is set. The SWR bit is bit D0 of this 32 bit register. My first experiment was to try resetting the Simputer by setting this bit. I was able to do so by compiling a simple module whose `init_module' contained only one line:

RSRR = RSRR | 0x1;

The Operating System Timer

The StrongArm CPU contains a 32 bit timer that is clocked by a 3.6864MHz oscillator. The timer contains an OSCR (operating system count register) which is an up counter and four 32 bit match registers (OSMR0 to OSMR3). Of special interest to us is the OSMR3.

If bit D0 of the OS Timer Watchdog Match Enable Register (OWER) is set, a reset is issued by the hardware when the value in OSMR3 becomes equal to the value in OSCR. It seems that bit D3 of the OS Timer Interrupt Enable Register (OIER) should also be set for the reset to occur.

Using these ideas, it is easy to write a simple character driver with only one method - `write'. A write will delay the reset by a period defined by the constant `TIMEOUT'.



/*
 * A watchdog timer. 
 */

#include <linux/module.h>
#include <linux/ioport.h>
#include <linux/sched.h>
#include <asm-arm/irq.h>
#include <asm/io.h>

#define WME 1
#define OSCLK 3686400 /* The OS counter is incremented
                       * this many times each second
                       */

#define TIMEOUT 20 /*  20 seconds timeout */

static int major;
static char *name = "watchdog";

void
enable_watchdog(void)
{
    OWER = OWER | WME;
}

void
enable_interrupt(void)
{
    OIER = OIER | 0x8;
}

ssize_t 
watchdog_write(struct file *filp, const char *buf, size_t
               count, loff_t *offp)
{
    OSMR3 = OSCR + TIMEOUT*OSCLK;   
    printk("OSMR3 updated...\n");
    return count;
}

static struct file_operations fops = {write:watchdog_write};

int
init_module(void)
{
    major = register_chrdev(0, name, &fops);
    if(major < 0) {
       printk("error in init_module...\n");
       return major;
    }
    printk("Major = %d\n", major);
    OSMR3 = OSCR + TIMEOUT*OSCLK;
    enable_watchdog();
    enable_interrupt();
    return 0;
}


void
cleanup_module()
{
    unregister_chrdev(major, name);
}

It would be nice to add an `ioctl' method which can be used at least for getting and setting the timeout period.

Once the module is loaded, we can think of running the following program in the background (of course, we have to first create a device file called `watchdog' with the major number which `init_module' had printed). As long as this program keeps running, the system will not reboot.



#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdlib.h>     /* exit() */
#include <unistd.h>     /* write(), sleep() */

#define TIMEOUT 20

int
main()
{
        int fd, buf;
        fd = open("watchdog", O_WRONLY);        /* the device file created above */
        if(fd < 0) {
                perror("Error in open");
                exit(1);
        }
        while(1) {
                /* Each write pushes OSMR3 forward; keep kicking well
                 * before the deadline. */
                if(write(fd, &buf, sizeof(buf)) < 0) {
                        perror("Error in write, system may reboot any moment");
                        exit(1);
                }
                sleep(TIMEOUT/2);
        }
}

Conclusion

If you are not bored to death reading this, you may be interested in knowing more about Linux on handheld devices (and in general, embedded applications). So, till next time, Bye!

 

I am an instructor working for IC Software in Kerala, India. I would have loved becoming an organic chemist, but I do the second best thing possible, which is play with Linux and teach programming!


Copyright © 2003, Pramode C.E. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Book Review: Unix Storage Management
By Dustin Puryear

Title: Unix Storage Management
Authors: Ray A. Kampa, Lydia V. Bell
Publisher: Apress
Year: 2003

There are some rather complex--and dare I say it--arcane issues involved in managing storage in a Unix environment. Indeed, Unix storage management can be a complicated affair. This is especially true when you consider the many demands business places on storage systems, such as fault tolerance, redundancy, speed, and capacity. Apress has published a book that they promote as being written specifically about this topic; it weighs in at a comfortable 302 pages of actual material.

In general, I find Unix Storage Management to be a good primer on storage management. However, I am a little disappointed in the lack of focus on actually administering storage. When requesting the book, I assumed that I would learn how to roll up my sleeves and tune and tweak file system performance, optimize access to network-based storage, and in general get a real feel for managing storage in a Unix environment. But alas, that isn't the case. Unix Storage Management deals mostly with the higher-level details of understanding how storage works, determining what kind you need, and then working to integrate that storage into your network.

This isn't to say that the book doesn't do a good job of introducing the reader to the major components of modern Unix storage systems. Indeed, technologies covered include RAID, SANs, NAS, and backups, to name just a few. Kampa and Bell actually do a good job of introducing this material, but they do not treat the subject matter in great depth. Essentially, after reading the text, readers will have enough knowledge to do more research and know what they are looking for, but they doubtless would not be in a position to actually implement a solution in a demanding environment.

The target audience for this book, whether intentional or not, is IT managers and those who want a broad overview of how storage systems work. Administrators in the trenches would also enjoy skimming this book, if for no other reason than to remind themselves of the technologies available to them. Also, most administrators will look favorably on the chapter "Performance Analysis", which does a rather good job of detailing the process of collecting and analyzing performance information on storage systems. All in all, this is not a bad book, as long as you aren't expecting to walk away with guru-like powers over Unix storage systems.

 

Dustin Puryear, a respected authority on Windows and UNIX systems, is founder and Principal Consultant of Puryear Information Technology. In addition to consulting in the information technology industry, Dustin is a conference speaker; has written articles about numerous technology issues; and authored "Integrate Linux Solutions into Your Windows Network," which focuses on integrating Linux-based solutions in Windows environments.


Copyright © 2003, Dustin Puryear. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003

Doing Things in GNU/Linux
By Raj Shekhar, Anirban Biswas, Jason P Barto and John Murray

 
1. Chatting / Instant Messaging with Linux  
2. Printing  
3. Installing and Managing Software with RPM  
4. Playing and Recording Music  
5. Linux in the Office  
6. Gaming With Linux  
7. The Post-Install Tune-up  
8. End Note  

1. Chatting / Instant Messaging with Linux

1.1 History of chatting

Many normal users think that GNU/Linux is not for them, but only for geeks. One reason for this is that they believe they cannot do basic things like chatting under GNU/Linux; they believe there are no instant messenger clients for Yahoo, MSN, and ICQ in GNU/Linux. This is absolutely wrong - chatting was actually first implemented on UNIX. (Linux is a UNIX-like free/open operating system.)

talk was the first chatting program, developed on UNIX long ago when there was no MS trying to capture the Internet. The computing world was a free land then: you could share any program with anyone, and you could change it to suit your needs - much like what Free Software is trying to do. talk is still available with UNIX and GNU/Linux.

From talk, other chatting concepts developed. IRC was the first; then other companies came along, and the ICQ, Yahoo, MSN, Jabber, AIM, etc. chat systems were developed.

I shall try to touch on each of the chat systems here.

1.2 talk

talk is the most basic chat system, but it is still popular on LANs. If you are in a college or office and can access only a UNIX or Linux terminal, you can still chat with your friends. First of all, the talk server daemon must be running.

To chat with your friend, all you have to do is the following:
[anirban@anirban anirban]$ talk <username>@host <tty>
For example, if the user name is raj (it will be the same as his login name on the system) and his host computer is www.anyhost.com, then it will be:
[anirban@anirban anirban]$ talk raj@www.anyhost.com

You may be wondering what tty is. Suppose your friend has opened many terminals - the terminal to which you want to send the message is specified by the tty number. Numbers start from 0 and only integers are allowed.

You can do the above with write too.
[anirban@anirban anirban]$ write <username@host> <tty>

If you do not want to receive any chat invitations or messages, give the command:
[anirban@anirban anirban]$ mesg n
To remove the blocking, do:
[anirban@anirban anirban]$ mesg y

If you are a GUI lover and a heavy Yahoo or MSN chatter, you may not like this kind of chatting, but for many of us who like GNU/Linux, this old system is still gold.

1.3 IRC or Internet Relay Chat

After talk came IRC, and it is still popular. I think that if you really want a high-class chatting experience without the flooding and other bad stuff of Yahoo and MSN, then IRC is the thing for you. Also, there are many rooms (known as channels in IRC) where you can get really good help on GNU/Linux, C/C++ programming, maintaining your Linux box, and much more. (As an aside: in my personal experience as a Yahoo chatter, I did not get anything more than flooding and 4 or 5 guys running after a girl in the room, but on IRC I received really good help when I was stuck. IRC can really be a great source of help.)

1.3.1 Basic concept of IRC

IRC is a little bit different from Yahoo or MSN chat since IRC is not owned by any company. It is free/open like GNU/Linux and generally run by volunteers.

The main difference is that you do not have to sign up to get an ID or password. So what do you do instead? Choose a nickname and a host (IRC server) to connect to. Since IRC is not run by a single company, you have to know the host address, just as you have to know the URL to visit a page on the Internet. You can get the addresses of different hosts from the Internet, along with the topics they are dedicated to; for example, irc.openprojects.net is dedicated to the betterment of open source projects and open source developers.

So you have to provide your nickname and the host you want to connect to. If the nickname you pick is already taken then you'll have to provide another nickname.

IRC newbies should check out the IRC Primer before using IRC for the first time.

1.3.2 Software for IRC

There are many IRC chat applications available but I think Xchat is the best. Most distributions provide it with their installation CDs and it is often included in default installations. If it isn't already installed, fear not; you can find it in the installation CDs or download it from http://www.xchat.org. You will generally find it in RPM format so installation will not be difficult.

1.3.3 Configuring Xchat

After installation type xchat in a terminal or click the xchat icon (you will find it in `Main Menu > Internet > Chat').

The first window of Xchat will appear. Provide the nickname you would like; you can provide more than one. If a nickname is already taken in a room, xchat will use one of the other nicknames you provided; otherwise it picks the first nickname in the list. You can also provide your real name and the user name you want to chat as (generally you do not have to provide all of these; the system guesses them from your system login name and real name).

Now choose a host from the list and double-click it, or click `Connect' at the bottom. A new window will open with some text flowing in it. It will take a little time to connect; after connecting, it will show the rules you should follow to chat on this host. Since IRC is generally a volunteer effort by good-at-heart people, not a company, please try to follow them or you may get banned. Maintainers of IRC chat rooms are very strict about the rules. (That is why the chatting experience is much better here than on Yahoo or MSN.)

Now you will see a single-line text box where you can type both what you want to say and commands to navigate. Commands all start with a / (i.e. slash). To get the list of rooms (or channels) on the host, type /list. You will see all the rooms; choose the one which suits you, type /join #<roomname>, and then press `Enter'. Please note that you always have to give the number sign (#) before any room name.

Now you will enter that room and can start chatting. At the extreme right there will be a list of all users/chatters in that room; selecting any one will get info about him/her. You will find many buttons at the right side of your chat window; by selecting a user and clicking the buttons you can ban or block a user, get info about him/her, invite him/her to a personal chat, or even transfer files over IRC.

So I think you will now be able to chat on IRC. Some day you may even meet me there; I generally live on the host irc.openprojects.net in the room linux (remember to give the number sign, i.e. /join #linux).

1.4 Instant Messaging

1.4.1 ICQ

There are several ICQ chat clients for GNU/Linux, and Licq is one of the most popular. You will find it inside the Internet or Networking section of the main menu (i.e. `Main Menu > Internet/Networking > Instant messenger'), or just type licq in a terminal. When started for the first time, it will want you to register with the server to get an ID and password. Then you can log in with that ID and password as you do with most of the Windows versions of ICQ clients.

1.4.2 Yahoo!

Yahoo provides its own Yahoo Messenger for Linux, and it is similar to the Windows version except that you may find some features missing. To get more details, go to Yahoo.

Since it is similar to the Windows version, you will find the `Add Friends', `Your Status', `Ban', etc. buttons in their usual places, generally as part of a menu at the top of Yahoo Messenger. Currently you can send files, invite people to group chat, and get email notifications.

1.4.3 AIM

Kit is the AIM client for Linux (KDE). You will find it in the main menu under Network/Internet, or just type `kit' in a terminal. At first startup it will ask you to create a profile, and if you do not have an account with AOL it will ask you to create one by going to their site. The current version of Netscape also has a built-in AIM client.

1.5 All-In-One IM Clients

1.5.1 Everybuddy

Do you use or need several different instant messaging clients? Everybuddy is an Open Source IM client that supports AIM, ICQ, MSN, Yahoo!, and Jabber chat, as well as having some file transfer capabilities. In other words, a single Everybuddy client can take the place of several single-purpose clients. It is included (and often installed by default) with some distros, or you can download it from the Everybuddy homepage.

1.5.2 GAIM

GAIM is another all-in-one client resembling AIM that works with AIM, ICQ, MSN, Yahoo! and more. If you don't have it already installed, check out your installation CDs or go to the GAIM website.

2. Printing

2.1 Which Printer to Use

First check to see whether your printer is supported by Linux. Most Epson, HP, and Canon printers are supported, though there are some cheap printers which have less hardware and hence need special software to simulate it. This special software isn't generally available for Linux, so you cannot use these printers. I would not recommend buying these types of printers anyway, since their performance is worse than that of normal printers.

You can find the list of printers that are supported by GNU/Linux at linuxprinting.org. I have used RH 7.3 and an HP 810c printer here as an example.

2.2 Connecting the Printer to the Computer

After choosing the printer, check how it connects to the computer - that is, which interface it uses, e.g. USB (Universal Serial Bus) or the parallel port. Most printers use the parallel port, but modern printers generally offer both options. My printer (HP 810c) can be connected to the computer by parallel port or USB - I chose the parallel port interface. After connecting the printer comes the software part.

2.3 Installing the Printer

A standard installation of Red Hat (any version from 6.1) will contain the software required for installing the printer, but old versions of the software can be difficult to configure. Here I will focus on RH 7.1 to RH 7.3 using KDE, though it can also be done from GNOME.

2.3.1 Configuration Program

Now to install printer do the following:
  1. In KDE click the `kontrol panel' on the desktop and then click `Printer Configuration'.
  2. Click `New' and a printer configuration wizard will appear (In RH 7.1 and 7.2 a wizard will not appear but a new window will appear; however, the procedure is almost the same).
  3. Next you have to specify the kind of printer that you want to add, that is, `Network Printer' or `Local Printer'. Choose local printer, since the printer is attached to the machine from which you are configuring it. You also have to specify a name to identify the printer. A name must contain only alphabetic characters, numbers, "_" (underscore) and "-" (hyphen). Now click `New'.
  4. If it is a normal printer, the interface it uses will be detected automatically. If not, specify it: for example, a printer on the first parallel port is /dev/lp0, one on the second parallel port is /dev/lp1, and so on. Now click `Next'.
  5. Next, choose the driver for your printer from the given list; you will find drivers for most common printers. Different brands of printers (like HP, Canon, Epson, etc.) are listed; choose the appropriate brand and expand it by clicking the arrow at the left side of the brand name or by double-clicking the brand name.
  6. Choose your printer from the list and expand it to find the driver. You may find that more than one driver has been written for your printer by different people. Generally, choose the driver supported by your printer's brand; e.g. the driver named "hpijs" is supported by HP, and although other drivers also work, "hpijs" works better.
  7. Next click `Finish' and you will be back at the main window. Now click `Apply' and then choose `Save Changes' from the File menu (i.e. `File > Save Changes').
  8. Now choose `Restart lpd' from the File menu (i.e. `File > Restart lpd'). This restarts the printer daemon, and your system will be ready for printing.
  9. You can test whether the installation was successful by selecting `Test' from the main menu.
If you have trouble setting up the printer, you can use The Linux Printing HOWTO for help in troubleshooting.
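The wizard's test page exercises the whole printing chain at once. If it fails, it can help to test the hardware connection on its own; one rough check (assuming a parallel-port printer on /dev/lp0, and run as root) is to send plain text straight to the device, bypassing lpd entirely:

```shell
# Send raw text straight to the parallel port, bypassing lpd.
# If this prints, the hardware and kernel are fine and the
# problem is somewhere in the print queue configuration.
if [ -w /dev/lp0 ]; then
    echo "Hello from Linux" > /dev/lp0
    echo "Test text sent to /dev/lp0 - check the printer"
else
    echo "/dev/lp0 not writable - try running as root, or check the port"
fi
```

Many printers buffer a partial page; press the printer's form-feed button if nothing appears.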

3. Installing and Managing Software with RPM

3.1 What is RPM?

RPM stands for Red Hat Package Manager, and is an easy to use and widely adopted tool for installing, deleting, upgrading, querying and building software packages. There are other systems in use (Debian's DEB for example), though RPM is by far the most popular, and is what you'll get with Red Hat, Mandrake, SuSE and some others.

3.2 What is a Software Package?

Under GNU/Linux, programs are often distributed as single files known as RPM packages. These packages contain the actual program files, their documentation or manual pages, a summary of what the program does, menu entries and icons, plus information on where each file in the package should be installed. The package also contains information on any other files required to run the program (dependencies), disk space required and so on. It's not unusual for a program to consist of a hundred or more files, so you can see that packaging them all into a single RPM file greatly simplifies the tasks of adding or removing programs. When you install an RPM package, it is uncompressed and broken down into its individual files, which are then put into their correct places. RPM also checks to see whether other files necessary to run the new package (dependencies) are present. Another feature of RPM is the building and maintenance of a database of all packages installed on your computer. This means that you can quickly check to see which packages are installed, the files belonging to a particular package, or the package that provides a particular file.

3.3 Using RPM

You can use RPM from the command line, or if you prefer point'n'click, there are several graphical tools available. KDE has a particularly good one named kpackage; there are similar apps for users of other desktop environments, and distribution builders such as Mandrake have their own RPM front ends. I tend to use kpackage for removing un-needed packages to free up disk space, and the command line for everything else; but it doesn't really matter which tool you use. There are significant advantages to becoming familiar with using RPM from the command line: firstly, it will be available on any RPM based system you may encounter, regardless of the desktop environment installed, plus it allows you to manage packages on machines that can't or don't run X. The ability to use wild cards (e.g. *) to install multiple packages from a group at once is another feature of the command line - for example: rpm -ivh mysql*

Just remember that, generally speaking, you'll need to be running as root to install, upgrade or delete packages. However, any user can run queries.

3.4 Installing Packages

In all the examples below, we'll use the Mozilla web browser package as a sample. To install it, first navigate into the directory holding the package, either from the command line or a graphical file manager. This directory might be one of your Linux installation CDs, or your home folder if you've downloaded the package. Using the command line, you would type:
rpm -ivh mozilla-0.9.8-10mdk.i586.rpm (your version of Mozilla might be different..)
Note that you only need to type in the complete file name when you are working with a package that is not yet installed, otherwise the basic package name (in this case "mozilla") is enough. And don't forget the tab key to auto-complete the long file names. If you prefer graphical tools, clicking on the RPM file in most file managers (Konqueror for example) will open the appropriate tool, or you can right click and use the "Open With" dialog box. It's then just a matter of clicking the `Install' button.

3.5 Updating Packages

Updating an existing program with a later version is done in almost exactly the same way as installing. From the command line:
rpm -Uvh mozilla-0.9.8-10mdk.i586.rpm (note the uppercase "U")
Or click the `Update' button from kpackage or similar tools.

3.6 Downgrading Packages

What if you upgrade, and then find that you preferred the older version? You can use the `--oldpackage' option from the command line like this:
rpm -Uvh --oldpackage mozilla-0.9.8-10mdk.i586.rpm.

3.7 Uninstalling Packages

To uninstall from the command line:
rpm -e mozilla (the complete package name isn't required)

Or start your graphical tool, either from the menu or a terminal. You'll see a list of all the installed packages. Click on the package you want to remove, then click the `Uninstall' button. Note that if there are other packages installed that require files from the one you are deleting, a warning will appear and the uninstall won't go ahead. You can override this by using
rpm -e --nodeps mozilla (command line),
or selecting "Ignore Dependencies" (GUI tool), but be aware that this will break the other programs.

3.8 Querying packages

Listing all installed packages is easy. From the command line type:
rpm -qa
If the list is too big to view you can pipe it through less like this:
rpm -qa | less
Graphical tools will usually show the list of installed packages on start up.

3.8.1 Listing all of the files installed by a package

This is done with the rpm -ql command. Using mozilla again as an example we would type:
rpm -ql mozilla
Under a GUI tool, just select the package and then click on the "File List" (or equivalent) button. Listing all of the files supplied by a package that is not yet installed is done with the rpm -qpl command; this requires the complete file name.
Eg. rpm -qpl mozilla-0.9.8-10mdk.i586.rpm.

3.8.2 Listing a description of an installed package

This is done with the rpm -qi command. For example:
rpm -qi mozilla.
Using a GUI tool, just click on the desired package. To see a description and other information about a package not yet installed, use the rpm -qpi command. You'll need to use the complete file name for this one. For example:
rpm -qpi mozilla-0.9.8-10mdk.i586.rpm.
With a GUI, select the package, or just click on the file from within your file manager (eg. Konqueror)

3.8.3 To find the package to which a file belongs

This is done with the rpm -q --whatprovides command. Example:
rpm -q --whatprovides /usr/lib/mozilla/xpicleanup

3.8.4 To list all the packages a package depends on

Use rpm -qR like this:
rpm -qR mozilla
(For a package not yet installed use rpm -qpR, with the full file name )

3.9 Resolving Dependency Problems

The most common problem encountered when installing software is an unsatisfied dependency. You might be already familiar with this problem if you've installed new software under Windows and then found it refuses to run, with a "missing ***DLL" error message.

GNU/Linux is subject to the same problems, except that RPM will advise you of the problem before the program is installed. Many problems can be avoided when you install Linux - selecting Gnome and KDE for installation will help, even if you don't intend to run them, as many other programs use the same libraries.

So what do you do when RPM complains that a package can't be installed because of missing packages or files? Write down the missing package/file names, and check your installation CD-ROMs for packages with similar names to the ones required. You can use the rpm -qpl command to view the files supplied by a not-yet-installed package. Often it is just a matter of installing these packages to resolve the problem. Sometimes, though, this leads to even more dependencies, so it can be a rather lengthy process.
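This hunt can be automated when the candidate packages all sit in one directory (a mounted installation CD, for instance). Here is a rough sketch; the CD path and the file name libfoo.so are placeholders for your own mount point and whatever file RPM reported missing:

```shell
# Search every package in a directory for the one that supplies a
# missing file. MISSING and the path below are placeholders.
MISSING="libfoo.so"
for p in /mnt/cdrom/RedHat/RPMS/*.rpm; do
    [ -f "$p" ] || continue                   # glob matched nothing
    if rpm -qpl "$p" 2>/dev/null | grep -q "$MISSING"; then
        echo "$MISSING found in $p"
    fi
done
```

Each matching package name is printed; install those packages and re-try the original install.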

3.10 Using RPMFind and RPMBone

There is another extremely useful tool for finding files and packages, and that's the RPMFind website. Type the name of the needed file or package into the box on the main page and click on the search button; you'll then be presented with some relevant information and links to the package. Often you'll already have the package on your Linux CDROM - using the information from RPMFind you'll know which one to look for. RPMFind can provide a list of dependencies for a package; the file names in this list link back to their parent package. Usually, there is also a link to the package's home site.

RPMBone is another site that's useful for finding and downloading RPM packages, and is somewhat similar to RPMFind. RPMBone has a more flexible search function; you can narrow down your search to only give results for a certain distribution or architecture for example. It also provides links to a huge number of ftp servers for downloading. If you need to find a package containing a particular file to satisfy a dependency, however, RPMFind should be your first stop.

3.11 Circular Dependencies

Occasionally, you might come across a circular dependency. This is when package A won't install because package B is missing, but when you try to install package B, RPM complains that package A is missing. What you do here is use the `--nodeps' option. For example:
rpm -ivh --nodeps mozilla-0.9.8-10mdk.i586.rpm.
If you are using a GUI tool, click on the "Ignore Dependencies" button.

3.12 Library Version Problems

Sometimes a package will refuse to install because it requires a library file version later than the one already installed. This is easily fixed by upgrading the package to which the file belongs. While library files are usually backwards compatible, occasionally a package will refuse to install because a certain version of a file is missing, even though a later version is present. While you could downgrade the library package, this might well break other programs. Try creating a symbolic link with the name of the older library file that points to the existing newer version. If for example the package you are trying to install insists on having foo.so.3, and you already have foo.so.4 installed in /usr/lib, do this (as root): ln -s /usr/lib/foo.so.4 /usr/lib/foo.so.3 (with ln -s, the existing file comes first, then the name of the link to create).
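If you'd like to rehearse this trick before touching /usr/lib, the same commands work in a scratch directory; foo.so.3 and foo.so.4 are the hypothetical library names from the paragraph above:

```shell
# Rehearse the symlink trick in a scratch directory before doing it
# for real in /usr/lib as root. foo.so.4 stands in for the newer
# library you already have installed.
mkdir -p /tmp/libdemo
cd /tmp/libdemo
touch foo.so.4                 # pretend this is the installed library
ln -sf foo.so.4 foo.so.3       # old name -> existing newer file
ls -l foo.so.3                 # shows: foo.so.3 -> foo.so.4
```

The -f flag lets you re-run the command safely if the link already exists.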

3.13 Automatic Dependency Resolution Tools

Automatic dependency resolution tools are available with some distributions. Mandrake for example has `urpmi', Red Hat has `up2date', while Ximian has Red Carpet. There are also tools like apt4rpm. These can automatically download and install required packages for you. See your distribution's documentation or the relevant website for details.

3.14 Miscellaneous

3.14.1 RPM Version Incompatibilities

You probably won't have to worry about this unless your Linux installation is fairly old. Earlier Linux distributions were packaged with version 3.x of RPM, and are unable to handle the later 4.x series of packages. The exception to this seems to be version 3.0.5; you could update RPM to this version, or just replace your distribution with something newer. The RPM 4.x series is backwards compatible with the older series.

3.14.2 Midnight Commander

Occasionally you might want to copy files from an RPM package without actually installing it. You can do this with a file manager known as mc (for Midnight Commander). Despite being somewhat ugly, it is extremely capable. It is supplied (though not always installed by default) with most distributions, and can be started from a terminal window by typing mc. You can then navigate around the package as if it was a normal folder, and copy individual files from it.

3.14.3 Learning More

This article only covers the bare basics of RPM; if you'd like to learn more you could read the RPM manual page (type man rpm), or follow the links below:
RPM One Liners - A concise guide by Brian Jones, worth downloading and printing for reference.
The RPM HOWTO - The "official" HOWTO at the Linux Documentation Project.
Maximum RPM - An extremely thorough guide to just about anything that can be done with RPM. (All these resources were used in the writing of this article)

4. Playing and Recording Music

It's easy to enjoy music with Linux, whether you are playing an audio CD, or mp3 or OGG files you recorded yourself onto CDR or hard disk. You can download tracks, or copy them from your own audio CDs. While there are plenty of tools for audio work under Linux in both command line and GUI form, I'll be mainly concentrating on the command line, as these tools are available on nearly all Linux distributions. Familiarity with the command line tools will also make configuring GUI programs much easier. I'll assume you already have a sound card installed and working.

*Warning*
Breach of copyright is taken very seriously in most parts of the world - this article in no way encourages users to break the law.

4.1 The Basics

Since much of this story involves CDs, perhaps we should start with a brief look at both audio and data CDs.

Ordinary audio CDs like the ones you'd play in your home stereo differ from data CDs in that the music is recorded onto the disk as raw data, that is, there is no file system on the disk. That's why if you put an ordinary audio CD into your CD drive and try to read the contents in a file manager, you won't find anything. Your computer is looking for a file system where there is none. An audio CD doesn't need to be mounted to be read or burnt - unlike data disks.

Data CDs on the other hand use a file system to organize the way in which the data is written to and read from the disk, similar to the file system on a hard disk. Music files in formats such as .mp3, wav, or ogg are written onto data CDs using a file system just like any other CDROM. These CDs can be opened in a file manager or from the command line, and the music played using the appropriate program.

4.2 Playing Audio Cds

There are several GUI tools available for playing audio CDs. For example Gnome has gtcd, KDE has kscd, and xmms can also play CDs if you have the audio CD plugin enabled. These can be started from the Multimedia section of your menu. From the command line you could try the cdplay program, though it's not very intuitive. Read the manual page (man cdplay) to find out more. Or you can simply use the `play/skip/stop' buttons on your CD drive to play audio CDs.

4.3 Playing MP3 Files

The mp3 format is a hugely popular way of storing and sharing music. One reason for its popularity is its compact size compared to other formats or conventional audio CDs. A typical mp3 file is usually only about a tenth of the size of the same file in .wav or audio CD form. This means you can fit the equivalent of ten audio CDs on a single CD using the mp3 format. Other advantages include the reduction in space used when storing music on a hard disk, and the smaller file size also makes transferring files over a network much more practical. The disadvantage is that mp3 CDs can't be played on most normal CD players (although mp3 compatibility is starting to appear on some portable Walkman type players), so you can only play them on your computer. The most popular player for Linux is probably xmms, an excellent clone of the Windows Winamp player.

4.4 Using XMMS

Xmms (X MultiMedia System) is a widely used multi purpose sound file player that is included with most common distributions. It's mostly used for playing mp3 files, but it can do much more than that. It is also capable of playing wav files, ogg-vorbis files (an open source alternative to mp3), streaming audio etc. Starting xmms can be done from the menus (look under "Sound" or "Multimedia"), or from a command line just type xmms. The interface is much like that of a CD or tape player, with buttons and sliders to control starting, stopping, pausing, skipping, repeat, volume, balance and so on. It also includes an equalizer function (the `eq' button) and allows you to set up play lists. To choose a track to play, hit the L key or press the eject ("^") button. This brings up a window allowing you to navigate to the folder holding your music files. Once there, you can select a track or tracks to play, or you can choose to play every file in the folder. As well as the audio options, xmms also has visual options, and different skins can be selected to change the appearance of the player. It can even use Windows Winamp skins. Despite the multitude of options, xmms is exceptionally easy to use. If you want to explore its options and capabilities, click on the small O on the left hand side of the display.

4.5 Recording (or ripping) Tracks from Audio CDs

There are several tools for recording audio CDs to hard disk. You can record a single track, selected tracks or the entire CD at once. The music will be converted to a file format that can be read by your computer (usually .wav) as it is recorded. While there are both command line and graphical tools for the job, my favorite is the command line program cdparanoia. If you prefer GUI tools, you might like to check out grip. One of the things I particularly like about cdparanoia is the way it can correct jitters or skips on marked or scratched disks. Here are some examples of how to record tracks from an audio CD using cdparanoia:
To record a single track type:

cdparanoia n

`n` specifies the track number to record. By default the track will be recorded to a file named cdda.wav. If cdda.wav already exists it will be overwritten, so be careful if you are recording several tracks! You can specify your own file name like this:

cdparanoia n filename.wav

To record the entire CD type:

cdparanoia -B

The -B in the above command simply ensures that the tracks are put into separate files (track1.wav, track2.wav etc.). Cdparanoia has many more options and an easy to understand manual page; type man cdparanoia to read it.
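If you rip a lot of CDs, renaming the numbered track files by hand gets tedious. Here is a sketch of a loop that renames track1.wav, track2.wav and so on from a titles.txt file (one song title per line, in track order - that file name is my own convention, not something cdparanoia produces):

```shell
# Rename track1.wav, track2.wav ... to real song titles taken from
# titles.txt (one title per line, in track order).
if [ -f titles.txt ]; then
    i=1
    while read title; do
        [ -f "track$i.wav" ] && mv "track$i.wav" "$title.wav"
        i=$((i + 1))
    done < titles.txt
else
    echo "no titles.txt here - nothing to do"
fi
```

Run it from the directory holding the ripped tracks; any track without a matching title line is simply left alone.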

4.6 Converting .wav files to mp3

If you intend to burn files to an audio CD, you should leave them in .wav format. On the other hand, if you want to play them from your hard drive, or burn a data CD that you'll play from your computer, you'll probably want to convert them to the mp3 format to save space. One of the most popular tools for this is bladeenc. To convert a .wav file into a .mp3 use this command:

bladeenc filename.wav

This will produce a file with the same name as the source file, but with the .mp3 suffix. If you want to specify a destination filename you can add it to the end like this:

bladeenc filename.wav filename.mp3

By default, bladeenc will encode the file at 128 kbit/sec; this is the most commonly used bitrate and results in a very compact file of reasonable quality. Higher rates can be specified, giving better sound quality at the expense of a slightly bigger file, though it's hard to detect any improvement in sound quality using bitrates above 160 kbit/sec. To convert a file at 160 kbit/sec use:

bladeenc -160 filename.wav
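A whole directory of .wav files can be encoded with a shell loop. The version below is a dry run that only prints each bladeenc command; delete the leading echo (and have bladeenc installed) to encode for real:

```shell
# Dry run: print the bladeenc command for every .wav file in the
# current directory. Remove the leading "echo" to actually encode.
for f in *.wav; do
    [ -e "$f" ] || continue        # no .wav files present
    echo bladeenc -160 "$f"
done
```

Previewing the commands first is a cheap way to catch surprises (odd file names, the wrong directory) before a long batch encode.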

4.7 The Ogg-Vorbis Format

Ogg Vorbis is a completely free and open alternative to the mp3 format. The sound quality is at least as good as mp3, and ogg files can be played on players such as xmms. You'll need the vorbis-tools package (check your distribution's installation CDs) to convert .wav files to .ogg. Converting is easy:

oggenc filename.wav

As with bladeenc, the sampling rate (and sound quality) can be specified. This is done by using the following command:

oggenc -q n filename.wav (where n is the quality level)

The default level is 3, but it can be any number between 1 and 10. Level 5 seems to be roughly equivalent to an mp3 encoded at 160 kbit/sec.

4.8 Converting .mp3 files into .wav format

Audio CDs are usually burned from a collection of .wav or .cdr files - you can't directly burn mp3s to an audio cd unless you convert them to one of these formats. The mpg123 program can do this for you and is often installed by default with many distributions. To convert an .mp3 to a .wav, type:

mpg123 -w filename.wav filename.mp3 (note - the destination filename comes first)

Note also that there is some slight loss of sound quality when a .wav file is converted to mp3 format, and this isn't regained when converting back to .wav - so if possible, you should try to use .wav files that have been ripped from an audio CD rather than converting back from mp3s.
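A whole directory of mp3s can be converted the same way, using the shell's ${name%suffix} substitution to derive each .wav name. Again this is a dry run that just prints the commands; remove the echo (and have mpg123 installed) to actually convert:

```shell
# ${f%.mp3} strips the ".mp3" suffix so ".wav" can be appended:
f="song.mp3"
echo "song.mp3 becomes: ${f%.mp3}.wav"

# Dry-run batch conversion; remove "echo" before mpg123 to run it.
for f in *.mp3; do
    [ -e "$f" ] || continue        # no .mp3 files present
    echo mpg123 -w "${f%.mp3}.wav" "$f"
done
```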

4.9 To Normalize a group of .wav files

Creating an audio CD using tracks from different sources can lead to variations in volume amongst the tracks. By using a program named normalize, you can equalize the volume level of a group of files. You'd normally do this to a group of .wav files before burning them to CD. Normalize is a command line tool; to equalize the volume of all the .wav files in a folder type:

normalize -m /path/to/files/*.wav

4.10 Recording (or burning) an Audio CD

I'll assume that you have a CDR or CDRW drive installed and configured already - if you don't, see the links section at the end of this article for more information on set-up details. I'll also assume that you'll be using cdrecord to burn your disks - it's the most popular tool for this and is also what's used by most graphical front-ends like XCDRoast etc. Your files will need to be in .wav or .cdr format; most likely they will be .wavs. Put all the files you want to burn into a separate folder to simplify the burning process, and make sure that they will fit onto the disk (you can check by changing into the folder and running the du command). Now it's just a matter of typing this command:
cdrecord -v speed=4 dev=0,0,0 -audio -pad *.wav

Of course, your speed and device numbers might be different - you can use cdrecord -scanbus to find the device address, and the speed setting will depend on your CD burner's speed rating. In general, burning will be more reliable at slower speeds, especially on older machines.

4.11 Recording a Data CD (mp3 or ogg)

If you only plan to play a music CD on your computer or other mp3 capable device, you can burn mp3 or ogg files in exactly the same way as an ordinary data CD. Because data CDs use a file system, we'll use mkisofs (to create the file system) and cdrecord to burn the disk. As in the audio CD example above, put all the files into a separate folder. The two operations can be combined into a single command like this:

mkisofs -R /path/to/folder_to_record/ | cdrecord -v speed=4 dev=0,0,0 -

Don't forget the hyphen at the end! As in the example for burning audio CDs, you might have to use different speed and dev numbers. Older or slower computers might have difficulties running both mkisofs and cdrecord at once - if so you can do it as two separate operations like this:

mkisofs -R -o cdimage.raw /path/to/folder_to_record/

This creates an image named cdimage.raw. Then burn the disk:

cdrecord -v speed=4 dev=0,0,0 cdimage.raw (using suitable speed and device settings..)
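Before committing a blank disk, you can check the image you just made: a data-CD image can be loop-mounted (as root) and browsed like a normal disk. This is a sketch, assuming the image name from above and /mnt/tmp as a mount point:

```shell
# Loop-mount the ISO image (root required) and list its contents
# before burning, to make sure everything you expect is inside.
if [ -f cdimage.raw ]; then
    mkdir -p /mnt/tmp
    mount -o loop cdimage.raw /mnt/tmp
    ls /mnt/tmp
    umount /mnt/tmp
else
    echo "cdimage.raw not found - create it with mkisofs first"
fi
```

Don't forget the umount before burning, and remember to delete cdimage.raw afterwards - it's as big as the CD.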

4.12 Some detailed information on related topics:

The Linux MP3 HOWTO
The Linux MP3 CD Burning HOWTO
The SOX Homepage - The Swiss army knife of Linux sound tools.
The Normalize Homepage
Installing and Setting Up a CDR/CDRW - *Note* Modern desktop distributions can usually detect and set up a CD burner without any manual configuration. This page may be useful, however, for older/difficult distributions that require manual installation.
The OggVorbis Homepage
The Bladeenc Homepage
The CDRecord Homepage
The CDParanoia Homepage
The mpg123 Homepage

5. Linux in the Office

Office applications for Linux are now quite mature. Linux desktop productivity tools are in fact so capable and feature-rich that corporations are beginning to look at alternatives to MS Office with its high TCO, and are leaning towards office suites like OpenOffice and StarOffice. OpenOffice is in fact part of discussions being held between multiple companies, including Boeing Aeronautics, a major international technology contractor, to begin to define a standard for office document formats, allowing greater portability of documents between office suites; XML, of course, is being discussed as the most viable vehicle for the mission. But I digress. To sum it up: anyone who wishes to do all of their word processing, spreadsheets, and so on on the Linux desktop will find themselves very satisfied with today's applications.

5.1 Word Processing

Users' choices for word processing on Linux are varied and diverse. Listing every application would only begin to scratch the surface of what is available, so in an effort to simplify things I will review the applications with which I have experience and list a few more with which I do not. In addition, all of the word processors I have used in the past are Microsoft Word compatible - meaning that they can both read and write MS Word documents. This will come in handy for all those who are afraid they will never be able to open a '.doc' (Word document) once they move over to Linux.

5.1.1 StarOffice Star Writer

StarOffice is an office suite for Unix/Linux developed by Sun Microsystems. Until recently StarOffice was a freely available download, but with their newest version (I believe StarOffice 6.0) they have begun charging a license fee. I haven't used StarOffice 6.0, but I am familiar with the final release just before it. StarOffice is a functional office suite with many additional features. When you first open StarOffice you are presented with a screen very similar in appearance to the MS Windows desktop, complete with a 'Start' bar. StarOffice provides the full suite of functionality, including word processing, spreadsheets, email, and MS PowerPoint-like presentations. And again, any document written in StarOffice can be saved in the equivalent MS Office format, so you lose no compatibility with co-workers / family members / unconverted Windows-but-soon-to-be-Linux users. StarOffice, along with all the other applications I review here, provides an interface very similar to MS Word's, so there is little to no learning curve involved in using it. In fact the only real difference between the applications reviewed here and those in MS Office is how well they can read and write the MS formats. StarOffice does a very adequate job of processing MS Word documents. The only area where it runs into trouble is reading and writing MS Word documents that have tables embedded in them or that contain forms; if it is merely straight text, such as a report, there is typically no problem. But it is my unconfirmed suspicion that even this has changed now that you pay for StarOffice. Again, I have not checked this first hand, but I believe the reason Sun now charges for StarOffice is that they paid Microsoft for the APIs that allow StarOffice to read and write MS Office documents.
Up until now the formats have merely been reverse-engineered - a best guess at how to interpret the symbols in an MS Word document. For more information, and to confirm or deny my crazy allegations, check out StarOffice at http://www.staroffice.com.

5.1.2 OpenOffice Writer

OpenOffice is a spinoff of Sun's StarOffice (as the name may suggest). Like StarOffice, OpenOffice provides a suite of applications including word processing, spreadsheets, and MS PowerPoint-like presentations, and it also supports reading and writing MS Office documents. Recently I rewrote my resume (and being a Linux-only kinda guy I of course couldn't use MS Word) using OpenOffice. It used multiple fonts and font sizes, embedded tables to properly position the many elements of my resume, and bullets. After completing my resume I saved it in both the native OpenOffice format and the MS Word 2000 format. Of course, before shipping it out to employers I wanted to check how it would look in MS Word, so when I went to work the next day I opened it using (the very expensive) MS Word 2000. Much to my surprise, with the exception of some bullets, the resume had made it through quite well. All the tables were properly in place, the fonts were well represented in their multiple sizes, and the only thing wrong with the bullets was that the `>' arrows I had originally used had been replaced with round bullets (I guess MS Word didn't support the particular type of bullet I had specified). So OpenOffice (if you absolutely refuse to pay for software) will do very well for your office and word processing needs. More can be learned about OpenOffice (and you can download a copy from there too) at http://www.openoffice.org.

5.1.3 AbiWord

The only complaint I can really make concerning AbiWord can be summed up in one word - tables. While AbiWord does support tables, the interface and handling of tables has a long way to go. Otherwise AbiWord is much like StarOffice and OpenOffice: it reads and writes simple MS Word documents, has a very MS-like interface, etc. Another nice feature of AbiWord is its support of Gnome themes, a feature that neither OpenOffice nor StarOffice provides.

5.2 Others Word Processors

5.2.1 Kword

Kword is part of the KOffice office suite for the KDE desktop. It has all the usual bells and whistles: frames, numbering, bullets, tables, paragraph alignment, etc. However, from what I can see on the webpage, KOffice does not support reading and writing of MS Word documents. If you would like to learn more, point your browser to
http://www.koffice.org

5.2.2 Corel WordPerfect

WordPerfect was once the dominant word processor for PCs, and the latest available version for Linux is WordPerfect 2000. It is a fully featured application and is unusual in that it is not Linux native, but is essentially the Windows binaries running under a built-in version of Wine. For this reason, it may not be as stable or fast as some of the others. You can find out more about Corel WordPerfect 2000 at:
http://linux.corel.com/products/wpo2000_linux/index.htm

5.3 Spreadsheets

Spreadsheets are possibly the most widely used Office program, and as with word processors, Linux users have several quality offerings to choose from.

5.3.1 Gnumeric

Gnumeric is the GNOME project's spreadsheet, and it is a mature and stable program. Compatibility with MS Excel files is quite good, and Gnumeric is often installed by default with many distros, or at least is available on the installation CDs. A good choice for those who don't want to install a big, full-blown suite like Star/Open Office.

5.3.2 StarOffice/OpenOffice Calc

The Calc spreadsheet is another very competent office tool, with very good Excel compatibility. Possibly the best choice for heavy-duty spreadsheet users.

5.3.3 kspread

KOffice's spreadsheet, kspread, is a good-looking, powerful app; however, its Excel compatibility is somewhat limited, so if this is important to you, perhaps one of the others would be a better choice. Otherwise an excellent spreadsheet.

5.4 Other Office Applications

The number and quality of office-type apps has grown rapidly in the last couple of years. Some of these are listed below, along with a brief description and links. Most people will probably find these will meet their needs, though some may find they are dependent on certain features of Microsoft Office apps that just aren't available under Linux yet. For these users, a proprietary product known as Codeweavers Crossover Office allows MS Office (as well as some others) to be installed and run directly from Linux. I've only listed the more well known programs here, and some of these are probably already installed on your computer. If you need to install them, most of these packages can be found on your installation CDs; otherwise just follow the links. The KDE apps listed here are mainly included in the koffice package, while the GNOME programs are usually separate packages.

5.4.1 Address Books

GNOME has `gnomecard' (part of the gnome-pim package), KDE uses `kaddressbook'.

5.4.2 Fax Apps

There is `kfax' with KDE, gfax for GNOME. Programs like hylafax and mgetty+sendfax are also popular.

5.4.3 Email/PIM

Outlook users will probably be most interested in Ximian's Evolution, a fully featured email/PIM program. As well as email, it has address book, calendar and task-scheduling/alarm features.
There is also a proprietary add-on for Evolution named Connector, which enables Evolution to function as an MS Exchange client.

5.4.4 Drawing/Graphics

Dia is a structured diagrams program similar to Visio, while Sketch is a vector drawing package.
KDE has Kontour (another vector drawing tool), Kivio for flowcharts, and KChart for drawing charts/graphs.

5.4.5 Financial

Gnucash is a popular personal finance manager, though there are several others. And if you just can't survive without Quicken, you'll be pleased to know it will run under Linux using Codeweavers Crossover Office.

5.4.6 Database

PostgreSQL is included with distros such as Mandrake and RedHat, and there is also MySQL, a somewhat simpler database.
As well, there are databases such as Interbase, and Firebird, a free, open source version of Interbase.
The big names like Oracle and IBM (DB2 for Linux) support Linux too.

5.4.7 Presentation Apps.

The major office suites (StarOffice, OpenOffice, Applix, KOffice) all have functional presentation programs. The StarOffice and OpenOffice versions can handle MS PowerPoint format files.

5.4.8 Organizers

If you are looking for something a little lighter than Evolution, KDE has `korganizer', and GNOME uses `gnomecal' (part of the gnome-pim package).

5.4.9 Calculators

The RedHat distribution installs three calculators by default: Xcalc, GNOME Calculator and KCalc.

Xcalc is a scientific calculator desktop accessory that can emulate a TI-30 or an HP-10C. Xcalc can be started from a terminal emulator or from the Run dialog box by typing xcalc. It takes the following command line argument (among others):

-rpn
This option indicates that Reverse Polish Notation should be used. In this mode the calculator will look and behave like an HP-10C. Without this flag, it will emulate a TI-30.
GNOME Calculator is a double precision calculator application. GNOME Calculator is included in the gnome-utils package, which is part of the GNOME desktop environment. It is intended as a GNOME replacement for xcalc. To run GNOME Calculator, select gcalc from the Utilities submenu of the Main Menu in GNOME, or type gcalc on the command line in a terminal emulator or Run Program dialog box.

KCalc can be started by typing kcalc on the command prompt or in the Run Program Dialog box.

5.4.10 PDF Files Viewer

PDF (Adobe's Portable Document Format) is a format for transferring documents with their formatting (including fonts, sizes, etc) intact, plus a few extra features (such as URLs). It is quite a common format for publishing documents - it is generally quite difficult to edit such a document, but relatively easy to display it, as it already contains an exact definition of the document (somewhat similar to PostScript). There are a number of viewers for PDF documents under GNU/Linux.

5.4.10.1 XPdf

`xpdf' supports most of PDF's features, including LZW-compressed images, URLs and encryption. It can be started from the command prompt by typing xpdf. xpdf's home is at http://www.foolabs.com/xpdf/ . It also comes with a number of distros, including RedHat, Mandrake and SuSE.

5.4.10.2 Adobe Acrobat Reader

This is not free software (although it can be used free of charge for non-commercial use). Free software gives you permission to use, copy, study and improve the software. You can learn more about Free Software here.

You can get Adobe Acrobat Reader from: http://www.adobe.com/products/acrobat/readstep2.html

5.5 Links

It's impossible to cover all the available office type programs in just a few paragraphs; if you need to know more try the links below:

The Linux-Office Site is a very useful resource for Linux office apps.
The KOffice website
The Gnome-Office website
Codeweavers Crossover Office can run Windows apps like MS Office, Lotus Notes and others under Linux.

6. Gaming With Linux

Okay, you've got your word processor and spreadsheet set up under Linux, plus a web browser, an email client and a million other boring programs. But what about the important stuff? Where are the games? Most people probably wouldn't think of Linux as being a gamer's platform, and it's true that the real hard-core gamer might need to stick with a dual boot system for the time being at least. But for the rest of us, Linux can offer a great environment for playing games. There are plenty of good ones on offer, and accelerated 3D is no longer a pain to set up for many common cards. And now that lots of Windows games are playable using emulators like WineX, we've never had so much choice.

6.1 Where to Get Them

Most distros come with a variety of games, and you probably have some already installed. Look in your GNOME or KDE menus under "Games" or "Amusements". If you don't have any installed, check your installation CDs for packages named "kdegames" and "gnome-games". These packages include a wide variety of games ranging from Arcade style games (Tetris and Jezzball clones, Snakerace etc.), board games (Chess, Mahjongg, Reversi and so on), to card games, plus games to test your strategic skills and much more. As well as the ones in the KDE and GNOME packages, some distros include others like Maelstrom, Bzflag (a popular tank game), FrozenBubble (one of my favourites), and even 3D games such as Tuxracer and Chromium. Browse the package directory of your distro CDs to see what's available. There are also lots of games freely available from the internet, plus some commercially produced/ported titles for sale.

6.2 Commercial Games

A few companies make or have made games available to Linux users. Perhaps the best known of these was Loki, who are sadly no longer in business. Loki ported quite a few popular titles to Linux (QuakeIII Arena, HeavyGearII, Descent III etc), and you might even find some of them still available for sale. Probably the best way to find out what's available is to check out online stores like TuxGames.

6.3 Hardware and Software Requirements

Broadly speaking, games can be split into two groups: those that require accelerated 3D support, and those that don't. The first group would include 3D games such as QuakeIII, UnrealTournament, Tuxracer and so on, while the second group includes the 2D style of games such as those found in the GNOME and KDE game packages (and of course the old style text games would be in this group too). Games in the 2D group don't need anything special to run them; if you can run GNOME or KDE you'll have no problems. The 3D games however are much more fussy about what they'll run on; as well as having enough RAM and CPU power, you'll also need a Linux-supported 3D graphics card (or on-board chip). Individual game requirements vary widely, but as a rough guide, the recommended minimum for QuakeIII is a 233MHz CPU with an 8 meg graphics card and 64 meg of RAM. Keep in mind that this is the bare minimum required just to run the game; you'd probably need to double those figures to get reasonable performance.

Setting up 3D graphics with Linux used to be a bit tricky, but now many modern distros will set up the appropriate drivers during installation, giving accelerated 3D out of the box. When you are setting up your machine, keep in mind that it isn't the brand of graphics card you have that is important, but rather the brand of chipset it uses. In other words, you would use ATI drivers for a card with an ATI chipset, regardless of its brand. Currently, most Linux gamers seem to prefer nVidia based cards, and with good reason. NVidia write their own (closed source) drivers for Linux; these are easy to install and set up and their performance is generally on a par with their Windows counterparts. ATI based cards are also popular, and ATI have recently released unified drivers for Linux users with their higher end cards. Check out this site to see what cards are supported. As well as suitable hardware, you'll also want to use a recent version (>4.0) of XFree86. Later versions have much better 3D support, so if you are having problems an XFree86 upgrade should be one of your first steps.

6.4 Setting up NVidia Based Graphics Cards

As I mentioned earlier, nVidia based cards have become a favourite amongst Linux gamers. While these cards will usually work perfectly out of the box for normal 2D work, you'll probably have to install nVidia's drivers to get accelerated 3D. Some recent distros will install these for you during installation of Linux; even so, you might want to read on so you can update to the current drivers. These drivers are "unified", ie. the same drivers are used for all versions of nVidia based cards. Before you start, you should check that you are running a reasonably recent version of XFree86. There are two drivers that will need to be installed, the NVIDIA_kernel package and the NVIDIA_GLX package. The kernel package is available in several versions to suit most common distros; if there isn't one to suit your distro you can also get tarballs. And if you aren't sure which package to get there is a script you can download from nVidia that will advise you of the best package to use.

Once you've downloaded the packages, you should exit X (not strictly necessary, but it makes recovery easier if things go wrong..) and install the kernel package and then the GLX package. If you are upgrading rather than installing, nVidia recommend removing the old GLX package first instead of upgrading over it. Now all you need to do is edit a couple of lines in your XF86 configuration file (usually this will be /etc/X11/XF86Config-4). Assuming you already have an XF86Config file working with a different driver (such as the 'nv' or 'vesa' driver that is installed by default), then all you need to do is find the relevant Device section and replace the line:
Driver "nv" (or Driver "vesa")
with
Driver "nvidia".
In the Module section, make sure you have:
Load "glx"
You should also remove the following lines:
Load "dri"
Load "GLcore"
if they exist. Now restart X to use the new drivers. If you have any problems, check the XFree86 log file (named `/var/log/XFree86.0.log' or similar) for clues. Also read the documentation on the nVidia website and in the README file included in the NVIDIA_GLX package.
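Putting those edits together, the relevant parts of an XF86Config-4 file set up for the nVidia drivers would look something like the sketch below. The Identifier value is just an example; yours will differ, and the rest of your Device section (BusID and so on) should be left as it was.

```
Section "Device"
    Identifier  "NVIDIA GeForce"     # example name; keep your existing one
    Driver      "nvidia"             # was "nv" or "vesa"
EndSection

Section "Module"
    Load  "glx"
    # Note: no Load "dri" or Load "GLcore" lines in this section
EndSection
```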

6.5 Playing Windows Games With Linux

Some well known games produced for Windows have Linux binaries available (Return To Castle Wolfenstein etc). The Linux binaries allow you to install the data files from your Windows game CD, and then run the game directly from Linux. Some games include the Linux binaries on the CD (rare, but hopefully this will become commonplace), or you may have to download them.

Another way to run Windows games is to use an emulator like Wine, or WineX. The list of programs that will run well under Wine is growing steadily, though for gaming you'll probably be more interested in WineX by Transgaming. WineX is a commercial offshoot of the Wine project, and while Wine aims to enable Windows programs in general to be run under Linux, WineX focuses exclusively on games. Many Windows games install and play perfectly with WineX, including Max Payne, Warcraft III, Diablo II, The Sims etc. There is a list of games at the TransGaming website; however, I have found that there are some games not listed that will still play under WineX. Try searching Google for the name of the game plus winex for help on unlisted games. You can download the WineX source from the CVS tree for free, but compiling and configuring can be confusing for a newbie. Much better are the precompiled packages that are available to subscribers. Subscriptions cost US$5 per month, with a 3 month minimum. There are some other benefits to subscribers, though I think the binaries alone are worth the price.

6.6 Links

Obviously this has been no more than a very brief overview of Linux gaming; see the sites listed below for more info.

The Linux Gamers HOWTO - I can't recommend this one highly enough; if you are serious about gaming with Linux, read this doc!
Linux for Kids - This site has lots of links and info about games and educational apps. You don't have to be a kid to enjoy this stuff - adults will probably find some good stuff here too.
The Linux Game FAQ - A comprehensive list of Frequently Asked Questions about Linux gaming.
The Linux Game Tome - Definitely worth a look!
New Breed Software - Bill Kendrick and co. have written some good games, mainly for kids.
Racer - is a promising race car game with extremely good graphics and physics. Not finished yet, but still playable, and makes a nice change from the shooters.
Transgaming's WineX Homepage
LinuxGamers is another interesting game site.

7. The Post-Install Tune-up

Many Linux distributions install lots of stuff that many people never use. Often there are services or daemons set to run that aren't needed, and system configuration settings are conservative so as to run on the widest range of hardware. All these things can detract from your Linux desktop's performance, and that's what this article is about: getting more performance from your box by performing a post-install tune-up. You don't need to do all of these things, by the way; only those that you need or want to, the rest are optional. Be aware that if your box is reasonably configured to begin with (and most distributions are, out of the box), you are unlikely to get a dramatic improvement from any one of these things.

However, by doing some or all of them you should end up with a system that boots more quickly, has more disk space and slightly more free memory, and a small but noticeable improvement in performance. The one thing you can do that will have a profound effect on performance is run lightweight, efficient software, so you should make that your first priority when building a fast Linux desktop.

7.1 Disclaimer

I don't guarantee the accuracy of anything that follows, so use this information at your own risk. In other words, if by following this guide you trash your computer, don't blame me.

7.2 Before You Start

I'll assume you'll be running a SysV type system, as this is the most common and what you'll have if you are running a Red Hat-type distribution. SysV simply refers to the way services etc are started at boot time. If you are running some other system, you can still clean up the boot process; check your distribution's documentation for details. It's a good idea to browse through any documentation that came with your distribution anyway. This might be in the form of HTML files or a printed manual, and with many modern distributions is very comprehensive. The documentation should be able to provide you with details of any variations to the boot process used by your particular distribution, though I think the common distributions are pretty much all the same in this regard.

If this is a fresh installation, you should make sure all your hardware is properly configured first. Linux has really come a long way as far as hardware recognition goes, and chances are you won't have to do anything, though things like sound cards sometimes have to be setup manually. Once you are sure everything is going to work, you can continue with the tuning ....

7.3 The Boot-Up

We probably should start with a brief description of what actually happens when you boot your Linux box. You can skip this bit if you want to, but I think an understanding of what goes on at boot-up can often be helpful, so stick around ....

After the kernel is kicked into life by GRUB or LILO or whatever, the following steps occur (with possible minor variations):

  1. The kernel gets its own internal systems set up.
  2. The init program is started.
  3. init reads the `/etc/inittab' file. This file provides init with the default run level for the system (eg console, graphical, single-user etc.) Take a look at `/etc/inittab' yourself so you understand what the various run levels do.
  4. init then runs a script (usually `/etc/rc.d/rc.modules') to load auto loaded kernel modules.
  5. Depending on the default run level (from `/etc/inittab'), init then starts or stops all the services in that particular run level directory. For example, if runlevel 5 is the default according to `/etc/inittab', then all the scripts in `/etc/rc.d/rc5.d/' are run.
  6. init then runs another script (usually `/etc/rc.d/rc.local') . This is where the user can put stuff that he/she wants to be started automatically at boot-up. You might want to start your OSS sound driver from here for example. Users of older versions of Mandrake can edit this file to get rid of that damn ugly penguin ....
This is a bit over-simplified, but I hope you get the idea. If you take a look at the scripts in `/etc/rc.d/rc5.d' (or whatever your default runlevel is), you'll see that the names of the scripts all start with an S (for Start) or a K (for Kill, ie. stop) followed by a number. The number determines the order in which the scripts are run. Most distributions start a diverse range of services or daemons at boot time, and while this automatically covers the needs of the majority of users, it also means that there will probably be several processes started that aren't required. This results in longer boot-up times, increased memory usage, and more potential security holes. Stripping unneeded stuff from the start-up scripts is easy; the hard part is determining what does what, and what you do and don't need. Hopefully the listing below will be of some help; it notes some of the most commonly found services and gives a brief description of what they do. And don't forget to make backups or notes of your changes, just in case you find you really did need to have that daemon started after all .... (this list is courtesy of Stan and Peter Klimas' Linux Newbie Administrators Guide)
anacron
checks cron jobs that were left out due to down time and executes them. Useful if you have cron jobs scheduled but don't run your machine all the time--anacron will detect that during bootup.
amd
automount daemon (automatically mounts removable media).
apmd
Advanced Power Management BIOS daemon. For use on machines, especially laptops, that support APM.
arpwatch
keeps watch for ethernet/ip address pairings.
atd
runs jobs queued by the "at" command.
autofs
controls the operation of automount daemons (competition to amd).
bootparamd
server process that provides information to diskless clients necessary for booting.
crond
automatic task scheduler. Manages the execution of tasks that are executed at regular but infrequent intervals, such as rotating log files, cleaning up /tmp directories, etc.
cupsd
the Common UNIX Printing System (CUPS) daemon. CUPS is an advanced printer spooling system which allows setting of printer options and automatic availability of a printer configured on one server in the whole network. The default printing system of Linux Mandrake.
dhcpd
implements the Dynamic Host Configuration Protocol (DHCP) and the Internet Bootstrap Protocol (BOOTP).
gated
routing daemon that handles multiple routing protocols and replaces routed and egpup.
gpm
useful mouse server for applications running on the Linux text console.
httpd
daemon for the Apache webserver.
inetd
listens for service requests on network connections, particularly dial-in services. This daemon can automatically load and unload other daemons (ftpd, telnetd, etc.), thereby economizing on system resources. Newer systems use xinetd instead.
isdn4linux
for users of ISDN cards.
kerneld
automatically loads and unloads kernel modules.
klogd
the daemon that intercepts and displays/logs the kernel messages depending on the priority level of the messages. The messages typically go to the appropriately named files in the directory /var/log/kernel.
kudzu
detects and configures new or changed hardware during boot.
keytable
loads selected keyboard map.
linuxconf
the linuxconf configuration tool. The automated part is run if you want linuxconf to perform various tasks at boottime to maintain the system configuration.
lpd
printing daemon.
mcserv
server program for the Midnight Commander networking file system. It provides access to the host file system to clients running the Midnight file system (currently, only the Midnight Commander file manager). If the program is run as root the program will try to get a reserved port, otherwise it will use 9876 as the port. If the system has a portmapper running, then the port will be registered with the portmapper and thus clients will automatically connect to the right port. If the system does not have a portmapper, then a port should be manually specified with the -p option.
named
the Internet Domain Name Server (DNS) daemon.
netfs
network filesystem mounter. Used for mounting nfs, smb and ncp shares on boot.
network
activates all network interfaces at boot time by calling scripts in `/etc/sysconfig/network-scripts'.
nfsd
used for exporting nfs shares when requested by remote systems.
nfslock
starts and stops nfs file locking service.
numlock
locks numlock key at init runlevel change.
pcmcia
generic services for pcmcia cards in laptops.
portmap
needed for Remote Procedure Calls. Most likely, you need it for running network services such as NFS or NIS.
postfix
mail transport agent which is a replacement for sendmail. Now the default on desktop installations of Mandrake (RedHat uses sendmail instead).
random
saves and restores the "entropy" pool for higher quality random number generation.
routed
daemon that manages routing tables.
rstatd
kernel statistics server.
rusersd, rwalld
identification of users and "wall" messaging services for remote users.
rwhod
server which maintains the database used by the rwho(1) and ruptime(1) programs. Its operation depends on the ability to broadcast messages on a network.
sendmail
mail transfer agent. This is the agent that comes with Red Hat.
smbd
the SAMBA (or smb) daemon, which provides network connectivity services to MS Windows computers on your network (hard drive sharing, printers, etc).
squid
An http proxy with caching. Proxies relay requests from clients to the outside world, and return the results. You would use this particular proxy if you wanted to use your Linux computer as a gateway to the Internet for other computers on your network. Another (and probably safer at home) way to do it is to set up masquerading.
syslogd
manages system activity logging. The configuration file is `/etc/syslog.conf' .
smtpd
Simple Mail Transfer Protocol, designed for the exchange of electronic mail messages. Several daemons that support SMTP are available, including sendmail, smtpd, rsmtpd, qmail, zmail, etc.
usb
daemon for devices on Universal Serial Bus.
xfs
X font server.
xntpd
the Network Time Protocol (NTP) daemon; keeps the system clock synchronized with remote time servers.
ypbind
NIS binder. Needed if computer is part of Network Information Service domain.
Many users will find they have a lot of unnecessary stuff in their /etc/rc.d/rc*.d/ folders. If you aren't sure if you need something or not, just move it somewhere else temporarily (but don't delete it), re-boot and see how things go. If you find that you do need it, just move it back and re-boot.

7.4 How To Make The Changes

I usually just fire up a file manager and make a new folder in /etc/rc.d called JunkFromRc5 or something similar. Then I just drag the unneeded scripts from /etc/rc.d/rc5.d/ into the new folder (obviously my default runlevel is 5, you might need to use something different..). Alternatively, you can use a graphical tool like tksysv, or perhaps your distribution has its own tool. You might also want to edit `/etc/rc.d/rc.local'. Apart from users' entries, this often has a few lines that overwrite `/etc/issue' with some system specs (and/or that hideous penguin), and the contents of the `/etc/issue' file are displayed just before the login screen. Many people prefer to delete this bit and insert a line to display a fortune here instead, eg. /usr/games/fortune > /etc/issue. As usual, if you aren't sure what you are doing, make a backup copy of the file first.
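If you prefer the command line to a file manager, the move-aside approach looks like the sketch below. The S60lpd name is only a stand-in, and the example works in a scratch directory created with mktemp so it's safe to run anywhere; on a real system you'd do the same thing (as root) in /etc/rc.d itself.

```shell
# Park a start script somewhere safe instead of deleting it.
# This demo builds a throwaway copy of the rc directory layout.
RCDIR=$(mktemp -d)
mkdir -p "$RCDIR/rc5.d" "$RCDIR/JunkFromRc5"
touch "$RCDIR/rc5.d/S60lpd"                    # stand-in for a real start link
mv "$RCDIR/rc5.d/S60lpd" "$RCDIR/JunkFromRc5/" # service no longer starts at boot
ls "$RCDIR/JunkFromRc5"                        # the script is parked, not gone
# To restore it later:
# mv "$RCDIR/JunkFromRc5/S60lpd" "$RCDIR/rc5.d/"
```

Because the script is moved rather than deleted, putting it back (and re-booting) undoes the change completely.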

7.5 Re-Claiming Hard Disk Space

This bit is easy, if a little time consuming. I usually start by getting rid of unneeded software packages. Fire up your favorite package management tool (like kpackage) and spend some time browsing through the list of installed programs. Tools like kpackage are ideal for this kind of work as they can easily show you the size of each package, a summary (so you know what the package is for), and any related dependencies.

Do you really need six editors, four file managers, five shells, three ftp clients etc.? Don't be surprised to get rid of a hundred megs or more of stuff. Packages like the TeX-related ones, Emacs/XEmacs, and various emulators are never used by many people, yet they occupy lots of space. If you are doubtful about removing some packages, keep some notes so you can re-install them later if you have to.

Many distributions also install lots of documentation (check out /usr/doc or /usr/share/doc ). You'll probably find that there are only a few files in there worth keeping, and remember most of this stuff is available on the Web anyway. The du tool is invaluable for finding disk hogs. Also look for core files left over from crashes; these are only really useful to debuggers and can be deleted.
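The du technique mentioned above boils down to the pipeline below. The scratch tree is only there so the example is self-contained; for real use, point du at /usr/share/doc, /usr/doc, or wherever you suspect the hogs are hiding.

```shell
# du -sk prints a per-directory total in KB; sort -n orders them
# numerically, so the biggest disk hogs print last.
D=$(mktemp -d)
mkdir -p "$D/big" "$D/small"
dd if=/dev/zero of="$D/big/file" bs=1024 count=200 2>/dev/null
dd if=/dev/zero of="$D/small/file" bs=1024 count=10 2>/dev/null
du -sk "$D"/* | sort -n        # the largest directory is the last line
```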

7.6 Hard Disk Tuning

I've seen a few articles claiming huge performance gains from using hdparm, a command line tool for setting (IDE) hard disk parameters. The claimed gains are sometimes in the order of several hundred percent. While I'm not doubting these figures, I have to wonder whether they just indicate that the disk was horribly mis-configured to start with. I've tried using hdparm on a few disks, and have found modest gains in performance. Keep in mind that disk performance is only one factor in overall system performance, and even a fairly big jump in disk performance might not make a perceptible difference to the overall speed of your system. If a disk was noticeably slow, I would certainly give hdparm a try, but otherwise I wouldn't worry about it too much. Some common distributions already set some of the optimizing parameters at boot time anyway, so as I said before, unless you think there is a problem, you could probably just leave well enough alone. If you do decide to give it a go, make sure you read (and understand) the man page (at a terminal emulator type man hdparm), and be aware that with some of the adjustments there is a small but real risk that things can go spectacularly wrong, ie. corruption of the file system. If you'd like to give hdparm a try, here's the basic usage:
hdparm [-flag] device

Running hdparm without any flags (or with the -v flag) will display the current settings. To see the current settings for my first hard disk (/dev/hda) for example, I would use: hdparm /dev/hda. To do a basic check of the speed of the first hard disk I would use: hdparm -Tt /dev/hda. Some more commonly used flags:

-c3
Enables IDE 32 bit I/O support
-a [sectcount]
Get/set sector count for read ahead
-m16
Sets multi-sector I/O (in this example 16 sectors, you may need to experiment to find the optimal number for your disk)
-u1
Unmasks interrupts
-d1 -X34
Enable DMA mode2 transfers
-d1 -X66
Enable UltraDMA mode2 transfers
Read the man page for more options.

I guess the logical way to use hdparm would be to find out what your disk supports, then set hdparm accordingly. More commonly though, trial and error is used, changing one setting at a time and measuring the performance after each change. Don't use settings recommended by someone else; while they may have worked perfectly on that person's disk, your disk might be completely different and the results may not be good. There are several tools available for testing disk performance; one of the better known ones is bonnie. And remember the changes will be lost when you re-boot, so if you want to make them permanent, you'll have to add them to a boot script like `/etc/rc.d/rc.local'.
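Making a setting permanent is just a matter of appending the hdparm command to rc.local. The sketch below uses flags from the list above purely as an example, and writes to a mktemp stand-in for `/etc/rc.d/rc.local' so it's safe to run; on a real system you'd edit the actual file as root.

```shell
# Append an hdparm invocation to (a stand-in for) rc.local so the
# setting is re-applied at every boot. The -c3 -u1 flags and /dev/hda
# device are illustrative only -- use whatever you settled on.
RCLOCAL=$(mktemp)                     # stands in for /etc/rc.d/rc.local
echo '/sbin/hdparm -c3 -u1 /dev/hda' >> "$RCLOCAL"
tail -1 "$RCLOCAL"                    # show what was added
```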

7.7 Filesystem

Linux updates a last-access time attribute every time you open a file. If you are after all the speed you can get, and you are sure you don't need this feature, you can add noatime to the mount options listed in `/etc/fstab'. For example, if you do not wish to update last access times on the files in the /dev/hda5 partition (mounted here as the root filesystem), the fstab entry would read:
/dev/hda5  /  ext2  defaults,noatime  1 1
The change takes effect at the next boot, or you can apply it straight away with mount -o remount /.

7.7.1 Alternative Filesystems

You might have tried (or read about) alternatives to the traditional ext2 file system, and at present the most common seem to be ReiserFS and Ext3. These have some advantages over ext2, including quicker performance, so if you are about to start a new Linux installation you should certainly consider using the Reiser file system. However, as with hdparm, unless you are doing something unusually disk intensive, the gains are likely to be minor, and if your current system is doing the job I'd stick with it.

7.8 Kernel Recompilation

This is another one of those things that is often recommended in the Linux tune-up guides. While it may have been important years ago, it is probably of questionable value now that modular kernels are the norm. So unless you need to compile in some special feature, or you are using a pre-historic non-modular kernel (in which case you could probably benefit from updating your Linux installation), I wouldn't bother. Most recent distributions come with a variety of optimized kernels, and automatically install the one that best suits your system. Of course, you might want to recompile just for its sheer geek entertainment value, and I guess that's as good a reason as any.... I won't go into the details of kernel compilation here, check your distribution's documentation or the Kernel HOWTO for details.

7.9 Miscellaneous Tips

This article has only covered the very basic stuff - if you are interested in reading some much more detailed info about configuring Linux read the Configuration HOWTO.

If you are serious about tuning your Linux box, you'll need some benchmarking tools. To get started, take a look at this site: The Linux Benchmarking Project.

Obviously you'll be aiming to conserve memory as much as possible. Use the free command from a terminal emulator to see memory usage details. Ideally, you'll be able to balance usage against available memory so that swap isn't used.
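A quick way to check whether swap is actually being touched (this assumes the procps free tool, which virtually every distro installs; the "Swap:" line's third field is the amount of swap in use):

```shell
# If the used column ($3) of the Swap: line is zero, your workload
# is fitting entirely in RAM.
free -k | awk '/^Swap:/ { print ($3 == 0 ? "swap untouched" : "swap in use") }'
```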

You can save some memory by using a plain background on your desktop, rather than an image file.

Other useful tools are ps -aux (shows details of running processes), and top (similar to ps but continually updates).
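For instance, to spot the biggest memory users at a glance (the --sort option is from the procps version of ps found on Linux systems; other platforms may differ):

```shell
# Top ten processes by resident memory (RSS), biggest first;
# the header line makes eleven lines of output in total.
ps aux --sort=-rss | head -11
```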

Help reduce the time it takes X to update the screen on low-end machines by not using a greater colour depth than necessary, eg. use 16-bit instead of 32-bit. You can check X's performance with x11bench, which is often installed by default.

8. End Note

These are only a few of the chapters that we have covered in our HOWTO "Doing Things In GNU/Linux". You can get the complete HOWTO from here, or here. If you find any mistakes, please mail your suggestions to lunatech3007 at yahoo dot com. This HOWTO needs active participation from the readers, and I welcome suggestions, praises and curses. Feel free to ask for help on a topic - just check that your question isn't answered here first. If you don't understand any topic, please tell us, so we can explain it better. The general philosophy is: if you need to ask for help, then something needs to be fixed so you (and others) don't need to ask for help.

 

Raj Shekhar

[BIO] I have completed my Bachelor's in Information Technology from the University of Delhi. I have been a Linux fan since the time I read "Unix Network Programming" by Richard Stevens, and started programming in GNU/Linux in my seventh semester. I have been trying to convert people right, left and center ever since.

Anirban Biswas

[BIO] I am Anirban Biswas from Calcutta, India. I have been using Linux for 4 years (from RH 6.1 to RH 8.0, then to MDK 9.0). Currently I'm in the final year of computer engineering.

Jason P Barto

[BIO] I am from Pittsburgh, Pennsylvania and have been using Linux for 7 years. My first distro was Redhat 3 or something like that, back when configuring the X server was a real adventure. I'm currently an avid Slackware fan, and have been working in software development for Lockheed Martin Corporation for three years.

John Murray

[BIO] John is a part-time geek from Orange, Australia. He has been using Linux for four years and has written several Linux related articles.


Copyright © 2003, Raj Shekhar, Anirban Biswas, Jason P Barto and John Murray. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 88 of Linux Gazette, March 2003