Linux Gazette

October 1999, Issue 46 Published by Linux Journal


Visit Our Sponsors:

Linux Journal
InfoMagic
SuSE
Red Hat
LinuxMall
cyclades

Table of Contents:


TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
These files contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette, http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette,

Copyright © 1996-1999 Specialized Systems Consultants, Inc.

Contents

This FAQ is updated at the end of every month. Because it is a new feature, it will be changing significantly over the next few months.


Questions about the Linux Gazette

1. Why this FAQ?

These are the most Frequently Asked Questions in the LG Mailbag. With this FAQ, I hope to save all our fingers from a little bit of typing, or at least allow all that effort to go into something No (Wo)man Has Ever Typed Before.


2. Where can I find the HTML version of the Gazette?


3. Which formats is the Gazette available in?


4. Which formats is the Gazette not available in?

Other archive formats. We need to keep disk space on the FTP site at a minimum for the sake of the mirrors. Also, the Editor rebels at the thought of the additional hand labor involved in maintaining more formats. Therefore, we have chosen the formats required by the majority of Gazette readers. Anybody is free to maintain the Gazette in another format if they wish, and if it is available publicly, I'll consider listing it on the mirrors page.

Zip, the compression format most common under Windows. If your unzipping program doesn't understand the *.tar.gz format, get WinZip at www.winzip.com.
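
On Linux itself, tar can unpack these directly; a quick sketch (the filename is just an example):

    tar xzf lg-issue46.tar.gz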

Macintosh formats. (I haven't had a Mac since I sold my Mac Classic because Linux wouldn't run on it. If anybody has any suggestions for Mac users, I'll put them here.)

Other printable formats.

PostScript
Netscape's "print to file" routine will create a PostScript file complete with images.
PDF
I know Adobe and others consider PDF a "universal" format, but to me it's still a one-company format that requires a custom viewer--not something I'm eager to maintain. If you can view PDF, can't you view HTML?
Word
I'll be nice and not say anything about Word....

E-mail. The Gazette is too big to send via e-mail. Issue #44 is 754 KB; the largest issue (#34) was 2.7 MB. Even the text-only version of #44 is 146 K compressed, 413 K uncompressed. If anybody wishes to distribute the text version via e-mail, be my guest. There is an announcement mailing list where I announce each issue; e-mail with "subscribe" in the message body to subscribe. Or read the announcement on comp.os.linux.announce.

On paper. I know of no companies offering printed copies of the Gazette.


5. Is the Gazette available in French? Chinese? Italian? Russian?

Yes, yes, yes and yes. See the mirrors page. Be sure to check all the countries where your language is spoken; e.g., France and Canada for French, Russia and Ukraine for Russian.


6. Why is the most recent issue several months old?

You're probably looking at an unmaintained mirror. Check the home site to see what the current issue is, then go to the mirrors page on the home site to find a more up-to-date mirror.

If a mirror is seriously out of date, please let us know.


7. How can I find all the articles about a certain subject?

Use the Linux Gazette search engine. A link to it is on the Front Page, in the middle of the page. Be aware this engine has some limitations, which are listed on the search page under the search form.

Use the Index of Articles. A link to it is on the Front Page, at the bottom of the issue links, called "Index of All Issues". All the Tables of Contents are concatenated here onto one page. Use your browser's "Find in Page" dialog to find keywords in titles or authors' names.

There is a separate Answer Guy Index, listing all the questions that have been answered by the Answer Guy. However, they are not sorted by subject at this time, so you will also want to use the "Find in Page" dialog to search this listing for keywords.


8. How can I become an author? How can I submit my article for publication?

The Linux Gazette is dependent on Readers Like You for its articles. Although we cannot offer financial compensation (this is a volunteer effort, after all), you will earn the gratitude of Linuxers all over the world, and possibly an enhanced reputation for yourself and your company as well.

New authors are always welcome. E-mail a short description of your proposed article to , and the Editor will confirm whether it's compatible with the Gazette, and whether we need articles on that topic. Or, if you've already finished the article, just e-mail the article or its URL.

If you wish to write an ongoing series, please e-mail a note describing the topic and scope of the series, and a list of possible topics for the first few articles.

A wide variety of article types is always welcome.

We have all levels of readers, from newbies to gurus, so articles aiming at any level are fine. If you see an article that is too technical or not detailed enough for your taste, feel free to submit another article that fills the gaps.

Articles not accepted include one-sided product reviews that are basically advertisements. Mentioning your company is fine, but please write your article from the viewpoint of a Linux user rather than as a company spokesperson.

If your piece is essentially a press release or an announcement of a new product or service, submit it as a News Bytes item rather than as an article. Better yet, submit a URL and a 1-2 paragraph summary (free of unnecessary marketoid verbiage, please) rather than a press release, because you can write a better summary about your product than the Editor can.

Articles not specifically about Linux are generally not accepted, although an article about free/open-source software in general may occasionally be published on a case-by-case basis.

Articles may be of whatever length necessary; generally, our articles are 2-15 screenfuls. Please use standard, simple HTML that can be viewed on a wide variety of browsers. Graphics are accepted, but keep them minimal for the sake of readers who pay by the minute for on-line time. Don't bother with fancy headers and footers; the Editor chops these off and adds the standard Gazette header and footer instead.

If your article has long program listings accompanying it, please submit those as separate text files.

Please submit a 3-4 line description of yourself for the Author Info section on the Back Page. Once you submit this, it will be reused for all your subsequent articles unless you send in an update.

Once a month, the Editor sends an announcement to all regular and recent authors, giving the deadline for the next issue. Issues are usually published on the last working day of the month; the deadline is seven days before this. If you need a deadline extension into the following week, e-mail the Editor. But don't stress out about deadlines; we're here to have fun. If your article misses the deadline, it will be published in the following issue.

Authors retain the copyright on their articles, but distribution of the Gazette is essentially unrestricted: it is published on web sites and FTP servers, included in some Linux distributions and commercial CD-ROMs, etc.

Thank you for your interest. We look forward to hearing from you.


9. May I copy and distribute the Gazette or portions thereof?

Certainly. The Gazette is freely redistributable. You can copy it, give it away, sell it, translate it into another language, whatever you wish. Just keep the copyright notices attached to the articles, since each article is copyright by its author. We request that you provide a link back to www.linuxgazette.com.

If your copy is publicly available, we would like to list it on our mirrors page, especially if it's a foreign language translation. Use the submission form at the bottom of the page to tell us about your site. This is also the most effective way to help Gazette readers find you.


10. You have my competitor's logo on the Front Page; will you put mine up too?

All logos on the Front Page and on each issue's Table of Contents are from our sponsors. Sponsors make a financial contribution to help defray the cost of producing the Gazette. This is what keeps the Gazette free (both in the sense of "freely redistributable" and of "free of ads" :)). To recognize and give thanks to our sponsors, we display their logos.

If you would like more information about sponsoring the Linux Gazette, e-mail .


Linux tech support questions

This section comprises the most frequently asked questions in The Mailbag and The Answer Guy columns.


1. How can I get help on Linux?

Check the FAQ. (Oh, you already are. :)) Somewhat more seriously, there is a Linux FAQ located at http://www.linuxdoc.org/FAQ/Linux-FAQ.html which you might find helpful.

For people who are very new to Linux, especially if they are also new to computing in general, it may be handy to pick up a basic Linux book to get started.

Mailing lists exist for almost every application of any note, as well as for the distributions. If you get curious about a subject, and don't mind a bit of extra mail, sign onto applicable mailing lists as a "lurker" -- that is, just to read, not particularly to post. At some point it will make enough sense that their FAQ will seem very readable, and then you'll be well versed enough to ask more specific questions coherently. Don't forget to keep the slice of mail that advises you how to leave the mailing list when you tire of it or learn what you needed to know.

You may be able to meet with a local Linux User Group, if your area has one. There seem to be more all the time -- if you think you may not have one nearby, check the local university or community college before giving up.

And of course, there are always good general resources, such as the Linux Gazette. :)

Questions sent to will be published in the Mailbag in the next issue. Make sure your From: or Reply-to: address is correct in your e-mail, so that respondents can send you an answer directly. Otherwise you will have to wait till the following issue to see whether somebody replied.

Questions sent to will be published in The Answer Guy column.

If your system is hosed and your data is lost and your homework is due tomorrow but your computer ate it, and it's the beginning of the month and the next Mailbag won't be published for four weeks, write to the Answer Guy. He gets a few hundred slices of mail a day, but when he answers, it's direct to you. He also copies the Gazette so that it will be published when the month's end comes along.

You might want to check the new Answer Guy Index and see if your question got asked before, or if the Answer Guy's curiosity and ramblings from a related question covered what you need to know.


2. Can I run Windows applications under Linux?

An excellent summary of the current state of WINE, DOSEMU and other Windows/DOS emulators is in issue #44, The Answer Guy, "Running Win '95 Apps under Linux".

There is also a program called VMware, which lets you run several "virtual computers" concurrently as applications, each with its own operating system. There is a review of it in Linux Journal.


3. Do you answer Windows questions too?

Answers in the Tips or Answer Guy columns that relate to troubleshooting hardware might be equally valuable to Linux and Windows users. This is, however, the Linux Gazette... so all the examples are likely to describe Linux methods and tools.

The Answer Guy has ranted about this many times before. He will gladly answer questions involving getting Linux and MS Windows systems to interact properly; this usually covers filesystems, use of Samba (shares) and other networking, and discussion of how to use drivers.
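
For instance, listing the shares an NT server offers is a one-liner with Samba's client tool (the server name is a placeholder, and it may prompt for a password):

    smbclient -L ntserver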

However, he hasn't used Windows in many years, and in fact avoids the graphical user interfaces available to Linux. So he is not your best bet for asking about something which only involves Windows. Try one of the Windows magazines' letter-to-the-editor columns, an open forum offered at the online sites for such magazines, or (gasp) the tech support that was offered with your commercial product. Also, there are newsgroups for an amazing variety of topics, including MS Windows.


4. How do I find the help files in my Linux system?

The usual way to ask for a help page on the command line is the word man followed by the name of the command you need help with. You can get started with man man. It may help you to remember this if you realize it's short for "manual."
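
For example, two commands worth knowing (the keyword here is just an illustration):

    man man          # the manual page about the manual system itself
    man -k printer   # list man pages whose descriptions mention a keyword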

A lot of plain text documents about packages can be found in /usr/doc/packages in modern distributions. If you installed them, you can also usually find the FAQs and HOWTOs installed in respective directories there.

Some applications have their own built-in access to help files (even those are usually text stored in another file, which can be reached in other ways). For example, pressing F1 in vim, ? in lynx, or ctrl-H followed by a key in Emacs, will get you into their help system. These may be confusing to novices, though.

Many programs provide minimal help about their command-line interface if given the command-line option --help or -?. Even if these don't work, most give a usage message if they don't understand their command-line arguments. The GNU project has especially championed this idea. It's a good one; every programmer creating a small utility should have it self-documented at least this much.
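
For example, with the GNU fileutils version of ls:

    ls --help | less   # page through the option summary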

Graphical interfaces such as tkman and tkinfo will help quite a bit because they know where to find these kinds of help files; you can use their menus to help you find what you need. The better ones may also have more complex search functions.

Some of the bigger distributions link their default web pages to HTML versions of the help files. They may also have a link to help directly from the menus in their default X Windowing setup. Therefore, it's wise to install the default window manager, even if you (or the friend helping you) have a preference for another one, and to explore its menus a bit.


5. So I'm having trouble with this internal modem...

It's probably a winmodem. Winmodems suck for multiple reasons:

  1. Most of them lack drivers for Linux. Notice the term "most" and not "all" -- see http://linmodems.org for more about those few that do, and some general knowledge on the subject.
  2. Since they aren't a complete modem without software, even if they were to work under Linux, they'd eat extra CPU that could be better spent on other things. So they'll never seem quite as fast as their speed rating would imply.
  3. Internal modems have their own problems: they overheat more easily, and pose a greater danger of harming other parts in your system when they fail, merely because they're attached directly to the bus. The tiny speed increase they might lend is not worth the risk of losing other parts in the system.

    So, yeah, there can be good internal modems, but it's more worthwhile to get an external one. It will often contain phone-line surge suppression, which may lead to more stable connections as well.




    "Linux Gazette...making Linux just a little more fun!"


     The Mailbag!

    Write the Gazette at

    Contents:


    Help Wanted -- Article Ideas

    Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to [email protected]. Answers that are copied to LG will be printed in the next issue in the Tips column.

    Before asking a question, please check the new Linux Gazette FAQ to see if it has been answered there.


     Sun, 05 Sep 1999 04:09:46 PDT
    From: <
    Subject: Linux in Algeria

    I would like to thank all people who replied to me

    I am a third-world LINUX user. In my country there is only windoze as an OS; no one knows about the advantages of LINUX. Now I am going to set up a web site about LINUX, so please help me: any printed magazine, books, or free CD-ROM will be a great help. Actually, my video card is an SiS 5597, so please, if anyone can, send me a free LINUX distribution (especially RH 6, it's easy to install; I have RH 5.1) with XFree86 3.3.1 to support my video card.

    Sorry for my silly English, and keep up the good work.

    friendly mimoune

    my address is:
    MR djouallah mimoune
    ENTP garidi, vieux kouba algiers cp 05600
    algeria.


     Sun, 12 Sep 1999 15:18:59 -0400
    From: Jim Bruer <
    Subject: Postfix article

    I just installed Postfix on SuSE 6.1. It seems a much easier mailer to install than any of the others I've read about in your recent issues (which were great, BTW). There is an active newsgroup, and the author of the program responds VERY fast to stupid newbie questions. I speak from experience : ) From Debian newsgroup postings, it appears that Postfix is going to become their standard. Check it out; I'd love to see an article on it, since I'm trying to move beyond the newbie stage and really understand this whole mail business.


     Sun, 12 Sep 1999 22:00:49 +0200 (CEST)
    From: <
    Subject: VPN and Firewall with Linux

    Hi,

    We are considering investing in a firewall and VPN for our network at work (a 3rd world aid organisation). We haven't deemed it necessary until now, when we will upgrade to Novell NetWare 5, which is mainly TCP/IP-based. We have funds for investing in Novell BorderManager as well.

    However, we have been talking about having our own DNS server as well as a firewall. Also, we would like to be able to connect our offices around the world to the network by VPN. BorderManager has all these facilities, but for a price. Is there some comprehensive Linux resource (written, on-line, software) about these issues? Even a list of HOWTOs would be OK.

    TIA,
    Martin Sjödebrand

    [Would somebody like to write an article about connecting several local networks securely over the Internet? There must be some Gazette readers who have such a network running in their office. -Ed.]


     Thu, 9 Sep 1999 22:24:11 -0700
    From: Lic. Manuel Nivar <
    Subject: Descompiler

    Hi, My name is Manuel Nivar I dont know how to install a script for irc.chat because I don know how to descompiler. Please Help me.

    The Linux Gazette Editor wrote:

    Hi. This will be published in the October Gazette. I don't use IRC, so I'm afraid I can't personally offer any suggestions. What does "descompiler" mean? Do you mean you can't compile the program?

    Manuel replied:

    Yes I dont know how to descompiler

    [I'm still hoping somebody will write some articles about IRC. It is certainly a popular topic among Gazette readers. -Ed.]


     Thu, 02 Sep 1999 00:48:42 -0400
    From: Leslie Joyce <
    Subject: Printing lines of black

    Hi,

    As a newbie to Linux, I am having several problems with getting Linux to work properly. Mainly now, I am having a problem with my printer, an HP 693C. I can print, but on every line of copy (words), after the sentence ends, I get a line of solid black. As I am dual booting and my printer works in Win95, I am thinking this is a Linux driver problem. I am using the 550C driver. I am using Caldera 2.2, and I used the graphical interface to install the printer. Any thoughts, guesses or ideas as to where to go to find a solution?

    Thanks for your time and help
    Les.................


     Fri, 3 Sep 1999 10:30:24 +1000
    From: Binjala <
    Subject: Winmodems

    It wasn't until I had my own version of MS Windoze (albeit someone else has the registration, disc, etc.) that I realised how much I'd like to use something else... then I was shown Linux, so I've got Red Hat, but now I find I've got a Winmodem, an Eagle 1740 AGP VGA card, and a Creative Labs Sound Blaster PCI64. I realise the modem sucks; where can I find out if the others are compatible? Are they compatible? Can you recommend a replacement 56K modem? The guy who built this box for me has never used Linux, so he's not very useful. Help!

    Simon.

    [Any modem except a winmodem should work fine. If it says it works with DOS and/or Macintosh as well as Windows, it should be OK.

    See the Hardware Compatibility HOWTO for details.

    An index of all the HOWTOs is at www.ssc.com/mirrors/LDP/HOWTO/HOWTO-INDEX-3.html#ss3.1 -Ed.]


     Fri, 03 Sep 1999 12:44:59 +0800
    From: Jeff Bhavnanie <
    Subject: compiling network driver

    I've got the source code for my network card (SiS900). When I issue the compile command as described in the docs, I get no errors and the *.o file is created. But when I issue 'insmod sis900.o', I get a list of errors. I'm a complete newbie at compiling things.

    Can anyone else compile the source into object file for me? I'm running Mandrake 6.0.

    Thanks
    Jeff


     Fri, 03 Sep 1999 11:06:14 +0000
    From: Pepijn Schmitz <
    Subject: Help: printing problem.

    Hi,

    I'm having trouble getting a Solaris box to print on my Linux print server. The Linux box has a printer set up that prints to a Netware network printer. This works, I can print from Netscape, Star Office, etc. I've set up a network printer on a Solaris 7 machine that prints to this machine. This also works, for text files. But when I try to print a page from Netscape, nothing happens, and the following appears in my messages file (maas is my Linux machine with the Netware printer, amazone.xpuntx.nl is the Solaris machine):

    Sep  3 12:55:18 maas lpd[9788]: amazone.xpuntx.nl requests recvjob lp
    Sep  3 12:55:18 maas lpd[9788]: tfA001amazone.xpuntx.nl: File exists
    Sep  3 12:55:18 maas lpd[9789]: amazone.xpuntx.nl requests recvjob lp
    Sep  3 12:55:18 maas lpd[9789]: tfA001amazone.xpuntx.nl: File exists
    

    This repeats itself every minute. I checked, and the tfA001amazone.xpuntx.nl file really does not exist anywhere on my system. There is a cfA001amazone.xpuntx.nl file in /var/spool/lpd/lp however, and if I remove this file the next time around it says this in the messages file:

    Sep  3 13:00:27 maas lpd[9854]: amazone.xpuntx.nl requests recvjob lp
    Sep  3 13:00:27 maas lpd[9855]: amazone.xpuntx.nl requests recvjob lp
    Sep  3 13:00:27 maas lpd[9855]: readfile: : illegal path name: File
    exists
    

    This happens once. The next minute the four lines I gave earlier reappear, and the cfA001amazone.xpuntx.nl file has reappeared.

    I hope someone can help me out here, this has got me stumped! Thanks in advance for anyone who can shed some light...

    Regards,
    Pepijn Schmitz


     Fri, 3 Sep 1999 11:36:57 -0400 (EDT)
    From: Tim <
    Subject: 2gig file size limit?

    Greetings,
    I have a box on my network running RedHat 6.0 (x86) that is going to be used primarily for backing up large database files. These files are presently 25 gigs in size. While attempting a backup over Samba, I realized that the filesystem would not allow me to write a file > 2 gigs to disk. I tried using a large-file patch for kernel 2.2.9, but that only allowed me to write 16 gigs, and it seemed buggy even doing that. Doing an 'ls -l' would show me that the file size of the backup was about 4 gigs, but the total blocks in the directory, with no other files there, indicated a much higher number, like so:

    [root@backup ]# ls -l
    total 16909071
    -rwxr--r--   1 ntuser   ntuser   4294967295 Sep  2 19:45 file.DAT
    

    I am well aware that a 64-bit system would be the best solution at this point, but unfortunately I do not have those resources. I know BSDI can write files this big, as well as NT on 32-bit systems. I am left wondering, why can't Linux?

    Thanks in advance.
    -Tim


     Fri, 3 Sep 1999 18:03:22 +0200
    From: Service Data <
    Subject: Linux.

    Is it possible to find out what the latest version of Linux on the market is? Could it be that version 6.2 is out?

    I await your reply. Thank you.

    [Hi. Sorry, I don't speak Italian. "Linux" technically refers only to the kernel. The kernel is at version 2.2.12. We track the kernel version on the Linux Journal home page, www.linuxjournal.com. The original site is www.kernel.org.

    The distribution you buy in a store contains not just the Linux kernel, but a lot of software from a lot of sources. Each distribution has its own numbering system. Red Hat is at 6.0. SuSE just released 6.2. The other distributions have other numbers. We list the versions of the major distributions at www.linuxjournal.com, "How to Get Linux".
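
    By the way, you can check which kernel a given system is running from the command line; for instance (the output will vary):

        uname -r
        2.2.12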

    There are Italian speakers who read the Gazette; perhaps they can give a better answer than this. -Ed.]


     Sat, 4 Sep 1999 08:35:51 +0530
    From: A.PADMANARAYANAN <
    Subject: Reading Linux partitions from NT

    Dear sir, can you please tell me how I can access Linux partitions from Windows NT or 98? Is it possible? Please help me, or point me to any resources, man pages or URLs; I will work on it!

    Thanks in advance!
    sincerely
    Vijay
    Pune, India

    [There is a Windows 95 tool to do this, but I have forgotten its name. It wasn't in a very advanced stage the last time I looked at it. It would be easier to go the other way and have Linux mount your Windows partitions and copy the files there so that Windows can see them. Run "man mount" for details.
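
    A minimal sketch, assuming your Windows partition is the first one on the first IDE disk (yours may differ):

        mount -t vfat /dev/hda1 /mnt/windows

    Anything you then copy under /mnt/windows will be visible from Windows. -Ed.]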


     Sat, 4 Sep 1999 21:40:47 +0530
    From: Joseph Bill E.V <
    Subject: Chat server

    Dear sir,
    Is there any chat server for linux users to share their views

    Regards,
    Bill


     Sat, 11 Sep 1999 12:55:54 -0600
    From: Daniel Silverman <
    Subject: Linux Internet forums

    Do you know of any Linux Internet forums? If you do, I will be very grateful for their URLs.


     Sun, 05 Sep 1999 02:34:41 -0300
    From: Erik Fleischer <
    Subject: How to prevent remote logins as root

    For security reasons, I would like to make it impossible for anyone logging in remotely (via telnet etc.) to log in as root, but so far haven't been able to figure out how to do that. Any suggestions?


     Sun, 5 Sep 1999 23:27:16 +0200
    From: Michał N. <
    Subject: When RIVA TNT 2 drivers for XWindows ?

    When will XWindows work properly with VGA cards using the RIVA TNT 2 chipset? When I try to use my RIVA TNT, there are only 16 colors and very, very poor resolution.


     Mon, 6 Sep 1999 01:40:38 +0200
    From: Per Nyberg <
    Subject: Mandrake

    Hi, I'm thinking of changing to Linux, and I will buy Mandrake Linux. Is Red Hat better, or is it a good idea to buy Mandrake?


     Sun, 05 Sep 1999 19:24:28 -0600
    From: Dale Snider <
    Subject: neighbour table overflow

    I was running quite a long time with NFS and transmission stopped. I get:

    Sep  6 00:03:20 coyote kernel: eth0: trigger_send() called with the
    transmitter busy.
    

    I rebooted the machine I was connected to, and I get the following (part of the /var/log/messages file; not all error statements are shown):

    Sep  6 17:57:04 beartooth kernel: neighbour table overflow
    Sep  6 17:57:04 beartooth kernel: neighbour table overflow
    Sep  6 17:57:04 beartooth rpc.statd: Cannot register service: RPC:
    Unable to send; errno = No buffer space available
    Sep  6 17:57:04 beartooth nfs: rpc.statd startup succeeded
    Sep  6 17:57:04 beartooth rpc.statd[407]: unable to register (SM_PROG,
    SM_VERS, udp).
    
    df gives:
    Filesystem           1k-blocks      Used Available Use% Mounted on
    /dev/hda2               792800    628216    123619  84% /
    /dev/hda1               819056    369696    449360  45% /NT
    /dev/hda4              7925082   4892717   2621503  65% /home
    

    I can't find a reference to this error.

    Using RH 6.0 on an Intel Pentium III 500 MHz.

    Cheers
    Dale


     Mon, 06 Sep 1999 02:17:37 +0000
    From: Patrick Dunn <
    Subject: Parallel Port Scanners and Canon BJC-2000

    I have two questions...

    1) Does anyone have a driver written to work with parallel-port scanners? I have one of these dastardly things that I wish I hadn't bought, but the price was too good to pass up. It's a UMAX Astra 1220P.

    2) I have recently picked up a Canon BJC-2000 inkjet printer, and it will print in B&W under Linux using the BJC-600/4000 driver under Ghostscript 5.10 (Mandrake 6.0 distro). Is there a native driver in the works? Color printing on this printer can be problematic.

    Thanks, Pat


     Tue, 07 Sep 1999 21:59:49 -0500
    From: balou <
    Subject: shell programming

    Could you point me to a good source for shell programming? I would prefer to find something off the Internet that's free. I've tried multiple web searches, but usually just come up with book reviews and advertisements. If there are no free resources on the web, which book would you recommend for a relative novice at Linux with experience in BASIC, Logo, FORTRAN, Pascal, and the usual MS-DOS stuff?


     Wed, 08 Sep 1999 21:35:04 +0700
    From: Ruangvith Tantibhaedhyangkul <
    Subject: Configure X to work with Intel 810 chipset

    Hi again,

    I just bought a new computer. It has an "on-board" video card, an Intel 810 chipset, or something like that. I couldn't configure X to work with this type of card. First, I let Linux probe it; that failed. Then I looked at the list; of course, it wasn't there. Then I tried an unlisted card and configured it as a generic SVGA; it still failed. What to do now?


     Wed, 8 Sep 1999 16:57:17 +0200
    From: <
    Subject: Internet connection problem !

    Hi all

    I hope someone can lend some advice...

    I have a PII 350 MHz box with 64 MB RAM running RH 6.0. I am using KDE as a wm and am trying to set up a RELIABLE connection to my ISP. I am using a ZOLTRIX (Rockwell) 56K modem, and kppp to dial in to my ISP.

    My problem is that I can never connect consistently. In other words, today it works fine, but tomorrow it will throw me out... It seems to dial in fine, but when it tries to 'authenticate' my ID and password, it bombs out! It connects fine every time if I boot into Windoze 98.

    Does anyone have any ideas as to why this might be happening?

    Thanks in advance
    Regards
    Rakesh Mistry


     Fri, 10 Sep 1999 13:28:07 +1000
    From: Les Skrzyniarz <
    Subject: Loading HTML back issues

    I am using Win98 with IE5, and when I try to load the complete issues (e.g., Issue 42), it stops loading at some random point on the page, and as such I cannot save the complete issues (some, not all). Even when I come back to it again at a later time, the problem persists. The problem is not at my end, as I do not have this problem with any other page on the Internet. Can you offer a reason for this, or a solution?

    Thanks
    Les.S

    [Hi. This will be published in the October Gazette, and we'll see if any other readers are having the same problem. I have not heard any other complaints about this so far. I have not used Win98 or IE5, so I can't suggest anything directly.

    Which site are you reading the Gazette at? Can you try another mirror?

    You can try downloading the Gazette via FTP and reading it locally. See ftp://ftp.ssc.com/pub/lg/README for details.

    It may be related to the size of the file and a misconfigured router on the network between us and you. issue45.html is 428K. Are any of the other pages you visit that big? -Ed.]


     Fri, 10 Sep 1999 13:32:21 -0500
    From: root <
    Subject: (no subject)

    Hi! I have a question for you... Is there an utility like fsck but for Macintosh HFS File systems? I want to recover a damaged one due to power supply problems.


     Mon, 13 Sep 1999 10:22:31 -0700
    From: MaxAttack <
    Subject: Re: hello

    I was looking into CS software for Linux, and one of the tools I was looking for was software to graph the Internet, i.e., map out the registered users on the ARPAnet. I was wondering if you happened to have any information on this topic in any of your magazines.

    The Linux Gazette Editor wrote:

    No, I don't know of any such software.

    What is it you wish to do? Find out who is on each computer? The Internet doesn't really have a concept of "registered user", because the concept of "What is a user?" was never defined Internet-wide.

    In any case, you'd have to poll every box to find out what it thinks its current user is. But this identity has no real meaning outside the local network. For Windoze boxes it may be totally meaningless, because users can set it to anything they want. And how would you even find the boxes in the first place? Do a random poll of an IP range? That sounds like Evil marketoid or cracking activity. In any case, if the machines are behind dynamic IPs, as is common with ISPs nowadays, there's no guarantee you'll ever be able to find a certain machine again even if you did find it once.

    MaxAttack replied:

    I was thinking of just pinging all the registered users at some DNS databases over a period of time, and using some software to create a graphical user interface for it, or such.

    The Linux Gazette Editor asked:

    Are you talking about a network analyzing program like those products that show an icon in red or page the system administrator if a computer goes down?

    I assume by "user" you mean a particular machine rather than a user-person, since the DNS doesn't track the latter.

    MaxAttack replied:

    Hehe, sorry for the confusion. What I was trying to pass on was the notion of software that allows you to track down all the registered boxes on the Internet and graph them into a nice graphical picture, so it looks something like this. Hopefully this diagram helps:

                                      |---------------|
                                      | InterNIC      |
                                      |               |
                                      |---------------|
    
                                      /              \
                                     /                \
                                    /                  \
    
                             |---------------|      |---------|
                             | linuxstart.com|      |   blah. |
                             |               |      |   com   |
                             |---------------|      |---------|
                                    |
                                    |
                             |---------------|
                             |   */Any Sub   |
                             |     Domain    |
                             |---------------|
    


     Sun, 12 Sep 1999 17:19:10 -0400
    From: William M. Collins <
    Subject: HP Colorado 5GB

    Using Red Hat 5.2

    I purchased an HP Colorado 5GB tape drive on the recommendation of a friend. He helped install RH 5.2, and configured the Colorado using a program on the system named Arkeia. This friend has moved from the area.

    My questions are:

    1. How do I get to this program?
    2. How do I back up the configuration in the event of an emergency?

    Thanks
    Bill


     Tue, 14 Sep 1999 09:56:39 +0800
    From: a <
    Subject: program that play Video Compact Disk (VCD)

    I have RH 5.1. Is there any program that plays Video Compact Discs (VCDs)?


     Thu, 16 Sep 1999 20:34:14 -0400
    From: madhater <
    Subject: ahhhhh i heard that

    I heard that Linux will run out of space in 2026 because of some BS about how it counts in units and the hard drive will be filled. This is not true.... right?!

    [No. Linux, like most Unixes, has a "Year 2038 problem": the system clock counts the seconds since January 1, 1970, and that number will overflow a signed 32-bit integer on January 19, 2038.

    People generally assume we will all have moved to 64-bit machines by then, which have a 64-bit integer and thus won't overflow until some astronomical time thousands of years from now. If 32-bit machines are still common then, expect some patches to the kernel and libraries to cover the situation. (People will have to check their database storage formats too, of course.)

    I have never heard of any time-specific problems regarding i-nodes and disk space. A Unix filesystem has both a limit on the amount of data it can hold and on the number of files it can contain. The number of files is the same as the number of i-nodes, which is fixed at format (mkfs) time. Run "df -i" to see what percentage of i-nodes are used. Every file and directory (including every symbolic link) uses one i-node. (Hard links to the same file share the i-node.) For normal use it's never a problem. However, if you have a huge number of tiny files (as on a high-volume mail or news server), it may be worth formatting the partition with a larger-than-usual number of i-nodes. None of this has anything to do with the year, though.
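
    If you're curious about the exact rollover date, Perl (installed on nearly every distribution) can print the last second a signed 32-bit counter holds:

        perl -e 'print scalar gmtime(2**31 - 1), "\n"'
        Tue Jan 19 03:14:07 2038

    (That date is UTC.) -Ed.]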


     Fri, 17 Sep 1999 21:54:30 -0700
    From: Ramanathan Prabakaran <
    Subject: run-time error on cplusplus programme

    I edited the source code in Windows Notepad, compiled it with Cygwin32, and ran the program. The source code uses the fstream class; it is about file input/output. I created the input file in the same Windows Notepad, but the program does not open or read the contents of the input file.

    Help please

    [I haven't quite gotten to the point of banning Windoze questions in the Mailbag because it's hovering at only one or two per issue. But I'm starting to think about it.

    However, I do want to support the use of free/open source compilers on Windows, especially since the Cygnus ones are (ahem) "our" compilers. Are there any better forums for Cygnus-on-Windoze to refer people to? -Ed.]


     Sun, 19 Sep 1999 20:12:47 +0200
    From: David Le Page <
    Subject: Making Linux talk to an NT network

    I want to get Linux running on my PC at work, and talking to the NT network for file sharing and printer use. Okay, okay, I know the theory: get Samba up and running, read the manual, and make it all happen. But I'm not a networking guru, and I'm battling to understand Samba. And everything I read about it seems to be focused on getting Win machines talking to Samba servers, not the other way around. Can anyone tell me, in 10 Easy Steps, how to get this done?


     Mon, 20 Sep 1999 17:35:23 -0400
    From: Mahesh Jagannath <
    Subject: Netscape and Java

    I am running Netscape Comm 4.51 on Red Hat Linux 6.0. It crashes invariably if I load a site with any Java applet, etc. Is there something I am missing, or is this a known bug?

    Mahesh


     Mon, 20 Sep 1999 16:48:35 -0500
    From: <
    Subject: Modem noises

    Hi Folks,

    I know this is a nitpick, but for reasons I won't go into, it's keeping me from using Linux as much as I might. Is there a way to divert the modem noises to /dev/null? Hearing them was a help when I was debugging my connection, but now it's just a disturbance.

    Thanks,
    Jerry Boyd

    [Add "L0M0" to your modem's initialization string. One of these sets the volume to zero; the other says never turn on the speaker. (Of course I forget which is which, which is why I set both. Can you believe how many whiches are in that last sentence?) If there is no existing initialization string, use "ATL0M0" + newline. Each modem program will need this put into its configuration file. For PPP, this would be in your chatscript.

    I use "L1M1", which means (1) low speaker volume, (2) turn the speaker on only after dialing and off when the connection either succeeds or fails. -Ed.]


     Tue, 21 Sep 1999 08:44:19 GMT
    From: raghu ram <
    Subject: help

    sir, I am using apache web server on Linux machine.

    My problem is log rotation. To rotate logs, we should have a config file like the one given below:

    /var/log/messages {
        rotate 5
        weekly
        postrotate
            /sbin/killall -HUP syslogd
        endscript
    }
    

    config is over,but my problems is where should be setup.

    please help me
    Thanks

    Raghu

    The Linux Gazette Editor wrote:

    I don't understand what the problem is. What does "where should be setup" mean?

    Raghu replied:

    I don't known how to run the configfile?. I went to man logrotate,just he given configfile.
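
    [logrotate is not a daemon; it has to be run against a config file, normally from cron. A minimal sketch, assuming your stanza is saved as /etc/logrotate.conf:

        /usr/sbin/logrotate /etc/logrotate.conf

    Most distributions already run this from a script in /etc/cron.daily, so often all you need to do is add your stanza where the existing config file will find it. -Ed.]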


     Thu, 23 Sep 1999 13:11:53 +0530
    From: neeti <
    Subject: linux 6.2 compatible scsi adapters

    Will somebody please tell me the list of SCSI adapters compatible with SuSE Linux 6.2?

    thanx
    neeti


     Thu, 23 Sep 1999 17:32:46 +0200
    From: De Miguel, Guillermo <
    Subject: Package to install...

    Hello everybody,

    I have RedHat 6.0 installed on my notebook in an 800 MB partition, with several products installed. As you can suppose, I had to skip installing a lot of packages because I do not have much free hard disk space. Sometimes, while I am working with my installation, I have problems because my Linux does not find some file(s). The question is: does somebody know a way to find the package that contains a file which is not installed? I know that there is an option in the rpm command to find the package a file belongs to. However, that requires the file to be on the hard disk already. Can anybody help me?

    Thanks ahead. Guillermo.
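
    [The rpm option you mean is "rpm -qf", which reports the package owning a file that is already installed (e.g., "rpm -qf /bin/ls"). For a file you don't have, the lookup must go the other way; a sketch, assuming the packages are on your CD (the path and package name are just examples):

        rpm -qpl /mnt/cdrom/RedHat/RPMS/somepackage.rpm

    This lists the files a package would install; there is no built-in index from file name to package, though, so perhaps a reader knows a ready-made tool. -Ed.]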


     Wed, 01 Sep 1999 16:22:28 +0200
    From: Alessandro Magni <
    Subject: Imagemap

    Needing to define hotspots on some images in HTML documents, I found a total lack of programs for Linux that enable you to accomplish this task. Does somebody know what I'm searching for?

    Thanks
    Alessandro


    General Mail


     Wed, 01 Sep 1999 13:13:02 -0700
    From: Jim Dennis <
    Subject: Freedom from UCITA: Free Software

    In response to Ed Foster's many recent gripes about the UCITA and the risks associated with some proprietary software licensing.

    I'm sure he's heard it before, but freedom from the threat of UCITA is only as far away as your local free software mirror site (FTP and/or web-based). Linux and FreeBSD (and its brethren) have licenses without any such traps.(*)

    * (I've appended a brief note on the two most common software licenses to forestall any argument that they DO contain "such traps.")

    If the quality of Linux and other free software didn't speak for itself, the UCITA would be an incentive for its adoption.

    It's as though the major commercial software publishers are in their death throes and intent on getting in one last bite, kick or scratch at their customers.

    I'm not saying that free software and the open source movement is poised to wipe out proprietary software. For most free software enthusiasts the intent is to provide alternatives.

    Ironically it seems as though the major proprietary software interests will obliterate themselves. The UCITA that they propose may pass and become the fulfillment of some modern Greek tragedy.

    I just hope that free software enthusiasts can provide the improvements and new, free products that may become unavailable if the commercial software industry annihilates itself.

    There's much work to be done.

    ----------------------- Appendix -----------------------------

    Some software is distributed in binary form free of charge. Some proprietary software is distributed with the source code available, but encumbered by a license that limits the production of "derivative works." Those are not commonly referred to as "free software" or "open source" by computing professionals and technical enthusiasts.

    However, "free software with open source" permits free use and distribution and includes source code and a license/copyright that specifically permits the creation and distribution of "derivative works" without imposition of licensing fees, royalties, etc.

    That, of course, is a simplification. There are extensive debates on USENet and other technical fora about the exact nature and definition of the terms "free software" and "open source."

    However, that is the gist of it.

    There are two major license groups for "free/open source" software: BSD (Berkeley Software Distribution) and the GPL (GNU General Public License).

    The BSD license was created by the Regents of the University of California at Berkeley. It was originally applied to a set of patches and software utilities for UNIX (which was then owned by AT&T). Since then the BSD license has been applied to many software packages by many people and organizations that are wholly unconnected to UC Berkeley. It is the license under which Apache and FreeBSD are available.

    The BSD license permits derivative works, even closed-source commercial and proprietary ones. Its principal requirements are the inclusion of a copyright notice and a set of disclaimers (disclaiming warranty and endorsement by the original authors of the software). Many advocates consider it to be the "free-est" license short of complete and utter abandonment (true public domain).

    The GPL is somewhat more complicated. It was created by the Free Software Foundation (FSF), which was founded by Richard M. Stallman (a software visionary with a religious fervor and a following to match).

    The terms of the GPL which cause misunderstandings and debate revolve around a requirement that "derivative works" be available under the same licensing terms as their "source" (progenitors?).

    This is referred to as the "viral nature" of the GPL.

    Conceptually, if I "merge" the sources of two programs, one of which was under the GPL and another which was "mine", then I'm required to release the sources to my software when I release/distribute the derivative.

    That's the part that causes controversy. It's often played up as some sort of "trap" into which unwary software developers will be pulled.

    One misconception is that I have to release my work when I use GPL software. That's absurd, pure FUD! I can use GNU Emacs (a GPL editor) and gcc (a popular GPL compiler) to write any software I like. I can release that software under any license I like. The use of the tools to create a package doesn't make it a "derivative work."

    Another more subtle misconception is that I'd be forced to release the sources to any little patch that I made to a package. If I make a patch, or a complex derived work, but only use it within my own organization, then I'm not required to release it. The license only requires the release of sources if I choose to "DISTRIBUTE" my derivative.

    One last misconception: I don't have to distribute my GPL software free of charge. I can charge whatever I like for it. The GPL merely means that I can't prevent others from distributing it for free, that I must release the sources, and that I must allow further derivation.

    The FSF has developed an extensive suite of tools. Their GNU project intends to create a completely "free" operating system. They provided the core "tool chain" that allowed Linus Torvalds and his collaborators to develop Linux. That suite is released under the GPL. Many other software packages by many other authors are also released under the GPL.

    Indeed, although the Linux kernel is not a "derived work" and its developers are unaffiliated with the FSF (as a group), it is licensed under the GPL.

    There are a number of derivatives and variations of these licenses. Some of them may contain subtle problems and conflicts. However, the intent of the authors is generally clear. Even with the worst problems in "free" and "open source" software licenses, there is far less risk to consumers who use that software than there is from any software released under proprietary licenses that might be enforced via the UCITA.

    [Jim, you get the award for the first Mailbag letter with an Appendix.

    There is an article about UCITA on the Linux Journal web site, which contains an overview of UCITA's potential consequences, as well as a parody of what would happen if UCITA were applied to the auto industry. -Ed.]


     Fri, 3 Sep 1999 10:32:58 +0200
    From: niklaus <
    Subject: gazette #45 - article on java

    Hey, what about that buggy article on JDE on Linux? I receive nothing more than a floating-point error!

    PN


     Fri, 3 Sep 1999 19:16:19 +0200
    From: <
    Subject: Re: Ooops, your page(s) formats less-optimum when viewed in Opera

    Hi Guys at Linux Gazette.

    This mail responds to Bjorn Eriksson's mail in issue #45 / General Mail: SV: Ooops, your page(s) formats less-optimum when viewed in Opera (http://www.operasoftware.com/).

    I have the same problem in my Opera. I tested it with Opera 3.1, 3.5 and 3.6 on Windows and the alpha release for BeOS, and got the same problem every time. This is my solution: when defining this:

    you use the tag attribute WIDTH="2". If you change it to WIDTH="1%", it looks better.

    Jan-Hendrik Terstegge

    [I tried his advice, and another Opera user said it worked. -Ed.]


     Thu, 02 Sep 1999 13:44:53 -0700
    From: <
    Subject: misspelling

    This month's linux gazette contains what is for my money the most hideous misspelling ever to appear in your pages. The article "Stripping and Mirroring RAID under RedHat 6.0" clearly does NOT refer to an attempt to remove any apparel whatsoever from our favorite distro. STRIP is to undress, STRIPE is to make a thin line, RAID does not concern itself with haberdashery or nudity.

    Dave Stevens

    The Linux Gazette Editor writes:

    OK, fixed.

    P.S. STRIP is also used in electronics, when you scrape the insulation off wires.

    Mark Nielsen adds:

    Oops!
    Sorry!
    Thanks!

    I used ispell to check the spelling. Dang, it doesn't help when the word you misspell is in the dictionary.

    Mark


     Thu, 16 Sep 1999 13:46:22 +0200 (CEST)
    From: seigi seigi <
    Subject: Linux Gazette in French

    Hello,

    I would like to know if your magazine exists in French, or if not, whether you know of a French-language magazine about Linux.

    Thanks in advance

    [There are two French versions listed on the mirrors page, one in Canada and one in France. There used to be a third version, but it no longer exists. A company wrote me to say they are working on a commercial translation as well, although I have not heard that it's available yet. -Ed.]


     Mon, 20 Sep 1999 08:59:40 -0400
    From: Gaibor, Pepe (Pepe) <
    Subject: What is the latest?

    With great interest I got into and perused the Linux Gazette. Any new stuff beyond April 1997? And if so, where is it?

    [You're reading it. :)

    If the site you usually read at appears to be out of date, check the main site at www.linuxgazette.com. -Ed.]


     Wed, 22 Sep 1999 9:21:17 EDT
    From: Dunx <
    Subject: Encyclopaedia Galactica != Foundation

    Re: September 99 Linux Gazette, Linux Humour piece -

    Liked the operating systems airlines joke, but surely the footnote about the Encyclopaedia Galactica is in error? The only EG I know is the competitor work to the Hitch Hiker's Guide to the Galaxy in Douglas Adams' novels, radio and TV shows, and computer games.

    Cheers.


     Thu, 23 Sep 1999 17:26:30 +0800
    From: Phil Steege <
    Subject: Linux Gazette Archives CDROM

    I just wondered whether there is, or has ever been, any thought of publishing a Linux Gazette Archives CD-ROM.

    Thank you for a great publication.

    Phil Steege

    [See the FAQ, question 2. -Ed.]



    "Linux Gazette...making Linux just a little more fun!"


    News Bytes

    Contents:


    News in General


     October 1999 Linux Journal

    The October issue of Linux Journal is on the newsstands now. This issue focuses on embedded systems.

    Linux Journal now has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue66/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/ljsubsorder.html.

    For Subscribers Only: Linux Journal archives are now available on-line at http://interactive.linuxjournal.com/


     Upcoming conferences & events

    Atlanta Linux Showcase. October 12-16, 1999. Atlanta, GA.

    Open Source Forum. October 21-22, 1999. Stockholm, Sweden.

    USENIX LISA -- The Systems Administration Conference. November 7-12, 1999. Seattle, WA.

    COMDEX Fall and Linux Business Expo. November 15-19, 1999. Las Vegas, NV.

    The Bazaar. December 14-16, 1999. New York, NY. "Where free and open-source software meet the real world". Presented by EarthWeb.

    SANS 1999 Workshop On Securing Linux. December 15-16, 1999. San Francisco, CA. The SANS Institute is a cooperative education and research organization.


     Red Hat News (Burlington Coat Factory, Japan, etc)

    DURHAM, N.C.--September 7, 1999--Red Hat, Inc. today announced that Burlington Coat Factory Warehouse Corporation has purchased support services from Red Hat for its nationwide Linux deployment.

    Under the agreement, Red Hat Services will provide telephone-based support to more than 260 Burlington Coat stores nationwide (including subsidiaries). Red Hat will configure, install and provide ongoing maintenance for customized Dell OptiPlex (R) PCs and PowerEdge servers running factory-installed Red Hat Linux. The Red Hat Linux OS-based systems will host Burlington Coat Factory's Gift Registry and will facilitate all other in-store functions, such as inventory control and receiving.

    Durham, N.C.--September 7, 1999--Red Hat, Inc. today announced Red Hat Japan. The new Japanese operation will deliver the company's award-winning Red Hat Linux software and services directly to the Japanese marketplace. Red Hat Japan will feature a new, expanded headquarters and staff, and a new leader for Red Hat's operations in Japan.

    In addition, Red Hat has named software industry veteran Masanobu Hirano as president of Red Hat Japan. Prior to Red Hat, Mr. Hirano was president of Hyperion Japan, a subsidiary of Hyperion Solutions, one of the country's most successful online analytical processing (OLAP) solution vendors. He also served as vice president and was a board member of ASCII Corporation, one of Japan's pioneering computer software companies.

    Durham, N.C.--September 7, 1999--Red Hat®, Inc., today announced that Gateway has joined its authorized reseller program. When requested by its customers, Gateway will install Red Hat on its ALR® servers for network business environments.


     Free Linux tech support via the web (No Wonder!)

    Take a look at No Wonder!, the award-winning support web site, where real help for Linux, Mac, Windows, BeOS, the Web, and PDAs is only a couple of clicks away. We currently have over 2000 volunteers ready to answer questions, with more support providers signing up every day.

    "It might sound crazy, but we have been doing it for almost 3 years."


     News from The Linux Bits

    The Linux Bits is a weekly ezine at www.thebits.co.uk/tlb/. It is perfectly suited to offline viewing (no graphics or banners).

    A Survey Of Web Browsers Currently Available For Linux

    Here's a list of all the Linux browsers and their stages of development. Obviously, if you know of one that's not on the list, then please let them know.

    E-mail signature quotes from:

    Oh My God! They Killed init! You Bastards!

    Your mouse has moved. Windows must be restarted for the change to take effect. Reboot now? [ OK ]

    If Bill Gates had a nickel for every time Windows crashed... Oh wait, he does.

    Feature freeze for Linux 2.3: kernelnotes.org/lnxlists/linux-kernel/lk_9909_02/msg00460.html

    HAPPY 8TH BIRTHDAY, LINUX! On 17 September 1991, Linus e-mailed his 0.01 kernel to just four people. Doesn't sound like much, does it? Well, believe it or not, this was the first public release of Linux.

    Linuxlinks.com. The trouble with sites that primarily focus on links to other sites is that they tend to be thrown together with no real thought or organisation put into them. Fortunately, LinuxLinks.com is not one of those sites. A great place to track down information on specific subjects concerning Linux.

    LB also has a multi-part review of StarOffice.


     Over 4000 UK IT and management training courses online

    Training Pages, an online database of IT and management training courses in the UK, has officially passed the threshold of 4000 entries. At the time of writing, the database detailed 4027 courses from 347 companies. These numbers will almost certainly increase by the time this notice is released.

    [Type "linux" in the search box to see their 21 Linux courses. -Ed]

    The press release offers a few technical details of the web site:

    No other UK training site offers comparable levels of interactivity and dynamic web services. The secret behind the site's functionality is its integration of open standards, open source software, and a smattering of in-house programming trickery.

    By separating the dynamic functions from the presentational elements of HTML, the site can constantly be adapted and improved with minimal human intervention. The programme code currently contains a host of premium features which have yet to be activated, e.g. direct booking, last-minute booking, course evaluation, trainer evaluation and freelance trainers.

    Training Pages was developed by GBdirect, a boutique IT consultancy and training company based in Bradford. A detailed case study of how they designed and built the site is available from www.gbdirect.co.uk/press/1999/trainingpages.htm


     New IDE Backup Device (Arco)

    HOLLYWOOD, Florida --Arco Computer Products, Inc., www.arcoide.com, a leading provider of low cost IDE disk mirroring hardware, today announced the DupliDisk RAIDcase, a real-time backup device that offers PC users a simple and convenient way to maintain an exact, up-to-the-minute duplicate of their IDE hard drives.


    If a hardware failure disables one of the mirrored drives, the other takes over automatically. External LEDs change color to indicate a failed drive and an audible alarm alerts the user to the drive failure, but there is no interruption of service. The system continues to function normally until the user can find a convenient time to power down and install a new drive. Caddies remove easily to facilitate replacement of failed drives.

    The RAIDcase requires neither an ISA nor a PCI bus slot. IDE and power cables provided by Arco connect the RAIDcase to the computer's onboard IDE controller and power supply. Once installed, the RAIDcase operates transparently, providing continuous automatic hard disk backup and disk fault tolerance. All data sent from the PC to the primary drive is automatically duplicated (concurrently) to the mirroring drive but the system (and end user) sees only the primary drive.

    The RAIDcase uses no jumper settings and requires no driver, TSR or IRQ. Because it requires no device drivers, the RAIDcase is essentially operating-system independent. It has been tested with systems running Windows 3.x, 95, 98 and NT, as well as OS/2, DOS, Unix, Linux, Solaris386, BSDI and Novell NetWare. Manufacturer's suggested retail price: $435.

    www.arcoide.com


     News from E-Commerce Minute

    www.ecommercetimes.com

    Linux vendor MandrakeSoft announced a strategic partnership with LinuxOne this week. They intend to develop Chinese language business and personal software solutions, advancing the cause of open-source in a potentially explosive Internet and computing market...
    www.ecommercetimes.com/news/articles/990903-2.shtml

    E-commerce solution provider Unify Corp., announced this week that two of its forthcoming Internet software releases will be certified to run on the Red Hat, Inc. (Nasdaq: RHAT) distribution of the Linux operating system (OS)...
    www.ecommercetimes.com/news/articles/990910-8.shtml

    Navarre, a business-to-business e-commerce company that offers music and software, announced today that it has entered into a distribution deal with Linux developer tools provider Cygnus Solutions. The deal is Navarre's sixth distribution agreement for the Linux operating system and related products...
    www.ecommercetimes.com/news/articles/990903-6.shtml

    Magic Software Enterprises unveiled the latest version of its e-commerce server Friday, a product powered by the Red Hat, Inc. (Nasdaq: RHAT) distribution of the red-hot Linux operating system (OS)...
    www.ecommercetimes.com/news/articles/990920-3.shtml

    Oracle Corp. has announced that its Oracle8i for Linux, a database designed specifically for the Internet, has been certified to run on the Red Hat, Inc. (Nasdaq: RHAT) distribution of the open-source operating system (OS). The announcement officially launches a strategic partnership between the two companies that is aimed at advancing corporate adoption of Linux...
    www.ecommercetimes.com/news/articles/990922-7.shtml


     New Linux Bulletin Board

    ANNOUNCEMENT FROM: WM. Baker Associates

    On 09/13/99 WM. Baker Associates launched the Linux Bulletin Board at:

    http://www.w-b-a.com/linux.html

    This new Linux Bulletin Board provides a forum for visitors to ask questions, learn, and share ideas about Linux related issues and events.

    Bulletin Board Categories include:

    Linux Technical Information
    Linux News, Events & Publications
    Linux Investment Information


     Ziatech and Intel Sponsor Applied Computing Software Seminars

    San Luis Obispo, CA, September 20, 1999 -- Ziatech Corporation, with sponsorship from Intel Corporation, is hosting a continuing series of one-day seminars focusing on real-time operating system solutions for applied computing applications, it was announced today. Beginning in late October, the 1999 Applied Computing Software Seminar Series will feature presentations from leading software companies, including Wind River Systems (VxWorks®, Tornado(tm)), QNX Software Systems (QNX®, Neutrino®), and MontaVista Software (Hard Hat(tm) Linux), in addition to presentations by Ziatech and Intel. The seminar series begins in San Jose on October 29, and continues in San Diego (November 4), Tokyo (November 8), Dallas (November 30), and Boston (December 2). Each one-day session begins with registration at 7:30 a.m., includes lunch, and concludes at 5 p.m.

    www.ziatech.com


     Linux C Programming Mailing Lists

    The Linux C Programming Lists now officially exist. They will be archived on-line, and the lists themselves are run via majordomo. The Linux C Programming Lists aim to help people programming Linux in C. Hopefully no question will be too simple or too difficult for someone on the list to answer. For anyone learning how to program Linux in C, these lists will be a valuable resource.

    David Lloyd has agreed to host a common home page for the Linux C Programming Lists. It is at users.senet.com.au/~lloy0076/linux_c_programming/index.html

    The easiest way to become a member of the (dual) lists is to e-mail .


     LJ sponsors Atlanta Linux Showcase conference and tutorials

    Linux Journal, the monthly magazine of the Linux community, is proud to announce its leading sponsorship role in the 1999 Atlanta Linux Showcase.

    The Atlanta Linux Enthusiasts and the USENIX association, in cooperation with Linux International, are pleased to announce the Conference and Tutorials Schedule for the 3rd Annual Atlanta Linux Showcase.

    The tutorial program, sponsored and managed by the USENIX Association, will feature two days of top-rate instruction in the following subjects:

    The Conference program will consist of 41 sessions with up to five sessions in each track. Our tracks cover all of the cutting-edge topics in the Linux community today: Distributed Computing, Kernel Internals, Applications, Security, System Administration, and Development. The sessions are led by a top-notch lineup of speakers including Bernie Thompson, Eric Raymond, Phil Hughes, Matthew O'Keefe, Jes Sorensen, Michael Hammel, Miguel de Icaza, Mike Warfield, Steve Oualline and Dirk Hohndel. Full details on the conference program are available online at: www.linuxshowcase.org.

    In addition to Jeremy Allison's keynote, Norm Shryer from AT&T Research will be giving a keynote entitled "The Pain of Success, The Joy of Defeat: Unix History" -- the story of what happened to Unix on the way from Ken Thompson's mind to the marketplace, and how this affects Linux.

    ALS activities for attendees include 3 days of free vendor exhibits, freeform birds-of-a-feather sessions, and the listed tutorials, keynotes, and conference sessions.


     RMS Software Legislation Initiative

    Date: Wed, 15 Sep 1999 04:31:52 +0000 (UTC)
    From: Dwight Johnson <[email protected]>
    To: [email protected]

    Richard Stallman is promoting an initiative to campaign against UCITA and other legislation that is damaging to the free software movement.

    Considering the many legislative issues that affect free software, some, like myself, believe there is a need for a Ralph Nader type of organization both to act as a watchdog and to lobby legislators to protect the interests of free software.

    Richard Stallman has volunteered to lead off the initiative; below is his latest correspondence.

    Because the free software movement encompasses both commercial and non-commercial interests and Linux International is an association of commercial Linux interests, it is probably not appropriate for Linux International to attempt to serve the watchdog and lobbying function which is needed.

    There are two ways Linux International may become involved:

    1) Individual companies within Linux International may wish to commit themselves to sponsor the Richard Stallman initiative;

    2) Linux International may collectively endorse and sponsor the Richard Stallman initiative.

    Richard recognizes that his leadership may be controversial to some and has told me he wants to join with 'open-source' people to support the common cause.

    As Linux and free software/open-source solutions move onto center stage in the technology arena, the agendas of those who support the older model of intellectual property and oppose the inevitability of this evolution are becoming more sharply defined and more dangerous -- they are seeking legislative solutions.

    We need to pursue our own agenda and our own legislative solutions and time is of the essence. A cursory search of the Linux Today news archives will reveal that there are several bills on their way through the U.S. Congress right now, in addition to UCITA, which could be disruptive to the free software movement.

    I urge the Linux International Board of Directors both collectively and individually to take action and support the Richard Stallman initiative to defend open-source/free software against damaging legislation.


     Andover.Net Files Registration for Open IPO

    Acton, Mass.--September 17, 1999-- Andover.Net www.andover.net, a network of Linux/Open Source web sites which include Slashdot.org, today announced that it has filed a Registration Statement on Form S-1 with respect to a proposed initial public offering of 4,000,000 shares of Andover.Net common stock. All 4,000,000 shares are being offered by Andover.Net at a proposed price range of $12 to $15 per share.

    Information regarding the OpenIPO process may be obtained through www.wrhambrecht.com. Copies of the preliminary prospectus relating to the offering may be obtained when available through the web site.


     Linux Breakfast

    New Age Consulting Service, Inc., a Network consulting corporation and Tier 2 Internet Service Provider that has been providing corporate Linux solutions for nearly five years, is introducing Linux to Cleveland at a Breakfast on September 30, 1999. The Linux Breakfast is designed to educate the quickly expanding Linux market in Cleveland about the exciting commercial applications of the open source operating system and how it increases network efficiency in conjunction with or as an alternative to other network operating systems such as Novell and Microsoft NT server solutions.

    NACS.NET's goal is to provide business owners and managers with information that demonstrates how Linux is quickly building a stronghold in the enterprise market and that it is being rolled out in a very strong and well-supported manner. Caldera Systems, Inc. and Cobalt Networks, Inc., national leaders in Linux technology, will be presenters at the event.

    Caldera Systems, Inc. is the leader in providing Linux-based business solutions through its award winning OpenLinux line of products and services. OpenLinux for business solutions are full-featured, proven, tested, stable and supported. Through these solutions, the total cost of ownership and management for small-to-medium size businesses is greatly reduced while expanding network capabilities.

    Cobalt Networks, Inc. is a leading developer of server appliances that enable organizations to establish an online presence easily, cost-effectively, and reliably. Cobalt's product lines -- the Cobalt Qube, Cobalt Cache, Cobalt RaQ, and Cobalt NASRaQ -- are widely used as Internet and Web hosting server appliances at businesses, Internet Service Providers, and educational institutions. Cobalt's solutions are delivered through a global network of distributors, value-added resellers and ISPs. Founded in 1996, Cobalt Networks, Inc. is located in Mountain View, California (the heart of Silicon Valley), with international offices in Germany, Japan, the United Kingdom, and the Netherlands.

    The presentation will be held in the Cleveland Flats at Shooters on the Water. A full buffet style breakfast will be offered to all registered attendees.


     Franklin Institute Linux web server

    Philadelphia PA -- The Franklin Institute Science Museum has installed a Linux Web server built by LinuxForce Inc. The server is now on line and being used by The Franklin Institute in their Keystone Science Network. The network has been designed to create a professional community of science educators throughout the Eastern half of Pennsylvania.

    Christopher Fearnley, Senior VP of Technology at LinuxForce Inc., said: "LinuxForce is proud to have built the server and its integrated Linux software that will aid the Keystone Science Network in promoting teacher professional growth." The Network has been designed to promote teacher professional growth through the implementation of K-8 standards-based science kits supported by the application of network technology.

    The Web Server built for the program by LinuxForce Inc.'s Hardware Division includes the Debian GNU/Linux operating system. Fearnley commented that the powerful Keystone web server will meet all current requirements and the challenge of any expansion beyond the ten core sites located in school districts throughout the Eastern half of Pennsylvania.

    www.linuxforce2000.com


     Java EnterpriseBeans: no-cost developer's version + contest

    P-P-P-Promotion --- don't start stuttering. If you are on Linux, it's alright for you to laugh! Penguin, ProSyst, PSION: CARESS the Penguin, DOWNLOAD the EJB application server from ProSyst: EnterpriseBeans Server, Developer Edition without any charge and WIN one of three PSION Series 5mx Pro palmtop computers every month!

    Download EnterpriseBeans Server, Developer Edition for Linux and register to win! All download registrations and answered questionnaires received by October 15, November 15 and December 15, 1999 will be entered in a monthly drawing for one of three PSION Series 5mx Pro palmtop computers. The winners will be posted on ProSyst's Web site at www.prosyst.com every month.

    WIN again: If you have developed some nice Enterprise JavaBeans or services and are planning to deploy them, purchase EnterpriseBeans Server (any Server Edition for Linux) by December 31, 1999 and get 50% off. You save up to US $5,500.


     Linux Links

    LinuxPR. From the web page: "Linux PR is a website for organizations to publish press releases to the enormous market that is the Linux community. Linux PR is backed by the resources of Linux Today and is offered at no charge. Journalists from large media organizations can monitor Linux PR as a source for their Linux-related information."


    Software Announcements


     C.O.L.A software news

    tsinvest is for the real-time programmed day trading of stocks. "Quantitative financial analysis of equities. The optimal gains of multiple equity investments are computed. The program decides which of all available equities to invest in at any single time, by calculating the instantaneous Shannon probability of all equities..." Freely redistributable, but it cannot be sold or included in a commercial product.
    www2.inow.com/~conover/ntropix

    xshipwars game (GPL). Uses the latest Linux joystick driver. Nice-looking lettering and graphics at web site.

    Flight Gear flight simulator game (GPL). "A large portion of the world (in Flight Gear scenery format) is now available for download." Development version is 0.7.0; stable version is 0.6.2.

    wvDecrypt decrypts Word97 documents (given the correct password). A library version (part of wv library) is also available at www.csn.ul.ie/~caolan/publink/mswordview/development.

    hc-cron (GPL) is a modification of Paul Vixie's cron daemon that remembers when it was shut down and catches up on any jobs that were missed while the computer was off. The author is looking for programmers who can take over its further development.

    The Linux Product Guide by FirstLinux is "a comprehensive guide to commercial Linux resources."

    suckmt is a multi-threaded version of suck (an NNTP news puller). Its purpose is to make fuller use of dialup modem capacity, to cut down on connect-time charges. It is more of a feasibility study than an application at this point.

    Jackal/MEC is a video streaming client/server pair for Linux.


     Personal Genealogy Database project

    This project is just beginning and needs help in coding. The complete program objectives and wishlist are at the project home page. For more information, subscribe to the development mailing list or email the project manager.

    Home Page:              www.msn.fullfeed.com/~slambo/genes/
    FTP site:               none yet; will be announced when code is released.
    
    License:                GPL
    Development:    C++, Qt and Berkeley DB; this may change as development progresses.
    
    Project Manager:        Sean Lamb - [email protected]
    
    Mailing List:   www.onelist.com/community/genes-devel
            Subscribe:      blank message to
     or visit the list home page.
    


     Caldera open-sources Lizard install

    Orem, UT -- September 7, 1999 -- Caldera Systems, Inc. today announced that its award-winning LInux wiZARD (LIZARD) -- the industry's first point-and-click install of Linux -- is now available under the Q Public License (QPL) for download from www.openlinux.org.

    The LIZARD install was developed for OpenLinux by Caldera Systems Engineers in Germany, and by Troll Tech, a leading software tools development company in Oslo, Norway.

    LIZARD makes the transition from Windows to Linux easier for the new user and reduces the down time created by command-line installation. "We're happy to contribute back to the Open Source community and Linux industry," said Ransom Love, CEO of Caldera Systems, Inc. "We're particularly grateful to Troll Tech for their support of -- and contributions to -- this effort. LIZARD will help Linux move further into the enterprise as others develop to the technology." "We congratulate Caldera Systems on their bold move of open-sourcing LIZARD," said Haavard Nord, CEO of Troll Tech. "LIZARD is the easiest-to-use Linux installer available today, and it demonstrates the versatility of Qt, our GUI application framework."

    Under the Q Public License, LIZARD may be copied and distributed in unmodified form, provided that the entire package -- including, but not restricted to, copyright, trademark notices and disclaimers, as released by the initial developer -- is distributed. For more information about the Q Public License and distribution options, please visit www.openlinux.org/lizard/qpl.html


     Xpresso LINUX 2000

    Xpresso LINUX 2000 is a safe, simple, stable computer OS with a full set of programs (StarOffice 5.1, WordPerfect 8, Netscape 4.51, Chess and more). Everything you need, all made simple for the Linux user. All on a single CD-ROM with a small pocket-sized manual.

    It is based on Red Hat 6 with the KDE 1.1 graphical interface and sells for just UK Pounds 15.95, delivered to your door world-wide.

    www.xpresso.org


     News from Loki (games and video libraries)

    TUSTIN, CA -- September 8, 1999 -- Loki Entertainment Software announces their third Open-Source project, the SDL Motion JPEG Library (SMJPEG).

    SMJPEG creates and displays full motion video using an open, non-proprietary format created by Loki. It is based on a modified version of the Independent JPEG Group's library for JPEG image manipulation and freely available source code for ADPCM audio compression. Among its many benefits, SMJPEG allows for arbitrary video sizes and frame-rates, user-tuneable compression levels, and facilities for frame-skipping and time synchronization.

    Loki developed SMJPEG in the course of porting Railroad Tycoon II: Gold Edition by PopTop Software and Gathering of Developers. While Loki is contractually bound to protect the publisher's original game code, Loki shares any improvements to the underlying Linux software code with the Open Source community.

    Loki's first Open Source project, the SDL MPEG Player Library (SMPEG) is a general purpose MPEG video/audio player for Linux, developed while porting their first title, Civilization: Call to Power by Activision. The second project is Fenris, Loki's bug system based on Bugzilla from the Mozilla codebase.

    SMJPEG, SMPEG and Fenris are freely available for download from www.lokigames.com, and are offered under the GNU Library General Public License (LGPL).

    About Loki Entertainment Software: Based in Orange County, CA, Loki works with leading game publishers to port their best-selling PC and Macintosh titles to the Linux platform. Loki meets a pent-up need in the Linux community by providing fully-supported, shrink-wrapped games for sale through traditional retail channels. For more information, visit www.lokigames.com.

    Tustin, CA. -- September 17, 1999 -- Loki Entertainment Software, in cooperation with Activision, Inc. and the Atlanta Linux Enthusiasts, announces Loki Hack 1999 to be held on October 11 through 13 at the Cobb Galleria Centre in Atlanta in conjunction with the Atlanta Linux Showcase.

    Loki Entertainment Software launched the Linux version of Activision's popular strategy game Civilization: Call to Power(TM) in May 1999 to strong reviews. During Loki Hack, up to 30 qualified hackers will have 48 hours in a secure setting to make alterations to the Linux source code for this game. In turn Loki will make available in binary form all resulting work from the contest. Winners of this unique contest will be announced during the Atlanta Linux Showcase. First prize will be a dual-processor workstation (running Linux of course).

    Qualified hackers may apply to participate on the Loki web site (above).


     New Linux Distribution from EMJ Embedded Systems

    Apex, NC -- Using Linux for embedded applications on the world's smallest computer just got easier, thanks to the new EMJ-linux distribution, developed by EMJ Embedded Systems.

    EMJ has compiled a small distribution that runs Linux on JUMPtec®'s DIMM-PC/486, saving hardware developers the hours required to modify Linux to run on the world's smallest PC. The DIMM-PC is a full-featured 486 PC the size of a 144-pin memory DIMM.

    The DIMM-PC/486 from JUMPtec measures 40x68 mm (1.57 x 2.68 inches) but packs the same punch as a standard 486 PC. The DIMM-PC is perfect for use in high performance applications such as security apparatuses, medical instruments, factory automation and global positioning systems. It ships with 16 MB of DRAM, 16 MB of IDE compatible flash and supports two serial ports and a parallel port, as well as floppy and hard drive interfaces, a real time clock and watchdog timer, and an I2C-bus.

    The EMJ-linux distribution consists of a 1.4MB bootdisk. This bootdisk contains everything needed to do a network install of EMJ-linux. Currently v0.9 is a 5.7MB compressed file (9.8MB uncompressed), designed to download directly to the DIMM-PC's 16MB Flash Drive, or to an IDE drive. Once loaded, the DIMM-PC with Linux will support Ethernet, TCP/IP, Telnet, FTP, WWW (Apache 1.3.9), as well as two serial ports, parallel, floppy, IDE and VGA.

    EMJ-linux is based roughly on Slackware v4.0, which uses the current Linux kernel 2.2.x. It will be kept up to date by EMJ as Linux revisions occur. EMJ will also develop similar Linux solutions for other JUMPtec products.

    The EMJ-linux distribution is available on EMJ's Web site, www.emjembedded.com/linux


     Armed Linux: a distro for Windows users

    [The following was written by the LG Editor, who has not used the software. The web site is sparse on technical details, so the comments below may not be totally correct. -Ed.]

    Armed Linux is a Linux distribution that comes as a 192 MB zip file. You unzip it under Windows and run a batch file --- and it installs Linux. Apparently (this has not been verified) it uses the loopback device to create an entire Linux filesystem in a huge DOS file. An alternative to UMSDOS, for those who remember that.
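    For the curious, the loopback technique being guessed at here works roughly like this on a stock Linux box. This is only an illustration of the general idea, with invented file names and sizes; it is not how Armed Linux actually does it:

    # create a 100 MB container file, put a filesystem in it, and mount it
    dd if=/dev/zero of=/dosc/linux.img bs=1024k count=100
    mke2fs -F /dosc/linux.img        # -F: it's a regular file, not a device
    mount -o loop /dosc/linux.img /mnt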

    The current version, workstation beta 1.1, is available for free download, or for $11.99 on CD. It includes the Enlightenment and Window Maker window managers, an office suite, Netscape, an MP3 player, and a graphics editor (GIMP?). A server version is planned, which will contain Apache/Samba/Sendmail etc. and an SMP kernel. Both versions have an automated uninstall routine, if you're nervous about installing an unknown program.


     SMART software for Linux

    SANTA CRUZ, CA-- Free software enabling users of the Linux operating system to monitor their hard drives and detect predictable drive failures is available from the Concurrent Systems Laboratory at the University of California, Santa Cruz. Development of the software is sponsored by Quantum Corp. of Milpitas, Calif., a leading manufacturer of hard-disk drives.

    The S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) system monitors hard drives and warns of impending failures before they happen. Originally developed by Compaq, S.M.A.R.T. technology has become an industry standard for hard drive manufacturers.

    "The S.M.A.R.T. system allows the computer to talk to the hard drive and ask how it's doing by measuring various performance parameters," said Darrell Long, associate professor of computer science in the Jack Baskin School of Engineering at UC Santa Cruz.

    The initial Phase I release of the S.M.A.R.T. for Linux software only supports ATA, one of two standard interfaces for connecting hard drives to computers. A complete software package that supports both ATA and SCSI will be released by the end of the year, Cornwell said.

    The S.M.A.R.T. software for Linux is available from the following Web site, which also has a link to a Quantum white paper on S.M.A.R.T. technology: http://csl.cse.ucsc.edu/software/smart/.


     Other software

    The GNU Privacy Guard (GPG) version 1.0.0 was released on September 7th, 1999. This is a free replacement for the PGP encryption software.

    Three Axis, a new web site by a company dedicated to bringing more games to Linux.

    LinuxOne is offering a beta version of its LinuxOne OS for download. The first 100 subscribers will get it free; afterwards, there will be a $9.95 shipping and handling charge. (LinuxOne is also working with MandrakeSoft to develop a Chinese-language workstation and server distribution.)


    This page written and maintained by the Editor of the Linux Gazette,
    Copyright © 1999, Specialized Systems Consultants, Inc.
    Published in Issue 46 of Linux Gazette, October 1999

    Contents:

    (!)Greetings From Jim Dennis

    (?)Two Network Cards --or--
    Routing Revisited
    (!)dao
    ("helpless" in TAG #44)
    (?)TCPMUX on Linux --or--
    TCPMux Revisited: You'll need a Daemon for it, or a Better inetd
    (?)The Mac, Linux, perl, Apache & server --or--
    A Staging Server
    (?)Another "respawning" question --or--
    Id "x" respawning too fast: Murdered Mysteriously
    (?)2gig max file size? --or--
    Large File Support Under Linux/x86
    (?)http://sunsite.mff.cuni.cz/lg/issue13/answer.html --or--
    From the Dim History: EQL Revisited
    Bandwidth Load Sharing w/o ISP Support
    (?)Modem blues.. --or--
    High School Modem
    (?)your web --or--
    Who is Jim Dennis?
    (?)redirection of stdin-stdout --or--
    Programming Question about Regaining stdin/stdout
    (?)outgoing email using Netscape --or--
    Outgoing Mail Problems
    (?)How to add fonts to Linux --or--
    Adding Fonts

    (!) Greetings from Jim Dennis

    There's new excitement at the Answer Guy household this month - my book is shipping! I know this because I pre-ordered it from Amazon, and my order has arrived. So, now I can tell you it has a purple binding, and that the cover is white, with a black and white river scene along the top.

    [ In the plugs department, if you're going to buy it from Amazon, buy it through the associate link at our science fiction club, the Los Angeles Science Fantasy Society. You'll get a discount, and help a literary organization at the same time. -- Heather ]

    My lovely wife Heather notes that it should also be at Computer Literacy, if there's one in your area, or if you prefer to shop online there.

    [ SVLUG's installfests are held there, and I asked the staff if it was in yet. They said it was at the warehouse but hadn't hit the stores. That was a week ago, so they should be in by now. -- Heather ]

    Onward to Linux itself. The 2.2.13pre kernel is (fingers indicating tiny space) this close to being ready. 2.2.10 through 12 have some memory leaks, so a lot of people are safer staying with whatever their distribution shipped until it's settled in. Alan Cox is putting a lot of effort into this one really being solid.

    You won't find it mentioned at KernelNotes.org - they have good stuff, but they don't bother to mention the pre-kernels. If you're a brave soul and really want to see the latest kernel details, you have to go to kernel.org. I found the .13pre code at http://www.us.kernel.org/pub/linux/kernel/alan/2.2.13pre/ though as I've said many times before, I'm no programmer. I just read README's and comments. (BTW, you can pick a closer mirror if you're in another country, by replacing "us" with your two letter country code. Round robin DNS does the rest.)

    LinuxCare is sending me on another training visit to another state. (Although I enjoyed myself in Japan, I'm glad this one is a shorter trip.) Someone must be looking out for me - the very topic I needed to investigate, embedded systems, seems to be the big topic for Linux Journal this month.

    I'm sure you didn't come here to read all about that. You came for the articles. There's a new footer this month to make it easy to get to the past articles, too. With the short deadline this month, there will be more than usual next time, I think.


    (?) Routing Revisited

    From BK on Wed, 01 Sep 1999

    I have placed two network cards in my system; one connects to a firewall and the other to a hub serving a small two-system network (I'm a newbie attempting this project). While booting, the kernel detects the two cards very well; I have no IRQ conflicts or any other mishaps. I have configured the first card (eth0) using 'ifconfig eth0 192.168.0.1' and card two using 'ifconfig eth1 192.168.0.2'; now if I shut down the system and reboot and run 'ifconfig', it only shows me one card (eth0). How do I get the other card to remain constant?

    Badiane

    (!) First problem: The 'ifconfig' command is in no way persistent. When you configure the IP address, netmask and broadcast address on an interface, that setting only lasts until the next reboot (or the next 'ifconfig').
    You need to save the settings for your interfaces in a configuration file somewhere. On a Red Hat system you should find a file named /etc/sysconfig/network-scripts/ifcfg-eth0. Copy that to the name ifcfg-eth1 (in the same directory) and edit the copy. This file is a set of variable assignments which is "sourced" by one of the rc* (start up) scripts. The variables are then used in the 'ifconfig' command.
    When you edit that file, it is VERY important that you remember to change the DEVICE= setting to eth1, otherwise you'll overwrite the configuration of your eth0 interface. The name of the ifcfg-* file is not correlated to device name!
    Here's an example of a ifcfg-* file from one of my Red Hat systems:
    DEVICE=lo
    IPADDR=127.0.0.1
    NETMASK=255.0.0.0
    NETWORK=127.0.0.0
    BROADCAST=127.255.255.255
    ONBOOT=yes
    
    In this example I'm using the lo, or loopback, interface since this system uses DHCP for its ethernet interface and consequently has no ifcfg-eth0. You want to change all of these settings as appropriate for your other subnet.
    This brings us to the second and more drastic problem that you've described. The IP addresses you gave are on the same subnet. That doesn't make sense!
    You could probably force it to work with a few proxyarp commands (to publish the extra IP address of eth1 on the LAN to which you've connected eth0, and to also publish the other two IP addresses on eth1's segment to the other network).
    Another way to make this addressing scheme work would be to publish special host routes for each of these stray IP addresses on EVERY system on the eth0 network segment. That would also constrain you to systems which can properly handle variable length subnet masking (VLSM).
    If these last two paragraphs didn't make sense to you then I suggest TWO things.
    Don't do that! If you don't understand proxyarp then you definitely don't want to try using it.
    Read my "Routing and Subnetting 101" article (the longest I've written for LG TAG to date) at:
    The Answer Guy 36: Routing and Subnetting 101
    http://www.linuxgazette.com/issue36/tag/a.html
    The "Routing and Subnetting 101" article will explain what a subnet is, why you want to use it and give you a few examples and tables for determining the valid ways to subnet your particular network. It will also explain ARP, proxyarp, and the use of RFC1918 addresses (which you're already using --- since 192.168.0.* is one of the Class C address blocks reserved in that RFC).
    Since you are using one block of RFC1918 addresses on eth0, you can easily just use another block for eth1. So you could use 192.168.1.*. You can use any number from 1 to 255 for that third octet. You could also use 172.16.*.* on eth1 (and on the other computers/devices on that network segment).
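    Putting the two fixes together, a hypothetical ifcfg-eth1 for your second card might look like this (the addresses are just one example of the renumbered subnet):
    DEVICE=eth1
    IPADDR=192.168.1.1
    NETMASK=255.255.255.0
    NETWORK=192.168.1.0
    BROADCAST=192.168.1.255
    ONBOOT=yes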
    So, solve those two problems and you're well on your way to discovering the next one. (Don't worry, those two are the only problems I can see from what you've described. So that may be enough to get the job at hand done. It's just that I've learned that we don't really solve problems so much as create new ones and, when we're lucky, delay their discovery through periods of apparent functionality).

    (!) dao ("helpless" in TAG #44)

    From Jay Riechel on Thu, 26 Aug 1999

    Hi: You probably didn't really want to know what "dao" is. But James (and you) put it in TAG, so here is my 2c:
    DAO could refer to Microsoft's Data Access Objects. DAO provides a programmatic interface to Microsoft's Jet database engine, which originally appeared in Visual Basic. Jet supposedly lets you access Microsoft's .mdb database format as well as ODBC and ISAM data sources. It seems to be another attempt at creating a standard on their own (when adequate alternatives already exist), just to tie up more market share.
    I know you avoid MS-related stuff, but hey, knowledge is power! You two are doing great work. Thanks from me and all the other lurkers!
    Jay Riechel

    [ Thanks Jay (and you're welcome). Things like this are good to know. Since you've got the answer to this one, your message gets the AnswerBubble! -- Heather ]


    (?) TCPMux Revisited: You'll need a Daemon for it, or a Better inetd

    From Helpdesk on Wed, 01 Sep 1999

    I was trying to configure a few services using the tcpmux inetd internal service, but while trying to connect to tcpmux on port 1 it gives me an I/O error on the socket and fails to establish a connection.

    could you please elaborate on this.

    I was also hunting for some info on tcpmux but was not able to find any.

    Pleaseeeeeeeeeeeeeeeeeeeeeeeee help. I'm doing some serious business programming on Linux and I'm stuck.

    jaggu

    The default 'inetd' that ships with most Linux distributions doesn't support the tcpmux protocol. You'd either need to get a replacement Internet Dispatch Daemon (like Mike Neuman's BINETD, "Better INETD" at: http://www.engarde.com/~mcn/binetd/index.htm), or you'd need to write a standalone tcpmuxd and configure your 'inetd' to launch it for new connections on TCP port 1.
    I also found a web page that suggests that some versions of BSD 4.4 inetd include support for TCPMux services:
    Manpage of INETD
    http://theoryx5.uwinnipeg.ca/gnu/inetutils/inetd.8.html
    This impression seems to be supported by the online man pages at the FreeBSD web site:
    FreeBSD Hypertext Man Pages: inetd
    http://www.freebsd.org/cgi/man.cgi?query=inetd&apropos=0&sektion=0&manpath=FreeBSD+4.0-current&format=html
    So perhaps you could (re-)port that to Linux. Or, perhaps you could write a standalone daemon to implement the protocol. All it would do is a simple handshake and launch.
    Presumably your tcpmuxd daemon would (if you wrote it) use a separate configuration file (maybe /etc/tcpmux.conf would be a good name) which would tell it which services were available (names with the custom protocol versions encoded into them perhaps) and what programs to launch to handle requests for each of those protocols/services. Obviously this would be serving a very similar function to the existing inetd.
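    Such a file might look something like this -- the format is entirely invented, since no such daemon exists yet, and the service names and paths are only placeholders:
    # /etc/tcpmux.conf -- hypothetical format: service name, handler program
    mcserv        /usr/bin/mcserv
    myproto-v1    /usr/local/sbin/myprotod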
    If you were going to write such a daemon, it seems like it would make sense to derive it from TCP Wrappers. tcpd performs very similar operations, and you could link the tcpmuxd against libwrap so that its services could be subjected to the same access controls and logging that TCP Wrappers provides, while allowing the administrator to continue using just the /etc/hosts.allow and /etc/hosts.deny files for those controls.
    The TCPMux protocol is described in RFC1078. There are a number of archives of RFCs on the 'net. Any good search engine should find them (start with the search engine at Linux Gazette's site since I know I've provided links to a couple of them in my past columns).
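    For reference, the handshake RFC1078 describes is tiny: the client connects to TCP port 1 and sends the desired service name followed by CRLF; the server answers "+" (accepted -- the service protocol then begins on the same connection) or "-" (refused). The reserved name "HELP" asks the server to list its services. A hand test against a host that implements it might look like this (the host name, address and service names here are invented):
    $ telnet bsdbox 1
    Trying 10.0.0.2...
    Connected to bsdbox.
    Escape character is '^]'.
    HELP
    mcserv
    myproto-v1
    Connection closed by foreign host.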
    Here's one description of this protocol with some notes about where it's supported:
    http://www.con.wesleyan.edu/~triemer/network/tcpmux/tcpmux.html
    I've suggested this project to a few open source programmers, but none have stepped up to the plate. Perhaps you could do it. Once a good implementation is available, we could encourage distribution maintainers to include it and programmers to use it rather than grabbing new ports and perpetuating the problems of "WKS" (well-known service port numbering).
    I'd particularly like to see 'mcserv' (the Midnight Commander communications service) and AMANDA (the "Advanced Maryland Automatic Network Disk Archiver") use this for their networking protocols. Those are such specialized protocols that they should use TCPMux rather than grabbing a port number for a protocol which will never be implemented in any other clients or servers.

    (?) A Staging Server

    From Mark on Wed, 01 Sep 1999

    Hello,

    I have an idea and don't quite know if I am tackling it the right way. I own a Mac and would like to set up an external server to help with development and testing of CGI scripts using Perl. I basically want to emulate my ISP. Am I right in thinking that I can buy a basic PC, replace Windows with Linux to make it a Unix box, and then run the Apache server with Fastperl etc. on top of that? Plug the whole thing in and serve pages across a network to the Mac (sounds easy when you say it like that). Any pointers, suggestions or advice will be useful.

    Regards Mark

    (!) This is referred to as a "staging server" or a "testbed" by sysadmins. It is basically that easy.
    The hard part is gleaning what your ISP's configuration really is. If you can read their /etc/httpd/conf/httpd.conf and related files (or prevail upon them for copies) then you can probably make it much easier for yourself. It also might be a bit of a challenge to collect all of the same modules that they are running under their copy of Apache.
    There are also a few tricky points to consider about the way you access your content. The most transparent (to your testing process and applications) will be to use "split DNS" --- where your Mac/client treats a local DNS name server as "authoritative" for the domain your (virtual) web server is configured to serve. Then your local name server points to your local clone of the web server when you're doing your testing, and to your ISP's web server the rest of the time.
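    As a sketch (the domain and address here are invented): the zone file on your LOCAL name server carries its own www record pointing at the staging box, while the ISP's public name server keeps the real one:
    ; on the LOCAL name server only
    www.yourdomain.com.    IN   A   192.168.1.10   ; the Linux staging server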
    Depending on the way you structure your web pages and CGI applications, it may be possible to dispense with the complication of "split DNS." It just depends on how many of your web pages and applications make specific hostname references as parts of their URLs and processing, and whether your development process allows you to regenerate those pages and CGI scripts with the necessary URL and hostname changes. It's possible to make all of your web pages "portable" (using relative links throughout your HTML, for example).
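    For instance (URLs invented for illustration), a host-relative link stays portable while an absolute one pins the page to one server:
    <!-- portable: resolves against whichever server delivered the page -->
    <a href="/cgi-bin/search.pl">Search</a>
    <!-- pinned: always goes to the ISP's server -->
    <a href="http://www.yourdomain.com/cgi-bin/search.pl">Search</a>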
    Instead of buying a basic PC and having to "replace Windows with Linux" consider buying a PC with Linux pre-installed. If you can't find one at a competitive price then contact your preferred vendor and let them know what you really want (a PC with Linux pre-installed, or a PC with no OS installed at all).
    Just replacing MS Windows with Linux (or any other OS) continues to support the widespread perception that people WANT MS Windows and that there is no market for alternatives. As more people adopt Linux, FreeBSD, etc. this becomes a misconception --- but it does nothing to encourage independent software vendors! Ultimately that hurts consumers.
    At Linux Online there is a list of hardware vendors that sell systems with Linux pre-installed. You can find it at:
    Linux Online - Linux-Friendly Hardware
    http://www.linux.org/hardware/index.html
    It would be crass of me to recommend a specific hardware vendor. It would also be a bad idea. I have friends who run VA Research and Penguin Computing. Dell is a strategic partner for my employer. I know people who work at SGI, Compaq/DEC and Sun (among others). They are all involved in Linux and they all produce hardware (most of them produce PC clones and are thus in rather close competition).
    So you'll have to make your own choices.

    (?) Id "x" respawning too fast: Murdered Mysteriously

    From Kelley Butch on Tue, 07 Sep 1999

    James,

    I've been running LINUX on my Thinkpad 600 for a few months now with good results. The other day I experienced a power outage and the system went down. Now, during boot-up and just before the "log-in" screen I get this:

    According to /var/run/gdm.pid, gdm was already running (process id) but seems to have been murdered mysteriously. INIT: Id "x" respawning too fast: disabled for 5 minutes
     

    and after 5 minutes I get the same error.

    I removed the pid file thinking that would solve the problem, but the pid file gets recreated and the errors start over again.

    The culprit seems to be the last line in my inittab file:
    x:5:respawn:/etc/X11/prefdm -nodaemon
    - (this is a link to /usr/bin/gdm)

    Thanks in advance,
    Butch

    Oh yeah! I've seen that on some Red Hat 6.x systems. 'prefdm' is a symbolic link to your preferred display manager (the original xdm, the newer kdm for KDE, or the culprit of your problem, gdm, the GNOME display manager).
    Try starting the system in single-user mode (or running the command 'telinit 3') to switch to the "normal" multi-user mode without any display manager (graphical login) running. Then remove the PID file and any stray core files in the root, /root and similar directories.
    You might also want to look for any UNIX domain sockets under the /tmp directory and /var. You can use the command 'find /tmp /var -type s -ls' to look for them.
    You'll normally find a couple of them under /var for things like the printer (might also be under /dev) and gpmctl (console mouse and cut/paste support) as well as one or two sockets for your X server(s). Those would normally be in the /tmp/.X11-unix/ directory and be named X0, X1, etc. (If you've never run multiple concurrent X sessions then you'll only see X0 under there).
    You probably don't have to do anything with those sockets. However, it might make sense to blow away the ones under /tmp. X will (re-)create those as necessary.
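    Pulling those steps together, a recovery session might run something like this (as root on a text console; the paths are the Red Hat 6.x defaults):
    telinit 3                    # multi-user mode, no display manager
    rm -f /var/run/gdm.pid       # remove the stale PID file
    rm -f /core /root/core       # clear any stray core files
    rm -rf /tmp/.X11-unix        # stale X sockets; X recreates these
    telinit 5                    # back to the graphical login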
    The fact that the version of GNOME gdm that shipped with Red Hat 6.x can't gracefully handle (clean up after) an inadvertent shutdown or other mishap is very disappointing.
    Personally, I still think GNOME is beta quality code. (Or at least it was when RH 6.x shipped.) It dumps core files all over the place, can't figure out whether there is a living process that owns a 'dead' socket, etc.
    Oh well. At least it's getting a bit better.
    I did grope around a bit at the GNOME web site: http://www.gnome.org.
    I did NOT see this question listed in their FAQ (which surprises me, since I would think that this would be a very commonly encountered problem among RH6/GNOME users). However, I did find a link to a bug tracking system. From there I searched for messages related to our "murdered mysteriously" problem. There was some indication that Martin K. Petersen is the contact for gdm and that he posted patches to resolve that (and several other) gdm issues.
    I also saw several references to a gdm2 (which presumably is a second version of the GNOME display manager).
    In any event, you may want to download a set of updates to your version of GNOME. Hopefully the fix to this problem is included therein. (I'm pretty sure that the GNOME CVS sources are updated, I just don't know if there are RH RPMs of the latest versions and patches readily available).

    (?) Large File Support Under Linux/x86

    From Tim on Wed, 08 Sep 1999

    Hello,

    I have a box on my network running Red Hat 6.0 (x86) that is going to be used primarily for backing up large database files. These files are presently 25 gigs in size. While attempting a backup over Samba, I realized that the filesystem would not allow me to write a file > 2 gig to disk. I tried using a large-file patch for kernel 2.2.9, but that only allowed me to write 16 gigs, and even that seemed buggy. Doing an 'ls -l' would show me that the file size of the backup was about 4 gig, but the total blocks in the directory, with no other files there, indicated a much higher number, like so:

    [root@backup]# ls -l
    total 16909071
    -rwxr--r--   1 ntuser   ntuser   4294967295 Sep  2 19:45 file.DAT
     

    I am well aware that a 64-bit system would be the best solution at this point, but unfortunately I do not have those resources. I know BSDi can write files this big, as well as NT on 32-bit systems... I am left wondering, why can't Linux?

    Thanks
    -Tim

    (!) Linux doesn't currently support large files on 32-bit platforms. I wouldn't trust an experimental patch to this job.
    Use FreeBSD for this (I've heard that it does support 63-bit lseek() offsets). Samba works just as well on FreeBSD as on Linux.
    If you really need to use Linux for this project then use it on an Alpha.
    Note: You could back these up to raw partitions (without filesystems made on them). However, I wouldn't recommend that.

    (?) From the Dim History: EQL Revisited

    Bandwidth Load Sharing w/o ISP Support

    Issue 13 was the very first issue that contained the Answer Guy's replies!


    From Andrew Byrne on Wed, 08 Sep 1999

    Hi there,

    I came across the information below on the web page http://sunsite.mff.cuni.cz/lg/issue13/answer.html which appears to be written by you.

    (!) Well, that would probably be mine. I guess that would be a mirror in Czechoslovakia.

    Since it dates back to late 1996, I was wondering if you have any new information on EQL. I have been told by someone who works for an ISP that EQL may still work even if it isn't supported at the ISP's end of the connection. He noted that incoming connections could only be directed to one dial-up connection's IP address, but all outgoing data could be sent via EQL.

    If this is true, then EQL may work for what I need, that being using it with two 33.6kbps PPP connections to provide a web server. All incoming requests would come via one PPP connection, but web traffic sent out would be shared across the two PPP connections.

    (!) Actually, if you use DNS round robin then incoming requests will be roughly distributed across each connection. Using "policy-based" routing and the "equal-cost multi-path" options in the Linux kernel can give you the load distribution on the outbound traffic.

    If you do know any more about how EQL works, could you please tell me if what I'm saying is true, or correct me if I'm wrong.

    (!) I think it will be better to address the objective. You want traffic distribution over multiple ISP links but you're asking about traffic distribution over multiple low-level links to a single ISP (EQL). They aren't quite the same thing.
    It is quite common for people to present a diagnosis and perceived solution as though it was their question. One of the things I learned as a tech support guy (and continually strive to remember) is to look past the question that's presented, and guess at the underlying objective.

    Thank you!
    Andrew Byrne.

    (!) Before I answer I'll also quote something I said at the end of my original answer:
    (After reading this you'll know about as much on this subject as I do; after using any of this you'll know much more).
    This is true of many things that I say in this column. It will be worth remembering as you read on.
    As far as I know EQL still has the constraints that I detailed back in 1996. Your ISP must participate with a compatible driver on his end; both of your lines must go to a single host at your provider's end.
    It's also important to note that EQL and other "bonding" or line multiplexing schemes will only increase your effective bandwidth. They do nothing to lower your latency. Here's a useful link to explain the importance of this observation:
    Bandwidth and Latency: It's the Latency, Stupid
    by Stuart Cheshire <>
    TidBITS#367/24-Feb-97 (Part 1)
    http://www.tidbits.com/tb-issues/TidBITS-367.html#lnk4
    TidBITS#368/03-Mar-97 (Part 2)
    http://www.tidbits.com/tb-issues/TidBITS-368.html#lnk4
    (Someone kindly pointed me to some copy of this article back when this column was published. Now, at long last, I can pass it along. I don't remember whether I was publishing follow-up comments to TAG columns back then).
    In any event EQL is not appropriate for cases where you want to distribute your traffic across connections to different providers. It's not even useful for distributing traffic load to different POPs (points of presence) for one ISP.
    However, there are a couple of options that might help.
    First, you could use simple DNS round robin. This is the easiest technique. It is also particularly well suited to web servers. Basically you get one IP address from one ISP, and another from a different ISP (or two addresses from different subnets of one ISP). You can bind each of these addresses to a different PPP interface. If you were using ISDN or DSL routers (connecting to your system via ethernet) then you'd use IP aliasing, binding both IP addresses to one ethernet interface in your Linux host. Then you create an A record for each of these in your DNS table. Both A records (or all of them, if you're using more than two) are under the name www.YOURDOMAIN.XXX.
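    In zone-file terms that's just two A records sharing one name (the addresses below are invented for illustration):
    www.YOURDOMAIN.XXX.    IN   A   10.1.1.5    ; address on ISP #1's subnet
    www.YOURDOMAIN.XXX.    IN   A   10.2.2.5    ; address on ISP #2's subnet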
    DNS round robin is quite simple. It's been supported for years. Basically it's a natural consequence of the fact that hosts might have multiple interfaces. So I might have eth0 and eth1 on a system known as foo.starshine.org. There is no law that says that these interfaces have to be in the same machine. I can create two web servers with identical content, and refer to both of them as www.starshine.org.
    The only change that was required for "round robin DNS" was to patch BIND (named, the DNS daemon) to "shuffle" the order of the records as it returned them. Clients tend to use the first A record they find. Actually a TCP/IP client should scan the returned addresses for any DNS query to see if any of them are on matching subnets. Thus a client on a 192.168.2.* address should prefer the 192.168.2.* address over a 10.*.*.* address for the same hostname. (For our purposes this will not be a problem, since 99.9999% of your external web requests will not be from networks that share any prefix with yours.)
    The load distribution mechanics of this technique are completely blind. On average about half of the clients will access you through one of the IP addresses while the other half use the other address. In fact, for N addresses in a round robin farm you'll get roughly 1/N of the requests routed to each.
    This is the important point. Since you're not "peering" with your ISPs at the routing level (you don't have an AS number, and you aren't running BGP4), the links between you and your ISPs are static. Thus the IP address selected by a client determines which route the packets will take into your domain.
    Note, only the last few hops are static. Your ISP might be using some routing protocol such as RIP or OSPF to dynamically select routes through their network, and the backbones and NAPs (network access points) are always using BGP4 to dynamically select the earlier portions of the routes --- the ones that lead to your ISP.
    I realize this is confusing without a diagram. Try to understand this: each packet between your server and any client can travel many different routes to get to you. That's true even if you only have a single IP address. However, the first few hops (from the client's system to their ISP) are usually determined by static routes. The last few hops (from your ISP to your server) are also usually along static routes. So, for almost all traffic over the Internet it's only the middle hops that are dynamic.
    The key point about DNS round robin load balancing (and fault tolerance) is that the different IP addresses must be on different networks (and therefore along different routes).
    So, this handles the incoming packets. They come into different IP addresses on different networks. Therefore they come in through different routes and thus over different connections to your ISP(s).
    Now, what about outgoing traffic? When we use round robin to feed traffic to multiple servers (mirrored to one another) there is no problem. Each of the servers can have different routes (outbound), so the traffic will return along roughly the same route as it traversed on its way in.
    When we use round robin to funnel packets into a single system we have a problem.
    Consider this: an HTTP request comes in on 192.168.2.34 from 172.17.89.10; the web server fashions a response (source: 192.168.2.34, destination: 172.17.89.10). What route will this response take?
    The default route.
    There can normally only be one default route. Normally only the destination address is considered when making routing selections. Thus all packets that aren't destined for one of the local networks (or one of the networks or hosts explicitly defined in one of our routing tables) normally go through our default.
    However, this is Linux. Linux is not always constrained by "normalcy."
    In the 2.2 and later kernels we have a few options which allow us finer control over our routing. Specifically, we enable "policy-based routing" in the kernel, get the "iproute" package and configure a set of routes based on "source policy." This forces the kernel to consider the source IP address as well as the destination when it makes its route selection.
    Actually it allows us to build multiple routing tables, and a set of rules which select which table is traversed based on source IP address, TOS (type of service flags) and other factors.
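As a rough sketch (the addresses and table number here are made up), the 'ip' command from the iproute package can express such a source policy:

# replies sourced from our second address go out through the second ISP
# (table 200 is just an arbitrary spare routing table)
ip route add default via 10.11.12.1 dev eth1 table 200
ip rule add from 10.11.12.34 table 200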
    I found a short "micro HOWTO" on the topic at:
    Linux Policy Routing
    http://www.compendium.com.ar/policy-routing.txt
    ... that site was hard enough to reach that I've tossed a copy on my own web site at:
    Starshine Technical Services: The Answer Guy
    http://www.starshine.org/tag
    (I should do that more often!).
There are also some notes under /usr/src/linux/Documentation/networking/ in any of the recent (2.2.* or later) kernels.
    I guess it's also possible to enable the "equal cost multi-path" option in the kernel. This is a simple (and crude) technique that will allow the kernel to use redundant routes. Normally if I were to define two routes to the same destination then only the first one will be used, so long as that route is "up." The other (redundant) route would only be used when the kernel received specific ICMP packets to alert it to the fact that that route was "down." With multi-path routing we can define multiple routes to a given destination and the kernel will distribute packets over them in a round-robin fashion.
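With that option enabled, a single route entry can carry several next hops; something like this (the gateways are hypothetical):

# one default route with two next hops; the kernel alternates between them
ip route add default scope global \
        nexthop via 192.168.2.1 dev eth0 \
        nexthop via 10.11.12.1 dev eth1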
    I think you could enable both of these features. Thus any outbound traffic which matched none of your policies would still be distributed evenly across your available default routes.
    I hope you understand that these techniques are ad hoc. They accomplish "blind" distribution of your load across your available routes/connections without any sensitivity to load or any weighting. This is a band-aid approach which gives some relief based on the averages.
    Let's contrast this to the ideal networking solution. In an ideal network you'd be able to publish all of the routes to your server(s). Routers would then be able to select the "best" path (based on shortest number of hops across least loaded lines with the lowest latencies).
In the real world this isn't currently feasible for several reasons. First, you'd have to have an AS (autonomous system) identifier. Your ISPs (all of them) would have to agree to "peer" with you. They'd have to configure their routers to accept routes from you. Naturally they would also have to be "peering" with their interconnects and so on. Finally these routes would then take up valuable memory in the backbone and 2nd tier routers all over the Internet. This extra entry in all of those routers is an additional bit of overhead for them.
Ultimately a router's performance is limited by the number of routes it can hold and the amount of computation that it takes to select an interface for a given address. So it's not feasible to store entries for every little "multi-homed" domain on the Internet. In fact the whole point of the CIDR "supernetting" policies was to reduce the number of routes in the backbone routers (and consequently reduce the latencies of routing traffic through them).
    So we use these cruder techniques of "equal-cost multi-path" and "policy-based" routing. They are things that can be done at the grassroots level (which is the Linux forte, of course).

    (?) High School Modem

    From andrew on Tue, 21 Sep 1999

I am a high school student/web developer/tech and I've been tampering with Linux lately. I've been having a lot of fun (if that's the right word...) with it, but I cannot get my modem to work. I have a Telepath 56K modem built by USR for Gateway 2000 for use in OEM computers like mine. I've been looking everywhere for help to no avail... could you help me out, Answer Man?

    System information: Intel 440LX w/ Intel P2-300, running red hat linux (kernel 2.2.10)

- Thanks, Andrew Shrum

    (!) It's probably a winmodem. Search the Linux Gazette FAQ and you'll see what I have to say about those (and lots of hints on using real modems).

    (?) Who is Jim Dennis?

    From SeanieDude on Tue, 21 Sep 1999

    Why the f*ck is your name listed so damn much in hotbot?

    (!) Err. 'Cause I write a column for an online Linux magazine. I've been writing the "Linux Gazette's Answer Guy" column for about three years.
    Honestly, my name only returns 2,000 hits on Hot Bot. Linus Torvalds (Linux' namesake) returns 9,990. If you search on the terms jim and dennis (not the phrase "jim dennis" with quotes) you get a bunch of stuff about Dennis Hopper being interviewed by Jim Carrey (or something like that). On Yahoo! my "vanity rating" is 4200 (which includes some false hits of course).
I only get about 1,000 web page hits on Alta Vista (including links to some "trivia question" pages from the Linux Users Group at the Los Angeles Airport(*)). I remember that this was a trivia question at some Linux conference, though I don't remember which one.
    • (http://www.pelourinho.com/linuxatlax/linuxtrivia/tsld061.htm)
    Oddly enough I'm also listed in the credits of the sed FAQ (surprised me --- I don't remember helping on that; but they got one of my e-mail addresses right so it's probably from when I was active on the comp.lang.awk newsgroup).
At Deja.com I found an amusing reference to myself. Someone in the Netherlands apparently adopted an old .sig (signature quote) of mine, on which he left my name as the attribution:
    "Any sufficiently advanced Operating System is indistinguishable from Linux."
    ... though I thought I'd said "UNIX" instead of Linux in that one.
    Anyway, searching for yourself online can be fun and entertaining.
You can read more than you want to know about me (and quite a bit more about Linux) by going to http://www.linuxgazette.com (and its many mirrors), to http://www.starshine.org (my home, and my wife's consulting service), to http://www.linuxcare.com (my employer), and at:
    http://www.amazon.com/exec/obidos/ASIN/1562059343/o/qid=937947698/sr=2-1/002-3892219-4037450
    (my new book: "Linux System Administration").

    [ As we go to press, the book is already hitting the stores, as well. It has a purple spine, and a white cover with a river scene along the top. -- Heather ]


    (?) Programming Question about Regaining stdin/stdout

    From Marco Mele on Wed, 22 Sep 1999

After a freopen() to redirect stdin or stdout, how do I get stdin back to the keyboard and stdout back to the screen?

    (!) It sounds like a C/C++ programming question. I'm not a C programmer.
    However, I might try keeping my stdin and stdout file handles and using an open() call on new file handles for my other file operations. Then you can perform the relaying yourself, and thus control what data goes to each of these streams.
    I suppose you could save the value of your current TTY (using the coding equivalent of the 'tty'(1) command; grab its sources for an example) and re-open them later. (If this was a "do my homework" question your assignment is going to be pretty late, and probably wrong).

    [ This message arrived while Jim was on a two week assignment in Japan... although he has begun to read into the backlog, there are likely to be a few more late answers coming your way, gentle readers. -- Heather ]


    (?) Outgoing Mail Problems

    From ronsueboe on Wed, 22 Sep 1999

    Answers Guy

Using Red Hat 5.2 and S.u.S.E. 6.1 I have suddenly run into the same problem. Very terse email replies go --- mostly. Longer ones don't. Receiving doesn't seem to be an issue. Sendmail is activated, but all my emailing is through Netscape on a dial-up connection, and I boot the machine anytime I need to use it; otherwise it's off. I tried to disable Sendmail but this doesn't seem to help.

    Any ideas? One more thing. Under RedHat the first message usually would go but nothing after it though it was not hard and fast.

    Any ideas? Ron

(!) I'm not sure I understand the whole question. It sounds like you receive mail just fine, but you have intermittent problems sending mail. You think it might be related to the message size. You use Netscape (Communicator, presumably) to send your mail. You think 'sendmail' might be involved.
    I think Communicator tries to send mail directly (looking up your recipient's host MX record and attempting to connect to its SMTP port). It may be that you have it configured to connect to your localhost, or it might "fall back" to relaying through your localhost MTA (sendmail) when it sees a message of a given size, or when it can't connect directly to the appropriate recipient system.
Run the 'mailq' command to see if the messages are landing in your local mail queue (sendmail). If so, try connecting to your ISP and running a few copies of 'sendmail -q &' (you can run several of these in the background so that their MX lookups and TCP conversations will occur in parallel). Then you might want to reconfigure NS Communicator to relay your mail through your ISP's mail host (often the same one from which you fetch your mail --- your POP or IMAP server).
    If not, you might want to look at your /var/log/messages more closely --- to see if your mail is going through there. You could also run 'tcpdump' to watch the traffic on your PPP (or other TCP/IP) line, and see if the traffic is going through your interface at all.
    Personally I don't use GUI mailers. I also don't like it when an MUA (user agent) tries to perform transport services (the job of an MTA). I prefer to be able to configure system and site policies on host and network wide bases. So the MTA can do masquerading (making my "From" addresses conform to reasonable patterns), and routing (through my firewalls, etc). Of course this is the bias of a professional sysadmin who works with large sites. For an individual home user it's really about the same either way (though often easier to play with the GUI MUA than to configure your MTA).
    (If 'sendmail -q' does help --- you may want to add it to your PPP 'ip-up' script, so a queue run is performed every time you bring up your ISP link).
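(That fragment might look something like this; the exact path of the ip-up script varies by distribution, so check where your PPP scripts live:)

#!/bin/sh
# /etc/ppp/ip-up (fragment) --- runs each time the PPP link comes up;
# flush any mail that queued while we were offline
/usr/sbin/sendmail -q &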
    I hope that works out. You might want to try some tests with 'elm' or some other simple MUA as part of your troubleshooting --- if the 'mailq' and 'sendmail -q' commands don't do the trick.

    (?) Adding Fonts

    From Swami Atmarupananda on Wed, 22 Sep 1999

    I've got a long shelf of linux books, and none of them say anything about how to add fonts. They all tell you in great detail what fonts are on the system, but not how to add them. I'm using S.u.S.E. 6.1 (shortly to upgrade to 6.2), but have used RedHat also (5.2 and 6.0).

    I understand that TrueType support is not yet built into the kernel (can be used with limited success in Star Office 5.1), but perhaps soon will be. But going with the present font support, how can one add fonts?

    Thanks. Swami Ananda

    (!) I assume you're talking about adding display fonts to your X system.

    [ It's worth noting that in Linux terms, X is not the kernel. The console driver isn't going to support TrueType, although it can be convinced to use special raster fonts, and if I recall correctly SuSE actually makes it quite easy to set those (use YaST). -- Heather ]

    I've never added any X fonts to any of my Linux boxes. As many of my long time readers know I mostly work with servers and text. However, Heather, my lovely wife (and assistant LG TAG editor) does use X quite a bit more.
From what she says you should be able to just add your new fonts (BDF or Type 1) into any one of the directories listed in your XF86Config file's FontPath. Edit the config file, look for the "Files" section and you should see the list of FontPath entries. Alternatively you can put your new fonts in a new directory (/usr/local/X11/fonts, maybe) and add that to your FontPath.
    After that you MUST RUN the 'mkfontdir' command to generate the .../fonts.dir files in each of these directories. Read the 'mkfontdir' man page for more details on that.

    [ The mkfontdir is really important because the resulting text files help tell X how to internally map between a font name you and I might consider normal (for example, "fixed"), the file name (6x13.pcf.gz), and the much longer font designator created by X applications (ex: "-misc-fixed-medium-r-normal-*-15-*-75-75-c-90-iso8859-1"). It's certainly possible to create the entries yourself, by hand, and I generally do, just to improve my understanding. The important thing is to look at the files that should already be there. And of course, it won't take effect until your font server gets restarted. -- Heather ]
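Putting that together, a minimal sketch (the directory name is just an example):

# create a directory for the new fonts and index it
mkdir -p /usr/local/X11/fonts
cp newfont.pcf.gz /usr/local/X11/fonts
mkfontdir /usr/local/X11/fonts
# tell a running X server about it (or add a FontPath line instead)
xset fp+ /usr/local/X11/fonts
xset fp rehash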

    Other subsystems have their own font handling.
For example you can add fonts to the LaTeX (teTeX) system using its 'fontimport' script (read its man pages, the teTeX.FAQ, and/or browse Thomas Esser's web pages at: http://tug.org/teTeX). teTeX is a distribution of the TeX/LaTeX typesetting system. LaTeX is a set of TeX macros, style sheets and document class definitions, and TeX is Donald Knuth's typesetting language: a programming language for describing typeset pages (particularly pages of technical information and mathematical formulae).
There is another text-driven typesetting package on a typical Linux system called 'roff' (which is actually 'groff', the GNU 'roff' suite). 'roff' is short for 'runoff' (apparently an old typesetting term). Your man pages are in *roff format, and there are packages like 'tbl' (table typesetter), 'pic' (picture typesetting language), and 'eqn' (for equations and formulae). You can add fonts to the 'groff' subsystem using commands like 'afmtodit' and 'tfmtodit'.
    Of course your printing subsystem might have many fonts of its own. For example you might be using GhostScript (gs) to render PostScript into your printer's native page description language (PDL). GhostScript allows you to make many printers emulate a PostScript printer. This allows you to transparently use applications that only know how to generate plain text or PostScript (many) without paying for a PostScript printer.
Generally you print TeX/LaTeX files using 'dvips' (.dvi, or "DeVice Independent," is the intermediate or "object" file output of TeX and LaTeX), and *roff files using 'grops' (the 'groff'-to-PostScript driver). However, there are many utilities (dviware) that can directly drive many printers in their own native PDL.
    As for TrueType support. You are correct. The current releases of XFree86 don't include support for TrueType. However, they do point out a number of alternative solutions in their FAQ at http://www.xfree86.org (where most people with questions about X under Linux should go instead of me!)
    They mention that they are planning to incorporate TrueType support into the next major release (presumably that will be 4.0). Meanwhile individual programs can implement their own TrueType or other independent font support as needed. It's a shame for them to do that, as the windowing system should provide it in one place rather than having each client do it separately. However it can be done.

    [ On my primary graphics workstation (betelgeuse, our VARstation II) I run a copy of xfs (the X external font server) that understands TrueType as well as the usual run of fonts. Running font services externally means canopus, deneb, and if I was really crazy antares, can all share all these TTFs that I took the effort to set up. It was part of a binary package available for xfstt, which I found when I read an LG article about some other app trying to provide TrueType support.

    Anyways, with this, I can use TTF's in the GIMP very easily. I now have too many fonts to view them effectively in xfontsel, but I don't care. It is frustrating that Netscape won't properly scale TrueType fonts for me, but that appears to be Netscape's problem, not mine. -- Heather ]

    I gather that the GTK canvas has some support for PostScript fonts (GTK is the GIMP Toolkit, a set of programming libraries originally written for the GIMP, GNU Image Manipulation Program, and used as the display technology for GNOME). Perhaps they'll add TrueType to that, and/or to DGS (Display GhostScript, part of the GNUStep project) before XFree86 4.x ships.
    As you can see, fonts are a complex issue. However, you probably were mainly interested in just displaying them, and possibly adding printer support. I suspect you aren't using LaTeX or 'groff' typesetting.
GhostScript ('gs') often gets installed with your distribution and just works without much further thought. However, I don't know if you can find a .ttf-to-.afm conversion tool; you would probably need one to get printing to work.
    Hello! Here's a link:
    The (preliminary) TrueType HOWTO
    http://www.moisty.org/linux/TrueType-HOWTO.html
    Look at questions 3.1 and 3.2!
    Often I don't find the best FAQs in my web searches until I've written most of my message for LG. That's because I write "off the cuff" and do the research (usually in 'lynx', in another 'screen' window as I type).
    It looks like newer versions of GhostScript can be compiled to support TrueType fonts. I guess that would have been in your 'gs' man pages if you had the new version.
I've copied Brion Vibber, the author of this (preliminary) HOWTO, to encourage him to submit that HOWTO to the LDP as soon as possible. (Brion, don't worry about whether it's done! It's info we want to see in the LDP tree NOW!).
    I hope all of that helps.

    "Linux Gazette...making Linux just a little more fun!"


    More 2¢ Tips!


    Send Linux Tips and Tricks to


    New Tips:

    Answers to Mail Bag Questions:


    Laptop in different places, setting up different DNS

    Tue, 21 Sep 1999 23:13:08 -0400
    From: Pierre Abbat <

    I use a laptop at home, at the office, and elsewhere. I set up a script so that it recognizes where it is when it boots. It is /etc/rc.d/whereami and has mode 744:

    
    #!/bin/sh
    # Figure out where I am by pinging known hosts.
    if [ -z "`/sbin/ifconfig|grep Ethernet`" ] ; then sleep 2 ; fi
    echo -n elsewhere >/etc/where
    ping -c 1 192.168.97.1 && echo -n home >/etc/where
    ping -c 1 192.168.96.1 && echo -n office >/etc/where
    chmod 0644 /etc/where
    

    (Names and numbers have been changed to protect the innocent.) I call this from /etc/rc.d/init.d/inet, which is run after pcmcia, so the card is up by then. (The sleep 2 is in case it isn't.) Then I do the following:

    
    cp /etc/resolv.conf.`cat /etc/where` /etc/resolv.conf
    

    This installs the appropriate nameserver list.
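Other per-location settings can key off the same file; for example (assuming the hosts I ping are also the local gateways):

# pick a default route based on where we booted
case `cat /etc/where` in
  home)   route add default gw 192.168.97.1 ;;
  office) route add default gw 192.168.96.1 ;;
esac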

    The two networks I'm on are next door to each other, which means I can supernet the card and ping both without ifconfigging it. But if one were 10.*.*.* and the other on 192.168.*.*, I'd have to ifconfig eth0 in whereami to ping them both.

    phma


    Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


    ANSWER: reply to Linux on a laptop

    Thu, 2 Sep 1999 14:09:36 -0700
    From: Russ Johnson <

    You said:

    I'm a linux newbie. I installed linux (redhat 5.2) on a laptop with an ATI rage LT PRO AGP2X, and there's no driver for this graphic card to run Xwindow, I tried to find one on the web, but without success, I also tried other ATI drivers (like ATI rage pro and other MACH64 drivers) without better results. Is there any solution ? Please help me...

You bet there's a solution. It's not perfect (yet), but it works well until XFree86 gets a new server out there. The solution is to use the Frame Buffer server. Details are at www.0wned.org/~cain/ragefury.htm. Other than that, the only solution available is to purchase a commercial X server.

    Russ


    ANSWER: Funny signature

    Fri, 3 Sep 1999 03:03:41 +0300 (IDT)
    From: Mikhael Goikhman <

    Hi, Csaba Feher.

I am referring to your tip in LG #45. Please don't get me wrong, I don't want to bash you; why should I? :)

1) The sigchange script itself has some redundant lines, IMHO (rm, cat, echo?). Here is a smaller version:

    
    #!/bin/sh
cp $HOME/.signature.basic $HOME/.signature > /dev/null 2>&1
    /usr/games/fortune -s linuxcookie computers >> $HOME/.signature
    

2) It is not very good to put home-grown scripts into /bin or /usr/bin. This is what /usr/local/bin and $HOME/bin are for.

3) It is not very good to put things into /etc/rc.d/rc.sysinit. This is what /etc/rc.d/rc.local is for.

    Have a nice day, Mikhael.


    ANSWER: DNS on the fly

    Fri, 3 Sep 1999 12:40:53 +0900
    From: Dmytro Koval'ov <

    Ernst-Udo Wallenborn <[email protected]> suggests:

I use the SCHEMES facility of the PCMCIA package to solve a related problem: how to use a laptop in two LANs with different IP addresses, different domains, and (naturally) different DNS servers.
    Basically you set up a file /etc/pcmcia/network.opts which contains all network options, esp. something like:
    
    case "$ADDRESS" in
    home,*,*,*)
    [snip]
    SEARCH="domain.com"
    DNS_1="1.2.3.4"
    DNS_2=""
    DNS_3=""
    [snip]
    ;;
    work,*,*,*)
    [snip]
    SEARCH="work.com"
    DNS_1="5.6.7.8"
    DNS_2=""
    DNS_3=""
    [snip]
    

Then, when booting with LILO you can append SCHEME=home or SCHEME=work, or, better, write the entries into /etc/lilo.conf directly and just type 'home' or 'work' at the LILO prompt.

Well, maybe I was lucky enough --- I didn't understand what SCHEMES meant when I was doing my setup ;)

The problem with Ernst-Udo's approach is that you need to reboot the system when you come home from work. But this is the Linux world, and nobody needs a reboot just to change the IP address and/or DNS.
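(In fact, if your pcmcia-cs is recent enough, the scheme can be switched at runtime with cardctl, so not even a reboot or card re-insertion should be needed:)

# switch PCMCIA schemes on the fly
cardctl scheme home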

Another approach, which I'm using with the same /etc/pcmcia/network.opts file, is to have different setups for different PCMCIA slots. In this file you can find this comment:

    
    #
    # The address format is "scheme,socket,instance,hwaddr".
    #
    

This comment explains the setup below:

    
    case "$ADDRESS" in
    *,0,*,*)
    [snip]
        IPADDR="1.2.3.40"
        SEARCH="domain.com"
        DNS_1="1.2.3.4"
        DNS_2=""
        DNS_3=""
    case "$ADDRESS" in
    *,1,*,*)
    [snip]
        IPADDR="5.6.7.80"
        SEARCH="work.com"
        DNS_1="5.6.7.8"
        DNS_2=""
        DNS_3=""
    

With this, you only have to plug your NIC into the 1st PCMCIA slot at home and into the 2nd slot at work. Your IP address and DNS are set correctly upon card insertion! No reboots.


    ANSWER: ATI rage LT PRO AGP2X

    Sat, 04 Sep 1999 10:53:59 +0200
From: August Hörandl <

Hi, there are some cards which can be used with a new framebuffer X server.

There is a description at home.t-online.de/home/mueller.elmar/linux.htm (German only).

    regards
    Gustl


    ANSWER: LG Formatting problems

    Wed, 8 Sep 1999 16:52:41 +0100
    From: James Tappin <

A clue as to the source of the Opera formatting problem (Mailbag Sep '99 --- message from Bjorn Eriksson, 27/Aug) comes from the fact that KFM (in the 1.1.2 pre 3 release of KDE) has the same problem, which strongly suggests a Qt problem, as (IIRC) Opera for Linux is also Qt-based.

    Not sure of any way around it though.

    James Tappin


    ANSWER: SiS6326

    Thu, 9 Sep 1999 14:11:23 -0500
    From: McKown, John <

    There is a commercial driver from Xig which is supposed to support this card. You can look at www.xig.com/Pages/CardMfgrSiS.html

It is EXPENSIVE: US$99.95! I've not used the Xig X server, so I don't know how good it is. I have seen some good reviews.


    ANSWER: AGP2X

    Thu, 9 Sep 1999 14:11:48 -0500
    From: McKown, John <

    Have you looked at Metro Link? Go to http://www.metrolink.com. They indicate that they have a driver for this card. Actually it says "Rage LT Pro AGP" not "AGP2X". I don't know if it is any different. However, it is not free. It costs US$39. If you have a credit card that they can accept, you can download the driver from their FTP server. I have had their driver for 2 days now for my STB Riva 128/ZX which did not work well with the XFree86 supplied driver. It works very well with their driver. Just a thought.

    I hope it is of some help to you.

    By the way - your English is quite good.


    ANSWER: Poor Internet Speed

    Thu, 9 Sep 1999 14:12:03 -0500
    From: McKown, John <

If your system got good speed at your friend's house but not yours, then I can only think of one of two possibilities: one, your friend has a better modem, or two, your friend has a better telephone connection. I would bet on the telephone connection. I regularly connect at around 44,000. I have a friend who says that he can only get around 24,000. But he is in the "boonies" and I'm using a commercial grade line to my house.


    ANSWER: Clearing Lilo from MBR

    Thu, 23 Sep 1999 02:34:20 -0700
    From: Jim Dennis <

    Just read the item on clearing lilo.

All I do is boot from a DOS (5 or greater) boot disk and issue the command:

    fdisk /mbr

That seems to fix anything, including boot sector viruses. Maybe Linux fdisk would take the same parameter. I enjoy your column; keep up the good work. Best wishes,

    norm

    The /MBR option was undocumented and only introduced in MS-DOS 5.0. I don't remember the question to which you were referring. If I didn't mention FDISK /MBR it was probably because I was not assuming that the user was trying to restore an MS-DOS 5.0 or later boot loader to their system.

    Linux fdisk is a different program and doesn't touch the boot code in the MBR. It only works on the partition tables (which comprise the last 66 bytes of the MBR and possibly a set of others for extended partitions).

    There are several Linux programs which do write boot records. /sbin/lilo is the most commonly used. 'dd' will do in a pinch (if you have a .bin image to put into place).
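For example, assuming you had previously saved an image of your boot code (the mbr.bin name and /dev/hda here are illustrative):

# save the 446 bytes of boot code (not the partition table) ...
dd if=/dev/hda of=mbr.bin bs=446 count=1
# ... and later put it back
dd if=mbr.bin of=/dev/hda bs=446 count=1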

BTW: don't count on /MBR to fix a virus. Some viruses encrypt portions of your filesystem, thus causing major problems if they aren't removed correctly. To prevent infection by boot sector viruses, disable the "floppy boot" options in your BIOS. You should only enable those long enough to perform an OS installation or system recovery, and disable them immediately thereafter. To prevent viral infection by "multi-partite" and "file infector" viruses, stop running MS-DOS. To avoid MS Windows macro viruses, avoid MS Office, MS Exchange and related software (with virus^H^H^H^H macroing hooks built into them).


    ANSWER: Why are they trying to telnet into my Linux box?

    Fri, 24 Sep 1999 14:32:22 -0400
    From: Rick Smith <

Since my previous letter about Dalnet providers trying to connect to my Linux box via telnet port 23, I have found out that they are also trying port 1080. I have instituted a policy of dropping all incoming connections via a command run from hosts.deny:

    
    /sbin/ipfwadm -I -i deny -S %a
    
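One way to wire that up (assuming your tcpd was built with the hosts_options extension; %a expands to the client's address):

# /etc/hosts.deny (fragment)
ALL: ALL: spawn /sbin/ipfwadm -I -i deny -S %a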

    I hate to do this to my niece, but I don't know of any alternative until these dalnet jerks stop this intrusive practice.

    Anyway, my niece has moved to other irc providers that don't do this kind of thing.


    This page written and maintained by the Editor of the Linux Gazette,
    Copyright © 1999, Specialized Systems Consultants, Inc.
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Linux and the Future

    By


The spread of the Internet at the start of this decade had a profound impact on the development of free software. Suddenly, a large number of people could start collaborating on large free software projects without being in large universities or corporations. Through mailing lists and FTP sites, many people could put their free/fun/project time into building programs that would be too much for any single person alone. Fundamentally, the Internet is a new production technology that has made software production cheaper, just as the sewing machine made clothes production cheaper.

Today, free software has grown into a complete system, from a very high-quality kernel (Linux and others) up to some of the best user applications. In fact, not only do we have free programs to do the job of most commercial applications; in many cases, we have more than one free project with very similar functions (i.e., different philosophies or scopes). This free, complete and high-quality system was bound to attract the attention of companies, not only because Linux and other free software were growing into a large market, but also because they were in direct competition with (and in many cases beating) companies' own products. Companies need to sell programs for this platform and need to figure out a way of competing with this new way of making software.

    Possible problems facing Linux in the Future

    The huge growth in the use of Linux and free software has triggered a great increase in the number of commercial and non-commercial developers; this is leading to an ever-increasing rate of growth and innovation. In the midst of all of this great evolution, I see a number of factors that may end up hurting Linux and other free software in the long run. Here are some of them:

    1. Intolerance

There seems to be a certain amount of intolerance towards software produced by people with different beliefs regarding free software in general, and towards commercial software in particular. This, in my judgment, is quite dangerous, since commercial software is one of the best sources of innovation in software, simply because commercial people can dedicate themselves to it. Software can't always be given away freely, since companies have to make money to cover costs. Also, sometimes it would be disastrous for companies to give their programs away; e.g., giving away drivers for hardware that cost millions in R&D could reveal many hardware secrets to competitors. The biggest problem with commercial software is not that it is commercial, but that monopolies sometimes arise, stopping competition and reducing the quality of the software. If free and commercial software can find a way to co-exist, we could all enjoy the best of both worlds. A similar type of intolerance exists toward programs that are written under different licenses; this is too bad, since the best feature of free software is the freedom to choose.

    2. Stagnation

Linux (and UNIX in general) was extremely well-designed for the needs of the people of its time. As the requirements of users and applications change, the system must continuously change as well. Linux has done better than other UNIX systems in dealing with this change; e.g., the FHS (Filesystem Hierarchy Standard) is much more in touch with user requirements than many commercial UNIX systems. There is still room for improvement, in my opinion. The important thing to remember is that change has to be emphasized, and standards should be there to facilitate change by providing a common working ground, not by hindering it.

    3. False Oversimplification

With all the pushing for making Linux easier to use, a number of programs try to imitate other operating systems by having the computer manage itself while the user just watches, without knowing what is actually happening. This lack of understanding of, or distinction between, the different parts of the system prevents the user from using those parts in new and creative ways. In fact, this flexibility is one of the best features of Linux. This doesn't mean that graphical interfaces are not required--quite to the contrary, I think we are in desperate need of properly designed ones--it just means that such an interface should be thought out. It should reflect the way a typical Linux system is put together and at the same time have room to grow as different components are added in the future.

    Factors for Ensuring the Continued Success of Linux

    There are a number of strategies I feel free software should adopt. Most of these are extensions of things people have already been doing.

    1. Standardization

This is one of the most vital requirements for development on the Internet since it allows many people to collaborate on programs that will run on common platforms. Free software has a long heritage of being very standards-compliant. For example, Linux has been POSIX compliant from the start. There are also free implementations of NFS (for networking), X (for windowing), OpenGL (for 3D graphics) and many others. In light of this heritage, it is truly disturbing to read things like ``requires Red Hat Linux'' (even though I think it is one of the best distributions). Open standards for both software and hardware components should be published and maintained. I would suggest that a standard (``Linux 2000'' would be a nice name) be established that defines everything a hardware or software developer would need to guarantee that his program or driver would work on any system that is compliant. This standard should not only include things like the FHS; standard packages would also be needed. It is very important to realize that distributors and manufacturers will push their own standards if open standards are not published and maintained by the Free Software community, and in that case the control will not be in the hands of the community.

    2. Componentization

The idea is to build the system from separate components with clear boundaries between them such that you can always plug components into the system and not have to rely on a single source for anything. This can be achieved by insisting on standards for how different components integrate into the system and by separating application-specific configuration files from application-neutral data, so that competing applications or services can use the same information. The ultimate goal of componentization should be to make free software the backbone of everything. When thinking about free software, Richard Stallman suggests thinking ``free speech, not free beer''; as an extension of that, I would suggest thinking of free software as ``free air'': it is everywhere and everyone needs it.

    An Example

As an example of how the ideas I suggested in the last section can be applied, I decided to put together my own desktop Linux system using them. I tried to make my system as standards-compliant as possible, but also to include all the ``luxuries'' of a complete desktop system (this included man pages, X, KDE, a number of languages and many other things).

    1. How did I do it?

My system uses one large (500MB+) file as its root file system and another file (64MB) for swap. It boots off a small temporary RAM disk and mounts the root file system and the swap through the loopback device (/dev/loop0). One advantage of this setup is that it is very easy to install on different computers, since it's just a matter of copying one directory to the new machine.
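In outline, the boot scripts do something like this (device names and paths are illustrative):

# attach the root image to a loop device and mount it
losetup /dev/loop0 /linux/rootfs.img
mount -t ext2 /dev/loop0 /newroot
# attach and enable the swap file the same way
losetup /dev/loop1 /linux/swap.img
swapon /dev/loop1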

    The root disk was made bootable by copying some files from the /bin, /sbin and the /lib, as well as creating and tweaking some files and directories in /etc. Now that the system was booting, I needed to compile the other components of the system. As a first step I needed a compiler, so I copied gcc 2.7.2.3 and compiled egcs 1.1.1 and installed it. The various other components of the system were then compiled and installed starting from the basic (X, common libraries and utilities) and then progressing to applications (KDE, xv, GNUStep, Kaffe, etc.).

    2. System Description

By examining the file-system structure, you can clearly see the way I tried to implement some of the ideas in the previous paragraphs. Although it is almost fully FHS 2.0 compliant, a number of features make it distinctively different. To begin with, files in the /usr hierarchy are severely restricted. Only three main file types are in /usr. The first is the binaries expected to be in any modern UNIX (e.g., head, telnet, ftp, etc.). The second group of files is programs or libraries required by many other programs or needing special root access to install properly. This category contains various libraries in /usr/lib and /usr/local/lib and X in /usr/X11R6. Finally, architecture-independent, shared data files are stored in /usr/share, as is recommended by the FHS. The emphasis in my system is that the share directory should be a place where programs can share data between different applications on the same system; hence, most files are symbolic links to the data in the program's home directory.

Another major feature of the system is the modification of the structure of /etc. Instead of the current practice of having all the files in one flat directory, a number of trees have been added. This is done to decrease the clutter and make the structure of the system clearer. For example, /etc/man.conf is now stored in /etc/utils/man/man.conf while /etc/rc.d is now /etc/sys/init/rc.d, with symbolic links maintained to the old locations of files for the sake of compatibility. As is required by the FHS, configuration files for programs in /opt can be stored in /etc/opt, but in addition, subdirectories exist under it for the same reasons given above. In my judgment, these small modifications to the /etc hierarchy can easily fulfill the requirement of a registry system for Linux with only a small modification to the way things are done.
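Moving one file into the new tree while keeping the old name working is then just (using the man.conf example above):

# relocate man.conf and keep a compatibility link at the old location
mkdir -p /etc/utils/man
mv /etc/man.conf /etc/utils/man/man.conf
ln -s /etc/utils/man/man.conf /etc/man.conf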

In my system, most applications and programs live in the /opt directory or a subdirectory of it. For example, Kaffe (the free Java VM) is installed in /opt/languages/kaffe while KDE is installed in /opt/windows/kde. The thinking behind this is that all of a package's files are stored in the directory designated for it in the /opt hierarchy, and a number of well-defined points of contact are established between a package and the rest of the system, including /opt/bin, subdirectories of /usr/share, as well as a number of other directories.

Although this looks similar to the FHS, the goal is totally different. In my system, a package has to have a symbolic link put in /opt/bin for each of its public binaries for them to work from the command line. Likewise, proper symbolic links have to be set in /usr/share/man for the man pages of the package to work properly. This same principle applies to a number of other directories, including /etc/opt for configuration files and /etc/sys/init/rc.d/init.d for packages that use the services of initd.
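Installing a package's public binary then amounts to a one-line point of contact (kfm is just an example):

# expose one of KDE's public binaries through /opt/bin
ln -s /opt/windows/kde/bin/kfm /opt/bin/kfm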

The figure schematically shows both the way packages interface with the system and a specific example. The reason for going to all of this trouble is to clarify, simplify and limit the points of contact between any package (both programs and services like httpd) and the rest of the system, and to emphasize the breaking of the system into clearly defined components which can be isolated, added, removed or even replaced with other components easily.

    The final major new feature of the system is the addition of the /lib/vendor directory. This is intended for kernel modules or other drivers available from vendors. The goal is to provide a standard place for vendors to put their drivers even if they are available in binary-only format. This should encourage vendors to write drivers for Linux and eventually give away the source code for that driver, when the hardware is not so cutting edge. Even if the source code is never released, replacing an existing driver is easier than writing something from scratch.

    Conclusion

    Linux and related utilities have been evolving steadily over the past few years and have grown to be an extremely robust and rich system. Standards have played a core role in this, and their evolution will be even more important if Linux is to continue increasing in popularity.

I have tried to highlight some points I think are absolutely essential for the continued success of Linux and Free Software in general. One major point is that, as good as Linux is, it is not perfect and will have to be in a constant state of evolution. Nothing should be above change, and the ultimate goal should always be speed, simplicity and elegance.

    Another point I am arguing is that Linux standards should open up to companies and make it as easy as possible to add programs, services, or drivers into our system smoothly, even if they are not free. This will greatly aid in preventing any single company from monopolizing the system since other companies can make their own replacements for these components or free versions can be written.

    In building my own system, I was trying to see what a system might look like when these ideas are applied. Whether Linux and Linux standards evolve to something similar to my system or not, I hope some of the concerns I raised in the article are considered and addressed by the Linux community.

    RESOURCE

    Componentization for the operating system is closely related to commoditizing computers; Eric Green has a very nice discussion of both at http://www.linux-hw.com/~eric/commodity.html.

    "Linux Gazette...making Linux just a little more fun!"


    Bomb ô Bomb, le premier jeu utilisant l'Addon technology

    By

    Français | English


    Il faudrait peut-être inventer un nouveau terme, plus complet que "freeware", pour un nouveau point de vue sur les programmes gratuits. Si maintenant venait l'idée qu'un programme, en plus d'être donné gratuitement, était donné avec la possibilité de le modifier ? Certes, les programmes gratuits sont donnés avec les sources en général; mais peu de gens, à moins d'être des programmeurs expérimentés, sont capables de modifier les programmes qu'ils utilisent. Si maintenant les programmes "free" étaient donnés avec la possibilité de les modifier facilement et par n'importe qui ?

    Depuis peu sur le net, existe un petit aperçu de ce point de vue. http://david.fauthoux.free.fr

    Volontairement basé sur un jeu ultra-connu (bomberman), ce jeu peut être modifié à volonté en écrivant (au plaisir!) dans son fichier d'initialisation.
    Par exemple, on peut rajouter une page de présentation avec une image de son choix en rajoutant les lignes

    [Page]=
    
    background=myPicture.gif
    Et on peut coller plein d'animations partout simplement en rajoutant des lignes comme "loop anim="... Et pour plus de simplicité, sur le site on trouve plein de tutorials...

    Vous comprenez bien que ce point de vue demande une certaine robustesse et une large souplesse au moteur du programme. Arrêtons-nous un instant sur cette remarque. En effet, il ne suffit pas de changer quelques lignes pour faire d'un programme ce que l'on veut. Il fallait donc mettre en oeuvre une autre façon de procéder. Celle-ci s'adresse aux programmeurs, mais pas forcément expérimentés, et même débutants. Le programme du jeu est fortement structuré (c++), il permet ainsi une compréhension haut-niveau du fonctionnement (regarder les noms d'objets et de fonctions suffit, pas besoin de se pencher sur des algorithmes compliqués). Mais au-delà de ça, cette structuration permet la mise en oeuvre de l'Addon technology : Grâce à cette nouvelle technologie, il est possible de compléter le programme de façon ultra-accessible, en écrivant des "addons", et tous les addons écrits à travers le monde sont et seront compatibles ! Un addon de moins de 20ko peut rajouter une option au jeu : l'addon donné en septembre rajoute une option (avec animations et sons bien entendu !) qui crée un jeu de poursuite dans le jeu !

    Le pouvoir des addons n'est limité que par votre imagination.


    Bomb ô Bomb, the first game using the Addon technology

    By

    English translation by

    Français | English


Perhaps it would be necessary to invent a new term, more complete than "freeware", for a new point of view on free programs. What if a program, in addition to being given away free, was given with the possibility of modifying it? Admittedly, free programs are generally given away with their sources; but few people, unless they are experienced programmers, are able to modify the programs they use. What if "free" programs were given with the possibility of being modified easily, by anyone?

A small taste of this point of view has recently appeared on the net: http://david.fauthoux.free.fr

Deliberately based on an ultra-well-known game (Bomberman), Bomb ô Bomb can be modified at will by writing (with pleasure!) in its initialization file. For example, one can add a presentation page with an image of one's choice by adding the lines

    [Page]=
    
    background=myPicture.gif
And one can stick animations everywhere simply by adding lines like "loop anim="... And for more simplicity, the site is full of tutorials...

You understand that this point of view demands a certain robustness and a broad flexibility from the program's engine. Let us dwell on this remark for a moment. Indeed, it is not enough to change a few lines to make a program into whatever one wants. It was thus necessary to implement another way of proceeding. This one is addressed to programmers, but not necessarily experienced ones, even beginners. The game's program is strongly structured (C++), thus allowing a high-level understanding of its operation (looking at the names of objects and functions is enough; there is no need to study complicated algorithms). But beyond that, this structuring allows the implementation of the Addon technology: thanks to this new technology, it is possible to extend the program in an ultra-accessible fashion by writing "addons", and all the addons written around the world are and will be compatible! An addon of less than 20 KB can add an option to the game: the addon released in September adds an option (with animations and sounds, of course!) which creates a pursuit game within the game!

The power of addons is limited only by your imagination.


    Copyright © 1999, David Fauthoux. Translation copyright Specialized Systems Consultants, Inc.
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    An Overview of the Proc Filesystem

    By


    One of the more interesting aspects of certain flavors of UN*X (Linux among them) is the /proc filesystem. This "virtual" filesystem has several key features which are interesting, useful and helpful. It can also be dangerous and disastrous. This column will approach the /proc filesystem in three areas:

    1. A brief explanation of what it is
    2. What /proc can be used for (or sometimes not to be used for)
    3. A map of /proc as of the 2.2 Kernel on the i686 architecture

    What is /proc?

The /proc filesystem is a direct reflection of the system kept in memory and represented in a hierarchical manner. The goal of the /proc filesystem is to provide an easy way to view kernel information and information about currently running processes. As a result, some commands (ps, for example) read /proc directly to get information about the state of the system. The premise behind /proc is to provide such information in a readable manner instead of having to invoke difficult-to-understand system calls.

    What /proc can do for an Administrator

The /proc fs can be used for system-related tasks such as:

    Viewing information about the system (memory, devices, interrupts and so on)
    Examining the state of running processes
    Tuning kernel parameters on a running system (via /proc/sys)

There are some things to take note of: most of these tasks can also be done with tools that either peruse /proc or query the kernel directly.

    Different Kernels = Different Capabilities

Different kernels can present different information within /proc and allow different changes to it. Some, all, or totally different layouts and capabilities may exist depending on your machine's kernel implementation.

    The Obligatory Warning

Since there is no one place that documents exactly what you can and cannot do with /proc (again, because it varies by kernel and distribution), there is no fool-proofing it other than the fact that only root may actually descend into /proc and monkey with the files therein. I have found the easiest approach to be a sort of hacker method: back up your kernel and apply common sense when making alterations within the /proc fs.

A prime example of tuning applications via /proc can be found in "The C10k Problem" document at Dan Kegel's Web Hostel.
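For example, on a 2.2 kernel both the system-wide file-handle limit and IP forwarding can be inspected and changed on the fly (the values below are just examples):

# read, then raise, the system-wide file handle limit
cat /proc/sys/fs/file-max
echo 8192 > /proc/sys/fs/file-max
# turn on IP forwarding without rebooting
echo 1 > /proc/sys/net/ipv4/ip_forward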

    A Map of /proc

    Following is a table with brief descriptions of files and directories in /proc with the 2.2 kernel on a Linux i686 architecture.

    loadavg Average of system load for the last 1, 5 and 15 minutes
    uptime Time in seconds since boot-up and total time used by processes
    meminfo The number of total, used and free bytes of memory and swap area(s)
    kmsg Kernel messages that have yet to be read in by the kernel
version Current rev of the kernel and/or distribution (read from linux_banner)
    cpuinfo Recognized processor parameters
    pci Current occupation of pci slots.
    self/ Information about processes currently accessing /proc
    net/ Descriptions about the network layer(s)
    scsi/ Contains files with information on individual scsi devices
    malloc Monitoring provisions for kmalloc and kfree operations
    kcore A core dump for the kernel (memory snapshot)
    modules Information regarding single loaded modules
    stat General Linux Statistics
    devices Information about kernel registered devices on the system
    interrupts Interrupt assignment information
    filesystems Existing filesystem implementations
    ksyms Symbols exported by the kernel
    dma Occupied DMA channels
    ioports Currently occupied IO ports
smp Individual information about CPUs if SMP is enabled
    cmdline Command line parameters passed to the kernel at boot time
    sys/ Important kernel and network information
    mtab Currently mounted filesystems
    md Multiple device driver information (if enabled)
rtc Enhanced real-time clock (if enabled)
    locks Currently locked files

    Numbered Directories

The numbered directories contain information about running processes, one directory per PID.

    Results May Vary

Again, keep in mind that the capabilities of /proc and its contents vary from version to version. Otherwise, happy exploring!

    For More Information

    Below is a short list of sites with in depth information (LDP aside of course) about /proc contributed by readers:


    Copyright © 1999, Jay Fink
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    The Graphics Muse

    By



    muse:
    1. v; to become absorbed in thought 
    2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration
    © 1999 by

    Button Bar
    Welcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. 

    [Graphics Mews][Resources]

    This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems.

    This month there really isn't a new Muse column.  At least not here in the Gazette.  In fact, there won't be any Muse columns in the Gazette anymore.  Nor will there be any more articles in my sister publication, TheGimp.com.

    Instead, I'd like to announce a new Web site:  Graphics-Muse.com.  I realized the time was right to consolidate all my online writings into a single site, to put them into databases and provide some control over when I could make my work available.  Basically, trying to write this column, with all the news stories, reader mail, and so forth just takes too long each month.  I needed something more automated, and searchable.  I needed databases.  That also meant I needed CGI and I couldn't get that with the Gazette due to the way it gets distributed.

    At the same time, I'm moving TheGimp.com under the same roof.  This means I can use pretty much the same tools to write on just about any topic.  Plus I'll have (eventually) all my reader mail and Q And A style stuff in searchable databases. And my old tools database has been updated and migrated to a much friendlier format! 

    All of the old Muse issues from the Gazette, plus all the issues of TheGimp.com are available from this new site.  All of the old Gimp resources (including Tutorials) are also there.  I'll update the site once or twice a day, so you don't need to check it every 15 minutes like LinuxToday or Slashdot, but you will want to stop by daily.  

    The new site, which should be online by October 1st if all goes well, is DHTML (i.e. CSS and Javascript) based, so you'll need a browser that can handle that.  Netscape handles it pretty well, but you'll want version 4.5 or later (preferably 4.61, which is what I used to test the site).  I haven't tried IE since I don't have access (nor do I want access) to any Microsoft systems.  I tried Lynx and it appears to work moderately well, but I make no promises on supporting text based browsers.  I'm not even exactly sure how to use Lynx.

    The new site is designed to work on 800 pixels wide displays.  It should work ok in larger displays, but you'll have to scroll horizontally on smaller ones.

    I'm using DHTML to provide simplified management of articles and their summaries.  It just works better this way.  Since themes.org, Slashdot and freshmeat use it, I thought it would be ok.  I just may not have done it as well as they have.  We'll see.

    My thanks to Margie Richardson and Mike Orr for helping me make the Muse a useful resource.  I've enjoyed my time working on the Gazette version of the Muse.  And I'm not abandoning the Gazette completely - I hope to encourage a few new writers to step forward with their own columns. 

    Anyway, bookmark the new site and visit often!  And thanks for 3 wonderful years in the Gazette!

    The Artists' Guide to the Gimp
    Available online from Fatbrain, SoftPro Books and Borders Books.  In Denver, try the Tattered Cover Book Store.

The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application-specific information for me, I'll add it to my other pages, or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application- or site-specific information needs to go into one of the following general references rather than be listed here.
    Online Magazines and News sources 
    C|Net Tech News
    Linux Weekly News
    Linux Today
    Slashdot.org
    TheGimp.com

    General Web Sites 
    Linux Graphics
    Linux Sound/Midi Page
    Linux Artist.org

    Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column 
    The Gimp User and Gimp Developer Mailing Lists. 
    The IRTC-L discussion list
    comp.graphics.rendering.raytracing
    comp.graphics.rendering.renderman
    comp.graphics.api.opengl
    comp.os.linux.announce

    Future Directions

    Check out Graphics-Muse.com and find out!

    © 1999


    Copyright © 1999, Michael J. Hammel
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    The Cash and the Calling

    By


    September 1999

    This paper analyzes a model of software development in which closed-source applications make use of open-source artificial intelligence parts. We begin by observing that AI has a huge potential but that problems limit the development of applications. We consider why there will continue to be closed-source AI applications and note that pure open-source development is limited by the number of people interested in starting open-source projects. We consider the possibility of closed-source applications based on open-source parts, both in a two-tier and a three-tier architecture. We look at the pool of talent available for open-source projects. We conclude that the use of open-source AI parts may significantly increase the development of AI applications and that this may be good for the state of the art of AI.


    Table of Contents

    1. Introduction - Potential Applications of AI
    2. Closed-Source and Open-Source
    3. Two-Tier AI: Closed Application, Open AI
    4. Three-Tier AI: Application, Problem-Domain, AI
    5. The Pool of Talent
    6. Conclusion - The Potential of Open-Source AI Parts
    7. Bibliography and Acknowledgements


    1. Introduction - Potential Applications of AI

There seem to be more potential applications for artificial intelligence than actual ones. The untapped market for staff-scheduling alone is immense. The opportunities in the areas of materials scheduling, process optimization, expert decision-making and image interpretation seem endless. The development of AI applications would appear to be a field with a lot of potential for growth.

    Presumably there is money to be made satisfying some of this potential demand. Why is this happening so slowly? Why are there so many potential products that people would pay for but that have not been developed?

    There are a number of reasons why there are so many potential, as opposed to actual, applications of artificial intelligence. Chief among them is the expense and risk associated with trying to take advantage of any particular opportunity.

    2. Closed-Source and Open-Source

    There will probably always be opportunities to develop closed-source AI software for rent. Companies identify potential applications, and where there is sufficient expected demand, they develop new products. This is expensive because of the amount of analysis and design required. The development only occurs because rents are expected. The source is closed to enable capture of those rents.

    As we have noted, however, this sort of scenario is limited by the expense and risk involved.

    What about open-source? Free should mean less expense. Open source reduces future risk. AI would definitely seem to be a good candidate for peer review. Eric Raymond describes these benefits and how they come about in "The Cathedral and the Bazaar".

    But each open-source project must be started by someone who does the initial analysis, design and development. There are a lot more potential AI projects than people interested in starting them. Open-source application development isn't likely to make a big dent in the pile of unexploited AI opportunities.

    3. Two-Tier AI: Closed Application, Open AI

    The expensive part of an AI application is not necessarily the AI. There are a variety of artificial intelligence techniques, tools, frameworks and engines available. The most expensive part of developing an AI application can be the problem-analysis and the design of how the AI is to be used.

    It may well be reasonable for an application, based on expensive analysis and design, to be closed-source. But what if the application got its AI functionality from open-source AI parts? A staff-scheduling system could be based on open-source AI problem-solving parts. An image-recognition system could be based on an open-source neural-network.

    In "The Magic Cauldron", Eric Raymond describes five discriminators that "push towards open source". The first four indicate that AI parts would be a good candidate for open source.

    The fifth discriminator, however, indicates the opposite. Artificial intelligence is not part of "common engineering knowledge". It is an area in which one would expect good proprietary techniques to be able to generate good rents.

    In practice, this can be difficult. The customers for software parts are developers of other software. Convincing a potential customer of the worth of a secret technique can be a tough sell. But more importantly, a company will not be interested in having its product dependent on a secret technique that may not satisfy future requirements.

    Open-source software parts offer much less risk. They are easier to judge, they tend to be more reliable and customers always have the option of making their own changes.

    4. Three-Tier AI: Application, Problem-Domain, AI

    If open-source general-purpose AI parts are available, an interesting new product is possible. People can use the general-purpose AI to develop parts that are specific to a problem domain like staff-scheduling or courier-dispatching. This makes for a three-tier architecture: the application on top, the problem-domain-specific AI in the middle, and the general-purpose AI parts at the bottom.

    Much of the analysis and design goes into the middle tier - the problem-domain-specific AI. It can be expensive to develop and the market is much narrower than the market for general-purpose AI.

    A middle-tier AI product might be developed by a company who will use it to develop an application for sale. In this case, the middle-tier would likely be closed-source.

    A middle-tier AI product might be developed by a company or individual with the intention of offering application development services to narrow markets. The middle-tier might be closed to help capture the market or open to help sell the service.

    The three-tier architecture provides more ways to take advantage of AI opportunities. The development of middle-tier AI products is encouraged by the existence of open-source general-purpose AI parts.
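
    To make the layering concrete, here is a minimal sketch in Java. Every name in it is hypothetical, and a real middle tier would contain far more domain knowledge; the point is only how the tiers depend on one another.

    // All names below are hypothetical; the point is only the layering.
    interface Problem { }
    interface Solution { }

    // Bottom tier: a general-purpose, open-source AI part.
    interface ConstraintSolver {
        Solution solve(Problem p);
    }

    // Middle tier: problem-domain-specific AI (e.g. staff-scheduling),
    // where most of the expensive analysis and design is captured.
    class StaffScheduler {
        private final ConstraintSolver solver;
        StaffScheduler(ConstraintSolver solver) { this.solver = solver; }
        Solution schedule(Problem staffingRequirements) {
            // Domain knowledge would translate staffing rules into
            // solver input here.
            return solver.solve(staffingRequirements);
        }
    }

    // Top tier: the (possibly closed-source) application programs
    // against the middle tier only.

    The application sees only the middle tier; whether the solver underneath is open or closed is an independent decision, which is exactly the flexibility described above.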

    5. The Pool of Talent

    Many programmers are interested in AI. It's an intriguing field - problem-solving, decision-making, remembering and recognizing... it's the ultimate challenge - software that thinks.

    The vast majority of these programmers never apply their talents to AI - they have no opportunity in their jobs and they are not part of the academic AI community. The Open-source phenomenon provides a number of ways of tapping this pool of talent.

    Programmers with a calling and/or a desire to make a name for themselves will do original research, write new open-source software and start open-source projects. Much unconventional thought will be brought to bear on various problems in artificial intelligence. Many thinkers will have a higher opinion of their thoughts than will later prove to be justified (your present author probably included). But the effect of the open-source movement on the state of the art of AI may be the next great thing that happens in the world of computers.

    Open-source projects need participants - people who contribute time designing, developing, debugging and testing. The open-source culture that supports this participation is described in Eric Raymond's paper, "Homesteading the Noosphere". AI open-source projects should be particularly good at attracting participants.

    Open-source AI that is used in commercial products should be particularly attractive to talent, for a few reasons. One is that the AI has proven to be useful - it is something worth working on. Another is that the work of the project is obviously important, making the project an excellent place to make a name for oneself. A third is that money is involved: there is the possibility of paying work and of getting involved in new business ventures. Even people who aren't looking for work like the idea of acquiring knowledge that can be worth money.

    If a commercial product uses open-source AI, there is the potential for paying work related to the AI. The product developer pays people to initially make use of the AI and this use may have to be maintained. Customers may require consulting, customization and integration services. The product developer and large customers may fund projects aimed at improving the open-source AI.

    If money is being made on a commercial product that uses open-source software, there will be people trying to dream up ways of getting in on the action. People may start third-party consulting and integration services. People may launch a venture to develop a competing product. The possibility of acquiring an equity interest in some new venture has its attractions.

    6. Conclusion - The Potential of Open-Source AI Parts

    Open-source AI parts may significantly increase the development of commercial AI applications. Such development will become cheaper and less risky. Small companies that would lack credibility as developers or purveyors of closed-source AI could have adequate credibility as users of open-source AI.

    Open-source AI parts may also significantly increase the development of home-grown AI applications. Many applications of AI in business are so specific that they will not be developed at all unless they are developed by, or at least for, an individual company for its own use. Development that would be too expensive and risky with closed-source AI products could be feasible with open-source AI.

    As the open-source movement increases the application of AI, more time and money will be directed at improving the AI. As the state of the art of AI advances, more time and money will be directed at trying to apply it.

    The open-source movement could have important effects on the application of AI.

    7. Bibliography and Acknowledgements

    "The Cathedral and the Bazaar" - Eric Raymond
    http://www.tuxedo.org/~esr/writings/cathedral-bazaar/

    "Homesteading the Noosphere" - Eric Raymond
    http://www.tuxedo.org/~esr/writings/homesteading/

    "The Magic Cauldron" - Eric Raymond
    http://www.tuxedo.org/~esr/writings/magic-cauldron/

    I owe much of my appreciation for the open-source movement to the writings of Eric Raymond. There are many good things to read at his web-site.

    Brian Marshall's Home Page


    Copyright © 1999, Brian Marshall
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Using Sfdisk and Perl to fdisk a hard drive

    By Mark Nielsen


    Index:

    1. Resources
    2. Introduction to Sfdisk and the Perl Expect module
    3. How to use Sfdisk to get information about your hard drive.
    4. How to use Sfdisk to create or repartition your hard drive.
    5. How to use Expect to delete all partitions on a hard drive.
    6. How to use Expect to change the geometry of a hard drive.
    7. Comments
    Future updates for this article will be located at http://www.tcu-inc.com/mark/articles/Sfdisk.html.

    Resources

    1. http://www.perl.com/CPAN/authors/id/AUSCHUTZ/Expect.pm-1.07.tar.gz
    2. /usr/src/linux/Documentation/ide.txt
    3. man sfdisk
    4. man lilo
    5. man lilo.conf
    6. http://metalab.unc.edu/LDP/HOWTO/mini/LILO.html
    7. man hdparm # side issue -- you might find this useful for other things.

    Introduction to Sfdisk and the Perl Expect module

    I am really getting mad at myself. I only tested this with RedHat 6.0 (again). Debian would be a cool alternative.

    The purpose of this article is to explain how to automate the fdisking of a hard drive, primarily using Sfdisk and the Perl Expect module. Why would you want to do this? Well, it is one of many ways to solve the problem of needing your bootable Linux files below the 1024th cylinder. There are other ways to solve the boot problem, but we will stick to the cylinder method for this article. It can also be used to automatically partition new hard drives.

    Sfdisk is a tool that allows you to change the partitioning of your hard disks through scripts. It also lets you get information about your hard drives. Although it is a pretty cool program, it does have limitations: it has most of the necessary powers of fdisk, but not all of them.

    Disk Druid is a program used by RedHat to initialize hard drives (change their geometry) before you install Linux to the hard drive.

    Perl is a very very cool programming language. The Expect module adds a relatively user-friendly way of writing scripts that automate interactive commands. In other words, when you execute an Expect script, it types commands to the computer as though you were typing them -- like a macro, but more advanced and usable with any console-based program. Perl is just so cool to use with everything.

    There are two other modules you have to install with the Expect perl module.
    IO-Stty-.02.tar.gz
    IO-Tty-0.02.tar.gz


    How to use Sfdisk to get information about your hard drive.

    RedHat 6.0 includes sfdisk by default. I guess the BSD games had to go to save space. Anyways, here are some simple commands to get information about a hard drive on a PC-compatible computer; the examples below use /dev/hdd, the slave drive on the secondary controller.

    To get the geometry of your hard drive,

    /sbin/sfdisk -g /dev/hdd
    
    Here is how to get the size (in 1024-byte blocks) of the total space of your hard drive,
    /sbin/sfdisk -s /dev/hdd
    
    Here is how to change the ID of partition 5 on that drive to Linux (type 83),
    sfdisk --change-id /dev/hdd 5 83
    

    How to use Sfdisk to create or repartition your hard drive.

    Well, one powerful feature of sfdisk is the ability to repartition your hard drive or create new partitions. Create a file called "Test.data" with entries in the following format, one per line, with the fields comma-delimited:

    Start, Size, ID, Bootable

    Start = the cylinder to start at (the first available cylinder if left blank), Size = the number of cylinders (all remaining if left blank), ID = the type of partition (Linux, swap, MSDOS, or other), and Bootable = whether this partition is bootable. There are other options, but we won't get into them in this article.

    To make it so you have a first partition of 136 cylinders, a second partition of 254 cylinders with the swap ID (82), and a third, bootable Linux partition that grabs the rest of the space (ID = 83), make a file like this (note that each partition starts in the cylinder after the previous one ends),

    1,136
    137,254,82
    391,,83,*
    
    

    and then issue this command to take this configuration and execute it on your slave hard drive on your secondary controller

    /sbin/sfdisk /dev/hdd < Test.data
    
    and then issue this command to see what you did
    /sbin/sfdisk -l /dev/hdd
    

    Again, it is highly recommended that you read the manpage to figure out how to format this data file. Any fields you leave blank get the default values described above.


    How to use Expect to delete all partitions on a hard drive.

    The Perl script to delete all the partitions is not something to be fooled around with. If you test it and screw up your hard drive, it is your fault and not mine even if my script doesn't work right. It is your risk.

    Basically, with the Expect module, you can automate certain tasks, which can be used later for a more sophisticated program - hint, hint of what is to come.

    Save the script to "Dufus_Move.pl" and issue the command

    chmod 755 Dufus_Move.pl
    
    and then to do it on your slave hard drive on your secondary controller,
    ./Dufus_Move.pl d
    

    How to use Expect to change the geometry of a hard drive.

    The Perl script to change the cylinders of a hard drive is not something to be fooled around with. If you test it and screw up your hard drive, it is your fault and not mine even if my script doesn't work right. It is your risk.

    Anyways, that script maximizes the number of cylinders.

    Save the script to "Change_Cylinders.pl" and issue the command

     chmod 755 Change_Cylinders.pl 
    and then to do it on your slave hard drive on your secondary controller,
    ./Change_Cylinders.pl d
    

    Comments

    1. Sfdisk is a pretty cool program, but creating a perl script as an interface for Fdisk is even cooler. Better yet, why don't they script fdisk? I wonder who "they" are.
    2. If Sfdisk could redo the geometry of a hard drive, I would not have needed anything else. Perhaps it is possible with Sfdisk, but I didn't see it.
    3. This was the first time I really used Expect for more than just simple tasks. I didn't list here everything I did.

    Mark Nielsen works as a database programming assistant at The Computer Underground, as a book binder for ZING, and as a geek for linux.com. Edited using Nedit and ispell.


    Copyright © 1999, Mark Nielsen
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Linux Is Not For You

    By nod


    Linux is not for you
    (if you are a normal everyday Joe)

    Okay, you've got a home computer, most likely a PC. You've been surfing the net for six months to a year, so you reckon you are pretty wired. But you've been hearing a rumour, a little whisper, a voice in the back of your head that states, "There is another operating system, and it's cool and funky, free, stable, powerful and fast". Memories flash up of the time you were working on that really important letter and the system suddenly locked, or the day that you finally found an interesting web site, then the screen went blue, and you never found that site again. This hint at "a better way" plays on your doubts and suspicions, and after a little surfing you come across the holy grail of operating systems, Linux. Perhaps you found it through a document like this one, which states, "Linux is no longer only for UNIX wizards who sit for hours in front of a glowing console".

    Okay, Sparky, stop right there. Linux is not for you. I really should add "currently" to that statement, for there does remain hope for the future. But for the moment, Linux is out of most people's league.

    Let me introduce myself. I am the guy that your Uncle Bob calls when his computer crashes, the knowledgeable friend of the family, the man that can sort things out. Self-taught, I don't know everything, but when it comes to the home computer I can sort out most things. Generally this means Windows 95. The faults that I find with most people's systems are extremely easy to rectify, but working on them does give some insight into "the average user": what they want, what they can and cannot do. Also, I myself have been 100% conned by the Myth, and indeed over the last five days I have installed Linux three times, all with varying degrees of success. So I now have a pretty good idea of what is wrong with it in reference to using it for the first time.

    Sad but true, Linux is moving rapidly away from being usable by "the average user". People may choose to argue that, with the latest major distributions including Partition Magic and Boot Magic, things are getting simpler, but this is not the case. Look at what the distributions come with: four to six CD-ROMs, big manuals, yet hardly any help unless one is prepared to search for it. Give me a single Windows 95 CD-ROM and a boot floppy, and I can install an operating system that will have a nice friendly interface, where most people will be able to work out where their hard drive is. It will have a printer installed, and will attempt to sense any other devices.

    With the KDE install that Caldera ships, I was pleasantly surprised to see that on the desktop were my CD-ROM and floppy, but where was my hard drive? What about that ATAPI Zip drive? Why is my printer not working? Eventually, after searching the internet (through Windows, because there is no obvious quick way of setting up an internet connection on Linux), I find out how, in theory, to install the Zip drive. Imagine my surprise when I type in:

            # dmesg | less
    

    and see that somewhere, somehow, my computer already knows that it has a Zip drive. It just didn't put an icon anywhere for it, or indeed even mount the drive. So I have to do this by typing stuff in: arrgh, horror of horrors. I am not even going to go into the problems I had with the sound card, which resulted in severe feedback and woke up the neighbours. I have not even attempted to install a printer yet, because quite honestly my nerves aren't quite up to it.

    So much for the installation problems; now for the invisible hard drive. Although with a little bit of guesswork you could probably work out that it is /, that's not really as intuitive as a Hard Drive icon. As for finding the other partitions on the drive, well, I can do it and am feeling pretty damn pleased with myself, but the average person could not, even though the operating system is perfectly aware (just as with the Zip drive) that they exist.

    Now let's deal with the issue of Linux moving rapidly away from what the user wants. One of the first things I did was click on the big K. I see a wealth of software: games, text editors (both advanced and normal), and various things whose purpose I can't guess. I'm going to click on them anyway, but the question is, do I need this stuff? Of course not; the installation does not provide what I need. A good example would be SANE, which apparently is scanner software. This I know only because I already knew what KRPM did. I look to see whether SANE is installed on my system. Apparently it is. I can even uninstall it by clicking on the button in KRPM. But I can't find any way to run the program. I look in the manual. It tells me to do various things. While this may be good for a UNIX guru, it doesn't help me, because I don't understand what the words refer to.

    This is a plea on behalf of the home user. Companies, stop concentrating on adding as much software as possible! Instead, redirect your effort into producing a sound, simple base installation! Take a good look at Microsoft's products. Study what they install and how the user navigates around. Microsoft may be despised by the Linux community, but it would be best to study the enemy and exploit their strengths as well as their weaknesses. The home user doesn't care about open source code; they don't program. They want to get a system up and running that they can use, and where they can then install any additional components, preferably without having to type anything in. Keep it simple; concentrate on wizards rather than adding features. I've tested the speed of Linux using my dual-booting system; these were the results:

    Copying a folder containing 4 files totalling 149 megabytes to another partition on the same hard drive:

              Linux:  1 minute  47 seconds
            Windows:  2 minutes 37 seconds
    

    Now, I have no idea how or why, but Linux seems faster, so I am keeping it. I know that at some point it could be the OS of the future, and I would like to discuss with anyone who is interested what form the perfect, simple base installation could take. One of the fundamentals was that Linux was for the good of everyone, and as I'm a newbie, I figure I'm the perfect idiot to test it on.


    Copyright © 1999, nod
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Linux Humor

    By Mike Orr


    A co-worker brought to my attention this little link: Coy: Like Carp, Only Prettier. It's a Perl module described by its author, Damian Conway, thusly:

    When a program dies 
    what you need is a moment 
    of serenity. 
    
    The Coy.pm 
    module brings tranquillity 
    to your debugging. 
    
    The module alters 
    the behaviour of die and 
    warn (and croak and carp).
    
    Like Carp.pm, 
    Coy reports errors from the 
    caller's point-of-view. 
    
    But it prefaces 
    the bad news of failure with 
    a soothing haiku. 
    

    Not a bad idea.

    Wait, it gets better. The description of the haiku generator algorithm is itself written in haiku.

    The paper cites a Salon Magazine contest for haiku error messages. These are my favorite entries, although all of them are worth a read:

    A file that big?
    It might be very useful.
    But now it is gone.
            -- David J. Liszewski 
    
    Printer not ready.
    Could be a fatal error.
    Have a pen handy?
            -- Pat Davis 
    
    The Web site you seek
    cannot be located but
    endless others exist
            -- Joy Rothke
    
    Windows NT crashed.
    I am the Blue Screen of Death.
    No one hears your screams.
            -- Peter Rothman 
    
    

    All this got me thinking: does anybody have any Linux poetry they'd like to share? Not necessarily haiku--any kind of poetry. There are lots of geeky UNIX things floating about, but nothing specifically Linux-related. (Or is my memory getting dim?) Maybe someday I'll try writing a Linux sonnet.

    P.S. Another co-worker sent me a link to a Star Wars movie done in ASCII animation (asciimation). It requires Java.


    Copyright © 1999, Mike Orr
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Security for the Home Network

    By JC Pollman and Bill Mote


    Security for the home network is your responsibility.  With all the tools available to the crackers and script kiddies, it is not a matter of if but rather when you will be probed and possibly attacked.  I have personally been connected via modem for less than 5 minutes and been port scanned!  Your ISP really does not care if you are being attacked by "x" because if they shut down "x", tomorrow it will be "y" attacking you. Fortunately there are several things you can do to greatly increase the security of your network.

    Disclaimer: This article provides information we have gleaned from reading books, the HOWTOs, man pages, usenet news groups, and countless hours banging on the keyboard. It is not meant to be an all-inclusive, exhaustive study of the topic, but rather a stepping stone from the novice to the intermediate user.  All the examples are taken directly from our home networks, so we know they work.

    How to use this guide:

    Prerequisites: This guide assumes that you have tcp wrapper and ipchains installed, that you are running kernel 2.2.0 or higher, that you have selected a legal/private domain name,  that you're using IP Masquerade to "hide" your machine from the internet, and that you are consistently able to connect to the internet.

    Why crack me? Most of us believed, at one time, that we were so insignificant that a cracker would not waste his time with us. Additionally, there were so many computers connected to the internet that the odds of being cracked seemed virtually nil. Five years ago that was probably a correct assessment.  With the advent of the script kiddies, this is no longer true. The tools available to them make it so easy to find and crack systems that anyone who can turn on a computer can do it.

    There are two main reasons they may want to crack your home system: the thrill of another conquest, and the chance to use your ISP account to launch other attacks. Life will become distinctly unpleasant when the authorities come to your door investigating why your ISP account was used to break into the Pentagon.

    The following information comes from a series of excellent articles on the script kiddie threat. They should scare you straight if you have taken security lightly up to now.

    The script kiddie methodology is a simple one. Scan the Internet for a specific weakness, when you find it, exploit it. Most of the tools they use are automated, requiring little interaction. You launch the tool, then come back several days later to get your results.  No two tools are alike, just as no two exploits are alike. However, most of the tools use the same strategy. First, develop a  database of IPs that can be scanned. Then, scan those IPs for a specific vulnerability.
    Once they find a vulnerable system and gain root, their first step is normally to cover their tracks.  They want to ensure you do not know your system was hacked and cannot see nor log their actions.  Following this, they often use your system to scan other networks, or silently monitor your own.
    And now for the bad news: CERT® Coordination Center has only one solution if you have been cracked: reinstall everything from scratch!

    The Firewall Machine: Ideally, your firewall should be a machine dedicated to just that: being your security. Given that you only need the power of a 486, this should not be too hard to arrange. By using a computer as just your firewall, you can shut down all the processes that normally get attacked - like imap, ftp, sendmail, etc. A simple solution would be to create a boot floppy with everything you need on it and run it out of a RAM disk. That way, if you are cracked, you just reboot the machine; and without a hard drive, it will also run much cooler. Check out the Linux Router Project for how to set it up.

    However, for the purposes of this article the authors assume you're setting this up on your primary server and that you've been following along with the previous month's articles on DNS and SendMail.

    What we will cover: There are hundreds, maybe even thousands, of ways to crack into your computer. And for every way in, you need to provide a defense. We are not going to cover everything here: we will cover just the basics to get your machine secured from the most likely attacks.

    ip spoofing
    tcp wrappers
    ipchains
    What we will not be covering:
    physical security
    specific programs you run
    encrypting data


    Here are some final thoughts to whet your appetite.  Next month we will be discussing DHCP.


    Copyright © 1999, JC Pollman and Bill Mote
    Published in Issue 46 of Linux Gazette, October 1999
    IP Spoofing

    The best information comes straight from the IP Chains How To:

    IP spoofing is a technique where a host sends out packets which claim to be from another host.  Since packet filtering makes decisions based on this source address, IP spoofing is used to fool packet filters. It is also used to hide the identity of attackers using SYN attacks, Teardrop, Ping of Death and the like (don't worry if you don't know what they are).

    The best way to protect from IP spoofing is called Source Address Verification, and it is done by the routing code, and not firewalling at all.  Look for a file called rp_filter by doing this:

        ls -l /proc/sys/net/ipv4/conf/all/rp_filter [Enter]

    If this file exists, then turning on Source Address Verification at every boot is the right solution for you.  To do that, insert the following lines in your init script (for Redhat-based distributions, use the /etc/rc.d/rc.sysinit script) immediately after /proc is mounted:
     

    # This is the best method: turn on Source Address Verification and get
    # spoof protection on all current and future interfaces.
           if [ -e /proc/sys/net/ipv4/conf/all/rp_filter ]; then
             echo -n "Setting up IP spoofing protection..."
             for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
                 echo 1 > $f
             done
             echo "done."
           else
             echo PROBLEMS SETTING UP IP SPOOFING PROTECTION.  BE WORRIED.
             echo "CONTROL-D will exit from this shell and continue system startup."
             echo
             # Start a single user shell on the console
             /sbin/sulogin $CONSOLE
           fi

    If you cannot do this, you can manually insert rules to protect every interface.  This requires knowledge of each interface.  The 2.1 kernels automatically reject packets claiming to come from the 127.* addresses (reserved for the local loopback interface, lo).

    TCP Wrappers

    We are indebted to the author of the excellent article TCP Wrappers, Locked doors and a security camera for your computer!

    How does "tcp wrappers" work?  Many programs do not run all the time, as they are infrequently used and would create unnecessary overhead.  The inetd program takes care of this nicely.  When a user tries to connect to your computer, the connection is made up of a pair of data: an IP address and a port. Inetd reacts to this connection by looking up the port number in /etc/services, then looking in the file /etc/inetd.conf for the corresponding service (program), and then running the service. With tcp wrappers, inetd is tricked into running tcpd instead of the service that would normally be called. Tcpd checks its rules in the /etc/hosts.allow and /etc/hosts.deny files, and either accepts the connection and runs the service, or denies the connection, based on those rules.

    NOTE: tcp wrappers only works for services that inetd starts! Sendmail, apache, and named do not use inetd, and so they are not protected via tcp wrappers.

    Check to see if tcp wrappers is installed. Most distributions install tcp wrappers by default. The easiest way to check is to view the /etc/inetd.conf file. If tcp wrappers is not installed, a typical line looks like this:

    ftp    stream  tcp     nowait  root    /usr/sbin/ftpd       ftpd -l -a

    and if it is installed, it looks like this:

    ftp    stream  tcp     nowait  root    /usr/sbin/tcpd       ftpd -l -a

    The difference is the daemon: /usr/sbin/tcpd instead of /usr/sbin/ftpd. Assuming tcp wrappers is installed, you must edit your /etc/hosts.allow and /etc/hosts.deny files to give tcpd the rules it needs.

    Edit your /etc/hosts.allow and /etc/hosts.deny to limit access to your computer's network services.  One of the nice features of tcp wrappers is the ability to control access to your computer's network services and to log failed or successful attempts. You can also perform certain actions based on the user's hostname.  When someone tries to connect to a network service on your computer, the tcp wrapper (tcpd) reads the file /etc/hosts.allow for a rule that matches the hostname of the person trying to connect. If /etc/hosts.allow doesn't contain a rule allowing access, tcpd reads /etc/hosts.deny for a rule that would deny access to that hostname. If neither file contains a matching rule, access is granted by default.  It's important to note the sequence of events here: hosts.allow is read first and overrides anything in hosts.deny.  As you'll see, we tell the server to accept connections from specific machines in hosts.allow, but via hosts.deny we tell our server to refuse access to anyone for any reason.

    In the following examples we are going to allow POP3 connections from anywhere, and allow telnet access only to users from myschool.edu.  The format of the rules in the hosts.allow/hosts.deny files is as follows:

    service: hostname : options
    An example   /etc/hosts.allow  could look like the following:
    *****************************************************************************
    ipop3d: ALL: ALLOW

    in.telnetd: .myschool.edu : ALLOW

    *****************************************************************************
    Note: in the two rules above, each rule must be on ONE line, it may appear as more than one line here due to article formatting.

    In the first line, "ipop3d" is the service; the hostname is "ALL", which means the rule applies to all hosts; and finally we tell tcpd to "ALLOW" the connection.

    The second rule follows the same format as the first; it allows telnet access only for users from "myschool.edu".

    Again: each rule goes on its own unbroken line.

    The above examples were given to explain the rules tcp wrappers uses. Here is what I have on my server for /etc/hosts.allow:

    *****************************************************************************
    # allow connections from this machine (the loopback address)
    ALL: ALL@127.0.0.1 : ALLOW
    # allow all connections from computers on my network
    ALL: ALL@192.168.124.1 : ALLOW
    ALL: [email protected] : ALLOW
    ALL: [email protected] : ALLOW
    ALL: [email protected] : ALLOW
    *****************************************************************************

    This file grants permissions based on IP addresses instead of services. Since it is a home network, all the computers are trusted and listed. Obviously, IP spoofing needs to be fixed, or this method is not secure. I want all connections from outside my network denied, and a message sent to me telling me what happened. My /etc/hosts.deny looks like this:

    *****************************************************************************
    ALL:ALL : spawn (echo Attempt from %h %a to %d at `date` | tee -a /var/log/tcp.deny.log |mail [email protected] )
    *****************************************************************************
    This needs to be on ONE line. And yes, I do get email from here - about two a week. They look like this:

    Attempt from gw.webec.com 209.98.44.94 to in.ftpd at Mon Jul 5 21:44:54 EDT 1999

    IP Chains

    In order to understand how to configure a firewall, you need to understand how the data moves from one computer to another. The best explanation I have seen comes from the IP Chains HOWTO:

    All traffic through a network is sent in the form of packets.  For example, downloading this package (say it's 50k long) might cause you to receive 36 or so packets of 1460 bytes each, (to pull numbers at random).
    The start of each packet says where it's going, where it came from, the type of the packet, and other administrative details.  This start of the packet is called the header.  The rest of the packet, containing the actual data being transmitted, is usually called the body.

    Some protocols, such as TCP, which is used for web traffic, mail, and remote logins, use the concept of a `connection' -- before any packets with actual data are sent, various setup packets (with special headers) are exchanged saying `I want to connect', `OK' and `Thanks'. Then normal packets are exchanged.

    A packet filter is a piece of software which looks at the header of packets as they pass through, and decides the fate of the entire packet.  It might decide to deny the packet (i.e., discard it as if it had never been received), accept the packet (i.e., let it go through), or reject the packet (like deny, but tell the source of the packet that it has done so).

    Under Linux, packet filtering is built into the kernel, and there are a few trickier things we can do with packets, but the general principle of looking at the headers and deciding the fate of the packet is still there.

    One problem is that the same tool (``ipchains'') is used to control both masquerading and transparent proxying, although these are notionally separate from packet filtering (the current Linux implementation blurs these together unnaturally, leaving the impression that they are closely related).

    So, ipchains looks at the source, destination, and port in the header of the packet, and then looks at its rules to decide what to do with it. Examining a rule is the easiest way to understand what it does. Here is what I use for the POP3 part of my firewall:

    ipchains -A input -p tcp -j ACCEPT -s 192.168.124.0/24 -d 0.0.0.0/0 110
    ipchains -A input -p tcp -j DENY -s 0.0.0.0/0 -d 192.168.124.0/24 110

    -A input: append this rule to the other input rules (i.e. do not erase the other rules)
    -p tcp: using the tcp protocol
    ACCEPT/DENY: exactly what they say
    -s: the source of the data packet
    -d: the destination of the data packet. 0.0.0.0/0 means "anywhere", and 192.168.124.0/24 is my network's address range.

    The first rule above says: accept any data from the local network going anywhere else for port 110.  The second rule says: deny any packet coming from anywhere else going to the local network on port 110.

    This sounds simple, but what if you do not know what your IP address is - like when you dial up to the internet? And setting up each port can take a while. Fortunately there is help: Ian has put together three outstanding scripts that you can put right into your box with a minimum of configuring.  Here is his Readme file:
     

    1) Pick the script that is appropriate to your particular network setup:

       masquerade:
        For systems on an internal RFC1918 network.

       standalone:
        Single machines connected to the net, wanting strong security.

       routable:
        For systems gatewaying a standard network with a routable subnet.

    2) Copy the appropriate script into /usr/sbin. Edit your script variables
       as follows:

       LOCALIF: (Masquerade/Standalone)
        This is the interface with which you are connected to the IP network.
        For modems and serial port ISDN TA's, this is usually ppp0. Otherwise,
        use the ethernet interface your access device is connected to.

       INTERNALNET: (Routable/Masquerade)
        Set this to the *network* address and hostmask of the network you're
        gatewaying. bitmasks or dotquadded masks are acceptable.

    3) If you're on a dialup, add an entry into ip-up to call the script after
       you've connected. If you're on a permanent connection, call it from
       rc.local.

    4) Change the permissions on the file:

            chmod 755 /usr/sbin/masquerade [Enter]

    After you edit the files, run the one you need. For the Masquerade/Standalone setup, I recommend you run it from /etc/ppp/ip-up (or ip-up.local for Redhat-based systems). Ian is working on making it even easier with perl and a gtk interface.
    Final Thoughts

    Updates: All the work you have just done is not worth the effort if you do not keep your programs up to date. New holes are found every week and the crackers stay very much up to date. Visit your distribution's home page often and check what updates are available. Install them immediately! The Mandrake distribution has one of the best solutions I have seen with their MandrakeUpdate program.

    Turn off what you do not use! It might be cool to have all sorts of services running on your machine (like http, ftp, finger, etc.), but unless you need them, all they do is leave a door open to whoever wants to come in. Edit your /etc/inetd.conf file and comment out (put a # at the start of the line) everything you do not need. Since staying up to date is hard enough, the fewer programs you have to worry about, the better your security will be.

    How to add a machine to your network later:  It is likely that you'll add systems to your network at some point after you've completed this setup.  Here are the key files that must be updated with your new system and its IP address information:

        hosts.allow
     

    Additional resources: There are plenty of resources available on the net. Most are for the professionals, but they have some relevance to us at home as well.  Below are some pages we feel are critical to home security:

    The Linux Administrator's Security Guide The best source of security related info available! Get it, print it out, (178 pages as of today) and read it! Note: it is in pdf format.
    Linux Basic Security Nice overall look at security
    Linux Firewall and Security Site The most comprehensive list of security sites on the net.
    LinuxPowerd.com Another very comprehensive list of security sites, including pages for updates of the more common distributions.

    "Linux Gazette...making Linux just a little more fun!"


    Using Java and Linux to crack the DES challenge

    By


    Abstract

    DES has been used for a long time to guarantee the privacy of transactions in open communication environments, especially inter-banking and financial ones. Nevertheless, the security level offered by this algorithm is not what it once was. The conditions under which DES operates have been shifting with the exponential growth of the Internet, and what was considered safe a decade ago is not safe anymore. What this article intends to show is how weak DES now is, and what a determined attacker can do to break it using the Java programming language and the Linux operating system.

    1. Introduction

    Cryptography has always been used as an effective way of protecting sensitive information from non-authorised and curious eyes. Cryptography grew explosively with the Second World War, during which the Allies, the Nazis and the Japanese all used this technology successfully. Basically, cryptography is the process that allows communications to be kept private. Encryption and decryption are two of the central issues in cryptography, but not the only ones; issues like digital signatures and digital certificates are equally important. Nowadays, two different kinds of cryptography are used: secret-key or symmetric cryptography, and public-key or asymmetric cryptography. The major difference between them is the key usage. In the first, only one key is used both for encryption and decryption; in the second, two different keys are used, one for encryption and the other for decryption. Normally, the public key is freely distributed, while the private key is kept secret. All the security of a public-key system lies in the secrecy of the private key. In this article, our attention will be focused mainly on the first case, secret-key cryptography. DES is just one of the algorithms that use secret-key cryptography; others, like IDEA (International Data Encryption Algorithm), RC2 (Ron's Code 2) and RC5 (Ron's Code 5), use it as well. DES (Data Encryption Standard) is a block cipher algorithm that was defined and adopted by the US government in 1977 as an official standard. It was originally developed by IBM in 1970 under the name of LUCIFER, and was rapidly adopted as the most used cryptographic system in the world [1]. DES is used mainly by banks and financial institutions.

    2. Attacks on cryptography

    The main goal of cryptography is to keep information secret from non-authorised people. Cryptanalysis is the subtopic of cryptography that tries to break that secrecy: to find the original text from a given encrypted text without knowing the key used for its encryption. A successful cryptanalysis can find both the original text and the key used. There are several methods for performing attacks on these algorithms, and the success or failure of each is strongly related to the amount of information the attacker can obtain from both the ciphertext and the plaintext [2]. The most used attacking methods are classified accordingly: ciphertext-only, known-plaintext and chosen-plaintext attacks.

    3. Key robustness

    The issue of key robustness has always been much discussed. The bigger the cryptographic key, the stronger the ciphertext generated. However, a bigger key also makes the cryptographic system more demanding in terms of processing power.

            +------------+-----------------------------------+
            |    Year    | Millions of Encryption per Second |
            +------------+-----------------------------------+
            |    1995    |                4                  |
            +------------+-----------------------------------+
            |    2000    |               32                  |
            +------------+-----------------------------------+
            |    2005    |               256                 |
            +------------+-----------------------------------+
    
            +------------+--------------+--------------+--------------+
            |  Key size  |     1995     |      2000    |      2005    |
            +------------+--------------+--------------+--------------+
            |  40 bits   |  68 seconds  |  8.6 seconds | 1.07 seconds |
            +------------+--------------+--------------+--------------+
            |  56 bits   |  7.4 weeks   |  6.5 days    |   19 hours   |
            +------------+--------------+--------------+--------------+
            |  64 bits   |  36.7 years  |  4.6 years   |  6.9 months  |
            +------------+--------------+--------------+--------------+
            | 128 bits   | 6.7e17 Myears| 8.4e16 Myears|1.1e16 Myears |
            +------------+--------------+--------------+--------------+
    
            Table 1 - Key robustness computation based on key size and on hardware
            with 4000 dedicated chips. Source: Department of Computer Science,
            University of Bristol, 1996
    
            +------------+-----------------------------------+
            |    Year    | Millions of Encryption per Second |
            +------------+-----------------------------------+
            |    1995    |               50                  |
            +------------+-----------------------------------+
            |    2000    |               400                 |
            +------------+-----------------------------------+
            |    2005    |               3200                |
            +------------+-----------------------------------+
    
            +------------+--------------+--------------+--------------+
            |  Key size  |     1995     |      2000    |      2005    |
            +------------+--------------+--------------+--------------+
            |  40 bits   |   1.3 days   |   3.8 hours  |  28.6 minutes|
            +------------+--------------+--------------+--------------+
            |  56 bits   |   228 years  |   28.6 years |   3.6 years  |
            +------------+--------------+--------------+--------------+
            |  64 bits   | 58.5 Myears  |   7.3 Myears |   914 years  |
            +------------+--------------+--------------+--------------+
            | 128 bits   | 1.1e12 Myears| 1.3e30 Myears| 1.7e19 Myears|
            +------------+--------------+--------------+--------------+
    
            Table 2 - Key robustness computation based on key size and on 200
            dedicated PCs. Source: Department of Computer Science, University of
            Bristol, 1996
    

    The robustness of keys depends highly on the available processing capacity: the bigger the processing power, the less time is needed to crack a given key. On the other hand, as processing power increases, so do key sizes, and consequently the robustness of cryptographic algorithms. What seems clear is that keys that were considered safe two or three years ago are not safe at all now.

    4. DES - Data Encryption Standard

    In 1972, the National Bureau of Standards (now known as NIST) launched a request to the scientific community to conceive a new cryptographic algorithm. This new algorithm should have the following characteristics:

    1. High level of security
    2. Completely specified and easy to understand
    3. Available to everyone
    4. Adaptable
    5. Efficient enough to implement in a computer.

    In 1974, IBM answered this request with an algorithm named LUCIFER (later called DEA - Data Encryption Algorithm - or DES). Finally, in 1976, DES was adopted as a standard in the US.

    DES has remained an international standard for twenty years. Although it is finally showing some signs of its age, it is still considered one of the strongest and most efficient algorithms in the world.

    4.1. How DES works

    DES is a block cipher: it encrypts 64-bit blocks. A set of 64 bits of plaintext enters the algorithm, and a set of 64 bits of ciphertext comes out of it.

    DES is a symmetric-key algorithm and uses the same key for the encryption and decryption processes. The key size is 56 bits (the key is normally represented by 64 bits, of which every eighth bit is used just for parity checking).

    At its simplest level, the algorithm is based on two simple principles: diffusion and confusion. DES applies substitutions followed by permutations to the plaintext, based on a given key. This process is called a round, and DES applies it 16 times. The following scheme gives a more detailed look at DES.


    Figure 1 - A detailed scheme showing how DES works

    A more detailed explanation is outside the scope of this article. More information can be found on any good book about cryptography.
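
    Since the attack described later is written in Java, it is worth seeing what a single DES encryption looks like from Java. The following is a minimal sketch using the javax.crypto API (the Java Cryptography Extension); the key bytes are arbitrary example values:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class DesBlock {
        public static void main(String[] args) throws Exception {
            // An arbitrary 64-bit key; every eighth bit is only a parity bit.
            byte[] keyBytes = { 0x13, 0x34, 0x57, 0x79,
                                (byte) 0x9b, (byte) 0xbc, (byte) 0xdf, (byte) 0xf1 };
            SecretKeySpec key = new SecretKeySpec(keyBytes, "DES");

            // One 64-bit block of plaintext in, one 64-bit block of ciphertext out.
            Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
            des.init(Cipher.ENCRYPT_MODE, key);
            byte[] plain = "8 bytes!".getBytes("US-ASCII"); // exactly one block
            byte[] cipherText = des.doFinal(plain);

            for (byte b : cipherText)
                System.out.printf("%02x", b);
            System.out.println();
        }
    }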

    5. RSA Labs Cryptographic challenges

    The RSA Laboratories Data Security Division promotes and maintains a set of cryptographic challenges as research tools [3]. Some of the challenges RSA presently holds are:

    1. RSA factoring challenge
    2. Secret-key challenge
    3. DES challenge

    5.1. DES challenge

    The original DES challenge was launched in January 1997, with the aim of demonstrating that 56-bit security, such as that offered by the US government's DES, offers only marginal protection against a committed adversary. This was confirmed when the secret key used for the encryption was recovered on June 17, 1997. Since then it has been widely acknowledged that much faster exhaustive search efforts are possible, and DES challenge II is intended to show how fast.

    While the original challenge showed that DES is crackable by an exhaustive search attack, the goal of the new DES challenge is to see how quickly such an attack can be accomplished, to help judge the true vulnerability of DES.

    Twice a year, on January 13 and July 13, at 9:00 AM Pacific Time, a new contest will be posted on the RSA homepage. The contest will consist of the ciphertext produced by DES-encrypting some unknown plaintext message that has a fixed and known message header. The first to recover the key wins, and the amount of the prize will depend on how fast the key was recovered.

    5.2. DES challenge details

    For each contest, the unknown message will be preceded by three known blocks of text containing the 24-character phrase: ``The unknown message is:``. While the mystery text that follows will clearly be known to a few employees of RSA Data Security, the secret key actually used for the encryption will be generated at random and destroyed within the challenge-generating software. The key will never be revealed to anyone.

    The goal of each contest is for participants to recover the secret, randomly generated key that was used in the encryption, in a faster time than was required by the earlier challenges in the series.

    6. Breaking DES

    Many problems require a large amount of computational power to reach a solution. Some problems, though, are amenable to an extremely high level of parallelization, and with today's Internet it is possible to broaden the reach of any large-scale effort to previously unanticipated levels.

    Breaking DES is one of these problems. It requires a rather large amount of computational power: even for a 56-bit key, it is a very difficult task to perform.

    The "best" strategy for breaking DES is to perform a brute force attack. This means testing all the possible keys and analysing and comparing the results. A 56-bit key means approximately 2^56 (about 7.2 x 10^16) possible combinations. Even with today's common computational power this takes a while: at a billion keys per second, covering the whole key space would still take more than two years.

    Breaking DES has the classic flavour of the problems in NP: no way is known to find the solution (the key) in polynomial time, but given a candidate solution, it is possible to check quickly whether it is valid or not. Many cryptographic problems share this character.
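
    That easy-to-check property is exactly what a brute-force search exploits. Here is a minimal sketch of the inner loop in Java, assuming we hold one ciphertext block and the known plaintext block it must decrypt to (as with the challenge's fixed header); the class and method names are purely illustrative:

    import java.util.Arrays;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class KeySearch {
        // Expand a 56-bit candidate into the 8 key bytes DES expects;
        // the low bit of each byte is a parity bit, ignored by the cipher.
        static byte[] expandKey(long candidate) {
            byte[] k = new byte[8];
            for (int i = 7; i >= 0; i--) {
                k[i] = (byte) ((candidate & 0x7f) << 1);
                candidate >>>= 7;
            }
            return k;
        }

        // Search [from, to) for a key that decrypts cipherBlock to knownBlock;
        // returns the key, or -1 if the range contains no match.
        static long search(long from, long to, byte[] cipherBlock,
                           byte[] knownBlock) throws Exception {
            Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
            for (long c = from; c < to; c++) {
                des.init(Cipher.DECRYPT_MODE,
                         new SecretKeySpec(expandKey(c), "DES"));
                if (Arrays.equals(des.doFinal(cipherBlock), knownBlock))
                    return c;
            }
            return -1;
        }
    }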

    7. General architecture

    The first aspect to consider when planning a general architecture for breaking DES is to choose between a hardware or software attack.

    DES can be very easily implemented on a hardware chip, and lots of these chips can be used for breaking DES. One of the most obvious advantages of this approach is the resulting processing power. On the other hand, one of the problems arising from this architecture is its large cost.

    Another possible approach for a general architecture is to use a software attack. One of the problems with this kind of architecture is processing power: if only one computer is used, the processing power obtained is quite disappointing. On the other hand, this type of attack is very easy to implement and is also less expensive than the previous one.

    However, more interesting results can be achieved when using a distributed computing attack, based on the computational power increase obtained by joining the computing power of several computers.

    8. Distributed computing

    Distributed computing can be easily described as the effort to unify multiple networked machines in such a way that information or other resources can be shared by all of these connected computers. The hope is that sharing can take place over large areas, many machines, and many users, unifying them in a consistent and coherent framework.

    Distributed computing became a field of study when computer hardware evolved from the mainframe, where everyone shared all the resources of a single machine, to the minicomputer. The minicomputer made it necessary for two people (or programs) to work together or share resources when they were on different machines. Co-ordinating that work, or providing access to those resources, is the goal of distributed computing.

    Interest in distributed computing has increased with the advent of individual workstations and networked PCs, mainly with the spread of the Internet. Because these are single user machines, the need to share information, computing resources or data resources became immediate as soon as there was any joint work to be done.

    9. Specific architecture

    As was stated before, one of the possible ways of building an architecture to break DES is through a distributed computing architecture.

    However, some considerations should be taken into account: for instance, what the best configuration is, and what its constraints are.

    There is no doubt that, in order to profit from the availability of the Internet's computing power, the best solution is to implement a typical client-server architecture: a server for distributing the keys, and clients to do the hard work - test all the keys and check the results.


    Figure 2 - Overview of the distributed architecture

    9.1. Considerations

    There are different approaches to an exhaustive search. The major consideration is whether the search is co-ordinated by some central server, or whether multiple processes start at random positions in the search space and run independently until a key is found.

    The use of a central server poses some difficulties: as well as being a single point of failure, it creates the potential for network congestion and failures.

    A variety of precautions should be taken into account. Servers can be networked into a hierarchy, or replicated if resources allow, so that points of failure are less catastrophic. In addition, clients can test themselves to provide some level of assurance against malfunction, and servers can conduct more explicit testing on the clients to guard against malevolent ones. The server can have a client report on server-fabricated problems whose answers can be checked at very low cost. Alternatively, a client could calculate a checksum over all attempted solutions in the range examined, and another client of the same architecture could verify it.
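
    As a minimal sketch of that last idea - the XOR "trial decryption" below is only a stand-in for real DES (see section 10.2.3), and all names are illustrative:

    import java.util.zip.CRC32;

    public class RangeChecksum {
        // Stand-in for a real DES trial decryption (see section 10.2.3).
        static byte[] tryDecrypt(long key, byte[] ciphertext) {
            byte[] out = new byte[ciphertext.length];
            for (int i = 0; i < ciphertext.length; i++)
                out[i] = (byte) (ciphertext[i] ^ (key >>> (8 * (i % 8))));
            return out;
        }

        // Fold every trial result in [start, end) into a single CRC32 value.
        // A second client can repeat the same range and compare checksums.
        public static long checksumRange(long start, long end, byte[] ciphertext) {
            CRC32 crc = new CRC32();
            for (long key = start; key < end; key++)
                crc.update(tryDecrypt(key, ciphertext));
            return crc.getValue();
        }
    }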

    9.2. Functionality

    Although the client-server architecture presents some problems, it is still the easiest and cheapest one to implement and maintain.

    The basic idea is to have a central server that is in charge of performing simple tasks, like adding new clients to the contest, distributing new groups of keys, and verifying and updating the results.

    The server itself does only the easy work. The hard work of trying all the possible keys and checking the results is done by the set of clients. The more clients the system has, the faster the key-space is covered.

    The functionality of the server can be easily summarised in the following scheme.


    Figure 3 - DES key server functionality
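
    In Java terms (anticipating the RMI machinery described in section 10.2.1), the server's contract with its clients could be captured in a small remote interface. The interface and method names below are assumptions for illustration; the article does not specify them:

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    public interface KeyServer extends Remote {
        // Register a client and return an identifier for it.
        String register(String hostname) throws RemoteException;
        // Hand out the next unexamined range of keys as [start, end).
        long[] getKeyRange(String clientId) throws RemoteException;
        // Called when a candidate key matches the known plaintext.
        void reportFound(String clientId, long key) throws RemoteException;
    }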

    The functionality of the client can also be summarized this way:

    Basically, the main functionality of the client software is to test the keys distributed by the key server, analysing each result to see whether the key is the right one.

    If it is the right key, the client communicates with the server and reports that it has found the key.


    Figure 4 - DES key client functionality
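
    A client's main loop against the hypothetical KeyServer interface sketched above might look like this; the server URL and client name are placeholders, and the trial decryption itself is filled in under section 10.2.3:

    import java.rmi.Naming;

    public class KeyClient {
        public static void main(String[] args) throws Exception {
            // Look up the remote server object (placeholder URL).
            KeyServer server =
                (KeyServer) Naming.lookup("rmi://keyserver.example.org/KeyServer");
            String id = server.register("client-1");
            while (true) {
                long[] range = server.getKeyRange(id);  // next [start, end) to test
                for (long key = range[0]; key < range[1]; key++) {
                    if (matchesKnownPlaintext(key)) {   // trial DES decryption
                        server.reportFound(id, key);
                        return;
                    }
                }
            }
        }

        // Stub; a real client would try a DES decryption here (see 10.2.3).
        static boolean matchesKnownPlaintext(long key) { return false; }
    }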

    10. Implementation

    There is no doubt that the best architecture for facing the DES challenge is a client-server distributed architecture over the Internet.

    In order to implement such an architecture, some important decisions have to be made, like choosing the best deployment platforms and the right tools.

    10.1. Linux

    Linux is a POSIX-compliant operating system designed to run on the Intel architecture. The system also has extensions to accommodate System V and BSD requirements.

    This OS is licensed under the GNU General Public License and is, as such, freely distributable, provided the source accompanies the distribution or is at least made available to the recipient.

    Linux runs on Intel processors 386 and later that are capable of utilising the 386 protected mode.

    For a strictly minimal installation you can expect to need about 10-15 MB of disk space and 8 MB of RAM. It is possible to run it in 4 MB, but consider 8 MB a reasonable minimum for a text-based installation. In order to use X (the Unix windowing system) with any reasonable efficiency, expect to need at least about 16 MB of RAM and 300 MB of disk space.

    The system is currently capable of compiling and running POSIX-compliant Unix programs, as well as DOS programs through the use of DOSEMU. Windows applications run on Linux with limited success through the use of Wine (a Windows compatibility layer).

    Although Linux is usually referred to in conjunction with 386/486/Pentium machines, it also runs on, or is currently being ported to, other architectures (e.g., DEC Alpha, Sun SPARC, MIPS, PowerPC, and PowerMac).

    10.2. Java

    Java is an object oriented programming language developed by Sun Microsystems, and now further developed by its JavaSoft subsidiary.

    At first glance it resembles C and C++, but it's different under the hood. Java is both a compiled and an interpreted language. Its source code is compiled into a universal form - bytecodes for a virtual machine - that can easily be ported across the Internet and interpreted and run on many platforms.

    Compared to other programming languages, Java is simpler, more robust, and more cost-effective. It allows an application to be developed with a minimum of debugging and to be instantly portable to many operating systems. Compared to other Internet solutions, Java offers unmatched performance and versatility while minimising the strain on web servers by distributing the processing load to the client machines.

    Java also possesses a series of additional APIs that allow the fast development of complex applications, including networking, distributed method invocation, database connectivity and cryptographic support.

    10.2.1. Java RMI

    Java RMI (Remote Method Invocation) technology is the basis for distributed computing in the Java environment. Because Java RMI was created after broad acceptance of the Internet and object oriented design, developers are treated to a dynamic and flexible environment for building distributed applications.

    Figure 5 - The behaviour of Java RMI technology in a distributed application

    Because of Java RMI technology, developers can now:

    1. Easily create powerful distributed computing applications and network services for Java and non-Java environments.
    2. Use Java RMI, a single programming interface, for object communication in distributed applications.
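
    On the server side, publishing a remote object takes only a few lines. The sketch below implements the hypothetical KeyServer interface from section 9.2 and binds it in a local RMI registry; all names and the work-unit size remain illustrative:

    import java.rmi.Naming;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.server.UnicastRemoteObject;

    public class KeyServerImpl extends UnicastRemoteObject implements KeyServer {
        private long next = 0;                       // next key range to hand out
        private static final long CHUNK = 1L << 24;  // keys per work unit (assumed)

        protected KeyServerImpl() throws RemoteException { super(); }

        public String register(String hostname) {
            return hostname + "-" + System.currentTimeMillis();
        }

        public synchronized long[] getKeyRange(String clientId) {
            long[] range = { next, next + CHUNK };
            next += CHUNK;
            return range;
        }

        public void reportFound(String clientId, long key) {
            System.out.println(clientId + " found key " + key);
        }

        public static void main(String[] args) throws Exception {
            LocateRegistry.createRegistry(1099);           // in-process registry
            Naming.rebind("KeyServer", new KeyServerImpl());
            System.out.println("KeyServer ready.");
        }
    }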

    10.2.2. JDBC

    JDBC (Java Database Connectivity) is an application program interface (API) specification for connecting programs written in Java to the data in popular databases. The API lets you encode access requests in structured query language (SQL), which are then passed to the program that manages the database; the results come back through the same interface. JDBC is very similar to Microsoft's Open Database Connectivity (ODBC) and, with a small "bridge" program, you can use the JDBC interface to access databases through ODBC. For example, a program designed to access many popular database products on a number of operating system platforms could, with JDBC statements, reach a Microsoft Access database on a PC running Windows 95.

    JDBC actually has two levels of interface. In addition to the main interface, there is also an API from a JDBC "manager" that in turn communicates with individual database product "drivers", the JDBC-ODBC bridge if necessary, and a JDBC network driver when the Java program is running in a network environment (that is, accessing a remote database).

    When accessing a remote database, JDBC takes advantage of the Internet's file addressing scheme and a file name looks much like a Web page address (or URL). For example, a Java SQL statement might identify the database as:

    jdbc:odbc://www.somecompany.com:400/databasefile 
    

    JDBC specifies a set of object-oriented classes for the programmer to use in building SQL requests. An additional set of classes describes the JDBC driver API. The most common SQL data types, mapped to Java data types, are supported. The API provides implementation-specific support for transactional requests and the ability to commit or roll back a transaction.
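
    As a brief example of the API in use - the driver class, URL, table and column names are all assumptions for illustration, and the PostgreSQL driver's class name has varied between releases:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ContestQuery {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");    // load the JDBC driver
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://dbhost.example.org/descontest", "des", "secret");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery(
                "SELECT client_id, range_start, range_end FROM key_ranges");
            while (rs.next())
                System.out.println(rs.getString("client_id") + ": "
                    + rs.getLong("range_start") + ".." + rs.getLong("range_end"));
            rs.close(); st.close(); con.close();
        }
    }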

    10.2.3. JCE

    The Java Cryptography Extension (JCE) extends the Java Cryptography Architecture (JCA) API with additional features supporting encryption and key exchange.

    The Java Cryptography Extension (JCE) is a set of APIs and implementations of cryptographic functionality, including symmetric, asymmetric, stream, and block encryption. It supplements the security functionality of the default Java JDK 1.1.x / JDK 1.2, which itself includes digital signatures (DSA) and message digests (MD5, SHA).

    The architecture of the JCE follows the same design principles found elsewhere in the JCA.
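
    For this project the essential JCE operation is a trial decryption compared against the known plaintext. A minimal sketch, assuming the ciphertext is a whole number of DES blocks and that a provider supplying the "DES" algorithm is installed:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class TrialDecrypt {
        // Try one 8-byte candidate key; return true if the decrypted output
        // begins with the plaintext fragment the contest reveals.
        static boolean tryKey(byte[] key8, byte[] ciphertext, byte[] knownPlaintext)
                throws Exception {
            SecretKeySpec key = new SecretKeySpec(key8, "DES");
            Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
            des.init(Cipher.DECRYPT_MODE, key);
            byte[] out = des.doFinal(ciphertext);
            for (int i = 0; i < knownPlaintext.length; i++)
                if (out[i] != knownPlaintext[i]) return false;
            return true;
        }
    }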

    10.2.4. PostgreSQL

    Traditional relational database management systems (DBMSs) support a data model consisting of a collection of named relations, containing attributes of a specific type.

    In current commercial systems, possible types include floating point numbers, integers, character strings, money, and dates. It is commonly recognised that this model is inadequate for future data processing applications. The relational model successfully replaced previous models in part because of its "Spartan simplicity". However, as mentioned, this simplicity often makes the implementation of certain applications very difficult. Postgres offers substantial additional power by incorporating the following four additional basic concepts in such a way that users can easily extend the system:
    1. classes
    2. inheritance
    3. types
    4. functions

    Other features provide additional power and flexibility:
    1. constraints
    2. triggers
    3. rules
    4. transaction integrity

    These features put Postgres into the category of databases referred to as object-relational. Note that this is distinct from those referred to as object-oriented, which in general are not as well suited to supporting the traditional relational database languages. So, although Postgres has some object-oriented features, it is firmly in the relational database world. In fact, some commercial databases have recently incorporated features pioneered by Postgres.

    PostgreSQL is then a sophisticated Object-Relational DBMS, supporting almost all SQL constructs, including transactions, sub-selects and user-defined types and functions. It is the most advanced open-source database available anywhere.

    10.3. Putting everything together

    Now, with the tools and the architecture in mind, it is necessary to put everything together.

    The chosen development platform is Linux, because of its qualities as a development system. The language is Java, because of its ease of use and portability.

    On the server side, one of the most important things to be defined is the server database. This database is quite important, since it stores key information about the contest: detailed information about the clients, the set of keys each client is currently processing, and the last key issued, among other things.

    The database will be built on PostgreSQL, the free object-relational DBMS described above. The server software, developed entirely in Java, will interface with the database through the JDBC API.
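
    The article does not give the exact schema, but the information just described suggests something along these lines, shown here as JDBC setup code; the table and column names are guesses for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateSchema {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://dbhost.example.org/descontest", "des", "secret");
            Statement st = con.createStatement();
            // One row per participating client.
            st.executeUpdate("CREATE TABLE clients (client_id text PRIMARY KEY, "
                + "hostname text, joined timestamp)");
            // Which range of keys each client is currently processing.
            st.executeUpdate("CREATE TABLE key_ranges (client_id text, "
                + "range_start int8, range_end int8, done bool)");
            st.close(); con.close();
        }
    }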


    Figure 6 - Client-Server final architecture configuration

    One of the main functions of any server is to wait for and receive requests from clients, and the DES key server behaves the same way: it starts, waits, and processes requests. Communicating with the clients requires some network functionality; in this case, the server receives requests from the clients through RMI, which was chosen because it adds a layer of abstraction between the Java program and the complexities of the network.

    On the client side, one of the most important things to be implemented is the cryptographic functionality. The client software must be able to run the DES algorithm. Since it knows part of the plaintext as well as the full secret message, it has to be able to try to decrypt the secret message using a key supplied by the server and compare the result with the partial plaintext.

    Like the server, the client software is implemented entirely in Java. This allows a large number of different computers and platforms to join the contest rapidly, considerably enlarging the computational power available to solve the proposed problem as quickly as possible - finding the correct DES key.

    11. Conclusion

    56-bit DES is not safe anymore. It is clear that with today's computing power, and with increasing network capabilities such as the Internet, it is easy to set up an architecture for cracking a DES key.

    Linux and Java are two of the tools that make such architectures easy to create and available to almost everyone: Linux, because it is a simple, fast, powerful and free operating system on which powerful servers can be built; Java, because it is easy to learn and architecture-independent, allowing fast development for a large number of different platforms.

    References

    [1] RSA Laboratories - Cryptographic Research and Consultation, "Answers to Frequently Asked Questions About Today's Cryptography - Version 3.0", RSA Data Security, 1996

    [2] Schneier, Bruce, "Applied Cryptography - Protocols, Algorithms and Source Code in C", John Wiley & Sons, Inc., 1996

    [3] "DES Challenge II", RSA Laboratories, RSA Data Security, http://www.rsa.com/rsalabs/des2, 1997


    Copyright © 1999, Carlos Serrao
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    HTML Editor ++

    By Martin Skjoldebrand


    Being among the first to port software from Windows to Linux can't be easy, especially if you do not have any feeling for the Linux community. Still, it is nice to see software that you have played with - and sort of liked - under your new OS. This is what has happened with CoffeeCup Software's HTML Editor ++.

    [ screenshot ]

    First let me say that, feature-vs-ease-of-use wise, it is superior to just about anything on Linux. I feel Amaya would be the closest rival if it just weren't so eccentric. (At least I think it's eccentric.) In HTML Editor ++ you get wizards, albeit limited ones, for creating tables and frames. You get three rows of buttons across the top and a set of handy menu shortcuts along the right-hand side of the screen (align left, right, centre, new paragraph, new line, etc.). And you also get - which I think shows a lack of understanding of Linux culture - a big, no, gigantic, shareware banner across the top of the screen. This is annoying but tolerable if you decide to use the editor. When you pay for this piece of software it presumably disappears as if by magic. There is nothing wrong with that, but the banner is so large it actually provokes cracking. Another Windows standard that followed the product onto Linux is that it is time-bombed: it stops working after 30 days. This, together with the banner, brought out the anarchistic side of me, and the time-bomb "feature" at least is extremely easy to invalidate. CoffeeCup Software has a lot to learn from J-M Jacquet and his IglooFTP-PRO, which is also shareware but implements its nagging in a more subtle manner.

    A main "selling point" is the directory and file navigation system pioneered by HomeSite. On the left side of the screen you get two boxes, one for directories and one for files. You navigate you site tree by double clicking on the directory name and open the file by clicking on the file name in the box below. Brilliantly easy.

    Now, I don't have anything against shareware (I used to use several paid-for shareware applications on Windows). But there is a problem with HTML Editor ++: it's flawed. In the brief period I've tried it out, I've discovered several bugs. Most irritating is that it seems to handle existing documents badly. Not that it adds copious amounts of its own code - it doesn't - but the formatting tools freak out regularly.

    For example: open a document, place the insertion point somewhere in the text and hit the hyperlink button on the right. You get a screen in which to fill in the text and the link. Click "Cool". You would expect it to insert the text and link at the insertion point, right? Unfortunately it doesn't; just now it scrolled down to the bottom of the page and inserted the text and link there. What you have to do is select a word, hit the hyperlink button, insert the link text and hit "Cool". (That didn't work just now either, but it usually does.) HTML Editor ++ is plagued by this kind of problem.

    Another example is the "New line" button, which may insert the <br> tag at any random location in the text (although the bottom of the page is preferred).

    Most of these problems go away if you write a document from scratch; then everything works as intended. Don't ask me why - I didn't write this port. And you can't help out either, since it is not open-source software. The most serious problem, though, is that HTML Editor ++ is unstable. Various simple formatting actions can crash the editor - just inserting a new <p> tag, for example. It took a couple of pages to discover this, but one very good piece of advice is, ironically enough, the same as for much software with a Windows background: save your work often.

    A last complaint is the "Save" function, which is just plain daft. In a graphical application you really shouldn't be forced to enter the path manually. And why do you get a confirmation screen that you have to click "OK" on to make it go away?

    In all, HTML Editor ++ is a promising piece of ported software. It needs much more work to become stable and bug-free, but for those patient enough it is quite nice to work with.


    Copyright © 1999, Martin Skjoldebrand
    Published in Issue 46 of Linux Gazette, October 1999

    "Linux Gazette...making Linux just a little more fun!"


    Creating A Linux Certification Program, Part 7

    By Dan York


    A Word of Thanks

    One year ago, in the October 1998 Linux Gazette, there appeared a brief article laying out the reasons why I felt a professional certification program would benefit Linux. It concluded with several questions and asked how I could join in the discussion:

    If you agree that a certification program can be beneficial for the growth of Linux, how do we as a community go about addressing the points I made above about creating a certification program? Do we create another mailing-list or newsgroup? (Does such a group or list already exist? If so, I have so far failed to find it.) Do we meet at a conference? ...

    ...I don't necessarily have the answers - but I would like to participate in the discussion. If someone can suggest the appropriate forum in which this discussion should take place (or is currently taking place!), please let me know.

    I had absolutely no idea that this article would lead down the paths that it did! Looking back, I think my reasons still stand, but certainly my own thinking has evolved on how a program is structured as we all have discussed and strategized. It's been an interesting journey! And one I never would have anticipated...

    And at this moment, I think it is important to pause and say a word of thanks to all who have joined in. The effort that became the Linux Professional Institute could not have happened without the incredible support we have received from the Linux community.

    Now, less than one year after discussion began (the mailing list started in November), we stand nearing completion of the first LPI exam... and it could not have happened without those of you reading this article.

    What all has happened? Let's take a short look:

    Beyond all that, we sent an awful lot of e-mail! We've discussed things, argued, praised, fought, even had a flame-war or two... and in the end worked professionally to develop a whole series of Consensus Points outlining our ideas and decisions. We've spoken at conferences, issued news releases, written articles, held meetings and done a hundred other things. We've spent a lot of long hours and sometimes stressed our relations with our spouses. We've worked hard but have also had some fun along the way. Many of us have become friends through the process. Many of us have had new professional opportunities presented to us through our involvement. We've learned an amazing amount... and truly demonstrated the power of a group of people working together to accomplish a common goal!

    And none of this could have happened at the speed and scale that it did without all the people who chose to join in. The subscribers to our mailing lists... the participants in BOF sessions at conferences... the financial sponsors - both corporate (especially Caldera Systems, IBM, Linuxcare and SuSE, along with Wave Technologies and SGI) and individual - who have backed up their belief in the LPI program through solid financial contributions... the Board members... the people who have visited our web site... those helping with publicity... and all the many others throughout the globe helping our effort move forward in some small way.

    And now today, as we continue to accept and review questions that will enable us to release our first exam within the next few weeks, all that we can say is THANK YOU for helping to make this dream a reality!

    We do, of course, have a lot more to do! This is just the beginning... and we will need the help of all of those who have helped in the past - and many, many more - to continue to move the program along. Please join with us! Visit our web site at http://www.lpi.org/ or read my July 1999 Linux Gazette article for tips about how you can help.

    It's been a long and amazing journey since that first small article a year ago! Thank you all for your support!


    Previous ``Linux Certification'' Columns

    Linux Certification Part #1, October 1998
    Linux Certification Part #2, November 1998
    Linux Certification Part #3, December 1998
    Linux Certification Part #4, February 1999
    Linux Certification Part #5, Mid-April 1999
    Linux Certification Part #6, July 1999


    Copyright © 1999, Dan York
    Published in Issue 46 of Linux Gazette, October 1999


    The Back Page


    About This Month's Authors


    Husain Al-Mohssen

    Husain is a Mechanical Engineer working in Saudi Arabia who is badly addicted to computers. When he is not trying to build his own distribution he can be found reading up on Quantum Mechanics or experimenting with Mathematica. He can be reached on (all flames happily collected in ~/nsmail/Funny).

    Jim Dennis

    Jim is the proprietor of Starshine Technical Services and is now working for LinuxCare. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/Peter Norton Group and McAfee Associates, as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the second edition of a book on Unix systems administration. Jim is an avid science fiction fan and was married at the World Science Fiction Convention in Anaheim.

    David Fauthoux

    [ caricature ]

    David Fauthoux (22 years old) is a French student in the DEA of Artificial Intelligence (five years after the Baccalauréat), specializing in action modal logic. He has four passions: AI logic, "out-of-algorithm" programming, role-playing games with kids, and reading (and re-reading) Quino's drawings.

    Jay Fink

    Jay is a UNIX/Linux systems administrator for "Ipsos-Asi The Advertising Research Company". He contributes to a variety of webzines and sites, and is the Editor of the UNIX & LINUX Computing Journal, which he hosts on his site www.diverge.org. His hobbies include delving into the Linux kernel internals, surfing (as in ocean surfing, not web surfing), and hiking (mainly to get away from computers every now and then).

    Michael J. Hammel

    A Computer Science graduate of Texas Tech University, Michael J. Hammel, [email protected], is a software developer specializing in X/Motif, living in Dallas, Texas (but calls Boulder, CO home for some reason). His background includes everything from data communications to GUI development to interactive cable systems, all based in Unix. He has worked for companies such as Nortel, Dell Computer, and Xi Graphics.

    Brian Marshall

    Brian has been a software developer in Calgary's oil and gas industry since 1981. He is deeply into C++ and object-oriented design. His career began with a vow never to learn Cobol; it progressed to never learning VB, and now involves never learning MFC. Brian first got into Unix in 1991, but he has only been using Linux for a few months.

    Bill Mote

    Bill is the Technical Support Services manager for a multi-billion dollar publishing company and is responsible for providing 1st and 2nd level support services to their 500+ roadwarrior sales force as well as their 3,500 workstation and laptop users. He was introduced to Linux by a good friend in 1996 and thought Slackware was the end-all-be-all of the OS world ... until he found Mandrake in early 1999. Since then he's used his documentation skills to help those new to Linux find their way.

    Mark Nielsen

    Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.

    nod

    nod (no capital "n") is a real name, due to his being born a love child of the seventies. nod did study physics and French to degree level, but wanted to become a windsurfing instructor, so became a freelance photographer. This proves that nod also moves in mysterious ways. He enjoys playing with computers and seeing what happens if you "click that" or "unscrew that".

    Mike Orr

    Mike is the Editor of the Linux Gazette. You can read what he has to say in the Back Page column in this issue. He has been a Linux enthusiast since 1991 and a Debian user since 1995. He is SSC's Webmaster. He also enjoys the Python programming language. Non-computer interests include ska/oi! music and the international language Esperanto.

    JC Pollman

    I have been playing with linux since kernel 1.0.59. I spend way too much time at the keyboard and even let my day job - the military - interfere once in a while. My biggest concern about linux is the lack of documentation for the intermediate user. There is already too much beginner's stuff, and the professional material is often beyond the new enthusiast.

    Carlos Serrão

    The author is a professor at ISCTE, a Portuguese public university specialized in Management and Computer Science, teaching subjects like IT Management and E-Commerce. The author is also an IT research and development consultant at ADETTI, a Portuguese R&D institution, where he specializes in information security, collaborating in several European Community IT projects: OKAPI, OCTALIS and OCCAM.

    Martin Skjöldebrand

    Martin is a former archaeologist who now does system administration for a 3rd world aid organisation. He also does web design and has been playing with computers since 1982 and Linux since 1997.

    Dan York

    Dan recently joined the staff of LinuxCare to work full-time on developing the Linux Professional Institute certification program. He has been working with the Internet and UNIX systems for 13 years, and with PCs since the early Apple computers in 1977. While his passion is with Linux, he has also spent the past three years working with Windows NT. Dan has written numerous articles for technical magazines and has spoken at various conferences within the training industry. He is now a member of the Certification committee of the Systems Administrators Guild (SAGE - a division of USENIX).


    Not Linux


    [ Penguin reading the Linux Gazette ]

    And now for the inauguration of the Linux Gazette Spam Count. This month, the Gazette received 284 letters. Of these, 79 were spam. October's SPAM COUNT is thus 28%.

    Aside from the usual get-rich-quick schemes, thigh creams, Viagra stuff, hardware/peripheral ads in Spanish and Japanese [note: the characters all turn into $##$% symbols in Latin-1], and investment opportunities "accidentally" sent to the wrong person, the most hilarious piece was:

    We offer Web Hosting for the following server platform: NT 4.0 Running IIS4 as low as $9.95/month, paid quarterly or annually, plus a $19.95 set-up fee.

    Did they think this was the NT Gazette?

    Doesn't the fact that we already have a web page show that we don't need web hosting services?

    P.S. They do offer "FrontPage 2000 extensions FREE!!!" (Yawn.)


    Linux Gazette Issue 46, October 1999, http://www.linuxgazette.com
    This page written and maintained by the Editor of Linux Gazette,
    Copyright © 1999 Specialized Systems Consultants, Inc.