LINUX GAZETTE

December 1999, Issue 48       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors  |  Search

Visit Our Sponsors:

Linux Journal
LinuxCare
InfoMagic
SuSE
Red Hat
LinuxMall
Cyclades

Table of Contents:

-------------------------------------------------------------

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette,

Copyright © 1996-1999 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at

Contents:


Help Wanted -- Article Ideas

Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to [email protected]. Answers that are copied to LG will be printed in the next issue in the Tips column.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.


 Mon, 1 Mar 1999 14:28:50 -0800
From: Vijaya Kittu <
Subject: Visual Basic for Linux ???

I'm looking for a good and easy "Basic" Language Port (such as VB in Windoze) on X-Windows.

If any one is using any Visual Development tool, kindly send comments back to me.


 Fri, 29 Oct 1999 16:22:22 +0200
From: <
Subject: connecting SPARC and 386 for printing purposes

I have a SPARCstation 10/20 with Red Hat 5.1/SPARC working fine on it, and I wanted to connect a printer to the system. Since the parallel port is not supported under S/Linux, I wanted to connect my SPARC to a cheap 386. The 386 runs MS-DOS.

I bought an ethernet card for my 386 and connected the on-board ethernet card on the SPARC to the ethernet card on the 386. The cable is twisted-pair, since only two computers are to be connected.

If I turn on both computers, I can start a testing program for the ethernet card on the 386. It will wait for any external signal. When booting the SPARC (with the 386 listening for any activity), the SPARC checks 'eth0' but finds no other computer on the other side.

Questions: (1) How can I let my 386 tell the SPARC that it is there? (2) How can I make the 386 (that is, its parallel printer port) accessible from the SPARC?

Thanks in advance for any help!!


 Sat, 30 Oct 1999 21:19:01 +0200
From: Dr. Bodo Zimmermann DD-260 <
Subject: problems with tcp/ip in kernel 2.2.x

I have found problems with ftp and telnet (tn3270, x3270) in kernels 2.2.5 and 2.2.10 when connecting to IBM mainframes (VM/ESA). I cannot transfer any data!

There are no problems at all with the 2.0.x kernels.

Whom may I address?


 Sat, 30 Oct 1999 23:25:05 EDT
From: Richard Monte <
Subject: Running MS applications on Linux?

I was wondering if I can run my MS applications on Linux? What would I need?

If I can, what are the advantages and/or disadvantages of doing so?

Are there any legal issues related to running MS applications on a non-Windows OS?

-Rick


 Sun, 31 Oct 1999 12:54:24 +0100 (MET)
From: Mika Numminen <
Subject: Hmmm.... don't know how to categorize this..

I usually log into my machine with SSH from home (no console access) and I was wondering if there is a way to keep processes running after I log out. For instance, an ftp session downloading an ISO, etc.
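[Two standard approaches: nohup detaches a command from the terminal's hangup signal, and GNU screen gives you a whole session you can re-attach to later. A minimal sketch, where the short sh -c command stands in for a long ftp/wget download: -Ed.]

```shell
# nohup makes a command ignore SIGHUP, so it survives the SSH logout;
# '&' puts it in the background, with output collected in a log file.
# The sh -c '...sleep...' below is a stand-in for a long download.
nohup sh -c 'echo started; sleep 1; echo finished' > task.log 2>&1 &
wait $!            # only for this demonstration; normally you just log out
cat task.log       # the command ran to completion without a terminal

# Alternatively, GNU screen keeps an interactive session alive:
#   screen -S iso      start a named session, run the download inside,
#                      detach with Ctrl-A d, then log out
#   screen -r iso      re-attach from the next SSH login
```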


 Mon, 1 Nov 1999 18:53:32 -0600
From: lordj <
Subject: linux sound problems

I run sndconfig and the friggin thing just fails. It finds my card and sets up the IRQs and all that rot, and then when it tries to play the sound sample I get the failure "don't know what to do with CONFIG CTL002b/4886122 ... error parsing file" and stuff like that. What is wrong with it?


 Tue, 02 Nov 1999 18:21:51 +0800
From: Zon Hisham Bin Zainal Abidin <
Subject: Telnet 110 fails

Now that my LILO is back (up and running), it's time for something else.

I have 6-7 PCs in the office on a LAN. I am on RH 6.0 and the rest are on W98. I am trying to configure my PC as the email server for this small LAN.

I have managed to set up DNS correctly; a remote PC can resolve names against the DNS server. Then I went into Linuxconf and set up the email server portion. My Linux PC, by the way, is named svr and the domain is cma.com.

We are using Netscape as the email client. I entered svr.cma.com as the incoming and outgoing mail server. The Netscape client can send email but is unable to receive it, with the message: "Netscape's network connection was refused by the server svr.cma.com. The server may not be accepting connections or may be busy. Try connecting again later."

I did a telnet svr.cma.com 110 and got an "Unable to connect to remote host" message, but a telnet svr.cma.com 25 is OK. That explains why sending works but not receiving, right?

How do I fix this?

rgds.
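[A hedged guess: on a stock Red Hat 6.0 box, sendmail answers port 25 on its own, but the POP3 daemon is started from inetd and usually ships commented out, which would produce exactly this "connection refused" on port 110. A sketch of the fix, demonstrated on a stand-in file (sample.conf plays the role of /etc/inetd.conf): -Ed.]

```shell
# A leading '#' makes inetd ignore a service line. Stand-in config file:
cat > sample.conf <<'EOF'
#pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd ipop3d
EOF
# Strip the comment marker so inetd would start ipop3d on port 110:
sed 's/^#pop-3/pop-3/' sample.conf > sample.fixed
grep '^pop-3' sample.fixed
# On the real system, edit /etc/inetd.conf the same way, then reload:
#   killall -HUP inetd
```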


 Tue, 02 Nov 1999 20:53:57 +0000
From: Anatoli B. Titarev <
Subject: HELP with LT Win Modem and SB 16 Sound Blaster wanted!

I am a new Linux Red Hat 6.0 user.

I cannot connect to the Internet (so I am still using Windows 98). My LT WinModem, connected to COM3, is not working under Linux. (Someone wrote that trying to configure WinModems for Linux is a waste of time; is that true?)

Connection: 8 data bits, parity none, 1 stop bit. I told modemtool to use the serial port to which my modem is connected. I removed the word 'lock' in the /etc/ppp/options file.

I get 'modem is busy' errors in the kppp configuration program, or 'modem is locked' errors in other programs.

I also have a problem with my SB16 Sound Blaster. The sndconfig program doesn't help!

I am looking for anybody who can help me make my modem and Sound Blaster work with Linux.

Please help!!!


 Tue, 2 Nov 1999 16:09:01 -0600
From: smita narla <
Subject: Re: your mail

Hello, I'm doing a general survey on the testing techniques used in open source. I have a questionnaire with some 12 questions. Since you are a developer, can you answer this questionnaire? I'm doing this for one of my class projects and I need to send the questionnaire to some 200 developers. Can you send me the addresses of some other developers?

I'll be glad if you can send me some questions to improve my questionnaire.

Thank you, Smita

The Editor sent some suggestions, and Smita responded:

Hello, thanks for your response. I've gone to the sites you mentioned in your previous mail. I got some information, but not exactly what I'm looking for. I need some good multiple-choice questions for my survey of testing techniques in open source. Some of the questions I've found are:

2. Whenever you make a minor release, do you run complete regression tests and test for the bugs fixed in that release?

3. For every major/minor release, do you write tests for the bugs you fix?

4. Is any test planning carried out in parallel with development?

5. What is the acceptance criterion for the source you are releasing (based on the test results)?

6. Are you satisfied that the tests you plan to execute will cover all the required conditions?

But I don't think they will serve the exact purpose. I need some really good multiple-choice questions (which are easy to answer, without much thinking, from the developer's point of view).

I'll be glad if you could help with some good questions and some mailing lists of open-source developers.

Thank you, Smita


 Wed, 3 Nov 1999 00:14:11 +0100
From: cabotzen <
Subject: a problem


Hello, I'm French and I love Linux, but I am trying to install it on a Samsung notebook that has Win98. After I had installed all the packages, the installation stopped when it tried to recognize the mouse, and I can't continue the installation. CAN YOU HELP ME PLEASE. Goodbye


 Tue, 2 Nov 1999 18:14:44 -0500
From: Paul Nathaniel (NOL1NXP) <
Subject: Linux command questions

fsck /usr: What does this command do? What is its NT equivalent?

What does this command do: cat /etc/passwd; and what is its NT equivalent?
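[Briefly: fsck checks and repairs a filesystem (run it on unmounted filesystems); the nearest NT equivalent is chkdsk. cat /etc/passwd prints the user database, one colon-separated record per user; the closest NT analogue is the 'net user' command. A small illustration of the passwd format: -Ed.]

```shell
# /etc/passwd holds one record per user, with 7 colon-separated fields:
#   name:password:UID:GID:comment:home:shell
# cut extracts individual fields; here, just the login names:
cut -d: -f1 /etc/passwd
```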


 Wed, 03 Nov 1999 07:44:42 CST
From: Jim Bradley <
Subject: slowwwww ftp and partition problem

I have encountered two problems with linux (Mandrake 6.1) that I haven't figured out how to overcome.

The first problem is the logon speed for ftp. I have 3 desktops and a laptop networked with 100 Mbps ethernet. 2 of the 3 desktops dual-boot OS/2 and Linux; the remaining desktop and the laptop are Linux-only. If I try to connect to ftpd on Linux with an ftp client on either OS/2 or Linux, there is an approximately 3-5 minute wait before a logon prompt is returned. This is clearly not running properly! I can use a Linux client to an OS/2 ftpd and promptly get the logon prompt. I've tried "renice" at both -20 and +20 using KDE's task manager. What do I need to do to speed this up?

The second problem is: I have one machine with an 8 Gb drive partitioned into a 2Gb,6Gb, and 128k swap. Mandrake 6.1 is installed on the 2Gb partition, hda1. The 6Gb partition, hda3, is an ext2 partition. After linux boots, the 6Gb partition is mounted to /mnt/hda3. The problem that I've encountered is that when copying files to /mnt/hda3, it is the first partition that fills up, not the second. What's happening here? I was using kdisk to monitor the disk size when mirroring another site to /mnt/hda3, and it wasn't the correct partition that enlarged. After this occurred, totally filling the smaller partition, I could no longer umount the partition, either, getting a message "device is busy."

Any help to solve either of these problems??
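[Two hedged guesses. On the ftp delay: in.ftpd typically attempts a reverse DNS lookup of the connecting client and only prompts once the lookup times out; listing all four machines in /etc/hosts on the server usually cures multi-minute logons. On the partition problem: if /dev/hda3 never actually got mounted, everything copied "into" /mnt/hda3 lands in that directory on the root partition, which is exactly what fills hda1. df shows which filesystem a path really lives on: -Ed.]

```shell
# df -P <dir> reports the filesystem a directory is on. Before copying, run:
#   df -P /mnt/hda3     - the last column should say /mnt/hda3, not /
# If it says /, mount the partition first:
#   mount -t ext2 /dev/hda3 /mnt/hda3
# Demonstration on the root directory (runs anywhere): print the mount point
df -P / | awk 'NR==2 {print $6}'
```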


 Wed, 3 Nov 1999 10:49:32 -0600
From: Hunter, Kevin <
Subject: LinNeighborHood

Thu, 28 Oct 1999 14:54:48 +0100
From: Network Desktop User < >
Subject: Linneighbourhood
Hi, sorry to bother you with inconsequential mail, but I think you of all people should know this!! I'm looking for some software called LinNeighborhood. It's a network neighbourhood browser for Linux. I have scoured the net for it but to no avail!! Can you help??

http://www.bnro.de/~schmidjo/

From: Graeme Wood <

I think there is a GNOME program that lets you browse the network neighbourhood as in Win 9x. I do not know the name right now, but if you want I will figure it out.


 Wed, 03 Nov 1999 10:14:02 -0800
From: Chuck Newman <
Subject: Linux Communications

Is there a Linux based communications program similar to pcAnywhere or Carbon Copy? I need remote PC control. Don't need to transfer files.


 Mon, 08 Nov 1999 16:18:50 -0500
From: Sunshine Smith <
Subject: Installing RH6.0

I am trying to install RH 6.0 on a Thinkpad 760ED, unfortunately the cdrom (Teac 44E) does not appear on the supported cdrom drives, does anyone have any ideas other than buying an external (supported) cdrom drive.

Thank you


 Tue, 09 Nov 1999 10:36:19 +0200
From: Lucian Hanga <
Subject: about ESS Solo 1

ESS Solo 1. Unfortunately I have one!!!! If somebody knows how to make it work, please email me!!!!!!

10x


 Wed, 10 Nov 1999 16:54:20 -0000
From: Ben Huckel <
Subject: Virtual Terminals in windows

Can you offer any advice...? For my third-year project of a BSc (Hons) in computer science, I am currently considering implementing virtual terminals for Windows 95/98... Do you know if there is any info on this sort of thing, or has it already been done? Any help or ideas would be greatly appreciated.


 Thu, 11 Nov 1999 00:06:50 +0000
From: Nadeem Oozeer <
Subject: SDRAM Problem

I've got a PII with 128MB of PC100 SDRAM and 8MB of onboard AGP video, running Windows and Red Hat Linux 6.0. I've been told that the AGP uses 8MB of the SDRAM; that's why on Windows I get 120MB in System. On Linux, however, I get only 64MB of RAM in /proc/meminfo. How can I get all 128MB? I've tried append in lilo.conf, but it freezes my PC with strange codes.
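[A hedged sketch: onboard AGP video really does claim part of the SDRAM, and many BIOSes of that era misreport memory above 64MB to Linux, so the usual workaround is an explicit mem= parameter in /etc/lilo.conf. The 120M figure assumes the AGP chipset takes exactly 8MB; if the machine still freezes, lower the value in small steps, since overshooting into reserved space causes exactly that kind of hang. Paths and label below are assumed: -Ed.]

```
# /etc/lilo.conf excerpt (image path and label are examples):
image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        append="mem=120M"
# Rerun /sbin/lilo after editing, then reboot and check /proc/meminfo.
```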


 Thu, 11 Nov 1999 12:22:57 +0800
From: Brian <
Subject: How to set the Linux as a router!

I would like to set up a single PC as a router using Red Hat 6.0. However, I am having difficulty finding the related topics in books or journals. Can you tell me where I can find them? Thanks!

Best regards, Yvonne.
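[The heart of it is enabling IP forwarding; the kernel then routes packets between its interfaces, and ipchains adds filtering or masquerading on top if you need it. A minimal sketch for Red Hat 6.0: -Ed.]

```shell
# 0 means the kernel is not forwarding packets between interfaces, 1 means
# it is acting as a router. Check the current state:
cat /proc/sys/net/ipv4/ip_forward
# Enable it (as root):
#   echo 1 > /proc/sys/net/ipv4/ip_forward
# On Red Hat 6.0, make it permanent by setting this in /etc/sysconfig/network:
#   FORWARD_IPV4=yes
```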


 Thu, 11 Nov 1999 13:47:55 -0600
From: anthony <
Subject: Looking for open-source programmers

I need to start somewhere, so I'll start here. I am trying to find programmers interested in developing a new piece of software. I thought I would give Linux Open License types the first shot at it. We are looking for this solution for our small company, but are willing to share it to help defray development costs.

Can you help direct me? Or am I spinning my wheels?

The initial write up on the proposed application is attached. I know this isn't the normal way to develop software, but I wanted to give it a try.

Anthony O'Krongly, Dir. of I.T.

[If you just want to throw a project idea out and see if anybody is interested in helping, I'll put it in the Mailbag. (Which I'm doing now. :)

If it's a critical piece of software for your company and needs to be completed "soon" and with "professional quality", you might have a look at www.cosource.com and www.sourcexchange.com instead. They act as auction brokers between those willing to pay to help get open-source software developed and those who wish to work on such projects.

The GNU Gnats program may partially meet your requirements. It allows one to keep track of "job requests" in several categories, and to see which ones have been followed up on and what needs to be done with the remaining ones. The Tkgnats program is a GUI front-end. Perhaps this could link into your ordering and accounting system somehow. -Ed.]


 Thu, 11 Nov 1999 19:41:36 -0000
From: ggg <
Subject: help please

Please can you help me? I'm new to remote booting. I have the hardware: one network card with a boot ROM and one without, so I'm fine on the hardware side. But how do I configure Linux for remote booting? I am new to Linux, so I really haven't got the foggiest.


 Thu, 11 Nov 1999 17:20:41 -0800
From: Johnny Lam <
Subject: hi

Hi, I'm very new to Linux but I've got a question. It may sound silly, but please help me with it. I've just installed Linux 6.1 as a server, but for some reason I can't get X to work. Can you please tell me how I can use X on a server? Thank you very much.


 Thu, 11 Nov 1999 20:51:35 -0500
From: Gary R. Cook <
Subject: X-windows application for communicating over /dev/ttyS1 (COM2)

Does anyone know where I can find a utility (with source code) that uses an xterm window for communicating with a remote asynchronous device over /dev/ttyS1?

Thanks!!


 Sat, 13 Nov 1999 03:18:19 +1100
From: Greg W <
Subject: Hello

I am sure you get swamped with questions; now that I have acknowledged that, I feel better about asking. Do you know, or know someone who can point me to, a good working example of ipchains for a standalone machine? I have 64 IPs, all routed through a PC. That PC is not Linux-based, so I don't want MASQ or ppp examples.

I want a straight-up script to disallow spoofing and everything else besides the "normal" services (I can easily add or remove rules as necessary).

All the examples I have seen so far don't work because, like I said, they assume MASQ or ppp0 is present.
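[A hedged starting point rather than a vetted policy: the script below assumes a single eth0 interface whose address goes in MYIP, and the service list (ssh, smtp, http) is only an example to prune or extend. ipchains syntax as in the 2.2 kernels: -Ed.]

```
#!/bin/sh
# Standalone-host ipchains sketch: default-deny input, no MASQ, no ppp.
MYIP=your.ip.here            # placeholder - fill in one of your 64 IPs

ipchains -F input            # flush old input rules
ipchains -P input DENY       # default policy: drop everything not allowed

# Anti-spoofing: our own address or private ranges must not arrive on eth0
ipchains -A input -i eth0 -s $MYIP -j DENY
ipchains -A input -i eth0 -s 10.0.0.0/8 -j DENY
ipchains -A input -i eth0 -s 172.16.0.0/12 -j DENY
ipchains -A input -i eth0 -s 192.168.0.0/16 -j DENY

# Loopback traffic is always fine
ipchains -A input -i lo -j ACCEPT

# The "normal" services: ssh, smtp, http (add or remove as necessary)
ipchains -A input -p tcp -d $MYIP 22 -j ACCEPT
ipchains -A input -p tcp -d $MYIP 25 -j ACCEPT
ipchains -A input -p tcp -d $MYIP 80 -j ACCEPT

# Let replies to outbound TCP connections back in (! -y = not a SYN packet)
ipchains -A input -p tcp ! -y -j ACCEPT
```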


 Sat, 13 Nov 1999 20:42:18 -0500
From: zak <
Subject: Apollo P-1200 Inkjet Printer

Today I purchased an Apollo P-1200 inkjet printer at the local drug store (yes, drug store!), the main attraction being that it cost $90 *before* a $50 rebate! This is supposed to be a new 'low-end' printer using H-P technology. The downside is that the CD that came with it only covers installation for 'doze. Has any other LG reader purchased one of these? If so, where can I find an appropriate Linux driver for it? I tried the http://www.myapollo.com site, but all that's there is also 'doze stuff. If anyone knows which H-P driver comes closest to working with this thing, that would also be useful. I mainly use a printer for printouts of txt and html documents, and with Corel WordPerfect 8.0 for Linux. Thanks in advance for any assistance. Zak


 Sun, 14 Nov 1999 02:43:43 -0000
From: George Christofi <
Subject: Getting linux to authenticate to nt domains.

I want to get linux to authenticate to my NT network. As a newbie, I am stunned by the complexity of the whole Linux thing. I find it easier and more reliable to do things with NT Server. I am not thick (just MCSE qualified), but cannot see how to get it to perform this seemingly simple task.


 Sun, 14 Nov 1999 12:40:36 -0800
From: Kenneth MacCallum <
Subject: Linux memory woes

Hi, I've just installed Red Hat 6.0 and it's not detecting all of my memory. I tried typing linux mem=64M at the LILO prompt as recommended but then the boot-up fails. As it is it only detects 14M or so of my 64M.

I had a look in my bios, and I noticed a line saying that there is a memory hole at 15M. I'm guessing that this is causing Linux to not see memory above this, but I don't know why this "hole" is there. I had a look in my motherboard manual (PC Chips M577) but it didn't mention anything useful. This hole doesn't seem to upset Windows; is there anything I can do to get Linux to work too?

I also tried swapping my DIMMs around, thinking that if the hole was due to a bad DIMM it might move up to 47M (?) but it doesn't.

I've been searching about the web for some insight but I've had no luck so far.

Can you help?


 Mon, 15 Nov 1999 12:12:50 -0500
From: Bruce Kramer <
Subject:

Editor, I'm looking for some help in the Detroit, MI area. I have a son, 16 yrs old - junior in high school, presently enrolled in some special projects at school. One of which is to learn Linux. He recently removed Windows from our home computer to install Linux. After attempting this for four weeks now he is frustrated and ready to give up. Rather than giving up and going back to Windows I thought that it might be possible to find someone in our area who could help get Linux up and operating on our computer. Any suggestions?

[Look at www.linuxjournal.com under "User Groups (GLUE)" and see if there are any contacts in your area. -Ed.]


 Mon, 15 Nov 1999 22:00:36 -0500
From: Pierce C. Barnard <
Subject: Looking for the Korn shell

Hi, I recently obtained Redhat V6.0 and I found out to my dismay, that it does not have the Korn shell with it. Does anybody know where I can find a copy of it? Thanks.


 Wed, 17 Nov 1999 19:16:38 +0800
From: Yvonne Chung <
Subject: How to set a PC as a router with Linux6.0?

Dear Sir/Madam,

I would like to set up a single PC as a router using Red Hat 6.0. However, I am having difficulty finding the related topics in books or journals. Can you tell me where I can find them? Thanks!

Best regards, Yvonne. --


 Wed, 17 Nov 1999 13:03:37 +0100
From: Harold Konijnenberg <
Subject: ipop3d/imap server problem

Problem with IMAP/Ipop3d server

I have a problem getting the ipop3d/imapd server to work. I use the imap-4.5 package on my Red Hat 6.1 system. After installing RH 6.1 and configuring the system, all services like Samba, Apache, FTP, and telnet are working fine. But now I want to add a POP3 server. I checked my /etc/services file, and the imap and pop3 services are enabled there. In the /etc/inetd.conf file, the imap and pop3 services are not enabled, so I uncommented the following lines:

#pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd ipop3d
#imap    stream  tcp     nowait  root    /usr/sbin/tcpd imapd

After uncommenting these lines in inetd.conf, I signal the inetd process to reload /etc/inetd.conf with the following command:

killall -HUP inetd

When I now try to telnet into the Linux box (telnet 192.168.1.254), I can't connect. FTPing into the Linux box gives a problem with the in.ftpd daemon.

When I restore the original /etc/inetd.conf, all the problems disappear and everything works fine again.

I can't figure out what the problem is. I know it starts when I change the /etc/inetd.conf file, so perhaps it's some kind of security issue?

Any help is very welcome,


 Thu, 18 Nov 1999 17:06:03 IST
From: nayab shaikh <
Subject: How to configure isdn on linux

Hi friends,

Can anybody guide me on how to configure ISDN on Linux? I have Red Hat Linux 6.0 server installed on my computer and a 56.6 kbps external modem. Also, is RADIUS (Remote Authentication Dial-In User Service) possible with it?

Waiting for your reply...

Nayab


 Sat, 20 Nov 1999 12:22:12 +0530
From: Sivaraman Manivasagam <
Subject: Regd Lexmark Printer Drivers

I've recently installed Red Hat Linux 5.2 and am having a problem installing my Lexmark Optra E (PS support) laser printer. I have been to the Lexmark site and there are no drivers for Linux. Any help in this matter would be appreciated.

Kindly mail to : [email protected]
a copy to : [email protected]


 Sat, 20 Nov 1999 19:24:56 -0500
From: GZukoff <
Subject: None

I stumbled upon your web site and was extremely pleased!!! I am a newbie to Linux but have been interested in alternative OSes for about 18 months, after trying (in vain) to install FreeBSD. I have downloaded and am installing Corel Linux 1.0 and was hoping you will be doing a write-up on it soon. I am anxious to see how it stacks up against the other Linux releases.


 Sat, 20 Nov 1999 22:33:04 -0500
From: outofstep <
Subject: xwindows display problems?

I recently installed Red Hat 6.1. I have a Voodoo Banshee 2 graphics card, and I get distorted pixels on the screen when I scroll and move windows... Is there a driver update or something I need to fix this? Please help. So far I love Linux, and this is my only problem :)


 Sat, 20 Nov 1999 21:27:52 -0700 (MST)
From: <
Subject: Question

Hi

I hope this is the right address to write to with a Linux question.

I have a lot of Joliet-format CDs made with DirectCD in Windows. Linux can read the CDs, but on files with long names (over 20 characters or so) the file size is dramatically misreported: 6-7 MB files are listed as 2-3 MB files. Linux can't see it all, and this of course means the file is pretty much corrupt and of no use under Linux. The CDs read fine in Windows, but not in Linux. I've searched high and low and can find no answer to this problem. I hope you can help.

Thanks

Steve - [email protected]

angry fruit salad (n.)

A bad visual-interface design that uses too many colors. (This term derives, of course, from the bizarre day-glo colors found in canned fruit salad.) Too often one sees similar effects from interface designers using color window systems such as X; there is a tendency to create displays that are flashy and attention-getting but uncomfortable for long-term use.

JARGON FILE, VERSION 4.1.4


 Thu, 18 Nov 1999 21:47:28 -0500
From: Edith & Steve Dolesch <
Subject: Fw: Disability Features.

I just need to know if Linux has disability features like Windows (all versions)? I'm a disabled person and use StickyKeys to write with one hand if needed and the Numeric Pad to move the mouse cursor.

[The Linux Access-HOWTO explains the capabilities Linux systems have for people with disabilities. It says that X windows has a StickyKeys feature, and the FVWM window manager can be controlled without a mouse. -Ed.]


 Tue, 23 Nov 1999 10:30:53 +0800
From: ZHOUKM <
Subject: winNT+MSproxy+linux question

My PC is on a Windows NT-based LAN which uses an MS Proxy system. Under Win9x, in order to connect to the Internet I have to log in to the WinNT domain, and of course Win9x is equipped with the MS Proxy client. Linux is also installed on my machine; how can it get to the Internet?

Thank you


 Mon, 22 Nov 1999 21:55:00 EST
From: <
Subject: do you know the USB(network adapter 10T) can run under linux?

Do you know whether a Universal Serial Bus ethernet network adapter can work under Linux? Or where can we download a driver?

hope to get your reply and help


 Mon, 22 Nov 1999 21:10:13 -0800
From: Yunfei Deng <
Subject: Setup network printer

I have Red Hat 6.1 installed, but need to figure out how to use the network printer. The printer is shared on an NT domain and is available if I log in to NT. Any tip is welcome.


 Tue, 23 Nov 1999 11:16:44 -0600
From: Carlos Alarcon <
Subject: Hi!!! I have a big problem

I just bought a new computer. It has an "on-board" video card with the Intel 810 chipset. I couldn't configure X to work with this type of card. First I let Linux probe; it failed. Then I looked at the card list; of course, it wasn't there. Then I tried "unlisted card" and configured it as a generic SVGA; it still failed. What do I do now?


 Tue, 23 Nov 1999 21:09:22 PST
From: Edgar Henry <
Subject: Accessing previous partition (path 16 and 32)

I am trying to install Red Hat Linux 6.0 on my PC, which dual-boots Windows 98 and Windows NT and has FAT16 and FAT32 partitions. When I try to install Red Hat Linux choosing the workstation class, it installs automatically, and afterwards I cannot access my data anymore using Windows 98 or NT, since it no longer recognizes the partition.

Is it possible for me to get my data back from my previous Windows installation?

Thank you very much


 Wed, 24 Nov 1999 02:57:15 -0800
From: kevin hartman <
Subject: Direct Cable Connection between Win95 and Linux, by Thomas P. Smyth

Would you happen to have a current e-mail address for Thomas P. Smyth, author of Direct Cable Connection between Win95 and Linux?


General Mail


 Sat, 30 Oct 1999 17:47:57 +0100
From: Andy D Williams <
Subject: a complaint about issue 47

Hello

I am a regular reader of your gazette magazine who is using windows 95. I download and extract your magazine onto a local disk for reading at my pleasure and I find it very useful and informative about installing and use of linux. I intend to move over to a unix like system permanently sometime in the future and prefer to read your magazine on my windows hard disk while I'm learning to get to grips with linux.

So on to the complaint! How can I read your magazine if your tar.gz files won't extract onto my hard disk? Why won't they extract; or rather, why won't issue 47 extract? Your issue 47 tar.gz appears to be using a DOS/Windows reserved system name for a directory or filename! The lg/issue47/aux directory can't be created on my Windows disks because aux is one of those names. Assuming you or your contributors have used DOS and Windows in the past, why didn't they know about these names?

Are you trying to stop windows users from learning about linux?

I have been able to read your excellent magazine from issue 1 to issue 46 without any problems and would like to continue until and after I am no longer using windows as my primary operating system.

Thank you very much

Andy Williams

[Argh, I forgot aux is a reserved device name under DOS/Windows. This was corrected in early November. (The directory name is now misc.)

I also corrected a link to a program listing in JC Pollman's and Bill Mote's article Backup for the Home Network. If you were unable to view these files earlier, please try again now. Apologies to the authors of an excellent article which is being widely read, judging from the number of messages I received about this link.

Starting in issue 47, I have been moving all program listings into their own text files rather than keeping them inline as part of the HTML. This will hopefully be a convenience for readers who wish to run the programs or borrow code from them. --Ed.]


 Fri, 12 Nov 1999 10:05:43 +0100
From: Joachim Krieger <
Subject: AW: Duplicate messages, unreadable listings, Pollman article

[The following are excerpts from a larger conversation. The issue is, after there's been a correction to the LG FTP files, how does one tell whether the file(s) s/he has are the old ones or the new ones? If the file date at the main FTP site is newer, obviously the user has an old file. But if the user's program or a mirror site touched the file, its modification date could be misleadingly recent. -Ed.]

Is there any criterion that allows one to distinguish the old issue 47 from the new one (besides the file date/time)?

OK - I got the fix and offer it on our server (http://www.the45er/lg and ftp://ftp.the45er.de/pub/lg, respectively).

The problem is not yours but mine: the .gz file carries the date/time stamp of my download - and that's Nov 11.

Assume someone has an old version, including the problems you've fixed, and this person is looking for the fixed file. Question: does this person have to download 'lg-issue47.tar.gz' and unpack it?

The file date/time can't be the criterion, because some clients touch the files...

The Linux Gazette Editor wrote:

I'm not sure how to deal with that situation. I'll put it in the Mailbag and see if somebody can come up with a solution. It might be worth adding a changelog to the README, but that won't help you decide whether the file you have is the old one or the new one. I could link the new file to a different filename (lg-issue47-fixed-nov-2.tar.gz), but that would cause too much clutter in the directory and may not be welcomed by the mirrors.

It seems like this is what the modification date is for, and if some programs aren't preserving it on downloads, the solution is to fix those programs or figure out how to work around them (e.g., set up an alias with the correct command-line options to preserve the timestamp); otherwise, you'll have even more problems later with other files, trying to figure out what is up to date and what isn't.
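For instance, GNU wget's -N (timestamping) flag keeps the server's date on downloaded files, and an existing file's date can be copied onto another with touch -r. A small demonstration with stand-in files (names and the date are examples):

```shell
# touch -t sets an explicit date; touch -r copies one file's date onto
# another. reference.file stands in for a copy whose date is known good.
touch -t 199911020000 reference.file
touch -r reference.file lg-issue47.sample     # sample now carries that date
ls -l reference.file lg-issue47.sample        # both show the same timestamp
```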

Joachim responded:

Maybe one solution could be an additional note appended to one of the files - or a new file containing it. The result would be a different file size for the newly created tar.gz.

Perhaps publishing the size of the old file and the size of the new one may help.
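[Checksums are stronger than sizes for this purpose: two different archives can easily end up the same size, while their md5sum fingerprints will always differ. A demonstration with two same-size stand-in files: -Ed.]

```shell
# Two stand-ins for the old and new archives; the contents differ but the
# byte counts are identical, so size alone cannot tell them apart:
echo 'old contents' > issue47.old
echo 'new contents' > issue47.new
ls -l issue47.old issue47.new          # same size either way
md5sum issue47.old issue47.new         # but the checksums differ
```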

[Readers, would this be helpful? Or exactly what information would be most helpful to you? -Ed.]


 Fri, 05 Nov 1999 09:57:04 -0500
From: Darren and Kristen Morin <
Subject: Excellent job!

Yeah, THIS is the kind of content I like to read.

Hello, my name is Darren and I have been a Linux user for about two years. While I may be past the point of out-and-out newbie, I'm certainly NOT a guru. The articles in the Nov. 99 issue are excellent! It is almost as if you folks were picking my brain, because these were exactly the kinds of issues on my mind right now, especially the features on security and running Unix as a home user. More features like this, please!

Keep up the great job, folks. I don't know if you get many encouraging letters; consider this one of them.

Bye for now


 Mon, 1 Nov 1999 23:19:07 -0800 (PST)
From: Heather <
Subject: Re: New format: My 2 cents

Sorry, not to offend, but I really don't like the new Answer Guy format! I prefer to read it in "The Whole Damn Thing" format.

TWDT is always available but please, read onward...

I find that I can pick out the things that interest me much easier if I can scan the message contents as well as just the titles. I usually just use the down arrow and pagedown keys to browse through the articles.

November's style is *not* a new format; this is a special edition. People have been asking for the titles to be indexed, and it was clear that some of them did not spot the "Index to Past Answers" gadgets found at the bottom of the individual messages (see the October issue, any message, footer).

There were a great many messages this month, but they will be published in December's issue, in the same format as October's.

I understand that the new format is more search engine friendly so maybe a compromise is in order. How about a complete page in TWDT format for each primary topic. That way those of us that like to scan a whole series can do so without having to bounce back and forth between individual articles and the index and the associated page reload delays.

Not mentioned before is that cooking the TWDT version is a normal part of my submission at the end of the month... Mike, this sounds like a request to normally link that in at the Answer Guy Index level.

Of course this returns us to torturing the search engines as it would hit on TWDT.lg_answerNN.html for every amazing topic in linuxdom. (Because we get a wide variety of questions each month, and so few search engines search for the keywords being anywhere near each other.) I'm not at all sure I favor this. This is why it hasn't been linked in even though my submits have included TWDT format subfiles for a long time. Mike, Jim, any opinions?

However, Rob, to address what you really seem to be asking for: No. Sorry. I am *not* going to republish the whole of the past answer guy messages in TWDT format by the topics I selected. It's weird enough that I only picked one classification each, when several of them fit more than one. Several of them are superseded long ago by changes in Linux itself. It's an *index* ... that means it does *not* contain anything but pointers. Since it doesn't point to any new content, it would be a waste of time better spent on the newer messages. Please bear in mind I do this work for LG on a volunteer basis. I have consulting work to do as well, and would like to limit the time I spend on this.

As a side note, I usually read the Gazette on my work machine which is that other OS :-( which complicates the process of reading the tar.gz version.

-Rob-

That question is answered in the FAQ. WinZip for "that other system" handles tar, gzip, or tgz files without complaint. If you insist on doing it the hard way, I believe tar and gzip may have been ported as command-line utilities, but you'll have to go find them yourself, perhaps at winfiles.com.

I hope you enjoy next month's column.

* Heather Stern, HTML Editor for "The Answer Guy"

[I'm currently doing a lot of "under the hood" work on how the Gazette is linked together. That is, writing scripts to auto-generate the links and update them, rather than placing the links by hand. Once this is complete, we can look at adding links to make The Answer Guy more readable. But first, we have to get used to the new Answer Guy format for a couple issues, so we can see what is working well and what isn't. -Ed.]


 Tue, 16 Nov 1999 10:08:03 +0100
From: marco masini <
Subject: "great ideas" ??? :-))

Hi, how are you? You're doing good work with the Linux Gazette.

Why don't you create an "index" by word (an analytical index would be even better :-) )?

I'm looking for emacs, but I don't know how to find anything about it on your pages.

Bye

PS. I apologize for my English.

[There is a search engine at the main site. Starting this issue there is a "Search" link near the top of both the Table of Contents and the Front Page.

I typed "emacs" into the search dialog and got lots of links back. -Ed.]


 Mon, 15 Nov 1999 11:43:26 -0800
From: Guillermo Schimmel
Subject: Re: I can't access ftp.ssc.com

I can't access ftp.ssc.com anymore. Is it my problem, or is the server down?

[The server has PARANOID turned on, which means your (the client's) forward and reverse DNS names must match or it won't allow access. Could that be the problem?

I asked our sysadmin if we could turn off the PARANOID feature, since the files are public anyway. However, he was adamant that it's a necessary security measure. -Ed.]
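For readers curious what the PARANOID check actually involves, here is a rough sketch in Python of the double-reverse DNS lookup that tcp_wrappers performs. The helper name `paranoid_ok` is mine, not part of any library; this is an illustration of the idea, not the wrapper's actual code:

```python
import socket

def paranoid_ok(client_ip):
    """Sketch of a PARANOID-style check: reverse-resolve the client's
    IP to a hostname, forward-resolve that hostname, and require the
    original IP to appear among the resulting addresses."""
    try:
        name = socket.gethostbyaddr(client_ip)[0]
        addrs = socket.gethostbyname_ex(name)[2]
    except OSError:
        return False  # no PTR record, or the name doesn't resolve
    return client_ip in addrs

if __name__ == "__main__":
    print(paranoid_ok("127.0.0.1"))
```

If your forward and reverse records disagree (a common state of affairs with dial-up and badly delegated netblocks), a check like this fails and the server refuses the connection.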


 Sun, 31 Oct 1999 10:21:14 +0100
From: Paul Dunne <
Subject: to MIME or not to MIME?

I can't take issue with your plea for writers-in to stop using HTML. But I do take exception to this:

And if your mailer splits long lines by putting an "=" at the end of the line and moving the last character or two to the next line, please try to turn that feature off. Also, some mailers turn punctuation and foreign characters into "" and "=E9" and the like. I can't reformat those, since I don't know what the original character was! -Ed. P.S. This is the first time ever I have resorted to blinking text, which I usually despise. I understand some mailers don't allow you to turn off this obnoxious "multimedia" formatting. But if you can, please do so.

If your mailer can't understand MIME and translate a so-encoded document back into proper text when appropriate, then your mailer is broken, and you need either to fix it or get another that isn't broken. It really is as simple as that.

[I use mutt, which is supposed to be one of the most MIME-capable mailreaders around. -Ed.]

From: Pepijn Schmitz <

You can use any extensions you want, as long as you configure your web server to use the text/plain MIME type for them. It's the MIME type that the browser uses to decide whether or not to display a file, not the extension. Only when the web server doesn't provide a MIME type does the browser try to guess at it using the extension.

Also, regarding your request to turn off quoted printable characters (, etc.), my question is: why? It's a lot better than leaving the original 8-bit characters in, which may display very differently on your screen than the author of the email intended, or may be control characters which will screw up your terminal settings. Quoted printable characters (and also the ='s at the end of a line) are part of the MIME standard and should be converted back by any MIME compliant mail reader.

HTML email is even better, because it allows you to include international characters in a platform independent way (quoted printable is not platform independent). If I were you, I'd upgrade to an email reader that will handle all these things, such as Netscape Messenger.


 31 Oct 1999 14:00:10 -0800
From: Stephen R. Savitzky <
Subject: Re: Filename extensions for web program listings

You [the LG Editor] write:

My question is, which filename extensions are safe to use so that they'll show up properly as text files in the browsers? I'm wavering between using a language-specific extension (.c, .sh, .pl, .py, etc.) vs putting .txt at the end of all of them (or .sh.txt, etc.) What about listings that don't have an extension on the source file? They display as text on my browser, but do they display properly on yours?

The correct solution is to use the correct language-specific extensions, and to configure your server to give files with those extensions the MIME type "text/plain". This means that the browser doesn't need to guess (and guess wrong, in most cases) about what type the file really is.

Language-specific extensions would be ideal, because they offer the possibility of syntax highlighting if the browser supports it. (Does any browser support this?) However, I know I've tried to view files on other sites that I know perfectly well are text-readable, but the browser insists on downloading them rather than viewing them because it doesn't recognize the type. (Of course, that's better than the converse, where it tries to view .tar.gz or .mp3 files as text.)

All browsers should use the type supplied by the server in the headers; only if the server fails to provide a "Content-Type:" header is the browser permitted to guess. If your server is Apache, for example, you can perform the extension-to-type mapping in a ".htaccess" file in the directory containing the listings -- see the Apache documentation for details.

Syntax highlighting can be done by providing two versions of the file, one in plain text and the other (with, e.g., a ".c.html" extension) in HTML as generated by a clever highlighting program that you run once, on the server side. Again, nothing is left up to the browser; it's all done on the server.
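Following Stephen's suggestion, the Apache mapping might look like this. This is a sketch for a `.htaccess` file placed in the directory holding the listings, and it assumes the server's configuration allows `AllowOverride FileInfo` for that directory:

```apache
# Serve program listings as plain text, whatever their extension,
# so browsers display them instead of offering to download them.
AddType text/plain .c .sh .pl .py
```

With this in place the server sends `Content-Type: text/plain` for those extensions, and no browser guessing is involved.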

From: Rich Brown <:

My suggestion is xxxxx.c.txt, yyyyy.sh.txt, zzzzz.pl.txt, and so on.

Keep up the great work. LG is one of my prime resources.

From: Anthony E. Greene <:

I'd say no extensions should be used. Most web servers default to text/plain when sending unknown files to browsers. I've never seen a browser that supports highlighting, so language-specific extensions are of limited utility.

From: <:

What about listings that don't have an extension on the source file? They display as text on my browser, but do they display properly on yours?

Not easily (I'm using StarOffice 5.1a).

Language-specific extensions would be the most ideal, because they offer the possibility of syntax highlighting if the browser supports it. (Does any browser support this?)

[...]

The answer to both objections is an easy-to-use, GUI, Mime setup program. The other side of this is that so many software packages ignore the system (and user for that matter) mailcap and mime-type files in favor of their own.

Having no extension should not be an option. Without an extension there is *no* handle for deciding what to do with the file. Add ".txt" to these.

Where an extension is already present, leave it, but provide an alternative of the same filename + ".txt". Thus "foo.c" would be available both as "foo.c" and "foo.c.txt".

Hope this helps,
== Buz :)

From: walt <

use the .txt extension

and if you really feel froggy, call it ".text" so the point-and-click crowd doesn't get confused.

From: Vrenios Alex <

First, I believe that I can "set" my text editor to be called up when a ".sh" or a ".csh" suffix is found. But that might be too much of a pain for most people (even me) to do for every possible extension. Here's my recommendation:

Given mountfloppy.csh, a C shell script that mounts a floppy disk, use mountfloppy_csh.txt

Given convertIP.c, a C language program that converts an IP address from hex to dotted decimal or back, use convertIP_c.txt

The use of that underscore is visually descriptive, letting everyone know exactly what kind of file it is intended to be, while the dot-txt tells your OS which editor to bring up for every one of these ASCII files.

Are there any files that you are considering setting up that are -not- ascii text files? If not, maybe this will work. Good luck.

From: Jeff Rose <:

Subject: Filename extensions for web program listings ...

I'm enjoying reading this issue of LG on my Palm Vx, after downloading your text version and using a small conversion utility to format it into PDB format.

Anyway, I vote for the age-old '.txt' extension for filenames. Why re-invent the wheel? And the less formatting, the better. We have _plenty_ of utilities for conversion, but between TEXT, PDF, etc., .txt is the painless route.

My $0.02.

From: Sylvia Wong <

A language-specific extension wouldn't work because it's too much hard work to change the mailcap just to browse a few program listings. I prefer language extension + .txt. This way, when I want to save the file (I'm using Netscape), I can just delete the .txt and don't have to type a .pl or .c or whatever.

Does Windose handle files with 2 extensions well? (Or maybe we don't care about them at all as why would anyone read Linux Gazette using windose).

[I decided to go the *.sh.txt route, using a language-specific extension when one exists, but always ending in .txt. This should ensure the files can be both read and downloaded verbatim on the widest variety of web servers and browsers.

BTW, a lot of people do read the Gazette on Windows machines. -Ed.]
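The editor's choice works because extension-based type guessing keys off the final suffix. Python's stdlib `mimetypes` module follows the same convention a browser falls back on when no Content-Type header arrives; a quick sketch (illustrative, not the exact logic of any particular browser):

```python
import mimetypes

# The final suffix wins: ".sh.txt" is treated as a plain-text file,
# so a browser guessing from the name will display it.
print(mimetypes.guess_type("listing.sh.txt"))

# Archive suffixes like ".gz" are reported as an encoding on top of
# the underlying type, which is why ".tar.gz" is not seen as text.
print(mimetypes.guess_type("archive.tar.gz"))
```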


 Thu, 04 Nov 1999 18:49:22 +0100
From: <
Subject: Spanish translations of Linux Gazette

Dear editor,

As you said in your reply to Mr. Offret, currently there are no Spanish translations of Linux Gazette. However, the only Linux on-line magazine translated into Spanish is Linux Focus (OK, they're your competitors, but it's the only one you can get in Spanish for free). There are also a number of Linux magazines in Spanish but, as far as I know, they're all printed editions and they're not free! Here is a list:

All of them are published by the same publisher "Prensa Tecnica". I don't know if you can get these magazines outside Spain.


 Sat, 6 Nov 1999 14:49:51 -0500
From: Gerard Beekmans <
Subject: Re: Compiling everything myself

Greetings, ladies and gentleusers. I would like to compile my own Linux system. Not just the kernel. Everything. I've got enough room and partitions on my disk(s) to do it. Do not tell me to buy a distribution. Until now, I've tried a lot of them - I count eleven on my shelf - and I do not like any of them the way I would like a self-created one. I just need a place to start. All of the distributions must have started at some point or another - how did they do it? Please point me to a location where info may be obtained. The LDP seems to provide _nothing_ concerning this task. Every hint will be highly appreciated. I would also love to contribute documentation of the process to the Free Software community. Every reader is invited to answer via email.

I am currently writing a series of articles that do this exact same thing: building a new Linux distribution from scratch. Every program is built from source (though you do need a working Linux to get it initially working; e.g., you need a running Linux + compiler to compile a compiler, so you can install that on the new system and start compiling other things there).

There's one downside for you, I'm afraid: I'm writing these articles for a Dutch/Belgian e-zine and therefore the articles are in Dutch (surprised? ;) But wait, there's a good side: I'm also in the process of translating these articles into English. I decided to start translating them a week or so ago, and I haven't started doing so yet (the usual excuse: too busy with other things).

I'm also thinking of giving them to the LDP so they can be distributed as a HOWTO of some sort.

Another reason I haven't started this translation yet is that I don't know how useful it would be (i.e., whether people besides me are actually thinking of doing this). The Dutch articles aren't done yet either, because I got stuck after a while: I don't know 100% how to configure every program (building from scratch also means completely making your own sendmail.cf file, and things like the rulesets are just a bit over my head at the moment).

If you're interested in picking up this project, I'd be happy to start translating the articles I have already written and then finish them with you (and possibly other people who read this, since I CC'ed it to LG as well).

Hope to hear from you soon.


 Wed, 10 Nov 1999 09:25:22 -0700
From: TJ Miller jr <
Subject: Will The Priests Please Refrain From Kicking The Heathens?

Listen my children, and ye shall receive news, of a troublesome plague that sweepeth across the Land of The Holy Penguin...

Allow me to confess the blight which lies upon my soul: I am cursed by professional necessity with an MCSE, but I want someday to hear the revelation that a Microsoft certification is about as worthless as an ArcNet certification would be this very day. Verily, I further want the luxury of looking upon my MCSE and saying "there shalt not be further need to keepeth this foul brand upon my wretched soul..." Can I get an "Amen!?"

The problem remains that, in order to maketh a living, I have to prepareth my high-school aged disciples for the IT industry at large...and that necessitates teaching them the blasphemy that is Microsoft. (On a happy note, I finally got permission to get decent Linux curricula going - it will be out in January. Refer to http://www.linuxgazette.com/issue47/lg_mail47.html for more information, and pitch some more opinions at me - I'm about half-way down that page.)

But enough of the digression, my dear flock...let us kneel together, to meditate upon the source of my distress...

However, let's do it in plain English...because all kidding aside, we've got a growing and serious problem here:

While perusing the web for ideas about what would make a good set of Linux classes, I came across a disturbing trend, one that was once only an insignificant bother, but hath now become enough of a problem to warrant attention:

Linux Over-Zealotry.

I guess it began to demand notice (for me) during the badly-managed PCWeek NT vs. Linux security tests. Now, I believe that PCWeek made a huge mistake (they forgot an RPM patch), but, IMO, an honest one... could have happened to anyone. The reaction sickened me: zealots everywhere began screaming about how PCWeek was "Microsoft's Whore" and how they were suddenly part of some huge conspiracy to quash Linux... I then began to look about the various Linux forums, to see even more venom and flame directed at anyone who dares to say that perhaps Microsoft does have software that will do certain things better than Linux.

I have a newsflash for the zealots: in some cases, NT is better than Linux. In others, Linux is better than NT. It all depends on your specific needs and circumstances. Any real-world IT manager with more than three working brain cells will tell you the very same thing.

Now, what motivated me to write this was an article I noticed, which read:

"The people who complain about it not supporting 100% of games or whatever don't seem to get what the alternative is. It's easy to say Linux is better than a certain monoploy(sic) based OS but to put your money (ok it's free but nevermind) where your mouth is, is something else."

-James Rogers, recently in osOpinion

Now, if I owned a business, didn't know much about computers, and needed an OS that was compatible with all of the machines I have, the last thing I want to hear is: "Too bad you Microsoft scum!"

If I were Joe Six-Pack, wanting a decent OS at home to do my bills and send the occasional e-mail to mom with (and an OS with all the cool games for my kids, of course), the last thing I want to hear would be: "no way, d00d - you gotta conform to the Linux way or else! We don't support yer fave games, so deal with it!"

You notice a trend here (at least I hope you do...): I used regular, typical situations. If you want Microsoft's piece of The Market Pie, you're going to have to woo these exact types of people away from their MS-run boxes...

A Very Big Clue: You won't capture the hearts and minds of MS users by calling them "sheeple". You won't do it by insulting their intelligence. You won't do it by shouting like a spoiled child whenever Linux gets bad press (no matter the reason). And most importantly, you certainly won't lure them away from Microsoft by demanding that they do something the hard way, when for only $85 they can do it easier with (insert big, bad, monopoly OS product upgrade here).

You also won't do it by letting others perform these same indecent acts with nary a comment from the members of the Linux community who happen to catch them in the act...

Now, it has always been (or at least used to be) the case that Linux adapted to needs: if someone needed a driver for (insert oddball peripheral item here), odds were that all he/she needed to do was ask, and there was more than likely a driver written for it by (insert friendly programmer's name here) that could be downloaded for free at (insert URL here).

If Linux wants a bigger market share, this is exactly what needs to continue happening....customer-focused, friendly service to the world community at large.

I recall that there is indeed a perfect article describing how we can advocate, and by extension make Linux a dominant OS in this world. I propose that we all read it at least once in our lifetimes: http://www.ssc.com/mirrors/LDP/HOWTO/mini/Advocacy.html

After all, if you want converts, you don't do it by kicking the heathens you want to proselytize...especially not in a consumer-driven market such as computer products.

There is one other note: if you see anyone misbehaving and thereby giving Linux a bad name, then go out of your way to show that person the light... in the process showing the world at large that the Linux community won't tolerate childish behavior, which would make an even better impression on the undecided and the unknowing.

Now go forth my precious flock, to spread peace and Open Source among the perilous world in which we live...

TJ Miller jr ([email protected]) is a secondary education teacher in the Utah ATE (read: Vocational Technologies) school system, specializing in the instruction of Unix, Networking, and (because they make me do it) that big, bad GUI-based OS we've all heard of. He putzes around with Linux far more than is considered healthy, but enjoys the outdoors enough to go hunting, skiing, hiking and fishing whenever weather and time permit.


 Mon, 1 Nov 1999 13:35:49 +0800
From: Li Wei <
Subject: comment

The 47th issue is the most boring LG I have ever read. In other LGs, I can always find some interesting articles.

[Which kinds of articles do you consider "interesting" and "boring"? -Ed.]


 Thu, 11 Nov 1999 11:26:51 -0800 (PST)
From: Nicolas Chauvat <
Subject: Re: Duplicate messages, unreadable listings, Pollman article

The [LG] FTP file for issue 47 is 2.6 MB. This is due to the large number of graphics. Next month will have fewer graphics to bring the file size down.

Graphics shouldn't be a problem. Some web pages have a lot more graphics, and the whole thing is much lighter than this. The problem is that most of your graphics are not compressed... well, they are, but not enough, or the image depth is too high. Just make the authors follow the basic rules of image-making for the web and it will be fine, even if you have more graphics.

Use jpg for photos (lots of colors, no line or clear boundary)
Use gif for others (fewer colors, sharp boundaries)
Use png for either one.
Try with different compression levels (jpg) and different image depth/palette (gif).

There are programs out there that do that for you, i.e., take the image and output the same image with the best size/quality compromise. I don't remember the names, though, but they are probably listed on Freshmeat or Linuxberg.

Hope this helps,

-- Nicolas, Coordinator of the french translation of the Linux Gazette

Hi, I'm a deadly e-mail virus, please copy me into your .signature file to help me spread. :: Bonjour, je suis un dangereux virus. SVP copiez-moi dans votre fichier .signature pour m'aider =E0 me propager
[There's one of those pesky =E0's again. Wish I had a table to convert it to its Latin-1 equivalent. Must be an à. -Ed.]
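No table is needed, as it happens: those escapes are MIME quoted-printable, where `=E0` is byte 0xE0, Latin-1 à (and `=E9` is é). Python's stdlib `quopri` module decodes them; the sample string below is mine, chosen to match the signature above:

```python
import quopri

# Decode quoted-printable escapes, then interpret the bytes as Latin-1.
garbled = b"SVP copiez-moi pour m'aider =E0 me propager, caf=E9"
clean = quopri.decodestring(garbled).decode("latin-1")
print(clean)  # ... m'aider à me propager, café
```

The same call also removes the soft line breaks ("=" at end of line) that the earlier letter complained about.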


 31 Oct 1999 09:33:31 -0700
From: Eric Hanchrow <
Subject: Suggestion for TOC pages on the web

I'm looking at http://www.linuxgazette.com/issue47/lg_toc47.html, and I see many bulleted items, each corresponding to one article in the current issue of Linux Gazette. It would make my life a tiny bit easier if each of those items showed me, not just the article's title, but the first few lines of the article itself. That way I could tell if I wanted to read the entire article. As it stands, I have to guess. This is especially true for the articles that appear to be regular features.

[There isn't enough room in our current table-of-contents layout for these lines. For the Mailbag, 2-Cent Tips and News Bytes, it wouldn't make sense, because these columns consist of many small items which are not related to each other, and thus the first item is not representative of them all. Also, it would mean a lot more manual editing to decide which lines to include.

Thanks for the suggestion, though, and I'm open to hearing any others you may have. -Ed.]


This page written and maintained by the Editor of the Linux Gazette.
Copyright © 1999,
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:


 December 1999 Linux Journal

The December issue of Linux Journal is already on the newsstands. This issue focuses on system administration.

Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue68/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.

For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/


 Linux Journal/Linux Gazette Millennium Edition Archive CD-ROM

Can't find that April 1996 Linux Journal? Someone borrowed the September 1998 copy?

Now you can have it all! All of Linux Journal (issues 1-56; 1994-1998) and all of Linux Gazette (issues 1-45; 1995-1998) on one archive CD.

Click here for more information.


Distro News


 Caldera

Antarctica IT and Caldera Team Up

Framingham, MA, November 11, 1999 - Antarctica IT, Inc. and Caldera Systems, Inc. will work together to provide Linux services in Boston and New England.


 Corel

Corel Signs Linux OEM Agreement With PC Chips


 Libra

NORTH VANCOUVER, BC, November 2, 1999 - Libra Computer Systems today announced the release of 'Linux by Libranet', based on Debian. The CD includes one year of support via email and fax, Linux HOWTO documentation and the Debian-Guide, a concise, easy-to-follow guide to the use of the Debian Linux distribution. The CD may be purchased for $27.00 at www.libranet.com. Libra Computer Systems is a privately held company based in North Vancouver, BC, Canada.


 Red Hat

Red Hat acquires Cygnus, promotes Matthew Szulik to CEO

See press releases at the Red Hat home page, and the commentary Red Hat acquires Cygnus.

Red Hat to Support Leading Open Source Application

Durham, N.C.--November 3, 1999--Red Hat, Inc., today announced an expansion of its services program that will provide the consulting and support enterprise organizations need for nearly all of the popular, powerful, open source software applications used by enterprises worldwide.

As the first step in the program, Red Hat's worldwide services group will immediately offer expanded Service Programs for popular open source software solutions, including top Internet software like the Apache Web Server, Sendmail and Postfix. Apache is the number one Web server and runs more than 55 percent of the Internet's Web sites. Sendmail is a messaging solution that powers 80 percent of Internet Service Providers (ISPs).

The sweeping initiative will expand in the coming months, embracing more open source solutions. This broad support program delivers enterprise users a single, trusted source for their open source computing needs.

Center for Open Source

DURHAM, N.C.--November 1, 1999--Red Hat, Inc. announced the formation of a new non-profit organization, the Red Hat Center for Open Source (RHCOS), that will sponsor, support, promote and engage in a wide range of scientific and educational projects intended to advance the social principles of open source for the greater good of the general public.

eSoft

eSoft Inc., a company that develops and markets the TEAM Internet Linux software suite for small businesses, has joined Red Hat Inc.'s Development Partner Program.

Pact With RSA To Enhance Security

Red Hat, Inc. has entered into a new strategic agreement with RSA Security, Inc. to enhance security for professional users of the Red Hat Linux OS package...


 SuSE

Photodex and SuSE Give Linux an Imaging Boost

Austin, Texas -- November 8, 1999 -- Photodex Corporation, a leading supplier of digital content management, imaging and multimedia software, and SuSE Inc., announced today their partnership to provide CompuPic Digital Content Manager software for the SuSE Linux operating system.

"CompuPic brings powerful, yet user-friendly imaging capabilities to the SuSE Linux desktop, enabling SuSE Linux to compete head-on with other platforms. These tools should not be taken for granted," said Paul Schmidt, CEO, Photodex. "Professional web developers need powerful tools to more easily develop and maintain web sites. Even novice users find it easier to use the Linux desktop when familiar applications are available. CompuPic solves these problems and more."

E-Commerce on SuSE's Portal

Linux vendor SuSE announced this week that it has tapped outsource solutions provider Digital River, Inc. (Nasdaq: DRIV) to turn its online information portal into an e-commerce site.


 Storm

Storm Linux 2000 to Ship with VMware

Vancouver, Canada - November 17, 1999 - Stormix Technologies, Inc., announces that Storm Linux 2000 will ship with the VMware evaluation binaries, a software technology that allows the running of other operating systems, including Windows, from within Linux.

Stormix also announced Storm Linux will ship with a full version of StarOffice, and a demo copy of Enhanced Software Technologies' BRU (Backup and Restore Utility).


News in General


 C.O.L.A news

TheStuff.net is a new, online source for information and tools encompassing all free Unix operating systems.

Spanish translation of the Linux Administrator's Security Guide

Dual chrooted Bind/DNS servers Mini-HOWTO


 News from The Linux Bits

LG got a nice little mention in TLB issue 23 (http://www.linuxdot.org/tlb/24.html):

The latest edition of the "New York Times of Linux e-zines" - Linux Gazette - was released today, featuring the usual crop of interesting reading -- and plenty of it. Once again we get a few mentions. "Gee ma, we're famous." :)

Other news in issue 23:

From issue 22 ( http://www.linuxdot.org/tlb/24.html):

TLB's Laurence Hunter wrote to the LG Editor:

If there's ever an article that you particularly like in any issue (even back issues), feel free to use it, change the formatting to your liking, and edit it to your heart's content. You don't need to ask our permission either. Like you, we put together The Linux Bits purely for fun, so no copyrights etc. exist on them and we ask nothing in return. Anything that helps out the Linux community is plenty of payment for us.
[Thanks! I'll remember that. -Ed.]


 Upcoming conferences & events

The Bazaar: "Where free and open-source software meet the real world". Presented by EarthWeb. December 14-16, 1999. New York, NY. http://www.thebazaar.org/

SANS 1999 Workshop On Securing Linux. The SANS Institute is a cooperative education and research organization. December 15-16, 1999. San Francisco, CA. http://www.sans.org

IDG Communications France, organizer of LinuxWorld, and Sky Events, organizer of LinuxExpo, are joining forces to create a single event, LinuxWorld/LinuxExpo. This Linux event will open its doors from February 1 to 3, 2000, at the Palais des Congrès de Paris (Porte Maillot).


 Linux File, Print & CD Thin Server in Flash-ROM

KYZO has today released the commercial version of its hugely popular PizzaBox Linux distribution, so called because a prototype server was built in a Pizza Hut Takeout box - take a look - www.kyzo.com

In its first month the site took over 500,000 hits, and it has recently registered 75,000 in one day. Registered users that have downloaded and run the free PizzaBox Server include both the Goddard Space Flight Center and the Jet Propulsion Laboratory at NASA.

The commercial version is the same basic software as the free version, which will still be available, but differs in one significant respect: it is shipped pre-installed in a bootable Flash-ROM and comes with the circuit board you need to make it boot in any 486 (or above) PC. With ten times the life span of a hard disk, the package is aimed at system builders who are fed up with sending skilled engineers to sites for days on end to re-install a file server every time a hard drive goes down.

The system includes file, print & CD sharing, remote access (for full remote administration), UPS monitoring, tape backup, hardware monitoring, and APM. It will automatically accept both SCSI and IDE hard drives and comes with a sophisticated JavaScript-enabled web management interface. It is aimed squarely at the SME market, where current offerings from the big players are overcomplicated.


 Cobalt Networks Selected by Allegiance Telecom

Cobalt Networks has announced that its RaQ 2 server appliance will be the dedicated web hosting platform for Allegiance Telecom customers. The RaQ 2 provides a way for ISPs like Allegiance to offer top quality dedicated web hosting with a low TCO and a quick ROI.

This news comes on the heels of last week's introduction of the RaQ 3i, Cobalt's third-generation server appliance. The RaQ 3i rounds out Cobalt's product family by offering ISPs a high-end server appliance that can handle high-traffic Web portals, e-commerce, and application hosting at a price that the small to mid-sized business market can afford.

www.cobaltnet.com


 E-Commerce Minute

EBIZ Re-Launches TheLinuxStore


 E-Exams announces a Linux System Administration exam

E-Exams lets you create customized technical exams to assess potential employees. Among the many exams E-Exams offers is one for Linux system administrators. Companies that have signed up for the service can have potential employees take a web-based exam, with questions ranging from beginner through intermediate to expert. The results can be used to determine whether a candidate should take the next step in the hiring process. The Linux System Administration exam is distribution-neutral and is among the first exams the company is offering. The company plans to create many more exams, including some distribution-specific ones, starting with a Red Hat Linux exam. Specific topics such as networking with Samba are also in the works.

You can learn more about E-Exams at: www.eexams.com


 National Semiconductor announces Linux support for its geode webpad reference design

November 15, 1999 - National Semiconductor Corporation today announced that Infomatec AG will port its custom Linux-based basic platform Java Network Technology (JNT) operating system to the National Geode WebPAD platform - a complete hardware and software reference design for a wireless Internet personal access device (PAD). This collaboration is the first result of an agreement signed earlier this year by National and Infomatec to partner on the development of information appliances such as set-top boxes, thin clients and PADs.


 Is Linux available on my platform?

The Current ports of the Linux OS web page lists a wide variety of architectures and platforms Linux is available on.

The main ports are:

- Special PCs and near-PCs: SGI-Linux (Intel SGI), MCA Linux (MicroChannel), Linux/98 (NEC PC-98), LinuxIA64 (the new 64-bit Intel Merced chip)

- Motorola processors: Linux/m68k (Motorola 68000), LinuxPPC (PowerPC)

- SPARC chips: S/Linux (Sun SPARCstations), UltraLinux (Sun UltraSPARC)

- Compaq (former Digital) chips and equipment: VAXlinux (VAX computers), Linux/Alpha (Alpha processors)

- Other RISC chips: ARM Linux (ARM/StrongARM), Linux PA-RISC (HP's PA-RISC), Linux/MIPS (MIPS chips)

- Handhelds, microcontrollers, embedded and other small systems: LinuxSH3 (Hitachi SH3), Linux/Microcontrollers (Palm, Motorola ColdFire), ELKS (8086-80286), VMELinux (VMEbus embedded systems), LinuxCE (a substitute for Windows CE), PDAs in general

- Microkernels and real-time: MkLinux (Linux on the Mach µ-kernel for Apple Power Macintosh, HP PA-RISC and x86), DROPS (Linux on the L4 and Fiasco µ-kernels), Real Time Linux (RTLinux and KURT)

- Multimedia computing: QLinux

- SMP and clustering: MOSIX for Linux (a bridge between SMP and MPP), Beowulf Project (parallel clusters), Linux SMP (multiprocessor)

- Misc ports: Linux on IBM 370/390, Linux/AP+ (Fujitsu AP1000+), Linux-AS/400 (IBM AS/400)

- Cool things running Linux: not ports, but who cares?


 "Linux Administration Made Easy" (LAME) guide

The "Linux Administration Made Easy" (LAME) guide, recently updated for Red Hat 6.1, attempts to describe day-to-day administration and maintenance issues commonly faced by Linux system administrators.

This 130+ page guide is geared to an audience of both corporate as well as home users, and attempts to summarize the installation and configuration, as well as the day-to-day administrative and maintenance procedures that should be followed to keep a Linux-based server or desktop system up and running.

LAME can be found at www.LinuxNinja.com/linux-admin/ in a variety of document formats.


 Linux Links

The legal findings in the antitrust case against Microsoft

An article that alleges Microsoft is operating a pyramid scheme to artificially boost its stock price.

Parodies of Linux and its friends and enemies: images, webpage sendups, poetry, songs, articles, etc. Features a Microsoft Myths spoof page.

LinuxFool.com is a new web site offering Linux users an unbiased forum for discussions and information sharing. LinuxFool.com is an official mirror of the Linux Documentation Project.

www.QuestionExchange.com allows users or system administrators to name their price for high-end tech support in open systems -- Linux, Apache server, Sendmail, etc.

How the University of Georgia English Department is using Linux.

A Linux-Windows comparison, and other Linux-related stuff.

Transmeta's Crusoe

LinuxMall Re-Launches With New Look


Software Announcements


 C.O.L.A software news

Kmahjongg 0.5 (beta)

Warbird is "a new silly 3D game" using VRML.

HTMLDOC v1.8 converts HTML documents to PDF, PostScript and indexed HTML.


 Calerga SysQuake Viewer

Lausanne, Switzerland, November 2, 1999 - Calerga today announces SysQuake Viewer for Linux, a port to the Linux operating system of its software for understanding systems through interactive manipulation of graphics.

SysQuake is a ground-breaking application for understanding and designing complex systems by the use of interactive graphics. Interactivity lets the user manipulate graphics, observe how phenomena are related, and change parameters to improve the design of a technical device. Understanding how initial conditions affect a simulation or how the parameters of a feedback controller determine the behavior of a dynamic system is made much easier than with the static graphics created by existing software.

www.calerga.com/SQViewer/index.html


 EasyCopy screen capture utility

SAN JOSE, Calif., October 19, 1999 - AutoGraph International (AGI) is demonstrating its commitment to the Linux community through its EasyCopy suite of products. The Linux operating system has always played an immense role at AGI, and some of our developers are active in the Danish Linux community. In fact, Linux engenders a decidedly emotional response from our normally cool, analytical development staff.

Enhanced X Capture offers full screen, selected window or an arbitrary rectangular area capture. All original colors of the windows are preserved with any combination of windows. Even when a window appears in false colors on the screen due to color table overrun, it is captured in its original colors by EasyCopy and printed correctly.

www.augrin.dk


 DataKeeper

DataKeeper by PowerQuest (makers of PartitionMagic) ensures backup protection by continuously monitoring file activity in the background and supporting compressed backups on the fly. The user configures the program and specifies backup methods for drives, folders and files; DataKeeper then follows these commands automatically and unobtrusively.

Because DataKeeper creates a backup each time a file is modified, scheduling is unnecessary.

Any removable disk drive, network drive, hard disk or floppy drive is supported by DataKeeper. And because it uses a high compression rate, DataKeeper saves disk space.


 News from Loki Entertainment Software

TUSTIN, CA -- November 1, 1999 -- Loki announces the Linux demo version of Railroad Tycoon II: Gold Edition.

Loki released the full version of Railroad Tycoon II, along with The Second Century Expansion Pack, last month to strong reviews. This real-time strategy simulation game places gamers in a world of big business, expansion and engineering in which all aspects of the railroad industry can be controlled. It features a simulated stock market and sophisticated economic system, and as the years progress, gamers must solve modern problems such as the increasing demand for mass transportation in major cities.

The demo allows users to exercise their tycoon tendencies in five different scenarios, and is now freely available for download.

TUSTIN, CA, November 3, 1999 -- Loki is now developing the Linux version of Heroes of Might and Magic(tm) III.

Heroes of Might and Magic III is the latest installment in the series of top-selling, critically-acclaimed strategy games. This turn-based, strategy war game set in the popular Might and Magic(r) fantasy world has already captivated legions of PC gamers. It was awarded 5 out of 5 stars from both Computer Games Magazine and the CNET Gamecenter, as well as an Editor's Choice Award from Computer Gaming World Magazine.

Heroes III will be available for the Linux operating system wherever software is sold by mid-December.

The Loki Hack hacks on the Linux version of Activision's Civilization: Call to Power are now available for public consumption.

TUSTIN, CA -- November 4, 1999 -- Loki Entertainment announces today the creation of a CVS server to facilitate public access to its Open-Source projects.

"One reason we've been successful is that we create Open Source tools to replace proprietary code in our games," said Scott Draeker, president of Loki. "We give these tools back to the community in the form of Open Source projects. We hope other developers will evaluate our tools for projects and choose to use them rather than the proprietary alternatives."

The Loki CVS server allows the public to access the latest development source code for Loki's several Open-Source projects, including SMJPEG, a motion JPEG library, SMPEG, a MPEG-1 playback library, and Setup, a GUI Installer. Other modules available for viewing include SDL, a cross-platform multimedia development API, and Mixer, an enhanced version of the sample SDL audio mixer.

Loki's Open-Source projects are freely available for download from www.lokigames.com, and are offered under the GNU Library General Public License (LGPL). A web interface for the CVS server is available at cvs.lokigames.com.


 Other software

Anti-virus software from Central Command, Inc. and Kaspersky Lab.

Max for Linux is a new product for compiling and running Xbase code on Linux-based computers, and is available for download.

Photogenics is an award-winning graphics package, first released on the Amiga five years ago. It will soon be available on Linux.

Cycas is a new 2D + 3D CAD software, based on GTK. In addition to typical CAD functions, it offers special elements and techniques for architectural design.

The CRiSP visual text editor from Vital.

txt2pdf from Sanface converts text reports to PDF format. It can colorize words using Perl regular expressions, add a border to every page, print in 2-column format, and more.


This page written and maintained by the Editor of the Linux Gazette.
Copyright © 1999,
Published in Issue 48 of Linux Gazette, December 1999

Contents:

(!)Greetings From Jim Dennis

(?)PCI Winmodem --or--
Why Winmodems Don't Work Under Linux, Yet!
(?)Do you know of any V.90 internal modems that are not winmodems? --or--
Linmodems.org
(?)LI_ DOH! --or--
Major Hardware Problems
(?)Clearing Lilo from MBR
(?)modem problems --or--
Modem Problems on a Win '95 System
(?)Questions about Linux --or--
Setting COM port speeds
(?)virus protection --or--
Virus Protection for Linux: A Non-Issue ... But....
(?)Telnet trouble
(?)Modem Noises --or--
Quiet, Modem!
(?)Dual Booting without Re-Partitioning
(?)Regards from Argentina
(?)re: Helpless (issue 45 of linuxgazette)
(?)Thanks!
(?)thanks answer guy! --or--
Just Buy a REAL Modem
(?)Red Hat --or--
Telnet gives: "Connection closed by foreign host..."
(?)multiple root accounts --or--
Multiple Root Accounts: Delegation
(?)opl3 yamaha/SAx sound card --or--
Soundcard Drivers for Win '98?
(?)Another "respawning" question --or--
Author Responds to "gdm Murdered Mysteriously"
(?)Unable to login to SuSE --or--
Upgrade to 6.2 from 6.1 Disables Login
(?)Doslinux --or--
Euphoria
(?) Dell EIDE TR5 Tape Drive
(?)Some basic ftp questions --or--
FTP Daemon: Special Requirements
(?)RedHat Login Problems --or--
login, su, and passwd dies: Everybody dies!
(?)Files invisible via Telnet? --or--
Files invisible via Telnet?
(?)Thanks for the sendmail info in September's column --or--
sendmail Masquerading, Configuration, and User Masquerading Revisited
(?)Segmentation Faults --or--
General S. Fault
(?)Problem of Linux connecting to the Internet thro' MS Proxy --or--
Linux Workstations Behind a Proxy/Firewall
(?)LILO Lockup --or--
LILO Stops at LI
(?)Need help !! --or--
Protocols on top of Protocols: It's Protocols ALL THE WAY DOWN!
(?)Uninstall Linux --or--
Uninstalling Linux
(?)your web --or--
Who is Jim Dennis?
(?)FoxPro 2.6 on Linux --or--
FoxPro 2.0 (SCO) Running Under Linux: Try Flagship?
(?) Coping with Bad Sectors
(?)Linux memory management --or--
Homework Assignment: Write about Linux Memory Management
(?)Lt Modem --or--
HP with LT Winmodem
(?)about slackware --or--
Linux to HP9000 Through RAS?
(?)TCPMUX on Linux --or--
TCPMux Revisited: You'll need a Daemon for it, or a Better inetd
(?)probs --or--
Overwrote NT with RedHat: Good Idea But Bad Move
(?)Need Advice --or--
Partitioning Advice
(?)Unix emulator under Linux. --or--
UNIX Emulation Under Linux? iBCS
(?)PAM applications running as root (Was Re: WebTrends Enterprise Reporting Server)
(?)Looking for help --or--
International Keyboard Mappings for
(?)rsh --or--
Really Wants 'rsh' to Work. Really
(?)RedHat 6.0 - various problems --or--
Laundry List of RH 6.0 Problems or Hardware Blues
(?)spying --or--
More AOL Instant Messenger Spying
(?)X respawning question and answer --or--
Another Solution, or a Different Problem
(?)Using LILO to boot directly to dos --or--
Setting the LILO Default
(?)glibc --or--
Multiple Concurrently Installed Version of glibc
(?)lg #45: Limiting Internet Access through Cable Modems
(?)The Linux Startup Script?
(?)Comcast and IPmasq --or--
Short names for Long Domains?
(?)serial port snooping --or--
Snooping on a Serial Port
(?)Maximal mount reached; check forced --or--
Maximal Mount Count Reached
plus, Ted Ts'o replies.
(?)QUESTION --or--
Selecting a Lotus Notes Platform
(?)RedHat 6.0:Telnet has no login prompt --or--
"telnetd connected:" But No "login" Prompt
(?)A Staging Server --or--
Staging Server on localhost
(?)CDR used in scsi emulation --or--
Mounting CDs on IDE CDRW Drives

(!) Greetings from Jim Dennis

I was working on a long, involved, probably tasty message to use for my greetings this month. Unfortunately, it's long, involved, and I think I want to rearrange it a bit, but don't have a lot of time because we're heading down to L.A. to be with family -- specifically, a science fiction convention we attend every year.

So, I'll aim for something less lofty. I'm finally getting a chance to work a bit with my new improved workstation, and I must say I'm quite pleased with console-apt (a curses interface for Debian's "apt" package manager). We still have quite a mixed bag of distributions around the house though. I think I might write an article comparing the various package management styles available. I may or may not focus only on Linux for that.

Sadly I get to spend even more time in Arizona after I get back. Most people would call me crazy, but I like it a bit better when it's gray and rainy -- reminds me of my youth in Oregon.

Happy Thanksgiving, everyone.


(?) Why Winmodems Don't Work Under Linux, Yet!

From Chuck Winters on Thu, 23 Sep 1999

What is the difference between a winmodem and a regular modem and why will it not work on a Linux box. I have been looking all over for this information, and I can't find it. All I can find is that a Winmodem doesn't work on a Linux box. Could you also put up some reference links to some reading material on the web. Thanks.

Chuck Winters

(!) Check out http://linmodems.org
Also look at my recent rants on the topic using the Linux Gazette search engine (WebGlimpse) (http://www.linuxgazette.com/search.html).

(?) Linmodems.org

From Dick King on Thu, 23 Sep 1999

It's kind of annoying. You can't tell by looking at the ad or sometimes even the box [unless you notice that it requires a Pentium, but they don't always admit that]. I would rather buy an internal modem because the only space i have on my cluttered desk is inside the tower and my transient absorber has no room for one of those little power cubes ;-) but it looks like i need to go external to be safe.

-dk

(!)
Try following the FAQ link somewhere under
http://linmodems.org
There is an interesting discussion about Linux support for Winmodems and a link to someone else's FAQ for identifying the bloody things.
(I like the idea of "linmodem" as a generic cheap telephony interface card. We'll see what it does to performance and all that, when someone ships code).

(?) Major Hardware Problems

From Rian Kruger on Thu, 23 Sep 1999

Here is my problem.

I have been asked to fix a computer which uses LILO to Boot and is partitioned WinNT, Linux. Fine and Dandy.

The problem is that one morning the computer decided not to boot. All I get is LI_ where it should say LILO: and then boot up.

I have a rescue disk, but it was created when installing Linux on another machine. Both distributions are Red Hat.

I insert the disk and start the machine

So then I get LILO: and it says to type rescue, which I did... All's well, till the machine asks:

VFS: Insert root floppy disk to be loaded into ramdisk and press ENTER
 

On pressing Enter, things go well enough for long enough to give you a false sense of security before hitting you with:


floppy0: data CRC error: track 0, head 1, sector 12, size 2
floppy0: data CRC error: track 0, head 1, sector 12, size 2
end_request: I/O error, dev 02:00, sector 29

repeatedly

If I try to boot linux from the same Boot disk.

(!) [crash messages elided]

(?) That's when I got desperate and tried to boot from an NT repair disk (also created on another machine during installation)

And the Computer Says:

The emergency Repair Disk is not startable.
 

Repairing a damaged Windows NT installation is an option available at the beginning of the Windows NT installation.


To start setup bla bla bla....

If I take this route, am I going to have to reformat the entire machine? Will I lose all the Linux info? How do I save the situation? Please help; sage advice would be much appreciated.

Thanks
Rian

(!) Sounds like a bad controller, or a dead DMA chip. Might be some memory that went wacky.
It sounds like hardware failure.

(?) Clearing Lilo from MBR

From Norman Elliott on Thu, 23 Sep 1999

Just read the item on clearing lilo.

All I do is boot from a Dos ( 5 or greater ) boot disc and issue the command:

fdisk /mbr

that seems to fix anything including boot sector viruses. Maybe Linux fdisk would take the same parameter. I enjoy your column, keep up the good work, best wishes,

norm

(!) The /MBR option was undocumented and only introduced in MS-DOS 5.0. I don't remember the question to which you were referring. If I didn't mention FDISK /MBR it was probably because I was not assuming that the user was trying to restore an MS-DOS 5.0 or later boot loader to their system.
Linux fdisk is a different program and doesn't touch the boot code in the MBR. It only works on the partition tables (which comprise the last 66 bytes of the MBR and possibly a set of others for extended partitions).
There are several Linux programs which do write boot records. /sbin/lilo is the most commonly used. 'dd' will do in a pinch (if you have a .bin image to put into place).
BTW: don't count on /MBR to fix a virus. Some viruses encrypt portions of your filesystem, causing major problems if they aren't removed correctly. To prevent infection by boot sector viruses, disable the "floppy boot" options in your BIOS. You should only enable those long enough to perform an OS installation or system recovery and disable them immediately thereafter. To prevent viral infection by "multi-partite" and "file infector" viruses, stop running MS-DOS. To avoid MS Windows macro viruses, avoid MS Office, MS Exchange and related software (with virus^H^H^H^H macroing hooks built into them).
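To illustrate the 'dd' approach mentioned above: the boot code occupies the first 446 bytes of the MBR, so backing it up and restoring it without disturbing the partition table is a matter of careful byte counts. Here is a sketch, run against a scratch image file rather than a real disk (substitute your actual device, e.g. /dev/hda, only with great care and a verified backup; the filenames are placeholders):

```shell
# Create a 512-byte scratch "MBR" to practice on (stands in for a real disk).
dd if=/dev/zero of=mbr-demo.img bs=512 count=1 2>/dev/null

# Back up only the boot code: the first 446 bytes. The 64-byte partition
# table and the 2-byte signature that follow are left alone.
dd if=mbr-demo.img of=bootcode.bin bs=446 count=1 2>/dev/null

# Restoring is the reverse; conv=notrunc keeps the rest of the sector intact.
dd if=bootcode.bin of=mbr-demo.img bs=446 count=1 conv=notrunc 2>/dev/null
```

The same byte counts are why /sbin/lilo can rewrite the boot loader without clobbering your partitions.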

(?) Modem Problems on a Win '95 System

From Benedict, Kevin F on Thu, 23 Sep 1999

I have a Windows 95 system and was using a U.S. Robotics 33 external modem. It started working sporadically. I received a new U.S. Robotics 56k external modem as a gift. I hooked it up, and now my machine will not boot up unless the modem is turned off or the serial cable is unplugged. The control panel identifies COM 2. If I boot up, then connect it and try to add it to the system, the system locks up. On boot-up, the system gets all the way to "Starting Windows 95", accesses the CD-ROM drive light briefly, then the modem's TR light, then locks up. Microsoft says it is a hardware problem, and U.S. Robotics took the modem in for service and returned it, saying nothing was wrong with it. I have replaced the cable, with no effect. I can boot up in safe mode OK. Is there something that could help me besides taking the machine into a shop? Thanks for your consideration.

(!) Well, it's an external modem. So, it will probably run under Linux. Of course the machine might have problems running Linux --- so you might want to replace that and your OS. I'd start by just replacing MS Windows '9x with Linux or FreeBSD. See if that works.
BTW: I'm the Linux Gazette Answer Guy. You might want to read some of the back issues, or the FAQ, to understand why my answer is so obtuse. More importantly, you might want to actually READ some of the web pages that come up when you're desperately searching for support that your software vendor clearly is not providing. If you'd READ any of the links from which you found my e-mail address, then you might have seen that I don't DO Windows. Most importantly, if you READ before you e-mail a question, you won't send "off-topic" questions to UNIX, MacOS, and other "answer guys" who don't DO Windows.

(?) Setting COM port speeds

From Jason_Magill on Thu, 23 Sep 1999

Answer Guy,

I have an application that I am running that requires me to read the serial port (Com2). The problem is that I need to read it at 9600 7-E-1. How would you go about doing that so when the system is rebooted it will automatically read the serial port at 9600 7-E-1?

I could really use your help.

Thanks, Jason

(!) You should be able to do that with the following command:
               stty 9600 parenb cs7 < /dev/ttyS1
... or something like that.
Note you must redirect INPUT from the serial port to the 'stty' command. This is because the terminal settings are accomplished through an ioctl().
You might also look at the 'setserial' program. It works a bit differently (and is Linux specific) whereas the 'stty' program has been around for UNIX for many years.
Note also that we use ttyS1 for MS-DOS COM2. This is because Linux counts these devices from zero. There is no guarantee that Linux will detect these ports in the same order as MS-DOS, but usually COM2 should be /dev/ttyS1.
The parenb is to set "parity even" (I don't know why they have the "b" there --- for "byte" maybe?) and the cs7 is to set the "character size." There are many other 'stty' settings available. Read the 'man' page for details.
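To make such settings stick across reboots, one approach (a sketch assuming a Red Hat-style init layout; check the path and flags against your own system) is to run the 'stty' line from /etc/rc.d/rc.local, which executes at the end of the boot sequence:

```shell
# Fragment for /etc/rc.d/rc.local -- runs after the other init scripts.
# Configure COM2 (/dev/ttyS1) for 9600 baud, 7-E-1:
#   parenb   enable parity generation and checking
#   -parodd  even (not odd) parity
#   cs7      7-bit characters
#   -cstopb  one stop bit
stty 9600 parenb -parodd cs7 -cstopb < /dev/ttyS1
```

Note the explicit -parodd and -cstopb: 'parenb' alone enables parity but leaves the odd/even and stop-bit settings at whatever they were before.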

(?) Virus Protection for Linux: A Non-Issue ... But....

From muzician on Thu, 23 Sep 1999

Subject: Re: virus protection I cant find any references to that. I am installing 6.0 for the first time, and need to know what to do.

(!) Basically, viruses are a non-issue for Linux and other forms of UNIX. While it is technically possible to create them, the multi-user design of UNIX-like systems, coupled with the widespread practice of separating "normal use" (access to applications and user data) from "administration" (use of the 'root' account), makes the OS very hostile to virus propagation.
You can write a virus, but it won't spread.
This is one of the benefits to the convention of logging in as a "normal user" for most of your Linux work and reserving the "root" account for upgrading and installing software. Another benefit is that it limits the damage you'll do with a careless user command.
This is not to say that Linux and UNIX are immune to viruses, trojan horses and other forms of hostile code. Far from it. There are many programs that run with "root" privileges on a typical installation. Any of these might be "tricked" into acting on an attacker's behalf. They can be subverted, which leads to the compromise of the whole system's security.
Any program that can be "tricked" (subverted) into running foreign code, or otherwise compromise the user's and system administrator's intentions has a bug. When we find these bugs we fix them.
Finding the ways in which such programs can be commandeered by hostile users, and by anonymous attackers over networking connections, is an ongoing effort by thousands of programmers throughout the open source community. There is nothing Linux specific about these efforts. OpenBSD (http://www.openbsd.org) is most renowned for its accomplishment of a comprehensive audit of its own code. Some of that code is being re-ported to Linux (for example the BSDish FTP daemon that's included with some distributions).
Linux and UNIX code auditors tend to focus on programs that are run "SUID" (with the effective permissions of the program's owner, rather than those of the owner of the executing process) and with "daemons" (programs that act as "servers" for network protocols and provide other local services). These are the most obvious cases where programs are an interface between "security contexts."
For a cracker (any anonymous attacker of your systems) the "mother lode" is a network process that runs as 'root' and has a remotely exploitable bug (often a buffer overflow, a particular sort of bug where an expected input is filled with an excessively long response which contains some hostile code). Finding one of these allows a cracker to remotely assume control of a whole system.
These sorts of bugs are not specific to Linux, or UNIX. They're possible under NT and most other operating systems as well. They are commonly detected on UNIX systems and quickly fixed (and occasionally re-introduced in future versions and new programs). It is believed that there are about as many exploitable bugs in NT and MacOS servers as there have been in Linux and UNIX. They usually show up as "hangs" or "abends" (abnormal ends) in the services on those systems, rather than complete, interactive exploitation.
(The reasons for this have to do with the rather poor remote administration features and somewhat more complicated programming models of these other systems.) So on the surface NT and MacOS seem to "failsafe" (die without giving the attacker access) --- although this is probably an illusion, waiting to be dispelled by the next generation of crackers.
Again, these are NOT viruses. However, they have similar results, someone runs code on your system that you didn't approve and don't want.
So these vulnerabilities (especially buffer overflows in network daemons like popd, imapd, mountd, ftpd, etc) are the greatest risk to the security of your system. That's why companies put up firewalls. That's why sysadmins tell you not to leave "ports open" (these services available) on your systems, or to use TCP Wrappers (pre-installed on every major Linux distribution) to limit the networks and systems that can access those services that you REALLY need.
I mentioned that security auditors focus on SUID progams and networking daemons. This is a matter of priorities as those are the most "attractive" points for an attacker to probe. However, we have to be aware that security auditing and robust code is necessary ANY TIME A PROGRAM ACTS AS AN INTERFACE BETWEEN/AMONG DIFFERENT SECURITY CONTEXTS.
We must be concerned about bugs IN ANY CODE THAT PROCESSES UNTRUSTED DATA.
(I'm shouting about this since it is a point that is often overlooked, even by some of the most respected programmers that I know).
For example, when you sent me this e-mail, your mail came from one security context (the outside world, from a complete stranger). My mail user agent (MUA) acts as an interface between your data and me. If there's a bug in my mailer (or the editor that my mailer invokes when I want to respond) then you might be able to craft a piece of e-mail that will crash my program, and possibly even subvert it.
Such a "black widow" would be very hard to write for any UNIX mailer (though the addition of MIME handling features did introduce some such bugs in some mailers). It would also be limited in its effect. It probably could only affect one mailer under one operating system. It might not propagate through POP servers and/or through certain POP clients (like 'fetchmail').
There are dozens of common MUAs (mailers) used by UNIX and Linux people. So any such bug is likely to only hurt a few of them (and not propagate from them to others). Likewise for many other classes of programs.
The worst security risks are incurred by "monocultures" (a term borrowed from agriculture). If we all grow the same strains of the same crops, one blight and we all starve. If a few of us grow one strain, others grow a different crop, etc --- then the damage is limited and the blight doesn't spread as far or as fast (since the various fields of any one crop/strain are separated by buffer zones).
When you think about the effects of Melissa, ExploreZip and the many other MS Windows macro viruses, you see the inherent risks of monoculture. (You also see that Microsoft added features to their office suite and mail client which make it easy to write trojans and worms.)
Computer systems and networks exhibit similar characteristics in the face of hostile programmers.
(In other words diversity is good. Some of us should run FreeBSD, Solaris, and some completely non-UNIX operating systems that aren't even C derived. Some of us should run Linux on x86, while others use Alphas, PowerPCs, etc. Uniformity has some short-term cost and training benefits --- but that way lies great danger!).
How bad is this danger?
Well, I've been running an experiment. I administer a system (a web server for a small literary organization, a non-profit) which is exposed to the Internet and gets very little administrative attention. I tend not to upgrade it until I have to. It's been cracked twice in three years. It probably hasn't been cracked on other occasions, since I actually do have a sneaky trick up my sleeve that allows me to detect and recover from the garden variety "script kiddie" attacks fairly quickly (and remotely). I do say "probably" since anyone that asserts that he or she has "never" been cracked, or that he or she is "sure" that they are secure, is really a bit foolish. You can have a very high degree of confidence --- but certainty in this case is a sin.
That is on a box which is effectively "wide open." With a modicum of configuration (not running inetd, limiting access to any services you must run, updating your packages as bug fixes are announced, etc) you can limit your chances of being compromised to very low values. Read the Linux Security HOWTO and with about five percent of the effort described there you'll eliminate well over ninety percent of the risk.
Note: Symantec is apparently shipping an anti-virus for Linux. I've heard that Trend is also testing one. I guess these are designed to catch the two strains of viruses that have been heard of for Linux. I also gather that they will scan your system for MS-DOS and Windows macro viruses (well over 10,000 of those). This is to protect the clients that might be using your Linux system as an FTP, Samba, or NFS server, and to save you from the infection on your "other OS" on those multi-boot systems.
Personally I suggested to Symantec (back when I worked there) that the best Linux product they could release would be a simple terminal to the PCAnywhere package. Let me use a window on my Linux system to remotely manage any MS Windows PC's that I have to deal with.
They didn't listen, and now we don't need it. VNC (*) seems to do the job well enough, and we may stomp out most of MS Windows before Symantec could code up a new PC Anywhere client.
Remember, a virus is just a bit of programming code. It does things that most recipients don't want --- but nothing short of a brilliant AI (artificial intelligence) can be relied upon to distinguish a virus from any other (benign) program. "Heuristic" virus scanners have been written --- they haven't fared significantly better than the traditional reactive signature scanners.
(I used to work for Symantec, and for McAfee. I've read, heard, and dealt with far more about PC and Mac viruses than I can possibly type here).
Summary: Don't worry about viruses on your Linux box. They aren't a problem. As for the security concerns, just lock down those stray networking services and don't give accounts out on your system to people you don't trust. If all you do is add the following to your /etc/hosts.deny:
ALL:ALL
... you've done plenty to secure your home system from the occasional portscan attack through your dial-up ISP connection.
If you read the Security HOWTO (*) by Kevin Fenzi and Dave Wreski and follow most of their suggestions then you'll probably never have a problem. Under Linux you can keep your system as wide-open or just about as locked down as you like.

(?) Telnet trouble

From Mathew on Sat, 25 Sep 1999

Dear Jim

Your email did help me to solve the problem with the telnet in linux. It works fine now. Thanks a million.....

I have a small doubt. Let me explain...... My network has an NT server, a LINUX server and 20 Windows 95 clients. I followed your instructions and added the addresses of all the clients into the /etc/hosts file on the LINUX machine and voila, the telnet worked immediately.

But the NT server was the one running a DHCP server and dynamically allocating the addresses to the clients. The clients were configured to use DHCP and were not statically given IP addresses. I managed to see the current DHCP allocation for each client and add those addresses into the /etc/hosts file on the LINUX server, but my doubt is: what happens when the DHCP address for a client changes? Then again we'll have to change the address in the /etc/hosts file, right? This seems silly. Is there any way to make the LINUX hosts file automatically pick up the DHCP addresses from the NT server?

Also another important thing is I am still unable to ping from the NT server to the LINUX server using the name. It works only with the IP address. Is there any way to make the NT DHCP to recognize the LINUX server?

(!) Well, either you shouldn't use dynamic addressing (DHCP) or you should use dynamic DNS. You could also disable TCP Wrappers (edit your /etc/inetd.conf to change lines like:
telnet stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd
... to look more like:
telnet stream  tcp     nowait  root    /usr/sbin/in.telnetd in.telnetd
(and comment out all of the services you don't need while you're at it).

(?) Thanks Jim for all your help....you've become my LINUX guru.............

(!) Perhaps you should consider getting a support contract (or joining a local users group). I may not always respond as quickly nor as thoroughly as you'd like.

(?) Best Regards
bob


(?) Quiet, Modem!

From qed on Fri, 24 Sep 1999

Hi,

I know this is a nitpicky thing, but I bounce back and forth between Linux and that other operating system (Doors?), and the thing that's bugging me is that I can turn off the modem noises in Win95, but I don't know how to reroute them to /dev/null in Linux.

Later, Jerry Boyd

(!) echo ATL0M0 > /dev/modem
... Actually you want to add the L0M0 (those are zeros) to your init strings in whichever Linux programs you're using with your modem (like your PPP chat scripts).
(You may want to check your modem's manual to make sure it understands these AT commands --- but it should if it's even moderately Hayes compatible).
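For example, a minimal PPP chat script with the speaker silenced might look like this (the filename and phone number are hypothetical; L0 sets the lowest speaker volume and M0 turns the speaker off entirely):

```
# /etc/ppp/chat-isp (hypothetical) -- expect/send pairs for pppd's chat program
# ATL0M0 silences the speaker before dialing
ABORT   BUSY
ABORT   'NO CARRIER'
''      ATZ
OK      ATL0M0
OK      ATDT5551234
CONNECT ''
```

Any program that sends its own init string to the modem (minicom, efax, etc.) has a similar place to tack on the L0M0.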

(?) Dual Booting without Re-Partitioning

From Hoyt on Fri, 17 Sep 1999

In addition to the suggestions you made, you could also look at Lnx4Win, a Mandrake Linux product. Read more about it at http://linuxforum.com/99/07/linux4win.html
Hoyt

(?) Regards from Argentina

From Horacio Peña on Wed, 15 Sep 1999

Jim and Heather:
I just want to thank you for the extraordinary job you both are doing with the "Answer Guy". The whole Gazette is wonderful; I have all the issues, but the Answer Guy is the first thing I read. I have learned a lot from it. Carry on!
PS: my regards to all folks at the Gazette.

Horacio Peña - Quilmes Oeste
[[ Argentina ]]

(?) re: Helpless (issue 45 of linuxgazette)

From Alex Brak on Thu, 9 Sep 1999

Hi,
In the Answer Guy article titled "Helpless" (http://www.linuxgazette.com/issue44/tag/33.html), I think Leslie was referring to Data Access Objects (DAO), a Microsoft data access API/toolkit. AFAIK, there's no Linux/Unix version, as it is a MS-proprietary technology.
BTW, the answer to his/her question is that DAO will work on Win98, as it did with Win95 -- he/she will, however, have to install the latest DAO runtime redistributable to develop software using DAO. One should be able to find it on the Microsoft Developer Network site: http://msdn.microsoft.com
I'm not sure why Leslie sent this to you of all people :)
keep up the good work,
Alex Brak

(?) Thanks!

From Moore, John R on Sat, 4 Sep 1999

Mr. Dennis:
I just sent a pathetic, pleading letter from a confused newbie (me) to an author of a PPP HOWTO, begging for some clarification on how to set up the necessary files to communicate with my ISP using RH5.2.
In yet another desperate attempt to find the truth, I ran across your response to some other poor soul's similar dilemma, dated Tuesday, August 31 of this year.
Finally, some answers I think I can understand. You'd most certainly expire laughing if you knew what I've been through to get on the net. Only now, after reading your article, do I understand that I must create a script that runs, rather than just typing in the commands at some terminal window.
It is no wonder that so many users run as fast as they can in the opposite direction. In a time when the Linux community is trying to be "picked up" by the corporate world and become a viable "player", I find it amusing that so many in that same community can't understand why they get repelled rather than attracted. I myself spent the past three weeks trying to configure my one computer to establish a link on the internet without any success. Oh sure, I was able to get minicom to dial my ISP...ooohhh, look out big blue! Watch out Bill Gates! C'mon corporate world...jump on the Linux bandwagon...we're great! I was beginning to suspect that most new members of the Linux community are bald from pulling their hair out...I know I was getting that way.
And then along comes your informative, albeit wordy (like myself) article that will hopefully clear some of the haze. It is refreshing to read an article that doesn't assume we all grew up in Berkeley with nothing better to do than grep /etc/hosts for hours on end. Still, some assumptions were made, but hopefully, this time next week, I'll be sending you email on a Linux box, looking over at my Windows box with a sly grin, rather than vice versa with a look of total frustration.
Sincerely,
John Moore

(?) Just Buy a REAL Modem

From Jonathan J. Rakocy on Thu, 26 Aug 1999

Dear AnswerGuy,
Well, thanks for your response, answer guy. I appreciate the prompt reply.
To the tech store I go! Happy computing, and maybe I'll talk to you again? If you have a mailing list, add me if possible. Thanks again!
Sincerely, Jonathan Rakocy <lunar>
At 09:48 PM 8/25/99 -0700, you wrote:

Hello answer guy. My name is Jonathan. I am a tired windows user trying to educate myself on the ins and outs of open source, and in my case Linux. I love the idea and concepts behind it. I've been downloading pages and saving links, buying books and reading till 3 in the morning, like now.

So, to my point. First I installed Linux alongside windows. I need it (unfortunately, for school so far). I hope to learn how to work around it. But as you probably know by now, I am quite green. I've not been able to connect to the www. Now it comes down to the modem, I think. I have a winmodem, and I've recently learned that they are junk. After reading the essay on them in the gazette, my fears are confirmed. Do I truly lose? Or is there something I can do? Say, rewrite something? Where can I find more information on this, or do I just go buy a new modem if I want to continue? Thanks for your patience. It is late and my eyes are blurry. Have a good day. Sincerely, Jonathan Rakocy, Linux Supporter

The answer is still:

Just Buy a REAL Modem!


(?) Telnet gives: "Connection closed by foreign host..."

From Martin Osvaldo Mauri on Wed, 13 Oct 1999

Dear James:

I've had a problem with Red Hat 6.0 while trying to telnet from one machine to another. All the configuration files seem to be OK, but when I telnet, it gives me the message "connection closed by foreign host..."

Any suggestion? best regards,

Martin O. Mauri

(!) What does the syslog (/var/log/messages) say on the server?
This message indicates that the remote system is disconnecting you. Probably the remote system is enforcing some security or access policy, or you're missing a file (such as the /usr/sbin/in.telnetd program).
I'd look at the /etc/hosts.allow and /etc/hosts.deny files. I'd also look at the /etc/inetd.conf file.
In /etc/hosts.allow and hosts.deny there might be a set of rules specifying a list of services (such as in.telnetd). Each of these services can be associated with a list of host/domain and network address patterns. Entries in the hosts.allow are allowed to access the specified services while entries in the hosts.deny are disconnected (as you described). All attempts to access any service are logged.
There are two groups of network services for which access can be controlled through these two hosts files: those that are dynamically launched through the 'inetd' dispatcher (any lines that refer to /usr/sbin/tcpd), and any "standalone" services which are linked to the "libwrap" libraries (such as the portmapper daemon that's shipped with most Linux distributions).
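As a sketch, a typical pair of TCP Wrappers rule files might look like this (the network address is hypothetical; the syntax is `daemon: client-list`):

```
# /etc/hosts.allow -- let hosts on the local network use telnet
in.telnetd: 192.168.1.

# /etc/hosts.deny -- refuse everything not explicitly allowed above
ALL: ALL
```

hosts.allow is consulted first; a match there grants access, otherwise hosts.deny is checked, and a match there produces exactly the "connection closed by foreign host" behavior described.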
At a guess I'd say that your problem is not related to entries in /etc/hosts.allow and /etc/hosts.deny. Most Linux distributions ship with those as empty files; they're just placeholders with comments.
It's more likely that you're actually missing your /usr/sbin/in.telnetd. This should show up in your logs as a message like: "unable to execute"
If that's the case: mount up your CD (on the server), change to the RPMS directory and type a command like:
rpm -Uvh telnet*.rpm
... that should install the telnet server and client. Red Hat 5.2 used to put those both in a single package. I don't remember if RH 6.0 split those into separate packages or not.
While you're thinking about this you should consider avoiding telnet, rsh, and other insecure protocols. They are not appropriate for use over untrusted networks. The telnet protocol transmits all information as "plain" text. This means that your passwords can be "sniffed" as you type them into your login and 'su' prompts.
We won't even get into the many problems posed by rsh/rlogin. Suffice it to say that these suffer from a very weak authentication model in addition to being "sniffable."
So, you may want to install a set of cryptographically secure tools for your remote access needs. 'ssh' is a secure (cryptographically enabled) program that works like rsh and rlogin. It's the most popular tool for this sort of work. There are also tools like STEL (secure telnet) and ssltelnet.
Most of these are freely available. ssh version 2 is only free for some forms of "personal use" while the older ssh version 1.2 is free for a broader interpretation of "personal use." STEL was developed by the Italian CERT (computer emergency response team) and is ...
I've talked about the many free cryptography tools available for Linux in a previous column (The Answer Guy 35: FS Security using Linux http://www.linuxgazette.com/issue35/tag/crypto.html)
I hope most of those links are still valid. Meanwhile let me assure you that the most useful site on the Internet for getting free crypto packages is still: Replay Associates (http://www.replay.com).

[ Replay Associates has moved to http://www.zedz.net/. Apparently ReplayTV really wanted the name they had :) -- Heather ]

They currently have one of my favorite Latin quotes on their web page:
"Quis custodiet ipsos custodes?"
Hope that helps.

(?) Multiple Root Accounts: Delegation

From R Dalton on Wed, 13 Oct 1999

Hello Answer Guy

I'm wondering if it is possible to setup multiple root accounts on a linux system for more than one unix admin to monitor a system ?

If this is possible how ? also can they have different root directories ?

Thanks. R Dalton

(!) It is possible to have multiple accounts with 'root' privileges. The easiest way is to edit the /etc/passwd file (using 'vipw') and make extra copies of the line that starts with "root" (the root account entry).
Then you edit the login name field (the first one), and the full name (GECOS) field, the home directory field, and the shell field. Then you issue the 'passwd' command to set each of the initial passwords for each of these.
Example (excerpt from an /etc/passwd file):
root:x:0:0:root:/root:/bin/bash
toor:x:0:0:Alternative Root Acct:/root:/bin/bash
ruut:x:0:0:Cracker Jack:/root:/usr/bin/pdksh
jonroot:x:0:0:Jon Doe:/home/jon/root:/bin/bash
tomroot:x:0:0:Tom Boote:/home/tom/root:/bin/bash
jillroot:x:0:0:Jill Tedd:/home/jill/root:/bin/tcsh
jimd:x:500:123:Jim Dennis:/home/jimd:/bin/bash
In this example I have the customary root account followed by 'toor' ("root" backwards) and 'ruut' (punny spelling of "root"). Then I have a set of root equivalent accounts for Jon, Tom, and Jill.
I've followed those entries with a token "normal" user account for comparison. The only important detail is that the 3rd field on the root equivalent accounts is set to '0' while all non-root accounts have other numbers (UIDs).
All kernel and filesystem data structures that store and manipulate ownership (of files, processes, etc) and check permissions use the numeric UID. The /etc/passwd file is the primary way to map names to UIDs and vice versa.
Note that this works in any form of UNIX. However, it is not necessarily the best way to do things. Some programs will get a login name by looking up the UID. When we have non-unique UIDs, we can confuse those programs (of course, you probably shouldn't be running those programs as 'root' anyway).
There are other potential problems with this strategy. Obviously, having more people and more passwords that give the same level of access increases the risk that unauthorized people will guess or steal those passwords, or trick one of the admins into doing "bad things" (social engineering). Also, this mechanism provides no tracking of who did what. There is no way to distinguish between what jillroot and tomroot did (since they have the same UID --- which is all the system uses for marking file ownership and checking privileges).
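One quick way to audit which accounts share UID 0 (a one-liner sketch, assuming the standard colon-delimited /etc/passwd format):

```shell
# Print the login name (field 1) of every account whose UID (field 3) is zero.
awk -F: '$3 == 0 { print $1 }' /etc/passwd
```

Run against the example excerpt above, this would list root, toor, ruut, jonroot, tomroot, and jillroot --- but not jimd.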
A better way for most situations is to install 'sudo'.
The sudo package allows you to selectively give access to specific users and groups, allowing them to execute specific programs and with specific options. The users run the 'sudo' command, which prompts them for their own (normal user account) password.
In your case you might just start by installing sudo and configuring it to allow access to a command shell (/bin/sh and/or /bin/bash). That's pretty simple. It effectively gives you the benefits of the multiple account entries (though it doesn't set up separate home directories). One advantage is that it does logging of who used which sudo commands at what time.
(Obviously a 'root' user can edit the local syslog entries and can stop, restart, and replace the local syslogd daemon to "cover his tracks" --- 'sudo' to root access doesn't protect you from unreasonable expectations. But the logging can help a bit).
As you more clearly define the precise operations that you need to delegate, you can edit your /etc/sudoers file to more precisely limit your users and groups to those specific programs and scripts that they need.
The sudoers file is relatively easy to understand.
The only confusing part is that its entries refer to network hosts and "netgroups" (a Sun NIS concept). This is intended to allow sites to create a single /etc/sudoers file and distribute it to all of their systems. The reason I found this confusing when I first installed 'sudo' is that 'sudo' itself doesn't provide any networking or distribution mechanism (and the man page doesn't actually explain why they put these hostname references there or how you'd use them). It assumes that the sysadmins using the package will want to create a uniform sudoers file and know how to do it (through rdist, ssh/scp, rsync, etc).
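A minimal /etc/sudoers along those lines might look like this (the user names and the script path are hypothetical; always edit the file with 'visudo' so syntax errors are caught):

```
# Three admins may run full root shells on any host
User_Alias      ADMINS = jill, tom, jon
ADMINS          ALL = /bin/sh, /bin/bash

# A junior operator may run exactly one script as root, nothing else
webop           ALL = /usr/local/sbin/restart-httpd
```

Each invocation is logged through syslog with the invoking user's own name, which is precisely the accountability that shared UID-0 accounts can't give you.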
cfengine is another package you might want to consider. It has nothing to do with authority delegation (giving out root privileges to more users), but may be useful for automating your system monitoring and configuration tasks.
cfengine is a host configuration utility. It implements a language to describe certain sorts of system administration policies and corrective actions. It's an intriguing concept that I've only played with a little bit. However, it is gaining popularity in sysadmin circles (along with the healthy respect that one reserves for a loaded firearm --- one mistake in a cfengine script can make thousands of changes on hundreds of hosts).

(?) Soundcard Drivers for Win '98?

From Adam Moore on Tue, 12 Oct 1999

hello i'm currently trying to find a driver for the OPL3 YAMAHA/SAx sound card for win98 any help you can give me would be appreciated.

thanks

(!) I don't do (MS) Windows. Contact Microsoft, or your sound card manufacturer, or your local retailer.

(?) Author Responds to "gdm Murdered Mysteriously"

From Martin K. Petersen on Tue, 12 Oct 1999

Sorry about the delay. I've been attending Linux Kongress the past week.

The fact that the version of GNOME gdm that shipped with Red Hat 6.x can't gracefully handle (clean up after) an inadvertent shutdown or other mishap is very disappointing.

Well, contrary to common belief X display management isn't trivial at all. Since gdm is a complete redesign/rewrite of xdm, it is bound to have problems. I have never tried to conceal the fact that gdm has issues. It even says so on the web page.

That being said, gdm works ok for local display management, provided the X server is relatively well behaved (i.e. not buggy).

The fact that the version of GNOME gdm that shipped with Red Hat 6.x can't gracefully handle (clean up after) an inadvertent shutdown

Yes it can. You're speculating.

I find it disappointing that a person like you, who has been in the Linux business for several years, resorts to spreading FUD. Don't answer questions, if you don't know the answer.

The right answer here, like in 99% of all other cases concerning system daemons, would be to consult the syslog.

(!) I believe I did consult the syslogs. I remember that I did have to remove some sort of lock file or something before I could get gdm working again (but I don't remember the details).

(?) I did NOT see this question listed in their FAQ (which surprises me, since I would think that this would be a very commonly encountered problem among RH6/GNOME users). However, I did find a link to a bug tracking system. From there I searched for messages related to our "murdered mysteriously" problem. There was some indication that Martin K. Petersen is the contact for gdm and that he posted patches to resolve that (and several other) gdm issues.

The "murdered mysteriously" message is a non-critical warning, not a failure. Thus it hasn't and never will be "fixed". Neither have I posted any patches to resolve it.

The message indicates that something killed the master gdm process, or something left Xlib in a state which it couldn't handle gracefully. I.e., gdm was forced to abort without cleaning up. However, this doesn't affect subsequent startups in any way.

/Martin

(!) This suggests that a simple [Ctrl][Alt][BackSpace] or a 'telinit q' command should restore 'gdm' to functioning. Or is there some sort of state preserved by Xlib in the filesystem?
I'm sorry if my message offended you. I was frustrated when I first encountered this error message (in front of a class full of new Linux students, in fact) and I seem to recall that the problem persisted through a reboot, and required some fussing with lockfiles or unix domain socket entries, or something.
I was also frustrated when I got the question for my column.
Since I don't run GNOME on my home systems or my workstation I don't have an easy way to attempt to reproduce the problem myself. The user didn't provide me with much info either.
(Thus I am forced to speculate: frequently and for many of my answers. I try to let people know when I'm guessing and when I'm speaking from first hand experience. Sometimes I fail).
I have seen Red Hat 6.0 systems FAIL to bring up gdm in a persistent way. I've seen the message "gdm murdered mysteriously" associated with this failure. It might not be a failure of gdm's --- but the gdm error message is certainly occurring at about the same time.
Anyway, hopefully GNOME will be more stable in Red Hat 6.2 or 6.3. Overall I think Red Hat Inc has been pushing it on to too many desktops a little too quickly.

(?) Upgrade to 6.2 from 6.1 Disables Login

From Dave on Tue, 12 Oct 1999

I recently began setting up a SuSE linux system to replace my Win9x system. The installation of SuSE 6.1 went great. As well as XFree86 and several software packages including Netscape and RealPlayer.

While under X I installed the base RPM upgrades for SuSE 6.2. The only packages it replaced are at ftp://ftp.suse.com/pub/suse/i386/update/6.2/a1.

Nothing that would affect my logging in, I thought. After logging out as root I was unable to log back in as root or my user account. It would just give me an "invalid login" message. I tried going into rescue mode and clearing the root password entry in the /etc/shadow file as well as the /etc/passwd file.

The login error remains. Can you give any suggestions as to where the source of this problem might be?

Thanks, -- Dave N.

(!) My wife just did the same thing this weekend. She's still working on it. Her plan is to push forward with the full system upgrade. She's using one of the other systems to fetch all the RPMs to a local mirror (since full FTP installation/upgrade over the Internet is far too unreliable to complete; she tried that already).
So, she's still waiting on the DSL line to finish that process. She has a Debian laptop and we have a couple of other Linux boxes around, so it's not like she's totally stuck.

[ It was solved fairly quickly once I got enough of the base back in sync with itself. Because I was foolish enough to do my update over the net (I was too impatient to wait for the boxed copy to arrive at my local store) I had to wait for hours that were lightly travelled around here, and a mirror that had free ftp logins available around that hour.

Many things about my system seem more stable now, although gimp and enlightenment appear to have an allergy with each other. -- Heather ]

I would guess that there's a problem with the libc libraries. I guess that S.u.S.E. 6.2 installs glibc 2.1 libraries. Some of the programs that are linked to glibc2 (libc.so.6) are failing on some differences between 2.0 and 2.1. (Those programs probably should have been linked more tightly --- to 2.0 rather than just 2).
Anyway, your best bet is probably to push forward and upgrade the whole system. You might be able to temporarily fix your system (well enough to log in) by booting from a rescue floppy, mounting your root filesystem and tweaking the symlinks under /lib. Basically make the libc.so.2 link point to libc.so.2.0 rather than libc.so.2.1. (If the links don't look something like that when you get there, it blows my theory; there'd have to be something else wrong). If you do find the symlinks set up like I'm thinking --- change them around, cd to the root of the filesystem (the top level mount point below your rescue, usually /mnt) and run the command:
usr/sbin/chroot . /sbin/ldconfig
... that should force the ldconfig command to execute properly on your filesystem tree.
This "chroot" stuff is very handy for working on rescue disks. You boot from the rescue, mount your normal filesystems under /mnt, cd to there and "chroot . /bin/sh". Then you can work on your normal fs structure, and the commands you use, like /sbin/lilo, rpm, ldconfig, passwd, et al., will all find things where they're "supposed to be" (like the /etc directory, the rpm dbm files under /var/lib/rpm, and the /lib directory).
It's a bit confusing to describe. Play with it a bit and see what you figure out.
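Putting those steps together, a typical rescue-disk session might look like this (the partition name is an assumption --- substitute your own root device; note the deliberately relative path, which runs the chroot binary from the mounted system rather than the rescue disk):

```
# boot the rescue floppy, then:
mount /dev/hda1 /mnt                 # mount the normal root filesystem
cd /mnt
usr/sbin/chroot . /sbin/ldconfig     # rebuild the library cache "inside" it
usr/sbin/chroot . /bin/sh            # or take an interactive shell there
```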

(?) Euphoria

From Greg Phillips on Tue, 12 Oct 1999

I've been using Doslinux for quite some time now, and am quite impressed. Unfortunately, Kent Robotti doesn't answer his email, it seems =)

I'm a member of a mailing list which pertains to Euphoria. No, not intense joy, but a relatively new and unknown programming language. While the number of users is small, we're very passionate about Euphoria. Recently, a Linux version was released, and many users wanted to use it. A fair chunk of them were hesitant to repartition their hard drive and install a new OS, so I recommended Doslinux. While it worked well for some users, others had trouble installing Euphoria, and other applications. This was no fault of their own: Doslinux documentation is a little bit skimpy, if you're new to Linux and don't know where to look. Being the resident Doslinux veteran, I was soon flooded with questions (How do I install X? How can I get KDE to run? How do I log in? Why doesn't this work?). So I opted to make a CD for the Eu community, with Doslinux, all the extras (gcc, X, KDE or GNOME, Euphoria, etc.) already installed. Unfortunately this proved to be a lot of work. Trying to stuff a bunch of software into a pre-made distribution was getting to be painful.

So, after some reading, experimenting, etc., I decided to create EuLinux. The same idea as Doslinux, but customized towards Euphoria users. So, here's my question: How?

I've read everything I can get my eyes on, and, as I understand it, this is how DOSLinux works in a nutshell: It uses a loopback filesystem as the root device.

To install the whole system, a ramdisk device is mounted, which is used to create an empty file of a fixed size on the dos partition. The linux system can then be copied into that empty file, which can be booted with LOADLIN.
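The scheme described above can be sketched roughly like this (the file name and size are hypothetical):

```
# on the mounted DOS partition, create an empty 100 MB file:
dd if=/dev/zero of=linux.img bs=1024k count=100
# build an ext2 filesystem inside the file (-F: it's not a block device):
mke2fs -F linux.img
# mount it through the loopback device and copy the Linux system in:
mount -o loop linux.img /mnt
```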

I know there's a lot more to that, but I hope I've got the basics correct.

Am I right? Can you point me to some documentation? Is it even worth trying?

Thank you, Greg Phillips

(!) Well, it certainly sounds like an interesting and worthwhile project. However, I might suggest a slightly different approach.
It would be nice if Debian could be installed on a FAT filesystem (sort of a blend of DOSLinux and Debian). Then you could create a Debian package (and an RPM). This would make Euphoria accessible to most Linux users with a minimum of fuss, while making DOSLinux capable of installing a very large number of well-maintained packages.
I suggest the DOSLinux/Debian merge for a couple of reasons.
First, Debian has more packages than Red Hat, S.u.S.E., etc. Many Debian packages are smaller and more focused, while Red Hat tends to put more stuff in a given package. That leads to coarser dependency granularity for Red Hat.
Also Debian has developed "virtual packages" and "alternatives" which allow for more choices without having to work around the dependency/conflict management features of its packaging system. (For example in Debian some packages depend on "MTA" which is a virtual package that can be provided by exim, sendmail, qmail, etc).
Debian packages tend to "fit together" a bit better than those from Red Hat and other RPM distributions. Debian has hundreds of volunteer maintainers. Many of those maintainers tend to more proactively patch the base sources and feed their patches "upstream" (to the program authors). They seem to have closer ties between their package maintainers and the software authors (probably because there are so many maintainers, so each can afford a bit more time on the few packages that he or she maintains).
Meanwhile Red Hat, Caldera, S.u.S.E., TurboLinux and other distribution maintainers each have a smaller number of professional developers. The various RPM distributions tend not to have compatible package dependencies and they duplicate quite a bit of the packaging effort.
Keep in mind that the core software among all of these is mostly the same. The differences show up in packaging, dependency and conflict management, and configuration tools. Debian package configuration mostly falls into the "it's ugly but it works" model --- where a package might prompt for one to five answers (with reasonable defaults). This is done basically as a simple list of "echo/read" (shell script) questions. It's not pretty, but it is elegant and minimal --- and it works better than linuxconf.
(Don't get me started about linuxconf. I've banned that from my systems until further notice!)
So, that's what I'd like to see: a DOSLinux that could be used as the base system for a Debian system. (For that matter, any improvement to the Debian base system install would be welcome. It's a really good system once you get it up --- but that first step is still a bit of a doozy.)

(?) Dell EIDE TR5 Tape Drive

From Yixin Diao on Tue, 12 Oct 1999

Hi, Jim,

I know you after I read your message "Open Letter Re: Linux on Dell Hardware". May I ask you a question? Can Dell's tape drive "10/20GB EIDE TR5 Tape Backup" work well under Linux?

Thanks! Yixin

(!) Ahh Yes! The old "open letter to Dell." Of course I can't take any credit for the fact that Dell does now offer Linux pre-installed on selected models. I don't even know if my letter ever made it to Michael Dell or anyone beyond "secretary." Ironically I work for the company that provides Linux technical support to Dell.
I haven't played with any TR5 tape drives. However, I suspect that it should work. Be sure to compile a kernel with support for IDE tape drives (it's in the menuconfig options after IDE CD-ROM drives, and IDE hard drives).
According to /usr/src/linux/Documentation/ide.txt you should be able to use /dev/ht0 to access your IDE tape drive, just as you'd use /dev/st0 to work with a SCSI tape drive.
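If it does work, you'd drive it exactly as you would a SCSI tape, only through /dev/ht0 --- for example (the backup path is just an illustration):

```
tar -cvf /dev/ht0 /home       # back up /home to the IDE tape
mt -f /dev/ht0 rewind         # rewind the tape
tar -tvf /dev/ht0             # list the archive to verify it
```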

(?) FTP Daemon: Special Requirements

From William Dawson on Tue, 12 Oct 1999

Hello, Mr. Dennis.

I'm in desperate need of help...I'm down to only one functioning nerve...

After much brain-strain and (I don't mind saying) heavy persistence, I finally found the right man pages to tell me how to change the port that ftp listens on, and other things of the like. My problem now is that I can't seem to figure out how to set the maximum number of simultaneous ftp users (limit <class> <n> <times> <message_file> seems not to work in the ftpaccess file... the <times> format is of particular concern). Also, how do I limit users to only one ftp connection (login) at a time?

If you can help me out with any of this, I would be eternally grateful.

(!) Are you sure you're running WU-ftpd? Other FTP daemons don't necessarily read the /etc/ftpaccess file.
If you're using the BSD ftpd (possibly ported from OpenBSD) then it would ignore /etc/ftpaccess (unless they changed it). If you're using ProFTPd, BeroFTPd, or ncftpd then you'd use different files to configure each of them.
Also, are you SURE you want to change the port on which you are running your FTP daemon? I could see cases where you might want your ftpd to selectively bind to some interfaces or IP aliases and to ignore others (which means that you can support FTP virtual hosting, among other things). However, running it on a different port seems like a bad idea, since many FTP clients (especially those in MS Windows) don't offer options to connect to non-standard ports.
The limit directive in WU-FTPD doesn't give you a way to limit the number of concurrent connections PER USER. (At least I don't remember such a thing). It's intended to limit the total number of connections for each "class" of users (mostly to ensure that anonymous users don't bog the machine down so much that your own employees, students, etc. can't access the system).
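For reference, a working class/limit pair in WU-FTPD's /etc/ftpaccess might look like this (the class name and message path are hypothetical; "Any" means the limit applies at all times, which sidesteps the <times> format entirely):

```
# define a class of anonymous users connecting from any host
class   anonusers  anonymous  *
# allow at most 10 concurrent anonusers sessions at any time of day;
# the 11th connection is shown /etc/msgs/msg.toomany and refused
limit   anonusers  10  Any  /etc/msgs/msg.toomany
```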
I think you might want to look at the documentation for the ProFTPd and/or the ncftpd packages before you fight too much with your current FTP daemon. Please note that ProFTPd has been hit with a couple of security exploits recently --- so make sure you get the most recent version (with bug fixes) and you watch the 'net for alerts. There may be more bugs waiting to be discovered in this package. (Of course that's true of every software package. But some have a better reputation than others; and right now WU-FTPD and ProFTPd are at the high end of that reputation scale.)
ncftpd is not free software. However, Mike Gleason, its author, has a good reputation in the open source community. His ncftp CLIENT is free and is one of the best.
You can find these at:
NcFTP Software
http://www.ncftp.com
ProFTPd
http://www.proftpd.org
WU-ftpd
http://www.wu-ftpd.org
BeroFTPd
http://apps.freshmeat.net/download/895008043
pftpd
http://apps.freshmeat.net/download/918313631
I tossed in links to a couple of other FTP daemons that might be interesting.

(?) login, su, and passwd dies: Everybody dies!

From Tim Kientzle on Mon, 11 Oct 1999

I'm having a hard time finding good technical archives for Linux. I've been searching 'linux login' trying to find someone else who's had this problem; I stumbled across a couple of your columns in the process.

In any case, I have a useful factlet for you regarding login problems: the KDE 'add user' program by default does not assign a login shell. The result is accounts that cannot be logged into.

For myself, I have a RedHat 6.0 machine that does not accept any logins, not even root. It never gets to a password prompt. This is true of getty, telnet, ftp, and KDM logins. I brought the machine up in single-user mode, and found that 'su' and 'passwd' both crash, also. It looks as though any program that needs to touch the password file simply dies at that point. (Note that 'su' does not crash if you use it to go from root to a non-root user.)

Oddly enough, this problem arose spontaneously; the machine has worked fine (about 8 functioning accounts) for a week. I came in this morning, and tried to telnet into that machine; the connection dropped at the telnet prompt. I rebooted the machine (which required hitting the big red button, since I can't login as root to safely shut down) and then KDM won't come up; it seems to crash the X server. No useful messages to /var/log/messages, just a machine that can't even be reasonably debugged.

I'm almost ready to drop RedHat and go with FreeBSD instead. <sigh>

- Tim Kientzle

(!) It sounds like massive corruption to your libraries or to your /etc/passwd file. I'd start by booting from a rescue diskette (as you've already done once) and editing the /etc/passwd and /etc/group files.
If those look all right (compare their formatting to the ones on your rescue floppy) then check the /etc/shadow file. You could just rename the /etc/shadow file and try the 'passwd' command again.
(The libraries use the existence of an /etc/shadow file as a hint to use shadow passwords).
Another thing to try is the following commands:
cd /mnt && usr/sbin/chroot . /bin/sh
rpm -Va
(or, if you have a copy of the 'rpm' command on your rescue diskette you can use: 'rpm --root /mnt -Va')
This rpm command will check the integrity of the system files, libraries and binaries, and the ownership, modes, and other particulars for every file that is included in any package you installed.
You can use the 'rpm -qf' command to match the filenames that the verification reports to the package names that you might need to re-install.
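If you can't trust the rpm binary itself (a doctored rpm is exactly what a thorough rootkit would install), the same integrity-check idea can be improvised with plain checksums run from your rescue media. This is a sketch, not the rpm method itself, and the paths below are just examples:

```shell
# record checksums of a few critical binaries while they're known good
md5sum /bin/sh /bin/ls > /tmp/baseline.md5
# ... later, verify that none of them have changed;
# md5sum -c prints "<file>: OK" for each intact file
md5sum -c /tmp/baseline.md5
```

Keep the baseline file on write-protected media, of course, or an intruder can simply regenerate it.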
Of course you could re-install from scratch. However, it would be very unsettling to me to do that with no understanding of what went wrong. I suspect that you feel the same; how can you trust a system that "did that" all of a sudden?
One possibility is that your system was cracked (incompetently). The programs you mention are some of the same ones that are replaced by most "rootkits." If some script kiddie broke into your system and tried to install a rootkit that was pre-compiled for libc5 then I could see it giving symptoms just like those you've described.
The 'rpm -Va' command might uncover this. Script kiddies and their canned cracker tools don't seem to have added "rpm dbm patchers" YET. However, it's only a matter of time before they do.
In a past issue(*) I've described more robust techniques for using write-protected boot floppies, your original CD, the rpm command and some shell utilities to perform a system integrity audit on a Red Hat or other RPM based system. That article was long-winded since I explained the method in some detail.
You might also try Debian. I'm slowly switching my home systems and laptop over to it. I just got my new personal workstation up and running this weekend, got my home directory tree (700Mb, mostly of e-mail archives) transferred, and have spent all day answering e-mail on it.
Let me give an example of what I like about Debian.
I chose a minimal installation to start with (about 100Mb total). I upgraded it from the CD version to the latest version of everything in "unstable" by editing one file (/etc/apt/sources.list) and issuing one command (apt-get update && apt-get dist-upgrade -f). It sucked down 120 packages from the 'net and put them all in place.
Then I installed a curses package selection tool (console-apt) and used that to select a few other programs that I wanted (things like xemacs). apt-get and console-apt use the same selection database, and use the dpkg command under the hood. They automatically get any libraries and ancillary packages to resolve dependencies.
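That one-file edit amounts to pointing /etc/apt/sources.list at a mirror; a minimal sketch (the mirror URL here is just an example):

```
# /etc/apt/sources.list -- pull the "unstable" distribution from a mirror
deb http://ftp.debian.org/debian unstable main contrib non-free
```

After that, 'apt-get update && apt-get dist-upgrade -f' does the rest.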
Later, after I'd transferred all of my home directory files, I started working the way I prefer (running xemacs, in viper-mode under 'screen'). At some point I typed a word that looked "wrong" and I hit [F3][$] (my personal xemacs macro for spell-checking the word under my cursor). Ooops, ispell wasn't installed!
So I switch to another VC, type:
apt-get install ispell iamerican
... wait about 30 seconds and go back to my editing. My spell check now works.
Another time a few weeks ago I was answering a question about LPRng. I logged into my other Debian system (a little file and mail server, antares), installed the package, read the doc page I was looking for, and removed it. That was faster than hunting for it in Yahoo!
That's what I like about Debian. I've been doing that sort of thing for months on my machine at the office.
In fairness to Red Hat and its ilk, the 'rpmfind' command (http://rufus.w3.org) makes RPMs almost as easy to manage. However, Debian does have a lot more packages and apt-get seems more stable than rpmfind.
So far the Debian apt-get facility is the only one that I've ever trusted with automatic system upgrades. I've been running regular dist-upgrades on my box at work for months --- on the UNSTABLE development series, the betas --- and I haven't broken my system yet (sometimes one or two packages get a bit messed up; but nothing that's caused real problems).
Meanwhile, Debian is not for the UNIX novice. Most users would not know that xemacs and emacs call on a program named 'ispell' and most wouldn't know that the various dictionary/wordlist files for ispell are named like iamerican, ibritish, etc. While Debian has more packages, part of this is because they split them to a finer granularity. They don't put an IMAP daemon and a POP daemon in the same package. Also, many of the Debian packages are alternatives or are fairly obscure.
Of course the FreeBSD "ports" system is also pretty good. I'd still like to see something like it for Linux. (It gets the prize for most comprehensive use of the 'make' command).
Anyway, I hope you track down the problem. If it does turn out to be a cracker, be sure to get the latest security fixes when you re-install. There have been a number of bugs with wu-ftpd and ProFTPd recently and they are being actively exploited. (Fixes for the known bugs are at the Red Hat updates site, and at the archive sites for most other Linux distributions.)
As usual, you should disable any services that you don't absolutely need to run, limit access to your non-anonymous services (using TCP Wrappers and/or ipchains) wherever that's possible, replace 'telnet' support with 'ssh', 'ssltelnet' or 'STEL', use shadow passwords, etc.
Read the Security-HOWTO (http://www.linuxdoc.org/HOWTO/Security-HOWTO.html) and the Linux Administrator's Security Guide (http://www.seifried.org/lasg) for lots of details on that.

(?) Files invisible via Telnet?

From Norris on Mon, 11 Oct 1999

First off, thank you for a good site, a real valuable source on all those little (and not so little) problems you run into :)

Second, I'm not sure if I should go and ask questions directly to you, but it seems others have done so; if I am in error, please just ignore me :)

(!) You did it right: mail to the . It's a shame that so many people still send their questions to (which is the same mailbox) because I can auto-sort LG mail from personal mail. However, that just comes with the territory.

(?) The Problem:

I have allowed telnet access to my server, and I can login fine, I can su and so forth, but a lot of files just don't show up via telnet? I have messed with many combinations of file permissions, but that doesn't seem to do it... any help would be appreciated, thank you :)

(!) Linux and UNIX have no mechanism for selectively "hiding" files based on the protocol through which you've logged in. Unless you are running your telnet daemon or inetd in a "chroot jail" (a fairly obscure custom configuration) then you should be able to see everything.
Now, if you mean that some files don't appear to be readable through your terminal (you do an 'ls' and see "blanks" where there should be files) then that's another matter entirely. Try using the command:
ls --color=never
... if your files are suddenly visible then you're just suffering from a bad terminal type (TERM environment variable) setting.
You could also try the commands:
export TERM=dumb
find . | less
... (assuming you're using a Bourne compatible shell -- like the default 'bash'). This should show you all your files.
The 'ls --color' trick has to do with the ncurses color support in the GNU version of 'ls'. Some distributions set a system default alias like: alias ls='ls --color=auto' so that you'll see your files in pretty colors when using a color capable terminal (such as the console, or a color xterm). If your TERM environment variable is set correctly, and your terminfo libraries and data files are installed correctly then this alias should automatically disable the use of color when necessary.
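If you suspect that's what is happening, here's a quick way to test the theory (the directory and file name below are throwaway examples):

```shell
# create a scratch directory with a single file in it
mkdir -p /tmp/lstest
touch /tmp/lstest/afile
# with color forced off, the listing contains no escape sequences,
# so the filename shows up on even the most broken terminal
ls --color=never /tmp/lstest
```

If the file appears with --color=never but "vanishes" with your normal ls alias, it's your TERM setting, not your filesystem.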
Unfortunately the MS Windows version of TELNET.EXE is pretty wretched. It defaults to settings which do not implement enough VT100, VT220 or VT330 support to be usable for most Linux tasks. You're usually better off forcing that telnet client to use the older, less powerful VT52 mode. You'd be MUCH better off to replace it with a good telnet client, such as K'95 (the Win '95 port of Columbia University's Kermit package).
Of course I'm just guessing at what you mean by "invisible." It's possible that you are in a "chroot" jail (in which case you wouldn't see ANY files and directories "above" a certain point in the real directory tree). This would probably be quite confusing to you since you'd have to have duplicate /etc, /dev, and /usr/bin directory structures and filesets.
I've set up systems like that (in fact I've just configured my new personal workstation in this way this weekend). However, it's a relatively obscure and advanced technique that almost no one uses. You couldn't have set that up "accidentally" or without knowing about it.
So I'm assuming the more likely interpretation of your question. (I usually have to do that in this business).

(?) sendmail Masquerading, Configuration, and User Masquerading Revisited

From Willy Lee on Mon, 11 Oct 1999

Hi Mr. Dennis,

The information in your September 1999 column regarding fiddling with sendmail configuration was very informative, and very welcome! I knew that sendmail looked at sendmail.cf, and that you were supposed to run a file through m4 to generate sendmail.cf, but I didn't know how to do it; now I at least have some idea.

Unfortunately Netscape rendered the backquotes rather oddly, so I labored for about an hour under the mistaken impression that they were pairs of single quotes, instead of backquote/quote pairs.

(!) Sorry. I'll try to remember to mention that next time I use unusual syntax such as m4's.

(?) You said, "For this to work smoothly you should create an account on your system that matches your account name on your ISP, and use that to work with your e-mail from that address." Unfortunately my desired username was already taken at my ISP, so my mail address is 'willy2' instead of 'willy', which is my account on my machine at home.

I use Gnus from within Emacs as my MUA and I've told it my correct mail address. I believe it sets the Reply-to field, so that should be ok, shouldn't it?

Well, if I get a reply from you, then I'll know that my Gnus/sendmail setup is working :)

(!) Your method seems to be working. Under some systems you can't easily set the return addresses from your MUA. You can have sendmail rewrite the username portion of the addresses by using the genericstable FEATURE.
I didn't want to get into that last time I was talking about this since it just adds a bit more complication. The original correspondent was probably having enough trouble digesting all that I said as it was.

(?) Thanks for all your columns!

(!) You're welcome. Glad I could help.

(?) General S. Fault

From dlancas on Mon, 11 Oct 1999

Along the lines of "wha'appen?", I started using KDE the other day, and klicked on my kcalc and waited... and waited.... and then I klicked again just for the heck of it. When I tried to start the calculator from the terminal, I got the wonderful segmentation fault. This happened previously with xosview which was working great then stopped working one day. I recently upgraded KDE to the newest stable version (1.1.2) and noticed that along with the calculator, some other programs are also giving me segmentation faults (such as the KDE text editor).

Just who is segmentation and why is my computer blaming him for screwing up my system? I looked in numerous books, etc, and can not find any mention of this. Thanks in advance.

Doug Lancaster www.lancompworx.com

(!) Maybe "Segmentation" is General Protection Fault's middle name ;-)
Actually a segmentation fault is Unix technese for: "that program stepped out of bounds!" It is similar to the old MS Windows "General Protection Fault" or the old QEMM "Exception 13" etc (except that segfaults don't make Linux kernels unstable and don't indicate that we should hastily save all our work and reboot "before it's too late!").
The way that Linux keeps user programs from interfering with system operations, and with other user programs, is through memory management.
When a program is loaded it is allocated some memory for its binary executable code (ironically called "text"), its static data segments (initialized data and the zero-filled "BSS"), and its stack and heap (dynamic data). If the program needs more memory it must request it through a function like malloc() (which in turn uses the brk() or mmap() system calls). If a program ever tries to access memory outside of its segments it is stopped by the processor (on x86 and some other platforms) or by a separate hardware MMU (on some other CPU architectures). This processor or MMU "exception" (or "trap" as it's known on some architectures) then calls on a kernel exception handler, which summarily kills the program, (optionally) dumps a post mortem debugging file, usually named 'core' (and consequently known as a "core file"), and signals the program's parent process (with a SIGCHLD).
When the parent process is the shell, it prints a message like the one you describe: "segmentation fault, core dumped"
So, probably the programs in question have bugs in them. Perhaps they swallowed a corrupt bit of data that the programmer wasn't quite clever enough to foresee in his parsers. Possibly some of the libraries on which these programs depend have changed or differ from the versions that the programmer was using.
Of course there are other possibilities. Perhaps the binaries or some of their libraries have been corrupted on the disk. Possibly you have some bad memory chips (possibly some RAM that only misbehaves under specific conditions, such as when you are running the video card in specific resolutions). Those sorts of problems usually give other symptoms, like the infamous "Sig 11" problems (http://www.bitwizard.nl/sig11).
I'd definitely shutdown and force a full fsck of all your filesystems ASAP. If that doesn't report any problems then it probably is just the software.
Probably the latest "stable" KDE is not as stable as your previous version. Maybe you need to upgrade some libraries to go with the new release.
Good luck. You might try downgrading back to the older version to see if that clears it up.

(?) Linux Workstations Behind a Proxy/Firewall

From anil kumar on Mon, 11 Oct 1999

Hi Jim,

This is Anil from India. I saw your letter on the Red Hat site and wanted some details on how to access the Internet from my Linux box. I dual boot it with my NT. Now, I am behind a proxy (MS Proxy & firewall) and my IP address has been given permission to access the Internet. I access it in the usual way from NT, but when I boot through Linux I don't get any option to configure the proxy server. Does the name resolution request go to the DNS configured in our local network first and, upon not resolving, to the next higher level, that is, the local ISP DNS? But I have configured my Linux box for the DNS. Now how do I configure my Linux to access the net? I would appreciate it if you would throw some light on it.

Thanx in advance, Anil.

(!) You're probably expected to use SOCKS clients. Most proxying firewalls conform to the NEC SOCKS proxy traversal protocol (a standard way for client programs to contact a proxy and request a service).
The normal Linux client programs (telnet, ftp, etc.) are not "SOCKSified" (linked to library functions which check for proxying). So you want to install the socks-clients RPM package.
You can find a copy of that at:
socks5-clients-1.0r6-1.i386 RPM
http://rufus.w3.org/linux/RPM/turbolinux/3.0/RPMS/socks5-clients-1.0r6-1.i386.html
It will replace most of your network client software utilities. You'll then have to edit the /etc/libsocks5.conf. One of mine looks like:
socks5          -       -       -            -          192.168.1.5
noproxy         -       192.168.1.           -
noproxy         -       123.45.67.0/255.255.255.240          -
Creating this file is the hardest part of using the SOCKS client RPM. You have to put in your SOCKS proxy server at the end of that first line. That's an IP address. Then you can put IP address patterns on your noproxy line(s). I have set a noproxy for one RFC1918 address block, and one (sanitized) "real" address block with a netmask. This would be a typical arrangement if there were a block of servers on our DMZ (Internet exposed network segment) that were directly accessible from my station. In many other cases you wouldn't have that 3rd line; you'd go through the proxy to get to your DMZ, too.
The programs provided by this RPM will all read the /etc/libsocks5.conf file automatically. There is also a shared library which can be used to "socksify" many "normal" TCP programs. In particular, under Linux it's possible to over-ride the normal shared library (DLL) loading sequence, forcing a program to preload (via the LD_PRELOAD environment variable) a custom dynamic library. Thus with a short wrapper script (described in the documentation of this package) it's possible to redefine how a program implements some library calls without recompiling the package.
Of course these libraries can also be used explicitly (by linking programs to them). This obviates the need for LD_PRELOAD shenanigans. Personally I haven't used this "socksify" technique.
Some programs (like ncftp) might have to be replaced separately. In some cases you'll have to fetch the sources and compile programs with non-default options. In other cases, like Netscape Navigator, you'll want to just configure them (under Navigator and Communicator look for "Edit, Preferences, Advanced, Proxying" and fill in the dialog box).
Some software and some protocols will not work through SOCKS proxying or will have to be patched to do so. (Some of the Pointcast, RealAudio, CU-See-Me, and other protocols don't support SOCKS, or require proprietary proxying packages in order to traverse your firewall).
The canonical site for information about SOCKS is:
SOCKS Proxy Protocol
http://www.socks.nec.com
In particular you'll want to read the Socks FAQ (http://www.socks.nec.com/socksfaq.html)
You probably don't need a SOCKS server (you've already got one); you just need the client software for the protocols you plan to use through this firewall.
However, I'll provide pointers to some server software for other readers. You can download NEC's SOCKS software for Linux (in source form) from the web site listed above; just be sure to read the license on that before using or distributing it.
In addition to the NEC SOCKS implementation, Linux supports a couple of alternative SOCKS servers (NEC's SOCKS is not under GPL or BSD and it's not fully "free" software).
One that I've used is DeleGate (http://wall.etl.go.jp/delegate/). Another that I've read about but never used is Dante (http://www.inet.no/dante/). DeleGate and Dante are free under the BSD license.
One thing I like about DeleGate in particular is that it's possible to manually traverse it. In other words, if you have a favorite ftp or telnet client that doesn't know how to talk the SOCKS protocol, you can manually connect to DeleGate, type some magic commands at it, and it will then open up the same sort of connection that the SOCKsified client would have. (This is like the FWTK and other manually traversed proxy systems).
There are a number of other firewall and proxying packages available for Linux.

(?) LILO Stops at LI

From Peter.Oliver2 on Mon, 11 Oct 1999

James,

I hope that it's OK to email you here!

I recently installed OpenLinux v2.2. After installation, everything seemed to work fine, but when I restart, LILO starts to boot, prints the first two letters, LI, and then everything locks up. None of the keys work and the only way round is a hardware reset.

I used a RedHat 6.0 boot floppy which gets OpenLinux 2.2 up and running. When finally back in Linux I ran fdisk. fdisk reported that my hard drive has 1027 cylinders and warned that more than 1024 can cause problems with some programs such as LILO.

Do you think that this is the cause of the LILO boot problem?

In addition, when booting from the floppy disk OpenLinux 2.2 does not work properly, e.g. automount for the floppy disk and CD-ROM does not work.

If you get around to having a look at this I would appreciate it very much!

Thanks Peter Oliver

(!) This symptom is one of those that I would expect from trying to load a kernel image or boot map that is inaccessible to your BIOS.
However, I would expect a boot floppy to work, so long as your kernel is on the floppy (or CD-ROM) and it supports the controller to which your hard disk is attached. Of course it's possible to create a floppy which just has a LILO primary boot block on it (with no kernel or boot map files). That's pretty rare, so I wouldn't have expected you to make one of those (none of the common distributions would have set it up that way).
If you have a copy of MS-DOS/Win '9x on that drive, I'd use LOADLIN.EXE to run Linux. I've described it many times before so do a search on the term LOADLIN and you'll find explanations of how that works. Basically it lets you load Linux from a DOS command prompt.
Allegedly there was an effort to write a version of LOADLIN to run from the NT CMD.EXE console (a Win32s version). However, I've never seen that.
So, if you have NT installed, and you don't want to repartition the first portion of this drive (to make a small Linux /boot partition), and you don't have MS-DOS installed (so you can run LOADLIN.EXE), then you'll want to use a floppy (with a properly configured kernel on it) to boot Linux. Considering how rarely most of us boot Linux this is not much of a hardship. Note: for your purposes you only need the kernel on the floppy. You don't need a root filesystem (like you'd find on a Tom's Root/Boot or some other rescue floppy).
You can put a kernel on an MS-DOS formatted floppy, and run syslinux on that. This lets you select kernel options and pass command line parameters to the kernel (like LILO). You could also put an ext2 or minix filesystem on a floppy, copy your kernel to that and use /sbin/lilo on that. That too will let you configure various parameters, prompts, boot passwords, etc. You can even 'dd' a kernel directly on to a floppy (with no filesystem on it). This will still boot --- but it won't give you any prompt, so you can't pass command line options to the kernel or through it to init. That makes booting into single user mode (or any non default mode) rather difficult, so I don't recommend it.
Of course it's much easier if you just re-partition the drive. You can create a small Linux partition that's below the infamous 1024 cylinder boundary. That should be about 16 to 32 megabytes. You'd configure that as /boot (a separate small filesystem mounted off of the rootfs). You can use Linux' fdisk to put that partition near the beginning of the drive, while putting your other Linux partitions near the end.
The only hard part in that is getting NT, MS-DOS, or other operating systems to work around it. If you make a small C: drive under your MS OSes then you can still fit /boot in under the 1024 cylinder mark.
Another approach is to install a second drive. A PC can boot from the first or second hard disk on the primary controller. (If you have CD-ROMs, an IDE tape drive, a DVD drive, etc., put them on the secondary and tertiary controllers).
Yet another approach is to use a commercial utility like Partition Magic, or a different freeware boot loader like GRUB (http://www.gnu.org/software/grub), although the FSF apparently still considers GRUB to be in "alpha" (as in "unreleased, unstable, not ready for prime time").
I hope you can see that the problem is not with Linux here. You're fighting against the legacy of the PC architecture, which has never handled multi-booting gracefully and was never designed for hard drives of over 1024 cylinders. The first PC hard drives were 5 and 10 Mb. The early IDE drives were limited to 540Mb, which was extended to 8Gb through LBA "translation" (a technique for having the BIOS combine the traditional cylinder/head/sector (CHS) co-ordinates into linear block addresses (LBA), which are then converted back into real sector locations by the drive's electronics). Many BIOSes still need to use CHS addresses to boot (thus the 1024 cylinder limit, which is 10 bits out of the 16-bit value that stores cylinder and sector).
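The arithmetic behind that 8Gb translation ceiling is easy to check: 1024 cylinders times 256 heads times 63 sectors of 512 bytes each:

```shell
# 1024 cylinders x 256 heads x 63 sectors/track x 512 bytes/sector
echo $(( 1024 * 256 * 63 * 512 ))   # 8455716864 bytes -- the famous "8Gb" BIOS limit
```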
Now we have UltraDMA drives of 15, 20, and 30 Gigabytes and some BIOSes can't handle them. The MBR only has room for about 446 bytes of program loader code (the other 66 bytes are the 64-byte partition table and a two-byte signature that tells fdisk this isn't a blank drive). Thus the need to create small "partition manager" or "boot manager" partitions to provide room for more software, to work around these firmware limitations.
I'll be so glad to see the demise of PCs!

(?) Protocols on top of Protocols: It's Protocols ALL THE WAY DOWN!

From Alicia Romero on Mon, 11 Oct 1999

Hello my name is Alicia; I'm a student looking for help

I have a networking class and there is one thing I don't get. The question is: what protocol is typically used by UNIX to connect to a network using TCP/IP?

Can you help me ??

(!) It sounds like you are underestimating how much you don't get.
TCP/IP IS a set of networking protocols!
The question you ask answers itself. UNIX uses the TCP/IP suite of protocols for almost all of its networking. IP (Internet Protocol) is the lower portion of the suite. TCP (Transmission Control Protocol), UDP (User Datagram Protocol), ICMP (Internet Control Message Protocol), and other protocols work over IP.
IP packets have source and destination IP addresses. TCP packets add source and destination ports, sequence numbers, and options/flags to support flow control, acknowledgement and handshaking. UDP packet headers lack some of the features of TCP packets, so they are different variations of an IP packet. ICMP packets (which are used by the 'ping' and some versions of the 'traceroute' commands) have headers that are different from UDP and TCP.
In addition to TCP, UDP, and ICMP there are also some other protocols that ride directly over IP (for example GRE, a routing encapsulation protocol).
Other (applications level) protocols are built over TCP and UDP. (ICMP is used for very specific operations, so protocols aren't generally built over that)(*).
So, protocols like telnet, HTTP, and FTP are implemented over TCP while protocols like SNMP, syslog and FSP(*) are implemented over UDP.
Some services use UDP and TCP. For example SMB uses hybrid protocols over both. DNS uses UDP for normal name resolution and uses TCP for "zone transfers" (updating secondary authority servers).
So, you have applications protocols over transport protocols. Under the IP layer you have link layer protocols like Ethernet's CSMA/CD, token ring, ARCnet, etc. Under that you have media layer (physical) protocols which describe the wires, fibres, voltages, frequencies and modulation parameters of the signals that actually carry all of these protocols.
So, your question is a bit confusing. It's like asking:
What driver does a bus driver use to drive a bus?
UNIX and Linux predominantly use TCP/IP for most of their applications protocols. Connecting UNIX to a network involves running many protocols over the TCP/IP suite.
It's worth noting that Linux and some other forms of UNIX also offer support for some other transport protocols like Novell's IPX/SPX, Apple's DDP and DEC's DECnet (Pathworks) protocols.
All of this material should have been covered in the first day of any decent computer networking class. (Except for the references to FSP, and those mythical/apocryphal "stealth" protocols, of course).
Consider taking a better course, getting better text books to study on your own, or something --- because it sounds like this one is just not doing it for you.

(?) Uninstalling Linux

From Victor Turner on Mon, 11 Oct 1999

Hi - hope you can help me,

I found and read your answer to how to uninstall Linux [dated 18th Dec 1998, from Tom Monaghan] - which related to uninstalling Red Hat Linux 5.2 - and thinking that I could use the info you gave in the answer to remove & uninstall my version of Linux, which is Caldera OpenLinux 2.2, I followed the instructions . .

but as a complete moron when it comes to computers I somehow failed to obtain the results expected and Linux won't go away! I have it installed on my Olivetti Echos 133EM laptop PC which has a 1.6gig hard drive partitioned [during the Linux install] into 2 partitions, 1 of 92% & the other of 8%

There is 80 meg RAM, a CD-ROM and a floppy drive. The Caldera OpenLinux 2.2 installer used a built-in version of Partition Magic to partition the hard drive during installation.

I SHOULD have waited until I obtained a second hard drive for my desktop PC, where I could 'play' with Linux and learn to my heart's content whilst still having my laptop [which used to have Win95 for work related stuff] - but I was not patient and now seem to be suffering for it, as I can not find out any information [other than your answer to Tom's question] - which was for a different breed of Linux.

Can you help me PLEASE !
Yours in anticipation,

(!) There is nothing special about the suggestions I offered. It doesn't matter what Linux distribution (or even what operating system) you're trying to remove. The process boils down to:
1) Make (and test) a backup of anything you want to keep.
2) Boot from an MS-DOS boot floppy.
3) Use the DOS FDISK to delete the Linux partitions.
4) Run FDISK /MBR to put a standard boot loader back into the master boot record.
This last step makes sense for versions of MS-DOS after version 5.x. Win '9x is really still just MS-DOS with a GUI glued over the front of it. So most of this should apply to it as well, when you manage to get in behind the GUI. Alternatively you should be able to use the command:
lilo -U
... BEFORE you delete your Linux partitions.
When you first install Linux using LILO, the /sbin/lilo command will create a /boot/boot.0XXX file, a backup of the MBR that it originally found on the system. The -U and -u options of the /sbin/lilo command will put that backup copy back into place (if it exists and with some sanity checks).
Note: The lilo -U command is basically the same as using a command like:
dd if=/boot/boot.0XXX of=/dev/hdX bs=446 count=1
... though it should be a tiny bit safer. In any case I highly recommend performing a backup of your system before doing any software installations, upgrades, or removals.
Incidentally, if you really want to wipe data from those Linux filesystems before you remove the partitions you can use a command like:
dd if=/dev/zero of=/dev/hdXXX
... to overwrite a whole partition or drive with ASCII NULs. Note: if you do that with of= (output file) /dev/sda you'll wipe your entire first SCSI hard drive. If you do it with /dev/sda1 you'll only munge the first partition on that drive. If you use /dev/hdb3 you'll destroy the number three primary partition on your second IDE drive, /dev/hda6 will get the second LOGICAL partition on your first IDE drive, and so on.
If that doesn't scare you then you didn't read it carefully enough. You can wipe out whole drives or individual partitions using the 'dd' command. It should be treated like a six-foot razor!
Also note that you really want to do these operations while booted from a rescue floppy. Otherwise you'll destroy shared libraries, swap partitions, and other data that Linux needs to complete your wipes.
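If you want to rehearse the 'dd' incantation before aiming it at a real device, you can point the exact same command at an ordinary scratch file first (the /tmp path here is just an example):

```shell
# Write two 512-byte blocks of NULs to a scratch file; with
# of=/dev/hdXXX this same command would wipe a real partition.
dd if=/dev/zero of=/tmp/wipe-demo bs=512 count=2 2>/dev/null

# Confirm that exactly 1024 NUL bytes were written.
wc -c < /tmp/wipe-demo
```

Once you trust your reading of the device names (check them against 'fdisk -l' output), substitute the real /dev node.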
In the worst case, make a backup of your entire DOS/Win '9x system. Test it. Then boot from a rescue floppy, zero out the whole drive, re-install MS Windows from scratch and restore your data. I know that's inconvenient and scary.
It's the shame of the whole computer industry. Backups and restores are slow, expensive, unreliable and instill little confidence in most users. That problem is hardly unique to Linux, or PCs; I hear that from all sorts of users.
(My preferred means of doing software and OS upgrades is to perform the installation on a new system, then copy my data and configuration over. I just did that to canopus this weekend, so I'm writing from canopus2. In a week or so I'll wipe out the old canopus and rename my new host to use the old name).

(?) Who is Jim Dennis?

From SeanieDude on Mon, 11 Oct 1999

hey why did it take you so long to respond i don't even know what i searched for anymore

(!) I've been busy.

(?) anyway go to my page at http://go.to/3d~now

(!) Yuck! Looks really ugly in Lynx. Of course you seem to be selling banner ad graphics design services and you seem to want frames and require JavaScript to view your site. So Lynx users wouldn't be interested anyway.

(?) Sean


(?) FoxPro 2.0 (SCO) Running Under Linux: Try Flagship?

From Joseph Gazarik on Mon, 11 Oct 1999

Dear James,

Hello, my name is Joe Gazarik. I am a software support specialist

with Signal Software in Pittsburgh, PA. Signal produces accounting software for the automotive aftermarket. Our current product, TireWorks Gold, is FoxPro 2.6 based running on SCO. I have been commissioned to get that FoxPro 2.6 SCO program working in Linux. I would appreciate any and all help that you could lend me. Do you have any suggestions in getting FoxPro 2.6 runtime for SCO Unix running on Linux? Thanks in advance for your assistance!

Best Regards, Joe Gazarik

(!) Well, I don't know of a Linux port of FoxPro. It seems unlikely that we'd see one any time soon since Fox Software was acquired by Microsoft.
In general you can run SCO compatible software under Linux by using the iBCS libraries. For that I'd run Caldera's OpenLinux since they provide a kernel with iBCS patches already applied.
For FoxPro specifically you might find that they rely on some of SCO's proprietary libraries. You'd have to copy those onto your Linux system from your SCO system --- and you'd have to watch the licensing issues that relate to that. Obviously that might cause some serious problems for your software distribution plans.
You could port your software to Flagship (http://www.wgs.com/fsad.html). That is a package from WorkGroup Solutions which provides a set of programming tools for compiling dBase code (with Fox, Clipper and other xBase extensions).
You can then create and distribute standalone native applications using your existing xBase sources. The resulting programs are royalty free and should run on Linux systems without any need for iBCS or any proprietary libraries.
For a commercial software developer this sounds like it would be your best bet.
In addition, it looks like Flagship offers modules to support connections to SQL backend databases. This could be useful if you develop client/server versions of your package in the future.

(?) Coping with Bad Sectors

From excess6 on Mon, 11 Oct 1999

Hi, i found out my 4month old quantum hdd has some bad clusters, is there a way i can fix it? what happed was i turned off ym poota in windows and later on i turned it on and it said data error reading c: i did scandisk surface scan and it found 1 of my sectors was bad, i rebooted later and windows is working but is there anything i can do to like isolate the sector so i dont put inportant stuff on that sector or something. -cheers excess

(!) I would hope that SCANDISK.EXE would mark bad sectors so they would no longer be used. If not, get a copy of Norton Utilities or download some shareware utilities for MS-DOS. There used to be a few utilities with names like MARKBAD.COM that would mark clusters as bad under MS-DOS. (Windows '9x filesystems are mostly the same as MS-DOS FAT with some hackery involving extra "volume labels" to get long filename support. MS should have called their new variant KFAT -- for "kludge" rather than VFAT).
Of course if you were using Linux you could just use 'e2fsck -c' to check your filesystems, test them for bad blocks and automatically assign any bad blocks to a special system list (thus preventing them from ever being accessed by anything else).
When you create new filesystems under Linux, you should also use the -c option to mkfs (or check the appropriate box/option in any GUI/dialog that you happen to be using).
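Both of those steps can be rehearsed without touching a real disk by putting an ext2 filesystem inside an ordinary file (the path and size below are only examples; a real run would name a /dev node instead):

```shell
# Create a 4MB scratch file to stand in for a partition.
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=4096 2>/dev/null

# Make an ext2 filesystem on it; -F forces mke2fs to accept a plain
# file.  Adding -c here would scan for bad blocks during creation.
mke2fs -F -q /tmp/demo.img

# Check it; on a real device, 'e2fsck -c' would also run a read-only
# bad block scan and record any hits in the bad block inode.
e2fsck -f -p /tmp/demo.img
```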

(?) Homework Assignment: Write about Linux Memory Management

From Rhymer,Robert on Mon, 11 Oct 1999

I was wondering if you knew a site where I can get a lot of information on Linux's memory management. I am doing a report on it and need as much information as I can get. Websites, books, etc., any info will help. Thank you for your time.

(!) How about the source code? That would be the most authoritative resource on the topic. Look through the sources in /usr/src/linux/mm for starters.
If you look at my "LDP by Topics" page and search on "programming" you'll find a list of all of the LDP HOWTOs and guides that relate to Linux programming. In particular, note the ones that refer to the "kernel." Those might be a good introduction and help you get your bearings.
There is also "The Linux Programmer's BouncePoint" at http://www.ee.mu.oz.au/linux/programming and the Linux Programming page at the #LinuxOS (IRC channel) home page: http://www.linuxos.org/Lprogram.html Another site is Rik van Riel's Linux Memory Management page at http://www.linux.eu.org/Linux-MM (though this is really pretty preliminary -- it has links to related topics. Rik apparently started this, then got side-tracked to write a performance tuning guide -- an understandable priority, considering things like the Mindcraft fiasco).

(?) HP with LT Winmodem

From Bob Gregg on Mon, 11 Oct 1999

I have just purchased an HP computer with an LT Winmodem. Despite loading the new and latest drivers etc. I can only connect at 28,800 to my ISP. I have tried other ISPs as well, spoken to HP at length, and to no avail. Any suggestions?

Bob Gregg

(!) It sounds like you're not running Linux. So my first suggestion is: don't send non-linux questions to the Linux Gazette Answer Guy.
If you're getting 28.8 then be happy. Speed claims beyond that are dubious and likely to occur only under optimal conditions, rarely and briefly. If you really need faster access you'll have to look at alternative technologies (ISDN, cable modem, leased line, DSL, etc).

(?) Linux to HP9000 Through RAS?

From hansmok on Mon, 11 Oct 1999

(?) Dear Jose L. Torres Reyes,

I am a beginner who has an interest in Linux, in South Korea.

(!) First, please don't send e-mail in HTML format. Most people won't appreciate it. (Obviously if you have some friends who prefer it, you can do what you like with them.)
The best way to send e-mail is as plain, simple ASCII text with no special characters, and simple line, space and tab formatting. Keep the lines down to about 72 characters or less. For best results leave a few spaces or a tab on the left as a "margin."

[ We've no idea who Mr. Reyes is, either. But it was, at least, a Linux question. -- Heather ]

(?) I tried to make a connection between a personal computer (operating system: Slackware Linux) and an HP9000 unix server (operating system: UNIX) through PPP (point to point protocol).

I checked success in login into HP9000 unix server but failed after inserting password.

I don't know the reason that failed in contacting into HP9000 unix server.

I use RAS(Remote Access Service) in contacting. I'd like to receive your answer as soon as possible.

Bye.

(!) There are many ways to set up PPP among UNIX and Linux systems. I don't know what your HP-UX is referring to as "RAS" (remote access server). It could be that they've implemented some service that is designed to allow NT systems to log into theirs. (That would be the most likely meaning of the term RAS in this context).
The Linux PPP daemon would not normally function as a RAS client. There may be options to do that; however, it seems like it would just complicate matters. I'd suggest that you just play with the HP PPP settings (see their docs for info on that --- or talk to their technical support) so that they allow simple PAP or CHAP authentication. Then configure your Linux pppd accordingly.
The difficult thing about PPP in general is that you have to make sure that your settings and those of your remote (ISPs in most cases, the HP9000 in yours) all match. Different PPP implementations use slightly different terms for some of the many features that they offer, and most of them have completely different configuration file formats, locations and command line options.
Be sure to spend time with the Linux PPP HOWTO at http://www.linuxdoc.org/HOWTO/PPP-HOWTO.html.
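As a sketch of where that usually ends up, assuming the HP9000 side is set to accept PAP: the account name, secret, serial device and speed below are all illustrative placeholders, not values from the HP documentation:

```shell
# /etc/ppp/pap-secrets -- client    server    secret
#    "hansmok"           *          "example-secret"

# Then bring the link up; "user" selects which pap-secrets entry to
# send, and "noauth" means we don't demand the peer authenticate
# itself to us.
pppd /dev/ttyS1 38400 user hansmok noauth defaultroute
```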

(?) TCPMux Revisited: You'll need a Daemon for it, or a Better inetd

From Helpdesk on Fri, 1 Oct 1999

Thnx jim (hope i can call you that).
i could make my work do with the Mike Neuman's BINETD, "Better INETD" at: http://www.engarde.com/~mcn/binetd/index.htm)
works ok till now.
will be in touch.
ciao
jaggu

(?) Overwrote NT with RedHat: Good Idea But Bad Move

From tonyray on Mon, 11 Oct 1999

Hi there answer guy,

I'm tonyray from the Philippines, got a big problem.

I was installing Red Hat 5.2 on my computer, with the 1st IDE hard drive (NT4, working normally) and the 2nd hard drive for Red Hat 5.2. What happened: the NT4 hard drive was deleted and reformatted into Linux native. Is there a way to get back my NT4 files, like unformat?

please help thanks tonyray

(!) Unformat would NOT help in this situation regardless of which OS or platform was involved. You didn't merely reformat your drive or delete those files. You've overwritten most or all of them.
The only reasonable way to get back your files is from a backup. I realize that you probably don't have one --- I've found that too many people walk the software installation and upgrade high wire without the safety net of a recent, tested backup of their data.
Sorry I can't offer you better news or more hope. If your data is worth quite a bit more than your computer you could look into data recovery using "magnetic force microscopy" (a technique used in crime labs for computer forensics).
Of course the procedure would probably cost more than my annual salary and probably only be partially successful. So that suggestion is not practical.

(?) Partitioning Advice

From Stock Watch on Fri, 15 Oct 1999

Greetings James,

Hi, I'm Wong. I need your advice on how to partition a 6.2GB hard disk so that it can optimize the usage of Linux. This is my first time installing Linux. I intend to set it up as a full Linux server. The main purpose of the Linux server is to act as a mail server besides doing other functions. I also learned that it needs a swap partition. Please advise. Thanks in advance.

Cheers, Wong

(!) This is a very common question. There are differing views and philosophies on the subject as well.
Here are my suggestions:
                /boot              31Mb
                /                 127Mb
                (swap)            127Mb
                /usr             1635Mb  (1.5Gb)
                /tmp              127Mb
                /home            2047Mb  (2Gb)
                /var             (rest: ~3Gb)
... and on some systems I'd add other filesystems on another drive. For example a filesystem on /var/spool/mail can be mounted with the "sync" option (ensuring that all writes to that filesystem are done synchronously --- minimizing the damage done by a power failure).
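For instance, a hypothetical /etc/fstab line for such a mail spool (the device name is only an example):

```shell
# /etc/fstab fragment: synchronous writes for the mail spool.
# device        mount point        type   options          dump pass
/dev/sdb1       /var/spool/mail    ext2   defaults,sync    1    2
```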
This set of values is based on years of experience. It leaves plenty of room for extra kernels, initrd images and System.map files on /boot, gives plenty of room for paging (swap), and /usr is big enough to install LOTS of software. If this were going to be used exclusively as a mail server then you wouldn't need anywhere near that much space on /usr. Of course you shouldn't need anywhere near 3Gb for normal mail and POP services either.
That should get you started. If you find that you do manage to outgrow your disk space, you can add additional drives pretty easily. I've described that process in previous Answer Guy columns.

(?) UNIX Emulation Under Linux? iBCS

From antonio on Fri, 15 Oct 1999

Hi James,

I read an answer of yours in Linux Gazette regarding Unix emulators.

I seem to understand that Lucent's Inferno is a Unix emulator for Linux??

I have a problem. There is a program for pharmacies here in Italy (very widely used) which has the function of registering medical codes etc. but which runs only under Unix. But Unix OS's are not free, so pharmacies are forced to buy a Unix OS (Open Server) only for the sake of letting the computer run that specific prog I mentioned above. Linux instead is free, so pharmacies would not have to afford the price of the OS. Though the company that wrote the prog is not willing to port it to Linux ... for obvious reasons of economic interest, as they handle the licence of the OS and thus have a share in that too. With Linux they would have to give up that share. Can you help me??

(!) First, Lucent's Inferno operating system is not a form of UNIX. It can run as a "standalone" (traditional) operating system, and it can apparently run as a "rehosted" OS (that is, an operating system which runs under another system's kernel, and accesses the system hardware through the host OS' system calls). So, Inferno can run under Linux.
Now onto your core question. When you say that this proprietary software runs "only under Unix" what do you really mean? UNIX is not a single product or operating system. UNIX is a family of operating systems and related utilities, and a philosophy or paradigm for system design. In that sense Linux is UNIX.
It is meaningless to say that UNIX is not a free OS. FreeBSD, 386BSD, OpenBSD, NetBSD, all have as much historical claim to "being UNIX" as any AT&T SVR4 system. Those are all free operating systems. Linux was independently developed. So it can't claim to be UNIX on historical or "familial" grounds.
What I think you were saying is that this proprietary pharmaceutical management product is written to run under one of SCO's UNIX products (Open Server, or Open Desktop).
If that is the case then you might be able to run the package under iBCS.
iBCS is the "Intel Binary Compatibility Specification" --- an old (pre-Linux) standard binary format for executables on x86 forms of SVR4 UNIX. At one time there were over 20 different commercial competitors to SCO on x86 hardware. There was a lackluster effort among these vendors to share a common executable file format and suite of libraries so that "shrinkwrapped shelfware" could be developed for "any" version of UNIX on the x86.
SCO OSVR and ODT programs are iBCS by default (if I understand it correctly).
Linux does support iBCS through a set of optional libraries. (Ironically SCO and Solaris now support Linux binaries through their lxrun packages. FreeBSD has been able to run Linux binaries for years).
So, it might be technically possible for you to run this proprietary application. Then again the app might be linked against some SCO proprietary libraries --- which you might not be licensed to copy to your Linux systems.
Aside from the technical issues you must consider the legalities. It might be a violation of your license to run this software under a different OS. Of course it might also be an illegal and unenforceable contract in your jurisdiction. If this company requires that you buy a given product with theirs (bundling), you might have legal recourse.
I don't know. I'm not a lawyer.
So, I'll just get back to the technical issue. You can probably get this to run under Linux by using iBCS.
The iBCS libraries ship with most Linux distributions. You can also find them on line with a simple Yahoo! or Google search.

(?) PAM applications running as root (Was Re: WebTrends Enterprise Reporting Server)

From Darren Moffat on Sun, 17 Oct 1999

You can run the server as root or as some other user. In order to use PAM (Pluggable Authentication Module) it has to run as root.

A general comment about PAM rather than this specific problem.

It is NOT a requirement of the PAM framework that the application be running as root. There are two cases, though, that make login-type applications need to run as root.

1) The password is stored in /etc/shadow which only root can read

If the password was in NIS/NIS+/LDAP then the authentication could succeed as an ordinary user.

(!) Actually it should be possible for /etc/shadow to be group readable and associated with a "shadow" or "auth" group. Then SGID programs could authenticate against it.
A different level of access could be required to modify it (that would presumably be reserved for root, since the ability to modify the /etc/shadow and /etc/passwd files on those systems that are configured to honor local files would constitute root access in any event).
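A sketch of that arrangement; the group name "shadow" is conventional on some distributions, and the helper path is purely hypothetical:

```shell
# Restrict /etc/shadow to root plus a dedicated group:
chgrp shadow /etc/shadow
chmod 640 /etc/shadow      # root: read/write; group "shadow": read

# A hypothetical authentication helper could then run SGID "shadow"
# rather than SUID root:
chgrp shadow /usr/local/sbin/checkpw
chmod 2755 /usr/local/sbin/checkpw    # leading 2 = setgid bit
```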

(?) 2) The login application needs to make setuid/setgid calls; this

usually happens in the application after PAM authentication has been completed and is thus nothing to do with PAM.

If the OS has privileges/capabilities then the application would assert PROC_SETID/CAP_SETID instead of being root to make the setuid/setgid calls.

(!) Linux 2.2 has privs implemented in the kernel. There is, as yet, no filesystem support for storing the priv bitfield metadata. So, anyone wishing to "capify" the Linux authentication system would have to do so through wrappers.
I'm personally disappointed by the lack of progress in this field.
It seems that we should have at least attained the ability to emulate BSD securelevel (wrapper or patches to init(8)) and to limit certain well-defined daemons (like xntpd, routed/gated, syslogd) to their specific priv sets (set time, modify routing tables, bind to priv ports).
We should be able to use the immutable and append-only filesystem attributes, the read-only mount option and the chroot() system call as root-safe security features, and we should be able to audit many daemons (traditionally run as root) and SUID utilities in terms of the most damage that subversion of each could do.
Well, it looks like I have to do some coding on this myself. (One look at my code should scare some others into doing it right!)
BTW: Does Solaris implement POSIX.1e "capabilities" (privs)? How about HP-UX, AIX, et al? Are those in the mainstream OS APIs or are they only available in "Trusted" (MLS/CMW) versions?

(?) International Keyboard Mappings for

From Hector Rivera on Mon, 18 Oct 1999

Hi there,

Sorry for approaching you like this. But I need help. I learned about you from Linux Gazette.

I'm new to Linux but somewhat familiar with Unix. Just installed Caldera Open Linux 2.3 with KDE GUI. I share the HD with Win98.

I want to find how I can configure my US keyboard for US-International mapping, like Win98. I need to be able to have access to latin characters without having to replace the keyboard. Is there a way to setup KDE or X to US-International? If so, how? Will it work with other Linux applications like StarOffice, Word Perfect, email clients, etc?

Let me know if you need more details.

Thanks in advance, Hector

(!) Under Linux your console keyboard mappings are set in two places.
The text mode virtual consoles are controlled with utilities from the kbd or console-tools package. (Older versions were called kbd; this has been supplanted by the console-tools package since kbd was no longer under active development; think of it as simply a renamed and updated package).
In particular the utility you want is 'loadkeys' which takes mapping files that are similar in syntax to those from the standard xmodmap utility that you find with any implementation of the X Window system.
'xmodmap' and the 'xkeycaps' utility are normally used for X. I don't know if KDE and GNOME have their own frontends for setting xmodmap. I would expect their "control panel" facilities to offer this sort of thing (given that the KDE project was started by Germans, some of the principal GNOME developers are Mexican, and both projects have developers from all over the world).
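To make that concrete, here is roughly what each layer looks like; the keymap name and the keycode are illustrative and vary between distributions and keyboards (use 'xev' to find your own keycodes):

```shell
# Text console: load a Latin-1 keymap with dead keys.
loadkeys la-latin1        # map names vary; look under /usr/share/keymaps

# X: remap one key so that its third/fourth (AltGr) levels produce
# an accented character.  Keycode 26 is 'e' on many PC layouts.
xmodmap -e "keycode 26 = e E eacute Eacute"
```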
The best resource for customizing your Linux keyboard is:
The Linux Keyboard and Console HOWTO
http://www.linuxdoc.org/HOWTO/Keyboard-and-Console-HOWTO.html
One HOWTO specifically about coping with Linux in Spanish is:
Spanish Linux HOWTO
http://www.linuxdoc.org/HOWTO/Spanish-HOWTO.html
... although that doesn't seem to have been updated since 1996. There's got to be something newer out there!
The LDP (http://www.linuxdoc.org) has a set of non-English links including a section on Spanish web sites at:
http://www.linuxdoc.org/links/nenglish.html#spanish
You can find a whole list of HOWTOs on Linux support for various other languages at:
http://www.starshine.org/jimd/LDPxTopic.html#international
This last item is my own modest attempt to arrange a list of links to LDP HOWTOs and guides by topic.
(Sometime I should put some more work into it. For now it tries to start with basics and user stuff and progress towards more advanced and obscure topics.)
As for support for non-English in an applications suite, I'd expect StarOffice to have the edge. It was created in Germany and does support several European languages.

(?) Really Wants 'rsh' to Work. Really

From Mike Hahn on Mon, 18 Oct 1999

Dennis,

I have read all I could find on rsh and getting it to work, to no avail. I run a small CAD network and am in need of the rsh function. We are on a small private network so hacking is not really a concern. I run Mandrake Linux 6.0, Windows 95/98/NT, IBM AIX, Sun Solaris, and DOS 6.2. I can rsh among all machines except the Linux boxes. I have tried all the suggestions in the "Answer Guy" columns that I could find. I did notice that in all of the "problems" the error message was "permission denied"; what I am getting is "Connection refused". Can you HELP?

Thank You.

M. Hahn Systems Admin.

(!) What does the rsh line in your /etc/inetd.conf file look like?
It might be configured with command line options that prevent the Linux version of in.rshd from honoring certain types of .rhosts files or that force it to ignore /etc/hosts.equiv. Here's an example:
# /etc/inetd.conf:  see inetd(8) for further informations.
shell   stream  tcp   nowait   root   /usr/sbin/tcpd  /usr/sbin/in.rshd -h
login   stream  tcp   nowait   root   /usr/sbin/tcpd  /usr/sbin/in.rlogind
In this example the -h option was specified. That is a common setting on many Linux distributions. It means that "super user accounts may not be accessed through this service" (i.e. 'root' and any other accounts with UID=0).
If a -l option is specified then nobody's .rhosts file will be allowed. (Only the /etc/hosts.equiv would be consulted).
Read the in.rshd man page for all the gory details. Keep in mind that the Linux version of rshd is likely to be very picky about the forward and reverse hostname-IP address mappings (in a mostly futile attempt to foil spoofing).
One trick for testing these sorts of problems is to temporarily replace the in.rshd (or other inetd launched daemon) with a wrapper shell script that calls 'strace' with a command like:
strace -o /tmp/rshd.strace /usr/sbin/in.rshd.real "$@"
... and then try to connect to the service. After you get an error, login to the system using some other means and view the resulting "system call trace" file.
These 'strace' files can be difficult to read. However, you can usually take a pretty good guess as to what the problem is by watching for failures on open(), stat() and lstat() calls.
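The wrapper itself is only a couple of lines; this sketch assumes you've renamed the original binary to in.rshd.real:

```shell
#!/bin/sh
# Hypothetical wrapper installed as /usr/sbin/in.rshd in place of
# the real daemon.  exec replaces the shell so inetd still sees a
# single, normal child process; strace logs every system call it
# makes to /tmp/rshd.strace for later inspection.
exec strace -o /tmp/rshd.strace /usr/sbin/in.rshd.real "$@"
```

Remember to swap the real binary back into place once you've found the failing open() or stat().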
(This strace trick is useful for all sorts of problems, helping you isolate the missing configuration file or directory that some program is failing to find or unable to create/see. Of course I'd love to see a massive "error messages" project for Linux that would add patches to these programs to ensure that every distinct failure mode had a clear error message and every man page had a comprehensive list of the associated error messages and suggested coping strategies).
Good luck.

(?) Laundry List of RH 6.0 Problems or Hardware Blues

From root on Mon, 18 Oct 1999

Hi

I am hoping you will be able to help. I have various problems with Redhat 6.0

A Current problem

1. Whilst running gnome 1.04 I can access no terminal windows except kde terms.

Xterm window does not appear complaining about no ptys free

rxvt does not appear complaining about colour maps

Gnome-terminal appears but with no prompt, just a cursor which I cannot type into.

(!) Wow! That's pretty irritating.
The ptys problem suggests that you don't have the new /dev/pts pseudo-filesystem mounted. /dev/pts is a virtual filesystem (similar to /proc). It allows those programs with the appropriate library support to dynamically allocate ptys (pseudo-ttys, used by each xterm, telnet session, 'screen' window, etc).
My guess is that your other terminal emulators aren't using that method, so they are searching through the list of traditional ptys (/dev/ttyp*). It would be nice if the programs that try to go through the pts system would fall back to the old method automatically. (Of course I could be wrong; perhaps it is the xterm that is using the old method, and the others are getting their ptys dynamically. Look under /dev for ttyp* and check the output of the mount command for a /dev/pts entry.)
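Assuming the missing /dev/pts mount is the culprit, checking and correcting it looks something like this (the gid/mode options shown are typical values, not requirements):

```shell
# Is the Unix98 pty filesystem mounted?
mount | grep devpts

# If not, mount it by hand (as root) ...
mount -t devpts devpts /dev/pts

# ... and make it permanent with an /etc/fstab line like:
#   none   /dev/pts   devpts   gid=5,mode=620   0  0
```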
The color maps problem suggests that the GNOME/Enlightenment theme that you're using is taking up enough colors that there aren't any available for your other applications. If you're running Netscape Navigator/Communicator before you try these commands, then it is the likely culprit --- it steals a lot of colors.
You could try starting X with -bpp 16:
startx -- :1 -bpp 16
... (this is to put a 2nd X session on a new virtual console, on the assumption that you are using a gdm, kdm, or xdm graphical login on :0).
With 16 bits per pixel (if your video card and configuration supports it) you should have lots more room in the color map.
Another alternative is to change your X configuration and disable your current list of startup applications (just start with an xterm as the session manager, no window managers, no xclocks or other widgets).
I have no idea what's wrong with your GNOME-terminal. It could be suffering from the color map shortage as well. Possibly it simply isn't communicating the problem as gracefully as rxvt.

(?) 2. While checking rpms gnorpm dies when I try to access System environment/ base (but only this category)

An xterm window seems to try to appear at this time.

(!) This sounds like it's related to the problem above.
(Personally I don't use any GnoRPM or Glint type front ends to RPM. I just use the 'rpm' command).

(?) I can access gnome via gdm most of the time and run all programs except these I believe.. It could possibly be a problem to do with the gtk libraries bu I am running out of ideas

(!) I don't think the libraries are the problem in your case. Certainly you might want to download many of the 125 upgraded packages from ftp://updates.redhat.com
That site is very busy and often full. So you may want to go to a mirror site such as:
ftp://ftp.cdrom.com/pub/linux/redhat/6.0/i386

[ You'd better stick with ftp://ftp.cdrom.com/pub/linux/redhat/ and use human judgement for the directories beyond that; they keep changing the directory structure. -- Heather ]

There are at least seven GNOME related update packages there.

(?) B Recurring problem (which I think has caused the present problem.)

regularly when rebooting e2fsck destroys parts of my file system. This has appeared in various ways:

moving directories into lost+found, deleting entire directories, deleting/corrupting files (last time it deleted several files on my root partition, particularly fstab, causing the current problem)

(!) Are you sure that e2fsck is doing the "destroying" here? It seems more likely that e2fsck is attempting to recover from damage that's being done to your filesystem during the previous session.

(?) This problem has caused me to re-install several times, which takes a long time on my system due to repeated Sig 11 faults.

(!) Whoa!
Have you read the Sig 11 page (http://www.BitWizard.nl/sig11)?
The bad news is that you probably have some bad memory or some other hardware problem. Did you check the "Scan for Bad Blocks" option when creating your swap partitions and filesystem? If not, go back and try that! Then go through the SIG 11 FAQ in detail.

(?) Any help would be appreciated

My system is

RedHat 6.0 running gnome, Afterstep and KDE 1.1.1pre2 Partitions /boot 7M / 220M /usr 820M /home 170M /dos 700M Kernel 2.2.10

(!) Be sure you are enabling the "Unix98 PTY Support" option under "Character devices" when compiling these new kernels.

(?) Motherboard PC Chips M590 Memory 64M PC100

(!) Try taking out first one, then the other of your memory modules (presuming that you have a couple of 32Mb DIMMs or SIMMs). If not, see about getting another memory module and trying that in the system. If that works, or you can't get more memory, then take your existing RAM in for testing.
Most SIG 11s are caused by faulty RAM. Nothing works your RAM like UNIX and Netware. They are a better burn-in test than any "memory test" software, or even test bench equipment. (They are the real world, whereas the test software and hardware are simulating the load that they create).

(?) 32X CDROM / Graphics: SiS 3D AGP Pro embedded in motherboard / generic V90 modem

(!) Even aside from the likelihood that you're suffering from hardware problems, I have to say again that I don't like the state in which GNOME was pushed into RH 6.0. I've seen its components dropping core files all over the few systems where I've run RH 6.x, and I've heard that this is the case for just about everybody. core files are a symptom of bugs. When almost everyone is getting core files in "normal operation" then the product/package is not ready for production use. Let's not adopt the Microsoft attitude towards "1.0" products.
With the upcoming release of RH 6.1 I certainly hope Red Hat has stabilized their flagship GUI.

(?) More AOL Instant Messenger Spying

From Jon Sandler on Mon, 11 Oct 1999

it is very important to me as well that i spy on other people's instant messages - seeing messages from both the sender and the reciever. your help would be greatly appreciated. and im not too in tune with the technical stuff, so a simple way would be good. thank you very much.

(!) You are also "not too in tune" with ethics. You're also not too bright, and you're lazy (since I'm sure you know something about the basics of capitalization, et al).
I don't use AOL or any other instant messaging facility. I don't care about AOL or instant messaging. I don't know how to spy on people using this software, and if I did know (if I took the time to figure it out) I wouldn't tell you.
What I don't understand is why people like you send me questions like this.
I can only imagine that you came across my name using a web search. If you searched on "AOL" or "IM" and "spying" (or similar topics) then you might have found other issues of the Linux Gazette where I've told other people that I don't dispense advice on spying. So I can only conclude that you found my address, didn't bother to read any of my writing, and blindly mailed me. Like I said, not terribly bright.
I'm sure you could find people who are more "in tune" with your interests and ethics in some of the seedier IRC channels. Of course you'll have to learn quite a bit to figure out how to use IRC.
Of course, I do wonder if anyone else at your site knows, or cares about your clandestine interests. I'll just copy the postmaster there on this reply (since you might be violating your employer's policies or your ISP's acceptable use policy by attempting to invade the privacy of others).

(?) Another Solution, or a Different Problem

From Paul Leclerc on Mon, 11 Oct 1999

First of all, thanks for your column. It's a great source of help for me.

Your recent question and answer in LG about X respawning, etc. may have another solution.

I was mucking around with my XF86Config file trying to get 16bpp to work. I really wasn't sure what I was doing and put various directives in different "sections" of the file.

One mistake that I made (and I don't remember what!!) caused the same problem, i.e. X respawning. I went back to the original file and everything was fixed.

Paul Leclerc

(!) Certainly having bad values in your XF86Config file can cause failure to load X. In general anything that prevents X Windows from loading causes a "respawning too fast" error from 'init' when running 'xdm' (or its cousins, 'gdm', 'kdm', etc.) from the /etc/inittab.
In fact, any program listed on a "respawn" line of /etc/inittab will cause this error message if it exits too quickly after loading (commonly the case when the program fails).
The message is from a feature of the 'init' program which attempts to prevent overwhelming the system with respawning activity.
So you must be sure that your 'getty' and other programs are properly configured before adding them to the inittab file. This can be a challenge with some programs (like the various forms of 'getty' that prompt you for a login name on your console and on any serial terminals or modems that you configure as login devices for your system). Some programs like 'getty' cannot normally be run from a shell prompt -- they must be started through 'init'.
One sort of program that is often difficult to run from 'init' is 'syslogd'. It is one of those programs that normally performs a "double fork() and exec" as it loads. This is a programming technique for writing daemons such that they run disconnected from their parent process. The primary benefit is that it prevents "zombies." When the daemon dies or exits later, its return value will be automatically "reaped" (discarded) by 'init' (which adopts orphan processes for this purpose).
Basically some process must go through the process table periodically and read the exit values of "zombies." (a "zombie" is a dead process whose status is retained in the process table). This is normally the job of a process' parent. However, if the parent exits/dies before some of its children then 'init' takes on the job.
However, if you have one of these "double fork()-ing" daemons, and you try to run it directly from 'init' under a "respawn" directive, you'll get the "respawning too fast" error.
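The orphan-adoption behavior is easy to see from a shell. This is a sketch of my own, not code from the question: the subshell plays the role of the intermediate child in the "double fork" -- it backgrounds a job and exits at once, leaving that job to be adopted by 'init'.

```shell
#!/bin/sh
# The subshell backgrounds a job, records its PID, and exits
# immediately, orphaning the job.
( sleep 2 & echo $! > /tmp/orphan.pid )
sleep 1   # give the kernel a moment to reparent the orphan
# The orphan's parent PID is no longer the (now dead) subshell;
# on a traditional system it will show as 1, i.e. 'init'.
ps -o ppid= -p "$(cat /tmp/orphan.pid)"
```

When the orphan finally exits, whatever process adopted it reaps the exit status, so no zombie lingers in the process table.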
In the case of the Linux 'syslogd' (the system logging daemon) we can add a -n option to "not fork()." This would be done like so (/etc/inittab extract):
sl:2345:respawn:/sbin/syslogd -n -m 10
Here I've given this line an "ID" of "sl" (SysLog). I've marked it to run in levels 2 through 5 (1 is single-user repair mode, 0 is halt, and 6 is reboot). I've also set the "heartbeat" or "mark" interval to 10 minutes.
(The 'syslogd' "mark" option forces it to send a message at some interval even if nothing has been logged, it's used for automated monitoring in conjunction with remote logging. Read the syslog and syslog.conf man pages for details).
BTW: I think the option you might have been looking for in your XF86Config might have been:
"DefaultColorDepth"
... which has to go into the correct "Driver" section for your video card. The fact that XFree86 supports multiple video card and monitor specifications inside of a single config file is an endless source of complication and confusion.
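In an XFree86 3.x config that might look something like the fragment below (the device, monitor and mode names here are invented for illustration; match them to the identifiers in your own file):

```
Section "Screen"
    Driver      "svga"
    Device      "My Video Card"
    Monitor     "My Monitor"
    DefaultColorDepth 16
    Subsection "Display"
        Depth   16
        Modes   "1024x768" "800x600"
    EndSubsection
EndSection
```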

(?) Setting the LILO Default

From Ryan Sheidow on Mon, 11 Oct 1999

I want to set the LILO boot system to load dos by default. Could you please send me a detailed algorithm that would allow me to do this. I am just a linux beginner and do not know anything.

Thank You, Ryan Sheidow

(!) The easiest way is to edit your /etc/lilo.conf file, add an appropriate default= line near the top (first line of the file is O.K.) and then run /sbin/lilo.
The default= line should refer to the label in your MS-DOS "stanza." Basically that would be whichever word you've been manually typing at the LILO: prompt to get into MS-DOS.
Here's a sample:
default=dos
boot=/dev/hda3
root=/dev/hda3
install=/boot/boot.b
map=/boot/map
vga=normal
delay=20
image=/vmlinuz
        label=Linux
        read-only
other=/dev/hda1
        label=dos
Of course, as with any changes to your kernels, partitions, or lilo.conf files you must run the /sbin/lilo command to update your boot record and maps in order for your changes to take effect.

(?) Multiple Concurrently Installed Versions of glibc

From Young, Geoffrey S. on Tue, 12 Oct 1999

Hello -

I am currently using RedHat 5.2, but would like to upgrade to the Perl rpm provided with RH6.0. When I test the rpm dependencies, the newer perl requires glibc 2.1. Ordinarily, I would just upgrade glibc while I'm at it, except that I am running an application that is not compatible with glibc 2.1 (it was specifically developed for RH5.2, which uses glibc 2.0.7).

Can I have both versions of glibc on the same system? If so, can I still use the rpm format?

thanks --Geoff

(!) You can have multiple versions of any shared library under Linux. When they have different version numbers then the dynamic loader (ld.so) will find the one that a given binary is linked against. That's why there are symlinks under /lib pointing from major versions to the most recent minor versions. (Study the links under there to see what I mean).
However, as you say, there can sometimes be problems with this automatic loading mechanism. Sometimes the author of a package needs to specify a tighter binding to some shared libraries. However, that involves recompiling it.
There is another way. You can use the LD_PRELOAD and/or LD_LIBRARY_PATH environment variables to over-ride the normal library loading mechanism for normal (non-SUID) programs. (The loader over-ride is disabled when the EUID doesn't match the RUID, as is the case while running SUID programs. If this weren't true it would be trivially easy to bypass system security with custom libraries and access to any dynamically linked SUID binary).
The way to use these variables is to sequester your special libraries in their own directory, write a small wrapper script to set and export the environment variable, and then execute the necessary program. You do this for each program that needs the special library version.
Read the ld.so man page for more details.

(?) Geoffrey Replies...

From Young, Geoffrey S. on Tue, 12 Oct 1999

thanks for your help. I'll read the ld manpages and play with LD_*_PATH, but I think that the application in question is running SUID.
Anyway, you have given me lots to go on - thanks again
--Geoff

(?) lg #45: Limiting Internet Access through Cable Modems

From August Hörandl on Tue, 12 Oct 1999

more on: lg #45: Limiting Internet Access through Cable Modems

(?) Hi, in your answer you missed another possibility: the squid proxy program can use ACL: you can easily set up allowed access times for each clients and a lot of other stuff; squid can even be used to filter certain sites. you also get log files and can check which places have been visited

(!) It sounds interesting. I'm not sure I'd suggest squid as a security proxy. There seem to be two different classes of web proxy software: those for firewalls (focused on security) and those for caching, content filtering and features (like Squid).
However, you're right. I should have mentioned that the features exist. I think you have to get some Squid plugins for some of these features.
(It never ceases to amaze me how many Linux/UNIX subsystems are "modularizing" with Perl modules, Apache modules, loadable kernel modules, Squid modules, Netscape Navigator/Communicator plugins etc).
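For the curious, time-based ACLs of the sort described would look roughly like this in squid.conf; the subnet, ACL names and hours below are made up for illustration:

```
acl classroom src 192.168.10.0/255.255.255.0
acl lessontime time MTWHF 08:00-16:00
http_access deny classroom lessontime
http_access allow all
```

A script that rewrites (or comments out) the deny line and restarts Squid gives exactly the per-classroom "toggle" effect.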

(?) Gustl
ps: keep on the good work - i like your answers

(!) I'm glad.

(?) lg #45: Limiting Internet Access through Cable Modems

From The Answer Guy on Wed, 13 Oct 1999

(!)It sounds interesting. I'm not sure I'd suggest squid as a security proxy. There seem to be two different classes of web proxy software: those for firewalls (focused on security) and those for caching, content filtering and features (like Squid).

However, you're right. I should have mentioned that the features exist. I think you have to get some Squid plugins for some of these features.

(!)there is a nice guide at
http://info.ost.eltele.no/freeware/squidGuard
i had to solve another problem: i am a teacher and net admin in a technical high school; we have full internet access - but it is impossible to teach anything if the internet is accessible ;-) so i had to find an easy solution to "toggle" a whole classroom (= a subnet). i did it via a little script (with menus via dialog), which creates a file with acls for squid. teachers are allowed to call the script via sudo (needed to restart squid)
Regards Gustl

* * * Linux... Don't fear the penguins * * *
August Hörandl


(?) The Linux Startup Script?

From Joe Lorino on Thu, 23 Sep 1999

In a message I found in a search about syslogd parameters, you mention modifying the Linux startup script. Can you tell me what file that is?

(!) Normally Linux has a number of startup scripts. Those include all of the files under /etc/rc.d/rc3.d (or /etc/rc2.d on Debian systems, or various others on other systems). This also depends on your operating mode (default runlevel).
However, I would say that the /etc/inittab is really THE Linux startup file. Ultimately a Linux kernel really only starts one process, 'init'(*). Then 'init' reads the /etc/inittab file and all of the rc.sysinit, rc*.d/S* files, etc. are run by that.
If you're going to run 'syslogd' directly from the inittab then you should use the "-n" (no forking) option. Be sure to upgrade to a reasonably new version as it was broken in some older ones. (Thank you, Martin Schulze, for fixing that! It was a bug that I reported). Also be sure to disable any /etc/rc*.d/S*syslog script you're running.
Of course you could also just edit your /etc/*/init.d/syslog script file.
I presume you're planning to add either the -m (generate "heartbeat" marks for remote monitors) or the -r (allow reception of remote syslog messages on a central loghost). Please be aware that -r might leave you vulnerable to some attacks --- particularly some DoS (denial of service) shenanigans. Use it with caution, and arrange your packet filters to limit access from untrusted networks.

(?) followup: The Linux Startup Script?

From Joe Lorino on Mon, 11 Oct 1999

Thank you for your reply. The information you provided is very helpful.

(!) Good. The most important thing for troubleshooting most computer systems is knowing in detail exactly what steps they go through as they start up.
Glad I could help.

(?) Short names for Long Domains?

From Green, David on Tue, 12 Oct 1999

Gentlemen,

I have a COMCAST modem and ipmasq and everything connects and runs

fine (http browsing, ICQ, realaudio) except my MS mail can't find the pop and smtp servers "mail" and the news server "news." I can ping the long address mail.etntwn1.nj.home.com or the IP address, but I can't ping "mail" nor can my mail client (Outlook express) find "mail" or "news". Have you seen a similar problem and how did you solve it?

(!) Umm ... so why don't you configure MS Mail to use mail.etntwn1.nj.home.com and news.etntwn1.nj.home.com (the long names) instead of trying to use the short names.
If your domain was set to "etntwn1.nj.home.com" then the bare hostnames should work. Under Linux or UNIX you could add directives to your /etc/resolv.conf file to force it to search additional domains. I don't know how to do that under Win '9x or NT (or even if it's possible for them).
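Under Linux the relevant /etc/resolv.conf directives would look something like the fragment below (the domain names come from the question; the nameserver address is a placeholder for whatever your ISP assigns):

```
search etntwn1.nj.home.com home.com
nameserver 10.0.0.1
```

With that "search" line in place, a bare name like "mail" is tried as mail.etntwn1.nj.home.com before the resolver gives up.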
So, try that.

(?) RE: Comcast and IPmasq

From Green, David on Tue, 12 Oct 1999

Jim,

(?) why don't you configure MS Mail to use mail.etntwn1.nj.home.com and news.etntwn1.nj.home.com

(!) I tried that already. The problem was apparently that someone hacked our local Comcast mail and news servers, so they were offline for a day or so. Everything works great now. Thanks.
--David Green

(?) Snooping on a Serial Port

From Rudy Moore on Mon, 11 Oct 1999

How can I snoop what an application is sending and receiving from a serial port?

(!) Look for ttysnoop. Here are a few URLs:
ttysnoop-0.12c-4.i386 RPM
http://rufus.w3.org/linux/RPM/contrib/libc5/i386/ttysnoop-0.12c-4.i386.html
[freshmeat] ttysnoop
http://freshmeat.net/appindex/1999/09/05/936520647.html
Debian GNU/Linux -- ttysnoop
http://www.debian.org/Packages/unstable/admin/ttysnoop.html
You might also look at:
Debian GNU/Linux -- ttylog
http://www.debian.org/Packages/unstable/utils/ttylog.html
... which is a similar program. You could probably use the 'alien' package (http://kitenet.net/programs/alien) to convert the Debian package into some other format (like RPM).
I trust you will be using these for ethical purposes.

(?) (Not sure if you prefer long or short questions, but I can elaborate if you'd like more information.)

Rudy

(!) I prefer questions that provide just enough information that I can answer them. I like them to be just general enough that they will be useful to some significant number of the Linux Gazette readers and to the many people who find my back issues using Yahoo!, Google, Alta Vista, and Deja --- and just specific enough that I can answer them in less than five pages.
Oddly enough yours is the first question I can remember that actually asked what sort of questions I prefer.

(?) More on: Snooping on a Serial Port

From rudy on Wed, 13 Oct 1999

The problem with ttysnoop is that it's heavily oriented toward spying on a network connection - which is different from protocol analysis. The first begs the "ethical?" question, the second implies reverse engineering - or debugging. And I would venture to say that debugging in this manner is really just a form of reverse engineering, so...

I wrote a Perl frontend to strace and have made a pretty darn useful protocol analyser. At some point in the future, I'll post my code so others can benefit from it.

Thanks for the reply! Rudy

(!) I agree that ttysnoop isn't well-suited for protocol analysis. However, I was unable to find any tools specifically for that.
One thing that would be cool would be a modified form of the serial device driver --- one that could be used to capture and log data as it is passed from the interface to the userspace process.
This has shades of "STREAMS" gathering like storm clouds over it. The ability to attach filters into the streams of data on a UNIX device driver is a major feature of STREAMS. There is an optional set of STREAMS patches (LiS) available for Linux. However, they are not part of the standard interfaces and drivers (and probably never will be).
One of the key arguments against STREAMS in the mainstream Linux kernel is that we have the driver sources available. If we need to add custom filtering, logging, etc, into those at a low level --- we should modify the driver. This prevents the rest of the drivers from suffering from bloat and performance restrictions that would be necessary to fully support the STREAMS infrastructure. (Those are the arguments as I remember and understand them. I'm not a kernel or device driver developer and don't really have a qualified opinion on the whole debate).
Of course, if the 'strace' solution is working for you, then use it. It sounds interesting and useful. However, if 'strace' doesn't do it, or it costs too much load for your purposes, maybe you could use a patched driver.

(?) Another Call for Serial Snooping

From VETTER Joe on Tue, 12 Oct 1999

Hi,

I have a program which communicates through the serial port to a data logger. The program is not very functional and I would like to reproduce it. The problem is I do not know the commands to send to request data from the data logger. I am looking for a program which will monitor the data passing in and out of the serial port, without actually stopping the other program from using the serial port. Any ideas ?

Thanks in Advance

(!) Look for ttysnoop. This is a package that is specifically designed to "listen in on" Linux ttys (serial or console).
Here's the Freshmeat pointer:
http://freshmeat.net/appindex/1999/09/05/936520647.html

(?) Maximal Mount Count Reached

From Marius Andreiana on Sat, 25 Sep 1999

Dear Answer Guy,

I'm Marius and here is my problem :

From time to time ( seldom ), my RH Linux 6.0 says during booting "/dev/hdaX had reached maximal mount count; check forced" where X is 3 and 4. Here's my partition table :

/dev/hda1, 170 MB, type=6h (DOS 16-bit >=32)
/dev/hda2, 16 MB, type=82h (Linux swap)
/dev/hda3, 497 MB, type=83h (Linux native)
/dev/hda4, 129 MB, type=83h (Linux native)
 

I always perform clean shutdowns. I suppose this is because all the above 4 partitions are primary. But then why does it report that message only from time to time ?

Thanks a lot for your time
Marius

(!) We call that "losing the lottery." It always seems to happen when you're in a hurry to get the system back up and running.
Don't worry about this message, though. It's perfectly normal. Even if you do a clean shutdown every time, there's always the chance that some filesystem corruption has crept in. So each filesystem stores the number of times it has been mounted since the last fsck (filesystem check) and automatically forces one at those points.
If you want to live dangerously you can change the maximal mount count value on a filesystem using the 'tune2fs' command's -c option. You can also manually set the mount count using the -C (upper case) option. You can see the current values using a command like:
tune2fs -l /dev/hda1
... This is the only one of these commands that you might run on any of these devices while it is mounted. In my particular case it shows a maximal mount count of 20. You should unmount any filesystem before using tune2fs to write any new values to it using the other options of this command. (It should be safe to change some values when you have a filesystem mounted read-only; though it might be worth asking an expert, so I've copied Ted T'so and Remy Card on this message).
You can also set a volume label on any of your ext2 filesystems using this command (BTW: 'tune2fs' only works on ext2fs --- don't try to use it on any other sorts of filesystems that you have). I notice that the man page doesn't tell us anything about these volume labels (like what characters are allowed, and how long they can be). Glancing at the source code to e2fsprogs I find that you can have volume labels of up to 16 characters. I didn't see any filtering on characters so I suppose any string of non-NUL characters might be used --- though I'd stick with simple alphanumerics and printable punctuation to be safe.
As far as I know there is no way in which this volume label is currently used. It seems to be a wholly optional feature; I guess we can use these to keep track of our removable media or something.
(Ted, Remy, is it safe to set some or all tune2fs values on a filesystem while it's mounted read-only? Are there any characters that should NOT be used in the volume labels? Is there anything that uses these volume labels, or are they just obscure cosmetic options?)

(?) Having been cc'd, Ted T'so adds...

From tytso on Mon, 27 Sep 1999

Date: Sat, 25 Sep 1999 01:14:42 -0700 From: Jim Dennis <>

We call that "losing the lottery." It always seems to happen when you're in a hurry to get the system back up and running.

(!) Yup. Note that even once we have journalling support in ext2, you will want to occasionally force an fsck over the filesystem just to make sure there haven't been any errors caused by memory errors, disk errors, cosmic rays, etc.
If you need your laptop to reboot quickly just before a demo (and your laptop doesn't have a hibernate feature or some such), something you can do is to sync your disks, make sure your system is quiescent (i.e., nothing is running), and then force a power cycle and let your system reboot. Your system will then fsck all of your disks, and you can then shut down your system, confident that the dreaded "maximal mount count" message won't appear during that critical demo.

(?) If you want to live dangerously you can change the maximal mount count value on a filesystem using the 'tune2fs' command's -c option. You can also manually set the mount count using the -C (upper case) option. You can see the current values using a command like:

tune2fs -l /dev/hda1

(!) If you know that your system is fairly reliable --- you've been running it for a while and you're not seeing weird failures due to cheesy cheap memory or overly long IDE or SCSI cables, etc. --- it's actually not so dangerous to set a longer maximal mount count.
One approach if your system is constantly getting shut down and restarted is to set the filesystem so it uses the time the filesystem was last checked as a criterion instead of a maximal count. For example:
tune2fs -c 100 -i 3m /dev/hda1
This will cause the filesystem to be checked after 100 mounts, or 3 months, whichever comes first.

(?) (It should be safe to change some values when you have a filesystem mounted read-only; though it might be worth asking an expert, so I've copied Ted T'so and Remy Card on this message).

(!) Yes, it's safe to change these values if the filesystem is mounted read-only. If you're ***sure*** that the filesystem is quiescent, and nothing is changing on the filesystem, you can even get away with changing it while the filesystem is mounted read-write. It's not something I'd really recommend, but if you know what you're doing, you can get away with it. It really depends on how much you like working without a safety net.

(?) As far as I know there is no way in which this volume label is currently used. It seems to be a wholly optional feature; I guess we can use these to keep track of our removable media or something.

(!) You can use the volume label in your /etc/fstab if you like: For example:
LABEL=temp /tmp ext2 defaults 1 2
or
UUID=3a30d6b4-08a5-11d3-91c3-e1fc5550af17 /usr ext2 defaults 1 2
The advantage of doing this is that the filesystems are specified in a device independent way. So for example, if your SCSI chain gets reordered, the filesystems will get mounted correctly even though the device names may have changed.
- Ted

(?) Selecting a Lotus Notes Platform

From Rebecca Henderson on Mon, 11 Oct 1999

I have Lotus Notes on my box and it is in a UNIX environment. A question of platforms has come up: AS/400 vs. UNIX (Solaris). What do you feel would be easiest to support and maintain?

(!) I've never worked with an AS/400. I've heard that they are quite easy to administer though they have a somewhat limited range of utilities and applications. (Indeed it seems, from what I've heard, that the limited range of utilities and peripherals available for the 400 is what makes it possible for IBM to make a fully integrated menu-driven administration system that's usable by relatively unsophisticated operators).
It seems like an odd question to ask since you say that you already have Lotus Notes installed on an existing UNIX system. If your staff already has UNIX experience (and possibly a preference for it) or if you need the sort of flexibility and control that UNIX gives you (which leads to the "do-it-yourself" flavor of the administrative interfaces and tools) then UNIX would be the obvious choice.
UNIX is not the "easiest" in terms of learning curve. However it may be the "easiest" in terms of giving you the tools you need to suit your precise needs. In other words, if your needs exactly match the design of the OS/400 --- if they match the platform --- then it's likely to be the easiest. If you have needs for customized procedures or specialized hardware then UNIX is probably the easiest to adapt.
As with all questions that relate to selecting a platform (hardware or software) the answer always boils down to: "it depends on your requirements."
Unfortunately I see far too many cases where selections are made before requirements are understood, and where the selection process is constrained by organizational politics more than by requirements analysis.
The real job of a consultant is to assess and analyze your requirements and provide recommendations to match them. Too many people who use the term consultant are actually referring to "contractors" or "VARs"; contractors implement and VARs sell. Consultants provide consultation.

(?) QUESTION

From Rebecca Henderson on Mon, 11 Oct 1999

Thanks for your input. We are on Sun Solaris now and the application is quite stable. I guess it is a "politics" issue. At least from my point of view. Requirements for 3rd party software are an issue with UNIX and the Lotus Notes application. Your input has helped.
Sincerely,
Rebecca M Henderson
UNIX System Administrator.

(?) "telnetd connected:" But No "login" Prompt

From cbgyeh on Mon, 11 Oct 1999

Hi,

I would appreciate it if you could help me with a problem related to telnet on RedHat 6.0.

I recently configured RedHat 6.0. When I telnet to the server, I see the banner message. There is no login prompt. The /var/log/secure indicates telnetd connected from xxx.xxx.xxx.xxx. When I test the loopback, i.e. 127.0.0.1, telnetd works correctly. Ping and ftp work well. FTP has no delay at all.

I did not install any patches yet. Ching

(!) It sounds like a TCP Wrappers problem.
Linux systems normally have TCP Wrappers (tcpd) preconfigured to provide selective access control for all 'inetd'-launched services. You'll see this if you look in your /etc/inetd.conf: 'inetd' is configured to listen on the telnet service port (23, as listed in /etc/services), and when a connection arrives it finds 'tcpd' and runs that. Thus 'inetd' won't complain about a "program not found."
TCP Wrappers will log the connection attempt (under the service name). Then it will do a double-reverse lookup (taking the source IP address of the connection, getting a purported host/domain name, then doing a forward lookup of that to scan for the original source IP address). If those values are inconsistent it may just drop the connection, or it may continue as normal.
TCP Wrappers will then check the /etc/hosts.allow and the /etc/hosts.deny files. It will look for a line that applies to this service (in.telnetd) followed by a list of allowed (or denied) IP address or host/domain name patterns. The syntax of these files is described in the hosts_access man pages.
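A typical restrictive pair of files might look like this (the network prefix below is an example; substitute your own):

```
# /etc/hosts.allow -- permit telnet from localhost and the local LAN
in.telnetd: 127.0.0.1 192.168.1.
# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL
```

hosts.allow is consulted first, so the deny-everything rule only catches connections that no allow line matched.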
I've described TCP Wrappers and this double-reverse lookup before. If your reverse DNS zones aren't properly configured you may see very long delays on connections at this point (several minutes). Your test from localhost succeeds because you have localhost (127.0.0.1) listed in your /etc/hosts file, so the forward and reverse records will always be correct so long as the 'files' entry in your /etc/nsswitch.conf (/etc/host.conf for older libc5 packages) is properly maintained.
Usually your FTP daemon would also be protected this way. However, some newer Linux distributions use ProFTPd, which is often run "standalone" (not through the inetd dispatcher). ProFTPd has optional service access controls of its own (and might not be configured to do this "double-reverse lookup").
So, try adding the appropriate IP addresses and names to your /etc/hosts file, or get your hostmaster to configure your reverse zone maps properly. If that doesn't work, try using strace. To do that, replace the in.telnetd line in your /etc/inetd.conf file with something like:
telnet         stream  tcp     nowait  telnetd.telnetd /usr/sbin/tcpd  /root/bin/trace.telnetd
... (all on one line, of course --- if your mail reader wrapped it, DON'T put any line break in the inetd.conf file).
"/root/bin/trace.telnetd" is a shell script that looks like:
#!/bin/sh
exec strace -o /root/tmp/telnetd.trace /usr/sbin/in.telnetd
... that can give you a system call trace of what the telnet daemon is doing after it's launched. Of course you have to signal your inetd to re-read its configuration file using a command like:
kill -HUP $(cat /var/run/inetd.pid)
... in order for this change to take effect. (It would also take effect after a reboot, of course).
NOTE: I don't recommend that you run with this strace script during normal production use. It could be insecure and it's likely to be a bad idea in any event. However, it's useful for capturing some low level diagnostic data.
Reading strace output is challenging. However, you can usually get by okay by simply looking for failed open(), stat(), and lstat() calls.
If that line doesn't work (you don't get any telnetd.trace output) try:
telnet         stream  tcp     nowait  telnetd.telnetd         /root/bin/trace.telnetd         telnetd
... (all on one line, as before). In this case we are eliminating tcpd from the picture. The confusing part about the inetd.conf file syntax is that you seem to repeat the name of the program you're running twice on each service line. The first reference is the program that will be run, the next is the name under which it will appear in a 'ps' (process status) listing, and anything else on the line will be passed as command line arguments to the daemon.
This ability to separately supply an executable path/name and a full argument list, including "arg(0)" --- the 'ps' name --- is normal for UNIX and Linux; it's just not something you'd see from using the command shell. The shell (and 'init') always start programs using the same value for both the executable path and arg(0).
I doubt you'll have to go to that level of debugging for this. I'm just describing the technique (again) for other readers and in case you do need it.

(?) Ying at New York: Re: RedHat 6.0:Telnet has no login prompt

From chgyeh on Fri, 15 Oct 1999

Jim,
Thank you so much for showing me strace command. I was able to look at the trace file and determined my nsswitch was not correct. It was hanging at the nis+ which I did not configure to use. Again, thank you for your help.
Ching

(?) Staging Server on localhost

From Larry on Wed, 13 Oct 1999

Your answer to Mark was accurate as it stood, but you left out a second, possibly better, solution: put Linux on the Mac! Instead of buying a basic PC, save the money and use one computer for two purposes.

I have an older PowerMac running a webserver (Apache under MkLinux to be precise) on our corporate intranet, and it does a fine job. I also have a dual-booting G3 at home, plus an older system dedicated to NetBSD.

Back to the staging server -- if Mark were to simply use the Mac (running Linux) to do the testing, he could pull up a browser under X11 and make all his calls to <http://localhost>.

Just another idea. Larry

(!) You are correct. He could run Linux on his Mac. He didn't specify which platform his Mac was (PPC or older 68K) but there are versions of Linux for both. For PowerPC there are a number of distributions available: LinuxPPC, mkLinux, Yellow Dog (YDL), and Debian, to name a few.
For the 68K I think Debian might be the only full distribution around.
Of course it might be that Mark actually likes MacOS, or some of the tools that currently run under it and haven't been ported to Linux. It's also possible that some of the testing that he wants to do is not feasible through the loopback interface.
However, it's worth mentioning and remembering.

(?) A Staging Server

From anonymous on Fri, 15 Oct 1999

You are correct. He could run Linux on his Mac. ...

For the 68K I think Debian might be the only full distribution around.

There's also NetBSD/mac68k, which I hesitate to mention because this is a Linux publication. :-) But from personal experience, it's a good choice for a 68K system.

Larry

(!) You're right about that, too. NetBSD support on several platforms pre-dates Linux by a few years.
I wouldn't hesitate to mention it because of this being a Linux venue. I've recommended *BSD to Linux users before and I will again. FreeBSD is a slick system. It's well integrated and robust. I've used it and I suggest that every serious Linux enthusiast try FreeBSD or {Net,Open}BSD at least once.
Of course I prefer Linux. But my preference is based on personal experience, and relatively minor issues.

(?) Mounting CDs on IDE CDRW Drives

From Anthony Dearson on Thu, 23 Sep 1999

You had a question about mounting CDRW drives when using SCSI emulation. I found that using /dev/scdX allowed mounting the CDRW. /dev/cdrom can be linked to that instead of /dev/hdX. -- Tony Dearson

(!) Ironically I guessed that and gained some first hand experience about a week ago when I bought and installed one of the silly things into my workstation. I also found out about the need to append hdc=ide-scsi on my shiny new 2.0.38 kernel to get the SCSI emulation working. I also found out that my normal CD-ROM drive (/dev/hdb) can't read CDRW media (though it seems to do O.K. with the CDR's that I burn in the same drive).
Thanks for confirming my guess.
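For anyone else setting this up: the append option can be made permanent in /etc/lilo.conf rather than typed at every boot prompt. A config sketch (the image, label, and root values are placeholders for your own setup; re-run /sbin/lilo after editing):

```
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    append="hdc=ide-scsi"
```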

(?) More CDRW

From David G. Watson on Thu, 23 Sep 1999

I have an IDE CD-RW drive, and it's just a little bit tricky getting it to work. The guy who you responded to in TAG is very close - he's just looking at the wrong device. The one he really wants is /dev/scd0. I haven't managed to get all CD playing software to recognize this when I'm not root, but most X CD players can (xplaycd, workman, synaesthesia, kscd, etc.).

Hope this helps, -Dave Watson

(!) Yep! I got one and had to learn a bit about it myself.

(?) Even more on CDRW

From Lance DeVooght on Thu, 23 Sep 1999

James, Regarding,

"Reading CD Discs on an IDE CDR Drive From balou nguyen on Wed, 14 Jul 1999 "

He needs to use: mount -t iso9660 /dev/scd0 /mnt/cdrom

if his CD-ROM drive is the master on the secondary IDE controller.

I just had this same problem on my system. It seems that after enabling SCSI emulation, the CD-ROM drives are seen as SCSI devices.

So, the slave drive on the secondary IDE controller is /dev/scd1.

You have probably already found this out but I couldn't resist trying to help the "Answer Guy" for a change.

Thanks for a great column!

Another fan, Lance DeVooght

(!) Obviously the CDRW medium has arrived! I've gotten more clarifications on that point than I can remember getting on any other.

"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to


New Tips:


2 Cent Tricks & Tips - Mounting a zip drive

Sat, 30 Oct 1999 21:48:16 -0700
From: David <

Well, this topic has been covered a number of times, but one minor nit from your "Mounting a zip disk" in issue #47 - the printed tip reads:

1) Compile in IDE Floppy support in the kernel - there is no need for scsi emulation unless you want auto-eject support. Also remember to compile in support for the filesystems you wish to have on your zip disks.

Under Debian potato, kernel 2.2.13, eject version 2.0.2, I am able to eject the zip disk from the ATAPI zip drive without using ide-scsi emulation. I've been able to eject it this way at least since 2.2.5 or so, I believe, though I'm not sure quite when I found this out. I didn't have the zip when I ran 2.0.x kernels, so ide-scsi might've been needed there, but not anymore, it seems.

David


atapi zip drive comment

Tue, 02 Nov 1999 11:36:25 -0600
From: Draper7 <

Hello and how are you? I just wanted to say thanks for the help with my linux problems and make one small comment. In the documentation on the atapi zip drive I think that a lot of newbies might find it helpful if you added how to format a zip disk for the dos filesystem.

/sbin/mkdosfs  ..........
thanks again for the help documentation!!
Jeremy


Toshiba Cyber 9525 video chipset

Sat, 13 Nov 1999 01:13:22 -0500
From: Cliff Miller <

Re: (LG 43, mailbag...)

From: ANTONIO SORIA ([email protected])

to buy a Toshiba Satellite S4030CDS which comes with the Trident Cyber 9525 video card...

...a good resource is www.741systems.com/linux/2595XDVD-install.html


Sorting the lines in a file

Fri, 12 Nov 1999 01:37:22 -0500 (EST)
From: Mike Smith <
I have unsubscribed that address, run "uniq" on the mailing list to remove any other duplicates, and unsubscribed all other addresses with

Just in case you forgot (we all do sometimes), uniq presupposes that the stream it is processing is already sorted. `sort -u' will screen out duplicates, too.
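A quick illustration of the difference (the file name and contents here are made up):

```shell
# uniq only removes *adjacent* duplicate lines, so the input must be
# sorted first; sort -u does both jobs in one pass.
printf 'beta\nalpha\nbeta\nalpha\n' > /tmp/mlist.txt
sort /tmp/mlist.txt | uniq     # prints: alpha, beta
sort -u /tmp/mlist.txt         # same result, one command
```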


Winmotherboard

Sun, 14 Nov 1999 18:41:26 -0500
From: Pierre Abbat <

You've heard of Winmodems, now there are Winmotherboards.

I bought a Shuttle Spacewalker HOT-591P to upgrade an existing system. I put a Pentium in it, tweaked the speed switches until it came up, attached the hard drives, turned it on again, and set the hard drive size. I got an error message:

No [active partition] found DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER

This is the first time I have seen a motherboard *require* that relic of the DOS age, the bootable partition.

Unlike the Winmodem, there is a way around this. Run cfdisk and set one of the partitions on hda as the active partition.

phma


Followup to Running UNIX At Home

Mon, 22 Nov 1999 15:38:59 -0500
From: Rob Reid <

From: Javier López Pérez <:

Hello Mr. Reid:

In your article "Running UNIX At Home" in LinuxGazette #47 you wrote:

None of my cron jobs, like updating locate's database and trimming the log files, were being done since the computer was hardly ever on in the wee hours of the morning, the time chosen by the distributions (Slackware, then Red Hat 3.0.3, then 5.1) for housecleaning.

Like most home users, I have also stumbled across cron not running the programs it should because of my computer being off. Although the script you provide looks great, I was wondering if you know that there already are, to my knowledge, two programs that resolve this very same issue: anacron and hc-cron (sorry, I don't have a URL to give, but I bet that a search on http://freshmeat.net would be useful.)

In the same article you also mention that to change runlevels you stop/start starter scripts by hand [...]

You can do this in a more convenient manner by using /sbin/telinit, giving it the runlevel you want to change to. To change from any runlevel to runlevel 3 you would just type (as root, of course): /sbin/telinit 3 and that is all. As far as I know, all your programs will be sent SIGTERM or SIGKILL by doing this, while starter scripts will be called to start/stop services as needed.

I hope this information is of any help to you. I also deeply hope not to have made any big mistake in what I have said.

Best wishes.

Rob Reid responds:

----- Forwarded message from Howard Shaw <> -----

First, with regard to your cron problems, you might consider using hc-cron. Here is Freshmeat's description of it...

hc-cron will remember the time when it was shut down and catch up jobs that have occurred during down time when it is started again.

----- Forwarded message from Duckie <> -----

Debian installation offers "profiles" nowadays. The "Personal workstation" profile includes anacron.

That's great, but groundskeeper will make sure that the commands are run at *convenient times for the users* as closely to the specified intervals as machine uptime permits. In other words, it uses batch and the others don't.

You have a very good point here. I filed it as a wishlist item for anacron on Debian's bugtracking system, from where it'll be forwarded to whoever maintains the program itself.

Thanks,
Arjan Drieman

----- Forwarded message from Howard Shaw <> -----

Second, with regard to your runlevel usage, while you can write scripts for starting your networking mechanisms, and there are valid reasons for doing so, you can also change your runlevel without rebooting by executing, as root, 'init X' where X is the runlevel. So you can start at runlevel 4, then use init 3 to drop to runlevel 3 and start those scripts, then use either init 0 to shutdown, or init 6 to restart your system.

This was the other thing people wrote to remind me: use init (or telinit) to switch runlevels without rebooting. It's very useful; in going from 4 (no internet) to 3 (internet) it stops and restarts some services that look like they could have been left alone, but it does the job in a one line command!

Thanks for everyone's feedback.


FYI: PS/2 Port Problem In 2.2 Kernel

Mon, 22 Nov 1999 16:12:20 -0800
From: Bovy, Steve <

In the past few months I have attempted to install RedHat 6.0 Caldera 2.2 and Caldera 2.3 on my Compaq Presario 5630 Pentium Computer.

What I discovered is that regardless of which distro, or which mouse I use, any attempt to install with a mouse attached to the ps/2 port always fails with a "frozen" machine.

Caldera support and the Caldera Knowledge base has finally acknowledged that this problem is "real", and that they are looking into it.


ANSWER: Filename extensions for web program listings ...

Sun, 21 Nov 1999 22:41:10 -0500
From: Jeff Rose <

I'm enjoying reading this issue of LG on my Palm Vx after downloading your text version, then using a small conversion utility to convert it into PDB format.

The Linux Gazette Editor wrote:

Do you mean PDF? If so, which program do you use?

Jeff clarified:

I meant *.pdb :) format using the attached 'txt2pdbdoc' utility. GNU ZDoc, Aportisdoc, and other utilities convert quick and hotsync quick for a nice 'mobile Linux-related Library'! ;-)

[txt2pdbdoc is at http://homepage.mac.com/pauljlucas/software.html. -Ed.]

To make things easier, I made a symbolic link from txt2pdbdoc to 2pdb, so I can just type

$ 2pdb {filename.you.want.2.show.on.pda} {filename.original}
{outputfilename.pdb}
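The symbolic link can be created with something like this sketch ('command -v' locates txt2pdbdoc on your PATH; the fallback path is only a guess for where you might have installed it):

```shell
mkdir -p "$HOME/bin"
# Link the long-named utility to a shorter alias in your personal bin.
# The fallback location after || is an assumption -- substitute your own.
ln -sf "$(command -v txt2pdbdoc || echo /usr/local/bin/txt2pdbdoc)" "$HOME/bin/2pdb"
```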


Tips in the following section are answers to questions printed in the Mail Bag column of previous issues.


ANSWER: Terminal emulators

Sat, 30 Oct 1999 16:00:13 -0400
From: Michael Kohne <

If what you want is a telnet program for Windows that's better than Microsoft's, check http://www.geocities.com/SiliconValley/Network/1027/ -- Kevterm is a very simple vt100-emulating telnet client. Its biggest features are that it's small, and it works. I'm personally a big fan of low feature count, but fully working software, and Kevterm is that in spades. It does everything I need.

It's not very fancy, but I've found it to be very useful over time.

From: Richard Cohen <:

http://tucows.mirror.ac.uk/term95.html (or the same page at any more local TUCOWS mirror) contains a list of telnet clients for Win95/98, including some freeware and lots of shareware. I seem to remember having heard good things in the past about Tera Term Pro (which is free and has a SSH plug-in available, if you care about that).

From: Jonathan Hutchins <:

Look for a program called "TerraTerm" on your favorite shareware site. There are modules for SSH connections available.

Good terminal emulators can cost $250 per workstation, and can eat up a lot of resources for graphic and keyboard mapping (IBM's Personal Communications Suite, Reflections). TerraTerm will do 95% of what you want with a reasonable footprint similar to Microsoft's Telnet.

From: Pierre Abbat <:

I have a linux shell from my Win98 machine via a terminal login. I am presently using telnet to do this, however this causes profound graphical errors, no color, and other problems. I am looking for a better terminal. Any suggestions?

Try TeraTerm. I got mine from srp.stanford.edu; it's a doctored version that supports SRP authentication, though it doesn't encrypt as the SRP telnet for Linux does. TeraTerm does colors, but the keyboard isn't perfect, and I have to use u and d in less instead of pgup and pgdn.

phma

From: Charles Hethcoat <:

If you are looking for a better telnet for a Windows machine, you might check out Tera Term Pro:

[email protected]

It is much nicer than the plain jane Telnet that comes with Windows.

I am aware of no totally free X servers that run under Windows. If you will settle for a nonfree but cheap Windows X server, try the MicroImages X server 2.0 (or latest) from www.microimages.com. It's pretty plain, but works fine for connecting to Linux X clients. And it's pretty inexpensive, too.

Charles Hethcoat


ANSWER: LS120

Sun, 31 Oct 1999 04:25:32 +0100
From: Ghlenn Willard <

Ghlenn Willard ([email protected]):

I would like to know exactly what I need to put into /etc/fstab to get OpenLinux 2.2 to see the LS120 drive. My system has the 1.44 floppy drive at fd0, the hard drive at hda and hda1, and the cdrom at hdc. I tried the approach Caldera suggested for the owner of OpenLinux 1.3; it didn't work, or I messed up, which is possible since I am still a greenhorn at this.

I have two HDs, hda and hdc (IDE Primary 1, IDE Secondary 1), a CD-ROM, hdb, (IDE Primary 2), and an LS120 as hdd (IDE Secondary 2), and it works well with this /etc/fstab:

/dev/hdb    /cdrom    iso9660    ro,noauto,user    0    0

#  This is for "DOS"-formatted disks - gets mounted with  mount /B
/dev/hdd    /B              vfat          noauto,posix,user    0    0

# This is for ext2-formatted disks - gets mounted with  mount /E
/dev/hdd    /E               ext2         noauto,user   0    0

Needless to say: the mounting points /B and /E must exist - beforehand.


ANSWER: Lilo gone

Sun, 31 Oct 1999 11:16:28 +0100
From: Ivo Naninck <

My wife ran Norton antivirus and detected that the MBR was changed. She checked the 'Repair' box. Now my LILO is gone. How do I install it back into the MBR?

Boot your rescue floppy and mount your root filesystem on, for example, /mnt. Then run /mnt/sbin/lilo and that should fix it. Then remove any software from your other OS(?) that is capable of mucking around with critical parts of the system.

From: Tomislav Filipcic <:

This can be fixed easily. Get a boot floppy (or get a friend to make one for you) and use it to boot Linux. When you get to the prompt just type "lilo" and the MBR will be fixed.

From: Zon Hisham Bin Zainal Abidin <:

Thank you everybody for helping me out with the LILO issue.

Think I am gonna luv this Linux thingy. And the amount of support that I have received...it's just fascinating. Keep up with the good work to help newbies like me...so that I am able to help newer newbies in the future.

The latest problem that I face at the moment is the task of removing the largest virus partition on my PC...that is the DOS partition :)


ANSWER: Dialling up my ISP (Freeserve)

Mon, 01 Nov 1999 12:14:10 +0000
From: Maxwell Lock <

Hi there,

To connect to Freeserve using RedHat, follow the instructions in the freeserve HOWTO:

http://www.damtp.cam.ac.uk/user/ig206/freeserve/

and

http://www.scottish.lug.org.uk/freeserve.html

-Cheers, Max.

From: Steve Phipps <:

I use Freeserve with Red Hat Linux and find that it works perfectly well.

I have no idea what the problem is in your case, so I'll go through my configuration in detail. Hopefully this will enable you to find the fault - you can always copy my setup if all else fails!

First of all, ensure that the file /etc/hosts.allow contains the line

ALL: LOCAL

, that /etc/hosts.deny contains the line

ALL: ALL

, that /etc/resolv.conf contains the lines

search .
nameserver 195.92.195.94

(the number is the IP address of Freeserve's nameserver) and that
/etc/hosts contains the line

127.0.0.1   localhost

Next you need a chat script, which should be placed in the file
/etc/ppp/chatscript. Mine, which is fairly basic, is as follows:

# Set abort conditions
ABORT 'NO CARRIER'
ABORT BUSY
ABORT 'NO DIALTONE'

# Set a nice long timeout because Freeserve can be a bit slow
TIMEOUT 120

# Reset modem
'' ATZ

# Dial (0845) 079 6699
SAY 'Dialling Freeserve...\n'
OK ATDT08450796699

# Log in to remote machine
CONNECT ''
SAY 'Connection established, logging in...\n'
ogin: <username>
word: <password>

# Log in complete
SAY 'Log in successful...\n'

Obviously, replace <username> and <password> with the appropriate values for your account. See man chat if you want to write your own script.

Finally, you need to initiate PPP. I have a file called dialup in my home directory which simply contains the command

exec pppd connect 'chat -v -f /etc/ppp/chatscript' -detach crtscts \
  modem defaultroute /dev/ttyS1 38400

See man pppd if you want to know what's going on here. To connect to the Internet, make sure that you're in your home directory and enter

source dialup

(you'll have to su first). You should now be connected! Once you're finished, just hit Ctrl-c to terminate the connection.

This should be enough to connect to Freeserve from a stand-alone terminal. I'm assuming that PPP is installed on your machine and that your modem is connected to the second serial port (/dev/ttyS1, or COM2 under DOS).

If this still doesn't work, look in /var/log/messages. That should tell you why the connection failed.

Good luck!


ANSWER: At-command error message

Mon, 01 Nov 1999 12:14:10 +0000
From: Buz Cory <

On Tue, 28 Sep 1999 03:51:36 +0000, Ben wrote regarding "AT-command error message ":

Whenever I try to run "at" I get an error message, like so:
    root@benzz:> at 10:15 command
    Only UTC Timezone is supported. Last token seen: command
    Garbled time
    
This is actual output. My system _is_ on UTC timezone,

I think that the reference to "UTC" in the error message is to the command line, not what your system clock is running.

AFAIK, at uses *local* time unless you specify "UTC", and seems not to accept any other time zones. It seems in your example to be trying to interpret "command" as a time zone.

the at man-page didn't help a bit.

I suggest that you read the man page again.

If you don't know what "standard input" is, this could be the problem.

From the man page: "at and batch read commands from standard input or a specified file which are to be executed at a later time, using /bin/sh."

Someone suggested that I should write a file:
echo command> file at 10:15 cat < file

This is almost correct. Any of

         echo command > file ; at 10:15 cat < file
         echo command > file && at 10:15 cat < file
or
         echo command > file
         at 10:15 cat < file
would work. (These are two separate commands, and must be seen by the shell as such.)

but that wouldn't help, as "at" is still in there, and it's "at" making trouble.

Not exactly, it's misuse of "at" that is making the trouble.

All "at" expects on its command line is the time and in your example it is trying to interpret what follows the time as part of the time.

The examples above avoid this, but they are kludgey.

If you wish to run the *same* command from "at", then make it a shell script and use the "-f" option, eg:

        at -f my-script noon
or
        at noon < my-script

Otherwise, just use a pipe:

        echo command | at noon

Granted, this is non-intuitive, but it is the way "at" has worked for over 20 yrs.

Does anybody know what I'm doing wrong? Or just another way to schedule tasks? I'm getting desperate now...

There is also another way to schedule tasks. "at" is for once-only tasks. "cron" is for periodic tasks and works a whole different way.

Hope this helps,
== Buz :)

From: Ben <:

All "at" expects on its command line is the time and in your example it is trying to interpret what follows the time as part of the time.

This was the exact piece of information I was looking for. I didn't realize "at" is interactive, and if anyone would have told me before, or if this information was obvious from the man-page, I would have had my brain around the concept of 'standard input' a lot sooner.

Thanks again, Buz.

From: Buz:

This was the exact piece of information I was looking for.

Good, but the following indicates you still don't quite get it.

I didn't realize "at" is interactive,

It isn't, exactly, though it can be used that way. It is mostly intended to be used as the end of a pipeline.

and if anyone would have told me before,

That's why I stuck in my 2 cents.

or if this information was obvious from the man-page,

It is (or should be), but only if you are already familiar with the concept and workings of Standard Input.

I would have had my brain around the concept of 'standard input' a lot sooner.

As I mentioned above, you still don't seem to grok it in fullness. It is intimately related to concepts like "pipes", "pipelines", "filters", and "redirection".

M$-DOS had these concepts too, but they don't work as well there as in Un*x, DOS not being multi-tasking. Hardly anyone used them or even knew about them.

Now that I think about it, I have not seen a good discussion of these for a long time (10 yrs or more?). It seems I might have to write one, but I don't want to put it in the body of this mail.

You might go to "UnixHelp for Users" at unixhelp.ed.ac.uk (please use one of their many mirrors). This site provides the best intro to using Un*x that I have seen online, but far from the best possible. In particular, their glossary leaves much to be desired.

Or the Unix Version 7 Manuals at plan9.bell-labs.com/7thEdMan/index.html. This will take some work to make readable, but you will learn quite a bit in just making it readable. The documents here were all written by the *original* Unix gurus at AT&T about 1979. Not everything here is still relevant, but most is. I plan to eventually have all this online as HTML, but probably not till next year sometime.

There are several HardCopy (book) resources available, but this email is getting far too long.

[...]

You should try "man stdio" and then "man stdin" for a complete and correct (if not terribly clear) description of the meaning and purpose of "standard input", etc from the program's point of view. This is written for "C" programmers, but most of it applies for any language.

I shall continue working on getting a good, clear definition that is independent of OS and programming language.

You might also try the section on "Text_IO" in the Ada definition, the same concepts apply.


ANSWER: Connecting a Linux PC to an ADSL modem

Mon, 1 Nov 1999 09:45:15 -0600
From: Jonathan Hutchins <

Take a look at some of the Firewall/Router/IPForwarding HOWTO's.

Basically what you need is to connect one PC running Linux to the ADSL connection, then connect the other PC's to an ethernet segment that also includes the connected PC. Run IPChains/IPMasq on the connected station, and you have a firewalled router that connects your private LAN to the internet using seamless TCP/IP.
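As a sketch of what that looks like on a 2.2-era kernel with ipchains (the private network 192.168.1.0/24 here is an assumption; see the IP-Masquerade HOWTO for the full details):

```shell
# Enable forwarding, then masquerade everything from the private LAN.
# 192.168.1.0/24 is an assumed private network; adjust to match yours.
echo 1 > /proc/sys/net/ipv4/ip_forward
ipchains -P forward DENY                        # refuse to forward by default
ipchains -A forward -s 192.168.1.0/24 -j MASQ   # except masqueraded LAN traffic
```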


ANSWER: Is SMP worth it?

Mon, 1 Nov 1999 09:39:03 -0600
From: "Jonathan Hutchins" <>

I haven't seen any benchmarks for Linux SMP, but generally you not only need the SMP kernel, you need code that is optimized for the number of processors you are running.

The rule of thumb I know is from the NT universe, building and selling servers, and that is that a second CPU results in about a 40% gain in throughput. Be sure to compare the cost of a second CPU and the dual socket motherboard against a 40% faster CPU (or maybe an Alpha?).

You don't indicate how long these modeling sessions run, but if they're real clock hogs you might consider something like a Beowulf array using less-than-state-of-the-art CPU's.


ANSWER: Lexmark printer drivers, and Zoom modem

Mon, 1 Nov 1999 09:45:15 -0600
From: Jonathan Hutchins <

Don't know about your Zoom modem - presumably you know enough to avoid Winmodems and Plug-and-Pray devices. Could be that something like a NIC is grabbing the port - Linux brings up the NIC before the serial port, so where in DOS you get a modem but the NIC fails, in Linux you get the NIC but no modem.

As far as your Lexmark goes, 1) Hassle IBM to provide drivers, and 2) Set it to emulate an HP printer and use the driver for that.


ANSWER: Linux classes

Mon, 1 Nov 1999 09:45:15 -0600
From: Jonathan Hutchins <
My question to all of you in the industry is this: What parts of Linux, and the networking of same, are most important to you? Should there be more concentration in TCP/IP fundamentals (which I have included), specific Linux/*ix-based programs (KDE, Gnome, Apache), or which? What is it that you most desire in an entry-level (or not-so-entry-level) employee candidate?

I think you should leave the TCP/IP stuff for a Networking class - it's pretty independent of LINUX, and spending a lot of time on it would shortchange the Linux specific material.

I think there are two basic categories to Linux - First, setting a system up, getting it running correctly and getting software installed for whatever end-use is in mind. Second, ongoing administration - program updates, troubleshooting, setting changes, maintaining things from routing tables to firewall patches.

I would look at the Microsoft "Installing and Configuring" classes for the first level. Start with planning: hardware selection and compatibility, selecting network topology and protocol, determining what services will run, etc. Work with such indefinite cases as how to configure Xwindows for something like the Asus SP97-V, which isn't correctly supported by the install scripts. Work on building install scripts for the various distribs, like Linux and Caldera, that allow you to build your own. Mention common weirdnesses - like the fact that Linux brings up network cards first, then serial ports, which is the opposite of DOS and means that in a conflict it will be the serial port instead of the NIC that fails.

Ongoing admin can be anything from finding module and library dependencies to compiling custom kernels. How to keep the patches and updates current without scrambling the system with multiple installations of common libraries. Strategies for testing updates before rolling them out to production.

The first class would be most of what someone would need to maintain a single-user workstation or a home router/server. Some topics would cross over - installing and updating new Xwindows software would be of concern to the person setting up a new system and adding productivity software, to the individual maintaining a single-user workstation, and to the system administrator rolling out a new WordPerfect to a 100-user office. Probably the basics would be covered in the first class, and the strategies for a large rollout in the second.

Anyway, there's what I think. Let us know what you end up with.


ANSWER: Diamond A50

Fri, 19 Nov 1999 17:19:13 -0500
From: Anthony J Placilla <

The Diamond A50 will work with a little configuration. You need to modify /etc/X11/XF86Config. Go to the "Graphics Device" section. You'll see a stanza that starts with "Device configured by Xconfigurator". Underneath the "Boardname" line, add the following 3 lines:

option "no_bitblt"
option "no_imageblt"
option "sw_cursor"

The video RAM line will probably be commented out. Remove the # in front of it, then save & exit the file.

startx

Have fun


ANSWER: Telnet Trouble

04 Nov 1999 14:03:58 -0600
From: Omegaman <

Jim, The user shouldn't need to disable TCP wrappers. I have left both messages in full for my explanation.

--Sat, 25 Sep 1999 01:28:37 -0700 From: Jim Dennis ([email protected])

Dear Jim

Your email did help me to solve the problem with the telnet in linux. It works fine now. Thanks a million..... I have a small doubt. Let me explain...... My network has a NT server, LINUX server and 20 windows 95 clients. I followed your instructions and added the address of all the clients into the /etc/hosts file on the LINUX machine and voila the telnet worked immediately. But the NT server was the one who was running a DHCP server and dynamically allocating the addresses to the clients. The clients were configured to use DHCP and were not statically given and ip addresses. I managed to see the current DHCP allocation for each client and add those address into the /etc/hosts file on the LINUX server but my doubt is what happens when the DHCP address for the client changes? Then again we'll have to change the address in the /etc/hosts file right? This seems silly. Is there anyway to make the LINUX hosts file to automatically pick up the DHCP address from the NT server? Also another important thing is I am still unable to ping from the NT server to the LINUX server using the name. It works only with the IP address. Is there any way to make the NT DHCP to recognize the LINUX server?
Well, either you shouldn't use dynamic addressing (DHCP) or you should use dynamic DNS. You could also disable TCP Wrappers (edit your /etc/inetd.conf to change lines like:
telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
... to look more like:
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd

There's no need to do this. hosts.allow and hosts.deny allow network/netmask specifications. So let's say our user's DHCP assigns from the simple class C 192.168.0.1 - 192.168.0.255. In hosts.allow we can then put:

in.telnetd: 192.168.0.

-- OR --

in.telnetd: 192.168.0.0/255.255.255.0

you can also allow/deny based on host or domain name:

in.telnetd: .domain.com

Then you won't need mappings in /etc/hosts for the current DHCP assigned address. You may want to do dynamic DNS if you need the hostnames of the windows workstations. Then point the linux box's resolv.conf at your NT server with the DNS mappings. Or, better still, make the linux box your DHCP server/DNS server and use BIND 8's dynamic DNS features.

Windows also has a HOSTS file with a format identical to /etc/hosts in the windows or winnt directory. You'll find the entry for localhost already in it. You can add the linux box's IP there for name resolution.

(and comment out all of the services you don't need while you're at it).

Definitely a good idea.


ANSWER: DHCP and Dynamic DNS

Thu, 4 Nov 1999 15:17:38 -0600
From: Jonathan Hutchins <

Jim Dennis says:

"But the NT server was the one who was running a DHCP server and dynamically allocating the addresses to the clients. The clients were configured to use DHCP and were not statically given and ip addresses. I managed to see the current DHCP allocation for each client and add those address into the /etc/hosts file on the LINUX server but my doubt is what happens when the DHCP address for the client changes? Then again we'll have to change the address in the /etc/hosts file right? This seems silly. Is there anyway to make the LINUX hosts file to automatically pick up the DHCP address from the NT server?

Another important thing is that I am still unable to ping from the NT server to the Linux server using the name. It works only with the IP address. Is there any way to make the NT DHCP server recognize the Linux server? "

Microsoft's answer to this problem is to run the Windows Internet Name Service - WINS. This provides NetBIOS name resolution for dynamic addresses as assigned by Windows NT DHCP. The Windows NT DNS system will also let you provide Internet name resolution for dynamic addresses. Since you're mostly running Windows 95 clients, these clients can be pointed to the WINS server via the variables on the DHCP server, and will be able to resolve addresses for TELNET and other services. (Be sure to take advantage of the WINS node-type configuration in DHCP too - it reduces broadcast traffic for name resolution.) If you need non-Windows applications or clients to resolve names, you'll need the DNS system as well.

Another solution is to go into the DHCP configuration and make each IP address assignment a "reserved" address - reserved for the MAC address of a given machine. That way, you have the advantages of centralised administration of Internet parameters such as gateway and DNS servers without having the IP addresses change unless you tell them to.

Finally, the Microsoft implementation of the LMHOSTS and HOSTS files allows using an "INCLUDE" statement to link to a central address file (on the server, for instance). This lets you use these files instead of WINS and DNS (respectively), but maintain the tables in a single, central file.
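A minimal sketch of such an LMHOSTS file, assuming a hypothetical server name, address and UNC path (the #PRE tag preloads the entry into the NetBIOS name cache, and #INCLUDE pulls in the central file):

```
# Hypothetical LMHOSTS excerpt; address, name and UNC path are examples.
192.168.0.2   ntserver   #PRE
#INCLUDE \\ntserver\public\lmhosts
```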


ANSWER: i740 AGP

Thu, 4 Nov 1999 15:08:46 -0500 (EST)
From: Gleef <

Hakon Andersson wrote:

I wish to run my i740 AGP under Linux. I am a Linux newbie though. I was wondering if you could tell me, or direct me onto some resources on how to setup my i740, or which server to install during installation. I am installing Redhat5

I currently have an i740 AGP system running under Linux. The problem is that, back when Red Hat version 5 came out, Intel was refusing to release information about that video chipset. There was a binary-only X Server, but it was poor. Since then, Intel apparently opened up its NDA (with Red Hat's urging, if I recall), and the source code for that X Server was released. Since then, the source code has been cleaned up, and incorporated into the standard SVGA server for XFree86.

The i740 works very well with the XFree86 SVGA server, but only in versions 3.3.4 and later. Red Hat 5 has a much earlier version, but it probably can be upgraded with a little effort. You're probably better off just using a more recent distribution, such as Red Hat 6.1.

Best of Luck, -Gleef


ANSWER: 3-button mouse on X Window System

Thu, 4 Nov 1999 21:27:38 -0600
From: Alan Wormser <

Angelo:

I use Red Hat 5.1, so your subdirectories may be a little different, but here is a solution:

1. Log in as "root" so you can modify the configuration files for the X Window System. (As an alternative, you could log in as a regular user and use the "su" command to get root access.)

2. Edit the file called "XF86Config" in the "/etc/X11" directory. This is a regular text file that has all your X settings for mouse, keyboard, monitor, and video card.

3. Find the "Pointer Section" which is about on line 125 (out of 383 lines in my copy of the file).

4. Find the line that says, "# Emulate3Buttons", and remove the "#" sign. The "#" indicates a comment and removing it turns the comment into a command.

5. Save and exit and log back in as a regular user (never play around as "root" -- it's too dangerous).

I think that will fix xstart for you! Good luck!

Alan Wormser Austin, TX

From: <:

Dear Angelo,

In Linux Gazette nr. 47, you write:

Can anybody help me with this simple (I guess) problem? My three-button mouse works very fine on the console, but it doesn't when I "startx". What's going on? How can I solve this problem and start using the middle mouse button under X? Any suggestion will be appreciated.

What's going on is that your gpm (console mouse daemon) is probably set up differently from the X Window System.

Look for the file XF86Config (probably in /etc/X11/)

Look for the "Pointer" section, and change the protocol from Microsoft to MouseSystems, as shown below:

Section "Pointer"
#    Protocol   "Microsoft"
    Protocol  "MouseSystems"
    Device    "/dev/mouse"
EndSection

That should do the trick.

From: Gerard Beekmans <:

From: [email protected]:

Can anybody help me with this simple (I guess) problem? My three-button mouse works very fine on the console, but it doesn't when I "startx". What's going on? How can I solve this problem and start using the middle mouse button under X? Any suggestion will be appreciated.

Start by opening your XF86Config file. It's under one of these places (under normal circumstances):
/etc/XF86Config
<XRoot>/lib/X11/XF86Config.hostname
<XRoot>/lib/X11/XF86Config

Look for the section "Pointer"

See if you have these (or similar) lines:
Emulate3Buttons
Emulate3Timeout 50

If you have them, comment them out (by putting #'s in front of them). If you don't have them, you can try adding them.

Exit X and restart it (and cross your fingers ;)

If this doesn't work, could you tell me what kind of mouse (brand, and connection, such as serial or PS/2) you're using? And could you send me your XF86Config file too?

From: Joachim Noffke <:

The mouse configurations for the console and for X are independent of each other. You have probably configured X with a different mouse type that doesn't support three buttons. If your mouse works on the text console, try using the same settings for X.

To do this, check your "gpm" command in /etc/rc.d/rc.local (that's the Slackware location; the filename may vary depending on your distribution) to find out which mouse type is being used there (argument "-t"). Reconfigure the mouse under X accordingly (i.e., rerun XF86Setup, xf86config, or whatever you're using).


ANSWER: Dual PIII Xeon performance

Sat, 6 Nov 1999 16:00:01 +0100 (CET)
From: <

Dear Nick,

In issue 47 of the Linux Gazette, you wrote

I do some intensive (multi-week runs) ocean modeling on my Dell 610 w/ a PIII 500 MHz Xeon. I am having a hard time finding out whether a second PIII will improve the speed of a single process, or only of multiple processes. Either way would help, but it would be nice to know before laying out the $.

A second processor cannot really enhance the speed of a single process, other than by distributing all the processes on your PC over two processors. Remember that Linux is a multiuser/multitasking system. At any time several processes are waiting for processor time.

If your application is capable of running multiple instances of itself, working together on the same data-set, you would see the largest speed increase.

Remember that you need a kernel compiled for SMP to be able to use the second processor. AFAIK, an SMP-able kernel doesn't work with a single processor.

Other things you could try to make your process run faster are:


ANSWER: epson 800 printer driver

Sat, 6 Nov 1999 16:16:52 +0100 (CET)
From: <

Dear Linda,

In the Linux Gazette nr. 47, you wrote:

We need to install above and need a driver installer disk, can you help. e:mail us or please call 01752 788099, we are desperate.

Linux doesn't have printer drivers as such. Printing services aren't part of the Linux kernel.

Most Linux users use Ghostscript (a PostScript interpreter) to drive their printer as if it were a PostScript printer. The Epson Stylus 800 is supported by Ghostscript 5.5. Most modern distributions have this.

Install Ghostscript and adapt your /etc/printcap so that all programs that print via the printing daemon can access it.

Read the Printing-HOWTO.


ANSWER: KDE slower than windoze?

Sat, 6 Nov 1999 16:23:21 +0100 (CET)
From: <

Dear Sandra,

In Linux Gazette nr. 47 you wrote:

I've just installed Linux on my Acer Notebook 370 and I thought everything was working fine. But when I'm running KDE it takes, e.g., about 5 minutes to open Netscape!!! Is anybody out there who knows what's wrong with my installation???

How much memory does your notebook have? I'd say you would need at least 16 MB to get a somewhat usable X environment, but 32 or 64 MB would be better. It sounds like your system is swapping a lot to get Netscape to load.

I would recommend you switch from KDE to a less memory-intensive window manager (try fvwm1). Netscape itself is also quite big, although older versions (3.x) are smaller than newer ones.


ANSWER: My Windows partition has full access for root only

Sat, 6 Nov 1999 14:34:00 -0500
From: Gerard Beekmans <

I have 2 questions. I have partitioned my HD into 4 partitions:
1. Win98 (filesystem is FAT-Win95)
2. Linux swap
3. Linux OS
4. Personal data (filesystem is FAT-Win95)

Question 1:
Both the FAT-Win95 filesystem partitions get mounted properly in Linux, but the problem is that only root has read/write/execute permission. The other users only have read/execute permissions. How can I set it up so that everyone has r/w/x permission to the mounted filesystems (and all the subdirectories within them)?

Here's a solution. When you mount the partition, use this command:

mount -t vfat -o umask=000 partition mountpoint

You might use a slightly different command (aside from the -o umask=000 option). The point is: you need to put -o umask=000 somewhere. This option makes all files/directories mode 777 (r/w/x) for everybody (owner, group and other).

This doesn't add attributes that aren't originally present. Normally, when root mounts the filesystem, group and other lose write permission on every file, because root usually has umask 022 and mount treats every file on the filesystem as if it had just been created (to put it simply); passing a umask option overrides that. If a file is marked read-only by Windows, the mount program will see that and treat it that way as well. Therefore you can't write to files that are already read-only, unless you're root.
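The permission arithmetic can be sketched in a couple of lines of shell: the mode shown on the mounted filesystem is 0777 with the umask's bits cleared (a generic illustration of the rule, not tied to any particular mount):

```shell
# Mode shown on a vfat mount = 0777 with the umask bits masked off.
# umask=000 leaves 777 (rwx for everyone); root's usual 022 leaves 755.
printf '%o\n' $(( 0777 & ~0000 & 0777 ))   # 777
printf '%o\n' $(( 0777 & ~0022 & 0777 ))   # 755
```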

Question 2.
If I access any file on the FAT-Win95 filesystem and make a change to it within Linux, when I boot into Windows that file is marked "read only". Any idea why this is happening and how I can stop it?

I've never seen this happen on my system. I usually don't even use the umask=000 setting (since I hardly access the Win95 partition at all; if I have to, I just do it as root). My only guess is that your root's umask has the first digit set to 2. If you type 'umask' at your prompt you should see '022'. But I don't think this is the case here. Check it anyway, just to be sure :)

Perhaps this umask=000 setting answers your 2nd question too. Again, since it doesn't happen here, I can't test it either, I'm afraid.

Maybe the two are related. Any help will be greatly appreciated.

You're most welcome. Hope all goes well. If you have more questions, feel free to ask ;)

From: Joachim Noffke <:

If your FAT partitions are mounted automatically at boot time, all files on them will belong to root. To change the permissions for other users, edit your /etc/fstab file: in the two lines corresponding to your FAT partitions (the ones which contain "vfat" or "msdos" in the third column), add the mount option "umask=0" to the fourth column. This should solve at least problem #1.
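A sketch of what those two fstab lines might look like (the device names and mount points here are examples; match them to your own partition layout):

```
# /etc/fstab excerpt -- hypothetical devices and mount points
/dev/hda1  /mnt/win98  vfat  umask=0  0  0
/dev/hda4  /mnt/data   vfat  umask=0  0  0
```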


ANSWER: Tryin' to install a Diamond SupraExpress 56i V PRO

Sat, 6 Nov 1999 14:35:06 -0500
From: Gerard Beekmans <

I have a problem with my new modem. I tried to install it under Red Hat 5.2 but it doesn't work. My modem is an internal Diamond SupraExpress 56i V PRO, and under W98 the default configuration is IRQ 12 and I/O port 0x3e8. Under W98 it works perfectly, and I don't think this is a "winmodem" (is it?). Windows "says" that under DOS it must be configured with COM3, IRQ 4 and I/O port 0x3e8 (/dev/ttyS2, isn't it?). I just want to know if this is a winmodem or not, and how I can install it.

First we have to make sure whether it's a winmodem or not.

How do you configure the modem? Do you need to change jumpers on the modem card itself to change IRQs and the like? If so, then chances are high that it is not a winmodem.

Here's a second check.

When you boot the computer, the BIOS test output includes a list of the COM ports it found. If that list includes your modem's COM port, then you can be sure your modem is not a winmodem.

I advise putting your modem on COM4 (IRQ 3, I/O 0x2e8), since that works best, and in case you have a serial mouse it won't interfere with it (if you have a serial mouse on COM1 and a modem on COM3, they both have the same IRQ, and that is a possible conflict). If, however, you have a PS/2 mouse, then you can safely leave it on COM3.

And yes, COM3 is /dev/ttyS2 under Linux.

Have you tried minicom under Linux? It's a program similar to HyperTerminal under Windows. You can use it to send AT commands (modem commands) to a serial port, and in that way you can determine whether the modem is responding.

Good luck.


ANSWER: Linneighbourhood

Thu, 11 Nov 1999 01:25:12 +0530 (IST)
From: Raj <

This is the rpm info, extracted with the rpm -qip command:

Name        : gnomba                       Relocations: (not relocateable)
Version     : 0.3                          Vendor: (none)
Release     : 1                            Build Date: Thu Jul 29 11:10:45 1999
Install date: (not installed)              Build Host: otherland.darkcorner.net
Group       : Utilities/Network            Source RPM: gnomba-0.3-1.src.rpm
Size        : 64461                        License: GPL
Summary     : Gnome Samba Browser
Description :
gnomba is a GUI network browser using the smb protocol.  It allows users
to browse workgroups, machines, and shares in a "Network Neighborhood."

I haven't installed or tested this.

If you find time, please send me your comments about this RPM.

raj


ANSWER: Compiling your own Linux distro.

Fri, 12 Nov 1999 21:04:49 +0200
From: Willem Brown <

Hi,

I recently came across this on the web.

The ROCK Linux Homepage: linux.rock-projects.com

----- snip README ------

ROCK Linux is built by a few shell scripts. These scripts can download all necessary sourcecode from the internet, compile the packages with optimizations for your choice of processor, build the package files and (optionally) create a CD-ROM image.

ROCK Linux is a small distribution, but it's not a "mini distribution". It comes with over 200 packages including X11 and the GNOME Desktop.

----- snip README ------

The scripts are about 1.3 MB in size.

HTH


ANSWER: SiS

Sun, 21 Nov 1999 21:04:48 -0600
From: Alton W. Jones <

Your November column had some advice for a newbie struggling with an SiS video adapter. I offer the following:

For S.u.S.E.: go to their site http://www.suse.com/ for drivers. I did this with S.u.S.E. Linux 6.1. The drivers work, but can be installed only with S.u.S.E.'s SaX. The other installation programs will not work. The XF86Config file it writes is NOT compatible with xf86config, but is required to get SiS up. My on-board SiS 530 setup is limited to 16-bit color depth; Windows on the same computer has 32-bit color depth.

For Red Hat 6.0, OpenLinux 2.0, and Slackware 3.6: there are drivers at http://www.sis.com.tw/, along with instructions. I have not tried them, so you are on your own. Have fun.

Regards, William L. Jones, P.E. [email protected]


ANSWER: RedHat Business Model

Mon, 1 Nov 1999 09:47:25 -0600
From: Jonathan Hutchins <
One part of my research involves analysing the business model of Linux (from Red Hat). However, I fear that by going to Red Hat's website, the information about its product may be biased, and I may not be able to get a well-rounded opinion.

Get your basic info from Red Hat, state your thesis online (newsgroups?) and ask people to refute it. Red Hat's pretty above-board, and most people don't have much argument with their policies.


This page written and maintained by the Editor of the Linux Gazette.
Copyright © 1999,
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


It's Only a One Day Conference...

By



A pile of conference badges. If you look closely, you'll see Maddog's badge on top.
It was sometime during the week of Oct. 4th. The day was sunny and bright, the sky a piercing light blue. I was walking north along Bell Avenue in Brookhaven National Laboratory on my way to the bank. I had just finished lunch and was savoring the walk. It was quiet, peaceful. The first sunny, crisp days of fall had arrived, and the peacefulness surrounding me was striking. It had been 4 months since I'd been able to enjoy such a quiet moment.

"What do you mean there's a busload of people wandering lost around the RHIC ring?!" I replied to one of the organizers after being interrupted while fidgeting with that damn PC projector. "That's what I've been told, and YOU HAVE TO DO SOMETHING!" she replied. I ran out to the lobby in a frantic state. One of my PHENIX collaborators was walking around looking at the vendor exhibits. "Please, John, go out and find those wandering conference attendees and give them a tour of the PHENIX detector!" He looked a bit puzzled as I explained the situation to him, but was off in a rush once he understood what was going on.

That's me making cables, one of those valuable Ph.D. analytical skills I've picked up. The poster/demo room's IT infrastructure needed to be assembled, and thus there you have me making RJ45 cables.

That was October 2nd, the day of the Open Source/Open Science conference. The whole day was a series of crises. The first crisis started that morning when we fired up the projector hooked up to the PC running Linux. The projected image jittered in such a way as to give me a rather nasty headache. "This will not do!" I exclaimed. Malcolm, one of the organizers, swore again and again that it was working great the night before. "Hooking up another laptop or PC is no easy feat," he warned me. It takes a good 20 minutes to figure out the settings on the projector so that it will sync properly with the video output. What to do - what to do. I had 15 minutes before the conference started, and the only working PC/projector setup was running Windows NT. What an embarrassment for an Open Source/Open Science conference. I was not going to give my introductory talk with the NT desktop brightly displayed behind me as I used IE to download my presentation. I had my laptop set up on a table on the auditorium stage, from where I was going to run the conference. My intent was to monitor the inbox of the osos account on that laptop, to which people were to send their e-mail with questions for me to ask the speakers. I took down my laptop, set it up next to the projector, plugged in the video cable and pressed fn-crt, which in theory should enable the video output on my laptop. Nothing. A big blue box shone on the screen with a clear message reading "No video sync." The conference was to start in 10 minutes. I quickly booted my laptop into Windows 98, and back into Linux. The video projector came alive and projected my GNOME desktop after I logged into my account. 5 minutes to go, and I had a working Linux desktop environment projected onto the screen for the audience to see, from which I would present the introductory talk. Wheeewww. Crisis #48 averted....

A view of the RHIC tunnel with some of the invited OSOS'99 guests and then some. Those I recognize are Gabor David of PHENIX on the far left, John Hall 5th from the left, Mark Galassi one over from John Hall, followed by Fred Johnson of DOE; skip one, and then Michael Johnson of Red Hat and finally Bruce Perens.
The idea for this conference was born back in early June, after a lunch I had with Sean McCorkle. As we walked out of the cafeteria, Sean suggested that we put on an Open Source conference. It was the same idea I had had in the back of my head for a long time. "I would be glad to dedicate time to an Open Source conference," Sean told me. He said it with a zealot's enthusiasm, so I knew he meant what he said. With that bit of encouragement, I told him that we should "explore" the idea further.

A couple of days later, Malcolm Capel, a big Linux user doing structural biology work at the Light Source, Tim Sailer, a big Linux (Debian) developer, who was working for the RHIC computing facility at the time, Sean McCorkle, who does database work for the Human Genome project and myself, who spends much too much time on these silly write-ups, gathered during lunch to push forward this idea of organizing an Open Source conference at the Lab.

Same gang, but now we are posing for a photo op in front of the STAR detector. This is one of the two BIG detectors ready to study the creation of the Universe at BNL. Maddog took this picture.
"Let's get Maddog." "How about Bob Young?" "Yea, IBM just released its data visualization software under an Open Source copyright license. Let's get them." "SGI has a bunch of Open Source projects, XFS, etc." "Hey, Lincoln Stein wrote the CGI Perl module. He works at Cold Spring Harbor Lab; that's right here on Long Island. He'd be a great speaker!" And so the meeting went. By the time lunch was over, I asked the question, "So, will it be worth having an Open Source conference?" The consensus was yes, it's worth the time and effort. I can't remember who, but someone said that working on a conference takes up all your time. This was in reference to a conference recently held at BNL, where 300 people came for a week to talk about small-angle scattering experiments at the Light Source. The head organizer had spent all his time working on the conference. We blew that fact off. "It's only a one day conference." (That phrase would be uttered many times between then and Oct. 2nd.) "How much time could it possibly take?!" It was going to be a bunch of guys from BNL plus a couple of outside speakers. "What's the big deal!"

I couldn't resist this photo. The PHENIX central magnet system is in the background, with its logo painted on top, and Bruce Perens' Penguin Power T-shirt logo in the foreground. A nice juxtaposition of symbols.
This brainstorming session stirred my blood for some action. I fired off an e-mail to Bob Young, Maddog, and Richard Stallman. I had met each of them at previous meetings I had strayed into earlier this year. The e-mail outlined the idea for the conference and asked for feedback. "Is this a good idea or not?" Bob replied within several hours. He thought it was a good idea. Maddog got back to me a couple of days later. He also thought it was a good idea. I didn't hear back from Richard for a couple of months. (Some e-mail glitch was at fault, and the outcome turns out to be a whole 'nother chapter.)

With that positive feedback from Bob and Maddog, I asked to meet with Mike Murtagh, the chair of the physics department. "It's a software conference," I told him. "We spend too much of our time on software not to talk about it." I told him about IBM and SGI and their software work, about how our software affects our science work, how people may be interested in what we do, etc. Mike was warming up to the idea, and it wasn't until about 45 minutes into this discussion that I spoke the words "Open Source" and "Free Software." When I did, Mike uttered a long "Ooohhh, that's what this is all about" in his signature Irish accent. He had visions of people dressed in tie-dyed T-shirts, purple-rimmed sunglasses, and flour-sack dresses forming a "Free Software" commune on site during the conference. "That's not quite the idea, Mike, but it's close..." Mike told me that he had noticed that I didn't say "Open Source" or "Free Software" until well into the discussion. I did that on purpose.

Getting other Lab officials involved in the conference was something I didn't like. "They're going to hijack the idea and run away with it and take all the credit for it!" I told Mike. Mike was sensitive to this problem, but left me no choice. Either the Lab management was to get involved and give their stamp of approval or there was going to be no conference. "You can always reserve a conference room and invite your speakers, but if you want the Lab's logo (and resources and money) to back your conference, you have no choice but to get one of the departments to back you." Mike continued, "If it were a Physics conference or workshop, I could see the physics department backing you up, but for this one, you need to go to ITD." (ITD is the Information Technology Division.) I don't know why I was so jealous of this conference idea. Who on earth would want to take over this Open Source conference? No one in the Lab management had ever heard of the term "Open Source" and they had no interest in the matter. (As it turned out, this was mostly true. There were several "directors" in the directors office (we have lots of directors) who, in the end, did show interest.) I guess what I was worried about was that my efforts in trying to promote this as a computing conference as much as an Open Source conference would get those with computing empires at the Lab to take note and take the conference over. Hindsight has proven that this was the least of my worries.

Berkner Hall, the building where the conference took place, at about 7am the morning of the conference. That was the last quiet moment I had before my crisis management skills were tested.

Nice photo of Maddog starting his talk.
From that meeting, Mike told me how to proceed: make sure I got ITD management to back the conference. I had to broach the idea to them, which meant getting ITD involved. We also needed to get someone from the Directors Office involved. My tendency was to go straight to the top, but Mike warned me that the first thing anyone in the Directors Office would do is consult with Don Fleming, the new Lab CIO and chair of ITD.

While I was pushing the idea of this conference through my contacts, Sean and Malcolm were pursuing it through Biology. From that we got two key people on the organizing committee: Donna Zadow and Ann Emrick. They are pros at this conference stuff, as I would later find out. "Let us worry about getting the speakers here; you worry about the content of the conference." That was a phrase they kept telling me. (And they were right.)

A picture of the LUG tables. The Connecticut Free Unix Group's table is in the foreground and the Long Island LUG is in the back. Larry Augustin is hanging with the LILUG guys, trading Linux war stories. CFUG is working on getting X to run on the FreeBSD setup. Give these guys a power strip and an RJ45 jack into the Internet and they're set to go.
After several weeks of having meetings to meet with X, Y and Z, meeting with X, Y and Z and then having more meetings to meet with U, V and W, we finally got 3 of the working folk of ITD involved: Tom Schlagel, Ed McFadden and Susan McKeon. Ed really pushed the conference idea. He wanted to set up meetings with Don Fleming and Jim Glimm, the new head of the Center for Data Intensive Computing. We needed to get their OK for this conference to proceed. And at about this time, word of the conference started to leak to them, as well as to Peter Paul, the big cheese in the Directors Office. As I said, we have many directors. At the top is the Lab director, Dr. John Marburger. He had no interest in any of this, which was leaked to me by his secretary when she called, about a month later, to inform me that he had turned down my personal invitation to attend the conference and VIP dinner. Under him there is Peter Paul, who is the scientific lab co-sub director. There is another guy, also a co-sub director, who is in charge of the non-scientific aspect of lab management. The guy to talk to was Peter Paul. If we got his OK, then we knew the conference would happen. But we wanted to make sure he heard it from us, the original organizers. Not the head of ITD or anyone else. We had the idea first, and we wanted to make sure Peter knew that. (Again, I don't know why I was so jealous of this fact, but I was.) So, Malcolm, Sean, Tim and I decided on a pre-emptive strike. Instead of letting Peter hear about the conference from Don Fleming, or anyone in ITD, we were going to go to him first. The guys from ITD said that having this meeting with Peter Paul without someone from ITD would make it look disorganized and could potentially derail the conference. We didn't care: "This is just a pre-meeting" to tell Peter what was coming down the pipe. "It's just a heads-up thing."

The SGI table. Chris Porter is on the left and Ken Howard of Comnet, a local SGI reseller, is on the right. They're showing off their L1400 server.
So I called Peter Paul's secretary, set up a meeting, and within a day we were talking to him in his office. "We want to have the accelerator facility, the Light Source and the lab scientific infrastructure as a backdrop to this conference." "It's a great way to promote the computing efforts of the Lab." "Look, people use the same software on their PCs at home that we use to do our research. This commonality is a great way to get them into the lab and show them around." And so we pitched the idea. "Wait a minute!" he said, shuffling over to his X terminal, groping for his mouse. Click, click, click. "Hmmm, October 2nd? I won't be here." Pause.... "But I don't have to be here for the conference, do I..." Finally, after about 30 minutes of pitching the idea to him (some small/short heads-up meeting this was; it was a full-blown presentation), he asked, "Well, what do you want from me?" It suddenly dawned on me that what he was really asking was, "How much money do you want from me?" After a brief pause to think about what he was really asking for, I said, "We need money." I didn't say anything about how much we needed. He just offered to kick in $2K to get the conference going. He's got his own director's stash which he can do with as he pleases, and that afternoon he gave us what turned out to be the last $2K he had in his scientific co-director's fund. Looking back on this, that was quite a feat on our part. Peter had just donated $2K to help promote the use of "Open Source" software at BNL. More importantly, we got the official backing from the Lab that Mike Murtagh told me I should get. But there was still work to be done. Peter said we had his blessing and money, but we still needed to get ITD to fully back the conference. This meant we needed to get back with the ITD contingent of our organizing committee and schedule a meeting with Don Fleming.

The AndoverNet reps had read my Slashdot paper and wouldn't let me leave without getting a picture taken with me. I tried to explain to them that my /. paper was a bit of a joke, but that didn't faze them. They still wanted a picture with "Dr. Adler". What could I do?
In the meantime, I set out to try and get other departments to contribute money to the conference. Our eventual plan was to hit up the vendors for money by selling ad space on our conference web site and floor space in the auditorium lobby. But for now, we needed some "startup" funds. Physics and the Light Source kicked in $2K each without much fuss. Mike knew that I would need money. I asked if I could put in money from my own computing funds for the physics department, and he OK'ed that. I sent an e-mail to the head of the Light Source, and that was all it took to get $2K out of him. Chemistry never responded, and I was told that Biology had no money. (But we did get 4 people from Biology on the organizing committee, a much more valuable contribution. And we did eventually get $2K from Biology, thanks to Ann Emrick, about a month later, which was a real help.) Mike Murtagh told me that I could ask one of the physics secretaries to help with the conference, and I was able to recruit Bonnie Sherwood, another gold nugget. The other missing component of this conference was a budget. Ann Emrick worked up an MS Excel spreadsheet with a bunch of costs she could think of. "You need XXX for lunch, the conference banners will be that much, the buses will cost this much." She had the figures pretty much on the ball. The total cost of the conference, if we got 450 people to attend, would be $25K. The biggest single and most unpredictable cost was the food, which was going to be $25 per person. If we added a $25 entrance fee, the net cost would be $15K. We had $6K in the bag and needed $9K more.

A shot of the panel discussing how we can get DOE and BNL to GPL the software they write, among other issues. From left to right are Maddog, Larry Augustin, Oggy Shentov, Mike Johnson, Bruce Perens, and Fred Johnson.

This set the stage for the crucial meeting with Don Fleming. We knew the numbers, we had a rough idea of who we were going to invite and the date of the conference, and all we really needed now was someone to sign the lab conference paperwork. In order to have a conference at the Lab, you are required to fill out a form, get a head of a department to sign it and submit it to Staff Services, which works within the director's office. This then gets an account created where one can deposit money and, more importantly, spend it.

Mark Galassi on the left, Constantine Olchanki with his back to us and Sean McCorkle on the right are blowing off some steam after the conference finished. We are sitting around waiting for rides to The Bellport, where we had our VIP dinner.
The meeting we had with Don Fleming was a small one. Tom Schlagel was there, along with Ed McFadden, Sean McCorkle and myself. We pitched the idea to Don. Don got into the Open Source bit. Microsoft vs the "Free Software" community was a theme he picked up on. He saw this as a way of promoting ITD and the new computing initiatives the Lab was embarking on. After about 30 minutes or so he wanted to know, "What do you want from me?" Again, that question popped up which really meant, "How much money do you want from me?" We pulled out the spreadsheet and showed him the figures. "We have $6K from other departments; the conference will cost $15K," I started. I then told him we would try and raise as much money from outside sources as possible, implying that this would repay any money he gave us. "OK," he said, "I'll cover the remainder." Meaning that he would give us up to $9K for the conference and he would sign that form needed by Staff Services.

Bingo! That was it! We had our conference! There was no turning back now!

*commit* *commit* *commit* *awwuuuugah* *commit* *commit* *commit*

It's a shootin' war now, boys.
was one e-mail Sean McCorkle fired off to our internal organizers' e-mail list. Just after that meeting, I had to give a talk to the PHENIX detector council. The head of the online controls group, the group I worked in for the PHENIX experiment, touted this talk as some kind of opportunity to get in front of the senior scientists of the experiment and "get some exposure." I was on such a high going into that talk that they must have thought I had lost it. I was so giddy during that presentation. There is always some kind of tension in the PHENIX experiment coming from a competitive attitude within the various subgroups. This manifests itself through criticism of one's work as being insufficient, late, not working or whatever angle of attack your colleague conjures up at the moment. The speaker before me was one such notorious colleague who would be quick to "qualify" any statements I made about my particular project for the PHENIX detector. As it turned out, I couldn't have cared less what anyone in that room thought about what I was presenting. And I was talking away, at a mile a minute, bouncing around the front of the room, making jokes about the work, heading off "suggestions" from my notorious colleague without a pause, and all the time I had one thought screaming in the back of my mind. I had just raised $15K to put on an Open Source conference at the Lab. "PHENIX detector council, eat your heart out!" Of course I never said that to them, but they must have known something was going on.

Malcolm Capel setting up and testing the display projectors used for the conference.
The next step was to try and raise money from the Linux business community. I got a copy of the Linux Journal and made a list of every company who advertised there. I then split the companies into 3 groups, of which I was in charge of calling up one third. The other two thirds were delegated out to two others on the organizing committee. Sean McCorkle snagged someone from Microway and asked me to follow through with them. I spent a couple of days working down my list. I learned a valuable lesson during this part of the fund raising campaign. One of the hardest things to do is call up someone you don't know and try and sell them on the idea of giving you money. It is a humiliating experience with a capital H. I have gotten a gazillion calls from vendors trying to sell me their products and I usually keep their pitch short and send them on their way. Now it was my turn to be on the other end of the phone. Most of the time I couldn't get through to the person in charge of promotions. A lot of the time all I could do was send e-mail to these people and never get a reply. But there was the occasional time I did get through to somebody. The first one I got through to was KAI. I managed to pitch the conference to someone important there. A week later, I got an e-mail that they were interested in the coffee break sponsorship. Another victory celebration erupted immediately.

Bonnie Sherwood and Elaine Dimasi pictured the day before the conference as they sorted out the badges for the registrants.
Throughout all this, we kept bringing up the idea of getting a "major sponsor" for the conference. They would contribute the full amount needed to organize the conference, and this would free up our time to organize it rather than wasting precious time on this fund raising task. "Ask Red Hat, they got lots of money," was one comment. "These guys are flush in Venture Capital funds," referring to the general Open Source business community like VA Linux and others. So it was left to me to try and contact Red Hat and VA Linux. I knew Jim Gleason of VA Linux from the NYLUG, so I contacted him. He gave me the names, numbers and e-mail addresses of those over in corporate HQ who could make the decisions. I contacted (via e-mail) Bob Young and pitched the idea. He warmed up to it. VA Linux, after a couple of days, told me that they could only be a minor sponsor. "All our funds have been committed for the year," I was told. Fair enough; we were pushing to fund a conference about 3 months before it was to take place. (It was now mid-July.) Also, everyone in Linux land was getting ready for the Linux World Expo in San Jose, CA (early August), and I knew that OSOS would have to take a back seat to that event until it was over.

Parallel to raising money for the conference, we started to worry about promoting it. When we first started working on organizing it, I thought that a couple of postings on Slashdot, Linux Today, Linux Weekly News, Freshmeat and some well chosen newsgroups would be all that we needed. WRONG! The first thing we needed to do was get the word out to those who worked at the lab. "Simple!" I thought. BNL has a Public Affairs office (PA) whose sole purpose is to promote the Laboratory. I would just use those resources.

Lars Ewell on the left, who is my office mate, and Martin Purschke, who does online work for the PHENIX experiment, are pretending to show off a poster which was to be presented at the conference. The poster was made by Martin.
"You can't use the term Open Source. Nobody knows what it is," I was told by PA, in response to our text for the BNL e-mail announcement I wanted them to send out lab-wide. "What do you mean I can't use the term Open Source? It's an Open Source conference. It's in the title!" I argued. "Also, no one knows what source code is. You can't use that term either," they continued. "We asked someone here what they thought source code was and they said it must be some kind of government specification for plumbing. Sorry, you can't use it," PA said. I couldn't believe what I was up against. PA continued, "Also, we don't e-mail lab-wide announcements of conferences. Nobody is interested in conferences, and that is not our job. We do have a special e-mail list which people sign up to where we can post your announcement. It has about 800 people signed up," they boasted. I checked out who was on the list and it was slim pick'ns as far as laboratory scientists and engineers went. I had never heard of this announcement e-mail list PA was telling me about, so how would the rest of the lab know about it? And I have been at the lab for over 10 years! So be it; we did our best to reword the text for our e-mail announcement to the Lab, replacing the terms Open Source and source code, and let them send out the e-mail to their 800 list subscribers. This generated about 30 hits on our web site, of which about 5 were from people interested in the conference. We had to do something. We had used the lab channels and come up empty-handed. Furthermore, of all the people I asked, no one had gotten our e-mail announcement. And I asked all the colleagues I worked with who I knew would be interested in it. This left us with little recourse but to SPAM the Lab. We were left with no choice. We were going to get the word out to the Lab via e-mail no matter how high the obstacles PA and Lab management were going to throw up in our path.
So, after about 2 hours of Perl scripting, I was able to generate a list of over 5000 e-mail addresses of anyone who in one way or another was involved with the Laboratory. After setting the mailing in motion (I had to send out the e-mails slowly, since there was the possibility of crashing the Lab's NT mail exchange server, and if the OSOS SPAMing did so, we would be in big trouble; it had happened once before in a situation not related to our conference, hence our caution), we got a healthy hit rate on our web site and everyone I talked to had received notice of the event. Sorry, Public Affairs, but we just had to do it. You left us no choice in the matter.

The photo of Elaine Dimasi, Bruce Perens and Jon "maddog" Hall during the pre-registration dinner.
But that was just the beginning of our efforts to get the word out. We spent a healthy 2 weeks generating our own mailing list of colleges and universities throughout the Northeast, to which we would snail-mail a flier announcing the conference. But we realized that we needed to do more. We further realized that we hadn't budgeted a penny to advertise the conference. By this time, I had gotten KAI, SGI, the Portland Group and others to sponsor the conference. We had generated about $5K in sponsorships at this point. So we decided to spend all those funds on advertising. Also, this was just around the time of the Linux World Expo. It turned out that Donnie Barns was talking with VA Linux during the Expo about a possible joint major sponsorship. A week later I got e-mails from VA Linux and Red Hat that they were interested. This sent chills down my spine. This was the mother lode! The organizing committee huddled and tried to figure out how we would "package" a major sponsorship. It was rather easy. We would put their names and logos on every bit of real estate we owned: on the web, on every banner, conference bag and brochure we would print. Anywhere and everywhere we could think of, Red Hat's and VA Linux's logos would be there. They liked it, and we got $25K. Thank God, because we spent it all on advertising.

Bill Horn talking about openDX.
The NY Times science section and Newsday took up the lion's share of the advertising funds. We also got 10 days of prime time space on WSHU, the local public radio station broadcasting out of Connecticut. My guess is that we got word out about the conference to well over 100,000 people. In the end, over 200 people showed up, a 10^-3 effect, which is what I expected. The important thing was not so much that we got over 200 people to attend the conference, but that hundreds of thousands of people saw the Brookhaven Laboratory logo associated with Open Source software along with Red Hat and VA Linux. In a country of 250 million people, hundreds of thousands is not a large number, but you have to start somewhere. And the long term effects of this "brand name" association are yet to be known, but I'm sure they will be positive. (Dr. John Marburger, the director of the Laboratory, should be very grateful to us for what we did here.)

"Where are those PCs!"
Jim Gleason and Ari of VA Linux screwing around for a photo op in the basement of the Physics building while we were looking for Xterminals.
By the time the spots on NPR started airing and the ads in the NY Times and Newsday started to appear, the registration rate started to pick up. It peaked the weekend before the conference. All that was left was to make sure we put on a good show for the attendees. (Ehmm, I meant to say a good "conference".)

"What do you mean the VA Linux PCs are stuck in NYC?" I asked Jim Gleason. Those were the PCs we were going to use for the Poster/Demo room. "It's a union thing," he replied. "The FedEx guy wasn't allowed to cross the corridor to pick up the boxes. Only a union guy could do that." Where were we going to get replacements? "Sean, Ed, Heeeelp!" (Sean McCorkle, Ed McFadden and I managed to scrounge up enough Xterminals from around the Lab to cover the missing VA Linux boxes.) "What do you mean we have to have the conference photo in the back of Berkner? I want it taken just outside the main entrance!" I told Ann Emrick. "Sorry, the photographer said there was too much sunlight," she replied. Whoever heard of too much sunlight!

The VIP Dinner. From left to right are Steve Adler (me); Larry Augustin, CEO and co-founder of VA Linux; an attendee; Sean McCorkle; Ari of VA Linux; Ed McFadden; Maddog; and Bruce Perens. My stiff drink is hidden behind that carafe.
And so October 1st and 2nd went, one crisis situation after another. The days flew by and before I knew it, what remained of the speakers and whoever I could find hanging around after the conference were seated at The Bellport for our "VIP" dinner. It was all over at that point. I had a stiff drink, which I rarely do. "A whiskey on the rocks please, and make it a double." Everyone seemed to have a good time there. I spent most of my time reminiscing about DEC hardware with Maddog, Larry Augustin, and Bruce Perens.

The following week I started to hear feedback on the conference. As best as I can tell, it turned out to be a great success. The most gratifying compliment came from Tom Kirk, the associate director for Nuclear and High Energy Physics. (He makes all the decisions regarding future nuclear and high energy experiments at the Lab. In other words, he's an important guy.) He told me that the conference, when looked back on several years from now, would be considered a key turning point on the topic of Open Source and science. I also got word that a lot of good things were said about the conference throughout the director's office.

There I am on the left with Sean McCorkle on the right. Cheers to the Open Source/Free Software world from BNL!
So there I strolled along Bell Avenue, heading north towards the bank. A bright breezy day. The surrounding trees vibrated with fall color. Peace and quiet. "So what's next?" was a thought I was trying to avoid. I was recovering from my 4-month marathon organizational project. I was going to have to heal some open wounds I had left with my colleagues over in PHENIX. They were threatening to take my name off the authorship list of the PHENIX experiment for skipping out on all their meetings and generally blowing them off. Finally, I was going to have to think about the future. "Open Source/Open Science 2K. Hmmm, that has a nice ring to it."

I would like to thank Matthew Prete, Joe Louderback and Andrew Pimlott for their corrections to the grammar and spelling of this article. Thanks guys.


Copyright © 1999, Stephen Adler
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


SAMBA, Win95, NT and HP Jetdirect

By


I am running a computer routing lab that is used to teach routing fundamentals on proprietary equipment. It consists of an 18-seat lab with 9 PCs, 1 server and 1 HP LaserJet 4050N with an HP JetDirect print server card installed. The server is running Slackware 4.0 with Linux 2.2.6 on it. Eight of the PCs are running WinNT 4.0 SP5 and one PC is running Win95a.

My requirements for the Linux server are as follows:

There was a choice between using NFS and configuring each client to connect to the Linux server, or using SAMBA and configuring only the server. During the normal operation of the lab, the clients are regularly rebuilt, rebooted and reconfigured. It was felt that by running SAMBA services, the Linux server would be transparent to the clients and allow the simplest client install.

This article will describe how I used SAMBA to:

NOTE: This is not a "howto" type of article, but an example of a working configuration and the process used to configure SAMBA.


Installing SAMBA

The installation process will vary depending on which distribution of Linux you are running. Under Slackware, select SAMBA during the installation process or if you are adding SAMBA to an existing system, use the pkgtool program.

Change to the Slackware CD and cd to /slakware/N11. Type pkgtool and select "Install packages from current directory". For all other distributions, this article will assume that you have SAMBA properly installed on your system.

SAMBA is started under Slackware by the rc script "/etc/rc.d/rc.samba":

#
# rc.samba: Start the samba server
#
if [ -x /usr/sbin/smbd -a -x /usr/sbin/nmbd ]; then
  echo "Starting Samba..."
  /usr/sbin/smbd -D
  /usr/sbin/nmbd -D
fi

The smbd program provides SMB/CIFS services to clients. SMB (Server Message Block) is the protocol that Win95 and NT clients use to connect over networks. The new name for SMB is the Common Internet File System (CIFS).

The nmbd program is a NETBIOS name server that provides NETBIOS-over-IP naming services to clients.

Typing "ps -aux" at the command prompt allows us to view the running processes and to see whether smbd and nmbd are actually present:

USER       PID %CPU %MEM   VSZ  RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2   220  128 ?        S    Oct21   0:02 init
root         2  0.0  0.0     0    0 ?        SW   Oct21   0:00 [kflushd]
root         3  0.0  0.0     0    0 ?        SW   Oct21   0:00 [kpiod]
root         4  0.0  0.0     0    0 ?        SW   Oct21   0:00 [kswapd]



root       101  0.0  0.5  1544  380 ?        S    Oct21   0:00 /usr/sbin/smbd -D
root       103  0.0  0.9  1196  584 ?        S    Oct21   0:03 /usr/sbin/nmbd -D



root      8113  0.4  0.9  1164  616 ttyp0    S    11:14   0:00 -bash
root      8120  0.0  1.1  2272  744 ttyp0    R    11:14   0:00 ps -aux

SAMBA Configuration File

The configuration file for SAMBA is /etc/smb.conf, and there are many example configurations available in /usr/doc/samba-2.0.3/examples.

The /etc/smb.conf file can be divided into 3 general sections: Global, Shares and Printers.

The Global section deals with global parameters such as workgroup name, netbios name, IP interface used. For example:

# Global parameters

        workgroup = E328                # workgroup name
        netbios name = E328-00          # Linux server's netbios name
        server string = %h - Linux Samba server         # comment shown in Win's Network Neighborhood detail view
        interfaces = 192.168.1.3/24     # NICs + subnet mask (24 = 255.255.255.0) 
        encrypt passwords = Yes         # Required for NT (Win95 will work with encrypted or not)
        null passwords = No             # Must have a password
        log file = /var/log/samba.      # location of samba log files (many!)
        max log size = 50               # maximum size of each log file
        socket options = TCP_NODELAY    # Speeds up convergence of netbios
        os level = 33                   # Gives a higher browse master "priority"
        preferred master = Yes          # This server is the browsemaster
        guest account = pcguest         # guest account name
        hosts allow = 192.168.1. 127.   # networks allowed to access this server using SMB
The Shares section deals with sharing file directories. For example:

[homes]
        comment = Home Directories      # comment shown in Win's Network Neighborhood detail view
        path = %H                       # automatically display user's home directory as SMB share
        valid users = %S                # Only user is allowed to access this directory
        read only = No                  # can read/write
        create mask = 0750              # permissions given when creating new files
        browseable = No                 # only show user's home directory not "homes" folder

[public]
        comment = Public Files          # comment shown in Win's Network Neighborhood detail view
        path = /home/ftp/pub            # path to public directory
        guest ok = Yes                  # anyone can access this directory

[cdrom]
        comment = Cdrom on E328-00      # comment shown in Win's Network Neighborhood detail view
        path = /cdrom                   # path to cdrom drive
        guest ok = Yes                  # anyone can access cdrom drive, public share

The Printers section deals with sharing printers. For example:

[lp] 
        comment = E328-Laser            # comment shown in Win's Network Neighborhood detail view
        path = /var/spool/samba         # path to spool directory
        print ok = Yes                  # allowed to open, write to and submit to spool directory

You can manually create the /etc/smb.conf file if you know what each of the entries means, or you can use the web GUI called SWAT (SAMBA Web Administration Tool). An added bonus of using SWAT is the online help that describes each of the available choices. I understand that SWAT is installed automatically with all versions of SAMBA from 2.0 up.


Running SWAT

The following instructions are taken directly from the /usr/doc/samba-2.0.3/swat/README file:

Running via inetd
-----------------

You then need to edit your /etc/inetd.conf and /etc/services to enable
SWAT to be launched via inetd. 

In /etc/services you need to add a line like this:

swat    901/tcp

the choice of port number isn't really important except that it should
be less than 1024 and not currently used (using a number above 1024
presents an obscure security hole depending on the implementation
details of your inetd daemon).

In /etc/inetd.conf you should add a line like this:

swat    stream  tcp     nowait.400      root    /usr/local/samba/bin/swat swat

Once you have edited /etc/services and /etc/inetd.conf you need to send
a HUP signal to inetd. On many systems "killall -1 inetd" will do this;
on others you will need to use "kill -1 PID" where PID is the process
ID of the inetd daemon.



Launching
---------

To launch SWAT just run your favourite web browser and point it at
http://localhost:901

Note that you can attach to SWAT from any IP connected machine but
connecting from a remote machine leaves your connection open to
password sniffing as passwords will be sent in the clear over the
wire.

You should be prompted for a username/password when you connect. You
will need to provide the username "root" and the correct root
password.

Once SWAT is up and running, you should see the following:

The menu buttons are pretty self-explanatory and there are excellent help screens available. A quick break down of the menus:

Whenever changes are made to the configuration in the Global, Shares or Printers section, the changes must be committed using the commit button/icon on the respective page; otherwise the /etc/smb.conf file is not modified.

Once the changes are committed (/etc/smb.conf modified), the smbd and nmbd servers should be restarted. The Status menu has options that allow the servers to be stopped and restarted.

I found that a good way of understanding the process that was going on was to view the /etc/smb.conf file as I made changes using the View button in SWAT.


Usernames

It is very important that the usernames and passwords are the same in both the Windows and Linux environments. The synchronization of the Linux passwords with the SMB encrypted passwords is done using the shell script mksmbpasswd.sh, which is found in /usr/lib/samba/private.

Note: For Slackware, the directory for SAMBA is /usr/lib, not the standard /usr/local directory.

The following information is taken from the /usr/doc/samba-2.0.3/docs/textdocs/ENCRYPTION.txt file:

The smbpasswd file.
-------------------

In order for Samba to participate in the above protocol it must
be able to look up the 16 byte hashed values given a user name.
Unfortunately, as the UNIX password value is also a one way hash
function (ie. it is impossible to retrieve the cleartext of the users
password given the UNIX hash of it) then a separate password file
containing this 16 byte value must be kept. To minimise problems with
these two password files, getting out of sync, the UNIX /etc/passwd and
the smbpasswd file, a utility, mksmbpasswd.sh, is provided to generate
a smbpasswd file from a UNIX /etc/passwd file.

To generate the smbpasswd file from your /etc/passwd file use the
following command :-

cat /etc/passwd | mksmbpasswd.sh >/usr/local/samba/private/smbpasswd

The problem I found with this step was that I expected it to automatically recognize shadowed passwords and place them in the smbpasswd file. Unfortunately, it didn't, and I had to manually enter the passwords using the smbpasswd command. Luckily, I had only about 10 passwords to enter. There is probably a method of doing this automatically and I am just not aware of it.

Once completed, I was able to use Network Neighborhood and point and click on the Linux directory shares without being prompted for a username and password.


Configuring the HP JetDirect Card using Linux

Getting Linux and the HP JetDirect card to work together was surprisingly easy. The JetDirect card is a print server card that fits into the HP 4050N printer. The first step is to configure the HP JetDirect card and printer. The standard install disk does not contain support for Linux, but there is a WebAdmin tool that you can download from HP's website: http://www.hp.com/support/net_printing. I chose to do it manually, using telnet and the built-in webserver of the JetDirect card.

Telneting to the JetDirect Card

In order to telnet to the JetDirect card, you need to configure the printer's IP address. The default IP address is 192.0.0.192, which most likely will not be a valid address on your network. The HP 4050N printer allows you to configure the IP address through the printer's status window. Select "JetDirect Menu" from the Menu button and then follow the directions for configuring the network. After the IP address is set, configure the subnet mask in a similar manner.

Telnet to your printer's IP address. You have two choices when telnetting in: you can view the current settings of the printer by typing "/", or view the help menu by typing "?", as shown in the following:

Please type "?" for HELP, or "/" for current settings 
>/ 
   ===JetDirect Telnet Configuration=== 
        Firmware Rev.   : G.07.20 
        MAC Address     : 00:10:83:1b:41:c7 
        Config By       : USER SPECIFIED 
 
        IP Address      : 192.168.1.10 
        Subnet Mask     : 255.255.255.0 
        Default Gateway : 192.168.1.1 
        Syslog Server   : Not Specified 
        Idle Timeout    : 120 Seconds 
        Set Cmnty Name  : notachance 
        Host Name       : E328-LASER 
 
        DHCP Config     : Disabled 
        Passwd          : Enabled 
        IPX/SPX         : Disabled 
        DLC/LLC         : Enabled 
        Ethertalk       : Disabled 
        Banner page     : Disabled   
 
>? 
        To Change/Configure Parameters Enter: 
        Parameter-name: value  
 
        Parameter-name  Type of value 
        ip:             IP-address in dotted notation 
        subnet-mask:    address in dotted notation 
        default-gw:     address in dotted notation 
        syslog-svr:     address in dotted notation 
        idle-timeout:   seconds in integers 
        set-cmnty-name: alpha-numeric string (32 chars max) 
        host-name:      alpha-numeric string (upper case only, 32 chars max) 
        dhcp-config:    0 to disable, 1 to enable 
        ipx/spx:        0 to disable, 1 to enable 
        dlc/llc:        0 to disable, 1 to enable 
        ethertalk:      0 to disable, 1 to enable 
        banner:         0 to disable, 1 to enable 
 
        Type passwd to change the password. 
 
 Type "?" for HELP, "/" for current settings or "quit" to save-and-exit. 
 Or type "exit" to exit without saving configuration parameter entries

The first thing you should do is type "passwd" and set an administrator password on the printer. Next, configure the default gateway and then the host name. The rest will be configured using the printer's built-in webserver.

HP JetDirect Webtool

The HP JetDirect webtool has 6 menu tabs available:


Printing from Linux to JetDirect

In order to print from Linux to the JetDirect print server, an entry was made in the /etc/printcap file. I made a new spool directory called /usr/spool/lj4050n, but the default /usr/spool/lpd could also be used. The directory /usr/spool is a symbolic link to /var/spool.

The following is a listing of the /etc/printcap file that was used to communicate with the HP JetDirect print server:

# HP Laserjet 4050n

lp|lj4050n:\
        :lp=/dev/null:sh:\
        :mx#0:\
        :sd=/usr/spool/lj4050n:\
        :rm=e328-laser.domainname.com:rp=text:

Where:

        lp=/dev/null    no local output device; the job is sent to the remote printer
        sh              suppress the banner/header page
        mx#0            no maximum job size
        sd=...          the local spool directory
        rm=...          the remote machine (the host name given to the JetDirect card)
        rp=text         the remote print queue name on the JetDirect card


Configuring Windows for Linux Shared Printer

From Network Neighborhood, double-click on the Linux server's shared printer icon. Windows will ask you to configure the printer. I shared the printer's configuration CD on the Linux box and went to the disk1 folder to find the INF file. The printer configuration/installation will stop and display a message to the effect that it "can't find disk2"; just go up a directory to find the disk2 folder. It will then finish the installation and you are done. I usually run a Print Test Page to ensure that it works properly.

The normal installation procedure is to run the setup utility from the CD. This installs megabytes of data onto the client, which was not what I wanted. I only wanted the print driver, and found that the above method gave me a quick, clean and simple printer configuration.


Summary

It was surprisingly easy to configure SAMBA and have it meet the lab's objectives. When I first ran SAMBA, it took less than 10 minutes to get it communicating with Win95. This was amazing, as I had no prior experience with it.

In configuring the lab environment, I ran into a few problems; some were annoying and some took a bit of work to sort out, but all were solved.

An example of one of the annoying problems was having the [homes] folder show up as a share on a client, identical to the client's home directory. Setting "browseable = No" in the [homes] section of /etc/smb.conf solved that.

The most frustrating problem was finding out that the smbpasswd file is not automatically populated with passwords from shadow files. I kept getting asked for a username and password whenever I tried to connect to a network share. All the documentation indicated that I was doing everything correctly. Manually entering each user's password using the smbpasswd program solved this. I am sure that there is an automatic process for this, as the manual approach would not be acceptable if there were more than my 10 generic user accounts.

All in all, I was able to configure the network more quickly and easily than if I had used an NT server, and the Linux server is totally transparent to the user. Here's an interesting point: this article has taken longer to write than it did to configure the network.


Copyright © 1999, Eugene Blanchard
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


LinuxThreads Programming

By


Some theory...

Introduction

LinuxThreads is a Linux library for multi-threaded programming. LinuxThreads provides kernel-level threads: threads are created with the clone() system call and all scheduling is done in the kernel. It implements the Posix 1003.1c API (Application Programming Interface) for threads and runs on any Linux system with kernel 2.0.0 or more recent, and a suitable C library.

What are threads?

A thread is a sequential flow of control through a program. Thus multi-threaded programming is a form of parallel programming where several threads of control are executing concurrently in the program.

Multi-threaded programming differs from Unix-style multi-processing in that all threads share the same memory space (and a few other system resources, such as file descriptors), instead of running in their own memory space as is the case with Unix processes. So a context switch between two threads in a single process is considerably cheaper than a context switch between two processes.

There are two main reasons to use threads:

  1. Some programs achieve their best performance only when expressed as several threads that communicate with one another (e.g. servers), rather than as a single flow of instructions.

  2. On a multiprocessor system, threads can run in parallel on several processors, allowing a single program to divide its work among different processors. Such programs run faster than a single-threaded program, which can exploit only one CPU at a time.

Atomicity and volatility

Accessing memory shared among threads requires some care, because a parallel program cannot treat shared memory objects as if they were ordinary local memory.

Atomicity refers to the concept that an operation on an object is accomplished as an indivisible, uninterruptible sequence. Operations on data in shared memory may not occur atomically; in addition, the GCC compiler often performs optimizations that buffer the values of shared variables in registers, avoiding the memory operations needed to ensure that all processors see the changes to shared data.

To prevent GCC's optimizer from buffering values of shared memory objects in registers, all objects in shared memory should be declared with the volatile attribute; reads and writes of volatile objects that require just one word access occur atomically.

Locks

++i doesn't always work to add one to a variable i in shared memory, because the load and the store of the result are separate memory transactions: another processor could access i between these two transactions. So, having two threads both perform ++i might increment i by only one, rather than by two.

So you need a mechanism that prevents a thread from working on a variable while another one is changing its value. This mechanism is implemented by the lock scheme, explained just below.
Suppose that you have two threads running a routine which changes the value of a shared variable. To obtain the correct result the routine must: assert a lock on the variable, change its value, and then remove the lock.

When a lock is asserted on a variable, only the thread which locked the variable can change its value. Moreover, the execution of the other thread blocks on its own lock assertion, since only one lock at a time is allowed on a variable. Only when the first thread removes the lock can the second one proceed and assert its own lock.

Consequently, using shared variables may delay memory activity on other processors, whereas ordinary references may use the local cache.

... and some practice

The header pthread.h

The facilities provided by LinuxThreads are available through the header /usr/include/pthread.h, which declares the prototypes of the thread routines.

Writing a multi-threaded program is basically a two-step process: spawning the threads, each running a routine that performs part of the work, and then waiting for all the threads to terminate before using the results.

Let's analyze the two steps starting from a brief description of some basic pthread.h routines.

Initialize locks

One of the first actions you must accomplish is to initialize all the locks. POSIX locks are declared as variables of type pthread_mutex_t; to initialize each lock you will need, call the routine:

int pthread_mutex_init(pthread_mutex_t  *mutex,   
                       const pthread_mutexattr_t *mutexattr);
as in the construction:
#include <pthread.h>
...
 pthread_mutex_t lock;
 pthread_mutex_init(&lock,NULL);
...
The function pthread_mutex_init initializes the mutex object pointed to by mutex according to the mutex attributes specified in mutexattr. If mutexattr is NULL, default attributes are used instead.

The following sections show how to use these initialized locks.

Spawning threads

POSIX requires the user to declare a variable of type pthread_t to identify each thread.
A thread is generated by the call to:

 
int pthread_create(pthread_t *thread, pthread_attr_t *attr, 
                   void *(*start_routine)(void *), void *arg);
On success, the identifier of the newly created thread is stored in the location pointed to by the thread argument, and 0 is returned. On error, a non-zero error code is returned.

To create a thread running the routine f(), passing it a pointer to the variable arg, use:

#include <pthread.h>
...
 pthread_t thread;
 pthread_create(&thread, NULL, f, &arg);
...
The routine f() must have the prototype:
void *f(void *arg);
Clean termination

As the last step you need to wait for the termination of all the threads spawned before accessing the result of the routine f(). The call to:

  
int pthread_join(pthread_t th, void **thread_return);
suspends the execution of the calling thread until the thread identified by th terminates.
If thread_return is not NULL, the return value of th is stored in the location pointed to by thread_return.

Passing data to a thread routine

There are two ways to pass information from a caller routine to a thread routine: using global variables, or passing a pointer to a structure containing all the needed data as the arg argument of pthread_create.

The second one is the best choice in order to preserve the modularity of the code.
The structure must contain three levels of information: first, information about the shared variables and their locks; second, all the data needed by the routine; third, an identification index distinguishing among threads and the number of CPUs the program can exploit (making it easy to provide this information at run time).

Let's inspect the first level of that structure; the information passed must be shared among all the threads, so you must use pointers to the needed variables and locks. To pass a shared variable var of type double, and its lock, the structure must contain two members:

  double volatile *var;
  pthread_mutex_t *var_lock;
Note the use of the volatile attribute, specifying that it is not the pointer itself but the variable var it points to that is volatile.

Example of parallel code

An example of a program which can be easily parallelized using threads is the computation of the scalar product of two vectors.
The code is shown below, with comments inserted.

/* use gcc  -D_REENTRANT -lpthread to compile */

#include<stdio.h>
#include<stdlib.h>
#include<pthread.h>

/* definition of a suitable structure */ 
typedef struct
{
  double volatile *p_s;       /* the shared value of scalar product */
  pthread_mutex_t *p_s_lock;  /* the lock for variable s */
  int n;                      /* the number of the thread */
  int nproc;                  /* the number of processors to exploit */
  double *x;                  /* data for first vector */
  double *y;                  /* data for second vector */
  int l;                      /* length of vectors */
} DATA;

void *SMP_scalprod(void *arg)
{
  register double localsum;   
  long i;
  DATA D = *(DATA *)arg;

  localsum = 0.0;
  
/* Each thread starts calculating the scalar product from i = D.n,
   with D.n = 0, 1, ... , D.nproc-1.
   Since there are exactly D.nproc threads, the increment on i is just
   D.nproc */
  
  for(i=D.n;i<D.l;i+=D.nproc)
     localsum += D.x[i]*D.y[i];
  
/* the thread asserts the lock on s ... */
  pthread_mutex_lock(D.p_s_lock);

/* ... change the value of s ... */
  *(D.p_s) += localsum;

/* ... and remove the lock */
  pthread_mutex_unlock(D.p_s_lock);

  return NULL;
}

#define L 9    /* dimension of vectors */

int main(int argc, char **argv)
{
  pthread_t *thread;    
  void *retval;
  int cpu, i;
  DATA *A;
  volatile double s=0;     /* the shared variable */ 
  pthread_mutex_t s_lock; 
  double x[L], y[L];
  
  if(argc != 2)
    {  
      printf("usage: %s  <number of CPU>\n", argv[0]);
      exit(1);
    }
        
  cpu = atoi(argv[1]);
  thread = (pthread_t *) calloc(cpu, sizeof(pthread_t));
  A = (DATA *)calloc(cpu, sizeof(DATA));

 
  for(i=0;i<L;i++)
    x[i] = y[i] = i;

/* initialize the lock variable */
  pthread_mutex_init(&s_lock, NULL);
  
  for(i=0;i<cpu;i++)
    {
/* initialize the structure */
      A[i].n = i;            /* the number of the thread */
      A[i].x = x;
      A[i].y = y;
      A[i].l = L;
      A[i].nproc = cpu;      /* the number of CPU */
      A[i].p_s = &s;
      A[i].p_s_lock = &s_lock;

      if(pthread_create(&thread[i], NULL, SMP_scalprod, &A[i] ))
        {
          fprintf(stderr, "%s: cannot make thread\n", argv[0]);
          exit(1);
        }
    }

  for(i=0;i<cpu;i++)
    {
      if(pthread_join(thread[i], &retval))
        {
          fprintf(stderr, "%s: cannot join thread\n", argv[0]);
          exit(1);
        }
    }

  printf("s = %f\n", s);
  exit(0);
}


Copyright © 1999, Matteo Dell'Omodarme
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


A Brief History of the 'ls' command

By


The ls command, which lists files, is one of the most essential utilities for Unix and Linux users and, not surprisingly, one of the oldest. In its earliest form it was called listf and was available on the Massachusetts Institute of Technology's Compatible Time Sharing System (CTSS) by July, 1961. By 1963, there were a few options that could be used to vary what listf would list:

listf list files, newest first
listf rev list files, oldest first
listf file extension give information about the named file
listf month day year list files older than the given date

In 1965, listf was extended to recognize ``*'' as a way to list all files that matched a specific pattern, with further improvements to the pattern matching in an updated version dated January 3, 1966. The 1966 version also generalized the syntax and added lots of options, including:

listf (file) list only files, not links
listf (auth) user list files created by the given user
listf (made) mmddyy [mmddyy] list files created between the given dates
listf (srec) list by size
listf (smad) list by date of last modification
listf (rev) list in reverse order
listf (long) list in long format

When CTSS was superseded by Multics, the listf command was renamed to list, which could optionally be abbreviated to ls. The early version of ls had fewer options than late versions of listf had, but still included, along with a few others:

list -all (ls -a) list everything
list -date_time_modified (ls -dtm) list by date of last modification
list -reverse (ls -rv) list in reverse order

When Bell Labs dropped out of Multics development in 1969 and work began on Unix, only the abbreviated name of list, ls, was retained. The First Edition (November 3, 1971) Unix manual documented the following options for ls, all of which are still available today:

ls -l   list in long format
ls -t sort by time modified
ls -a list everything, including names starting with `.'
ls -s include the size of files in the listing
ls -d list directories' names, not their contents

By the Fifth Edition (manual page dated March 20, 1974) the list of options for ls had expanded to include:

ls -r   list in reverse order
ls -u use access time, not modification time
ls -i list i-number for each file
ls -f don't sort the listing at all

The Sixth Edition (May, 1975) added one more:

ls -g   list group IDs in the long format listing

In May and August, 1977, Bill Joy made some modifications of his own to ls at the University of California, Berkeley, which he subsequently distributed as part of the First Berkeley Software Distribution, 1BSD. The most dramatic difference with this version of ls was that it listed files in multiple columns rather than only listing one name per line. The options to control this new format were:

ls -1   list one name per line (no columns)
ls -c list in columns
ls -x list in columns, but sort across, not down
ls -q show control characters as `?'
ls -m everything on one line, separated by commas

There was some controversy over whether it was appropriate to include code to print in columns as an integral part of ls or whether instead the formatting into columns should be done by a separate program into which the output of ls (or any other program) could be piped.

At the beginning of 1979, Bell Labs released Seventh Edition Unix. Its version of ls did not incorporate the controversial changes, and had one new option that conflicted with a letter that Joy had also used:

ls -c   use inode change time, not modification time

A new Berkeley version of ls, dated August 26, 1980 and released with 4.0BSD, resolved the conflict by capitalizing the option to list in columns: ls -C. It also added to what the manual was by this time calling ``an unbelievable number of options:''

ls -F   mark directories with `/', programs with `*'
ls -R recursively list subdirectories
ls -b show control characters as their ASCII codes

Another revision in 4.2BSD (July 28, 1983) removed the -m, -b, and -x options -- but not before System V Release 2 had added all three of these to its own repertoire of options. They temporarily stayed alive there, but none of the three survived POSIX standardization so the 4.2BSD version of ls is essentially ls as we know it today.


Copyright © 1999, Eric Fischer
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


Advanced Programming in Expect: A Bulletproof Interface

By


Links to the scripts in this article are included in the Linux Gazette. Off-site links to Expect manual pages are indicated by a "(*)" after the link.

Introduction:

This article assumes the reader has a thorough understanding of the basics of the Expect scripting language and is looking for advanced solutions. For more on Expect, see:
http://www.cotse.com/dlf/man/expect/index.html (*)
In the design of automated systems using the Expect programming language, one of the more difficult hurdles many programmers encounter is ensuring communication with ill-behaved connections and remote terminals. The send_expect procedure detailed in this article provides a means of ensuring communication with remote systems and handles editing and rebroadcast of the command line. Where a programmer would usually send a command line and then expect the echo from the remote system, this procedure replaces those lines of code and provides the most reliable interface I have come across. Features of this interface include:

Communication with local processes (i.e. those running on the same workstation as the Expect process) is typically not problematic and does not require the solutions detailed in this article. External processes, however, can create a number of problems that may or may not affect communication, but will affect an automated system's ability to determine the success of the communication. When communication is corrupted, the corruption is not always immediately obvious: a corrupted command may trigger an error message, but corrupted data may still be considered valid, so the error would not show up immediately and may cause a variety of problems later. This is why it is necessary to ensure that the entire string that is transmitted is properly received and echoed by the remote system.

The basic idea of this interface is to send the command string except for its terminating character (usually, a carriage return) and look at the echo from the remote system. If the two can be matched using the regular expressions in the expect clauses, then the terminating character is sent and transmission is considered successful. If success cannot be determined, the command line is cleared instead of being sent, and alternative transmission modes are used.

In many cases, nothing more than expecting the exact echo of the string is sufficient. If you're reading this article, though, I suspect that you've encountered some of the problems I have when programming in Expect, and you're looking for the solution here. If you're just reading out of interest, the problems arise when automating a session on a machine off in a lab, or on the other side of the world. Strange characters pop up over the connection, and the terminal you're connected to does weird things with its echo, but everything is working. It becomes very difficult to determine if what was sent was properly received when you have noise on the connection, terminal control codes inserted in the echo, and even server timeouts between the automation program and the remote session. This interface survives all of that, and if it can't successfully transmit the string, it means that the connection to the remote system has been lost.

The code provided in this article is executable, but needs to be incorporated into any system in which it is to be used. Ordinarily, system-dependent commands need to be added based on the needs of the target system. Also, this code uses simple calls to the puts (*) command to output status messages - these should be changed to use whatever logging mechanism is used by the rest of the system. A final caveat, and I can't emphasize this enough: always wear eye protection.


Setting Up The Interface:

The procedures provided in this article are send_expect_init, send_only, send_expect, and send_expect_report.

The interface is initialized with the send_expect_init procedure, which sets up all the globals required by the other procedures. See the section on controlling the behavior of the interface for an explanation of the parameters. The send_expect_init procedure is run once, at the beginning of execution (before the interface is to be used). It may be run a second time to restore settings, if necessary.

The send_only procedure is a wrapper for the exp_send (*) command, and is used by send_expect to transmit strings. The only time this procedure is called directly is for strings that are not echoed, such as passwords, and multi-byte character constants, such as the telnet break character (control-]).

The send_expect procedure is the actual interface between the automated system and its remote processes, and is detailed in the next section.

Finally, the send_expect_report procedure is used at the end of execution to output the statistics of the interface for debugging. This procedure may also be run during execution, if incremental reports are needed.

Using The send_expect Procedure

Once the interface has been initialized using send_expect_init, and a process has been spawned, it is ready to be used with the syntax:
send_expect id command;
where
id = the spawn id of the session on which to send the command
command = the entire command string including the terminating carriage-return, if any.
This syntax, and the implementation of the expression-action lists, support multiple-session applications.

Many people who follow the documented examples tend to write the same kind of error-prone code, because they follow the example as if it's the best example, instead of just a simple example. Examples are kept uncluttered by the little details that make the difference between bulletproof code and code that will eventually fail. The examples provided in this article are simple examples but with more attention to detail, and where warranted a complete implementation is provided as an example. The send_expect procedure usually replaces only two lines of code in an existing system.

The full syntax for properly using the interface is actually:

if { [send_expect $id $command] != 0} {
    ## handle your error here
}



How It Works:

The interface uses four different transmission modes, in order:

If a mode fails, the command line is cleared by sending the standard control-U, the expect buffer is cleared, and the next mode is tried. Each mode except the last one can also have a failure tolerance set, using:
sendGlobals(ModeXFailMax), where X is 1, 2, or 3.
If this maximum is set to a positive number, then once the failures for that mode exceed this value, the mode is no longer used. If it is set to 0, each mode is tried for each transmission, regardless of the number of failures. Each of the modes uses the send_only procedure as a wrapper for exp_send. If this procedure returns an error, it most likely means that the connection was lost, and the spawn id is checked to see if the session is still active. The error is returned to send_expect, which in turn returns an error to the calling procedure.

For local processes and robust remote connections, mode 1 is usually sufficient. If the remote system is a bit slow, mode 2 may be required. Mode 3 has proven invaluable when connected to routers and clusters which provide rudimentary terminal control. Mode 4 is rarely required, but acts as a backup to mode 3.


Moving Window Diagnostics

Expect provides a means of controlling the output of its internal diagnostics and expression-matching attempts using the command "exp_internal" (*). The send_expect interface makes use of this command to create a diagnostics output file for each transmission attempt - for each attempt, a new diagnostics log file is created using exp_internal -f. If transmission is successful, the file is deleted. If it fails, the file is renamed using the syntax
send.n.i.command.diags
where
n = the number of the failure
i = the spawn id of the channel that had the failure
command = the first word of the command string that failed to get sent properly.
If you've ever read a 30-megabyte log file with all of the diagnostics from the beginning to the end of execution, you'll understand why this is necessary. The diagnostics files created using this method are usually less than 2 kilobytes, and since they are directly associated with failures (because the window file is deleted for successful transmissions), debugging is far more efficient.

The moving window diagnostics file is the fastest and smallest way to implement full diagnostics output during the execution of a send command. If the transmission succeeds, this file is deleted. If there is a failure, this file is closed and renamed, and on the next invocation of the send command a new file is created. This results in very small files (comparatively) with all of the diagnostics from expect and the user-defined messages, from the very beginning of the attempt to send the command.

Ideally, if there are no failures during execution, there should be no more than one send diagnostics file in existence at any time, named send.diagnostics. If there are diagnostics files, each is associated with a particular failure and should be used in debugging that failure.


Controlling The Behavior Of The Interface:

The sendGlobals array contains all of the parameters used by the interface, and may be modified at runtime to control how the interface works. This section will cover the meanings of these parameters and how they may be modified. See send_expect_init for the initial values of these parameters.

The failure limit elements (Mode1FailMax, Mode2FailMax, and Mode3FailMax) determine how many failures are permitted for modes 1, 2 and 3 (respectively). A value of zero disables this limitation, and any positive integer sets the maximum number of failures for that mode before it is no longer used by the interface. There is no failure limit for the last mode.

The element useMode allows the system to determine which transmission mode should be used first, so that the less reliable modes (the first and second) can be bypassed. Allowable values for this parameter are 1, 2, 3, or 4. Invalid values will be replaced by the default mode (1).

If transmission errors are not considered fatal, the sendErrorSeverity element may be specified to a more tolerant value. Note that this parameter is not used internally, so if the automated system does not access this value, it won't affect the interface.

The kill element defines the command line kill character, which is defaulted to the Gnu-standard control-U.

The diagFile parameter names the temporary internal diagnostics file (generated from exp_internal).

The logDiags element allows disabling of all diagnostics output for faster execution, but be forewarned that disabling this feature will make debugging much more difficult.

The sleepTime parameter, when set to a positive integer, causes the interface to sleep for the designated amount of time before starting transmission. This is useful if the automated system appears to be going faster than the remote system can handle - the consistent loss of characters in the transmission phase usually indicates a speed and synchronization problem, and this parameter is provided as an allowance for such cases.

The interval and delay elements represent the two items in the send_slow list, which is used by the second and third modes.

For experimentation purposes, it is recommended that these parameters be modified by the automated system at runtime, rather than directly editing the defaults in the initialization procedure. Once valid settings are found the defaults may be changed to reflect them.


Failures:

When the send_expect procedure returns a failure, it indicates an unreliable connection to the remote system, and manual verification will confirm this. Such a failure is fatal to the reliability of any automated system, and must be corrected before the system can run properly.

If the procedure itself appears to be malfunctioning, the diagnostics files that were created during the failure should help in debugging. This interface has yet to fail with a reliable connection.
 
 


Copyright © 1999, David Fisher
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


Linux, Java and XML

By


Abstract: This article is a basic introduction to the new web markup language XML and the transformation language XSL. Here I show how the Apache web server can be configured, using the servlet engine JServ, to do server-side XML/XSL transformation using Apache's Cocoon servlet.

Future updates for this article will be located at http://www.inconn.ie/article/cocoon.htm
(The domain name is currently non-functional but is expected soon.)

Introduction

The eXtensible Markup Language (XML) is a powerful new web markup language (a W3C Recommendation since February 1998). It is a powerful way of separating web content and style. A lot has been written about XML, but to use it effectively in web design the technologies behind it must be understood. To this end I have added my own two pence worth to the already vast amount of literature out there on the subject. This article is not a place to learn XML, nor is it a place where the capabilities of XML are explored to their fullest, but it is a place where the technologies behind XML can be put into practice immediately.

Before I go any further, I should recommend the two sites where definitive information on XML can be obtained. The first is the World Wide Web Consortium (W3C) site http://www.w3.org/. The W3C is responsible for the XML specification. The second site is the XML frequently asked questions site (http://www.ucc.ie/xml/), which will answer any other questions. I also recommend the XML pages hosted by IBM, http://www.ibm.com/xml/, where you will find a wide range of excellent tutorials and articles on XML.

SGML (around since 1986) is the mother of all markup languages. SGML can be used to document any conceivable system, from complex aeronautical design to ancient Chinese dialects. However, it suffers from being overly complex and unwieldy for routine web applications. HTML is basically a very cut-down version of SGML, originally designed with the scientific publishing community in mind. It is a simple markup language (it has been said that "anyone with a pulse can learn it"), and with the explosion of the web it is clear that the people with pulses have spoken. Since its foundation the web has grown in complexity and has long outgrown its lowly beginnings in the scientific community.

Today web pages need to be dynamic, interactive, backed by databases, secure, and eye-catching to compete in an ever more crowded cyberspace. Enter XML, a new markup language designed to deal with the complexities of modern web design. XML is only 20 percent as complex as SGML yet can handle 80 percent of SGML situations (believe me, when you are talking about coding ancient Chinese dialects, 80 percent is plenty). In the following section I will briefly compare two markup examples, one in HTML and one in XML, demonstrating the benefits of the XML approach. In the final section I will show you how to set up an Apache web server to serve an XML document, so that you may immediately start using XML in your web design.

HTML

The following example is a very simple HTML document that everyone will be familiar with:

      <html>
         <head>
          <title>This is my article</title>
         </head>
         <body>
          <h1 align="center">This is my article</h1>
          <h3 align="center">by <a href="mailto:[email protected]">Eoin Lane</a></h3>
          ...
         </body>
        </html>
  Two important points can be made about this document. First, the markup describes only presentation: nothing in the tags tells a program that the h3 element contains the author's name and mail address. Second, content and style are intermixed, so changing the look of the page means editing the data itself.

XML addresses these two issues.

XML

The XML equivalent is as follows:

        <?xml version="1.0"?>
        <page>
         <title>This is my article</title>
         <author>
          <name>Eoin Lane</name>
          <mail>[email protected]</mail>
         </author>
         ...
        </page>

The first thing to note is that this document, along with all other valid XML documents, is well formed. To be well formed, every element must have an opening and a closing tag. A program searching for the mail address then has only to locate the text between the opening and closing tags of mail.

The second and crucial point is that this XML document contains just data. There is nothing in this document that dictates how to display the author's name or his mail address. In practice it is easier to think about web design in terms of data and presentation separately. In the design of medium to large web sites, where all the pages have the same look and only the data changes from page to page, this is clearly a better solution. It also allows a division of labour, where style and content can be handled by two different departments working independently, and it opens the possibility of having one set of data with a number of ways of presenting it.

An XML document can be presented using two different methods. One is to use a Cascading Style Sheet (CSS) (see http://www.w3.org/style/css/) to style the document directly. The other is to use a transformation language called XSL, which converts the XML document into HTML, XML, PDF, PostScript, or LaTeX. As to which one to use, the W3C (the people responsible for these specifications) has this to say:

Use CSS when you can, use XSL when you must.

They go on to say:

The reason is that CSS is much easier to use, easier to learn, thus easier to maintain and cheaper. There are WYSIWYG editors for CSS and in general there are more tools for CSS than for XSL. But CSS's simplicity means it has its limitations. Some things you cannot do with CSS, or with CSS alone. Then you need XSL, or at least the transformation part of XSL.

So what are the things you cannot do with CSS? In general everything that needs transformations. For example, if you have a list and want it displayed in lexicographical order, or if words have to be replaced by other words, or if empty elements have to be replaced by text. CSS can do some text generation, but only for generating small things, such as numbers of section headers.

XSL

XSL (eXtensible Stylesheet Language) is the language used to transform and display XML documents. It is not yet finished, so beware! It is a complex document-formatting language that is itself written as XML. It can be further subdivided into two parts: transformations (XSLT) and formatting objects (sometimes referred to as FO, XSL:FO or simply XSL). For the sake of simplicity I will only deal with XSLT here.

XSL Transformations (XSLT)

On the 16th of November 1999 the World Wide Web Consortium announced the publication of XSLT as a W3C Recommendation. This basically means that XSLT is stable and will not change in the future. The XML document shown above can be transformed into an HTML document, and subsequently displayed on any browser, using the following XSLT file.

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/XSL/Transform/1.0">

  <xsl:template match="page">
   <html>
    <head>
     <title>
      <xsl:value-of select="title"/>
     </title>
    </head>
    <body bgcolor="#ffffff">
     <xsl:apply-templates/>
    </body>
   </html>
  </xsl:template>

  <xsl:template match="title">
   <h1 align="center">
    <xsl:apply-templates/>
   </h1>
  </xsl:template>

  <xsl:template match="author">
   <h3 align="center">
    by <xsl:apply-templates/>
   </h3>
  </xsl:template>

  <xsl:template match="mail">
   <h2 align="left">
    <xsl:apply-templates/>
   </h2>
  </xsl:template>

</xsl:stylesheet>
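Applied to a document whose root page element contains title, author and mail elements, the stylesheet produces HTML along these lines (a sketch of the output with placeholder text, not a capture of it):

```html
<html>
 <head>
  <title>A sample page</title>
 </head>
 <body bgcolor="#ffffff">
  <h1 align="center">A sample page</h1>
  <h3 align="center">by A. N. Author</h3>
  <h2 align="left">[email protected]</h2>
 </body>
</html>
```

Note that the title appears twice: once through xsl:value-of in the head, and again through its own template when xsl:apply-templates walks the children of page.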

To learn more about XSLT, I recommend the XSLINFO site (http://www.xslinfo.com/) as a good starting point. I also found the revised Chapter 14 of the XML Bible to be very good; the revision is based on the specifications that eventually became the Recommendation.

With the arrival of the next generation of browsers, i.e. Netscape 5 (currently under construction, http://www.mozilla.org/), this transformation will be done client side: when an XML file is requested, the corresponding XSL file will be sent along with it, and the transformation will be done by the browser. For now, however, most browsers are only capable of displaying HTML, so the transformation must be done server side. This can be accomplished by using Java servlets (Java server-side programs).

The Cocoon servlet is such a servlet, written by some very clever people at Apache (http://www.apache.org/). It takes an XML document and transforms it using an XSL document; an example of such a transformation would be converting the XML document into HTML so that the browser can display it. So if your web server is configured to run servlets, and you include the Cocoon servlet, then you can start designing your web pages using XML. The rest of this article will show exactly how to do this.

How do I do it?

I have tested the following instructions on a fresh installation of Red Hat 6.0, so I know they work.

Apache Web Server

First set up the Apache web server. On Red Hat this comes pre-installed, but I want you to blow it away using:

rpm -e --nodeps apache

and do not worry about the error messages. Next get hold of the most recent Apache (http://www.apache.org/) (currently version 1.3.9) and copy it somewhere handy. I put mine in /usr/local/src. Untar and unzip the file using:

tar zxvf apache_1.3.9.tar.gz

This will expand the installation into the directory /usr/local/src/apache_1.3.9. Change into this directory and configure, build and install the application using the following:

./configure --prefix=/usr/local/apache --mandir=/usr/local/man --enable-shared=max

make

make install

This will install Apache into the directory /usr/local/apache. The important file to note here is httpd.conf, which can be found in the directory /usr/local/apache/conf. This file contains most of the information necessary to run Apache correctly: where to serve the web documents from, virtual web servers, and directory aliases. We will be returning to this file shortly, so become familiar with its general layout. At this stage I had to reboot Linux and then start Apache using the following instruction:

/usr/local/apache/bin/apachectl start

To test it, point your web browser to http://localhost/ and you're in business, hopefully! For good web design and planning I would refer you to an article that I found invaluable in setting up my own web site: Better Web Site Design under Linux

Java and JSDK

As of October, IBM have released their Java Development Kit 1.1.8 for Linux, which claims to be faster than the corresponding JDKs from Blackdown (http://www.blackdown.org/) and Sun. Download the IBM JDK (see http://www.ibm.com/java/) and again untar and unzip it, into the /usr/local/src/jdk118 directory. Next, download JavaSoft's JSDK2.0, the Solaris version (not JSDK2.1 or any other flavours you might be tempted to get), and untar and unzip it - again I put it in /usr/local/src/JSDK2.0. Add the following, or equivalent, to /etc/profile to make them available to your system.

JAVA_HOME="/usr/local/src/jdk118"
JSDK_HOME="/usr/local/src/JSDK2.0"
CLASSPATH="$JAVA_HOME/lib/classes.zip:$JSDK_HOME/lib/jsdk.jar"
PATH="$JAVA_HOME/bin:$JSDK_HOME/bin:$PATH"
export PATH CLASSPATH JAVA_HOME JSDK_HOME

To test them run:

java -version

at the command prompt, and you should get back the following message

java version "1.1.8"

and to test the servlet development kit run:

servletrunner

and if all goes well you should get back the following:

servletrunner starting with settings:
port = 8080
backlog = 50
max handlers = 100
timeout = 5000
servlet dir = ./examples
document dir = ./examples
servlet propfile = ./examples/servlet.properties

We are now ready to install Apache's servlet engine, ApacheJServ.

ApacheJServ

Again, download the latest ApacheJServ (version 1.0 at this time, although version 1.1 is in its final beta stage) from Apache's Java site (http://java.apache.org/) and expand it into /usr/local/src/ApacheJServ-1.0/. Configure, make and install it using the following instructions:

./configure --with-apache-install=/usr/local/apache --with-jsdk=/usr/local/src/JSDK2.0

make

make install

When this has completed successfully, add the following line to the end of the httpd.conf file that I referred to earlier during the Apache web server installation:

Include /usr/local/src/ApacheJServ-1.0/example/jserv.conf

and restart the web server using:

/usr/local/apache/bin/apachectl restart

Now comes the moment of truth, point your web browser to http://localhost/example/Hello and if you get back the following two lines:

Example Apache JServ Servlet
Congratulations, Apache JServ is working!

then you are almost home.

Cocoon

Finally, download the latest version of Cocoon (version 1.5 at this time) from Apache's Java site (http://java.apache.org/). Cocoon is distributed as a Java jar file and can be extracted using the jar command. First create the directory /usr/local/src/cocoon, then change into it and expand the Cocoon jar file:

mkdir /usr/local/src/cocoon

cd /usr/local/src/cocoon

jar -xvf Cocoon_1.5.jar

Now comes the tricky part: configuring the JServ engine to recognise files with a .xml extension and to use the Cocoon servlet to process and serve them.

Locate the file jserv.properties which you will find in the directory /usr/local/src/ApacheJServ-1.0/example/ and at the end of the section that begins:

# CLASSPATH environment value passed to the JVM

add the following:

wrapper.classpath=/usr/local/src/cocoon/bin/xxx.jar

In the case of Cocoon 1.5 this means adding the following three lines:

wrapper.classpath=/usr/local/src/cocoon/bin/fop.0110.jar
wrapper.classpath=/usr/local/src/cocoon/bin/openxml.106-fix.jar
wrapper.classpath=/usr/local/src/cocoon/bin/xslp.19991017-fix.jar
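Rather than typing these entries by hand, you can generate one wrapper.classpath line per jar with a short shell loop. This is just a sketch: it demonstrates the idea against a throwaway directory, and on a real installation you would point COCOON_BIN at /usr/local/src/cocoon/bin and append the output to jserv.properties yourself.

```shell
# Sketch: emit a wrapper.classpath line for every jar in the Cocoon bin
# directory, so the list survives version upgrades.  Demonstrated with a
# temporary directory standing in for /usr/local/src/cocoon/bin.
COCOON_BIN=$(mktemp -d)
touch "$COCOON_BIN/fop.0110.jar" "$COCOON_BIN/xslp.19991017-fix.jar"

for jar in "$COCOON_BIN"/*.jar; do
    printf 'wrapper.classpath=%s\n' "$jar"
done
```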

These file names will change with different versions, though. The next file to locate is example.properties, again found in the /usr/local/src/ApacheJServ-1.0/example/ directory; add the following line:

repositories=/usr/local/src/cocoon/bin/Cocoon.jar

In my example.properties file it meant changing the line:

repositories=/usr/local/src/ApacheJServ-1.0/example

to the following:

repositories=/usr/local/src/ApacheJServ-1.0/example,/usr/local/src/cocoon/bin/Cocoon.jar

Also add the following line to the end of the example.properties file:

servlet.org.apache.cocoon.Cocoon.initArgs=properties=/usr/local/src/cocoon/bin/cocoon.properties

The JServ engine is now properly configured, and all that is left is to tell Apache to direct any call for an XML file (or any other file you want Cocoon to process) to the Cocoon servlet. For this we need the JServ configuration file, jserv.conf, mentioned earlier (again in the same directory). Include the following line:

ApJServAction .xml /example/org.apache.cocoon.Cocoon

In order to access the Cocoon documentation and examples, add the following lines to the alias section of your httpd.conf file:

Alias /xml/ "/usr/local/src/cocoon/"

<Directory "/usr/local/src/cocoon/">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Alias /xml/example/ "/usr/local/src/cocoon/example/"

<Directory "/usr/local/src/cocoon/example/">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Restart the web server for this to take effect:

/usr/local/apache/bin/apachectl restart

Now point your browser at http://localhost/xml/ to browse the documentation, and at http://localhost/xml/example/ to try out the examples. If Cocoon complains about exceeding a memory limit, then open the file cocoon.properties, found in the /usr/local/src/cocoon/ directory. Find the line

store.memory = 150000

and change it to something lower, like 15000. To try out the PDF examples, which I think are very cool, you have to have Acrobat Reader installed as a Netscape plug-in, but it is worth the extra effort to get this working.

Cocoon 2

The Cocoon 1.x series has basically been a work in progress. What started out as a simple servlet for static XSL transformations has grown into something much more, and design decisions taken at the beginning of the project are now hampering further development as the scale and scope of the project become apparent. To add to this, XSL is itself a work in progress, although the current version of XSLT has become a W3C Recommendation (as of November 16, 1999).

Cocoon 2 intends to address these issues and provide us with a servlet for XML transformations that scales to handle large quantities of web traffic. Web design of medium to large sites will in future be based increasingly around XML as its benefits become apparent, and the Cocoon 2 servlet will hopefully provide us with a way to use it effectively.

Conclusions

Even as I have been writing this article, Apache have opened a new site dedicated exclusively to XML (see http://xml.apache.org/). The Cocoon project has obviously grown beyond all expectations, and with the coming of Cocoon 2 it will be a commercially viable servlet that makes designing web sites in XML a reality. The people at Apache deserve a lot of credit for this, so write to them and thank them, join the mailing lists and generally lend your support. After all, this is open source code, and that is what Linux is all about.

lane.xml
An XML version of this article.
lane.xsl
Its XSL style sheet.
lane.xml.txt
A text version of the XML source.
lane.xsl.txt
A text version of the XSL source.


Copyright © 1999, Eoin Lane
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


DHCP for the Home Network

By JC Pollman and Bill Mote


Tired of lugging that laptop from the office to home only to find that once you get there you have to boot your M$ operating system, change your network settings to work with your home network, and then ... ah-hem REBOOT?!  I was!  [Enter DHCP from stage left.]   If your office and home environments are configured correctly, DHCP is about to be your new best friend!

Disclaimer: This article provides information we have gleaned from reading the books, the HOWTOs, man pages, Usenet news groups, and countless hours banging on the keyboard. It is not meant to be an all-inclusive, exhaustive study of the topic, but rather a stepping stone from the novice to the intermediate user.  All the examples are taken directly from our home networks, so we know they work.

How to use this guide:

  • Words encapsulated by square brackets like [Enter] indicate the depression of a key on the keyboard or a mouse button [Mouse1]
  • Words encapsulated by squiggly brackets like {your name here} indicate data that will/should be substituted with "real" data
  • Text depicted in italics is a command that you, the user, should type at a prompt
Prerequisites: This guide assumes that you have at least DHCP 2.0 installed and that your local IP network is set up and functioning.

DHCP@Home, why? Convenience.  I was fed up with changing network configurations when I came home and then again when I went back to the office.  It was completely unacceptable to add a bolt-on utility to manage these settings for me.  Then one day it hit me.  We use DHCP at work.  Why shouldn't I use it at home?  It'll give me an IP address on the local network, the necessary DNS information, and the correct gateway.

DHCP@Work, why?  This one is a little easier to justify than simple inconvenience.  I worked at a facility with ~600 workstations.  On my first day with the company I had to set up my own machine and get it on the network.  It took nearly 30 minutes for an ill-equipped network administrator (I'll leave his name out of this article since he's still providing network services for a local company) to find an empty IP address that I could statically assign to my system.  He had paper printouts of 3 complete Class-C networks (192.168.[1-3].[1-254]).  It doesn't take higher math to figure out that there were over 750 addresses available, 254 of which were on the network segment I was on.

His process:

1. Find an open slot on the printed copy
2. Verify it was open in his electronic copy
3. Ping the address to verify no one was on it
4. Give it to me, write it on the paper and enter it electronically
That was Friday.  Monday morning I had an IP conflict.  Real accurate system.  All I knew about DHCP was that I had to select it to get an IP address from my ISP at home.  I had no idea how it worked, nor whether we could use it to fix this `problem'.  I joined forces with a different network services group in the company, they educated me, and we did a full frontal assault on every machine.  DHCP is running to this day ...

IP Numbering Schemes:  If you've decided to run DHCP at home then be smart about it.  Figure out a numbering scheme and stick with it.  You'll know exactly what's on your network just by seeing the IP address.  This advice holds more weight in the corporate world where you're likely to be hosting more machines.  Anyway, this is what I use at home and it closely resembles work as well:

IP Address Range     Hosts

* Important machines and network equipment *

    .001             Your router (if present)
    .002 - .009      Network equipment (switches and hubs)
    .010 - .075      Servers
    .076 - .099      Network Printers

* User workstations *

    .100 - .254      DHCP range
 

Sticking with a scheme like the one above will make visual scans of your logs much easier, since you'll quickly notice an oddball IP address doing something it's not supposed to, or not capable of, doing.  A printer trying to telnet into your machine?  Not likely ... and a good indicator that you didn't read last month's article on security!

Are you ready?  Type the following command at a shell prompt:
 

whereis dhcpd[Enter]
You should see output similar to the following line.  If not, then you still need to install DHCP.  Get to it and come back here:
dhcpd: /usr/sbin/dhcpd /etc/dhcpd.conf  /etc/dhcpd.leases /etc/dhcpd.leases~


/etc/dhcpd.conf: This is where we start.  This file is the key to it all, and it's extremely simple in design.  Here's what mine looks like:
 

/etc/dhcpd.conf

September 18, 1999
Author: Bill Mote
default-lease-time 36000;
max-lease-time 7200;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.10;
option domain-name-servers 192.168.1.10;
option domain-name "mynetwork.cxm";

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.254;
}

/etc/dhcpd.leases: We now need to give the dhcp server somewhere to start.  We do so by creating an empty leases file.  Type the following command at a shell prompt:

touch /etc/dhcpd.leases[Enter]
Below is a `live' dhcpd.leases file from my system.  Please note that your file will be empty until a DHCP client successfully obtains an IP address from your server.
 
 

/etc/dhcpd.leases

September 18, 1999
Author: {auto generated by dhcpd}
lease 192.168.1.100 {
        starts 6 1999/09/18 17:27:36;
        ends 6 1999/09/18 17:37:36;
        hardware ethernet 00:60:08:e3:60:03;
        uid 01:00:60:08:e3:60:03;
        client-hostname "NoFear";
}
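Because every entry in this file follows the same pattern, a one-line awk script is enough to summarise which client holds which address. This is a sketch, demonstrated against an inline copy of the sample lease rather than the live file; point it at /etc/dhcpd.leases on a real server.

```shell
# Sketch: list "IP hostname" pairs from a dhcpd.leases file.
cat > /tmp/dhcpd.leases.sample <<'EOF'
lease 192.168.1.100 {
        starts 6 1999/09/18 17:27:36;
        ends 6 1999/09/18 17:37:36;
        hardware ethernet 00:60:08:e3:60:03;
        uid 01:00:60:08:e3:60:03;
        client-hostname "NoFear";
}
EOF

# prints: 192.168.1.100 NoFear
awk '/^lease /{ip=$2} /client-hostname/{gsub(/[";]/,"",$2); print ip, $2}' /tmp/dhcpd.leases.sample
```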

Let's go already!  It's what you've been waiting for.  Time to startup the dhcp server and get a client running.  Type the following command at a shell prompt:

/usr/sbin/dhcpd[Enter]
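If the daemon starts silently, here is a quick way to confirm it is actually running. A sketch: the bracketed grep pattern keeps grep from matching its own entry in the process table.

```shell
# Sketch: report whether a dhcpd process shows up in the process table.
if ps ax | grep -q '[d]hcpd'; then
    echo "dhcpd is running"
else
    echo "dhcpd does not appear to be running - check /var/log/messages" >&2
fi
```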
That's it for the server side.  Now onto client configuration.

M$ client:  For the purposes of this article I'm going to assume that you've got mostly M$ machines connected to a Linux server.

The Microsoft client is actually very easy to configure.  Follow these mouse clicks:
 

Start -> Settings -> Control Panel -> Network
Find the `TCP/IP protocol' entry for your network adapter.  Highlight it and get to the properties window by pressing [Mouse1] on the `Properties' button.  Since pictures are better than words, the following three pictures illustrate how your TCP/IP settings should be configured:
 
[Three screenshots of the Win9x TCP/IP Properties dialog showing the DHCP settings]
After a quick M$ reboot you should be golden!  For more info on DHCP see the DHCP mini-HOWTO.  Notice the word "should" in the opening sentence of this paragraph.  Well, here's what I've found.
 

DHCP and Win98 SE (Second Edition) -- [Bill steps onto his soap box] Nothing like a little competition to spark the boys in Redmond, eh?  The impression was that *everyone* was setting up Linux boxes to share their Internet connections at home.  Microsoft responded in short order with Internet Connection Sharing, included in Win98 SE.  Quite frankly, I loved it.  I used it to get our church computers running from a single dial-up connection and it's working great.

However, Win98 SE's ICS seems to do something a little off the wall when it comes to DHCP.  My Win98 SE machine at home assigns an IP address to my NIC if it can't find a DHCP server.  In the old days Win9x would use its last known IP address as long as the lease hadn't expired ... not any more.

If you have problems getting your Win98 SE box to obtain an IP address, then you may want to assign it a permanent one.  What about the numbering scheme, you ask?  You `could' set a small range aside at the bottom or top of the DHCP range, but still at or above its default starting point of .100 (for example, .100 - .105).

If you still want the box to have an IP address assigned by DHCP then you can do the following:

  • Give the Win98 SE box an IP address on your network.  Don't worry it'll just be temporary.
  • Wait a couple of days (haven't narrowed the time down yet)
  • Set the machine back to DHCP
Ridiculous?  Yes.  I just haven't looked for a way to permanently fix the problem.  There's probably some undocumented Win9x registry tweak ...  If anyone knows of one, please send it to my e-mail address below!

Next month we will be discussing Linux Client Side DHCP.



Copyright © 1999, JC Pollman, Bill Mote
Published in Issue 48 of Linux Gazette, December 1999

"Linux Gazette...making Linux just a little more fun!"


The Back Page


About This Month's Authors


Steven Adler

While not building detectors in search of the quark gluon plasma, Steve Adler spends his time either 4 wheeling around the lab grounds or writing articles about the people behind the open source movement.

Eugene Blanchard

Eugene is an Instructor at the Southern Alberta Institute of Technology in Calgary, Alberta, Canada where he teaches electronics, digital, microprocessors, data communications, and operating systems/networking in the Novell, Windows and Unix worlds. When he is not spending quality time with his wonderful wife and 18 month old daughter watching Barney videos, he can be found in front of his Linux box. His hobbies are hiking, backpacking, bicycling and chess.

Eric Fischer

Eric currently lives in Chicago, Illinois and is employed by RootsWeb, Inc. He was formerly involved in the development of the Vim text editor.

David Fisher

Dave has been working as a software consultant for 12 years doing development, QA, and automation in both military and commercial projects. He also designs strategic games and is a published musician.

Eoin Lane

Eoin is the managing director of InConn Technologies Ltd., an intranet document management company. After completing his Ph.D. in chemical physics, Eoin saw the potential of using a Linux server to centrally manage documents; from this he decided to set up InConn Technologies Ltd. to explore the technology commercially. Eoin specialises in XML solutions for complex document and knowledge management problems.

JC Pollman and Bill Mote

JC has been playing with Linux since kernel 1.0.59. He spends way too much time at the keyboard and even lets his day job - the military - interfere once in a while. His biggest concern about Linux is the lack of documentation for the intermediate user: there is already too much beginner's material, and the professional material is often beyond the new enthusiast.

Bill is the Technical Support Services manager for a multi-billion dollar publishing company and is responsible for providing 1st and 2nd level support services to their 500+ road-warrior sales force as well as their 3,500 workstation and laptop users. He was introduced to Linux by a good friend in 1996 and thought Slackware was the end-all-be-all of the OS world ... until he found Mandrake in early 1999. Since then he's used his documentation skills to help those new to Linux find their way.

Bob Reid

Rob is doing his Ph.D. in Astronomy at the University of Toronto, where he was a system administrator on the side for a while along with running his own Linux boxes at home and school since 1995.


Not Linux


[ Penguin reading the Linux Gazette ]

This issue is short on articles because the Thanksgiving weekend pushed the deadline too far forward for some of the authors. Articles by LG regulars Bill Bennet, Slambo, Sean Lamb and Anderson Silva will be in the January issue.

Since the Linux Gazette Spam Count has been hovering steadily at 26-28%, I decided to look at a new statistic this month: the Linux Gazette Bounce Count. This is the number of messages that were undeliverable on the 5500-member lg-announce mailing list. Bounce count: 1034 messages, creating a whopping 7.5 MB file of error messages. Most of these were because of the mail loop created by the [email protected] address, which is now unsubscribed. There are still some problems with duplicate messages on the list; we are currently trying to pin down where these messages are being generated.

Today is the first day of the World Trade Organization meetings here in Seattle, and the protesters started a day earlier than expected, on Sunday. Nothing unusual to report so far that you won't find in the papers.

Finally, a tongue twister from my boss Dan Wilder. Say it three times fast:

How many nets could a Netwinder wind if a Netwinder could wind nets?


Linux Gazette Issue 48, December 1999, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette,
Copyright © 1999 Specialized Systems Consultants, Inc.