LINUX GAZETTE

July 2001, Issue 68       Published by Linux Journal


Visit Our Sponsors:

Penguin Computing
Linux NetworX

Table of Contents:

-------------------------------------------------------------

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette.

Copyright © 1996-2001 Specialized Systems Consultants, Inc.

The Mailbag



HELP WANTED : Article Ideas

Send tech-support questions, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.



kernel tuner's helper

Mon, 04 Jun 2001 14:56:15 +0200
Antonio Sindona ()

Is there a tool to monitor kernel parameters and to change their limits? Let me explain better: I'd like to check, on occasion, whether some kernel parameter has reached its limit, and have the possibility to change it. Moreover, I'd like a way to see ALL the available kernel parameters so I can set them to optimize the system. On HP-UX there is a tool (SAM) which does this work. Any ideas for a Linux box? Thanks in advance

Antonio Sindona
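
There is no single SAM-style tool in most distributions, but the kernel's tunables are exposed through /proc/sys and the sysctl utility. A minimal sketch of browsing and changing them (fs.file-max is just an example parameter, not a recommendation):

# list every tunable kernel parameter with its current value
sysctl -a

# read one parameter, then raise it (as root)
cat /proc/sys/fs/file-max
echo 16384 > /proc/sys/fs/file-max

# the same change via sysctl, plus making it persist across reboots
# (assumes a distribution that reads /etc/sysctl.conf at boot)
sysctl -w fs.file-max=16384
echo "fs.file-max = 16384" >> /etc/sysctl.conf

Unlike SAM, nothing here warns you when a limit is being approached; that part really would take a new tool.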


What Type of Hard disk ( SCSI/RAID)

1 Jun 2001 20:29:11 -0000
Alpesh K and Sachin Vyas ( alpesh8 from rediffmail.com AND sachin-vyas from usa.net)
Both of these users asked pretty much the same question. They are SuSE users but because the question is about the kernel overall, the result should be useful to everyone. But this could also serve as the starting point for an article about using RAID, or about working with the proc filesystem, or an inspiration to write this "tool like an sg interface" they have conceived of. If you decide to answer this directly, please mail both of them and cc . -- Heather

Alpesh K

Hi:

Is there a programming interface through which I can find out the type of a hard disk (SCSI/RAID)? I have SuSE Linux 7.1 (x86) with two SCSI hard disks; one is connected through an AMI MegaRAID hardware RAID controller. The RAID driver is megaraid and lsmod displays it.

I need to find out which is RAID and which is plain SCSI. I looked at /proc/scsi/scsi but that didn't tell me.

But another SuSE 7.1 system, with a Mylex hardware RAID controller and the DAC driver, has /proc/rd/c*d* entries.

1. So why is there no /proc/rd/c*d* entry on the system which has the AMI RAID controller?

2. Is there any ioctl through which I can find out whether a disk is SCSI or RAID, given its device name (/dev/sda, /dev/sdb, etc.)?

...in a later mail he added... -- Heather

I looked at the SCSI generic interface (sg), but as I understand it, it can't be used for all types of RAID controllers (Mylex DAC, Compaq Smart Array, etc.). A generic interface like sg which works for all RAID controllers would be very helpful.

I also looked at /proc/scsi/scsi and /proc/rd/c*, but those may vary, so if I have to use /proc then I have to hardcode the /proc pathnames.

I will be very thankful if somebody tells me how to do this.

Thanks in advance.
Alpesh


sachin vyas

Hi All:

...Sachin asks similar questions as Alpesh, but adds: -- Heather

Can I use /proc/partitions, which will list all the disks, and then filter out IDE by looking at /proc/ide and SCSI by looking at /proc/scsi/scsi, so that whatever remains is RAID?

I will be very thankful if somebody tells me whether using /proc/partitions is correct or not. If not, how should this be done?

Thanks in advance,
Sachin

I believe that the remainder would also include old model CDROMs or tapes that had private controllers. -- Heather
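
A rough shell sketch of the heuristic Sachin describes — purely illustrative, and subject to exactly the caveat above about what lands in the "remaining" bucket:

#!/bin/sh
# crude classification of the devices listed in /proc/partitions
for dev in $(awk 'NR > 2 { print $4 }' /proc/partitions); do
    case "$dev" in
        hd*) echo "$dev: IDE (details in /proc/ide)" ;;
        sd*) echo "$dev: SCSI (details in /proc/scsi/scsi)" ;;
        *)   echo "$dev: neither IDE nor plain SCSI - possibly RAID" ;;
    esac
done

Note that disks behind an AMI MegaRAID controller still show up as sd* devices, which is part of why the question is hard in the first place.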


WordPerfect 8 and RH 7.1

Fri, 25 May 2001 13:00:56 -0400
James P. Harlos ()

For reasons which seemed good to me at the time, I switched distros and upgraded to RH 7.1. After some work everything seems to be OK except for sound. However, I got a big surprise when I tried to load WordPerfect 8 from CD. This is a purchased copy of WP 8 Personal Edition that worked fine before. The install script would not run! I went to Corel's site. They apparently don't help people who purchased their product. In any event they seem to be more concerned with their new product, WP 2000. On one of their user group boards there was a thread concerning this, suggesting the problem was in the initial install script, install.wp. Well, the long and short of it is I made the changes suggested, with no change in behavior. Do you have any answer?

We don't have the script at hand, but someone who does might tell you how they forced it to install correctly. Which needn't be by using what Corel provided. -- Heather

I find it unfortunate that Corel takes the path of install scripts rather than RPM or even tarballs. I used WP way back under DOS and liked it. I liked it even more when the first versions of Word came out. Sadly, WP rested on their laurels and their product withered. When Corel brought out WP for Linux I thought this was good, tried out the free download, liked it and bought it in version 8. After this I will not buy another Linux product from Corel. Enough soapbox.

Jim Harlos

I think it's worth mentioning that both the RPM format and the DEB format, plus possibly other packaging types, do have the ability to carry install and remove scripts. What you actually want is for them to work. -- Heather


modem installation help

Mon, 28 May 2001 10:22:05 -0700 (PDT)
bxb 3 ()

to the guys n' gals of tag:

Hi! First of all I'm somewhat of a complete newbie, and my brain is muck (overloaded, uncooperative and maybe just almost totally inaccessible) from what I've read and tried to understand... I just need a... well, a quite simple explanation...

I have an internal ESS ES28381 modem... I did a... (what was it?)... a cat /proc/pci or whatever, and got this about the (uhmm, PCI?):

Bus 2, device 4, function 0:
Communication controller: ESS Technology ES28381.
IRQ 9.
I/O at 0xc400 [0xc40f]

Well, that's about it... and under the other OS my modem was at COM4.

NOW WHAT DO I DO?

thanks anywayz.... and you don't really need to put this in the mag if it doesn't sound that interesting or if it seems totally dumb... I agree I'm quite a bit. I just need a little help, so a response would be greatly appreciated.

wvdial or kisp might do it for you; but troubleshooting using modems for the first time under Linux is worth an article in its own right. -- Heather
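
As a hedged starting point (a sketch, not a full recipe): DOS COM4 normally corresponds to /dev/ttyS3 under Linux, and for a PCI modem the serial driver may need to be pointed at the port and IRQ that cat /proc/pci reported:

# aim ttyS3 (COM4) at the resources from /proc/pci, then probe
setserial /dev/ttyS3 port 0xc400 irq 9 autoconfig
wvdialconf /etc/wvdial.conf    # look for the modem, write a starter config

One caveat: many cheap internal PCI modems of this era are software "winmodems", and if this is one of those, no amount of serial-port configuration will make it talk.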


Microphone trouble

Tue, 5 Jun 2001 22:23:20 -0700
j_c_llings and Rafael Lepra ( j_c_llings from netzero.net AND rlepra from adinet.com.uy)

j_c_llings

I have noticed that my microphone does not work in linux. The impression I am getting is that this is common in the Linux community since most folks don't use them and there are always bigger fish...( er... penguins?) to fry. Would it be possible to convince you to cover the general configuration and most common problems with microphones?


Rafael Lepra

I have a GA-7ZX motherboard with a Creative PCI 128 sound chip. I have installed Red Hat 6.0 (I got the CD from a magazine). First of all I could not start X Windows because it did not recognize the Rage 128. I downloaded and installed XFree86 4.0.2 and it worked.

But I have not been able to get the PCI 128 to work; the system just did not find the chip. I have read the Sound HOWTO but it looked rather cryptic to a newbie. Could you give me some advice? Please take into account I am quite new to Linux.

If someone would like to work with these two to improve these HOWTOs (there are more than one related to sound, see http://www.linuxdoc.org/HOWTO/HOWTO-INDEX/apps.html#MMAUDIO) then you should also copy the author of the HOWTO you're going to be looking at. But we'd love to see an article about just getting sound going so you can play cute "tada" sounds and talk to your computer. -- Heather
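
For Rafael's specific card, a sketch of the usual first steps (assuming a stock Red Hat kernel of that era, which shipped the es1371 module that drives the Ensoniq/Creative PCI 128 chip):

sndconfig            # Red Hat's interactive sound setup tool; probes PCI cards

or, by hand:

modprobe es1371      # load the driver for the PCI 128
cat /dev/sndstat     # check that the kernel now sees the card

If modprobe complains that the module doesn't exist, the kernel may simply have been built without it, and rebuilding or upgrading is the real fix.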


Viewing gzip'ed pages with Netscape

Sat, 16 Jun 2001 00:32:54 +0900 (JST)
Aron Lentsch ()

Dear Answer Gang!

In my previous version of Netscape (4.75), gzip'ed HTML pages were displayed in just the same way as normal (unzipped) HTML pages. However, after upgrading my system, including Netscape (now version 4.76), this function is not available anymore.

What do I need to do in order to make Netscape read compressed HTML pages again?

THANKS! Aron

A comparison review of the most recent web browsers and how easy or hard they make common tasks such as this would be a very cool article. -- Heather
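
In the meantime, one blunt workaround for local files (a sketch; page.html.gz is a stand-in name):

# decompress to a temporary file and point the browser at that
zcat page.html.gz > /tmp/page.html && netscape /tmp/page.html

It sidesteps the browser entirely rather than answering why 4.76 lost the ability.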


tighter mail security.... using SSL

Mon, 11 Jun 2001 02:01:37 +0800
Renato Tiongson III ()

Hi,

I'm currently doodling around with RH 6.2 configured as a mail server with sendmail as the MTA. I'd like to know how to go about implementing SSL for POP, IMAP and SMTP. I'd also like to know how to implement authentication when relaying mail over SMTP.

Thanks in advance!

Nats
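
One common approach at the time was to leave the plaintext daemons alone and wrap them in SSL with stunnel. A hedged sketch, assuming the stunnel 3.x command-line syntax and a server certificate in stunnel's default location:

stunnel -d 995 -r localhost:110    # POP3 over SSL (pop3s)
stunnel -d 993 -r localhost:143    # IMAP over SSL (imaps)
stunnel -d 465 -r localhost:25     # SMTP over SSL (smtps)

For the relaying half of the question, sendmail 8.10 and later can do SMTP AUTH via Cyrus SASL, though that support has to be compiled in.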


GENERAL MAIL



Tired Newbie

Wed, 20 Jun 2001 14:09:02 -0700
Paul Bussiere <>

On the article in this month's Gazette

http://www.linuxgazette.com/issue67/lg_mail67.html#mailbag/5

The link to The Answer Guys doesn't address anything from me, as alluded to in the article. Had a few folks email me asking "Why didn't they answer your questions?"

Just a heads up

I'll let Heather respond regarding where that link was supposed to point. We did have a big debate about the Unix PATH vs the DOS PATH, but I'm unsure if it got published. It would have been published under an unrelated topic, I think about printing.

No, he's quite correct, it got entirely lost in the shuffle. A matter which has been fixed, this time! I've made sure that cross-referenced threads were the first to get handled. So check out TAG... -- Heather

By the way, we did receive an offer for a WinDweeb column. Jeff Lim, a self-titled WinDweeb, is preparing a regular column full of miscellaneous Linux advice for those familiar with Windows. In particular, he wants to provide straightforward "how-to" pieces for tasks he doesn't think are covered adequately in the manpages and in the HOWTOs, or are so far buried in them that you can't find the info you need. Look for it in next month's issue. Thanks, Jeff! --Iron


thanks

Mon, 04 Jun 2001 12:34:11
darrell rolstone ()

Dear Dan, ( and stern Heather too!)

Actually, "stern Heather" Stern is not stern. She has a dry sense of humor, looks like a punk girl, and wears a big red hat (no pun intended) with a wide brim. --Iron
The complete thread he is thanking us for can be found in TAG this month. Because he's offering his service to the community, I am also mentioning it here. I offer no opinions in any direction about his service - you'll have to consider that on your own.
The long letter he is referring to is in the TAG list archives at http://www.ssc.com/mailing-lists/tag/200105/0020.html
He hopes to set up a site at http://www.team-synergy.org but will need technical help to do that. -- Heather

Thanks SO MUCH for answering! Now I know that my letter did in fact arrive! I REALLY appreciate it and all that you and the other volunteers do. I hope that my "interesting questions" generate some "food for thought" among Linux programmers! It looks like IBM and their new associates will be addressing many of the drawbacks that I pointed out ....by paying their programmers to code using the Linux kernel (along with the NSA)! I hope that "far-reaching" programs like the two I have initiated will garner monetary support somehow!

Please know that I put myself "at your service" to the whole of the Linux community, as a "synergy design consultant" (which pretty much covers EVERYTHING) and would be glad to volunteer to contribute my perspective on any projects/problems!

Remember: "the significant problems we face today cannot be solved by the same level of thinking that created them" Albert Einstein.

May the Source be with you, Darrell Ernest Rolstone aka "Rollee"...w/o's


Yet Another Helpful Email

Tue, 05 Jun 2001 08:04:47 -0700
Benjamin D. Smith ()

Read your post in June's Linux Gazette - and it might be a good thing to document the long and sometimes frustrating journey from the Windows to the *nix camps.

I once heard a teacher put it the best way I've yet heard.

Imagine a graph.

Windows gets you started and going quickly (line rises sharply) but you rapidly run into the limitations of the system and Operating System, so very quickly your productivity levels off.

*nix, on the other hand, has a steep learning curve - so steep that at first your productivity actually DIPS (line drops below the "0" line), until you start to "get it". Then, as your knowledge accumulates (it took me about a year of using Linux primarily before I really "got it"), the productivity climbs, and just keeps climbing with how much you learn.

I've not seen that there is a discernible cap - the more I look, the more cool stuff I find I can do!

-Ben

Ben has also submitted a 2c Tip for us, which is in this month's column. -- Heather


ssh-agent article

Tue, 05 Jun 2001 15:19:02 -0500
Dustin Puryear ()

Dustin Puryear wrote to Jose Nazario, one of our authors:

Jose, I read your article on Linux Gazette about the SSH key agent.

(http://www.linuxgazette.com/issue67/nazario2.html) in the June issue.

Very nice. However, I noticed you neglected to mention a way to load the agent when the user logs in rather than doing so manually. Following is a simple addition to a user's .bashrc (or edit for your particular shell) that will do the job:


# load local keys
env | grep SSH_AGENT_PID > /dev/null
if [ $? -eq 0 ]; then
    # an agent is already running: add our key only if it isn't loaded yet
    ssh-add -l | grep $USER@$HOSTNAME > /dev/null
    if [ $? -ne 0 ]; then
        ssh-add
    fi
else
    # no agent yet: start one wrapped around a new shell,
    # then leave the original (agent-less) shell
    ssh-agent /bin/bash
    exit
fi

Feel free to pass this on to your readers. Regardless, good work.

Regards, Dustin


This page edited and maintained by the Editors of Linux Gazette Copyright © 2001
Published in issue 68 of Linux Gazette July 2001
HTML script maintained by Starshine Technical Services, http://www.starshine.org/

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:

Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.


 July 2001 Linux Journal

The July issue of Linux Journal is on newsstands now. This issue focuses on Program Development. Click here to view the table of contents, or here to subscribe.

All articles through December 1999 are available for public reading at http://www.linuxjournal.com/lj-issues/mags.html. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Distro News


 SuSE

SuSE Linux 7.2 has been available since June 15th. More details are on the SuSE products page.

In further news, SuSE have announced that SuSE Linux has been validated for the Oracle9i Database.


News in General


 Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.


Enterprise Linux Institute Conference
July 3-5, 2001
London, UK
http://www.elxi.co.uk

Linux Expo Exhibition
July 4-5, 2001
London, UK
http://www.linuxexpo.co.uk

Internet World Summer
July 10-12, 2001
Chicago, IL
http://www.internetworld.com

O'Reilly Open Source Convention
July 23-27, 2001
San Diego, CA
http://conferences.oreilly.com

10th USENIX Security Symposium
August 13-17, 2001
Washington, D.C.
http://www.usenix.org/events/sec01/

HunTEC Technology Expo & Conference
Hosted by Huntsville IEEE
August 17-18, 2001
Huntsville, AL
URL unknown at present

Computerfest
August 25-26, 2001
Dayton, OH
http://www.computerfest.com

LinuxWorld Conference & Expo
August 27-30, 2001
San Francisco, CA
http://www.linuxworldexpo.com

Red Hat TechWorld Brussels
September 17-18, 2001
Brussels, Belgium
http://www.europe.redhat.com/techworld

The O'Reilly Peer-to-Peer Conference
September 17-20, 2001
Washington, DC
http://conferences.oreilly.com/p2p/call_fall.html

Linux Lunacy
Co-Produced by Linux Journal and Geek Cruises

Send a Friend LJ and Enter to Win a Cruise!
October 21-28, 2001
Eastern Caribbean
http://www.geekcruises.com

LinuxWorld Conference & Expo
October 30 - November 1, 2001
Frankfurt, Germany
http://www.linuxworldexpo.de

5th Annual Linux Showcase & Conference
November 6-10, 2001
Oakland, CA
http://www.linuxshowcase.org/

Strictly e-Business Solutions Expo
November 7-8, 2001
Houston, TX
http://www.strictlyebusinessexpo.com

LINUX Business Expo
Co-located with COMDEX
November 12-16, 2001
Las Vegas, NV
http://www.linuxbusinessexpo.com

15th Systems Administration Conference/LISA 2001
December 2-7, 2001
San Diego, CA
http://www.usenix.org/events/lisa2001


 Making Kernel Configuration (More) Fun

In a move sure to warm the hearts of those who believe in Linux Gazette's motto of "making Linux just a little more fun!", Eric Raymond has added a new interface to the Linux kernel configuration system. This one is reminiscent of Zork and those adventure games:

Welcome to CML2 Adventure, version 1.6.1.
You are in a maze of twisty little Linux kernel options
menus, all different.


The main room. A sign reads `Linux Kernel Configuration System'.
Passages lead off in all directions.

n

The arch room.  A sign reads `Processor type'.
A passage leads upwards.


Choose your processor architecture. A brass lantern is here.
There is a row of buttons on the wall of this room. They read:
X86, ALPHA, SPARC32, SPARC64, MIPS32, MIPS64, PPC, M68K, ARM,
SUPERH, IA64, PARISC, S390, S390X, CRIS
The button marked X86 is pressed.

Courtesy Linux Weekly News. A copy of Eric's original e-mail is here.


 Tridia

Tridia Corporation has an ambitious marketing plan that pits its open source remote access hybrids head to head with industry leader Symantec. This is the beginning of a strategic marketing campaign to introduce the benefits of Tridia Corporation's open source hybrids over its closed source competitors. Among other platforms, TridiaVNC runs on Linux, and is a useful eSupport tool, particularly in mixed OS environments. Three announcements are planned, beginning with a free TridiaVNC (virtual network computing) download promotion linked to a contest. The TridiaVNC e-contest runs for 180 days, from 5/23/01 through 11/23/01.


 Linux in Life Sciences

IBM is aiming to use integrated Linux-based technologies to enable life sciences companies to speed the process of managing and sharing the staggering amounts of data being generated by contemporary research in experimental biology.

On May 29th, IBM and Devgen, a Belgium-based biotechnology company, announced an agreement to deploy IBM technology and Linux solutions to accelerate the drug development process. Devgen's informatics system includes a Linux supercluster, consisting of 20 IBM eServer xSeries servers and 2 pSeries servers. The high-performance system will conduct genetic research on a microscopic roundworm (C. elegans) to identify validated drug targets and in vivo active compounds. By studying this transparent worm, Devgen researchers can better understand gene interactions in humans that trigger chemical reactions in cells and cause diseases, and narrow the search for medical treatments.

Linux is a key factor in solving a variety of demanding life sciences challenges, including:

Unravelling protein interactions - protein function is one of the most computationally intensive problems in the world of research today. Deciphering the interaction of more than one million proteins requires 1,000 times more compute power and generates 100 times more data.

Understanding protein folding - the way proteins fold into geometrical shapes allows them to perform their biological functions. It would take around 300 years, using today's most powerful computer, to calculate the folding process for even a very small protein. High performance Linux computing power allows researchers to unravel the protein folding process.

Drug discovery and development - new drugs often cost upwards of $500 million to develop and test, and take between ten and fifteen years to approve. New information technologies like Linux are rapidly becoming a major factor in reducing these costs.


 Linux NetworX Announces Support For AMD's Multiprocessor Architecture

Linux NetworX, a leading provider of powerful and easy-to-manage Linux cluster computing solutions, announced today support through its Evolocity cluster systems for the new AMD Athlon MP processor and the AMD-760 MP multiprocessing chipset. Evolocity is a cluster solution which incorporates computational hardware with Linux NetworX's cluster management software, ClusterWorX.


 GNU-Linuxfest

Galax, Virginia, will host the third annual Linuxfest on July 14, 2001. Linuxfest has traditionally celebrated Linux, the alternative operating system. Linuxfest this year will offer entertainment, food and Linux-related demonstrations, which include satellite Internet and the new Mandrake OS, as well as servers, KDE, Gnome, Afterstep, QuickCams and audio recording. There will be space to sell and swap and some great door prizes. This year's Linuxfest will be held on Grayson Street in Galax, and will be the first street fest in the history of the event. Attendees may register at the event's website at www.gnu-linuxfest.com.


 Freenet

Linux Journal has an article about freenet. The Freenet FAQ has further information. Freenet is a network application for sharing information (like the Web), but without using centralised servers (like Usenet or Gnutella). The upshot is that on Freenet it's difficult for governments and free-speech haters to track where the information resides, who inserted it, and who's requesting it. The application is still in its alpha stage; many features are still in the discussion or research stages.


 "Linux 3D Graphics Programming" Book Available

The book Linux 3D Graphics Programming is now available for purchase. The book aims to give a reader who has little or no 3D graphics experience, a well-rounded understanding of the fundamental theoretical and practical concepts involved in programming real-time 3D graphics applications using freely available tools under Linux. Topics include the X Window System, OpenGL/Mesa, 2D screen access and rasterization, hardware acceleration, and 3D vision and perception. A tutorial explains how to create 3D models using Blender, a 3D modeller included on the companion CD.


 UKLinux ISP

uklinux.net is a free (no subscription charges) Linux ISP for the UK.

Services such as mail and FTP access are conditional upon accessing the Internet via uklinux's dial in access numbers which are charged at UK local rate. If you want to access your e-mail, or FTP your site via another ISP you will need to upgrade your membership.

Also, all the profits from uklinux go to funding Open Source/Free Software.


 Linux Journal web articles

These articles are available at the Linux Journal web site; they are not in the magazine. (You can also go to the no-frames site if you don't like frames or want minimal graphics.)


 Linux Focus

The following articles are in the July-August issue of the multilingual ezine Linux Focus.


 Linux Links

The Duke of URL has posted:

firstLinux.com have a review of Linux chess interfaces.

Stop the presses! RMS Says Free Software Is Good. Slashdot readers respond "Dog Bites Man", "Sky Blue".

Asian characters and why Unicode is inadequate

Linux forklifts in the data warehouse

Some more FUD from Microsoft (MS Word format). Courtesy Linux Weekly News. In a further twist, Slashdot reported that Mr. Bill Gates believes that Linux is like PacMan.

Finally, another Slashdot story, looking at David A. Wheeler's analysis of the true size of Linux. See the reaction here


Software Announcements


 SAP and Linux on the ZSeries

Macro 4 has announced support for Linux on the new S/390 eServer ZSeries for its SAP certified output management solution. By running Macro 4's solution with SAP and Linux on the IBM platform, users can take advantage of the space, energy and cost savings of the latest e-business mainframe technology and be assured of dependable output across a distributed network. Large SAP implementations require many SAP application servers and it is not atypical for a sizeable company to run as many as 150 instances of SAP at once. Organisations can simplify their IT systems by moving the SAP application servers to one ZSeries box with each application running on an instance of Linux. But they still have to manage output from all of these applications. UniQPrint gives users a single point of control for cross-platform enterprise printing from an IBM eServer ZSeries platform running Linux and assures the delivery of documents from point of origin to any output destination, be it print, fax, e-mail, or Web site.


 Workstation Solutions Announces New Quick Restore Data Protection Software for Linux Users

Workstation Solutions have announced new Linux and Windows platform support for its Quick Restore backup and recovery software. This broadened platform support extends the quick implementation, easy operation, and comprehensive scope of Quick Restore across Microsoft Windows 2000, UNIX, and Linux operating environments. The company also announced new Quick Restore features to improve performance, extend firewall support, control DLT tape format, and support newly available tape libraries from leading vendors.


 mnoGoSearch Search Engine Version 3.1.14 Released

The new version 3.1.14 of mnoGoSearch, free Open Source search engine software for intranet and web servers, is available from the mnoGoSearch website. With the new version the 3.1.x branch is declared stable; it includes various enhancements and fixes, as documented in the ChangeLog. mnoGoSearch runs on Linux and other *nix OSes.


 Micro Sharp Technology: Netule

Micro Sharp Technology, a provider of thin server solutions, has announced the release of Netule Web Module I (WM-I) and Netule eMail Module I (EM-I). EM-I and WM-I are based on Linux, an extremely powerful, stable and reliable UNIX-like operating system. Linux is easy to upgrade and offers a true multi-tasking solution. EM-I is a robust, thin server solution that allows users to simply, predictably and cost-effectively meet e-mail server needs in a shorter period of time. WM-I is intended to meet users' web server requirements in a similar manner. Each module comes with all the hardware and software needed to plug in and get started.


 Other Software

Internet Exchange Messaging Server (IEMS) 5, the latest messaging solution from developer International Messaging Associates, will be released June 12, 2001. IEMS 5 runs on Linux Red Hat, Caldera, VA Linux, Turbo Linux Server, Suse, Mandrake, and Windows 98, Windows 2000, and Windows NT.


Aladdin Systems has announced a new licensing initiative - the StuffIt OneSource Volume License Program. StuffIt natively supports the Macintosh, Windows, Linux and Solaris platforms. With StuffIt, users can easily manage their favourite file types, such as .zip, .sit, .rar, .tar, .gzip and many others.


Advanced Management Solutions, a supplier of tools for project and resource management, claims that AMS Realtime is the first and only non-browser-based project management software available for Linux. This software is a Linux port of Advanced Management Solutions' application AMS REALTIME Projects. Linux users can now try out this fully functional Linux version through a special free trial offer.


Copyright © 2001, Michael Conry and the Editors of Linux Gazette.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

(?) The Answer Gang (!)


By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and the Gang, the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to

There is no guarantee that your questions here will ever be answered. Readers at confidential sites must provide permission to publish. However, you can be published anonymously - just let us know!


Contents:

¶: Greetings From Heather Stern
!!: Meet The Answer Gang
(!)Dial-on-demand users should know:
(?)XFS ....font server...true font
(?)A tired Newbie attempts Linux (again)
(?)I've had no response to a long letter of several weeks ago!? --or--
Sometimes, we just don't know what to say
(?)I need a windows-linux solution --or--
Bulk File Transfers from Windows to ???
(?)Cannot Login Question
(?)LG 24 - Tips: Yet another way to find
(?)Deleting a file that thinks it is an option --or--
Dash it All! Coping with ---Unruly--- Filenames
(?)aol instant messanger behind lrp (linux router project) box --or--
File Transfers with AIM (AOL Instant Messenger)
(?)reverse dns
(?)Closing Ports
(!)Best of ISO burning under Windows.
(!)A tired Newbie attempts Linux (again)

(¶) Greetings from Heather Stern

Greetings, dear readers, and welcome to another exciting installment of The Answer Gang.

One of our crew has come up with an amusing bit of text to send to people who are amazingly off topic, for example, completely non-computing matters:

These Are Not The Droids You're Looking For

It gave me a belly laugh when I first saw it. Some who get this are asking computer questions, tho - solely about a certain Other OS. Perhaps amazingly, a few of the Gang don't mind working with MSwin. And certainly questions about mixing two or more operating systems are juicy and we love answering those (if we actually can) but this just isn't the place for a question that would be answered much better at the Win98 Megasite:

http://www.cmpnet.com/win98

Oddly enough I don't think I've seen any MacOS questions roll through. But these sort of notes happen often enough that I've got my own little "answering machine style" note:

You've Reached The Linux Gazette Answer Gang

If that's the only peeve we have left I think we're doing alright. Now for stuff that's about our favorite penguin playground - Linux!

As it's been around a while and many of the commercial distros are releasing their new versions with 2.4 kernels and various modern bugfixes, I know an increasing number of truly Newbie users who are taking the plunge. Given the context I know them from, they're not generally dumb people, but they have been led to believe that the hassle of a full install will be worth it for the security they will gain.

Unfortunately, the gain doesn't happen automagically. Okay, the average setup is a little better out of the box than Windows. But it's like buying a car with a car alarm and other "smart sensors" -- they're no good if you can't find the clicker to turn them on. Or if you can turn on the alarm but not roll up the windows and close the convertible top.

Or if you find the clicker covered with cryptic phrases like "IPchains" and "construct a filter to block UDP..." This is nonsense to a lot of people. There are tools which claim to help toughen it up, except that if they don't explain what they're doing, you have no idea what they helped, or whether they are hindering something that you actually need to do.

Beyond that, there's simply the fact that anything that had to go through the publishing industry is out of date. Your boxed distro may have taken a month and a half to hit the shelves - while they're proud of getting 2.4.2 in there, 2.4.5 is already released and pre6 is cooking along. And so on. If you don't make arrangements to update to the security-patch versions as soon as you have a working system, you'll be wide open to something.

Pleasantly, tools to help people tweak firewall rules are actually starting to get usable. Distros increasingly resist, or have ways of tracking, suid binaries - which are dangerous not so much because of that feature, as that they are almost always suid root, the most dangerous account on the system. Distros make it increasingly easy to always get the latest, and the major ones seem to have websites where you can keep an eye on how they're doing.

So as we approach Independence Day -- which in the U.S. is a celebration of determining our own destiny -- take a few thoughts towards those unknown souls who set the first policies you depend on, and towards your own ability to choose how you are defended. The price of liberty... is eternal vigilance.


(!) Dial-on-demand users should know:

From Richard Greaney

More Answers by Mike Orr, Jim Dennis, Willy Tarreau, David Forrest, Juanjo and Erik Corry

(!) If you are on a dial-up connection and are tired of not getting reliable starts to your connection (often having to click "refresh" after starting a browser to prevent it from timing out) you may benefit from this piece of advice.

There are two ways Linux looks up a host before connecting. One uses the traditional gethostbyname() function (which uses DNS and hence UDP) while the other uses a straight lookup on the IP address. Either way, if you use a demand-dial setup, these will run into problems. If you type ifconfig before you get connected, you will notice your ppp0 adapter has the address 10.64.64.64. Once you are connected, it becomes a little more believable. However, those first SYN packets are sent from 10.64.64.64, and since the ppp interface has since changed its IP address, the replies will never reach it. Refreshing the connection attempt will work, but it's less than elegant.

How to fix:

cat /proc/sys/net/ipv4/ip_dynaddr

should return the value '1'. If this is not the case, type (as root):

echo 1 > /proc/sys/net/ipv4/ip_dynaddr

What you are doing is telling your machine that it has a dynamic IP address. Any packets which are originally sent from 10.64.64.64 will be redirected to the new IP address as soon as you get connected to your ISP.
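
To make the setting survive a reboot, a small sketch (assuming a distribution that reads /etc/sysctl.conf at boot, as Red Hat and friends do):

echo "net.ipv4.ip_dynaddr = 1" >> /etc/sysctl.conf
sysctl -p    # reload the settings now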

Richard Greaney

(!) [Mike] Thanks for the advice. Why do people need to do this now? When I used to have a dynamic IP address three years ago, I never needed to do this.

(!) That one I can't answer. I've built a few Linux boxes in my time and not one of them has had anything other than 0 set for the ip_dynaddr field. Having said that, they were very seldom used to connect to the net. My present machine (which connects to the net several times a day) was also set to 0 by default (as is standard), but I decided one day I was going to iron out why I was having to refresh the connection on startup before any data came across. I was looking to rewrite some source code but stumbled across this one instead. I've read that Linux is widely known for not being great with demand-dial setups. Perhaps this is why? I thought people could benefit from knowing this.

The text from the kernel docs explains it pretty clearly.

(!) [JimD] The result is a failure of existing connections. As new connections are established over the newly raised link, those work. So the ip_dynaddr solution is for a problem that many Linux users never knew they were having.
Paul Mackerras (author/maintainer of the Linux pppd) was trying to explain this whole thing to me one time. I'm afraid I just wasn't "getting it." (We'd probably already had too much sake by then, I think we were eating sushi that evening).
(!) [Mike] I did a google search for "ip_dynaddr" and came up with:
(!) (Willy Tarreau)
http://web.gnu.walfield.org/mail-archive/linux-kernel/2000-March/0179.html
What exactly does setting ip_dynaddr to 1 or 2 do? It allows your local addresses to be changed to that of an interface which is changing (typically ppp*) when it comes up.
(!) (David Forrest)
http://web.gnu.walfield.org/mail-archive/linux-kernel/2000-March/0184.html
drf5n@mug:/usr/src/linux$ grep -r ip_dynaddr *
(!) (Juanjo, with RST-provoking mode by Erik Corry )
http://www.linuxhq.com/kernel/v2.0/doc/networking/ip_dynaddr.txt.html
IP dynamic address hack-port v0.03-rst
This stuff allows diald ONESHOT connections to get established by dynamically changing packet source address (and socket's if local procs). It is implemented for TCP diald-box connections(1) and IP_MASQuerading(2).
 
If enabled[*] and forwarding interface address has changed:
 
  • Socket (and packet) source address is rewritten ON RETRANSMISSIONS while in SYN_SENT state (diald-box processes).
  • Out-bounded MASQueraded source address changes ON OUTPUT (when internal host does retransmission) until a packet from outside is received by the tunnel.
    This is specially helpful for auto dialup links (diald), where the "actual" outgoing address is unknown at the moment the link is going up. So, the same (local AND masqueraded) connections requests that bring the link up will be able to get established.
     
    If you enable the RST-provoking mode, then the source address will be changed, even if the socket is established. This means we send an incorrect packet out, which causes the remote host to kill our socket. This is the desired behaviour, because such a socket is doomed anyway, and the earlier it dies, the better. This prevents the dial-on-demand connection from being kept up by a dead connection, and tells the application that the connection was lost.
     
    [*] At boot, by default no address rewriting is attempted.
     
    The values for the ip_dynaddr sysctl are:
     

    1: To enable:
    2: To enable verbosity:
    4: To enable RST-provoking:

     
    Flags can be combined by adding them. Common settings would be:
    To switch off special handling of dynamic addresses (default)
    # echo 0 > /proc/sys/net/ipv4/ip_dynaddr
    To enable rewriting in quiet mode:
    # echo 1 > /proc/sys/net/ipv4/ip_dynaddr
    To enable rewriting in verbose mode:
    # echo 3 > /proc/sys/net/ipv4/ip_dynaddr
    (for backwards compatibility you can also use)
    # echo 2 > /proc/sys/net/ipv4/ip_dynaddr
    To enable quiet RST-provoking mode:
    # echo 5 > /proc/sys/net/ipv4/ip_dynaddr
    To enable verbose RST-provoking mode:
    # echo 7 > /proc/sys/net/ipv4/ip_dynaddr

(?) XFS ....font server...true font

From Hari Charan Meda

Answered By Huibert Alblas

hi

How do I install the xfs font server on Red Hat 7.1? I need to install TrueType fonts on my machine.

need help :(

harry

(!) [Halb] Hi,
As far as I know xfs is already installed on RedHat distros from 6.2 and up. But not many TTF fonts are installed, so you have to 'install' them separately. Going to www.redhat.com, clicking on 'support' and searching for 'fonts' results in: http://www.redhat.com/support/alex/215.html which is a fairly short intro to adding extra TTF fonts. It should work that way; if somehow xfs is not installed, the rpm should be on your RedHat CD.
Watch out for these tripwires (caveats, or something like that, they are called here on the list...):
A search on www.google.com/linux for 'ttf redhat howto' returned(among others):
http://www.ibiblio.org/pub/Linux/docs/HOWTO/mini/other-formats/html_single/FDU.html#INTRO
or of course the linuxgazette search engine came up with this:
http://www.linuxgazette.com/issue28/ayers1.html
Have fun reading the documentation.
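
For the impatient, a condensed sketch of the recipe in that Red Hat note (assuming Red Hat's ttmkfdir and chkfontpath tools are installed; the directory name is arbitrary):

# as root: give the TrueType fonts a home and index them
mkdir -p /usr/share/fonts/ttf
cp *.ttf /usr/share/fonts/ttf/
cd /usr/share/fonts/ttf
ttmkfdir > fonts.scale       # build the scalable-font index for the TTFs
mkfontdir                    # build fonts.dir from fonts.scale
chkfontpath --add /usr/share/fonts/ttf
service xfs restart          # make the font server re-read its path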

(?) A tired Newbie attempts Linux (again)

From Paul Bussiere

Answered By Mike Orr, Jim Dennis

(!) [Heather] Last month Paul Bussiere wrote in with a submission that raised a valid point, which I published in The Mailbag (http://www.linuxgazette.com/issue67/lg_mail67.html#mailbag/5) and which, pleasantly, has got us a few responses from potential authors. It mentioned that TAG had some comments for him, and I linked across, but it had escaped my processing script.
Not surprisingly a few people mailed him, wondering why we hadn't answered him. (See this month's Mailbag.) While it's certainly true that every month The Answer Gang does send out answers to a lot more people than you see in print these days, I had definitely intended to see his thread published.
So here it is -- my apologies for the confusion.

(?) Of all the articles I have read on how wonderful Linux is, seldom have I seen any that [cynically] document how the average Windows user can go from mouse-clicking dweeb to Linux junkie.

(!) [Mike] Have you read The Answer Gang column? It's chock-full of problems people have installing and using Linux, and should be a dose of reality for anybody thinking that going from Win to Lin requires no effort. Occasionally we run pieces about things to watch out for when doing your first Linux install.

(?) So, the claim of FREE FREE FREE really isn't so....I've found other places that you can buy a CD copy cheaper but still, some money negates the FREE.

(!) [Mike] Most experienced Linuxers would caution a new user against downloading the OS the first time or getting a $5 CD from Cheap Bytes. The cost of a commercial distribution with a detailed tutorial and reference manual is quite worth it, compared to spending a weekend (or two) getting it right.

(?) Why doesn't Linux do the equivalent of a DOS PATH command? Newbie Me is trying to shutdown my system and I, armed with book, type "shutdown -h now" and am told 'command not found'. But wait, my book says...etc etc....and of course, I now know you have to wander into sbin to make things happen. Why such commands aren't pathed like DOS is beyond me....perhaps that's another HowTo that has eluded me.

(!) [Mike] Linux does have commands for querying and setting the path.
$ echo $PATH
/usr/bin:/bin:/usr/X11R6/bin:/usr/games:.
$ PATH=/home/me/bin:$PATH
$ echo $PATH
/home/me/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:.
The path is an environment variable like any other environment variable, so you set it in your shell and it propagates down to all subcommands. (Actually, that's the way it works in DOS too; DOS just has an extra convenience command to set it.)
(!) [JimD] Actually, technically the way that environment variables and shell variables work in UNIX is somewhat different than how they work in DOS' COMMAND.COM.
In UNIX shell there are variables. These are either shell/local variables or they are in the "environment." A variable is an association of a value to a name. They are untyped strings. One can move a variable from the shell's heap into the environment using the 'export' (Bourne and friends) or the 'setenv' (csh/tcsh) built-in commands.
In either case all that is being done is that the variable and its value are being stored in different memory regions (segments). Here's why:
When any program is started under UNIX it is done via one of the exec*() family of system calls. exec() replaces the currently running program with a new one. That is to say that it overwrites the code, heap and other memory segments of the current process with a new program (and performs a number of initialization and maintenance functions that relate to closing any file descriptors that were marked "close on exec", resetting the signal processing masks, etc).
The environment is the one segment that is NOT overwritten during exec(). This allows the process to retain some vestige of its "former self."
Under UNIX all processes are created via the fork() system call. (Under Linux fork() is a special case of the clone() system call --- but the statement is still "mostly" true.) fork() creates an exact copy of a process. Normally the fork()'d processes (now there are two "clones" of one another) immediately go their separate ways. One of them continues one set of operations (usually the parent) while the other handles some other jobs (processing a subshell, handling a network connection/transaction, or going on to exec() a new program).
So, a side effect of the environment handling is that a copy of the environment is passed from a child shell to all of its descendents. Note: this is a copy. The environment is NOT an interprocess communications mechanism. At least, it is NOT a bidirectional one.
(Incidentally any process can also remove items from its environment, or even corrupt it by scribbling sequences of characters that don't follow the variable=value\0 convention, using NUL terminated ASCII strings. Also there are variations of the exec*() system call which allow a process to specify an alternative block of memory --- a pointer to a new environment. In this way a process can prepare a completely new environment for itself).
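A two-line shell demonstration of the copy-only semantics being described here:

export FOO=parent
( FOO=child; echo "subshell sees: $FOO" )    # prints: subshell sees: child
echo "parent still sees: $FOO"               # prints: parent still sees: parent

The subshell got a copy and scribbled on it; the parent's copy never noticed.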
Notice that, in UNIX, the notion of a process persists through the execution of multiple programs. The init process forks children which become (exec) shells to handle startup scripts, and "getty" processes to handle login requests. The various rc shell processes spawn off children which become invocations of external commands (like mount, fsck, rm, etc). Some of those children set themselves as "session leaders" (create their own process groups), detach themselves from the console and "become" various sorts of daemons. Meanwhile the getty processes "become" copies of login, which in turn may become login shells or which (under other versions of the login suite --- particularly PAM versions with logout "cleanup" enabled) may spawn children that become interactive shells.
An interactive shell spawns off many children. EVERY pipe implicitly creates a subprocess. Every "normal" invocation of an external command also creates a subprocess (the shell's own "exec" command being a notable exception: it terminates the shell, causing the current process to "become" a running instance of some other program --- in other words the shell "exec" command is a wrapper around the exec() system call). Some of the subprocesses don't perform any exec(). These are subshells. Thus a command like:
echo foo | read bar
... from bash will create one subshell (child process) which will read a value from the pipeline. (It will then simply exit, since this is a nonsensical example.) A command like:
/bin/echo foo | { read bar; echo $bar$bar ; }
... creates two children (actually a child and a grandchild). The child will create a pipe, fork(), and then exec() the external version of the echo command. Its child (our shell's grandchild) will read its pipeline, modify its copy of the bar variable, then echo a couple of copies of that value. Note that we don't know (from these examples) if bar is a shell/local variable or an environment variable. It doesn't matter. If the variable was in our shell's environment then the subshell (the grandchild, in this case) will modify its copy of that environment variable. If the variable didn't exist, the subshell will simply create it as a local variable. If the variable did exist as a shell/local (heap) variable in our shell, it would cease to exist in the child process after the exec() of the /bin/echo command, but a copy of it would still exist (and be overwritten) in the grandchild process.
Meanwhile the original shell process does a wait() system call on its child. In other words it just idly sits by until the work is done, and then it reaps the result codes (the exit values returned by the subprocesses) and continues.
(Incidentally, the fact that the child process is on the "right" side of these pipe operators is common but not guaranteed. It is the case for the Bourne and bash shells. However, the opposite holds true for newer versions of ksh ('93, or maybe '88 and later?) and zsh; I personally believe that ksh and zsh are doing "The Right Thing (TM)" in this regard --- but it is a nitpick.)
My point here is that the nature of "environment" variables seems to cause new students of UNIX endless confusion. It's really quite easy to understand if you think in terms of the underlying fork() and exec() operations and how they'll affect a process' memory map.
MS-DOS has an "environment" that is similar to the UNIX environment in that it is a set of variable/name value pairs and that it exists in a portion of memory that will persist through the execution of new programs. However MS-DOS doesn't have a "fork()" or similar system call and can't implement pipes as "coprocesses" (with one process writing into a "virtual file" --- an unnamed file descriptor that exists purely in memory and never on a real, physical filesystem).
(MS-DOS handles pipes by creating a temporary file and a "transparent redirection": executing a writer process, waiting for that to complete --- writing all its output into the temp file --- then executing a reader process with transparent input redirection to eat up the contents of the temp file, and finally executing its own deletion of the temp file. This is a pale imitation of how UNIX manages pipes.)
The scary thing about the way that MS-DOS runs these programs is that it marks some state in one region of memory (part of its "reserved/resident" memory); then it executes the program. When the external program exits it passes control back to the command interpreter's resident portion. The resident portion then performs a checksum on the "transient portion" of the DOS address space to determine if that "overlay" needs to be reloaded from the command interpreter's disk image/file. Then it resumes some of its state. If it was in the process of executing a batch file it *re-opens* the file, seeks to its previous offset (!) and resumes its read/parse/execute process.
I can imagine that experienced UNIX programmers who were never tortured with MS-DOS internals or the nitty gritty of CP/M are cringing in horror at this model. However, it really makes a lot of sense if you consider the constraints under which MS-DOS was hacked to operate. It was intended to work from floppies (possibly on systems with a single floppy drive and potentially without any resident filesystem). It needed to work in about 128K or less (that's kilobytes) of RAM, though it might have had as much as 640K to work with.
I guess I get nervous when I see people explaining UNIX semantics in terms of MS-DOS. I've learned too much about the differences between them to be comfortable with that --- and I've seen too many ways in which the analogy can lead to confusion in the UNIX novice. Probably it's silly of me to nitpick on that and bring up these hoary details. MS-DOS is almost dead; so it may be that the less people know about how it worked, the better.
(!) [Mike] /sbin and /usr/sbin should be in the root user's path. If they're not, adjust /root/.bash_profile or /root/.bashrc.
Whether to put the sbin directories in ordinary users' paths is a matter of debate. The debate goes like this:
CON: The sbin directories are for administrative commands that ordinary users would have no reason to use.
PRO: But what about traceroute, ping, route and ifconfig? Ordinary users may want to run these to see if a host is down, find out which ISPs it goes through to reach a host, find out what our IP number is and which hosts are our gateways.
CON: I don't want my users running ping because it increases network load and it can be misused for a DoS attack. As for route and ifconfig, too bad.
PRO: You're a fascist. I'm putting them in my path myself. Nyaa, nyaa, nyaa!
Some programs are borderline so it can be difficult to determine whether they belong in sbin or bin. Also, there are disagreements and uncertainty about what sbin is really for. (I've heard it was originally for statically-linked programs in case their dynamic counterparts weren't running.)

(?) Actually, that was my submission....tongue in cheek.....not exactly questions for the column! Whoops...should have been more specific!

Paul J. Bussiere

(!) [Mike] Submitted to the Mailbag and The Answer Gang. It'll be up to the editor of those sections whether to publish it.
(!) [Heather] And, I decided to publish it both ways, but then I screwed up. Oh well, I'm only human...

(?) Sometimes, we just don't know what to say

From darrell rolstone

Answered By Ben Okopnik, Dan Wilder, Heather Stern

Dear Tag staff , especially Ben,

(?) Well folks....I've patiently waited several weeks for a response to my LONG letter.....and have given up and am re-contacting you to ask if you even got it?

(!) [Dan] Your message of 2 May 2001 did indeed arrive. It may be found at the list archive,
http://www.ssc.com/mailing-lists/tag/200105/0020.html
You raise some interesting questions.
The fact that the Open source software base comprises untold millions of lines of code (has anybody ever counted?) indicates that there must be some reason people do this. Eric Raymond has a few interesting ideas. See http://www.tuxedo.org/~esr/writings.
I suspect that absence of responses on this mailing list was due to the fact that none of the regular posters felt they had much in particular to contribute, at least on the day your mail appeared, to the questions you raise. For this I don't really apologise; we're all volunteers here, and we do what we can.

(?) Ben's first response was IMMEDIATE and so ...

(!) [Ben] I guess I should stop responding so IMMEDIATELY... <wicked grin>
(!) [Heather] ...and so you have sadly developed the illusion that you should get 1 hour turnaround on your problems from TAG. That's not the way it works here. Linux Gazette is not any sort of company, TAG's just a batch of friends that try to help folks like you out... and the Gazette gets to publish the results, if it looks like they'd "Make Linux a Little More Fun!" for others. That's the only pay we get, too. You want timely, buy a tech support contract from some Linux vendor that sells them.
I remind you that The Answer Gang does not guarantee any answers will ever arrive, much less in a timely fashion.
Even if it were a company a fast turnaround might be unreasonable, but that depends on whatever the original problem was. Most companies actively limit what type of questions they'll even bother to answer; "Linux" is quite a broad topic, these days.
If Ben or any of the other members of the Gang continue the thread, great. If not, oh well. Sorry you got your hopes up. You can re-send your message if you like. If you do, its successful delivery to any TAG gurus doesn't obligate them to answer.

(?) ... I'm thinking maybe something went wrong in the sending process, although I never received a "failure" notice! It has also suspiciously disappeared from my "sent messages" folder!!! I have to wonder if you guys are being "watched"? Immediately after mailing you, I received some strange e-mails .....one from "friends@hoovertime" with no message.....just an attachment!

(!) [Heather] So, it looks like your site gets spam too. (Due to an open policy of receiving queries from Linux users all over the world, we get an unfair share ourselves.) Luckily, you probably can do something about it... procmail filters or something. There have been a few useful articles on that subject in recent issues of LG.
(To be fair, it might be a little hard to filter spam in your hotmail account. Oh well.)

(?) Anyway, please let me know if in fact you did receive my e-mail of several weeks ago.

Thanks

Darrell Rolstone / Vector Synergy Intl

(!) [Heather] We get an increasing amount of mail each month. You have provided no special context that I could use to find it anyways other than your name. It's not reasonable for us to go looking for it. I hope that some of what Ben said for you was helpful so far. If your actual thread of question and answers is useful to others by the end of the month, it may get published in whole or part.
Yes, I agree, it's polite to continue a thread in progress, and for all I know, 3 of them have it lazing about in their draft folders. One of TAG may have sent you something which never arrived at your end. If they didn't copy the publishing staff here, I actually have no means to find it for you, or even to publish it later.
Meanwhile, good luck in your endeavors.

(?) Dear Dan, ( and stern Heather too!)

Thanks SO MUCH for answering! Now I know that my letter did in fact arrive! I REALLY appreciate it and all that you and the other volunteers do. I hope that my "interesting questions" generate some "food for thought" among Linux programmers! It looks like IBM and their new associates will be addressing many of the drawbacks that I pointed out ....by paying their programmers to code using the Linux kernel (along with the NSA)! I hope that "far-reaching" programs like the two I have initiated will garner monetary support somehow!

Please know that I put myself "at your service" to the whole of the Linux community, as a "synergy design consultant" (which pretty much covers EVERYTHING) and would be glad to volunteer to contribute my perspective on any projects/problems!

Remember: "the significant problems we face today cannot be solved by the same level of thinking that created them" Albert Einstein.

May the Source be with you, Darrell Ernest Rolstone


(?) Bulk File Transfers from Windows to ???

From Brian Schramm on the L.U.S.T List

Answered By Jim Dennis

I have a Linux machine on a cable modem. That server has a lot of files that I need to get to from a Windows machine in another location that is on a DSL line. I have tried Samba, but it is apparently blocked at the cable co. I think NFS is open, but there is no NFS client that I have gotten to work on Windows yet. I have pcnfs installed on my Debian server, and my local 95 machine does not attach to it. I have tried ice-nfs and omni for client software.

Is there a way to do this? Is there a problem in doing this? I am at my wits' end.

Please help.

Brian Schramm

(!) [JimD] The approaches you've attempted so far all relate to file sharing protocols (NFS, SMB). These are normally only used on the LAN or over VPN or dedicated links. In general your best approach for a one-time or any periodic file transfer is to archive the files into one large file (a tar file for UNIX and UNIX-like systems, a ZIP file for MS-Windows and MS-DOS systems, a StuffIt or similar file for MacOS boxes). Usually you'd compress the archive as well (gzip or bzip2 for UNIX/Linux, implicit for .zip and .sit files).
Once you have the files archived you can use ftp, scp (SSH copy) or even rsync over ssh to transfer it to the remote system.
Of course this might take a very large amount of temporary file space (usually at least half of the total size of the originals) at each end of the connection. If this is a limiting consideration for your purposes, perhaps burning CDs of the data and shipping them via snail mail might be the better approach.
Under UNIX you can avoid the large temporary copy/archive requirements at both ends by archiving into a pipeline (feeding the archive data into a process which transmits the data stream to the remote system) and by having the remote system extract the archive on-the-fly.
This usually would look something like:
cd $SOURCE && tar czvf - $FILE_DIR_LIST | ssh $REMOTE_HOST "cd $DESTINATION && tar xzpf -"
or possibly like:
ssh $REMOTE_HOST "cd $SOURCE && tar czvf - $FILE_DIR_LIST" | (cd $DESTINATION && tar xzpf -)
... depending on whether you want to push the files from the local machine to a remote one, or vice versa. (Here $REMOTE_HOST stands for the name or address of the other system.)
For MS-Windows and MS-DOS systems, I have frequently used a Linux boot floppy (like Tom's Root/Boot at: http://www.toms.net/rb) or a bootable CD (like Linuxcare's Bootable Business Card --- at: http://open-projects.linuxcare.com/BBC/index.epl). Basically you boot them up, mount up their FAT or VFAT filesystems and do your thing -- in those cases I've usually had to use netcat in lieu of a proper ssh tunnel; but that's just laziness.
Here's a sample script I use to receive a system backup from a Windows '98 Point of Sale system (which I call "pos1" in my backup filename):
#!/bin/sh
  ifconfig eth0 172.17.17.1 netmask 255.255.255.0  broadcast 172.17.17.255
  nc -v -v -n -w 6000 -p 964 -l 172.17.17.2 964 -q 0 | bzip2 -c > $( date +%Y-%m-%d )-pos1.tar.bz2
  ## cp /etc/resolv.conf.not /etc/resolv.conf
  sync
I run this on one system, and then I go to the other system, boot it, configure the network to the ...2 address as referenced in my nc command above, mount my local filesystems (the C: and D: drives under MS-DOS), create an mbr.bin file using dd if=/dev/hda of=/mnt/mbr.bin count=1 bs=512, and feed the receiver with:
tar cBf - . | nc -p 964 172.17.17.1
(nc is the netcat command).
If I had to manage any Win2K or NT systems I'd probably just install the Cygwin32 tool suite and see if I could use these same tools (tar, nc, ssh, etc.) natively. MacOS X should already have these tools ported to it, so similar techniques should work across the board.

(?) Cannot Login Question

From Nancy Laemlein

Answered By Ben Okopnik

Hello,

I found my problem listed as http://www.linuxgazette.com/issue37/tag/46.html, but no solution.

I have been running RH6.2, kernel 2.2.14-50, on i586, as two test servers.

Both have been running for one to two months. One morning I restarted both servers, and found that no normal user could successfully log in. I could only log in as root -- or, even more bizarre, as any user but using the root password.

(!) [Ben] Hm. I hate to jump to such an obvious conclusion, but that kind of behavior seems "man-made" rather than some specific failure. Your site may well have been cracked.
One of the first things I'd do - given the problems that you're encountering - is compare the size of your "/bin/login" and "/bin/bash" to those on a normal system (this assumes the same distro or at least GNU utility versions on the machines.) If they're significantly larger, they're probably "rootkit" versions, compiled with the library calls in the executable. If you can compare the sizes with the originals (i.e., look inside the RPMs), so much the better.
Check your access logs. The intruder can wipe those, but there's always a chance - most script kiddies are pretty inept.
Do a "find / -name bash" to search for an extra copy (usually SUID'd) of "bash"; in fact, doing an occasional search for SUID'd files on your system - and being familiar with that hopefully very short list - is a good thing to do on any system you admin.

(?)

I have created a new user and tried logging in; same scenario, the new user cannot log in with the newly assigned user/password, but can log in as the new user using the root password.

For "startx" problem I have checked /etc/security/console.perms and edited File classes

 from:
 <console>tty=[0-9][0-9]* :[0-9]\.[0-9] :[0-9]
 to:
 <console>tty=[0-9][0-9]* vc\/[0-9][0-9]* :[0-9]\.[0-9] :[0-9]

I think the origin is in the password problem, but I don't know where to start. The servers are using shadow passwords; the files /etc/passwd and /etc/shadow look like this:

  -rw-r--r--  1 root  root  944 passwd
  -rw-r--r--  1 root  root  944 passwd-

  -r--------  1 root  root  979 shadow
  -r--------  1 root  root  979 shadow-

Do you have any ideas?

Many Thanks -
Nancy Laemlein

(!) [Ben] The perms look OK; that might not have much to do with it though. If you find that you have indeed been cracked, you'll need to reinstall your system (since anything could be compromised), and read the Security-HOWTO before putting it back on-line. Running Bastille (a sort of an automated security audit) on your machine is a fairly good idea.
Do note that the problem could be as simple as some strange library succumbing to bit rot. Doing diagnostics via e-mail with limited information is a middlin' tough job.

(?) LG 24 - Tips: Yet another way to find

From Bill Thompson

Answered By Ben Okopnik

Hi,

I have been using grepfind since it was published in the LG 24 Tips column, with great results. Since moving to Mandrake 8.0, it no longer works as before. It does its search, but nothing is written to the display; the command line just scrolls down as many lines as grepfind found results. A friend using Mandrake 7.2 says grepfind works. Another friend using Mandrake 8.0 reported the same results as I get.

PLEASE help!

(!) [Ben] Bill, the only thing I can figure is that the syntax of 'find', 'sed', or 'grep' has been changed in the 8.0 distro. Nowadays, there are better ways to do what you want - as an example, my version of 'grep' (GNU 2.4.2) supports recursive searching via the "-r" switch - but if you want to find out what's bugging your script, try removing the "layers" one at a time.
As an example, in 'grepfind' you have the following lines:

if [ "$2" = "" ]; then
find . -type f -exec egrep -i "$1" /dev/null {} \; | sed -e 's/[^ -~][^-~]*/ /g'
(This is the "single-argument" mode, so that's how you'd test it.) Try eliminating the "sed" part:

if [ "$2" = "" ]; then
find . -type f -exec egrep -i "$1" /dev/null {} \;
# | sed -e 's/[^ -~][^-~]*/ /g'
Now run the script and see if you get reasonable output; if you do, the problem is in 'sed'. If you don't, the problem is in 'find' or 'egrep'; split them out like so:

if [ "$2" = "" ]; then
find . -type f
# -exec egrep -i "$1" /dev/null {} \;
# | sed -e 's/[^ -~][^-~]*/ /g'
This time, if the problem disappears, it's in 'egrep'; if it persists, it's in 'find'. Check the appropriate manpage for whatever the syntax change may be; someone may have decided to go back to that perverted version of 'find' that requires "-print" in order to output anything (yechhh!), for example. After that, the only thing that's left is figuring out what the author wanted the function to do, and replacing the syntax with the new version that will do the same thing.
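(For reference, with a recursive GNU grep the core of the script collapses to something like the line below -- minus grepfind's control-character filtering:)

grep -irn 'pattern' .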
Good luck.

(?) Ben,

Thanks for the troubleshooting tip. 'sed' was the offender. I recompiled the original one from Mandrake 7.2 and now all's well. To date, I haven't experienced any fallout when 'sed' is used elsewhere.

(!) [Ben] It's interesting to note what subtle bugs can come up even in shell scripts over time (this, of course, was not a script bug but a GNU util bug, but still...) Glad that I was able to help, Bill.

(?) Dash it All! Coping with ---Unruly--- Filenames

From John Murray

Answered By Jim Dennis, Andrew Higgs, Ben Okopnik

I was experimenting with the TAR command and trying to get it to "exclude" some files. I finally got it to work but in one of my tests I ended up sending the output of the TAR command to a file called -exclude. I want to delete -exclude, but when I type in the command to delete it "rm -f -exclude", the command thinks the file name is an option. The same thing happens when I try to rename the file. Any ideas??

(!) [JimD] This is a classic UNIX FAQ. I pose variations of it to prospective sysadmins and support personnel when I interview them.
(!) [Andrew] Hi John
Try 'rm -f -- -exclude'. The '--' terminates the options list.
Kind regards
(!) [JimD] Prefixing the name with ./ is more portable (works with all UNIX commands on all versions of UNIX) and is even shorter (saves one space).
The current directory in UNIX (and Linux, of course) is always the "dot" directory. When we use a filename with no slashes (directory/subdirectory delimiters) in it, that's merely shorthand for "the one in the current directory." So any file foo is also ./foo.
More importantly, -foo is the same as ./-foo. (Of course it's also the same as $(pwd)/-foo --- the full path to the filename.)
NOTE: I give half credit to people who suggest " -- " (a double-dash argument to rm and some other commands, signifying the end of all option processing by that command and thus forcing all following arguments to be taken literally rather than as possible option switches).
(!) [Ben] <laugh> How about for using Midnight Commander? Scroll down to the filename, hit 'F8'. Gone.
(!) [JimD] Half credit. What if mc isn't on this box? How about mc's "undelete" feature? Is that file really gone?
(!) [Ben] Erm... the "undelete" feature is a facet of the ext2 file system; nothing to do with MC itself, which just provides an interface to it. <grin> You'd get a _debit_ for that one.
(!) [JimD] I'll take the hit --- but I'll stick by my guns on the opinion that "rm is forever" (even though I've recovered deleted files using grep, dd and vi). I've read about mc's undelfs before, but I've never used it and I didn't remember the details.
(!) [Ben] "undelfs", despite the confusing name, is no more a filesystem than is their "tarfs": 'foo'fs, in MC parlance, is simply a way of looking at 'foo', where 'foo' is presented as a filesystem - sort of like what the 'mount' command does with a DOS floppy. In the case of "tarfs", it allows you to look at tar files as if they were directories and subdirectories containing files; with "undelfs", it allows you to look at the deleted-but-not-yet-reused filespace in ext2 as if it was a directory. Pretty neat gadget that greatly facilitates the undeletion process; however, the initial read (it would have to scan the entire partition, obviously) takes a while. MC, despite its ease of use, goes very deep; you can actually edit the various 'fs' files, and even create your own!
(!) [JimD] I'll give 3/4 credit for anyone who says: "I just use (x)emacs dired-mode for all my file management, so I'd just C-s (search) to move point to it, and hit the 'd' key."
(!) [Ben] Hey! What if emacs isn't on the box? In my opinion, that's a higher probability than with MC.
(!) [JimD] I'll even give 90% credit for someone who says: "I forgot all my shell tricks since I do everything in perl. So I'd just use: perl -e 'unlink("-foo");' and be done with it"
(!) [Ben] <laugh> That's what I used the last time I had to rename a bunch of MP3s (KOI8-R (Russian) song names on my 'LANG=C' system). Even MC had a problem; Perl chewed it right up. Except in my case, the above would be
perl -we'unlink -foo'
It's not a list and there's no precedence issue (no parens), and there's no interpretation necessary (no double-quotes). Also, enable warnings to tell you if something weird happens.
(!) [JimD] I give extra credit for those who recognize it as an FAQ and laugh at me for posing the question.
I'll give double credit to anyone that recites the FAQ, and ticks off six different ways to do it (--, ./, mc, perl, dired, and one I've never heard of) and tells me that he saw the whole thread on TAG!
(!) [Ben] Edit the inode directly - hexedit, MC, etc. Use C's "int unlink (const char *FILENAME)". Tell an assembler programmer that you've found something that can't be done in it. <evil grin>
(!) [JimD] That would be using fsdb or debugfs, and the clri and unlink commands therein.
(!) [Ben] (and, from the "incidental feature" side of the fence...)
Use a large electromagnet. Take a hammer to the hard drive. Take off and nuke the site from orbit (it's the only way to be sure.)
And, as the bonus option, go to Jim Dennis and say, "Jim, I have this beautiful barometer I'll give you if you..." Oh, oops. Wrong story. :)
...Oh. You said "_one_ [way] I've never heard of"? :)
(?) Now, don't you wish I was a hiring manager at someplace you wanted to work?
(!) [Ben] If I wanted to be a sysadmin (which I don't), definitely. Knowing that the people around me are vetted by someone knowledgeable instead of some HR bozo with a checklist <barely managing to suppress Major Rant with a sack> would be a Good Thing!
(!) [JimD] I give zilch credit for any efforts to use quotes, backslashes, or other shell "escaping" techniques.
The leading dash is conventionally used by UNIX commands (and specified in the POSIX standards for many of them) as a way to distinguish "object" arguments (things the command will work on) from "option" (or adverbial) arguments (also known as "switches") which specify how the command will work. Note that this is a convention, and that it applies to commands. The shell (command interpreter) does no special parsing of leading dashes. Quoting and escaping only affect how the shell parses the command line (how it is passed to the commands) --- so no amount of shell shenanigans will ever overcome a "dashed" filename problem.
(BTW: That's the easiest question on my list of "qualifiers" for sysadmins. No offense intended, but I need people to know the FAQs before I can let them loose with a root prompt on my production systems)
Here's a slightly tougher one: "due to some merger we need to migrate a set of users to a block of (formerly unused) UIDs; why and how? Also what are some 'gotchyas' in this process?" (Preferred answer to "how" is in the form of shell script pseudocode; feel free to gloss over details with comments on how you'd safely figure out implementation specifics).
That's a medium level qualifier.
BTW, chowning all of the files belonging to just one user is a one-liner; however, it's not as simple as you'd think. Try that for 3/4 credit.
Answer next month; if you remind me! (My rough cut at this is just over 20 lines long).
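(For readers who can't wait: one plausible shape for that one-liner -- not necessarily Jim's version, and it glosses over the gotchas he alludes to -- assuming the old and new IDs are in $OLD_UID and $NEW_UID:)

find / -user $OLD_UID -print0 | xargs -0 chown $NEW_UID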
(!) [JimD] The UNIX FAQ is at:
http://www.faqs.org/faqs/unix-faq/faq/contents
... and this is question 2.1
Read that before you su again!

(?) File Tranfers with AIM (AOL Instant Messenger)

From Steve Paugh

Answered By Jim Dennis

I have a working LRP (Linux Router Project, www.linuxrouter.org) box, and I would like to make file transfers with AOL Instant Messenger possible from behind this box to the outside world for my Windows clients. I am not sure exactly how to do this.

I've seen something like the below in a different setup that hadn't been tested.

My understanding is that the 0.0.0.0/0 is for DHCP, but I am not sure about the $AIM_HOST.

Does anyone have any idea of a rule that would allow what I need? I am kinda new to firewalling and would appreciate any help you can give me.

$IPCHAINS -A input -s 0.0.0.0/0 -d $IP_EXT/32 5190 -p tcp -j ACCEPT
if [ "$AIM_HOST" != "firewall" ]; then
$IPMASQADM portfw -a -P tcp -L $IP_EXT 5190 -R $AIM_HOST 5190
fi

Much thanks,
Steve Paugh

(!) [JimD] First, I know NOTHING about AIM. I figured out that it is AOL's interactive chat system over the Internet; but I don't use it or anything like it (though it, ICQ and so many other "instant messaging" systems are available). I prefer e-mail and I already spend WAY too much time interacting with people via that channel.
The only "instant messaging" I do for now is "voice over POTS line" (or via cell phone). I don't even know how to send SMS messages to my phone. (It seems to be a fully WAP capable toy --- but that's another gadget that I haven't invested the time to learn).
O.K. Now that I've set your expectations properly (you are getting this response from a backwoods, curmudgeonly geezer), I'll answer your question.
In the context of this script fragment 0.0.0.0/0 is an argument to a command. Specifically the script is calling on some command whose name we can't see because it is stored in a variable named IPCHAINS. The shell (the script interpreter) "dereferences" $IPCHAINS as the script is run. The $ is a "dereferencing operator" -- it means: replace this variable with the variable's current value. All of the $XXXX thingies in this fragment are shell variables.
As you can see, shell programmers usually capitalize the names of their variables so they stand out and are easier to spot. This is merely a convention. In this case the $IPCHAINS and $IPMASQADM variables are clearly supposed to be holding the full paths to the ipchains and ipmasqadm utilities. In some other part of this script (not shown), or in some parent process that invoked this script, there would be some assignment to these variables that provided the values for a given system. This allows the programmer to localize the system-specific code to some point near the top of the script, so that they can make any necessary changes in a single place rather than having to hunt throughout the whole script.
As an argument to the ipchains command, the -s refers to a purported source address pattern. In that case 0.0.0.0/0 refers to any IP address. The -d refers to a destination address pattern; $IP_EXT is a variable (which presumably would be set to the IP address of our router's external interface, as the name clearly implies). The /32 indicates that this is a full 32-bit IP address and NOT a subnet designator; successively smaller values would indicate progressively larger networks and subnets based at certain special addresses (space doesn't permit a full description of subnetting and routing, but search the LG archives for a 20-page treatise on that topic). The 5190 is a port number; and the -p refers to the protocol, which in this case is TCP (as opposed to UDP, ICMP, etc). So this ipchains rule applies to packets which purport to be from anywhere, and are destined for TCP port 5190 on the local system's external interface.
The -j in ipchains is a bit confusing. In the man pages and docs it refers to "jump" (while processing the sets of rules, if any packet matches all of these conditions, "jump" to another set of rules and process that set). However, in this case we aren't "jumping" to a different chain of rules; we're "just" accepting the packet into the system. When I teach people about the IP Chains package I teach this concept: -j either means "just", as in "just" ACCEPT, DENY, REJECT, REDIRECT, MASQ, or RETURN the packet, or it means "jump" to a user-defined (and named) chain of rules.
In our example the -A means to "add" a rule, and the "input" argument names the chain of rules to which the rule will be added. The input chain is one of the pre-defined sets of rules that the Linux 2.2.x kernel always has present (if it has the ipchains support compiled in at all).
Oh yeah! I didn't put any of this into context yet. The Linux kernel has optional builtin support for packet filtering and masquerading. This has undergone numerous changes over the years: the ipfw code in 1.3.x, the ipfwadm code in 2.0.x, the ipchains code in 2.2.x, and the new netfilter code (using iptables) in 2.4.
In all of these cases the kernel has a table of rules against which it checks every packet that it receives, and/or every one which it attempts to send, and/or any packet it intends to forward. (I kept saying "and/or" because the exact rules of which rulesets are traversed differ from one major kernel release to another --- a packet that has to traverse the incoming, forwarding, and outgoing rulesets in one release might only need to traverse one of them in newer kernels; read the appropriate HOWTOs and look at the ASCII art diagrams for further enlightenment on this issue if you need it.)
There are various commands: ipfwadm, ipchains, iptables which match the major kernel releases and allow the administrator to insert or add rules to these kernel tables, to delete or flush the rulesets, to query the system and determine how many packets matched a given rule, etc.
It's handy to understand this bit of background. The ipchains command here is adding a rule to the kernel's input chain.
The next command line is a conditional; basically it's saying that "if the AIM_HOST is not the firewall" then (it must be some other system behind the firewall) we should use the ipmasqadm command to set up a port forwarding rule. We will "add" a rule for TCP that will take any packets to our "local" port 5190 on our external interface, and we'll forward them to port 5190 on a remote host, whose name or address is stored in $AIM_HOST.
Personally I think this is sloppy coding. What if I wanted to name my internal AIM_HOST "firewall?" Using a plain word like "firewall" as a sentinel value is kind of bogus. Using localhost (the canonical name for the local system) would be quite reasonable. However, it's a nitpick.
The last line is simply the Bourne shell way of marking the end of an "if ... then ... else" block. It's the word "if" spelled backwards. If we were looking at the more complex conditional structure called a "case" then we'd find the end of that block by looking for the "esac" token. Once upon a time I read about the programming language which was Stephen Bourne's inspiration for this quirky syntax (ALGOL 68). Thankfully he only did this with conditionals, and we don't have to end our "while" loops with "elihw" and our "for" loops with "rof" --- even better, we don't have to try ending our "do" loops with an octal dump.
[Sorry! Inside joke there. The UNIX od command is an "octal dump" utility, so "do" backwards would create an inconvenient token collision].
Actually the while, until, and for loops (and the odd select prompting construct) all use the "do" and "done" tokens to delimit them.
So, back to your original question: It would appear that you can get AOL Instant Messenger to work through your firewall simply by relaying traffic for TCP port 5190 to the appropriate system. This fragment of shell code gives a rough example of how to do that on a Linux 2.2.x system (or later, but using the ipchains support module). However, you'll have to fill in the variables as appropriate to your system. You can just replace all the $VARIABLE_NAME thingies in this example with the literal text that points to your copy of ipchains, your copy of the ipmasqadm command, your external IP address, and (possibly) the IP address of the internal system where you'd be running your IM client.
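For instance, with the variables filled in by hand, it might look like the following (every address and path here is made up for illustration; substitute your own):

/sbin/ipchains -A input -s 0.0.0.0/0 -d 192.0.2.1/32 5190 -p tcp -j ACCEPT
# if the AIM client runs on an internal machine rather than on the
# firewall itself, relay its port 5190 to that machine:
/usr/sbin/ipmasqadm portfw -a -P tcp -L 192.0.2.1 5190 -R 192.168.1.10 5190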

(?) reverse dns

From Iskandar Awall

Answered By Mike Orr

Do you know of a step-by-step guide to doing reverse DNS lookups in Unix? I have tried a reverse DNS lookup, but it doesn't seem to resolve.

(!) [Mike] You've got a choice of several commands. 'dig' seems to provide the most information.
$ host 1.2.3.4
$ host domain.com
$ dig -x 1.2.3.4
$ dig domain.com
$ dig -x 1.2.3.4 ANY
$ dig domain.com ANY
$ nslookup
> set type=any
> 1.2.3.4
> domain.com
> [ctrl-d]
$
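(Under the hood, a reverse lookup is just a PTR query against the in-addr.arpa tree, with the address octets reversed; "dig -x" builds that name for you. The equivalent explicit query would be:)

$ dig 4.3.2.1.in-addr.arpa PTR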
If a reverse lookup fails, it probably means there is no reverse record for that IP. There's no technical connection between forward and reverse records. Indeed, they may even be maintained by different organizations, so keeping them in sync may be impossible. The forward record (name -> number) is maintained by whoever hosts your domain name--your ISP, another company, or you on your own server. The reverse record (number -> name) is maintained by whoever maintains your IP number; i.e., your ISP. My ISP, for instance, refuses to change my reverse record from a long ugly DSL name because they say their billing system depends on that name. I have my own domain which I host myself (to avoid the $5-$20/month the ISP would charge--an outrageous rip-off for one minute's worth of labor when the record changes, and no cost in the months it doesn't, except the cost to run their DNS server, which they'd have to do anyway), but since I cannot get the reverse record changed, the forward and reverse records don't match. There are also some ISPs out there who don't have reverse records at all, because they think that setting reverse records for their customers' IPs is not worth the time.
Users are caught in the middle of a debate over whether reverse records should be used for identification. The pro argument is that it helps identify spammers and abusers. The con argument (which I believe) is that the purpose of domain names is convenience: so you don't have to remember a number, and so that a site can maintain a "permanent" identifier even if they move to another server or a different ISP. You shouldn't have to have a domain name, much less have it set to any particular value. And to identify scRipT kIddyZ, just do a simple traceroute. The second-last hop is their ISP (or part of their own network), and ISPs always have their own domain name showing. And what if a computer has several domain names, each hosted at a different organization? There can be only one reverse record, so all the other names will be left out in the cold.

(?) Closing Ports

From Saylormoon7

Answered By Mike Orr

(?) Hello, I'm new to the 'puter world and I have been hearing a lot about "closing ports." What exactly does this mean? And how would I go about checking for open ports and closing them? Again, like I said, I am new to all of this, so if you can help me, please explain it in the simplest way you can. Thank you for your time and help.

(!) [Mike] A port is simply a positive integer the kernel uses to map a network packet to the currently-running process (=application instance) it came from or is going to. (This kind of port has nothing to do with the physical ports on the back of your computer--serial, parallel, USB.) It is not the Process ID (PID), because each process has only one PID but it may have several network connections open simultaneously.
Your kernel has ports numbered from 1 to 65535. Each port is either open (currently in use) or closed (not in use). Most ports are used as endpoints for current connections (every connection has one local port on your computer and one remote port on the other computer), but the ports you're interested in are the ones open in "listening" mode. Listening means that there's no particular "other end" of the connection--the server is waiting for a client to come along and use it.
Think of prostitutes waiting on a street corner. The only difference is that when a client does come up, the hooker (or rent boy) clones herself (himself), and the clone walks off with the customer, while the original continues waiting for the next customer.
Of course, programs have bugs, and a smart cr@cKeR knows which versions of which programs have exploitable vulnerabilities. So he'll go scouring around the net looking for computers running vulnerable services. Say you're running a version of Sendmail that has a certain security weakness. The cracker finds it, and you're dead. But say you don't need Sendmail running on that particular computer, so you turn it off. The cracker comes along, gets a "Connection refused" error, and curses the darkness. The port is closed, meaning there's no application running to receive his request, so the kernel can do nothing but say, "Sorry, nobody's home." Frustrated, the cracker goes and bothers somebody else's computer instead.
Another trick some crackers do is to portscan the computer. This means he'll try to connect to every possible port. Most will be rejected, but at least he'll know which ones are listening. Then he can concentrate his attack on those ports. Usually, he doesn't care about those applications in themselves; he just wants to force the program into an error condition such as a buffer overrun in such a way that it fools the computer into giving him a root shell. Then he can try to crack the US National Security Agency, and the guys in black suits will come knocking at your door thinking it was you.
Closing ports is something you can do yourself: simply turn off all services you don't have to have running on that machine. To combat portscanning, you can use various software tools which log the attempt and/or raise an alert. Some of these programs are described in the Linux Gazette articles below. The articles also include other security tips for keeping the bad guys out of your servers.
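A quick way to see what's listening, and to switch something off, might look like this (a sketch: the service name is just an example, and the init/chkconfig layout shown is Red Hat-style; other distributions differ):

# show listening TCP sockets and the programs that own them (run as root)
netstat -ltnp
# stop an unneeded service now, and keep it from starting at boot
/etc/rc.d/init.d/sendmail stop
chkconfig sendmail off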
The last three articles are listed in chronological order and perhaps give the best background.
You can also poke around http://www.securityportal.com for similar security tips.

(!) Best of ISO burning under Windows.

Answers From Robert L Keeney, Götz Waschk, Simon Rowe, Chris Olsen, Ed Wiget

We had a general request for burning CD images under Windows: http://www.linuxgazette.com/issue65/lg_mail65.html#wanted/1

I've downloaded the ISO file. Now what do I do with it? I've burned a CD and it won't boot with it.

We got a lot of answers :) Here are the best ones:

MULTIPLE DIRECTIONS

Although most of these are Linux, there's a few Windows and at least one Macintosh program shown.

(!) [Robert L Keeney] The adaptec instructions worked for me. The others I haven't tried.
http://www.linuxiso.org/cdburninginfo.html

CDRECORD

In the original our querent complained that the Howto instructed him in cdrecord...

(!) [Götz Waschk] This program is portable and the windows version shares the parameters with the linux version.
There is a binary for windows at:
ftp://ftp.fokus.gmd.de/pub/unix/cdrecord/alpha/win32/
First you have to find out the SCSI id of your CD recorder with
cdrecord -scanbus
...then you can burn the image with
cdrecord dev=<your_id> filename.iso
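(With a hypothetical recorder at SCSI id 0,6,0, as reported by -scanbus, that would be:)

cdrecord dev=0,6,0 filename.iso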

NERO BURNING

A different reader noted that Nero Burning ROM's FAQ on their website gives a step-by-step example of burning a Red Hat image to a disc.

(!) [Simon Rowe] Change the file extension to 'NRG', e.g.
SuSE71.iso ---> SuSE71.nrg
The Nero software will then recognise the ISO image correctly, and should burn it OK using the application's defaults (in version 5.x, anyway!)
Once the filename extension has been changed, just double-click the file in Windows, and Nero should load, ready to burn the ISO image. This works under Windows 2000 and Windows 9x; I have not abused my computer with Windows ME to try it there, though :)

ADAPTEC EZ CD

(!) [Chris Olsen] EZ CD Creator will handle ISOs really easily: just install it, then right-click the .iso image and select Record to CD. Presto -- a proper image, not one big file on a CD.
(!) [Ed Wiget] Windows 98 + Adaptec EZ CD Creator 4.xx
  1. Download the ISO file for the distribution you wish to create CDs for.
  2. Assuming you already have Windows 98 on that machine and Adaptec EZ CD Creator installed, you need to close everything down in the taskbar next to the clock (no programs except Systray and Explorer should show up if you press Ctrl+Alt+Del).
  3. To make sure the large ISO file is contiguous, you should defragment your hard drives. As an added measure, you should also clear any files out of C:\TEMP and C:\WINDOWS\TEMP.
    I would hope this isn't actually necessary -- it should be making regular Windows filesystem calls to get at the bits -- but it might make the burn faster. My suspicion is that most of that speed would come from a general Windows speedup, if it's been a while since your last defragmentation.
  4. Open EZ CD Creator, and select Data CD
  5. From the File menu, select Create CD from CD Image (almost all CD recording software for Windows uses a similar phrase to distinguish an ISO file from normal data files).
  6. EZ CD Creator by default looks for a *.cif file; change this to ISO from the drop-down list in Files of Type.
    note: another reader commented that 4.02d defaults to extension .cdi ... I suppose it would be nice if these Windows burning programs would learn to agree on something. *sigh*
  7. Browse to the location of the downloaded iso file and select it.
  8. Select Open
  9. The buttons Create CD, Track at Once, and Close CD should be selected.
  10. Select the speed of your CD-Recorder
  11. Select O.K.
  12. When it is finished recording the CD, place it in another computer and make sure you can see the CD's contents.

To which I will add, these may be mostly Linux binary programs on our discs, but you should be able to mount up the disc and see their names, at least. That's what all those "TRANS.TBL" files you might see are ... support for long names on a CD filesystem.

Thanks to everybody who wrote in! -- Heather

More 2¢ Tips!


Send Linux Tips and Tricks to


Getting s-video to work on Win2k. Tip: working with tech support.

Mon, 28 May 2001 11:33:29 -0400
Question From: Jonathan Van Luik
Tip From: Ben Okopnik ()

I am trying to help my friend use his Inspiron 3800. He wants to display what is on his laptop on the television, to watch his DVD movies. However, now that he has Win2K on the laptop he cannot seem to get the Fn+F5 button function to work. This should be very simple: connect the s-video to the TV, and then push the button. But it isn't working, and I cannot get Dell help since it is not my laptop.

Ben sent him a cheerfully silly note expressing that this is not the right place for this question. See the Greeting From Heather Stern in this month's TAG for more about that ;) -- Heather

Just as a possibly helpful aside, I've spoken to Dell tech support 20 times or more, never as an owner. I always start out the conversation with "Hi, this is Ben Okopnik, and I'm the tech working on Mr. X's machine." As long as you have the serial numbers, etc. that they're asking for, there shouldn't be a problem; ask to speak to a supervisor if there is one. There's absolutely no reason for them to deny you help, especially if your friend is there with you.


Need to contact a hacker

Thu, 7 Jun 2001 09:01:30 -0700
Question From: Kane Felix
Tip From: Dan Wilder ()

On Thu, Jun 07, 2001 at 08:39:49AM -0000, Kane Felix wrote:

I have been attempting to contact any hackers in the Tampa Bay / Orlando, FL area. I have a project that needs some expert input; however, I have been unsuccessful thus far. Is there a mailing list or e-mail address listing for this area that I may locate? If so, please help me locate it, or offer any advice possible.

Depends on which common meaning of the term "hacker" you intend. For "expert code mechanic," try

http://www.linuxjournal.com/glue

("Groups of Linux Users Everywhere") for a Linux user group in your area.

Do you need to restrict yourself to your geographic area? Linux itself is written by many people scattered all over the world. The Internet provides a substitute for physical proximity. While a little long in the tooth, the internet newsgroups still provide forums around which many efforts coalesce. Check

http://groups.google.com/googlegroups/deja_announcement.html

For mailing lists or other forums related to the subject matter you're interested in, again check Google. A suitable search will reveal various forums. Jump in, participate, you're quite likely to find people who can assist you.

If your project is open source, consider registering it on SourceForge,

http://www.sourceforge.net

which provides network CVS access, forums, and other services organized around particular projects.

-- Dan Wilder


How do I create a new driver disk for RH7.1 network

Tue, 22 May 2001 15:15:21 -0700
Question From: Rick Lin
Tip From: Breen Mullins ()

Hi there. I am trying to upgrade a working RH6.2 system to RH7.1 using the netboot.img (install via FTP/HTTP/NFS). When I get to the question "do you have a driver disk" I insert the driver disk, but the 3Com 3c509 NIC card is not listed among the drivers; they all seem to be PCI NIC cards.

How can I create a new driver disk for the 3c509 ISA card?

Hi Rick --

See the README file from the RedHat CD. It points you to an additional drivers.img file that you use to make another floppy. I'd guess that the 3C509 driver is there.

HTH --

Breen


Maximum Username Limits in /etc/passwd

Tue, 05 Jun 2001 12:07:20 -0700
Question From: José Antonio Pérez Hernández
Tip From: Jim Dennis ()

Hi,

I'd like to know how long the account field in the /etc/passwd file is, and whether it can be modified. I'm trying to install a system that will serve users distinguished by their registration code (14 chars or more) instead of their usual user name.

Any tip is welcome. TIA.

Jose Antonio.

Under any reasonably recent Linux distribution (any glibc based one) you can have usernames of up to 31 characters. I think you're still required to have an initial alphabetic and I'd be very dubious of any effort to use any characters other than alphanumerics and maybe underscores.
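(If you want to test the limit empirically, something like the following sketch works -- this hypothetical name is exactly 31 characters, and whether useradd accepts it without complaint depends on your shadow-utils version:)

useradd abcdefghijklmnopqrstuvwxyz01234
su - abcdefghijklmnopqrstuvwxyz01234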

However, I think the software that you're talking about is bizarre in this requirement --- and I suspect that it's a severely broken model that would lead UNIX software to impose constraints on usernames beyond those implicit in the standard libraries.

BTW: Limitations on username lengths and similar issues are purely a library and userspace issue. The kernel has no notion of usernames. UID limits are primarily a kernel issue, although the libraries naturally must implement to the kernel's requirements. Linux kernels prior to 2.4 used a 16-bit UID (limiting us to 65,536 distinct users). In 2.4 this has been changed to a 32-bit value, allowing us to use up to 4 billion UIDs. Although it's a rare system that needs to provide access to more than 64K users, there are fairly common administrative requirements that UIDs be unique throughout the life of a company or organization -- that they never be re-used, or that they be retained for seven years, or whatever.

I realize that UID limits weren't part of your question; but they're likely to be of interest to other readers, especially those who come across this in the search engines.

-- Jim Dennis


regarding "LINUX FOR THE PRODUCTION ENVIROMENT"

Tue, 5 Jun 2001 17:01:15 -0400 (EDT)
toby cabot ()

Folks,

I enjoy your column and invariably learn something from reading it, but this time I can answer a question of yours!

In your answer to the "LINUX FOR THE PRODUCTION ENVIROMENT" question in the June issue you asked what sql-ledger is. It's an accounting package written in Perl. It uses a relational database back-end (Postgres, maybe others) and the UI is CGI scripts. It's pretty good; I used it last year when I was consulting, to cut and track some invoices. I'm not an accountant, but it seemed to work just fine for me, and it wasn't that hard to figure out.

http://www.sql-ledger.org

Regards, Toby Cabot


MySQL Tips and Tricks - finding Linux Magazine

Wed, 6 Jun 2001 09:32:04 -0000
Böðvar Björgvinsson ()

For the English version of Linux Magazin(e): http://www.linux-magazine.co.uk . This was the only version I had seen until I came across this posting about Linux Magazin being a German mag.

HTH

Bodvar


Want to remove Linux completely - GRUB still present

11 Jun 2001 10:13:01 +0200
Question From: Chandina rodrigo
Tip From: Huibert Alblas ()

Hi guys. I formatted my machine and got rid of Windows 98, but when I boot up I get to the grub> prompt. I'm a new user with some understanding of Linux and Windows. I found this posted somewhere and tried this piece of code:


dd if=/dev/zero of=/dev/hda bs=512 count=1

and it said Error: unrecognised command

Guys, I need to install Windows again for some project work, so could you kindly tell me what I should type at the grub> prompt so that I can do the normal fdisk with a bootable floppy of Win98/95?

Thanks.!!

This should be no problem,

This GRUB thing is the bootmanager installed by your Linux distro. GRUB is not Linux, so it cannot recognise the dd command. Since you already got rid of Linux :-( the only thing left is to remove the bootmanager. The dd command would be the right one if you were still using Linux.

Now, for the solution: boot from your Win98/95 startup floppy and run fdisk /mbr. That replaces GRUB in the master boot record with a standard DOS one, after which you can run fdisk and install Windows as usual.

Hope I could help,

Halb


lost linux password

Mon, 11 Jun 2001 12:01:15 -0700
Question From: Selim Javed
Tip From: Mike Orr ()

Dear sir,

I forgot my Linux password. I have rebooted and tried booting single-user, but I cannot get to a bash prompt.

Please help me.

What is the exact error message you're seeing?

What happens if you type this at the Lilo prompt?

Lilo:  linux init=/bin/sh
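If that gets you a root shell, the usual follow-up is a sketch like this (your root filesystem starts out mounted read-only):

mount -o remount,rw /     # make the root fs writable
passwd root               # set a new root password
sync
mount -o remount,ro /     # leave it clean for the reboot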


Re:prob in lilo booting

30 May 2001 10:41:25 +0200
Question From: saravanan_n
Tip From: Huibert Alblas ()

Dear sir,

I successfully installed Windows 2000 and Linux, but I need a dual-booting facility. I have a 20 GB Seagate hard disk, partitioned as 5 GB for C:, 10 GB for D:, and the rest of the space for Linux. I let LILO overwrite the first sector, but it is not working; it says my partition limit exceeds 1024 cylinders. Please give assistance.

with regards saravanan

I hope I understand correctly that you managed to install both W2K and some Linux distro, but got into problems dual-booting with LILO.....

If you have an 'older' version of a Linux distro, it probably hasn't got over the '1024 cylinder problem' yet.

I hope you have a bootfloppy ready and working. Then the only thing you have to do is:

remove old Lilo

goto http://freshmeat.net/projects/lilo

Download source, install, be happy

or ask follow up question right here....

The new versions of LILO have no problems on newer PCs (they can use BIOS LBA32 calls to get past the 1024-cylinder limit), and since your machine is new enough to run W2K, this should be no problem...
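(Once the newer LILO is built and installed, the relevant bit of /etc/lilo.conf is a single keyword; a minimal sketch, where the device name is an assumption -- then rerun /sbin/lilo to write it out:)

# /etc/lilo.conf fragment
lba32            # use BIOS LBA calls; lifts the 1024-cylinder limit
boot=/dev/hda    # install LILO in the MBR of the first IDE disk
# ... the usual image= / other= sections follow ...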

Hope I could help..

Halb


I need an answer from you....

Mon, 4 Jun 2001 22:35:28 -0700
Question From: James G
Tip From: Don Marti ()

I am wondering if there is a way, using CGI/Perl or C, to create a program that learns and mimics the packets sent by programs like RealPlayer etc., and then to modify the packets -- information like the destination, who sent it, and so on. I have read a lot on packet switching, bridge proxies, etc., and I have no idea what to do.....

The first thing you want to learn to use is a packet sniffer, such as Ethereal: http://www.ethereal.com

or tcpdump:
http://www.tcpdump.org
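For instance, a first look at the traffic might be something like this (a sketch: the host name is a stand-in, and 554 is the standard RTSP port -- the application you're studying may use something else):

# print packet headers plus a hex dump of each packet's contents
tcpdump -n -x host media.example.com and port 554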

That should give you some idea of what the proprietary application is sending over the wire. Next, you'll want to experiment with mimicking it. You can get the information you need to do this from the excellent "Unix Network Programming" volume 1, by W. Richard Stevens. (It's C-centric, but you can apply the ideas to other programming languages too.)

http://vig.prenhall.com/catalog/professional/product/1,4096,013490012X,00.html

-- Don Marti


Re: central logging and piping to postgresql db

Tue, 5 Jun 2001 19:02:16 -0700
Question From: control
Tip From: Don Marti ()

Not only do I want to have a central logserver, I also want to find a way to log these events to a PostgreSQL database table -- this includes, and is not limited to, the "secure" and "messages" files: all syslogd events. What should I do?

syslogd supports writing to a named pipe. See man syslog.conf. So, you could write a script to read from the named pipe and do inserts into the database.
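A minimal sketch of the pieces (the FIFO path and the database/table names here are invented; a table "syslog" with a single text column "entry" is assumed):

mkfifo /var/log/pgpipe
# in /etc/syslog.conf, direct everything to the pipe:
#    *.*    |/var/log/pgpipe
# restart syslogd, then keep a reader running:
while read line; do
    q=$(echo "$line" | sed "s/'/''/g")    # double any single quotes for SQL
    echo "INSERT INTO syslog (entry) VALUES ('$q');"
done < /var/log/pgpipe | psql -q logdb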

If something bad happens to the database, you'll still want regular logs to fall back on, so you should log the important stuff to files too.

-- Don Marti


Re: catch stdin

Thu, 7 Jun 2001 09:21:08 -0700
Question From: sami
Tip From: Mike Orr ()

On Thu, Jun 07, 2001 at 06:59:12PM +0500, sami ullah jan wrote: I need to do something like this:

telnet "host"
'catch' stdin
enter username
'catch' stdin
enter passwd

How do I go about 'catching' the stdin?

It sounds like you want a script to automatically log into one place. There are a few possibilities:

1) The 'expect' program allows a script to wait for certain characters from the host (e.g., "ogin:" or "ssword:") and then send the specified text. That's usually used for programs that want the entire session under program control. Whether you can use 'expect' to log in and then switch to interactive mode, I don't know (but see the sketch after this list).

2) If you use ssh instead of telnet, you can set up your account on 'host' so that it will allow you to log into it without having to type a password. See "man ssh".
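As it happens, expect's "interact" command does exactly that hand-off: it runs the scripted part and then gives the session to the user. A minimal sketch (the host and credentials are placeholders, and putting a real password in a script has obvious risks):

#!/bin/sh
expect -c '
    spawn telnet host.example.com
    expect "ogin:"   { send "myuser\r" }
    expect "ssword:" { send "mysecret\r" }
    interact
'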

-- Mike Orr

Hi, thanks for the quick reply before. You guys are doing a great job.

QS. How do you go about writing to the soundcard? What do you need to know? I presume it's not as simple as writing to a terminal device.

thanx, sami.


Completely wiping the MBR using DOS debug

Sat, 9 Jun 2001 15:36:11 +0100 (BST)
Chandima rodrigó ()

Hey guys.... I got it!!! I'm in love with Linux!! I found the code on a site... sorry for the bother... this piece of code did the trick!! Yep, it was Ben who had given the link -- thanks, fellow!! Made my day!!

regard. rodrigo!!

Boot with a DOS floppy that has "debug" on it; run "debug". At the '-' prompt, "block-fill" a 512-byte chunk of memory with zeroes:


f 9000:0 200 0

Start assembly mode with the 'a' command, and enter the following code:


mov dx,9000
mov es,dx
xor bx,bx
mov cx,0001
mov dx,0080
mov ax,0301
int 13
int 20

Press <Enter> to exit assembly mode, take a deep breath - and press "g" to execute, then "q" to quit "debug". Your HD is now in a virgin state, and ready for partitioning and installation.
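(For the curious: the code loads ES:BX with the zeroed buffer at 9000:0, sets CX and DX to cylinder 0, sector 1, head 0 of the first hard disk (80h), and invokes BIOS interrupt 13h function 03h -- write one sector -- before exiting via int 20h. In other words, it writes the 512 zero bytes over the master boot record.)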

Glad I could help, Rodrigo (or is that 'rodrigo!!'? :) -=- Ben Okopnik!!


MS Frontpage98 Server extention Redhat Linux 6.0 vs ASP

Thu, 31 May 2001 16:00:39 -0400 (EDT)
Question From: Francois
Tip From: Daniel S. Washko ()

Hi, I am currently running the MS FrontPage 98 server extensions on Red Hat Linux 6.0 (Apache). I would like to know if it is possible to run ASP on the same configuration. Would it be a FrontPage upgrade, or would it be on the Linux side, and are there major changes involved? Any help would be highly appreciated. Thanks.

Francois

You should be able to build apache with both Frontpage98 and ASP, but you will need to add mod_perl first. Check out this site: http://www.nodeworks.com/asp

-- Daniel S. Washko


YAHE: run BIND safely

Tue, 05 Jun 2001 08:04:47 -0700
Benjamin D. Smith ()

(Yet Another "Helpful" Email)

BIND sucks, and we all know it, even though it is a core piece of infrastructure to the 'net. Bind 9 looks good, but I don't quite yet feel ready to deploy it. Instead, run BIND in a chroot jail - so even if it gets hacked, they don't "get" anything.

There's a howto at www.linuxdoc.org:
http://www.linuxdoc.org/HOWTO/Chroot-BIND-HOWTO.html
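(The heart of the HOWTO's recipe is two switches on the named command line; the directory and user name below follow the HOWTO's layout and are otherwise assumptions:)

# run named chrooted to /chroot/named, as the unprivileged user "named"
named -u named -t /chroot/named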

-Ben

Thanks Ben, and for your thoughts on the learning curve of Windows vs. UNIX type systems, posted in the Mailbag this month. Just about any dangerous daemon might be a tiny bit safer in a chroot jail. It's not a perfect trap without some use of the 'capabilities' (privileges really) in newer kernels, but it's pretty good. -- Heather


Linux solution to syncing with Exchange Address books as a client

Tue, 22 May 2001 20:20:53 -0700
Question From: Alan Maddison
Tip From: Heather Stern ()

James

I hope that you can help me find a solution before I'm forced back to NT. I have to find a Linux solution that will allow me to connect to an Exchange server over the WAN and then sync address books.

Any suggestions?

Thanks.

Alan Maddison

Well, we've had numerous past articles address the matter of replacing an Exchange server with a Linux box, but you're asking about being a client to one...
And I can't just point you at the Gazette search engine :( because "Exchange" is way too common a word. MX records and the server side of this question will flood you, even without people just using the word for its real meaning.
But, we had a mention in a past issue (http://www.linuxgazette.com/issue58/lg_tips58.html#2c/6) about Bynari having a good one - they also have a server product. So I think you might find the Insight client to be just what you need! (http://www.bynari.com)
I post it again because I have to update you ... it's not free - you have to pay them for their hard work in getting the protocols right. Their website has a "price special" but it appears to have expired a month ago, so I have no idea what it costs now, but it's probably not too expensive. -- Heather


This page edited and maintained by the Editors of Linux Gazette Copyright © 2001
Published in issue 68 of Linux Gazette July 2001
HTML script maintained by Starshine Technical Services, http://www.starshine.org/

"Linux Gazette...making Linux just a little more fun!"


HelpDex

By Shane Collinge


[Six HelpDex cartoons: jennysknee.jpg, allclean.jpg, ruckus.jpg, explode.jpg, lead.jpg, psyched.jpg]

More HelpDex cartoons are at http://www.shanecollinge.com/Linux.

Shane Collinge

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.


Copyright © 2001, Shane Collinge.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Journalling Filesystems for Linux

By


Introduction

A filesystem is the software used to organize and manage the data stored on disk drives; it ensures the integrity of the data, so that data written to disk is identical when it is read back. In addition to storing the data contained in files, a filesystem also stores and manages important information about the files and about the filesystem itself (i.e. date and time stamps, ownership, access permissions, the file's size, the storage location or locations on disk, and so on). This information is commonly referred to as metadata.

Since a filesystem tries to work as asynchronously as possible, in order to avoid hard-disk bottlenecks, a sudden interruption of its work can result in a loss of data. As an example, let's consider the following scenario: what happens if your machine crashes when you are working on a document residing on a Linux standard ext2 filesystem?
There are several answers:
  1. If the data had already been written to the disk before the crash, nothing happens and your file is intact.
  2. If the crash came before the new data was written out, you simply lose your latest changes; the old version of the file is still there.
  3. If the crash came while the data was being written, the file may end up as a mix of old and new contents.

In this last scenario things can be even worse if the drive was writing the metadata areas, such as the directory itself. Now instead of one corrupted file, you have one corrupted filesystem and you can lose an entire directory or all the data on an entire disk partition.

The standard Linux filesystem (ext2fs) makes an attempt to prevent and recover from metadata corruption by performing an extensive filesystem analysis (fsck) during bootup. Since ext2fs incorporates redundant copies of critical metadata, it is extremely unlikely for that data to be completely lost. The system figures out where the corrupt metadata is, and then either repairs the damage by copying from the redundant version or simply deletes the file or files whose metadata is affected.

Obviously, the larger the filesystem to check, the longer the check process. On a partition of several gigabytes it may take a great deal of time to check the metadata during bootup.
As Linux begins to take on more complex applications, on larger servers, and with less tolerance for downtime, there is a need for more sophisticated filesystems that do an even better job of protecting data and metadata.

The journalling filesystems available for Linux are the answer to this need.

What is a journalling filesystem?

Only a general introduction to journalling is given here. For more specific and technical notes please see Juan I. Santos Florido's article in Linux Gazette 55. Other information can be obtained from freshmeat.net/articles/view/212/.

Most modern filesystems use journalling techniques borrowed from the database world to improve crash recovery. Disk transactions are written sequentially to an area of disk called the journal or log before being written to their final locations within the filesystem.
Implementations vary in terms of what data is written to the log. Some implementations write only the filesystem metadata, while others record all writes to the journal.

Now, if a crash happens before the journal entry is committed, the original data is still on the disk and you lose only your newest changes. If the crash happens during the actual disk update (i.e. after the journal entry was committed), the journal entry shows what was supposed to have happened. So when the system reboots, it can simply replay the journal entries and complete the update that was interrupted.

In either case, you have valid data and not a trashed partition. And since the recovery time associated with this log-based approach is much shorter, the system is back on line in a few seconds.

It is also important to note that using a journalling filesystem does not entirely eliminate the need for filesystem checking programs (fsck). Hardware and software errors that corrupt random blocks in the filesystem are generally not recoverable with the transaction log.

Available journalling filesystems

In the following part I will consider three journalling filesystems.

The first one is ext3. Developed by Stephen Tweedie, a leading Linux kernel developer, ext3 adds journalling into ext2. It is available in alpha form at ftp.linux.org.uk/pub/linux/sct/fs/jfs/.

Namesys has a journalling filesystem under development called ReiserFS. It is available at www.namesys.com.

On May 1, 2001, SGI released version 1.0 of its XFS filesystem for Linux. You can find it at oss.sgi.com/projects/xfs/.

In this article these three solutions are tested and benchmarked using two different programs.

Installing ext3

For technical notes about ext3 filesystem please refer to Dr. Stephen Tweedie's paper and to his talk.

The ext3 filesystem is directly derived from its ancestor, ext2. It has the valuable characteristic of being fully backward compatible with ext2, since it is just an ext2 filesystem with journalling. The obvious drawback is that ext3 doesn't implement any of the modern filesystem features which increase data manipulation speed and packing.

ext3 comes as a patch against the 2.2.19 kernel, so first of all, get a linux-2.2.19 kernel from ftp.kernel.org or from one of its mirrors. The patch is available at ftp.linux.org.uk/pub/linux/sct/fs/jfs or ftp.kernel.org/pub/linux/kernel/people/sct/ext3 or from one mirror of these sites.
From one of these sites you need to get the following files:
  ext3-0.0.7a.tar.bz2
  e2fsprogs-1.21-WIP-0601.tar.bz2

Copy the linux-2.2.19.tar.bz2 and ext3-0.0.7a.tar.bz2 files to the /usr/src directory and extract them:
mv linux linux-old
tar -Ixvf linux-2.2.19.tar.bz2
tar -Ixvf ext3-0.0.7a.tar.bz2
cd linux
cat ../ext3-0.0.7a/linux-2.2.19.kdb.diff | patch -sp1
cat ../ext3-0.0.7a/linux-2.2.19.ext3.diff | patch -sp1
The first diff is a copy of SGI's kdb kernel debugger patches. The second one is the ext3 filesystem.
Now, configure the kernel, saying YES to "Enable Second extended fs development code" in the filesystem section, and build it.

After the kernel is compiled and installed you should make and install the e2fsprogs:

tar -Ixvf e2fsprogs-1.21-WIP-0601.tar.bz2
cd e2fsprogs-1.21
./configure
make
make check
make install
That's all. The next step is to make an ext3 filesystem in a partition. Reboot with the new kernel. Now you have two options: make a new journalling filesystem or journal an existing one.
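(With current e2fsprogs those two options look like the lines below -- a sketch; whether the 1.21-WIP snapshot used here already supports both switches is something to verify against its own documentation:)

mke2fs -j /dev/xxx      # option 1: create a fresh ext3 filesystem
tune2fs -j /dev/xxx     # option 2: add a journal to an existing ext2 filesystem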
You can mount the ext3 filesystem using the command:
mount -t ext3 /dev/xxx /mount_dir
Since ext3 is basically just ext2 with journalling, a cleanly unmounted ext3 filesystem can be remounted as ext2 without any further commands.
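
For example:
umount /mount_dir
mount -t ext2 /dev/xxx /mount_dir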

Installing XFS

For a technical overview of the XFS filesystem, refer to the SGI Linux XFS page and the SGI publications page.
Also see the FAQ page.

XFS is a journalling filesystem for Linux available from SGI. It is a mature technology, proven on IRIX systems, where it is the default filesystem for all SGI customers. XFS is licensed under the GPL.
XFS Linux 1.0 is released for the Linux 2.4 kernel; I tried the 2.4.2 patch. So the first step is to acquire a linux-2.4.2 kernel from a mirror of kernel.org.
The patches are at oss.sgi.com/projects/xfs/download/Release-1.0/patches; download all the patch files from this directory.

Copy the Linux kernel linux-2.4.2.tar.bz2 into the /usr/src directory, rename the existing linux directory to linux-old, and extract the new kernel:
mv linux linux-old
tar -Ixf linux-2.4.2.tar.bz2
Copy each patch in the top directory of your linux source tree (i.e. /usr/src/linux) and apply them:
zcat patchfile.gz | patch -p1 
Then configure the kernel, enabling the options "XFS filesystem support" (CONFIG_XFS_FS) and "Page Buffer support" (CONFIG_PAGE_BUF) in the filesystem section. Note that you will also need to upgrade some system utilities to the minimum versions listed in the XFS release notes. Install the new kernel and reboot.
Now download the xfsprogs tools. This tarball contains a set of commands for working with the XFS filesystem, such as mkfs.xfs. To build them:
tar -zxf  xfsprogs-1.2.0.src.tar.gz
cd xfsprogs-1.2.0
make configure 
make 
make install
After installing this set of commands you can create a new XFS filesystem with the command:
mkfs -t xfs /dev/xxx
One important option that you may need is "-f" which will force the creation of a new filesystem, if a filesystem already exists on that partition. Again, note that this will destroy all data currently on that partition:
mkfs -t xfs -f /dev/xxx
You can then mount the new filesystem with the command:
mount -t xfs /dev/xxx /mount_dir

Installing ReiserFS

For technical notes about ReiserFS, refer to the Namesys home page and its FAQ page.

ReiserFS has been in the official Linux kernel since 2.4.1-pre4. You always need to get the utils as well (e.g. mkreiserfs to create a ReiserFS on an empty partition, the resizer, etc.).
The up-to-date ReiserFS version is available as a patch against both the 2.2.x and 2.4.x kernels. I tested the patch against the 2.2.19 Linux kernel.

The first step, as usual, is to get a linux-2.2.19.tar.bz2 standard kernel from a mirror of kernel.org. Then get the reiserfs patch for 2.2.19; at the time of writing the latest is 3.5.33.
Please note that if you choose the patch against a 2.4.x kernel, you should also get the utils tarball reiserfsprogs-3.x.0j.tar.gz.
Now unpack the kernel and the patch. Copy the tarballs into /usr/src and move the linux directory to linux-old; then run the commands:

tar -Ixf linux-2.2.19.tar.bz2
bzcat linux-2.2.19-reiserfs-3.5.33-patch.bz2 | patch -p0
Compile the Linux kernel, enabling ReiserFS support in the filesystem section.
Then compile and install the reiserfs utils:
cd /usr/src/linux/fs/reiserfs/utils 
make
make install 
Install the new kernel and reboot. Now you can create a new reiserfs filesystem with the command:
mkreiserfs /dev/xxxx 
and mount it:
mount -t reiserfs /dev/xxx /mount_dir

Filesystems benchmark

For the test I used a Pentium III with 16 Mb of RAM and a 2 Gb hard disk, running Red Hat Linux 6.2.
All the filesystems worked fine for me, so I started a little benchmark analysis to compare their performance. As a first test I simulated a crash by turning off the power, in order to check the journal recovery process. All the filesystems passed this phase successfully, and the machine was back online in a few seconds in each case.

The next step was a benchmark analysis using the bonnie++ program, available at www.coker.com.au/bonnie++. The program tests database-type access to a single file, and it tests creation, reading, and deleting of small files, simulating the usage patterns of programs such as Squid, INN, or Maildir-format software (qmail).
The benchmark command was:

bonnie++ -d/work1 -s10 -r4 -u0
which executes the test using 10 Mb (-s10) in the filesystem mounted on the /work1 directory. So, before launching the benchmark, you must create the filesystem under test on a partition and mount it on /work1. The other flags specify the amount of RAM in Mb (-r4) and the user (-u0, i.e. run as root).
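
If you want to repeat the whole run, a small driver script will do it. This is only a sketch, not the script actually used for this article; it assumes the test partition is /dev/hda3 (the one used in the mongo tests below) and that the mkfs tools built above are installed. ext3 is left out, since creating its journal depends on the alpha release; see the ext3 section.
#!/bin/sh
# Recreate each filesystem on the test partition, mount it on /work1,
# and run the same bonnie++ command, saving one log per filesystem.
for fs in ext2 xfs reiserfs; do
    umount /work1 2>/dev/null
    case $fs in
        ext2)     mke2fs /dev/hda3 ;;
        xfs)      mkfs -t xfs -f /dev/hda3 ;;
        reiserfs) mkreiserfs /dev/hda3 ;;   # may ask for confirmation
    esac
    mount -t $fs /dev/hda3 /work1 || exit 1
    bonnie++ -d/work1 -s10 -r4 -u0 > /tmp/bonnie.$fs 2>&1
done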

The results are shown in the following table.

                 ---------- Sequential Output ----------  ------ Sequential Input ------  Random Seeks
                  Per Char      Block        Rewrite       Per Char       Block
          Size   K/sec %CPU   K/sec %CPU   K/sec %CPU     K/sec %CPU   K/sec %CPU      /sec %CPU
ext2       10M    1471   97   14813   67    1309   14      1506   94    4889   15     309.8   10
ext3       10M    1366   98    2361   38    1824   22      1482   94    4935   14     317.8   10
xfs        10M    1206   94    9512   77    1351   33      1299   98    4779   80     229.1   11
reiserfs   10M    1455   99    4253   31    2340   26      1477   93    5593   26     174.3    5

                 --------- Sequential Create ---------   ---------- Random Create ----------
           Num     Create        Read         Delete       Create        Read         Delete
          Files   /sec %CPU   /sec %CPU    /sec %CPU     /sec %CPU   /sec %CPU    /sec %CPU
ext2        16      94   99    278   99     492   97       95   99    284  100      93   41
ext3        16      89   98    274  100     458   96       93   99    288   99      97   45
xfs         16      92   99    251   96     436   98       91   99    311   99      90   41
reiserfs    16    1307  100   8963  100    1914   99     1245   99   9316  100    1725  100

Two figures are shown for each test: the speed of the filesystem (in K/sec) and the CPU usage (in %). The higher the speed, the better the filesystem; the opposite is true for the CPU usage.
As you can see, reiserFS wins hands down at managing files (the Sequential Create and Random Create sections), beating its opponents by a factor of more than 10. In addition, it is almost as good as the other filesystems in Sequential Output and Sequential Input. There isn't any significant difference among the other filesystems: XFS speed is similar to ext2, and ext3 is, as expected, a little slower than ext2 (it is basically the same filesystem, and it spends some extra time in the journalling calls).

As a last test I took the mongo benchmark program, available from the reiserFS benchmark page at www.namesys.com, and modified it to test the three journalling filesystems: I inserted into the mongo.pl perl script the commands to format and mount the xfs and ext3 filesystems. Then I started the benchmark analysis.
The script formats the partition /dev/xxxx, mounts it and runs a given number of processes during each phase: Create, Copy, Symlinks, Read, Stats, Rename and Delete. The program also calculates fragmentation after the Create and Copy phases:

Fragm = number_of_fragments / number_of_files 
The results are written to the results directory in the files:
log       - raw results
log.tbl   - results for compare program
log_table - results in table form
The tests were executed as in the following example:
mongo.pl ext3 /dev/hda3 /work1 logext3 1
where ext3 must be replaced by reiserfs or xfs to test the other filesystems. The other arguments are the device holding the filesystem under test, the mount directory, the filename where the results are stored, and the number of processes to start.
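
A complete run over the three filesystems is then just a loop:
for fs in ext3 xfs reiserfs; do
    mongo.pl $fs /dev/hda3 /work1 log$fs 1
done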

The following tables show the results of this analysis. The data reported are times in seconds; the lower the value, the better the filesystem. In the first table the median size of the files managed is 100 bytes, in the second it is 1000 bytes, and in the last 10000 bytes.


size=100 bytes; files=68952; dirs=242 (ext3) / 241 (XFS, reiserFS)

           ext3       XFS   reiserFS
Create    90.07    267.86      53.05
Fragm.     1.32      1.02       1.00
Copy     239.02    744.51     126.97
Fragm.     1.32      1.03       1.80
Slinks        0    203.54     105.71
Read     782.75   1543.93     562.53
Stats    108.65    262.25     225.32
Rename    67.26    205.18      70.72
Delete    23.80    389.79      85.51


size=1000 bytes; files=11248 (ext3) / 11616 (XFS, reiserFS); dirs=44 (ext3) / 43 (XFS, reiserFS)

           ext3       XFS   reiserFS
Create    30.68     57.94      36.38
Fragm.     1.38      1.01       1.03
Copy      75.21    149.49      84.02
Fragm.     1.38      1.01       1.43
Slinks    16.68     29.59      19.29
Read     225.74    348.99     409.45
Stats     25.60     46.41      89.23
Rename    16.11     33.57      20.69
Delete     6.04     64.90      18.21


size=10000 bytes; files=2274 (ext3) / 2292 (XFS, reiserFS); dirs=32 (ext3) / 31 (XFS, reiserFS)

           ext3       XFS   reiserFS
Create    27.13     25.99      22.27
Fragm.     1.44      1.02       1.05
Copy      55.27     55.73      43.24
Fragm.     1.44      1.02       1.12
Slinks     1.33      2.51       1.43
Read      40.51     50.20      56.34
Stats      2.34      1.99       3.52
Rename     0.99      1.10       1.25
Delete     3.40      8.99       1.84

From these tables you can see that ext3 is usually faster in Stats, Delete and Rename, while reiserFS wins in Create and Copy. Also note that the performance of reiserFS is better in the first case (small files), as its technical documentation predicts.

Conclusions

At present there are at least two robust and reliable journalling filesystems for Linux (XFS and reiserFS) which can be used without fear.
ext3 is still an alpha release and is prone to failures. I had some problems using bonnie++ on this filesystem: the system reported some VM errors and killed the shell I was using.

Considering the benchmark results, my advice is to install a reiserFS filesystem in the future (I certainly will).

Matteo Dell'Omodarme

I'm a student at the University of Pisa and a Linux user since 1994. Now I'm working on the administration of Linux boxes at the Astronomy section of the Department of Physics, with a special focus on security. My primary email address is .


Copyright © 2001, Matteo Dell'Omodarme.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Compiling and Installing a Linux Kernel

By


Abstract:

Hi everyone. This article provides an extremely detailed, step-by-step process describing how to Compile, Configure and then Install a Customized Red Hat Linux Kernel. As we all know, a Customized Kernel is needed for many reasons; I won't go into the details of those here, but will only show how to Compile, Configure and Install a Custom Kernel. Though the steps mentioned below are primarily targeted at Red Hat Linux users, the same process applies to users of other Linux distributions as well, of course with a few minor modifications as required. (For instance, not all systems use initrd.)

Main:

Please note that I have performed all the steps mentioned below on a computer system with the following configuration: Compaq Presario 4010 Series, 15.5 GB hard disk, 96 MB RAM, 400 MHz Intel Celeron processor, Red Hat Linux 7.0 distribution release. Underlying kernel: 2.2.16-22.

Aim:

Our aim is to have a fully working Customized Kernel once all the steps mentioned below have been completed. For example, I have a Customized Kernel named "2.2.16-22ghosh" running on my system ('cause my name is Subhasish Ghosh; you could have anything else, in fact a couple of them running together!). So, happy hunting and compiling the Linux kernel.

Steps to Compile, Configure and Install a Customized Red Hat Linux Kernel:

The steps to be followed are as follows:

Step 1: Login as "root" and then perform these steps.

Step 2: At the command prompt, type in: "rpm -q kernel-headers kernel-source make dev86"

Step 3: If these RPMs are already installed, then proceed to step 4. Otherwise, mount the Red Hat Linux 7.0 CD-ROM and then perform an "rpm -Uvh" to install these RPMs.

Step 4: If you have a fully working X Window System, then type in "startx" at the command prompt. If you don't have X configured, I would personally suggest you have it done before proceeding, 'cause it is extremely helpful; without X, you would use "make config" or "make menuconfig" at the command prompt instead. In the steps below I assume that you have an X Window System running, so just type in "startx".

Step 5: Then once within the GNOME environment, open the GNOME Terminal and type in: "cd /usr/src/linux" and press enter.

Step 6: Then from within /usr/src/linux, type in "make xconfig".

Step 7: The GUI version of "make config" will come up on the screen. It presents the various options you have for obtaining a Customized Kernel.

Step 8: Now, I would suggest you leave most of the default options just as they are. Don't fiddle around too much, 'cause most of the options are sensitive and require expert handling. Just make sure you make the following changes:

1. Processor Type and Features: Choose the correct processor depending on whether you are working on a Pentium 2, 3, or an Intel Celeron like me. For example, I did the following: Processor Family: PPro/686MX, Maximum Physical Memory: 1 GB, Math Emulation: Yes, MTRR: Yes, SMP: Yes.

2. Open the Filesystems dialog and make the following changes. For example, I did: DOS FAT fs support: Yes (y), MSDOS fs support: Yes (y), UMSDOS: m, VFAT (Windows 95) support: Yes (y), NTFS filesystem support (read-only): Yes (y), NTFS read-write support (DANGEROUS): No (n). After you have made these changes, please make sure you haven't changed the others in the process. All the above-mentioned changes are quite harmless and won't cause any harm to your existing Linux kernel.

3. Save and Exit from the Main dialog.

Step 9: Then, perform a "ls -al" from within the path: /usr/src/linux.

Step 10: I am sure you can see a file called: "Makefile". It is an extremely important file for this entire Compilation process. So, make sure you create a backup of this file, by using: "cp Makefile Makefile.bak"

Step 11: Now, do: (from within /usr/src/linux) "vi Makefile".

Step 12: Go to the EXTRAVERSION line and change it; for example, I changed EXTRAVERSION=-22 to EXTRAVERSION=-22ghosh (the suffix is appended directly to the kernel version, so don't add quotes). You are free to name it any way you wish.

Step 13: Save and exit the file.

Step 14: All the following steps should be done from within: /usr/src/linux. Type in: "make dep clean", and press enter.

Step 15: Then type in: "make bzImage modules". This will take some time; go and have a drink while it compiles all the necessary files. I usually take a nap during this time, 'cause I do all this stuff in the middle of the night.

Step 16: After this step is over, a "bzImage" file will be created in the directory /usr/src/linux/arch/i386/boot. Go to this directory and check whether a file called "bzImage" has been produced. It will appear IF AND ONLY IF all the compilation steps executed correctly and all the options we chose in "make xconfig" are correct. If you can find this file, which I am sure you will, well, you can start enjoying already, 'cause you have won 75% of the battle. If you can't see this file, I am sorry, but you must have made a mistake somewhere; just take a break and carry out all the steps again from the start. I am sure you will succeed.

Step 17: Type in (from within /usr/src/linux): "cp ./arch/i386/boot/bzImage /boot/vmlinuz-2.2.16-22ghosh" and press enter.

Step 18: Then type in: "cp System.map /boot/System.map-2.2.16-22ghosh"

Step 19: Then type in: "make modules_install" and press enter. You would see all the modules being installed in a new customized directory.

Step 20: Then type in: "mkinitrd /boot/initrd-2.2.16-22ghosh.img 2.2.16-22ghosh"
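
For reference, Steps 14 through 20 boil down to the following sequence, run from within /usr/src/linux (replace the "-22ghosh" suffix with whatever you chose in Step 12):
cd /usr/src/linux
make dep clean
make bzImage modules
cp ./arch/i386/boot/bzImage /boot/vmlinuz-2.2.16-22ghosh
cp System.map /boot/System.map-2.2.16-22ghosh
make modules_install
mkinitrd /boot/initrd-2.2.16-22ghosh.img 2.2.16-22ghosh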

Step 21: Then, type in: "vi /etc/lilo.conf" and then add the following entry:

image=/boot/vmlinuz-2.2.16-22ghosh
      label=GhoshKernel
      initrd=/boot/initrd-2.2.16-22ghosh.img
      root=/dev/hdc5
      read-only

Step 22: Save and exit. Please note that you can change the entries in the lilo.conf file as you desire, and root should be the root partition on your own system; on my machine it's /dev/hdc5, so insert the info from your own system.

Step 23: Type in: "/sbin/lilo -v -v"

Step 24: Read all the info on the screen. If there are no errors, well, the job's all done. Congratulations!!!!

Step 25: Reboot the system by typing in: "/sbin/reboot" or "/sbin/shutdown -r now".

Step 26: In the start-up screen, press Tab (or Control-X, if you have the LILO start-up screen), and you can see the entry: "GhoshKernel" along with the other pre-existing entries.

Step 27: Type in: GhoshKernel and press enter. The fully working Customized Kernel will be seen booting on your system. So, you have a fully working Customized Kernel working on your system now.

Conclusion:

After logging in as "root", type in: "uname -r" and press Enter. You will see the following entry on the screen: 2.2.16-22ghosh. That proves that you are running a Customized Kernel, and not 2.2.16-22, the kernel we started out with. That's all. Also remember that you can have as many kernel versions as you like running on a single computer system. In case this doesn't work out, or you guys (and gals) face problems, do e-mail me with questions and suggestions. I would really like to hear from you and help you, and I hope this detailed HowTo helps everyone out there who wants to run a fully working Customized Kernel.

Resources:

There is a lot of info on how to configure and run fully Optimized and Customized Kernels at a number of websites. Make sure you check out http://www.vmlinuz.nu and a few others for HOWTOs and documentation on the Linux kernel.

Subhasish Ghosh

I'm 20 years old and currently living in India. I am a computer-systems engineering student as well as a computer professional. I currently hold 6 Microsoft Certified Professional (MCP) certifications and am also certified on the NT 4.0 track. I have been working with Linux for a long time, especially Red Hat Linux. I am currently preparing for the Red Hat Certified Engineer (RHCE) certification exam and plan to work primarily with the Linux operating system in the future.


Copyright © 2001, Subhasish Ghosh.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


A Linux Client for the Brother Internet Print Protocol

By


The Brother Internet Print Protocol

A recent article Internet Printing--Another Way described a printing protocol which can be used with some Brother printers. It enables users of Windows machines to send a multi-part base-64 encoded print file via email directly to a Brother print server.

The article went on to show how the functionality of the Brother print server can be implemented in a simple Perl program which periodically polls a POP3 server to check for jobs whose parts have all arrived. When such a job is detected, its parts are downloaded in sequence and decoded for printing.

A Linux Client

The Perl program mentioned above has been in use at my place for a few months, and has made it a lot easier for me to print Word and other Microsoft-format documents to a remote printer. But it hasn't made life any easier for those at my place who use Linux workstations.

A brief search on the Brother website failed to reveal a Linux client, so it was decided that I should develop one. The result is described hereunder.

Implementation

Conventional wisdom probably dictates that a program which breaks a binary input stream into chunks and feeds them in sequence into an encoder should be implemented in Perl, or perhaps in C. In fact, the common Bourne shell and its derivatives have all the necessary capabilities when used with a couple of general Unix/Linux tools like 'split' and 'wc'.

Program Walk-Through

As shown in the listing (text version), the program starts by checking that it has been called with two arguments; a usage message is printed if this is not the case. It then defines a function which will be called later to print a header on each part as it is sent. In particular, this function will include an address for notification, a part number, a part count, and a job identifier.

The program body begins by generating an email address for the originator, together with a timestamp. These are concatenated and used to generate a name for a scratch directory. A trap is set to remove any directory having that name in the event of error, and an attempt is made to create the scratch directory.

The Unix/Linux 'split' utility is then used to split the program input into parts whose size is given by the first program argument. Each of these is fed through a base-64 encoder and mailed (with an appropriate preamble) to the address given by the second program argument.

The program ends by removing the scratch directory and returning an exit status.

#!/bin/sh
# BIPclient.sh  Brother Internet Print client program. Breaks incoming stream
#               into parts of designated size, then does base-64 encoding of
#               each part and emails it with appropriate preamble etc. to
#               designated email address.  Graham Jenkins, IBM GSA, June 2001.

[ $# -ne 2 ] && echo "Usage: `basename $0` kb-per-part destination">&2 &&
  echo " e.g.: man a2ps | a2ps -o - | `basename $0` 16 [email protected]">&2&& exit 2

do_header () {                                  # Function to print header
cat <<EOF
START-BROBROBRO-START
BRO-SERVICE=ZYXWVUTSRQ980
BRO-NOTIFY=Always
BRO-REPLY=$Me
BRO-PARTIAL=$Part/$Total
BRO-UID=$Me$Now
STOP-BROBROBRO-STOP

Content-Type: application/octet-stream; name="PrintJob.PRN"
Content-Transfer-Encoding: base64

EOF
}

Me=`whoami`@`hostname`
[ -n "`domainname`" ] && [ "`domainname`" != "(none)" ] && Me=$Me.`domainname`
Now=`date '+%Y%m%d%H%M%S'`                      # Generate email address,
Dir=/tmp/`basename $0`.$Me$Now                  # timestamp and directory name
trap 'rm -rf $Dir;echo Oops>&2;exit 1' 1 2 3 15 # Set cleanup trap

mkdir $Dir                      || exit 1       # Create directory
split -b ${1}k - $Dir/          || exit 1       # Generate parts
Total=`ls $Dir|wc -w |tr -d ' '`|| exit 1       # Count parts

Part=0
for File in `ls $Dir/*` ; do                    # Encode and send parts
  Part=`expr 1 + $Part`
  [ -t 2 ] && echo "Sending part: $Part/"$Total"  to: $2 .. $Now" >&2
  ( do_header
    base64 $File                                # Use mmencode or base64
    echo ) | Mail -s "Brother Internet Print Job" $2 
done

rm -rf $Dir                                     # Cleanup and exit
exit 0

Limitations

In the interests of simplicity, the 'do_header' function shown in the listing leaves out some of the header lines which are generated by the Windows client programs, and uses a dummy value for 'BRO-SERVICE'. In consequence, it may not work satisfactorily with a genuine Brother print server. If any readers have such a device, I would be interested in their feedback.

The 'unique' message identifier can actually be duplicated if a user submits two jobs within the same one-second period; this is a limitation of the Brother identifier format. An alternative identifier format which inserts a process number before the user's email address could be used if required.

And finally, the creation of a scratch directory to hold what is effectively a duplicate of the raw print file may be seen as a problem if the client machine has a limited amount of temporary file-space. The issue here is that we really have to take a copy of the raw print file as it arrives, so that we can generate a "total-parts" figure for inclusion in the header of each mailed component.

It is possible (using Perl or 'dd') to generate and mail parts on the fly, without using any temporary files - provided that the server program is modified slightly so as not to require a "total-parts" figure in the header of each part. I will be happy to send details to anyone who would like to do it this way.
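
As a rough illustration of the idea, the main loop of the listing could be reworked along the following lines. This is only a sketch: it reuses 'do_header', $Me and $Now from the listing, assumes a server modified as just described (the part total is sent as "?"), and note that 'dd' may return short blocks when its input is a pipe rather than a file.

Part=0 ; Total="?"                              # server must ignore Total
while : ; do
  Chunk=`dd bs=${1}k count=1 2>/dev/null|base64`# encode one chunk per pass
  [ -z "$Chunk" ] && break                      # no more input
  Part=`expr 1 + $Part`
  ( do_header
    echo "$Chunk"
    echo ) | Mail -s "Brother Internet Print Job" $2
done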

Graham Jenkins

Graham is a Unix Specialist at IBM Global Services, Australia. He lives in Melbourne and has built and managed many flavors of proprietary and open systems on several hardware platforms.


Copyright © 2001, Graham Jenkins.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


The Opening of the Field: PostgreSQL's Multi-Version Concurrency Control

By


PostgreSQL's Multi-Version Concurrency Control feature frees data tables for simultaneous use by readers and writers.

Introduction

Question of the day: what's the single most annoying thing about most large multi-user databases? As anyone who's worked with one knows, it's waiting. And waiting. Whether the database system is using table-level, page-level, column-level, or row-level locking, the same annoying problem persists: readers (SELECTs) wait for writers (UPDATEs) to finish, and writers (UPDATEs) wait for readers (SELECTs) to finish. If only I could find a database that doesn't require locking. Will there ever be one? Well, the answer is "yes".

PostgreSQL's no-locking feature

For PostgreSQL, "no-locking" is already a reality. Readers never wait for writers, and writers never wait for readers. I can already hear the objections to this "no-locking" claim, so let me explain PostgreSQL's advanced technique, called Multi-Version Concurrency Control (MVCC).

MVCC

In other database systems, locks are the only mechanism used to maintain concurrency control and data consistency. PostgreSQL, however, uses a multi-version model instead of locks. In PostgreSQL, a version is like a snapshot of the data at a distinct point in time. The current version of the data appears whenever users query a table. Naturally, a new version appears if they run the same query again on the table and any data has changed. Such changes happen in a database through UPDATE, INSERT, or DELETE statements.

Example: Row locking vs. MVCC

The essential difference between traditional row-level locking and PostgreSQL's MVCC lies in when users can see the data they selected from a particular table. In traditional row-level locking, users may wait to see the data, whereas PostgreSQL's MVCC ensures that users NEVER wait to see the data. Let's look at the following example to illustrate more clearly.

SELECT headlines FROM news_items

In this example, the statement reads data from a table called news_items and displays all the rows in the column called headlines. In database systems that use row-level locking, the SELECT statement will block, and the user will have to wait, if another user is concurrently inserting (INSERT) or updating (UPDATE) data in the table news_items. The transaction that modifies the data holds a lock on the row(s), and therefore not all rows of the table can be displayed, forcing users to wait until the lock is released. Users who have encountered frequent locks when trying to read data know all too well the frustration this locking scheme can cause.

In contrast, PostgreSQL would allow all users to view the news_items table concurrently, eliminating the need to wait for a lock to be released. This is always the case, even if multiple users are inserting and updating data in the table at the same time. When a user issues the SELECT query, PostgreSQL displays a snapshot - a version, actually - of all the data that users have committed before the query began. Any data updates or inserts that are part of open transactions or that were committed after the query began will not be displayed. Makes a lot of sense, doesn't it?
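
You can watch this happen from a shell with two overlapping psql sessions. This is only a sketch: it assumes a scratch database named test containing the news_items table above, with an id column added for the example.

psql test <<'EOF' &
BEGIN;
UPDATE news_items SET headlines = 'new headline' WHERE id = 1;
-- hold the transaction open so the reader below overlaps it
\! sleep 5
COMMIT;
EOF
sleep 1    # give the writer time to start its transaction
psql test -c "SELECT headlines FROM news_items WHERE id = 1;"
wait

The SELECT returns immediately with the old headline; under row-level locking it would have blocked until the COMMIT.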

A Deeper Look at MVCC

Database systems that use row-level locking do not retain old versions of the data, hence the need for locks to maintain data consistency. A deeper look into how "no-locking" through MVCC works in PostgreSQL reveals how PostgreSQL gets around this limitation. Each row in PostgreSQL has two transaction IDs: a creation transaction ID for the transaction that created the row, and an expiration transaction ID for the transaction that expired the row. When someone performs an UPDATE, PostgreSQL creates a new row and expires the old one. It's the same row, but in different versions. Unlike database systems that don't hold on to the old version, when PostgreSQL creates a new version of a row it also retains the old, expired version. (Note: old versions are retained until a process called VACUUM is run on the database.)

That's how PostgreSQL creates versions of the data, but how does it know which version to display? It bases its display on several criteria. At the start of a query, PostgreSQL records two things: 1) the current transaction ID and 2) all in-process transaction IDs. When someone accesses data, Postgres issues a query to display all the row versions that match the following criteria: the row's creation transaction ID is a committed transaction and is less than the current transaction counter, and the row lacks an expiration transaction ID or its expiration transaction ID was in process at query start.

And this is where MVCC's power resides. It enables PostgreSQL to keep track of transaction IDs to determine the version of the data, and thereby avoid having to issue any locks. It's a very logical and efficient way of handling transactions. New PostgreSQL users are often pleasantly surprised by the performance boost of MVCC over row-level locking, especially in a large multi-user environment.

MVCC also offers another advantage: hot backups. Many other databases require users to shut down the database or lock all tables to get a consistent snapshot - not so with PostgreSQL. MVCC allows PostgreSQL to make a full database backup while the database is live. It simply takes a snapshot of the entire database at a point in time and dumps the output, even while data is being inserted, updated or deleted.
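
For example, a consistent dump of a live database (here a hypothetical one named mydb) is a single command:

pg_dump mydb > mydb-backup.sql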

CONCLUSION

MVCC ensures that readers never wait for writers and writers never wait for readers. It is a logical and efficient version management mechanism that delivers better database performance than traditional row-level locking.

PostgreSQL is available for download at the Great Bridge Web site (www.greatbridge.com/download).

Joseph Mitchell

Joseph is a knowledge engineer for Great Bridge LLC, a company formed to promote, market and provide professional support services for PostgreSQL, the open source database, and other open source business solutions. He can be reached at .


Copyright © 2001, Joseph Mitchell.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Using RPM: The Basics (Part I)

By


This documentation is designed to serve as a brief introduction to the Red Hat Package Management system, or RPM. Part 1 covers installing RPM packages, while part 2 will cover building your own RPM packages for distribution. We will cover what RPM is, why you would want to use it, how it compares to other packaging systems on Linux and UNIX, and how to get it. The bulk of the time will be spent on using RPM to install, check, and remove packages. Neither part will cover the RPM API.

What is RPM?

The Red Hat Package Manager (RPM) is a toolset used to build and manage software packages on UNIX systems. Distributed with the Red Hat Linux distribution and its derivatives, RPM is open source and also works on other UNIX systems. However, finding RPM packages for other forms of UNIX, such as Solaris or IRIX, may prove difficult.

Package management is rather simple in its principles, though it can be tricky in its implementations. Briefly, it means the managed installation of software, management of installed software, and the removal of software packages from a system in a simplified manner. RPM arose out of the need to do this effectively at a time when no other meaningful solution was available.

RPM uses a proprietary file format, unlike some other UNIX software package managers. This can be problematic if you find yourself needing to extract one component from the package and you don't have the RPM utility handy. Luckily, a tool like Alien exists to convert RPMs to other formats, making it possible to get to a file format you can manage using, say, tar or ar.

The naming scheme of RPM files is itself a standardized convention. RPMs have the format (name)-(version)-(build).(platform).rpm. For example, the name cat-2.4-7.i386.rpm would mean an RPM for the utility "cat" version 2.4, build 7 for the x86. When the platform name is replaced by "src", it's a source RPM.

Why Package Management?

At first glance you may say to yourself, "I can manage this myself. It's not that many components ..." In fact, for something as small as, say, cat, which has one executable and one man page, this may be so. But consider, say, KDE, which has a mountain of components, dependencies, and likes to stick them everywhere. Keeping track of it all would be tough, if not impossible.

Package management makes it all easier. By letting a program maintain the information about the binaries, their configuration files, and everything else about them, you can identify which ones are installed, remove them easily or upgrade them readily, as well.

Installation becomes a snap. You select what you want, and ask the system to take care of the dirty work for you: unpacking the program, ensuring that there is space, putting things in the right places, and setting them up for you. It's great; it's like having a valet take care of your car when you go to a restaurant. Dependencies, or additional requirements of a software package, are also managed seamlessly by a good package manager.

Management of installed packages is also greatly facilitated by a good package management system. It keeps a full list of installed software, which is useful for checking whether something is installed. More importantly, it makes upgrading a breeze. Lastly, it makes verification of a software package quite easy to do. By knowing what packages are installed, and what the properties of their components are, you can quickly diagnose a problem and hopefully fix it quickly.

How Does RPM Compare to Other UNIX Package Systems?

I have had the (mis)fortune of working with many UNIX variants, and gaining some experience with their package formats. While I sometimes slag RPMs, in comparison to other UNIX software packaging formats I find it usually comes out on top for my needs. Here's a brief rundown on the pro's and cons of some of the other formats and tools:
 
Format  Platform       Pro                                     Con
inst    IRIX (SGI)     great graphical installer               amazingly slow, frequent reboots afterwards, no network installs (aside from NFS)
sw      HPUX (HP)      (are there any?); supports net installs terribly slow
pkg     BSD (many)     tarballs, net installs                  lack signatures, sums
?       Solaris (Sun)  (are there any?)                        slow, lack signatures, sums
.deb    Debian         just ar's, easy to extract w/o tool     lacks signatures

In brief, my biggest complaint about RPM is the lack of a solid GUI interface to it. While a few exist (like gnorpm and glint), they lack the more complex features that SGI's Software Manager has. Overall, I find that RPM has better conflict handling and resolution than inst does, and is much, much faster. Hence, I'm willing to live without a strong GUI.

My biggest raves for RPM, however, are in speed and package checking, using both package signatures and the sums of the components. As an example, once I had to reboot an SGI just because I reinstalled the default GUI text editor (aka jot). It also took approximately 15 minutes to reinstall that small package, before the reboot.

RPM In a Nutshell

Much like a compressed tarball, RPM uses a combination of rolling together multiple files into one archive and compression of this archive to build the bulk of the RPM package. Furthermore, additional header information is inserted. This includes pre- and post-installation scripts to prepare the system for the new package, as well as information for the database that RPM maintains. Dependencies are checked before any installation occurs, and if the appropriate flags are set for the installation they are also installed.

It is this database that makes RPM work the magic that it does. Stored in there are all of the properties of the installed packages. Should this become corrupted, it can be rebuilt using the rpm tool.
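
Rebuilding the database is a one-liner:

        rpm --rebuilddb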

The Hows of RPM

We will now focus on the three core actions of RPM discussed in this documentation: installation of new packages, management of installed packages, and package removal. We will begin at the beginning, with how to add a package using RPM.

Installation Using RPM

This is the most basic RPM function, and one of the most popular: the installation of new software packages using RPM. To do this, give rpm the -i flag and point it to an RPM:

        rpm -i (package)

This will install the package if all goes well and send you back to a command prompt without any messages. Pretty boring, and worse, if you want to know what happened, you're out of luck. Use the -v flag to turn on some verbosity:

        rpm -iv (package)

All that gets printed out is the package name, but no statistics on the progress or what it did. You can get a hash-marked output of the progress if you use the -h flag. People seem to like using -ivh together to get a "pretty" output:

        rpm -ivh (package)

Again, this doesn't tell you much about what just happened, only that it hopefully did. Hence, I usually crank up the verbosity (-vv) when I install. This lets me see what's going on:

        rpm -ivv (package)

While the output usually scrolls, I can see exactly what happened and if any problems were encountered. Plus I get to see where stuff was installed.

Dependencies are handled pretty wisely by RPM, but this itself depends on a good package builder in the first place. I have seen packages that depend upon themselves, for instance, and some that seem to depend on packages that will break other things. Keep this in mind.

Sometimes RPM will whine about a dependency which is installed but isn't registered. Perhaps you installed it without using an RPM for the package (e.g. OpenSSL). To get around this, you can force RPM to ignore dependencies:

        rpm -ivv --nodeps (package)

Note that this isn't always wise and should only be done when you know what you are getting yourself into. This will rarely break installed stuff, but may mean the installed package won't work properly.

On rare occasion RPM will mess up and insist that you have a package installed when you don't. While this is usually a sign that something is amiss, it can be worked around. Just force the installation:

        rpm -ivv --force (package)

Beware. Just like when you ignored the dependencies, forcing a package to install may not be wise. Bear in mind that your machine could burst into flames or simply stop working. Caveat emptor and all that.

This probably wins the award for one of the coolest features in RPM: network installations. Sometimes, you don't have network clients on a system but you need to install them via RPM. RPM has built-in FTP and web client software to handle this:

        rpm -iv ftp://ftp.redhat.com/path/package.rpm
        rpm -iv http://www.me.com/path/package.rpm

I don't think it can do SSL HTTP connections, though. Debian's packages can also be installed over the network, as can BSD packages, but I don't think most commercial tools can do this.

Managing Your Packages

OK, you know how to install packages. But let's say you want to work with the packages in front of you, whether installed or not. How can you do this? Simply put, you can use the package management features in rpm to look inside packages, installed or not. This can include verifying packages, too.

When you get a new package, you may want to examine it to see what it offers. Using query mode, you can peek inside. To simply query a package and get some generic information about it, just simply:

        rpm -qp (package)

This just prints the name of the package. Pretty boring, isn't it? A much more useful method is to get the package information from the package itself:

        rpm -qip (package)

This will bring up the author, build host and date, whether it's installed yet, and so on, along with a summary of the package's functionality and features.

All of this is nice, but let's say you want to see what is really inside the package: what files does it contain? Well, you can list the contents of a package, much like you would get the table of contents of a tar archive (using tar -tvf):

        rpm -qlp (package)

This will list all of the files within the archive, using their full pathnames. I use this often to see what will be installed with a package, but most importantly where. I like to stick to conventions about putting stuff in their expected places, but some packagers do not. Lastly, to show all of the packages you have installed on your system, use:

        rpm -qa

This will bring up a list of packages installed on the current system. It may be useful to sort them (by piping through sort, rpm -qa | sort). Use these names when uninstalling packages (below).

One of my favorite things about RPM is how it can verify packages. This is useful in detecting a compromised machine, or a binary that may be missing or modified due to some error on your part. To verify one package, just point rpm at it with the -V flag:

        rpm -V (package)

This should bring up a brief description of whether or not the package checks out. To verify all packages installed on a system, it is quite simply:

        rpm -Va

Verify mode brings up several statistics about a file. Their shorthand is as follows:
 

5   MD5 sum
S   File size
L   Symlink
T   Mtime (modification time)
D   Device
U   User
G   Group
M   Mode (includes permissions and file type)

Sometimes they're meaningless, for example if you modify your /etc/inetd.conf file it will show up as a different size and MD5 sum. However, some things shouldn't change, like /bin/login. Hence, rpm -Va can be a useful quick security check, suggesting which things need more peering into.
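
A line of verification output looks something like this (an illustrative example, not output from a real run):

        S.5....T c /etc/inetd.conf

Here the size, MD5 sum and mtime differ from what the database expects, and the 'c' marks a configuration file.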

One of the great things about package management we stated at the outset was the ease with which upgrading can be performed. RPM has two sometimes confusing options to upgrading packages. The first is a simple upgrade:

        rpm -U (package)

What is confusing about it is its resulting action if the package is not yet installed: if rpm finds the package, it is upgraded; if it doesn't find it, the package is installed. This can be scary if you blindly follow an update for a package you never meant to install. Because of this, I suggest using "freshen" on packages you only want to ensure you have the latest version of:

        rpm -F (package)

This will only update installed packages, and not install the package if it is missing.

Upgrades are done in an interesting fashion, too. The new version is first installed and the differences with the old version are noted. Then the older version is removed, but only the unique portions of it so as not to remove the new portions. Imagine if /usr/local/bin/netscape were upgraded and then removed, that would defeat the whole effort!

Uninstalling an RPM

You can install, upgrade, and manage packages, and hence you can definitely uninstall them using RPM. The flag to use is -e, and many of the same conditions for installation apply for removal. To silently uninstall an RPM package, use:

        rpm -e (package)

Note that, unlike for installations and upgrades, package here refers not to package-version.i386.rpm but to package-version. These are the values reported in query mode, so use those. You should be able to get all components of a package for removal by specifying the most generic, common part of the name, e.g. for linuxconf and linuxconf-devel, specify linuxconf (see the example below). Dependencies can also be avoided:

        rpm -e --nodeps (package)

The same caveats apply here as well: you could wind up breaking more stuff than you anticipated. Verbosity flags can also be added, just as for installations.
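
Putting the naming rule above into practice (the version string here is made up):

        rpm -qa | grep linuxconf
        rpm -e linuxconf-1.17r2-6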

Some Notes about RPMs

Sometimes maintainers build rather stupid dependencies into their RPMs. Case in point: libsafe. It has a weird dependency: itself. As such, I usually find I have to install it with --nodeps to get it to install properly. Other times a package will contain extra junk, and you could wind up installing more than you had planned for.

My biggest complaint is RPMs with names that don't suit the function of their contents. While this can be worked around by digging with the query tools described above, it's more time than I care to waste. Name your RPMs well, I suggest.

Getting RPM

You can get RPM for any Linux or UNIX variant, as it is Open Source. RPM comes native in Red Hat Linux and some derivatives. Versions 3.0 and above are recommended for compatibility; some stupid stuff went on before that which 3.0 hopes to fix. Version 4.0 reportedly has a different database format, so I recommend checking around for how to deal with this issue before you upgrade to 4.0. I'm not sure if you can simply rebuild the database in 4.0 to remedy this.

RPM is normally distributed as an RPM of itself. Cute, eh? Luckily, it also comes as a gzipped tarball and in source form. I have RPM installed on Slackware, for example, and could install it on IRIX or Solaris if I so desired. It's nearly useless on non-Linux platforms, though, as packages are rarely built in RPM format for other UNIX variants.

Coming Next Time

In the upcoming second half of this documentation we will take a look at how to build an RPM yourself. We'll look at spec files, the layout in /usr/src, and the build flags. It's pretty easy once you learn a few basics.

Resources

The best place to start is the online website for RPM, http://www.rpm.org/. There you will find the book 'Maximum RPM', which covers how to use, build, and even programming with the RPM API. The RPM help page (rpm -h) is also quite useful once you learn a few basics. To find RPM archives, check http://www.rpmfind.net/, which maintains a great searchable database of packages for various distributions and versions on a variety of platforms. Very useful.

Jose Nazario

José is a Ph.D. student in the department of biochemistry at Case Western Reserve University in Cleveland, OH. He has been using UNIX for nearly ten years, and Linux since kernels 1.2.


Copyright © 2001, Jose Nazario.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Choosing Good Passwords

By


Right now I'm running Crack on some people's passwords, and I'm doing a lot of thinking about passwords and how to generate good ones: specifically, what sorts of things I can do to get better ones. I was recently asked by a friend for ideas on passwords, and about sharing them at our local event "LinuxDay". So I'll take some time now and discuss passwords with you. Passwords provide our most major defense against unauthorized use of our systems, so let's keep them good, even in the presence of crypto, firewalls, and rabid dogs.

OK, so this is how I generate passwords for myself: I reach over, grab the nearest issue of "Nature", open it up to a genetics article and point to some gene construct name and use that. No lie. It's how I choose good passwords. Random, complex, easy to generate. Granted, a lot of dictionaries have now included gene names, but these are construct names, which differ from gene names. So, instead of something like "Brc1" it's more like "pRSET5a::20STa::6xHis". You can shove the latter into any cracking program and it will not fall out quickly, I can almost guarantee it.
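
(If there's no journal within reach, the kernel's random pool will do a similar job. This one-liner is just my own quick hack, nothing official; it pulls random bytes from /dev/urandom and keeps the first ten printable characters:)

head -c 200 /dev/urandom | tr -dc 'A-Za-z0-9%^+=' | cut -c1-10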

The trick is this: users dislike complex passwords. They're difficult to remember, they'll cry. And they're right. To overcome that, they'll either write it down on some post-it note on their monitor or change it to something silly, like "LucyDoll".

Most importantly, a password should roll off the fingers. It should be typed quickly and efficiently, and of course correctly. For that matter, I sometimes will type it ten times quickly to get the rhythm of it, and that works.

Quickly, a diversion to the Crack 5.0a documentation; this is ripped from the appendix. It deals with password security and security in general, and contains some good wisdom:

At the bottom line, the password "fred" is just as secure (random) as the password "blurflpox"; the only problem is that "fred" is in a more easily searched part of the "password space". Both of these passwords are more easily found than "Dxk&2+15^N" however. Now you must ask yourself if you can cope with remembering "Dxk&2+15^N".

OK, great, we've chosen a good password... oh crap. We have about ten accounts, some on BBS's, some on systems we can't ssh to, and some are the root passwords on systems we administer for major businesses. Or we have to rotate them often. How do we keep track of them all?

Myself, I do two things: I keep levels of passwords, and I cycle them. I have a handful of passwords for disposable things. Yeah, if someone grabs a password of mine I use on a BBS and posts some flamebait, I will get some flack. But honestly, I doubt anyone will do that; it's the systems I care about and administer that I really protect. Hence, I cycle passwords there, using very strong passwords that never go out on the wire without some strong crypto behind them (i.e. secure shell). A new password is chosen (randomly), and the old ones are bumped down the chain to less demanding positions, systems and accounts. I use the tricks I outlined above, and it has paid off. Sometimes I forget one, and that's always a scary moment, but it usually lasts no more than a minute or two.

Keeping track of multiple passwords is easily handled using Password Safe from Counterpane Systems, but that only works on Windows systems. I once started writing an implementation for Linux, but given my poor programming skills and heavy load of other things, I doubt it will ever see the light of day (it's sitting idle now, if anyone wants to know). I do, however, often recommend this program to people with lots of passwords to remember. Other similar applications exist for the Palm Pilot and other PDAs, which protect a bank of passwords with one password. Since most Linux geeks I know also have PDAs, this would be a handy solution.

For some real fun, though, check out FIPS 181 (1), a scheme the government uses to generate passwords based on pronounceable sounds. It's pretty cool, and I started playing with it (via Gpw, a related tool (2)). And check out how Crack (3) works; it's chez pimp. For comparison's sake, find out how L0phtCrack (4) works, and you'll snicker at NT's security. If you're feeling particularly brave and have some computing power to burn, consider brute forcing passwords (6), which is an interesting problem in dictionary generation and optimization of routines.

Notes and Links:

1. FIPS 181 is Federal Information processing Standard 181. The document can be found (with source for DOS) at http://www.itl.nist.gov/fipspubs/fip181.htm. A companion FIPS document, FIPS 112, discusses the usage and considerations of passwords.

2. Gpw is a UNIX utility in C/C++ (and Java, too) to generate pronounceable passwords. Handy and fun. http://www.multicians.org/thvv/gpw.html . An additional one can be found on http://freshmeat.net/projects/apgd/.

3. Crack 5.0a source can be found at http://www.users.dircon.co.uk/~crypto/. It can also be found at http://packetstorm.securify.com/Crackers/crack/

4. L0phtcrack... how I love thee. http://www.l0pht.com/l0phtcrack/ . Mudge tells us how L0phtcrack works at this realaudio presentation from Beyond Hope, 1997, NYC (1 hour) http://www.2600.com/offthehook/rafiles/l0pht.ram (Note that since this piece was originally written, L0phtcrack version 3 has been released. Several people have noted a dramatic drop in the efficiency of cracking passwords, though new extraction tools have been incorporated into the code. Many people I know who use L0phtcrack use version 2.52 for cracking after extractions with version 3.)

5. John the Ripper is another useful password cracking utility. Several modules for cracking S/Key and MD5 passwords have been introduced lately. http://www.openwall.com/john/.

6. This is a great description of brute forcing passwords and some of the techniques involved... I may have to try it! The ambitious amateur vs. crypt(3)

Jose Nazario

José is a Ph.D. student in the department of biochemistry at Case Western Reserve University in Cleveland, OH. He has been using UNIX for nearly ten years, and Linux since kernels 1.2.


Copyright © 2001, Jose Nazario.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Qubism

By


[Five Qubism cartoons appeared here: qb-stats, qb-mram, qb-kiddies, qb-getetch and qb-emotionchip.]

Jon "SirFlakey" Harsem

Jon is the creator of the Qubism cartoon strip and current Editor-in-Chief of the CORE News Site. Somewhere along the early stages of his life he picked up a pencil and started drawing on the wallpaper. Now his cartoons appear 5 days a week on-line, go figure. He confesses to owning a Mac but swears it is for "personal use".


Copyright © 2001, Jon "Sir Flakey" Harsem.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


Making an X Terminal from a PC

By


An X terminal is a great way to expand the computing presence in a home or office. (We're talking about a PC that functions as an X terminal, not NEC's dedicated X-Terminal device.) They're fast, cool, a great demonstration of a Unix system's power, and, most importantly, dirt cheap. The following chronicles my experience in creating an X terminal from old PC hardware and connecting it to my Debian GNU/Linux system.

My server is a Pogo Altura with a one gigahertz AMD Athlon processor. Pogo is a great company to go to if you don't want to piece together your own system or pay for a Windows license with a prebuilt system from somewhere else. I run Debian on it, so the X terminal will use that also. That's enough background information for now.

Root filesystem setup

The X terminal will boot off a custom kernel on a floppy and then get its root filesystem over NFS from the server. The first step, then, is to create this root filesystem. You could copy over file after file by hand from a currently running system, or you could take the shorter approach I did and just use the base Debian system. All you have to do is download the base system tarball, which can be found on the Debian webserver as the file base2_2.tgz. I downloaded that and did a "tar -xvzf base2_2.tgz" in /usr/xterminal, and seconds later I had a fully functional root filesystem in that directory. Anyone can use the Debian base system regardless of their server's flavor of Linux.

The next step is to configure this root filesystem. I did it by becoming root and running "chroot /usr/xterminal bash --login". Now that I was "inside" the root filesystem, I duplicated my actual /etc/resolv.conf in the new root filesystem so I could use domain names during the configuration. Next we should install X, and, as any Debian user knows, the best way to do this is with apt-get. I ran "apt-get update" then "apt-get install xserver-s3 xfonts-100dpi xfonts-base", but you'll change xserver-s3 to something different if you don't have an S3 card in your X terminal (check for something suitable here). This downloads and installs the necessary components of X. apt-get should ask you a few questions and make an XF86Config file for you; if it doesn't, or if you need to tweak the file it generates, refer to the XFree86 HOWTO. The root filesystem is almost complete. I created an /etc/fstab file for the X terminal root that looks like this:

10.0.0.1:/usr/xterminal / nfs defaults 0 0
proc /proc proc defaults 0 0
    

Of course, you'll change 10.0.0.1:/usr/xterminal to match your server IP and location of the NFS root directory. Since I have no user accounts created in this NFS root filesystem I decided to start X from init. This necessitated the following amendment to my /etc/inittab file (yours will probably have a different IP address at the end):

X:123456:respawn:/usr/bin/X11/X -query 10.0.0.1
    

I'm not sure if this is perfectly correct, but it works. Finally remove the /sbin/unconfigured.sh shell script so it doesn't whine when you try to boot from your now complete root filesystem.

Building the boot floppy

Next let's make the kernel. Refer to the Kernel HOWTO if you aren't familiar with the kernel compile process. I tried making a boot-disk with Kernel 2.4.5 but it seems as if bootp is broken in it, so I chose Kernel 2.4.4 instead. Go through the regular routine (make xconfig, make dep, make bzImage) but be sure to select the following options to be compiled into the kernel (not as modules):

  • IP:BOOTP support under IP:kernel level autoconfiguration in Networking options
  • NFS file system support and Root file system on NFS under Network File Systems in File systems (this must be selected after BOOTP support since Root on NFS will not exist until BOOTP is selected)
  • Drivers for your NIC under Ethernet (10 or 100Mbit) in Network device support

Build your kernel and copy it to a floppy with "dd if=arch/i386/boot/bzImage of=/dev/fd0" (as root), or a similar command adjusted for your kernel location and floppy drive. Since this is just a raw kernel on a floppy, we need to tell it to look for its root filesystem over NFS. Create a /dev/boot255 device by typing (still as root) "mknod /dev/boot255 c 0 255". Now make the floppy look for its root filesystem over NFS by running (as root, of course) "rdev /dev/fd0 /dev/boot255". You can then "rm /dev/boot255" since it has no further use. Set your boot floppy aside until you get your X terminal hardware.
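Condensed, the whole kernel-to-floppy dance looks like this, run as root (the 2.4.4 kernel source is assumed to be unpacked in /usr/src/linux):

cd /usr/src/linux
# Configure, selecting the options listed above, then build.
make xconfig
make dep
make bzImage
# Raw-write the kernel image straight to the floppy.
dd if=arch/i386/boot/bzImage of=/dev/fd0
# Device 0,255 is the magic value meaning "root filesystem on NFS".
mknod /dev/boot255 c 0 255
rdev /dev/fd0 /dev/boot255
# The temporary device node has served its purpose.
rm /dev/boot255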

Server setup

We aren't quite ready to move on to hardware, though; now it's time for the tricky part, server configuration. I did this on Debian (it is so choice), so your setup for this part may differ slightly. First I installed and configured an NFS server ("apt-get install nfs-user-server") so the X terminal could get to its root filesystem. The configuration consists of editing the /etc/exports file. Yours should contain a line that looks like this one (the no_root_squash is important):

/usr/xterminal 10.0.0.4/255.255.255.0(rw,no_root_squash)
    

The netmask (/255.255.255.0) is included so I can add more clients on my local network without listing each one independently, but just the IP address works if that's all you want. Next I installed a BOOTP server with "apt-get install bootp". To configure it I added the following line to my /etc/bootptab file:

xterm1:vm=auto:ip=10.0.0.4:ht=ethernet:ha=00a0240d5f52:rp=/usr/xterminal
    

You'll probably want to change the IP, the hardware address of your NIC (it's printed on the card somewhere; if you can't find it there, it will be in view when the boot floppy's kernel stops to look for BOOTP servers), and the path to the root filesystem. You'll also need to add a line like the following to your /etc/inetd.conf file:

bootps          dgram   udp     wait    root    /usr/sbin/bootpd        bootpd -i -t 120
    

Then you'll need to have inetd reparse its configuration file by running "killall -HUP inetd" as root. One more thing to set up: XDM. Again, Debian makes this ridiculously easy ("apt-get install xdm"). There are a few configuration files to worry about with XDM, though. For me these are found under /etc/X11/xdm, although yours may be somewhere else. I added the line "10.0.0.4:0 foreign" (you will probably have a different X terminal IP) to my Xservers file and commented out the ":0 local /usr/X11R6/bin/X -dpi 100 -nolisten tcp" line so I didn't have to log in through XDM on the server. To Xaccess I appended the line "10.0.0.4" so my X terminal could connect to xdm. And finally, in xdm-config I commented out the line that said "DisplayManager.requestPort: 0" so that xdm would not deny all connection attempts (xdm-config is an X resources file, so the comment character to put in front is "!"). The server is now set up.
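To recap, here are the three XDM edits in one place, as a sketch with Debian's /etc/X11/xdm paths assumed (Xservers and Xaccess take "#" comments; xdm-config takes "!"):

# In Xservers: disable the local server, add the foreign X terminal.
#:0 local /usr/X11R6/bin/X -dpi 100 -nolisten tcp
10.0.0.4:0 foreign

# In Xaccess: allow the X terminal to connect to xdm.
10.0.0.4

! In xdm-config: comment out the line that refuses all XDMCP requests.
! DisplayManager.requestPort: 0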

Putting it all together

Now for the interesting part: finding an old computer. I did not have an old box lying around, so I went shopping and found one at a local used computer store. No guarantees came with my newfound 486/66, but it was only thirty bucks. By the way, this is probably the top end of X terminal hardware; anything more in the processor department would be overkill. It came with an S3 Trio64 onboard (it could only do 800x600x16, so I've replaced it with a generic Trident Blade3D-based PCI video card running under the SVGA X server), and I had some RAM (32 megabytes) sitting around that I shoved in it. Another note on hardware overkill: 32 megabytes is way more RAM than an X terminal needs. Eight or sixteen would probably be fine, since all it has to do is run X and the kernel. I took out everything but the floppy drive and the NIC (3Com EtherLink III), so it runs silently except for the gentle hum of the power supply fan. I plugged in the monitor, ethernet patch cable, keyboard, and mouse, put in my boot floppy, and turned it on to see (after a short boot delay) a nice XFree86 logo and a login box in X. With a cheap fifteen-inch monitor these high-quality X terminals could probably be made for $150-$200 apiece.

The X terminal works great; it's just like sitting in X on my server. If you have problems setting this up, check the relevant HOWTOs listed above or ask your local Linux guru. These instructions should give you nice, fast, and, best of all, cheap X terminals to put around your home or office.

Patrick Swieskowski

Patrick will be a senior this fall at Roosevelt High School and Central Academy in Des Moines, Iowa. He discovered Linux in eighth grade and has been using it ever since.


Copyright © 2001, Patrick Swieskowski.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001

"Linux Gazette...making Linux just a little more fun!"


The Back Page


Corrections


Paul Evans' Secret Tools article in June's Linux Gazette was revised June 7. The originally-published version was not the author's final draft. If you read the article at the beginning of June, you may want to read it again to see what's been added.


Not The Answer Gang


Routers vs. hubs --or-- folksongs and filksongs

Answered By Iron, Ben Okopnik and Chris Gianakopoulos

(!) [Iron] In any case, it's *certainly* a lot more topical than Ben and I chatting away about folk songs!

(!) [Heather] At this point, there are folk songs about free software, but no querent has asked about 'em yet.

(!) [Chris] All right, let's hear those folk songs about free software. What instruments do you use? Are teletypes used instead of drums? Are radios placed near computers for the melody? (You know -- the tones from the speaker.) And, of course, who sings those songs? I'll be yet another curious querent.

[Astute readers will remember that Heather published her own autobiographical filksong, The Programmer's Daughter, in June's Answer Gang. -Iron.]

(?) A/C

Answered By Iron

(?) i have problem in logging to my A/C. What should i do?

(!) [Iron] You have a problem logging into your air conditioner? Or is your alternating-current generator going haywire?


(?) Did Microsoft invent Linux?

Answered By Iron

(?) Microsoft has endorsed Linux in the past. If you will remember, Linus Tovalds developed Linux on a Microsoft research grant using kernel code developed in Redmond by Microsoft's famous "Blue Team". At the time, no one at Microsoft thought the code was going anywhere so they let Linus have it and, of course, the rest is history. SO, anytime you fire up Linux, give a hearty thanks to the people who made it possible - Bill Gates and his buddies in Redmond.
(!) [Iron] Linus has publicly stated several times that he wrote Linux as an exercise to learn how to program his 386 CPU. The book _Just for Fun_ is a biography of him and includes the details. The initial code all came out of his head. It was partially a reverse-engineering of Minix, combined with adding whatever features were necessary to get Unix applications like GNU CC to run. If anything, the code may have gone the other way. There are allegations that Microsoft has borrowed code from Linux. I won't believe it until there's proof (or at least a reasonable specificity of what exactly they might have gotten--proof is difficult when you don't have the source), but if it's true, it would violate the GPL, which forbids GPL'd code from being used in a closed-source project.

Well I got this statement from the following link: http://www.zdnet.com/tlkbck/comment/465/0,10732,113210-844545,00.html

(!) [Iron] It's either a joke or the guy doesn't know what he's talking about.


World of Spam


Linux Gazette spam count

I decided to take a spam count this month. The last time I did this, in 1999, the results remained pretty stable from month to month: 28-29% of the mail LG received was spam. I was shocked to discover the balance has flipped: spam is now the clear majority.

This month's results are:

Legitimate mail    226    41%
Spam               324    59%
Total              550   100%

"Legitimate mail" means all mail sent personally to LG or to The Answer Gang, even those that were sincere but off-topic, as well as press releases for News Bytes and off-topic press releases by companies that normally send stuff to News Bytes. "Spam" includes only bulk solicitations.

And now, on with the spam! (Said to the tune of the Red Queen shouting, "Off with her head!")


We wonder if you have any interest in importing tracksuits, windbreakers, shorts, jogging suits and other sports garments from us.


WE'RE LETTING A MAJOR SECRET OUT OF THE BAG TODAY!

The top Legal Consultant in this industry has just developed a literal cash machine. Now, people in our activity are receiving money from all over the place. $500 into $4,000 virtually overnight. And that's just for starters. Everything is done for you automatically!


[I got a letter in a totally unfamiliar language. Thinking it might be Turkish, I sent it to my friend Gorkem. -Iron.]

Hey Mike,
It is indeed Turkish. In case you want to publish it, I am enclosing a translation :-) It is of course pure spam.
Gorkem

Subject: Re: Cinsel problemlerinize son verebilirsiniz.

Cinsel Problemlere Kalici ve Kesin Cozumler
Decisive and Permanent Solutions to your Sexual Problems
Ereksiyon sorunlari, iktidarsizlik, performans arttirma...
Erectile problems, impotency, increasing performance...
Erken bosalma sorunlari ve penis buyutme teknikleri
Premature ejaculation problems and penis enlargement techniques
(ameliyatsiz/vakumsuz)...
(no operations / no vacuum pumps)
Bayanlarda isteksizlik, orgazm sorunlari ve firijitlik...
Low sexual desire, orgasm problems and frigidity in women
Turk toplumunun yabanci oldugu cinsellik bilgileri ve seks teknikleri...
Sexuality information that is foreign to the Turkish society and sexual techniques

Mutluluklar dilerim.
The wishes of happiness.
Saygilarimla,
Respectfully
Dr. XXXX XXXX


We have reviewed several online credit card programs and feel that the XXXXX MasterCard is one of the best for anyone who needs a credit card. With the XXXXXX program you are ABSOLUTELY GUARANTEED APPROVAL ONLINE. No credit checks and no credit turndowns.

This is a personal invitation for you to be a part of our private association. We are a group of successful entrepreneurs with backgrounds specializing in helping others achieve financial success.


Local governments in Florida are considering tougher penalties for traffic violators who run red lights.

Should the fines be doubled?

To tell Florida officials what you think, click the following link.


Le nuove estensioni .biz e .info sono state approvate dall ICANN, l'authority che regola il sistema dei nomi di dominio. Entro il 2001 inoltre sarà disponibile anche l' estensione .eu, richiedi il dominio che vuoi adesso, prima che diventi disponibile per la registrazione, con il nostro modulo di richiesta. L'inizio della registrazione ufficiale è previsto per il 17 di agosto nel caso dei domini .info, e per il 31 di ottobre per quelli .biz. Tuttavia già fin d'ora puoi pre-registrare il Tuo nome di dominio, in questo modo la tua richiesta sarà inviata agli enti registranti prima delle altre e avrai cosi più possibilità di ottenere il tuo nome.

[Translation: The new .biz and .info extensions have been approved by ICANN, the authority that regulates the domain name system. The .eu extension will also become available during 2001; request the domain you want now, before it opens for registration, using our request form. Official registration is expected to begin on August 17 for .info domains and October 31 for .biz. However, you can pre-register your domain name right now; that way your request will be sent to the registries ahead of the others, giving you a better chance of getting your name.]


I visited ssc and I'm sure that you could get much more money only by including your own TELEPHON LIVE CHAT inside your site. Upgrade your site with CONTACTEL system FOR FREE and get money from more than 20 countries for the traffic received on your TELEPHON PARTY-LINE.


This E-Mail does NOT contain any hype, just facts.


Happy Linuxing!

Mike ("Iron") Orr
Editor, Linux Gazette,


Copyright © 2001, the Editors of Linux Gazette.
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 68 of Linux Gazette, July 2001