|
Table of Contents:
|
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag! Write the Gazette at
Contents: |
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to [email protected]. Answers that are copied to LG will be printed in the next issue -- in the Tips column if simple, The Answer Gang if more complex and detailed.
This section was edited by Heather Stern <>, a.k.a. "Starshine".
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there. The AnswerGuy "past answers index" may also be helpful (if a bit dusty).
http://e-zine.nluug.nl/hold.html?cid=59
I've been reading a bit about this Rock Linux distro. Has anybody used it? It's worth some coverage in LG just because it's so different.
Some time ago I posted a question asking about free Linux ISPs. Lots and lots of folks were kind enough to write back telling me about FreeWWWeb, a free OS-independent ISP here in the continental USA (THANKS!).
Well, FreeWWWeb just went belly up and is trying to transition its customers to Juno.
That leaves Linux users up a creek since Juno (like all other free ISP software that I'm aware of) is Windows only.
So, once again I ask the question:
"Anyone know of any free LINUX-FRIENDLY ISPs in the continental USA?" A wide variety of access numbers would be ideal since I do travel with my Linux laptop, but beggars can't be choosers, as they say. Thanks, all.
I have Debian 2.0 and can't connect to my ISP, though I can connect in Win95. Below is the output of plog:
Jun 30 11:47:50 debian pppd[223]: Serial connection established.
Jun 30 11:47:51 debian pppd[223]: Using interface ppp0
Jun 30 11:47:51 debian pppd[223]: Connect: ppp0 <--> /dev/ttyS2
Jun 30 11:47:51 debian pppd[223]: sent [LCP ConfReq id=0x1 <asyncmap 0x0> <magic 0x7bc2> <pcomp> <accomp>]
Jun 30 11:48:18 debian last message repeated 9 times
Jun 30 11:48:21 debian pppd[223]: LCP: timeout sending Config-Requests
Jun 30 11:48:21 debian pppd[223]: Connection terminated.
Jun 30 11:48:21 debian pppd[223]: Receive serial link is not 8-bit clean:
Jun 30 11:48:21 debian pppd[223]: Problem: all had bit 7 set to 0
Jun 30 11:48:30 debian pppd[223]: Terminating on signal 15.
[ This would make a great article. There are a fair handful of PPP-connection apps like wvdial and kppp, but no clear documentation or wizard of the form "The Windows user would do ... you need to ask your ISP for 'blah blah blah' ... fill them into <whatever files or dialogs>" -- especially one that would help the user successfully run the "you're using Linux? we can't help you then" gauntlet at their ISP. (My advice for those: vote with your wallet.) Probably the tech support of every major distro gets asked this sort of thing a dozen or more times a day.
It appears that the ISP-Hookup-HOWTO hasn't been updated since early 1998. The Linux world has changed a lot in the last two years; perhaps someone should become its new maintainer. -- Ed.]
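[ For readers fighting the same battle, a minimal wvdial configuration sketch (every value here is a placeholder for your own ISP's details):

```text
; /etc/wvdial.conf -- minimal dial-up sketch
[Dialer Defaults]
Modem = /dev/ttyS2          ; the serial port the modem is on
Baud = 115200
Init1 = ATZ                 ; reset the modem
Phone = 555-0100            ; the ISP's access number
Username = yourlogin
Password = yourpassword
Stupid Mode = 1             ; skip prompt-parsing and start PPP at once
```

Then run "wvdial" as root and watch the negotiation messages scroll by. -- Ed.]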
Dear Sir,
I have an Asus AGP TNT34000 display card with TV out/in support. It works absolutely fine in Win9x. It also works fine in X. But I am unable to get TV-out support in Linux.
Finally I changed the display card to a SiS 6326 (which has a direct-output-to-TV feature). I am able to get the command-line output on the TV. But when I try to run X, I am unable to see anything. There may be some errors in my XF86Config file regarding PAL/NTSC resolution and frequency settings. Can anyone point me to some web sites (or documentation) for TV-out support or software?
Can you suggest some other cards which can give TV-out support? I am using a 21" Color AIWA TV.
I would like to see some articles on TV-in/out (video in/out) support.
Regards
Santosh Kumar Pasi
Hi there, my name is Sergey, and I use Slackware 7.0. I have read a document about a serial connection between Win95 and Linux, but I have never seen one about parallel. Help me -- tell me how I can use my parallel NULL modem. Is lp0 the LPT1 port?
Thanks,
Sergey
[ There's a PLIP-install-HOWTO but not one for connecting to Windows, nor even between two Linux boxes that are in a normal installed state. Anyone got something for him?
Oh, and it can't be a null modem, modems are serial. A crossover parallel cord is sometimes called a "Laplink" or "Commander Link" cable, after those popular apps for a certain non-Linux platform. -- Ed.]
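[ For the Linux end of things, a rough PLIP sketch (interface names and addresses are assumptions; see the PLIP mini-HOWTO for the cable wiring and for the Windows "Direct Cable Connection" side):

```text
# On machine A (assumes PLIP is compiled in or available as a module):
insmod plip
ifconfig plip0 192.168.3.1 pointopoint 192.168.3.2 up
route add 192.168.3.2 dev plip0

# On machine B, mirror the addresses:
ifconfig plip0 192.168.3.2 pointopoint 192.168.3.1 up
route add 192.168.3.1 dev plip0

# Then test from A with:  ping 192.168.3.2
```

-- Ed.]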
Requesting the status of a DAT tape drive's 'tape to be cleaned' condition, e.g. via the mt (magnetic tape) command.
Is it somehow possible to detect the state 'cleaning required' remotely? The only sign I know of is the blinking LED on the front of the DAT drive. But we have distributed servers, and I'm not happy with the idea of asking the users 'is there some kind of LED blinking in a strange way?' while they are standing in front of machines full of blinking LEDs and beeping computer speakers. I am sure THEIR answer will never be the answer to MY question.
Thanks
H.Ahrens
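[ A starting point (a sketch -- whether the cleaning-required condition is reported at all depends on the drive and driver) is to query the drive status remotely, say over rsh or ssh from a cron job, rather than eyeballing LEDs:

```text
# /dev/nst0 is an assumption; use whichever non-rewinding tape device
# your servers actually have.  The "General status bits" line of the
# output is where drive/driver-specific condition flags show up.
mt -f /dev/nst0 status
```

If the drives are SCSI, vendor utilities may expose more than the generic mt status word does. -- Ed.]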
Dear answerguys.
My name is Wayne. I work at an ISP in South Africa. The network administrator here told me that it is possible to use a short-distance CB two-way radio to transfer data between two machines. Could you please be so kind as to mail me a way of connecting one of these things to my machine?
Thank you very much
Hi Answerguy-
Can you tell me what the PASSWORD page in SWAT does? I have some familiarity with Samba, and I have been to the samba.org site, but I still need help with this.
-Doug
Hi there.
I have set up a Linux firewall, but one of the things I would like to do with it (which I have so far been unable to accomplish) is use port forwarding to access a web server situated behind the firewall from a client on the public Internet.
I am using ipchains as my firewall mechanism, and have used redir and ipmasqadm with no luck.
I am convinced that the problem lies somewhere in my ipchains scripting, but I am not sure what to look for. I have looked through various howto's and webpages and am stuck.
What I was wondering is if you had a basic ipchains script that works in conjunction with ipmasqadm to forward port 80 through to a web server behind a firewall.
Any help that you can offer would be great!
Darren Hutchings
[ In this case there are almost too many HOWTOs ... which are difficult to read for a complete novice to networking. Numerous tools exist, but they expect you to know what logic you want to apply, rather than what results you would like. If one of you gentle readers knows about a useful "wizard"-style tool for making sensible ipchains rules, or a similar one for the up-and-coming netfilter, reviewing it for us would make an excellent article. -- Ed.]
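[ In the meantime, a minimal sketch of the usual ipchains + ipmasqadm recipe (all addresses are placeholders: $EXTIP stands for the firewall's public address and 192.168.1.10 for an assumed internal web server):

```text
# Enable forwarding and masquerade the internal net (192.168.1.0/24 assumed):
echo 1 > /proc/sys/net/ipv4/ip_forward
ipchains -A forward -s 192.168.1.0/24 -j MASQ

# Accept incoming web traffic on the input chain:
ipchains -A input -p tcp -d $EXTIP 80 -j ACCEPT

# Redirect port 80 arriving at the public address to the internal server:
ipmasqadm portfw -a -P tcp -L $EXTIP 80 -R 192.168.1.10 80
```

This requires kernel masquerading and the port-forwarding option compiled in. -- Ed.]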
James,
I would like to create a Linux distribution from scratch and integrate with it an installer program using my native language. I was wondering if you could assist me in finding installer programs for existing Linux distros ( GUI, cli or curses-based, and whatever language -- shell script, Python, GTK, etc). BTW, are the installer programs of Mandrake, or Red Hat like Anaconda or their curses/text-based programs/scripts GPL'ed too?
Thanks,
Botong San Beda
One thing that is really great about Sun hardware is that you can get rid of the monitor, mouse, and keyboard altogether and do everything, from installing the operating system to changing EEPROM settings, via a serial console. While Intel hardware was never designed this way, I cannot find much information about setting up Linux on Intel to approximate this. Is it possible to install and boot Linux over a serial console? Logging in this way is easy, but to be able to completely administer a system, the install and boot functions are critical; access to the LILO prompt especially would be nice.
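[ It can be done, at least for boot and administration; a sketch of the LILO side (ttyS0 at 9600 baud is an assumption -- match your terminal settings):

```text
# /etc/lilo.conf (fragment)
serial=0,9600n8                  # LILO prompt on ttyS0, 9600-N-8

image=/boot/vmlinuz
    label=linux
    read-only
    append="console=ttyS0,9600"  # kernel messages and boot on the serial line
```

Add a getty on ttyS0 in /etc/inittab for logins after boot. A fully serial *install* depends on whether the distribution's installer can itself run on a serial console, which varies. -- Ed.]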
I have Mandrake 7.0 installed on my IBM-compatible PC. I want to take tech classes with Manpower over the Internet. They told me that they don't support Linux, only Microsoft Windows 95, 98, and NT. Do I install the Linux Prep Tool and follow the steps to create a dual-boot system over Mandrake 7.0? I'd appreciate any help you can give me!
Georges Train
Hi,
I got stuck with 2 problems:
1) I tried to dump the screen contents using a cron job. It basically uses the ImageMagick command import:
import -display 192.9.101.16:0 -window root fru.tif
However, the command fails (when run by crond; it runs from an xterm just fine) with the following message:
Xlib: connection to "192.9.101.16:0.0" refused by server.
Xlib: Client is not authorized to connect to server.
import: Unable to connect to X server (192.9.101.16:0).
It should be noted that 192.9.101.16 is localhost. It fails no matter whether I'm root or not. What sort of authorization do I need to connect to the X server from cron?
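[ The usual fix is to make the X authority cookie available to the job; a sketch (it assumes the cron job runs as the same user who owns the X session, and that the cookie lives in the default ~/.Xauthority):

```text
# crontab entry: every 10 minutes, dump the root window of display :0.
# DISPLAY must match the running server; XAUTHORITY must point at the
# cookie file of the user who owns that display.
*/10 * * * * DISPLAY=:0 XAUTHORITY=$HOME/.Xauthority import -window root /tmp/fru.tif
```

A cruder alternative is to run "xhost +localhost" from the X session, which disables cookie checking for local connections. -- Ed.]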
2) My sendmail daemon takes ages to start up. When I checked the log, it says, a few times:
May 29 08:52:03 piii sendmail[24126]: My unqualified host name (piii) unknown; sleeping for retry.
It should be noted that piii is the localhost, defined in the hosts file. Furthermore, when I try to check the mail as a user, it says it cannot find the spool file. I can create it manually, but will that work? What sort of configuration do I need for sendmail when I need to send and receive mail only locally (mail exchange among local users only)?
Thanks in advance,
Jan
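[ Sendmail's "unqualified host name" complaint usually means the hostname has no domain part in /etc/hosts. A sketch of a fully qualified entry (piii.localdomain is a made-up FQDN; any domain will quiet sendmail for local-only mail):

```text
# /etc/hosts -- give the host a fully qualified name so sendmail can
# canonicalize it instead of sleeping and retrying at startup.
127.0.0.1      localhost
192.9.101.16   piii.localdomain piii
```

-- Ed.]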
hi,
I've been able to use procmail when I send mail to &myuserid+keyword,
where I assign a variable (say PLUSARG=$1).
I can use the variable PLUSARG as the basis of some procmail recipes.
So what's the question? Well, it seems that if I use an alias set up for me with the "+keyword" syntax, procmail doesn't pass the "+keyword" in as the $1 parameter. E.g., $1 is found when I use myid+keyword but not with aliasid+keyword.
thanks,
Scott Lowrie
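[ For reference, the mechanism being described looks roughly like this (a sketch; it assumes procmail is invoked with "-a" carrying the plus-suffix, e.g. from a .forward line -- which the MTA does for the real login's "user+detail" but apparently not for the alias, so the alias likely needs its own "+*" handling in the aliases file):

```text
# ~/.procmailrc (sketch)
# $1 is whatever was passed via "procmail -a <detail>".
PLUSARG=$1

# File messages sent to myid+lists into a separate folder:
:0:
* PLUSARG ?? ^^lists^^
IN.lists
```

-- Ed.]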
Hello Mr. Dennis!
I have had a problem writing a login script for CompuServe in the kppp dialer.
The compuserve login procedure requires that the port bits be set initially to 7 bit and later to 8 bit.
Also, it requires the setting of parity bit.
The kppp dialer help section on login scripts indicates there is a MODE command that makes it possible to set the port to 7 or 8 bits. But the problem is that it is not in the program, in the section where you type in the login script: they totally omitted this command. Also, how do you send a ^M (I think this is a control-M) in the kppp login script? And how do I handle the parity bits? I am connecting to CompuServe, but I am getting an error message on the setting of the 8 bits.
I am attaching to this email the CompuServe login script I use in Windows 98. The file name is CIS.TXT; it is a text file.
I did check with compuserve and Red Hat Software and neither one could help me. I am using Red Hat 6.2. I am using the kppp dialer that came with the Red Hat 6.2.
If you could help me with this, I would appreciate it very much. I look forward to your comments in your Answerguy column.
Thank you.
Andy Andrianos
-- attachment follows --
;
; This is a script file that demonstrates how
; to establish a PPP connection with Compuserve,
; which requires changing the port settings to
; log in.
;
; Main entry point to script
;
proc main
   ; Set the port settings so we can wait for
   ; non-gibberish text.
   set port databits 7
   set port parity even
   transmit "^M"
   waitfor "Host Name:"
   transmit "CIS^M"
   waitfor "User ID:"
   transmit "your user ID"
   transmit "/go:pppconnect^M"
   waitfor "Password:"
   transmit "your password"
   transmit "^M"
   waitfor "One moment please..."
   ; Set the port settings back to allow successful
   ; negotiation.
   set port databits 8
   set port parity none
endproc
Greetings. I am going crazy trying to figure out how to do what seems like a simple task. In VMS I was able to change into the NCP (Network Control Program) shell and connect to another device over layer 2 on a flat network. I believe the syntax was:
NCP> connect via ewa-0 physical address 00-c0-75-00-21-00
ewa-0 being the nic of the localhost, and the final number being the destination MAC address. I could establish what was basically a telnet session, but the device did not have to be configured with a correct IP address for my subnet. This is a much quicker way to configure a device than hooking up a console to it. When I was remote, I was able to telnet to VMS (when I had a VMS box) and NCP to a device on that local net.
Is there any way to accomplish this same end in Linux? I don't want my boss touting VMS over Linux, you know!
Thank you for your efforts.
Sincerely, Scott A. King
Hi.
I've got an old Compaq Prosignia with onboard graphics "card". The problem is that the parallel port is located at 0x3bc, irq 7. For the console monitor, I use an HP VGA mono monitor. The problem with this is that Linux decides that it's going to use ega rather than vga+. ega then tramples over my parallel port resources so that I can't use it.
If I boot the box without a monitor, or with a colour SVGA monitor, vga+ is then used, which uses different resources, hence allowing the parallel port to work.
I had already tried compiling certain things in and out of the kernel before I realised that simply unplugging the monitor lead fixed it. My question is: is there any way that I can either alter the resources (I/O and IRQ) that the ega driver uses, or force it to use vga+? The monitor I use for it is VGA anyway -- and it's console-only, not used for X.
Thanks.
Hi,
I have a computer at home that has Linux (RH) 6.0 on it. In Emacs I can set the coding to Devanagari. But I wonder how I can get anything to type on the screen. I am sure that I need to use a different input system, but which? Additionally, what do I have to do if I want to install support for more languages?
Any pointers in this direction?
Thanks,
Aseem.
Dear Sir:
I had heard that there is some advantage in clearing server swap space each night, by running swapoff followed by swapon as a cron job.
What are your thoughts on this?
Thanks,
Robert Dalton
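[ For reference, the mechanics in question are just this (a sketch only -- whether it helps at all is exactly the open question, and note that swapoff will fail if the pages in swap won't fit back into RAM):

```text
#!/bin/sh
# Nightly swap "clearing" as described: disable every swap area listed
# in /etc/fstab, forcing its pages back into RAM, then re-enable them.
# Run from root's crontab, e.g.:  15 3 * * * /usr/local/sbin/reswap
swapoff -a && swapon -a
```

-- Ed.]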
Hi Jim and Heather,
This ought to be an FAQ, or, at least, fodder for your Answer Guy column:
I've been trying to set up pop over ssh. It isn't working.
To test it, I've been trying to retrieve e-mail which steadily accumulates on my new server, running qmail and its pop daemon. I've also been trying to retrieve an e-mail sent to your antares.starshine.org, since I have great confidence that your pop server works.
I'm following the directions in the pop over ssh mini-HOWTO. It still doesn't work.
First, it isn't retrieving the mail. Second, it asks me for the passwords every time. I thought the idea was to automate this so I wouldn't have to type in the passwords every time.
At this point, I'm fairly dazed and confused.
I've run ssh-keygen on both machines and copied the keys into the authorized_keys file on my laptop, the machine to which I'm trying to retrieve the e-mail. When I do ssh-agent getmail, it asks for the passphrase. Okay, that much I expected.
Then, as I said above, it asks for the password for each server I'm connecting to. That would be fine, if it never asked again. But, five minutes later, it's forgotten the passwords and I have to type them in again. And so it goes, on and on.
-- David Benfell
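[ For others trying this: the usual shape of pop-over-ssh is one long-lived agent plus a local port forward (a sketch; the host name and local port are stand-ins):

```text
# Start one agent for the whole session so keys stay cached -- running
# "ssh-agent command" per fetch starts a fresh agent each time, which is
# why the passphrase and passwords keep being asked for.
eval `ssh-agent`
ssh-add                                  # type the passphrase once

# Forward local port 11110 to the POP3 port (110) on the mail server:
ssh -f -L 11110:localhost:110 mailhost.example.com sleep 60

# While the tunnel is up, point the mail client at localhost:11110,
# e.g.:  fetchmail -p POP3 -P 11110 localhost
```

-- Ed.]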
Hello Jim, Heather and the new Answer Gang!
One of my clients has "seen the light" and is installing a rather heavy-duty Linux box (PIII 600 MHz, 256 MB RAM, and five 9 GB hard drives configured as RAID 5) as their main server. Two of their requirements are secure ftp (I'm leaning towards sftp, even if it is commercial) and, this is the kicker, the ability to restart a failed (unattended) ftp transfer.
After hitting Freshmeat and Google/linux, I've found that the RFC for ftp loosely defines a restart mechanism using a marker, but I've yet to find an ftp client/server that actually implements it. The FAQ for ssh/sftp does not mention this capability.
Anyone in the Answer Gang have a brilliant answer/solution?
-- Faber Fedor, RHCE, MCSE, MCT

Hi Answerguy James T. Dennis,
I was pointed to your site through the linuxdoc site.
Perhaps I can be so bold as to ask a question. I have Debian 2.1, and xdm 3.3.?.
I reinstalled fvwm95 the other night using dselect, and now xdm will not launch any window manager. The X server is up and running, and I get the login screen. When I type my name and password, it tries to launch a window manager, and then, not being able to, reverts to the login screen again.
I am able to launch a windowing environment by killing xdm under su, and then, as a user, executing xinit and launching fvwm95 through a bash shell from the grey root screen. This is a bit of a kludge, really.
I think that it has to do with file permissions, as fvwm works fine (apart from the keyboard mappings).
There is no report of errors in .xsession-errors, or of problems in /var/log/xdm.log, other than a report of me killing it.
I am sure the problem is very simple to you.
Regards,
John Langley,
England.
First of all, I am sorry if this shouldn't come to you like this. Could you let me know whom to send this email to, if there is a dedicated address?
Thanks.
I have two executable files that work when called manually. File2 is called from within File1 so that when I run File1, File2 gets called as well.
When I set a crontab job to run File1 (with the nested call to File2), File1 runs, but File2 doesn't.
That's the simplified version.
The detailed version is that File1 dials out to an FTP server and downloads File2, which is then supposed to run. I stress that when I manually run File1, File2 is downloaded and runs correctly. Within a crontab job, File1 runs but doesn't appear to activate File2.
Is there something I am missing? My only thought is that perhaps I need to chmod the file in a different way for it to be called from within a crontab job successfully.
Any thoughts or Pointers?
Mick
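[ The most common culprit here is cron's minimal environment rather than permissions: cron jobs get a very short PATH and no login environment, so a script that works from a shell can fail to find or execute the file it downloaded. A sketch of the usual fixes (all paths are placeholders):

```text
# In the crontab, set a sane PATH and use absolute paths:
PATH=/usr/local/bin:/usr/bin:/bin
30 2 * * * /home/mick/File1 >> /tmp/file1.log 2>&1

# Inside File1, make the freshly downloaded file executable and call it
# by absolute path:
chmod +x /home/mick/downloads/File2
/home/mick/downloads/File2
```

Redirecting the job's output to a log file, as above, will also capture the "command not found" or "permission denied" message that explains the failure. -- Ed.]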
Dear Answerguy(s),
I'm a complete Linux newbie. I just installed LinuxPPC 2000 on my iMac. It partitioned fine and installed fine. Got the penguin, then logged into Gnome. Felt great. Went to create a new user. Made a big mistake: I created a new Gnome icon in the panel. I don't want two, so I removed the panel. But (and I'm in root, of course) it removed the whole panel, not just the icon. I went to a new shell and typed in the shutdown commands. I re-logged in, but Gnome has lost its panel. How do I get it back?
I hope this mail doesn't clog up your mailbox unnecessarily, but I thought you could help.
Thanks for your time
Mike
Hi,
The company I'm working for is using Netware 4.1, and it is company policy not to use the IPX protocol, so all Windows clients log into the server using IP.
Is there any Netware client for Linux that uses IP to talk to the server? So far, if I am not mistaken, all of them (ncpfs, nwclient and mars) use IPX to talk to the server.
If there's no available software that I can use, is there a workaround? Can I modify some settings on those clients to make them use IP instead of IPX?
Is there any way for me to log Linux into NDS without special clients?
Please tell me as soon as possible; I need the answer urgently. Thank you in advance.
Best Regards,
Kheng Teong
I can't seem to find any detailed information on this and wondered if you might have some ideas. Normally I deal with large ethernet/atm/etc type networks. This is my first small time network that uses
First, some information: I have a server that uses PPP with an ISDN modem. I am using diald to connect. I use this server to provide network service for an internal network. Right now everything works, but I need to have the ISDN modem drop its connection when not needed and then re-connect when someone tries to use the Internet.
Here is my connect script information:
fifo /usr/lib/diald/diald.ctl
mode ppp
connect "sh /etc/ppp/connect"
device /dev/ttyS1
speed 115200
modem
lock
crtscts
local xxx.xxx.xxx.xxx    <--- static IP
remote xxx.xxx.xxx.xxx   <--- static IP
defaultroute
pppd-options noauth
include /usr/lib/diald/standard.filter
ip-up /etc/firewall.rules
ip-down /etc/firewall.reset
Any idea of how to do this, or where I can find any information? Thank you for your time.
Dear Mr. AnswerGuy...
Can you provide me with a good list of well-known port numbers for TCP and UDP? I am trying to decipher another netguy's packet filters and am stumped by a few. Is there an all-inclusive list somewhere?
Can't find info on the following ports:
TCP: 27, 109, 139, 1547, 7777
UDP: 67, 137, 138, 5631, 5632, 7648, 7649
Any assistance would be greatly appreciated.
Thanks.
Deane
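[ The first place to look is your own /etc/services file, which maps well-known port numbers to service names; the IANA port-numbers registry is the authoritative upstream list, and ports absent from both usually belong to proprietary applications (5631/5632, for instance, are commonly associated with a commercial remote-control package). A self-contained sketch of the lookup -- it greps a small inline sample in the same format as the real /etc/services:

```shell
# Build a tiny sample in /etc/services format:
cat > /tmp/services.sample <<'EOF'
pop2            109/tcp         # Post Office Protocol v2
netbios-ssn     139/tcp         # NETBIOS session service
bootps           67/udp         # BOOTP/DHCP server
netbios-ns      137/udp         # NETBIOS name service
netbios-dgm     138/udp         # NETBIOS datagram service
EOF

# Look up a port number; -w matches whole words, so 139 won't hit 1139:
grep -w 139 /tmp/services.sample
```

On a real system, replace /tmp/services.sample with /etc/services itself. -- Ed.]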
G'day. After reading your column and coming across your previous post regarding the i740, I was wondering to what extent Linux has 3D support. I have heard rumors that later kernels support other 3D features.
To be more to the point: to what extent am I likely to get Linux running 3D programs on my i740? And if it is possible, how would I go about it? As you can tell, I'm very new to Linux and have so far been astounded by it... a truly wonderful OS for the people. I hope that some day Gnome etc. will come with a feature set for the truly dumb user (without losing the advanced user set!) so we can kick Microsoft off its high chair.
Anyway, thank you for your time.
Andrew Nye
James,
I am running Red Hat Linux 6.2 and am trying to get a syslog server running. I have limited Linux knowledge and just want to get it working to log messages from Cisco devices. Do you know the commands to get it working?
Thanks in advance,
Domenick Villamagna
Network Engineer
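[ A sketch of the usual recipe (the facility local7 is an assumption -- match whatever "logging facility" your Cisco boxes are set to, and file locations vary slightly between Red Hat releases):

```text
# 1. Let syslogd accept remote UDP messages by starting it with -r:
#    edit the syslog init script (or /etc/sysconfig/syslog, if present)
#    so the daemon runs as:  syslogd -r -m 0

# 2. In /etc/syslog.conf, send the router facility to its own file
#    (the whitespace before the filename must be tabs):
#       local7.*        /var/log/cisco.log

# 3. Create the file and restart syslog:
#       touch /var/log/cisco.log
#       /etc/rc.d/init.d/syslog restart

# 4. On each Cisco device:  logging <syslog-server-ip>
```

-- Ed.]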
Hello! Excuse the beginner question, but I was wondering how in the heck to install and run devfs on my Red Hat 6.2 Linux system.
I never saw the option under menuconfig, and whenever I try something like "mount -t devfs none /devfs", it says that the kernel doesn't support it.
thx in adv!
-ion
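[ devfs never shipped in the stock 2.2 kernels Red Hat 6.2 uses -- the menuconfig option only appears after applying Richard Gooch's devfs patch to the kernel source and rebuilding. A quick way to confirm what your running kernel supports (a sketch):

```shell
# List the filesystems the running kernel knows about; if devfs is not
# in this list, "mount -t devfs" can never succeed:
cat /proc/filesystems

# Check the kernel version, since the devfs patch must match it:
uname -r
```

-- Ed.]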
Hi James:
Here is my question, I could not get a satisfactory answer from any of the newsgroups: comp.lang.perl.misc, comp.os.linux, etc.
I want the user to be able to upload a file via the browser's file form object. The file would go to the /cgi-bin/docs directory for about a second, where it will be read into MySQL and then deleted from that directory.
Anyway, this CGI runs on Linux with Apache 1.3.9. I even downloaded the latest, 1.3.12, and compiled it with mod_put.c in. The docs directory has 777 permissions temporarily. The script is written in Perl. After executing the CGI script I have a 0K file created in the docs dir, but the file is empty. What is going on? How can I get this working?
For reference, I include the Perl code here. Thanks for the help.
#!/usr/bin/perl
use CGI;
$q = new CGI;

############################
# Under CGI.pm the value returned by param() for a file-upload field is
# both the client's file name *and* the filehandle to read from.  The
# original script stripped the path from $upload_file in place, which
# destroyed the filehandle, so read() returned nothing and the saved
# file ended up empty.  Strip the path on a copy of the name instead.
$upload_fh = $q->param("upload_file");
($upload_file = $upload_fh) =~ s/^.*(\\|\/)//;

print "Content-type: text/html\n\n";
print "<html><head><title>Upload</title></head>\n<body bgcolor=\"\#ffffff\">\n";
print "<h2>File Upload</h2>\n";

###########################
if ($upload_fh) {
    open(UPLOADED, ">/var/lib/apache/cgi-bin/docs/$upload_file")
        || print "Couldn't open $upload_file :: <b>$!</b>";
    binmode UPLOADED;
    while ($size = read($upload_fh, $buffer, 1024)) {
        print(UPLOADED $buffer);
    } # end while
    close(UPLOADED);
    print "$upload_file\n";
} # end if

##-----------
print "<p>\n<b>Files in upload directory:</b><br>\n";
$path = "docs";
opendir (DIRHANDLE, "$path")
    || print "Can't open directory $path <b>$!</b><br>\n";
while (defined ($files = readdir(DIRHANDLE))) {
    print " $files<br>\n" if ($files !~ /^\.{1,2}$/);
} # end while
closedir(DIRHANDLE);
print "</body></html>";
I need information regarding chat servers on Linux.
Mon, 3 Jul 2000 15:55:47 -0400 (EDT)
From: Karl Pena <>
Subject: Linux Bike Project
Hello Linux Gazette!
Today (July 3, 2000) I found your kind posting of a requested comment I sent a long time ago (August 1999). You could not imagine my heartfelt glee (goosebumps) when I saw it (at the URL below).
Thank you so much for putting that up, or even responding to it. Reactions to the Linux Bike Project have been extremely positive across the board, and a few wonderful open-source community players have taken interest in participating. I am still brainstorming and drafting out ideas and details for the epic bike ride, and would love to speak with you if possible (even just to say 'thank you' directly).
You can reach me at the following email address now:
THANK YOU again, and have a wonderful day!
Yours sincerely,
Karl G. Pena
[Karl is planning a bicycle ride across the US to promote Linux. He promises to send updates to LG during the ride. He is looking for sponsors for his trip. If you'd like to help, contact him at the address above. -Mike.]
Sun, 2 Jul 2000 17:46:21 +0200 (MET DST)
From: Jan-Hendrik Terstegge <>
Subject: German LG translation needs web hosting
Do you know a web-site host which is free and without advertising? We have a problem with the German translation of the Gazette in that we don't have any space on a web server, so we have to use free web-space offers. Do you know if one of your sponsors could offer us an account where we can upload up to 10, or later 20, MB?
Sun, 2 Jul 2000 17:46:21 +0200 (MET DST)
From: Werner Gerstmann <>
Subject: translations of LG
Hallo there,
LG#55, general Mail, translations of LG.
I think it is nonsense to translate English-language computer texts: it is a matter of general education to read English, just as to be able to read and write your own language or to know some basic mathematics. Regards, Werner Gerstmann
While it may be true that university-educated people in a technical field should be able to read computer articles in English, I don't think that's a reasonable expectation for everybody in the world. One of the advantages of Linux over other OSes is that it's more international, so people are on more of an equal footing than with some other OSes whose corporate headquarters is in an English-speaking country. Another advantage is that it's accessible to people who can't afford to spend a month's salary on one copy of Windows 2000. These two facts alone mean Linux will be (and is being) used in non-English speaking countries in ways that, um, No Other Operating System Has Been Used Before... <Star Trek music in background> This means that people in those countries will eventually contribute (and are contributing) Linux software that wouldn't otherwise exist.
Some of these contributions will be by people whose day job is not in the computer industry. Thus, they may have less of a reason to study English. Or, maybe they can read the English Gazette but prefer not to. Or maybe they're willing to read the English version, but would also like to share the articles with family and friends who aren't computer nerds and maybe don't speak English either. What's wrong with that? If it's making their use of Linux "just a little bit more fun", then Linux Gazette is reaching its goal, no matter what language it's in.
Thank you for your mail on the translation issue, even though I won't change my opinion!
What we need -- or better, what I consider desirable -- is a special online dictionary of certain new terms, not too simple, in English of course, like the Jargon File.
Take, e.g., the articles in the Linux Journal: as regards language, they are on very different levels, ranging from What is Linux up to the sophisticated papers of Doc Searls and Stan Kelly-Bootle. Mr. Kelly-Bootle especially seems to invent new word combinations, and of course you have to think about compositions such as beyond-the-soggy-pale or cosource-sweet-value-added, even if your mother tongue is English, I suppose. When I looked in my large Pons dictionary, I didn't find words like kludgy or to shrink-wrap. These words I found in the excellent online dictionary of Munich University at www.leo.org.
Sat, 15 Jul 2000 08:38:59 +0400
From: "Felipe E. Barousse Boue" <>
Subject: Spanish translation
[I asked Felipe Barousse how his new Spanish translation of LG is going. I was surprised to discover it's going much better than I expected, even though it's been in operation only two months! I couldn't wait to share the news. -Mike.]
Hi Mike:
Well, I have not counted all emails. But we have had many many emails with congratulations about the site. There are around 70 registered translators and about 25 of those have actually done translations; the rest either are in the process of translating something or said they will do so.
With every new issue and announcement, traffic goes up very high and then slowly fades over the course of the month, unless some new announcement -- the kind that kicks up everyone's curiosity -- is made.
La Gaceta de Linux is being linked from several Linux-related sites (Spanish-speaking, most of them) and we are in a couple of search engines now.
In short, I guess that for two months online this is doing well. What is your opinion about it ?
Actually, a great effort from volunteers within the company has been put in to build, install and manage the site.
Hola!!! Seria una buena idea que la revista estubiese traducida a otra lengua, por ejemplo al Español, por poner un ejemplo. Saludos desde España.
[He said, "Hi. It would be a good idea if the magazine was translated into other languages; for example, Spanish. Greetings from Spain." -Mike.]
Hola Luis:
Recibí copia de tu mensaje del editor de Linux Gazette. Te invitamos a que visites la traducción de la revista Linux Gazette en http://gaceta.piensa.com/ Aqui tenemos traducción de los últimos número de la revista y con serías intenciones de seguir traduciendo los número anteriores.
Trabajamos con voluntarios de todo el mundo para realizar las traducciones al Español.
Te esperamos en La Gaceta de Linux.
Saludos desde Ciudad de Mexico
Felipe Barousse
[He said, "Hi Luis. I received a copy of your message from the Editor of Linux Gazette. I invite you to visit the translation of the magazine Linux Gazette at http://gaceta.piensa.com. Here we have a translation of the most recent issues, and we seriously intend to translate the previous issues also. We work with volunteers from all over the world to do the translations. We await your visit to La Gaceta de Linux. Greetings from Mexico City. Felipe Barousse." -Mike.]
Sun, 9 Jul 2000 04:19:02 -0400
From: Dean Maluski <>
Subject: Windows Partitions
I work at a TV station as System Administrator. This week a hard drive failed in a computer, a very important computer. It controls 7 satellite dishes, the azimuth and elevation can be adjusted from this machine. Also the 7 receiver frequencies can be adjusted from this windows machine. Without it we get no video feeds and have to order shows on tape, which costs the station a great deal of money.
Now that I've given you the basics, let me tell you how Linux saved the day. When this hard drive failed, Windows could no longer determine what type of partitions the drive had. I tried setting it up as a slave drive so I could salvage our satellite software; Windows couldn't see it, so I booted with Slackware rescue disks, ran fdisk and set the partitions to Windows FAT16. I rebooted Win98 in DOS-prompt mode and managed to copy the ini files off it. With those ini files I was able to reconstruct another system. After that I made several backups of the ini files. Linux gained a great deal of respect within Tribune Broadcasting this week!
I'm a Newbie.
I saw some of your answers in the Linux Gazette. Is this still active? If yes, how do I subscribe?
The Linux Gazette is published at the beginning of every month, on the website http://www.linuxgazette.com.
After that, with volunteer effort, it is mirrored to locations throughout the world, translated, and added into most major distributions as part of the Linux Documentation Project. Back issues may be in your /usr/doc/ directory right now... The LDP's home is http://www.linuxdoc.org.
You could probably use a website page-checking service to advise you when the Front Page changes to list new issues.
Is there a simple way to change the font type & size that EMACS uses. For example in some type of resource file?
For this part, one of our readers or the Answer Gang will have to help.
TIA, Allen Grayson
You're welcome.
SSC's webmaster was asked:
Hi, I am looking for an open source job center management program. Is the one you use in your jobs section open source? Can you recommend one? We are starting a small site in Phoenix and would like to integrate something useful for careers.
Thanks, Steve Hasz
I'm at work, you can reply to the above address or
The Career Center on the Linux Journal site is not open source at present. Linux Gazette's "Help Wanted" section is not about job offers, but about people seeking help with different aspects of Linux. Since you're seeking help about an aspect of Linux, we've put your letter here. (Actually, in the "General Mail" section, but at least it's the same page.)
We build this page, as well as 2-Cent Tips and The Answer Gang, with the aid of some hand-crafted Perl and Python scripts. We sort the letters into standard Unix mailboxes with particular filenames, and the scripts convert them to HTML. However, we also post-process each column by hand to do things the scripts can't.
Freshmeat.net is a canonical site for looking for open-source software; the CPAN project (a perl script archive at cpan.org) may also be useful.
For examples of other functioning job centers--or to find a developer to write one for you--see www.cosource.com and www.sourcexchange.com . There are also a few other companies offering a similar service.
Perhaps you can adapt one of Linux's general "trouble ticket"-tracking applications like gnats and jitterbug for your purposes.
-Heather and Mike
Thanks for your response! We decided to just have one of our open source developers write something, which turned out quite well. We actually created our entire site and run it using only modified open source.
In case you are interested, we are in public beta at http://www.aztechbiz.com/.
Wed, 5 Jul 2000 20:40:47 +0100
From: Ed Brown <>
Subject: Ventura under Linux
I noticed that you mentioned that you use Ventura for some of your work. Is this a beta of the Linux version due in a few months?
[Are you talking about Warren Young's letter in http://www.linuxgazette.com/issue41/lg_mail41.html? You can write to him and ask what he knows. Corel has ported several of their applications to Linux, and just released Corel PHOTO-PAINT for Linux. I don't use graphics programs except for basic retouching in the Gimp, so I don't know much about Corel's products. -Mike.]
Sun, 9 Jul 2000 17:59:39 +0900
From: Lance Lindley <>
Subject: Hi folks
Thanks for some great information. However, here's my suggestion:
PROBLEM: Not much point in having terrific information if nobody knows it is there!
SOLUTION: Add a few two-or-three word synopses of what is in each issue to your Table of Contents. Big time investment initially, but very useful in the long run and easy to maintain once you've got it in place.
Tue, 11 Jul 2000 14:05:37 +0200 (CEST)
From: Christoph Lange <>
Subject: Separate all-in-one version?
could you please offer the all-in-one version of the Linux Gazette (HTML as well as text) as a separate download? I don't understand why I have to download these two files with every issue. Separating them from the standard package would decrease it by ~ 1/3 of its size without any loss of information.
[The all-in-one version requires some of the other files anyway (e.g., images, program listings, HTML files not part of the main article page). The decision not to make separate packages for the all-in-one versions was made by the former Editor (hi Margie!), because we already offer the Gazette in a variety of formats and we can't customize it to everybody's liking. We have a new Editor coming on board next month, Don Marti, so I'll leave him the decision whether to change things. -Mike.]
Tue, 25 Jul 2000 13:50:50 -0600
From: Doug <>
Subject: Please remove bogus Posting!
Hello, Can you please remove the posting with the following header:
Fri, 2 Jun 2000 11:48:21 -0600
From: "Tom Russell" <[email protected]>
Subject: How to run Windows programs on Linux
at this location: http://www.linuxdoc.org/LDP/LG/issue55/lg_mail55.html
I do not know who Tom Russell is or why my email address is listed under his name, but I am tired of people sending me emails about WINE!!!! I know all about WINE, and I don't want or need a hundred people telling me to check it out!
Please remove this for me! Please!!
Thank you,
Doug Springer
[OK, I removed the e-mail address. It must have been a formatting error when the column was made that caused your e-mail address to leak through from your letter onto that one. -Mike.]
Sat, 1 Jul 2000 20:15:58 +0100
From: David Andrew Williams <>
Subject: linux cad programs
Dear Editor,
Your roundup of CAD programs for Linux misses a program called LinuxCAD, mentioned as far back as issue 30 and in some more recent issues as well, so it seems to be popular.
I'm sure some of the other programs mentioned in the recent issue can also be found mentioned in older issues.
Andy
Sat, 15 Jul 2000 08:38:59 +0400
From: Mr. Mamodaly <>
Subject: subscribe
[Subscribe to what? For the lg-announce mailing list, send "subscribe lg-announce [email protected]" in the message body to [email protected].
Linux Gazette itself is not available via e-mail. For the reason why and other options, see the LG FAQ, questions 2-4, at http://www.linuxgazette.com/lg_faq.html . -Mike.]
Wed, 5 Jul 2000 10:28:16 -0700
From: Mike Orr <>
Subject: Trivia for the day
$ tm
bash: tm: command not found
I guess the computer doesn't do Transcendental Meditation.
Hi everybody! Wow, it was a lot more work than I expected... but then, I also handled the Mailbag and Tips this month. It's all part of our new team effort to make Linux a little more fun!
We got a fair handful of questions about statistics, none of which got answered. I'm the statistician among us, and I've been hacking web pages and Perl scripts all month. I didn't even find time to whip up a cool new logo for The Answer Gang yet.
But I'll say this, and you can all percolate on what you think of it: statistics developed by someone else aren't terribly useful to you - the situation they studied will be different, every difference is a statistical skew, and it doesn't take much variance to make them not only not useful, but actually a waste of time and effort.
As contrasted with benchmarking done in-house, in your own controlled environment... where you know that the situation being tested is something you really can apply and show to your boss. But you have to have a "control" - at least one case that is not part of the experiment, but allowed to run "naturally", whatever that means. The larger the sample, the less likely you are to have a big bad skew, like an observer's opinion swaying their observation, or hardware problems corrupting a software test, or something like that.
By the way, the Benchmarking HOWTO over at the LDP homepage may be dusty, but it's actually still very readable. I recommend that people who care about serious comparison of systems, distributions, and OSes check it out, and apply its methodology when making their comparisons.
The smaller the sample, the sillier it is. If we used the methodology of "letters that came to LG this month", why, MS Windows is still popular, but Linux outsells it by at least 4 to 1 (dual boots and crossover issues counted in favor of Windows), maybe more... and there were almost as many people who submitted questions that did not involve Linux. (Pant stains? Car CD players? Where do these people come from?) Oh yeah, and there's my final note. Look out for subjectifying words like "almost", "nearly", "overwhelming" and other such vague quantifiers. If they aren't numbers, they're not useful. If they are numbers, they're only as useful as the correlation between how they were gotten and your particular real-life use for them.
9 out of 10 of my donuts are gone, with a 60% chance of the rest disappearing within the next 15 minutes. See you next month!
Translator: Aron Felix Gurski
Well, we got about a dozen people who came forward with a solution. Not that we here at the Gazette have any better answer for the original querent. So, if you know some useful sites that Linux folk might enjoy for overclocking and other hardware hackery, submit them to us and they will be published next month to finish this thread.
And a big hand for Aron, who sent in a very early reply that also helped me learn something, plus an offer of future help!
Hi!
I just began looking over the July issue and found that you needed some help in translating a question from Danish. Please do not call the user "hilsen kaspar";"hilsen" is just a friendly way of ending a letter (literally it means "greetings") -- the user's name is Kaspar, a male first name. Kaspar really *does* repeat himself at the end of the message. (He also has made not a few typos...)
Good luck at answering him. (For future reference, I can translate Danish, Norwegian and Swedish for you [email address elided])
Dav jeg syntes at det er en gode side du har med en masse gode brugbare råd . men det er ikke det jeg vil , je har et problem som du måske kan hjælpe mig med . Jeg har en 450 mhz p3 cpu som jeg gerne vil have overclocket jeg har et asus bundkort model :p2b/f1440bx agp atx. Jeg ved ikke om at jeg skal have noget extra køling på når det kun er til 500 mhz da mit bundkort ikke kan tage mere.enanden ting er at jeg ikke ved hvordan jeg gør så jeg håber at du vil hjølpe mig.JEg håber at du vil hlælpe mig med mine spørgsmål.
hilsen kasper
Dav[e?], I think that you have a good page with a lot of good, useful advice. But that's not what I want; I have a problem with which you may be able to help. I have a 450 MHz P3 CPU that I would like to overclock. I have an ASUS P2B/F1440BX AGP ATX motherboard. I don't know if I need extra cooling for 500 MHz (my motherboard cannot go any higher). Another thing is that I don't know what to do, so I hope that you will help me. I hope that you will help me with my questions.
Best wishes,
Kaspar
AnswerGang: RazorBuzz, Jim Dennis
From RazorBuzz on Fri, 07 Jul 2000
Here's a comment on a question from a while back. I don't remember that question (but it was about a year and a half ago). I see that this was the same month that I wrote a 26-page guide to "routing and subnetting" and answered about a hundred other questions. No wonder some of them weren't complete!
Answer Dude,
Your response to Tony Grant on the Plug and Play board problems (#36) can
be overcome in Linux itself. You can manually set rc.S to run a config' for IRQ 5 (which, if memory serves, is COM3). If you add this line:

setserial /dev/ttyS3 uart 16550A port 0x2e8 irq 10
to the /etc/rc.d/rc.S file it'll be run on every boot (duh) and correct the problem. Of course the IRQ and I/O need to be changed. The 16550A chipset is pretty much standard and most likely won't need changing... but if it does, you can always grab it easily. All that command line does is force the box to accept the com port and recognize that it can in fact be used. Damned defaults tend to only recognize COM1-COM3... Hopefully the next RH, Caldera OL, or Debian should have that fix (since Slackware is just... well... lacking... nobody has hopes for that to ever get itself in gear.)
- -=Razor=-
- -=Buzz=-
Then again, looking at Tony's original question (http://www.linuxgazette.com/issue36/45.html) I see that it wasn't clear that setserial would be the right tool for the job. It was a question about a conflict between an ISDN TA (terminal adapter) and an Ethernet card. I have no idea how the setserial command would change the IRQ on the actual device. As far as I know, all it does is configure the kernel's serial driver --- to inform it of what IRQ the hardware is using.
So I stand by my original answer (in this case).
(I understand that the ISDN TA was probably acting like a modem, and thus probably had a UART of some sort --- probably a 16550A since a 16450 or an 82xx series would be WAY too old and obsolete for any sort of ISDN equipment. I don't see any evidence in the message that the user had any way to manually set hardware jumpers to specify non-conflicting IRQs for these devices).
I wonder whatever happened to this correspondent? Have they long since switched to DSL? Is that old ISDN TA a doorstop somewhere?
AnswerGang: DUDU, Jim Dennis
From dudu on Fri, 07 Jul 2000
You answered in LG55 the following question:
Simple Shell and Cron Question
From Amir Shakib Manesh on Thu, 08 Jun 2000
Dear Answer Guy, I want to write a shell script in which every 15 minutes it runs a simple command, let's say 'top -b'. Would you help me?
Well one way would be to make a cron entry like:
*/15 * * * * top -b
... which you'd do by just issuing the command: 'crontab -e' from your shell prompt. That should put you in an editor from which you can type this command.
But when the cron job runs, it has no default environment variables like PATH. So shouldn't one include the full path to the top binary in order to run it properly?
Rgds. DUDU
Of course cron runs in its own environment with its own PATH and other settings. However, on most Linux systems 'top' is going to be located in /usr/bin --- which really should be in cron's PATH.
So I think the example I gave was good enough for the common case and I think I did go into more detail later in that response.
Of course I have a tendency to refer to programs and scripts by their full path in my configuration files and scripts, but by shorter names in examples and on the command line.
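In that spirit, here is a hedged sketch of how the PATH issue plays out; the log path and the -n flag below are my own additions, not from the thread:

```shell
# A crontab entry with an explicit full path, so it works even
# under cron's minimal PATH (the log path here is hypothetical):
#
#   */15 * * * * /usr/bin/top -b -n 1 >> /tmp/top.log 2>&1
#
# Simulate cron's stripped-down environment to see how command
# lookup behaves when PATH is reduced to the system directories:
env -i PATH=/usr/bin:/bin sh -c 'command -v top || echo "top: not in PATH"'
```

On most Linux systems this prints a path like /usr/bin/top; if your top lives somewhere more exotic, the full path in the crontab line is what saves you.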
AnswerGang: Mike Orr, Jim Dennis
From Mick Faber on Fri, 07 Jul 2000
Hi
I have written a script that automatically connects my machine to an FTP server and downloads a set of files that I need nightly. The client downloads a file which is my indicator to any changes. In effect, if this downloaded txt file has changed, then I need to download the other files.
That part is ok. I can automatically download the check file, so I have two files (current and new dir) called the same but in different directories.
I have written a script that says
> Set a=cksum file1
> Set b=cksum file2
> If a=b
> Then ...
> Else ...
My problem is that even though the cksum results are different when done manually, in the script they are ALWAYS equal. Is SET the wrong term to use to set a variable? Is there another way to do this altogether?
[Mike] We need to know what language this script is written in. From the "set" statement, I'd assume it's csh or tcsh, although what you wrote appears to violate the rules for (t)csh syntax. (Capital letters, no backquotes around "cksum file1", etc.)
Anyway, if the language is similar to C, the "a=b" expression should be "a==b" to test for equality. "a=b" means set a to the value of b.
[Jim] The code fragment you've included doesn't specify what scripting language you're using. It isn't a valid fragment of bash, PERL, or even csh. For one thing, the common UNIX scripting languages are case sensitive. Thus your capitalization of "If" and "Then" are enough to cause this fragment to fail under most interpreters.
Other than that there isn't enough context or code here to guess what scripting language you're trying to use. However, the 'set' command isn't used in most Linux scripting languages (at least not for "setting values to variables"). csh, TCL (and 'expect', a TCL derivative) and the MS-DOS batch language use the "set" command for variable assignments.

This leads me to suspect that your code sample is in "MS-DOS batch" or some sort of pseudo-syntax.
To do this with bash (or Korn shell or any similar interpreter) you'd use something like:
#!/bin/sh
a=$(cksum $1)
b=$(cksum $2)
if [ "$a" = "$b" ] ; then
    ...
else
    ...
fi
...assuming that you were calling the script with two parameters, the names of the two files. Note: the $( ... ) expressions are the key here. They "capture" the output from the enclosed command(s) and substitute those results into the expression in which the $(...) expressions appear. This is called "command substitution" (traditionally rendered as `...` using backticks). This "command substitution" feature is one of the shell's most powerful and useful scripting mechanisms, and it allows us to seamlessly assign the output from any normal command (internal or external) to shell variables.
(Note: Some very old Bourne shells might not recognize the $(...) form and thus may require the backtick form. However, all UNIX shells should be able to do command substitution. I've never heard of one that didn't. csh/tcsh also requires the backticks, and can't use the more legible $(...) form.)
Actually this is an oversimplification. The GNU 'cksum' command prints output of the form:
2839321845 1516 /path/file.name
Obviously if I take the output of two of these commands, with DIFFERENT FILENAMES the full text of each output will be different even if the checksums are the same. I need to extract just the checksums, or at least filter out the differences in the filenames.
My first thought was that the cksum command might have some switches or options to suppress the extraneous output. It seems like the need to get just the numeric checksum value would be pretty common. However, it appears that the FSF maintainer for this utility doesn't agree with me. So we have to isolate it ourselves. That's only a minor nuisance (taking far less time for me to do than to explain).
There are a couple of ways I can do that. Here's the first that comes to mind. Just insert the following at the top of the script.
function cksum () {
    command cksum $1 | {
        read a b x
        echo $a $b
    }
}
This creates a local shell function which overrides the external cksum command. The "command" command forces the shell to execute the external command (bypassing the shell functions and aliases --- and preventing a recursion loop).
All I do here is pipe the output into a command that reads the first and second fields (the part I want to keep). I read the rest of the output into a "throwaway" variable (which I expediently call "x"). Then I just echo out the two bits of info I cared about (the checksum and the size), leaving off the "rest." This trick of using the read command to filter out the fields that I want from lines of input is pretty handy. It's a reasonable advantage over using the external 'cut' command because read and echo are internal commands. Also 'cut' defaults to using tabs as delimiters, while I usually want to "cut" on any whitespace (any number of tabs or spaces).
The advantage of writing this little shell function into our script is that I can leave the rest of the script alone. I don't have to re-write it. Of course it's better to avoid the name collision. I could name my function "checksum" (and avoid having to use the "command" command). Even if I do rename the shell function I can leave my "command" command as is. It doesn't hurt anything.
Naturally I could have also just piped the output of each of these cksum command through cut like so:
a=$(cksum $1 | cut -d" " -f 1-2 )
... which works fine. Of course it is a little less maintainable. Even though I'm only calling this expression twice --- it's still better to consolidate it into a shell function so it really works the same way in both invocations. Otherwise a slight difference to one of the invocations could silently cause the later comparison to always and erroneously fail.
Note that we don't have to use "if... then ... else .... fi" in most shell scripts. We can shorten this script to:
[ "$(checksum $1)" = "$(checksum $2)" ] && .... || ....
(assuming I made my checksum shell function as before).
... where the command after the && is the same as you'd put after the "then" token in the earlier script. The command after the || operator is similar to the "else" block, but it would be executed if the checksums didn't match or if the command in the && clause returned a non-zero value (an error). This is frequently what you actually want in shell programming, though the differences can be subtle and important.
Note: the && and || operators take a single command. If you want to perform a block of commands under those conditionals you'll want to use command grouping or possibly a subshell --- using the {...} (braces/grouping) or (...) (subshell) syntax.
One "gotchya" that crops up in bash 2.x when using "grouping" is this:
{ foo; bar }
... was accepted under bash 1.x and is an error under bash 2.x --- it's because the closing brace is being taken as an argument to the bar command. This is technically correct for the parser (it was a bug in bash 1.x that allowed the command to work).
So, good shell scripting requires that we use this syntax:
{ foo; bar; }
(or simply put the braces, particularly the closing brace, after a line end, perhaps on its own line).
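A small runnable illustration of that grouping rule (the variable and strings here are arbitrary examples of mine):

```shell
x=5
# Braces group several commands under one && branch; note the
# semicolon (or newline) required before the closing brace in bash 2.x.
[ "$x" -eq 5 ] && { echo "match"; echo "grouped"; } || echo "no match"
```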
That's basic shell scripting.
Any assistance appreciated. Email preferred, but will keep checking this here to check for any legendary solutions...
Mick
[Jim] I don't know that my answers are "legendary" but I hope they help anyway.
[ Maybe most aren't, but some are. The length of this particular thread is about to rival some of your own longer missives, but I think it will still be shorter than your legendary "Routing and Subnetting 101" (issue 36, plus it had a follow-up). Some people are teaching classes based on it. Rah Rah Rah, Go LDP! Of course it's an unfair comparison; there's two of you ganging up on the question this time, so your relative portion is even shorter. --Heather ]
BTW: When posting questions about scripting --- include a syntactically complete and semantically relevant portion of the code. Try to keep that under 25 lines. Often the process of isolating and testing a chunk of code that clearly illustrates the problem leads you to an understanding and a solution or workaround.
... he replied ...
Thanks so much for the reply. I have written this using vi on Red Hat 6.1 - I don't know if that is the answer you need - I'm only a 2-week novice with Linux, and with programming at this level for that matter... Does this answer your question?
The actual command line I want to use is
if cksum /usr/local/c_drive/batm/video/current/pod001.avc = cksum /usr/local/c_drive/batm/video/new/pod001.avc; then
I also want to verify that the downloads are successful and not corrupted. I figured CKSUM is the best for that as well - that problem is to get tackled yet ....
[Mike] Vi is the editor you're building the file with. What we need to know is the program that's running the file. From the "actual command line below", it looks like a shell script, so I assume it's running under the default Linux shell, bash. Do you have a "#!" line at the top of the file? If so, what does it say?
The following script works when I try it comparing one file with itself, then comparing it with a different file.
if [ "$(cksum /usr/local/c_drive/batm/video/current/pod001.avc)" = \
     "$(cksum /usr/local/c_drive/batm/video/new/pod001.avc)" ] ; then
    echo "They're the same."
else
    echo "They're different."
fi
"if" takes a single command. If the command's exit status is 0, the "then" part is run. If the command's exit status is non-zero, the "else" part is run. The brackets "[ ... ]" imply the "test" command, which runs a test (in this case, a string comparison) and exits 0 if the answer is true.
[Jim] Actually the [ .... ] doesn't "imply" the test command. [ is really a built-in alias for 'test' (and it generally also exists as a symbolic link to the /usr/bin/test command, for those shells which don't implement it as a built-in).
When the command 'test' is called under the name '[' then it requires the ']' as a delimiter. That's actually a bit silly, since the shell is still doing its own parsing, and the shell "knows" when the command ends quite independently of this "]" marker (which the shell ignores, as it's just another argument to the '[' command).
However, these are just syntactic anomalies. It's really better for beginning shell scripters to use the 'test' command (so that they really internalize that it is just a command like any other Unix command). It is not a "feature of the language" --- it's just a command that processes a list of command line arguments and returns an exit value. (This is as true of '[', but it's less obvious to people who've been exposed to any other programming languages.)
[Mike] "$( command arg1 arg2 )" returns the output of the specified command-- what it would have printed on the screen. This is different from its exit status. The double quotes keep the output together even if it contains spaces; otherwise the output would be misinterpreted.
Bash allows either "=" or "==" for string comparisons. Another operator, "-eq", does numeric comparisons, but we don't want that here since "cksum" returns more than just a simple number. Some other languages would require "==" instead of "=", as I said yesterday, but bash isn't one of them.
[Jim] Although bash allows this, the external 'test' command requires that we use the = and will give an error if we try to use ==
So, depending on bash's permissiveness is less portable.
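To make the portability point concrete, a minimal sketch (the variable names are arbitrary):

```shell
a=foo
b=foo
# Single '=' is accepted by both the shell built-in test and the
# external /usr/bin/test; '==' is a bash extension that the
# external test rejects with an error.
if [ "$a" = "$b" ]; then echo "same"; else echo "different"; fi
```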
Incidentally, another approach we could have used (given the original problem) is to do something like:
pushd $(dirname $1)
a=$(cksum $(basename $1 ))
cd $(dirname $2)
b=$(cksum $(basename $2 ))
popd
....
... this relies on the fact that the files being compared have the same names but reside in different directories. However, it seems really bad to impose that constraint on our shell script even though this particular application/situation allows it. It would make the resulting script useless for most other situations. However, the approach I recommended (filtering out the filename with a read/echo pair or a 'cut' command) gives us a more general script that we can re-use for similar purposes.
This example does show the use of the very handy 'basename' and 'dirname' commands. It also shows that the $(...) form of command substitution can be nested (which overcomes a limitation of the older `...` backtick form).
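Here is that nesting in action, using one of the paths from this thread:

```shell
# Nested $(...) substitutions: take the basename of the dirname.
# Nested backticks would need awkward escaping to do the same thing.
p=/usr/local/c_drive/batm/video/current/pod001.avc
echo "$(basename "$(dirname "$p")")"   # -> current
```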
[Mike] Please cc: on subsequent e-mails about this issue. This is a mailing list which is used to build the Answer Gang/Answer Guy column in Linux Gazette, and several people read it who may be able to help.
[Jim] Once you have local copies of the files, why not just use the 'cmp' command? The cksum command is already going to read the whole file. You've already burned up the bandwidth (transferring the whole files to the local machine).
So what's wrong with:
if cmp -s /old/path/file1 /new/path/file1
then
    ...
else
    ...
fi
That seems quite a bit simpler.
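A quick self-contained check of that exit-status behavior (the temp file names below are my own):

```shell
# Create two identical files, then compare them.
printf 'abc\n' > /tmp/cmp_a
printf 'abc\n' > /tmp/cmp_b
# cmp -s prints nothing; only its exit status matters.
if cmp -s /tmp/cmp_a /tmp/cmp_b; then echo "identical"; else echo "different"; fi
```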
Also, let's assume that you have two directories. A script to compare corresponding files in them would look something like:
for i in $1/*; do
    cmp -s $i $2/$(basename $i) \
        && ...   # they're O.K
        || ...   # Ooops: corrupt file
done
(This assumes that you're calling it with just two parameters, the names of the old and new directories.)
Alternatively you can have a script take a directory name (the "new" directory for argument's sake) and a list of files as probably provided by a "wildcard" (globbing) pattern.
That would look something like:
d=$1
[ -d "$d" ] || exit 1
shift
for i; do
    if cmp $i $d/$( basename $i )
    then
        ....
    else
        ....
    fi
done
... Here again I'm using the basename command. I could also use the "parameter substitution" feature of the shell instead of basename: ${i##*/} However, I find that form to be almost unreadable. If performance were an issue I might hide the ${1##*/} in a shell function that I'd name "basename" (and I'd toss in ${1%/*} as "dirname"). That would be a bit quicker for large directories, since basename and dirname are external commands. So using them entails quite a bit of fork()'ing and exec()'ing. Naturally the ${...} parameter substitution features are always internal if they are supported at all.
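For readers who want to see those parameter substitutions in action (the example path is arbitrary):

```shell
i=/some/dir/file.txt
echo "${i##*/}"   # strips the longest match of '*/' from the front -> file.txt
echo "${i%/*}"    # strips the shortest match of '/*' from the end  -> /some/dir
```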
... he replied ...
Hi, I am using the default program bash (have also tried sh, as other information I downloaded had this in it - are they significantly different?)
I ran this command:
if [ "$(cksum /usr/local/c_drive/batm/video/current/pod001.avc)" = \
     "$(cksum /usr/local/c_drive/batm/video/new/pod001.avc)" ] ; then
    echo "They're the same."
else
    echo "They're different."
fi
and found the following results:
when the file is compared to itself, it works. When compared to a file of the SAME NAME in another folder, it doesn't work. It's almost as if the folder is taken into account, but when I run cksum on the two files they give me the same CRC, number of bytes and file name, as they should. I would expect then that this command should work.
[Jim] Of course the "folder" (directory name) is part of what's being compared. The "$(.....)" are expressions that evaluate to text strings. The contents of those strings are set to the output of the commands that are included in the parentheses. The [ (test) command takes a list of arguments and operators. In this case the arguments are two strings (substituted by the $(...) expressions) and the = operator. Note that the "=" sign here is just an argument to the test command --- which is also known as the '[' command. The closing ']' is just an argument that the 'test' command requires when it is called under the '[' name.
Now, if you think about it you'll see that the '[' command has no reasonable way of "knowing" that you only care about the checksum values of the two strings. It was given a couple of strings and an argument (the "=" sign). So it (the test command) will return a value (exit code, errorlevel) based on whether the two strings are identical.
I am interested only in the CRC value - perhaps we could use the -eq if we can only extract the CRC value as a result instead of the other info cksum gives us...?
[Jim] I don't recommend that. The 'test' command will probably emit an error about the format of the operands to the -eq option/operator.
Feeling so close now.... Thanks again for your patience....
[Jim] See my long response of a few minutes ago. The answer is simple, we include a bit in the $(....) expressions that filters out the irrelevant text. I do this by over-riding the cksum (external) command with my own shell function, but the concept is the same.
Note: I dove into that message and my earlier response before seeing that others had tried to help you with your question.
Regards, Mick Faber
[Mike]
~% cksum ksc.txt /tmp/ksc.txt
3082533539 2180 ksc.txt
3082533539 2180 /tmp/ksc.txt
It looks like the difference is only in the path and not in the checksum. I tried it both with the two filenames being hard links to the same file, and with them being copies of each other. To get the checksum only, run:
~% cksum ksc.txt | cut -f 1 -d ' '
3082533539
Or to be verbose:
cksum ksc.txt | cut --fields=1 --delimiter=' '
3082533539
Here's a script:
---------cut here----------
#! /bin/bash
FILE1=that
FILE2=/tmp/that
cksum $FILE1 $FILE2
if [ "$(cksum $FILE1 | cut -f 1 -d ' ')" -eq \
     "$(cksum $FILE2 | cut -f 1 -d ' ')" ] ; then
    echo "They're the same."
else
    echo "They're different."
fi
---------cut here----------
$ /tmp/checkit
3558380555 93104 that
3558380555 93104 /tmp/that
They're the same.
Out of curiosity, what do you think of the difference between cksum and md5sum?
Bash has more features than sh and is larger. Exactly what the differences are, you'd have to consult the manuals. I use zsh for my interactive shell, and zsh or bash for scripting.
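On the cksum/md5sum question: the two are invoked the same way, but md5sum's 128-bit digest is far harder to collide accidentally than cksum's 32-bit CRC, which matters when verifying downloads. A sketch (the sample file name is my own):

```shell
printf 'hello\n' > /tmp/sample.txt
cksum  /tmp/sample.txt | cut -d' ' -f1   # 32-bit CRC
md5sum /tmp/sample.txt | cut -d' ' -f1   # 128-bit cryptographic digest
```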
... he replied ...
Thanks heaps for your help. I have resolved the issue.
FYI: I am using the command "if cmp -e file1 file2" and not using the cksum at all anymore.
Thanks again - you guys are lifesavers!!!
Mick
AnswerGang: Jim Dennis
From Stephen Richard Levine on Fri, 07 Jul 2000
I cannot find a reference which would show me how to access data sitting on an NT server (version 4.0) in multiple directories. I want to use Linux as the O/S, Apache as a web server, but the content all resides on NT as PDFs in separate subdirectories. Each user has their own NT subdirectory. Any assistance would be appreciated.
You could use the Linux SMBFS. You'd have to compile support for that into your kernel and use the 'smbmount' command.
SMBFS is similar to Samba (and based on the same free sources and work). However, it is the client side (Linux accessing SMB filesystems) rather than the server. (Samba is an SMB server).
When you're accessing files via an MS-Win '95 "share" it's using the SMB (server message block) protocol. Likewise for NT, Windows for Workgroups, the old OS/2 Lan Manager, and for printing and some of the MS Windows "popup" messages. Samba is a free package written by Andrew Tridgell (and others). It runs on most forms of UNIX, where it allows any UNIX or Linux system to emulate an NT server. This allows all those MS Win '9x and NT workstation clients to access files on Linux and UNIX systems using their "native" protocols. No special software has to be installed on the clients. (That's a big win for two reasons: MS Windows clients don't offer very robust remote administration facilities, so installing software on them is expensive and time consuming; and MS Windows systems are frequently plagued with DLL and other software conflicts which makes manually installing software on them difficult, frustrating and time-consuming).
Anyway, you're trying to do the opposite of what Samba offers. You're trying to use your Linux system as a "client" to your NT fileserver. Personally I think that this is a backwards way to do things. I'd suggest installing Samba on the Linux system (along with Apache and any other requisite tools) and let the clients post their files directly to the Samba shares on the Linux host. It's possible to configure Samba to listen on a specific interface and to limit the IP address ranges with which Samba will interact. Thus you can configure a system so that only local users can access the Samba shares while it's still publicly accessible as a web server.
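For instance, a minimal smb.conf fragment along those lines might look like this (the interface and network values are examples, not a recommendation for this particular site):

```
# /etc/samba/smb.conf (sketch)
[global]
   interfaces = 192.168.1.1/24
   bind interfaces only = yes
   hosts allow = 192.168.1.
   hosts deny = 0.0.0.0/0

[webdocs]
   path = /var/www/htdocs
   writable = yes
   valid users = @webauthors
```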
(In the "belts and suspenders" philosophy it's also possible to use ipchains to block SMB traffic from even reaching the public interfaces on your Linux box. And of course you do that blocking on the host itself and on a separate border router).
Another approach would be to house primary copies of these files on the NT server, and write some sort of replication script that would periodically be executed (task scheduler?) to create an archive of the user files and push them over to the Linux box. Probably that would be most easily done using the 'rsync' command (another UNIX/Linux tool, written by Andrew Tridgell). You can run many freeware UNIX tools under Interix (formerly called "OpenNT" by a company formerly called Softway Systems, now owned by Microsoft) or under Cygwin32 (Cygnus' package for supporting UNIX APIs and libraries on Win32 systems).
rsync is very efficient (sending only the "diffs" of large files that have changed, rather than whole copies). It is the most popular replication tool on Linux these days.
However, if you have some other constraint that really mandates the use of NT for the fileserver, then I suppose you can use Linux' smbfs. You can read more about it at the Samba web site (http://www.samba.org/samba/smbfs).
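For completeness, a mount of that sort looks roughly like this under the 2.2-era tools; the server name, share name, and credentials are placeholders:

```
# Mounted by hand (as root):
#   smbmount //ntserver/users /mnt/ntusers -o username=webuser,password=secret
#
# Or from /etc/fstab, for mounting at boot:
//ntserver/users  /mnt/ntusers  smbfs  username=webuser,password=secret  0  0
```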
... he replied ...
Many thanks for the assistance and setting me straight on which part of the client/server I should access.
Steve
AnswerGang: Jim Dennis
From ajshields on Tue, 04 Jul 2000
gday
How are you? I am new to Linux and am trying to install it as dual boot on my new 10gb Seagate disk drive; I have already got windoze installed. My BIOS doesn't support a 10gb drive, so I downloaded Seagate's boot manager, which allows me to use the hdd's full potential. When I tried to run fips it said that the last bit of the disk has files on it (it doesn't), and doesn't want to run any further than that.
Can you help
Andrew
Did you read the FIPS.DOC file that comes with the FIPS package? (FIPS is the "First nondestructive Interactive Partition Splitting" program). It discusses this in the doc file, in the FAQ and in the ERRORS.TXT file:
Last cylinder is not free

Since the new partition is created at the end of the old one and contains at least one cylinder, the partition can not be split if not at least the very last cylinder is completely free. Probably there is a hidden file like 'image.idx' or 'mirorsav.fil' in the last cylinder - see the doc.
(That's from ERRORS.TXT). In the doc and in the FAQ it describes what you should do about this:
But before starting FIPS you _must_ now defragment your Harddisk. All of the space that will be used for the new partition must be free. Be aware that the Windows Swapfile will not be moved by most defragmentation programs. You must uninstall it (in the 386enhanced part of the Windows Control Panel) and reinstall it after using FIPS.

If you use IMAGE or MIRROR, the last sector of the hard disk contains a hidden system file with a pointer to your mirror files. You _must_ delete this file before using FIPS (it will be recreated the next time you run mirror). Do 'attrib -r -s -h image.idx' or 'attrib -r -s -h mirorsav.fil' in the root directory, then delete the file.

If FIPS does not offer as much disk space for creation of the new partition as you would expect it to have, this may mean that:

a. You still have too much data in the remaining partition. Consider making the new partition smaller or deleting some of the data.

b. There are hidden files in the space of the new partition that have not been moved by the defragmentation program. You can find the hidden files on the disk by typing the command 'dir /a:h /s' (and 'dir /a:s /s' for the system files). Make sure to which program they belong. If a file is a swap file of some program (e.g. NDOS) it is possible that it can be safely deleted (and will be recreated automatically later when the need arises). See your manual for details. If the file belongs to some sort of copy protection, you must uninstall the program to which it belongs and reinstall it after repartitioning. I can't give you more aid in this - if you really can't figure out what to do, contact me directly.
Also Arno Schaefer, the author/maintainer of FIPS, suggests that you create a debugging report with the -d switch and that you include the resulting FIPSINFO.TXT file with any questions that you mail to him.
The other approach would be to backup your data, check your backups (restore the critical data to another drive, another system, or at least a different subdirectory) and then do an old-fashioned re-partition, re-install (of MS Windows) and then do your Linux installation.
I realize that this sounds dull, tedious, time consuming, etc. However, think of the advantages. First, you'll have a backup! Also, your new installation of MS Windows may be much cleaner than the existing one (since their OS seems to gather cruft at a frightening rate).
I've only used FIPS a couple of times (on other people's systems, at their insistence). I prefer the old-fashioned approach. Actually I prefer to wipe out the old OS and give Linux the whole system. Failing that I prefer to add an extra hard disk and use LOADLIN.EXE to run Linux off of that (non-primary) drive. So repartitioning is third on my list of preferences; and using FIPS is fourth. That would be followed quite distantly by using Partition Magic (which I've never tried).
Of course I have no idea what files FIPS is complaining about. It might be some sort of hidden/system driver that was installed by that Seagate boot manager you mentioned.
Incidentally I have no idea if Seagate's boot manager (software disk driver?) is compatible with LILO. The LILO technical documentation describes their success in operating with a variety of partitioning drivers (like Ontrack's Disk Mangler^H^H^Hager, and Maxtor's (??) EZ-Drive). However, I don't have the time to hunt down information about Seagate's software (particularly since you give no details about it --- not even the name of the package).
As I said: my preference is to give Linux a whole hard drive. If you can get a cheap little 1 or 2 Gb drive that your BIOS does support --- make that the master, install the MS-Windows "C" drive on it, and give Linux the other drive (or most of it). Of course you could also look at upgrading your BIOS, replacing your motherboard (getting a new BIOS along with that, of course), or installing a smarter IDE controller (with its own BIOS).
Of course you can just try to do the installation. It might just work with no fuss. However, when novices try to install Linux, and they include these little constraints (wants dual boot on a big drive, on a system that doesn't support big drives, and wants to non-destructively resize and repartition that drive) they naturally complicate their initial experiences.
You're likely to get an unduly dim view of Linux "ease of installation" by trying an installation with all of these constraints. (That isn't to say it can't be done just as you want --- it's just to point out that the process is often more complicated than it needs to be).
So, consider alternatives as I've suggested. Ultimately some hardware upgrades might save you enough time to offset the cost.
... he replied ...
gday again
All that i can say is welll sooooooooorrrrrrrrryyyyyyy
it came up with 54h as it can't recognize this operating system
AnswerGang: Jim Dennis
From Rajan Karwal on Mon, 03 Jul 2000
I recently read your comments about LI on a web newsgroup. My problem is this: I was running Linux on my machine but didn't like it, so I want to go back to Windows. I deleted the several partitions that Linux created and formatted the drive. Now all I get when I start my machine is "LI". (Note: at this point I have installed MS-DOS on the machine.) The only way I can get to a C:\ prompt is to use a boot disk. Can you shed any light on this?
Thanks for your time
Raj
Boot from an MS-DOS floppy and run FDISK /MBR
One component of LILO is a "boot loader" (a bit of code that is stored on your primary hard drive in the "master boot record" (MBR) along with your partition table). The LILO boot loader stores some additional code beyond the 446 bytes that are available in the MBR (the other 66 bytes are the primary partition table and a "signature" that marks the drive as "formatted"). Usually that additional code is stored on one of your Linux filesystems (/boot, or the / root filesystem, depending on how you've laid out your systems).
When you removed your Linux filesystems, you also removed the additional boot loader code (the "secondary boot loader"). The reason that the boot process stops at: LI is that Werner Almesberger used a clever bit of programming to fit some diagnostics into the 446 bytes of code. The letters L, I, L, O are printed at different points of the boot process.
So, if the boot loader hangs part way through the process, you have some idea of how far it got. There are many reasons why a system might stop at LI and not get to the second L in LILO. All of them amount to "I couldn't load the second stage boot loader." (Which makes sense in your case since you DELETED THEM).
Note: I've heard of cases where people have removed partitions and/or kernels and were still able to boot from them. That's because LILO stores the raw disk addresses of these files (this refers to the data in a way that is "below" the filesystem level). Removing things from the partition tables or from a filesystem marks space as "unallocated" --- but it doesn't generally overwrite or affect the data. It just changes the way that the space is accounted for and makes it available to be used by other partitions/files. So it makes sense that LILO can still be used to boot the system from an out-of-date mapping; that works until the data blocks that those files and partitions occupied are actually used by something else.
Running the /sbin/lilo command updates those mappings, of course. The /sbin/lilo command is a program that uses the /etc/lilo.conf file to build a set of boot blocks and maps. I like to think of /sbin/lilo as a "compiler" for the "/etc/lilo.conf" program; that makes the boot records and maps analogous to the "program" and "libraries" that a compiler generates from your source code. This analogy makes perfect sense to programmers --- but it seems to sink in for quite a few non-technical users as well.
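To push the analogy once more, the "source code" is quite small. A sketch (the device names and labels are examples), recompiled with a run of /sbin/lilo after every edit:

```
# /etc/lilo.conf (sketch)
boot=/dev/hda          # write the boot loader to the MBR
map=/boot/map          # where /sbin/lilo stores its raw-sector maps
install=/boot/boot.b

image=/boot/vmlinuz    # a Linux kernel to offer at boot
        label=linux
        root=/dev/hda2
        read-only

other=/dev/hda1        # chain-load another OS
        label=dos
```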
AnswerGang: Jim Dennis
From Gillian Bennett on Sun, 02 Jul 2000
Hi James,
I guess that in all likelihood this is the wrong forum for this question, but there are so many mailing lists for linux that I wasn't sure which one to post to. I am reasonably new to linux after being an admin for sun, dec etc for a few years.
I was wondering if there is a tool that will dump filesystems (similar to ufsdump or some other dump tool from other unix flavours) on RH linux 6.X. The filesystems are ext2 type filesystems and are currently backed up using cpio (SHUDDER).
I apologise for the inconvenience, Regards, Gillian
What have you got against cpio?
Anyway there is a Linux 'dump' (and 'restore') package. You should find it on your installation CD or on any good archive site.
Of course its version number is only 0.4b16 or so. In a rational world that would suggest that the author thinks it is roughly 40% "feature complete" toward version 1.0. However, some programmers in the Linux world don't like simple, rational versioning schemes, so I have no idea what that version number is supposed to imply.
AnswerGang: Jim Dennis
From Jaris Visscher on Thu, 06 Jul 2000
mars.ncn.net is a Linux server which is having problems emailing us. We are having trouble with mars.ncn.net emailing us at mtc1.mtcnet.net. They seem to think it is our MMDF mail server.
We have checked all of their reverse DNS info and it is correct. They are getting the error
Connections reset by mtc1.mtcnet.net
Message could not be delivered for 5 days
Message will be deleted from queue
This has been going on for 2 months. Here is the exact message as it comes to our MMDF server in our log file. /usr/mmdf/log/chan.log As you will see we get a fetch of mars.ncn.net failed
I'm not at all familiar with the MMDF mail transport system. So I don't know what sort of "fetch" is going on here. However, it looks like:
6/23 10:16:02 smtpsr8272: h2chan ('mars.ncn.net', 1)
6/23 10:16:02 smtpsr8272: h2chan table 'local'
6/23 10:16:02 smtpsr8272: tb_fetch: dbminit
6/23 10:16:02 smtpsr8272: fetch (mars.ncn.net)
6/23 10:16:02 smtpsr8272: fetch of 'mars.ncn.net' failed
6/23 10:16:02 smtpsr8272: h2chan table 'list'
6/23 10:16:02 smtpsr8272: h2chan table 'smtpchn'
6/23 10:16:02 smtpsr8272: ns_fetch (21, mars.ncn.net, 1)
6/23 10:16:02 smtpsr8272: ns_fetch: timeout (0), rep (0), servers (0)
6/23 10:16:02 smtpsr8272: ns: key mars.ncn.net -> 38
6/23 10:16:02 smtpsr8272: ns_getmx(mars.ncn.net, 805db9c, 8068b58, 10)
6/23 10:16:02 smtpsr8272: ns_getmx: sending ns query (30 bytes)
6/23 10:16:02 smtpsr8272: ns_getmx: bad return from res_send, n=-1, errno=114, h_errno=0
6/23 10:16:02 smtpsr8272: nameserver query timed out
... you're getting a name resolution failure while looking for MX records?
Does mars.ncn.net have a valid MX record? It doesn't look like it (from my own 'dig' commands).
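The check itself is quick to reproduce. The commands below are a sketch against the hostname from the log; your output will naturally vary:

```
# Ask for the mail exchanger record; an empty ANSWER section
# means no MX is published for the host.
dig mars.ncn.net mx

# Compare against the plain address record:
dig mars.ncn.net a
```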
It sounds like ncn.net hasn't created MX records for you. I don't know if your MMDF installation has been configured for anti-relaying. It may be that the anti-relaying (anti-spam) configuration that you used requires the sender/relayer to have an MX (mail exchanger) record rather than just an A (address) record.
Anyway, I'm sure that you know more about MMDF than I do. However, it occurs to me that it may be best to point you at the canonical MMDF resources page (http://www.ivine.com/~mmdf) and let you read through the FAQ (http://www.ivine.com/~mmdf/mmdf.html).
Hopefully that will make more sense to you, since you've configured some of these programs and channels. There's also a searchable archive of the mailing list. I saw one message there that seemed to assert that MMDF won't fall back to A records when MX lookups have failed (search on "MX"). I would expect that to apply to SENDING mail, which is why I'm wondering if your MMDF is trying to use a similar mechanism as an anti-spam measure while it's receiving messages.
Anyway, that should help. Having your postmaster subscribe to that list and post MMDF questions there will also probably be much better than posting them to more general fora. MMDF is a bit of a niche, so you really want to talk to its specialists.
AnswerGang: Jim Dennis
From Henry White on Thu, 29 Jun 2000
Please point me to a place I can read on how to create an .ios file. I want to make a CD from this file.
Thanks Henry White
My guess is that you mean an ".iso" (as in ISO, the International Organization for Standardization), a filename extension commonly used with ISO 9660 (the formal specification for data CD-ROM formatting).
Assuming that this is the case you want to get the mkisofs and the cdwrite and/or the cdrecord utilities. The mkisofs man page will help a bit. However, you should also look at the CD-Writing HOWTO at http://www.linuxdoc.org/HOWTO/CD-Writing-HOWTO.html
That is quite detailed.
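In outline, the workflow the HOWTO describes looks like this. The device numbers and paths here are examples only; check 'cdrecord -scanbus' and the HOWTO for your own values:

```
# 1. Build an ISO 9660 image from a directory tree.
#    -r adds Rock Ridge extensions (sane UNIX permissions),
#    -J adds Joliet extensions (long filenames under Windows).
mkisofs -r -J -o /tmp/backup.iso /home/henry/cd-stuff

# Optionally loopback-mount the image to check it before burning:
mount -t iso9660 -o ro,loop /tmp/backup.iso /mnt/cdtest

# 2. Burn it; dev=x,y,z is the SCSI triple from 'cdrecord -scanbus'.
cdrecord -v speed=4 dev=0,4,0 /tmp/backup.iso
```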
... he replied ...
You are right I was asking about iso. Thanks for your help. I am on my way now.
Henry C. White
AnswerGang: Michael Williams, Heather Stern
From WwSHADOWMASTERwW on Thu, 29 Jun 2000
Listen. I just installed Red Hat Linux 6.2 and I cannot get my modem to work. I did the modem test on the setup menu and it does detect it, but it stays at the "initializing modem" prompt. What do I do? I can't find anyone who can answer this for me. HELP..... I am using the KDE workstation setup. Please tell me step by step how to do this; I would appreciate it very much.
PS I am not using Gnome!
[Michael] Is your modem internal? If it is, then there's a fair chance it's a 'WinModem'. These are modems designed to work within MSWindows. Since they use drivers written for MSWindows to work, it is very difficult [currently impossible] to get them working under Linux. If this is the case, then your best bet is to buy a new external modem. They're reasonably priced, and will work with all OS's.
[Heather] While it is very much in the curmudgeonly spirit of the Answer Guy to tell someone that their "lose"modem is not a big winner, it is no longer quite accurate to say that they just don't work.
PCTel models work, because a different corporate entity is maintaining their binary driver. How well they work, I wouldn't know. They aren't the most common softmodem variety.
Lucent "56kFlex" modems work, because Lucent (somewhat quietly) released a binary driver. It's been updated once, even though the party line is "we don't have a Linux driver, some outsiders did that, ask your modem manufacturer, we just design the controllerless cores." Sure. The drivers have to be modem-specific; that's why Lucent has only one "Windows" driver posted on their website. I have to laugh. Their corporate confusion aside, Lucents have a fairly fine chance of becoming something much better than a modem as well, since some folks are working on different aspects of real software for it to be used as a phone-line diagnosis tool and sampler. Depending on your needs, that might already be better than a modem ... but it's not usable as a modem that way; the open source software can't do PPP yet. The binary driver, meanwhile, is flawed as regards unloading, and often requires shoe-horning into place.
We can hope that these binary maintainers are paying attention to roll out new binaries as the 2.4 kernel ships, because it has a waaaaay different modules interface.
But the other softmodems (Conexant, 3com, some others) are useless hunks of incomplete hardware in a Linux or *BSD box. Haven't checked regarding BeOS or OS/2, but if those don't work either, don't say we didn't warn you. If you bought or received a removable internal softmodem and it's among those that don't work, vote with your wallet - send it back!

Finally, check out linmodems.org for your driver, if it exists. There is also a link there to someone's big list of software-driven modems. Expect your softmodem to flake out at high speeds as the CPU load grows (whether you're under MSwin or Linux won't matter; it will merely affect how much overall load it takes to flake out). In short: if you are a serious modem user, you want a serious modem.
[Michael] What distribution are you using? I'm guessing it's Caldera, since that attempts to set up the modem at installation.
[ No, he said RH 6.2, but that's an interesting factoid, so it stays. --Heather ]
You don't actually have to 'install' the modem as you would in Win98. To use a modem, first find out its com port. It'll probably be on COM1 or COM2; under Linux, these appear as /dev/cua0 and /dev/cua1. You'll also need to know the modem's speed. If it's a new modem it will be a 56k model, so set the port speed to 57600 (or 115200) bits per second. Now, to use this, go to kppp under the internet selection of the KDE 'start' menu.
It's pretty self-explanatory from here onwards. Enter your com port - try from 1 - 4 (cua0 - cua3) until you find which port your modem uses. Enter your modem's speed, and then your ISP's details. Unless you have other problems, that should allow you to use the internet.
[Heather] A Lucent controllerless modem, if you happen to have one and force the driver (module ltmodem.o) to load, becomes /dev/ttyS14. It is known to have problems interacting with the current ppp module though; a patched ppp.o with features reduced back to 2.2.14 is available for 2.2.15 and 2.2.16.
On systems without a ps/2 mouse, serial 0 is usually the mouse, and serial 1 (com2) the modem. On laptops, the external serial is usually serial 0, and the infrared (when turned on) serial 1, leaving PC cards to be on serial 2 (com3).
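If it isn't obvious which port the modem is on, one way to look (assuming the setserial utility is installed; /dev/ttySn and /dev/cuan refer to the same hardware) is:

```
# List the configured serial ports; "UART: unknown" usually means
# no real device is present at that address.
setserial -g /dev/ttyS0 /dev/ttyS1 /dev/ttyS2 /dev/ttyS3

# Or see what the kernel reported at boot time:
dmesg | grep ttyS
```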
AnswerGang: Jim Dennis
[ Folks, while our Answer Gang does read technese as well as English, it helps if you use some connective grammar... little things like "when I used 'cat whateverfile' it said " or "with kerneloptthingy=nnn I can see syscalls blah() blah() blabla()". This one had to be translated, and my wildest guess is Fuchangdong uses some sort of kernel debugging that he didn't describe to us. --Heather ]
From fuchangdong on Mon, 17 Jul 2000
please give me some help,i didn't know how to explain at my implementing embeded os. fuchangdong
You're trying to use Linux for an embedded system?
http://www.sohu.com/sas/temp/twoyear/2year.html http://www.sohu.com
hi :
i now have a question,please give me help, i use initrd and ramdisk to complete embedded linux on my hardware. first ,i create a initrd.img from command mkinitrd.and a bigger root fs:ram.img.gz ,to lilo it,and reboot it
You're using the Linux initrd (initial RAM disk) feature. You use the mkinitrd command to create your RAM disk image install that and your kernel onto the target hardware (which I presume is x86 because...) you then run /sbin/lilo on that and try to boot it.
at init process,do_basic_setup,this line :
kernel_thread(do_linuxrc,"/linuxrc",0);
at this function: do_linuxrc()
execve(shell,argv,envp_init); it return -1 ,and errno is 8,this tell that it is "exec format error"
so i can't to exec linuxrc script file.
According to the kernel sources it is calling the kernel_thread(do_linuxrc,...) function and the do_linuxrc function returns a failure on the execve(), with the errno global set to 8, which translates to "exec format error" according to the strerror()/perror() function.
linuxrc's content is :
#!/bin/sh
ls -l
and chmod 0777 linuxrc
The /linuxrc is a trivial (test) shell script. You've tried marking that as executable with the chmod 0777 command.
so i can't know what wrong with me? why initrd.img cant't be load right? but i find :
ret = open("/linuxrc",O_RDONLY,0); ret = success.
If you (patch the kernel?) to simply open the file you don't see any error.
and infomation have : mount root filesystem (ext2);
You think you have an ext2 filesystem mounted on root at this point? (It's not clear how you are getting this info).
so i can't get reason ,please give me help? linux is redhat 6.2 linux kernel is 2.2.12-20
The development environment is a Red Hat 6.2 system and you're using a 2.2.12-20 kernel.
after, i test this ,give me these information: i add modprobe/insmod command in initrd.img, reboot it, this system give me information: " kmod:failed to load /sbin/modprobe -s -k binfmt-0000"
When you try to run a modprobe command in the initrd.img you get a kmod binfmt error.
execve() call do_execve(),do_execve() call request_mode() ,request_mod() call exec_modprobe(),so it's path is right. but i can see this inforamtion ,at boot ,system load script ,aout,elf binfmt. so i can't know greater!!! please give me help !!!
This last bit of typing is utter gibberish. Actually your whole message is basically incomprehensible. However, I've echoed a guess after each fragment of what you've said to see if I could understand the question.
It sounds to me like you are somehow missing some of the necessary binfmt loaders from your kernel. Now there are a couple of options in the 'make config' scripts that allow you to enable or disable a couple of different types of executable (binfmt) loaders. You generally need at least one of them compiled directly into the kernel (so that it can execute a linuxrc and/or an init(8) process).
I don't think it's possible to build a kernel without statically linking one of a.out or ELF. If 'make menuconfig' somehow let you pull that off, it's a bug in the Makefiles and dependencies.
You need one of those.
In addition I've never seen an option to leave out the text/script binfmt loader. That is the loader that handles text files and uses the #!/.../ line to execute most scripts.
However, it would seem that you have somehow managed to do this. I could see it if you had been applying your own patches to the kernel code, or if you were hand editing or bypassing the Makefiles with some of your own.
I suppose English is not your native language (given the distressing incoherence of your message). I suggest you look for a (Chinese?) users group, newsgroup, mailing list or other forum where you can have someone translate your question into English.
Other than that try recompiling your kernel and ensuring that the ELF executable support (under "General Setup") is set to "Y" (NOT "M" and definitely NOT "N").
To quote the help text that is associated with that menu config option:
Saying M or N here is dangerous because some programs on your system might be in ELF format.
It is highly unlikely that you are somehow managing to compile your core shell and other software in a.out format. That actually might be quite useful for embedded systems work --- but the older format and the tools to generate it haven't been used by any general purpose distribution in a few years. The only remaining a.out distribution that I know of is David Parsons' Mastodon (http://www.pell.portland.or.us/~orc/Mastodon).
So, I think you can safely leave out the other binfmt loaders.
BTW: You also MUST have one of the filesystem types statically linked into the kernel. You can't just go through and blindly mark EVERYTHING as modular; it won't work. The initial RAM disk will have to be in some filesystem format (minix, ext2, something). Of course it would be possible to use the ROMfs. This is much different from initrd --- it's read-only and you have to make the filesystem using a genromfs utility AND you'd have to link ROMFS support into your kernel. I don't know of anyone that actually uses ROMFS.
Anyway, I suspect that the reason your shell script isn't working is that the kernel can't load the shell interpreter. The reason it can't load the shell interpreter is probably that your shell is in ELF (executable linking format) and you left the ELF loader out or put it in as a module. Of course the insmod/modprobe programs are also in ELF format --- and kmod (the kernel module loader) requires access to those in order to actually load any modules. (kmod doesn't load modules itself; it spawns a kernel thread, which runs modprobe to do the actual work. You can read /usr/src/linux/kernel/kmod.c to see that.)
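The dependency described here is easy to demonstrate from user space: a #! script runs only if the kernel can also load the interpreter it names. A sketch, with illustrative paths:

```shell
# A valid script: the kernel's script loader reads the #! line,
# then needs a working ELF loader to load /bin/sh itself.
cat > /tmp/okscript <<'EOF'
#!/bin/sh
echo "interpreter loaded fine"
EOF
chmod +x /tmp/okscript
/tmp/okscript

# A script naming a nonexistent interpreter fails at exec time --
# loosely analogous to linuxrc failing when no loader can handle the shell.
cat > /tmp/badscript <<'EOF'
#!/no/such/interpreter
echo "never reached"
EOF
chmod +x /tmp/badscript
/tmp/badscript || echo "exec of badscript failed"
```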
I hope that helps.
AnswerGang: Jim Dennis
From Michael Hudson on Tue, 04 Jul 2000
Hi yall,
First off let me tell you that I am completely new to the Linux world! I have been <Stuck> with Windoze most of my computing life.. I have only recently discoverd this whole new world! So please make you answers as simple as possible to understand.. Thanx in advance!
I have recently installed Linux Mandrake on my K6 Machine. I am running it Dual Boot with Windoze.. I am having some reall problems setting up my modem to actually work..
I think this is solely down to my lack of knowledge towards Linux... Could NE1 give me some advice?
Yours, Michael Hudson.
You're also having "some reall" [sic] problems describing your problem. Read back through your message. Try to pretend that you were getting this from some stranger. Do you really think there is enough detail provided for any mere mortal to divine what your problem could be?
I understand that you're a novice at Linux. However, you could put a little energy into the questions you're going to ask.
What did you try to do? Did you run some program to try to "set up" your modem? What do you mean by "set up"? What kind of modem is it? If you ran some program or command to try to "set up" your modem, WHAT DID IT DO? Did you get an error message? What were you expecting the modem to do? What did it do?
Did you read any manuals or do searches through any Internet web search engines?
Anyway, the problem is probably that you probably have a "winmodem" or a "softmodem" or some other useless piece of junk that isn't really a modem. If you go back to the Linux Gazette (which you should have read in order to get this e-mail address) and you peruse the FAQ and maybe search on the word "modem" you'll find about 100 other messages where I've talked about modems, Linux, using modems under Linux, testing to see if your modem is supported by Linux, and especially about why "winmodems" are such losers.
AnswerGang: Jim Dennis, Heather Stern
From Allen Tate on Thu, 27 Jul 2000
Anyone out there know anything about making the cooling fan run on a laptop running Linux? Seems I read something somewhere about running a module that made the fan run. Any advice is appreciated.
[Jim] What makes you think you need a special module or driver to control your system's fan?
On any reasonable equipment the fan should run when it is needed without any software support required. The hardware should include its own thermostat which should operate completely independently of the OS.
(Actually there's a good argument that we should be producing better hardware that runs cooler, with lower power consumption, so that fans would be unnecessary for most laptops and general purpose computing devices. That's what Transmeta --- the company for which Linus works --- has recently introduced to the PC market).
Anyway, I don't know of any module that "makes the fan run." or anything like that. The closest I can think of would be the ACPI kernel features (ACPI is an advanced and somewhat complicated alternative to APM --- advanced power management). That would require that you get a daemon to call those kernel functions from user space. Under Debian you'd just use the command 'apt-get install acpid' to fetch and install that daemon, under other Linux distributions you'd have to hunt for it on your CDs, and/or look for it on their FTP contrib sites, etc).
There is also a package called "LM_Sensors" which allows one to monitor some values such as CPU temperature, fan speed, power supply voltage, etc. There are a number of motherboards which use an LM78 or similar chip and sensor set to allow software access to these sorts of metrics. Under Debian you could get the sources to this package using 'apt-get source lm-sensors' which will fetch the original package sources and the Debian maintainer's patches and unpack them under your current directory. I usually do that sort of thing from my /usr/src/debian directory.
LM_Sensors consists of a kernel patch (you must recompile your kernel to add these features) and some user space utilities for querying the kernel driver.
I highly recommend LM_Sensors to sysadmins who are maintaining servers at co-located facilities and in server closets. Those are places where having this information available via software can save a great deal of downtime and damage. (The audible alarms that might be in your case to warn of fan failures and overheating aren't very useful when there's no one there to hear them. Also, the typical machine room has too much fan and air conditioning noise for anyone to hear the failure of one system.)
However, I don't know if any laptops support the LM78 or similar sensor features. So that's probably not useful to you.
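For machines that do have a supported sensor chip, the values LM_Sensors exposes can be picked apart with ordinary shell tools. Here is a minimal sketch that parses a sample of 'sensors'-style output; the sample text is an illustration only, since the real format varies by chip and driver version:

```shell
# Parse CPU temperature and fan speed out of 'sensors'-style output.
# The sample below is illustrative; real output varies by chip/driver.
sample='temp1:    +47.0 C  (limit = +60 C)
fan1:     4500 RPM  (min = 3000 RPM)'

# Split on runs of colons/spaces; field 2 is the reading on each line.
temp=$(printf '%s\n' "$sample" | awk -F'[: ]+' '/^temp1/ {print $2}')
fan=$(printf '%s\n' "$sample" | awk -F'[: ]+' '/^fan1/ {print $2}')
echo "CPU temperature: $temp C, fan speed: $fan RPM"
```

On a real system you would replace the here-string with the output of the 'sensors' utility itself, and perhaps feed the result to a monitoring script or pager.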
... he replied ...
Thanks for the advice. I'll look into it.
[Heather] As someone who works a lot with laptops (imagine that, since I work for a linux laptops company ... though the relation was really the other way around) I'd like to add a couple of brief points:
- There really are some special utilities for some laptops out there. At minimum Thinkpads and Toshibas, two major brands famous for being very nice systems, but somewhat weird. A colleague of mine recently released source for a certain style of hibernation partitions. Most of these sorts of tools are not useful to machines with a different BIOS.
- If the fan comes on, it's because the system thinks it's too hot and needs it. If you're personally feeling a bit toasty and it's looking like it's 112 in the shade outside, do you turn OFF the air conditioning in your house? Nope, bad idea. Some poor woman in the southwest turned her fans off in such heat because she feared it would push up her electric bill; she died. Basically, if a system that is getting cooked doesn't turn its fan on, the thermal sensor or the motor may be broken, and it should be looked at by a technician before you get a thermal failure. Now if your BIOS had a feature to spin the fan faster than the system really requires whenever AC power is on... that'd be kinda cool.
AnswerGang: Jim Dennis, Mike Orr
From Todd Tredeau on Sat, 01 Jul 2000
I am trying to understand MX records, and the role they play in relation to a backup queue server. I have two mail servers, mx1.wisernet.com and mx2.wisernet.com. I also have a third emergency backup server, to be manually added if I need it.
If the primary mail store is on mx1 then should the priority be higher or lower?
like:
mx1.wisernet.com 10 (primary)
mx2.wisernet.com 20 (backup)
your help would be greatly appreciated, I have all sorts of mail problems....Actually my antispam software was working so well at one point, I couldn't send messages from mx1 to mx2 and so on... got that straightened out though. Nice web site.....
[ Thanks! -- Heather ]
Todd
The MX record with the lowest value will have the highest priority. Think of it as the "distance to user's mailboxes" and consider that the various MTAs (mail transport agents) which are relaying a piece of mail are each seeking to get the mail closer to its final destination.
Of course the host with the lowest MX value will either have to accept the mail itself, or there will have to be an accessible route to it via an A record. (Note: CNAMEs are never supposed to be used for mail exchangers.) Normally we have MX and A (address) records for any host that is supposed to receive mail.
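As a concrete sketch of the scheme above, the relevant zone file entries might look like this (the priorities match the question; the addresses are placeholders from the documentation range):

```
; zone file fragment for wisernet.com (addresses are illustrative)
wisernet.com.      IN  MX  10 mx1.wisernet.com.   ; primary -- lowest value wins
wisernet.com.      IN  MX  20 mx2.wisernet.com.   ; backup
mx1.wisernet.com.  IN  A   192.0.2.10
mx2.wisernet.com.  IN  A   192.0.2.20
```

The emergency third server would simply be another MX line (say, priority 30) added to the zone when needed.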
In general there is nothing special to setting up backup MX relationships. It used to be that you could simply add the appropriate MX records to your domain zones. These days there is one extra step.
In recent years it has become almost mandatory for sites to limit their mail relaying. Before the advent of widespread spamming it was common to allow "promiscuous relaying." That basically meant that my mail servers would attempt to forward/relay/deliver any piece of e-mail that landed on them, regardless of where it was from and regardless of who it was to. That was basically a fault tolerance feature. If a bit of e-mail got mis-routed and landed on my server --- the server would just try to get it delivered anyway. That was common courtesy in a co-operative Internet.
However, the spammers ruined all of that forever. They would dump one item of e-mail, generally with a couple thousand recipient addresses, onto any open relay. This allows the spammer to use a small bit of their own bandwidth (as provided by a 14.4 or 28.8 modem) while leeching much more bandwidth (a few thousand times their "investment") off of the rest of the Internet and the host of the open relay in particular.
So now we also have to configure the MTA on our backup MX hosts to accept mail to our domain. (Obviously that's no problem if we're talking about additional hosts within our domain --- they presumably are already configured to accept/relay mail for us.) It is also true of cases where we want to set up mutual backup MX services for and with other domains. (Thus if the connection(s) into our domain is/are down, or if some regional outage prevents some customers from reaching us directly but still allows connections to one of our MX partners, then the mail works its way towards us. The correspondents feed their mail up to any available MX server, so the mail doesn't languish on their systems.)
That's the idea, anyway. I've had some people question whether configuring backup MX services is still appropriate in the modern Internet. Personally I think it is. However, there are valid arguments on both sides of this issue.
[The way I heard it, if the primary mail server is down, a secondary server's job is to accept the message and keep trying to forward it to the primary server, with a longer-than-usual retry timeout. This prevents the mail from bouncing needlessly if the primary server is down for a while. Note that the secondary server cannot deliver the message itself, since the recipient is not a local user on that machine. --Mike]
AnswerGang: Jim Dennis
From Asghar Nafarieh on Tue, 25 Jul 2000
Hi,
I hope you can help me with this problem. After booting my Linux server (Red Hat 6.0), it goes through booting and comes back with the above prompts and hangs there. I have had this machine running for 6 months and this is the first time this has happened. I have a lot of data in there. I tried to use the rescue disk but I don't know how to get to the hard disk to check the problems. I appreciate your help.
Thanks, -Asghar
This error message basically means that the kernel was unable to find a console on which it could run init.
That suggests that it can't find your /dev directory (on the root filesystem) or that it can't find the appropriate /dev/tty* and /dev/console device nodes thereunder.
This is most commonly caused by one of two problems:
- Perhaps you removed or damaged the /dev/* nodes that the kernel needs.
- Perhaps the kernel is mounting the wrong filesystem on the root directory (a filesystem which doesn't HAVE a /dev directory).
So, here's how you use a rescue diskette to troubleshoot this sort of problem:
- Boot from the rescue diskette.
- Mount your root filesystem. Use a command like:
mount /dev/hda3 /mnt
- Look for a .../dev/console device thereunder. Use a command like:
ls -l /mnt/dev/console
It should look something like:

crw-r--r-- 1 root root 5, 1 Jul 21 14:50 /dev/console
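You can also read the major/minor numbers of a device node with the 'stat' command (GNU coreutils). /dev/null, which is major 1, minor 3 on every Linux system, makes a safe example to practice on:

```shell
# Print the name and the major/minor device numbers of a device node.
# %t and %T are the major and minor numbers (in hex).
stat -c '%n: major %t minor %T' /dev/null
```

If /mnt/dev/console exists, 'stat -c "%t %T" /mnt/dev/console' should report major 5, minor 1 to match the listing above.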
If it's there then you want to try booting from your hard drive again. This time, at the LILO prompt you'd interrupt the boot process and pass the kernel some options.
When you see LILO press the [CapsLock] or the [ScrollLock] key. Then hit the [Tab] key. That should give you a list of available boot labels ("linux" and "dos" for example). You'd type something like 'linux root=/dev/hda3 init=/bin/sh' (Be sure to refer to the same device, hda3, or whatever, as you did when mounting your root fs under the rescue diskette).
In this case I've specified the kernel option "init=/bin/sh" just for further troubleshooting. If that comes up O.K. you can then type 'exec /sbin/init 6' to force the system to shutdown and reboot under the normal init.
I realize, from the tone of your question, that this may all be a bit confusing to you. You don't mention what you've done to the system between the time that it was working and the time that this error started occurring. I can guess at a few possibilities, but I'd only be guessing.
For example: if you or someone else with administrative access to that system had built a new kernel, it might have been built with a faulty "rootfs" flag. A Linux kernel has a pointer to the default root filesystem device and partition compiled into it. If the kernel isn't passed a root= parameter, then this compiled-in pointer specifies which device the kernel will try to find and which partition it will try to mount as root. Normally the LILO boot loader has a root= directive in it. That is usually in the "global" section and is used for any "stanza" which doesn't over-ride it. When we are typing in root= directives at the LILO prompt we are over-riding both the kernel's default and LILO's stored option.
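For reference, here is a minimal lilo.conf sketch showing where that global root= directive sits (device names and paths are illustrative, not taken from the querent's system):

```
# /etc/lilo.conf (fragment; device names are illustrative)
boot=/dev/hda
prompt
timeout=50
root=/dev/hda3        # global default, used by any stanza without its own

image=/boot/vmlinuz
        label=linux
        read-only
```

A root= line inside an image stanza would override the global one, and anything typed at the LILO prompt overrides both.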
As you can infer from the foregoing, the Linux kernel mounts a root filesystem and then it opens a console device. That done, it prints a lot of messages to the screen, and runs the init program. It looks in several places like /sbin, /etc, and /bin, for a program named 'init'; then it looks for /bin/sh as a failsafe. Failing all those, the kernel will print an error message like: "No init found. Try passing init= option to kernel."
(You can read the kernel source code for these actions in /usr/src/linux/init/main.c).
Note that I haven't addressed the issue of whether there is a Linux filesystem, recognized by your kernel, available. If you had no Linux filesystem there, you'd be getting an error more like: "VFS Kernel Panic: Unable to mount root" or "VFS: Cannot open root device" (depending on whether the filesystem/partition was nonexistent or corrupt, or whether the device couldn't even be found).
I've also left out any discussion of the initrd (initial RAM disk). Red Hat does tend to use these, though they are not necessary for most systems. Here's a little bit about how those work:
If you are using an initrd, then the loader (LILO) must load the kernel, and the initrd into memory. It then passes the kernel an option. The kernel (with initrd support enabled) will then allocate memory for a RAM disk, and decompress the initrd image into that memory. Normally the initrd will contain a compressed filesystem image. (It's actually possible for it to contain other sorts of data, but that's not a feature that I've ever heard of anyone using).
Once the initrd (RAMdisk) has been initialized and populated, the kernel temporarily mounts that as the root filesystem and attempts to execute a command called /linuxrc. After that command exits, then the regular root filesytem is mounted, and the normal init process is run.
Note that this is basically a hook between the kernel's initialization and the normal root fileystem mount and init process. Often the initrd will have no effect on the regular boot process. However the most common case is for the initrd to contain some modular device drivers, and for the /linuxrc to load them. This is intended to allow the kernel to access devices for which it only has modular (rather than compiled in) drivers.
(Usually I suggest that users learn how to compile their own kernel, statically including their main disk interface and network adapter drivers. That obviates the need for an initrd, making the whole system a tiny bit easier to maintain and troubleshoot).
I mention all of this in your case because it's possible that your kernel is fine and your root filesystem is fine, but that your initrd has been corrupted and is setting the rootfs flag to some invalid value.
For more details about this initrd subsystem you can read /usr/src/linux/Documentation/initrd.txt
Of course I should also take this opportunity to give the standard parental lecture about the need to make and test backups. However, I don't have a really good resource to which I can refer you. I don't know of a well-written "System Recovery HOWTO" and I should take it upon myself to write one. (The third chapter of my book on system administration is a start --- but it doesn't go down to step-by-step details).
Let's just say this for now:
If you end up re-installing here are some tips to make recovery from these sorts of disasters much easier:
First, during installation, create at least three or four partitions. I like using lots of partitions. You want to have partitions for root (/), system (/usr), and data (/home) at least.
I like to have an alternative root filesystem (/mnt/altroot) (which is normally not mounted) and a /var partition. Then I may add other partitions based on the needs of a specific machine. I usually create /tmp and /usr/local partitions, and sometimes I add /var/spool and/or /var/spool/news partitions for some mail and news servers.
One of the reasons for this partitioning is to facilitate system and data recovery. Most problems will only affect one of your filesystems. For example, if your root filesystem is damaged (as it appears has happened in your case) then you can just reformat and restore that without worrying about your data (which should mostly be stored on /home and/or /usr/local).
If you have a separate /boot partition it can be mounted read-only most of the time (just remounted in read-write mode when you are installing a new kernel). That can also work around limitations of older BIOS' and versions of LILO with regards to the infamous 1024 cylinder limit. If you keep an extra "alternative root" filesystem you can maintain a "mirror" (replication of) the root filesystem on that, with copies of all the system configuration data (from under /etc). Then when your root fs is damaged you can simply boot from the altroot using the root= kernel/LILO option while booting. (You could also use the root= directive when booting from a floppy disk or bootable rescue CD).
You can copy all of your root fs to the alternative root with a sequence of commands something like:
mount /dev/hdc8 /mnt/altroot
cp -ax / /mnt/altroot
umount /mnt/altroot
... assuming that you have already created a /mnt/altroot mountpoint (using mkdir) and that you have a partition like /dev/hdc8 (the fourth logical partition on the master drive of the secondary IDE controller) with a valid filesystem thereon. Once you create an altroot partition, repeat this copy whenever you make significant changes to your root filesystem, so that the mirror stays current.
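You can rehearse the mirror step without any spare partition at all: the same 'cp -ax' works between two ordinary directories. This sketch simulates the root-to-altroot copy in temporary directories (on a real system the source would be / and the destination would be the mounted altroot partition):

```shell
# Simulate the root -> altroot mirror with two scratch directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo "hostname=example" > "$src/etc/hostname"

# -a preserves permissions/ownership/links; -x stays on one filesystem.
cp -ax "$src/." "$dst/"
ls "$dst/etc"
```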
I suggest keeping /usr as a separate filesystem for two reasons. You can keep it mounted read-only most of the time (remounting it in read-write mode during major system upgrades and while installing new packages). That makes it more difficult for it to get damaged, and might even protect your system from some of the sloppier "script kiddy" exploits (it's not a real security feature; a better exploit will remount the filesystem read-write before installing a rootkit).
Of course keeping /home as a separate partition should be fairly obvious. If you're using your system in a sane fashion, most of your data should be under /home. That means that you can focus on backing that system up. The other filesystems should change somewhat less often, and you can be assured that the programs, libraries and other files stored on them are recoverable (from your installation CDs, and the Internet at large) or are expendable (temporary files, caches, logs, etc).
Under Linux there are many different ways to perform a backup. In general you can use 'tar', 'cpio' and/or the 'dump' commands for individual systems, or you can use the free AMANDA package for setting up a networked client/server backup infrastructure.
Each has its advantages and disadvantages. You could also get BRU (the backup and recovery utility) which is probably the most popular among several commercial Linux backup packages.
Of course you need more than software to do backups. You need to have places to store these backups (media) and a device to handle the media. Some of your choices are tape drives, CD-R or CDRW, magneto optical or any of various types of removable storage ranging from floppies through LS120, Zip, Jaz, etc.
Most systems sold these days don't include any backup devices. With common disk drive capacities of several gigabytes, we can't count 1.44Mb floppies as a reasonable backup device. (Even in the days of 100 and 200 Mb hard drives, no one was using floppies to do full system backups). Managing a thousand or more floppies per hard drive is absurd.
Even the systems that sell with LS120 or Zip(tm) drives aren't really meeting the backup/recovery needs of an average user. It wasn't too bad for one and two gigabyte systems (10 to 20 disks) but it's not reasonable for the 6 to 18 gigabyte hard drives we're seeing now (60 to 200 disks). Even CD-R or CDRW are barely adequate for backing up individual systems (at 650Mb each you need about a dozen discs for a typical drive, and I'd need almost 30 of them to backup my laptop).
So the only reasonable way to do full system backups on most modern PCs is to use tape drives. A 4mm DDS-3 tape can store 12 Gb uncompressed. DLT tape drive capacities range from 20 to 70 Gb. There are other drives ranging from 250Mb (FT) through over 100 Gb, and most are supported by Linux drivers.
The biggest problem with tape drives is that they are expensive. A good tape drive costs as much as a cheap PC.
Let's say you bought a 4mm DAT drive (and a SCSI controller to go with it). You could do a backup of your whole system with a command like:
tar cSlvf /dev/st0 / /usr /home ...
... Note: here I'm not using compression, and I am using the "S" (--sparse: note that's a capital "S") and "l" (--one-file-system, a lower case "ell") options to 'tar'. I'm assuming the first (usually the only) tape drive, which is called /dev/st0 (or /dev/nst0 if you want to prevent the system from rewinding the tape after the access). I'm listing the top level directory of each locally mounted filesystem (the mount points). Using this technique avoids inadvertently backing up /proc (a virtual filesystem) and any network mounted or other unusual filesystems. Obviously you'd only list those filesystems that made sense for your system (read your /etc/fstab for a list).
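If you don't have a tape drive handy, you can try a miniature version of the same command against a file: archive a small directory tree, then list what went in. (One caveat: in recent GNU tar the short option 'l' was reassigned to --check-links, so this sketch spells out --one-file-system instead; the directory names are stand-ins.)

```shell
# Miniature full-system backup: a file stands in for /dev/st0.
work=$(mktemp -d); cd "$work"
mkdir -p home usr
echo data > home/file.txt

# S = --sparse; --one-file-system spelled out for modern GNU tar.
tar cSf backup.tar --one-file-system ./home ./usr

# Verify the archive by listing its table of contents.
tar tf backup.tar
```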
I could add a "z" flag to force 'tar' to compress the data, however that usually causes latency issues (the data doesn't "stream" or flow smoothly to the tape drive). Since the tape must be moving under the read-write head at a constant velocity, if the data doesn't stream you'll get "shoeshining." The most common causes of this are compression and networking. So, in those cases you'd use a command more like:
tar cSlvf - / /usr /home ... | buffer -o /dev/st0
(Here, I've changed 'tar' to write its output into the pipe --- to stdout technically --- and added the buffer command, which uses a bunch of shared memory and a pair of read/write processes to "smooth out" the data flow).
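The 'buffer' program isn't installed everywhere; if you just want to see the pipeline shape work end to end, 'dd' with a large block size can stand in for it (writing to a file in place of /dev/st0). This is a rehearsal sketch, not a tuning recommendation:

```shell
# Pipe a tar stream through dd (a stand-in for 'buffer') into a file.
work=$(mktemp -d); cd "$work"
mkdir home; echo data > home/f

tar cSf - ./home | dd of=backup.tar bs=64k 2>/dev/null

# Confirm the archive survived the trip through the pipe.
tar tf backup.tar
```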
Hint: You should write down the exact command you used to write your data on any tapes that you've created. This allows any good sysadmin to figure out what command is required to restore the data.
To restore a system using such a tape you'd follow the following procedure:
- Boot from a rescue diskette or CD (or onto your altroot)
- Mount up a temporary filesystem using a command like: mount /dev/hda5 /tmp (or make sure your RAM disk has a few meg of free space).
- Restore a table of contents (index) of your tar file to /tmp/files using a command like: tar tf /dev/st0 > /tmp/files
- Restore your /etc/passwd and /etc/group files from the tape. Overwrite those in your rescue system's (RAM disk based) /etc directory.
NOTE: This must be done in order to ensure that all the OTHER files that you restore will have their proper ownership and permissions. Otherwise you are quite likely to end up with all the files on the system owned by the root user (depending on the version of 'tar'). Trust me, you need to do this. This may be a bit time consuming, since the tar command will go through the entire tape to find those two files. (It does make more sense in practice to do two different backups to your tapes: one of just the root filesystem, or even just the /etc directory, and the other containing the rest. However, that is more complicated to understand and explain, as you're dealing with "multi-member" tapes and have to know how to use the 'mt' command with the nst0 device node to skip tape "members" (files). This method will work, albeit slowly.)

To do this selective restore use a command like:

tar xf /dev/st0 ./etc/passwd ./etc/group

Note: when you did the backup as I described above, GNU tar will have prepended each filename with "./"; if you weren't using GNU tar you should modify the command I listed to create the backup by inserting a 'cd /' command before it, and changing each directory/mountpoint reference to ./ ./usr, etc. Of course, if you weren't using GNU tar then the S and l options might not work anyway; those are GNU extensions.

- For each corrupted/damaged filesystem:
- backup/copy any accessible files that are newer than your last backup.
- reformat using the 'mkfs' command. Use the -c option to check for bad blocks.
- mount that filesystem under /mnt in the same (relative) place where it would go under normal operations. For example a filesystem that would normally be located under / would be under /mnt, one that was usually under /usr would go under /mnt/usr, and one that was under /usr/local would now be mounted under /mnt/usr/local (see your old /etc/fstab for details; restore it to /tmp if necessary).
Note: It may make sense to mount any undamaged filesystems read-only as part of this process, so that the whole directory tree will appear more like you expect as you're working, while helping you avoid accidentally over-writing or damaging your (previously) undamaged filesystems. Obviously this is simpler if you're restoring to a whole new disk or system --- and are thus restoring EVERYTHING.

- restore the files that were on that filesystem.
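It's worth rehearsing the selective passwd/group restore in a scratch directory before you ever need it against a real tape. This sketch fakes a small /etc, backs it up to a file (standing in for /dev/st0), and pulls back just the two files:

```shell
# Rehearse the selective restore: back up a fake /etc, extract two files.
work=$(mktemp -d); cd "$work"
mkdir -p sys/etc
echo 'root:x:0:0::/root:/bin/sh' > sys/etc/passwd
echo 'root:x:0:'                 > sys/etc/group

# GNU tar stores the names with a leading "./" when invoked this way.
( cd sys && tar cf ../backup.tar ./etc )

# Extract only passwd and group into a separate "restore" tree.
mkdir restore
( cd restore && tar xf ../backup.tar ./etc/passwd ./etc/group )
ls restore/etc
```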
If you are restoring a whole system (there were no undamaged filesystems) then you can simply use a command sequence like:
cd /mnt && tar xpvf /dev/st0
(after you've mounted up all the filesystems under /mnt in the correct relationship).
If you need to restore individual filesystems you'd still cd to /mnt, then you'd issue a command like:
tar xpvf /dev/st0 ./home ./var ...
where ./home ./var ... are the list of top level
directories below which you want to restore your files.
If you just want to restore a small list of files (you can't use "*.txt" or other wildcard patterns on the 'tar' command line) then the best method is to use a "take list." Take the "index" (table of contents file) that you generated back in step 3 and either edit it or "grep" it for the list of files that you want. Filter out or delete the names of all the files that you don't want. Then use a command like:
tar xpvTf /tmp/takelist /dev/st0 ./home ./var ...
... assuming that you stored the list of files you want in /tmp/takelist.
If you know of a regular expression that uniquely describes the files you want to restore you can use a command like:
grep "^\./home/docs/.*\.txt" /tmp/filelist | tar xpvTf - /dev/st0 ./home ./var ...
... to get them without having to create a "takelist" file. Here we are forcing 'tar' to "take" its list of files from "stdin" (the command pipeline in this case).
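Here is the whole index-and-takelist cycle, rehearsed against a file archive instead of a tape (all the names are stand-ins): create an archive, save its table of contents, grep the index for the files you want, and feed that list to tar's T option:

```shell
# Index-and-takelist restore rehearsal (a file stands in for /dev/st0).
work=$(mktemp -d); cd "$work"
mkdir -p src/home/docs
echo notes > src/home/docs/notes.txt
echo other > src/home/docs/photo.jpg
( cd src && tar cf ../backup.tar ./home )

# Step 3's "index": the table of contents of the archive.
tar tf backup.tar > filelist

# Grep the index for just the .txt files -- this is the "takelist."
grep '\.txt$' filelist > takelist

# Restore only the files named in the takelist.
mkdir restore
( cd restore && tar xTf ../takelist ../backup.tar )
```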
I realize that all of this seems complicated. However, that's about as easy as I can make it for people using the stock Linux tools. If that's too complicated, then you might want to consider trying something like BRU (which has menu and GUI screens in addition to its command line utilities). Personally I think those are really just as complicated, but some of that complication is hidden from the common cases and only comes out to bite you during moments of extreme stress (like when your system is unusable while you're trying to restore your root filesystem).
BTW: you don't have to buy a tape drive for every computer on your network. Linux and other UNIX systems can easily share tape drives using their standard tools. For example you can use 'ssh' (or 'rsh' if you have NO security requirements) and the 'buffer' program to redirect any 'tar', 'cpio' or 'dump' backup (or restore) to a tape drive on a remote system.
Then you can use commands like:
tar cSlvf - / /usr /home ... | ssh -l bakoper tapehost buffer -o /dev/st0
... to do your backups. (In this case I'm using ssh to access a "backup operator" account (bakoper) on the host named "tapehost", and I'm directing my tar output to a 'buffer' process on that remote system). Obviously there's more to it than that. You have to co-ordinate all the access to those tapes --- since it wouldn't do to have each machine over-writing one tape. But that's what professional sysadmins are for. They can write the scripts and handle all the scheduling, tape changing, etc.
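If you have no tape host to practice against, you can see the pipeline work end to end by substituting a local command for the ssh leg. Here 'sh -c "cat > ..."' stands in for "ssh -l bakoper tapehost buffer -o /dev/st0" (all the remote names above are examples in any case):

```shell
# Rehearse the remote-backup pipeline with a local stand-in for ssh.
work=$(mktemp -d); cd "$work"
mkdir home; echo data > home/f

# The right-hand command receives the tar stream, just as the remote
# 'buffer' process would.
tar cSf - ./home | sh -c 'cat > remote-backup.tar'

# Confirm the received stream is a valid archive.
tar tf remote-backup.tar
```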
... he replied ...
Jim,
The file /dev/console was missing, as well as /var/log/*. I think my server was compromised by a DNS attack. I was running an old version of bind. I noticed there is a directory ADMROCKS in /var/named, which implies a bind overflow. I upgraded my OS and things are back to normal.
Thanks for the tips,
-Asghar
AnswerGang: Ben Okopnik, Jim Dennis
From erwin on Fri, 30 Jun 2000
If I want to install a package from source, I run the commands 'tar -XXX foo.tar.gz', "make", and then "make install".... What do I have to do if I want to uninstall that package?
[Ben] First, be a bit careful about syntax when using "tar"; for historic reasons, the '-' is not just a syntax "preceder" but a part of the syntax itself, signifying piped input. "tar xzf foo.tar.gz" would be the correct way to "untar and defeather" the package; "tar xvzf foo.tar.gz" would print some useful info while doing so.
As to uninstalling the package - this is where one of the disadvantages of *.tar.gz packages shows up: since most of them do not follow any kind of a filesystem standard or a set of install/uninstall rules (unless you're talking about packages from a standard Linux distrib), the process can range from "simple" to "I'd rather have a root canal".
Since you didn't say that you're using a package from, e.g., Slackware, which I believe has a specific uninstall procedure, I'm going to assume the worst case - that you're talking about a random tarball pulled off the Net somewhere, meaning that it could be anything at all. So, here we go...
Easy version: type "make uninstall". Some software authors have enough mercy in their hearts on people like me and you to include an uninstall routine in their makefile. If it works, burn a Windows CD as an offering and be happy.
More complex version: If the above process comes back with an error ("No rule to make target `uninstall'. Stop."), the next step is to examine the makefile itself. This can be an ugly, confusing, frustrating process if you're not used to reading makefiles - but since we're only looking for 'targets' (things like "all:", "install:", "clean:", and "uninstall:"), here's a shortcut -
grep : makefile
This will print all the target names contained in the makefile, possibly along with a bit of unrelated junk. The line you're looking for may be named something like "remove:", "purge:", "expunge:", or a number of other things - but what that target should have, as the listed action (run "make -n <target_name>" to see what commands would be executed by that option), is the deletion of everything done by the "install:" target. If you find one that fits, rerun "make" with that switch.
"Crawling on broken glass" version: if you can't find anything like that, then you have to remove everything manually. In a number of cases, I've found that the least painful way to do it is by 1) running "make -n install > uninstall" and examining the created file to see exactly what is done by that target, 2) deleting all the compilation statements ("gcc [...]" or "g++ [...]" and the like) and reversing the action of all the "mkdir", "cp", and "install" statements (i.e., "rm -rf" the created directories and "rm" the individual files that fall outside that hierarchy), and 3) running what remains as a shell script to execute those actions (". uninstall").
Of course, if the "install" target is simple enough - say, copying one or two files into /usr/bin - just delete those.
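The "make -n" preview technique is easy to try on a toy makefile before you trust it on a real one. This sketch writes a two-target Makefile (the install paths are purely illustrative), lists its targets with grep, and previews the uninstall commands without running them:

```shell
# Preview makefile targets and their actions without executing them.
work=$(mktemp -d); cd "$work"

# A toy Makefile with install/uninstall targets (recipe lines need tabs,
# hence printf with \t).
printf 'install:\n\tcp foo /usr/local/bin/foo\nuninstall:\n\trm -f /usr/local/bin/foo\n' > Makefile

grep : Makefile        # list the targets
make -n uninstall      # show what 'make uninstall' WOULD do
```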
On a more general note, you should _always_ examine any makefile that you're about to run (with at least a cursory glance to see if an "uninstall" target exists): since some programs require installation by the root user, a stray "rm -rf" could cause you a lot of grief. This requires learning to read makefiles - but, in my opinion, this is a rather useful skill anyway. Using Midnight Commander to view the makefiles can be very helpful in this, since it highlights the syntax, which visually breaks up the file into more easily readable units.
... he replied ...
Thank you for the information and the correction; I had misinterpreted the "tar" syntax with the preceder ("-") and without it. Could you explain the main difference between the commands "tar -zxvf" and "tar zxvf"? In many Linux articles (Linux HOWTOs, etc.) and other Unix-clone articles I have found the "tar" command sometimes with the preceder and sometimes without; which one is correct?
[Jim] Either case is fine (with GNU tar). The - flag is more portable.
[Ben] If you examine the stated syntax carefully, you will find that both are correct: as is usual with Linux, There's More Than One Way To Do It. The dash ('-') in "tar" syntax (as in a number of other utilities) indicates "piped" input. Here are two versions of a command line that performs the same operation:
tar xvzf foo.tgz

gzip -dc foo.tgz | tar xv -
The differences are the following:
- In the first case, "gzip" is invoked by "tar", via the "z" switch; in the second case, it is used explicitly. As I understand it, "tar" did not originally have this capability - this may explain why some folks would use the second version (i.e., a habit from previous usage). As well, I believe that a number of users are unaware of this "built-in decompression" in "tar" - and a name like "foo.tar.gz" seems to just beg for two tools to process it...<grin>
- The 'f' switch precedes the name of the file that "tar" should process. In the second case, since the input to "tar" is piped from the output of "gzip", '-' is substituted for 'f' to indicate this. The 'z' switch is also eliminated, since the decompression is done explicitly by "gzip".
For LOTS of further info (prepare to spend an entire evening or so), read the "tar" man page.
[Jim] Ben, I think Erwin was asking about the difference between 'tar -xzf' and 'tar xzf' (with and without the conventional "-" options prefix). Erwin has repeatedly referred to a "preceder."
Ben's answer is correct so far as it goes. If the "-" is used as a filename (in a place where tar's argument parser requires a filename) it can refer to the "standard input" and/or the "standard output" file descriptors.
However, this doesn't seem to be what the question was about.
Traditionally UNIX has used the "-" prefix to indicate that an argument was a set of "switches" or "options." If you think of an analogy between the UNIX command line and natural English sentences the usual syntax of a UNIX command is:
verb -adverbs objects
... The "options" affect HOW the command operates. All other arguments are taken as "nouns" (usually filenames) ON WHICH the command operations.
However, this is only a convention. For example the dd command doesn't normally take "options" with a dash prefix. Thus we see commands like:
dd if=/dev/zero of=/dev/null bs=12k
In the case of 'tar' the options were traditionally prefixed with a dash. However the 'tar' command required that the options appear prior to any other arguments. Thus the prefix is redundant on the first argument. Thus:
tar xvf ...
... is not ambiguous.
Actually it should be noted that many versions of the tar command require that the first option be one of: c, x, t, r, or (GNU) d --- that it specifies the mode in which tar is operating: (c)reating, e(x)tracting, listing a (t)able of contents, appending (r), or (d)iffing (comparing contents of an archive to corresponding files). Thus you might find that the command 'tar vxf foo.tar' gives an error message for some versions of 'tar'.
Many versions of 'tar' still require the - prefix. However, the GNU version of 'tar' (which is used by all mainstream general-purpose Linux distributions) is reasonably permissive. It will allow the dash but does not require it (for the first argument), and it will parse all of its command line to find the command mode.
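A quick way to see that permissiveness with GNU tar (a sketch; the file names are invented):

```shell
echo "sample data" > notes.txt

tar -cf dashed.tar notes.txt      # traditional form, with the dash
tar cf undashed.tar notes.txt     # old-style bundled form, no dash

# Both produce working archives containing the same member
tar tf dashed.tar
tar tf undashed.tar
```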
Thus we can use a command line like:
tar vf foo.tar * -c
... under GNU tar. Even though the -c is at the end of the command line. (Note that after the first argument any other options must be prefixed with a "dash" to disambiguate them from file names).
Of course this raises the question: what if you want to use a filename of "-" or one that starts with a "dash."
This is a classic UNIX FAQ. Usually it shows up on mailing lists and in the comp.unix.questions and/or comp.unix.admin newsgroups as: "How do I remove a file named -fr?"
The answer, of course, is to use the "./" prefix. Since any filename with no explicit path is "in the current directory", and the current directory is also known as ".", ANY file in the current directory can also be referred to with a preceding "./".
Personally I recommend that users avoid starting any file specification with a globbing wild card (* or ?). Any time you want to use "*.c" you should probably use: "./*.c" That will be safer since any filenames that do start with a "-" character will not be misinterpreted as command options (switches).
I've frequently seen people suggest "--" as an answer to this classic FAQ. My objection to this approach is that it won't always work. The GNU 'rm' command, and many of the other GNU commands, and some other implementations of some other commands will recognize the "--" option as a terminator for all "options" (switches). However, some versions of 'rm' and other commands might not.
It is generally safer to use ./ to prefix files in the current directory. That MUST work because it relies on the way that all versions of UNIX have handled directory and file names throughout UNIX' 30 year history.
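The "./" trick is easy to demonstrate with the classic problem file:

```shell
touch ./-fr      # create a file literally named "-fr"
rm ./-fr         # the ./ prefix forces rm to treat it as a filename, not options
```

(A bare "rm -fr" with no other arguments would simply be parsed as options and remove nothing.)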
Note that there are a number of commands which take a file name of "-" as a reference to the "standard input" and/or "standard output" file descriptors. It is also possible to use /dev/fd/1 (/proc/self/fd/1) or /dev/fd/0 (/proc/self/fd/0) to access these. (On most Linux systems /dev/fd is a symlink to /proc/self/fd/; on many other UNIX systems /dev/fd is a directory containing a set of special device nodes which act in a way that is similar to /dev/tty).
Getting back to tar, here's an example where we use dashes for BOTH input and output file descriptors:
find . -not -type d .... | tar -czTf - - | ssh somehost buffer -o /dev/nst0
... Here we use a find command to find files (not directories) and we feed that list of filenames into a tar process. The T (capital T) option on GNU tar takes a filename with a list of files in it. Here we use our first dash, so the list of files is read from standard input. We also specify the -f option, which forces tar to write to a file as named by the corresponding argument. In this case we have used "dash" - as the argument for the -f option, so the tar file is written to standard output, which we are piping into a command that feeds it into Lee McLoughlin's 'buffer' filter, which does buffering and sends a nice steady stream of data to our SCSI tape drive (in non-rewinding mode).
Note that most modern versions of GNU tar are compiled to use stdout by default. It used to be that most versions of tar would write to the default system tape drive if you didn't specify any -f option. That seemed reasonable (tar was originally written to be the "(t)ape (ar)chiver", after all). However it caused problems, particularly on occasions when novice users ran the command on systems with no tape drive.
One of the "in" jokes among sysadmins is to ask how many 100Mb /dev/rmt0 files you've removed. If you are interviewing a sysadmin, ask them that question. If they "get it" you're probably not dealing with a novice. I've seen a few full root filesystem result from this sort of mistake.
Note that the "-z" (and the newer -I) option requires that you have the 'gzip' program (or bzip2, for -I) on your path. The compression and decompression are done by a separate process which is transparently started (fork()'d then exec()'d) by GNU tar. These options are unique to GNU tar as far as I know.
So, if there is any chance that your command will run on a non-Linux system (i.e. you are writing a script and require some portability) then you should always use the - prefix for all 'tar' options, start the tar options list with c, t, x, or r and avoid the GNU enhancements (z, I, d, T etc).
AnswerGang: Jim Dennis
From Edwin Ferguson on Tue, 04 Jul 2000
Hello, I am hoping that you can help me, even with your busy schedule. Can you tell me how I can stop my network users from running chat room programs and instant messaging programs like ICQ, Yahoo and MSN? I use a Linux box as a firewall and proxy server, running Red Hat 6.1. Is there a way to also prevent them from running Real Player and other such applications that take up plenty of bandwidth? Then finally, how can I actually see what sites they are visiting and in turn block out porn sites etc.? Your assistance is very much appreciated.
Edwin Ferguson Technical Support
What you've presented here is the basic laundry list of the "fascist sysadmin." You're trying to enforce an acceptable use policy based on the assumption that your users are trying to waste your bandwidth and your company's time and other resources.
You could spend a considerable amount of time tightening your packet filters, eliminating routing and IP masquerading in favor of application layer proxies, monitoring your proxy logs, installing and/or writing filtering software etc.
If your users are motivated to break the rules and violate these policies then you'll probably find yourself in an escalating "cybercombat" with some of the more "hacker" oriented among them.
Ultimately this is a recipe for disaster.
Now, back to your questions:
Instead of making a list of all the things that you "don't want them doing" try turning it around to ask: "What services should my users be able to access?"
If all they need is e-mail, then you can block all IP routing, masquerading and proxying for all the client systems. You then run a local mail server that is allowed to relay mail from the Internet. That's that! If they need access to a selected dozen or hundred external web sites, consider installing Squid (http://www.squid-cache.org) (an Internet caching daemon) and SquidGuard (http://www.nbs.at/linux/Squidguard/installation.html) (a filtering module for Squid) and define your acceptable list accordingly.
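For the "allow only an approved list" approach, a squid.conf fragment might look like this (a sketch only; the domain names are invented and the ACL name is arbitrary):

```
acl approved dstdomain .example.com .example.org
http_access allow approved
http_access deny all
```

Order matters here: Squid applies http_access rules top to bottom, so the final deny catches everything not explicitly allowed.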
If you remain vague about what your policies are, then you'll just end up with an ever-growing laundry list. It's obvious that the list you gave here isn't comprehensive; you tossed in "and block porn sites etc" as an afterthought. That approach will grow to consume all of your time and creative energy. Be sure to explain this to your management, assuming that they are pushing you to pursue this tack.
The bottom line is that the there are some policies that are best enforced by human means (specifically by the HR department). Otherwise it may well be that your best recommendation will read something like:
"For each user we hire one full-time armed guard. Each guard is assigned a user, stands over his or her shoulder with weapon locked, loaded and aimed at the victim's temple...."
(Of course your management might try doing some MANAGEMENT. If the users are busy with their work, and if the management has reasonable productivity metrics and sane methods for monitoring behaviour --- then abuses of your precious bandwidth should be relatively limited ... unless management is spending all ITS time in IRC on the porno channels!)
AnswerGang: Jim Dennis
From Hari P Kolasani on Wed, 26 Jul 2000
Hi,
I was looking at this issue:- http://tech.buffalostate.edu/LDP/LDP/LG/issue38/tag/32.html, and I did not understand your solution correctly.
Can you please let me know what I need to do in order for telnet to work without any pause?
I happen to see similar problem for FTP also.
Thanks Hari Koalsani
If you look at some of the other back issues (search on the string "tcpd") you can see that I've tried to explain the issue a few times and at great length.
Basically there are three ways to approach this:
- Abandon telnet; use ssh instead.
- Fix your reverse DNS zones. Make the PTR records consistent with the A (address/host) records.
- Remove TCP Wrappers protection from the telnet service on this host. Change the line in the /etc/inetd.conf file that reads something like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
to look more like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/in.telnetd in.telnetd
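On the second method (fixing reverse DNS), the goal is simply that the PTR record for an address and the A record for a name agree. A hypothetical zone fragment (names and addresses invented for illustration):

```
; in the forward zone for example.com:
myhost          IN  A    192.0.2.10

; in the reverse zone for 2.0.192.in-addr.arpa:
10              IN  PTR  myhost.example.com.
```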
Personally I suggest that you use both methods 1 and 2. Use ssh, which USUALLY doesn't use tcpd or libwrap, the library which implements tcpd access controls, AND fix your DNS zones so that your hosts have proper PTR records.
As I said, I've written many pages on this topic. I'm not going to re-hash it again. Hopefully this summary will get you on the right track. If you still can't understand what is going on and how to do this you should consider calling a tech support service (Linuxcare does offer single-incident tech support calls, though they are a bit expensive; there may be other companies still doing this), or hire a Linux consultant in your area (look in the Linux Consultants HOWTO http://www.linuxdoc.org/HOWTO/Consultants-HOWTO.html for one list of them).
They can provide hand holding services. A good consultant can and will show you how to handle these sorts of things for yourself, and will ask some questions regarding your needs, and recommend comprehensive solutions.
I would ask about why you are using telnet, who needs access to the system, what level and form of access they need, etc. I can simply answer questions, but a good consultant will ask more questions than he or she answers --- to make sure that you're getting the right answers. Given my constraints here, I don't have the luxury of doing in-depth requirements analysis for this column. (Also note that I'm not currently available for consulting contracts, Starshine Technical Services is currently in hiatus).
AnswerGang: Jim Dennis
From Maenard Martinez on Tue, 25 Jul 2000
Is it possible to connect a Linux Red Hat 6.0 (custom installed) box to a network wherein the PDC is a Windows NT 4.0 Server? Do I need additional tools to connect it? Is it similar to UNIX X-windows?
Thanks, Maenard
Basically all interoperation between Linux (and other forms of UNIX) and the Microsoft Windows family of network protocols (SMB used by OS/2 LANManager and LANServer, WfW, Win '9x, NT, and W2K) is done through the free Samba package.
Normally Samba allows a Linux or other UNIX system to act as an SMB file and print server. There are various ways of getting Linux to act as an SMB client (including the smbclient program, which is basically like using "FTP" to an SMB server, and the smbfs kernel option that allows one to mount SMB shares basically as though they were NFS exports).
Now, when it comes to having Linux act as a client in an MS Windows "domain" (under a PDC, or primary domain controller) it takes a bit of extra work. Recently Andrew Tridgell and his Samba team have been working on a package called "winbind." Tridge demonstrated it to me last time he was in San Francisco.
Basically you configure and run the winbind daemon, point it at your PDC (and BDCs?) and it can do host and user lookups (and user authentication?) for you. I guess there is also a libnss (name service switch) module that is also included, so you could edit your Linux system's /etc/nsswitch.conf to add this, just as you might to force glibc linked programs to query NIS, NIS+, LDAP or other directory services.
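If that libnss module is installed, wiring it in would look like any other nsswitch addition. A sketch (verify the exact service name against the current Samba documentation):

```
# /etc/nsswitch.conf: consult local files first, then the winbind module
passwd: files winbind
group:  files winbind
```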
Now I should point out two things about what Tridge showed me. First, it was under development at the time. It probably still is. You'd want to look at the Samba web pages and read about the current state of the code --- but it may not be ready for use on production systems. (I hear that some sites are already using it in production, but basically that's because it's their only choice). The other thing I should mention is that I got the basic "salesman's" demo. That's not any fault of Tridge's (he wasn't trying to "sell" it to me and he certainly can get into the technical nitty gritty to any level that I could understand). It's just that we didn't have much time to spend together. As usual we were both pressed for time.
(I'm writing this on a train, which is why I can't look up more details at the Samba site for you. So, point your browser at http://www.samba.org.)
AnswerGang: Ben Okopnik
From Mike Miller on Sun, 23 Jul 2000
Hi, I'm having a bit of trouble trying to figure out a way to automate my dial-up process. Say I'm sitting here at my Hewlett-Packard and I want to get on the Internet... I have to open a telnet window, log on as root on my Linux box, and type ppp-go. I already have a script for my ISP login name and password. Is there a program out there that would possibly open a telnet window, type root and password, and enter ppp-go, sort of a dial on demand? Also, is there a way to disconnect from my ISP from my Hewlett-Packard without opening telnet and using ppp-off?
Short answer: IP-Masq and "diald". The man page and the HOWTO were on the NY Times Bestseller list for 16 weeks straight. <grin>
OK, let's see - you don't say what your setup is; for that matter, we have no information whatsoever, other than reasonable guesses. From these clues, I gather the following: you have a Windows box (3.1? 95/98? NT?) connected to a Linux machine on a local network. The Linux box is the one with the connection (ISDN? Dial-up? Telepathic?) to your ISP. If this is correct, then the explanation to follow may be of use; my main reason for answering this is that it's a relatively common setup, and other people may find it useful as well.
The first thing that's needed is IP-Masq and SLIP compiled into your kernel; depending on your distro and version, it may already be done. IP-Masq is a NAT (Network Address Translation) program; what it does, in effect, is make your LAN look like a single IP address to the "outside world", i.e., no matter which machine you use to surf, telnet, etc., all requests will come from (and all replies will be sent to) your IP-Masq router, which will then route the traffic inside the LAN.
"Diald" is a 'dial-on-demand' daemon (that requires SLIP) that will establish a connection to your ISP whenever you request an "outside" IP - i.e., if you fire up Netscape and ask for www.slashdot.com, "diald" will see that the address is non-local and establish a connection by dialing up. It will also, if you want it to, disconnect automatically after a period of inactivity.
What does this mean in practical terms? You never have to think about dialing from either of your machines again - just open your browser and start surfing, or telnet to anywhere, or ping at will. The first response will take 30 seconds or so (the period required for the dial-up connection), but that's it. As automatic as it gets.
The IP-Masquerading HOWTO (sorry, no URL - I'm writing this at sea, and don't have access to the Net) takes you step-by-step through the process of setting up IP-Masq, and the "diald" man page and documentation are very detailed, with lots of examples for various situations.
AnswerGang: Heather Stern
Hello Answer Guy, or Gang perhaps,
I would like to ask your help on something that's been bugging me for some time.
I work in a company where Windows and Microsoft in general are the standard for the desktop and I more or less manage to survive the daily routine (Windows 98 only crashes a few times a week, which is a big improvement over Windows 95). However, for my technical support activity I use two Linux boxes, old 486s recycled because no Windows 9x would run on them, at least not without reducing productivity to 10%. I'm very happy with them and I just couldn't do without them. One runs Slackware 3.4, the other Debian 2.2.
The only problem I have is to telnet into them from my Windows machines (as this is an internal network I don't need to use SSH and similar). That is, any telnet client works fine, but whenever I need to use applications like Midnight Commander (wonderful tool) or even VI, some keys, namely function and navigation keys, do not work. The test I normally do when I try a new client is to run MC and try all the function keys. I have tried the standard Windows client, Netmanage, and several others. The only client that achieves some success is the new CRT 3.1, from Van Dyke, www.vandyke.com. It has a Linux terminal and keyboard type and with it I can use F6 to F10 with no problems, but F1 to F5 seem not to be working at all. I have tried all the different combinations, like VT100 terminal and Linux keyboard, and so on (for some obscure reason F5 does not work at all, with any client).
Teraterm is trainable. (As a side note it also has an ssh add-in available.) You might also try whatever Hummingbird offers for telnet services, they have been doing terminal emulators for a l...o...n...g time, and of all the possible results you should be able to pick one on your side, and a matching TERM variable on the Linux side.
But it's worth noting that there are big stacks of vt-something terminal types. When I was playing with a Solaris box at one point (!) the "standard" Windows telnet behaved best if I set the term variable to "vt100-nav" (no advanced video, has some sort of effect on the way it handles the last screen column). You probably want to try a bunch of the TERM variables anyway, because lame little telnet announces itself as "ansi" but isn't close enough to that spec either. For that matter, the telnet that comes with it also offers vt52 emulation, and you can try that ...
The Keyboard HowTo does not say anything on this issue, so I wonder whether you had any information you may be willing to share.
There's no reason something about remapping the Linux console driver's idea of keys would have any effect whatsoever on a remote connection (whether ssh or telnet).
I know you have not used Windows for several years, but maybe you have come across this problem in the past.
Best of luck with it; if you need to keep looking for a configurable enough client, try winfiles.com or Tucows.
Anyway, thanks a lot for your help and should you need any additional information, please feel free to contact me at any time.
Best regards.
ROBERTO URBAN
... he replied ...
Heather,
Thanks for your quick response. I'll act on your information right away. Thanks again.
Best regards.
ROBERTO URBAN
AnswerGang: Srinivasa Shikaripura, Mike Orr
From Nick Adams on Tue, 11 Jul 2000
Hello, Quick question. I want to change my port to accept telnet connections to port 80. This enables me to connect from behind my proxy at work. How do I do this? Thanks,
Nick Adams
[Sas] hi,
If I understand your problem, "you want to telnet to your personal machine which is behind a http proxy, from outside the proxy network".
My quick answer would be it is not possible.
If you are behind an HTTP proxy, then you can't connect to your machine using telnet from outside. Since the proxy speaks only the HTTP protocol, your telnet client outside wouldn't be able to talk to your machine through it.
Coming to the other part of the question, on how to make telnetd accept telnet connections on port 80, you may need to modify your '/etc/services' and '/etc/inetd.conf'.
Hope that helps.
Cheers, -Sas
[Mike Orr] There exist telnet-via-web applications, but they have to be installed on the host (i.e., proxy) machine. I've never used them, so I don't know anything more about them.
[Sas] Thanks for the info.
I agree that with custom programs to handle a telnet proxy we could telnet over the proxy. But with a standard Apache/Netscape/IIS proxy web server it is not possible. Also, the proxy admin needs to install and enable the corresponding telnet port to the outside world, which may be risky.
- Here is one server which does telnet proxy:
- http://www.nabe-intl.co.jp/faqs/telfaqs.html#tel001
Just FYI.
-Sas
From Yu-Kang Tsao on Wed, 26 Jul 2000
AnswerGang: Jim Dennis
Hi James:
Now I am setting up a Linux Red Hat 6.2 server box in our NT LAN and I am trying to telnet to that box from one of the NT workstations in our NT LAN. But it gives me a "connection refused" message. Would you help me telnet to the Linux box? Thank you very much.
Sincerely
Nathan
You probably don't have your DNS, specifically your reverse DNS zones (PTR records), properly configured.
Linux includes a package called TCP Wrappers (tcpd) which allows you to control which systems can connect to which services. This control is based on the contents of two configuration files (/etc/hosts.allow and /etc/hosts.deny) which can contain host/domain name and IP address patterns that "allow" or "deny" access to specific services.
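As an alternative to removing tcpd entirely, you can teach it to admit your own machines. A sketch, assuming a hypothetical 192.168.1.x internal network:

```
# /etc/hosts.allow -- allow telnet from the local network
in.telnetd: 192.168.1.

# /etc/hosts.deny -- refuse everything not explicitly allowed
ALL: ALL
```

(A client pattern ending in a dot matches any address with that prefix; see the hosts_access(5) man page for the full pattern syntax.)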
You could disable this feature by editing your /etc/inetd.conf file and changing a line that reads something like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
to something that looks more like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/in.telnetd /usr/sbin/in.telnetd
(Note: these are each just one line; any trailing backslash is only for e-mail/browser legibility.) Some of the details might differ a bit; this example is from my Debian laptop, and Red Hat has slightly different paths and permissions in some cases.
You should search the back issues of LG for hosts.allow and tcpd for other (more detailed) discussions of this issue. It is an FAQ. Of course you can also read the man pages for hosts_access(5), hosts_options(5) and tcpd(8) for more details on how to use this package.
Note: You should also consider banning telnet from your networks. I highly recommend that you search the LG back issues for references to 'ssh' for discussions that relate to that. Basically, the telnet protocol leaves your systems susceptible to sniffing (and session hijacking, among other problems) and therefore greatly increases your chances of getting cracked, and greatly increases the amount of damage that an intruder or disgruntled local user can do to your systems. 'ssh' and its alternatives are MUCH safer.
... he replied ...
Hi Jim:
I also want to thank you for advising me ban telnet from my networks. I will ban telnet from my networks. Thanks a lot.
Sincerely, Nathan
From sarnold on Fri, 07 Jul 2000 on the L.U.S.T List
AnswerGang: Jim Dennis
On 3 Jul 00, at 18:07, Number 4 wrote:
I've just installed Loadlin on my Win98 partition and can't get it to boot my kernel (bzImage type). When I try to load the kernel, with all of the proper parameters set, it gives an "invalid compressed format" error message and the system is halted. I think the problem is that when I copy the kernel onto the windoze
partition, it is automatically converted from Linux binary format (two-digit hex numbers in brackets) to DOS binary format (many weird ASCII characters). Does anyone know how to remedy this? Thanks.
I have no idea what you think you're saying in this last statement. Binary is binary. If a file is copied as a stream of binary octets (bytes) then it will be the same file on any platform that supports 8-bit bytes. There is no "Linux binary" vs. "DOS binary" (in terms of file formats). Of course the executable binaries have much different formats (in fact Linux supports a.out, ELF and some others, while MS-DOS supports .COM and .EXE).
However, the Linux kernel is not "executed" by DOS. It is loaded by LOADLIN.EXE (which is obviously an MS-DOS .EXE binary executable file). However, the kernel image is generally a compressed kernel image in ELF format with a small executable stub/header. It is formatted so that it could be dropped onto a floppy and directly booted (so the first sector of a Linux kernel image is basically just like a floppy boot sector). Other loaders (like LILO, SYSLINUX and LOADLIN.EXE) copy the kernel into memory and jump to a different entry point (past the "boot record" and onto the part that allocates extended memory and decompresses the kernel into it).
I hope you can see that your characterization of "hex digits" vs. "weird ASCII characters" is hopelessly confused. Those are both different ways of viewing or representing the same binary data. The fact that they appeared to be different is probably an artifact of the tool you were using to view them. To actually tell if the file was modified as it was copied, use the cmp (or at least the diff) command and check its return value.
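The cmp check is easy to script; this sketch uses a stand-in file rather than a real kernel image:

```shell
head -c 4096 /dev/urandom > image.bin    # stand-in for a kernel image
cp image.bin image-copy.bin              # the "copy to the DOS partition"

# cmp is silent and exits 0 only if the files are byte-for-byte identical
cmp image.bin image-copy.bin && echo "copy is identical"
```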
If the files are different then look to see if you have your FAT/MSDOS filesystem mounted with the "convert" options enabled. This was a feature in earlier Linux kernels that applied to some of the FAT, VFAT, and UMSDOS filesystems. I think it has been dropped from more recent kernels (or is at least deprecated). It was intended to automatically convert TEXT files as they were copied to or from MS-DOS compatible filesystems. However, it is known to have caused many problems and the consensus in the Linux kernel community seems to be that kernel filesystem drivers should NOT modify the contents of files as they are stored or retrieved. (I'm inclined to agree --- let the applications be modified to handle the format differences gracefully).
Of course the TEXT file formats differ among UNIX/Linux, MS-DOS, and MacOS systems. It all depends on the line termination conventions. Linux/UNIX use just "newlines" (LF, linefeed, a single character: ASCII hex 0x0A, '\n' in C strings) while MacOS uses just the carriage return (CR, ASCII hex 0x0D, '\r' in C) and MS-DOS uses the highly irritating CRLF (2 characters: carriage return, line feed, ASCII hex 0D0A sequences, or "\r\n" in C). I've seen some MS-DOS editors freak out when presented with text files that had LFCR line boundaries (reversed CR and LF sequences). However, most of them could handle that and some/most could handle UNIX and Mac style text files.
(Of course most GNU and free text editors and tools can handle any of these formats and there are many little scripts and tools to convert a text file into any of the appropriate formats. Some day, someone ought to write a really top notch "text file" library that automatically detects the line feed convention on open and defaults to preserving that throughout the rest of the operation --- with options to coerce a conversion as necessary).
(The reason I say the MS-DOS form is so irritating is that it messes with the size of the file. Having two-character line boundaries then breaks quite a few other assumptions about the text of the file.)
I don't use loadlin, but I think the 2.2.x kernels need to be bzipped. 2.0.x kernels use the older compression; you could try an older kernel, or maybe boot your kernel off a floppy disk. Is there some reason why you can't install to an e2fs partition and use lilo?
Sorry, that's about all I can think of (on the last morning of a holiday weekend).
Steve
Older versions of any loader (LILO, LOADLIN.EXE, SYSLINUX) may not be able to handle bzImage kernels. However recent versions (as in the last two or three YEARS) should be able to cope with them. (Incidentally, a bzImage is a "big zImage" that loads high in memory; it is still gzip-compressed, not bzip2-compressed.)
I suspect that it is more likely that he's corrupting the kernel image as he's copying it.
From sas on Sun, 16 Jul 2000
AnswerGang: Jim Dennis
Christine Rancapero was published in the Mailbag:
Do you have an issue regarding the advantages and disadvantages of migrating linux mail server to an MS exchange? Your help is gratefully appreciated....thank you very much
One of our more active readers this month replied -
Advantages of moving from Linux mail server to MS Exchange:
Disadvantages of not moving to MS Exchange:
[Disclaimer: No hard feelings please; it is not flame bait. Just my experience with *nix mail and my colleagues' experience with MS Exchange.]
cheers -Sas
All humor aside, this would not be so much of a "special issue" (of LG) as a white paper. Here are some thoughts:
The first observation to make is that we are comparing apples to fish heads. Linux is an operating system kernel. There are many packages that can supply standard mail services under Linux. Basically the UNIX/Linux e-mail model involves MTA (mail transport agents), MSA (mail storage/access agents) and MUAs (mail user agents). There are also a variety of utilities that don't really quite fit in any of these categories.
[ Our LG Editor, Mike, thought Jim's next part describing an overview of Linux mail services was so good, he split it into a separate article: http://www.linuxgazette.com/issue56/dennis.html
Summarized: there are several MTAs, and a number of ways to apply administrative policy -- more complicated policy takes much more planning. You can also get the LDA (local delivery agent) involved, and apply rules or filters at the email client level. This certainly includes responders such as the common 'vacation/out of office' note. With shell scripts invoking small utilities, certain kinds of recovery are easier on the sysadmin; small utilities for the user (like 'biff' to spot new mail) exist too. Goodness knows what mail client the user may have - he has so many choices.
-- Heather. ]
This is all in contrast to Microsoft's approach. With Microsoft you are almost forced to use the MS Outlook client, and the MS Exchange server. They refer to that as "integrated." They also basically require that you use their "Back Office" and "SMS" products for some management features, and their WINS (or the newer ActiveDirectory?) for directory services.
One of the costs of all this integration is CONTROL. You must set up your network, your routers, and your servers in one of the approved Microsoft ways in order for any of it to work. You can't have one "farm" (cluster) of servers (say outside your firewall, possibly with some geographic dispersion) receiving and relaying mail with another cluster of servers (say inside your firewall, at specific regional and departmental offices). You can't make your e-mail address names follow one convention (abstraction) such as "" while the actual underlying routing and storage architecture follows a different model (such as ).
The UNIX/Linux model is scalable. That's proven by the fact that it's used by well over 80% of the Internet (obviously the largest interconnecting set of computer e-mail networks in history).
As usual if the Microsoft package doesn't do what you want you'll have to do without. There is very little option for administrators and users to customize the operations. Even if you do try to customize your Microsoft installation their internal complexity, tight coupling (integration) and overall fragility result in steep learning curves, and high risks (the packages you add in are more likely to conflict with other, seemingly unrelated, parts of the system or with other subsystems).
Obviously with the Linux tools there are no arbitrary limits placed on number of users, number of accounts, number of sent or received messages, sizes of messages, etc. While some specific tools may bump into limits, more often the default configuration, or the wise administrator, will impose constraints based on their own capacity planning needs and their own policies. (Like when I modified my sendmail.cf to set limits after the incident I described above).
With the Microsoft approach you're required to pay for every user; and those costs will probably become ANNUAL expenses (as Microsoft foists their ASP software "subscription" model on their customers).
In addition, of course, the Microsoft approach emphasizes the convenience of their programmers and the needs of their marketing people over the security of your users. That's why we are regularly treated to the perennial debacle of the e-mail macro virus epidemics (Melissa, ILOVEU, LoveBug, etc). These macro viruses are basically caused by the very same programming flaws that gave us the WinWord and Excel macro viruses (and they are written in basically the same language). Similar bugs seem to have been found in Explorer.
Microsoft thrives on shallow, whizzy "features", and one of the easiest ways to implement those is through poorly designed, obscure "dynamic content" hooks which treat "special" data as programs. Those are precisely the kinds of "features" that are most attractive to cybervandals and most easily exploited. Once they've been put into a system and used by other components on that system, they can't be removed or disabled (all in the name of backwards compatibility).
Of course that hallowed "backward compatibility" will only be honored to the degree that suits Microsoft's whims. They will deliberately or neglectfully break their APIs in order to force users and ISVs (independent software vendors) to upgrade existing products as a requirement for upgrading other (seemingly unrelated) subsystems.
Thus an upgrade to the latest Powerpoint may entail an upgrade to the rest of MS Office, which may require upgrades to the OS and thus to the mail client (Outlook or Express) and thence possibly right up to the mail server (Exchange) and the server's OS (NT to W2K). Microsoft generally benefits from such domino effects; though they do have to exhibit some restraint. That's particularly true since they have enough trouble getting any single product to ship on schedule and they can't try to sync them all for really massive coups.
This is another cost of integration. The "integrated" systems become rigid and hard to maintain, harder to upgrade or enhance, impossible to troubleshoot or repair.
Open systems are characterized by modularity --- separate components interacting through common APIs (sometimes via shared libraries), and communicating via published protocols. Open systems generally have multiple combinations of clients and servers. Of course that has its cost. Some of these components will fail to implement their protocols in interoperable ways some of the time. Sometimes this will require revisions to the protocols, more often to the components. Some combinations of components will not work, or will be a bad idea for other reasons. Often the same functions will be implemented at multiple different points (duplication of feature sets).
Overall these systems will be more robust, more resilient, and more flexible. It will be possible for an organization to tailor their system to meet their needs.
Such systems do require skilled, professional administrators (or at least consultants for the initial deployments and for follow-up support). However, the "easy to use" MS Windows based systems, and even the famed "intuitive" MacOS networks, require trained professionals for most non-trivial networks.
Ultimately you should consider the availability of expertise in your IT decisions. Hire people with broad experience and a willingness to learn. Then ask them what systems they prefer to manage.
On Tue, Jul 18, 2000, James Strong wrote:
In studying ip addressing I come across the reference of 255 and 256.
if all ones (11111111) = ?
if all 0s (00000000) = ?
How does one come up with 256 sometimes and 255 other times?
-confused
There are no "256"s in valid IP addresses.
IP addresses are 32 bits, and are written in 4 octets of 8-bit numbers expressed in decimal form. The biggest possible 8-bit number is 255, which is 2^7 + 2^6 + ... + 2^1 + 2^0.
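You can check this arithmetic at a shell prompt; bash's base-2 constants make the distinction plain (a quick sketch, assuming bash):

```shell
# All eight bits set: the largest value a single octet can hold.
echo $((2#11111111))    # prints 255
# 256 needs a ninth bit; it is the COUNT of possible octet values (0-255),
# never a value that appears in a dotted-quad address.
echo $((2#100000000))   # prints 256
```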
A good explanation of IP addresses is in the Linux Network Administrator's Guide, available in your favorite Linux distribution or from www.linuxdoc.org.
-- Don Marti
Mathieu () wrote:
Hi, I have a problem with LILO. Every time I install Red Hat 6.2 on this hdd it has the same problem... My hdd partition has 2055 cylinders, and when I boot up the computer, it just prints "LI"... Any ideas?
- Mathieu
[ This is the number one problem with LILO - it's a bit sensitive to some matters of size and cylinder location. Mathieu here isn't the only one who's had this question; it comes up several times every month, and has been in the FAQ for a while. The number two problem is much the same, except with some number spewing at you.
Either usually means there is a geometry problem, and the right options can be added to lilo.conf with any text editor. See LILO's own docs (usually in /usr/doc/packages/lilo). Somewhat more usefully, the LILO Mini-Howto was just updated a few days ago: http://www.linuxdoc.org/HOWTO/mini/LILO.html
I think you readers will also be pleased to know there are numerous alternatives. You can find a stack of them by going to freshmeat.net and typing "boot loader" or "bootloader" into its search box. (Do both separately; you get different lists.) Ones worth highlighting are GRUB, Smart Boot Manager, GAG (it may be a slow link, but it looks really nice) and Winux (an odd one... it's a configurator for using LOADLIN effectively). They don't seem to mention Debian's 'mbr' - which (like Smart Boot Manager) is only a first stage (you still need LILO or something like it to chain into the kernel) but even less verbose than FreeBSD's spartan partition picker. You have to press SHIFT if you care to change which partition to boot from.
Lastly, if after you install LILO, Windows/DOS won't boot even from a floppy, boot from a rescue disk and use Linux fdisk to change your extended partition type to 85 (Linux extended). This will stop it from looking for a D: that simply isn't there.
--Heather.]
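For reference, the geometry workaround usually amounts to one extra keyword in /etc/lilo.conf. This is only a hedged sketch: the device names are examples, and the newer `lba32` keyword exists only in recent LILO versions, so consult the docs mentioned above for your release.

```
# /etc/lilo.conf (fragment) - example devices; adjust to your setup
boot=/dev/hda
linear            # use linear sector addresses instead of raw geometry
image=/vmlinuz
    label=linux
    root=/dev/hda1
```

Remember to re-run /sbin/lilo after any edit; the change does nothing until the boot map is rewritten.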
I love your columns - very informative and very helpful.
I've searched high and low for a solution to this problem but haven't had any luck.
I recently re-installed RH6.0 (after root was compromised by a non-malicious hacker), and haven't updated anything (yet) except my version of XWindows.
Everything is more or less working, but I'm having difficulty logging into my POP3 server. I have a perfectly valid and functioning user account, but POP3 is rejecting my login (with the Linux account's password) with a "-ERR Bad login" message.
Are there any circumstances where my POP3 server would be looking for a different password than the OS? Or is there something else that could be going on?
Thanks in advance for your assistance!
-Steve Lobo
But Steve found his answer and sent it in:
Never mind! I'm not sure why, but the pop file didn't exist in /etc/pam.d - so although everything looked to be in order in terms of connecting to port 110 and attempting to get into a transaction state, POP had no idea how to authenticate. I just rebuilt imap* from my RH CD-ROMs and everything's fine...
Thanks anyway!
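For anyone who hits the same "-ERR Bad login", the missing piece was the PAM service file. A typical /etc/pam.d/pop on Red Hat 6.x looked roughly like the sketch below (from memory and hedged; the module paths and options shown are the stock pam_pwdb defaults, so check the imap package's own copy for the authoritative version):

```
#%PAM-1.0
auth     required  /lib/security/pam_pwdb.so shadow nullok
account  required  /lib/security/pam_pwdb.so
```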
If you prefer reading HTML to plain text, the KDE help system (program kdehelp) provides a nice interface to man pages. (But of course it's supposed to be used under the kdm display manager, not on a "dumb" console.) Either type "man:<command>" (without the quotes and <>) into the URL line, or go through the main menu. It also provides an interface to the other help system (info pages), but less nicely formatted (you can type "info:<command> invocation" into the URL line, but in this case I think it's easier to navigate the menu system).
Can you run Linux and Windows 95 on the same computer, selecting which operating system you want to run at bootup?
Why, yes, yes you can. It's no easy process though, and you'll have to read a bit, so check out
http://www.linuxdoc.org
Or more specifically:
http://www.linuxdoc.org/LDP/Linux+Windows-GUIDE/index.html
Installing Linux can mean re-partitioning, unfortunately. So be careful! There are ways to avoid re-partitioning, however. Check out:
http://www.vmware.com
Also, if you just want to test out Linux, may I suggest Mandrake 7.0, which comes with a program called Linux for Windows, which will install Linux onto a FAT-formatted partition.
Linux comes with a program called LILO (LInux LOader), which installs itself to the Master Boot Record and can easily be configured to boot multiple OSes, such as Windows.
Mike
Sir:
Is there a program or programs that accept computer trade-in for tax credit?
Joe Kellum-NYC
Probably.
I did a Google (http://www.google.com) search on the phrase "computer donation tax" and got 35,000 hits. The first several appeared relevant.
However, this has nothing to do with Linux or with the free software movement. It's also not a technical question. Thus you've posted it to the wrong venue.
Perhaps you should talk to a tax professional.
[ The real tip here is, we're the Linux Gazette, not the tax writeoff gazette. You might try donating it to the Free Software Foundation, the Debian project, or a developer who is working on stuff your company uses, but is poor and could use the particular hardware you have. --Heather.]
Hello, I am looking for a sound driver for a Yamaha Labway OLP3-SAx card; can you tell me how to find it on the Internet?

Here is my address:

Thanks in advance, and see you soon. Joel.
I thought I would need to post a translation request in Help Wanted again, but our assistant editor Don Marti stepped up to the plate:
Joel,
-- Don Marti
On Wed, Jul 12, 2000 at 09:27:23AM -0500, Kishore T. Kapale wrote:
I want to connect a Laptop and a PC, both running RedHat 6.2, through an ethernet connection. I do not need any technical details, I am aware of those. I have only one question: can I use a thin 10Base2 network without a hub to achieve this?
Most laptop ethernet cards that I've seen use 10Base-T, or twisted-pair, Ethernet. If you have this kind of Ethernet on both systems -- it has an RJ-45 socket, like a wide version of a phone jack -- then you can connect two, and only two, systems with a crossover cable, available at any well-stocked computer store. Or build your own crossover cable using the diagram at: http://www.homepclan.com/cabcr20.jpg You'll need a tool called an RJ-45 crimper, which is a good investment if you want to make a lot of cables in custom lengths.
If you have true 10Base-2 Ethernet, which is rare these days, both systems will have a BNC connector, which is round with two little pins on the sides. Using 10Base-2 Ethernet, you can connect any number of systems without a hub. You'll need 10Base2 cable (50-ohm RG-58 coaxial cable - similar to what cable TV uses, but not interchangeable with it), a BNC "T" connector for each system, and a BNC terminator for each end. All available at any well-stocked computer store.
-- Don Marti
Hi,
I'm a newbie Linux user, and I just have a couple questions about my newly installed RedHat 6.0 system.
1. I'm trying to figure out how to run KDE from the console. Running startx brings up either GNOME, Afterstep or FVWM and I can't switch to KDE from any of those. I don't want to use GDM, and I found a script called 'kde' on my system, which of course doesn't work because the X server is not up. I found that 'X' was a symbolic link to my installed X server, and that brings up the familiar gray background and mouse cursor. I tried just switching to a console and running 'kde' again, hoping it would find the X server I just started.
Use a text editor (e.g. emacs) to edit /etc/sysconfig/desktop so that it holds the string 'KDE' (excluding the quote marks, of course).
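If you'd rather do it from the command line, the change is a one-liner. The sketch below writes a scratch copy so the effect is visible; on a real system the file is /etc/sysconfig/desktop and you need root:

```shell
desktop=./desktop.example    # stand-in for /etc/sysconfig/desktop
echo 'KDE' > "$desktop"      # the file's entire content is the desktop name
cat "$desktop"               # prints: KDE
```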
The Answer Guy, Jim Dennis, commented:
Of course it would be unfair to single out Microsoft in this regard. I don't find Netscape's "vcard" attachments any less obnoxious than "winmail.dat", and I find Netscape's previously default behavior of appending HTML-formatted copies of the body text to all outgoing e-mail to be almost as bad as appending .doc or other binary formats. (At least I can read between the tags if I care to.)
Netscape 4.72 (& 4.73?) still defaults to HTML mail, and vCards are an open standard (http://www.imc.org/pdi) that I have found very useful.
I have a web application that, among other things, sends the user's contact information to us via email. Adding these users to our address book becomes almost trivial thanks to an Outlook add-in that imports vCards. My application includes a vCard attachment with the message and we can double-click to add the user to a shared contacts folder. We could do something similar using Netscape, or any *nix mail client that called GNOMECard as a vCard viewer if we were using *nix desktops.
As long as sending a vCard is not the mail client's default behavior, I don't have a problem with it. It has more info than a plain sig, and since it's actually plain text, it's just as human-readable as an attached text file.
[ There is an address book written in Perl which uses the vCard format natively. It's still working toward vCard 3.0, but perhaps you can enjoy it anyway: http://www.acm.rpi.edu/~jackal/ab.html
XCmail is a mail client which handles vCards and PGP (among other things): http://www.fsai.fh-trier.de/~schmitzj/Xclasses/XCmail --Heather.]
Dear Linux Supporters:
I have started playing around with SuSE Linux and am impressed with the product. I have been a dyed-in-the-wool Microsoft user for the last eight years. I have seen them step on a lot of folks, and that is part of business. I have also put up with their mindless CD keys that make a network administrator's life miserable. "Not copy protected" is what it said on all of their software. That was until they controlled the market; now everything is copy protected.
But the latest rumor, or plan, from Microsoft has pushed me over the edge. I read an article in the May 1, 2000 issue of InfoWorld reporting that Microsoft now wants to jam a "medialess OS" down our throats. The article, entitled "Users find Microsoft's medialess anti-piracy play hard to swallow", explains their latest attempt to stop software piracy. This is it for me.
I have been an ardent supporter up till now. I want to convert to something else. The problem is my Word, Access, and other documents that depend on MS apps. Is there a way to continue to use these apps without a Microsoft OS? Or is there a way to emulate Windows apps, or are there other apps that transparently use their files? Any help would be greatly appreciated.
Well, as one newbie to another, good luck. StarOffice will import and save in most if not all of the MS Office formats. Personally, I was using StarOffice on my MS machine, so I know it works at least for Word and Excel; I never used Access or PowerPoint, so I cannot tell you how well those work.
This issue with the MS medialess OS? I had not heard anything about it, but I sure am glad I am switching over to Linux myself. I am very tired of Micro$oft and its games. I was waiting until I had found the apps needed to switch over, and with the release of Corel Office for Linux, I figured the time was at hand. Now to convince my wife.....
I am looking over the stuff for development on the KDE platform, namely the KDevelop IDE. If Microsoft had developed something like this and given it away, Windows would not be the mess it is. A lot more developers would be able to work without resorting to piracy to get the tools needed.
Good luck and have fun...
Douglas Macdonald () wrote:
I have a US robotics modem PCI Fax modem 3cp5610 and running Red Hat Linux. I can not get it to work. Any suggestions ?
[ You are a very lucky guy - you actually have an honest to goodness real modem there. So, you need to see what IRQ it's getting, and if necessary use setserial to advise Linux' drivers to keep it that way.
If you recompiled a kernel, double check that you have serial support. Also, in the "extended dumb serial options" turn on IRQ sharing. People who know modems a bit know that under MSwin 2 serial ports get the same IRQ, but a different I/O address. Same here, if you tell it so. --Heather.]
"I was wondering if you have ever heard of anyone booting up a system with a Linux boot floppy. The system previously lacks the ability to boot from a CD, but after installing Linux, uses the CD drive to install another operating system which at the same time will write over the Linux system."
Noah:
Three different machines at work use boot floppies to start Linux. I have had problems using LILO and modifying the boot sector on two of those machines. Using a floppy gives me a safe and simple way to have Linux and Windows on the same machine with no changes to the boot sector. I did need to run rdev on the boot floppy's kernel to point it toward the proper partition on the hard drive. Other than that minor detail, using a boot floppy works quite well.
Dave Sarraf
Hello, I have been playing with Linux for about 4 months, and I would like to share some information that may already be available, but was not evident to me. My system has an older VESA local bus motherboard, and the processor is a 486DX2 running at 66 MHz. It took me approximately 2 months to get XFree86 working with my Diamond Stealth Pro VL (VESA local bus) board, which uses an 80C929 device. Anyway, I want to prevent other Linux people from pulling overnight hacks like I did (that will never happen), so here is the section of the XF86Config file of importance for a Diamond Stealth Pro VL video board (VL for VESA local bus).
------------------------------------------------------------
Section "Device"
    Identifier  "Diamond Stealth Pro"
    VendorName  "Diamond Multimedia"
    BoardName   "Stealth Pro VL"
    VideoRam    2048
    Ramdac      "ss2410"
    Option      "diamond"
    Clockchip   "icd2061a"
    Chipset     "s3_generic"
EndSection
------------------------------------------------------------
My Linux distribution is SuSE 6.4. The parameters in the above file reference the components in my video board.
The line which really made my system work without crashing while running X was the Chipset directive. The default chipset was mmio_928. When that option was used, I would get system hangs (you couldn't even telnet in via the ethernet), segmentation faults, and lots of other problems. I will make the bold (and possibly incorrect) assumption that the assumed memory locations (for memory-mapped I/O) were in conflict with the memory space of a running process (possibly kernel space?). I do not know for sure, but using s3_generic (which implies I/O mapping for device registers) fixed the problem.
I am pleased that Linux came into existence, and it is one of the ultimate hacks. Your Linux Gazette has helped me lots and lots (I read all the back issues -- I am up to May 2000), and I hope that I can achieve the knowledge to help other people the way your extensive documentation has helped me.
Chris Gianakopoulos (soon to be Linux hacker)
James, I find your Linux Gazette column to be very valuable. I have a problem that I have not been able to find the answer to: is it possible to get NT/2000 to read ext2 partitions seamlessly? I found a utility which will allow the user read-access but it is painfully slow and requires you to copy anything you want Windows apps to be able to access:
http://uranus.it.swin.edu.au/~jn/linux/explore2fs.htm
[ Yes, there is at the very least Ext2Read, which is a GUI front end to a package of loose tools originally designed for DOS, then ported to NT. It's reputed to work on W95, and appears to have a number of features. Note, I haven't used it:
http://www-scf.usc.edu/~vakopian/programs/progs.html#ext2read

There's also EXT2 Researcher, but the documentation is slim. I haven't used it either:
http://winfiles.cnet.com/apps/nt/disk-analyze.html
--Heather.]
As an alternative, is there a way to transform the filesystem from ext2 to ntfs? Reformatting is out of the question as I have 30GB of data on the partition.
---- Mike Perham,
Java Server Guy
Somebody wrote a few days ago about how modern distributions have too many files, which makes the "locate" command unusable: anything you type matches a whole slew of pixmap and HTML files used for the desktop interface. The person was asking the distributions to move these into tar files.
Another strategy is just to filter those filenames out of the "locate" output:
loc () { locate "$1" | egrep -v 'bmp|html|whatever'; }

(Note the semicolon before the closing brace; it is required when the function body is written on one line.)
This creates a shell function called loc, so that when you type:
$ loc time
you don't get back entries containing 'bmp', 'html' or 'whatever'. You can of course adjust the egrep expression to your heart's content.
-- -Mike
Recently I had to sum up a column of numbers appearing in a tab-delimited text file. The following awk program 'summ' worked well, in conjunction with a few other tricks.
#!/usr/bin/awk -f
{ total = total + $1 }
END { print total }
Assume the data file contains:
aaa	44	asdf
bbb	55	asdf
ccc	67	asqq
$ cut -f2 data.txt | summ
166
If I want to process only some of the lines, I can put a 'grep' before it:
$ grep 'asdf' data.txt | cut -f2 | summ
99
If I wanted, I could move both these operations into the awk script. The "1" in $1 could be replaced by any column number, and I could put a regular expression before the first bracket:
/asdf/ { total = total + $2 }
However, I prefer one generic script. I wanted to call it 'sum' but the name was already taken. ('sum' produces checksums.)
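The whole pipeline can also be tried without a separate script file; this sketch recreates the sample data and inlines the same awk program:

```shell
# Recreate the tab-delimited sample file from above.
printf 'aaa\t44\tasdf\nbbb\t55\tasdf\nccc\t67\tasqq\n' > data.txt
# Sum column 2: the same program as 'summ', written inline.
cut -f2 data.txt | awk '{ total = total + $1 } END { print total }'   # prints 166
```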
-- -Mike
Wed, 31 May 2000 02:29:55 +0100 (BST)
From: Joseph Petrow ()
Hi,
In regards to spellchecking for homonyms, I have built a web search spellchecker at http://www.searchspell.com. It is a lookup database of misspellings using ePerl and MySQL on a Linux box. It allows me to customize spelling rules for particular words, and even to recommend possible corrections for correctly spelled words ("hear" for "here", "where" for "wear", etc.). Before reading your column I did not have "hoard" and "horde" in my database, but that is now fixed, and I'm tracking down more and more every day.
Currently my database has over 2,000,000 entries, which I'm able to permute into billions of typo corrections, and each day I'm getting closer to a true "intelligent" spellchecking system.
If you have some spare time, please check it out.
Regards,
Joe Petrow
Sun, 18 Jun 2000 19:27:26 -0500
From: Dan Watling ()
Hey,
I'm having some trouble with allowing regular users to control the ppp0 connection. I even enabled "Allow any user to (de)activate the interface" under netconf. Essentially what happens is the user types in "ifup ppp0" or "ifdown ppp0" and it sits there without ever doing anything. Any ideas or suggestions would be appreciated.
[ You could install mserver, then let the users have a masqdialer client each; they even exist for Windows. But the question is still a good one; why does this hang? --Heather.]
Also, would you happen to know of a Linux help site that is in message board format?
Thanks. -Dan
Sat, 8 Jul 2000 18:27:15 -0400 (EDT)
From: Jason Dixon <>
Hi Angus:
The quickest, easiest way to do what you want is just to extend your expression a bit...
finger | grep 'potatoe ' <instead of> finger | grep potatoe
Note that I added the quotes, with the trailing space. This will match all instances of "potatoe" with a trailing space (for example, a username). However, hostnames (potatoe.onthefarm.com) won't match because of the trailing ".".
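You can watch the trailing space do its work without a live finger server by feeding grep some canned lines (the usernames and hostname here are made up):

```shell
# Only the line where 'potatoe' is followed by a space survives;
# the hostname line has 'potatoe.' instead, so it is filtered out.
printf 'potatoe   tty1\npotatoe.onthefarm.com  tty2\n' | grep 'potatoe '
```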
Hope this helps!
Jason Dixon
Systems Engineer
Tue, 11 Jul 2000 13:41:48 -0700 (PDT)
From: Srinivasa Shikaripura ()
hi,
Things you could try are:
As advice, learn about regular expressions in *nix. The first solution above used the '^' symbol to tell grep to match only lines starting with "potato". You can do many such things with regular expressions...
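A tiny illustration of the '^' anchor (the usernames are made up):

```shell
# '^potato' matches only lines that BEGIN with potato, so the
# hypothetical user 'sweetpotato' is filtered out.
printf 'potato\nsweetpotato\n' | grep '^potato'   # prints: potato
```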
Hope that helps.
cheers -Sas
Fri, 07 Jul 2000 15:29:50 -0700
From: Sudhakar Chandra ()
Matthew Willis () wrote;
You can get a two-column printout from Netscape by using the psutils package. For letter-sized printouts, just change your "Print Command" in Netscape to: pstops -q -w8.5in -h11in -pletter "2:[email protected](8.in,-0.1in)[email protected](8.in,4.95in)" | lpr -h
You will have to edit the Makefile and set PAPER=letter if you live in North America.
Why bother with such a verbose command? Part of the psutils package is a program called psnup. The preceding verbose command can be replaced by:
psnup -c -n 2 | lpr -Pprinter
psnup has also been hacked (by yours truly) to generate back-to-back postscript documents. See http://www.aunet.org/thaths/hacks/psutils
Thanks,
Sudhakar C13n
Sun, 2 Jul 2000 17:40:50 +0200 (MET DST)
From: "Werner Gerstmann" ()
Hallo Jim,
your question in LG#55: You simply have to put the following into /etc/conf.modules (or /etc/modules.conf):
alias ppp-compress-21 bsd_comp
alias ppp-compress-24 ppp_deflate
alias ppp-compress-26 ppp_deflate
and reboot.

Regards,
Werner Gerstmann
Tue, 11 Jul 2000 13:28:52 -0700 (PDT)
From: Srinivasa Shikaripura ()
hi,
There are definitely well-defined file formats. If you are looking at Windows/DOS, the file formats are .COM and .EXE. To learn about these formats, refer to any standard assembly language book, such as "Introduction to Assembly Language" (not sure about the title) by Peter Norton and Socha.

In the *nix world there are two famous executable file formats (AFAIK): "a.out" and "ELF" (Executable and Linking Format). "a.out" is the older standard, and Linux moved away from that format some time back. ELF is the newer, very generic and capable one.

There is a standards document somewhere which defines the format of an ELF file. You could even try 'man elf'; it may tell you something.

In short, an ELF file contains a number of sections: constant data, uninitialized data, executable code, startup code, and debug-info tables.

Maybe if you look at programs like objdump, or at the header files related to the ELF library (libelf), you will find interesting things.
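A quick way to see the format for yourself: every ELF file begins with the magic number 0x7f 'E' 'L' 'F'. This sketch assumes /bin/sh is an ELF binary, which is true on any normal Linux system:

```shell
# Dump the first four bytes of /bin/sh in hex.
head -c 4 /bin/sh | od -An -tx1    # prints: 7f 45 4c 46
```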
Hope that helps a little.
cheers -Sas
Tue, 11 Jul 2000 09:55:35 -0700 (PDT)
From: adh math ()
Dear Ms. Parker,
I hope you've gotten your question posted at Linux Gazette answered by now (six weeks later), but in case not, here are some suggestions:
In the KPPP Setup dialogue, under the IP Address tab there's a box "configure hostname automatically"; *un-check* this box.
KPPP does indeed edit /etc/resolv.conf, but if (under the DNS tab in Setup) you do not check the box "disable existing DNS servers", then your default DNS server (e.g., your local caching DNS server, if you've set one up) should also work, and will be tried before your ISP's DNS server is consulted.
Again, I hope this is not helpful (i.e., that you've already gotten things working again).
adh
Tue, 11 Jul 2000 13:14:19 +0200 (CEST)
From: Dario Papoff ()
Hi,
When you strip a library with strip or objcopy --strip-all, you don't wipe out the dynamic symbol table. This means that a stripped static library becomes useless, but when you strip a dynamic library you don't lose the dynamic symbols (have a look with nm -D or objdump -T at your stripped library), so the library functions can still be referenced.
Bye, Dario Papoff
Sun, 2 Jul 2000 17:41:38 -0400
From: Pierre Abbat ()
I have a LAN, so my boxes have 192.168 addresses, but I use kppp as you do. Here are the relevant options:
Bring up kppp, hit Setup, under the Accounts tab select the ISP, and hit Edit.
IP: Uncheck "Auto-configure hostname".
DNS: If you run your own name server, the address list should have only 0.0.0.0.
Some versions of libc will not work if /etc/resolv.conf has the word "localhost" in it. If "Disable existing" is checked, the contents of /etc/resolv.conf will be commented out while you are on line.
phma
Sun, 25 Jun 2000 11:23:02 EDT
From: GregV ()
Dear Answer Guy,
Searching for more information about the i810 chipset, I came across your discussion about it and Linux.

I had a similar problem with my Linux installation: Linux installed fine and I could use the command line without a problem, but I had no graphics support, that is to say no XFree86.

The solution to this problem is to be found at support.intel.com, under the i810 forum. They have the X server and kernel module and complete instructions for how to install and use the software. You must, however, read the forum posts, as there are a few tricks to the setup procedure.
That being said, I would like to know when/if kernel support will be provided for the i810 chipset. Actually I would rather learn how to find this information for myself. If you teach a man to fish, etc....
Thanks,
GregV
[ Xfree86 is a userspace application; the kernel is only involved a tiny bit for video (unless you use framebuffer, then kernel space is doing a lot of the work). A good way to search is to download a current kernel source package from kernel.org, install it, and grep around in its Documentation/ directory. You can also give keywords you find here (like "AGP" "framebuffer" etc.) to normal search engines like the Gazette's own, or Google! -- Ed.]
Tue, 11 Jul 2000 11:19:10 -0700 (PDT)
From: adh math ([email protected])
Dear Mr. Gauthier,
When you run fsck (or e2fsck) on a filesystem, it is very important that the filesystem be mounted read-only; otherwise fsck will do further (possibly severe) damage to the filesystem being checked.
As you may know, Linux stores data in RAM buffers, so when there's a sudden power-out, a lot of data about the running system is lost. However, unless the power goes out while you're booting, I don't think you should lose configuration files like inetd.conf. That's what makes me suspect that fsck was run on a read-write filesystem.
Recent kernels (2.2.14 or later, say) are better about syncing RAM buffers to the disk every minute or so (so less data is lost in a crash), and ext3 (the new filesystem type) handles crashes better than ext2 (in theory).
About desktop applications, KDE allows you to add executable icons on your desktop; right click on the desktop and select "New Application" in the dialogue box that pops up, then fill in information as directed. You should have a couple of clock programs, such as "xclock", "oclock", perhaps even "daliclock" (perhaps this is a GNOME program...?).
Hope that's helpful.
adh
Mon, 10 Jul 2000 04:13:12 +0100 (WEST)
From: Luis Pinto ()
Hi! I saw your question on Linux Gazette, to which I'll try to respond:
After the upgrade, you have probably erased your /etc/X11/XF86Config. The computer is trying to start X at boot because you have the line:
id:5:initdefault:
in your /etc/inittab file. You must change the number 5 to 3. To do so, you must boot giving the 'single' option to LILO:
LILO boot: linux single
Then, you must edit the /etc/inittab file, change the previous line, and reboot. After that, you must use Xconfigurator, XF86Setup or any other tool to configure your X.
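The inittab edit itself is tiny. Here is a sketch against a scratch copy (on the real system the file is /etc/inittab and you must be root):

```shell
inittab=./inittab.example                  # stand-in for /etc/inittab
echo 'id:5:initdefault:' > "$inittab"      # the line that forces runlevel 5
sed 's/^id:5:/id:3:/' "$inittab" > "$inittab.tmp" && mv "$inittab.tmp" "$inittab"
cat "$inittab"                             # prints: id:3:initdefault:
```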
Hope to have helped...
Luis Pinto
Sun, 2 Jul 2000 17:53:36 -0400
From: Pierre Abbat ()
You may have a bad sector. I had a bad sector in the inode area, and every so often a file would land there and cause havoc. The worst was when /etc/mtab landed on the bad inode. The computer couldn't tell what was mounted and refused to boot. I fixed it with fsck -c .
phma
Tue, 11 Jul 2000 11:46:29 -0700 (PDT)
From: adh math ()
Dear Mr. Adams,
Regarding your June 15 post at Linux Gazette, you might swing the desired arrangement with port forwarding on the proxy server (ipportfw, one of the IP masquerading utilities), but it may not be easy (read: impossible if you don't have root access on the proxy, merely difficult otherwise). I'm pretty sure it's impossible if the proxy is also accepting HTTP connections on port 80, since you can't (to my knowledge) run two services on the same port.
Even if the technicalities can be overcome, there are good reasons not to allow telnet connections through your proxy firewall's www port:
(That's just off the top of my head... I'm confident there are other good reasons.)
Well, not to lecture, but it sounds like a bad idea to me. More positively, I think you'd do better to convince your employer to run an SSH server inside the firewall, and/or to allow outgoing SSH connections.
Sincerely,
adh
Tue, 11 Jul 2000 00:10:04 -0500
From: Jim Liedeka ()
I have run into the problem you are describing. I added a SCSI card to my machine which hosed Win95. I never did get Win95 working but I kept hosing my boot sector trying to reinstall it. The solution is really pretty simple.
I wrote this from memory so I may have left out a step or two but I hope this will give you the idea.
Jim
Tue, 11 Jul 2000 10:47:00 +1000
From: chimera ()
I think either objcopy or strip can be used. However, the Linux Bootdisk HOWTO says that only debug symbols should be removed (--strip-debug). What would happen if everything is removed (--strip-all)? I have tried, and the resulting boot/root disk seems to be OK. However, something must be wrong... I found this out after some hair-pulling: some distributions have a "depmod -a" in the initialisation scripts. This uses the object symbols to resolve the dependencies between modules. If you strip all, depmod cannot resolve them, and hence cannot work out that whenever you load sound.o you will also need to load sb.o.
If your bootdisk has already run depmod, then I suppose you can strip-all to save space. There may be other reasons why you shouldn't do a strip-all that I don't know about.
chimera
Mon, 26 Jun 2000 13:05:21 CDT
From: Heather Stern, Linux Gazette Technical Editor ()
Norman King () wrote:
I saw a post by you on the Linux Gazzette about Voice Mail, E-Mail, Faxes, etc. integrated on Linux. You said it was possible via scripts, but you did not cite any examples of software to use to do this.
I have seen mgetty+sendfax, but it is not ready for prime time: it works with only a few voice modems, doesn't always work even then, and is still in beta.

Do you know of any open-source, shareware, freeware, or commercial Linux solutions that do all of this and, if commercial, cost under $200?
Thanks.
[ You could certainly try HylaFax; it is open source and absolutely free. Specifically, the fellow who wrote it works at SGI, and they let him give it away, as long as they get to disclaim everything and not be involved in it. So check out http://www.hylafax.org right away.
There are probably others, I'm sure we'll get notes about it now. --Heather.]
[The Help Dex cartoons were not available by the publishing deadline, and will return next month. Instead, let's look at some of Shane's other cartoons. This is from the series Maximux. Maximux is a superhero, of course, as Shane's protagonists frequently are. Maximux is like Hercules. And just like Batman has his sidekick Robin, Maximux has his Router. -Mike]
You can still see the latest HelpDex episodes on Linux Today.
OLinux: Tell us about your career: college, jobs, personal life (age, birthplace).
Russel Pavlicek: I was born and raised in New York state in the US. I grew up in a town called Ossining that is known for a few things, including Sing Sing Prison (a favorite reference in American gangster films in the 1940's), Peter Falk (who played Lieutenant Columbo in the long-running US TV series of the same name), and Martha Quinn (who was one of the original VJ's on MTV). I received a Bachelor's Degree from Trinity College in Hartford, Connecticut.
I joined the consulting team at Digital Equipment Corporation in 1987 and remained in place through the acquisition by Compaq. I now live in Maryland with my wife (Maryann) and children (Stephanie and Christopher) and numerous pets. We have hosted numerous exchange students over the past few years, including students from France, Italy, and Brazil. Professionally, I spent years as a VMS consultant for Digital. I had little Unix experience or interest. In 1995, I had the need to develop some Unix skills. I came across Linux (Yggdrasil Plug-and-Play, Fall 1994 edition, for those who care about such things) and started using it for training purposes. Within 2 years, it became my preferred desktop system.
OLinux: You told us that you are writing a book. What is the subject of this work? Have you published any other books? Don't forget to ship us a copy, ok ?:)
Russel Pavlicek: The book is entitled "Open Source Software Development: Adding Clarity to Chaos." It is meant to give people outside of the Open Source world an introduction to Open Source development and the people behind it. It should be useful for many people who use Open Source software, but do not necessarily understand why the community behaves like it does. I think it will also have value for people within the movement who know how the movement works but have not spent that much time considering why it works the way it does. This is my first book. And, yes, I've asked the publisher to send you a copy.
OLinux: What's your position at Compaq?
Russel Pavlicek: My current job is to help build up our Linux consulting expertise worldwide. Compaq recognizes the potential opportunity we have in the Linux consulting arena and is working to prepare our workforce. I am also a frequent speaker at Linux and Open Source trade shows and technical conferences, as well as an occasional author of articles for various Linux websites. The two topics I speak about the most are Linux Advocacy and a perspective on understanding the Open Source movement called, "Welcome to the Insane Asylum."
My speaking schedule and bibliography can be found at http://www.erols.com/pavlicek/
OLinux: Do all Compaq's systems run the Linux OS? Can you list or indicate a url where our users can find detailed technical information about those products/drives/patches?
Russel Pavlicek: Almost all Alpha systems and Proliant systems can run Linux. A few of the very early Alpha boxes used a bus called Turbochannel which is not yet supported on Linux (NetBSD apparently works, though). But just about everything else works. Some options (some video cards, for example) do not have drivers, but there are other supported options that can be used instead.
Compaq's main Linux website can be found at http://www.compaq.com/linux
We host a site about Linux on handheld units (including the iPaq H3600) at http://www.handhelds.org/.
That site includes links to information about Linux on both Alpha systems and Proliant servers. In addition to this, there is good information about Linux on Alpha at the following non-Compaq sites: Alpha Linux information, peer-to-peer support.
OLinux: What is Compaq's marketing strategy for Linux?
Russel Pavlicek: Compaq sees room for many operating system solutions in today's computing environment. Some customers want Tru64 Unix or OpenVMS. Others want Linux. And others want Windows 2000 or Windows 98. Compaq tries to meet the needs of customers across the board.
Compaq sees Linux being very important in a number of markets today. It is a key component for customers needing low cost but highly effective webservers. It is absolutely critical in the high performance technical computing arena, thanks to the excellent price performance value of Beowulf clusters. Linux is also making excellent progress in the handheld market (we recently released information on how to load Linux on the latest of our iPaq handheld units).
OLinux: What are Compaq's key alliances & investments with Linux companies and organizations to support this platform?
Russel Pavlicek: Compaq has partnerships with a number of key Linux companies including Red Hat, SuSE, TurboLinux, and Linuxcare. We are the only top tier solutions company that can claim to be an original member of Linux International.
Compaq realizes that we must work with the community, and we are doing just that.
OLinux: How does Compaq analyze Linux growth for past few years? Is it consistent growth in your opinion? To what extent does Compaq want Linux to succeed as an alternative operating system on the server and desktop?
Russel Pavlicek: From my personal perspective, the growth of Linux over the past few years has been nothing but spectacular. I started using Linux in 1995 and actively tracking the Linux community in 1997. By the beginning of 1997, I was entirely convinced that there was a viable industry growing out of the Linux community.
I attended the Atlanta Linux Showcase in 1997. It was held on a weekend with about 500 attendees and about 20 vendors on the expo floor. Even though it was a trivial show by computer industry standards, I walked away from it totally energized. I knew without a doubt that Linux was going to be big -- very big. There was enthusiasm and passion I had not seen in years coming from the attendees. The reports of new work in progress were exciting. None of the industry analysts were paying attention to Linux, but I could see that something of significance was happening here.
In 1997 I began suggesting to coworkers that we should be developing a Linux strategy. There was a fairly small group of us who understood what was going on, but before too long it became apparent to everyone that Linux was not just a hacker's project. It was turning into an exciting operating system.
Today, we have a Linux Program Office focused on the success of Linux on Compaq's platforms. There are many people working hard to make sure that Linux works well on both our Proliant systems and Alpha systems. We have had engineers working on Linux Alpha since 1994.
OLinux: What is your role representing Compaq for Linux International? How does Compaq support the project? Are there any special funds direct to li.org?
Russel Pavlicek: I just recently took the position of Compaq's representative to Linux International. I am still getting acclimated to the task. Jon "maddog" Hall had been our representative for years until his departure last year to become the full-time Executive Director of Linux International.
OLinux: In your opinion, how much has the Linux/Open Source community grown, and how do you see its future?
Russel Pavlicek: Since 1997, when I first joined the community, it has changed in many ways and stayed the same in others. Obviously, the number of people using Linux these days has grown dramatically. The community used to be almost exclusively programmers, but that is no longer true. In my experience, the average Linux user still has some technical background, but most are not actively contributing code to the community. I am active in a local Linux User Group and there are a number of people involved in the community these days who are technical, but not working on projects. The community is changing from primarily code hackers to technical users.
The number of non-technical users has grown significantly, as well. Even my wife and daughter, who are not technical people at all, use Linux on a daily basis without problem. This is the growing frontier.
OLinux: What are the main internet technologies that you consider extremely interesting or relevant advance for technology information?
Russel Pavlicek: I think clustering technologies will be critical on the server side in the near term. There are many things in development and more things planned, but a full, rich clustering solution will be a key development for Linux when it arrives. Linux on handheld and embedded processors will be vitally important in the near future, I believe. That is why I am proud to see that our people have worked so hard to port Linux to our new iPaq handheld. Also, system installation procedures have come a long way in the past five years, but we need to continue on to make installation incredibly simple. This is especially true on laptops.
There are many packages that can supply standard mail services under Linux. Basically the UNIX/Linux e-mail model involves MTAs (mail transport agents), MSAs (mail storage/access agents) and MUAs (mail user agents). There are also a variety of utilities that don't really fit into any of these categories.
Under Linux there are several MTAs including sendmail, the most common across most forms of UNIX; and D.J. Bernstein's qmail and Wietse Venema's Postfix. These accept and relay mail. This sounds quite simple, but in practice it can be quite complex. There are a number of routing and masquerading options that can be set by administrative policy --- and these amount to programming languages that filter and modify the headers of each message as it is relayed. In addition the process of routing mail and finding user mail boxes (mail stores) can involve arbitrarily complex interactions with various directory services (DNS, passwd files, NIS, LDAP alias/dbm files, and all manner of custom databases).
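As a hedged, minimal illustration of what such administrative policy looks like in practice, here is a masquerading sketch in sendmail's m4 configuration language (the domain name is a placeholder; a real site would add these directives to its own sendmail.mc and rebuild sendmail.cf):

```
dnl Rewrite the From: headers of outgoing mail to show only the public
dnl domain name rather than internal hostnames. example.com is a placeholder.
MASQUERADE_AS(`example.com')
dnl Also masquerade the envelope sender, not just the headers.
FEATURE(`masquerade_envelope')
```

Qmail and Postfix express the same policies through their own control files, but the idea is the same: a few lines of declarative configuration drive the header rewriting described above.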
These days MTAs also have to implement anti-spam features that amount to access control lists and rules about the address formats (To and From headers) that are allowed from specific domains and address ranges. (Those generally also involve queries against tables or directory services, including Paul Vixie's RBL (the real-time blackhole list, part of MAPS, the Mail Abuse Prevention System) and its ilk, like Dorkslayer/ORBS.) Recently, MTAs are increasingly required to enforce other policies and implement anti-virus/anti-worm features.
The most common cases are easy enough to install and configure. However, all that power and flexibility comes at a price. As your organization chooses to tailor its MTA to meet your special routing, nomenclature, security and anti-spam requirements you'll require more sophisticated configuration options and many of those will involve choreographing complex relationships between your MTA and various other subsystems (such as any LDAP and DNS servers you use).
Once you've selected, installed and configured an MTA you generally will also need to go through the same process for an MSA. Most organizations these days don't deliver mail directly to desktop client systems. They store the mail on servers and have the users fetch their mail via POP or IMAP. There are various protocols for managing a mail store, but the only two that really count these days are POP3 and IMAP4 (there are also older versions of each of these protocols, of course). As with the MTAs there are a number of programs (daemons) that can provide each of these services. Most MSAs can work with any common MTA. In addition these systems usually do locking and/or use other mechanisms so that multiple MSAs can be concurrently in use without conflict.
That means that you can have some users who access their mail via POP, while others use IMAP and others might even log in and use a local MUA (such as pine, mutt, or elm). Individual users can switch from one mail access method to another, usually without requiring any sysadmin intervention. Clever users can often bypass the normal MSA/MUA tools and use normal UNIX commands (like cp and mv) and FTP or rsync to move their mail around. (This is generally too clunky for normal use, but can be quite handy when fixing corrupted mailboxes, etc.)
The first time I was ever called upon to set up a POP server on an existing general-purpose Linux server, I was surprised to find that there was no work required. A POP daemon had been installed and enabled when I did the initial OS installation; I had disabled it (commenting out a line in the /etc/inetd.conf file) during my routine system hardening. So "setting up" the service simply required that I uncomment one line in one file and restart one service/daemon.
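That uncomment-and-restart step can be sketched as follows. The pop-3 line shown is a typical one, not necessarily what your distribution ships, and the demonstration edits a scratch copy rather than the real /etc/inetd.conf:

```shell
# A typical (hypothetical) POP3 line, disabled with a leading '#',
# copied into a scratch file so we can demonstrate the edit safely:
cat > inetd.conf.sample <<'EOF'
#pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  ipop3d
EOF

# Re-enable the service by stripping the comment marker:
sed 's/^#pop-3/pop-3/' inetd.conf.sample > inetd.conf.new

# On a live system you would edit /etc/inetd.conf itself and then
# tell inetd to re-read its configuration:
#   killall -HUP inetd

grep '^pop-3' inetd.conf.new
```

The final grep confirms the line is now active; on a real system the HUP signal makes inetd start listening on the POP3 port immediately.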
IMAP is similar. Where POP generally transfers mail to the client system and removes it from the server, IMAP allows one to store the mail in server-side folders, and the copies on client systems are essentially a cache or "working copy" --- this usually costs more server storage space, but it allows the IT teams to focus on server backup/recovery and allows the client systems to be considered more-or-less disposable. IMAP can be used just like POP (where the mail is expunged from the server by the clients after delivery). Operationally, there isn't much difference. Both services are normally started by inetd (the network dispatcher service; Linux's "receptionist" if you will).
A POP or IMAP server can run for years, serving hundreds, even thousands of mailboxes and users, without ever requiring any special attention. Still, occasionally your users or their e-mail correspondents will do something stupid, or some software they run will exhibit a bug, and the system administrator will have to go in and do some troubleshooting or cleanup.
For example, one time I had a user complaining that his POP e-mail was broken. I found out that one of his customers had sent him a bit of e-mail with a 100Mb file attachment! (It was a Netware crash dump image.) This was bumping into some diskspace and speed/capacity limits on the old 32Mb 486 that we were using to serve mail to him and the other 50 people in the department. I fixed it in a few minutes: I used some command line tools to uudecode the attachment back into a file, which I put in the user's home directory, and I tossed together a quick throwaway script to extract the rest of his e-mail by building a new mailbox for him. (mbox files under UNIX are simple text files; qmail mail stores are directories with individual small text files, one for each message.) Any competent intermediate system administrator could have done the same thing.
So most of the problems you might encounter with MSAs and MTAs can be fixed with text editors and common UNIX filters and utilities.
There are many MUAs that will work with POP and IMAP servers, including Microsoft's Outlook. Under Linux many people use 'fetchmail' to fetch their mail to a local mail spool (mailbox). Then they can use any MUA (elm, pine, mutt, MH/exmh, EMACS' rmail, vmail, mh-e, gnus, and the plethora of GUIs like Balsa, Mahogany, etc). Many other Linux users choose Netscape Communicator's built-in mail client.
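A minimal configuration for the fetchmail approach might look like the following ~/.fetchmailrc sketch. The server name and credentials are placeholders, not anything from a real site:

```
# Fetch mail from a (hypothetical) POP3 server into the local mail spool.
poll pop.example.com protocol pop3
    username "jdoe" password "secret"
```

With that in place, running fetchmail (by hand or from cron) pulls the mail down, after which any of the MUAs named above can read it from the local spool.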
Under Linux and UNIX there are other tools, like procmail, vacation, biff, and fetchmail, which, as I said before, don't fall into any of the three classic categories (MTA, MSA, MUA) described earlier.
procmail is usually used as a "local delivery agent" and as a mail processing agent. It's generally used to filter mail as part of the final delivery of a message to its end recipients. This allows a user to write scripts to automatically refile, reject, respond to, forward or otherwise work with selected bits of mail as they are received. (It can also be used to post-process mailboxes and as a more general e-mail programming language/library.)
(vacation is an old program that can be used simply to provide an automated response to e-mail upon receipt. It was originally used to warn correspondents that the recipient was "on vacation." The same thing can be done with a simple two-line procmail recipe.)
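For the curious, here is a hedged sketch of such an autoreply recipe, adapted from the style of the examples in the procmailex man page. It is a little longer than the bare two-liner because it guards against answering daemons and its own replies; the address is a placeholder:

```
# Auto-reply to personal mail, guarded against loops and daemon mail.
:0 hc
* !^FROM_DAEMON
* !^X-Loop: jdoe@example\.com
| (formail -r -A"X-Loop: [email protected]"; \
   echo "I am away until August 1.") | $SENDMAIL -oi -t
```

The `hc` flags feed procmail a copy of the headers only; formail -r builds the reply header and the X-Loop test keeps two autoresponders from mailing each other forever.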
biff is a utility to notify a user that mail has arrived. (There are various similar utilities for doing this in GUIs, displaying icons, animations, emitting music or vocal announcements, relaying biff notifications over a network and using various backend MSA protocols, etc).
Ping is a diagnostic tool for verifying connectivity between two hosts on a network. It sends ICMP Echo Request packets to a remote IP address and watches for ICMP responses. The author of the initial version of the ping program that we use today was Mike Muuss. Many other people have tweaked, rewritten, and variously abused ping since then.
The name ping itself is somewhat colorful. Some people claim that it is an acronym standing for Packet INternet Groper, but this is not the case; ping was named after the sound of a sonar tracking system. There is even a story about a system administrator who wrote a script that repeatedly pinged a host on the network and made an audible "pinging" alert for each success. He was then able to go methodically through his network checking BNC connectors until he found the dodgy connector that had been plaguing his network: when the noises stopped, he'd found his culprit.
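A re-creation of that trick might look like the following shell sketch. The original script is lost to legend, so this is a guess at its shape; the host defaults to loopback and the loop is capped at a few probes here so the example terminates on its own:

```shell
#!/bin/sh
# Beep once for every successful ping reply, so you can hear the link
# die while wiggling connectors. Host and probe cap are placeholders.
HOST=${1:-127.0.0.1}
count=0
while [ "$count" -lt 3 ] && ping -c 1 "$HOST" >/dev/null 2>&1; do
    printf '\a'            # audible terminal bell on each success
    count=$((count+1))
    sleep 1
done
echo "stopped after $count successful pings"
```

In the anecdote the loop would simply run until interrupted; the cap is only there to make the sketch self-terminating.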
Ping used to be a very good indicator of a machine's ability to receive and send IP packets in general: if you could ping a host, you could also make an FTP or HTTP connection to it. With the wider advent of packet filtering for security, this is becoming less true. Many firewalls explicitly disallow ICMP packets on the twin grounds that:

1) people don't need to know what your internal network looks like, and

2) any protocol can be used to launch an attack, even ICMP.
There are additional flavors of the ping command that have been written for other purposes. Among the most common is the fping command, which was written to ping a range of addresses and is commonly used in network scanners and monitors like saint and mon (both of which are covered in this chapter). Another variation is the Net::Ping module, which provides a Perl implementation of ping functionality that can easily be used from within a script without calling an external program. You might use it in a script something like this:
Example 1. Using Net::Ping
#!/usr/bin/perl -w
use strict;
use Net::Ping;

my $host = $ARGV[0];
my $p = Net::Ping->new("icmp");
if ($p->ping($host)) {
    print "$host is alive.\n";
} else {
    print "$host is not reachable.\n";
}
Ping is most often used without additional arguments and is stopped with Ctrl-C. The results look like this:
[pate@cherry pate]$ ping mango
PING mango (192.168.1.1) from 192.168.1.10 : 56(84) bytes of data.
64 bytes from mango (192.168.1.1): icmp_seq=0 ttl=255 time=0.5 ms
64 bytes from mango (192.168.1.1): icmp_seq=1 ttl=255 time=0.3 ms
64 bytes from mango (192.168.1.1): icmp_seq=2 ttl=255 time=0.3 ms
64 bytes from mango (192.168.1.1): icmp_seq=3 ttl=255 time=0.3 ms
64 bytes from mango (192.168.1.1): icmp_seq=4 ttl=255 time=0.3 ms
64 bytes from mango (192.168.1.1): icmp_seq=5 ttl=255 time=0.3 ms

--- mango ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.3/0.3/0.5 ms
[pate@cherry pate]$
This example also shows another important point: you should not rely on a single packet to diagnose your network. A series of five or ten is much better, since up to 40% packet loss can be put down to congestion on the network, and even a single dropped packet can be attributed to a busy host on the other end.
There are several useful options to the ping command. These are summarized in the following table:
Table 1. Ping Command Options
Switch          Description
------          -----------
-c count        Stop sending and receiving packets after count packets.
-d              Set SO_DEBUG on the socket used.
-f              Send the packets as fast as possible (flood).
-i wait         Set an interval of wait seconds between packets.
-I device       Set the output interface.
-l preload      Send preload packets as fast as possible, then drop back to normal mode.
-n              Don't look up hostnames, just give IP addresses (numeric).
-p pattern      Specify up to 16 bytes of "pad data" to be sent with the packet.
-q              Output only summary lines (quiet).
-r              Don't use routing tables to send the packet, just drop it out the local interface.
-R              Set the Record Route option.
-s packetsize   Set the number of data bytes sent to packetsize.
-T tsonly       Send a ping with the timestamp option.
-T tsandaddr    Collect timestamps and addresses.
-T tsprespec [host1 [host2 [host3 [host4]]]]
                Collect timestamps and addresses from prespecified hops.
These options can be combined to make ping even more helpful. One thing that you cannot see on the page is that the ping command used in the previous section is likely to take several seconds to run and report back. Using the -f switch will reduce the time spent waiting for the command. Combining this with the -c 10 and -q switches will give you quicker results and easier-to-read output:
[root@cherry /root]# ping -c 10 -fq mango
PING mango (192.168.1.1) from 192.168.1.10 : 56(84) bytes of data.

--- mango ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 0.2/0.2/0.9 ms
[root@cherry /root]#
Note: The -f and -l switches can only be used by root, as they can cause serious network degradation if misused.
It might be of some benefit to test larger packets; ping -c10 -s 1024 -qf will send them for you. This can be especially useful where you suspect problems with fragmented packets.
To see the route that your packets are traversing, you can use ping -c10 -R. This command produces the following output:
PING tbr.nailed.org (206.66.240.72) from 192.168.1.10 : 56(124) bytes of data.
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=0 ttl=239 time=217.2 ms
RR:     192.168.1.10
        216.41.39.90
        serial0.mmgw32.bos1.Level3.net (209.244.39.25)
        208.218.130.22
        166.90.184.2
        so-6-0-0.mp2.NewYork1.level3.net (209.247.10.45)
        137.39.52.10
        180.ATM7-0.BR2.NYC9.ALTER.NET (152.63.22.229)
        lo0.XR2.NYC9.ALTER.NET (137.39.4.175)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=1 ttl=239 time=1940.8 ms  (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=2 ttl=239 time=250.6 ms   (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=3 ttl=239 time=230.3 ms   (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=4 ttl=239 time=289.8 ms   (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=5 ttl=239 time=1261.4 ms  (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=6 ttl=239 time=469.4 ms   (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=7 ttl=239 time=1272.3 ms  (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=8 ttl=239 time=353.1 ms   (same route)
64 bytes from bigfun.whirlycott.com (206.66.240.72): icmp_seq=9 ttl=239 time=1281.1 ms  (same route)

--- tbr.nailed.org ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max = 217.2/756.6/1940.8 ms
Note: The Record Route option specified by the -R switch is not honored by all routers and hosts. Further, it contains only a limited amount of space to hold router addresses; traceroute may be a better tool for identifying the path packets follow through a network.
The ping command is a very useful tool for your troubleshooting kit, and should not be overlooked.
This article is copyright 2000, Pat Eyler and New Riders Publishing. It is presented under the Open Publication License, with no additional terms applied. It is a draft version of a section of the book Networking Linux: A Practical Guide to TCP/IP, which will be published by New Riders Publishing in the winter.
Author: Fyodor
Required: flex, bison
Homepage: http://www.insecure.org/nmap
Current stable release: 2.53
License: GPL
Platform ports: Linux, FreeBSD, NetBSD, OpenBSD, Solaris, IRIX, BSDI, SunOS, HP-UX, AIX, Digital UNIX, Cray UNICOS and Windows NT.
The intent of this article is to familiarize the reader with the network scanner nmap. As Lamont Granquist (an nmap contributor/developer) points out, nmap does three things: it will ping a number of hosts to determine if they are up, it will portscan hosts to determine what services they are offering, and it will attempt to determine the OS (operating system) of the host(s). Nmap allows the user to scan networks as small as a two node LAN (Local Area Network), as large as a 500 node LAN, or even larger. Nmap also allows you to customize your scanning techniques. Sometimes a simple ICMP (Internet Control Message Protocol) ping sweep may be all you need; at other times you may want a stealth scan that reports which UDP (User Datagram Protocol) and TCP (Transmission Control Protocol) ports are available and what operating system the host is using. You can do all that and log the data in either human-readable or machine-parsable format. In this article I will be covering some basic to intermediate scanning techniques to get you off and running with nmap. If you love it enough, then I would suggest reading the nmap man pages 50 times and then translating them into the foreign language of your choice ;)
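On the machine-parsable logging just mentioned: in this release nmap writes it with the -oM option (normal human-readable output goes to -oN). The log line below is a hand-made example in that "grepable" one-record-per-host style, not real scan output, but it shows how easily standard tools digest such a log:

```shell
# Hypothetical one-line machine-parsable record for host Eve, in the
# style nmap's machine-parsable output uses (one host per line):
cat > scan.log <<'EOF'
Host: 192.168.0.2 (Eve) Ports: 21/open/tcp//ftp///, 23/open/tcp//telnet///, 80/open/tcp//http///
EOF

# Pull out just the open port numbers, one per line:
grep -o '[0-9]*/open' scan.log | cut -d/ -f1
```

Because every host occupies exactly one line, grep, cut, and awk make quick work of scans covering hundreds of hosts.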
Some Linux distributions come with nmap as part of the install. If you do not have nmap, then let's begin by grabbing the latest copy and getting it up and running. The version I will be covering here is the source code tarball; optionally, you have both rpm and source rpm packages to choose from. The Linux distribution I am using is Red Hat 6.1. Download the nmap-latest.tgz file into your home directory. Once the download is complete, run tar -zxvf nmap-latest.tgz and this will unpack the source code into your home directory. Go into the newly created nmap directory and read both the README and INSTALL files. Ideally the next step is to run ./configure, make, and (as root) make install in the top level of the newly created directory. This will install the nmap binary into /usr/local/bin. From here we're ready to run nmap.
Scanning types
Without further ado, let's get down to business with nmap. First we will need an address to scan against. If you are working from a LAN, then pick the address of one of your hosts. Let's say that your LAN consists of two machines: Adam and Eve. Adam (192.168.0.1) is the unit we'll be running nmap on. Eve (192.168.0.2) is the machine we will be scanning. From the command line I would type the following:
nmap 192.168.0.2
Here is a sample output from the scan:
Example 1
Starting nmap V. 2.53 by [email protected] (www.insecure.org/nmap)
Interesting ports on Eve (192.168.0.2):
(The 1511 ports scanned but not shown below are in state:closed)
Port State Service
21/tcp open ftp
23/tcp open telnet
25/tcp open smtp
79/tcp open finger
80/tcp open http
98/tcp open linuxconf
111/tcp open sunrpc
113/tcp open auth
513/tcp open login
514/tcp open shell
515/tcp open printer
6000/tcp open X11
Nmap run completed -- 1 IP address (1 host up) scanned in 1 second
What the above example did was run a vanilla TCP scan against the designated address. As we can see from this sample output, our host is up and we get a list of the ports that are listening. This is of course the most basic of all commands and can be run without any special privileges. The disadvantage of this call is that any host running logging software will easily detect this sort of scan. The output of this call would be the same as adding the -sT option to the command line, so it would look like this: nmap -sT 192.168.0.2. (Note that this call is available to normal users.)
Not on a local LAN? Working from a single-host dial-up machine? No problem: run ifconfig (or use your favorite text editor to view your /var/log/messages file and look for the last entry that contains a remote IP address) to obtain your IP address and go from there. Let's say my IP address is 206.212.15.23; we can use that as a premise to base our scans on. So with that in mind, let's check on our "neighbor":
nmap -sT 206.212.15.22
Here is the sample output:
Example 2
Starting nmap V. 2.53 by [email protected] (www.insecure.org/nmap)
Interesting ports on find2-cs-4.dial.ISP.net (206.212.15.22):
(1522 ports scanned but not shown below are in state: closed)
Port State Service
139/tcp open netbios-ssn
Nmap run completed -- 1 IP address (1 host up) scanned in 20 seconds
This is a very basic example of nmap's capabilities, but it at least gives the beginner some grounds to work from when not on a local LAN.
-sS Now let's say that you wish to use a more stealthy scan to prevent detection. You would then use our previous example, only with the -sS (SYN) call, so it would look like this: nmap -sS 192.168.0.2. The -sS (SYN) call is sometimes referred to as the "half-open" scan because you do not initiate a full TCP connection. The output will read the same as example 1, only with a lesser chance of detection on the other end. Unlike the -sT call, this call requires root privileges.
-sF -sX -sN Now, for the truly paranoid, or for instances when the target may be running filtering or logging software that detects SYN scans, we can issue a third type of call with the -sF (Stealth FIN), -sX (Xmas Tree) or -sN (Null) scan. Note: Since Microsoft insists on doing things their own way, none of the FIN, Xmas or Null scan modes will work on Windows 95/98 or NT boxes. So if we get a listing of available ports running either the -sT or -sS options, but "All scanned ports are: closed" running the -sF, -sX or -sN option, then we can safely assume that the target is probably a Windows box. This really isn't a necessary procedure for identifying a Windows machine, since nmap has built-in OS detection, which we will cover later. These three commands also require root privileges.
-sU This option tells nmap to scan for listening UDP (User Datagram Protocol) rather than TCP ports on a target host. Although this can sometimes be slow against Linux machines, it runs particularly fast against Windows boxes. Using our previous examples of Adam and Eve, let's run (once again, root privilege is required) a -sU scan against Eve:
nmap -sU 192.168.0.2
Here is the sample output from the scan:
Example 3
Starting nmap V. 2.53 by [email protected] (www.insecure.org/nmap)
Interesting ports on Eve (192.168.0.2):
(The 1445 ports scanned but not shown below are in state: closed)
Port State Service
111/udp open sunrpc
517/udp open talk
518/udp open ntalk
Nmap run completed -- 1 IP address (1 host up) scanned in 4 seconds
As we can see, nmap scanned Eve's UDP ports and gave us a listing of those it found to be listening. We can gather from examples one and two that we are looking at a Linux install. With that in mind, remember that in the introduction I mentioned that nmap performs three things: it pings, it port scans, and it detects the target's operating system. Now that we've briefly covered the first two uses, let's move on to OS detection.
-O This is the option used to determine the operating system of the given target. It can be used in conjunction with the scan types mentioned above or by itself. Nmap uses what is called TCP/IP fingerprinting to try to accurately determine the OS of the given target. For a more complete reading on OS fingerprinting, please see Fyodor's article titled "Remote OS detection via TCP/IP fingerprinting" found here. Now, with that in mind, let's get right to our next example. Using our target host (Eve) from Example 1, I would type the following (note that the -O option requires root privileges):
nmap -O 192.168.0.2
Here is the sample output from the scan:
Example 4
Starting nmap V. 2.53 by [email protected] (www.insecure.org/nmap)
Interesting ports on Eve (192.168.0.2):
(The 1511 ports scanned but not shown below are in state: closed)
Port State Service
21/tcp open ftp
23/tcp open telnet
25/tcp open smtp
79/tcp open finger
80/tcp open http
98/tcp open linuxconf
111/tcp open sunrpc
113/tcp open auth
513/tcp open login
514/tcp open shell
515/tcp open printer
6000/tcp open X11
TCP Sequence prediction: Class=random positive increments
Difficulty=1772042 (Good luck!)
Remote operating system guess: Linux 2.1.122 - 2.2.14
Nmap run completed -- 1 IP address (1 host up) scanned in 1 second
Notice that nmap reports the same available port data as it did in example 1 (due to the default -sT option), but also the OS of the machine (in this case Linux) and the kernel version... not bad, eh?! Nmap comes equipped with an impressive OS database.
Instead of limiting ourselves to scanning just one target, let's broaden our horizons to bigger and better things. In example 2 we used our own IP address as the base for a scan. Using that address again, we can get a look at numerous targets in our "community". At the command line, type the following (substituting a valid address of your choice, of course):
nmap -sT -O 206.212.15.0-50
What this does is instruct nmap to scan every host between the IP addresses 206.212.15.0 and 206.212.15.50. If you get a lot of interesting feedback from this or a larger-scale scan, you can always send the output to a human-readable file or a machine-parsable file for future reference by issuing one of the following options:
To create a human-readable output file, add the -oN <textfile name> option to your nmap string so that it looks similar to this:
nmap -sT -O -oN sample.txt 206.212.15.0-50
Rather have a machine-parsable file? Use the -oM <textfile name> option to send the output to a machine-parsable file:
nmap -sT -O -oM sample.txt 206.212.15.0-50
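Once you have a machine-parsable file, any scripting language can sift through it. Below is a rough Python sketch that pulls the open ports out of one "Host:" line. The exact field layout used here (port/state/protocol/owner/service/...) is an assumption based on a sample of nmap 2.x's grepable output, so check it against your own -oM file before relying on it:

```python
# Sketch: extract open ports from one "Host:" line of nmap -oM output.
# The field layout below is an assumption; verify against your own file.

def open_ports(line):
    """Return (port, protocol, service) tuples for the open ports
    reported on one "Host:" line of machine-parsable output."""
    results = []
    if "Ports:" not in line:
        return results          # host line with no port list
    ports_field = line.split("Ports:")[1]
    for entry in ports_field.split(","):
        fields = entry.strip().split("/")
        # assumed fields: port / state / protocol / owner / service / ...
        if len(fields) >= 5 and fields[1] == "open":
            results.append((int(fields[0]), fields[2], fields[4]))
    return results

# A hypothetical line in the assumed format, for illustration:
sample = ("Host: 192.168.0.2 (Eve)\t"
          "Ports: 21/open/tcp//ftp//, 23/open/tcp//telnet//, "
          "139/closed/tcp//netbios-ssn//")
```

Running open_ports(sample) on the line above would report ports 21 and 23 as open and skip the closed one.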
*Back when I was becoming acquainted with all the nmap options, I ran my first large-scale scan against 250 consecutive machines using an arbitrary address (nmap -sX -O -oN sample.txt XXX.XXX.XXX.0-250). To my great surprise, I was confronted with 250 up-and-running virgin Linux machines. Another reason why Linux enthusiasts should NEVER become bored.
-I This is a handy little call that activates nmap's TCP reverse ident scanning option. This divulges the username that owns each listening process. Let's take a look (note that the target host has to be running identd). At the command line, issue this command against your target, in this case our default Eve running Linux: nmap -I 192.168.0.2
-iR Use this command to instruct nmap to scan random hosts for you.
-p Port range option allows you to pick what port or ports you wish nmap to scan against.
-v Use verbosity to display more output data. Use twice (-v -v) for maximum verbosity.
-h Displays a quick reference of nmap's calls.
Now that we have looked at nmap's three basic usage types and some of its other options, let's mix and match them.
nmap -v -v -sS -O 209.212.53.50-100
This instructs nmap to use the maximum amount of verbosity to run a stealth scan and OS detection against all machines between IP addresses 209.212.53.50 and 209.212.53.100. This command also requires root privileges, due to both the -sS and -O calls. Of course this will display an overwhelming amount of data, so let's log our results into a human-readable file for future reference:
nmap -v -v -sS -O -oN sample.txt 209.212.53.50-100
Now let's make nmap run a stealth scan and instruct it to look only for machines offering http and ftp services between the addresses of 209.212.53.50 and 209.212.53.100. Once again we will log the output (I'm a log junkie) for future reference into a human readable file called ftphttpscan.txt:
nmap -sS -p 21,80 -oN ftphttpscan.txt 209.212.53.50-100
Remember the -iR option mentioned previously? Let's use it to take a random sampling of Internet web servers using the verbatim example from nmap's man page:
nmap -sS -iR -p 80
Last but certainly not least: while gleaning information, don't forget to nmap yourself. Just type at the command line: nmap 127.0.0.1. This is especially useful and recommended if you're a newcomer to Linux and connected to the Internet via DSL or cable modem.
Now for those of you who would rather not work on the command line (shame on you) there are graphical front ends for nmap.
NmapFE - NmapFE, written by Zach Smith, comes included in the nmap-2.53.rpm and uses the GTK interface. NmapFE can be found at http://codebox.net/nmapfe.html
Kmap - Kmap, written by Ian Zepp, is a QT/KDE frontend for nmap and can be found at http://www.edotorg.org/kde/kmap/
KNmap - KNmap, written by Alexandre Sagala, is another KDE frontend for nmap and can be found at http://pages.infinit.net/rewind/
This wraps up our quick and dirty look at nmap. I hope you find the application as enjoyable as I do. Comments or questions can be sent to either myself or [email protected]. Happy scanning.
Translated from the Spanish by Rory Krause
Some time ago, Bill Gates and Paul Allen thought that it was just not fair to have to pay for processing time on expensive mainframes. And they thought the solution was the personal computer. They bet all of their future on it. It was only an idea. They were even startled to see the debut of Altair computers in an electronics magazine. ``The future is passing us by,'' they thought. And they believed fervently in their idea, enough to make it a reality. Then they built Microsoft into the giant that it is today. Maybe, if Bill Gates and Paul Allen had not existed, the only personal computers that we might own would be Japanese, and would only work for playing games. Although, without a doubt, we would have Quake III Arena, with a Japanese name, of course. And it would be a role-playing game.
Likewise, Richard Stallman thought that it was not fair that people pay large sums of money for the software they use, especially if it is not quite up to the quality they would prefer. And so he created the GNU project. He also bet his future on his idea. And he believes fervently in it. He believes that the GPL license on his software is the best license in the world, and it is possible that he is right. Several years ago I read some articles written by industry analysts, predicting that although the cost of hardware would decrease steadily, the cost of software would only continue to increase. Maybe, if Richard Stallman had not existed, we would now be paying licensing fees of $1000 or more for a software package worth less than a tenth of that fee.
And then we have Linus Torvalds. His fundamental idea was to realize at some point that the software he had created in order to connect to the server at his university had certain similarities to an operating system, or rather to the kernel of an operating system. Furthermore, he had the idea to try to improve the operating system on an ongoing basis with the help of its users. This was the fundamental step. Linus is the leader. If Linus had not existed... no, I don't even want to imagine it.
Richard Stallman simply wrote his manifesto, made his software, and put his license on it, but he has not done anything of such significance since then. Any chance he gets, he gives his opinion on all types of licenses (e.g., the Motif libraries). The license on software is not its most important part. If it were, people would have stopped using Windows a long time ago. It needs something more. It needs people who will do for each application and each library what Linus did for the kernel.
Furthermore, consider that both ideas, the idea of Bill Gates and Paul Allen of not having to pay for processor time by using a personal computer, and the idea of Richard Stallman of not having to pay for the software that you use on the computer, converge into one. They converge into Linux: a free distribution for personal computers.
Now that Linux is entering very, very slowly, but quite surely, into the personal computer arena, we should turn our attention to its graphical interface. I say that it is entering the arena quite surely because once a user is satisfied with his Linux installation, has all his hardware supported, and has tested all his applications, it is possible that he will never return to Windows. But it is slow as well, because only a small percentage of the people who try Linux are satisfied with the experience. The rest return to Windows. I contend that the single most important aspect is the graphical user interface, because out there in the real world there are more than 200 million users who only know how to use a computer in this fashion, without text consoles, without two-letter commands.
In this way, we are duplicating effort and advancing at half the speed, or slower, than we should be. The teams of programmers working on the KDE and GNOME environments are working on the same problems: creating an interface that is consistent, a standard for programming, incredibly customizable, so that users of the graphical environment feel at home, and so that programmers know they can choose any group of libraries and their programs will work on any Linux system without problems or the need for the user to install additional components. We are definitely duplicating efforts.
One of the first things I learned in college was that in programming, we should not try to reinvent the wheel each time we write a program. It is for this reason that function libraries exist and are standardized.
The KDE environment is complex, has many features, and supplies a consistent interface that is at the same time customizable by any user. It is extremely easy to use, which allows for increased productivity for any user and eliminates the learning curve that accompanies a migration from a Windows system to a Linux system. In spite of this, I have seen messages from our local group of Linux users who detest KDE, more for ideological reasons (the Qt library license) than for anything practical. They would never recommend it in a business environment, in spite of the benefits its use might have.
"I have a K carved into my forehead," was the expression used by one of them.
GNOME also has problems with its users. It is an excellent tool, and its GTK programming kit is GPL'ed on all the platforms to which it has been ported. It has several innovative technologies and notable features. But it also has some problems, not just in itself, but also in its acceptance by some people. Some programmers who prefer BSD licenses detest anything that has to do with the GPL. It doesn't have a consistent interface across all installations. No sir, this is not a bug, this is a feature. That is the phrase used in these cases, because you can try out various window managers and choose your favorite. But many users don't even have a clue what a window manager is.
To put this in a different perspective, let us suppose that the Hurd kernel from the GNU project had been ready in 1996, before the commercial explosion that Linux had last year, and that there were some distributions with the Linux kernel, others with the Hurd kernel, and some with both. There would be fanatical users of one kernel or the other. There might be someone saying "I have a penguin carved into my forehead," even though he was, like all of us, dedicated to the project of liberating us from the 64,000 bugs for $400 or more to which we are still tied by industry forces. And maybe I would find "I have a penguin carved into my forehead" in the list of messages from our local user's group.
This comparison is not exact, but it gets the point across. It is not exact because both kernels can execute the same programs, and the programmer does not have to do anything special for this to happen. In graphical environments, KDE applications use one set of libraries and standards while GNOME applications use different ones. Additionally, you can use the Qt libraries without complying with the KDE standards, or GTK without complying with GNOME standards. Users must have both sets of libraries in order to run both types of programs.
The rest of the applications use other available libraries. There are a tremendous number of libraries for programming in a graphical environment. Maybe this is a good thing, maybe not. A programmer who knows C or C++ and wants to write an application for Linux soon discovers a confusing mess of standards and libraries. There is no consensus on which set is best. It is possible that our hypothetical new programmer might spend more time deciding which programming libraries and which environment to use than he spends learning to use those libraries to write his program. It is also possible that someone might use GNOME only because it is installed by default on the Red Hat distribution, or that only GPL fanatics will use GNOME. But this is only a possibility.
It would be nice to find a way to unite the KDE and GNOME projects. Perhaps there could be a leader to coordinate the projects, just as Linus does with the kernel. This would give motivation to the people who want to program and make Linux better. We would have more and more programmers, almost exponential growth, just as we have seen happen with kernel development since its beginnings.
In spite of this, Linux has to solve other problems of equal urgency. I believe that Linux is an important operating system in the computing industry, and hardware manufacturers should recognize it as such. Instead of letting the open source community write all their drivers, the hardware manufacturers should write their own. If they want, they can make them open source so that users can correct errors or add functionality for the hardware. Even if they write drivers and release only binaries, it would be a tremendous help.
Let's take the case of software modems, better known as "winmodems". Lucent wrote a driver and released the binary for the Red Hat 6.1 distribution, which supported modems with Lucent chips. Lucent has written only this one driver (it has some bugs), because they were only asked once. Lucent has announced that it will not make drivers at the request of users, but rather at the request of businesses that sell Linux distributions. It is a good policy. It saves time and money in technical support, and it is possible that many hardware manufacturers have a similar policy. Maybe they are only waiting for a Linux distribution to ask for the drivers. Everyone who has at some time written drivers knows that it is much easier to write drivers for Linux than for Windows.
Linux distribution companies lack the sense of leadership that they need right now. Some companies announced a little while ago that "we have hired programmers to work on X project for X amount of time". Two programmers are too few. The distributions with leading sales numbers, like Red Hat and Caldera, especially should hire all the programmers they can to work on Linux programs. Even though they have already hired some programmers, the more the better. The same goes for all the other distributions.
Linux is still not strong enough. The companies that make their money selling Linux have the responsibility to make it stronger. But they leave many things up to the "open source community" to do. Linux is a good software package to improve and make more compatible and friendly. Other open source projects are not. A project is not necessarily of good quality just because it is open source. These companies should not try to support every open source project there is just because it is open source, or because it is an "alternative" operating system. For this reason, among others, we should not see it as strange when the value of their stock shares goes down.
Some people have speculated that the triumph of Linux could be based on the success of one application (the killer application): a software program that is very good and that only works on Linux, which will make everyone want to have it. And some thought that such an application could be an office suite or something similar. The people at Microsoft think in a similar fashion. For this reason, they will try to make the next version of Office (code-named Office 10) include integrated voice recognition. And we all know that such a version might sell well, but the voice recognition will not work really well until the next version, or until two or three versions later.
The creators of Red Hat created the Red Hat Package Manager (RPM) system and based their distribution on that idea. They did a good job. Everyone who has needed to update Slackware or another version, or even Windows, knows that the best method is to erase everything and reinstall from nothing. And this is no longer a problem with Linux, thanks to the folks at Red Hat.
Maybe the answer is not one application. Maybe it is many applications. These are known as games. The first effort to make Windows an acceptable platform for games was called WinG. It worked on top of Windows 3.1 with 32-bit extensions called "Win32s". It was terribly bad. The only game I remember that used Win32s was something like Wolfenstein. And it was also quite bad. There was no comparison between DOS games like DooM, Descent, or Warcraft and the simple Windows games. Even with Windows 95, the support for games was terrible. It was not until 1996 that the DirectX libraries appeared. The first respectable game that worked well on Windows that I saw was DooM II. Then there was MS Fury. And then the industry slowly changed. The rest is history. It can be seen that it was never an easy trip for Microsoft, most of all because of their model for writing drivers.
The advantage for the companies that made games for DOS was the total control they had over the hardware. The advantage they have now in making games for Windows 9x is that they do not have to worry about compatibility with the innumerable variety of hardware out there on the market, because the DirectX libraries allow them to use a series of standard routines to control the hardware that the system has. Linux does not allow total control of the hardware, and for now it does not have a series of standard libraries that allow interaction with the hardware in an efficient way, either. Although there are some video libraries, they still lack sound, control mechanisms and more.
It is thanks to games that I still keep a working copy of Windows 95 on my computer. Linux, like everything else, has evolved faster than Windows and now supports graphics acceleration with OpenGL on some cards, and XFree86 4.0 promises much more of this type of thing. Many Quake III Arena servers run Linux. But the competition will not stop. We need more and better ideas, especially now that Win2000 has copied everything it can from Linux systems. And the next version will also copy everything it can.
We need ideas that change the future of technology and benefit everyone: ideas like those of Bill Gates and Richard Stallman. And we need leaders like Linus Torvalds who can coordinate the efforts of people around the globe to make those ideas reality. I hope that those potential visionaries and leaders are reading this article and will be moved to risk all of their future on those ideas. Surely then both the innovators and the rest of us will have a better future.
[Introduction and conclusion written by the LG Editor.]
"Tuxedo Tails" is a new series of single-panel cartoons Eric Kasten is drawing for Linux Gazette. I asked Eric how he builds his cartoons and how much he uses Linux. He replied, "I pretty much use Linux for everything that I do electronically any more (other than my tax software and occasion forays into Photoshop or Illustrator). The drawing is still done largely by pen and ink and then scanned and colorized and assembled. I mostly use the Gimp for interactive work and ImageMagick for scripts and such."
Eric also draws a multi-panel strip called Sun Puppy. It's about--you guessed it--puppies. Read the strip at http://www.sunpuppy.com. Most of the cartoons are not computer related, but two of the funniest computer-related episodes were May 22nd and May 24th. The second one may or may not remind you of biff, the program that alerts you when mail has arrived.
This article is the current installment in an ongoing series of site reviews for the Linux community. Each month, I will highlight a Linux-related site and tell you all about it. The intent of these articles is to let you know about sites that you might not have been to before; they will all have to do with some aspect of Linux. Now, on with the story...
In past columns we looked at places to find information about troubleshooting your Linux installation. We looked at some language and development sites. We looked at some of the tools that you can use to put together GUI-based apps quickly and efficiently. So, now that you're an accomplished developer-hacker... Wouldn't it be nice to get paid for developing open source applications?
You bet yer sweet petootie it would be nice. After all, that new dual Athlon system was a little more than chump change for the month's allowance. This month, we look at SourceXchange.
SourceXchange is a member of the Collab.net network that has been getting a bunch of press this month, especially with Sun's announcement that they'll be opening the source to Star Office (yes, I'm already on that mailing list, and I hope to be able to help out with some documentation or code), and hosting the project on Collab.net servers. SourceXchange offers quite a bit to the budding and established open source developer, including peer reviews, project hosting, spec writing and the chance to get paid for your work.
We'll get to the money in a little bit, but let's take a look at what else is there first. SourceXchange's goal is to unite open source developers with projects that fit a need. If you've taken a look at some of the development boards recently, there's a growing list of developers who are looking for projects to work on. The "what should I do now?" question seems to be popping up more often.
With this site, the developer doesn't have to spend the time and energy to dream up a challenging project; just hop over to the RFP (Request For Proposal) section and see if any of the unbid projects are of interest. As this article was being written, there were RFPs for accessing image manipulation utilities, developing a message board in PHP, and a few different RFPs for BXXP support in various utilities. Each RFP spells out what the contractor is looking for in terms of skills and deliverables, and how much the contractor is willing to pay, both in cash and materials. If one of the RFPs strikes your fancy, you can comment on it and express interest in offering your services to the project.
If you want to help out but don't want to work on the code itself, SourceXchange also offers Peer Reviewer positions. The Peer Reviewer is just that, a peer reviewer: someone who will help and guide the developer and review progress toward the goal, acting as a third-party mediator to ensure that the project is both completed and done well.
Once all the players are selected, and the teams are finalized, the work can begin. SourceXchange provides a convenient place to post the status of the deliverables as well as the developer and reviewer information, project location and mailing list information and other pertinent details about the project. Developers are free to use whatever tools and hosting methods they need, according to the terms that have been decided between them and the project sponsors.
When the projects are complete, SourceXchange ensures that the developers receive the peer review that is necessary to produce complete, accurate and robust code, as well as the compensation they were promised from the project sponsors.
You can view the RFPs and project information without registering, but to really get the most out of this site, you have to sign up and work. If you want to be part of the growing interest in open source development by big business, you owe it to yourself to register as a developer and get on one of these projects. These projects may be substantially more involved than the Hello World applets we all know and love, but the compensation offered is nothing to sneeze at, either.
This article will provide several methods for the process that the original Micro Publishing article covered.
Or, use any other printer. A Deskjet will work fine, but it will be a lot slower, you will have to print on the paper twice (once on the back and once on the front), and the quality of the print won't be as good.
The only other option is to fold it by hand, which I refused to do, but which Rick did happily. Children are great slaves for folding paper.
Now you have both NEW.ps and NEW.pdf. View NEW.pdf in either gv or Acrobat Reader! It will look a little weird, since you are printing books. Please remember to set your duplex printer to print on the short edge, not the long edge.
Now, for non-duplex printers (most printers), follow the steps in the Micro Publishing article.
Remember to specify you want double-sided printing at your local office store.
Several people have also told us of inexpensive services that will make books for you, which can be cheaper than making the books yourself. This is true. But ....
I truly hope people will band together to make their own books so that the technology behind it gets cheaper, faster, and easier to manage. People with better ideas will come along, and make the process better. In short, get involved in the free software movement and the free documentation movement.
I will most likely post another update in the next 6 months. I want to
Mark works as a computer guy at The Computer Underground and also at ZING and also at GNUJobs.com (soon).
Somebody asked Michael Williams if he could do Python and Java versions of his article An Introduction to Object-Oriented Programming in C++. Here's a Python version of the code. I'll comment on the differences between C++ and Python. Perhaps somebody else can write a Java version?
I am assuming you know the basics of Python. If not, see the excellent Tutorial and the other documentation at http://www.python.org/doc/.
To represent Michael's house (in section Classy! in the C++ article), we can use the following code: (text version)
#! /usr/bin/python
"""house.py -- A house program.

This is a documentation string surrounded by triple quotes.
"""

class House:
    pass

my_house = House()
my_house.number = 40
my_house.rooms = 8
my_house.garden = 1

print "My house is number", my_house.number
print "It has", my_house.rooms, "rooms"

if my_house.garden:
    garden_text = "has"
else:
    garden_text = "does not have"
print "It", garden_text, "a garden"

If we run it, it prints:

My house is number 40
It has 8 rooms
It has a garden
What does this program do? First, we define what a generic house is in the class block. pass means "do nothing" and is required if the block would otherwise be empty. Then we create an instance (that is, a particular house) by calling the class name as if it were a function. The house is then stored in the variable my_house.
This house initially has no attributes--if we were to query my_house.number before setting it, we'd get an AttributeError. The next three lines set and create the attributes. This is a difference between the languages: Java instances start out with certain attributes which can never change (although their values can change), but Python instances start out with no attributes, and you can add or delete attributes (or change their type) later. This allows Python to be more flexible in certain dynamic situations.
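The dynamic-attribute behavior described above is easy to demonstrate. Here is a small sketch of my own (not from Michael's article) showing an attribute that doesn't exist yet, then is created, then deleted again:

```python
class House:
    pass  # no attributes declared anywhere

h = House()

# Querying an attribute before it exists raises AttributeError.
try:
    h.rooms
    had_rooms = 1
except AttributeError:
    had_rooms = 0

h.rooms = 8        # attribute created on the fly
del h.rooms        # ...and deleted again just as easily
h.colour = "blue"  # even attributes the class never mentioned
```

After this runs, had_rooms is 0, h has a colour attribute, and rooms is gone again.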
We can initialize the instance at creation time by including a special __init__ method. (A method is a function which "belongs" to a class.) This program: (text version)
#! /usr/bin/python
"""house2.py -- Another house.
"""

class House:
    def __init__(self, number, rooms, garden):
        self.number = number
        self.rooms = rooms
        self.garden = garden

my_house = House(20, 1, 0)

print "My house is number", my_house.number
print "It has", my_house.rooms, "rooms"

if my_house.garden:
    garden_text = "has"
else:
    garden_text = "does not have"
print "It", garden_text, "a garden"

prints:

My house is number 20
It has 1 rooms
It does not have a garden

Because the class has an
__init__ method, it's automatically called when an instance is created. The arguments to House are really the arguments to __init__. Although most programs don't, you can also call __init__ yourself as many times as you want: my_house.__init__(55, 14, 1). This tells the object to "reinitialize itself".
Note that __init__ is defined with an extra first argument, self. But we don't specify self when we call the method. All Python methods work like this. self is in fact the instance itself, and Python supplies it behind the scenes. You need self because it's the only way the method can access the instance's attributes and other methods. Inside the method, self.rooms means the instance's attribute rooms, but rooms means the local variable rooms. Local variables, of course, vanish when the method ends. Python's use of self is paralleled in Perl and other OO languages as well.
Michael didn't tell you, but C++ has a this pointer which works like Python's self. However, in C++ you don't have to type this->house if there is no local variable house, and you never type this on a method definition line. In other words, C++ (and Java) do the same thing as Python and Perl; they just hide it from the programmer.
In fact, self in Python is just a conventional name. You can call it this or me instead if you like. I actually like me better. However, I stick with self so that if somebody else has to maintain my work later, it will be easier for them to read. In contrast, C++'s variable this is magic and cannot be renamed.
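To see that self really is just a conventional name, here is a tiny sketch of my own (the Counter class is illustrative, not from the article) that uses me in its place:

```python
class Counter:
    def __init__(me):       # "me" plays the role of "self"
        me.count = 0

    def increment(me):
        me.count = me.count + 1

c = Counter()
c.increment()
c.increment()
```

Python doesn't care what the first parameter is called; c.count ends up at 2 either way.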
In the C++ program, garden is a boolean attribute. Python doesn't have boolean attributes, so we use an integer instead. The expression my_house.garden is true if the attribute is 1 (or any non-zero, non-empty value).
This section corresponds to the "Member Functions" section in Williams' article. I prefer the term "method" over "member function", as Pythoneers usually do. Michael's square.c program would look like this: (text version)
#! /usr/bin/python
"""square.py -- Make some noise about a square.
"""

class Square:
    def __init__(self, length, width):
        self.length = length
        self.width = width

    def area(self):
        return self.length * self.width

my_square = Square(5, 2)
print my_square.area()

prints

10
area should be self-explanatory because it works exactly like __init__ above. To reiterate, all the selfs in square.py are required. I have chosen to give Square an __init__ method rather than setting the attributes later, because that's what most Python programmers would do.
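The two styles can be contrasted directly. Both leave the object in the same state; the __init__ style is simply what most Python programmers use (BareSquare is an invented name for illustration):

```python
# Setting attributes in __init__ versus attaching them after creation.
class Square:
    def __init__(self, length, width):
        self.length = length
        self.width = width

    def area(self):
        return self.length * self.width

class BareSquare:
    pass                     # no constructor at all

a = Square(5, 2)             # attributes set by the constructor
b = BareSquare()
b.length = 5                 # attributes attached afterwards
b.width = 2

print(a.area())              # 10
print(b.length * b.width)    # 10
```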
Nothing to say here. Python does not allow methods to be defined outside the class. Of course, this doesn't apply to ordinary (non-class) functions.
Not much to say here either. All Python attributes and methods are public. You can emulate private attributes and methods via the double-underscore hack, but most Python programmers don't. Instead, they count on the programmer not to abuse the class's API.
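For the curious, here is a sketch of the double-underscore hack (the Account class is an invented example): a name beginning with __ inside a class body is mangled to _ClassName__name, which discourages, but does not prevent, access from outside the class.

```python
# The "double-underscore hack" for pseudo-private attributes.
class Account:
    def __init__(self, balance):
        self.__balance = balance     # stored as _Account__balance

    def balance(self):
        return self.__balance        # normal access inside the class

a = Account(100)
print(a.balance())              # 100
print(a._Account__balance)      # the mangled name is still reachable
```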
The __init__ method is the constructor.
#!/usr/bin/python
"""person.py -- A person example.
"""

class Person:
    def __init__(self, age, house_number):
        self.age = age
        self.house_number = house_number

alex = []
for i in range(5):
    obj = Person(i, i)
    alex.append(obj)

print "Alex[3] age is", alex[3].age
print
for alexsub in alex:
    print "Age is", alexsub.age
    print "House number is", alexsub.house_number

This prints:

Alex[3] age is 3

Age is 0
House number is 0
Age is 1
House number is 1
Age is 2
House number is 2
Age is 3
House number is 3
Age is 4
House number is 4
Python has no equivalent to person alex[5] in the C++ program, which creates an array of five empty instances all at once. Instead, we create an empty list and then use a for loop (which sets i to 0, 1, 2, 3 and 4 respectively) to populate it. The example shows a loop subscripting a list by index number, another loop which gets each element in the list directly, and a print statement which accesses an element by index number.
[This is part III in a series on HTML editors. Part I focused on CoffeeCup's HTML Editor ++. Part II was about Bluefish. -Ed.]
This will conclude my look into HTML editors for now. I've gotten a few questions about other HTML editors and I'll look at one of them in this article. There are lots of HTML editors out there - many of which I haven't used at all, or recently. A few seem unmaintained (like asWedit and Ashe), and frankly I find little point in using unmaintained and quite limited software when there are very good alternatives. Which leads us on to ....
OK - I cheated: I grabbed an RPM from the Quanta site and installed from there. No problems at all; it just slipped into my Mandrake machine, just like an install should behave. The first thing that meets the eye is that Quanta follows the new standard of tabbed tool palettes instead of multiple button rows. This is definitely a Good Thing (TM), as otherwise the button rows easily grab too much screen real estate. The palettes are more limited than those in Bluefish, lacking CSS and PHP "wizards", to name two.
Another thing that meets my eye is a problem with all KDE applications: they're ugly. Yes, a wholly subjective notion, I agree, but nevertheless visual impressions play some part in forming an opinion about a piece of software. That said, I think that some of the tool buttons are better than those found in (e.g.) Bluefish.
A third pretty obvious feature of Quanta, one which I like, is the directory tree on the left-hand side of the screen. Double-clicking a file in the tree opens it in the editor screen. I miss this in Bluefish. There is no drag-and-drop between the directory tree and the editor screen, though.
I will turn to a couple of things that have been bugging me about Quanta in the past - and there still is a problem in this respect. Firstly, how do I make the editing screen wrap lines? Continuing a line for more than what's visible produces a very long line. This is the default setting, which in my humble opinion is dumb. I found the line-wrap setting in Options/Editor Options, so that is one problem less. The second irritant is that the default font for Quanta is Courier, a serif font set at 12 pt. Two things about this: I think most people now agree that a sans-serif font is best for the screen, and a serif font, even if acceptable, is hard to read at 12 pt on a 1024x768 screen. Changing the setting to 14 pt Clean is OK for me. I just noticed that writing this sentence and formatting "is" as bold re-formatted the display; now I get a much smaller margin for writing. Weird. Apparently what happened was that the word wrap doesn't take hold until another tag has been completed.
Minor irritants as these might be, a thing that is a real problem, and seems to be a bigger one at that, is the handling of extended ASCII. Opening a page written in Swedish, you might be excused for believing that some kind of DOS virus has miraculously struck in Linux. Where there once were beautiful extended characters like ä and ö, there is white space. This is because Quanta does an on-the-fly translation of these characters to their entity equivalents. But this feature works only one way: if you've written a page full of extended characters, Quanta isn't smart enough to translate these back when opening the page. And, to make matters worse, the entity translation is buggy. It manages ä and ö but fails on å and ü, handling the latter as separate dots and the letter u.
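Quanta's own code isn't shown here, but the symmetric (two-way) translation it lacks is simple enough to sketch, using a small hypothetical table for the characters just mentioned:

```python
# A sketch of two-way character/entity translation.  The table is a
# hypothetical subset covering the Swedish characters discussed above.
ENTITIES = {
    "\u00e4": "&auml;",   # ä
    "\u00f6": "&ouml;",   # ö
    "\u00e5": "&aring;",  # å
    "\u00fc": "&uuml;",   # ü
}

def to_entities(text):
    # characters -> entities (the direction Quanta handles on save)
    for char, entity in ENTITIES.items():
        text = text.replace(char, entity)
    return text

def from_entities(text):
    # entities -> characters (the missing direction, needed on open)
    for char, entity in ENTITIES.items():
        text = text.replace(entity, char)
    return text

print(to_entities("\u00e4\u00f6"))
print(from_entities("&aring;&uuml;"))
```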
To end this section on a positive note, there is context-sensitive help and in-place tag editing. Highlighting a tag and then pressing the right mouse button, you get a context menu including context help, tag attributes and tag edit. For the latter two, you select either and simply fill in your attributes; Quanta does the rest. Neat. Selecting context help naturally opens the relevant passage in the hypertext HTML reference document. Which goes to prove that the best Linux HTML editors are catching up with HomeSite. The next evolutionary step would be tag completion, where you (even more simply) type SPACE within a tag and get a list of attributes to select from, and so on.
To do Quanta justice, I must mention the left-hand frame once more. In it you have four separate tabs. The default shows you the complete directory tree, although why it doesn't default to the working directory is beyond me. It is very useful for quickly finding your files wherever they may be. You've got a Struct tab which shows you the structure of the document you are working with, broken down into separate major tags like P, IMG and Hx. Clicking on one of those brings you to that place in the doc. Neat and useful. Even neater would be if you could drag those tags around and thereby rearrange the document in the editor; perhaps in some future version. The fourth tab is for HTML documentation: a hyperlinked document providing all the information you might ever need for writing correct HTML 4 documents. There is also a tab for project management, which I haven't used for this review.
It has been some time since I looked at Quanta, and I must admit to being a bit impressed at the progress of the editor. All in all, Quanta is a very nice HTML editor, very powerful but with a major bug for non-English users that rely on extended ASCII for their webpages. I've also found somewhat more minor irritants in Quanta than in Bluefish.
The file handling, document structure and HTML documentation are very impressive and something that I miss in Bluefish. They might be enough to give Quanta a lead over the latter for those that are not dependent on extended characters for their documents.
Originally published at LinuxSecurity.com ( original article)
Recently I got an opportunity to speak with Jay Beale, the Lead Developer of the Bastille Project. Jay is the author of several articles on Unix/Linux security, along with the upcoming book "Securing Linux the Bastille Way," to be published by Addison Wesley. At his day job, Jay is a security admin working on Solaris and Linux boxes. You can learn more about his articles, talks and favorite security links via http://www.bastille-linux.org/jay.
LinuxSecurity.com: Can you briefly describe the bastille-linux project? What is the goal/objective of bastille?
Jay Beale: Bastille Linux is a project to harden, or "lock-down," Linux systems. It asks the user a number of questions, which it uses to provide the most comprehensive security, without removing needed functionality. We're trying to make a more secure environment for every class of user, without restricting them too much.
We've been very successful so far - Bastille can stop almost every single root grab vulnerability that I know of against Red Hat 6.x. In the case of the well-known BIND remote root vulnerability, we had secured against that one before it was even discovered!
LinuxSecurity.com: How was it started?
Jay Beale: Bastille started almost two years ago, when Jon Lasser began making UMBC Linux, a secure distribution that he could give out to students and faculty, without worrying that their new boxes would be quickly "rooted." While at a SANS conference, he met a number of people who were doing the same thing. Through a beer-enabled Birds of a Feather (BoF) session, they decided to stop duplicating effort, banding together to create the new Bastille Linux distribution.
Fast forward a few months. As many would-be distribution makers quickly learn, this group found out that making a new distribution was very hard work, before you even tried to secure it. They shifted strategy, and instead decided to modify the existing Red Hat distribution. This was faster and could be far more comprehensive. I joined up then, bringing a rather long Perl script with me that would turn a virgin Red Hat 6.0 box into a more secure one. Jon and I became partners, Lead Coordinator and Lead Developer, and I posted a "modules wanted" sign in the form of a Spec Document for the script.
At that point, we were joined by the people that make up our core team, including Pete Watkins, who brought his strong and comprehensive IPCHAINS firewall, Sweth Chandramouli, who's helping me with architecture design, and Mike Rash, who's working on Intrusion Detection. We've got a great team on board, really, with a number of people dedicated to testing Bastille and generating ideas.
LinuxSecurity.com: Can you describe your background? How long have you been involved with security and Linux?
Jay Beale: Two years ago, I was a mathematician with an interest in computing and physics. I became interested in computer security when I took my first sysadmin job about two years ago. Security is one of the few areas of computing that is rather complex - yet, there's an underlying structure running through the entire field. It really fascinated me from the beginning, so I read everything I could find and started tinkering at home and at work.
Later on, I began working as a security admin, doing everything from writing host-based Intrusion Detection, to handling hacker break-ins, to writing hardening scripts. Bastille's main module development started as an extension of ideas I implemented for Solaris, actually. Now, I'm writing a book on applied Linux security for Addison Wesley and writing articles for various sites, in addition to keeping up with Bastille, which is no small task.
LinuxSecurity.com: Do you ever expect vendors to ship Linux in a configuration that obviates the need for such a project?
Jay Beale: This really is possible, though it's a long shot... The problem is that users need their systems to "work" and, more and more, they don't have the time to tinker with them a great deal first. So, most vendors ship with ftp on, Apache with server-side-includes/cgi enabled, and no password on single user mode.
You see, to secure a system, you'll have to remove some functionality. This is due to a basic premise of computer security: to fully secure a system, you really have to grind it into dust, scatter the pieces to the wind, and hope that Entropy does its part. Since you can't do this, you make tradeoffs.
I think things like Bastille will always be around for three reasons. First, vendors have incentives to make systems easy to use - Bastille works against this, but educates the admin/user to compensate. Second, we're going to keep researching, creating and implementing ideas before the vendors. Third, much of what we do isn't necessarily the vendor's "job" - implementing an intrusion detection system is usually a third party function. Bastille does a great deal to systems and we're about to start doing even more - we're growing beyond a simple hardening system into more facets of system security.
LinuxSecurity.com: What are the most difficult challenges you've faced while developing it?
Jay Beale: The toughest problems are really in the architecture, rather than features. Bastille's original goal was to make a new distribution, press our own CDs and such. Then, we were still making a new distro, by installing Red Hat and modifying that directly after install. Now, we can modify a year-old system, but that took an architecture overhaul and an intense code audit to implement. This wasn't so much an added feature, as the problem was getting redefined after we implemented our first solution!
Actually, another problem that we're considering over time is that as Bastille does more and more, it has to ask a lot more questions! Right now, if you read all the explanations, it takes about an hour to run through the interactive portion. It's nowhere near as bad as a Linux kernel, but it annoys some users who just want a quick fix. Rather than abandoning these users, we're making "One Shot" configurations, where they can choose a sample configuration that matches their own and deploy that. While they miss a crucial part of securing the system (Secure the Admin!) they still get a safer system...
LinuxSecurity.com: What type of user would be most interested in running bastille?
Jay Beale: I think Bastille is accessible to every class of user, from the newbie to experienced admins. Every class of user tends to find it more comprehensive than anything they do by hand. Newbies find it useful because it explains everything it wants to do and asks questions, so as not to break anything. Experienced sysadmins find it useful because it automates what would normally take many man-hours, especially when you scale it to hundreds of systems. Further, many experienced sysadmins haven't ever had the time to learn about or implement security on their systems. They find themselves trying to make time, in the middle of the night, right after someone "hacks" their systems.
LinuxSecurity.com: What do you think of the state of security today on Linux?
Jay Beale: I think Linux security is getting better, but we're in a tough arena. Given the accessibility of Linux, most crackers have it on hand and are coding exploits for it first. Using Open source makes a program that much easier to audit for holes, so people are discovering some of the vulnerabilities very quickly and not all of them are White Hats. It's also a difficult situation, in that development is moving so much faster than audits.
Honestly, we've also got an amazing advantage: we've got the numbers, baby. The "Ping of Death" vulnerability was corrected in, if reports are to be believed, 1 hour for Linux. No vendor came close to that! While Linux may have had many more security vulnerabilities than Solaris in the past three years, these holes get patched a whole lot faster. Kurt Seifried's report on this noted that while Sun has, on average, only six announced vulnerabilities per year, it takes them around 90 days to fix them - and this doesn't even account for all the programs, like WU-ftpd or BIND 8, that you generally add to a Sun box. The thing to remember, though, is that every operating system will have holes. It is human nature to make mistakes, no matter how many geniuses work on a system. Further, there are many creative, bright people in the cracking community - they will win many battles here.
LinuxSecurity.com: What features does it offer the average Linux user?
Jay Beale: Bastille is very accessible to the average user. It doesn't just start securing, but instead asks permission for every step it takes. Further, it educates. This key feature came out of a design problem I faced about a quarter way through writing the first script. The average Linux user tends to install their distro with everything installed and everything turned on, because they're not sure what it all does and they don't want to miss something. Bastille was asking the user questions, like "can we disable routing daemons?" when we hadn't explained what a routing daemon was or why they shouldn't need one. Pete and I ended up writing explanations for each question, so anyone could make educated choices, whether they were a newbie or an experienced sysadmin.
Bastille also has lots of other nice features: it can be re-run to keep a system secure after patches, everything it does can be undone, and it's fairly comprehensive. It tightens user account security, configures a well-tuned firewall, configures Apache, makes sane boot security choices, configures some smart PAM options, chroot's your DNS server, restricts access within your FTP server, sets better file permissions and audits your Set-UID root programs. It also configures stronger logging, locks down Sendmail a bit, and tries to turn off services and daemons that you don't need. This is really just the start, though! We're expanding this right now with new modules, including a basic network IDS system and a number of other modules under development.
LinuxSecurity.com: What new features are you working on?
Jay Beale: Expect some really incredible news on this in a few months. We're kicking around some great architecture ideas with the help of Yoann Vandoorselaere, from Mandrake. Sweth and others are helping us move rapidly to support far more than just Red Hat and Mandrake. We're eyeing FreeBSD, Solaris, Irix, Slackware, Debian and everything we can possibly generalize this to.
LinuxSecurity.com: What do you think are the biggest security concerns with using Linux today?
Jay Beale: Honestly, there aren't too many other security concerns that are specific to Linux. All of it generalizes to Unix and most of it applies to operating systems as a whole.
I think too many programs run with superuser privilege. We can kludge this, the way we do with programs that drop privilege, but we can also stop making this an all-or-nothing, user-or-root game. We should think beyond the basic security mechanisms present in Unix/Linux. Let's start implementing our programs using capabilities and dropping the number of programs on the system which use root.
Actually, I think computer security as a whole is a very tough problem. We're trying to make computers easier and easier to use, often at the cost of security. Cracker activity has grown immensely, as many more would-be script kiddies get Internet access. When I got my first shell account, the Internet was well known mostly among the University crowd - now, everyone's got access to the Internet and it's becoming a rougher neighborhood. I'm not saying world-wide Internet access is bad - it's an amazing resource, but one that some people are choosing to abuse.
LinuxSecurity.com: Security is always about tradeoffs. What tradeoffs do you face while developing bastille? Certainly it would be easiest to just remove rlogin, telnet, and other inherently-insecure programs, but this isn't always possible.
Jay Beale: Well, I think we've got a nice solution here. We're letting the user decide what tradeoffs to make and we're providing the user with the background to make that decision. Bastille is highly granular, taking many actions and asking the user about each one. In the end, the user decides whether or not to kill telnet, but we try to help them make an educated decision, by presenting facts like these: telnet is cleartext, so that someone eavesdropping can steal your account from under you - using programs like hunt, they can even steal your entire session!
Educating the end-user and letting them make all the decisions was a new approach, but we felt it was the only one that worked for a community as diverse in background as the Linux community.
LinuxSecurity.com: Thanks for taking the time with us today, and we wish you and your team members the greatest of success with this project!
There's been a flurry of activity behind the scenes this month. The Answer Gang is finally on track and has taken over all tech-support questions to the Gazette. We're also experimenting with ways to improve the TAG index and LG FAQ to make it easier to find previous answers to (and articles about) frequently-asked questions.
This issue also marks the debut of the new cartoon series Tuxedo Tails.
News Bytes was not available by the publishing deadline and will return next month.
Happy Linuxing!
Michael Orr
Editor, Linux Gazette,