TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page is maintained by the Editor of Linux Gazette, Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag! Write the Gazette at [email protected]
Contents:
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to [email protected]. Answers that are copied to LG will be printed in the next issue in the Tips column.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Mon, 13 Mar 2000 17:07:37 -0400
From: DoctorX <>
Subject: Suggestions
I am from Venezuela, South America, and I think the next issue of Linux Gazette should contain something about Latin American Linux distributions and other projects developed in Latin America.
I am the leader of a project named HVLinux. HVLinux is a project to make a Venezuelan Linux distribution based on Slackware; for now it runs from RAM (version 0.2.2), but the next version will run from the hard disk.
HVLinux HOME: http://www.hven.com.ve/hvlinux
This site is in Spanish.
It is only a suggestion.
Wenceslao Hernandez C.
Wed, 1 Mar 2000 06:47:02 -0800 (PST)
From: Jim Coleman <>
Subject: Free ISPs for Linux???
This isn't exactly a burning question but I'd be interested in knowing if anyone in the Gazette readership knows of a free ISP that supports Linux. All of the ones I've checked out so far require Windows and/or Internet Explorer.
Thanks!
Thu, 02 Mar 2000 08:24:30 GMT
From: Brad Ponton <>
Subject: redhat 6.1 'hdc: lost interrupt' problems
I own a Panasonic 24x CD-ROM, a Quantum 2.5GB Bigfoot and a recently purchased Seagate 17.2GB.
I get 'hdc: lost interrupt' constantly through the install (which plods along for about two hours), which then ends in what I think is supposed to be displayed text, except the screen starts displaying out of sync, drawing lines across the screen that look like very large (2 inch, or 4 centimetre) unreadable letters. Thus the install ends, obviously incomplete.
Other problems with the computer which may have influenced this:
a) Installing the Seagate was extremely hard, requiring about an hour's worth of fiddling with IDE cables and jumper settings. Currently the Quantum is master of the primary IDE channel with nothing else on it, the CD-ROM is secondary master and the Seagate is secondary slave. This is about the ONLY way all devices are detected properly AND the CD is able to boot.
b) Before the Seagate was installed, I had not been able to play audio CDs (along with an odd 'randomly ordered' file display in Windows Explorer for only some directories): they are either not detected by the Windows CD player, or it jumps from track to track when play is pressed without playing anything, although data and video CDs continue to run fine.
thank you for your time ( and possibly help )
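One hedged avenue to explore for "lost interrupt" errors is the IDE/DMA handling of the CD-ROM channel. A minimal diagnostic sketch, assuming a bootable minimal system, the hdparm utility, and /dev/hdc being the CD-ROM described above:
# show how the kernel identified the drive on /dev/hdc
hdparm -i /dev/hdc
# show the current DMA and I/O settings for the drive
hdparm /dev/hdc
# as an experiment, switch DMA off for the drive; lost-interrupt errors
# sometimes disappear on flaky IDE cabling or controller setups
hdparm -d0 /dev/hdc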
Sat, 4 Mar 2000 01:07:37 +0100
From: Kjell Ø. Skjebnestad <>
Subject: Help Wanted: XF86 3.3.6 vs ATi Rage 128
So, I need some help over here. X is not working properly with my spanking new ATI Rage 128 card ("bar code text"). Here's what I have done so far:
I downloaded XFree86 3.3.6 (binary), which supports the ATI Rage 128, to my Win98 disk. I installed a normal Red Hat 6.1 installation (in text mode) to my Linux disk, then installed XFree86 3.3.6 properly, taking great care not to overwrite the original `xinitrc' provided by Red Hat. I ran XF86Setup and configured everything there; the XF86Setup test worked nicely. I ran `startx', which started GNOME etc. The GNOME Help Browser popped up, but the text... it was all like bar codes (that is, only horizontal lines). Occasionally, though, perfectly legible text appeared when scrolling, and the words `Gnome Help Browser' in the title line were perfectly legible all the time. And yes, I have entered the correct sync rates for my monitor. I presume this is a font problem, but I cannot begin to comprehend the means to solve it. I have clutched my brain rather avidly trying to get my system working properly. I have even tried deleting the font folder before installing XFree86 3.3.6, making sure that it was regenerated during the installation.
So, in summary, the main problem seems to be the weird behaviour of text when using X ("bar code text"). Any help in solving this problem would be greatly appreciated.
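Since the writer suspects a font problem, here is a hedged diagnostic sketch, assuming the standard XFree86 3.3.x font layout under /usr/X11R6/lib/X11/fonts:
# show the font path the running X server is actually using
xset q
# rebuild the font index in the misc font directory and make the server re-read it
mkfontdir /usr/X11R6/lib/X11/fonts/misc
xset fp rehash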
Sun, 05 Mar 2000 02:15:18 -0800
From: Dianne Witwer <>
Subject: PLEASE HELP ME!
All right, I have a boot disk failure on my computer. I was reading the article, and I don't have a boot disk or a backup disk. Can I just reinstall Windows somehow without using a boot disk? Thanks, Josh
Mon, 6 Mar 2000 14:51:05 -0000
From: Anthony W. Youngman <>
Subject: RE: Clenning(sic) lost+found
I notice Ed said "it's often the sign of serious disk corruption". May I beg to disagree? Note that ALL of the files in lost+found are of type b or c. I don't know why or how this corruption occurs, but I found that (1) I couldn't delete them, and (2) the machine gaily carried on working with no sign of any problems.
The problem only went away when the computer was scrapped - it was a 386 and was replaced with a 686MX.
[It sounds like we agree. Perhaps I wasn't clear. Really weird permissions like
c-wx--S-w-
that no sane person would ever set even in their stupidest moments can be a sign that some hardware fault has zapped the system. Data corruption is the result, not the cause. The goal is then to find the cause or scrap the computer. Perhaps the cause was a one-time thing, as apparently happened in your case.
The files that wouldn't delete may have had their immutable or undeletable attribute set. The command
lsattr
(see the manpage) shows which attributes a file has, and
chattr -i -u
will remove those attributes. Attributes are like permissions but refer to additional characteristics of files in the ext2 filesystem. However,
lsattr /dev
spews out a whole lot of error messages, so it may be that the command won't help with device files ("b" or "c" as the first character of an
ls -l
entry). -Ed.]
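As a concrete illustration of the commands the editor mentions, a minimal sketch (the file name stubborn-file is hypothetical):
# list the ext2 attributes of the file that refuses to be deleted
lsattr stubborn-file
# clear the immutable (i) and undeletable (u) attributes, then remove the file
chattr -i -u stubborn-file
rm stubborn-file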
Wed, 08 Mar 2000 11:43:04 -0700
From: T.J. Rowe <>
Subject: man-db package compilation problems
First of all, I'd just like to thank all those who replied to my previous postings here at LG and got me unstuck. =) I only like to resort to asking when I've exhausted all my own ideas. That being said, I've come across another interesting challenge to which I've yet to find a solution. Here's the deal:
In compiling the man-db package (as at least one reader correctly guessed, yes, I'm following the LFS Linux From Scratch HOWTO from here at LG), I get the following compilation error using both pgcc 2.95.2 and egcs 2.91.66, on both my new Linux partition and the "main" partition:
cp include/Defines include/Defines.old
sed -e 's/^nls = .*/nls = all/' include/Defines.old > include/Defines
make -C lib
make[1]: Entering directory `/root/project/man-db-2.3.10.orig/lib'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/project/man-db-2.3.10.orig/lib'
make -C libdb
make[1]: Entering directory `/root/project/man-db-2.3.10.orig/libdb'
gcc -O -DHAVE_CONFIG_H -DNLS -I../include -I.. -I. -I- -c db_store.c -o db_store.o
In file included from db_store.c:44:
../include/manconfig.h:298: parse error before `__extension__'
../include/manconfig.h:298: parse error before `('
make[1]: *** [db_store.o] Error 1
make[1]: Leaving directory `/root/project/man-db-2.3.10.orig/libdb'
make: *** [libdb] Error 2
Has anyone gotten this problem before? Any ideas? :)
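One hedged way to dig into a parse error like the one above is to look at what the preprocessor actually produces; the flags below are copied from the failing gcc command and may need adjusting to your tree:
# preprocess db_store.c with the same flags the Makefile used and save the result
gcc -E -DHAVE_CONFIG_H -DNLS -I../include -I.. -I. -I- db_store.c > db_store.i
# see what the declaration on manconfig.h line 298 expands to; a macro in the
# package's headers clashing with a glibc definition is a common culprit
grep -n __extension__ db_store.i | head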
Thu, 9 Mar 2000 01:13:16 -0500
From: Kent Franken <>
Subject:
I would really like to see case studies on switching to Linux from other platforms.
Here's our platform and some requirements and questions:
We currently use Windows NT Terminal Server Edition. How hard would it be to go to Linux?
- We have two TSE servers with approximately 30 users each logged in on average. In total, we have about 130 users but it is a manufacturing plant and many people share terminals.
- We use Citrix Metaframe, for Load Balancing and failover. Is there a product for Linux that offers this option?
- We use thin clients (Boundless (www.boundless.com) Windows CE terminals with RDP and ICA protocols) and no X. I have not been happy with the CE terminals. I was wondering if an X-term performed better? You can really see how slow CE is if you just click on the start button and move the mouse up and down the program list. I had an old PC with not enough RAM and the ICA client worked much better than the CE ICA client. I just found www.ltsp.org and the x-terms look fantastic (you could do a whole article just on that project).
- Dependability. I have to reboot my TSE servers once a week. Last week a new HP printer driver caused about 40 blue screens of death before we figured out what was going on. Will Linux be better?
- Office productivity software. If we are used to MS Office, what will it be like going to something like star-office?
- Anti-virus programs? Is there an antivirus program to scan mail stores (sendmail POP server)?
- Security. How good is Linux at keeping users honest? With TSE you can delete or overwrite files in the system directories as a user. Can't delete a system file? Just open it in Word and save it and watch us IS guys jump around.
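On the last point, standard Unix permissions already stop an ordinary user from touching system files; a minimal sketch of what an unprivileged user would see (the error message shown is what one would typically expect, not a captured transcript):
# system binaries are owned by root and are not writable by ordinary users
ls -l /bin/login
# trying to overwrite one as a normal user simply fails
echo vandalism > /bin/login
# bash: /bin/login: Permission denied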
Thu, 09 Mar 2000 19:27:53 +0100
From: Eva Gloria del Riego Eguiluz <>
Subject: HELP, PLEASE!
I have a problem with one partition of my hard disk. Yesterday I installed Red Hat 6.0 and everything was O.K. until I found that I could not access one of my FAT32 partitions.
In Windows 98, when I click on E: (the bad partition), the message is "One of the devices attached to the system doesn't work" (or something like that; I read the message in Spanish).
I tried to look at the hard disk with Partition Magic 3.0, and the error message is:
"Partition table error #116 found: Partition table Begin and Start inconsistent: the hard disk partition table contains two inconsistent descriptions of the partition's starting sector...."
I want to know whether I have lost all the information on this partition, or whether, on the other hand, there is a way to get the data back.
The other partitions of the hard drive are O.K. and I can start both Windows and Linux, because the bad partition isn't primary.
Thank you very much,
Sat, 11 Mar 2000 08:40:07 -0500 (EST)
From: Ahmadullah Asad <>
Subject: a question...
Hi,
I would like to download the postscript viewer from http://www.medasys-digital-systems.fr/mirror/linux/LG/issue16/gv.html but I cannot connect to this German FTP site. Any help will be appreciated. Also I am totally ignorant of how to compile and run the source if I am successful in downloading this file. I would really appreciate if you can give me some info on that as well.
[Which distribution do you use? It's included in Debian, and it should be included in all the other distributions as well. You can also get the .deb file and convert it to rpm or tgz using the alien program if you have it (in the package "alien"). -Ed.]
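A hedged sketch of the alien conversion the editor mentions; the package file name below is hypothetical and depends on the Debian release you download from:
# convert a downloaded Debian package of gv into an RPM (run as root)
alien --to-rpm gv_3.5.8-26.deb
# install whatever .rpm alien produced
rpm -ivh gv*.rpm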
Mon, 13 Mar 2000 18:39:52 +0500
From: Choudhry Muhammad Ali <>
Subject: VGA card Problem with Redhat 6.1
Hi,
I've been running Linux for a few years now, using the Red Hat 6.0 distribution. I'm currently upgrading my computer to Red Hat 6.1. Red Hat 6.1 runs fine, but there is one small problem with my VGA card: 6.1 does not configure it properly and automatically sets it up as the wrong chip (SiS 5 series); my card is a SiS 6+ series.
Does anyone have any ideas as to why this would be happening and how to correct it?
CMA
Tue, 14 Mar 2000 19:07:34 +0100
From: Laurent STEFAN <>
Subject: MIME and mail ?
How do I attach a file (.html, .gif or whatever else) to an e-mail with the mail program?
I'm using something like: # ./test.pl | mail someone@somewhere -s "something"
but it results only in a full-text report in the body.
Regards.
Now for the good stuff:
I made a bash script that looks like this for the mail process:
exec $PROG | mail [email protected] -s "My subject
Mime-Version: 1.0
Content-Type: Text/html
"
And it works!
So, why not add a 'boundary=...' with a Multipart/related Content-Type and, while we are at it, a 'Content-Transfer-Encoding: BASE64'?
If you can tell me where there's a BASE64 encoder, that would be great!
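A hedged sketch of building a proper multipart message from a shell script, assuming the mimencode tool from the metamail package is installed (it emits base64 by default); the address, report script and attachment name are placeholders:
#!/bin/sh
# assemble a two-part MIME message (plain-text report plus a base64-encoded GIF)
# and hand it straight to sendmail; the boundary string is arbitrary but must
# not occur inside either part
BOUNDARY="=-report-boundary-$$"
{
  echo "To: someone@somewhere"
  echo "Subject: report with attachment"
  echo "Mime-Version: 1.0"
  echo "Content-Type: multipart/mixed; boundary=\"$BOUNDARY\""
  echo
  echo "--$BOUNDARY"
  echo "Content-Type: text/plain"
  echo
  ./test.pl
  echo "--$BOUNDARY"
  echo "Content-Type: image/gif"
  echo "Content-Transfer-Encoding: base64"
  echo
  mimencode picture.gif
  echo "--$BOUNDARY--"
} | /usr/lib/sendmail -t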
Thu, 22 Jul 1999 15:28:09 +0200
From: Sandeep < >
Subject: Video Card: A Big Problem
Hello there, I am Sandeep from India. I recently bought an Azza mainboard with a built-in sound card (Avance Logic 120) and video card (SiS 530, but the chip is a SiS 5595, AGP with shared RAM). There is a very big problem: Linux could not recognise the VRAM on the shared SDRAM module originally allocated by the Award BIOS. The sound card was also not recognised by the new Red Hat 6.0, although Red Hat 5.2 recognised it as a Sound Blaster rather than as my original card. As you guys have been doing great work, I thought you would be able to solve my problem or direct me to those who can. Thanking you; you are really doing great work. Bye. My e-mail address is [email protected] (I am writing from a cafe, so please do not reply to any address other than the one given above, [email protected]).
Tue, 22 Feb 2000 21:53:22 +0100
From: Wim van Oosterhout <>
Subject: ISDN PCI128 Trust
Can anyone tell me how to install my ISDN adaptor? Kind regards, Wim van Oosterhout
Fri, 17 Mar 2000 12:00:59 -0500
From: thesun <>
Subject: Mailbag submission: Help Wanted on Japanese text input
Hi,
I'm new to the Linux Gazette but not new to Linux, and I've had a hell of a time trying to find clear, step-by-step info about how to WRITE Japanese under RedHat Linux 6.1. Part of the problem is that I don't READ Japanese well enough to sift through the Japanese sites, but I've found a program called "dp/NOTE" from OMRONSOFT which almost works--apparently, the program is supported by the Japanese version of RedHat 6.1, but I don't know what RPMs to download to get the thing to run under the English version...or even if it will run at all. Are there fundamental differences between the Japanese and the English versions? Should I set up a dual boot system? Ideally, I'd like to just run English RedHat but have a program I can pull up that will allow easy romaji text input and then convert to hiragana, katakana, or Kanji. There's a nice program by NJStar that does that, but it's (barf!) Windoze95 only.
Any suggestions or help would be greatly appreciated.
Thanks,
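For what it's worth, the usual free-software route for romaji input at the time was the Canna conversion server together with the kinput2 input-method server; a hedged sketch of the environment, assuming both packages are installed and Canna is running:
# select a Japanese locale and tell X clients which input method to use
export LANG=ja_JP.ujis
export XMODIFIERS='@im=kinput2'
# start the input-method server, pointing it at the Canna engine
kinput2 -canna &
# then run an XIM-aware application; Shift+Space usually toggles
# romaji-to-kana/kanji conversion on and off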
Fri, 17 Mar 2000 19:20:34 -0500
From: Clark Ashton Smith <>
Subject: Help with group rights and Netscape Composer
I've searched Deja News, Linux HOWTOs, and books, and I have not seen this error mentioned. That makes me think it's something on my end, but I can't figure out what it is. I'm hoping someone can help.
I am running RedHat Linux 5.2 and Netscape 4.08.
I created a group called web and made a directory
/usr/local/webauth
I set the group and the SGID bit on that directory
chgrp -v web /usr/local/webauth
chmod -v 2775 /usr/local/webauth
Now for the problem:
Anyone in the web group should be able to save into that directory. With Netscape composer they can only edit files that already are in that directory!?!
If I use the "Save As" option in Netscape Composer to save into that directory I get the following error:
"The file is marked read-only."
The same error occurs using the "Save" option to save a new file to that directory. BUT, if I open a file from that directory in netscape, edit it, and use the "Save" option, it will write over the old file in that directory. I can edit any existing html files created by anyone in the "web" group.
What on earth is going on? The users belong to the web group and they can all create files in /usr/local/webauth via the touch command or emacs. The users all have a umask of 002. The files they create with touch or emacs are all created
-rw-rw-r-- username.web filename
They can use emacs to open and edit each other's files in /usr/local/webauth, but they can't create new files with Netscape Composer! They can only edit existing files and save them to the same filename.
The only way I can get "Save As" and "Save" to create new files in /usr/local/webauth is to set the permissions to
chmod -v 2776 /usr/local/webauth
or
chmod -v 2777 /usr/local/webauth
which defeats the whole point of creating special work groups and protecting the files from being written by anyone not in the group.
If you can, please shed some light on this. Thank you.
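A hedged way to confirm the directory itself behaves correctly from the shell, run as one of the web-group users; if this works while Composer still refuses, the fault lies in how Composer creates files rather than in the group setup:
# confirm the directory carries the setgid bit and the 'web' group
ls -ld /usr/local/webauth
# confirm the current user's groups and umask
id
umask
# create a brand-new file the way Composer would have to
touch /usr/local/webauth/newfile.html
ls -l /usr/local/webauth/newfile.html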
Mon, 20 Mar 2000 20:05:16 -0000
From: Tony <>
Subject: connecting win98 and Linux
I am relatively new to Linux, so please be patient. Can anyone tell me how to connect my Win98 machine to my Linux server? I used to run Win95 and had no problems, but since switching to Win98, every time I try to browse my network from Network Neighbourhood I am unable to browse the network, and I cannot ping the Linux server either. I am sure that I have TCP/IP installed correctly on both machines. Any help anyone can give would be most welcome. Regards, a newbie.
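A hedged checklist from the Linux side, assuming the server runs Samba and both machines sit on the same subnet; the addresses are placeholders for your own:
# confirm the Linux box's interface and address are what you expect
ifconfig eth0
# ping the Windows 98 machine from Linux
ping -c 3 192.168.1.20
# ask the local Samba server which shares it is offering
smbclient -L localhost -N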
Fri, 24 Mar 2000 16:23:30 +0300
From: Alexandr Redko <>
Subject: DNS for home mail not working
For some time I have been following the guidelines of the article by JC Pollman and Bill Mote, "Mail for the Home Network" (Linux Gazette #45), with the aim of building my very small SOHO net.
Here is my setup:
Linux Red Hat 6.0

"host.conf"
-------------------------------------------------------------
order hosts,bind
multi on

"nsswitch.conf"
-------------------------------------------------------------
hosts: files dns

"resolv.conf"
-------------------------------------------------------------
search asup
nameserver 10.0.0.5

"named.conf"
-------------------------------------------------------------
options {
        directory "/var/named";
        forward first;
        forwarders {
                196.34.38.1;
                196.34.38.2;
        };
        /*
         * If there is a firewall between you and nameservers you want
         * to talk to, you might need to uncomment the query-source
         * directive below.  Previous versions of BIND always asked
         * questions using port 53, but BIND 8.1 uses an unprivileged
         * port by default.
         */
        // query-source address * port 53;
};
zone "." { type hint; file "db.cache"; };
zone "asup" { notify no; type master; file "db.asup"; }; zone "0.0.10.in-addr.arpa" { notify no; type master; file "db.0.0.10"; }; zone "0.0.127.in-addr.arpa" { type master; file "db.127.0.0"; }; ------------------------------------------------------------- "db.asup" ------------------------------------------------------------- @ IN SOA sasha.asup. redial.asup. ( 1 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum IN NS sasha IN MX 10 sasha sasha IN A 10.0.0.5 sasha IN MX 10 sasha mail IN A 10.0.0.5 www IN A 10.0.0.5 news IN A 10.0.0.5 localhost IN A 127.0.0.1 asup1 IN A 10.0.0.101 asup1 IN MX 10 sasha ------------------------------------------------- "db.0.0.10" ------------------------------------------------- @ IN SOA sasha.asup. redial.asup. ( 1 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum IN NS sasha.asup. 5 IN PTR sasha.asup. 5 IN PTR www.asup. 5 IN PTR mail.asup. 5 IN PTR news.asup. 101 IN PTR asup1.asup. ---------------------------------------------------------- "db.127.0.0" ---------------------------------------------------------- @ IN SOA sasha.asup. redial.asup. ( 1 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum IN NS localhost. 1 IN PTR localhost. ---------------------------------------------------------- ---------------------- "firewall" ---------------------- :input ACCEPT :forward REJECT :output ACCEPT -A forward -s 10.0.0.0/255.255.255.0 -d 10.0.0.0/255.255.255.0 -j ACCEPT -A forward -s 10.0.0.0/255.255.255.0 -d 0.0.0.0/0.0.0.0 -j MASQ ---------------------- # echo 1 > /proc/sys/net/ipv4/ip_forward ----------------------------------- /etc/sysconfig/network ----------------------------------- NETWORKING=yes FORWARD_IPV4="yes" HOSTNAME=sasha.asup DOMAINNAMEGATEWAY="" GATEWAYDEV="" ----------------------------------- With ppp0 up route ---------------------------------------------------------------------------- 10.0.0.5 * 255.255.255.255 UH 0 0 0 eth0 196.34.38.254 * 255.255.255.255 UH 0 0 0 ppp0 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0 127.0.0.0 * 255.0.0.0 U 0 0 0 lo default 196.34.38.254 0.0.0.0 UG 0 0 0 ppp0 ----------------------------------------------------------------------------
DNS debug output when making an nslookup for my ISP's server:
----------------------------
datagram from [10.0.0.5].1026, fd 22, len 31
req: nlookup(www.tsinet.ru) id 1755 type=1 class=1
req: missed 'www.tsinet.ru' as '' (cname=0)
forw: forw -> [196.34.38.1].53 ds=4 nsid`421 id55 3ms retry 8sec
retry(0x4011e008): expired @ 953043527 (11 secs before now (953043538))
reforw(addr=0 n=0) -> [196.34.38.1].53 ds=4 nsid"50 id=0 3ms
datagram from [10.0.0.5].1026, fd 22, len 31
req: nlookup(www.tsinet.ru) id 1755 type=1 class=1
req: missed 'www.tsinet.ru' as '' (cname=0)
reforw(addr=0 n=0) -> [196.34.38.2].53 ds=4 nsid`421 id55 3ms
reforw(addr=0 n=0) -> [196.34.38.2].53 ds=4 nsid"50 id=0 3ms
datagram from [10.0.0.5].1026, fd 22, len 31
req: nlookup(www.tsinet.ru) id 1755 type=1 class=1
req: missed 'www.tsinet.ru' as '' (cname=0)
----------------------------
I'll be very grateful if someone can tell me what I have done wrong.
Regards to All
I appreciated the depth to which you went in describing your environment; however, I'm confused about what your problem is. Can you tell me exactly what isn't working and what you're trying to do, please? I'd love to help!
Thank you for your attention to my petty problem. Sorry to say that I'm in no way closer to its solution than at the beginning. I got two letters (thanks very much) from JC Pollman:
A number of things could be causing the problem:
"db.asup" ------------------------------------------------------------- @ IN SOA sasha.asup. redial.asup. ( 1 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum IN NS sasha IN MX 10 sasha sasha IN A 10.0.0.5 sasha IN MX 10 sasha mail IN A 10.0.0.5 www IN A 10.0.0.5 news IN A 10.0.0.5
I know they say otherwise, but try using CNAME, eg:
mail IN CNAME 10.0.05 localhost IN A 127.0.0.1 asup1 IN A 10.0.0.101 asup1 IN MX 10 sasha ------------------------------------------------- "db.0.0.10" ------------------------------------------------- @ IN SOA sasha.asup. redial.asup. ( 1 ; Serial 10800 ; Refresh 3600 ; Retry 604800 ; Expire 86400 ) ; Minimum IN NS sasha.asup. 5 IN PTR sasha.asup.you can not use CNAMED names for reverse lookup
5 IN PTR www.asup. 5 IN PTR mail.asup. 5 IN PTR news.asup. 101 IN PTR asup1.asup.make those changes and do a named restart and see what happens.
copy this to your rc.local file and reboot:
echo "setting up ipchains" echo "1" > /proc/sys/net/ipv4/ip_forward # allow loopback, always /sbin/ipchains -A input -i lo -j ACCEPT # this allows all traffic on your internal nets (you trust it, right?) /sbin/ipchains -A input -s 10.0.0.0/24 -j ACCEPT # this sets up masquerading /sbin/ipchains -A forward -s 10.0.0.0/24 -j MASQ #you need this for ppp and dynamic ip address echo 1 > /proc/sys/net/ipv4/ip_dynaddr
I did my homework:
---------------
"named.conf"
---------------
options {
        directory "/var/named";
        forwarders {
                forward.first;
                195.34.38.1;
                195.34.38.2;
                195.34.38.1;
        };
        /*
         * If there is a firewall between you and nameservers you want
         * to talk to, you might need to uncomment the query-source
         * directive below.  Previous versions of BIND always asked
         * questions using port 53, but BIND 8.1 uses an unprivileged
         * port by default.
         */
        query-source address * port 53;
};
//
//
zone "." {
        type hint;
        file "db.cache";
};
zone "asup" {
        notify no;
        type master;
        file "db.asup";
};
zone "0.0.10.in-addr.arpa" {
        notify no;
        type master;
        file "db.0.0.10";
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "db.127.0.0";
};

-------------
"db.asup"
-------------
@       IN SOA  sasha.asup. redial.asup. (
                        1       ; Serial
                        10800   ; Refresh
                        3600    ; Retry
                        604800  ; Expire
                        86400 ) ; Minimum
        IN NS   sasha
        IN MX   10 sasha
sasha   IN A    10.0.0.5
sasha   IN MX   10 sasha
mail    IN CNAME 10.0.0.5
www     IN CNAME 10.0.0.5
news    IN CNAME 10.0.0.5
localhost IN A  127.0.0.1
asup1   IN A    10.0.0.101
asup1   IN MX   10 sasha

-------------------------
"db.0.0.10"
-------------------------
@       IN SOA  sasha.asup. redial.asup. (
                        1       ; Serial
                        10800   ; Refresh
                        3600    ; Retry
                        604800  ; Expire
                        86400 ) ; Minimum
        IN NS   sasha.asup.
5       IN PTR  sasha.asup.
101     IN PTR  asup1.asup.

------------------------
"firewall"
------------------------
:input ACCEPT
:forward ACCEPT
:output ACCEPT
-A input -s 0.0.0.0/0.0.0.0 -d 0.0.0.0/0.0.0.0 -i lo -j ACCEPT
-A input -s 10.0.0.0/255.255.255.0 -d 0.0.0.0/0.0.0.0 -j ACCEPT
-A forward -s 10.0.0.0/255.255.255.0 -d 0.0.0.0/0.0.0.0 -j MASQ

------------------------
"network"
------------------------
NETWORKING=yes
FORWARD_IPV4="yes"
HOSTNAME=sasha.asup
DOMAINNAME=
GATEWAY=
GATEWAYDEV=
------------------------
I connect to my ISP by issuing "ifup ppp0", and then my luck ends:
Here is excerpt from tcpdump output:
12:39:23.442252 ppp0 > sasha.asup > 195.34.38.1: icmp: echo request
12:39:23.644703 ppp0 < 195.34.38.1 > sasha.asup: icmp: echo reply
..........
12:39:27.434735 ppp0 > sasha.asup > 195.34.38.1: icmp: echo request
12:39:27.574701 ppp0 < 195.34.38.1 > sasha.asup: icmp: echo reply
12:39:28.014857 lo > sasha.asup.1052 > sasha.asup.domain: 30612+ PTR? 10.0.41.198.in-addr.arpa. (42)
12:39:28.014857 lo < sasha.asup.1052 > sasha.asup.domain: 30612+ PTR? 10.0.41.198.in-addr.arpa. (42)
12:39:28.015356 ppp0 > sasha.asup.domain > 195.34.38.1.domain: 37592+ PTR? 10.0.41.198.in-addr.arpa. (42)
............
12:39:29.434740 ppp0 > sasha.asup > 195.34.38.1: icmp: echo request
12:39:29.564708 ppp0 < 195.34.38.1 > sasha.asup: icmp: echo reply
13:26:31.045625 lo > sasha.asup.1075 > sasha.asup.domain: 31874+ PTR? 90.10.8.128.in-addr.arpa. (42)
13:26:31.045625 lo < sasha.asup.1075 > sasha.asup.domain: 31874+ PTR? 90.10.8.128.in-addr.arpa. (42)
13:26:31.046140 if21 > sasha.asup.1071 > 198.41.0.4.domain: 16489+ PTR? 90.10.8.128.in-addr.arpa. (42)
13:26:31.464747 if21 > sasha.asup > 195.34.38.1: icmp: echo request
13:26:31.604706 if21 < 195.34.38.1 > sasha.asup: icmp: echo reply
As far as I can tell, there is some problem with masquerading, and my packets just don't go any further than my ISP's server.
Regards to All
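A hedged set of checks for the masquerading side of a setup like this on an ipchains-based 2.2 kernel, run on the gateway while a client machine tries to reach the outside:
# forwarding must be enabled, or masqueraded packets go nowhere
cat /proc/sys/net/ipv4/ip_forward
# list the forward chain with packet counters; the MASQ rule should be matching
/sbin/ipchains -L forward -n -v
# show the live masquerade table; client connections should appear here
cat /proc/net/ip_masquerade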
Fri, 24 Mar 2000 19:13:08 +0200
From: Ivanus Radu <>
Subject: I need an answer, pls help me
Hello!
I am proud to announce this: "I'm a 4-day Linux admin, and I'm doing 'fine' :)"
OK. Now to the important problems:
1. In the Net3-4 HOWTO I found this: "11. Linux for an ISP? If you are interested in using Linux for ISP purposes then I recommend you take a look at the Linux ISP homepage for a good list of pointers to information you might need and use." But the link is broken... Q: What now? (I am interested in using Linux for ISP purposes.)
2. A Winmodem solution for Linux users. If you are lucky enough to have a Win9x system connected to the Linux box through a network card, then do this: install a Wingate-like program such as SyGate 2.0 on the Win9x box, and in Linux set the default gateway to the Win9x box's IP. That's all for the moment :)
Thanks, bye.
Fri, 24 Mar 2000 17:29:16 -0800 (PST)
From: Michael Dupree <>
Subject: HELP!!!!!!!!!!
I need some help; if you can help, please do. When I am in a chat room on AIM, people use "booters" which create errors. I need a patch for AIM that will stop these errors. If you can, please help.
[What's "aim"? Is it a Linux program? We publish only questions dealing with Linux-related issues. Also, are these "errors" things which crash the browser or are you simply trying to prevent the roommaster from booting you out when maybe s/he has a legitimate reason for doing so?
If you really have a program with a bug in it, we need to know what the program is, who makes it, whether it's standalone or runs with a web browser, under what circumstances the error occurs, what error messages you get, and what kind of computer and version of Linux you have. -Ed.]
Fri, 24 Mar 2000 22:37:50 -0500
From: Walter Gomez <>
Subject: HP682 C jet ink color printer
My HP 682C printer is not working properly. When a print job is sent, the printer pulls a page in and starts flashing the yellow light, but it does not print the file. I have checked everything I could think of, with no result. Could you help me? Regards,
Fri, 24 Mar 2000 22:54:58 -0800
From: Taro Fukunaga <>
Subject: Sendmail faster start up!
Hi,
I am having trouble with sendmail. I read an article in the Gazette from a few months ago about setting up sendmail, but I'm still puzzled.
I am running MkLinux R1 (a Red Hat 6.0 implementation) and sendmail takes forever to start up. Taking a cue from the article, I stopped the sendmail daemon, started pppd, and then restarted sendmail. It started up much faster. I've also noticed that if I don't have pppd up, sendmail tries to ping my other computer (the two are connected by Ethernet and a router). Both machines are at home, and the other machine does not have PPP set up yet. If I don't want to start pppd immediately on boot, what can I do to make sendmail start up faster?
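Slow sendmail startup with the link down is very often a hostname-resolution timeout: sendmail waits for DNS lookups of its own name (and of hosts named in its configuration) to fail. A hedged sketch of making those names resolvable locally; the names and addresses are placeholders for your own:
# /etc/hosts -- let the machine's own name and the other box resolve without DNS
127.0.0.1     localhost
192.168.1.1   mybox.home.lan      mybox
192.168.1.2   otherbox.home.lan   otherbox
With these entries in place and "hosts" listed before "dns" in /etc/host.conf (or /etc/nsswitch.conf), sendmail no longer has to wait on a dead PPP link while starting.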
Sat, 25 Mar 2000 12:39:14 +0200
From: Serafim Karfis <>
Subject: Linux as a mail server
I am trying to find instructions on how to use my Linux server as a mail server for my company. I have a registered domain name, a permanent connection to the Internet and an unlimited number of e-mail accounts under my domain name. Please keep in mind that I am a new Linux user, so any instructions have to be detailed for me to understand. Thank you in advance.
Sun, 26 Mar 2000 23:16:30 +0530 (IST)
From: nayantara <>
Subject: insmod device or resource busy
Hi, I'm running 2.2.12.... I wrote a module that goes:
#define MODULE
#include <linux/module.h>

int init_module(void)
{
        printk("<1>It worked!\n");
        return 0;
}

void cleanup_module(void)
{
        printk("<1>All done.\n");
}
Now, I compile this, and when I insmod it, it printk's "It worked!" but then gives me: could not load module, device or resource busy. What am I missing? What resource is busy? (I looked through the FAQs but didn't find anything... so if I missed it, please bear with me.)
Thanks, Deepa
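For reference, a hedged sketch of how a minimal 2.2-era module is usually compiled and loaded; the kernel source is assumed to live under /usr/src/linux and to match the running kernel, since mismatched headers are a classic source of odd insmod errors:
# compile the module against the headers of the running kernel
gcc -O2 -Wall -D__KERNEL__ -DMODULE -I/usr/src/linux/include -c hello.c -o hello.o
# load it, check the kernel log, then unload it
insmod ./hello.o
dmesg | tail -2
rmmod hello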
Tue, 28 Mar 2000 15:43:10 +0100
From: Luis Neves <>
Subject: Xircom CE3 10/100 and Red Hat 6.1
Hello,
I have a Toshiba laptop with Red Hat 6.1 Linux installed. I also have a Xircom 10/100 Ethernet adapter. I know that Xircom doesn't provide any drivers for Linux, and the compatibility list of Ethernet adapters for RH 6.1 doesn't include Xircom cards. Is there any workaround? How can I get it to work?
Thanks in advance,
Luis Neves.
Tue, 28 Mar 2000 16:16:49 GMT
From: hasan jamal <>
Subject: Re: source code of fsck
I need the source code of "fsck", the file system checker under the /sbin directory. I have searched most of the ftp archives related to linux and did not find anywhere. I got RedHat & SuSe distribution, in none of them I found. I would be grateful if anybody can give me the source code or the ftp site.
Md. Hasan Jamal Bangladesh
It should be included in your distribution. I know only Debian, so when I type "dpkg -S fsck" it shows me fsck and fsck.ext2 are in the "e2fsprogs" package. (There are other fsck modules for different filesystem types in other packages, including "util-linux".) Rpm (and yast?) do a similar thing but with different command-line options. Find the appropriate command on your system and it will show which package the program comes from. Probably e2fsprogs*.srpm or util-linux*.srpm or a similar file will have the source you want.
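A hedged sketch of the equivalent query on an RPM-based system; the exact package names vary between Red Hat and SuSE releases:
# which installed package owns the fsck front end and the ext2 checker?
rpm -qf /sbin/fsck
rpm -qf /sbin/fsck.ext2
# the matching source package on the distribution CD or FTP mirror
# (e.g. util-linux-*.src.rpm or e2fsprogs-*.src.rpm) contains the source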
I got it in SuSE. Thanks a lot for such a quick reply.
Wed, 1 Mar 2000 11:45:34 -0600
From: Scott Morizot <>
Subject: Pine and Pico not "open source"?
I just got around to reading the February edition of Linux Gazette. I was more than a little perplexed by the claim in the article about nano that pico and pine weren't "open source".
While it's true that pine and pico aren't under the GPL, neither are many other open source stalwarts like sendmail and apache. Even a quick read of the license http://www.washington.edu/pine/overview/legal.html
makes it clear that the source can be used for any use, even commercial, that it can be modified, and that it can be distributed. I certainly don't see anything that would prevent it from being considered open source.
Oh, and the source is available, of course. Always has been.
And, although you can't get the pico source separately from the entire source tree, you can build just pico or, if your OS is one of the supported ones, download a pico binary for your systems from the unix-bin directory. ftp://ftp.cac.washington.edu/pine/unix-bin/
Not that nano isn't a perfectly good editor and effort. There can't be too many. But keep the facts about pine and pico straight.
Ummm. GPL'ed software is protected under copyright also. So is sendmail. So is apache. All of those licenses are licenses to use copyrighted software. UW's is just another license to use copyrighted software. But all free software protected by a license is copyrighted software and includes some sort of restrictions on its use. GPL software is no less under copyright than software under any other license. In fact, copyright is what allows the GPL to make the restrictions on use that it does. Software that is not under copyright is in the public domain and absolutely no restrictions may be made on its use at all.
I didn't see anything in the legal notice that would constrain the redistribution of binary versions of pico. And, in fact, binary versions of pico are redistributed with most distributions of Linux.
While it's not GPL, I still fail to see any terms in the UW pine license that cause it not to meet the open source definition.
[The license does not allow you to distribute modified binaries of pine. (Hint: all the packages in your Linux distribution are "modified binaries", because they undergo customization to adhere to the distribution's overall standards.) This is why it's not "open source". Pine is not included in Debian. Instead, you install special packages which include the source and diffs and compile it yourself. -Ed.]
Tue, 29 Feb 2000 11:40:06 +0100
From: Linux Gazette <>
Subject: FAQs
The following questions received this month are answered in the Linux Gazette FAQ:
Thu, 2 Mar 2000 17:17:31 -0800
From: Systems Administrator <>
Subject: Duplicate announcement on lg-announce
Some time ago you had trouble with your mailing list (lg-announce). Your latest announcement was received twice, so I have attached a text file with the headers from the two messages. I hope this will help you fix the problem.
[The latest round of duplicates was caused by the same problem as the ones last fall: a certain subscriber or their ISP had a misconfigured mail program which sent the message back to the list. This address was in the middle of the Received: lines of all the duplicate message samples we saw. There are some mailers (Windoze?) which do not honor the envelope-to field of forwarded mail as they should, but instead think they should send it to everybody in the To: header, even though this has already been done. And majordomo's code to detect this sort of loop appears to be broken. While we work on a solution at the software level, we have unsubscribed that address and complained to the user and his/her postmaster.
The two cases involved different users on different continents. So we cannot guarantee it won't happen again, but we will continue to unsubscribe addresses as they are detected. -Ed.]
Fri, 3 Mar 2000 01:38:37 -0800 (PST)
From: Matthew Thompson <>
Subject: Microsoft OS's, their pricing and Linux
Greetings,
Perhaps someone has brought this up before, but I have YAMSCT (Yet another Microsoft conspiracy theory :). Maybe this whole thing about them not being able to combine the NT kernel with the Win9x series of OS's is a ruse.
If they did that, based on the current anti-trust scrutiny, they'd have to lower the price of WinNT/2000/whatever to the price of Win98/ME. They'd never be allowed to force home users to pay the premium price that businesses are now paying for Win2KPro on the desktop.
So as long as they have 2 separate product lines, they can charge basically double for Win2000 that gets sold to businesses. They would completely lose those higher profit margins if they merged the products.
I know I'm preaching to the proverbial choir here, but it will take Linux to end this. But only when you can have it *all* (I'm typing this from a telnet session to my Debian box from Win98SE, since I want to use all the features of RealPlayer 7, Java, Diablo and Descent 3). I'm hopeful that Mozilla will cure the Java problems under Netscape (does anyone know if this will be the case?), and more games are coming out for Linux all the time, but what about multimedia apps? Is there a Free project out there to fill this gap? I haven't heard anything about Real releasing a fully functional RealPlayer for Linux, especially as a plug-in.
As much as it pains me to say it, right now I'm afraid Linux *is* lacking on the desktop. Here I am, a Linux evangelist (practically a zealot, ask my friends) and I spend more time in Win98 than I do Linux because of the games and internet apps available.
What can we do about this? Is it enough to send emails to companies like Real to get them to release the same software for Linux that they do for Windows? We certainly can't expect MS to release Media Player 6.4 (which *is* an excellent app, btw) for Linux.
Fri, 3 Mar 2000 16:05:32 -0700
From: Jim Hill <>
Subject: Drop that comic strip pronto
The guy doing your strip can't draw and isn't funny. One can conceivably get away with a lack of either skill or humor but certainly not both.
Thanks for your 15 seconds.
[Since the Gazette is a do-it-yourself enterprise, if you don't like something, it's up to you to send in something better. :) -Ed.]
Mon, 6 Mar 2000 20:33:03 -0800
From: <>
Subject: RE: Who is Jim Dennis?
From SeanieDude on Tue, 21 Sep 1999 Why the f*ck is your name listed so damn much in hotbot?
Dear SeanieDude:
The reason the Godfath... err, Mr. Jim Dennis appears so often in a HotBot search is that, as the current head of the Maf... err, a large syndicate, he is being investigated due to a totally unfounded accusation: namely, that everyone who has ever been rude to him, *particularly* via e-mail, seems to have suffered unfortunate accidents.
Should your precarious health NOT fail shortly, for some inexplicable reason, take this as a guide for your future behavior:
NEVER be rude to people you don't know anything about, in e-mail or otherwise.
Hoping that you're still around to take good advice, Consigliori Ben Okopnik
Thu, 9 Mar 2000 13:49:49 -0700
From: Vrenios Alex-P29131 <>
Subject: RE: Article Submission
Mike Orr,
I have three articles on your web site so far and you might be happy to know that LG is an inspiration to me. This won't happen over night, but I am starting my own web site magazine about early computers and their use, at http://www.earlycomputing.com/ . You probably have a good deal of experience with the issues surrounding such a venture so any words of wisdom would be greatly appreciated. (Mine will be more a labor of love than a money making enterprise.)
I look forward to contributing to LG again in the future in any event, as the focus of your site and mine are quite different. Thanks in advance.
[I'll help out if I can. I only began editing the Gazette eight months ago, and it's been around for five years. So I can't say much about starting an ezine; just how to keep it going. I'm wondering whether you can get enough articles about early computing to have a regular "zine", or whether just a "site" where you post articles as you receive or write them would serve as well. I am surprised at how many people are willing to contribute to the Gazette. Every month I wonder whether I'll get only a few articles, but so far I've always gotten plenty to make a full zine. That will be more difficult with a more specialized topic, though, and especially at the beginning when you're not as well known.
Feel free to send me any other questions you have. Maybe we can eventually make an article or section in the Mailbag about starting a zine. Maybe you'll feel like writing an article about your experience setting up an early computer zine, how you're doing it similarly to or differently from the Gazette, etc. Not exactly Linux related, but I'm the editor so I can put in anything I want. Plus I know that there is an interest in early computers among Linux folk: we got an article about emulators recently, and another person is writing a second article about emulators now. -Ed.]
Sun, 12 Mar 2000 20:52:18 -0500
From: Doc Simont <>
Subject: Gandhi quote
To Whom it may Concern --
I would be interested to know the source of the (great) quote that you use at the top of your web page:
"One cannot unite a community without a newspaper or jounal of some kind."
I am one of a number of volunteers who manage to produce a surprisingly high-quality monthly "newspaper" for our small town in NW Connecticut. It might be something we could use, but our standards prevent us from taking attributions without verification. Too often something attributed to Abraham Lincoln turns out to really have come from Dante (or vice-versa) ;-).
Any pointers toward the source would be very much appreciated.
[It's from the movie Gandhi. We don't know whether Gandhi himself said it. -Ed.]
Mon, 13 Mar 2000 09:49:26 -0700 (MST)
From: Michael J. Hammel <>
Subject: correction to your Corel Photo-Paint story
Stephen:
In your C|Net article on Corel's release of Photo-Paint for Linux (http://news.cnet.com/news/0-1003-200-1569948.html) you mentioned Gimp and Adobe as Corel's most likely competitors. This isn't exactly true. First, Gimp has no marketing or business structure, not even a non-profit. So, although it's a terrific program, it lacks the exposure that a commercial application can get. In the long run, this may hurt it.
(I actually toyed with the idea of trying to form a non-profit or even a for-profit to keep Gimp a strong product, but coming up with a business model for this type of application is difficult. It's not like selling the OS, where service and support can bring in significant income.)
Adobe's move into Linux is limited, so far, to its PDF and word processing tools. It's not, as far as I know, doing anything about porting its graphics or layout applications (though Frame is probably considered a layout tool by many). Corel's not really competing with Adobe in graphics on Linux yet.
Mediascape is about ready to launch its vector-based ArtStream for Linux next month. This will be the first entry into the Linux layout tools market. Not long after that, Deneba (www.denebe.com) is expected to launch its Linux version of Canvas 7, a popular Mac image editing tool with vector, layout, and Web development features. These would be Corel's main competitors in the vector graphics arena. Gimp remains a competitor on the raster-based image editing front, but the lack of prepress support and an organizational structure could eventually become a problem.
Fri, 17 Mar 2000 18:36:30 EST
From: <>
Subject: subscription
Hello, I'm looking for a postal address for you...? I do volunteer work with an inmate pen pal site: http://members.xoom.com/crosllinked/index.htm. Today I received a request from an inmate wanting any type of subscription to any site that offers news on Linux operating systems. Thank you for your time. Sandy
[Our address is:
Attn: Linux Gazette
Specialized Systems Consultants, Inc.
PO Box 55549
Seattle, WA 98155-0549
USA
...but I'm not sure what that will gain him. The Gazette is not available by mail, unless a reader (you?) would be willing to print it out and send it to him. Otherwise, he would have to read it online or via the FTP files or a CD.
Of course, Linux Journal is available by mail if he wants that. A text order form is at http://www.linuxjournal.com/subscribe/subtext.txt which you can print and have him mail in. -Ed.]
Wed, 22 Mar 2000 08:12:40 +0100
From: Morgan Karlsson <>
Subject: translation of linux gazette articles
Hello, my name is Morgan Karlsson and I'm a new member of the se.linux.org family. I wonder if it's OK to translate articles from you into Swedish and publish them on our website, www.se.linux.org? Or even, if we get enough people working on it, translate every issue of your fantastic magazine into Swedish. What do you think about this?
[Certainly. We welcome translations. When your site is ready, please fill out the form at http://www.linuxgazette.com/mirrors.html so that we can add the site to our mirrors list and people will be able to find you. -Ed.]
Contents:
The April issue of Linux Journal is on newsstands now. This issue focuses on the Internet.
Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue72/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.
For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/
NUREMBERG, Germany, PASADENA, USA and METZ, France - 2/14/2000 - SuSE Linux AG, MandrakeSoft and Linbox Inc. are joining forces to develop Linux Network Computing and bring it to the broadest audience. SuSE Linux AG, MandrakeSoft and Linbox Inc. are partnering to develop and include in future versions of Linux distributions the key technologies of the Linbox Network Architecture. All SuSE Linux and MandrakeSoft users will soon be able to set up efficient diskless Network Computing solutions based on Linux.
The Linbox Network Architecture is an open approach to Linux Network Computing based on diskless standard computers on the desktop side, such as the Linbox Net Station, and full featured servers, such as the Linbox Net Server. According to Jean Pierre Laisne', CEO of Linbox Inc., "The Linbox Network Architecture allows users to bring the power of Linux to the desktop at minimal costs while still preserving the user's investments in Windows or MacOS software. Diskless Linbox Net Stations can be set up in a matter of minutes by users with no previous skills in the Linux environment and require no maintenance."
Under the joint partnership, SuSE Linux AG, MandrakeSoft and Linbox will form an open project which will publish the specifications of the Linbox Network Architecture and the required software under open source licensing. Development will be held by the Linbox R&D center in the Lorraine region, France's pioneering region for Linux and Free Software. All Linux users and businesses are welcome to join the LNA open project.
Technology Center Hermia, Tampere, Finland - February 21, 2000
SOT will be releasing their Best Linux operating system to English-speaking users worldwide for the first time. The press conference and publication will take place at CeBIT 2000, Hannover, Germany at 10am, February 24.
"The T-1 beta program was a success. We now have a good reason to expect the same success with the final English release as that we've had in Finland. I am sure that the release will finally dispel the myth of Linux as a server-only operating system, and will show Linux as a real contender for the dominating OS. I hope to see you at the press-release conference at CeBIT 2000" said Santeri Kannisto, CEO, SOT.
The first English version is called Best Linux 2000. SOT will begin shipping boxes after the release. Boxes will be available from well-known Linux-resellers and book stores. Additional information is available at the Best Linux web site, http://www.bestlinux.net
The Best Linux 2000 boxed set includes some new features never seen before in Linux. It includes lifetime technical support and a free update service: customers are shipped the latest installation CD to guarantee their Best Linux is always up to date. A boxed set also includes a 400-page manual, an installation CD, a source code CD, a Linux games CD and a software library CD, providing an easy way even for home users to start using a complete Linux system.
Caldera IPO Marks First Linux Disappointment
INDIANAPOLIS - March 27, 2000 - Macmillan USA (http://www.placeforlinux.com) announced Secure Server 7.0 for professional server administrators. Macmillan's new product is a secure Linux web server built within the new Linux-Mandrake(tm) 7.0 operating system.
FREDERICTON, NB, March 17 /CNW/ - Mosaic Technologies Corporation and Alta Terra Ventures Corp. have announced an alliance that will see Mosaic's Linux training programs bundled with Alta Terra's MaxOS(TM) Linux operating system.
Bringing Linux to the everyday PC desktop user is a major priority of both Alta Terra and Mosaic. To help Windows users make the transition to Linux, Mosaic, working with India's Sona Valliappa Group, will offer a Linux simulator, which runs in a Microsoft Windows(TM) environment. This allows users to go through the steps of installing and setting up Linux, without leaving Windows. Mosaic will follow up with training programs to help users with tasks within Linux itself.
Mosaic Technologies Corporation
Alta Terra Ventures Corp.
CeBIT, Hannover, Germany (February 22, 2000) - SCO and SuSE Linux AG, today announced an agreement to offer SCO Professional Services to SuSE customers, worldwide. The new offering, along with SCO's global reach, will help extend SuSE's growth into new markets. The agreement marks the first time SuSE Linux AG has partnered for professional services on a global level.
The SCO Professional Services offerings are designed to help SuSE's customers and resellers to get started with planning, installation, configuration, and deployment of their new SuSE Linux systems.
Santa Cruz, California, March 13, 2000 - Lutris Technologies Inc., and TurboLinux Inc., today announced a joint effort to certify and distribute the Enhydra Java/XML application server for the TurboLinux operating system. The partnership creates a scalable Open Source foundation for enterprise e-business application development and deployment.
Lutris and TurboLinux will work together to promote the Open Source e-business platform. Enhydra will be distributed on the TurboLinux companion CD and listed in the TurboLinux Application Directory as a premier development environment. Lutris will provide support, training and professional services for Enhydra applications running on TurboLinux.
Colorado Linux Info Quest
April 1, 2000, Denver, CO - thecliq.org

Corel Linux Roadshow 2000
April 3-7, 2000, Various Locations - www.corel.com/roadshow/index.htm

Montreal Linux Expo
April 10-12, 2000, Montreal, Canada - www.skyevents.com/EN/

Spring COMDEX
April 17-20, 2000, Chicago, IL - www.zdevents.com/comdex

HPC Linux 2000: Workshop on High-Performance Computing with Linux Platforms
May 14-17, 2000, Beijing, China - www.csis.hku.hk/~clwang/HPCLinux2000.html
(In conjunction with HPC-ASIA 2000: The Fourth International Conference/Exhibition on High Performance Computing in Asia-Pacific Region)

Linux Canada
May 15-18, 2000, Toronto, Canada - www.linuxcanadaexpo.com

Converge 2000
May 17-18, 2000, Alberta, Canada - www.converge2000.com

SANE 2000: 2nd International SANE (System Administration and Networking) Conference
May 22-25, 2000, MECC, Maastricht, The Netherlands - www.nluug.nl/events/sane2000/index.html

ISPCON
May 23-25, 2000, Orlando, FL - www.ispcon.internet.com

Strictly Business Expo
June 7-9, 2000, Minneapolis, MN - www.strictly-business.com

USENIX
June 19-23, 2000, San Diego, CA - www.usenix.org

LinuxFest
June 20-24, 2000, Kansas City, KS - www.linuxfest.com

PC Expo
June 27-29, 2000, New York, NY - www.pcexpo.com

LinuxConference
June 27-28, 2000, Zürich, Switzerland - www.linux-conference.ch
February 14th -- Software Carpentry is pleased to announce that O'Reilly & Associates has invited the winners in each category of its design competition to present their work at the 2nd Annual Open Source Software Convention in Monterey, CA, July 17-20, 2000.
So far, 27 individuals and groups have indicated that they will be submitting a total of 39 designs. The deadline for first-round entries is March 31, 2000; for more information, see the Software Carpentry web site at: http://www.software-carpentry.com
HANNOVER, GERMANY, Feb. 28, 2000 -- TRISIGNAL Communications, a Division of Eicon Technology Corporation, today announced the availability of its Phantom (TM) Embedded Modem reference design for the Linux operating system.
This new, off-the-shelf, pre-ported design will be immediately available for license to OEMs, allowing them to quickly bring to market any product running Linux and requiring V.90 modem connectivity, such as Internet appliances. The embedded Phantom design comes with all the necessary modem code and engineering support required for final integration by the OEM. Manufacturers that license this embedded modem design can benefit from TRISIGNAL's core software code, which has an installed base approaching 20 million units.
http://www.trisignal.com.
Chicago, IL - February 29, 2000 -- Computer I/O Corporation, a provider of software and services that simplify network access to live and real-time data streams, today announced the release of the Easy I/O Server, a Linux-based network I/O server. The Easy I/O Server is a flexible network I/O appliance that leverages the versatility of Linux and Computer I/O's new middleware technology to bring cost-effective, real-time networked data streaming capabilities to embedded and enterprise application developers.
The Easy I/O Server delivers an entirely new approach to the creation and remote access of I/O servers, peripherals, and appliances for telecommunications, multimedia, or any other application utilizing real-time data streams. Its I/O middleware technology provides unified interfaces for applications, network access, and real-time data collection and transfer. Computer I/O designed Easy I/O to allow application developers to quickly deploy and access a streaming data server without the need to understand low-level real-time and network programming issues. With its browser-based hardware configuration interface and universal application programming interfaces, the Easy I/O Server helps reduce software development and maintenance costs, and cuts the time to market for new embedded and enterprise network data applications.
www.computerio.com
Internet Technologies, Inc. (Inttek (TM)) has developed a highly effective E-Commerce System with the ability to work directly with Hell's Kitchen Systems CCVS.
For more information, visit our site: www.penguincommerce.com. Visit www.megcor.com for a working commercial example. The megcor.com site sold almost $1000.00 in golf equipment the first weekend it went live. More examples will be coming soon.
The System is hosted on a remote Application Server -- The only required on-site hardware/software is a PC and Web Browser. No knowledge of HTML is needed.
Technical Overview:
Penguin Commerce uses proprietary Software Developed by Internet Technologies, Inc.
Penguin Commerce is based on the following:
Red Hat Linux (currently running on 5.2)
Red Hat Secure Server -- (based on Red Hat Secure Server with modification)
Our secure web servers run a customized version of the Red Hat Secure Server release. We continually upgrade and improve these servers for maximum security.
MySQL Database Engine
PHP Version 4 (including custom Inttek extensions)
Various types of Open Source Image processing software
PGP Encryption Technology
PGP is used throughout our E-Commerce solutions to provide security for sensitive data. For example, before customer credit card numbers are stored in the MySQL database, they are encrypted to ensure privacy in the event that the data transmission to the MySQL server is compromised.
Hell's Kitchen Systems CCVS (including custom network connection program)
HKS CCVS software runs on a separate server, connected to the secure web server via a private Ethernet connection utilizing non-routed IP addresses. This, plus aggressive packet filtering, guarantees that the CCVS server is available for connections from the secure server only. All access to the CCVS server is controlled to prevent unauthorized access to any credit card numbers stored in plain text form.
Connections to the CCVS software are made through the standard Linux inetd service, which calls an intermediary program to translate commands and output between the secure web server and the CCVS software. This intermediate software, written in C with the CCVS C language API, is designed as an extra layer of abstraction, insulating the web programmer from the details of credit card processing, and generalizing the credit card processing interface. This will allow us to present a consistent interface to web designers and programmers, regardless of the details of our credit card processing implementation. Our intent was to keep the credit card intelligence on the CCVS server, not the web server.
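For reference, hooking such a gateway into inetd boils down to a couple of configuration lines like these (the service name, port, user and program path are invented for illustration; the real setup will differ):

# /etc/services
ccvs-gw    5999/tcp

# /etc/inetd.conf
ccvs-gw  stream  tcp  nowait  ccvsuser  /usr/local/bin/ccvs-gw  ccvs-gw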
An overview of the physical layout is as follows:
An overview of the physical layout is as follows:

(Internet)
   |
General Firewall (running RedHat Linux)
   |
Apache Web Server running PHP4 and MySQL (running RedHat Linux)
   |
Secure Server running the RedHat Secure Server distribution
   |
(Private Ethernet Segment)
   |
HKS CCVS Server running CCVS and custom CCVS connection software (running RedHat Linux)
   |
Modem with phone line to connect to the vendor's merchant account
(For security purposes, the HKS CCVS Server will not accept incoming phone calls via the modem.)
Milestone achievements of Inttek or Inttek engineers that involved Linux:
Qarbon.com, the originator of Viewlets, and SuSE Inc, a leading international Linux distribution, announced the launch of Qarbon.com's "The Linux Viewlet Project." Viewlets are a Web innovation that changes help files and FAQ's into vivid and dramatic "How To" demonstrations that "show" rather than "tell" a user how to perform a specific computing task. The introduction of Viewlets to the Linux community allows Linux developers and users from around the world to create, use and exchange Viewlets, which answer thousands of questions. Keeping with the Linux spirit, Viewlets are free to everyone on the net. Qarbon.com's business model includes a free Viewlet development tool, advertising-based revenue sharing for Viewlet authors and participating web sites,and a Viewlet syndication process designed to promote the use of Viewlets across the web.
To see some of the first Viewlets built around SuSE's Linux 6.3, go to www.teach2earn.com/linux/. Viewlets are expected to be a boon to increasing the use of Linux as users and developers see how Viewlets solve problems, reduce help desk calls and facilitate installation and use.
MINNEAPOLIS-March 15, 2000- MyFreeDesk.com announced today that Quality Internet of Jordan, Inc., a chain of Internet cafes in the Middle East, will offer ad-free versions of the MyFreeDesk.com web-based office suite on its computers beginning April 1, 2000. Quality Internet customers who traditionally visited the cafes to browse the Internet or play online games will now have the added benefit of a complete office suite of personal computer applications. MyFreeDesk includes a fully featured word processor, spreadsheet, presentation program, database, email manager and Web page editor.
Quality Internet charges its customers an hourly rate to use its computers. MyFreeDesk will receive 50 percent of Quality Internet's proceeds from customers who pay for the ad-free version of MyFreeDesk. Quality Internet expects to open 12 Internet cafes throughout the country of Jordan during the year 2000.
MyHelpdesk.com unveiled directories of technical support and productivity information for 20 distributions of the Linux operating system and some of the most popular Linux applications. The free directories will include help for Linux Web browsers, graphical desktop environments and Linux utilities and add-ons.
Each one of the 20 directories contains the Internet's best resources on everything from searchable knowledge bases and FAQs, to upgrade information and bug reports, to training and tutorials. The 20 directories cover the most popular distributions of the Linux operating system, including Caldera Open Linux, Corel Linux, Debian, Linux Mandrake, MK Linux, Red Hat, Slackware, SuSE and WinLinux 2000.
Sair's complete Linux and GNU Certified Administrator (LCA) level of exams are now available worldwide and our self-study guide for Linux & GNU Installation and Configuration has sold close to 40,000 copies. www.linuxcertification.com
DENVER - LinuxMall.com, is urging the adoption of certification standards developed by the Linux Professional Institute. The Linux Professional Institute (LPI) is an international non-profit organization dedicated to establishing professional vendor-neutral certification for the Linux Operating System.
Mark Bolzern, President of LinuxMall.com, is a member of the LPI Advisory Council and a sponsor of the organization. LinuxMall.com joins the company of other industry leaders like IBM, Caldera Systems, and Hewlett-Packard in supporting LPI's mission to certify the talent and hard work of Linux professionals throughout the world.
The first exam for LPI certification was launched in January 2000, offering an incentive program of Linux-related utilities to participants. LinuxMall.com has donated over a hundred incentive items-including t-shirts and Tux the penguin mascots-to participants who have completed the initial phase of testing. In addition, LinuxMall.com provided the fulfillment and distribution of all prizes donated by other LPI sponsors. "Because of the neutrality of LPI certification, businesses will gain a higher level of confidence in the abilities of the professionals they hire," adds Bolzern. "It's far more credible than certification standards established by a single company, such as the MCSE standard."
DENVER - LinuxMall.com announces an agreement with EarthWeb's Dice.com that will allow customized Linux job searches directly from the site by simply placing Linux in the job search string. LinuxMall.com will continue to enhance and improve job-related information such as training and education within the LinuxMall.com site.
Under terms of the agreement, LinuxMall.com becomes part of Dice.com's Custom Search Network, which is a targeted group of sites that use the Dice.com job search engine to power their job areas. LinuxMall.com becomes the 16th site to display Dice.com listings on their Web site through Dice.com's Custom Search Network, which includes Red Hat.com, Girl Geeks.com and UserFriendly.org.
Tux demonstrates once again that he's playing with a full deck and holding all the cards.
For those late nights at the office, relaxing at home with family and friends, or the perfect gift for the Joker who has everything (hmmm...), LinuxMall.com proudly presents Penguin Power and LinuxMall.com playing cards!
Tux appears in formal dress on face cards, befitting a Linux King, Queen or Jack. The Joker, however, may be in need of a decent haircut. But they're all waiting to be dealt in to your favorite game and fit neatly up most regular-sized sleeves.
These playing cards are just the latest addition to LinuxMall.com's vast array of Linux goodies. Be sure to check out the entire site; LinuxMall.com has everything from beach towels and buttons to bumper stickers and "Born to Frag" T-shirts.
http://www.linuxmall.com/shop/01840
SiteReview.org is a place where websurfers review and rate web sites. Go post some Linux reviews, and say something nice about Linux Gazette. :)
http://joydesk.com is groupware that provides web-based email, calendar, address book and task list services for web sites. Prominent customers include FreeI.net and www.webmail.ca.freei.net.
IBM Unveils Linux-Based Supercomputer
An interview with MontaVista founder Jim Ready
Why Linux won't fragment like UNIX did
Coming soon, to a car near you: Linux-based Internet radios
Linuxcare Establishes Asian Operations
www.destinationlinux.com encompasses everything related to Linux, from games, jokes, contests to Linux information (products, news, training, etc.).
On the FirstLinux site:
Neosystem (France) provides turnkey application servers, training and consulting for Linux.
http://www.balista.com/njp/linux.htm is a personal site by Nicholas Jordan, containing tips, links and advocacy.
LinPeople is the Linux Internet Support Cooperative, a system of free technical support on IRC. The channel is #LinPeople on irc.linpeople.org:8001.
We are proud to announce the 1.1final version of KDevelop (http://www.kdevelop.org). This version contains several new features and many bugfixes. It is intended to be the last release for KDE 1.1.2. We now want to concentrate our efforts on KDevelop 2.x (which will work with KDE 2.x).
Summary of changes (between 1.0final and 1.1final):
Please see http://www.kdevelop.org for further information (requirements and download addresses) and http://fara.cs.uni-potsdam.de/~smeier/www/pressrelease1.1.txt for the official press release.
NEW YORK-iEC 2000-February 29, 2000-Progress Software Corporation today announced the Developer Edition of its award-winning Progress(r) SonicMQ(tm) Internet messaging server will support the Linux operating system.
"We really wanted to try running SonicMQ on Linux," said Michael Quattlebaum, director of R&D at ChanneLinx.com, a provider of complete e-commerce solutions by linking industries into an interoperable digital marketplace. "I installed it on my Linux server, had it up and running in no time and it worked like a dream. Because SonicMQ is 100% Java, it can run with little or no configuration changes on any platform with a supported JVM. SonicMQ has been great to work with -- a clean implementation of the JMS specification with the added tools needed to make it practical."
SonicMQ is the first-and to date the only-standalone messaging server available from a major software vendor based on Sun's specification for Java-based messaging, Java Message Service (JMS). By providing a standards-based reliable and scalable messaging infrastructure for Internet application interoperation, together with key standards beyond the JMS specification such as eXtensible Markup Language (XML), SonicMQ ensures business-to-business transactions are successfully completed.
www.progress.com
Garlic is a free molecular visualization program, for viewing proteins and DNA.
Mahogany 5.0 e-mail and news client for X11 (GNOME or KDE).
Opera Software has released a tech preview (that means alpha) version of their web browser Opera.
Java products that run on Linux:
Hmm. I had a pretty darn good blurb written up about cool new stuff in 2.4 (still in the pre phase right now) but when I came back to my chair after getting coffee, my cat had taken over the chair and was sitting there, purring innocently.
As I lack the time to do it over (again!) here you go. See you next month.
A writer should never forego looking up a word in the dictionary because it's too much effort. Here's a way to have a dictionary at your fingertips if you're connected to the Internet: Make a shell script 'dict' that does
lynx "http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=$*"
Now the shell command
dict quotidian
brings up a definition of "quotidian".
I use Lynx instead of wget because the dictionary page has links on it I might want to follow (such as alternate spellings and synonyms). I use Lynx instead of something graphical because it is fast.
Everybody who is running a software project needs a FAQ to clarify questions about the project and to enlighten newbies how to run the software. Writing FAQs can be a time consuming process without much fun.
Now here comes a little Perl script which transforms simple ASCII input into HTML output which is perfect for FAQs (Frequently Asked Questions). I'm using this script on a daily basis and it is really nice and spares a lot of time. Check out http://leb.net/blinux/blinux-faq.html for results.
The attached faq_builder.txt is the ASCII input used to produce faq_builder.html with the faq_builder.pl script.
'faq_builder.pl faq_builder.txt > faq_builder.html'
does the trick. Faq_builder.html is the description of how to use faq_builder.pl.
(Attachments: faq_builder.pl, faq_builder.html, faq_builder.txt)
Well, I'm a Linux newbie. Been in DP for 25 years and wanted something better. I'm running SuSE 6.3 and having fun. Really close to wiping WINDOZE from my hard disk. I've really enjoyed your site and have found many answers to the many questions that have been staring me in the face. Keep up the good work.
Read another article about winmodems being dead wood as far as Linux is concerned. I got what I thought was a great deal at Best Buy: a 56k modem for 9.95. Of course I should have known it had to be a winmodem. But I stumbled on the site linmodems.org. There are a lot of people working on this issue. In fact, Lucent has provided a binary driver for their LT modems. Well, I downloaded that guy and I'm running great on my winmodem. There are other drivers available for other modems. Some are workable and some are still in development. I believe it's worth taking a look and spreading the word. You can reach a lot more people than I can.
Once again great site and thank you for all the helpful hints. I'll continue to be a steady visitor and a Linux advocate.
Hi!
When I browse through the 2 cent tips, I see a lot of general Sysadmin/bash questions that could be answered by a book called "An Introduction to Linux Systems Administration" - written by David Jones and Bruce Jamieson.
You can check it out at www.infocom.cqu.edu.au/Units/aut99/85321
It's available both on-line and as a downloadable PostScript file. Perhaps it's also available in PDF.
It's a great book, and a great read!
I usually print a lot of documentation. One thing that I would like is for my print jobs to have numbered pages, so at the bottom of the pages we could see "page 1/xx", etc. I looked for a while for info on how to set this up, but could not find it. The printtool just doesn't do it. Maybe I should create a filter, but what commands must I use to make this happen?
It depends on the software you used to create the file. If it is a plain text file, you can use "pr" to print it with page numbers in the header:
pr -f -l 55 somefile.txt | lpr
If the file is HTML, there is a utility html2ps that will do what you need. It's available at http://www.tdb.uu.se/~jan/html2ps.html. It converts HTML to PostScript with an option for page numbers and other things.
If you are using a word processor, it may have an option to include page numbers. Let us know what you are trying to print and we may be able to give better help.
See emacs *Tools | Print | Postscript Print Buffer* command... It does exactly what you want!
[ ]s - J. A. Gaeta Mendes Greetings from Brazil!
'man pr'
For fancier output with various bells and whistles
'man nenscript'
(plus maybe a driver script, or two, to get your customizations).
Michal
Try the 'mpage' command.
It won't work for everything, but it uses a header and page numbers, and can also print multiple pages of stuff per page (saves paper).
Example:
mpage -2 -H document | lpr
(prints 2 pages on one, and puts a header w/ filename, page numbers)
Hope this helps.
Thanks Bob and everybody who helped me! Now I am using mpage to print. That makes more sense, saving paper and toner.
To develop a distributed database application that runs on Linux, what inexpensive, powerful databases might work best?
Check the Application section at Linux.org for databases. It lists big names such as DB2, Oracle, Informix, Ingres, and others such as PostgreSQL, which is free, powerful, and ships with Red Hat Linux. The latest beta adds several desirable features. You may hear a lot about MySQL, but if you're building anything more complicated than a basic query system, you'll need something more powerful than MySQL.
PostgreSQL, by far (www.postgresql.org). Inexpensive (free, GPL), powerful (check the website for a list of features), great. There's also MySQL, but it is less powerful than Postgres (in performance when talking about millions of records, and much more).
Distributed? PostgreSQL is the database engine, server and client. You start the server and you can connect to it from any other machine using TCP/IP. Connect from what? From whatever you want: C, Python, PHP, Perl, etc. (see the Postgres HOWTO). A recent great addition is Gnome-DB (www.gnome.org/gnome-db/), which gives you the power to develop cool desktop applications.
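As a quick sketch of the remote-connection part (the host, data directory and database names are placeholders; older PostgreSQL releases need the postmaster started with -i before they will accept TCP/IP connections):

# on the database server: start the backend with TCP/IP enabled
postmaster -i -D /var/lib/pgsql/data &

# from any client machine on the network
psql -h dbserver.example.com mydb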
I'm just guessing, but are the clocks on all these machines synchronized? That seems to be one reason why one computer would continually update another.
Hello: You can solve your problems by reading the section 20 (Modems) on the Linux Hardware Compatibility HOWTO...
http://www.linuxdoc.org/HOWTO/Hardware-HOWTO-20.html
and the section 30 (Appendix E. Linux incompatible Hardware) too...
http://www.linuxdoc.org/HOWTO/Hardware-HOWTO-30.html
This is a known bug with the PPP packages distributed with Red Hat 6.1. See RHEA-1999:051-01 for a full description of the problem, and pointers to updated packages.
Scottie
I am trying to compile virtuald using make virtuald here is the error I get
Makefile:14: *** missing separator. Stop.

I did a cut and paste of the code from http://www.linuxdoc.org/HOWTO/Virtual-Services-HOWTO-3.html in the section 3.4 Source, then used ftp to put it on the server in order to compile it.
This is a small gotcha with make. See the following makefile snippet:
# Remove all generated files.
clean:
	rm -f $(OBJS) $(BASENAME) *~ core $(TARFILE) $(BACKUP) $(LOG)
The commands that follow a rule (`rm' in this case) should be preceded by a tab character. Probably, some tabs got converted to spaces when cutting-and-pasting.
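A quick way to see whether the command lines really start with tabs (cat -A shows tabs as ^I and line ends as $):

cat -A Makefile | head -20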
Try editing the Makefile with vi or emacs. Both can insert literal tab characters, and emacs has a special mode for makefiles that takes care of that automagically.
Hi,
I think you're missing a Makefile!
The "make" command interprets a set of commands in a specifically written makefile, to compile and link source code into an executable. "gcc" is the actual compiler/linker, and is appears to be what you actually require in this case. Try "gcc (source filename goes here) -o virtuald" (it's probably wise to move your source from virtuald to virtuald.c with the command "mv virtuald virtuald.c" first).
Scottie
By the way, if you did write a makefile, a common gotcha is that at least one tab is required before the commands under a rule. Spaces just won't cut it; you'll end up with the "missing separator" message again!
Check out a program called WinRoute for Windows 98. You can download a trial copy from winroute.com. If it's setup right, you can tell your Linux box that your Win98 box is your "default gateway" and winroute will do the rest for you. I have 2 W98 boxes, 1 linux box and 1 iMac all running on this kind of setup and I've never had a problem.
DJ Busch
Hi!
In LinuxGazette #51 Pierre Abbat ([email protected]) wrote:
I am trying to read the Gazette with kfm 1.167 and several pages crash it, including the mailbag and 2c Tips. Can you help me figure out what's wrong? It's happened before.

In your Subject you wrote that the Gazette crashes Konqueror, but in your text you wrote that it is kfm. Which one do you use? Is Konqueror your kfm replacement? Then I think you are using a KDE beta version. Please try to use stable releases (I think the last was KDE 1.1.2).
From: Jeffrey T. Ownby ([email protected])
Subject: 5250 terminal for AS400 connection

I am adding a Linux box to a network consisting of several Win9X and NT machines that use either IBM Client Access or Rumba to connect to our AS400. Is there a program similar to either one of these that can provide terminal emulation on Linux? Any info appreciated!
Later,
Jeffro
I currently use tn5250 to connect from my laptop to an AS/400 (thru an NT Server). This is a modified version of telnet with better key mappings. You can find it at: http://www.blarg.net/~mmadore/5250.html
IBM also has a Java-based version of Client Access which reportedly runs under Linux. I haven't tried it since it requires part of it to run from the HTTP server on the AS/400. Their link is: http://www-4.ibm.com/software/network/hostondemand
Hope this helps.
Vince Du Beau
There's a 5250 terminal emulator available at http://www.linux-sna.org/software/5250/index.html
We use an AS/400 in the college I attend, and while I haven't been able to get permission to put a linux box on the same network as the 400 (and therefore cannot vouch for the linux version) I have tried the WinNT port of the program, and it works very well. (Though as far as I remember, there was no way to paste)
But as far as getting the capabilities of client access, you should try Linux SNA ( http://www.linux-sna.org/) which adds the AS/400s native protocol stack to the kernel. There are also some tools which should provide some of the other capabilities of client access, such as file transfer.
(If you get it working, please drop me a note, as I'd love some testimonial to use to convince my college to let me hook up a box to our as/400).
If you can't or don't want to use auto-mounting, and are tired of typing out all those 'mount' and 'umount' commands, here's a script called 'fd' that will do "the right thing at the right time" - and is easily modified for other devices:
#!/bin/bash d="/mnt/fd0" if [ -n "$(mount $d 2>&1)" ]; then umount $d; fi
It's a fine example of "obfuscated Bash scripting", but it works well - I use it and its relatives 'cdr', 'dvd', and 'fdl' (Linux-ext2 floppy) every day.
Ben Okopnik
Hi guys, here I'm trying to get a little bit of help with my computer. I'm doing very time-expensive calculations using FORTRAN programs, compiled with g77 under Red Hat 6.1. First, on dual Pentium-II/400MHz and Pentium-III/450MHz computers I noticed that when the program size (RSS in top) gets bigger than approximately 600K, computation speed dramatically decreases by a factor of two. This slowing down agrees with the bus speed (100MHz) and L2 cache (512K, 200MHz). So,
What have you done about optimizing your program? Some things you could do:
I decided that the reason lies in the cache speed/size and bought a (pretty cheap) dual Pentium-II Xeon/450MHz computer with 2MB of L2 cache per processor (supposed to run at 450MHz) and 512M of SDRAM on a SuperMicro mainboard. Unfortunately I did not find any difference in performance between this computer and the still much cheaper dual Pentium-II/400MHz. Why is that?
Total performance depends on a lot of things besides CPU speed and cache size. To name a few:
The clock speeds between the two systems don't vary much. So if you find that your performance hasn't increased as much as you expected, the only thing that you can conclude is that the cache size probably isn't the limiting factor here.
Maybe Red Hat 6.1 must somehow be told explicitly about the cache size? But I did not find any such option...
I don't *think* so. AFAIK, the cache can't be influenced from the OS on x86 CPUs. I've only heard of that trick with Macs running on 68040s.
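For what it's worth, one quick sanity check is to look at what the kernel itself reports about each CPU and its cache (just a sketch; the exact fields vary with the kernel version):

grep -i 'cache size' /proc/cpuinfo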
HTH, Roland
Is there a "definative" anti virul program for Linux? Any info appreciated!
There are several anti-virus programs available, see Freshmeat.net: http://freshmeat.net/appindex/daemons/anti-virus.html
I believe that McAfee also runs on Linux.
But these are mostly used for scanning mail destined for other systems that are more vulnerable to viruses.
Linux and other UNIX-like systems don't suffer much from viruses, because most programs do not run with root privileges. So they don't have access to the system, other than the user's home-directory and processes.
So as long as you're surfing as a normal user, and not as root, any virus that you contract can at most endanger your own files and processes, and not the integrity of the system. Besides, most binary and macro viruses are targeted at DOS/Windows, so they don't even work on Linux.
Of course there are other attacks on your system possible, forms of so-called root-exploits; using known defects in programs to gain access to your machine as root.
That's why you do need to keep track of your distribution's security advisories.
Dear,
I was running quite a long time with NFS and transmission stopped. I get:

Sep 6 00:03:20 coyote kernel: eth0: trigger_send() called with the transmitter busy.

I rebooted the machine I was connected to and I get the below (part of the /var/log/messages file; not all error statements shown):

Sep 6 17:57:04 beartooth kernel: neighbour table overflow
Sep 6 17:57:04 beartooth kernel: neighbour table overflow
Sep 6 17:57:04 beartooth rpc.statd: Cannot register service: RPC: Unable to send; errno = No buffer space available
Sep 6 17:57:04 beartooth nfs: rpc.statd startup succeeded
Sep 6 17:57:04 beartooth rpc.statd[407]: unable to register (SM_PROG, SM_VERS, udp).
I had the same problem but with all ftp daemons running under inetd
My problem was resolved when I added "127.0.0.1 localhost" to /etc/hosts and when I set up the loopback lo interface using route and ifconfig.
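For reference, the classic loopback setup amounts to something like this (a sketch; most distributions do this in their init scripts anyway):

ifconfig lo 127.0.0.1 netmask 255.0.0.0 up
route add -net 127.0.0.0 netmask 255.0.0.0 dev lo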
I hope this was also your problem.
Dear Wagner Perlino
What you are asking is commonly done with Linux. For example I have a small home network with three and sometimes four W95 machines and sometimes another Linux box. All are connected to a hub along with a "server" linux box. That machine does the following
I did not set out to do all of this at once. The project began as a request to make a dial-on-demand connection. It just grew as I got better with Linux and realized the power of the O/S and machine.
Dial on demand was quite an experience to set up due to my then inexperience with Linux, ISP hookups, and networking. Now it would be quite easy.
There were some issues with the current version of diald (0.16 and 0.99) and incompatibilities with newer versions of Linux (particularly Red Hat 6.1 and the Ethertap device). When I did this project about a year ago the issues were being worked through. Some comprehensive how-tos were posted and some users were reporting success. By now the package itself has probably been fixed. My workaround was to stay with diald 0.16 and RedHat 5.1. Setup was straightforward and the combination has worked flawlessly for about a year.
Contact me if you want more details. I'd be glad to help.
Put the insmod command into /etc/rc.d/rc.local
Any commands there are run at boot time.
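For example, a line like the following at the end of /etc/rc.d/rc.local would load a SoundBlaster driver at every boot (the module name and options are only an example; use whatever insmod line already works for your hardware):

/sbin/insmod sb io=0x220 irq=5 dma=1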
I have noticed a number of times that people have problems with a secondary IDE interface on a PnP sound card. Don't be alarmed by "ide3: unexpected interrupt, status=0xff, count=1". That is to say, if you have been successful in getting your isapnp.conf correct and the card seems proper from dmesg except for this message, and you still can't do a "mount /dev/cdrom /cd0"... well, then go to /dev, rm cdrom and ln -s to the correct device! /dev/cdrom is probably linked to /dev/hdb, for example. When testing your CD, use 'mount -t iso9660 /dev/hdx /mnt/cdrom', where the x in hdx is the correct device name. You will perhaps surprise yourself after many hours spent shaking your head.
I have just installed version 6.1 and set up my modem to dial out to my ISP. However, when I log on as a user and press KDE>Internet>kppp, a pop-up box opens and wants me to enter the root password! This does not seem right. Is there a way to avoid having to enter the root password when logged on as a non-root user?
You could possibly change the permissions on /dev/modem and whatever it is pointing to (/dev/ttyS1 etc.) to allow the user to read/write from that device.
You could also selectively allow some users to use the modem by giving group permissions, but I am really not sure how it is done (I know it is possible, though).
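A minimal sketch of the group-permission approach (the device name /dev/ttyS1 and the group name 'dialout' are only assumptions; adjust them for your system):

# as root: create a group, give it the device, and add the user to it
groupadd dialout                # skip if the group already exists
chgrp dialout /dev/ttyS1
chmod 660 /dev/ttyS1
gpasswd -a someuser dialout     # 'someuser' is a placeholder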
cheers -Sas
... I can hear the modem clicking like it is trying to dial, but it will never dial the number....
Try running setserial to set the IRQ used by the modem, e.g. I use "setserial /dev/ttyS2 irq 5 uart 16550A" to get my modem working. This command must be run as root. You may have to mess with the jumpers on your modem card to set the IRQ.
There should be an initialization script that controls serial port configuration at boot time. On SuSE 6.3 this is /sbin/init.d/serial. It should be possible to edit this script so that your modem is set up automagically, although I haven't yet got it working on my system.
See the Modem HOWTO for more info.
-- Steve
Saw this question in LG the other day - yep, I'm a few issues behind but catching up fast. Relatively easy answer (I just tried it with both Debian and Red Hat and it works fine):
(Assumptions: you have DOS/Windows installed, and can read from your CD.)
First, create a directory - C:\Linux is fine.
(The two examples below will cover the majority of the installs done these days, and are easily adaptable to other distros.)
Debian: type

cd Linux

at the C: prompt, then type

loadlin linux root=/dev/ram initrd=root.bin

Red Hat: this uses the 5.2 CD, but I would think it's much the same for the different versions. From the "dosutils" directory, copy "loadlin.exe"; from "dosutils\autoboot" copy "vmlinuz" and "initrd.img" into your "Linux" directory. Shut down, attach the CDROM, reboot into DOS, 'cd Linux', and type

loadlin vmlinuz initrd=initrd.img
...and you're on your way!
Another tip, while we're on the subject - Debian has these files available at their FTP server, and probably on the CD as well -
base2_1.tgz - 10MB
drv1440.bin - 1.4MB
resc1440.bin - 1.4MB
Stick these in your "Linux" directory, too; they'll install a base Linux system on your HD, or let you perform any sort of rescue ops necessary (by mounting your existing Linux partition as /target hanging off a ramdisk, for example - forget your root password lately? ). No CD required - that's the entire base package. One of _the_ handiest things there is when you're munging through a tricky installation - and a GOOD reference for the initial state of your /etc files (look inside base2_1.tgz). Putting those files on the DOS side is usually my first step during an installation, and it's saved my sanity more than once.
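For the forgotten-root-password case, the drill from such a rescue boot looks roughly like this (a sketch; /dev/hda2 is only an example of where your Linux root partition might live):

mount /dev/hda2 /target     # mount the existing Linux root filesystem
chroot /target /bin/sh      # switch into it
passwd root                 # set a new root password
exit                        # leave the chroot, then reboot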
Ben Okopnik Captain S/V "Ulysses"
Thus spoke Ferenc Tamas Gyurcsan
I just saw your problem in the LG. Did you look for something like xvidcap? (I can't give you a URL, but you will find it.) P.S.: If you manage to produce a good MPEG from the captured pictures, please let me know how. Ferenc
The original question was from Shawn Medero, who asked:
It captures motion on the computer desktop, basically multiple screen-captures tied together to form a movie of sorts. Primarily one would use it to create training demonstrations of Linux applications, etc.
I checked, and yes, XVidCap does appear to fit this description. A quick check on Freshmeat gave this description for XVidCap:
XVidCap is an X11/Xt program, which captures specified rectangular areas of the X11 desktop. The captured frames can be saved in different formats (XWD, PNG, JPEG, PPM). Frames per second and other parameters can be defined at the command line. The saved frames could be used e.g. for an mpeg encoder or to make an animated GIF. A Step-mode is supported to get a frame on mouse click.
I tried to access the Homepage for this program, but couldn't get through, though it might be a problem on my end. I don't have time to recheck today, so I'll just pass along the URLs of interest.
Download: ftp://ftp.komm.hdk-berlin.de/pub/linux/X11/ Homepage: http://home.pages.de/~rasca/xvidcap/
Thanks to Ferenc for pointing this out. It's another application to add to my own catalog of tools on the Graphics Muse site (http://graphics-muse.com).
First, I know this is the opposite of what the intentions of Linux are, but sometimes it IS necessary. I recently had to remove Linux from my Dell Inspiron 3500 laptop so that I could reinstall Windoze; it was necessary due to work and limited hard-drive space (4GB).

Windows fdisk, PartitionMagic, and Drive Wizard would not remove the partitions I had created for Red Hat 6.1 (kudos to Linux on that one :-)). Instead, you must first begin installing WinNT and allow it to remove the partition. After WinNT has removed the partition, you can either let it format the drive or install Windoze as normal. So far that is the only way I have found to remove Linux partitions.
Note: This tip was actually given me by my brother-in-law who had to do the same thing.
If, like me, you prefer vi-style command line editing in bash, here's how to get it working in Mandrake 7.0.
When I wiped out Redhat 5.2 on my PC and installed Mandrake 7.0, I found vi command line editing no longer worked, even after issuing the "set -o vi" command. After much hair pulling and gnashing of teeth, I finally found the problem is with the /etc/inputrc file. I still don't know which line in this file caused the problem. If you have this same problem in Mandrake or some other distribution, my suggestion for a fix is:
1. su to root.
2. Save a copy of the original /etc/inputrc file (you may want it back).
3. Replace the contents of /etc/inputrc with the following:
set convert-meta off
set input-meta on
set output-meta on
set keymap vi
set editing-mode vi
The next time you start a terminal session, vi editing will be functional.
--Bolen Coogler
This seems to be a very common Red Hat 6.1 bug. The problem is mentioned, without a solution, in the current FAQ for the PPP demon.
After installing Red Hat 6.1, every time I tried to dial-in to my ISP, a mysterious hangup occurred on the first attempt, and the connection always succeeded the second time. I first suspected the ISP, but they have nothing to do with it.
The problem disappeared as soon as I compiled ppp from the most recent source rpm. I use ppp-2.3.11-1 with a 2.2.14 kernel. Compilation was straightforward. Now I connect at once and everything is just fine.
According to the README.linux file in the ppp documentation, there are some subtleties related to compiling ppp for different kernel versions. Perhaps the ppp package included in Red Hat 6.1 was configured for another kernel than it ships with.
Best regards
I'm new user and believer of the Linux OS and I need help badly. I'm looking for a driver for an ATI Xpert@Work 8Mb PCI card. Where can I get it? I'm using a RedHat 5.2 and my monitor is a Mitsubishi Diamond Scan model FA3415AT4 [...]
Configure your display with the help of 'XF86Setup' (you have to write it as I do, with upper- and lower-case letters) or, if that doesn't run, with the 'xf86config' program. Try to find your ATI card, and if you don't, simply use SVGA. Most cards which are not listed are standard SVGA cards (my Matrox Millennium G200 also), and they run very well with the SVGA driver.
I have (successfully) set up a bunch of ATI cards under RedHat, and lately (>=5.0) have found that Xconfigurator seems to give better results with ATI cards.
The tip you're looking for looks something like this:
Edit your DNS entry that probably looks like this:

www    A        [Your Machine's IP Address]

to:

@      A        [Your Machine's IP Address]
www    CNAME    @
What you're saying is that your IP is uniandes.edu.co and that www is an alias to it. So either way, it will end up at your site. If Apache is set up with the ServerName directive as "www.uniandes.edu.co", then the name will be fixed up as soon as the client connects to Apache.
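For reference, the Apache side of this is just the ServerName directive in httpd.conf (using the host name from the example above; substitute your own):

# httpd.conf
ServerName www.uniandes.edu.co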
Chances are, if you are using any platform other than Linux (Windows, DOS, Mac, etc.), the problem is the mode that ftp was in when you uploaded it. You have to make sure you upload in ASCII mode as opposed to binary. This mode does the proper conversion of line breaks and such. Give it another shot. The best way to tell if it's readable on Linux is to type 'cat filename.c'; you should see the line breaks in the right places. Hope this works for you. Let me know if you need more help if this doesn't roll out for you.
Several readers took your humble Editor to task for telling a user that Linux cannot autodetect memory above 64 MB because of a BIOS limitation; instead, I said that you have to tell Linux explicitly in the LILO config file or at the LILO command line.
The readers said they have had no problems with Linux autodetecting their 128 MB of memory. So I went home and took the
append = "mem 128M"line out of my /etc/lilo.conf file and discovered that, indeed, it was unnecessary. But I know it was a necessity last year when I put the system together. In the meantime I had switched from kernel 2.0.36 to 2.2.14--perhaps autodetection was added to the 2.2 kernels.
I have a question: have you any idea where I could find info about running multiple video cards and monitors under Linux, e.g. 2 SVGA cards or an SVGA and a VGA card, and how should one configure these?
XFree86 4.0 (which is out now) should solve this issue.
It captures motion on the computer desktop, basically multiple screen-captures tied together to form a movie of sorts. Primarily one would use it to create training demonstrations of Linux applications, etc.
You could do this with import (part of the ImageMagick package) and a simple shell script. Try this, for example.
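The attached camera.sh does the real work; as a rough sketch of the idea (the frame count, one-second delay and file names here are arbitrary), it might look something like this:

#!/bin/sh
# grab 20 full-screen shots, one per second: shot001.gif ... shot020.gif
i=1
while [ $i -le 20 ]; do
    import -window root "shot$(printf '%03d' $i).gif"
    sleep 1
    i=$((i + 1))
done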
You could then combine the shots into an animated GIF with Gimp, or use ImageMagick's animate command to view the sequence, like this:
animate shot*.gif

But that starts a loop that repeats until you stop it...
I'm trying to extract a block of text from a file using just bash and standard shell utilities (no perl, awk, sed, etc). I have a definitive pattern that can denote the start and end or I can easily get the line numbers that denote the start and end of the block of text I'm interested in (which, by the way, I don't know ahead of time. I only know where it is in the file). I can't find a utility or command that will extract everything that falls between those points. Does such a thing exist?
awk and sed are considered to be "standard shell utilities." (They are part of the POSIX specification). The sed expression is simply:
sed -n "$begin,${end}p" ...... if begin and end are line numbers.
For patterns it's easier to use awk:
awk "/$begin/,/$end/" ...... Note: begin and end are regexes and should be chosen carefully!
However, since you don't want to do it the easy way, here are some alternatives:
------------------ WARNING: very long -------------------------
If it is a text file and you just want some lines out of it try something like: (text version)
#!/bin/sh
# shextract.sh
# extract part of a file between a
# pair of globbing patterns
[ "$#" -eq "2" ] || {
    echo "Must supply begin and end patterns" >&2
    exit 1
    }
begin=$1
end=$2
of=""    ## output flag
while read a; do
    case "$a" in
        "$begin") of="true";;
        "$end")   of="";;
    esac
    [ -n "$of" ] && echo $a
done
exit 0
... this uses no external utilities except for the test command ('[') and possibly the 'echo' command from VERY old versions of Bourne sh. It should be supported under any Bourne shell derivative. Under bash these are builtin commands.
It takes two parameters. These are "globbing" patterns NOT regular expressions. They should be quoted, especially if they contain shell wildcards (?, *, and [...] expressions).
Read any good shell programming reference (or even the rather weak 'case...esac' section of the bash man page) for details on the acceptable pattern syntax. Note because of the way I'm using this you could invoke this program (let's call it shextract, for "shell extraction") like so:
shextract "[bB]egin|[Ss]tart" "[Ee]nd|[Ss]top"
... to extract the lines between any occurrence of the term "begin" or "Begin" or "start" or "Start" and any subsequent occurrence of "end" or "End" or "stop" or "Stop."
Notice that I can use the (quoted) pipe symbol in this context to show "alternation" (similar to the egrep use of the same token).
This script could be easily modified to use regex's instead of glob patterns (though we'd either have to use 'grep' for that or rely on a much newer shell such as ksh '93 or bash v. 2.x to do so).
This particular version will extract *all* regions of the file that lie between our begin and end tokens.
To stop after the first we would have to insert a "break" statement into our "$end") ... ;; case. To support an "nth" occurrence of the pattern we'd have to use an additional argument. To cope with degenerate input (cases where the begin and end tokens might be out of order, nested or overlapped) we'd have to do considerably more work.
As written this example requires exactly two arguments. It will only process input from stdin and only write to stdout. We could easily add code to handle more arguments (first two are patterns, 'shift'ed out rest are input file names) and some options switches (for output file, only one extraction per file, emit errors if end pattern is found before start pattern, emit warnings if no begin or subsequent end pattern is found on any input file, stop processing on any error/warning, etc).
Note: my exit 0 may seem superfluous here. However, it does prevent the shell from noting that the program "exited with non-zero return value" or warnings to that effect. That's due to my use of test ('[') on my output flag in my loop. In the normal case that will have left a non-zero return value since my of flag will be zero length for the part of the file AFTER the end pattern was found.
Note: this program is SLOW. (That's what you get for asking for it in sh.) Running it on my 38,000-line /usr/share/games/hangman-words (this laptop doesn't have /usr/dict/words), it takes about 30 seconds, or roughly only 1000 lines per second on a P166 with 16Mb of RAM. A binary could do better than that under MS-DOS on a 4MHz XT!
BUG: If any lines begin with - (dashes) then your version of echo *might* try to treat the beginnings of your lines as arguments. This *might* cause the echo command to parse the rest of the line for escape sequences. If you have printf(1) available (as a built-in to your shell or as an external command) then you might want to use that instead of echo.
To do this based on line numbers rather than patterns we could use something more like: (text version)
#!/bin/sh
# lnextract.sh
# extract part of a file between
# line numbers $1 and $2
function isnum () {
    case "$1" in
        *[^0-9]*) return 1;;
    esac
    }
[ "$#" -gt "2" ] || {
    echo "Must supply begin and end line numbers" >&2
    exit 1
    }
isnum "$1" || {
    echo "first argument (first line) must be a whole number" >&2
    exit 1
    }
isnum "$2" || {
    echo "second argument (last line) must be a whole number" >&2
    exit 1
    }
begin=$1
end=$2
[ "$begin" -le "$end" ] || {
    echo "begin must be less than or equal to end" >&2
    exit 1
    }
shift 2
for i; do
    [ -r "$i" -a -f "$i" ] || {
        echo "$i should be an existing regular file" >&2
        continue
        }
    ln=0
    while read a ; do
        let ln+=1
        [ "$ln" -ge "$begin" ] && echo $a
        [ "$ln" -lt "$end" ] || break
    done < "$i"
done
exit 0
This rather ugly little example does do quite a bit more checking than my previous one.
It checks that its first two arguments are numbers (your shell must support negated character class globs for this, ksh '88 and later, bash 1.x and 2.x, and zsh all qualify), and that the first is less than or equal to the latter. Then it shifts those out of the way so it can iterate over the rest of the arguments, extracting our interval of line from each. It checks that each file is "regular" (not a directory, socket, or device node) and readable before it tries to extract a portion of it. It will follow symlinks.
It has some of the same limitations we saw before.
In addition, it won't accept its input from stdin (although we could add that by putting the main loop into a shell function and invoking it one way if our arg count was exactly two, and differently (within our for loop) if $# is greater than two). I don't feel like doing that here --- this message is already way too long and that example is complicated enough.
It's also possible to use a combination of 'head' and 'tail' to do this. (That's a common exercise in shell programming classes). You just use something like:
head -$end $file | tail -$(( $end - $begin + 1 ))
... note that the 'tail' command on many versions of UNIX can't handle arbitrary offsets. It can only handle the lines that fit into a fixed block size. GNU tail is somewhat more robust (and correspondingly larger and more complicated). A classic way to work around limitations on tail was to use tac (cat a file backwards, from last line to first) and head (and tac again). This might use prodigious amounts of memory or disk space (it might use temporary files).
If you don't want line oriented output --- and your patterns are regular expressions, and you're willing to use grep and dd then here's a different approach:
start=$(grep -b "$begin" ... ) stop=$(( $( grep -b "$end" ... ) - $begin )) dd if="$file" skip=$begin count=$stop bs=1b
This is not a shell script, just an example. Obviously you'd have to initialize $begin, $end, and $file or use $1, $2, and $3 for them to make this into a script. Also you have to modify those grep -b commands a little bit (note my ellipses). This is because grep will be giving us too much information. It will be giving a byte offset to the beginning of each pattern match, and it will be printing the matching line, too.
We can fix this with a little work. Let's assume that we want the first occurrence of "$begin" and the last occurence of "$end" Here's the commands that will just give us the raw numbers:
grep -b "$begin" "$file" | head -1 { IFS=: read b x echo b } grep -b "$end" "$file" | tail -1 | { IFS=: read e x echo e }
... notice I just pipe grep through head or tail to get the first or last matching line, and I use IFS to change my field separator to ":" (which grep uses to separate the offset value from the rest of the line). I read the line into two variables (separated by the IFS character(s)), and throw away the extraneous data by simply echoing the part I wanted (the byte offset) back out of my subshell.
Note: whenever you use or see a pipe operator in a shell command or script --- you should realize that you've created an implicit subshell to handle that.
Incidentally, if your patterns *might* have a leading - (dash) then you'll have problems passing them to grep. You can massage the pattern a little bit by wrapping the first character with square brackets. Thus "foo" becomes "[f]oo" and "-bar" becomes "[-]bar". (grep won't consider an argument starting with [ to be a command line switch, but it will try to parse -bar as one).
This is easily done with printf and sed:
printf "%s" "$pattern" | sed -e 's/./[&]/'
... note my previous warning about 'echo' --- it's pretty permissive about arguments that start with dashes that it doesn't recognize; it'll just echo those without error. But if your pattern starts with "-e " or "-n", it can affect how the rest of the string is represented.
Note that GNU grep and echo DON'T seem to take the -- option that is included with some GNU utilities. This would avoid the whole issue of leading dashes since this conventionally marks the end of all switch/option parsing for them.
Of course you said you didn't want to use sed, so you've made the job harder. Not impossible, but harder. With newer shells like ksh '93 and bash 2.x we can use something like:
[${pattern:0:1}]${pattern:1}
(read any recent good book on shell programming to learn about parameter expansion). You can use the old 'cut' utility, or 'dd', to get these substrings. Of course those are just as external to the shell as perl, awk, sed, test, expr and printf. If you really wanted to do this last sort of thing (getting a specific-size substring from a variable's value, starting from an offset in the string, using only the bash 1.x parameter expansion primitives) it could be done with a whole lot of fussing. I'd use ${#varname} to get the size, a loop to build temporary strings of ? (question mark) characters of the right length, and the ${foo#} and ${foo%} operators (stripping patterns from the left and right of a variable's value respectively) to isolate my substring.
Yuck! That really is as ugly as it sounds.
Anyway. I think I've said enough on the subject for now.
I'm sure you can do what you need to. A lot of it depends on which shell you're using (not just csh vs. Bourne, but ksh '88 vs. '93 and bash v1.14 vs. 2.x, etc.) and just how rigid you are about that constraint of "standard utilities".
All of the examples here (except for the ${foo:} parameter expansion) are compatible with bash 1.14.
(BTW: now that I'm really learning C --- y'all can either rest easy that I'll be laying off the sh syntax for awhile, or lay awake in fear of what I'll be writing about next month).
Here's a short GNU C program to print a set of lines between one number and another: (text version)
/* extract a portion of a file from some beginning line, to
 * some ending line
 * this functions as a filter --- it doesn't take a list
 * of file name arguments.
 */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main (int argc, char * argv[] )
{
    char * linestr = NULL;
    size_t linelen = 0;
    long begin, end, current = 0;

    if ( argc < 3 ) {
        fprintf(stderr, "Usage: %s begin end\n", argv[0]);
        exit(1);
        }
    begin = atol(argv[1]);
    if ( begin < 1 ) {
        fprintf(stderr, "Argument error: %s should be a number "
                "greater than zero\n", argv[1]);
        exit(1);
        }
    end = atol(argv[2]);
    if ( end < begin ) {
        fprintf(stderr, "Argument error: %s should be a number "
                "greater than or equal to %s\n", argv[2], argv[1]);
        exit(1);
        }
    /* read lines until we pass the last requested line */
    while ( getline(&linestr, &linelen, stdin) > -1 && current++ < end ) {
        if ( current >= begin ) {
            printf("%s", linestr);
            }
        }
    exit(0);
    return 0;
}
This is about the same length as my shell version. It uses atol() rather than strtol() for the argument to number conversion. atol() (ASCII to long) is simpler, but can't convey errors back to us. However, I require values greater than zero, and GNU glibc atol() returns 0 for strings that can't be converted to longs. I also use the GNU getline() function --- which is non-standard, but much more convenient and robust than fussing with scanf(), fgets() and sscanf(), and getc() stuff.
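To try it out, something like the following should do (the file names are placeholders; save the listing as extract.c first):

gcc -Wall -o extract extract.c
./extract 10 20 < /etc/services     # print lines 10 through 20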
Microsoft's flip-flop on Linux has created a lot of confusion in the marketplace. Here's a look at the positions taken during and after the anti-trust trial, and an evaluation by a Linux advocate.
Microsoft has managed to create a quagmire of uncertainty among some potential Linux users by conveniently recognizing Linux as a technological threat to Windows 9x and NT during the Antitrust Trial, then denouncing Linux as "hype" on a Microsoft web site. Obviously, Microsoft was willing to portray Linux as competition to the court, but not to the buying public.
On the one hand, in its "Proposed Findings" submitted to the anti-trust court, Microsoft all but endorsed Linux. The section of the findings on competitive operating systems glittered with references to Linux's viability and acceptance. While anti-trust prosecutors no doubt anticipated that sort of tactic from Microsoft, such detailed recognition of Linux coming from Microsoft was stunning. Of course, Microsoft's Proposed Findings were not the findings of the court, and shortly after the trial, Microsoft accordingly changed its public position about Linux.
While Microsoft may have previously characterized Linux as "pie in the sky," the company felt sufficiently threatened to show a short film ridiculing Linux at its July meeting with stock market analysts. Most people would consider that good old-fashioned FUD (Fear, Uncertainty and Doubt), a tactic frequently employed by Microsoft. After the meeting, however, Microsoft executives reportedly emphasized the real point: They had ridiculed Linux because they were afraid of it.
In September, in a hefty document submitted to the anti-trust court (Defendant Microsoft Corporation's Revised Proposed Findings of Fact, Microsoft Does Not Possess Monopoly Power in the Alleged Market for "Operating Systems for Intel-Compatible PCs." -- September 10, 1999) Microsoft made over 50 references to Linux as a competitive operating system that is gaining significant momentum and market share.
But in October, Microsoft was on the offensive again, publishing a "Linux Myths" (http://www.microsoft.com/ntserver/nts/news/msnw/LinuxMyths.asp) page on the web. This time Linux was characterized as a lot of hype.
We all know that Microsoft is a fierce competitor, with many challengers and enemies. But in the interest of separating the hype from the realities, we need to sort out the main assertions about Linux that Microsoft declared in its court-filed "Proposed Findings," and how they compare to the "Linux Myths" document.
As one who has been heavily involved with the microcomputing world since the mid '70s, and a member of the Linux community from the outset, I would like to review the Microsoft positions and try to make sense of them. To accomplish this, however, we must assess some obvious contradictions.
A month later Microsoft flip-flopped in its Linux Myths by saying, "Linux does not provide support for the broad range of hardware in use today."
Microsoft was, in fact, right the first time. In reality, Linux supports most existing PC hardware and much non-PC hardware. Linux is an innovative extension of Unix, as noted in my 1994 document "Why Linux is Significant," which correctly predicted, many years ago, the state of Linux today (http://www.LinuxMall.com/news/announce/lxsig).
With the proven stability of the Linux kernel, Microsoft is correct to anticipate that various Linux systems will compete effectively with Windows NT. Perhaps more important, Linux will compete with Windows 2000, Microsoft's future contender for the business community. In fact, the wild success of the Red Hat and VA Linux IPOs has no doubt stolen a lot of thunder from the long-planned Windows 2000 rollout.
Later on, in its Linux Myths, Microsoft maintains that "The complexity of the Linux operating system and cumbersome nature of the existing GUIs would make retraining end-users a huge undertaking and would add significant cost." Another flip-flop.
The fact is, the Corel and KDE interfaces are indeed pleasing and user friendly. So are the ones developed by the Gnome project, Xi Graphics' CDE and others. And when you consider the thousands of expert participants in the Linux community who are contributing their expertise to the free software cause, it suggests that if there's a next quantum improvement in GUIs, it will come through the Linux Community.
In the Linux Myth document Microsoft flip-flopped again, saying, "Linux as a desktop operating system makes no sense. A user would end up with a system that has fewer applications, is more complex to use and manage, and is less intuitive."
It seems to me that this flip-flop is not only transparent, but flies in the face of potential future litigation. The fact is, freely-distributed Windows emulators will circumvent many interface problems. Even Microsoft's Proposed Findings recognizes that it's possible to run Windows-based applications for Linux by using WINE emulation software developed by the open source movement. In addition, Red Hat, Caldera, SuSE, TurboLinux, Linux-Mandrake and other future versions of Linux will be bundled with popular desktop applications. And products like Tarantella, which connects dissimilar systems to share applications, and VM-Ware, which allows running Linux and Microsoft OS simultaneously on the same machine, among others are starting to have a significant impact. There is also a significant trend toward the porting of Windows applications to Linux by manufacturers themselves.
Microsoft is also guilty of not being completely honest with its own users. Windows 2000 will be incompatible with a number of existing Windows applications, and with the advent of Intel's 64-bit computing platform, both Linux and Windows will find themselves on a level playing field for applications. When vying for application support on the Intel 64-bit platform, Microsoft may quickly fall behind Linux in terms of available applications.
Later, in the Linux Myths, Microsoft says "The Linux community likes to talk about Linux as a stable and reliable operating system, yet there are no real world data or metrics and very limited customer evidence to back up these claims." Microsoft points to its customers, including Boeing, Barnes and Noble, Dell Computer and Nasdaq, as dependent on Windows NT 4.0 for their mission-critical applications.
While Microsoft has, and will continue to have, an impressive list of high-end customers, so too does Linux. Nearly all of the Fortune 2,000 are listed in LinuxMall.com's customer base, and are using Linux for strategic purposes. As stated by Microsoft and noted in the press on a daily basis, many high-profile computer manufacturers and independent support companies are now offering comprehensive lists of hardware supported by Linux. At the time of this writing, in addition to the manufacturers mentioned by Microsoft, my company, LinuxMall.com, is being flooded by requests from companies wanting to offer Linux-based hardware, as well as offering support services for sale.
Actually, the Internet abounds with stories of failed installations based on Microsoft technology being replaced by Linux and other Open Source systems, and the phenomenal return on investment that doing so brings about.
Then, in Linux Myths: "Serious corporations are spending serious money on Linux, and it is growing rapidly on all relevant fronts... The Linux operating system is not suitable for mainstream usage by business or home users," and the document goes on to explain why Linux has a long way to go before it can compete with Windows.
It doesn't actually surprise me that so many millions of people were using Linux as of last year. As CEO of one of the Internet's largest Linux-related sites, I have seen our traffic grow to more than half a million people per month, many of whom are using Linux. IDC has pegged the growth of the Linux market at about 212% per year, and I believe that Microsoft has pegged the annual Linux growth rate closer to 1000%. That's rather breathtaking, and the rate is obviously capable of continuing in the short term as support for both corporate and home users gets dramatically better.
Witness to a Paradigm Shift?

Microsoft's contradictory position on Linux as a viable competitor seems indicative of the difference between the marketplace and the court. It also indicates that Microsoft senses a paradigm shift similar to the one that allowed Microsoft to replace IBM as the dominant force in the computer industry. Only time will tell if Linux will displace Microsoft entirely, but the shift is important nonetheless. Just as IBM suffered and then adapted, so will Microsoft. The question is "when."
If you're aware of the news in general, you know there is a lot of excitement about Linux, and a certain amount of hype. My guess is that all the ink Linux has generated over the past year has created a 60-70% awareness of the operating system. Of that, perhaps 5-10% took action in 1999. Does that mean it's being oversold? I don't believe so.
Microsoft minimizes the importance of Linux's low cost (actually, no cost to download - and only $1.89 for any of the most popular Linux distributions on CD from LinuxMall.com, if you'd rather not spend the download time), arguing that "A free operating system does not mean low total cost of ownership." However, when a Fortune 1000 business considers running Windows 2000 on servers and thousands of workstations, it faces paying millions of dollars in licensing fees for the operating system alone. In addition, it is common knowledge that Windows NT servers often need to be rebooted weekly or monthly to avoid problems. Linux, on the other hand, has been proven to run for months and sometimes years without requiring a reboot. In 24/7 operations this is a critical issue. The gap widens further because supporting Linux, thanks to its stability and configurability, may actually cost less than supporting Windows NT, even as universities and the Internet turn out capable Linux technicians at a rapid pace. These are the reasons why many, if not most, of the ISPs that once used Windows NT now use Linux or its Open Source cousin, FreeBSD - as will more and more corporations.
I believe Linux is en route to becoming the operating system of choice very soon. Its development cannot be stopped, simply because of the community dynamic behind it. As people become aware of how rapidly Linux is getting easier to use and support, acceptance at the desktop will grow dramatically. In the next two or three years, Linux will continue to attain significant market penetration. Over the next five years, it has a chance to become the dominant operating system for general use.
I also believe that the findings of fact tend to underestimate what is happening with Linux. In general, the findings depict the present situation accurately, but I believe Judge Jackson has erred on the side of caution in finding that Linux is not a significant long-term competitor to Microsoft. Judge Jackson can be excused for this lack of vision, however, since even proof that Microsoft will face significant competition in the future would not excuse its past actions.
In the end, Microsoft's public smoke screen cannot obscure the fact that Linux is a viable operating system for servers and desktops alike. Linux has made good on being the "better UNIX than UNIX" that was the stated goal of Windows NT. The question is no longer, "Is Linux ready for you?" The question now is, "Is the public ready for Linux?"
More HelpDex cartoons are on Shane's web site, http://mrbanana.hypermart.net/Linux.htm.
The OLinux site also has more Linux interviews.
This article is the current installment in an ongoing series of site reviews for the Linux community. Each month, I will highlight a Linux-related site and tell you all about it. The intent of these articles is to let you know about sites that you might not have visited before, but they will all have to do with some aspect of Linux. Now, on with the story...
Linux, and UN*X in general, rocks the command line. However, as more users familiar with that other operating system migrate to a real operating system, they expect to see a graphical interface on almost everything. You may argue that this is a Bad Thing, but as a programmer myself, I see this as a Good Thing. It means that more and more programs will need to be written, updated and maintained, which translates to job security.
With users migrating from another windowing system, they expect to find programs that have a windowing interface. Even with the advantage that there is almost always more than one way to do something in Linux, the choice of windowing libraries to use can very quickly generate a religious war, so I'll try not to spread any of my own preferences in this area, lest I become a target myself.
Until recently, it has been difficult to write a completely windows-driven (note the lack of capitalization here) user interface from scratch. Writing an interface with nothing but a text editor and a compiler can be exceedingly time-consuming for a programmer who isn't intimately familiar with the windowing library. That's where Glade comes in.
Glade is an attempt to create an interface builder that uses the GTK+ library to create the widgets the programmer needs for an application. If you have the GNOME development libraries installed, Glade can produce native GNOME application interfaces as well. Once you get used to creating and placing widgets in Glade, you can create some very complex interfaces in a matter of minutes.

When the interface is the way you like it, Glade can generate the source code for you in C, C++, Ada95, Python or Perl. Glade will also let you create a dynamically loaded interface that uses libGlade to read and build the screen definitions without generating source code (this can be handy for writing quick dialog boxes or informational windows).
Although Glade is still in development, currently at version 0.5.7, my testing showed it to be a robust application that was able to create the interface I wanted with a minimum of trouble.
This wouldn't be a Linux Site O' The Month without a look at the website, so let's take a closer look...
At first glance, the Glade website isn't the most exciting site on the internet. But, that's not necessarily a Bad Thing. With a minimum of graphic elements on the main page, it is a very fast-loading site, compared to others that I've seen recently. The site is frameless, which I am tending to like more as I see frames so misused on other sites.
The Features section of this site includes screenshots of the three windows that make up the Glade interface as well as some sample images of interfaces that were created with Glade. The Download page includes the usual list of source code tarballs and a few prebuilt packages for some of the more popular distros. The developer has included both the release history and todo list for Glade in the History and ToDo sections, respectively. If your mailbox isn't quite full enough yet, you can get your fill under the Mailing Lists link. Finally, in the Links section, there are links to information and tools that use or support Glade, while the Applications section highlights apps that were built with Glade.
This isn't a very big website, but what it lacks in size, it makes up in content. There is enough information on this site to help you get Glade installed on your box, get you started building applications with it, and point you to examples of other programs that were created with it. If you've been thinking of building an application for Linux but don't know where to start with the interface, try Glade. You'll be surprised at how easy it becomes to get down to writing the code that controls your program instead of worrying about how it connects to the user's rodential pointing device.
Just a few minutes before sitting down to write this article, I managed to fix a problem that has been the bane of my existence for the last two weeks. Since it is a problem that I have often seen mentioned in the Linux Gazette, usually phrased in a manner that shows the writer to be standing on a chair with a noose around his neck and typing with his toes, I've decided to share it with other readers, hopefully saving them wear and tear on good rope. This may also serve as a good guide to troubleshooting software problems in general. Be aware, though, that a login problem could involve _any_ of the areas described - what fixed my particular machine may not be the solution for yours.
A couple of weeks ago, I decided to install an MUA (Mail User Agent) on my machine. A strange thing to do, considering that I live on a sailboat anchored well away from phone lines or electricity - but I had my reasons. I'd done this on land-based systems before; there was just a bit of experimentation that I wanted to do.

Well, as a pride of lemmings goeth before a fall off a cliff, so does an MTA (Mail Transfer Agent) go before an MUA - you need something that will deliver the mail, otherwise there's not much point in writing it! So, an MTA/MUA installation. No problem - I keep the entire Debian distribution on the Linux partition of my hard drive; this speeds up installations as well as making package searches a trivial task.
If truth be known, I don't like `su', at least not for major tasks: the fact that it keeps the original user's environment variables, rather than assuming those of the account being "su"'d to, has caused me a few "interesting moments". For a quick permissions change or an /etc file modification, all right - but for serious work, like installing and uninstalling several major packages (I wasn't sure which MTA I wanted yet), I log in as `root'.
On to the task. Midnight Commander makes it the work of a few keystrokes to dive into and explore a directory tree, as well as letting you look inside - and install - any Debian or RedHat package. Let's see... `sendmail'? (Read the `man' page inside the package, look at the docs, install...) Nope, too big and complex. I need something a bit simpler. (Uninstall.) `exim'?... `exmh'?... `mh'?... `nmh'? All got the same "install/uninstall" treatment, with the exception of required libraries: whenever I install a library, it stays installed. After a bit of doing this on a new system, I don't get any complaints about `Required libraries missing' - if it wasn't for the fact that a number of libs in any given distribution are `either/or' choices (they'd conflict with each other), I'd install the entire "libs" directory and never worry about it again!
However, I still had an MTA to choose. Ah, `smail'! Easy to install, painless to configure - done. Easy choice for an MUA - I really like the configurability of `mutt' - and I'm finished! (Prophetic words...)
EXCEPT. Now, I found that I could not log in as a non-root user anymore. The message I got was:
Cannot execute /bin/bash: Permission denied
What in the heck was this?
`Was this some occult illusion?
Some maniacal intrusion?
These were choices Solomon
Himself had never faced before...'
I knew that I hadn't done anything in /etc/passwd - for that matter, anything in /etc - but I wasn't 100% sure of what those packages, safe as they're supposed to be, were doing under my auspices as `root'. So, I quickly did some double-checks - yes, user `ben' still existed in /etc/passwd; ditto for group `ben' in /etc/group; entering the wrong string as a password provoked the usual `Login incorrect' message instead of the `Cannot execute'. Hmm.
Another double-check: I created a new user ("joe"), new password and all ("joe"), and tried to log in as that user. No go, same error. Something in the login sequence had died, for reasons unknown. (Goodbye, "joe"...)
At this point, I let out a quiet "eep!" of minor panic, very quickly switched to another VT, and tried to log in as `root'. WHEW; no problems there. At least I would still have access to the machine when I next brought it up... I'd have hated to do an immediate `live' backup and reinstallation!
Open up /bin. What do the file permissions look like? Uh-huh... everything is set to 755 (-rwxr-xr-x); in addition, `login', `mount', `umount', `ping' and `su' are all SETUID (-rwsr-xr-x). So far, so good; how about /etc permissions? They all look OK too - mostly 644 (-rw-r--r--), with an occasional 600 (-rw-------) here and there, for files denied to everyone but `root'. All right, let's try something silly; I overwrote `login' and `bash' with fresh copies, straight out of their original packages, to make sure that they weren't corrupted. Nope; still no luck.
Wait, how about /home? If the permissions on that got mis-set and the user couldn't get in... Rats, it was fine too - 6775 (drwxrwsr-s). Checking the .bashrc and .bash_profile showed nothing unusual - and their perms were OK. Just for kicks, I checked all the other subdirectories in '/'; all except /root were world-readable, which was fine.
There are a couple of files in /var that keep track of who's logged in, when they logged out, and so on; if these guys get corrupted, *all* sorts of strange unpredictable stuff happens. So - emergency measure time! - I typed
cat >/var/log/wtmp
cat >/var/run/utmp

which blew their contents away and left them as zero-length files. [He actually typed this without the "cat", but I put the "cat" in to make it clear that the ">" was part of the command line and not the shell prompt. -Ed.] I logged out on all VTs (just so `utmp' and `wtmp' would get some data), and... the usual result.
Permissions on /dev/ttyX and /dev/vcsX (terminals and virtual consoles)? They all looked OK too; I was starting to lose hope.
Wait; what about a systematic approach? Let's get an idea of exactly what's happening before running in every direction. A quick look at the System Administrator's Guide (SAG) to refresh my memory - ah, there's the login process:
From the "System Administrator's Guide", by Lars Wirzenius
First, init makes sure there is a getty program for the terminal connection (or console). getty listens at the terminal and waits for the user to notify that he is ready to login in (this usually means that the user must type something). When it notices a user, getty outputs a welcome message (stored in /etc/issue), and prompts for the username, and finally runs the login program. login gets the username as a parameter, and prompts the user for the password. If these match, login starts the shell configured for the user; else it just exits and terminates the process (perhaps after giving the user another chance at entering the username and password). init notices that the process terminated, and starts a new getty for the terminal.
[Flowchart omitted: init forks and execs /sbin/getty; getty waits for a user and reads the username, then execs /bin/login; login reads the password and, if it matches, execs /bin/sh - otherwise login exits and init starts a new getty for the terminal.]

Figure 8.1: Logins via terminals: the interaction of init, getty, login, and the shell.

Note that the only new process is the one created by init (using the fork system call); getty and login only replace the program running in the process (using the exec system call).
Following the process, we can see that everything up until the last part - the 'exec("/bin/sh")', that is - seems OK. It's during or after that hand-off that things go wild. The problem was now down to system calls, something I wasn't quite sure how to approach... and yet that piece of information contained everything I needed to know; I just didn't know how to apply it. Later on, it would become self-evident.
Over the next ten days or so, every time I logged in I would try something new; some things totally outlandish and unlikely to work; some, bright ideas that produced great disappointment when the Evil Message once again showed its head. Nothing worked. I replaced `getty'; tried a couple of shells other than /bin/bash; tried "su"ing to `ben'; checked the logs (they showed `ben' as having successfully logged in (!), which told me that `login' was fine; the failure occurred when it handed the process off to `bash' - I knew that!)...
After finding only a few references to this on the Net - mostly in Japanese, Swedish, and German (I managed to puzzle out the last two - one of them suggested checking perms on '/'! Excellent idea... which didn't pan out in my case) - I shot off a panicked summary of the problem to The Answer Guy - Hi! <grin> Unfortunately, he must have been swamped by all those Windows 2000 questions that he just loves to answer... anyway, I was cast on my own resources.
Ah - `strace'! Remember `strace'; `strace' is your friend... A really fantastic piece of software that traces the execution of a program and reports it, step by step. Let's go!
Since you have to be logged in to run a program, I ran
strace -s 10000 -vfo login.ben login ben
from my current VT; this meant "Run strace on `login ben'; print all lines up to 10000 characters long (I didn't want to miss any messages, no matter how long they were); make the output verbose; trace any forked processes; output the result to a file called `login.ben'". Then, as a baseline, I ran
strace -s 10000 -vfo login.root login root

- and now, I had two files to compare. The `root' one was about twice as long as `ben' - that made sense, since a successful login goes on to execute all the stuff in the "~/.bash*" files.
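(The article doesn't say how the two traces were compared; a plain diff is one quick way to eyeball the point of divergence, though expect plenty of noise from differing addresses and process IDs:)

diff login.root login.ben | less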
`strace login' makes for very informative reading. If I hadn't already read the System Administrator's Guide, this would have given me the exact information - in far more detail. It shows all the libraries that are read, every file examined by `login', the comparison procedure for `group' and `password'... the only thing it did NOT show was the reason for the failure; just the fact itself, at exactly the point in the procedure where I expected it to be:
(300+ lines elided)
execve("/bin/bash", ["-bash"], ["TERM=linux", "HZ=100", "HOME=/home/ben",
    "SHELL=/bin/bash", "PATH=/bin:/usr/bin", "USER=ben", "LOGNAME=ben",
    "MAIL=/var/spool/mail/ben", "LANG=C", "HUSHLOGIN=FALSE"]) = -1 EACCES (Permission denied)
write(2, "Cannot execute /bin/bash: Permission denied\n", 44) = 44
Just great. The last thing poor `login' tried to do, before falling over on its back with its legs twitching in the air, was to `execve' bash with the defined variables collected from /etc/passwd, /etc/login.defs, and so on - all of those looked OK - and write those 44 hateful characters to "stderr" (output descriptor 2). Basically, the stuff I'd already figured out.
I did notice, however, that `login' was opening a number of libraries in /lib that were needed by the Name Service Switch configuration file (/etc/nsswitch.conf). What if one of the mentioned libraries was corrupted? That would be right in line with the `system calls' theory - since libraries are where the system calls come from! Let's check the lib that handles local logins for NSS (see `man nsswitch'):
dpkg -S libnss_compat-2.0.7.so

("Tell me, O Mighty Debian Package Manager, whence cometh said program?"), and the Debian Oracle, in his wisdom, replied -
libc6: /lib/libnss_compat-2.0.7.so
Humm. The very core of the Linux libs. Well... a quick replacement of all the /lib/libnss* ... and no change. Next idea.
This procedure got me thinking, though. Something was indeed "rotten in the state of Denmark" - perhaps I needed to check perms on the files in /lib?
The only problem was, I didn't know what they were supposed to be. You see, most of the libs are set to "root.root 644" - owner root, group root; owner - read/write, group - read-only, others - read-only. There are a few, though, that should be set to "root.root 755" - as above, but with "execute" permission added for everyone... and without looking at a fresh Linux installation, I had no idea of what was right.
WAIT a minute! As I'd mentioned in a 2-cent tip that I'd sent in to LG, I like to keep a copy of a Debian "base installation" file set (7 files, about 15MB) on my DOS partition as a 'rescue' utility - it should have everything I need!
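(The article skips the actual commands at this point, so here's a hedged reconstruction - the paths are my own guesses, and only the final chmod reflects what the text implies the fix turned out to be:)

# Hedged sketch, not the author's literal session.
# With the pristine base set unpacked under /tmp/pristine (tar will do),
# compare the packaged mode of the dynamic loader against the live copy:
ls -l /tmp/pristine/lib/ld-2.0.7.so /lib/ld-2.0.7.so
# If the live copy has lost its execute bits, put them back:
chmod 755 /lib/ld-2.0.7.so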
Yes, I did check the perms on all the other libraries; `ld-2.0.7.so' was the only one that was affected. The only remaining `unknown' was how the perms changed in the first place... but I suspect that question will never be answered.
As usual, the lessons that Linux teaches are hard - but fair. There's *always* a way to solve a problem; admittedly, often the easiest way is to reinstall the system, but this does not teach you the "innards" of an OS the way tracking down a problem will. In my case, reinstallation would have been relatively easy: I have a couple of spare drives, easily big enough to hold my "up to the minute" data so that I don't even need to touch my backups, and a basic Debian install takes me less than 10 minutes. I wasn't interested in that. The thought uppermost in my mind was: "What would happen if this occurred at a customer's site?" I needed to know what the right solution was... and through persistence - no, sheer bloody-mindedness - I succeeded.
I don't suggest that every one of you beat his brains out against some difficult problem once a week just to "keep in practice" - but I do suggest that you use a methodical approach, based on knowledge gained from reading the appropriate HOWTOs and other documentation available before grabbing that installation CD yet another time. There will be times when you'd like nothing better than to laugh maniacally as you watch your system shrink to a pinpoint, dropping away from your lofty perch on the Empire State Building... and there will be other times when the satisfaction of having solved a knotty problem of this sort makes you pound your chest and do Tarzan imitations.
Now, if you all will excuse me, I've got a chimpanzee and an elephant I'm supposed to meet...
Happy Linuxing to all,
Ben Okopnik
INTRO
Shell scripting is a fascinating combination of art and science that gives you access to the incredible flexibility and power of Linux with very simple tools. Back in the early days of PCs, I was considered quite an expert with DOS's "batch files", something I now realize was a weak and gutless imitation of Unix's shell scripts. I'm not usually much given to Microsoft-bashing - I believe that they have done some absolutely awesome stuff in their time - but their BFL ("Batch File Language") was a joke by comparison. It wasn't even a funny one.
Since shell scripting is an inextricable part of the shell itself, quite a bit of the material in here will deal with shell quirks, methods, and specifics. Be patient; it's all a part of the knowledge that is necessary for writing good scripts.
PHILOSOPHY OF SCRIPTING
Linux - Unix in general - is not a warm and fuzzy system oriented toward the non-knowledgeable user. Rather than specifying exact motions and operations that you must perform, it provides you with a myriad of small tools which can be connected in a literally infinite number of combinations to achieve any result that is necessary (I find Perl's motto of "TMTOWTDI" - There's More Than One Way To Do It - highly apropos for all of Unix). That sort of power and flexibility, of course, carries a price: increased complexity and a requirement for greater competence in the user. Just as there is an enormous difference between operating, say, a bicycle and a supersonic jet fighter, so is there an enormous difference between blindly following the rigid dictates of a standardized GUI and creating your own program, or shell script, that performs exactly the functions you need in exactly the way you need them done.

Shell scripting is programming - but it is programming made easy, with little, if any, formal structure. It is an interpreted language, with its own syntax - but that syntax is the same one you use when invoking programs from your command line; something I refer to as "recyclable knowledge". This, in fact, is what makes shell scripts so useful: in the process of writing them, you continually learn more about the specifics of your shell and the operation of your system - and this is knowledge that truly pays for itself in the long run as well as the short.
REQUIREMENTS
Since I have a strong preference for `bash', and it happens to be by far the most commonly used shell, that's what these scripts are written for. Even if you use something else, that's still fine: as long as you have `bash' installed, these scripts will execute correctly. As you will see, scripts invoke the shell that they need; it's part of what a well-written script does.
I'm going to assume that you're in your home directory, since we don't want these files scattered all over the place where you can't find them later. I'm also going to assume that you know enough to hit the "Enter" key after each line that you type in, and that, once you have selected a name for your shell script, you will check that you do not have an executable with that same name in your path (Hint: type "which bkup" to check for an executable called "bkup"). For this specific reason, you should never name your scripts "test". This is one of the FAQs of Unix, a.k.a. "why doesn't my shell script/program do anything?" There's an executable in /bin called "test" that does nothing (nothing obvious, that is) when invoked...
It goes without saying that you have to know the basics of file operations - copying, moving, etc. - as well as being familiar with the basic assumptions of the file system, i.e., "." is the current directory, ".." is the parent (the one above the current), "~" is your home directory, etc. You didn't know that? You do now! <chuckle>
Whatever editor you use, whether `vi', `emacs', `mcedit' (the default editor in Midnight Commander and one of my favorite tools), or any other text editor is fine; just don't save this work in some word-processing format.
In order to avoid constant repetition of material, I'm going to number the lines as we go through and discuss different parts of a script file. I'll be putting it all together at the end, anyway.
BUILDING A SCRIPT
Let's go over the very basics of creating a script. Those of you who find this obvious and simplistic are invited to follow along anyway; as we progress, the material will become more complex - and a "refresher" never hurts. As it is, the projected audience for this article is a Linux newbie, someone who has never created a shell script before - but wishes to become a Script Guru in 834,657 easy steps. :)
In its simplest form, a shell script is nothing more than a shortcut - a list of commands that you would normally type in, one after another, to be executed at your shell prompt - plus a bit of "magic" to notify the shell that it is indeed a script.
The "magic" consists of two simple things: a notation at the beginning of the script that specifies the program that is used to execute it, and a change in the permissions of the file containing the script in order to make it executable.
As a practical example, let's create a script that will "back up" a specified file to a selected directory; we'll go through the steps and the reasoning that makes it all happen.
First, let's create the file and set the permissions. Type
>bkup
chmod +x bkup

The first line creates a file called "bkup" in your current directory. The second line makes it executable; note that the "+x" option of `chmod' makes this script executable by everyone - if you wish to restrict that, you'll need to run `chmod' with "u+x" or "ug+x" (see the "chmod" man page). In most cases, though, just plain "+x" is fine.
Next, we'll need to actually create the script. Start your editor and open up the file you've just made:
mcedit bkup

The first line in all of the script files we create will be this one (again, remember to ignore the number and the colon at the start of the line):
1: #!/bin/bash
This is a subtle but important point, by the way: when a script runs, it actually starts an additional bash process that runs under the
current one; that process executes the script and exits, dropping you back in the original shell that spawned it. This is why a script that,
for example, changes directories as it executes will not leave you in that new directory when it exits: the original shell has not been told to change directories, and you're right where you were when you started - even though the change is effective while the script runs.
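If you want to see this for yourself, here's a two-line experiment (not part of our "bkup" script - just a throwaway, and the name is mine):

#!/bin/bash
# "wander" - shows that a script's 'cd' doesn't affect your login shell
cd /tmp
pwd

Make it executable with "chmod +x wander", run it with "./wander" (more on that "./" prefix a bit later), and then type "pwd" at your own prompt: you're right back where you started, because only the child bash process ever changed directories.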
To continue with our script:
2: # "bkup" - copies specified files to the user's ~/Backup 3: # directory after checking for name conflicts.
4: cp -i $1 ~/Backup
The "-i" syntax of the `cp' command makes it interactive; that is, if we run "bkup file.txt" and a file called "file.txt" already exists in
the ~/Backup directory, `cp' will ask you if you want to overwrite it - and will abort the operation if you hit anything but the 'y' key.
The "$1" is a "positional parameter" - it denotes the first thing that you type after the script name. In fact, there's an entire list of
these variables:
$0 - The name of the script being executed - in this case, "bkup".
$1 - The first parameter - in this case, "file.txt"; any parameter may be
     referred to by $<number> in this manner.
$@ - The entire list of parameters - "$1 $2 $3..."
$# - The number of parameters.

There are several other ways to address and manipulate positional parameters (see the `bash' man page) - but these will do us for now.
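To see these expand in real life, here's a tiny throwaway script (the name "params" is mine, not part of the article):

#!/bin/bash
# "params" - print out the positional parameters we just discussed
echo 'Script name ($0):      ' $0
echo 'First parameter ($1):  ' $1
echo 'All parameters ($@):   ' $@
echo 'Parameter count ($#):  ' $#

Running it as "./params file.txt extra" should print "./params" for $0, "file.txt" for $1, the whole list for $@, and 2 for $#.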
MAKING IT SMARTER
So far, our script doesn't do very much; hardly worth bothering, right? All right; let's make it a bit more useful. What if you wanted
to both keep the file in the ~/Backup directory and save the new one - perhaps by adding an extension to show the "version"? Let's try
that; we'll just add a line, and modify the last line as follows:
4: a=$(date +%T-%d_%m_%Y)
5: cp -i $1 ~/Backup/$1.$a
The effect of the last two lines in the script is to create a unique filename - something like file.txt.01:00:00-01_01_2000 - that should not conflict with anything else in ~/Backup. Note that I've left in the "-i" switch as a "sanity" check: if, for some truly strange reason, two file names do conflict, "cp" will give you a last-ditch chance to abort. Otherwise, it won't make any difference - like dead yeast in beer, it causes no harm even if it does nothing useful.
By the way, the older version of the $(command) construct - the `command` (note that "back-ticks" are being used rather than single quotes) - is deprecated, for a good reason. $()s are easily nested - $(cat $($2$(basename file1 txt))), for example; something that cannot be done with back-ticks, as the second back-tick would "close" the first one, and the command would fail, or do something unexpected. You can still use them, though - in single, non-nested substitutions (the most common kind), or as the innermost or outermost pair of the nested set - but if you use the new method exclusively, you'll always avoid that error.
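Here's a quick, runnable illustration of the difference (my example, not one from the scripts above):

# Nested command substitution with $( ) - clean and readable:
ls -ld $(dirname $(which bash))

# The same thing with back-ticks only works if you escape the inner pair:
ls -ld `dirname \`which bash\``

Both lines list the directory that holds your bash binary, but the second one is an open invitation to quoting mistakes.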
So, let's see what we have so far, with whitespace added for readability and the line numbers removed (hey, an actual script!):
#!/bin/bash
# "bkup" - copies specified files to the user's ~/Backup # directory after checking for name conflicts.
a=$(date +%T-%d_%m_%Y) cp -i $1 ~/Backup/$1.$a
Oh, one last thing; another "Unix FAQ". Should you try to execute your newly-created script by typing
bkup

at the prompt, you'll get this familiar reproof:

bash: bkup: command not found

-- "HEY! Didn't we just sweat, and labor, and work hard... What happened?"
Unlike DOS, the execution of commands and scripts in the current directory is disabled by default - as a security feature. Imagine what would happen if someone created a script called "ls", containing "rm -rf *" ("erase everything") in your home directory and you typed "ls"! If the current directory (".") came before "/bin" in your PATH variable, you'd be in a sorry state indeed...
Due to this, and a number of similar "exploits" that can be pulled off, you have to specify the path to all executables that you wish to run there - a wise restriction. You can also move your script into a directory that is in your path, once you're done tinkering with it; "/usr/local/bin" is a good candidate for this (Hint: type "echo $PATH" to see which directories are listed).
Meanwhile, in order to execute it, simply type
./bkup file.txt
- the "./" just says that the file to be run is in the current directory. Use "~/", instead, if you're calling it from anywhere else;
the point here is that you have to give a complete path to the executable, since it is not in any of the directories listed in your
PATH variable.
This assumes, of course, that you have a file in your current directory called "file.txt", and that you have created a subdirectory
called "Backup" in your home directory. Otherwise, you'll get an error.
REVIEW
In this article, we've looked at some of the basics involved in creating a shell script, as well as some specifics: making the file executable with "chmod +x", the "#!/bin/bash" line and the child process that actually runs the script, the positional parameters ($0, $1, $@, $#), command substitution with $(...), the "-i" sanity check on `cp', and why you need "./" (or a directory in your PATH) to run the script at all.
WRAP-UP
Well, that's a good bit of information for a start. Play with it, experiment; shell scripting is a large part of the fun and power of
Linux. Next month, we'll talk about error checking - the things your script should do if the person using it makes an error in syntax, for example - as well as getting into loops and conditional execution, and maybe dealing with a few of the "power tools" that are commonly used in shell scripts.
Please feel free to send me suggestions for any corrections or improvements, as well as your own favorite shell-scripting tips or any really neat scripting tricks you've discovered; just like anyone whose ego hasn't swamped their good sense, I consider myself a student, always ready to learn something new. If I use any of your material, you will be credited.
Until then -
Happy Linuxing!
"man" pages for 'bash', 'cp', 'chmod'
``Not me, guy. I read the Bash man page each day like a Jehovah's Witness reads the Bible. No wait, the Bash man page IS the bible.
Excuse me...''
-- More on confusing aliases, taken from comp.os.linux.misc
In the last issue, your humble Editor asked if anybody wanted to send in any artwork to jazz up the Gazette. I received two entries.
"Linux Total World Domination 2005" - John Hinsley <>
Penguin created using xpaint - Rick Smith <>
The design of compilers/interpreters is a challenging field - one which offers a lot of scope for theoretical exploration as well as hands on coding. Being a Python fan, I tried to implement some of the ideas which I am learning about compilers/interpreters in this beautiful language. As I am neither a Python Guru nor a compiler expert, the implementation may be imperfect. But it was certainly lots of fun!
Let's take simple arithmetic expressions like these:

1+2*3-4
1/2+3-4/5
.....

We will start with a program which will read an expression of this form and evaluate it directly. We will then modify this program to generate a data structure called a parse tree, which can then be evaluated by recursive algorithms. The next step is to generate instructions for a virtual machine using this parse tree. The last step is to store these virtual machine instructions on disk and run them with an interpreter when required.
Programming languages are often described using a compact and powerful notation called a Context-free Grammar. The grammar describes a set of substitutions. Here is a grammar for arithmetic expressions:
E     ::= T { ADDOP T }
T     ::= F { MULOP F }
F     ::= 0 | 1 | 2 | 3 | .....
ADDOP ::= + | -
MULOP ::= * | /

Assume that E stands for expression, T stands for term and F stands for factor. The curly braces denote 'zero or more repetitions'. Reading the first production, we would say that "an expression is a term, followed by zero or more repetitions of the combination of an adding operator and a term." The third production says that a factor is either 0 or 1 or 2 or 3 and so on, i.e., any of the non-negative integers. It takes some time to get used to esoteric definitions like this, but if we have a basic understanding of recursive structures, it is not very difficult.
Here is the source for a simple expression evaluator in Python. (text version)
#--------------------A simple expression evaluator---------------#

import re, string

Inputbuf = []

# A token is either a number or an operator symbol.
# The main program reads a line from the input and
# stores it in an array called Inputbuf. The function
# gettoken() returns individual tokens from this array.

def gettoken():
    global Inputbuf
    p = re.search('^\W*[\+\-\*/]|^\W*[0-9]+', Inputbuf)
    token = p.string[p.regs[0][0]:p.regs[0][1]]
    token = string.strip(token)
    if token not in ['+', '-', '*', '/']:
        token = int(token)
    Inputbuf = Inputbuf[p.regs[0][1]:]
    return token

# lookahead() peeks into the input stream and tells you what
# the next input token is

def lookahead():
    global Inputbuf
    try:
        p = re.search('^\W*[\+\-\*/]|^\W*[0-9]+', Inputbuf)
        token = p.string[p.regs[0][0]:p.regs[0][1]]
        token = string.strip(token)
        if token not in ['+', '-', '*', '/']:
            token = int(token)
        return token
    except:
        return None

def factor():
    return gettoken()

def term():
    e1 = factor()
    tmp = lookahead()
    while (tmp in ['*', '/']):
        gettoken()
        if (tmp == '*'):
            e1 = e1 * factor()
        else:
            e1 = e1 / factor()
        tmp = lookahead()
    return e1

def expression():
    e1 = term()
    tmp = lookahead()
    while (tmp in ['+', '-']):
        gettoken()
        if (tmp == '+'):
            e1 = e1 + term()
        else:
            e1 = e1 - term()
        tmp = lookahead()
    return e1

def main():
    global Inputbuf
    Inputbuf = raw_input()
    print expression()

if __name__=='__main__':
    main()

It would be good to trace the execution of the above code for some simple expressions.
The above program simply evaluates the given infix arithmetic expression. We are now going to modify it to produce a parse tree instead. A parse tree for the expression 1+2*3 would look like this:
      +
     / \
    /   \
   1     *
        / \
       /   \
      2     3

Each node of the tree consists of the following fields: an operator (op) or a number (number), plus pointers to the left and right subtrees (left and right).
#--------------------Produce a parse tree---------------------#

# gettoken() and lookahead() are same as in the first listing

NULL = 0

import re, string

Inputbuf = []

class Tree:
    pass

def factor():
    newnode = Tree()
    newnode.number = gettoken()
    newnode.left = newnode.right = 0
    return newnode

def term():
    left = factor()
    tmp = lookahead()
    while (tmp in ['*', '/']):
        gettoken()
        right = factor()
        newnode = Tree()
        newnode.op = tmp
        newnode.left = left
        newnode.right = right
        left = newnode
        tmp = lookahead()
    return left

def expression():
    left = term()
    tmp = lookahead()
    while (tmp in ['+', '-']):
        gettoken()
        right = term()
        newnode = Tree()
        newnode.op = tmp
        newnode.left = left
        newnode.right = right
        left = newnode
        tmp = lookahead()
    return left

def treeprint(ptree):
    if (ptree):
        try:
            print ptree.op
        except:
            print ptree.number
        treeprint(ptree.left)
        treeprint(ptree.right)

def main():
    global Inputbuf
    Inputbuf = raw_input()
    ptree = expression()
    return ptree

if __name__=='__main__':
    ptree = main()
    treeprint(ptree)
The parse tree which we have created can be easily evaluated by writing a recursive function. But we will adopt a different method. We will generate code for evaluating expressions in the instruction set of a simple hypothetical machine called a 'stack machine'. The instructions which this machine has are very simple - push a number on to the stack, add two numbers, multiply two numbers etc. Thus, evaluation of the expression 1+2*3 yields the following code:
push 1
push 2
push 3
mul
add

These instructions are stored in an array. Push, mul, add etc. are functions. The instructions may be executed directly by walking through the array and calling the function held by each array element, or they may be stored in a disk file (an easy way is to use the Python pickle module, though it is a waste of space). Another program may then read this code into an array and execute it. The code which I have written works like this: if you run the program without any filename argument, it reads an expression from the keyboard, generates code for the virtual machine in an array and executes it by walking through the array. The code is also stored in a file called 'code.out'. Now if you run the program with the file name argument code.out, it loads the instructions from the file and executes them, without reading from the keyboard.
import re, string, sys, pickle

# Functions not included herein should be copied from the previous listings.

NULL = 0
Inputbuf = []
NCODE = 100
NSTACK = 100
Code = []
Stack = [0] * NSTACK
Pc = 0
Stackp = 0

class Tree:
    pass

class CodeItem:
    pass

def initcode():
    global Code
    for i in range(0, NCODE):
        t = CodeItem()
        Code.append(t)

def pushop():
    global Stack, Stackp, Code, Pc
    Stack[Stackp] = Code[Pc].number
    Stackp = Stackp + 1
    Pc = Pc + 1

def addop():
    global Stack, Stackp, Code, Pc
    Stackp = Stackp - 1
    right = Stack[Stackp]
    Stackp = Stackp - 1
    left = Stack[Stackp]
    Stack[Stackp] = left + right
    Stackp = Stackp + 1

# define subop, mulop and divop here.

def generate(codep, ptree):
    try:
        # if the field 'number' is not present, the
        # following line generates an exception.
        n = ptree.number
        Code[codep].op = pushop
        codep = codep + 1
        Code[codep].number = n
        codep = codep + 1
        return codep
    except:
        if (ptree.op == '+'):
            codep = generate(codep, ptree.left)
            codep = generate(codep, ptree.right)
            Code[codep].op = addop
            codep = codep + 1
            return codep
        # elif (ptree.op == '-'): We will write the code
        # generation actions for '-', '*', '/' here.

def eval(ptree):
    # Generate the instructions, then execute them
    global Pc, Stackp, Code, Stack
    Pc = generate(0, ptree)
    Code[Pc].op = NULL
    Stackp = 0
    Pc = 0
    while Code[Pc].op != NULL:
        tmp = Pc
        Pc = Pc + 1
        Code[tmp].op()
    return Stack[0]

def eval2():
    # Directly execute the loaded code
    global Pc, Stackp, Code, Stack
    Stackp = 0
    Pc = 0
    while Code[Pc].op != NULL:
        tmp = Pc
        Pc = Pc + 1
        Code[tmp].op()
    return Stack[0]

def main():
    global Inputbuf, Code
    try:
        f = open(sys.argv[1])
        Code = pickle.load(f)
        f.close()
        result = eval2()
        print 'result is:', result
        return result
    except:
        print 'Not opening code file, reading from k/b'
        initcode()
        Inputbuf = raw_input()
        ptree = expression()
        result = eval(ptree)
        f = open('code.out', 'w')
        pickle.dump(Code, f)
        print 'Code dumped in a file called code.out'
        print 'result is:', result
        return result

if __name__=='__main__':
    result = main()

'generate()' and 'eval()' are the critical functions. 'generate()' walks through the expression tree, creating the virtual machine code and storing it in an array 'Code'. 'eval()' walks through the array 'Code', executing the instructions and using an array 'Stack' to hold the partial results.
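Assuming you save the combined listing as, say, vmcalc.py (the file name is my choice, not the author's), a session would look roughly like this:

$ python vmcalc.py
Not opening code file, reading from k/b
1+2*3
Code dumped in a file called code.out
result is: 7
$ python vmcalc.py code.out
result is: 7

The second run never touches the keyboard: it just unpickles the array of instructions from code.out and replays them on the stack machine.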
It is possible to extend the above program to handle variables and assignment statements, control flow constructs like gotos, if statements etc. Soon, you would be building a simple Basic-like language.
Coming from a C background, Python's lack of certain C constructs like the ++ operator is a minor irritation. The lack of compile time type declarations also seems to have some detrimental effects upon code readability. Also, you will pay dearly for any typo. If you have a variable 'f' of type 'struct foo' and 'foo' does not have a field called 'next', an assignment to 'f.next' will generate a compile time error in C whereas the Python interpreter would gladly allow the assignment to go through.
Luis is a 17-year-old student in Sao Paulo and a volunteer in this section at OLinux.
Next month Linux Gazette will have a new Editor: Jason Kroll. Jason is the Technical Editor for Linux Journal, so you may know him from his product reviews and Stupid Programming Tricks column. He is a strong supporter of and activist for free software, and I predict he is going to have a lot of fun running the Gazette, especially considering the enthusiasm of its readers and contributors, which I have remarked on before.
I will continue to work on technical aspects of SSC's web sites, which means writing more web applications in Python, such as the LG Discussion forums unveiled last month. (What do you guys think of them, by the way?)
Next month, Jason and I will be working together to produce the May issue. The following month, he'll be on his own, because I'll be on VACATION in England and Scotland, and not turning on a computer at all if I can help it!
The following quote was sent to me by a coworker.
The classically minded among us may have noted a new TV ad for Microsoft's Internet Explorer e-mail program which uses the musical theme of the "Confutatis Maledictis" from Mozart's Requiem.

"Where do you want to go today?" is the cheery line on the screen. Meanwhile, the chorus sings "Confutatis maledictis, flammis acribus addictis."
This translates to "The damned and accursed are convicted to the flames of hell."
Good to know that Microsoft has done its research.
Thanks for sending in your articles and 2-cent tips. Remember: don't just use Linux this month, have fun with it!
Michael Orr
Editor, Linux Gazette,