Table of Contents

New feature: TALKBACKS! (This does not include the columns--"The Mailbag", "News Bytes", "The Answer Guy", "More 2-Cent Tips" and the Back Page--because these consist of many unrelated topics in one article. We are exploring what kind of discussion forum would be most appropriate for them.)
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag!

Write the Gazette at
Contents:
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to [email protected]. Answers that are copied to LG will be printed in the next issue in the Tips column.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Tue, 01 Feb 2000 12:16:28 -0200
From: Clovis Sena <>
Subject: help in printing numbered pages!
Hi,
I usually print a lot of documentation. One thing I would like is for my print jobs to come out with numbered pages, so that at the bottom of each page we could see "page 1/xx", etc. I looked for a while for information on how to set this up, but couldn't find any. The printtool just doesn't do it. Maybe I need to create a filter, but what commands must I use to make this happen?
Another thing that would be nice is if we could also print the location of the document, or the URL if it's a web page, like /usr/doc/html/mydoc.html or http://www.linuxjournal.com/article/example.html. Could this be done?
Any help will be welcome. Thanks for reading this.
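[Not a full answer, but one building block worth knowing: pr(1), which is on every Linux system, paginates text and stamps each page with a header holding the date, a title of your choosing (the document's path or URL fits nicely there, as you ask) and "Page N". It does not produce the "page 1/xx" form; a tool like enscript, or a custom filter built around one of these, would be needed for that. A minimal, illustrative run (the paths are made up for the demonstration):

```shell
# Create a throwaway document, then paginate it. The -h argument
# is free text, so the file's location or a URL can go there.
printf 'some documentation text\n' > /tmp/mydoc.txt
pr -h "/usr/doc/html/mydoc.html" /tmp/mydoc.txt

# To actually print, pipe through the spooler instead:
#   pr -h "/usr/doc/html/mydoc.html" /tmp/mydoc.txt | lpr
```

The same pipeline can be dropped into a printtool filter script so every text job gets headers automatically. -Ed.]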
Tue, 1 Feb 2000 16:37:07 +0200
From: Knobel Alex <>
Subject: Samba problem
Hi, and thanks for your help :0) Well, my English is "so bad, so forgive me". I have a Linux box with kernel 2.2.12-20 (Red Hat Linux), and I installed Samba from the same CD. I have read the Samba HOWTO and tried what it says, but I didn't understand how Windows should see the Linux box. From the Linux side I can see the network and see the shares, but NT doesn't see the Linux box. In smb.conf I wrote the following:
[global]
    workgroup = staff     ; I have a group called "staff"
    netbios name = samba
    ...
Tue, 01 Feb 2000 11:46:50 -0500
From: Stan Wilburn <>
Subject: Modem problem with 5.2
I have recently set up Linux 5.2 on a PC at home. The setup went great with only one problem: I cannot get the modem to dial so that I can access the Internet. Looking through various web sites on Linux, I found that if your modem is on COM3 there is some additional setup required. Can you help me with this? I have mapped the COM3 port to /dev/cua2, and according to the manual that is all I thought I had to do. I also ran statserial on this port and I get the correct information there as well. What could be wrong? I can hear the modem clicking like it is trying to dial, but it never dials the number. Thanks.
Tue, 01 Feb 2000 15:48:28 MST
From: Sipe <>
Subject: Linux on IBM Aptiva S6H
I have an IBM Aptiva S6H (214) with an IBM MM75 Multimedia monitor. I cannot seem to get my X server (XFree86 -- current version #.#.5) working with Mandrake 6.1. I have an onboard ATI Mach64 3D Rage Pro card as well as a Voodoo2, which supports rather high resolutions. I've set custom frequencies (horiz: 69kHz, vert: 120Hz) and picked my card from the card database. Still, when I run X, it says the monitor resolution is not supported and no monitor found, specifically "fatal server error" "no monitor". If somebody could help me out I would REALLY appreciate it! If you need more information about my computer or versions, please email me ([email protected]). I am desperate for help; I've tried IRC (DALnet and EFnet #linux), friends, and checked many forums.
Wed, 2 Feb 2000 01:46:20 -0500
From: Anthony H. Downey, Jr. <>
Subject: IP Masquerade Connection Problems
I've been running Linux for a few years now, using mostly RedHat distributions. I'm currently running RedHat 6.1. Recently, I decided to connect my small office (4 workstation nodes and 2 servers) to the internet via IP Masquerading rather than having each workstation dial directly (as was done previously).
IP Masquerading works fine, save for one small problem that's turning into a real annoyance. I'm using PAP authentication with the ISP. Whenever I initiate a connection, the modem dials, waits until whatever timeout is specified, then hangs up without ever achieving a complete connection. The two modems handshake, but the login to the ISP is never successfully completed. Since I've specified for it to reconnect on a line drop, it immediately redials and then successfully connects and logs in on the second try.
This is 99.9% consistent, meaning that it nearly always requires two dial attempts to get a successful connection. On rare occasions, it will connect on the first try, but this is apparently only after I've just downed the interface within the past few minutes. I don't have it configured for demand dial at this point -- I normally bring the link up and down manually via ifup ppp0 and ifdown ppp0. Eventually this will be set to demand dial once I'm confident of the configuration.
I suspected the ISP at first, but then I tried a Mandrake 6.5 distribution on the same equipment (with essentially the same config files), and it connects the first time every time. I'd much rather use RedHat 6.1, however, for various reasons. I've even tried a few different modems, both 56K and K56Flex, and the results are the same.
Does anyone have any ideas as to why this would be happening and how to correct it?
TIA.
Wed, 02 Feb 2000 11:15:32 -0500
From: John Sanabria <>
Subject: Tip & Tricks
Hello:
I currently work with the .edu.co domain at Universidad de los Andes, Bogotá, Colombia. I want to know what to do so that when people enter the URL uniandes.edu.co, the browser goes to www.uniandes.edu.co. E.g., I can type the URL linuxtoday.com, and my browser shows me www.linuxtoday.com.
Where do I modify my DNS entry in order to obtain this behavior?
Thanks in advance.
PS: Sorry for my poor english.
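[A sketch of the usual fix, offered here rather than quoted from any published reply: give the bare domain an address (A) record at the zone apex, so uniandes.edu.co itself resolves to the web server. In BIND zone-file terms (the IP below is from the 192.0.2.0/24 documentation range, purely illustrative):

```
; fragment of the uniandes.edu.co zone file (illustrative address)
@       IN      A       192.0.2.10     ; the bare domain itself
www     IN      A       192.0.2.10     ; the usual web host
```

Browsers that turn linuxtoday.com into www.linuxtoday.com are applying a client-side guess; the apex record above is what makes the bare name work regardless of browser. -Ed.]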
Wed, 2 Feb 2000 17:41:31 -0500
From: Rob C <>
Subject:
Hi,
I was wondering what you have to do to get a Diamond SupraExpress 56i modem recognized and configured under Linux. I also need to set up a PPP connection on my Linux system to access the Internet. I am positive that it isn't a Winmodem.
I am a really new newbie at this, so I'll need lots of explaining here.
Thanks.
Thu, 03 Feb 2000 21:26:00 +0100
From: Ivo Naninck <>
Subject: Question to be published.
Hello LG,
Can you please put this in your next issue?
Does anyone have a hint on how to get a Linux DHCP client to obtain a lease from a DHCP server over a Token Ring network? I have already tried a LOT, believe me. Things like dhcpcd, dhclient and pump all fail to work. There has also been VERY little response on this matter from several related newsgroups.
--
Best regards, Ivo Naninck.
Neckties strangle clear thinking.
-- Lin Yutang
~
:wq!
Thu, 03 Feb 2000 23:08:31 +0100
From: JA <>
Subject: WindowsNT vs. Linux
I've made a comparison running the same Fortran program compiled with GNU g77 compiler with identical options.
The result is unexpected: the program ran 20 seconds under WinNT 4.0 and 27 seconds under Linux/Slackware 7.0.
Why?
Jacek Arkuszewski, Switzerland
Thu, 03 Feb 2000 22:34:56 -0500
From: BLOODICE <>
Subject: Linux & win98 internet connection sharing
I'm just wondering if it is possible to connect Linux to the Internet over a network through a Win98 machine using Internet Connection Sharing.
If there is a way, can you tell me? And if I can't do it that way, is there another way I can do it?
Also, do you happen to know of any cable-modem software (for Linux) that will run RoadRunner modems?
One more thing: are there any tutorials on my first question that I should look at?
Mon, 26 Apr 1999 14:23:10 +0530
From: Sreelal T S <>
Subject:
My OPL3-SAx 32-bit sound card was not supported by Red Hat Linux 6.1. I tried configuring it using sndconfig. Please send your responses. My address is [email protected].
Fri, 4 Feb 2000 17:43:38 -0600
From: Jeffrey T. Ownby <>
Subject: 5250 terminal for AS400 connection
I am adding a Linux box to a network consisting of several Win9X and NT machines that use either IBM Client Access or Rumba to connect to our AS400. Is there a similar program that can provide this terminal emulation on Linux? Any info appreciated!
Later,
Jeffro
Wed, 09 Feb 2000 09:18:14 +0100
From: Vipie - Your Leader <>
Subject: Multiple video cards
Hello,
I have a question: do you have any idea where I could find info about running multiple video cards and monitors under Linux? E.g., two SVGA cards, or an SVGA and a VGA card. And how should one configure these?
Wed, 9 Feb 2000 10:37:33 +0200
From: Mahdy <>
Subject: DOS boot disk that can log on to Linux
I know my question is silly, but I didn't find a solution on the web. I'm trying to make a DOS boot disk that can see my Linux box. On the Linux box I have installed Samba v2.0.5. From Win 95/98/NT I can see the Linux box, but from DOS I can't. I need this to restore Ghost files (disk images) to the workstations. Until now I have done this through the NT server, running NET.EXE on the client machine to make the connection. Is there an application for DOS that I can use, and that fits on one 1.44 MB floppy disk?
Thank's
Hijaze Mahdy
Wed, 09 Feb 2000 12:42:27 +0000
From: thierry.lamant <>
Subject: Configuring Xfree for i810 chipset
I have successfully introduced Linux and XFree86 (now 3.3.5) in my company for the development of aircraft simulators on PCs. I would now like to use it on my home computer, a Compaq Presario 5456, which includes an i810 chipset. I can see that XFree86 3.3.6 supports i810 motherboards, but I get nothing better than 640x480 when I try to install it (Win98 runs fine at 1024x768). SuperProbe cannot detect the hardware.
Is there something special to do? Shall I wait for XFree 4?
Thanks in advance for any help.
Best regards (and a thousand thanks for the useful Gazette)
Tue, 8 Feb 2000 23:05:15 -0200
From: pedroanisio <>
Subject: Cleaning lost+found?
I have been having problems with my filesystem lately. I have a lot of files in my lost+found, and I am having some trouble cleaning up those corrupted files:
b---rwS-wt  1 30291 357     88, 118  Dec  8  2023  #586
b--sr-S--t  1 27968 27968  115, 109  Apr 29  2032  #592
br-xrwS-wt  1 14963 25715  108,  97  Apr 12  2031  #683
br-xrw----  1 29555 21622  115, 115  May 10  2031  #713
br-S--S--x  1 26483 14641  116,  97  Apr 12  1998  #732
br-xr-xrw-  1 10604 24864  101, 114  Feb 13  1996  #741564
c---r-----  1  8224 25632   32,  32  Jan 29  2029  #741565
br-sr-xrwx  1 29806 24864  116,  32  Oct 15  2021  #741567
I tried rm -rf but it didn't work; it results in
rm: cannot unlink `file': Operation not permitted
what can I do? Thank You,
[When people have gotten weird permissions like that, it's often the sign of serious disk corruption. I would definitely back up any data you can right now, just in case the drive fails in the near future. Is the drive otherwise working OK? I would be inclined to buy a new drive and install a fresh copy of Linux on it, and then later decide whether to trust this drive enough to continue using it. Hard drives and power supplies are the most failure-prone components of any PC. Fortunately, nowadays hard drives are cheap to replace.

You can run "file" on the files to see what types they are, and "less" to see if they have any contents worth saving. -Ed.]
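[For concreteness, here is one way to carry out the file/less inspection, shown on a scratch directory standing in for /lost+found. The second half is an extra guess of mine, not a diagnosis: on ext2, "Operation not permitted" from rm is often the immutable attribute, which lsattr and chattr handle.

```shell
# Demonstration directory standing in for /lost+found.
dir=/tmp/lostfound-demo
mkdir -p "$dir"
printf 'recovered text\n' > "$dir/#586"

# Step 1: let file(1) guess what each entry is; anything reported
# as text can then be paged with less to see if it's worth saving.
for f in "$dir"/*; do
    file "$f"
done

# Step 2 (a guess, not part of the reply above): if rm still says
# "Operation not permitted", check for the ext2 immutable flag:
#   lsattr /lost+found/*            # an 'i' flag marks immutable
#   chattr -i '/lost+found/#586'    # clear it as root, then rm
```

-Ed.]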
Thu, 10 Feb 2000 15:07:36 -0800
From: Chris Dumont <>
Subject: linuxconf
I've been trying to use linuxconf (ver 1.14 on a RH 6.0 install) and have now screwed it up on my computers at work and at home in the same way. When I'm ready to leave linuxconf, it wants to "sync" the configuration with the system. Every time I run linuxconf now, it says that it must execute "/etc/rc.d/rc5.d/S85gpm start". And then, even after it does so, it still says it's not in sync and wants to do it again. I guess it thinks it must start the mouse even though it's already running.
How do I fix this?
A possibly significant piece of information is that in both cases when I originally installed Red Hat I chose the wrong mouse driver and had to play around to find the right one.
Fri, 11 Feb 2000 22:43:26 +0100
From: Daniel Lüscher <>
Subject: Linux-Mandrake 7.0
Hello! What about Linux-Mandrake 7.0? It would be worth an article or even an interview with the developers.
Greetz Pat
Sat, 12 Feb 2000 14:14:40 +1000
From: Tony Smith <>
Subject: Help wanted etc.
Hi
This little problem has been driving me nuts for two days now, causing me to pull my hair out. I sure hope someone in the LG community can help.
I have been using a Linux server based on RedHat 5.2 for some time to connect to the Internet and provide masquerade for a bunch of Windows boxes on a private network behind it.
Today I decided to redo the server to incorporate a few more sophisticated things like DNS, DHCP and HTTP serving for a prototype intranet.
All went very well with one small hiccup. There seems to be an undocumented change in the behavior of ppp between RedHat 6.1 and 5.2.
Using 5.2, invoking "ifup ppp1" results in the server making one and only one attempt to connect to the Internet.
This is exactly what I want
Using 6.1, if the dialout attempt fails, it simply loops and tries again and again until it either connects or is stopped.
I don't want this.
I have compared all the relevant (I hope) scripts between 5.2 and 6.1 but I cannot find a difference that would account for the changed behavior.
I have the awful feeling I've missed something important (it's been a long day), could someone please tell me what it is?
Sun, 13 Feb 2000 04:59:29 +0200
From: FreeDoM <>
Subject: ZOLTRIX modem
Hi, my name is Ozgur. I use Red Hat Linux, but I can't use my modem: I have a 56K Zoltrix (Rockwell chipset) Winmodem, and this modem doesn't run under Linux. Could you help me? If you have a file for this problem, please send it to me!
Thanks for helping.
Sun, 13 Feb 2000 22:01:33 -0600
From: Kim Updike <>
Subject: Inexpensive, powerful db's for Linux?
To develop a distributed database application that runs on Linux, what inexpensive, powerful databases might work best?
E-mailiing me at [email protected] with any advice would be much appreciated. -Kim
Sun, 13 Feb 2000 10:53:21 -0500
From: Art <>
Subject: Drivers
I have a question. Where can I download an updated driver for a "Diamond Monster 3D" Voodoo card? The present software I have is version 1.08 and DOS 6. My OS is Win98 Second Edition, with 64 MB RAM and a Celeron 300 with an Intel 740 video card. I thank you in advance for your reply. Art
Tue, 15 Feb 2000 08:14:24 -0500
From: William Aycock <>
Subject: Window Managers and Window App Development
[There's an lg-announce mailing list. See the Linux Gazette FAQ at http://www.linuxgazette.com/lg_faq.html

KDE and GNOME are graphical "environments" (that is, a set of custom applications, widgets and rules that all have a common "look and feel"). A window manager is just one aspect of this. In my opinion, KDE is currently "ahead". But I don't tell people, "Use KDE, it's #1." I use KDE at work, but only because it's the default, and I don't use 90% of its features. At home I use fvwm2.
Some people want to see Linux have a single standard desktop. However, I think most people are glad there's a choice, because different people want different things. Indeed, with some people running Linux for games (i.e., they need a 3D video card and fast drivers) and other people running it on 486s as servers, different people have very different software needs. KDE and GNOME tend to appeal to people who want a standard look and feel a la Windows and the Mac. But those who hack with the older programming tools are never going to give them up. As the geek joke goes, "Who needs a GUI when you've got Emacs?"
February's Linux Journal had an in-depth look at KDE and at GNOME. Both articles are on-line at http://www.linuxjournal.com/lj-issues/issue70. -Ed.]
Wed, 16 Feb 2000 20:45:06 +0100
From: Dennis Johansen <>
Subject: QuakeWorld server as a daemon
I want to set up my Red Hat Linux 6.1 box as a QuakeWorld server on my LAN. It's a K6-200 with 64 MB RAM.
It works fine when I start it, but how do I run it as a daemon?
Please help.
Best Regards
Fri, 18 Feb 2000 08:47:02 +0200
From: Halil Cem Tonguc 998027 <>
Subject: I NEED PIII PATCH.
Where can I find a patch for the PIII?
Fri, 18 Feb 2000 23:46:14 -0800
From: wesley <>
Subject: make virtuald
I am trying to compile virtuald using "make virtuald". Here is the error I get: "Makefile:14: *** missing separator. Stop."
I did a cut and paste of the code from http://www.linuxdoc.org/HOWTO/Virtual-Services-HOWTO-3.html in the section 3.4 Source then used ftp to put it on the server in order to compile it.
what am I missing??
thanks for your time and help
Wes
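[A guess at the cause, since cut-and-paste from a web page is involved: make insists that each command line in a rule begin with a literal TAB character, and browsers usually paste tabs as spaces, which produces exactly "missing separator". A quick check, with an illustrative rule (the file names and compile line are made up):

```shell
# Write a rule whose command line begins with a real TAB
# (printf's \t makes the character explicit):
printf 'virtuald: virtuald.c\n\tgcc -o virtuald virtuald.c\n' > /tmp/Makefile.demo

# cat -A renders tabs as ^I, so a healthy rule line shows one;
# a broken paste shows leading spaces instead.
cat -A /tmp/Makefile.demo | grep '\^I'
```

If your pasted Makefile shows leading spaces rather than ^I on line 14, retype that indent as a TAB. -Ed.]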
Sat, 19 Feb 2000 23:53:46 +0000
From: Ari Hyvölä <>
Subject: pppd daemon dies
Hello, I am a beginner in the Linux world and would like to test Linux's ISDN abilities. I have an ASUSCOM 128 modem connected to the RS-232 port. The ISP, IP and DNS are correctly set up, but kppp hangs up while logging onto the network with the error message "pppd daemon died unexpectedly", and the log says "cannot open logfile". The modem tests run through. I have Corel Linux 2.2.12, and the kernel should include the necessary ISDN and PPP options. The lock option has been turned off. I appreciate any help -- I could not find support on Corel's pages.
Sun, 20 Feb 2000 14:19:25 +0100
From: Santiago Cepas <>
Subject: Memory undetected
Hello there.
I've got some problems with my RAM: Linux only detects 64 MB when I have 128. My specs: SuSE Linux 6.3, kernel 2.2.13, running on an AMD K6-2 350 box. The weirdest thing is that the whole 128 MB is in the same DIMM slot. The BIOS seems to detect 128 MB, and so does that other operating system which I don't want to name. Could anyone help me here?
Thanks
[Linux cannot auto-detect memory above 64MB. You have to tell it explicitly how much memory you have at the boot prompt, or put append = "mem=128M" in your /etc/lilo.conf file and re-run lilo (if you're using lilo; see the lilo documentation).

The problem is that the BIOS detects the extra memory but doesn't tell Linux about it. It's a limitation in the PC BIOS design. I don't know how that other OS probes for memory.
If you ever reduce the amount of memory you have, remember to tell lilo first. Otherwise, the system will segfault at boot time when it tries to use memory that isn't there. You can always type "mem=64M" at the boot prompt to override what lilo thinks.
Information from Linus himself about this situation is in the BootPrompt-HOWTO. -Ed.]
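[For concreteness, here is roughly how the append line sits in /etc/lilo.conf for the 128 MB in this letter; the image and root entries are illustrative, so keep your own. Re-run /sbin/lilo afterwards so the change takes effect:

```
# /etc/lilo.conf (fragment)
image = /boot/vmlinuz
    label = linux
    root = /dev/hda1          # illustrative; use your own root partition
    append = "mem=128M"       # tell the kernel about all 128 MB
```

-Ed.]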
Mon, 31 Jan 2000 22:33:26 +0000
From: T.J. Rowe <>
Subject: glibc compilation problem
Hello everyone,
I'm building glibc 2.1.2 from the source tarball for use in building a system from the ground up (to get rid of all those statically linked programs I've started with). Well, when compiling either 2.1.2 or 2.1.1, I get the following error and the compilation stops:
common/db_appinit.c: In function `__db_appname':
common/db_appinit.c:479: fixed or forbidden register 0 (ax) was spilled for class AREG.
common/db_appinit.c:479: This may be due to a compiler bug or to impossible asm
common/db_appinit.c:479: statements or clauses.
common/db_appinit.c:479: This is the instruction:
(insn 902 901 903 (parallel[
    (set (reg:SI 2 %ecx)
        (unspec:SI[
            (mem:BLK (reg:SI 5 %edi) 0)
            (const_int 0 [0x0])
            (const_int 1 [0x1])
        ] 0))
    (clobber (reg:SI 5 %edi))
] ) 424 {strlensi+1} (insn_list 901 (nil))
    (expr_list:REG_UNUSED (reg:SI 5 %edi)
        (nil)))
make[1]: *** [db_appinit.os] Error 1
make: *** [db2/others] Error 2
I have pgcc 2.95.2, which I believe fills the requirement for gcc. I'm clueless as to what else can be causing this. Any ideas?
btw, I come up with some other undefined function error when trying to compile glibc 2.0.7pre6, and I don't want to use the older glibc, anyway.
Someone has to know what causes these kinds of errors, right? :)
Mon, 31 Jan 2000 22:40:13 +0000
From: T.J. Rowe <>
Subject: bash problem
I have an interesting problem with my bash prompt. The upper-case K and T letters do not echo to the screen, and when I type them it beeps at me. Now, before jumping to conclusions, let me describe what I've done so far. This problem only occurs in bash and bash2, not in tcsh, zsh, csh, etc. I've recompiled and reinstalled bash several times from both the source tarballs and source RPMs. I've replaced everything terminal-related that I can think of, such as gettydefs and libtermcap. I've looked at my keymaps repeatedly, and there doesn't seem to be a problem there--and again, it only happens in bash. It has to be something broken which I haven't replaced yet, but I'm not sure what is left. The K and T letters work fine in programs spawned from bash; it's just the prompt itself. This one has just about everyone I know confused. Hopefully someone out there can help or has heard of this odd problem. If anyone would like more details or has any information, please contact me.
Thanks, ~T.J. [email protected]
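[One more thing worth ruling out, since of the shells listed only bash uses GNU readline for its prompt: a stray key binding or macro in /etc/inputrc or ~/.inputrc could swallow exactly those keys. This is a guess on my part, not a diagnosis. A quick check:

```shell
# Look for any explicit bindings of capital K or T in the readline
# init files; normally no such lines exist and both keys simply
# fall through to self-insert.
grep -n '"[KT]"' ~/.inputrc /etc/inputrc 2>/dev/null \
    || echo "no explicit K/T bindings found"
```

If a binding turns up, comment it out and start a fresh bash to test. -Ed.]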
Wed, 23 Feb 2000 14:45:47 -0500
From: LabRDist <>
Subject: bootup disk?
I have a Toshiba 415CS and I formatted the hard drive. I don't have the Toshiba companion disk, nor did I make a boot disk before I formatted it. I need to find a way to get into the computer to activate the D: drive to install Windows. I just get an error every time I start, saying to insert the boot disk. Any help would be appreciated.
Wed, 23 Feb 2000 21:29:29 -0800 (PST)
From: napolean <>
Subject: question about linux 5.2 deluxe
I installed Linux and then it said it was completed; then I got a prompt saying:
[bonethugs@localhost bonethugs]$

After entering the password and login name, what do I do? And I have one more question: can you have Linux and Windows 98 on one hard drive without it being partitioned in half? Please e-mail me back. Thank you very much.
Thu, 24 Feb 2000 10:22:52 -0000
From: Ivica Glavocic <>
Subject: unable to install
Hi
I don't know if this is the proper way to contact you, but this is the address I found searching through Internet for solution for my problem:
The PC is a PII-233 (LX) with 128 MB DIMM SDRAM, a 6.4 GB WD HDD, a Diamond SpeedStar 50 AGP (4MB) VGA, a Genius LAN card, a Philips 40x CD, and a PS/2 keyboard and mouse. On it I have installed NT Workstation 4.0 on the first partition (4 GB), formatted as NTFS. NT is working fine.
Now, I wanted to put Red Hat Linux 6.1 on a second partition. First I checked the HCL for my PC: OK. On the Internet I found documents on how to boot Linux using the NT Loader, and everything seemed fine until I started the actual installation, booting from CD. It loaded the X graphical display, gave me a choice of language and keyboard, and then Disk Druid started. I could see my NTFS partition as HPFS, but I knew this would happen since I had read about it beforehand. So out of the ~2 GB of unused space I created swap (129 MB) and root (2 GB) partitions, and after I pressed Next, the error "error opening security file ..." was displayed, and termination and kill signals stopped further installation.
Now, I know this is not a fatal error (security...), but what is the problem then? I tried changing the partitioning (added a /boot partition, many combinations) but nothing seemed to work. There is no explanation of WHY this happens, or on which module the installation fails. Then I got a new boot diskette from Red Hat and tried with it, but the same thing happened. So I set some boot parameters like "mem=128M" and changed the keyboard type, but still nothing.
Maybe the installation log would help, but it goes by fast on my screen and I can't read it. Is there a way to save it to diskette, since I can't put it on the NTFS partition? Is this partition the problem somehow? It shouldn't be.
Please answer as soon as you can because it is really urgent for me to get this Linux going.
Thank you in advance
Ivica Glavocic
Thu, 24 Feb 2000 23:52:17 +0100
From: jacek czerwinski <>
Subject: users' information on my netware 4.11
Novell versions >= 5.x have an LDAP gateway to NDS. There are many LDAP clients (APIs from Mozilla, OpenLDAP, Netscape). Caldera has made NDS for Linux (a 3-user trial? I can't download and test it from my small city) (www.caldera.com). Novell said that in March/April it will release an open-sourced API for NDS based on OpenLDAP (www.openldap.org). Maybe the LDAP gateway for NDS (in a trial version?) is free for download from Novell? Write after your success ;-)
Thu, 3 Feb 2000 08:43:48 -0800
From: Linux Gazette <>
Subject: FAQs
The following questions received this month are answered in the Linux Gazette FAQ:
Sat, 5 Feb 2000 20:07:59 -0800 (PST)
From: Ron Bates <>
Subject: Linux in Grocery Stores...
Hey, I just wanted to tell you guys, I purchased the premiere issue of Maximum Linux magazine with a full install of Linux-Mandrake 6.0 at Pak N' Save in So. San Francisco. To think the day has come when you can purchase a copy of Linux at your local chain grocery store. It's a beautiful world.
Sun, 6 Feb 2000 23:14:00 -0500
From: Pierre Abbat <>
Subject: Gazette crashes Konqueror
I am trying to read the Gazette with kfm 1.167, and several pages crash it, including the Mailbag and 2¢ Tips. Can you help me figure out what's wrong? It's happened before.
phma
Fri, 11 Feb 2000 15:05:04 +0100
From: Richard Vanek (ETM) <>
Subject: current issue
Hi
I would like to always download the current issue of Linux Gazette to my Pilot. Do you have a general link that always points to the contents of the current issue?
The download is done by the AvantGo server, which prepares HTML pages for downloading to the Palm Pilot. I would like to set up a link where I can always find the new issue; AvantGo will look at it, and when it changes I will get the new pages on the Pilot.
The reason for wanting a link to the current issue is simple: if I point to your main page at one level of HTML depth, I will get all the issues, which is too much for a Palm Pilot's memory.
[I added a symbolic link http://www.linuxgazette.com/current pointing to the current issue. I also made index.html in each issue point to the Table of Contents page.

Readers, please do not bookmark anything through the "current" link! The bookmark will go dead at the end of the month or point to the wrong article. Instead, use the actual issue directory (e.g., issue51). -Ed.]
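[For readers curious what the "current" link amounts to on the server side, it is an ordinary symbolic link that gets repointed when each issue goes out, roughly as sketched below (paths are illustrative, run in a scratch directory):

```shell
# Stand-in for the web root, with one issue directory in it.
mkdir -p /tmp/lg-www/issue51
cd /tmp/lg-www

# Point "current" at the newest issue. -f allows repointing an
# existing link; -n replaces the link itself rather than
# descending into the old target directory.
ln -sfn issue51 current
readlink current
```

When the next issue appears, the same ln -sfn against issue52 repoints the URL with no other changes. -Ed.]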
Fri, 18 Feb 2000 08:35:58 -0800
From: Linux Gazette <>
Subject: Mirror site in India
Sankar wrote:
I am located in India, and keenly interested in hosting a mirror site which would actually be in India. Can you please let me know what would be approximate size required in the server for this?
The WWW directories are approximately 84 MB. Each issue will add 2-3 MB.
The size of the FTP files can be found at http://www.linuxgazette.com/ftpfiles.txt. The current size is 43 MB, plus 1-2 MB for each new issue.
18 Feb 2000 10:16:01 -0800
From: Michael Grier <>
Subject: WebGlimpse Blank Page Syndrome
It has been my experience that Netscape will not display a page if there is a missing closing </table>
tag somewhere, especially if there are nested tables. Try appending an extra one at the end of the table.
[You are correct. However, fixing this requires delving into the internals of WebGlimpse and Glimpse, which we do not have the resources to do. It also appears that Netscape is throwing away the bottom of the page when a large result set is returned, and that's where the </TABLE> tag is going. -Ed.]
Fri, 25 Feb 2000 10:46:34 -0500
From: Jeff Rose <>
Subject: Main LG Logo ...
Greetings,
Agreed, I am *not* any type of GIMP-guru [see _my_ horrible non-aliasing problematic logo] ... however, a suggestion:
The Linux Gazette Main Logo is, well, ugly!
Perhaps you could encourage the readership to submit their own logo creations for Linux Gazette and have fun while making an improvement at the same time?
Hope this sounds like a plan,
Jeff
--
( >- Jeff Rose - everyone's Linux User Group (eLUG) -< )
/~\   http://www.elug.org   mailto:[email protected]   /~\
| \)       *** Freelance Linux/IT Writer ***        (/ |
|_|_            eFAX: +1.630.604.4130              _|_|
[The staff and readers of LG are divided over whether it's time for a new logo. I myself am undecided. But I can do this: if you or any readers wish to submit artwork for the Gazette, I will put it in a "Linux Gazette Art" article so we can all see it. The submissions will then be available when I go looking for something to jazz up a page with.
This was how the "Penguin Reading the Gazette" image at right found a home on the LG FAQ page. We had the existing image from the days when Margie was the Editor of the Gazette. (She still retains the title "Ruler of the Gazette"--she refused to give me that title! :) I liked it, used it in one of my Not Linux columns, and then decided it would make a good image for the FAQ.
The LG logo is 600x256, and any new logo should probably be the same size. The Penguin Reading the Gazette is 161x160; most page decorations will want to be more or less that size. Page decorations are a lot more likely to be used in future Gazette issues than logos are, because there can be only one logo. All images should have a Linux Gazette theme in their content somehow.
With this in mind, send in some art! -Ed.]
Fri, 25 Feb 2000 13:07:44 -0500
From: Jason Fink <>
Subject: Pundit Here, Pundit There, Pundits Everywhere
Pundits. Had enough of that word yet? I know I have, and I also know there are a few of these so-called experts whose teeth I would like to knock out (I guess the web equivalent would be breaking their knuckles or something). In all honesty, since when did becoming an expert in one area of technology suddenly make one an absolute know-it-all about all technology? This has suddenly become the new thing to be: an industry pundit, a technology pundit, or whatever. A pundit, by the definition I refer to, is an accomplished expert in a particular discipline. This e-world of ours has suddenly given anyone who has the time and tenacity the opportunity to become a pundit. Those who have been in the industry a long time have received that label, and rightfully so, for excelling in a particular field. All of a sudden, however, many of these pundits are breaking away from their disciplines and yammering about others. I would rather they did not.
This is the group that really irritates me. They started out as people who knew a lot about something, and in some cases, they outright contributed to making something, setting a trend or engineering a brilliant new process. While these people should be (and have been) commended, after a period of time they suddenly become experts in other fields as well, but now they can forego validation. Why that is, I have no idea. One can relate it to actors/actresses or music stars who suddenly feel they should become involved with politics and that their ideas are right. While there is a difference of sorts, you can see the obvious parallel: I'm a somebody now so everything I say should be taken seriously.
This sort of abuse of journalistic power needs to stop; someone needs to start putting an end to it. The reason it needs to stop is not just ethics; it really has more to do (in my opinion) with quality. If a person is particularly known for having insightful columns on a given subject, chances are I will read them. The second they step into unknown territory and blunder, they lose credibility in my eyes. While I may be just one person, I highly doubt I am the only one who stops reading a writer's material because they blundered.
Another point many administrators have brought to my attention is the pointy-haired-boss syndrome. A Glassy-Eyed Suit (IS Department Head) reads a well-known, well-respected pundit's column. It must be true, right? Those who know no better cannot be protected by the sane. A pundit writing about something outside the sphere of his knowledge is in fact damaging the industry as a whole. Knuckleheads across the world will do a 180 based upon what some guy at some zine printed. Surely this is a good enough reason not to write arbitrarily about something.
Word has it, if you secretly embed the word Linux on Coke cans, Coke sales will skyrocket. Zines across the globe are printing Linux this and Linux that, all for the sake of advertising dollars. That, in itself, is disgraceful. It is bad enough that there are so many 2-bit sites in general. The fact that established journalists are using Linux to bring in the hits is repulsive.
So am I being hypocritical? In a sense, yes, because I really do not know much about journalism; then again, I am not a pundit, now am I? This trend has been going on for a long time now. It is my hope that, by seeing this, not only will you as a reader take the extra step to verify the information source, but perhaps those readers out there who are writers will think twice about their next article.
[The thing about Linux on Coke cans reminded me of something somebody said at the Python conference. He said a Linux Janitorial Service could go IPO just because it has the word "Linux" in it. Then the stockholders would be surprised to realize they were serious about the "Janitorial" part. -Ed.]
Contents:
The March issue of Linux Journal is on the newsstands now. This issue focuses on Linux training.
Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue71/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.
For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/
SAN JOSE, Calif., 7 February 2000 - Lynx Real-Time Systems, Inc., today announced the availability of BlueCat(tm) Linux, Release 1.0, Lynx' version of Linux for high-availability, high-reliability embedded applications. BlueCat Linux is part of the LynuxWorks(tm) suite that allows embedded systems development of both BlueCat Linux and the LynxOS(r) real-time operating system with a common compatible tool set.
BlueCat, a tested and stabilized Linux version based on Red Hat 6.1, is binary compatible with Linux applications, system and development tools and drivers to allow rapid, efficient embedded deployment. Lynx provides cross development support for BlueCat to a variety of popular embedded processors and includes an advanced test methodology to ensure quality and reliability.
"BlueCat and its development environment, LynuxWorks, has established Lynx as a key player in the embedded Linux market," said Rick Lehrbaum of LinuxDevices.com, the embedded Linux portal. "By taking the bold strategy of 'competing' against themselves in delivering BlueCat Linux, Lynx has provided a viable business model that will avoid fragmenting Linux. Real-time support is an important option for embedded developers who are choosing Linux as their operating system."
www.bluecat.com.
Corel Delivers Macromedia Flash to Linux
Bellevue, Wash., February 8, 2000 - GoAhead(R) Software, today announced that MontaVista Software Inc. is shipping GoAhead WebServer with their recently announced Hard Hat Linux(R), a Linux operating system for embedded applications. GoAhead WebServer is an open source, embedded Web server that provides a secure, flexible and free way to access remote devices and appliances via standard Internet Protocols.
GoAhead WebServer leverages embedded JavaScript, a strict subset of JavaScript optimized for small footprint environments. It is the only open source, embedded Web server that uses Active Server Pages (ASP), embedded JavaScript and in-memory CGI processing to deliver a highly efficient method of dynamic Web page creation. GoAhead WebServer also features a ROM packaging utility for Web pages.
GoAhead's web site is http://www.goahead.com. The source code for the GoAhead WebServer is at http://www.goahead.com/webserver/wsregister.htm. Support for GoAhead WebServer is available through a collaborative Usenet newsgroup, news://news.goahead.com, in which GoAhead is an active participant.
Information on Hard Hat Linux is on MontaVista's web site, http://www.mvista.com. (See the picture of a penguin wearing a yellow hard hat in the top left corner of this page.)
Linux for Windows has been released in the UK. It is based on Linux-Mandrake 6.1, and requires no partitioning or reformatting since it runs from a file within Windows. For details, see the Macmillan Software web site.
NEW YORK - (LinuxWorld); February 2, 2000 - Macmillan USA, the Place for Linux, (http://www.placeforlinux.com), announced it is shipping Complete Linux-Mandrake 7.0 for beginning and intermediate Linux users. Macmillan's new product simplifies the process of installing the Linux operating system for first-time users and includes in-depth user reference guides, technical support, and a complete suite of desktop productivity applications. Complete Linux-Mandrake 7.0 is available now at an MSRP of U.S. $29.95.
System Requirements for Complete Linux-Mandrake 7.0: CPU Intel Pentium
Red Hat Expands European Operations to France and Italy
A description of SPIRO-Linux is included in the Configure your network from a web cell phone announcement below.
Storm Linux 2000 Makes European Debut
SuSE has appointed Sysdeco Mimer as its first Business Partner in Sweden. Sysdeco Mimer will be providing their MIMER DBMS technology on the SuSE Linux distribution. MIMER is characterised by ease-of-use, high performance, scalability, openness and high availability. Among the thousands of MIMER users all over the world, Volvo, Swedish Telecom, Ericsson, Hammersmith Hospital (UK) and English Blood Authority can be mentioned. MIMER Personal Edition for Linux is available for free download from http://www.mimer.com/download.
LINUXWORLD EXPO 2000, NEW YORK, NY (February 2, 2000) - SCO and SuSE Linux AG today announced plans to deliver SCO's Tarantella web-enabling software to SuSE Linux customers worldwide. SCO will bring Tarantella web-enabling software to the SuSE Linux platform and work with SuSE to market the software to its customers.
TurboLinux Adds E-Commerce Suite to New Server
Linux Open Source Expo and Conference: March 7-10, 2000, Sydney, Australia, www.linuxexpo.com.au
Game Developers Conference: March 10-12, 2000, San Jose, CA, www.gdconf.com
Software Development Conference & Expo: March 19-24, 2000, San Jose, CA, www.sdexpo.com
Singapore Linux Conference (SLC): March 23-25, 2000, Singapore, www.slc.com.sg
Colorado Linux Info Quest: April 1, 2000, Denver, CO, thecliq.org
Montreal Linux Expo: April 10-12, 2000, Montreal, Canada, www.skyevents.com/EN/
Spring COMDEX: April 17-20, 2000, Chicago, IL, www.zdevents.com/comdex
HPC Linux 2000: Workshop on High-Performance Computing with Linux Platforms: May 14-17, 2000, Beijing, China, www.csis.hku.hk/~clwang/HPCLinux2000.html (in conjunction with HPC-ASIA 2000: The Fourth International Conference/Exhibition on High Performance Computing in Asia-Pacific Region)
Linux Canada: May 15-18, 2000, Toronto, Canada, www.linuxcanadaexpo.com
Hauppauge, NY - February 8, 2000 - BASCOM Global Internet Services, Inc. (BASCOM), pioneering developer of Linux-based thin server productivity solutions for small business, today announces an initiative in keeping with the collaborative spirit of the Open Source movement. BASCOM's Open Source Equipment Exchange (osee.bascom.org) is designed to match those donating computer equipment with those in the Open Source community needing to add infrastructure to further their development efforts and spur innovation toward the continued evolution of the Linux operating system.
The OSEE web site will link those donating equipment with those in the Open Source community in need of hardware resources. Since the Linux OS requires relatively modest "horsepower" to drive it, older class computers that have outlived their use in corporate environments will be given renewed life in the hands of Open Source developers.
Ken French, BScH, MSc
Consultant, IT Division
Multec Canada Ltd.
200 Ronson Drive, Suite 204
Telephone: (416) 244-2402 ext. 105
Email:
Web: www.multec.ca
MILFORD, CT - 11 February 2000 - Advanstar Communications, Inc., parent company of LINUX Canada and Advanstar Expositions Canada LTD, and Specialized Systems Consultants, Inc., publishers of Linux Journal, announced today a strategic alliance to create a series of global conferences and expositions for the Linux community.
The Linux event series launches in Canada, May 15 - 18, 2000 at the Metropolitan Toronto Convention Centre. The inaugural event, LINUX Canada, will consist of multiple conference sessions and an exhibit hall with leading-edge, Linux-related exhibitions. Three visionary keynote presentations are also confirmed, including: Robert Young, chairman & CEO of Red Hat, Tuesday, May 16; Linus Torvalds, creator of the Linux operating system, Wednesday, May 17; and, Ransom Love, title of Caldera Systems, Thursday, May 18. The event has already gained strong support from the Canadian Linux Users Exchange (CLUE) and the Toronto Linux Users Group (TLUG).
Portland, OR, February 15, 2000 - Representatives from Caldera Systems, GartnerGroup, Novell Corporation and Red Hat will debate "The Future of OS on Desktop PCs and Servers" on Friday, March 10th at Noon, announced System Builder Summit today. The panel discussion will take place in conjunction with System Builder Summit Spring 2000, being held March 8-11th at the Desert Springs Marriott Resort and Spa in Palm Desert, California.
In addition to the OS panel, the agenda for System Builder Summit Spring 2000 includes keynotes by Michael Tiemann, CTO of Red Hat; Jim Yasso, Vice President of Intel Corporation; and Dan Vivoli, Vice President of Marketing at nVIDIA. Through roundtable sessions, theater presentations and exhibits, more than 100 technology vendors will offer a preview of year 2000 roadmaps, products, services and programs that cover the entire technology spectrum from the desktop to the server to Internet-enabled computers and communication. Among them are 3COM Corporation, AMD, ATI Technologies, Canon Computer Systems, Conexant, D&H Distributing, Imation, Ingram Micro, Intel Corporation, Logitech, NASBA, Novell, nVIDIA, Quantum and Techworks. http://www.systembuildersummit.com
Unfortunately, due to a lack of funding, GetTux.com is ceasing business. This is a most unpleasant situation, and we are not thrilled to have to send out this notice, but, without major funding, we are unable to advertise, unable to increase staff, and are thus unable to advance as a company. We had meetings with several large investors, but, unfortunately, no one was able to invest in us, while at the same time leaving us control of the company and its direction. What a sad thing the business world can be.
As several of you know, the company was run, on a day to day basis, by me, Scott McDaniel. Website, promotions, product, research and all. We really needed a few other people, and, without that funding, we couldn't continue.
We certainly felt that we offered a unique and useful product, but, again, without the funding we hoped to receive, we were forced to discontinue service. Hopefully, the Linux community will thrive (of course it will!), and someone may indeed resurrect the idea of a monthly subscription service. I know that several of you are in Linux-related businesses, some of you in the print media, and others are part of Linux-related websites.
GetTux.com was formed in the hope that we could advance the use of Linux, and that ease of use would be promoted by allowing users updates to documentation and applications. We are fans of Linux. We are users of Linux. We are part of the community. And we are sorry we are unable to continue to serve you in this capacity.
In the event that anyone in the community would like to use the GetTux.com domain, or would like the product we were selling, business plan and all, please contact me personally, .
Anyway, back to the sad business: Current subscribers will be given a FULL refund. Consider the first month of your subscription a gift from other Linux aficionados. If you have a refund related question, please email .
Again, my apologies for any inconvenience, and, again, I wish to thank each and every customer that allowed us to serve them.
VA Linux Acquires Andover.Net for $800M
IBM Expands Linux Investment
VA Linux has expanded its SourceForge online development site to host KDE and the CMU Sphinx speech recognition system.
The Atlanta Linux Showcase has a call for papers. Submissions are due April 17 for the Extreme Linux Workshop, and May 1 for the Hack Linux/Use Linux Tracks.
eSoft and Gateway are partnering to advance Linux-based Internet software and services to small and mid-sized businesses.
Python and Tkinter Programming is a new book by John E. Grayson, published by Manning Publications.
www.whichrpm.com is a site that helps you find Linux software in RPM format. Over 35,000 software packages are indexed, allowing you to find, for instance, a package that lets you talk to your PalmPilot.
Using Linux in Embedded and Real-time System is a white paper on why Linux is ideal for this environment.
Extreme Programming was mentioned in Jason Steffler's article, but I want to mention it again because it's a cool site. It's "a gentle introduction to Extreme Programming", which is one of those strategies for getting product development synchronized with the client's expectations.
Metran Technologies has updated the HTML version of Using Samba, and placed it on a more reliable web server.
The Linux-Net Project, http://linux_net.tripod.com (temporary URL), seeks to modify Linux to allow your household computing devices to become intelligent "terminals". For instance, your refrigerator may notice that milk is running low and, being a terminal, ask the server to put "Milk" on the shopping list. Then you get the shopping list off of your PDA, which is also a terminal, and see that you should get milk. It could even be extended so that the server watches how much milk, or any other product, your family drinks and adjusts the amount you should get based on that.
COLUMBUS, OH, (February 1, 2000)-Progressive Systems, Inc., a leading provider of Linux-based network security solutions, and Cobalt Networks, Inc., a leading developer of server appliances, today announced a new firewall appliance solution based on Progressive's Phoenix Adaptive Firewall, the only firewall for Linux to be both ICSA and LinuxLabs certified, and Cobalt's award-winning Qube 2 server appliance. The combination will allow small-to medium-sized businesses, workgroups, branch offices, schools and government institutions greater flexibility in establishing network security policy and design.
With the Phoenix firewall on the Qube 2, Progressive extends both its relationship with Cobalt and its existing offering of Linux-based firewall appliances. The partnership with Cobalt allows Progressive to deliver security solutions designed to meet a range of customer needs and opens Cobalt's customer base to Progressive. The Phoenix firewall on the Qube 2 joins the existing Phoenix firewall on the Cobalt RaQ 2, a Linux-based firewall appliance that allows service providers and resellers to deliver security products and managed services to small-to-medium-sized businesses, branch offices, schools, and government institutions. The addition of the Phoenix on the Qube gives customers an easy-to-use, high functionality premise-based solution that satisfies the needs of individual businesses and institutions. Progressive and Cobalt enjoy synergies in their product and sales organizations, allowing the companies to leverage their respective channel and product strategies.
The Phoenix Qube will retail for $2,495 with unlimited users.
Boulder, CO - January 31, 2000 - Ecrix Corporation and the Linux Fund today announced OpenTape.org, a new nonprofit web site supporting the open source software movement. OpenTape.org offers users access to technical information about data backup hardware and software for the Linux operating system. All advertising and vendor participation revenues generated by the site go to the Linux Fund, a non-profit organization that supports Linux programmers with development grants and university scholarships.
LinuxLinks.com is proud to announce the launch of a free web based email service targeted specifically for the Linux community. This service allows users to send and receive email on any computer that offers a web browser. Now you can deal with email anywhere in the world in a discreet and secure manner whilst at the same time promoting the Linux cause.
Some of the features provided include: a [email protected] email address, lightning fast service, attachments, folders, address book, spam filters and POP mail retrieval. There are many preference options available too allowing users to enhance their messages with different font styles, sizes and colors. A built-in help facility is also provided.
To complement this rich feature set, we also provide message searches, personal signatures, aliases, and a personal profile, making a complete FREE web email service.
For more information and to sign up please visit http://www.firstlinux.net/
This service is the latest addition to the FirstLinux network, which currently comprises LinuxLinks.com and FirstLinux.com.
LINDON, UTAH-Jan. 18, 2000-Lineo, Inc, developer of embedded Linux system software; Elitegroup Computing, manufacturer of system board products; and Bast, Inc., provider of set-top hardware, today announced an agreement to use Lineo Embedix Linux 1.0 and Embedix Browser for Bast's line of embedded Linux-based set-top hardware. These devices will be installed in hotel rooms and apartment buildings in the United States, Europe and Asia. This Embedix-based Web browser is the first Internet appliance to provide hotel and apartment managers a low-cost, convenient way to give customers easy access to the Internet from an ordinary television set via a broadband network maintained by the property owner. Lineo:www.lineo.com
Elitegroup: www.ecsusa.com, www.ecs.com.tw
Bast: www.bastinc.com.
Silicon Automation Systems Limited today extended its Synapse range of xDSL products by introducing a G.lite solution for Linux. This is the world's first Linux based internal G.lite modem offering the end-user enhanced data rates over the existing copper lines.
Mr. Deepak Gupta, Vice President, SAS, said, "External ADSL modem solutions available in the market today are expensive and bulky. They are not suitable for the Internet appliance market, where cost and size are critical. Synapse G.lite for Linux is an extremely attractive solution for OEMs looking to provide ADSL access from their Linux-based appliances. Providing G.lite under Linux is a pioneering step towards mass deployment of ADSL and towards firmly establishing it as the technology of choice in the Internet access market," he added.
The Synapse G.lite for Linux modem provides data transfer rates of 1.5 Mb/s downstream and 512 Kb/s upstream while allowing simultaneous analog voice telephony. It is fully compliant with the ITU-T G.992.2 standard and has been tested for interoperability with equipment from most major DSLAM vendors. Moreover, the system's performance exceeds that mandated by the standards. www.sasi.com
Portland, OR, January 31, 2000 - Michael Tiemann, Chief Technical Officer (CTO) of Red Hat, will address system builders and technology vendors from North America and Europe at the Opening Reception of System Builder Summit on Wednesday evening, March 8th. A technical leader in the industry, Tiemann will provide strategic insight into the open source movement and the impact it will have on current industry and channel models. System Builder Summit takes place March 8-11th at the Desert Springs Marriott Resort and Spa, in Palm Desert, California.
Tiemann was named Chief Technical Officer of Red Hat on January 12th of this year. Prior to this, he was co-founder and acting CTO of Cygnus Solutions, which Red Hat acquired this month. In his new role, Tiemann is responsible for communicating Red Hat's strategic direction to the company's customers and ensuring that Red Hat technologies meet their long term business requirements. www.systembuildersummit.com
Phone is a Linux client program that lets you talk with other people on the Internet by voice over a full-duplex connection. The requirements are:
Your identity is based on your email address. The server maintains a database of who is online and everyone's contact list. You can find out whether your friends are online if they are in your contact list and you're in theirs, similar to ICQ.
The program is very new and is command line based. Lots of improvements are planned, should it become popular. The client is free and open source. The matching service is free until further notice.
Scotts Valley, CA -- Paul Hessinger, chairman and chief executive officer of OpenAvenue, Inc. (www.openavenue.com), today announced that the company has acquired CodeCatalog and Cyclic.com from SourceGear Corporation. OpenAvenue is a privately-held, business-to-business company specializing in Web-based hosting, management, and distribution of worldwide collaborative software development projects. OpenAvenue hosts application development content provided by individual and corporate content owners, and makes it available to a worldwide community of software developers under open source, community-based, and private licensing.
A Web-based open-source code search engine, CodeCatalog (www.codecatalog.com) provides a fast reference tool to search and browse a source code repository which currently contains more than 20 million lines of open-source code from Linux, Mozilla, KDE, GNOME, and others. All of the most popular open-source projects are represented, searchable, browsable, and cross-referenced against each other. CodeCatalog is built upon a variety of open source technologies, including Linux. OpenAvenue will integrate CodeCatalog into its OAsis infrastructure.
OpenAvenue also acquired SourceGear's Cyclic.com, a popular Web destination for developers using the Concurrent Versions System (CVS), the leading version control tool for open-source developers. OpenAvenue will further SourceGear's initiative to support the open community development of CVS, work to enhance its functionality, and remain committed to its status as free software.
SAN FRANCISCO, CA--Collab.Net announced today that it has entered into an agreement with Hewlett-Packard Company to build, host and maintain the Open Source software development infrastructure and Web site (www.e-speak.net) for e-speak technology. E-speak, an HP-developed software that is now available as Open Source software, dramatically simplifies the creation, composition, deployment, management, and maintenance of e-services over the Internet.
Collab.Net is providing Web-based development life-cycle management services including code versioning, bug tracking, and email discussion forums that are key to a well-run Open Source software community.
As a part of this effort, Collab.Net is providing a hosting platform consisting of several major, well-known packages of Open Source software combined into a cohesive offering:
- Concurrent Versions System (CVS) repositories
- Bug tracking using the Bugzilla software from the Mozilla Project
- Mailing lists, archived and searchable
- Source code browsing tools
- An administrative interface for "admins" from the e-speak community
MOUNTAIN VIEW, Calif. - January 19, 2000 - The Sun-Netscape Alliance (Alliance) today announced iPlanet(TM) Web Server, Enterprise Edition 4.1 and the new FastTrack Edition 4.1 software, both of which now include support for the Linux operating system and the latest Java(TM) technologies. With these additions, iPlanet Web Server software now gives users an even wider degree of choice and flexibility in using iPlanet Web Server software on the operating system - or combination of operating systems - that best meets their business and Internet system requirements.
The new iPlanet Web Server, FastTrack Edition 4.1 software was specifically designed to meet the needs of developers and will be offered for free. FastTrack Edition 4.1 contains nearly all the core application development and administrative features of Enterprise Edition 4.1 that developers need to build rich, dynamic Web applications. iPlanet Web Server, Enterprise Edition 4.1 software continues to meet the needs of enterprises and service providers who require a high performing, scalable Web server to power their mission-critical Internet applications.
CNN To Power Web Sites With iPlanet Web Server 4.1 Software "The CNN Web sites have relied on Netscape Web Servers to meet the world's tremendous demand for news ever since CNN.com's initial launch in 1995. We have served billions of pages with Netscape Enterprise Server 2.01, and now it is time to move forward," said Sam Gassel, chief systems engineer for CNN Internet Technologies. "Our initial testing shows that iPlanet Web Server 4.1 software should be a highly scalable and reliable release. As we offer a more dynamic and personalized service, our users will benefit from the improvements in core functionality and new features such as integrated support for the latest Servlet and JavaServer Pages(TM) (JSP) specifications."
A pre-release version of iPlanet Web Server, Enterprise Edition 4.1 software on Linux is currently available on the iPlanet Web site, at www.iplanet.com/downloads/testdrive/index.html, for trial use and feedback. iPlanet Web Server, Enterprise Edition 4.1 software on the Linux, NT and Solaris(TM) Operating Environment is scheduled to be available for purchase in early March, 2000. iPlanet Web Server, Enterprise Edition 4.1 software with support for HPUX, AIX and Tru64 (Compaq/DEC) is scheduled to be available in early April, 2000. iPlanet Web Server, FastTrack Edition 4.1 software is scheduled to be available on all supported operating systems--Linux, Solaris, NT, AIX, Tru64 and HPUX--for free on the iPlanet Web site in early May, 2000. iPlanet Web Server, Enterprise Edition 4.1 software is priced at $1495 per CPU.
January 24, 2000 - Wayne, NE - SPIRO-Linux announces the development of a Linux Administration System, SPIRO-Linux WETMINtS, a powerful web-based administration interface for Linux systems.
Using WETMINtS, you can configure DNS, Samba, NFS, local/remote filesystems and more using your web-enabled cellular phone. WETMINtS is simple web-enabled cellular phone software, and consists of a number of CGI programs which directly update system files. WETMINtS supports all SPIRO-Linux and other Linux operating systems. Standard operations available with WETMINtS include:
About SPIRO-Linux
SPIRO-Linux is the only distribution that comes with five easy-to-use Server installations (Mega Server, Web Server, Application Server, Name Server, File and Print Server) and three Workstation installations (General Purpose, Development and Graphics), plus an upgrade path and a custom install. SPIRO-Linux includes a built-in office suite containing a word processor and spreadsheet. In addition, SPIRO-Linux can import Microsoft Word documents into Linux.
Currently, SPIRO-Linux is pursuing relationships with different OEM manufacturers in need of a more robust and easier to use version of Linux. SPIRO-Linux is published under the GNU general public license.
[Just before press time, I couldn't get to www.spirolinux.com. It could be on http://www.openshare.net according to a web search, but that site is under renovation. However, Walnut Creek has a version of SPIRO-Linux for download at ftp://ftp.cdrom.com/pub/linux/sunsite/distributions/spiro/i386/SPIRO/RPMS/ . If anybody at SPIRO-Linux reads this, please send the correct URL to . -Ed.]
Denver - XI Graphics Inc. has announced a new web-based product line of graphics hardware drivers for Linux. In making the announcement, Xi Graphics' National Sales Manager, Lee Roder, said the company would continue to offer its existing Accelerated-X Display Server line of products for Linux and Unix operating systems.
"Desktop Linux is coming fast, and we're going to be there," said Roder. "We produce higher quality drivers more efficiently than anyone else in the business. Linux gamers and developers alike can easily download, demo and purchase our drivers from the Web site at affordable prices -- it's good for us and it's good for Linux."
The 3D Linux drivers are OpenGL 1.1.1-compliant, support libGLU and libGL, and are available for download from Xi Graphics' web site (www.xig.com) starting at $29 each. A limited number of drivers are currently available, but more are being added weekly.
Roder said each driver is priced according to a number of factors, such as the hardware capabilities of the card, the difficulty of developing and testing the driver, and the likely sales volume of the driver.
"Our new line of graphics support for Linux in desktop installations is a result of requests from our customers," said Roder. "It gives them the ability to very quickly get superb graphics support for their specific hardware at a price that is quite economical."
The new 3D Linux driver products are sold only over the Web and are available as freely downloadable, limited-run-time demos. The demos can be run on the customer's hardware to confirm compatibility and performance before purchasing. Customers use a registration system on Xi Graphics' web site to purchase a key. The key, e-mailed back to the customer, unlocks the demo and converts it into the standard product.
Magic Enterprise Edition V.8 is a toolkit to build e-commerce and enterprise software directly on Linux.
clobberd 4.16 is a daemon that monitors user activity and network interface activity. Available at Linux FTP sites.
TotalView 4.0 is the first parallel debugger to support multiple development platforms for both traditional UNIX and Linux.
Babylon remote-access software from Spellcaster is now open-source.
Cyclades has released version 6.5.5 of the Cyclom-Y and Cyclades-Z Linux driver. (In the Tech support section of the web site.)
EZHTML makes the arcane art of HTML accessible to everyone with an easy-to-use interface and well-thought-out design. Tags are arranged in categories and grouped with related tags. The built-in search function makes it easy to find the one tag you are looking for and saves time. A free 14-day trial is available.
Igloo FTP has released a beta version of their commercial FTP client, IglooFTP-PRO Beta 1.0.0pre1. Binaries for this, and the source for an earlier version (IglooFTP 0.6), are on the web site.
The Parallel Computing Toolkit from Wolfram Research, Inc. is a Mathematica application package that brings parallel computation to anyone having access to more than one computer on a network or to a multiprocessor computer.
Easy Software Products announces
Many people have been known to quote Blaise Pascal:
"I have made this letter longer than usual because I lack the time to make it shorter."
In this month's Answer Guy, the Answer Guy himself learns some solid lessons in recovery planning. In the case of his editor, the lesson is to hit the Save key combination every once in a while. You never know when you're accidentally going to hit the key assigned to "quit this session without prompting". Thus
"I have made this blurb shorter than usual because I lack the time do it over."
Sorry folks! And I wouldn't be the Answer Guy if I didn't have the answer for this. Emacs users, make sure to turn auto-save-mode on for your important buffers before you get rolling, and set your auto-save-interval to something other than NIL. Screen users, make sure to update -- the newer versions now ask before completely shutting down screen itself. And try not to use a keyboard with \ too close to ] so you're less likely to backwack that fascinating article you were writing.
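[For Emacs users following that advice, the relevant settings can go in your ~/.emacs. A minimal sketch, not a complete init file; the variable names are standard Emacs, but check your version's documentation for the defaults. -Ed.]

```lisp
;; Auto-save every buffer that is visiting a file.
(setq auto-save-default t)
;; Auto-save after this many keystrokes; setting it to nil
;; turns keystroke-driven auto-save off entirely.
(setq auto-save-interval 300)
;; Also auto-save after 30 seconds of idle time.
(setq auto-save-timeout 30)
```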
From David B. Sarraf on Sun, 30 Jan 2000
James:
I recently read this question regarding the ethernet card. You mentioned that the card may be configured to a base address which autoprobe was not looking at. I agree that this may be a problem. You advised the reader to use boot-time parameters. That is a workable approach however I try to avoid it whenever possible.
I have had very good results using isapnptools, which came with my RedHat 5.1 distro. I used it to configure a modem card and a network card. Between the well engineered software and the excellent documentation the process was quite painless and eminently successful. Now I have cards that work and which run at well-defined and well-known addresses and IRQs and I don't need the boot parameters.
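For readers who haven't seen it, the isapnptools workflow goes roughly like this (a sketch; the card ID and resource values below are hypothetical examples of the isapnp.conf template format, not taken from the reader's setup):

```
# First, dump a template listing every PNP card and its possible settings:
#     pnpdump > /etc/isapnp.conf
# Then edit the file, uncommenting one (IO ...) and (INT ...) choice per
# card so it gets a fixed, well-known base address and IRQ, e.g.:
(CONFIGURE ABC1234/567890 (LD 0
    (IO 0 (BASE 0x0300))           # hypothetical base address
    (INT 0 (IRQ 10 (MODE +E)))     # hypothetical IRQ, edge-triggered
    (ACT Y)
))
# Finally apply the settings (distributions usually do this from an rc
# script at boot):
#     isapnp /etc/isapnp.conf
```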
That's probably very good advice.
I've never used isapnptools and I usually forget that they even exist. Could you give an example of HOW you use them?
I recently ran into a similar situation with a 3Com card. It had been taken out of a PNP system and put into a stock ISA system. I knew it to be a PNP card but I did not realize that it had been set to use the same IRQ as the floppy drive controller, and isapnptools would not detect this card. This caused much head scratching until I put it back in a PNP system and ran 3Com's "disable PNP mode" software. Still, I was successful at using the card without boot parameters.
None of this is meant as a criticism of your initial advice. It is just my experience with an alternative method.
Dave Sarraf
Actually I think I could use more criticism.
From Paully0529 on Sun, 30 Jan 2000
I recently received a laptop which has Red Hat 5.1 installed on it. I would like to remove this OS but have no idea what the login password is. Is there any way around this?
You don't need a user/account password to remove any operating system. So long as you can control the boot sequence of the system (i.e. boot from floppy or CD) then you can boot up into something that will wipe out all that nasty stuff that you don't want on your new laptop's hard disk.
There are also ways for you to force a password change on a Linux box. I've described it several times --- but the basic sequence is something like this:
At the LILO: prompt type:
linux init=/bin/sh rw
... this will boot the system using the "linux" LILO stanza, and force the kernel to bypass the normal bootup process (by loading a command shell instead of the usual init process). It will also force the kernel to mount its "root" filesystem in "read/write" mode.
You can then type:
mount /usr
... which might not be necessary, and thus might give a (harmless) error message.
Then type:
/usr/bin/passwd
... and provide a new password (which you'll need to repeat twice).
Next you can type the following commands (ignoring some possible, harmless warnings and errors):
sync
umount /usr
mount -o remount,ro /
exec /sbin/init 6
Of course those directions are for people who want to take over a Linux system and preserve the programs, configuration and data on it. In your case you could do something more like the following at the LILO prompt:
linux init=/bin/sh rw
... and when you get a shell prompt just use:
dd if=/dev/zero of=/dev/hda
... (assuming that Linux is on your primary IDE drive).
NOTE: This last command example will WIPE OUT EVERYTHING ON YOUR PRIMARY IDE DRIVE! It will scribble strings of binary zeros (ASCII NUL characters) all over the drive wiping out everything. Don't use this unless that's really what you want to do!
(Note: on some systems you might have to use a "stanza" name other than "linux" --- hit the [Tab] key at the LILO prompt to see a list of options).
SysAdmins Note: If you want to prevent users from doing these sorts of things to their desktop systems (as a matter of policy for example) then you can set up a LILO password and mark the system as "restricted" in the /etc/lilo.conf file.
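A hypothetical /etc/lilo.conf excerpt for such a policy (after editing, re-run lilo and make the file readable only by root, since the password is stored in the clear):

```
# /etc/lilo.conf (excerpt)
password = some-secret   # hypothetical password; stored in the clear
restricted               # only demand it when extra parameters are given
image = /boot/vmlinuz
    label = linux
    root = /dev/hda1
    read-only
```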
Of course this by itself will not be much "protection" -- you'll also have to mark the file as not readable by users other than root, restrict root access to the system, change the CMOS boot sequence to prevent booting from floppies, CD discs and other removable media, and set a CMOS/NVRAM password to prevent the users from changing the boot sequences back. On top of all that you'll have to pick a brand of PC/BIOS that doesn't have any known "backdoor" CMOS passwords and you'll have to lock the cases so that the users can't open them up to short the battery to clock chip leads, or otherwise reset the CMOS registers to their factory state. Those are all hardware security limitations of PCs, Macintosh and many of the other workstations. They are not OS specific issues.
With most operating systems, you can boot up off their installation media and readily wipe out whatever happens to be sitting on the system by simply answering some silly install program warning. (Early versions of MS-DOS were pretty stupid in that they would refuse to remove or overwrite "foreign" or "unknown" partitions in FDISK regardless of a user's wishes. I don't know if they ever fixed that. I haven't installed any MS operating system on anything for several years.)
From Dr.S.Vatcha on Sun, 30 Jan 2000
Dear answerguy
Every time I attempt to set up netscape 4.7, a browser error 432 comes up saying to close UninstallShield and restart the setup. I have not, to my knowledge, opened UninstallShield, nor did I know it exists in any files on my pc.
help shahrookh.
The problem here is probably that you are running MS-Windows or some derivative thereof. Last I heard InstallShield is a program for installing software on MS Windows systems. I've never heard of a version for Linux.
I presume also that your copy of Netscape (Navigator, Communicator, whatever you've got) is trying to launch "UninstallShield" to remove the older version of the NS software that you are trying to upgrade from. There's probably some sort of temp or "lock" file that is confusing their uninstaller.
Pretty pathetic programming, really. That sort of thing is one of the reasons I stopped using and supporting MS Windows so many years ago.
(BTW: I'm the Linux Gazette Answer Guy).
From kodancha on Sun, 30 Jan 2000
sir
i have installed redhat linux 6.1 on piii hardware with firewalls. It works fine. But every 3-4 days i have to reboot the system because of the following.
The system will not take any command. When i type anything the cursor moves but no characters appear on the screen. Even ctrl-alt-del is not working. But all other clients connected to this server have no problem. I tried stty sane, ctrl-c etc but it is not responding. Can u help me
gjkodancha
Well, your description doesn't give me much to go on.
Is this in X? Are you using a kernel with frame buffer (graphics driven text mode) console support?
Try building a new kernel. Leave the system in text mode and disable any "VESA VGA Graphics" support in the kernel (in menuconfig under the "Console Drivers" menu). Be sure to enable the "Magic SysRq Key" under "Kernel Hacking." Read the docs about this "Magic SysRq" in the /usr/src/linux/Documentation/sysrq.txt file.
Now, after you've built and installed the new kernel, when you reboot with it, use runlevel 3 (Red Hat: text mode multi-user mode) rather than runlevel 5 (multi-user with GUI/xdm login mode).
If the console seems to go comatose again, try using some of the Magic SysRq keys, particularly the p (processor status), t (task list) and m (memory status) diagnostics, and the k and r keys to kill everything on a given console and to "reset" the keyboard driver.
A couple of other things you can do:
Edit /etc/syslog.conf and add a line like:
*.* /dev/tty12
... (and restart your syslog daemon). This will copy all syslog messages to your twelfth virtual console. When you leave your system unattended, switch to that VC. If it appears comatose when you get back, look at the messages at the end.
When you restart your system, look at the tail of the /var/log/messages file. That's where most system warnings and errors are logged.
Also you can try logging in via ssh (or telnet, rlogin or some other insecure protocol) and using the commands: chvt 1 or chvt 2 ... to force the console to switch to another VC. See if that works.
You can also run commands like
stty sane < /dev/tty1
And:
setterm -reset > /dev/tty1
(note: the redirection operator on my stty command is NOT backwards. stty performs ioctl() calls on its input device while setterm works on its output file descriptor. It looks weird, but stty was written a long time ago and it actually made sense back then).
I've had cases where I could revive a console (from a crashed X session or a wacked-out SVGAlib program failure) by running startx (from a remote session, even from a serial login). That works sometimes (since the X server has to reset quite a bit of the video and keyboard state as it starts).
These kinds of console oddities are pretty rare. It's usually the result of some buggy program that's running with root privileges or some buggy driver (which naturally is running with system/kernel level access to the hardware). It could be a bit of flaky hardware, too. It would be good to figure out what is doing this. However, if this is set up as a server, it may be that you don't need the console all that much. You can treat it like a "headless" system if you need to.
If you do decide to switch out hardware, try a cheap replacement video card. Under Linux, in server applications there is no point to using a high end video card. Meanwhile the high end video cards are most likely to have the cutting edge technology and the highest chance of conflict with other system components. Also make sure that any "Plug and Pray" (PNP) settings are disabled in your ISA peripherals and BIOS/CMOS.
If you think it's software then try attaching strace to various processes that are bound to your console. See if there's some oddity to the system calls they're making before the lockup.
(Read the strace man page to understand a bit more about this. Don't even bother if you don't know the difference between a system call and a library function. There's a pile of learning curve in that direction).
From Berlin Tokugawa on Sun, 30 Jan 2000
I have a LAN in our office connected to the Internet using a subnet (240) for 16 IP numbers. Our office actually use only 4 computers to assign IP numbers from the said IP pool. One of those computer in our office LAN is a Linux box configured as a PPP server so I could dial-in from home (and get a static IP number from the office IP pool assigned to the ppp interface) and connect my small home LAN using a subnet of 248. I used eight(8) IP numbers at home (using those unused, contiguous IP numbers from the office IP pool) but I'm having problems.
The other computers at home (not the dial-out computer) can not ping the dial-out computer when I'm PPP-connected to the office LAN. Unconnected via PPP to the office LAN, all my home computers can ping each other. All the other computers at home have the home dial-out computer as their gateway to the outside world, while the office computers have their gateway set to our office router connected via leased-line to an ISP. I am wondering if the cause of the problem is the re-use of the IP numbers at home that are already subnetted in the office --regardless of their assignment or non-assignment to working computers.
BTW, I do not want to use private IP addresses, IP aliasing, firewalling, etc., as there is a need for my home computers at home to be referenced by the outside world via valid IP numbers directly. Any thoughts on this problem is greatly appreciated. Thanks.
Berlin Tokugawa
You should really draw an ASCII diagram of your network and include the IP addresses (even a fake set of consistent IP addresses) when you ask a question like this.
    +-----------------------+
    |     The Internet      |
    +-----------+-----------+
                |
               (A)
        +-------+-------+
        |  Your Office  |
        +-------+-------+
                |
               (B)
                |  (PPP link)
               (C)
                |
        +-------+-------+
        |     Home      |
        +---------------+
The routers here are:
(A) is your office's end of the link from your ISP,
(B) is your office's end of the link between your office and your home,
(C) is your home's end of the link to your office.
So there are five routing tables you care about.
Let's assume that they've given you 123.45.67.176 through 123.45.67.191 (a.k.a. the 123.45.67.176 network with netmask 255.255.255.240).
Let's presume that you and your ISP have followed common conventions and assigned the first usable IP address in your block to your router (A). That means that (A) is 123.45.67.177. Therefore it would make sense for the office to use the lower subnet (from 177 to 182). Thus that subnet will have a netmask of 255.255.255.248 and a broadcast address of 123.45.67.183 (add 7 to 176). Remember, you only get six usable addresses out of that mask, since one is reserved for the network address (the zero offset from your base) and one for the broadcast (your base address plus a trailing sequence of binary 1's).
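That arithmetic can be double-checked quickly; here's a sketch in shell, using the same .176 base and 255.255.255.248 (/29-style) mask:

```shell
# /29 subnet math for a block starting at .176
base=176
mask=248
size=$((256 - mask))                   # addresses per subnet: 8
lower_bcast=$((base + size - 1))       # lower subnet broadcast: .183
upper_net=$((base + size))             # upper subnet network:   .184
upper_bcast=$((upper_net + size - 1))  # upper subnet broadcast: .191
echo ".$lower_bcast .$upper_net .$upper_bcast"   # prints ".183 .184 .191"
```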
So you pick an IP address for (B): lets call that 123.45.67.182 (the last usable address in your lower subnet). You'll also need an address for (C) 123.45.67.185 (the first usable address in the upper subnet). Actually all of these routers will have multiple interfaces. The PPP (exterior) interface to your ISP at (A) will usually have one of the ISP's addresses. You can actually use any RFC1918 address for your PPP link from (B) to (C) since only B and C will use those addresses in their routing tables. Let's call those PPP endpoints 10.1.1.1 and 10.1.1.2
(I'm not sure but I think there's a way around that in some TCP stacks but this should work).
Now the home systems have to have a default route that points to (C). (C) has a default route that points to (B), and a network route that points to eth0 (the home network). That network route corresponds to our upper subnet, so it looks like:
route add -net 123.45.67.184 netmask 255.255.255.248 eth0
(A) and (B) also each have a route that looks similar. In (B)'s case it looks like:
route add -net 123.45.67.184 netmask 255.255.255.248 gw 10.1.1.2
... (B) is listing (C) as the gateway to the upper subnet. (C) lists ppp0 as its default route.
Finally there's (A) which lists (B) as its gateway to the upper subnet and the ISPs address as its default route.
The only tricky part is that all of the machines on the office subnet should also know about the subnet route to (B).
This is simplified slightly if (B) is actually not a separate router, but merely an extra interface on (A).
Of course there are many ways to do all of this. When asking questions about routing --- draw a picture and then go to each network and router (connecting point) and ask what the routing tables must look like from that location!
From Rich on Sun, 30 Jan 2000
Answer guy
I have looked everywhere for some basic instructions on setting up a two system soho network and can't seem to find any information.
I am currently running linux-mandrake 7.0. Basically all I want to do is have the two machines talk to one another in a network configuration. Any help would be greatly appreciated.
Thanks Rich
Let's assume that you have ethernet cards in your two machines. You can then hook them up with a "crossover" cable or you can get a hub and plug both of your systems into that. Most ethernet cards and hubs have little lights on them. Some combination of these lights being on should reassure you that you've successfully connected the two systems.
That takes care of the physical layer. If you don't get that far then you'll need some phone or in person support.
Next you have to configure the two systems to talk to one another.
I'm going to guess that they are both Linux boxes. I'm also going to guess that you don't have a block of "real" IP addresses assigned to you. Therefore you're going to use a couple of addresses from one of the special "reserved blocks" that are set aside for this situation. The reserved blocks are defined in RFC 1918 (an Internet standards document). They are:
192.168.*.*
10.*.*.*
and: 172.16.*.* through 172.31.*.*
... that's a lot of addresses to choose from. I'm going to choose 192.168.130.17 and 192.168.130.18 for deeply mystical reasons. (192.168 are the "class C" address blocks, which are normally used by small to medium offices, and this is 1/30/2000, so I picked 130 for the next digit. The 17 and 18 are chosen because it is common convention to reserve the bottom and top 16 or so IP addresses in any class C block for routers, servers, etc).
So on one one of these twins (let's call it pollux) we'll log in as root and type the command:
ifconfig eth0 192.168.130.17
... and, on the other (which we'll call it castor) we'd issue the command:
ifconfig eth0 192.168.130.18
For a temporary connection that's all we have to do. If we want these two systems to be persistently configured we edit a file under /etc/sysconfig/network-scripts ... or we use one of Mandrake's little configuration "helper" programs.
I haven't been using Mandrake (or recent versions of Red Hat) and I've never been a fan of GUI configuration tools. So I can't help you with the latter of these. If you are familiar with basic text editing then look at the /etc/sysconfig/network-scripts/ifcfg-eth0 script and see if you can guess what needs to be put in there.
You can use a netmask of 255.255.255.0 and a broadcast address of 192.168.130.255 on both of the twins. In fact there are many values you could use for these --- so long as they were consistent with one another and some other arcane rules that I won't cover this morning.
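For the persistent setup, a minimal /etc/sysconfig/network-scripts/ifcfg-eth0 for pollux might look like this (a sketch in the Red Hat/Mandrake style; check your distribution's sysconfig documentation for the exact field names):

```
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.130.17
NETMASK=255.255.255.0
NETWORK=192.168.130.0
BROADCAST=192.168.130.255
```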
It's also possible to connect these two systems over a null modem or a "Laplink" (parallel link) cable.
If you were using a serial/null modem cable you'd run the PPP program (direct connection). I definitely don't have time to explain configuring PPP right now; that's the most complicated option. If you connected the two boxes with a parallel link cable you'd use commands roughly like this:
modprobe plip
ifconfig plip1 192.168.130.17
(on pollux) and
modprobe plip
ifconfig plip1 192.168.130.18
... for castor.
Notice that we can use the same IP addresses for our two boxes regardless of what sort of physical connection we use between them. That's the whole point of TCP/IP networking --- a breakthrough it made before any of the other networking protocols did.
Once you've done this you should be able to use any standard networking service between your twins.
I'm pretty sure this is covered in the "Linux Installation and Getting Started Guide" (Linux Documentation Project) http://www.linuxdoc.org.
Hope that helps.
From Alvaro Gonzalez on Sun, 30 Jan 2000
Hi.
I need your help. I have Caldera OpenLinux version 2.2 and i have problems using Informix-SE version 5.0 for SCO UNIX.
This software runs with iBCS 2.1-1 without problems, but i have a limit on the size of the database files: 1 Giga.
I think that limit comes from the ULIMIT variable of SCO UNIX, set to 1 Giga by default, and if this is true I need to set iBCS with a value greater than 1 Giga.
Thanks. Alvaro
I'll be the first to admit that I just don't know as much as I want about the big SQL DBMS packages.
However the question that immediately comes to mind is:
Why don't you use the Linux version of Informix?
http://www.informix.com/informix/products/linux/lx.html
I have an evaluation copy of IDS (Informix Dynamic Server) version 7.30.UC7.
Is the problem here that upgrading to this will be too expensive? Is it that there are enough upgrade issues that your code and tables will take too much effort to upgrade for operations under the new server?
Assuming that upgrading/migrating to the Linux version of Informix is not an option, doesn't Informix 5.0 allow you to define database file "extents" (1Gb Linux/UNIX files that the server internally manages as blocks of the larger tables)? I was under the impression that all of the major DBMS packages did this before the LFS (large file summit) brought support for large files to most of the 32-bit UNIX implementations. (Linux hasn't implemented LFS, though there are experimental patches floating about to do so.)
There is a ulimit command (shell built-in) and there are Linux facilities for managing the ulimit settings. These vary from one distribution to another and based on some of the packages you might have installed. For example some use the /etc/login.defs (stock Shadow suite) or the PAM pam_limits module or /etc/lshell.conf (my Debian system). However the limit you're bumping into might be with your iBCS support.
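To see whether a shell-level limit is what you're bumping into, check the current file size limit from the shell (a quick sketch; note the units vary by shell --- bash, for instance, reports 1024-byte blocks):

```shell
# Show the current per-process file size limit; prints "unlimited" or
# a block count. Raising a hard limit generally requires root.
ulimit -f
# To raise it for this shell and its children:
#     ulimit -f unlimited
```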
Current Linux ext2 filesystems have a 2Gb filesize limit on 32-bit systems. This is likely to change in 2.4 or 2.6 (over the next year or so). However, that's not likely to help you in the near term.
Personally I suggest that you look into the Linux native Informix server, and into the table segmenting features for that. If those don't do the trick, maybe you should contact a DBMS specialist. (We have some people at Linuxcare, where I work for my day job, who know far more about these things than I do.)
[ Sadly, Informix' FAQ, http://www.informix.com/informix/products/linux/lxfaq.html notes they took out several cool features when releasing their SE suite for Linux. Perhaps they were in a rush to market, or perhaps they don't think we're really "the enterprise" so we don't care, or maybe they're merely uncertain (they say they're evaluating Linux needs). Among the features taken out were raw device support (which probably would allow use of larger spaces) and backups (!? What good is a big server without backups? You mean I have to turn it OFF to back it up? Ow!).
They also claim we're missing a feature I'm not entirely sure we lack (asynchronous I/O -- a Google! linux search on "asynchron*" yields plenty of info, seeming to implicate a particular flamer in the land of tainted benchmarks). If you are among those who'd hope to use it for real work, make sure to let them know your real needs at ).
-- Heather ]
From Farhad Saberi on Sun, 30 Jan 2000
Hello, my situation is this::
"provide a wiring diagram of an Ethernet (802.3) 100 Mbps network for the following application:
Company A has two offices located 1km apart. Each office has about 40 computers. All the computers will need to be interconnected on the same LAN. "
Apparently, I have to use full-duplex fiber optics because of the distance of 1 km.
My problem is that I don't know how many hubs I can use. Is there a restriction on the number of hubs, according to some standards ?
Thanks, Farhad.
This sounds suspiciously like a homework assignment rather than a problem you are actually having around the house, office or campus.
Personally I think you should refer to whatever textbooks or lecture notes that you should have seen before this question was posed to you.
The standards that govern ethernet are maintained by the IEEE (Institute of Electrical and Electronics Engineers). You're interested in the 802 series of documents (Maybe you wondered what those funny numbers in that question were about). Not that I'm actually suggesting that you slog through an IEEE draft standards document. I wouldn't do it for a question like this (those are written for engineers who are designing the hardware to implement the standards).
The info you're looking for should be in any good networking text book.
From Jinquan Luo on Mon, 31 Jan 2000
Dear James,
I have been trying to set up Proxy ARP using the arp commands in Linux (Red Hat 6.1), but they do not seem to work for me. I wonder if you would give me some advice as to how to fix the problem. Here is my problem.
I have a CISCO router that connects to the internet. From that router, ONE link comes into a hub. Two computers are connected to the hub. One of the computers is our bastion host, which is our web server and mail host. The e-mail messages are immediately relayed to our internal network through the other computer, which is the firewall. So here is the setup:
The firewall has ip# xx.xx.xx.2, MAC 00:20:AF:A2:9E:58 The bastion host has xx.xx.xx.3
The firewall has a second NIC which is connected to the internal network, so the e-mail also goes through it. Now the e-mail received by the bastion host is forwarded to xx.xx.xx.149, which is a phony address. So I tried to arp .149 to the MAC of the firewall like:
arp -i eth0 xx.xx.xx.149 00:20:AF:A2:9E:58 pub.
This command doesn't look quite right to me. Try something a bit more like:
arp -i eth0 -Ds ${NETWORK} eth1 netmask ${NETMASK} pub
This example is taken right out of the ProxyARP mini-HOWTO (*)
(NOTE: the 2.2.x kernel doesn't allow the netmask option. Apparently you must issue a separate command for each of the intended IP addresses you wish to publish. I don't know what the state of this will be for version 2.4. I've copied one of my more expert associates; perhaps he'll jump in with more info).
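Issuing one command per address is easy to script. Here's a sketch that just prints the commands for a hypothetical .149 through .154 range (xx.xx.xx stands in for the real prefix, as in the question; review the output, then pipe it to sh as root to apply):

```shell
# Generate one proxy-ARP "publish" command per address in the range.
for i in $(seq 149 154); do
    echo "arp -i eth0 -Ds xx.xx.xx.$i eth0 pub"
done
```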
The command appears to work, so arp shows xx.xx.xx.149 MP eth0 as advertised. The firewall is functioning and does translate the .149 address into an internal number, 192.168.1.52, which is our internal mail server. The problem is that if I ping xx.xx.xx.149 on the bastion host it shows this:
$ ping mickey
PING mickey.tbc.com (xx.xx.xx.149): 56 data bytes
--- mickey.xx.xx ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
In another window I have
$ tcpdump -n arp
12:33:06.979376 arp who-has xx.xx.xx.149 tell xx.xx.xx.3
12:33:07.969471 arp who-has xx.xx.xx.149 tell xx.xx.xx.3
12:33:08.969470 arp who-has xx.xx.xx.149 tell xx.xx.xx.3
3 packets received by filter
0 packets dropped by kernel
This continues forever.
My Kernel routing table looks like this:
bash# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
xx.xx.xx.3      0.0.0.0         255.255.255.255 UH    0      0   0   eth0
xx.xx.xx.0      0.0.0.0         255.255.255.0   U     0      0   0   eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0   0   lo
0.0.0.0         xx.xx.xx.1      0.0.0.0         UG    0      0   0   eth0
Apparently there is something missing in the setup, such that ARP is not providing the right link_addr.
The arp command doesn't affect your routing tables.
When I added a static route such as:
route add -net xx.xx.xx.0 netmask 255.255.255.0 gw xx.xx.xx.2
[where xx.xx.xx.2 is the firewall]
then tcpdump shows this:
12:40:52.120385 arp who-has xx.xx.xx.149 tell xx.xx.xx.2
Apparently PROXY ARP is not working because it is not responding to requests.
Try reading the ProxyARP mini-HOWTO and using a command that's closer to their example.
Yes, you do need to make sure that the routing table on the proxyarp host has entries for both subnets.
I am really not sure what is wrong with my set-up. If you can spare a moment please take a look and give me some clues.
Thanks very much.
Jinquan
If this doesn't help, draw up an ASCII art diagram to help me figure it out. It takes a long time to guess what you mean based on this text. Be sure to include the routing tables on each of the routers (and proxyarp hosts) and a sample routing table from a representative non-router host on each subnet.
Usually the process of creating this diagram will make your problem obvious.
[ There's a reasonable example of such art in another message this month, subject "Subnetting".
-- Heather. ]
From Nathan Keeter on Sat, 19 Feb 2000
Is there any way to specify that syslog log all events from a particular host to a particular file?
That depends.
First I'd have to understand what you mean by "events" (and what you mean by "from" actually).
There are several sorts of "events" that can be logged "from" a host. For example, the TCP Wrappers program (tcpd) that is pre-installed and configured for use by all major Linux distributions will log each access attempt to each wrapper-protected service.
You can figure out most of the "wrappered" services by reading /etc/inetd.conf and looking for references to tcpd on each of the active lines therein. Also note that the portmapper, rpc.mountd and some other "standalone" services might also be "wrappered." They would generally be compiled with and linked to "libwrap" (the TCP Wrappers library).
(Anyone interested in this should read the hosts_allow(5) and tcpd(8) man pages).
Another source of log messages "from" a host might be your kernel packet filtering tables. There are options to ipfwadm and ipchains to allow you to output/log messages about packets that match certain packet filtering rules.
(Anyone interested in more details on this should read the ipfwadm(8) and/or ipchains(8) man pages, looking for the -o and -l option respectively).
[ For those of you keeping up with the newer kernel series, Rusty is trying to encourage people to use and debug the new netfilter code. See the home page http://netfilter.kernelnotes.org/ for the latest scoreboard.
-- Heather.]
Yet another source of syslog messages "from" a given host might be that you've configured your syslogd to accept remote (UDP) messages (by adding the appropriate command option to its rc* script), and you've configured the hosts in question to forward their messages to that loghost (using appropriate "@" directives in the /etc/syslog.conf files of the loghost clients).
(Anyone interested in these topics should read the syslogd(8) and syslog.conf(5) man pages).
Obviously any other services may have their own logging features in addition to these.
(Thus you see what I mean by "it depends on what you mean by 'from.'" Do you mean: "log messages from localhost services that involve (stem from interactions with) a host" or do you mean "log messages received by my syslog daemon that were purportedly issued from the hosts in question"?)
Anyway, it is not possible to configure the normal syslog daemon to separate the messages into separate files based on the hosts from which they were received. The normal configuration directives allow separation and filtering based only on the "facility" and "severity" (read that syslog.conf man page for an explanation and lists of these).
One way to do what you want would be to feed the messages into a processing script (awk, or PERL). It's even possible to do this "in real time" by configuring your loghost to feed messages into one or more FIFOs (named pipes) and running your processing script(s) to read from that (or them). Again, the details should be in your syslog.conf man page but the short form would be something like this:
Add a line to /etc/syslog.conf like:
*.* |/dev/myloggingnode
Create "myloggingnode" (conventionally in the /dev/ directory, though a /var/run or other suitable place might be better). Use a command like:
mkfifo /dev/myloggingnode
or with:
mknod -m 0600 p /dev/myloggingnode
(You'll need to make this writable by your syslog daemon, of course).
Then you just run your PERL or awk script on that.
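As a sketch of such a script (shell/awk rather than PERL; it assumes the standard syslog line format, where the fourth whitespace-separated field is the hostname, and the "byhost" directory name is made up for the example):

```shell
# split_by_host: read syslog lines on stdin and append each to a
# per-host file under the directory given as $1.
# Syslog lines look like: "Feb 20 12:00:01 hostname tag: message"
split_by_host() {
    dir=${1:-/tmp/byhost}
    mkdir -p "$dir"
    awk -v dir="$dir" '{ f = dir "/" $4 ".log"; print >> f; close(f) }'
}
# Usage, reading from the FIFO described above:
#     split_by_host /var/log/byhost < /dev/myloggingnode
```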
Another option is to check out one of the alternative syslog systems. I've read a bit about syslog-ng (next generation), and I think it can be configured for what you want. Have a look at:
http://www.balabit.hu/products/syslog-ng
... or at:
http://www.freshmeat.net/appindex/1999/02/17/919286467.html
For more on that.
For those interested in other aspects of network system logging and event monitoring across Linux and UNIX systems, I suggest looking at the "secure syslog" (which uses cryptographic techniques to authenticate that messages came from the host that purports to have sent them, etc) and I also recommend "colortail" as a great tool for those who like to monitor their systems with 'tail -f' logging.
You can find those at:
http://www.core-sdi.com/english/slogging/ssyslog.html
... and
http://www.freshmeat.net/appindex/1999/02/20/919554599.html
From Timothy Roloff on Sun, 20 Feb 2000
I am responding to information posted about GNOME on your website at http://www.linuxgazette.com/issue46/tag/5.html .
Someone was getting this message from Linux just before the log-in screen:
According to /var/run/gdm.pid, gdm was already running (process id) but seems to have been murdered mysteriously. INIT: Id "x" respawning too fast: disabled for 5 minutes
I ran into the same problem myself in RedHat 6. I tried all your suggestions, which I appreciate because they gave me somewhere to start. But none of them worked. Finally, on a whim, I ran df and found that my drive was full. Freeing up some space solved the problem.
From Dr. Firyal Bou-Rabee on Sun, 20 Feb 2000
[From Kuwait].
hi, I have windows 2000 professional edt.
My condolences.
Q1. How to get ride of pop window for asking about user name and password?
Q2. How to reinstall windows 2000?
thank you
MAB
Try calling Microsoft or reading their manual. Didn't they sell you some documentation and/or support with that? If not, what exactly are you getting for your money?
From steve eckard on Sun, 20 Feb 2000
I would appreciate some advice on what to delete off my C drive. I seem to be just about out of space and know that there's plenty of junk there, but I'm not sure which is which. Are there any guidelines on what is not needed. Thanks for your help. My email address is [snipped]
The fact that you refer to it as your "C" drive suggests that it is running MS-DOS and probably MS-Windows ('95, '98, or whatever).
Personally I'd start by removing that and putting in Linux. However that's based purely on my personal bias. It's what most people would expect from the "Linux Gazette Answer Guy."
Presumably you'd like to be a bit more selective.
This is one of the long-standing problems with MS-Windows in all its many flavors (3.0 through 3.11, WfW, '9x, and NT). It is customary for software to scatter their files all over your drive(s) and to embed drive/path information throughout their configuration files (or "registry" trees).
That leads to two problems:
- You don't know which files belong to which packages.
- Adding a new drive and trying to move data and programs into the additional free space leads to "transplant shock" or requires removal and re-installation/configuration of some software.
These lead to other problems. Proper function of your system can be dependent on the order in which some of your software was installed. Re-installation of some software can result in the replacement of some DLLs with older versions, which can cause other software to fail.
Basically the whole design is like a house of cards.
You could try some programs like "Uninstaller" or "Cleansweeper." I don't remember the actual names of any of these but you can find a list of them in the TUCOWS (http://www.tucows.com) "Win '95/98 Disk Cleaner Utilities" category at: http://www.tucows.com/diskc95.html
Of course you should use these programs with caution and only after you've backed up any data (particularly that thesis that you might need to graduate). Of course my favorite disk cleaner for MS-Windows partitions is the Linux command:
dd if=/dev/zero of=/dev/hda?
... which will WIPE OUT WHOLE FILESYSTEMS without so much as a "Are you sure?" warning prompt!
TUCOWS is self-proclaimed to be "The Ultimate Collection of Windows Software." If you're going to continue using MS-Windows you might as well have a decent selection of shareware and freeware to make it more tolerable.
They also run Linuxberg (http://www.linuxberg.com) (or http://linux.tucows.com/index.html) which is a good source of free and shareware Linux software. They are arranged in categories and each is rated with one to five "penguins" (stars).
From Phoenix_II on Sun, 20 Feb 2000
Hey answerguy!!
I have a problem here. First of all, I am brand spanking new to Linux. I have a Win98SE box and a Linux Mandrake 6.0 box. The Win98 box has two network cards, one is connected to a LAN and the other to the Linux box via crossover cable. I just installed winproxy on the win box and am trying to get the Linux box to "notice" the proxy so I can get HTTP, FTP and so forth, to the internet through the win box. the winproxy has "CERN HTTP," FTP, Telnet, "SOCKS 4 & 5," DNS, DHCP, and Transparent Proxy capabilities and all the instruction is for if I want to connect another win box to it. Do you think you can help me out? Thanks, Phoenix
Well, that's a backwards way to do it. Usually you'd use the Linux box as a proxy server and use the MS-Windows boxes as the clients. That's because Linux is much more stable, and secure, requires much less system overhead, and has much more flexible IP filtering, routing and masquerading control.
However, to make your Linux system into a SOCKS client you can install the socks-client RPMs and modify the /etc/libsocks5.conf. Once this is done all of the normal TCP/IP client software on your system should automatically use the proxy according to the configuration.
I did provide a more extensive discussion of this process and the many other options available back in issue 36
- "Linux as Router and Proxy Server: HOWTO?"
- http://www.linuxgazette.com/issue36/tag/21.html
That contains a sample libsocks5.conf file and a bit of explanation on what the fields mean.
(Some software, notably your Netscape Navigator/Communicator will have to be separately configured to use the proxy. Just hunt down the appropriate dialog boxes through the program's UI --- Edit, Preferences for Netscape, etc).
From mjschack on Mon, 21 Feb 2000
Hello,
In reference to your explanation of how to recover a lost password in the current issue of the Linux Gazette, there is a simpler method.
For instance, if your kernel is labeled "linux," you could reboot (assuming you're currently using the system), type "linux 1" at the boot prompt, boot to single-user mode, type "passwd" when at the prompt and then enter a new password. To get it all in one logical sequence, the next command could be "telinit 3" or if XDM is running the show, "telinit 5." "Telinit 6" in this scenario wouldn't be necessary, since no volatile changes to the disk have been made.
Just my two cents. Sincerely,
That will work on some Linux distributions under some configurations. However, most modern distributions use an "sulogin" utility to password protect the single user mode.
The steps I gave will handle most systems. Two cases that are likely to interfere with the procedure I outlined would be:
- #1: System has a LILO password enabled to prevent passing over-ride parameters to the kernel, AND a CMOS password in place to prevent booting from floppy and other removable media.
- #2: System has ppdd (privacy protected disk driver) installed and the root filesystem is encrypted.
There are ways to get around the second part of problem #1 (bypassing the CMOS restrictions, which in turn lets one bypass the LILO password). However, scenario #2 would be VERY difficult to get around.
The number of systems that are actually secured to this degree is way less than 1%. This is actually a bit of a pity in some ways, since users don't REALLY know whether the computer workstation, left unattended in their open cubicle, is trustworthy when they sit down at it in the morning and type their passwords into it. Ultimately this means that most businesses have somewhat limited accountability --- they can't definitively assert that a given user was the one who used a particular account to violate some policy. This is a limitation of PCs (and most other commonly available workstations) that has nothing to do with the OS.
As I've described, it's possible to lock down a PC running Linux so that it takes some pretty studly work to get into them. However, it's pretty rare.
Incidentally, the MBR in recent Debian Potato releases may be insecure from scenario #1. There was a feature added that allows one to bypass CMOS boot restrictions and boot from floppy by pressing the appropriate key sequence in the MBR boot loader.
This was discussed a couple of weeks ago on the Bugtraq security mailing list. It is possible to over-ride this default using options to the Debian install-mbr command. See its man page for details.
From Judith M. Bride on Tue, 15 Feb 2000
Thank you for the response, Jim. Sorry to take so long to acknowledge! Judith
Mr. Dennis, Can you direct me to a source that will tell me how to secure our RHLinux 6.0 DNS server. I am new to Linux. We only use it for DNS, so we want to make sure we've closed all unnecessary services.
Thanks..... Judith
- You could start with Bastille Linux at
- http://www.bastille-linux.org
[ Plain thanks don't always get published, but it's worth noting that Bastille just released a new version of their scripts a few days ago. To stay secure, it's worth keeping up. The mailing lists at SecurityFocus are worth watching, too.
-- Heather]
From Frank da Cruz on Mon, 21 Feb 2000
I notice you've got a lot about Kermit in your page:
http://www.linuxprogramming.com/mirrors/LDP/LDP/LG/issue32/tag_modem.html
Including:
Another simple testing trick is to use 'minicom' to dial the phone and establish your connection (log in). Then use the "Quit without Resetting the Line" option (using the [Ctrl]+[A], [Q] key sequence). This should dump you out of minicom and back to a shell prompt without disconnecting your modem. (It is then possible to invoke pppd on that line --- using an alternative version of the ISP options file without any "connect" directive).
That trick doesn't work with kermit --- it won't exit without resetting the communications line. From what Frank de la Cruz tells me you can't use C-Kermit as a replacement for 'chat' because of this. Basically he says it violates some programming standards to do this. (I still don't understand that --- but it's not currently a priority to me. If someone understands it and wants to explain --- write an article and send me a copy).
The explanation is that when a UNIX process exits, all the files that it opened (and in UNIX, devices count as files) are closed by UNIX itself. There's nothing the application can do to prevent it. The only way Minicom could keep the connection open when it exits is by configuring the modem to ignore DTR, but you could do that with Kermit too.
But the open device was passed to chat by the parent process, pppd, which is still running. I guess that's what chat (or the PPP daemon) is doing.
Anyway, C-Kermit 7.0, which was announced 1 Jan 2000:
http://www.columbia.edu/kermit/ckermit.html
now incorporates a built-in method for PPP dialing, which you can read about here:
http://www.columbia.edu/kermit/case13.html
I didn't see that, though I know that I did mention something about the new Kerberos features in one of my recent issues.
It doesn't require any kludges with the modem or cables.
- Frank P.S. Also note spelling of name: "da Cruz".
Sorry. I should have double checked that. I simply mis-remembered it.
PS: I'm the same guy that wrote an article for sysadmin Magazine about C-Kermit as a replacement for telnet, rlogin et al, back in about '97
You and I corresponded several times back then.
From Robert Lynch on Sun, 20 Feb 2000
Hi-
My wife is an SCO sys admin whose business bought her a Dell pre-configured RH 6.0 box for home use. She's used to ksh on SCO, with default vi setting. She finds this broken with bash. I have tried to fix it, including installing pdksh, but no joy.
Say she types:
$ ls
$ who
then hits ESC + up arrow. Instead of moving through the command history, a "[A" character appears. I tried re-mapping the arrow keys but nada.
Please NOTE: I've been working on this for a while, and have seen several postings, to the effect "just set -o vi". Ain't this!
TIA. Bob L.
set -o vi under bash works fine for me. I've used it occasionally and I've had students use it for years.
However, vi normally uses the letters h,j,k, and l for its cursor controls. The [Up Arrow] and other keys are not traditional vi key bindings (though most modern versions of vi will also accept them if your terminal is properly configured using the appropriate TERM variable in association with a decent terminfo or termcap entry).
Can you run 'vi' (the editor) and/or emacs or xemacs from this command prompt? Is it in an xterm or from a text console? Are you (and she) accessing this through telnet? What terminal emulation is this going through?
If you can run 'vi' or 'emacs' within that terminal window or console/terminal session, then your TERM environment setting should be sufficient to support the vi keybindings used by bash, pdksh and ksh. My quick tests with pdksh, ksh '93, and bash all show that the [Up Arrow] key IS recognized by the vi key handlers in any of them. I tested this from a text console with TERM=linux, from under the 'screen' utility with TERM=screen, and from an xterm with TERM=xterm. This is all on a Debian/Potato system with all recent updates applied for all software (yesterday).
So I don't know what is wrong. You'll want to play with the 'k' (vi up key) to see if that works, and play with the TERM variable (especially if you're connecting through your net using some MS-DOS or MS-Windows telnet client). You might also want to check if you are using some unusual key mappings on your Linux console (the loadkeys command) or in your copy of X (the xmodmap command). Again, if you have vi and emacs working normally, then these should not be problems for you.
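Two of those checks can be scripted in one go (a sketch; it assumes bash is installed and on the PATH):

```shell
#!/bin/sh
# Sketch: show the terminal type in effect, then confirm that bash accepts
# and records the vi editing option.
echo "TERM is: ${TERM:-unset}"
bash -c 'set -o vi; set -o' | grep '^vi'
```

The last line should show the vi option "on"; if the TERM line shows something unexpected (or "unset"), start your troubleshooting there.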
Moving on to the larger issue of making your wife feel at home on her new system: if she's used to SCO's ksh then she might notice some of the minor differences between it and GNU bash (the default Linux shell, from the Free Software Foundation --- http://www.gnu.org). There are even differences between pdksh and ksh '93, and even more minor ones between pdksh and ksh '88.
I don't know which ksh ships with SCO UNIX nor even which version of SCOnix that your wife is using. I'll assume she's using a recent copy, and that it shipped with the most recent Korn shell.
I suppose the best advice would be to get the real Korn Shell (ksh '93) from David G. Korn's web site: http://www.kornshell.com. Just follow a couple of obvious links (Software, "Official AT&T Release...") and fill out the little form.
Note that this is NOT free for commercial use. It is also NOT open source. You can read their license agreement for all of the details.
If your use will be within those restrictions then you can get this for your wife. If she's used to the subtle nuances of ksh --- and especially if she does serious shell scripting involving associative arrays, co-processes, and floating point mathematics --- then she'll be happier with the "real thing" than she will be with bash or pdksh.
Most shell users wouldn't know the difference. However, ksh '93 (the most recent major release of the package) is the "king" of Unix command shells. It has a number of features which are unique to it (particularly in support of associative arrays).
There's a pretty good, and short, article co-authored by the creator of the Korn shell that discusses some of the features that make ksh '93 unique which was published by the Linux Journal and can be read on their web site at:
http://www2.linuxjournal.com/cgi-bin/frames.pl/lj-issues/issue27/korn.html
More information about pdksh can be found at the web site of its current maintainer, Michael Rendell at:
http://www.cs.mun.ca/~michael/pdksh
As far as I know pdksh and ksh are the only shells to support co-processes. So, if your wife uses those she'll definitely want to use one of them. Bash doesn't seem to support co-processes, yet. I think it's on their wishlist.
(For those that don't know: Co-processes work something like this:
$ bc |&
[1] 12345
$ print -p '1234 * 4567890'
$ read -p result
$ echo $result
...
$ print -p quit
$
[1] + Done bc
$
I started a copy of the "big calculator" (bc). The |& operator makes it a "co-process" --- running with its stdin (standard input file stream) connected to one pipe from ksh/pdksh and its stdout (standard output file stream) connected to another. Then I printed a value into that special co-process pipe (print -p) and read that value back out (using read -p). I can then print other transactions into this co-process and read other results from it. This allows me to have one process loaded, maintaining state, and available for work. In shells that don't support co-processes I'd have to maintain my own state, and keep re-executing this command (possibly with quite a bit of extra over-head in providing my intermediate results/state back to the new instance of the program).
So, co-processes are one of the more interesting innovations of the Korn shell. Such things are relatively easy to do from C, but ksh and pdksh are the only shells that interactively provide these at a high level shell prompt.)
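For contrast, here's a sketch of the stateless alternative described above: without |&, every transaction re-executes the calculator from scratch (shell arithmetic stands in for bc here so the sketch is self-contained and POSIX sh runs it as-is).

```shell
#!/bin/sh
# Each "transaction" pays for a fresh process and carries no state from the
# previous one -- the overhead that a co-process avoids.
calc() { echo "$(( $1 ))"; }   # stand-in for: echo "$1" | bc
result=$(calc '1234 * 4567890')
echo "$result"
```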
From Ilan Tal on Sun, 23 Jan 2000
Hi Jim,
This morning your answer on getting started with Samba was the best thing I've seen to date on Linux. It answered the questions I didn't know how to ask. I still haven't implemented it, but at least now I know which direction to go.
Glad it helped. I have no idea which comment on Samba you're referring to --- after three years and over a thousand questions answered on this column they tend to all be a blur.
I have another couple questions which you might be able to answer. I have a multiple boot machine with win95, win98, win2k and Linux 6.1. Naturally I want Linux to talk to EVERYTHING, including across the LAN (the Samba part).
I have hooked up win95 (fat16) and win98 (fat32), but each time when I boot up and look in the /mnt folder I see both icons with a red band around them (meaning that I can't open the folder).
I go into the terminal, open up a super user and do: ls -ld win98. The result I get back is d----w--w-. That explains why I can't open them. Then I do a umount win98 and immediately mount it back again. Now ls -ld win98 gives me drwxrwxr-x, and I can open the folders. Under linuxconf I have myself as the owner.
The problem is if I reboot, it again goes into the mode where I can't open the folder. My question is: what do I have to do to convince it to boot up with drwxrwxr-x and NOT d----w--w-?
Well that is odd. The initial (boot time) mount fails in a way that marks the permissions absurdly and a remount succeeds and shows sane permissions.
The diagnostics you're giving are pretty sparse. The most useful information in this case would be:
- Relevant excerpt from your /etc/fstab
- Output of the 'mount' and 'df' commands before and after the failure.
- Output of the 'lsmod' command before and after the failure.
I suspect that the problem is related to your loadable kernel modules and your kmod (kernel module autoloading subsystem). It could be that there is something subtly wrong with your start up sequence so that the vfat.o, msdos.o, and related modules are not properly loading.
Tracking down the problem could be interesting. Basically you'd read through the /etc/rc.d/* scripts to find out where kmod is activated (/etc/rc.d/rc.sysinit?) and where your filesystems are mounted (find /etc/rc.d | xargs grep "mount.*-a").
This is made somewhat more entertaining by the amount of cruft that's accumulated in the Red Hat rc scripts. Some of the other distributions have somewhat cleaner, more elegant rc scripts and some of them have it even worse. When I teach classes in Linux administration, I spend about a half day on /etc/inittab and the rc* scripts.
Read those after digesting _Learning_the_bash_Shell_ (*) by Cameron Newham and Bill Rosenblatt (O'Reilly and Associates) and you'll be well prepared to troubleshoot most Linux configuration problems.
- (http://www.oreilly.com/catalog/bash2)
A workaround might be easier than a real fix. One workaround would be to simply add the commands:
umount /mnt/win98 ; mount /mnt/win98
umount /mnt/win95 ; mount /mnt/win95
... to your /etc/rc.d/rc.local script. If that doesn't work try using something like:
( sleep 120
  umount /mnt/win98 ; mount /mnt/win98
  umount /mnt/win95 ; mount /mnt/win95
) &
This encapsulates the previous commands into a subshell (the parentheses), inserts a two minute delay at the beginning of that subshell, and runs the subshell in the background (the ampersand).
The overall effect of this would be that the troublesome mount would be left alone for a couple of minutes, and then cleared up in the background. Obviously, your win '9x filesystems wouldn't be available until after that delay.
As I say. This is a WORKAROUND. Normally installations should NOT have to do this.
Another approach would be to ensure that the modules are loaded BEFORE the mount command occurs. Usually the kmod automatic module loading system is reliable enough. However, we can always add our own 'insmod' or 'modprobe' commands to /etc/rc.d/rc.sysinit or to some other point in the rc scripts. Indeed we could even insert a custom rc invocation into the /etc/inittab file --- which would run before ALL of the other rc scripts.
Yet another approach is to link the requisite modules statically into your kernel. You don't have to do this with all of the modules you use (and that might result in kernels that are too big for LILO to use). However, you should statically link in your primary disk controller, primary filesystem, any ethernet driver and then the other drivers that you really care about. This is done by simply building a new kernel from sources.
(Which segues into one of your other questions).
The other question has to do with hooking up to win2k with NTFS. I understand that I have to recompile my kernel, which is a bit scary. My question is: is there a good source of directions on how to do this? If there is something similar to what you showed me for Samba, it would be great!
Well the usual answer for this is to head over to "The Linux Kernel HOWTO" at
http://www.linuxdoc.org/HOWTO/Kernel-HOWTO.html
... and if that seems to shoot over your head then go back to basics by reading the "Installation and Getting Started" guide by Matt Welsh et al:
- The Linux Documentation Project Works
- http://www.linuxdoc.org/docs.html#guide
... this guide is the first link in the guides section of the LDP (the Linux Documentation Project).
Of course every Linux user should start with the LDP guides and HOWTOs. Every general purpose Linux distribution should have links to the LDP http://www.linuxdoc.org web site prominently evident on their own web sites, in their documentation and installation scripts, in the sample HTML that's installed on any localhost web server, etc.
(Incidentally the Linux Gazette is a part of the LDP. This is relatively obscured by the fact that we run completely autonomously).
If the "Installation and Getting Started" doesn't seem to be quite enough then I'd suggest looking at Henry White's "Basic Linux Training" (http://basiclinux.hypermart.net/info-basic.html)
That's a highly structured online class. Although it requires registration (and a password) to access the content of this course --- the registration seems to be free. This class uses the Getting Started guide as a text book.
Another question is: where can I obtain a decent word processor for Linux? The Redhat 6.1 didn't have anything on the CD. There are all sorts of editors, and even a spread sheet, but no word processor. It is difficult for me to return to the dark ages where I had no control over the font size, or which font I was using, which are the minimum goodies for a word processor.
Thanks,
Ilan
There are several word processors for Linux. However, I should note that the whole word processor paradigm is not the only way to have control over font size and selection. It's also important to understand that word processors existed for years before GUIs and "WYSIWYG" (what you see is what you (hope to) get) were feasible.
One of the earliest uses of UNIX was document preparation and typesetting using the roff (nroff, and troff) system. This is still the format used by Linux and UNIX man pages, and some people still create and maintain their resumes in troff. A troff document consists of text with embedded "dot commands" which identify the document parts (titles, headings, sections, etc) and desired typesettings (font, face, weight, etc).
Another typesetting system which is geared for technical publications and academic work (particularly for dissertations in mathematics, physics, Klingon linguistics, etc) is Donald Knuth's TeX system (pronounced "tech" as in "tau epsilon chi"). TeX is a typesetting language. Many TeXnicians actually use the LaTeX macro package which is built around "plain" TeX. There are hundreds of packages which plug into LaTeX and dozens of TeX alternatives to it.
You can learn far more than you ever wanted to know about this approach to document preparation by browsing around the TUG (TeX Users Group) website http://www.tug.org and by perusing the CTAN (Comprehensive TeX Archive Network) website at http://www.ctan.org.
(If you need to typeset a dissertation comparing Elvish, Goblinoid, and Klingon literature, perhaps with diagrams of their respective approaches to chess, and perhaps you want to have barcode footnotes of your bibliographic references --- if you need that then CTAN is the place to go!)
I've never written anything in *roff. However I do like LaTeX, and I did use it to write my first book. It does represent a significant learning curve, but you do get very good control over the project as a whole.
So, I suggest that you look back into the "dark ages" a bit and see what arcane wisdom might be found thereby.
Now, back to your question:
The most obvious commercial and closed source choices for your word processing needs are:
- Word Perfect by Corel software
- http://www.corel.com
(I'd give a more specific URL, but they use nasty Javascript redirections to access their products listings so you'll just have to hunt for it yourself).
- Star Office by Star Division (http://www.stardivision.com), which was purchased by Sun Microsystems (http://www.sun.com) last year
- Applixware for Linux
- http://www.applix.com/applixware/linux/main.cfm
... note that you might want to wait a couple weeks for their upcoming release of 5.0
(Then again you might want to also have another word processor handy for that first couple of months after you get Applixware 5.x --- since it is a major commercial software release and a ".0"!).
I've used Applix (a couple of minor versions ago) and StarOffice. In fact I decided to install the Sun release of StarOffice 5.1 while I was writing this message. (I'd had a small stack of the free CDs that were given out at some LUG around here sitting next to my computer for about two months, and one of those was sitting in my workstation's CD tray for the last couple of weeks).
So, it's copying files now ... done.
StarOffice does install pretty easily. This workstation (canopus) is on a Debian (Potato) distribution. Here's how:
- mount the CD
- startx as root
- open an xterm
- change to the /mnt/cdrom/linux/office51 directory
- run './setup /net'
- follow the on screen instructions;
especially: select an installation path. (I used /opt/StarOffice51/)
... this creates the base system installation (about 150Mb).
Then each local user who intends to use the package must:
- login
- startx (if necessary)
- cd to a (site dependent) directory such as /opt/StarOffice51/bin
- run ./setup (without the /net option)
- follow the instructions and fill out the forms; pick a personal installation path (I used ~/.Office51/)
(Note: the CD doesn't seem to be required for the user installations).
This seems to install about 3Mb under my home directory. It seems to me that quite a bit of that could be replaced with symlinks but...
Anyway, StarOffice comes with word processing, spreadsheet, database, scheduler, presentation, and drawing programs. It also seems to have some features for accessing your Palm Pilot PDA.
Overall I think the whole StarOffice suite works O.K. but is a bit too cluttered. There are too many tool bars, icon ribbons, rulers, scroll bars, menu bars, status lines etc. Some, possibly most of that could be configured away, but it is not clear how you access those functions without having the various widgets constantly in your face and taking up valuable screen real estate.
When you start StarOffice it creates one window/frame into which all of the others must fit. Thus your document, dialog, spreadsheet and other windows don't float on your desktop co-existing with your other X applications. They are "boxed in" and visually isolated from the rest of your desktop. This isn't much of a problem to me. I tend to give each application its own virtual screen in my window manager (mostly I've been using icewm recently --- it's very modest and mostly unobtrusive). So I have no objection to sizing StarOffice to "full screen" and letting it take over almost all of my screen. I just leave my icewm menu bar at the bottom (taking up valuable screen real estate) and I use its little screen buttons to switch to my xterm screen (O.K., I cheat! I have a couple of xterms open on that one), my Navigator screen, or to my "other" (xdvi, gv, whatever) screens.
Of course one of my xterms has a copy of 'screen' (the text mode terminal multiplexer) running in it. This is running my copy of xemacs, with my mailer and most of my other editing buffers. Other screen sessions have shell prompts, root shell prompts, and my lynx sessions. I regularly detach this screen session, yanking it over to whichever terminal session I happen to be on. So it makes rounds from text console, to xterm, to ssh session as I keep re-attaching from various locations and under various modes. (Understanding how I work may help you decide that you don't want to listen to my advice about GUI programs).
Anyway, I consider StarOffice to be acceptable, and certainly no worse than what I've seen of MS Office.
There is a free alternative. StarOffice is free as in "beer", not free as in "speech" --- you can use it around the house without paying any money, but you can't get the sources (yet), modify it, or redistribute it without a license from Sun.
Abiword is the premier GPL'd GUI word processor. The current version is 0.7.7.
Since you're a Red Hat user you can download the RPM file from the AbiSource web site:
- AbiSource -- Linux / Intel Install
- http://www.abisource.com/dl_linux_intel.phtml
You can learn lots more about this project and the company that's undertaken it at:
http://www.abisource.com
Under Debian you could install the latest Abiword using the command:
apt-get install abiword
... assuming you have your /etc/apt/sources.list configured correctly. Debian will then handle all of the downloading, installation and dependency resolution for you.
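(For readers following along: a minimal /etc/apt/sources.list line for the Potato era might look like the example below. This is a hypothetical illustration; the mirror hostname is only an example, so substitute a mirror near you.)

```
# /etc/apt/sources.list -- hypothetical example
deb http://ftp.debian.org/debian potato main contrib non-free
```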
I've played with Abiword and it is pretty nice. However it is missing some fairly key pieces (such as the dialogs to control your margins and page layout). It does seem less cluttered than StarOffice's "writer", and when you use the option to turn off the "ruler" display you can still find all the options it offered on the pull-down menus. (Now if they just offered options to disable the display of the tool and menu bars, and to pop those up with the middle and "other" mouse buttons on the main text display, then we might have a really clean interface.)
One of the most aggressive projects to create a new office suite for Linux is KOffice (http://koffice.kde.org) by the KDE folks. I haven't gotten this running yet. I did try to grab the RPMs and use alien to install pre-compiled binaries in the background as I was writing this message --- but I'm missing a few libraries that it wants, so I'll have to get the sources and build it all in order to test it. That will have to happen later.
There are lots of other word processing packages for Linux. Some are under development, others are commercial, and some seem to be half-done and abandoned projects (like Maxwell, and Papyrus). Here are a couple of URLs to browse some of them:
- [freshmeat] X11/Office Applications
- http://www.freshmeat.net/appindex/x11/office%20applications.html
- Christopher B. Browne's Linux Word Processing
- http://www.hex.net/~cbbrowne/wp.html
- Goob:Software:Office
- http://www.linuxlinks.com/Software/Office
(This last one appears as "Goob" in my book marks file --- so I think that is a historical name for the site).
[ Yes, it used to be called "Linux Links by Goob!" and I always assumed it started life as his personal bookmarks.
-- Heather ]
That should keep you busy for awhile.
From Ilan Tal on Sun, 30 Jan 2000
Hi Jim,
The diagnostics are pretty sparse because I don't know what I'm doing!
I'm coming from the Windows world, but I'm in unfamiliar territory with Linux.
Don't feel bad. When I was supporting MS Windows users full time I often got sketchy symptomology.
The thing that makes it more frustrating these days is that I'm doing it via e-mail. So the questions I want to ask and the data I need to gather in order to treat the problem involve long delays in both directions.
As Linux increases in popularity there will be many people in the same boat with you. I'll have to refer more people to interactive phone support (which is NOT free). Sometimes that will be the only sane approach.
It will take me a few days to digest your letter. I will look into the problem and see if I can solve it, with the hopes of learning something along the way. If I can't solve it, then I'll go for a work around. In any case, I hope to learn something in the process - and maybe as a result next time my diagnostics won't be quite so sparse.
I VERY much appreciate all the effort you put into your answer. Now I've got to digest it by trying out all the things you suggested.
Thanks again, Ilan
Many people feel a bit overwhelmed by my responses. I'm used to it. Obviously you are motivated and interested in learning more.
From Ilan Tal on Sun, 30 Jan 2000
Hi Jim,
I don't want to bother you without doing my homework, but I have run into a couple of questions which I just don't know how to solve. You answered my question about the weird file permissions when I log on, which become normal when I umount and then mount again.
First of all, my fstab is:
/dev/hda6    /            ext2     defaults          1 1
/dev/cdrom   /mnt/cdrom   iso9660  noauto,owner,ro   0 0
/dev/hda5    swap         swap     defaults          0 0
/dev/fd0     /mnt/floppy  ext2     noauto,owner      0 0
none         /proc        proc     defaults          0 0
none         /dev/pts     devpts   gid=5,mode=620    0 0
/dev/hda4    /mnt/win98   vfat     exec,dev,suid,rw,conv=auto,uid=500,gid=500,umask=755  1 1
/dev/hda1    /mnt/win95   msdos    exec,dev,suid,rw,conv=auto,uid=500,gid=500,umask=755  1 1
Doh! [self: slaps forehead!]
I should have guessed!
Your problem is with those UMASK settings. I see what you intended, but that's not how the UMASK works. UMASK is a list of the bits to strip OFF (mask away from) the default 666 file creation permissions (777 for directories).
So a UMASK of 022 is actually what you want!
Basically consider each of the digits in the permissions set to be an octal value (three binary bits). To get the sequence rwx you have to use 4 + 2 + 1 (or 1 * 2 ^ 2 + 1 * 2 ^ 1 + 1 * 2 ^ 0 if you were to look at the binary exponents). 777 in octal is 111 111 111, and 022 is 000 010 010. The complement of your UMASK (each bit is inverted) is 111 101 101. If you and that against your default permissions (777 for directories and 666 for files) then you "strip out" the write permissions for groups and "others."
I know that this is confusing. It's one of those things that makes perfect sense to the programmers who used UNIX early on; and is something that many sysadmins and UNIX users just learn (umasks of 022 or 027 are good, most others are bad) without really understanding the underlying bit manipulations.
The easy way to calculate the effects of a umask is to subtract the desired permissions (755) from 777. That you can do in your head: 777 - 755 = 022.
Anyway, just change those 755's to 022 and everything should be fine.
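If you want to check the arithmetic yourself, the shell can do it for you; this is a small sketch using shell arithmetic with octal literals:

```shell
# The umask strips bits OFF the default creation mode:
#   resulting mode = default & ~umask
# Directories default to 0777 (files to 0666), so a 022 umask
# yields 755 directories:
default=0777
mask=0022
printf 'resulting directory mode: %o\n' "$(( default & ~mask ))"
```

Running it prints `resulting directory mode: 755`, i.e. rwxr-xr-x, exactly the permissions intended for those mount points.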
The other thing which I noticed about the ls -ld win98 with d----w-w- is that the owner is ilan. After I umount and mount again the owner is root. Your message actually gave me a very important clue. If, in fact, I do a simple
umount /mnt/win98; mount /mnt/win98
Of course that wouldn't work. The mount command has been working right all along. It's doing exactly what the options in the fstab are saying.
then nothing good happens! I am still stuck with the same garbage permissions. However if I mount as: mount /dev/hda4 /mnt/win98, THEN the permissions are OK (and the owner is root, not ilan).
Of course. When you specify the device and the mount point then the 'mount' command doesn't perform a lookup in the /etc/fstab. It just mounts the filesystem according to your arguments and its compiled in defaults.
I tried to extend your command (find /etc/rc.d | xargs grep "mount. *-a") to something to find the mount win98 string, but I couldn't make it work correctly. What I tried was find /etc | xargs grep "mount. win98", but aside from telling me that every entry was a directory, I didn't get anything useful. Thus I don't know where or how it mounts win98.
While I'm at such questions, I tried sending you the output of the commands (like ls -ld win98). What I found for redirection was rather complicated:
(command) 2>&1 | tee (output file)
I did all that, but each new command erased the output file. In the DOS days, I used to use (command) > (output file), or (command) >> (output file) to append to an existing file. I'm sure there must be something similar in Linux.
You can use the >> operator under Linux. (Actually the operators are part of the syntax of your shell, bash, or tcsh. Theoretically someone could write a UNIX/Linux shell that used completely different syntax).
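A quick sketch of the difference (the file name is just an example); `tee -a` is the appending form of `tee`:

```shell
out=/tmp/capture.txt
: > "$out"                      # truncate the file, starting fresh
echo "first"  >> "$out"         # >> appends (a single > would overwrite)
echo "second" 2>&1 | tee -a "$out" >/dev/null   # tee -a appends too
cat "$out"                      # shows both lines, in order
```

So in the pipeline you were using, `tee -a` in place of `tee` would have preserved the output of each earlier command.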
Finally I tried recompiling the kernel to accommodate NTFS (read only). Your suggested source helped a lot in adding information to what I had from the official Linux documentation. Still it keeps coming back at me, at the compile stage, telling me the kernel is too big and it can't compile it.
Use the 'make bzImage' option rather than the 'make zImage' choice. This is a bit of a historical artifact.
You can read the Linux Kernel Mailing List (LKML) FAQ at Tux.org (http://www.tux.org/lkml) for more on that.
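For reference, a sketch of the usual 2.2-era build sequence (the source path and image name are typical but adjust them for your system; LILO users must re-run lilo after copying in a new kernel):

```shell
cd /usr/src/linux
make menuconfig                # enable NTFS (read-only) under filesystems
make dep clean
make bzImage                   # bzImage avoids the zImage size limit
make modules modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-ntfs   # the name is an example
# ... add a stanza for /boot/vmlinuz-ntfs to /etc/lilo.conf, then:
lilo
```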
I suspect that this is a red herring, so I simply had to ask your opinion. I have 926 Mbyte on my Linux partition, of which no less than 95% is in use! This blew my mind, and I have 41 Mbyte free. I suspect that the real problem may be that the disk space just isn't big enough.
No. That's not a problem. You should clear up some space, but that's not what's causing the symptoms you've described to me.
If this be the case, either I have to somehow exploit my win98 disk, which has loads of free space, or I need to figure out what I can dump from my Linux partition.
You can symlink your home and /usr/src/linux directories into directories on one of your MS Windows partitions, though you should use the uvfat filesystem type rather than just vfat. That will impose some UNIX/Linux semantics on your MS FAT filesystems using some hidden files that the Linux uvfat filesystem driver will transparently create and maintain for you.
You might want to hunt around for information on uvfat and its predecessor/ancestor UMSDOS. I haven't used these in years and haven't read about them recently.
Another very useful suggestion in your letter was to sign up for the Linux course. This I did yesterday. Hopefully it may fill in some of the huge holes in my background. Knowing Windows just isn't enough, not by a long shot!
I just saw an interesting link to Linux training providers at:
- LinTraning:
- http://www.lintraining.com
... which is currently just a redirection to:
- LinSight: Training
- http://training.linsight.com
I'm just mentioning that for the benefit of my other readers, though you can check it out if you want to shop around.
I thought about referring you to Linuxcare (which does do some training). However, I see from our web site that we don't currently have any classes for individuals scheduled. Our focus at Linuxcare is more on training the instructors of the training facilities, and on corporate training. So we only occasionally offer classes to the general public.
(Many of the facilities listed in LinTraining are coming to Linuxcare for their materials).
Thanks, Ilan
From Ilan Tal on Mon, 21 Feb 2000
Hi Jim,
It took me a couple of days before I could get rid of my Windows obligations and return to the fun stuff of Linux. In any case I must report to you that YOU are right and the Linux documentation is WRONG. I am referring to your letter:
>>> /dev/hda4 /mnt/win98 vfat
>>>     exec,dev,suid,rw,conv=auto,uid=500,gid=500,umask=755 1 1
>>> /dev/hda1 /mnt/win95 msdos
>>>     exec,dev,suid,rw,conv=auto,uid=500,gid=500,umask=755 1 1
Doh! [self: slaps forehead!]
I should have guessed!
Your problem is with those UMASK settings. I see what you intended, but that's not how the UMASK works. UMASK is a list of the bits to strip OFF (mask away from) the default 666 file creation permissions (777 for directories).
Where in hell would I know what permissions to use? I must have read it somewhere, because I know NOTHING. First of all I took your suggestion, went into linuxconf and changed 755 to 22. That solved my problem COMPLETELY! No more problems with logging on, IT WORKS!
The next problem was to figure out where I got the bum steer. I went to the Linux documentation, "Getting started". I wanted to get started, right? There is a section about mounting Windows devices. That is what I needed, right? Well it says to use 755, so that is what I did. I didn't know nothin' about any UMASK, and I didn't change any UMASK. I took YOUR suggestion (in linuxconf) about using 22, and guess what? It works!
Someone should either fix linuxconf, or they should fix the documentation. One of them is simply wrong. Without your telling me where the problem was, I didn't have the chance of a snowball in hell of finding it.
Thanks, Ilan
Hmm. If you could find the passage in "Installation and Getting Started" to which you are referring, we could look up the current maintainer of that LDP guide and suggest a clarification.
Matt Welsh, the original author of GS, knows at least as much about UNIX as I do (probably more). So if the passage was incorrect, it was probably an accident in the wording, or something confusing about the sentence structure. I presume that he was trying to say that you should set your umask value such that the resulting directories end up with mode 755.
As for linuxconf, don't get me started about it. I've tried it a couple of times and it just does things WRONG! I refuse to use it on my systems. I'd like a mode in linuxconf that would just "edit the files" and show me what needs to be put where. In other words it would be nice if it had interactive help, and forms to put the right stuff into the right places in things like zone, hosts, passwd, and other files. Some of these conf files can be pretty picky.
(I once let crackers create directories on a public FTP site and upload "warez" --- pirated software --- all because I had a space following a comma in a WU-FTPd ftpaccess file; I was saved by some other settings that prevented them and their ilk from getting back in to retrieve the warez --- but that's still a slightly spooky experience).
I suppose I should quit my kvetching and get in there to fix it, or to help with the COAS (Caldera Open Admin System). One problem with UNIX and Linux is that those of us who get good at managing the system with text conf. files have little interest in or incentive to make easier interfaces for those who don't know that a man page that specifies a syntax of "opt1,opt2,opt3..." really MEANS that there should be no spaces after those commas, etc.
Anyway, if you can find the places that lead you astray, please feel free to e-mail the maintainers of those documents and packages (linuxconf included) and let them know.
From Paul Todd on Sun, 30 Jan 2000
I have been sent a document with a .max extension. What software do I need to be able to open and print it?
Regards Paul Todd
I have no idea. If you look at it with a text/hex editor, is there any sort of hint among the gibberish?
I'd bounce a message to your correspondent and ask that he or she provide the attachment in a format you can support or point you to the necessary tools to use it.
(.max is not any sort of standard or convention I've ever heard of).
From Paul Todd on Mon, 31 Jan 2000
Hi Jim, Thanks for your note. The person who sent me the file thought it was a "Paperport" file. If I use Wordpad to look at it, it is just gibberish - random characters. If I use something like Quickview I get the same, but it is possible to see the odd word.
From Paul Todd on Mon, 31 Jan 2000
Jim, I found a viewer for Paperport files on the web and sure enough .max files are Paperport format and having downloaded the viewer I have printed the manual out.
Sorry to have wasted your time to some extent, but at least you know what .max files are!!!!
regards
Paul
From Zombewolf on Mon, 31 Jan 2000
dear answer guy
I'm a keep-it-simple, I-don't-like-to-waste-time-on-computers kind of guy. I do very little with a computer because the learning curve is too high and software changes versions far too frequently to allow me to use what I just learned.
In 1993 I edited many films, commercials, and infomercials on AVID proprietary Mac-based machines. Now I'm an acupuncturist with minimal computer usage. I draft bills and do some research. Some day, when I get rich, I'll make films again (maybe in 5 or 10 years).
That's an interesting background.
I HATE MICROSOFT AND IT'S FASCIST WINDOWS PRODUCTS.
This sentiment seems a mite strong for a casual computer user. Usually it's the professionals who spend 80 hours a week working on this stuff that have such seething vehemence.
I'll buy a computer in March - can I do well with linux or am I the wrong profile?
I hate to see some one driven so much by hatred. (O.K. I know that sounded funny, but I mean it).
I'd rather you use Linux because it offered you what you wanted, rather than out of unadulterated revulsion.
How do you feel about MacOS?
It sounds like Linux or MacOS might fill the bill for you. However, it might be a bit of a trick to find Linux applications that meet your needs and user interface preferences.
The GNOME project is working on GNOfin and GNUcash for personal accounting. However I doubt those are ready for use as general purpose accounting and billing applications. There is the AppGen commercial accounting package for Linux (*) which might be overkill for your needs.
- (http://www.appgen.com/linux.html)
Unfortunately there isn't a "Quickbooks" for Linux, yet. Here's one place to look for some of the financial applications that exist for Linux:
- LinuxStart
- http://www.linuxstart.com/applications/finance.html
... and a couple of my favorites:
- Christopher B. Browne's
- http://www.hex.net/~cbbrowne/finances.html
... and
- Linas Vepstas
- http://linas.org/linux
I don't know anyone who uses the OS and want to know how to make an informed decision.
Thank You William Daniels
I see by your phone number that you're in the 415 area code. (As always I've removed all addressing and personal information beyond your name from the quoted portions of your e-mail; I don't let those get published in my column).
You're in the perfect location to visit Linux user's groups to meet lots of people who use Linux. You can visit SVLUG (http://www.svlug.org) and BALUG (http://www.balug.org) to find out the details about when and where they have their meetings.
SVLUG meets next Wednesday (first Wednesday of every month). The February meeting will probably be a bit less well attended than usual --- many bay area Linux users will be in New York at the LinuxWorld Expo. However, there will probably still be over 100 people there. (Their meetings are usually twice that).
BALUG is held on the third Thursday of the month at the Four Seas restaurant in San Francisco's chinatown.
You can find out more than you'd ever want to know about Bay Area (and Silicon Valley) Linux events at http://www.linuxmafia.com/bale
(Note: Installfests are events where Linux geeks like me get together and help new users install Linux onto their systems and configure them for their use. That's one of the nice benefits of using a free operating system --- we can help one another without concerns about software piracy interfering with our fun).
From Tim Moss on Sun, 20 Feb 2000
Jim Dennis wrote:
BALUG is held on the third Thursday of the month at the Four Seas restaurant in San Francisco's chinatown.
Tim Moss says:
Shouldn't that be 3rd Tuesday?
Yep! 3rd Thursday is BayLISA (Bay Area Large Installation Systems Administration: a predecessor to SAGE, the SysAdmin's Guild).
I sometimes get them confused, though my PalmPilot reminds me of the right meeting dates. I have a pair of periodic reminders for each (previous day and same day).
That's two you've caught me on this month. Want a job as "assistant editor"?
(The pay is non-existent, but we have the benefits to match!)
[ Tight deadlines R us! For those who may be wondering who this fellow is, he's a work colleague of Jim's who has had a chance to see some of these ahead of time. I do attempt to clean up little details like that during processing.
-- Heather ]
From Hongwei Li on Mon, 31 Jan 2000
Hi Jim,
Thank you very much! I did the following as you advised, but still failed:
Hi,
I recently installed RedHat 6.0 and 6.1 on two machines,
respectively. Everything looks working except that users can not
access their e-mail accounts on these two servers from PC Windows
using Netscape Mail or MS Outlook Express although they can access
the e-mails using telnet, pine. Apparently, POP3 daemon is not
working on these two RH Linux boxes.
Somebody said I can retrieve POP3 daemon from IMAP package. But,
I don't know where it is and how to do it. Could you help me?
How to check the system if POP3 daemon is installed and working?
Where can I get IMAP package and retrieve POP3 daemon, then
installed it and let it run? or should I get something else?
I would greatly appreciate your help!
Hongwei Li
Your "somebody" is a smart cookie. You sould listen to him or her. However, you might ask him (or her) for a wee bit more detail.
-- He/she sent me the first advice, but didn't explain further after I asked in more detail. So, I could not do anything until I received your help.
Red Hat inexplicably puts their POP and IMAP daemons in the same package. You can install them using something like the following procedure:
1) Mount your RH Linux CD (disc #1?) using a command like:
mount /mnt/cdrom
-- I did it.
2) Go to RPM directory using something like:
cd /mnt/cdrom/Redhat/RPMS
-- then, this.
3) Install the imap....rpm package using a command like:
rpm -Uvh imap*
-- then this as:
rpm -Uvh imap-4.5.3.i386.rpm (on a RH 6.0 system)
the screen shows:
imap ##....#
... or:
rpm -i imap*
That's basically all there is to it. You can test for POP installation/accessibility using a command like:
telnet $TARGET 110
... where $TARGET is replaced with the hostname or IP address of the system on which you hope to find a POP server.
If you get a "connection refused" or a "connection closed by remote host" then you don't have POP installed properly on the $TARGET system (or you have a firewall, packet filter, or /etc/hosts.deny rule between your client and the server).
-- then I try this:
telnet elyback.wustl.edu 110

from that machine (elyback) and another Linux system, but got:
Trying 128.252.85.78...
telnet: Unable to connect to remote host: Connection refused
It sounds like you have a more basic networking problem. Are you sure that you have your IP addresses and routes set up correctly?
Then I reboot the machine (maybe I don't need to reboot, but should enter some other command instead?), try it again, and still get the same message from that machine and from another system. I check the /etc/services file; it shows
pop-3   110/tcp     # POP version 3
pop-3   110/udp
and the /etc/hosts.deny file is empty. We don't have firewall.
It's good that you checked that. Actually it's possible to put deny rules in the /etc/hosts.allow file (or vice versa). I once asked Wietse why he didn't just change those to a single /etc/tcpd.conf instead of having two different files whose names are obviously derived from the name of the utility that references them (/sbin/tcpd).
Remember to ensure that you have valid /etc/hosts entries for your two systems. Do a search on my FAQ or in the Linux Gazette Archives on the string:
"double reverse lookup"
... for some long explanations on why this is important.
(My first guess would be that you don't have proper /etc/hosts or DNS PTR records for these, and that your copy of TCP Wrappers may be configured (compiled) with the -DPARANOID option. Possibly that your /etc/hosts.allow has a PARANOID directive in it).
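One way to test that guess without a live connection is the tcpdmatch utility that ships with TCP Wrappers (the host name and address here are the ones from the message above; substitute your own):

```shell
# Predict what tcpd would do for this client, daemon first, client second:
tcpdmatch ipop3d elyback.wustl.edu

# And confirm that forward and reverse DNS agree with each other:
host elyback.wustl.edu
host 128.252.85.78
```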
So, it seems that the pop3 daemon is still not working. Is there any other way to check if it is installed and running after I did the above things?
Could you give me more advice? Thank you!
Hongwei
The fastest way to get an answer would be to call Linuxcare's tech support number. However, that is not free. You could keep trying to get me enough information so that I could find the answer --- but I'm sure you understand that this might take a long time (I'll be in New York all next week, and in Arizona the week after that).
So, if you need this quickly, and are willing to pay a little bit to get some handholding consider calling 888-LIN-GURU. Otherwise I'll need to see the output from the following commands:
script /tmp/answerguy.capture
ifconfig -a
route -n
netstat -an --inet | grep LISTEN
tail /var/log/messages
exit
cd /tmp
col -b < answerguy.capture > answerguy.txt
If you do that correctly you should see a message like: "Script done, file is /tmp/answerguy.capture" and you should find a reasonably clean copy of the captured information in /tmp/answerguy.txt
(The 'script' or "typescript" command is what students use to capture their interactive sessions to files, so they can print their homework assignments. The col -b command "collates out" the backspaces and other control characters that might have been captured along with the text. The other commands are diagnostics and information gathering, ended by the exit command to end the typescript session).
I'd need these for both of the machines involved.
Also see if each system can "ping" the other, and try running the command:
tcpdump -n
... on the server while you are trying to access the POP service. You should be able to see the packet headers that tcpdump "sees" as the connection attempts are made.
As I say, if you decide to stick with me it could be a few weeks before I get back to you (two weeks out of town, and more time to get caught up with my e-mail backlog after that).
Of course you can also post this to netnews (comp.os.linux.networking) or you could subscribe to L.U.S.T. (Linux Users Support Team) which has a web page at:
- L.U.S.T. Home Page:
- http://www.ch4549.org/lust
... you can find a list of other support options at:
- Netpedia Linux: Support
- http://smalllinux.netpedia.net/links/support.html
From Tim Moss on Sun, 20 Feb 2000
Tim Moss commented on one reader's apparently unresolved problem:
Have him uncomment pop-3 in his /etc/inetd.conf. I believe it is commented out by default in current Red Hat distros.
Hongwei wrote:
Hi Jim,
Thank you very much! I did the following as you advised, but still failed:
Installing a POP Daemon on Red Hat Linux
Of course I should have added a "check your inetd.conf" check to my instructions.
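Here is a sketch of the fix, rehearsed on a sample copy first so you can see the effect before touching the real /etc/inetd.conf (the daemon line shown is typical of Red Hat; check your own file for the exact wording):

```shell
# A sample with the pop-3 service commented out, as shipped:
cat > /tmp/inetd.sample <<'EOF'
#pop-3 stream tcp nowait root /usr/sbin/tcpd ipop3d
EOF

# Strip the leading '#' so inetd will start the service:
sed 's/^#pop-3/pop-3/' /tmp/inetd.sample
```

After making the same edit to the real /etc/inetd.conf, tell inetd to re-read it with `killall -HUP inetd`.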
I have mixed feelings about this change to Red Hat's default /etc/inetd.conf.
On the one hand I applaud the advance towards better security. It's long been a problem in the UNIX world that companies leave services enabled and insecurely configured in their call avoidance efforts. Doing the "right thing" can often result in getting a very large number of technical support calls, which translates to EXPENSE for the commercial software vendor.
So it's nice that Red Hat is in a business where they can fix problems like this and not worry about the consequences.
On the other hand I think that it is absurd that they haven't enhanced their RPM's and package management to resolve the issue of configuring (and re-configuring) packages as they are installed and/or after the fact.
This is one of those respects in which I prefer Debian. If I install a Debian POP server it makes sure that the inetd.conf is configured to use it. It might ask me if I want to add a hosts.allow entry to limit which hosts are allowed to access this service.
Of course Red Hat couldn't simply adopt the Debian strategy. The Red Hat distributions are geared for an "install everything" approach. When I try to make an initial "minimal" installation in Red Hat I find that later efforts to add packages "as needed" are fraught with trips into "dependency hell."
By contrast Debian excels at the minimal installation. Later addition (and removal) of packages is more robust than I've seen under any other OS. Dependencies and conflicts are handled (mostly automatically).
(At the same time Debian has room for improvement as well. I'll save my choice comments for a review of their next major release).
(Incidentally, any reader that writes to suggest using Linuxconf will get an e-raspberry! I won't even comment on my experiences with Linuxconf. Ugh!).
On my KDE desktop, I have semi-live satellite images on several of the pages. I used cron and wget to get the latest weather for my desktop background. Here's how. First I made a couple of directories
mkdir ~/weather ~/bin/cron

Then in the file ~/bin/cron/hourly.job I did this:
#!/bin/sh
W=$HOME/weather
wget http://www.cis.ec.gc.ca/goes/huron_f.jpg \
    -O $W/goes_huron_f.jpg
wget http://www.cis.ec.gc.ca/goes/gulf_f.jpg \
    -O $W/goes_gulf_f.jpg
wget http://sgiot2.wwb.noaa.gov/COASTWATCH/GOES/G8CWNEVS.GIF \
    -O $W/G8CWNEVS.GIF
Then, I made it executable
chmod +x ~/bin/cron/hourly.job

and set up a crontab entry like this, using

crontab -e

and creating this line:

10 * * * * $HOME/bin/cron/hourly.job >> /tmp/out.h 2>&1

then I saved the file and exited.
I used 10 minutes after the hour to avoid a race on the hour. Finally, you're ready. Give a test with the command:
~/bin/cron/hourly.job

Did it work? If so, let's set up the window manager. If not, you may not have wget installed. (Note- you can snag wget at ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/)
Setting up your WM:
Here's a shell script to do the opposite of kfm's File|Open Terminal.
#!/bin/sh
# Opens a kfm window in the current directory.
kfmclient openURL `pwd`
I put it in /opt/kde/bin/kfmhere .
Funny thing; I was just about to post this tip when I read Matt Willis' "HOWTO searching script" in LG45. Still, this script is a good bit more flexible (allows diving into subdirectories, actually displays the HOWTO or the document whether .gz or .html or whatever format, etc.), uses the Bash shell instead of csh (well, _I_ see it as an advantage ...), and reads the entire /usr/doc hierarchy - perfect for those times when the man page isn't quite enough. I find myself using it about as often as I do the 'man' command.
You will need the Midnight Commander on your system to take advantage of this (in my opinion, one of the top three apps ever written for the Linux console). I also find that it is at its best when used under X-windows, as this allows the use of GhostView, xdvi, and all the other nifty tools that aren't available on the console.
To use it, type (for example)
doc xl
and press Enter. The script will respond with a menu of all the /usr/doc subdirs beginning with 'xl' prefixed by menu numbers; simply select the number for the directory that you want, and the script will switch to that directory and present you with another menu. Whenever your selection is an actual file, MC will open it in the appropriate manner - and when you exit that view of it, you'll be presented with the menu again. To quit the script, press 'Ctrl-C'.
A couple of built-in minor features (read: 'bugs') - if given a nonsense number as a selection, 'doc' will drop you into your home directory. Simply 'Ctrl-C' to get out and try again. Also, for at least one directory in '/usr/doc' (the 'gimp-manual/html') there is simply not enough scroll-back buffer to see all the menu-items (526 of them!). I'm afraid that you'll simply have to switch there and look around; fortunately, MC makes that relatively easy!
Oh, one more MC tip. If you define the 'CDPATH' variable in your .bash_profile and make '/usr/doc' one of the entries in it, you'll be able to switch to any directory in that hierarchy by simply typing 'cd <first_few_letters_of_dir_name>' and pressing the Tab key for completion. Just like using 'doc', in some ways...
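A sketch of the CDPATH trick in action with a throwaway directory (the paths are examples; in real use the variable would go in your .bash_profile, e.g. CDPATH=.:/usr/doc):

```shell
mkdir -p /tmp/docdemo/gimp-manual
CDPATH=/tmp/docdemo
# cd consults CDPATH for relative names; it echoes the resolved
# path when a CDPATH entry was used, which we silence here:
cd gimp-manual >/dev/null
pwd                            # now in /tmp/docdemo/gimp-manual
```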
Hope this is of help.
In a recent two cent tip there was a suggestion to use NT loader to triple boot, I have another method use lilo.
Install all three OS'es, the order I did it, Win95(FAT32), NT4(NTFS) then linux (ext2).
Edit lilo.conf and add the following sections:
# Windows 95 stanza
other=/dev/hda1
    table=/dev/hda
    label=windows
# End Windows 95 stanza

# Windows NT stanza
other=/dev/hda2
    table=/dev/hda
    loader=/boot/os2_d.b
    label=NT
# End Windows NT stanza
Then run lilo, and you can select between the three of them. If you select NT, it then comes up with the NT loader. I have set that to appear for 0 seconds so that it starts straight away instead of presenting a menu.
Hi Mike!
In LinuxGazette #50 you wrote:
I'm a new user and believer in the Linux OS and I need help badly. I'm looking for a driver for an ATI Xpert@Work 8Mb PCI card. Where can I get it? I'm using RedHat 5.2 and my monitor is a Mitsubishi Diamond Scan model FA3415AT4 [...]

Configure your display with the help of 'XF86Setup' (you have to write it as I do, with upper- and lower-case letters) or, if it doesn't run, the 'xf86config' program. Try to find your ATI card, and if you don't, simply use SVGA. Most cards which are not listed are standard SVGA cards (my Matrox Millenium G200 also), and they run very well with the SVGA driver.
Overclocking a Motherboard? 233MHz to 450MHz?
From Steve on Fri, 07 Jan 2000
I have a motherboard in my PC that says it has a max processor speed of 233MHz; I didn't know this when I recently purchased a 450MHz CPU. Is there any way I can override this issue? My max bus speed is 66.
I wouldn't recommend it. Bare motherboards are not very expensive, so you should be able to pick up one for about $100 or less that will take the CPU you got and still allow you to pull in your old DIMMs and adapter cards from the existing system.
Of course I'd double check the RAM (DIMMs, SIMMs or whatever you've got) and make sure it is fast enough that you aren't wasting the investment in the new CPU.
Personally I don't recommend CPU upgrades at all. I suggest that people buy the CPU, motherboard, and RAM together, and get them matched to one another. Also it is quite unlikely that you applications are actually CPU bound. Your CPU investment would probably have been better spent in extra RAM or a faster controller.
As you say, your current motherboard's bus speed is only 66MHz. You probably want a motherboard that runs closer to 100MHz (with at least PC100 RAM). You don't want the CPU sitting there waiting for its own RAM all the time.
Sometimes it can be done! I just managed to get my old DFI 586STC Rev D+ motherboard to run an AMD k6-2/400.
The jumper settings for the CPU voltage only went down to 2.5V; the K6-2 needs 2.2V. There are three jumpers used to set the voltage. I noticed a binary pattern to the settings, suggesting that there are undocumented settings for 2.4-2.0V. Following this pattern, I set the jumpers for what should be 2.2V. I installed a K6/300 and turned on the computer. It fired up fine. I let it run a while and the CPU stayed cool.

I now had a K6/300 running, but the BIOS was saying it was an MMX processor at 50MHz. It was actually running at 300MHz: BogoMIPS was 597, which is what it should be for a K6/300. The K6-2/400 will run at 6x (66MHz times 6 is 396MHz) when the multiplier jumpers are set to 2x. I tried that and it was no go; she wouldn't boot.

I put the K6/300 back in and reset the multiplier jumpers to 4.5x. I went to DFI's web page and downloaded the latest BIOS. I flashed the new BIOS and, success, the CPU was now recognized as a K6/300. So I put the K6-2/400 back in and set the jumpers back to 2x. Success again! My old DFI motherboard is now running at 400MHz.

I have SuSE 6.3 on it and it seems rock stable. With a K6-2/400 selling for under $50 I don't see this as a bad day's work. However, I WAS prepared to suffer the consequences if this project had failed! The motherboard was old, retired from service, and failure wouldn't have been that great a loss.
--
Rick in Tampa
These are not strictly linux questions, but I'll answer them anyway. :)
On Sun, 16 Jan 2000, Wei Huang wrote (in issue50):
sir, I need some help.
Question1,
I wrote a program to show the contents of another program on the Linux platform; the compiler is g++, but I got problems in statement 1 and statement 2.
First, I want to show the sentence in statement 1 ahead of the body of the file being shown, but it doesn't appear in the proper position. I mean, I want the program to run like this:
This is caused by buffering being done in the C++ stream library. To get the result you desire, you just need to flush the stream before writing directly to stdout. In other words, just put the statement "cout.flush();" right after statement 1.
I don't know why; how can it be corrected? Second, I had believed that statement 2 is grammatically illegal. If you define an array, char buffer[BufferSize], the BufferSize indicates the number of elements in the array and should be a constant, or at least a const variable; it can't be an ordinary variable, otherwise the program can't be compiled. But the fact is this program compiles smoothly and functions normally. What is wrong?
ANSI C++ does not allow this, but g++ does. If you run g++ with the "-pedantic" flag, it will issue a warning about that statement.
Question 2: I wrote a C program, but when compiling, gcc reports that all the functions with the "vga_" prefix are undefined. What is the problem? I had included "vga.h" in my program as indicated in the "man" help on the usage of each "vga_" function.
You need to link with the vga library. Just include the flag "-lvga" when running g++.
Wei Huang wrote an email asking:
I want to show the sentence in statement 1 ahead of the body
The two statements in your code:
cout<<"\nThis is the body of file.\n";

and

write(1,buffer,sizeof(buffer));
are using two different methods of writing to the terminal.
The C++ statement "cout" uses a buffer inside your program to collect individual output statements into a larger block. When your program writes "\nThis is the body of file.\n" the first time, it goes into the buffer. When it writes the same line again, it gets added to the buffer. Finally, when your program ends (or if the buffer gets full) then the contents of the buffer are sent to the screen.
The kernel call "write" has very different behavior: it is the same low-level call that cout itself eventually uses to flush its buffer to the screen, but it does no buffering of its own. When you call "write", your program does not continue until the data has been written.
Since the "cout" library call buffers data inside your program and the "write" kernel call does not, then the strings you "write" will usually arrive on your screen before the strings you "cout".
The solution is to use one method for writing strings throughout your program rather than mix different methods. For almost all programs, "cout" is a better choice than "write". Do some more reading about the C/C++ library. A good place to start is your Linux system, where you can type `info libc`.
In your second question:
gcc reports that all the functions with the "vga_" prefix are undefined,
the message "undefined reference to ..." comes from the linking phase of the compile. That is, the compiler has already translated your code into machine language, and is now trying to find the library routines needed to run the program. Vga programming is not a standard thing, so you have to tell the compiler which library to look into. Add the compiler option "-lvga" to your command line.
On Fri, 28 Jan 2000, oliver cameron wrote:
I am running RH 4.2 and I need to convert my existing etc/passwords file to the shadow passwords format used on a new RH 6.1 installation. Can anyone give me a simple explanation of how to do this? Any help would be greatly appreciated. Many thanks in advance, Oliver.
Redhat 6.1 comes with utilities for converting to and from shadow files.
"man pwconv" should give you a description.
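pwconv itself must be run as root and rewrites the real /etc/passwd, so here is a harmless illustration of what the conversion changes, using made-up sample entries: after conversion, the password field of each passwd entry is just "x", and the real hash lives in the root-readable /etc/shadow.

```shell
# Two invented /etc/passwd entries: one already shadowed, one not.
printf '%s\n' \
  'alice:x:1000:1000::/home/alice:/bin/sh' \
  'bob:Ep6mckrOLChF.:1001:1001::/home/bob:/bin/sh' > /tmp/passwd.sample

# Field 2 == "x" means the hash has moved to /etc/shadow:
awk -F: '{ print $1, ($2 == "x") ? "shadowed" : "not-shadowed" }' /tmp/passwd.sample
```

The real conversion on Red Hat 6.1 is just "pwconv" as root; "pwunconv" merges the hashes back into a non-shadowed /etc/passwd.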
I am not familiar with Norton Ghost; however I have been successfully dual booting NT 4 and versions of linux (currently Redhat 6.0) for the past year.
First let me refer you to the excellent article on multibooting by Tom de Blende in issue 47 of LG. Note step 17. "The tricky part is configuring Lilo. You must keep Lilo OUT OF THE MBR! The mbr is reserved for NT. If you'd install Lilo in your mbr, NT won't boot anymore".
As your requirements are quite modest, they can easily be accomplished without any third-party software such as "Bootpart".
If NT is on a Fat partition then install MSdos and use the NT loader floppy disks to repair the startup environment. If NT is on an NTFS partition then you will need a Fat partition to load MSdos. Either way you should get to a stage where you can use NT's boot manager to select between NT and MSdos.
Boot into dos and from the dos prompt: "copy bootsect.dos *.lux".
Use attrib to remove attributes from boot.ini "attrib -s -h -r boot.ini" and edit the boot.ini file; after a line similar to C:\bootsect.dos="MS-DOS v6.22" add the line C:\bootsect.lux="Redhat Linux".
Save the edited file and replace the attributes.
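For reference, the resulting boot.ini should look roughly like this (a sketch; the ARC paths and descriptions on your system will differ):

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Workstation Version 4.00"
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Workstation Version 4.00 [VGA mode]" /basevideo /sos
C:\bootsect.dos="MS-DOS v6.22"
C:\bootsect.lux="Redhat Linux"
```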
At the boot menu you should now have four options: two for NT (normal and vga mode) and one each for msdos and Linux. To get the linux option to work you will have to use redhat's boot disk to boot into Linux and configure Lilo. Log on as root and use your favorite text editor to edit /etc/lilo.conf. Here is a copy of mine:
boot=/c/bootsect.lux
map=/boot/map
install=/boot/boot.b
prompt
timeout=1
image=/boot/vmlinuz-2.2.14
    label=linux
    root=/dev/hda5
    read-only
It can be quite minimal as it only has one operating system to boot; there is no requirement for a prompt and the timeout is reduced to 1 so that it boots almost immediately without further user intervention. If your linux root partition is not /dev/hda5 then the root line will require amendment.
I mount my MSdos C: drive as /c/ under Linux. I am sure this will make some Unix purists cringe, but I find C: to /c easy to type and easy to remember. If you are happy with that, then all that is required is to create the mount point, "mkdir /c", and mount the C: drive. "mount -t msdos /dev/hda1 /c" will do for now, but you may want to include /dev/hda1 in /etc/fstab so that it will be automatically mounted in the future; this is useful for exporting files to make them available to NT.
Check that /c/bootsect.lux is visible to Linux "ls /c/bootsect*"
/c/bootsect.dos /c/bootsect.lux
Then run "lilo"
Added linux *
Following an orderly shutdown and reboot you can now select Redhat Linux at NT's boot prompt and boot into Linux. I hope you find the above useful.
Any modem that is not a WinModem is Linux-compatible, in a nutshell. Why?
A normal modem has two components: the "datapump" hardware that modulates and demodulates the signal on the phone line, and a controller that handles compression and error correction.

WinModems (or "software-based modems") require that your dialer program handle the second component, the compression and error correction. This means that the manufacturer only has to implement the first part, and therefore the modem is cheaper to produce. MUCH cheaper.
Although there are whispers in the shadows about future WinModem compatibility within Linux, this feature doesn't exist. It's always better to go for a hardware-based modem anyway, because that means your dialer doesn't have to do all the work, which means your programs may run faster.
That covers the basics. Check out http://www.56k.com/winmodem.shtml for more technical information.
Again, if it's not a WinModem, it's Linux compatible. I use an Aopen 56K modem and it works beautifully.
fyi -
For small networks that include both encrypted- and unencrypted-password clients (WfWg 3.1, 95, 98, NT), you can customize the [global] settings on a machine-by-machine basis to accommodate both.
If the majority of your systems are encrypted, but you have two exceptions maca and macb, set up your smb.conf to use encrypted passwords in the [global] section, then, also in your [global] section put the following:
include = /usr/local/samba/lib/smb.conf.%m
Then create mini-configs /usr/local/samba/lib/smb.conf.maca and /usr/local/samba/lib/smb.conf.macb which contain
[global]
encrypt passwords = no
Now all machines on the network will connect with samba using encrypted passwords EXCEPT maca and macb.
If the majority of systems on your network are unencrypted, do the reverse, setting the smb.conf global encryption off and using encryption on a machine-by-machine basis.
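Putting the pieces together, the relevant part of the main smb.conf would look roughly like this (a sketch; the install paths vary by distribution):

```
# /usr/local/samba/lib/smb.conf (sketch)
[global]
   encrypt passwords = yes
   ; %m expands to the connecting client's NetBIOS name, so any machine
   ; with a matching smb.conf.<name> file gets its own overrides:
   include = /usr/local/samba/lib/smb.conf.%m
```

The include is harmless for machines without a matching file; Samba simply skips it.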
Getting a PS/2 Microsoft Cordless Wheel Mouse working correctly under X takes a little bit of tweaking.
In the Pointer Section of /etc/X11/XF86Config, set the following:
Protocol "MouseSystems"
Device "/dev/gpmdata"
Then, make your gpm start like so:
gpm -R -t imps2
You should be all set.
To explain:
- gpm's "-t imps2" option reads the mouse input as if it were a "Microsoft Intellimouse (ps2) - 3 buttons, wheel unused".
- Then gpm's "-R" option pushes mouse output to /dev/gpmdata in the "MouseSystems" format.
- So you have to set X to read the new device in the new format.
You can also control mouse speed and acceleration and other things from gpm. Check it out.
Ted Wood <[email protected]> asked:
I'm trying to create a dual boot image with Redhat and NT Workstation. I'm using Norton ghost version 6 to create the image. It will ghost fine but after ghosting, lilo comes up as "LI" only. The problem is fixable by booting to the Linux floppy and rerunning Lilo which rewrites the mbr. After that everything is great, but why won't Lilo work properly the first time? I've searched Symantec's page and I've tried the switches but they all result in the same problem. Please Help!
It is better to use Fips32, available in the /dosutils directory of your distribution CD-ROM. Please *read all the instructions there carefully*; it can damage your data (and surely will) if not used properly.
Wagner Perlino <[email protected]> asks:
I am trying to build a small network at home with one Linux box and another Win95/Win98 box. I have no printer, but I would like to share files and use my Linux box as a gateway to the Internet so that both boxes can use one single dial-up connection AT THE SAME time. Is there any link or suggestion for this type of connection?

Hmm, I suggest you buy two NICs. Even a good Tulip 10/100 card is cheap.
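Once the second NIC is in place, the dial-up sharing itself is done with IP masquerading. A minimal sketch for a 2.2-series kernel, assuming the LAN side uses the private network 192.168.1.0/24 (these are root-only firewall-configuration commands; see the IP-Masquerade HOWTO for the full setup):

```
# Turn on packet forwarding, then masquerade everything from the LAN:
echo 1 > /proc/sys/net/ipv4/ip_forward
ipchains -P forward DENY
ipchains -A forward -s 192.168.1.0/24 -j MASQ
```

The Windows boxes then just point their default gateway at the Linux box's LAN address.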
<[email protected]> asks:
I have a new DELL with 13 GB hard-disk running windows98 second edition.
I bought Partition Magic 5.0 to create a couple of logical drives and Linux native and swap partitions.
I am trying to install Red Hat 5.2 using a bootable disk and a CD-ROM.
For some reason, neither disk-druid nor fdisk (in-built) recognizes my Logical Partitions.
On the Blue screen (driver information) it says:
/hda1  <size>                            Win95 FAT 32
/hda2  <size = 13 GB - size of C drive>  0x0f  (blank in type field)

That means the installation process does not know there are a couple of logical drives, and also a Linux native and a swap partition. What does the 0x0f type mean?
fdisk, however, shows there are partitions at the MS-DOS prompt, but the installation program cannot recognize them!
Is there anything I am doing wrong ? Any suggestions ?
Friend:
a) Linux does not work on FAT32; neither does NT, by the way ;-)
b) First convert your whole hard disk to FAT16. As this means you will lose all of your data, make a good backup first.
c) With (MS-DOS) fdisk, make a single partition for your Win-xx. The remaining disk space must be ignored.
d) Try again with cfdisk or Disk Druid.
Ah: throw away the Partition Magic thing. If you need one, use fips.
Dina Yazbeck <[email protected]> asks:
I am basically trying to host a domain and have all the e-mails of the users of this domain (remotedomain.com) sent to one particular Mailbox (such as [email protected]). This is straightforward. Assuming that the remote domain machine (LINUX) polls the ISP server. What is the procedure to use to retrieve this mail and distribute it to the respective mailboxes of the users? Do I have to use procmail? Is fetchmail sufficient for distributing the mail? Waiting for your reply.
There is a very good article on this matter by J. Pollmann in Linux Gazette, issue 45. Also read the FAQs at www.sendmail.org, where you will find many tips and hints on this. Your distribution's HOWTO/mini-HOWTO collection must also have something about it; I think it is called "queue mail" or something similar there.
Here things are working 100%, *after* I followed the above-mentioned instructions.
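As a hedged starting point for fetchmail, a multidrop ~/.fetchmailrc might look roughly like this (the hostname, account name and password are placeholders; double-check the multidrop section of the fetchmail man page, because multidrop is easy to get wrong):

```
poll pop.yourisp.net proto pop3:
    localdomains remotedomain.com
    user "allmail" with pass "secret" to * here
```

With "to * here", fetchmail reads the addressees out of the message headers and hands each message to your local MTA for delivery, so procmail is only needed if you want per-user filtering on top of that.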
glitch <[email protected]> asks:
I would like to ask for a QUALITY set of general instructions on how to install things i download off the internet such as
Just use Webmin (www.webmin.com). It is GREAT, and will administer EVERYTHING in your box (and network), including Samba and NIS.
Gabriel Ramiro Ferro <[email protected]> wrote:
Subject: Desde Argentina
Perdón pero no manejo el ingles, lo que desearía es que me informen como adquirir los CD de Linux en forma gratuita. Desde ya muchas gracias. [In English: Sorry, but I don't speak English. What I would like is for you to tell me how to acquire Linux CDs free of charge. Thanks very much in advance.]
Babelfish translation:
Subject: From Argentina
Pardon but nonhandling ingles, which would wish is that they inform to me like free acquiring the CD of Linux From already thank you very much.
Just for grins, I decided to run babelfish on Gabriel's letter to the LinuxGazette. While Babelfish doesn't do a very "good" translation, you can usually get the gist of the message.
Gabriel and I have since exchanged a number of e-mails all translated thru babelfish.
I found a number of Spanish language Linux resources and no less than 3 Argentine LUGs that I pointed him to.
A copy of the latest Linux/Redhat/Mandrake distribution is on its way to him.
Trying to help someone when there is no language in common was a fun and enlightening experience. I hope other Gazetteers give it a try!
Thus spoke Shawn Medero
It captures motion on the computer desktop... basically multiple screen captures tied together to form a movie of sorts. Primarily one would use it to create training demonstrations of Linux applications, etc.
Sorry to take so long to get back to you. I was out of town.
I don't know of anything that does this for Linux, although there were some tools for Unix/Motif that did so. I haven't looked at them for quite some time and can't find any notes on what they were called. My suggestion is to look at http://www.motifzone.com/. There may be some links there to help.
The problem, of course, is that these tools will work well with Motif, but not with Gtk or any of the popular Linux desktops (GNOME, KDE, etc). This is just one area that hasn't been addressed yet by the Linux marketplace.
At least I don't *think* they will work with Gtk. They work on low-level X events, but I don't know how they deal with widget sets, window managers and desktop environments.
I am a new user to Linux (Red Hat 6.0). I am currently dual booting between Windows 95 and Linux, and I have a 56K WinModem installed. I have not been able to get this modem to work under Linux. Can you suggest a good modem to upgrade to? Preferably one that will work under both my Linux and Windows installations.
The only modem that is certain to work is an external one, connected via a serial cable.
Otherwise an internal ISA (not PCI!) modem might work, preferably not a plug-and-pray model. If you can get a modem that can be configured via DIP switches, that should work.
I have just installed version 6.1 and set up my modem to dial out to my ISP. However, when I log on as a user and press KDE->Internet->kppp, a pop-up box opens up and wants me to enter the root password! This does not seem right. Is there a way to avoid having to enter the root password when logged on as a non-root user?
This is probably related to the permissions of the serial port that your modem is using.
For instance my modem is /dev/modem. This is a symbolic link:
rsmith@aragorn:~$ ls -l /dev/modem
lrwxrwxrwx   1 root     root           10 Nov 28  1998 /dev/modem -> /dev/ttyS2
Now for the permissions on this device:
rsmith@aragorn:~$ ls -l /dev/ttyS2
crw-r-----   1 root     dialout    4,  66 Feb 22 21:24 /dev/ttyS2
So only the owner [root] has read/write access [cRW-r-----], and the group has read access [crw-R-----]. Your setup is probably similar.
So you can do a chmod (see `man chmod') to grant wider access to the device. This can be done in two ways.
- You can give read/write access to the group that owns the device (in my case dialout), and make yourself a member of the relevant group.
- Alternatively, you can give read/write access to everybody.
Both of these approaches have security implications on a multi-user machine.
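The effect of the two approaches can be tried safely on a scratch file; the real target would be the device itself (e.g. /dev/ttyS2, owned by group dialout), which only root can change:

```shell
# Stand-in for /dev/ttyS2:
touch /tmp/fake-ttyS2

chmod 640 /tmp/fake-ttyS2          # owner rw, group r: group members cannot dial
ls -l /tmp/fake-ttyS2 | cut -c1-10

chmod 660 /tmp/fake-ttyS2          # give the group write access as well
ls -l /tmp/fake-ttyS2 | cut -c1-10

# On the real system, as root, you would then do the equivalent of:
#   chmod g+rw /dev/ttyS2
# and add your login to the dialout group (e.g. by editing /etc/group),
# then log out and back in.
```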
Another approach could be to make pppd setuid root. That means that pppd assumes the identity of root even if you start it as a normal user. I would not recommend this; setuid programs can be a major security risk.
I am trying to run Linux on a system with a NEC MultiSync monitor. Where can I find a driver? Or what monitor should I use?
The X Window System doesn't use a monitor driver as such.
Information about the monitor's characteristics should be entered in the configuration file for the X server program, usually /etc/X11/XF86Config.
Some tools like xconfigurator or xf86config can help you with this. Your monitor might even be in their database. The X server distribution XFree86 also comes with a list of so-called modelines for lots of monitors.
Alternatively you can use the following example:
Section "Monitor"
    Identifier "SAM M174"
    VendorName "S.A.M. GmbH"
    ModelName  "M174"
    HorizSync   30-64
    VertRefresh 47-100
    Gamma 1.2
    Mode "640x480@72Hz"    # Standard VESA
        DotClock 31.5
        HTimings 640 664 704 832
        VTimings 480 489 492 520
    EndMode
    Mode "800x600@72Hz"    # Standard VESA
        DotClock 50
        HTimings 800 856 976 1040
        VTimings 600 637 643 666
        Flags "+HSync" "+VSync"
    EndMode
    Mode "1024x768@70Hz"   # Standard VESA
        DotClock 75
        HTimings 1024 1048 1184 1328
        VTimings 768 771 777 806
        Flags "-HSync" "-VSync"
    EndMode
EndSection
It is the "Mode" entries that you want. All VESA-compliant monitors should support these modes. Note that you have to look up the "HorizSync" and "VertRefresh" values in your monitor's documentation.
For more in-depth information, see the XFree86-Video-Timings-HOWTO (usually in /usr/doc/HOWTO/XFree86-Video-Timings-HOWTO.gz)
I have recently installed Linux on a Dell OptiPlex GX110 Pentium III that I bought. I can boot up and log into root, but when I issue the command

startx

to get X to start, this is what I get:

execve failed for /etc/X11/X (errno 2)

and then

_X11TransSocketUNIXConnect: Can't Connect: errno = 2

then

Giving up

I wondered if anyone could help a Linux newbie.
Your X installation has not been finished. This is (or was) a common problem with Red Hat and probably other distros.
Basically, /etc/X11/X is a symbolic link to the real X server (which usually lives in /usr/X11R6/bin/).
In your case the link probably doesn't point to the right file.
Try configuring X again (using whatever program you used for that) or make a link using `ln -s' to the appropriate server.
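The repair can be sketched with scratch paths, so it is safe to try anywhere; on the real system the link is /etc/X11/X and the target is your actual server binary under /usr/X11R6/bin (the server name, e.g. XF86_SVGA, depends on your video card):

```shell
# Scratch stand-ins for the real paths:
mkdir -p /tmp/xdemo/bin
echo 'pretend X server' > /tmp/xdemo/bin/XF86_SVGA

# -s makes a symbolic link, -f replaces a stale one:
ln -sf /tmp/xdemo/bin/XF86_SVGA /tmp/xdemo/X

# Verify where the link points:
readlink /tmp/xdemo/X
```

For real, as root, that would be: ln -sf /usr/X11R6/bin/XF86_SVGA /etc/X11/X (substituting your server's name).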
We are attempting to set up our Linux machine so that it can be used as a POP server. On a Windoze machine, the Eudora software successfully retrieves email from the machine. However, when we try to send email through the Linux machine, the email bounces back. By the way, using email tools such as pine on the Linux machine itself works fine.
The mail transfer agent on your Linux box is not configured to relay the mail from the Windoze box. Most mail transfer agents block relaying from other machines/domains.
The log files (/var/log/syslog, /var/log/messages) should give you an idea of what's wrong.
Read the documentation for the MTA you're using (probably sendmail?) to see how you should configure it properly for your situation.
You could also ask your local network wizard. :)
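If the MTA turns out to be a recent sendmail (8.9 or later), relaying for your own machines is usually granted through the access database; a sketch, with a placeholder network number:

```
# /etc/mail/access -- allow hosts on 192.168.1.* to relay
192.168.1        RELAY
```

Rebuild the map with "makemap hash /etc/mail/access < /etc/mail/access" and restart sendmail afterwards; older versions used different anti-relaying mechanisms, so check the relaying pages at www.sendmail.org first.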
More HelpDex cartoons are on Shane's web site, http://mrbanana.hypermart.net/Linux.htm.
Some years ago, a typical UNIX cluster was composed of an expensive, powerful server and many cheaper terminals connected to that server. An example of such a terminal is the IBM X Station. The hardware of the IBM X Station includes a screen, keyboard, mouse, some RAM, and jacks for Ethernet cables. Since they do not have a hard disk or a floppy drive, they must get the operating system from a host on the net they are attached to.
The aim of an X Station was to provide, at low cost, a terminal optimized for X and graphics and connected to a powerful server.
X Stations depend on an IBM workstation, such as a RISC System/6000, running the AIX OS. The X Station manager package, released with this OS, contains the software needed to make X Stations boot from the network and then run the X interface. Managing several X Stations from a RISC/6000 server is quite an easy job; can we do the same thing from a Linux box? The main reason to do such work is hardware failure of the IBM server, since RISC workstations are expensive (much more so than a PC).
This article shows how to configure a Linux box to provide X Station 120 and X Station 130 terminals with all the information needed to boot and work properly.
There are 5 different steps to accomplish before you can successfully start an X Station from Linux:
All the boot files for your X Stations are in the directories /etc/x_st_mgr
and /usr/lpp/x_st_mgr
in the AIX installation backup. Restore the backup in your Linux box (in the following we presume you restored the backup in the same directories as the AIX installation).
You don't need all the files; but a minimal installation must contain:
/etc/x_st_mgr/120                    link to /usr/lpp/x_st_mgr/bin/bootfile3
/etc/x_st_mgr/120.cf                 configuration file, see below
/etc/x_st_mgr/bootfile3.cf           copy of 120.cf
/usr/lpp/x_st_mgr/bin/bootfile3      boot file X Station 120
/usr/lpp/x_st_mgr/bin/x11xor3.out    X server
/usr/lpp/x_st_mgr/bin/rgb.txt        color descriptor database

for X Station 120, and:

/etc/x_st_mgr/130                    link to /usr/lpp/x_st_mgr/bin/bootfile4
/etc/x_st_mgr/130.cf                 configuration file, see below
/etc/x_st_mgr/bootfile4.cf           copy of 130.cf
/usr/lpp/x_st_mgr/bin/bootfile4      boot file X Station 130
/usr/lpp/x_st_mgr/bin/x11xor4.out    X server
/usr/lpp/x_st_mgr/bin/rgb.txt        color descriptor database

for X Station 130.
The file 120.cf contains the names of the files needed by X Station 120 during the boot process. An example of its structure is:
/usr/lpp/x_st_mgr/nls/keymap
/usr/lpp/x_st_mgr/nls/msg
/usr/lpp/x_st_mgr/bin/x11xor3.out
/home/x_st_mgr/lib/fonts,/home/x_st_mgr/lib/fonts/ibm850
/usr/lpp/x_st_mgr/bin/rgb.txt

In the file 130.cf the sole change is the line:

/usr/lpp/x_st_mgr/bin/x11xor4.out

replacing the line of the X server x11xor3.out.
In the configuration file 120.cf (or 130.cf) the line:
/home/x_st_mgr/lib/fonts,/home/x_st_mgr/lib/fonts/ibm850

points to the directories where you copied the fonts from the AIX installation, located in /usr/lib/X11/fonts. You can freely change the installation directory, but you must be consistent.
When an X Station is switched on, it sends its Ethernet address on the network along with a request for assistance to accomplish the boot. During this process you can see some information on the X Station screen, as shown below.
The IBM Xstation
Version 1.4
(c) Copyright IBM Corporation 1981, 1990
Ethernet Hardware Address 08005A010F33
Ethernet Thick/Thin    Thin
BOOTP - 0000 0000 0000 0000
TFTP  - 0000 0000 0000 0000

First of all, record the Ethernet hardware address of the terminal; this information is needed later.
When you turn on the X Station, the first number on the BOOTP line starts to increase: the X Station is sending a request for help on the network. To respond to that request, a computer must have the bootpd daemon running and properly configured, that is, it must have a line in the bootpd configuration file exactly matching the Ethernet address of the X Station.
This step is required for an X Station 120; an X Station 130 can be statically configured to contact the machine which stores the boot files and fonts. This is done in the setup screens, which are accessed by pressing F12 during the boot procedure. Here you must provide all the boot and IP information (it is stored in a non-volatile RAM subsystem on the X Station) and disable BOOTP. In the following we suppose you don't use a static configuration for your X Station 130.
The bootpd daemon is usually located in /usr/sbin/bootpd. If you can't find it there or in similar locations, you must get a copy of the program from the net and follow the installation procedure (this part is not covered here).
The configuration file for bootpd is /etc/bootptab. Here you must insert the information about all the X Stations you want to manage from the Linux box. An example of this file is reported below:
# declaration of types
x_st_mgr.120:ht=ether:hd=/etc/x_st_mgr:bf=120:T170=2328:ds=131.114.8.144:gw=131.114.8.129:sm=255.255.255.0:
x_st_mgr.130:ht=ether:hd=/etc/x_st_mgr:bf=130:T170=2328:ds=131.114.8.144:gw=131.114.8.129:sm=255.255.255.0:
# X Stations
astr12pi.difi.unipi.it:tc=x_st_mgr.120:ha=08005A010F1A:ip=131.114.8.236:
astr13pi.difi.unipi.it:tc=x_st_mgr.130:ha=08005A010F33:ip=131.114.8.237:

In this file there are two type declarations (i.e. X Station 120 and X Station 130), with the information about the related boot files. The meanings of the tags are explained below:

# hd -- home directory
# bf -- bootfile
# sa -- server IP address to tftp bootfile from
# gw -- gateways
# ds -- DNS
# ha -- hardware address
# ht -- hardware type
# ip -- host IP address
# sm -- subnet mask
# tc -- template host (points to similar host entry)
# hn -- name switch
# bs -- boot image size
# dt -- old style boot switch

The other lines are the list of the X Stations managed. For an X Station 120, a typical entry begins with the name of that X Station and refers to the tag tc=x_st_mgr.120; the ha tag must match the Ethernet address you previously recorded, and the ip tag is the IP number corresponding to the X Station. For an X Station 130, change the tc tag to tc=x_st_mgr.130.
The bootpd daemon is started from the /etc/inetd.conf file by inserting the line:

bootps dgram udp wait root /usr/sbin/tcpd bootpd

You can test your Linux bootpd configuration by issuing the command:

kill -HUP `cat /var/run/inetd.pid`

and turning on the X Station. If the BOOTP phase is passed (i.e. the first number on the TFTP line increases), the bootpd daemon works properly.
Now the Linux box can successfully answer the boot request from the X Station, but it's not yet able to supply the files the X Station is asking for. To accomplish this you must install and configure the tftp daemon (Trivial File Transfer Protocol). The tftp daemon usually resides in /usr/sbin/in.tftpd and can be started from /etc/inetd.conf by inserting the line:

tftp dgram udp wait root /usr/sbin/tcpd in.tftpd /home/x_st_mgr /etc/x_st_mgr /usr/lpp/x_st_mgr

The configuration can be tested by restarting inetd:

kill -HUP `cat /var/run/inetd.pid`

Turning on the X Station, the boot process should now succeed, and after a short wait a message like this appears on the X Station screen:

Copyright 1989, 1994 AGE Logic Inc.
All rights reserved, Release 072594

and after another wait the screen becomes grey with the typical cross cursor in the middle. The X interface is now working.
The last step is to provide a login mask to the X Station. This is the job of the xdm daemon. The xdm daemon can be started by setting runlevel 5 in /etc/inittab, i.e. in this file you must have the line:

id:5:initdefault:

Now change directory to /etc/X11/xdm. If you don't want the xdm interface to start automatically on the console, edit the file Xservers and comment out the line:

:0 local /usr/X11R6/bin/X

In the same file, add the names of all the X Stations defined above, as in the example:

astr12pi:0 foreign
astr13pi:0 foreign

Now edit the xdm-config file and insert the lines:

DisplayManager.astr12pi_0.setup:   /etc/X11/xdm/Xsetup_astr12pi_0
DisplayManager.astr12pi_0.startup: /etc/X11/xdm/Xstart
DisplayManager.astr12pi_0.reset:   /etc/X11/xdm/Xstop

where Xsetup_astr12pi_0 is a copy of Xsetup_0, and Xstart and Xstop are links to GiveConsole and TakeConsole. Repeat this operation for all the X Stations defined.
The last operation is to insert a crontab entry which refreshes the xdm daemon every minute. This step is needed so that an X Station is managed quickly when it's switched on. As user root, issue the command:

crontab -e

and insert the line:

* * * * * kill -HUP `cat /var/log/xdm.pid`

Reboot your Linux box and turn on the X Stations. If all goes OK, the X Stations will boot, start the X server and show the login screen.
The implementation was conducted in Jan III Sobieski Hotel - one of the biggest, most luxurious and prestigious hotel facilities in Poland. The hotel offers a wide variety of services, ranging from suite rental, through restaurants to organisation of conferences.
The hotel has over 400 rooms and employs up to 600 persons. The office and administration centre is located in a four-story building. The hotel owns restaurants, underground car park, business centre, pay-TV, information desks at the airport, etc.
Jan III Sobieski is a giant entity which could be compared to a large factory. It is operated round-the-clock, as guests do not like to be kept waiting. Since the prices reflect hotel's high standard, the guests are keen on getting their money's worth. The expectations with regard to the computer system are, therefore, very high.
The system has to provide for uninterrupted operation in continuous traffic. Operating downtimes should be reduced to a minimum, as no time is available to shut the server down for maintenance. The system should provide for reliable customer service and ensure that the numerous, large and critical databases be adequately protected.
The following systems had been in use at Jan III Sobieski Hotel prior to the network implementation:
The decision to implement Linux was made in 1998 when communication problems with the taxation module were aggravated due to frequent legislative changes. A decision was made to purchase an application designed by a local provider and offering better support. The hotel selected HS-Partner and their hotel and restaurant application.
The HS-Partner application (now HS-Partner - Protest) operates on Linux graphic terminals. The terminal comprises a PC in specialist casing, a touch-operated LCD display and software: XWindow and specialist libraries. It is based on QT with a number of adjustments implemented by HS Partner programmers.
The actual program runs on a server based on an SQL database: initially PostgreSQL, now Adabas D.
The terminals themselves provide for secure operation in the kitchen and the bar, as they are resistant to temperature and humidity and do not need to be repaired. This is a very important feature in a hotel and restaurant environment.
The application was implemented in the first half of 1999. The management decided to switch to a 32-bit platform. The choice was restricted to two platforms - Windows NT and Linux.
Moszumanski, a manager of the IT department and a great Linux enthusiast, played the vital role in the decision-making process. Being aware of system capabilities as well as the hotel's operating needs, Moszumanski was able to convince the management to seriously consider the Linux option.
At that time, our company had already become known for forcing StarDivision, the developer of the StarOffice package, to enter the Polish market. We took an interest in StarOffice after we had experienced difficulty in promoting our Linux solutions. Still, the customers never ceased to ask the sacramental question: what about Word and Excel?
StarOffice was the only Linux package which could match Microsoft solutions. Its operating standards resembled similar solutions developed by the Redmond company. The only obstacle was the absence of a Polish distributor and a Polish language version.
After laborious efforts (the Polish market has always been disregarded by American software developers), our company finally became an official StarDivision dealer and we could offer solutions based on the system. Thanks to our PKFL activity and the recommendation of HS-Partner, we were able to establish contact with Jan III Sobieski Hotel.
The hotel management were faced with a difficult choice: either to spend a lot of money to develop Windows-based solutions, or to acquire a system whose advantages were recognised only by professionals, without any marketing support. The main advantage of Linux was its ability to maintain a uniform system platform, while Windows' strength lay in its commercial popularity. Still, the numbers were on our side. The cost of acquiring Windows-based software exceeded the Linux solution five-fold.
For a large company like Jan III Sobieski Hotel, the cost of acquiring software is not as important as overall system development costs. As it turned out, the total software cost was ZERO!
The cost of owning a computer is higher than the actual purchase cost. Installing NT Workstation on 100 computers would take roughly 100 (computers) x 2 (hours to install the system, network and applications) = 200 hours - about a month's work for a well-paid computer expert.
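As a back-of-the-envelope sketch of that estimate (the two-hours-per-machine figure is the article's own rough assumption, not a measured value):

```shell
# Rough install-effort estimate: 100 machines, ~2 hours each for
# system, network and applications.
machines=100
hours_per_machine=2
total_hours=$((machines * hours_per_machine))
echo "$total_hours hours of installation work"   # -> 200 hours
```

At roughly 160-200 working hours in a month, that is about one full month of a specialist's time, which is where the "monthly pay" figure comes from.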
Additional costs would have to be incurred with regard to management, maintenance and support. Jan III Sobieski Hotel is a large facility and it takes more than a short walk to get from one building to another. A system requiring less maintenance would be the preferred choice.
The alternative solutions were the Zero Administration Kit, PC Anywhere and others. Still, they implied additional costs without a guarantee of proper operation.
All those features were already present in the Linux System. A PC could be transformed into a practically self-operated terminal, a remotely or centrally controlled workstation. The high reliability of a Unix class system also guaranteed safe data processing.
The hotel management decided to implement a uniform system platform. A decision was soon made to introduce the StarOffice package provided that the HS-Partner system would be successfully implemented. The implementation was completed in August 1999. The next step was to unify the system platform. Negotiations were held on involving our company in the implementation process. Our company had become known for a number of successful StarOffice implementations based on Linux, for example in Warsaw's BoatHouse restaurant.
The hotel management take credit for approaching the subject in the most reasonable way - by stating the requirements and demanding results. They did not intervene in technical details and provided the programmers with the freedom to make the right decisions.
At that time, a fee was charged for commercial StarOffice applications. Even though it was half the price of the MS Office package, we still had to import it, pay the customs duty, wait and transfer the money. At low margin levels, it was not a very lucrative undertaking - our company makes money on implementing, rather than selling software packages. In the late summer of 1999, StarDivision was taken over by Sun Microsystems and the StarOffice package became available to the public for free. We no longer had to waste our time dealing with bureaucratic chores and we were (and still are) the most competent company implementing the StarOffice package based on Linux.
We suggested that Xterminals be used, but the idea was not picked up by the hotel's IT department. Firstly, the terminals implied additional costs. The hotel had an adequate PC base and only some of the computers required upgrading. Secondly, a new server would have to be acquired to prevent the loss of the system's calculating power on the terminals. Server capacity was also questioned. The capacity of the Intel platform and servers with Linux support seemed limited and other hardware platforms were not an option. The present solutions ensure adequate processing capacity.
The management decided to postpone the decision on buying terminals until the existing stations became inadequate. A number of weaker computers would still have to be configured as Xterminals.
The contract was signed in October and it involved training and system installation. We had to begin by developing a quick system installation mechanism. The company called MandrakeSoft helped by releasing Linux-Mandrake. This distribution was equipped with the KickStart mechanism, which provided for automatic installation based on previously developed scripts. Our programmers wrote a set of scripts which installed the system and the required applications on the existing computer base. A script would be saved on a floppy disk and run to install the abridged version of Mandrake and the StarOffice package and to configure the network and printers. Another set of scripts was used to automatically generate user configurations. All those tools enabled us to install the system on the workstations practically overnight.
Workstations with the operating system and the StarOffice application are installed on the local disk. The users' home directories are installed from the server with the use of NFS. Authorisation control is conducted with the use of yp. IPXutils is used to communicate with the Novell server.
The only problem was NFS. NFS itself is not an optimal solution and is probably the greatest weakness of Unix systems. The majority of hotel users frequently browse through directories to find a given file - with 100 logged-in users, the capacity of the server using the user-space nfsd dropped below the acceptable level. The kernel NFS driver - knfsd - was used instead. At present, processor load is negligible (the server has two processors) and the network operates correctly. We did not experience any other network-related problems. Thanks to remote access, user support became possible without running from one floor to another.
Training sessions were held in the hotel's training room. Two lecturers trained two groups each. Courses were taught daily - one group in the morning, the other in the afternoon. The course comprised 24 class hours on package operation, basic Internet and e-mail skills, etc. The users did not have any problems with the presented material despite the sudden change of the working environment. The training session lasted less than four weeks. Following the course, we provided the users with two weeks of start-up support - a job well done by Piotr Duszynski.
Most users are not familiar with the specific nature of the Linux system. They use the window manager interface to open applications such as StarOffice, accounting applications and, in justified cases, Netscape. Practically none of the users know how to administer or configure the system, but such skills are not required.
The hotel currently relies on the above hotel system and the office centre operates the StarOffice package. The HP UX accountancy system is also used. The WWW and e-mail server operates on Linux. The Novell NetWare system is integrated with Linux using IPXutils tools. The Windows system is used for pay-TV, which is the only hotel service that keeps the IT department busy troubleshooting.
The implementation was one of the few stories of success in the Polish computer industry. I would rather not make any references to major system failures experienced by large companies, but we tried to steer clear of the common errors. The hotel management defined the goals and the programmers developed the right implementation methodology. The IT department is always short of people during system implementations which is why we had to resort to outsourcing to a limited extent. Pawel Moszumanski's managerial skills cannot go unnoticed - he scrupulously supervised project goals and provided the team with organisational and technical support. Much of the credit goes to the hotel's staff who provided us with a friendly and stress-free working atmosphere. We also received massive support from Arek Podgórski, one of the top Linux administrators and system designers, and Piotr Duszynski, implementation expert. Lenin's theory that "all depends on the working class" was fully proven.
A business application should be the key feature of a computer system. A Unix class system proved to be a reliable platform for such applications. Business applications should be selected to match the company's operating profile, but the system's open architecture and stability should also be taken into account. The costs of maintaining and developing open applications and systems are half of those incurred in closed systems. Linux and other Unix class systems, such as SCO and Solaris, provide the perfect solution. The system selected for office applications should be well integrated with the business application system. Linux is the best solution in the Unix system category. It provides for a single stable platform, uniform administration system and lower costs. Network size is of minor significance - we have implemented a similar solution (featuring Xterminals) in 5 workstations at Warsaw's BoatHouse restaurant. A similar method can be applied in a network of 1000 workstations. Unix was developed for large networks, but Linux is more flexible and can be used in both small businesses and giant corporations. If its scaling capacity is exhausted, commercial Unix systems can always be applied.
Softomat's web site is http://www.softomat.com.pl. In the near future, the company plans to launch a mini service which will discuss their experience implementing the above as well as other projects.
Try this string with Eterm:
Eterm -W -O --shade 60 --home-on-echo no --home-on-input
Here's the reasoning. Oh, and I'm doing all of this with "Eterm-0.8-9.i386.rpm". If something I do here doesn't work for you, consider the possibility that you're using a newer or (less likely) older version, and that the authors have changed something relevant.
The "-O" gives a pseudo-transparent Eterm. The terminal remembers your wallpaper at the time the terminal was created, though, so if you use any sort of automated wallpaper changer, you're going to end up with the wallpaper one thing and the background on the Eterm another. Same thing if you move the Eterm to a desktop with different wallpaper. The "-W" causes the Eterm to watch the desktop for changes to the wallpaper, fixing this behavior. The "--shade 60" darkens the Eterm's background. This is a big help if you use wallpaper with bright colors... you can shade the Eterm background to the point where you can actually use the terminal. :-) The "60" in "--shade 60" is the percentage amount of shading applied to the background... "0" is no shade, "100" is black. A value of "60" works well for me, you might want to tinker.
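Eterm's exact blending is internal to the program, but as a rough mental model (my assumption, not Eterm's documented algorithm), a shade of S percent darkens each RGB channel to about (100 - S) percent of its original value:

```shell
# Hypothetical linear model of --shade: a channel value c at shade s
# becomes c * (100 - s) / 100.
shade_channel() {
    c=$1  # channel value, 0-255
    s=$2  # shade percentage, 0-100
    echo $(( c * (100 - s) / 100 ))
}

shade_channel 255 60    # -> 102 : a full-brightness channel at --shade 60
shade_channel 255 0     # -> 255 : --shade 0 leaves the channel untouched
shade_channel 255 100   # -> 0   : --shade 100 goes to black
```

So at "--shade 60", even the brightest spots of your wallpaper end up at well under half brightness, which is why text stays readable on top of it.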
If you are scrolled up in an Eterm window, and the system generates new text, Eterm will automatically scroll you down as far as it can go so that you 1) are made aware that there's new output and 2) can read it immediately. (I should add here that this may just be the default for the Red Hat package I'm using and not the official Eterm download. The man page for Eterm warns that "In keeping with the freedom-of-choice philosophy, options may be eliminated or default values chosen at compile-time, so options and defaults listed may not accurately reflect the version installed on your system". Now, regarding the above-mentioned "jump to bottom on output" feature, that same man page reads as if the option is inactive by default. On my Eterm, it's not... it's active. In fact, the POSIX "-H" option that controls this is useless to me because my Red Hat package Eterm behaves the same way whether "-H" is present or not. :-( So between the man page warning and the behavior I see, I tend to think that my Red Hat package has "-H" on by default.)

I don't like this behavior. I've been using "gnome-terminal" for a while, and when you scroll up in a gnome-terminal window, it stays scrolled whether the system is generating new text or not. (Although it WILL appear to be scrolling down as text scrolls up past your point-of-view, displaced by the new text the system is generating. That'll probably make more sense once you've seen it happen.) I like that because, for example, if I'm running a "find" and want to scroll up and examine an old directory listing while the "find" is finding, I don't have to fight the terminal as much to keep my point-of-view on the directory listing.

Therefore, "--home-on-echo no". (The long option "--home-on-echo" is equivalent to the POSIX option "-H", but since, as I discuss above, "-H" appears to be compiled into the Eterm Red Hat package, the long option is the only way to turn the option off.) Now, something else weird.
Normally, if YOU generate text (rather than the system) while scrolled up in an Eterm window, Eterm will automatically scroll you down so you can read what you're typing. I like this behavior. However, for some reason that's opaque to me, as soon as you use "--home-on-echo no", this behavior stops, and the point-of-view stays where you scrolled it to, whether the system OR you generates new text. It's as if turning "--home-on-echo" off somehow forces "--home-on-input" off as well, even though the latter option had defaulted to "on" previously. Anyway, using "--home-on-input" in the command string reinforces what I want and gets the Eterm back in line.
Correction: Eterm doesn't QUITE mimic the behavior of gnome-terminal, even after these tweaks... when you are scrolled up, gnome-terminal will "lock" the view in place until the buffer fills, then text will scroll. Text in an Eterm window will scroll regardless of the buffer. (Again, that'll probably make more sense once you've seen it happen.) I like gnome-terminal's behavior better, so if anybody knows or can figure out a way to make Eterm work exactly the same, please let me know.
I use "ls --color". While the default color choices are fine against a solid background, I find them less than optimal against my wallpaper, even with "Eterm --shade 60". (And if I shaded much higher than that, why bother with transparency at all?) So I'm trying to figure out how to tweak the default Eterm colors to get maximum visibility across as much of the enormous range of possible color contrasts brought to my desktop by my collection of wallpaper as I can. I'll swap green and white, as I find that green has better contrast with the rest of the spectrum in general than white. I'll swap the two rather than just changing white to green because files with execute permission are listed in green, and there's no sense in sacrificing that color cue. I'll also lighten the blue, as it's dark enough to have lousy contrast with most of the spectrum. These changes aren't necessarily for everyone... I'm tweaking the Eterm colors so they work better with my eyes when I'm using Eterm's pseudo-transparency feature. You may find that different choices work best for you. Nevertheless, please read on and learn from my mistakes. :-) Lastly, once I've got all this "escape sequences" stuff worked out, I'll probably try something creative with my bash prompt.

OK, the first thing I had a question about is "what are the RGB values for the default Eterm colors"? Here's what I figured out:
Name        Code  RGB Value     Name          Code  RGB Value
Black       30    000000        Dark Gray     1;30  333333
Red         31    cc0000        Light Red     1;31  ff0000
Green       32    00cc00        Light Green   1;32  00ff00
Blue        34    0000cc        Light Blue    1;34  0000ff
Cyan        36    00cccc        Light Cyan    1;36  00ffff
Purple      35    cc00cc        Light Purple  1;35  ff00ff
Brown       33    cccc00        Yellow        1;33  ffff00
Light Gray  37    faebd7        White         1;37  ffffff
The color names are from /usr/doc/HOWTO/Bash-Prompt-HOWTO.
I didn't understand at first why it was so difficult to find a list of THE ANSI colors (in RGB) somewhere on the 'net. Now I understand that there IS no THE ANSI colors (at least in RGB). The sixteen ANSI colors appear to be defined simply as "black", "blue", "green", "cyan", etc. and are not defined down at the RGB level at all. I base this conclusion on the fact that gnome-terminal, Eterm and a page I found on the 'net with the ANSI colors described (look in the HTML and you'll find the RGB values that the author chose) all three use different RGB values. The RGB values are pretty straightforward in all three cases except for light gray in the Eterm set and brown in the gnome-terminal set. These two colors use unusual RGB values, and I can't explain why. Anyway, here are the values for gnome-terminal:
Name        Code  RGB Value     Name          Code  RGB Value
Black       30    000000        Dark Gray     1;30  555555
Red         31    aa0000        Light Red     1;31  ff5555
Green       32    00aa00        Light Green   1;32  55ff55
Blue        34    0000aa        Light Blue    1;34  5555ff
Cyan        36    00aaaa        Light Cyan    1;36  55ffff
Purple      35    aa00aa        Light Purple  1;35  ff55ff
Brown       33    aa5500        Yellow        1;33  ffff55
Light Gray  37    aaaaaa        White         1;37  ffffff
And here are the values I found at http://knuckle.sandwich.net/ansi.html:
Name        Code  RGB Value     Name          Code  RGB Value
Black       30    000000        Dark Gray     1;30  444444
Red         31    aa0000        Light Red     1;31  ff4444
Green       32    00aa00        Light Green   1;32  44ff44
Blue        34    0000aa        Light Blue    1;34  4444ff
Cyan        36    00aaaa        Light Cyan    1;36  44ffff
Purple      35    aa00aa        Light Purple  1;35  ff44ff
Brown       33    aaaa00        Yellow        1;33  ffff44
Light Gray  37    aaaaaa        White         1;37  ffffff
Here's a shell script that's semi-useful.
echo -e "     \033[30mBlack\033[0m  \033[1;30mDark Gray\033[0m"
echo -e "      \033[34mBlue\033[0m  \033[1;34mBlue (Light)\033[0m"
echo -e "     \033[32mGreen\033[0m  \033[1;32mGreen (Light)\033[0m"
echo -e "      \033[36mCyan\033[0m  \033[1;36mCyan (Light)\033[0m"
echo -e "       \033[31mRed\033[0m  \033[1;31mRed (Light)\033[0m"
echo -e "    \033[35mPurple\033[0m  \033[1;35mPurple (Light)\033[0m"
echo -e "     \033[33mBrown\033[0m  \033[1;33mYellow\033[0m"
echo -e "\033[37mLight Gray\033[0m  \033[1;37mWhite\033[0m"
Here's a more sophisticated one I took from /usr/doc/HOWTO/Bash-Prompt-HOWTO.
#!/bin/bash
#
# This file echoes a bunch of colour codes to the terminal to demonstrate
# what's available. Each line is one colour on black and gray
# backgrounds, with the code in the middle. Verified to work on white,
# black, and green BGs (2 Dec 98).
#
echo "          On Light Gray:        On Black:"
echo -e "\033[47m\033[1;37m White       \033[0m 1;37m \033[40m\033[1;37m White       \033[0m"
echo -e "\033[47m\033[37m Light Gray  \033[0m 37m   \033[40m\033[37m Light Gray  \033[0m"
echo -e "\033[47m\033[1;30m Gray        \033[0m 1;30m \033[40m\033[1;30m Gray        \033[0m"
echo -e "\033[47m\033[30m Black       \033[0m 30m   \033[40m\033[30m Black       \033[0m"
echo -e "\033[47m\033[31m Red         \033[0m 31m   \033[40m\033[31m Red         \033[0m"
echo -e "\033[47m\033[1;31m Light Red   \033[0m 1;31m \033[40m\033[1;31m Light Red   \033[0m"
echo -e "\033[47m\033[32m Green       \033[0m 32m   \033[40m\033[32m Green       \033[0m"
echo -e "\033[47m\033[1;32m Light Green \033[0m 1;32m \033[40m\033[1;32m Light Green \033[0m"
echo -e "\033[47m\033[33m Brown       \033[0m 33m   \033[40m\033[33m Brown       \033[0m"
echo -e "\033[47m\033[1;33m Yellow      \033[0m 1;33m \033[40m\033[1;33m Yellow      \033[0m"
echo -e "\033[47m\033[34m Blue        \033[0m 34m   \033[40m\033[34m Blue        \033[0m"
echo -e "\033[47m\033[1;34m Light Blue  \033[0m 1;34m \033[40m\033[1;34m Light Blue  \033[0m"
echo -e "\033[47m\033[35m Purple      \033[0m 35m   \033[40m\033[35m Purple      \033[0m"
echo -e "\033[47m\033[1;35m Pink        \033[0m 1;35m \033[40m\033[1;35m Pink        \033[0m"
echo -e "\033[47m\033[36m Cyan        \033[0m 36m   \033[40m\033[36m Cyan        \033[0m"
echo -e "\033[47m\033[1;36m Light Cyan  \033[0m 1;36m \033[40m\033[1;36m Light Cyan  \033[0m"
Here's the table with the RGB values for the default Eterm colors again, expanded to include the internal number that Eterm uses to refer to each color.
Name        Eterm Color #  Code  RGB Value     Name          Eterm Color #  Code  RGB Value
Black       0              30    000000        Dark Gray     8              1;30  333333
Red         1              31    cc0000        Light Red     9              1;31  ff0000
Green       2              32    00cc00        Light Green   10             1;32  00ff00
Brown       3              33    cccc00        Yellow        11             1;33  ffff00
Blue        4              34    0000cc        Light Blue    12             1;34  0000ff
Purple      5              35    cc00cc        Light Purple  13             1;35  ff00ff
Cyan        6              36    00cccc        Light Cyan    14             1;36  00ffff
Light Gray  7              37    faebd7        White         15             1;37  ffffff
So to swap "white" and "light green" and lighten up "light blue":
Eterm -f rgb:00/ff/00 --color10 rgb:ff/ff/ff --color12 rgb:99/99/ff -W -O --shade 60 --home-on-echo no --home-on-input
Why not --color10 rgb:ff/ff/ff --color12 rgb:99/99/ff --color15 rgb:00/ff/00? It doesn't work. By default, foreground-color and color15 are both "white", but they CAN be set differently, and "ls --color" uses the ANSI code for foreground-color, not for color15.
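Since the final command line is getting quite long, I'd wrap it in a shell function (the "myeterm" name is my own convenience, not anything Eterm ships with):

```shell
# Launch Eterm with the transparency, shading, scrolling and color
# tweaks worked out above, passing any extra arguments through.
myeterm() {
    Eterm -f rgb:00/ff/00 --color10 rgb:ff/ff/ff --color12 rgb:99/99/ff \
          -W -O --shade 60 --home-on-echo no --home-on-input "$@"
}
```

Put it in ~/.bashrc and a plain "myeterm" gets you the whole setup.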
I said something earlier about doing something with my bash prompt. Well, I decided not to group that with the Eterm material. It really doesn't have anything to do with Eterm, although it WAS researching the colors for Eterm that led me to the Bash-Prompt-HOWTO. Anyway, go to http://home.earthlink.net/~edward_leah/linux to see what I did.
There is no doubt: GIMP is not only one of the applications that attract desktop users to Linux, but also one of the most powerful graphics applications available. There's hardly any task that cannot be done with the aid of the GIMP. One of its main features is modularity: programmers can extend the program with C programs called ``plug-ins''. Just open your right-mouse-button popup menu, point to ``Filters'' and its diverse submenus, and you'll see how important the plug-in feature is for GIMP: all the filters seen here are implemented via plug-ins.
What about the next item in the popup menu, ``Script-Fu''? Tons of effects can be discovered here. The difference between Script-Fus and true filters is that Script-Fu effects are generated with the aid of ``Scheme scripts'', small programs written in a strange-looking language called ``Scheme'', which is closely related to Lisp--some of you have certainly heard that name in connection with artificial-intelligence programs. GNU Emacs uses Lisp as an extension and implementation language, and GUILE, the GNU Project's Ubiquitous Language for Extension that can be embedded into all kinds of applications, is a Scheme dialect.

Why does GIMP, an end-user application, use such a complicated language? This question may be the source of a great religious and philosophical debate about programming languages on which some people can spend days and months; in religious explosive force, it ranks just behind the question of which text editor to use. Perhaps one can sum up the whole discussion in a simple sentence: Lisp and Scheme are powerful, flexible and elegant languages, but in contrast to more ``conventional'' languages, they are certainly not easy for the inexperienced user to learn.

Nevertheless, Script-Fu is a powerful means of expanding GIMP's functionality with shortcuts for frequently needed operations and of doing automated picture processing and image generation--features useful not only for end users, but also for web designers and publishers. Just think of a home page using tons of graphical buttons with the same look: you'd have to repeat the same steps over and over for every button, and sooner or later this would become very boring and tiring. Scripting automates the process so that you only need to enter the text of the button (and maybe the color and other things), and GIMP generates them automatically for you with the aid of a Script-Fu script.
As you can see, scripting is a powerful feature that can be useful for all kinds of jobs. To make writing scripts easier, Marc Lehmann set out to write extension software for GIMP which is already contained in the 1.1.x developer versions and may also be used with the stable 1.0.x tree of the GIMP: he implemented an interface that makes it possible to write GIMP scripts in the Perl language. I will not start another discussion about the qualities and weaknesses of Perl here, but the language is in fact much easier to learn than Script-Fu if you have no programming experience. Also, and this is the main advantage in my opinion, most CGI scripts and web-based programs are written in Perl, so many people already know the language.
Perl interpreters are available with all the Linux distributions I know, because Perl has become one of the important components of a UNIX system, so normally there's no need to install it separately. If you use the stable version of GIMP (1.0.x), the GIMP-Perl package needs to be installed first. See the box below on how to do this. If you want to use interactive scripts, a language binding between Perl and the GIMP toolkit gtk must be added to the system as well, so that the corresponding features are accessible. The instructions on how to do this are contained in the installation box, too.
Enough of theory--let's take a tour of the small GIMP-Perl script below to see how the system works. (text version of this listing)
#!/usr/bin/perl
# Simple script to demonstrate the basics of gimp-perl

use Gimp;
use Gimp::Fu;

register "colored_canvas",
         "Creates a colored canvas",
         "This script creates a canvas pattern over a color gradient\n",
         "Wolfgang Mauerer",
         "wolfgang\@mynetix.de",  # @ must be written as \@ in a perl string
         "0.0",
         "<Toolbox>/Xtns/Script-Fu/Patterns/Color Canvas",
         "",
         [],
         sub {
             my $image = new Image(640, 480, RGB);
             # Set new selections for the fore- and background color.
             Gimp->gimp_palette_set_foreground("#ff0000");
             Gimp->gimp_palette_set_background("#0000ff");
             my $layer = $image->layer_new(640, 480, RGB_IMAGE,
                                           "Color Canvas", 100, NORMAL_MODE);
             # Create a color gradient with the chosen colors.
             Gimp->gimp_blend($layer, 0, 0, 0, 100, 0, 0, 0, 0, 0,
                              0, 0, 639, 379);
             # ... and apply the canvas filter.
             Gimp->plug_in_apply_canvas(1, $image, $layer, 0, 10);
             $image->add_layer($layer, 0);
             return $image;
         };
exit main;

The first line of the script, #!/usr/bin/perl, has nothing to do with GIMP or Perl--it simply tells the system to start Perl to interpret the commands contained in the file. You might already know this construction from shell scripts.
The first lines of interest to us are use Gimp; and use Gimp::Fu;, as they initialize the GIMP binding and enable its usage. If you include these lines, you'll be able to refer to nearly all of GIMP's functionality in your Perl scripts.
One central part of creating a new script is registering it within GIMP. In GIMP-Perl, this task is handled by the function register, which expects numerous arguments. On the one hand, these describe the script so that other users can get an idea of what it does through a textual description; on the other hand, they present the script to the internal structures of GIMP so that it can be executed in the right way. The arguments have the following meanings:
If you get an error message like ``Command not found'' or so (depending on the shell you use), then you should check where your perl binary is located by typing whereis perl and adjust the #! line to the correct location. If you get error messages from Perl, double check your script's code with the code shown here and try to fix the typos.
Worked quite well, didn't it? But one thing seems mysterious: when calling the register function, we set a location within GIMP's menu structure for the script, and according to the line
<Toolbox>/Xtns/Script-Fu/Simple/Color Canvas

one normally would have expected the script to appear somewhere within the Xtns menu. The reason why the script is not yet installed permanently is simple: we just used a kind of test mode, a clever feature of GIMP-Perl that enables the quick testing of new scripts. Normally, a script has to be installed before it can be used permanently within GIMP. By starting the Perl server, we gain the ability to test the script directly on the command line and get the same results as a normal, installed module. Another huge advantage of the Perl server is that you can modify your script as often as you want and then immediately restart it. When you install a script permanently, you have to reinstall it whenever you change it. And worse, each time a script is newly installed, you'll have to restart GIMP. Debugging or improving a script would be really time-consuming with this method, but with the help of the Perl server, everything works just fine, and the turnaround time becomes much shorter.
When you are sure your script works as expected, you may integrate it into GIMP using the gimptool utility. Our little script, for example, would be installed executing the line:
gimptool -install-bin simple_fu.pl

This installs the script only into your user-level GIMP configuration--other users won't be able to access your script that way. If you want all users to be able to profit from your work, you'll have to type:
gimptool -install-admin-bin simple_fu.pl

as this installs the script globally on your machine. You need root privileges to do this.
After installing the script, you can see a new menu point in Xtns -> Script-Fu -> Patterns with the name ``Color Canvas''. Select the menu point to execute the script, and you'll get the same window as you did when calling the script directly on the command line: our red-blue canvas.
With a quite simple script, we have created an image that would have taken at least several hundred lines of code without GIMP. Because we could use predefined functions (like gimp_blend) and already existing plug-ins (apply_canvas), we were able to do the whole job with very little coding.
As you might expect, GIMP is built on a huge number of functions and procedures, and a great number of them can be used within scripts--no matter whether you write them as a plug-in in C, as a Script-Fu script or with GIMP-Perl. How can one easily keep track of this large body of functionality without reading GIMP's source code and studying all the scripts and plug-ins? As we've already seen when registering our first script within GIMP, every new component which can be used by other GIMP users or programmers needs to have descriptive texts associated with it. All the basic functions, as well as all plug-ins and scripts shipped with GIMP by default, have such descriptions, too. The central access point for all available functions, together with their parameters and descriptions, is the DB browser, an interactive catalogue that aids you in developing scripts.
The DB browser can be started by selecting the menu point ``DB Browser'' from the Xtns menu. A dialog window as shown in Figure 1 appears on your screen.
The best thing to do if you want to work with GIMP-Perl is to use one of the developer versions of GIMP (1.1.x). You can get it from one of the GIMP mirrors; just take a look at http://www.gimp.org/. Installing the package for GIMP 1.0.x requires a little more work. First, you'll have to get GIMP-Perl and Gtk-Perl (at least version 0.5120) from any CPAN mirror (http://www.cpan.org/). You need to unpack the tar files with the commands:
tar xzfv Gtk-Perl-0.5120.tar.gz
tar xzfv Gimp-1.098.tar.gz

Move to the newly created directories and in each type:
perl Makefile.PL
make
make test
make install

If Gtk-Perl doesn't work and GNOME is not installed, you may have to type:
perl Makefile.PL --without-gnome
In the left part of Figure 1 is a list box with many function names--the GIMP's functionality is really enormous. When you select a function from the list, several kinds of information about it appear on the right side of the window: the function's name, a blurb (we've heard that before--it's the GIMP's name for a short descriptive text), a help text and--most important--the parameters needed to execute the function correctly. When you see one or more lines headed by ``Out:'', the function not only has input parameters, but also returns some computed values. The gimp-layer-new function, for example, takes a long list of parameter values and returns a new layer. We used the function in our small example script, feeding it the appropriate values and getting back a new layer, which was assigned to the variable $layer.
Obviously, the buttons ``Search by name'' and ``Search by blurb'' allow you to search either the function name or the short description for a piece of text. Do a search by name for the string ``canvas'', and you'll get the GIMP functions with the word ``canvas'' in their name. Select plug-in-apply-canvas and see what parameters were used for the function in our script.
If you look at the function names closely, you will see a very important detail: while all the functions listed in the database have normal dashes (-) in their names, their counterparts in the Perl script use underscores (_) instead. Scheme (and Lisp) usually use the dash as a visual separator, while Perl follows the underscore convention. As a result, your scripts are more consistent with other Perl code and thus easier to read and understand by programmers other than you. So when you look up functions in the DBB (an abbreviation for database browser), don't forget to replace the dashes with underscores in your Perl program.
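A quick way to do that substitution mechanically, for instance when pasting a name out of the DBB into a script (the helper name is mine, not part of GIMP-Perl):

```shell
# Convert a PDB-style function name (dashes) into its GIMP-Perl
# spelling (underscores).
pdb_to_perl() {
    printf '%s\n' "$1" | tr '-' '_'
}

pdb_to_perl plug-in-apply-canvas          # -> plug_in_apply_canvas
pdb_to_perl gimp-palette-set-foreground   # -> gimp_palette_set_foreground
```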
In some functions, constants may be used as parameters, because descriptive names are easier to remember than numbers. Pay attention when using such constants, because GIMP-Perl replaces the dashes with underscores here, too.
The DBB is a powerful tool to aid you in script development, but it's not the only one available within the GIMP. The alternative is the PDB, the Procedural Data Base Browser. Both show the same functions, but present them in a different manner. You can start it by selecting the appropriate entry from the Xtns menu. As you can see, there is a GIMP-Perl logo in the upper-right corner. The tool is therefore not a script-fu tool, but natively written for GIMP-Perl. In the window's headline, you can see that the browser is still an ``early alpha version'', so be prepared for things to change. Nevertheless, PDB is already a very good helper when creating GIMP-Perl scripts.
The whole thing works like this: in the ``Command'' text-box, you type in a function name, and all functions containing the substring are shown in the box below. You can either continue typing until there's only one solution left (insert it by pressing F2), or select a function name from the list. Select, for example, plug_in_apply_canvas. When you press space after the function name, the PDB will prompt you for the first parameter of the function, simultaneously presenting a list of valid alternatives. Again, you can choose one from the list or type in the parameter via the keyboard (completion by F2 works too). The whole game runs again until all needed parameters for the function call are selected. A nice thing for functions with 10,000 parameters is the status bar at the bottom of the window, which shows the percentage of parameters completed.
When used in conjunction, PDB and DBB are two very useful tools to aid you in script development.
According to Larry Wall and Randal Schwartz, a programmer has three main virtues, as they state in Programming Perl: laziness, impatience and hubris. But why do I tell you this when talking about the GIMP? Take a look at the function calls: aren't they really too long for a lazy and impatient programmer? Always typing Gimp->gimp_... before a simple call can be boring. GIMP-Perl introduces some shortcuts for this problem. When you look at the GIMP's functions, you find they all fit into a certain category that can be deduced from their prefix: all functions starting with gimp-palette- have to do with palette operations, layer functions have the string gimp-layer- in front of their function name, and so on. As GIMP-Perl uses Perl's object-oriented syntax, much typing can be saved by using the shorter forms for these operations, with the side effect that your scripts will be much easier to read and understand. If, for example, you have an image object ($image in our example), you can use the gimp-image- functions directly by appending them to the object. Gimp->image_add_layer($image, $layer, 0) becomes $image->add_layer($layer, 0) this way. $image isn't needed as a parameter any more, since it is clear to which image the layer will be added.
Another possibility for shortening your scripts is using abbreviations like the one available for palette operations: instead of writing Gimp->palette_set_foreground(...), you may simply use Palette->set_foreground(...).
To see what kinds of abbreviations are possible exactly, take a look at the Gimp/Fu man page by typing man Gimp::Fu (if that doesn't work, try perldoc Gimp::Fu).
Programmers are just humans, so they tend to make errors in their programs. The situation with GIMP-Perl is exactly the same, but with a ``special extension'': you can, on the one hand, create erroneous Perl code and, on the other hand, call the GIMP's functions in the wrong manner.
The Perl interpreter helps us fix things falling into the first category by displaying corresponding error messages, but what about errors originating from the GIMP? Imagine we were calling one of the internal GIMP functions with the wrong set of arguments. Although the number of parameters is right, one or more may have values that are out of range. The only message issued is something like procedural database execution failed at file name line line_number. That's not very much information.
Tracing is a way to get more precise messages about your running script. Simply insert Gimp::set_trace(TRACE_NAME) into the header of your subroutine, and you'll get a list of all called functions together with their arguments. For our simple script, the output to STDERR will look like Listing 2 (the lines were split into several parts because they are very long in the original output). It is possible to fine-tune the information given in a trace with the parameter of the set_trace call. In our example, we have used TRACE_NAME, but there are other possibilities:
Option      | Output Information
TRACE_NONE  | No tracing output at all.
TRACE_NAME  | Print all executed functions together with their parameters, as shown in the previous examples.
TRACE_DESC  | Print the function description from GIMP's internal database: all parameter names, their descriptions and their possible values.
TRACE_ALL   | Print all information available.
gimp_image_new(width=640, height=480, type=0) = (image=7)
gimp_palette_set_foreground(foreground=[255,0,0]) = ()
gimp_palette_set_background(background=[0,0,255]) = ()
gimp_layer_new(image=7, width=640, height=480, type=0, name="Color Canvas",
    opacity=100.000000, mode=0) = (layer=16)
gimp_blend(drawable=16, blend_mode=0, paint_mode=0, gradient_type=0,
    opacity=100.000000, offset=0.000000, repeat=0, supersample=0,
    max_depth=0, threshold=0.000000, x1=0.000000, y1=0.000000,
    x2=639.000000, y2=379.000000) = ()
plug_in_apply_canvas(run_mode=1, image=7, drawable=16, direction=0, depth=10) = ()
gimp_image_add_layer(image=7, layer=16, position=0) = ()
Let's demonstrate the features of GIMP-Perl shown so far in an example more complex than the first. Have you ever created a home page that uses graphical buttons? Then you certainly know that for each button, the same operations have to be applied every time--the perfect situation for a GIMP-Perl script.
What we are working on are buttons with a text component centered in their middle (variable fonts and sizes should be usable), bordered by a frame that looks like it has been carved into your desktop. If you've already worked with a program using Sun's new Java metal layout, you might already know this look. Gtk, too, offers the rendering of buttons according to such a scheme.
Before we start writing our script, we must think about the steps that need to be done first. We need three layers: one for the background, one for the border and one for the text. The border and text layers must be transparent and therefore need an alpha channel.
As the font name, size and button text need to be user-configurable, we must pass them as parameters to the script. When you apply existing script-fu scripts, dialog boxes for the various settings appear before the script is actually run. The same thing is possible with GIMP-Perl: when registering the function, we just need to say what kind of user-customizable arguments the script needs, and the dialog where the parameter values can be entered is generated and shown automatically when the script is executed. All these parameter definitions have to be put inside the square brackets that already appear in the register call of the first example. The line describing a parameter also needs to be put into square brackets. For our new script, this would look like:
[ [PF_STRING, 'text', "The button's text.", ""],
  [PF_FONT,   'font', "Font used for the button's text label."],
  [PF_BOOL,   'flat', 'Flatten the image and convert it to indexed'] ]

Experienced Perl users know that a reference to an array containing array references is passed that way, but this is just a technical detail. The [] block describing an argument is always structured in this way:
[PF_TYPE, name, description, default value (optional), other values (optional)]

PF_TYPE sets the type of the argument. This is important because the input methods differ for the diverse data types: a font selection is done via a font-selection box, while strings are entered in a text box. Boolean (true/false) values are entered using a simple check box. The possible values for PF_TYPE are documented in the Gimp/Fu man page. As the number of input types is quite large, we won't cover them all here--just take a look at the man page. (In fact, it would be quite difficult to write a script that needs parameters of all available types.)
name will precede the actual input element as a descriptive text; description will show up as a tool tip (appears when you place your mouse over the selection widget and do not move it for some time).
To be useful, the default value of a parameter should supply a safe ``base value'' to the script--or set the parameter to the value that produces the best-looking result--in order to give the user a good starting point.
The other values have different meanings for the different parameter types, but we won't use them here.
What else will we need to create the button? A routine to draw the frame would be quite handy, and we'll implement this as a Perl subroutine that calls the corresponding GIMP functions to paint four lines with the aid of the pencil tool. The script is shown in Listing 3. As you can see, the subroutine used for controlling the creation of the button is no longer an anonymous one as in the simple example, but a named Perl subroutine. Therefore, we'll have to use \&create_etched_button to pass a reference to the subroutine to the register function.
Our third argument to the script is named flat. When it is selected, all layers of the image will be merged together, and the resulting layer will be converted to the ``indexed'' color format, so the image may be saved in GIF format.
The general outline of the script should be clear now, as the structure doesn't change too much from our simple example except that we now have parameters and the subroutine is a real Perl subroutine. One thing that still needs to be explained is how the text layer is created.
Font and size for the text have been selected by the user and can be found in the string $font. Its format is called the X logical font description and looks like:
-Adobe-Courier-Medium-R-Normal-14-140-75-75-M-90-ISO8859-1

Quite complicated. But that's no problem: we can use that string directly as an argument for gimp-text-fontname, a function that creates a new layer containing a specified text string. The function needs the font name as well as the font size; the size is contained in the font string and can be extracted from it via the function xlfd_size provided by GIMP-Perl.
Drawing the two overlaying frames that create the etched effect is done by the Perl procedure draw_frame which is not exported into the GIMP's database and therefore can be used only in our script. It draws four lines, based on the knowledge of where the upper-left and lower-right corner of the box are located. The settings for color and brush are not changed; the procedure operates with the settings active when it is called.
If the flat flag has been set ($flatten is 1 then), additional actions take place: the layers are merged together, and the image is converted to indexed colors, suitable for saving as a GIF file.
For the user's convenience, the background and foreground colors of the palette are restored to their settings from before the script's execution. Then we return the picture to the GIMP, and a new window containing a graphical button shows up. You won't see the etched effect too much here, but try putting the button in a web page--it will look like Figure 4 shows.
If you like the script, you may install it with the gimptool command as demonstrated in the simple example. A new entry called ``Etched'' then shows up in Xtns -> Script-Fu -> Buttons.
One thing to note: I called up RR, and they were very vague about whether they allow people to use IP masquerading. The sales rep basically said that if you hog bandwidth, or run servers that hog bandwidth, they will get upset.
There are two "I"'s in this document. Sometimes it is Andrew talking, and sometimes it is Mark.
Your workstations and hub are inside your private network, indicated by a white background in the image, and the rest of the world is shown as the gray area. Note that the gateway machine is riding the fence between the two. Each of the Ethernet cards has an IP address assigned to it, and your gateway has one in the outside world and one on the inside. Positioned as such, the gateway has the power to forward communications from your private network to the outside world.
For the purpose of this discussion, I have assigned the internal network the 10.x.x.x (netmask 255.0.0.0) block of reserved private IP addresses and chosen the domain name "local". These can of course be changed if you know what you are doing, but they should work for most people.
The workstations are named "w1.local" (10.0.0.10) through "w3.local" (10.0.0.30) and the masquerading gateway is called "main.local" (10.0.0.1 on its private adaptor.) As before, you can modify or expand this scheme if you feel comfortable with it.
The workstations can be any machine running a TCP/IP capable OS, such as Linux, MacOS, or even MS operating systems, and you will need to outfit each machine with an ethernet card.
Choose a 10/100 autosensing hub; it makes your life a lot easier. Use 100 mbit ethernet cards on the inside; the ethernet card connecting your gateway computer to the outside can be just a 10 mbit card. I would still use the same type of ethernet card throughout and make them all 10/100. Using the same hardware across the board makes your life easier most of the time.
It is a good idea to stick with PCI ethernet cards in linux systems. Ethernet cards are quite cheap now, and having PCI cards will save you some possible headaches in the configuration phase. I have used the Netgear FA310TX card flawlessly in several machines, and have been told that Intel cards provide extremely reliable service. In any case, do some web research and make sure that there are commonly available modules for the cards you buy. The Netgear card and most 3com cards (which I have used extensively) have drivers that ship with Redhat 6. If you must use ISA cards, 3com 3c509s are quite easy to use. Keep in mind that if you are using an older machine for your gateway, it may have only ISA slots, possibly requiring you to use two ISA ethernet cards. In this case, you will have to acquire a utility program (which usually runs only in DOS) for your cards from their manufacturer, and use this program to set the IO address and/or IRQ of the cards to prevent conflicts.
First, physically install your carefully selected Ethernet cards in the machine. It is much less confusing to use two different brand cards on the gateway; by having a unique driver for each card you can be sure which one will be called eth0 and which will be eth1, and also lessen the chances of interrupt conflicts (to be avoided like the plague). If you opt for ISA cards, you may need to download some DOS utilities to set the IRQ and I/O addresses.
Start with a clean install of the Redhat 6 (for the purpose of this article) distribution on the gateway. If you've got the hard drive space, go ahead and install everything, but you really don't need an X server, graphics-processing software, etc. on this machine. Just make sure you install the ipchains, BIND, caching nameserver, pump, and other packages critical to what we are doing here. Also tell the install program to start named at boot.
For the purpose of this article, eth0 will be going to the cable modem and eth1 will be on your private network. You can probably get the first ethernet device up and running during the initial install. Have eth0 connected to the cable modem with the necessary crossover cable and tell it to use DHCP to get its IP address (don't specify one) since the Roadrunner service will be assigning us an internet IP. Use the netconf utility (or edit the /etc/sysconfig/network-scripts/ifcfg-eth? and /etc/conf.modules files directly) to get eth1 working once you can boot, and assign this one a private network IP address (10.0.0.1). It may take a bit of tinkering to get both cards detected and up. Be sure to specify the correct kernel modules/drivers, and once you can do an 'ifconfig' and see both eth0 and eth1 (or see them both go up at boot time), you should be ready to continue. It is possible to compile network card drivers into the kernel (this was often done in the past) but it is common and perfectly acceptable to simply use modular drivers as we've suggested here.
Ensure that your /etc/sysconfig/network looks like this and replace /etc/hosts with your own version of this file. The /etc/hosts file isn't absolutely necessary since we'll be setting up a DNS server, but it's a good backup.
NOTE ON TWO 3c509 NETWORK CARDS:
Install both ISA ethernet cards before you install RedHat Linux. The two ethernet cards Mark used (a long time ago) were 3com 3c509s. The first had values of irq=10, address=300, and the second had irq=11, address=310. Also, when you install RedHat, go ahead and install it for a LAN and have it autoprobe the ethernet cards.
NOTE ON DHCP:
You can set up DHCP using RedHat's control panel, netconf, by editing the files manually, or during the installation of RedHat (or whatever other Linux distribution you are using).
Try pinging a few outside servers or retrieving a few web pages with lynx to see if your Roadrunner connection is functioning. If so, you are ready to add masquerading. Append this block of modprobe and ipchains commands to your /etc/rc.d/rc.local script to enable forwarding/masquerading and also provide some fairly strong firewall rules.
Armed with some knowledge from the DNS-HOWTO, create or modify /etc/named.conf, /etc/resolv.conf, and the related files in /var/named. You must also create or edit /etc/pump.conf to keep pump from overwriting your resolv.conf settings every time eth0 goes up. Verify that 'named' is being run at bootup (there will be an S??named link in /etc/rc.d/rc3.d/ if it is) and you should have a caching nameserver running upon reboot, as well as DNS for your private network.
Here is an example for other computers in your network.
Also, if you are using pc or mac clients or other stuff, check out the masquerading mini-howto.
The general idea is to specify your masquerading machine (10.0.0.1) as the gateway and DNS server for each machine. The IP-Masquerading HOWTO has an excellent section on how to configure workstations running several different non-Linux operating systems in a masq'd private network.
Masquerading is inherently somewhat unsafe because we must allow traffic to pass through the firewall. By disabling telnet, ftp, or other daemons that listen at ports on your gateway you can avert much of the danger, and fully understanding IP firewall chains is also valuable. A port forwarder such as ipportfw can also be used to redirect incoming requests for connections to other machines on your network (which would then be running the requested service), directing the danger away from your gateway.
You can prevent access to your DNS server by moving it to another machine on your network or by simply giving an appropriate listen-on directive to named (see the named.conf man page) to keep it from binding to a port on your external interface. If you would like higher security but need login capabilities from the outside, look into openssh, which allows telnet-like logins over an encrypted connection. It is also advisable to install an advanced logger such as tcplogd which can detect and inform you of most portscans and malicious activity.
Lastly, in many cases you can simply turn off the interface to the outside world when not using your connection, thereby lessening the chances of someone gaining unauthorized access. Simply issue 'ifdown eth0' on your gateway machine to disable your connection and 'ifup eth0' when you need to use it again.
Summary of Security
Dyndns.org can provide you with a static domain name despite your dynamic IP address, giving others on the internet an easy-to-remember, constant name for your gateway machine. Once you have registered with dyndns, a utility called 'ddup' can contact dyndns.org and update your nameserver record when your IP address changes. Appending this shell-script fragment to our /etc/rc.d/rc.local will update your dyndns record at boot time, but only if your IP address has changed (dyndns doesn't like you to update your record for no reason). This assumes that you have 'ddup' installed properly.
I tend to reboot the gateway machine every day or two and haven't had any problems with my assigned IP changing while it was up, though this is technically possible. If you plan on leaving your gateway up for weeks at a time, you might want to have cron run this script occasionally to make sure your dyndns record is always current. This will take a little reworking -- experiment and see what you can do.
# Firewall config - Should be appended to the end of /etc/rc.d/rc.local to run on boot.
# Adapted from examples in the IP Masquerading HOWTO and IPChains HOWTO. See the original
# documents to learn more. These examples should provide a fairly safe masquerading
# firewall.

echo "Loading IP masquerading modules..."
# load modules to handle masquerading some tricky but common protocols
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
/sbin/modprobe ip_masq_irc

echo "Turning IP forwarding on..."
# make sure the forwarding is turned on
echo "1" > /proc/sys/net/ipv4/ip_forward

# Get the dynamic IP address assigned via DHCP
# and external interface name, save them to variables for easy use
extip="`/sbin/ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | sed -e 's/.*://'`"
extint="eth0"

# Do the same for internal network name and interface
intint="eth1"
intnet="10.0.0.0/8"

echo "Configuring firewall chains:"
echo -n "input..."
#############################################################################
# Input chain: flush and set default policy of reject. Actually the default policy
# is irrelevant because there is a catch-all rule with deny and log.
ipchains -F input
ipchains -P input REJECT
# local interface, local machines, going anywhere is valid
ipchains -A input -i $intint -s $intnet -d 0.0.0.0/0 -j ACCEPT
# remote interface, claiming to be local machines, IP spoofing, get lost
ipchains -A input -i $extint -s $intnet -d 0.0.0.0/0 -l -j REJECT
# remote interface, bounce anything trying to open a connection to us
# this should keep anyone from opening TCP connections to this machine from
# the outside world. Just an example of what we can do with IPChains, and not
# a bad idea unless you have a reason for letting people connect to your firewall.
# ipchains -A input ! -f -i $extint -p TCP -y -j REJECT
# remote interface, any source, going to roadrunner dhcp address is ok
ipchains -A input -i $extint -s 0.0.0.0/0 -d $extip/32 -j ACCEPT
# loopback interface is valid.
ipchains -A input -i lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
# catch-all rule, all other incoming is denied and logged.
ipchains -A input -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT

echo -n "output..."
#############################################################################
# Output chain: flush and set default policy of reject. Actually the default policy
# is irrelevant because there is a catch-all rule with deny and log.
ipchains -F output
ipchains -P output REJECT
# local interface, any source going to local net is valid
ipchains -A output -i $intint -s 0.0.0.0/0 -d $intnet -j ACCEPT
# outgoing to local net on remote interface, stuffed routing, deny
ipchains -A output -i $extint -s 0.0.0.0/0 -d $intnet -l -j REJECT
# outgoing from local net on remote interface, stuffed masquerading, deny
ipchains -A output -i $extint -s $intnet -d 0.0.0.0/0 -l -j REJECT
# anything else outgoing on remote interface is valid
ipchains -A output -i $extint -s $extip/32 -d 0.0.0.0/0 -j ACCEPT
# loopback interface is valid.
ipchains -A output -i lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
# catch-all rule, all other outgoing is denied and logged.
ipchains -A output -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT

echo -n "forward..."
#############################################################################
# Forward chain: flush and set default policy of deny. Actually the default policy
# is irrelevant because there is a catch-all rule with deny and log.
ipchains -F forward
ipchains -P forward DENY
# Masquerade from local net on local interface to anywhere.
# This is the line that does all the work; most of the rest of the lines
# in this file are just for security reasons.
ipchains -A forward -i $extint -s $intnet -d 0.0.0.0/0 -j MASQ
# catch-all rule, all other forwarding is denied and logged. Pity there is no
# log option on the policy but this does the job instead.
ipchains -A forward -s 0.0.0.0/0 -d 0.0.0.0/0 -l -j REJECT

echo "done."
NETWORKING="yes"
FORWARD_IPV4="yes"
HOSTNAME="main.local"
DOMAINNAME="local"
GATEWAY="10.0.0.1"
GATEWAYDEV="eth0"
NETWORKING="yes"
FORWARD_IPV4="no"
# change hostname on each machine
HOSTNAME="w1.local"
DOMAINNAME="local"
GATEWAY="10.0.0.1"
GATEWAYDEV="eth0"
127.0.0.1   localhost localhost.localdomain
10.0.0.1    main main.local
10.0.0.10   w1 w1.local
10.0.0.20   w2 w2.local
10.0.0.30   w3 w3.local
search local columbus.rr.com
nameserver 10.0.0.1    #our local nameserver
nameserver 128.146.1.7 #fill in a backup server. do not use this one, it is for OSU students only.
device eth0 {
    nodns
}
options {
        directory "/var/named";
};
zone "." {
        type hint;
        file "named.ca";
};
zone "local" {
        type master;
        file "local.zone";
        notify no;
};
zone "0.0.10.in-addr.arpa" {
        type master;
        file "local.reverse";
        notify no;
};
zone "0.0.127.in-addr.arpa" {
        type master;
        file "named.local";
};
@         IN  SOA  main.local. root.main.local. (
                   200001151 ; serial
                   8         ; refresh
                   2         ; retry
                   1         ; expire
                   1 )       ; default_ttl
@         IN  NS   main.local.
localhost IN  A    127.0.0.1
main      IN  A    10.0.0.1
w1        IN  A    10.0.0.10
w2        IN  A    10.0.0.20
w3        IN  A    10.0.0.30
0.0.10.in-addr.arpa.    IN  SOA  main.local. root.main.local. (
                             1997022700 ; serial
                             28800      ; refresh
                             14400      ; retry
                             3600000    ; expire
                             86400 )    ; default_ttl
1.0.0.10.in-addr.arpa.  IN  PTR  main.local.
10.0.0.10.in-addr.arpa. IN  PTR  w1.local.
20.0.0.10.in-addr.arpa. IN  PTR  w2.local.
30.0.0.10.in-addr.arpa. IN  PTR  w3.local.
# Update dyndns.org entry with crafty abuse prevention
# This will not work unless you have a dyndns account and the ddup package installed.
# This requires some variables that are set in the firewall config script,
# so this should be appended to the end of the firewall script.
# replace dummy.hostname with your registered hostname
reghost="dummy.hostname"
regip="`nslookup $reghost main.dyndns.org | tail -n 3 | grep 'ddress' | awk '{print $2}'`"
echo -e "\n Dyndns.org abuse prevention IP address check:"
echo "$reghost registered: $regip"
echo -e "$extint has IP address: $extip \n"
if [ "$regip" = "$extip" ]; then
    echo "Address has not changed. DDUP not run."
else
    echo "Address has changed. Updating your record."
    ddup --host $reghost
fi
alias eth0 3c509
alias parport_lowlevel parport_pc
pre-install pcmcia_core /etc/rc.d/init.d/pcmcia start
alias eth1 tulip
"Thou shalt run lint frequently and study its pronouncements with care, for verily its perception and judgment oft exceed thine."
-Henry Spencer, "The Ten Commandments for C Programmers"
C programmers take pride in thinking (and often proclaiming to the world) that they know what they are doing. This supreme self-confidence (or shall we say arrogance?) is not a bad thing - but a little caution is always judicious, as C is a language with many dark corners (why else should people write books like "C Traps and Pitfalls"?). Taking lint as a companion with you on your journey into the dark woods of C will always be worthwhile - though this companion is at times a bit noisy and tiring!
In the good old days (it is said), a decision was made to take full semantic checking out of the C compiler and put it in a stand-alone program called lint (the usual reasons - making the compiler smaller, simpler and faster - worshiping at the altar of the little Tin God of efficiency). The C programmer, so sure of himself, never took the trouble to run lint on his code - with the extremely gratifying result that he got buggy code which compiled very fast! Lint is a tool which shows you how your smart C compiler may spring surprises on you - ignore him at your own peril.
You can give LCLint a try. LCLint is a powerful tool which is available for free in source form from http://lclint.cs.virginia.edu/ftp/lclint/lclint-2.4b.src.tar.gz. LCLint, as we will see later, is much more than a lint.
LCLint does the traditional lint checks like detecting:
Here is a small C program:
main()
{
    int a[10];
    if (sizeof(a)/sizeof(a[0]) > -1)
        printf("hello\n");
}

We expected this to print hello, but it did not. gcc did not give us any hint. Let us see what lint has to say about this beauty. Here is the output from running 'lclint a.c':
LCLint 2.4b --- 18 Apr 98

a.c: (in function main)
a.c:4:15: Operands of > have incompatible types (arbitrary unsigned integral type, int): sizeof((a)) / sizeof((a[0])) > -1
  To ignore signs in type comparisons use +ignoresigns
a.c:6:2: Path with no return in function declared to return int
  There is a path through a function declared to return a value on which there is no return statement.  This means the execution may fall through without returning a meaningful result to the caller. (-noret will suppress message)

Finished LCLint checking --- 2 code errors found

Oh, oh, sizeof gives you the size as an unsigned value. We are comparing this to -1, which, when interpreted as an unsigned value, yields a big number.
The output of LCLint is verbose, but it is in a form readable by ordinary mortals, not ANSI (or ISO or whatever) legalese. The output also displays enough context to help us immediately locate the trouble spot. Note that we are also told how to turn off such errors, i.e., use +ignoresigns as an option when invoking LCLint. You may call LCLint a program with a very 'helping mentality'.
Let us see another example, a goof-up which any C programmer worth his name should have made when he was a toddler:
main()
{
    int a=0;
    while (a=1)
        printf("hello\n");
    return 0;
}

LCLint is justifiably angry at such amateurish use of C, but he is gentle in his admonishments:
LCLint 2.4b --- 18 Apr 98

c.c: (in function main)
c.c:4:14: Test expression for while is assignment expression: a = 1
  The condition test is an assignment expression. Probably, you mean to use == instead of =.  If an assignment is intended, add an extra parentheses nesting (e.g., if ((a = b)) ...) to suppress this message. (-predassign will suppress message)
c.c:4:14: Test expression for while not boolean, type int: a = 1
  Test expression type is not boolean or int. (-predboolint will suppress message)

Finished LCLint checking --- 2 code errors found
LCLint is capable of detecting many memory management gotchas. Here is one:
#include <stdlib.h>
int main()
{
    int *p = malloc(5*sizeof(int));
    *p = 1;
    free(p);
    return 0;
}

If you thought LCLint would be fooled, you are mistaken:
LCLint 2.4b --- 18 Apr 98

d.c: (in function main)
d.c:5:7: Dereference of possibly null pointer p: *p
  A possibly null pointer is dereferenced.  Value is either the result of a function which may return null (in which case, code should check it is not null), or a global, parameter or structure field declared with the null qualifier. (-nullderef will suppress message)
d.c:4:14: Storage p may become null

Finished LCLint checking --- 1 code error found

When the program is rewritten as follows:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    int *p = malloc(5*sizeof(int));
    if (p == NULL) {
        fprintf(stderr, "error in malloc");
        exit(EXIT_FAILURE);
    } else
        *p = 1;
    free(p);
    return 0;
}

LCLint is perfectly happy.
Here is an example of code which tries to free a block twice:
#include <stdlib.h>

main()
{
    int *p = malloc(5*sizeof(int));
    int *q;
    q = p;
    free(q);
    free(p);
    return 0;
}

This is how LCLint responds:
LCLint 2.4b --- 18 Apr 98

f.c: (in function main)
f.c:7:19: Dead storage p passed as out parameter: p
  Memory is used after it has been released (either by passing as an only
  param or assigning to an only global). (-usereleased will suppress message)
f.c:7:10: Storage p is released

Finished LCLint checking --- 1 code error found
One can write perfectly horrible C programs without any assistance from the macro preprocessor, and yet some people are not satisfied. They forget that the C macro preprocessor is a simple program designed to do simple things, and proceed to build grandiose designs with dancing #defines, #ifdefs, #endifs and so on. The result is utter chaos. The designers of LCLint are very much aware of the C programmer's passion for macros, and they have built into their program the ability to detect many kinds of macro programming errors.
Here is a typical instance of how a macro defined to work like a function does not work like one.
#define sqr(p) p * p

main()
{
    int i=2, j;
    j = sqr(i+1);
    printf("%d", j);   /* prints 5 */
    return 0;
}

LCLint is quick to point out the error. Please note that when you run LCLint, you must specify that you expect your macros (with parameters) to behave like functions by using the flag +fcn-macros. Thus, we would check the above program with 'lclint i.c +fcn-macros'. Here is the output from LCLint:
LCLint 2.4b --- 18 Apr 98

i.c:1: Parameterized macro has no prototype or specification: sqr
  Function macro has no declaration. (-macrofcndecl will suppress message)
i.c: (in macro sqr)
i.c:1:13: Macro parameter p used more than once
  A macro parameter is not used exactly once in all possible invocations of
  the macro. To behave like a function, each macro parameter must be used
  exactly once on all invocations of the macro so that parameters with
  side-effects are evaluated exactly once. Use /*@sef@*/ to denote parameters
  that must be side-effect free. (-macroparams will suppress message)
i.c:1:16: Macro parameter used without parentheses: p
  A macro parameter is used without parentheses. This could be dangerous if
  the macro is invoked with a complex expression and precedence rules will
  change the evaluation inside the macro. (-macroparens will suppress message)
i.c:1:20: Macro parameter used without parentheses: p

Finished LCLint checking --- 4 code errors found

The third error message clearly tells you that you need to use parentheses.
What does a function prototype do? The prototype tells you what arguments the function accepts - the type and number of the arguments - and the return type of the function. It acts as a sort of interface between the function and its caller. The caller is required to abide by the interface if he wishes peace for himself, his program and the world at large. The prototype might also be thought of as placing some sort of constraint on the legal use of the function.
The provision of constraints on functions comes to your aid when you start building large systems. You are sure that your function foo_bar() is always called with the right number and type of arguments if you ensure that all your function calls take place in the presence of prototypes. There are several other constraints which you would like to place on your functions, like defining the list of globals which a function is allowed to modify. The C language does not permit any such constraints, so the only option you have is to use tools like LCLint.
Here is an example of the use of an annotation.
static void foo(int *a, int *b) /*@modifies *a@*/
{
    *a=1, *b=2;
}

main()
{
    int p=10, q=20;
    foo(&p, &q);
    return 0;
}

Note the stylized comment /*@modifies *a@*/. This is a hint to LCLint that function foo is constrained to modify the value of *a only. Let us see what output LCLint produces:
LCLint 2.4b --- 18 Apr 98

j.c: (in function foo)
j.c:3:11: Undocumented modification of *b: *b = 2
  An externally-visible object is modified by a function, but not listed in
  its modifies clause. (-mods will suppress message)

Finished LCLint checking --- 1 code error found

Here is another example:
static void foo(int *a, int *b) /*@modifies nothing@*/
{
    *a=1, *b=2;
}

main()
{
    int p=10, q=20;
    foo(&p, &q);
    return 0;
}

LCLint tells you:
LCLint 2.4b --- 18 Apr 98

k.c: (in function foo)
k.c:3:5: Undocumented modification of *a: *a = 1
  An externally-visible object is modified by a function, but not listed in
  its modifies clause. (-mods will suppress message)
k.c:3:11: Undocumented modification of *b: *b = 2
k.c: (in function main)
k.c:8:5: Statement has no effect: foo(&p, &q)
  Statement has no visible effect --- no values are modified. (-noeffect will
  suppress message)

Finished LCLint checking --- 3 code errors found

Here is another one dealing with global variables:
/*@checkedstrict@*/ static int abc, def;

static void foo() /*@globals abc@*/
{
    def = 1;
}

main()
{
    int p=10, q=20;
    foo(&p, &q);
    return 0;
}

The annotation /*@checkedstrict@*/ tells LCLint to provide error messages on all undocumented accesses of global variables, whether it be for reading or writing:
LCLint 2.4b --- 18 Apr 98

l.c: (in function foo)
l.c:5:5: Undocumented use of file static def
  A checked global variable is used in the function, but not listed in its
  globals clause. By default, only globals specified in .lcl files are
  checked. To check all globals, use +allglobals. To check globals
  selectively use /*@checked@*/ in the global declaration. (-globs will
  suppress message)
l.c:2:13: Global abc listed but not used
  A global variable listed in the function's globals list is not used in the
  body of the function. (-globuse will suppress message)
l.c: (in function main)
l.c:10:5: Called procedure foo may access file static abc
l.c:1:32: File static variable abc declared but not used
  A variable is declared but never used. Use /*@unused@*/ in front of
  declaration to suppress message. (-varuse will suppress message)

Finished LCLint checking --- 4 code errors found
We have not even scratched the surface of LCLint's capabilities. If you feel that you wish to explore more, go over to http://www.sds.lcs.mit.edu/lclint/.
Here is some advice, not from us, but from people who have learned it the hard way: if you wish to use lint in your project, start from the word go, or risk insanity. (Peter van der Linden, in his book 'Expert C Programming: Deep C Secrets', talks of a 'lint party' he had at Sun Microsystems. He must have gotten a kick out of it!)
Lint, especially a very powerful version like LCLint, can be used to learn more about C programming. Just thinking about the error messages and trying to make them go away will give you a lot of insight.
Most of the examples here apply equally well to all implementations of Smalltalk. Though all implementations of Smalltalk share the same basic characteristics, there are differences among them - especially when GUI code comes into play. There are a number of freely available1 Smalltalk implementations for Linux: GNU Smalltalk, Smalltalk/X, Squeak, and VisualWorks Non Commercial2. Squeak in particular is doing some really cool stuff lately, but the examples here are written in VWNC, since this is the flavour that I'm most familiar with. Also, even though there's a later version available, I'm going to use VWNC v3.0 for illustrative purposes, since that is the version with the most freely available tools/extensions.
This article covers some background information on Smalltalk, getting and installing VWNC3, characteristics of Smalltalk (with examples), and further references. The characteristics of Smalltalk that are covered here are that it:
Smalltalk has been around for quite a while, it originated at the Xerox Palo Alto Research Center (PARC) in the early 70s, and its original intention was to provide an environment for children to start learning how to program in. Accordingly, its syntax is very simple and it's a very mature environment. Smalltalk's power is rooted in its large class library, which can be a double-edged sword. You don't have to keep reinventing the wheel to get any work done, but on the other hand you spend a lot of time looking for the right wheel to reuse when you're first learning it4.
Another double-edged sword is that it's a pure OO environment, and thus encourages people to make the paradigm shift to OO much more strongly than other Object Based (OB) languages5 (more on this later). People who make the shift tend to become very attached to Smalltalk and find it a very fun, productive6, and theoretically pure environment to work in. People who don't make the shift tend to shun it, and stay in OB languages where it's easier to get away with writing procedural code in the guise of writing OO code. (No Dorothy, just because you're using a C++ compiler doesn't mean that you're writing OO code).
Though many people characterize Smalltalk as only being useful in large, vertical markets, it is extremely flexible and is also used in very small, horizontal markets. Smalltalk has been used to write applications7 ranging from large network management systems to hospital clinical systems, Palm Pilots, micro controllers, and embedded firmware.
But that's enough background information, let's get on with some real examples.
To start it up, I like to copy the virtual machine to the image directory, or to make a symbolic link. Since I have it installed on my dos partition, I'll make a copy:
cp /dos/vwnc3/bin/linux86/visualnc /dos/vwnc3/image/visualnc
I noticed that the virtual machine isn't executable by default, so let's make it that way:
chmod 555 visualnc
Then, to start the image, you need to tell the virtual machine which image to run:
cd /dos/vwnc3/image
visualnc visualnc.im
A license reminder screen pops up every time you start VWNC, click I accept, and proceed through it. The window that appears on top is called the Transcript, and the window below it is called a Workspace. Next, you need to inform the image where to find its needed system files. To do this, click on File>Set VisualWorks Home from the Transcript. In the dialog that appears, enter in the home directory: /dos/vwnc3.
The Transcript and Workspace look like:
Fig. 1 - VWNC Transcript
Fig. 2 - Default VWNC transcript
The startup workspace has some useful information that's worth reading, but it isn't necessary to read it to continue through this article.
A note on mice: Smalltalk assumes that you have a three button mouse. If this isn't the case, you can simulate a three button mouse on a two button mouse by clicking both the left and right buttons at the same time.
Let's give saving a try (might as well save the system file location you just specified). Move your windows around until they're where you like them, then save your image: select File>Save As... It's a good idea to save your image as something other than the virgin image, so that if you run into problems you can always return to the clean image and reload your code. I'll save the image as testingImage. After saving, try closing the image: File>Exit VisualWorks...>Exit. Then, to restart, be sure to pass the new image name to the virtual machine. Note that when you save your image, the date and time are printed on the Transcript, like so:
There's a lot more I could say here on saving your work (Perm Save As, filing out, change log, Envy), but I'll digress in the interests of brevity.
Fig. 3 - Transcript after saving as 'testingImage'
Transcript cr. | This code gets a hold of the Transcript object, and asks it to show a carriage return on itself. |
Transcript show: 'Hello World.' | Gets a hold of the Transcript object, and asks it to show 'Hello World' on itself |
Date today | Asks the class Date what the date today is |
Another thing to notice is the literateness of Smalltalk. To get the date today we just asked Date for today. Though Smalltalk is a very literate language, it obviously has to break down somewhere, for example, we can't ask Date for tomorrow. That being said though, the general syntax of Smalltalk is very simple and easy to read. Keep this in mind while looking through the upcoming code examples.
Let's move on to the third example of this section then, and get into the paradigm shift aspect of OO:
Ex 3: Illustrating the paradigm shift
Java is similar in the respect that it is byte-compiled, but different in the respect that it isn't incrementally byte-compiled. So when programming in Java, you need to recompile all of it (or parts of it, if you're using make) and relink all of it every time you want a change to take effect.
Now you'll notice that your entire system is running with a Mac look-n-feel! The first time I saw this back in '95, it blew me out of my chair. I had just spent a year doing a very painful port of OpenWindows to Motif for parts of a C based application. Then, here somebody showed me how they could 'port' their application from SunOS to Solaris to MacOS with a click of a button!
Fig. 11- Selecting a Mac look-n-feel
Keep in mind that the above window is running on a Linux box! I commonly use this feature at work when my employer is developing in Windoze, because I prefer the Motif look-n-feel.
Fig. 12 - Transcript with Mac look-n-feel
Ex 4: Adding inspect menus to all windows
self view model inspect.
ScheduledBlueButtonMenu :=
Menu
labels: 'relabel as...\refresh\move\resize\front\back\collapse\close\inspect' withCRs
lines: #(1 7 8)
values: #( #newLabel #display #move #resize #front #back #collapse #close #inspectView).
Garbage collection: AKA the sanity saver. Smalltalk has garbage collection, which means objects that are no longer referenced are cleaned up to free memory. I remember pulling out my hair many a time when programming in C++, trying to find memory leaks. In C++ it's up to the developer to manage memory: if you don't deallocate the memory you're using, your application continually takes up more memory until the machine runs out of memory and crashes.
Speaking of machines crashing, you'll note that we never had to do any pointer manipulation. This is because Smalltalk has no pointers. (Well, this is technically incorrect. Every variable reference is actually a pointer to an object, it's just that in Smalltalk the developer is relieved of the burden of manually maintaining pointers).
Category | A group of classes |
Class | A type of an object |
Horizontal market | A market that tends to have a very large audience and has a very small impact on that audience. Shrink-wrapped software addresses a horizontal market. For example, a word processing package - if it crashes, just reload your last saved snapshot. |
Inspector | A GUI type of object that allows you to look at and work with objects. |
Literateness | A simple definition of literateness is how readable/simple the syntax of a language is. Literate programming is programming for readability, for the programmer who comes after you. |
Method | A piece of Smalltalk code for an object. |
Object | A grouping of related data and operations. It exhibits behaviour through its operations on its data. |
Protocol | A group of methods. |
Reflectiveness | How much of an environment can be manipulated within itself. Some 98% of Smalltalk is written in Smalltalk, which makes it easy to customize, enhance, or tweak the environment. (Squeak is the notable exception here: 100% of it is written in Smalltalk.) |
Transcript | The main window of the IDE, where other windows (browsers, workspaces, etc) are opened from. Also keeps a running list of system messages. |
Workspace | A scratchpad where developers can experiment with code. |
Vertical market | A market that tends to have a very small audience and has a very large impact on that audience. For example, a network management system for a telecommunications company - if it crashes the company loses a million dollars a minute. |
Patrick Logan <[email protected]>
wrote in message
news:[email protected]...
>
> I wish the original Oak team had chosen to adopt Smalltalk
> rather than invent Java
They tried to, but ParcPlace wanted too much on a per-copy royalty basis...sigh
Introduction:
Software mobility means different things to different people. Mobile computing is often used to mean email access from a laptop, process migration refers to the automatic redistribution of active processes within a cluster system, and agent migration describes processes on a mission, gathering data for a full report upon returning home. The first example is an extension of the client-server paradigm of the 1980s, while the other examples, the self-directed movement of a process from one networked machine to another, are the subject of leading-edge research [1].
Cluster systems are multi-computers, a network of machines that present a single-server image to a client. There are many different processes running on a cluster system, some of which may self-replicate in order to handle a suddenly higher client request load. A cluster operating system may direct one or more of its processes to migrate to other machines on its network, redistributing the overall system resource load.
Autonomous agents are single processes that are best described by example. Consider wanting to collect the cheapest possible prices on all the components needed to build your "dream computer" system. Imagine having to visit or call all those parts stores to compare prices of system boards, disk and CD-ROM drives, etc. Now imagine sending your intelligent agent software out to do web searches, examining each site, and returning with the complete list of parts, including prices and URLs. If you give it your credit card number, all those parts could be shipped to your door! You get the idea.
The very first step toward implementing any process migration system is to figure out how to get a duplicate process started on some other machine from within your active process. This isn't quite a remote-fork operation, because the new process is started from its beginning. It isn't just a remote-shell operation either, because there is no guarantee that a copy of the binary executable exists on the remote machine. What techniques do you use? What system file modifications must be made to a remote machine to make it amenable to your request to run a copy of your active process? Let's examine the issues surrounding what we call process cloning.
Background:
The approach is to develop a service and an associated function interface. Your process calls the function with the name of a remote host. The function connects to a well-known port on the remote machine to invoke the cloned service through the inter-networking daemon, inetd.
Early Unix systems ran a daemon for every service, each waiting for a remote connection. When you used telnet, for example, there was a telnet daemon named telnetd waiting for your connection to port number 23 on the remote machine. (Port 23 is always used for telnet; it is one of many so-called well-known ports defined in /etc/services.) Eventually there were so many daemon processes waiting for connections that they limited the amount of available swap space and the number of user processes.
The super-server solved these problems by waiting for a connection to any of a list of well-known ports and automatically starting the associated server when a connection is made. One need only add a one-line entry to each of two files to register a new service. You must be root to edit these files or to request that inetd reinitialize itself and reread them [2].
Keep in mind that when you allow a process on some remote machine to start a copy of itself on your machine, you bear a significant security risk that the new process is hostile. All of the computers on my network are completely under my control, so security is not a concern of mine. That issue will not be discussed further.
How It Works:
The service needs to be defined in /etc/services. Add an entry as follows:
clone 5050/tcp # automatically starts cloned
where clone is the service name, 5050 is the well-known port number, and tcp is the communications socket's transport protocol. We picked port 5050 because it is higher than the reserved system port numbers and wasn't already in use. (The 50-50, half-and-half connotation will make this port number easy to remember.)
The clone service needs to be defined to inetd in /etc/inetd.conf as follows:
clone stream tcp nowait root /user/bin/cloned cloned
where clone is the service name, a match to the /etc/services entry. (See the man pages for inetd.conf for descriptions of other parameters [2].) Note that inetd will not know about these new entries until the system is rebooted, or unless you issue a command to force it to reread its configuration file. Issue
>killall -HUP inetd
as the root user to make inetd reread its service definition file.
How It's Used:
Your application makes a function call to clone() with the name of the remote host where the new copy of your process is to be started. The clone() function determines the name of the active process by searching for its own process id in the output of a ps command. It uses the output of the which command to find the path to the executable.
When clone() connects to port 5050, inetd accepts the connection and does the plumbing necessary to set cloned's stdin to the socket receive stream and its stdout to the socket send stream. The cloned service gets control from inetd via fork() and execl() calls [3].
When cloned gets control on the remote machine, it reads the name of the executable, which is sent in the first packet. The received executable data is then written to the local /tmp area using the same executable name.
Summary:
The local application calls clone() with the name of the remote host. The clone() function connects to the clone service port, causing cloned to be started on the remote machine. It then copies the executable over the socket connection to the remote machine, where it is written into the local /tmp area by the cloned daemon, activating it via fork() and execl().
What happens next depends on the needs of your application. You may wish to open a new connection to your remote clone so both copies remain active. You may wish to terminate the local version to effect agent migration. You might even have the remote copy start another remote copy somewhere else, forming a daisy-chain peer-to-peer network.
How it all works is ultimately up to you, but the first step is to get that process active on a remote machine. An application named test.c and the clone daemon, cloned.c, along with its interface function, clone.c, are included with this article. All are written in C and were tested on Red Hat Linux version 4.2.
References:
[1] Milojicic, D., Douglis, F. and Wheeler, R., Processes, Computers, and Agents, Association for Computing Machinery Press, 1998.
[2] See the Unix man pages for inetd and inetd.conf.
[3] Stevens, W. R., Unix Network Programming, Prentice-Hall, 1990.
This caricature of your humble editor is by Shane Collinge, who draws our HelpDex series.
The post-WTO protest to shut down Microsoft (or at least annoy them) took place as scheduled, but your editor had a doctor's appointment and couldn't attend. News coverage was scanty except for this Seattle Times article. The Seattle Weekly later did an analysis of the event, although only the first part of the article is directly about Microsoft.
Ironically, Microsoft Singapore sent the Gazette a nice piece of spam about a "Windows 2000 Professional Sales Training" seminar. "Come and learn all about Windows 2000 Professional, then take the online test. Microsoft will award all who pass with a certificate you can frame and show others your achievement." I can't wait. Not.
Thanks for sending in your articles and 2-cent tips. Remember to have fun with Linux this month!
Michael Orr
Editor, Linux Gazette,