From: David E. Stern,
The end goal: to install FileRunner, I simply MUST have it! :-)
My intermediate goal is to install Tcl/Tk 7.6/4.2, because FileRunner needs these to install, and I only have 7.5/4.1. However, when I try to upgrade tcl/tk, other apps rely on the older tcl/tk libraries, at least that's what the messages allude to:
	libtcl7.5.so is needed by some-app
	libtk4.1.so is needed by some-app

(where some-app is python, expect, blt, ical, tclx, tix, tk, tkstep, ...)
I have enough experience to know that apps may break if I upgrade the libraries they depend on. I've tried updating some of those other apps, but I run into further and circular dependencies -- like a cat chasing its tail.
In your opinion, what is the preferred method of handling this scenario? I must have FileRunner, but not at the expense of other apps.
It sounds like you're relying too heavily on RPMs. If you can't afford to risk breaking your current stuff, and you "must" have the upgrade, you'll have to do some work beyond what the RPM system seems to do.
One method would be to grab the sources (SRPM or tarball) and manually compile the new Tcl and Tk into /usr/local (possibly with some changes to their library default paths, etc). Then you'll probably need to grab the FileRunner sources and compile it to force it to use /usr/local/bin/wish or /usr/local/bin/tclsh (which, in turn, will use /usr/local/lib/tk if you've compiled it all correctly).
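In rough outline, that build might look like the following -- a sketch only: the tarball names and the --with-tcl option are assumptions based on your 7.6/4.2 target, so check each package's README before trusting them.

```shell
# Hypothetical build of Tcl 7.6 and Tk 4.2 into /usr/local, leaving the
# stock tcl7.5/tk4.1 libraries alone.  Filenames and options are assumptions.
tar xzf tcl7.6.tar.gz
( cd tcl7.6/unix && ./configure --prefix=/usr/local && make && make install )
tar xzf tk4.2.tar.gz
( cd tk4.2/unix && ./configure --prefix=/usr/local --with-tcl=../../tcl7.6/unix \
    && make && make install )
# ... then point the FileRunner build at /usr/local/bin/wish
```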
Another approach is to set up a separate environment (separate disk, a large subtree of an existing disk -- into which you chroot, or a separate system entirely) and test the upgrade path where it won't inconvenience you by failing. A similar approach is to do a backup, test your upgrade plan -- (if the upgrade fails, restore the backup).
This is a big problem in all computing environments (and far worse in DOS, Windows, and NT systems than in most multi-user operating systems). At least with Unix you have the option of installing a "playpen" (accessing it with the chroot call -- or by completely rebooting on another partition if you like).
Complex interdependencies are unavoidable unless you require that every application be statically linked and completely self-sufficient (without even allowing their configuration files to be separate). So this will remain an aspect of system administration where experience and creativity are called for (and a good backup may be the only thing between you and major inconvenience). -- Jim
From: Bill Johnson,
I have two networking problems which may be related. I'm using a dial-up (by chat) ppp connection.
1) pppd will not execute for anyone without root privilege, even though its permissions are set rw for group and other.
I presume you mean that its *x* (execute) bit is set. Its *rw* bits should be disabled -- the *w* bit ESPECIALLY.
If you really want pppd to be started by users (non-root) you should write a small C "wrapper" program that executes pppd after doing a proper set of suid (seteuid) calls and sanity checks. You might be O.K. with the latest suidperl (though there have been buffer overflows with some versions of that).
Note that the file must be marked SUID with the chmod command in order for it to be permitted to use the seteuid call (unless ROOT is running it, of course).
Regardless of the method you use to accomplish your SUID of pppd (even if you just set the pppd binary itself to SUID):
I suggest you pick or make a group (in /etc/group) and make the pppd wrapper group-executable, SUID (root owned), and completely NON-ACCESSIBLE to "other" (and make sure to add just the "trusted" users to the group).
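A scratch-file demo of the permission bits I'm describing -- on a real system the target would be your (hypothetical) wrapper binary, owned by root with a dedicated group such as "pppusers" rather than your own group:

```shell
# Demo of the SUID/group-only permission pattern on a throwaway file.
touch pppwrap.demo
chgrp "$(id -gn)" pppwrap.demo    # real case: a dedicated trusted group
chmod 4750 pppwrap.demo           # SUID, owner rwx, group r-x, nothing for "other"
ls -l pppwrap.demo | cut -c1-10   # prints: -rwsr-x---
```

On the real wrapper you'd also `chown root` it (which requires root) so the SUID bit grants root privilege.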
'sudo' (University of Colorado, home of Evi Nemeth) is a generalized package for providing access to privileged programs. You might consider grabbing it and installing it.
I'd really suggest diald -- which will dynamically bring the link up and down as needed. Thus your users will just try to access their target -- wait a long time for dialing, negotiation, etc (just like pppd only a little faster) and away you go (until your connection is idle long enough to count as a "timeout" for diald).
2) http works, and mail works, and telnet works, but ftp does not work. I can connect, login, poke around, and all that. But when I try to get a file, it opens the file for writing on my machine and then just sits there. No data received, ever. Happens with Netscape, ftp, ncftp, consistently, at all sites. Even if user is root. Nothing is recorded in messages or in ppp-log. /etc/protocols, /etc/services and all that seems to be set up correctly. Any suggestions?
Can you dial into a shell account and do a kermit or zmodem transfer? What does 'stty -a < /dev/modem' say? Make sure you have an eight-bit clean session. Do you have 16550 (high speed) UARTs?
Do you see any graphics when you're using HTTP? (that would suggest that binary vs. text is not the problem).
-- Jim
From: Zia Khan,
I have a question regarding fetchmail. i've been successful at using it to send and receive mail from my ISP via a connection to their POP3 server. there is a slight problem though. the mail that i send out has in its from: field my local login and local hostname e.g. [email protected], when it should be my real email address [email protected]. those who receive my message receive a non-existent email address to reply to. is there any way of modifying this behavior? i've been investigating sendmail with hopes it may have a means of making this change, to little success.
Technically this has nothing to do with fetchmail or POP. 'fetchmail' just *RECEIVES* your mail -- POP is just the protocol for storing and picking up your mail. All of your outgoing mail is handled by a different process.
Sendmail has a "masquerade" feature and an "all_masquerade" feature which will tell it to override the host/domain portions of the headers addresses when it sends your mail. That's why my mail shows up as "[email protected]" rather than "[email protected]."
The easy way to configure modern copies of sendmail is to use the M4 macro package that comes with it. You should be able to find a file in /usr/lib/sendmail-cf/cf/
Mine looks something like:
divert(-1)
include(`../m4/cf.m4')
VERSIONID(`@(#)antares.uucp.mc .9 (JTD) 8/11/95')
OSTYPE(`linux')
FEATURE(nodns)
FEATURE(nocanonify)
FEATURE(local_procmail)
FEATURE(allmasquerade)
FEATURE(always_add_domain)
FEATURE(masquerade_envelope)
MAILER(local)
MAILER(smtp)
MASQUERADE_AS(starshine.org)
define(`RELAY_HOST', a2i)
define(`SMART_HOST', a2i)
define(`PSEUDONYMS', starshine|antares|antares.starshine.org|starshine.org)
(I've removed all the UUCP stuff that doesn't apply to you at all).
Note: This will NOT help with the user name -- just the host and domain name. You should probably just send all of your outgoing mail from an account name that matches your account name at your provider. There are other ways to do it -- but this is the easiest.
Another approach would require that your sendmail "trust" your account (with a define line to add your login ID as one which is "trusted" to "forge" their own "From" lines in sendmail headers. Then you'd adjust your mail-reader to reflect your provider's hostname and ID rather than your local one. The details of this vary from one mailer to another -- and I won't give the gory details here).
Although I said that this is not a fetchmail problem -- I'd look in the fetchmail docs for suggestions. I'd also read (or re-read) the latest version of the E-Mail HOWTO.
-- Jim
From: Justin Mark Tweedie,
Our users do not have valid command shells in the /etc/passwd file (they have /etc/ppp/ppp.sh). I would like the users to use procmail to process each user's mail, but .forward returns an error saying the user does not have a valid shell.
The .forward file has the following entry
|IFS=' '&&exec /usr/local/bin/procmail -f-||exit 75 #justin
How can I make this work ???
Cheers Justin
I suspect that it's actually 'sendmail' that's issuing the complaint.
Add the ppp.sh to your /etc/shells file. procmail will still use /bin/sh for processing the recipes in the .procmailrc file.
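That fix is a one-liner. Here it is run against a scratch copy of the file (the real fix is the same grep/echo pair run as root against /etc/shells itself):

```shell
# Demo: add /etc/ppp/ppp.sh to the list of valid shells, if it isn't there.
# SHELLS points at a scratch copy here; on a real system use /etc/shells.
SHELLS=./shells.demo
printf '/bin/sh\n/bin/bash\n' > "$SHELLS"            # stand-in for the real file
grep -qx '/etc/ppp/ppp.sh' "$SHELLS" || echo '/etc/ppp/ppp.sh' >> "$SHELLS"
```

Running it twice is harmless -- the grep guard keeps the entry from being duplicated.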
Another method would be to use procmail as your local delivery agent. In your sendmail "mc" (m4 configuration file) you'd use the following:
FEATURE(local_procmail)
(and make sure that your copy of procmail is in a place where sendmail can find it -- either using symlinks or by adding:
define(`PROCMAIL_PATH', /usr/local/your/path/to/procmail);
Then you don't have to mess with .forward files at all. 'sendmail' will hand all local mail to procmail, which will look for a .procmailrc file.
Another question to ask is whether you want to use your ppp.sh as a login shell at all. If you want people to log in and be given an automatic PPP connection, I'd look at some of the cool features of mgetty (which I haven't used yet -- but have seen in the docs).
These allow you to define certain patterns that will be caught by 'mgetty' when it prompts for a login name -- so that something like Pusername will call .../ppplogin while Uusername will log in with 'uucico' etc.
If you want to limit your customers solely to ppp services and POP (with procmail) then you probably can't do it in any truly secure or reasonable way. Since the .procmailrc can call on arbitrary external programs -- a user with a valid password and account can access other services on the system. Also the ftp protocol can be subverted to provide arbitrary interactive access -- unless it is run in a 'chroot' environment -- one which would make the process of updating the user's .procmailrc and any other .forward or configuration files a hassle.
It can be done -- but it ultimately is more of a hassle than it's worth. So if you want to securely limit your customers from access to interactive services and arbitrary commands you'll want to look at a more detailed plan than I could write up here.
-- Jim
From: Mike West,
Hi Jim, This may seem like a silly question, but I've been unable to find any HOW-TOs or suggestions on how to do it right. My question is, how should I purge my /var/log/messages file? I know this file continues to grow. What would be the recommended way to purge it each month? Also, are there any other log files that are growing that I might need to know about? Thanks in advance Jim.
I'm sorry to have dropped the ball on your message. Usually when I don't answer a LG question right away it's because I have to go do some research. In this case it was that I knew exactly what I wanted to say -- which would be "read my 'Log Management' article in the next issue of LG"
However I haven't finished the article yet. I have finished the code.
Basically the quick answer is:
	rm /var/log/messages
	kill -HUP $(cat /var/run/syslog.pid)

(on systems that are configured to conform to the FSSTND and put a syslog.pid file in /var/run).
The HUP signal being sent to the syslogd process tells it to close and re-open its files. This is necessary because of the way that Unix handles open files. "Unlinking" a file (removing the directory entry for it) is only a small part of actually removing it. Remember that the real information about a file (size, location on the device, ownership, permissions, and all three date/time stamps for access, creation, and modification) is stored in the "inode." This is a unique, system-maintained data structure. One of the fields in the inode is a "reference" or "link" count. If the name that you supplied to 'rm' was the only "hard link" to the file then the reference count reaches zero. So the filesystem driver will clear the inode and return all the blocks that were assigned to that file to the "free list" -- IF THE FILE ISN'T OPEN BY ANY PROCESS!
If there is any open file descriptor for the file -- then the file is maintained -- with no links (no name). This is because it could be critically bad to remove a file out from under a process with no warning.
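You can watch this happen from any shell. After the 'rm' the name is gone, but the file stays perfectly writable through the descriptor that's still open:

```shell
# Demo: unlinking an open file removes the name, not the data.
exec 3> demo.unlink              # open a file on descriptor 3
echo "first line" >&3
rm demo.unlink                   # unlink: the directory entry is gone...
echo "still writable" >&3        # ...but writes through fd 3 still succeed
exec 3>&-                        # close: only now are the blocks freed
```

This is exactly why a daemon's log file can keep growing "invisibly" after you remove it -- and why the daemon needs a HUP to re-open the name.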
So, many daemons interpret a "hang-up" signal (sent via the command 'kill -HUP') as a hint that they should "reinitialize" in some way. That usually means that they close all files, re-read any configuration or options files, and re-open any files that they need for their work.
You can also do a
	cp /dev/null /var/log/messages

... and you get away without doing the 'kill -HUP'.
I don't really know why this doesn't get syslog confused -- since its offset into the file is all wrong. Probably this generates a "holey" file -- which is a topic for some other day.
Another quick answer is: Use the 'logrotate' program from Red Hat. (That comes with their 4.1 distribution -- and is probably freely usable if you just want to fetch the RPM from their web site. If you don't use a distribution that supports RPMs you can get converters that translate .rpm files into tar or cpio files. You can also just use Midnight Commander to navigate through an RPM file just like it was a tar file or a directory).
The long answer looks a little more like:
	#! /bin/bash
	## jtd: Rotate logs
	## This is intended to run as a cron job, once per day
	## it renames a variety of log files and then prunes the
	## oldest.
	cd /var/log
	TODAY=$(date +%Y%m%d)	# YYYYMMDD convenient for sorting
	function rotate {
		cp $1 OLD/${1}.$TODAY
		cp /dev/null $1
		}
	rotate maillog
	rotate messages
	rotate secure
	rotate spooler
	rotate cron
	( echo -n "Subject: Filtered Logs for: " ; date "+%a %m/%d/%Y"
	  echo; echo; echo
	  echo "Messages:"
	  /root/bin/filter.log /root/lib/messages.filter OLD/messages.$TODAY
	  echo; echo
	  echo "Cron:"
	  /root/bin/filter.log /root/lib/cron.filter OLD/cron.$TODAY
	  echo; echo; echo "--"; echo "Your Log Messaging System"
	  echo; echo; echo
	) | /usr/lib/sendmail -oi -oe root
	## End of rotate.logs
That should be fairly self-explanatory except for the part at the end with the (....) | sendmail .... stuff. The parentheses here group the output from all of those commands into the pipe for sendmail -- so they provide a whole message for sendmail. (Otherwise only the last echo would go to sendmail and the rest would try to go to the tty of the process that ran this -- which (when cron runs the job) would generate a different -- much uglier -- piece of mail.)
Now there is one line in the sendmail group that bears further explanation: /root/bin/filter.log /root/lib/messages.filter OLD/messages.$TODAY
This is a script (filter.log) that I wrote -- it takes a data file (messages.filter) that I have created in little parts over several weeks and still have to update occasionally.
Here's the filter.log script:
	#! /usr/bin/gawk -f
	# filter.log
	# by James T. Dennis
	# syntax: filter.log patternfile datafile [datafile2 .....]
	# purpose -- trim patterns, listed in the first filename,
	#	from a series of data files (such as /var/adm/messages)
	#	the patterns in the patternfile should take the form
	#	of undelimited regexes (no '/foo/' slashes and no "foo" quotes)
	# Note: you must use a '-' as the data file parameter if
	#	you want to process stdin (use this as a filter in a pipe) --
	#	otherwise this script will not see any input from it!
	ARGIND == 1 {
		# ugly hack.
		# allows first parameter to be specially used as the
		# pattern file and all others to be used as data to
		# be filtered; avoids need to use
		#	gawk -v patterns=$filename .... syntax.
		if ( $0 ~ /^[ \t]*$/ ) { next }		# skip blank lines
		# also skip lines that start with hash
		# to allow comments in the patterns file.
		if ( $0 !~ /^\#/ ) { killpat[++i]=$0 }
		}
	ARGIND > 1 {
		for( i in killpat ) {
			if ( $0 ~ killpat[i] ) { next }
			}
		}
	ARGIND > 1 { print FNR ": " $0 }
That's about eight lines of gawk code. I hope the comments are clear enough. All this does is read one file full of patterns, and then use that set of patterns as a filter for all of the rest of the files that are fed through it.
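The core of the idea can even be approximated with one grep invocation -- discard every line matching any pattern in a pattern file. (filter.log adds the niceties on top of this: comments and blank lines allowed in the pattern file, plus line numbers on the output.)

```shell
# A rough grep equivalent of filter.log: patterns file in, survivors out.
printf 'ftpd\\[[0-9]+\\]: FTP session closed\ngetty\n' > patterns.demo
printf 'host ftpd[71]: FTP session closed\nhost kernel: disk error\n' > log.demo
grep -v -E -f patterns.demo log.demo     # prints only the kernel line
```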
Here's an excerpt from my ~root/lib/messages.filter file:
... ..? ..:..:.. antares ftpd\[[0-9]+\]: FTP session closed ... ..? ..:..:.. antares getty\[[0-9]+\]: exiting on TERM signal ... ..? ..:..:.. antares innd: .* ... ..? ..:..:.. antares kernel:[ \t]* ... ..? ..:..:.. antares kernel: Type: .*
Basically those first seventeen characters on each line match any date/time stamp -- the antares obviously matches my host name and the rest of each line matches items that might appear in my messages file that I don't care about.
I use a lot of services on this machine. My filter file is only about 100 lines long. This scheme trims my messages file (several thousand lines per day) down to about 20 or 30 lines of "different" stuff per day.
Every once in a while I see a new pattern that I add to the patterns list.
This isn't an ideal solution. It is unreasonable to expect of most new Linux users (who shouldn't "have to" learn this much about regular expressions to winnow the chaff from their messages file). However it is elegant (very few lines of code -- easy to understand exactly what's happening).
I thought about using something like swatch or some other log management package -- but my concern was that these are looking for "interesting things" and throwing the rest away. Mine looks for "boring things" and whatever is left is what I see. To me anything that is "unexpected" is interesting (in my messages file) -- so I have to use a fundamentally different approach. I look at these messages files as a professional sysadmin. They may warn me about problems before my users notice them. (incidentally you can create a larger messages file that handles messages for many hosts -- if you are using remote syslogging for example).
Most home users can just delete these files with abandon. They are handy diagnostics -- so I'd keep at least a few days worth of them around.
-- Jim
From: William Macdonald
Subject: OS showdown
Hi, I was reading one of the British weekly computing papers this week and there was an article about a shoot-out between Intranetware and NT. This was to take place on 20th May in the Guggenheim museum in NYC.
Intranetware sounds interesting. Sadly I think it may be too little, too late in the corporate world. However, if Novell picks the right pricing strategy and niche they may be able to come back in from the bottom.
I won't talk about NT -- except when someone is paying me for the discomfort.
The task was to have a system offering an SQL server that could process 1 billion transactions in a day. This was supposed to be 10 times what Visa requires and 4 times what a corporation like American Airlines requires. It was all about proving that these OSs could work reliably in a mission-critical environment.
If I wanted to do a billion SQL transactions a day I'd probably look at a Sun Starfire running Solaris. The Sun Starfire has 64 SPARC (UltraSPARC?) processors running in parallel.
Having a face off between NT and Netware (or "Intra" Netware as they've labelled their new release) in this category is really ignoring the "real" contenders in the field of SQL.
Last I heard the world record for the largest database system was owned by Walmart and ran on Tandem mini-computers. However that was several years ago.
I haven't seen the follow-up article yet so I can't say what the result was. The paper was saying it was going to be a massive comp with both the bosses there etc.
Sounds like typical hype to me. Pick one or two companies that you think are close to you and pretend that your small group comprises the whole market.
How would Linux fare in a comp like this?? The hardware resources were virtually unlimited. I think the NT box was a Compaq 5000 (ProLiant??). Quad processors, 2 GB RAM, etc.
The OS really doesn't have too much to do with the SQL performance. The main job of the OS in running an SQL engine is to provide system and file services as fast as possible and stay the heck out of the way of the real work.
The other issue is that the hardware makes a big difference. So a clever engineer could make a DOG of an OS still look like a charging stallion -- by stacking the hardware in his favor.
If it were me -- I'd think about putting in a few large (9 Gig) "silicon disks." A silicon disk is really a bunch of RAM that's plugged into a special controller that makes it emulate a conventional IDE or SCSI hard drive. If you're Microsoft or Novell and you're serious about winning this (and other similar) face-offs -- the half a million bucks you spend on the "silicon disks" may pay for itself in one showing.
In answer to your question -- Linux, by itself, can't compete in this show -- it needs an SQL server. Postgres '95 is, from what I've seen and heard, much too lightweight to go up against MS SQL Server -- and probably no match for whatever Novell is using. mSQL is also pretty lightweight. Mind you P'gres '95 and mSQL are more than adequate for most businesses -- and have to offer a price performance ratio that's unbeatable (even after figuring in "hidden" and "cost of ownership" factors). I'm not sure if Beagle is stable enough to even run.
So we have to ask:
What other SQL packages are available for Linux?
Pulling out my trusty _Linux_Journal_1997_Buyer's_Guide_ (and doing a Yahoo! search) I see:
That's all that are listed in the Commercial-HOWTO. However -- here are a few more:
Sadly the "big three" (Informix, Oracle, and Sybase) list nothing about Linux on their sites. I suspect they still consider themselves to be "too good" for us -- and they are undoubtedly tangled in deep licensing agreements with SCO, Sun, HP, and other big money institutions. So they probably view us as a "lesser threat" -- (compared to the 800 lb gorilla in Redmond). Nonetheless -- it doesn't look like they are willing to talk about Linux on their web pages.
I'd also like to take this opportunity to lament the poor organization and layout of these three sites. These are the largest database software companies in the world -- and they can't create a simple, INFORMATIVE web site. Too much "hype" and not enough "text."
(My father joked: "Oh! you meant 'hypertext' -- I thought it was 'hype or text'" -- Obviously too many companies hear it the same way and choose the first option of a mutually exclusive pair).
-- Jim
From: Alex Pikus of WEBeXpress
I have a DEC XLT-366 with NTS4.0 and I would like to add Linux to it. I have been running Linux on an i386 for a while. I have created 3 floppies:
I have upgraded AlphaBIOS to v5.24 (latest from DEC) and added a Linux boot option that points to a:\
You have me at a severe disadvantage. I'll be running Linux on an Alpha based system for the first time next week. So I'll have to try answering this blind.
When I load MILO I get the "MILO>" prompt without any problem. When I do "show" or "boot ..." at the MILO>" I get the following result ... SCSI controller gets identified as NCR810 on IRQ 28 ... test1 runs and gets stuck "due to a lost interrupt" and the system hangs ... In WinNTS4.0 the NCR810 appears on IRQ 29.
My first instinct is to ask if the autoprobe code in Linux (Alpha) is broken. Can you use a set of command-line (MILO) parameters to pass information about your SCSI controller to your kernel? You could also see about getting someone else with an Alpha-based system to compile a kernel for you -- and make sure that it has values in its scsi.h file that are appropriate to your system -- as well as ensuring that the correct drivers are built in.
How can I make further progress here?
It's a tough question. Another thing I'd look at is to see if the Alpha system allows booting from a CD-ROM. Then I'd check out Red Hat's (or Craftworks') Linux for Alpha CD's -- asking each of them if they support this sort of boot.
(I happened to discover that the Red Hat Linux 4.1 (Intel) CD-ROM was bootable when I was working with one system that had an Adaptec 2940 controller where that was set as an option. This feature is also quite common on other Unix platforms such as SPARC and PA-RISC systems -- so it is a rather late addition to the PC world).
-- Jim
From: Stuby Bernd,
Hello there, First I have to mention that my Soundcard (MAD16 Pro from Shuttle Sound System with an OPTi 82C929 chipset) works right under Windows. I tried to get my Soundcard configured under Linux 2.0.25 with the same parameters as under Windows, but as I booted the newly compiled kernel the Soundcard whistled and caused terrible noise. The same happened when I compiled the driver as a module and installed it in the kernel. In the 'README.cards' file the problem coming up with just this Soundcard is mentioned (something like line 3 mixer channel). I don't know what to do with this information or how to change the sound driver to get it working right. Maybe there's somebody who knows how to solve this problem or where I can find more information.
With best regards Bernd
Sigh. I've never used a sound card in my machine. I have a couple of them floating around -- and will eventually do that -- but for now I'll just have to depend on "the basics."
Did you check the Hardware-HOWTO? I see the MAD16 and this chipset listed there. That's encouraging. How about the Soundcard-HOWTO? Unfortunately this has no obvious reference to your problem. I'd suggest browsing through it in detail. Is your card a PnP (plug and "pray") model? I see notes about that being a potential source of problems. I also noticed a question about "noise" being "picked up" by the sound card:
http://sunsite.unc.edu/LDP/HOWTO/Sound-HOWTO-6.html#ss6.23 That might not match your problem but it's worth looking at.
Did you double-check for IRQ and DMA conflicts? The thing I hate about PC sound cards is that most of them use IRQs and DMA channels. Under DOS/Windows you used to be able to be fairly sloppy about IRQs. When your IRQs conflicted, the symptoms (like system lockups) tended to get lost in the noise of other problems (like system lockups and mysterious intermittent failures). Under Linux these problems usually rear their ugly heads and have nowhere to hide.
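Linux will tell you what it thinks is in use -- compare these against the card's jumpers or PnP settings (a conflict shows up as two devices wanting the same number):

```shell
# The kernel's own view of IRQs, DMA channels, and I/O ports.
cat /proc/interrupts    # IRQ lines and the drivers that claimed them
cat /proc/dma           # DMA channels currently registered
cat /proc/ioports       # I/O port ranges in use
```

Note that a driver only appears in these lists after it has loaded -- so check both with and without the sound module installed.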
Have you contacted the manufacturer of the card? I see a Windows '95 driver. No technical notes on their sound cards -- and no mention of anything other than Windows on their web site (that I could find). That would appear to typify the "we only do Windows" attitude of so many PC peripherals manufacturers. I've blind copied their support staff on this -- so they have the option to respond.
If this is a new purchase -- and you can't resolve the issue any other way -- I'd work with your retailer or the manufacturer to get a refund or exchange this with hardware that meets your needs. An interesting side note. While searching through Alta Vista on Yahoo! I found a page that described itself as The Linux Ultra Sound Project. Perhaps that will help you choose your next PC sound system (if it comes to that).
-- Jim
From: Larry Snyder,
Just re-read your excellent article on procmail in the May LJ. (And yes, I've read both man pages :-). What I want to try is:
[*emov* *nstruction*]

or
remove@*
This should be a MUCH cheaper (in cpu cycles) way of implementing a spam filter than reading the header then going through all the possible domains that might be applicable. Most of the headers are forged in your average spam anyway....
Not my idea, but it sounds good to me. What do you think, and how would I code a body scan in the rc?
I think it's a terrible idea.
The code would be simple -- but the patterns you suggest are not very specific.
Here's the syntax (tested):
	:0 B
	* (\[.*remove.*instruction.*\]|\[.*remove@.*\])
	/dev/null
... note the capital "B" specifies that the recipe applies to the "Body" of the message. The line that starts with an asterisk is the only conditional (pattern). The parentheses enclose/group the regular expression (regex) around the "pipe" character. The pipe character means "or" in egrep regex syntax. Thus (foo|bar) means "'foo' or 'bar'".
The square brackets are a special character in regexes (where they enclose "classes" of characters). Since you appeared to want to match the literal characters -- i.e. you wanted your phrases to be enclosed in square brackets -- I've had to "escape" them in my pattern -- so they are treated literally and not taken as delimiters.
The * (asterisk) character in the regex means "zero or more of the preceding element" and the . (dot or period) means "any single character" -- so the pair of them taken together means "any optional characters." If you use a pattern line like:
* foo*l
... it can match fool, fooool, and fooooolk -- and even fol -- but not forl or foorl. The egrep man page is a prerequisite to any meaningful procmail work. Also O'Reilly has an entire book (albeit a small one) on regular expressions.
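You can check those claims against egrep itself with a few throwaway pipes:

```shell
# 'o*' means zero or more o's -- so these lines all contain a match for foo*l:
echo fool | grep -E 'foo*l'              # prints: fool
echo fol  | grep -E 'foo*l'              # prints: fol  (the second "o" matched zero times)
# ... but forl cannot match: only o's may appear between "fo" and "l"
echo forl | grep -E 'foo*l' || echo "no match"
```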
The gist of what I'm trying to convey is that .* is needed in regexes -- even though you might use just * in shell or DOS "globbing" (the way that a shell matches filenames to "wildcards" is called "globbing" -- and generally does NOT use regular expressions -- despite some similarities in the meta-characters used by each).
Note also that the * token at the beginning of this line is a procmail thing. It just identifies this as being a "condition" line. Lines in procmail recipes usually start with a token like a : (colon), a * (asterisk), a | (pipe) or a ! (bang or exclamation point) -- any that don't may consist of a folder name (either a file or a directory) or a shell variable assignment (which are the lines with = (equal signs) somewhere on them).
In other words the * (star) at the beginning of that line is NOT part of the expression -- it's a token that tells the procmail processor that the rest of the line is a regex.
Personally I found that confusing when I first started with procmail.
Back to your original question:
I'm very hesitant to blindly throw mail away. I'd consider filing spam in a special folder which is only reviewed in a cursory fashion. That would go something like this:
	:0 B:
	* (\[.*remove.*instruction.*\]|\[.*remove@.*\])
	prob.spam
Note that I've added a trailing : (colon) to the start of the recipe. This whole :x FLAGS business is a throwback to an early procmail which required each recipe to specify the number of patterns that followed the start of a recipe. Later :0 came to mean "I didn't count them -- look at the first character of each line for a token." This means that procmail will scan forward through the patterns and -- when one matches -- it will execute ONE command line at the end of the recipe (variable assignments don't count).
I'm sure none of that made any sense. So :0 starts a recipe, the subsequent * ... lines provide a list of patterns, and each recipe ends with a folder name, a pipe, or a forward (a ! -- bang thingee). The : at the *END* of the :0 B line is a signal that this recipe should use locking -- so that two pieces of spam don't end up interlaced (smashed together) in your "prob.spam" mail folder. I usually use MH folders (which are directories in which each message takes up a single file -- with a number for a filename). That doesn't require locking -- you'd specify such a folder like:
	:0
	* ^TO.*tag
	linux.gazette/.
... (notice the "/." (slash, dot) characters at the end of this).
Also note that folder names don't use a path. procmail defaults to using Mail (like elm and pine). You can set the MAILDIR variable to over-ride that -- mine is set to $HOME/mh. To write to /dev/null (where you should NOT attempt to lock the file!) you must use a full path (I suppose you could make a symlink named "null" in your MAILDIR or even do a mknod but....). When writing procmail scripts just think of $MAILDIR as your "current" directory (not really but...) and either use names directly under it (no leading slashes or dot/slash pairs) or use a full path.
The better answer (if you really want to filter mail that looks like spam) is to write an auto-responder. This should say something like:
The mail you've sent to foo has been trapped by a filtering system. To get past the filter you must add the following line as the first line in the body of your message: ...... ... Your original message follows: ......
... using this should minimize your risks. Spammers rely on volume -- no spammer will look through thousands of replies like this and manually send messages with the requisite "pass-through" or "bypass" directive to all of them. It's just not worth it. At the same time your friends and business associates probably won't mind pasting and resending (be sure to use a response format that "keeps" the body -- since your correspondents may get irritated if they have to dig up their original message for you).
Here's where we can work the averages against the spammer. He uses mass mailings to shove his message into our view -- we can each configure our systems to require unique (relatively insecure -- but unique) "pass codes" and reject "suspicious" mail that lacks them. Getting the "pass codes" on thousands of accounts -- and using them before they are changed -- is not a task that can be automated easily (so long as we each use different explanations and formatting in our "bypass" instructions).
More drastic approaches are:
I hope some of these ideas help.
Here is a copy of one of my autoresponders for your convenience:
:0
* < 1000
* !^FROM_DAEMON
* !^X-Loop:[ ]*[email protected]
* ^Subject:[ ]*(procmail|mailbot)
| ((formail -rk -A "Precedence: junk" \
    -A "X-Loop: [email protected]" ; \
    echo "Mail received on:" `date`) \
    | $HOME/insert.doc -v file=$DOC/procmail.tutorial ) | $SENDMAIL -t -oi -oe
I realize this looks ugly. The first condition is to respond only to requests that are under 1K in size. (An earlier recipe directs larger messages to me). The next two try to prevent responses to mail lists and things like "Postmaster@..." (to prevent some forms of "ringing") and check against the "eXtended" (custom) header that most procmail scripts use to identify mail loops. The last one matches subjects of "procmail" or "mailbot."
If all of those conditions are met then the message is piped to a complex command (spread over four lines -- with a trailing "backslash" at the end of each of those -- to force procmail to treat it all as a single logical line):
This command basically breaks down like so:
(( formail -rk ...
... the two sets of parentheses have to do with how the data passes through the shell's pipes. Each set allows me to group the output from a series of commands into a single pipe.
.... the formail command creates a mail header. The -r means to make this a "reply" and the -k means to "keep" the body. The two -A parameters are "adding" a couple of header lines; those are enclosed in quotes because they contain spaces.
... the echo command adds a "timestamp" to when I received the mail. The `date` (backtick "date") is a common shell "macro expansion" construct -- Korn shell and others allow one to use the $(command) syntax to accomplish the same thing.
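The two substitution syntaxes really are interchangeable -- a quick sketch (the date format string here is arbitrary):

```shell
# Backticks and $(...) are two spellings of the same "command
# substitution" -- the command's output replaces the construct.
stamp1="Mail received on: `date -u +%Y`"
stamp2="Mail received on: $(date -u +%Y)"
[ "$stamp1" = "$stamp2" ] && echo "both forms agree"
```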
Now we close the inner group -- so formail's output and the echo's output are fed into my little awk script: insert.doc. This just takes a parameter (the -v file=) and scans its input for a blank line. After the first blank line insert.doc prints the contents of "file." Finally it just prints all of the rest of its input.
Here's a copy of insert.doc:
#!/usr/bin/gawk -f
/^[ \t]*$/ && !INSERTED { print; system("cat " file); INSERTED=1 }
1
... that's just three lines: the pattern matches any line with nothing or just whitespace on it. INSERTED is a variable that I'm using as a flag. When those two conditions are met (a blank line is found *and* the variable INSERTED has not yet been set to anything) -- we print a blank line, call the system() function to cat the contents of a file -- whose name is stored in the 'file' variable -- and we set the INSERTED flag. The '1' line is just an "unconditional true" (to awk). It is thus a pattern that matches any input -- and since no action is specified (there's nothing in braces on that line) awk takes the default action -- it prints the input.
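You can watch that logic work on a scratch message (the file names and contents here are made up for the demonstration):

```shell
# Build a fake message and a fake "doc" file, then run the same
# awk program that insert.doc uses.
printf 'Subject: test\n\nbody line\n' > /tmp/demo_msg
printf 'HELLO-FROM-DOC\n' > /tmp/demo_doc
awk '/^[ \t]*$/ && !INSERTED { print; system("cat " file); INSERTED=1 } 1' \
    file=/tmp/demo_doc /tmp/demo_msg > /tmp/demo_out
cat /tmp/demo_out
```

Note that the inserted text ends up *surrounded* by blank lines: the explicit print emits one, and the trailing '1' rule then prints the (blank) input line a second time -- which conveniently separates the doc from the kept body.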
In awk the two lines:
1
... and
{print}
... are basically the same. They both match every line of input that reaches them and they both just print that and continue.
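Easy enough to verify:

```shell
# '1' and '{print}' produce identical output for any input.
a=$(printf 'one\ntwo\n' | awk '1')
b=$(printf 'one\ntwo\n' | awk '{print}')
[ "$a" = "$b" ] && echo "identical"
```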
... Back to our ugly procmail recipe. 'insert.doc' has now "inserted" the contents of a doc file between formail's header and the body of the message that was "kept." So we combine all of that and pipe it into the local copy of sendmail. procmail thoughtfully presets the variable $SENDMAIL -- so we can use it to make our scripts (recipes) more portable (otherwise they would break when written on a system with /usr/lib/sendmail and moved to a system that uses /opt/local/new/allman/sendmail (or some silly thing like that)).
The switches on this sendmail command are:
... I'll leave it as an exercise to the reader to look those up in the O'Reilly "bat" book (the "official" Sendmail reference).
There are probably more elegant ways to do the insertion. However it is a little messy that our header and our "kept" body are combined in formail's output. If we had a simple shell syntax for handling multiple file streams (bash has this feature -- but I said *simple*) then it would be nice to change formail to write the header to one stream and the body to another. However we also want to avoid creating temp files (and all the hassles associated with cleaning up after them). So -- this is the shortest and least resource-intensive approach that I've come up with.
So that's my extended tutorial on procmail.
I'd like to thank Stephen R. van den Berg (SRvdB) (creator of procmail), Eric Allman (creator of sendmail), and Alan Stebbens (an active contributor to the procmail mailing list -- and someone who's written some nice extensions to SmartList).
Alan Stebbens' web pages on mail handling can be found at: http://reality.sgi.com/aks/mail
-- Jim
From: David Cook,
We have spoken before on this issue over the caldera-users list (which I dropped because of too much crap). I recently gave up on Caldera's ability to support/move forward and acquired redhat 4.1.
All works well, except I cannot get uucico & cu to share properly the modem under control of uugetty. Other comm programs like minicom and seyon have no problem with it.
Both uucico and cu connect to the port and tell me that they cannot change the flow control !? and exit.
If I kill uugetty, both uucico and cu work perfectly.
In your discussion on the caldera newsgroup of Nov 2/96 you don't go into the details of your inbound connection, but you mention "mgetty" as opposed to uugetty.
What works/why doesn't mine?
What are pros/cons of mgetty?
By the way, I agree wholeheartedly with your rationale for UUCP. Nobody else seems to appreciate the need for multiple peer connections and the inherent security concerns with bringing up an unattended TCP connection with an ISP.
Dave Cook, IBM Global Solutions.
The two most likely problems are: lock files or permissions
There are three factors that may cause problems with lock files: location, name, and format.
For lock files to work you must use the same device names for all access to a particular device -- i.e. if you use a symlink named 'modem' to access your modem with *getty -- then you must use the same symlink for your cu, uucico, pppd, minicom, kermit, seyon, etc. (or you must find some way to force them to map the device name to a properly named LCK..* file).
You must also configure each of these utilities to look for their lock files in the same location -- /var/lock/ under Red Hat. This configuration option may need to be done at compile time for some packages (mgetty) or it might be possible to over-ride it with configuration directives (Taylor UUCP) or even command line options.
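For example, Taylor UUCP lets you override the lock file directory in its main configuration file. If I remember its directive correctly, it looks like this (check the Taylor UUCP info pages before trusting my memory):

```
# /etc/uucp/config (Taylor UUCP) -- directive name from memory:
lockdir /var/lock
```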
The other thing that all modem-using packages have to agree on is the format of the lock file. This is normally the PID number of the process that creates the lock. It can be in "text" (human readable) or "binary" form.
Some packages never use the contents of the lock file -- its mere existence is sufficient. However most Linux/Unix packages that use device lock files will verify the validity of the lock file by reading the contents and checking the process status of whatever PID they read therefrom. If there is "no such process" -- they assume that it is a "stale" lock file and remove it.
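The validity check those packages perform amounts to something like this sketch -- a text-format lock in a scratch path, with a throw-away sleep standing in for a real comm program:

```shell
LOCK=/tmp/LCK..demo
sleep 30 &
echo $! > "$LOCK"             # text-format lock file: just the PID

pid=$(cat "$LOCK")
kill -0 "$pid" 2>/dev/null && echo "lock valid -- device busy"

kill "$pid" 2>/dev/null       # the "comm program" goes away...
wait "$pid" 2>/dev/null || true

if kill -0 "$pid" 2>/dev/null; then
    echo "still busy"
else
    # "no such process" => the lock is stale; safe to remove
    echo "stale lock -- removing"
    rm -f "$LOCK"
fi
```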
I currently have all of my packages use text format and the /dev/modem symlink to /dev/ttyS1 (thus if I move my modem to /dev/ttyS2 or whatever -- say while migrating everything to a new machine -- all I have to change is the one symlink). My lock files are stored in /var/lock/
Permissions are another issue that has to be co-ordinated among all of the packages that must share a modem. One approach is to allow everyone write access to the modem. This, naturally, is a security hole large enough to steer an aircraft carrier through.
The most common approach is to make the /dev/ node owned by uucp.uucp or by root.uucp and group writable. Then we make all of the programs that access it SGID or SUID (uucp).
Here are the permissions I currently have set:
$ ls -ald `which uucico` `which cu` /dev/modem /dev/ttyS* /var/lock
-r-sr-s---   1 uucp   uucp   /usr/bin/cu
-r-sr-s---   1 uucp   uucp   /usr/sbin/uucico
lrwxrwxrwx   1 uucp   uucp   /dev/modem -> /dev/ttyS1
crw-rw----   1 root   uucp   /dev/ttyS0
crw-rw-r--   1 root   uucp   /dev/ttyS1
crw-------   1 root   tty    /dev/ttyS2
crw-rw----   1 root   uucp   /dev/ttyS3
drwxrwxr-x   6 root   uucp   /var/lock
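You can reproduce those SUID/SGID mode bits on a scratch file to see how ls renders them (the path is just for the demonstration -- on the real system you'd be chmod'ing cu and uucico as root):

```shell
f=/tmp/perm_demo
touch "$f"
chmod 6550 "$f"     # 6550: SUID + SGID on top of r-xr-x---
ls -l "$f" | cut -c1-10        # shows -r-sr-s---
```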
On the next installation I do I'll probably experiment with tightening these up a little more. For example I might try setting the sticky bit on the /var/lock directory (forcing all file removals to be by the owner or root). That might prevent some programs from removing stale lock files (they would have to be SUID uucp rather than merely SGID uucp).
'cu' and 'uucico' are both SUID and SGID because they need access to configuration files in which passwords are stored. Those are mode 400 -- so a bug in minicom or kermit won't be enough to read the /etc/uucp/call file (for example). uucico is started by root run cron jobs and sometimes from a root owned shell at the console. cu is called via wrapper script by members of a modem group.
Things like pppd, diald, and mgetty are always exec'd by root (or SUID 'root' wrappers). mgetty is started by init and diald and pppd need to be able to set routing table entries (which requires root). So they don't need to be SUID anything. (If you want some users to be able to execute pppd you can make it SUID or you can write a simple SUID wrapper or SUID perl script. I favor perl on my home system and I make the resulting script inaccessible (unexecutable) by "other". At customer sites with multi-user systems I recommend C programs as wrappers -- a conservative approach that's been re-justified by recent announcements of new buffer overflows in sperl 5.003).
Oddly enough ttyS2 is the null modem that runs into the living room. I do a substantial portion of my writing while sitting in my easy chair watching CNN and SF (Babylon 5, Deep Space 9, Voyager that stuff).
Permissions are a particularly ugly portion of Unix since we rightly don't trust SUID things -- what with all of the buffer overflows, race conditions between stat() and open() calls, and complex parsing trickery (ways to trick embedded system(), popen() and other calls that open a shell behind the programmer's back -- and are vulnerable to the full range of IFS, SHELL, alias, and LD_* attacks).
However I'm not sure that the upcoming Linux implementation of ACL's will help with this. I really need to read more about the planned approach. If it follows the MLS (multi-layer security) model of DEC and other commercial Unix implementations -- then using them makes the system largely unusable for general-purpose computing (i.e. -- casts such systems solely as file servers).
From what I've read some of the problem is inherent in basing access primarily on ID and "group membership" (really an extension of "identity"). For a long time I racked my brains to try to dream up alternative access control models -- and the only other one I've heard of is the "capabilities" of KeyKOS, Multics, and the newer Eros project.
Oh well -- we'll see. One nice thing about having the Linux and GNU projects consolidating so much source code in such a small number of places is that it may just be possible to make fundamental changes to the OS design and "fix" enough different packages to allow some of those changes to "take" (attain a critical mass).
-- Jim
To: John D. Messina,
I was recently at the AIIM trade show in New York. There was nothing for Linux there, but I happened to wander over to the cyber cafe that was set up. I happened to be reading last month's Linux Gazette when a Microsoft employee walked up behind me. He was excited to find someone who was knowledgeable about Linux - he wanted to get a copy for himself.
I presume that you're directing this to the "Linux Gazette Answer Guy."
Anyway, we got to talking and he told me that Linux was getting so popular that Microsoft had decided to port ActiveX to Linux. Do you know if, in fact, this is true? If so, when might we see this port completed?
I have heard the same story from other Microsoft representatives (once at a Java SIG meeting where the MS group was showing off their J++ package).
This doesn't tell me whether or not the rumor is "true" -- but it does suggest that it is an "officially condoned leak." Even if I'd heard an estimated ship date (I heard this back in Nov. or Dec.) I wouldn't give it much credence.
(That is not MS bashing by the way -- I consider ship dates from all software companies and groups -- even our own Linus and company -- to be fantasies).
To be honest I didn't pursue the rumor. I asked the gentlemen I spoke to what ActiveX provides that CGI, SSI (server side includes), XSSI (extended server side includes), FastCGI, SafeTCL, Java and JavaScript don't. About the only feature they could think of is that it's from Microsoft. To be honest they tried valiantly to describe something -- but I just didn't get it.
So, your message has prompted me to ask this question again. Switching to another VC and firing up Lynx and my PPP line (really must get that ISDN configured one of these days) I surf on over to MS' web site.
After a mildly amusing series of redirects (their site seems to be *all* .ASP (Active Server Pages) files) I find myself at a reasonably readable index page. That's hopeful -- they don't qualify for my "Lynx Hall of Shame" nomination. I find the "Search" option and search on the single keyword "Linux."
"No Documents Match Query"
... hmm. That would be *too* easy wouldn't it. So I search on ActiveX:
"No Documents Match Query"
... uh-oh! I thought this "Search" feature would search massive lists of press releases and "KnowledgeBase" articles and return thousands of hits. Obviously MS and I are speaking radically different languages.
Let's try Yahoo!
So I try "+ActiveX +Linux."
Even more startling was the related rumor -- that I heard at the same Java SIG meeting. The Microsoft reps there announced Microsoft's intention to port IE (Internet Explorer) to Unix. They didn't say which implementations of Unix would be the recipients of this dubious honor -- but suggested that Linux was under serious consideration.
(We can guess that the others would include SCO, Solaris, Digital, and HP-UX. Some of MS' former bed partners (IBM's AIX) would likely be snubbed -- and more "obscure" OS' (like FreeBSD???), and "outmoded" OS' like SunOS will almost certainly be ignored).
It appears that the plan is to port ActiveX to a few x86 Unix platforms -- and use that to support an IE port (I bet IE is in serious trouble without ActiveX).
They'll run the hype about this for about a year before shipping anything -- trying to convince people to wait a little longer before adopting any other technologies.
"No! Joe! Don't start that project in Java -- wait a couple of months and those "Sun" and "Linux" users will be able to use the ActiveX version."
Some Links on this:
Everybody who uses NetNews or E-Mail should read the little essay on "Good Subject Lines." A promising page which I didn't have time to properly peruse is
There was a lot of good info on Java, Linux, HTML, Ada, TCL and many other topics. I wouldn't be surprised if there was something about ActiveX somewhere below this page.
Suggestion: Sean -- Install Glimpse!
(I've copied many of the owners/webmasters at the sites I'm referring to here).
Sounds interesting and wholly unrelated to ActiveX.
Conclusion: Microsoft's mumblings to Linux users about porting IE and ActiveX to Linux is interesting. The mumbling is more interesting than any product they deliver is likely to be. I still don't know what ActiveX "does" well enough to understand what "supporting ActiveX under Linux" would mean.
It seems that ActiveX is a method of calling OCX and DLL code. That would imply that *using* ActiveX controls on Linux would require support for OCX's and DLL's -- which would essentially mean porting all of the Windows API to work under Linux.
Now I have a lot of trouble believing that Microsoft will deliver *uncompromised* support for Windows applications under Linux or any other non-Microsoft OS.
Can you imagine Bill Gates announcing that he's writing a multi-million dollar check to support the WINE project? If that happens I'd suggest we call in the Air Force with instructions to rescue the poor man from whatever UFO snatched him -- and get the FBI to arrest the imposter!
What's amazing is that this little upstart collection of freeware has gotten popular enough that the largest software company in the world is paying any attention to it at all.
Given Microsoft's history we have to assume that any announcement they make regarding Linux is carefully calculated to offer them some substantial benefit in their grand plan. That grand plan is to dominate the world of software -- to be *THE* software that controls everything (including your toaster and your telephone) (and everyone???).
This doesn't mean that we should react antagonistically to these announcements. The best bet -- for everyone who must make development or purchasing plans for any computer equipment -- is to simply cut through as much of the hype as possible and ask: What are the BENEFITS of the package that is shipping NOW?
Don't be swayed by people who talk about FEATURES (regardless of whether they are from from Microsoft, the local used car lot, or anywhere else).
The difference between BENEFITS and FEATURES is simply this -- Benefits are relevant to you.
The reason software publishers and marketeers in general push "features" is because they are engaged in MASS marketing. Exploring and understanding each individual's set of requirements is not feasible in MASS marketing.
(Personally one of the features that I find to be a benefit in the Linux market is the lack of hype. I don't have to spend time translating marketese and advertising jargon into common English).
I hope this answers your question. The short answers are:
Is it true (that MS is porting ActiveX to *ix)?
The rumor is widespread by their employees -- but there are no "official" announcements that can be found on their web site with their own search engine.
When might we see it? Who knows. Let's stick with NOW.
Finally let me ask this: What would you do with ActiveX support under Linux? Have you tried WABI? Does ActiveX work under Windows 3.1 and/or Windows 3.11? Would you try it under WABI?
What are your requirements (or what is your wishlist)? (Perhaps the Linux programming community can meet your requirements and/or fulfill your wishes more directly).
From: buck,
I just installed Redhat 4.1 and was not sure what packages I really needed so I installed a lot just to be safe. The nice thing is that Redhat 4.1 has the package manager that I can use to safely remove items. Well, seeing as how my installation was about 400 megs I really need to clean house here to reclaim space. Is it safe to remove the development packages and a lot of the networking stuff that I installed? And what about the shells and window managers that I don't use? I have Accelerated X so I know that I can get rid of a lot of the X stuff. I need my space back!
Since you just installed this -- and haven't had much time to put a lot of new, unrecoverable data on it -- it should be "safe" to do just about anything to it. The worst that will happen if you trim out too much is that you'll have to re-install.
I personally recommend the opposite approach. Install the absolute minimum you think is usable. Then *add* packages one at a time.
I also strongly suggest creating an /etc/README file. Create it *right after* you reboot your machine following the install process. Make a dated note in there for each *system* level change you make to your system. (My rule of thumb is that anything I edited or installed as 'root' is a "system" level change).
Most of my notes are in the form of comments near the top of any config files or scripts that support them. Typical notes in /etc/README would be like:
Sun Apr 13 15:32:00 PDT 1997: jimd
    Installed mgetty. See comments in /usr/local/etc/mgetty/*.config.

Sun May  4 01:21:11 PDT 1997: jimd
    Downloaded 2.0.30 kernel. Unpacked into /usr/local/src/linux-2.0.30
    and replaced the /usr/src/linux symlink accordingly. Picked *both*
    methods of TCP SYN cookies. Also trying built-in kerneld; just about
    everything is loadable modules. Adaptec SCSI support has to be
    built-in though. Needed to change rc files to do the mount of the
    DOS filesystem *after* rc.modules.

... etc.
Notice that these are free form -- a date, and a login name (not ROOT's id -- but whoever is actually doing work as root). I maintain a README even on my home machines.
The goal is to keep notes that are good enough that I could rebuild my system with all the packages I currently use -- just using the README. It tells me what packages I installed and what order I installed them in. It notes what things seemed important to me at the time (like the note that a kernel whose root filesystem is on a SCSI disk has to be compiled with that driver built-in -- easy to overlook and time consuming to fix if you forget it).
Sometimes I ask myself questions in the README -- like: "Why is rnews throttling with this error:..." (and an excerpt from my /var/log messages).
This is handy if you later find that you need to correlate an anomaly on your system with some change made by your ISP -- or someone on your network.
Of course you could succumb to the modern trend -- buy another disk drive. I like to keep plenty of those around. (I have about 62Mb of e-mail currently cached in my mh folders -- that's built up since I did a fresh install last August -- with a few megs of carry over from my previous installation).
-- Jim
To: John E. (Ned) Patterson,
As a college student on a limited budget, I am forced to compromise between Win95 and Linux. I use linux for just about everything, but need the office suite under Win95 since I can't afford to buy something for Linux. (Any recommendations you have for cheap alternatives would be appreciated, but that is not the point of the question.)
I presume you mean MS Office. (The caps mean a bit here). I personally have managed to get by without a copy of Office (Word or Excel) for some time. However I realize that many of us have to exchange documents with "less enlightened" individuals (like professors, employers, and fellow students).
So getting MS Office so you can handle .DOC and .XLS (and maybe PowerPoint) files is only a venial sin in the Church of Linux (say a few "Hail Tove's" and go in peace).
As for alternatives: Applixware, StarOffice, CliqSuite, Corel Application Suite (in Java), Caldera's Internet Office Suite, and a few others are out there. Some of them can do some document conversions from (and to??) .DOC format.
Those are all applications suites. For just spreadsheets you have Xess, Wingz and others.
In addition there are many individual applications. Take a look at the Linux Journal Buyer's Guide Issue for a reasonably comprehensive list of commercial applications for Linux (and most of the free was as well).
Personally I use vi, emacs (in a vi emulation mode -- to run M-x shell, and mh-e), and sc (spreadsheet calculator).
Recently I've started teaching myself TeX -- and I have high hopes for LyX though I haven't even seen it yet.
Unfortunately there is no good solution to the problem of proprietary document formats. MS DOC and MS XLS files have a stranglehold on corporate America. I can't really blame MS for this -- the competition (including the freeware community) didn't offer a sufficiently attractive alternative. So everyone seems to have stepped up to the gallows and stuck their own necks in the noose.
"Owning" an ubiquitous data format is the fantasy of every commercial software company. Your customers will pass those documents around to their associates, vendors, even customers, and *expect* them to be readable. Obviously MS is trying to leverage this by "integrating" their browser, mailer, spreadsheet, and word processors together with OLE, DSOM, ActiveX and anything else they can toss together.
The idea is to blur everything together so that customers link spreadsheets and documents into their web pages and e-mail -- and the recipients are then forced to have the same software. Get a critical mass doing that and "everyone" (except a few fringe Unix weirdos like me) just *HAS* to belly up and buy the whole suite.
This wouldn't be so bad -- but then MS has to keep revenues increasing (not just keep them flowing -- but keep them *increasing*). So we get upgrades. Each component of your software system has to be upgraded once every year or two -- and the upgrade *MUST* change some of the data (a one way conversion to the new format) -- which transparently makes your data inaccessible to anyone who's a version behind.
Even that wouldn't be so bad. Except that MS also has its limits. It can't be on every platform (so you can't access that stuff from your SGI or your Sun or your HP 700 or your OS/400). Not that MS *couldn't* create applications for these platforms. However that might take away some of Intel's edge -- and MS can't *OWN* the whole OS architecture on your Sun, SGI, HP or AS/400.
But enough of that diatribe. Let's just say -- I don't like proprietary file formats.
I mount my Win95 partition under /mnt/Win95, and would like to have write permission enabled for only certain users, much like that which is possible using AFS. Recognizing that is not terribly feasible, I have resorted to requiring root to mount the partition manually, but want to be able to write to it as a random user, as long as it is mounted. The rw option for mount does not seem to cut the mustard, either. It allows write for root uid and gid, but not anyone else. Any suggestions?
You can mount your Win95 system to be writable by a specific group. All you have to do is use the right options. Try something like:
mount -t umsdos -w -ogid=10,uid=0,umask=007 /dev/hda1 /mnt/c
(note: you must use numeric GID and UID values here -- mount won't look them up by name!)
This will allow anyone in group 10 (wheel on my system) to write to /mnt/c.
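To have that done at every boot, the equivalent /etc/fstab line would look something like this (the device, filesystem type, and gid here just follow the example above -- adjust for your own setup):

```
# /etc/fstab sketch:
/dev/hda1   /mnt/c   umsdos   rw,gid=10,uid=0,umask=007   0 0
```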
There are a few oddities in all of this. I personally would prefer to see a version of 'mount' -- or an option to 'mount' that would mount the target with whatever permissions and modes the underlying mount point had at mount time. In other words, as an admin., I'd like to set the ownership and permissions on /mnt/c to something like joeshmo user with a mode of 1777 (sticky bit set). Then I'd use a command like:
mount -o inherit /dev/hda1 /mnt/c
Unfortunately I'm not enough of a coder to feel comfortable making this change (yet) and my e-mail with the current maintainer of the Linux mount (resulting from the last time I uttered this idea in public) suggests that it won't come from that source.
(While we're at it, I'd also add that it would be nice to have a mount -o asuser -- which would be like the user option in that it would allow any user (with access to the SUID mount program) to mount the filesystem. The difference would be that the resulting mount point would be owned by the user -- and the nodev, nosuid, etc. options would be enforced.)
Getting back to your question:
Another way to accomplish a similar effect (allowing some of your users to put files under your /mnt/Win95 directory) would be to create a /usr/Win95 directory -- allow people to write files into that and use a script to mirror that over to the /mnt/Win95 tree.
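That mirroring script can be as simple as a recursive copy. Here's a sketch using scratch directories in place of /usr/Win95 and /mnt/Win95 (a real version run from root's crontab would use those paths):

```shell
SRC=/tmp/stage_demo            # stand-in for the staging tree /usr/Win95
DST=/tmp/target_demo           # stand-in for the mounted /mnt/Win95
rm -rf "$SRC" "$DST"; mkdir -p "$SRC" "$DST"

echo "term paper" > "$SRC/PAPER.TXT"
cp -R "$SRC/." "$DST/"         # mirror the staging tree onto the "mount"
cat "$DST/PAPER.TXT"
```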
(Personally I think the whole thing is pretty dangerous -- so using the -o gid=... is the best bet).
-- Jim
I have a client who would like to use the left arrow key to backspace and erase characters to the left of the cursor. Is this possible? And how? Thanks for an answer.
Read the Keyboard-HOWTO (section 5). The loadkeys and xmodmap man pages, and the Backspace-Mini-HOWTO are also related to this. It is possible to completely remap your keys in Linux and in X Windows. You can also set up keybindings that are specific to bash (using the built-in bind command) and to bash and other programs that use the "readline" library using the .inputrc file.
The Keyboard-HOWTO covers all of this.
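As a quick sketch of what those remappings look like (verify the console keycode with the showkey command before trusting my number):

```
# console keymap fragment for loadkeys -- 105 is Left on the
# standard kernel keymaps (check with showkey):
keycode 105 = BackSpace

# the equivalent under X, via xmodmap (one-liner or ~/.Xmodmap):
keysym Left = BackSpace
```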
-- Jim
To: Ronald B. Simon,
I have written several utility programs that I use all the time. I would like to add them to either the Application or Utility "pull down" menu of the Start menu. Could you address this in your Linux Gazette article?
I assume you are referring to the menus for your X "Window Manager."
Since you don't specify which window manager you're using (fvwm, fvwm95, twm, gwm, ctwm, mwm, olwm, TheNextLevel -- there are lots of wm's out there) -- I'll have to guess that you're using fvwm (which is the default on most XFree86 systems). The fvwm95 wm (which is a modification of fvwm to provide a set of menus and behaviors that is visually similar to that of Windows '95) uses the same file/menu format (as far as I know).
The way you customize the menus of almost any wm is to edit (possibly creating) an rc file. For fvwm that would be ~/.fvwmrc
Here's an excerpt from mine (where I added the Wingz demo):
Popup "Apps"
        Exec    "Wingz"         exec /usr/local/bin/wingz &
        Nop     ""
        Exec    "Netscape"      exec netscape &
        Exec    "Mosaic"        exec Mosaic &
        Nop     ""
        Exec    "Elm"           exec xterm -e elm &
        Nop     ""
EndPopup

You'd just add a line like:

        Exec    "Your App"      exec /path/to/your/app &

.... to this. If you add a line like:

        PopUp   "My Menu"       MyMenu

... and a whole section like:

PopUp "MyMenu"
        Exec    "One App"       exec /where/ever/one.app &
        Exec    "Another Toy"   exec /my/bin/toy &
EndPopUp
... you'll have created your own submenu. Most other window managers have similar features and man pages to describe them.
-- Jim
To: Greg C. McNichol,
I am new to LINUX (and NT 4.0 for that matter) and would like any and all information I can get my hands on regarding the dual-boot issue. Any help is appreciated.
More than you wanted to know about:
Booting Multiple Operating Systems
There are several mini-HOW-TO documents specifically covering different combinations of multi-boot. Here are some that can be found at: http://www.linuxresources.com//LDP/HOWTO/HOWTO-INDEX.html
Personally I think the easiest approach to make Linux co-exist with any of the DOS derived OS' (Win '95, OS/2, or NT) is to use Hans Lermen's LOADLIN package. Available at "Sunsite": ftp://sunsite.unc.edu/pub/Linux/system/Linux-boot/lodlin16.tgz (85k)
To use this -- start by installing a copy of DOS (or Win '95). Be sure to leave some disk space unused (from DOS/Win '95's perspective) -- I like to add whole disks devoted to Linux.
Now install Linux on that 2nd, 3rd or nth hard drive -- or by adding Linux partitions to the unused portion of whichever hard drives you're already using. Be sure to configure Linux to 'mount' your DOS partition(s) (make them accessible as parts of the Unix/Linux directory structure). While installing be sure to answer "No" or "Skip" to any questions about "LILO" (Feel free to read the various HOW-TO's and FAQ's so you'll understand the issues better -- I'd have to give a rather complete tutorial on PC Architecture, BIOS boot sequence and disk partitioning to avoid oversimplifying this last item)
Once you're done with the Linux installation find and install a copy of LOADLIN.EXE. The LOADLIN package is a DOS program that loads a Linux kernel. It can be called from a DOS prompt (COMMAND.COM or 4DOS.COM) or it can be used as an INSTALL directive in your CONFIG.SYS (which you'd use with any of the multi-boot features out there -- including those that were built into DOS 6.x and later). After installation you'd boot into DOS (or into the so-called "Safe Mode" for Windows '95) and call LOADLIN with a batch file like:
C:
CD \LINUX
LOADLIN.EXE RH2013.KRN root=/dev/hda2 .....
(Note: the value of your root= parameter must correspond to the Linux device node for the drive and partition on which you've installed Linux. This example shows the second partition on the first IDE hard drive. The first partition on the second IDE drive would be /dev/hdb1, and the first "logical" partition within an extended partition of your fourth SCSI hard drive would be /dev/sdd5. The PC architecture specifies room for 4 partitions per drive. At most one of those (per drive) may be an "extended" partition. An extended partition may hold an arbitrary number of "logical" drives. The Linux nomenclature for logical drives always starts at 5, since 1 through 4 are reserved for the "real" (primary) partitions.)
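That naming convention is mechanical enough to sketch in a few lines of shell. This is purely an illustration of the scheme -- the dev_node helper below is my own invention for this example, not a standard utility:

```shell
#!/bin/sh
# Illustrative helper (not a real tool): build a Linux device node name
# from the bus prefix, the drive's ordinal, and a partition number.
dev_node() {
    # $1 = prefix ("hd" for IDE, "sd" for SCSI)
    # $2 = drive number (1 = first drive -> 'a', 2 -> 'b', ...)
    # $3 = partition number (1-4 are primary/extended, 5+ are logical)
    letters=abcdefgh
    echo "/dev/$1$(echo "$letters" | cut -c"$2")$3"
}

dev_node hd 1 2    # prints /dev/hda2 (second partition, first IDE drive)
dev_node sd 4 5    # prints /dev/sdd5 (first logical partition, fourth SCSI drive)
```

Plug in your own drive and partition numbers to see what to put after root= in your LOADLIN batch file.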
The root= parameter may not be necessary in some cases, since the kernel has a default which was compiled into it -- and which might have been changed with the rdev command. rdev is a command that "patches" a Linux kernel with a pointer to its "root device."
This whole concept of the "root device" or "root filesystem" being different than the location of your kernel may be confusing at first. Linux (and to a degree other forms of Unix) doesn't care where you put your kernel. You can put it on a floppy. That floppy can be formatted with a DOS, Minix, or ext2 filesystem -- or can be just a "raw" kernel image. You can put your kernel on ANY DOS filesystem so long as LOADLIN can access it.
LOADLIN and LILO are "boot loaders" -- they copy the kernel into RAM and execute it. Since normal DOS (with no memory managers loaded -- programs like EMM, QEMM, and Windows itself) has no memory protection mechanisms, it is possible to load an operating system from a DOS prompt. This is, indeed, how the Netware 3.x "Network Operating System" (NOS) has always been loaded (with a "kernel" image named SERVER.EXE). It is also how one loads TSX-32 (a vaguely VMS-like operating system for 386 and later PC's).
In my example, RH2013.KRN is the name of a kernel file. Linux doesn't care what you name its kernel file. I use the convention of naming mine LNXvwyy.KRN -- where v is the major version number, w is the minor version, and yy is the build. LNX is for a "general use" kernel that I build myself, RH is a kernel I got from a RedHat CD, YGG would be one from Yggdrasil, etc.
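My naming convention can be expressed as a one-liner, if you like scripting such things. Again, this is just a sketch of my own convention -- the kernel_name function is hypothetical, not part of any distribution:

```shell
#!/bin/sh
# Illustrative only: compose a DOS-friendly kernel file name in my
# LNXvwyy.KRN style from a prefix and a dotted version string.
kernel_name() {
    # $1 = prefix (LNX, RH, YGG, ...), $2 = version (e.g. 2.0.13)
    v=$(echo "$2" | cut -d. -f1)     # major version
    w=$(echo "$2" | cut -d. -f2)     # minor version
    yy=$(echo "$2" | cut -d. -f3)    # build/patchlevel, zero-padded
    printf '%s%s%s%02d.KRN\n' "$1" "$v" "$w" "$yy"
}

kernel_name RH 2.0.13    # prints RH2013.KRN
```

The 8.3 result fits DOS filename limits, which is the whole point of the scheme.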
One advantage of using LOADLIN over LILO is that you can have as many kernels as your disk space allows. You can arrange them in complex hierarchies. You can have as many batch files passing as many different combinations of kernel parameters as you like. LILO is limited to 16 "stanzas" in its /etc/lilo.conf file.
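For instance, you might keep one batch file per kernel/parameter combination -- something like the following (the file names, paths, and parameters here are hypothetical examples, not anything LOADLIN requires):

```
C:\LINUX\RH2013.BAT:
    LOADLIN.EXE C:\LINUX\RH2013.KRN root=/dev/hda2 ro

C:\LINUX\LNX2030S.BAT:
    LOADLIN.EXE C:\LINUX\LNX2030.KRN root=/dev/hda2 single
```

Each batch file becomes, in effect, one "stanza" -- and you can have as many of them as you care to manage.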
The other advantage of LOADLIN over LILO is that it is less scary and easier to understand for new users. To them, Linux is just a DOS program that you have to reboot to get out of. It doesn't involve any of that mysterious "master boot record" stuff, like a computer virus.
A final advantage of LOADLIN over LILO is that LOADLIN does not require that the root filesystem be located on a "BIOS accessible" device. That's a confusing statement -- because I just tossed in a whole new concept. The common system BIOS for virtually ALL PC's can only see one or two IDE hard drives (technically ST-506 or compatible -- with a WD8003 (???) or register-compatible controller; however, ST-506 drives (the old MFM and RLL drives) haven't been in use on PC's since the XT). To "see" a 3rd or 4th hard drive -- or any SCSI hard drive -- the system requires additional software or firmware (or an "enhanced BIOS"). There is a dizzying array of considerations in this -- with almost as many exceptions. So to get an idea of what is "BIOS accessible," just take a DOS boot floppy -- with no CONFIG.SYS at all -- and boot off of it. Any drive that you can't see is not BIOS accessible.
Clearly for the vast majority of us this is not a problem. For the system I'm on -- with two IDE drives, two internal SCSI drives, one internal CD reader, an external SCSI hard drive, a magneto optical drive, a 4 tape DAT autochanger and a new CD-Writer (which also doubles as a CD reader, of course) -- with all of that it makes a difference.
Incidentally, this is not an "either/or" proposition. I have LILO installed on this system -- and I have LOADLIN as well. LILO can't boot my main installation (which is on the SCSI drives). But it can boot a second minimal root installation -- or my DOS or OS/2 partitions.
(I'm not sure the OS/2 partition is still there -- I might have replaced that with a FreeBSD partition at some point).
Anyway -- once you have DOS and Linux happy -- you can install NT with whatever "dual boot" option it supports. NT is far less flexible about how it boots. So far as I know there is no way to boot into DOS and simply run NT.
It should be noted that loading an OS from DOS (such as we've described with LOADLIN, or with FreeBSD's FBSDBOOT.EXE or TSX-32's RUNTSX.EXE) is a ONE WAY TRIP! You load them from a DOS prompt -- but DOS is completely removed from memory and there is no way to exit back to it. To get back to DOS you must reboot. This isn't a new experience to DOS users. There have been many games, BBS packages, and other pieces of software that had no "exit" feature.
(In the case of Netware there is an option to return to DOS -- but it is common to use an AUTOEXEC.NCF (NetWare control file) that issues the NetWare command REMOVE DOS to free up the memory that's reserved for this purpose.)
In any event those mini-HOWTO's should get you going. The rest of this is just background info.
-- Jim
To: Brian Justice
I was browsing the web and noticed your web page on Linux. I am not familiar with Linux but have an ISP who uses the software on their server.
I was wondering if anyone at your organization knew of any problems with
I'm the only one at my organization -- Starshine is a sole proprietorship.
Pentium notebooks with 28.8 modems connecting to Linux 1.2.13 internet servers that would do the following:
It sounds like you're saying that the Pentium Notebook is running some other OS -- like Windows or DOS and that it is using a PCMCIA modem to dial into another system (with unspecified modem and other hardware -- but which happens to run Linux).
If that's the case then you're troubleshooting the wrong end of the connection.
First identify which system is having the problem -- use the Pentium with the "piecemeal" (PCMCIA) modem to call a BBS or other ISP at 28.8. Try several.
Does your Pentium system have problems with all or most of them?
If so then it is quite likely a problem with the combination of your Pentium, your OS, and your piecemeal modem.
Try booting the Pentium off of a plain boring copy of DOS (with nothing but the PCMCIA drivers loaded). Repeat the other experiments. Does it still fail on all or most of them?
If so then it is probably the PCMCIA drivers.
Regular desktop 28.8 modems seem to work fine. I have a few 14.4 PCMCIA modems that seem to work fine.
Would incorrect settings cause this? Or could this be a program glitch that doesn't support these 28.8 modems due to the low level of the release? I noticed there are higher versions of Linux out there.
"Incorrect settings" is a pretty vague term. Yes -- the settings on your hardware *AND THEIRS*, and the settings in your software *AND THEIRS*, have to be right. Yes -- incorrect settings (in the server hardware, the modem hardware, the OS/driver software, or the applications software *AT EITHER END OF THE CONNECTION*) could cause sufficiently sporadic handshaking that one or the other modem in a connection "gives up" and hangs up on the other.
The BIG question is "Have you heard of any 28.8 PCMCIA modem problems with Linux internet servers? " If so, could you drop me a few lines so I can talk this over with my ISP. If not , do you know of any other sites or places I can check for info about this subject.
I've heard of problems with every type of modem for every type of operating system running on every platform. None of them has been specific to PCMCIA modems with Linux. I've operated a couple of large BBS' (over 100 lines on one and about 50 on the other) and worked with a number of corporate modem pools and remote access servers.
I don't understand why your ISP would want a note from me before talking to you.
It sounds like you're asking me to say: "Oh yeah! He shouldn't be running Linux there!" ... or to say: "1.2.13! That fool -- he needs to upgrade to 2.0.30!" ... so you can then refer this "expert" opinion to some support jockey at your ISP.
Now if you mean that your ISP is running Linux 1.2.13 on a Pentium laptop with PCMCIA modems -- and using that as a server for his internet customers -- I'd venture to say that this is pretty ludicrous.
If you were running Linux on your laptop and having problems with your PCMCIA modem I wouldn't be terribly surprised. PCMCIA seems to be an unruly specification -- and the designers of PCMCIA equipment seem to have enough trouble in their (often unsuccessful) attempts to support simple DOS and Windows users. The programmers that contribute drivers for Linux often have to work with incomplete or nonexistent specifications for things like video cards and chipsets -- and PCMCIA cards of any sort.
I mostly avoid PCMCIA -- it is a spec that is ill-suited to any sort of peripheral other than *MEMORY CARDS* (which is, after all, what the letters MC stand for in this unpronounceable stream of gibberish that I dubbed "piecemeal" a few years ago).
Any help would be appreciated.
I could provide much better suggestions if I had more information about the setup. I could even provide real troubleshooting for my usual fees.
However, if the problem really is specific to your connections with your ISP (if these same 28.8 "piecemeal" modems work fine with say -- your Cubix RAS server or your favorite neighborhood BBS), then you should probably work with them to resolve it (or consider changing ISP's).
As a side note: Most ISP's use terminal servers on their modem banks. This means that they have their modems plugged into a device that's similar to a router (and usually made by a company that makes routers). That device controls the modems and converts each incoming session into an rlogin or "8-bit clean" telnet session on one or more Ethernet segments.
Their Unix or other "internet servers" don't have any direct connections to any of the normal modems. (Sometimes a sysadmin will connect a modem directly to the serial ports of one or more of these systems -- for administrative access, so they can call in on a special number and bypass the terminal servers, routers, etc.)
It's possible that the problem is purely between the two brands of modems involved. Modern modems are complex devices (essentially dedicated microcomputers) with substantial amounts of code in their firmware. Also the modem business sports cutthroat competition -- with great pressure to add "enhancements," a lot of fingerpointing, and *NO* incentive to share common code bases for interoperability's sake. So slight ambiguities in protocol specification lead to sporadic and chronic problems. Finally, we're talking about analog-to-digital conversion at each end of the phone line. The phone companies have *NO* incentive to provide good clean (noise-free) phone lines to you and your ISP. They make a lot more money on leased lines -- and get very little complaint about "voice grade" line quality.
The problem is that none of us should have been using modems for the last decade. We should have all had digital signals coming into our homes a long time ago. The various phone companies (each a monopoly in its region -- and all stemming from a small set of monopolies) have never had any incentive to implement this, and every incentive NOT to (since they can charge a couple grand for installation and several hundred per month on the few T1's they do sell -- and they'll never approach that with digital lines to the home). They do, however, have plenty of money to make their concerns heard in regulatory bodies throughout the government. So they cry "who's going to pay for it?" so loudly and so continuously that no one can hear the answer of the American people. Our answer should be "You (monopolies) will pay for it -- since we (the people) provided you with a legal monopoly and the funds to build OUR copper infrastructure" (but that answer will never be heard).
If you really want to read much more eloquent and much better researched tirades and diatribes on this topic -- subscribe to Boardwatch magazine and read Jack Rickard (the editor) -- who mixes this message with new information about communications technology every month.
-- Jim