Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, Copyright © 1996-2001 Specialized Systems Consultants, Inc.
Send tech-support questions, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.
Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Hi. Just discovered Linux Gazette today. This is a really great site.
If you are looking for article ideas, I'd like to see an article, or even series, on writing device drivers for Linux. This is a topic I have found very little information about on the net.
Just my two cents.
The Linux kernel's device driver interfaces are enough of a subject to fill a whole book. Maybe O'Reilly's Linux Device Drivers (by Alessandro Rubini) and their Understanding the Linux Kernel by Daniel Pierre Bovet (among others) would be a good choice for you.
As for documentation online: you could always read the sources, copy and paste parts of the Makefiles, or copy an existing (similar) device driver and edit it. I realize that this is a horrible oversimplification; but that's how most of the Linux kernel hackers started.
-- JimD
How do I write a parallel port device driver, i.e., interrupt driven? Not for a printer, but for a general device connected to the EPP port.
If you need something to be interrupt driven, then you need a kernel module. You might be able to have a very simple kernel module that simply relays the interrupt as a character through a device node. Then you could have a user space process listening...
[ Some guesses about the nature of poll() and select(), trimmed. ] Please post a message to a good linux programming newsgroup for better details. Actually I've posted an abstracted version of this question to the comp.os.linux.development.system newsgroup on your behalf. So, perhaps there's already an answer waiting...
-- JimD
Well, the Answer Guy is just barely getting into this subject, so for this topic, he's a newbie like the rest of us. It looks like some of our weekend mechanics really want to get down into the spark plugs, there. So, if anyone feels inclined to write a down and dirty device driver article that explains a bit of deep wizardry in plain English, especially if it covers something that's new in the 2.4 series kernels, we'd love to publish it. -- Heather
Or if anybody would like to dissect and explain a small driver they've written, that would also make a good article. -- Mike
Great April edition. Thanks a lot. Keep up the good work (and the interesting topics).
Thanks for reading us, Jim! -- Heather
Finally, a way to subscribe to Linux Gazette!
Debian has long had the lg-issue## packages. Now it has a couple new packages:
Of course, it adds/removes packages only when you do a general package update. Note that these programs are supported by Debian, not by LG.
-- Mike (your friendly Linux Gazette Editor)
Looks like you have to get them from testing (aka woody) or unstable (aka sid), but at least they shouldn't require a libc update if you temporarily add one of these to your sources list to get them. If one of them does ask for a libc change, it's a packaging bug and you should report it immediately.
But be especially careful to change things back, or you may find yourself in for a big surprise when you run "apt-get upgrade" to catch the latest security patches. -- Heather
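[ In concrete terms, the temporary dance might look something like this; the mirror and the lg package name below are only examples, substitute your own: ]

# temporarily add a testing line to /etc/apt/sources.list
echo 'deb http://http.us.debian.org/debian woody main' >> /etc/apt/sources.list
apt-get update
apt-get install lg-issue65     # example package name
# ...then delete that line again and run apt-get update once more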
Hi,
Just a note of constructive criticism.
I just read a copy of your writing of "Integrating" Linux/sendmail with MS Exchange, on this site.
http://lhd.datapower.com/LDP/LDP/LG/issue38/tag/5.html
It's something you wrote in 1999, so you may be writing differently now. If so, please disregard this letter.
What you are about to say is still valuable as a reminder to new members of the Answer Gang, potential article authors, and in fact just about anybody who hopes to be a very vocal Linux advocate. So I hope you don't mind that we've published it anyway. -- Heather
Although I found this information helpful, you appear to have trouble staying on track with the useful information, and like to get off subject with the Microsoft bashing.
I do use Microsoft products, as well as Linux and AIX products, and understand your point of view (even agree with most, if not all, of it).
My constructive criticism is that you should refrain from the Microsoft bashing or at least keep it to one line, and keep more to the point of trying to relay useful information. (I'm assuming that is what you are really trying to do.)
If someone is having trouble with something like integrating sendmail and exchange, they may not have a choice about what systems or software to use, and just need some well written, detailed information, not an anti-Microsoft commercial.
- Dave
Thanks, Dave. Issue 38 was a long time ago now.
The Answer Gang now includes a number of more cheerful sorts in addition to the curmudgeonly Answer Guy himself. One or two even still use Windows for some of their work. (Note: we avoid answering Windows questions at all, unless they're really about working with Linux environments.) Having more of us frees the ones who really don't want to touch anything about MS from even having to answer. Also, a heavier editorial hand is being applied now.
As for myself, I look forward to the day when systems will be sufficiently easy to use that it will not be clear... nor terribly necessary to know... which OS is chugging along "under the hood". I don't think that day is at all close but I look forward to it anyway.
-- Heather
Dear Mike
I want to take the opportunity to thank you very much for the feedback and for taking the time to give me the information.
sorry pal, but after reading your reponse to a post for help, i couldnt help but send you this note about what an [ rude word deleted ] you must be...and would be very much surprised if anyone would want you on the payroll....
respectfully,
joe agate
http://rpmfind.net/linux/mdw/LDP/LG/issue50/tag/33.html
[ HTML version of same text, also deleted ]
Curiously, he chose to "rag" on the Answer Guy for one of his clearer answers to an unclear question. He calls him a bad name, then signs off "respectfully". Right.
Is it really "insensitive" to mention that LG has a search engine (http://www.linuxgazette.com/search.html)? Nope, I think not.
Is it "insensitive" to ask that querents mention what they've tried already? Maybe - but it's a fact of life in tech support, most people say "it's broken" not "I tried this, and that, and the other thing, and... yada yada ... anyways so you can see I tried everything, and it hates me". In plain verbal conversation it's rude to maunder on like that, so people tend not to do it. In tech support, the more info you can send us (that is related to the question) the better we can help. If this were a phone call, we could have this merry back and forth, and it still probably wouldn't take an hour. At typical Answer Gang speeds though... more info will help you as much as it helps us.
Is it "insensitive" to suggest that the poor bloke may have to buy a new card or new server? Probably. Oh well. Life's tough that way sometimes. Turns out that he's probably okay, according to another reader who assured us that the Jotan is indeed a Trident relative.
Is it insensitive to send us the same text as both plaintext and HTML? No, usually it just means someone didn't know how to turn off this "feature" (cough cough. cough. no, I'm okay. cough cough. water. Ahem) in MS Outlook. Try this great answer from Chris G. in last month's Answer Gang:
http://www.linuxgazette.com/issue65/tag/8.html
Lastly, I'm pretty sure it's insensitive to publish this, but Joe, you mailed us, and this is what we normally do with letters to the editors. Let us know when you have a linux question, and we'll try to answer it -- if you give us enough information!
Would it be possible for someone at your publication to provide me with a resource as to where I could locate an experienced Linux programmer in the Tampa, Florida area? Is there a website for Linux job postings or a publication that I might be able to contact? I would appreciate any help that you could give me.
The Answer Gang published a list of sites for job searchers in the last issue. I'm sure you can post openings at these sites as well.
http://www.linuxgazette.com/issue65/lg_answer65.html#tag/greeting
LG does not publish job listings because they are so temporary in nature: the job may be filled by the time the issue is published!
-- Mike
To repeat from last time:
You can check out Linux Journal's Career Center (http://www.linuxjournal.com/employ), Geekfinder (http://www.geekfinder.com), the Sysadmin's Guild (SAGE) Job Center (http://www.usenix.org/sage/jobs/sage-jobs.html), or pay attention to your local area papers for when major high tech Job Fairs are in your area, so you can go to them. There are also some really generic job sites like Dice.Com (http://www.dice.com) or MonsterBoard (http://www.monsterboard.com). If you hate the corporate mold, check out some of the project offers at Cosource (http://www.cosource.com) or Collab.Net (http://www.collab.net). Or put up your consulting shingle by listing yourself at Linuxports (http://www.linuxports.com) and getting listed into a few search engines.
... and expand a little:
When I went to Google! and typed in the keywords:
linux jobs
-- it claimed it has about 400 entries.
As Mike noted, many of these allow employers to post job offers, as well as having jobseekers post resumés.
-- Heather
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.
Linux-Mandrake 8.0 has been released. New features can be studied in detail on their website. Highlights include KDE 2.1.1, GNOME 1.4, kernel 2.4.3, XFree86 4.0.3, and anti-aliasing. If you would like to donate to the project, go to http://www.linux-mandrake.com/donations/
Progeny Debian is out now. The download edition is already available, and the box set will be on sale from April 23. Ian Murdock, Progeny CEO and President, has said that Progeny Debian is not trying to be another distribution of Linux. "In fact, we don't see Progeny Debian as a separate distribution. It is an enhanced version of Debian for the commercial market. All of our development efforts are being contributed back to the Debian community, and we hope that our work can help make Debian better for all users", he comments.
Red Hat have announced Red Hat Linux 7.1, with the 2.4 kernel. The Red Hat website has a complete list of the new features.
If you want to see Slackware continue, you can donate to the project via their PayPal account. Donate to [email protected]
SuSE Linux 7.1 for Sparc (Item No. 99985-21SPC) is supplied on five CD-ROMs with online documentation. It can be obtained exclusively directly from SuSE at the price of EUR 159 plus VAT. SuSE Linux 7.1 for Sparc is also available for download from ftp://ftp.suse.com/pub/suse/sparc/suse-sparc/
Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.
Linux for Industrial Applications / 3rd Braunschweiger Linux-Tage
May 4-6, 2001. Braunschweig, Germany. http://braunschweiger.linuxtage.de/industrie/

Linux@Work Europe 2001
May 8 - June 15, 2001. Various locations. http://www.ltt.de/linux_at_work.2001

Linux Expo, São Paulo
May 9-10, 2001. São Paulo, Brazil. http://www.linux-expo.com

SANS 2001
May 13-20, 2001. Baltimore, MD. http://www.sans.org/SANS2001.htm

7th Annual Applied Computing Conference
May 14-17, 2001. Santa Clara, CA. http://www.annatechnology.com/annatech/HomeConf2.asp

Linux Expo, China
May 15-18, 2001. Shanghai, China. http://www.linux-expo.com

SITI International Information Technologies Week / OpenWorld Expo 2001
May 22-25, 2001. Montréal, Canada. http://www.mediapublik.com/en/

Strictly e-Business Solutions Expo
May 23-24, 2001. Minneapolis, MN. http://www.strictlyebusinessexpo.com

Linux Expo, Milan
June 6-7, 2001. Milan, Italy. http://www.linux-expo.com

Linux Expo Montréal
June 13-14, 2001. Montréal, Canada. http://www.linuxexpomontreal.com/EN/home/

Open Source Handhelds Summit
June 18-19, 2001. Austin, TX. http://osdn.com/conferences/handhelds/

USENIX Annual Technical Conference
June 25-30, 2001. Boston, MA. http://www.usenix.org/events/usenix01

PC Expo
June 26-29, 2001. New York, NY. www.pcexpo.com

Internet World Summer
July 10-12, 2001. Chicago, IL. http://www.internetworld.com

O'Reilly Open Source Convention
July 23-27, 2001. San Diego, CA. http://conferences.oreilly.com

10th USENIX Security Symposium
August 13-17, 2001. Washington, D.C. http://www.usenix.org/events/sec01/

HunTEC Technology Expo & Conference (hosted by Huntsville IEEE)
August 17-18, 2001. Huntsville, AL. URL unknown at present.

Computerfest
August 25-26, 2001. Dayton, OH. http://www.computerfest.com

LinuxWorld Conference & Expo
August 27-30, 2001. San Francisco, CA. http://www.linuxworldexpo.com

The O'Reilly Peer-to-Peer Conference
September 17-20, 2001. Washington, DC. http://conferences.oreilly.com/p2p/call_fall.html

Linux Lunacy (co-produced by Linux Journal and Geek Cruises; Send a Friend LJ and Enter to Win a Cruise!)
October 21-28, 2001. Eastern Caribbean. http://www.geekcruises.com

LinuxWorld Conference & Expo
October 30 - November 1, 2001. Frankfurt, Germany. http://www.linuxworldexpo.de/linuxworldexpo/index.html

5th Annual Linux Showcase & Conference
November 6-10, 2001. Oakland, CA. http://www.linuxshowcase.org/

Strictly e-Business Solutions Expo
November 7-8, 2001. Houston, TX. http://www.strictlyebusinessexpo.com

LINUX Business Expo (co-located with COMDEX)
November 12-16, 2001. Las Vegas, NV. http://www.linuxbusinessexpo.com

15th Systems Administration Conference / LISA 2001
December 2-7, 2001. San Diego, CA. http://www.usenix.org/events/lisa2001
Newlix Corporation has developed an intelligent and customisable administration solution for Linux based server appliances. Newlix ServerWare contains an intelligent administration engine so intuitive that its creators say "it can be seen as having an entire IT team sucked right into the server appliance!" This technology is intended to improve the functionality and ease of use of server appliances. Full details are available on the Newlix website.
eCluster from XGforce is a scalable, intelligent load-balancing cluster system which can be scaled up to 1024 Internet cluster groups, each containing 1024 cluster nodes (1024x1024). With its round-trip-time load-balancing algorithm, one can cluster any OSes on any CPUs, such as NT, Novell, and the UNIXes (SUN SPARC, OS2000 BSD, Linux). Support for load-balanced and fail-safe modes ensures non-stop, fast business transactions for SQL database servers such as Oracle, MS SQL, Informix, MySQL, PostgreSQL, etc.
The load-balancing algorithm takes into account CPU load, weighted load, VM usage, round trip time, CPU usage, etc. Other features include Network Traffic Distribution, Network Failsafe, CFS(tm), and Large Network Management for both Internet and intranet. Ports for Linux, SUN, FreeBSD, and NT are available, and free support is provided. For more information, consult the XGforce website.
ActiveState has launched a new initiative to enable programming with open source technologies. The ActiveState Programmer Network (ASPN) includes quality assured binary distributions of Perl, Python and Tcl; multi-language and platform IDEs; technical references, sample code, recipes and more. For additional details, go to the ASPN website.
OTG Software has announced its acquisition of Smart Storage, a privately held provider of standards based DVD and CD storage management software. OTG expects Smart Storage's CD/DVD technology to speed its entry into the rich media market, to boost its international momentum, and to enable it to offer more solutions for storing and accessing data on the UNIX and Linux platforms.
There is a vulnerability in kernel 2.4.x IPTables which you should patch if you use Linux 2.4 for firewalling. Quoting from the SANS Institute's alert: "A vulnerability in the ip_conntrack_ftp module, which is responsible for tracking incoming FTP connections, has been found. This vulnerability could be used to bypass IPTables firewall rules if IPTables is configured to allow RELATED connections to pass unhindered, which is a standard configuration used with FTP servers. An attacker can trick the ip_conntrack_ftp module into creating RELATED connections, thus allowing various outbound connections to the network of the firewall itself." A patch is available.
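[ For the record, the "standard configuration" the alert describes is the catch-all stateful rule many 2.4 firewall scripts use; the line below is illustrative, not taken from any particular script: ]

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

[ If you carry a rule like that and load the ip_conntrack_ftp helper, apply the patch. ]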
The Duke of URL has the following links to tempt you:
Since the point of FAQs is to get newbies up to speed, let's take this opportunity to direct attention to The linux-kernel mailing list FAQ. Then, when you have digested that, you can see what's happening on-list.
SlashDot has offered the following links in the past month:
Bryan Pfaffenberger writes about Why Open Content Matters in his Linux Journal web column "Currents".
And please, some paper-clip sized sympathy for The Microsoft Office help clip who became the latest (and most welcome ;-) redundancy in the tech-sector recently.
Finally, not strictly Linux, but UnixSpace.com have announced a free on-line Internet access service to their ConteXt database server. It includes 30Mb of space, a Unix shell command line, and some GUI treats focused on database building.
OpenSSH have announced the release of version 2.5.2 of their software. The OpenSSH suite includes the ssh program, which replaces rlogin and telnet; scp, which replaces rcp; and sftp, which replaces ftp. Also included is sshd, the server side of the package, and the other basic utilities like ssh-add, ssh-agent, ssh-keygen and sftp-server. OpenSSH supports SSH protocol versions 1.3, 1.5, and 2.0. You cannot afford to ignore security!
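[ The day-to-day translation is nearly one-for-one; the hostnames and filenames below are placeholders: ]

ssh user@remotehost                   # where you used to type: telnet remotehost
scp report.txt user@remotehost:/tmp/  # where you used to type: rcp
sftp user@remotehost                  # where you used to type: ftp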
Cylant Technology have released CylantSecure for Linux, an intrusion detection system that protects against both known and previously unknown forms of attack. It uses a modified kernel in conjunction with custom kernel modules to provide system protection -- and becomes part of the running kernel. Unlike other IDS products, CylantSecure does not use rules or patterns for identifying attacks, eliminating the need to rely on a database of known attack signatures. Instead, it focuses on actual software behaviour and builds a statistical model of nominal system behaviour. It enables a computer to distinguish between normal and abnormal behaviour, and then uses that information to stop malicious attacks before any damage can occur. Details on the software are in an online press release.
You can get a copy of IEMS 5 at http://www.ima.com/download/v5eval.html. The product overview can also be downloaded. Beta participants of the Linux-Windows-based Internet Exchange Messaging Server (IEMS) 5 get a US$300 price reduction (about 30% off) if they purchase the messaging solution now rather than waiting for its forthcoming official release.
The new version of Mailman, 2.0.4, compatible with Python 2.1, is out. LWN have the story.
There is no guarantee that your questions here will ever be answered. You can be published anonymously - just let us know!
Once again, welcome to the wonderful world of The Answer Gang. The peeve of the month this time is a tie:
give us permission to publish the thread, up front, when you ask the question.
We could probably use a few more articles that appeal to corporate users, though! But enough of that; onward to something fun. The fun I took on this month was upgrading my system.
Oh boy.
Surely I mentioned that I've been on a continuous upgrade path of SuSE since early 5.1? No? Well, okay, I admit, I did a "real" reinstall sometime around 6.1 or so, and have since chugged along on security updates and RPMs added from the latest 6.x branch, with an occasional graft from Debian packages and source tarballs.
Like any normal user I also have lots of different things I do, so my home directory's a bit messy, I have a few projects here and there, and I haven't been real prissy about which account I use to download general things like cartoons (Hi Shane!) or new kernel sources into. Usually I remember to move them to someplace under /usr/src eventually.
As Piglet was fond of saying, "Whatta... whatta mess."
Surely it would have been easier for me if I hadn't decided to buy an extra hard disk at the same time, discovered that my floppy bay stopped working (p.s. can't boot from my CD. Something to do with it being a SCSI device in an IDE system), and (eek!) was reminded that we'd like to get the column fragments in early this month.
Of course, I was able to abuse about a CD's worth of free disk space to cover for this. I made the extra hard disk a feature rather than more trouble by installing the new setup solely to it.
The install went fine, but it wasn't completely smooth. Here's a few hints if you're plotting an upgrade, and I promise, they don't depend on you using SuSE:
Beyond these normal things, I really needed to get some of these projects into directories of their own, so it's clear where I should put stuff for those things from now on. Rather like ordering the teenager to clean up their room...
Next thing I know the end of the month is approaching, and my dreams of handling TAG at a dreamy summertime pace are dashed again.
I still think backups are your friend, but at least I didn't need 'em this time. All I need is more RAM and I'm set! The weather is improving and I'm having a great time. So here's those answers -- share and enjoy.
From dtrott
Answered By Ben Okopnik
Hi
I have recently installed Debian 2.2. I was just wondering if someone had a link or could suggest where I might find some basic instructions on setting up telnetd.
[Ben] <Warning! Opinion time!> I can give you a one-word instruction manual:
Don't.
Telnet is insecure, and its capabilities are seriously limited by comparison with SSH. Install ssh/sshd, and create an executable file in your /etc/init.d/ directory called "sshd" with the following content:
See attached sshd init script
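[ The attached script isn't reproduced here. As a rough stand-in, a minimal sketch of such an init script might look like the following; the paths and the Debian start-stop-daemon call are assumptions, and Ben's original may well differ: ]

#!/bin/sh
# minimal start/stop wrapper for sshd -- illustrative only
case "$1" in
  start)
    echo -n "Starting OpenSSH server: sshd"
    /usr/sbin/sshd
    echo "."
    ;;
  stop)
    echo -n "Stopping OpenSSH server: sshd"
    start-stop-daemon --stop --quiet --exec /usr/sbin/sshd
    echo "."
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0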
Now, create a symlink in your /etc/rc2.d/:
ln -s /etc/init.d/sshd /etc/rc2.d/S20sshd
The "S20" part says to 'S'tart the service (rather than 'K'illing it); the '20' places it (on my system, at least - take a look at your own "rc2.d" and number it appropriately) just after the link to "inetd". The position of the link is not all that critical; next to "inetd" (which starts other network services) seems appropriate, though. If you want to run the server immediately after doing this, but don't want to reboot, simply type
sshd
at the command prompt.
That's it. Just type 'ssh' where you would usually type 'telnet'. Oh, and make sure to at least skim the 'ssh' and 'sshd' man pages; this will make you aware of the many options that are available with this protocol.
I have not researched SSH in great depth, by the way; "kill telnet" is sorta common wisdom at this point, and I've read just enough to agree with it. After using telnet for a number of years, I find that I'm very pleased by the configurability of SSH; that alone, security aspects aside, made it worth switching for me. It would be nice if the other TAG members chimed in with their take on the proper usage and better reasons, but I'm highly satisfied with my installation.
From Martin Colello
Answered By Ben Okopnik
This is probably so easy as to be considered a joke, but I can't figure out how to do it.
I need to touch a bunch of directories recursively. All the directories and their contents, but I can't find any option in the touch command that allows this.
Any help appreciated, thanks!
[Ben] I've never seen a 'recurse' option in any "touch" I've used. Doesn't mean that one doesn't exist, though. Since yours doesn't, try this:
find DIR -exec touch {} \;
where DIR is either the path to the top directory that contains all the subdirectories you want to touch, OR is a list of the paths to those subdirectories themselves. One of those two options should do what you want. If you want to touch only the files (not the directories), add the '-type f' option before '-exec'.
It can also be a healthy idea to try it with "echo" instead of "touch" the first time; I test a lot of my "dangerous" scripts that way before letting them loose on live data.
[Mike] Or:
find DIR | xargs touch
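[ One caveat worth adding: plain xargs splits on whitespace, so filenames containing spaces will trip it up. GNU find and xargs have null-delimited modes for exactly this: ]

find DIR -print0 | xargs -0 touch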
Hi, just wanted to let you know the command worked great, I never thought of using any other argument with find except -name.
[Ben] "find" is a very cool utility. The man page is pretty big, but it's well worth reading up on. Lots and lots of good options.
Linux Gazette rocks, thanks a lot for your help.
[Ben] Yeah, we know.
You're welcome!
Just out of curiosity, the reason I needed to touch everything was because my anonymous ftp server wouldn't show certain files even though the permissions were the same as others that worked. But I found if I touched one then it showed up. Now they all show up thanks to you, but why should this have happened in the first place?
[Ben] I haven't experienced any problems with setting up FTP servers, so I can't really comment.
[Heather] Readers, if you think you know what it might have been, let us know. You can reach The Answer Gang as .
From jzaragoza alberich
Answered By Mike Orr, Heather Stern
I know what a winmodem is. I know too that they must be thrown out. All of them. But what is a winprinter? How can they be recognized? Must they be thrown out too? Or do they work under Linux?
[Mike] Beware of any hardware that lists only Windows operating systems on the box. If it says "Windows and Macintosh", there's a good chance it will work with Linux too. But if it lists only Windows 95/98/2000, NT and Me, be suspicious. It could be like a Winmodem, where vital parts of the modem's functionality are missing and must be emulated by the driver. Or it could mean there's Windows-specific code in the printer. For instance, instead of using a standard page-description language like PCL or PostScript, the printer may be tied to the Windows printing system directly (e.g., it may communicate with the computer via Windows API calls).
[Heather] A winprinter (like a winmodem) does not have complete printer brains. It uses Windows GDI calls to preprocess a buffer with the printable image, and then the printer just accepts it straight, if I read the original description right. Anyway, like a winmodem it really hits the CPU when trying to get work done.
Under Windows that means the driver is really tiny, since GDI is part of the default DLLs that make the rest of the GUI work.
Under Linux that means if it's a Lexmark you can convince it to work, and if it's something else, you could try the Lexmark winprinter drivers ... and if they don't work, oh well, give that to someone who only uses windows and needs a cheap printer.
Under a Debian system you can use the package lexmark7000linux. For others, you should try Henryk Paluch's website: http://bimbo.fjfi.cvut.cz/~paluch/l7kdriver
[Mike] Combined printer/fax/copier/scanners should especially be avoided unless you know that model works with Linux.
[Heather] It turns out there's a really great site that keeps track of all sorts of things about printing under linux. Wouldn't you guess, it's: http://www.linuxprinting.org
Thank you very much. You are very kind. Of course it will be an honor to get into your magazine. Perhaps you would find it interesting to talk about other "winsoftware": scanners, digital photography devices, joysticks, etc.
From Peagler Kelley
Answered By Ben Okopnik
Hi,
I am doing a global search and replace on some files via a unix script. I use the sed command (saving a copy of the original in $file.old) like so:
sed "s/$INPUT_TXT/$OUTPUT_TXT/g" $file > $NEW_FILE
Then I perform a diff on the original file ($file) and the new file ($NEW_FILE) and redirect the output to /dev/null. If there is a difference between the two files, then I move the new file to the old file. Unfortunately I end up changing the permissions on all the files in the directory, depending on whatever default umask is set. Is there a way that I can either 1) find out what the permissions of the original file are and change them accordingly to the new file, or 2) move the new file to the original file while keeping the permissions of the old file?? Please let me know. Thanks!!
[Ben] "sed" is not the best tool for "in-place editing" of the kind you want to do - all you really want is to change the data in the file, right? Perl offers a solution that reduces it from a three-step process (change/diff/move) to one:
perl -i -wpe "s/$INPUT_TXT/$OUTPUT_TXT/g" $file
That's it. The editing is done in the file itself; the permissions are unchanged. No muss, no fuss, no greasy aftertaste.
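[ If you still want the $file.old safety copy that the sed version kept, -i takes an optional backup suffix: ]

perl -i.old -wpe "s/$INPUT_TXT/$OUTPUT_TXT/g" $file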
I agree with you. I wanted to use perl, but the person who I'm creating this for does not know perl and will be responsible for supporting it. I like your quick, dirty solution and I may force them to use it just because it's easier. Thanks!
From hma
Answered By Heather Stern
Hi,
I like to thank u guys for great job. Am still going thru your past issues. I have sent 2 questions here and I have not heard anything from u guys.. is my pattern wrong?
[Heather] 1. We can't reply to every message. There are hundreds and hundreds of them every month. That you received no reply before is not a personal slight. We will never get to all of them. Just continue to look for whether someone else's answers apply to your questions too.
2. You had no subject this time - that usually doesn't help. For good behavior patterns that help your message get seen, see the TAG entry "Creed of the Querent" a few issues ago: http://www.linuxgazette.com/issue62/tag/5.html
I still have the following problem and I hope U can help now... The problem I am having is that when I boot into Linux I receive the following message: "VFS: Kernel panic" and that's it. It doesn't go any further.
[Heather] This message means the kernel loaded, then could not find what it's looking for next. VFS means virtual file system - that's what really manages all of Linux' filesystem drivers - it probably can't find the root fs. Although usually if that's the problem it says so. Odd.
When I try doing fsck, I get command not found.
[Heather] You're doing fsck from where? fsck is a linux utility, so you need to come up from a linux rescue disk of some sort. Toms Root Boot perhaps - handy because you can download and create one using DOS if you have to: http://www.toms.net/rb
There are Windows utilities for accessing ext2 volumes, but they won't directly help you get back into Linux, and I don't know of a Windows based ext2 analogue to Scandisk.
The problem started when I removed one of my hard disks; after putting it back, I got this problem. But I do get into Win when I select it.
[Heather] When you select Win from what? Have you replaced LILO with something else? (LILO doesn't normally have a selector, it makes you type things. Of course you could have Storm Linux, which replaced the boot: prompt with a cool selector screen.) Did you move any partitions before you put the drive back? Is the drive now your second drive?
To fix LILO:
- Boot from that rescue disk
- mount up the / volume on some empty directory ... /mnt would do nicely ... don't forget to tell it to use ext2, example:
mount -t ext2 /dev/hda5 /mnt
- cd into the mountpoint and run chroot
- edit your /etc/lilo.conf so it mentions your new volumes... and points at the right volume as your root partition now! For example, if it's now the second drive, then all your references to /dev/hda have to refer to /dev/hdb, except for "boot" (which says where to put the LILO first stage)
- (If it has become the second drive) you'll also want to edit /etc/fstab, since all its drive references are off, too. Otherwise you'll get a failure in the next stage when Linux really spins up.
- Run /sbin/lilo to put your bootloader back together. Or else fix your new bootloader so it passes good options to your linux kernel - I think NT needs a copy of a (correct) LILO bootsector for its mechanism, in a tiny little file.
In short, LILO hates it when you move stuff around. Sorry.
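[ Put together, a rescue session looks roughly like this; the device names are examples, substitute your own: ]

# from the rescue disk, assuming / is now on /dev/hdb5
mount -t ext2 /dev/hdb5 /mnt
chroot /mnt
vi /etc/lilo.conf    # fix the root= and image= lines; leave boot= pointing at the boot drive
vi /etc/fstab        # fix the hda -> hdb references
/sbin/lilo           # reinstall the boot loader
exit                 # leave the chroot, then reboot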
I hope u reply now.
[Heather] My inbox load today was light and I'm in a crossplatform mood, so you got lucky. Usually I leave LILO matters for the rest of the Gang.
Thanks.
Hassan
[Heather] Have a happier day
... I merged answers about his chroot troubles into the steps above ...
Once again, thanx and keep up the good work...some day when I become a guru, I hope to help too...
Hassan
[Heather] When you feel ready, I'm pretty sure the Gang will still be here to welcome you aboard!
From Andrew Higgs
Answered By Ben Okopnik, Mike Orr, Faber Fedor
Hi all,
While on the subject....any suggestions on a good place to find out about regular expressions and procmail.
I collect mail from one ISP mailbox and send it on to the correct user based on email address. I also have a problem with people who use mailing list groups in Outlook etc. How do I split these properly?
Any pointers graciously accepted.
[Ben] "procmail" uses extended regexes, much the same as "egrep". So, for some good examples and broad-scope explanations of those, try
man procmailex
man procmailrc
man grep    # The "REGULAR EXPRESSIONS" section is a great reference
man regex   # Somewhat different "flavor", but useful
For single address to multiple local user resolution, read the "Email Addressing FAQ (How to use user+box@host addresses)" at www.faqs.org/faqs/mail/addressing/ - as it happens, I just ran into this thing yesterday, and was much impressed with the logical style and layout of it. Even if this is not exactly what you're doing, there are a number of relevant useful techniques described in this document - and it's mostly based on doing it with "procmail".
[Mike] Ben! You even wrote the article about procmail antispam filtering and you didn't mention it. Just in case Andrew hasn't seen it:
http://www.linuxgazette.com/issue62/okopnik.html
[Faber] Let me throw out this little tidbit:
If you use the \< and \> word boundary markers in procmail, keep in mind that they consume (eat) the word boundary. Every other program I've used the word boundary markers on did not eat them.
Makes a BIG difference when you're grepping your text (and it took me an hour to figure out!).
Regards,
Faber Fedor
From Andrew
Answered By Andrew Higgs, Jim Dennis
Hello,
I swear that the hardest thing to setup under Linux (at least for me ) has been samba. Running Linux RedHat 6.1 & have a windows 98 se machine. I see the Linux machine when i go into Network Neighbourhood & when i click on it i get the password box but it ALWAYS fails. The message i am getting at the moment is this
[2001/04/08 04:02:05, 0] smbd/server.c:sig_hup(340)
  Got SIGHUP
[2001/04/09 00:18:04, 0] smbd/password.c:domain_client_validate(1213)
  domain_client_validate: no challenge done - password failed
[2001/04/09 00:18:05, 0] smbd/password.c:domain_client_validate(1213)
  domain_client_validate: no challenge done - password failed
[2001/04/09 00:18:09, 0] smbd/password.c:domain_client_validate(1213)
  domain_client_validate: no challenge done - password failed
[2001/04/09 00:18:10, 0] smbd/password.c:domain_client_validate(1213)
  domain_client_validate: no challenge done - password failed
What I would truly like is to set up Linux as a domain controller, so that when the Windows machine boots, validation is done via the Linux machine by verifying against the password file or smbpasswd file, and you then have access to shares at that point. Does it have to be WinNT to get this to work? I know there is the issue with encrypted passwords; currently my Linux machine is set to yes. Regards...
Andrew
[Andrew Higgs] Hi Andrew,
It is very possible to have Win 95 and 98 logon to a Samba domain. I assume you have read the relevant docs which come with Samba.
Have you added a Samba user to correspond with the one (your windows username) you are trying to use. It seems to me that it is failing because the username (and consequently the password) is not there. Try 'smbpasswd -a username' also 'man smbpasswd' for further details. Also bear in mind that Win 95 (and I assume 98) don't let you specify a username when trying to login to the Samba server, they just use your windows username.
I hope this sorts out your problem.
Kind regards
Andrew Higgs
[JimD] I was just reading about the Samba 2.2.0 release at http://www.samba.org/samba/samba.html. The Samba team is continuing to strive towards full domain controller and "MS Active Directory" functionality. You should look at the new release and read every shred of documentation. Linux/Samba as a domain controller is still cutting edge (sometimes brushing against the cutting edge leaves us dripping a bit of blood).
You'll especially want to read the PDC HOWTO documentation at your nearest mirror of the Samba web site: http://www.samba.org Just follow the "Documentation" link and look for the FAQs and HOWTOs under the "New PDC Documentation" heading.
There are Samba mailing lists and discussion of Samba predominates the discussion on the news:comp.protocols.smb newsgroup. If you have a connection to a good NNTP server you can post your questions to a forum where hundreds of Samba users and specialists can see it.
Also, no tech support suggestions for Samba would be complete if we didn't point you to the Samba DIAGNOSIS guide. You can see that at: http://WWW.samba.org/samba/docs/DIAGNOSIS.html (where WWW can be www or the names of one of the many samba mirrors).
From Philippe CABAL
Answered By Michael J. Hammel
hi
[The Graphics Muse] Howdy.
I am looking for a free multi-platform (Win32 + Unix) 3D modeling / rendering / animating package. All I have found is POVRay and Blender.
[The Graphics Muse] That's about it.
but I'd like software as open as POV (data sources) and as intuitive as Blender
[The Graphics Muse] No such beast.
Actually I am a programmer, so I need to have a look at the scene source. Isn't there some VRML tool that does broadcast output?
[The Graphics Muse] Nope. You have a few choices, but nothing that fits all your desires here.
If you want VRML, you can try SCED or SCEDA. Both have pretty primitive interfaces, but are fairly sophisticated underneath. They include a constraint-based mechanism. The source is available. They are the only ones I know of that produce VRML output.
Outside of these two, all the other tools do not provide source: Blender, Houdini, Maya, etc. Blender is by far the least expensive but the most sophisticated for the money. Houdini and Maya are high end, high dollar products. POVRay is just a rendering engine, not an interactive modelling tool. A better option for rendering is probably BMRT, the Blue Moon Rendering Tools, which is a Renderman compliant renderer. It was actually used for several movies. However, like POVRay, it is just a rendering engine, not an interactive modeller.
Nothing is available on Linux for free and with source provided that can do broadcast output. You have to string together a few different tools in order to do that - for example Blender for modelling, something else to convert Blender files to RIB files (it doesn't do RIB yet), and then BMRT for rendering.
Blender is really the best thing going in this department, since you can add scripting to it fairly well using Python. Its interface is production quality. It just doesn't export to formats that can be used by other rendering engines, like RIB for PRMan (Pixar's rendering engine) or BMRT.
As for cross platform, give it up. Blender I think is cross platform. POVRay doesn't much care where it's ported to and I believe BMRT has been ported to Windows (much to the consternation of the original author, no doubt). But cross platform graphics tools are pretty difficult to do since such tools are often very happy close to the hardware, and getting close to the hardware on different OS's is not quite so easy a proposition.
Hope that helps.
From darrell rolstone
Answered By Ben Okopnik
Dear Staff of the Answer Gang!
I hope you folks appreciate the occasional question from a non-techie....who's really into seeing and helping the information revolution FLOURISH!
I'm a 52-year-old "synergy design consultant" from Marin County, California....living in Thailand for the last 6-plus years. I was a pioneer in the Wholistic Health movement of the 70's and a student of R. Buckminster Fuller. I'm a world-class Nordic Physical Therapist.....and I have trouble with even the simplest technological things like copying something onto a disk! REALLY!
[Ben] <laugh> The two are not necessarily related... but say on.
But in spite of nearly total techno ignorance.....I'm quite skilled in the social aspects of the techno evolution/revolution! And I sincerely want to help that process along its path.
So, my question(s) to you gurus of "geekness"....just what is being done in the area of co-ordinating all the Linux "programming development" that is manifesting? Is there a "co-operative" formed? Can a (traditionally "left-brained" dominant) programmer offer up his/her work to a Linux "group of (traditionally right-brained dominant) marketers" who will then take over and bring his/her work to fruition?! (thereby "sharing the knowledge" at a higher level of efficiency).
If there is such a "group"......can you direct me to them? Praises upon you all for sharing your knowledge! Really!
[Ben] Well, Darrell... that's a heck of a question to ask of a bunch of traditionally left-brained computer types. <smile> Actually, if you're a student of revolutionary processes, you may find Linux very interesting for just that reason.
The Linux kernel itself - as wonderful of a thing as it is - is not (from my perspective) the thing that is responsible for the popularity and the tremendous growth of Linux. What is responsible for it is the Linux/Open Source model - that of people working on their own, or with a team, and getting full recognition for their work. The traditional hurdle of marketing a product is largely eliminated, since the greatest majority of the programs for Linux are free; the "distribution channel" - the Internet - is also mostly free (the costs are not assignable to Linux, so it is free in this regard.) In those terms, the marketing model for Linux and its software is not the traditional "push" - we have no need to stuff it down the gullets of barely willing customers - it is "pull": when people need a piece of software, they research it, download it, and install it. As well, the "feedback loop" that is usually set up between the programmer and the interested users of the program is a tremendously powerful tool: if fifty thousand people have pounded on your program for a few months, and the flow of bug reports has finally ground to a halt, either that program is as perfect as code can be, or it has simply been cowed into submission.
The effect of this is exactly what Robert Pirsig talked about in his "Zen and the Art of Motorcycle Maintenance" (I would guess that you're familiar with the work) - a shift toward Quality being the focus. That, to me, is the most exciting thing about Linux: quality really is "Job #1".
As to co-operatives... well, have you ever tried herding cats? There are several things that have worked well in the Open Source community, usually by providing maximum assistance and convenience but minimum direction: any of the large-scale programming projects, such as WINE, Mozilla, the whole series of GNU projects, KDE, etc. There is also SourceForge, which provides an archive/code repository/distribution point for development efforts.
I'm not sure if this is any kind of an answer that you were looking for; mostly, these are just the ramblings of a right-brained guy who loves using his creativity in a left-brained way. <chuckle> I think that dichotomy was a non-starter, for me; never could see it...
From Ian Carr-de Avelon
Answered By Mike Orr
In LG#65 I read:
"Another thing this article does is raise the question, just because we can use Linux in a wide variety of routing situations, should we? Are you choosing a Linux router because it's the most appropriate solution for the task, or simply because "we're a Linux-only shop"? "
Well... What are the choices? Basically:
The Linux option has a lot going for it, especially if you are an organisation which does not have a team dedicated only to routers, as large telcos do. Routing sits causing no problems for months, while you forget how to work on the router; then when problems arrive it is panic stations, because nobody can work, clients are not being served, and business is being lost. I run a Polish ISP with Linux and one CISCO router, which we bought because I was overruled: although the WAN card for Linux was cheaper, the CISCO dealer offered unbeatable financing. I don't see that changing soon.
Yours
Ian
[Mike] You bring up some good points, but that does not invalidate the question. I'm not saying Linux *shouldn't* be used for routing, just that each organization needs to weigh the price-vs-performance-vs-maintainability factors for itself. The situation I was thinking about (and perhaps it wasn't clear in the paragraph) was not a small, low-traffic network, for which Linux's price and maintainability certainly run circles around proprietary systems, but rather an enterprise-level, high-traffic situation. Is there an amount of throughput above which Linux routers are not (currently) scalable, a point at which Ciscos would be more economical? I don't know, but a netadmin in that situation would want to explore both options before making a decision.
My point is not so much about Linux vs Cisco, but about jumping on the Linux bandwagon too quickly. We all know hundreds of companies that refused to consider any alternatives to buying NT servers, WINS servers, Novell servers, etc. The same can happen in the Linux world, if one refuses to consider an alternative to a Linux router more because it's politically incorrect than because of an actual comparison of price, performance and maintainability, and how they would all affect your organization in its situation.
From eric lin
Answered By Ben Okopnik
Hello Answer Gang,
Let me start off by thanking all of you for providing such excellent service.
I'm running Red Hat 6.2 with Apache 1.3.9 and Sendmail 8.9.3 as an internal web/mail server. I use it on a daily basis, but haven't changed any of the configurations since the initial install. Yet mysteriously the httpd.conf and the sendmail.conf files become null (file size of 0)!!! This occurs randomly and usually after a reboot of the system.
Since it is internal and no one uses it except for myself, I have no way of explaining why this is.
Do you guys have any ideas???
[Ben] Wow. That's odd. Very odd. It sounds like maybe some sort of a config file backup procedure (?) gone wrong. One of the first things I'd do is switch to "/etc/init.d" and grep the scripts there for any mention of the above files. I'd investigate anything I found with a very sceptical eye, possibly looking for evidence of intrusion (I can see some script kiddie being very interested in those two files...) or just a badly-written script.
If you can't find anything, try setting the immutable attribute on those files via "chattr" (see the manpage); this should at least stop them from "disappearing". I, for one, would be very interested to know what you find out in your troubleshooting process.
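[ For instance, assuming the stock Red Hat 6.2 locations for those two files: ]

chattr +i /etc/httpd/conf/httpd.conf /etc/sendmail.cf
lsattr /etc/sendmail.cf    # an 'i' in the output confirms the flag is set
# chattr -i <file> removes the flag again when you legitimately need to edit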
From Christi Gell
Answered By Mike Orr
I have DSL and I want to get 2 dynamic IP addresses with it. I've got a hub, but every time my husband needs to get online, I need to release my IP address.
[Mike] Your hub is connected directly to the DSL modem? In that case, you will have to contact your ISP to get a second dynamic address from them... if you can.
A more common scenario is to have one computer (the server) connected to the modem and also to the hub. The second computer is connected only to the hub. The first computer has IP firewalling and IP masquerading compiled into the kernel. (I assume you're running Linux, since you sent this to a Linux answer forum. If you're using Windows, you'll have to go somewhere else for help.) Then you enable IP masquerading on the server. Now the second computer can reach the Internet without needing a second dynamic IP from the ISP.
To set this up, search for "masquerading" or "masquerade" in the Linux Gazette search engine (www.linuxgazette.com/search.html). Or pick up a Linux configuration book from the bookstore or look in the manual that came with your distribution.
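[ On a 2.2-series kernel (stock on Red Hat 6.2 and friends) the core of the server-side setup is only a few lines; the internal network 192.168.1.0/24 below is an assumption, and a 2.4 kernel uses iptables instead: ]

echo 1 > /proc/sys/net/ipv4/ip_forward          # turn on forwarding
ipchains -P forward DENY                        # default: forward nothing...
ipchains -A forward -s 192.168.1.0/24 -j MASQ   # ...except masqueraded internal traffic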
This handy dandy function is courtesy of Cobratek on #Mandrake on EFnet. It is super kewl, since you can unpack all your LG issues (you do have all 65, don't you?) and instantly view any one.
Simply add this function to either ~/.bashrc or better yet /etc/bashrc so everyone on your system can read LG.
function lg () { lynx /home/bandido/docs/Linux.Gazette/$1/index.html ; }
Remember to change the path to wherever you unpack your LG issues, and of course don't use ~/ in the dirname if you put the function in /etc/bashrc.
Personally I unpack all issues like this,
/home/bandido/doc/Linux.Gazette/1
/home/bandido/doc/Linux.Gazette/2
3 4 5
etc
Thus, I type lg 20 or lg 35, etc., to open issue 20 or 35 instantly in lynx, and I am back in my prior dir when I exit. Nice and handy, never far away from LG. Feel free to drop by #Mandrake on EFnet too; unlike most linux channels, newbies are very much welcome.
Vim's syntax highlighting can be helpful at times and painful at other times. Add this to your .vimrc and you can turn colors on and off with the tap of a button.
" map F8 to switch on and off syntax highlighting function Swapcolor() if exists("g:syntax_on") syntax off set nohlsearch else syntax on set hlsearch endif endfunction map <F8> :call Swapcolor()<CR>
The mtools commands start with the letter "m". They look at a floppy disk as "a:" or "a:\" just as Windoze does.
To copy one file to another, use "mcopy"
If you want to copy a file "myfile" from a: to your home directory, use this command:
mcopy a:\myfile /home
If you want to copy myfile from /home to a: use this command:
mcopy /home/myfile a:
To check the contents of a file or directory, use mdir.
To check the contents of a:
mdir a:
Hope that helps a little bit,
stevew
Hello,
How can I use CFDISK from my REDHAT CD-ROM as though it was from a hard drive linux installation?
If this is during the install process, I am pretty sure you can hit <Ctrl><Alt><Fn> (try the various function keys) to switch to another virtual console. Keep cycling through the keys until you find a free console. You should then be able to use cfdisk.
-- Daniel
Dear answer guy,
I would like to print a dvi file on an HP600 deskjet printer. Is it possible? I've tried the commands dvilj, dvilj2p, dvilj4 and dvilj4l, but they are all for LaserJet printers. So I get some strange results.
Have you tried "dvihp"? It's supposed to convert DVIs to HP PCL (Printer Control Language.) Or, you could always just run "dvips" - it'll produce a PostScript file that you should be able to print without any problems.
-- Ben
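[ A sketch of the dvips route, assuming your print queue accepts PostScript; the filenames are placeholders: ]

dvips -o myfile.ps myfile.dvi
lpr myfile.ps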
I need to find out the slot address for my PCI network card; how can I easily track this down?
My network card is in slot 1 and I need to find out the address (0x0081 or ???).
Does it say in the boot messages? Run "dmesg | less" to see your boot messages again. If you don't find the right information, please send us back a copy of your boot messages (in particular, the portions beginning with "PCI: " and anything that looks like it may be related to the network card).
Each PCI slot corresponds to a fixed address. Perhaps looking in include/linux/pci.h or drivers/pci/pci.c in the kernel source would help.
-- Mike
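[ Two other quick ways at the same information, assuming a /proc-enabled kernel and the pciutils package: ]

cat /proc/pci    # the kernel's view of each PCI device, with I/O ranges and IRQs
lspci -v         # pciutils' verbose listing, by bus, slot and function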
Back for more of your knowledge
And we're still here dishing it out!
I have an authlog file & i keep seeing this info within it
Apr 3 11:31:58 echelon pam_limits[27640]: invalid line 'hard^Icore^I0'
Apr 3 11:31:58 echelon pam_limits[27640]: invalid line 'soft^Inproc^I100'
Apr 3 11:31:58 echelon pam_limits[27640]: invalid line 'hard^Inproc^I150'
Apr 3 11:31:58 echelon pam_limits[27640]: invalid line 'hard^Ifsize^I40000'
PAM was installed via an RPM & seems to be working fine with everything else.
I would just like to fix this area of it if possible
Check your /etc/security/limits.conf file. It seems PAM doesn't like it. Why? I don't know, but I checked my limits.conf file and my columns were separated by spaces, not tabs.
If you do a cat -v -t -e /etc/security/limits.conf, you'll see tabs as ^I and end-of-lines as $. -- Faber
Just to be nitpicky, cat -A is a combination of those options. -- Ben
cat -T is enough to see the dratted tabs as ^I but stray spaces at the end of the line still won't be obvious. -- Heather
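[ If the tabs do turn out to be the culprit, converting them is a one-liner; keep a backup copy first: ]

cp /etc/security/limits.conf /etc/security/limits.conf.bak
tr '\t' ' ' < /etc/security/limits.conf.bak > /etc/security/limits.conf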
Hi there Linux Gazette Team!
I was browsing through LG today, and came across the article 'Finding my computer at home from the outside'. This is a topic that interests me, as I like to be able to access my home machine from school. Although technically accurate, I found that writing these scripts is an extremely cumbersome way to do the job. (Not to mention that passwordless logins (secure tunnel or no) are just plain bad form...). I'm not writing this email to complain (you guys do too much good work), but rather to inform!
If you're in a situation like me, and you either can't get (or can't afford) a static IP on broadband, there is a much simpler solution. http://www.dyndns.org. A free service (they DO accept donations), DynDNS allows you to register a hostname (within one of their domains...for now), and run a client to update with them each time your IP changes. After registering with DynDNS, you can download a little client utility (I prefer ipcheck.py), and have it run from your /etc/ppp/ip-up script (I'm on DSL, so my connection is still PPP)...which is run every time that your IP changes.
I've found the service to be most valuable.
Thanks
-Ben Walton
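[ For the curious: hooking the client into ip-up can be a single line. The account details below are placeholders; check ipcheck.py --help for the exact arguments it expects: ]

# somewhere in /etc/ppp/ip-up (or ip-up.local):
/usr/local/bin/ipcheck.py myusername mypassword myhost.dyndns.org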
Welcome to this month's new feature....The Weekend Mechanic. Actually, for those of you who have been avid readers since LG's initial release, you'll realise that this column used to be written by John M. Fisk in 1996-1998, so it is not that new. However, I thought it would be nice to re-introduce it as a regular feature.
The Weekend Mechanic will draw together my experiences of Linux and the problems that I have had to solve either at home or at school each month. So, The Weekend Mechanic will concentrate on the following:
So, with that in mind, let's begin this month's fixing and tinkering session......
I have noticed that more and more people using Linux tend to rely solely on the GUI, hoping in vain that they will never have to type in any commands, for fear of deleting their work, making a fatal mistake, etc. Although the only real threat of this happening is if you are logged in as "root", I find that people are still wary!! However, there will come a time when typing in commands is a necessity, and it is important that your shell environment is customised so that you can complete your tasks with ease. This article will show you how to customise the login shell so that features such as aliases, editors, shells, etc. work the way you want them to.
Firstly, we should make sure that you have an appropriate editor installed. There are many console editors to choose from, such as emacs, joe, jed, pico and vi. Once you have found an editor that you like (I happen to use both Pico and Jed), you can tell the shell to use it. Some programs such as Cron (as we shall see later on) rely on the shell having an editor set up so that you can edit the crontab.
There are two files that we shall be concentrating on. They are located in your home directory as: .bashrc and .bash_profile. In my .bashrc file, I find that it begins with the following:
# User specific aliases and functions
alias ls='ls -o --color=auto'
alias cad='cat /var/squidGuard/db/blacklist/adverts'
alias cc='cd /mnt/cdrom/Mandrake/RPMS'
alias mail='mail -v'
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
alias d='ls'
alias s='cd ..'
alias p='cd -'
Aliases are useful, especially if you find yourself typing out a command that has a deep directory listing. For example, if you found yourself having to keep typing in the command
cd /var/spool/users/home/mail/root/sun

to save all that typing you can specify a "shortcut" word that automatically does just that. Cool eh?
So to tell the shell that you want to use the word "checkmail" to do the command
cd /var/spool/users/home/mail/root/sun

you would add to the list:
alias checkmail='cd /var/spool/users/home/mail/root/sun'

Then you could type the alias checkmail and hey presto....it works!!
Of course many people like to issue aliases to accommodate their typographical errors; i.e.,
alias eamcs='emacs'
alias emcas='emacs'

Personally I think this is a bad idea, and you should learn to type more accurately!
The next section we shall look at is how to tell the shell which programs to run when it is suitable to run them. In my .bash_profile file I have, among other things, the following:
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
ENV=$HOME/.bashrc
USERNAME="root"

export USERNAME ENV PATH
This is the section that we shall be concentrating on: setting these variables. Common variables that have not been set are ones like "EDITOR" and "MAIL". These variables are specific to the user that is currently logged in, meaning that different values can be specified for each user. The variable EDITOR specifies the editor to use when editing files. This variable is usually consulted by programs such as Pine and Cron, but it can also be very useful when writing shell scripts.
To set the variable, one has to add it to the "export" list, like this:
export USERNAME ENV PATH EDITOR
Exporting a variable releases it into the environment, rather than keeping it within a single program. This is done so that many different programs can all see the same variable name with the same value, get it? :-)
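Putting it all together, the relevant part of my .bash_profile might look like this (the editor path is just an example -- substitute whichever editor you chose above):

PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin
ENV=$HOME/.bashrc
USERNAME="root"
EDITOR=/usr/bin/jed

export USERNAME ENV PATH EDITOR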
Once added to the export list, save the file, and exit your editor. So, now that we have defined a new variable, the next thing to do is to tell Bash that it is there. To do this, you must "source" the file. This is a bash builtin that re-reads the file. You can do this in one of two ways. Either you can specify
source filename

or you can specify a "." thus:
. filename
And that will then activate your newly added variable (for example, source ~/.bash_profile). Well, that's it for this section....
Do you ever find yourself repeating common tasks throughout the day, and wish that there were some sort of program that would automate them all for you? Well, look no further: Mr. Cron is here :-)
Cron is a scheduling program, and even more specifically it is known as a daemon. By daemon, I do not mean that it is a nasty creature with two horns!! A daemon is simply a program that runs in the background waiting for instructions. When it receives them, it executes them and when it has finished, it goes dormant again.
Cron is usually started when you switch to one of your run-levels. However, just to make sure it has started, issue the following command:
ps aux | grep crond
If you get a response similar to:
root       311  0.0  0.7  1284  112  ?     S    Dec24  0:00 crond
root      8606  4.0  2.6  1148  388  tty2  S    12:47  0:00 grep crond
Then cron has started, and you are ready to use it. If you don't get "crond" returned, then you must start the daemon, by typing
crond
Cron is particularly useful when you find yourself needing to run backup and general maintenance programs. To tell cron when to run a program, you have to fill out several fields. Each separate program that is scheduled via cron is put into a file known as a crontab file. The fields are defined as the following:
Min Hour DOM Month DOW User Cmd
And a description of their input values are summarized in the table below:
FIELD | DESCRIPTION
------|------------
Min   | Specifies the minute on or past the hour. Values range from 0 to 59.
Hour  | Specifies the hour (military style) at which the script should run. The range is 0 to 23, where "0" is midnight.
DOM   | The day of the month on which you want the command run, e.g. to run a command on the 23rd of each month, the DOM would be 23.
Month | Specifies the month in which to run the script. Values range from 1 to 12, where "1" is January and "12" is December; or it can be given as the first three letters of the month, e.g. May.
DOW   | Specifies the day of the week, either as a numeric value from 0 to 7 (0 and 7 are both Sunday) or as the first three letters of the day's name, e.g. Mon.
User  | Indicates the user the command runs as.
Cmd   | The path and name of the script/program to be run.
You can use a "*" (without the quotes) in any of the time fields to mean "every minute", "every hour", etc.
So, with the above descriptions in mind, the following examples are all valid:
01 * * * *   root /usr/bin/script     "This command is run at one min past every hour"
17 8 * * *   root /bin/mail           "This command is run daily at 8:17 am"
17 20 * * *  root /usr/bin/fetch      "This command is run daily at 8:17 pm"
00 4 * * 0   root /bin/qweb           "This command is run at 4 am every Sunday"
00 4 * * Sun root /usr/bin/file2rpm   "This command is run at 4 am every Sunday"
42 4 1 * *   root /usr/bin/squidlog   "This command is run 4:42 am every 1st of the month"
01 * 19 07 * root /usr/bin/xman       "This command is run hourly on the 19th of July"
See how easy it is? :-). Cron also accepts more sophisticated time specifications: run "man 5 crontab" for an explanation of these.
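For example, vixie-cron (the version found on most Linux systems) understands ranges and step values; here is a quick sketch, with an invented script path:

# Run every 15 minutes, from 9 am to 5 pm, Monday to Friday
*/15 9-17 * * 1-5 root /usr/local/bin/checkmail

Here "1-5" is a range (Monday to Friday) and "*/15" means "every 15th minute".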
Of course this is all very well, but I have not told you where to put any of your cron entries. So.........hang on there, reader.
The most common version of cron installed on linux systems is "vixie-cron", and so in the "/etc" folder should be a file called "crontab". If you have specified the environment variable EDITOR (see the above section) then you can simply type in:
crontab -e
And that will load your crontab into your text editor. (Note that crontab -e edits your own personal crontab, not /etc/crontab itself.)
If you did not open it using the above command, then open it using a text editor of your choice and you should find something that looks like the following
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
[email protected]
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
The SHELL variable indicates the current shell that we are using
The PATH indicates the path to the most common programs
The MAILTO variable indicates to whom cron should mail the result of each job (i.e. whether it worked or not) along with any output from the program. If you find the mail annoying, you can delete this variable.
The section below "# run-parts" is supposed to work so that, for example, whatever script is in the folder "/etc/cron.daily" gets executed daily. However, for some strange reason, it has never worked well for me, and I have since found it easier to specify my own cron list.
So, to add the above examples to our crontab, it is just a matter of copying and pasting them in:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
[email protected]
HOME=/
# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
#Custom Crontabs -- Put in by Thomas Adam
01 * * * * root /usr/bin/script
17 8 * * * root /bin/mail
17 20 * * * root /usr/bin/fetch
00 4 * * 0 root /bin/qweb
00 4 * * Sun root /usr/bin/file2rpm
42 4 1 * * root /usr/bin/squidlog
01 * 19 07 * root /usr/bin/xman
Then save the file. Now the last thing we have to do is to tell cron that we have edited the file. That is done with the following command:
crontab -u root /etc/crontab
And that's it...just sit back and wait..... You should find that by now your workload has diminished by about 25% or so!!!
Cron also has the ability to control which users are allowed to use it. To implement this, two files called cron.allow and cron.deny have to be created in the folder "/etc".
These files work in the following way. If for example you wanted nobody to have access to cron, then you would add the line "ALL" to the cron.deny file. If you wanted only certain people to use cron then you would add their username to the cron.allow file.
Rather than having to edit the file each time, I find it much easier to use the following command:
echo username >> /etc/cron.allow
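To make the two setups described above concrete, here is a quick sketch (the usernames are invented). Note that on most systems, if cron.allow exists it takes precedence and cron.deny is ignored:

# Allow only these users to use cron:
echo thomas >> /etc/cron.allow
echo heather >> /etc/cron.allow

# Or, with no cron.allow file present, deny everybody:
echo ALL >> /etc/cron.deny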
That's all there is to it....have a go and see what you think......!?!
Well folks, that's it for this month. I had hoped to do more, but unfortunately school work intervened yet again!! I would really appreciate some feedback, general comments, hints as to articles, etc. Armed with this information I can then go about "making linux just that little more fun" :-)
I am now off to go and teach piano, do some Geography revision (yay) and then maybe continue working on some of my ongoing "bash script projects". After which, I might even be able to get some sleep. Who knows?????
In the meantime however, I wish everyone...."happy Linuxing"
For those not into gaming, "All your base are belong to us", is a video-game slogan translated from the Japanese. Read the LA Times article, see the history, or watch Clippy sing it on the Microsoft site (turn on Javascript and run the mouse back and forth over the links in the left column and watch what the Clippy image does, then click on the "Click" button several times).
[Too bad I can't listen to the song, "It Looks Like You're Writing a Letter", since it's in Windows Media format. I would just like to see its parody version, "It Looks Like You're Writing a Suicide Note". -Mike.]
I read the overview by Jason McIntosh about ComicsML over the weekend. There's a lot of sense in what he says: about the ability to define comics and to be able to search archives and automate processes. As an example, just last week I was searching for a specific Far Side strip, and luckily the text had been reproduced within an HTML page, so I was able to find it. Otherwise I could have been searching fruitlessly for weeks.
That said, as an artist myself the act of documenting artwork and standardizing it seems to demystify the process. Part of the magic of reading today "Pearls Before Swine" or "Randolph Itch 2am" is having to track it down and read it. Automating the process turns it into a kind of Shakespeare-via-Brodies-Notes. There is no shortcut to art. Add that to the technical fact that the time taken to draw a strip will now double - once for the art which we already draw and ink, then the marking up of the ComicsML after the fact, thinking of an appropriate teaser, typing the spoken text etc. Sheesh! Most cartoonists barely have enough attention span to brush their teeth,
[And run around chasing villains all day. -Mike.] ...let alone do all this house-keeping for each and every strip.
Art as revenue-raiser also requires that human intervention be present. When HelpDex was running on LinuxToday.com the filenames were randomized so automated scripts or robots COULDN'T simply pull down the pics every day. LT was paying for a service that would entice readers to their site, not robots. There's no incentive to sponsor a strip without being able to quantify the hits or business generated.
I sound harsh, but I'm not. ComicsML does look like a good idea being introduced at a good time. The new breed of cartoonists literate in new technologies, such as the ones working in Flash and so on, would pick this up quickly and, once it is standardized, could help spread information much more rapidly than currently occurs. I feel that from conventional artists such as myself there is bound to be a slow uptake, as this simply adds more setup effort than the benefit it would realise.
What am I trying to say? I think it's a tidy idea but I'm too darn lazy to implement it unless I can see a direct benefit from the extra effort required.
OLinux: First of all, tell us about your background.
Ben Collins: I am, generally speaking, a programmer and systems administrator. In the past I have also worked as a Desktop Publisher and a web designer. I've worked for NASA LaRC, several ISPs, and currently am working at Winstar.
OLinux: Please give a brief summary of Debian's history, philosophy, and organization in handling free software development.
Ben Collins: Our philosophy goes back a long way. Mainly we believe that it is possible to create a completely free operating system with all of the things you need to do your daily work. That's what started Debian, and prompted Ian Murdock to write the Debian Manifesto. From there began our project, and from it has come the Debian Free Software Guidelines (DFSG), which defines the type of software licensing we consider to be Free in the sense of Freedom. Also came the Debian Social Contract, which defines what we will support for our users. Later, as we grew, came our Constitution, which defines our operating procedures, and breakdown of authority within the project.
We've basically given full control of each package to the maintainer of that package, so long as it falls within the guidelines of our well defined Policy. Our Policy is one of the strengths of the Debian distribution. Without it, we would not have a cohesive set of packages, and installs/upgrades would be a nightmare.
OLinux: How excited are you about being at the front of the Debian Project? Do you have something in mind for the Debian Project? Are you going to make changes to the way the work is done?
Ben Collins: I'm extremely excited. This is my third run at the DPL position, and it is a goal I have finally achieved thanks completely to those in the project that have faith in my ability to handle the job. I have plans to clean up some loose ends that have been plaguing our internal organisation for some time. After this, I plan to tackle some of the more difficult situations that still linger, or are threatening to be a problem in the near future.
OLinux: What are going to be the differences between your leadership and the predecessor's?
Ben Collins: When I first came to Debian, Ian Jackson was finishing his term as DPL, and he was very inactive (in his defense, I do not know any details of his situation). Wichert then followed for two terms. I believe he did an excellent job keeping Debian going. However, my plans are to get Debian moving rather than continue to limp along with some of the problems we face.
OLinux: How are people organized and what are the tools used to control the results of the work being done in different projects and parts of the world?
Ben Collins: Within Debian, we have the maintainers (some 800 it seems now). Each of them is responsible for maintaining one or more packages (some do not maintain packages, but help with other projects internally, such as the ftp archive, www site, etc.). They have complete control of their tasks within the guidelines and policy. Within this, some developers have grouped together to manage large specific tasks. Examples of this are the Debian Junior project, as well as the ports (such as sparc, arm, alpha, powerpc, etc.) and language projects.
All work is coordinated via mailing lists. Some people also use IRC as a way of immediate interaction (via irc.openprojects.net). We also have the Debian Bug Tracking System to manage bug reports for all of our packages and systems. This system is available publicly via our web pages. Anyone can file a bug, and track its progress directly with the maintainer.
OLinux: How many people are working for Debian nowadays? Are you satisfied with the results?
Ben Collins: Last I checked, about 800. I am satisfied with the results. What I am not satisfied with is the influx of maintainers without a better scheme to manage them. Work is being done, but I want to see some other things in this area discussed and looked at.
OLinux: What do you think about people saying that Debian 2.2 has too many bugs? What are you going to do in "Woody" to change this point of view?
Ben Collins: I was not aware that people said that. We have an excellent security team that fixes all known security related bugs. We also make regular point releases (2.2r3 is being worked on as I write this) to update the security patches into a new release. For woody we have a new "testing" mechanism which should help reduce the amount of time needed to release. Hopefully this will make more frequent releases possible.
OLinux: What are your expectations about the "Woody" launch?
Ben Collins: I look forward to a lot of the things that are going to be available in woody. Woody also promises to include the most architectures we have ever released at one time (by any distribution, as far as I am aware).
OLinux: What are the active projects at Debian? How are they divided and coordinated in terms of content and staff for each project?
Ben Collins: Usually a project within Debian creates itself to fill a need. The project manages itself, and delegates within its own ranks who is responsible for what tasks. I'm not aware of all such projects, simply because most of them work in the background, silently making Debian better.
OLinux: Here in Brazil, there is a project called Debian BR. This is a project that is translating the Debian content to Portuguese. Do you know about it? If so, what do you think about it? If not, you are invited to visit the Debian BR web site at debian-br.sourceforge.net. Do you know of other projects like this in other countries?
Ben Collins: I had not heard of it before. I think it is an excellent thing, much like the JP and similar projects. The more people we can get Debian to, the better. I'll have a look at the web site, and I wish the project the best of luck in its efforts.
OLinux: Do you consider Debian the leading GNU/Linux distribution in the world?
Ben Collins: By many measures, yes. However, I measure Debian on what's important to me, and am well aware that it lacks in areas that are important to others. A recurring topic is our installer. I'm happy to report that a new modular installer is being worked on, and so far it appears that it will exceed all of the goals that the group set for itself. It will probably not be done in time for woody, though.
OLinux: How is Debian's relationship with the GNOME Foundation? And with the KDE league?
Ben Collins: I'm not able to answer this question. I do know that we have some developers that work closely with both projects, and that GNOME and KDE are both fully integrated within our distribution.
OLinux: What are the advantages and what differentiates Debian from other popular distributions as SuSE or Red Hat, besides being a non-commercial distribution?
Ben Collins: I think we have three major strengths. One is our development model. No other distribution has all of its developers available first hand to take bug reports and suggestions from its user base.
No other distribution has as extensive a set of policies that allows it to distribute as many packages as we do, all integrated into our distribution, with easy installation.
No other distribution offers the ease of upgrades that we do. There have been reports of people being able to effortlessly upgrade from as far back as Debian 1.3 (bo) to the current stable 2.2 (potato) (note, this is a libc5 to libc6 upgrade path). Debian not only supports, but guarantees upgradability. It is one of our primary goals.
OLinux: How do you describe Debian Project achievements and what are the prospects and goals for the next years?
Ben Collins: The fact that Debian is still around, and is still growing, is a major achievement. We have not lost sight of our primary goals: to produce a free and stable distribution. Over the next few years I hope to see Debian prosper from commercial acceptance via companies like Progeny. I'm hoping that vendors will see us as a more viable solution for desktops and pre-installed systems.
OLinux: Give us some predictions about the growth of the GNU/Linux operating system for the next 2, 5 and 10 years.
Ben Collins: That's hard to predict. Unfortunately, as free as it may be, GNU/Linux is directly affected by the economy. The current trend of Internet companies starting to fail will likely scare away a lot of the venture capital that has flooded Linux in the past few years. Hopefully this will be a good thing, and the Linux companies will have to start working to make their money, and not ride the wave of hype. I would guess that over the next 2 years, Linux's hype will settle down, and people will start taking it more seriously (not just those in-the-know).
In 5 years, I suspect that GNU/Linux will be as common as MacOS, Solaris and Windows in the home. In 10 years, who knows. That's like an eternity to the technical world, so Linux may be obsolete by then.
OLinux: What improvements does GNU/Linux need in order to be more widely deployed in the corporate market?
Ben Collins: An accepted, easy to use interface. KDE and GNOME are working toward this with great strides. But even with a good interface, getting accepted and being "common" take far longer than a development cycle.
OLinux: Debian is definitely the best Linux distro, but its hardware configuration interface and its installer are not so friendly. Is the Debian Project going to focus on better interaction with the end user, or will it remain a distribution for system administrators only?
Ben Collins: Yes, the debian-installer group is working very hard on this. We do not want to remain a niche distribution only used by administrators and hard-core hackers.
#!/usr/bin/perl
# ftp://ftp.tardis.ed.ac.uk/users/ajcd/psutils.tar.gz
# http://www.dcs.ed.ac.uk/home/ajcd/psutils/
# cp Makefile.unix Makefile
# ln -s /usr/bin/perl /usr/local/bin/perl
# mkdir -p /usr/local/share/man/man1
# /usr/local/bin/psbook
# system ("lynx --source ftp://ftp.tardis.ed.ac.uk/users/ajcd/psutils.tar.gz > /tmp/psutils.tar.gz");
# system ("cd /tmp; tar -zxvf psutils.tar.gz; cd psutils; cp Makefile.unix Makefile");
# system ("ln -s /usr/bin/perl /usr/local/bin/perl; mkdir -p /usr/local/share/man/man1");
# system ("cd /tmp/psutils; make; make install; ln -s /usr/local/bin/psutils /usr/bin/psutils");
# Ignore the lines above, unless you don't have psutils.
# I keep the lines above just so I remember how I installed psutils.

my $TempFile1 = "/tmp/HOWTO_Convert_1.ps";
my $TempFile2 = "/tmp/HOWTO_Convert_1.pdf";
my $SourceDir = "/root/HOWTO";
my $Destination = "/root/HOWTO_Books";
my $ZippedPDF = "/root/HOWTO_books_pdf.tgz";
my $ZippedPS = "/root/HOWTO_books_ps.tgz";

# Create the destination directory if it doesn't exist yet.
if (!(-d $Destination)) {system "mkdir $Destination";}

print "Downloading HOWTOs from http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/ps/Linux-ps-HOWTOs.tar.gz\n";
system ("lynx --source http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/ps/Linux-ps-HOWTOs.tar.gz > $SourceDir/Linux-ps-HOWTOs.tar.gz");
system ("cd $SourceDir; tar -zxvf Linux-ps-HOWTOs.tar.gz");

my @Files = <$SourceDir/*.ps.gz>;
foreach my $File (@Files)
  {
  # Rearrange the pages into book (signature) order and print 2-up.
  my $command = "gunzip -c $File | /usr/bin/psbook -s4 | mpage -2 > $TempFile1";
  print "Executing psbook and mpage on $File\n$command\n";
  system ($command);
  $command = "ps2pdf $TempFile1 $TempFile2";
  print "Executing ps2pdf\n$command\n";
  system ($command);
  # Derive the output filenames from the source filename.
  my (@Temp) = split(/\//,$File);
  my $NamePDF = pop @Temp;
  my $NamePS = $NamePDF;
  $NamePDF =~ s/\.ps\.gz$/\.pdf/;
  $NamePS =~ s/\.ps\.gz$/\.ps/;
  my $NewPS = "$Destination/$NamePS";
  my $NewPDF = "$Destination/$NamePDF";
  system ("mv $TempFile2 $NewPDF");
  print "Created the book-formatted HOWTO, $NewPDF\n";
  system ("mv $TempFile1 $NewPS");
  print "Created the book-formatted HOWTO, $NewPS\n";
  }

print "Creating zip files $ZippedPDF and $ZippedPS\n";
system ("tar -zcvf $ZippedPDF $Destination/*.pdf");
system ("tar -zcvf $ZippedPS $Destination/*.ps");
Mark works as an independent consultant donating time to causes like GNUJobs.com, writing articles, writing free software, and working as a volunteer at eastmont.net.
One thing that bothered me with some of the older versions of GDM was the fact that I couldn't place the login window anywhere I wanted on the screen. With the latest version, it was very easy. Also, I couldn't figure out how to put logos or pictures of people in the login window, and now I have figured that out. The latest version of GDM is really nice and I have figured out how to configure it the way I wanted it to be configured, so I finally decided to write this article.
The danger of not using RPMs to install GDM is that I am installing a newer version of GDM on top of a GDM version which was installed by RPMs. This could cause problems if I decide to use an RPM in the future. I found an RPM version at ftp://ftp.gnome.org/pub/GNOME/stable/latest/redhat/i386/Base/gdm-2.2.0-1.i386.rpm in case you don't want to install it manually.
Initial Steps
Three additional Steps
Browser=true
SetPosition=true
PositionX=100
PositionY=100
Exclude=bin,daemon,adm,lp,sync,shutdown,halt,mail,news,uucp,operator,nobody,gdm,postgres,pvm,otherlogin
GlobalFaceDir=/usr/share/faces/

Also, here was my Init/Default script,
#!/bin/sh
/usr/X11R6/bin/xsetroot -solid "#363047"
xsri -geometry +5+5 /etc/X11/xdm/Logo2.png
xsri -geometry +400+5 /home/mark/public_html/wedding/wed2.jpg
xsri -geometry +700+500 /home/mark/public_html/wedding/walk.jpg
xsri -geometry +200+500 /home/mark/public_html/wedding/kiss.jpg
xsri -geometry +5+175 /home/mark/public_html/kiss.gif
xsri -geometry +5+500 /usr/local/apache_gnujobs/htdocs/images/zing.png
xeyes -geometry +825+5 &
xclock -digital -geometry +825+125 -update 1 &
xtriangles -geometry +800+300 &

In order to get logos or pictures of people on the GDM screen, I had to make the name of the image exactly the name of the username and put it in /usr/share/faces/. To test this, I took my logo for ZING and copied it to "/usr/share/faces/root" like
cp /usr/local/apache_gnujobs/htdocs/images/zing.png /usr/share/faces/root

Notice that there is no extension.
I would have liked to compare KDM with GDM, but I wasn't able to easily find a recent web page for KDM. I am also waiting until I install the latest version of KDE before I mess around with KDM anyway.
Mark works as an independent consultant donating time to causes like GNUJobs.com, writing articles, writing free software, and working as a volunteer at eastmont.net.
CVS is a version control system. Using it, you can record the history of your source files. CVS helps if you are part of a group of people working on the same project, sharing the same code. Several developers can work on the same project remotely using CVS's client-server model, in which the code exists on a central server; each programmer gets the source onto his local machine from the CVS server (checkout) and saves it back to the CVS server (checkin) after development. Each time a programmer checks in his new code, the difference is saved as a new version rather than overwriting the previous version. This allows the server to recreate any previous version upon request, although by default it distributes the latest version.
This article explains how to use CVS in client-server mode and get the most out of it.
You can find CVS in your Linux distribution or get the source from http://www.cvshome.org/downloads.html
The home page for CVS is http://www.cvshome.org.
The CVS repository stores a complete copy of all the files and directories which are under version control. Normally, you never access any of the files in the repository directly. Instead, you use CVS commands to get your own copy of the files into a working directory, and then work on that copy. When you've finished a set of changes, you check (or commit) them back into the repository. The repository then contains the changes which you have made, as well as recording exactly what you changed, when you changed it, and other such information.
Creating a Repository
To create a repository, run the cvs init command. It will set up an empty repository in the CVS root specified in the usual way (here with the -d option).
cvs -d /usr/local/cvsroot init
Here /usr/local/cvsroot will become the repository.
CVSROOT environment variable
Set the CVSROOT environment variable in your shell startup script. For instance, in ~/.bashrc:
$ export CVSROOT=:pserver:[email protected]:/usr/local/cvsroot
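With CVSROOT exported, the -d option can be dropped from all subsequent commands. Here is a minimal sketch, using a made-up server name and module:

$ export CVSROOT=:pserver:user@cvs.example.com:/usr/local/cvsroot
$ cvs login                # prompted for the password once
$ cvs checkout someproj    # no -d needed from here on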
Backing up the Repository
There are a few issues to consider when backing up the repository:
Remote Repositories
Your working copy of the sources can be on a different machine than the repository. Using CVS in this manner is known as client/server operation.
Setting up the server:
Put the following entry in /etc/inetd.conf on the server:
2401 stream tcp nowait root /usr/local/bin/cvs cvs -f --allow-root=/usr/local/cvsroot pserver
If your inetd wants a symbolic service name instead of a raw port number, then put this in `/etc/services':
cvspserver 2401/tcp
and put cvspserver instead of 2401 in `inetd.conf'.
After making your changes, send a HUP signal to inetd (e.g. killall -HUP inetd).
Password authentication for remote repository
For remote password authentication, create a file `$CVSROOT/CVSROOT/passwd'. The file will look like:
anonymous:
kapil:1sOp854gDF3DY
melissa:tGX1fS8sun6rY:pubcvs
The password is in Unix encrypted form. The first line in the example will grant access to any CVS client attempting to authenticate as user anonymous, no matter what password they use. The second line will grant access to kapil if he supplies his plaintext password.
The third will grant access to melissa if she supplies the correct password, but her CVS operations will actually run on the server side under the system user pubcvs.
Note: CVS can be configured not to check the real Unix passwd file (i.e. /etc/passwd) for CVS authentication by setting SystemAuth=no in the CVS `config' file ($CVSROOT/CVSROOT/config).
Using the client with password authentication
You have to log in to the CVS server the first time:
cvs -d :pserver:[email protected]:/usr/local/cvsroot login
Then you can use all the CVS commands on the remote machine:
cvs -d :pserver:[email protected]:/usr/local/cvsroot checkout someproj
Read only repository access
It is possible to grant read-only repository access to people using the password-authenticated server. There are two ways to specify read-only access for a user: by inclusion, and by exclusion.
"Inclusion" means listing the user in the `$CVSROOT/CVSROOT/readers' file, which is simply a newline-separated list of users. Here is a sample `readers' file:
kapil
yogesh
john
(Don't forget the newline after the last user.)
"Exclusion" means listing everyone who should have write access. If the file $CVSROOT/CVSROOT/writers exists, then only those users listed in it will have write access, and everyone else will have read-only access. The `writers' file has the same format as the `readers' file.
Setting up the files in repository
If the files you want to install in CVS reside in `someproj', and you want them to appear in the repository as `$CVSROOT/someproj', you can do this:
$ cd someproj
$ cvs import -m "Imported sources" someproj vendor rel1-1

Here the string `vendor' is a vendor tag, and `rel1-1' is a release tag.
CVS locks in repository
Any file in the repository with a name starting with `#cvs.rfl.' is a read lock. Any file in the repository with a name starting with `#cvs.wfl' is a write lock. The directory `#cvs.lock' serves as a master lock. That means one must obtain this lock first before creating any of the other locks.
To obtain a read lock, first create the `#cvs.lock' directory. If it fails because the directory already existed, wait for a while and try again. After obtaining the `#cvs.lock' lock, create a file whose name is `#cvs.rfl.' followed by information of your choice (for example, hostname and process identification number). Then remove the `#cvs.lock' directory to release the master lock. Then proceed with reading the repository. When you are done, remove the `#cvs.rfl' file to release the read lock.
To obtain a write lock, first create the `#cvs.lock' directory, as with a read lock. Then check that there are no files whose names start with `#cvs.rfl.'. If there are, remove `#cvs.lock', wait for a while, and try again. If there are no readers, then create a file whose name is `#cvs.wfl' followed by information of your choice (for example, hostname and process identification number). Hang on to the `#cvs.lock' lock. Proceed with writing the repository. When you are done, first remove the `#cvs.wfl' file and then the `#cvs.lock' directory.
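As an illustration only -- CVS performs this dance internally, so you would need it just for custom scripts that read the repository directly -- here is a minimal shell sketch of the read-lock procedure, assuming REPODIR holds the repository directory in question:

while ! mkdir "$REPODIR/#cvs.lock" 2>/dev/null; do
    sleep 5                                 # master lock taken; wait and retry
done
touch "$REPODIR/#cvs.rfl.$(hostname).$$"    # create our read lock
rmdir "$REPODIR/#cvs.lock"                  # release the master lock
# ... read from the repository ...
rm "$REPODIR/#cvs.rfl.$(hostname).$$"       # release the read lock

Note that mkdir either succeeds or fails atomically, which is what makes it usable as the master lock test.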
Symbolic revisions using tags in CVS
The release numbers of final software releases are different from revisions in CVS; the revision numbers might change several times between two releases. You can use the tag command to give a symbolic name to a certain revision of a file.
Change to the working directory and issue the following command for tagging:
$ cvs tag rel1-1 file.c
This command will tag the file "file.c" as release 1.1
$ cvs tag rel1-1 .
This command will tag all the files under the current directory recursively as release 1.1
You can use the `-v' flag to the status command to see all tags that a file has, and which revision numbers they represent by issuing the following command:
$ cvs status -v file.c
Now you can checkout any revision of a module by using the following command:
$ cvs checkout -r rel1-1 module1
here "module1" is the name of the module. The -r flag with checkout option makes it easy to retrieve the sources that make up revision 1.1 of the module `module1' at any time in the future.
File status
The cvs status command reports on the state of your files. You can get the status of files with:
$ cvs status [options] files
Bringing a file up to date
When you want to update or merge a file, use the update command. This brings into your working copy the changes others have recently committed. Your modifications to a file are never lost when you use update. If no newer revision exists, running update has no effect. If you have edited the file, and a newer revision is available, CVS will merge all changes into your working copy.
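For example, to bring a single file up to date (hello.c is just the file name used in examples elsewhere in this article):

$ cvs update hello.c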
Resolving Conflicts
If two people simultaneously make changes to different parts of the same file, CVS is smart enough to merge the changes itself. But if two people make changes to the same part of a file, CVS cannot tell what the final result is supposed to be, so it gives up and whines, "Conflict!" Conflicts arise when one developer commits a change and a second developer, without running cvs update to receive the first developer's change, tries to commit his own incompatible change. Resolving conflicts can take hours or even days. In this section, I will explain how to resolve source conflicts.
When you enter the cvs commit command to automatically upload all the files you have changed or added to a project, the CVS repository server may inform you that your locally-edited files are not up-to-date with the server or that you need to manually merge one or more files with newer versions that have already been uploaded to the repository by some other developer. Here's a typical warning message that occurred during a CVS commit process:
$ cvs commit
cvs commit: Examining .
cvs commit: Up-to-date check failed for `andy.htm'
cvs commit: Up-to-date check failed for `sample.htm'
cvs commit: Up-to-date check failed for `index.htm'
...
cvs [commit aborted]: correct above errors first!
You can use the cvs update command to update your local project copy with the latest changes in the cvs repository. To update your entire working copy of the site, open a command prompt, change to the directory containing the project you're developing, and issue the command:
$ cvs update
This will update and automatically merge every file that has changed since you last copied over new files from the CVS repository. Line-by-line updates to individual text files (such as HTML files) can often be handled automatically. CVS will list for you any files that require your attention for manual editing and merging.
Automatic merge example:
Suppose you are editing some project file called "index.html" locally; when you try to commit that file to the CVS repository, CVS will give you the following error:
$ cvs commit index.html
cvs commit: Up-to-date check failed for `index.html'
cvs [commit aborted]: correct above errors first!
This happens because there is a newer version of the same file in the CVS repository. You should use the cvs update command to get the latest version from the CVS repository onto your local machine:
$ cvs update index.html
RCS file: /usr/local/cvsroot/index.html,v
retrieving revision 1.4
retrieving revision 1.5
Merging differences between 1.4 and 1.5 into index.html
M index.html
After the automatic merge you should check the merged copy to see if it is working properly. When you are satisfied with the local copy of "index.html", you can commit it to CVS:
$ cvs commit index.html
Checking in index.html;
/usr/local/cvsroot/index.html,v <-- index.html
new revision: 1.6; previous revision: 1.5
done
Manual merge example:
In some cases, your recent work on a file might be so different that CVS needs your manual intervention in order to integrate everyone's work and put it back into the repository.
$ cvs commit index.html
cvs commit: Up-to-date check failed for `index.html'
cvs [commit aborted]: correct above errors first!
Use the cvs update command to bring your local copy of the site up to date:
$ cvs update
cvs update: Updating .
RCS file: /usr/local/cvsroot/index.html,v
retrieving revision 1.5
retrieving revision 1.6
Merging differences between 1.5 and 1.6 into index.html
rcsmerge: warning: conflicts during merge
cvs update: conflicts found in index.html
C index.html
This time CVS was unable to merge the files automatically, so it created a special copy of the conflicting file in place of the original index.html. The file has marker lines to indicate the beginning and end of the conflicting region(s); e.g.,

<<<<<<< index.html
(your local changes)
=======
(the latest changes from the repository)
>>>>>>> 1.6

To resolve the conflict, simply edit the index.html file, replace the text between the markers with the correctly merged version, and test the result until it works. You should also delete the marker lines

<<<<<<<
=======
>>>>>>>

from the file. When you have finished correcting the file and have tested it, use the cvs commit command to put your latest copy of the file into the repository:
$ cvs commit
Checking in index.html;
/usr/local/cvsroot/index.html,v <-- index.html
new revision: 1.7; previous revision: 1.6
done
Watches (CVS communication)
CVS can function as a communication device as well as a record-keeper. A "watches" feature provides multiple developers working on the same project with a way to notify each other about who is working on what files at a given time. By "setting a watch" on a file or directory, a developer can have CVS notify her, by e-mail or some other method, if anyone else starts to work on that file.
To use watches you have to edit two files in the repository administrative area: the "$CVSROOT/CVSROOT/notify" file (which tells CVS how notifications are to be performed) and the "$CVSROOT/CVSROOT/users" file (which supplies external e-mail addresses). The best way to modify administrative files is to check out a copy from the repository, edit it, and then check it back in.
To specify e-mail notification, first uncomment the following line from "$CVSROOT/CVSROOT/notify" file:
ALL mail %s -s "CVS notification"
This command causes notifications to be sent as e-mail with the subject line "CVS notification".
Then you have to create/edit the file "$CVSROOT/CVSROOT/users". The format of each line in the users file is: CVS_USERNAME:EMAIL_ADDRESS. For example:
kapil:[email protected]
The CVS username at the beginning of the line corresponds to a CVS username in CVSROOT/passwd, or the server-side system username of the person running CVS. Following the colon is an external e-mail address to which CVS should send watch notifications for that user.
E-mail notification with logfile
CVS provides a feature of sending automated e-mail to everyone working on a project with a log message whenever a commit takes place. The program to do the mailing - contrib/log.pl in the CVS source distribution - can be installed anywhere on your system. You can also install it into "$CVSROOT/CVSROOT". You should change the following line in log.pl :
$mailcmd = "| Mail -s 'CVS update: $modulepath'";
Once you've set up log.pl, you can put lines similar to these into your `loginfo' file. The `loginfo' file is used to control where `cvs commit' log information is sent. You can find it in "$CVSROOT/CVSROOT".
projectteam1 CVSROOT/log.pl %s -f CVSROOT/commitlog -m [email protected]
projectteam2 CVSROOT/log.pl %s -f CVSROOT/commitlog -m [email protected]
The %s expands to the names of the files being committed; the -f option to log.pl takes a file name, to which the log message will be appended (so CVSROOT/commitlog is an ever-growing file of log messages); and the -m flag takes an e-mail address, to which log.pl will send a message about the commit. The address is usually a mailing list, but you can specify the -m option as many times as necessary in one log.pl command line.
Some commands related to setting up watches on files:
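The basic form watches a file for all three kinds of actions (edit, unedit, and commit):

$ cvs watch add hello.c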
If you only want to be notified about, say, commits, you can restrict notifications by adjusting your watch with the -a flag (a for action):
$ cvs watch add -a commit hello.c
Or if you want to watch edits and commits but don't care about unedits, you could pass the -a flag twice:
$ cvs watch add -a edit -a commit hello.c
Adding a watch with the -a flag will never cause any of your existing watches to be removed. If you were watching for all three kinds of actions on hello.c, running
$ cvs watch add -a commit hello.c
has no effect - you'll still be a watcher for all three actions.
To remove watches, run:
$ cvs watch remove hello.c
which is similar to add in that, by default, it removes your watches for all three actions. If you pass -a arguments, it removes only the watches you specify:
$ cvs watch remove -a commit hello.c
To find out who is watching files, run cvs watchers:
$ cvs watchers
$ cvs watchers hello.c

To find out who is editing files, run cvs editors:

$ cvs editors
$ cvs editors hello.c
Note: It is necessary to run "cvs edit" before editing any file for the watch feature to work. To make sure people do, CVS can remind them to use cvs edit, with the help of the watch on command:
$ cd project
$ cvs watch on hello.c
By running cvs watch on hello.c, kapil causes future checkouts of project to create hello.c read-only in the working copy. When someone else tries to work on it, he'll discover that it's read-only and be reminded to run cvs edit first.
$ cvs edit hello.c
Sometimes you need to revert to a previous version of your project. A project under CVS version control can quickly and conveniently revert to an earlier stage of its life. I will explain some of the common examples:
$ cvs checkout -D '1 year ago' preproject
Here preproject is the name of the project.
$ cvs checkout -r1.4 preproject
1.4 is CVS's revision number for that version.
Some common terms:
Import: This means taking an existing directory tree and copying it into the CVS repository, creating a new CVS project.
Commit: Apply all your changes to the CVS repository. Each changed file will be assigned a new CVS version.
Checkout: Get the working copy of files from cvs repository into the local directory.
Export: export is the same as checkout, except that export does not copy out the CVS administrative directories, so you cannot run CVS commands in the resulting tree. On the other hand, this is how you create your "final" copy for distribution (see the sketch after this list).
Upload: General term for Import or commit.
Download: General term for checkout or export.
Checkin: General term, same as commit.
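A quick sketch of the checkout/export distinction described above (the module name and tag are made up):

$ cvs checkout myproj           # working copy with CVS/ directories; you can commit from it
$ cvs export -r rel1-1 myproj   # clean tree for distribution; no CVS/ directories

Note that cvs export requires a -r or -D option to select what to export.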
Adding a file to the CVS repository "My_Files"
$ cvs add File3.txt
$ cvs commit
cvs add does not upload the file right away, but registers it to be uploaded at the next commit.
This invokes your default text editor and prompts you to enter a description of your changes. Save the file and quit the editor. CVS will then ask whether you want to continue; select the option to continue. Now you have uploaded a file to the CVS repository "My_Files".
Changing a file in the CVS repository "My_Files"
This can be done with cvs commit command. Let us add some content to the file File2.txt and then commit it to the cvs repository.
$ ls /var >> File2.txt
$ cvs commit
Removing files
To remove files from the repository, you run the cvs remove command on the desired filenames in your working copy. As a "safeguard", cvs remove will not work if the working copies of the files still exist.
Syntax: $ cvs remove [options] files
$ cvs remove file.html
cvs server: file `file.html' still in working directory
cvs server: 1 file exists; remove it first
$
To get around this, you may use the -f option with the cvs remove command or remove the file first and then execute the cvs remove command.
$ cvs remove -f oldfile.html
cvs server: scheduling `oldfile.html' for removal
cvs server: use 'cvs commit' to remove this file permanently
$ cvs commit
Or
$ rm File3.txt
$ cvs remove File3.txt
$ cvs commit
This will not delete the actual file from the CVS server yet; it simply makes a note to tell the server to remove these files the next time you commit your working copy of the project.
Removing directories
The way that you remove a directory is to remove all the files in it. You don't remove the directory itself: there's no way to do that. Instead you specify the `-P' option to cvs update or cvs checkout, which will cause CVS to remove empty directories from working directories. (Note that cvs export always removes empty directories.) Note that `-P' is implied by the `-r' or `-D' options of checkout. This way CVS will be able to correctly create the directory or not, depending on whether the particular version you are checking out contains any files in that directory.
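For instance, running

$ cvs update -P

in your working copy will prune any directories that have become empty.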
Creating the directory structure from number of files in the CVS repository
The cvs import command is used to put a project into the CVS repository.
$ cd source
Here `source' is the directory containing the files that you want to put into the CVS repository.
$ cvs import -m "Test Import" My_Files Revision1 start
The string `Revision1' is a vendor tag, and `start' is a release tag. "My_Files" is the name of the directory in the CVS repository. The -m option supplies the log message.
Get the working copy of files from CVS
Okay, now we want to download these files into a Working directory.
When we check out a package from CVS, it will create a directory for us. The parameter "My_Files" that we specified when we uploaded the files into CVS will be the name of the directory created for us when CVS downloads the package.
Now we need to get the cvs package.
$ cvs checkout My_Files
Downloading updates that other people make
If you have downloaded a package from a repository that someone else is maintaining, and you wish to download all the changes, execute the following command:
$ cvs update -dP
The "d" creates any directories that are or are missing. The "P" removes any directories that were deleted from the repository.
Viewing the difference
You can easily see the difference between two versions of a file using CVS.
$ cd project
$ cvs diff index.html
This command runs diff to compare the version of `index.html' that you checked out with your working copy. Here "project" is the name of the local project directory.
$ cvs diff -r 1.20 -r 1.21 hello.c
This command will show the difference between two versions of the same file.
The annotate Command
With annotate, you can see who was the last person to touch each line of a file, and at what revision they touched it. It gives you more information than the history command:
$ cvs annotate
View logs
$ cvs log -r 1.21 hello.c
This will show you the logs for hello.c version 1.21
Henner Zeller's CVSweb
It lets you browse the CVS repository with a web browser, and even shows the latest revision and log message for each file. It presents you with a web-based interface to browse any and all of the sites and projects you manage by CVS. You can get it from http://stud.fh-heilbronn.de/~zeller/cgi/cvsweb.cgi/
Martin Cleaver's CVSweb
It features capabilities for file upload as well as file browsing of CVS repository. You can get this software from http://sourceforge.net/projects/cvswebclient/
LinCVS
A CVS GUI client for Linux. It provides nice features and is easy to use. You can get it from: http://www.lincvs.org/
WinCVS
A CVS GUI client for Windows. It has many good features and I recommend it for Windows clients. You can get it from http://www.cvsgui.org/download.html
CVS Manual : http://www.cvshome.org/docs/manual/cvs.html
CVS Mailing lists: http://www.cvshome.org/communication.html
Kapil is a Linux/Unix and Internet security consultant. He has been working on various Linux/Unix systems and Internet Security for over three years. He maintains a web site (http://linux4biz.net) for providing free as well as commercial support for web, Linux and Unix solutions.
If you have an e-mail account, you are no doubt getting mail that you have not asked for, and do not want in your inbox - unsolicited e-mail (aka spam). What's Spam? In 3D "meatspace", it is a luncheon meat manufactured by Hormel Corp (which also owns http://www.spam.com). Spam on the net, though, is unsolicited, unwanted e-mail, frequently sent in bulk and advertising some commercial proposition. Most of the Spam you probably get, and what this article deals with, is UC/BE (Unsolicited Commercial and/or Bulk E-Mail).
If you have a linux (or *nix) box, you have a set of powerful tools to stop all this spam from cluttering your inbox. These tools are even more useful to you if you run a production mailserver and want to stop spam from reaching your users.
The three cardinal rules of spamfighting are:
Protect yourself and prevent spammers from harvesting your address. Don't expose your primary e-mail addresses where a spammer can get at it and add it to his list. This includes places like /., usenet, publicly archived mailing lists, web based bulletin boards - in short, anywhere online. Instead, follow one of these steps:
1. Use a "throwaway" address (say [email protected]) when posting. If you find that this address is getting spammed, you can just throw it away and switch to another address. To be on the safe side, when you are posting online, "munge" your address to something like [email protected]. Obviously, spammers (who use robots to crawl the web searching for mail ids and burn the entire thing into a CD) will not be able to mail you.
2. If you run your own domain, use "expiring" mail addresses - addresses which will be valid for a [week|month|year], and will then cease to exist. This address can be something like [email protected]. In case you don't have your own domain, heck, use [email protected] instead :)
3. Both these measures have a major drawback: you have to keep changing your e-mail address--faster than your girlfriend changes her hairstyle! :) If your ISP uses sendmail, you have another option - "plus" addresses.
Plus addresses are available with newer versions of sendmail (8.8 and above). Just add a plus sign and any string you want after the username and before the '@'--the mail will still be delivered properly. For instance, [email protected] will reach me - sendmail will ignore everything after the plus. For a (slightly old) FAQ on how to implement plus addressing in various MTAs (and how to use them in various mail clients) see http://www.faqs.org/faqs/mail/addressing/. (Note that some MTAs use a hyphen instead of a plus sign. We'll still call them plus addresses here--but maybe we should call them "minus" addresses instead! )
Obligatory disclaimer: before you start using plus addresses in your e-mail, send yourself a test mail with a plus address and check whether it reaches you.
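For instance (the address is entirely hypothetical -- use your own):

$ echo "testing plus addressing" | mail -s "plus test" yourname+test@your.isp.com

If the message lands in your inbox, you're in business.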
Plus addresses are useful because they reveal just where a spammer harvested your mail id from. For instance, if you subscribe to the Linux India Help mailing list, subscribe to it as [email protected] (and make sure you set your mail client to post messages to the list only using this identity or the list will bounce your mails). Both PINE and Mutt allow you to use different identities when posting (roles in PINE and folder hooks in mutt). Another advantage of plus addresses is that, if you start getting lots of spams to a plus address, you can just send all mails reaching that address to be read by Dave Null (aka /dev/null).
See Appendix #1 below for how to configure multiple identities (including plus addresses) in pine 4.x and Mutt.
You can do this at the MTA level and by running Procmail filters. If your remote mailbox gives you a unix shell account, run the filters there instead of on your desktop linux box. Naturally, for the MTA level config / patching, you have to be root :)
Several procmail recipes are available for you to trap and dev/null (or even complain about) most of the spam you get. The most popular one is Spambouncer by Catherine Hampton. Download for free at http://www.spambouncer.org. Another excellent page is maintained by Concordia University at http://alcor.concordia.ca/topics/email/auto/procmail/spam/. You can also check out SpamDunk by Walt Dnes.
As most linux boxes come installed with sendmail, I will go into slightly more detail here. Sendmail 8.8.7 (which came with Redhat 5.1) and above have spam blocking features, which allow you to deny mails from specific domains / domains blackholed in the MAPS RBL and other blackhole lists. In any case, upgrade to the latest version of sendmail available (currently 8.11.3, or the 8.12 betas).
Compiling sendmail is a really good idea (and is quite easy - with detailed instructions given in a file called INSTALL in the sendmail source tree). Or you can get prebuilt binaries in whatever format you want (rpm, deb and such).
Stock sendmail installs can reject SMTP connections from domains / addresses based on a database of filter rules - see /etc/mail/access (and /etc/mail/access.db, which you generate using makemap hash access.db < access).
/etc/mail/access can have e-mail addresses, whole domains or even specific ip addresses / ip blocks as keys.
[email protected]   550 Get lost - No spammers allowed
spammer.com       550 Go to hell
192.168.212       REJECT
would refuse smtp connections from [email protected], any user from spammer.com (or hosts within the spammer.com domain), and any host on the 192.168.212.* netblock. For further (extremely detailed) details, see Claus Assmann's page at http://www.sendmail.org/~ca/email/ (and the sendmail FAQ at http://www.sendmail.org/faq/ won't hurt either).
Test this by sending a test mail to yourself from that host and then downloading the message using fetchmail with the -v argument. This will allow you to monitor the SMTP transaction - when the FROM address is parsed and sendmail sees that you have blacklisted the address, fetchmail will flush and delete it. Obvious warning: never put a reject entry for your own mailhost, or any host you accept mail from using fetchmail, into your access db--you will lose mail if you do this.
You can also reject mail from all hosts listed in the MAPS RBL and other DNS based blackhole lists by enabling the dnsbl features in sendmail.mc and rebuilding sendmail.cf. See http://www.mail-abuse.org/rbl/usage.html for more details.
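As a sketch, the sendmail.mc lines involved look something like this (sendmail 8.10 and later; check the cf/README that ships with your version, as the feature had a different name before 8.10). Regenerate sendmail.cf and restart sendmail afterwards:

FEATURE(`dnsbl')dnl
dnl or, naming the blackhole zone explicitly:
FEATURE(`dnsbl', `blackholes.mail-abuse.org')dnl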
Oh yes - make sure you are not an open relay, which can be abused by spammers to relay their spam, leaving you with a clogged mailqueue, a mailbox full of thousands of bounces, angry flames from spammed people and possibly a listing in the RBL (if you are slow to fix it). See http://www.sendmail.org/tips/relaying.html and http://www.orbs.org/otherresources.html for more details.
Newer versions of sendmail don't make you an open relay - if you resist the temptation to configure sendmail using linuxconf (or most other auto-config tools). Create a sendmail.mc file and regenerate sendmail.cf. For example, see http://www.hserus.net/sendmail.html (part of my Dialup HOWTO at http://www.hserus.net/dlhowto.html).
See Appendix #2 below for antispam measures (including closing open relays) in other MTAs
Spam, being the insidious, creeping slime that it is, will sooner or later slip through all your filters and enter your mailbox. A linux box gives you all you need to track the spammer down - basic *nix tools like whois, nslookup, traceroute, and the best one of all: dig. The best solution is to spare a little time (less than five minutes) to send out a few complaints to the spammer's webhost, his ISP, his freemail provider - anyone and everyone who can do serious damage to the spammer. These tools are also available on the web at http://www.samspade.org.
See Appendix #3 below for more links on tracing and reporting spam
Roles in PINE - with PINE 4.x and above, press S (Setup), then R (Roles). Add as many roles as you feel like and switch between them using # (the hash character), or choose between different roles when replying to an e-mail.
Roles in Mutt - use folder hooks, so that all outgoing mail from a particular folder has its From: field set to [email protected]:
folder-hook linux "my_hdr From: [email protected] (My Linux Account)"
set envelope_from    # sets the envelope sender, which is what's checked
                     # by the list server (mutt 1.2.x and above)
Procmail recipe to /dev/null all mail sent to a tagged address that attracts too much spam:
# If mail is sent to [email protected], trash it
:0:
* ^TO_[email protected]
/dev/null
qmail: see http://www.summersault.com/chris/techno/qmail/qmail-antispam.html for a detailed account of qmail's anti-spam features (there are several).
Other MTAs: Debian comes with Exim, and there are other *nix MTAs as well. See http://www.mail-abuse.org/tsi/ar-fix.html (and the websites of each MTA) for a comprehensive howto.
is President of the Indian chapter of CAUCE, an international organization of people dedicated to fighting Spam. He is webmaster of KCircle, one of the world's most popular trivia quiz resources.
Author bios are now at the bottom of the corresponding article. This was suggested by , and we decided we like the idea. What do you readers think?
Answered By Ben Okopnik, Mike Orr, Heather Stern
I wanted to ask if you service yamahe music equipment.
[Ben] Hey, LG does mention "Yamaha" a dozen times in past issues (the sound card, obviously). Maybe the guy was getting desperate and trying every source... That is pretty wild, though. I'm still waiting for "Dear Earthlings: I just installed Windows on my UFO, and now I can't get back in..."
[Mike] "Hey Earthlings, we just installed something called Windoze En-Tee on our flying saucer, and it made all our monitors turn blue. There's a message in white letters on the screen, but we don't understand the language. Please help URGENTLY as our spacecraft is out of control and is locked on a crash course with Earth."
[Heather] Fellow-being: What you need is to install something else, quick. Since your saucer can run Windoze (also called MSwin or windows or wind*ws) I recommend "ZipSlack" http://www.slackware.com/zipslack/getzip.php. It can load quickly onto the "FAT" filesystem MSwin uses, and once you have successfully launched that you should stop crashing... You may want something more well-tuned to your saucer once you have that going.
According to my research (slim pickings - most of our movies about aliens don't describe their software, but one notes you are able to run some of our virus software), your native operating system apparently most closely resembles something here called "MacOS". This is at least part of the problem: nearly any earthling knows that the MacOS and Windows vendors are at war with each other.
Unfortunately MacOS is proprietary, so getting you a working copy without some Earth hardware to go with it may be a problem, especially if you have no Earth currency aboard.
Fortunately, we Linuxers can recommend either Yellow Dog Linux or LinuxPPC 2000, as well as Debian. I can't say which will have the fastest install - a Mac-using friend highly recommends the first... Debian is widely mirrored, so you should be able to reach a mirror site no matter which of our land masses you are presently nearest. To save you time, the web link you need is http://cdimage.debian.org/ftp-mirrors.html. Normally they discourage getting ISO images directly like that, but then they expect you to have a stable system to fetch with. I hope you will have no problems whatsoever creating discs... By the way, most earthlings don't understand the funny messages generated by those blue screens either. Luckily I can assure you they don't help fix the problem...
RFC 3092 muses on the etymology of 'foo' and 'bar'. Among other things, it says the "wildly popular" Smokey Stover comic strip of the 1930s by Bill Holman "featured a firetruck called the Foomobile that rode on two wheels."
The RFC also has a table of which other RFCs mention "foo", "bar" or "fubar".
Rory Krause and I came up with this one:
the sysadmin's dance: do the Buggy Boogie.
CueJack is a Windows application that lets you scan a product with a :CueCat scanner, then displays a web page with "alternative information" about the product's company. As you can guess, the "alternative information" is stuff the company doesn't want you to know. "This could be information about corporate abuse, boycotts against the company, even how much money the company is making, their corporate image as presented to shareholders, etc." Courtesy Slashdot.
Miscellania: The program was renamed from CueHack because another program already had the same name. The author is working on a Linux version but says there are technical difficulties.
... helps Businesses to eliminate the need of hiring telemarketers Automated initial customer application process and many more features......... You will smile all the way to the Bank!!
Automatically calls........ To market products and to make announcements. To confirm preset appointments, prescheduled meetings, and conferences. Our CTI software can be used by businesses and services by calling sequentially or randomly. Automatically dials up to 2,000 - 10,000 prospects per day without human interference. When our CTI software calls it can simply leave a message or it can ask for a response. You may obtain responses by recording their voices, asking them to press a key, to respond to choices or transferring to a live operator. Just record your messages, select which group ( Data Bases) you want to call , when you want to start and stop, and then let our CTI software got to work calling everyone. You will save tremendous time and get results very fast ! without increasing your overheads or hiring extra help.
[ Your Editor got an obnoxious phone call at home recently from one of these machines. The recorded message said, "Please hold until a representative can get to you." Click! ]

Subject: BOUNCE [email protected]: Message too long (>40000 chars)

[ It's nice to know the TAG spamfilter is working. -Mike. ]
Regarding the Nigerian money scam in last month's Not Linux.
I've been receiving faxes just like this for a few years at work; the scam is worse than it looks: they *have* been successful on several occasions - all UK (if not world) banks know not to let their customers get involved.
What basically happens is: A N Idiot agrees to the deal, signs the papers, and money appears in his/her account. A N Idiot then transfers most of the money to another account (the scammer's). The money coming into the account then never arrives - the originating bank denies knowledge or whatever - but A N Idiot's bank has, in the meantime, sent the money on to another bank. The first thing the bank does is debit A N Idiot's account... A N Idiot is left holding the can for an awful lot more money than they thought they'd ever see.
Holding the can for more money than you thought you'd ever see is probably better than holding the can for the first million you've just made when it's all you have: if you're 999,000 away from paying back 1,000,000 they aren't really going to try to get it back, but if you're nearly there they'll clean you out and then lock you up.
These scammers are real bastards; if I didn't know it happened, I wouldn't believe people could be so bad to other people.
Anyway, I just thought I'd let you know that this scam has worked and how it worked (roughly and AIUI).
Keep up the good work with the Gazette.
From: James Suttie
Date: Fri, 20 Apr 2001 22:36:48 +0100
Keep up the good work with Linux Gazette - here's one of many links to the African spam scam! http://www.state.vt.us/atg/NIGERIA.htm
Are you planning to rent a Limousine, Sedan or a Private car for your Teen Prom Ceremony this session?
ALWAYS SEND $5 CASH (U.S. CURRENCY) FOR EACH REPORT
CHEQUES NOT BE ACCEPTED
ALWAYS SEND YOUR ORDER VIA FIRST CLASS MAIL

Make sure the cash is concealed by wrapping it in at least two sheets of paper. On one of those sheets of paper, include: (a) the number & name of the report you are ordering, (b) your e-mail address, and (c) your name & postal address.
Free Leads Daily - Spam FREE! No Cost to you!
YOU CAN make over a half million dollars every 4 to 5 months from your home for a one time investment of only twenty five U.S. Dollars.
Subject: BUSINESS OPPORTUNITY EXTRAORDINAIRE!!
I understand you are seeking information about home based business opportunities.
--LEGITIMATE online business, which is SUCCESSFUL and GROWING.
--PERFECT for someone who has VERY LITTLE TIME to invest or for someone who LOVES being online!
Please email me at the address below to receive FREE VITAL INFORMATION (You'll very well Kick Yourself and your Modem if you don't)
Our research indicates this information may be of interest to you.
Hello gazette
WOW! This is absolutely amazing! Now you can put money in your pocket at warp speed using the internet! We're not talking weeks or even days, but within HOURS!! Wouldn't you like to be $5,000 richer by the day after tomorrow? Then you can do it again as often as you like - even every day! For all the fantastic details, send a blank email to: [address]
[Wow, something that's actually relevant to LG -Mike.]
Happy Linuxing!
Michael Orr
Editor, Linux Gazette,