Linux Gazette, a member of the Linux Documentation Project, is an on-line WWW publication that is dedicated to two simple ideas:
TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version. Our thanks go to Tushar Teredesai for pasting together the HTML version.
Got any great ideas for improvements? Send your suggestions to the Editor.
This page written and maintained by the Editor of Linux Gazette,
Date: Thu, 31 Oct 1996 20:11:37 -0500 (EST)
Subject: Re: Linux Gazette Issue 11
From: Elliot Lee, [email protected]
Nice job, as always! :-)
-- Elliot, [email protected]
(Thanks! --Editor)
Date: Fri, 1 Nov 1996 10:49:21 -0600 (CST)
Subject: Search Engine
From: "Dan Crowson" [email protected]
Organization: CMS Communications, Inc.
Hello:
What kind of search engine are you using for the Linux Gazette WWW server? Is this a Linux-based engine?
Thanks,
Dan
(Nope. It just builds on Linux --Editor)
Date: Mon, 4 Nov 1996 17:24:30 -0500
Subject: Comments on Issue #11
From: "R. Frank Louden" [email protected]
I am always glad to see another issue of LG. Thank you for taking the time to compose it. One comment I'd like to make: the most recent issue (#11) is difficult for me to read on the spiral-binding background. The text lies close enough to the left edge of the page that it is almost hidden in places.
I may be one of a dying breed but I choose to use Mosaic and wish others would consider that MS and Netscape do NOT adhere to the HTML specs and are fragmenting the standards. I note that NCSA is working on a new version that will provide support that is not currently found in the version I use. I am at this moment using an unsupported version 2.7b5 (it's kinda buggy) but when it works it allows me to see the background you have used.
While whirly-gigs and gewgaws are nice, some of us are still not able to upgrade hardware at the whim of the industry and need to have some consideration from those who sponsor WWW HTML documents. I have accessed pages that are completely illegible (with my old Mosaic) and others (with a more up-to-date browser) that take prohibitively long times to download. There IS something to be said for standards.
Thanks again for the Gazette! It is great!!!
(There may be more than one problem here. First off, if you are using a mirror site, the problem is my fault. Somehow, when building the tar file for the mirror sites, a gif that was integral to the notebook motif -- it moved the print away from the spiral -- was left out. I am in the process of notifying the mirror sites where the missing file can be downloaded. The notebook spiral was put in using "tables", which is an HTML standard. Here at SSC we too believe in following HTML standards. In fact, the program that we use to push things to the web checks that the HTML conforms.
I have worried that by adding more graphics we might be causing problems with download times. However, we also would like to keep LG looking good, so we thought we'd go ahead and see what kind of comments we get. So far it's tied: one reader likes the spiral, and you are against it. BTW, if you are accessing LG through a mirror site, try the main site and see if it does better for you (http://www.ssc.com/lg).
Glad you like LG, I certainly have fun putting it together. --Editor)
Date: Fri, 01 Nov 1996 13:32:44 +1100
Subject: http://www.redhat.com/linux-info/lg/issue11/wkndmech.html
From: Ken Yap [email protected]
Hi, Like your Linux Gazette, but some GIFs on the page are not displaying. Path problem?
Thanks,
Ken
(John Fisk forwarded your mail to me. In building the tar file for the mirror sites, some files got left out. I have furnished an updated file. Sorry about that. --Editor)
Date: Mon, 04 Nov 1996 12:09:22 -0700
In Issue 11 there is an incorrect link.
On the page: http://www.ssc.com/lg/issue11/lg_tips11.html#xdm
The link currently is:
http://www.ssc.com/lg/issue11/alienor.fr/~pierre/index_us.html
But should be:
http://alienor.fr/~pierre/index_us.html
Thanks for a great 'zine! :-)
kb
(Got it fixed. Thanks for letting me know. --Editor)
Date: Mon, 4 Nov 1996 22:35:04 +0200 (EET)
Subject: Re: Linux Gazette Issue 11
From: Lialios Dionysios [email protected]
Hello, this is Dennis from Greece.
Well this time I managed to download the whole thing so now I have a full mirror. The only problem is that I didn't get (or I don't have) the searchbtn.gif and the htsearch.cgi that are used for the search engine.
Did I do something wrong, or am I missing something I should have?
Thank you in advance.
Dennis
(No, you did nothing wrong. I was so excited to have the search engine, I forgot that the mirrors wouldn't have the proper databases. Since these databases are very big and cover all of the SSC site, we have changed the links for the database so that they always refer back to the SSC site rather than to a relative address pointing at the mirror site. The updated front-page file is in the update tar file along with the missing files. Let me know if it works for you. --Editor)
Date: Tue, 5 Nov 1996 09:01:44 -0800 (PST)
Subject: Request
From: [email protected] (Ivan Mauricio Montenegro)
It's the first time I've heard about Linux Gazette. I'd like to have all the issues, but the FTP addresses that appear on www.ssc.com give the horrible message "Login Error". What can I do?
Thanks!
Ivan Mauricio Montenegro
IEEE Student Branch, Vice-Chairperson
Distrital University, "Francisco José de Caldas", Bogota, Colombia
(Not sure why you are having a problem. I can tell that others are able to download from that address without trouble. Are you using your browser to point to that address, or logging on with anonymous ftp? I would suggest using an ftp mirror site that is closer to you. Unfortunately, Linux Gazette does not have a mirror site in South America at this time. There is one in Mexico, which is somewhat closer to you than Seattle.
At any rate if you go to the Mirror Site page (http://www.ssc.com/lg/mirrors.html) in Linux Gazette, and use the links there to go to one of the ftp sites (ours or one of the mirrors), you shouldn't be asked for a login. (I never have been and that's why I am a little confused by the message you are getting.) Let me know if you continue to have problems, and thanks for writing. --Editor)
Date: Wed, 06 Nov 1996 21:21:59 -0500
Subject: Great new look
From: "Alan L. Waller" [email protected]
Classy !!!
Al
(Thanks! Glad you like it. --Editor)
Date: Sat, 09 Nov 1996 11:03:26 -0800
Subject: Thank you
From: Innocent Bystander [email protected]
Thank you very, very much for providing LG to people such as I, who haven't become Unix gods yet. After reading my first issue, I am now a dedicated reader. What can *I* do to assist LG?
Innocent Bystander, [email protected]
San Diego, CA
(Send us your favorite tips and tricks. We love new contributors. Other than that, tell all your friends about us and promote Linux wherever you are. --Editor)
Date: Mon, 11 Nov 1996 08:08:08 -0600 (CST)
Subject: Re: Great Writing
To: "Lowe, Jimmy, D MSGT LGMPD" [email protected]
From: "John M. Fisk" [email protected]
Hello Jimmy!
Thanks so much for taking the time to write! I appreciate it. I honestly can't take the credit for this -- the kind folks at SSC (and the Linux Journal) offered to take over the management of the LG when its administrative upkeep just got to be too much. Marjorie Richardson is its capable new Editor.
I've taken the liberty of cc'ing a copy of this to her -- definitely deserves a pat on the back.
Thanks again and Best Wishes,
John
---------------------------
On Thu, 7 Nov 1996, Lowe, Jimmy, D MSGT LGMPD wrote:
> Hello John,
>
> I just wanted to say how glad I am to see the LG is being carried on
> in such a fine manner -- during the summer I began to worry a small but
> inspiring story was coming to an end. I think your writing is very
> entertaining and informative! I really appreciate your work and that of
> all the others in the Linux community and others (e.g. FSF).
>
> I hope to give back to this wonderful community of dedicated
> hobbyist/computer wizards once I get a little more up-to-speed.
>
> Hope you and your family are well,
>
> Jim Lowe, Montgomery AL
(I think John was being a little modest on this one. Jim was obviously glad to see John's new Weekend Mechanic column in Linux Gazette. I certainly was. Thanks a lot John. --Editor)
Date: Sat, 09 Nov 1996 11:30:32 -0500
Subject: Bravo!
From: "J.M. Paden" [email protected]
"TWDT" is most appreciated. Thanks for responding to your readers' requests.
Regards,
(You're welcome. We do aim to please. --Editor.)
Date: Wed, 20 Nov 1996 13:41:36 -0800
Subject: Link to other Linux pages
From: "J. Hunter Heinlen" [email protected]
Greetings....
I've gone through your title page for the Linux Gazette, and could not find a link to other Linux pages. Please put a link to a page with links to other commonly used Linux pages just below the Mirror Sites link, and ask those you link to for links back to you. This will make finding information much easier. Thank you for your time.
(I'm not sure which commonly used Linux pages you'd like to have links for on the LG front page. I have added a link to SSC's Linux Resources page at http://www.ssc.com/linux/. Why don't you look at that page and see if it has the links you want? Let me know what you think. Thanks for writing. --Editor)
Date: Tue, 19 Nov 1996 08:59:14 -0500
Subject: LG width
From: Gerr [email protected]
Hi there. Just a suggestion about the page (which looks ... wow ... compared to before). If you could, however, try to keep it no more than one page wide, that would be wonderful. I find myself having to use the arrows to see what's at the end of the lines on the right-hand side of the page. --
gerr@weaveworld
(Thank you for writing. I didn't realize it was running over. I use a rather large window for viewing it myself. The problem seems to be a combination of the spiral and the width of the text inside the <PRE> tags. Not sure what can be done, but we'll look into it. --Editor)
Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Laurie Lynne Tucker
dmesg | more -- Forgot the boot messages (or couldn't read them fast enough) at boot time? This command displays your boot information (a.k.a. the "kernel ring buffer"). For more info, see the man page.
Date: Fri, 08 Nov 1996 03:42:27 -0800
From: Igor Markov
Organization: UCLA, Department of Mathematics
Hi,
Here's my 2c console trick:
I put the following line into my ~/.xsession file:
nxterm -ls -geometry 80x5+45+705 -rv -sb -name "System messages" -fn 5x7 -T "System messages" -e tail -f /var/log/messages &

and this one into my .fvwm:

Style "System messages" NoTitle, Sticky, WindowListSkip

When I login, I have a small 5-line (but scrollable) window near the bottom left corner (you may need to change the numbers in -geometry) where system messages appear in a tiny font as soon as they are produced. This lets me see when my dial-up script succeeds, when someone logs into my computer via TCP/IP, when some system error happens, etc.
The .fvwm setup strips the title bar and does other useful things, but is not necessary.
Caveat: if you leave this window up for a long time, the cron job that trims /var/log/messages will change the file's inode number, and tail -f is bound to freeze. In 99% of cases this cron job wakes up at 2-3 am, so tail may freeze only overnight; log out and back in and everything will be OK. Any other ideas?
Igor
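One possible answer to Igor's closing question, as a hedged sketch: newer GNU versions of tail have a -F (--follow=name) option that reopens the file when its inode changes, so the watcher survives the nightly trim. Check your tail man page before relying on it.

```shell
# Same ~/.xsession idea as above, but with tail -F (GNU tail's
# --follow=name), which reopens /var/log/messages after the cron
# job replaces the file, instead of freezing on the old inode.
nxterm -ls -geometry 80x5+45+705 -rv -sb -name "System messages" \
    -fn 5x7 -T "System messages" -e tail -F /var/log/messages &
```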
Date: Sat, 2 Nov 1996 10:57:30 -0500 (EST)
From: Preston Brown
Regarding the recent message about not being able to get IP masquerading working with 2.0.xx kernels:
First, I *believe* that IP forwarding may have to be enabled for firewall support, but I can't say for sure. Suffice it to say that I have forwarding, firewalling, and masquerading all compiled into my kernel. I have a PPP link set up to the outside world, and my local ethernet subnet (192.168.2.x) is masqueraded so it can talk to the outside world as well.
ipfwadm is used to set up the information (I call it from /etc/rc.d/rc.local at boot time):
# ip forwarding policies
ipfwadm -F -p deny                  # default policy is to deny forwarding to all hosts
ipfwadm -F -a m -S 192.168.2.0/24   # add an entry for masquerading of my local subnet
modprobe ip_masq_ftp                # load ftp support module

An 'ipfwadm -F -l' (i.e. list all forwarding policies) yields:

IP firewall forward rules, default policy: deny
type   prot  source           destination  ports
acc/m  all   192.168.2.0/24   anywhere     n/a

indicating that all is fine. Your local subnet should now be able to talk to the outside world.
---
-Preston Brown, [email protected]
Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Laurie Lynne Tucker
A user's shell must be included in the list in /etc/shells for ftp to work! (By default, you get only /bin/sh and /bin/bash.)
--
laurie
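For reference, a typical /etc/shells after adding a few more shells might look like this (a hedged example; the exact paths depend on where the shells are installed on your system):

```shell
# /etc/shells -- login shells considered valid; ftpd (and chsh)
# consult this list.  One absolute path per line:
/bin/sh
/bin/bash
/bin/csh
/bin/tcsh
/bin/zsh
```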
Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Alex
In answer to the question:
What is the proper way to close and reopen a new /var/adm/messages file on a running system?

Step one: rename the file. Syslog will still be writing to it after the renaming, so you don't lose messages. Step two: create a new one. After re-initializing syslogd, it will be used. Step three: make syslogd use the new file. Do not restart it; just re-initialize it.
This should work on a decent Unix(like) system, and I know Linux is one of them.
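Alex's three steps might look like this on a Linux box (a hedged sketch: the pid-file location and the use of SIGHUP to re-initialize are assumptions about your particular syslogd; check its man page):

```shell
# Step one: rename -- syslogd keeps writing to the same inode,
# so no messages are lost.
mv /var/adm/messages /var/adm/messages.old
# Step two: create the new file with sane permissions.
touch /var/adm/messages
chmod 600 /var/adm/messages
# Step three: re-initialize (do not restart) syslogd so it reopens
# its log files; many syslogds do this on SIGHUP.
kill -HUP `cat /var/run/syslog.pid`
```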
Date: Thu, 14 Nov 1996 12:55:15 -0500
From: "Michael O'Keefe",
Organization: Ericsson Research Canada
G'day,
If you are going to use any of the attributes to the <BODY> tag, then you should supply all of the attributes, even if you supply just the default values.

The default <BODY> tag for Netscape, Mosaic and MSIE is:

<BODY BGCOLOR=#C0C0C0 TEXT=#000000 LINK=#0000FF VLINK=#0020F0 ALINK=#FF0000>
If you wish to slip the BACKGROUND attribute in there, by all means continue to do so, but for completeness (and good HTML designing) you should supply the other attributes as well.
The reason? You don't know what colors the user has set, and whether setting just a BACKGROUND image, or just a few of the colors, will render the page viewable or not. By supplying all of the values, even at their defaults, you ensure that everything contrasts accordingly.
-- Michael O'Keefe, [email protected]
Lived on and Rode a Honda CBR1000F-H
"It can't rain all the time" - The Crow - R.I.P. Brandon
Work: +1 514 345 7900 x5030  Fax: +1 514 345 7980
Home: +1 514 684 8674  Fax: +1 514 684 8674
My views are MINE ALONE.
Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Phil Hughes,
Here is a handy-dandy little shell script. It takes all the plain files (not directories) in the current directory and changes their names to lower case. Very handy when you unzip a bunch of MS-DOS files. If a name change would result in overwriting an existing file the script asks you before doing the overwrite.
--------------------------- cut here -----------------------------------
#!/bin/sh
# lowerit
# convert all file names in the current directory to lower case
# only operates on plain files--does not change the name of directories
# will ask for verification before overwriting an existing file
for x in `ls`
do
    if [ ! -f $x ]; then
        continue
    fi
    lc=`echo $x | tr '[A-Z]' '[a-z]'`
    if [ $lc != $x ]; then
        mv -i $x $lc
    fi
done
Date: 11 Nov 1996 18:54:02 GMT
From: Geoff Short,
To remove users do the following:
Simple setups:
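For a simple setup, the removal typically amounts to something like this (a hedged sketch assuming the standard shadow-suite tools are installed; 'olduser' and the paths are placeholders for your own system):

```shell
# Run as root: drop the account, then its files.
userdel olduser                  # removes the /etc/passwd entry
rm -rf /home/olduser             # the home directory
rm -f /var/spool/mail/olduser    # the mail spool
# Finally, look for stray files still owned by the removed UID:
find / -nouser -print
```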
----------------------------------------------------------------------------
Ever sit and watch ants? They're always busy with something, never stop
for a moment. I just can't identify with that kind of work ethic.
[email protected], [email protected]
http://kipper.york.ac.uk/~geoff
----------------------------------------------------------------------------
Date: Fri, 1 Nov 1996 09:58:52 -0800 (PST)
From: Steve Mann
Subject: Re: Root and passwords
If you have forgotten your root password:
root::0:0:root,,,:/:/bin/zsh

instead of something like this:
wimpy:GoqTFXl3f:0:0:Steve:/root:/bin/zsh
Steve
Steve M
CCIS: 529-7500 x7922 / Home: 722-1632 / Beeper: 1-800-502-2775 or 201-909-1575
Email: [email protected]
Ramapo College Apartments (Cypress Q): 934-9357
Date: 11 Nov 1996 16:33:02 GMT
From: Adam Jenkins,
Organization: CMPSCI Department, UMass Amherst
Having problems sending a talk request to an IP-address other than your own?
The solution is to reset your host name to your new dynamic address. You need to figure out what dynamic address you've been assigned. Then you can use the "host" command to find the symbolic name for it, and then use the "hostname" command to reset your machine's hostname. Like this:
host 128.119.220.0a
Prints out a name. Use it in:
hostname name.domain.edu
That's it. You need to be root to run the "hostname" command with an argument. If you're using pppd to get your connection, you can put all of this into your /etc/ppp/ip-up script -- read the pppd man page for more info -- so that it will get done automatically when you log in.
The reason you need to do this is because when talk sends a talk request, it also sends along what it thinks is the return address so that the remote talk can respond. So if your local machine has a fake address, the remote talk will get that as the return address and you'll never see the response.
I also saw a patched version of talk on sunsite somewhere, with some hack to make talk find your real address. But I like the "hostname" solution better because I've found at least one other program with the same problem, and the "hostname" solution fixes it too.
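Adam's /etc/ppp/ip-up idea can be sketched like this (hedged: pppd passes the local IP address as the script's fourth argument according to its man page, but the output format of the "host" command varies between versions, so the name extraction here is only a guess you will need to adapt):

```shell
#!/bin/sh
# /etc/ppp/ip-up -- pppd runs this as:
#   ip-up interface-name tty-device speed local-IP-address remote-IP-address
LOCALIP=$4
# Ask DNS for the name of our new dynamic address; keep the last
# field of the reply and strip any trailing dot (format varies!).
NAME=`host $LOCALIP | awk '{print $NF}' | sed 's/\.$//'`
# Reset the hostname so talk sends a usable return address.
[ -n "$NAME" ] && hostname $NAME
```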
Date: Tue, 12 Nov 1996 15:01:58 +0000
From: Dominic Binks
Organization: AEthos Communication Systems Ltd.
A couple of things that interested me about the article on tar. I'm sure that the idea is to introduce pipes, and some of the lesser known unix utilities (tr, cut), but
tar -tfvz file.tar.gz | tr -s ' ' | cut -d ' ' -f8 | less

can be written more concisely as

tar tfz file.tar.gz | less

Also, you can use wild cards, so

tar tfz file.tar.gz *README*

will list all READMEs in the file.
Finally, two last pieces of useful Unix magic.

tar cfv - dir

will tar the directory dir and send the output to standard output. One piece of magic liked by Unix gurus is

tar cfv - dir | (cd dir2; tar xf -)

which copies one directory hierarchy to another location.
Another really useful property of tar is that tarring up a DOS file system and moving it somewhere else will preserve *everything*. This means you can move your main DOS partition around, something that is very difficult to do with DOS itself.
One final tip for all Unix newbies: you've got a file whose name starts with a '-', and rm will not let you delete it.

rm -- 'file'

will get rid of it. In general, -- terminates option processing, so that everything following is passed directly to the executable.
Have fun
Dominic Binks
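Dominic's tar-pipe copy and the "--" trick can both be tried safely in a scratch directory. A small sketch (it assumes GNU tar and mktemp, which may not be present on every system):

```shell
# Replicate a directory tree with the tar pipe, then delete a file
# whose name starts with a dash by ending option processing with --.
cd `mktemp -d`
mkdir -p dir/sub dir2
echo hello > dir/sub/file.txt
tar cf - dir | (cd dir2; tar xf -)   # copy dir underneath dir2
cat dir2/dir/sub/file.txt            # prints: hello
touch -- -tricky                     # a file rm would mistake for options
rm -- -tricky                        # plain 'rm -tricky' would fail
```

Note that the copy preserves permissions and timestamps, which is exactly why the idiom also works for relocating a DOS partition's contents.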
Well, then today is your day! Linux Journal is seeking authors for our upcoming issues. We are particularly interested in authors willing to write about Perl, the Web and Linux. We have some general topics we are soliciting articles for listed on our web site at http://www.linuxjournal.com/wanted.html. Please don't let these ideas limit you - if you have a great article idea we'd love to hear about it.
For additional information:
Gary Moore, Editor Linux Journal,
http://www.ssc.com/LJ/
Debian Linux
SSC is also looking for an author to write a chapter on the installation of Debian Linux for the book Linux Installation and Getting Started by Matt Welsh. If you are interested, please send e-mail to .
Date: Wed 13 Nov 1996
Bill Machrone, vice president of technology for Ziff-Davis Publishing Co, recently wrote in an article about Linux that Netscape 3.0 and Java were not yet available for Linux. He was wrong. Such things happen. Big deal. Even magazines of the highest quality sometimes print things that are wrong. You tell them about it, and they print a correction in the next issue. That's the way professionals handle things.
That's not what some Linux people did, however. Instead, they flamed him, in private and in public. That's stupid. They urged others to also send flames to Machrone, which is worse.
Things wouldn't be so bad, except that now we have the Internet, which allows just a few idiots to completely ruin the reputation of Linux.
Please, if you want to advocate Linux, be civil.
Lars Wirzenius, Moderator, comp.os.linux.announce
Bruce Perens, Project Leader, Debian GNU/Linux Distribution
Alan Cox, Linux Networking Project, Linux International Technical Board
For the latest article about Linux by Bill Machrone, see the November 11 issue of PC Week, "Up Periscope". This is a good article in which he requests feedback from Linux users.
"The Linux Software Map": Unix Review, January, 1997, discusses the need for Linux documentation and the Linux Software Map (LSM).
From Martin Michlmayr of Linux International we learn:
According to a survey among a partial readership of iX, a German magazine devoted to Unix and networking, Linux is used at work by 45% of the readers. Solaris 1 and 2 taken together come second with 36%, followed by HP-UX with 27%. 56% of companies with less than 50 employees use Linux whereas it is used by 38% of firms with more than 1,000 employees. In addition, 60% of the readers use Linux on their computers at home. Linux International,
Date: 30 Oct 1996
The October 22, 1996 edition of the
***** LINUX APPLICATION AND UTILITIES LIST *****
is now available at its home site and mirrors.
The "Linux Applications and Utilities List" is an organized collection of pointers to the WWW home pages of almost 600 different Linux compatible application programs, system administration tools, utilities, device drivers, games, servers, programming tools, file, disk and desktop managers, Internet applications, and more.
The "Linux Applications and Utilities List" and mirrors can be found at:
Home Site U.S.A. (IL):
URL:http://www.xnet.com/~blatura/linapps.shtml
Bill Latura
Runtime Systems
Marc Perkel, , of Computer Tyme Software Lab, http://www.ctyme.com/, has written a program to convert man pages to HTML. Check out this web site with fully indexed man pages:
http://www.ctyme.com/linuxdoc.htm
This is a popular idea. There is an article coming out in the February issue of Linux Journal by Michael Hamilton, another guy who did this very same type of conversion. Michael's program is called vh-man2html and can be seen at http://www.caldera.com/cgi-bin/man2html. And he tells us of yet another page, http://wsinwp01.win.tue.nl:1234/maninfo.html, where converters can be found.
The "Mission Critical Linux Project" was created to document successful existing Linux systems that have a large load and 24-hour-a-day use. The survey will last until February 1, 1997.
If you cannot access our web site, please visit one of the following:
For additional information:
Motoharu Kubo,
http://www.st.rim.or.jp/~mkubo/ (English page under construction)
A couple of new Linux Resources sites:
Russ Spooner,
http://www.pssltd.co.uk/kontagx/linux/index.html
Joe Hohertz,
http://www.golden.net/~jhohertz
Date: Thu, 07 Nov 1996
The first version of the Slovenian HOWTO has been released. The document addresses Linux localization issues specific to Slovenian users and is written in Slovene.
It can be accessed either on its "locus classicus":
http://sizif.mf.uni-lj.si/linux/cee/Slovenian-HOWTO.html
or the official Linux Documentation Project Site:
http://sunsite.unc.edu/mdw/HOWTO/Slovenian-HOWTO.html
or any of the numerous mirrors of the latter.
For additional information:
Primoz Peterlin,
Institut za biofiziko MF, Lipiceva 2, SLO-1105 Ljubljana, Slovenija, http://sizif.mf.uni-lj.si/~peterlin/
Date: Mon, 18 Nov 1996
Tempe, Arizona - Cronus has announced the release of the long-awaited Geek Gadgets CD-ROM. Geek Gadgets contains the Amiga Developers Environment (ADE), a project conceived and managed by Cronus to produce and support Amiga ports of dozens of the most popular development tools and utilities from the Free Software Foundation, BSD and other sources. This CD contains all the tools necessary to get started programming on the Amiga, including advanced C, C++, Fortran and Ada compilers, an assembler, a linker, the EMACS editor, "make", source code control systems (RCS & CVS), text and file utilities, the GNU debugger, text formatters (groff & TeX) and more. Geek Gadgets is the perfect companion to the AT Developers CD, which contains documentation and utilities but no development tools. Released quarterly, Geek Gadgets provides a quick and cost-effective way to obtain the latest ADE for those with slow and/or expensive Internet connections. As a bonus, all the tools can be run directly from the CD-ROM without the need to install any files on your hard drive.
Available from your local Amiga dealer or directly from Cronus. SRP $ 24.95
For additional information:
Michelle Fish,
Date: 30 Oct 1996
Release "4.3.4" of the Stepstone Objective C compiler is now available from System Essentials Limited for Linux versions 1.2.13 and higher.
See: http://www.nai.net/~lerman
Both Linux and OSF/1 Objective C 4.3.4 releases include:
Date: Sun, 03 Nov 1996
MathTools Ltd. is pleased to announce MAT<LIB>, a Matlab Compatible C++ Matrix Class Library, designed for development of advanced scientific high-level C++ code. Evaluation version of the MAT<LIB> can be downloaded from our home page, http://www.mathtools.com.
The library includes over 300 mathematical functions covering Complex math, Binary and unary operators, Powerful indexing capabilities, Signal processing, File I/O, Linear algebra, String operations and Graphics.
For additional information:
MathTools Ltd., http://www.mathtools.com
Date: Wed, 13 Nov 1996 04:30:07 GMT
FIDOGATE 4.1.1, an update to version 4 of the FIDOGATE package, is available.
FIDOGATE Version 4:
* Fido-Internet Gateway
* Fido FTN-FTN Gateway
* Fido Mail Processor
* Fido File Processor
* Fido Areafix/Filefix
Internet:
http://www.fido.de/fidogate/
ftp://ftp.fido.de/pub/fidogate/
ftp://sunsite.unc.edu/pub/Linux/system/Fido/
fidogate-4.1.1.tar.gz (657 Kbyte)
For additional information:
Date: Wed, 13 Nov 1996
Fxvolume is a simple, no frills volume control designed to sit at the side of your screen and not get in the way. You simply run it, and then ignore it until you need to use it.
It controls the level of the master sound device under Linux, using a slider created from the Xforms library.
http://www.ee.mu.oz.au/staff/pbd/linux/fxvolume/
Use at your own risk - it has not been widely tested, but seems to work well enough... ;)
For additional information:
Paul Dwerryhouse,
University of Melbourne, Australia
Date: Tue, 05 Nov 1996
Announce: The free JAZZ midi sequencer version 2.6
JAZZ is a full-size MIDI sequencer allowing record/play and many edit functions such as quantize, copy and transpose; multiple undo; two main windows operating on whole tracks and single events; graphic pitch editing; GS sound editing functions and much more.
JAZZ is copyright (C) by Andreas Voss and Per Sigmond, and is distributed under the terms of the GNU GENERAL PUBLIC LICENSE (Gnu GPL).
Web site: http://rokke.grm.hia.no/per/jazz.html
Linux binary distribution: ftp://rokke.grm.hia.no/pub/midi/jazz/linux-bin/
Files: jazz-bin-v26b-xview.tar.gz, jazz-help-v26b-xview.tar.gz
Source code distribution: ftp://rokke.grm.hia.no/pub/midi/jazz/
File: jazz-src-v26b.tar.gz
For additional information:
Andreas Voss.
Per Sigmond,
Ericsson AS, ETO,
Date: Wed, 13 Nov 1996
util-linux-2.6.tar.gz (source-only distribution)
Util-linux is a suite of essential utilities for any Linux system. Its primary audience is system integrators (like the people at Red Hat) and DIY Linux hackers. The rest of you will get a digested version of util-linux installed with no risk to your sanity.
Util-linux is attempting to be portable, but the only platform it has been tested much on is Linux/Intel. Several patches for the ARM, m68k, and Alpha Linux versions have, however, been integrated. The present version is known to compile on at least Linux 1.2/libc 4.7.5 and Linux 2.0.22/libc 5.3.12 (the Linux versions I run :-). People are encouraged to make _nice_ patches to util-linux and submit them to [email protected].
Util-Linux 2.6 is immediately available from
ftp://ftp.math.uio.no/pub/linux/util-linux-2.6/
NOTE: Before installing util-linux, READ the README or risk nuking your system. Thank you.
For additional information:
Nicolai Langfeldt,
The popular front against MWM
Date: 30 Oct 1996
LyX-0.10.7 has been uploaded to sunsite. It is also available from ftp://ftp.via.ecp.fr/pub/lyx and from my home page: http://www.lehigh.edu/~dlj0/LyriX.html
LyX is a WYSIWYG front-end to LaTeX. It is used much like a word-processor, but LaTeX produces the final document. Figures, tables, mathematical formulas, fonts, headers, etc., are all drawn on-screen essentially as they appear on the final document. Figures (postscript) are placed in the document using a simple menu, as are tables. General text formatting is accomplished by high-level menu choices that automatically set fonts, indentation, spacing, etc., according to general LaTeX rules, and display (essentially) these settings on the screen.
None of the power of LaTeX is lost, since you can embed any LaTeX command within a LyX document.
Primary-site: sunsite.unc.edu /pub/Linux/apps/editors
501577 lyx-0.10.7-ELF-bin.tar.gz (binary release)
612839 lyx-0.10.7.tar.gz (original source)
Copying-policy: GPL
For additional information:
David L. Johnson,
Lehigh University, http://www.lehigh.edu/~dlj0/dlj0.html
Date: Tue, 05 Nov 1996
Announcing a new release of MpegTV, the real-time software MPEG Player for Linux (x86 ELF) and FreeBSD.
A free version of the MpegTV player can be downloaded from the MpegTV web site at:
http://www.mpegtv.com/
Main features:
For additional information:
Tristan Savatier,
http://www.mpeg.org
Date: Wed, 13 Nov 1996
This message is to announce the public Beta release of the ISDN4Linux driver for SpellCaster ISA ISDN adapters. This beta program is open to anyone who prefers the bleeding edge and just can't wait for MP support. The beta driver currently supports the SpellCaster DataCommute/BRI and TeleCommute/BRI adapters and will also include support for the DataCommute/PRI adapter before the end of the Beta program.
You can download the beta driver from:
ftp://ftp.spellcast.com/pub/drivers/isdn4linux
You require kernel revision 2.0. You will also need the isdn4k-utils package, also available at the above-mentioned FTP site or from ftp.franken.de.
For additional information:
Erik Petersen,
Date: 30 Oct 1996
Star Division announces the public availability of the second beta version of its office productivity suite, StarOffice 3.1, for Linux/x86.
StarOffice 3.1 consists of:
This beta version expires on January 1st, 1997. We will make newer beta versions available by then. The final version will be free of charge for private use; the price for commercial use has not yet been decided.
StarOffice 3.1 can be downloaded from the directory:
ftp://sunsite.unc.edu/pub/Linux/apps/staroffice
For additional information:
Star Division GmbH, http://www.stardivision.de/
Matthias Kalle Dalheimer,
Marc Sewtz,
Date: Wed, 13 Nov 1996
Wget 1.4.0 [formerly known as Geturl] is an extensive rewrite of Geturl. Wget should now be easier to debug, maintain and, most importantly, use.
Wget is a freely available network utility to download files from the World Wide Web using HTTP and FTP. It works non-interactively, thus enabling work in the background, after having logged off.
Wget works under almost all modern Unix variants and, unlike many similar utilities, is written entirely in C, thus requiring no additional software (such as Perl). As Wget uses GNU Autoconf, it is easily built on and ported to other Unixes. The installation procedure is described in the INSTALL file.
You can get the latest version of wget at:
ftp://gnjilux.cc.fer.hr/pub/unix/util/wget/wget.tar.gz
For additional information:
Hrvoje Niksic,
SRCE Zagreb, Croatia
Date: Tue, 05 Nov 1996
Woven Goods for LINUX Version 1.0
Version 1.0 of Woven Goods for LINUX is a collection of World-Wide Web (WWW) applications and hypertext-based information about LINUX. It comes preconfigured for the Slackware distribution and is currently tested with Version 3.1 (ELF). The Power Linux LST distribution contains this collection as an integral part, with some changes.
The five Parts of Woven Goods for LINUX are:
Woven Goods for LINUX is available via anonymous FTP from:
ftp://ftp.fokus.gmd.de/pub/Linux/woven
The HTML pages of Woven Goods for LINUX are snapshots of the LINUX pages at FOKUS - Research Institute for Open Communication Systems - and are available from: http://www.fokus.gmd.de/linux
For additional information:
Lutz Henckel,
GMD FOKUS, http://www.fokus.gmd.de/usr/hel/
Date: 30 Oct 1996
Announcing xldlas v0.40 in sunsite's incoming directory:
ftp://sunsite.unc.edu/pub/Linux/incoming/xldlas-0.40-srcbin.tgz
Soon to be moved to:
ftp://sunsite.unc.edu/pub/Linux/apps/math/xldlas-0.40-srcbin.tgz
xldlas is for doing statistics.
Here at Northern Michigan University, we run a Linux lab with 14 workstations. Upgrading from Redhat 3.0 to Redhat 4.0 has been quite an adventure. This article describes the upgrading of one workstation.
The first thing to do when upgrading is to free up a significant block of time. We used a day and a night to upgrade one machine. That included downloading the software, making floppy disks, and fixing our errors along the way. In fact, if you're a busy person, and Redhat 3.0 is working fine for you, then you might choose to delay the upgrade, or even avoid it. However, at the Linux Lab at Northern Michigan, we try and stay near the cutting edge, so the upgrade was a must for us.
The next step is to decide your upgrade method. The choices are the same ones from Redhat 3.0:
The quickest and easiest way is to use the CD-ROM drive. This is the only way if you don't have a direct Internet connection, since you cannot download the necessary amount of data through a modem in any reasonable amount of time. Since our workstations don't have CD-ROM drives, and do have an excellent Internet connection, we chose to do an FTP install.
Before an FTP install can begin, two disks named boot.img and supp.img must be downloaded from ftp://ftp.redhat.com/pub/redhat/current/i386/images/ . They can be written to the floppy disks with the commands
dd if=boot.img of=/dev/fd0
(switch disks)
dd if=supp.img of=/dev/fd0
The second disk is only needed for an FTP install. Redhat 3.0 required three disks for all install types, so this change is a significant savings in user effort. However, we had used the Redhat 3.0 disks as emergency boot disks to correct problems like forgetting the root password (yes, this does happen). The Redhat 4.0 boot disks are missing several important utilities (e.g., tar and vi), so they cannot be used for this purpose.
Also, notice that these two disks work for any supported hardware configuration. The older Redhat 3.0 required that the user search through a list of boot disks for the correct choice based on his hardware. This search often took more time than the download itself. Redhat 4.0 is much improved in this regard (our favorite new feature).
The first thing you'll see after inserting the boot.img disk and rebooting the computer is a LILO prompt. Just the words:
boot:
We would have liked more explanation of our choices here. Redhat 3.0 offered a very nice menu of help text that explained the possible parameters and their effects. However, if you just wait in a perplexed fashion long enough, the system will become impatient and boot Linux for you.
The first difference you'll notice is that Redhat 4.0 prompts you to describe your hardware. It asks about SCSI controllers and network adapters, showing you a list of possible choices. Behind the scenes the Redhat 4.0 install script loads kernel modules to access your hardware.
While this is happening is a good time to switch to virtual console #3 (press <ALT>F3). This console shows what's happening in more technical detail, describing things like the mounting and unmounting of file systems, and the downloading of files. The older Redhat 3.0 did not have this feature, which we often use to debug problems. You can switch back to the main action by pressing <ALT>F1.
The install scripts also query the user for network information. You should know your IP number, netmask, gateway, hostname, domain name, and name server before starting the install. We notice that Redhat 4.0 creates a default gateway and name server entry based upon your IP number and netmask, but that these defaults are rarely right. Better in our opinion would be to have no default at all than a misleading one.
Redhat 4.0 will show you a menu of possible software upgrades and additions. This list is essentially the same as Redhat 3.0, except that most packages have increased in version number.
The biggest problem we had involved the remote login software (rlogin, in.rlogind, in.rshd and in.telnetd). These have been upgraded to use the P.A.M. library and kerberos. However, we often log into our Linux workstations from older Sun Sparcs that do not run this software suite. For some unexplained reason, the SunOS clients could not access the Linux servers. We solved the problem by simply re-installing the older software.
In general, we suggest letting Redhat upgrade everything you might ever use, but avoid downloading any software you are sure you will not need. Avoiding unneeded software decreases both the total time needed and the probability of network errors during the download.
Step one of the download process is to pick an FTP site. There are many listed here. We started by choosing a site with a fast 'ping time' from us, since ping time is a reasonable approximation of FTP throughput and is quite quick to gather. To find out the ping time to a site like www.redhat.com, just type:
ping www.redhat.com
After ping runs for several packets, kill it with <CNTL>C. The average ping time will be shown at the bottom. We saw ping times from 80 - 300 milliseconds; downloads are four times faster from the best site compared to the worst, so it is well worth your time to explore with ping before picking a site at random. The fastest was the aptly named ftp://ftp.real-time.com/pub/redhat . Unfortunately, they were not accepting FTP connections, so we used ftp://uiarchive.cso.uiuc.edu/pub/systems/linux/distributions/redhat . We could FTP to that site, but the download failed. It seems that the download scripts also want to know the version and architecture of the packages you are trying to download. Therefore, the correct URL is ftp://uiarchive.cso.uiuc.edu/pub/systems/linux/distributions/redhat/current/i386 . That was not obvious from the directions. We suggest that the Redhat folks either change their script to add these subdirectories or make their directions more clear.
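The mirror comparison above can be scripted. Here is a rough sketch (the mirror names are just examples, and the sed expression assumes ping's usual "min/avg/max" summary line):

```shell
#!/bin/sh
# Rank candidate FTP mirrors by average ping time (hostnames are examples).

# Pull the average round-trip time out of ping's summary line, i.e. the
# second number after the "=" in "round-trip min/avg/max = 80.1/120.4/300.2 ms".
extract_avg() {
    sed -n 's|.*= *\([0-9.]*\)/\([0-9.]*\)/.*|\2|p'
}

for host in ftp.redhat.com ftp.real-time.com; do
    avg=$(ping -c 3 "$host" 2>/dev/null | extract_avg)
    echo "$host: ${avg:-unreachable} ms"
done
```

Run it, then pick the mirror that reports the lowest number.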
For us, upgrading required downloading over 300 megabytes. I must say the status screen during the download is quite nice. The biggest problem with it is that it does not show the progress of downloading each package. Since the download was so long, we left it running overnight. Unfortunately, it failed on the download of LILO. The download script then waited for us to press a key acknowledging the error, which meant it stopped downloading some time during the night. Better would be to continue downloading while informing the user of this error.
Once the download is finished, and you answer a few simple questions, you get to reboot your computer into Redhat 4.0 (yea!!).
The first thing we noticed is that the kernel has been upgraded to Linux 2.0.19. Some problems we had before, like our tape drive not working, were fixed with this upgrade. Also, our Adaptec 2740 SCSI controller was accessible for the first time. Java support is included in the upgraded kernel.
We discovered the auto-mounter daemon (amd) was running, and had created a directory named /net. Inside /net is every computer mountable by your workstation. For example, /net/foo is the root directory of the host foo, assuming foo allows outside access. Nice feature!!
The ps command has been changed. Formerly, we used 'ps -augx' to see all processes on our system. That command will no longer work. The new equivalent is 'ps -ax'.
The passwd command has been changed. In fact, my former password is now considered ill advised, and I've had to pick a new password.
The window manager fvwm95 has been included in the upgraded Redhat. Surprisingly, workman, the musical CD player, was not. See http://www.redhat.com/linux-info/pkglist/rh40_i386/all-packages.html for the complete list.
Happily, the Redhat 4.0 upgrade left much of our custom configuration intact. For example, we run a custom X server that Redhat left in place, and our NFS mounts as described in /etc/fstab were retained, even though the upgrade did change /etc/fstab to add other entries (like the /net file system). We did have to re-edit /etc/rc.d/rc.local to set our NIS domain.
The errata can be found at http://www.redhat.com/support/docs/rhl/rh40-errata-general.html . It is actually quite long. Basically, the errata is a list of package upgrades to Redhat 4.0, along with a description of when each applies. We counted up to 40 packages to download and install, depending on your configuration. That's just too many!! Why doesn't Redhat make these improved packages part of the latest release, possibly called Redhat 4.0.1?
Luckily, the process is quite mechanical, and requires little thought. Just download the needed files, and run rpm -U on them.
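That loop can be sketched in a few lines of shell. The download directory name below is an assumption; point it wherever you saved the errata packages:

```shell
#!/bin/sh
# Upgrade every errata RPM found in a download directory (path is an example).
ERRATA_DIR=/tmp/errata

for pkg in "$ERRATA_DIR"/*.rpm; do
    # if the glob matched nothing it is passed through literally; skip it
    [ -f "$pkg" ] || continue
    rpm -U "$pkg" && echo "upgraded: $pkg"
done
```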
Netscape has upgraded since we did our original install. Unfortunately, Redhat does not include Netscape, so Netscape must be updated separately. Perhaps there are legal reasons Redhat does not include Netscape, but Redhat does include other non-free software, such as xv.
During the upgrade, the install scripts create backup copies of certain files in /etc/rc.d/rc*.d with the extension ".rpmsave". Once everything is set up correctly, you can delete any files matching /etc/rc.d/rc*.d/*.rpmsave.
Overall, the Redhat package is well done. The installation is easier for Redhat than any other Unix we know of. Redhat 4.0 is a collection of small upgrades of many packages from Redhat 3.0. There are only a few new packages (i.e.: fvwm95, TheNextLevel). Overall, our system is much as it was before, but with many small improvements. Unless you have some need to upgrade, or just feel like messing around with your system, we suggest the results may not be worth the effort. Even so, we like Redhat 4.0 very much.
If you have comments or suggestions, email me at
Abstract
In this article, I will describe some of the main features of TCSH which I believe make it worth using as the primary login shell. This article is not meant to persuade bash users to change! I've never used bash, and for that reason I know very little about it.
As some of you surely know, I've created a configuration tool called The Dotfile Generator, which can configure TCSH. I believe that this tool is very handy when one wants to get the most out of TCSH (without reading the manual page a couple of times.) Because of that I'll refer to this tool several times throughout this article to show how it can be used to set up TCSH.
With good knowledge of your shell's power, you can decrease the time you spend in the shell, and increase the time spent on your actual tasks.
Basically, one can complete on files and directories. This means that you cannot complete on host names, process ids, options for a given program, etc. Another thing you cannot do with this type of completion is to complete on directory names only when typing the argument for the command cd.
In TCSH, the completion mechanism is enhanced so that it is possible to tell TCSH which list to complete from for each command. This means that you can tell TCSH to complete from a list of host names when completing on the commands rlogin and ping. An alternative is to tell it to complete only on directories when the command is cd.
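As a concrete illustration, a .tcshrc fragment along these lines (the host names are placeholders) might read:

```tcsh
# complete only directory names after `cd'
complete cd 'p/1/d/'
# complete rlogin and ping from a fixed list of hosts
set hostnames = (alpha.example.com beta.example.com)
complete rlogin 'p/1/$hostnames/'
complete ping 'p/1/$hostnames/'
```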
To configure user-defined completion with The Dotfile Generator (from now on called TDG), go to the page completion -> userdefined. This will bring up a page which looks like this:
As the command name, you tell TDG which command you wish to define a completion for. In this example it is rm. Next you have to tell TDG which arguments to the command this completion should apply to. To do this, press the button labeled Position definition. This will bring up a page which is split in two parts:
In the first part, you tell TDG that the position definition should be defined from the index of the argument being completed (the one where the tab key is pressed). Here you can tell it that you wish to complete on the first argument, on all the arguments except the first one, etc.
The alternative to position-dependent completion is pattern-dependent completion. This means that you can tell TDG that this completion should only apply if the current word, the previous word, or the word before the previous word conforms to a given pattern.
This solution is a bad idea if the list is used in several places (e.g. a list of host names); in that case, one should place the list in a variable, and then set this variable in the .tcshrc file.
To set up such a completion, first develop the command which returns the list to complete from. The command must print the completion list on standard output as a space-separated list. When this is done, insert this command in the entry saying Output From Command.
Here's a little Perl command which finds the targets in a makefile:
perl -ne 'if (/^([^.#][^:]+):/) {print "$1 "}' Makefile
If this is inserted in the entry, one can complete on targets from the file called Makefile in the current working directory.
If someone thinks that I describe TCSH through TDG only to promote TDG, (s)he should take a look at the following line, which is the generated code for the make completion:
complete make 'p@*@`perl -ne '"'"'if (/^([^.#][^:]+):/) {print "$1 "}'"'"' Makefile`@'
As has been discussed in issue 6 of the Gazette, part of the prompt may be located in the xterm title bar instead of on the command line. To do this, choose font change and select Xterm.
To see a list of the previously executed commands, type history.
The following table lists the event specifiers:
!n | This refers to the history event, with index n |
!-n | This refers to the history event executed n times ago: !-1 for the previous command, !-2 for the one before the previous command, etc. |
!! | This refers to the previous command |
!# | This refers to the current command |
!s | This refers to the most recent command, whose first word begins with the string s |
!?s? | This refers to the most recent command which contains the string s |
With these commands, you can re-execute a command, e.g. just type !! to re-execute the previous command. This is, however, often not what you want to do. What you really want is to re-execute some part of a previous command with some new elements added. To do this, you can use one of the following word designators, which are appended to the event specifier with a colon.
0 | The first word (i.e. the command name) |
n | The nth word |
$ | The last argument |
% | The word matched by an ?s? search |
x-y | Argument range from x to y |
* | All the arguments to the command (equal to ^-$) |
Now it's possible to get the last argument from the previous command by typing !!:$. You'll find, however, that you very often refer to the previous command, so if no event specifier is given, the previous command is used. This means that instead of writing !!:$, you may just write !$.
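For instance, a typical pair of commands using !$ (the file names are just examples):

```tcsh
% cp /etc/hosts /tmp/hosts.bak
% vi !$        # the shell expands this to: vi /tmp/hosts.bak
```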
More word designators exist, and it's even possible to edit the words with different commands. For more information about this and for more examples, please take a look at the tcsh manual.
It is possible to expand the history references on the command line before you evaluate them by pressing ESC-SPC or ESC-! (that is: first the escape key, and then the space key or the ! key). On some keyboards you may use the meta key instead of the escape key, i.e. M-SPC (one keystroke!).
* | Match any number of characters |
? | Match a single character |
[...] | Match any single character in the list |
[x-y] | Match any character within the range of characters from x to y |
[^...] | Match any single character not in the list |
{...} | This expands to all the words listed; the words need not match anything. |
^... | ^ in the beginning of a pattern negates the pattern. |
An example of this is the program mcopy, which copies files from a DOS disk. To copy all files, you may wish to use a star, as in: mcopy a:* /tmp. This does not work, however, since the shell will try to expand the star, and since it cannot find any files which start with a:, it will signal an error. So if you wish to send a star to the program, you have to escape it: mcopy a:\* .
Two very useful key bindings exist for use with patterns. The first is C-xg, which lists all the files matching the pattern without executing the command. The other is C-x*, which expands the pattern on the command line. This is especially useful if you, for example, wish to delete all files ending in .c except important.c, stable.c and another.c. Creating a pattern for this might be very hard, so just use the pattern *.c. Then type C-x*, which will expand *.c to all your .c files. Now it's easy to remove the three files from the list.

TCSH has a mechanism to create aliases for commands. This means that you can create an alias for ls -la just called la.
Aliases may refer to the arguments of the command line. This means that you can create a command called pack, which takes a directory name and packs the directory with tar and gzip, etc. Aliases can often be a bit hard to create, since one often wants history/variable references expanded at the time of use, and not at definition time. This has been made easier with TDG, so go to the page aliases to define aliases. If you end up with an alias you cannot define on this page, but can define in tcsh, please let me know. For more information about aliases, see the tcsh manual.
0.020u 0.040s 0:00.11 54.5% 0+0k 0+0io 21pf+0w
Informative? Yes, but... The GNU time command is a bit more understandable:
0.01user 0.08system 0:00.32elapsed 28%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (0major+0minor)pagefaults 0swaps
But still...
In TDG you can configure the output from the time command on the page called jobs. It looks like this:
As for the prompt, here's an entry once again for mixed tokens and ordinary text. Remember, if there is something in TDG that you do not understand, help is available by pressing the right mouse button over the given widget.
This is the feddi.como document, which comes with the FEddi+bt packages; this paper is based upon version 0.5.
The original author of the FEddi package is Oliver Graf, 2:2454/130.69; the original port of bt to *nix is copyright (c) 1992, 1993 by Ben Stuyts; the adaptation to LINUX is copyright (c) 1993 Louis Lagendijk; and the person who made both versions usable is Manuel Soriano.
Welcome as a future fellow of feddi and bt :-)
Congratulations on your decision to install this package. It's not too complicated; the only troubles you may run into are some permission problems. The sources included in this package have already been patched to ensure smoother operation.
Both fmbedit and bt show some minor problems, so don't flame too much, and remember that you didn't pay anything for it. You may contribute by correcting bugs. Don't keep fixes to yourself; share them. Send me patches and this software will improve. A hint: don't run it under X; the terminal database doesn't work smoothly there, and I intend to fix this. Surely, some day I'll be able to patch this :-) (I used to say this would be in the next release :-DDDDDDDDDDD)
Still on my to-do list: making ftoss recognize upper/lower case. At the end of this file you'll find messages with hints, all sent by feddi users.
We'll install fido as a mail user, but you can give it another name. If you see ~/ in this document, we refer to that user's home directory.
/etc/passwd
Include the following line:
fido::2004:300::/home/fido:/bin/bash
/etc/group
Include the following line:
fido::300:uucp,fido,root
You'll need:
ls /usr/bin/perl
If not found, install it from disk-set D (Slackware).
ls /usr/lib/libncurses.a
If not found, install it from disk-set D (Slackware).
Change to the directory /FEddi-0.9pl5. In the Makefile, set the variable SRCDIR to your sources' path, e.g.:
SRCDIR=/root/trabajo/mailer/FEddi-dev
Also set:
NODEPRG = nlfunct.o
else it won't compile. Then do:
make
If you get the error
ncurses.h: No such file or directory
do:
ln -s /usr/include/ncurses/curses.h /usr/include/ncurses/ncurses.h
su root
make install
exit
cp utils/* ~/fnet/utility
printmsg:
#!/bin/sh
cat | $HOME/fnet/utility/formatmsg | lpr
exportmsg:
#!/bin/sh
if test "$1" = "new"
then
    cat | $HOME/fnet/utility/formatmsg > "$2"
else
    cat | $HOME/fnet/utility/formatmsg >> "$2"
fi
Create these directories and do the following:
./outbound ./msgbase ./copy ./log ./inbound ./utility ./nodelist
chown -R fido.fido fnet
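The directory creation above can be sketched in one command, assuming you are in fido's home directory (the chown needs root):

```shell
#!/bin/sh
# Create the fnet directory tree in one go.
mkdir -p fnet/outbound fnet/msgbase fnet/copy fnet/log \
         fnet/inbound fnet/utility fnet/nodelist
# then, as root:
# chown -R fido.fido fnet
```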
Create ~/.feddirc with permissions 644 and owner fido.uucp. You may base your configuration on the following file, as it works for me without trouble.
;
; This .feddirc was automatically created with config.user
;
; Profile Section
;
PROFILE Manuel Soriano
2:346/207.punto net_name the_passwd outbound 2:*
25:946/100.punto other_net_name the_passwd outbound 25:*
93:346/101.punto other_net_name the_passwd outbound 93:*
END
; The first line is your main address, the following are subnets, the routing
; from 25: to 93: is done by means of 2:
;
;
; Paths
;
MsgBasePath ~/fnet/msgbase/
InboundPath ~/fnet/inbound/
OutboundPath ~/fnet/
UtilityPath ~/fnet/utility
Log ~/fnet/log/feddi.log 200
CopyPath ~/fnet/copy/
NodelistPath ~/fnet/nodelist/
;
; Misc
;
Packer /usr/bin/zip -q -m -k -j %s %s
; Editor /usr/bin/vi %s
Beep Yes
AutoDelEmpty Yes
KeepPKT No
KeepNL Yes
KeepBackups No
ShowAllAddr Yes
MaxMsgLength 64k
QuoteLength 70
ReplySubject No
AskForOrigName Yes
AutoNextFolder Yes
;
; End of .feddirc
;
~/fnet/nodelist/fnlcrc
dial 34-6- 3
dial 34-6
dial *
pointlist ptlstr34
pointlist eu_point
nodelist region34
nodelist eu_nodes
As dial: according to your zone: 34-6 (Valencia), 34-1 (Madrid), 34-3 (Barcelona), etc. As pointlist, the different lists of points: you may use the point lists that come from the bbs, without modification. As nodelist, the different lists of nodes: you may use the node lists that come from the bbs, without modification. That's it.
Create ~/fnet/nodelist/compila0 with permissions 777:
#!/bin/bash
unzip lista.zip
mv EU_NODOS* eu_nodos
mv EU_PUNTO* eu_punto
mv PTLSTR34* ptlstr34
mv REGION34* region34
mv SNETLIST* snetlist
mv SUBPTLST* subptlst
Create ~/fnet/nodelist/compila1 with permissions 777:
#!/bin/bash
rm fnlc.*
fnlc
/usr/bin
Check your mail. Look for a mail packet you might have from MS-DOS. Put it into the directory ~/fnet/inbound and do:
ftoss ; futility pack ; futility link
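Since recognizing upper/lower case is still on the author's to-do list for ftoss, a minimal workaround sketch (my suggestion, not part of FEddi) is to lowercase the incoming file names first, running this inside ~/fnet/inbound:

```shell
#!/bin/sh
# Rename every file in the current directory to lower case.
# Note: an existing lower-case twin of a file would be overwritten.
for f in *; do
    [ -f "$f" ] || continue
    lower=$(echo "$f" | tr 'A-Z' 'a-z')
    [ "$f" = "$lower" ] || mv "$f" "$lower"
done
```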
This will always be the way to handle your incoming mail. ftoss will automatically create the folders according to your areas.
fmbedit
If everything went well you'll see the mail of that package on your screen :-)
The editor is quite simple and well documented. It looks somewhat like fmail's editor.
Create a message in an area or two and do the following:
fscan
This will always be the way to handle your outgoing mail.
Change to the directory /bt and do:
make
su root
make install
You should get in /usr/bin:
-rwxr-xr-x 1 root fido 238983 Sep 15 18:04 /usr/bin/bt
and in /usr/lib/binkley:
-rwxr-xr-x 1 root root   742 Sep 16 10:04 binkley.cfg
-rw-r--r-- 1 uucp root   108 Sep 16 10:10 binkley.day
-rw-r--r-- 1 root root 12332 Sep 15 16:20 binkley.lng
-rw-r--r-- 1 uucp root   124 Mar 20  2029 binkley.scd
-rwxr-xr-x 1 root root 14423 Sep 15 16:20 btctl
-rwxr-xr-x 1 root root 13813 Sep 15 16:20 btlng
-rwxr-xr-x 1 root root 15649 Sep 15 16:20 english.txt
-rwsr-xr-x 1 uucp fido  1603 Sep 15 16:20 fido-toconv
/usr/lib/binkley/binkley.cfg
You may start with this file. Just change what you need and take away the numbers in parentheses.
FEddiNodelist
(1)Port 2
(2)baud 38400
LockBaud 38400
(3)Init ATZ0|~AT&K6|~
(4)Prefix ATDP
PreDial ~
PreInit |v``^``
LogLevel 5
LineUpdate
Gong
AutoBaud
PollTries 10
PollDelay 600
Unattended
BoxType 0
NiceOutBound
ReadHoldTime 1
(5)System seudonimo_fido
(6)Sysop tu_nombre
StatusLog /home/fido/fnet/log/binkley.log 200
Downloads /home/fido/fnet/inbound/
CaptureFile /home/fido/fnet/log/session.log
NetFile /home/fido/fnet/inbound/
Hold /home/fido/fnet/outbound/
Nodelist /home/fido/fnet/nodelist/
(7)Address 2:346/[email protected] 5207 tel_del_boss
(8)Key !the_passwd 2:346/207
(9)Domain FidoNet.org outbound
Address 25:946/[email protected]
Key !the_passwd 25:946/100
Domain EuroNet.org outbound
Address 93:346/[email protected]
Key !the_passwd 93:346/101
Domain SubNet.org outbound
bt uses /dev/modem. Normally /dev/modem is a symlink to /dev/cua0 or /dev/cua1 (ln -s /dev/cua1 /dev/modem). At least I have it this way... In ~/.profile do:
export BINKLEY=/usr/lib/binkley
. ~/.profile
(you need to source the profile just this once; the next time you log in as fido you'll already have BINKLEY initialized)
Now run:
bt
If you run into trouble, it's surely about permissions or a badly defined path. Check them out.
If the message was:
cannot re-open logfile
the owner of the logfile is usually usuario.uucp, and the permissions 664.
The port is normally held by getty; it should get permissions to read and write for everybody. The message was:
tty port can not be initialized
Solution:
chmod 666 /dev/ttyS0
(or ttyS1; COM1: or COM2:).
ln -s /var/spool /usr
If you get a screen similar to frodo's, you could do the following: press ALT-Y to call your bbs; it'll leave your mail there and fetch what you got. Then you just need to execute the commands mentioned above for mail handling.
If it appears to have fallen asleep during the FIRST file transmission, hit the ESC key to wake it up.
This is my template file $FNET/msgbase/template:
#if to (AreaMgr|FileScan)
#;
#; ********** Handling of AreaMgr- and FileScan-Mails **********
#;
#else
#if group (--InterNet--)
#;
#; ********** Handling of Internet-Mails **********
#;
How are you #1E!
#if mode (reply)
In <#a> #f wrote:
#.
#quote
#else
#.
#endif
Greetings,
Manu
#|insertfortune
#else
#;
#; ********** Handling of other Mails **********
#;
Hi #1E!
#if mode (reply|forward)
#if mode (netreply)
That happy day #d, #f said to #e in #a concerning "#s":
#.
#quote
#endif
#if mode (^reply)
On #d, #f would write to #e concerning "#s":
#.
#quote
#endif
#if mode (forward)
Even if it doesn't look like it, it's a forward
* Message from #f to #e
* on #d to #t
* concerning "#s"
* in #a
          ,,,
         (o o)
---------------------------------oOO--(_)--OOo------------------------------
#text
----------------------------------------------------------------------------
#endif
#else
#.
#endif
#if group (--Intern--|^$)
#if from Manuel Soriano
Bye,
Manu
#|insertfortune
#else
Bye,
#1F
#endif
#else
Bye,
#1F
#endif
#endif
#endif
   \|/
   0-0
[email protected]
*****---oOo-(_)-oOo---**********************************************
* Manuel Soriano          * El Perello/Valencia/Spain              *
Once you have created your area directories, you can create an origin file in each of them and insert one or several lines (of not more than 70 characters) referring to your message's origin.
From here on I'll state things I received from fido users.
futility
------------------------------------------------------------------------------
Message Number 1 from area R34.LINUX
------------------------------------------------------------------------------
From: Jesus Gambero (2:345/201.3)
To:   All
Subj: FEddi
Send: 25 Nov 95 15:43:57
------------------------------------------------------------------------------
Hi.

For now, FEddi hasn't got much documentation, so after a couple of tests I'm
finally able to maintain the message base.

futility tool delete "age+15&&protect-&&new-" R34.LINUX
futility pack

This will delete the messages older than 15 days which are not protected and
which have been read. If you don't specify the area name, it'll apply to all
areas. It happens that I leave some areas more days than others, so I have to
specify a line for each area, but you may customize it at will.

Bye.

--- FEddi 0.9pl5 via BinkleyTerm
 * Origin: Message written and sent by Linux, of course!! (2:345/201.3)
------------------------------------------------------------------------------
Message Number 4 from area R34.LINUX
------------------------------------------------------------------------------
From: Javier Hernandez (2:346/207.48)
To:   ALL
Subj: FILE REQUEST
Send: 07 Dec 95 06:15:45
------------------------------------------------------------------------------
Hi!

I have been trying to find out how to do the RE: (file request) with the
Linux software, and I already fetched my first file. I'll explain how I did
it, just in case anybody is interested, or knows of a more correct manner.

First I write a Net message, usually to my sysop. After finishing I exit with
(Alt+x). With the message activated, I hit (Alt+g) to open a small window
which displays some data. Once I see it, I press `Inc' and type the name of
the file I wish to download. Finally I press `Esc'. This should be enough.
Next time you call you'll receive the file. At least this is how it worked
for me. Any comments?

Bye, Javier
[email protected]

      _\|/_
***********************************************-----(O)---****
* Javi(Canary)       * Valencia/Spain                         *

--- FEddi 0.9pl5 via BinkleyTerm
 * Origin: RAMERA: a person who deals in RAM. (2:346/207.48)
------------------------------------------------------------------------------ Message Number 6 from area R34.LINUX ------------------------------------------------------------------------------ From: Javier Hernandez (2:346/207.48) From: Manuel Soriano Subj: Testing send. Send: 11 Dec 95 23:58:55 ------------------------------------------------------------------------------ Hi Manuel! As of 07 Dec 95, Manuel Soriano wrote to Javier Hernandez concerning "Testing send.": MS> I've received it correctly, in the write area, just tell us how you MS> did it. Hope you'll write us a feddi.howto :-) See, I put a file called "names" into /home/fido/fnet/msgbase which might be similar for you. The file's contents: -------------------------start here------------------------------------- *fj,Javier Hernandez,2:346/207.48 *fm,Francisco Moreno,2:346/207.1 *ap,Alfonso Perez-Almazan,2:346/207.2 *vk,Viktor Martinez,2:346/207.4 *sz,Salvador Zarzo,2:346/207.6 *el,Eduardo Lluna Gil,2:346/207.8 *bs,Bernardino Soldan,2:346/207.10 *ms,Manuel Soriano,2:346/207.14 *js,Jose Luis Sanchez,2:346/207.17 *jv,Jose Villanueva,2:346/207.28 *am,Alberto Mendoza,2:346/207.44 *pe,[email protected],2:342/3 *am,areamgr,2:346/207 *rt,[email protected],2:342/3 ----------------------------stop here----------------------------------- This causes that, inserting a net instead of writing a To:, push PgUp or PgDown, you can see the different names. As you see, I've even added some Internet addresses which I'm using sometimes. The first field, I think, is some kind of short keys to make a call directly to this line. I don't remember right now how is this done, but it's easy and you'll find it in the man page for feddi. I don't know if I missed something. If you agree, just add it to feddi.como. Let me know if you think there is missing something, I'll send it to you. See ya. 
Bye, Javier [email protected] [email protected] _\|/_ ***********************************************-----(O)---**** * Javi(Canary) * Valencia/Spain * --- FEddi 0.9pl5 via BinkleyTerm * Origin: RAMERA: person dealing with his RAM. (2:346/207.48)
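The "names" file described above is plain comma-separated text, so its entries can be checked from the shell. A minimal sketch, using a scratch copy under /tmp and a few entries taken from the message (the real file lives in /home/fido/fnet/msgbase):

```shell
# Build a sample "names" file like the one described above.
cat > /tmp/names <<'EOF'
*fj,Javier Hernandez,2:346/207.48
*ms,Manuel Soriano,2:346/207.14
*pe,[email protected],2:342/3
EOF

# Look up the entry for the short key "ms" and pull out the full name
# and node address (comma-separated fields 2 and 3).
grep '^\*ms,' /tmp/names | cut -d, -f2   # -> Manuel Soriano
grep '^\*ms,' /tmp/names | cut -d, -f3   # -> 2:346/207.14
```

The leading `*` is escaped in the grep pattern so it matches literally rather than as a regular-expression repeat.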
------------------------------------------------------------------------------ Message Number 11 from area R34.LINUX ------------------------------------------------------------------------------ From: Jose Carlos Gutierrez (2:341/45.17) From: all Subj: Feddi-como, Scripts Send: 26 Dec 95 11:42:31 ------------------------------------------------------------------------------ Hi These are the files I'm using to automate mail. file /usr/local/bin/fido #!/bin/bash pushd ~/fnet/inbound .minusculas if [ -f snetlist.a* ] || [ -f subptlst.a* ] || [ -f region34.l* ] || [ -f ptlstr34.l* ]; then ~/fnet/nodelist/compilar fi ftoss futility link fmbedit fscan futility pack popd |------------| file ~/fnet/inbound/.minusculas (the dot is to avoid that it converts itself to lower case) #!/usr/bin/perl while ($nombre = <*>) { $nuevo_nombre = $nombre; $nuevo_nombre=~ tr/A-Z,Ñ/a-z,ñ/; print "$nombre -> $nuevo_nombre \n"; rename($nombre,"$nuevo_nombre"); } |------------| file ~/fnet/nodelist/compilar #!/bin/bash # file to compile the nodelist pushd ~/fnet/nodelist if [ -f ~/fnet/inbound/ptlstr34.l* ]; then rm ptlstr34* unpack ~/fnet/inbound/ptlstr34.l* fi if [ -f ~/fnet/inbound/region34.l* ]; then rm region34* unpack ~/fnet/inbound/region34.l* fi if [ -f ~/fnet/inbound/snetlist.a* ]; then rm snetlist* unpack ~/fnet/inbound/snetlist.a* fi if [ -f ~/fnet/inbound/subptlst.a* ]; then rm subptlst* unpack ~/fnet/inbound/subptlst.a* fi # what I'm doing here is insert the line of my Boss for him to call the bt # with ctrl + y (this is probably the most difficult way to do it, by I know # of no other). 
grep -i -B 4000 'Boss,2:341/45' ptlstr34.* > /tmp/file1 grep -i -A 4000 'Boss,2:341/45' ptlstr34.* > /tmp/file2 grep -v 'Boss,2:341/45' /tmp/file2 > /tmp/file3 rm ptlstr34.* cat /tmp/file1 > ptlstr34 # you'll have to adapt this line to your system echo ",0,Ma~ana_Remoto,Madrid,Rafa,34-1-6463023,9600,CM,V34,VFC" >> ptlstr34 cat /tmp/file3 >> ptlstr34 rm /tmp/file1 rm /tmp/file2 rm /tmp/file3 # rm -f ~/fnet/inbound/ptlstr34* rm -f ~/fnet/inbound/region34* rm -f ~/fnet/inbound/snetlist* rm -f ~/fnet/inbound/subptlst* rm fnlc.* fnlc popd Bye, Guti. --- FEddi 0.9pl5 via BinkleyTerm * Origin: THE GANG TM (2:341/45.17)
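The grep -B/-A juggling in the compilar script can be done in one step with sed's append command, which inserts a line right after the matching nodelist entry. A minimal sketch with an invented three-line nodelist and an illustrative point entry (the real script works on ptlstr34.*):

```shell
# A toy three-line nodelist standing in for ptlstr34
cat > /tmp/nodelist <<'EOF'
Node,2:341/44,Somewhere
Boss,2:341/45,Madrid
Node,2:341/46,Elsewhere
EOF

# Append the point's own entry right after the matching Boss line,
# replacing the four greps and three temp files in one command.
sed '/Boss,2:341\/45/a\
,0,Mi_Punto,Madrid,Rafa,34-1-6463023,9600,CM,V34,VFC' \
    /tmp/nodelist > /tmp/nodelist.new
```

The result has the new entry as line three, between the Boss line and the rest of the list, which is exactly what the temp-file shuffle produces.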
------------------------------------------------------------------------------ Message Number 1358 from area R34.LINUX ------------------------------------------------------------------------------ From: Pablo Gomez (2:341/43.40) From: All Subj: The personal area in FEDDI, a fine(ally) version ;-) Send: 24 Jun 96 00:35:31 ------------------------------------------------------------------------------ Hi! Will since some time we have been trying to find out a possibility to provide in FEDDI a personal area allowing the reception of mail directed to us from any area, and, over all, (as the former isn't difficult) reply them in a comfortable way, sending them back to the original areas. The following scripts at least allowed Francisco Jose Montilla and the author of this message to do the trick. The first step is creating an area which will later serve as PERSONAL. We can do it like: (As user fido) $ cd ~/msgbase $ mkdir +PERSONAL $ cp +R34.LINUX/* +PERSONAL/ (PERSONAL is the name you want to give the personal area) Check if the permissions and the owner of this new directory are the same as those you have in other areas. If not, correct them. Next, to clean the messages, do: $ futility "+delete" "all+" PERSONAL $ futility pack PERSONAL If you invoke fmbedit again, you'll the the new area, called PERSONAL! :-) magic? :-) Now we've got the base. Next part: Copy the new messages that are arriving to the system to our name. This is done (almost) automatically. If we create a file like: ,,, (o o) File: ~/msgbase/tosspath ---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*--- copy t"Pablo Gomez" PERSONAL ---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*--- that's it. Obviously you'll have to replace my name (Pable Gomez) with yours, and PERSONAL with the name of your personal area. Each time we run ftoss, this will copy to the personal area the messages directed to us. This point deserves a comment. 
In fact, this will also copy messages addressed to us that arrive as
NETMAIL. In my opinion this is somewhat brain-dead, as the NETMAIL area
already is our personal area, and I know of no modification that avoids the
copy. So a little later we'll have to make an adjustment. This is the
important piece ;-) of the script I run to receive mail:

 ,,,
(o o)   File: ~/bin/mimport
---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*---
#!/bin/sh

# To manage the personal area
PERSAREA=PERSONAL

# Mail import
ftoss

# Feeding the personal area.
# We have just delivered the messages, generating the necessary duplicates
# in PERSONAL. But we'd like to delete the copies in the PERSONAL area
# which came from the NETMAIL area.
futility tool "+delete" \
    "new+&&text+\*\*\* ftoss: copied from NETMAIL" $PERSAREA

# reconstruct threads
futility pack
futility link
#[...]
---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*---

Be careful: the lines `futility tool ...' and `"new ..."' are really one
line (continued here with a backslash). The aim is to delete these
redundant copies of NETMAIL messages.

Going on with message handling: the messages in the PERSONAL area contain
lines like

  *** ftoss: copied from R34.LINUX

(for instance) :-). I reply to the message (right in the PERSONAL area) and
don't worry about anything, EXCEPT not deleting this line, which will later
serve as a `witness' allowing the reply to be posted in the correct area.
Then, to export the mail, I run the following script:

 ,,,
(o o)   File: ~/bin/mexport
---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*---
#!/bin/sh

USER_BIN_DIR=/home/fido/bin
LOCAL_BIN_DIR=/usr/local/bin

# Name of personal area
PERSAREA=PERSONAL
# user name
USERNOM="Pablo Gomez"
# temp output file name
OUTFILE=/tmp/persanswr

# Extract the messages in the personal area which are due for processing,
# then mark them as `sent'
futility tool "display" "attribute-se&&from+Pablo Gomez" $PERSAREA > $OUTFILE
futility tool "+se" "attribute-se&&from+Pablo Gomez" $PERSAREA

# distribution to the new areas...
awk -f $USER_BIN_DIR/persreply.awk < $OUTFILE

# scan the message base
$LOCAL_BIN_DIR/fscan
---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*---

And the awk program in the file persreply.awk reads:

 ,,,
(o o)   File: ~/bin/persreply.awk
---*reiss*------*schnippel*------oOO--(_)--OOo-------*knabber*-----*fetz*---
BEGIN {
    # Touch this if necessary.
    # ATTENTION: watch also for the instruction blocks marked with "####":
    # these too will need adjustment.
    outputfile="/tmp/tmpreply"

    # From here down, I expect only the blocks marked with `###' may need
    # changes.
    borracmd=sprintf("rm -f %s", outputfile)
    replyarea=""
    estado=1
    system(borracmd)
}

# Only the first occurrence in each message is valid.
# Avoid copying it, so it won't reach another system using the same scheme.
/\*\*\* ftoss: copied from /{
    if (estado==1) {
        viejoestado=2
        estado=3
        replyarea=$NF
        ### Modify:
        print "*** pers_area: Copiado desde area PERSONAL" >> "/tmp/tmpreply"
    }
}

/^#To: / {
    user=""
    for (n=2; n <= NF; n++) {
        user=sprintf("%s %s ",user,$n)
    }
}

# Avoid writing the following lines:
/^#Area: / {
    viejoestado=estado
    estado=3
}
/^#@To: / {
    viejoestado=estado
    estado=3
}

# always, except in the cases mentioned above...
estado != 3{
    #####
    # ATTENTION!: modify as above.
    # Sorry for the hack, but I couldn't make it work otherwise.
    print $0 >> "/tmp/tmpreply"
}

# Restore the previous state
estado==3 {
    estado=viejoestado
}

/^###MESSAGE_END###/{
    if (estado==2) {
        close (outputfile)
        comando=sprintf("cat %s | futility addmsg %s",outputfile, replyarea)
        system(comando)
        system(borracmd)
        estado=1
        replyarea=""
    }
}

END {
    system(borracmd)
}
---*reiss*------*schnippel*--------------------------*knabber*-----*fetz*---

Be careful: some lines are visibly cut off, and there is a double hack I
wasn't able to resolve better: instead of using the variable `outputfile'
defined at the top, I had to repeat its value halfway through the script as
a literal constant, because I didn't know how to do it better. I tried to
pass the variable quoted in different styles, but couldn't make it work.
Maybe one of you could give me a hint.

This was tested with several simultaneous messages, and I don't think I
ever accidentally destroyed the ***ftoss... witness line.

Regards until next time. I hope you'll find it useful. I'll be pleased to
get comments, improvements, etc.

Bye, Pablo GOMEZ
[email protected]

--- FEddi 0.9pl5 via BinkleyTerm
 * Origin: Puntomatico Remoto. Linux en Hoyo de Manzanares (2:341/43.40)
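The heart of persreply.awk is a small state machine: every line of a message is copied through except certain header lines, and the accumulated output is flushed at a message-end marker. The idea can be sketched in miniature; the marker and header names mimic the script above, while the input message is invented:

```shell
# Feed a fake exported message through an awk filter that drops
# "#Area:"/"#@To:" header lines and stops at the end marker, mimicking
# the estado (state) handling of persreply.awk in simplified form.
printf '%s\n' \
    '#Area: R34.LINUX' \
    '#To: Manuel Soriano' \
    'Hello there' \
    '###MESSAGE_END###' |
awk '
    /^#Area: /           { next }  # skip, like estado=3 in the original
    /^#@To: /            { next }
    /^###MESSAGE_END###/ { exit }  # message boundary: stop copying
    { print }                      # default: copy the line through
' > /tmp/filtered
```

Only the "#To:" header and the body line survive, which is what the original's estado != 3 rule achieves with its save-and-restore of state.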
Alt+r or Ctrl+r
Alt+n or Ctrl+n

pointlist, otherwise just nothing happens.

Alt+l; then, using the cursor-right key, you'll change to the list of
areas.

Tab key, and see a list similar to the one which appears in the previous
item. If you continue using this key you'll change the references to the
linked messages (as futility link does) by one and the same Re: and by some
yellow codes which appear in the upper right corner of the screen, in the
zone dedicated to the message's header.

Alt+y, followed by f; then Alt+j, and finally Tab; you'll be able to
``navigate'' up to the file. Tab applies to all operations related to files
(insert file, export message to file, etc.).

bt: create a file /usr/bin/bbs containing

  echo -e "\033(U"
  /usr/bin/bt
  echo -e "\033(B"

then make it executable with

  chmod 755 /usr/bin/bbs

and edit /usr/lib/binkley/binkley.cfg, changing the value of the line
BoxType to 3:

  [...]
  BoxType 3
  [...]
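The wrapper approach above can be tried in miniature with any command: a small script brackets the real program between the two escape sequences (select the IBM character set, run the program, switch back). The paths and the echoed stand-in program here are placeholders, not the real bt:

```shell
# Create a toy wrapper like /usr/bin/bbs: emit ESC ( U, run the
# wrapped program, then emit ESC ( B to restore the character set.
cat > /tmp/bbs <<'EOF'
#!/bin/sh
printf '\033(U'
echo "running the wrapped program"
printf '\033(B'
EOF
chmod 755 /tmp/bbs

# Run it, capturing the output so the escape bytes can be inspected
/tmp/bbs > /tmp/bbs.out
```

printf is used instead of echo -e because its octal escapes are portable across shells; the effect is the same.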
Well, that's all; have fun, and we'll see each other on Fido.
Don't forget:
Send me your comments and any modifications you make to this software, but send flames to /dev/null
:-)
Bye,
Manu
The Graphics Muse
This column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems. My first column, in the November issue of Linux Gazette, left something to be desired in both content and graphics. As one reader pointed out, I didn't even follow my own guideline for making background images. Well, it looked good on my system at home. The problem was one of poor time management on my part. I finished up the chapters of a web server book I'm co-authoring at the end of September, so I had more time to work on this month's column. Hopefully the format is cleaner and the content more informative.
And, in the future, I'll try to follow my own guidelines.
Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month.
New version of Pro MovieStudio driver available on Sunsite archives

Wolfgang Koehler has released version 3.0 of his PMS-grabber package to the sunsite archives. This package provides a driver and an X application for grabbing frames from the Pro MovieStudio (aka PMS) adapter by Mediavision. Depending on when it is migrated to its final resting place, the package can be obtained from either ftp://sunsite.unc.edu/pub/Linux/incoming or ftp://sunsite.unc.edu/pub/Linux/apps/video.
ImageMagick Library updated

A new revision of the ImageMagick Library, version 3.7.7, was released this past month.
Netscape Tcl Plugin released

The Tcl Plugin 1.0 was also released this past month. This is a Netscape plugin that allows web page authors to write Tcl-based applets for their web pages.
Digigami looking for testers for MovieScreamer tool

There is now a conversion tool for creating QuickTime videos. Digigami is looking for Unix webmasters to beta-test its MovieScreamer multi-platform, 'Fast-Start' publishing and conversion tool for QuickTime(tm) movies. 'Fast-Start' QuickTime movies are standard 'flattened' movie files that have been 're-organized' for playback over the Internet (or corporate intranets).
Did you know?

There is a font archive, complete with sample renderings of the fonts, at http://www.ora.com/homepages/comp.fonts/ifa/os2cdrom/index.htm. The ftp site for the fonts is ftp://ftp.cdrom.com/pub/os2/fonts/.

A large list of general graphics information is available at ftp://x2ftp.oulu.fi/pub/msdos/programming/. Look under /theory, /math, /faq and a host of other subdirectories. There is a lot to wade through, but just about all of it has some value, including information on shading and object sorting.

The Bare Bones Guide to HTML is a useful resource for people who need to find the correct HTML syntax for HTML 3.0 or Netscape-based web pages.
O'Reilly releases The Linux Multimedia Guide

I recently picked up my copy of The Linux Multimedia Guide by Jeff Tranter. This text covers a wide range of material related to the creation and use of multimedia files under the Linux operating system. The text runs approximately 350 pages, including source code listings for a number of sample multimedia applications which are discussed in one chapter of the book. As usual, O'Reilly provides copies of the source from their ftp site.

When I first found out about this book I thought "Rats, Jeff beat me to it." Much of what Jeff covers is listed in my own Linux Graphics mini-Howto. However, there are quite a number of items not covered by the LGH (as I call it), such as audio, a bit more detail about video formats and tools, and programming considerations for various hardware (CD-ROMs, joysticks, and sound devices), which make the Linux Multimedia Guide a good addition to the O'Reilly family of Unix books. The text is divided into 5 sections:
Section two opens with a discussion of hardware requirements for doing multimedia on Linux systems. Most of this section centers on either the CD-ROM driver or the Linux Sound Driver (now known as OSS). There is also a short chapter on the joystick driver.

The second-longest section, A Survey of Multimedia Applications, covers applications for the various forms of multimedia. There are chapters on sound and music applications, graphics and animation applications, hypermedia applications, and games. The last chapter, on games, seems a bit out of place: there are games implemented as network applications using Java, JavaScript and the new Tcl/Tk plug-in for Netscape, but this chapter doesn't cover them. This section is very similar to the LGH in that the chapters provide the program names and the URLs associated with them (if any). The number of items covered is smaller than in the LGH, but the book's descriptions of the applications are better.

Chapter fourteen opens the fourth section, the Multimedia Programmer's Guide. This section is the longest in the book and covers all the devices discussed earlier. Other chapters in this section cover some of the toolkits available to multimedia developers. One chapter contains three sample applications.

In general I find the Linux Multimedia Guide a good reference text with a moderate amount of developer tutorial. Unlike many of the books available for Linux, this text provides detailed explanations of the various programming interfaces, a useful tool beyond the simple "what is this and where do I get it" that many of the Howtos provide. The only drawback I can see is that, like most other Linux texts, it does not provide a user's perspective on any of the tools listed. If Linux is ever to go beyond a developers-only platform there will need to be detailed user guides for the well-known applications.
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Multimedia Page
More...
I'm a big fan of utilities. When I saw that CND/RHS were distributed with older versions of the InfoZIP zip/unzip suite of archive utilities, I made upgrading them my first Linux project. It turned out to be a little bit more complicated than I thought it would be.
I especially wanted to add in the DES encryption modules to zip/unzip so they would be 100% file compatible with PKWare's archivers for MS-DOS. U.S. State Department rules make it difficult to implement this as an RPM, so I decided to do it as a classic shell script. The end user will have to ftp the source code (especially the DES code module) from the site specified in the script.
Script #1:
#!/bin/sh
#
# undatezip reverses updatezip and restores a Caldera Network Desktop v1.0 or
# Red Hat Software v2.1/v3.0.3 InfoZIP suite installation to its original zip
# v2.01 and unzip v5.12 configuration. This should only be necessary if you
# need to upgrade from a pristine as-installed configuration.
#
#   original versions  >>> updatezip >>>  new versions
#   without encryption <<< undatezip <<<  with encryption
#
# Copyright (C) 1996 by Robert G. "Doc" Savage. Permission is granted to
# distribute this document by electronic means and on CDs provided that it
# is kept entirely in its original format. Permission is also granted to
# print and execute this document for personal use. The republishing of
# this document in part or in whole without the permission of the copyright
# holder by any means other than as noted above is prohibited.
#
# First, the executables
#
cd /usr/bin
rm -f *.encrypt
rm -f funzip unzip unzipsfx zip zipcloak zipgrep zipinfo zipnote zipsplit
mv funzip383.export funzip
mv unzip512.export unzip
mv unzipsfx512.export unzipsfx
mv zip201.export zip
mv zipcloak201.export zipcloak
mv zipinfo202.export zipinfo
mv zipnote201.export zipnote
mv zipsplit201.export zipsplit
#
cd /usr/man/man1
rm -f funzip.1 funzip39.1 unzip.1 unzip52.1 unzipsfx.1 unzipsfx52.1 zip.1 \
   zip21.1 zipinfo.1 zipinfo21.1 zipgrep.1 zipgrep21.1 zipcloak.1 zipnote.1 \
   zipsplit.1
mv funzip383.1 funzip.1
mv unzip512.1 unzip.1
mv unzipsfx512.1 unzipsfx.1
mv zip201.1 zip.1
mv zipinfo202.1 zipinfo.1
#
cd
hash -r
#
# That's it...
Script #2:
#!/bin/sh
#
# updatezip is a shell script for Caldera Network Desktop v1.0 or Red Hat
# Software's v2.1/v3.0.3 distributions to upgrade the InfoZIP utilities unzip
# from v5.12 to v5.2, and zip from v2.01 to v2.1. It also adds the zcrypt DES
# encryption module not provided in the RHS (or any other) distribution.
#
# To undo this upgrade and restore a CND v1.0 or RHS v2.1/v3.0.3 installation
# to its original zip/unzip configuration, run the companion file undatezip.
#
#   original versions  >>> updatezip >>>  new versions
#   without encryption <<< undatezip <<<  with encryption
#
# Copyright (C) 1996 by Robert G. "Doc" Savage. Permission is granted to
# distribute this document by electronic means and on CDs provided that it
# is kept entirely in its original format. Permission is also granted to
# print and execute this document for personal use. The republishing of
# this document in part or in whole without the permission of the copyright
# holder by any means other than as noted above is prohibited.
#
# It is divided into four sections:
#
#   Section 1  create the working directory.
#   Section 2  compile the unzip and zip executables.
#   Section 3  replace the existing versions of the zip/unzip suite.
#   Section 4  clean up.
#
# Instructions
# ==========================================================
#
# Download these files from :
#
#   unzip52.zip
#   zcrypt26.zip
#   zip21.zip
#
# Copy them and updatezip to a safe directory (suggest root's home directory
# /root). Use 'chmod 700 updatezip' to make it executable, then run it.
# Execution time is slightly over four minutes on a DX4/100 system with 28M
# of RAM, a 32-bit EISA host adapter, and an older SCSI-1 (CCS) hard drive.
#
# IMPORTANT
# ---------
# Caldera Network Desktop 1.0, when first installed, is missing an important
# file required to compile certain programs. The following lines create (or
# recreate) this missing file. This script will fail without it.
#
cd /usr/src/linux
make include/linux/version.h
cd
#
# Section 1. Create the working directory and extract all required files.
#
mkdir /scratch
cp unzip52.zip /scratch
cp zcrypt26.zip /scratch
cp zip21.zip /scratch
cd /scratch
#
# Section 2. Compile unzip first, then zip
#
unzip unzip52
unzip -o zcrypt26       # -o forces overwrite of stub files
cp -f ./unix/Makefile .
make generic
rm -f *.o               # clean up before the next compile round
unzip -o zip21
unzip -o zcrypt26
cp -f ./unix/Makefile .
make generic_gcc
#
# Section 3. Install new versions of the zip/unzip suite. Preserve the
#    existing executables and man files first. Use soft links to point
#    to the new versions.
#
cd /usr/bin
mv funzip funzip383.export
mv unzip unzip512.export
mv unzipsfx unzipsfx512.export
mv zip zip201.export
mv zipcloak zipcloak201.export
mv zipinfo zipinfo202.export
mv zipnote zipnote201.export
mv zipsplit zipsplit201.export
#
cd /usr/man/man1
mv funzip.1 funzip383.1
mv unzip.1 unzip512.1
mv unzipsfx.1 unzipsfx512.1
mv zip.1 zip201.1       # note there is no zipgrep.1 in this distribution
mv zipinfo.1 zipinfo202.1
#
cd /usr/bin
mv /scratch/funzip funzip39.encrypt
mv /scratch/unzip unzip52.encrypt
mv /scratch/unzipsfx unzipsfx52.encrypt
mv /scratch/zip zip21.encrypt
mv /scratch/zipcloak zipcloak21.encrypt
mv /scratch/unix/zipgrep zipgrep21.encrypt
mv /scratch/zipnote zipnote21.encrypt
mv /scratch/zipsplit zipsplit21.encrypt
#
cd /usr/man/man1
mv /scratch/unix/funzip.1 funzip39.1
mv /scratch/unix/unzip.1 unzip52.1
mv /scratch/unix/unzipsfx.1 unzipsfx52.1
mv /scratch/man/zip.1 zip21.1
mv /scratch/man/zipgrep.1 zipgrep21.1
mv /scratch/unix/zipinfo.1 zipinfo21.1
#
# Now establish the soft links
#
ln -s funzip39.1 funzip.1
ln -s unzip52.1 unzip.1
ln -s unzipsfx52.1 unzipsfx.1
ln -s zip21.1 zip.1
ln -s zip.1 zipcloak.1          # remember, zip.1 is
ln -s zipgrep21.1 zipgrep.1
ln -s zipinfo21.1 zipinfo.1
ln -s zip.1 zipnote.1           # already soft-linked
ln -s zip.1 zipsplit.1          # to zip21.1
#
cd /usr/bin
ln -s funzip39.encrypt funzip
ln -s unzip52.encrypt unzip
ln -s unzipsfx52.encrypt unzipsfx
ln -s zip21.encrypt zip
ln -s zipcloak21.encrypt zipcloak
ln -s zipgrep21.encrypt zipgrep
ln -s unzip52.encrypt zipinfo   # a special link
ln -s zipnote21.encrypt zipnote
ln -s zipsplit21.encrypt zipsplit
#
# Section 4. Clean up the leftovers.
#
cd                  # go to your home directory
rm -rf /scratch     # nothing worth saving in the scratch directory
hash -r             # re-sync the paths
#
# That's it...
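The .export/.encrypt naming plus soft links is what lets updatezip and undatezip swap whole tool suites back and forth without losing either version. The mechanism in miniature, with scratch placeholder paths and toy "versions" rather than the real binaries:

```shell
# Two "versions" of a tool, kept side by side under distinct names,
# as updatezip does with zip201.export and zip21.encrypt.
mkdir -p /tmp/swapdemo && cd /tmp/swapdemo
printf '#!/bin/sh\necho old version\n' > tool201.export
printf '#!/bin/sh\necho new version\n' > tool21.encrypt
chmod 755 tool201.export tool21.encrypt

# Point the canonical name at the new version (updatezip style)
ln -s tool21.encrypt tool
./tool              # prints "new version"

# Reverting (undatezip style) is just re-pointing the link
rm tool
ln -s tool201.export tool
./tool              # prints "old version"
```

Because both files stay on disk, switching directions never requires recompiling anything, and `hash -r` afterwards makes the shell forget any cached path to the old name.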
--Doc Savage, Sr. Network Engineer, I-NET, Inc.
John E. Davis of the Center for Space Research at MIT has written an interpreted programming language called Slang, which has a C-like syntax. He has written several programs using this language, including the slrn newsreader and the emacs-like Jed editor. Lately a few other programmers have begun to make use of Slang; one reason for this is that Slang allows the use of color in a text-mode program which will display equally well in an rxvt window under X.
Applications which are linked with the Slang library always seem to be text-mode programs. Typically Linux text-mode applications use the ncurses library to handle screen display. Ncurses enables the use of menus, a certain amount of color, and a more complex screen layout. These traits don't always translate well into an X environment, i.e. running in an xterm or rxvt window. If an application is linked with the Slang library instead, its behavior is more consistent between console and X sessions, especially when started from an rxvt window.
I get the impression that the xterm terminal emulator is used more commonly than rxvt, though this may be due more to tradition than innate superiority. Rxvt has been revised several times recently and in its current form (version 2.19) has much to recommend it. One feature which I appreciate is that its memory usage is much lower than that of xterm. Rxvt handles color requests well, both background/foreground specifications and extension-specific colorization such as "color-ls". The most recent version even allows the use of Xpm images as background, similar to a web page, though as with a web page a background image would have to be carefully chosen so as not to obscure the text.
Some xterm variants make use of color, but some don't. I find the plenitude of xterms and color-xterms rather confusing; it's hard to tell just which ones you have, and they vary from distribution to distribution. Then there is xterm's Tektronix compatibility, which I've never seen a use for. Reading the xterm man page I get the impression that xterm was developed for older mainframe-and-terminal systems.
I'm sure that as the benefits of Slang become more widely known we shall see more text-mode applications with Slang support included. There may very well be others out there besides those listed above; these are just the ones I've run across.
Precompiled binaries for slrn, lynx, and the Jed editor (with Slang statically linked, I assume) are available at ftp://sunsite.unc.edu and its mirrors. I used these for some time, but recently I obtained the source for Slang and compiled a shared library. The advantage of this approach is that you can compile binaries which dynamically link the Slang library at runtime. Your executables will be smaller, and one shared library can serve any number of Slang-using applications. Another advantage of obtaining the source distributions is that you'll end up with more documentation.
John E. Davis's creations (slrn, Jed, and the Slang sources) are available at their MIT home site. The most recent versions, as well as beta versions, can be found there.
This Mexican site is the source for the most recent versions of the Midnight Commander, as well as rxvt.
Beta versions (which seem stable to me) of Michael Elkins' Mutt mail program are available from this FTP site. Maybe you can get it to compile with Slang!
Lynx binaries with Slang support can be found at sunsite and its mirrors.
The source for the latest and greatest of the Dosemu releases can be found at the tsx-11 FTP site. (Version 0.64.1 was released in November).
If you're like me and work at the console often, you'll find it's nice to have applications available which work well (and look good!) in an X session too. I think you will be pleased with the high quality and low memory usage of the above-listed apps.
I've been writing these short reviews and other articles for the Gazette since issue number seven. Even with the short lead time inherent in a WWW-based publication it seems like new releases and URL changes often happen right after I submit an article. The status of several of the programs has changed since I wrote of them, so I thought I'd take this opportunity to list some of these changes.
By the way, I appreciate all of the email I've received in response to my articles; feel free to write if you have any comments or criticism.
It is a common practice to use the rescue/boot disks supplied with a Linux distribution if filesystem problems occur and you need to boot from a floppy. Typically these disks consist of a bootable compressed kernel on disk 1, with the second disk containing basic maintenance tools such as fsck.
On the few occasions I've had to boot from such disks the transition from my familiar Linux environment to the bare-essentials, limited boot-disk system (constrained by the size of a floppy disk) has been disconcerting, to say the least. Typically if an editor is available it's a small one with which I've never worked, and many of the tools I'm used to having around aren't there.
Recently a suite of customizable Perl scripts has been refined which makes the creation of boot-disks from scratch easier. YARD (for Yet Another Rescue Disk) makes use of (and requires) the optional Linux kernel compressed-ramdisk support, which allows you to load a compressed disk image into memory at boot-up. Paul Gortmaker has written a lucid explanation of the new ramdisk options in the file "ramdisk.txt", which is in the Documentation subdirectory of recent kernel source releases.
The Yard distribution contains two files which need to be edited as a first step. Config.pl is a Perl script which sets such preferences as the type of floppy you're using and whether you are making a single boot-disk or a double. The Bootdisk_Contents file contains a list of all of the files and utilities you would like on your disk(s). This file needs to be edited heavily, as it includes much more than will fit on even two disks. Anything you like can be included in this file.
The next step is to run the Perl script make_root_fs. This script gathers up all of the files you've specified (as well as all libraries upon which they depend) and constructs a root filesystem on whichever device was specified in the Config.pl script; a ramdisk works well. The new filesystem is then compressed with gzip into a single file in your /tmp directory. Once this process is complete yet another Perl script, check_root_fs, is run, which makes sure that all needed libraries, etc. are present.
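The compress-then-store step can be seen in isolation: any file (here just a stand-in for the root filesystem image, with invented names under /tmp) shrinks under gzip and comes back byte-identical with zcat, which is all the boot-time decompression relies on:

```shell
# Create a highly compressible stand-in for a root filesystem image
yes "root filesystem contents" | head -n 1000 > /tmp/rootfs.img

# Compress it into a single file, as make_root_fs does with the real image
gzip -9 -c /tmp/rootfs.img > /tmp/rootfs.img.gz

# Verify the round trip: decompressing reproduces the original exactly
zcat /tmp/rootfs.img.gz | cmp - /tmp/rootfs.img && echo "image intact"
```

For a real rescue disk the payoff is the compression ratio: the kernel decompresses the image into a ramdisk at boot, so the floppy only has to hold the gzipped file.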
After all of this preparation you're ready to actually write the rescue disks; here's where you find out if you've attempted to cram too much into them. The write_rescue_disk script first copies your compressed kernel (vmlinuz) onto the disk (the first disk if it's a two-disk set) and then copies the compressed filesystem image you've constructed onto whatever is left. It took me several tries to pare down what I initially wanted on the disks to what would actually fit. The virtue of the Yard system is that all you need to do to try again is re-edit the Bootdisk_Contents file and re-make the filesystem. Yard also writes log-files which can be helpful in diagnosing problems.
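The trial-and-error of paring the file list down can be shortened by totalling the candidate files' sizes first and comparing against a 1.44M floppy's 1440 one-kilobyte blocks. A rough sketch with invented stand-in files (it ignores filesystem overhead and, for a one-disk set, the space the kernel shares with the image):

```shell
# Invent a candidate file set for the rescue disk
mkdir -p /tmp/rescue
dd if=/dev/zero of=/tmp/rescue/vmlinuz.stub bs=1024 count=300 2>/dev/null
dd if=/dev/zero of=/tmp/rescue/rootfs.gz    bs=1024 count=900 2>/dev/null

# Total size in 1K blocks versus a 1.44M floppy's capacity
total=$(du -sk /tmp/rescue | cut -f1)
if [ "$total" -le 1440 ]; then
    echo "fits: ${total}K of 1440K"
else
    echo "too big: ${total}K of 1440K"
fi
```

Running a check like this before write_rescue_disk saves a pointless write-and-fail cycle when the total is obviously over budget.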
Modular kernels are great, but if you boot a kernel image and a capability you need is a demand-loaded module you're out of luck. Yard sidesteps this potential problem by including your modules directory in the compressed filesystem, as well as making sure that the kernel-daemon /sbin/kerneld is started at boot-up.
The result of this process is a customized miniature Linux system. It's a nice feeling to know that if your filesystem is in shambles due to a power outage or a beta program run amuck that you at least have familiar tools available.
Once you've managed to edit a set of Yard configuration files which will successfully write working rescue disks, consider saving copies of these files in case the disks become corrupted. I just replaced the supplied files with my edited copies, then tarred and gzipped the Yard distribution and saved it to floppy.
Yard gives you the option of using or not using Lilo to boot your disks. I first tried Yard with Lilo, as Lilo has always worked well for me. It wouldn't work with my Yard disks, so I disabled that option. I'm using an old version of Lilo, left over from my original Slackware 3.00 Linux installation, which may explain this failure. Yard works fine without it. Lilo might be necessary if you need to include parameters in order to boot your system, such as those required for some SCSI hard disks.
Yard is available from the Yard home-page, as well as from the sunsite archive and its mirrors. It's well worth trying if you want the ultimate in control over just what is included on your rescue disks.
This show was actually billed as Unix Expo Plus I^2--a nod to the increasing interest in all things NT and Internet. In fact, in 1997 the show will no longer be called Unix Expo at all, it will be billed as IT Forum 97, Internet and Technology Forum.
Despite a preponderance of Internet and NT related vendors and seminars, (and the ubiquitous presence of Bill Gates), the show went very well for Linux Journal and SSC. Various disasters struck, notably the loss of half of our booth display by UPS, but all in all it was quite successful. With the exception of Caldera, all of us Linux-types were stuck off in the corner of the show room, but we were still swamped by happy Linux and Unix users who had specifically made the trek in support of their favorite OS. The show in general had a lower attendance than was expected by show management, but the Linux contingent were doing quite nicely anyway.
2,500 Linux Journal and 1,600 WEBsmith magazines were given away. Many people subscribed right there at the show, many others went away clutching their SSC Unix References or books with dazed-but-happy expressions. Those of us working the booth made lots of contacts, and I must say it was a great experience meeting subscribers and customers who share such enthusiasm for Unix and Linux. Those NT developers should take note: Unix users are a dedicated bunch.
New York was a blast, although I had to laugh when the locals got panicky when a 'Nor-Easter' blew through. I had to say, "Come on, guys. It's just raining." They should come to Seattle some time.
--Lydia Kinata, SSC Products Specialist

On November 11 through 13, Carlie Fairchild and I attended the DECUS show in Anaheim, California. While DECUS has generally been a good show for SSC, this show was small and we were the only Linux vendor attending. The best guess as to why: with UseLinux coming up in the same place in January, it was an easy show for people--vendors as well as Linux-heads--to skip.
There was a series of talks on Linux presented by Jon "maddog" Hall and myself. Attendance was between 20 and 50, and I think we managed to make some converts.
Carlie had also arranged for me to speak to the local Linux user's group on Wednesday night. About 25 attended (including "maddog"). I presented a talk called Looking at Linux. Much of this talk focused on the commercial viability of Linux, which was an issue many of the group's members had been attempting to address on their own. In the talk I stressed four criteria for commercial viability:
--Phil Hughes, Publisher Linux Journal
The first week of November, I went to Washington, D.C. to attend Open Systems World/FedUNIX. While several dedicated Linux fans came by the booth, most of the people I talked to knew very little about Linux. Some were just cruising the booths, collecting whatever anyone was giving away, but we don't mind--the literature they picked up may spark some real interest later on. (One show attendee, in addition to taking a few of whatever we had also took the neat twirly thing we'd acquired from another exhibitor's booth.)
Linux vendors in attendance were Yggdrasil Computing, InfoMagic, and Red Hat Software, giving me a chance to meet Adam Richter of Yggdrasil, Bob Young and Lisa Sullivan of Red Hat, and Henry Pierce and Greg Deeds of InfoMagic.
Adding credence to Linux's worth in the minds of those with no free software experience was Digital Equipment's display of a DEC Alpha running Linux and Maddog's enthusiasm for the operating system. (By the time I got over to actually see the machine, someone was demonstrating Quake on it. I sat down and showed him a couple things I remembered from playing Doom--it was kind of surreal to be sitting amidst all the professional frumpery of the show while virtually running around swinging a very large and lethal axe.)
Jeff Leyland of Wolfram Research, the makers of Mathematica, spoke about Wolfram switching to Linux as their development platform. There were other speakers I should have made time to hear, but I got caught up talking to people coming by our booth and asking about Linux. I know that after a few talks, the Linux booths would get flooded with people excited to check it out.
I also heard that Ernst & Young--well known for their accounting services, among other things--apparently use Red Hat Linux in-house and have asked IBM, with whom they contract for computer services, to support their Linux machines. (If you're from Ernst & Young, please send me some mail. We'd like to hear about how you're using Linux.)
Adam Richter predicted a new version of Yggdrasil's Plug-and-Play Linux in the first quarter of 1997. At OSW they had pressings of their new 8-CD Internet Archives set, which includes several distributions, including a couple I hadn't heard of before.
I would've felt cut off from the world (yes, even in D.C. on election night) if it hadn't been for David Lescher, who set me up with some dial-in PPP access for my laptop, and David Niemi, who made some necessary tweaks to my chat script. I'm also grateful to Mark Komarinski, who put together a Linux talk on very short notice when I found I was dangerously close to having no time whatsoever to prepare one myself.
The Santa Cruz Operation was there giving away copies of their Free SCO OpenServer. Someone who'd just acquired one of those gems asked me why she'd be interested in Linux if she had OpenServer; I noted its limitations and handed her a copy of Linux Journal, hoping to plant a seed. Some attendees were being less subtle, affixing prominently to their big blue IBM literature boxes the Linux bumper stickers we were giving away.
--Gary Moore, Editor of Linux Journal
This article is basically a summary of my experience setting up a web server under Linux. I will start with where and how to obtain Apache, then move on to installation, configuration, and finally how to get things running. This article is written from the point of view of my system, which is a Red Hat 4.0 system with v2.0.25 of the kernel. However, it should apply just as well to a "generic" installation or a similar setup.
The obvious place to get the latest version of Apache is the Apache web site: http://www.apache.org. The source distribution file is apache_1.1.1.tar.gz, while the Linux ELF binaries are in apache_1.1-linux-ELF.tar.gz. Grab whichever you find necessary...
If you are running Red Hat Linux 4.0 like I am, during the installation process you are allowed to select whether or not you want to install a web server. If you do, Red Hat 4.0 includes the latest Apache and installs everything automatically with a default configuration. This default configuration even RUNS correctly without any modifications! However, even in this case, please read my notes and preferences regarding installation in the next section.
Typically, unless you need to add special modules or features, the binary distribution or the default Red Hat installation should be fine. However, let's say you wanted to run Apache as a proxy server. In this case, you would need the source so you can compile the proxy module as part of the binary.
(Note: I have heard rumors that the binary included with Red Hat 4.0 has some bugs. I have yet to encounter any myself, so take that rumor with a big grain of salt.)
I'm not going to cover compiling Apache since it's actually a fairly painless process and pretty well documented. Given that, let's move on to actual installation...
Personally, I like to group all the web server files together in a centralized location. If you are installing this manually, then this is something you can do from the outset, and I highly suggest doing this since it will reduce administration headaches.
If you had Apache installed automatically as part of the Red Hat installation procedure, then things will NOT be centralized! In fact, I thought the file placement scheme was one of the most confusing I've ever encountered. Here's what the Red Hat installation does:
web server binaries:  /usr/sbin/httpd, /usr/sbin/httpd_monitor
config files:         /etc/httpd/conf/*
log files:            /etc/httpd/logs/*
web server root (contains cgi, icons/images, and html files):  /home/httpd/*
I found this to be really disorganized, so I ended up putting mostly everything under one directory (I left the binaries in /usr/sbin):
mkdir /httpd
mv /etc/httpd/conf /etc/httpd/logs /home/httpd/* /httpd
rmdir /home/httpd
You should end up with:
/httpd/
    /cgi-bin
    /cgi-src
    /conf
    /html
    /icons
    /logs
And then, to preserve the original Red Hat file locations:
ln -s /httpd /home/httpd
ln -s /httpd/conf /etc/httpd/conf
ln -s /httpd/logs /etc/httpd/logs
Finally, I added this link since I felt that it made more sense:
ln -s /httpd/logs /var/log/httpd
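If you're nervous about shuffling a live installation around, the whole mv/ln sequence can be rehearsed first in a throwaway directory. The sketch below is just that--a rehearsal under a made-up /tmp/fakeroot prefix standing in for the real root filesystem, with a skeleton of the Red Hat layout created by hand:

```shell
# Rehearse the reorganization under a scratch prefix before touching
# the real tree. /tmp/fakeroot is an arbitrary stand-in for /.
R=/tmp/fakeroot

# Mimic the Red Hat layout...
mkdir -p $R/etc/httpd/conf $R/etc/httpd/logs $R/home/httpd/html

# ...then centralize everything under /httpd, exactly as described above
mkdir -p $R/httpd
mv $R/etc/httpd/conf $R/etc/httpd/logs $R/home/httpd/* $R/httpd
rmdir $R/home/httpd

# Recreate the original locations as symbolic links
ln -s $R/httpd $R/home/httpd
ln -s $R/httpd/conf $R/etc/httpd/conf
ln -s $R/httpd/logs $R/etc/httpd/logs

ls $R/httpd
```

Once you're satisfied the moves and links do what you expect, repeat the same commands without the $R prefix on the real tree (as root, with httpd stopped).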
If you are installing and compiling Apache manually, you may want to have the original source files also located under /httpd (or whichever directory you have).
Apache has three main configuration files: access.conf, httpd.conf, and srm.conf. If you are running Red Hat 4.0, these files will already be set with the correct directory paths. If you centralized the locations of all these files but made the symbolic links as I mentioned above, things will still be fine, since the symbolic links preserve where Red Hat installed everything.
If you are doing a "generic" installation or have some other setup, then you will need to do the following:
In access.conf, change/update these directory entries:
<Directory /httpd/html>
<Directory /httpd/cgi-bin>
In httpd.conf:
ServerRoot /httpd
In srm.conf:
DocumentRoot /httpd/html
Alias /icons/ /httpd/icons/
ScriptAlias /cgi-bin/ /httpd/cgi-bin/
Essentially, these are the necessary directives in the config files that need to be updated with the new "centralized" organization.
For further configuration options, I will have to give the standard statement, "Please refer to the docs." :)
To make a long story short, you simply need to execute the binary "httpd". Typically, this is done when the system starts up, in one of the rc files.
Red Hat 4.0 has more of a System V-ish startup style. In /etc/rc.d/init.d resides httpd.init, which is the script used to start and stop httpd. You can also execute this by hand if you find the need.
For other systems (or a manual install), I suggest starting httpd after most other services have started (i.e.: put it in rc.local). A simple line such as
/usr/sbin/httpd &
will suffice.
Obviously, it must start after tcp/ip networking has been started. :)
Needless to say, I didn't cover actual configuration options and how to manage your web server. The configuration options I leave to the Apache manual. Managing the web server itself depends on what kind of web site you want to run. My own system does not run a "real" web site; in other words, I don't advertise it because it serves no real purpose other than my own experimentation. However, you are more than welcome to take a look at it, since it does have a bunch of Linux-related links on it. The URL can be found at the end of this article.
Other than that, I would love to hear any comments and/or criticisms you may have about what I wrote. Originally, my plan was to write a monthly article about running/managing a web server under Linux. However, short of actually writing a manual on configuring Apache (for which the Apache documentation is already a good reference), I don't know what else to write about.
However, one idea for a monthly thing that might be good is to collect hints, tricks, and other useful information related to running a web server under Linux. Think of it more as a "2 cent tips for a linux web server." If anyone is interested in this, please drop me a note!
You've made it to the weekend and things have finally slowed down. You crawl outa bed, bag the shave 'n shower 'cause it's Saturday, grab that much needed cup of caffeine (your favorite alkaloid), and shuffle down the hall to the den. It's time to fire up the Linux box, break out the trusty 'ol Snap-On's, pop the hood, jack 'er up, and do a bit of overhauling! |
Well, I'm afraid that the 'ol Weekend Mechanic is going to be a short one this month. I've got six classes this fall and have finally reached the point in the semester where I guess they think we're smart enough to start actually doing things! And so, we're all doing things... LOTS of things, as a matter of fact. That hallowed barometer of academic industriousness, Euclid's Little Known "Shave-To-Face" Ratio, is falling predictably and the No-Doze blood titers are reaching therapeutic levels. As they say in Tennessee, we're all starting to look like a bunch of rugs...
"...walked all over, drug outside, and beat with a stick!"
Anyway, we're all surviving. All of you out there in Academia Land know what I'm talking about; all of you who've run the gauntlet already and have achieved "A Real Life" will smile knowingly. (And will smile to yourselves, knowing that there is no such thing as "A Real Life")
I've been eating a generous portion of Humble Pie here recently after last month's Tar Tricks and Find faux pas. I sincerely apologize for any misinformation, and I gratefully thank all of you who took the time to drop a note and provide more accurate information. I've gotten permission from a number of writers to include their letters, which can be read below. They add a good deal more light to the subject!
Thanks guys!
Also, I did manage to eke out a bit of "recreational programming" time and hacked together a prototype tar archive viewer. This is still pretty alpha stuff, but it appears to be relatively stable and I've actually been using it. Any of you who are interested in Tcl/Tk might enjoy hacking away at this. I'll continue to tinker around with this and, by January or so, just might have a reasonably working version for all of you to play around with. If you're interested, have a look at it below.
Anyway, hope you enjoy!
John
Saturday, 23 November 1996
As I mentioned above, I've been eating a good deal of Humble Pie in the past couple weeks after last month's articles on using tar and find. Actually, everyone who wrote was very gracious AND took the time to provide more accurate information. I was impressed by the spirit in which this was done: no one was vindictive, no one was demeaning (although there were a few "raised eyebrow" type letters :-).
Anyway, I owe a great debt to all of you who took the time to write. I REALLY appreciate it. And to the school teacher from Des Moines, I'm almost done...
I will always RTFM
I will always RTFM
I will always RTFM
I will always RTFM
I will always RTFM
I will always RTFM...
Thanks folks. Here's the letters...
Date: Fri, 01 Nov 1996 15:32:09 +1100
From: Paul Russell <[email protected]>
To: [email protected]
Subject: Linux Weekend Mechanic: November Edition of the Linux Gazette (#11)
Hi John,
Just reading Linux gazette for the first time, and stumbled upon your Weekend Mechanic page. I'm sure you're going to get more mail about this, but I read with some astonishment your "More tar tricks" section.
My Linux box is currently about 1000kms away, but I believe that "tar -tvzf file.tar.gz |tr -s ' ' |cut -d ' ' -f8 |less" can be replaced with "tar tzf file.tar.gz |less".
I liked it though. If you want a useful pipes example, how about a "Oops! I untarred in the wrong place and I want to clean up!" example:
tar tzf file.tar.gz | xargs rm -f 2>/dev/null
tar tzf file.tar.gz | sed 's:[^/]*$::' | sort -ru | xargs rmdir 2>/dev/null
Analysis and improvement is left as an exercise for the reader. 8-)
Enjoy,
Paul.
--
[email protected] "Engineer? So you drive trains?" Lies, damned lies, and out-of-date documentation. Currently contracted to Telstra, Sydney.
Date: Tue, 05 Nov 1996 11:19:09 -0500
From: "James V. Di Toro III" <[email protected]>
To: [email protected]
Subject: LG #11 Weekend Mech.
Just a few nits on a couple of the things in this piece.
tar ...
Well it sure showed off some neat features of some utilities, but what you did with that first line can be solved by omitting one character from the tar options.
tar -tzf file.tar.gz | less == tar -tvzf file.tar.gz | tr -s ' ' | cut -d ' ' -f8 | less
which vs. type ...
which will also give you similar results on aliases and built-ins:
which ls
ls: aliased to /bin/ls $LS_OPTIONS
which complete
complete: shell built-in command.
This is with tcsh 6.05, YMMV.
--
James V. Di Toro III              | "I've got a bad feeling about This"
System Administrator, GATS, Inc.  |                            -various
[email protected]
W: (757) 865 - 7491
(Just a quick note about this: James is right, 'which' works as he wrote above if you're using tcsh [because it is a shell built-in??]. Those of you running BASH and using the 'which' executable will find that the executable does not return information on aliases, shell functions, and shell built-ins. I wrote James back after trying this and he concurred.)
Date: Wed, 06 Nov 1996 11:14:51 +1100
From: Keith Owens <[email protected]>
To: [email protected]
Subject: More on locate and update
Saw your note on locate/find in LJ #11. According to my manual page, "locate lynx" is equivalent to "locate '*lynx*'", locate does automatic insertion of leading and trailing '*' if the pattern contains no metacharacters. "locate 'lynx*'" will only find files that start with lynx (i.e. no leading directory or /).
I find the locate command and its associated updatedb command to be very useful for indexing ftp lists and cdroms. Most sites and cdroms have a list of the files in one form or another but they are not easily searched. Some are in find format (directory included in file name), others in ls -lR (directory is separate). I created updatedb.gen (from updatedb) to read a file list and build a locate style database, locate.gen then searches that database.
For example, go to sunsite, /pub/Linux, download 00-find.Linux.gz, then run
updatedb.gen sunsite 00-find.Linux.gz
which builds /var/spool/locate/locatedb.sunsite. "locate.gen sunsite file" does an instant search of sunsite for the file, obviously you have to fetch a fresh copy of the listing occasionally.
Instead of searching several InfoMagic cdroms for a file, mount the first one and
updatedb.gen im /cdrom/00-find
"locate.gen im file" then does a very fast search of the entire set of InfoMagic cdroms and can be done without mounting any cdroms.
updatedb.gen and locate.gen are attached. updatedb.gen works out which format the input file is in and selects the field(s) containing the filename.
--
*** O. C. Software -- Consulting Services for Storage Management, ***
*** Disaster Recovery, Security, TCP/IP, Intranets and Internet  ***
*** for mainframes (IBM, Fujitsu, Hitachi) and Unix.             ***
(The idea of using 'locate' on a CD collection sounds like a GREAT idea. I've not yet had the time to try it, but plan to give this little gem a whirl!)
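Since updatedb.gen and locate.gen arrived as attachments and aren't reproduced here, a rough stand-in for modest-sized listings is a plain grep over the raw file list. The sketch below uses a made-up two-line listing; real sunsite listings are of course far bigger, and grep won't match a locate database for speed, but it needs no preprocessing at all:

```shell
# A tiny stand-in for a downloaded ftp file listing
mkdir -p /tmp/ftplists
printf '%s\n' \
    'pub/Linux/apps/editors/vim-4.5.tar.gz' \
    'pub/Linux/system/boot/lilo-19.tar.gz' > /tmp/ftplists/sunsite.list

# Search it much the way locate.gen would search its database
grep lilo /tmp/ftplists/sunsite.list
```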
Date: Fri, 08 Nov 1996 09:21:52 -0600 (CST)
From: John Benavides <[email protected]>
To: [email protected]
Subject: How to use "-name" option on find command
In your column, Weekend Mechanic, in the "Linux Gazette" (Oct 1996 issue) on your Web page at: ../issue11/wkndmech.html
You say:
> The way that it should work is that you give locate a filename pattern
> which it searches for. Such as:
>
> locate lynx*
>
> However, when I tried this on my system, it simply returned nothing.
> Using locate lynx worked like a charm.
>
> Got me.
Whenever you use any command with arguments that need to contain wild card characters, don't forget to quote those wild card characters from the shell. I teach this to my students in my introductory UNIX class. Remember the shell gets first crack at the wild card. So the shell will try to match "lynx*" with any file names in your local directory.
Use the echo command to see what the shell expands your command line to for the buggy command:
echo locate lynx*
This will give you an idea of what the "locate" command is really searching for.
Any one of the three commands below will prevent the shell from processing your wild card pattern.
locate "lynx*"
locate 'lynx*'
locate lynx\*
The same is true for your other example with find:
find / -name "lynx*" -print
find / -name 'lynx*' -print
find / -name lynx\* -print
Regards,
John
--
John Benavides          | Hewlett Packard - CxD
3000 Waterview Parkway  | e-mail: [email protected]
Richardson, TX 75080    | (972) 497-4771  Fax: (972) 497-4245
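(John's echo trick is easy to demonstrate for yourself. In the sketch below, a scratch directory holds a single file matching lynx*; watch how the unquoted pattern is rewritten by the shell before locate--or any other command--would ever see it. --Editor)

```shell
# Set up a scratch directory containing one file that matches lynx*
mkdir -p /tmp/globdemo
cd /tmp/globdemo
touch lynx_2-4-2.tar.gz

echo locate lynx*      # -> locate lynx_2-4-2.tar.gz  (the shell expanded it)
echo locate 'lynx*'    # -> locate lynx*              (the pattern survives)
```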
Date: Fri, 08 Nov 1996 16:34:23 +0100
From: Robert Budzynski <[email protected]>
To: [email protected]
Subject: why locate didn't work as expected...
Hi John,
Why didn't 'locate lynx*' work for you ? Well, here's what happens when you issue that command: first, bash (or any standard shell) attempts to match the pattern 'lynx*' against names of files present in the _current_ directory. If it finds any that match, they are _all_ substituted into the command line and passed on as arguments to locate. This sure isn't what you want... If (as was apparently the case) none are found, the pattern is left unexpanded... so why didn't it work? Well, to quote the man page:
If a pattern is a plain string -- it contains no metacharacters -- locate displays all file names in the database that contain that string anywhere. If a pattern does contain metacharacters, locate only displays file names that match the pattern exactly. As a result, patterns that contain metacharacters should usually begin with a `*', and will most often end with one as well. The exceptions are patterns that are intended to explicitly match the beginning or end of a file name.
So, there's your answer! 'Match the pattern exactly' means here that the fully qualified pathname (starting with a /) must match. The other lesson here may be summarized with another quote from the locate(1) man page:
Patterns that contain metacharacters should be quoted to protect them from expansion by the shell.
This applies as well to patterns passed to 'find', i.e.
$ find /usr/local -name 'lynx*' -print
is the 'politically correct' command line to use.
Merry Linuxing!
--
Robert J. Budzynski
Institute of Theoretical Physics
Warsaw University, Warsaw, Poland
Date: Sat, 09 Nov 1996 02:11:00 +0000
From: Phil Bevan <[email protected]>
To: [email protected]
Subject: LG issue 11 - find
Hello John,
Glad to see you've not abandoned the Gazette totally. One thing, though, on your article about 'find'. I've noticed in the past, when using the find command with wild card characters such as '*' (as in your example find /usr/local -name lynx* -print), that it has not found all the files. I discovered from one of the Linux newsgroups that the shell tries to expand lynx* first, and it is possible that find will not search all the directories. To stop bash from expanding the filename, enclose it in single quotes, as below:
find /usr/local -name 'lynx*' -print
Bet you I'm not the first to point this out :)
Regards
Phil
--
This Sig intentionally left blank
Again, thanks to EVERYONE that took the time to write! I know that y'all are busy and I appreciate corrections, clarifications, and suggestions.
John
I'm going to apologize at the outset -- it's Saturday and I've still got a small mountain of work to do for the upcoming week and so I just don't have the time to write an awful lot about this. I'll try to summarize the highlights of what I was attempting to do and what actually worked.
To recap from last month, I'd been trying to find a way to get a simple listing of all the files in a tar archive. As was pointed out, this can be done using:
tar -tf file.tar
tar -tzf file.tar.gz

depending on whether the file is a tar or tar+gzip file. (I'm assuming that you're using GNU tar, BTW; not all implementations of tar support the '-z' option, which uses 'gzip' to either compress or uncompress an archive.)
The purpose for doing this was to allow you to get a tar listing and then use this as an argument to tar to print that file to standard output. For example, if your tar archive looked like:
xtoolwait-1.0/ xtoolwait-1.0/xtoolwait.c xtoolwait-1.0/Imakefile xtoolwait-1.0/COPYING-2.0 xtoolwait-1.0/xtoolplaces.diff xtoolwait-1.0/CHANGES xtoolwait-1.0/README xtoolwait-1.0/xtoolwait.man xtoolwait-1.0/xtoolwait-1.0.lsm
then using a command like:
tar -tzOf xtoolwait-1.0.tar.gz xtoolwait-1.0/README |less ^^^^^^^^^^^^^^^^^^^^
would allow you to view the file 'README' by piping it to 'less'. That was the reason for needing to get a listing of just the filenames in the archive -- to be able to invoke tar with the '-O' option so that it would output the results to standard output.
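The '-O' trick is easy to verify on a throwaway archive. One caveat: on the GNU tar versions I've tried, '-O' streams a member to standard output during extraction, so the sketch below spells the command with 'x' rather than 't'; if '-tzOf' works on your tar, so much the better. The archive name and contents here are made up purely for illustration:

```shell
# Build a throwaway archive with a known README inside
mkdir -p /tmp/odemo/xtool-0.1
echo 'hello from README' > /tmp/odemo/xtool-0.1/README
tar -C /tmp/odemo -czf /tmp/odemo/xtool.tar.gz xtool-0.1

# Stream just the README to standard output (pipe to less to page it)
tar -xzOf /tmp/odemo/xtool.tar.gz xtool-0.1/README
```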
Now, the thing is that tcl/tk will allow you to capture the output of a file using 'open'. Coupled with 'fileevent', this allows you to direct the output of a command to a text widget for viewing and editing. So this was the direction I was going.
I've actually got this working now. It's definitely NOT a showpiece of tcl/tk coding: this 'ol thing wouldn't win any programming contests. Still, as a quick prototype (I hacked it out in two days...), it gave me some ideas about how to put together something a bit more sturdy. Basically, as it stands right now, its features include:
As I said, it is actually working right now and I've used it several times over the past couple weeks.
Parenthetically, I've been using Tcl 7.5 and Tk 4.1 for development. I don't know if any of you have tried compiling this from sources. Using the supplied makefile, I was unable to get the shared libraries to compile. It's been a while since I did this but, if memory serves me correctly, it fails on some system test and thus refuses to compile the shared libs. I did find, however, that by adding '-fPIC' to the CFLAGS, copying the *.o files to a separate directory, and then using something like:
gcc -shared -Wl,-soname,libtcl.so.0.7 -o libtcl.so.0.7.5 *.o

that I was able to compile a working shared library. I'd be interested in hearing from anyone else who's tried to compile tcl or tk from sources. I'll quickly admit that I'm still a neophyte when it comes to C and UNIX/Linux programming. The above works, but if it is "Not The Right Way" then please drop a note.
That said, let's take the penny tour...
To begin, when you start the tarvu program, it displays this directory browser. The path is displayed at the top along with the name of the file (if any) which has been selected. You navigate to a new directory by either:
If you single click on a file, then its name is displayed after the 'File:' label. Single clicking on a directory has no effect. If you click on a tar or tar+gzip file, then you can use the 'View/Edit...' button to view a listing of the contents of the file. Double clicking on the file has the same effect.
After a tar or tar+gzip file has been selected, the 'Tar Browser' allows you view the contents of the tar archive. Now, you can use the full set of operation buttons to either view/edit, save, or print a specific file within the tar archive. Single click on a file within the listing and then click on any of the operation buttons. If you double-click on a file, then it defaults to the file viewer:
The viewer allows you to view, edit, save, or print the contents of a file within the archive. The name of the file is displayed in the upper left hand corner. To either widen or lengthen the edit window, click on 'Widen Window' or 'Lengthen Window'. Now, you can manually resize the window, but doing so does not automatically resize the text widget. I've not been successful in figuring out how to do this, although I suspect that it can be done. Until then, use the buttons... :-)
If you edit the file and want to save it to disk (NOT back to the archive), then click on the 'Save As...' button. This brings up a directory browser which allows you to save the contents of the editing buffer:
This allows you to save or append the contents of the editing buffer to a file. The directory browser features work in a fashion similar to what was previously described. Because this was a quick hack, I simply coded another proc to provide the save/append feature for a FILE within the archive. So, if you go back to the tar archive list, select a file, and then click on the 'Save...' button, you'll see a directory browser which looks similar to the one above.
Finally, if you click on the 'Print...' button, a small dialog box is displayed:
You simply input the command to print to your printer and click on the 'Print' button.
Pretty simple, eh?
As I mentioned before, this is no paragon of programming brilliance. This was a rather quick hack, but it does show what can be VERY EASILY done with even the basic tools of Tcl/Tk. If anyone is interested in this, you can get the tcl script here:
For those of you using Netscape, hold down the Shift button and single click with the left mouse button on this link to save the file to disk. Call it whatever you'd like, and then set the permissions to something like:
% chmod 755 tclvu.tcl
I've got mine symlinked to 'tclvu' to make it easier to remember.
In all honesty, there are LOTS of things that could be done to make this more useful or efficient. Just a couple TODO's that come to mind include:
The thing is, there are all kinds of fun things that can be done. This simple tcl/tk wrapper for tar just lets you view, edit, and print files at the moment. The tar manual page can give you further ideas about what could be done.
For those of you needing a "real" tar utility, I'd strongly suggest using Miguel de Icaza's GREAT program Midnight Commander. You can pick up the sources at any ftp site which mirrors the GNU utilities such as:
Also, there's a program called xtar which is found at the ftp.x.org ftp site. I've honestly not seen this mentioned anywhere and yet it's a VERY handy little program that allows you to browse and view the contents of a tar archive. You'll need the Motif development libraries to compile this, however.
Well, as I said, this was a pretty quick tour. Please feel free to hack away at this and enjoy it. I tried to comment the code, so you should have some idea about my mental state when the thing was written.
Hope you enjoy!
John
Saturday 23 November 1996
Well, I'd hoped to include a lot more stuff in this month's WM column, but time has completely gotten away from me and it's already almost dinner time (and no homework done yet... :-). I must admit that I enjoy doing this a LOT more than Linear Algebra (...sorry Dr. Powell, it's still a GREAT course :-)
So, what are the rest of you guys working on out there...?
I upgraded my home system over this past summer to a Cyrix P-166+ machine with a Triton II MB, 32 MB EDO RAM, Diamond Stealth Video VRAM graphics card, Toshiba 8X CDROM, and CTX 1765GME monitor. I dropped in the old Maxtor 1.6 Gig drive from my previous machine and have just gotten a second Maxtor 2.0 drive (so Linux can finally have its own drive!). I'll be installing this and reinstalling much of the system over Christmas Break. If this sounds like a brewing "mail brown-out", you're probably right... :-)
I've also gotten pretty enamoured with Tcl/Tk, as you might have noticed. This is a seriously fun programming environment. Now, I know that this isn't for everyone, and there are folks who've tried tcl that just frankly didn't like the language. Still, there are a growing number of truly impressive add-ons, including tclX, BLT, Tix, and [incr tcl], that add a lot of nice features. I'd especially commend to you the Tix extension. It provides a set of meta-widgets such as directory browsers, tabbed windows, and the like. It saves you from having to code these types of windows yourself and gives you a higher-level widget to work with. If you're interested in this, then definitely run the demo program, as it gives you an IMPRESSIVE tour of the widget set.
Finally, I've just gotten a microphone for my sound card (SB Vibra 16 PnP) and have been messing around with creating sound files.
Pretty cool :-)
I'm still not completely facile with all the basics, but I've gotten a few snippets recorded, including a "Happy Birthday" rendition (my wife and me) to our sister-in-law. It'd curl 'ol Lawrence Welk's toes, I suspect, but it was fun to send this rascal out via email. You know... "reach out and touch someone..."
Well, here's wishing you Happy Linux'ing!
Since I didn't have time this month to do a "Christmas Shopping List" of Linux goodies, I'll try to get this in next month so that after you return all those bottles of aftershave and the argyle socks, you'll know what to do with the money... :-)
From our household to yours,
Wishing you a Merry and Joyous Christmas Season!
John
Saturday 23 November 1996
If you'd like, drop me a note at:
John M. Fisk

Version Information:
$Id: issue12.html,v 1.1.1.1 1997/09/14 15:01:37 schwarz Exp $
Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.
Major "Not Linux" projects on my plate these days are the repair of a quilt and Thanksgiving.
The "Sunbonnet Sue" quilt was made for my sister Gaynell when she was about 5, and is turning into more work than I expected. But when I am finished, it will be beautiful again and will make a good Christmas present for her.
Thanksgiving feels like an even bigger project than the quilt repair--I get to host this year, which means I do the major part of the cooking. I will be serving traditional Southern fare, since I was raised in Texas. I feel like I should already be cooking to be ready on time. At any rate, I am looking forward to visiting with family, and eating too much. :-) I am also looking forward to the long weekend--four days off from work feels like a vacation!
Have fun!
Marjorie L. Richardson
Editor, Linux Gazette
Linux Gazette, http://www.ssc.com/lg/