"Linux Gazette...making Linux just a little more fun! "


More 2¢ Tips!


Send Linux Tips and Tricks to




Rude Getty

Date: Mon, 23 June 1997 21:12:23
From: Heather Stern

I have a fairly important UNIX box at work, and I have come across a good trick to keep around.

Set one of your console gettys to a very rude nice value, -17 or worse. That way, if a disaster comes up and you have to use the console, it won't take forever to respond to you (because of whatever went wrong).
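
A minimal sketch of one way to do this, assuming a SysV-style init and agetty (the tty, paths and nice value are only examples; the exact getty invocation varies by distribution):

# /etc/inittab -- respawn the getty on tty1 at a rude (high) priority
c1:2345:respawn:/bin/nice -n -17 /sbin/agetty 38400 tty1

Run "telinit q" afterwards so init rereads /etc/inittab.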


Keeping Track of File Size

Date: Mon 16 June 1997 13:34:24
From: Volker Hilsenstein

Hello everyone, I just read Bob Grabau's 2-cent tip for keeping track of the size of a file. Since it is a bit inconvenient to type all those lines each time you download something, I wrote this little script:

#!/bin/bash
# This script monitors the size of the files given
# on the command line.
while :
do
    clear
    for i in "$@"; do
        size=`ls -l "$i" | tr -s " " | cut -f 5 -d " "`
        echo "File $i has the size $size bytes"
    done
    sleep 1
done

Bye ... Volker


Reply to "What Packages do I Need?"

Date: Tue 24 June 1997 11:15:56
From: Michael Hammel,

You asked about what packages you could get rid of and mentioned that you had AcceleratedX and that because of this you "can get rid of a lot of the X stuff". Well, that's not really true. AcceleratedX provides the X server, but you still need to hang onto the X applications (/usr/X11R6/bin/*) and the libraries and include files (/usr/X11R6/lib and /usr/X11R6/include) if you wish to compile X applications or run X binaries that require shared libraries.

Keep in mind that X is actually made up of three distinct parts: the clients (the X programs you run like XEmacs or Netscape or xterm), the server (the display driver that talks to your video adapter), and the development tools (the libs, header files, imake, etc). General users (non-developers) can forego installation of the development tools but need to make sure to install the runtime libraries. Each Linux distribution packages these differently, so just be careful about which ones you remove.
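
Not from Michael's letter, just a quick way to check before deleting anything; it assumes an RPM-based distribution (such as Red Hat) and that xterm is installed:

ldd /usr/X11R6/bin/xterm               # which shared libraries does an X client need?
rpm -qf /usr/X11R6/lib/libX11.so.6     # which package owns a given library?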

One caveat: I used to work for Xi Graphics, but that was over a year and a half ago. Although I keep in touch with them, I haven't really looked at the product line lately. It's possible they ship the full X distribution now, but I kind of doubt it. If they are shipping the full X distribution (clients, server, development tools), then disregard what I've said.

Hope this helps.
-- Michael J. Hammel


Sound Card Support

Date: Mon 24 June 1997 11:16:34
From: Michael Hammel,

With regard to your question in the LG about support for the MAD16 Pro from Shuttle Sound System under Linux, you might consider the OSS/Linux product from 4Front Technologies. The sound drivers they supply support a rather wide range of adapters. The web page http://www.4front-tech.com/osshw.html gives a list of what is and isn't supported. The Shuttle Sound System 48 is listed as supported, as is generic support for the OPTi 82C929 chipset (which you listed as the chipset on this adapter).

This is commercial software but it's only $20. I've been thinking of getting it myself. I have used its free predecessor, known at times as OSS/Lite or OSS/Free, and found it rather easy to use. I just haven't gotten around to ordering (mostly because I never seem to have time for installation or any other kind of admin work). I will eventually.

4Front's web site is at http://www.4front-tech.com.

Hope this helps.

-- Michael J. Hammel


InstallNTeX is Dangerous

Date: Fri 06 June 1997 12:31:14
From: Frank Langbein

Dear James:
On Fri, 6 Jun 1997, James wrote:

You have still

 make_dir "       LOG" "$VARDIR/log"       $DOU 1777
 
   make_dir " TMP-FONTS" "$VARDIR/fonts"     $DOU 1777

If I hadn't (now) commented-out your

(cd "$2"; $RM -rf *)
then both my /var/log/* and /var/fonts/* files and directories would have been deleted!

Actually, VARDIR should also be a directory reserved for NTeX only (something like /var/lib/texmf). Deleting VARDIR/log is not really necessary unless someone has some MakeTeX* logs in there which are not user-writable. Any pk or tfm files from older or non-NTeX installations could cause trouble later: sometimes the font metrics change, and if old metrics are used with a new bitmap (or similar) the resulting document might look rather strange. Furthermore, log and fonts have to be world-writable (there are ways to prevent this, but I haven't implemented a wrapper for the MakeTeX* scripts yet), so placing them directly under /var is not really a good idea. I am aware that the documentation of the installation procedure is minimal, which makes it especially hard to select the directories freely.

The real problem is allowing the directories to be chosen freely. Selecting the TDS or the Linux filesystem standard layout is rather safe; at most, other TeX files would be deleted. The only really secure option would be to remove the free choice and offer only the Linux filesystem standard, the layout from web2c 7.0 (which is also TDS-conformant), and a TDS-conformant structure in a special NTeX directory. The free selection would then not be accessible to a new user; I could add an expert option which still allows a totally free selection. Additionally, instead of deleting the directories, they could be renamed.
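
A rough sketch of that last idea (renaming instead of deleting); the function name and the owner/mode arguments mimic the InstallNTeX calls above but are assumptions, not the actual script:

make_dir () {
    # $1 = label, $2 = directory, $3 = owner, $4 = mode
    if [ -d "$2" ]; then
        # move an existing directory aside instead of wiping it
        echo "make_dir: $2 exists, renaming it to $2.old"
        mv "$2" "$2.old" || exit 1
    fi
    mkdir -p "$2" && chown "$3" "$2" && chmod "$4" "$2"
}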

There are plans for a new installation procedure, also supporting such things as read-only volumes/AFS, better support for multiple-platform installation, etc. This new release will not be available before I have managed to implement all the things which were planned for 2.0. That also means that there will probably be no new release this year, as I have to concentrate on my studies. Nevertheless, I will add a warning to the free selection in InstallNTeX. That's currently the only thing I can do without risking adding further bugs to InstallNTeX. Considering that my holiday starts next week, I can't do more this month.

BTW, on another point, I had difficulty finding what directory was searched for the packages to be installed. Only in the ntex-guide, seemingly buried, is there:

This is caused by the different ways the package is looked for in NTeX-install, the text version of InstallNTeX, and the Tcl/Tk version of InstallNTeX. Therefore you get some warnings even if NTeX-install would be able to install the packages. The minimal documentation is one of the really big drawbacks of NTeX. I'm currently working on a complete specification for the next release, which will turn into real documentation.

Thanks for pointing out the problems with the free selection of the paths. So far I have concentrated on the case where the installation paths are set to non-existing directories.

Regards,
Frank


Reply to Dangerous InstallNTeX Letter

To: Frank Langbein,
Date: Sat, 07 Jun 1997 10:11:06 -0600
From: James

Dear Frank:
The hidden application of the operation

rm -rf *
to the unpredictable and unqualified input from a broad base of naive users is highly likely to produce unexpected and undesired results for some of these users. This is the kind of circumstance more usually associated with a "prank". If this is _not_ your intent, then further modifications to the script "InstallNTeX" are required.

The script functions at issue include: mk_dirchain() ($RM -f $P), make_dir() ($RM -rf * and $RM -f "$2"), make_tds_ln() ($RM -f "$3"), and link_file() ($RM -rf "$2"). The impact of the operations when using unexpected parameters, from misspellings or misinterpretations, for instance, should be considered.

You might simply replace these operations with an authorization dialog, or you could create a dialog with several recovery options. (For the moment, I have replaced them with `echo "<some warning parm>"'.)
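
A minimal sketch of such an authorization dialog (the helper name confirm_rm is hypothetical, not part of InstallNTeX):

confirm_rm () {
    echo -n "About to run: rm -rf $*  -- proceed? [y/N] "
    read answer
    case "$answer" in
        [yY]*) rm -rf "$@" ;;
        *)     echo "Skipping removal of $*" ;;
    esac
}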

James G. Feeney


Monitoring An FTP Download

Date: Tue, 10 Jun 1997 19:54:25 +1000 (EST)
From: Nathan Hand

I saw the recent script someone posted in the 2c column to monitor an ftp download using the clear ; ls -l ; sleep trick. I'd just like to point out there's A Better Way.

Some systems will have the "watch" command installed. This command works pretty much like the script, except it uses curses and buffers for lightning-fast updates. You use it something like this:

   watch -n 1 ls -l

And it prints out the current time, the file listing, and it does the refreshes so fast that you don't see the ls -l redraws. I think it looks a lot slicker, but otherwise it's the same as the script.

I don't know where the watch command comes from. I'm using a stock standard Red Hat system (4.0) so hopefully people with similar setups will also have a copy of this nifty little tool.


Programming Serial Ports

Date: Wed 18 June 1997 14:15:23
From: Tom Verbeure

Hello. A few days ago I had to communicate using the serial port of a Sun workstation. A lot of information can be found here: http://www.stokely.com/stokely/unix.serial.port.resources and here: http://www.easysw.com/~mike/serial

Reading chapters 3 and 4 of that last page can do wonders. It took me about 30 minutes to get communication going with the machine connected to the serial port. The code should work on virtually any UNIX machine.
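
Not from Tom's code, but as a rough shell-level sketch of the same idea, assuming the port is /dev/ttyS0 and you have permission to open it:

stty 9600 cs8 -parenb clocal raw < /dev/ttyS0   # 9600 baud, 8N1, ignore modem control, raw mode
echo "hello" > /dev/ttyS0                       # send a line out the port
cat /dev/ttyS0                                  # read whatever comes back (Ctrl-C to stop)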

Hope this helps, Tom Verbeure


Another Way of Grepping Files in a Directory Tree

Date: Thu 12 June 15:34:12
From: Danny Yarbrough

That's a good tip. To work around the command line length limitation, you can use xargs(1):

find . -name "\*.c" -print | xargs grep foo
this builds a command line containing "grep foo" (in this case), plus as many arguments (one argument for each line of its standard input) as it can to make the largest (but not too long) command line it can. It then executes the command. It continues to build command lines and executing them until it reaches the end of file on standard input.

(Internally, I suppose xargs doesn't really build command lines, but rather an array of arguments to pass to one of the exec*(2) family of system calls. The concept, however, is the same.)

xargs has a number of other useful options for inserting arguments into the middle of a command string, running a command once for each line of input, echoing each execution, etc. Check out the man page for more.
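
A couple of those options, sketched for GNU xargs (check your man page, since flags vary between versions; /tmp/backup is just an example directory):

find . -name \*.bak -print | xargs -n 1 -t rm            # one file per rm, echo each command first
find . -name \*.c -print | xargs -i cp {} /tmp/backup/   # insert each name mid-command ({} is the placeholder)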

Cheers! Danny


More Grepping Files

Date: Mon 16 June 1997 08:45:56
From: Alec Clews

grep foo `find . -name \*.c -print`

The only caveat here is that UNIX limits the maximum number of characters in a command line, and the "find" command may generate a list of files too huge for the shell to digest when it tries to run the grep portion as a command line. On some systems this limit is as low as 1024 characters per command line.

You can get around this with

find . -type f -name \*.c -exec grep foo {} /dev/null \;

Notes: The -type f skips directories (and soft links; use -follow if needed) whose names happen to end in .c.

The /dev/null is required to make grep display the name of the file it's searching. grep only displays the file name along with each matching line when there are multiple files to search, and /dev/null is a zero-length file, so it harmlessly raises the file count to at least two.

Regards,
Alec


Still More On Grepping Files

Date: Sat 14 June 1997 10:57:34
From: Rick Bronson

Here is a similar way to grep for files in a directory tree. This method uses xargs and as such does not suffer from the maximum command-line length limit.

sea () 
{ 
    find . -name "$2" -print | xargs grep -i "$1"
}

I've defined it as a function in my .bashrc file; you would use it like this:

sea "search this string" '*.[ch]'

Rick


Grepping

Date: Thu 19 June 1997 09:29:12
From: David Kastrup
Reply to "Grepping Files in a Tree Directory"

Well, right. That's why most solutions to this problem are given using the xargs command, which will construct command lines of appropriate size.

You'd write

find . -name \*.c -print|xargs grep foo
for this. This can be improved somewhat, however. If you suspect that you have file names containing newlines or other strange characters, try
find . -name \*.c -print0|xargs -0 grep foo --
This will use a special format for passing the file list from find to xargs which can properly identify all valid filenames. The -- tells grep that even strange file names like "-s" are to be interpreted as file names.

Of course, we would want the corresponding file name listed even if xargs passes only a single file to grep in one of its invocations. We can manage this with

find . -name \*.c -print0|xargs -0 grep foo -- /dev/null
This will have at least two file names for grep (/dev/null and one given by xargs), so grep will print the file name for found matches.

The -- is a good thing to keep in mind when writing shell scripts. Most of the shell scripts searching through directories you find flying around get confused by file names like "-i" or "xxx\ yyy" and similar perversities.
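
A quick illustration of the point (not from David's letter):

touch -- -i     # create a file literally named "-i"
rm -i           # rm takes -i as an option and complains that a file operand is missing
rm -- -i        # with --, the "-i" is treated as a file name and removed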

David Kastrup


More on Grepping Files in a Tree

Date: Mon 02 June 1997 15:34:23
From: Chris Cox

My favorite trick for looking for a string (or strings, with egrep) in a tree; it uses file(1) to limit the search to files that look like text:

$ find . -type f -print | xargs file | grep -i text |
   cut -f1 -d: | xargs grep pattern

This is a useful technique for other things...not just grepping.


Untarring/Zip

Date: Sun 22 June 1997 13:23:14
From: Mark Moran

I read the following 2-cent tip and was excited to think that I've finally reached a point in my Linux expertise where I COULD contribute a 2-cent tip! I typically run:

tar xzf foo.tar.gz

to unzip and untar a program. But as Paul mentions, when the directory structure isn't included in the archive, it dumps everything into your current directory. Well, before I do the above I run:
tar tzf foo.tar.gz

This will dump out to your console what is going to be unarchived, easily letting you see whether there's a directory structure!
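
If the listing shows no top-level directory, one common workaround (not from Mark's letter) is to unpack into a fresh directory first:

mkdir foo
cd foo
tar xzf ../foo.tar.gz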

Mark


An Addition to Hard Disk Duplication (LG #18)

Date: Thu 12 June 1997 15:34:32
From: Andreas Schiffler

Not surprisingly, Linux can of course do that for free, even from a floppy boot image (e.g. the Slackware bootdisk console).

For identical hard drives the following will do the job:

cat /dev/hda >/dev/hdb

For non-identical hard drives one has to repartition the target first:

fdisk /dev/hda              # record the partitions (size, type)
fdisk /dev/hdb              # create the same partitions
cat /dev/hda1 >/dev/hdb1    # copy the partitions one by one
cat /dev/hda2 >/dev/hdb2
...

To create image files, simply redirect the source device to a file.

cat /dev/hda >image-file
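
Restoring from such an image is the same operation in reverse (a sketch; be very sure of the target device name, since this overwrites it):

cat image-file >/dev/hdb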

To reinstall the MBR and lilo, just boot with a floppy using parameters that point to the root partition (as in LILO> linux root=/dev/hda1) and rerun lilo from within Linux.

Have fun
Andreas


Reply to ncftp (LG #18)

Date: Fri 20 June 1997 14:23:12
From: Andrew M. Dyer,

To monitor an ftp session I like to use ncftp, which puts up a nice status bar. It comes in many Linux distributions. When using the standard ftp program you can also use the "hash" command, which prints a "#" for every 1K bytes received. Some ftp clients also have the "bell" command, which sends a bell character to your console for every file transferred.

For grepping files in a directory tree I like to use the -exec option to find. The syntax is cryptic, but there is no problem with overflowing the shell argument list. A version of the command shown in issue #18 would be like this:

find . -name \*.c -exec grep foo {} /dev/null \;
(note that the /dev/null forces grep to print the filename of the matched file). Another way to do this is with the mightily cool xargs program, which also solves the overflow problem and is a bit easier to remember:
find . -name \*.c -print | xargs grep foo /dev/null
(This last one is stolen from "UNIX Power Tools" by Jerry Peek, Tim O'Reilly and Mike Loukides - a whole big book of 2-cent tips.)

For disk duplication we sometimes use a Linux box with a secondary IDE controller and use "dd" to copy the data over:

dd if=/dev/hdc of=/dev/hdd bs=1024k

This would copy the contents of /dev/hdc to /dev/hdd. The bs=1024k tells dd to use a large block size to speed up the transfer.


Sockets and Pipes

Date: Thu, 12 Jun 1997 23:22:38 +1000 (EST)
From: Waye-Ian Cheiw,

Hello!

Here's a tip!

Ever tried to pipe things, then realised what you want to pipe to is on another machine?

spiffy $ sort < file 
sh: sort: command not found 
spiffy $ # no sort installed here! gahck!

Try "socket", a simple utility that's included in the Debian distribution. Socket is a tool which can treat a network connection as part of a pipe.

spiffy $ cat file
c 
b
a
spiffy $ cat file | socket -s 7000 &   # Make pipe available at port 7000.
spiffy $ rlogin taffy
taffy $ socket spiffy 7000 | sort      # Continue pipe by connecting to spiffy.
a
b
c

It's also very handy for transferring files and directories in a snap.

spiffy $ ls -F 
mail/   project/
spiffy $ tar cf - mail project | gzip | socket -qs 6666 &
spiffy $ rlogin taffy
taffy $ socket spiffy 6666 | gunzip | tar xf - 
taffy $ ls -F
mail/   project/

The -q switch will close the connection on an end-of-file and conveniently terminate the pipes on both sides after the transfer.

It can also connect a shell command's input and output to a socket. There is also a switch, -l, which restarts that command every time someone connects to the socket.

spiffy $ socket -s 9999 -l -p "fortune" &
spiffy $ telnet localhost 9999
"Baseball is ninety percent mental.  The other half is physical." 
Connection closed by foreign host. 
This makes a cute service on port 9999 that spits out fortunes.

-- Ian!!


Hex Dump

Date: Tue 24 June 1997 22:54:12
From: Arne Wichmann

Hi.

One of my friends once wrote a small vi-compatible hex editor. It can be found (as source) at vieta.math.uni-sb.de:/pub/misc/hexer-0.1.4c.tar.gz


More on Hex Dump

Date: Wed, 18 Jun 1997 10:15:26 -0700
From: James Gilb

I liked your gawk solution to displaying hex data. Two things (which people have probably already pointed out to you):

  1. If you don't want similar lines to be replaced by a single "*", use the -v option to hexdump (see the example after this list). From the man page:

    -v The -v option causes hexdump to display all input data. Without the -v option, any number of groups of output lines, which would be identical to the immediately preceding group of output lines (except for the input offsets), are replaced with a line comprised of a single asterisk.

  2. In emacs, you can get a similar display using ESC-x hexl-mode. The output looks something like this:
    00000000: 01df 0007 30c3 8680 0000 334e 0000 00ff  ....0.....3N....
    00000010: 0048 1002 010b 0001 0000 1a90 0000 07e4  .H..............
    00000020: 0000 2724 0000 0758 0000 0200 0000 0000  ..'$...X........
    00000030: 0000 0760 0004 0002 0004 0004 0007 0005  ...`............
    00000040: 0003 0003 314c 0000 0000 0000 0000 0000  ....1L..........
    00000050: 0000 0000 0000 0000 0000 0000 2e70 6164  .............pad
    00000060: 0000 0000 0000 0000 0000 0000 0000 0014  ................
    00000070: 0000 01ec 0000 0000 0000 0000 0000 0000  ................
    00000080: 0000 0008 2e74 6578 7400 0000 0000 0200  .....text.......
    00000090: 0000 0200 0000 1a90 0000 0200 0000 2a98  ..............*.
    
    (I don't suppose it is surprising that emacs does this; after all, emacs is not just an editor, it is its own operating system.)
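
For item 1, a minimal sketch of the hexdump usage; the -C flag (canonical hex+ASCII output) may not exist in every hexdump version, in which case plain "hexdump -v somefile" still shows every line:

hexdump -v -C somefile | less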


Reply to Z Protocol

Date: Mon 09 June 1997 19:34:54
From: Gregor Gerstmann

In reply to my remarks regarding file transfer with the Z protocol in Linux Gazette issue 17, April 1997, I received an e-mail that may be interesting to others too:

Hello!

I noticed your article in the Linux Gazette about the sz command, and really don't think you need to split up your downloads into smaller chunks.

The sz command uses the ZMODEM protocol, which is built to handle transmission errors. If sz reports a CRC error or a bad packet, it does not mean that the file produced by the download will be tainted. sz automatically retransmits bad packets.

If you have an old serial UART chip (8250), then you might be getting intermittent serial errors. If the link is unreliable, then sz may spend most of its time tied up in retransmission loops.

In this case, you should use a ZMODEM window to force the sending end to expect an `OK' acknowledgement every few packets.

  sz -w1024
This specifies a window of 1024 bytes.

-- Ian!!


Published in Linux Gazette Issue 19, July 1997




This page maintained by the Editor of Linux Gazette,
Copyright © 1997 Specialized Systems Consultants, Inc.