For people running the client on UNIX who are not allowed to, or (like me) not willing to, connect automatically, it may be useful to apply this script or this improved and enhanced version (comments on both below).
Its main purpose is to run several prepared work units automatically without sending
them immediately after they are processed. I tried to keep the script flexible,
so you may adapt it to your own needs. I propose to run it once
an hour (unless you run it on a VERY fast machine, in which case you have to shorten this
interval) by a simple call with its absolute path and nothing else. Simply fill the desired number of directories with
work units first (option stop_after_xfer) and then activate the script by a crontab entry.
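Such a crontab entry for an hourly call might look like the following; the path is of course only an illustrative assumption, put your own absolute path there:

```
# edit your crontab with: crontab -e
# run the script at minute 0 of every hour (adapt the absolute path!)
0 * * * * /home/user/seti/seti_script
```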
Important remark: I have changed the script to make it safe to use
on any UNIX machine; the increment of dirCount is the only line affected.
I was confused by a nasty little habit of LINUX: it makes the bash appear as
the Bourne shell, whereas in reality the bash is executed, so it does not
complain about certain (Korn) shell extensions but simply executes them!
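The difference can be illustrated by a minimal sketch (dirCount is the script's counter variable):

```shell
#!/bin/sh
# Korn shell / bash extension -- NOT understood by a pure Bourne shell:
#   (( dirCount += 1 ))
# Portable increment that works in any Bourne-compatible shell:
dirCount=0
dirCount=`expr $dirCount + 1`
echo $dirCount
```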
A few further hints seem appropriate (if you want to use the script later or
to modify it, I propose simply to download this page as well; like nearly
all of my HTML pages, it can be used offline afterwards without any problems):
First decide which version fits your needs better: the second, new one
needs a Korn shell (write /bin/ksh instead of /bin/bash) or the bash, which I
generally try to avoid. But this new one can continue a begun work unit after
an interruption of the client or (most unlikely on UNIX systems) after a reboot
of the machine. Besides, it contains a safety measure against an undesirably large
number of clients running in parallel, because on an old IBM RS/6000 PowerPC with AIX 4.2
I encountered the problem that the process list wasn't always updated immediately.
So a wait of a few seconds seems appropriate. In my experience this can be commented
out for LINUX and Solaris, and of course, vice versa, it can also be used in the
first, pure Bourne shell version if you need it there.
(A last hint: especially the bash may be located NOT in /bin but, for
example, in /usr/local/bin on some UNIX systems; please try "which" to find out!)
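A hedged sketch of how such a safety measure might look; the client name, the sleep time, and the threshold are illustrative assumptions, not taken verbatim from the script:

```shell
#!/bin/ksh
# (use "which ksh" if your Korn shell lives elsewhere, e.g. /usr/local/bin)
clientName=setiathome    # illustrative process name
sleep 2                  # let the process table settle (seen on AIX 4.2);
                         # on LINUX and Solaris this can be commented out
# count running clients; "grep -v grep" drops our own pipeline from the list
setiNum=`ps -Ao pid,args | grep -v grep | grep -c "$clientName"`
if [ "$setiNum" -ge 2 ]; then
    echo "enough clients already running"
    exit 0
fi
```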
The variable setiExt is set for the V2 and V3 clients; the V1 clients use(d) the
The next two variables are clearly computer- and user-dependent: the top directory setiTop,
the root for all of your SETI subdirectories, has to be adapted first; then you
have to decide how the names of the working directories are built: a running
number is simply appended to the end of setiDirNamePart. Of course you have to create
enough directories with this naming convention in the first place to make use of them!
And they all have to be immediately "below" the directory named by the first variable.
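Preparing the directories could look like this; the path and the name part are illustrative assumptions, adapt them to your own setup:

```shell
#!/bin/sh
# create enough numbered work directories below setiTop;
# setiTop and setiDirNamePart are placeholders for your own values
setiTop=/tmp/seti_demo
setiDirNamePart=wu
dirCount=0
while [ $dirCount -lt 4 ]; do
    mkdir -p "$setiTop/$setiDirNamePart$dirCount"
    dirCount=`expr $dirCount + 1`
done
ls "$setiTop"    # wu0 wu1 wu2 wu3
```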
For a more precise explanation of the available options, please read Berkeley's README
accompanying the client. I want to stop the client after processing and call
explicitly for the default NICE value 1.
Finally, minSetiNum determines how many clients the script tries to
run concurrently at any call. The script is capable of starting enough clients
at once to reach this number, as long as enough correctly named directories with
work units are available --- I strongly propose to set it to one higher than
the number of CPUs of the machine! (It is the only way to guarantee full use
of its crunching power.)
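The start-up logic might be sketched like this; all paths, names, and client options here are illustrative assumptions, not quoted from the script (check the README for the exact option spelling on your client version):

```shell
#!/bin/sh
# start clients in the numbered directories until minSetiNum are running
setiTop=/home/user/seti    # adapt: root of your SETI directories
setiDirNamePart=wu         # adapt: common part of the directory names
minSetiNum=3               # number of CPUs + 1
setiNum=`ps -Ao pid,args | grep -v grep | grep -c setiathome`
dirCount=0
while [ "$setiNum" -lt "$minSetiNum" ] && \
      [ -d "$setiTop/$setiDirNamePart$dirCount" ]
do
    # each client stops itself after processing its work unit
    ( cd "$setiTop/$setiDirNamePart$dirCount" && \
      ./setiathome -stop_after_process -nice 1 ) &
    setiNum=`expr $setiNum + 1`
    dirCount=`expr $dirCount + 1`
done
```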
dirCount=0 : if you want, you may of course start your numbering with 1
instead of my mathematician-like choice of 0.
psOptions="ax" : this has to be adapted to the UNIX in use; the default is valid
for LINUX, but on Solaris, for example, you need to write psOptions="-Af".
Important remark: I have now changed it to "-Ao pid,args", which should
work on every UNIX 98 Standard conformant UNIX, which holds true for
Solaris 7 and later and AIX 4.3 and later, but not generally for LINUX yet
(blame them for it!); still, it works correctly on LINUX too.
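The three variants side by side; the last line is only a quick check that ps accepts the chosen options:

```shell
#!/bin/sh
# psOptions="ax"           # classic BSD style, the script's LINUX default
# psOptions="-Af"          # SysV style, e.g. Solaris
psOptions="-Ao pid,args"   # UNIX 98 standard, the portable choice
# $psOptions is intentionally unquoted so the shell splits it into words
ps $psOptions > /dev/null && echo "ps accepts: $psOptions"
```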
The further statements are quite straightforward, I guess. Keep in mind that
the client by itself is able to determine whether it is already running in the
current directory, so the script doesn't need to check for that itself.
Finally, here is a little Bourne shell script which
calculates the average turn-around time for the given result files. The use
of GNU grep (UNIX 98 grep doesn't offer the required option -L) simplifies
it very much: it can easily drop the results of work unit processing
terminated by an RFI overflow, which would artificially and arbitrarily
lower the average. The remainder is straightforward, I think.
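The idea behind it can be sketched as follows: grep -L prints the names of the files that do NOT contain the pattern, so overflow-terminated results are excluded before averaging. The marker string and the "hours=" field are illustrative assumptions, not the real result file format:

```shell
#!/bin/sh
# average_turnaround MARKER FILE...
# averages the turn-around times of all FILEs that do NOT contain MARKER
average_turnaround() {
    marker=$1; shift
    sum=0; num=0
    # GNU grep -L: list only the files without the overflow marker
    for f in `grep -L "$marker" "$@"`; do
        t=`sed -n 's/^hours=//p' "$f"`   # extract the (integer) time field
        sum=`expr $sum + $t`
        num=`expr $num + 1`
    done
    [ $num -gt 0 ] && expr $sum / $num   # integer average
}
```

Called with the marker and a list of result files, it prints the average of the surviving files only.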