Phylogenetics: Bioinformatics Cluster

EEB 349: Phylogenetics
The goal of this lab exercise is to show you how to log into the Bioinformatics Facility computer cluster and perform a basic PAUP* analysis.

Part A: Using the UConn Bioinformatics Facility cluster

The Bioinformatics Facility is part of the UConn Biotechnology Center, which is located behind the Up-N-Atom Cafe in the lower level of the Biology/Physics building. Jeff Lary maintains a 17-node Apple Xserve G5 cluster that can be used by UConn graduate students and faculty to conduct bioinformatics-related research (sequence analysis, biological database searches, phylogenetics, molecular evolution). By now you have an account on the cluster, and today you will learn how to start analyses remotely (i.e. from your laptop), check on their status, and download the results when your analysis is finished.

Obtaining the necessary communications software

You will be using a couple of simple (and free) programs to communicate with the head node of the cluster.

If you use Mac OS X...

The program ssh will allow you to communicate with the cluster using a protocol known as SSH (Secure Shell) that encrypts everything sent over the internet. You will use ssh to send commands to the cluster and see the output generated. In the old days, a protocol known as Telnet was used for this purpose, but it is no longer used because it did not encrypt anything, making it easy for someone with access to the network to see your username and password in plain text.

The other program you will use is called scp. It allows you to transfer files back and forth using the Secure Copy Protocol. It replaces the old protocol (FTP) that, like Telnet, sent usernames and passwords unencrypted across the network. If you find yourself wanting a fancier SCP client, check out Fugu.

Start by opening the Terminal application, which you can find in the Applications/Utilities folder on your hard drive. Using the Terminal program, you can connect to the cluster with the following command:

ssh username@bbcxsrv1.biotech.uconn.edu

where username should be replaced by your username on the cluster.

If you use Windows...

Visit the PuTTY web site, scroll down to the section labeled "Binaries" and save putty.exe and psftp.exe on your desktop.

The program PuTTY will allow you to communicate with the cluster using a protocol known as SSH (Secure Shell) that encrypts everything sent over the internet. You will use PuTTY to send commands to the cluster and see the output generated. In the old days, a protocol known as Telnet was used for this purpose, but it is no longer used because it did not encrypt anything, making it easy for someone with access to the network to see your username and password in plain text.

The other program you will use is called PSFTP. It allows you to transfer files back and forth using SFTP (Secure File Transfer Protocol). It replaces the old protocol (FTP) that, like Telnet, sent usernames and passwords unencrypted across the network. If you find yourself wanting a fancier SFTP client, check out FileZilla.

Double-click the PuTTY icon on your desktop to start the program. In the Host Name (or IP address) box, type bbcxsrv1.biotech.uconn.edu. Now type Bioinformatics cluster into the Saved Sessions box and press the Save button. This will save having to type the computer's name each time you want to connect. Now click the Open button to start a session. The first time you connect, you will get a PuTTY Security Alert. Just press the Yes button to close this dialog.

Now you should see the following prompt:

login as:

Type in your username and press Enter. Now you should see the password prompt:

Password:

Type in your password and press Enter. If all goes well, you should see something like this:

Welcome to Darwin!
[bbcxsrv1:~] plewis%

except that your username should appear instead of mine (plewis).

Learning enough UNIX to get around

I'm presuming that you do not know a lot of UNIX commands, but even if you are already a UNIX guru, please complete this section anyway because otherwise you will fail to create some files you will need later.

The cluster comprises Macintosh G5 computers running Mac OS X, which is essentially a UNIX operating system with a very nice graphical user interface. Today, however, you will not be using that nice interface! Instead, you will be communicating with the cluster through the UNIX command line, so the first step is to learn a few important UNIX commands.

ls command: finding out what is in the present working directory

The ls command lists the files in the present working directory. Try typing just

ls

If you need more details about files than you see here, type

ls -la

instead. This version provides information about file permissions, ownership, size, and last modification date.
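
The exact output depends on what is in your directory, but lines of ls -la output look something like this (the file names and details below are hypothetical examples, not what you should expect to see right now):

drwxr-xr-x   3 plewis  staff    102 Jan 20 13:30 pauprun
-rw-r--r--   1 plewis  staff   8342 Jan 20 13:25 algae.nex

The first column shows the permissions (a leading d marks a directory); the remaining columns include the owner, the file size in bytes, the date and time of last modification, and the file name.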

pwd command: finding out what directory you are in

Typing

pwd

shows you the full path of the present working directory. The path shown should end with your username, indicating that you are currently in your home directory.
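
For example, the output might look something like this (the exact path depends on how home directories are set up on the cluster, so treat this only as a hypothetical illustration):

/home/plewis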

mkdir command: creating a new directory

Typing the following command will create a new directory named pauprun in your home directory:

mkdir pauprun

Use the ls command now to make sure a directory of that name was indeed created.

cd command: leaving the nest and returning home again

The cd command lets you change the present working directory. To move into the newly-created pauprun directory, type

cd pauprun

You can always go back to your home directory (no matter how lost you get!) by typing just cd by itself:

cd

If you want to go up one directory level (say from pauprun back up to your home directory), you can specify the parent directory using two dots:

cd ..

Creating run.nex using the pico editor

Another way to create a new file, or edit one that already exists, is to use the pico editor. Most people like using pico better than cat for creating new files: the only advantage cat has over pico is that it is guaranteed to be present on every UNIX computer, whereas pico is only present on some. You will now use pico to create a run.nex file containing a paup block. You will later execute this file in PAUP* to perform an analysis.

First use the pwd command to see where you are, then use cd to go into the pauprun directory if you are not already there. Type

pico run.nex

This will open the pico editor, and it should say [ New File ] at the bottom of the window to indicate that the run.nex file does not already exist. Note the menu of the commands along the bottom two rows. Each of these commands is invoked using the Ctrl key with the letter indicated. Thus, ^X Exit indicates that you can use the Ctrl key in combination with the letter X to exit pico.

For now, type the following into the editor:

#nexus

begin paup;
  log file=algae.output.txt start replace flush;
  execute algae.nex;
  set criterion=likelihood autoclose;
  lset nst=2 basefreq=estimate tratio=estimate rates=gamma shape=estimate;
  hsearch swap=none start=stepwise addseq=random nrep=1;
  lset basefreq=previous tratio=previous shape=previous;
  hsearch swap=tbr start=1;
  savetrees file=algae.ml.tre brlens;
  log stop;
  quit;
end;

Once you have entered everything, use ^X to exit. Pico will ask if you want to save the modified buffer, at which point you should press the Y key to answer yes. Pico will now ask you whether you want to use the file name run.nex; this time just press Enter to accept. Pico should now exit and you can use cat to look at the contents of the file you just created:

cat run.nex

Create the gopaup file

Now use pico to create a second file named gopaup in your home directory (the parent directory of the pauprun directory). This file should contain the following text:

#$ -o junk.txt -j y
cd $HOME/pauprun
paup -n run.nex

Using PSFTP to upload the algae.nex data file (Windows)

Download the file algae.nex from http://hydrodictyon.eeb.uconn.edu/people/plewis/courses/phylogenetics/data/algae.nex and save it on your hard drive.

Make sure that algae.nex is in the same place as the PSFTP program, then start PSFTP by double-clicking it.

PSFTP should say something like this:

psftp: no hostname specified; use "open host.name" to connect

To open a connection to the cluster, type

open bbcxsrv1.biotech.uconn.edu

then supply your username and password when prompted.

To upload algae.nex to the cluster, type

put algae.nex

If you do not see any error messages, then you can assume that the transfer worked. Type

quit

to exit the PSFTP program.

Using scp to upload the algae.nex data file (Mac)

Download the file algae.nex from http://hydrodictyon.eeb.uconn.edu/people/plewis/courses/phylogenetics/data/algae.nex and save it on your hard drive. Open the Terminal application and navigate to where you saved the file. If you saved it on the desktop, you can go there by typing cd Desktop.

Type the following to upload algae.nex to the cluster:

scp algae.nex username@bbcxsrv1.biotech.uconn.edu:

where username should be replaced by your own user name on the cluster. Note the colon at the end of the command: it tells scp that the destination is your home directory on the remote machine rather than a file on your local computer.
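
As an aside, scp can also drop the file directly into a subdirectory on the cluster if you add a path after the colon. Here is a hypothetical example (do not run it now; the exercise below assumes algae.nex starts out in your home directory so that you can practice moving it):

scp algae.nex username@bbcxsrv1.biotech.uconn.edu:pauprun/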

A few more UNIX commands

You have now transferred a large file (algae.nex) to the cluster, but it is not in the right place. The algae.nex file should be in your home directory, whereas the run.nex file is in the pauprun directory. The run.nex file contains this line

execute algae.nex

which means that algae.nex should also be located in the pauprun directory. Use the following commands to ensure that (1) you are in your home directory, and (2) algae.nex is also in your home directory:

 cd $HOME
 ls algae.*

Note the use of a wildcard character (*) in the ls command. This will show you only files that begin with the letters algae followed by a period and any number of other non-whitespace characters. The $HOME is a predefined shell variable that will be replaced with the path to your home directory. It is not necessary in this case - typing cd all by itself would take you to your home directory - but the $HOME variable is good to know about (especially for use in scripts).
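
If you are curious what $HOME expands to on the cluster, you can ask the shell to print it:

echo $HOME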

mv command: moving or renaming a file

Now use the mv command to move algae.nex to the directory pauprun:

mv algae.nex pauprun

The mv command takes two arguments. The first argument is the name of the directory or file you want to move, whereas the second argument is the destination. The destination could be either a directory (which is true in this case) or a file name. If the directory pauprun did not already exist, mv would have interpreted this as a request to rename algae.nex to the file name pauprun! So, be aware that mv can rename files as well as move them.
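
For example, if both arguments are ordinary file names (and the second is not an existing directory), mv simply renames the file. This is a purely hypothetical example, so do not run it now:

mv oldname.nex newname.nex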

cp command: copying a file

The cp command copies files. It leaves the original file in place and makes a copy elsewhere. You could have used this command to get a copy of algae.nex into the directory pauprun:

cp algae.nex pauprun

This would have left the original in your home directory, and made a duplicate of this file in the directory pauprun.
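
Like mv, cp can also give the copy a different name if you supply a file name (rather than just a directory) as the destination. A hypothetical example (no need to run it now):

cp algae.nex pauprun/algae_copy.nex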

rm command: cleaning up

The rm command removes files. If you had used the cp command to copy algae.nex into the pauprun directory, you could remove the original file using these commands:

cd
rm algae.nex

The first cd command just ensures that the copy you are removing will be the one in your home directory (typing cd by itself acts the same as typing cd $HOME). If it bothers you that the system always asks your permission before deleting a file, you can force the issue using the -f option (but just keep in mind that this is more dangerous):

rm -f algae.nex

To delete an entire directory (don't try this now!), you can add the -r flag, which means to recursively apply the remove command to everything in every subdirectory:

rm -rf pauprun

The above command would remove everything in the pauprun directory (without asking!), and then remove the pauprun directory itself. I want to stress that this is a particularly dangerous command, so make sure you are not weary or distracted when you use it! Unlike the Windows or Mac graphical user interface, files deleted using rm are not moved first to the Recycle Bin or Trash, they are just gone. There is no undo for the rm command.

Starting a PAUP* analysis

If you've been following the directions in sequence, you now have two files (algae.nex and run.nex) in your $HOME/pauprun directory on the cluster, whereas the gopaup file should be in $HOME. Use the cd command to make sure you are in your home directory, then the cat command to look at the contents of the gopaup file you created earlier. You should see this:

#$ -o junk.txt -j y
cd $HOME/pauprun
paup -n run.nex

This file will be used by software called the Sun Grid Engine (SGE for short) to start your run. SGE provides a command called qsub that you will use to submit your analysis. SGE will then look for a node (i.e. machine) in the cluster that is currently not being used (or is being used to a lesser extent than other nodes) and will start your analysis on that node. This saves you the effort of looking amongst all 17 nodes in the cluster for one that is not busy.

Here is an explanation of each of the lines in gopaup:

  • Lines beginning with the two characters #$ are interpreted as commands by SGE itself. In this case, the command tells SGE to send any output from the program to a file named junk.txt, and the -j y part says to append any error output to this file as well (the j stands for join and the y for yes).
  • The second line is simply a cd command that changes the present working directory to the pauprun directory you created earlier. This will ensure that anything saved by PAUP* ends up in this directory rather than in your home directory. Note that $HOME is like a macro that will be expanded to the full path to your home directory.
  • The third and last line simply starts up PAUP* and executes the run.nex file. The -n flag tells PAUP* that no human is going to be listening or answering questions, so it should just use default answers to any questions it needs to ask during the run.
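
SGE scripts can also contain additional #$ directives. For example, the -N option gives the job a more descriptive name than the script's file name in the qstat listing (see below). The following is only a sketch, and the exact set of options depends on the SGE version installed, so check man qsub on the cluster:

#$ -o junk.txt -j y
#$ -N paupjob
cd $HOME/pauprun
paup -n run.nex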

Submitting a job using qsub

Now you are ready to start the analysis. Make sure you are in your home directory, then type

qsub gopaup

Checking status using qstat

You can see if your run is still going using the qstat command:

qstat

If it is running, you will see an entry containing gopaup and the state will be r, for running. Here is what it looked like for me (I've omitted the rightmost part):

job-ID  prior   name       user         state submit/start at     queue
-----------------------------------------------------------------------------------------------
   5540 0.55500 gopaup     plewis       r     02/18/2007 13:38:47 all.q@node003.cluster.private
   5535 0.55500 bskinkultr jockusch     r     02/16/2007 16:18:49 all.q@node006.cluster.private
   5525 0.55500 mb.sh      plapierre    r     02/15/2007 10:46:17 all.q@node010.cluster.private 
   5433 0.55500 mb.sh      plapierre    r     02/08/2007 18:40:50 all.q@node012.cluster.private
   5539 0.55500 bskinkultr jockusch     r     02/18/2007 12:22:55 all.q@node015.cluster.private

My run is listed first, and is currently running on node 3 of the cluster.
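
On a busy cluster this listing can get long. If you only want to see your own jobs, qstat accepts a -u option (this should work with the SGE version on the cluster; if not, man qstat will show the available options):

qstat -u username

where username is again your own user name on the cluster.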

Killing a job using qdel

Sometimes it is clear that an analysis is not going to do what you wanted it to. Suppose that just after you press the Enter key to start an analysis, you realize that you forgot to put in a savetrees command in your paup block (so in the end you will not be able to see the results of the search). In such situations, you really want to just kill the job, fix the problem, and then start it up again. Use the qdel command for this. Note that in the output of the qstat command above, my run had a job-ID equal to 5540. I could kill the job like this:

qdel 5540

SGE will say that it has scheduled the job for deletion, but in my experience it actually kills the job almost instantaneously. Be sure to delete any output files that have already been created before starting your run over again.

While PAUP* is running

While PAUP* is running, you can use cat to look at the log file:

cd pauprun
cat algae.output.txt

Using PSFTP to download the resulting treefile

When PAUP* finishes, qstat will no longer list your process. At this point, you need to use PSFTP to get the log and tree files that were saved back to your local computer. Start up PSFTP and type

open bbcxsrv1.biotech.uconn.edu

Note that both the log file (algae.output.txt) and the tree file (algae.ml.tre) are in the pauprun directory. PSFTP dropped you in your home directory, but you can tell PSFTP to change to the pauprun directory in the same way you tell UNIX that you want to change directories:

cd pauprun

You can likewise type ls in PSFTP to get a listing of files. Do this now to make sure you see the two files you want to download. Now, use the get command to download the files:

get algae.ml.tre
get algae.output.txt
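
If your version of PSFTP supports the mget command (recent versions do), you can fetch both files with a single command:

mget algae.ml.tre algae.output.txt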

Finally, close PSFTP using any of the following commands: quit, exit, bye.

Why both junk.txt and algae.output.txt?

In your home directory, SGE saved the output that PAUP* normally sends to the console to a file named junk.txt (we specified that it should do this in the gopaup file). I had you name this file junk.txt because you will not need this file after the run: the log command in your paup block ends up saving the same output in the file algae.output.txt. Why did we tell PAUP* to start a log file if SGE was going to save the output anyway? The main reason is that you can view the log file during the run, but you cannot view junk.txt until the run is finished. There will come a day when you have a PAUP* run that has been going for several days and want to know whether it is 10% or 90% finished. At this point you will appreciate being able to view the output file!

Delete junk.txt using the rm command

Because you do not need junk.txt, delete it using the rm command:

cd
rm -f junk.txt

You also no longer need the log and tree files because you downloaded them to your local computer using PSFTP:

cd pauprun
rm -f algae.ml.tre
rm -f algae.output.txt

It is a good idea to delete files you no longer need for two reasons:

  • you will later wonder whether you downloaded those files to your local machine and will have to spend time making sure you actually have saved the results locally
  • our cluster only has so much disk space, and thus it is just not possible for everyone to keep every file they ever created

Tips and tricks

Here are some miscellaneous tips and tricks to make your life easier while using PuTTY to communicate with the cluster.

Command completion using the tab key

You can often get away with only typing the first few letters of a filename; press the Tab key after the first few letters and the shell will try to complete the rest of the name for you. For example, cd into the pauprun directory, then type

cat alg<TAB>

If algae.nex is the only file in the directory whose name begins with alg, then the shell will type in the rest of the file name for you.

Wildcards

I've already mentioned this tip, but it bears repeating. When using most UNIX commands that accept filenames (e.g. ls, rm, mv, cp), you can place an asterisk inside the filename to stand in for any number of letters. So

ls algae*

will produce output like this

algae.ml.tre    algae.nex   algae.output.txt

Man pages

If you want to learn more options for any of the UNIX commands, you can use the man command to see the manual for that command. For example, here's how to see the manual describing the ls command:

man ls

It is important to know how to escape from a man page! The way to get out is to type the letter q. You can page down using Ctrl-f, page up through a man page using Ctrl-b, go to the end using Shift-g and return to the very beginning using 1,Shift-g (that is, type a 1, release it, then type Shift-g). You can also move line by line in a man page using the down and up arrows, and page by page using the PgUp and PgDn keys.

Part B: Starting a GARLI run on the cluster

GARLI is a program written by Derrick Zwickl for estimating the phylogeny using maximum likelihood, and is currently one of the best programs to use if you have a large problem (i.e. many taxa). I used GARLI to estimate the 738-taxon green plant phylogeny on the poster outside my office door in less than 5 hours. Another excellent ML program for large problems is RAxML, written by Alexandros Stamatakis.

GARLI does not give you much choice in the way of search strategy or substitution model. It uses the GTR+I+G model (General Time Reversible substitution model, with invariable sites and discrete gamma rate heterogeneity), and uses a genetic algorithm search strategy. The genetic algorithm (or GA, for short) search strategy is like other heuristic search strategies in that it cannot guarantee that the optimal tree will be found. Thus, as with all heuristic searches, it is a good idea to run GARLI several times (using different pseudorandom number seeds) to see if there is any variation in the estimated tree.

Today you will run GARLI on the cluster for a dataset with 50 taxa. This is not a particularly large problem, but then you only have an hour or so to get this done!

Preparing the GARLI control file

Like many non-interactive programs, GARLI uses a control file to specify the settings it will use during a run. Here is the control file distributed with GARLI:

[general]
datafname = rana.phy
streefname = random
ofprefix = ranaGarli
randseed = -1
megsclamemory = 500
availablememory = 512
logevery = 10
saveevery = 100
refinestart = 1
outputeachbettertopology = 1
enforcetermconditions = 1
genthreshfortopoterm = 20000
scorethreshforterm = .05
significanttopochange = 0.05
outputphyliptree = 0
outputmostlyuselessfiles = 0
dontinferproportioninvariant = 0  

[master]
nindivs = 4
holdover = 1
selectionintensity = .5
holdoverpenalty = 0
stopgen = 5000000
stoptime = 5000000

startoptprec = .5
minoptprec = .01
numberofprecreductions = 40
topoweight = 1.0
modweight = .05
brlenweight = .2
randnniweight = .2
randsprweight = .3
limsprweight =  .5
intervallength = 100
intervalstostore = 5

limsprrange = 6
meanbrlenmuts = 5
gammashapebrlen = 1000
gammashapemodel = 1000 

bootstrapreps = 0
inferinternalstateprobs = 0

Most of these settings are fine, but you will need to change a few of them before running GARLI.

Using curl to download the file

The first step is to get this file onto the cluster where you can use pico to edit it. You have already learned several ways to create a file on the cluster:

  • cat - > filename ... Ctrl-d
  • pico filename

Now you will learn a third way: using the curl program. Curl is useful when you know that a file exists on the internet at some URL. The name curl is short for "Client URL" - it allows you to copy a file at a particular address (URL) on the internet to your present working directory. I have placed a copy of the garli.conf file above at the following URL:

http://hydrodictyon.eeb.uconn.edu/eeb349/garli.conf

You can view it in a web browser if you wish. Here is how to get this file copied from that web address to your home directory on the cluster. I assume you have already logged into the cluster using PuTTY:

curl http://hydrodictyon.eeb.uconn.edu/eeb349/garli.conf > garli.conf

Curl acts a lot like cat: it basically spews the file to the console, so if you want to save it to a file, you need to redirect the output of curl to a file, which is what the > garli.conf part on the end is about.
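
If the redirection syntax seems awkward, curl also has a -o option that writes the download directly to a named file; the following command should be equivalent:

curl -o garli.conf http://hydrodictyon.eeb.uconn.edu/eeb349/garli.conf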

Editing garli.conf with pico

Now fire up pico and edit this file:

pico garli.conf

You will only need to change two lines. Change this line

datafname = rana.phy

so that it looks like this instead

datafname = rbcl50.nex

Then change this line

ofprefix = ranaGarli

so that it looks like this instead

ofprefix = 50taxa

The ofprefix is used by GARLI to begin the name of all output files. I usually use something different than the data file name here. If you eventually want to delete all of the various files that GARLI creates, you can just say

rm -f 50taxa*

If, however, you specify ofprefix = rbcl50, then the command

rm -f rbcl50* 

would not only wipe out the files GARLI created, but would also delete the data file!

Once you have finished changing those two lines, exit pico using Ctrl-x, then create a directory named garlirun and move garli.conf into that directory.
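
Using commands you learned in Part A, that last step might look like this (starting from your home directory):

mkdir garlirun
mv garli.conf garlirun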

Download the data file using curl

I have placed the data file (rbcL50.nex) at the following address:

http://hydrodictyon.eeb.uconn.edu/eeb349/rbcL50.nex

so you can use curl to download this file to the garlirun directory as follows:

cd $HOME/garlirun
curl http://hydrodictyon.eeb.uconn.edu/eeb349/rbcL50.nex > rbcL50.nex

Three changes were made to this section:

  • The last line above has been corrected (previously it did not include the > rbcL50.nex part on the end)
  • I changed rbcl50.nex to rbcL50.nex to avoid confusing the lowercase letter L (l) with the number one (1)
  • I substituted a Tomato sequence for one of the two Spinach sequences, so now there is no duplication in the rbcL50.nex data file (but note that the data set has changed)

Preparing the gogarli SGE script

Now return to your home directory (using the cd command) and create a gogarli script that will be fed to qsub to start the analysis. Use either pico or cat to create the file with this text:

#$ -o junk.txt -j y
cd $HOME/garlirun
Garli.94 garli.conf

This file looks very similar to the gopaup script you created in Part A. The only differences are that the data and control files are in the directory garlirun (not pauprun), the name of the program is Garli.94 (meaning GARLI version 0.94), and GARLI expects the name of its control file (garli.conf) on the command line rather than the name of the data file (rbcL50.nex). Remember that the name of the data file is specified inside the control file.

Running GARLI

Run GARLI by issuing the qsub command:

qsub gogarli

Check progress every few minutes using the qstat command. This run will take 15 or 20 minutes. If you get bored, you can cd into the garlirun directory and use this command to see the tail end of the log file that GARLI creates automatically:

tail 50taxa.log00.log

The tail command is like the cat command except that it only shows you the last few lines of the file (which often is just what you need).
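
If you would rather watch the log grow in real time, tail has a -f (follow) option that keeps printing new lines as GARLI appends them. Press Ctrl-c when you are done watching (this stops tail, not the GARLI run):

tail -f 50taxa.log00.log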

Mailing the tree to yourself

After GARLI has finished, you should download the tree file (50taxa.best.tre) using the PSFTP get command, but here is another handy trick: you can email the tree to yourself using this command (issue this from within the garlirun directory where the tree file is located):

mail paul.lewis@uconn.edu < 50taxa.best.tre

This command will send mail to paul.lewis@uconn.edu, and the body of the email message will come from the file 50taxa.best.tre!
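
On most UNIX systems, mail also accepts a -s option for supplying a subject line, which makes the message easier to spot in your inbox. For example:

mail -s "GARLI tree" paul.lewis@uconn.edu < 50taxa.best.tre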