/ptmp on bluefire is going away in September, 2012
Please start using the /glade disk system

Hi all:

For those of you who use the /ptmp disk on bluefire or
other CISL machines, this is a good time to switch to
the /glade disk system. /ptmp will go away with bluefire,
and the /glade disk will be available on all NWSC machines.
The /glade disk is currently accessible from all CISL machines.
The CISL web page for /glade is here:


There are four sectors of the glade disk, each with its
own quota and scrubbing policy:

Home:    /glade/home/username         10GB quota, regularly backed up
Scratch: /glade/scratch/username      no quota, 30-day scrubbing
Work:    /glade/user/username         300GB quota, 90-day scrubbing
Project: /glade/proj2/hao/tgcm/data   (same as /hao/tgcm/data)

Every user already has a home directory and a directory in the
scratch space. If you do not yet have a work area, you can request
one by writing to cislhelp@ucar.edu. Work space is scrubbed
only when the entire /glade/user disk is >= 95% full, and even
then only files that have not been accessed in the last 90 days
are removed.

For running the TIEGCM/TIMEGCM models, you can set $TGCMDATA
to /hao/tgcm/data. A reasonable strategy is to keep source
code and svn working directories under your home directory, and
to build and execute the model in an analogous directory under
your work space or in the scratch area. Post-processing with IDL
or other languages can be performed on any glade disk from the
DASG machines (storm, mirage, etc.).
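As a sketch, the layout above might look like the following in the shell. The
"username" and "tiegcm" path components are illustrative; substitute your own
login and model directory names:

```shell
# Illustrative /glade layout (replace "username" with your own login).
# Source code and svn working copies stay under home, which is backed up:
SRC=/glade/home/username/tiegcm

# Build and execute the model in the matching directory under work
# space (or under scratch for short-lived runs):
EXECDIR=/glade/user/username/tiegcm

# Point the model at the shared data directory
# (csh/tcsh users would instead write: setenv TGCMDATA /hao/tgcm/data):
export TGCMDATA=/hao/tgcm/data
```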

For example, I might have svn source code on /glade/home/foster/tiegcm,
and set execdir = /glade/user/foster/tiegcm (and set $TGCMDATA =
/hao/tgcm/data). If I am making production runs, I will save
history files to the HPSS. For post-processing, debugging, etc.,
I will ssh to storm0 or mirage0, cd to wherever my history
files are stored, and use tgcmproc_f90, tgcmproc_idl, matlab,
ncl, nco, etc.
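A typical post-processing session along these lines might look like the
sketch below. The machine and tool names are from the text above; the
history-file path is illustrative:

```shell
# Log in to one of the DASG machines:
ssh mirage0

# Go to wherever the history files are stored (illustrative path):
cd /glade/user/username/tiegcm

# Process with any of the usual tools, e.g.:
tgcmproc_f90      # or tgcmproc_idl, matlab, ncl, nco, ...
```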

If we start doing this now, we will be in good shape when the
NWSC machines come online.