Showing posts with label PSC. Show all posts

Thursday, August 27, 2015

Greenfield, sample PBS job scripts



This is an example of running a serial job on Greenfield using PBS:

#!/bin/csh
#  Request 15 cores 
#  Note that this means the job will be allocated 750GB of memory 
#PBS -l nodes=1:ppn=15
#  Request 30 minutes of walltime
#PBS -l walltime=30:00
#  Combine standard output and error into one file
#PBS -j oe 
set echo
cd $PBS_O_WORKDIR
module load R/3.2.1-mkl
# run my executable
R --vanilla --slave CMD BATCH ./example.R


And here is an example of "packing" several runs into one job,
so that they all run simultaneously. The output files must all have different names:


#!/bin/csh
#  Request 15 cores 
#  Note that this means the job will be allocated 750GB of memory 
#PBS -l nodes=1:ppn=15
#  Request 30 minutes of walltime
#PBS -l walltime=30:00
#  Combine standard output and error into one file
#PBS -j oe 
set echo
cd $PBS_O_WORKDIR
# Define where /tmp files will be written
setenv TMPDIR $PBS_O_WORKDIR
module load R/3.2.1-mkl
# run my executable
numactl -C +0 R --vanilla --slave CMD BATCH ./example0.R &
numactl -C +1 R --vanilla --slave CMD BATCH ./example1.R &
numactl -C +2 R --vanilla --slave CMD BATCH ./example2.R &
numactl -C +3 R --vanilla --slave CMD BATCH ./example3.R &
numactl -C +4 R --vanilla --slave CMD BATCH ./example4.R &
numactl -C +5 R --vanilla --slave CMD BATCH ./example5.R &
numactl -C +6 R --vanilla --slave CMD BATCH ./example6.R &
numactl -C +7 R --vanilla --slave CMD BATCH ./example7.R &
numactl -C +8 R --vanilla --slave CMD BATCH ./example8.R &
numactl -C +9 R --vanilla --slave CMD BATCH ./example9.R &
numactl -C +10 R --vanilla --slave CMD BATCH ./example10.R &
numactl -C +11 R --vanilla --slave CMD BATCH ./example11.R &
numactl -C +12 R --vanilla --slave CMD BATCH ./example12.R &
numactl -C +13 R --vanilla --slave CMD BATCH ./example13.R &
numactl -C +14 R --vanilla --slave CMD BATCH ./example14.R &
wait
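The 15 numbered lines above can also be generated with a loop. A sketch in bash (the job script above uses csh, so this is an illustrative variant, not a drop-in replacement):

```shell
#!/bin/bash
# Generate the 15 backgrounded numactl runs above with a loop instead of
# writing each line out by hand (bash sketch; adapt for csh before use).
cmds=()
for i in $(seq 0 14); do
    cmds+=("numactl -C +$i R --vanilla --slave CMD BATCH ./example$i.R &")
done
# Print the generated command lines; in a real job script you would run
# them and then 'wait' for all background runs to finish.
printf '%s\n' "${cmds[@]}"
```

Each run still gets its own input file (example0.R … example14.R), which keeps the output names distinct as required.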

Wednesday, August 26, 2015

greenfield job run (failed)



temporary storage place:  /crucible/mc48o9p/hqin2

emacs ms02.pbs

10:44am, $ qsub ms02.pbs

Error: 

Testing in $HOME with myR.R.
By 11:15am, I had successfully submitted a job to greenfield.

$ cat foo.pbs
#PBS -l nodes=1:ppn=1
#PBS -l walltime=5:00
#PBS -o test.out
#PBS -j oe
set echo
source /usr/share/modules/init/bash
hostname
module load R

R -f myR.R

greenfield, file transfer


xx@login.xsede.org


gsissh greenfield.psc.xsede.org

Need to use /crucible for file storage
"Your /crucible home directory is /crucible/group-name/user-name, where group-name is the 7 character name for the PSC group associated with your grant."

$ ls /crucible/mc48o9p/hqin2

This works!

Monday, July 20, 2015

backup $SCRATCH from blacklight to byte

log into blacklight through xsede

hqin2@tg-login1:/brashear/hqin2> ls /arc/users/hqin2

0.ginppi.tar.gz  0.tar  mactower-network-failure-simulation-master.zip
/* I moved these files into a new folder /old */

hqin2@tg-login1:/arc/users/hqin2> cd $SCRATCH
hqin2@tg-login1:/brashear/hqin2> pwd

/brashear/hqin2
hqin2@tg-login1:/brashear/hqin2> tar cvf mactower-network-failure-simulation-master.20150720.tar mactower-network-failure-simulation-master/ &

cp mactower-network-failure-simulation-master.20150720.tar /arc/users/hqin2/.
/* this seems to freeze my terminal. */

On byte:
Byte-2:blacklight hqin$ pwd

/Users/hqin/github/mactower-network-failure-simulation/blacklight
scp "hqin2@data.psc.xsede.org:mactower-network-failure-simulation-master.20150720.tar.gz" .

References:
http://hongqinlab.blogspot.com/2015/06/20150623tue-0624wed-0625thu-blacklight.html

blacklight --> greenfield --> bridges.

Notice email:

PSC is preparing to introduce its next-generation XSEDE-allocated system. Bridges is planned to enter production in January 2016 (see http://psc.edu/bridges). 

Blacklight will be decommissioned on August 15, 2015.

For the transition period, PSC will provide Greenfield, a new resource that, like Blacklight, features large shared memory. We are developing the user guide at http://www.psc.edu/index.php/resources-for-users/computing-resources/greenfield. Note that the content of this document is evolving.

While the computational capacity of Greenfield is less than that of Blacklight, we believe that your project can make good use of Greenfield and prepare you to continue on Bridges. Your accounts on Blacklight will remain active until 11pm EDT on August 15. Any files left on Blacklight’s $SCRATCH filesystem after August 15 will be lost. If you have an allocation on the Data Supercell (DSC), it will remain active for the remainder of your current XSEDE grant. DSC will be accessible from Greenfield and then from Bridges.

If you wish to discuss other options, or if you have any questions, please contact remarks@psc.edu at your earliest convenience.

Tuesday, June 23, 2015

transfer files to blacklight 20150607 and 20150608

Sunday 20150607

After VPN into Spelman network, at helen.spelman.edu, scp to data.psc.edu using my xsede login works.

At helen: scp test.txt hqin2@blacklight.psc.xsede.org:./.

6pm. Somehow, "mv" and "cp" from the login node to $SCRATCH freezes my shell.


================
Monday 20150608

From http://www.psc.edu/index.php/resources-for-users/computing-resources/blacklight
A sample set of commands on your local machine would be
    tar cf sourcedir.tar sourcedir
    scp sourcedir.tar joeuser@data.psc.xsede.org:
For 'joeuser' you substitute your PSC userid. You can compress your tarball before you transfer it to speed up your transfer times. Then you could login to blacklight and issue the commands
    cd $SCRATCH
    tar xf /arc/users/joeuser/sourcedir.tar
Again for 'joeuser' you substitute your userid. This will unroll your tar file in your scratch directory.
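The quoted round trip can be sketched end to end with compression. This is a local dry run using throwaway temp-dir paths; the actual scp step is only indicated in a comment, since it needs the real PSC hosts and your userid:

```shell
#!/bin/bash
# Local dry run of the tar/scp/untar round trip quoted above, with the
# compression the PSC docs recommend. Paths here are temp-dir stand-ins.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir sourcedir
echo "data" > sourcedir/file.txt
tar czf sourcedir.tar.gz sourcedir                   # compress before transfer
# scp sourcedir.tar.gz joeuser@data.psc.xsede.org:   # run from your machine
mkdir unpack
tar xzf sourcedir.tar.gz -C unpack                   # what you'd run in $SCRATCH
cat unpack/sourcedir/file.txt
```

With gzip in play, use `tar xzf` (not plain `tar xf`) on the receiving side, or let a modern tar autodetect the compression.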


On my byte laptop without VPN, I tried scp transfers, which took ~3–3.5 minutes each.
Byte-2:projects hqin$ scp 0.ginppi.tar.gz hqin2@blacklight.psc.xsede.org:./.
hqin2@blacklight.psc.xsede.org's password:
0.ginppi.tar.gz                                                                                               100%  386MB   1.9MB/s   03:23

Byte-2:projects hqin$ scp 0.ginppi.tar.gz hqin2@data.psc.edu:./.
hqin2@data.psc.edu's password:
0.ginppi.tar.gz                                                                                               100%  386MB   2.1MB/s   03:02



On blacklight, I found data.psc.edu (or data.psc.xsede.org) is linked to /arc/users/hqin2:
hqin2@tg-login1:/brashear/hqin2> ls -lh /arc/users/hqin2
total 2.6G
-rw-r--r-- 1 hqin2 mc48o9p 3.6G 2015-06-07 16:50 0.tar
So, this is an entry and exit point for transferring data between my computer and blacklight.

hqin2@tg-login1:~> cd $SCRATCH  (This is probably a key step)
hqin2@tg-login1:/brashear/hqin2> ls
qin  test.txt
hqin2@tg-login1:/brashear/hqin2> mkdir tmp
hqin2@tg-login1:/brashear/hqin2> cd tmp/
hqin2@tg-login1:/brashear/hqin2/tmp> tar xf /arc/users/hqin2/0.tar

hqin2@tg-login1:/brashear/hqin2/tmp> ll
total 4
drwxr-xr-x 15 hqin2 mc48o9p 4096 2014-07-20 10:39 0.ginppi.reliability.simulation
hqin2@tg-login1:/brashear/hqin2/tmp> du -sh
3.5G    .

hqin2@tg-login1:/brashear/hqin2> tar xvfz /arc/users/hqin2/0.ginppi.tar.gz

OK, I now know how to transfer files to blacklight.




Tuesday, January 13, 2015

PSC blacklight trial, 20150113


Instructions: 
"Once you login you will be in your $HOME directory (/usr/users/1/hqin2), which is backed up but has a quota of 5 Gbytes. You also have access to a $SCRATCH directory (/brashear/hqin2), which has essentially unlimited storage and is not backed up. Files in $SCRATCH may be removed, oldest first, to make room when needed, though we try to keep them for 2 weeks at least.

There is a file archiver, you can access it as the directory /arc/users/hqin2/ from the login node, where you can store whatever you need to keep long-term (while your allocation is active, of course). You can also connect to the archiver via sftp, at data.psc.edu. You can use Fugu or any other graphical user interface if you prefer. This is the simplest way to transfer files to PSC, you can see them in the /arc directory from the login node and copy them to/from the $HOME or $SCRATCH directory as needed.

When you run and write data, we prefer that you write to $SCRATCH, which is a distributed file system and can handle the load, and not to $HOME."
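The advice above (write to $SCRATCH, not $HOME) can be sketched as a pattern. The paths below are temp-dir stand-ins chosen so the sketch runs anywhere, not the real cluster paths:

```shell
#!/bin/bash
# Pattern for the advice above: do the heavy writing under $SCRATCH and
# copy only the small result back to the backed-up home area.
# SCRATCH_DIR and HOME_DIR are illustrative stand-ins (temp dirs).
SCRATCH_DIR=$(mktemp -d)    # stands in for $SCRATCH (/brashear/hqin2)
HOME_DIR=$(mktemp -d)       # stands in for $HOME (/usr/users/1/hqin2)
mkdir -p "$SCRATCH_DIR/myrun"
cd "$SCRATCH_DIR/myrun"
echo "simulation result" > output.txt   # large outputs go to scratch
cp output.txt "$HOME_DIR/"              # keep only the result long-term
```

This keeps the 5 GB $HOME quota free while letting the distributed $SCRATCH filesystem absorb the I/O load.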


hqin2@tg-login1:~> echo $HOME
/usr/users/1/hqin2
hqin2@tg-login1:~> echo $SCRATCH
/brashear/hqin2
hqin2@tg-login1:~> du /arc/users/hqin2
2 /arc/users/hqin2
hqin2@tg-login1:~> df /arc/users/hqin2
Filesystem           1K-blocks      Used Available Use% Mounted on
/arc                 3656882477312 2021505932032 1635376545280  56% /arc
hqin2@tg-login1:~> df -h /arc/users/hqin2
Filesystem            Size  Used Avail Use% Mounted on
/arc                  3.4P  1.9P  1.5P  56% /arc


Instructions:
"Look at this webpage:
http://www.psc.edu/index.php/computing-resources/blacklight

it has examples of scripts for running batch jobs, in particular I think you will want to run an 'interactive batch job' to check that your code works.

    qsub -I -l ncpus=16 -l walltime=0:30:00 -q debug

once you get a prompt, you are on the 'backend', or 'compute node', i.e. Blacklight proper, and everything runs there, not on the login node.

Let's say  I have a trivial R example:

y <- rnorm(10)
print(y)

this is saved in a file (example.R), and I want to run it. So I type the 'qsub ....' command above, and after I get an interactive prompt, enter the following:

source /usr/share/modules/init/bash
module load R

R --slave CMD BATCH ./example.R

and the output appears in 'example.Rout'.  OK, so I'm done. To get out of the 'compute node', I type 'exit' and press enter.

The first line (source ...) loads the definition of the 'module' command, the second uses the module command to put (a version of) R in my path, and the last executes the R script in batch mode.

Once I have figured out that everything is working, I can run the script in full batch mode (non-interactively) by putting this into a PBS script, i.e. a file, let's call it 'R.pbs':

#!/bin/bash
#PBS -q batch
#PBS -l ncpus=16
#PBS -l walltime=0:03:00

source /usr/share/modules/init/bash
module load R
cd $PBS_O_WORKDIR

ja
R --slave CMD BATCH ./example.R
ja -chlst

So you are just entering the commands you typed interactively, after a line that indicates what 'shell' you want to run under, and some options to the batch scheduler (the number of cores and the minutes, which you had entered on the command line before). What is new is the "cd $PBS_O_WORKDIR", which makes the script start in whatever directory you were in when you submitted the job. Also new are the lines "ja" and "ja -chlst" surrounding the call to R. They are not essential, but they collect useful information on the job (maximum memory used, time spent, CPU time used, etc.).

So you have this script called 'R.pbs',  and you can submit it to the scheduler with the command

    qsub R.pbs

The scheduler will reply with something like:
394363.tg-login1.blacklight.psc.teragrid.org

the number is the 'job ID' of your PBS job, which you can use to ask for more information from the scheduler.  You can always ask it 'what jobs do I have in the queue' like this:

    qstat -u hqin2

and it will list them all, together with the state (R means running, Q means it is still in the queue). If it lists nothing, all your jobs have completed. After the job completes, a couple of files should appear in the directory where you put the script. Since I didn't use any option to give the job a name, the files are named {script name}.o#### and {script name}.e####; in this example, R.pbs.o########## and R.pbs.e#######. The 'o' file has any output the job wrote to standard output, the 'e' file anything that would normally go to standard error. You can also redirect output from any command in the job script to a file."
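The queue-state bookkeeping described above can be mimicked offline. This sketch filters a saved qstat-style listing (hypothetical rows, modeled on the listings in this post) for jobs still in state Q:

```shell
#!/bin/bash
# Offline sketch: pick out queued jobs from a saved qstat-style listing.
# The listing here is hypothetical, modeled on the output in this post.
listing='418673.tg-login1  hqin2  batch_r  R.pbs   123  1  16  --  00:03 R 00:01
418692.tg-login1  hqin2  batch_r  R2.pbs   --   --  16  --  00:03 Q --'
# Column 10 is the state: R = running, Q = still queued.
queued=$(printf '%s\n' "$listing" | awk '$10 == "Q" { print $1 }')
echo "$queued"    # → 418692.tg-login1
```

On the live system the same filter would be `qstat -u hqin2 | awk '$10 == "Q"'`, assuming the column layout shown above.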

source /usr/share/modules/init/bash
module load R
R --slave CMD BATCH ./example.R
hqin2@tg-login1:~> ll example.R* #output is example.Rout
-rw-r--r-- 1 hqin2 mc48o9p  24 2015-01-13 20:47 example.R
-rw-r--r-- 1 hqin2 mc48o9p 942 2015-01-13 20:48 example.Rout
hqin2@tg-login1:~> nano -w R.pbs
hqin2@tg-login1:~> pwd
/usr/users/1/hqin2
hqin2@tg-login1:~> qsub R.pbs 
418673.tg-login1.blacklight.psc.teragrid.org

hqin2@tg-login1:~> qstat -u hqin2

tg-login1.blacklight.psc.teragrid.org: 
                                                                    Req'd  Req'd   Elap
Job ID               Username Queue    Jobname    SessID  NDS  TSK  Memory Time  S Time
-------------------- -------- -------- ---------- ------- ---- ---- ------ ----- - -----
418673.tg-login1     hqin2    batch_r  R.pbs          --   --    16    --  00:03 Q   -- 
hqin2@tg-login1:~> 

Nothing was in the output file. So, I changed the R invocation to "R -f example.R"

hqin2@tg-login1:~/test> ls
example.R  R2.pbs
hqin2@tg-login1:~/test> ll
total 8
-rw-r--r-- 1 hqin2 mc48o9p  24 2015-01-13 22:33 example.R
-rw-r--r-- 1 hqin2 mc48o9p 199 2015-01-13 22:33 R2.pbs
hqin2@tg-login1:~/test> qsub R2.pbs 
418692.tg-login1.blacklight.psc.teragrid.org
hqin2@tg-login1:~/test> qstat -u hqin2

tg-login1.blacklight.psc.teragrid.org: 
                                                                    Req'd  Req'd   Elap
Job ID               Username Queue    Jobname    SessID  NDS  TSK  Memory Time  S Time
-------------------- -------- -------- ---------- ------- ---- ---- ------ ----- - -----
418692.tg-login1     hqin2    batch_r  R2.pbs         --   --    16    --  00:03 Q   -- 
hqin2@tg-login1:~/test> cat R2.pbs 
#!/bin/bash
#PBS -q batch
#PBS -l ncpus=16
#PBS -l walltime=0:03:00

source /usr/share/modules/init/bash
module load R
cd $PBS_O_WORKDIR

ja
#R --slave CMD BATCH ./example.R
R -f example.R

ja -chlst


hqin2@tg-login1:~/test> ll
total 16
-rw-r--r-- 1 hqin2 mc48o9p   24 2015-01-13 22:33 example.R
-rw-r--r-- 1 hqin2 mc48o9p  199 2015-01-13 22:33 R2.pbs
-rw------- 1 hqin2 mc48o9p    0 2015-01-13 23:13 R2.pbs.e418692
-rw------- 1 hqin2 mc48o9p 4905 2015-01-13 23:13 R2.pbs.o418692
hqin2@tg-login1:~/test> cat R2.pbs.o418692 

R version 2.15.3 (2013-03-01) -- "Security Blanket"
Copyright (C) 2013 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> y = rnorm(10)
> print (y)
 [1] -0.46271891  0.34547494 -0.97556883 -0.64659599  0.01052027  0.06472313
 [7]  0.43858725  0.83961732 -0.74945123  0.15012829


Job Accounting - Command Report
===============================

    Command       Started    Elapsed    User CPU    Sys CPU       CPU      Block I/O    Swap In      CPU MEM        Characters           Logical I/O      CoreMem   VirtMem   Ex
     Name           At       Seconds    Seconds     Seconds    Delay Secs  Delay Secs  Delay Secs  Avg Mbytes     Read     Written     Read      Write    HiValue   HiValue   St   Ni  Fl   SBU's 
===============  ========  ==========  ==========  ==========  ==========  ==========  ==========  ==========  =========  =========  ========  ========  ========  ========  ===  ===  ==  =======
# CFG   ON(    1) (    7)  23:13:32 01/13/2015  System:  Linux bl0.psc.teragrid.org 2.6.32.49-0.3-default #1 SMP 2011-12-02 11:28:04 +0100 x86_64
ja               23:13:32        0.31        0.00        0.00        0.00        0.00        0.00        0.85      0.019      0.000        19         3      1064     23780    0    0         0.00
uname            23:13:32        0.00        0.00        0.00        0.00        0.00        0.00       12.64      0.004      0.000         8         1       664      5316    0    0         0.00
R                23:13:32        0.00        0.00        0.01        0.00        0.00        0.00        0.00      0.000      0.000         0         1       884     12616    0    0  F      0.00
sed              23:13:32        0.00        0.00        0.01        0.00        0.00        0.00        0.00      0.004      0.000        10         1       816      5396    0    0         0.00
R                23:13:32        0.00        0.00        0.01        0.00        0.00        0.00        0.00      0.000      0.000         0         1       888     12616    0    0  F      0.00
sed              23:13:32        0.00        0.00        0.01        0.00        0.00        0.00        0.00      0.004      0.000        10         1       812      5396    0    0         0.00
R                23:13:32        0.00        0.00        0.01        0.00        0.00        0.00        0.00      0.000      0.000         0         0       856     12612    0    0  F      0.00
rm               23:13:33        0.01        0.00        0.00        0.00        0.00        0.00        0.96      0.012      0.000        20         0       712      5336    0    0         0.00
R                23:13:33        0.35        0.22        0.08        0.00        0.00        0.00       70.16      4.166      0.001       190        25     32412     75240    0    0         0.00


Job CSA Accounting - Summary Report
====================================

Job Accounting File Name         : /dev/tmpfs/418692/.jacct65df3
Operating System                 : Linux bl0.psc.teragrid.org 2.6.32.49-0.3-default #1 SMP 2011-12-02 11:28:04 +0100 x86_64
User Name (ID)                   : hqin2 (51231)
Group Name (ID)                  : mc48o9p (15132)
Project Name (ID)                : ? (0)
Job ID                           : 0x65df3
Report Starts                    : 01/13/15 23:13:32
Report Ends                      : 01/13/15 23:13:33
Elapsed Time                     :            1      Seconds
User CPU Time                    :            0.2200 Seconds
System CPU Time                  :            0.1090 Seconds
CPU Time Core Memory Integral    :            5.2741 Mbyte-seconds
CPU Time Virtual Memory Integral :           15.2699 Mbyte-seconds
Maximum Core Memory Used         :           31.6523 Mbytes
Maximum Virtual Memory Used      :           73.4766 Mbytes
Characters Read                  :            4.2103 Mbytes
Characters Written               :            0.0012 Mbytes
Logical I/O Read Requests        :          257
Logical I/O Write Requests       :           33
CPU Delay                        :            0.0030 Seconds
Block I/O Delay                  :            0.0002 Seconds
Swap In Delay                    :            0.0000 Seconds
Number of Commands               :            9
System Billing Units             :            0.0000

hqin2@tg-login1:~/test> 


Note: I compared today's R.pbs with job1.sh from 20150112.
The line "source /usr/share/modules/init/bash" seems to be critical. It makes sure that "module" can be recognized.
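Given that note, a defensive variant of the critical line might guard the source, so the same script also runs on machines where environment modules are absent (a sketch, not tested on blacklight itself):

```shell
#!/bin/bash
# Guarded version of the critical line noted above: only source the
# modules init file if it exists, so the script degrades gracefully.
MODULES_INIT=/usr/share/modules/init/bash
if [ -f "$MODULES_INIT" ]; then
    source "$MODULES_INIT"      # defines the 'module' shell function
    module load R
    STATUS="modules loaded"
else
    STATUS="modules unavailable"
fi
echo "$STATUS"
```

Without the guard, running the script on a machine lacking modules reproduces exactly the "module: command not found" error recorded below in the 20150111 entry.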

Sunday, January 11, 2015

XSEDE trial, 20150111Sun, blacklight, OSG



Byte:xsede hqin$ ssh hongqin@login.xsede.org 
hongqin@login.xsede.org's password: 
Last login: Wed Aug 13 17:38:55 2014 from spelman-fw.spelman.edu

Welcome to the XSEDE Single Sign-On (SSO) Hub!

Your storage on this machine is limited to 100MB.

You may connect from here to any XSEDE resource on which you have an account.

To view a list of sites where you actually have an account, visit:
https://portal.xsede.org/group/xup/accounts

Here are the login commands for common XSEDE resources:

Blacklight: gsissh blacklight.psc.xsede.org
Darter: gsissh gsissh.darter.nics.xsede.org
Gordon Compute Cluster: gsissh gordon.sdsc.xsede.org
Gordon ION: gsissh gordon.sdsc.xsede.org
Keeneland: gsissh gsissh.keeneland.gatech.xsede.org
Mason: gsissh mason.iu.xsede.org
Maverick: gsissh -p 2222 maverick.tacc.xsede.org
Nautilus: gsissh gsissh.nautilus.nics.xsede.org
Open Science Grid: gsissh submit-1.osg.xsede.org
Stampede: gsissh -p 2222 stampede.tacc.xsede.org
SuperMIC: gsissh -p 2222 supermic.cct-lsu.xsede.org
Trestles: gsissh trestles.sdsc.xsede.org



Contact help@xsede.org for any assistance that may be needed.

[hongqin@gw69 ~]$ gsissh blacklight.psc.xsede.org
Last login: Thu Aug 14 21:14:38 2014 from 184.77.11.206

Pittsburgh Supercomputing Center
  
This system is for the use of authorized users only.  Unauthorized use may
be monitored and recorded.  In the course of such monitoring or through
system maintenance, the activities of authorized users may be monitored.
By using this system you expressly consent to such monitoring.

Blacklight is unique, for optimal performance see http://www.psc.edu/index.php/computing-resources/blacklight

hqin2@tg-login1:~> module load R
hqin2@tg-login1:~> R -f test.R

#nano -w job1.sh
hqin2@tg-login1:~> module load R
hqin2@tg-login1:~> R -f test.R

hqin2@tg-login1:~> sh job1.sh 
job1.sh: line 1: module: command not found

R version 2.15.3 (2013-03-01) -- "Security Blanket"
Copyright (C) 2013 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> x = 4*5
> write(x, "x.txt")


hqin2@tg-login1:~> qsub job1.sh
418479.tg-login1.blacklight.psc.teragrid.org

# in the queue, as shown by qstat


___________________
Note: on 2015Jan13Tue,
I logged into blacklight and found the job had run, but the commands inside it had failed:
hqin2@tg-login1:~> cat job1.sh.e418479 
/var/spool/torque/mom_priv/jobs/418479.tg-login1.blacklight.psc.teragrid.org.SC: line 1: module: command not found

/var/spool/torque/mom_priv/jobs/418479.tg-login1.blacklight.psc.teragrid.org.SC: line 2: R: command not found
__________________




hqin2@tg-login1:~> exit
logout

Connection to blacklight.psc.xsede.org closed.

[hongqin@gw69 ~]$ gsissh submit-1.osg.xsede.org

You have 1 active projects you can charge jobs to.

  Project Name                      Balance (CPU Hours)  End Date
  --------------------------------  -------------------  ------------
   TG-MCB140211                                 100000    08/08/2015

 Note that you can still charge to a project with a negative balance, as
 long as the project has not reached its end date. A project with a
 negative balance may result in jobs being given lower priority.

 When submitting jobs, please specify what project to charge to with a
 +ProjectName line in your HTCondor submit file. For example:

     +ProjectName = "TG-MCB140211" 

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Your current filesystem usages and quotas are:

   /home             0% used (28.0 KB of 20.0 GB)  
   /local-scratch    0% used (0.0 B of 2.0 TB)  

NOTE: The /local-scratch filesystem automatically deletes files older than
      90 days.

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

2015-01-12 03:27:40 UTC [hongqin@xd-login:~]$ 








2015-01-12 03:43:03 UTC [hongqin@xd-login:~]$ cat testOSH.sub 
universe = vanilla
+ProjectName = "TG-MCB140211"
executable = testOSH.sh
output = job.out
error = job.err
log = job.log

notification = NEVER


queue








[hongqin@gw69 ~]$ pwd
/home/hongqin
[hongqin@gw69 ~]$ gsissh blacklight.psc.xsede.org
Last login: Sun Jan 11 22:04:05 2015 from gw69.iu.xsede.org

Pittsburgh Supercomputing Center
  
This system is for the use of authorized users only.  Unauthorized use may
be monitored and recorded.  In the course of such monitoring or through
system maintenance, the activities of authorized users may be monitored.
By using this system you expressly consent to such monitoring.

Blacklight is unique, for optimal performance see http://www.psc.edu/index.php/computing-resources/blacklight
hqin2@tg-login1:~> ls
job1.sh  job2.pbs  test.R  test.txt  x.txt
hqin2@tg-login1:~> ll
total 20
-rw-r--r-- 1 hqin2 mc48o9p 26 2015-01-11 22:13 job1.sh
-rw-r--r-- 1 hqin2 mc48o9p 38 2015-01-11 22:23 job2.pbs
-rw-r--r-- 1 hqin2 mc48o9p 26 2014-08-13 17:42 test.R
-rw-r--r-- 1 hqin2 mc48o9p 12 2014-08-13 17:41 test.txt
-rw-r--r-- 1 hqin2 mc48o9p  3 2015-01-11 22:13 x.txt
hqin2@tg-login1:~> qstat | grep qin

418479.tg-login1          job1.sh          hqin2                  0 Q batch_r