User:Haifengwiki

User page

How to copy files to xrootd
This shell command copies files into xrootd and generates an address list of the files in xrootd.

for i in `ls`; do
  xrdcp $i root://pcuwsun04.cern.ch//xrootd/users/haifeng/mc08.105011.J2_pythia_jetjet.recon.NTUPLE/$i
  echo $i
  echo root://pcuwsun04.cern.ch//xrootd/users/haifeng/mc08.105011.J2_pythia_jetjet.recon.NTUPLE/$i >> ~haifeng/data_mc/R14_Haifeng/mc08.105011.J2_pythia_jetjet.recon.NTUPLE.list
done
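The same loop can be written with proper quoting and a dry-run switch, so you can check the generated list before actually copying. This is a sketch: the host and dataset names are the ones used on this page, and with DRY_RUN=1 the xrdcp commands are only printed (xrdcp itself requires an xrootd client installation).

```shell
# Hedged sketch of the copy loop above. DRY_RUN=1 prints the xrdcp
# commands instead of running them; the list file is still produced.
DEST=root://pcuwsun04.cern.ch//xrootd/users/haifeng/mc08.105011.J2_pythia_jetjet.recon.NTUPLE
WORK=$(mktemp -d)
LIST=$WORK/mc08.105011.J2_pythia_jetjet.recon.NTUPLE.list
DRY_RUN=1

# stand-in input files so the sketch runs anywhere
touch "$WORK/ntuple.01.root" "$WORK/ntuple.02.root"

cd "$WORK"
for f in *.root; do
    if [ "$DRY_RUN" = 1 ]; then
        echo xrdcp "$f" "$DEST/$f"
    else
        xrdcp "$f" "$DEST/$f" || continue   # don't record failed copies
    fi
    echo "$DEST/$f" >> "$LIST"             # build the address list
done
cat "$LIST"
```

Quoting "$f" keeps the loop safe for file names with spaces, which the original `ls`-based loop would split.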

How can we check files at xrootd
ssh pcuwsun05
cd /xrootd/users/haifeng/

Then you can check or remove your files.

xrootd directory
root://pcuwsun04.cern.ch//xrootd/users/
 * xrootd at CERN

root://atlas-bkp2.cs.wisc.edu//atlas/xrootd/users/
 * xrootd at Wisconsin. If you want to check the files in this xrootd, you can ssh to higgs11.

You can also list the files in xrootd directly.

 * At Wisconsin,

ssh nengxu@atlas02.cs.wisc.edu
sudo su
cexec chtcxrootd: "ls /atlas/xrootd/users/quayle/z4j_ee_alpgen_10TeV/evgen/" | grep tau | sort | more

 * For CERN,

ssh root@pcuw03
cexec data: "ls /xrootd/users/quayle/vbfhww_sherpa_10TeV/evgen_130" | grep root

How to restart the xrootd
ssh nengxu@atlas02.cs.wisc.edu
sudo su
sudo ssh c100.chtc.wisc.edu
 * Check chtc machine

Check whether it is a disk problem:

tw_cli /c1 show

If it is not a disk problem, restart xrootd:

/opt/xrootd-20080828/xrdcluster restart


 * Restart xrootd server

ssh nengxu@atlas02.cs.wisc.edu
sudo su
ssh atlas-bkp2.cs.wisc.edu
/opt/xrootd-20080828/xrdcluster restart

Restart all the chtc machines:

ssh nengxu@atlas02.cs.wisc.edu
sudo su
cexec chtcxrootd: /opt/xrootd-20080828/xrdcluster restart

How to check Xrootd files at CERN
ssh root@pcuw03
cexec data: "ls -lh /xrootd/users/quayle/vbfhww_sherpa_10TeV/evgen_130"

How to search and download files from the grid to your local disk
First, set up the grid environment:

source ~tapas/.setGridProxy.bash

Then search for the dataset:

dq2-ls *datasetname
dq2-ls -n datasetname
dq2-ls -nf datasetname

Choose the file you need and download it to your local disk:

dq2_get datasetname

How to set up Athena environment
source setup.sh -tag=14.4.0,gcc34,AtlasProduction,runtime

# echo "Setting standalone package"
if test "${CMTROOT}" = ""; then
  CMTROOT=/afs/cern.ch/sw/contrib/CMT/v1r20p20080222; export CMTROOT
fi
. ${CMTROOT}/mgr/setup.sh
tempfile=`${CMTROOT}/mgr/cmt -quiet build temporary_name`
if test ! $? = 0 ; then tempfile=/tmp/cmt.$$; fi
${CMTROOT}/mgr/cmt setup -sh -pack=cmt_standalone -path=/users/montoya/myAna-00-00-00/trunk -no_cleanup $* >${tempfile}
. ${tempfile}
/bin/rm -f ${tempfile}
 * Source file
 * The setup.sh

How to set up AOD dumper 14.2.20
cd myAna-00-00-00/trunk
source /afs/cern.ch/sw/contrib/CMT/v1r20p20080222/mgr/setup.sh
cmt config
source setCMT_14.2.20.bash
cd TestProject/cmt
cmt config
source setup.sh
cmt bro cmt config
source setup.sh
cmt bro gmake clean
cmt bro gmake

How to submit jobs to the grid and run the AOD dumper
source ~tapas/.setGridProxy.bash
cd directory_of_AOD_dumper
cd trunk/
source setCMT_14.2.20.bash
cd TestRun/
pathena --split=40 --inDS=mc08.105013.J4_pythia_jetjet.recon.AOD.e344_s456_r456 --outDS=user.HaifengLi.mc08.105013.J4_pythia_jetjet.recon.NTUPLE.e344_s456_r456 -v myAna.jobOptions_Rel_14.py


 * split=N: split the job into N subjobs.
 * inDS:    input AOD dataset. You have to search for it on the grid.
 * outDS:   output ntuple dataset. This is what you need.
 * myAna.jobOptions_Rel_14.py: the job options file.

Change and compile AOD dumper release 12
cd /users/tapas/Athena_rel_12
source setCMT_12.0.6.5.bash

After you change the dumper, recompile:

cd /users/tapas/Athena_rel_12/PhysicsAnalysis/AnalysisCommon/myAna/cmt/
gmake

Trigger in AOD dumper

 * Trigger decision

mLog << MSG::INFO << "L1 triggers print " << m_trigDec->signatures(TriggerDecision::L1) << endreq;
mLog << MSG::INFO << "L2 triggers print " << m_trigDec->signatures(TriggerDecision::L2) << endreq;
mLog << MSG::INFO << "EF triggers print " << m_trigDec->signatures(TriggerDecision::EF) << endreq;
 * In release 12, you can add this piece of code to make the AOD dumper print the trigger names.

How to check the job status
pathena_util
>>show
>>kill(ID_job)

show lists your jobs; kill(ID_job) kills the job with the given ID.

How to submit AOD dumper job to Condor

 * First, you should have an AOD dumper script.


 * You should have a submit script.


 * then you can submit your jobs at pcuwtwin01

ssh pcuwtwin01
./submit_aodntup test 1 100 /users/haifeng/AOD_dumper/release_12/H_tautau_ll/grid/submitCondor/t1_script.bash

condor_q
 * Check your jobs

condor_rm haifeng
 * Remove your jobs

How to run a program in pcuwtwin
1) Copy a file:

xrdcp yourfile_name root://pcuwsun04.cern.ch//xrootd/users/haifeng/DIR_you

2) Change or remove your file:

ssh pcuwsun05
rm something
ls something

 * xrootd system

ssh pcuw000
ssh pcuwtwin01
source /afs/cern.ch/sw/lcg/app/releases/ROOT/5.20.00/slc4_amd64_gcc34/root/bin/thisroot.sh

 * Condor submit job
 * pcuwtwin01...pcuwtwinxx

Wisconsin
ssh higgs05.cs.wisc.edu
ssh c091.chtc.wisc.edu
source /afs/hep.wisc.edu/atlas/root/root_v5.20.00.Linux.slc4_amd64.gcc3.4/bin/thisroot.sh

scp -r analysis_physics/* haifeng@higgs05.cs.wisc.edu:~/
 * Copy files to Wisconsin

How to connect with MySQL
ssh nengxu@atlas02.cs.wisc.edu
sudo su
sudo ssh atlas-bkp3
mysql -u root

How to restart the MySQL server
ssh nengxu@atlas02.cs.wisc.edu
sudo su
sudo ssh atlas-bkp3
/sbin/service mysqld restart

Operation of MySQL
update MCProStatusDir set NSubMit="1799" where JobID=314;
delete from MCProStatusFile where Status="Killed" and MCStatusDirID=352;

Linux & BASH
scp -r analysis_physics/* haifeng@higgs05.cs.wisc.edu:~/
 * copy files to another machine

df
 * check free disk space

du -sh filename
 * check how much space a file or directory uses

tar -cvzf filename.tgz filename
tar -xvzf filename.tgz
 * tar (create a compressed archive)
 * untar (extract it)
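A quick round-trip of the two tar commands above, using throwaway temp directories so it runs anywhere: pack a directory, extract it elsewhere, and confirm the contents survive.

```shell
# tar/untar round-trip on scratch paths (-C keeps the archive relative).
SRC=$(mktemp -d); OUT=$(mktemp -d)
echo "hello" > "$SRC/data.txt"

tar -czf "$SRC.tgz" -C "$(dirname "$SRC")" "$(basename "$SRC")"   # tar
tar -xzf "$SRC.tgz" -C "$OUT"                                     # untar

cat "$OUT/$(basename "$SRC")/data.txt"
```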

date -u +%s
 * get the current system time as seconds since the Unix epoch
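Epoch seconds can also be converted back to a readable timestamp; the reverse conversion below assumes GNU date (the -d "@SECONDS" form), which is what these SLC machines ship.

```shell
# Current time as epoch seconds, then epoch 0 rendered back as UTC
# (the -d "@..." syntax is GNU date; BSD date differs).
NOW=$(date -u +%s)
echo "$NOW"
date -u -d "@0" +%Y-%m-%dT%H:%M:%SZ
```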

stat -c %Y test1
 * get the last modification time (epoch seconds) of the file test1

sed -i 's/new2/CERN/g' 2.txt
 * find the string "new2" in 2.txt and substitute every occurrence with "CERN", modifying 2.txt in place
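The stat and sed commands above, demonstrated on a throwaway file instead of test1/2.txt (the -i and -c %Y flags are GNU sed/stat, as on these machines):

```shell
# In-place substitution: every "new2" becomes "CERN" in the file itself.
F=$(mktemp)
echo "new2 is at new2" > "$F"
sed -i 's/new2/CERN/g' "$F"
cat "$F"
stat -c %Y "$F"    # last-modification time of the edited file, epoch seconds
```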

grep "some text" *
 * grep searches the given files (* = every file in the directory) for the text and prints the matching lines
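A small self-contained illustration: when grep is given more than one file, each matching line is printed with its file name in front, which is how you locate the file as well as the line.

```shell
# grep over two scratch files; only a.txt matches.
D=$(mktemp -d)
echo "some text here" > "$D/a.txt"
echo "other content"  > "$D/b.txt"
grep "some text" "$D"/*
```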

find ./work/ -mindepth 2 -maxdepth 2 | xargs chmod 744
 * find everything exactly two levels below the "work" directory and pass the names to xargs, which runs chmod on them

du -h --max-depth=1
 * check the space used by each subdirectory

for each in `ps -ef | grep ” | grep -v PID | awk '{ print $3 }'`; do for every in `ps -ef | grep $each | grep -v cron | awk '{ print $2 }'`; do kill -9 $every; done; done
 * process control: for each matching parent process, kill all of its children
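The find | xargs pattern above, run against a scratch tree so the depth filter is visible: only entries exactly two levels below work/ are changed. The -print0/-0 pair is an addition that keeps the pipeline safe for names with spaces.

```shell
# chmod 744 only at depth 2 under work/; top.txt at depth 1 is untouched.
W=$(mktemp -d)
mkdir -p "$W/work/sub"
touch "$W/work/top.txt" "$W/work/sub/deep.txt"

find "$W/work" -mindepth 2 -maxdepth 2 -print0 | xargs -0 chmod 744

stat -c %a "$W/work/sub/deep.txt"   # now 744
stat -c %a "$W/work/top.txt"        # unchanged (depth 1)
```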

ps -ef | grep gnome | grep -v grep | awk '{print $7}'
 * awk: print the 7th field of each matching line

ps -ef | grep "slot" | awk '{print $7}' | sed -e 's/-.*//g' | grep -v ":"
 * awk 2: print the 7th field, then strip everything from the first "-" onward
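The awk/sed stage of the second pipeline, shown on a canned line instead of live ps output (a made-up sample, since ps output varies): field 7 is extracted, then the sed expression removes everything from the first "-".

```shell
# $7 of the sample line is "slot1@host-99"; sed trims it to "slot1@host".
printf 'a b c d e f slot1@host-99 x\n' | awk '{print $7}' | sed -e 's/-.*//g'
```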