
## User Guide

### 1. Concerning your account

Please refer to the FAQ pages of the computing service.

### 2. Which resources are available at LPNHE?

Besides your laptop (or PC), the ATLAS group server is the machine lpnatlas.in2p3.fr (also named lpnp110.in2p3.fr).

• CPU : 16 cores (up to 32 with hyperthreading), Intel Xeon E5-2650 (2 GHz)
• Memory : 128 GB
• OS : SL6 x86_64

Other CPU resources are available at LPNHE:

• grid resources : see GRIF pages

### 2.1 How to log on?

For security reasons, ssh access is restricted from outside the lab (see this page for more details). Connections from outside must go through a dedicated gateway server: lpnclaude.in2p3.fr

• ssh access (double ssh)
• ssh access with X11 tunneling

To simplify the command lines, you can add the following lines to your ~/.ssh/config file:
Host lpnp110.in2p3.fr
    ProxyCommand ssh -W %h:%p your_login@lpnclaude.in2p3.fr

This tells your machine that an ssh connection to the server is automatically tunneled through lpnclaude.in2p3.fr. You can then type directly:
ssh -Y -A lpnatlas.in2p3.fr
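On recent OpenSSH versions (7.3 and later), the same effect can be obtained more simply with the ProxyJump directive; a minimal sketch of the ~/.ssh/config entry, where your_login is a placeholder for your account name on the gateway:

```
Host lpnp110.in2p3.fr lpnatlas.in2p3.fr
    # Route the connection through the lab gateway
    ProxyJump your_login@lpnclaude.in2p3.fr
```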

### 2.2 Your working spaces

• Your local space (on your laptop or PC) depends on your machine. A priori it is not backed up.
• When you log on the group server, your $HOME directory actually lives on a central server shared by the whole laboratory. You have a quota of 24 GB. Your home directory is under /home/username. WARNING: your home directory is the only space which is regularly backed up (see this FAQ). Large files (such as data) should not be kept in this space.
• /data: these are semi-permanent storage spaces, i.e. data are typically stored for a few months. They are dedicated to the storage of large files, typically data. This area is not backed up; long-term storage should be done elsewhere (e.g. HPSS in Lyon or on the grid). This space is about 100 TB, without user quota.
• Your HPSS space in Lyon allows you to save large data files on magnetic tapes. Typically, any data kept in /atlas0 should be backed up either on CASTOR at CERN or on HPSS in Lyon. Keep in mind that these services are not optimized for backing up small files; it may be necessary to make a tar archive first.

### 2.3 Backup space

The computing service backs up your home directories. Snapshots are available here:
ls -la /home/.snapshot
See here for more details.

### 2.4 Working at CERN or at CCIN2P3 remotely

Create the mount directories:
mkdir -p /mnt/ccage-sps /mnt/ccage-home /mnt/lxplus-work /mnt/lxplus-home

Then define the following aliases in your .bashrc or .bash_profile:
alias mountccage='sshfs userid@ccage.in2p3.fr:/sps/atlas/yourdirectory /mnt/ccage-sps ; sshfs userid@ccage.in2p3.fr:/afs/in2p3.fr/home/yourdirectory /mnt/ccage-home'
alias mountlxplus='sshfs userid@lxplus.cern.ch:/afs/cern.ch/work/yourdirectory /mnt/lxplus-work ; sshfs userid@lxplus.cern.ch:/afs/cern.ch/user/yourdirectory /mnt/lxplus-home'
alias unmountccage='umount /mnt/ccage-sps ; umount /mnt/ccage-home'
alias unmountlxplus='umount /mnt/lxplus-work ; umount /mnt/lxplus-home'

To mount your CERN workspaces, just type "mountlxplus"; you will be asked for your password twice (once for each subspace defined above). At the end of your session, unmount the spaces with "unmountlxplus".
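Since HPSS and CASTOR handle small files poorly, it is usually worth bundling an analysis directory into a single tar archive before transferring it to tape. A minimal sketch; the directory and file names are purely illustrative:

```shell
# Create a small example directory (stand-in for an analysis output area)
mkdir -p myarea
echo "histogram A" > myarea/histA.txt
echo "histogram B" > myarea/histB.txt

# Bundle the many small files into a single compressed archive,
# which tape systems handle far better than individual small files
tar czf myarea.tar.gz myarea/

# List the archive content to check it before shipping it to tape
tar tzf myarea.tar.gz
```

The resulting .tar.gz can then be transferred with your usual tool.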
When the space is mounted you can work on the files as if they were local: just open Xcode or emacs and edit directly through the path /mnt/lxplus-home/....

### 2.5 Using the server as User Interface

The group server can be used to access ROOT, the ATLAS software, grid tools, etc.

### 3. How to use grid, Athena etc.

The ATLAS Computing Workbook gives all the necessary information to use the computing resources. The pages on the Software tutorial are also of interest. Only a few additional pieces of information are given below.

### 3.1 To start on grid

The WorkBookStartingGrid page explains how to get a grid certificate, join the ATLAS Virtual Organization and prepare your certificate for work.

### 3.2 Use of CernVM

We use CernVM to access Athena, grid tools, versions of ROOT, etc. In particular you can use the ATLASLocalRootBase package to do all the setups. You can have a look at this wiki in Canada for detailed examples.

### 3.3 Basic command lines

zsh
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh

To use grid to get data (Rucio etc) :
localSetupRucioClients
export DQ2_LOCAL_SITE_ID=GRIF-LPNHE_LOCALGROUPDISK
export DPNS_HOST=lpnse1.in2p3.fr
voms-proxy-init -voms atlas -valid 96:0

### 3.4 How to save your data on LOCALGROUPDISK

When running jobs on the grid, output files are stored on the SCRATCHDISK area of the site where the jobs ran, and are typically erased after two weeks. You can retrieve your files locally using rucio, but you may want to save them on LOCALGROUPDISK areas, which are Tier-3 spaces managed by end-users.

• on CCIN2P3 site, called IN2P3-CC_LOCALGROUPDISK
• on LPNHE site, called GRIF-LPNHE_LOCALGROUPDISK

There are several ways to get your data on these sites:

• when running jobs on the grid, for example with prun, use the option --destSE=GRIF-LPNHE_LOCALGROUPDISK (see this page). Your data will be transferred automatically at the end of the job.
• it is possible to replicate data afterwards to these sites using Rucio R2D2. See this page and this one.

If your data are not yet on grid it is possible to upload them, following these instructions. An example :

• In /data/atlas0/data/DATA/DATA15/DAOD_TOPQ5/filt/HistFinal/TtJpsi I have, in the directory v2433, a set of 4 ROOT files from my final analysis
• I create a dataset with this command line :
• I upload the content of the directory v2433/ on our Tier 3 with
rucio upload --rse GRIF-LPNHE_LOCALGROUPDISK user.derue:Data15-periodDtoJ.physics_Main.DAOD_TOPQ5.p3229 v2433/
• I can check that the files can be retrieved by :

You can find these instructions in /home/derue/public/atlas/T3/SaveFilesOnLGD.txt

### 3.5 How to access your data on LOCALGROUPDISK

In case your data are located on the grid - let's say on a LOCALGROUPDISK - you can access them directly from almost any server - let's say "our" server - using the root/xrootd protocol. Use the following command to get the list of files (with their paths) for a given dataset:
rucio list-file-replicas --rse GRIF-LPNHE_LOCALGROUPDISK --protocol root user.derue.410025.PowhegPythiaEvtGen.DAOD_TOPQ1.e3998_s3126_r9364_r9315_p3390.21222-ttdmeson-filtmuD0Dstar-syst-1a_output.root

This will give you something like:
root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root
For example, to copy this file you can do:
xrdcp root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root mylocalfile
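The URL used above is simply the server address with the `root://` protocol prefix in front of the file path returned by rucio. A minimal shell sketch, using the host and path from the example above:

```shell
# Server and file path as printed by `rucio list-file-replicas`
host="lpnse1.in2p3.fr:1094"
path="//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root"

# An xrootd URL is root://<host><path>
url="root://${host}${path}"
echo "$url"
```

The resulting URL can be passed to xrdcp or to TFile::Open.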
You can open ROOT and create a skeleton for the analysis:
TFile* xrfile = TFile::Open("root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root");
TTree *tree = (TTree*)xrfile->Get("nominal");
tree->MakeSelector("Tree_TtDMeson");

===> it works
To run my analysis on it, from my running ROOT macro I do something like:

TString namein = "root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root";
TChain dataset(name_Tree);
dataset.Add(namein); // attach the remote file to the chain
timer.Start();
// performs the selection
dataset.Process("doTtDMeson/TtDMeson.C+", option+syst+Year);
The only thing I had to change in my code is the way the file is opened:

// TFile* file = new TFile(current_file_name, "read"); // the plain constructor does not handle root:// URLs
TFile* file = TFile::Open(current_file_name, "read");

==> it runs

### 4.1 How to use ROOT locally ?

Version 6.22/02 is installed on the server. Other versions are available in /usr/local/.

You can also use ROOT through cvmfs, located at CERN:
zsh
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh
localSetupROOT
or
localSetupROOT 6.18.04-x86_64-centos7-gcc8-opt

### 4.3 Using Intel compilers (via CERN executables and CC license)

source /afs/cern.ch/sw/IntelSoftware/linux/all-setup.sh
Then you can use the C compiler (icc), the C++ compiler (icpc) or the Fortran compiler (ifort). Example:
icc truc.c

### 4.4 Other tools to be used

• VirtualBox is a free and open-source hypervisor for x86 computers, developed by Oracle. It supports the creation and management of guest virtual machines running versions and derivatives of Windows, Linux, ...
• CodeLite is a free, open-source, cross-platform IDE for the C, C++, PHP, and JavaScript programming languages
