
User Guide

 1. Concerning your account:

Please refer to the FAQ pages of the computing service.

 2. Which resources are available at LPNHE?

Besides your laptop or PC, the ATLAS group server is the machine lpnatlas.in2p3.fr (also named lpnp110.in2p3.fr).

  • CPU: 16 cores (up to 32 with hyperthreading), Intel Xeon E5-2650 (2 GHz)
  • Memory: 128 GB
  • OS: SL6 x86_64

Other CPU resources are available at LPNHE:

  • grid resources: see the GRIF pages
  • High Performance Computing: see this page

 2.1 How to log in?

For security reasons, ssh access from outside the lab is restricted (see this page for more details). Connections go through a dedicated gateway server named lpnclaude.in2p3.fr.

  • ssh access (double ssh):
    ssh -t your_login@lpnclaude.in2p3.fr ssh your_login@lpnatlas.in2p3.fr
  • ssh access with X11 tunneling:
    ssh -tY your_login@lpnclaude.in2p3.fr ssh -Y your_login@lpnatlas.in2p3.fr

    To simplify the command lines you can add the following lines to your file ~/.ssh/config:
    Host lpnatlas.in2p3.fr lpnp110.in2p3.fr
    ProxyCommand ssh -W %h:%p your_login@lpnclaude.in2p3.fr

This tells your machine that an ssh to the group server will automatically go through lpnclaude.in2p3.fr first. You can then type directly:
ssh your_login@lpnatlas.in2p3.fr
sftp your_login@lpnatlas.in2p3.fr
scp your_login@lpnatlas.in2p3.fr:your_distant_file your_local_file
rsync -auv your_login@lpnatlas.in2p3.fr:your_distant_directory your_local_directory

You can also do:
ssh -Y -2 -A lpnatlas.in2p3.fr
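
As an alternative, with a recent OpenSSH (7.3 or later) the ProxyCommand line can be replaced by a ProxyJump directive. A minimal sketch of a ~/.ssh/config entry (the host alias and the User value are assumptions, adapt them to your login):

Host lpnatlas
    HostName lpnatlas.in2p3.fr
    User your_login
    ProxyJump your_login@lpnclaude.in2p3.fr

With such an entry, a plain "ssh lpnatlas" is tunnelled transparently through lpnclaude.in2p3.fr.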

 2.2 Your working spaces:

  • Your local space (on laptop and PC) depends on your machine. In general it is not backed up.
  • When you log in on the group server, your $HOME directory actually resides on a central server for the whole laboratory. You have a quota of 24 GB. Your home directory is under /home/username.
    WARNING: Your home directory is the only space which is regularly backed up (see this FAQ). Large files (like data) should not be kept in this space.
  • /data: this is a semi-permanent storage space, i.e. data are typically kept for a few months. It is dedicated to the storage of large files, typically data. This area is not backed up. Long-term storage should be done elsewhere (e.g. HPSS in Lyon or on the grid). This space is about 100 TB, without user quotas.
  • Your HPSS space in Lyon allows you to save large data files on magnetic tapes. Typically any data saved in /atlas0 should be backed up either on CASTOR at CERN or on HPSS in Lyon. Keep in mind that these services are not optimized for backing up small files; it may be necessary to build a tar archive first (see the sketch below).
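
A minimal sketch of such a bundling step before long-term storage (the directory and archive names are only examples):

# pack many small files into a single archive, easier to handle on tape
tar czf myanalysis_v1.tar.gz /data/atlas0/your_directory/
# check the content of the archive
tar tzf myanalysis_v1.tar.gz | head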

 2.3 Backup space:

The computing service backs up your home directories. Snapshots are available here:

ls -la /home/.snapshot

See here for more details.
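
To recover a file, copy it back from one of the snapshot directories; a hedged example (the snapshot name and file path are assumptions, the real snapshot names depend on the backup schedule):

ls /home/.snapshot/                                      # list available snapshots
cp /home/.snapshot/some_snapshot/your_login/lost_file ~/lost_file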

 2.4 Working at CERN or at CCIN2P3 remotely

Create the mount points (the aliases below expect these directories to exist):

mkdir -p /mnt/ccage-sps /mnt/ccage-home /mnt/lxplus-work /mnt/lxplus-home

Then define the following aliases in your .bashrc or .bash_profile:
alias mountccage='sshfs userid@ccage.in2p3.fr:/sps/atlas/yourdirectory /mnt/ccage-sps ; sshfs userid@ccage.in2p3.fr:/afs/in2p3.fr/home/yourdirectory /mnt/ccage-home'
alias mountlxplus='sshfs userid@lxplus.cern.ch:/afs/cern.ch/work/yourdirectory /mnt/lxplus-work ; sshfs userid@lxplus.cern.ch:/afs/cern.ch/user/yourdirectory /mnt/lxplus-home'
alias unmountccage='umount /mnt/ccage-sps ; umount /mnt/ccage-home'
alias unmountlxplus='umount /mnt/lxplus-work ; umount /mnt/lxplus-home'

To mount your CERN workspaces, just type "mountlxplus"; you will be asked for your password twice (once for each subspace defined above). At the end of your session, unmount the spaces with "unmountlxplus".

When the space is mounted you can work on the files as if they were local: just open Xcode or emacs and edit them directly through the path /mnt/lxplus-home/....
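
If you prefer not to define aliases, a single mount can also be done by hand; a minimal sketch, assuming your CERN login is your_login (on Linux the mount is released with fusermount -u, on macOS with umount):

sshfs your_login@lxplus.cern.ch:/afs/cern.ch/user/y/your_login /mnt/lxplus-home
# ... edit and run on the files as if they were local ...
fusermount -u /mnt/lxplus-home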

 2.5 Using the server as User Interface

The group server can be used to access ROOT, the ATLAS software, grid tools, etc.

 3. How to use the grid, Athena, etc.?

The ATLAS Computing Workbook gives all the necessary information to use the computing resources. The Software tutorial pages are also of interest.
Only a few additional pieces of information are given below.

 3.1 To start on the grid

The WorkBookStartingGrid page gives information on how to get a grid certificate, join the ATLAS Virtual Organization and prepare your certificate for work.

 3.2 Use of CernVM

We use CernVM to access Athena, grid tools, versions of ROOT, etc.
In particular you can use the ATLASLocalRootBase package to do all the setups. You can have a look at this wiki in Canada for detailed examples.

 3.3 Basic command lines

zsh
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh

To use the grid to get data (Rucio, etc.):
localSetupRucioClients
export DQ2_LOCAL_SITE_ID=GRIF-LPNHE_LOCALGROUPDISK
export DPNS_HOST=lpnse1.in2p3.fr
voms-proxy-init -voms atlas -valid 96:0
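
You can then check that the proxy has been created correctly (validity, VO attributes) with:

voms-proxy-info --all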

 3.4 How to save your data on LOCALGROUPDISK

When running jobs on the grid, output files are stored on the SCRATCHDISK area of the site where the jobs ran, and are typically erased after two weeks. You can retrieve your files locally using Rucio, but you may want to save them on LOCALGROUPDISK areas, which are Tier3 spaces and thus managed by end-users.
You have access to two such LOCALGROUPDISK areas:

  • on the CCIN2P3 site, called IN2P3-CC_LOCALGROUPDISK
  • on the LPNHE and GRIF sites, called GRIF_LOCALGROUPDISK

There are several ways to get your data on these sites:

  • when running jobs on the grid, for example with prun, use the option --destSE=GRIF_LOCALGROUPDISK (see this page). Your data will be transferred automatically at the end of the job.
  • it is possible to replicate data afterwards to these sites using Rucio R2D2; see this page and this one. A command-line sketch is given below.
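
The replication can also be requested from the command line by adding a Rucio rule; a minimal sketch (the dataset name is only an example, one copy is requested on the LPNHE/GRIF Tier3):

rucio add-rule user.your_login:your.dataset.name 1 GRIF_LOCALGROUPDISK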

If your data are not yet on the grid it is possible to upload them, following these instructions. An example:

  • In /data/atlas0/data/DATA/DATA15/DAOD_TOPQ5/filt/HistFinal/TtJpsi I have, in the directory v2433, a set of 4 ROOT files produced by my final analysis.
  • I create a dataset with this command line:
    rucio add-dataset user.derue:Data15-periodDtoJ.physics_Main.DAOD_TOPQ5.p3229
  • I upload the content of the directory v2433/ to our Tier3 with:
    rucio upload --rse GRIF_LOCALGROUPDISK user.derue:Data15-periodDtoJ.physics_Main.DAOD_TOPQ5.p3229 v2433/
  • I can check that the files can be retrieved with:
    rucio download user.derue:Data15-periodDtoJ.physics_Main.DAOD_TOPQ5.p3229

You can find these instructions in /home/derue/public/atlas/T3/SaveFilesOnLGD.txt
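
To verify what has actually been attached to the dataset, the list of files can be printed with (same dataset name as in the example above):

rucio list-files user.derue:Data15-periodDtoJ.physics_Main.DAOD_TOPQ5.p3229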

 3.5 How to access your data on LOCALGROUPDISK

In case your data are located on the grid, let's say on LOCALGROUPDISK, you can access them directly from almost any server (let's say "our" server) using the root/xrootd protocol. Use the following command to get the list of files (with their path) from a given dataset:
rucio list-file-replicas --rse GRIF_LOCALGROUPDISK --protocol root user.derue.410025.PowhegPythiaEvtGen.DAOD_TOPQ1.e3998_s3126_r9364_r9315_p3390.21222-ttdmeson-filtmuD0Dstar-syst-1a_output.root

This will give you something like:
root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root
For example, to copy this file you can do:
xrdcp root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root mylocalfile
You can open ROOT and create a skeleton for analysis:
TFile* xrfile = TFile::Open("root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root");
TTree *tree = (TTree*)xrfile->Get("nominal");
tree->MakeSelector("Tree_TtDMeson");

===> it works
To run my analysis on it, from my ROOT macro I do something like:

namein = "root://lpnse1.in2p3.fr:1094//dpm/in2p3.fr/home/atlas/atlaslocalgroupdisk/rucio/user/derue/5b/71/user.derue.13584469._000001.output.root";
TChain dataset(name_Tree);
dataset.Add(namein);
std::cout << "Reading file " << namein << std::endl;
timer.Start();
// performs the selection
dataset.Process("doTtDMeson/TtDMeson.C+", option+syst+Year);
The only thing I had to change in my code is:

// TFile* file = new TFile(current_file_name, "read");   // before
TFile* file = TFile::Open(current_file_name, "read");     // now

==> it runs

 4. Other tips and tricks

 4.1 How to use ROOT locally?

You can use ROOT through CVMFS hosted at CERN:
zsh
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh
localSetupROOT
or, for a specific version:
localSetupROOT 6.20.06-x86_64-centos7-gcc8-opt
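
Once the setup is done, a quick check that the expected version has been picked up:

which root
root-config --version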

 4.2 How to use the HTCondor batch system at CERN?
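
Jobs at CERN are submitted from lxplus with a submit description file; a minimal, hedged sketch (the script name, output file names and job flavour are only examples, see the CERN batch service documentation for details):

# myjob.sub - minimal HTCondor submit description (example names)
executable  = myjob.sh
arguments   = $(ProcId)
output      = myjob.$(ClusterId).$(ProcId).out
error       = myjob.$(ClusterId).$(ProcId).err
log         = myjob.$(ClusterId).log
+JobFlavour = "longlunch"
queue 1

Submit and monitor it with:
condor_submit myjob.sub
condor_q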

 4.3 Using Intel compilers (via CERN executables and CC license)

Set up the environment with:
source /afs/cern.ch/sw/IntelSoftware/linux/all-setup.sh
export INTEL_LICENSE_FILE=/home/beau/intel/licenses
Then you can use the C compiler (icc), the C++ compiler (icpc) or the Fortran compiler (ifort). Example:
icc truc.c
See the man pages (e.g. man icc, after the setup) for more information.

 4.4 Other tools to be used

  • VirtualBox is a free and open-source hypervisor for x86 computers developed by Oracle. It supports the creation and management of guest virtual machines running versions and derivatives of Windows, Linux, etc.
  • CodeLite is a free, open-source, cross-platform IDE for the C, C++, PHP and JavaScript programming languages.