
User Guide

by Frederic Derue - 7 September

 1. Concerning your account

Please refer to the FAQ pages of the computing service.

 2. Which resources are available at LPNHE?

Besides your laptop (or PC), the ATLAS group server is the machine lpnatlas.in2p3.fr (also named lpnp110.in2p3.fr)

  • CPU : 16 cores (up to 32 with hyperthreading), Intel Xeon E5-2650 (2 GHz)
  • Memory : 128 GB
  • OS : SL6 x86_64

Other CPU resources are available at LPNHE:

  • grid resources: see the GRIF pages
  • High Performance Computing: see this page

 2.1 How to log on?

For security reasons, ssh access from outside the lab is restricted (see this page for more details). Connections can be made through a dedicated gateway server named lpnclaude.in2p3.fr

  • ssh access (double ssh):
    ssh -t your_login@lpnclaude.in2p3.fr ssh your_login@lpnatlas.in2p3.fr
  • ssh access with X11 tunneling:
    ssh -tY your_login@lpnclaude.in2p3.fr ssh -Y your_login@lpnatlas.in2p3.fr

    To simplify the command lines you can add the following lines to your file ~/.ssh/config
    Host lpnatlas.in2p3.fr lpnp110.in2p3.fr
    ProxyCommand ssh -W %h:%p your_login@lpnclaude.in2p3.fr

This tells your machine that an ssh to the server will automatically go through lpnclaude.in2p3.fr. Thus you can type directly:
ssh your_login@lpnatlas.in2p3.fr
sftp your_login@lpnatlas.in2p3.fr
scp your_login@lpnatlas.in2p3.fr:your_distant_file your_local_file
rsync -auv your_login@lpnatlas.in2p3.fr:your_distant_directory your_local_directory

You can do also :
ssh -Y -2 -A lpnatlas.in2p3.fr
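With a recent OpenSSH client (7.3 or later), the same proxying can be written more compactly with the ProxyJump directive; a minimal ~/.ssh/config sketch, where the host alias lpnatlas and the account name your_login are placeholders to adapt:

```
# Jump through the lpnclaude gateway automatically
Host lpnatlas
    HostName lpnatlas.in2p3.fr
    User your_login
    ProxyJump your_login@lpnclaude.in2p3.fr
    ForwardX11 yes
```

With this entry, a plain "ssh lpnatlas" (and likewise scp/sftp/rsync with the lpnatlas alias) goes through the gateway transparently.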

 2.2 Your working spaces

  • Your local space (on your laptop or PC) depends on your machine. A priori it is not backed up.
  • When you log on the group server, your $HOME directory actually resides on a central server for the whole laboratory. You have a quota of 24 GB. Your home directory is under /home/username. WARNING: your home directory is the only space which is regularly backed up (see this FAQ). Large files (like data) should not be kept in this space.
  • /data: these are semi-permanent storage spaces, i.e. data are typically stored for a few months. They are dedicated to the storage of large files, typically data. This area is not backed up. Long-term storage should be done elsewhere (e.g. HPSS in Lyon or on the grid). This space is about 100 TB, without user quota.
  • Your HPSS space in Lyon allows you to save large data files on magnetic tapes. Typically any data saved in /atlas0 should be backed up either on CASTOR at CERN or on HPSS in Lyon. Still, one should bear in mind that these services are not optimized to back up small files. It may be necessary to make a tar archive first.
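Since tape systems handle one large file much better than many small ones, a directory of small files can be bundled into a single compressed archive before transfer; a minimal sketch, where the directory and file names are placeholders:

```shell
# Stand-in for a directory of real analysis output
mkdir -p myresults
echo "some output" > myresults/histos.txt

# Bundle the directory into one compressed archive
tar -czf myresults.tar.gz myresults/

# Check the archive content without extracting it
tar -tzf myresults.tar.gz
```

The resulting myresults.tar.gz is what you would then copy to HPSS; it can be restored later with tar -xzf myresults.tar.gz.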

 2.3 Working at CERN or at CCIN2P3 remotely

Create the mount points, for example with mkdir -p /mnt/ccage-sps /mnt/ccage-home /mnt/lxplus-work /mnt/lxplus-home. Then define the following aliases in your .bashrc or .bash_profile:
alias mountccage='sshfs userid@ccage.in2p3.fr:/sps/atlas/yourdirectory /mnt/ccage-sps ; sshfs userid@ccage.in2p3.fr:/afs/in2p3.fr/home/yourdirectory /mnt/ccage-home'
alias mountlxplus='sshfs userid@lxplus.cern.ch:/afs/cern.ch/work/yourdirectory /mnt/lxplus-work ; sshfs userid@lxplus.cern.ch:/afs/cern.ch/user/yourdirectory /mnt/lxplus-home'
alias unmountccage='umount /mnt/ccage-sps ; umount /mnt/ccage-home'
alias unmountlxplus='umount /mnt/lxplus-work ; umount /mnt/lxplus-home'

To mount your CERN workspaces, just type "mountlxplus"; you will be asked for your password twice (once for each subspace defined above). At the end of your session, unmount the spaces with "unmountlxplus".

When the space is mounted you can work on the files as if they were local: just open Xcode or emacs and edit directly through the path /mnt/lxplus-home/....
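To avoid mounting the same space twice, the aliases can be wrapped in a small helper that first checks whether the mount point is already in use; a sketch assuming the mount points created above (mount_if_needed is a hypothetical helper name; mountpoint is part of util-linux):

```shell
# Run the given mount command only if the target is not already a mount point
mount_if_needed() {
    local target="$1"; shift
    if mountpoint -q "$target"; then
        echo "$target is already mounted"
    else
        # "$@" is the command that performs the actual mount
        "$@"
    fi
}

# Example usage (userid and yourdirectory are placeholders):
# mount_if_needed /mnt/lxplus-home sshfs userid@lxplus.cern.ch:/afs/cern.ch/user/yourdirectory /mnt/lxplus-home
```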

 2.4 Using the server as User Interface

The group server can be used to access ROOT, the ATLAS software, grid tools, etc.

 3. How to use the grid, Athena, etc.?

The ATLAS Computing Workbook gives all the necessary information to use the computing resources. The pages on the Software tutorial are also of interest. Only a few additional details are given below.

 3.1 To start on grid

The WorkBookStartingGrid page explains how to get a grid certificate, join the ATLAS Virtual Organization and prepare your certificate for work.

 3.2 Use of CernVM

We use CernVM-FS (cvmfs) to access Athena, grid tools, versions of ROOT, etc. In particular you can use the ATLASLocalRootBase package to do all the setups. You can have a look at this wiki in Canada for detailed examples.

 3.3 Basic command lines

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh

To use the grid to get data (Rucio etc.):
localSetupRucioClients
export DQ2_LOCAL_SITE_ID=GRIF-LPNHE_LOCALGROUPDISK
export DPNS_HOST=lpnse1.in2p3.fr
voms-proxy-init -voms atlas -valid 96:0

 3.4 How to save your data on LOCALGROUPDISK

When running jobs on the grid, output files are stored in the SCRATCHDISK area of the site where the jobs ran, and are typically erased after two weeks. You can retrieve your files locally using rucio, but you may want to save them in LOCALGROUPDISK areas, which are Tier3 spaces managed by end-users.
You have access to two such LOCALGROUPDISK areas:

  • on CCIN2P3 site, called IN2P3-CC_LOCALGROUPDISK
  • on LPNHE site, called GRIF-LPNHE_LOCALGROUPDISK
There are several ways to get your data on these sites :
  • when running jobs on the grid, for example with prun, use the option --destSE=GRIF-LPNHE_LOCALGROUPDISK (see this page). Your data will be transferred automatically at the end of the job.
  • it is possible to replicate data to these sites afterwards using Rucio R2D2. See this page and this one
  • if your data are not yet on grid it is possible to upload them, following these instructions
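As a sketch, the replication and upload steps above look as follows with the Rucio command-line client; the scope, dataset and file names are placeholders, and these commands require a valid grid proxy, so they cannot be run outside a grid-enabled session:

```
# List the existing replicas of one of your datasets
rucio list-dataset-replicas user.your_login:user.your_login.mydataset

# Ask for one replica of the dataset on the LPNHE LOCALGROUPDISK
rucio add-rule user.your_login:user.your_login.mydataset 1 GRIF-LPNHE_LOCALGROUPDISK

# Upload a local file that is not yet on the grid
rucio upload --rse GRIF-LPNHE_LOCALGROUPDISK --scope user.your_login myfile.root
```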

 4. Other tips and tricks

 4.1 How to use ROOT locally?

Version 6.04/14 is installed on the server. Other versions are available in /usr/local/.

You can also use ROOT through cvmfs, hosted at CERN (needs cvmfs):
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh
localSetupROOT

 4.2 Using Intel compilers (via CERN executables and CC license)

source /afs/cern.ch/sw/IntelSoftware/linux/all-setup.sh
export INTEL_LICENSE_FILE=/home/beau/intel/licenses
Then you can use the C compiler (icc), the C++ compiler (icpc) or the Fortran compiler (ifort). Example:
icc truc.c
See the man pages (e.g. man icc, after the setup) for more information.

 4.3 Other tools to be used

  • VirtualBox is a free and open-source hypervisor for x86 computers, currently developed by Oracle. It supports the creation and management of guest virtual machines running versions and derivatives of Windows, Linux, etc.
  • CodeLite is a free, open-source, cross-platform IDE for the C, C++, PHP and JavaScript programming languages.