
User Guide

by Frederic Derue - 27 January

 1. Concerning your account

Please refer to the FAQ pages of the computing service.

 2. Which resources are available at LPNHE?

Besides your laptop (or PC), the ATLAS group server is the machine lpnp110.in2p3.fr

  • CPU : 16 cores (up to 32 with hyperthreading), Intel Xeon E5-2650 (2 GHz)
  • Memory : 128 GB
  • OS : SL6 x86_64

Other CPU resources are available through grid computing (GRIF) or High Performance Computing (see this page).

 2.1 How to log on?

For security reasons, ssh access from outside the lab is restricted and goes through a dedicated gateway server, lpnclaude.in2p3.fr

  • ssh access (double ssh):
    ssh -t your_login@lpnclaude.in2p3.fr ssh your_login@lpnp110.in2p3.fr
  • ssh access with X11 tunneling:
    ssh -tY your_login@lpnclaude.in2p3.fr ssh -Y your_login@lpnp110.in2p3.fr

    To simplify the command lines, you can add the following lines to your ~/.ssh/config file:
    Host lpnp110.in2p3.fr
    ProxyCommand ssh -W %h:%p your_login@lpnclaude.in2p3.fr

This tells your machine that any ssh to the group server automatically goes through lpnclaude.in2p3.fr. Thus you can type directly:
ssh your_login@lpnp110.in2p3.fr
sftp your_login@lpnp110.in2p3.fr
scp your_login@lpnp110.in2p3.fr:your_distant_file your_local_file
rsync -auv your_login@lpnp110.in2p3.fr:your_distant_directory your_local_directory
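With a recent OpenSSH client (version 7.3 or newer, an assumption about your machine), the same effect can be obtained with the ProxyJump directive; a minimal ~/.ssh/config sketch (the short Host aliases are illustrative):

```
Host lpnclaude
    HostName lpnclaude.in2p3.fr
    User your_login

Host lpnp110
    HostName lpnp110.in2p3.fr
    User your_login
    ProxyJump lpnclaude
```

With this in place, ssh lpnp110 connects through the gateway transparently.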

 2.2 Your working spaces

  • Your local space (on your laptop or PC) depends on your machine. A priori it is not backed up.
  • When you log in on the group server, the $HOME directory is actually on another central server shared by the whole laboratory. You have a quota of 24 GB. Your home directory is under /home/username. WARNING: Your home directory is the only space which is regularly backed up (see this FAQ). Large files (like data) should not be kept in this space.
  • /data : These are semi-permanent storage spaces, i.e. data are typically stored for a few months. They are dedicated to the storage of large files, typically data. This area is not backed up. Long-term storage should be done elsewhere (e.g. HPSS in Lyon or on the grid). This space is about 100 TB without user quota.
  • Your HPSS space in Lyon allows you to save large data files on magnetic tapes. Typically any data saved in /atlas0 should be backed up either on CASTOR at CERN or on HPSS in Lyon. Keep in mind that these services are not optimized for backing up small files; it may be necessary to make a tar archive first.
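To illustrate the last point, many small files can be bundled into a single archive before sending them to tape (the directory and file names below are placeholders):

```shell
# create a directory with a few small files (placeholder data)
mkdir -p mydata
echo "event 1" > mydata/run1.txt
echo "event 2" > mydata/run2.txt

# bundle everything into a single compressed archive before sending it to tape
tar czf mydata.tar.gz mydata/

# list the archive contents to check it
tar tzf mydata.tar.gz
```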

 2.3 Working at CERN or at CCIN2P3 remotely

Create the mount directories (e.g. mkdir -p /mnt/ccage-sps /mnt/ccage-home /mnt/lxplus-work /mnt/lxplus-home), then define the following aliases in your .bashrc or .bash_profile :
alias mountccage='sshfs userid@ccage.in2p3.fr:/sps/atlas/yourdirectory /mnt/ccage-sps ; sshfs userid@ccage.in2p3.fr:/afs/in2p3.fr/home/yourdirectory /mnt/ccage-home'
alias mountlxplus='sshfs userid@lxplus.cern.ch:/afs/cern.ch/work/yourdirectory /mnt/lxplus-work ; sshfs userid@lxplus.cern.ch:/afs/cern.ch/user/yourdirectory /mnt/lxplus-home'
alias unmountccage='umount /mnt/ccage-sps ; umount /mnt/ccage-home'
alias unmountlxplus='umount /mnt/lxplus-work ; umount /mnt/lxplus-home'

To mount your CERN workspaces, just type "mountlxplus" and you will be asked for your password twice (once for each mounted subspace). At the end of your session, unmount the spaces with "unmountlxplus".

When the space is mounted you can work on the files as if they were local: just open Xcode or emacs and edit directly through the path /mnt/lxplus-home/....

 2.4 Using the server as User Interface

The group server can be used to access ROOT, the ATLAS software, grid tools, etc.

 3. How to use the grid, Athena etc.?

The ATLAS Computing Workbook gives all the necessary information to use the computing resources. The pages on the Software tutorial are also of interest. Only a few additional details are given below.

 3.1 To start on the grid

The WorkBookStartingGrid gives information on how to get a grid certificate, join the ATLAS Virtual Organization and prepare your certificate for work.

 3.2 Use of CernVM-FS

We use CernVM-FS to access Athena, grid tools, various ROOT versions, etc. In particular you can use the ATLASLocalRootBase package to do all the setups. You can have a look at this wiki in Canada for detailed examples.

 3.3 Basic command lines

zsh
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source $ATLAS_LOCAL_ROOT_BASE/user/atlasLocalSetup.sh
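Before sourcing the setup script, you can check that the CVMFS repository is actually visible on the machine (a quick sanity check, using the path from the lines above):

```shell
# path of the ATLAS CVMFS repository used above
ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase

# warn early if CVMFS is not mounted on this machine
if [ -d "$ATLAS_LOCAL_ROOT_BASE" ]; then
    echo "CVMFS repository found"
else
    echo "CVMFS repository not mounted on this machine"
fi
```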

To use the grid to get data (Rucio etc.):
localSetupRucioClients
export DQ2_LOCAL_SITE_ID=GRIF-LPNHE_LOCALGROUPDISK
export DPNS_HOST=lpnse1.in2p3.fr
voms-proxy-init -voms atlas -valid 96:0

 4. Other tips and tricks

 4.1 How to use ROOT locally?

Version 6.04/14 is installed on the server. Other versions are available in /usr/local/.

 4.2 Using Intel compilers (via CERN executables and CC license)

source /afs/cern.ch/sw/IntelSoftware/linux/all-setup.sh
export INTEL_LICENSE_FILE=/home/beau/intel/licenses
Then you can use the C compiler (icc), the C++ compiler (icpc) or the Fortran compiler (ifort). Example:
icc truc.c
See the man pages (e.g. man icc, after the setup) for more information.
