
User Guide

by Frederic Derue - 11 April

 1. Concerning your account

Please refer to the FAQ pages of the computing service.

 2. Which resources are available at LPNHE?

Besides your laptop (or PC), the ATLAS group server is available, with the following configuration:

  • CPU: 16 cores (up to 32 with hyperthreading), Intel Xeon E5-2650 (2.0 GHz)
  • Memory: 128 GB
  • OS: SL6, x86_64

Other CPU resources are available at LPNHE:

  • grid resources: see the GRIF pages
  • High Performance Computing: see this page

 2.1 How to log on?

For security reasons, ssh access is restricted from outside the lab (see this page for more details). Connections must go through a dedicated gateway server.

  • ssh access (double ssh):
    ssh -t your_login ssh your_login
  • ssh access with X11 tunneling:
    ssh -tY your_login ssh -Y your_login

    To simplify the command lines, you can add the following line to your ~/.ssh/config file:
    ProxyCommand ssh -W %h:%p your_login
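A fuller sketch of that configuration may help; the stanza below is a hypothetical example (gateway.example.org and atlas-server stand in for the lab's real host names, which you should substitute). With it in place, a plain ssh atlas-server is tunnelled through the gateway automatically:

```
# ~/.ssh/config -- the host names here are placeholders, adapt them
Host atlas-server
    User your_login
    ProxyCommand ssh -W %h:%p your_login@gateway.example.org
```

scp, sftp and rsync then work against atlas-server directly as well.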

This tells your machine that when you ssh to the server, it will automatically go through the gateway. Thus you can type directly:
ssh your_login
sftp your_login
scp your_login your_local_file
rsync -auv your_login your_local_directory
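Once the ProxyCommand line is in place in ~/.ssh/config, these commands take their usual full forms; atlas-server below is a placeholder for the actual server name:

```
ssh your_login@atlas-server
sftp your_login@atlas-server
scp your_login@atlas-server:remote_file your_local_file
rsync -auv your_login@atlas-server:remote_directory/ your_local_directory/
```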

You can also do:
ssh -Y -2 -A

 2.2 Your working spaces

  • Your local space (on your laptop or PC) depends on your machine. A priori it is not backed up.
  • When you log on the group server, the $HOME directory is actually on another central server for the whole laboratory. You have a quota of 24 GB. Your home directory is under /home/username. WARNING: Your home directory is the only space which is regularly backed up (see this FAQ). Large files (like data) should not be kept in this space.
  • /data: these are semi-permanent storage spaces, i.e. data are typically stored for a few months. They are dedicated to the storage of large files, typically data. This area is not backed up. Long-term storage should be done elsewhere (e.g. HPSS in Lyon or on the grid). This space is about 100 TB, without user quota.
  • Your HPSS space in Lyon allows you to save large data files on magnetic tapes. Typically any data saved in /atlas0 should be backed up either on CASTOR at CERN or on HPSS in Lyon. Still, one should remember that these services are not optimized for backing up small files. It may be necessary to tar them first.
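Since tape systems like HPSS and CASTOR cope badly with many small files, it is worth bundling them into a single archive before transfer. A minimal sketch, using a made-up ntuples directory to stand in for real data:

```shell
# Create a dummy directory of small files (stand-in for real data).
mkdir -p ntuples
printf 'dummy' > ntuples/run1.txt
printf 'dummy' > ntuples/run2.txt

# Check the directory size, then bundle it into one compressed archive
# that is suitable for tape storage.
du -sh ntuples
tar -czf ntuples.tar.gz ntuples

# List the archive contents to verify before deleting the originals.
tar -tzf ntuples.tar.gz
```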

 2.3 Working at CERN or at CCIN2P3 remotely

Create a mount directory (mkdir /mnt), then define the following aliases in your .bashrc or .bash_profile:
alias mountccage='sshfs userid:/sps/atlas/yourdirectory /mnt/ccage-sps ; sshfs userid:/afs/ /mnt/ccage-home'
alias mountlxplus='sshfs userid:/afs/ /mnt/lxplus-work ; sshfs userid:/afs/ /mnt/lxplus-home'
alias unmountccage='umount /mnt/ccage-sps ; umount /mnt/ccage-home'
alias unmountlxplus='umount /mnt/lxplus-work ; umount /mnt/lxplus-home'

To mount your CERN workspaces, just type "mountlxplus" and you are asked for your password twice (once for each subspace defined). At the end of your session, unmount the spaces with "unmountlxplus".

When the space is mounted you can work on the files as if they were local. You just open Xcode or emacs and edit directly through the path /mnt/lxplus-home/....

 2.4 Using the server as User Interface

The group server can be used to access ROOT, ATLAS software, grid tools, etc.

 3. How to use the grid, Athena, etc.?

The ATLAS Computing Workbook gives all the necessary information to use the computing resources. The pages on the Software tutorial are also of interest. Only a few additional pieces of information are given below.

 3.1 To start on the grid

The WorkBookStartingGrid page gives information on how to get a grid certificate, join the ATLAS Virtual Organization and prepare your certificate for work.

 3.2 Use of CernVM

We use CernVM to access Athena, grid tools, versions of ROOT, etc. In particular you can use the ATLASLocalRootBase package to do all the setups. You can have a look at this wiki in Canada for detailed examples.

 3.3 Basic command lines

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/

To use the grid to get data (Rucio etc.):
voms-proxy-init -voms atlas -valid 96:0

 4. Other tips and tricks

 4.1 How to use ROOT locally?

Version 6.04/14 is installed on the server. Other versions are available in /usr/local/.

You can also use ROOT through cvmfs located at CERN (needs AFS):
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/
source $ATLAS_LOCAL_ROOT_BASE/user/
localSetupROOT

 4.2 Using Intel compilers (via CERN executables and CC license)

source /afs/
export INTEL_LICENSE_FILE=/home/beau/intel/licenses
Then you can use the C compiler (icc), the C++ compiler (icpc) or the Fortran compiler (ifort). Example:
icc truc.c
See the man pages (e.g. man icc, after init) for more information.

 4.3 Other tools to be used

  • VirtualBox is a free and open-source hypervisor for x86 computers, currently developed by Oracle. It supports the creation and management of guest virtual machines running versions and derivatives of Windows, Linux, etc.
  • CodeLite is a free, open-source, cross-platform IDE for the C, C++, PHP, and JavaScript programming languages.