Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 1
Tier-2 in India for Alice
Susanta Kumar Pal, VECC, Kolkata
International Workshop on Large Scale of ComputingFebruary 8-10, 2006
Variable Energy Cyclotron Centre1/AF, Bidhan Nagar
Kolkata – 700 064, India
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 2
What is GRID computing?
A Computational Grid is a collection of distributed, heterogeneous resources which can be used as an ensemble to execute large-scale applications. The emphasis is on:
- Distributed supercomputing
- High-throughput and data-intensive applications
- Large-scale storage
- High-speed network connectivity
Why is ALICE interested in GRID?
1 year of Pb-Pb running: 1 PByte of data
1 year of p-p running: 1 PByte of data
Simulations: 2 PBytes
Total data storage: 4 PBytes/year
ALICE computing requirements: simulations, data reconstruction and analysis will use about 10,000 PC-years.
GRID is the solution for ALICE: connect high-performance computers from all collaborating countries with a high-speed secured network, implementing one virtual environment that is easy for the "end user".
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 3
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 4
The LHC Computing Centre – the opportunity of Grid technology
[Diagram: LHC Computing Model (2001 – evolving). The experiments (CMS, ATLAS, LHCb) feed the Tier 0 centre at CERN; Tier 1 centres in Germany, USA, UK, France, Italy, at CERN, and elsewhere; Tier 2 centres serve regional groups of labs and universities (Lab a, Uni a, …); Tier 3 sits at physics departments and physics groups; desktops at the bottom.]
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 5
What is a Tier-2?
The following is taken from the LCG RTAG 6 on Regional Centre Category and Service Definition. A possible categorisation scheme for regional centres could be based on service qualities as follows:
Tier 1
* CPU cycles (Grid-enabled computing elements), advance reservation
* Disk storage, resident and temporary (Grid-enabled storage elements), advance reservation
* Mass storage (Grid-enabled storage elements), advance reservation
* State-of-the-art network bandwidth, quality of service
* Commitment to provide access to primary/master copy of data over the lifetime of LHC
* Commitment to provide long-term access to specific analysis data
* Commitment to resource upgrades as required
* 24/7 services and resource support
* National support role
* Training and user support
* Interactive support for particular applications
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 6
Tier 2
* CPU cycles (Grid-enabled computing elements)
* Disk storage, maybe temporary only (Grid-enabled storage elements)
* May have mass storage
* Sufficient network bandwidth for inter-operability
* A weaker commitment to provide access to data over the LHC lifetime
* A weaker commitment to provide long-term access to specific analysis data
* A weaker commitment to resource upgrades
* Focused user support
* 24/7 service, but with no guaranteed short-time "crash" response
Tier 3
* CPU cycles (Grid-enabled computing elements)
* Local storage (not necessarily Grid-enabled)
* Focused commitment to data access or resource upgrade
* Only local user support
* Focused services for agreed and day-by-day analysis activities
* Local interactive support
Tier 4
* Enable Grid access
* Provide experiment-specific tools
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 7
[Data-flow diagram: Raw + Calib → Calibration & Reconstruction (pass 1) → ESD, AOD, Tag → further Reconstruction → ESD, AOD, Tag → Analysis → DPD]
RAW data delivered by the DAQ undergo calibration and reconstruction, which produce three kinds of objects for each event:
1. ESD object
2. AOD object
3. Tag object
This is done at the Tier-0 site.
Further reconstruction and calibration of RAW data will be done at Tier 1 and Tier 2.
The generation, reconstruction, storage and distribution of Monte-Carlo simulated data will be the main task of Tier 1 and Tier 2.
DPD (Derived Physics Data) objects will be processed at Tier 3 and Tier 4.
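To make the event-data hierarchy above concrete, here is an illustrative sketch; the class and field names are hypothetical stand-ins for clarity, not the actual AliRoot types.

```python
# Illustrative (hypothetical) model of the per-event objects named above;
# the names are invented for clarity and are not AliRoot's actual classes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ESD:                      # Event Summary Data: full reconstruction output
    tracks: List[dict] = field(default_factory=list)

@dataclass
class AOD:                      # Analysis Object Data: compact subset of the ESD
    selected_tracks: List[dict] = field(default_factory=list)

@dataclass
class Tag:                      # Tag: tiny summary used to select events quickly
    run: int = 0
    event: int = 0
    multiplicity: int = 0

@dataclass
class DPD:                      # Derived Physics Data: analysis-specific output
    histograms: dict = field(default_factory=dict)
```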
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 8
Why Tier 2 ?
1. Tier-2 is the lowest level accessible by the whole collaboration.
2. Each sub-detector of ALICE has to be associated with at least a Tier-2 centre because of the large volume of calibration data.
3. PMD and the Muon Arm are important sub-detectors of ALICE.
4. We are solely responsible for the PMD, from conception to commissioning.
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 9
LHC Utilization – ALICE
[ALICE setup schematic, with sub-detectors labelled: HMPID, Muon Arm, TRD, PHOS, PMD, ITS, TOF, TPC]
Indian contribution to ALICE: PMD, Muon Arm
Size: 16 x 26 metres
Weight: 10,000 tons
Cost: 120 MCHF
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 10
ALICE Layout : another view
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 11
ALICE PMD (η = 2.3 – 3.5)
PMD in ALICE: fully Indian contribution
[Photograph: the two halves in the vertical plane, shown in the parked (not-in-use) position]
- Two planes of honeycomb proportional counters
- 3 X0 thick lead converter
- Arranged in two halves in the vertical plane
- Installed at z = 360 cm from the I.P.
The honeycomb counters for the ALICE PMD are a modified version of the STAR PMD.
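As a back-of-the-envelope cross-check (my own sketch, not from the slides), the quoted pseudorapidity window can be converted into the radial span the planes must cover at z = 360 cm (the parameter table later quotes a distance of 350 cm, so the numbers are approximate):

```python
# Radial coverage corresponding to eta = 2.3-3.5 for a plane at z = 360 cm
# from the interaction point (z taken from the bullet above).
import math

def radius_at(z_cm, eta):
    theta = 2.0 * math.atan(math.exp(-eta))   # polar angle from pseudorapidity
    return z_cm * math.tan(theta)

print(round(radius_at(360, 2.3)), "cm")  # ~73 cm (outer edge)
print(round(radius_at(360, 3.5)), "cm")  # ~22 cm (inner edge)
```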
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 12
ALICE PMD Layout (data-taking position)
[Drawing: unit module, supermodule, converter + support plate]
Total channels (preshower + veto) = 221,184
8 supermodules in 2 planes; 48 unit modules in total
Unit module components: honeycomb (4608 cells), top and bottom PCBs
Cooling by air circulation
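A quick arithmetic check of the channel count quoted above (simple bookkeeping, shown as a sketch):

```python
# Consistency check of the PMD channel count: 48 unit modules x 4608 cells each.
cells_per_unit_module = 4608
unit_modules_total = 48                  # 8 supermodules spread over 2 planes
total_channels = unit_modules_total * cells_per_unit_module
print(total_channels)                    # 221184 = preshower + veto planes
```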
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 13
Components of a unit module
[Exploded view: top PCB (details), 32-pin connector, edge frame, copper honeycomb, bottom PCB; 4 x 16 cells are read out by one MCM board]
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 14
PMD unit module assembled
Fabrication of unit modules at Kolkata, Jammu and Bhubaneswar; final assembly at Kolkata.
Unit module: 4608 cells, 421 mm x 260 mm
Unit cell parameters:
- Cell cross-section: 23 mm2
- Cell depth: 5 mm
- Centre-to-centre distance: 5 mm
- Cell wall thickness: 0.4 mm
- Anode wire: 20 µm dia (Au-plated tungsten)
- Anode-cathode distance: 750 µm (on PCB)
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 15
PMD Parameters
- Pseudorapidity coverage: 2.3 – 3.5
- Azimuthal coverage: 2π
- Distance from vertex: 350 cm
- Detector active area: 2 m2
- Detector weight: 1000 kg
- No. of planes: 2 (veto + preshower)
- Lead plate + SS plate thickness: 3 radiation lengths
- Detector: gas detector with hexagonal cells
- Hexagonal cell dimensions: depth 0.5 cm, cross-section 0.22 cm2
- Total number of cells: 221,184 (110,592 cells in each plane)
- Detector gas: Ar + CO2 (70% + 30%)
- Total gas volume: 0.02 m3 (20 litres)
- No. of supermodules per plane: 4
- No. of unit modules per supermodule: 6 (HV isolation at the unit-module level)
- No. of HV channels: 48
- Average occupancy (at full multiplicity): 13% for CPV and 28% for PMD
- Photon counting efficiency: 64%
- Purity of photon sample: 60%
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 16
muon tracking quadrants assembled in Kolkata
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 17
Cell-to-cell gain calibration for PMD
Need: uniform response throughout the detector.
How to do it:
- Test-beam studies tell us the hadron response of our detector: a single cell is affected.
- The pulse-height spectrum in a cell is Landau-like; its mean can be used for calibration.
- In data, look for the ADC spectrum of single isolated cells (a sketch of the procedure follows below).
[Illustration: an isolated cell has ADC > 0 while all its neighbouring cells read 0. Plot: STAR PMD isolated-cell ADC spectra.]
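A minimal sketch of the procedure described above, assuming the per-event data arrive as a flat array of ADC counts and that a neighbour map for the hexagonal cells is available; it uses the plain spectrum mean instead of a fitted Landau mean, so it is illustrative only and not the actual AliRoot calibration code.

```python
# Sketch of cell-to-cell gain calibration from isolated-cell ADC spectra.
# Assumptions: `adc` is a 1-D array of ADC counts (one entry per cell) and
# `neighbours[cell]` lists the indices of that cell's hexagonal neighbours.
import numpy as np

def isolated_hits(adc, neighbours):
    """Yield (cell, adc) for cells with ADC > 0 whose neighbours all read 0."""
    for cell, value in enumerate(adc):
        if value > 0 and all(adc[n] == 0 for n in neighbours[cell]):
            yield cell, value

def gain_constants(events, neighbours, n_cells):
    """Per-cell relative gains from the mean of each isolated-hit spectrum."""
    sums = np.zeros(n_cells)
    hits = np.zeros(n_cells)
    for adc in events:                                   # loop over events
        for cell, value in isolated_hits(adc, neighbours):
            sums[cell] += value
            hits[cell] += 1                              # aim for ~1000 per cell
    means = np.divide(sums, hits, out=np.zeros(n_cells), where=hits > 0)
    reference = means[hits > 0].mean()                   # detector-average response
    return np.divide(reference, means, out=np.ones(n_cells), where=means > 0)
```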
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 18
Cell-to-cell gain calibration for PMD
- The ALICE PMD has 221,184 cells to be calibrated.
- For this we need an isolated-cell ADC spectrum for each cell.
- Minimum number of entries needed for a good Landau distribution: ~1000.
- We therefore need at least 1 million events for calibration.
- Data volume: ~200 K channels x 0.25 occupancy x 4 bytes x 1 million events ≈ 200 GB (restated explicitly below).
Although calibration is done once per running period, it may be advisable to check the calibration constants from time to time.
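Restating the estimate above as explicit arithmetic (the 25% average occupancy and 4 bytes per fired channel are the factors already quoted on the slide):

```python
# Calibration data-volume estimate: channels x occupancy x bytes x events.
channels = 200_000          # ~221,184 PMD cells, rounded
occupancy = 0.25            # average fraction of cells fired per event
bytes_per_channel = 4
events = 1_000_000
volume = channels * occupancy * bytes_per_channel * events
print(volume / 1e9, "GB")   # 200.0 GB
```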
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 19
MANAS calibration for the tracking detectors of the Muon Spectrometer
- With and without MANAS calibration, pre-production batch: gain spread 5%.
- Expected gain spread in the MANAS production batch: ~2.3%, so channel-gain calibration may not be essential any more.
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 20
Expected data rate for the Muon Arm
Physics data:
- Total number of readout pads in the FMS: 1.1 x 10^6
- Trigger rate: ~1 kHz
- Average occupancy: ~2.5%
- 1 month of Pb-Pb data ≈ 300 TB (a back-of-the-envelope check follows below)
Pedestal data:
- Every run: ~5 MB (the σ value of each channel needs to be known for analysis)
- One pedestal run per hour → ~40 GB/month
Electronics calibration data:
- The frequency will depend on the observed gain spread (if it is < 2.5%, the uncalibrated resolution will be satisfactory)
- Every run: ~10 MB (for a 2-point calibration)
GMS data:
- Comparable to or less than the pedestal data; estimate from the SINP group
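One way to arrive at the ~300 TB/month figure quoted above; the 4 bytes per fired pad and 30 days of continuous running are my own assumptions for the sketch, not numbers from the slide:

```python
# Rough reconstruction of the Muon Arm physics data-volume estimate.
pads = 1.1e6               # readout pads in the muon tracking system
occupancy = 0.025          # average fraction of pads fired per event
bytes_per_pad = 4          # assumed raw-data size per fired pad
trigger_rate = 1e3         # Hz
seconds = 30 * 24 * 3600   # one month of continuous running (assumption)

event_size = pads * occupancy * bytes_per_pad        # ~110 kB per event
month_volume = event_size * trigger_rate * seconds   # ~2.9e14 bytes
print(event_size / 1e3, "kB/event;", month_volume / 1e12, "TB/month")
```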
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 21
Required Computing Resources for Tier 2
At VECC (per year: # CPU (Intel Xeon) / disk space in TB):
2004: 8 / 0.5
2005: 8 / 0.5
2006: 48 / 12
2007: 64 / 24
2008: 64 / 12
Total: 192 CPUs / 49 TB

By 2006 (# CPU (Intel P3) / disk space in TB):
Tier 0: 28,800 / 727
Tier 1 + 2: 7,200 / 238
In total there will be 6 Tier-1 centres, and for each Tier 1 there will be several (~5-6) Tier-2 centres. Tier-2 centres should have roughly 30% of the Tier 1 + 2 capacity.
Bandwidth: Tier-2 centres rely entirely on the associated Tier-1 centres for reconstruction-data storage. For efficient and fast analysis a bandwidth of 1.5 Gb/s is a reasonable value, although smaller values are viable (a small illustration follows below).
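For context, a small illustration (my own, not from the slide) of what these bandwidths mean in terms of data moved per day:

```python
# Data that can be moved per day at the bandwidths mentioned in this talk.
def tb_per_day(bits_per_second):
    return bits_per_second / 8 * 86_400 / 1e12       # bytes/day expressed in TB

print(tb_per_day(1.5e9))   # ~16 TB/day at the suggested 1.5 Gb/s
print(tb_per_day(4e6))     # ~0.04 TB/day (~43 GB) at the 4 Mbps Kolkata link
```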
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 22
The following H/W and S/W infrastructures were used :
•8 Node Cluster consisting of dual Xeon CPU & 400GB Disk Space.
•PBS Batch System with one Management server and eight Clients under OSCAR cluster management environment
•ALICE Environment ( AliEn ) was installed
•Data Grid has been registered at cern.ch
•AliROOT, GEANT and other production-related packages were tested successfully in both ways
•Linked with CERN via the available 2 Mbps Internet link and participated in PDC'04
Our Experience with Alice-Grid & PDC’04
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 23
AliEn Architecture in General
[Layered architecture diagram (low level to high level), with the following labelled blocks: External Libraries; Core Modules; Core Middleware; Core Components & Services; Interfaces; User Application. Individual components shown include: RDBMS, LDAP, DBI, DBD, ADBI, SOAP/XML, V.O. Packages & Commands, File & Metadata Catalogue, Config Mgr, Package Mgr, Authenticator, Resource Broker, Monitor/Gatekeeper, Logger, Database Proxy, Computing Element, Storage Element, User Interface (Web Portal, CLI, GUI), API (C/C++/perl/java), FS (…).]
Our Experience with Alice-Grid & PDC'04
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 24
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 25
Our Experience with Alice-Grid & PDC’04
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 26
Changed the cluster software from OSCAR to QUATTOR:
- OSCAR: performance degrades with a larger number of nodes (> 32 nodes)
- QUATTOR: performance does not degrade as nodes are added; better performance with more nodes
Dedicated bandwidth for Tier2@Kolkata: 4 Mbps
Separate domain name for the Tier-2: 'tier2-kol.res.in'
Addition of more CPUs (48 Xeon) and storage (7 TB)
Installation of gLite is in progress
Preparing for next PDC
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 27
gLite Middleware Services
gLite Architecture
[Diagram: GAS; WM, DM; TQ, PM, FTQ; ACE, FC; CE, JW(JA), SE; CR (LSF, …); LJC, SRM; LRC; API]
Legend:
GAS – Grid Access Service
WM – Workload Mgmt
DM – Data Mgmt
RB – Resource Broker
TQ – Task Queue
FPS – File Placement Service
FTQ – File Transfer Queue
PM – Package Manager
ACE – AliEn CE (pull)
FC – File Catalogue
JW – Job Wrapper
JA – Job Agent
LRC – Local Replica Catalogue
LJC – Local Job Catalogue
SE – Storage Element
CE – Computing Element
SRM – Storage Resource Mgr
CR – Computing Resource (LSF, PBS, …)
Preparing for next PDC
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 28
[Network diagram of Tier-2@Kolkata: the VECC cluster (high-availability, Quattor) and the SINP cluster (high-availability, Quattor), with a management node and a stand-by management node, connected through a switch, firewall and router; Gigabit networking internally and a 4 Mbps link through the Internet cloud to CERN.]
Present Status
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 29
Present Status
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 30
Present Status
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 31
[Monitoring screenshot: grid.veccal.ernet.in graphs, last hour, sorted descending]
Present Status
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 32
The new domain name 'tier2-kol.res.in' has been registered and work is going on.
The CONDOR batch system is running with one server and eight clients under the QUATTOR cluster-management environment.
AliROOT, GEANT and other production-related packages have been tested successfully in both ways.
The ALICE Environment (AliEn) is at present NOT running. The Data Grid has been registered at cern.ch. Linked with CERN via the available 2 Mbps Internet link; a 4 Mbps bandwidth is already installed and commissioned.
Tier-II Centre for ALICE (Update on VECC and SINP Activities)
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 33
Infrastructure Status:
• Fully equipped GRID room is ready (sufficient to accommodate the projected Tier-II hardware)
VECC:
• Main room (20 ft x 19 ft): six auto-switchable AC units
• UPS room (10 ft x 19 ft)
SINP:
• Main room (30 ft x 20 ft): six auto-switchable AC units; network to VECC main
• UPS room (20 ft x 15 ft)
Tier-II Centre for ALICE (Update on VECC and SINP Activities)
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 34
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 36
Contribution towards LCG-GRID projects

LCG-AliEn-SE Interface using GridFTP (1st July '03 to 18th Dec '03)
Status: the project was successfully completed before schedule.
Milestones of the project were as follows:
- Test-bed installation for the Grid environment: the configuration consists of one central server and two sites.
- Certification Authority server installed: a simple certification authority has been installed to generate certificates for authentication purposes.
- Installation of the GridFTP library under AliEn: the GridFTP daemon in.ftpd has been used as the server and globus-url-copy as the client.
- Development of the AliEn-SE interface via GridFTP: these newly developed modules, together with the necessary GridFTP libraries and changes made to the existing AliEn code, have been committed to the CVS server at CERN.

Quality Assurance and Test Environment for the AliEn-ARDA* prototype (23rd Dec '03 to 31st March '04)
The project was successfully completed. Milestones of the project were:
- Exploration and design of test scripts using Perl.
- Implementation of test scripts for each individual Perl sub-module of AliEn: individual Perl sub-modules of the AliEn code were tested for proper functionality. The suite generates a detailed report of the individual tests and maintains a log.
- Validation of test scripts and procedures.
- Testing modules with the Perl harnessing environment: the complete suite was tested at CERN under the Perl harnessing environment, testing AliEn online and generating an online consolidated report of the tests.
- Inline documentation to the extent possible.

*ARDA: Architectural Roadmap towards Distributed Analysis
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 37
To summarize:
• The infrastructure is ready to accommodate the Tier-2 hardware.
• Upgrading of the present AliEn cluster to gLite is in progress.
• The middleware is installed on the current hardware, on the latest Scientific Linux OS platform.
• With limited resources, VECC took part in the ALICE Physics Data Challenge (PDC 2004).
• Getting ready for the next PDC with the upgraded infrastructure.
• Upgrading of the CE and SE is in process as per requirements.
• One FTE and two engineers at 30% of their time are engaged; two more FTEs have been approved.
Thank you
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 38
[Figure with labels: PMD, V0]
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 39
PMD Split Position
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 40
Super Module and Unit module arrangement of ALICE PMD
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 41
Super Module Type B
Unit modules: 12 FEE boards in a row, 6 rows
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 42
Super Module Type A
Unit modules: 24 FEE boards in a row, 3 rows
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 43
Contribution towards the LCG project
• LCG-AliEn Storage Element Interface (addendum 5), 1st July '03 to 18th Dec '03, value: 150 KCHF
• Test suite for Perl harnessing with AliEn (addendum 6), 23rd Dec '03 to 31st March '04, value: 150 KCHF
• VECC, as part of the ALICE Data-Challenge team, ran offline production on the existing infrastructure.
• Currently the main emphasis is on participation in the ALICE Physics Data Challenge 2005-06.
Future projects:
• Development of a test environment for ARDA (Architectural Roadmap towards Distributed Analysis) code
• Testing ARDA code under the Perl Test harness guidelines
• Taking part in EGEE (Enabling Grids for E-sciencE) prototype development
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 44
Security: Overview
User side:
- Getting a certificate
- Becoming a member of the VO
Server side:
- Authentication / CA
- Authorization / VO
Our Experience with Alice-Grid & PDC'04
Susanta K Pal IWLSC, Feb 8-10, 2006, Kolkata, India 45