CERN Status and Plans
Maria Girone, CERN IT-DM
CERN IT Department, CH-1211 Genève 23, Switzerland – www.cern.ch/it
Slide 2
Outline
• CERN Physics DB Services Readiness
  – Current Resource Allocation & Usage
  – Alarm and Problem Escalation & Handling
  – New Requests
    • Streams
    • Archive DBs
• Plans
Slide 3
Physics Database Services Review
• Note:
  – ALICE online, integration, standby and archive DBs are deployed on dual-core servers. Warranty extended for the DBs at the CC (to be replaced in Q1/Q2 2010).
  – CMS online uses 4-core servers.

Exp     Online (Standby)   Offline (Standby)   Validation    Archive
ALICE   6-nodes (Y)        On PDBR             No            No
ATLAS   3-nodes (Y)        5-nodes (N)         Two 2-nodes   3-nodes
CMS     6-nodes (Y)        4-nodes (Y)         Two 2-nodes   3-nodes
LHCb    4-nodes (Y)        3-nodes (Y)         2-node        No
WLCG    –                  4-nodes (Y)         Two 2-nodes   No
Slide 4
ATLR (ATLAS Offline)
Slide 5
ATONR (ATLAS Online)
Slide 6
ATLR: High Load Handling
• A clear trend of increasing resource usage has been observed on ATLR. Actions taken (see the monitoring sketch after this list):
  1. Application review with the developers
  2. Migration of ATLAS_Dashboard to WLCGR
  3. Panda and DQ2 moved to dedicated nodes (node 3 and node 5; node 5 newly added)
  4. Reallocation of TAGS to a dedicated DB (Archive DB)
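To verify that actions like (3) actually redistribute load, one can count active sessions per RAC instance and service. This is a minimal monitoring sketch, not from the talk: it assumes the cx_Oracle driver and a monitoring account with SELECT rights on gv$session; the DSN and credentials are placeholders.

```python
# Minimal sketch (assumptions: cx_Oracle, read access to gv$session;
# the DSN, user and password below are placeholders, not real services).
import cx_Oracle

conn = cx_Oracle.connect("monitor", "secret", "atlr.example.cern.ch/ATLR")
cur = conn.cursor()
# Count active user sessions per RAC instance and per service, to check
# whether services pinned to dedicated nodes (e.g. Panda, DQ2) stay there.
cur.execute("""
    SELECT inst_id, service_name, COUNT(*)
    FROM   gv$session
    WHERE  status = 'ACTIVE' AND type = 'USER'
    GROUP  BY inst_id, service_name
    ORDER  BY inst_id, service_name""")
for inst_id, service, n in cur:
    print(f"instance {inst_id}: {service or '(none)'}: {n} active sessions")
cur.close()
conn.close()
```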
Slide 7
ATLR: High Load Issue
Slide 8
CMSR (CMS Offline)
Slide 9
CMSONR (CMS Online)
Slide 10
LCGR (WLCG)
Slide 11
Alarms and Problem Handling
• Alarms:
  – GGUS alarms are handled by the CERN and Tier-1 site ROCs
  – For CERN there will be a DB piquet (being finalized now)
• Experiments and Physics DB contacts:
  https://twiki.cern.ch/twiki/bin/view/PDBService/PhysicsDatabasesSection
  Example: the recent LHCb Streams replication issue (see the status check below)
• "Team tickets": Phydb.support@cern.ch, and grid-service-databases@cern.ch for Distributed DB Operations
  Examples: consultancy, scheduling of interventions, problems
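For Streams incidents like the LHCb example above, a natural first triage step before escalating is to check whether the capture and apply processes are still enabled. A hedged sketch, assuming cx_Oracle and SELECT rights on the DBA_CAPTURE and DBA_APPLY dictionary views; connection details are placeholders:

```python
# Sketch only: the DSN and credentials below are placeholders.
import cx_Oracle

def streams_health(dsn, user, password):
    """Print the status of all Streams capture and apply processes."""
    conn = cx_Oracle.connect(user, password, dsn)
    cur = conn.cursor()
    for view, name_col in (("dba_capture", "capture_name"),
                           ("dba_apply", "apply_name")):
        cur.execute(f"SELECT {name_col}, status, error_message FROM {view}")
        for name, status, error in cur:
            # Anything not ENABLED (i.e. DISABLED or ABORTED) warrants a ticket.
            print(f"{view}: {name} is {status}")
            if error:
                print(f"    last error: {error}")
    cur.close()
    conn.close()

streams_health("lhcb-offline.example.cern.ch/LHCBR", "monitor", "secret")
```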
Slide 12
On-call team
• 7 DBAs support the DB services
• Online and offline DB support is 24x7
• Streams set-up support is 8x7
• Archive DB support: still an open question
Slide 13
Work in Progress
• ATLAS:
  – Consolidation of ATLR to the critical offline applications (COOL, Panda, DQ2)
  – Standby DB for ATLR
  – Dedicated resources for TAGS on an ATLAS Archive DB (expected growth of ~11 TB/year for event TAGS and ~8 TB/year for MC TAGS; see the estimate below)
• CMS:
  – Freeze the state of the production database a few times per year for use in re-reconstruction on a dedicated CMS Archive DB
• Until the new hardware arrives, the archive DBs are deployed on old hardware with a limited space allocation of the order of 1 TB. This can be increased by relaxing on-disk backups (tape backups are in place).
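To see why the ~1 TB interim allocation is only a stopgap for TAGS, a back-of-the-envelope estimate using the growth rates quoted above (a sketch assuming linear growth):

```python
# Back-of-the-envelope estimate; assumes linear growth at the quoted rates.
EVENT_TAGS_TB_PER_YEAR = 11.0  # expected event-TAGS growth (slide figure)
MC_TAGS_TB_PER_YEAR = 8.0      # expected MC-TAGS growth (slide figure)
INTERIM_ALLOCATION_TB = 1.0    # order-of-magnitude space on old hardware

total_rate = EVENT_TAGS_TB_PER_YEAR + MC_TAGS_TB_PER_YEAR   # 19 TB/year
months_to_fill = INTERIM_ALLOCATION_TB / total_rate * 12    # ~0.6 months
print(f"At {total_rate} TB/year, the {INTERIM_ALLOCATION_TB} TB allocation "
      f"fills in about {months_to_fill:.1f} months")
```

At the quoted rates the interim allocation fills in well under a year, hence the note above about relaxing on-disk backups to free space until the new hardware is in place.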
Slide 14
Medium-Term Plans - End of Run 2010
• Prepare the move of the production DBs from RHEL4 to RHEL5
  – Most of the validation and test DBs are already on RHEL5
  – Some validation DBs will be kept on the current production OS version for patch-update validation
• Started studying the new features of Oracle 11gR2 within the openlab programme of work with Oracle (see tomorrow), with the aim of upgrading to 11g during the 2010-2011 shutdown
Slide 15
Conclusions
• The Physics Database Services have grown significantly in the last few years:
  – service size
  – service quality
  – user appreciation
• A robust, scalable and performant service has been built
• Close collaboration with the experiments is even more important now that the real LHC workload is coming