Computing for the next generation flavour factories
CORVO, Marco; FELLA, Armando; GIANOLI, Alberto; LUPPI, Eleonora; TOMASSETTI, Luca
2011
Abstract
The next generation of Super Flavor Factories, such as SuperB and SuperKEKB, presents significant computing challenges. Extrapolating the BaBar and Belle experience to the SuperB nominal luminosity of 10³⁶ cm⁻² s⁻¹, we estimate that the data size collected after a few years of operation will be about 200 PB, with the amount of CPU required to process it of the order of 2000 kHEP-SPEC06. Already in the current phase of detector design, the number of simulated events needed to estimate the impact on very rare benchmark channels is huge, and has required the development of new simulation tools and the deployment of a worldwide distributed production system. Once the collider is in operation, very large data sets will have to be managed, and new technologies with a potentially large impact on the computing models, such as many-core CPUs, will need to be exploited effectively. In addition, SuperB, like the LHC experiments, will have to make use of distributed computing resources accessible via Grid infrastructures, while providing an efficient and reliable data access model to its end users. To explore the key issues, a dedicated R&D program has been launched and is now in progress. A description of the R&D goals and the status of the ongoing activities is presented.
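The 200 PB and 2000 kHEP-SPEC06 figures are stated in the abstract without the underlying arithmetic. The sketch below is a minimal back-of-envelope reconstruction of that kind of extrapolation in Python; every parameter (effective running time, logged cross-section, per-event storage footprint, per-event CPU cost) is an illustrative assumption chosen only to be of a plausible order of magnitude, not a value taken from the paper.

```python
# Back-of-envelope extrapolation of SuperB storage and CPU needs.
# Every parameter below is an illustrative assumption (hypothetical),
# chosen only to land in a plausible range; the abstract states just
# the end results (~200 PB, ~2000 kHEP-SPEC06).

LUMINOSITY = 1e36            # nominal instantaneous luminosity [cm^-2 s^-1]
LIVE_SECONDS_PER_YEAR = 1e7  # assumed effective data-taking time per year [s]
YEARS = 5                    # "a few years of operation" (assumed)

XSEC_LOGGED_NB = 10.0        # assumed effective logged cross-section [nb]
NB_TO_CM2 = 1e-33            # 1 nb in cm^2
EVENT_FOOTPRINT_KB = 400.0   # assumed total storage per event, all formats [kB]
CPU_PER_EVENT_HS06S = 600.0  # assumed CPU cost per event [HEP-SPEC06 * s]
WALL_SECONDS_PER_YEAR = 3.15e7

# Integrated luminosity and total number of logged events
integrated_lumi = LUMINOSITY * LIVE_SECONDS_PER_YEAR * YEARS   # [cm^-2]
n_events = integrated_lumi * XSEC_LOGGED_NB * NB_TO_CM2

# Total storage footprint
data_pb = n_events * EVENT_FOOTPRINT_KB * 1e3 / 1e15           # [PB]

# Steady-state CPU power needed to keep up with one year of data
cpu_khs06 = (n_events / YEARS) * CPU_PER_EVENT_HS06S \
            / WALL_SECONDS_PER_YEAR / 1e3                      # [kHEP-SPEC06]

print(f"logged events : {n_events:.1e}")          # ~5e11
print(f"storage       : {data_pb:.0f} PB")        # ~200 PB
print(f"CPU power     : {cpu_khs06:.0f} kHS06")   # ~1900 kHEP-SPEC06
```

A real estimate of this kind would fold in details this sketch collapses into single per-event factors: multiple data copies and formats, reprocessing passes, and the ratio of simulated to real events.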