Grids, Clouds and the Big (CHEP) Apple

Stephan Zimmer is a PhD student in the OKC. This is his report from the CHEP conference last week.

(Photo: Galvez illustrating our struggles when trying to connect to a video conference, and what we always think but never dare to say.)
Do you care about computing? Probably not; probably you are happy just knowing that all your stuff works. But what does “it works” actually mean? Let me try to give you a few reasons why you should care… and why it matters.

Last week I was at the CHEP conference (Computing in High Energy and Nuclear Physics), where the latest and greatest news of computing in high-energy and nuclear physics was discussed. CHEP is an international conference with about 500 scientists, computing experts and business professionals, reviewing the current set of Clouds, Grids and technologies for the upcoming challenges, first and foremost those posed by the LHC experiments. I believe most sessions were recorded, so have a look and see if anything catches your interest.

It was quite exciting for me, given that we (physicists) usually don’t attend this kind of conference.

During the plenary talks we were shown some of the greatest highlights of all the LHC experiments, along with the latest developments. The bottom line (but of course the ATLAS folks at the OKC know that already) is that by the end of this year, we’ll probably have either killed or confirmed the intriguing hint of a signal, seen in both CMS and ATLAS at roughly 125 GeV.

All of us who work in HEP know the pain of C++ and ROOT and all the other goodies we (have to) use in the community. Those of you who DON’T use ROOT, please skip this paragraph.
Fons Rademakers, the new Mr. ROOT after Rene Brun (you all know him! By the way: he gave a nice talk covering computing in HEP since the 1970s and was honoured with standing ovations during the closing session of CHEP), promised us ROOT v6 by November. From my little experience of C++, this will bring quite some interesting changes, among others a giant boost in performance (and lots more support for iOS on various levels). Axel Naumann from CERN detailed Cling and Clang in a blog post on the ROOT website (http://tinyurl.com/75wsfm9).
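
If you have never looked under the hood: a ROOT macro is plain C++ handed to ROOT’s interpreter, and the headline change in v6 is that this interpreter becomes Cling (built on Clang/LLVM) instead of the old CINT. A minimal sketch, assuming a standard ROOT installation (save as hist.C and run with `root hist.C`):

```cpp
// hist.C -- a minimal ROOT macro; under ROOT 6 this C++ is compiled
// just-in-time by Cling (Clang/LLVM) instead of being parsed by CINT.
#include "TH1F.h"

void hist() {
    TH1F h("h", "Gaussian toy;x;entries", 100, -5, 5);  // 100 bins on [-5, 5]
    h.FillRandom("gaus", 10000);  // fill with 10k draws from a standard Gaussian
    h.DrawCopy();                 // draw a copy so the plot outlives the macro
}
```

The macro itself should run unchanged; what changes underneath in v6 is how the C++ gets parsed and executed.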

Perhaps a little closer to the tasks actually performed at the OKC was a status report on BAT (the Bayesian Analysis Toolkit), whose authors claimed that through a smart combination of massively parallelized MCMC with importance sampling, they can challenge our dear MultiNest friends from Cambridge.
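
I can’t speak for BAT’s actual implementation, but to make the two ingredients concrete, here is a self-contained toy of my own: a Metropolis chain sampling a posterior, plus an importance-sampling estimate of the evidence. Everything in it (the Gaussian likelihood, the flat prior on [-5, 5]) is an assumption for illustration, NOT BAT’s algorithm:

```cpp
// mcmc_is.cpp -- toy sketch: Metropolis MCMC for posterior samples, and
// importance sampling for the evidence Z = integral L(x) p(x) dx.
// Toy model: likelihood L(x) = N(x; 1, 0.5), flat prior p on [-5, 5].
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

double log_like(double x) {
    const double mu = 1.0, sigma = 0.5;
    return -0.5 * std::pow((x - mu) / sigma, 2)
           - std::log(sigma * std::sqrt(2.0 * M_PI));
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> step(0.0, 0.5);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    // (1) One Metropolis chain sampling the posterior. The "massively
    // parallelized" part would run many such chains at once and merge them.
    std::vector<double> chain;
    double x = 0.0, lx = log_like(x);
    for (int i = 0; i < 20000; ++i) {
        double y = x + step(rng);
        if (y >= -5.0 && y <= 5.0) {               // stay inside the prior support
            double ly = log_like(y);
            if (std::log(unif(rng)) < ly - lx) { x = y; lx = ly; }  // flat prior cancels
        }
        chain.push_back(x);
    }
    double mean = 0.0;                              // posterior mean, second half only
    for (std::size_t i = chain.size() / 2; i < chain.size(); ++i) mean += chain[i];
    mean /= chain.size() / 2.0;

    // (2) Importance sampling of the evidence with the prior as proposal:
    // drawing x_i ~ p gives Z ~= (1/N) * sum_i L(x_i).
    std::uniform_real_distribution<double> prior(-5.0, 5.0);
    const int N = 100000;
    double z = 0.0;
    for (int i = 0; i < N; ++i) z += std::exp(log_like(prior(rng)));
    z /= N;

    std::printf("posterior mean ~ %.3f (true 1.0), evidence ~ %.4f (true 0.1)\n",
                mean, z);
}
```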

Oxana Smirnova from Lund reviewed what has happened to Grid operations and predicted a dark future if we do not adapt to more general needs and clearly make an effort to standardize Grid software and technologies – the keyword here is STANDARDIZATION! In Stockholm (and in Sweden in general) we use the NorduGrid ARC middleware, which grew out of a Nordic Grid initiative and works well – within ARC. But if you use a Grid service that runs at CERN, for instance, you have to learn a different set of tools and all the specifics of that middleware.

Compatibility and Grid interoperability are prime targets for the DIRAC system, which adds another abstraction layer that translates whatever request you may have for any kind of Grid worker into the site-specific Grid middleware. Since its development in LHCb, it has matured, and I had a number of very fruitful discussions on how to implement this system in Fermi’s existing data-processing infrastructure.
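
To give you a flavour of what such an abstraction layer does, here is a minimal sketch in the adapter style. The interfaces and class names are hypothetical, not DIRAC’s actual API; the point is only that the user makes one generic call and the layer translates it into the site’s middleware:

```cpp
// grid_adapter.cpp -- hedged sketch of a middleware abstraction layer:
// one generic job-submission call, translated per site (hypothetical API).
#include <iostream>
#include <memory>
#include <string>

struct Job { std::string executable; std::string site; };

// Generic backend interface that every middleware adapter implements.
class MiddlewareBackend {
public:
    virtual ~MiddlewareBackend() = default;
    virtual void submit(const Job& job) = 0;
};

class ArcBackend : public MiddlewareBackend {
public:
    void submit(const Job& job) override {
        std::cout << "ARC: arcsub-style submission of " << job.executable << "\n";
    }
};

class GliteBackend : public MiddlewareBackend {
public:
    void submit(const Job& job) override {
        std::cout << "gLite: WMS-style submission of " << job.executable << "\n";
    }
};

// The abstraction layer: users talk to this, never to the middleware directly.
std::unique_ptr<MiddlewareBackend> backend_for(const std::string& site) {
    if (site == "NorduGrid") return std::make_unique<ArcBackend>();
    return std::make_unique<GliteBackend>();  // e.g. a CERN site
}

int main() {
    Job job{"run_analysis.sh", "NorduGrid"};
    backend_for(job.site)->submit(job);  // same call, site-specific translation
    job.site = "CERN";
    backend_for(job.site)->submit(job);
}
```

The design win is that supporting a new middleware means writing one new backend class, while every user-facing script stays untouched.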

In general it was quite interesting to see a very technical conference from the perspective of the end user – and how our paradigms and needs don’t necessarily overlap. It is this friction between elegance and usability that has probably kept you from using resources on the Grid. But perhaps we should change that (provided you DO have computing-intensive applications). The other big elephant in the room is Cloud services – and how Amazon’s EC2 may help disentangle the Higgs mysteries.

Speaking of the cloud: while the corporate world is already well into the cloud, we apparently seem unable to follow this trend. Andreas Peters from the CERN IT department illustrated this by reviewing various large-scale data storage solutions – most of you may know xrootd, dCache or AFS – but this technology hasn’t really kept pace with S3, Swift or even Facebook’s Haystack.

Did you know that Facebook (despite all the criticism) faces challenges similar to the HEP community’s? This year they have amassed 30 PB of data, and they need to analyze it. They do this by moving the analysis to the data rather than the data to the user (the analyst). If I could identify one trend, then it is exactly that: move your analysis to the data rather than pulling the data to you.
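
In code, the trend looks roughly like this: ship a small analysis function to each node that holds a slice of the data, and let only the tiny partial results cross the network. The setup below is a toy of mine, not any real framework:

```cpp
// move_code_to_data.cpp -- toy sketch of "move the analysis to the data":
// each node reduces its local partition; only small results travel.
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

struct Node {
    std::vector<double> local_data;  // lives on the node; too big to ship around
    // The analysis (a small function) is shipped TO the node instead:
    double run(const std::function<double(const std::vector<double>&)>& analysis) const {
        return analysis(local_data);
    }
};

int main() {
    std::vector<Node> cluster = {
        {{1.0, 2.0, 3.0}}, {{4.0, 5.0}}, {{6.0, 7.0, 8.0, 9.0}}
    };
    // "Map": send a local-sum analysis to every node.
    auto local_sum = [](const std::vector<double>& d) {
        return std::accumulate(d.begin(), d.end(), 0.0);
    };
    // "Reduce": combine the tiny per-node results centrally.
    double total = 0.0;
    for (const auto& node : cluster) total += node.run(local_sum);
    std::cout << "grand total = " << total << "\n";  // prints 45
}
```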

One lesson learned from 10 years of Grid and the emerging Cloud is that the corporate world should not be ignored by default; we should aim at utilizing existing solutions rather than spending time (and money) developing our own little HEP toys.

A little jewel of the conference was Galvez’s talk on video conferencing in HEP. I really liked that the developers of EVO (we all use it, and many hate it) appreciate our frustration. Did you know that the HEP community was once again a pioneer in video conferencing, long before Skype (2003) or other commercial products saw the light of day? He pointed out that there will probably never be a company addressing our needs: Skype and Google only support up to 10 users in a group call. EVO provides about 60 million minutes of video conferencing (including audio) per year, saving about 100 USD per user per year. Oh, and since EVO is going to lose its funding by the end of 2012, it is going commercial. To that end, Evogh Inc. was established. Luckily nothing is really going to change, except that hopefully we’ll get more stable clients and mobile device support.

Well, aside from the really packed program, we were hosted at a prime location, New York City – and the organizers succeeded in providing a feeling of home for about 500 participants from physics, computer science and engineering, but also from the corporate world, such as a specialist from DELL detailing the challenges of moving to exascale computing.
