
January 9, 2012

ISC Issues Call to Action for Computer, Data Scientists

Datanami Staff

Before we know it, the first month of 2012 will wind down, which means that those keeping an eye on data-intensive computing, especially as it relates to HPC, have only a couple of weeks left to become part of the International Supercomputing Conference (ISC) in Hamburg.

Since the line between “traditional” high performance computing and big data problems is blurring, HPC-geared conferences could attract a new, more diverse set of attendees.

Those who are new to the world of high performance computing but have heavy data demands might find a lot in store this year in Germany, especially given the emphasis on cutting-edge research targeted at new algorithms, tools and methods to make big data calculations and visualization more powerful and efficient.

The upcoming conference, which, like SC here stateside, attracts supercomputing professionals from around the world, will be held June 17-21 in Hamburg, Germany. The organizers are seeking to round out the program with a call to submit full papers and tutorial proposals before January 22.

While full details are available at the ISC ’12 website, there are a few reasons to look to the event if you are working on solutions to data-intensive problems in both research and enterprise contexts. For one thing, a quick look at the lineup for the coming year shows that the emphasis on big data we felt at Supercomputing this past year (SC11) will carry over to Germany.

The organizers are seeking presentations and papers that address the following topics:

Architectures (multicore/manycore systems, heterogeneous systems, network technology and programming models).

Algorithms and Analysis (scalability on future architectures, performance evaluation, tuning).

Large-Scale Simulations (coupled simulations, industrial simulations and data visualization).

Future Trends (exascale, HPC clouds, etc.).

Storage and Data (file systems, tape libraries, data intensive applications and databases).

Software Engineering in HPC (application of methods, surveys, etc.).

Supercomputing Facility Issues (schedulers, management, monitoring and the like).

Additionally, the team behind the ISC ’12 conference is putting out a call to those working on scalable applications. They’re hoping to uncover a research group capable of demonstrating the ability to scale an application to more than 50,000 cores.

On that note, two Gauss Awards will be handed out recognizing research papers on scalable supercomputing. Another, the PRACE Award, will go to the best scientific paper by European students or scientists exploring innovative approaches to resource allocation, algorithms aimed at improving scalability, or new ways to evaluate the performance of massively parallel architectures.

Please read the full call for papers for more specific information about submission criteria and guidelines, or to register for the event, which the organizers expect will draw close to 2,400 attendees.

Related Stories

Live from SC11: HPC and Big Data Get Hitched in Seattle

Live from SC11: Data Intensive System Showdown

Bar Set for Data-Intensive Supercomputing

Interview: Cray CEO Sees Big Future in Big Data
