09 Jul 2014

Installation of MIKE by DHI on Moffatt & Nichol’s High Performance Computing system

Moffatt & Nichol – one of the world’s leading specialised maritime planning and engineering firms – installed our MIKE by DHI software on their High Performance Computing (HPC) system.

Over the last two decades, HPC has become one of the fastest growing markets in the information technology sector, and the demand for high-resolution, complex and large-scale numerical modelling studies continues to increase. To help our clients run such models, we use the latest technologies in our MIKE by DHI software to improve the performance of the most CPU-intensive simulations.

Our software for coastal and marine environments, MIKE 21 and MIKE 3 with Flexible Mesh, was developed for parallel computation using Message Passing Interface (MPI) technology. This enables the software to exploit parallel computers, making it well suited to HPC systems. Our tests show that on HPC systems with a large number of cores, MIKE Flexible Mesh models can run substantially faster.
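Why core count matters so much for these models can be sketched with Amdahl's law, which bounds the speedup of a partly parallel job by its remaining serial fraction. The snippet below is a generic illustration; the parallel fractions used are assumed values for the sake of example, not measured MIKE by DHI figures.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup of a job whose runtime is `parallel_fraction`
    parallelisable when it is spread across `cores` cores (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Illustrative parallel fractions only -- not measured MIKE by DHI figures.
# 272 cores matches the Moffatt & Nichol cluster described below.
for f in (0.95, 0.99, 0.999):
    print(f"parallel fraction {f}: speedup on 272 cores = "
          f"{amdahl_speedup(f, 272):.1f}x")
```

Even a small serial fraction caps the achievable speedup, which is why MPI codes that parallelise nearly all of the work scale so much better on large clusters.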

Moffatt & Nichol carry out computationally demanding simulations as part of their core business, and to support this work they recently purchased a Linux-based HPC cluster. With support from us, Oleg Mouraenko, Ph.D. (Senior Coastal Scientist at Moffatt & Nichol), set up MIKE by DHI software on the cluster. The first benchmark test, using a MIKE 3 Flexible Mesh model from a previous project, showed the run time reduced from one month to less than one day. Moffatt & Nichol have already run a number of complex high-resolution models that are only feasible to simulate on an HPC system.

‘We are very pleased with the performance of the DHI software on our HPC cluster. DHI made it very easy for us to implement the new system by providing guidance and support. The decision to run MIKE Flexible Mesh models on the cluster greatly enhances our capabilities and provides new business opportunities’, said Oleg.

Moffatt & Nichol HPC system configuration

  • 272-core HPC system from Advanced Clustering Technologies, Inc.
  • 17 compute nodes, 1 head node
  • Dual Intel Xeon E5-2667 v2 (8 cores, 3.3 GHz, 25 MB cache) and 64 GB (8 × 8 GB) 1866 MHz RAM per node
  • CentOS 6.4 operating system
  • 18-port QDR InfiniBand switch
  • 50TB RAID6 storage