Computational Science

 

Over the last decade, computational science (also known as scientific computing) has taken on increased significance in all areas of science and engineering, and beyond. Computation has grown into an effective and efficient approach that complements and bridges the traditional theoretical and experimental approaches. The computational science group contributes to four major stages: (i) identification of research problems to be addressed computationally, (ii) development and application of the underlying physical/mathematical models, (iii) implementation and performance benchmarking of computer programs, and (iv) execution and analysis of simulations. We develop and apply a wide range of simulation and analysis tools and techniques to tackle challenging problems that require interdisciplinary and collaborative efforts. We work closely with the Center for Computation & Technology (CCT) on these activities, with some of us holding joint appointments there. The current research activities and future plans cover several domains: cloud computing, coastal and environmental modeling, computational biology, computational materials, computational humanities, and data-intensive computing.

 

Faculty


Gerald Baumgartner: Computational Chemistry, Computational Humanities

Jianhua Chen: Computational Humanities

Bijaya Karki: Computational Materials, Large-Scale Simulations

Robert Kooima: Computational Image Processing

Seung-Jong Park: Computational Biology, Data-Intensive Computing

Brygg Ullmer: Computational Biology

R. Clint Whaley: High Performance Computing, Computational Libraries, Optimization, Parallel Computing

Gabrielle Allen (adjunct): Computational Fluid Dynamics, Numerical Relativity

Steven Brandt (adjunct): High Performance Computing 

Hartmut Kaiser (adjunct): Future Systems, Exascale Computing, Parallelism and Concurrency in C++


 

Specific Projects


Computational biology. In collaboration with biological scientists, Ullmer develops and applies HPC tools, workflows, user interfaces, and visualization for comparatively analyzing the "mobile element" composition of primate genomes (mobile elements typically make up ~50% of primate DNA). Supported by NSF-MRI and NIH.

Social media data mining. J. Chen and Baumgartner design and apply new text mining algorithms and social network analysis methods to build a smart decision-support tool for the Louisiana Poverty Initiative (LPI). The tool will analyze social media information (Facebook, Twitter, web pages, etc.), identify emerging trends and important issues, and help find key collaborators for LPI agencies.

Tensor contraction engine. Baumgartner works on a domain-specific language for tensor contraction (generalized matrix multiplication) equations and an optimizing compiler that generates high-performance code from them. The goal of the TCE is to optimize and generate code for any target architecture, from multi-core desktop machines to supercomputers with clusters of multi-CPU and multi-GPU nodes. Supported by the NSF.
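
As a rough illustration (not the TCE's actual input syntax or generated output), the C++ sketch below spells out one hypothetical contraction, C(i,j) = sum over k,l of A(i,k,l)*B(l,k,j), as the naive loop nest that such a compiler would restructure, fuse, and tile for the target architecture; the dimensions and flat row-major storage are illustrative assumptions.

    // Naive realization of one tensor contraction ("generalized matrix
    // multiplication"): C(i,j) = sum_{k,l} A(i,k,l) * B(l,k,j).
    // Dimensions and flat row-major storage are illustrative assumptions.
    #include <vector>

    constexpr int NI = 16, NJ = 16, NK = 16, NL = 16;

    void contract(const std::vector<double>& A,   // NI x NK x NL
                  const std::vector<double>& B,   // NL x NK x NJ
                  std::vector<double>& C)         // NI x NJ
    {
        for (int i = 0; i < NI; ++i)
            for (int j = 0; j < NJ; ++j) {
                double sum = 0.0;
                for (int k = 0; k < NK; ++k)
                    for (int l = 0; l < NL; ++l)
                        sum += A[(i*NK + k)*NL + l] * B[(l*NK + k)*NJ + j];
                C[i*NJ + j] = sum;
            }
    }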

Geomaterials simulations. Karki adopts a meta-computing framework (involving parallel programs, workflows, multiple-job handling, and visualization) for materials research. In particular, his group investigates the behavior and properties of Earth-forming materials under the extreme conditions of the deep interior using computationally intensive first-principles (quantum mechanical) simulations. Supported by NSF-EAR.

Computation for image processing. Kooima applies heterogeneous computing technology to large-scale problems in image processing, in support of interactive visualization. Supported by NASA and BoR.

HPX. Hartmut Kaiser and the STE||AR Group at LSU (http://stellar.cct.lsu.edu/) are leading the development of HPX, a C++ runtime system based on a new execution model for fine-grained, asynchronous task parallelism. The group drives the maturation of the runtime system with applications from several research domains, including astrophysics, coastal modeling, and big data.
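
For flavor, the sketch below uses ordinary standard C++ futures, not HPX itself, to show the asynchronous, task-based style this execution model builds on; HPX offers a closely related API (futures, async, parallel algorithms) that additionally supports lightweight user-level threads and distributed execution across cluster nodes. The example computation and its decomposition are illustrative assumptions.

    // Futures-based task parallelism in plain standard C++ (illustrative only;
    // HPX provides a related but extended interface).
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    double partial_sum(const std::vector<double>& v, std::size_t lo, std::size_t hi)
    {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
    }

    int main()
    {
        std::vector<double> data(1000000, 1.0);

        // Launch two independent tasks; the runtime may run them concurrently.
        auto f1 = std::async(std::launch::async, partial_sum,
                             std::cref(data), std::size_t(0), data.size() / 2);
        auto f2 = std::async(std::launch::async, partial_sum,
                             std::cref(data), data.size() / 2, data.size());

        // Results are composed once both futures become ready.
        std::cout << "sum = " << f1.get() + f2.get() << "\n";
        return 0;
    }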

Computational libraries. Whaley supports the ATLAS (Automatically Tuned Linear Algebra Software) research framework, which provides one of the most widely used computational libraries. This includes work on applied computer architecture for performance tuning, auto-tuning for high performance, and parallel algorithms and their tuning. Supported by NSF.
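
As a minimal usage sketch, assuming an ATLAS installation that exposes the standard CBLAS interface (header location and link flags, e.g. -lcblas -latlas, vary by version and platform), the C++ program below calls the auto-tuned double-precision matrix multiply through that interface.

    // C = alpha*A*B + beta*C via the CBLAS interface that ATLAS implements.
    // Matrix contents are made up for illustration.
    #include <cblas.h>
    #include <cstdio>

    int main()
    {
        const int n = 2;
        double A[] = {1.0, 2.0,
                      3.0, 4.0};              // 2x2, row-major
        double B[] = {5.0, 6.0,
                      7.0, 8.0};
        double C[] = {0.0, 0.0,
                      0.0, 0.0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,                  // M, N, K
                    1.0, A, n,                // alpha, A, lda
                         B, n,                //        B, ldb
                    0.0, C, n);               // beta,  C, ldc

        std::printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;
    }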