SIParCS 2017 Projects

Projects for summer 2017

* U,G denotes availability for undergraduate and/or graduate applicants

  1. Accelerating Statistical Analysis through Parallel Computations *U,G
  2. C to Fortran Translation *U
  3. CAPSTONE: A Cloud-based Data Analysis as a Service Platform for Scientific Discovery *U,G
  4. Highly Concurrent Fundamental Visualization Algorithms and Emerging Processor Architectures *U,G
  5. Implementation of a Discontinuous Galerkin 3D Euler Solver on many-core CPUs and GPUs *G
  6. Improving HPC Scheduling through Machine Learning and Statistics *U
  7. Improving Single Threaded Performance of NCL Using OpenMP 4.0 SIMD Directives *U,G
  8. PySpark for "Big" Atmospheric & Oceanic Data Analysis *U
  9. Remapping and Conservation *G
  10. Source-to-source Fortran Modernization *U
  11. Supercomputer InfiniBand Fabric Analysis *U,G

  1. Accelerating Statistical Analysis Through Parallel Computations *U,G
    Areas of interest: Application Optimization/Parallelization, Data Science, Numerical Methods

    Recently we have seen an emergence of modern statistical methods that are specifically designed to take advantage of high performance computing (HPC) infrastructure. Such methods are especially relevant for the statistical analysis of large spatial data, such as climate model output or satellite data. While these methods should scale well in theory, implementing them in practice on many cores and across nodes still raises many open questions at the forefront of current application research. This project will address some of these questions by investigating optimal HPC implementation strategies for spatial and spatio-temporal statistical methods applied to very large data sets. Specific project details will be adapted to the student’s interest and skill level.

    The details of the implementation are flexible depending on the candidate’s skills and interests, but will likely involve some of the following: writing Matlab, R, or C code; using related scientific packages; and applying the MPI and OpenMP parallel programming paradigms. The implementation will take place on NCAR’s new supercomputing environment, Cheyenne, using large, scientifically meaningful data sets from climate models and satellites.
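
    The exact tools are flexible, but the computational pattern is often the same: split the spatial field into blocks, compute local statistics in parallel, and combine them with a reduction. Below is a minimal sketch of that pattern using Python’s mpi4py, chosen here only for brevity (the project itself targets Matlab, R, or C with MPI/OpenMP), with synthetic data standing in for real model output.

      # Hypothetical sketch: block-wise parallel statistics with MPI.
      # The real project may use Matlab, R, or C; mpi4py is used here
      # only to illustrate the split/compute/reduce pattern.
      # Run with e.g.: mpiexec -n 4 python block_stats.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Stand-in for one rank's block of a large spatial field; in
      # practice each rank would read its own slice of the data.
      rng = np.random.default_rng(seed=rank)
      local_block = rng.normal(loc=280.0, scale=5.0, size=(1000, 1000))

      # Each rank computes partial sums, then a single reduction
      # yields the global mean and variance.
      totals = np.array([local_block.size, local_block.sum(),
                         (local_block ** 2).sum()])
      global_totals = np.empty(3)
      comm.Allreduce(totals, global_totals, op=MPI.SUM)

      n, s, s2 = global_totals
      mean, var = s / n, s2 / n - (s / n) ** 2
      if rank == 0:
          print(f"global mean={mean:.3f}, variance={var:.3f}")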

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications
    Students should be proficient in Matlab, R, or C and have a Linux/UNIX background. Students should be familiar with parallel architectures and MPI and have completed coursework in linear algebra and basic statistics. Advanced statistical knowledge, especially in spatial statistics, is desired but not required.



  2. C to Fortran Translation *U
    Areas of interest: Software Engineering, Interlanguage Interoperability

    In 2016, a SIParCS student created a tool, h2m, that can automatically translate C header files into Fortran modules. This helps automate standard interoperability between Fortran programs and the C libraries that support them. The technical aspects of the translation have been successful beyond our original expectations, but the tool, as it stands, needs further development before it can be made available to end users.
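
    To make the idea concrete, here is a deliberately tiny Python sketch of the kind of mapping h2m automates. It translates one simple C prototype into a Fortran interface block using the standard ISO_C_BINDING kinds; the type table and pattern matching are hypothetical simplifications, and the real tool is far more general and is not implemented this way.

      # Toy illustration only: h2m handles full C headers and is far
      # more general; this maps one simple C prototype to a Fortran
      # interface using the standard ISO_C_BINDING kinds.
      import re

      # Deliberately tiny, hypothetical C-to-Fortran type table.
      C_TO_FORTRAN = {
          "double": "real(c_double)",
          "float":  "real(c_float)",
          "int":    "integer(c_int)",
      }

      def translate(prototype):
          """Translate e.g. 'double hypot(double x, double y);'."""
          ret, name, args = re.match(
              r"(\w+)\s+(\w+)\s*\(([^)]*)\)\s*;", prototype).groups()
          params = [a.split() for a in args.split(",")]
          decls = [f"      {C_TO_FORTRAN[t]}, value :: {v}"
                   for t, v in params]
          return "\n".join(
              ["interface",
               f"   function {name}({', '.join(v for _, v in params)}) "
               f"bind(c, name=\"{name}\") result(r)",
               "      use iso_c_binding"]
              + decls
              + [f"      {C_TO_FORTRAN[ret]} :: r",
                 "   end function", "end interface"])

      print(translate("double hypot(double x, double y);"))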

    To become widely used, h2m should be easy to build (via standard tools like tar; configure; make; make install) and easy to use. In addition, command-line arguments would allow h2m to access headers in nonstandard locations and perform slightly different translations as needed.

    Thus, the tasks of this internship include:
    a) Identifying potential users for h2m and gathering their requirements for additional functionality and command-line arguments.
    b) Simplifying and improving the build process to make h2m more accessible and more widely useful.
    c) Modifying h2m to meet the above objectives.

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications
    Students should possess knowledge of Fortran and C/C++, have a basic understanding of compilers, and have the skill to interact and work with potential end users.



  3. CAPSTONE: A Cloud-based Data Analysis as a Service Platform for Scientific Discovery *U,G
    Areas of interest: Cloud-based Infrastructure, Software Engineering, Supercomputer Systems Operations, Scientific Gateways, Visualization

    This project is a continuation of a successful SIParCS project from last summer that built a Capstone prototype. Capstone is an NCAR Science Gateway project to develop a cloud-based, “scientific data analysis as a service” platform. Capstone envisions an end-to-end system providing user access to a toolkit of reusable components (web-enabled micro-services) for building and orchestrating workflows focused on atmospheric data preparation and analysis.
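
    The technology stack is for the team to decide; purely as an illustration of what a “web-enabled micro-service” looks like, the hypothetical Python sketch below exposes a small data-summary step as a JSON endpoint using Flask, one of many possible frameworks.

      # Hypothetical illustration of a web-enabled micro-service; the
      # actual Capstone stack and API are chosen by the team.
      import numpy as np
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/mean", methods=["POST"])
      def mean():
          # Expects JSON like {"values": [1.0, 2.0, 3.0]} and returns a
          # summary, standing in for a real data-preparation step.
          values = np.asarray(request.get_json()["values"], dtype=float)
          return jsonify(mean=float(values.mean()), count=int(values.size))

      if __name__ == "__main__":
          app.run(port=5000)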

    The team of graduate and undergraduate students will conduct research and development on:
    a) the web service, user interface, and database technologies Capstone requires
    b) scalable cloud-based storage solutions
    c) optimization of the system's performance, cost, and resilience
    d) Capstone user interfaces and usability
    e) data integrity
    f) system security.

    The student interns will work as an Agile Scrum development team in close coordination with NCAR engineers, end users, and scientific stakeholders to implement new capabilities and extend existing ones in the end-to-end project operating on a cloud platform. The students will work as a self-organizing, cross-functional team in a highly collaborative environment.

    Graduate student team members may provide mentoring to more junior team members and engage in Agile Scrum roles such as Scrum Master.

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications
    Students should be familiar with the principles of web application and web service development, cloud services, database design, and UNIX operating system commands; have basic knowledge of file transfer protocols and web service data representations (for example, JSON); and possess basic programming skills in Python, Java, JavaScript, or C.



  4. Highly Concurrent Fundamental Visualization Algorithms and Emerging Processor Architectures *U,G
    Areas of interest: Application Optimization/Parallelization, Software Engineering, Visualization 

    One of the most disruptive changes in high performance computing today is the shift toward “many-core” architectures. These devices increase aggregate performance by provisioning large numbers of slower processing elements. Using them efficiently often requires restructuring software and algorithms to expose sufficient data parallelism, and selecting an appropriate parallel programming interface. Current parallel programming options include programming libraries, such as pthreads; device-specific languages, such as NVIDIA’s CUDA; and language directives, such as OpenMP, to name a few. In the field of scientific visualization, current implementations of many fundamental algorithms are unable to harness the power of many-core architectures because they lack the needed concurrency. Efforts are now underway at NCAR to rethink and redesign scientific software with these architectures in mind. To ease the programming burden and ensure cross-device portability, device-agnostic toolkits such as VTK-m (m.vtk.org) are of particular interest.
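
    As a small illustration of what exposing data parallelism means (in Python/NumPy for brevity; the project itself targets C++ toolkits such as VTK-m and Thrust), the sketch below recasts an element-at-a-time loop as a single data-parallel map over all grid points, the pattern that worklet- and transform-based toolkits are built around.

      # Illustration of exposing data parallelism, in Python/NumPy; the
      # project itself targets C++ toolkits such as VTK-m or Thrust.
      import numpy as np

      field = np.random.rand(512, 512)   # stand-in for a scalar field
      iso = 0.5

      # Serial formulation: one cell per iteration, little concurrency
      # for a many-core device to exploit.
      def classify_serial(f, iso):
          out = np.empty(f.shape, dtype=bool)
          for i in range(f.shape[0]):
              for j in range(f.shape[1]):
                  out[i, j] = f[i, j] >= iso
          return out

      # Data-parallel formulation: one elementwise map that can run
      # across all grid points concurrently.
      def classify_parallel(f, iso):
          return f >= iso

      assert np.array_equal(classify_serial(field, iso),
                            classify_parallel(field, iso))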

    This internship offers the student the opportunity to gain experience with highly concurrent architectures and their programming interfaces. The goal of this internship is to further our understanding of the programming, performance, and constraints of high-concurrency devices in the context of scientific visualization. Students may implement and/or evaluate one or more fundamental visualization algorithms using high-level toolkits such as VTK-m, or device-portable programming libraries such as Thrust (thrust.github.io).

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications: Undergraduate or graduate students should be enrolled in a computer science, physical science, or math curriculum. Experience with C/C++ programming, parallel programming, high-performance computing, and scientific visualization is helpful but not required.



  5. Implementation of a Discontinuous Galerkin 3D Euler Solver on many-core CPUs and GPUs *G
    Areas of interest: Application Optimization/Parallelization, Numerical Methods

    Non-hydrostatic (NH) models based on the compressible Euler system of equations are used for multi-scale atmospheric modeling. The discontinuous Galerkin (DG) method is becoming increasingly popular for NH modeling due to its high-order accuracy, geometric flexibility, and excellent parallel efficiency. A prototype NH model based on the DG method (the DG-NH model) has been developed in 3D Cartesian geometry for research purposes. This Euler solver is written in Fortran 90/95 and is MPI-parallel with a 2D decomposition in the x-y directions. The 3D model employs fully 3D DG elements or the conventional dimension-split approach using 2D+1D elements, with various time-stepping options. The DG-NH model is being used as a framework for testing new algorithms for spatial and temporal discretization.

    The goal of this 2017 summer internship will be to extend the 2D MPI-parallel implementation to NVIDIA GPUs, focusing on achieving good performance on them.

    Beginning with a single-threaded DG implementation of the Euler equations developed on CPUs, the three key software design issues to tackle are as follows:
    a) Benchmarking and optimizing the code’s single-GPU performance through application profiling.
    b) Understanding the techniques and prospects for achieving performance portability between CPUs and GPUs in a single code base.
    c) Developing an optimal GPU implementation of the Euler solver by creating a multi-card/multi-node distributed-memory implementation that minimizes communication overhead, using techniques such as overlapping communication and computation (see the sketch below).
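
    As a sketch of the overlap technique mentioned in item (c), the hypothetical example below posts nonblocking halo exchanges, updates interior points while the messages are in flight, and only then finishes the boundary. It uses Python’s mpi4py on a 1D slab for brevity; the actual solver is Fortran, and a GPU version would use the analogous asynchronous mechanisms.

      # Hypothetical sketch of overlapping halo exchange with interior
      # computation; the actual solver is Fortran (and CUDA on GPUs).
      # Run with e.g.: mpiexec -n 2 python overlap.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size

      u = np.full(10_000, float(rank))      # this rank's 1D slab
      halo_l, halo_r = np.empty(1), np.empty(1)

      # 1. Post nonblocking sends/receives of the boundary values.
      reqs = [comm.Isend(u[:1], dest=left),
              comm.Isend(u[-1:], dest=right),
              comm.Irecv(halo_l, source=left),
              comm.Irecv(halo_r, source=right)]

      # 2. Update interior points while halo messages are in flight.
      interior = 0.5 * (u[:-2] + u[2:])

      # 3. Wait for the halos, then finish the two boundary points.
      MPI.Request.Waitall(reqs)
      u_new = np.empty_like(u)
      u_new[1:-1] = interior
      u_new[0] = 0.5 * (halo_l[0] + u[1])
      u_new[-1] = 0.5 * (u[-2] + halo_r[0])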

    The student intern will explore and implement the different strategies involved, gather profiling measurements, and select an optimal strategy for testing scalability. The student will present results, along with their analysis, at the conclusion of the project.

    Students - This project is open to graduate students only.

    Skills and Qualifications: Students must have strong programming skills in C/C++ or Fortran 90/95. Students should be familiar with at least one parallel programming paradigm, such as the Message Passing Interface (MPI), thread parallelization using pragmas, or SIMD programming (e.g., with CUDA or OpenCL). Familiarity with numerical techniques for solving partial differential equations is desirable.

     



  6. Improving HPC Scheduling through Machine Learning and Statistics *U
    Areas of interest: Data Science, Software Engineering, Supercomputer Systems Operations

    As supercomputers grow larger and more complex it is increasingly rare for a single user to utilize an entire system. As a result, many supercomputing centers, including NCAR, see a workload consisting of jobs of a variety of sizes and wall-clock time requests. To manage this, users are required to submit their jobs, along with a resource request, to a batch scheduler, which then tries to pack the jobs in the most efficient way possible.  The scheduler attempts to maximize the percentage of the overall resource that is utilized and minimize the time any particular job spends queuing.

    In this project, the student(s) will apply machine learning and/or statistical techniques to historical job data produced on the NCAR supercomputer systems in order to investigate methods of improving the efficiency of our job scheduling.  Once an approach has been devised, the student(s) will simulate the scheduling of historical jobs and compare their algorithm’s performance to that of our production schedulers. The end goals are to minimize the total wall-clock time that would have been required to execute all the jobs in the test dataset and reduce the average queue wait time.
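
    One possible angle, shown as a hedged Python sketch below, is to train a regressor that predicts a job’s actual runtime from its submitted request: schedulers typically backfill against the (usually over-estimated) requested walltime, so better predictions leave less of the machine idle. The features, synthetic data, and model choice are illustrative assumptions, not a prescribed method.

      # Hypothetical sketch: predicting actual runtime from the
      # submitted request, one of many possible ML angles on the
      # historical accounting data.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5000

      # Stand-in for historical job records: requested walltime (s),
      # node count, submit hour. Real features would be parsed from
      # the site's SLURM/PBS accounting logs.
      X = np.column_stack([rng.uniform(600, 43200, n),
                           rng.integers(1, 128, n),
                           rng.integers(0, 24, n)])
      # Users commonly over-request, so synthesize shorter "actual"
      # runtimes for this toy dataset.
      y = X[:, 0] * rng.uniform(0.2, 0.9, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

      # A scheduler could backfill with predicted runtimes instead of
      # the requested walltime.
      print("R^2 on held-out jobs:", model.score(X_te, y_te))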

    If time allows, the student(s) will produce a scheduler plug-in that implements their improvements and contribute it to one of the open-source schedulers used at NCAR (SLURM or PBS Professional).

    Students - This project is open to undergraduate students only. 

    Skills and Qualifications: Students with strong interests in machine learning and high-performance computing are encouraged to apply. Familiarity with the C or C++ programming languages is required. Familiarity with at least one UNIX-like operating system (Linux, BSD, etc.), one or more high-level dynamically typed programming languages (Python preferred), and at least one batch scheduler (SLURM or PBS Professional preferred) is desired.



  7. Improving Single Threaded Performance of NCL Using OpenMP 4.0 SIMD Directives *U,G
    Areas of interest: Application Optimization/Parallelization, Software Engineering

    Single instruction multiple data (SIMD) vector instructions are becoming increasingly important in obtaining good performance on modern microprocessors.  However, due to a lack of portability, programmers have primarily relied on the compiler to generate the SIMD vector instructions.  Unfortunately, compilers do not always do the best job of vectorizing the code they are provided. With the introduction of the OpenMP 4.0 SIMD directives, and improved compiler vectorization feedback, programmers now have control over SIMD vectorization in a portable way.    

    This project will focus on improving the vectorization of the NCAR Command Language (NCL),  a widely-used, interpreted language designed specifically for scientific data analysis and visualization.  

    The first task for the summer intern will be identifying NCL functions that are not currently taking advantage of SIMD vectorization. This task will involve using compiler vectorization reporting tools against known slow functions, performance profiling to identify new ones, and possibly some examination of assembly language output. Once the functions with poor vectorization are identified, the next step will be to apply the OpenMP 4.0 SIMD directives to the code and measure the performance improvement, verifying correctness across a suite of compilers as the directives are applied. In some cases, the algorithms, or the way they are expressed in code, may need modification in order to vectorize.
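
    The payoff being measured is the gap between element-at-a-time execution and vectorized execution. The Python sketch below is only an analogy for that measurement, comparing a scalar loop against a bulk array operation; the actual work applies #pragma omp simd (and its Fortran equivalent) to NCL’s compiled C/Fortran kernels and uses real profilers.

      # Analogy only: the project applies OpenMP 4.0 SIMD directives to
      # NCL's C/Fortran kernels. This just shows the kind of scalar-
      # versus-vectorized timing used to find candidate functions.
      import time
      import numpy as np

      x = np.random.rand(5_000_000)

      def axpy_scalar(a, x):           # one element per iteration
          out = np.empty_like(x)
          for i in range(x.size):
              out[i] = a * x[i] + 1.0
          return out

      def axpy_vectorized(a, x):       # SIMD-friendly bulk operation
          return a * x + 1.0

      t0 = time.perf_counter(); axpy_scalar(2.0, x)
      t1 = time.perf_counter(); axpy_vectorized(2.0, x)
      t2 = time.perf_counter()
      print(f"scalar: {t1 - t0:.2f}s  vectorized: {t2 - t1:.2f}s")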

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications: Students must have strong programming skills in C/C++ and/or Fortran 77/90 in a Linux/OS X/Unix environment, as well as experience with computer architecture, assembly language, or embedded systems. For testing, experience with profilers, debuggers, Python and f2py is desirable. 



  8. PySpark for "Big" Atmospheric & Oceanic Data Analysis *U
    Areas of interest: Data Science, Software Engineering, Supercomputer Systems Operations

    Spark is a cluster computing paradigm based on MapReduce that has garnered a great deal of interest for its power and ease of use in analyzing “big data” in the commercial and computer science sectors. In much of the scientific sector, however --- and specifically in the atmospheric and oceanic sciences --- Spark has not captured the interest of scientists for analyzing their data, even though their datasets may be larger than many commercial datasets and many scientific analysis workflows are very well suited for Spark. As a result of this lack of interest, there are very few platforms on which scientists can experiment with and learn about using Hadoop and/or Spark for their scientific research. Additionally, there are very few resources to teach and educate scientists on how or why to use Hadoop or Spark for their analysis.

    This project will give the student the opportunity to explore and learn about distributed parallel computing on NCAR’s Yellowstone and/or Cheyenne supercomputers. The student will study how to use Spark on Yellowstone and/or Cheyenne and how to apply it to a number of analyses that atmospheric and oceanic scientists commonly perform, such as temporal and zonal averaging, the computation of climatologies, and the pre-processing of CMIP data (regridding, calendar harmonization, and other “embarrassingly parallel” tasks).
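
    As a hedged sketch of what such an analysis looks like (the record layout and values are invented for illustration; real data would be read from NetCDF files), the PySpark example below computes a zonal mean with the classic map/reduce pattern.

      # Hypothetical sketch: a zonal mean with PySpark. The record
      # layout, one (latitude_band, value) pair per observation, is
      # invented; real records would be parsed from NetCDF files.
      from pyspark import SparkContext

      sc = SparkContext(appName="zonal-mean")

      records = sc.parallelize([(-60, 271.2), (-60, 270.8), (0, 299.5),
                                (0, 300.1), (60, 265.0), (60, 266.3)])

      # Classic MapReduce pattern: sum and count per latitude band,
      # then divide to get the mean.
      means = (records.mapValues(lambda v: (v, 1))
                      .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                      .mapValues(lambda p: p[0] / p[1]))

      print(dict(means.collect()))   # {-60: 271.0, 0: 299.8, 60: 265.65}
      sc.stop()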

    The student will summarize their accomplishments with:
    a) A "how to" document that scientists can use to help them learn how to use Spark for their own analysis
    b) A summary document for HPC administrators showing the pros and cons of using Spark for this kind of workflow

    Students - This project is open to undergraduate students only. 

    Skills and Qualifications:
    Students must be familiar with Linux or Unix, have experience with Python programming, have the ability and willingness to work with a team, and possess good communication and writing skills. Familiarity with parallel computing, Hadoop/MapReduce or Spark, and experience with NumPy is desirable. 



  9. Remapping and Conservation *G
    Areas of interest: Numerical Methods

    Conservation of total energy, mass, and angular momentum in global atmosphere models is highly desirable. For example, the spectral-element (SE) version of the Community Atmosphere Model (CAM), which is used for this project, makes use of a remapping algorithm between two vertical grids in its solver. Not all quantities that are conserved by the continuous equations of motion can be conserved in the numerical model, so choices must be made about which physical quantities to conserve. In this summer project, a student interested in applying numerical methods to real-world modeling problems will investigate the accuracy of different combinations of conserved quantities in the remapping algorithm.
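
    As a minimal illustration of the trade-off (using a first-order piecewise-constant scheme invented for this sketch; the model’s actual remapping is higher order), the Python example below remaps cell averages between two vertical grids using overlap weights. The column integral, i.e. mass, is conserved exactly by construction, while a nonlinear quantity such as energy is not, which is exactly why the choice of conserved quantities matters.

      # Toy illustration of conservative 1D vertical remapping via
      # cell-overlap weights; the model's actual scheme is higher order.
      import numpy as np

      def remap(src_edges, src_avg, dst_edges):
          """Piecewise-constant conservative remap of cell averages."""
          dst_avg = np.zeros(len(dst_edges) - 1)
          for j in range(len(dst_avg)):
              lo, hi = dst_edges[j], dst_edges[j + 1]
              for i in range(len(src_avg)):
                  # Length of overlap between source cell i and
                  # destination cell j.
                  w = max(0.0, min(hi, src_edges[i + 1])
                               - max(lo, src_edges[i]))
                  dst_avg[j] += src_avg[i] * w
              dst_avg[j] /= hi - lo
          return dst_avg

      src_edges = np.linspace(0.0, 1.0, 6)    # 5 source cells
      dst_edges = np.linspace(0.0, 1.0, 9)    # 8 destination cells
      src_avg = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
      dst_avg = remap(src_edges, src_avg, dst_edges)

      # The column integral ("mass") is conserved exactly, but a
      # nonlinear quantity such as the integral of the square
      # ("energy") generally is not.
      mass = lambda e, a: np.sum(a * np.diff(e))
      assert np.isclose(mass(src_edges, src_avg), mass(dst_edges, dst_avg))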

    Students - This project is open to graduate students only. 

    Skills and Qualifications: Students must have taken coursework in numerical methods for solving partial differential equations and in quantitative physical science relevant to numerical modeling, possess knowledge of a high-level computing language (Fortran preferred), and have a working knowledge of a Unix or Linux operating system.



  10. Source-to-source Fortran Modernization *U
    Areas of interest: Software Engineering 

    As language standards evolve, some older code may no longer be standards-compliant. Where portability is a concern, adherence to the standard becomes more important, yet the sheer magnitude of accumulated old code may inhibit efforts to keep sources up to date. Automated help can therefore reduce the scientific programmer's workload, and the goal of this internship is to develop a source-to-source translator able to provide that help. Working on a source-to-source translator is a great experience: the student will learn how to parse the existing syntax of the language and understand how that syntax relates to the semantics, and will then transform the same semantics back into a different syntax. Regardless of which subfield of computer science one specializes in, this understanding of how programming languages work will be very useful.

    As applied to Fortran, a well-known laundry list of older features, some once standard, some never standard, should be addressed. Briefly, the list includes: the "*n" notation in type declarations, Hollerith data, DO with CONTINUE, f66-style array(1) declarations, Livermore/Cray/VAX pointers, common blocks, equivalence, PAUSE,  BUFFER IN/BUFFER OUT/UNIT, ENCODE/DECODE, direct access record numbers following a quote.

    The degree of difficulty of repairing the above items varies widely. Some may be done as context-free substitutions, but others have scope-wide, or global, implications.
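
    As a toy example of the context-free end of that spectrum (the real tool relies on compiler-style parsing rather than regular expressions), the Python sketch below rewrites the nonstandard "*n" type notation as kind syntax.

      # Toy, purely context-free rewrite: REAL*8 / INTEGER*4 notation
      # becomes kind syntax. Assumes kind values equal byte sizes, as
      # with most compilers; CHARACTER*n is excluded since *n means
      # length there. Most features listed above need real parsing.
      import re

      STAR_DECL = re.compile(
          r"\b(REAL|INTEGER|LOGICAL|COMPLEX)\s*\*\s*(\d+)",
          flags=re.IGNORECASE)

      def modernize_star_notation(source):
          return STAR_DECL.sub(lambda m: f"{m.group(1)}({m.group(2)})",
                               source)

      old = "      REAL*8 X, Y\n      INTEGER*4 N\n"
      print(modernize_star_notation(old))
      # ->   REAL(8) X, Y
      #      INTEGER(4) N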

    A source-to-source translator, with compiler-like logic in the front end, is needed to solve these issues in the general case.  A simpler approach might work for some of the easier issues, but the more general the tool, the greater the value to the scientific programmer.

    The task for a SIParCS student is to adapt a tool developed at the University of Oregon to perform these transformations and other, similar ones. The University of Oregon tool is now used in research to examine strategies for modifying Fortran for GPUs and other new architectures. Extensibility is a priority goal. Some of the transformations to be made may be identified through the student’s interactions with potential users.

    Students - This project is open to undergraduate students only. 

    Skills and Qualifications - Students interested in developing source-to-source translation as a way to improve software developer productivity are encouraged to apply for this position. Students should have knowledge of programming languages in general, including C/C++, and the communication skills to interact and work with potential end users. A basic understanding of compilers and a working knowledge of Fortran are desirable.



  11. Supercomputer InfiniBand Fabric Analysis *U,G
    Areas of interest: Software Engineering, Supercomputer Systems Operations, Visualization

    Supercomputing centers such as NCAR’s strive to provide users with a productive and efficient supercomputing interconnect, not only by observing performance but also through static analysis of network topology, routing, and design. A particularly important question is how best to optimize applications to use the available system network fully and efficiently. To answer it, we need static analysis of the interconnect to gain a better understanding of the design’s performance.

    The primary goal of this summer project will be to write a Tulip plugin in C++ to perform static analysis of the InfiniBand interconnect fabrics of NCAR’s “Yellowstone” and “Cheyenne” petaflop supercomputers. The undergraduate student on the team will be tasked with programming the plugin’s basic functionality. The graduate student will mentor the undergraduate and will be tasked with writing theoretical analysis software for the plugin. If time permits, the project will include analysis of real-time and archived performance data collected on the supercomputers.
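
    As a hedged illustration of what static fabric analysis involves (using Python’s networkx as a stand-in for Tulip’s C++ graph model, and an invented toy topology rather than either real machine), the sketch below counts how many shortest host-to-host routes cross each link, a crude static proxy for congestion under uniform traffic.

      # Hypothetical sketch of static interconnect analysis, with
      # networkx standing in for Tulip; the topology is an invented
      # toy, not either real fabric.
      import itertools
      from collections import Counter
      import networkx as nx

      # Tiny two-level tree: one root switch, two leaf switches, four
      # hosts. A real fabric would be parsed from discovery output.
      g = nx.Graph()
      g.add_edges_from([("root", "leaf0"), ("root", "leaf1"),
                        ("leaf0", "h0"), ("leaf0", "h1"),
                        ("leaf1", "h2"), ("leaf1", "h3")])

      hosts = ["h0", "h1", "h2", "h3"]
      link_load = Counter()

      # Count how many host-to-host shortest paths cross each link.
      for a, b in itertools.combinations(hosts, 2):
          path = nx.shortest_path(g, a, b)
          for u, v in zip(path, path[1:]):
              link_load[frozenset((u, v))] += 1

      for link, load in link_load.most_common():
          print(sorted(link), load)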

    Students - The project is open to undergraduate and graduate students.

    Skills and Qualifications: Students interested in the design and performance analysis of high-performance computing systems are encouraged to apply for this project. Students should have strong programming skills, with specific experience in Linux, C++, Python, Bash, or CMake. A working knowledge of high-performance computing environments and experience with InfiniBand fabrics are preferred but optional.
