Roadmap

This page gives an idea of the approximate directions we are heading in the near-term development of each tool. The directions are approximate because they reflect our views and desires; if you have a different one, let's discuss it!

The following is a brief breakdown of the current status and a roadmap for each tool, also detailing if/where help is needed. The tool-specific sections are followed by more general ones, such as potential improvements to the documentation and performance. See also the contributing guide.

Inciter

Compared to all other tools, Inciter (Compressible flow solver) currently receives most of the development work. The following areas are under active development. Please contact us if you would like to use and/or contribute to Inciter.

Solution-adaptive mesh refinement (AMR)

The AMR algorithm closely follows Jacob Waltz's paper. The algorithm is specific to tetrahedron-only grids and, by design, yields a conforming mesh, i.e., without hanging nodes. Tetrahedra enable automatic mesh generation for arbitrarily complex 3D computational domains, and since the algorithm always yields a conforming mesh, it can be (and is) independent of the underlying discretization scheme used to solve the partial differential equations.

The AMR algorithm (both refinement and derefinement) is already implemented on top of Charm++, in a distributed-memory-parallel fashion, using asynchronous communication to ensure a conforming mesh across mesh partitions. In addition, any of Charm++'s automatic load balancing strategies can be employed to homogenize computational load across a simulation using object migration.

We are currently finishing derefinement in parallel. We need to add more tests exercising AMR, and then we will investigate and most likely improve (both serial and parallel) performance with and without load balancing.
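As a rough illustration of where refinement decisions enter, here is a minimal, hypothetical sketch of marking edges of a tetrahedron mesh for refinement based on a simple solution-jump criterion. The names and data layout are made up for illustration and do not reflect the actual AMR implementation, which additionally enforces a conforming mesh across partitions via asynchronous communication.

```cpp
// Hypothetical sketch: mark edges for refinement if the solution jump across
// the edge exceeds a tolerance. Names and data layout are illustrative only,
// not Inciter's actual AMR data structures.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

using Edge = std::pair<std::size_t, std::size_t>;   // node-id pair, smaller id first

// Decide which edges of a tet mesh to refine based on a nodal scalar field u.
std::map<Edge, bool>
markEdges(const std::vector<std::array<std::size_t, 4>>& tets,
          const std::vector<double>& u,
          double tol)
{
  std::map<Edge, bool> refine;
  // the 6 local edges of a tetrahedron
  static const int ed[6][2] = { {0,1},{1,2},{2,0},{0,3},{1,3},{2,3} };
  for (const auto& t : tets) {
    for (const auto& e : ed) {
      std::size_t a = t[e[0]], b = t[e[1]];
      Edge key{ std::min(a,b), std::max(a,b) };
      if (std::abs(u[a] - u[b]) > tol) refine[key] = true;   // flag for refinement
      else refine.emplace(key, false);   // keep entry, never overwrite a 'true'
    }
  }
  return refine;
}
```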

P-adaptive discontinuous Galerkin (DG) methods

We are implementing DG finite element methods for solving the equations governing single- and multi-material fluids using the continuum approximation. We have already implemented 1st-, 2nd-, and 3rd-order accurate DG methods, and we plan to implement a 3rd-order accurate reconstructed DG method as well.

Using this family of DG methods, we have also implemented a solution-adaptive algorithm that automatically selects the degree of the approximation polynomial depending on the accuracy of the local numerical solution (p-refinement). This allows concentrating compute resources on those parts of the computational domain where they are required. Such adaptation also yields significant load imbalances for a simulation using multiple compute nodes, which are then remedied by turning on Charm++'s load balancing.

We are currently researching optimal ways to define refinement criteria and experimenting with various numerical error indicators. We are also currently implementing a DG algorithm for multi-material compressible flows.
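To make the idea of p-refinement concrete, here is a minimal sketch, assuming a per-element error indicator is already available; the indicator and the thresholds are exactly the part we are still researching, so everything below is illustrative only.

```cpp
// Hypothetical sketch of p-refinement: pick the polynomial degree of each
// element from a per-element error indicator. The thresholds and the
// indicator are placeholders; choosing them well is an open question.
#include <cstddef>
#include <vector>

std::vector<int>
selectDegree(const std::vector<double>& indicator,  // error indicator per element
             int maxdeg,                            // highest degree allowed, e.g., 2
             double coarseTol, double refineTol)    // hypothetical thresholds
{
  std::vector<int> deg(indicator.size(), 1);        // start from p=1 everywhere
  for (std::size_t e = 0; e < indicator.size(); ++e) {
    if (indicator[e] > refineTol) deg[e] = maxdeg;  // large error: raise degree
    else if (indicator[e] < coarseTol) deg[e] = 0;  // tiny error: drop to p=0
    // otherwise keep p=1
  }
  return deg;
}
```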

Continuous Galerkin (CG) methods

We have two kinds of CG algorithms implemented for unstructured 3D meshes: (1) DiagCG, a node-centered method combining flux-corrected transport with Lax-Wendroff time stepping, and (2) ALECG, also a node-centered method, featuring Riemann solvers and MUSCL reconstruction on cell edges, combined with Runge-Kutta time stepping. Note that the ALECG algorithm can be viewed either as an edge-based finite element method on tetrahedron meshes or as a node-centered finite volume method (on the dual mesh, consisting of arbitrary polyhedra). See Inciter (Compressible flow solver) for more details.

We have been using ALECG in production, and are adding adaptive mesh refinement and ALE mesh motion. Both CG algorithms need more testing, especially with AMR, equations of state other than ideal gas, and more examples exercising more complex-geometry domains.
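The gist of the edge-based, node-centered update in ALECG can be sketched as follows. This is a heavily simplified, hypothetical illustration for a single scalar unknown with a placeholder flux function, not Inciter's actual implementation.

```cpp
// Hypothetical sketch of an edge-based, node-centered update: fluxes are
// evaluated on edges and scattered to the two end nodes, then the nodal
// unknowns are advanced with a two-stage Runge-Kutta step. The data layout
// and the flux function are illustrative only.
#include <cstddef>
#include <vector>

struct Edge { std::size_t p, q; double area; };   // edge with dual-face area

// upwind-type numerical flux between two nodal states (placeholder)
double numflux(double up, double uq, double vel) {
  return vel > 0.0 ? vel*up : vel*uq;
}

// assemble the nodal right-hand side from edge contributions
std::vector<double>
rhs(const std::vector<Edge>& edges, const std::vector<double>& u, double vel)
{
  std::vector<double> r(u.size(), 0.0);
  for (const auto& e : edges) {
    double f = numflux(u[e.p], u[e.q], vel) * e.area;
    r[e.p] -= f;     // flux leaves node p ...
    r[e.q] += f;     // ... and enters node q
  }
  return r;
}

// two-stage (second-order) strong-stability-preserving Runge-Kutta step
void ssprk2(std::vector<double>& u, const std::vector<Edge>& edges,
            const std::vector<double>& vol, double vel, double dt)
{
  auto u0 = u;
  auto r = rhs(edges, u, vel);
  for (std::size_t i = 0; i < u.size(); ++i) u[i] = u0[i] + dt*r[i]/vol[i];
  r = rhs(edges, u, vel);
  for (std::size_t i = 0; i < u.size(); ++i)
    u[i] = 0.5*u0[i] + 0.5*(u[i] + dt*r[i]/vol[i]);
}
```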

Verification and validation (V&V)

While implementing the above algorithms (AMR, DG, and CG) we have mostly concentrated on correctness, demonstrating design-order accuracy for smooth problems using manufactured solutions, and parallel scalability. To increase confidence in the algorithms, we need to set up and document more verification and validation cases.
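For such verification studies, the observed order of accuracy is typically estimated from errors measured on two successively refined meshes; a minimal sketch with made-up numbers:

```cpp
// Minimal sketch: estimate the observed order of accuracy from L2 errors
// measured on two meshes with characteristic sizes h1 > h2 (e.g., from a
// manufactured-solution run). The numbers below are made up for illustration.
#include <cmath>
#include <cstdio>

double observedOrder(double e1, double h1, double e2, double h2) {
  return std::log(e1/e2) / std::log(h1/h2);
}

int main() {
  double e1 = 2.0e-3, h1 = 0.02;   // coarse-mesh L2 error (hypothetical)
  double e2 = 5.1e-4, h2 = 0.01;   // fine-mesh L2 error (hypothetical)
  std::printf("observed order: %.2f\n", observedOrder(e1, h1, e2, h2));
  // a 2nd-order scheme should print a value close to 2
  return 0;
}
```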

Add and document more examples

Closely related to V&V, additional examples that demonstrate different capabilities of Inciter should be documented.

Add regression tests

We need to add more regression tests for those parts of the methods under src/PDE/ that are not yet adequately tested.

Hook back up particle tracer and output

As an experimental feature, Inciter used to have Lagrangian particles that were simply advected with the flow being computed. Particles were generated into each tetrahedron cell and were also communicated across partition boundaries. Such particles can be used for a variety of purposes: as visualization tracers, for representing physics that is better captured in a Lagrangian fashion (potentially interacting with other quantities represented on the mesh), or for providing a statistical representation of unresolved-scale (sub-grid) processes, e.g., in turbulent flows.

This code is under src/Particles/ and parallel particle file output is under src/IO/ParticleWriter.[ch]pp, but neither is currently compiled or tested. It would be great to hook these pieces of code back into Inciter so additional physics capabilities can use them, e.g., a particle-in-cell solver.
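The core of such passive tracers is simple: each particle is moved with the locally interpolated flow velocity. Below is a minimal, hypothetical sketch that ignores particle search in tetrahedra and cross-partition communication, which the code under src/Particles/ handles.

```cpp
// Hypothetical sketch of passive tracer advection: each particle is simply
// moved with the local flow velocity (evaluated by a user-supplied callback)
// using forward Euler.
#include <array>
#include <functional>
#include <vector>

using Vec3 = std::array<double,3>;

void advect(std::vector<Vec3>& particles,
            const std::function<Vec3(const Vec3&)>& velocity,  // interpolated flow velocity
            double dt)
{
  for (auto& x : particles) {
    Vec3 v = velocity(x);
    for (int d = 0; d < 3; ++d) x[d] += dt * v[d];
  }
}
```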

Incorporate RootWriter into MeshConv

Inciter can save meshes and field output data in files that can be analyzed by ROOT. To exercise and test this functionality, fileconv_main is currently used to convert data between ROOT and ExodusII formats. However, a better way to do this is to incorporate the functionality of FileConv into MeshConv (Mesh format converter).

Equations of state (EOS)

Currently, equations of state for ideal gases are implemented. It would be great to implement some more advanced EOSs, including coupling to libraries that provide EOS tables. A good starting point for this is under PDE/EoS/.
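For reference, the ideal gas relations currently used boil down to p = (gamma-1) rho e and c = sqrt(gamma p / rho). The sketch below writes them as a small class merely to suggest the kind of interface a more advanced or table-based EOS could also provide; the class name is hypothetical and does not correspond to the actual code under PDE/EoS/.

```cpp
// Sketch of the ideal gas relations, written as a small class to suggest the
// kind of interface a more advanced EOS (or a coupling to an EOS-table
// library) could also provide. Names are hypothetical.
#include <cmath>

class IdealGas {
  double g;                       // ratio of specific heats, gamma
public:
  explicit IdealGas(double gamma) : g(gamma) {}
  // pressure from density and specific internal energy: p = (gamma-1) * rho * e
  double pressure(double rho, double e) const { return (g - 1.0) * rho * e; }
  // speed of sound: c = sqrt(gamma * p / rho)
  double soundspeed(double rho, double p) const { return std::sqrt(g * p / rho); }
};
```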

UnitTest

UnitTest is also pretty mature and we heavily use it. There is no real need to do more work on it.

MeshConv

MeshConv (Mesh format converter) is also pretty mature for what we have needed it for so far. It can, of course, always be augmented with new mesh formats. Examples are ROOT (discussed above) and Omega_H.


Improve the documentation

The web site at quinoacomputing.github.io is automatically generated by our continuous integration server whenever new merges to the develop or master branches happen. The procedure uses doxygen to parse the source code and extract API documentation in XML format, which is then post-processed by m.css, which also adds the search function in the upper right corner. This helps us keep the content up to date with the code, but the site also needs some extra, hand-written content describing more detailed subjects, e.g., in the form of HOWTOs.

The documentation could be improved both in terms of content as well as functionality.

Content

Examples

Not all existing functionality is demonstrated and documented via examples. This could definitely use some extra hands, and most of it does not really require writing code.

HOWTOs

We have a few HOWTOs that detail some specific aspects of Inciter. These are the pages titled _"How to add ..."_. Inciter (Compressible flow solver) could use some additional such pages to lower the barrier of entry for contributing.

Functionality

Add a blog page

It would be good to post blogs in the future. We currently do not have a mechanism on our web site to allow for a nice list of blog posts. (Well, we don't have blog posts either, but we have to start somewhere.) The magnum graphics engine's blog page is a good example of how this can be done in a way that nicely fits into the existing style of our web site.

Improve performance

For all of Quinoa's tools we have so far prioritized (1) learning how to use the Charm++ runtime system in the most optimal way, and (2) achieving good parallel scalability (good strong and weak scaling) for large problems on large computers. Absolute performance has been somewhat secondary. By absolute performance we mean on-compute-node or single-CPU performance. The latter will definitely need some work, with some potentially large (and juicy) low-hanging fruit. This can be done by profiling first, then identifying and optimizing the hot spots, while continuously benchmarking the changes against (1) the current/previous implementation or (2) another highly optimized code (or codes) implementing similar functionality. All tools, but most importantly Inciter (Compressible flow solver), need such performance optimization help.
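As an example of the benchmarking part, a suspected hot spot can be timed before and after an optimization with a minimal std::chrono harness like the sketch below; the kernel is a stand-in for whatever routine a profiler identifies.

```cpp
// Minimal sketch of micro-benchmarking a suspected hot spot with std::chrono.
// The kernel below is a placeholder; in practice one would time the routine
// identified by a profiler, before and after optimizing it.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

double kernel(const std::vector<double>& v) {       // placeholder hot spot
  return std::accumulate(v.begin(), v.end(), 0.0);
}

int main() {
  std::vector<double> v(10'000'000, 1.0);
  auto t0 = std::chrono::steady_clock::now();
  double s = kernel(v);
  auto t1 = std::chrono::steady_clock::now();
  std::chrono::duration<double> dt = t1 - t0;
  std::printf("sum = %g, time = %g s\n", s, dt.count());
  return 0;
}
```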

Improve user-friendliness

Inciter (Compressible flow solver) needs some work on improving user-friendliness. It parses an input file, expecting a specific grammar, to configure the numerical solution it is to compute. There is only basic error handling during and after parsing the user input. To make the tool more effective, robust, and fun to use, more testing is necessary. For example, one could take one of its regression test inputs under tests/regression/, randomly alter it, feed it garbage input with wrong syntax for various parameters, and see if the correct error message is generated.
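A minimal sketch of the "randomly alter an input" idea, purely illustrative and not existing test infrastructure: corrupt a few characters of a control file's text, then feed the result to the parser and check that a sensible error message is produced.

```cpp
// Hypothetical sketch: flip a few characters of an input (e.g., the text of a
// regression-test control file) to random printable garbage. The mutated
// string would then be fed to the parser to exercise its error handling.
#include <random>
#include <string>

std::string mutate(std::string input, std::size_t nflips, unsigned seed) {
  if (input.empty() || nflips == 0) return input;
  std::mt19937 gen(seed);
  std::uniform_int_distribution<std::size_t> pos(0, input.size() - 1);
  std::uniform_int_distribution<int> chr(33, 126);   // printable ASCII
  for (std::size_t i = 0; i < nflips; ++i)
    input[pos(gen)] = static_cast<char>(chr(gen));
  return input;
}
```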

A potentially more fun (and more rewarding) way to do this is to automate it with fuzz testing, e.g., by setting up american fuzzy lop (AFL). If we could get help setting this up and making it part of our continuous integration, that would be outstanding!