Code Generation for Earth System Modeling

At the Workshop on Coupling Technologies in December, I was pleased to see that code generation is making some inroads into the climate and Earth System Modeling communities.  Although the idea is not completely new, the vast majority of the code in the big models has been written by hand.  There are very good reasons for this.  There is a large amount of legacy code that has been extended and adapted over the years.  Working with such a large existing codebase makes code generation challenging.  It is relatively straightforward to create a generator when you have complete control over the format of the output.  However, generators that must take into account the structures of existing code are much more complicated to build.  A high-level code generator will often have a specific target architecture in mind, but if there is existing code then you have fewer guarantees about the properties of that code.  And, in the case of large models that have grown organically over the years, you may actually have several different architectures glued together and adapted to each other.

A further impediment to code generation is the issue of scientific model validation.  Validation is a time-consuming process.  Once a set of source code has been validated, the last thing you want to do is go and change it.  But, in many cases, code generators will turn a small change in a high-level language into a lot of modified generated code.  In theory, this is not an issue.  In fact, a code generator can give you correctness guarantees that you don’t get when programming by hand (assuming you have validated the code generator itself).  But there are still a lot of practical considerations that can make large-scale code generation unappealing for large scientific models.

The current thinking about code generation for Earth System Models (ESMs) is to isolate the science from the “infrastructure” as much as possible.  The science (often called “user code”) will be coded by hand, as it is extremely hard to conceive of a code generator smart enough to handle the highly customized nature of scientific code, and the infrastructure (the code responsible for gluing modules together and handling routine tasks such as parallel interpolation and I/O) will be generated around the science.  This is an appealing approach, assuming we can effectively isolate the infrastructure from the science.  I think this can be done to a large degree, although sometimes the two blur together.  For example, couplers often contain scientific functions such as ensuring global conservation of quantities or converting between wet and dry mixing ratios of chemical constituents in the atmosphere.  So the science itself begins to seep into the infrastructure, even though we would like to keep them isolated.  The degree to which complete separation of concerns is possible in practice is still an open question.
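To make the conservation example concrete, here is a toy sketch of the kind of scientific fix-up a coupler might apply after interpolating a flux between grids.  The grids, areas, and values are invented for illustration; this is not any real coupler’s code.

```python
# Toy example of science seeping into the coupler: after interpolating a
# flux between grids, the coupler rescales the destination values so that
# the global (area-weighted) integral is conserved.  All numbers invented.
def conserve(src_vals, src_areas, dst_vals, dst_areas):
    """Rescale interpolated values so the global integrals match."""
    src_total = sum(v * a for v, a in zip(src_vals, src_areas))
    dst_total = sum(v * a for v, a in zip(dst_vals, dst_areas))
    scale = src_total / dst_total
    return [v * scale for v in dst_vals]

# Suppose interpolation (not shown) produced dst_vals; fix the global budget:
fixed = conserve([1.0, 2.0], [0.5, 0.5], [1.4, 1.4], [0.6, 0.4])
print(fixed)  # rescaled so the destination integral matches the source's 1.5
```

Even this small fix-up embeds a scientific decision (a multiplicative correction) inside what is nominally infrastructure code.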

With respect to “infrastructure” code, there seems to be a general assumption that much of this kind of code is “boilerplate” in nature; that is, there are large segments of code that are repeated frequently with slight modifications.  This kind of code is a prime candidate for a template-based approach to code generation, in which the common parts of the repeated code fragments are provided as a template and the variable parts are filled in when the template is instantiated.  While I don’t disagree that templates are a nice way of handling oft-repeated code structures, I do question the underlying assumption of just how “boilerplate” the infrastructure code really is in the first place.
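As a minimal sketch of the template idea, using only the Python standard library (the Fortran fragment being generated and the field names are hypothetical):

```python
# Minimal template-based generation: the common part of a repeated code
# fragment is the template, and the variable parts are filled in per field.
from string import Template

# Common part of the repeated fragment, with $-slots for the variable parts.
halo_call = Template(
    "call halo_exchange(${field}, grid=${grid}, halo_width=${width})\n")

# Instantiate the template once per field, filling in the variable parts.
for field in ("temperature", "pressure", "humidity"):
    print(halo_call.substitute(field=field, grid="atm_grid", width=2), end="")
```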

Consider this UML sequence diagram showing the interactions among sub-components of an atmosphere model, specifically the “physics” module and the “dynamics” module.  What you’ll notice is that the interaction is not a simple “call dynamics, then call physics” loop.  Instead, the interaction is more sophisticated.  Each module has several phases, and the calls to the different phases are interleaved.  This interleaving is not arbitrary.  There are real scientific and numerical requirements that lead to the calling sequence.

But, the software engineering let-down here is that it is very hard to make sense of the reasons behind the sequencing of the calls (e.g., phases have names like run1(), run2(), run3()).  The scientific constraints between the two modules are hidden deep inside each call.  These kinds of constraints limit the amount of “boilerplate” code that can be generated automatically using a template-based approach.
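A schematic driver illustrates the point.  This is not actual model code; the comments are my guesses at the kind of dependency each call might encode, whereas in the real code the names alone reveal nothing.

```python
# Schematic driver (not actual model code) showing interleaved phase calls.
class Component:
    """Stand-in for a model sub-component with several run phases."""
    def __init__(self, name):
        self.name = name
    def run(self, phase):
        print(f"{self.name}.run{phase}()")

def timestep(dynamics, physics):
    # The ordering is not arbitrary, but the opaque phase numbers hide why.
    dynamics.run(1)   # e.g., advance the dynamical core a half step
    physics.run(1)    # e.g., fast physics that needs the half-step state
    dynamics.run(2)   # e.g., complete the dynamics update
    physics.run(2)    # e.g., slower physics applied to the full-step state
    dynamics.run(3)   # e.g., fold the physics tendencies back in

timestep(Component("dynamics"), Component("physics"))
```

Because the correct interleaving depends on scientific constraints buried inside each phase, a template cannot simply stamp out a generic “call dynamics, call physics” loop.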

Nonetheless, code generation is being used successfully in this community:

  • The PALM dynamic execution environment relies on code generation to interface user code with the scheduling and launching modules.  According to Andrea Piacentini, this approach replaces explicit calls to initialize and finalize methods, making it easier to start several instances of the same code in parallel or in sequence, and to choose whether a code should start as a standalone executable or as a subroutine in a wrapped single executable.  In this case, the code generator provides flexibility in how the user’s scientific code is packaged.
  • The Bespoke Framework Generator (BFG) also makes extensive use of code generation to wrap scientific code (which should be written as standard Fortran subroutines).  The philosophy here is that instead of writing to a specific framework (such as ESMF or OASIS), you just write the science as plainly as possible and generate the rest of the infrastructure around it.  If this can be done successfully, it opens the door to interoperability, as different kinds of wrappers and adapters can be generated for the same chunk of scientific code (see the sketch following this list).
  • Although I am less familiar with WRF, my understanding is that code generation is used to create I/O calls for the hundreds of possible fields that the model supports.  This is a good example of straightforward template-based code generation.
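Here is a minimal sketch of the wrap-the-plain-science idea.  The subroutine name, argument list, and framework_get/framework_put calls are all invented; this is not BFG’s actual input language or output format.

```python
# Sketch: generate different Fortran wrappers around one plain science
# subroutine, depending on how it is to be packaged.  All names invented.
SCIENCE = ("ocean_step", ["sst", "currents"])  # subroutine and its arguments

def generate_wrapper(name, args, target):
    """Emit Fortran source that packages `name` for the chosen target."""
    if target == "standalone":
        body = [f"  call {name}({', '.join(args)})"]
    else:  # e.g., coupled inside a wrapped single executable
        body = ([f"  call framework_get('{a}', {a})" for a in args]
                + [f"  call {name}({', '.join(args)})"]
                + [f"  call framework_put('{a}', {a})" for a in args])
    return "\n".join([f"subroutine {name}_wrapper()"] + body
                     + ["end subroutine"])

# The same science code can be packaged in two different ways:
print(generate_wrapper(*SCIENCE, target="standalone"))
print(generate_wrapper(*SCIENCE, target="coupled"))
```

The point is that the packaging decision is deferred to generation time rather than baked into the science.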

These few examples show that code generation is useful for reducing the amount of hand-written code (at the cost of writing the generator itself), delaying decisions about how to package scientific code, and providing increased flexibility in how to interface scientific code.  That being said, some things are not clear.  For example, how much time is actually saved using these approaches?  Several comments at the coupling workshop pointed out that the real complexity in these models lies in the science itself.  Dealing with issues of scientific sensitivity requires much more time and attention than interfacing software modules.  On the other hand, once an infrastructure or technology has been adopted, it can be painful to extract the science from it.

A related question is: how much work is required to write a scientific module in an “infrastructure-neutral” way?  In other words, if you plan on generating a large part of the infrastructure code, then you have to change your development process a bit to take the code generation phase into account.  This means that higher-level abstractions such as grids, fields, and clocks no longer appear explicitly in the user-written portion of the code.  Instead, those abstractions are “woven” into the code at a later time, depending on the particular type of infrastructure that you wish to generate.  This is roughly the approach taken by BFG.
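A toy illustration of what such “weaving” might look like follows.  The marker convention and the injected lines are invented for illustration; BFG’s actual mechanism differs.

```python
# Toy "weaving": the user writes plain code with no framework abstractions,
# and the generator later splices in framework-specific clock handling at
# marked points.  Markers and injected lines are invented.
USER_CODE = """\
! <inject:declarations>
do while (.not. done)
  call science_step(t)
  ! <inject:advance_clock>
end do
"""

INJECTIONS = {
    "declarations": "type(Clock) :: clock  ! framework clock woven in later",
    "advance_clock": "call clock_advance(clock)",
}

woven = USER_CODE
for marker, code in INJECTIONS.items():
    woven = woven.replace(f"! <inject:{marker}>", code)
print(woven)
```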

An alternative approach is to define a set of common, high-level abstractions and write your scientific code using those abstractions.  If you wish to target multiple infrastructures (e.g., support FMS, ESMF, and OASIS), then the abstractions must be general enough to represent all your target infrastructures.  This is likely the reason why no one has attempted this approach: it is a lot of work to define a set of common modeling abstractions that could be mapped back to the concrete constructs of all of the infrastructures that you wish to support.  It’s not even clear if such a common set of abstractions could be defined in the first place due to the different scopes of the coupling technologies and the degree to which each addresses scientific concerns.  (For more info on the range of features supported by coupling technologies, see our feature modeling work.)
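To illustrate what this untried alternative might look like, here is a hypothetical sketch in which the science is written against one abstract Field interface and per-infrastructure adapters map it to concrete constructs.  The adapter bodies are placeholders, not real FMS/ESMF/OASIS calls.

```python
# Hypothetical common-abstractions approach: science code targets an abstract
# Field; adapters map the abstraction to each concrete infrastructure.
from abc import ABC, abstractmethod

class Field(ABC):
    """A common modeling abstraction shared by every target infrastructure."""
    @abstractmethod
    def put(self, name, data): ...

class ESMFField(Field):
    def put(self, name, data):
        print(f"ESMF-style export of {name}")   # placeholder, not real ESMF

class OASISField(Field):
    def put(self, name, data):
        print(f"OASIS-style put of {name}")     # placeholder, not real OASIS

def science(field):
    # The science sees only the abstraction, never the infrastructure.
    field.put("sst", [280.0, 281.5])

science(ESMFField())
science(OASISField())
```

The hard part, as noted above, is not the adapter pattern itself but defining a Field (and grid, and clock) abstraction general enough to map onto every target infrastructure.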
