From Coupling Frameworks to Domain-Specific Languages

Reusable software for building coupled models in the earth science domains continues to evolve. Many tools required for coupling are provided as libraries that are imported by the models to be coupled (e.g., the Model Coupling Toolkit, OASIS/PSMILe). The Earth System Modeling Framework (ESMF) takes a different approach by providing a software framework for implementing coupled models. Szyperski et al. [1] note that a primary distinction between libraries and frameworks is what is shared: libraries are good at sharing concrete solution fragments, such as an implementation of a distributed interpolation algorithm, while frameworks are good at sharing architectural designs, such as representing a coupled model as a hierarchy of components that interact by exchanging data through import and export states. Frameworks can offer a significant advantage over libraries in cases where architectural decisions and patterns of object interaction can be fixed ahead of time and encoded directly in the framework. This accelerates development by relieving the developer of certain design decisions and the corresponding implementation work.

Work in the software engineering community has shown that a framework can be used to generate a domain-specific language (DSL), either by evolving an existing framework into a DSL [2] or by iteratively refining a DSL while developing a framework [3]. There are a number of arguments for moving beyond a framework to a DSL. Van Deursen [4] notes a few: (1) DSLs can guide framework design by helping to ensure conceptual purity (i.e., if a concept does not seem to fit into the DSL’s metamodel, then it probably should not be in the framework); (2) DSLs encourage black-box reuse (which favors “plugability” of components, not the highly coupled class inheritance approach); and (3) DSLs remove the programming language aspect from the framework, providing a more abstract, conceptual view of the domain. Also, since a DSL is based on a conceptual metamodel, it should be able to describe applications within the target domain, even if they are not implemented using the underlying framework. In fact, this ability to describe applications outside the framework is a good way to validate the DSL [3]. The advantage for the ESM coupling domain is that a DSL allows us to describe numerical models and how they are coupled even if they are not implemented as components in a particular framework (i.e., ESMF).

As part of my dissertation work I am creating a DSL for the coupled ESM domain based primarily on ESMF (because it is a framework) and informed by the other coupling technologies currently in use in the domain. In this post I will outline some of the difficulties involved, list possible solutions, and, in cases where I have made a decision, explain the solution I have selected and why. The process I am using involves developing a domain metamodel by eliciting concepts from ESMF, including both structural and behavioral aspects of the framework. I am developing the metamodel using a UML-like, object-oriented notation. The metamodel itself will serve as the underlying conceptual model from which visual builder tools and concrete syntaxes can be later derived.

ESMF is both a white-box and black-box framework. The ESMF superstructure, which defines the hierarchical component architecture of a coupled model, is white-box because components are represented as abstract classes that must be filled in with the implementation of the model’s science, i.e., the set of numerical kernels that solve the underlying mathematical model. As is typical for most frameworks, numerical model interfaces in ESMF are implemented using an inversion of control paradigm. However, in a departure from most frameworks, the top-level driving code that indicates the sequencing of calls to the model’s sub-components must be provided by the user. This design decision provides flexibility in the sequencing of the model’s sub-components, including the ability to exclude certain sub-components or change the order of sub-component sequencing based on scientific requirements. The ESMF infrastructure, which provides implementations of common coupled modeling functions (e.g., time management, regridding), is black-box in nature: these functions do not need to be extended with any custom functionality, and their implementations are designed to be suitable for most coupled models out of the box.
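The white-box/black-box split can be sketched in a few lines. This is a conceptual illustration only, not the real ESMF API: the class and method names (GriddedComponent, Clock, OceanModel) are hypothetical stand-ins for the framework's abstract component superstructure, its black-box infrastructure, and a user-written model and driver.

```python
class GriddedComponent:
    """White-box superstructure: an abstract component the user must extend."""
    def initialize(self, clock):
        raise NotImplementedError
    def run(self, clock):
        raise NotImplementedError
    def finalize(self, clock):
        raise NotImplementedError

class Clock:
    """Black-box infrastructure: used as-is, never subclassed by the user."""
    def __init__(self, steps):
        self.steps = steps

class OceanModel(GriddedComponent):
    """User-supplied science code filling in the white-box hooks."""
    def initialize(self, clock):
        self.step_count = 0
    def run(self, clock):
        self.step_count += 1  # stand-in for a numerical kernel
    def finalize(self, clock):
        return self.step_count

# Unlike many frameworks, the top-level driver is user-written, so the
# call sequence (and whether a given phase runs at all) is under user control.
ocean = OceanModel()
clock = Clock(steps=3)
ocean.initialize(clock)
for _ in range(clock.steps):
    ocean.run(clock)
total_steps = ocean.finalize(clock)
```

The inversion of control is visible in the class hierarchy: the framework defines *when* initialize/run/finalize are meaningful, while the user supplies *what* they do; only the driver loop at the bottom is fully in user hands.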

How should the science code be represented for the white-box portions of the framework?

One difficulty is how to represent a model’s science code within the DSL. In ESMF, the user’s science code sits in the middle, between the component superstructure and the set of tools in the infrastructure. The superstructure layer acts like a wrapper, providing a standard, well-defined interface into the model. In ESMF, the abstract class Gridded Component is extended by the user in order to provide its scientific implementation. This kind of white-box reuse pattern is harder to represent within the DSL than the parameterized black-box routines in the ESMF infrastructure, because the black-box constructs are called “as is” without requiring any implementation details from the user. The problem is one of separating the varying parts of the system from the fixed parts; specifically, how are the varying parts connected back to the invariant parts?

This issue of having to provide the science code in a form external to the DSL (e.g., a general-purpose programming language) could be avoided by developing a DSL that is truly complete [5], i.e., one in which an entire coupled model, including the numerical kernels that implement the underlying mathematical model, could be described within the DSL metamodel. I believe the current state of the art precludes defining a DSL that meets the completeness criterion. This is based on the fact that ESMF itself does not attempt to provide abstractions for writing the scientific kernels. The highly customized nature of this kind of code is perhaps best left to a general-purpose language (GPL), where the programmer has the freedom to implement the science in whatever manner seems most appropriate.

One way to categorize solutions to this problem is by deciding which object is parameterized:

  • Write the scientific code in a separate module parameterized by the dependencies on the framework. I call this process of replacing calls to the framework with parameters provisioning. The provisioned science module is linked to the code generated from the user specification (i.e., the coupling configuration). Let me clarify this with an example: the DSL provides abstractions for representing the spatial discretization of the numerical model (i.e., the grid) including its bounds and geophysical coordinates. The scientific algorithms depend on the grid properties for their implementations (e.g., think of nested loop conditions running over the entire domain). The science module declares its dependencies on the grid properties using standard general-purpose programming language idioms such as subroutine arguments, instance variables, or global variables. The grid itself has been defined as an abstraction within the DSL, so its properties (e.g., min and max indices for each dimension) have to be translated into intrinsic types of the science module’s programming language. Some explicit mapping must be provided between the parameters in the general-purpose PL and the conceptual abstractions in the DSL. The code output from the generator is linked to the science module. The generated code instantiates the parameters in the provisioned science module (e.g., by generating the required calls into the framework to retrieve the needed parameters).
  • A related approach is to provide the science code as a parameter to an instance of the DSL (i.e., a coupling configuration). This is in essence the opposite of the first approach, in which the science code is the parameterized object. This approach advocates parameterizing a coupling configuration (which may be viewed as an application skeleton) with scientific kernels to fill in the “holes.” The science is provided as code fragments and the surrounding context is provided by the DSL (actually, by the generated infrastructure behind the scenes). This approach does not, however, obviate the need to access framework-mediated properties such as the grid bounds and the memory locations of field variables. It does reduce the burden on the developer to declare the parameters explicitly in the science code: the declarations can be made at the level of the DSL, and the parameter declarations and instantiations in the science code can be added behind the scenes during the code generation phase.
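The first approach, provisioning, can be sketched concretely. This is a minimal illustration under assumed names: the kernel `advect`, the glue function `generated_glue`, and the dictionary standing in for the framework's grid object are all hypothetical, and the "generated" code is hand-written here for clarity.

```python
# Approach 1 sketch: the science kernel is "provisioned" -- framework calls
# are replaced by plain parameters (here, grid bounds passed as arguments),
# so the module compiles and runs with no framework dependency at all.

def advect(field, i_min, i_max):
    """Science kernel: depends only on intrinsic types, not on the framework.
    A trivial upwind-style shift stands in for a real numerical kernel."""
    for i in range(i_min, i_max):
        field[i] = field[i - 1]
    return field

def generated_glue(framework_grid, field):
    """What the DSL's code generator would emit: it retrieves the grid
    properties from the framework abstraction and instantiates the kernel's
    parameters -- the explicit mapping from DSL concepts to intrinsic types."""
    i_min = framework_grid["i_min"]
    i_max = framework_grid["i_max"]
    return advect(field, i_min, i_max)

# The "framework grid" here is just a dict; in reality the generator would
# emit calls into the coupling framework to fetch these values.
field = [1.0, 0.0, 0.0, 0.0]
out = generated_glue({"i_min": 1, "i_max": 4}, field)
```

Note that `advect` is reusable outside the DSL entirely: any caller that can supply the two bounds can drive it, which is exactly the reuse benefit (and the refactoring cost) discussed below. The second approach would instead paste only the loop body into the DSL instance and let the generator synthesize the subroutine signature.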

The forces involved with deciding how to separate the science code:

  • Readability.  Although separated from the coupling framework, the science code should be as readable as possible. It should be easy for the programmer to understand the scientific algorithm. One way to aid in this is to ensure that the provisioning mechanism (which exposes dependencies from the framework to the science code) is clear and easy to understand. For example, good naming conventions can improve understandability of the meaning of each parameter.
  • Reducing parameter bandwidth.  Parameters flow from the framework to the science code and from the science code to the framework. Mechanisms that reduce the number of parameters should be preferred.
  • Code size. The less code the user has to write, the better.
  • Reuse. It is advantageous to be able to reuse the scientific modules in contexts outside of the DSL.
  • Legacy code. In cases where there is a lot of existing code, the process of separating science from the coupling framework can be expedited by requiring minimal changes to the existing implementation.

The first approach, isolating the science code in a separate provisioned module, is best for reuse (the module can be compiled independently of the DSL) and probably better for readability (because all parameters are provided explicitly using GPL idioms). It is not clear whether the first approach would require fewer changes to legacy code than the second. The first approach requires that the existing module be stripped of any framework code (at least any code that will not be provided by the DSL code generator) while remaining a syntactically valid module in its GPL (e.g., a compilable Fortran module). This requires a good bit of refactoring, as calls to the coupling framework are replaced with parameters expressed in GPL constructs such as subroutine arguments or private module variables. The second approach allows the user to copy and paste the numerical kernels directly from the existing implementation into the DSL instance, leaving behind any framework code that will now be generated. In other words, the surrounding context is removed with the understanding that it will be generated automatically during the DSL compilation phase. The size of the codebase also bears on the choice between the two approaches: depending on a numerical model’s architecture, large codebases might only require provisioning at the highest levels of the architecture, where most of the coupling-related functions live, and the rest of the model code can be left alone.

The second approach is preferable for the code size criterion (because only code fragments need be provided, not an entire GPL module such as a Fortran module or C++ class) and for reducing the parameter bandwidth (because the framework manages many abstractions, far more parameters flow to the science code than the other way around). Early results indicate that the large number of abstractions managed by the framework that must be accessible to the science code results in a high parameter bandwidth, i.e., a large number of framework-mediated parameters must be exposed to the science code. This has the potential to reduce the reusability of the provisioned science module outside the context of the DSL and its accompanying code generator, because it requires that some other external component be able to provide the required parameters.

Currently, I have implemented the first approach for a number of ESMF parameters, and the implementation demonstrates its feasibility. I am now trying the second approach in order to better understand the tradeoffs between the two.

What is the right balance between user control (explicit specification) and implicit behavior?

ESMF anticipates the need for user customization in the way the numerical model is implemented. The two primary abstract classes, Gridded Component and Coupler Component, have three abstract method types that are to be overridden by the user: initialize, run, and finalize. They are method types because an ESMF component might have several initialize methods, for example, with a phase number distinguishing among them. The implementation of each of these methods is wide open, affording the user a lot of flexibility in how the components behave in each phase. Moreover, because the user is also responsible for writing the driver code, the ordering of calls to the overridden methods is not fixed: it is possible to call them in any order (including run before initialize), to skip method calls, to ignore the phase sequence numbers, and so on.
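The notion of method types with phase numbers, and the freedom (for better or worse) of the user-written driver, can be sketched as follows. The registration-table design and all names here are hypothetical illustrations, not the ESMF registration API.

```python
# Sketch: a component holds a table of (method type, phase) -> callable,
# so "initialize" is a method type that may have several phased methods.

class Component:
    def __init__(self):
        self.methods = {}  # (method_type, phase) -> callable

    def register(self, method_type, phase, fn):
        self.methods[(method_type, phase)] = fn

    def call(self, method_type, phase=1):
        return self.methods[(method_type, phase)]()

log = []
comp = Component()
comp.register("initialize", 1, lambda: log.append("init phase 1"))
comp.register("initialize", 2, lambda: log.append("init phase 2"))
comp.register("run", 1, lambda: log.append("run"))

# User-written driver: nothing in the framework forces initialize before
# run, or phase 1 before phase 2 -- the sequence below skips phase 1 entirely.
comp.call("initialize", phase=2)
comp.call("run")
```

The driver's freedom is exactly the double-edged sword discussed next: phase 1 was silently skipped, and nothing would have stopped the driver from calling run first.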

While this flexibility is a good thing, it does allow the user to “shoot themselves in the foot” if the framework is not used properly. In addition, there are some domain-specific design patterns that ESMF espouses whose use could be inferred automatically from a declarative specification. Providing implicit access to these domain-specific patterns saves the user time in writing the specification. The black-box portions of the framework are particularly amenable to implicitness because they require little additional information from the user.

A related development in the ESMF community is the advent of “usability layers” such as the NUOPC Layer, which encodes certain default behaviors and constraints on usage of the framework. A subset of the ESM community has adopted this layer in recognition that certain usage patterns of the framework can be handled implicitly. For example, a default driver’s behavior is to execute the run method of all couplers in sequence followed by the run methods of all gridded components in sequence. This removes the need for the user to write any driver code. (See implicit coupling design document.)
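The default run sequence described above can be sketched in a few lines. This is an illustrative reconstruction of the stated behavior (all couplers, then all gridded components, each time step), not NUOPC code; the function and component names are invented.

```python
# Sketch of a usability layer's implicit default driver: per time step,
# run every coupler in sequence, then every gridded component in sequence.
# The user supplies only the component lists, not the driving logic.

def default_driver(couplers, gridded_components, n_steps):
    sequence = []
    for step in range(n_steps):
        for c in couplers:
            sequence.append(("coupler", c, step))
        for g in gridded_components:
            sequence.append(("gridded", g, step))
    return sequence

# Hypothetical coupled system: one coupler mediating two components.
calls = default_driver(couplers=["atm2ocn"],
                       gridded_components=["atm", "ocn"],
                       n_steps=2)
```

A declarative specification that lists only the components is enough to recover this entire sequence, which is precisely what makes the pattern a candidate for implicit handling in the DSL.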

How should the idea of implicitness be handled in the framework metamodel? One rule of thumb is that additional constraints, default behaviors, and implicit behaviors that are not general to all applications using the framework should be contained in a layer separate from (although dependent on) the framework metamodel. Modularizing such “usability layers” makes it easier to introduce new layers, for example, layers targeted at a single sub-domain, without cluttering the framework metamodel itself. Where possible, constraints on the framework metamodel should be introduced to prevent incorrect usage of the framework by a usability layer or by the generated implementation. This has the advantage of converting what would otherwise be run-time checks into static checks on the specification.
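The conversion of a run-time check into a static check on the specification can be illustrated with a small sketch. The constraint chosen (run must not precede initialize) and all names are hypothetical; the point is only that the check runs against the declarative specification, before any model code executes.

```python
# Sketch: validate a declarative call-sequence specification statically,
# catching an invalid ordering at specification time rather than at run time.

def check_sequence(spec):
    """spec: list of (component, method) pairs in driver order.
    Returns a list of constraint violations (empty if the spec is valid)."""
    initialized = set()
    errors = []
    for component, method in spec:
        if method == "run" and component not in initialized:
            errors.append(f"{component}: run called before initialize")
        if method == "initialize":
            initialized.add(component)
    return errors

ok_errors = check_sequence([("ocn", "initialize"), ("ocn", "run")])
bad_errors = check_sequence([("atm", "run"), ("atm", "initialize")])
```

Because the specification is data rather than code, the same constraint that a framework could only enforce with a run-time assertion becomes a check the DSL compiler can perform before generating anything.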

The amount of explicitness required from the user determines, by definition, the size of the specification and therefore the size of the DSL itself. The effect of a usability layer is to shrink and simplify the corresponding DSL; that is, to the degree that a usability layer encodes implicit behavior, it reduces the amount of specification required from the user. A simplified DSL, therefore, can be obtained by including only the concepts of a more narrowly defined usability layer rather than the entire framework metamodel. An appropriate first pass at a coupled model DSL should handle the typical use cases where the implicit behaviors are sufficient, so I think it is a good idea to target such a usability layer for the first version of the DSL.


[1] C. Szyperski, D. Gruntz, and S. Murer. Component Software: Beyond Object-Oriented Programming, 2nd ed. New York: Addison-Wesley, 2002.

[2] D. Roberts and R. Johnson. Evolve Frameworks into Domain-Specific Languages.

[3] X. Amatriain and P. Arumi. Frameworks Generate Domain-Specific Languages: A Case Study in the Multimedia Domain. IEEE Transactions on Software Engineering, 2010.

[4] A. van Deursen. Domain-specific languages versus object-oriented frameworks: A financial engineering case study. In Smalltalk and Java in Industry and Academia, STJA ’97, pp. 35-39, 1997.

[5] S. Zachariadis, C. Mascolo, and W. Emmerich. The SATIN Component System: A Metamodel for Engineering Adaptable Mobile Systems. IEEE Transactions on Software Engineering, 2006.



About rsdunlapiv

Computer science PhD student at Georgia Tech
