“Standardization” and e-science

Much of the work I have done on the Earth System Curator project is geared toward the standardization of a data model for describing climate modeling software and the output from climate simulations. (Okay, technically we are not creating a “standard,” because we were never chartered to do that, nor do we wish to be prescriptive for the entire climate community. Nonetheless, our task has been very much like a standardization effort.) For a moment, I want to step back from Curator and consider “standardization” itself.

Standardization is a task that leads us toward interoperability of systems. Although standardization is common in both industrial and scientific endeavors, it is interesting to consider what differences might arise between the standardization process for e-science and the one found in industry. The question I would like to answer is this: “What does standardization mean for e-science?” I contend that there are significant differences that affect how we should think about standardization in each arena.

This post is based on observations I have made while working on the Curator project. At the outset, our task was basically to create a common metadata formalism for describing climate models and output datasets. (I know this description of the project is far too short to be helpful, so please visit the website to read up on what we’re doing.) To be perfectly honest, the task of coming up with standardized metadata has proven to be very difficult. Lately I have been wondering whether standardization takes on a different meaning for e-science than for other kinds of communities (e.g., business-driven standardization).

Here are some observations that affect the way we look at standardization for e-science.

1. Users of scientific data are diverse and often anonymous.

This means that it is very difficult to say up front, with certainty, who exactly will be using scientific data once it is published (e.g., simulation output or observations from sensors). Certainly, there is an immediate set of users in mind before we begin collecting data for a scientific endeavor, but before long we realize that folks working in other domains might also benefit from the collected data.

So, in the name of interoperability, we set out to standardize our data so that when others acquire it, they can actually interpret it. However, this can be very challenging since we do not know exactly who will ultimately be using the data. Additionally, most scientific communities have developed their own “lingo,” and the word for describing a particular phenomenon depends on the “lingo” you are using. These “lingos” have deep roots, and we cannot ask entire communities to change vocabularies (even though many will admit the deficiencies in their own vernacular). For a real-life example of “lingo tension,” check out this thread in the CF Metadata mailing list archives.
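To make the “lingo” problem concrete, here is a minimal sketch (in Python) of the kind of translation table a group ends up maintaining. The local names on the left are invented for illustration; the names on the right are real entries in the CF standard name table.

```python
# A hypothetical mapping from one research group's in-house shorthand
# to CF standard names. The local names are made up; the CF names
# are real standard names.
LOCAL_TO_CF = {
    "surf_temp": "air_temperature",
    "precip":    "precipitation_flux",
    "sea_press": "air_pressure_at_sea_level",
}

def to_cf_name(local_name: str) -> str:
    """Translate a local variable name to its CF standard name.

    Raises KeyError when no mapping has been agreed on, which is
    exactly where the "lingo tension" surfaces in practice.
    """
    return LOCAL_TO_CF[local_name]

print(to_cf_name("precip"))  # -> precipitation_flux
```

Of course, the hard part is not writing the table; it is getting two communities to agree on what belongs in it.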

Now, changing gears to an e-business perspective, you could argue that before a standardization effort even gets off the ground, there is a pretty clear idea of which players are involved and how they plan to use the resource being standardized. This makes (or should make) the whole process a bit better defined, since we know the audience and the usage patterns up front.

2. Scientific data is often repurposed and applied in ways not intended by the data’s originator.

The raw data collected or generated by a scientific community may be repurposed, used by scientists in other communities, and otherwise applied in new ways not intended by the data’s originator. In fact, science thrives in an environment where previous findings can be reapplied to new situations.

The impact on standardization is that it is not possible to know up front the context in which scientific data will be used. This points to a need to keep standards as general as possible while still being precise and informative. One way to resolve the tension between these two is to allow for customization through extension. In other words, the standard itself could serve as a framework allowing community members to provide domain-specific customizations and/or mappings to terms in other domains. The recent explosion of “tagging” might be one way to solicit terms from diverse community members. What is unclear is how the highly unstructured nature of tagging can be reconciled with the highly structured world of data standardization.
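As a sketch of what “customization through extension” might look like, consider a record type with a small required core plus namespaced extension blocks and free-form tags. Every field name here is hypothetical; this is not drawn from any actual metadata standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    # Core fields every record must provide (hypothetical names).
    identifier: str
    title: str
    creator: str
    # Namespaced extension blocks: each community attaches its own
    # domain-specific terms without touching the core.
    extensions: dict[str, dict[str, str]] = field(default_factory=dict)
    # Unstructured tags solicited from diverse community members.
    tags: list[str] = field(default_factory=list)

record = DatasetRecord(
    identifier="example-dataset-001",
    title="Example coupled simulation output",
    creator="Some Modeling Center",
)

# Two communities extend the same record with their own vocabularies.
record.extensions["atmos"] = {"grid": "spectral, T85 truncation"}
record.extensions["ocean"] = {"grid": "displaced-pole"}
record.tags += ["climate", "coupled-model"]

print(record.tags)
```

The design choice here is that the core stays small and stable while the extension blocks absorb the churn; the open question from the paragraph above is how (or whether) the free-form tags can ever be reconciled with the structured core.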

3. Complexity of “configuration” involved in scientific data collection

I have used the general term “configuration” here to refer to all of the many complexities involved in preparing to collect scientific data, whether via simulation or observation. I have more experience on the simulation side of things, and I can say with confidence that an extreme amount of configuration happens before a large-scale computer simulation is run. Everything is parameterized, and all of those parameters have to be set. For example, it is not uncommon for the shell script that kicks off a global climate simulation to be over 1500 lines long.
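To give a flavor of what “everything is parameterized” means, here is a tiny, hypothetical fragment of the sort of run configuration those scripts assemble. None of these parameter names come from an actual model; a real configuration sets hundreds of such values.

```python
# A tiny, hypothetical fragment of a simulation run configuration.
run_config = {
    "experiment_id":   "ctrl_run_01",
    "start_date":      "1870-01-01",
    "stop_date":       "1900-01-01",
    "atm_timestep_s":  1800,        # atmosphere timestep (seconds)
    "ocn_timestep_s":  3600,        # ocean timestep (seconds)
    "coupling_freq_s": 3600,        # atmosphere/ocean coupling interval
    "restart":         False,       # cold start vs. restart from files
    "output_freq":     "monthly",   # how often diagnostics are written
}

# Every one of these choices changes the meaning of the output, so
# every one of them arguably belongs in the dataset's metadata.
for name, value in run_config.items():
    print(f"{name:16s} = {value}")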

Now, say you are a scientist and you are planning on downloading some dataset over the Web and using it to inform your own research. You had better be very sure about what went into creating that dataset. The best way to build trust in a dataset is to know exactly how it was produced. This kind of metadata is often called “provenance.”

The sheer complexity of configuration bleeds over into the standardization process. In other words, you don’t just want a dataset in a standardized format; you also want a good description of the configuration that led up to the generation of that dataset. This kind of description is likely much more complex than a typical purchase order XML document. A scientific dataset should be accompanied by more than just a set of standard field names. It should include a “deep description” of what each field means, how it was generated, how it was post-processed, and so on.
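One way to picture such a “deep description” is as a per-field provenance record attached to the dataset. The structure below is a hypothetical sketch, not any existing provenance standard; only the CF name air_temperature is real, and it ties back to the run configuration sketched earlier.

```python
# A hypothetical sketch of per-field provenance: each variable carries
# not just a standard name, but a record of how it was generated and
# post-processed. Everything here except "air_temperature" (a real CF
# standard name) is invented for illustration.
provenance = {
    "air_temperature": {
        "units": "K",
        "generated_by": "atmosphere component (prognostic field)",
        "post_processing": [
            "regridded from native model grid to 1x1 degree lat/lon",
            "monthly mean of daily instantaneous values",
        ],
        "source_run": "ctrl_run_01",  # link back to the run configuration
    },
}

for step in provenance["air_temperature"]["post_processing"]:
    print(step)
```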

Perhaps all of this is pointing to the fact that in a scientific setting, the process is just as important (if not more important!) than the resulting data. Therefore, standardization efforts must be involved with the process part of doing science. The focus on recording process information seems less evident in other settings (e.g., it doesn’t make much sense to talk about how a purchase order was generated). Compounding the problem is the fact that the configuration process differs greatly among scientists even in the same domain. If we cannot standardize the configuration processes themselves, how can we at least describe them in a standardized way?
