Assessing Software Qualities from Architectural Descriptions
The architecture of a software system determines qualities of the system as a whole, such as the system’s performance in terms of response time or throughput, how well the system scales, and to what degree the system is modular and extensible. Ideally, a system’s architectural specification could be used to assess its quality attributes. As noted by [Bosch], however, it is often difficult to assess qualities quantitatively with a high degree of accuracy from architectural descriptions. This is because many properties of the system that impact the quality attribute under consideration will not be known until a detailed design is available or, in the worst case, the implementation itself. In many cases, we are forced to live with the relatively wide margin of error that accompanies architectural assessments.
To be sure, there are kinds of analyses that are fully realizable even at the architectural level. However, these analyses are often theoretical in nature, and many of them are predicated on having an architectural specification written in an Architectural Description Language (ADL) based on a formal conceptual model. The formalism provides the semantic foundation for the set of supported analyses. For example, the Wright ADL, which is based on Communicating Sequential Processes (CSP), relies on the notion of process refinement to automatically check compatibility between a component’s port and a connector’s role. Another analysis allows you to confirm that the composition of roles in a connector is deadlock-free. While these and similar analyses are theoretically sound, they can feel somewhat removed from the actual software systems they describe. In practice, a developer is likely to check port/role compatibility by comparing APIs, ensuring that common datatypes or suitable exchange mechanisms exist, and, in the case of a particularly careful developer, checking pre- and post-conditions. Often (but not always), the behavior of an external component that you wish to interact with is not so complex that you require a formalized description of it to determine whether it is compatible with the rest of your system.
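The pragmatic compatibility check described above can be sketched in code. This is a minimal illustration, not any ADL's actual mechanism: the `Interface` type and `compatible` function are hypothetical, and "compatibility" is reduced here to signature matching between the operations a port requires and those a role provides.

```python
from dataclasses import dataclass, field

@dataclass
class Interface:
    """Operations at a port or role: name -> (parameter types, return type).

    A deliberately simplified stand-in for a real API description.
    """
    operations: dict[str, tuple[tuple[str, ...], str]] = field(default_factory=dict)

def compatible(port: Interface, role: Interface) -> bool:
    """A port is compatible with a role if every operation the port
    requires is provided by the role with a matching signature."""
    return all(role.operations.get(name) == sig
               for name, sig in port.operations.items())

# Example: a client port requiring a 'get' call, checked against an
# RPC-style connector role that provides 'get' and 'put'.
client_port = Interface({"get": (("str",), "bytes")})
rpc_role = Interface({"get": (("str",), "bytes"),
                      "put": (("str", "bytes"), "None")})
print(compatible(client_port, rpc_role))  # True: every required op is provided
```

Note that, unlike Wright's refinement check, this sketch says nothing about behavior: two interfaces can match signatures perfectly and still deadlock, which is precisely the gap the formal analyses aim to close.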
ADLs could be placed on a spectrum like the one pictured below. There are some quite generic ADLs (e.g., ACME) that, although broadly applicable, have limited analysis capabilities. On the other hand, some ADLs are highly domain-specific (e.g., MetaH). These languages have limited applicability, but provide richer analysis capabilities for systems that fall within the scope of the language. Most of the languages fall somewhere between the middle and the left-hand side of the spectrum. For example, the Wright ADL is targeted at systems in which it is useful to describe the components and connectors in terms of their abstract behaviors. For systems with straightforward behavioral models, it might not be worth the effort to describe them using Wright.
An “ideal” ADL would have both broad applicability (i.e., it could describe a wide range of different kinds of systems) and would offer a wide range of targeted architectural analyses. One way to accomplish this would be to design a modular, extensible ADL such that analysis tools and different kinds of semantic frameworks could be “plugged in” to the language. ACME offers this to a limited degree: you can annotate architectural elements with properties that, although ignored by the ACME tooling itself, can still be interpreted by external tools. The problem is that these external tools tend to be proprietary “one-offs” rather than reusable analysis modules made available to a large audience. One could imagine a market of architectural analysis modules that are well documented, downloadable, and easily integrated with existing ADL tooling.
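The plug-in idea above can be sketched as a small registry. Everything here is hypothetical: the `Architecture` model (elements annotated with free-form properties, in the spirit of ACME's property annotations), the `analysis` decorator, and the sample latency check are illustrative names, not part of any existing ADL tooling.

```python
from typing import Callable

# Hypothetical model: an architecture is a mapping from element names to
# free-form property annotations, roughly in the spirit of ACME properties.
Architecture = dict[str, dict[str, object]]

# Registry of pluggable analyses: name -> function that takes an
# architecture and returns a list of findings (element names, here).
ANALYSES: dict[str, Callable[[Architecture], list[str]]] = {}

def analysis(name: str):
    """Decorator that registers an analysis module with the tooling."""
    def register(fn):
        ANALYSES[name] = fn
        return fn
    return register

@analysis("latency-budget")
def check_latency(arch: Architecture) -> list[str]:
    """Flags elements whose annotated latency exceeds their budget."""
    return [elem for elem, props in arch.items()
            if props.get("latency_ms", 0) > props.get("latency_budget_ms", float("inf"))]

# A toy architecture with property annotations the core tooling ignores
# but a plugged-in analysis can interpret.
arch = {"cache": {"latency_ms": 2, "latency_budget_ms": 5},
        "db":    {"latency_ms": 40, "latency_budget_ms": 20}}
for name, run in ANALYSES.items():
    print(name, run(arch))  # latency-budget ['db']
```

The point of the sketch is the decoupling: the architecture model knows nothing about the analyses, so new analysis modules could in principle be documented, distributed, and registered independently, which is exactly the kind of market the paragraph imagines.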
[Bosch] Jan Bosch. Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach. Addison-Wesley Professional, 2000.