Problem Solving Environments


Many tools exist for exploiting the power of high-performance parallel and distributed computing, and many fields in science and engineering need that power. Yet many of the scientists and engineers writing code for this research do not use the available tools. This phenomenon appears to result from a reluctance among scientists and engineers to undertake the additional task of learning to write parallel programs: the process of learning a new way of structuring a program is too much of a distraction from their core research. Computer scientists have attempted to alleviate the problem in three ways: by creating compilers that can automatically detect parallelism, by creating tools and languages that make writing parallel programs simpler, and by creating the application programs themselves in the form of general-purpose solvers or custom codes. Automatic detection of parallelism in dusty-deck programs has proven to be something of a holy grail for parallel computing. New languages, while powerful enough to express the available parallelism, have not seen widespread acceptance among the Fortran-using scientific community, largely because they still require the programmer to understand and make explicit the parallelism available in the problem. Creating custom applications exposes the same problem in reverse: computer scientists rarely wish to spend years becoming experts in the field of a scientist or engineer who declines to become an expert in computer science.

Recently, increasing attention has been paid to the idea of creating problem solving environments for fields in computational science and engineering. Problem solving environments attempt to bridge this widening gap between computer scientists and application-domain scientists. However, many of the current environments fall short in either scope or power: they are either too restrictive to solve more than a few specific problems (on a few specific architectures), or too incomplete a bridge to relieve the application-domain scientist of understanding the complexities of parallel programming.

What is needed, then, is a system for creating applications that lets the two experts (the computer scientist and the application-domain scientist) each apply their expertise to the problem without needing a complete understanding of what the other is doing. Working in complete ignorance of the other side of the problem, however, can only lead to disastrous results. To make the necessary information about each side visible to the other, abstractions would need to be created that embody this critical information. These abstractions would serve as an interface level between the application-domain components of the software and the structural, distributed-computing components. They would also facilitate the implementation of an engine that combines the code underlying them into a coherent distributed program.
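The division of labor described above can be sketched in code. The following is a minimal, hypothetical illustration (all class and method names are invented for this sketch, and a sequential loop stands in for a real distributed back end): the domain scientist implements only an abstract interface describing how to split, compute, and combine their problem, while the engine owns all structural decisions.

```python
# Hypothetical sketch of the abstraction layer: the DomainKernel interface
# carries the critical information the engine needs, and nothing more.
from abc import ABC, abstractmethod
from typing import List

class DomainKernel(ABC):
    """Interface implemented by the application-domain scientist.

    Nothing here mentions processors, messages, or scheduling; the
    abstraction exposes only how the problem splits and recombines.
    """
    @abstractmethod
    def partition(self, data: List[float], pieces: int) -> List[List[float]]: ...

    @abstractmethod
    def compute_local(self, piece: List[float]) -> float: ...

    @abstractmethod
    def combine(self, partials: List[float]) -> float: ...

class Engine:
    """Structural, distributed-computing side: decides how many workers
    to use and drives the pieces. A simple loop stands in here for an
    actual parallel runtime."""
    def __init__(self, workers: int = 4):
        self.workers = workers

    def run(self, kernel: DomainKernel, data: List[float]) -> float:
        pieces = kernel.partition(data, self.workers)
        partials = [kernel.compute_local(p) for p in pieces]
        return kernel.combine(partials)

class SumOfSquares(DomainKernel):
    """Example domain code, written with no parallel logic at all."""
    def partition(self, data, pieces):
        n = max(1, len(data) // pieces)
        return [data[i:i + n] for i in range(0, len(data), n)]

    def compute_local(self, piece):
        return sum(x * x for x in piece)

    def combine(self, partials):
        return sum(partials)

result = Engine(workers=3).run(SumOfSquares(), [1.0, 2.0, 3.0, 4.0])
```

The point of the sketch is the seam: `Engine` could be reimplemented over any parallel substrate without `SumOfSquares` changing a line, and vice versa.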

Beyond making it possible to create new applications, this decoupling of the computer-science and application-domain issues offers several other advantages. Maintaining and extending these applications is much simpler, since the pieces of the code are more compartmentalized: the computer scientist can return to the code and change, for example, the processor-allocation algorithm without having to interact with the domain code, and the application-domain scientist can modify their piece of the problem without having to interact with the parallel code. The application can thus grow as new techniques, technologies, or architectures become available, or be moved easily between existing architectures and programming paradigms. Software reuse is also facilitated, since a piece of code represented by one of these abstractions effectively becomes a software module, with the abstraction providing the module's interface.
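The compartmentalization claim can be made concrete with a small sketch (all names here are illustrative, and sequential execution again stands in for a real distributed run): the processor-allocation policy lives behind its own function interface, so it can be swapped without the domain code changing.

```python
# Hypothetical sketch: allocation policies as interchangeable modules.
from typing import Callable, List

def block_allocation(n_tasks: int, n_procs: int) -> List[int]:
    """Assign contiguous blocks of tasks to each processor."""
    per = -(-n_tasks // n_procs)  # ceiling division
    return [min(i // per, n_procs - 1) for i in range(n_tasks)]

def cyclic_allocation(n_tasks: int, n_procs: int) -> List[int]:
    """Assign tasks to processors round-robin."""
    return [i % n_procs for i in range(n_tasks)]

def run(domain_task: Callable[[int], float], n_tasks: int, n_procs: int,
        allocate: Callable[[int, int], List[int]]) -> float:
    """Engine: applies whichever allocation policy it is handed.
    (A plain loop stands in for real distributed execution.)"""
    assignment = allocate(n_tasks, n_procs)  # which processor runs which task
    return sum(domain_task(i) for i in range(n_tasks) if assignment[i] >= 0)

# Domain code, oblivious to how its tasks are allocated:
def square(i: int) -> float:
    return float(i * i)

total_block = run(square, 8, 4, block_allocation)
total_cyclic = run(square, 8, 4, cyclic_allocation)  # same answer, new policy
```

Swapping `block_allocation` for `cyclic_allocation` changes only the structural module; `square` and the result it produces are untouched, which is exactly the maintenance property claimed above.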