Chapter 6

The Meta Science Agenda

"When we say that computers are too powerful for the use to which they are being put, we don't for a minute assume static demand. Of course new applications will continue to create demand for more power. What's changed is that the ability to satisfy this demand with hardware has become so widespread that marginal improvements in power yield little or no value to the producer. Given the accelerated rate of platform price/performance improvement, the lag time between raw power availability and applications demand continues to grow. Thus the value computer buyers place on power as a purchase decision factor will continue to fall relative to the value they place on utility. In such an environment, investments to create and control relatively abundant hardware technologies return far less than investments in the relatively scarce software and systems integration technologies that create utility from widely available power."[1]

*Impedance to Engineering Adaptation*

*Engineering Demand and Mathematical Supply*

*Holistic Posing vs. Reductionistic Solving*

The Anatomy of Scientific Computing

*Calculus Arithmetic Infrastructure*

*AI in Auto Synthesis and Auto-Modeling Research*

*Auto Synthesis and Optimization Broadcasting*

*Concurrent Engineering Optimization*

*The 21st Century Paradigm Shift*

The absorption and application of new discoveries and new technologies depends most heavily on three capabilities: higher mathematics, adaptive engineering, and rapid prototyping.

As I discussed in Chapter 4, the engineering modeling process is much more volatile and evolutionary than the design of numerical solution algorithms. This mandates shifting application programming to primary-creativity levels using modeling languages with built-in algorithms. The inherent uncertainty of primary creativity causes iteration in the modeling process, and the labor-intensity of secondary creativity cannot be tolerated because it magnifies the cost-time envelope.

Today's programming technology is a burden to the engineer because it doesn't meet his primary need for adaptability in the exploration of new ideas. He must create mathematical designs in order to understand how science can be applied to fit real needs. He seldom has the luxury of complete information and must build without it, hoping to modify and adapt his creation as he learns from a succession of tests. He needs a programming medium that addresses applications at the inverse problem level of the scientific method, enabling him to rapidly prototype a problem model, obtaining mathematical results to better understand the problem and to evolve his model through a succession of prototypes until the results correctly reflect his need.

This approach is not feasible with today's *algorithmic*
programming. The engineer cannot
articulate his needs to the level of mathematical completeness demanded by the
algorithmic programming languages. The effort of building scientific-method
algorithms requires relatively complete understanding of the problem
beforehand; and once a commitment to an approach is made, it is very difficult
to change because this disrupts the problem formulation that the algorithm is
based upon, introducing risk that the algorithm will not work. This inherent
conflict between engineering and programming makes engineering software
development a torturous process.

Engineering progress is now paced by software development, wherein cost and delay often eradicate the leverage that is the purpose of the engineering. Software development becomes an end in itself, and its difficulty inhibits program evolution. Often the software is obsolete before it is finished, because its design reflects the earliest, most uncertain engineering knowledge that has not adequately evolved during the delayed development process. Freezing its engineering content into inflexible code defeats the primary need of engineering adaptability. This mismatch of algorithmic programming to the needs of engineering has caused heavy attrition in R&D because its impact is heaviest in ad hoc or custom programming that is the mainstay of R&D computing.

The economic feasibility of computerizing new applications is governed by demand for solutions to such problems and the supply of solution implementations. A demand population curve expresses the demand for computerization in terms of cost-time tolerance, as being determined by how fast an application can be programmed.

Figure 6-1 Demand Determined by Programming Feasibility

Figure 6-1 illustrates the shape of this curve, showing that the more rapid the development, the more demand, because not only is the cost lower (as in a conventional commodity demand curve), but the utility of computerization is considerably greater owing to the feasibility of satisfying "short-fuse" time-critical needs such as proposal or design deadlines which may have dramatic rewards for success and severe penalties for failure.

It may be incidental to the need (demand) for computerization whether the application is relatively simple mathematically, involving only algebra-modeling (like spreadsheets), or involves higher mathematics. But this is critical to the supply side of the issue, since the capabilities of programming media have a dramatic impact on the cost-time of computerization. Short fuse scientific applications requiring calculus methods go begging, because they cannot be solved at any price with existing programming tools. Demand therefore falls off because fast response requirements cannot be met for new and ad hoc applications of higher mathematics.

Figure 6-2 Supply Determined by Programming Media Response

The corresponding "supply curves" relate the response of programming media designed for algebra-level applications and calculus-level applications respectively (Figure 6-2). The major response break between algebra-level media, such as FORTRAN, and calculus-based media, such as PROSE and Fortran Calculus, occurs in programming *inverse* problems (control domain) where the inputs to a model are the unknowns to be solved for, and model outputs are the solution criteria. This is the form in which problems appear to scientists and engineers applying the scientific method.
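The inverse form can be made concrete with a minimal Python sketch. The projectile model below is my own invented illustration (not drawn from PROSE or Fortran Calculus): the model *input* (launch angle) is the unknown, and the model *output* (range) is the solution criterion the engineer actually cares about.

```python
import math

G, V = 9.81, 30.0  # gravity (m/s^2) and launch speed (m/s); illustrative values

def range_of(theta):
    """Direct problem: given the input angle, predict the model output (range)."""
    return V * V * math.sin(2.0 * theta) / G

def solve_inverse(target, lo=0.0, hi=math.pi / 4, tol=1e-10):
    """Inverse problem: find the input angle whose predicted range equals the
    required output -- the form in which the problem occurs to the engineer.
    Bisection works because range_of is monotone increasing on [0, pi/4]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if range_of(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

theta = solve_inverse(60.0)  # what launch angle gives a 60 m range?
```

The point is the inversion of roles: the same model code serves both directions, and the solver, not the engineer, does the searching.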

By stacking the "demand curve" over the "supply curve" (Fig. 6-3), we can see the reason why the computer technocracy has not been able to activate much demand in R&D. The limited response interval and the level of math sophistication make this domain the worst case for the computerization of new concepts. Conventional programming languages like FORTRAN, C, Java, Perl, and Python do not come close to addressing this primary need. But even if they did, the users of these languages do not apprehend problems and address them in terms of the scientific method. They have a different world view. The languages that meet this need must be scientific modeling languages, offering the end-user scientist as much facility as spreadsheet tools for financial planners.

In effect this means that the programming required is actually less difficult than formulating the model after it has been conceptualized in the mind, because writing down the formulas and then applying the verbs to solve them is essentially all there is to it. What makes it less difficult is that interaction with the language is the process which evolves and completes the conceptualization. Thus the modeling process mirrors the iterative method of solution, reaching toward a convergence. It is a process of successive approximation in which the user learns about the problem by experiencing the mathematical behavior of the blend of formulas.

Inverse problems may be solved generally via optimization techniques which determine the inputs to optimize the output. Optimization technology heavily depends upon methods of differential calculus, which have been a barrier, until the advent of automatic differentiation. This exact arithmetic of calculus automatically articulates engineering models for optimization, providing a better alternative than any that could be supplied by the secondary creativity of mathematicians and programmers, even when aided by symbolic manipulation tools. This argument is illustrated, somewhat dramatically, in Volume 2.
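The kernel of automatic differentiation can be shown in a few lines of plain Python, assuming nothing about any particular product: dual numbers carry a derivative alongside every value, so the ordinary arithmetic of a programmed model yields exact derivative values as a by-product. Production systems are far more elaborate, but the principle is this.

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    each value carries its derivative through ordinary arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule, applied numerically at run time
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of a programmed model at x: seed dx/dx = 1 and run."""
    return f(Dual(x, 1.0)).dot

# The model is ordinary arithmetic code; its derivative comes for free.
f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
```

No symbolic formula for f' is ever derived or stored; the derivative emerges from the arithmetic itself, which is why no secondary creativity is needed.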

The increased cost-time utility of synthetic calculus languages fulfills the needs of R&D and critical "short fuse" activities such as proposals, product design and test evaluation where competition motivates high demand. This user paradigm was first introduced to the marketplace with the PROSE language in 1973. The experience of many of the PROSE users is evidence of the gain in productivity this paradigm offers:

- An early PROSE user (Krinsky) was a physicist at a west-coast aerospace firm who wrote a program and solved an antenna design beam synthesis problem in 3 weeks. This was an inverse problem requiring the solution of 20 design parameters to shape the INTELSAT antenna beam to fit the footprint of the 50 states of the U.S. In a transcribed interview he stated that *it would have taken him about six months to do the same problem in FORTRAN*.

- A scientific programmer (Milton) for a west-coast research firm wrote a program and solved a maximum likelihood estimation problem in 2 1/2 hours. In a transcribed interview, he stated that this problem would have required *at least a week to do in FORTRAN*, and he "wasn't looking forward to it."

- An optometrist at another west-coast aerospace firm, who was not a programmer, was able to program and solve a telescope optics optimization problem *in two days* to prepare results for a proposal to NASA for the space telescope project. In a transcribed interview, when asked about his reaction to the new capability, he stated that he was "Quite jubilant ... because I had been able to solve on my own in a very short time frame, that which I didn't think I could do. I had established a new capability in a very short time frame."

- Engineers at a mid-western tire manufacturer had been working on a radial tire design problem for six months with FORTRAN without success. When introduced to a PROSE expert, they were able to program and solve the problem *in one day* with his help.

- Engineers at a northwestern electronics firm had been trying, unsuccessfully, to optimize electron trajectories in the design of CRT terminals, when first introduced to PROSE. *Within 3 weeks* they were obtaining successful solutions from PROSE programs, and subsequently acquired the product for use in standard CAD applications.

- An engineer at an aerospace subsidiary of an automotive firm wrote a program and solved a laser system design problem *in three days* with the help of a PROSE expert.

- An engineer at a well-known testing laboratory wrote a program and solved a problem in airport noise abatement optimization *in one day* while preparing a proposal to the Department of Transportation.

- A group of engineers at a west-coast electro-optics firm had been attempting to apply optimization techniques to the design of an optical system for six months under a NASA contract, during which they had already spent $50,000 and achieved only marginal success. In desperation to show results they used PROSE and were able to program and solve the problem *in 3 weeks* with an expenditure of $2400.

These and many other case examples illustrate the primary benefit of the new paradigm—the rapid development of very sophisticated computer applications from scratch. The cost savings resulted from dramatic reduction in manpower cost over the time required to solve the problem (including removal of extensive programming cost and the cost of mathematicians employed in solution method development). Further increases in productivity resulted from the timeliness of the solutions in time-critical design projects (satellite antennas, laser systems, CRT's) and proposal activities (telescope optics, airport noise abatement).

Such ad hoc applications represent the *least
justifiable* computer expenditures because their costs cannot be amortized
over recurring usage. Yet, because of their importance in competitive efforts,
they are highly motivated in engineering, especially in modern technology where
computer power is essential for dealing with technical sophistication. By
satisfying these "worst case" demand requirements, PROSE demonstrated
the demand-pull potential of the new paradigm.

The enabling of optimization technology fulfills the need for adaptive engineering by raising the level of programming to that of the science expression so that the entire task of programming is one of modeling rather than algorithm design. This level of programming was proven on time-sharing computers during the 1970's in PROSE and is now being made available on Linux platforms in a group of new interoperable meta-science languages addressing legacy syntaxes: MetaFor (Fortran), MetaBas (Basic), MetaC (C), MetaCalc (FC[2]), MetaPerl (Perl), and MetaPy (Python). The MetaFor and MetaC languages are being developed for parallel processing as well as sequential processing.

Not only do the new meta-science media eliminate the additional labor of secondary creativity, they dramatically shorten the time to achieve results with primary creativity. They provide leverage enabling engineers to program ad hoc applications where the solution process requires higher mathematics, thereby raising the level of computing utility in primary creativity—vital to increasing R&D productivity.

Because inverse problems constitute the simplest means of describing problems as they occur in nature, direct programming in this form dramatically reduces the size and complexity of programs. Moreover, since optimization methodology is a "universal solvent" for inverse problems, it provides a standard problem-independent interface for different optimization algorithms, making them interchangeable as plug-in tools. Instead of developing or committing to the use of a tailored solution method (the approach of secondary creativity), the user may experiment with several in as many runs, and can use different methods to validate each other.
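The plug-in interchangeability can be sketched in plain Python. The residual-function interface below is my own toy convention, not the PROSE mechanism: two unrelated solution methods accept the same problem statement, so the user can swap them in successive runs and use one to validate the other.

```python
def bisect(residual, lo, hi, tol=1e-12):
    """Plug-in solver 1: bisection. Needs only a sign change of the residual."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def newton(residual, x, tol=1e-12):
    """Plug-in solver 2: Newton's method with a finite-difference derivative."""
    for _ in range(100):
        h = 1e-7
        step = residual(x) * h / (residual(x + h) - residual(x))
        x -= step
        if abs(step) < tol:
            break
    return x

# One problem statement, two interchangeable methods validating each other.
model = lambda x: x * x * x - 2.0   # solve x^3 = 2
```

The problem is stated once; the choice of algorithm is deferred to run time, which is exactly the opposite of committing to a tailored solution method up front.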

The scope of application complexity that can be directly programmed is further extended by the ability to nest models and optimization techniques in structured hierarchies to address applications that are beyond the mathematical scope of FORTRAN. This extends to higher mathematics the hierarchic simplification of programming that FORTRAN II introduced in 1958—the *subroutine*. Subroutines enabled the broad-scale use of FORTRAN for scientific software development.

In economic terms, the most prominent benefit is an increase in the supply of capability to application needs having much shorter turnaround requirements and greater mathematical sophistication than current programming can address.

The mathematical perspective afforded by a computer, as opposed to that required by pedagogy, is the abstraction of mathematics via simple metaphoric commands that describe problems holistically in the mode of a driver rather than reductionistically in the mode of a mechanic. Often in pedagogy, the natural holistic interface is bypassed as one proceeds from posing to solving without notice.

The condition that makes the quantum leap from reductionistic to holistic mathematics feasible is a unique coincidence of calculus, computer arithmetic, and the direct way word problems are most naturally formulated in algebra as simultaneous equations. The result is a methodology of programming that apprehends problems precisely at the stage where word problems are posed into algebra, without further transformation.

At this stage, the simplest formulation is a set of simultaneous equations that together describe the problem *holistically* in terms of several variables. In this scenario, each equation is a statement of one of the facts of the word problem, and these facts may be stated in terms of each other. Thus the user does not have to separate the interdependences through algebraic manipulation. He or she merely relates all of the facts and applies a tool which solves the simultaneous equations and determines all of the unknowns at once.
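This "relate all the facts at once" style can be sketched in plain Python. The two-unknown Newton iteration below is my own minimal illustration, not the actual simultaneous-equation tool of any modeling language; the word problem is stated exactly as its facts, with no algebraic separation of the unknowns.

```python
def solve_simultaneous(facts, guess, tol=1e-10):
    """Solve two simultaneous equations holistically: 'facts' returns the two
    residuals, which should both be zero.  Newton iteration with a
    finite-difference 2x2 Jacobian, kept minimal for illustration."""
    x, y = guess
    for _ in range(100):
        f0, g0 = facts(x, y)
        h = 1e-7
        fx, gx = facts(x + h, y)
        fy, gy = facts(x, y + h)
        # 2x2 Jacobian from finite differences
        a, b = (fx - f0) / h, (fy - f0) / h
        c, d = (gx - g0) / h, (gy - g0) / h
        det = a * d - b * c
        dx = (f0 * d - b * g0) / det
        dy = (a * g0 - f0 * c) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Word problem, stated directly as facts: a rectangle has perimeter 20
# and area 21.  Each fact refers to both unknowns at once.
facts = lambda w, h: (2 * (w + h) - 20.0, w * h - 21.0)
```

The user writes down the two facts exactly as posed and obtains both unknowns together; the interdependence between width and height is never manipulated away by hand.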

This tool is *mathematical optimization*, the gift of
calculus. But the calculus is hidden, and never emerges to the programmer's
gaze. Thus the mathematical skill of the user need not exceed what is learned
in the ninth grade, long before the first course in calculus where the *methodological
theory* of optimization (e.g. deriving derivative formulas) is introduced.

By focusing on methodological theory rather than practical application, mathematics education has become self-limiting. The computer has long ago surpassed the primitive stage of solution methodology that can be followed perceptually by human beings in a reductionist fashion. Current algorithmic methods are multilevel numerical processes that can only be understood in principle, reinforced by their experienced outcome. As with physical processes, the disparity between the macro world of sense perception and the micro world of mechanics is beyond comprehension except by correlating a macroscopic holistic metaphor with experimental behavior.

In order to transcend the limits of methodological comprehension, a holistic paradigm of macro modeling or problem synthesis has evolved. It builds applications as hierarchies of simultaneous equations formed holistically from word problems. Only when dynamic processes are modeled does this synthesis involve calculus-level mathematics, because then the holistic models contain differential as well as algebraic equations.

This mathematical synthesis paradigm is an emerging avenue of application growth through which primary creativity can consume the vast power and potential of modern computers. This avenue ascends to a higher plane of programming utility as far above conventional programming as spreadsheet programming is to symbolic machine (assembly) language—an even greater change than FORTRAN achieved originally. Its higher utility will stimulate a new primary creativity growth market in education and R&D.

In education, a movement dedicated to reforming mathematics teaching has been under way in recent years. This movement has centered in the teaching of calculus. A general overview of the calculus reform movement is provided by a sampling of the National Science Foundation (NSF) grants of a decade ago (fiscal year 1992) totalling over $4.6 million to 15 projects. The NSF was especially interested in supporting projects with the potential to revitalize calculus instruction on a large scale. For this reason, most of its awards were to consortia, generally of two- and four-year colleges, universities and high schools. A brief description of each is given below:

- *Calculator Enhanced Instruction Project in a Consortium of New Jersey and Pennsylvania Educational Institutions (Union County College)* - This was a continuation project which followed the approaches of the Clemson University project and focused on faculty workshops.

- *A Proposal for Implementing Calculus Reform in West Appalachia (University of Kentucky)* - This project promoted wide-scale calculus reform at high schools and colleges in the area by providing examples of full implementation of calculus reform at the consortium institutions.

- *Calculus and Mathematica (University of Illinois, Urbana)* - This calculus laboratory sequence revised materials already in use at colleges and high schools, providing new ones for differential equations and implementation in a network of rural high schools.

- *Preparing High School Students for Calculus and Integrating Calculus into the Classroom (Volunteer State Community College)* - This planning grant laid the groundwork for a regional coalition to improve the quality of mathematics instruction.

- *Mid-Atlantic Regional Calculus Consortium (Howard University)* - This grant supported workshops for faculty and implementation plans for using the Harvard calculus project materials in a variety of historically black institutions.

- *A New Calculus Program at the University of Michigan (University of Michigan, Ann Arbor)* - This three-year project completely revised the first-year calculus program. Special features were intensive training of faculty and teaching assistants, the use of cooperative learning, and integration of graphing calculators into the curriculum.

- *The Rhode Island Calculus Consortium Module Project (University of Rhode Island)* - A series of self-contained modules were to be created by adapting ideas and materials from successful calculus reform projects. After pilot testing by consortium members, the modules were to be revised and presented to a wider audience.

- *Implementing Computer-Integrated Calculus in High School (University of Connecticut)* - The existing University of Connecticut materials were to be adapted by faculty and high school teachers and tested, first on a small scale and, after revision, on a larger scale.

- *Workshop Program for Dissemination of Calculus Reform (Macalester College)* - Sixteen one-week workshops were to be offered at various sites throughout the country to present in-depth work with some specific reform project as well as an overview of the calculus reform movement, its goals, and problems.

- *Preparing for a New Calculus (University of Illinois, Urbana)* - This grant sponsored a working conference dealing with calculus reform, curricular reform in school mathematics, and educational technology initiatives. The conference produced a set of action-oriented recommendations.

- *Implementation and Dissemination of the Harvard Consortium Materials in Arizona, Oklahoma, and Utah (University of Arizona)* - Consortium members first implemented use of the Harvard materials, and through an extensive support program extended this use to other institutions in their geographical regions.

- *Fully Renewed Calculus at Three Large Universities (University of Iowa)* - The University of Iowa renewed calculus materials were used and refined at Brigham Young University and the University of Wisconsin-La Crosse, the other consortium members. The project involved extensive training of faculty and teaching assistants.

- *Metrolina Calculus Consortium: Implementing a Technology-Based Curriculum (University of North Carolina, Charlotte)* - A summer workshop followed by monthly meetings helped faculty at a variety of institutions implement a technology-based calculus curriculum. Long-term research was to be conducted on the effectiveness of technology-based instruction.

- *Implementation of Calculus Reform at a Comprehensive University with Project CALC (University of Mississippi)* - This grant developed support materials for the implementation of Project CALC at a comprehensive university, and also served as a test site for the Mississippi Alliance for Minority Participation.

- *A Video on Using Supercalculators in Curriculum Reform (Clemson University)* - The video produced illustrates changes that take place in mathematics classrooms when supercalculators are used regularly. The completed video is available free of charge to each of the nation's mathematics departments.

"Nicholas Bourbaki" is the pseudonym chosen by a collective of French mathematicians who set out, beginning in the mid-1930s, to rewrite all of mathematics from a rigorous, axiomatic point of view. The name "Bourbaki" has therefore come to signify a general viewpoint on mathematics, one that emphasizes formality and axiomatization. The prevailing tone of the calculus reform movement is strongly anti-Bourbaki. The essence of the movement is the reversal of the traditional approach epitomized by the formal derivation of derivatives, whose conditioning has created psychological blockage of mathematical insight and caused high attrition among otherwise capable students. The movement stresses the physical experience of derivatives first, in graphical or numerical form, before analytical approaches are learned. This potentially turns calculus into a laboratory course, like physics, in which the student can appreciate the application before learning its mechanics.

**The Schizo-Formal Computer** - Unfortunately for the current direction of calculus reform, the integration of the computer into the anti-Bourbaki curriculum has become something of a ruse, owing to the pervasive presence of symbolic algebra software such as *MACSYMA* and *Mathematica*.

**The Computer as Solver** - The notion of a computer as a formal symbol processor is a misperception brought about by the success of the technique of compilation, which launched the major enterprise of computer science and stimulated the pursuit of symbolic algebra and artificial intelligence software. It is unfortunate that we seem to have forgotten why we developed computers in the first place: to compute.

Computation has always played a central role in the closed
cycle known as the scientific method. A
new theory gains acceptance or falls by the wayside in direct proportion to its
success in explaining known phenomena and predicting new ones. The closed cycle has two parts that
correspond to the two sides of knowledge, theory and experiment, that must be
brought into synchronous agreement to confirm understanding. The first part, articulation of theory, is
the logico-mathematical *prediction* of the numerical behavior of a formal
theory. The second part is the adaptation or *control* of the prediction
to bring it into synchronous agreement with the experimental data. At this primary level of discourse we may
say that prediction involves *direct computation*, in that it articulates
a theory to produce its output (conclusions) given known inputs
(premises). Control, on the other hand
involves *inverse computation*, because it seeks to find unknown inputs
(parameters) that cause the output to match known experimental data. At
secondary levels within prediction or within control, direct and inverse
computations play equivalent roles in finding outputs or inputs, respectively.
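The two halves of the cycle can be sketched in Python. The quadratic "theory" and its data below are invented for illustration: direct computation articulates the theory (premises in, conclusions out), and inverse computation adjusts its unknown parameter until prediction agrees with experiment.

```python
def predict(p, times):
    """Direct computation: articulate the theory y = p*t^2 at the given times."""
    return [p * t * t for t in times]

def control(times, observed, p=1.0):
    """Inverse computation: correct the unknown parameter until the predicted
    outputs agree with the experimental data (least-squares correction)."""
    for _ in range(50):
        # derivative of each prediction with respect to p is t*t
        num = sum((obs - pred) * t * t
                  for obs, pred, t in zip(observed, predict(p, times), times))
        den = sum(t ** 4 for t in times)
        step = num / den
        p += step
        if abs(step) < 1e-12:
            break
    return p

# 'Experimental' data generated from p = 4.9 (a hypothetical falling-body law)
times = [1.0, 2.0, 3.0]
data = predict(4.9, times)
```

Because the model is linear in its parameter, the correction converges immediately; the structure of the loop, prediction inside control, is the point.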

In today's state of the art, most applications fall into the category of prediction or *simulation*, which in turn utilizes two principal methods: (a) Monte Carlo and (b) equation solving. The Monte Carlo method is a brute-force approach of exercising "thought experiments" (simulations) on individual parts of a process (e.g. neutrons in an atomic reactor) to create and accumulate statistics to characterize the aggregate behavior of these parts. Monte Carlo calculations are inherently time-consuming, even on very fast electronic computers, because of the large number of simulations necessary to satisfy sufficiency tests of statistical significance.
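A minimal Monte Carlo sketch in Python (estimating the area ratio that yields pi; my example, not from the text) shows both the method and its statistical cost in one place.

```python
import random

def monte_carlo_pi(n, seed=0):
    """Monte Carlo 'thought experiments': simulate n random points in the unit
    square and accumulate the fraction landing inside the quarter circle to
    characterize the aggregate quantity (the area ratio, which is pi/4)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

# Accuracy improves only as 1/sqrt(n): halving the error costs 4x the samples,
# which is why statistical significance makes the method time-consuming.
```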

The second principal method, equation solving, occurs more
often; namely, when (1) the behavior of the quantity of interest is known to
obey a linear or nonlinear algebraic equation, a differential equation, an
integral equation, or an integro-differential equation over some region of
space/time of given shape, and when (2) the desired quantity obeys specified
boundary conditions in space (and initial conditions in time for time-dependent
problems). The simplest situations occur when the desired quantity (the
dependent variable) is a function of only one independent variable, perhaps
time or one space dimension. Such differential equations are called *ordinary
differential equations* (ODEs).

When the dependent variable is a function of two or more independent variables, the appropriate differential equation is called a partial differential equation (PDE) because it involves partial derivatives that indicate the change in the dependent variable as one or another of the independent variables change while holding all other independent variables fixed. Except under special circumstances, the solution of such equations is computationally formidable.

**Analytical vs. Numerical Solution Methods** - Previously I have cited the impotence of symbolic algebra (analytical) approaches to solving mathematical problems and berated computer science for being preoccupied with them. The justification of this is that such methods can only solve certain limited examples (notably linear) of the kinds of equations discussed above, and only when these equations are isolated into idealized domains characterizing singular parts of the problem.

In scientific modeling, linear phenomena are seldom encountered, and isolated instances (singular parts) rarely occur. Mathematical models in practical science involve systems of equations which are mostly nonlinear. Although analytical transformations are occasionally useful in preparing equations for the development of numerical solution algorithms, they do not play major roles in the solution process. The high degree of development of symbolic manipulation methodologies is testimony to their importance in the pedagogy of mathematics. But this has little to do with scientific computing.

**The Capabilities of Computers** - The computer can manipulate characters, evaluate Boolean functions, initiate extended processes, and perform arithmetic on a finite subset of the real numbers. This is the extent of its contribution to the solution of mathematical problems. The last function is the key to its usefulness to scientists and engineers, for their jobs require that they relate physical phenomena to numbers. An experimental physicist makes numerical measurements, and the theoretical physicist predicts what those numerical results should be. An engineer determines the numerical parameters of a design before the prototype is built. The FORTRAN breakthrough allowed these people, who saw the computer as a tool for computation, to program the machine directly—to produce prediction formulas to simulate mathematical behavior by performing the arithmetic of the equations, and to specify numerical procedures (algorithms) that could guide the equation arithmetic toward the numerical values that solved them. FORTRAN stands for FORmula TRANslation. The compiler translates algebraic formulas into commands for the computer to perform arithmetic. FORTRAN users did not need to know, for example, how the computer divides two numbers in order to cause division to take place.

**The Limited Use of Calculus** - The concept of derivatives and integrals as limits of infinitesimal change was a breakthrough in the ability to describe changing physical systems. These two concepts were precisely defined, and the analytical rules that relate them algebraically to known functions have long been known. But the use of these rules in the prediction of mathematical behavior is highly limited. As a result, the technology of scientific computing has developed largely without direct application of calculus, even though differential equations, integral equations, and integro-differential equations are pervasive in its applications.

The only place where calculus has played a role has been
in the synthesis and transformation of equations during modeling and
programming. Thus whenever the formula for the derivative or an integral of a
known analytic function is needed, the rules of differential calculus or
integral calculus are used to derive it *before it is programmed*. If a large number of such applications are
necessary, then a program like *MACSYMA* or *Mathematica* is useful
to perform such formal derivations automatically.

As indicated above, most scientific applications today fall into the class of prediction or simulation—*the articulation of theory*. In spite of the fact that from 1945 to 1990 the problem-solving capability of the fastest computers improved approximately 10^11 times, essentially all of this capacity has been dedicated to simulation. Closing the loop of the scientific method requires simulation to be iterated in inverse control calculations to match experimental data. This is the technology of *optimization*—the primary agenda of calculus.
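The loop-closing idea can be sketched in Python with an invented black-box simulation (twenty steps of a damped-growth recurrence, purely illustrative): the unknown parameter is corrected using the numerically differentiated output of the simulation itself, not the derivative of any formula.

```python
def simulate(p):
    """A 'black box' simulation: iterate a process whose output has no
    convenient closed form (20 steps of damped growth with rate p)."""
    y = 1.0
    for _ in range(20):
        y = y + p * y * (1.0 - y / 10.0)
    return y

def calibrate(target, p=0.1):
    """Differentially correct the input parameter until the simulation output
    matches the experimental value, using the derivative *of the simulation*
    (obtained numerically) rather than of any formula."""
    for _ in range(100):
        h = 1e-7
        slope = (simulate(p + h) - simulate(p)) / h   # d(output)/d(parameter)
        step = (target - simulate(p)) / slope
        p += step
        if abs(step) < 1e-10:
            break
    return p
```

Nothing inside `simulate` is ever differentiated symbolically; the correction loop treats it purely as an input-output mapping, which is the sense of "derivatives of the simulation" above.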

Optimization is the general solution method for inverse problems because it provides a criterion for solving an embedded simulation problem in terms of its input parameters, according to the maxima/minima theory of differential calculus. It provides a systematic process of solving inverse problems by using the partial derivatives of the simulation, with respect to its parameters, to differentially correct the values of the parameters until an optimum simulation is found. Notice the emphasis on "... of the simulation" instead of "of the formulas". The derivatives required are the numerical values of the differentiated output of the simulation with respect to the input parameters of the simulation. The simplest form of optimization is the process of implicit equation solving to find the independent variables of a set of equations of the form: