The 001 Tool Suite: Evolution of Automation
This page traces the evolution of Hamilton’s automation tooling across three decades: from the first CASE product USE.IT (1983) through the 001 Tool Suite’s formal introduction (1990), its distributed systems extension (1991), and the mature commercial presentation (2012). Together, these publications document how a formal theory became working software that could define, analyze, and generate systems automatically.
The Functional Life Cycle Model and Its Automation: USE.IT (1983)
Replacing the Historical Life Cycle Model
Margaret H. Hamilton and Saydean Zeldin, Higher Order Software, Inc., 1983
Hamilton treats the conventional waterfall-like development model as a prototype to learn from, not a foundation to build on. She characterizes its origins: “created overnight 20 years ago to serve rapidly growing hardware technology, patched ad hoc ever since, with solutions that are often implementation-dependent and impossible to integrate.”
Her survey of contemporary tools — SADT, PSL/PSA, SREM/SDS, Warnier-Orr, HDM, Information Hiding, Structured Analysis/Design, CADES — yields a sharp diagnosis: they all share a fatal assumption. “They make the assumption that they must include as part of their requirements the existence of the historical model as a given.” The root problem is that developers are “relating to and depending on an inferior life cycle model. The solution is not to support the historical model but rather to learn from it and then to replace it.”
The Functional Life Cycle Model
Hamilton’s replacement has six major functions: Manage, Define, Analyze, Resource Allocate, Execute, Document. The model is function-driven, not event-driven. Any process in the life cycle can be viewed as an instance of any of these functions: “One person’s specifications are another’s requirements; one person’s implementation is another’s specification.”
USE.IT: The Automation
USE.IT is “an integrated family of tools for automating a system’s life cycle.” Its components:
- AXES: Defines requirements using data types, functions, and structures. It is not a programming language but a requirements definition language — from one AXES definition, systems can reside in distributed or sequential environments, in Ada or Fortran, on various architectures. “AXES is a language for defining mechanisms for defining systems.”
- Analyzer: Ensures logical completeness, consistency, and integration across independently developed modules.
- RAT (Resource Allocation Tool): An automatic programmer. “The RAT reads in unambiguous requirements from any problem domain, received from the Analyzer, and produces source code from those requirements.” Hamilton notes this works because the Analyzer guarantees unambiguous input — the precondition that makes automatic programming feasible.
- HOM (Higher Order Machine): Executes the “ratted” (generated) requirements.
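The Analyzer-before-RAT precondition can be illustrated with a toy pipeline. Everything below (the spec format, `analyze`, `rat`, the sample functions) is invented for illustration and is not the actual 001 implementation; it only shows the idea that code generation becomes mechanical once a spec is checked for ambiguity.

```python
# Hypothetical sketch of the Analyzer -> RAT -> HOM pipeline. The spec
# format and all names are invented; only the pattern (check first, then
# generate, then execute) mirrors the paper's claim.

spec = {
    "double": {"inputs": ["x"], "body": "x + x"},
    "quad":   {"inputs": ["x"], "body": "double(double(x))"},
}

def analyze(spec):
    """Reject ambiguous specs: every function needs inputs and a body."""
    for name, fn in spec.items():
        if not fn.get("inputs") or not fn.get("body"):
            raise ValueError(f"ambiguous definition: {name}")
    return spec

def rat(spec):
    """Generate source code from an analyzed (unambiguous) spec."""
    lines = []
    for name, fn in analyze(spec).items():
        args = ", ".join(fn["inputs"])
        lines.append(f"def {name}({args}): return {fn['body']}")
    return "\n".join(lines)

code = rat(spec)
namespace = {}
exec(code, namespace)        # the "HOM" step: execute the ratted definitions
print(namespace["quad"](3))  # 12
```

The point of the sketch is the ordering: `rat` never sees a spec that `analyze` has not accepted, which is the precondition Hamilton says makes automatic programming feasible.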
The critical property: USE.IT is defined with AXES, analyzed with its own Analyzer, and resource-allocated with its own RAT. It develops itself using its own principles.
Productivity Evidence
Comparisons from three projects: the CLOCK problem took 4 man-hours with USE.IT versus 3 man-days manually. A radar system took 24 man-hours producing 800–1,000 lines of Fortran versus an estimated 80 man-days by DoD standards. A manufacturing system took 11 man-days producing approximately 10,000 lines of Fortran versus an estimated 2 years conventionally. Conservative estimate: USE.IT cuts costs by at least 75%.
001: A Rapid Development Approach for Rapid Prototyping (1990)
The “Too Late” Diagnosis
Margaret H. Hamilton and William R. Hackler, Hamilton Technologies, Inc., 1990
Hamilton frames every failure of conventional development as a temporal problem: integration happens too late, errors are eliminated too late, flexibility happens too late, distributed environments happen too late, reusability happens too late, automation happens too late. The framing is more useful than “shift left” rhetoric because it identifies why things go wrong, not just when.
The solution — Development Before the Fact — is named here in a major publication for the first time. Each system is defined with properties that support its own development throughout its life cycle, inherently integrating its own real-world definitions, maximizing its own reliability, capitalizing on its own parallelism, and maximizing the potential for its own reuse and automation.
The Modeling Environment
The 001 modeling environment is described in its most technically complete published form. FMaps capture functional, temporal, and priority characteristics. TMaps capture spatial and structural relationships. OMaps instantiate TMaps; EMaps instantiate FMaps with values for a particular performance pass.
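As a rough analogy only (not the actual USL notation), a TMap can be thought of as a typed structure definition and an OMap as one instance of it. The dict encoding and the `conforms` check below are invented for illustration.

```python
# Rough analogy: a TMap as a type-structure definition and an OMap as an
# instantiation of it with concrete values. The encoding is hypothetical.

tmap_robot = {"heading": int, "position": int}   # "TMap": structural definition
omap_robot = {"heading": 90, "position": 6}      # "OMap": one instance of it

def conforms(omap, tmap):
    """An OMap instantiates its TMap when the field names match and
    every value has the type the TMap declares."""
    return (set(omap) == set(tmap)
            and all(isinstance(omap[f], tmap[f]) for f in tmap))

print(conforms(omap_robot, tmap_robot))            # True
print(conforms({"heading": "north"}, tmap_robot))  # False: wrong shape and type
```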
The three primitive control structures are presented with full formal rules:
- Join: Creates sequential dependency chains. Outputs of the right offspring become inputs of the left offspring.
- Include: Enables independent parallel execution. Children do not share data.
- Or: Provides decision-making. A partition function determines which child executes.
The IndependentRobots example demonstrates the structures in combination: two robots synchronized to work in parallel, with recursion (the system calls itself under Continue), Or decision (IsFinished decides between Finish and Continue), Include (Turn and Move as independent parallel functions), and Join (dependencies between processing steps).
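Read as higher-order combinators, the three primitives can be sketched in Python. All function names here are illustrative, not from the paper; the sketch only captures the data-flow rules just described (Join: right feeds left; Include: no shared data; Or: a partition function chooses).

```python
# Illustrative interpretation of USL's three primitive control structures
# as Python combinators. Names are hypothetical.

def join(left, right):
    """Join: sequential dependency -- outputs of the right offspring
    become inputs of the left offspring (right runs first)."""
    return lambda x: left(right(x))

def include(left, right):
    """Include: independent parallelism -- each child gets its own part
    of the input and the children share no data."""
    return lambda pair: (left(pair[0]), right(pair[1]))

def or_(partition, then, otherwise):
    """Or: decision -- a partition function selects which child executes."""
    return lambda x: then(x) if partition(x) else otherwise(x)

# A toy IndependentRobots-style composition: Turn and Move are
# independent (Include); a reporting step depends on both (Join);
# IsFinished partitions between Finish and Continue (Or).
turn = lambda heading: heading + 90
move = lambda pos: pos + 1
step = include(turn, move)                      # parallel, no shared data
report = lambda s: {"heading": s[0], "pos": s[1]}
system = join(report, step)                     # report consumes step's outputs

is_done = lambda s: s["pos"] >= 6
controller = or_(is_done, lambda s: "Finish", lambda s: "Continue")

print(system((0, 5)))              # {'heading': 90, 'pos': 6}
print(controller(system((0, 5))))  # Finish
```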
Defined Structures
Two key reusable patterns are introduced:
CoInclude: A frequently occurring pattern defined with Include and Join. Only the leaf node functions change between uses — a “hidden repeat” that eliminates boilerplate.
Async: A real-time, communicating, concurrent, asynchronous structure. Applied in DependentRobots, where Turn and Move are dependent and coordinating (contrast with IndependentRobots where they are independent). One robot plans, the other executes.
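The CoInclude idea — a fixed control skeleton in which only the leaf functions vary between uses — can be sketched as a higher-order function. The `co_include` name and both example uses are invented for illustration.

```python
# Hypothetical sketch of a CoInclude-style defined structure: the control
# skeleton (two independent children whose results feed a combining step)
# is defined once; only the leaf functions f, g, combine change per use.

def co_include(f, g, combine):
    """Apply f and g to independent inputs (Include-like), then feed
    both results to a combining step (Join-like)."""
    def structure(a, b):
        fa, gb = f(a), g(b)      # independent children, no shared data
        return combine(fa, gb)   # combining step depends on both results
    return structure

# Two different uses of the same skeleton -- the "hidden repeat":
add_lengths = co_include(len, len, lambda m, n: m + n)
pair_up = co_include(lambda x: 2 * x, lambda y: y + 1, lambda p, q: (p, q))

print(add_lengths("turn", "move"))  # 8
print(pair_up(3, 4))                # (6, 5)
```

The skeleton is written and verified once; each use supplies only leaves, which is the boilerplate elimination the paper describes.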
Architecture Independence
A critical demonstration shows three definitions of the same system (Transfer2Blocks): two architecture-dependent (hardcoded for 1 robot or 2 robots) and one architecture-independent (separating functional, resource, and resource allocation architectures). Only the “Where” statement changes to switch configurations. This is the technique for run-time performance analysis: define the system once, then analyze different resource allocations against the same functional architecture.
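The separation can be sketched as one functional definition plus a swappable resource-allocation mapping. All names below (`transfer_two_blocks`, the task strings, the `where` dicts) are invented for illustration; only the principle — change the mapping, never the functional architecture — comes from the paper.

```python
# Illustrative sketch of architecture independence: the functional
# definition is written once; a separate "where" mapping assigns each
# function to a resource. Switching configurations changes only the
# mapping. All names are hypothetical.

def transfer_two_blocks(tasks, where):
    """Run each task on whatever resource the 'where' mapping assigns."""
    log = []
    for task in tasks:
        robot = where[task]  # the resource allocation decision
        log.append(f"{robot} performs {task}")
    return log

tasks = ["pick block 1", "place block 1", "pick block 2", "place block 2"]

# One-robot configuration: only the Where mapping differs from below.
where_one = {t: "robot A" for t in tasks}
# Two-robot configuration: blocks split between robots.
where_two = {"pick block 1": "robot A", "place block 1": "robot A",
             "pick block 2": "robot B", "place block 2": "robot B"}

print(transfer_two_blocks(tasks, where_one)[0])  # robot A performs pick block 1
print(transfer_two_blocks(tasks, where_two)[2])  # robot B performs pick block 2
```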
Productivity Results
Measured results from three SDIO-funded projects:
| Project | Domain | Productivity vs. baseline C | Productivity vs. expert C |
|---|---|---|---|
| DETEC | Discrete event simulation (Los Alamos) | 14:1 to 48:1 | 2:1 to 8:1 |
| OTD | Object tracking and designation (SDI) | ~25,500 equiv. C lines in 10 man-weeks | ~15,000 equiv. expert C |
| Executor | Real-time behavior observer for 001 | ~13,000 C lines in 2 man-weeks | Higher than OTD |
Approximately 75–90% of “testing” for OTD was completed before implementation through static analysis.
Prototyping Distributed Environments with 001 (1991)
Distributed Control Systems
Margaret H. Hamilton and Ron Hackler, Hamilton Technologies, Inc., 1991
This short paper focuses specifically on 001’s distributed systems capabilities. It opens with a pointed critique of contemporary CASE tools: they “automate manual processes of the conventional development process when many of these processes need no longer be necessary.” Automating a bad process is not the same as replacing it with a good one.
Distributed control systems are defined using “stylized models” in 001 AXES covering environment, resources, interrupts, information organization, communication strategies, and functional distribution. Each aspect has a graphical representation grounded in formal language mechanisms. The combined set forms a “quick and friendly building block kit” that users can employ without knowing the formal details — but because the graphical representations are formally backed, the Resource Allocation Tool can automatically generate a fully executable distributed implementation.
The distributed architecture is a hierarchy of real-time distributed controllers where parent controllers are in charge of their children. Each controller coordinates communications, interrupts, and resources with other controllers while performing a portion of the distributed functional system. The Xecutor — a “meta operating system and simulator” that understands 001 semantics — provides real-time, asynchronous, event-driven execution with multiple concurrent control lines.
USL and Its Automation, the 001 Tool Suite (2012)
The Mature Commercial Presentation
Margaret H. Hamilton, Hamilton Technologies, Inc., 2012
By 2012, the 001 Tool Suite had accumulated 26 years of application across domains: battlefield management, communications, homeland security, aerospace, emergency management, manufacturing, banking, medical, energy, traffic, robotics, and enterprise management. The presentation targets system integrators and end users considering USL adoption, containing material not found in the academic papers.
National Test Bed Results
The DoD Strategic Defense Initiative Organization’s “Software Engineering Tools Experiment” compared three contractor/vendor teams on the same problem. The 001 team (with Lockheed Martin as prime contractor) achieved 90% completion in 120 staff days, versus 75% completion in 140 staff days for one competitor and 50% for another. Only the 001 team produced running code.
Comparative Study Findings
A study comparing 001 with a contemporary embedded systems development environment (Rational RequisitePro, Rational ROSE, LDRA, Borland debugger, custom scripts) found:
- 50–75% improvement in requirements management
- 400% improvement in design modeling
- 500% improvement in quality and completeness of auto-generated code
- 100% improvement in auto-generated design documentation
- 1000% improvement in reuse
Before the Fact vs. Traditional
The presentation includes a systematic comparison matrix. Among the contrasts:
| Dimension | USL (Before the Fact) | Traditional |
|---|---|---|
| Interface errors | None in model; all found before implementation | Most found after implementation; some never found |
| Correctness | By built-in language properties | Behavior uncertainties until after delivery |
| Integration | Inherent, seamless life cycle | Ad hoc, not seamless |
| Productivity vs. reliability | More reliable = higher productivity | More reliable = lower productivity |
| Testing | Less testing with each new capability | Trapped in “test to death” philosophy |
| Automation | Does real work (design, programming, docs, testing) | Supports manual process rather than doing real work |
| Code generation | 100% production-ready, automatically | Shell code or incomplete |
| Maintenance | At specification level | At code level |
| Self-generation | Tool defined with itself, generated by itself | Tool not integrated, not self-generated |
Hamilton summarizes: “With USL, the Potential Exists for Reaching the Goal of High Quality, ‘More for Less’ Systems and Software.”
References
1983 Paper
57 references including internal HOS technical reports, government contract deliverables, and the 1974, 1976, 1978, and 1979 papers.
1990 Paper
References include Hamilton’s 1986 IEEE Spectrum article, DETEC and OTD final reports to Los Alamos National Laboratory, and Boehm’s Software Engineering Economics.
1991 Paper
References to the 1990 RSP paper and DETEC project reports.
2012 Presentation
References include the 2008 IEEE Computer paper, DoD National Test Bed Final Report, and HTI technical documents.
Related Documents
- Higher Order Software (1976) — The theoretical foundation. The axioms formalized in 1976 are the mathematical core that all these tools implement.
- The Relationship Between Design and Verification (1979) — The error analysis methodology that established why automation was needed and what properties it must guarantee.
- Preventative Software Systems (1994) — Published between the 1991 and 2012 documents on this page, the 1994 paper presents the most detailed single-paper description of the mature 001 tool suite.
- USL: Lessons Learned from Apollo (2008) — The formal USL paper that the 2012 webinar references as its primary academic citation.
- What the Errors Tell Us (2018) — Cites the 2012 webinar (slides 36–40) for comparative productivity data.
- HOS Conference Papers (1974, 1978) — The axioms that started it all, and the reliability philosophy that motivated the search for automation.