
The 001 Tool Suite: Evolution of Automation

This page traces the evolution of Hamilton’s automation tooling across three decades: from the first CASE product USE.IT (1983) through the 001 Tool Suite’s formal introduction (1990), its distributed systems extension (1991), and the mature commercial presentation (2012). Together, these publications document how a formal theory became working software that could define, analyze, and generate systems automatically.


The Functional Life Cycle Model and Its Automation: USE.IT (1983)


Margaret H. Hamilton and Saydean Zeldin, Higher Order Software, Inc., 1983

Hamilton treats the conventional waterfall-like development model as a prototype to learn from, not a foundation to build on. She catalogs its formation: “created overnight 20 years ago to serve rapidly growing hardware technology, patched ad hoc ever since, with solutions that are often implementation-dependent and impossible to integrate.”

Her survey of contemporary tools — SADT, PSL/PSA, SREM/SDS, Warnier-Orr, HDM, Information Hiding, Structured Analysis/Design, CADES — yields a sharp diagnosis: they all share a fatal assumption. “They make the assumption that they must include as part of their requirements the existence of the historical model as a given.” The root problem is that developers are “relating to and depending on an inferior life cycle model. The solution is not to support the historical model but rather to learn from it and then to replace it.”

Hamilton’s replacement has six major functions: Manage, Define, Analyze, Resource Allocate, Execute, Document. The model is function-driven, not event-driven. Any process in the life cycle can be viewed as an instance of any of these functions: “One person’s specifications are another’s requirements; one person’s implementation is another’s specification.”

USE.IT is “an integrated family of tools for automating a system’s life cycle.” Its components:

  • AXES: Defines requirements using data types, functions, and structures. It is not a programming language but a requirements definition language — from one AXES definition, systems can reside in distributed or sequential environments, in Ada or Fortran, on various architectures. “AXES is a language for defining mechanisms for defining systems.”

  • Analyzer: Ensures logical completeness, consistency, and integration across independently developed modules.

  • RAT (Resource Allocation Tool): An automatic programmer. “The RAT reads in unambiguous requirements from any problem domain, received from the Analyzer, and produces source code from those requirements.” Hamilton notes this works because the Analyzer guarantees unambiguous input — the precondition that makes automatic programming feasible.

  • HOM (Higher Order Machine): Executes the “ratted” (generated) requirements.

The critical property: USE.IT is defined with AXES, analyzed with its own Analyzer, and resource-allocated with its own RAT. It develops itself using its own principles.
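The define–analyze–generate flow can be sketched as a minimal pipeline. All names below are hypothetical illustrations, not the actual USE.IT interfaces; the sketch only shows the key gating idea, that automatic code generation runs solely on input the Analyzer has certified as unambiguous.

```python
# Hypothetical sketch of the USE.IT flow: AXES definition -> Analyzer -> RAT.
# The function and field names are invented for illustration.

def analyze(definition):
    """Analyzer stand-in: reject definitions with incomplete interfaces."""
    errors = []
    for fn, spec in definition.items():
        if "inputs" not in spec or "outputs" not in spec:
            errors.append(f"{fn}: missing interface declaration")
    return errors

def rat(definition):
    """RAT stand-in: the 'automatic programmer' only accepts analyzer-clean
    input -- the precondition that makes automatic programming feasible."""
    assert not analyze(definition), "RAT requires unambiguous requirements"
    lines = []
    for fn, spec in definition.items():
        args = ", ".join(spec["inputs"])
        lines.append(f"SUBROUTINE {fn.upper()}({args})")  # Fortran-flavored stub
    return "\n".join(lines)

clock = {"tick": {"inputs": ["t"], "outputs": ["t1"]}}
print(rat(clock))  # SUBROUTINE TICK(t)
```

The point of the `assert` is structural: generation is downstream of analysis, so an ambiguous definition can never reach the code generator.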

Comparisons from three projects: the CLOCK problem took 4 man-hours with USE.IT versus 3 man-days manually. A radar system took 24 man-hours producing 800–1,000 lines of Fortran versus an estimated 80 man-days by DoD standards. A manufacturing system took 11 man-days producing approximately 10,000 lines of Fortran versus an estimated 2 years conventionally. Conservative estimate: USE.IT cuts costs by at least 75%.


001: A Rapid Development Approach for Rapid Prototyping (1990)


Margaret H. Hamilton and William R. Hackler, Hamilton Technologies, Inc., 1990

Hamilton frames every failure of conventional development as a temporal problem: integration happens too late, errors are eliminated too late, flexibility happens too late, distributed environments happen too late, reusability happens too late, automation happens too late. The framing is more useful than “shift left” rhetoric because it identifies why things go wrong, not just when.

The solution — Development Before the Fact — is named here in a major publication for the first time. Each system is defined with properties that support its own development throughout its life cycle, inherently integrating its own real-world definitions, maximizing its own reliability, capitalizing on its own parallelism, and maximizing the potential for its own reuse and automation.

The 001 modeling environment is described in its most technically complete published form. FMaps capture functional, temporal, and priority characteristics. TMaps capture spatial and structural relationships. OMaps instantiate TMaps; EMaps instantiate FMaps with values for a particular performance pass.

The three primitive control structures are presented with full formal rules:

  • Join: Creates sequential dependency chains. Outputs of the right offspring become inputs of the left offspring.
  • Include: Enables independent parallel execution. Children do not share data.
  • Or: Provides decision-making. A partition function determines which child executes.
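These three structures can be sketched as function combinators. The encoding below (plain Python functions over values and pairs) is an illustrative assumption, not 001's actual semantics; it only shows how Join sequences, Include isolates, and Or selects.

```python
# Minimal combinator sketch of the three primitive control structures.
# Names mirror the paper; the encoding is invented for illustration.

def join(left, right):
    """Join: sequential dependency -- the right offspring runs first,
    and its outputs become the inputs of the left offspring."""
    return lambda x: left(right(x))

def include(left, right):
    """Include: independent parallel offspring -- the children share no
    data; each receives its own half of the input."""
    return lambda pair: (left(pair[0]), right(pair[1]))

def orr(partition, then_fn, else_fn):
    """Or: a partition function decides which single child executes."""
    return lambda x: then_fn(x) if partition(x) else else_fn(x)

# Toy uses: double then increment (Join); two robots advancing
# independently (Include); clamp negatives to zero (Or).
double_then_inc = join(lambda y: y + 1, lambda x: 2 * x)
two_robots = include(lambda a: a + 10, lambda b: b + 20)
clamp = orr(lambda x: x >= 0, lambda x: x, lambda x: 0)

print(double_then_inc(3))  # 7
print(two_robots((1, 2)))  # (11, 22)
print(clamp(-5))           # 0
```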

The IndependentRobots example demonstrates the structures in combination: two robots synchronized to work in parallel, with recursion (the system calls itself under Continue), Or decision (IsFinished decides between Finish and Continue), Include (Turn and Move as independent parallel functions), and Join (dependencies between processing steps).

Two key reusable patterns are introduced:

CoInclude: A frequently occurring pattern defined with Include and Join. Only the leaf node functions change between uses — a “hidden repeat” that eliminates boilerplate.

Async: A real-time, communicating, concurrent, asynchronous structure. Applied in DependentRobots, where Turn and Move are dependent and coordinating (contrast with IndependentRobots where they are independent). One robot plans, the other executes.
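The DependentRobots coordination can be sketched with two threads and a channel: one robot plans, the other executes, and they communicate asynchronously. The queue-based encoding and all names here are assumptions for illustration, not 001's Async structure itself.

```python
# Hedged sketch of the Async idea from DependentRobots: a planner and an
# executor run concurrently and coordinate through a channel.
import queue
import threading

def planner(plans, out_q):
    """Plan steps asynchronously, handing each to the executor."""
    for step in plans:
        out_q.put(step)
    out_q.put(None)  # sentinel: planning finished

def executor(in_q, done):
    """Execute each planned step as it arrives."""
    while (step := in_q.get()) is not None:
        done.append(f"executed {step}")

channel, done = queue.Queue(), []
t1 = threading.Thread(target=planner, args=(["turn", "move"], channel))
t2 = threading.Thread(target=executor, args=(channel, done))
t1.start(); t2.start()
t1.join(); t2.join()
print(done)  # ['executed turn', 'executed move']
```

The contrast with IndependentRobots is the channel: there, Turn and Move share no data; here, every executed step depends on a planned step.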

A critical demonstration shows three definitions of the same system (Transfer2Blocks): two architecture-dependent (hardcoded for 1 robot or 2 robots) and one architecture-independent (separating functional, resource, and resource allocation architectures). Only the “Where” statement changes to switch configurations. This is the technique for run-time performance analysis: define the system once, then analyze different resource allocations against the same functional architecture.
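The separation can be sketched as one functional definition parameterized by a resource binding, the analogue of changing only the "Where" statement. Everything below (names, the round-robin allocation) is a hypothetical illustration, not the actual Transfer2Blocks definition.

```python
# Sketch: one functional architecture, two resource configurations.
# Only the 'robots' binding (the 'Where' analogue) changes between runs.

def transfer_blocks(blocks, robots):
    """Functional architecture: every block must be transferred.
    Resource allocation: assign blocks to robots round-robin."""
    return {b: robots[i % len(robots)] for i, b in enumerate(blocks)}

blocks = ["block1", "block2"]
one_robot = transfer_blocks(blocks, robots=["r1"])
two_robots = transfer_blocks(blocks, robots=["r1", "r2"])
print(one_robot)   # {'block1': 'r1', 'block2': 'r1'}
print(two_robots)  # {'block1': 'r1', 'block2': 'r2'}
```

Because the functional definition never changes, the two allocations can be compared for run-time performance against the identical functional architecture.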

Three SDIO-funded projects measured:

| Project | Domain | Productivity vs. baseline C | Productivity vs. expert C |
| --- | --- | --- | --- |
| DETEC | Discrete event simulation (Los Alamos) | 14:1 to 48:1 | 2:1 to 8:1 |
| OTD | Object tracking and designation (SDI) | ~25,500 equiv. C lines in 10 man-weeks | ~15,000 equiv. expert C lines |
| Executor | Real-time behavior observer for 001 | ~13,000 C lines in 2 man-weeks | Higher than OTD |

Approximately 75–90% of “testing” for OTD was completed before implementation through static analysis.


Prototyping Distributed Environments with 001 (1991)


Margaret H. Hamilton and Ron Hackler, Hamilton Technologies, Inc., 1991

This short paper focuses specifically on 001’s distributed systems capabilities. It opens with a pointed critique of contemporary CASE tools: they “automate manual processes of the conventional development process when many of these processes need no longer be necessary.” Automating a bad process is not the same as replacing it with a good one.

Distributed control systems are defined using “stylized models” in 001 AXES covering environment, resources, interrupts, information organization, communication strategies, and functional distribution. Each aspect has a graphical representation grounded in formal language mechanisms. The combined set forms a “quick and friendly building block kit” that users can employ without knowing the formal details — but because the graphical representations are formally backed, the Resource Allocation Tool can automatically generate a fully executable distributed implementation.

The distributed architecture is a hierarchy of real-time distributed controllers where parent controllers are in charge of their children. Each controller coordinates communications, interrupts, and resources with other controllers while performing a portion of the distributed functional system. The Xecutor — a “meta operating system and simulator” that understands 001 semantics — provides real-time, asynchronous, event-driven execution with multiple concurrent control lines.
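The controller hierarchy can be sketched as parents delegating to children while doing their own share of the work. The class and method names below are illustrative assumptions, not 001's actual distributed controller model.

```python
# Hedged sketch of a hierarchy of controllers: each parent is in charge
# of its children and performs a portion of the distributed function.

class Controller:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def execute(self, task):
        """Handle a local portion of the task, then delegate downward."""
        log = [f"{self.name} handles {task}"]
        for child in self.children:
            log += child.execute(task)
        return log

root = Controller("top", [Controller("comm"),
                          Controller("sensors", [Controller("radar")])])
print(root.execute("startup"))
```

A real Xecutor-style runtime would add asynchronous, event-driven execution across these nodes; the sketch shows only the parent-in-charge-of-children shape.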


USL and Its Automation, the 001 Tool Suite (2012)


Margaret H. Hamilton, Hamilton Technologies, Inc., 2012

By 2012, the 001 Tool Suite had accumulated 26 years of application across domains: battlefield management, communications, homeland security, aerospace, emergency management, manufacturing, banking, medical, energy, traffic, robotics, and enterprise management. The presentation targets system integrators and end users considering USL adoption, containing material not found in the academic papers.

The DoD Strategic Defense Initiative Organization’s “Software Engineering Tools Experiment” compared three contractor/vendor teams on the same problem. The 001 team (with Lockheed Martin as prime contractor) achieved 90% completion in 120 staff days, versus 75% completion in 140 staff days for one competitor and 50% for another. Only the 001 team produced running code.

A study comparing 001 with a contemporary embedded systems development environment (Rational RequisitePro, Rational ROSE, LDRA, Borland debugger, custom scripts) found:

  • 50–75% improvement in requirements management
  • 400% improvement in design modeling
  • 500% improvement in quality and completeness of auto-generated code
  • 100% improvement in auto-generated design documentation
  • 1000% improvement in reuse

The presentation includes a systematic comparison matrix. Among the contrasts:

| Dimension | USL (Before the Fact) | Traditional |
| --- | --- | --- |
| Interface errors | None in model; all found before implementation | Most found after implementation; some never found |
| Correctness | By built-in language properties | Behavioral uncertainties until after delivery |
| Integration | Inherent, seamless life cycle | Ad hoc, not seamless |
| Productivity vs. reliability | More reliable = higher productivity | More reliable = lower productivity |
| Testing | Less testing with each new capability | Trapped in “test to death” philosophy |
| Automation | Does real work (design, programming, docs, testing) | Supports manual process rather than doing real work |
| Code generation | 100% production-ready, automatically | Shell code or incomplete |
| Maintenance | At specification level | At code level |
| Self-generation | Tool defined with itself, generated by itself | Tool not integrated, not self-generated |

Hamilton summarizes: “With USL, the Potential Exists for Reaching the Goal of High Quality, ‘More for Less’ Systems and Software.”


References, by publication:

  • USE.IT (1983): 57 references including internal HOS technical reports, government contract deliverables, and the 1974, 1976, 1978, and 1979 papers.

  • 001 Rapid Prototyping (1990): Hamilton’s 1986 IEEE Spectrum article, DETEC and OTD final reports to Los Alamos National Laboratory, and Boehm’s Software Engineering Economics.

  • Prototyping Distributed Environments (1991): the 1990 RSP paper and DETEC project reports.

  • USL and the 001 Tool Suite (2012): the 2008 IEEE Computer paper, DoD National Test Bed Final Report, and HTI technical documents.