Pre-launch Review

A Letter to Margaret Hamilton

February 2026

Dear Ms. Hamilton,

I don’t know if you’ll ever read this, but I’m writing it anyway because some things need to be said even when you’re not sure anyone is listening. That probably sounds familiar to you - you spent years telling people that software was a serious engineering discipline and they literally laughed at you.

So here goes.

I had something of a revelation recently while creating a digital archive of your published work. Your papers, your NASA technical reports, the Apollo documentation, the whole thing — 28 source collections, 31 PDFs, plus the AGC source code itself. From your 1971 “Computer Got Loaded” letter all the way through to the 2019 Apollo Flight Software retrospective, and the context documents that frame them. I use your methodologies in my software engineering practice: your interface error taxonomy, your priority-based recovery architecture, your “prevent, don’t detect” philosophy.

It was like listening to an album for the first time and realizing it’s going to change your life. And as I reflected on it, one thing hit me: the code was hand-wired.

I mean, I knew that intellectually - core rope memory, sure, I’d heard the term. But when I actually dug into it? When I read about the women at Raytheon - the “rope mothers” - literally weaving copper wire through and around magnetic cores, one bit at a time? Your software was a physical object. You couldn’t push a patch. You couldn’t hotfix a production issue. Every bit of that flight software was a commitment woven into rope.

And you made it work. Not just work - you made it save the mission.

I’d like to tell you a story, because I think there’s a thread here that connects your work to something the software industry is fighting about right now, and I don’t think enough people see it.


I started writing code in the late 1980s. I was a kid (6 years old). I printed programs on tractor-fed dot matrix paper so I could read them - spread the pages out on the floor and trace through the logic with a pencil. I saved my code to floppy disks and eventually to a very small, very precious 30MB hard drive. I heard old-timers talk about punch cards and scheduling time on mainframes. Having a computer in my house, with no punch cards and no time-sharing queue? I was spoiled and I knew it.

For almost 20 years after that, I wrote code in a black terminal screen with vim. That was it. That was the whole setup. And honestly? I was good at it. I could navigate a codebase, refactor across files, debug production issues - all in a terminal, all with vim, all without a single graphical tool.

And here’s where I have to be honest about something embarrassing.

I thought anyone who used a “fancy IDE” wasn’t a serious developer. Like, if you needed autocomplete and syntax highlighting and a file browser built into your editor, you probably didn’t really understand what you were doing. My logic was: if programming is so easy that anyone can do it with these tools, then anyone will, and garbage in, garbage out. I wore my vim-only workflow like a badge of honor.

This was around 2015 or so. And defending this position was getting harder, because I hadn’t actually tried a modern IDE in years. The few times I had, it was because some proprietary compiler forced me into it, so of course the experience was terrible. Great data to confirm my bias.

Then my buddy Dax told me to try PyCharm.

So I did. And… wow. Just being able to see more than one file open at a time (I know, sounds crazy, right?). The autocomplete. The integrated debugging. The refactoring tools. It supercharged my programming. My productivity went up. My organization went up. And here’s the thing nobody warns you about - my joy went up. I was having more fun writing software.

The anxiety I had about “forgetting how to program” if I used better tools? That was never real. I didn’t forget anything. I just got better. The craftsmanship didn’t go away - it got amplified.

Newer tools have since replaced PyCharm for me, and they’ve raised my joy and productivity yet again.


I’m telling you this, Ms. Hamilton, because right now in 2026, the software world is going through the exact same argument I had with myself. But instead of “vim vs IDE” it’s “human-written vs AI-assisted.”

People are rejecting work - good work, solid work - because someone used an AI tool in the process. Pull requests get flagged. Contributions get dismissed. People literally put robot emojis on code to mark it as tainted. Ask me how I know this pattern of thinking is narrow-minded. (Hint: I just told you.)

And here’s where your story comes back in.

Your code was woven by hand into copper wire. The next generation used punch cards. Then we got terminals. Then floppy disks. Then IDEs. Each time, the old guard looked at the new tools and said “that’s not real programming.” Each time, they were wrong. Not because the new tools were perfect - but because the tools were never the point.

You know what the point was? The point was always the same thing you were doing when you convinced MIT that the Apollo software needed asynchronous priority-based error recovery. The point was the thinking. The engineering. The craftsmanship.

Your flight software didn’t save Apollo 11 because it was hand-wired. It saved Apollo 11 because you designed a system where the computer could make decisions about what mattered most when everything went sideways. When those 1202 and 1201 alarms fired during descent - executive overflow, the computer literally running out of capacity to do everything it was asked to do - your software shed the low-priority tasks and kept running the ones that mattered. Guidance. Navigation. Control. Landing.
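
For readers who build software today and have never seen that pattern up close, here is a small sketch of the idea in Python. To be clear, this is not the AGC Executive; the job names, the cost model, and the capacity numbers are all my own invention, just to show the shape of priority-based shedding:

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable

# A toy priority executive, in the spirit of the behavior described above:
# when there isn't enough capacity for every scheduled job, the low-priority
# jobs get dropped and the critical ones keep running. This is NOT the AGC
# Executive; names, costs, and the capacity model are illustrative only.

@dataclass(order=True)
class Job:
    priority: int                      # lower number = more critical
    name: str = field(compare=False)
    cost: int = field(compare=False)   # pretend time/slot units
    run: Callable[[], None] = field(compare=False)

def run_cycle(jobs: list[Job], capacity: int) -> list[str]:
    """Run as many jobs as capacity allows, most critical first; shed the rest."""
    shed = []
    heapq.heapify(jobs)                # most critical job comes off the heap first
    while jobs:
        job = heapq.heappop(jobs)
        if job.cost <= capacity:
            capacity -= job.cost
            job.run()
        else:
            shed.append(job.name)      # overloaded: this job does not run
    return shed

if __name__ == "__main__":
    jobs = [
        Job(1, "guidance", 3, lambda: print("guidance: running")),
        Job(1, "navigation", 3, lambda: print("navigation: running")),
        Job(2, "control", 2, lambda: print("control: running")),
        Job(5, "display-refresh", 4, lambda: print("display: running")),
    ]
    dropped = run_cycle(jobs, capacity=8)   # deliberately too little capacity
    print("shed this cycle:", dropped)
```

Run it with too little capacity and the display refresh gets dropped while guidance, navigation, and control keep running. That, in miniature, is the decision your software made on the way down.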

That wasn’t a property of the wiring. That was a property of the mind that designed the system.


The question everyone keeps asking right now is “where do the humans fit in?” And honestly, it seems like a silly question to me.

Just because someone doesn’t have to be in the driver’s seat doesn’t mean they shouldn’t be. We let machines build chips so we don’t have to hardwire memory. That’s a good thing. It freed us up to focus on using our actual human gifts - intuition, discernment, creativity - to steer the vehicle. But someone still needs to steer.

You figured this out decades ago. You went from “after the fact” thinking - find the bugs, fix the bugs, ship it, pray - to “before the fact” thinking. Development Before the Fact. Design systems where entire categories of errors can’t exist in the first place. You didn’t do that by rejecting tools. You did that by understanding, at a level most people never reach, what the actual problem was.

And the actual problem was never the tools. It was always the interfaces.

You found that 75% of errors in the Apollo software were interface errors. Ambiguous. Incomplete. Inconsistent. Wrong. Unnecessary. Over-specified. Six categories. And your whole career after Apollo was basically saying: “What if we built systems where those six things couldn’t happen?” That became Higher Order Software, then 001 Tool Suite, then the Universal Systems Language. Forty years of work, all flowing from the insight that the danger isn’t in the components - it’s in the spaces between them.
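
I find myself reaching for those six categories as a checklist in code review. A minimal rendering in Python, with the category names taken straight from the taxonomy and everything else (the finding record, the example boundary) entirely my own framing, might look like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

# The six interface-error categories named above, as a tiny review checklist.
# The category names follow the taxonomy described in this letter; the
# InterfaceFinding record and the example below are my own framing.

class InterfaceError(Enum):
    AMBIGUOUS = auto()       # more than one reasonable interpretation
    INCOMPLETE = auto()      # a case the interface never accounts for
    INCONSISTENT = auto()    # two parts of the system disagree about the contract
    WRONG = auto()           # the contract is stated, but stated incorrectly
    UNNECESSARY = auto()     # the interface carries things nobody needs
    OVER_SPECIFIED = auto()  # the interface constrains more than the problem requires

@dataclass
class InterfaceFinding:
    boundary: str            # where two components (or a human and a tool) meet
    category: InterfaceError
    note: str

if __name__ == "__main__":
    finding = InterfaceFinding(
        boundary="scheduler <-> telemetry module",
        category=InterfaceError.INCOMPLETE,
        note="no agreement on what happens when the telemetry queue is full",
    )
    print(finding)
```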

That insight is more relevant right now than at any point since you first articulated it. Because when humans work with AI tools to build software, the interface between human intent and machine output is EXACTLY where things break down. Ambiguous prompts. Incomplete specifications. Inconsistent expectations. Your taxonomy describes the failure modes of human-AI collaboration before human-AI collaboration even existed.

And then I watched your oral history. The Computer History Museum one, from April 2017 - David Brock interviewing you for nearly three hours. And near the end, after you’d walked through everything - Apollo, Higher Order Software, 001, the Universal Systems Language, forty-plus years of building systems that prevent errors by construction - you said something that stopped me cold:

“The before the fact paradigm has a big chance I think in the future of taking off… if you’re dealing with a paradigm or a language in an environment which can handle any kind of system, then maybe some of the problems in AI that could be there because you’re using earlier paradigms might speed up more.”

I had to sit with that for a minute. Because up until that moment in the interview, I’d been connecting your work to AI on my own - seeing the parallels, drawing the lines, thinking I was the one making the clever observation. But you were already there. You said this in 2017. GPT-2 didn’t exist yet. The current wave of AI that everyone is losing their minds over? It was still two years from its first real spark. And there you were, in a quiet room at the Computer History Museum, calmly pointing out that “before the fact” wasn’t just a software engineering methodology - it was a paradigm designed to handle ANY kind of system, and that AI specifically would benefit from it. You weren’t inadvertently relevant to this moment. You were already thinking about it.


So here’s what I want to say, and I want to say it clearly:

It’s my opinion that people who dismiss AI-assisted work are making the same mistake I made when I dismissed IDEs. And the people who think AI means humans aren’t needed anymore are making an even bigger mistake - the kind of mistake your daughter Lauren accidentally demonstrated when she crashed the Apollo simulator by pressing buttons NASA said an astronaut would never press. (And then Jim Lovell did exactly that on Apollo 8.)

“That won’t happen” is never an engineering strategy. Hope is never an engineering strategy. You taught us that.

My answer to “where do humans fit in” is the same place they’ve always fit in. At the helm. Not because they have to be - the autopilot works fine in calm weather - but because someone needs to care. Someone needs to hold the responsibility not for profit, but because they have a passion for the art of it. Someone needs to be the person who says “what if the astronaut presses the wrong button?” when everyone else says that’s not worth worrying about.

To me, that’s what you gave us: engineering, craftsmanship, wisdom.


I built this archive because I think your work deserves to be accessible, searchable, and understood - not as history, but as methodology. “What the Errors Tell Us” shouldn’t be locked behind a paywall. It should be required reading for anyone building systems that people depend on.

My hope is pretty simple. I want to encourage anyone writing software - and that now includes people writing models, training agents, building systems that build systems - to follow your principles. Prevent, don’t detect. Design it right the first time. Make entire categories of errors impossible by construction. And when something does go wrong (because it will), have the architecture in place to shed the unnecessary and protect the critical.
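
If I had to show a new engineer what “prevent, don’t detect” feels like in everyday code, I might start with something as small as this Python sketch. The BurnDuration type and its limits are invented for illustration; the point is only the shape: the invalid value cannot be constructed in the first place, so nothing downstream ever has to check for it.

```python
from dataclasses import dataclass

# A small "before the fact" sketch: instead of detecting a bad value wherever
# it is used, make the bad value unconstructible. BurnDuration and its limits
# are invented for illustration; the shape is what matters, not the numbers.

@dataclass(frozen=True)
class BurnDuration:
    seconds: float

    def __post_init__(self) -> None:
        # The only way to get a BurnDuration is through this check, so every
        # function that accepts one can rely on the invariant by construction.
        if not (0.0 < self.seconds <= 600.0):
            raise ValueError(f"burn duration out of range: {self.seconds}")

def schedule_burn(duration: BurnDuration) -> str:
    # No defensive re-checking here: the invalid case cannot reach this code.
    return f"burn scheduled for {duration.seconds:.1f}s"

if __name__ == "__main__":
    print(schedule_burn(BurnDuration(42.0)))     # fine
    try:
        BurnDuration(-5.0)                       # the whole error class dies here
    except ValueError as err:
        print("rejected at construction:", err)
```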

The tools will keep changing. They always do. Someone reading this fifty years from now will probably think our current tools are as quaint as I think punch cards are. That’s fine. That’s progress.

What shouldn’t change is the engineering discipline you fought to establish. The one they laughed at you for naming. The one that saved Apollo 11.

Thank you, Ms. Hamilton. For the discipline. For the methodology. For insisting that software was worth taking seriously when nobody else did.

And for teaching the rest of us that tools evolve, but craftsmanship is forever.

(One last delight from the archive: the AGC source comment “DO NOT USE GOPROG2 OR ENEMA WITHOUT CONSULTING POOH PEOPLE” might just be the first poop joke in space.)

With genuine respect and appreciation,

Ryan Malloy https://ryanmalloy.com

P.S. - “Computer Got Loaded” is one of the best titles for a technical letter I’ve ever seen. We found both Datamation issues — the full pair. The story of McCracken’s “foulup” framing and your quiet, surgical correction is in the archive.

P.P.S. - The archive is in pre-launch hold, waiting for your review. To verify the display, enter the DSKY lamp test command — V35E. You would have run it before every mission to light every segment and confirm the display was working. Consider this the same check, one last time, before we go.