
Where are all the computers?

By Andrew Miracle (AI/ML author)

In the mid-20th century, the term “computer” did not refer to the fancy silicon-based machine you hold in your hands or the laptop you carry to work every morning. A “computer” was a highly skilled mathematician, analyst, or data cruncher who performed complex calculations by hand or from memory, helping leading physicists, inventors, and business leaders make decisions faster, understand market dynamics, or even take man to the Moon.

In the early days at NASA, “human computers” were indispensable to the success of early space missions, from John Glenn’s orbital flight to the Apollo Moon landing. Yet by the 1960s, the advent of IBM’s mainframe computers and programming languages like Fortran threatened to render their roles obsolete, forcing them to adapt or risk displacement.

Today, as artificial intelligence (AI) reshapes industries, this historical pivot offers a critical lens through which to examine the recurring pattern of technological disruption. Just as NASA’s human computers transitioned from manual computation to programming, modern professionals face a similar imperative to evolve alongside AI. This report traces the trajectory of computational labor, analyzing how each technological leap — from human brains to mechanical systems to AI — reshapes workforces, creates new opportunities, and demands ethical foresight.

The central question emerges: If history is a guide, how will societies navigate the AI revolution’s promise and peril?


Origins of Human Computation

Long before electronic circuits, computation was a human endeavor. The term “computer” originated in the 17th century to describe individuals who performed mathematical calculations manually. By the 1930s, institutions like the United States National Advisory Committee for Aeronautics (NACA) began hiring women as “human computers” to process aerodynamic data. At Langley Research Center, a pioneering group of five white women formed the first computer pool in 1935, analyzing wind tunnel and flight test results. Their work required exceptional mathematical aptitude, precision, and endurance, as they solved differential equations and plotted trajectories using slide rules, graph paper, and mechanical calculators.

The demand for human computers surged during World War II and the Cold War space race. By the 1940s, NACA began recruiting African-American women, though segregation laws confined them to the separate “West Area Computers” division. Despite systemic barriers, these mathematicians became instrumental to projects like the supersonic X-1 aircraft and the Mercury program. Katherine Johnson, whose calculations verified the trajectory for John Glenn’s 1962 orbital flight, epitomized their critical role: her work was so trusted that Glenn reportedly insisted she manually check the IBM 7090’s results before his mission.

The human computers’ legacy extended beyond raw computation. They developed novel methodologies, such as iterative approximation techniques for solving nonlinear equations, which later informed algorithmic design. Their work laid the groundwork for modern numerical analysis, demonstrating that human intuition and creativity could outperform early mechanical systems — a reality mirrored in today’s debates over AI’s limitations.
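
To make that concrete, here is a minimal sketch, written in modern Fortran purely for illustration (the equation, program name, and variable names are invented here, not drawn from Langley’s records), of one such iterative approximation: Newton’s method applied to the nonlinear equation x**2 - 2 = 0. Each pass takes the previous guess and refines it, the kind of repetitive correction a human computer would have worked through by hand, one column of figures at a time.

! Illustrative sketch: Newton's method for x**2 - 2 = 0.
! Each iteration refines the previous guess, converging on sqrt(2).
program newton_sketch
  implicit none
  real    :: x
  integer :: k

  x = 1.0                            ! initial guess
  do k = 1, 6
    x = x - (x*x - 2.0) / (2.0*x)    ! Newton update: x - f(x)/f'(x)
    print '(i3, f12.7)', k, x        ! successive approximations
  end do
end program newton_sketch

The point is not this particular equation but the pattern: a mechanical, repeatable refinement that is easy to specify and tedious to execute by hand, precisely the kind of labor that would soon migrate from people to machines.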

IBM’s Mainframes and the Disruption of Human Labor

In the late 1950s, NASA’s investment in IBM 704 and 7090 mainframes marked a turning point. These room-sized machines could perform calculations in minutes that took human computers weeks. Initially met with skepticism, the IBM systems soon proved their worth during high-stakes missions like Mercury and Apollo. However, their introduction threatened the livelihoods of human computers, who faced a stark choice: adapt or become obsolete.

The shift was not seamless. Early programmers struggled to translate human intuition into machine-readable code. Fortran (Formula Translation), developed by a team led by John Backus at IBM and released in 1957, revolutionized this process by enabling scientists to write code using algebraic notation. For example, a simple Fortran fragment, one that scans an array of computed values and flags any that grow too large, might resemble:

C     SCAN A(11) DOWN TO A(1); IF F OF AN ELEMENT EXCEEDS 400, PRINT A WARNING
      DO 10 J = 1, 11
      I = 11 - J
      Y = F(A(I + 1))
      IF (400. - Y) 4, 8, 8
    4 PRINT 5, I
    5 FORMAT (I10, 10H TOO LARGE)

This bridge between mathematical logic and machine execution democratized programming but demanded new skills from the workforce.

The replacement of human computers with IBM systems yielded unprecedented efficiencies. Complex simulations, such as reentry heating profiles for the Apollo command module, became feasible, accelerating the Moon landing. However, automation also concentrated technical power. Engineers who once collaborated with human computers now relied on a smaller cohort of programmers, altering workplace dynamics and marginalizing those unable to transition.

Similar patterns emerged in other sectors. The rise of spreadsheet software like VisiCalc (1979) and Excel (1985) displaced clerical workers but created demand for financial analysts. Word processors eliminated typist roles but expanded technical writing and editing fields. Each wave of automation reshaped labor markets, privileging adaptability over tradition.

The Human-Machine Symbiosis

The Fortran era also underscored the enduring value of human ingenuity. While IBM machines excelled at repetitive calculations, they lacked the contextual reasoning of their human predecessors. Programmers like Dorothy Vaughan synthesized domain knowledge (e.g., aerospace physics) with coding skills, ensuring machines solved the right problems.

AI’s Transformative Potential

Today’s AI systems, powered by neural networks and massive datasets, mirror the disruptive potential of 1960s mainframes. Large language models (LLMs) like GPT-4 can draft code, write reports, and solve complex problems that were once the exclusive domain of educated professionals. A 2023 McKinsey study estimates that AI could automate 30% of hours worked across industries by 2030, impacting roles in software engineering, law, and creative arts.

Yet, as with Fortran, AI’s rise is creating new niches. Prompt engineering, AI ethics auditing, and model fine-tuning are emerging disciplines requiring hybrid skills. For instance, biomedical researchers now use AI to predict protein structures but must validate outputs against experimental data — a blend of domain expertise and algorithmic literacy.
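
As a flavor of that blend, the sketch below is a deliberately simplified, hypothetical illustration rather than any real laboratory’s pipeline: it compares a handful of model-predicted values against experimentally measured ones and reports their root-mean-square deviation, the kind of basic sanity check a domain expert applies before trusting an algorithm’s output. It is written in Fortran to match the earlier example; the data and names are invented.

! Hypothetical illustration: compare model predictions with experimental
! measurements using a root-mean-square deviation (RMSD). Values are invented.
program validate_sketch
  implicit none
  integer, parameter :: n = 4
  real :: predicted(n), measured(n), rmsd

  predicted = [1.02, 2.11, 2.95, 4.08]   ! illustrative model outputs
  measured  = [1.00, 2.00, 3.00, 4.00]   ! illustrative experimental values

  rmsd = sqrt(sum((predicted - measured)**2) / real(n))
  print '(a, f8.4)', 'RMSD vs experiment:', rmsd
end program validate_sketch

The judgment call of what deviation is acceptable, and what to do when the model is wrong, still belongs to the human.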

The arc from human computers to AI reveals a persistent truth: technological progress is cyclical, not linear. Each revolution displaces certain roles while creating others, demanding perpetual adaptation.

The human computers of Langley answered their era’s challenges with ingenuity and resilience. Their legacy implores us to approach AI not with fear but with the same determination to harness technology for collective advancement. The question remains: As we stand on the brink of another computational frontier, are we prepared to learn from history — or doomed to repeat its mistakes?
