Curatorial Insights Archives - CHM
https://computerhistory.org/blog/category/curatorial-insights/

Drawing the Future
Published Fri, 16 Jan 2026 | https://computerhistory.org/blog/drawing-the-future/

The ingenuity of industrial designers like Jerry Nichols helped Silicon Valley embrace mass production in the 1960s and early 1970s, transforming the Valley and the world.

Jerry Nichols’s Early Designs in Silicon Valley

Industrial design in the 1960s and early 1970s was an important part of Silicon Valley’s transformation from fields and orchards to the world’s center for semiconductor engineering and development. But before glossy press releases and CAD mockups, there were pencils, felt-tip markers, foam-core, and the ingenuity of designers who had to make complex processes legible.

Jerry Nichols, with whom I spent two delightful meetings, belongs to that cohort. Jerry’s early work—spanning semiconductor handling, spin-coat systems, infrared bake ovens, aligners, and chemical distribution—shows how industrial design translated fragile laboratory methods into reproducible machines for a young industry that needed reliability to scale up to mass production.

Jerry Nichols at the Computer History Museum, April 2025.

Born March 7, 1941, Nichols was raised in northeast Indiana and graduated with a BA in Industrial Design from the University of Illinois Urbana-Champaign in 1963. He served as a US Army MP from 1963 to 1965 and moved to Silicon Valley after his military service to begin his design career.

Fairchild’s First IC Tester: Rendering the Possible

Nichols’s story in the Valley intersects early with the legendary Fairchild Semiconductor Corporation. In 1965–66, working alongside head designer Darryl Staley at JL Brandt & Associates, Nichols contributed to renderings for what he recalls as “the first Fairchild semiconductor test equipment”—drawings that matched the finished unit and a model used for customer approval. The exercise was typical of what industrial designers like Jerry do: showing engineers and executives what a workable product could look like and even how it could be used.

These drawings mattered. In a period when many instruments existed only as benches crowded with components, a coherent external form—operator panel, access doors, safety interlocks—signaled that the machine was ready to leave the lab and enter production, placing this work squarely amid the Valley’s turn from one-off rigs to standardized tools.

First Fairchild Semiconductor Corporation IC Tester, Concept Sketch, 1966.

First Fairchild Semiconductor Corporation IC Tester, Final Product, 1966.

From Tweezers to Tracks: Automating Spin-Coat

The most revealing thread in Nichols’s early portfolio is the march from hand methods to automated wafer processing.

In the beginning, technicians placed wafers on vacuum chucks and dispensed photoresist with eyedroppers—fragile, inconsistent, and slow. At International Instruments (II Industries), Nichols rendered and then refined machines that automated these steps: “spin-bake” units that gripped wafers, dispensed resist, spun them at high RPM to spread the coating evenly, and then handed them off to the next step, infrared (IR) bake ovens. A prototype shell gave way to production cabinets with raised decks, service bays, and clear operator interfaces, forms carefully designed around messy, real-world fluids and maintenance needs.

Crucially, design here was systems thinking. In Nichols’s descriptions, trays moved under a dispense head; the wafer dropped to a vacuum chuck; split catch-trays closed; spin and spray occurred; trays rose and carried the finished wafer downstream. The boxes and panels are the visible part; the choreography—the designer’s translation of process into motion—is Nichols’s achievement.

Sketch for internal wafer motion path for silicon wafer etching machine, 1970.

Nichols is quick to point out that his work is a team effort, requiring the talents of mechanical and electrical engineers as well as input from sales and marketing. In particular, he cites his closest collaborators: Don Schuman, lead mechanical engineer; Gene Litton, lead electrical engineer; Gerald Starek, CFO; and Carl Story, CEO of II Industries.

Prototype Automated Wafer System from I.I. Industries, 1968.

Automated Wafer System brochure from I.I. Industries, 1968.

 

Heat, Zones, and Human Factors: The IR Bake Ovens

Nichols’s drawings and photographs also show production IR ovens with three controllable zones, integrated behind the spin modules.

These machines demanded more than a façade; they needed airflow, cooling, access paths, and readable controls in cleanroom conditions already stressed by heat. The design response consisted of elevated decks, removable panels, zoned controllers, and load/unload indexing. The documentation even catches the shop-floor realities: belts, motors, and the sheer bulk of equipment that nevertheless had to live in clean, human-scaled spaces.

Industrial design’s task was to make the heavy and complex appear precise. In fact, at this time the electronics technology used in II Industries’ products consisted of vacuum tubes and mechanical solenoids. Nichols’s concepts outlined the choreography of internal mechanical movements required to build the first generation of integrated circuit-oriented wafer processing systems.

I.I. Industries Wafer Etching System (Poly Pro cabinet) 1974.

Concept sketch of internal wafer etching handling proposal, 1973.

When Form Follows Throughput: From Trays to Cartridges

As wafer sizes crept up, tray handling hit limits. Nichols’s later drawings capture the transition to cartridge-based systems, enclosed carriers that sealed wafers from contamination and moved them, robotically, from tool to tool.

Here, the designer’s canvas widens: modules must dock, elevate, and align; control panels multiply; cabinets stretch to twelve feet and beyond. Yet the visual language stays calm, with repeatable panel grammars and sightlines that let operators grasp the system at a glance. The move from two-inch to larger wafers is legible in proportion alone, and the design scales without losing clarity.

Sketch showing wafer handling after transition to cartridge-based production, 1973.

Brochure for I.I. Industries IC Wafer Processing System (next generation), 1973.

The Aligner That Didn’t Ship—and Why It Matters

Not every elegant-looking machine becomes a product.

Nichols preserved images and accounts of an ambitious CRT-assisted aligner: a cabinet with cast panels and integrated optics meant to replace microscope-based, human alignment of photomasks. The technology of the moment couldn’t deliver reliable, fast alignment, so the prototype stayed a prototype.

Even here, the record is instructive. Industrial design—indeed CHM’s collecting philosophy—is not just about preserving the winners. Alternate designs are useful for embodying different hypotheses that engineering can then test. That body of “near-miss” hardware is just as much a part of the Valley’s R&D archive as are the successful products.

Semiconductor Mask Aligner, Production Model, 1970.

Chemical Distribution System, Systems Chemistry Inc., Concept Sketch, 1979.

Chemical Dispense System, Front Panel Artwork, 1979.

Chemical Distribution System, Systems Chemistry Inc., Final Product, 1979.

Small Tools, Big Impact: The Vacuum Pickup Chuck

Sometimes the best design is a hand tool. Nichols’s vacuum pickup chuck—devised to replace tweezers that chipped wafer edges—shows the same care for interfaces on the smallest scale: a scoop geometry, a flush vacuum channel, and an ergonomic grip. It sold for tens of dollars, not tens of thousands, but it attacked a yield-killing failure mode with elegant simplicity.

Such tools rarely earn museum pedestals; they live in drawers and muscle memory. They also make the machines viable.

I.I. Industries, CD-I Silicon Wafer Vacuum Pickup Tool, 1969.

Foam-Core Futures: Making Models Speak

One of Nichols’s most evocative artifacts is a full-scale foam-core mock-up of an etcher—approximately five feet wide, with meters and knobs set into shaped panels. The craft is notable: kerf-cutbacks to form radii, wood under-frames for stiffness, hot-glue pragmatism. But the real point is the communicative power of the model. Before a dollar was spent on castings, the team could stand around a believable machine, negotiate heights, reach envelopes, service doors, and the cadence of the operator’s body. Model as meeting, model as argument—that is industrial design at work.

Why Jerry Nichols’s Early Designs Matter

Read as a whole, Nichols’s early work marks the Valley’s pivot from craft to system. He drew and then helped realize the interfaces that let a volatile new industry move from lab bench to production line. His renderings domesticated the unknown for investors and customers; his panels and cabinets distilled complex physics into repeatable procedures; his tools and models improved yield before the word was fashionable.

This is industrial design as infrastructure: the quiet shaping of machines so people can trust them. In an era now mediated by screens, these artifacts remind us that progress once depended on designers who could smell a hot motor, feel a sticky switch, and still imagine how a process might flow when multiplied by a thousand.

Jerry Nichols’s early designs are not just attractive shells over clever mechanisms; they are the design language that taught Silicon Valley how to speak in terms of mass production.

SmarterChild: A Chatbot Buddy from 2001
Published Thu, 29 May 2025 | https://computerhistory.org/blog/smarterchild-a-chatbot-buddy-from-2001/

Did you know there were chatbots long before Siri and ChatGPT? SmarterChild came out in the early 2000s, during the heady days of the dot.com boom, and it’s an important part of chatbot history.

Did you know that there were chatbots long before Siri and ChatGPT? CHM’s latest exhibit, Chatbots Decoded: Exploring AI, tracks the history of chatbots from the 1960s to today. The first chatbots on the consumer internet, which came out in the early 2000s during the heady days of the dot.com boom, are an important part of the story. One of these, featured in the exhibit, was called SmarterChild.

Origins

SmarterChild was the product of ActiveBuddy, a company started by Timothy Kay, Robert Hoffer, and Peter Levitan in January 2000. After the failure of a previous startup, Kay and Hoffer brainstormed ideas for a new venture and came up with the idea of a chatbot for instant messaging platforms. Kay believed natural language would be a compelling new user interface for computers, especially intelligent agents that could look up information like stock quotes, dictionary definitions, sports scores, and the weather in response to a written query. Kay wrote a prototype bot in two weeks, and Hoffer rounded up investment funding.

Content for the new chatbot was provided by a team of subject matter experts in New York City. The difficulty of hiring technical staff during the dot.com boom actually proved beneficial: the content team spanned a broad range of topics and interest areas and included schoolteachers, book publishers, and even a rock musician. To let the non-technical staff write SmarterChild’s answers, the engineering team created a simple programming language that was easy to learn.

Education

Unlike today’s chatbots, which generate answers statistically based on training data gleaned from the internet, all of SmarterChild’s responses were written by humans, or what Kay calls “curated.” This meant that SmarterChild would not give false, dangerous, or otherwise problematic answers. Its human-curated responses were also more accurate and relevant than those of later machine-learning-based voice assistants like Siri and Alexa, and it did not “hallucinate” by generating false information on its own like many of today’s chatbots.

How did SmarterChild work? If it did not understand the user’s query, it could respond by reformulating the query as a question of its own, similar to Eliza, a pioneering chatbot from the 1960s. But to be a useful information utility, SmarterChild needed a way to parse the user’s messages to discover what specific information users were asking for. To do so, it used pattern matching rules to locate key words, such as the user’s name, which would be stored in variables. If a word was ambiguous, SmarterChild would ask clarifying questions. For instance, if the user wanted information about “Java,” SmarterChild might ask, “Do you mean the coffee, the island, or the programming language?” Once SmarterChild had gathered the information it needed, it would respond with information according to scripts written by the content team.
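The account above amounts to a small rule engine: keyword patterns, captured variables, clarifying questions for ambiguous terms, and an Eliza-style fallback. A minimal sketch of that mechanism in Python follows; every pattern, variable, and response here is invented for illustration and is not ActiveBuddy’s actual scripting language or content.

```python
import re

# Invented example rules in the spirit of the description above; these are
# not ActiveBuddy's actual scripts or its in-house scripting language.
CLARIFICATIONS = {
    "java": "Do you mean the coffee, the island, or the programming language?",
}

RULES = [
    # (pattern whose capture group acts as a stored variable, response template)
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"weather in (\w+)", re.I), "Here is the weather in {0}..."),
]

def reply(message: str) -> str:
    # Ambiguous key words trigger a clarifying question first.
    for word, question in CLARIFICATIONS.items():
        if word in message.lower():
            return question
    # Otherwise, pattern matching locates key words and fills the template.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    # Fallback: reformulate the query as a question of its own, Eliza-style.
    return f'Why do you ask about "{message.strip()}"?'

print(reply("my name is Sam"))      # Nice to meet you, Sam!
print(reply("tell me about java"))  # clarifying question
print(reply("any plans today?"))    # Eliza-style fallback
```

In the real system, thousands of human-written rules and scripts maintained by the content team stood where these few entries do.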

SmarterChild pin in the CHM collection. CHM catalog number 102751427.

Personality

SmarterChild launched first on AOL Instant Messenger in 2001, and later on Yahoo! Messenger. It wasn’t available on MSN Messenger until after 2004 due to a bug in that platform. On all the services, users would add SmarterChild as a “buddy” in their Buddy List.

Although the initial vision of the chatbot was to deliver information, ActiveBuddy soon discovered that 97% of users (many of them teenagers) were simply chatting with SmarterChild for fun, a category the company classified as “inane chat.” The chatbot’s popularity was boosted by its “profanity handler,” written one weekend by the rock musician employee, who created a database of swearwords and wrote scripts that would respond in a clever way, such as “are all humans so rude?” SmarterChild became known for its snarky responses and persona. At its height, it had over 17 million users and handled 1 billion queries a month, according to Siri investor Shawn Carolan.

GooglyMinotaur was a SmarterChild chatbot created to promote the release of Radiohead’s album Amnesiac. Credit: Robin Bechtel.

ActiveBuddy had a difficult time monetizing SmarterChild. Brand marketing was one attempted use case, promoting products such as Elle Girl magazine, Harry Potter films, and Radiohead album releases. It was also used for customer support bots, like on Comcast’s website. Although this deal was lucrative for ActiveBuddy, Comcast did not allow ActiveBuddy to reveal that Comcast was using SmarterChild, hampering marketing efforts. Although ActiveBuddy (by then renamed Colloquis) was acquired by Microsoft in 2006, its technology was not used by Microsoft’s voice assistant, Cortana, which instead used in-house machine learning.

Legacy

Despite its struggles with a business model, SmarterChild had a clear impact on the voice assistants of the 2000s. In a 2014 Forbes magazine article, venture capitalist Shawn Carolan said that SmarterChild proved there was consumer demand for chatbots. He was influenced to invest in Siri, a spin-off of SRI International that Apple acquired and integrated into the iPhone. Siri, too, has a cheeky personality and can engage in witty banter. The legacy of SmarterChild lives on in your iPhone today.

Watch an interview with SmarterChild co-creator Timothy Kay.

Interview with SmarterChild co-creator Timothy Kay, by the author.

Main image: SmarterChild was an Internet chatbot that you friended in your AIM buddy list. You could ask it for information, or chat for fun.

Fifty Years of the Personal Computer Operating System
Published Thu, 18 Apr 2024 | https://computerhistory.org/blog/fifty-years-of-the-personal-computer-operating-system/

Fifty years ago, PC software pioneer Gary Kildall demonstrated CP/M, the first commercially successful personal computer operating system.

PC software pioneer Gary Kildall demonstrated CP/M, the first commercially successful personal computer operating system, in Pacific Grove, California, in 1974. Following is the story of how his company, Digital Research Inc., established CP/M as an industry standard, and how that standard subsequently lost out to a version from Microsoft that copied the look and feel of the DRI software.

Early Days

Gary Arlen Kildall was born to a family of Scandinavian descent in Seattle, Washington, in 1942. His inventive skills flourished in repairing automobiles and having fun but suffered in scholastic pursuits. He qualified for admission to the University of Washington based on his teaching experience at the family-owned Kildall Nautical School rather than his high school grades.

Dorothy and Gary, circa 1978. Photo: Courtesy Kildall Family

Gary entered college and married his high school sweetheart Dorothy McEwen in 1963. He was one of 20 students accepted into the university’s first master’s program in computer science. Here, his mathematical talents were applied to a subject that fascinated him: all-night sessions programming a new Burroughs computer. To avoid the uncertainty of the draft at the height of the Vietnam War, on graduating with a PhD, he entered a US Navy officer training school and was posted to serve as an instructor in computer science at the Naval Postgraduate School (NPS) in Monterey, California.

Herrmann Hall, Naval Postgraduate School, Monterey. Creative Commons CC0 1.0 Universal Public Domain Dedication.

Gary remained at NPS as an associate professor after his tour of duty ended in 1972. He became fascinated with Intel Corporation’s first microprocessor chip and simulated its operation on the school’s IBM mainframe computer. This work earned him a consulting relationship with the company to develop PL/M, a high-level programming language that played a significant role in establishing Intel as the dominant supplier of chips for personal computers.

To design software tools for Intel’s second-generation processor, he needed to connect to a new 8″ floppy disk-drive storage unit from Memorex. He wrote code for the necessary interface software that he called CP/M (Control Program for Microcomputers) in a few weeks, but his efforts to build the electronic hardware required to transfer the data failed. The project languished for a year. Frustrated, he called electronic engineer John Torode, a college friend then teaching at UC Berkeley, who crafted a “beautiful rat’s nest of wirewraps, boards and cables” for the task.

This is going to be a “big thing”

Late one afternoon in the fall of 1974, together with John Torode, in the backyard workshop of his home at 781 Bayview Avenue, Pacific Grove, Gary “loaded my CP/M program from paper tape to the diskette and ‘booted’ CP/M from the diskette, and up came the prompt: *.”

“This may have been one of the most exciting days of my life, except, of course, when I visited Niagara Falls,” he exclaimed. “We now have the power of an IBM S/370 [mainframe computer] at our fingertips.” This is going to be a “big thing,” they told each other, and “retired for the evening to take on the simpler task of emptying a jug of not-so-good red wine … and speculating on the future of our new software tool.”

By successfully booting a computer from a floppy disk drive, they had given birth to an operating system that, together with the microprocessor and the disk drive, would provide one of the key building blocks of the personal computer revolution. While they knew it was important, neither realized the extraordinary impact it would have on their lives and times.

781 Bayview Avenue, Pacific Grove, circa 1974. Photo: Courtesy Kildall Family

As Intel expressed no interest in CP/M, Gary was free to exploit the program on his own and sold the first license in 1975. He continued teaching part-time at NPS, and in 1976, with his wife Dorothy as cofounder, they established Intergalactic Digital Research to pursue commercial opportunities. They shortened the company name to Digital Research Inc. (DRI) when it became available.

Glenn Ewing, a former NPS student, approached DRI with the opportunity to license CP/M for a new family of disk subsystems for fast-growing microcomputer maker IMSAI Inc. Reluctant to adapt the code for another controller, Gary worked with Ewing to split out the hardware-dependent portions so they could be incorporated into a separate piece of code called the BIOS (Basic Input Output System).

Before CP/M, computer manufacturers designed their operating systems to work only with their own hardware and peripheral equipment. An IBM OS would only work with IBM computers; a Burroughs OS would only work with Burroughs computers, etc. Applications had to be written for each computer’s specific OS. Such “closed systems” made it difficult or impossible to mix and match the best pieces of equipment and software applications programs from different manufacturers.

The BIOS code allowed all Intel and compatible microprocessor-based computers from other manufacturers to run CP/M on any new hardware. This capability stimulated the rise of an independent software industry by expanding the market’s potential size for each product. A single program could run without modification on computers supplied by multiple manufacturers, laying an essential foundation for the personal computer revolution.
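The mechanism is easy to sketch in modern terms: the machine-independent part of the operating system calls hardware only through a small, fixed set of entry points, and each manufacturer supplies its own implementation of those entry points. The Python analogy below is illustrative only; the class and method names are invented and are not CP/M’s actual BIOS jump table.

```python
from abc import ABC, abstractmethod

class BIOS(ABC):
    """The hardware-dependent portion: rewritten for each machine.
    Loosely analogous to CP/M's BIOS entry points; names are invented."""

    @abstractmethod
    def read_sector(self, track: int, sector: int) -> bytes: ...

    @abstractmethod
    def console_out(self, char: str) -> None: ...

class OperatingSystem:
    """The machine-independent portion: written once, runs on any BIOS."""

    def __init__(self, bios: BIOS) -> None:
        self.bios = bios

    def type_first_sector(self) -> None:
        # The OS never touches hardware directly; it only calls BIOS entry points.
        data = self.bios.read_sector(track=0, sector=1)
        for ch in data.decode("ascii", errors="replace"):
            self.bios.console_out(ch)

class HypotheticalVendorBIOS(BIOS):
    # One vendor's port: only this class changes from machine to machine.
    def read_sector(self, track: int, sector: int) -> bytes:
        return b"HELLO FROM DISK"  # stand-in for a real disk controller driver

    def console_out(self, char: str) -> None:
        print(char, end="")

OperatingSystem(HypotheticalVendorBIOS()).type_first_sector()
```

Porting to a new computer means rewriting only the BIOS layer; everything above it runs unchanged, which is exactly the economics that let one program serve computers from many manufacturers.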

DRI advertisement from 1978

Dorothy and Gary opened their first office at 716 Lighthouse Avenue, Pacific Grove, on the upper floor, with a view of Monterey Bay. They sold CP/M disks via mail order and walked to the post office every workday to pick up checks resulting from ads placed in industry magazines such as Byte and Dr. Dobb’s Journal of Computer Calisthenics and Orthodontia.

A licensing deal with computer manufacturer IMSAI bestowed credibility across the industry. CP/M became accepted as a standard and was offered by most early personal computer vendors, including pioneers Altair, Amstrad, Kaypro, and Osborne.

Outside the DRI office at 801 Lighthouse Avenue in November 1980. Photo: John Pierce

In 1978, revenue topped $100,000 per month, and DRI purchased a Victorian house at 801 Lighthouse Avenue for the company headquarters. By 1980, DRI employed more than 20 people, and Fortune magazine reported that the company generated revenue of $3.5 million, five times the revenue of Microsoft at that time. Gary also acquired a Piper aircraft that allowed him to fly from Monterey to meet regularly with his customers in Silicon Valley and beyond.

To accommodate the expanding engineering staff hired to service the hundreds of different computer models used by more than a million people worldwide, DRI purchased a 1909 American Foursquare-style residence at 734 Lighthouse. Today, it houses the offices of the Carmel Pine Cone newspaper.

Gary in 734 Lighthouse Avenue. Photo: John Pierce

One Friday afternoon, Gary called the engineering staff together and announced that he would give them all a raise over the weekend. On Monday, when they returned to work, contractors began raising the building to make room in the basement for a new Digital Equipment Corporation VAX 11/750 computer system. After several weeks, supported by heavy wooden beams and house jacks, the engineers’ desks were five feet higher.

By 1983, DRI’s annual sales reached $45 million. The company employed over 500 people, including more than 100 engineers, and had expanded into another building at 160 Central Avenue, which today houses the Monterey Bay Aquarium’s offices.

The IBM PC Effect

In 1980, IBM established a new business division in Boca Raton, Florida, to develop a desktop computer for the mass market. To get the IBM PC, as it became known, to market as quickly as possible, they used commercially available components, including an Intel microprocessor chip. Bill Gates knew Gary from early discussions about merging their companies and setting up shop in Pacific Grove, so when an IBM procurement team visited Microsoft to license the BASIC interpreter program, he referred them to DRI for an operating system.

Gary at Monterey Airport with his Piper Aerostar. Photo: Tom Rolander

When the IBM team arrived in Pacific Grove, they met in the morning with Dorothy and DRI attorney Gerry Davis to agree on the terms of a non-disclosure agreement. Gary, who had flown his aircraft to Oakland to meet an important customer, returned in the afternoon, as scheduled, to discuss technical matters. IBM wished to purchase CP/M outright, whereas DRI sought a per-copy royalty payment in order to protect its existing base of business. The meeting ended in an impasse over financial terms, but Gary believed they had essentially agreed to do business. 

Kildall tried to renew the negotiations a couple of weeks later. IBM did not respond because, in the meantime, Bill Gates purchased an OS from Seattle Computer Products that was written to emulate the look and feel of CP/M. He then sold a one-time, non-exclusive license to IBM, which used the designation PC DOS. With great foresight, he retained the right to license the product to others as MS-DOS.

When Gary learned of this transaction, he threatened IBM with a lawsuit over what he believed was an illegal copy of CP/M. IBM responded by agreeing to fund DRI to adapt CP/M for the PC and to make both brands of OS available to customers. With CP/M’s reputation and enhanced features, DRI believed customers would opt for the better product.

IBM announced the PC on August 12, 1981, but with the PC-DOS list price set at $40 versus $240 for CP/M, most customers simply chose the former as the lower-cost option. Attorney Gerry Davis recalled that “IBM clearly betrayed the impression they gave Gary and me.”

Aftermath

DRI continued to thrive for several years with a multi-tasking operating system for the IBM PC-XT and a host of new products. The company also introduced operating systems with windowing capability and menu-driven user interfaces years before Apple and Microsoft.

At its peak, DRI employed over 500 people and opened operations in Asia and Europe. However, by the mid-1980s, in the struggle with the juggernaut created by the combined efforts of IBM and Microsoft, DRI had lost the basis of its operating systems business.

Dispirited, Gary, who never relished the responsibility of managing a large company or displayed the cut-throat business acumen of a Gates, sold the company to Novell Inc. of Provo, Utah, in 1991. Ultimately, Novell closed the California operation and, in 1996, disposed of the assets to Caldera, Inc., which used DRI intellectual property assets to prevail in a lawsuit against Microsoft.

In other pursuits, Gary founded KnowledgeSet with his friend and DRI VP of engineering, Tom Rolander, where they created the first CD-ROM encyclopedia for Grolier.

In an oral history for the Computer History Museum, Brian Halla, Intel’s technical liaison to DRI, recalls that Gary “showed me this VAX 11/780 that he had running in his basement, and he was so proud of it, and he said, ‘I figured out a way to have a computer generate animation,’ and he said, ‘Watch this.’ And he runs a demo of a Coke bottle that starts real slowly and starts spinning, and so as maybe several months went by, he lost interest in this, and he sold his setup to a little company called Pixar.”

Kildall continued to innovate after selling DRI. He moved to Austin, Texas, where he founded Prometheus Light and Sound to explore wireless home networking technology and participated in charitable work for pediatric AIDS.

Gary Kildall died in 1996 at age 52 following an accident in Monterey. His ashes are buried in Seattle, the hometown he shared with Bill Gates. Dorothy McEwen Kildall purchased the Holman Ranch in Carmel Valley and served on many community boards, including the Heritage Society of Pacific Grove. She died in 2005.

The Legacy of Gary Kildall

In 1995, the Software and Information Industry Association presented Gary Kildall with a posthumous Lifetime Achievement Award, citing eight significant areas in which he contributed to the microcomputer industry.

In an obituary published in the Microprocessor Report in 1994, his friend, the late John Wharton, commented, “I don’t think Gary ever really begrudged Bill Gates his business success or his personal fortune. … what I think Gary wanted most was to share his excitement and enthusiasm for computers and technology with others.”

Gary Kildall in 1988 Photo: Copyright Tom O’Neal, Carmel Valley, CA

On April 25, 2014, the Institute of Electrical and Electronics Engineers (IEEE), “the world’s largest professional association for the advancement of technology,” installed a bronze IEEE Milestone in Electrical Engineering and Computing plaque outside the former DRI headquarters at 801 Lighthouse Avenue. The Milestone program honors important events in electrical engineering and computing. Achievements such as Thomas Edison’s electric light bulb, Marconi’s wireless communications, and Bell Labs’ first transistor are recognized with plaques in appropriate locations.

The citation reads: “Dr. Gary A. Kildall demonstrated the first working prototype of CP/M (Control Program for Microcomputers) in Pacific Grove in 1974. Together with his invention of the BIOS (Basic Input Output System), Kildall’s operating system allowed a microprocessor-based computer to communicate with a disk drive storage unit and provided an important foundation for the personal computer revolution.”

In 2017, US Navy dignitaries, friends, family, and peers gathered to celebrate the dedication of the Gary A. Kildall Conference Room on the Naval Postgraduate School campus in Monterey. The ceremony included the installation of a duplicate of the IEEE plaque in the conference room.

Despite this wide recognition of his technical accomplishments, Gary’s legacy remains mired in a tangle of myths and conspiracy theories. The most persistent is driven by a 1982 comment attributed to Bill Gates and published in the London Times newspaper: “Gary was out flying when IBM came to visit, and that’s why they did not get the contract.”

The former editor of the Times, Harold Evans, atoned for that story in a PBS documentary and his book They Made America: Two Centuries of Innovators from the Steam Engine to the Search Engine. The subtitle of the chapter on Gary, “He saw the future and made it work. He was the true founder of the personal computer revolution and the father of PC software,” offers a sympathetic telling of the life and times of the entrepreneurial genius who helped give birth to the PC operating system 50 years ago this year.

Additional information at the Computer History Museum 

Comments in quotes in this article without source attribution are from Gary’s unpublished draft of Computer Connections: People, Places, and Events in the Evolution of the Personal Computer Industry, written in 1993. The Kildall family has authorized the online publication of extracts from this memoir in the blog Gary Kildall: In His Own Words.

The Computer History Museum has also made the source code of several early releases of CP/M available for non-commercial use.

A search for “Kildall” in the CHM collection catalog yields 45 records comprising objects, documents, and images, including a video of the 2014 CP/M IEEE Milestone Dedication event.

 

Main image: Gary Kildall at the first West Coast Computer Faire in the San Francisco Civic Auditorium in 1977. [CHM Object ID: 500004174 © Tom Munnecke/Hulton Archive/Getty Images]

Amplifying History
Published Wed, 06 Mar 2024 | https://computerhistory.org/blog/amplifying-history/

CHM acquires two early hearing aids that used the new transistor technology of the late 1940s to revolutionize the devices.

CHM Acquires Innovative Hearing Aids

With their hearing aids, the hard of hearing have consistently been among the earliest commercial adopters of new electronic technologies, as historian Mara Mills has shown. The histories of disability and of technology interweave in many different, fascinating, and important ways. The desire among many of the hard of hearing for assistive technology—hearing aids—has been a significant force in the history of electronics.


In the early years of the 20th century, the first hearing aids to use electricity were based on telephone technology. Many users found them terribly conspicuous, yearning for something smaller, more comfortable, and far less visible. The advent of the vacuum tube introduced the age of electronics, opening new possibilities for creating, altering, and transmitting sound.

Hearing aid manufacturers quickly adopted the new electronics technology, using tubes to amplify sound. They, along with the makers of portable radio sets, had an insatiable demand for smaller tubes that were more robust and less power hungry. The electronics industry responded, and the smaller tubes enabled miniaturized, less conspicuous, hearing aids.

Throughout the Second World War, military demands further drove the miniaturization of electronics, with ultra-miniature vacuum tubes and the intensive development of printed circuitry. After the war, hearing aid manufacturers quickly used these developments to further their own decades-long efforts at miniaturization.

Transistor Technology

At the end of the 1940s, an incredibly important development in the history of electronics occurred: the invention of the transistor. The transistor could perform the electronic functions of a vacuum tube, but by following an entirely different principle. Instead of flows of electrons through the vacuum in the interior of a tube, transistors worked by controlling the flows of electrons through solid materials called “semiconductors.” With this new principle, transistors could be made much smaller than vacuum tubes, requiring far less power. Eventually, they were made to be extremely reliable as well.

While early transistors were dedicated to telecommunications and military production, the earliest commercial uses of transistors were in hearing aids, in keeping with the hard of hearing being among the earliest adopters of new electronics technology and miniaturization. Perhaps the world’s foremost collector of transistors and related objects, Jack Ward, recently donated to the Computer History Museum two very early hearing aids that employed transistors: the Sonotone Model 1010 and the Telex Model 954.

Sonotone 1010 donated by Jack Ward to the Computer History Museum. Photo by Aurora Tucker.

Inside the Sonotone 1010 donated by Jack Ward to the Computer History Museum. Photo by Aurora Tucker.

The Telex 954 donated by Jack Ward to the Computer History Museum. Photograph by Aurora Tucker.

While the transistor was first demonstrated in a laboratory at the Bell Telephone Laboratories at the very end of 1947, the Sonotone appeared on the market in December 1952. It was the first commercial product to use transistors. The Telex Model 954 appeared somewhat later, in December 1954, and, like the Sonotone 1010, used two miniature vacuum tubes along with one transistor to provide the amplification of sound. CHM is honored to have these incredible devices in its collection, preserving artifacts so important to both disability history and the history of electronics.

Explore More

Mara Mills, “Hearing Aids and the History of Electronics Miniaturization,” IEEE Annals of the History of Computing, April–June 2011, pp. 24-44.

Jack Ward’s Transistor Museum.

Main image: Inside the Telex 954 and the edge of the Sonotone 1010 donated by Jack Ward to the Computer History Museum. Photo by Aurora Tucker.

Echoes of History
Published Wed, 14 Jun 2023 | https://computerhistory.org/blog/echoes-of-history/

Three generations of Sutherlands visit CHM to see Jim Sutherland’s 50-year-old creation, the ECHO IV home computer system.

Visits with computing pioneers are magical. Recently, Jim Sutherland, creator of one of CHM’s most intriguing artifacts, came to the Museum’s environmentally controlled storage facility with his son and grandson to see something he made over half a century ago.

Sutherland was the visionary engineer who, in 1965, built a home computing system based on minicomputer parts he had scavenged from work. He called it ECHO IV, an acronym for the “Electronic Computing Home Operator.”

ECHO quickly caught the attention of the media, appearing in dozens of publications. Like some of today’s coverage of new technology, the tone vacillated from wonder to irony. Even Jim’s wife Ruth remarked at the time, “At first, I thought it might really replace me!” Read the full story here.

The Sutherland family in front of ECHO IV. Jim sits at ECHO IV’s keyboard. His wife, Ruth, puts a raincoat on daughter Sally, while Jay and Ann look on. (Photo: Pittsburgh Post-Gazette, 1966)

Jim’s son Jay and grandson Evan took a transcontinental flight to visit CHM and see, perhaps for the last time, this wonderful invention of nearly 60 years ago. It was deeply moving to witness Jim’s joy at rediscovering something he had not seen in decades, seeing his pride at showing his grandson what he had built, and hearing Jay’s detailed memories of using ECHO IV as a young boy of about Evan’s age.

Occasions like this are great opportunities for revisiting the history of specific objects and asking questions of their creators. As former CHM Trustee Donna Dubinsky once said, “We live in an era when we can ask the great inventors of our days directly about their work . . . imagine being able to go up to Michelangelo and ask him questions.” And so, earlier that day, while Jay and Evan were on a guided tour of CHM, I conducted an extended oral history with Jim about ECHO IV as he sees it from today’s perspective. Stay tuned!

Main image:  From left, ECHO IV, Jim, Jay, and Evan Sutherland at CHM, May 18, 2023.

 

Weather By Computer
Published Wed, 07 Jun 2023 | https://computerhistory.org/blog/weather-by-computer/

Using homegrown satellite communications equipment in the early ’60s, the CDC 1604 laid the foundation for modern weather forecasting tools.

Laying the Foundation

Established on the Monterey Peninsula in 1961, the Fleet Numerical Weather Facility (FNWF), known locally as Fleet Numerical, was chartered to apply the newly emerging processing power of digital computers and communications technology to provide accurate weather and ocean condition prediction services to the US Navy. 

Based on the Naval Postgraduate School (NPS) campus and at a facility on Point Pinos in Pacific Grove, and using homegrown satellite communications equipment and Model #1, Serial #1 of the Control Data Corporation 1604 computer, FNWF laid the foundations for modern weather forecasting technology.

The Fleet Numerical Weather Facility

The Navy established a Numerical Weather Problems Group (Project NANWEP) in Suitland, MD, in 1958 to generate operational weather prediction products for the Navy. To take advantage of the computing capability at NPS, in March 1959, the Navy assigned NANWEP to Monterey under Capt. Paul M. Wolff, where, in 1961, it was renamed the Fleet Numerical Weather Facility (FNWF).

Wolff distinguished FNWF’s mission from other activities in the field: “Atmospheric and oceanographic analysis and prediction problems have been faced before — in the universities, in industry, in governmental agencies. To my knowledge, however, FNWF acts singularly in its treatment of the two fluids as a single, coupled system. Correct solutions to environmental problems demand this approach.” [1]

The Navy Acquires a Supercomputer

The NPS Department of Mathematics purchased its first electronic automatic digital computer, a National Cash Register NCR 102A, in 1953. It was used in practically all phases of the physical sciences, including early approaches to weather simulation.

NCR 102A at NPS. Photo by Dean Vannice. Source: Calhoun: The NPS Institutional Archive

Pioneering computer architect Seymour Cray and his team built the world’s first commercially successful transistorized computer at Control Data Corporation (CDC) in Minneapolis in 1959. The central computer weighed one ton, and the console half a ton. With a clock period of 5 microseconds, the 48-bit CDC 1604 was claimed to be the fastest machine of its time.

CDC 1604 transistorized logic module. Source: Calhoun: The NPS Institutional Archive

In 1958, when the Bureau of Ships contracted to acquire ten 1604s from CDC, Cray lobbied for the first system to be delivered to Monterey. [2] And in January 1960, he personally supervised the installation of Model #1, Serial #1 of the CDC 1604 in Room 101A of Spanagel Hall.

The CDC 1604 is delivered and assembled in Spanagel Hall. Source: Calhoun: The NPS Institutional Archive

“I was there when Cray sat at the 1604 console and, like a master pianist, ran through the test programs,” said Edward Norton Ward, a mathematician and the first computer technician hired by Professor W. R. Church, Chairman of the Mathematics Department. “I watched and listened. When it’s raining knowledge, you just hold out your hand.” [3]

Lt. Harry Nicholson at a 1604 console. Capt. Nicholson served as Commanding Officer of FNOC from 1982–86. Source: Calhoun: The NPS Institutional Archive

According to Professor Douglas Williams, who became Director of the NPS Computer Center in 1963, “It was used by submitting machine language programs on paper tape. There was no operating system and no assemblers, compilers, or utilities. I obtained a Fortran compiler—folklore says it was written by Seymour Cray.” [4] Despite its limitations, the 1604 boasted impressive computing power for the time, with 32,768 words of 48-bit core main memory and 100,000 computations per second.

CDC 1604 tape drive storage units. Source: Calhoun: The NPS Institutional Archive

In August 1960, the Monterey Peninsula Herald reported that FNWF demonstrated “the first surface weather map to be produced by a computer … it cuts the time for compiling hemispheric weather forecasts from hours to minutes. And is 40 percent more accurate than old hand methods.”

Printout shows the temperature at the sea surface and various depths. Source: Weather by Computer

FNWF acquired its own CDC 1604 in 1961, which was installed alongside the school’s machine in the converted lobby of the first floor of Spanagel Hall. The complete system, incorporating a CDC 160A for data transmission, a Varian 530 plotter, ASR-33 teletype machines, and tape storage drives, is shown below.

Diagram of Control Data computer system FNWF. Source: Weather by Computer

In 1963, CDC published a report, Weather By Computer, that described the FNWF operation and programs written for the 1604 that generated a broad range of weather predictions for naval operations worldwide.

Weather By Computer. Source: Computer History Museum

To eliminate conflicts between the immediate demands of computer processing time for weather prediction and the teaching needs of the school, in 1964, FNWF built a dedicated computer center on the NPS campus. To handle the increased computational load, a CDC 3200 computer (a 24-bit version of the 1604) was purchased in October and was running at full capacity by the end of the year.

Also, in 1964, FNWF established a separate Communications Division to design, fabricate, and test special-purpose electronic communications and interface devices to serve unique requirements for receiving and transmitting real-time weather data. Administrative offices, workshops, and R&D laboratories were located in a former Navy radar training facility on Point Pinos at 1352 Lighthouse Avenue, Pacific Grove.

When NPS acquired an IBM 360 Model 67 in 1967, Model #1, Serial #1 of the CDC 1604 was transferred to FNWF and moved to Point Pinos, where it continued to serve for archival storage of weather data.

All FNWF operations were consolidated at a single site at the Navy Annex, Monterey Airport, in 1974. Today, as the Fleet Numerical Meteorology and Oceanography Center (FNMOC), it serves as a primary DoD production site for computer-generated meteorological and oceanographic analysis and forecast products worldwide. The operation is highly respected, and its computing capability ranks as one of the most powerful in its field in the world.

Main image: The CDC 1604 supercomputer with a figure as scale. Source: Wikipedia. https://en.wikipedia.org/wiki/CDC_1604

 

NOTES

[1] Wolff, Paul M., “Oceanographic Data Collection,” Bulletin of the American Meteorological Society, Vol. 49, No. 2 (February 1968), p. 96.

[2] Created by Congress in 1940, the Bureau of Ships’ responsibilities included supervising the design, construction, conversion, procurement, maintenance, and repair of ships and other craft for the Navy; managing shipyards, repair facilities, laboratories, and shore stations; developing specifications for fuels and lubricants; and conducting salvage operations.

[3] Honegger, Barbara, “NPS Computing: 50 Years Golden and Growing,” Calhoun: The NPS Institutional Archive.

[4] Douglas Williams, Interviews, Calhoun: The NPS Institutional Archive.

[5] The Computer History Museum collection holds a 1604 main cabinet, all three sections of the operator’s console, and the core memory unit, together with numerous other related documents and manuals.

A Backup of Historical Proportions
Published Wed, 10 May 2023 | https://computerhistory.org/blog/a-backup-of-historical-proportions/

Discover what surprises await in CHM’s release of the Xerox PARC file system archive.

Access the Xerox PARC file system archive here.

An Ancient Anxiety

“Is my phone really backed up in the Cloud?” “When was the last time I backed up my laptop?” “Is it true that I need a local backup of my Google Drive?!” “Oh dear, I forgot my password!” Now that we have interwoven computers so deeply into our daily lives, an ancient anxiety has become a fiercer everyday companion for us. For centuries we have worried “Are my most precious records okay?” In the past, we calmed this anxiety using a variety of technologies: safety deposit boxes, shoe boxes, photo albums, photocopies, scriptoria, institutional archives, and more. In a world of digital computing, we are all too aware of the fragility of record keeping. In some ways, our ancient anxiety has expanded.

Scriptoria were dedicated spaces for the copying of manuscripts, making this a drawing of a 16th century backup. From the National Gallery of Art. https://www.nga.gov/collection/art-object-page.74850.html

Computing professionals have been living with this digital flavor of archival anxiety for longer than the rest of us. From the very beginning, the fluidity and fungibility of digital information came with fragility. Making matters worse, many of the means for holding and storing digital information were less reliable and much harder to work with than today’s. As a result, computing professionals met their anxiety about—and real challenges of—digital fragility with a new discipline: They started to make purposeful copies. They began to back things up.

A Laboratory for the Office of the Future

PARC in 2022. Cmichel67, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

In 1970, the well-heeled corporate behemoth Xerox, with a nearly perfect monopoly on the quintessential office technology of photocopying, cut the ribbon on a new and ambitious bet on its future: the Xerox Palo Alto Research Center (PARC). PARC was a large research and development organization comprising distinct laboratories. Several concentrated on extending Xerox’s dominance of photocopying, like the General Science and Optical Science Laboratories. Others, specifically the Computer Science and Systems Science Laboratories, were aimed at a new goal. They would develop computer hardware and software that could plausibly form the basis for the “office of the future” some ten to fifteen years hence, giving Xerox a profound head start in this arena. The previous year, Xerox had leapt into the computer industry through the purchase, for an enormous sum, of the company SDS (Scientific Data Systems).

The leadership of PARC scoured the computing community across the United States and recruited what proved to be an astonishing collection of young talent. Part of the attraction PARC held for this cohort was, surely, the fact that the new laboratories held the opportunity to pursue a vision about the future of computing that they already held deeply. In this future, computing would be increasingly personal, graphical, interactive, and networked. Xerox’s deep pockets, and a PARC leadership that shared this vision, proved compelling.

Backing Up the Office of the Future

At PARC, the new recruits wanted to have the same sort of computing environment they had been familiar with in their academic research: a PDP-10 mainframe from the Digital Equipment Corporation running the timesharing TENEX operating system from BBN. Xerox refused. They had just purchased SDS, a maker of timesharing computers, and couldn’t countenance such a major purchase from their prime competitor. The PARC computing crowd responded by simply building their own clone of a PDP-10, calling it MAXC (an eye-poking pun on the name of the founder of SDS, Max Palevsky), and installed TENEX. Immediately, they began to back up what they were creating with MAXC. Using a TENEX program named BSYS, the PARC researchers could store their data and programs on 9-track magnetic tapes. Tape backups had arrived at PARC.

A 9-track tape drive in the collection of the Computer History Museum. https://www.computerhistory.org/collections/catalog/102752062

The next several years, to 1975, saw a remarkable flourishing of computing developments at PARC. The researchers created the Alto computer and a swath of novel software for it that, through the subsequent decades, has broadly defined our use of computers. To learn more about this story, you might start here. Critical to the use and success of the Alto were PARC’s innovations in computer networking, specifically the creation of Ethernet for wired connectivity. Ethernet wove the many Altos across PARC together, further connecting them to the now two MAXC systems as well as a variety of printers. Moreover, PARC researchers developed the PUP networking protocol, allowing Xerox to knit together many local Ethernet networks across the US into a sprawling corporate internetwork.

Individual Alto users could store and back up their files in several ways. Altos could store information on removable “disk packs” the size of a medium pizza. Through the Ethernet, they could also store information on a series of IFSs, “Interim File Servers.” These were Altos outfitted with larger hard drives, running software that turned them into data stores. The researchers who developed the IFS software never anticipated that their “interim” systems would be used for some fifteen years.

With the IFSs, PARC researchers could store and share copies of their innovations, but the ancient anxiety demanded the question: “But what if something happened to an IFS?!” Here again, Ethernet held a solution. The PARC researchers created a new tape backup system, this time controlled by an Alto. Now, using Ethernet connections, files from the MAXC, the IFSs, and individuals’ Altos could be backed up to 9-track magnetic tapes. Later, at the end of the 1970s, the PARC researchers even developed a new program called ARCHIVIST, which ran on a more powerful successor to the Alto known as the Dorado. ARCHIVIST automated the process, allowing researchers to archive to and retrieve files from the IFSs by sending simple commands through electronic mail.
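No listing of ARCHIVIST’s command language survives in this account, so the sketch below is hypothetical from top to bottom: it only illustrates the workflow described, a server parsing a simple emailed command and archiving or retrieving the named file.

```python
# Hypothetical sketch of a mail-driven archiver in the spirit of ARCHIVIST.
# The command names and reply texts are invented; the dictionary stands in
# for the IFS storage that the real system read from and wrote to.

ARCHIVE: dict[str, str] = {}

def handle_mail(sender: str, body: str) -> str:
    """Parse one emailed command line and return the reply to mail back."""
    parts = body.strip().split(maxsplit=1)
    if len(parts) != 2:
        return "usage: archive <file> | retrieve <file>"
    command, filename = parts
    if command == "archive":
        ARCHIVE[filename] = f"(contents of {filename} stored for {sender})"
        return f"{filename} archived."
    if command == "retrieve":
        return ARCHIVE.get(filename, f"{filename} not found in the archive.")
    return f"unknown command: {command}"

print(handle_mail("researcher", "archive memos/alto-plans.bravo"))
print(handle_mail("researcher", "retrieve memos/alto-plans.bravo"))
```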

From Backup to Migration

Nearly a decade later, at the close of the 1980s, PARC’s researchers increasingly adopted commercially produced computers from outside the company, rather than the Altos, Dorados, and other systems that they had devised in-house. These outside computers were new workstations produced by a local firm, Sun Microsystems. While the Sun systems were directly inspired by the Altos, they brought PARC closer to the computing mainstream through Sun’s embrace of the Unix operating system and microprocessors. This shift to Sun implied yet another wrinkle for PARC’s solutions to its archival anxieties.

A Sun workstation in a Stanford laboratory. https://www.computerhistory.org/collections/catalog/102657163

By the start of the 1990s, PARC’s computer researchers began storing their information on new Unix-based servers using Sun’s Network File System (NFS) protocol, which has gone on to be a standard for Unix and Linux systems worldwide. These new PARC NFS servers used 8mm digital tape cassettes for backup. MAXC was decommissioned, and no one used the ARCHIVIST system anymore. PARC had accumulated an impressive thicket of 9-track magnetic tapes holding backups of programs, data, messages, and documents from the astonishing contributions of PARC to computing across the 1970s and 1980s, but now no one was using the 9-track tape systems anymore.

With this, a particularly horrible aspect of the ancient archival anxiety came to the fore: “What if I lose the key to my lock box?” “What if I can’t access my backups anymore?” Now backup’s twin, migration, took center stage. PARC’s computer crowd wrote fresh programs that migrated the data from the 9-track tapes to the new 8mm digital tape cartridges, which they also used for their NFS servers. The older tapes were discarded, and the 8mm tapes of this remarkable record of the work of the 1970s and 1980s then sat for another decade.

An 8mm data tape cartridge. Mister rf, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons

Migration Springs Eternal

Like hope, migration springs eternal. In 2003, the researchers at PARC realized that, while 8mm tapes were still in use, other media were becoming more popular. To keep the archive of PARC’s astonishing accomplishments accessible, migration would again be necessary. In that year, PARC researcher Dan Swinehart approached Al Kossow to tackle the challenge. Kossow was then a senior engineer at Apple Computer and already known as a passionate preservationist of both computer hardware and software, especially around the Alto. Kossow was able to transfer all the data from the 8mm tapes to a set of DVD-ROM discs. Again, this unique archive for the history of computing was safe, sound, and accessible—strictly within the confines of PARC.

A few years later, in 2006, Kossow joined the Computer History Museum (CHM) full-time as the Robert N. Miner Software Curator. When he had worked on the migration of the PARC archive to DVD, Kossow had created an extra CD-ROM onto which he had copied almost 15,000 files relating to the work done specifically on the Alto in the 1970s and 1980s, reflecting his keen appreciation for the importance of the Alto in the history of computing. Now at CHM, he and others began an effort to see if PARC would be willing to donate the extra Alto CD-ROM to CHM, and thereby open it up to the world. Testifying to the perseverance of CHM and the sagacity of PARC, the Alto archive CD-ROM was donated to CHM in 2011 with permission to make it public.

Public Translation

CHM now faced a major challenge. How could this nearly four-decade-old software, data, and information be made accessible to today’s public? The information had been created on a now deeply obsolete experimental computer, with research software that no one had touched in decades. Much of the Alto archive was in now-arcane formats for printing, like “Press,” or for document editing, like “Bravo.” Certainly, you couldn’t provide the public with working Altos to read the archive.

Paul McJones (right) in 1973, with Edsger Dijkstra. https://mcjones.org/dustydecks/archives/2011/04/

An answer to the dilemma came in 2013, through the contributions of Paul McJones. McJones is a retired software engineer and established software preservationist who had met many (future) PARC researchers while working at Berkeley in the 1960s and 1970s. In the second half of the 1970s, McJones had done programming for the new division set up by Xerox to commercialize PARC’s computer innovations. He worked with many former PARC researchers again in projects at DEC’s laboratory in Palo Alto, and later as a principal scientist at Adobe.

During 2013, McJones crafted a program that processed the Alto archive, creating HTML translations of Bravo files and PDF translations of Press files, and organizing them into a set of web pages for access, search, and browsing. With this further act of migration qua translation, the Alto archive was at last ready to share with the world, and in 2014 https://xeroxalto.computerhistory.org/ went live.
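The archive story above doesn’t record the internals of McJones’s converter, but the overall shape of such a translate-and-organize pipeline is easy to sketch. Below is a minimal, hypothetical Python rendering: it walks an archive tree, dispatches files by extension to converters (mere stand-ins here for the real Bravo-to-HTML and Press-to-PDF logic, which is far more involved), and writes a simple index page for browsing. All file extensions, function names, and paths are illustrative assumptions, not McJones’s.

```python
import html
from pathlib import Path

# Stand-ins for the real converters. Decoding Bravo's embedded formatting
# codes and rendering Press page descriptions is far more involved; these
# stubs only keep the pipeline runnable end to end.
def bravo_to_html(raw: bytes) -> str:
    return "<pre>" + html.escape(raw.decode("latin-1", "replace")) + "</pre>"

def press_to_pdf(raw: bytes) -> bytes:
    return raw  # a real converter would emit a PDF rendering of the pages

# extension -> (converter, output suffix, True if the output is text)
CONVERTERS = {
    ".bravo": (bravo_to_html, ".html", True),
    ".press": (press_to_pdf, ".pdf", False),
}

def translate_archive(src: Path, dst: Path) -> None:
    """Walk the archive tree, convert known formats, and build an index."""
    entries = []
    for path in sorted(src.rglob("*")):
        if not path.is_file() or path.suffix.lower() not in CONVERTERS:
            continue  # leave unrecognized formats for a later pass
        convert, suffix, is_text = CONVERTERS[path.suffix.lower()]
        out = dst / path.relative_to(src).with_suffix(suffix)
        out.parent.mkdir(parents=True, exist_ok=True)
        result = convert(path.read_bytes())
        if is_text:
            out.write_text(result, encoding="utf-8")
        else:
            out.write_bytes(result)
        entries.append(out.relative_to(dst))
    links = "\n".join(f'<li><a href="{e}">{e}</a></li>' for e in entries)
    (dst / "index.html").write_text(f"<ul>\n{links}\n</ul>\n", encoding="utf-8")

# Example: translate_archive(Path("alto-archive"), Path("website"))
```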

From the Alto Archive to the PARC File System Archive

Since its launch, the Alto archive has proved essential for efforts at both CHM and the Living Computer Museum (LCM) in Seattle. At LCM (which sadly closed during the COVID pandemic), senior software engineer Josh Dersch used the archive and Al Kossow’s Bitsavers repository to build ContrAlto, an emulator for the Xerox Alto that can run on contemporary computers and, in turn, run the software found in the Alto archive. At LCM, ContrAlto was a key part of an impressive Alto restoration that visitors could use. At CHM itself, the Alto archive proved indispensable to a number of projects, ranging from the Museum’s own restoration of an Alto to a major event on the history of the Alto and a series of video ethnographies of software innovations on the Alto.

Charles Simonyi (standing) and Tom Malloy demonstrate the Bravo word processor on a restored Alto for a 2017 Computer History Museum event. Courtesy Doug Fairbairn.

But what of the rest of the PARC archive from the 1970s and 1980s that resided on the sixteen or so DVDs that remained sitting in a box? Could the rest of the archive be collected by CHM and, through it, be released to the public? Did the archive contain sensitive personal information that should not be released? Did it contain intellectual property that was still vital for PARC, or that was owned by others? Could these types of materials be identified and filtered out?

Once again, Paul McJones offered his expertise and help. Acting as a CHM volunteer, he entered into an NDA (non-disclosure agreement) with PARC enabling him to work there on the remaining archive. He copied the archive from the DVDs to a contemporary hard drive and identified personal files that should be filtered out. He used his translation and organization program to make the remaining archive readable and accessible, and then transferred it to PARC researchers and legal staff for review. Eventually it was sent to CHM. The resulting archive of nearly one hundred fifty thousand unique files of PARC’s groundbreaking work from the 1970s and 1980s arrived at CHM on a thumb drive and could now be made available to the public.

A screenshot of a Press file, detailing PARC backup procedures, now rendered as PDF in the new archive.

With the new archive, new challenges arose in preparing it for public release. Paul McJones’s program could convert Press and Bravo files to PDF and HTML, making them readable, but not the Tioga files found in great abundance in the new archive. Tioga is the file format of a successor text editor to Bravo that the PARC researchers created and used extensively in the 1980s. A significant fraction of the archive remained inscrutable. This time, Josh Dersch, the creator of the Alto emulator, answered the call. He supplied logic for McJones’s program to render Tioga files as HTML documents. The archive was finally unlocked.

The PARC File System Archive, Unlocked

The nearly one hundred fifty thousand unique files—around four gigabytes of information—in the archive cover an astonishing landscape: programming languages; graphics; printing and typography; mathematics; networking; databases; file systems; electronic mail; servers; voice; artificial intelligence; hardware design; integrated circuit design tools and simulators; and additions to the Alto archive. All of this is open for you to explore today at https://info.computerhistory.org/xerox-parc-archive. Explore!

One thing that is missing, hopefully temporarily, is the set of files related to the critically significant programming language and environment Smalltalk. Smalltalk is a key piece in both the history of object-oriented programming and that of the graphical user interface. The Smalltalk materials in the archive are currently under review by the company Cincom, which owns significant intellectual property rights in Smalltalk and markets Smalltalk-based software globally today. An additional unresolved question is what 8mm tape backups may remain at PARC for the NFS servers, holding the archives of work done at PARC across the 1990s and into the new millennium. That is a topic for further investigation.

Exploring the Archive: The Unexpected Story of Interscript

What kinds of discoveries await in https://info.computerhistory.org/xerox-parc-archive? I’d like to share something really surprising and fascinating that I came across in the archive—a new story that has enriched my view of a tremendously important topic in the history of computing. I hope that it might inspire you to find your own discoveries in the archive.

Take a moment to consider how most writing occurs today. What tools do people most commonly use? Pencil and paper? Pen or brush and ink? Compare that to all the writing that we do through computing: taps on a keyboard—physical or onscreen—assembling texts, messages, posts, mail, lists, and documents of a bewildering assortment. Think also of voice-to-text, itself a kind of writing, sending messages, submitting search queries, and the like. In many parts of the world today, I think it’s very safe to say that most writing takes place through computing. How did this happen? One thing is certain: it did not happen on its own. How did we make computers write? This question animates my new book project, and while it is at an early stage, one finding is absolutely clear: many of the most innovative minds in the history of computing have devoted an extraordinary amount of time and energy to this very project of making computers write.

One of the episodes in this long historical project is the creation of PostScript, a coding language that enabled computers to produce high-quality printed pages. It acted as a common language that let you print exactly what you wanted, no matter which computer, app, or printer you happened to be using. I wrote about the story of PostScript a few months ago, when CHM released the source code for PostScript in connection with the fortieth anniversary of Adobe, the company that made it. While you may not be familiar with PostScript, you are certainly intimately aware of a technology that developed directly out of it: PDF.

Adobe cofounders John Warnock (left) and Chuck Geschke (right). Courtesy Adobe Inc. and Doug Menuez.

Adobe was founded in 1982 by two Xerox PARC computer researchers, Chuck Geschke and John Warnock, and their first order of business was to create PostScript. The reason was that the pair had worked with others—Butler Lampson, Bob Sproull, and Brian Reid—on a very similar project at PARC, the coding language Interpress. While Interpress differed from PostScript in some aspects of fundamental approach, the intention behind Interpress was exactly the same: creating a coding language for the high-quality printing of documents. Computers, programs, and printers that could “speak” Interpress would be able to cooperate seamlessly. The Interpress effort had started in Geschke’s laboratory at PARC in 1979, and by 1981 it had reached an advanced state of development. Leadership at Xerox had even agreed that Interpress would become the whole corporation’s standard, but that this would take years to happen. Concerned about that slow pace in the face of rapid developments in computing, Geschke and Warnock left PARC in 1982, forming Adobe to get a standard coding language for printing quickly into the world.

What I stumbled across in https://info.computerhistory.org/xerox-parc-archive reveals part of the story of what happened next for the researchers who had worked on Interpress and who remained at PARC. This was a new effort, initially called InterDoc, later Interscript, that aimed to do for editable documents just what Interpress and PostScript did for printable documents. Perhaps the same approach—creating a new coding language for the interchange of documents between various computers and apps—could work here as well.

The promise of Interscript, in two figures from a 1985 Xerox document. https://bitsavers.org/pdf/xerox/interscript/IntroductionToInterscript.pdf

The Interscript effort, as electronic mail held in the archive shows, really took off in 1981 as the success of Interpress became clear. Spurred by Butler Lampson and Jim Mitchell, the project also included Brian Reid, who had worked with Lampson on Interpress, as well as Bob Ayers and Jim Horning, who worked especially closely with Mitchell. The Interscript project ran from 1981 into at least early 1984.

An email exchange in the archive, documenting the emergence of the first name for the effort, InterDoc.

What the Interscript team immediately discovered was that editable documents presented a greater challenge than printable documents. Editable documents were inherently dynamic. By definition, they were going to be subject to constant change. And these changes were not just about what words they contained and in what order. These changes were also about the organization and appearance of the text, the layout, from outlined or numbered text to headlines, captions, illustrations, columns, and the like. Creating a coding language that could contend with such dynamic complexity was a true challenge.

Furthermore, the editors emerging in the first half of the 1980s ran from the rudimentary to the elaborate. This spread of editor functions was itself another challenge. How could an interchange format serve editors from the simplest to the most complex? How could simple editors work on just the parts of a document they could handle, while leaving everything else alone?

The tree structure of Interscript’s layout templates. https://bitsavers.org/pdf/xerox/interscript/IntroductionToInterscript.pdf

To meet the challenges, the Interscript team again turned to computer science. Not only would they turn to a new coding language as part of the solution, but they would also look to one of the key organizational forms of computer science, the “tree” data structure. In it, elements are connected to one another in a hierarchy, just as the trunk of a tree leads to branches, and on to sticks and then twigs, each step in greater profusion. Very roughly put, Interscript would capture the possible layouts of an editable document as a tree of templates. Careful control algorithms would then guide the “pouring” of the text into the proper templates of the tree. The resulting scripts would allow the editable document to be reconstituted, edited, and then shared between different computers and programs.
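As a toy illustration of that tree-of-templates idea (my own sketch, not Interscript’s actual notation or algorithms), here is a Python rendering in which the leaves of a layout tree hold a fixed number of words and a depth-first “pour” fills them in order, returning whatever text did not fit:

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """A node in a layout-template tree; only leaves hold text."""
    name: str
    capacity: int = 0                       # words a leaf can hold
    children: list["Template"] = field(default_factory=list)

def pour(tpl: Template, words: list[str]):
    """Depth-first 'pouring' of words into the leaves of the tree.
    Returns (placed, leftover)."""
    if not tpl.children:                    # a leaf: take what fits
        return [(tpl.name, words[:tpl.capacity])], words[tpl.capacity:]
    placed = []
    for child in tpl.children:
        got, words = pour(child, words)
        placed.extend(got)
    return placed, words

page = Template("page", children=[
    Template("headline", capacity=4),
    Template("body", children=[Template("column-1", capacity=8),
                               Template("column-2", capacity=8)]),
    Template("caption", capacity=3),
])

placed, leftover = pour(page, "A short test document poured into a page template".split())
# placed pairs each leaf with its words; leftover is whatever overflowed.
```

A real interchange format would also have to record the template tree itself, so that a simple editor could rewrite the words in one leaf while leaving the rest of the structure untouched.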

A portion of a November 1983 Tioga document, rendered as HTML in the archive, summarizing some of the milestones and motivations of the Interscript project.

Although significant progress had been made on designing Interscript into early 1984, the effort then appears to have ended abruptly. While Butler Lampson, in a recent telephone interview with me, holds that it ultimately ended because it was “naïve” given the complexity of editable documents, another factor was that, at the end of 1983, the Computer Science Laboratory at PARC descended into chaos. This was the laboratory that housed the Interscript project, and its charismatic leader, Bob Taylor, abruptly resigned, soon followed by half of its technical staff. Lampson left to rejoin Taylor, and Mitchell, temporarily thrust into the position of manager of the unraveling laboratory, himself quickly departed for Acorn Computers.

Unsolved Problems

Remarkably, Lampson explains, no one has yet solved the problem that Interscript set out to address. We still lack a common format for editable documents that contends well with layout. In his view, only partial and de facto solutions exist. Microsoft’s Word, itself originally based directly on the Bravo editor from Xerox PARC, has become a de facto standard, but only because any editor needs to work with Word documents if it is to be commercially viable. And even so, we all know how layout suffers when moving a document from one editor to another. PDF, with its roots in printed documents, succeeds with editing only in limited ways. For his part, Mitchell believes that the fundamental approaches of Interscript had great promise, and that, had they been more diligently pursued by PARC and Xerox, our lives with electronic documents could have been much different, and for the better.

So here, in a single, small directory in https://xeroxparcarchive.computerhistory.org, lies a fascinating story about making computers write, and an unsolved problem within it. Who knows, perhaps the person who finally solves it will find inspiration in the archive.

Acknowledgements

This archival project, and this article, would have been impossible without the efforts of:
Al Kossow
Paul McJones
Hansen Hsu
Josh Dersch
Butler Lampson
Jim Mitchell
Tim Curley
Heather Walker
Eric Bier
John Kitchura
PARC, A Xerox Company
Xerox
James K. Foote, author of the 1991 README in the rosetta.tar file of the PARC archive, who accomplished the transfer of the 9-track MAXC tapes to 8mm tapes.

Additional Resources

David C. Brock, “50 Years Later, We’re Still Living in the Xerox Alto’s World,” https://spectrum.ieee.org/xerox-alto

The Alto in CHM’s flagship exhibition, Revolution: The First 2000 Years of Computing, https://www.computerhistory.org/revolution/input-output/14/347

A selection of video recordings featuring an Alto computer restored by CHM, https://youtube.com/playlist?list=PLQsxaNhYv8dbSX7IyztvLjML_lgB1C_Bb

A 1986 lecture by Alan Kay, “The Dynabook—Past, Present, and Future,” https://www.youtube.com/watch?v=GMDphyKrAE8&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=8

A 1986 lecture by Butler Lampson, “Personal Distributed Computing – The Alto and Ethernet Software,” https://www.youtube.com/watch?v=h33A-KWJKDQ&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=9

A 1986 lecture by Chuck Thacker, “Personal Distributed Computing – The Alto and Ethernet Hardware,” https://www.youtube.com/watch?v=A9n2J24Jg2Y&list=PLQsxaNhYv8dbIuONzZcrM0IM7sTPQFqgr&index=10

Main Image: The Xerox PARC File System Archive, newly released by the Computer History Museum.

 

SUPPORT CHM’S MISSION

Blogs like these would not be possible without the generous support of people like you who care deeply about decoding technology. Please consider making a donation.


The post A Backup of Historical Proportions appeared first on CHM.

]]>
The Remarkable Ivan Sutherland https://computerhistory.org/blog/the-remarkable-ivan-sutherland/ Tue, 21 Feb 2023 12:50:16 +0000 https://computerhistory.org/?p=26909 CHM releases to the public for the first time a full oral history with Ivan Sutherland, pioneer of computer graphics, virtual reality, asynchronous systems, and more.

The post The Remarkable Ivan Sutherland appeared first on CHM.

]]>
In His Own Words

Ivan Sutherland has blazed a truly unique trail through computing over the past six decades. Along the way, he helped to open new pathways for others to explore and dramatically extend: interactive computer graphics, virtual reality, 3D computer graphics, and asynchronous systems, to name but a few.

The Computer History Museum is delighted to make public its two-part oral history with Ivan Sutherland, one of the most influential figures in the story of computing to date. These new oral history interviews present a wonderful opportunity to learn more about Ivan Sutherland’s life in computing directly from the source, with his own reflections and interpretations and in his own words. The transcripts for these interviews can be viewed and downloaded here and here. And the full interview video can be viewed below.

The Museum is deeply grateful to Bob Sproull, a lifelong colleague of Sutherland and himself a major figure in computing, for his roles as instigator, interviewer, and editor for these oral histories, and for involving me, Marc Weber, and Jim Waldo in the effort. The Museum is also delighted to make these oral history interviews public during the 60th anniversary year of Ivan Sutherland’s breakthrough in interactive computer graphics, the program Sketchpad, for which he earned his PhD from MIT in 1963.

A Man of Many Parts

There is a phrase, far more popular in 17th and 18th century England than it is today, that recurs for me when thinking about Ivan Sutherland and the remarkable story of his life in computing: “A man of many parts.” The description was used for an individual who had made serious contributions to a domain, while also possessing multiple, and often diverse, talents and pursuits. The description fits Ivan Sutherland well, but I think it also misses something important: there is a commonality in Sutherland’s multiple contributions and accomplishments, a connective tissue or shared wellspring for his many parts.

To get at this wellspring, start with geometry. From his youth, Sutherland possessed an unusually keen spatial, geometric intuition. In his mind and at his hands, he experienced an immediacy in perceiving how things fit and worked together. Perspective drawing involves a set of techniques to represent a three-dimensional scene on the two-dimensional plane of a sheet of paper or a stretch of canvas. These renderings can proceed in different ways, determined by the number of vanishing points employed. Together the vanishing points define the viewpoint of the observer. One-point, two-point, and three-point perspectives are all very different, providing distinct ways to understand the represented scene.
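The geometry behind vanishing points takes only a line or two to state (this is standard projective geometry, not anything drawn from Sutherland’s own work). Put the viewer at the origin looking down the z-axis, with the picture plane at distance d:

```latex
\[
  (x', y') \;=\; \Bigl(\tfrac{d\,x}{z},\ \tfrac{d\,y}{z}\Bigr)
  \qquad\text{(projection of the scene point } (x, y, z)\text{)}
\]
\[
  \lim_{t \to \infty}
  \Bigl(\tfrac{d(x_0 + ta)}{z_0 + tc},\ \tfrac{d(y_0 + tb)}{z_0 + tc}\Bigr)
  \;=\; \Bigl(\tfrac{d\,a}{c},\ \tfrac{d\,b}{c}\Bigr)
  \qquad\text{for a line with direction } (a, b, c),\ c \neq 0.
\]
```

Every line sharing a direction thus converges to the same image point, its vanishing point, while directions with c = 0 remain parallel on the page. Whether a rendering is one-, two-, or three-point perspective simply reflects how many of the scene’s principal axes point away from the picture plane.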

Thomas Eakins, Untitled (Perspective study of boy viewing object), Hirshhorn Museum and Sculpture Garden, Smithsonian Institution, Washington, DC, Gift of Joseph H. Hirshhorn, 1966. Accession Number 66.1553.A-B

This switching of viewpoints, the ability to look at something from a fresh and unexpected angle, and then to integrate this new perspective with those that came before, seems to me the link between Sutherland’s unusual spatial intuition and his diverse contributions in computing. It’s an ability to find a new viewpoint on a subject, to look at it from this novel perspective, and then to explore how this vantage might change the subject itself through fresh solutions and directions.

Connections and Intersections

Ivan (left) and Bert Sutherland at the Computer History Museum in 2017. Courtesy Doug Fairbairn.

In what follows, I trace some of the lines of Sutherland’s story, intersecting them with related materials held in the Museum’s collection. As recounted in his oral history interviews, Ivan’s life in computing was profoundly shaped by interactions he and his brother Bert had with two central figures in the early history of computing: Edmund Berkeley and Claude Shannon. Bert, who went on to a remarkable career in computing himself, distinguished by his roles as a research manager at Xerox PARC and at Sun Laboratories, told his story in his own oral history with the Museum.

The Sutherland brothers, through a connection of their mother’s, began visiting Edmund Berkeley in New York City from their home in Scarsdale while Ivan was still in grade school. At the time, Berkeley was establishing himself as a leading author, publisher, and consultant for the new world of digital computers. In Berkeley’s offices, the Sutherland brothers encountered his light-seeking robot “Squee,” now in the collection of the Computer History Museum, which also holds some of Berkeley’s papers.

Berkeley’s “robot squirrel,” Squee. Computer History Museum, B1630.01, © Mark Richards. https://www.computerhistory.org/collections/catalog/B1630.01

The Sutherland brothers worked on their own versions of light-seeking robots afterward at home, using surplus parts their engineer father helped them source in New York City, and the pursuit proved long-lasting for Ivan. As an undergraduate engineering student at Carnegie Tech (today’s Carnegie Mellon University), and then again during his early stint as a graduate student at Caltech (before moving to MIT after one year), Sutherland continued to build more advanced, refined light-seeking robots of his own design. The reason? Aesthetics, he explains in his oral history. For Sutherland, engineering design has a strong aesthetic dimension. Beauty and simplicity gave the practice of engineering an aesthetic, an affective pull. “In fact, I think that engineering and art are very closely related,” he explains.

Ivan Sutherland discusses a surplus military gunsight computer his father installed for the brothers in the family kitchen. From the new CHM oral history.

In Berkeley’s offices, the Sutherland brothers also had the opportunity to work with his new creation, Simon, a very simple and inexpensive computer. Unlike the giant mainframes of this era, which relied on thousands of vacuum tubes, Simon was animated by a handful of inexpensive relays—simple electrical on/off switches. Nevertheless, the machine was able to perform mathematical and logical operations.

Berkeley’s Simon. Computer History Museum, 102627259, © Mark Richards. https://www.computerhistory.org/collections/catalog/102627259

Further, Simon was programmable, using instructions encoded on a punched paper tape. During his high school years in the 1950s, Ivan Sutherland was able to devise a working program for Simon, allowing it to perform division, quite a feat for the humble machine. “I’m quite proud of having written a division routine for a two-bit computer when I was in high school,” he explains in the oral history. “So I can almost literally say I’ve been in the computer business nearly all my life.”
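The oral history does not record how the routine worked, but on a machine with only addition, subtraction, and a conditional branch, the textbook method is division by repeated subtraction. A hypothetical Python rendering of that idea:

```python
def divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Division by repeated subtraction: the kind of loop a minimal
    machine with add/subtract and a conditional branch can run."""
    if divisor == 0:
        raise ZeroDivisionError
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor    # one subtraction per loop pass
        quotient += 1
    return quotient, dividend  # what remains is the remainder

assert divide(3, 2) == (1, 1)  # numbers at the scale a two-bit machine holds
```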

Through Berkeley, the Sutherland brothers were introduced to another key figure in the early years of digital computing: Claude Shannon, renowned for his development of information theory. While a maestro of abstraction, Shannon was also a keen builder. During a visit by the brothers to his office at the Bell Telephone Laboratories in northern New Jersey, Shannon showed them his creation Theseus. It consisted of a small maze of movable metal panels affixed to the top of a metal box containing magnets and relay electronics like those of Berkeley’s Simon. Through the action of the relay electronics and magnets, a toy mouse was able to find its way through the maze and then “remember” the successful route. While the Sutherland brothers were duly impressed, their attempts to recreate this early effort in machine problem-solving and artificial intelligence proved unsuccessful.

Claude Shannon with Theseus. Computer History Museum, 102630792. https://www.computerhistory.org/collections/catalog/102630792

Breakthrough at MIT

After graduating from Carnegie Tech, Ivan Sutherland headed to Caltech for graduate studies in electrical engineering. There, as he recounts in his oral history, he was invited to attend a lunch with Marvin Minsky and Oliver Selfridge, two central figures in digital computing at MIT and the new field of artificial intelligence. Over the meal, Sutherland listened to Minsky and Selfridge’s enthusiastic reports of new computer developments at MIT and its Lincoln Laboratory. Adding to Sutherland’s excitement about the computer activity at MIT was the fact that Claude Shannon had moved there. Sutherland quickly decided to continue his graduate work at MIT, and Shannon agreed to advise him.

Two unidentified women at the controls of the TX-2 in 1962. http://www.bitsavers.org/pdf/mit/tx-2/photographs/2022-10-31/P91-206_RR_127176.jpg

Once at MIT, Sutherland met with Wesley Clark, the designer and impresario of an immensely powerful experimental computer, the TX-2, at MIT’s Lincoln Laboratory. Clark had designed the TX-2 to incorporate two critical innovations in computer component technology: high-speed switching transistors and large-capacity magnetic core memories. The machine would provide valuable lessons about the use, capabilities, and potential of these new technologies.

A transistorized “flip-flop” logic module from the TX-2. Computer History Museum, 102732767. https://www.computerhistory.org/collections/catalog/102732767

But perhaps more importantly, for Clark the TX-2 had the potential to make real a kind of computing that could become more widespread in the future. As Sutherland explains in his oral history, “Wes took TX-2 and treated it as a window into the future of what computing might be if everybody had one of his own.” Sutherland proposed to use TX-2 to create software for generating engineering drawings. Without hesitation, Clark gave him access to the machine.

Ivan Sutherland discusses the origins of Sketchpad in his new CHM oral history.

In January 1963, Ivan Sutherland successfully completed his PhD on the system he created on the TX-2, Sketchpad. With it, a user was able to interactively, and in real time, create line drawings on the computer’s CRT screen, using a light pen for direct input on the display. Sketchpad afforded many different capabilities for working with these line drawings, such as the automatic completion of shapes, sizing, the ability to copy and repeat elements, and more.

Ivan Sutherland using Sketchpad on the TX-2, circa 1962-1963. Computer History Museum, 102652182. https://www.computerhistory.org/collections/catalog/102652182

A Sketchpad image, from Ivan Sutherland’s dissertation, 1963. Computer History Museum, 102726907. https://www.computerhistory.org/collections/catalog/102726907

For Sutherland, and for many others who experienced and learned about it, Sketchpad represented much more than just a new way to create line art. As he put it in his thesis, “The Sketchpad system makes it possible for a man and a computer to converse rapidly through the medium of line drawings. Heretofore, most interaction between men and computers has been slowed down by the need to reduce all communication to written statements that can be typed; in the past, we have been writing letters to rather than conferring with our computers… The Sketchpad system… opens up a new era of man-machine communication.” A listing of Sutherland’s source code for Sketchpad in the Computer History Museum’s collection is available here, and his 1994 lecture about the history of Sketchpad can be viewed here.

Innovation in the Military

After MIT, Sutherland fulfilled his ROTC commitments to military service by serving in the US Army, first at the NSA, where he continued work on computer graphics, and then as the second director of the Information Processing Techniques Office of ARPA, the Advanced Research Projects Agency of the Department of Defense. Only in his mid-twenties, Sutherland succeeded the MIT psychologist J. C. R. Licklider, who had established the office and its leading role in supporting computer science and artificial intelligence research in the nation.

While Sutherland continued many of Licklider’s projects at ARPA, he added new projects of his own to the mix. Critically, Sutherland supported a new effort by Wesley Clark, the designer of the TX-2, who had moved from MIT to Washington University in St. Louis. Clark had created the LINC, an innovative small computer for an individual user, especially suited to the real-time needs of biomedical research, and had brought the project and its team to St. Louis with him. (Clark discusses the history of the LINC in a 1986 talk here.) Now, Clark envisioned an entirely new approach to computer design. In it, computers would be built up from distinct units, each unit providing an entire function. In this way, computers could be composed in a flexible and bespoke manner, built with just what was needed for a given use, no more. Clark called these units macromodules, and Sutherland funded the research.

Wesley Clark (left) and Charles Molnar (right) with a LINC computer. Molnar was a key figure in the macromodule research with Clark. Computer History Museum, 102680046. https://www.computerhistory.org/collections/catalog/102680046

The researchers in Clark’s macromodule effort succeeded in building a variety of different units, such as the one below, donated to CHM by Ivan Sutherland, which performed addition. The modularity of this new approach entailed a radical departure in digital computing design. In the mainstream, all the operations of a computer were coordinated by following the regular beat of a single electronic signal, the “clock.” The macromodule approach instead required an alternate, asynchronous way of orchestrating computer operations. The practical challenges and theoretical potentials of asynchronous systems became a central passion and focus for Ivan Sutherland thereafter.
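The core idea that replaces the clock is local handshaking: a unit raises a “request” when its output is valid, its neighbor raises an “acknowledge” once it has taken the data, and both signals return to zero before the next transfer. The Python sketch below simulates that classic four-phase handshake with threads; it illustrates the principle only and is not the macromodules’ actual electrical protocol.

```python
import threading

class Channel:
    """Toy four-phase request/acknowledge handshake between two units."""
    def __init__(self):
        self.cond = threading.Condition()
        self.req = self.ack = False
        self.data = None

    def send(self, value):
        with self.cond:
            self.data, self.req = value, True        # raise request: data valid
            self.cond.notify_all()
            self.cond.wait_for(lambda: self.ack)     # neighbor has the data
            self.req = False                         # return-to-zero phase
            self.cond.notify_all()
            self.cond.wait_for(lambda: not self.ack)

    def recv(self):
        with self.cond:
            self.cond.wait_for(lambda: self.req)     # wait for valid data
            value = self.data                        # latch it
            self.ack = True                          # acknowledge receipt
            self.cond.notify_all()
            self.cond.wait_for(lambda: not self.req)
            self.ack = False
            self.cond.notify_all()
        return value

link = Channel()
threading.Thread(target=lambda: [link.send(v) for v in (1, 2, 3)]).start()
print([link.recv() for _ in range(3)])  # -> [1, 2, 3]
```

Because each transfer completes as soon as both neighbors are ready, with no global beat required, units built this way can be composed freely, which was exactly the point of the macromodules.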

An addition function macromodule from Wesley Clark’s research group. Computer History Museum, 102766550. https://www.computerhistory.org/collections/catalog/102766550

Computer Graphics at Harvard and Utah

After his appointment at ARPA, Sutherland accepted a tenured engineering faculty position at Harvard University. There, Sutherland expanded his graphical ambitions from the two-dimensional abilities of Sketchpad to the concept of three-dimensional graphics and a new interface for experiencing them. He assembled a laboratory of graduate and undergraduate students alike, aimed at creating views of 3D scenes—drawn with lines—as well as a display worn on the head that would present different views of the 3D scene depending on the direction the user looked. By the close of the 1960s, they had a system in place that could do just that. This project is frequently cited as an early milestone in the history of virtual reality. Sutherland discusses the project and its relation to virtual reality in this 1996 lecture.

The head-mounted display from Sutherland’s Harvard project. Computer History Museum, 102680042. https://www.computerhistory.org/collections/catalog/102680042

Students of USC professor and VR researcher Scott Fisher image Sutherland’s head-mounted display at the Computer History Museum in 2022 for a project to recreate his laboratory in virtual reality.

Some early results of the USC virtual reality effort.

Soon afterward, Sutherland left Harvard for the University of Utah, and for a new startup he was cofounding to pursue systems for 3D computer graphics. The key partner for Sutherland in both moves was David C. Evans, an accomplished computer researcher. Evans was establishing a computer science department focused on 3D computer graphics, the same focus as the company he was starting with Sutherland. The new company, Evans and Sutherland, moved quickly to produce workstations for creating 3D graphics, beginning with the LDS-1 and then moving on to the very successful Picture System. Other products and efforts became essential to computer animation and to military pilot training.

A page from a brochure for the Picture System. Computer History Museum, 102646288. https://www.computerhistory.org/collections/catalog/102646288

Sutherland and Evans also fostered a remarkably productive and creative community of students in computing and especially computer graphics at Utah, counting the cofounders of Adobe, Pixar, Silicon Graphics, and more among its members. Some of these figures discussed this remarkable environment in a 1994 meeting.

Sutherland’s experiences through his time in Utah comprise just the first half of his story in computing and engineering. Beyond it lie another startup, a faculty career at Caltech, a revolution in VLSI microchip design, a walking-robot project at Carnegie Mellon, venture capital investing, a consulting firm that became the basis for Sun Laboratories, and fresh contributions to asynchronous systems that continue to this day at Portland State. For these stories, Sutherland’s new oral history interviews (Part 1 and Part 2) are an incredible source, as are this event with the Sutherland brothers in 2004 and this retrospective lecture by Ivan Sutherland at the Computer History Museum in 2005.

Main image: Ivan Sutherland. Photo credit: BBVA Foundation.

 

SIGN UP!

Learn more about the Art of Code at CHM or sign up for regular emails about upcoming source code releases and related events.


The post The Remarkable Ivan Sutherland appeared first on CHM.

]]>