Guest Blog Archives - CHM
https://computerhistory.org/blog/category/guest-blog/ Computer History Museum

Generative Music with the Muse
https://computerhistory.org/blog/generative-music-with-the-muse/ Fri, 23 Jan 2026
Unpacking the mysteries of the rare Triadex Muse from the early 1970s, the first algorithm-based sequencer/synthesizer intended for home consumers.

The post Generative Music with the Muse appeared first on CHM.

The forgotten sequencer that brought algorithmic composition to the home

Strolling around the Computer History Museum, there are exhibits that are immediately recognizable. All we need is a glimpse of the Altair 8800 or Apple I, and we just… know. We walk over and stand in front of these pieces, instinctively lowering our voices and giving a quiet nod to anyone nearby. Like noticing the chisel marks on a marble statue or the brushstrokes on an oil painting, we’re struck by the realization that what we’re seeing is the result of human imagination and ingenuity. What was once abstract and almost mythical is there right in front of us.

However, there are also items on display at the Computer History Museum whose significance isn’t immediately apparent. Take the Triadex Muse. You might mistake this wedge of metal, switches, and wood panels for an obsolete piece of stereo equipment or a Cold War-era intercom.

The Triadex Muse on display at the Computer History Museum. Photo by Michael Hicks, November 3, 2013. (Source: Wikimedia Commons)

While it looks like something you’d find on some forgotten warehouse shelf, the Triadex Muse is an important piece of electronic music history. Developed by Edward Fredkin and Marvin Minsky at MIT in 1969, and commercially released in the early 1970s, it was the first algorithm-based sequencer/synthesizer intended for home consumers. Only an estimated 280 to 300 were ever produced, making it a rare piece of gear.

You won’t find a friendly and familiar set of piano keys. The only controls are an orderly series of sliders. Its industrial design has more in common with that of a home appliance than a musical instrument. It’s like some sort of mystical radio receiver, beckoning users to adjust its controls until they land on some strange and alien wavelength.

Playing Music Through Binary Logic

The beauty of the Triadex Muse is in its simplicity. There’s no memory, no CPU, and no firmware. Just integrated circuits and electricity. Unlike the 1959 IBM 7090 mainframe, which had to be fed programming instructions via punched cards to coax out such party hits as Frère Jacques on the 1962 album Music from Mathematics, the only user input here is positioning sliders and flipping a couple of switches.

The Triadex Muse is a simple combination of electronics, lacking RAM, ROM, or software. (Source: Internet Archive)

If you were to randomly move the sliders to different positions and fire up the Triadex Muse, you’d likely hear the chirp of a square wave melody from its internal speaker and see a vertical band of blue and green lights twinkling in time. You might think that the Triadex Muse is like a modern step sequencer or drum machine. Orderly and predictable. This is 1972. Forget it.

You wouldn’t be wrong that the blinking lights correspond to a beat. At each tick of the Muse’s internal clock, this single column of lights shows the complete state of zeroes and ones. The Muse is essentially a 40 x 8 matrix, with zeroes and ones evaluated at each “tick” of its internal clock, triggering sounds, and with each state potentially changing what happens next.

While anyone with a background in computer science would know what these lamps represent, the average home consumer would have no idea. These glittering lights further add to its sense of mystery. Although you do get some control over the Muse’s output, you can’t play it like a regular instrument. Even the owner’s manual admits that, “The Muse isn’t a music box.” Depending on how you set the sliders, it’s unlikely you’re going to come up with a tune you can hum along with while you wash the dishes.

Triadex Muse improvisation. Courtesy of Adachi Tomomi.

Making Melodies

While the Muse may sound like a robotic run-on sentence, it speaks in the familiar language of musical intervals.

Closeup of the Triadex Muse showing intervals. Photo by Mark Richards.

The INTERVAL block has four sliders that select notes from the major scale. The positions of A, B, and C determine the pitch, while D adds an octave. There are no flatted thirds or minor scales. You can’t play the blues. Though you might want to if you once owned a Triadex Muse and have looked up how much they currently go for—$$$.

The four INTERVAL sliders, like much of the Muse, are deceptively straightforward. You might set several to the same row and think you’ll hear a chord. Once again, the Muse doesn’t follow modern conventions.

The Muse isn’t polyphonic. There is no harmony. This is binary arithmetic. Each interval is a weighted four-digit binary number. When summed, they generate a 4-bit number that determines a new pitch. Music from mathematics… indeed.

The Muse’s intervals are triggered when the rows they’re set to receive a one bit from either the C or B section.
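That arithmetic is compact enough to sketch in a few lines of Python. The weights and scale table below are illustrative assumptions for the sake of the example, not values taken from the Muse’s schematics:

```python
# Hypothetical binary weights for the four INTERVAL sliders:
# A = 1, B = 2, C = 4, with D contributing the octave bit (8).
WEIGHTS = {"A": 1, "B": 2, "C": 4, "D": 8}

# An illustrative major-scale lookup for the low three bits of the sum.
MAJOR_SCALE = ["C", "D", "E", "F", "G", "A", "B", "C'"]

def interval_pitch(row_bits):
    """row_bits maps each slider name to the 0/1 value of the row it's set to."""
    # Each slider whose row reads 1 contributes its binary weight.
    total = sum(WEIGHTS[slider] for slider, bit in row_bits.items() if bit)
    degree = MAJOR_SCALE[total & 0b111]   # low 3 bits select the scale degree
    octave_up = bool(total & 0b1000)      # D's bit shifts the note up an octave
    return degree, octave_up
```

Setting A and B to rows that both read 1, for example, sums to 3 and selects the fourth degree of the scale; there is never a chord, only one summed pitch.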

If you were to stare at the blue lights marked C ½ to C6 long enough (while your friends ask if you’re feeling okay), you’d notice that they follow a set pattern. And if you know how to count in binary (and let’s face it, we know there are some of you out there), you’d see that the lamps C1, C2, C4, and C8 form a four-bit counter, cycling from 1 (0001) to 15 (1111) in a continuous loop. C ½ is simply the clock itself, a square wave that turns on and off at a rate linked to the tempo. C3 and C6 form a separate two-bit counter that increments every three cycles. Setting groups of three against groups of four adds variety to the sequence.
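The two counters can be simulated directly from that description (a sketch of the behavior as described, not verified against the Muse’s hardware; the C ½ clock row is omitted):

```python
def c_rows(tick):
    """State of the Muse's C lamps at a given clock tick.

    C1/C2/C4/C8 form a four-bit counter cycling 1 (0001) to 15 (1111);
    C3/C6 form a two-bit counter that increments every three ticks.
    """
    count4 = tick % 15 + 1          # cycles 1..15, never 0
    count2 = (tick // 3) % 4        # increments once per three ticks
    return {
        "C1": count4 & 1,
        "C2": (count4 >> 1) & 1,
        "C4": (count4 >> 2) & 1,
        "C8": (count4 >> 3) & 1,
        "C3": count2 & 1,
        "C6": (count2 >> 1) & 1,
    }
```

At tick 3 the four-bit counter reads 4 (0100), matching the lamp pattern shown in the emulator screenshot.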

Counting up from C8, we see 0100, which is four in binary. (Source: JavaScript Triadex Muse emulator built by Donald Tillman)

In most instances, if you set the INTERVALS only to C positions, you’d get repeating patterns of tones about as musically exciting as the 1978 memory game Simon (no offense to Simon, which you can also see on display at the CHM).

Pseudo-Random Sequencing

The B region is where the Muse becomes more than a repeating loop of bleeps: a generator of pseudo-random chaos. You might observe the flickering green lights occupying B1 – B31, and just when you’ve identified a pattern, the sequence feeds back and morphs into something new.

To understand what’s happening, first you need to know what the four THEME sliders actually do. If you think that flipping these will give you a familiar musical preset like rock, jazz, or a rumba, the Muse once again betrays you.

Each THEME slider is technically a “tap.” A tap is like a tiny beacon, monitoring the binary state of a specific point in the C or B region, and sending that off to an XNOR logic gate.

I know. This sounds complex. But understanding the B register is like finding the answer to a riddle. Once you see it, it’s oh-so-obvious.

At every tick, the XNOR (Exclusive NOR) logic gate makes a decision based on what it receives from the taps. If the gate receives an even number of ones (including all zeroes), it sends a one to B1. If the number of ones is odd, it sends a zero to B1. In its default state, B1 – B31 will be a band of green.

Every decision the Muse makes lurches the sequence forward, sometimes settling as the same conditions persist, then suddenly shifting when the inputs change. B1 – B31 is what is known as a linear feedback shift register (LFSR): an ever-evolving bucket brigade of bits, where a new zero or one is handed in at the top while the bottom bit falls out.
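The whole B-register rule fits in a few lines. This is a sketch of the behavior as described; the tap positions used here are arbitrary stand-ins for wherever the THEME sliders happen to point:

```python
def step_b_register(bits, taps):
    """Advance a Muse-style 31-bit shift register one clock tick.

    bits: list of 31 ints, bits[0] is B1 ... bits[30] is B31.
    taps: indices into `bits` monitored by the THEME sliders.
    XNOR feedback: an even number of ones among the taps (including
    zero ones) shifts a 1 into B1; an odd number shifts in a 0.
    """
    ones = sum(bits[t] for t in taps)
    new_bit = 1 if ones % 2 == 0 else 0
    # Bucket brigade: the new bit enters at B1, B31 falls off the end.
    return [new_bit] + bits[:-1]

state = [0] * 31
state = step_b_register(state, taps=[2, 6])  # hypothetical tap positions
```

One nice property of XNOR feedback: from an all-zeros power-on state, the gate sees zero ones (an even count) and immediately shifts in a one, so the register never locks up the way an XOR-fed shift register stuck at all zeros would.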

The 31-bit B register creates an almost unending variety of patterns. (Source: Donald Tillman’s JavaScript based Triadex Muse emulator)

A Predecessor to Modern Generative Music

The Triadex Muse is a dead-end in the history of electronic music. While one could daisy-chain it to other Muses, it used proprietary I/O. There are no control voltages you could patch into modular synthesizers. And MIDI was a decade away. Its entire ecosystem was the Triadex Muse itself, an external speaker, and, if the march of blue and green squares wasn’t enough visual stimulation, an optional light unit with Gaussian-like blurs of psychedelic colors flowing with the beat.

Like DNA from extinct species whose fragments persist in modern organisms, the Muse’s influence lives on in algorithmic composition.

In a 2001 interview, Sean Booth of Autechre, pioneers of generative electronic music, said of their process: “There’s absolutely nothing random about what we do. There might be a lot of number crunching going on, but there’s nothing random in there.”

This is exactly what the Muse was doing in 1972.

Marvin Minsky, the co-creator of the Muse, wrote in his 1981 paper “Music, Mind, and Meaning” that the challenge of composing music is that “Whatever the intent, control is required or novelty will turn to nonsense.” This philosophy is hardwired into the Muse.

The melodies of the Muse could sound familiar. Or even strange. But its output still sounds like music, with underlying rules and logic that our brains detect as patterns, even though the Muse makes it almost impossible to predict what it will play next.

Main image: Triadex Muse, 1972. Computer History Museum, XD254.81. Gift of Ed Fredkin. Photo by Mark Richards.


PDP-1 Sings Boards of Canada
https://computerhistory.org/blog/pdp-1-sings-boards-of-canada/ Tue, 21 Oct 2025
CHM volunteer Joe Lynch unpacks how Peter Samson's Harmony Compiler programmed the DEC PDP-1 to play music like Boards of Canada's "Olson."

The post PDP-1 Sings Boards of Canada appeared first on CHM.

Four Lightbulbs Bridge the Past Inside the Present

In 2025, the 1998 song “Olson” by Boards of Canada was played using the 1962 Harmony Compiler on the world’s last running 1959 DEC PDP-1.

Boards of Canada “Olson” playing on the PDP-1.

The PDP-1 was never intended to produce audio, much less make music, long predating sound cards or MIDI ports. It does, however, have six “program flags”: flip-flops wired to six light bulbs on the control panel. A CPU instruction lets software turn these light bulbs on or off.

These bulbs were originally intended to provide program status information to the computer operator, but Peter Samson repurposed four of them into four square wave generators (1-bit DACs, put another way) by turning the bulbs on and off at audio frequencies with his Harmony Compiler software.

Waveforms of the four voices playing “Olson.”

Four wires are attached to the signal lines for these light bulbs. Four resistors downmix the four signals into stereo audio channels, and four capacitors complete low-pass filters that cut out the buzz of computer noise and soften the square waves. A connected vintage HeathKit stereo amplifier then drives speakers mounted on the wall behind the PDP-1.
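The cutoff of such a simple RC low-pass filter follows from f_c = 1/(2πRC). The component values below are hypothetical, chosen only to show the arithmetic; they are not the actual parts on the CHM machine:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Cutoff frequency of a first-order RC low-pass filter."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

# Example values only: a 10 kΩ resistor and 3.3 nF capacitor would
# roll off above roughly 4.8 kHz, taming square-wave harshness while
# passing the musical fundamentals.
cutoff = rc_cutoff_hz(10_000, 3.3e-9)
```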

For Peter’s initial PDP-1 implementation at MIT, prior to using the program flags, he added four flip-flops to the machine specifically for playing music, something easy to do in that research environment.

DEC asked me to write a version that would run on a stock PDP-1, so I hit on the idea of using the program flags. For this DEC offered me $200 and a meal at a French restaurant, though since it was DEC the meal turned out to be just lunch.

— Peter Samson

From Bach to Boards of Canada

Late at night in 1962, Bach could be heard playing through those four lightbulbs in an MIT research lab, as student and “unauthorized user” Peter Samson developed his Harmony Compiler.

Though the Harmony Compiler was originally developed to play the 1700s baroque organ works of Bach with four-part harmony, the synth melody and drone of “Olson,” combined with its lack of percussion, make it an exceptional fit. While this may seem coincidental, there is a surprising lineage between them.

The baroque organ pioneered the use of sustained, drone-like textures as musical foundations. The ability to hold multiple notes indefinitely while layering melody above created a spatial, immersive quality that laid the groundwork for ambient electronica’s musical aesthetics.

Peter’s pioneering work in computer music represents a bridge between the baroque organ’s architectural approach to sound and the hauntological electronic textures that would later define Boards of Canada. Reproducing Bach’s polyphonic organ works within the PDP-1’s severe computational constraints forced elegant solutions that enabled and codified the “melody-plus-drone” structure commonly heard in more modern digitally synthesized music.

Peter didn’t stop with the PDP-1, though, continuing to pioneer digital music synthesis on later PDP machines. When a PDP-6 arrived at MIT, its increased speed allowed Peter to write a version that supported six voices, tapping the Memory Indicator lights. He later wrote another version for the Systems Concepts PDP-15 clone, known as the SC-15.

His work in this field culminated in the Systems Concepts Digital Synthesizer, better known as the “Samson Box,” the most advanced digital synthesizer of its day. It was first presented in 1974 at the Computer Music Conference at Michigan State University. In 1977, the Samson Box was installed at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA), a leading academic center dedicated to computer music and digital audio research.

Peter Samson stands next to the Samson Box at CCRMA. (Source: Life and Times of the Samson Box)

The influence of Samson’s approach extended through the Samson Box as it found use in educational media production.

These educational soundtracks, whether created on the Samson Box, or the 1970s synth work heard in educational films produced by the National Film Board of Canada, converged on a remarkably similar aesthetic: otherworldly timbres built from simple waveforms, and that particular quality of institutional nostalgia that would later heavily influence Boards of Canada.

The Samson Box can be heard in the music of Michael McNabb in the 1979 NASA film Mars in 3D. Boards of Canada’s “Dandelion” carried this tradition forward.

The Samson Box heard in the music of Michael McNabb in the 1979 NASA film “Mars in 3D.”

Boards of Canada “Dandelion.”

Squeezing “Olson” into the PDP-1

“Olson” primarily consists of a melody over drone chords, with a piano outro. To work within the PDP-1’s four square wave generators, one voice is dedicated to the melody, and the other three are used for the high, middle, and low notes of the drone chords. All four voices are then repurposed for the eight-measure outro.

The Harmony Compiler defines its own domain-specific language for encoding the score of each voice of a song. The middle note of the drone chords is the simplest voice, so let’s look at that as an example. In the transcription below, the score is seen in white, and comments describing each line are in green.

“Olson” score for the middle note of the bass drone chords.

In the line “2 8lt2 9l,/”, the “2” specifies this is the second measure. The first note of the measure, “8lt2”, is in the 8th position of the staff (G# in this case). The “l” indicates the note is played legato, flowing seamlessly into the next note, and “t2” indicates the duration is a half note. The next note is defined as “9l,” indicating an A# legato note, with “,” repeating the duration of the prior note. The end of the measure is indicated with the “/” character.
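A toy parser makes the notation concrete. This is inferred only from the single example above, not from the Harmony Compiler’s full grammar:

```python
import re

# One token per note: staff position, optional legato "l", then either
# an explicit duration "tN" or "," meaning "repeat the prior duration".
NOTE_RE = re.compile(r"(\d+)(l?)(t(\d+)|,)")

def parse_measure(line):
    """Parse a measure like '2 8lt2 9l,/' into (measure_no, notes)."""
    measure_no, rest = line.split(" ", 1)
    assert rest.rstrip().endswith("/"), "a measure must end with /"
    notes, prev_dur = [], None
    for pos, legato, _, dur in NOTE_RE.findall(rest):
        duration = int(dur) if dur else prev_dur   # "," repeats prior duration
        notes.append({"staff_pos": int(pos),
                      "legato": legato == "l",
                      "duration": duration})
        prev_dur = duration
    return int(measure_no), notes
```

Fed the example measure, it yields two legato half notes at staff positions 8 and 9, matching the description.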

When encoded onto paper tape, the voice is broken into two sections: notes and bars. The notes section contains the notes of the measures, with the bars section containing offsets to the start of measures in the notes sections. By using the “copy” operation, we avoid duplicating data in the notes section.

The efficiency provided by the copy operations allows the entire song to fit in only 603 bytes.
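The layout can be sketched with toy data. The note values below are hypothetical, and the real format packs 18-bit words rather than Python strings; only the offset idea is the point:

```python
# The bars section holds offsets into the notes section, so a repeated
# measure points at note data that is already there instead of
# duplicating it -- the "copy" idea in miniature.
notes = ["G#2 half", "A#2 half",    # measure data at offset 0
         "F#2 half", "G#2 half"]    # measure data at offset 2
bars = [0, 2, 0, 2]                 # four measures, only two unique

def measure_notes(measure_index, length=2):
    """Look up one measure's notes via its bars-section offset."""
    start = bars[measure_index]
    return notes[start:start + length]
```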

A Fortunate Coincidence

One complexity in transcribing “Olson” is that it is tuned about 50 cents sharp from standard tuning. For example, a C note sits about halfway between C and C# in standard tuning.

However, the CHM PDP-1’s CPU coincidentally runs about 6% slower than spec, resulting in notes being played at about a 6% lower frequency. By transposing the entire transcription up a semitone (e.g., C becomes C#), this speed discrepancy yields a tuning fairly close to the song’s original.
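As a back-of-the-envelope check, treating “about 6% slower” as an exact 0.94 frequency ratio: a semitone up is +100 cents, the slow clock costs roughly 107 cents, and the two nearly cancel:

```python
import math

def cents(ratio):
    """Convert a frequency ratio to cents (100 cents per semitone)."""
    return 1200 * math.log2(ratio)

semitone_up = cents(2 ** (1 / 12))   # transposing up one semitone: +100 cents
slow_clock = cents(0.94)             # 6% lower frequency: about -107 cents
net = semitone_up + slow_clock       # the two offsets nearly cancel
```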

In the transcription this was done by manually changing each note. In hindsight, the Harmony Compiler offers a simpler way: the transposition command “up 1” would have had the same effect with less work.

Music as Holes In Paper

The notes and beginning of the bars section of the middle drone voice.

Using holes in paper to encode music has historical roots. The “Pianola” player piano was doing this in 1896. In the case of the PDP-1 this is done digitally, with each note, bar offset, and related metadata stored as 18-bit words.

The Harmony Compiler works in two phases. The first phase converts the score to an “intermediate tape,” containing the notes and bars sections of each voice. The second phase then compiles the intermediate tape to an internal format that can more efficiently be played back on the PDP-1 by the same software.

The first phase can be run on just about any modern computer. First the scores are converted from ASCII to the FIO-DEC character encoding. Then the SIMH PDP-1 emulator is used to run the first phase of the Harmony Compiler and produce a file containing the intermediate tape.

The intermediate tape file is then punched to physical tape. Thanks to the robustness and longevity of the RS-232 protocol, using a modern laptop to drive a 42-year-old tape punch requires only a USB-RS232 adapter.

Punching “Olson” onto paper tape.

With the tape punched, “Olson” is ready to be loaded into the PDP-1 and played on a machine built more than a decade before the song’s composers were born.

Want to hear it play live?

We demo the PDP-1 at CHM on the first and third Saturdays of each month at 2:30 p.m. and 3:15 p.m. (Check here for the latest schedule.) Just ask one of the volunteers, and we’ll be happy to load up the “Olson” tape and play it!

RAMAC History Comes Alive
https://computerhistory.org/blog/ramac-history-comes-alive/ Mon, 14 Jul 2025
At age 93, Nicholas F. Garcia may be one of the last people around who worked at IBM’s first R&D lab on the West Coast, in the center of what would become Silicon Valley. His daughter tells his story.

The post RAMAC History Comes Alive appeared first on CHM.

I was visiting the Computer History Museum last year when I noticed a machine in a glass case that looked like a big jukebox for the Space Age, cylindrical and gleaming. But instead of playing 45 RPM vinyl records, it held the Random Access Method of Accounting and Control (RAMAC) disk storage system, part of IBM’s groundbreaking 1956 RAMAC 305, a revolutionary computer system and the first to use a hard disk drive for data storage.

As I was reading the exhibit captions, a memory came to mind: When I was interviewing my dad for a family history project, didn’t he mention working as a junior draftsman at the IBM lab in San Jose where they developed the RAMAC in the 1950s? Yes, he did.

At age 93, my father, Nicholas F. Garcia, may be one of the last people around who worked at IBM’s first R&D lab on the West Coast, in the center of what would become Silicon Valley.

He was a young draftsman at 99 Notre Dame Ave. in San Jose, California, a building IBM rented near downtown, where the RAMAC project, begun in 1952, was completed in 1956.

“It was the first computer system conceived around a radically new magnetic disk storage device. The extremely large capacity, rapid access, and low cost of magnetic disk storage revolutionized computer architecture, performance, and applications.” (citation: Milestones:RAMAC, 1956, Engineering and Technology History Wiki.)

After I confirmed that my Dad had worked in that historic lab, I mentioned it to Kirsten Tashev, vice president and chief curatorial and exhibitions officer, when we both were attending a CHM event, and she got things rolling. Would my Dad like to see a demo of the RAMAC in action? And would he be interested in being interviewed for an oral history in the CHM studio?

One thing led to another. I had the pleasure of seeing my father interviewed about his life and experiences at that early IBM lab for the Computer History Museum’s Oral History program.

Caption: Nicholas F. Garcia, being interviewed by Computer History Museum Senior Curator Dag Spicer, in the CHM studio for the CHM Oral History program. (Mountain View, CA. December 2, 2024. Photo credit: Dawn Garcia)

And he had quite a story to tell. My father has led an adventurous life—and the span of his career gave him a front-row seat and engineering role in the development of space and missiles technology in Silicon Valley.

Little did he know when he was picking cotton as a teenager in California’s Central Valley that he would later be designing re-entry mechanisms on space vehicles for Lockheed Missiles & Space Corporation (now Lockheed Martin).

Nicholas F. Garcia, age 16, shouldering a 100-pound bag of cotton he had picked by hand. (Mendota, California, 1947. Credit: Family of Nicholas F. Garcia)

He worked as an engineer at Lockheed in Sunnyvale for more than 30 years, hired a few years after the aerospace company moved its Missiles Systems Division from Burbank to Sunnyvale. He raised his family in San Jose and Cupertino, where the acres of fruit orchards—which exploded in white and pink blossoms each spring—gave way to acres of low-slung buildings of the booming computer industry.

Caption: Nicholas F. Garcia (center, rear), a young engineer at Lockheed Space & Missiles in Sunnyvale, pictured with a production team that built a test fixture that he had designed. The gimbal—a pivoted support that allows an object to rotate freely in one or more directions—was used to test reentry bodies for submarine-launched missiles. Garcia worked for Lockheed for more than 30 years. (Sunnyvale, 1961. Photo credit: Nicholas F. Garcia)

But first, he worked at IBM as a young man. He had served in the Air Force in the Korean War and returned to the Bay Area to attend college on the GI Bill. He was 25 years old.

In 1956, he won a drafting competition at San Jose City College where he was studying engineering. The prize, he said, was a summer job as a draftsman at IBM. He remembers an environment of creativity and innovation, with about 50 young engineers working at 99 Notre Dame Ave., coming up with new ideas for IBM. “They were a very tight knit group of people who helped each other,” he said.

Nick’s job was as a junior draftsman, creating drawings to document the RAMAC, using pen on linen, not paper. He recalled that the drawings were being sent to IBM’s headquarters in New York.

It was IBM’s first venture into California. He remembered them talking about the IBM culture in “pep talks.”

“Every Tuesday, we would have an all-hands meeting in a little studio, and on the screen would be a talk by some higher executive of IBM,” Nick said. “They would tell us where the project was, how things were going. And then also, to talk about the IBM way of life… people were concerned that this lab was the first step that IBM made into San Jose… They did not want people from that lab going out into San Jose and projecting an image that was not right.”

An element of that good impression was a strict dress code, Nick said. Everyone wore ties—even on the weekends. “I thought it was cool,” he said.

We were wearing ties every day. On Saturdays, when we were called in to work overtime, some guys wore shorts, and they got called on it.

— Nick Garcia

Nick was so inspired by the work of the IBM engineers that he switched his college major from civil engineering to mechanical engineering. “I really loved the conceptual engineering they were doing. Very, very challenging, but still having to come up with a mechanical solution to a concept. I really admired that.”

But the summer ended, and Nick left for San Jose State University, where he studied engineering and earned his degree, then went to work at Lockheed. He never worked for IBM again, but he remembered the IBM way fondly.

“What I saw in those first days at that summer job embedded in me an engineering value that the engineer is required to do something, but he has to be given the opportunity and the freedom to do the research and the conceptual work,” Nick said. “At IBM, from what I saw, they respected the engineer.”

Watch Nick’s oral history interview below, or read the transcript.

Oral History of Nicholas F. Garcia | CHM, Dec. 2, 2024

Main image: The RAMAC Restoration Team at the Computer History Museum. They were conducting a demo of a rebuilt early computer disc drive to show Nicholas F. Garcia, who worked as a junior draftsman in the IBM lab in San Jose in 1956 where the RAMAC was designed. Left to right: Dag Spicer, Joe Feng, Curtis Jones, Nicholas F. Garcia, and Dr. John Best. (December 2024. Photo credit: Dawn Garcia)

Programming in Harmony
https://computerhistory.org/blog/programming-in-harmony/ Wed, 08 Jan 2025
Discover the roots of computer music and the development of the Harmony Compiler, one of the earliest pieces of music-making software to be distributed to users.

The post Programming in Harmony appeared first on CHM.

Restoring the History of Digital Music

In 1961, Peter Samson, a student at MIT, programmed the new PDP-1 minicomputer to play polyphonic music. This experiment led to the Harmony Compiler, one of the earliest pieces of music-making software to be distributed to users. It also inspired Peter’s later pioneering work in real-time digital sound synthesis.

Nowadays, Peter is working as a docent at CHM. I interviewed him in 2020 in the PDP-1 lab, where he demoed the music software and explained how he developed it.

Peter Samson loads a paper tape of polyphonic music on the PDP-1.

Peter’s system was novel in 1961 because it was both real-time and polyphonic: real-time meaning that the computer could play music “live,” polyphonic meaning it could play more than one note at a time.

As early as 1951, engineers in Australia programmed the CSIRAC computer to play music in real-time, but only monophonically (one note at a time). At Bell Labs, Max Mathews achieved polyphony with his MUSIC software in 1958, but this worked in “batch mode,” not real-time: running a “job” on punched cards produced a digital waveform on tape, which could be played back later via a custom-built digital-to-analog converter.

Closer to home, Ercolino Ferretti was developing experimental sound synthesis programs for MIT’s IBM 7094 mainframe, but these too were non-real-time. “Real-time was my thing from the start,” Peter told me. “Hands-on interaction versus, you know, submit your job.”

Peter Samson demonstrates his PDP-1 music programs at CHM, February 2020

Peter first attempted real-time synthesis in 1960 when he programmed MIT’s experimental TX-0 computer to play monophonic melodies. He did this by writing a program that sent precisely timed pulses to the machine’s built-in speaker: 262 pulses per second for “middle C,” 294 pulses per second for “D,” and so on. Timing the bits to coincide with the machine’s instruction cycle required clever programming, Peter recalled—but an even greater challenge arose when he considered rewriting the program to play several notes simultaneously.
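The trick translates naturally into sample terms. This sketch just counts audio samples rather than timing instruction cycles, but the idea is the same: toggle a 1-bit output at the note’s frequency to produce a square wave:

```python
SAMPLE_RATE = 44100  # samples per second for this sketch

def square_wave(freq_hz, seconds):
    """Return 0/1 samples of a square wave: toggle once per half-period."""
    half_period = SAMPLE_RATE / (2 * freq_hz)   # samples per half-cycle
    return [int(n // half_period) % 2
            for n in range(int(SAMPLE_RATE * seconds))]

# 262 toggles-per-second territory: an approximation of middle C.
middle_c = square_wave(262, 0.01)
```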

“Where are we going to store multiple bits?” he wondered.

Flip-Flop Polyphony

His solution was to use outboard flip-flops. A flip-flop is a digital circuit whose output can be switched “on” (positive voltage) or “off” (no voltage) by an electrical control signal. With such a circuit, it is possible to generate an electrical waveform roughly the shape of a square wave by switching the circuit on and off at regular intervals.

A photograph of the TX-0, ca. 1956 in CHM’s collection, 102622467. Courtesy of Gwen Bell

The TX-0 had plug-in flip-flop units controlled from the machine’s processor. Peter determined that by using three of these, it would be possible to make the machine play three different musical notes simultaneously. Going a step further, he hypothesized that three-part polyphonic music might be attainable by programming the machine to play a sequence of three-note combinations one after the other. To test that hypothesis, he set about writing a new three-voice music player program.

Achieving three-part polyphony required an appropriate choice of music. Peter chose the “Presto” movement of J.S. Bach’s Organ Concerto in G Major, partly because he loved the music, and because the sound of an organ could be reasonably convincingly emulated using the means at his disposal.

“If you’re generous, it sounds like a pipe organ,” he reflected.

Programming TX-0 to play “contrapuntal” music—that is, music with multiple melodic lines—required Peter to conceptualize the music in an unconventional way. Musicians tend to think of three-part counterpoint “horizontally,” as three simultaneous intertwining melodies. Instead, Peter thought of the music “vertically.” He observed that in three-part polyphonic music, at any given instant, there may be up to three notes sounding simultaneously. His chosen piece, for example, begins with a high “G” in the right hand, “B” in the left hand, and low “G” in the pedal part, a situation that persists for the duration of a sixteenth-note. Then, the right hand changes to a high “D” and the left hand to a low “D,” while the pedal part holds the low “G” for a further sixteenth-note, and so on. In other words, Peter thought of the music as a sequence of three-note “instants,” each consisting of pitch 1, pitch 2, pitch 3, and duration N. An “instant,” he explained to me, is an arbitrary duration in which the pitches of all three voices remain constant.
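The horizontal-to-vertical conversion can be sketched as a small merge routine. This is the concept only, not Samson’s code:

```python
from fractions import Fraction

def merge_voices(voices):
    """Merge 'horizontal' voices (lists of (pitch, duration) pairs) into
    vertical instants: tuples of one pitch per voice plus a duration,
    with a new instant starting at every note onset in any voice."""
    starts, total = [], Fraction(0)
    for voice in voices:
        t, events = Fraction(0), []
        for pitch, dur in voice:
            events.append((t, pitch))   # record when each note begins
            t += Fraction(dur)
        starts.append(events)
        total = max(total, t)

    def pitch_at(events, t):
        current = None
        for start, pitch in events:
            if start <= t:
                current = pitch         # last note to begin at or before t
        return current

    boundaries = sorted({t for ev in starts for t, _ in ev} | {total})
    return [tuple(pitch_at(ev, t0) for ev in starts) + (t1 - t0,)
            for t0, t1 in zip(boundaries, boundaries[1:])]
```

Fed the opening described above (right hand G then D, left hand B then D, pedal holding a low G), it returns exactly the two sixteenth-note instants.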

To realize this concept, Peter wrote a music player program that read frequency and duration data, in three-note instants, from punched paper tape into TX-0’s magnetic core memory. The program switched the three flip-flops on and off simultaneously at the specified frequencies, holding each three-frequency combination for the specified duration before moving on to the next one. The flip-flops were connected to an electronic filter, amplifier, and loudspeaker, which removed some noise components of the signals and rendered them readily audible as musical notes.

Below is an audio clip of the third movement, “Presto,” from J.S. Bach’s Organ Concerto in G Major on PDP-1.

The three-part music player program worked well, but there was a problem: entering the data was very time consuming. Rendering a complete piece of music required Peter to manually divide the music up into three-note instants, calculate the frequency of each note and the number of loop iterations required to produce the correct duration for each instant, then painstakingly enter the data on paper tape. Having completed that process for the Bach organ concerto, Peter recalled his frustration.

“I’m not doing that again!” he told me. “I’m going to write a compiler.”

The Harmony Compiler

Peter’s work on the Harmony Compiler coincided with the arrival at MIT of a new PDP-1 minicomputer, a commercial machine based on the design of the experimental TX-0. Peter wrote the compiler to be cross-compatible with both machines. The compiler enabled him to follow a musical score and enter the note data in a simple symbolic shorthand: one number represented the note’s pitch-position on the stave; another, its duration as a quarter note, eighth note, sixteenth note, and so on. The compiler converted these numbers into frequency and loop-iteration values, meaning Peter no longer had to do those calculations manually.
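The arithmetic the compiler took over can be sketched as follows. This is a hypothetical reconstruction, not the Harmony Compiler's actual code: the loop rate and tempo constants are invented, and pitch is given in semitones from A440 rather than stave position, for simplicity.

```python
LOOP_RATE = 100_000     # assumed timing-loop iterations per second
QUARTER_SECONDS = 0.5   # assumed tempo: 120 quarter notes per minute

def compile_note(semitones_from_a440, fraction_of_whole):
    """Convert a symbolic note into the frequency and loop-iteration count
    a player program would need. Duration is a fraction of a whole note
    (1/4 for a quarter note, 1/8 for an eighth, and so on)."""
    freq = 440.0 * 2 ** (semitones_from_a440 / 12)    # equal temperament
    seconds = 4 * fraction_of_whole * QUARTER_SECONDS  # a whole note = 4 quarters
    return freq, round(seconds * LOOP_RATE)

freq, loops = compile_note(3, 1 / 8)  # the C above A440, as an eighth note
```

Doing this by hand for every note of an organ concerto is exactly the tedium Peter was determined to escape.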

Peter also devised a system of character symbols for musical articulation, and even Baroque ornamentation.

A page from the Harmony Compiler manual. A “staccato” (shortened) note could be specified by appending the letter “s” to a pitch value.

Baroque ornamentation. Trills, mordents, and turns each had character symbols.

Each contrapuntal part could be entered horizontally: the compiler took care of merging the parts and generating the vertical instants that Peter had previously had to figure out on paper. The compiler accepted as its input a punched paper tape containing pitch, duration, and articulation data in this more user-friendly format, and produced as its output a punched paper tape containing the less human-friendly data expected by the music player program.
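The merging step described above is, in modern terms, an interval-merge over the voices. A minimal sketch, with invented names and arbitrary pitch labels (again an illustration of the idea, not the compiler's own algorithm):

```python
def merge_voices(*voices):
    """Each voice is a horizontal list of (pitch, duration) pairs.
    Returns the vertical sequence of instants: (chord, duration) pairs
    during which no voice changes pitch."""
    # Collect every time at which any voice changes note.
    events = sorted({t for voice in voices for t in _boundaries(voice)})
    instants = []
    for start, end in zip(events, events[1:]):
        chord = tuple(_pitch_at(voice, start) for voice in voices)
        instants.append((chord, end - start))
    return instants

def _boundaries(voice):
    t = 0.0
    yield t
    for _, dur in voice:
        t += dur
        yield t

def _pitch_at(voice, time):
    t = 0.0
    for pitch, dur in voice:
        if t <= time < t + dur:
            return pitch
        t += dur
    return None  # the voice has ended: a rest

soprano = [("G5", 0.25), ("D6", 0.25)]
alto    = [("B4", 0.25), ("D4", 0.25)]
pedal   = [("G2", 0.5)]
print(merge_voices(soprano, alto, pedal))
# [(('G5', 'B4', 'G2'), 0.25), (('D6', 'D4', 'G2'), 0.25)]
```

The sustained pedal note is automatically split across the two instants, which is precisely the bookkeeping Peter had previously done on paper.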

While the Harmony Compiler was cross-platform, the music player program had to be rewritten using the PDP-1’s instruction set. Peter took this opportunity to expand the program to four polyphonic voices, and to add routines for rendering the articulation and ornamentation symbols just described.

Here is an audio clip of J.S. Bach’s Two Part Invention No. 1 on PDP-1.

In 1962, the Harmony Compiler and music player became part of the stock software bundle shipped with new PDP-1 machines. This places Peter’s programs amongst the earliest programmable music software packages to be distributed to users.

“DEC paid me $200 to write a version that will run on PDP-1 without the outboard flip-flops,” Peter explained.

Developing Digital Synthesis

Peter stopped working with the PDP-1 in 1963. He wrote a six-voice program for the PDP-6 some months later, then left MIT in 1970, but he remained at the forefront of developments in digital synthesis.

In the 1970s, Peter developed the Systems Concepts Digital Synthesizer, one of the first dedicated real-time digital synthesizers. A radically different design based on hardware rather than software, the Samson Box, as it became known, was motivated by the desire to deliver, in real time, the superior synthesis capabilities of non-real-time software systems like Max Mathews’ MUSIC programs. That emphasis on real-time performance made it an extension into the hardware domain of Peter’s earlier synthesis software for the PDP-1 and PDP-6.

Restoring History

Meanwhile, back at MIT, Peter’s music software was maintained by various programmers until, eventually, the PDP lab closed. That would have been the end of the story, if not for the fact that somewhere along the line, a box of paper tapes, including several pieces of music and various parts of the player and compiler programs, made its way from MIT to the Computer History Museum.

It isn’t clear precisely when that happened, or who made the donation, but by extraordinary coincidence, Peter rediscovered this uncatalogued box of tapes in 2004, when he became involved as a volunteer on CHM’s PDP-1 Restoration Project. Playing music on the restored PDP-1 thus became one of the project’s goals.

Two paper tapes containing parts of the Harmony Compiler.

Peter immediately recognized that the programs had been rewritten for various newer systems over the years and would not be able to run on the old PDP-1 hardware. However, he knew the playing algorithm well, and with the salvaged tapes as an aide-mémoire—plus a flowchart he’d kept that outlined the process for interpreting musical data—he was able to write the player program again from scratch.

Rewriting the compiler forty years after the fact using ambiguously labelled punched paper tapes and a copy of the user manual as a reference would have been a much more onerous task, but in a turn of good fortune, a listing of it turned up in Peter’s basement.

“My wife was rummaging around in the basement and found a box of MIT memorabilia my mother had saved, bless her, and there at the bottom was a print-out of the compiler, a listing. A mere 67 pages? I’ll type it in again! Which I did. So we have a working compiler as well as a working player.”

It is only thanks to this somewhat improbable sequence of contingencies that, in 2020, I was able to receive a demonstration of Peter’s PDP-1 music programs—and hear about this important and often overlooked chapter in the history of digital synthesis.

Peter Samson listens to music during the PDP-1 Restoration Project, March 2005.

Members of the PDP-1 Restoration team discuss the project.

Digital synthesis with computers is commonplace nowadays: anybody with a laptop can try their hand at making music using soft synths running within a digital audio workstation (DAW) package like Logic, Reaper, Ableton, or GarageBand. The range of techniques used to generate sounds digitally has expanded enormously since the early 1960s, of course. Frequency modulation (FM), physical modelling, and granular synthesis algorithms have all been implemented in software, for instance.

Samson’s groundbreaking work with TX-0 and PDP-1 predated all these, however, and his programs can perhaps be regarded as the earliest real-time software synthesizers of all.

Acknowledgements

Thank you to Peter Samson, Heidi Hackford, Al Kossow, and Dag Spicer. Research for this article was supported by British Academy/Leverhulme grant SRG19\190060.


The post Programming in Harmony appeared first on CHM.

IBM and the Transformation of Corporate Design https://computerhistory.org/blog/ibm-and-the-transformation-of-corporate-design/ Tue, 02 Apr 2024 16:19:40 +0000 https://computerhistory.org/?p=29106 When IBM hired designers in the 1950s to shape how the public would see the new computing technology, the tech giant changed modern corporate branding forever.

The post IBM and the Transformation of Corporate Design appeared first on CHM.

Until the arrival of Eliot Noyes as IBM consulting director of design, IBM’s many office products were a confusion of styles: from 1930s-era punched card equipment—complete with steel Queen Anne legs—to room-sized computers inflected with mid-century styling, festooned with tiny signal lamps. As IBM consolidated its various product lines in the early 1960s, most importantly with the 1964 System/360 mainframe, it also consolidated its corporate style: Noyes simplified IBM’s design vernacular around clean typography, a minimalist aesthetic, unified branding, and office equipment and computers that were intuitive and easy to use.

Along with the corporate overhaul by Noyes and his associates, husband and wife team Charles and Ray Eames also worked closely with IBM to re-imagine the company for the 20th century. While Eliot Noyes was an industrial designer who focused on corporate design and branding, the Eameses were renowned for their contributions to furniture design, architecture, and multimedia productions.

Both the Eameses and Noyes shared a commitment to modernist design principles, including functionality, simplicity, and human-centered design. They believed in creating designs that were not only aesthetically pleasing but also practical and user-friendly.

The Eames Office/IBM Partnership

The Office of Charles and Ray Eames is among the most important firms in the history of design. Their 40-year career spanned architecture, furniture, exhibition design, film, graphics, books, toys, art and more.

Charles Eames was first thrust into prominence in 1940 by architect/designer/curator Eliot Noyes, when Charles and fellow Cranbrook Academy of Art faculty member Eero Saarinen won MoMA’s Competition for Organic Design in Home Furnishings using the molded plywood manufacturing processes that the Eameses later perfected. It is for this furniture, and the processes invented to manufacture it, that the Eameses are perhaps most famous.

Although the Eameses designed some of the most successful furniture pieces of all time for Herman Miller and continued to innovate in this area throughout their career, their aperture opened significantly in the early 1950s. At this time, Charles and Ray began to explore communications theory and the power of film and multimedia exhibitions to transmit ideas, concepts, and emotions.

A Communications Primer

During the time the Eames Office was cultivating its interest in communications, data processing and computing were still essentially new phenomena, largely the domain of governments and large corporations. Indeed, the implications of this technology were only beginning to enter popular culture. IBM was the dominant force in early computing in America, a subject that was misunderstood at best, if not outright feared, by the general public.

1960s Vari-Vue “3D” Lenticular Postcard featuring the IBM 1440.

There was a clear opportunity for IBM to shape public opinion around computing technology in 1956, when newly installed CEO Thomas Watson Jr. hired Eliot Noyes to develop the company’s first company-wide design initiative. It is impossible to overstate the impact of this comprehensive design program on modern corporate branding practice. Indeed, it is the benchmark by which all others might be judged.

Industrial Design Magazine, 1957.

In March 1957, Industrial Design Magazine noted: “Not long ago, in February, 1956, architect and industrial designer Eliot Noyes was asked by IBM to become Consultant Director of Design, and since his appointment he has led the company into an ambitious and unusual program to coordinate and upgrade design across the board, across every aspect of the company’s vast operations. This extends from the IBM trademark, through packaging, graphics, exhibitions, interiors and interior furnishings, such as the new Paul Rand drapery, to the business machines themselves and the buildings of the company.”

IBM Carbon Paper Packaging, 1950s, Paul Rand.

Noyes, in turn, hired a cadre of design luminaries including Paul Rand, George Nelson, Edgar Kaufmann, Eero Saarinen and The Eames Office to help not only remake IBM’s image, but to root the technology juggernaut in the fertile soil of Design and influence America’s perception of the societal value of computing technology.

The Eames Office, at the request of Eliot Noyes, produced an animated film for IBM, The Information Machine: Creative Man and the Data Processor, shown at the Noyes-designed IBM pavilion at the 1958 Brussels World Fair. This, and many subsequent works, appear prophetic as we tumble headlong into a new AI computing paradigm.

The Information Machine, 1958, The Eames Office.

Over the next 20 years, The Eames Office created dozens of exhibitions, films, books and experiences for IBM, most of which positioned computers as a natural extension of human reasoning and a tool with unparalleled potential to improve our world.

Some of this work, like the IBM pavilion for the 1964 NY World’s Fair, was seen by millions of people. Yet, surprisingly, much of the work produced under this partnership remains largely obscure.

Which is precisely why collecting it was so much fun!

IBM at the Fair, 1965, The Eames Office.

The Collection

The collection consists of several hundred artifacts from the 1950s–1970s, all of which the author acquired in the first decade of the new millennium. Originally exhibited at the LUNAR offices in Palo Alto in Spring 2011, much of the collection is now archived at the Computer History Museum. It includes ephemera from small and large-scale exhibitions, including Mathematica: A World of Numbers…and Beyond and the landmark IBM pavilion at the 1964-1965 New York World’s Fair in Flushing Meadows.

Exhibition invitation designed by Laura Martini and the author, punched at the Computer History Museum, 2011.

In the collection are all manner of Eames Office, Paul Rand and IBM staff designed ephemera including original brochures and annual reports, press kits and photographs, slides and postcards, packaging, magazine articles, and even a Cartier-produced trophy from the 1964/5 New York World’s Fair.

IBM Fair Brochure, 1964, Paul Rand.

Charles and Ray Eames pioneered a deeply human approach to design which, although evident in all of their work, was particularly powerful in their IBM projects. Their clear communication of technology’s role in society, through myriad IBM-sponsored exhibitions, short films and books, became the reference for a new form of corporate design and citizenship. Their work presents a finely integrated model that is as relevant today as it was 70 years ago.

Summary

These artifacts trace the massive impact the IBM design program had on American public perception of mathematics, science, and computing technology. Critical examination and documentation of this largely overlooked partnership between Eliot Noyes, Paul Rand, George Nelson, Eero Saarinen, Edgar Kaufmann, and The Eames Office not only recalls the historical precedents that link design and technology, but also offers a potent case study of how the human-centered design process can fundamentally shift attitudes and the human experience.

Main Image: The IBM pavilion at the 1964 New York World’s Fair.


Logical Piano Lessons https://computerhistory.org/blog/logical-piano-lessons/ Tue, 01 Feb 2022 19:02:59 +0000 https://computerhistory.org/?p=24133 The Logical Piano looks like an early digital computer, but perhaps what links this Victorian device most profoundly with today's computers is, ironically, the fact that it was not about computing.

The post Logical Piano Lessons appeared first on CHM.

Playing AI on Ivory and Wood

William Stanley Jevons (1835–1882) is not a household name today, but he left his fingerprints on several strands of modern scientific thinking. In his 1874 Principles of Science, a copy of which can be found in the Computer History Museum collections, he sought to formulate precisely the idea of the scientific method; he feared that scientists “speak familiarly of Scientific Method, but they could not readily describe what they mean by that expression.”[1] He is also sometimes remembered as a founding figure of modern economics, celebrated for his efforts to introduce greater mathematization into political economy, as the subject was known in his time. And despite the seeming abstractness of many of his interests, he is also responsible for the construction of a striking physical artifact, his so-called Logical Machine or Logical Piano.

Jevons’s Logical Machine, as illustrated in his Principles of Science, from the CHM collection, https://ia800309.us.archive.org/8/items/principlesofsci00jevo/principlesofsci00jevo.pdf.

Illustrated here in the frontispiece to Principles of Science, and held today in the History of Science Museum in Oxford, the machine represented to Jevons “a conspicuous proof of the generality and power” of his logical method.[2] As a box with a keyboard that mechanically spits out solutions to problems input by a user, the Logical Piano readily evokes an embryonic idea of the digital computer when viewed with modern eyes. But perhaps what links this device most profoundly with today’s computers is, ironically, the fact that it was not about computing.

Computers as we know them descend from machines dreamt up and built for the sake of literal—which is to say, rather, numerical—computation. Indeed the name “electronic computer” was first a way of distinguishing a machine from the usual kind of computer: a clerical worker, frequently female, who performed tedious and complex calculations in the service of some large, bureaucratically organized scientific project. The first electronic computers did not so much replace these workers as modify and rearrange the work they were hired to perform. All of this collective human and machine labor aimed at executing complicated calculations with ever greater accuracy and efficiency.

Jevons’s famous contemporaries Charles Babbage and Ada Lovelace belong to this calculating tradition too. Their work around the Difference Engine and Analytic Engine aimed to delegate the work of computation to steam-powered mechanisms. Babbage, like the military engineers behind ENIAC a century later, suspected that the costly and time-consuming labor of large-scale calculation could be circumvented by a well-designed machine.

Once some of these large electronic calculating machines were up and running, engineers and programmers began to envision other functions for them. The makers of computers recognized their potential applications to data processing, while other researchers envisioned a pure theory of computing largely detached from numerical calculation. The machines’ functions multiplied until the word “computer” came to denote a technology that today most people use for reasons utterly unrelated to computation.

Machinery is capable, in theory at least, of rivalling the labours of the most practised mathematicians …

— William Stanley Jevons

Jevons was not particularly interested in massive numerical computation, but he found Babbage’s project inspiring because he believed it had larger implications concerning the nature of intelligence. According to Jevons, by articulating a plan for the Analytical Engine, Babbage had already “shown that material machinery is capable, in theory at least, of rivalling the labours of the most practised mathematicians in all branches of their science.”[3] Meanwhile, George Boole had recently publicized an algebraic system of logic that translated syllogisms, the traditional matter of formal logic, into systems of equations. A syllogism is a form of reasoning that consists of two premises involving three terms and a conclusion following from them. The classic example is “If all Greeks are humans, and all humans are mortal, then all Greeks are mortal.” If this sort of logical deduction could be conducted by means of equations as Boole had demonstrated, and equations could be solved mechanically as Babbage had shown, then couldn’t a machine be built to perform logical reasoning?

I find it necessary to have each step of the work done separately in order that I may see whether I have planned every thing rightly.

— William Stanley Jevons

Jevons soon designed just such a machine, and hired a clockmaker (whom he did not credit by name) to build it according to his plans. Their working relationship was not frictionless: Jevons struggled to trust this hired craftsman, deeming it “necessary for me to go there almost every day to see that he is getting on right. I find it necessary to have each step of the work done separately in order that I may see whether I have planned every thing rightly.”[4] His lack of confidence in the artisan who actually built the Piano bears an unfortunate resemblance to the way early programmers would conceive of instructions for inanimate machines as needing to “contain everything necessary to cause the machine to perform the required calculations and every contingency must be foreseen.”[5]

It seems the first machine they produced was not satisfactory, but collaborative difficulties notwithstanding, by the next year Jevons was highly optimistic about their progress on yet another new machine “in appearance like a large accordion or a very small piano, & has 21 keys exactly like white piano keys.”[6]

The finished Logic Piano with its detachable cover removed to reveal some of the mechanism. Inv 18230, © History of Science Museum, University of Oxford. http://www.mhs.ox.ac.uk/wp-content/themes/mhs-2017-responsive/imu-media.php?irn=49854.

Jevons’s logical piano keys. Inv 18230, © History of Science Museum, University of Oxford. http://www.mhs.ox.ac.uk/wp-content/themes/mhs-2017-responsive/imu-media.php?irn=25129.

Jevons’s piano was built to reason from premises describing classes of objects: statements like “All As are Bs” or “Anything that is A and B is either C or D but not both.” A user could enter input of this kind on the keyboard, and a concealed system of rods and pulleys would cause a panel to display only those combinations of the classes A, B, C, and D not ruled out by the premises. He reported to the Royal Society:

By merely reading down the premises or data of an argument on a key board representing the terms, conjunctions, copula, and stops of a sentence, the machine is caused to make such a comparison of those premises that it becomes capable of returning any answer which may be logically deduced from them. … The actual process of logical deduction is thus reduced to a purely mechanical form.[7]
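A software analogue makes the machine's "purely mechanical" deduction concrete: enumerate every combination of the classes and keep only those not ruled out by the premises. This sketch is a modern illustration of the principle, not a model of the Piano's rods and pulleys; the encoding of premises as predicates is an assumption for the example.

```python
from itertools import product

def consistent_combinations(premises, terms="ABCD"):
    """Return every assignment of the terms (present/absent) that no
    premise rules out — what the Piano's display panel showed."""
    survivors = []
    for values in product([True, False], repeat=len(terms)):
        assignment = dict(zip(terms, values))
        if all(p(assignment) for p in premises):
            survivors.append(assignment)
    return survivors

# "All As are Bs" and "All Bs are Cs": anything A must be B; anything B must be C.
premises = [
    lambda v: v["B"] if v["A"] else True,
    lambda v: v["C"] if v["B"] else True,
]
rows = consistent_combinations(premises)
# Every surviving combination containing A also contains C:
# the syllogism's conclusion, "All As are Cs," read off mechanically.
assert all(row["C"] for row in rows if row["A"])
```

The conclusion is never computed directly; it simply emerges from which combinations survive, which is exactly the sense in which Jevons claimed his machine deduced.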

There is certainly room to object that this highly specific type of problem solving is too artificial to represent deduction in a meaningful way. Not all of Jevons’s contemporaries were convinced that his machine automated anything particularly difficult or worthwhile, and we can hear echoes of those Victorian debates in discussions of AI and machine learning today. Disagreements about whether a machine can reason are never just debates about what the machine can do; they are about what it means for human beings to reason in the first place.

These are big questions, and they confront us today in such disorienting new forms that, wherever we stand on twenty-first-century AI, we are unlikely now to see Jevons’s piano as a harbinger of the singularity. But historical distance is useful: the glaring differences between our technology and that of the Victorians only make the rhetorical echoes more striking. We are not the first to be floored by the logical prowess of our devices (nor the first to fret that undetected flaws in their construction might render them unreliable). Nearly a century before people began to suspect that the digital computer was destined for higher logical functions than mere computing, Jevons looked to Babbage’s engines and argued that a machine could do more than calculate by steam: that his Logic Piano would not just compute, but reason.

Endnotes

1.) W. Stanley Jevons, The Principles of Science: A Treatise on Logic and Scientific Method (London: Macmillan and Co., 1883), vii. See the History of Science Museum’s entry for the Logic Machine at https://www.hsm.ox.ac.uk/collections-online#/item/hsm-catalogue-6547.

2.) Ibid., 107.

3.) William Stanley Jevons, “On the mechanical performance of logical inference,” Philosophical Transactions 160 (1870): 497–518, at 498.

4.) W. S. Jevons to Herbert Jevons, 25 September 1867, in William Stanley Jevons, Papers and Correspondence of William Stanley Jevons, ed. R. D. Collison Black, 7 vols. (London: The Macmillan Press LTD, 1972—1981), vol. III, 157–9, at 157.

5.) Maurice V. Wilkes, David Wheeler, and Stanley Gill, The Preparation of Programs for an Electronic Digital Computer, with Special Reference to the EDSAC and the Use of a Library Of Subroutines (Cambridge, MA: Addison Wesley Press Inc., 1951), 1.

6.) W. S. Jevons to Herbert Jevons, 23 June 1868 in Jevons, Papers and Correspondence, vol. III, 185.

7.) Jevons, “On the mechanical performance of logical inference,” 500.


AI Gives Two Teens Hope for COVID-19 https://computerhistory.org/blog/ai-gives-two-teens-hope-for-covid-19/ Fri, 23 Apr 2021 16:19:51 +0000 https://computerhistory.org/?p=21369 Two teens explore AI research to shift their perspective from the personal effects of COVID to the bigger picture, bringing them hope that technology can help.

The post AI Gives Two Teens Hope for COVID-19 appeared first on CHM.

We Will Never Get These Years Back

We once roamed the halls and hurried off to classes with bulky backpacks, and now we reside within our rooms and lead monotonous lives. With many high school seniors missing out on staple activities such as football games and prom, and with many juniors struggling to take part in standardized testing and extracurriculars, it’s no secret that our lives, much like the rest of the world, have been impacted in ways we never imagined.

Given the duration and complexity of the COVID-19 pandemic, more than human minds alone have been needed to develop clinical treatments and to track and prevent the spread of the coronavirus. Consequently, the world has turned to artificial intelligence (AI) for help. As high schoolers interested in AI, we decided to research the ways AI and technology have been used to help the world fight and move past the pandemic.

The Centers for Disease Control and Prevention (CDC) states that symptoms of COVID-19 can appear in an individual two to fourteen days after being in contact with the virus. The list of symptoms seems to keep running down the page, with fevers/chills, coughs, shortness of breath, and diarrhea, to name a few. With time and research, it was concluded that an individual can test positive for COVID-19 antibodies without ever showing signs such as high fevers and persistent coughs; individuals who do not exhibit those symptoms of COVID-19 but have the disease are categorized as asymptomatic. Those who are asymptomatic can spread the virus without knowing, infecting more and more people.

Although people who are asymptomatic do not display symptoms, their coughs are different from healthy patients, and these differences can be detected by artificial intelligence.

We were interested to find that researchers at MIT have found a way to differentiate healthy and asymptomatic individuals by detecting the differences in their coughs. Using forced-cough recordings, researchers developed and trained an artificial intelligence algorithm to detect differences between coughs based on vocal cord strength and lung respiratory performance. They discovered that although people who are asymptomatic do not display symptoms, their coughs are different from those of healthy patients, and these differences can be detected by artificial intelligence programs. The model accurately detected 98.5 percent of asymptomatic coughs. New algorithms such as this one shine a light on AI’s wide variety of uses and open the door to new discoveries and treatments.

Although most of us are experiencing a pandemic for the first time, history has a way of repeating itself. With the Spanish Flu in 1918, the Asian Flu in 1957, the ongoing AIDS pandemic and epidemic since 1981, and the Ebola virus from 2014-16, the world has seen its fair share of diseases and pandemics, all of which have affected our way of living. People fighting earlier pandemics didn’t have the benefit of AI to speed up research, or the development of treatment and control solutions, but for our generation, technology has been key to gaining control over the virus. Our lives have changed in unimaginable ways, and knowing that we may never get these few years of our lives back seems frightening in the moment. Shifting to the bigger picture helps us understand that technology’s role in this pandemic is helping things like vaccine development and patient diagnosis progress at a much faster rate.

Beyond protecting healthcare workers, the most effective way for us to move past the pandemic is through a clinical vaccine. Creating vaccines often takes years or even decades; developing one is no easy task.

We were shocked to find that prior to the formation of the COVID-19 vaccine, the fastest developed vaccine was for mumps, which took close to four years to become available to the public.

In contrast, the COVID-19 vaccine was developed in less than a year, with the Pfizer-BioNTech vaccine granted emergency use authorization by the FDA on December 11, 2020. Seven days later, Moderna received the same FDA authorization for its version of the COVID-19 vaccine, and others have followed.

Our research found that machine-learning systems powered by AI have played a vital role in developing our current vaccine solutions. For any virus, there are thousands of different aspects the immune system reacts with. Therefore, there are thousands of varying vaccine solutions. AI and machine-learning tools can predict which specific parts of a virus are more likely to evoke an immune response causing us to feel sick. In doing so, vaccine production is sped up as scientists can utilize these tools and focus on the few specific parts of the virus known to make us sick.

Hearing heartwarming stories from teachers and friends about hugging their parents and grandparents for the first time in a year reminds us of how trying these times are, but we have hope.

Without doubt, we are excited to receive our doses of the vaccine. Regardless of when we are permitted to do so, vaccines serve as long-term solutions. In the meantime, we found that contact tracing apps have become increasingly efficient with the help of machine learning. Algorithms can automate alerts and notifications of COVID exposures and analyze the immense amount of data each phone receives at any given time. Such processes are traditionally labor-intensive and slow, but these algorithms have accelerated them, which is critical during a pandemic.

Companies such as Apple and Google have developed contact tracing apps that work through our phones’ Bluetooth to scan for other nearby phones with the app. When two phones connect, they exchange identification codes, and your phone begins to record how long you spend near another person, based on factors such as how far apart the two devices are and how strong the received signal is. If you test positive for COVID, your local health department asks whether you would like to notify others you may have exposed. If you agree, your phone sends an anonymous notification to other devices that came within six feet of yours for longer than fifteen minutes. When notified of an exposure, the app gives instructions on the next steps to take, including getting tested and isolating for fourteen days.
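The exposure rule just described boils down to a simple threshold check. Here is a much-simplified sketch of that logic; the record format, the distance figure, and the function names are illustrative assumptions (real apps infer distance from Bluetooth signal strength rather than measuring it directly).

```python
CLOSE_CONTACT_FEET = 6.0   # "within six feet"
EXPOSURE_MINUTES = 15.0    # "for longer than fifteen minutes"

def notifiable(encounters):
    """encounters: list of (anonymous_id, distance_feet, minutes) records.
    Returns the anonymous IDs that should receive an exposure notification:
    those whose total close-range time exceeds the threshold."""
    totals = {}
    for anon_id, distance, minutes in encounters:
        if distance <= CLOSE_CONTACT_FEET:
            totals[anon_id] = totals.get(anon_id, 0.0) + minutes
    return {i for i, mins in totals.items() if mins > EXPOSURE_MINUTES}

log = [("a1", 3.0, 10.0), ("a1", 4.5, 8.0),   # 18 close-range minutes: notify
       ("b2", 5.0, 5.0),  ("c3", 12.0, 40.0)]  # too brief / too far away
print(notifiable(log))  # {'a1'}
```

Note that only anonymous identifiers appear anywhere in the computation, which is how these systems preserve privacy while still flagging meaningful exposures.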

Social media and technology have filled a void, enabling people to feel more informed and connected.

From global response funds to the United Nations’ support of multilateralism, we are no longer just individual countries fighting this pandemic. Countries all over the globe are facing similar problems and hardships, with hospitals lacking resources and death tolls rising; despite this, social media and technology have filled a void, enabling people to feel more informed about the pandemic and connected to their loved ones. In fact, this pandemic has introduced an “infodemic” that has brought together tech titans worldwide to better educate our society about the pandemic and how it can be stopped. New trials and projects are being piloted so that new approaches can be tried in different countries: in February 2021 in Madhya Pradesh, India, drones were flown over busy streets to release a chemical agent that was anticipated to slow down the virus. Hospitals in Wuhan, China, and Kerala, India, are shifting to robot-driven care to minimize healthcare professionals’ risk.

Witnessing the power and potential of technology to help address the current pandemic has made us more hopeful for the future. While we may not know the answer to questions such as if masks will need to be worn regularly throughout our lifetimes, or if we will always need to stand six feet behind strangers in the grocery store, or as teenagers if we will ever get to link arms and gossip with friends again, global crises of all kinds call for more innovation and collaboration worldwide. With technology by our sides, our generation has the power to continue to innovate, collaborate, and change our world for the better.

About the Authors

Anwesha Mishra and Sohie Pal are alumnae of the Museum’s Teen Internship and Teen Engagement Council programs. They collaborated through online Zoom meetings and shared Google docs to write this blog.

Sources

Coles, Terri. “Contact Tracing Apps Use ML to Curb COVID-19 Outbreaks.” ITPro Today, July 10, 2020. https://www.itprotoday.com/machine-learning/contact-tracing-apps-use-ml-curb-covid-19-outbreaks.

FDA. “FDA Takes Additional Action in Fight Against COVID-19 By Issuing Emergency Use Authorization for Second COVID-19 Vaccine.” U.S. Food and Drug Administration. FDA, December 18, 2020. https://www.fda.gov/news-events/press-announcements/fda-takes-additional-action-fight-against-covid-19-issuing-emergency-use-authorization-second-covid.

FDA. “FDA Takes Key Action in Fight Against COVID-19 By Issuing Emergency Use Authorization for First COVID-19 Vaccine.” U.S. Food and Drug Administration. FDA News Release, December 11, 2020. https://www.fda.gov/news-events/press-announcements/fda-takes-key-action-fight-against-covid-19-issuing-emergency-use-authorization-first-covid-19.

Ferguson, Cat. “Do Digital Contact Tracing Apps Work? Here’s What You Need to Know.” MIT Technology Review. MIT Technology Review, November 20, 2020. https://www.technologyreview.com/2020/11/20/1012325/do-digital-contact-tracing-apps-work-heres-what-you-need-to-know/.

Chu, Jennifer. “Artificial Intelligence Model Detects Asymptomatic Covid-19 Infections through Cellphone-Recorded Coughs.” MIT News, Massachusetts Institute of Technology, October 29, 2020. https://news.mit.edu/2020/covid-19-cough-cellphone-detection-1029.

Solis-Moreira, Jocelyn, and Yella Hewings-Martin. “COVID-19 Vaccine: How Was It Developed so Fast?” Medical News Today. MediLexicon International, November 15, 2020. https://www.medicalnewstoday.com/articles/how-did-we-develop-a-covid-19-vaccine-so-quickly#Rigorous-guidelines-for-clinical-trials.

Explore more CHM AI resources.

 

Education Sponsors

First Tech


The post AI Gives Two Teens Hope for COVID-19 appeared first on CHM.

Unprecedented: Gen Z’s Right to Vote https://computerhistory.org/blog/unprecedented-gen-zs-right-to-vote/ Fri, 16 Oct 2020 16:26:53 +0000 https://computerhistory.org/?p=18922 Gen Z first-time voters share their thoughts and concerns about elections in today's digital world.

The post Unprecedented: Gen Z’s Right to Vote appeared first on CHM.

First-time Voters Speak Up

“Should I post this? What should the caption be? Will this fit in with my feed?” These are just some of the many questions we, as teens, ponder constantly. Social media has always been a major part of our lives, connecting us with our peers, favorite celebrities, and the world around us, especially during the pandemic.

But recently something has changed. 

There was a time we would go on Instagram for important life updates from our peers: Sonia’s trip to Sweden, Ryan’s new boyfriend, or Karie’s argument with her friend group because she didn’t comment on their photos. But we don’t see Jennifer’s picture-perfect smoothie bowl anymore. Instead, we see posts pleading for petition signatures and infographics educating us on topics we’ve never heard about. Spurred by the Black Lives Matter movement and other crises around the world, the voices of those who have long faced injustice are finally being heard, inspiring an outpouring of allies advocating for change.

For many of us, the Gen Z’ers—the ones who grew up with iPhones instead of phone books, searching through the internet instead of the library, and posting to Instagram instead of Myspace—this year is the first time we are able to vote in a federal election. And as we, the voters, are changing and growing, so is the very landscape of politics and the election itself. Campaign ads are running on YouTube and Facebook in addition to TV, and candidates are attempting to connect with us through sharing gaming stats or skincare routines rather than through hometown rallies.

We are a growing demographic.

— Gen Z

We are a growing demographic that political candidates want to appeal to. We have more power than ever, and we recognize that. With so many of us rallying together and taking a stand on social media, we are proud to be a generation that speaks up for what we believe in and one that is aware of the systemic issues impacting all of those around us. It’s empowering to be a generation that is “woke,” and it is the new normal.

To some of us, the shift in social media—from selfies to educational graphics about the Black Lives Matter movement, Yemen, Lebanon, systemic racism, LGBTQ+ injustices, saving the USPS, Trump vs. Biden, and so on—has been quite overwhelming. It isn’t that we disagree with the causes behind these posts and the ongoing political activism; rather, it’s how fast this change has come and the sheer volume of posts. As peers have begun to post more and more, the pressure has increased, and it feels like we are obligated to post. Terms such as “performative activism,” “attention seeker,” and “trend-follower” float in the air, growing more prevalent and more toxic. New questions have started to arise too: “I don’t normally post, but if I don’t post this, will people think I am racist?” “I already donated money … Do I have to tell everyone I did?” “All my friends are posting this graphic … should I post it too?”

We often believe social media content to be true.

— Gen Z

With recirculated posts on Instagram that we see over and over again, it’s hard to be exposed to new perspectives if we don’t go out of our way to find them. This is how an echo chamber—an environment where someone only encounters information that affirms their existing beliefs—is formed. Especially for teens like us, echo chambers are dangerous because they will be harder to break out of later in life. Although we learn how to find neutral and reliable sources of information in high school, we often forget to apply those skills when quickly scrolling through posts, especially when it’s harder to track the source of information. We often believe social media content to be true, even if it’s misleading or exaggerated. In a time when the political climate is especially polarized, information that circulates often presents a very biased perspective, which can hinder us from being accurately informed at the polls as first-time voters.

At first glance, social media seems like it can expose us to new ideas and help us expand our worldview, but social media’s algorithms only deliver more of the content that we like and already interact with. Even offline, confirmation bias—the tendency to look for information that only supports our opinions—makes it harder for us to be exposed to new perspectives. We often find ourselves looking for information where we already agree with the opinions on certain issues or political candidates instead of actively trying to understand the other side of the argument. It’s too easy to just click out of opposing articles and ignore the evidence presented.

Even the place where we live perpetuates our own echo chambers. Here in the Bay Area, we are often exposed to liberal perspectives, making many of our conservative peers in school feel like a minority. In other locations, residents may be predominantly conservative, making it hard to express opposing ideas there as well. Regardless of our political beliefs, it can be difficult to find information from reliable sources that helps us understand an issue through a variety of lenses.

It feels like candidates are trying to reach out to us, but they don’t always hit the mark.

— Gen Z

Not only has social media impacted our ability to find new perspectives and expand our own, it has also heavily influenced the way we view and interact with political candidates. Having only ever known a reality with the internet and the web, we’re accustomed to being bombarded with information on online platforms. We scroll through thousands of posts and videos every day, and we see not only ads related to our personal interests but also ads from campaigns. The key part of an ad is not so much its content as its location: a platform many of us use all the time. It feels like candidates are trying to reach out to us, but they don’t always hit the mark.

One particular YouTube ad asked us to rate Trump, but none of the options allowed a remotely negative review. We thought, “Why don’t we have another option? He’s not telling us what he’ll do for us as voters, so why should we care?” But there have definitely been much worse attempts to connect with us. Elizabeth Warren’s Instagram livestreaming was a bit cringey, and her campaign TikToks felt out of place. We understand many politicians are generations older and are highly unlikely to regularly use Instagram or TikTok. We appreciate when campaigns engage with youth platforms because they actually care and want to connect with new voters, not as political theater, which is sometimes how these attempts come across. We want to see honesty, including mistakes, and candid opinions from a candidate.

When candidates showcase their lives in a natural way, it feels much more genuine. For example, when Alexandria Ocasio-Cortez (AOC) answered questions on Instagram about Congress, shared a day in the life of a representative, or tweeted her League of Legends ranking, many of us felt connected to her. It’s exciting to see someone who is relatable and understands the youth population, but it’s easier for AOC, a millennial herself, whereas many of her colleagues are much older. We recognize that it’s not fair to expect older candidates to understand youth the same way.

We’re frustrated with the current systems in place and are hungry for change.

— Gen Z

Still, it’s interesting to see the new ways that politicians have tried to reach us, like Biden’s campaign releasing yard signs in the popular video game Animal Crossing. A game where you can design your own island and catch animals with friends online, Animal Crossing was never a tool we expected a campaign to use. But given how our generation is growing more politically active than ever, it’s not far-fetched to involve video games in politics.

Although we are just a subset of the youth population, we see youth everywhere across social media who care about anti-racism, the environment, and equitable human rights. We’re frustrated with the current systems in place and hungry for change. We want to see that same passion, honesty, and care in candidates, especially when they use youth platforms. More importantly, we want to know who they are and what they stand for.

As teens participating in politics at an unprecedented time, we have arguably been the most politically active generation ever. Even if we cannot vote, we strive to stay updated on the latest political news, watch the debates and conventions that have been streamed online, and discuss our opinions and beliefs with friends and family. We have shared on social media, marched at protests, spoken at city councils, and defied generational gaps and norms to champion political changes we believe in. We live in a society that looks down on us for growing up with incredible technology that has supposedly “made us lazy.” Some believe we shouldn’t have a say in political issues because we are “too young.” However, the fact is we live in a world where we have to deal with the consequences of actions that we did not have a say in, such as climate change, poverty, warfare, immigration, and even basic human rights for all. Now that our generation is being given a chance to vote, trust us when we say we will not take it for granted.

About the Authors

(Row by row from upper left in the image above) Zade Lobo, Diya Pathak, Merritt Vassallo, Lydia Lam, and Ramya Chitturi are five of CHM’s Teen Alumni, and previous members of the Museum’s Teen Internship and Teen Engagement Council programs. As tech-savvy and politically engaged Gen-Z’ers, they collaborated through online Zoom meetings and shared Google docs to reflect on the impact and influence of social media and technology for first-time voters. They look forward to voting in their first federal election soon!

 


