A design for a flying machine, by Leonardo da Vinci, 1488 / Institut de France, Paris
What is Technology?
Technology is an extension of ourselves that we create. For example, a hammer is an extension of one’s arm, increasing its effectiveness. Technology can also be anything that isn’t “natural”, that is, anything manufactured by humans.
In this class, we will focus on the technologies that emerged in Western culture, from the beginnings to today. But we don’t want a laundry list of inventions and devices. Rather we want to explore how those devices came about, and what they say about us and our history.
Whenever we learn about history, any kind of history, we are influenced by our own era. At present, to look at technology means to focus on innovation (1). I think this is partly because of the business emphasis on new products and inventions, which our contemporary society values highly. That focus is then reflected back through the past. Anything new or different is seen as a turning point, and items used for years or centuries are often ignored. Another trend at the moment is the idea that technology causes things, such as social change. This may be because in our own time it often feels like technology has a will of its own, that technology is changing faster than we can handle.
I think both of these perspectives, the focus on innovation and technological causation, have some merit but cannot tell the whole story. To focus only on new inventions, or to see innovation as something conscious in past societies, is deceptive. It may well be that inventions used over very long periods of time are those which have staying power, and thus reflect our history better than new items. The idea that technology is deterministic is also problematic. To me it seems like projecting animate intentions onto inanimate objects. When we say a particular technology causes something, or that it “wants” something (2), we imply that its power is beyond our control.
Kevin Kelly on What Technology Wants, TED Talk (20 minutes)
Although there have been times when technology has indeed been beyond our control, to study history within a framework beyond human control is dangerous. It is hazardous in terms of accuracy, but it also is morally dangerous in that it may imply lack of responsibility for our own creations.
How Does Technology Happen?
Da Vinci’s idea for a helicopter, 500 years before modern helicopters were invented / Wikimedia Commons
The old saying goes, necessity is the mother of invention. In other words, when there is a need, someone invents something to fill the gap.
If this is so, there would be many incremental inventions and technological changes over the years. Something works, but not as well as it could, so an insightful user adds a part or changes something slightly. The more useful designs, presumably, get adopted.
And yet, when we study history, we are often presented with Great Inventions and Great Inventors (Gutenberg, Leonardo, Maxim, Oppenheimer). This fits with the “Great Man” theory of history. It’s the same kind of thinking that leads us to talk about “revolutions” – the Neolithic Revolution (suddenly, agriculture!), the Agricultural Revolution (suddenly, lots of food!), the Industrial Revolution (suddenly, industry!). As you can tell, I distrust this kind of history, although it can be useful for explaining when certain elements of technology come together to create social, economic, or political change.
I prefer the idea of continual development, where incremental changes make technology more useful, and ideas emerge and re-emerge. Take, for example, Leonardo da Vinci’s work. His Aerial Screw is a 15th-century design for what we would today call a helicopter. The idea of human-powered flight emerges and re-emerges. So does submarine transport (there were submarines spying in the American Revolution) and body part replacement (legend records a leg transplant in the 3rd century).
Paleolithic and Neolithic Technology
Early humans gathered together for protection against the elements and efficiency of food collection. Early technologies would have included the control of fire, shelter building, and ways of gathering food and saving water, but some of the most interesting are those that created art.
From Paleolithic to Agricultural?
Mother Goddess, The Neolithic site of Çatalhöyük / Anatolian Civilizations Museum, Ankara, Turkey
The Paleolithic (paleo=old, lith=stone) age indeed featured stone tools of all sorts, and we assume that early humans perfected designs of arrowheads, scrapers, baskets and containers, and other implements to increase the efficiency of hunting animals for food. Animals provided not only protein, but also skins (which could be used for clothing and as wraps for carrying things), bone (useful for small tools and ornamentation) and organs (for medical remedies).
I once attended a lecture in an anthropology class here at MiraCosta, given by a Dr. Ford. As a historian, I had heard about hunting and gathering, but I didn’t know until I sat in on his class about scavenging. Apparently hunting, being very hard work, doesn’t bring in much meat, or at least not on a predictable schedule. Gathering (of plants and berries) doesn’t supplement the diet but is rather its foundation. I learned that scavenging dead animals that had been killed by predators was a major source of food. I had never thought of humans as scavengers, but it made perfect sense. It did make me wonder, though, whether cooking had been used not only to make meat more tender, but to burn off the toothmarks of other animals.
Back when I went to college, I was taught that agriculture was a natural progression from hunting and gathering. Different groups of humans took this leap at different times, likely discovering that planting seeds could be more efficient than gathering. Saving the seeds from the biggest, strongest plants also makes logical sense. But I was taught that it was agriculture that caused humans to settle down, to slow down their nomadic lifestyle of following animals, in order to farm. The first major communities, then, built up around these farming settlements.
That theory is being questioned now, partly because of new dating methods at very old sites such as Göbekli Tepe and Çatalhöyük. Both communities had many permanent buildings, and for a long time it was assumed that they must have grown their food just outside of town. But further excavations and carbon dating have discovered no evidence of agriculture at all. The assumption may well have been wrong. If so, it means that large ceremonial centers may have been built for spiritual reasons rather than for physical protection or as centers of farming production.
This sort of shift in thinking is one of the reasons why History is not a “dead” subject. Many people believe that history is simply a set of facts from the past, that we know what happened. So, for example, a history book written in 1855 about ancient Greece must have all the same information as a book written today. But of course, that isn’t so. History is not a collection of facts, but rather of interpretations based on those facts we have. Sometimes new facts are discovered (or a lack of facts, as noted at Göbekli Tepe and Çatalhöyük). This changes the interpretation. In this case, we may have to consider whether spiritual needs might have been, in some places and in some ways, as important as protection, food, and other basic needs.
The Role of Shamans
Certainly the shamans and other spiritual leaders were the first to be fed by the community. When people come together to hunt, gather, scavenge, and farm, almost all of them are working on survival for themselves and the group. There is little spare time – all energy is used for subsistence activities. As the community expands, if it does well, there may be some surplus food. This means that not everyone is required to engage directly in food production. The group can afford to “pay” a member to do something else by feeding him or her. The “first job” was almost always that of the shaman.
The shamans were paid to intercede with supernatural forces on behalf of the community. What did this have to do with technology? The shamans’ responsibility to provide spiritual guidance for the community was in many ways dependent on his or her ability to interpret the natural environment. Weather was a particular issue, since drought or flooding could kill crops and make animal movements shift, threatening the lifeblood of the group. Shamans became very good at astronomical and climate observations, determining and recording patterns in order to make predictions and keep the community prepared. They were, in fact, the first scientists, basing their recommendations on direct observation of the natural world.
Art as Technology
It is easy to see how the construction of buildings at places like Çatal Höyük would be technology, but a great deal of time and effort was also spent on items we would consider art. In many places in Europe, particularly in the caves of France, it is possible to view cave paintings of extraordinary craftsmanship. There are handprints on walls, and detailed paintings of animals. Some of the most intricate have been only recently discovered, within the last hundred years or so.
For a long time, scholars simply admired the work. But recently some have begun seeing it in a new way, noting particular patterns in the paintings. Take a look at our secondary source reading for this week (below), which suggests that artists may have been trying to convey motion, as in a comic.
Here’s an animation:
What we have here is evidence of what I think is a necessary part of society: narrative. Narrative, or story-telling, implies a self-consciousness in a culture, a desire to record the present in a way that will be useful to the future. Telling stories is part of every society we know of on the planet. Narrative defines who we are, whether it is the story of a hunt on a cave wall, or the Epic of Gilgamesh, or stories of heroes and saints. Shamans told stories to explain the natural world. And narrative can be a powerful force: we criticize national leaders who “rewrite history” to gloss over their country’s bad behaviors and teach future generations a more generous view of their people. We can trace the history of narratives from cave paintings and stories, to oral traditions passed down through memorization, to the development of writing, to printing, to the internet.
Effects of Agriculture
Once agriculture began (the Neolithic Age), which it did in fits and starts, life did change. From foraging to deliberately planting seeds, from returning to last year’s seed plot to deciding to stay, is the presumed pattern. We know that when people start planting, their diet shifts. Less scavenged and hunted meat is consumed, and more vegetable matter is eaten. Grains, which evolved from grasses and contain concentrated carbohydrates, become central to the diet. Sometimes the dietary shift could be deadly, as in the case of the Cahokia civilization in the Americas, which became so dependent on corn (maize) that malnutrition and disease developed.
But for most, settling down provided opportunity. Planted crops could be improved by selecting out the largest or best-tasting fruits and grains to save seeds. Plants could be hybridized by hand, cross-pollinated to create new, stronger forms. Settlement appears to have led to the division of labor, with child-care being tied to an interruptible activity (farming) rather than one that could not be interrupted (hunting). Some historians believe that this sexual division of labor, where women tended crops and children continually while men went off to hunt occasionally, began here, with the Neolithic Revolution.
Technology as Art
Left: The 3,500-year-old “Giant Beaker of Pavenstädt” / Stadtmuseum Gütersloh, Germany
Right: Neolithic stone arrowheads / British Museum
As one would expect, the technologies used served the purpose of ensuring survival and subsistence. Hunting with spears or bows and arrows requires arrowheads, which are still found today by archaeologists. Grain and other foods could be dried and saved, necessitating containers such as baskets or pots.
Late in the Neolithic era, monumental stone “henges” were a feature in some areas of Europe. Evidence of the oldest henge was found in 2002 near Goseck, Germany. The Goseck henge is estimated to be 7000 years old, and its southern gates line up with the summer and winter solstices. As with other henges (Stonehenge being the most famous), this astronomical alignment has led archaeologists to believe they served as observatories. Certainly the need for shamans to observe nature closely would support this idea.
From what we know of more basic cultures that exist today, and from our own histories of legends and tales, we assume that Paleolithic and Neolithic people told stories. The cave paintings may have represented not only the animation in their daily lives, but specific tales of hunts and observing animal life. Container designs and even weapons may have been decorated to tell stories. Because we have nothing that is considered “writing” from this era, we call it “pre-historic” – before history. In the next unit we move on to evidence that goes beyond archaeology into an exploration of the tools used for civilization.
While it’s easy to see prehistoric technologies only in terms of arrowheads, hunting, and connections with the supernatural for the purposes of survival, we are discovering more connections all the time. The goods carried by Ötzi, a prehistoric man discovered in Alpine ice in 1991, included tools and remedies. Analysis of the body has indicated the use of acupuncture or something similar, with markings in a pattern. He carried small bundles of fungus that were likely used as medicine. His shoes were made with bearskin soles and were waterproof. He carried a copper axe and arrows, and several materials for starting fires under different conditions. Just this one discovery gives us an idea of the possible complexity we’re missing about this era.
Ancient Mesopotamia, Egypt, and the Levant
One definition of “civilization” is the culture that surrounded the creation of cities. There is also an implication that “civilization” refers to cultures that are not only urbanized, but that have organizations and structures for many areas of life, including politics and religion. We also tend to assume that these cultures keep records of themselves, not through pictures, but through writing. In ancient times, technology was based on fulfilling the need for social organization, including the structures connecting civilizations with the forces of nature.
Ancient Mesopotamia / Wikimedia Commons
The region of Mesopotamia is now modern-day Iraq, but in ancient times it was the location of three different civilizations.
The Tigris and Euphrates Rivers drain into what is now called the Persian Gulf. Nowadays they meet up and form one stream to the Gulf, but archaeologists discovered that in ancient times the shoreline was much further to the north. The ancient town of Uruk, now far inland, was on the sea. Over the centuries, silt carried down the rivers has built up the land and moved the shoreline southward.
Geographically, the rivers dominate the area, but they are not easy to work with. The Tigris and Euphrates rivers don’t have predictable cycles – they flood unexpectedly, and there are years of drought where the rivers don’t rise at all. Since the fertile, watered areas are closest to the rivers, this situation presented a challenge for early agriculture.
The theory of geographic determinism supports the idea that early Mesopotamian cultures were inventive, pessimistic, and warlike, all because of the rivers. The insecure environment for agriculture meant that a certain amount of creativity was called for, particularly in designing irrigation. The pessimism can be seen in the Mesopotamian view of the afterlife, a dark, dusty place where souls ate dirt and were shadows of their former living selves. It can also be seen in the Mesopotamian view of the gods – fickle, powerful forces who played with humans for their own purposes and really didn’t care what happened to them. For example, in the Epic of Gilgamesh, the goddess Ishtar falls in love with the human king Gilgamesh, and wants him for her lover. He has no desire to become one in a long line of discarded men, so he tells her no. She complains to her father, who sends the Bull of Heaven down to destroy Gilgamesh’s whole town.
Gilgamesh and his lieutenant Enkidu ultimately slay the bull, but Enkidu dies in the process. They are heroes for standing up to the Bull, the tool of the gods sent to destroy them. Thus it is possible to triumph over supernatural forces, but at a human cost. This seems to reflect Mesopotamians’ view of the natural world – a place where unpredictable, fickle forces make life difficult.
The Sumerians, who gave us this story, emerged in the south. Among their achievements was the development of cuneiform writing, “wedge writing” made by pressing the end of a reed into wet clay. Unlike the pictographic writing of other cultures (such as the hieroglyphics of Egypt), cuneiform came to be more abstract and symbolic. This move to symbolic writing marks a shift in literacy. Pictographs, though limited in number, can be interpreted easily because they look like the word they represent. Cuneiform came to look like just shapes in different patterns. Thus, to learn Sumerian, one would have to learn thousands of different symbols, or combinations of wedge strokes. Literacy, then, would be confined to people who had time to learn how to read.
Figures at top of stele above Hammurabi’s code of laws depicting the code being given to Hammurabi, c.1750 BC / Louvre Museum, Paris
Nevertheless, by the time of the second great civilization in Mesopotamia, cuneiform was being used to make sure everyone knew the law. Babylonia was ruled in the 18th century BC by King Hammurabi. Hammurabi (or more likely his scribes) compiled many of the disparate law codes of the region into one. Hammurabi’s Code was then carved into stone stelae, monoliths mounted in the centers of towns. The punishments were often harsh, as befits a society that lives for today.
The punishments in Hammurabi’s Code varied by the status of the offender. Typically, slaves had to pay with a body part (for example, stealing led to your hand being cut off), while wealthier free citizens could pay a fine in silver. Crimes of slaves against free people were punished more harshly.
When we study a law code, we are looking at what I call a “prescriptive” source. It tells people what they’re supposed to do. That must mean they aren’t doing it. There would be no reason for a law unless a number of people were doing the thing forbidden by the law. So prescriptive sources can tell us a lot about what’s really going on in a culture.
So because it was forbidden, we know that people in ancient Mesopotamia were stealing, allowing their animals to trample others’ property, sleeping with people they weren’t supposed to, watering down drinks in bars, and striking each other a lot. We know that men tried to abandon wives who were sick, because the law forbids it. We know there was adultery, with the punishment that if caught in the act, the couple was tied together, weighted with a rock, and drowned in the river.
But we also know from the architecture that the reason for this harshness was to preserve the community. The ziggurat was the great architectural achievement of ancient Mesopotamia. The ziggurat itself was a kind of stepped pyramid with a temple at the top, and it was surrounded by the temple services. Located in the center of town and dedicated to the town’s particular god, the ziggurat was a center for life and ceremony.
Kings were mortal (unlike in Egypt), and when they died the new king was proclaimed at the ziggurat. In order to affirm his connection to the supernatural, he had to have sex with the top priestess of the temple to confer legitimacy. That’s another reason why law codes were necessary and very detailed – in Egypt, there were few law codes because the pharaoh was a god, and thus his word was simply the law in any particular matter. A mortal king must lay out the rules in advance.
The Nimrud lens / British Museum, London
Unlike Mesopotamia, ancient Egypt was highly stable. The Nile River, which flows from the south to the north into the Mediterranean, was so predictable that it was possible to measure its rise exactly and determine what it would be in the future. The flat lands on either side of the Nile contain the fertile silt from the river, and when it rises each year it floods the land, leaving moisture and more good soil. Agricultural life was thus easy in Egypt.
The Goddess Ma’at wearing the “Feather of Truth” / Wikimedia Commons
As a result, Egyptian gods were seen as helpful and dependable. The goddess Ma’at brought justice by weighing the heart of the accused against a feather of truth. Anubis, the jackal-headed god, helped people pass into the afterlife, a place of peace and joy.
At first, this lovely afterlife was a gift only for pharaohs and very important elites, but as time went by the idea evolved that ordinary people deserved the trappings of an afterlife also.
The Great Pyramids were constructed to assure this afterlife for the pharaohs. Technology in Egypt was thus at the service of a deep spirituality. Unlike ziggurats, pyramids were places of death, and the ceremonial reanimation of life as the pharaoh entered the next world. As the afterlife was “democratized”, more and more workers were employed in the funeral industry, constructing pyramids, painting the interior walls, carving the stone.
The Egyptians also excelled in astronomical observation, needed for proper worship and holidays. Their personal care was complex enough to qualify as technology. They created makeup that was in demand on the trade routes, and even invented the vaginal sponge for birth control. It was made from real Mediterranean sponges soaked in lactic acid derived from acacia trees, which acted as a spermicide. We know about these methods because of the Ebers Papyrus, which dates from about 1500 BC. It contains information on various remedies, such as herbal inhalations for asthma and various laxatives, and a treatise on the heart indicating an understanding of the circulatory system.
Egyptians also produced papyrus, an inexpensive “paper” made from reeds. Although they held to a pictographic writing system, the abundance of papyrus meant that much knowledge was written for future generations.
Most early writing, including Egyptian hieroglyphics, was pictographic – each symbol meant a word, and each symbol looked like a picture of what it represented. Sumerian writing, by contrast, had become symbolic, as its pictographs gradually developed into abstract symbols:
The Phoenicians, however, broke with tradition, creating symbols that represented sounds instead of words. The Greeks adopted this “alphabet” idea from the Phoenicians, and (although vastly simplified) the evolution of alphabet technology then went something like this:
The Hebrew People
The Hebrews emerged in the eastern Mediterranean as a pastoral group, and it is unclear when they became monotheistic. What we do know is that the belief in one god became the base of their culture and the source of their difference from other cultures. Among the contributions of the Hebrews to our civilization are a belief in progress and the idea of a portable god, one who is invisible and not tied to a particular place.
When looking at the technology of the Hebrews, we can find some in the Bible (the measurements for the ark and the Tabernacle, for example), but I believe the most significant is text glossing. A core Jewish belief is that the word of God, as represented in the Hebrew Bible after the 9th century BC or so, is meant to be studied and examined. While the books themselves are precious and special, the ongoing examination and analysis of the Bible (and, indeed, all Jewish texts) is a particular responsibility. Although the Hebrews had priests, the role of these priests over time was reduced to that of ceremonial leaders. The spiritual leadership devolved to the rabbis, the ones who studied the text.
This page from the Talmud shows the text being analyzed in the middle, then commentary around it, then commentary on the commentary. This organizes the material and analysis visually, creating deeper study. So I guess the innovation here is rabbinical learning. The concept of “glossing” the text creates resources for later analysis. This allows for the development of law by precedent, where cases can be decided by looking back to previous analysis and determining the extent to which it may apply to a current circumstance. / Wikimedia Commons
Writing itself, as I mentioned with prehistoric narratives, is a self-conscious act for any society. Whether on stone stelae or papyrus or parchment, the recording of commercial transactions, dynastic succession, calendars and other events in writing suggests a sense of history. Writing shows that we mean to preserve something beyond a single person’s lifetime, for others to learn or benefit from. Gadgets and inventions, individual pieces of technology, may be interesting in themselves, but they only become important when they reflect or represent a system of some kind. The system itself, whether writing, or irrigation, or embalming, tells us more about how people lived long ago than any single piece of engineering.
Ancient Greece and the Hellenistic Era
Hellenic or Classical Greece
“Hellenic” refers to the Greek word for Greece: “Hellas”. It is the era between the Archaic Age (the time of Homer) and the Hellenistic Era (following Alexander the Great). During this time most of Greece was divided into city-states (poleis), each with its own political system, and most in competition with the others for land or trade. The era is also called “Classical” Greece, since so much of the classic forms of architecture and sculpture developed during that time.
But much of what we call Hellenic or Classical Greece is really focused on the 5th century BC in Athens. The polis of Athens between the Persian Wars (which they won) and the Peloponnesian War with Sparta (which they lost) developed a rich culture that carried forward intellectual influences from before and added a new, naturalistic view of knowledge. Athens’ intellectual achievements were the product of a highly commercial society, supported by slavery and a democracy consisting of Greek, adult, male citizens.
The wealth of the society meant that there was time for learning. In cultures that must focus on subsistence, there is little time for intellectual exploration or education beyond that required to provide food and needed goods. Most “knowledge workers” would be shamans or others whose spiritual connections and scientific understanding of nature would make them of value. But surplus of agricultural goods, the first major advance out of the Neolithic Age, led not only to further specialized labor but also to the rise of a knowledgeable class of people. Those with extra wealth from trade could afford to hire smarter people to teach their children. Leisure time allowed for thinking about things.
Although many 6th-3rd century BC Greek writings have not come down to us in their original form, some were preserved and others were translated and retranslated outside of Europe. We have many more works showing us the Greek mind than we have from ancient Babylonia or Egypt. What we notice first in the Greek works is their emphasis on exploring and explaining the natural world. Greek scholars, for example, considered astronomy to be a branch of mathematics. They studied the stars and created models of planetary motion. They created maps of the known world, built cranes to move large objects, and designed plumbing and city systems. They used an abacus to do calculations.
Modern Technology Reveals Ancient Technology
The Antikythera Mechanism, found in a shipwreck over a century ago, also indicates a high level of understanding of math and science. By the time sophisticated x-ray technologies were available to examine it carefully (around 2005), the computer had already changed our lives. The device is now interpreted as an ancient analog computer. This is another way that history is always changing – it is the new emphasis that our current culture has on computers that led to a new interpretation of an ancient object.
Writing and Its Influence
Another way in which our current view of the world influences our interpretations can be seen in the Greek view of writing. I’ve mentioned before that writing is a self-conscious act for a society, a way of preserving the past and present for the future. But Socrates, one of the most famous Greek philosophers, was against writing as a way of preserving knowledge. He thought it stultified knowledge, freezing it in time in a way that made ideas difficult to question, and thus new knowledge harder to create. His student, Plato, wrote a dialogue between Socrates and another character to explain his mentor’s point of view:
Plato, The Phaedrus – a dialogue between Socrates and Phaedrus (~370 B.C.)
Socrates: I cannot help feeling, Phaedrus, that writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves.
Phaedrus: That again is most true.
Socrates: Is there not another kind of word or speech far better than this, and having far greater power-a son of the same family, but lawfully begotten?
Phaedrus: Whom do you mean, and what is his origin?
Socrates: I mean an intelligent word graven in the soul of the learner, which can defend itself, and knows when to speak and when to be silent.
Phaedrus: You mean the living word of knowledge which has a soul, and of which the written word is properly no more than an image?
Socrates: Yes, of course that is what I mean.
On the one hand, Socrates’ view may seem archaic. But on the other, we are now entering a world where there is so much information, and its format is always in flux. Some see the internet as a way to not only share writing, but to argue against it, to fight against it becoming static. Or perhaps Socrates would see the internet as just a collection of images representing ideas, rather than the development of ideas themselves.
Red figure artists sometimes signed their work. This is a Kylix signed by Phintias as painter, c.510 BC / Johns Hopkins Archaeological Museum
Greek art and architecture are often seen as a visual way to understand ancient Greek culture. Certainly the Greeks valued moderation and balance – that is evident everywhere from the design of the Parthenon to their medical works to the vases and pots they used every day. The Parthenon itself, commissioned by Pericles as a symbol of Athenian glory, was made of marble and had a huge painted statue of Athena inside it. The Greeks developed techniques for free-standing sculpture, and used it to express ideals of form. Many original Greek works, however, were cast in bronze. During and after the wars with Sparta, bronze works were often melted down for weapons. Many of the “ancient Greek” sculptures we have today are marble copies from Hellenistic and (primarily) Roman times.
The famous Attic pottery (Attica was the region surrounding Athens) is another representation of Greek artistic style, but it also represents their technological achievements. The pots themselves were beautifully made and used for everything from wine to olive oil. Styles were originally geometric, but by the 7th century BC had started to feature figures of humans and animals. “Black figure” painting was achieved by applying a clay slurry to a dry pot before firing. The paint could be incised and other colors applied on top. The iron-rich clay of Attica made for superior pots, and figures tended to be mythological. By the 6th century BC many pots were painted with images of (presumably famous) athletes, and some showed erotic scenes. Different artists had different styles, and historians today are able to distinguish among the masters.
Around 530 BC, the “red figure” technique was developed, a significant innovation that allowed not only for a black background, but also much finer detail. It was achieved through a three-stage firing technique, with the last stage at a lower temperature to melt and seal the images. The paint was applied directly to a smooth surface, making possible figures in profile, with detailed faces, and clear depictions of fur and feather on animals. Clothing could be more detailed, and additional colors used. Unlike in black figure pottery, the outlines carved into the pot disappeared during firing, melting into the black background, making for a much clearer image. A single technological innovation changed the field.
The Hellenistic Age
After the Peloponnesian War between Athens and Sparta ended, Spartan rule led to cultural decline. To the north of mainland Greece, in Macedonia, a warrior emerged who wanted to regain the glories of classical Greece. When Alexander conquered Greece and went on to create a huge empire to the east, he deliberately brought classical Greek culture with him. Alexander had been tutored by Aristotle, who had been a pupil of Plato, who had been a student of Socrates. Although he was not Athenian, Alexander saw himself as the inheritor of Greek philosophy and science. Aristotle in particular had been concerned with categorizing knowledge, of both the natural and human world. Along with Alexander’s armies went scholars and natural philosophers, to study the world being conquered.
Campaigns and empire of Alexander the Great / Wikimedia Commons
Most of the people in Alexander’s huge empire weren’t Greek – they were Bactrian and Persian and Indian and many other cultures. Although the political empire Alexander set up did not survive his death, trade flowed freely throughout the empire, and with goods flow ideas. The technologies of the east – including water-lifting devices, techniques for creating artworks with glass, cataract surgery, and mathematical advances such as independent work on what would be called the Pythagorean theorem – spread across the empire. The city Alexander built in Egypt, Alexandria, featured a huge library and attracted many scholars. (Humility was neither a Greek nor a Macedonian virtue – Alexander named a lot of the cities he founded “Alexandria”.)
Many of the scholars and intellectual achievements usually called “Ancient Greek” are actually from the Hellenistic Era (after Alexander’s death in 323 BC). Herophilus’s new ideas of systematic anatomy, Erasistratus’ work on the heart as the motor of the circulatory system, Galen’s development of the idea of bodily systems, Aristarchus’ idea that the sun is the center of the astronomical system, Euclidean geometry, the Archimedes screw – all of these are Hellenistic. Although historians often separate science and technology, Archimedes provides an excellent example of “practical” science, in his case in hydrostatics. The water-lifting screw he likely helped develop was used to lift water from the Nile for agriculture – unlike earlier devices, it was portable.
The Tower of the Winds or the Horologion of Andronikos Kyrrhestes, Athens / Wikimedia Commons
Around 50 BC, Andronikos constructed in Athens not the first water clock, but one ancient example that still exists. Water clocks work without a need for astronomical observation, and through much of history they have been either novelties or related to religious observance, tracking months and celestial phenomena rather than the hours of the day. His “Tower of the Winds” contained not only a water clock, but also eight sundials facing the cardinal and primary intercardinal directions (N, NE, E, SE, S, SW, W, NW) and a weather vane. It was designed to be seen from the Agora, so it might have helped people know how long they’d been shopping or attending government meetings!
One of the great horrors for historians likely occurred in the 4th century AD, when the Library of Alexandria burned. It contained so many scrolls that its burning, which may have occurred in several separate fires, is a symbol of lost knowledge. The Library was not just a collection of documents, but a state-funded institution that sponsored scholars doing original research.
Practical technological development also occurred during this era. Experiments were done with the torsion-spring catapult, which could hurl large objects at enemy troops. Kickwheels were added to potter’s wheels, speeding up the production of ceramics. The works of Heron show the invention of a surveyor’s instrument (startlingly similar to what we use today), a carpenter’s level, and the screw press, which saved labor by fully pressing grapes and olives with ease. Heron also created toys for fun, many using “steam power” to work – balls rotating in a steam flume, little figures offering drinks when an altar’s candle is lit.
In the 4th century BC, a natural philosopher emerged who had an extraordinary influence on Western thought for the next 800 years. The scientific world view he created influenced not only philosophy but the way in which many people assumed the world (and the universe) worked.
Aristotle was a student of Plato, who had proposed a world of ideal forms. Perhaps for this reason, Aristotle saw nature as perfect and organized, and sought to reveal that organization. His view of the heavens was that the earth was still, and perfectly spherical objects moved in perfectly circular orbits around it. He dealt with the obvious problem of retrograde motion (some stars and planets appear to move backwards at certain times) by adding more spheres to the system. But Aristotle worked in fields other than cosmology: ethics, rhetoric, metaphysics, anatomy, and logic, to name a few. And even though he was not a tinkerer with engineering or objects, the systems he developed reflected what people saw every day. We do not feel the earth move, and the stars and planets do seem (from our view) to be spheres and to move in orbits. The division of matter into earth, air, fire and water also makes sense, as does the idea that earth and water are heavy and so naturally move toward the core of the earth, while air and fire are light and move upward. Motion that is not naturally occurring must be “forced” by an identifiable mover.
Aristotle also created classifications of animals into a hierarchy, identified three types of soul corresponding to vegetable/animal/human, and saw the practical arts as necessities while science was a luxury. He knew that only people with leisure could engage in deep thought, and did so out of curiosity.
Many of the practical technologies which developed apart from (or at least without reference to) Hellenistic science, however, occurred in Roman times, our next unit.
It would be a mistake to oversimplify and glorify the Greek advances. Until the Greeks, we simply do not know how technologies fit in with intellectual life. The scarcity of written evidence has led historians to view pre-Hellenic societies as primitive in their understanding, basing their thought on supernatural connections. According to Auguste Comte, a 19th century philosopher, mankind passes through three stages. The first, primitive stage he called the Theological Stage. Here people assume supernatural causation for natural phenomena: the gods cause rain, inanimate objects have spirits inside them that determine their character. In the Metaphysical Stage, these forces become metaphysical instead: abstract concepts explain causation, such as warm air rising because it is in its nature to rise. In the final, Positive Stage, explanations are scientific, based on experimentation, observation and reason. We tend to assume that everyone before the Greeks was in the Theological Stage, imbuing rocks and weather with spirits, instead of acknowledging the observation and reason demonstrated by the shamans. When we acknowledge the scientific activities of the pre-Hellenic shamans, the Greek experience loses some of its novelty if not its interest.
Roman, Byzantine, and Arab Worlds
The story of the expansion of Rome is a military adventure, but it is also a technological adventure. As the western half of the Empire fell to the barbarian invaders, Byzantium preserved Greek and Roman methods. By the 9th century, Arab scholars had not only reclaimed this knowledge but moved it forward. Technology during this era of “Late Antiquity” was used to hold together an empire and recover lost knowledge.
The Roman Republic, according to legend, was founded in 509 BC with the overthrow of the monarchy. During the 5th and 4th centuries BC, Rome expanded from the center of the Italian peninsula, both northward to defeat the Etruscans and southward to the Greek colonies. I find it interesting that in this expansion, Romans always saw themselves as defending from attack rather than conquering and expanding.
Farming was the main activity of many in the Republic, and we can see some of the techniques by reading Cato. He was one of the first to write texts in Latin rather than Greek (the language of the intellectual world at the time), and he wrote a book on farming called De Agri Cultura. Among the advice he offered was the use of compost and manure to ensure good yields, ploughing deeply so that surface roots don’t form on olive trees, and the use of amurca (olive oil sediment) as a pesticide. It is all practical – there is no reference to science even though Cato himself was a learned man. In addition to providing practical techniques, Cato’s book glorified the role of the farmer in Roman life.
That said, many Roman men spent time soldiering, even if they were farmers. As Rome expanded, competition arose from goods produced in the new regions (Sicilian grain, with its lower price, is one example), and farming became a more difficult way to earn a living. Military technological advancements were often achieved by improving on Greek technologies. One example is the javelin, a throwing spear. Javelins were the first line of attack in a Roman battle, but they didn’t always kill. The head was designed to twist on the shaft as the point entered a shield. This meant the javelin couldn’t be recovered, but it also meant it was hard for the enemy to remove, which they had to do to make the shield useful again. The enemy thus spent valuable time trying to pull out javelins while the Roman army attacked with swords. The Greek catapult was improved, adding pins and holes to improve accuracy. Catapults were basically siege engines, whose purpose was to beat down the walls of a fortification.
The Roman Empire
As the Republic became an Empire (around 27 BC), Roman technologies focused on construction that tied Rome’s disparate territories together while demonstrating Rome’s glory. The centralized government could order and pay for major projects.
Pont du Gard, in Vers-Pont-du-Gard, Gard department, South France. The Pont du Gard is the most famous part of the Roman aqueduct which carried water from Uzès to Nîmes until roughly the 9th century when maintenance was abandoned. The monument is 49m high and now 275m long (it was 360m when intact) at its top. It’s the highest Roman aqueduct, but also one of the best preserved (with the aqueduct of Segovia) / Wikimedia Commons
Aqueducts served multiple purposes. Designed to carry water from distant snowpacks to lowland towns, they were built to a high standard. Stone arches supported clay-lined channels above, which could then be diverted into different areas of town. In Rome itself, which was composed of roughly one-third wealthy villas, one-third horrible slums, and one-third public areas, water to each could be controlled. In a drought, slums were cut off first, because people who lived there could go to the public areas, such as fountains and baths. Water to these areas could be reduced if the elite villas needed more. Thus the technology reinforced, in a very practical and evident way, the class structure of Rome. In rural areas, the huge rows of arches crossing the landscape reminded people that they were part of a huge empire, to which they owed their loyalty.
Aqueducts, and monumental Roman buildings, could be so large because of the use of concrete. Although not invented by the Romans, Roman concrete (a mix of quicklime, pozzolana ash and pumice stone) could be poured into forms and used as a core to support masonry.
Roman roads also tied the empire together, and were intended for moving troops quickly from place to place. (This is why we still find Roman roads under fields and rural areas – they were not built to connect towns or trade.) Originally the roads were made of wooden planks, on which troops could march and equipment could be rolled. But plank roads were subject to warping over time – they weathered badly. The Romans developed a sophisticated technique for building roads and streets, first digging out a trench, then layering it with different-sized gravel and stones, with large fitted stones on the surface. The roads were obviously intended to last forever, just like the Empire.
From City: A Story of Roman Planning and Construction, illustrated by David Macaulay
One major street innovation was standardization. In Roman towns, carts and wagons could clog the streets, so stepping stones were inserted across the streets at the corners. This made it possible for pedestrians to keep their feet clean crossing from sidewalk to sidewalk (another Roman invention). Since the stones were placed at set intervals, cart wheels had to pass between them, which meant that all carts and wagons had to be built to the same width, and able to pass each other. This was a technological method for enforcing social habits, creating a better situation for all.
The ultimate Roman technology to me, though, is the geared waterwheel.
Might as well learn this now – I’m into waterwheels. And mills. And textile manufacturing. But that will come later!
Vitruvius is the font of information for historians on Roman waterwheels and many other constructions. He was a Roman officer and an engineer. His De Architectura is one of the few surviving written works on architecture from this time (it was rediscovered in 1414 by Renaissance collector Poggio Bracciolini). In his works, he drew and described technologies in use at the time, such as water clocks, cranes and catapults. We know about Greek technologies (such as Archimedes’ screw) because he described them. And archaeology has helped us see how the mills worked:
Grist mill animation: This animation shows the mechanical workings of a grist mill for grinding grains into meal or flour. The mill is powered by water. A stream or river is channeled through a channel (flume) that deposits the water over a wheel that has buckets evenly spaced around the edge (waterwheel). The weight and energy of the moving water turns the wheel to produce energy for the mill. The axle of the wheel is a long horizontal shaft that has a large gear (crown gear) on the other end. This large gear drives a smaller gear (lantern pinion gear) attached to a vertical shaft that turns faster. This vertical shaft is also attached to a circular flat stone (runner stone) that then rotates on top of another static circular stone (bed stone). There is a hole in the center of the rotating top stone where grain is slowly fed in for grinding. The grain is ground between the stones and drops out at the edges of the stones where it is collected as meal or flour depending on how long and fine it is ground.
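The gearing described above is what lets a slow, powerful waterwheel spin a millstone quickly. A minimal sketch of the speedup, using hypothetical tooth counts (actual counts varied from mill to mill):

```python
# Sketch of the grist mill gear train: the crown gear on the horizontal
# wheel shaft drives the smaller lantern pinion on the vertical stone
# shaft, so the runner stone turns faster than the wheel itself.
# The tooth counts here are illustrative, not historical measurements.

def runner_stone_rpm(wheel_rpm: float, crown_teeth: int, pinion_teeth: int) -> float:
    """Speed of the runner stone, given the waterwheel speed and gear sizes."""
    return wheel_rpm * crown_teeth / pinion_teeth

# A slow waterwheel at 10 rpm, a 48-tooth crown gear, an 8-trundle pinion:
print(runner_stone_rpm(10, 48, 8))  # -> 60.0 rpm at the runner stone
```

The ratio of tooth counts is the whole trick: a big gear driving a small one trades the wheel's torque for the rotational speed needed at the stones.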
Why was massive flour production so important? Because in addition to feeding the military, we have those slums. Roman cities had many slaves and many poor, who posed a continual threat to the order of the empire. Keeping them fed was part of what historians call “bread and circuses” – Rome providing basic food and entertainment to prevent revolt.
How did the Romans know their place in the world, and expand so confidently? Hellenistic mathematicians and geographers Eratosthenes (3rd century BC, 3-volume Geographica lost but pieced together in excerpts from other sources) and Claudius Ptolemy (AD 2nd century, work rediscovered in the late 13th century by Maximus Planudes) had developed sophisticated world maps. Eratosthenes correctly calculated the circumference of the earth and provided a map, dividing the world into polar, temperate and tropical zones. Ptolemy improved the projection and added current knowledge. Here are their maps, in 19th century versions:
Ptolemy Maps / Wikimedia Commons
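Eratosthenes' circumference result can be reproduced with a short calculation, using the traditionally reported figures (the exact length of his stadion is still debated, so the modern-unit conversion is only approximate):

```python
# Eratosthenes' method: at noon on the solstice the sun was directly
# overhead at Syene but cast a shadow at Alexandria. The shadow angle
# equals the arc between the two cities, so the known distance covers
# that fraction of the full 360-degree circle.
# Figures below are the traditionally reported values, not precise data.

shadow_angle_deg = 7.2   # sun's angle from vertical at Alexandria
distance_stadia = 5000   # reported Alexandria-Syene distance

fraction_of_circle = shadow_angle_deg / 360          # = 1/50 of the circle
circumference_stadia = distance_stadia / fraction_of_circle

print(circumference_stadia)  # -> 250000.0 stadia
```

At roughly 157–185 m per stadion, 250,000 stadia comes out between about 39,000 and 46,000 km, bracketing the modern figure of about 40,000 km – remarkable for a measurement made with a shadow and a pacing survey.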
The Roman Empire declined in the 5th century AD, having been subjected to poor leadership, barbarian migrations, and religious turmoil. The first of these, poor leadership, has a possible technological aspect, at least according to early 20th century historian Rudolf Kobert and popularized in 1965 by historian S.C. Gilfillan. Lead was a common element in the Roman world. It was used as a foundation for face makeup (to make skin white), a material for cups and tableware, and as a durable lining for water pipes. These uses meant that the wealthy Romans were the most exposed to lead. The cups in particular would have been a problem, since wine would cause the lead to leach out of the cup into the wine. In contrast, poorer people didn’t wear makeup, used ceramic tableware, drank water from clay pipes, and tended to drink water or milk (an antidote for lead, as it happens) instead of expensive wine. The theory is that the elite class was slowly poisoned.
In 1984 historian John Scarborough pointed out that this theory had become popular as a replacement for that of 18th-century historian Edward Gibbon, who had blamed the fall of the Empire on Christianity. I can certainly see why 1965 would be the time when it would be popular – during that time we were discovering the damage that man-made chemicals were doing to our environment and our health. So this is yet another example of how what historians look at varies according to the issues of their own time. While the lead-poisoning theory has been discredited in recent years, in the case of certain individuals (I’m thinking of a couple of particularly insane emperors), it may have been a factor. The popularity of the theory also shows the continuing disbelief that such a huge and well-organized empire as Rome could ever fall.
But fall it did, as Germanic peoples moved into the Empire. This was a migration taking centuries, and in fact many Germanic tribes collaborated with Rome and protected the borders. But population pressure proved too great, as many groups pushed their way west due to climate factors in Asia. The culture they brought with them was quite different, and their technological achievements difficult to document. They tended to be rural farmers or hunters or woodspeople, living off the land. Terrified Roman sources tell us that they had no respect for concrete buildings, or engineering, or churches, or baths (or bathing, for that matter). They were mostly illiterate, and brought in an oral culture. Many Greek and Roman scientific works “disappeared” – some had been traded by Greek and Roman merchants to places eastward, some were acquired by the Church and were kept in monastic libraries, and some simply vanished. Those finding their way east were preserved in the eastern Roman Empire, which had always been primarily Greek and had separated from the west in the 4th century. This Byzantine Empire was ruled by a strong emperor with deep ties to the Christian Church, and the classical and Hellenistic tradition of scholarship could be said to have devolved to the Byzantine Empire for several centuries.
Byzantium had its own technological innovations. One was the pendentive dome over the Hagia Sophia church (completed in the 6th century AD). The technique of modeling a dome over a square space by cutting out a sphere would not be duplicated again in Europe until the Renaissance. They developed rafted grain mills that could be used on rivers and moved as needed, and counter-weighted trebuchets that were better than catapults. The infamous Greek fire, which was likely a pressurized fuel propellant flame-thrower on a large scale, was used in naval warfare. It was feared throughout the known world, and the chemical composition of the mixture was a state secret. Greek fire could also be sealed in pots for highly effective grenades.
Theotokos mosaic / Hagia Sophia, Istanbul
Mosaics had been invented by the Greeks many centuries before, but in the late Roman Empire they took on different characteristics due to new methods. Although in Western Rome, mosaics were mostly created as floors in villas, in Byzantium they covered walls and ceilings because the use of mosaics was in churches, and you didn’t want religious icons walked upon. Mosaics are made up of small pieces of colored substance. The earliest mosaics used colored stone, at first natural pebbles but, by Roman times, cut stone. In Byzantium, they began to import colored glass from Italy, and developed pastes made of glass, which allowed the light to dance through the mosaic. Mother of pearl, gold leaf, and silver added even more sparkle. Sponsored by the Christian state, mosaics got larger and more beautiful throughout Byzantium.
During these centuries a new power arose in the east, Islam. In converting to the new religion in the 7th century, many commercial tribes in Arabia and beyond repeated a pattern of collaboration and adaptation for the sake of trade. The ongoing commercial contact between Arab traders and those in Mesopotamia, Egypt and the Hellenistic Empire meant that books, ideas and inventions diffused throughout the emerging Islamic Empire. By the 9th century, the cultural and scientific center of this empire was Baghdad, heart of the Abbasid Caliphate, where Persian and Arabic scholars pored over the works of classical Greece and Rome. Like the researchers at Alexandria centuries before, these scholars were funded by the state to engage in their research. Some historians believe that state funding leads to more “pure” research – abstract explorations into the natural world, without practical intent. That would be science rather than technology, even though the knowledge might later influence technology.
Islamic scholars thus studied and advanced classical knowledge. In medicine, they divided hospitals into wards to prevent cross-infection, and taught medical students in a hands-on environment. They practiced advanced surgical techniques, used inhalant herbs for pain and anesthesia, and applied sulfur for skin complaints. Al-Rāzī (known as Rhazes in Europe) was one of the most famous physicians of the day. His De variolis et morbillis (A Treatise on the Smallpox and Measles) showed the difference between these two diseases, and his Kitāb al-ḥāwī, the “Comprehensive Book,” surveyed Greek, Syrian and early Arab medicine. His books contained not only factual information, but his own experiences as a physician.
Another Abbassid specialty was an improved astrolabe. We think astrolabes were used in ancient Greece, at least in a simpler design. And we know they were used in Europe in the Middle Ages, after improvements were made by the Arabs. View this 9-minute TED talk on the astrolabe, because it not only tells how an astrolabe works, but also what we’ve lost now that we don’t use it.
Tom Wujec, Learn to Use the 13th-Century Astrolabe
Empires need vast resources to maintain them, but they also create vast resources. These cultural resources were funded by centralized states and by merchants who could trade goods safely within their zones. Empires may include vast networks for moving armies, but they can also be cosmopolitan, recognizing and appreciating contributions from other cultures. Within such an environment, the collection of knowledge can be financed by empires with vision, who want to connect themselves to both the past and the future. Thus, when western Rome fell, Byzantium was able to preserve Roman as well as ancient Greek knowledge. And under the Abbasid caliphs it became apparent that a centralized state can provide good support for scientific endeavor – even today there is an appreciation that government is often the best benefactor for scientific experimentation and learning. With a longer view, it is possible to recognize the benefit of such sponsorship, beyond immediate profits.
The Middle Ages (800-1350)
The settling of the Germanic tribes represents the founding of European culture. Although Greek and Roman knowledge would impact culture somewhat during the medieval period, the era from AD 800-1300 represents an explosion of technological rather than scientific development.
I consider economic change to be the driver during the Middle Ages. Politically, Europe in the early Middle Ages was a place of local culture, local politics, and local loyalties. The single unifying factor was Christianity, making the Church in Rome the spiritual center. Missionaries from Rome spread ideas of the orthodoxy of the church. By the 9th century, new invaders again ensured local rather than broader interactions. Muslim Saracens launched raids on the southern coast of Europe, Slavic tribes exerted pressure from the east, and Vikings raided the northern coasts, stealing goods from churches and terrorizing the population. These new attacks would eventually, like the Germanic invasions of the western Roman empire, become migrations instead of raids. But during the 9th and 10th centuries, raids forced a local response.
Feudalism is the word most commonly used to describe the political set-up of the time. German chieftains had become kings, and they had distributed lands among their vassals as they fought for territory. One, Charlemagne, really became an emperor, a king over many states, and launched a cultural program to revive classical knowledge. Even after his empire disintegrated under his descendants, it became obvious that kings and emperors could not make an effective armed response to a coastal raid hundreds of miles away. Thus the lords controlling the land became local rulers, and often more powerful than their overlords.
The lands they controlled produced more wealth during this era than before, and the population grew. Towns emerged, likely at the points of trade exchanges and fairs. People in towns made their livings from trade and manufacturing, importing their food from the countryside. Agricultural surplus from the lord’s manor could be sold on the market. Many history books mention the population increase first, and talk about the pressure it caused on the land. If this is true, it provided a motivation for agricultural technologies that some call an Agricultural Revolution. While that makes sense, how did this population increase occur? It would have to be from an increase in the food supply, so it is possible that a warmer, drier climate was also helpful.
Three major innovations mark this revolution.
The first was the heavy plow. Southern Europe had sandy soil, where a lightweight, wheeled plow worked well. In northern Europe, the soil was heavier, with more clay. The development of a heavier plow, with a mouldboard to turn the heavy soil over, made deeper furrows. This meant that seed could be sown more deeply, out of reach of birds and animals. That increased yields. An iron plowshare not only added to the weight but made a deeper cut. Large plows could be pulled by oxen, which, though difficult to turn, were very strong.
The second was the horse collar. Before the 9th century, a horse pulled plows and wagons with a harness that strapped across its neck. This put pressure on its windpipe, making heavy loads impossible (the horse would pass out). The horse collar put the pressure on the horse’s shoulders, so it could pull more weight easily. Horses were easier to maneuver than oxen, and were more versatile. Although they ate more, they were faster and could increase production on a manor by 30%.
The third was three-field rotation. Previously, arable land was divided into two plots – one for spring planting and the other to lie fallow (empty). This was necessary so that the fallow field would recover – if the same crops are planted year after year, it exhausts the soil and yields go down. The innovation in the medieval period was to divide fields into three parts: 1/3 for grains, 1/3 for legumes, and 1/3 fallow. This increased the amount of land in production during the growing season from 50% to 66%. The increase in food production was enormous.
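The arithmetic behind the shift is worth spelling out, since the jump from 50% to 66% understates the gain a little. A minimal sketch:

```python
# Two-field vs. three-field rotation: what fraction of the arable land
# is actually in production each year, and how big the gain really is.

def productive_fraction(total_fields: int, fallow_fields: int) -> float:
    """Fraction of arable land in production in a given year."""
    return (total_fields - fallow_fields) / total_fields

two_field = productive_fraction(2, 1)    # 0.5  -> 50% in production
three_field = productive_fraction(3, 1)  # ~0.667 -> about 66% in production

# Relative gain in cultivated area, holding total land constant:
print(three_field / two_field - 1)  # -> about 0.33, a one-third increase
```

So a manor cultivated one-third more land each year without clearing a single new acre, which is why historians speak of an Agricultural Revolution.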
There is an English nursery rhyme that starts, “Oats, peas, beans and barley grow”. This was the other aspect of the innovation – the legumes planted provided more protein for the diet, which was helpful when few people could eat much expensive meat. Peas and beans are also nitrogen-fixing crops – they have nodules on their roots that collect and store nitrogen from the air. When plowed under after harvest, these crops do more than allow the soil to rest. They help it recover quickly.
Towns and Guild Production
Population increase and more food meant more customers in the towns. Medieval towns were run (and often founded) by gild merchants (merchant guilds). These were organizations of merchants who controlled prices and trade within the town. Only members of the guild could buy and sell goods in the town, and by the 11th century they comprised the town government. Large towns were often “chartered” – they received a renewable lease from the lord or king who owned the land they were on, and in return paid a fixed tax. As towns expanded and more money was made, these charters were a very good deal. They paid a pittance and had total self-government.
Goods within the town were manufactured by craftspeople, organized into crafts guilds. They controlled production and prices for their section of the town economy. There were guilds of bakers, wire-drawers, leather tanners, blacksmiths and wheelwrights. The cloth industry was the largest sector of the medieval economy. There were spinners’ guilds, weavers’ guilds, fullers’ guilds, dyers’ guilds, and finishers’ guilds.
The focus of my own research was fulling. Fulling is the process that follows the weaving of the spun wool into cloth. The idea is to use chemicals to treat the cloth while it’s being pounded. This felts the woolen fibers together, making a soft, waterproof cloth. In Roman times, fulling was done in large, shallow troughs with stones at the bottom. The woolen cloth was laid out in the trough, and the fullers would pee into the trough, adding water and alum or fuller’s earth. This provided the mix of caustic and fixing chemicals needed. Then they would walk the cloth, treading it on the stones. (If you know anyone named Walker, or Tucker, or Fuller, at least one of their ancestors was a fuller.) Afterwards, the cloth was stretched on tenter-hooks to dry, then sheared and finished.
The major industrial innovation of the Middle Ages related to (yes!) water power. Although there has been some recent evidence of water power for an occasional industrial process in ancient Rome, almost all ancient milling was related to grinding grain into flour. Fulling appears to be the first use of the medieval innovation, the cam. The cam protruded from the waterwheel shaft, and as the shaft turned, each cam pressed down the tail of a hammer, raising its head, then slipped free and let the hammer fall. This could automate the fulling process.
Earlier mills had used the rotary motion of the wheel to create rotary motion of millstones. The cam converted the rotary motion of the wheel into reciprocating motion.
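The cam's conversion of rotary into reciprocating motion is simple enough to simulate. Here is a minimal sketch (the cam count, lobe width, and function name are illustrative assumptions, not historical specifics): each lobe on the turning shaft lifts the hammer's tail, and when the lobe slips past, the hammer falls and strikes the cloth.

```python
def hammer_strikes(n_cams, revolutions=1, steps_per_rev=3600):
    """Count trip-hammer strikes as a cam-studded shaft rotates.

    Each cam lobe lifts the hammer while engaged; when the lobe
    rotates past, the hammer falls and delivers one strike.
    """
    lobe_width = 30.0  # degrees of rotation a lobe stays engaged (assumed)
    strikes = 0
    was_lifted = False
    for step in range(revolutions * steps_per_rev):
        angle = (step * 360.0 / steps_per_rev) % 360.0
        # The hammer is lifted if any lobe is currently engaged.
        lifted = any(
            (angle - k * 360.0 / n_cams) % 360.0 < lobe_width
            for k in range(n_cams)
        )
        if was_lifted and not lifted:  # lobe just slipped past: hammer falls
            strikes += 1
        was_lifted = lifted
    return strikes
```

With evenly spaced lobes, the hammer strikes once per lobe per revolution – continuous rotation from the waterwheel, hammering for free.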
While it was possible to build a fulling mill in town, the rivers in town tended to be slow and sluggish. Waterwheels usually had to be “undershot”; that is, pushed by the slow river at the bottom of the wheel. They weren’t that powerful. The great waterwheels of Rome had been in rural areas, running down hillsides with the water shooting over the top of the wheel to provide more velocity and power. But the fullers’ guild held power in the towns, not outside them.
Europe’s first entrepreneurs broke the rules and made arrangements for fulling mills to be built on lords’ lands in areas with falling water. The guilds responded by trying to prevent industrial espionage – they posted guards at the town gates to search anyone leaving with bundles of unfulled cloth. But eventually rural watermills would use the cam to run bellows for blacksmithing, stamps for minting coins, and other purposes. In addition to the cam, a crank could also be hooked up to a waterwheel, running sawmills and pumps.
Rural water power was the wave of the future, and threatened the power of the guilds. In lowland areas with wind but no fast-flowing water, windmills did the work.
Codices, Tablets, and Literacy
The Middle Ages also saw the invention of the codex, but very early – around the 4th century AD. I find it interesting that the book was invented at about the time of the lowest literacy rate in Europe, during the Germanic migrations and the fall of the western Roman empire.
The Cookery Scroll / Morgan Library & Museum, New York
When we think of ancient writings, like at the Library of Alexandria or in Hellenistic marketplaces, we have to think of scrolls, usually of papyrus. Parchment, made of animal skin, was also sometimes used. Since papyrus was only grown in Egypt, any interruption in trade with Egypt could cause a shortage. Scrolls are interesting pieces of technology – they have advantages and disadvantages. Writing was on only one side (the inside) of the roll, so the letters didn’t touch each other, and they were durable (any damage done was usually only to the outside). Scrolls weren’t always rolled – Julius Caesar was known to fold his scrolls accordion-style for easier access and storage.
Folding papyrus or parchment sheets into quarters or eighths and binding them at one edge creates a codex. A popular conception is that the Christian Church deliberately adopted the codex so that Christian texts would look different from Jewish and pagan scrolled works, but I haven’t been able to verify that. It is just as likely that the advantages of the codex made it popular. It was easier to store, could be closed by one person (big scrolls were hard to roll), and the folded pages created a spine on which a title could be written to help identify the book.
Left: A Roman-period stylus and wax tablet – notes were written with the pointed end, and the flat end was used to smooth the wax and wipe the writing out again. / Photo by Peter van der Sluijs, Wikimedia Commons
Right: The Codex Gigas, 13th century, Bohemia / Kungl. biblioteket
Gradually the codex replaced the scroll, and as it did so, items that weren’t converted were lost. We see this happen now – every time information storage changes format (records and cassettes to CDs, or movies to Blu-ray), some unique items are lost. Also lost may have been the linear nature of the scroll – its sheets are permanently joined in sequence, and it doesn’t require a bookmark. You are always where you left off.
In addition to the codex, the Roman period also saw the advent of tabulae, wax tablets written on with a stylus. A few of these could be tied together into a stack, and even sealed as official documents. Unlike scrolls or codices, tablets were reusable – you could warm and smooth the wax to erase everything. They could be designed to be lightweight (certain lightweight woods or bone were often preferred) and portable, with carrying cases. They could be used for drafting letters or other documents that would later be written in more permanent form, notebooks, sketchbooks, accounting books and diaries. Charlemagne apparently wore one around his neck in his unsuccessful efforts to learn how to read and write. (1)
These means of keeping records increased in importance throughout the Middle Ages. Early Germanic settlers were not literate, and their culture kept records verbally and visually. When land changed hands, for example, a ceremony was conducted in which the seller put a clod of earth into the buyer’s hand in front of witnesses. In fact, witnesses were of extreme importance in illiterate cultures, as they carried history. So did older people, who were valued for their age and memory. And lest we think that their memory was somehow faulty, the memories of people in the 4th century, say, were far better than ours.
In cultures without writing, people have prodigious memories. Medieval singers and troubadours who couldn’t write would move from town to town entertaining people with stories lasting two hours or more. Entire epics were memorized and sung or spoken. Often audiences could repeat an hour-long song they’d only heard once.
Literacy enables us to write things down and record them. But literacy also means things are lost. We have already noted Socrates’ concerns about writing via Plato – Socrates felt that writing entombed ideas in stone where they were harder to question. In the medieval period, as tribes and towns and people became literate, they lost their prodigious memories. Over the centuries they learned not to remember anymore, because they could always write it down. By the 20th century, I couldn’t go into a grocery store without a list if I were buying more than five things. And the hypertexted nature of the internet has made our memories even worse – the understanding that we can look it up on our phone means we need remember nothing at all.
Christ in Majesty from the Aberdeen Bestiary (folio 4v) / University of Aberdeen
Illuminated manuscripts were also communication technology, as well as artworks. Benedictine monks in particular supported literary activities, copying ancient and medieval texts by hand, usually into codices, beginning in the 10th century. The labor of scribes and illustrators was appreciated as work appropriate to Benedictines, who considered manual labor crucial to spirituality. Monks working in scriptoria copied works on religion, botany, herbs, and medicine. They made copies of Jerome’s Latin Vulgate Bible for use in churches. Monasteries could make money by selling the books, and by the 12th century they were needed for the new universities. And those with artistic talent illuminated the text with images. Brilliant pigments were made of ground stone and applied with egg whites, and gold was pounded into foil sheets or powder. The illuminations took up more and more of the page as time went on – by the 15th century, when printing was popularized, some books were almost all illuminations.
Some historians believe that herbal medicine advanced most in monasteries during the medieval period, and that medical texts were of particular importance. Greek texts on medicine were often copied, preserving that knowledge. Most monasteries had an herb garden, and some monks who were experts in herbal medicine. It was a Benedictine tenet to care for the sick. That said, some historians believe that most monks were illiterate (and scribes just servants to the monastery) and learned little of value from the texts they copied. The true repository of medical knowledge during the medieval period may actually have been the “wise women” who catered to the needs of sick villagers and townspeople every day. Even after medical schools were established (several at Italian universities like Salerno), when Arabic medicine was brought into the curriculum, most people trusted these lay healers.
Medical potions in an illuminated manuscript / Wikimedia Commons
There are a number of illuminations and drawings related to the practice of medicine that have survived, and one of the things I find most interesting is the pictures showing uroscopy. Some of the first writings translated from Arabic were medical treatises on diagnosis using the pulse (Galen had identified at least 27 types of pulse). The examination of a patient’s urine had been done since ancient times, and took on increased sophistication in the medieval period. The image on the left shows Constantine the African lecturing on the subject (13th century). The listeners seem to have brought flasks of urine for examination or as examples – the shape of the urine flask (called a jordan) changed little for hundreds of years. The illuminations and drawings showing urine for diagnosis were created in brilliant color. Diagnosis was often dependent on the color of the urine (books refer to “black wyne” color and “liver colored”, for example) (2) as well as the color of any sediment or stones. Consistency could also be better shown with color. The techniques developed for illuminating manuscripts were useful for anything requiring visual information. Art served science (or at least medical knowledge) in the development of such works.
The Gothic Cathedral
Façade of Reims Cathedral, France / Wikimedia Commons
Gothic architecture is a style that flourished in Europe; it evolved from Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting into the 16th century, Gothic architecture was known during the period as Opus Francigenum (“French work”), with the term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch, the ribbed vault (which evolved from the joint vaulting of Romanesque architecture) and the flying buttress. Gothic architecture is most familiar as the architecture of many of the great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls, guild halls, universities and, to a less prominent extent, private dwellings.
It is in the great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully, its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride. A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures of architectural distinction while many of the larger churches are considered priceless works of art and are listed with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is largely a study of cathedrals and churches.
A series of Gothic revivals began in mid-18th-century England, spread through 19th-century Europe and continued, largely for ecclesiastical and university structures, into the 20th century.
Cavalry, Cannon, and Warfare Technologies
It is commonly thought that the development of the stirrup, a simple device, holds huge importance in medieval warfare. Stirrups likely came into use by cavalry around the 8th century. They provide for faster mounting and dismounting, and enable greater stability on the horse, particularly when wielding weapons. Interestingly, for a while historians equated the stirrup with the rise of feudalism as a political system. While this view has fallen into disfavor (particularly because the Byzantines and Arabs adopted the stirrup around the same time, but not feudalism) the two do coincide. Stirrups made cavalry much more effective as a fighting force.
Medieval warfare focused on capturing and holding fortified outposts and the lands they defended. In 1095, Pope Urban II called the first international Crusade to the Holy Land. In addition to increasing his own power as pope (because he could call on soldiers from all nations), Urban was trying to redirect internal European violence caused by the cessation of localized warfare with invaders. By 1095 there were no more raids, and yet everywhere knights trained and lords fought with each other, having nothing else to do with the militarized system of feudalism. Urban saw a way to turn this violence to “good” by fighting the non-Christian forces who had occupied Jerusalem for hundreds of years. Crusades thus added another element to warfare, since with a Crusade long distances had to be covered with a great deal of equipment.
Siege warfare was the primary form of war in the medieval period. Whether on Crusade or fighting a local lord, territory could only be won by capturing the stronghold. Thus many technologies were focused on battering down walls and gates, and interfering with supplies of food and water. Pitched battles on battlefields could be profitable, however, if one captured a high-ranking member of the enemy and held him for ransom – most kings that engaged in combat were at one time captured and ransomed back. Some historians believed that full engagement was avoided when possible, as it was costly and difficult to hold territory through field combat. Castles were chosen strategically to be fortified as strongholds or left to fall. Sieges, by their nature, could be lengthy, continuing into bad-weather months and leading to disease among the troops.
Battle of Crécy. Image from a 15th-century illuminated manuscript of Jean Froissart’s Chronicles. / Bibliothèque nationale de France
On the battlefield, the most interesting aspect of late medieval warfare was the shift in archers’ equipment. The Battle of Crecy is famous for the defeat of the French crossbowmen at the hands (or arrows) of the English longbowmen. Archers were important because, if they could achieve accuracy at a distance, they could stop a cavalry charge. The crossbow (known as a ballista in ancient times) used a channel and bolt to achieve a range of about 380 yards. They were heavy and hard to tilt upward to get better range, and the iron rusted in the rain, but they were fairly easy to use and the bolts could pierce armor. They also didn’t need much strength, since a mechanical gear pulled back the string. The English bow, later called the longbow, was impervious to weather, because it used resin instead of iron and glue to hold it together, and was often made by the man who wielded it. Since they used no metal, they were lightweight and cheap for thousands of men to carry. In England those who didn’t hold much property were commanded by law to have and practice using a longbow. In the hands of a longbowman, many more arrows could be fired per minute than with the crossbow, raining down (cheaper) arrows on the enemy.
By the 16th century, however, the longbow itself was being challenged by firearms, in particular the arquebus or “hackbutt”.
But back to siege warfare, where cannon were being improved to knock down walls. Cannons were first used in the 14th century, and they were tricky devices – many of them blew up and killed the gunners. Cast iron, for example, would burst if it wasn’t made properly and contained any phosphorus. Cannons didn’t have much distance accuracy for the first couple of hundred years of their use, but the propelling of a spherical ball into a castle wall at point-blank range could be very effective in a siege.
The noise of both cannons and hackbutts also frightened horses, which could be useful.
Rediscovery of Greek Knowledge
Of great significance to the intellectual life of the Middle Ages, and ultimately of the Renaissance, was the fall of Toledo to Christian troops in 1085. This led to the discovery of the Arabic works preserving and adding to the works of ancient Greece, and their translation by Jewish scholars.
It would be some time before anyone other than educated abbots and wealthy aristocrats would make use of the new texts. The medieval Church was concerned, also, about the value of such pre-Christian texts in the community of Christendom. Such works could be dangerous if they focused on individual achievement over the goals of the Christian community, or worldly victories without the hand of God.
My own work in college and graduate school, briefly explained above, was focused on the conflict between merchant and crafts guilds over the fulling mill. I showed that this conflict led to the merchant entrepreneurs’ success in moving fulling to the countryside, where there were no guild restrictions and where falling water made the mills more efficient. At the time, a popular book was Jean Gimpel’s The Medieval Machine (3), and the fulling mill was clearly one of many machines that created what Gimpel called a medieval industrial revolution. Since then, more evidence has come to light indicating the possibility of such mills in earlier times. This new research lends even more credibility to the idea of an industrial continuum, with technological knowledge continually gained and lost, rather than “revolutions” at certain times. I’ll discuss this further when we get to the 19th century’s “industrial revolution”. At the time I was studying the Middle Ages, many people still thought it was a time of darkness and superstition, rather than intellectual and mechanical progress. As we continue to study history, we find more and more abilities, machines, ideas and achievements in past eras than we thought.
(1) Michelle P. Brown, “The Role of the Wax Tablet in Medieval Literacy: A Reconsideration in Light of a Recent Find from York” (1994).
(2) Peter Murray Jones, Medieval Medicine in Illuminated Manuscripts (The British Library, 1998).
(3) Jean Gimpel, The Medieval Machine: The Industrial Revolution of the Middle Ages (London: Gollancz, 1977).
Renaissance and Exploration (1350-1500)
The 15th and 16th centuries saw a change in thinking, influenced by the readings of classical and Arabic texts, and the flood of new ideas that always comes with expanding trade networks.
Optics and Perspective
The same historian who gave us the explanation of the recovery of classical texts in Toledo, James Burke, can also tell us about the optics that came into Europe and changed the technology of painting and building.
The teaching of optics was based on 11th century sources that had been “recovered” from the Arab world.
The Florentine mathematician Paolo Toscanelli would apply the knowledge to architecture, helping architects like Brunelleschi design buildings like Florence’s Duomo, featuring the first full dome since the ancient world. The new science of optics also allowed a study of the geometric ways in which to see the world, which is what led to the perspective drawing and painting for which the Renaissance is so famous. And here’s the more practical explanation of drawing from perspective, based on the work of Alberti, another great Renaissance architect:
I think we can consider the word “perspective” in two different ways here. Perspective in drawing and art gave us some wonderful artworks, buildings, and even towns:
Holy Trinity, fresco by Masaccio, c.1427 / Basilica of Santa Maria Novella
Basilica di Santo Spirito, designed by Filippo Brunelleschi, Antonio Manetti, Giovanni da Gaiole, and Salvi d’Andrea, 1447-1481 / Florence, Italy
The Ideal City, by Fra Carnevale, c.1480-1484. / Wikimedia Commons
But the revival of classical learning and Arab optics also created perspective in the sense of seeing things differently. When you can reproduce nature so accurately on a canvas, what’s to say you can’t control it further in real life? Certainly Renaissance towns were designed as much for social control as for military fortification and mathematical ratios. If we consider perspective as a technology, does the use of it change how people behave? The rational control of nature is, I think, a huge impetus to many technologies.
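The geometry behind perspective drawing can be sketched in a few lines. This is a minimal illustration (the function name and sample coordinates are my own, not from any Renaissance treatise): a point is projected onto the picture plane by similar triangles, so an object twice as far from the eye appears half as tall.

```python
def project(point3d, picture_plane_distance=1.0):
    """Central perspective projection by similar triangles:
    x' = x * d / z,  y' = y * d / z,
    where z is the point's depth from the eye and d is the distance
    from the eye to the picture plane.  Assumes z > 0."""
    x, y, z = point3d
    return (x * picture_plane_distance / z, y * picture_plane_distance / z)

# Two posts of equal height (y = 2), one twice as far away:
near_top = project((1.0, 2.0, 4.0))  # → (0.25, 0.5)
far_top = project((1.0, 2.0, 8.0))   # → (0.125, 0.25)
```

The division by depth is the whole trick: parallel lines receding from the viewer converge toward a vanishing point, exactly as in the frescoes and ideal-city paintings above.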
I also must say that I personally am very grateful for the study of Arab optics. Although spectacles (eyeglasses) were first known to be used in the late 13th century, the convex lenses were only for far-sighted people. I am nearsighted, so I would not have been able to have proper vision until the 16th century, when concave lenses were developed with the help of Arab knowledge.
Paracelsus and Alchemy

Monument to Paracelsus in Beratzhausen, Bavaria / Photo by Peter Bubenik, Wikimedia Commons
“In all things there is a poison, and there is nothing without a poison. It depends only upon the dose whether a poison is poison or not…”
In the history of technology, there is always some science. Given our understanding of chemistry as applied to medicine, Paracelsus becomes an important figure. Philippus Aureolus Theophrastus Bombastus von Hohenheim (1493-1541), known as Paracelsus, considered himself an alchemist. The son of a chemist and doctor, he attended multiple universities but found their teaching unsatisfactory. In a way, Paracelsus denied much of what the Renaissance was about. The recovery of classical texts had led to a move away from empiricism – in the Middle Ages there had been much experimentation with various herbs and compounds. Paracelsus’ understanding of substances led to the creation of laudanum, a distillation of opium that stopped pain, and other mineral-based medicines that in large doses were dangerous. Publication of his huge Great Surgery Book (left) in 1536 made him famous. He was the first to recommend mercury treatments for syphilis (more on this during the Enlightenment), and his treatments with antimony apparently cured Louis XIV more than a century later. His idea of “like cures like” may have provided the foundations for homeopathy in the 19th century, and he essentially invented chemotherapy. He was over 100 years ahead of his time in declaring as false Aristotle’s claim that all metals derived from mercury and sulfur. Although he was discredited during his lifetime and treated as a quack, and although a number of the cures he considered magical were dubious, his contribution to both medicine and chemistry is now acknowledged.
I find it interesting that even when people were trying to call Paracelsus a “medical chemist” or the “Luther of medicine”, he insisted that he was an alchemist. Alchemy has been considered by historians to be a forerunner of modern chemistry. One of its central goals, the transmutation of lead and other base metals into gold, has been so discredited as to give the whole field a bad name. And yet not only Paracelsus, but later scientists like Newton, were alchemists.
There were many branches of alchemy, going back to medieval times. Paracelsus and Newton were into “practical alchemy”, the active experimentation with substances with the goal of forming hypotheses. In other words, Renaissance alchemists were close to modern science in their methods. Prior to the late 16th century, alchemists had seen the supernatural as a cause of everything – they experimented with substances but believed the cause of the transformations they witnessed was magical or spiritual. As that approach was replaced by nature as a foundational cause, we approach the modern scientific method. In a sense, then, the “trial and error” methods so typical of technological development were a signal of maturation in the field of science.
The Caravel

I rarely put a single technology in its own section, but in this case I think it’s warranted. By the 15th century, the Mediterranean had been dominated by small, lateen-sailed ships for centuries. Lateen sails are triangular, and when attached to lateen rigging, can be maneuvered to catch light winds and allow mariners to steer the ship precisely. This was important in the Mediterranean Sea, where a ship was not out of sight of land for long, and catching light winds was important for speed. When full rigs of lateen sails were tried on ships in northern Europe, they were too difficult to use. Rough seas and high winds made them difficult to manage.
For this reason, northern European ships had featured heavy hulls and square sails. At least since the Vikings, square-rigged ships had worked best in heavy seas.
The caravel combined the square sails for oceanic conditions with lateen sails for maneuvering, rigged on a medium-weight and sized hull. A caravel could maneuver out of a small port, sail along a coastline, cross a large body of water in heavy seas, and maneuver into shallow harbors abroad. Two things happened to make it the invention of the era.
A typical round caravel or caravela de armada (of 1500-1505), with origin in the Portuguese model of caravela redonda or square-rigged caravel (Livro das Armadas). There were also some other European and Mediterranean types of ships, also called round caravels, during the turn of the century and in the 16th century. / From the Livro das Armadas, 1566, British Library
First, in 1415, Prince Henry of Portugal (known as The Navigator) was looking for a way to cut out the Arab middlemen in the Sahara, and get west African trade goods (particularly gold and salt) directly from the source. At the time, the shape and size of Africa was not known beyond the Mediterranean coast and a little ways down the west coast. The map they had was just a 15th century version of Ptolemy’s map:
Ptolemy’s world map, reconstituted from Ptolemy’s Geography (circa 150) in the 15th century, indicating “Sinae” (China) at the extreme right, beyond the island of “Taprobane” (Ceylon, oversized) and the “Aurea Chersonesus” (Southeast Asian peninsula). Credited to Francesco di Antonio del Chierico / From Ptolemy’s Geography, Wikimedia Commons
Notice how Africa just sort of dissolves into a southern land mass. By sailing down the west coast of Africa in a series of missions, Henry was able to map the coastline and track the currents and winds. His discoveries led to further explorations. In 1488, Bartolomeu Dias reached the Cape of Good Hope. By 1498, Portuguese mariner Vasco da Gama had rounded the tip of Africa and become the first European to travel by sea to India. The caravel got him there.
The second event was that in 1453, the Ottoman Empire took the city of Constantinople. The Crusades, which had continued in increasingly embarrassing forms from 1095 to 1404, had ended. The Ottoman Turks had been expanding from Asia Minor since the 13th century. With their conquest they took over the trade routes of both Arab and Christian traders operating in the cross-over territory between Europe and Asia. The heart of that cross-over was the city of Constantinople (previously Byzantium when it was Greek, now Istanbul and Muslim).
What I was taught in school was that the Ottoman Turks forbade Christians from trading in the city, which cut off the supply of luxury goods to Europe. Actually, they began taxing Christian traders at the same rate as others, which increased the price of their goods.
By the 1480s, Genoese navigator Christopher Columbus had developed an idea for sailing west to get to Asia. The idea was the result of a great deal of study and some seriously faulty conclusions. At the time, there were arguments about the correct circumference of the earth, reports of driftwood and bones washing up on the coast of Iberia, and the popularity of Marco Polo’s book about his adventures in Asia. From his study of ancient texts and contemporary phenomena, Columbus came to believe that the earth was significantly smaller, and the Asian continent much larger, than they actually are. Thus he believed that the distance from Europe to Japan, sailing directly west, was about 3,000 miles.
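To see how far off that 3,000-mile figure was, we can compute the great-circle distance from the Canary Islands (Columbus’s jumping-off point) to Japan. This is a modern back-of-the-envelope sketch, not anything Columbus or his critics computed: the coordinates are rough approximations, and the shortest path over the globe ignores that no ship could actually sail it.

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2, circumference=24901.0):
    """Great-circle distance via the spherical law of cosines,
    scaled by the earth's circumference in statute miles (modern value)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Central angle between the two points on a sphere.
    central = math.acos(
        math.sin(p1) * math.sin(p2)
        + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    )
    return central / (2 * math.pi) * circumference

# Canary Islands (approx. 28.1 N, 15.4 W) to Japan (approx. 35.7 N, 139.7 E):
actual = great_circle_miles(28.1, -15.4, 35.7, 139.7)
columbus_estimate = 3000.0
```

The result comes out to well over twice Columbus’s estimate – the court scholars who turned him down had the better numbers.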
He introduced this idea to the king of Portugal, hoping for funding, and was turned down on the advice of the king’s scholars. Columbus set out to get funding from other Renaissance princes, and was repeatedly turned down, because the court scholars had more accurate calculations of the size of the earth and of Asia. These scholars were convinced that if Columbus set off he would soon reach “the ends of the earth”, the point (at about 3,000 miles) at which a ship runs out of food and water and cannot return. Funding finally was given by Queen Isabel of Spain, who had just united her country and was closing out the Reconquista by expelling Muslims and Jews from the country.
By the way, if you’ve heard stuff about Columbus believing the world was round when others said it was flat, and that he was right and everyone else was wrong, that’s…well, wrong. Luckily, he did find land at about 3,000 miles, just as his crew was about to mutiny. And he did it in a caravel.
The Columbian Exchange: Labor
On a macro level, two global regions that had been isolated from each other collided with great force in the New World. Historian Alfred Crosby has called the long-term transfer of plants, animals and diseases between these regions the “Columbian Exchange”. This map gives an idea:
Hernán Cortés of Spain arrived with troops in 1519 and conquered the Aztec empire by 1521. He had a lot of help – there were many tribes living under Aztec domination who joined him. But disease played a major role in the victory. European diseases (you can see the Mexicans suffering from smallpox in the lower left) killed off amazing numbers of Native Americans, because they had no immunity built up from long-term contact. This still happens today when an isolated tribe is discovered. Cortés considered their die-off as proof of God’s support for the conquistadores. Many of those who didn’t die ran away, using their knowledge of the terrain to escape. The result was that the Spanish, and later the Portuguese in Brazil, didn’t have enough labor to do the mining and farming necessary to make their new landholdings pay.
But in Africa, there was a source of labor. Since the 13th century, west African kingdoms had been expanding due to advanced political organization and domination of trans-Saharan trade routes. As these kingdoms expanded, they captured prisoners and traded them away. This both provided wealth for wars and got rid of populations that might rise up and cause trouble. These slaves were traded across the Sahara, and many had ended up in east Africa. Such slavery was not very abusive (despite the difficult trip), and slaves tended to be treated well in east Africa as part of the thriving economy there. High-class prisoners fetched higher prices and were often able to buy freedom for themselves or at least their children; lower-class prisoners were sold to do the work appropriate to their skills, be it silversmithing or farming. Muslim slave-traders dominated the trans-Saharan slave trade.
Europeans began arriving at west African ports and requesting slaves. It was handy for west African kingdoms, expanding inland and capturing prisoners of war, to trade these prisoners on the coast instead of to Muslim trans-Saharan traders. The character of the trade changed, and entrepreneurs on the west coast began journeying inland deliberately to capture people for what was becoming a trans-Atlantic slave trade.
The character of the trans-Atlantic slave trade was far more brutal than the trans-Saharan trade. Most interior Africans had never seen the ocean, much less a ship. They were piled in like animals, with no respect for class or language. Princes were chained next to peasants of different tribes, and all were treated as chattel. All would be put to mining or agricultural labor regardless of skill. Many threw themselves overboard, either not sure that the journey would ever end (it must have seemed like they had gone to hell) or unable to tolerate the treatment. But enough survived, and with their own disease immunity from Africa were able to reproduce. In Latin America, the children born to them were usually considered free, and intermarried despite various laws discouraging it. In North America, dominated by the English and French by the 17th century, they became slaves in perpetuity, with their children inheriting their status.
Slavery, of course, was not new. Slavery was the foundation of much prosperity during the eras of classical Greece and Rome. Beginning in the 16th century, the Barbary slave trade captured Europeans and sold them in north Africa. The pirates who traded in slaves profited from weak Ottoman control of the north African coast, and a flood of Moorish refugees from the Reconquista in Spain (1492). Slaves, both black and white, had been an aspect of European and African life for centuries.
Left: Young man wearing a slave collar, Dutch, 17th century / Wikimedia Commons
Right: Slave figures on the tomb of Doge Giovanni Pesaro, designed by Baldassare Longhena and sculpted by Melchior Barthel, Venice (17th century) / Wikimedia Commons
I consider slavery not only an inhumane practice, but also, as an all-too-practical way of providing labor, a hindrance to technological development. With an easily importable work force, and economic structures cleverly set up for its perpetuation, there was little need for technological innovation in Latin America or in the colonial south of what would become the United States. I suppose one could note the instruments of slave punishment as technology, but in general the regions of innovation would be those that did not practice slavery, a very different situation from that of ancient Greece.
Leonardo da Vinci sketch of pulleys / British Library, London
Leonardo da Vinci gets his own section because he so purely understood technology as the embodiment of practical use. In addition to being an expert painter, Leonardo designed many objects in his notebooks. Many of the inventions he drew were never built in his lifetime, though some have since been built by others as models. Some of the drawings were studies of mechanical possibility, such as the pulleys shown above. He designed a helicopter, a diving suit, a glider, and all sorts of siege machinery. He created elaborate sets for theatrical performances, drew water-lifting devices and improved upon them, and was a true “Renaissance man”.
Some of his drawings and inventions were used at the time, or came to be important technologies. One of the most significant is his design for a canal lock, still efficiently used on many canals today. We’ll talk more about canals in the Enlightenment lecture.
Leonardo’s Miter Lock:
This unit reaches beyond Europe, and the impact of its technologies reaches beyond the economy or the life of ordinary people. Was the caravel the cause of trans-Atlantic slavery? Was it necessary for Europeans to kill off so many people in the Americas? Does a lack of scientific (or, in the case of Columbus, factual) information actually lead to more innovation? If you can design great things before anyone can build them, are you contributing to human knowledge? It is appropriate to the Renaissance itself that a small, practical technology can call into question larger issues of ethics, knowledge and science.
Reformations and Scientific Revolution (1500-1650)
The purpose of my lectures is not to list and explain pieces of engineering, but this one may look more like such a list than I intend! What we have in this era is a broadening of scientific knowledge going hand-in-hand with the development of instrumentation that could measure with greater accuracy and influence science even more. At the same time, theories were being proposed that shifted people’s view of the universe. Science, technology, and even art seem more on the same wavelength in this era than ever before.
Tab. XXI c. Die Buchdruckerei. (Beschreibung lt. Quelle), by Daniel Chodowiecki, c.1770 / Wikimedia Commons
The background to this era of technology consists of religious change and the political and social upheaval that followed.
The rise of Protestantism was strongly influenced by technology. During the Middle Ages and into the early Renaissance, the Roman Catholic Church had a monopoly on Christianity. The Church was a large political and economic structure as well as a religious institution. Much land throughout Europe was owned by the Church, and bishops were important political figures. The famines and Black Death of the early 14th century had weakened the Church due to its lack of response, as had the Avignon Papacy. This had begun in 1309, when the pope decided that Rome had become too dangerous and moved the papal court to Avignon. By the time the papacy returned to Rome in 1377, Avignon had elected its own pope and there were two. There simply cannot be two popes – the pope is supposed to be the Vicar of Christ on Earth. The situation was not resolved until 1414.
The result was the rise of mystical religion and commentaries on the Church, even as the Church itself became more worldly and corrupt in the 1500s. Popes defended their thrones militarily and wealthy Italian families conspired to win the papacy. Wealth came into the Church through new methods, such as the sale of indulgences, which forgave people for sin in return for money. Humanist scholars, particularly those in northern Europe at a distance from Rome, wrote open critiques of Church practices: ignorant clergy, wealthy popes and bishops, corruption. And these critiques could be printed on the new technology.
Prior to Gutenberg’s movable type press, hand-written manuscripts and woodblock printing were the methods of distributing information on paper. Handwriting created unique works, and woodblocks gradually wore down. What Gutenberg, a metal-smith, created was movable type: each letter cast from molten metal poured into a mold. One could produce many of these letters, then arrange them in frames for printing a page. When Martin Luther wrote the 95 Theses that started the Protestant Reformation, they were printed on a press and distributed. They were written out of Luther’s disgust at the sale of indulgence papers, which were also printed on a press. The ideas of Protestantism spread because of the press. Protestants believed that the way to get closer to God, to assure oneself of God’s grace, was to read the Bible. Luther himself translated the Bible into German from the original languages, and new editions in various languages were being printed.
Metals and Milling
De re metallica title page, by Georg Bauer, 1556 / Wikimedia Commons
We know quite a bit about metallurgy and milling technology during this time thanks to “Agricola”, the pen-name of Georg Bauer, who published De re metallica in 1556. Agricola, for his part, borrowed a lot of material from Vanoccio Biringuccio, who had published De la pirotechnia in 1540, detailing many metallurgical processes. Mining was so important, and made so much money, that water power was implemented early to pump water out of mines. To work metal successfully, a source of water was needed, not only for running waterwheels but to wash the product at various stages. Charcoal was also needed in great supply; since it was made from wood, a source of wood was important too, which helps explain the superiority of eastern European metallurgical practices. Charcoal imparted fewer impurities to metal than any other fuel.
Metals were smelted – heated to remove impurities and extract substances like bismuth and antimony sulfide – and different kinds of rocks were added to fuse rock away from the minerals. Copper was purified by removing sulfur and iron in a series of complex processes. A process known as liquation was developed to extract amounts of silver and gold from copper; it melted the copper and lead together, then cooled the mixture so that the gold and silver stuck to the lead. The lead was then treated by cupellation, using a blast of air to oxidize the lead away. The last step was to use nitric acid to dissolve the silver away from the gold. Although cupellation was an old process, there are no descriptions of liquation until the 15th century. These techniques worked so well they didn’t change much until the 19th century.
Copper was needed for decorative items and utensils, but its main application was in alloys, especially the making of bronze. Bronze (copper + tin) was strong and could be cast into shapes; it had been a popular medium for millennia, but making the process more efficient could produce more for less. Pewter, an alloy of tin and lead, was used to make cheaper plates and cups, and could be easily cast in iron molds. At first, printing letters were made of cast pewter, but if the pewter was too soft it would wear down. This need led technicians to harden pewter with other substances (such as the bismuth they’d removed as an impurity). By the 17th century type was made of very hard lead alloys, which are still used on printing presses today.
Iron was also subject to changes during this era. Because iron had been so common for so many centuries, it’s actually harder to find primary sources about it than about more exotic “non-ferrous” metals. The innovation of the 15th century was the blast furnace, which replaced the slow, tedious method of reducing iron ore to wrought iron by removing as many impurities as possible. In the blast furnace, the iron remains in contact with the hot charcoal, which allows carbon to enter the iron, helping separate it from other impurities. The product could then be shaped by the hammers of a blacksmith. Viscous cast iron could be poured into molds for casting cannon. Again the waterwheel helped, its cams pushing the bellows that kept the furnaces hot.
The Cosmological Shift
Illustration by Galileo of one of his refractors. / Wikimedia Commons
In the 16th century, Polish canon Nicholas Copernicus developed an alternative to the cosmology of Ptolemy, which had been cemented by Aristotelian philosophy. The universal view that had held for centuries featured a motionless earth at the center, with planets and stars on spheres, arranged in concentric circles moving outward from the earth. This Ptolemaic-Aristotelian system had fit with the Catholic Church’s interpretation of the Bible, which stated that the earth was still. But Copernicus was a mathematician, and he was bothered by the complexity of a system that had to add a sphere every time a new object was discovered in the sky or a planet exhibited retrograde motion. To simplify this system of hundreds of spheres, he put the sun at the center. Mathematically it made things simpler. Knowing the Church would object, Copernicus did not allow his On the Revolutions of the Heavenly Bodies to be published until the very end of his life.
It had an influence, however, in its mathematical elegance, and could be used for calculations so long as everyone understood it was theoretical. Galileo’s great achievement, or his great crime (depending on your point of view), was to use a telescope to actually look at the sky closely and see that Copernicus was correct. Telescopes at the time were primarily used for shipping, both on ships to see distances and on shore to see when ships were coming in with goods to trade. They had about 3X magnification. Galileo already had a history of working with mechanics to disprove Aristotle. He did experiments to show that lighter and heavier objects fell at the same rate, a direct contradiction of Aristotelian physics. Even though he was professor of mathematics at the University of Padua, he was fascinated by technology. He invented a pump and several types of balances or scales. In 1609 he developed a telescope with 20X magnification, and it was with this instrument that he saw mountains and bumps on the supposedly spherical planets, and moons orbiting Jupiter.
Unable to consider what he saw as “theoretical”, Galileo published his discoveries as fact. As a result, he was forced to recant them by the Catholic Church’s investigative arm, the Inquisition. Found guilty of heresy, he spent the last years of his life under house arrest. But his work had influence beyond his circumstances, and the experimental method was a huge influence. During the same era, other experimentalists included Francis Bacon, who developed a philosophy based on empiricism, the use of the five senses to determine knowledge.
The Mechanical Arts
Robert Boyle’s air pump / From New Experiments … Touching the Spring of the Air …, 1660, Wikimedia Commons
It is during this “Early Modern” era that “mechanical arts” come into their own intellectually. The change of ideas was evident in Galileo’s work, which leaned toward the experimental rather than philosophical. It was expanded by Bacon, and later solidified by Isaac Newton.
Contemporary with Isaac Newton were experimenters like Robert Boyle, who created the air pump. When air was pumped out of the device, it created a vacuum. Plants, insects, and animals could be placed in the glass sphere, and the effects of a lack of air could be studied. This led to a better understanding of air and gasses. But what’s interesting is that instead of starting with a theory and proceeding to build a device to prove the theory, Boyle began with the experiment itself. Although some historians of science see in this the evolution of the scientific method, I also see it as representing the new reliance on mechanics and empiricism as legitimate foundations of knowledge. It also killed quite a few little animals.
This division of science and technology is a tension throughout the class, and between historians of science and historians of technology. Among the practical inventions of this era were instruments designed for scientific experimentation, navigation, clockmaking, and cutting cylindrical forms. Brass (an alloy of copper and zinc, mixed with charcoal) could be used by skilled artisans to create sophisticated scientific instruments. At the end of the 16th century, medieval instruments such as astrolabes (which calculated the position of stars), quadrants, and sundials were declining in use. New techniques were developed to respond to the needs of an age of dividing up land as manorialism declined. By attaching part of an astrolabe to a tripod, and adding a compass, you can develop a surveying instrument. Navigational instruments were adapted – in the 1590s Captain John Davis adapted a cross-staff (used to measure the altitude of the sun to help find latitude) into a back-staff (which did the same with shadows, so you didn’t have to stare into the sun).
Computers (instruments for making computations) were also adapted. Rulers (also called scales) were invented for various uses. Galileo himself used an improved version of a hinged and graduated ruler called the “sector”, and found it so useful he hired a full-time instrument-maker, Marcantonio Mazzoleni, to produce it for commercial sale.
Mathematician John Napier discovered logarithms in 1614 as a means to simplify large calculations by using tables. Such tables could be carved onto portable “rules” for easy use. Gunter’s scale (invented 1620) used them to create a table for navigators, on a board a couple of feet long and about 1.5 inches wide. In 1622 William Oughtred made a sliding rule, so that the numbers being compared could be physically lined up with each other on the logarithmic scale.
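The trick that Napier’s tables and Oughtred’s sliding rule exploited can be shown in a few lines of Python (a modern sketch with an illustrative function name, not period notation): multiplying two numbers reduces to adding their logarithms, which is exactly what lining up two logarithmic scales does physically.

```python
import math

def slide_rule_multiply(a, b):
    """Multiply by adding logarithms, as a slide rule adds lengths.

    On a slide rule, each number sits at a distance proportional to
    its logarithm; sliding one scale along the other adds the two
    distances, giving log(a) + log(b) = log(a * b).
    """
    combined_length = math.log10(a) + math.log10(b)  # add the two "lengths"
    return 10 ** combined_length                     # read the product off the scale

print(slide_rule_multiply(24, 365))  # ≈ 8760
```

The navigator’s advantage was that adding lengths or table entries is far quicker, and less error-prone, than long multiplication by hand.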
A Pascaline, signed by Pascal in 1652 / Musée des arts et métiers
The slide rule is just one example of the inventiveness of the era. In 1645, mathematician Blaise Pascal presented his “pascaline” (shown above), a calculating mechanism made of brass that could add and subtract. Map-makers were happy to purchase the new “hodometers”, which combined a compass with a count of cartwheel revolutions to measure mileage. Techniques for turning wood on a lathe developed in leaps and bounds, to help create wooden screws for presses and for lathe machines themselves. Metal-cutting on a lathe developed as a separate technique – the 1550 Nuremberg printing press had copper screws, and later they were made of iron. Threaded screws were starting to be made at the beginning of the 17th century.
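Because the Pascaline’s wheels could only turn in one direction, Pascal handled subtraction by adding the nines’ complement of the number being subtracted. A minimal sketch in Python (the digit width and function names are illustrative, not Pascal’s terminology):

```python
def nines_complement(n, digits=6):
    """Return the nines' complement of n at the given width: each
    digit d becomes 9 - d (readable on the Pascaline's second row
    of digits)."""
    return (10 ** digits - 1) - n

def pascaline_subtract(a, b, digits=6):
    """Compute a - b using only addition, as the machine did:
    add the nines' complement of b plus 1, then drop the carry
    out of the highest wheel."""
    total = a + nines_complement(b, digits) + 1
    return total % (10 ** digits)  # the top wheel's overflow is discarded

print(pascaline_subtract(852, 347))  # → 505
```

The same one-way wheels thus served for both operations, which is what made the brass mechanism practical.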
Glass became more important for scientific instrumentation. Obviously this was true for making lenses, both for telescopes and microscopes. And thermometers (yes, Galileo invented one of these too) were made of glass and used to measure body temperature for the first time. Galileo’s student Torricelli invented the barometer in 1643. The glass-blowers’ talents were needed to make these devices, and the center of glass methods was Venice, in particular the island of Murano, where glass-blowers were restricted to prevent fires in Venice proper and espionage. Murano’s glass techniques were the envy of Europe, and Venice wanted them kept secret. But it was Ippolito Francini of Florence who modified the lathe so it could grind lenses more precisely, and it was from him that Galileo bought the glass for his telescopes. To create better telescopes, such achievements had to be combined with better mirrors and better sheet brass for the housings.
Spring clocks / Wikimedia Commons
Clocks, of course, are also instruments. Spring-driven clocks, which had been around since the 15th century, improved in the 17th. The springs were another product of the advancements in metallurgy, and were improved upon with the invention of the hair-spring by Christiaan Huygens (who had likely invented the pendulum clock a while before) and Robert Hooke (an inventor of the microscope). The hair-spring (shown left) helped control the balance wheel and thus the movement of the clock’s hands. It provided enough stability to enable the making of smaller timepieces. The pocket watch would be popular throughout the 18th and 19th centuries, as the pace of life quickened with industrialization. Larger clocks were works of art beginning in the 17th century. The one on the right featured an eight-day spring movement, and was made of oak, brass, pewter and bronze.
And in many ways, the spring-driven clock changed everything. Here’s a brief video about how and why:
It became obvious during this era that scientists, artisans and technicians were dependent on each other. But there is some question as to which of the many causes were actually causes, and which were effects. And what was the role of the Protestant Reformation, other than causing (or resulting from) a printing boom? Some historians believe that the questioning of the Catholic Church in general, once it broke away in the form of Protestantism, “freed” scientists to do what they needed to do without reference to Catholic dogma. What happened to Galileo supports this point of view. But I find it hard to swallow, since so many great ideas came from scholars who were Catholic. It makes me wonder whether English Protestants like Newton and Boyle have been given too much of the spotlight.
Enlightenment and Revolution
What was the Enlightenment? It’s very difficult to date, but historians like to separate it from the so-called “Scientific Revolution”. That revolution seems to focus on measurement and experimentation, and the ultimate combination of this approach with reason and deduction to create what we call the “scientific method”. By the beginning of the 18th century, the scientific method of combining empiricism with rationalism was already adopted among intellectuals. The Enlightenment occurred when thinkers began applying the lessons of science, as they saw them, to humanity. Technologies became the tools for that application.
Reason and Passion
An easy way to get at the Enlightenment is to see it as a tussle between two elements: the rational and the emotional. This had been a special concern of the ancient Greeks, who had believed that reason and passion needed to be balanced to achieve health, happiness and prosperity. Their society had in many ways been based on moderation – you can see it in the balanced art and architecture, and in their idea of medicine as restoring balance. 18th century philosophers such as Voltaire and Rousseau revived this struggle: each can be used to represent a side of the argument. Jean-Jacques Rousseau believed in the human connection to nature, and valued intuition and emotion over reason. His work The Social Contract (1762) created the idea that government should be based on the general will of the people being governed, which would be a founding idea for liberal revolution. His work Emile (also 1762) argued for a natural, guided form of education for children, rather than the learning by rote prized by 17th century philosophers like John Locke.
Voltaire favored the rational mind, particularly in opposition to superstition and human cruelty. For example, he went to court to defend the honor of a Protestant man, Jean Calas, who had been executed for allegedly murdering his son, who had become a Roman Catholic. The Calas case, overturned by a court in 1765 and restoring the family name, made Voltaire a champion of those innocent but accused because of rumor and prejudice. He was already famous for his wit and sophistication, and his ridicule of stupidity and ignorance in works like Candide. Like Voltaire, Denis Diderot believed that humankind’s great accomplishments were the achievements of the rational mind, applied to the real world. Diderot worked with others on the Encyclopédie, a 28-volume work that purported to display all of human knowledge. For our purposes, the Encyclopedia contains an extraordinary amount of information, including many woodcut prints, explaining the technology of the day.
Images from Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers, by Denis Diderot, 1762 / Louvre Museum, Paris
Left: Needle making
Second from left: Printing press
Second from right: Felt hat making
The other way to see the focus on reason over passion is to look at art. Compare, for example:
Left: The Swing, by Jean-Honoré Fragonard, 1767 / The Wallace Collection, British Museum, London
Right: The Oath of the Horatii, by Jacques-Louis David, 1784 / Louvre Museum, Paris
Here we are contrasting the fluffy, fun Rococo style of Fragonard with the neo-classical, almost photographic, geometric, rational style of David’s work. Neo-classicism, the deliberate revival of classical Greek and Roman themes, was the artistic representation of this Age of Reason. Reason was seen almost like a goddess, a savior who would stop centuries of superstition and political decisions based on tradition instead of rational decision-making.
Agriculture and Transport
The last of the Encyclopedia illustrations above, the plow, was part of a new Agricultural Revolution that caused great change. Although the revolution was primarily technological, there was also another factor: the influx of new foods from foreign countries, especially the Americas. The Columbian Exchange had brought many new foods, and some were adopted as “people food” (as opposed to those you feed animals) more quickly than others. Potatoes and tomatoes (which are related to each other) were difficult to get people to eat at first because the plant and leaves are poisonous. And potatoes exposed to the sun can also be toxic. Frederick the Great of Prussia, realizing the crop’s potential, adopted it as a royal plant and encouraged its cultivation. Potatoes are truly remarkable plants – they can grow in poor, sandy soil with little nutrition, and still provide large yields. They have Vitamin C and good carbohydrates, and can also be fed to animals. (If you don’t believe me, buy an organic potato, cut it into four pieces, and plant all four. Invite me over for potato salad in 6 months.)
Other foods and crops came over that fueled (I think literally) intellectual growth: coffee, tea and chocolate. Coffee arrived in the 17th century from the Middle East, leading to coffee houses for drink and spirited conversation. Tea came from China and India. Both coffee and tea, of course, have caffeine. Chocolate was a New World crop; in the Americas it was made into a bitter drink. The main chemical in chocolate is theobromine, named for the cacao genus Theobroma (“food of the gods”). Mixed with Canary Islands sugar and northern European milk, chocolate (as a drink) became extremely popular in the 18th century. At one time there were more chocolate houses than coffee houses in Europe. Tobacco, another New World crop whose use developed in the 17th century, became even more popular in the 18th. The most popular form was snuff.
From The New Horse-Hoeing Husbandry, by Jethro Tull, 1731 / British Library, London
At the same time, technologies were introducing change in agricultural methods in Europe. Jethro Tull was a scholar who became a farmer, and in 1701 invented the seed drill for planting seeds evenly in a furrow. It was several decades before agricultural improvement became popular, but by then he was at the heart of it, publishing The New Horse Hoeing Husbandry in 1731. Changes in agriculture, including the movement to enclose common fields, created agricultural expansion and surpluses. When combined with the adoption of new foods, particularly the potato, the health of common people improved, and population expanded. In fact, it expanded so much as to create migrations of people to the cities, looking for work.
The produce of this era, and all the commercial goods from the colonies, needed to be transported. In England, this meant a boom in road and canal building, accompanied by investment ups and downs. There were literal ups and downs on the canals as well: to create smooth waterways with towpaths for the animals pulling the barges, a canal often had to run higher or lower than the nearby navigable rivers. Barges needed to transfer easily from canals to rivers, so canal locks were built, a really cool technology that is still used in pretty much this form. Check out the animation:
This animation shows the basic workings of a canal lock. The lock allowed freight barges to move from canal to river and back. Locks were a means to raise or lower a boat between waterways that were at different heights from each other. A lock is basically a short section of canal with two watertight gates. The water between the gates starts at the same level as the incoming boat. The gate closest to the boat is opened, the boat floats in, and the gate is closed. Water is then let in or drained out through sluices to bring the level of water up or down to the level on the other side of the lock. Then the second gate is opened and the boat continues on its journey on the other waterway. It’s a boat elevator. / Miller Design Works
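The gate-and-sluice sequence of a lock can also be written out as a short Python sketch (the function and step names are invented for illustration, not engineering software):

```python
def pass_through_lock(boat_level, target_level):
    """List the steps that move a boat through a canal lock.

    The chamber's water is simply set to the needed level at each
    stage; a real lock fills and drains by gravity through sluices.
    """
    chamber = boat_level  # equalize the chamber with the boat's waterway
    steps = [f"fill or drain chamber to entry level {chamber}"]
    steps.append("open entry gate, boat floats in, close gate")
    chamber = target_level  # sluices bring the water to the far side's level
    steps.append(f"fill or drain chamber to exit level {chamber}")
    steps.append("open exit gate, boat floats out")
    return steps

for step in pass_through_lock(boat_level=2, target_level=5):
    print("-", step)
```

Whatever the difference in heights, the same four steps repeat, which is why the same basic design has survived for centuries.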
The Patent System
Among the many inventions of the Enlightenment, one of the most important was the patent system itself. The idea of a patent, or monopoly on a particular invention for a certain number of years, may have originated with Venetian glass workers in the 15th century. But most patent law developed in the 18th and 19th centuries – French patent law, for example, during the early years of the French Revolution – and patents are enshrined in the American Constitution. Patents are designed to encourage innovation and, through publication, the dissemination of technical information to encourage even more innovation.
Oliver Evans is an example of an inventor on the cusp of the patent system. He was American, inventing and trying to file patents before the US Patent Law of 1790. Among his innovations was a water-driven flour mill that used an automated line to produce flour without it ever being touched by human hands:
Design patent by Oliver Evans for the water-driven flour mill, 1790 / Library of Congress
Evans also created a high-pressure steam engine, and many other things, but wasn’t able to get the patents he needed to protect his inventions. After the law of 1790, he spent a lot of time in court because people stole his ideas. Industrial espionage may seem like a modern thing, but it isn’t. In that same year, 1790, English-born Samuel Slater arrived in Rhode Island from England. In England he had visited the automated water-powered textile mills, whose owners had all visitors searched to make sure no one took drawings or plans of the mills out of the country. Slater held the plans to an English textile mill in his head, scribbled notes on the boat, and built the first American water-powered textile mill in Rhode Island.
Here I need to jump back in chronology a bit to introduce you to Gabriello Fallopio, Italian scientist of the 16th century (1523-1562). He was an anatomist at the University of Pisa and the University of Padua, and did a lot of dissections. He named a number of body parts, including the cochlea, the clitoris, and (obviously) the fallopian tubes, which connect the ovaries to the uterus and which he discovered. Together with Vesalius, he created a new Renaissance medicine that turned away from the classical models of Galen. But the other thing he did was conduct experiments in using condoms to prevent syphilis.
Syphilis was a deadly scourge in the 16th century, and would continue to deform, madden and kill its victims well into the 18th. Fallopio’s experiments proved that condoms could prevent transmission. By the 18th century, they could be obtained from apothecaries and ship captains (trying to shield their crews from infection with the “French pox” or the “Spanish pox” – English terms, of course – when they docked in port). Mercury had been used as a cure for two centuries, but usually in an inhaled form that caused severe side effects. Austrian physician Gerhard van Swieten instead used an oral suspension (later called Liquor Swietenii) which, though still debilitating during therapy, was more effective. If you look at the literature and pop culture references to the “mercury cure” you will find dismissive statements about the use of mercury and how poisonous it was. Primary sources don’t necessarily agree – Casanova took the cure several times for syphilis, and was cured each time until he contracted it again in future encounters.
Mercury cures were also available by injection. Injections had been around since the 17th century when blood circulation was being understood, and both Christopher Wren and Robert Boyle had conducted experiments. In the 18th century an injection of an emetic helped a patient vomit up an object blocking his throat. But the real age of injections would have to wait till the 19th century invention of the calibrated syringe, which meant you didn’t need to cut open a vein.
Inoculations, however, could be delivered just under the skin. The idea that one could prevent full-blown disease by introducing a small amount of that same disease had been around for centuries. Credit is given to Edward Jenner for creating the first smallpox inoculation in 1796. But I think the credit should go to Lady Mary Wortley Montagu, who survived smallpox and was heavily scarred by it. In 1717 she described in letters the Turkish practice of inoculating for smallpox, and had her children inoculated. She tried to popularize the practice in England, but the inoculation was so strong that some children became scarred from the shot. The mortality rate for smallpox in the 18th century was 40%, and it was Jenner who discovered what many country people knew – that milkmaids didn’t get smallpox. It turns out that cattle carried cowpox, and many people in continual contact with cows caught that disease, which was very mild. Jenner made his inoculation out of cowpox, which worked much better.
Obstetrics became professionalized in the 18th century. Here’s a video explaining:
Technological “advances” such as the forceps may have been influential in the attacks on midwifery and the ultimate reliance on surgeons for even simple births. Since many midwives rejected such tools as being harmful to the baby (which they often were), these technologies became the tools of surgeons, and helped 18th century doctors (many of whom were far less educated about births than midwives) take over. But there is no doubt that the models, like those shown here from the Museo Galileo, would be extremely helpful in teaching them the anatomical characteristics of difficult births.
Newspaper and Broadsheet Presses
Although the first newspapers were published in the 17th century, they became popular in the 18th during a time of intellectual expansion and political disruptions. The first broadsheets, or broadsides, were large single printed sheets sold or distributed to share ideas. Scholars wanting feedback on ideas, rabble rousers trying to rouse the rabble, advertisers marketing their products, or police looking for witnesses, all used broadsheets.
You can see here an ad for a pedometer, carried in the pocket, invented by a William Fraser in London at the beginning of the 18th century:
Ad for a pocket pedometer, by William Fraser, 18th century / British Library
And a report on capturing suspects in a murder (1780):
From The Morning Herald and Daily Adviser, London, 1780 / Library of Congress
Later, steam-run printing presses would make these early newspapers (and advertisements) seem like a minor thing, but they were mass media despite their small runs. And they were crucially important to the American Revolution (see next topic).
With the coming of the broadsheets, and ultimately the mechanization of printing and cheaper publications by the mid-19th century, one question is how this new medium influenced society. In his 1962 book, The Gutenberg Galaxy: The Making of Typographic Man, media analyst Marshall McLuhan claimed that people changed radically as a result of the printing press, beginning in the 16th century. He saw four phases of the human change in response to media: the oral age (before writing), the hand-writing age (manuscripts), the printing age, and the electric age (and this was long before the internet). The evolution of reading printed text, he believed, created cognitive changes in the reader, which then influenced social and political changes. Reading is, after all, a solitary activity if you’re doing it silently. McLuhan suggests that more people reading print meant more people seeing themselves as the center of their own perspective – just you and the material, if you like. Is it possible that the development of concepts like individual rights went hand in hand with printing technology?
Political Revolutions: Printing Press and Guillotine
Benjamin Franklin’s printing press, c.1720 / The National Museum of American History
In 1763, the Seven Years War ended, and France ceded her territories to Great Britain in both the Americas and India. The American theatre of this war had been called the French and Indian War. It had been an expensive war, particularly the defense of the British colonies in North America. The British government initiated taxation efforts to gain money from the American colonies, and the American merchants, who had been running their own affairs for decades, rebelled. The printing press was used throughout the colonies to garner support for the “patriots” – Benjamin Franklin’s press was particularly busy printing circulars and posters. Combined with John Locke’s ideas of political liberalism (the idea that government’s job is to protect life, liberty and property), the American Revolution led to the complete break from Britain: the war was effectively won in 1781, and independence was formalized by the Treaty of Paris in 1783.
The success of the ideas of individual liberty and the purpose of government, supported by Thomas Jefferson and other liberal thinkers, encouraged the liberals of France to launch a French Revolution, which increasingly became more radical. Liberal revolutionaries took over the government and imprisoned the king. Then more radical members were voted in, and they turned against the middle class liberals, trying to create total equality. Mass executions took place.
Left: A replica of the Halifax Gibbet on its original site, 2008, with St Mary’s Catholic church, Gibbet Street, in the background. / Photo by Paul Glazzard, Wikimedia Commons
Right: Execution of Henri Languille by Guillotine in 1905 / Wikimedia Commons
Technology can get beyond intentions, and be used for other purposes. We’ll see that in future eras, certainly – perhaps as technology gets more complex, it becomes harder to control its effects. But the Age of Reason had another, more subtle impact. The dependence on Reason could be seen as faulty – the Reign of Terror in France seems to show that total reason can go too far, undermining the very ideas of civilization it vows to protect. Reason also lacks emotion, the lens through which many people, then as now, make decisions and see the world. Technology can seem cold, devoid of humanity, mechanized in a way we may not want people, or society, to be. These concerns were first seen in the 18th century.
The Industrial Revolution
Most historians refer to this era as “The Industrial Revolution” and mark its dates around 1750-1850.
But to do so ignores something I call “industrial continuum”. As we have seen, during ancient times, Europeans were using water to power large waterwheels and mills to grind grain for flour. During the Middle Ages, Europeans harnessed water power for industry, creating machine mills that pounded cloth, minted coins, operated bellows for smelting iron, sawed logs, and pumped water. These technologies were also used in the 18th century.
Seen from a perspective of industrial continuum, the changes in the 18th and 19th century were not revolutionary. Rather they were steps in the evolution of industrial production.
So what was new? First, the source of power would move from water to steam. Second, the social impact would be greater than in any previous industrial advance. Although industrialization would spread quickly, there is general agreement that the change began in England, which had a unique combination of huge labor pools resulting from the agricultural revolution, vast reserves of coal and iron, easy water transportation, and a thriving cloth trade.
In a sense, the boom began with a single invention related to the production of England’s primary export: woollen cloth. Since the 16th century, cloth had been spun using the spinning wheel, then woven using a broadloom. A spinning wheel could be worked by one person, but a broadloom took three: one to work the pedals to move the heddles up and down for weaving, and tall people at either end to pass the shuttle back and forth.
In an effort to speed up the weaving process, in the 1730s John Kay invented the “flying shuttle” loom:
Flying shuttle showing metal capped ends, wheels, and a pirn of weft thread / Grosse Scheidegg, Switzerland
The weaver could sit at the machine, working the pedals with his/her feet, and use the handle to activate springs that popped the shuttle back and forth. This eliminated the tall helpers, and sped up the rate of production by about 300%.
Loom with Kay’s flying shuttle / Wikimedia Commons
That caused a problem, a technological “bottleneck”, because the amount of yarn from spinning wheels could not keep up with the demand from Kay looms. This led to further inventions:
Great Wheel Spinning (as in the Middle Ages)
Spinning by Hand (as in the Middle Ages)
Model of spinning jenny, Photo by Markus Schweiß, Museum of Early Industrialization, Wuppertal, Germany
The spinning machine arranged its spindles vertically, and could spin fine threads. / Wikimedia Commons
Broadloom weaving and spinning wheel (as in the 16th century) / Wikimedia Commons
Hand Loom weaving (one person narrow)
Power loom / Wikimedia Commons
Ultimately, both spinning machines and the new looms developed further and could be connected to water power. The power looms and power spinning machines automated cloth production. They made possible factories, where unskilled women and children could tie up threads and keep machines going. Skilled spinners and weavers were no longer necessary, and the new machine-made cloth was less expensive and undercut the price of hand-made fabric.
The new technologies, applied to linen and cotton manufacturing, led to mass production.
Britain before and after the Industrial Revolution with the coal industry / Wikimedia Commons
England had the advantage in developing alternatives to water power. First, it had a problem. Water mills had to be located in the hills to get the fall of water necessary to run efficient water wheels. This meant they froze in winter, and were far from ports and markets.
They had another problem, this one in the iron industry. Iron smelting is a touchy business, requiring a pure fuel. For centuries, this fuel had been wood, burned down to make clean-burning charcoal. But by the 18th century, most of the usable forests were gone or in private hands. The British navy took most of the trees tall enough for ship masts, and entrepreneurs had to look elsewhere.
England is an island built on coal. But coal has impurities, such as sulfur, that would make iron unusable. In 1709, Abraham Darby perfected the creation of a purified version of coal, called coke, that burned cleanly and could be used to smelt iron. As a result, iron production increased on a vast scale. Iron could then be used to make many things, such as boilers and pumps.
The steam engine utilized all of England’s resources: coal, iron, and water. Boilers were built of iron, and coal was used to heat the water in the boiler to make steam. Inventors like Thomas Newcomen and James Watt perfected the techniques of building steam engines, basing their designs on formulas for steam distillation derived from whiskey distilleries in Scotland.
Steam power was revolutionary. Unlike water power, it could be used anywhere, and did not rely on the weather. Hooked up to spinning machines and power looms, it could create huge factories with huge output. Hooked up to water pumps (its first use), it could be used to pump water out of coal mines to make lower reaches of coal more accessible. Hooked to a wheeled carriage, it could make a locomotive that moved goods along iron tracks.
Even today, steam power is basic. A nuclear power plant uses the splitting of the atom to create heat to boil water in a boiler, to create steam, to turn a turbine (which looks like a water wheel).
Social Impact of Industrialization
Each of the classes was affected by industrialization. The aristocracy could become involved, but many considered industry to be akin to trade, and thus beneath their interest. The landed classes could not control industrialization, and they had little capital to invest unless they’d made some through investments in trade. As aristocratic power declined, industry could provide an opportunity for younger sons who could not inherit the family lands. Forward-thinking nobles put their sons in industry, or married them into industrial families, and made a lot of money.
The middle class was the place for industrialists. In England, many had been religious dissenters, excluded from the money-making opportunities of religious conformers. Inventors, factory owners, managers, former craftspeople could find opportunity in industry. There was no need for great wealth or education, and the economic concepts of liberalism provided a rule-free environment where people could succeed based only on the usefulness of their ideas. Industrialization provided the middle class with great wealth, and they tried to turn this wealth into respectability.
The working class was newly created, because for the first time they did not work for themselves. These were the wage-laborers, frequently unskilled, including women and children. Instead of engaging in cottage industry as a supplement to agricultural labor, or running shops in a town, these workers were employed in the factories that sprang up all over Europe.
The working conditions were appalling. Workers were poorly paid and their jobs were insecure (a result of the excess population, which meant they could be easily replaced). Uprooted from traditional town and village life, workers were disconnected from family control and kinship ties. There were no safety standards. Factories were crowded and poorly lighted. There was no heat in winter, since the coal used for heat would make smoke that would tinge the product. In textile mills, windows were kept closed in summer to preserve the humidity that kept the threads from breaking. The machines were unbearably loud, and many children went deaf. Limbs were lost in machines, and the injured workers were fired and replaced by others. Children were beaten to make them work. Farming folk accustomed to rising and sleeping by the sun, and doing less work in winter, now worked shifts by the clock. The work of tending machines was monotonous, the final product of one’s labor often never seen.
In this cartoon, children are being beaten by the factory foreman for letting go of a spindle. He says, “Why did you let go the spindle, you young woman?”. She replies “My fingers were so cold I could not hold it…” Such cartoons represented the types of conditions leading to reforms, such as those instituted by the Sadler Committee, which heard the testimony of industrial workers. / Wikimedia Commons
Indeed, to some observers, such as Karl Marx, it seemed like people were becoming machines in order to work the machines. The technologies created during the industrial revolution did not consider the humanity of the worker, but rather focused on efficiency of production. I am not saying that previous eras were better in this regard – I am sure that treading cloth in a urine-filled vat during the Roman empire, or lugging boulders to build English roads in the 18th century, was not considerate of the worker. But I think when you combine the extensive hours of poorly paid repetitive work with the absence of any kind of individual approach to the work, you have a new kind of human misery.
Disease and Water
The death rate in the new industrial towns was high. These places, where neighborhoods had sprung up overnight, had only slop-bucket sanitation and no clean water source. The worst cities were in England, where the industrialization had begun. England was a laboratory of new urbanization. The fact that London was the stinkiest and filthiest town in Europe was actually helpful, because Members of Parliament had only to go out in the street to see (or smell) the problem. Although Parliament did very little to help at the beginning of the 19th century, two things occurred to push legislation along. The first was the cholera epidemic of 1831, which may have come from India but hit both Europe and America. Cholera was thought to be a “miasmic” disease, spread through the vapor stench of bad air coming off decaying substances (of which there were plenty in the crowded streets).
However, the science of statistics was used by a scientist named William Farr to demonstrate that the neighborhoods with improper or illegal wells for water, and which lay at the lowest elevations nearest the Thames River, suffered the most cholera.
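Farr’s kind of demonstration can be sketched with a small computation. The figures below are invented purely for illustration (they are not Farr’s actual tables, which tabulated cholera deaths against district elevation above the Thames), but they show how a simple correlation coefficient makes the elevation pattern visible:

```python
# Hypothetical district data, loosely modeled on the pattern Farr observed:
# (elevation in feet above the Thames, cholera deaths per 10,000 residents).
# These numbers are invented for illustration only.
districts = [
    (0, 120), (10, 65), (30, 35), (50, 20), (90, 10),
]

def pearson(pairs):
    """Pearson correlation coefficient between the two columns."""
    n = len(pairs)
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A strongly negative coefficient: higher districts, fewer deaths.
print(round(pearson(districts), 2))
```

The point is not the arithmetic itself but the method: tabulated numbers, compared systematically, revealed a pattern that miasma theory could not explain.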
Then Edwin Chadwick’s Report on the Sanitary Condition of the Labouring Population of Great Britain was published in 1842, promoting an engineering solution to a social and health problem.
In 1853, a doctor named John Snow found a cesspit leaking sewage into a water well in an area where there was a sudden attack of cholera. When the pit was sealed and the water filtered, the cholera disappeared. By 1855, many members of the medical community were lobbying Parliament for sand filters on all water intakes from the Thames. Political will was lacking, though, until the summer of 1858, when the smell of the Thames was so bad it filled the Parliament itself with foul air. Then all of a sudden there seemed to be money for sand filters and the chlorine Snow recommended to disinfect city water supplies.
In this clip from James Burke’s “The Day the Universe Changed” series, William Farr uses statistics to identify illegal wells as a source of contamination.
Steven Johnson on the Ghost Map
The Spread of Industrialization
Although much of the “research and development” was done in England, industrial methods were adopted throughout Europe and the United States. By 1900, specializations appeared. England set the trends in canals, railways and textile machines, while Germany developed an extensive chemical industry. The United States’ specialty was communications, possibly because the country was so large, and patent and investment systems so highly developed. During the 1830s and 40s, the telegraph made possible instant messaging through electricity. Samuel Morse of the U.S. used sets of dots (short taps) and dashes (longer taps) to communicate across the wires. The first message was sent in 1844.
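Morse’s dot-and-dash scheme can be sketched in a few lines of code. This is a minimal illustration, not period-accurate telegraphy: the table below uses International Morse (a later standardization rather than Morse’s original American code) and covers only the letters needed for the example, which encodes the famous first message of 1844.

```python
# A partial International Morse table: dots (short taps) and dashes
# (longer taps) for each letter; "/" marks a word break.
MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "S": "...", "T": "-", "U": "..-", "W": ".--",
    " ": "/",
}

def encode(text: str) -> str:
    """Encode text as Morse, separating symbols with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

# The first official telegraph message, sent in 1844:
print(encode("WHAT HATH GOD WROUGHT"))
```

An operator tapping this out transmits nothing but timed pulses of electricity; the meaning lives entirely in the shared code table at each end of the wire.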
Companies competed to string lines and establish telegraph stations. Instant communication changed the character of business because orders and transit information could be relayed immediately. Communications that had previously taken days, or even weeks, to cross the sea could now happen at once. You could tell your family you were in jail, or claim the last spot in chemistry class, or tell your girl you would be home for Christmas. And you could do it right now. Although the telephone gets more fanfare, it was really just an improvement on the telegraph, using voice.
Another refinement, quite literally, was in the production of steel. To turn iron into steel, various impurities and excess carbon must be removed. Because the process was intricate, making steel was usually done in small batches, for items like swords and razor blades. But in the late 1840s and 1850s, Englishman Henry Bessemer and American William Kelly independently developed and patented furnaces that forced air through the molten iron, burning off the carbon and turning impurities into removable slag. Further refinements followed, with Bessemer achieving financial success and widespread adoption of his invention. And, it was exciting to watch:
According to Karl Marx, industrialization caused a major historical shift in economic relationships. During classical times, the two classes consisted of master and slave. During the Middle Ages, they were lord and serf. In both cases, the relationship was symbiotic. Each class had a responsibility to the other class. Masters had to feed and house slaves, and slaves had to work. Lords had to provide land and protection for serfs, and serfs had to farm and hand over a percentage of the crop. But in the creation of the two new classes caused by industrialization (bourgeoisie and proletariat), the symbiotic relationship was replaced by what Marx called “callous ‘cash payment’”. In other words, instead of mutual responsibility, the proletarian worked, and in return received from the bourgeoisie a paycheck, and that was all. No job protection, no safety standards, no responsibility – just a paycheck. That’s another way that workers became part of the machine, and like a machine.
The Victorian Age
The Victorian Age was one in which technologies came together into systems. Although it is often known as an era of fascinating gadgets and inventions, the true innovation was the creation of these systems, including lighting, sewage, transportation and communications.
Preservation Technologies: Photographs, Phonographs, and Typewriters
Two Seated Sicilian Youths, by Wilhelm von Gloeden, c.1900 / Victoria and Albert Museum, London
I call photographs and phonographs “preservation technology”, because they help preserve live performances and images of people, places and objects. But they are also reproductive technologies, like the printing press. Some historians say that their invention is on par with the invention of writing, in its influence on how our society records itself.
Photography had a long development. Some take it back to the camera obscura as used by Alhazen (11th century) and Leonardo da Vinci (16th century) to better visualize objects. But these were reflecting, not recording, devices. In the 18th century, experiments had been done exposing various salts to light and chemicals, which would lead to the idea of film. The first photographs appear to have been taken in France in 1826, with the popularization of pictures coming with the Daguerreotype in the 1830s. As with many innovations, scores of innovators followed, which advanced the development of the device. By the 1850s, glass plates were being used to capture images for development on paper, and by the 1880s George Eastman was creating cameras that the middle class could afford.
A Holmes stereoscope, the most popular form of 19th century stereoscope / Wikimedia Commons
A related invention of this era was the stereoscope, which provided 3-dimensional viewing, first of drawings and then photographs. In 1838 in England, Sir Charles Wheatstone invented the stereoscope; then Dr. Oliver Wendell Holmes re-invented the device in 1859, a side hobby to his medical practice and work as an essayist. The Holmes stereoscope made photographs popular entertainment, with families and guests sitting around the parlor looking at double photographs they’d purchased. For some, stereoscope pictures were their first sight of foreign lands and exotic people – their view of the world.
Edison cylinder phonograph, c.1899 / Wikimedia Commons
Edison, in the United States, patented the first cylinder phonograph in the 1870s. It recorded on tin foil, which wasn’t very durable. Other inventors made improvements. Recordings were made on wax cylinders, which could be played maybe 30 times before the wax wore down; the cylinder could then be shaved smooth and recut. Edison’s wax cylinder machine is shown here at right. Commercial cylinders were marketed around 1889, but the machine could also record. This feature would gradually be lost, as phonographs came to be marketed only for playback of pre-recorded disks, and the medium became harder (plastic, vinyl) instead of malleable like wax.
The ability to record would not be back for ordinary consumers until the reel-to-reel recorders of the 1950s.
The Hansen Writing Ball, the first typewriter sold commercially, invented by Rasmus Malling-Hansen around 1865, first manufactured in 1870. This was a model from 1878. / Wikimedia Commons
Several prototypes of typewriters had been around since the 17th century, but the first commercial product was a ball typewriter created by Danish inventor Rasmus Malling-Hansen (shown above). It had all the parts of later typewriters, including the keys, ribbon, and platen. Production died with its inventor, however, and other models became popular.
By the 1870s, typewriters featured QWERTY keyboards, named after the first six letters on the top row. There is a popular story that the QWERTY layout was designed to slow typists down. This may well have been true, since very quick typing jammed the keys (it still does).
The Sholes and Glidden typewriter as produced by E. Remington and Sons / From The Expert Typist, by Clarence Smith
The most fun typewriter was the Sholes-Glidden of 1874, which sold well. I like it because it was mounted like a sewing machine, with a treadle to work the carriage return. I find it interesting that later typewriters replaced this with a manual carriage return, which required you to stop typing and push the platen to the left with a handle. The Sholes-Glidden didn’t allow the operator to see the resulting type while typing, and I have no statistics on typing speed, but we won’t see another “automatic” carriage return until the electric typewriter, invented in the 1930s but not popular with consumers until the late 1940s.
European nations (especially Spain, Portugal, Germany, the Netherlands, France and Britain) had created colonial empires beginning in the 16th century. During the 17th century, their governments had created monopolies so that particular companies, such as the Dutch East India Company, controlled trade in certain areas of the globe. These companies operated under government charter, and created what we call “colonialism”, exploiting control of markets in raw materials from the colonies, and European exports to the colonies. The United States, as we know, was a colony that broke away, in 1776. Haiti did the same from France. But most colonies were firmly under control of the companies that founded them.
Quinine was used to treat malaria. The bottle indicates that the pharmacy that dispensed the quinine was in Petersburg, Virginia. / University of Virginia
Most of the colonies in Africa and Asia were along the coast – Europeans did not control any internal tropical areas at the beginning of the 19th century. According to Headrick, there were two reasons why: the first was topography, and the second was disease. Sailing ships required a deep draught in the water for their pointed hulls, and wind for sails. They were also made of wood. In the tropics, the rivers were shallow, there was no wind, and the wood rotted. Diseases included killers like malaria and yellow fever, to which many local people had developed immunity. Several expeditions into Africa, like the Mungo Park expedition in 1805, had ended with everyone dead. This was why trade with Africa, South America and Asia (including India) had meant trade with coastal peoples, who obtained the goods (including slaves) from the interior of the continents.
In South America, Jesuits had discovered a palliative for malaria in the 17th century when they noted that the natives chewed on the bark of the cinchona tree. French chemists distilled the substance quinine from the cinchona bark, and it was tried as a cure for malaria, with mixed results. It took a while to realize that quinine was a prophylactic rather than a treatment; it worked if taken in advance and continually while in the tropics. In other words, it was a tonic rather than a cure.
In India, the British created quinine “tonic water”, and mixed it with gin, creating the gin and tonic. This is the classic drink of the British Raj – a cocktail as technology.
The Schooner-Match of the Royal Thames Yacht Club: The Glorianna Coming in. First at Gravesend. / Wikimedia Commons
Industrialization took care of the other problem, as metal steamboats were created with flat bottoms that could go up tropical rivers. They were propelled by steam power, fueled by coal, so they didn’t rely on the winds and could explore, and exploit, the interior of Africa and Asia.
By the 19th century, a shift had occurred, and most company control was changing to direct governmental control. One example is the Sepoy Rebellion in 1857, which occurred because of a new technology, the Enfield Rifle. This rifle was a vast improvement over earlier weapons. Instead of having to handle powder and shot separately, the Enfield used a paper cartridge containing both. The cartridges were greased to prevent moisture seeping in. To load the muzzle-loading rifle, the soldier bit off the end of the greased cartridge, poured the powder down the barrel, and rammed the cartridge home. Indian sepoys were instructed how to load the weapon, and many refused. The cartridges were greased with pork and beef fat. The religious beliefs of the Hindus forbade any contact with beef, and the Muslims were similarly prohibited from contact with pork. Sepoys who refused to load their weapons were stripped of their rank and dishonorably discharged. These sepoys returned home, where they created a mutiny against the British officers, who had undermined their culture in many ways. The Great Mutiny (Indian historians call it the Great Rebellion) raged in northern India, and quite a few British people, military and civilian, were killed.
After the mutiny was put down, the British government took over direct rule of India from the Company. More civil servants and their families came to India. And a segregation of British and Indian people took place that had never existed before: white-only clubs and restaurants, white-only railroad cars. Indians became second-class citizens in their own country, providing the labor for the jewel of the empire.
Patent drawing for R.J. Gatling’s “battery gun”, 9 May 1865. / Library of Congress
In the last half of the 19th century, much British policy revolved around India. The Suez Canal (opened 1869) was built to provide easier sea access to India, but it was owned by France and Egypt. In 1875, Disraeli purchased enough shares of the canal from Egypt’s bankrupt ruler to control the Suez. In 1877, Disraeli also got Parliament to confer on Queen Victoria the title “Empress of India”.
Africa, on the other hand, became the place for no-holds-barred entrepreneurship, the so-called “Scramble for Africa”. Britain, France and Germany vied with each other for control. European technologies made possible not only exploration, mapping, and conquest, but also efficient exploitation of resources. African tribes often rebelled, requiring military measures. Since European governments had relatively little interest in directing armies in far-off places, local entrepreneurs often controlled national troops deployed overseas. Colonial wars became more common. Some of these wars, like the Anglo-Zulu War of 1879, were started by military leaders without government approval. The heavy field artillery proved less of an advantage in wars against natives than Europeans expected, even against people fighting with spears, but in most cases overwhelming force was ultimately effective. Railroads were built as investment opportunities, and they made money transporting goods and troops – imperial adventures made money for European investors back home. It also provided an opportunity to test new weapons. The Gatling machine gun (patent picture left) was seen as too brutal for use inside Europe, but acceptable for using against native peoples in Africa and Asia.
The revolver invented by Elisha Collier in 1818 / Wikimedia Commons
The Enfield Rifle’s paper cartridge was innovative, but it was not the only innovation in firearms. Another was created by Elisha Collier in 1818: a “revolver” – a gun that held bullets in a revolving casing and could thus be fired repeatedly without reloading. In the US, Samuel Colt improved on the design during the 1830s to create a lightweight weapon on the same principle. By then, Eli Whitney (who had patented the cotton gin in 1794) had developed a factory system of production, where less skilled labor could be used to manufacture parts and assemble muskets. Colt would adopt this system.
But there was no demand for the weapon until the U.S. became involved in the Mexican War of expansion in 1846. As demand soared, Colt was able to build a large factory using new methods of production. The idea was for true interchangeability of parts, an old objective of Eli Whitney himself. Mass production of parts that could be assembled and replaced had been attempted before, but there had always been many parts that had to be produced by hand, using skilled labor.
The goal of interchangeable parts in weaponry had not been achieved before because the level of precision needed was too great. Machine milling to precise specifications, even for unusually shaped parts, became possible as Colt and others filed patents for competing machinery.
Revolver interchangeable parts / Wikimedia Commons
Early versions of the machine gun (which, if you think about it, is just a big, automatic revolver) would be used, along with many Colt revolvers and advanced rifles, during the American Civil War. There is no question that war, while it may not cause the invention of more effective weaponry, certainly makes its spread and adaptation more likely.
Hygiene and Medicine
Sanitation in hospitals improved by the end of the 19th century. During the Crimean War, the nurse every child learns about, Florence Nightingale, pointed out the unnecessary deaths caused by a lack of basic hygiene in military camps. Upon her return to England, she took up the cause of hospital and sanitation reform, and her Report on Rural Hygiene (1894) brought attention to conditions in the countryside. She worked with statisticians like William Farr to create scientific reports in order to convince politicians that public health issues were important: separating the sick from the well in poorhouses, providing proper medical care paid for by the government, and viewing children as a special population.
A satirical print from 1830 depicting Humphry Davy in a Royal Institution lecture administering a dose of laughing gas to a woman. / British Library, London
Surgical techniques were also improved during this era. New methods were taught in the surgery theatres of medical schools, where students could watch an operation. The greatest technological advance was anesthesia. Nitrous oxide was accidentally inhaled by a student of chemistry, and was found to produce giddiness and an insensitivity to pain. Although appropriate for “light” anesthesia (such as in dentistry), it was mostly used at parties by elites (yet another Victorian vice to overcome inhibitions caused by social repression). But the development of ether made internal surgery possible. Ether put the patient deeply to sleep, so work in the abdominal cavity could take place without pain. It was a forgiving substance: a bit too little or a bit too much did no harm. But ether smelled terrible, and was highly flammable. The ultimate innovation was chloroform. It also put the patient deeply to sleep, but its smell was tolerable and it was not flammable. It was, however, difficult to determine the correct dosage, which led to anesthesia becoming a medical specialty. The use of chloroform was popularized by Queen Victoria, who for her eighth child wanted something to help her along. It was administered to her by the same Dr. John Snow who had helped solve the cholera mystery.
Surgeries were also made safer by new antiseptic procedures, like Joseph Lister’s practice of spraying the open body cavity with antiseptic during surgery. Doctors still operated in street clothes (I think of this a lot watching people in scrubs going out to lunch), and hand washing was difficult to implement since many did not see a reason for it. The 1850s were a turning point, with the development of basic ideas of germ theory in Germany and in France with Louis Pasteur, whose pasteurization process helped the beverage industry. The new ideas about germs, which were invisible, frightened people – there was much concern about catching diseases from other people and inanimate objects. By 1863, Pasteur’s research showed that deaths from “childbed fever” were related to the streptococcus microbe, transferred from one mother to another on the hands of practitioners. His work was ridiculed, as was his insistence that instruments be sterilized and hands scrubbed before touching a patient. In 1869 he would prove the connection in the maternity wards at the Hôtel-Dieu, but even then, there was confusion between the normal strains of strep found in all women’s vaginas and those considered infective and deadly.
‘Patent electric-medical machine’, Davis and Kidder, Britain, 1870-1900. / Science Museum-Science and Society Picture Library, London
“Alternative” cures were popular during this time, and many depended on people’s positive belief that technology could solve health problems. Electricity could be used to shock the whole patient or parts of the body, which could shift symptoms. One of the most popular uses of electricity was for what we could call a vibrator, used to bring women to orgasm (hysterical paroxysm) to cure hysteria. Hysteria was a disorder of the womb, resulting in anxiety and behavioral disturbances (likely a result of sexual and social repression). The treatments began in the 1880s, and no patent for household use was issued until 1902, so the vibrators were medical devices to be used in the doctor’s office. Other doctors performed female circumcision to prevent sexual pleasure, which was also thought to cure hysteria. Other “cures” included patent medicines, which are seriously maligned these days. Yes, they contained alcohol, and opiates, and cocaine. Yes, they caused addiction, and were used to drug crying children to sleep. But even today, we use derivatives of opium (morphine, Vicodin, Oxycontin) and cocaine (lidocaine, xylocaine) because they work. Alcohol was a perfectly serviceable cough suppressant in cough syrups until removed due to the current fear of alcoholism.
Sewage and water systems laid a foundation for what we now call infrastructure. Infrastructure is the technology of physical structures that supports a city or area. The term was first used in Europe in the 1870s and 1880s, though not very often. Systems and networks of roads, railroads, water pipes, telegraphs, and such were just being seen to be infrastructure. One of the most important of these elements was lighting.
Street lighting had been around since the 17th century in the major cities of Europe. It was financed by the increasingly centralized governments to fight crime and other unacceptable urban activity. Ironically, people stayed out later when the streets were lit, so the goal may not have been achieved. In areas where there were no public street lights, the methods went back to the 16th century: people were asked to hang a lighted lamp outside their doors at night. Gas lighting was developed in London at the beginning of the 19th century, using gas distilled from coal. Other countries used oil-gas or kerosene.
In the 1870s and 80s, electric light (including arc lights and incandescents) began to appear in major cities. Incandescent lamps could be operated in series, with isolation transformers to keep the circuit alive if a single lamp blew out. By 1885, incandescent lamps had come down in price enough that the middle class could have their homes wired for lighting. In 1882, Thomas Edison created the first steam-powered electricity generating station in London for street lighting, and he built another in New York. Both used direct current, so the generators had to be close to the lamps, making big cities lightable but not smaller places and rural areas.
The biggest technological battle of the age was between Edison and Nikola Tesla, who developed alternating current at about the same time. Edison was an extremely good businessman and used the U.S. patent system extensively to gain monopolies for his inventions. He dismissed Tesla’s idea of alternating current, which turned out in the long run to be the most efficient way of providing electrical power over long distances. (Sir Joseph Swan in England, by the way, invented the incandescent light bulb at the same time as Edison.)
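Why did alternating current win over long distances? A back-of-envelope calculation shows the physics: wire loss grows as the square of the current, and for a fixed delivered power the current shrinks as the voltage rises. Transformers made high voltage easy with AC and impractical with 1880s DC. The load and line-resistance numbers below are illustrative assumptions, not historical figures:

```python
# Sketch of transmission losses: wire loss is I^2 * R, and for a
# fixed delivered power P the current is I = P / V, so the lost
# fraction works out to P * R / V^2. Numbers below are assumed
# for illustration only.

def line_loss_fraction(power_w, volts, wire_resistance_ohms):
    """Fraction of the delivered power dissipated in the line."""
    current = power_w / volts
    return (current ** 2) * wire_resistance_ohms / power_w

P = 10_000   # a 10 kW neighborhood load (assumed)
R = 0.5      # resistance of a long copper run (assumed)

dc = line_loss_fraction(P, 110, R)      # Edison-style 110 V DC
ac = line_loss_fraction(P, 11_000, R)   # AC stepped up 100x by a transformer

print(f"110 V DC: {dc:.1%} of the power lost in the wires")
print(f"11 kV AC: {ac:.4%} lost")
```

With these assumed figures, low-voltage DC burns off roughly 40% of the power in the wires, while the same load at 11 kV loses a tiny fraction of a percent – which is why DC generators had to sit close to the lamps while AC could serve the countryside.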
Left: Harington’s toilet, 1596, from ‘A New Discourse of a Stale Subject, called the Metamorphosis of Ajax
Right: Crapper’s Improved Registered Ornamental Flush-down W.C. with new design cast-iron syphon water waste preventer. — Image by © Corbis
My favorite invention of all time has to be the flush toilet. In Roman times, the houses of wealthy people had running water under a seat with a hole, to carry the waste away. But for the most part, running water to carry away waste was pretty rare during all the eras we’ve studied. The trick was usually to be able to do your business inside, but have the waste go outside. This could mean “garderobes” in the walls of castles (closets with a hole in a seat that carried the waste down a chute into a pit) or chamber pots in bedrooms and closets that could be emptied later. In 1596, John Harington invented a valve pipe toilet, which he installed for Queen Elizabeth I, but it was smelly and noisy. During the 18th century, Alexander Cumming invented the s-shaped disposal pipe that kept out odors, and inventor Joseph Bramah created the flap valve that kept toilets from freezing in winter. But it wasn’t until the 1880s that the flush toilet became popular. Thomas Crapper (yes, I know) improved the ballcock and sold the modern flush toilet, although several inventors before him had made improvements on the 18th century models. As with most inventions, Crapper gets the credit because his product dominated the marketplace, and he was good at marketing.
I hesitated to put railroads and canals into the infrastructure section, but it is warranted even though they began as private enterprises funded through private investment. The first canals were funded by companies trying to connect ports with factories. Like imperialist ventures, canal investments could be profitable or be a total bust – it was sometimes impossible to tell until the canal was built. Many never got past the planning stages. In the 1830s, for example, Louis Galabert proposed the Canal Pyrenees, to connect the Canal du Midi (built under royal supervision in the 17th century) to the Atlantic.
The Suez Canal / Wikimedia Commons
The Suez Canal, though not built in Europe, affected European trade. Fifteen years passed between its proposal and its opening. It was first proposed in 1854 by French diplomat Ferdinand de Lesseps, who contracted with the Egyptian viceroy to form a company to build the canal. Britain objected, since France would get the benefit of the canal, though Britain would later buy an interest in it. By directly linking the Mediterranean to the Red Sea, it made it possible for ships to sail from Europe to India and Asia without going around Africa. The pilot study estimated that 2,613 million cubic feet of earth would have to be moved, and 2,013 million cubic feet dredged from water. Egyptian workers were drafted and were poorly paid, but the technology was extraordinary (huge steam dredgers were eventually brought in, although when the canal began it was all done with Egyptian labor and shovels). The canal opened in 1869. It was not deep enough or wide enough, and many ships were grounded in the canal and had to be rescued, but the time saved from not rounding Africa was worth it. The British took over Egypt, partly to prevent the canal falling into the hands of unfriendly rebels, so after 1882 they controlled the canal.
Interestingly, the opening of the canal caused the beginning of a unidirectional migration of Red Sea animals and plants into the Mediterranean. They call this the “Lessepsian migration” after Lesseps. And of course, it caused increased imperialism, or at least seemed to increase its speed.
And no increase in speed was as important as the railroad. Railroads had been around for years, connecting a single mine to a single port, for example. That meant that in both Europe and the U.S., there was no single standard for tracks or engines. That changed as governments became involved, and schemes were created to hook up existing railways and create new ones. Here, the United States plays a major role. The completion of the Transcontinental railroad in 1869, particularly when combined with the opening of the Suez Canal, meant that products and people really could travel around the globe. It is no coincidence that Jules Verne’s novel Around the World in Eighty Days was published in 1873. In that book, Englishman Phileas Fogg is challenged to travel around the world in 80 days, which he does despite delays and various adventures. His main transport was steamships and railroads, the symbols of the age, and correspondence throughout the journey took place using telegraph.
The First Telephone Exchange of Pécs / Wikimedia Commons
“Can you hear me now?” No, that’s not just a contemporary phrase – it was often used with the new telephones. I won’t retell the story of Alexander Graham Bell (you can find it in the children’s section of any library). First demonstrated in 1876, the telephone became popular for two reasons: it transmitted voice (instead of just dots and dashes) over wire, and its infrastructure made it possible for phones to be installed in homes as well as public locations. The telephone exchange, as it was called by Bell and others, was invented by Tivadar Puskás, a Hungarian engineer who worked for Thomas Edison. The first exchange was cobbled together in New Haven, Connecticut, and worked on a subscription service. People installed phones in their homes and paid $1.50 a month to be able to make calls through the switchboard, using an operator.
Because they were modeled on telegraph exchanges, the use of telephones at first was fairly limited. People would lease a pair of telephones (one for home, one for office) and pay for the wire to be laid between them. The first phones used only one wire, just like the telegraph, so with early phones you both spoke into and listened using the same piece. Edison’s lab created the separate microphone, allowing for the 19th century phone we see in the movies, the candlestick phone. Early models have no dial, because an operator answered when you lifted the earpiece off the switchhook.
The leading country for number of telephones per capita at the end of the 19th century was not the U.S. – it was Sweden. New Zealand, Norway and Switzerland had the most extensive systems in the world.
Not everyone greeted the phone with enthusiasm, however. Just as today, the infrastructure marred the landscape. Telephone poles and wiring everywhere were not attractive. Along with home electric wiring, the phone brought fears about its impact on health, just as cell towers do today. Sand filters that made water cleaner were progress for everyone – private telephones whose wires and poles disrupted everyone’s view were quite another matter.
And for each new invention, something was lost. With an increase in communications by wire came an increase in the loss of privacy. With new transportation, isolation became less possible. New weaponry was seen at the time as decreasing military skill and relying on machinery to kill – there was particular disapproval of the use of mechanical weapons against African spears. Electrical wiring and city lighting brought a different kind of light, and changed habits of work and play. The bright light of electricity, compared to candlelight and gaslight, illuminated close work after dark. This made possible shifts of work covering 24 hours, and night shifts for detailed labor. The bright light at night also changed melatonin production in the body, causing difficulties with ordinary sleep cycles and weakening people’s physical systems. Recording devices were accused of causing social isolation (why go to a concert when you can listen at home?) and losses of memory, and typewriters would cause a decline in handwritten communication.
Into the Early 20th Century
At the turn of the 20th century, the processes of technological infrastructure, and the increased pace of patents and inventions, continued. With the adoption of electricity came conflicting technical standards and safety concerns, but also novel forms of entertainment and coding for war communications and espionage. Beginning with the machine gun, military technology would create a need for tactics for long-range weaponry and impersonal warfare. So at the same time as people were coming closer together through airplanes, radio and movies, they were distancing themselves when killing each other.
Sometimes the technologies that change everything are domestic. At the turn of the 20th century, in addition to lighting, electricity brought new implements for the home.
The idea of washing clothes automatically goes back to at least the 18th century, with simple cranks put on barrels to turn the clothes in soapy water. The first washing machines were patented in the 1840s, but they didn’t become automatic (that is, powered by gasoline or electricity) until the first decade of the 20th century. In Europe these were used in commercial laundries, while in the U.S. the focus was more on home use.
By 1890, 24% of U.S. households had running water and 8% had electricity. Other innovations for the home included toasters, waffle-irons, non-electric vacuum cleaners, electric irons, even popcorn machines. Many of these were sold as time-savers for housewives, products which would make their lives easier.
But these products did not make women’s work easier.
1913 Motor Washer ad / Wikimedia Commons
Historian Ruth Schwartz Cowan’s book More Work For Mother (NY: Basic Books, Inc. 1983) articulated the idea that the domestic technology revolution of the 1880s actually increased work, and she backed it up with historical research. It turns out that new technology not only caused changes in perceptions of housework, but also made housework exclusively a female task. An example was rug-cleaning. Prior to the vacuum cleaner, rugs were dragged outside every few months, usually by a hearty male member of the household. They were beaten with carpet-beaters to get out the dirt, aired, and brought back inside. The vacuum cleaner made it possible to clean the carpet more frequently, which changed the standard of cleanliness. The expectation became that the carpet would be cleaned once a month, or once a week. And with the vacuum cleaner so easy to use, the woman became the only carpet cleaner.
The situation with laundry was similar. Prior to the washing machine, middle-class families sent laundry out to have it done. Lower-class and rural families sometimes made it an occasion, with many family members dragging out the tub or dragging the clothes down to the river. Early washing machines were hardly more than powered tubs which shook the clothes, water and soap. Sometimes they agitated so much they “walked” across the kitchen! Washing machines (even when the dryer consisted of a squeeze-roller wringer mounted on top and a clothesline outside in the sun) increased the expectation that clothes be washed more frequently. Instead of taking your clothes off and airing them for tomorrow, you expected “mom” to do laundry every day.
Early icebox / Wikimedia Commons
Food refrigeration, however, actually did save time and trouble. Keeping food fresh had always been a concern of human societies. Root crops were kept in root cellars, water was put on the roof at night to cool, meats were salted or smoked. Ice was cut from the lake and put in the ice house. Fresh milk was only available if you had a cow or goat. With improved transportation in the mid-19th century, ice could be delivered to people’s homes, and used in railroad cars to keep food fresh. By then, industrialization was making it harder to find clean ice. Brewers were likely the first to use electricity to make ice. By 1880, refrigerators or “ice boxes” were produced with good insulation that made it possible to keep the ice frozen for longer periods. Electric home refrigerators didn’t appear until around 1915.
The first airplanes were fascinating, dangerous things.
This video shows the incredible inventiveness of early aircraft designers. Shown are several short clips of early failed attempts to create controlled, sustained mechanical flight. There are images of craft that flap wings like birds, have complicated rotating wing assemblies that break up, have umbrella-like assemblies that bounce up and down, and rocket-powered backpacks, ending with early footage of demonstration flights of the Wright Brothers’ first working design.
The trial and error shown in this footage was typical. The same methods had been used in early steam trains, where the boilers tended to explode. What’s interesting here is the tinkering, the efforts of people (some scientifically trained but most not) to try new things, get investors, try more things. Although deriving from Victorian times, the “garage engineer” was most appreciated in the early 20th century. Mechanical tinkering was a hobby throughout Europe and the U.S. The popular film Chitty Chitty Bang Bang, the story of a crackpot inventor who creates a flying car, is set in the 1910s for a reason (even though the 1968 film was based on a 1964 book about the 1920s).
Wright aircraft design, 1903
The Wright Brothers get credit, of course, for the first manned, controlled, sustained, powered flight in 1903. They were true pilots, having flown gliders off cliffs for many hours to learn how to control a plane during flight.
Airplanes captured the imaginations of many people, and led not only to further developments in air travel, but in imagining what else humans could do.
Interestingly, in the year before the Wright Brothers flew their plane, motion pictures were becoming commercially viable, and one of the best early films was about going to the moon.
The entire film is 15 minutes long, and worth watching for all sorts of reasons.
First movie, A Trip to the Moon (1902), remastered in color
Notice the way they get to the moon, with a cannon firing a capsule. Notice the arguments that precede the journey. Notice how the women are relegated to cheerleader status, and how the moon people are demonic and OK to kill. There’s as much here about imperialism as about science.
But it’s mostly about the film itself. Moving images were possible as far back as the 11th century (remember Al-Hazen?) with the camera obscura, which could project light through a pinhole onto a wall or screen. But the viewing was always real-time and real-place: the camera was simply reflecting what was happening in its view. Again, the recording of moving images came in the Victorian age. In the 1880s, revolving viewers like the zoetrope made it possible to see animations by looking through slits at a spinning strip of images.
The hero here was Eadweard Muybridge, a photographer who saw the possibilities of the camera. He took pictures of the American West, but was always trying new things: multiple time-lapse photos to show a building going up, and a 360-degree panorama of San Francisco.
San Francisco 360 degree panorama, 1902 / Wikimedia Commons
This was made for the wife of Leland Stanford (yes, that Stanford), former California governor and horse fanatic. Stanford asked Muybridge to help with the question of whether all four of a horse’s feet leave the ground while galloping. While a single photo answered the question (yes, they do), Stanford encouraged him to continue experimenting with recording motion on film. Muybridge set up a series of cameras along the track, then used the zoetrope to play the images back, so it looked like this:
This short animation shows the series of images, shown successively, of a horse and rider galloping that make up an early zoetrope “film”. Photos were taken every few feet as the horse moved down the track. These photos were attached in correct order inside a rotating cylinder. Small slits placed between the photos allowed the viewer to see one photo at a time as the cylinder rotated. The successive images created a simple film-like effect that showed the horse running.
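The arithmetic behind the zoetrope is simple: one frame is visible per slit per revolution, so the effective frame rate is the slit count times the rotation speed. A minimal sketch of that relationship (the 12-slit drum and target frame rates are assumed for illustration, not taken from any historical device):

```python
# A zoetrope shows one frame per slit per revolution, so the
# effective frame rate is slit_count * revolutions_per_second.
# Roughly 10-16 frames per second is where the eye starts to
# fuse still images into continuous motion.

def rpm_for_frame_rate(slit_count, target_fps):
    """Drum speed (revolutions per minute) needed to hit target_fps."""
    revolutions_per_second = target_fps / slit_count
    return revolutions_per_second * 60

print(rpm_for_frame_rate(12, 12))  # a 12-slit drum at 12 fps needs 60.0 RPM
print(rpm_for_frame_rate(12, 24))  # doubling to 24 fps needs 120.0 RPM
```

Spin the drum too slowly and the viewer sees a flickering slideshow; spin it fast enough and persistence of vision turns Muybridge’s stills into a galloping horse.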
Thomas Edison’s Kinetoscope / Wikimedia Commons
The masters of the medium were in France. The Lumière brothers, sons of a portrait painter, were photographers who were both mechanically minded. They patented processes for color photography and for dry-plate negatives in the 1880s, but didn’t get into motion pictures until the 1890s. In 1893 Louis Lumière saw a demonstration of Edison’s Kinetoscope, which looped celluloid film (created by the Eastman company) in front of a lens for viewing. The brothers created the lightweight Cinématographe, which unlike the Kinetoscope could both film and project the movie. They opened the first motion picture houses to show the movies they created.
Lightweight Cinématographe, by the Lumière Brothers / Wikimedia Commons
Georges Méliès, a stage magician, saw one of the Lumière movies in 1895, and was inspired. He tried to buy their camera but they wouldn’t sell, so he looked around for other inventors working on similar ideas. He journeyed to London to buy an Animatograph projector and film, and tinkered with it until it worked as a camera. Between 1896 and 1913 he created 500 films, all branded with his particular creativity and magic.
The inventor of the radio is still in dispute. The ill-fated Nikola Tesla appears to have invented radio wireless communication about the same time as Guglielmo Marconi, but was subject to the same sort of bad luck and patent hounding that caused him to lose battles against Thomas Edison. Tesla discovered that his Tesla coils (resonant transformers that could store power and produce high voltage) could be tuned to particular frequencies, and send information through the air. By 1895 he was ready to test a 50-mile signal when his lab burned down, causing delays.
The Marconi radio room aboard the RMS Olympic, 1911 / Wikimedia Commons
Guglielmo Marconi began tinkering with wireless communications as a young man, and by 1896 had transmitted wireless signals over several miles and received a British patent for wireless telegraphy. As his work continued, he applied for patents in the U.S. In 1897, Tesla was granted a US patent for basic radio, so the Patent Office turned down Marconi’s patents. But Marconi’s business, Marconi Wireless Telegraph Company, Ltd., began having great success on the British stock market, and by 1904 the Patent Office reversed its decision and gave Marconi the patent on radio instead of Tesla. (If you’re unsure of Tesla’s brilliance, see this list of the patents he did get.)
Marconi shared the Nobel Prize in Physics for the invention of radio in 1909. With the advent of the Great War, radio telephony became a crucial component of battlefield communications. Improvements such as oscillators, amplifiers and the electron tube made it possible to report a gas attack in the trenches, and guide aircraft to drop bombs. It wasn’t perfect – when battle began, phone lines were cut and transmitters blown up, and everyone went back to sending messages through runners, homing pigeons, and dogs. Plus the range of early radios in the field was only about 2,000 yards.
The Titanic, you may have heard, was an unsinkable, iceberg-proof ship that sank when it hit an iceberg on its first voyage in 1912. It had Marconi radio equipment aboard, even though there had been improvements in radio developed by American inventors. Apparently this was because Marconi held the patent monopoly. And, according to the inquiry after the disaster, messages over wireless could easily jam each other on that kind of equipment. In addition, regulation of radio had not matured yet – while the U.S. required all big ships to have radios, for example, it did not require 24-hour monitoring or the setting of specific frequencies for communication. The Titanic disaster caused new laws to be implemented controlling this technology, creating an international distress signal and requiring all radio sets to interact with all other sets. Short waves and other more flexible systems would develop as a result. The disaster would also lead to the invention of echolocation “sonar” to use sound to find things in the water – it was patented within a month of the sinking.
In 1943, a few months after Tesla’s death, the US Supreme Court upheld Tesla’s radio patent, apparently because the Marconi company had sued the government for its use of his radio idea in World War I.
The Machine Gun and the Great War
Before we get into the technology of World War I, we have to talk politics. The foundations to the Great War lay in the Franco-Prussian War.
The Franco-Prussian War (1870-71) was fought as the final struggle in the unification of Germany. If you are counting, this is the Second Reich (the first one was when Charlemagne unified the Germanic tribes, and the third emerged later in the 1930s). The unification of Germany was primarily the work of Otto von Bismarck, who through genius and trickery had gotten all the competing powers (Austria, Russia, Italy) to lay off and let Germany unify into a state under the dominance of its biggest province, Prussia. He tricked France into declaring war on Prussia in 1870.
Reffye mitrailleuse Le Général Hanicque (“Canon à balles modèle 1866“), manufactured in 1867, on display in Les Invalides / Wikimedia Commons
A major factor in Prussia’s victory was the use of the machine gun, but not by them. Modern machine gun designs go back to 1718, when James Puckle of Britain patented a large gun that could fire shots from a revolving cylinder. But, like Collier’s revolver (remember that?) it was a flintlock, so it was cumbersome and unreliable. By the American Civil War, the Gatling Gun became the first automatic weapon in widespread use. Invented in 1862, it used percussion caps inside a brass cartridge and fired when the crank was turned by hand. In 1866, a similar rapid-fire design, the Reffye mitrailleuse, became the secret weapon of France. It could fire 100 rounds per minute with a good operator.
In adopting the new super-weapon, France made huge mistakes. The French command gave the guns to the gunner corps, who worked with field artillery. They substituted some of the machine guns for some of the field artillery. This put the machine guns at the back of the infantry, and in the hands of specialists in big guns which shot artillery shells over their own troops at the enemy. In battle, Prussian Krupps cannons, breech-loading and with a range of 4500 meters, dispatched most of the French front line. The French rarely got to use their poorly located super-weapons. (It is ironic that the Krupps cannon won a prize at the 1867 Great Exhibition of — wait for it — Paris.)
The Great War (so called because it was huge, not fabulous) began in Bosnia in 1914. Bosnia, a Slavic region, wanted independence from the Austro-Hungarian Empire. She had been agitating for some time, and had some support from Serbia. The heir to the Austro-Hungarian throne, Archduke Franz Ferdinand, decided to take his summer vacation in Sarajevo, capital of Bosnia. It was a political move, to show that Bosnia was still part of the empire. He and his wife were assassinated by a Bosnian revolutionary teenager as they traveled down the street in their open car.
The Austro-Hungarian Empire held Serbia responsible for the assassination. The killer had been of Serbian ethnicity, and the Empire knew that Serbia was supporting Bosnian independence. The Empire demanded that Serbia take responsibility, or there would be war. Serbia dawdled, and Austria-Hungary declared war on Serbia. It should have been a little war; Serbia had a little army, the Empire’s was huge. Instead it became a world war.
Why? Because of the secret alliances and treaties that had formed all over Europe during the 19th century. These alliances were designed to protect nations in an era of increasing nationalism, independence movements, and German and Italian unification. So Serbia had a big secret ally: Russia, protector of the Slavs. The agreement said that if anyone attacked Serbia, Russia would come to her defense. So Russia declared war on Austria-Hungary. Austria-Hungary had a secret treaty with Germany, saying if any major power declared war on Austria-Hungary, Germany would come to her defense. So Germany declared war on Russia. Russia had a secret treaty with France, saying if either one of them were attacked by Germany, the other would come to defend. So France declared war on Germany.
Germany, desperate now that she would have to fight on two fronts, planned to attack France first and beat her quickly (she knew how, from the Franco-Prussian War). It would take time for the industrially backward Russia to mobilize anyway. But the only way to take Paris quickly was to go through Belgium, because the French had fortified the area west of Alsace-Lorraine. Belgium was a neutral country, and said no to the Germans just marching through to take Paris. Germany marched through anyway. Belgium had a secret treaty with Great Britain to protect her. Britain entered the war in August 1914, only a few weeks after the assassination of the Archduke. The Ottoman Empire entered soon on the side of Germany.
British and French troops would fight the Germans on this “Western Front” in Belgium and northern France for the entire war. One reason was, again, the machine gun. When troops with machine guns at the front (everyone had the right idea by now) faced off, the only way to hold position was to dig a trench so you could keep your head below fire. Your machine gun could then be mounted above the trench, to kill anyone trying to take your trench. Since both sides had machine guns, and had studied how to use them effectively since the Franco-Prussian War, taking ground involved shelling the enemy trench until you thought everyone was dead, then jumping out of your trench and running across “no man’s land” to the enemy’s trench, hoping to hell there wasn’t a single person left in the enemy trench to man the machine gun. The casualties were unbelievable. Hundreds were mowed down in a single charge. At Passchendaele in 1917, an advance of five miles cost 400,000 British casualties. Cavalry quickly became useless: what good was a horse in this situation?
Senior officers were forbidden from accompanying the first charge / Wikimedia Commons
The leadership was stunned. The British leadership handled things badly, according to numerous sources such as John Ellis’ Social History of the Machine Gun (New York: Pantheon Books, 1975) and, frankly, pretty much every book I’ve ever read on the Great War. British officers had a long tradition of nobility (both literal and in a moral sense). They continued to believe that the will and courage of the individual soldier was more important than the technology. It took several years of war to convince them that machine guns, gas masks, and airplanes could win the war. In many instances, the leadership blundered and cost the lives of many men. In one case, infantry were sent “over the top” of their trench to attack the enemy after only a half hour of shelling because the British were short of ammunition. In another, senior officers were forbidden from accompanying the first charge, leaving their men sitting around after taking a trench, waiting for orders while the Germans recovered and prepared to counterattack. At Gallipoli, pocket watches were not synchronized properly and men were ordered to wait to attack until five full minutes after artillery shelling stopped, giving the Turks plenty of time to re-man their trenches and their machine guns.
The average soldier was very likely to die in this war, by machine gun fire, disease in the trenches, or cold on the Eastern Front. If he survived, he would likely be subject to the newest illness, shell shock, where the mind went numb from the continual night shelling and the sight of total carnage. To what extent did one technology, the machine gun, contribute to all this?
The London-Faringdon Coach passing Buckland House, Berkshire, by James Pollard, 1835 / Yale Center for British Art
Trains on stamps, 1950s / Wikimedia Commons
U. S. President William Howard Taft’s White Motor Company Model M 40 horse-power steam 7 passenger touring car. / Library of Congress
The internal combustion engine, used in automobiles from about the 1880s, made things easier. Steam cars were hard to warm up on cold mornings. But the first gasoline cars required a crank, and they were (and are) stinky. Electric cars were very popular at the turn of the century, and looked to become the common mode of transport.
Ford Electric Car ad / Wikimedia Commons
Henry Ford’s production line utilized an assembly process similar to that in Colt’s factory a generation before, allowing the Ford Motor Company to mass produce automobiles. The cost of Ford automobiles got low enough that middle class people could buy them. Electric vehicles were produced much less efficiently, so their costs stayed high. Then finds of petroleum in Texas and elsewhere made gas very cheap. Faced with a smelly, cheap product or a clean, expensive product, the public chose the Ford.
The automobile, of course, changed society in both Europe and America. It allowed for individual transport at the convenience of the owner, a mobile status symbol to show off, tinkering at home for the mechanically minded, and privacy for the young going out on dates. Ultimately it would necessitate improvements to the road systems, and regulation of traffic using stoplights and signs.
The age of mechanized weaponry was also the age of mechanized entertainment and the beginning of mechanized domestic work. You know I’ll tell you that much was lost because of this, and I don’t just mean men doing chores around the house. Instead of firing when you saw the whites of their eyes, you could kill the enemy from far away. Instead of going to a concert to hear music, you could listen to the phonograph or the radio. Instead of lighting a lamp, you could press a switch. But this is also an age where technologies were deployed before their time, such as the machine guns used by the French in the Franco-Prussian War, and those crazy plane designs. Despite the advancement of science, most of those influencing technology during this time were not scientists. They were tinkerers using trial and error to create useful, and in some cases deadly, tools. The development of the automobile had the potential to change the entire landscape – for a while cars were deadly too, as urban areas became crowded with horse-drawn vehicles and automobiles at the same time. Kind of like what will happen soon when we have driverless cars and driven cars on the road together!
World War II and the Cold War
War, whether hot or cold, causes rapid advancement in technology. That’s a theme in history. Certainly World War II and the following Cold War provided an impetus and a backdrop for technological development. It is also with this era that science and technology work together most closely to create our contemporary mindset.
Following World War I, Europe and America celebrated with both introspection and a view toward the future. The introspection was deep, and much of it was the questioning of technology as human tools. The slaughter of the war had been astonishing, and the boys who did come back were damaged mentally and physically. There were more than a few people who believed that Western Civilization was coming to an end. There was a reaction against the dehumanization of mechanized slaughter, a feeling that the technology had gotten beyond people’s ability to control it. Philosophically, there were two ways to go after the war.
Logical empiricism related most closely to science, because it saw a lack of scientific thinking as part of a larger social problem. Within the scientific community, the early 20th century had been a time of dislocation and change. The relativity theories created by Albert Einstein are a good example of this. Before the late 19th century, geometry was based on the work of Euclid, following his axioms. Mathematicians began to move away from Euclid’s two-dimensional plane, discussing and calculating curved surfaces that violated Euclidean axioms. Einstein’s view of space was similarly non-Euclidean and curved. And this theory of relativity meant that there were, in a sense, no constants in the universe – everything moved relative to everything else. This destroyed the Newtonian physics that every scientist had understood for over two centuries. It was time to build a new science based on rational principles, not just physical observation of the world. The logical empiricists (many of whom, like Einstein, were scientists) were in essence trying to revive Enlightenment principles of reason in the face of a seemingly irrational world. This philosophy saw “metaphysical” evils such as war, racial hatred, and fixed gender roles as manifestations of irrationality to be defeated with reason.
The other way to go was toward Existentialism, which assumed moral responsibility. Technologies do not create evil – people do. Machine guns were manned; they were not automatic in deciding to fire themselves. Jean-Paul Sartre, influenced by the German philosopher Heidegger during the 20s, rejected science as a way of understanding human behavior. Human beings are more than just people situated in a context. They have the responsibility of free will independent of what’s happening. Our innate freedom implies heavy responsibility for our actions. We choose to be who we are, and to do what we do. Although existentialism did not develop fully until after World War II (with its highly questionable acts of moral responsibility – for example in the death camps), it began right after World War I and was a response to it.
Now, we know that some just rejected the introspective response. The economy of the U.S., which was only involved in the war for a little over a year, rebounded immediately and led to the Roaring Twenties. The economy of France recovered much more slowly, as much of the damage done to the land occurred in France. Germany attempted to become a republic, but the Treaty of Versailles blamed it for the war and demanded reparations. Unable to pay, Germany accepted the occupation of the Ruhr Valley, a major industrial area, by France in 1923 as partial reparations. The German economy collapsed without a base of production, and hyperinflation ensued. A dollar, worth 4 marks before the war, was worth 4.3 trillion marks in 1923. People starved and sold their clothes, jewels and children to survive.
Still Life on a Table with ‘Gillette’, by Georges Braque, 1914 / Georges Pompidou Center, Paris, France
Artists throughout Europe and America responded with a modern vision of the world. Bright colors and geometric shapes were featured in paintings, and geometry was expressed differently in architecture. These “modernists” in general rejected traditional forms and ideas, including Enlightenment rationalism, religion, and Victorian society. They used techniques that drew deliberate attention to the technologies in the art. Brush strokes were featured, works were created out of collages of various materials (like the work of Georges Braque, above). Modernism had begun before the war, at the end of the 19th century, but the responses to the war seemed to push it forward into a form of social commentary.
Another response to the horror of World War I was fascism. Developed in the early 1920s by Mussolini, fascism was a form of nationalism that was based on the idea of historical destiny through the state. As Mussolini himself wrote for the Italian Encyclopedia in 1932:
The foundation of Fascism is the conception of the State, its character, its duty, and its aim. Fascism conceives of the State as an absolute, in comparison with which all individuals or groups are relative, only to be conceived of in their relation to the State.
Italy felt itself abased by World War I, ignored at the peace negotiations that redrew boundaries for Europe and the Middle East, and crowded out of the imperialist economies of the late 19th century. Since Germany too was abased, and also would adopt fascism, I tend to see fascism as the politics of those who feel they have been wronged by history. Fascism glorifies war as a tool of state, and despises both democracy and socialism as undermining the true potential of the individual:
Fascism, now and always, believes in holiness and in heroism; that is to say, in actions influenced by no economic motive, direct or indirect.
In its adoption of a metaphysical stance and its assumption of responsibility onto the state, fascism rejected both philosophical trends after the war. Its visual expression was in the art of Italian futurism, which began just before the war and glorified violence, speed and technology. Their work celebrated industrialization and technology in the same self-conscious way that the modernists celebrated the technology of art itself.
Left: Plastic Synthesis of the Idea of War, by Gino Severini 1915 / Municipal Gallery of Modern Art, Munich
Center: Lights + Sounds of a Night Train, Benedetta Cappa Marinetti, c.1924 / Guggenheim Museum, New York
Right: Unique Forms of Continuity in Space, by Umberto Boccioni, 1913 / Museum of Modern Art, New York
After the Stock Market Crash of 1929, the main European and American investors, who had financed the rebuilding of Europe and the vast moneymaking structures of America, became isolationist. Investment was pulled back into American companies, and many went under. The technologies of the 1930s were primarily improvements on what had come before: frozen food using the new electric freezers, improvements in radio that allowed most homes to have a set and listen to programs, the long-playing vinyl phonograph record, reliably produced color film, Disney’s first animated feature (Snow White) – all were built upon previous technologies. The television was invented in the 1930s, but wouldn’t reach economies of scale till after World War II.
You can read about World War II anywhere. The short version is that Mussolini’s fascist Italy began expanding in the 1920s, and in Germany the republic failed. The fascist National Socialists (Nazis) were brought to power in 1933. Different countries dealt with economic depression in different ways. Some, like the United States, Britain and France, adopted socialized methods to deal with the crisis, using deficit spending to keep people employed and create necessary public works. But Germany and Italy, and the newly imperialist Japan, chose expansion to create economic growth. The Nazi leadership managed to combine pseudo-scientific ideas of race (exactly the sort of thing the logical empiricists objected to) with this expansion to justify a recreation of the German state, destroyed by World War I’s unfair treaty. They began a “reunification” project of German-speaking peoples, then continued expanding into eastern Europe.
The invasion of Poland in September 1939 brought Britain and France into the war. The Soviet Union, originally in a non-aggression pact with Germany, would join the Allies when it was attacked by Germany in 1941. As with the previous world war, this one brought extraordinary advances in military technology.
Cutaway drawing of a cavity magnetron from 1984. Part of the righthand magnet and copper anode block is cut away to show the cathode and cavities. This older magnetron uses two horseshoe shaped alnico magnets, modern tubes use rare earth magnets. / United States Navy
Detection technologies were crucial to finding and engaging (or avoiding) the enemy. Sonar (originally SOund Navigation And Ranging) had been patented right after the Titanic disaster. But it remained experimental until World War I, when German submarine warfare caused more rapid development. By World War II, improved active sonar would be able to “ping” nearby submarines to find their location. Radar (originally Radio Detection And Ranging) was first patented in Britain by Robert Watson-Watt (a descendant of James Watt of steam engine fame) in 1935. Britain, as an island, was particularly vulnerable to air attack. Radar uses radio waves, broadcast through the air, to detect objects. Watson-Watt’s system was able to detect planes up to 80 miles away, but the transmitters had to be based on the ground. During World War II, the invention of the cavity magnetron made it possible to install microwave radar in the planes going up to meet the German air raids hitting civilian targets at night. The mass production of the magnetron in the U.S. made possible Allied superiority over the radar being used by the enemy. Nowadays we use it in microwave ovens.
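The principle behind both sonar and radar ranging is simple timing: send a pulse, wait for the echo, and the distance is half the round-trip time multiplied by the wave’s speed. A minimal sketch of that arithmetic (the function name and the sample target distance are illustrative, not drawn from any historical system):

```python
# Radar ranging: a radio pulse travels to the target and back at the
# speed of light, so range = speed * round_trip_time / 2.
C = 299_792_458  # speed of light in m/s

def echo_range_km(round_trip_seconds):
    """Distance to a target, in km, implied by a radar echo's round-trip time."""
    return C * round_trip_seconds / 2 / 1000

# A plane ~129 km (about 80 miles) away returns its echo in well under
# a millisecond:
t = 2 * 129_000 / C          # round-trip time for a 129 km target
print(round(echo_range_km(t)))  # → 129
```

Sonar works the same way, only with the much slower speed of sound in water (roughly 1,500 m/s), which is why sonar echoes take seconds rather than microseconds.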
Military Enigma machine, model “Enigma 1”, used during the late 1930s and during the war / Museo scienza e tecnologia Milano, Italy
Encryption technologies were crucial to communications, since both telegraph lines and wireless communications could be intercepted by the enemy. Encryption technologies go back to ancient times, when secret messages were written on strips wrapped around a stick. The strip was brought to the recipient, and had to be wrapped around a stick of the same size to be understood. Julius Caesar apparently also enciphered messages simply by shifting every letter a fixed number of places along the alphabet (reportedly three). During World War II, it was much more complex. German engineer Arthur Scherbius invented the famous Enigma machine at the end of World War I, and despite having the code broken by Polish spies, the improvements on design through the 1930s made the code very difficult to break. It was a complex machine, requiring operators at both ends to set the many rotors and plugs the same way and start at the same letter. Some say the machine wasn’t that good, and that’s why the British were able to break the code. But it’s more likely that mistakes and sloppiness in its use for unimportant communications made it possible to discover how to decode the important ones.
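Caesar’s scheme is simple enough to sketch in a few lines. This is a minimal illustration of the shift cipher described above (the example plaintext and the shift of three are my own choices, the traditional value attributed to Caesar):

```python
# Caesar cipher: shift each letter a fixed number of places in the
# alphabet, wrapping from Z back to A. Decryption is the same operation
# with the shift negated.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

print(caesar("ATTACK AT DAWN", 3))   # → DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # → ATTACK AT DAWN
```

With only 25 possible shifts, the cipher falls to trial and error in minutes, which is exactly why machines like Enigma, with astronomically many rotor-and-plugboard settings, were such a leap.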
Mathematician Alan Turing was instrumental in creating computers that could work on cracking code – he is often given credit for creating the first electronic computer machines. In 1939, the Polish Cipher Bureau passed on the information they had about Enigma, and Turing used that information to create the “Bombe”, a machine that could be set using multiple switches and plug boards to decipher the code. The work was done at Bletchley Park in England.
How the code was cracked:
The clip goes a bit far in saying that the Germans didn’t know that code books were being retrieved from captured submarines – in fact, they went to great lengths to set self-destruct sequences for the submarines in case of capture. In June 1944, the German submarine U-505 was captured, along with its code books, by a brave team of Americans. The submarine now resides at Chicago’s Museum of Science and Industry, which has a good website on the story of the capture.
A book came out recently about working at Bletchley, and this BBC news story features some of the women who broke the code:
The Atom Bomb
As the war dragged on, there was fear that Germany was developing a superbomb based on nuclear fission. Beginning in 1935, three people worked together on the possibility of nuclear fission: Lise Meitner (Austrian physicist and the first woman in Germany to become a full professor in physics), Otto Hahn (German chemist who would later win the Nobel Prize) and Fritz Strassman (Hahn’s assistant). They were not the first to try: substantial work had been done the year before by Italian physicist Enrico Fermi. Meitner provided the theoretical foundation after they had found that bombarding uranium with neutrons produced the much lighter element barium, recognizing the connection to Einstein’s formula relating mass and energy. Although I do not have a deep understanding of nuclear physics, I am struck by the similarity between splitting the atom and alchemy – both have the goal of changing one natural substance into another.
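The connection Meitner drew can be written down with Einstein’s mass–energy relation; the one-gram figure below is purely an illustrative calculation of mine, not a number from the text:

```latex
E = mc^2
\qquad\text{e.g.}\qquad
E = (0.001\,\mathrm{kg}) \times (3\times10^{8}\,\mathrm{m/s})^2
  = 9\times10^{13}\,\mathrm{J}
```

The fission fragments weigh slightly less than the original uranium nucleus plus the neutron that split it; that tiny mass difference, multiplied by the enormous factor of the speed of light squared, appears as the energy released.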
Although a converted Christian, Lise Meitner was born Jewish and had to escape to the Netherlands, with Hahn’s help, in 1938. In 1939, the scientists published their findings, which caused a sensation and led Albert Einstein to write to President Franklin Roosevelt, concerned that the knowledge of how to make a bomb might find its way into Nazi hands. When FDR launched the research program that would become the Manhattan Project, developing the bomb in the U.S., Meitner was offered a job but refused, wanting nothing to do with making a bomb. Many of the scientists were, like her, refugees from Nazi Germany. When the U.S. entered the war in December 1941, the project expanded, ultimately employing 120,000 people at research locations around the country (though many were not told the goal of their research). By December 1942, the Chicago branch of the team, including Italian physicist Enrico Fermi, created the first controlled, self-sustaining chain reaction using fission. Theoretical physicist J. Robert Oppenheimer led the effort at Los Alamos, New Mexico, detonating the first atomic weapon at the test site in July 1945. Oppenheimer was astonished at the power of the bomb. The flash was visible for 200 miles and the mushroom cloud of smoke reached 40,000 feet into the air. He quoted the Bhagavad Gita in a way that resonates with me about a lot of our technologies:
We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita. Vishnu is trying to persuade the Prince that he should do his duty and to impress him takes on his multi-armed form and says, “Now, I am become Death, the destroyer of worlds.” I suppose we all thought that one way or another.
Within weeks two versions of the bomb were on their way to Japan. One was dropped on Hiroshima and the other three days later on Nagasaki. Here’s my own analysis, using primary source footage:
Instead of returning to normalcy after the war, it became immediately apparent that Soviet expansion into Europe was not going to end with the war. Having invaded as far as Berlin to end the war, the Soviet Union worked to put friendly communist parties into power in the nations of Eastern Europe. By 1946, Winston Churchill was claiming that an “Iron Curtain” was descending in Europe, separating the free West from the communist East. The techniques of spying, and sonar, and many more technologies would be developed further as the United States and its allies vied against the Soviet Union and her allies to prevent either side gaining more control over other countries. The Cold War would continue until the Berlin Wall, built in 1961 to keep Soviet-controlled East Germans from emigrating to the West, came down in 1989.
From Vacuum Tubes to Transistors
From the end of the 19th century until the middle of the 20th, the heart of what we would call electronics was the vacuum tube. When I was a kid, I used to go with my dad down to the drug store, carefully carrying vacuum tubes that had burned out from the radio or television. We’d test them in a big testing rack, and buy new ones as needed. What I didn’t realize is that those tubes were capable of taking power from the socket and using it to heat up the filament in the tube to amplify the radio signal so we could listen to the radio through the speaker.
De Forest Audion tube from 1908, the first triode. The flat plate is visible at top, with the zigzag wire grid under it. The filament was originally under the grid but has burned out. / San Francisco Airport Museum
Bell Lab scientists invented the transistor in 1947, by mounting two gold contacts onto a germanium plate (the semiconductor material that does the amplifying). This was the beginning of solid-state electronics, which would replace the fragile evacuated glass tube. Transistors made it possible to amplify signals more cheaply and easily. They were more durable and used less electricity. During the 1950s, transistors got smaller and lasted longer than any vacuum tube.
Replica of the first transistor / Wikimedia Commons
This was important to computers. Vacuum tubes were used as switches in computers during the 1940s. The decrypting computer Colossus, invented by electrical engineer Tommy Flowers, had 1,600 tubes and was instrumental in breaking the German code during the war. In 1946, ENIAC, developed for the US Army and the first general-purpose electronic computer (though it had to be laboriously rewired for each new task), had 17,000 tubes. Tubes would fail every couple of days, and have to be located and replaced (though I’ll bet the Army didn’t have to bicycle down to the drug store to test them!). Though designed to work on artillery tables for ballistics, it was so powerful it did the calculations for the development of the hydrogen bomb. By 1950, Alan Turing was exploring the comparisons between the computer and the human brain, starting up the field of Artificial Intelligence.
Transistors increased the reliability of computers because they replaced the tubes. The IBM 1401 was one of the first widely adopted transistorized computers:
Teenagers in the 1950s loved transistors too – they made possible the portable radio. Bringing music to the beach no longer meant you had to play guitar or lug along a phonograph.
Medicine and Genetics
March Of Dimes poster girl 1955, by Mary Kosloski / Wikimedia Commons
Other advances were based on previous knowledge. The pharmaceutical company Squibb, for example, created single-dose morphine tubes that could be injected by medics on the battlefield, knocking out the patient till they got him to surgery. To replace supplies of quinine cut off by Japanese occupation in the Pacific, Atabrine pills were invented to prevent malaria, but it was hard to get soldiers to take them: the pills were very bitter and turned the skin yellow. Ultrasound was invented after the war, in 1956, and it was based on military sonar.
Alexander Fleming discovered what is now known as penicillin, by accident, in 1928 / Wikimedia Commons
Immunizations also got a “shot in the arm” (I had to do that) during this era, although development of vaccines for diphtheria and other diseases had been steady since the early 20th century. Some vaccines, such as the one for tuberculosis in 1927, didn’t prove effective. Others, such as the flu vaccine given to troops in 1942 to prevent the horrors of the post-World War I influenza epidemic of 1918, may have been more helpful. So was the typhoid vaccine of the 1950s, and the polio vaccine, which got off to a rough start with a contamination of the shots, but was back on track by 1955. Polio had crippled children, led people to be hospitalized for years in “iron lungs” that breathed for them, and had meant that President Roosevelt walked with crutches.
The government sponsored polio vaccines, and from that time to now is involved in vaccination as a public health issue. Vaccine research moved from killer and disabling diseases, to childhood diseases that were survivable and common: measles, mumps, rubella. The ever-increasing list of recommended vaccines, and the enforced nature of their administration, has caused some to turn away from vaccination. Some of the survivable diseases help natural immunity. Anti-vaccine groups who talk about side effects and problems are in the same category as people who objected to electricity – they are not Luddites but are rather resisting the incursion into their private lives of technologies they do not approve.
DNA photograph / Wikimedia Commons
The science of genetics engages similar controversies, particularly as it has developed into genetic technologies. After the war, chemist Rosalind Franklin and others were working to discover the structure of DNA (deoxyribonucleic acid), the large molecule that contains genetic code for an organism. Having learned x-ray techniques doing research in France, Franklin was hired to set up the x-ray crystallography unit at Kings College in London. Although hired to work alongside Maurice Wilkins, who was using x-ray crystallography to examine DNA, she was treated like a lab assistant. She was able to take two clear photographs of DNA (right), and presented them at a talk attended by American biologist James Watson and physicist Francis Crick, who had also been working on discovering the structure of DNA. Franklin hypothesized that the shape of DNA was a helix. Between 1951 and 1953, Watson and Crick realized the shape was not only a helix but a double helix. They and Wilkins got the Nobel Prize in 1962. Not Franklin.
Understanding the structure of DNA, and theorizing that it carried the genetic code, was a major step forward in biochemistry, and it would create the modern fields of molecular biology and genetics.
You’ll notice there’s a lot more science here. Historians James E. McClellan III and Harold Dorn posit that the development of the atomic bomb marks a turning point in the relationship between science and technology. It revealed the “practical potential of cutting-edge science” and established a relationship between government-funded research and the development of technology. They claim it is at this point that we begin to think of technology, not as tinkering or the creation of objects that do things, but as applied science. I think that may indeed apply to the United States (President Eisenhower’s warning about a “military-industrial complex” controlling the future is apt), but I’m not so sure about Europe. Besides, in both places there are independent companies (like Bell Labs here) working on technologies with people who have degrees in various fields other than science. And I’m always suspicious of any theory insisting that a huge change has occurred. In my experience, almost all changes are incremental. Perhaps the extraordinary ethical and moral implications of the atomic bomb set it apart, and so it is assumed that its development is a “revolution”. Maybe.
The Modern Age
Humans, as we know, are intimately connected to their technologies. In 1967, John Culkin summed up the work of media critic Marshall McLuhan as “we shape our tools, and thereafter our tools shape us”. That means that I too am shaped by the technologies I use, and those I’ve been exposed to, which will make this lecture a little more personal. Unlike the fulling mill, these technologies are those of my life! Living with some of these, I still stand in wonder.
Television is one of the most controversial technologies of modern times, as much for its programming as its technology.
TV Comes to Alexandra Palace (1938)
Although the technology itself had been experimented with in various forms during the 1920s, the birthplace of electronic broadcast television was Alexandra Palace in the United Kingdom. In 1936, the British Broadcasting Corporation made the first regular broadcasts from there.
In the U.S., the lead company from the 1930s was the Radio Corporation of America (RCA). In 1929 they had dominated the phonograph industry. They ran commercial radio stations during the 1930s, and created a turntable for flat records that could be plugged into a radio. At the 1939 New York World’s Fair, RCA demonstrated the television.
Electrical engineer A. A. Campbell Swinton had envisioned a television that used a cathode ray tube and an electron gun in 1911, but no one had produced it. Californian Philo T. Farnsworth would get the credit, transmitting 60 lines electronically. RCA’s engineers used a lot of his work, so patent lawsuits followed, with Farnsworth being awarded the huge sum of $1 million. The 1939 television that RCA demonstrated at the World’s Fair was based on Farnsworth’s technology.
The war interrupted television development, but it surged ahead in the late 1940s and 1950s. The most popular early television programs in the United States were news broadcasts and children’s shows.
The TV, inside the box / Wikimedia Commons
Color television had been a concept since the beginning, and the first color TV broadcast actually dates back to 1928, based on the designs of John Logie Baird, a Scottish engineer. Like the black and white versions, these early televisions were mechanical rather than electronic, relying on turning motors and discs (right). The maximum resolution was about 240 lines, a fairly fuzzy image.
As the technology of color TV developed, the story changes to one of marketing, at least in the United States. The Columbia Broadcasting System had created a color broadcasting system, but it was not compatible with the black-and-white sets people had in the 1950s and early 1960s. At first the Federal Communications Commission approved CBS’s system as the national standard because it produced a better picture, but then RCA started flooding the market with “compatible” televisions. These were cheaper black-and-white sets that could receive color broadcasts, even though they showed them in black-and-white. CBS’s system would have required a color television to see any show broadcast in color. RCA’s marketing effort convinced consumers that they wanted to buy black-and-white TVs to see color shows until color TVs became less expensive, so in 1953 the FCC granted RCA permission to create a color compatible television system, which meant the end of CBS’s system.
1954 Rose Parade in Pasadena, California / Wikimedia Commons
This is not only an example of a lost technology, but of a superior technology being run out by an inferior technology. The CBS system had extraordinary color fidelity, but the vagaries of consumer demand caused it to meet its end. By the time of color broadcasts, half of the 10 million TV sets in America were RCAs, which pushed the technology in that direction. On January 1, 1954, the National Broadcasting Company (an arm of RCA) showed the Rose Parade from Pasadena, in color.
Birth control, or contraceptives, have been used since very ancient times. The ancient Egyptians soaked sea sponges in lactic acid from acacia trees, creating a highly effective spermicidal sponge. Knowledge of reproductive technologies, like knowledge of any other kind, comes and goes in societies – sometimes it is more evident in the culture (such as during the 1920s) and other times it is suppressed by social and political mores, as during the Victorian era.
Most birth control was either behavioral (abstinence, withdrawal before ejaculation, douching) or based on a barrier (like the sponge, or the cervical caps and pessaries available at the turn of the 20th century). Chemical contraceptives had been tried throughout history, but herbs (such as blue cohosh and pennyroyal) tended to be more effective as abortifacients than as preventatives.
Interestingly, all of these technologies were for the woman’s use. Only the condom, which has an ancient history, was for use by men. It had become a commercial product in the 18th century, when it was made of sheepskin. Latex (rubber suspended in water) was first used for condoms in 1920. Since the 1850s, rubber condoms had been created using penis-shaped molds, wrapping strips of rubber around them, then curing them. They had a shelf life of just a few months. Latex rubber condoms had a longer shelf life and were easier to make; they were also cheaper.
But back to chemical contraceptives. “The Pill” has often been claimed as causing the liberation of Western women from the burdens of unwanted pregnancy. This view is, I’m afraid, based on lost knowledge that is the result of the Pill itself. The story is American.
In 1951, 72-year-old Margaret Sanger, who had campaigned (often illegally) for birth control knowledge for American women, had been imploring the scientific world to invent a birth control pill for women. Independent scientist Gregory Pincus met her at a party, and told her his own research into progesterone made such a pill possible, but he would need money. Sanger acquired a small grant for him from Planned Parenthood (which she had founded and which was by then running 200 clinics). Pincus contacted the Searle pharmaceutical company for support but they refused, so lack of funding stalled the invention. At the same time, a then-unknown chemist, Carl Djerassi, working in Mexico, had produced a synthetic progesterone pill. Dr John Rock, an infertility expert, had also been using progesterone on his patients, because when treatment was stopped they became more fertile. Ultimately Rock was able to provide the test patients Pincus needed to prove his pill worked.
Funding for the development of the Pill was finally provided by a single wealthy person: Katharine McCormick. She had a degree in biology from the Massachusetts Institute of Technology and was married to the heir of the McCormick Harvester fortune, who developed a form of schizophrenia that was then understood to be hereditary. She decided never to have children, and interested herself in the birth control cause as well as women’s right to vote, meeting Margaret Sanger in 1917. When her husband died in 1947, she inherited control of $15 million. She was 75 years old, and despite her university education she did not want to waste money on university research. She gave Pincus $40,000 to start, and went on to fund the entire development of the Pill.
Birth control pill – Enovid / Wikimedia Commons
The Pill first came on the market, by prescription, in 1960. By 1964, 25% of all couples in America were using the Pill to control pregnancies, and Searle pharmaceuticals was making a fortune. In addition to drawing critics upset about the interference with “natural” processes, the early Pill subjected women to extremely high levels of hormones, out of fear that a lower dose might fail. And well into the 1970s, American and western European women had to pretend they were married to get a prescription. But at the time, the other methods available were seen as cumbersome. I don’t think they were a problem in themselves – rather, the difficulty was caused by the lack of education about women’s bodies. “Nice girls” didn’t touch themselves “down there” or learn how their body worked, so putting in a diaphragm or a Dutch cap was difficult and messy. A diaphragm (unlike the cap, which required better body knowledge) had to be fitted by a gynecologist, and fittings were expensive. The Pill was easy – take it once a day and exercise more to help with hormonal weight gain.
For this reason, it was seen as encouraging the Sexual Revolution of the 1970s, when women supposedly chose their bedmates at will and didn’t have to worry about getting pregnant. Although the reality wasn’t that simple, the Pill did put women more fully in control of their reproduction.
What was lost, of course, was the ability of women to understand their own bodies. Although women’s activists in Europe and America tried to educate women on how their bodies worked, in reproductive matters the Pill made such knowledge unnecessary. There seemed to be no reason to reclaim knowledge of one’s own natural cycles or processes.
Scene from Star Trek with William Shatner (Captain Kirk) and Leonard Nimoy (Spock) / Wikimedia Commons
In 1957, the Soviet Union launched Sputnik, the first man-made satellite to orbit the earth. Its goal was to measure atmospheric conditions, including solar winds and magnetic fields. It was able to send radio signals back to earth, and it set off the “Space Race” between the United States and the Soviet Union. The scientific focus (including expanding science education in schools) would lead not only to more satellites, but to intercontinental ballistic nuclear missiles and to studies in creating space armament systems.
Although scientific in its development and influence on education, Sputnik also led to a lot of people tinkering with radios to pick up its signal and become part of the space age experience. Space began to influence science fiction in stories about humanity, such as the American TV series Star Trek, the story of a space-voyaging crew encountering other species. Star Trek premiered in 1966, before the U.S. space program had landed anyone on the moon. That was achieved in 1969, just barely in time to fulfill President Kennedy’s pledge that it would happen before the end of the decade. Fans of the series often discussed its science and technology.
Margaret Hamilton during her time as lead Apollo flight software designer. / NASA
The focus on the combination of science with technology makes it seem as though no one was focused on pure technology. The success of do-it-yourself kits and stores like Radio Shack in the U.S. belies this. Electronics could be tinkered with — I distinctly recall my dad removing the back of the TV, clipping the audio wires, and twisting in a two-wire setup with a switch, so that he could turn off the audio on TV commercials from across the room. Perhaps this is why I’ve always been dissatisfied with the state of technologies. I know they can all be improved!
And in which category is programming – science or technology? In 1960, mathematician Margaret Hamilton (right) was working at the Massachusetts Institute of Technology, coding on punch cards for the space program. Integral to the Apollo program, Hamilton developed code that made it possible for the Apollo missions to work, and for the men who went to the moon to return to Earth. Mathematics was the foundation, but the result was eminently practical.
In 1981, NASA launched the first reusable space vehicle, the Space Shuttle. It never occurred to me that I’d one day have to teach about the Shuttle in the past tense – it’s another technology, like the Concorde (below), that is now lost. I never got to see it take off in person, though I did see it land, and I watched the launches on TV. The sense of technological triumph is still amazing, even on video:
International Space Station / NASA
With the fall of Soviet dominance over Eastern Europe in 1989, the need to “beat the Russians” became less of a motivation for space technology. Although science and exploration had been touted as the goals of national space programs around the globe, the fact was that most were motivated by, inspired by, and funded because of, military aspirations. One notable exception was the International Space Station, whose first module was launched in 1998. Although it was not the first space station, it has lasted the longest and hosted the most astronauts. It’s a science lab in the sky.
Aircraft engine flow / Wikimedia Commons
But what about good, old (well, not that old) suborbital airplanes? After the Wright Brothers, flight had developed incrementally, with changes in propulsion constituting the main transitions. The evolution of propeller designs enabled more efficient use of power by the 1940s, and within a decade turboprops were being challenged by the new jet engines, whose development had been pushed by military needs during World War II. Unfortunately, metal fatigue plagued the first jet airplanes, pioneered by Geoffrey de Havilland in England, and there were crashes. Disasters tend to stall technological advancement – death and public discussion result in a questioning of the wisdom of new technologies (that’s a theme). In this case, it was mostly a matter of figuring out what changes an airplane needed to make jet engines viable. Jet engines were simpler, yet more expensive, than previous engines. This meant that the plane itself needed to be bigger to carry enough passengers to make money – justifying the commercial cost of jet airplanes pushed the technology, making this an unusual case of civilian use becoming more significant than military use. Planes also needed a stronger fuselage to withstand the pressurization required in the cabin. They started getting huge and heavy.
Boosted by the energy crisis of the 1970s, jet technology advanced in efficiency thanks to carbon fiber technology. When fuel became cheap again, the efficient designs couldn’t be marketed, but they reappeared in modern engines.
Resistance to Modern Technologies
We have seen some resistance to every change in technology since the beginning of the class, and with good reason. Technologies are not morally neutral, though we would very much like them to be. Every new technology carries with it the assumptions of the time it was built, the people who built it, and the social environment in which it was created. The social impact of television was immediately obvious, but so were the physical impacts of many new products. Jumbo jet airliners cause noise and pollution (a word not in common use until 1955). And the new chemicals of the post-World War II era were also polluting and dangerous. DDT (dichlorodiphenyltrichloroethane) was used as an insecticide in World War II to protect troops in the South Pacific from malaria, and it came into common agricultural use shortly after. Biologist Rachel Carson studied the effects of DDT and published them in her book Silent Spring in 1962. Her documentation of the persistence, cancer-causing properties, and bird-killing effects of the pesticide led to its being banned in the United States in 1972.
Although the word “environmentalism” was first coined in the 1920s, and there had been “back to nature” movements since the early 19th century, it took on a new urgency in the post-war era. Films like Silent Running (1972), featuring a spaceship carrying Earth’s last plants and animals away from a polluted planet, and The China Syndrome (1979), based on the idea of a nuclear power plant melting down because the power company had cheated on inspections, were popular. Political action was important – the Green Parties of the various nations of Europe formed a coalition in 1979, merging into the European Greens in the early 21st century.
It influenced education, too. I took an Ecology class in high school instead of Biology.
One of the interesting lost technologies of this era was the supersonic airplane, the Concorde. It was introduced as a marvel in 1976, and was retired 27 years later. It went twice as fast as conventional airplanes – it took only 3.5 hours from New York to Paris. It had a maximum cruising altitude of 60,000 feet, almost twice as high as regular planes. The plane had a number of technological problems related to its speed and altitude, including overheating and exposure to radiation, but it was a fatal crash in 2000, followed by the decline in air traffic following the September 11, 2001 attacks, that caused its demise. There simply weren’t enough paying customers.