

Pseudoscientific claims are often untestable or tested only in non-rigorous, anecdotal ways.

By Matthew A. McIntosh
Public Historian
Brewminate
Introduction: Defining Pseudoscience
Overview
In a world flooded with data and permeated by claims of knowledge, distinguishing between science and pseudoscience has become more vital—and more difficult—than ever before. The term “pseudoscience” conjures images of astrologers, conspiracy theorists, and snake oil salesmen, but its history is far older and more complex. From ancient augurs to modern influencers peddling dubious health supplements, pseudoscience has shadowed legitimate inquiry, often borrowing the language and symbols of science while sidestepping its rigor and methods.
Etymology
The term pseudoscience is a linguistic hybrid, combining the Greek prefix pseudo- (ψευδής), meaning “false” or “deceptive,” with the Latin word scientia, meaning “knowledge.” It first appeared in print in the late 18th century, though the phenomenon it describes long predates the word itself. The earliest known usage dates to 1796, when the historian James Pettit Andrews dismissed alchemy as a “fantastical pseudo-science”; in 1843 the French physiologist François Magendie applied the term to phrenology, criticizing speculative systems that lacked empirical grounding. The term gained wider circulation in the mid-19th century, particularly as science matured into more formal institutional structures and sought to separate itself from mystical or metaphysical systems. Its hybrid linguistic roots reflect an epistemological conflict—between knowledge grounded in empirical methods and frameworks masquerading as such to claim undeserved legitimacy.
Pseudoscience is often defined not by the presence of false claims per se—science itself is constantly revised and sometimes wrong—but by the absence of methodological rigor and falsifiability. The philosopher Karl Popper famously proposed falsifiability as the key demarcation criterion between science and pseudoscience. According to Popper, scientific theories must be structured in such a way that they can be proven wrong through observation or experiment; pseudoscientific claims, by contrast, are often constructed to be immune to disproof. For example, the claim that all outcomes support a theory—regardless of whether they align with predicted results—signals a retreat from falsifiability into the realm of dogma. Thus, while the term pseudoscience carries a pejorative connotation, it also serves as a philosophical tool for classifying forms of inquiry according to their relationship with empirical evidence and self-correction.
Conceptually, pseudoscience thrives in a gray zone between knowledge and belief. It often borrows the visual language, terminology, and rhetorical strategies of science to gain cultural legitimacy. Charts, formulas, lab coats, and citations may all be used to lend an air of authority to ideas that have not undergone proper peer review or experimental validation. This mimicry is part of what makes pseudoscience so culturally resilient. It exploits the epistemic trust that modern societies place in scientific institutions while circumventing the actual disciplinary norms that undergird those institutions. From astrology’s use of astronomical terms to phrenology’s cranial measurements, pseudoscientific systems often create an illusion of rigor that masks their speculative or ideologically driven foundations.
Another defining feature of pseudoscience is its resistance to change. Where science evolves through the accumulation of evidence, and ideas can be abandoned or revised in light of new data, pseudoscientific systems are typically inflexible. They often rely on confirmation bias, anecdotal evidence, and appeals to authority rather than reproducible results. This ideological fixity serves a psychological function: it provides adherents with certainty, coherence, and often a sense of purpose that science’s provisional nature may lack. In this way, pseudoscience operates not merely as faulty reasoning but as a cultural and emotional phenomenon—a belief system rather than a method of discovery. It offers a closed worldview, frequently imbued with moral or metaphysical implications that extend beyond empirical claims.
Finally, the label pseudoscience itself has become contested, both philosophically and politically. Critics have argued that it can be wielded too liberally, used to silence unconventional theories before they are fully explored or understood. Others point out that historically, some now-legitimate scientific ideas (such as continental drift, the forerunner of plate tectonics, or heliocentrism) were once dismissed as pseudoscientific. Yet while these critiques caution against epistemic authoritarianism, they do not invalidate the concept of pseudoscience as a useful heuristic. The challenge lies in applying the term with intellectual integrity, recognizing that the history of science includes both the suppression of innovative ideas and the proliferation of charlatanism. As such, the concept of pseudoscience remains central to understanding the boundaries of credible inquiry in both historical and contemporary contexts.
One of the most enduring and debated problems in the philosophy of science is the demarcation problem—how to distinguish genuine scientific inquiry from pseudoscientific imitation. While there is no universally accepted checklist, several criteria have been proposed by philosophers and historians to help mark this boundary. Chief among them is the presence of a systematic method, especially the scientific method, which emphasizes observation, hypothesis formation, experimentation, and replication. In science, claims are tested against reality through controlled observation or experimentation, and they must withstand scrutiny from a community of peers. Pseudoscience, by contrast, tends to lack this systematic approach. Its claims are often untestable or tested only in non-rigorous, anecdotal ways that are not subject to replication or independent verification. Thus, method and the attitude toward evidence are fundamental to the distinction.
Falsifiability, introduced by Karl Popper, remains one of the most cited criteria in demarcation discourse. A scientific theory must be structured so that it could, in principle, be proven false. This does not mean the theory is necessarily false, but rather that it exposes itself to the risk of refutation by empirical data. Pseudoscientific claims often avoid this vulnerability by being framed in ways that are vague, unfalsifiable, or self-confirming. For instance, many alternative medical practices claim to work through “energy fields” or “vibrations” that cannot be measured, observed, or defined in precise terms, making them immune to disproof. Such claims may persist unchanged even in the face of contradictory evidence, which starkly contrasts with the self-correcting nature of science, where a single robust counterexample can prompt a paradigm shift.
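The asymmetry Popper describes can be made concrete with a small illustration (a toy sketch added here for clarity, not part of the historical argument): a falsifiable claim names in advance the observations that would refute it, while an unfalsifiable claim is framed so that any observation whatsoever counts as support.

```python
# Toy illustration of Popper's demarcation criterion.

def falsifiable_claim(observations):
    """Claim: 'every measured value is below 10.0' -- a single counterexample refutes it."""
    for value in observations:
        if value >= 10.0:
            return "refuted"          # the claim risked this outcome and lost
    return "survived testing so far"  # provisional, never finally proven

def unfalsifiable_claim(observations):
    """Claim framed so that any data at all count in its favour."""
    return "supported"                # no conceivable observation can refute it

data = [3.2, 7.9, 12.4]
print("falsifiable:  ", falsifiable_claim(data))    # -> refuted
print("unfalsifiable:", unfalsifiable_claim(data))  # -> supported
```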
Another distinguishing criterion is peer review and openness to critique. Scientific research is embedded in a community of practice that includes mechanisms for evaluating, correcting, and building upon findings. Peer-reviewed journals, academic conferences, and institutional protocols serve to filter out flawed research and ensure transparency in methods and data. Pseudoscience tends to operate outside or on the fringes of this ecosystem, often preferring self-published materials, mass media platforms, or populist appeal over institutional engagement. When faced with criticism, pseudoscientific proponents frequently respond with ad hominem attacks or appeals to persecution rather than addressing the substantive flaws in their arguments. This resistance to critical engagement reveals an epistemic insulation that is antithetical to scientific inquiry.
Empirical adequacy and predictive power are also essential criteria. Scientific theories are expected to make predictions that can be verified under specific conditions and to account for a broad range of phenomena without excessive reliance on ad hoc hypotheses. A hallmark of pseudoscience is its tendency to introduce new, often untestable assumptions whenever its predictions fail, rather than revising the core theory. For example, proponents of astrology might explain away failed horoscopes by invoking unseen influences or improperly timed readings, rather than acknowledging flaws in the underlying framework. The use of ad hoc rationalizations prevents theories from evolving meaningfully and often results in the entrenchment of belief rather than the expansion of knowledge.
Lastly, consistency with existing knowledge and conceptual coherence are important indicators. While revolutionary scientific ideas may sometimes challenge existing paradigms, they generally do so in ways that are logically consistent and eventually integrated with broader scientific understanding. Pseudoscience often lacks this coherence, instead creating compartmentalized belief systems that contradict established physical laws or biological principles. Furthermore, scientific progress is cumulative; it builds on prior work and deepens our understanding of the world. Pseudoscience, on the other hand, tends to be stagnant or cyclical, recycling ideas that have already been discredited or disproven. In this way, science is characterized by dynamism and intellectual humility, while pseudoscience is marked by rigidity and an often overconfident certainty in unproven claims.
Pseudoscience matters because it has profound implications not only for individual belief systems but also for public policy, education, health, and democracy. While it may seem benign when limited to personal practices—such as reading horoscopes or wearing crystals—pseudoscientific beliefs can shape decisions with far-reaching consequences. In the realm of health, for example, the promotion of pseudoscientific treatments over evidence-based medicine has led to vaccine hesitancy, the rejection of life-saving treatments, and the embrace of dangerous “cures” for serious illnesses. During global crises such as the COVID-19 pandemic, pseudoscientific misinformation undermined public health efforts, fueled conspiracy theories, and resulted in avoidable deaths. In the political sphere, pseudoscientific ideologies like eugenics, racial pseudoscience, and climate change denial have influenced policies that perpetuate inequality and environmental degradation. Thus, pseudoscience is not just a matter of intellectual error—it is often an active force that can misdirect resources, foster social divisions, and erode trust in legitimate scientific institutions.
Moreover, pseudoscience challenges the foundations of critical thinking and scientific literacy that are essential to functioning societies. In an age where information is abundant but not always credible, the ability to distinguish between reliable and unreliable knowledge is vital. Pseudoscientific ideas often flourish in environments where scientific literacy is low and emotional appeal outweighs rational inquiry. They offer simple answers to complex questions, a sense of certainty in uncertain times, and community for those who feel marginalized or alienated from academic or governmental authority. While these functions help explain pseudoscience’s psychological appeal, they also make it resistant to correction and self-reflection. This epistemic closure—where beliefs are insulated from evidence and immune to challenge—threatens the ideals of open inquiry and democratic deliberation. In this way, combating pseudoscience is not simply a scientific or educational challenge, but a cultural and civic one, requiring sustained engagement across multiple domains of public life.
Pseudoscience in the Ancient World
Reading the Stars in Ancient Babylon

Ancient Babylonian astrology represents one of the earliest known systems in which celestial observation was systematically linked to terrestrial events. Originating in Mesopotamia, particularly in the region corresponding to modern-day Iraq, Babylonian astrology was deeply embedded in religious, political, and cosmological frameworks. As early as the second millennium BCE, Babylonian scholars—often priest-astronomers known as ṭupšarru—began to document the regular motions of celestial bodies and interpret their significance. Their efforts gave rise to a vast and sophisticated body of omen literature, notably the Enuma Anu Enlil, a compilation of around 7,000 omens concerning planetary movements, eclipses, lunar phases, and meteorological phenomena. This corpus formed the foundation for divinatory practices that would persist for centuries and later influence both Greco-Roman and Islamic astrology.
The Babylonian approach to astrology was fundamentally omenic rather than personal. Unlike later Greek and modern astrological traditions that focused on individual horoscopes, Babylonian astrology was primarily concerned with the fate of the king, the wellbeing of the state, and the cyclical patterns of nature. Celestial phenomena were interpreted as messages from the gods, particularly Anu (god of the heavens), Enlil (god of the air and storms), and Ea (god of wisdom), who were believed to communicate their will through the skies. When Jupiter was in a certain position, or when a lunar eclipse occurred in a specific month, it was seen not as a natural event, but as a sign of divine favor or wrath. These interpretations were not arbitrary; they were recorded over centuries and correlated with political and natural events, creating a proto-empirical body of predictive material.
Astrological omens in Babylonian culture were part of a broader divinatory tradition that included hepatoscopy (reading sheep livers), dream interpretation, and the reading of terrestrial omens like the behavior of animals or abnormalities in childbirth. Astrology, however, held a particularly prestigious place because of its association with the heavens, which were seen as a divine realm of order. The Babylonians were meticulous observers of the sky, and over time developed methods to predict eclipses and the periodic appearance of planets. Their mathematical astronomy became increasingly advanced by the first millennium BCE, culminating in the ability to forecast celestial events with remarkable precision using arithmetic schemes. Yet, despite this scientific sophistication, the interpretive framework remained theological and symbolic, not empirical in the modern sense.
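To give a rough sense of what such arithmetic schemes looked like, the sketch below implements a linear “zigzag” function of the kind historians of astronomy attribute to late Babylonian lunar theory: a quantity that increases by a fixed amount each month, reflects off a maximum, decreases to a minimum, and repeats. The specific numbers are illustrative placeholders, not values taken from the cuneiform sources.

```python
# Illustrative sketch of a Babylonian-style linear zigzag scheme.
# The parameters below are hypothetical, chosen only to show the sawtooth shape.

def zigzag(initial, step, minimum, maximum, months):
    value, direction = initial, +1
    series = []
    for _ in range(months):
        series.append(round(value, 2))
        value += direction * step
        if value > maximum:              # reflect off the upper bound
            value = 2 * maximum - value
            direction = -1
        elif value < minimum:            # reflect off the lower bound
            value = 2 * minimum - value
            direction = +1
    return series

print(zigzag(initial=28.0, step=0.75, minimum=26.0, maximum=30.0, months=12))
```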
One of the defining characteristics of Babylonian astrology was its reliance on precedent. The idea was that if a certain celestial configuration had previously occurred in conjunction with a specific event—such as a war, flood, or dynastic upheaval—then the reoccurrence of that configuration would herald a similar outcome. This “if A, then B” logic pervaded the omen texts and reveals a core feature of pseudoscientific thinking: the conflation of correlation with causation. Babylonian astrologers did not test hypotheses or subject their interpretations to falsification; rather, they compiled increasingly elaborate records that supported divinatory patterns retrospectively. This reliance on analogy over systematic causality contributed to the enduring appeal of the system, but also limited its potential for scientific evolution.
Despite its pseudoscientific interpretive structure, Babylonian astrology must be understood within the context of its time. The distinction between science, religion, and magic was not present in the ancient Near East as it is today. Priest-scholars were both astronomers and diviners, theologians and mathematicians. Their observations contributed to real advances in astronomy, timekeeping, and calendar reform. The accurate tracking of the lunar cycle, the solar year, and planetary periods was essential for agricultural planning, religious festivals, and royal omens. In this sense, astrology acted as a bridge between practical knowledge and metaphysical speculation—a blend of what we would now call proto-science and spiritual belief. This complexity resists simplistic dismissal and requires historical sensitivity.
By the 5th century BCE, Babylonian astrologers began constructing birth charts based on the positions of celestial bodies at the time of an individual’s birth. This innovation marked the beginning of natal astrology, a practice that would be adopted and further elaborated by the Greeks and Romans. These early horoscopes were reserved for elite individuals—often royalty—and still largely interpreted in political or dynastic terms. However, the conceptual leap from interpreting omens for the state to interpreting them for individuals reflected a significant shift in the role astrology played in society. It foreshadowed the personal horoscopic systems that would dominate later astrological traditions and expand astrology’s reach beyond the court into the private lives of ordinary people.
The intellectual prestige of Babylonian astrology also ensured its transmission and adaptation by neighboring cultures. During the Achaemenid Persian period (6th–4th centuries BCE), Babylonian astrologers served in the imperial bureaucracy and helped spread Mesopotamian astrological techniques across the empire. With the conquests of Alexander the Great, Greek scholars encountered this Mesopotamian knowledge and began synthesizing it with their own cosmological models. The result was the Hellenistic astrological tradition, centered in Alexandria, which combined Babylonian celestial omens, Egyptian decanal systems, and Greek philosophical concepts such as the four elements and planetary temperaments. This cross-cultural synthesis gave astrology a new intellectual framework and helped sustain its influence well into the medieval period.
Babylonian astrology also played a significant role in legitimizing political power. Kings routinely consulted astrologers before military campaigns or major decisions. Astrological omens could be used to justify or postpone wars, anoint successors, or explain the misfortunes of a ruler. In times of negative omens, Babylonian kings would sometimes install a “substitute king” (a šar pūhi)—a temporary figurehead meant to absorb divine wrath—until the danger had passed. This ritual highlights how astrology was not merely predictive but performative: it shaped reality through the enactment of belief. Such practices underscore how deeply embedded astrological systems were in the political and ritual life of Mesopotamia.
From a modern perspective, Babylonian astrology is typically categorized as a form of pseudoscience because it lacks testability, falsifiability, and causal explanation. However, this classification should not obscure its historical significance or intellectual complexity. While the interpretive content of astrology was symbolic and theological, its development required rigorous observational astronomy and a deep commitment to long-term data collection. The Babylonians were among the first civilizations to conceptualize time and motion in mathematically structured ways, and their records laid essential groundwork for later scientific astronomy. This paradox—advanced observational technique embedded in a divinatory framework—is emblematic of how pseudoscientific systems often co-exist with genuine empirical practices in the early history of knowledge.
Babylonian astrology illustrates the entangled roots of science and pseudoscience. It reveals how human beings, long before the modern scientific method, sought to find order and meaning in the universe through systematic observation of nature. Their efforts were constrained by theological assumptions and interpretive traditions that today would be considered non-scientific, yet they nonetheless contributed significantly to the intellectual infrastructure of ancient and medieval civilizations. By studying Babylonian astrology not merely as an error of belief but as a cultural system embedded in its time, we gain insight into the complex motivations—political, psychological, spiritual—that drive human beings to seek patterns in the stars, and to find in those patterns a reflection of their own fate.
Egyptian Medicine and Magic

Ancient Egyptian medicine, while rooted in practical knowledge of anatomy and disease, was inextricably linked to religious and magical beliefs. The ancient Egyptians believed that health was the result of harmony between the body and the gods, and that illness was caused by the interference of malevolent spiritual forces. This worldview shaped their approach to medicine, where physical treatments and magical rituals often went hand in hand. Egyptian medicine was deeply intertwined with religious practices, and healing was often seen as the work of gods, with physicians considered intermediaries who invoked divine aid to cure illness. This dual nature of healing—employing both practical remedies and magical interventions—made Egyptian medicine unique in the ancient world, where magic was regarded as a fundamental force influencing all aspects of existence.
One of the central figures in Egyptian medicine was Imhotep, the legendary vizier of the Third Dynasty Pharaoh Djoser, who became deified after his death. Imhotep was revered as the god of healing and medicine, and his legacy is a testament to the syncretism between medicine and magic in ancient Egypt. He was often depicted as a wise, rational healer who combined empirical medical knowledge with divine influence. His contributions to the development of medical texts and the practice of medicine in Egypt were substantial, and his divine status as the patron of physicians underscored the belief that the act of healing required divine sanction. In fact, many of the medical practitioners in Egypt were also priests, and their role was as much about mediating between the human and divine realms as it was about diagnosing and treating physical ailments.
Egyptian medical knowledge was codified in a series of texts, including the Ebers Papyrus (c. 1550 BCE) and the Edwin Smith Papyrus (c. 1600 BCE), both of which contain detailed accounts of diseases, treatments, and surgical techniques. The Ebers Papyrus is particularly notable for its extensive list of remedies for various ailments, many of which were rooted in herbal medicine. This includes treatments for skin conditions, gastrointestinal problems, and even heart disease. The Edwin Smith Papyrus is a surgical text that describes methods for treating wounds, fractures, and dislocations, indicating that the Egyptians possessed a relatively advanced understanding of human anatomy and trauma care. Despite the prevalence of empirical remedies, however, these texts also contain spells and incantations to aid in healing, illustrating the seamless integration of medical and magical practices.
Magic in ancient Egyptian medicine was not only used to treat illness but was also invoked to ward off evil spirits, protect the body, and ensure fertility. The Egyptians believed that illness could be caused by spiritual imbalance, curses, or angry gods, and thus healing required more than just physical remedies—it demanded a spiritual cure. One common magical practice involved the use of amulets, which were inscribed with protective spells and worn by patients. These amulets were believed to protect the wearer from disease and evil influences. The practice of placing the body in a protective cocoon of magic was evident in the embalming process, where the dead were surrounded by spells meant to ensure their safe passage into the afterlife and to protect their bodies from decay.
In addition to amulets, spells were an essential component of Egyptian medical practice. The Book of the Dead, a funerary text, included numerous magical incantations designed to protect the deceased from harmful spirits and guarantee safe passage to the afterlife. These spells were often recited by priests or healers who had specialized knowledge of the sacred texts. The connection between words, magic, and healing was deeply ingrained in Egyptian thought; the spoken word was seen as a powerful force capable of influencing the gods and nature. Thus, healing in ancient Egypt often involved not only physical treatments but also ritual recitations to invoke divine protection and intervention.
The Egyptian approach to medicine was also influenced by their understanding of the body and the forces of nature. The concept of the ka, a life force or spiritual essence that existed in every person, was central to their medical and magical worldview. Illness was often seen as an imbalance in the ka, and healing required restoring harmony between the body, spirit, and the gods. To maintain this balance, Egyptians practiced rituals that were intended to align the individual with cosmic forces, including the natural rhythms of the Nile River, the sun, and the stars. The close relationship between the physical body and the spiritual realm reflected the holistic nature of Egyptian healing, where both physical treatments and spiritual purification were required for true health.
Another aspect of Egyptian medicine was its reliance on sympathetic magic, the principle that “like affects like” (an idea later echoed in homeopathy’s “like cures like”). Egyptians believed that certain substances contained inherent magical properties, and that by consuming or applying them, one could restore balance to the body. For example, they used honey and resins, not only for their medicinal properties but also for their symbolic and magical qualities. Honey, for instance, was seen as a substance favored by the gods, and it was commonly used to treat wounds due to its antiseptic properties. Similarly, plants and minerals were often used in both a medicinal and magical context, such as the use of the mandrake root, which was believed to have magical properties that could cure infertility and promote healing.
Ancient Egyptian physicians were highly skilled in treating various conditions, from dental problems to eye diseases, and they utilized a wide array of treatments, including surgery, bandaging, and splinting. The Ebers Papyrus contains references to surgical tools, some of which resemble those used in modern medicine, indicating that Egyptians had a certain level of surgical expertise. However, the spiritual aspects of healing remained central to their practice. For instance, cataract surgery, which was performed by skilled Egyptian doctors, involved both medical intervention and incantations. The healer would recite specific spells during the procedure, believing that the gods would aid in the healing process and that the patient’s soul would be restored along with their physical health.
Despite the highly ritualized nature of Egyptian medicine, the Egyptians did make significant contributions to medical knowledge that were based on observation and empirical evidence. Their understanding of anatomy, based largely on their experience with embalming the dead, was fairly advanced for the time. They recognized the heart as the hub of a network of vessels (though not the circulation of blood in the modern sense), and there are records of them performing surgeries to drain abscesses or treat fractures. However, these advances were always interwoven with magical beliefs. A physician might diagnose a patient’s condition through observation and clinical judgment but would also recite prayers and perform rituals to ensure the patient’s recovery. This blending of practical and magical elements made Egyptian medicine unique in the ancient world, marking a fusion of science and spirituality that persisted for millennia.
Ancient Egyptian medicine was a complex amalgamation of empirical observation, religious beliefs, and magical practices. While the Egyptians made significant strides in areas like surgery, pharmacology, and anatomy, their healing practices were deeply embedded in the spiritual and magical worldview of the time. The use of incantations, amulets, and rituals to treat illness reflected the belief that health was not just a physical condition but a cosmic and spiritual balance. By combining empirical knowledge with divine intervention, Egyptian medicine offers a fascinating glimpse into how ancient cultures understood the relationship between body, spirit, and the forces of nature, leaving a legacy that influenced later medical and magical traditions.
Greek Speculation

Ancient Greek cosmologies were often speculative, blending early scientific inquiry with philosophical and mystical ideas. One of the most influential schools of thought was the Pythagorean tradition, founded by Pythagoras of Samos around the 6th century BCE. While Pythagoras is best known for his work in mathematics, particularly the Pythagorean theorem, his philosophical views extended deeply into cosmology, mysticism, and the nature of the universe. The Pythagoreans believed that the cosmos was governed by mathematical relationships, and that the physical world could be understood through the study of numbers and their inherent properties. They viewed numbers as more than just abstract symbols; for them, numbers had divine significance and were the very building blocks of reality. This mystical numerology placed them at the intersection of philosophy, science, and religion, reflecting the Greek tendency to seek out underlying principles behind the observable world.
The Pythagoreans believed that the universe itself was an ordered system, a cosmic harmony or cosmos, where everything was connected through mathematical ratios. One of their central ideas was the concept of the “music of the spheres”—the belief that the movements of the planets and celestial bodies created harmonious sounds, though these sounds were beyond human hearing. According to Pythagorean cosmology, the entire universe was structured according to geometric and mathematical principles, and the soul was also subject to these same principles. They believed that the soul could achieve purity by understanding and aligning itself with the divine order of the cosmos, often through ascetic practices, contemplation, and meditation on numbers and their mystical meanings. For the Pythagoreans, achieving harmony with the cosmos was not just a matter of intellectual understanding but of moral and spiritual purification.
The idea that numbers and geometry could represent the fundamental essence of the universe was rooted in the Pythagorean belief in the metaphysical power of numbers. The number “one” was seen as the source of all things, while other numbers had distinct symbolic meanings. The number “two,” for example, represented duality and change, while the number “three” was seen as representing harmony and balance. The number “four” was linked to stability, while the number “ten” (the sum of the first four numbers) was considered a symbol of completeness and perfection. This numerological framework was not merely a mathematical abstraction but was seen as a reflection of divine principles that structured the universe. The Pythagoreans believed that by understanding the nature of numbers and their relationships, one could unlock the secrets of the cosmos and achieve a deeper understanding of reality.
In addition to their focus on numbers and harmony, the Pythagoreans also held mystical and religious beliefs that influenced their cosmology. They believed in the transmigration of souls, or metempsychosis, the idea that souls were reincarnated into new bodies. This doctrine of reincarnation was central to their understanding of the universe and the human soul’s journey. The soul was seen as trapped in a cycle of birth, death, and rebirth, and the only way to escape this cycle was through purification. This purification process involved both intellectual and ethical practices, including the study of mathematics and philosophy, as well as adhering to a strict moral code. The Pythagoreans practiced vegetarianism, abstained from certain foods, and sought to live in harmony with the natural world, believing that these practices would help cleanse the soul and bring it closer to the divine.
Pythagorean cosmology had a profound influence on later Greek philosophers, particularly Plato. Plato’s idea of a rational and harmonious cosmos, governed by ideal forms and mathematical principles, was heavily inspired by Pythagorean thought. Plato’s Timaeus, one of his most important works on cosmology, reflects many of the ideas put forward by the Pythagoreans, including the belief in a divine order to the universe, the connection between mathematics and reality, and the idea that the soul is connected to this cosmic harmony. Moreover, the Pythagorean influence can be seen in the development of Neoplatonism, a philosophical movement that sought to reconcile mystical and rational elements of Greek thought. However, despite their emphasis on mathematics and rational order, Pythagorean cosmology also had elements of mysticism that placed it at the boundary between early philosophy and religious speculation, offering a fascinating blend of rational inquiry and mystical belief.
Roman Birds and Omens

In ancient Rome, superstition and divination were deeply embedded in daily life, influencing everything from political decisions to personal behavior. Romans believed that the world was governed by forces beyond human control, and that these forces could be understood through signs, omens, and divine intervention. The Romans were highly attuned to the natural world and interpreted various phenomena—such as weather patterns, animal behavior, and celestial events—as messages from the gods. Superstitions were widespread, often involving rituals meant to avert bad luck or ensure good fortune. These beliefs were not limited to the lower classes; even the most powerful political figures, including emperors, relied on divination and took omens seriously. The Roman state had its own official diviners, such as the augurs and haruspices, who were employed by the government to interpret signs and ensure the favor of the gods.
Divination, or the practice of seeking knowledge of the future or the will of the gods, was a key component of Roman superstition. One of the most common forms of divination was auspicy, the practice of interpreting the behavior of birds, particularly their flight patterns. Augurs, or religious officials, would observe the birds’ movements to determine whether they were auspicious signs of divine favor or warnings of impending disaster. Certain types of bird flight, the number of birds seen, and even their species could carry different meanings. This practice was vital in making decisions related to military campaigns, elections, and state rituals. If the auspices were unfavorable, actions might be postponed, or plans altered to appease the gods. Augury was so significant that it became an integral part of Roman political and military life, with officials often awaiting the approval of the gods before making major decisions.
Another important form of Roman divination was haruspicy, the examination of the entrails of sacrificed animals, particularly the liver. The haruspices, who specialized in this practice, believed that the gods revealed their will through the physical characteristics of the animal’s organs. For instance, the size, shape, and color of the liver could indicate the gods’ favor or displeasure. Haruspicy was especially important in times of crisis, such as during wars or before battles, when the outcome was uncertain and the need for divine guidance was paramount. The practice was not restricted to the Roman state; private citizens also engaged in haruspicy to ensure personal success or protection from misfortune. Haruspicy, along with augury, helped to create a society in which divine signs were continually sought to justify decisions and actions, whether in public or private life.
Haruspicy was itself one branch of the broader practice of extispicy, the examination of the internal organs (exta) of animals offered in sacrifice. The Romans believed that the gods communicated their will not just through omens but also through the very flesh of the animals they offered. This form of divination was often used in conjunction with other rituals, such as the reading of the entrails of chickens, pigs, or sheep. The Romans also placed great importance on the appearance of certain natural phenomena, such as eclipses, comets, and thunderstorms, which were viewed as significant omens from the gods. Public events, including military campaigns and state rituals, were often delayed or altered based on these signs. The significance placed on interpreting natural phenomena shows how superstition and divination were tightly woven into the fabric of Roman religion and governance.
Superstition in ancient Rome extended beyond formal religious practices and permeated the everyday lives of individuals. Romans were highly superstitious, with a wide range of rituals and beliefs aimed at protecting themselves from misfortune and ensuring success. For example, many Romans wore amulets or charms to ward off the evil eye, a concept that was widely believed to cause harm through envy or malevolent intent. These charms were often inscribed with protective spells or symbols, and they were worn by both adults and children. Additionally, Romans frequently engaged in rituals to secure good fortune, such as performing specific actions on certain days of the week or during particular phases of the moon. Certain numbers, such as three, five, and seven, were considered particularly lucky, while others, like the number four, were seen as unlucky. The Romans’ devotion to omens and superstitions demonstrates how they viewed the world as an interconnected web of signs and symbols, where the gods’ will could be understood and shaped through careful attention to the world around them.
Mysticism and Proto-Science in the Medieval Period
Medieval Chemistry and Astrology

Medieval alchemy and astrology were central to both the Islamic and Christian worlds, where they played vital roles in shaping intellectual and spiritual life. In the Islamic world, alchemy was deeply intertwined with the rich legacy of Greco-Roman thought, Persian traditions, and early Islamic philosophy. Alchemists like Jabir ibn Hayyan (Geber), who lived during the 8th century, were crucial in developing early alchemical theories that sought to transmute base metals into gold, discover the philosopher’s stone, and understand the spiritual nature of substances. Alchemists in the Islamic world believed that physical transformation was closely linked with spiritual enlightenment, and many viewed alchemy not just as a science but as a path toward achieving divine wisdom. They were particularly influenced by the Neoplatonism of thinkers like Plotinus and the mystical traditions of Sufism, which emphasized the inner, transformative journey of the soul. Alchemical texts were translated into Latin, significantly influencing medieval European alchemy and astrology, where many of these ideas would later merge with Christian thought.
Astrology, similarly, had a profound impact on the Islamic intellectual tradition. Islamic scholars during the medieval period viewed astrology as both a science and an art, one that could reveal the divine order of the universe through the movements of the stars and planets. Islamic astrologers, such as al-Kindi and al-Battani, made significant contributions to astrological theory and practice, including refinements to the Ptolemaic system of astronomy. Islamic astrologers believed that the positions of celestial bodies could influence events on Earth, including the destinies of individuals and the fortunes of nations. In the Islamic world, astrology was frequently used in the courtly and political realm, helping rulers make decisions about war, governance, and personal matters, such as marriage and childbirth. Much like alchemy, astrology in the Islamic world was an intersection of scientific inquiry and mystical speculation, with the cosmos seen as a reflection of divine order and harmony.
In the Christian medieval world, alchemy and astrology had a somewhat different trajectory, shaped by theological concerns and the Church’s role in the regulation of knowledge. Christian alchemists, such as Thomas Aquinas and Albertus Magnus, were heavily influenced by the works of Islamic alchemists, especially through Latin translations of Arabic texts. Alchemy in the Christian world was initially pursued with the same goal as in the Islamic world: to transmute base metals into gold and to uncover the hidden mysteries of creation. However, Christian alchemists were also concerned with the relationship between material and spiritual transformation. The idea of achieving salvation through the purification of the soul was often paralleled with the purification of substances in alchemical processes. The philosopher’s stone, which could supposedly turn lead into gold, became symbolic of spiritual enlightenment and the quest for eternal life. Alchemy in this context was a blend of proto-science and mysticism, often guided by Christian concepts of redemption, purification, and divine intervention.
Astrology in medieval Christianity was a more contentious practice, especially as the influence of the Church grew during the Middle Ages. While astrology was respected and studied by many scholars, including figures like Roger Bacon and Richard of Wallingford, it was also often viewed with suspicion. The Church maintained that astrology, particularly when it was used to predict events and control human affairs, was contrary to the teachings of the Bible and the sovereignty of God’s will. However, medieval Christians did incorporate astrology into their understanding of the natural world, with many scholars seeing it as a way to interpret divine influence in the cosmos. Christian astrology was often viewed as compatible with Christian teachings when it was used in moderation, particularly when it was employed to understand the natural world rather than to control or predict human behavior. Nonetheless, the Church periodically issued edicts against astrology, and it remained a practice that was both respected and feared in medieval society.
Despite these tensions, the influence of both alchemy and astrology in medieval Christianity and Islam cannot be overstated. The integration of these practices with religious and philosophical systems helped to foster a broader view of the cosmos, one that blended material science with metaphysical and spiritual dimensions. In both the Islamic and Christian worlds, the pursuit of alchemical and astrological knowledge was seen as a means of uncovering divine truths, whether through the transformation of matter or the interpretation of celestial movements. In the Islamic world, this pursuit was more aligned with the intellectual exploration of natural philosophy, while in the Christian world, it was often framed within the context of the soul’s salvation and the divine plan. The cross-cultural exchange of alchemical and astrological knowledge between these two worlds helped shape the intellectual currents of the Middle Ages and laid the groundwork for the scientific and mystical explorations of the Renaissance.
Hermeticism and Neoplatonism

Hermeticism, named after the legendary sage Hermes Trismegistus (a syncretic fusion of the Greek god Hermes and the Egyptian god Thoth), represents a complex tradition of spiritual, philosophical, and mystical teachings that emerged in Greco-Roman Egypt, particularly around the 2nd and 3rd centuries CE. It is often regarded as a syncretic system, blending Greek philosophical ideas, particularly from Platonism and Stoicism, with Egyptian religious thought. Hermeticism places a strong emphasis on esoteric knowledge, or the pursuit of hidden wisdom that leads to spiritual enlightenment. Central to Hermetic thought is the belief in a divine source or One, from which all things emanate, and the goal of the Hermetic practitioner is to return to this source through intellectual, spiritual, and alchemical practices. Hermetic texts, such as the Corpus Hermeticum, a collection of writings attributed to Hermes Trismegistus, are filled with dialogues, prayers, and aphorisms that discuss the nature of the universe, the human soul, and the path to divine understanding. These texts became highly influential in both the development of medieval alchemy and the Renaissance revival of mystical and philosophical thought.
One of the core teachings of Hermeticism is the idea of divine unity and the relationship between the macrocosm (the universe) and the microcosm (the individual). According to Hermetic principles, the structure of the universe is mirrored in the structure of the human being, and by understanding oneself, one can understand the divine and the cosmos. This idea reflects the Hermetic axiom “As above, so below,” meaning that the spiritual realities of the universe are reflected in the physical world and vice versa. Hermeticism stresses the importance of personal transformation through the acquisition of secret knowledge (often through initiation or mystical practices), purification of the soul, and understanding the symbolic meanings of natural phenomena. The emphasis on hidden wisdom and its potential to achieve spiritual enlightenment has made Hermeticism one of the most enduring mystical traditions in Western esotericism, influencing later movements like Gnosticism, the Renaissance magical revival, and even the development of modern occultism.
Neoplatonism, which developed through the teachings of philosophers such as Plotinus, Porphyry, Iamblichus, and Proclus during the 3rd to 6th centuries CE, represents a philosophical system that builds upon the ideas of Plato but also incorporates elements of mysticism and metaphysical speculation. Neoplatonism posits the existence of a single, ultimate principle called the One, or the Good, from which all of reality emanates. This One is beyond being and cannot be comprehended by the human mind but is the source of all existence. Neoplatonic cosmology describes a hierarchy of being, with the One at the highest level, followed by the divine intellect (nous), the world soul, and the material world. According to Neoplatonism, everything in the universe is connected through this chain of emanation, and the ultimate goal of human life is to return to the One by transcending the material world and uniting the soul with divine intellect.
While both Hermeticism and Neoplatonism share a belief in a divine, transcendent source and the possibility of human spiritual ascent, they differ in their approach to achieving this ascent. Neoplatonism is more systematic and philosophical, focusing on the intellectual purification of the soul through contemplation and the cultivation of virtues. For Neoplatonists, philosophical reasoning and meditation on the nature of the One are the primary means of achieving union with the divine. Plotinus, for example, emphasized the practice of introspection, where the soul reflects on its own nature and its connection to the divine order. This inner contemplation allows the soul to transcend the material world and reunite with the divine. In contrast, Hermeticism is more eclectic, combining philosophical ideas with practical spiritual exercises such as rituals, prayers, alchemical transformations, and astrological practices. While Hermeticism also stresses intellectual enlightenment, it places greater emphasis on mystical experience and transformation, often involving an engagement with the natural world through symbolism and ritual.
Both traditions played pivotal roles in the development of Western esotericism, influencing early Christian mysticism, medieval alchemy, and Renaissance thinkers like Marsilio Ficino and Giovanni Pico della Mirandola. In the context of early Christianity, Neoplatonism influenced Christian theological thought, particularly in the works of thinkers like Augustine of Hippo, who integrated Neoplatonic ideas with Christian doctrine. Meanwhile, Hermeticism, with its emphasis on hidden knowledge and the pursuit of divine wisdom, became a key source for later occult traditions and influenced Renaissance philosophers who sought to revive ancient mystical teachings. The shared emphasis on spiritual ascent, the interplay between the physical and spiritual worlds, and the belief in a hidden, transcendent reality made both Hermeticism and Neoplatonism foundational to the development of mystical and esoteric traditions in the West. These teachings continue to resonate in modern philosophical and spiritual movements, where their exploration of divine unity, the nature of existence, and the potential for personal transformation remains deeply relevant.
Divination and Scholasticism

In medieval Europe, divination was practiced widely and was considered an important means of understanding divine will and predicting the future. Common forms of divination included astrology, chiromancy (palmistry), geomancy, and the reading of omens. Astrology, in particular, became highly developed during this period, with scholars such as Geoffrey Chaucer in his Treatise on the Astrolabe helping to popularize the practice. Medieval Christians were deeply influenced by the classical works of Ptolemy, who combined astronomy with astrology in a way that provided a framework for predicting events and understanding cosmic influences. Astrological knowledge was applied to a wide range of concerns, from determining the most auspicious dates for important events, such as weddings and battles, to understanding the impact of celestial movements on personal fortunes and health. Despite the strong link between astrology and ancient wisdom, the practice of divination during the medieval period was not universally accepted and often found itself in tension with both Christian theological doctrines and the emerging scholastic tradition.
Alongside astrology, other forms of divination, such as the use of dice, the interpretation of dreams, and even the observation of birds’ flight patterns (a practice known as auspicy), were commonplace in medieval society. These practices were often viewed as means to access hidden knowledge or predict future events. The popularity of divination was not restricted to the lay population; even clergy and royalty often sought the counsel of astrologers and diviners. In particular, royal courts were known to employ astrologers to choose auspicious times for battles or political decisions, while ordinary people sought guidance from fortune-tellers to predict their future. Despite the prominence of divination in daily life, it often faced opposition from religious authorities who saw it as incompatible with Christian doctrine. Divination was frequently associated with the “occult” and viewed as a form of superstition, especially when practiced by non-Christian or pagan practitioners, though even Christian theologians employed astrological or divinatory methods to a certain extent.
The rise of Scholasticism in the medieval period, particularly in the 12th and 13th centuries, brought about a critical reassessment of divination and the occult arts. Scholasticism was a method of intellectual inquiry that sought to reconcile faith with reason, drawing heavily on the works of Aristotle and other classical philosophers. Prominent scholastic thinkers, such as Thomas Aquinas and Albertus Magnus, sought to clarify the relationship between divine will and human agency, and their critiques of divination were based on theological and philosophical grounds. For instance, Aquinas, in his Summa Theologica, argued that divination, when it relied on knowledge of hidden causes (such as astrological influences), undermined the sovereignty of God. According to Aquinas, God alone possessed knowledge of the future, and any attempt by humans to predict or influence the future through divination was a challenge to God’s omniscience and divine plan. Moreover, scholastic philosophers like Aquinas emphasized the importance of free will in human decision-making, and they believed that relying on divination could encourage fatalism, undermining moral responsibility.
One of the key concerns of the scholastics regarding divination was its potential to lead people away from true religious devotion. While the Church officially condemned forms of divination that sought to control or manipulate the future, it also recognized the need for a certain level of understanding of natural events, including astrology, for practical purposes. As a result, there was a nuanced approach to divination within the intellectual framework of the Middle Ages. For example, the study of the heavens was not entirely rejected by the scholastics, as astrology was often integrated into the natural philosophy of the time. However, it was important for scholars to distinguish between astrology as a science of celestial bodies and astrology as a form of divination that attempted to foretell human affairs in a deterministic manner. This was particularly evident in the works of philosophers like Albertus Magnus, who treated astrology as a legitimate scientific pursuit but maintained that its predictive claims should be examined with skepticism, particularly when they conflicted with Christian teachings on divine providence.
The tension between medieval divinatory arts and scholastic critiques continued throughout the Middle Ages, culminating in the later Renaissance, when the study of astrology, alchemy, and magic was often pursued by both intellectuals and mystics. While the Church did not fully condemn the study of the natural world through astrology or other divinatory practices, it did demand that such practices be rooted in an understanding of divine order, rather than being used as a tool for personal gain or as an attempt to control the divine. The scholastic critique of divination ultimately led to a more cautious and intellectual approach to astrology and other mystical practices, with figures such as Giovanni Pico della Mirandola seeking to reconcile ancient knowledge with Christian theology. Despite this intellectual rapprochement, the age-old allure of divination persisted, and the medieval period saw a continued coexistence of religious orthodoxy and esoteric knowledge, paving the way for the more robust exploration of astrology and alchemy during the Renaissance.
Renaissance and Early Modern Esotericism
Paracelsian Medicine

Paracelsian medicine, named after the Swiss physician and alchemist Paracelsus (born Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541), represents a radical departure from the traditional humoral and Galenic medical theories that dominated Europe for centuries. Paracelsus challenged the longstanding authority of Hippocratic and Galenic medicine by advocating for a more empirical, experimental approach to healing, one that was grounded in a deep understanding of chemistry, alchemy, and the natural world. Unlike his predecessors, Paracelsus rejected the idea that disease was primarily caused by imbalances in the four humors (blood, phlegm, yellow bile, and black bile) and instead proposed that diseases were the result of external, material causes, such as poisons, infections, and environmental factors. His approach focused on understanding the specific causes of disease at a molecular or elemental level, and he emphasized the importance of individualized treatments rather than the one-size-fits-all remedies prescribed by Galenic medicine.
Central to Paracelsian medicine was the belief in the healing power of chemicals and minerals. Unlike traditional medicine, which relied heavily on herbal remedies and natural substances, Paracelsus advocated for the use of minerals and metals as medicinal substances. He believed that these substances had inherent healing properties that could be harnessed for therapeutic purposes. Paracelsus’s use of chemicals and minerals was groundbreaking, as he promoted the idea of using substances like mercury, sulfur, and arsenic in controlled dosages to treat a wide variety of ailments. He was the first to introduce the concept of “dose” in medicine, captured in his dictum that “the dose makes the poison”: the right dosage of a substance could heal, while an excessive dose could harm or poison the patient. His ideas about the medicinal use of minerals laid the groundwork for later developments in pharmacology and chemistry, influencing both the study and application of medicinal chemistry in the centuries that followed.
Another key aspect of Paracelsian medicine was its focus on the concept of the “microcosm” and “macrocosm,” drawing on mystical and alchemical ideas that linked human beings to the natural world. Paracelsus believed that the human body was a reflection of the universe, and that a deep understanding of nature and the cosmos was essential for understanding human health. He proposed that each person had an “astral” body, governed by cosmic forces, which was connected to the physical body in a complex, spiritual relationship. This notion of a mystical connection between the body and the cosmos shaped Paracelsus’s approach to healing, as he believed that understanding the spiritual and elemental nature of a disease was just as important as understanding its physical symptoms. He also argued that the physician must not only treat the body but also the spirit, highlighting the importance of mental and emotional health in healing. This holistic view of the body as part of a larger cosmic order was an integral part of Paracelsus’s medical philosophy, setting him apart from more mechanistic views of the body and disease.
Paracelsus’s teachings on medicine were revolutionary, but they were met with strong resistance from the established medical community. The university-trained physicians, who adhered to the humoral theory of Galen, rejected Paracelsus’s unconventional ideas. Paracelsus was a staunch critic of the medical establishment, famously declaring that the work of physicians who followed Galen was nothing more than “bookish medicine” that lacked true understanding of the natural world. He believed that the reliance on ancient texts and theoretical knowledge was insufficient for treating actual patients. Instead, he advocated for an experiential approach to medicine, where the physician should be trained in observation, experimentation, and practical application. Paracelsus’s critiques of established medical practices and his unconventional methods led to his marginalization, and many of his ideas were dismissed as heretical or quackery by the academic medical community. Nevertheless, his influence began to grow, especially among later generations of physicians, alchemists, and chemists, who saw the potential in his approach to medicine.
Despite the initial resistance, the legacy of Paracelsian medicine endured, especially as the field of chemistry and pharmacology began to emerge in the early modern period. His emphasis on the use of minerals and chemicals in medicine laid the foundation for the development of modern pharmaceutical practices. Paracelsus’s belief in the importance of individualized treatment and the role of the physician as both a scientist and a healer became integral to the practice of medicine in the subsequent centuries. In the 17th and 18th centuries, many physicians and chemists began to experiment with Paracelsus’s ideas, leading to advancements in drug development and the eventual creation of more systematic approaches to the use of medicinal substances. Paracelsus’s contributions to medical theory, including his understanding of disease as the result of external causes, were pivotal in moving away from the ancient humoral theory and toward a more scientifically grounded approach to medicine. Though controversial in his time, Paracelsus’s emphasis on empirical observation and his integration of alchemical, spiritual, and naturalistic ideas set the stage for the eventual transformation of Western medicine from medieval to modern practices.
Occult and Magic

During the Renaissance, a profound shift occurred in Western intellectual and cultural life, characterized by a revival of interest in classical antiquity, especially the mystical and esoteric traditions that had been suppressed or marginalized during the medieval period. This period saw the resurgence of occult philosophy, a broad category encompassing the study of hidden or arcane knowledge, which sought to uncover the secret forces governing the cosmos and human existence. Occult philosophy was deeply influenced by the works of ancient Greek and Roman philosophers, particularly Neoplatonism, as well as the rediscovery of ancient Egyptian, Chaldean, and Hermetic texts. These texts, including the Corpus Hermeticum, were thought to contain esoteric wisdom that could unlock the mysteries of the universe and provide spiritual and practical guidance. The rebirth of magic during the Renaissance was intricately connected to this rediscovery of ancient occult traditions, as scholars, alchemists, and mystics began to combine intellectual inquiry with spiritual and supernatural practices.
One of the key figures in this resurgence of occult thought was Marsilio Ficino, an Italian philosopher and translator who is credited with reviving Neoplatonism and integrating it with Christian theology. Ficino’s works, especially his translations of Plato and Plotinus, laid the foundation for much of Renaissance occultism. Ficino viewed magic as a legitimate part of human inquiry, arguing that it was an aspect of divine philosophy, with its origins in the “light” of the One or the Good (the highest principle in Neoplatonism). He famously emphasized the idea that humans could access divine knowledge through intellectual and spiritual practice, linking the human soul to the divine through the contemplation of celestial order. Ficino’s belief in the efficacy of astrology, alchemy, and theurgy—rituals designed to invoke divine beings—was central to Renaissance occultism, as he believed these practices could elevate the soul and bring it closer to the divine.
Ficino was not alone in his enthusiasm for occult practices. Giovanni Pico della Mirandola, another prominent Italian Renaissance thinker, expanded upon Ficino’s work by synthesizing elements of Jewish Kabbalah, Christian mysticism, and Neoplatonism. Pico’s famous work, the Oration on the Dignity of Man, reflects his belief in the potential of human beings to transcend their earthly limitations through the pursuit of hidden knowledge. He argued that humans, as creatures endowed with both material and divine nature, had the unique ability to ascend toward the divine through the study of the occult sciences. Pico’s Kabbalistic and Hermetic teachings were particularly influential in the development of Renaissance magic, as he believed that the use of symbols and rituals could align humans with the higher cosmic order. His fusion of ancient mysticism with Renaissance humanism further reinforced the notion that the pursuit of occult knowledge could be a means of spiritual elevation and self-improvement.
During the Renaissance, the rebirth of magic was also closely tied to the practice of alchemy, which was seen not just as a precursor to modern chemistry, but as a sacred science that could transform both materials and the alchemist’s soul. Paracelsus, the Swiss physician and alchemist, is often considered a key figure in the magical and alchemical traditions of the Renaissance. Paracelsus rejected traditional medical theory and instead advocated for the use of minerals and chemicals as tools for healing, with the underlying belief that these substances had hidden, magical properties. His approach to medicine was part of a larger Renaissance interest in understanding the hidden forces of nature, which included an intense curiosity about the spiritual and material worlds. Alchemists believed that through experimentation and spiritual refinement, they could uncover the secrets of the universe, such as the creation of the philosopher’s stone, which was believed to have the power to turn base metals into gold and grant immortality. This mystical pursuit of alchemy, with its focus on hidden knowledge, made it a crucial part of Renaissance magic.
The rebirth of magic during the Renaissance was also marked by the development of astrology as both a scientific and mystical discipline. Renaissance astrologers, who were heavily influenced by Ptolemaic astronomy, combined the study of celestial movements with the belief that the stars and planets had a direct influence on earthly events. Johannes Kepler, the famous astronomer, is often seen as an example of this blend of astrology and emerging science. Although Kepler grew critical of the cruder astrology of his day, he continued to cast horoscopes throughout his career, and his work reflects the Renaissance ideal that cosmic harmony could reveal spiritual truths. For many Renaissance scholars, astrology was a tool for understanding the divine order, and it was frequently used in conjunction with other forms of occultism, such as alchemy and magic, to help interpret the will of the heavens and guide human affairs. This worldview held that by understanding and manipulating cosmic forces, individuals could gain insight into the divine plan and achieve spiritual or material goals.
The rebirth of magic during the Renaissance represented a synthesis of mysticism, science, and philosophy, blending ancient occult traditions with the intellectual currents of the period. It was a time when human beings sought to bridge the gap between the material and spiritual worlds, using occult knowledge to unlock hidden truths about the universe and their place within it. While the church often viewed such practices with suspicion, regarding them as potentially heretical or dangerous, many Renaissance thinkers saw magic as a means of achieving enlightenment and spiritual transformation. The flourishing of occultism during this period laid the groundwork for later mystical movements and influenced the development of Western esotericism, setting the stage for a continuing exploration of magic, alchemy, and astrology in the centuries to come.
Skepticism and Rational Pushback

During the Renaissance, the revival of classical antiquity brought with it not only a flourishing of new ideas in art, science, and philosophy, but also a renewed examination of the intellectual foundations of Western thought. The period, which was marked by an increased focus on human reason and empirical inquiry, saw the rise of early forms of skepticism—a philosophical stance that questioned the reliability of human knowledge and emphasized the limitations of reason. Skeptical thinkers challenged the prevailing religious and medieval scholastic doctrines, which had been heavily based on authority and theological interpretation. Figures like Michel de Montaigne, whose work Essays laid the foundation for modern skepticism, argued that human knowledge was inherently limited and that reason was often unreliable. Montaigne’s skepticism was not so much a rejection of reason as a caution against its overreach, urging intellectual humility and the acceptance of the uncertainty of human existence. His insistence on the complexity of human perception and the variability of experience reflected a growing concern about the limits of what could be known with certainty.
Skepticism during the Renaissance also had significant implications for theology and the ongoing tensions between reason and faith. While the Catholic Church continued to dominate religious thought, there was an increasing questioning of traditional religious authorities and doctrines, spurred by the intellectual movements of the time. Renaissance thinkers, influenced by the revival of Greek and Roman thought, began to emphasize the importance of individual inquiry and personal judgment over institutional dogma. Giordano Bruno, for example, was a philosopher and cosmologist who argued for a vastly expanded conception of the universe, rejecting the geocentric model espoused by the Church. Bruno’s views on the infinity of the universe and the multiplicity of worlds were radical for his time and led to his eventual execution for heresy. His ideas represented a challenge not only to the religious orthodoxy of the Church but also to the broader Aristotelian and scholastic philosophy that had dominated the intellectual world of the medieval period.
The Renaissance also saw a rationalist pushback against mystical and occult traditions, which had flourished alongside the resurgence of occult philosophy and magic. The rise of empiricism and a more scientific approach to knowledge led many thinkers to question the validity of mystical practices like alchemy, astrology, and divination. One of the key figures in this movement was Francis Bacon, an English philosopher who is often credited with laying the groundwork for modern scientific methodology. Bacon’s emphasis on inductive reasoning, observation, and experimentation marked a decisive break from medieval modes of thought, which often relied on deductive reasoning from established principles. Bacon’s writings, especially his work Novum Organum, called for a rigorous, empirical approach to understanding the natural world, which he believed could only be achieved through systematic observation and the collection of data, rather than relying on ancient authorities or esoteric traditions. His rationalist perspective offered a stark contrast to the mystical and speculative ideas that had gained traction in the Renaissance, arguing that knowledge should be built from the ground up rather than based on abstract or mystical theories.
Another influential figure in the rationalist pushback was René Descartes, whose philosophical and scientific writings laid the foundation for modern philosophy and the scientific revolution. Descartes is best known for his famous dictum, Cogito, ergo sum (“I think, therefore I am”), which encapsulates his belief in the primacy of reason and the ability of human thought to achieve certainty. Descartes’s skepticism was profound and systematic; in his Meditations on First Philosophy, he doubted everything that could possibly be doubted, including the existence of the external world and his own body. His goal was to arrive at an indubitable foundation for knowledge, which he ultimately found in the certainty of his own existence as a thinking being. Descartes’s approach to skepticism was grounded in rationalism, as he believed that human reason could achieve absolute certainty, in contrast to the Renaissance emphasis on subjective perception and the mystical aspects of knowledge. Descartes’s philosophy was a direct challenge to both the mystical elements of Renaissance thought and the medieval reliance on authority and tradition, offering a vision of knowledge built on clear, logical reasoning and certainty secured against doubt.
The skepticism and rationalism of the Renaissance had lasting implications for the intellectual climate of the early modern period. As the period progressed, the ideas of early skeptics and rationalists laid the foundation for the scientific revolution, which would forever change the trajectory of Western thought. Galileo Galilei and Johannes Kepler advanced empirical inquiry and mathematical analysis alongside the methodological programs of Bacon and Descartes, and Isaac Newton would later build upon the foundations they all laid. The growing emphasis on empiricism, mathematics, and natural philosophy gradually displaced the speculative and mystical approaches to knowledge that had dominated the medieval period. The rationalist pushback of the Renaissance, with its focus on reason, evidence, and critical inquiry, helped to shift the focus of intellectual life away from religious and esoteric explanations of the world and toward scientific methods that sought to explain natural phenomena through observation, experimentation, and mathematical reasoning. This intellectual transformation would become the hallmark of the Enlightenment and ultimately the modern world, shaping the way we understand knowledge, science, and human progress.
The Enlightenment and the Boundaries of Science
Mesmer’s Vitalism

Mesmerism, named after the German physician Franz Anton Mesmer, was a highly influential and controversial system of thought that emerged in the late 18th century and would go on to inspire both medical and mystical ideas. Mesmer’s central theory was that the human body was influenced by an invisible natural force, which he called “animal magnetism”; the practice built around it soon came to be known as “mesmerism” after him. According to Mesmer, the body contained an energy force that could be manipulated through various techniques, most notably the application of magnets and the act of “mesmerizing” a patient. Mesmer believed that imbalances or blockages in the flow of this energy caused illness, and that by restoring its balance through controlled influence, he could cure a range of physical and psychological ailments. While his theories were often seen as pseudoscientific, Mesmerism gained widespread popularity in Europe, particularly in France, where it became a sensation among both the medical community and the general public. The famous “baquet” (a tub-like device that supposedly harnessed animal magnetism to cure illness) and Mesmer’s highly publicized demonstrations contributed to the rise of mesmerism as a form of medical therapy, despite the skepticism and criticism it faced from established medical authorities.
The practice of Mesmerism was deeply connected to the broader intellectual climate of the Enlightenment, an era in which reason and scientific inquiry were increasingly prized over superstition and mysticism. However, Mesmerism did not entirely conform to the rationalist ideals of the Enlightenment. While the heroes of the new science, such as René Descartes and Isaac Newton, had sought to explain the natural world through reason, empirical observation, and scientific laws, Mesmer’s theory of invisible energies seemed to revert to a more mysterious, vitalistic understanding of nature. The French Enlightenment philosopher Denis Diderot, for example, was initially intrigued by Mesmer’s ideas but also critiqued the lack of empirical evidence behind his claims. The French Royal Commission, which was assembled in 1784 to investigate the validity of Mesmerism, included prominent thinkers such as Antoine Lavoisier and Benjamin Franklin, who concluded that Mesmer’s animal magnetism was not scientifically plausible. Despite this, Mesmerism’s popularity persisted, influencing both medical practice and the emerging field of hypnosis, which would become a significant area of study in the 19th century.
Mesmer’s ideas were part of a larger movement of vitalism, a theory in philosophy and biology that proposed that life was governed by a force distinct from the mechanical laws of physics. Vitalism posited that living organisms possessed an immaterial, “vital force” or “life force” that animated the body and governed its functions. This idea was particularly influential in the medical and biological sciences of the period. Vitalism can be seen as a response to the rise of mechanical philosophy (which viewed the body as a machine) and Newtonian physics, which applied mechanical laws to the workings of the universe. Vitalists, however, rejected the idea that life could be fully explained through mechanical processes alone. Instead, they suggested that living organisms were governed by an intangible force or energy that could not be explained by material science. Mesmerism, with its emphasis on a vital force influencing health and illness, was a practical application of this broader philosophical movement.
The notion of vitalism had roots in earlier philosophical traditions, particularly in the works of René Descartes, who believed that the body could be understood through mechanical principles, but that the soul, or “mind,” was a separate, non-material entity. However, vitalism took a more holistic turn during the Enlightenment, drawing on ideas from French medicine and German philosophy. Georg Wilhelm Friedrich Hegel and other German idealists argued that the “life force” was a dynamic, organizing principle that was responsible for the unity of the organism. The idea of an invisible vital force was also reflected in the writings of Luigi Galvani and Alessandro Volta, whose experiments with electrical currents in living tissue hinted at the possibility of an unknown energy influencing biological processes. The notion of animal magnetism, introduced by Mesmer, was just one manifestation of this vitalistic thinking, linking invisible forces to physical health. While the exact nature of the “life force” remained unclear, vitalism opened up new avenues for thinking about the relationship between the body, mind, and nature, and it continued to influence medical theories until the development of cell theory and modern biochemistry in the 19th century.
Although Mesmerism and vitalism ultimately did not survive as dominant theories in medical practice, their legacy continued to shape the development of psychological and medical sciences. The work of Mesmer directly influenced the later development of hypnosis and psychosomatic medicine, areas that recognize the interplay between the mind and body in the treatment of illness. Furthermore, vitalist ideas contributed to the eventual birth of holistic health practices, which view the body as a complex system influenced by not just physical factors, but also mental, emotional, and spiritual forces. The Enlightenment’s emphasis on rationalism, scientific observation, and empirical evidence challenged and eventually replaced vitalist and mesmerist theories. Still, the early interest in unseen forces and the potential of the human mind to influence physical health foreshadowed future developments in psychology, neuroscience, and energy medicine, making Mesmerism and vitalism key precursors to modern explorations of the mind-body connection.
Head Shapes

Phrenology, a pseudoscience that claimed to determine an individual’s character, personality traits, and mental faculties based on the shape and size of the skull, emerged during the Enlightenment as a system that purported to blend scientific inquiry with social theory. Founded by Franz Joseph Gall in the late 18th century, phrenology argued that different areas of the brain were responsible for specific intellectual, emotional, and moral functions. Gall proposed that the shape of a person’s skull would reflect the development of these areas of the brain, and by measuring the contours and bumps on the skull, one could infer a person’s traits. While initially focused on the study of the human brain and its faculties, phrenology quickly took a turn toward racial and social classifications, which would come to have dangerous implications for the development of scientific racism. This theory, which claimed to explain differences in intelligence, behavior, and moral character, was widely accepted in Europe and the United States for much of the 19th century and became a cornerstone of pseudo-scientific attempts to justify racial hierarchies.
Phrenology was not merely a scientific theory; it quickly became a tool for reinforcing social prejudices and racist ideologies, especially in its application to different racial groups. During the Enlightenment, there was a prevailing belief in the idea of human progress, but also in the hierarchy of races, with European populations often at the top of this hierarchy. As phrenologists began to apply their methods to different racial and ethnic groups, they consistently found that non-European races—particularly Africans, Indigenous peoples, and Asians—were deemed to have smaller or less-developed brain regions associated with intelligence, morality, and self-control. Phrenology was used as evidence to support the idea that these races were intellectually inferior, more prone to violence, and less capable of civilization. The shape of the skull, according to phrenologists, was not just a reflection of individual personality but a sign of inherent racial characteristics. In this way, phrenology became one of the pseudo-scientific justifications for the oppressive social systems of slavery, colonialism, and racial segregation that flourished during the Age of Enlightenment and beyond.
Scientific racism, which gained traction during the Enlightenment, relied heavily on pseudo-scientific theories like phrenology to reinforce the notion that certain races were biologically inferior to others. This form of racism claimed that racial differences were not only physical but also mental and moral, and these differences were thought to be immutable. The work of Johann Friedrich Blumenbach, a German scientist and anthropologist, was instrumental in shaping the early foundations of scientific racism. Blumenbach classified humans into five “races” based on skull shape and other physical characteristics, regarding Caucasians as the “original” and most “beautiful” variety from which the others had diverged, a framework that later writers readily hardened into a hierarchy that placed Africans at the bottom. This hierarchical view of humanity, which classified people according to both physical and mental traits, was deeply entrenched in the Enlightenment’s intellectual currents and was widely disseminated through academic works, making it a powerful tool for justifying racial discrimination, slavery, and colonial exploitation.
The widespread acceptance of phrenology and scientific racism in the 19th century also reflected a broader cultural trend in the Enlightenment to seek empirical, “scientific” answers to questions of human difference, morality, and capability. Enlightenment thinkers sought to apply reason and scientific methodologies to all aspects of human life, including the study of race. Unfortunately, this intellectual trend often overlooked the deeply entrenched social, cultural, and political biases that informed these studies. Figures like George Combe, one of the most prominent phrenologists of the 19th century, built on Gall’s work and helped to popularize phrenology in Britain and the United States. Combe’s work, although grounded in the idea that human nature could be objectively studied, often reinforced contemporary racial prejudices. His public lectures and writings about how to improve individuals’ characters through education or the proper “cultivation” of certain faculties also discussed race in terms that fit with the broader scientific racist narratives of the time. This made phrenology a highly influential tool for shaping racial attitudes in the 19th century, especially among the educated elite.
The impact of phrenology and scientific racism was profound and enduring, influencing not only the development of racial theory but also the shaping of public policy. In the United States, for instance, phrenology was used to justify the racial inequalities of slavery and later segregation. In both Europe and the United States, phrenologists were often enlisted by those in power to provide a veneer of scientific legitimacy to discriminatory practices. Furthermore, the concept of racial inferiority, popularized by scientific racism, laid the groundwork for eugenics in the late 19th and early 20th centuries. Eugenics, a movement that sought to improve the genetic quality of the human population through selective breeding, often adopted phrenological and racial ideas to argue for the sterilization and segregation of those deemed “genetically inferior.” Phrenology’s influence, while waning as the scientific method advanced and modern neuroscience took shape, continued to inform racist ideologies well into the 20th century, with its legacy still present in some forms of pseudoscientific racialism today.
Methods and Evidence

During the Enlightenment, debates over method and evidence were central to the intellectual transformation of the period. As the Enlightenment emphasized the importance of reason and empirical observation, the methods by which knowledge was obtained and validated became subjects of intense debate. One of the central issues in these debates was the scientific method, a systematic approach to acquiring knowledge that emphasized observation, experimentation, and the use of evidence to test hypotheses. This approach, which had its roots in the works of Francis Bacon and René Descartes, aimed to shift the focus away from reliance on ancient authorities and speculative reasoning toward more practical, evidence-based inquiry. Bacon’s Novum Organum laid out a vision of inductive reasoning, where knowledge should be built from the ground up by collecting data and using it to derive general principles. This was in contrast to the medieval scholastic approach, which had prioritized deductive reasoning based on theological or philosophical doctrines. Bacon’s method became influential, and during the Enlightenment, it sparked new discussions about how best to investigate the natural world and establish reliable knowledge.
One of the most significant debates about method during the Enlightenment involved the tension between empiricism and rationalism. Empiricists, such as John Locke, George Berkeley, and David Hume, argued that all knowledge derives from sensory experience and that the human mind is a tabula rasa, a blank slate upon which experiences are written. Locke, in his Essay Concerning Human Understanding, rejected the doctrine of innate ideas and argued that knowledge comes from the interaction of the senses with the external world, which was a revolutionary idea in opposition to the rationalist tradition. Rationalists like René Descartes and Baruch Spinoza, on the other hand, believed that reason, independent of sensory experience, was the ultimate source of knowledge. Descartes’s famous dictum, Cogito, ergo sum (“I think, therefore I am”), encapsulated the idea that certain truths, such as the existence of the self, could be known through reason alone, without reliance on empirical observation. This debate between empiricism and rationalism was foundational to Enlightenment thinking, with empiricists advocating for scientific methods grounded in observation and experience, while rationalists emphasized the power of reason to uncover universal truths.
The debate over evidence was closely tied to these discussions of method. Empiricists were particularly concerned with the role of sensory data and the experiment as the most reliable sources of evidence. Isaac Newton, a towering figure of the Enlightenment, exemplified this empirical approach in his monumental work Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), where he sought to explain the laws of motion and gravitation through mathematical formulas derived from empirical observation and experimentation. Newton’s emphasis on mathematical modeling and experimental validation set the standard for how evidence should be treated in natural science. His approach marked a departure from earlier, speculative theories that had lacked rigorous experimental support. Newton’s success in explaining celestial phenomena using laws derived from observable evidence demonstrated the power of empirical evidence, which became the bedrock of Enlightenment scientific thought. The method of systematic experimentation, often under controlled conditions, became a defining feature of the Enlightenment’s empirical philosophy.
However, the debate over method and evidence was not limited to the natural sciences alone. It also had profound implications for social and political theory during the Enlightenment. Thinkers like Jean-Jacques Rousseau and Montesquieu applied empirical methods to the study of society, though their approaches were often more qualitative than those of the natural sciences. Montesquieu’s The Spirit of the Laws used comparative analysis to explore how different societies developed their political institutions, drawing conclusions based on historical and cultural evidence. Rousseau, in The Social Contract, sought to derive principles of justice and social order from a close examination of human nature and society. These thinkers were deeply influenced by the empiricist tradition, even as their methods differed from the experimental methods of natural science. They emphasized the importance of evidence from history, human behavior, and the study of existing social structures, though they were often critiqued for lacking the kind of systematic rigor and empirical support that defined the natural sciences during the period.
As the Enlightenment progressed, the scientific method and the use of evidence became central not only to the fields of natural science but also to the development of new philosophical and social paradigms. The Enlightenment’s emphasis on critical thinking, the questioning of traditional authority, and the search for evidence-based knowledge paved the way for the modern scientific revolution. At the same time, the debates over method and evidence also laid the foundation for later intellectual movements, including positivism and empiricism in the 19th century, which sought to apply scientific principles to all fields of knowledge, including sociology and psychology. The period’s reliance on empirical evidence and systematic methodology also contributed to the rise of the social sciences and the eventual development of scientific methodology as the standard across various disciplines. However, despite these advances, the Enlightenment’s debates over method and evidence were not without their limitations. Many Enlightenment thinkers still grappled with questions about the nature of subjectivity, the role of intuition in knowledge, and the complexities of social and political structures, making it clear that even in the pursuit of objectivity, human understanding remains partial and evolving.
19th-Century Explosion of Pseudoscientific Movements
Talking to Ghosts

Spiritualism and the practice of séances became prominent features of 19th-century life, particularly in the United States and Europe, reflecting a cultural and spiritual movement that intertwined with the broader shifts of the period, including the rise of modern science and the ongoing search for meaning in a rapidly changing world. Spiritualism, as a religious and philosophical movement, was founded on the belief that the spirits of the dead could communicate with the living through mediums—individuals who claimed to have the ability to channel messages from the deceased. This belief was propelled into public view by events such as the Fox sisters’ infamous encounter with spirits in 1848, when two young sisters from New York seemingly communicated with a spirit through knocking sounds. This event, which attracted widespread attention, became the catalyst for the birth of Spiritualism, leading to the establishment of a religious and social movement that believed in the afterlife and the possibility of spirit communication. As industrialization and urbanization rapidly altered society, many found the spiritualist movement appealing as a source of comfort amid the uncertainties of the modern world, including the rising death toll from wars, disease, and industrial accidents.
The practice of séances, gatherings in which a medium would attempt to make contact with spirits, became central to the Spiritualist movement. These séances often took place in private homes, but public events were also common, attracting large audiences. During séances, participants would typically sit in a darkened room, sometimes with a spirit medium at the center, and await manifestations of spirit communication. The medium might use various methods, such as automatic writing, where they would write messages purportedly dictated by spirits, or table turning, where a table would allegedly move or tilt in response to questions posed to the spirits. Over time, the practices became more elaborate, with mediums claiming to produce physical phenomena such as ectoplasm—a substance believed to be exuded by the medium during spirit communication—further captivating the public’s imagination. The popularity of séances and other spiritualist phenomena surged in the 19th century, and many people, including prominent figures, participated in these events, seeking closure with deceased loved ones, guidance, or even answers to scientific and philosophical questions.
Scientific and intellectual responses to Spiritualism were varied and often conflicted. On the one hand, the rise of Spiritualism seemed to promise a form of communication with the beyond that resonated with the Victorian fascination with the supernatural and the unknown, while simultaneously challenging the rational, materialistic worldview that had been gaining ground since the Scientific Revolution. Early spiritualist practices were sometimes viewed as an extension of the Romantic interest in the mystical and emotional aspects of life, pushing back against the mechanistic explanations of nature that characterized much of the Enlightenment. Responses among prominent figures were sharply divided. Sir Arthur Conan Doyle, the creator of Sherlock Holmes, was an ardent proponent of Spiritualism and believed that scientific materialism could not fully explain spiritual phenomena. Many intellectuals, scientists, and skeptics, by contrast, viewed Spiritualism with suspicion and derision, and famous figures such as Harry Houdini, the magician, dedicated much of their lives to exposing fraudulent mediums and debunking the claims of Spiritualists. Houdini, in particular, used his expertise in illusion and sleight of hand to demonstrate how mediums could easily manipulate audiences into believing they had made contact with the dead.
The skeptical movement that emerged in response to Spiritualism became increasingly organized and sophisticated. A significant critique of Spiritualism was the frequent exposure of fraudulent mediums who used trickery, deception, and psychological manipulation to produce convincing séances. In the late 19th century, a growing number of people began to argue that belief in Spiritualism was rooted in cognitive biases, such as the human tendency to find patterns and explanations even where none exist (a tendency psychologists now call apophenia). One of the most notable groups to scrutinize Spiritualism was the Society for Psychical Research, founded in 1882 in London, which conducted systematic investigations into mediumship and other supernatural claims. The society’s early members and officers included eminent figures such as the philosopher Henry Sidgwick, its first president, and the chemist Sir William Crookes, whose earlier investigations of mediums had provoked intense controversy. The rise of skepticism and the work of such organizations reflected a growing confidence in empirical science, which increasingly sought to explain paranormal phenomena in naturalistic terms.
Despite mounting criticism and exposure of fraudulent practices, Spiritualism continued to thrive throughout the 19th century, attracting a wide array of followers from different social classes, including politicians, artists, and intellectuals. The movement’s appeal was particularly strong among those who had experienced loss or tragedy, such as families who had lost loved ones in wars or due to illness. Spiritualism provided an emotional and spiritual outlet, offering the possibility of continuing communication with the deceased and thus easing the grief of those left behind. This aspect of Spiritualism resonated with the broader 19th-century interest in the afterlife and the spiritual dimensions of existence. Moreover, the movement’s emphasis on female mediums also represented a significant aspect of its appeal, as many of the most famous mediums, such as the Fox sisters and Eusapia Palladino, were women. This gave Spiritualism a distinct feminist dimension, as women in the movement were seen as playing a central role in bridging the physical and spiritual realms, which may have resonated with broader social movements for women’s rights and suffrage.
By the early 20th century, the popularity of Spiritualism began to wane, partly due to the continued exposure of fraudulent practices and the increasing dominance of scientific materialism in public discourse. However, the legacy of Spiritualism and its associated practices, such as séances, continued to influence spiritual and religious movements well into the 20th century. Many ideas from Spiritualism found their way into the development of New Age beliefs and occultism in the modern era, while the interest in mediumship, psychic phenomena, and the afterlife remained a persistent cultural feature. In addition, the role of Spiritualism in challenging conventional views of life and death, as well as its contribution to the cultural climate of the 19th century, cannot be overstated. Whether through the efforts of sincere believers or through the lens of skeptics like Houdini, the 19th century’s fascination with the supernatural and the ability to communicate with spirits would shape both popular culture and more serious philosophical inquiries into the nature of existence, consciousness, and the afterlife.
Theosophy and Occult Revivals

The 19th century was a period of intense spiritual and intellectual upheaval, marked by the rise of new religious and philosophical movements that sought to challenge materialism and provide alternative explanations of existence. One of the most significant of these movements was Theosophy, which emerged as a synthesis of Eastern religions, Western occultism, and various esoteric traditions. Theosophy was founded in the late 19th century by Helena Petrovna Blavatsky, a Russian-born mystic, and Henry Steel Olcott, an American military officer and lawyer. Theosophy aimed to unite the spiritual teachings of ancient cultures, including Hinduism, Buddhism, and ancient Egyptian mysticism, with Western philosophical and religious traditions, offering a comprehensive worldview that emphasized the existence of a hidden, spiritual reality beneath the surface of the material world. This spiritual system incorporated elements of mysticism, esoteric knowledge, and occult practices, positioning itself as a vehicle for spiritual enlightenment and a deeper understanding of the universe and human nature.
The founding of Theosophy coincided with a broader occult revival in the 19th century, as interest in hidden knowledge, the supernatural, and mystical teachings surged across Europe and the United States. This occult revival was partly a response to the increasing dominance of scientific materialism and rationalism during the Enlightenment and the Industrial Revolution, which many felt overlooked the spiritual dimensions of life. As industrialization and urbanization transformed society, people sought new forms of spirituality that could provide answers to existential questions and offer comfort in the face of an uncertain and rapidly changing world. The Theosophical movement was particularly influential because it drew on a wide array of religious, mystical, and occult traditions, making it accessible to a broad audience. It also attracted prominent intellectuals, artists, and social reformers, who saw in Theosophy a way to reconcile the scientific and the spiritual, the rational and the mystical.
At the core of Theosophy was the belief in a universal divine wisdom that transcended religious boundaries and could be accessed through spiritual practices such as meditation, study of sacred texts, and mystical experience. Theosophists believed in the immortality of the soul, reincarnation, and the existence of a spiritual hierarchy, with advanced spiritual beings, or Masters, guiding humanity toward higher knowledge and enlightenment. These ideas were influenced by Eastern religions, particularly Hinduism and Buddhism, but were also shaped by Western esoteric traditions, including Kabbalah, alchemy, and Rosicrucianism. Blavatsky’s magnum opus, The Secret Doctrine, published in 1888, outlined her vision of an ancient and secret wisdom that had been passed down through the ages, hidden from the general public but accessible to those who were spiritually attuned. This secret knowledge, she argued, contained the keys to understanding the cosmos, human evolution, and the purpose of life.
The occult revival in the 19th century was not limited to Theosophy alone. A range of esoteric movements emerged during this period, many of which shared similar themes of spiritual transformation and the quest for hidden knowledge. One of the most notable movements to emerge alongside Theosophy was the Hermetic Order of the Golden Dawn, which was founded in 1888 in London and became one of the most influential occult organizations of the late 19th and early 20th centuries. The Golden Dawn combined elements of Kabbalah, alchemy, astrology, and ritual magic with Western mystical traditions, and its teachings had a profound impact on later figures such as Aleister Crowley and W.B. Yeats. The Golden Dawn’s teachings emphasized the importance of initiation and spiritual advancement through occult rituals, which sought to align the practitioner with divine forces and facilitate personal transformation. Similar to Theosophy, the Golden Dawn emphasized the pursuit of hidden knowledge and the development of spiritual powers, with the ultimate goal of achieving union with the divine.
In addition to Theosophy and the Golden Dawn, the 19th century also saw the rise of spiritualism, which involved communication with spirits through mediums, and the practice of astrology, which was revived during this period and presented as a legitimate form of divination. These practices, along with Theosophy and other occult traditions, were part of a broader spiritual renaissance that sought to reclaim ancient wisdom and integrate it with modern life. This resurgence of interest in the mystical and the occult was facilitated by the increasing availability of spiritual and esoteric literature, the growth of occult societies, and the spread of these ideas through public lectures, books, and periodicals. The occult revival was also influenced by the growing interest in psychic phenomena, paranormal activity, and the boundaries of human consciousness, with many believing that scientific advances in psychology and physics could unlock new realms of spiritual experience. The popularity of figures like Blavatsky, Aleister Crowley, and Madame de Thebes, a famous fortune-teller in Paris, furthered the public’s fascination with occult ideas, with some viewing these figures as possessing unique access to hidden spiritual knowledge.
The legacy of Theosophy and the 19th-century occult revival continues to resonate in contemporary spiritual and New Age movements. Theosophy’s emphasis on reincarnation, karma, and the quest for spiritual enlightenment provided a foundation for many later movements, including the New Age movement of the 20th century. Ideas about the existence of hidden spiritual dimensions, the power of the mind to shape reality, and the exploration of mystical experiences continue to influence a wide range of spiritual and psychological practices today. In addition, Theosophy and other occult movements of the 19th century played a significant role in the intellectual climate of the period, challenging the materialism of the scientific establishment and offering an alternative framework for understanding the mysteries of existence. While these movements were often met with skepticism and criticism from mainstream scientists and philosophers, they contributed to a broader cultural and spiritual exploration that remains a defining feature of modern Western spirituality. Whether through the enduring popularity of Theosophy, the rise of New Age practices, or the continuing influence of the occult on art, literature, and culture, the ideas and practices of the 19th-century occult revival continue to shape spiritual and philosophical discourse today.
Evolution and Racism

Social Darwinism and pseudoscientific racism became intertwined in the intellectual currents of the late 19th century, as theories about evolution and natural selection were misappropriated to justify social hierarchies, colonialism, and racial discrimination. Originating from the scientific ideas put forth by Charles Darwin in his landmark work On the Origin of Species (1859), Darwinian evolution proposed that species evolved through natural selection, with the fittest individuals surviving to pass on their traits to the next generation. However, while Darwin’s work focused on the biological processes of natural selection within species, many later thinkers sought to apply these ideas to human societies. Herbert Spencer, a British philosopher, was one of the primary figures in developing Social Darwinism, a concept that used the metaphor of “survival of the fittest” to argue that the success of certain groups in society was a natural result of their superior biological qualities. Spencer’s work, which sought to apply Darwin’s biological theories to social and economic structures, was influential in shaping the thinking of the time, especially in justifying laissez-faire capitalism and the unequal distribution of wealth and power in society.
Social Darwinism was used as a tool to rationalize and legitimize various forms of social inequality, particularly in relation to race. Proponents of Social Darwinism argued that certain races, nations, or classes were inherently superior to others, based on the idea that they had evolved to be more successful in the struggle for survival. This ideology supported imperialism and colonialism, suggesting that European powers had a natural right to dominate other races, as they were perceived to be more advanced and evolved. Racial hierarchies, therefore, were not seen as social constructs but as natural divisions between more and less evolved groups. In the United States, Social Darwinism was often invoked to justify the subjugation of African Americans and the indigenous populations of the Americas, while in Europe, it was used to rationalize the exploitation and oppression of colonized peoples in Africa, Asia, and the Pacific. The pseudoscientific application of Darwinian ideas to human societies provided a veneer of legitimacy to the prevailing racial prejudices of the era, reinforcing the status quo of racial inequality.
Alongside Social Darwinism, the development of pseudoscientific racism in the 19th century found fertile ground in the growing popularity of phrenology, craniometry, and other pseudo-scientific disciplines that claimed to measure the intellectual and moral characteristics of different races based on physical features. Phrenology, which posited that the shape and size of a person’s skull could reveal their intellectual capabilities and personality traits, was widely accepted in academic and medical circles during much of the 19th century. Figures like Franz Joseph Gall and Johann Spurzheim argued that certain physical characteristics of the skull could be used to determine superiority or inferiority among races. These ideas were soon applied to race science, with European scientists asserting that people of African, Asian, and Indigenous descent had smaller skulls and inferior brain structures, supposedly supporting theories of racial hierarchy. Such claims were used to justify the exclusion and mistreatment of non-European peoples, bolstering racist attitudes in both the scientific community and the general public.
The rise of eugenics in the late 19th and early 20th centuries further entrenched the pseudoscientific basis for racist ideologies. Francis Galton, a cousin of Charles Darwin, is often credited with founding the field of eugenics, which sought to improve the human gene pool through selective breeding, sterilization, and other interventions. Eugenicists argued that society should promote the reproduction of individuals with “desirable” traits, typically those associated with the white European middle and upper classes, and limit the reproduction of individuals with “undesirable” traits, including those who were poor, mentally ill, or of non-European descent. Eugenics became a powerful pseudoscientific movement that not only fueled racial discrimination but also contributed to the rise of scientific racism, particularly in the United States and Europe. The eugenics movement influenced social policies, including immigration restrictions, forced sterilizations, and the implementation of racial quotas in many countries, and it provided intellectual justification for the racial hierarchies that were being constructed during the period. The ideas of racial purity and genetic superiority perpetuated by eugenicists were widely accepted, despite the fact that they had no basis in genuine scientific evidence.
The pseudoscientific racism of the 19th century was also reflected in the scientific establishment’s contributions to racial theory, which sought to give intellectual legitimacy to prevailing racist attitudes. Leading anthropologists, biologists, and scholars of the time often divided humanity into distinct “racial types,” with Caucasian, Mongoloid, and Negroid being the most commonly recognized categories. These racial typologies were used to support the claim that certain races were inherently more intelligent, moral, and capable of civilization than others. Louis Agassiz, a prominent 19th-century naturalist and professor at Harvard, was a vocal proponent of polygenism, the idea that the human races were separately created species, arguing that people of African descent were of a fundamentally different origin from whites, a claim that defenders of slavery readily used to justify their subjugation. These ideas reinforced the social and political systems that supported slavery, segregation, and colonialism, and their influence extended far beyond the 19th century, contributing to the racial ideologies of the early 20th century, including the rise of Nazi ideology in Germany.
The legacy of Social Darwinism and pseudoscientific racism continued to shape intellectual and political discourse long after the 19th century. Although many of the pseudoscientific claims of racial superiority have been thoroughly discredited by modern genetics and anthropology, the impact of these ideologies continues to be felt in contemporary debates about race and inequality. The historical use of eugenics to justify racial segregation, forced sterilizations, and immigration restrictions serves as a cautionary tale about the dangers of using science to legitimize social hierarchies and discrimination. In the 20th century, the horrors of World War II and the Holocaust exposed the devastating consequences of racist pseudoscience, leading to a widespread rejection of Social Darwinism and eugenics as valid scientific disciplines. Nevertheless, the intellectual legacy of 19th-century pseudoscientific racism continues to influence contemporary discussions on race, providing insight into how science can be misused to justify social prejudices and political agendas.
Pseudoscience and Modernity (1900–1950)
Eugenics and Its Consequences

The early 20th century saw the widespread influence of eugenics, a pseudoscientific movement that sought to improve the human gene pool through selective breeding, sterilization, and other interventions. Eugenics gained traction as a prominent scientific and social movement, particularly in Europe and the United States, where it was championed by Francis Galton, the cousin of Charles Darwin, who coined the term. Galton, inspired by his cousin’s theory of natural selection, proposed that human populations could be improved through the scientific application of selective breeding, much as breeders improved livestock. The eugenics movement was grounded in the belief that traits such as intelligence, criminality, and even physical disability were inherited and could be controlled through the management of reproduction. This idea quickly gained acceptance among intellectuals, policymakers, and the general public, promising to reduce social ills and create a “better” society by eliminating undesirable traits from the gene pool.
In the United States, eugenics became a powerful political force, influencing immigration laws, sterilization programs, and social policies. One of the most significant political consequences of the eugenics movement was the passage of restrictive immigration laws in the early 20th century, particularly the Immigration Act of 1924, which aimed to limit the entry of immigrants from Southern and Eastern Europe. Eugenicists argued that these populations were genetically inferior and contributed to the decline of American society. The movement also influenced the sterilization of individuals deemed “genetically unfit” to reproduce, including those with mental disabilities, the poor, and racial minorities. By the 1930s, more than 30 states in the U.S. had enacted sterilization laws, resulting in thousands of people being forcibly sterilized without their consent. These laws were based on the belief that certain groups of people should not be allowed to reproduce because they were seen as genetically inferior or prone to criminal behavior, thus perpetuating social problems.
Eugenics also played a major role in shaping social policies related to the “undesirable” populations, such as the mentally ill, the poor, and racial minorities. It was argued that by preventing these groups from reproducing, society could avoid future generations of poverty, criminality, and mental illness. This led to the implementation of eugenic marriage restrictions, which prohibited individuals considered “genetically inferior” from marrying or having children. These policies disproportionately targeted African Americans, immigrants, and individuals with mental or physical disabilities, reinforcing existing racial and social hierarchies. The movement was supported by many prominent figures in American society, including political leaders, physicians, and social reformers. Leading eugenicists like Harry H. Laughlin and Charles Davenport pushed for legislation that would prevent certain individuals from marrying or reproducing, and their influence can be seen in policies such as compulsory sterilization and marriage restrictions.
The political impact of eugenics was not limited to the United States; it also had significant consequences in Nazi Germany. Adolf Hitler and other members of the Nazi regime were heavily influenced by eugenic ideas and used them to justify their racial policies. The Nuremberg Laws, passed in 1935, were explicitly based on eugenic and racial principles and aimed to preserve the “purity” of the Aryan race by prohibiting marriages between Germans and Jews, while the 1933 Law for the Prevention of Hereditarily Diseased Offspring had already mandated the sterilization of individuals deemed “genetically defective.” The Nazi regime also adopted the eugenic practice of forced euthanasia programs, in which individuals with physical and mental disabilities were systematically killed in a process known as Aktion T4. The eugenics movement in Germany paved the way for the Holocaust, where millions of Jews, Roma, and disabled individuals were exterminated under the guise of improving the racial and genetic makeup of the population. The Nazis’ embrace of eugenics, grounded in the notion of racial purity, was a horrific extension of the pseudoscientific ideals that had gained prominence in the early 20th century.
As the atrocities of World War II and the Holocaust became known, eugenics began to lose its credibility and influence, particularly after the Nuremberg Trials exposed the full extent of the Nazi regime’s racial policies. The movement, which had once been considered a legitimate scientific and social enterprise, was increasingly discredited. However, its legacy continued to shape scientific and political discourse in the postwar years. The intellectual and political consequences of eugenics, particularly its role in justifying state-sponsored racism and discrimination, left a lasting imprint on modern social policy and ethical debates. The idea of controlling human reproduction through state intervention was sharply criticized in the postwar period, and the human rights movement gained momentum, advocating for the protection of individual freedoms and the rejection of any form of coercion in reproductive matters. Despite the decline of the eugenics movement, its influence is still evident in contemporary debates surrounding genetics, reproductive rights, and bioethics, where the shadow of its historical abuses continues to inform discussions about genetic manipulation and the boundaries of state control over human reproduction.
Fringe Archaeology

The legend of Atlantis has been one of the most enduring myths in Western history, sparking fascination and speculation for centuries. First introduced by the ancient Greek philosopher Plato in his dialogues Timaeus and Critias around 360 BCE, Atlantis was described as a vast, advanced civilization that existed some 9,000 years before Plato’s time. According to Plato, Atlantis was a powerful island kingdom located beyond the “Pillars of Hercules” (the modern Strait of Gibraltar), with monumental architecture, rich resources, and a complex culture. After attempting to conquer Athens, however, Atlantis supposedly sank into the ocean in a single catastrophic event. Although Plato’s account was likely intended as an allegory or a critique of imperialism and moral decay, later generations quickly adopted the story of Atlantis as a possible historical reality. The myth has continued to capture the imagination, and numerous theories have emerged that attempt to locate Atlantis, ranging from ideas grounded in geography to those rooted in mysticism and the paranormal.
In the centuries following Plato’s writings, the story of Atlantis evolved and became more deeply enmeshed in the broader tapestry of fringe archaeology. Scholars, explorers, and adventurers have sought evidence of Atlantis in diverse locations around the world, from the Mediterranean to the Caribbean, with theories placing the fabled city on islands such as Santorini (Thera), Crete, and even the Azores. One of the earliest and most influential proponents of the Atlantis theory was the American writer and politician Ignatius Donnelly, who argued in his 1882 book Atlantis: The Antediluvian World that Atlantis was the origin of all known civilizations. Donnelly connected the myth of Atlantis to accounts of other ancient civilizations, suggesting that their knowledge and cultural achievements had been passed down from a lost advanced civilization. He proposed that a catastrophic event, possibly a flood, had wiped out Atlantis, leading to the dispersal of its people and their knowledge to other parts of the world. This idea laid the groundwork for the pseudo-historical narratives that would dominate much of the fringe research surrounding Atlantis in the 20th century.
The popularity of Atlantis theories grew during the 20th century, particularly with the rise of alternative archaeology and the development of a subculture of thinkers who were critical of mainstream historical narratives. Many fringe archaeologists and pseudo-historians argue that the mainstream academic community deliberately suppresses evidence of lost civilizations, such as Atlantis, in favor of more conventional historical models. Some proponents of these ideas suggest that Atlantis was not only an advanced civilization in terms of technology and culture but also had access to esoteric knowledge or extraterrestrial influences, giving rise to theories that intertwine ancient astronauts and alternative history. Authors like Erich von Däniken, whose work Chariots of the Gods? (1968) popularized the idea of ancient aliens, often incorporate the Atlantis narrative into their broader theories about humanity’s ancient origins, suggesting that the Atlanteans were advanced beings who imparted their knowledge to early humans before their civilization was destroyed. These ideas, while not supported by credible archaeological evidence, have had a profound impact on popular culture, contributing to a growing subgenre of speculative and fringe archaeology that seeks to rewrite the past in sensational and often fantastical terms.
The study of fringe archaeology, which includes the search for Atlantis, is often criticized by mainstream archaeologists and historians for its lack of rigorous methodology and its reliance on conspiracy theories and unverified evidence. While legitimate archaeological research rests on the careful examination of physical evidence, historical records, and peer-reviewed scholarship, fringe archaeology tends toward speculative claims, sensationalism, and the misinterpretation of historical and archaeological data. Many proponents of Atlantis theories, for example, cite mysterious underwater structures or unexplained ruins as potential remnants of the lost city, even though these features usually have plausible natural or human-made explanations. Underwater formations near the coastlines of various islands, such as Bimini in the Bahamas, have been touted by some Atlantis theorists as evidence of the sunken civilization, yet they have been shown to be natural formations or the result of later human activity. In many cases, fringe theorists selectively interpret archaeological evidence or ignore contradictory data in order to support their pre-existing beliefs, leading to conclusions that are not supported by scientific consensus.
Despite the lack of credible evidence, the search for Atlantis and the allure of lost civilizations continue to captivate the public imagination, feeding into broader discussions about human history and the possibility of ancient advanced civilizations that existed before recorded history. Atlantis theories often intersect with New Age and mystical beliefs, offering a sense of mystery and wonder that appeals to those seeking to break free from conventional views of history. Many people find these ideas comforting because they offer a narrative of a golden age that was lost but might one day be rediscovered, providing hope for a return to a more enlightened and advanced society. However, critics argue that the persistence of such fringe theories can hinder legitimate scientific inquiry and promote misinformation. The popularity of Atlantis theories in the media, especially in books, documentaries, and films, ensures that the legend remains a vibrant part of popular culture, where it continues to inspire both academic study and speculative fiction. The legacy of Atlantis as a symbol of human potential and lost wisdom persists, even as mainstream archaeology and history largely dismiss the idea of a literal, historical Atlantis.
Nationalism and the Nazis
Pseudoscientific nationalism emerged as a powerful and dangerous ideology in the 19th and 20th centuries, often intertwined with racial theories and claims of genetic superiority. At its core, pseudoscientific nationalism promoted the idea that certain nations or ethnic groups were inherently superior to others, based on fabricated or distorted interpretations of science and biology. This brand of nationalism sought to justify policies of ethnic purity, racial hierarchy, and territorial expansion, often using pseudoscientific theories to lend legitimacy to its claims. One of the most infamous manifestations of pseudoscientific nationalism occurred during the rise of the Nazi regime in Germany, where occultism, eugenics, and racial science were used to support an ideology of Aryan supremacy and the systematic oppression and extermination of non-Aryan peoples. The Nazis’ embrace of pseudoscience was not limited to their racist policies but extended to a wide range of occult beliefs and mythical ideologies, which helped shape the totalitarian vision of the Third Reich.
One of the central aspects of Nazi occultism was the belief in the Aryan race as a superior, almost mythical, group destined to dominate the world. The Nazis appropriated and distorted various elements of occultism and mysticism, mixing these ideas with pseudo-historical narratives about the ancient origins of the Aryan people. Influenced by ideas such as those of the 19th-century occultist Helena Blavatsky, whose Theosophy promoted a vision of an ancient Aryan race with extraordinary spiritual and physical abilities, Nazi ideology sought to link the Germanic peoples to a mystical, lost civilization. This blend of esoteric beliefs and pseudoscientific racial theories was particularly evident in the efforts of key figures like Heinrich Himmler, head of the SS, who was deeply interested in occult rituals, mystical traditions, and the idea of a Nordic master race. Himmler even commissioned research into the mystical past of the Aryans, establishing the Ahnenerbe, a research institute within the SS dedicated to tracing the supposed lost knowledge and ancestral heritage of the ancient Aryan civilization. Himmler’s obsession with the occult, combined with pseudoscientific ideas about race, played a crucial role in shaping the Nazi worldview and its policies of racial purity.
In the realm of racial science, Nazi ideology was heavily influenced by eugenics and the belief that the “genetic fitness” of a nation could be improved through selective breeding. The Nazis adopted a distorted version of Darwinian evolutionary theory, one that focused not on adaptation to the environment but on the biological superiority of certain racial groups over others. This pseudoscientific approach led to policies of forced sterilization, the elimination of “undesirable” individuals through euthanasia programs, and the genocide of Jews, Roma, and other minorities. Nazi propaganda portrayed Jews and other non-Aryans as biologically inferior and linked their racial characteristics to criminality, disease, and degeneration. The pseudoscience behind Nazi racial policy was not based on any legitimate scientific understanding but was instead a fabrication designed to justify the state-sponsored murder of millions of people. Phrenology, craniometry, and other pseudoscientific disciplines were used by Nazi “scientists” to support these views, further demonstrating how pseudoscience was instrumental in validating the extreme nationalism and racism that underpinned the Third Reich’s agenda.
The use of pseudoscientific nationalism extended beyond the Nazis and had broader consequences for the development of racial ideologies throughout the 20th century. Many European nationalist movements, particularly in the aftermath of World War I, adopted elements of racial theory and pseudoscience to support the idea of ethnic homogeneity and national superiority. The notion of an ethnically pure nation was propagated by these movements, which often sought to erase or marginalize minority groups in the name of national identity. In the case of Nazi Germany, this vision of a pure, superior nation was coupled with the idea of Lebensraum (living space), the expansion of German territory into Eastern Europe, which was justified on the grounds that the Aryan race needed more space to thrive. This expansionist ideology, grounded in pseudoscientific ideas about race and genetics, led to the displacement and extermination of millions of people. The disastrous impact of this pseudoscientific nationalism is evidenced in the Holocaust, one of the most horrific genocides in human history, which was made possible by the Nazis’ fusion of pseudoscientific racism with aggressive nationalism and militarism.
The legacy of pseudoscientific nationalism remains a cautionary tale about the dangers of using pseudoscience to justify extreme political ideologies. The Nazi regime’s use of pseudoscientific theories to support its racist, xenophobic, and expansionist policies was not an isolated incident, but part of a broader historical trend of racial ideologies being bolstered by pseudoscience. While the immediate aftermath of World War II led to the discrediting of Nazi racial theory and the rejection of eugenics as a legitimate scientific discipline, the specter of pseudoscientific nationalism still lingers in contemporary political discourse. Modern-day far-right movements and neo-Nazi groups continue to propagate racist and xenophobic ideologies, often drawing on discredited racial theories and the same kinds of pseudoscientific justifications used by the Nazis. The continuing presence of these ideas in society underscores the importance of safeguarding scientific integrity and using sound, evidence-based reasoning to combat ideologies that seek to exploit science for political gain.
Post-War Pseudoscience and the Rise of Conspiracism
Little Green Men

The concept of UFOs (Unidentified Flying Objects) and alien visitations became a significant part of 20th-century popular culture, gaining widespread attention in the aftermath of World War II. While tales of strange flying objects in the sky existed throughout history, it was the post-war period that saw the rapid emergence of UFOs as a modern phenomenon, largely due to a combination of technological advancements, increased media attention, and the cultural climate of the time. The Roswell Incident in 1947, in which a mysterious object crashed near Roswell, New Mexico, and was later claimed by the military to be a weather balloon, is often cited as the starting point for the modern UFO craze. Many theorists and UFO enthusiasts rejected the military explanation and insisted that the object was of extraterrestrial origin, leading to the formation of numerous conspiracy theories and the popularization of the idea that aliens had visited Earth.
The Roswell incident was followed by a wave of sightings throughout the 1950s, many of which were reported by civilians who claimed to have witnessed strange flying objects in the sky. One of the most notable of these was the Washington, D.C. UFO incident of 1952, where multiple unidentified objects were spotted flying over the nation’s capital. This event brought UFOs to the forefront of public attention and spurred debates about whether these objects were of alien origin or the result of secret military technology. U.S. government agencies, including the Air Force, were involved in investigating these sightings, with programs like Project Blue Book being established to analyze reports of UFO encounters. While the government often dismissed these sightings as misidentified aircraft or natural phenomena, the reports only fueled speculation about extraterrestrial life. The idea that UFOs were vehicles from another planet became more ingrained in the public consciousness, with sightings becoming more frequent and more bizarre.
In the 1960s and 1970s, the phenomenon of UFOs and alien visitations became deeply entwined with conspiracy theories, paranormal beliefs, and the counterculture movements of the time. Books like Chariots of the Gods? by Erich von Däniken, published in 1968, popularized the ancient astronaut hypothesis, which held that ancient civilizations had been visited by extraterrestrials and that these encounters were the source of human advances in technology, culture, and religion. This idea became a cornerstone of UFO lore, with many proponents claiming that alien life had been interacting with humans for millennia, influencing everything from the construction of the pyramids to the development of agriculture. The rise of UFO cults and the increasing popularity of alien abduction stories further fueled belief in extraterrestrial intervention in human affairs.
One of the most famous and enduring elements of the UFO and alien visitation phenomenon is the idea of alien abductions, which emerged in the 1960s and became a central feature of UFO lore. The story of Betty and Barney Hill, an American couple who claimed to have been abducted by aliens in 1961, is often considered the first widely publicized abduction story. Their account included detailed descriptions of being taken aboard a spacecraft, subjected to medical examinations, and then released with little memory of the event, which was later recovered through hypnosis. This abduction narrative became a template for countless other similar stories in the following decades. In these accounts, individuals often described being abducted by beings with large heads, black eyes, and grey skin, known as “greys,” and undergoing intrusive medical procedures, often involving the extraction of bodily fluids or the implantation of mysterious objects. These alien abduction stories became a global phenomenon, with thousands of people claiming to have experienced similar encounters, though skeptics argue that these accounts are often the result of sleep paralysis, suggestibility, or psychological trauma.
As the 20th century progressed, UFO sightings became increasingly connected to government cover-ups and secret military operations. The idea that the government was hiding evidence of extraterrestrial encounters became a central theme in UFO conspiracy theories. Prominent among these theories is the belief that the U.S. government, particularly the military-industrial complex, was concealing the truth about UFOs and alien visitation, possibly in order to exploit advanced alien technology for military purposes. The Majestic 12 documents, which surfaced in the 1980s, claimed to be evidence of a secret group within the U.S. government tasked with investigating UFOs and extraterrestrial encounters. While these documents were quickly debunked as hoaxes, they added to the widespread belief that the government was actively suppressing the truth. The notion that alien contact had been made, but was being kept secret from the public, became a driving force behind UFO research and activism, leading to calls for transparency and disclosure.
The 1980s and 1990s saw a shift in the way UFOs were portrayed in popular media, with a more prominent focus on alien visitation as both a threat and an opportunity for humanity. This shift was reflected in the success of shows like The X-Files, which blended government conspiracies, alien abductions, and supernatural phenomena into a compelling narrative. The show became a cultural touchstone, shaping how many people viewed UFOs and extraterrestrials. Additionally, books, documentaries, and films that delved into the idea of UFOs as extraterrestrial visits continued to capture public attention, with some arguing that extraterrestrial beings were either observing humanity or were actively trying to make contact. At the same time, however, a more skeptical view of UFOs began to emerge, particularly as people began to recognize the influence of psychological factors and hoaxes in many of the more extreme UFO and alien visitation claims.
Throughout the late 20th century, the U.S. government maintained an official stance of skepticism regarding the existence of UFOs, often attributing sightings to misidentified aircraft, weather phenomena, or experimental military technology. In 2017, however, Navy footage of unidentified aerial phenomena (UAPs) became public, and the Pentagon formally released the videos in 2020, reigniting public interest in the possibility that UFOs might represent something beyond conventional explanations. The release of this footage, along with subsequent reports, including the 2021 preliminary assessment from the Office of the Director of National Intelligence, further fueled speculation that the U.S. government had withheld information about UFOs and extraterrestrial life for decades. While the government has not confirmed any contact with alien life, the growing interest in UAPs reflects the continuing belief that UFO sightings could be evidence of something extraterrestrial.
By the end of the 20th century, the idea of alien visitations had become firmly entrenched in popular culture, with UFOs being a central part of the modern mythos. The Roswell Incident, alien abduction stories, and government conspiracies were all elements of a larger narrative that wove together science fiction, conspiracy theories, and personal beliefs into a complex and often contradictory framework. For many, the idea that humans were not alone in the universe and that extraterrestrial beings had visited Earth was no longer just a subject of speculation but a deeply ingrained part of contemporary culture. However, the persistent mystery surrounding UFOs and alien visitation also highlighted the uncertainty and ambiguity of the phenomenon. Despite decades of research and investigation, the question of whether we are being visited by extraterrestrial beings remains unanswered, leaving UFOs and alien visitations as one of the most enduring mysteries of the 20th century.
As the 21st century dawned, the questions surrounding UFOs and alien visitation continued to evolve, especially with the growing interest in exoplanetary exploration and the search for extraterrestrial life. The development of space exploration technologies and the discovery of planets in the habitable zone of distant stars led to renewed speculation about the likelihood of alien life. At the same time, the increasing openness of government agencies, especially the Pentagon, about unidentified aerial phenomena has reignited the debate about the nature of UFOs. As scientific investigations into UFOs continue, the legacy of the 20th century’s fascination with alien visitations endures, and the possibility that humanity is not alone in the universe remains an enduring and compelling question.
Brain Waves
During the Cold War, the intense geopolitical rivalry between the United States and the Soviet Union extended into the realm of psychic research, as both superpowers sought to gain an edge over each other not only in conventional warfare but also in unconventional methods, including the exploration of the mind’s potential. This period, which spanned from the end of World War II through the 1980s, saw the rise of parapsychology as a field of interest for intelligence agencies and military organizations. Both the U.S. and the Soviet Union were motivated by the belief that harnessing psychic powers, such as telepathy, remote viewing, and psychokinesis, could offer an alternative means of gathering intelligence, disrupting enemy activities, or even influencing the course of events without the need for conventional warfare. Governments and agencies, including the CIA and KGB, began funding research into these paranormal phenomena, hoping to discover scientific proof of abilities that could be used in military operations.
One of the most well-known projects related to Cold War psychic research was the U.S. government’s Stargate Program, which was initiated in the early 1970s. The program sought to investigate the potential of remote viewing, the ability to gather information about distant or unseen targets through psychic means. The impetus for the Stargate Program came after reports that the Soviet Union was heavily investing in similar research, prompting U.S. intelligence agencies to explore the possibility that the Soviets had already achieved breakthroughs in psychic warfare. The CIA, along with the Defense Intelligence Agency (DIA) and the Army Intelligence and Security Command (INSCOM), began funding research into the use of psychics for military and intelligence purposes. Over the course of its existence, the Stargate Program investigated a variety of psychic phenomena, including clairvoyance, telepathy, and psychic healing, and involved a number of individuals who were claimed to possess extraordinary psychic abilities, some of whom were trained to attempt to retrieve information about specific military targets.
One of the most famous individuals associated with the Stargate Program was Ingo Swann, a prominent psychic who claimed to have developed his abilities to the point of accurately describing remote locations and objects. Swann’s work became central to the program’s investigation into the potential military uses of psychic phenomena. He was one of the primary figures involved in the program’s remote viewing experiments, in which subjects were tasked with describing locations and objects hidden from their view, sometimes, proponents claimed, with stunning accuracy. One of the most celebrated claims involved a 1973 session in which Swann purportedly described features of Jupiter, including a ring, before any spacecraft had reached the planet. While the results of these experiments were often controversial, with critics attributing the findings to chance or simple fraud, proponents insisted that the successes demonstrated the potential of psychic abilities for intelligence gathering, espionage, and military applications.
The Soviet Union also heavily invested in psychic research, and it was rumored that they had developed their own extensive programs dedicated to exploring the use of paranormal abilities in warfare. The Soviet government’s interest in psychic phenomena was particularly evident in their research into telepathy and mind control. While details about Soviet programs were not as publicly available as their American counterparts, reports surfaced suggesting that the KGB and other agencies were studying ways to use psychic abilities for espionage, manipulation, and psychological warfare. The Soviet Union’s focus on psychotronics, the study of mind control and psychic influences on matter, led to the development of secret research facilities where individuals with purported psychic abilities were tested and trained. Many of these individuals were reported to have demonstrated feats of mind control, such as the ability to alter the behavior of animals or even influence the physical world with their thoughts. Despite a lack of solid scientific evidence supporting these claims, the psychic arms race between the superpowers fueled interest in these practices, and researchers on both sides continued to experiment with the possibilities of psychic warfare.
Despite the immense interest and investment in Cold War psychic research, the results were inconclusive, and by the end of the 1980s, the U.S. government began to scale back its funding for such programs. The Stargate Program was officially declassified in the 1990s, and many of the findings were released to the public, showing that, while some of the experiments had produced intriguing results, the scientific validity of psychic phenomena remained highly dubious. Official reports indicated that while some remote viewing sessions were successful, they could not be reliably reproduced or consistently produce useful intelligence. Skeptics argued that the results of these studies could be explained by statistical anomalies, the placebo effect, or simple trickery. As interest in parapsychology waned and new technologies for intelligence gathering, such as satellites and electronic surveillance, became more advanced, funding for psychic research all but disappeared. However, the legacy of Cold War psychic research continues to resonate, with many believing that these programs were just the tip of the iceberg in the pursuit of secret knowledge about the potential of the human mind.
Though Cold War psychic research did not yield definitive or widely accepted results, its impact on popular culture and public perception of the supernatural was profound. The idea that governments could harness psychic abilities to further their agendas became a staple of science fiction and conspiracy theories, with numerous books, films, and television shows exploring themes of mind control, government manipulation, and the hidden potential of human consciousness. The Stargate Program, in particular, became a key inspiration for works of fiction such as the TV series “The X-Files” and the Stargate franchise, which merged the idea of government conspiracies with the speculative possibility of extraterrestrial involvement in psychic research. While the scientific community largely discredited these practices, the intrigue surrounding the use of psychic powers for intelligence purposes during the Cold War continued to shape the public’s fascination with the intersection of the paranormal and the military-industrial complex.
Running Wild

The pseudoscientific media boom in the 20th century marked a significant shift in the way pseudoscience was disseminated to the public. With the rise of mass media, including television, radio, and eventually the internet, pseudoscience began to flourish in ways that were previously unimaginable. These platforms allowed pseudoscientific ideas to reach a wide audience, often with little to no critical oversight, leading to the rapid spread of fringe theories and unsupported claims. The post-World War II era, particularly the 1950s and 1960s, was a period of intense cultural and technological change, and this context provided fertile ground for the growth of pseudoscientific media. Ideas such as UFOs, ancient astronauts, paranormal phenomena, and psychic powers were featured prominently in new media outlets, attracting both public fascination and skepticism. Television shows, books, and later, the internet, became key platforms for pseudoscientific discourse, with sensationalism and mysticism often being packaged as entertaining or even educational content.
One of the most notable factors driving the pseudoscientific media boom was the rise of documentary-style television programming and sensationalistic news shows. During the mid-20th century, television emerged as a powerful tool for shaping public opinion and presenting a wide range of topics, often without the rigorous fact-checking that is expected in scientific or academic settings. Shows like “In Search of…”, hosted by Leonard Nimoy in the 1970s, explored topics such as UFOs, cryptozoology, and mysterious phenomena with a tone that invited the audience to question the unknown, regardless of whether there was any real evidence to support the claims. This type of programming capitalized on the public’s fascination with the unknown, creating an aura of mystery around subjects that were, in many cases, based on little more than speculation or hoaxes. Similar programs, such as “Unsolved Mysteries”, also used a blend of dramatic reenactments and real-life investigations to entertain while pushing pseudoscientific ideas into the mainstream, further popularizing concepts like alien abductions, ghost sightings, and unexplained disappearances.
In addition to television, print media also played a crucial role in the pseudoscientific media boom. Magazines such as “Fate” (which began publishing in 1948) and later “The National Enquirer” became infamous for their sensationalist coverage of paranormal events, conspiracy theories, and other fringe topics. These publications blurred the line between entertainment and journalism, offering speculative stories about UFOs, astrology, Bigfoot, and Atlantis, alongside reports of celebrity scandals and bizarre human interest stories. The allure of the extraordinary and the strange drew readers in, and the lack of rigorous journalistic standards meant that many of these stories were presented as fact, further legitimizing pseudoscientific claims. As these media outlets grew in popularity, they contributed to the normalization of pseudoscience in American culture, making it a regular part of the daily media diet for millions of people. The combination of entertainment and pseudoscience created a compelling narrative, one that resonated with individuals searching for answers to the mysteries of the universe.
The internet revolution of the 1990s and early 2000s amplified the reach of pseudoscience to an even broader audience. The advent of websites, online forums, and social media allowed for the rapid exchange of information, but it also created an environment where misinformation and pseudoscientific ideas could spread with little scrutiny. YouTube, for example, became a hotbed for pseudoscientific content, where self-proclaimed experts could share videos promoting everything from anti-vaccine conspiracies to flat Earth theories without the constraints of traditional media oversight. Online platforms such as Reddit, Facebook, and Twitter further facilitated the growth of pseudoscience by providing spaces where like-minded individuals could share articles, videos, and other content without having to confront rigorous scientific critique. These platforms created echo chambers in which pseudoscientific ideas could proliferate and reinforce themselves, often distorting scientific facts or presenting them in misleading ways to suit particular ideologies. The ability to find and connect with others who shared similar beliefs reinforced the spread of pseudoscience, making it increasingly difficult to discern credible information from fabricated or distorted claims.
The pseudoscientific media boom in the 20th century had profound consequences for the public understanding of science and knowledge. While many individuals remained skeptical of pseudoscientific claims, the sheer volume of sensationalized content created an atmosphere where dubious ideas could be taken seriously, especially when presented in an authoritative or charismatic manner. The psychological appeal of pseudoscience, with its promises of hidden knowledge and mystical truths, was amplified by its media representation. For many, pseudoscientific narratives offered a sense of comfort or control in a rapidly changing and often unsettling world. However, the rise of pseudoscientific media also sparked a counter-movement from the scientific community, which sought to challenge these claims and promote critical thinking. Organizations like the Committee for Skeptical Inquiry (CSI), founded in 1976, began actively debunking pseudoscientific claims and advocating for rational inquiry. Despite this, the power of media to shape public perception and fuel the growth of pseudoscience remains a significant challenge, with many of these ideas continuing to thrive in the modern media landscape.
New Age Movements and the “Wellness” Industry
Rocks and Chakras

The rise of crystal healing, chakras, and energy medicine within the wellness industry represents a fascinating intersection of spirituality, pseudoscience, and modern health trends. These practices, often rooted in ancient belief systems, gained significant popularity in the late 20th and early 21st centuries, especially within the New Age movement. Crystal healing, which involves using specific stones or crystals to promote physical, emotional, and spiritual well-being, became a mainstream element of alternative medicine. Practitioners believe that each crystal possesses unique energetic properties that can influence the body’s energy field, helping to alleviate ailments, balance emotions, or even protect against negative influences. The healing properties of these stones are often linked to the chakra system, an ancient concept that originated in Hindu and Buddhist traditions. According to chakra theory, the body has seven primary energy centers, each of which corresponds to different aspects of health, personality, and well-being. Crystals are said to align with and support these energy centers, creating balance and harmony.
The proliferation of energy medicine, a broad term that encompasses practices such as Reiki, acupuncture, and crystal healing, represents a key aspect of the wellness industry’s shift toward holistic and alternative health practices. This trend was fueled by the growing dissatisfaction with traditional Western medicine, particularly in response to the perceived over-reliance on pharmaceutical solutions and invasive procedures. Many individuals turned to energy medicine as a gentler, more natural alternative that emphasized the importance of the mind-body connection and the belief that health is influenced not only by physical factors but also by the flow of energy within the body. In the case of crystal healing, practitioners often suggest that crystals can absorb, store, and transmit energy, acting as conductors or amplifiers for the body’s natural healing energy. The chakra system, often linked to energy medicine, became an increasingly popular framework for understanding and addressing imbalances in the body’s energy field, leading to the widespread use of various holistic practices to restore harmony.
The commercialization of these practices played a significant role in their rise within the wellness industry. As demand for alternative health solutions grew, especially during the 1980s and 1990s, crystal healing and chakra therapies became central components of the wellness market. Crystal stores, wellness centers, and yoga studios began offering a variety of crystal-based products, including healing stones, jewelry, elixirs, and energy healing sessions. These establishments capitalized on the allure of the mystical and the promise of spiritual healing, creating an industry that merged ancient wisdom with modern marketing strategies. Celebrities and influencers also played a major role in popularizing these practices, often sharing their own experiences with crystals, energy healing, and chakra balancing on social media platforms. This visibility helped normalize the use of crystals and energy medicine as part of a broader wellness lifestyle, appealing to individuals seeking non-invasive and self-empowering methods of health maintenance.
However, despite their widespread popularity, crystal healing and energy medicine have been met with criticism from the scientific community. Critics argue that there is little to no empirical evidence to support the claims that crystals possess healing properties or that energy fields within the body can be manipulated in the ways that practitioners suggest. Many of these practices are rooted in anecdotal evidence or placebo effects, where the individual’s belief in the efficacy of the treatment may contribute to perceived improvements in health. The chakra system, for example, while having ancient spiritual and philosophical origins, lacks any biological or scientific foundation. Despite this, the wellness industry has continued to thrive by focusing on the psychological benefits of these practices, including stress relief, relaxation, and a sense of spiritual connection. For many participants, the belief in the healing properties of crystals and chakras is not as important as the feel-good experience of engaging in a practice that encourages personal empowerment and inner peace.
The rise of crystal healing, chakras, and energy medicine reflects broader shifts in society’s approach to health, well-being, and the mind-body connection. In an increasingly complex world, many individuals seek holistic practices that offer a sense of control and agency over their own health, and these practices provide a path to exploring spirituality and self-care. While the scientific validity of these practices may remain in question, the wellness industry continues to embrace them as part of a broader cultural movement towards non-traditional healing. The spiritual and emotional appeal of these practices remains potent, and they continue to be integrated into mainstream wellness routines alongside more conventional approaches to health. As long as individuals find meaning and healing in these practices, crystal healing and energy medicine will likely remain key components of the modern wellness landscape, regardless of the lack of scientific backing.
Antivaxxers
The history of anti-vaccine sentiment can be traced back to the very origins of vaccination, beginning in the late 18th century. Edward Jenner, an English physician, developed the first successful smallpox vaccine in 1796, based on the observation that milkmaids who had contracted cowpox seemed immune to smallpox. While Jenner’s discovery was revolutionary and saved countless lives, it also generated significant public skepticism. Early opponents of vaccination argued that it was unnatural or dangerous, with fears that injecting material from an animal (cowpox) into humans could lead to unforeseen consequences. This resistance was fueled by religious beliefs, fear of government intervention, and a general mistrust of medical authority, which would continue to shape the anti-vaccine movement for centuries to come. Although the smallpox vaccine eventually became widely accepted and led to the eventual eradication of the disease, the seeds of resistance to vaccination were planted and would resurface throughout history, particularly in the 19th and 20th centuries.
By the late 19th and early 20th centuries, the development of new vaccines, particularly for diseases like diphtheria, tetanus, and pertussis (whooping cough), brought forth a new wave of anti-vaccine sentiment. During this period, vaccines were often seen as both a medical breakthrough and a symbol of state authority, as governments in many countries began to mandate vaccinations for children. This led to backlash, particularly among parents who feared the safety of new vaccines and were resistant to government interference in personal health decisions. The anti-vaccine movement gained momentum with the rise of anti-compulsory vaccination leagues that opposed government mandates. At the same time, the increasing influence of medical quackery — the promotion of unproven and often dangerous health treatments — played a significant role in shaping public attitudes toward vaccines. Charismatic figures and fraudulent practitioners claimed to offer alternative cures that promised protection against diseases, often with little regard for scientific evidence. The prevalence of quackery in the medical field, combined with misinformation about vaccination risks, fueled distrust in conventional medicine and vaccination campaigns.
The 20th century saw the rise of medical quackery as a more organized and widespread phenomenon. As scientific understanding of infectious diseases and vaccines advanced, so did the pseudoscientific claims surrounding alternative treatments. Quacks and unscrupulous individuals promoted various remedies, including homeopathic treatments, herbal concoctions, and unproven “vaccines”, often marketed as safer or more effective than scientifically backed vaccines. In the early part of the century, for example, Dr. John R. Brinkley, a Kansas doctor, gained fame for his dubious medical practices, including performing goat gland transplants in an attempt to cure male impotence. While these treatments were completely ineffective and often dangerous, Brinkley’s success in marketing his practices to the public demonstrated the growing appeal of alternative medicine and its ability to exploit widespread public fears. The rise of pseudoscientific medical practices, combined with misinformation about the dangers of vaccines, laid the foundation for much of the anti-vaccine sentiment that would continue into the latter half of the 20th century.
The anti-vaccine movement took a more structured form in the second half of the 20th century, especially with the publication of Andrew Wakefield’s 1998 study in The Lancet. Wakefield, then a British gastroenterologist, falsely linked the MMR vaccine (measles, mumps, rubella) to autism in a study that was later revealed to be fraudulent. Despite the eventual retraction of the study and the revocation of Wakefield’s medical license, the damage was done. The paper sparked widespread panic and led to a significant decline in vaccination rates, particularly in the UK and the United States, contributing to a resurgence of vaccine-preventable diseases like measles, mumps, and whooping cough, which had previously been under control. Anti-vaccine rhetoric flourished in the years following Wakefield’s publication, particularly with the rise of the internet and social media, which allowed for the rapid spread of misinformation. Websites, blogs, and online communities provided a platform for anti-vaccine activists to spread fear and misinformation about vaccine safety, creating an environment where pseudoscientific beliefs about vaccines could flourish unchecked.
In the 21st century, anti-vaccine sentiment has been increasingly driven by a mix of misinformation, conspiracy theories, and fear of pharmaceutical companies. Social media platforms like Facebook, Twitter, and Instagram have become hotbeds for anti-vaccine activism, where individuals and groups share false claims about vaccine ingredients, side effects, and supposed government cover-ups. These platforms have also provided a space for the rise of vaccine refusal movements, often linked to broader conspiratorial ideologies such as New World Order theories, alternative medicine advocacy, and natural living. The anti-vaccine movement has gained increasing political traction in certain parts of the world, with politicians and public figures aligning themselves with vaccine-hesitant or outright anti-vaccine stances, further legitimizing the movement. The recent COVID-19 pandemic has exacerbated these issues, as vaccine misinformation related to the virus and its vaccines spread rapidly across social media, contributing to significant vaccine hesitancy and resistance in many communities. The rise of medical quackery in the form of unproven treatments and conspiracy theories about the pandemic has further complicated public health efforts and made combating anti-vaccine sentiment more challenging.
The history of anti-vaccine sentiment and medical quackery reveals a complex interplay of cultural, social, and political forces. From the early days of smallpox vaccination to the rise of conspiracy theories in the 21st century, anti-vaccine movements have consistently drawn on fears of the unknown, distrust of authority, and the appeal of alternative medicine. While medical advancements have made vaccines one of the most effective tools in public health, the persistence of anti-vaccine sentiment, fueled by misinformation and pseudoscience, continues to threaten the effectiveness of immunization programs and the control of preventable diseases. Overcoming these challenges requires not only better education about the safety and efficacy of vaccines but also addressing the psychological and social factors that make people susceptible to pseudoscientific beliefs and medical quackery.
Scientology and Other Modern Cults

Scientology, founded in the early 1950s by L. Ron Hubbard, is one of the most well-known and controversial modern religious movements, often categorized as a cult due to its secretive nature, aggressive tactics, and highly controlled organizational structure. Scientology’s origins lie in Hubbard’s 1950 book Dianetics: The Modern Science of Mental Health, which outlined a pseudoscientific approach to psychology that he called Dianetics. The book promised a method to free individuals from the negative influences of past traumas and negative memories, which Hubbard referred to as engrams. He claimed that by accessing and confronting these engrams, individuals could reach a state of Clear, free from mental limitations. Scientology grew rapidly from the ideas outlined in Dianetics, with Hubbard establishing the Church of Scientology in 1954. Scientology quickly gained attention for its unique blend of self-help philosophy, spirituality, and pseudoscience, but it also faced increasing criticism and scrutiny due to its controversial practices and alleged manipulation of followers.
The organization’s core beliefs center around the idea of thetans, or immortal spiritual beings that transcend individual lifetimes. According to Scientology, humans are essentially thetans who have become trapped in physical forms, their memories clouded by engrams from past traumatic experiences, sometimes stretching back to past lives. Scientologists believe that through auditing—a process involving an electronic device called an E-meter—individuals can access and clear these engrams, allowing them to achieve higher spiritual states, eventually reaching the highest level of Operating Thetan (OT). The Church of Scientology is highly hierarchical, with individuals advancing through a series of secretive and expensive levels of initiation, known as The Bridge to Total Freedom. This hierarchical structure, combined with the organization’s focus on secrecy, has led many critics to label it as a cult, with accusations of exploiting vulnerable individuals for financial gain and enforcing strict loyalty through coercive tactics.
Among the most controversial aspects of Scientology is its aggressive response to criticism and dissent. The Church is notorious for using intimidation, harassment, and litigation to silence its critics, including former members, journalists, and even governments. The organization has been known to employ private investigators, surveillance, and legal threats to prevent negative media coverage and to suppress accounts from ex-members. One of the most famous examples of such retaliation followed a 1991 exposé in Time magazine, which the Church answered with a massive legal battle against the publisher. Critics argue that Scientology is not just a religion but a multi-million-dollar business that thrives on controlling its followers through both psychological and financial manipulation. Despite this, the Church has successfully maintained its status as a religion in the United States, where it is legally recognized as such, providing it with tax-exempt status and freedom of worship.
Scientology is not the only modern group to face accusations of cult-like behavior, and the rise of new religious movements and cults in the 20th and 21st centuries has brought attention to the psychological and social dynamics of these groups. Many of these movements share certain characteristics, such as authoritarian leadership, control over personal information, isolation from outside influences, and demands for financial contributions. Groups like the Raelian Movement, founded in the 1970s, which believes that life on Earth was created by extraterrestrial beings, and Transcendental Meditation (TM), popularized by Maharishi Mahesh Yogi, have been labeled as cults due to their manipulation of followers and emphasis on unquestioning devotion to their leaders. These groups often use charismatic leaders to attract followers, who are then encouraged to adopt a set of doctrines that offer salvation, enlightenment, or a higher state of consciousness.
The appeal of modern cults often lies in their promise of exclusive knowledge or the possibility of escaping from the pressures and complexities of modern life. Cult leaders often frame their teachings as possessing secrets that can lead to personal transformation, spiritual awakening, or liberation. This appeal is especially strong during times of personal crisis, where individuals are searching for meaning or direction. Cults frequently offer their followers a sense of belonging, purpose, and a clear answer to life’s big questions. However, this sense of community can be at odds with the often manipulative and exploitative practices that define cult dynamics. In many cases, followers are drawn into a world where free thought is discouraged, and loyalty to the group and its leader becomes paramount. Over time, members may find themselves trapped in an insular world that separates them from friends, family, and their previous support networks.
One of the most controversial aspects of modern cults is their financial exploitation of followers. Groups like Scientology have been accused of imposing steep financial demands on their members, who are expected to pay exorbitant fees for services such as auditing, training courses, and spiritual advancement. These financial demands can quickly escalate, leading individuals to accumulate significant debt in the pursuit of higher spiritual levels. In addition to financial exploitation, many cults require followers to devote large amounts of time and energy to the group, often isolating them from outside relationships and imposing a strict set of behavioral and ideological guidelines. This control over both the mental and material resources of individuals is a hallmark of many modern cults, raising ethical concerns about the boundaries of religious freedom and the protection of individual rights.
Leaving a cult, especially one as controlling as Scientology, can be incredibly difficult and psychologically traumatic. Ex-members often report a range of experiences, from cognitive dissonance to feelings of betrayal and emotional damage. The intense loyalty and commitment cultivated by these groups can make it challenging for individuals to leave, as they often face shunning, threats, or isolation from former friends and family members. In some cases, the Church of Scientology has been accused of employing aggressive tactics to prevent members from leaving, including psychological manipulation, blackmail, and harassment. The difficulty of breaking free from a cult is compounded by the sense of loss that individuals feel when they leave, not only in terms of relationships but also in their identity and the purpose that the group once provided. This is particularly evident in cases where followers have been deeply indoctrinated into the belief system and structured hierarchy of the organization.
Despite the controversy and scrutiny, Scientology and other modern cults have continued to thrive, adapting to changing cultural and technological landscapes. The use of social media, online platforms, and digital marketing has enabled groups to recruit new members and maintain a presence that was previously difficult to sustain. Scientology, for example, has embraced modern technology, offering online courses, digital resources, and even virtual auditing sessions for individuals who cannot attend in person. The globalization of these movements has also made them more pervasive, with followers from diverse cultural backgrounds and geographic locations coming together under the banner of shared beliefs. This global reach has led to increased international scrutiny, but it has also provided these organizations with new opportunities to spread their message and maintain their influence.
The psychological mechanisms at play within modern cults, including Scientology, continue to be a subject of fascination for psychologists and sociologists. These groups often use manipulative techniques, such as love bombing (excessive attention and praise for new recruits), gradual indoctrination, and emotional coercion, to bring individuals into the fold and maintain control. The language of empowerment and enlightenment is used to mask the deeper psychological and financial manipulation that members undergo. Understanding these mechanisms is key to unraveling the dynamics of modern cults and recognizing the warning signs that individuals may be at risk of being manipulated by such groups.
The legacy of modern cults like Scientology serves as a cautionary tale about the dangers of uncritical devotion and the human tendency to seek meaning and belonging in ways that can lead to exploitation. As long as there is a demand for spiritual or psychological answers to life’s most pressing questions, there will likely be individuals and organizations willing to capitalize on that vulnerability. The enduring appeal of these groups, despite their many controversies, underscores the ongoing need for education, skepticism, and critical thinking in navigating the complex and often murky terrain of new religious movements and cults.
The Internet and the Democratization of Pseudoscience
The rise of social media has significantly transformed the way information is disseminated and consumed, and one of its most consequential effects has been the creation of echo chambers—closed environments where users are primarily exposed to information that aligns with their preexisting beliefs. In the realm of pseudoscience, these echo chambers serve as powerful incubators for misinformation, allowing unfounded or debunked claims to flourish unchecked. Algorithms on platforms like Facebook, Twitter (now X), Instagram, and YouTube are designed to maximize engagement by feeding users content similar to what they have already interacted with. While this boosts platform activity, it also means that once a user begins engaging with pseudoscientific content—be it anti-vaccine propaganda, flat Earth theories, or miracle health cures—they are more likely to be fed similar material, reinforcing their beliefs and isolating them from contrary evidence. This self-reinforcing cycle makes it difficult for accurate scientific information to penetrate these closed loops.
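To make this feedback loop concrete, the toy simulation below sketches how a recommender that weights new suggestions by a user’s past engagement can amplify a small initial preference into a feed dominated by one kind of content. It is a deliberately minimal illustration under simplifying assumptions, not a description of any platform’s actual ranking system; the topic labels, probabilities, and function names are all hypothetical.

```python
import random
from collections import Counter

# Toy model of an engagement-driven feed. Illustrative only: real platform
# ranking systems are far more complex and are not publicly specified.
TOPICS = ["mainstream_science", "pseudoscience"]

def recommend(history, exploration=0.1):
    """Suggest a topic: mostly mirror past engagement, with a little exploration."""
    if not history or random.random() < exploration:
        return random.choice(TOPICS)
    counts = Counter(history)
    topics, weights = zip(*counts.items())
    # Weight suggestions by how often the user has engaged with each topic.
    return random.choices(topics, weights=weights, k=1)[0]

def simulate(steps=500, bias=0.55, seed=1):
    """Simulate a user who is only slightly more likely to click pseudoscientific items."""
    random.seed(seed)
    history = []  # only clicked items feed back into the recommender
    for _ in range(steps):
        item = recommend(history)
        click_prob = bias if item == "pseudoscience" else 1 - bias
        if random.random() < click_prob:
            history.append(item)
    return Counter(history)

print(simulate())
# A small initial preference tends to snowball: clicked items dominate the
# history, the recommender serves more of them, and that produces more clicks.
```

Under these assumptions the simulated feed typically drifts toward the topic the user clicks slightly more often, which is the self-reinforcing dynamic described above.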
One of the most concerning outcomes of these pseudoscientific echo chambers is their ability to normalize fringe beliefs. In traditional information environments, such as peer-reviewed journals or reputable news outlets, ideas must meet a standard of evidence before they are widely disseminated. Social media bypasses these gatekeepers, allowing any idea—no matter how spurious—to reach a global audience instantly. Once within a digital echo chamber, pseudoscientific claims are repeated, reinforced, and often embellished by like-minded users. Community validation becomes more important than factual accuracy, and individuals receive social rewards (likes, shares, comments) for aligning with the group narrative. This fosters a form of tribal identity where belonging is tied to the rejection of mainstream science or authority, making users less likely to engage with corrective information even when it is plainly presented.
Compounding the issue is the role of influencers and pseudo-experts who command significant followings within these echo chambers. Figures who may have little to no formal training in science or medicine can present themselves as authorities, often leveraging personal anecdotes or conspiratorial thinking to gain credibility. These individuals frequently use emotionally charged language, appeal to “common sense,” or position themselves as whistleblowers exposing hidden truths. Their content resonates deeply with followers who are already skeptical of institutional science, further entrenching pseudoscientific beliefs. The echo chamber dynamic makes it easy for followers to dismiss criticism of these influencers as censorship or attacks by a corrupt establishment, effectively inoculating them against external skepticism.
Efforts to debunk pseudoscience in social media spaces often face significant challenges, particularly because corrective information tends to be less engaging than sensational or conspiratorial claims. Scientific explanations often require nuance and context, which can be lost or ignored in the rapid, attention-driven economy of social media. Furthermore, debunking efforts can sometimes backfire, reinforcing false beliefs through what psychologists call the backfire effect—a phenomenon where individuals double down on their beliefs when presented with conflicting evidence. Social media companies have made attempts to address these problems through content moderation, fact-checking labels, and algorithmic tweaks, but such measures are often seen as too little, too late. The speed and volume at which misinformation spreads frequently outpace efforts to contain it.
Social media echo chambers contribute to a fragmentation of public knowledge and the erosion of a shared reality. In the context of public health, environmental policy, or education, the persistence of pseudoscience fueled by echo chambers has real-world consequences—from vaccine refusal and climate change denial to the popularity of ineffective or dangerous alternative treatments. These outcomes not only endanger individuals but also pose collective risks to society. Addressing the issue will require a multifaceted approach, including better science communication, digital literacy education, and platform accountability. Breaking the cycle of pseudoscientific echo chambers is essential if society is to preserve the integrity of scientific discourse and public trust in evidence-based knowledge.
Denial
Climate change denial represents one of the most persistent and dangerous forms of pseudoscience in the modern era. Despite an overwhelming scientific consensus that the Earth is warming due to human activity—primarily the burning of fossil fuels—there remains a vocal movement that disputes, downplays, or outright rejects these findings. Climate denialism does not engage in the scientific process with empirical rigor or peer-reviewed methodology; instead, it often employs rhetorical techniques, cherry-picked data, and conspiracy theories to undermine legitimate science. It mimics the structure of scientific discourse while lacking its substance, thereby fostering public confusion and impeding necessary policy changes. This form of denial is not just a rejection of facts; it is an orchestrated campaign often funded by vested interests aiming to protect economic or ideological agendas.
The scientific basis for anthropogenic climate change is robust. Institutions across the globe—including NASA, the Intergovernmental Panel on Climate Change (IPCC), the National Oceanic and Atmospheric Administration (NOAA), and countless academic bodies—agree that greenhouse gases such as carbon dioxide, methane, and nitrous oxide are trapping heat in the Earth’s atmosphere. This is not a recent finding: the basic physics of the greenhouse effect has been understood since the 19th century. Modern observations, including rising global temperatures, shrinking ice caps, ocean acidification, and increased frequency of extreme weather events, all point to a rapidly changing climate. Yet climate change denial persists, often arguing against this consensus with flawed logic, misrepresented studies, or outright fabrication—hallmarks of pseudoscientific discourse.
A central tactic of climate denialism is the use of “fake experts.” These are individuals who present themselves as authorities on climate science despite lacking relevant credentials or peer-reviewed publications in the field. Often, these figures are retired scientists from unrelated disciplines, think-tank analysts, or even political commentators. Their role is to provide a veneer of credibility to dissenting views and create the illusion of a divided scientific community. Organizations like the Heartland Institute have promoted such individuals in media appearances, op-eds, and conferences aimed at disputing climate science. By elevating these voices, denialists attempt to generate a false equivalence in public debate—presenting a 97% consensus as merely one side of a “debate.”
Another common feature of climate denial is cherry-picking data. This involves selecting specific data points or time frames that seem to contradict the overall trend of global warming. For example, deniers may point to a brief period of cooling or a single anomalously cold winter to argue that climate change isn’t occurring, while ignoring the long-term trend of rising global temperatures. This tactic ignores the distinction between weather (short-term atmospheric conditions) and climate (long-term patterns). Similarly, graphs may be manipulated by altering axes, omitting relevant context, or focusing on localized trends instead of global averages. These misleading representations are intended not to inform, but to sow doubt and confusion.
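How window selection alone can manufacture an apparent "pause" or "cooling" signal can be seen in a purely synthetic example. The sketch below invents a temperature-anomaly series with a steady built-in warming trend plus random year-to-year noise—the numbers are illustrative, not real climate data—and then compares the trend over the whole record with the most negative trend found in any short window.

```python
import random

def linear_slope(years, values):
    """Ordinary least-squares slope of values against years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    var = sum((x - mean_x) ** 2 for x in years)
    return cov / var

# Synthetic series: +0.02 degrees per year of underlying warming, plus noise.
random.seed(42)
years = list(range(1970, 2025))
anomalies = [0.02 * (y - 1970) + random.gauss(0, 0.12) for y in years]

# Full-record trend reflects the underlying warming signal.
full_trend = linear_slope(years, anomalies)

# "Cherry-picked" trend: scan for the short window with the most negative slope.
window = 8
cherry_slope, cherry_start = min(
    (linear_slope(years[i:i + window], anomalies[i:i + window]), years[i])
    for i in range(len(years) - window)
)

print(f"Trend over full record      : {full_trend:+.3f} deg/year")
print(f"Most negative {window}-year window (from {cherry_start}): {cherry_slope:+.3f} deg/year")
```

Even though every value in the series is generated from a warming trend, some short window will usually show a flat or negative slope—exactly the kind of span a misleading graph can isolate while omitting the surrounding record.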
Climate denial also thrives on conspiracy thinking, another hallmark of pseudoscience. Deniers often claim that climate change is a hoax perpetrated by scientists seeking grant money, governments pursuing increased control, or global elites aiming to reshape the economy. These theories ignore the decentralized and transparent nature of scientific inquiry, in which findings must be repeatedly tested, peer-reviewed, and replicated across disciplines and nations. Conspiracies also require a level of coordination and secrecy that is highly implausible given the number and diversity of scientists involved. Nevertheless, this rhetoric resonates with those predisposed to distrust authority, and it serves as a powerful psychological defense mechanism against the unsettling reality of a warming planet.
The political and economic dimensions of climate change denial further differentiate it from legitimate science. Many of the most prominent sources of denialism are funded by fossil fuel interests or politically motivated groups that oppose environmental regulation. Internal documents from companies like ExxonMobil have revealed that, despite acknowledging the reality of climate change internally as early as the 1970s, these corporations invested in misinformation campaigns to protect their profits. Think tanks and lobbying organizations continue to influence public discourse and policymaking by promoting discredited arguments, framing environmental action as economically catastrophic, and resisting international climate accords. This intersection of pseudoscience and propaganda underscores how climate denial is not merely a scientific issue, but a deeply political one.
In public discourse, climate denial benefits from media practices that emphasize balance over accuracy. For years, news outlets invited climate change deniers to speak alongside climate scientists in an effort to present “both sides” of the issue, inadvertently giving undue legitimacy to unscientific views. This practice, sometimes called “false balance,” misleads the public by portraying a scientific near-consensus as a contentious debate. While many media organizations have since revised this approach, the damage to public understanding has been substantial. It has allowed denialist narratives to take root and has delayed meaningful action by fostering doubt and apathy.
The impact of climate change denial is not just theoretical—it has real-world consequences. It delays policy responses, undermines environmental regulations, and reduces public support for sustainability initiatives. By casting doubt on the urgency or legitimacy of climate science, denialists enable the continuation of practices that exacerbate global warming. This inaction disproportionately affects vulnerable populations, including communities in low-lying coastal areas, regions prone to drought, and economically disadvantaged nations with fewer resources to adapt. In this way, climate denial functions not only as pseudoscience but as a form of social and environmental injustice.
Efforts to combat climate denial must address both the informational and emotional roots of the problem. Merely presenting facts is often insufficient, especially when denial is rooted in identity, ideology, or economic interest. Strategies must include science communication that is engaging, empathetic, and culturally sensitive. Educating the public about the nature of the scientific method, the peer review process, and the difference between skepticism and denial is essential. Likewise, holding corporations and political actors accountable for spreading misinformation is a necessary step toward restoring public trust in science.
Climate change denial is a cautionary example of how pseudoscience can flourish in the absence of critical thinking and institutional accountability. It demonstrates the power of coordinated misinformation, the vulnerabilities of media ecosystems, and the urgency of equipping the public with tools for discerning credible information. In confronting denialism, scientists, educators, policymakers, and citizens alike must advocate not just for the facts, but for the values of transparency, reason, and responsibility. Only then can society begin to bridge the gap between knowledge and action in the face of one of humanity’s greatest existential challenges.
Falling Flat
The resurgence of flat Earth belief in the 21st century is a striking example of how pseudoscientific ideas can re-emerge in new social and technological contexts. Although the notion that the Earth is flat was largely abandoned by educated societies over two millennia ago—thanks to empirical observations by ancient Greeks like Eratosthenes—the belief found new life through online platforms such as YouTube, Facebook, and Reddit. These platforms allowed flat Earth proponents to bypass traditional gatekeepers of information, building communities where misinformation could thrive unchecked. The movement has been sustained not by scientific discovery but by mistrust of institutions, an embrace of conspiracy logic, and a misplaced confidence in personal observation and intuition over expert consensus. Flat Earth belief today functions more as an ideological or identity-driven position than a sincere engagement with empirical evidence.
Flat Earthers often rely on misinterpretations of basic physics, optics, and geography, framing their skepticism as a form of independent inquiry. They frequently argue that if the Earth were round, its curvature would be plainly visible at the horizon, or that airline routes and space imagery are faked. These claims ignore centuries of observational data, from satellite telemetry to planetary motion and global navigation systems. Flat Earth discourse often mimics the language of science—discussing “experiments” or “models”—while operating entirely outside the framework of peer-reviewed research or falsifiability. The movement also demonstrates the power of confirmation bias, with participants selectively gathering information that supports their worldview and dismissing or ridiculing evidence to the contrary. In many ways, belief in a flat Earth is a symptom of broader epistemological confusion and rejection of expert knowledge in favor of personal belief.
The QAnon conspiracy theory shares several characteristics with flat Earth belief, including its rejection of established knowledge, deep distrust of authority, and reliance on internet echo chambers. Originating in 2017 with anonymous posts on 4chan and later 8kun, QAnon posits that a secret cabal of elite figures—often portrayed as Satan-worshipping pedophiles—controls global affairs, and that former President Donald Trump was fighting to dismantle this cabal. While the specific claims of QAnon are wildly implausible and unsupported by any credible evidence, the movement grew rapidly online, drawing in individuals through cryptic “drops” and encouraging followers to “do their own research.” Like flat Earth belief, QAnon exploits emotional narratives and digital media ecosystems that prioritize engagement over accuracy, allowing falsehoods to spread virally with minimal scrutiny.
Both flat Earth resurgence and QAnon exemplify the psychosocial dynamics of modern pseudoscientific belief. These movements offer participants a sense of belonging, purpose, and insight—appealing to those who feel alienated or distrustful of mainstream institutions. They frame themselves as part of a righteous struggle against powerful forces, casting skeptics and scientists as either duped or complicit. This dualistic worldview resists correction, because any challenge to the belief system is interpreted as evidence of its truth: the stronger the denial from authorities, the more certain the believer becomes. In this way, both flat Earth and QAnon function as closed epistemic systems, impervious to external evidence and reinforcing themselves through internal validation and social rewards within their communities.
The intermingling of pseudoscientific and political conspiracies like flat Earth theory and QAnon signals a broader crisis of knowledge and trust in the digital age. These belief systems thrive in an environment where traditional sources of authority—science, journalism, education—are increasingly viewed with suspicion, and where social media allows for the rapid formation of ideological silos. They are not just curiosities or fringe phenomena; they influence real-world behavior, from vaccine refusal and climate denial to political violence and insurrection. Understanding their appeal and mechanics is critical to developing strategies for digital literacy, public education, and policy reform that can counteract the spread of pseudoscience and rebuild public confidence in shared reality.
Pseudoscience in the 21st Century
AI Is the End!
The rise of artificial intelligence in the 21st century has brought with it a flood of speculative narratives—some grounded in genuine concerns, others veering into the realm of techno-pseudoscience. Among the most persistent of these are AI doomsday predictions, which claim that superintelligent machines will one day surpass human intelligence, become uncontrollable, and potentially eradicate humanity. While these scenarios are often presented in dramatic media or philosophical discourse, they frequently lack grounding in current technical realities. Much of the popular fear around AI comes not from concrete developments, but from extrapolations of hypothetical possibilities, frequently amplified by science fiction tropes and sensational journalism. This disconnect between technological fact and public imagination allows pseudoscientific interpretations to flourish, often blurring the line between plausible foresight and unfalsifiable prophecy.
A hallmark of techno-pseudoscience in AI discourse is its tendency to rely on vague terminology and speculative logic, often devoid of empirical verification. Terms like “singularity,” “superintelligence,” or “recursive self-improvement” are used to invoke awe or fear without clearly defined parameters. While these concepts are philosophically interesting, they are often treated as inevitabilities rather than theoretical constructs, especially in popular discussions. Figures such as Ray Kurzweil have contributed to this narrative by predicting precise dates for the singularity—predictions that are based more on pattern-fitting and speculative trend analysis than on replicable scientific models. Such claims invite an almost millenarian fervor, turning technology into a domain of quasi-religious expectation rather than critical inquiry. The result is a pseudo-scientific framework that discourages skepticism, prizes visionary rhetoric over technical accountability, and often overstates the actual capabilities of current AI systems.
Another dimension of techno-pseudoscience involves misunderstanding the actual limitations and functioning of modern AI. Most of the systems in use today, including large language models like GPT or image recognition algorithms, are narrow AI—they operate within constrained domains and lack general reasoning abilities or self-awareness. However, many popular accounts present these tools as stepping stones toward autonomous consciousness or machine agency, conflating symbolic output with sentience. This confusion is exploited in media and some startup cultures, where claims about AI “thinking,” “feeling,” or “learning like a human” are made for shock value or investor interest. These anthropomorphic descriptions feed into public misunderstanding and lay the groundwork for both irrational fears and false hopes—two poles of techno-pseudoscientific thinking that can distract from real ethical and policy issues surrounding AI, such as bias, labor displacement, surveillance, and autonomy in lethal weaponry.
The techno-utopianism of Silicon Valley often overlaps with its darker inverse: techno-dystopianism. Both are rooted in a kind of futurist determinism that treats technological change as an autonomous force, immune to social, cultural, or political intervention. In this worldview, AI either saves or destroys humanity, while human agency is relegated to the sidelines. Such binary thinking not only reinforces pseudoscientific tendencies—it can also become ideologically convenient, justifying current inequities (“the machines will fix it”) or evading responsibility for harms (“it’s inevitable progress”). The cult-like reverence for technology seen in some AI circles shares similarities with other pseudoscientific movements throughout history, from eugenics to Scientology—promising salvation or apocalypse through esoteric knowledge and exclusive access to “the truth.”
To counteract the pseudoscientific aspects of AI discourse, it is crucial to emphasize technological literacy, ethical skepticism, and interdisciplinary oversight. Instead of framing AI as an inevitable godlike force, public discourse should focus on how these tools are built, who controls them, how they are trained, and for what purposes. This requires clear communication from technologists, accountability from corporations, and active participation by ethicists, social scientists, and regulators. Without this grounded approach, AI risks becoming the latest stage for pseudoscientific thought—an opaque and mystified force, manipulated by elites and misunderstood by the public, carrying forward the long tradition of confusing technological power with transcendental authority. As history shows, whenever science loses transparency and democratic accountability, pseudoscience rushes to fill the void.
COVID-19 Mis/Disinformation
The COVID-19 pandemic was not only a global public health crisis—it was also a crisis of information. From the earliest days of the outbreak, misinformation and disinformation regarding the origins, transmission, prevention, and treatment of the virus proliferated across social media platforms, messaging apps, blogs, and even mainstream media outlets. This infodemic, as it came to be known, significantly hindered public health responses and contributed to avoidable illness and death. Much of the misleading content fell into the category of pseudoscience, in which claims were dressed in scientific language but lacked empirical support, peer-reviewed evidence, or methodological rigor. These pseudoscientific claims often leveraged public fear and scientific uncertainty to promote conspiracy theories, miracle cures, and politically motivated narratives.
One of the earliest and most enduring pseudoscientific narratives was the claim that COVID-19 was a bioweapon deliberately engineered and released from a laboratory. While the scientific community initially explored multiple possible origins—including a lab leak scenario—many of the more sensationalist claims deviated sharply from evidence-based inquiry and entered the realm of conspiracy theory. These narratives often ignored or distorted the genomic analysis conducted by virologists worldwide, which indicated a natural zoonotic origin was more likely. The bioweapon theory was bolstered by fringe figures with little to no expertise in virology but wide online platforms, and it was frequently amplified by political actors seeking to assign blame rather than promote solutions. In this context, pseudoscience was weaponized not just against public health but as a tool in geopolitical tension and culture war dynamics.
Another major avenue of pseudoscientific misinformation surrounded prevention and treatment, particularly the promotion of ineffective or dangerous remedies. Throughout the pandemic, numerous substances—such as hydroxychloroquine, ivermectin, chlorine dioxide (an industrial bleach), and colloidal silver—were marketed as COVID-19 cures, despite a lack of rigorous clinical evidence supporting their efficacy. These substances were often promoted by self-styled “experts,” alternative health influencers, or even elected officials. Their advocacy frequently relied on anecdotal evidence, misrepresented studies, or outright fabrications. In many cases, these treatments were actively harmful, leading to poisoning, medical complications, and in some cases death. The broader public health consequences were grave, as belief in miracle cures often led individuals to forgo proven preventative measures like vaccination or medical supervision.
Anti-mask and anti-lockdown pseudoscience also proliferated, rooted in a misunderstanding of basic epidemiological principles. Claims that masks caused carbon dioxide poisoning, weakened immune systems, or provided no protection at all were widely circulated despite being contradicted by an overwhelming body of scientific evidence. These claims often cited flawed or misinterpreted studies, or relied on appeals to “natural law” and personal liberty to override collective responsibility. The rejection of mask mandates became emblematic of a broader resistance to public health measures, with misinformation campaigns frequently linking such policies to tyrannical government control, drawing false parallels to totalitarian regimes, and invoking historical injustices inappropriately. This fusion of pseudoscience and ideology undermined compliance with basic safety practices and turned routine health recommendations into polarizing cultural flashpoints.
Vaccine misinformation represented perhaps the most sustained and dangerous form of pseudoscience during the pandemic. As mRNA vaccines were developed and distributed at unprecedented speed, an array of false claims emerged about their safety, efficacy, and ingredients. These included assertions that vaccines altered human DNA, contained microchips for surveillance, caused infertility, or led to mass deaths that were being hidden by authorities. While scientific institutions globally conducted rigorous trials and safety monitoring, pseudoscientific voices often dismissed these processes as fraudulent or manipulated. Vaccine-related conspiracy theories thrived in online communities and were amplified by well-known influencers, media figures, and even medical professionals who had been discredited but retained public followings. As a result, vaccine hesitancy surged in many countries, hampering efforts to reach herd immunity and prolonging the pandemic.
The pseudoscience of natural immunity versus vaccination also took hold in many circles. Proponents of this view argued that infection-acquired immunity was superior and that exposure to the virus should be encouraged among the young and healthy—a position not supported by leading epidemiologists. While natural immunity does occur, it is variable and often less predictable than vaccine-induced immunity, especially with emerging variants. Nevertheless, this line of argument became central to positions in the mold of the “Great Barrington Declaration,” which favored minimal restrictions and emphasized “focused protection” of the vulnerable. Critics noted that this approach was grounded more in ideological and economic motivations than in rigorous science, and public health data eventually showed higher risks of reinfection and long-term health consequences, or “long COVID,” among those who had relied solely on infection-based immunity.
A particularly insidious aspect of COVID-19 pseudoscience was its use of scientific language and data manipulation to cloak misinformation in a veneer of credibility. Sophisticated-looking graphs, misquoted scientific papers, and jargon-filled social media posts lent an air of legitimacy to otherwise baseless claims. Pseudoscientific influencers would cherry-pick data points, compare unlike metrics, or distort temporal trends to argue that vaccines caused more harm than good, or that death tolls were exaggerated. This “data misrepresentation” made it difficult for laypeople to distinguish between genuine analysis and manipulation. Moreover, platforms like YouTube and Twitter served as echo chambers where confirmation bias thrived and pseudoscientific narratives spread faster than fact-checks could keep up.
The pandemic also saw the rise of “expertise laundering,” in which individuals with academic credentials in unrelated fields were presented as authorities on virology, immunology, or public health. These figures were frequently highlighted by anti-vaccine and anti-lockdown movements to legitimize fringe positions. While true scientific consensus depends on a rigorous process of peer review, replication, and cumulative knowledge, pseudoscience bypasses these steps, allowing charismatic or credentialed individuals to speak outside their areas of competence without correction. This practice added confusion for the public, especially when these individuals appeared on news panels, podcasts, or congressional hearings as if their views were equal to those of frontline researchers and epidemiologists.
COVID-19 pseudoscience was not merely an epistemological problem—it had real-world consequences, including the erosion of trust in science and public institutions. In some regions, misinformation became so entrenched that hospitals faced violent protests, healthcare workers were harassed, and public health officials were forced to resign or go into hiding. The weaponization of pseudoscience became a political tool, entrenching partisanship and driving wedges between communities. Disinformation campaigns, some backed by state actors or extremist movements, intentionally sought to destabilize democratic discourse by undermining consensus on basic facts. This not only prolonged the pandemic but also left lasting scars on the public’s relationship with science and health governance.
Combating COVID-19 pseudoscience requires more than fact-checking or censorship—it demands rebuilding public understanding of the scientific process, fostering media literacy, and promoting transparency and dialogue. The pandemic exposed the vulnerabilities of modern information ecosystems, where algorithms favor emotional content, sensationalism spreads more rapidly than nuance, and social trust is easily eroded. Moving forward, science communication must adapt to a landscape shaped by attention economy platforms and political tribalism. Interdisciplinary approaches—combining public health, sociology, psychology, and information science—will be essential to counter future pseudoscientific movements and to strengthen societal resilience against the next global crisis.
Populism
In the 21st century, the intersection of political populism and pseudoscience has become an increasingly potent and destabilizing force in global affairs. Populist leaders often position themselves as champions of the “common people” against “corrupt elites,” a dichotomy that naturally casts expert consensus—especially scientific consensus—as part of an oppressive establishment. In this rhetorical framework, science is not a method of acquiring knowledge but a tool wielded by technocrats, globalists, or political adversaries. This has led to the intentional politicization and distortion of scientific facts on issues ranging from climate change and public health to education and environmental regulation. By framing objective evidence as ideologically tainted or conspiratorial, populist figures can dismiss inconvenient data and replace it with emotionally charged, populist narratives that better align with their political agendas and appeal to the grievances of their base.
A key characteristic of political pseudoscience in the populist era is the rejection of complexity. Populism thrives on clear villains, simple explanations, and immediate solutions, while science—by nature—deals with nuance, uncertainty, and evolving understanding. This mismatch allows populist movements to weaponize scientific ambiguity, often claiming that the lack of absolute certainty in scientific models is evidence of fraud or manipulation. For example, climate models, which include ranges and probabilities rather than precise predictions, have been portrayed as unreliable or intentionally misleading. The COVID-19 pandemic similarly saw populist leaders minimize the severity of the virus, dismiss protective measures, and promote unproven treatments, all while accusing scientists and public health officials of being part of a broader scheme to curtail freedoms or disrupt national economies.
The media environment has exacerbated the problem by offering fertile ground for pseudoscientific populism to flourish. In an age of fragmented information ecosystems, social media platforms allow for the rapid spread of disinformation, often curated by algorithms that prioritize outrage and virality over truth. This facilitates the rise of alternative “experts,” echo chambers, and conspiratorial thinking, where communities can insulate themselves from mainstream scientific perspectives. Populist figures exploit these platforms masterfully, presenting themselves as truth-tellers battling censorship and corruption. They often amplify pseudoscientific claims not merely to deceive but to de-legitimize authoritative institutions, including academia, public health agencies, and international scientific bodies. In doing so, they create a parallel epistemology—one where science is filtered through ideological lenses, and facts are malleable depending on political necessity.
One of the most dangerous consequences of political pseudoscience is its impact on democratic institutions and policymaking. When evidence-based decision-making is undermined by populist rhetoric and pseudoscientific dogma, public policy becomes reactive, erratic, and increasingly detached from reality. The refusal to address climate change with appropriate urgency, the obstruction of pandemic mitigation efforts, and the rollback of environmental protections are all examples where pseudoscientific narratives shaped governance. Furthermore, pseudoscience in politics often goes hand-in-hand with scapegoating—blaming immigrants, minority groups, or international organizations for domestic issues—and uses unfounded claims to justify discriminatory or authoritarian policies. This erosion of truth as a shared foundation for democratic dialogue leads to polarization, institutional decay, and an increased susceptibility to authoritarian tendencies.
Combating political pseudoscience requires more than correcting individual falsehoods—it demands a reinvigoration of civic education, scientific literacy, and institutional trust. It also requires scientists and experts to engage more directly and accessibly with the public, countering the populist narrative that they are aloof or complicit in elite conspiracies. Media platforms bear a crucial responsibility in moderating the spread of disinformation and elevating credible voices. Meanwhile, democratic governments must resist the temptation to politicize science for short-term gain and instead reinforce its role as a cornerstone of rational governance. As history has repeatedly shown, societies that abandon science as a guide for public policy do so at their peril. In the 21st century, resisting the lure of pseudoscientific populism is not only a scientific imperative—it is a democratic one.
Conclusion: Fighting Falsehoods in the Age of Information
Learning as a Tool
Education and public science literacy are among the most powerful tools for combating pseudoscience, which thrives in environments of ignorance, distrust, and fragmented knowledge. At its core, pseudoscience exploits the gap between scientific complexity and public understanding, presenting simplified or distorted narratives that appear logical or intuitively correct but lack empirical grounding. Formal education—especially in the sciences—serves as a crucial defense against this manipulation by equipping individuals with the skills to evaluate evidence, understand methodological rigor, and discern the difference between correlation and causation. However, rote memorization of scientific facts is not sufficient. To be effective, science education must emphasize critical thinking, the nature of scientific inquiry, and the provisional nature of scientific knowledge that evolves through testing and peer review. When individuals understand how science works—not just what it says—they are far better equipped to spot the red flags of pseudoscience.
Public science literacy extends beyond schools and universities; it is a lifelong process influenced by media, culture, and institutional trust. In democratic societies, where citizens are asked to make informed choices on issues involving public health, environmental policy, and technological ethics, the stakes of science literacy are immense. Unfortunately, many educational systems inadequately prepare students to navigate real-world scientific debates. They often fail to address common misconceptions, promote interdisciplinary perspectives, or bridge the gap between scientific theory and its application in civic life. Moreover, science is frequently taught in a depersonalized way, disconnected from students’ everyday experiences or ethical implications. As a result, many individuals graduate without the tools to critically assess pseudoscientific claims about vaccines, climate change, alternative medicine, or genetic manipulation, leaving them vulnerable to misinformation, especially when it is cloaked in scientific language.
Science education must also adapt to a changing information landscape shaped by digital media and declining trust in traditional authorities. In a world where people increasingly get their information from social media, influencers, and short-form video content, traditional educational institutions must compete with pseudoscientific narratives that are often more emotionally appealing and digestible. Thus, effective science communication must become an integral part of education, helping students and the public understand not just the content but the communication of science. Media literacy—the ability to critically evaluate sources, recognize bias, and verify information—should be incorporated into curricula alongside scientific instruction. Public outreach by scientists, through blogs, podcasts, social platforms, and open-access resources, is also critical for building a scientifically informed citizenry. The more accessible and engaging scientific voices are in the public sphere, the less room there is for pseudoscience to monopolize public attention.
One of the major obstacles to improving science literacy is the politicization of science education itself. In some regions, efforts to teach evolution, climate science, or sexual health have been met with resistance from political, religious, or ideological groups. This creates an environment in which pseudoscientific views can be protected under the banner of cultural identity or parental rights, undermining national standards and perpetuating cycles of misinformation. Education policy, therefore, plays a pivotal role in safeguarding science literacy. Governments and school boards must resist pressures to dilute or censor curricula in the name of false balance or ideological accommodation. Science education should be based on consensus evidence and guided by professionals in the field—not shaped by lobbying efforts or populist demands. Where this principle is upheld, societies are more resilient against pseudoscientific contagion; where it is not, they become increasingly susceptible to harmful falsehoods.
Cultivating a public that can think scientifically—regardless of whether individuals pursue science professionally—is essential for the health of democratic societies. A scientifically literate population is less likely to fall prey to charlatans, more likely to support sound public policy, and better equipped to participate in rational, evidence-informed debate. Education, at all levels, must rise to the challenge of the 21st century by preparing students not only to memorize facts but to engage critically with a world awash in contested information. This means investing in teacher training, updating curricula to reflect contemporary challenges, and fostering partnerships between educational institutions, media, and scientific organizations. Fighting pseudoscience is not just about correcting errors—it is about cultivating minds that are intellectually curious, ethically aware, and armed with the cognitive tools to seek truth in a complicated world.
Thinking Caps
Confronting pseudoscience inevitably leads to deeper philosophical questions about the nature of belief, evidence, and truth. Human cognition is not primarily wired for objective reasoning but for survival, social bonding, and narrative coherence. As such, beliefs are often formed not through rational deliberation but through intuition, emotion, and community reinforcement. This creates a cognitive environment where pseudoscientific ideas can thrive, especially when they offer comfort, meaning, or identity. Philosophically, belief is not just a passive acceptance of facts but an active psychological state shaped by values, desires, and fears. Pseudoscientific beliefs, then, often persist not because they are convincing by empirical standards, but because they fulfill psychological or social needs that scientific explanations do not. Understanding this human dimension is essential to challenging pseudoscience effectively—not merely by providing more data, but by addressing the reasons people adopt and cling to unfounded beliefs in the first place.
The epistemological foundations of science—evidence, falsifiability, reproducibility—are often at odds with how individuals encounter information in everyday life. While science relies on rigorous methods to determine what is likely true, the public frequently interprets truth in terms of coherence with prior beliefs, trust in the source, or alignment with perceived common sense. This disjunction between scientific and lay epistemology leads to philosophical tensions: How do we justify beliefs, and what counts as good evidence? From a philosophical standpoint, the concept of evidence itself is not straightforward. It must be interpreted within theoretical frameworks and is often probabilistic rather than conclusive. Pseudoscientific claims, by contrast, often present evidence that is anecdotal, cherry-picked, or immune to disconfirmation. Philosophers like Karl Popper emphasized falsifiability as a demarcation criterion to distinguish science from pseudoscience, but real-world applications reveal the complexity of this boundary. Even within science, theories evolve, paradigms shift, and consensus takes time—leaving openings for pseudoscientific actors to accuse science of inconsistency or bias.
The idea of truth is central to both science and pseudoscience, yet it is often understood very differently in each realm. In science, truth is provisional and contingent on the best available evidence—it is always subject to revision in light of new findings. This humility is a strength of science but can be misinterpreted by the public as weakness or indecision. Pseudoscience, by contrast, often deals in absolutes. It offers certainty, final answers, and often an adversarial narrative in which “the truth” is being suppressed by powerful forces. This plays into a broader philosophical debate about whether truth is correspondence with reality (the traditional realist view), coherence with a belief system, or utility in a practical sense. Many pseudoscientific systems adopt a form of “coherence theory” implicitly, judging truth based on how well an idea fits into a worldview rather than how well it aligns with observable phenomena. This philosophical orientation makes empirical refutation nearly impossible, as counter-evidence is simply woven into the conspiracy or dismissed as fabricated.
Furthermore, belief in pseudoscience often reflects a deeper crisis of epistemic trust. The modern world has fractured traditional sources of authority and flooded the public with conflicting information. In such a context, people must decide whom to trust as much as what to believe. Philosophers have long debated the problem of epistemic authority—how and when we should defer to others in forming our beliefs. Scientific experts, institutions, and peer-reviewed publications are designed to establish such authority through consensus and transparency, but when these institutions are viewed as corrupt, elitist, or politically motivated, their claims are dismissed regardless of their evidentiary strength. In its place, people may turn to charismatic individuals, intuitive knowledge, or tribal affiliation. This highlights the philosophical importance of social epistemology—the study of knowledge in communal contexts—and underscores that truth-seeking is not merely an individual exercise but a deeply social one.
In confronting pseudoscience, then, we must adopt not just a scientific or educational strategy, but a philosophical one. We must grapple with why people believe, how they understand evidence, and what they consider to be truth. Addressing pseudoscience requires a respectful but firm challenge to systems of belief that resist falsification and reject accountability. It demands cultivating a public culture where epistemic humility is valued, evidence is rigorously interpreted, and truth is sought not for comfort, but for integrity. Philosophical reflection can guide us in creating such a culture—one that respects the psychological realities of belief while upholding the standards of reason and inquiry that make science one of humanity’s most powerful tools for understanding the world. Without this philosophical grounding, efforts to combat pseudoscience risk becoming mere rebuttal rather than true engagement with the complex landscape of human belief.
Toward a Resilient Epistemology
Developing a resilient epistemology—one that can robustly withstand the allure and spread of pseudoscience—requires a systematic reassessment of how knowledge is produced, disseminated, and consumed in society. In confronting the rise of pseudoscientific belief systems, the traditional Enlightenment model of reasoned discourse and empirical evidence alone has proven insufficient. A resilient epistemology must not only rest on sound principles of evidence and logic, but also be psychologically, socially, and institutionally equipped to resist manipulation, misinformation, and ideological capture. This involves acknowledging that belief is often not formed through abstract rationality, but within emotional, cultural, and political contexts. Therefore, the foundations of a durable epistemological framework must account for the cognitive vulnerabilities and biases inherent to human reasoning—confirmation bias, motivated reasoning, and the Dunning-Kruger effect, among others—and actively work to mitigate them through educational and communicative reform.
One major component of this new epistemology is the cultivation of what virtue epistemologists, among them philosophers such as Miranda Fricker, call “epistemic virtue”—qualities such as intellectual humility, open-mindedness, and the willingness to revise one’s beliefs in light of new evidence. These virtues are not merely academic ideals but necessary habits of mind in a world where truth is contested and disinformation proliferates. A resilient epistemology thus requires building character as much as content knowledge, fostering a culture in which individuals recognize the limits of their understanding and are encouraged to consult legitimate expertise. This must be coupled with the ability to critically assess sources, understand the methodology behind claims, and distinguish between honest uncertainty and deliberate obfuscation. In this way, epistemic resilience becomes both an individual and collective endeavor, grounded in the idea that truth is not a possession, but a pursuit—one that demands continuous self-scrutiny and dialogue.
Institutionally, resilience against pseudoscience must also be embedded in the structures that produce and circulate knowledge. This includes strengthening journalistic standards, increasing the transparency and accountability of scientific institutions, and resisting the commodification of knowledge in ways that prioritize sensationalism or ideological alignment over accuracy. Universities, research organizations, and educational systems need to reaffirm their roles not only as repositories of knowledge but as custodians of public reason. Likewise, digital platforms that have become central to information exchange must be held accountable for their epistemic influence. Algorithms designed to maximize engagement often prioritize emotionally charged or polarizing content, inadvertently amplifying pseudoscientific claims. A resilient epistemology must therefore include ethical considerations about the technological mediation of knowledge and explore ways to design digital spaces that elevate credible information rather than erode it.
Moreover, this framework must reckon with the political dimensions of truth. Pseudoscience often gains traction not despite institutional breakdowns, but because of them—when populations lose faith in public institutions, feel alienated from elite discourses, or are exploited by populist rhetoric that weaponizes distrust. Thus, building epistemic resilience involves restoring the social contract between knowledge institutions and the public. This requires ensuring that science serves public interest, engages diverse communities, and is communicated in accessible and context-sensitive ways. It also means supporting mechanisms of participatory democracy where people feel agency in shaping the knowledge that governs their lives. Only when people see themselves as stakeholders in the knowledge process—rather than passive consumers or suspicious outsiders—can a shared commitment to truth be rebuilt and sustained.
A resilient epistemology must embrace complexity rather than shy away from it. One of the most seductive qualities of pseudoscience is its simplicity—it offers clear answers, familiar tropes, and moral clarity in a world that is often ambiguous and messy. In contrast, real knowledge often requires grappling with uncertainty, provisionality, and competing interpretations. Rather than reducing this complexity to bullet points or slogans, a resilient epistemic culture should teach people how to think through ambiguity constructively. This involves normalizing uncertainty in public discourse, highlighting the process of discovery over final answers, and training minds to tolerate and navigate the grey areas where science most often operates. By developing cognitive and institutional practices that embrace nuance, accountability, and epistemic humility, society can cultivate the resilience needed to defend truth—not as an authoritarian decree, but as a dynamic, collective achievement.
Originally published by Brewminate, 05.16.2025, under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.