Guest columnist Marshall Shelley
What do today’s pastors obsess on?
Leadership Journal, November 7, 2001
A few weeks ago I was asked to speak on the topic: "What do today's pastors obsess on?" I was intrigued by the question. Having spent the last 20 years of my life talking with pastors and editing a journal for them, I reflected on the recurring and revealing comments I've heard from church leaders that provide a clue to their obsessions.
Here are two obsessions that I've observed:
REWARDS
By rewards, I do NOT mean financial. Most pastors don't get into the ministry for the money, and they're amazingly satisfied with the salaries they receive even though they're below the level of other professions requiring comparable education. (A survey by Leadership indicates that 68 percent of pastors consider themselves "well paid" or "fairly well paid," and only 32 percent are "underpaid" or "severely underpaid.")
But my observation is that pastors are most definitely seeking a reward. Whenever I ask, "What motivates you?" or "Why are you in ministry?" I frequently hear pastors say, "I just want to hear those words from the Lord: 'Well done, good and faithful servant!'"
Those words, of course, come from Jesus' parable of the talents (Matt. 25) in which the Master commends the faithful stewards and condemns the one who didn't invest the resources he'd been given.
If pastors obsess on anything, I'd say it's on trying to be found faithful—asking themselves: "How am I doing with what God has entrusted to me?" In the parable, the reward for fulfilling your responsibilities is getting more responsibility. And also joy: "Come and share your Master's happiness."
Those who don't do anything with what they've been given are judged by the Master to be wicked, lazy, worthless. In ministry, success is not automatic; it's possible to fail in this stewardship. The real success in ministry comes from investing what you've been given. The parable seems to suggest that the reward is in the risking, not in playing it safe.
And the pastors I know obsess on how they can invest themselves and their congregations for the good of the kingdom. And because this investment involves risk, faith-stretching, and taking people beyond their comfort zone, this leads to a second obsession I've seen in pastors.
RELATIONSHIPS
Relationships are the source of the most satisfaction and the most frustration for pastors.
As editor of Leadership, I frequently explore fresh article angles by sitting with pastors and asking, "When was the last time you couldn't sleep because of some aspect of ministry?" Almost always, the occasion was both (1) recent and (2) related to a strained relationship.
Relationships are both the personal and professional preoccupation of pastors. Contrast this with the attitudes of politicians, who are satisfied with a 51 percent approval rating. Many pastors lose sleep if there's one individual who is upset, not responding well, or on the warpath.
Every pastor finds ways of coexisting with those voices, based on prayer, a deeper understanding of the nature of ministry, and re-embracing the concept of "the calling." But almost every pastor I know has lost sleep at some point over a relationship that's been strained as a result of trying to lead the congregation to higher levels of commitment and discipleship.
The art of leadership is to stay in touch with where people are (as one pastor put it, "If a leader gets too far out in front of his troops, he's mistaken for the enemy!"), but not to let them camp in complacency right there.
What do pastors obsess on? They obsess on their responsibility to lead people closer to God, which can be risky and hazardous to relationships; and the reward that Jesus promises of one day hearing, "Well done, good and faithful servant! You have been faithful with a few things; I will put you in charge of many things. Come and share your master's happiness."
Marshall Shelley, editor of Leadership journal, is a featured speaker at the National Pastors Convention in February 2002.
Check out http://www.NationalPastorsConvention.com/ for all the details, to request a free brochure, and to register.
Copyright © 2001 by the author or Christianity Today/Leadership Journal.
By John Wilson
Baseball, leisure, and worship.
Books & Culture, November 2, 2001
Last night at Bank One Ballpark in Phoenix, the 2001 World Series came to a fitting close: a seventh-game, bottom-of-the-ninth, come-from-behind victory by the Arizona Diamondbacks over the mighty New York Yankees, who were seeking their fourth consecutive championship and their fifth in six years. So ended one of the most memorable seasons in many years, with a Series that is certain to rank among the best ever.
Given the twists and turns that had led up to this finale, it was only to be expected that the Diamondbacks’ victory had an extra measure of improbability. They were facing Mariano Rivera, the Yankees’ Mr. Automatic, who has been the most dominating closer in postseason history. Rivera is a slender man, and where he gets the power to throw as fast and hard as he does is a mystery. But what has made him invincible is not simply raw speed—though that’s no small matter. His pitches swoop and dart viciously as they enter the strike zone. And as if that weren’t enough, he throws what hitters call a “heavy” ball, the kind that breaks bats and results in pitifully weak squibs.
In the eighth inning last night, Rivera was unhittable, and so when the D-Backs went into the ninth trailing 2-1—how they got to that point, after a duel for the ages between Roger Clemens and Curt Schilling, is a story in itself—their prospects weren't cheery. But then longtime Cubs first baseman Mark Grace led off with a hit, and before you knew it the game was tied 2-2 with one out and the bags loaded with D-Backs. Yankee manager Joe Torre elected to play the infield in, hoping to cut off a ground ball that could allow the runner to score from third.
Torre's choice was by the book, but TV analyst Tim McCarver noted the special risk of this standard maneuver when Rivera is on the mound. Whenever you move the infield in, of course, you are taking a calculated risk, since a hitter has a much better chance of driving a ball through a drawn-in infield. But Rivera, McCarver said, routinely bedevils lefthanders with a ball that breaks in on their hands and breaks their bats, often resulting in soft flares that are easily catchable when the infield is back, but tantalizingly out of reach when it is in. No sooner were the words out of McCarver's mouth than the lefthanded Luis Gonzalez fisted a soft flare off Rivera for the game-winning hit, and the place went berserk.
At such a moment it was possible to forget about the bloated corporate interests converging on Bank One Ballpark, the collective madness of owners and players and agents, and even the impending crisis as baseball's collective bargaining agreement expires and the specter of another extended labor dispute looms large. It was possible to be blessedly absorbed entirely in the game itself.
There’s a vein of writing about baseball that makes quasi-religious claims for the game, not entirely seriously, of course. The movie Bull Durham is a classic in this vein: the ballpark as a kind of church. At one level that way of talking about the game—which is certainly not the way most fans talk about it—strikes me as deeply perverse. The point of baseball is baseball. Its beauty and delight and its limitations are all of a piece: it’s a self-contained world. We’re not doing the game or ourselves any favors by loading it with a weight of meaning it’s not meant to bear.
And yet there is a sense in which the self-contained world of baseball points beyond itself to larger meanings. “The ability to be ‘at leisure,'” the German Catholic philosopher Josef Pieper tells us,
is one of the basic powers of the human soul. Like the gift of contemplative self-immersion in Being, and the ability to uplift one's spirits in festivity, the power to be at leisure is the power to step beyond the working world and win contact with those superhuman, life-giving forces that can send us, renewed and alive again, into the busy world of work. Only in such authentic leisure can the "door into freedom" be opened out of the confinement of that "hidden anxiety," which a certain perceptive observer has seen as the distinctive character of the working world, for which "employment and unemployment are the two poles of an existence without escape."
So Pieper wrote in 1948; his book was published in English translation as Leisure: The Basis of Culture in 1952 and has recently been reissued in a fine edition with some supplementary material by St. Augustine's Press. I'm not sure what Pieper would make of the vast leisure industry in the United States today. Is there any room for "leisure" in Pieper's sense in the society of the spectacle, the land of instant replays and sports channels and DVDs and "leisure communities"? We can hazard a guess, perhaps, from Pieper's sardonic aside: "There will naturally be 'games'—like the Roman circenses—but who could dignify the amusements for the masses with the name of 'festival'?" Pieper at Yankee Stadium? Maybe not.
For Pieper, celebration and festival are at the heart of leisure, and hence leisure ultimately is rooted in worship:
What does “rest from work” signify for the Bible or for ancient Greece and Rome? The meaning of a rest from labor is cultic: definite days and times were designated to the exclusive possession of the gods.
And so of course the Sabbath restrictions which, in baseball’s earlier days, ruled out games on Sunday.
No doubt last night's World Series finale can be filed under "amusements for the masses," in whose company I must be counted. "Games," yes. But is it possible that the God we talked about last week—the God who created such an excessive variety of beetles, the God whose mind may be reflected in some small measure in the useful uselessness of philosophy as practiced by beings made in his image—is it possible that this God might also be the source of the joy that spilled out of the ballpark in Phoenix? I think so. The more difficult question—Is God a Yankees fan?—remains to be answered.
John Wilson is editor of Books & Culture and editor-at-large for Christianity Today.
Copyright © 2001 by the author or Christianity Today/Christianity Today magazine.
Douglas LeBlanc
This issue of Books & Culture was already at the proof stage on September 11. It was strange but also good to continue with the mundane work of getting the issue out while we began to absorb the news of that terrible day.
The terrorist attack brought out the best and the worst in Americans: heroism from New Yorkers and xenophobia from thugs in various cities; a widely renewed solidarity, but also a temptation to pursue vengeance rather than justice. One troubling development within two days of the attack was hasty talk of rebuilding the two World Trade Center towers. "[I]f we must have a shrine or monument for our remorse, let's put it on the 200th floor, right next to the antiaircraft guns," wrote Jonah Goldberg, editor of National Review Online, who sometimes doesn't know when to put his contrarianism on a leash. And while Mayor Rudolph Giuliani was an inspiring leader of his besieged city, he shifted into Churchillian overdrive in promising that "the city's skyline will be restored."
Never mind that the twin towers represented 1970s architecture at its worst, betraying an ugly obsession with sleekness and uniformity; never mind that the greatness of a people is not measured by the height of their office buildings; never mind about the people who fell or leapt to their deaths from the immolated upper floors: American pride is at stake!
Now that terrorists have shown such deadly contempt for the World Trade Center, perhaps we should consider what those towers represented to the wider world. (The words Mammon and power come to mind.) Americans should not leave this space abandoned and barren, but neither should we feel that any new building shorter than 100 stories is somehow a crushing blow to the American soul.
Perhaps Americans will think better of new skyscrapers after rescuers have completed the grim task of finding every mangled body beneath the rubble of the twin towers. Perhaps then someone will remember the work of a Chinese American born in Ohio who now lives in New York City: Maya Lin.
If her name seems familiar to fans of design, it should: while still a student at Yale, Lin submitted her design for what became the Vietnam Veterans Memorial. Veterans groups and some pundits predicted that her design of black granite walls recessed into the ground would become a pit of despair, a memorial that denied veterans their dignity.
Instead, the Vietnam Veterans Memorial rapidly found a beloved place in Americans’ hearts, and it attracts more than a million visitors annually. Some cities built their own smaller versions of the memorial, and a scaled-down replica traveled from state to state. Lin’s monument achieved what no veterans parade could: it gave Americans a setting rich with meaning, where they could both weep about the Vietnam War and salute the people, dead and alive, who fought in that doomed mission.
As documented in the film Maya Lin: A Strong Clear Vision and in her book Boundaries (Simon & Schuster, 2000), Lin has built a brilliant career as a designer of modern memorials and dramatic sculptures for public buildings such as Penn Station. For her alma mater, she designed The Women’s Table (1993), which honors the history of women students at Yale. (Students and other New Haven residents gathered around the sculpture during a candlelight vigil to mourn the World Trade Center victims.)
At the Southern Poverty Law Center in Montgomery, Alabama, another Lin sculpture pays tribute to the slain heroes of the civil rights movement. At Ohio State University’s deconstructivist Wexner Center, Lin designed Groundswell, a sculpture of shattered glass that hints at a Zen rock garden.
In an essay published by The New York Review of Books, Lin described some other assumptions in designing the Vietnam Veterans Memorial:
I felt that as a culture we were extremely youth-oriented and not willing or able to accept death or dying as a part of life. The rites of mourning, which in more primitive and older cultures were very much a part of life, have been suppressed in our modern times. In the design of the memorial, a fundamental goal was to be honest about death, since we must accept that loss in order to begin to overcome it. The pain of the loss will always be there, it will always hurt, but we must acknowledge the death in order to move on.
Americans needed no help in grieving immediately after the attacks. In the longer term, our temptation will be to invoke the victims merely as the reason for our superpower rage. We must never forget the names of these fellow Americans. Maya Lin’s sharp vision would help us redeem a space made profane by the mass murder that stole them from us.
Sam Torode
Ignorance, poverty, and vice must stop populating the world. Science must make woman the owner, the mistress of herself. Science, the only possible savior of mankind, must put it in the power of woman to decide for herself whether she will or will not become a mother.
—Margaret Sanger, c. 1920

Modern life is based on control and science. We control the speed of our automobile. We control machines. We endeavor to control disease and death. Let us control the size of our family to ensure health and happiness.
—American family planning poster, c. 1940
A few weeks ago, my wife and I made one of our rare pilgrimages to the nearest shopping mall. Bethany had been given a Victoria’s Secret gift certificate at her bridal shower a year earlier, and was intent on finally redeeming it. The last time we visited a mall, she managed to lure me into Victoria’s Secret, where my friends surprised me with a humiliating practical joke; having learned my lesson, this time I ducked into a bookstore and left her to her business.
When she returned, Bethany had a good time telling me about the various looks she attracted, browsing in Victoria’s Secret with a round, pregnant belly. She had hoped to find some nursing bras; alas, they didn’t sell them. “I guess they don’t like to be reminded of where all that sexy lingerie leads.”
Does making love still lead to making babies? Well, I suppose so … if you're into that sort of thing. You know, like, whatever makes you happy.
In the decades since 1960, the year the FDA approved the first oral contraceptive drug, it has become quite common to think of sex and procreation as two separate phenomena. Today, when advertisements for condoms are no more unusual than ads for cigarettes and cellular phones, it is difficult to fathom the situation in 1960. At that time, 30 states prohibited the advertisement of contraceptives, and over 20 states restricted their sale. While these laws were often violated and seldom enforced, they still cast a shadow of ill repute over the burgeoning contraceptive industry. That shadow disappeared in 1965, when state laws restricting contraception were ruled unconstitutional.
Devices and Desires: A History of Contraceptives in America
by Andrea Tone
Hill & Wang, 2001
366 pp.; $30
Devices and Desires, a new history by Andrea Tone of the Georgia Institute of Technology, is a fascinating account of our quest to separate sex from procreation. Tone traces the rise of the American contraceptive trade from its bootleg beginnings to the major pharmaceutical industry it has become. In an engaging narrative style, Tone combines biographical sketches of pivotal figures with meticulous research drawn from often-overlooked sources—everything from newspaper advertisements to private correspondence. Among the many surprises in the story of contraception are the roles of two evangelical Christians.
Comstock’s Crusade
The nineteenth-century laws banning the advertisement and sale of contraceptives were largely the legacy of one man, Anthony Comstock. Tone describes Comstock as an evangelical and a tireless crusader against all manner of vice. The law he drafted, known as the Comstock Act, was a broad statute banning not only contraception but also indecent photographs and instruments used to perform abortions. The Comstock Act was passed by Congress in 1873; subsequently, many states used it as a model, enacting their own "mini-Comstock" laws. While earlier laws had criminalized abortion, these were the first to address contraception. [1]
In his crusade, Comstock was joined by a wide and diverse coalition, including women’s suffragists, purity reformers, and feminists. It was not until the twentieth century that feminists became champions of birth control. In the early days, Tone writes, women’s rights advocates “tended to support natural family planning methods but not contraceptives, which they associated with promiscuity, particularly with men’s license to have sex outside the bonds of marriage.” Even some radical “free love” advocates condemned contraception, insisting that the acceptance and celebration of a woman’s natural sexuality and fertility were essential to her spiritual and physiological well-being.
As Tone points out, Comstock was not motivated by a desire to oppress women or force them into bearing more children. Rather, “What Comstock and his cronies found so threatening was the prominence of contraceptives in the vice trade—a robust and increasingly visible commerce in illicit products and pleasures that seemed to encourage sexual license by freeing sex from marriage and childbearing.”
Like the early feminists and purity reformers, Comstock upheld means of fertility regulation that rely on self-discipline, such as abstinence during the suspected fertile time of a woman’s cycle. “For Comstock,” Tone writes, “such acts of self-restraint were permissible when they occurred between wedded men and women in the sanctity of the marriage bed. Indeed, given his affection for self-control, he may have believed them to be character building.”
Devices and Desires is a powerful corrective to Margaret Sanger’s widely believed claim that before the opening of her birth control clinics in the 1920s, contraceptive techniques were unknown among the poor and working classes. Tone brings to light the massive underground contraceptive trade that flourished even in the heyday of “Comstockery.” The obvious parallel is the bootleg alcohol trade under Prohibition. Where there’s a will, there’s a way; and the will to subvert procreation was especially strong in urban, industrial areas, where children were viewed as economic parasites rather than assets. “Not just a pregnancy preventative,” Tone explains, “contraceptives promised families barely making do a prophylactic against economic ruin.”
Tone provides several memorable vignettes of contraceptive entrepreneurs, but none more memorable than Julius Schmid. A poor Jewish immigrant in New York City in the late 1800s, Schmid found work at a sausage factory, cleaning animal intestines to be used as casings. To make some money on the side, he brought the extra intestines home and fashioned them into condoms—"skins," as they were called in the days before "rubbers." When his basement operation was raided, Schmid paid his fines and returned to his lucrative business. Schmid eventually expanded into rubber and latex; today, his "Ramses" and "Sheiks" brands are sold on drugstore and supermarket shelves worldwide.
To avoid prosecution, many contraceptive entrepreneurs used subtle advertising. One “sanitary sponge for ladies,” for example, was advertised in newspapers as a means of keeping a woman’s body “germ free.” Larger companies used this same technique. Under the Comstock Act, Tone writes, “established pharmaceutical and rubber firms made devices and chemicals known to have contraceptive benefits but did not market them as birth control. Intrauterine devices (IUDs) were sold to correct prolapsed uteri. Carbolic acid, an antiseptic commonly used for contraceptive douching, was marketed for burns, scalds, whooping cough, diphtheria, and morning sickness.” The list of respected companies selling contraceptives in the late 1800s and early 1900s includes names like Goodyear, B.F. Goodrich, Johnson & Johnson, and Sears & Roebuck.
Both Goodyear and B.F. Goodrich manufactured rubber IUDs. They were also crafted in entrepreneurs' basements from common household materials and sold on the black market. "Intrauterine stem pessaries, as physicians called the devices, could span five inches," Tone writes. "They consisted of a rubber, metal, or glass stem attached to a cup or button that held the stem upright and prevented it from becoming lost in the uterus."
Devices and Desires includes photos of a wide array of IUDs, which resemble medieval instruments of torture. How exactly does an IUD prevent conception? Usually, it doesn't. Rather, the introduction of a foreign object, such as an IUD, into the uterus "causes a local inflammation, or chronic, low-grade infection … that makes egg implantation impossible." An IUD works after conception, denying a newly fertilized embryo the ability to implant and grow in the lining of the uterus. Smaller, plastic IUDs are still used today.
Making America Safe for Contraception
In 1909, the U.S. Army and Navy began distributing chemical prophylactics for soldiers to apply to their genitals after intercourse. These medical preparations, which were not pleasant on the skin, were designed to protect against syphilis, gonorrhea, and chancroid, venereal diseases that had plagued the military since the Civil War.
Upon his appointment as Secretary of the Navy in 1913, Josephus Daniels was horrified to learn that such "preventative packets" were widely distributed on America's ships. Like Comstock, Daniels was an evangelical. "It is wicked," he wrote, "to encourage and approve placing in the hands of the men an appliance which will lead them to think that they may indulge in practices which are not sanctioned by moral, military, or civil law, with impunity, and the use of which would tend to subvert and destroy the very foundations of our moral and Christian beliefs and teachings in regard to these sexual matters."
Within two years, Daniels succeeded in banning the sale of prophylactics to America’s fighting men. In their place, Tone writes, he advocated abstinence-based programs. Daniels “pledged to augment Navy efforts to teach men self-control through talks and pamphlets, a strategy he called ‘moral prophylaxis.'” In addition, rather than banishing chemical prophylactics altogether, Daniels put them in the hands of Navy doctors, who would administer them to men who admitted having engaged in illicit intercourse. This policy, he believed, would protect health without promoting immorality.
Daniels’s strategy failed. “Between April 1917 and December 1919,” Tone writes, “380,000 soldiers—roughly one in eleven—were diagnosed with syphilis, gonorrhea, or chancroid. … The Army estimated that every case of venereal disease cost it approximately $231. At the end of the war, it had spent over $50 million on treatment.”
In the wake of financial and public health disasters, the military reversed its policy. “By the 1940s,” Tone writes, “not only had condoms become an approved prophylactic, but military officials complained that they could not get them to the troops fast enough.” To ensure that our troops’ condoms were reliable, the FDA began inspecting and regulating their quality, making condoms the first contraceptives to receive the government’s stamp of approval. During the 1930s and 1940s, U.S. condom production more than doubled, giving new meaning to the public service campaign urging citizens to conserve and donate rubber for the war effort.
Long before World War II, however, the venereal disease crisis had a profound impact on how Americans viewed and acquired contraception. In 1918, a New York State court decision legalized condoms—not for the prevention of pregnancy, but for the prevention of disease. This ruling opened the door for Margaret Sanger to legally operate her birth control clinics. Though Sanger had defended her clinics on grounds of reproductive rights, she was exonerated by a judge’s ruling that contraception, when used “for the cure or prevention of disease,” was not “indecent or immoral.” For many years thereafter, condom packages were labeled “for the prevention of disease only.”
Early in her career, Sanger became convinced that contraceptives must be brought under the supervision of medical doctors rather than sold over-the-counter. At Sanger’s birth control clinics, trained personnel fitted diaphragms and instructed women in their use, operating legally under the rubric of “protecting women from life-threatening pregnancies.”
In 1937, the American Medical Association first endorsed contraception as a legitimate part of medical practice. The medicalization of contraception was part of a larger movement in the mid-twentieth century, in which all aspects of women’s reproductive health came under the control of doctors. Fueled in part by sensationalist claims about the “risks” of pregnancy, childbirth rapidly moved out of the home and into the hospital. Tone quotes one physician who, in 1920, described childbirth as a “pathologic process.” Birth is so risky, he declared, “that I have often wondered whether Nature did not deliberately intend women should be used up in the process of reproduction, in a manner analogous to that of salmon, which die after spawning.”
Doctors' influence over women's health was radically extended in 1960 by the Pill, which was available by prescription only and required careful medical supervision, including frequent cervical smears and breast exams. Because it brought in female patients on a regular basis, dispensing medical contraception was a boon for family physicians.
A Singular Pill
Sexual Chemistry: A History of the Contraceptive Pill
by Lara V. Marks
Yale Univ. Press, 2001
372 pp.; $29.95
Complementing Tone’s study, a second new history focuses on the most popular contraceptive of all. Sexual Chemistry is the work of British historian Lara V. Marks, former research fellow in the history of twentieth-century medicine at Imperial College. Marks provides a thorough, dispassionate account of a drug that has provoked passionate responses from its inception.
The Pill was pharmacologically unique. Unlike previous drugs, which were designed to cure organic diseases, the Pill was intended for long-term use by healthy women. For this reason, Marks calls the Pill the first “designer” or “lifestyle” drug. The Pill was designed to vaccinate against pregnancy, “a condition,” Marks says, “not usually considered an illness.” Throughout her book, Marks uses the phrase “falling pregnant” in place of “getting” or “becoming” pregnant—as if it were a condition that creeps up on unsuspecting women.
Unlike any previous contraceptive, the Pill was taken orally and did not physically intrude upon the act of intercourse. Fulfilling Margaret Sanger's dream of a "magic pill," the Pill put contraception entirely in the hands of women. A man might not even know his partner was using the drug.
The impetus for creating the Pill went beyond the liberation of women, however. The main reason the Pill was developed, approved, and manufactured, Marks argues, was its potential for halting population growth. At the start of the Cold War, it was feared that overpopulation would provide conditions ripe for communist revolution. Sanger announced in 1950, “The world and almost all our civilization for the next twenty-five years, is going to depend on a simple, cheap, safe contraceptive to be used in poverty-stricken slums and jungles, among the most ignorant people.” [2]
When the first oral contraceptive, Enovid, was approved by the FDA in 1960, it was marketed primarily to married women who already had at least one child. But, as Marks shows, the Pill rapidly became the contraceptive of choice for single women. While the number of married women using the Pill declined in the 1970s after the drug’s health risks were widely publicized, the number of single women on the Pill continued to rise. The Pill was especially popular among college-educated singles, many of whom were introduced to the Pill by university health centers. For these women, it meant the power to pursue a career without sacrificing an active sex life. By 1982 an astonishing 80 percent of American women born after 1945 had swallowed the Pill.
While Devices and Desires tells of Protestants who crusaded against contraception, Sexual Chemistry explores further ironies in the story of Christians and contraception: particularly the roles of three Catholics in developing the drug that ignited one of the most divisive episodes in the history of their church. First was lapsed Catholic Margaret Sanger, one of eleven children. Sanger initiated the project, recruiting maverick reproductive biologist Gregory Pincus to conduct research into the possibility of a contraceptive pill. As Pincus later put it, he “invented the Pill at the request of a woman.” Second was Pincus’s right-hand man, renowned Harvard obstetrician John Rock, who conducted the human testing for the Pill. A devout Catholic, Rock attended Mass daily. A crucifix hung above the desk where he examined data culled from the Puerto Rican women and American psychiatric patients who were the first to swallow the Pill. In 1959, Rock traveled to Washington, D.C., and pleaded for FDA officials to approve Enovid. Within the FDA, it was a third Catholic, Pasquale DeFelice, who reviewed and approved the initial application.
In 1963, Rock published his manifesto, The Time Has Come: A Catholic Doctor’s Proposals to End the Battle over Birth Control. He argued that the Catholic Church should endorse the Pill because, unlike the condom or IUD, the Pill is “natural.” That is, the Pill uses naturally occurring sex hormones to fool a woman’s body into a state of pseudo-pregnancy, resulting in temporary sterility. “It is my confident hope that the medication will prove acceptable to my church,” Rock wrote,
since it merely gives to the human intellect the means to suppress ovulation; these means have heretofore come only from the ovary and, during pregnancy, the placenta. These unthinking organs supply their hormone, progesterone, at those times when nature seeks to protect a fertilized ovum or growing foetus from competition for the woman’s resources. The oral contraceptive simply duplicates the action of this natural hormone, when the woman herself feels the necessity for protection of her young—present or prospective.
Rock's reasoning was clever but erroneous. Oral contraceptives are not hormones naturally produced by the body; they are synthetic steroids, different in both structure and effect from natural hormones. The introduction of synthetic progesterone into a woman's bloodstream to induce sterility can in no way be equated with the natural ebbs of a woman's fertility before and after ovulation, during pregnancy and breastfeeding, and after menopause.
Moreover, contrary to Rock’s simplified explanation of how it works, the Pill does not always suppress ovulation. All oral contraceptives (both “combination” Pills, containing estrogen and progesterone, and “progesterone-only” Pills) have two backup mechanisms in case ovulation occurs. As Tone writes, “Progesterone also checks conception by thickening the cervical mucus, which inhibits sperm penetrability, and by preventing the full development of the uterine lining, without which a fertilized egg cannot implant.” The latter mechanism—preventing the implantation of a newly fertilized embryo—makes the Pill inherently problematic for those who believe that life begins at conception. [3]
For the most part, however, Christians have been content to brush aside such ethical concerns. As Marks documents, Catholic laymen began using the Pill as soon as it became available, in numbers closely corresponding to those of the general population. In 1965, a Newsweek survey found that 38 percent of American Catholics were using the Pill or other artificial contraceptives; of those under age 35, 60 percent were using contraception, and 80 percent hoped that the church would endorse the Pill. Their contraceptive habits went largely unaffected by the 1968 encyclical Humanae Vitae, which reaffirmed the traditional teaching against contraception. Pope Paul VI was too late—the people had already chosen.
While Marks does not address the evangelical reception of the Pill, we know that Protestants reached the same consensus as Catholic laymen, though not in opposition to their churches. All of the major Protestant denominations endorsed contraception by the 1970s. When asked to comment on Humanae Vitae, evangelical leader Billy Graham expressed his disagreement, citing the “population explosion.” In the 1970s, evangelical “sex guides” began to appear, lauding contraception as a means of elevating the meaning of sex above mere procreation. The most popular of these, The Act of Marriage by Tim and Beverly LaHaye, is still available in a revised edition, boasting over 2,500,000 copies in print. The LaHayes gave newlyweds this advice: “Because of its safety and simplicity, we consider the pill the preferred method for a new bride in the early stages of marriage. Then, after she and her husband have learned the art of married love, she may decide on some other method.”
But the unsettling ethical questions raised by the Pill are now more pressing than ever. They carry implications for a host of bioethical issues, from embryonic stem-cell research to human cloning—implications made clear in a new memoir from Carl Djerassi, the self-proclaimed “mother of the Pill.”
Sex and Reproduction—Filing for Divorce?
This Man's Pill: Reflections on the 50th Birthday of the Pill
by Carl Djerassi
Oxford Univ. Press, 2001
308 pp.; $22.50
Djerassi has a prodigious intellect and an ego to match. Both are on glorious display in This Man’s Pill, a collection of autobiographical essays full of revealing insights into the creation of the Pill, its influence on society, and where contraceptive technology is headed.
Djerassi would be the first to point out a significant oversight in Tone's and Marks's histories: while acknowledging the work of others, they give Sanger, Katharine McCormick (Sanger's ally and benefactor), Gregory Pincus, and John Rock the lion's share of credit for developing the Pill. In a chapter titled "Genealogy and Birth of the Pill," Djerassi explains why he deserves to be called the mother of the Pill.
In the 1940s and early 1950s, the hormone progesterone was in demand for treating menstrual disorders and infertility. Natural progesterone, which could be harvested only in minute quantities from animal glands, was scarce and expensive; to retain its potency, it also had to be injected into the body, which could be very painful. Building on recent advances in steroid production, in which hormones were developed from compounds found in plants, Djerassi and his team of researchers in Mexico City were looking for a way to synthesize progesterone from wild, inedible Mexican yams. On October 15, 1951, they succeeded in creating a synthetic progesterone that could be cheaply produced in mass quantities. Far more potent than natural progesterone, Djerassi's steroid could also be effectively taken by mouth.
At that same time, unbeknown to Djerassi and his team, Gregory Pincus was studying the ability of progesterone to suppress ovulation. When Pincus learned of Djerassi’s work, he found that Sanger’s “magic pill” had, in effect, already been invented. It was left up to Pincus, Rock, and others to test the drug and determine what dosage to prescribe. They discovered that adding a second hormone, estrogen, was necessary for reliable contraception (though later, in the mid-1960s, progesterone-only Pills were developed). Thus, Djerassi calls Pincus a “father” of the Pill, and Rock its “metaphorical obstetrician.”
In recent years, Djerassi has abandoned “hard chemistry” to focus on bioethics, writing novels and plays that explore the opportunities and dilemmas created by science. Djerassi argues that the Pill has sparked the most significant technological revolution of the twentieth century: “the gradual divorce of sex from reproduction.” While earlier contraceptives attempted to separate sex and reproduction, the “true realization of ‘sex for fun’ occurred only about forty years ago with the introduction of the Pill.” Women who used the Pill “were temporarily sterile, and thus could indulge in sexual pleasure without the fear of an unintended pregnancy.”
But engaging in sex without the “fear” of procreation is only one side of the coin. The complete separation of sex and fertility comes only when we can reproduce without having sex. This possibility became reality with in vitro fertilization. In 1978, Djerassi recounts, Louise Joy Brown “was conceived under a microscope, where her mother’s egg was fertilized with her father’s sperm; the fertilized egg was reintroduced into the mother’s womb two days later, and, after an otherwise conventional pregnancy, a normal girl baby was born nine months later. This technique has since become widely known as in vitro fertilization (IVF)—an event that has now been replicated several hundred thousand times through the birth of that many IVF babies.”
“Whereas reproductive technology’s focus during the latter half of the twentieth century was contraception,” Djerassi writes, “the technological challenge of the new millennium may well be conception.” In the near future, he suggests, young men and women will have their sperm and eggs frozen, then undergo sterilization. The Pill will become obsolete as couples engage freely in sterile sex, then “resurrect” their gametes and reproduce through in vitro fertilization. The attractiveness of this option is enhanced considerably, Djerassi suggests, by genetic screening; parents can weed out “defective” embryos and even choose the sex of their children—all before implantation. No longer will reproduction be left to the “random” collision of a sperm and egg.
Has the link between sex and procreation truly been severed? Despite the claims of men like Djerassi, evangelical natural law scholar J. Budziszewski insists otherwise. “The spirit of the age has most burdened people with a false picture of nature,” Budziszewski explains. “Their eyes dazzled by what technology can do, when they gaze upon human nature they see not a Design, but a canvas for their own designs. Because they can sever the causal link between sex and procreation, they suppose they have severed the link between sex and procreation.”
As Aristotle taught, and Christian tradition affirms, man will not find happiness except by living in harmony with his created nature (as opposed to his fallen or "sin nature," Christians would clarify). If this way of thinking is right, we cannot change the reality that sex is, by nature, the procreative act. If sex and fertility are inseparable by nature, then all attempts to control fertility by "liberating" it from sex will prove ultimately unsatisfying and destructive.
Does the history of contraception bear this out? Is it, as Djerassi proudly proclaims, a story of increasing liberation and fulfillment? Or is it better described as a story of increasing bondage to doctors, scientists, and corporations, and increasing disregard for the sanctity of life and the human person?
Sam Torode lives and works in rural Wisconsin, with his wife, Bethany. Together, they are authors of Open Embrace: A Protestant Couple Rethinks Contraception (Eerdmans, 2002).
Footnotes
1. A strong Protestant, Comstock was deeply suspicious of the Catholic Church. He once attended Sunday Mass out of curiosity, noting that night in his diary that it “Seemed much like Theater.” One wonders what Comstock would have thought had he lived to see the Protestant churches embrace contraception, while—nearly alone—the Catholic Magisterium stubbornly insisted on periodic abstinence as the only acceptable means of child spacing.
2. At the same time Sanger was moving contraception into medical clinics, Tone writes, “the scientific credibility of the birth control movement was enhanced by the search to limit the procreation of undesirable groups, and its leaders appropriated the eugenic language to promote their goals.” Like other eugenists, Sanger sought the improvement of the race through selective breeding. “Birth control,” Sanger once said, “is nothing more or less than the facilitation of the process of weeding out the unfit [and] of preventing defectives.” Particularly, Sanger believed it necessary to reduce the birth rate among African Americans—a need that, in her words, “the race did not recognize” itself.
Though eugenics lost favor after World War II, when its implications were made manifest by the Nazi atrocities, similar attitudes persist today. Advances in fetal testing and genetic screening have breathed new life into eugenics. Routine ultrasounds and tests such as amniocentesis allow parents to detect “defects” while a child is still in the womb. Nearly all babies identified as having Down Syndrome are aborted. With in vitro fertilization, well-to-do parents can now fertilize several embryos and discard those predisposed to various health problems or otherwise genetically “unfit.”
3. Today, John Rock is recognized not only for his role in creating the Pill but also for his pioneering efforts in treating infertility, paving the way for in vitro fertilization. In 1944, Rock and his team at Harvard Medical School, for the first time ever, fertilized four human embryos in a dish. The embryos were subsequently destroyed. The Pill, it turns out, was not the first technology Rock developed in violation of his church’s teachings on the sanctity of all human life from its earliest stages.
Copyright © 2001 by the author or Christianity Today/Books & Culture magazine.
Randal M. Jelks
Frederick Douglass summed up the physical cost of being a slave in the starkest of terms: slaves "were worked in all weathers. It was never too hot or too cold; it could never rain, blow, hail, or snow, too hard for us to work in the field. Work, work, work, was scarcely more the order of the day than of the night." The slaveholder had only one desire, which was to work the slaves as hard as possible to clear the land and plant and harvest crops. "The longest days were too short for [the overseer], and the shortest nights too long for him," Douglass wrote.
But the psychic toll weighed even more heavily on Douglass: “I was somewhat unmanageable when I first went there, but a few months of this discipline tamed me. Mr. Covey succeeded in breaking me. I was broken in body, soul, and spirit. My natural elasticity was crushed, my intellect languished, the disposition to read departed, the cheerful spark that lingered about my eye died; the dark night of slavery closed in upon me, and behold a man transformed into a brute!” The transatlantic slave trade and its subsequent evolution in the history of the United States was a spiritual and social holocaust for African Americans.
American sociologists have been keen to assess how the slave system and its outgrowth—Jim Crow laws and the sharecropping system of the deep South—affected the behavior of African Americans. From W.E.B. DuBois’s late-nineteenth-century studies of the Negro family, to the 1930s writings of E. Franklin Frazier and Charles Johnson, to the 1960s Moynihan Report, many thinkers have pondered the link between agrarian slavery in the eighteenth and nineteenth centuries and the struggles of African Americans in modern, urban America. The sociological debate has been accompanied by fierce debates within American historiography about the slave family, personality, culture, and community.
American slavery has also inspired theological reflection. This tradition began in the eighteenth century with The Interesting Narrative of the Life of Olaudah Equiano, or Gustavus Vassa, the African, Written by Himself (1789). The stream wound through the nineteenth-century writings of Douglass and Francis Grimke into the twentieth century in Howard Thurman's Jesus and the Disinherited (1949) and Deep River: Reflections on the Religious Insight of Certain of the Negro Spirituals (1955), Joseph R. Washington's Black Religion: The Negro and Christianity in the United States (1964), and James H. Cone's ingenious Black Nationalist theology in God of the Oppressed (1975). The theologians have tried to ascertain meaning from the horrors of slavery and determine how theology helps those oppressed by slavery to gain civic and personal freedom.
Three recent books from the disciplines of history, sociology, and theology continue the exploration of American slavery and its consequences. These inquiries make the reader aware that no American can face the future without facing the history of our country, a history scarred by conquest and chattel slavery. At the same time, these books offer specific challenges to African American intellectuals, Christian and non-Christian, to judge what type of faith they should place in this country's civic and religious institutions.
Soul by Soul: Life Inside the Antebellum Slave Market
by Walter Johnson
Harvard Univ. Press, 1999
320 pp.; $15.95
Walter Johnson's Soul by Soul: Life Inside the Antebellum Slave Market is a richly textured history of human trade in the antebellum South, covering a period during which some two million slave sales were meticulously recorded. Johnson's haunting study centers on New Orleans, the site of North America's largest slave market. Unlike Eugene Genovese, whose Roll, Jordan, Roll: The World the Slaves Made emphasizes slaves' failure to act out against the slaveholders' hegemonic paternalism, Johnson looks at the roles played by slaves, traders, and slaveholders in the nasty enterprise of selling life. For the slave trade to work, everyone involved had to accept the terms of business.
But if slavery was a business, it was also much more than a business. The title of Johnson’s book is not casually chosen, for he seeks to grasp the impact of slavery on the very souls of everyone it touched. This ambition takes his work beyond that of historians who have traced the trajectory of the slave trade through commercial records only. Slaveholders, Johnson suggests, engaged in a form of “necromancy,” convincing themselves that they were omnipotent even as they depended on slaves to plow the fields, cook the food, and build their fortunes.
Indeed, the imagination of the South was captive to the fantasies of slavery. Every young white southerner dreamed of owning slaves, the ticket to wealth and status. When slaves failed to fulfill their owners’ dreams, they were beaten, sometimes killed, or resold in the market. Slaves could only hope to avoid dire fates by playing the role of the “good slave,” and sometimes those efforts failed. Slaveholders claimed, paternalistically, to buy or sell slaves in the name of the family, even though they were tearing families apart and fostering natal alienation on a daily basis.
Slave traders were the necessary, yet despised, middlemen in the complicated operations to sell and resell human beings from the Southeast into the lower South and Southwest. Johnson describes the different kinds of traders, from those who held corporate arrangements with headquarters in New Orleans and St. Louis, to the speculative and temporary traders, to the rural agents, all of whom shared in a desire to turn a profit.
In a perverse and coercive way, slave traders had to rely on the slaves to make themselves salable—to go on playing the “good slave.” Slaves complied, as Johnson explains, in hopes that the slave trader would honor their cooperation and sell them to a “decent” slaveholder or a slaveholder near their families. Amicability, however, was but a small factor in the traders’ calculus. Everything was a part of the transaction in the slave market, including the slaves’ bodies, the slaves’ color, and the slaves’ skills.
To even have the chance to play along in this sick game, slaves needed to know the rules. The community of the slave pen offered such information, serving as a place to decode the slave traders’ expectations. The pen was also a dehumanizing zone of brutality and fear:
The walls surrounding the pens were so high—fifteen or twenty feet—that one New Orleans slave dealer thought they could keep out the wind. Inside those walls the air must have been thick with overcrowding, smoke and shit and lye, the smells of fifty or a hundred people forced to live in a space the size of a home lot.
And it is by learning about life in the slave pen, Johnson suggests, that we can begin to trace the network of social relationships that created and formed the antebellum South.
Johnson’s book reminds the reader how much slavery cost slaves spiritually, psychologically, and physically. Slavery and the slave market systematically robbed slaves of their dignity and full humanity. Freed from slavery in 1865, African Americans had only fragments of their lives with which to create new identities and a New World.
Orlando Patterson, one of our country’s finest historical sociologists, also traces the effects of slavery on African Americans in his book Rituals of Blood: Consequences of Slavery in Two American Centuries. In three essays, he argues that the contaminated legacy of slavery has been central to African American life and American culture.
The first essay links the brokenness of African American families today to the rupture of the family under slavery. The second explores the relationship between fundamentalist Christianity and white supremacy, suggesting that lynchings served as a form of human sacrifice. The last essay creatively evaluates images of the African American male.
Patterson’s opening essay, “Broken Bloodlines: Gender Relations and the Crisis of Marriages and Families Among Afro-Americans,” is prophetic, startling, and could stand on its own as a small book (it is 167 pages long). Patterson argues that a correction is required in the 1970s historiography about the African American family, which had set out to prove that African Americans had nuclear families and kinship networks from slavery into the twentieth century. These scholars, interested in the form of slave families, missed an essential element: how the slave family functioned under the conditions of slavery.
Slavery in the Americas profoundly altered the shape of the West African family. Though “Africans and their descendants tenaciously held on to the strong valuation of kinship and to the fundamental West African social tendency to use kinship as the idiom for the expression of all important relationships and rankings,” slavery had the “virulent” effect of devastating the roles of father and husband.
Patterson explains that “the status and role of husband could not exist under slavery, since it meant having independent rights in another person and, in both the U.S. South and West Africa, some authority over [children]. Fatherhood could also not exist, since this meant owning one’s children, having parental power and authority over them. Both infringed upon the power of the master and were therefore denied in law and made meaningless in practice.” The relationships that slavery forcibly engendered “resulted in patterns of interaction between men and women that, while precariously functional within the plantation slave regimes, were later to prove tragically disruptive for African Americans.”
Those disruptions continue today as African American family and gender relations are in crisis. “This crisis,” Patterson writes, “is the major internal source of wider problems of African Americans. It is the main means by which the group ends up victimizing itself.” This victimization results in social isolation; a disproportionate number of households headed by poor, single females raising fatherless children and a correspondingly disproportionate number of single males; and pervasive distrust between African American women and men, largely caused by male infidelity.
Patterson is not taking gratuitous swipes at African Americans. Rather, with ample statistical data, he is drawing readers’ attention to the lingering effects of slavery and Jim Crow. Patterns of family life formed under those regimes conspire to keep African Americans marginalized, even as de jure racism has eroded.
Patterson exposes the dirty laundry of African American family life and gender relations and gives us an unsentimental analysis. He states that the “problem and the solution are overwhelmingly in the hands of African American men. They must radically alter their ways. They must change their gender attitudes, their sexual morality, their low opinion of marriage and their chronic infidelity in marriages and cohabiting unions.”
“Broken Bloodlines” is a substantial essay that ought to provoke considerable reflection and debate. However, the second essay, “Feast of Blood: ‘Race,’ Religion, and Human Sacrifice in the Postbellum South,” is equally powerful and even more horrifying and theologically challenging.
While structuralist accounts of lynching emphasize demographic, economic, and political aspects, Patterson employs anthropological and sociological tools to uncover the meaning of lynching. He argues that in the South, as in many other cultures, symbolic killing enabled the dominant society to create and assert its identity.
Patterson notes that in groups as diverse as the Aztecs, America’s Northwest Coast Indians, the Ashantis in Ghana, and the Carthaginians, “human sacrifice was one of the main reasons for holding slaves and taking prisoners of war.” Such ceremonial killings tied a community in a ritual fellowship: “The sacrifice reinforced the most strongly held values of the group.”
Societies demand sacrifices, Patterson notes, when they are going through a transition: “Precisely such a period of acute liminal transition was faced by the Old South after the collapse of its system of slavery and during its forced transition to a new form of society, a transition that took some fifty years. That period, especially after the end of Reconstruction, was perhaps the worst episode in the history of African Americans, for as we will see, it was they who paid the expiatory and propitiatory price of the South’s transition, in increasingly savage rituals of human sacrifice.”
During the Southern transition, fundamentalist Christianity held firm. Those who took this ideology most seriously, including both pastors and members of the Ku Klux Klan, used it to give biblical sanction to the ritual murders taking place all over the South. White Christian southerners, led by clergy, fused a form of Christianity and white supremacist civic religion so that each powerfully supported the other. White Southern leaders called themselves “Redeemers,” while African Americans represented the sin and evil that the reengineered South must purge.
Patterson suggests that one of the ways African Americans responded to this poisoned strain of Christianity was by withdrawing from public religion. It was not until the mid-1950s, he claims, that African Americans reconnected their faith with a public gospel. Here Patterson is simply wrong. He has relied on an older historiography found in the works of E. Franklin Frazier, Carter Woodson, and Gayraud Wilmore. While it is true that African American Christians did not overtly respond to every injustice, they found many ways to resist on the local level. Black churches provided space for branches of the NAACP, created Settlement Houses, and generally practiced the Social Gospel with persistence and fervor.
I am surprised, too, that Patterson seems to be unaware of the many African American Christian intellectuals of this period who laid the groundwork for the discourse of the civil rights movement. African American Christians did not simply wake up in the 1950s to preach of “a spiritual community wherein service, fellowship, worship, forgiveness, and atonement are intimately linked.” Rather, it was individuals such as Howard Thurman, Benjamin Mays, and George Kelsey who set the theological foundation for the reframing of Christian theology through the eyes of the disinherited.
Like many another critic of Christianity, Patterson takes it upon himself to lecture Christians about the transmission of their own tradition, attacking Christianity in the name of Christ. Patterson asserts that the proclamation of the gospel in the context of the civil rights movement returned Christianity to its first-century roots by rejecting the “two-thousand-year Pauline betrayal of Christ’s life, reflected in the outrageous Christological dogma that the whole point of Jesus’ life, the only important thing about it, was that it ended on the cross.” The irony is that African American Christians who preached this Pauline doctrine created vibrant congregations and offered a powerful corrective to the distorted Christianity of the white South.
Patterson closes this remarkable collection with an essay titled “American Dionysus: Images of Afro-American Men at the Dawn of the Twenty-First Century.” In it he ponders why Americans were so intrigued in 1995 with Michael Jordan, O.J. Simpson, and Colin Ferguson (the mass murderer on the Long Island Railroad). Using this trio, Patterson traces America’s obsession and repulsion with African American men.
As America has become more integrated, he argues, African American men have been given a cultural role that is Dionysian in its glorification of athleticism, the bad boy image, and hip-hop cool. African American men function as a permanent foil to the more disciplined Apollonian values that other men, in theory, are expected to uphold. By creating, adapting, and responding to this image, African American men have effectively disqualified themselves from addressing the central problems of the African American family.
Rituals of Blood is a brilliant analysis of the troubles that have resulted from slavery, Jim Crow, and African Americans’ own self-destructive responses to an antagonistic society, but there is something amiss. In each of Patterson’s essays, he challenges African American men to change their behavior. Yet he never fully accounts for either the negative pressures outside the African American community or the positive forces within it.
Unless Patterson addresses these blind spots, he will come across as yet another intellectual (he is a Harvard professor) commenting on the debility of African Americans without seeing anything of value within the community. Although “the hood” may be marked by social isolation, it is also a place of caring and peculiar generosity that no statistical surveys bother to explore. Lastly, while Patterson respects the African American tradition of Christianity, he makes no comment on what the church, the longest standing institution within African American communities, might do to help those communities change for the better.
Down, Up, and Over: Slave Religion and Black Theology
by Dwight N. Hopkins
Fortress Press, 1999
300 pp.; $20
The questions raised by Johnson and Patterson require a theological response. One place to look for it is in the work of Dwight Hopkins, an associate professor of theology at the University of Chicago Divinity School and a student of James Cone. His new book, Down, Up, and Over: Slave Religion and Black Theology, addresses the theological and social liberation of African Americans by examining the narratives of American slaves.
“Through the life and death religious examples of African American chattel, therefore, universal meanings about the final quest for full spiritual and material humanity flourish,” Hopkins writes. He argues that a “constructive black theology” is located within “the fragments of liberation theology found in the slave religious experiences.”
According to Hopkins, the task of black theology is to affirm, assert, and advocate a positive theology for poor and working people. Where necessary, black theology also knocks down, breaks up, and jumps over any demonic theologies that prevent the African American church and the African American people from working with the Spirit.
Hopkins divides his book into halves. In the first half, he revisits the history of American slavery in a context he calls, generically, “Protestantism and American culture.” He interprets the history of the United States by two arrivals—the arrival of slaves in the southern colonies and the arrival of Pilgrims on the Mayflower. Ironically, he never mentions American Indians in this scheme of liberation.
The religious and political elite of the country “seeded the religious soil with racial hierarchy” and encoded American culture with laws governing race relations. White theology supported white economic control while reducing black slaves to powerlessness. In response, African Americans recreated themselves, drawing out of their West African religious traditions and nineteenth-century Christianity an identity as “co-laborers” with God.
In the second half of the book, Hopkins argues for a threefold theory of liberation: “God—The Spirit of Total Liberation for Us,” “Jesus—The Spirit of Total Liberation with Us,” and “Human Purpose—The Spirit of Total Liberation in Us.” Leveraging selected biblical texts and slave narratives, he tries to push the boundaries of theology and find a new interpretive schema for emancipating the lives of African Americans. The attempt is noteworthy, for theology can and should be brought to bear on the sufferings and struggles of the African American community, but it fails because Hopkins neglects to explain why his approach matters to the lives of ordinary African American Christians.
Hopkins’s book has many other shortcomings. For one thing, Hopkins displays little familiarity with existing work on slaves and slave religion. He never cites such important books as Albert Raboteau’s Slave Religion: The “Invisible Institution” in the Antebellum South or Orlando Patterson’s Slavery and Social Death: A Comparative Study. These books might have helped Hopkins see the significance of the wide range of African American expressions of faith, including Afro-Baptist, Afro-Methodist, Afro-Catholic, and Afro-Calvinist. The way slaves and free people of color understood their theology informed not only how they worshiped and read the Bible but also how they understood their fight against oppression.
Hopkins’s discussion of slavery is compressed, and he misses the larger concepts that unjust social systems illuminate—most notably, the meaning of freedom. He presents his material on slavery too glibly, as though slave narratives speak on their own, without need of interpretation. He searches the narratives for proof texts without helping the reader understand where the documents come from and what they are trying to say. Furthermore, Hopkins at times romanticizes slave religion. I do not question the great piety of slaves, but, as nineteenth-century AME Bishop Daniel Payne warily observed, illiteracy and ignorance threatened their efforts.
Finally, it is a pity that a book bearing so much creative potential has no written elegance. Hopkins frequently uses theological jargon that distracts from his central point. Elegance of the kind found in many slave narratives is essential for challenging, persuading, and liberating minds.
Perhaps no contemporary black theologian can set forth the calling of African American theology better than ex-slave Henry Bibb. At the end of his slave narrative, he wrote, “Having thus tried to show the best side of slavery that I can conceive of, the reader can exercise his own judgment in deciding whether a man can be a Bible Christian, and yet hold his brethren as property, so that they may be sold at any time in market, as sheep or oxen, to pay his debts.” He further asks, “Is this Christianity? Is it honest or right? Is it doing as would we be done by? Is it accordance with principles of humanity or justice?”
Bibb answers his rhetorical questions with a resolution: “I believe slaveholding to be a sin against God and man under all circumstances. I have no sympathy with the person or persons who tolerate and support the system willingly and knowingly, morally, religiously, or politically.” He then commits his writing “to the path of freedom and revolutionizing public opinion upon this great subject.”
Though misused by many, Christian theology, faithfully interpreted by many radically unconventional black and white Christians, opposed the institution of slavery, as Bibb’s trenchant analysis reminds us. Because numerous social maladies resulting from slavery and racial discrimination remain, Christians must continue to struggle against anything that prevents individuals from being active and free people of God. Each of these books moves that struggle forward.
Randal M. Jelks is associate professor of history and director of multicultural academics at Calvin College.
Karl W. Giberson
Part 2: Newton’s Principia.
I believe the souls of five hundred Sir Isaac Newtons would go to the making up of a Shakespeare or a Milton.
—Samuel Taylor Coleridge
The achievements of great writers, painters, and musicians are accessible to a general audience in a way that the achievements of great scientists are not. In deprecating Newton’s genius, the great poet and critic Samuel Taylor Coleridge was retailing a familiar humanist theme, that mere science lacks the exquisite depth and resonance of great art and indeed should be regarded as inferior to it. At the opposite extreme from Coleridge’s disdain is the equally uninformed idolizing of an Albert Einstein or a Stephen Hawking: the Scientist as celebrity. But those are not the only alternatives. With a little patience, the outlines of Newton’s achievement can be grasped by anyone who is reading these words.
Newton’s masterwork is the Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), commonly known as the Principia. The first edition appeared in 1687, the second in 1713, and the third and final edition in 1726. Newton wrote the Principia in Latin, and until recently there has been only one complete translation into English, that of Andrew Motte in 1729. While adequate, Motte’s translation was flawed in many ways, and, as old books are prone to do, it became a bit of a linguistic fossil over the years as its prose remained the same while the language around it changed.
In response to these concerns the University of California Press brought out an elegant new edition in 1934, nicely bound in leather, with a revised translation by Florian Cajori. Cajori did not prepare a fresh translation from scratch; he simply tried to “defossilize” Motte’s English so modern readers could understand it. This Motte-Cajori version has been definitive since its introduction and continues to reside in some very respectable places on bookshelves throughout the English-speaking world; perhaps its most common residence is between Pascal and Locke as volume 34 of Britannica’s Great Books of the Western World.
The archaisms that Cajori removed from Motte’s translation include such delightful obscurities as “duplicate ratio,” which means simply “square of the ratio”; “subduplicate,” which means “square root”; and my favorite, “subsesquiplicate ratio,” which means “ratio of 3 to 2.” (Nostalgic scholars of the English language often lament how the language has atrophied over the years. I am sure the reader will agree that the loss of these delightful terms is nothing less than tragic.) These editorial changes are not substantive, and they certainly clarified the text.
But Cajori made other changes as well. He “updated” Newton’s physics in some questionable ways. At the time that Newton wrote the Principia, nature was interpreted within the context of the Aristotelian tradition, which understood changes in nature as coming from “within” things, out of an autonomous, almost organic sense of self-direction. Objects possessed an inner teleological “drive” to change.
Rocks fell down because it was their nature to “seek” the center of the universe. Fire rose up because it was “seeking” the top of the terrestrial realm. This Aristotelian understanding was overturned by an emerging paradigm which understood change mechanically, rather than organically. Within the mechanical paradigm, change was understood to come from without, mainly as the result of bodies pushing and pulling on each other by being in contact. But while Newton’s work may have led to what became known as a mechanical description of nature, this was far from how he saw it; indeed, as we shall see, one of his major challengers was Descartes, who had an even more mechanical model of the universe.
If you go looking for a statement of Newton’s First Law of Motion you will find the following, or something very similar (there is no “standard” formulation, though the meaning rarely, if ever, varies from one phrasing to the next): An object at rest tends to stay at rest and an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force.
I found this particular version on the very first Web site that was returned in response to a search for “Newton’s First Law.” It is basically the same as the version I learned and relearned in high school, college, and graduate school. It is the version that I teach to sophomores in my course on Newtonian mechanics at Eastern Nazarene College. It is essentially the same as the version that Cajori produced in his update of Motte. But it is not entirely faithful to Newton’s original formulation.
In all three Latin editions of the Principia, Newton used the verb perseverare to describe what a body does in the absence of an external force. Motte translated this correctly as “persevere”: “Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.”
Now persevere has a decidedly organic, teleological connotation. That big rock that the backhoe could not dig out of my front lawn is not, it seems to me, persevering; it is just sitting there. Persevere is something that you do, not something that is done to you. After all, is not perseverance a traditional human virtue that we teach to our kids? We want them to persevere, even in the presence of an external force—to keep on walking “in a right line” while their companions veer off into mischief. Perseverance connotes volition, or self-directedness. And it is entirely likely that Newton’s intuition, and most assuredly his vocabulary, would have oriented him toward the notion that a body moved, at least in part, in response to some sort of inner drive. If a body is sailing through empty space, alone, with no other bodies nearby, with no pushing or pulling happening, it is persevering in its motion.
The traditional Aristotelian intuition was that this body was moving under the influence of some sort of natural inner drive that kept it going. Otherwise why was it moving? Things don’t just happen for no reason. How can the absence of an external force cause anything to happen? Don’t causes need to be present? As the Newtonian tradition matured, this particular understanding was discarded in favor of a more mechanical explanation. And so, when Cajori was updating Motte’s translation of Newton, he changed this central text to read “Every body continues in its state of rest or uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.” Completely gone is the more nuanced suggestion that the motion of a body might have something to do with an internal drive or volition.[1]
I have labored this apparently minor point because it serves in many ways as an example of the larger problem confronting those who would understand the accomplishment of Isaac Newton. Newton was not a twentieth-century figure who appeared mysteriously in the seventeenth century. He was very much a man of his time, with all that implies about paradigms, vocabularies, intuitions, assumptions, prejudices, and so on. He may not have been an Aristotelian, but neither was he a Newtonian.
The Hidden Architecture of the Universe
Whoever studies the Principia in awareness of the works of Newton’s predecessors will share the high value assigned this work ever since its first publication in 1687 and will rejoice that the human mind has been able to produce so magnificent a creation.
—I. Bernard Cohen
Many scholars lamented the flaws in the Motte-Cajori version, but an entirely new translation demanded a combination of linguistic, mathematical, and historical skills possessed by very few. After completing a scholarly edition of Newton’s Latin text, I. Bernard Cohen—whom Richard Westfall, Newton’s premier biographer, has called the “dean of practicing Newton scholars”—undertook a new translation into English with the help of Anne Whitman; Julia Budenz assisted in preparing the work for publication. This new rendering of Newton’s masterwork was issued by the University of California Press in 1999, 270 years after the Motte translation. In addition to a translation of Newton’s third edition, it includes a guide to the Principia by Cohen, running to almost 400 pages.
A good example of the subtlety and clarity Cohen brings to the task can be found in his treatment of what Newton calls vis insita. This is often rendered “innate force,” but Cohen suggests that “inherent force” is actually more accurate. It turns out that this particular detail has received quite a bit of scholarly attention. Just what did Newton mean when he spoke of the “force of inertia” (vis inertiae)? Inertia is one of the most important concepts in all of mechanics and one of the central elements of the Principia. The idea that bodies, once in motion, will continue to move on their own, precisely because there is not a force to “stop” them, was bizarre and revolutionary in the seventeenth century—so revolutionary that Newton took to calling this tendency a “force.”
But inertia is not a force, despite our continued use of colloquialisms like “force of inertia.” Readers have no doubt heard people refer to the “force” that, for example, propels them forward in their car when someone puts on the brakes. This is a misnomer. There is a force on the car from the braking that slows it down; without any similar force to slow the passengers, they continue to move forward, as if there were a force on them. The appearance of such a force has nothing to do with the passengers and everything to do with the car around them. Physicists would refer to the car as a “frame of reference.” There is a force on the frame of reference moving it with respect to things within the frame of reference—in this case, the passengers.
This point deserves attention because we would really like to know just how clearly Newton understood the actual character of inertia, one of the most critical elements in the passage from Aristotelian to modern mechanics. Did he think of inertia as some sort of Aristotelian, or Cartesian, innate tendency arising from within the body?
Or was his understanding more modern than that? Did he have a modern understanding but use seventeenth-century language to express it, much the same as we speak of sunrises, long after we have stopped believing that the sun moves and the Earth stands still?
In any event, Cohen argues, on the basis of several statements that Newton made in later editions of the Principia and elsewhere, that vis insita is legitimately rendered “inherent force.” This makes Newton less Cartesian, a distinction he would certainly applaud. Nevertheless, Cohen concludes that “Newton (if only on an unconscious or psychological level) has not fully abandoned the ancient notions that every motion must require a ‘mover’ or some kind of moving force, even if a very special kind of internal force.”
It is in Book 1 of the Principia that we find the birth of the science of mechanics, that extraordinary union of pure mathematics and careful observation that was to become the model to inspire science for a century or more. Empirical observations about the natural world, like the fact that bodies of different weight fall at the same rates, are cast in rigorous mathematical language. Says Cohen, “There is nothing in the antecedent literature of the science of motion that has this same magnitude or importance.” While it is certainly true that Galileo, Kepler, and others had glimpsed the mathematical promised land, none of them had set foot in it. Their contributions were partial, even muddled, and inconsistent in places. Not so for Newton, who strode confidently into that land and claimed it as his own.
In the seventeenth century, however, there were rules by which one was expected to play the game of science, and Newton found it very hard to get his scientific ducks all in a row without breaking these rules. For example, mathematical speculation was allowed to be free and unfettered. Creating imaginary mathematical worlds was perfectly acceptable; it was what mathematicians did. But suggesting that these imaginary mathematical worlds were actually descriptions of the physical world was not acceptable. This placed Newton in an awkward position.
He was discovering, as no one had before, that there was an extraordinary “fit” between the physical world that he could see out his window and the mathematical world that he was creating in his head. This discovery still inspires physics students to keep plodding away at seemingly intractable homework problems long after their roommates have gone to bed. Newton was perhaps the first to see clearly just how profoundly mathematical was the hidden architecture of the universe—the beams, pillars, and guy wires that held the whole thing together were indeed mathematical. And not just in some mystical Pythagorean sense. Newton’s vision was rigorous and defensible—but not by the epistemological standards of the seventeenth century.
If I may presume to guess what Newton really wanted to do in Book 1, I would suggest he was ready and eager to argue that this remarkable mathematical match-up between his physical observations and mathematical theorizing was so profound that his new “system of the world” just had to be true. This argument has since become a standard intuition among physicists, especially the mathematical deities of the field, like Paul Dirac, Albert Einstein, and Murray Gell-Mann; even lowly physics majors, en route to their B.S. degrees, work into the wee hours of the morning on homework, driven by the adrenaline generated by this aspect of their studies, listening to faint Pythagorean harmonies. But the assertion that a new theory should be accepted because of an extraordinary match between mathematics and physics was not one that made sense to everybody in the seventeenth century, which played by different epistemological rules.
So Newton was forced in Book 1 to argue that his presentation is “concerned with mathematics” only and that he is “putting aside any debates concerning physics.” However, this bifurcation between mathematics and physics was very unnatural for Newton, and he found it impossible to sustain consistently throughout. He often used the term “attraction” to describe the interaction between the Earth and the moon, or the sun and the Earth.
Attraction, of course, is a physics term that purports to explain just what it is that the mathematics is describing. But, as we have come to understand just how profoundly mathematical the world is, this distinction can no longer be sustained. The linkage between mathematics and physics is now understood rather differently.
The real world is not something that we examine, understand and explain without mathematics, after which we construct a mathematical model to assist us in picturing reality or to facilitate numerical calculations of positions and velocities. The real world is, rather, profoundly mathematical in ways that often make it entirely appropriate to equate understanding with mathematical understanding.
This point is worth stressing because it provides important insights into the ways that Newton was not only changing our understanding of the world but changing what it meant to say that we understand the world.
There were a great many brilliant scientists in seventeenth-century Europe—e.g., Descartes in France, Huygens in the Netherlands, Leibniz in Germany—who rejected Newton’s explanation because the kind of explanation it provided was not considered to be a proper explanation. The prevailing metaphysical assumptions on which most of seventeenth-century science was based required that motion and forces be understood mechanically in the most simplistic sense possible. If an object at rest started to move, this could only be the result of something coming into physical contact with the object. The traditional Aristotelian rejection of forces that could act at distances had been reiterated and given new authority by Descartes. Forces could act between bodies only if they were in physical contact. “Action at a distance,” as the alternative was known, was absurd, a retreat into the occult. To suggest that the Earth reached out through empty space and “pulled” on the moon was to speak nonsense, regardless of the mathematical precision with which this could be articulated.
The rejection of action at a distance, however, was a position fraught with its own set of difficulties, not the least of which was the challenge of explaining the motion of the moon around the Earth, or the planets around the sun. The time was past when the solid crystalline spheres of Aristotle could be invoked to explain the stability and regularity of the planetary motions, or when heavenly bodies could simply be granted a different mechanics. Galileo, for example, had suggested that maybe there was a “circular inertia” that kept the planets in orbit about the sun and the moon about the Earth. By the end of the seventeenth century these options were no longer viable. A number of developments had undermined, once and for all, the traditional notion that the heavens were different from the earth and could have their own separate set of physical laws.
The realization that the heavens and the earth form a unity—which was absolutely central to Newton’s work—was an enduring achievement of the scientific revolution to which a great many thinkers had contributed. It is also a useful “spine” along which the history of astronomy can be succinctly arranged.
When Newton enrolled in Cambridge University in 1661, the curriculum was still largely unchanged from its medieval predecessor, and natural philosophy, in particular, was still dominated by the Aristotelian tradition, despite the advances of the previous century. Of particular relevance for understanding the context of Newton’s work were the various elements of the Aristotelian astronomical tradition, the familiar geocentric cosmos explicated unappreciatively in the early chapters of astronomy books.
The Aristotelian cosmos looked like this: The Earth was at the center of the universe, fixed and immobile, the largest object in the universe (although there was as yet no “uni” to the universe). Material objects traveled in straight lines toward the center of the Earth as they sought their “natural” place at the center of the universe. Any motion other than unfettered vertical trajectories was “violent” and could only occur in the presence of a persistent push or pull mediated by mechanical contact. There was no such thing as an external “force” that could produce motion at a distance. There were thus two distinct kinds of motion: “natural,” emerging from within the nature of material objects as they freely sought the center of the universe, or “violent.”
The Aristotelian “universe” was divided into two realms: an earthly realm, where the elements earth, air, fire, and water did their thing, and a heavenly realm, where everything was composed of that special heavenly material called “ether.” (There was no such thing as empty space; all superficially empty spaces were really filled with ether.)
The boundary between the two realms was a huge crystalline sphere to which the moon was attached. The rotation of this large sphere was responsible for the regular revolution of the moon about the Earth. Once the motion of this sphere, or any of the others that carried the planets, was initiated it would continue without change. Nothing in the heavenly realm ever changed in any way; even the speeds of the planets in their orbits were absolutely uniform. Occasionally the motion of planets did appear to change, mysteriously reversing direction temporarily during the retrograde part of their cycle.
This posed a challenge that was met with the creative addition of a number of extra spheres and a complex system of what amounted to “gears” as wheels turned within wheels turning within other wheels which carried the planets in their looping orbits about the Earth. It was all very complex and Rube Goldbergian. But it worked well enough to last for two millennia, partially because nobody could understand it well enough to fix it.
The basic rule for the heavens was this: Everything beyond the orbit of the moon is composed of an unchanging ether; the motion of these ethereal bodies must also be unchanging and circular or composed of compounded circular motions. Circles had been prescribed by Plato and Pythagoras as the most perfect shape, and thus the shape of perfect orbits.
The heavenly realm ended with an outer crystalline sphere, just beyond the orbit of Saturn, the most distant planet visible with the naked eye. The stars were attached to this outer sphere. The consistent motion of this outer crystalline sphere accounted for the regularity of the zodiac, on its annual march through the heavens.
Occasionally there was change in the heavens, as when a comet would appear or on those rare occasions when a star would go supernova and explode, greatly increasing its brightness. Comets were “explained away” as atmospheric phenomena, not unlike meteors. Supernovas were so rare that most people never saw one, and those who did found them so exceptional and unrepresentative that they could be safely ignored, much as contemporary scientists tend to set aside an “outlier”—a data point so far from the rest that it must be an anomaly, perhaps an electrical glitch or a malfunction of some sort in the measuring apparatus.
The vast majority of celestial phenomena made perfect sense within the framework of this model. Those who scoff at it today often fail to note just how extensive was its explanatory prowess. To the naked eye all the stars most certainly appear to be at the same distance from the Earth. Anyone who has looked in innocent wonder at the night sky can easily sense the hemispherical dome overhead, with the stars all attached to it. We certainly experience the Earth as if it were at the center of the universe. There is indeed virtually no change in the heavens, in remarkable contrast to the earthly realm where relentless change is all but overwhelming. Planetary orbits are remarkably stable, and something must be holding them in place. And so on. Intuiting the reasons for the endurance of the Aristotelian worldview is really not that hard, especially if one is lying under the stars on a clear night.
The overturning and replacement of the Aristotelian cosmos is the quintessential scientific revolution, hopelessly over-emphasized as the paradigmatic model for understanding the advance of scientific knowledge. Prevailing paradigms, we are told, are overturned and replaced with new “incommensurable” ones. The roots of the revolution that overturned the Aristotelian tradition are buried deeply within the Aristotelian tradition itself, but digging out those roots is a scholarly project beyond the scope of this essay. We will mention only the highlights.
The Dismantling of Aristotle
And new philosophy calls all in doubt,
The element of fire is quite put out;
The sun is lost, and the earth, and no man’s wit
Can well direct him where to look for it.
—John Donne
The dismantling of Aristotle starts seriously with that most famous of Polish clerics—after the current pope, of course—Nicholas Copernicus (1473-1543). Copernicus moved the Earth out among the other planets, helping to eliminate the distinctive and distracting central location of the Earth and to create the important distinction between the location of the sun and the planets, a critical insight missing from Earth-centered cosmologies.
The Danish astronomer Tycho Brahe (1546-1601) observed comets and supernovas, and, because his observational techniques were so sophisticated, he established with certainty that comets and supernovas were indeed in the heavenly realm. Brahe essentially destroyed the long-standing notion that the heavens were “perfect and unchanging.” The infamous Italian Galileo Galilei made telescopic observations that showed that the surface of the moon did not look particularly “heavenly” and, whatever the moon was made of, it certainly gave the appearance of being the same stuff as the Earth. Furthermore, there were satellites orbiting around Jupiter, showing that not everything needed to orbit about the earth, as his critics believed, despite the earlier work of Copernicus, which was still not widely accepted.
Tycho’s eccentric assistant Johannes Kepler (1571-1630) abolished the traditional circular orbits with his discovery, in 1609, that the orbit of Mars was elliptical and the sun was not at the center of that orbit but was displaced a bit to one of the two focal points of the ellipse. Kepler’s critical breakthrough led him to discover his now famous three laws of planetary motion—orbits are elliptical with the sun at a focal point of the ellipse; planets speed up in a very predictable way as they move along that portion of the orbit that is closest to the sun; and there is a specific relationship between the time it takes a planet to go around the sun and how far it is from the sun.
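In the modern textbook shorthand (the standard statement, not Kepler’s own words), with a for a planet’s mean distance from the sun and T for its orbital period, the three laws read:

1. Each orbit is an ellipse with the sun at one focus.
2. The line from the sun to the planet sweeps out equal areas in equal times (this is the “very predictable way” in which planets speed up near the sun).
3. T² ∝ a³: the square of the period is proportional to the cube of the mean distance.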
Kepler’s three laws identified an important relationship between planetary motions and the sun. This connection was so suggestive that Kepler speculated that the sun, in some mysterious way, perhaps analogous to magnetism, actually caused the planetary motions. But he was unable to make any progress on this, and it remained, together with Descartes’s idea of vortices and Aristotle’s crystalline spheres, just another speculative hypothesis about how the solar system works.
All the thinkers whose work so effectively undermined Aristotle had insights of varying significance into the new world order that was about to be inaugurated by Newton. Galileo knew the moon was not made of ether or perfectly spherical, Kepler got the shape of the orbits, Descartes discovered inertia, and so on. Other thinkers, like Hooke and Halley, had begun to glimpse bits of the final picture. But these insights were piecemeal and, in some cases, so partial and decontextualized as to render them all but inconsequential. Newton’s achievement was a system of the world—a system in which a great number of physical phenomena could be almost fully explicated on the basis of a single idea. That idea, of course, was universal gravity.
Gravity provided a mechanism to hold the stars in place, doing away with the need for crystalline spheres; gravity provided forces to move objects on the Earth and in space, doing away with innate teleological drives; gravity provided a force to keep the planets orbiting regularly about the sun; gravity held the atmosphere of the Earth in place, while it hurtled around the sun at what must have seemed, to the horse-riding residents of the seventeenth century, to be a breakneck speed.
Newton’s Law of Universal Gravitation looks like this:
F = GMm/r²
In words it reads like this: The force between two bodies, a large body with a mass M and a small body with mass m, is equal to G times the product of their masses divided by the square of the distance between their centers. (The “big G” is the gravitational constant, the most important number in the universe. Its role is to make the actual force come out right despite the fact that the masses and the distances can be measured in different units—feet, yards, miles, and so on.)
This formula has all sorts of practical applications that the reader has used on many occasions. For example, the reader’s weight is obtained by multiplying big G by the reader’s mass and the mass of the Earth, then dividing by the square of the distance between the center of the reader and the center of the Earth. The resulting number is the number that registers on the bathroom scale when you step on it in the morning.
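For the reader who wants to check this against the bathroom scale, here is the calculation as a minimal Python sketch. The values of big G, the Earth’s mass, and the Earth’s radius are standard textbook figures; the 70-kilogram reader is, of course, an assumption.

G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_earth = 5.972e24   # mass of the Earth, kg
r = 6.371e6          # distance from the Earth's center to its surface, m
m_reader = 70.0      # one hypothetical reader, kg

force = G * M_earth * m_reader / r**2   # F = GMm/r^2
print(f"{force:.0f} newtons, or about {force / 4.448:.0f} pounds")

The script prints roughly 687 newtons, about 154 pounds, which is simply the reader’s mass times the familiar 9.8 meters per second squared.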
But, as I mentioned above, gravity is a force of attraction between bodies that need not be in contact. The ghostly fingers of gravity mysteriously reach out and tug on distant objects. And that was the old bugaboo, “action at a distance.”
Here was the problem that confronted Newton: his formula allowed him to calculate, with remarkable precision, the force of the Earth on the moon. But how could Newton establish that the force on the moon actually originated in this way? Certainly the moon moved about the Earth as if there were a force like this, and this formula allowed him to calculate its value. But the force could not be measured directly. There was no empirical evidence whatsoever for the force itself. The primary epistemological warrant for such a force was the formula. But how does one argue from a mathematical equation to a physical reality? What is the link that makes this possible? Can such a claim ever be more than speculation?
When Huygens studied the Principia, he stated that it had never even occurred to him to extend “the action of gravity to such great distances as those between the sun and the planets, or between the moon and the Earth.” He had not thought to do this because there was a much more plausible explanation, namely the “vortices of M. Descartes.” Huygens added that he would not hold against Newton his “not being a Cartesian, provided he does not give us suppositions like that of attraction.” Keeping in mind that Huygens was one of the greatest scientists of the seventeenth century and one of the few fully capable of understanding the Principia in detail, we can see in his preference for Cartesian vortices the battle that Newton had to fight.
Like Newton, Descartes had developed a “system of the world.” But unlike Newton’s, Descartes’s system was fully mechanical, in accordance with the philosophical rules of the seventeenth century. Descartes’s system was based on the idea of a huge vortex in the solar system, centered on the sun and swirling about like a tornado or, more benignly, like water going down a drain. In a vortex, the intensity declines as one goes out from the center. This explained why the outer planets moved more slowly—the vortex was weaker and slower out there.
By the standards of today, Descartes’s theory of vortices was hardly a theory at all. He could not calculate anything whatsoever. All he could do was argue, by a weak analogy with terrestrial vortices, that the vortex in the solar system would be stronger near the middle than at the edges. He could account for absolutely nothing beyond the simple, purely qualitative, observation that distant planets went slower than close ones and they all went in the same direction. Furthermore, what was the material that comprised the vortex? If the vortex of a tornado is made of air, and the vortex in the whirlpool of a drain is made of water, of what was the vortex of the solar system composed? The answer provided by Cartesian philosophy to this central question was very unsatisfactory: the vortex of the solar system was composed of some material reminiscent of Aristotle’s ether.
Was there any direct evidence for this material? No. Then how do we know it is there? By the principles of mechanical philosophy that tell us that forces can only be communicated by objects in direct contact with one another; that “action at a distance” is absurd; that there can be no such thing as empty space and, if there were, “gravity” could certainly not travel through it. This constitutes the “evidence” for the material vortex that swirls around the sun, carrying the planets with it. Like any scientific paradigm, the epistemological rules for what is allowed resonate conspicuously (and suspiciously) with what is discovered.
Contrast this with Newton, who explained the motions of the planets in terms of a gravitational force between them and the sun, a force that grew weaker as the square of the distance between them. How much could Newton explain with this model? Plenty. Planets under the influence of such a force would travel in elliptical orbits precisely as they were observed to do. Such planets would speed up as they got closer to the sun and slow down as they receded from the sun precisely as they were observed to do. The planets further from the sun would go slower, not in some general qualitative sense, but precisely as they were observed to do. It was all very tidy.
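Readers who would like to watch this happen can do so in a few lines of Python. The sketch below is mine, not Newton’s: it steps a planet forward under a pure inverse-square attraction, in units where the sun’s gravitational parameter GM is 4π² (distances in astronomical units, times in years). A circular orbit would show a single radius; the two distinct turning radii this prints are the signature of an ellipse.

import math

GM = 4 * math.pi ** 2   # sun's gravitational parameter, AU^3/yr^2
x, y = 1.0, 0.0         # start 1 AU from the sun
vx, vy = 0.0, 5.0       # slower than circular speed (2*pi AU/yr), so the orbit is elliptical
dt = 0.0005             # time step, years

r_min, r_max = float("inf"), 0.0
for _ in range(40000):                         # about 20 years, many full orbits
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3    # inverse-square acceleration toward the sun
    vx, vy = vx + ax * dt, vy + ay * dt        # semi-implicit Euler: update velocity first,
    x, y = x + vx * dt, y + vy * dt            # then position, which keeps the orbit stable
    r_min, r_max = min(r_min, r), max(r_max, r)

print(f"perihelion ~ {r_min:.3f} AU, aphelion ~ {r_max:.3f} AU")

Orbit after orbit, the planet returns to the same two extremes, moving fastest near the sun and slowest far from it, just as Kepler’s laws require.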
Comparing the Cartesian and Newtonian systems on this point is very instructive. Descartes could explain, in a general sort of way, how the planets moved. They were caught up in some sort of cosmic swirling vortex. There was, of course, no direct evidence for this vortex or the material of which it was composed, but the mechanism of its action could be visualized in a purely mechanical way. Anyone who had ever watched water run down any kind of drain or who had observed whirlpools could easily imagine how this might work. On the other hand, Newton could explain, in a rather precise way, exactly how the planets moved under the influence of gravity. But he could not provide a familiar mechanical model for how gravity worked.
Descartes’s model was quantitatively imprecise but based on familiar mechanical analogies, while Newton’s was quantitatively precise but based on a completely unfamiliar mechanism about which seemingly nothing could be said other than it existed. Descartes’s model was highly physical, with almost no mathematical content; Newton’s was highly mathematical with ambiguous physics. Preferences among the few European thinkers who could follow the debate derived more from philosophical starting assumptions than the models themselves. Finding a better example of incommensurable paradigms would be challenging.
The challenge of what constituted a proper explanation for the motion of the planets was at the heart of the Newtonian revolution. Understanding this challenge illuminates a number of issues. It explains, for example, why Huygens would say, “I have nothing against [Newton] not being a Cartesian, provided he does not give us suppositions like that of attraction.” It explains why Newton would write that he was setting forth “principles of philosophy [physics]” which “are not, however, philosophical but strictly mathematical.” And it explains, more importantly, just what kind of genius Newton was and how extraordinarily individual was his achievement.
This is the second article of a three-part series.
Karl W. Giberson is professor of physics at Eastern Nazarene College and editor of Research News.
Footnotes
1. There are other, less philosophical, problems with Cajori’s update of Motte. For example, a curious problem occurs in the translation of the following Latin sentence, in which Newton refers to some work done by Machin and Pemberton: Alia ratione motum nodorum J. Machin Astron. Prof. Gresham & Hen. Pemberton M.D. seorsum invenerunt. J. Machin was the “Gresham Professor of Astronomy” just as Oxford’s Keith Ward today could be described as the “Regius Professor of Theology.” When Motte translated this Latin expression he abbreviated Machin’s title to read “Astron. Prof. Gresh.” In Cajori’s version this abbreviation became incarnated as a third person, a “Professor Gresham”: “Mr. Machin, Professor Gresham, and Dr. Henry Pemberton separately found out the motion of the moon by a different method.” Historians will search in vain for the elusive Professor Gresham. He is no more real than Professor Regius, who works next door to Keith Ward.
The subject of this work, to use the name assigned by Newton in the first preface, is “rational mechanics.” Later on, Leibniz introduced the name “dynamics.” Although Newton objected to this name, “dynamics” provides an appropriate designation of the subject matter of the Principia, since “force” is a primary concept of that work. Indeed, the Principia can quite properly be described as a study of a variety of forces and the different kinds of motions they produce. Newton’s eventual goal, achieved in the third of the three “books” of which the Principia is composed, was to apply the results of the prior study to the system of the world, to the motions of the heavenly bodies. This subject is generally known today by the name used a century or so later by Laplace, “celestial mechanics.”
The history of how the Principia came into being has been told and retold. In the summer of 1684, the astronomer Edmond Halley visited Newton in order to find out whether he could solve a problem that had baffled Christopher Wren, Robert Hooke, and himself: to find the planetary orbit that would be produced by an inverse-square central force. Newton knew the answer to be an ellipse. He had solved the problem of elliptical orbits earlier, apparently in the period 1679-1680 during the course of an exchange of letters with Hooke. When Halley heard Newton’s reply, he urged him to write up the results. With Halley’s prodding and encouragement, Newton produced a short tract which exists in several versions and will be referred to as De Motu (On Motion), the common beginning of all the titles Newton gave to the several versions. Once started, Newton could not restrain the creative force of his genius, and the end product was the Principia. In the progress from De Motu to the Principia, Newton’s conception of what could be achieved by an empirically based mathematical science had become enlarged by several orders of magnitude.
—I. Bernard Cohen
James Billington
The long-awaited second edition of David Barrett’s World Christian Encyclopedia.
World Christian Encyclopedia
edited by David B. Barrett, George T. Kurian, and Todd M. Johnson
Oxford Univ. Press, 2001
1,699 pp., 2 vols.; $295
The second edition of David Barrett’s World Christian Encyclopedia is one of the richest and most original compilations of human data produced at the turn of the millennium. In two oversized volumes and 1,699 pages of fine print, this work seeks to count the number of human beings in every religion on Earth in the year 2000—and to provide comparative figures back to 1900 and projections to 2025.
This work represents a massive expansion of the single-volume first edition, which appeared in 1982. It states that more than five billion people (85 percent of the world’s population) are “religionists,” that is, members of one or another of the world’s ten thousand distinct religions. Two billion are Christians, 1.18 billion are Muslims, and 0.8 billion are Hindus.
The first volume is devoted largely to Christianity, which has kept pace with the rapid growth in world population but has dramatically changed its shape in the process. In the twentieth century, Christianity became a religion that is practiced everywhere in the inhabited world, but the overall growth of Christian adherents—from 588 million in 1900 to 2,000 million in 2000—has been almost entirely achieved in the world’s less developed nations, where the growth over the same period was from 83 million to 1,120 million.
The overall rate of Muslim growth was greater worldwide than that of Christianity. In Africa, however, adherents of Christianity grew 38-fold and now outnumber Muslims on that continent. (Muslims were three and a half times more numerous than Christians in Africa in 1900 but grew only tenfold in the twentieth century.)
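The crossover is simple arithmetic. If C stands for the African Christian population of 1900, Muslims then numbered 3.5C; by the encyclopedia’s own multipliers, the year 2000 finds 38C Christians against only 3.5C × 10 = 35C Muslims.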
What emerges most dramatically from the data here presented is the accelerating fragmentation of Christianity, from 20,800 denominations in 1981 to 34,000 just 20 years later. Of the six major groups into which Christianity is here divided, growth is by far the greatest in the “Independent” category: post-denominational Pentecostals and charismatics often unaffiliated with any but their own congregation. This category now accounts for 27.7 percent of global Christianity and 38 percent of the world’s full-time Christian workers.
The great majority of Christians in this category are nonwhite, poor urban dwellers. Yet their collective wealth increased nearly tenfold between 1970 and 2000; they generally sustain family structures; and they are by far the most skillful in using modern media.
Other trends that stand out in the massive data sets include the persistence of tribal “ethnoreligions” (which grew from 117 million to 228 million in the twentieth century), the “meteoric rise of secular quasi-religions” (absolutist ideologies often framed explicitly as rivals to Christianity), and the disproportionate growth within the category labeled “marginal Christian” (Mormons, Jehovah’s Witnesses, and so on).
This project is a tribute to the truly heroic lifelong labors of its initiator, David Barrett. An Anglican missionary in Africa for 28 years, Barrett has been based in Virginia since 1985, working on this second edition with George Kurian and Todd Johnson. Barrett has visited most of the 238 nations and territories covered in this encyclopedia, and has gathered raw material from 444 specialists all over the world as well as from national census figures and United Nations data. He and his very small staff apparently continue to work in the basement of a Presbyterian church in Richmond and are preparing a CD-ROM version of the encyclopedia with additional analytic elaboration.
An attractive feature of the commentary scattered throughout the book is the fairness and modesty with which all the material is presented. The sympathy the compilers seem to feel for evangelism in general and the charismatic direction in particular lends an upbeat but far from triumphalist tone to the work. They point out at the beginning of their survey of 12,600 cultures of the world (the “Ethnosphere”) that “Christ’s Great Commission” to “disciple all peoples” (Matt. 28:19) is amplified by seven different descriptions of the diversity of peoples in Revelation. They conclude that “the Bible can thus be said to be fully aware of the vast ethnolinguistic diversity of the world and its importance for the Christian world mission.” But they then proceed directly to a sophisticated modern analysis purely “from a descriptive or anthropological point of view.”
As with any massive new set of statistics, these volumes can be cited selectively to prove contradictory propositions. But a few conclusions seem incontestable. Religion is not dying out—and indeed resumed rapid growth in the late twentieth century. Christianity has spread to “all peoples” but is now more divided than ever before and far stronger in the Southern than in the Northern hemisphere (81 percent of Christians were white in 1900; only 45 percent were in 2000). And the defections from Christianity in Europe and North America are now running at nearly two million a year.
There are, of course, limitations—particularly for believers—on the utility of itemizing the quantity of alleged adherents without clarifying the nature, let alone the qualities, of what they are adhering to. All mainline Protestant denominations (except Anglicans) are aggregated into one “megabloc,” which makes their treatment less nuanced and probing than that of the Pentecostal-charismatic “trans-megabloc groupings.” And while the mainline denominations are often losing numbers, their statistics are often dependably precise, whereas the independent churches and “post-denominational” groups may well be more prone to exaggerating their strength.
One of the original statistics especially cries out for deeper analysis from a Christian perspective: the alleged fact that 45 million of the 70 million martyrs in Christian history died for their faith in the twentieth century. One wonders if the future of Christianity will be shaped more by the deep experiences of martyrdom, sainthood, and sanctified community than by the shifting surface of allegiances in an uneasy world—however sophisticated the statistical predictions may be.
A disproportionate number of the twentieth-century martyrs came from Eastern Christian communions—from Ethiopia and Sudan to Armenia and China—and are still not fully recognized, let alone honored, in the broader Christian world. The Russian Orthodox Church has not yet fully honored its own many martyred believers.
The discussion of the Russian Church, it must be said, is not up to the high overall standard of the encyclopedia. It has Stalin recognizing the Patriarch after rather than during World War II, and notes that the Soviet government allowed a Catholic priest in the U.S. embassy without mentioning the total suppression of Eastern Rite Catholics in Ukraine. It states that “the church called itself the third Rome” after the establishment of the Moscow Patriarchate in 1589. In fact, a single monk had called Moscow the third Rome much earlier, but the metaphor was almost never used again until the nineteenth century and never by the church as such.
But these are small points. The encyclopedia combines remarkably well scientific objectivity and tolerance with a quiet evangelical faith. Already in the first edition of his work, Barrett warned against “equating the fortunes of organized Christianity and institutionalized religion with the fortunes of the Kingdom of God” and quoted approvingly Hans Küng’s injunction that “the Church must not conquer but serve the world religions.” The authors have served modern social science and religious believers equally well with this magisterial work.
James Billington is the Librarian of Congress.
Copyright © 2001 by the author or Christianity Today/Books & Culture magazine.
Mark A. Noll
In 1910 a great missionary conference was held in Edinburgh, Scotland, where Americans, Europeans, and missionaries from around the world strategized for the worldwide triumph of the Christian faith. Foremost in the minds of delegates to this meeting was the great advance of Christianity during the nineteenth century. In 1800 less than a fourth of the world’s population was identified with Christian churches; by 1900 almost 35 percent were affiliated. It seemed only logical to conclude that the same energy, the same wisdom, and the same trust in God that had brought this great advance would continue on to finish the evangelization of the world.
As it turned out, developments in the twentieth century both confirmed and disconfirmed the expectations of Edinburgh. While the proportion of affiliated Christians has remained steady at roughly one-third of world population, the great population surge of the past century has resulted in a proportionate surge of Christian adherents. The circumstance that would have most surprised delegates to Edinburgh is the location of the world’s Christians at the end of the twentieth century. At Edinburgh, only 18 of 1,400 delegates were not from Europe or North America. Not a single black African was in attendance. In 1910, the overwhelming predominance of Europeans and North Americans at a conference on “world Christianity” was not primarily the result of prejudice, since over 80 percent of the world’s affiliated Christian population lived in those regions. It was, therefore, only natural to think that the expansion of world Christianity would mean the expansion of Western Christianity into the world.
What actually happened was dramatically different. The surprises as well as the magnitude of developments in the twentieth-century history of Christianity can be illustrated by a series of comparisons drawn from present realities, as of this past week:
• Last Sunday it is probable that more believers attended church in China than in all of so-called Christian Europe.
• Last Sunday more Anglicans attended church in each of Kenya, South Africa, Tanzania, and Uganda than did Anglicans in Britain and Episcopalians in the United States combined—and the number of Anglicans at church in Nigeria was several times the number in these other African countries.
• Last Sunday more Presbyterians were at church in Ghana than in Scotland, and more were at church in the Uniting Presbyterian Church of Southern Africa than in the United States.
• Last Sunday more members of the Assemblies of God in Brazil were in church than the combined total of the Assemblies of God and the Church of God in Christ in the United States.
• Last Sunday more people attended the Yoido Full Gospel Church in Seoul (pastor Paul Yonggi Cho) than attended all of the churches in significant American denominations like the Christian Reformed Church, the Evangelical Free Church, or the Presbyterian Church in America.
• Last Sunday, Roman Catholics in the United States probably worshipped in more languages than at any previous time in American history.
• During the past week, there were more missionaries at work overseas (as a percentage of the nation’s affiliated Christian population) from Samoa and Singapore than from Canada and the United States.
• Last Sunday the churches with the largest attendance in England and France had mostly black congregations.
If it were possible to summarize the momentous changes in world Christianity over the course of the twentieth century, five themes might emerge: First, the decline of Christianity in Europe, as a result of a steady erosion in Western Europe and the traumatic clash with communism in Eastern Europe. Second, the renovation of the Roman Catholic Church, symbolized by the Second Vatican Council, to reflect both cultural conditions of the modern world and the growing presence of the Two-Thirds World in the Church (which now numbers about 1 billion adherents). Third, the displacement among Protestants of Britain and Germany as the driving agents of Christian expansion by the United States. Fourth, the expansion of Christianity into many regions where the Christian presence had been minimal or nonexistent, including China, Korea, many parts of India, and much of Africa. Fifth, a change in the pressing issues bearing down upon the Christian heartland from the jaded discontents of advanced Western civilization to the raw life-and-death struggles of poverty, disease, and tribal warfare in non-Western civilizations.
The last two of these five themes, in particular, featured prominently at a remarkable conference held in early July of this year at the Hammanskraal retreat center of the University of Pretoria, South Africa. The organizers were part of Currents in World Christianity, a three-year project funded by the Pew Charitable Trusts that built on an earlier Pew-sponsored program, the North Atlantic Missiology Project. For the South African meeting, leadership was supplied by Dr. Brian Stanley of the Centre for Advanced Religious and Theological Studies at the University of Cambridge and Professor J.W. Hofmeyr of the University of Pretoria’s theology department.
In contrast to the 1910 Edinburgh conference, the perspectives of Europeans and North Americans did not dominate in Pretoria. About the same number of Africans as Europeans and North Americans addressed the conference (each group made up about 40 percent of the 53 people on the program, with the rest from China, Korea, Brazil, the Philippines, Australia, and New Zealand). More important, attendees heard papers outlining the new realities of world Christianity that almost no one at Edinburgh would have predicted. Books and articles are forthcoming from the conference; meanwhile, it may be helpful to highlight a few of the meeting’s well-documented reports.
Chinese scholars presented carefully researched papers on how in the 1920s and 1930s groups of Chinese Christians began to develop indigenous forms of the faith as they selected from the offerings of Western missionaries what they felt was most helpful for their own setting. One of these groups was the Jesus Family Movement of Jing Dianying, which combined Pentecostal, Confucian, Social Gospel, and even communist elements into an active movement that was eventually silenced by Mao Zedong’s totalitarianism. Another was the “Local” or “Lord’s Recovery” Church associated with Watchman Nee, which, despite brutal treatment under Mao, was by the 1990s surging forward in China with tens of thousands of adherents and also with thousands of “Local” churches around the world. (Members from the Pretoria “Local” Church, made up substantially of Afrikaners, attended part of the meeting at Hammanskraal.)
Knowledgeable scholars, many of them still quite young, presented especially intriguing reports on complex developments among African churches—some still connected to missionary beginnings, more independent of Western ties, and still more reflecting a diverse mixture of Western and African influences. From the careful scholarship now well established or on the horizon—for Ghana, Nigeria, Cameroon, Benin, much of East Africa, South Africa, Zimbabwe, Malawi, and elsewhere—it is obvious that imported concepts like “Pentecostal” or “charismatic” have less and less relevance for situations defined increasingly by Christian engagement with local leaders, problems, achievements, and interpretations of Scripture. Several conference papers also documented in starkest terms how absolutely central poverty, disease, and conflict have become to Christian existence in Africa (but also in much of the rest of the newer Christian world as well). One of the most exciting papers outlined the emergence in Ghana of a theological understanding of Jesus as King and Chief who takes up and sanctifies in himself the interpersonal, intergenerational, and intercommunal mediations of the Ghanaians’ traditional rulers.
Space fails for detailing the wealth of the conference’s pathbreaking scholarship—on the megachurches of South Korea (at least 15 currently attended by 12,000 or more people each Sunday, some many times that number); on the Christian dimensions of migration (with several African Independent Churches now possessing strong European branches); on the tragic connection of Christian movements with ethnic warfare (and so a reprise of the century’s earlier tragedies, with their Christian connections, of World War I and World War II); on the 40-plus new Christian universities founded during the last 20 years in the Two-Thirds World; on the resilience of the overwhelmingly Christian communities in Nagaland, Mizoram, and Manipur in the face of intense pressure from Indian religious and governmental leaders; on the perils of globalization (where the god Mammon rolls like a juggernaut over all in its path) alongside the opportunities of globalization (where flourishing networks of believers precede, accompany, and follow the global flow of trade); and on and on.
The world anticipated by the Edinburgh Missionary Conference of 1910 is not what actually came into existence. As portrayed at the Hammanskraal Conference—and in a steadily growing wave of solid literature—what actually happened was much more unexpected, much more intriguing, much more threatening, much more complex, and much more an occasion for praising the Lord who sent his witnesses “to the ends of the Earth.”
Mark Noll is McManis Professor of Christian Thought at Wheaton College.
Copyright © 2001 by the author or Christianity Today/Books & Culture magazine.
John G. Stackhouse, Jr.
An anthropologist bonds with a tribe called “InterVarsity Christian Fellowship.”
The Church on the World’s Turf: An Evangelical Christian Group at a Secular University
by Paul A. Bramadat
Oxford Univ. Press, 2000
224 pp.; $35
To see ourselves as others see us has been more possible for North Atlantic evangelicals in the last 50 years than perhaps ever before. Since 1976, the so-called Year of the Evangelical, mainstream media have regularly featured evangelical churches, organizations, and leaders. At least as interesting to Books & Culture readers, however, is the flood of academic research that has poured forth from the presses since that time, and especially since George Marsden’s landmark study, Fundamentalism and American Culture (1980).
Perhaps the last social-scientific discipline to engage evangelicalism has been anthropology. In a new book, University of Winnipeg religious studies professor Paul Bramadat offers the first ethnographic study of a distinctive form of evangelical community: the Christian student fellowship on a secular university campus.
The Church on the World’s Turf is based on Bramadat’s doctoral dissertation at McMaster University in Hamilton, Ontario, southwest of Toronto. Bramadat studied the McMaster chapter of InterVarsity Christian Fellowship (IVCF) for two extended periods: the fall term of 1994 and the entire academic year 1995-96. He attended most of the IVCF events and interviewed many of its leaders and rank-and-file members. He then went more than the proverbial extra mile, concluding his field research by accompanying an evangelistic team of IVCFers to Lithuania in the spring of 1996.
The resulting study is a rich portrait of a group that will be generically familiar to many evangelicals, yet painted by a sympathetic outsider. Bramadat follows postmodern convention and identifies himself and his biases in his introduction. He is definitely an outsider: a Unitarian Universalist who, he says, “was predisposed to be tolerant of almost everyone except evangelicals or fundamentalists.” Yet he seems to have made an earnest effort to become analytically sympathetic to his subjects. He recognizes that an interpreter fails to truly understand a group if he cannot discern, and then demonstrate, how such a group of people could possibly think and act as they do.
Indeed, Bramadat went an extra mile in this respect as well. He tried sincerely, he writes, “throughout this project to remain both intellectually and emotionally open to their Lord.” The students who got to know him also thought, he says, that he had come very close to converting. Whether this measure of openness to one’s subject is required for good social science is debatable. But Bramadat cannot be suspected of setting up his evangelical subjects merely as convenient pots at which to aim liberal, secular, or social-scientific missiles.
IVCF emerges here in dynamics quite recognizable to many B&C readers. Bramadat deploys the metaphors of “bridges” and “fortresses” to depict the ambiguous relationship of this group of students with its university home. The very title of the book, of course, implies this relationship. “The church” (IVCF) is not claiming the university as its own. Indeed, it has ceded the university to “the world.” But it is present nonetheless “on the world’s turf.” And Bramadat indicates that IVCF members are involved not only in their classes but in other sectors of campus life as well. They are not merely huddling in a holy ghetto on campus.
Bramadat demonstrates, however, that IVCFers are more sectarian than churchly. In a chapter devoted to “Otherness,” he shows that these evangelicals perceive themselves as quite distinct from their fellow students and from the university as a society. Predictably, this alienation shows up as students feel pressure to toe certain lines of political correctness in class and in the dorms regarding evolution, homosexuality, and religious pluralism. Bramadat quotes McMaster Divinity College professor Clark Pinnock sounding like the late Francis Schaeffer as he tells the students that “there is no common ground between Christianity and secularism.” While Pinnock encourages intellectual exchange with secularists on campus, Bramadat, at least, understands Pinnock as commending primarily a “fortress” mentality.
Other evangelical attitudes clearly helped Bramadat keep his personal distance from the group. Roman Catholics, for example, were widely assumed in the IVCF group to be simply “non-Christians”—even though IVCF has been welcoming Roman Catholic members for more than a generation. Only two of almost 200 members were majoring in religious studies. Yet McMaster’s department, as I know personally, includes several faculty members who are congenial toward evangelicalism. None of them are mentioned in this book. It is not clear, in fact, that IVCF students understand what religious studies actually is on a secular campus. One of them says to Bramadat, “It’s weird. [The religious studies professors] just don’t want to hear about your personal beliefs, which just takes away from the discussion of religious texts and personalities. … I have nothing against learning about the texts, but … I don’t like not being able to express something that is so important in my life.”
As both Unitarian and social scientist, Bramadat is clearly bemused by the frequent references to spiritual warfare, and devotes an entire chapter to “Satan and the Spiritual Realm.” I interpret this emphasis, like the group’s choice of worship music (drawn primarily from Vineyard-style songs), as an indication of the increasing impact of charismatic and Pentecostal currents in evangelicalism. Twenty years ago, when I belonged to a similar IVCF chapter at a university not far from McMaster, almost no one ever referred to demons, and Vineyard music was just beginning to blossom.
Perhaps most interesting in this account, but also perhaps most ephemeral, is the frequent reference to the novels of Frank Peretti as providing images and categories for conversation among these students. I wonder whether IVCFers nowadays, just five years later, are still talking this way, now that the popularity of Peretti’s novels has waned.
At least one more thing keeps Bramadat from embracing this group’s religion: its lack of serious intellectual interest and ability. Here, at one of Canada’s most selective and productive research universities, the students at IVCF seem much more interested in the affective, relational, and moral dimensions of their faith than the intellectual. That perhaps is not entirely surprising. But when the intellectual aspect of Christianity is directly on trial, it is disappointing to find that representatives of the group fail badly to meet the challenge.
In what was for me the most miserable narrative in the book, the evangelistic team to Lithuania decides to hold an evening event during which the more established Christians would answer typical apologetic questions regarding the problem of evil, the reliability of the Bible, religious pluralism, and so on. Some members got cold feet the night before—prudently, it would seem. One of them confesses to the others, “What am I going to say if someone asks me about fossils or Hinduism? What do I know about that stuff?”
As Bramadat relates, no one else on the team seemed to know much about fossils, Hinduism, or anything else of apologetic interest to the dozen Lithuanians who showed up for the meeting. When one Lithuanian demanded an accounting for atrocities perpetrated by Christians through the centuries, an IVCFer responded, “Well, I see what you’re saying, but I don’t think it’s really our place to judge other people.” When someone asked about reincarnation, another team member replied, “Well, that’s an interesting question. All I can say is that we should all read the Bible and find out what it has to say about these questions, because there can’t be more than one Truth.” Similar questions met with similar answers—or evasions, as Bramadat dryly notes.
Bramadat proceeds to discuss Mark Noll’s Scandal of the Evangelical Mind to help his readers make sense of this disaster. But mere analysis doesn’t remove the sting for anyone who cares about evangelical intellectual life even on our leading campuses. Bramadat’s sketch, however, is not simply a list of traits he found perplexing or troublesome. In what might count as the most surprising chapter in the book, “The Role of Women,” he examines the women who make up the majority of the membership. He concludes that IVCF provides several crucial services to evangelical women: it offers them positions of leadership in a subculture that still generally does not; it gives them a safe place to negotiate the competing truth claims offered in the secular university; and it worships God in a style of piety suited especially to single women, with quasi-erotic and feminine imagery in prayers and especially in songs.
This last point might discomfit some readers, but Bramadat details his findings, citing lyrics from what he calls “repetitive, yearning songs” sung at IVCF events. He also notes that IVCF men refer to Jesus in typical masculine roles as “judge, father, teacher, mentor, and, least frequently, friend,” while the women envision Jesus as “the kind, sensitive recipient and unconditional requiter of love.” What Bramadat sees here in microcosm, of course, other observers have noted throughout contemporary evangelical piety.
Bramadat also hints that he admires the way IVCF students distance themselves from their peers at McMaster, particularly in the moral realm. They simply do not abuse drugs and alcohol as their dormmates do, and they refuse to engage in the casual sex, or even the “serial monogamy” of sequential exclusive sexual relationships, which is now typical of students. Indeed, it is in the moral sphere, much more than the intellectual sphere, that the IVCF at McMaster draws its clearest lines between “the church” and “the world.”
Bramadat is a careful enough interpreter to notice that the lines are not always sharp. He cites student references to popular television shows and movies, and wonders how IVCFers who espouse a traditional sexual ethic can so easily enjoy programs such as Seinfeld and Friends. He notes the way the students mimic and quote the characters, seemingly without any sense of dissonance, much less embarrassment.
He sees that, because IVCF functions in fact as a mediating institution between church and world, some things come across the bridges into the fortress that perhaps should be kept out. Bramadat recognizes that IVCFers at McMaster are, like so many of the rest of us, bricoleurs (yes, he invokes Lévi-Strauss). And he generously cautions his readers against expecting of evangelicals “a degree of self-awareness or consistency we do not expect of non-evangelicals.” Pastors and parents of such young people can decide how much comfort, however, lies in such words!
There is more that Bramadat notices along the way, including a hilariously sober account of evangelical prayer practices that involve both the frequent use of the modifier “just” (as in “Lord, we just want to ask you”) and what Bramadat calls the typical evangelical mouth-click. He tries to interpret the latter remarkable mannerism:
Its location in the rhetoric is similar to and often follows the word “just”: “God, we just [pause … click] want to thank you for your son and to ask you …” By implying that the speaker is unable to finish a prayer because he or she is overwhelmed by the opportunity to communicate with God, this sound softens the believer’s petition [which otherwise might sound arrogant].
Most of Bramadat’s observations are significantly more substantial than this, of course. And with its light interpretive touch—aware of theory regarding secularization, pluralization, privatization, and a whole range of other social processes, but beholden to none—Bramadat’s book sets a high standard for the anthropological studies of other evangelical groups that one hopes will follow.
John G. Stackhouse, Jr., is Sangwoo Youtong Chee Professor of Theology and Culture at Regent College, Vancouver, and author of Canadian Evangelicalism in the Twentieth Century: An Introduction to Its Character (Univ. of Toronto Press, 1993).
Copyright © 2001 by the author or Christianity Today/Books & Culture magazine.