I suppose that’s good, because the NIH’s very conservative funding process is one reason why so many researchers focus on the obvious questions. On the other hand, it’s not so clear that answering non-obvious questions leads to more insight than answering obvious questions. The question can be obvious or non-obvious and still generate that key to scientific progress, the unexpected answer.
Lovecraft has been a huge influence on writers both within and without the genres of horror and sci-fi; Lovecraft was influenced by Arthur Machen, who in many ways is the Old World, Celtic Christian doppelganger of the New England, agnostic Lovecraft.
While Lovecraft was concerned with the unseen horrors that could be discovered when science goes too far, Machen took a stance more opposed to science, pitting rationality against what he saw as a deeper, ineffable reality that is just as frightening as, but perhaps less explicable than, Lovecraft’s hostile cosmos.
In The Novel of the Black Seal, a character gets hints that her rational view of the world won’t be able to encompass what she is about to learn:
I have told you I was of sceptical habit; but though I understood little or nothing, I began to dread, vainly proposing to myself the iterated dogmas of science that all life is material, and that in the system of things there is no undiscovered land, even beyond the remotest stars, where the supernatural can find a footing. Yet there struck in on this the thought that matter is as really awful and unknown as spirit, that science itself but dallied on the threshold, scarcely gaining more than a glimpse of the wonders of the inner place.
DeLillo’s new book of nine stories, The Angel Esmeralda, has at its core a series of situations that lead to trance states experienced by the insulted, the injured, and the vulnerable, who in its grip sometimes begin to babble in a form of secular glossolalia…
It’s doubtful that Neanderthals had any concept of extinction, of course, at least on a continent-wide or global scale. Yet you can imagine that there may have been some sense that something had gone terribly wrong, perhaps a recognition of an unyielding process that was squeezing them out, that the world was taking a new direction without them. Extinction was gradual, taking place over generations, and therefore most likely difficult to recognize.
They were living in a post-apocalyptic world. Nature had turned against them. They were being threatened by alien invaders with new, powerful weapons. Perhaps the Neanderthals were doing each other in, resorting to cannibalism and inter-tribal violence in their desperation. Did their society begin to crumble as their numbers dwindled, or as previously predictable rhythms of nature shifted? Were there lost traditions, passed-on legends of long-gone better days? With a little imagination, it’s easy to think of the Neanderthals in a classic end-of-the-world sci-fi context. What is it like to be a member of a self-aware, intelligent species that is dying away? What is it like to be the very last living members of that species? …
Actor, journalist, devotee of Celtic Christianity and the Holy Grail legend, Welshman Arthur Machen is considered one of the fathers of weird fiction, a master of mayhem whose work has drawn comparisons to H. P. Lovecraft and Edgar Allan Poe. Readers will find the perfect introduction to his style in this new collection. With the title story, an exercise in the bizarre that leaves the reader disoriented virtually from the first page, Machen turns even fundamental truths upside down.
To construct a model - as Mr. Palomar was aware - you have to start with something; that is, you have to have principles, from which, by deduction, you develop your own line of thinking. These principles - also known as axioms or postulates - are not something you select; you have them already, because if you did not have them, you could not even begin thinking. So Mr. Palomar also had some, but, since he was neither a mathematician nor a logician, he did not bother to define them. Deduction, in any case, was one of his favorite activities, because he could devote himself to it in silence and alone, without special equipment, at any place and moment, seated in his armchair or strolling. Induction, on the contrary, was something he did not really trust, perhaps because he thought his experiences vague and incomplete. The construction of a model, therefore, was for him a miracle of equilibrium between principles (left in shadow) and experience (elusive), but the result should be more substantial than either. In a well-made model, in fact, every detail must be conditioned by the others, so that everything holds together in absolute coherence, as in a mechanism where if one gear jams, everything jams. A model is by definition that in which nothing has to be changed, that which works perfectly; whereas reality, as we see clearly, does not work and constantly falls to pieces; so we must force it, more or less roughly, to assume the form of the model.
We now have unprecedented means of collecting data at the deepest molecular level of living systems and we have relatively cheap and accessible computer power to store and analyse this information. There is, however, a general sense that understanding all this information has lagged far behind its accumulation, and that the sheer quantity of new published material that can be accessed only by specialists in each field has produced a complete fragmentation of the science. No use will be served by regretting the passing of the golden years of molecular genetics when much was accomplished by combining thought with a few well-chosen experiments in simple virus and bacterial systems; nor is it useful to decry the present approach of ‘low input, high throughput, no output’ biology which dominates the pages of our relentlessly competing scientific journals. We should welcome with open arms everything that modern technology has to offer us but we must learn to use it in new ways. Biology urgently needs a theoretical basis to unify it and it is only theory that will allow us to convert data to knowledge.
"He said science was going to discover the basic secret of life some day," the bartender put in. He scratched his head and frowned. "Didn’t I read in the paper the other day where they’d finally found out what it was?"
"I missed that," I murmured.
"I saw that," said Sandra. "About two days ago."
"That’s right," said the bartender.
"What is the secret of life?" I asked.
"I forget," said Sandra.
"Protein," the bartender declared. "They found out something about protein."
"Yeah," said Sandra, "that’s it."
- from Cat’s Cradle (p. 25 in the 1970 Dell paperback).
Wallace Stevens was a lawyer, an insurance executive, a man of numbers, one who valued precision in his observations. This is what makes him a scientist’s poet.
His “Of Modern Poetry” describes acts of imagination that resemble that act of imagination performed by scientists, the act of conceiving of a new question, a new way of thinking about something, and a line of attack towards the answer.
For context, Eleanor Cook says that Stevens saw “the time before World War I as ‘a stage-setting that since has been taken down and trucked away.’” (A Reader’s Guide to Wallace Stevens, p. 154) This leaves the poet faced with the challenge of inventing a completely new stage setting, and a new script, one that “will suffice,” one that “repeats” to the mind “Exactly, that which it wants to hear,” “Sounds passing through sudden rightnesses.” This seems to me to capture any act of the imagination, poetic and scientific.
Of Modern Poetry
The poem of the mind in the act of finding
What will suffice. It has not always had
To find: the scene was set; it repeated what
Was in the script.
Then the theatre was changed
To something else. Its past was a souvenir.
It has to be living, to learn the speech of the place.
It has to face the men of the time and to meet
The women of the time. It has to think about war
And it has to find what will suffice. It has
To construct a new stage. It has to be on that stage,
And, like an insatiable actor, slowly and
With meditation, speak words that in the ear,
In the delicatest ear of the mind, repeat,
Exactly, that which it wants to hear, at the sound
Of which, an invisible audience listens,
Not to the play, but to itself, expressed
In an emotion as of two people, as of two
Emotions becoming one. The actor is
A metaphysician in the dark, twanging
An instrument, twanging a wiry string that gives
Sounds passing through sudden rightnesses, wholly
Containing the mind, below which it cannot descend,
The troubles plaguing academic science — including fierce competition for funding, dismal career opportunities for young scientists, overdependence on soft money, excessive time spent applying for grants, and many more — do not arise, Stephan suggests, from a shortage of funds. In 2009, she notes, the United States spent nearly $55 billion on university- and medical school–based research and development, far more than any other nation.
The problems arise, Stephan argues, from how that money is allocated: who gets to spend it, where, and on what. Unlike a number of other countries, the United States structures university-based research around short-term competitive grants to faculty members. The incentives built into this system lead universities to behave “as though they are high-end shopping centers,” she writes. “They turn around and lease the facilities to faculty in [exchange for] indirect costs on grants and buyout of salary. In many instances, faculty ‘pay’ for the opportunity of working at the university, receiving no guarantee of income if they fail to bring in a grant.” Those who land funding staff their labs with students enrolled in their department’s graduate program, or with postdocs. Paid out of the faculty member’s grant, both types of workers depend on the principal investigator’s (PI’s) continued success in the tournament.
Universities, however, also face considerable risks. They must, for example, provide large start-up packages to outfit new faculty members for the competition. Newcomers generally have about 3 years to establish a revenue stream — to start winning “the funding to stay in business,” Stephan says. The need to reduce risk explains universities’ growing penchant for hiring faculty members off the tenure track and using adjuncts for teaching. “Medical schools have gone a step further,” Stephan notes, “employing people, whether tenured or nontenured, with minimal guarantees of salary.” Where tenure once constituted a pledge to pay a person’s salary for life, it now constitutes, in the acerbic definition I’ve heard from some medical school professors, a mere “license to go out and fund your own salary.”
Risk avoidance has scientific as well as financial consequences. “The system … discourages faculty from pursuing research with uncertain outcomes,” which may endanger future grants or renewals. This peril is “particularly acute for those on soft money.” Experimental timidity produces “little chance that transformative research will occur and that the economy will reap significant returns from investments in research and development.”
As in all financial ventures, cost determines much of what goes on in the laboratory. “Cost plays a role in determining whether researchers work with male mice or female mice (females, it turns out, can be more expensive), whether principal investigators staff their labs with postdoctoral fellows (postdocs) or graduate students, and why faculty members prefer to staff labs with ‘temporary’ workers, be they graduate students, postdocs, or staff scientists, rather than with permanent staff.” Postdocs often are a PI’s best staffing buy, Stephan writes, because their excellent skills come with no requirement to pay tuition, which at top private institutions can run $30,000 a year or more. Overall, the need to reduce risk and cost in the grant-based system produces “incentives … to get bigger and bigger” by winning the maximum number of grants and, because grad students and postdocs do the actual bench work, to “produce more scientists and engineers than can possibly find jobs as independent researchers.”
Although one topflight report described this setup as “ ‘incredibly successful’ from the perspective of faculty,” Stephan observes, “it is the Ph.D. students and postdocs who are bearing the cost of the system — and the U.S. taxpayers — not the principal investigators.”
It is easy to suppose that few people realize on that occasion, which comes to all of us, when we look at the blue sky for the first time, that is to say: not merely see it, but look at it and experience it and for the first time have a sense that we live in the center of a physical poetry, a geography that would be intolerable except for the non-geography that exists there – few people realize that they are looking at the world of their own thoughts and the world of their own feelings.
New Year’s reading resolutions are floating around. Lately, my tendency is to avoid making lists of books to read, and instead to carry out reading projects. My current one is to read 1952: a good year for both mainstream literature and science fiction. A while back I started a post-apocalyptic sci-fi reading project (which I have yet to finish). One of the best ever post-apocalyptic novels came out in 1952, Wilson Tucker’s The Long Loud Silence. A couple of lousy ones came out too, Poul Anderson’s Vault of the Ages and Andre Norton’s Star Man’s Son. I’ve read those three, but there’s one more big one from 1952, Bernard Wolfe’s Limbo, a post-apocalyptic cyborg novel that made David Pringle’s list of the top 100 science fiction novels. So I’m reading that one, since I picked it up for Christmas (thanks, Dad).
1952 was a great year for literature. I’m currently reading Steinbeck’s majestic East of Eden (thanks for the recommendation, Yago), which I will follow up with Ralph Ellison’s Invisible Man. Other remarkable books that came out that year are Hemingway’s The Old Man and the Sea, Flannery O’Connor’s Wise Blood, and Kurt Vonnegut’s Player Piano. Charlotte’s Web also came out in 1952.
Other interesting sci-fi published in book form that year includes Asimov’s Foundation and Empire, Clifford Simak’s City, van Vogt’s The Weapon Makers, and the very first Hugo award winner, Alfred Bester’s edgy, proto-cyberpunk The Demolished Man.
Why do this? In part because I was already reading East of Eden, but also, given my chronological sci-fi reading, I’m interested in contrasting what was going on in post-Hiroshima science fiction with the concerns that show up in mainstream literature.
So there it is - 1952 in (English) literature, and my first reading project of the year.
Certainly, achieving the goals of the Human Genome Project required engineers, physicists, and computer scientists. It would be silly to argue against large interdisciplinary teams where a mammoth technical goal can be clearly defined. But when I think of new fields in science that have been opened, I don’t think of interdisciplinary teams combining existing skills to solve a defined problem—I think of single interdisciplinary people inventing new ways to look at the world.
Focusing on interdisciplinary teams instead of interdisciplinary people reinforces standard disciplinary boundaries rather than breaking them down. An interdisciplinary team is a committee in which members identify themselves as an expert in something else besides the actual scientific problem at hand, and abdicate responsibility for the majority of the work because it’s not their field. Expecting a team of disciplinary scientists to develop a new field is like sending a team of monolingual diplomats to the United Nations.