Wednesday, October 22, 2014
Scientists in Cambridge, England, have found hidden signatures in the brains of people in a vegetative state that point to networks that could support consciousness — even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate. Although unable to move or respond, some patients in a vegetative state are able to carry out tasks such as imagining playing a game of tennis, the scientists note. Using a functional magnetic resonance imaging (fMRI) scanner, researchers have previously been able to record activity in the premotor cortex, the part of the brain that deals with movement, in apparently unconscious patients asked to imagine playing tennis.
Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, has used high-density electroencephalography (EEG) and graph theory to study networks of activity in the brains of 32 patients diagnosed as vegetative or minimally conscious and compare them with healthy adults. The researchers showed that the connectome — the rich and diversely connected networks that support awareness in the healthy brain — is typically impaired in patients in a vegetative state. But they also found that some vegetative patients had well-preserved brain networks that looked similar to those of healthy adults — these patients were the ones who had shown signs of hidden awareness by following commands such as imagining playing tennis.
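For readers curious what a graph-theoretic comparison of brain networks involves: connectivity between EEG channels is summarised as a graph, and network metrics are then compared across groups. Below is a minimal sketch of one standard metric, global efficiency (the mean inverse shortest-path length), computed in plain Python on toy networks. The data and the choice of metric are illustrative assumptions, not details taken from the study.

```python
from collections import deque

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs
    of an unweighted graph given as {node: set_of_neighbours}."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        # Breadth-first search gives hop distances from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:          # unreachable pairs contribute 0
                    total += 1.0 / dist[dst]
    return total / pairs

# A richly connected toy network versus a sparse chain of the same size.
rich = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(global_efficiency(rich))   # 1.0 for a fully connected graph
print(global_efficiency(chain))  # strictly lower for the chain
```

In studies of this kind, the same idea is applied to much larger graphs whose nodes are electrodes and whose edges reflect statistical dependencies between their signals; a lower efficiency score is one way a degraded network can show up.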
Bridge-builder I am
between the holy and the damned
between the bitter and the sweet
between chaff and the wheat
Bridge-builder I am
between the goat and the lamb
between the sermon and the sin
between the princess and Rumpelstiltskin
Bridge-builder I am
between the yoni and the lingam
between the darkness and the light
between the left hand and the right
Bridge-builder I am
between the storm and the calm
between the nightmare and the sleeper
between the cradle and the reaper
Bridge-builder I am
between the hex and the hexagram
between the chalice and the cauldron
between the gospel and the Gorgon
Bridge-builder I am
between the serpent and the wand
between the hunter and the hare
between the curse and the prayer
Bridge-builder I am
between the hanger and the hanged
between the water and the wine
between the pearls and the swine
Bridge-builder I am
between the beast and the human
for who can stop the dance
of eternal balance?
by John Agard
from Poetry Archive
Stephen Hsu in Nautilus Magazine (Photo by Cinerama/Courtesy of Getty Images):
The possibility of super-intelligence follows directly from the genetic basis of intelligence. Characteristics like height and cognitive ability are controlled by thousands of genes, each of small effect. A rough lower bound on the number of common genetic variants affecting each trait can be deduced from the positive or negative effect on the trait (measured in inches of height or IQ points) of already discovered gene variants, called alleles.
The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability, showing that several single-nucleotide polymorphisms are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals.
If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen.
This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.
Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability. The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points.
Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.
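The arithmetic in this excerpt can be sanity-checked with a toy simulation under its own assumptions (roughly 10,000 variants, each carried in the IQ-increasing form with probability one half); the model and numbers below are illustrative only, not the consortium's actual analysis.

```python
import random
import statistics

random.seed(0)

N_VARIANTS = 10_000  # rough total of IQ-affecting variants, per the excerpt
N_PEOPLE = 20_000    # simulated individuals

# Each person's count of positive (IQ-increasing) variants is Binomial(N, 1/2),
# sampled cheaply here as the popcount of N random bits.
counts = [bin(random.getrandbits(N_VARIANTS)).count("1")
          for _ in range(N_PEOPLE)]

mean = statistics.fmean(counts)
sd = statistics.pstdev(counts)
# The count's standard deviation is sqrt(N)/2 = 50 alleles, so an excess on
# the order of sqrt(N) (about 100) positive variants puts an individual far
# into the right tail of the allele-count distribution.
print(round(mean), round(sd))  # close to 5000 and 50

# Many small additive effects produce a bell-shaped distribution: roughly
# 68% of simulated people fall within one standard deviation of the mean.
within_1sd = sum(abs(c - mean) <= sd for c in counts) / N_PEOPLE
print(within_1sd)
```

The exact conversion from "extra positive alleles" to IQ points depends on constants the excerpt only estimates, but the square-root scaling and the emergent normal distribution fall out of the additive model directly.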
John Yargo in the LA Review of Books:
Bolaño’s biographers face a unique problem. The seductive popular image of him — something like a better-read Burroughs — is at odds with the voice of his fiction and his essays, which tends to be more generous, expansive, and penetrating than his image suggests. Even key events, like his arrest in Pinochet’s Chile or his “heroin addiction,” have been alternately credited as formative aspects of his personality, and discredited by his surviving family, friends, and rivals as erroneous planks of a legacy campaign.
What stands out in his fiction are the riotous voices, the contradictory and implausible characters, the restless equivocations and recapitulations: the polyphony. The first full-length biography in English, Bolaño: A Biography in Conversations, sidesteps “the authoritative biography” trap and attempts to recreate Bolaño-esque polyphony in telling the author’s own story. Its author, Mónica Maristain, then editor-in-chief of the Mexican edition of Playboy, conducted the last interviews, which appear with other conversations published between 1999 and 2005 in a handy collection, Roberto Bolaño: The Last Interview. In those interviews, Bolaño clearly relishes talking about books and contradicting himself and his image. If the interviews are not confiding in the usual sense of personal disclosures, he is, to his credit, far more intimate and vulnerable when answering a question about Cervantes than other authors are when sharing sensitive details about their families.
As in the essay collection Between Parentheses, the picture that emerges from the interviews and the biography is a Bolaño that draws from different sources than contemporary Anglo-American literary fiction incubated in the university workshop. In place of Hemingway, Borges and Nicanor Parra; Carver is substituted by Breton; Denis Johnson usurped by Jacques Vaché and Witold Gombrowicz.
In Latin American fiction, he had a similar effect, shifting the terms on which authors would be understood.
Richard Marshall interviews Ofra Magidor in 3:AM Magazine:
3:AM: You say it’s important for linguistics, computer science – how so?
OM: In the case of linguistics, it is fairly obvious why category mistakes are important: one of the central tasks of linguistics is explaining why some sentences are fine and others are infelicitous. In fact, category mistakes are a particularly interesting case, because a plausible argument can be made for explaining their oddness in terms of each of syntax, semantics, and pragmatics – so this is a good phenomenon to explore for anyone who is interested in the distinction between these three realms of language. This is probably why in the late 1960s category mistakes played a key role in one of the central disputes in the foundations of linguistics – that between interpretative semanticists (who claimed that syntax is autonomous of semantics) and generative semanticists (who rejected the sharp divide between these two realms).
I should also note there was a period in the 1960s when there was quite a lot of discussion of category mistakes happening in parallel in linguistics and in philosophy, but there was practically no interaction at all between the two fields on this topic (they even used different terms – in linguistics authors usually refer to category mistakes as ‘selectional violations’). One thing I tried to do in the book was to bring together these two parallel debates. I'd like to think that these days there is much more co-operation between linguists and philosophers of language so this kind of divide is less likely to happen.
Moving to computer science: one straightforward way in which category mistakes are relevant is through the field of computational linguistics. Suppose for example that you have an automatic translator which is given the sentence ‘John hit the ball’. If the translator looks up the word ‘ball’ in a dictionary, it will encounter (at least) two meanings: a spherical object that is used in games, and a formal gathering for dancing. It is obvious that the most natural interpretation of the sentence uses the former meaning, and one way to see that is to note that if ‘ball’ were interpreted in the ‘dance’ sense, the sentence would be a category mistake. So being able to recognize category mistakes can help the automatic translator reach the correct interpretation.
But there is also a more general way in which the topic is relevant to computer science: computer programs use variables of various types which are assigned values – and it is very common to encounter cases where the value is of the wrong type for the variable. So there is an issue about how the program is going to deal with this kind of type mismatch which is in some ways parallel to the question of how natural languages deal with category mistakes.
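That parallel can be made concrete with a small sketch: encode each word sense's selectional restrictions (here, which verbs it can serve as the object of) and discard senses that would yield a category mistake. The lexicon, sense glosses, and function name below are made-up toy data, not drawn from any real translation system.

```python
# Hypothetical toy lexicon: each sense of "ball" lists the verbs whose
# direct object it can felicitously be (its selectional restrictions).
SENSES = {
    "ball": [
        {"gloss": "spherical object used in games",
         "object_of": {"hit", "throw", "kick", "bounce"}},
        {"gloss": "formal gathering for dancing",
         "object_of": {"attend", "host", "leave"}},
    ],
}

def disambiguate(verb, noun):
    """Return the gloss of the first sense the verb admits; senses whose
    restrictions are violated (a category mistake, e.g. hitting a dance)
    are skipped. Returns None if every reading is a category mistake."""
    for sense in SENSES.get(noun, []):
        if verb in sense["object_of"]:
            return sense["gloss"]
    return None

print(disambiguate("hit", "ball"))     # the game sense is selected
print(disambiguate("attend", "ball"))  # the dance sense is selected
```

A type checker rejecting `int + str` does essentially the same job at the level of program variables, which is the analogy the interview goes on to draw.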
Tuesday, October 21, 2014
The worst is yet to come, especially when we take into account the social and economic impact of the epidemic, which has so far hit only a small number of patients (by contrast, the combined death toll of Aids, tuberculosis and malaria, the ‘big three’ infectious pathogens, was six million a year as recently as 2000). Trade and commerce in West Africa have already been gravely affected. And Ebola has reached the heart of the Liberian government, which is led by the first woman to win a presidential election in an African democracy. There were rumours that President Ellen Johnson Sirleaf was not attending the UN meeting because she was busy dealing with the crisis, or because she faced political instability at home. But we knew that one of her staff had fallen ill with Ebola. A few days ago, we heard that another of our Liberian hosts, a senior health official, had placed herself in 21-day quarantine. Although she is without symptoms, her chief aide died of Ebola on 25 September. Such developments, along with the rapid pace and often spectacular features of the illness, have led to a level of fear and stigma which seems even greater than that normally caused by pandemic disease.
But the fact is that weak health systems, not unprecedented virulence or a previously unknown mode of transmission, are to blame for Ebola’s rapid spread. Weak health systems are also to blame for the high case-fatality rates in the current pandemic, which is caused by the Zaire strain of the virus.
I doubt that any other interview of the last ten years was more dramatic, more interesting as a clear statement of two positions or, in a sense, more absurdly grotesque than H.G. Wells’s interview with Stalin.
They met in Moscow on July 23 of last year and talked through an interpreter for nearly three hours. Wells gives a one-sided story in the last chapter of his “Experiment in Autobiography.” The official text of the interview can now be had in a pamphlet issued by International Publishers for two cents. A longer pamphlet, costing fifty cents in this country, was published in London by The New Statesman and Nation. It contains both the interview and an exchange of letters in which Bernard Shaw is keener and wittier than Wells or J.M. Keynes. There is, unfortunately, no letter from Stalin. We know what Wells thinks about him; it would be instructive to hear what Stalin thinks about Wells.
The drama of their meeting lay in the contrast between two systems of thought. Stalin, with full authority, was speaking for communism, for the living heritage of Marx and Engels and Lenin. Wells is not an official figure and was speaking for himself; but he spoke with the voice of Anglo-American liberalism.
The show eases, somewhat, the famous difficulty of telling a Picasso from a Braque in the woodshedding period of 1909-12, which is termed Analytic Cubism. A wall text—a welcome one among far too many that are prolix, making for an installation that is like a walk-through textbook—points out Braque’s tendency toward ruddy luminosity and Picasso’s toward dramatic shadow. Still, the works speak a single visual language of clustered forms that advance and recede in bumps and hollows, with shaded planes, often bodiless contours, and stuttering fragments of representation. It’s said that they rendered objects from different viewpoints simultaneously, but seeing the works that way is beyond me. You don’t take in an Analytic Cubist picture as a whole. Rather, you survey it, as with an aerial view of some terrain that you must then explore on foot.
Oddly, for a style that crowds the picture plane, spatial illusion is crucial to Cubism. You know that you’re on the right track when, to your eye, the “little cube” elements start to pop in and out, as if in low relief. There’s a vicarious tactility to the experience. What the elements represent matters far less than where they are, relative to one another. To see how this works, it helps to take note of an endemic formal problem of Cubist painting: what to do in the corners, where the third dimension can’t be sustained.
Peter Conrad in The Guardian:
Revolutions usually leave ancient institutions tottering, societies shaken, the streets awash with blood. But what Walter Isaacson calls the “digital revolution” has kept its promise to liberate mankind. Enrichment for the few has been balanced by empowerment for the rest of us, and we can all – as the enraptured Isaacson says – enjoy a “sublime user experience” when we turn on our computers. Wikipedia gives us access to a global mind; on social media we can chat with friends we may never meet and who might not actually exist; blogs “democratise public discourse” by giving a voice to those who were once condemned to mute anonymity. Has heaven really come down to our wired-up, interconnected Earth?
What Isaacson sees as an eruption of communal creativity began with two boldly irreligious experiments: an attempt to manufacture life scientifically, followed by a scheme for a machine that could think. After Mary Shelley’s Frankenstein stitched together his monster, Byron’s bluestocking daughter Ada Lovelace devised an “analytical engine” that could numerically replicate the “changes of mutual relationship” that occurred in God’s creation. Unlike Shelley’s mad scientist, Lovelace stopped short of challenging the official creator: her apparatus had “no pretension to originate anything”. A century later, political necessity quashed this pious dread. The computing pioneers of the 1930s, as Isaacson points out, served military objectives. At MIT, Vannevar Bush’s differential analyser churned out artillery firing tables, and at Bletchley Park, after the war began, an all-electronic computer called the Colossus deciphered German codes. Later, the US air force and navy gobbled up all available microchips, which were used for guiding warheads aimed at targets in Russia or Cuba; only when the price of the chips dropped could they be used to power consumer products, not just weapons.
Anahad O'Connor in The New York Times:
A genetic variant that is particularly common in some Hispanic women with indigenous American ancestry appears to drastically lower the risk of breast cancer, a new study found. About one in five Latinas in the United States carry one copy of the variant, and roughly 1 percent carry two.
...Many genome-wide association studies have looked for associations with breast cancer in women of European descent. But this was the first such study to include large numbers of Latinas, who in this case hailed mostly from California, Colombia and Mexico, said the lead author of the study, Laura Fejerman of the Institute for Human Genetics in San Francisco. The researchers zeroed in on chromosome 6 and discovered the protective variant, which is known as a single nucleotide polymorphism, or SNP (pronounced “snip”). They also discovered that its frequency tracked with indigenous ancestry. It occurred with about 15 percent frequency in Mexico, 10 percent in Colombia and 5 percent in Puerto Rico. But its frequency was below 1 percent in whites and blacks, and other studies have shown that it occurs in about 2 percent of Chinese people. “My expectation would be that if you go to a highly indigenous region in Latin America, the frequency of the variant would be between 15 and 20 percent,” Dr. Fejerman said. “But in places with very low indigenous concentration — places with high European ancestry — you might not even see it.”
Jaroslav Flegr is no kook. And yet, for years, he suspected his mind had been taken over by parasites that had invaded his brain. So the prolific biologist took his science-fiction hunch into the lab. What he’s now discovering will startle you. Could tiny organisms carried by house cats be creeping into our brains, causing everything from car wrecks to schizophrenia?
Kathleen McAuliffe in The Atlantic:
Certainly Flegr’s thinking is jarringly unconventional. Starting in the early 1990s, he began to suspect that a single-celled parasite in the protozoan family was subtly manipulating his personality, causing him to behave in strange, often self-destructive ways. And if it was messing with his mind, he reasoned, it was probably doing the same to others.
The parasite, which is excreted by cats in their feces, is called Toxoplasma gondii (T. gondii or Toxo for short) and is the microbe that causes toxoplasmosis—the reason pregnant women are told to avoid cats’ litter boxes. Since the 1920s, doctors have recognized that a woman who becomes infected during pregnancy can transmit the disease to the fetus, in some cases resulting in severe brain damage or death. T. gondii is also a major threat to people with weakened immunity: in the early days of the AIDS epidemic, before good antiretroviral drugs were developed, it was to blame for the dementia that afflicted many patients at the disease’s end stage. Healthy children and adults, however, usually experience nothing worse than brief flu-like symptoms before quickly fighting off the protozoan, which thereafter lies dormant inside brain cells—or at least that’s the standard medical wisdom.
But if Flegr is right, the “latent” parasite may be quietly tweaking the connections between our neurons, changing our response to frightening situations, our trust in others, how outgoing we are, and even our preference for certain scents. And that’s not all. He also believes that the organism contributes to car crashes, suicides, and mental disorders such as schizophrenia. When you add up all the different ways it can harm us, says Flegr, “Toxoplasma might even kill as many people as malaria, or at least a million people a year.”
Over at Philosophy Bites:
Subjective experience leads to the so-called 'hard problem' of consciousness: the difficulty of explaining qualia in terms of the brain. Keith Frankish discusses both the problem and a possible solution in this episode of the Philosophy Bites podcast.
Nick Smith in Aeon (Photo by W Eugene Smith/Magnum):
Apologies interact with the law in strange ways. Let’s start with criminal law. The modern penitentiary originated in the 18th century as a place of penance: it was where society sent its outcasts to study their Bibles, experience quiet self-alienation, hear the word of Christ, and repent. Less has changed than you might think.
Between 90 and 95 per cent of all criminal convictions in the US result from guilty pleas rather than jury trials. In many if not all of the millions of cases in the US criminal justice system, courts determine punishments in part based on their sense of whether the offender is remorseful or not. We might wince at the idea of secular states engaging in the ‘soul crafting’ of the original penitentiaries, but we still expect state agents to divine the essence of the offender’s nature and offer a suitable punishment based on her badness. We are, in other words, still in the grip of old spiritual traditions. And that leaves us with an old problem.
Findings of remorse in criminal contexts typically occur in the star chambers of intuition. State officials consult their gut feelings, evaluate a few emotional cues and then render a (typically unappealable) decision about the offender’s character. On the whole, they do not explain why they find an offender’s remorse compelling. They do not disclose or defend their standards of contrition. The US Federal Sentencing Guidelines attempted to add some consistency to punishments by allowing reductions in sentences for those who ‘accept responsibility’, but, in practice, accepting responsibility has come to mean agreeing to a plea even while denying guilt. The US Supreme Court has ruled that remorse can determine whether an offender lives or dies, yet we entrust such determinations to ‘know it when I see it’ standards, as if judges and juries can look into the eyes of offenders, intuit the depths of their evil, and punish accordingly.
This discretionary latitude has predictable consequences. Regardless of their blameworthiness, rich offenders tend to get more credit for their remorse than poor ones, a generalisation that holds throughout the US criminal process. Police officers are more likely to let a warning suffice when the offender is rich. Parole boards are more likely to find that a rich inmate is sufficiently reformed. By contrast, the apologies of minorities, the poor and the mentally disabled often fail to convince.
Sonja Pyykkö speaks to György Dragomán, author of "The White King", in Eurozine:
Day-to-day reality in a communist state was defined by a long list of forbidden practices, objects and opinions, and the culture of informants that aimed to keep everyone in check. Naturally, no one knew the identity of the informants, so neighbours, distant relatives and co-workers were all suspicious by default. Keeping people in a constant state of mistrust is a form of exercising power according to the ancient principle of divide and conquer. Dragomán links this distrust to the violence of the system:
"Conversations were full of violence and nearly every subject was approached through it. A dictatorship functions just so; violence replaces communication in its entirety. Since nobody could be trusted, you were forced into this violent guessing game of whether they'll hurt you or you them. It all started very early on, I can't even remember any other type of conversation. This is all in retrospect of course, at the time it felt completely normal."
Dragomán is very good at portraying the division between open, physical violence, and hidden violence that is apparent only on the level of speech and thought, and as a constant threat in everyday life.
"In some ways, the entire system's rhetoric was based on violence. Peace was of course a big deal and the state's rhetoric was always about peace, but there was always some battle involved. As a child I always had this terrible feeling that violence could emerge at any moment. Like in school, where during my childhood teachers still used canes. We weren't caned often, but the threat was always present. I remember this teacher, who had a broken arm in a cast. I remember the story was that he'd broken it when hitting a child. This probably wasn't true, but as a child, I believed the story completely."
Monday, October 20, 2014
by Gerald Dworkin
In the light of the recent fire-storm over the hiring and subsequent firing of Steven Salaita, I thought it might be interesting to revisit a case which raised similar issues about whether there are limits to what a university may do with respect to controversial speech. This was a case which did not raise issues about hiring and firing or procedural justice, so it may perhaps be a better one to focus on.
In 2002, the Harvard English Department invited the Irish poet Tom Paulin to give a poetry reading as the Morris Gray lecturer. Shortly thereafter it was brought to the attention of the inviters that Paulin had made the following statements in an interview with an Egyptian newspaper.
"Brooklyn-born settlers in the occupied territories should be shot dead. I think they are Nazis, racists, I feel nothing but hatred for them." Brooklyn? Has the man no shame?
The newspaper also quoted him as saying: "I never believed that Israel had the right to exist at all." In a poem published earlier in the Observer he referred to the "Zionist SS".
Another comment was "There's something profoundly sexual to the Zionist pleasure w/#Israel's aggression. Sublimation through bloodletting, a common perversion." Oh, sorry that was Steven Salaita.
As a result of this, and without as far as we know any influence by Harvard donors, the English Department retracted their invitation.
A hail of protests ensued. Strange bedfellows issued letters. This one came from Alan Dershowitz, Laurence Tribe and Charles Fried.
"By all accounts this Paulin fellow the English Department invited to lecture here is a despicable example of the anti-Semitic and/or anti-Israel posturing unfortunately quite widespread among European intellectuals (News, "Poet Flap Drew Summers' Input," Nov. 14). We think he probably should not have been invited. But Harvard has had its share of cranks, monsters, scoundrels and charlatans lecture here and has survived.
What is truly dangerous is the precedent of withdrawing an invitation because a speaker would cause, in the words of English department chair Lawrence Buell, "consternation and divisiveness." We are justly proud that our legal system insisted that the American Nazi Party be allowed to march through the heavily Jewish town of Skokie, Illinois. If Paulin had spoken, we are sure we would have found ways to tell him and each other what we think of him. Now he will be able to lurk smugly in his Oxford lair and sneer at America's vaunted traditions of free speech. There are some mistakes which are only made worse by trying to undo them."
James Shapiro, of Columbia where Paulin was visiting, condemned Harvard's actions as "disastrous".
by Rishidev Chaudhuri
At first (and at second, and third) glance, the use of spices in the cuisines of the subcontinent is a subtle and mysterious art, full of musty cupboards staffed by aging apothecaries (and grandmothers) and intertwined with theories of humor-balancing and our particular relationship to the gods. Recipes and spice blends are passed on in scribbled old notebooks and on furtive scraps of paper, copied and recopied like the epics, with long lists of spices and proportions, some crossed out and replaced with others for inexplicable reasons. The spices are essential, we are told, the order in which they are added is crucial, the mind of the cook must be perfectly clear, and the incantations must be uttered perfectly resonantly.
But how to make sense of this confusion if one did not grow up hovering over a mortar and pestle? Or even if one did and was momentarily distracted (perhaps by adolescence)? One route is a close reading of existing recipes and practices, noting patterns, highlighting parsimonious explanations and gradually drawing grander and grander conclusions. Equally useful is naïve phenomenological experimentation: an analytic strategy, where we isolate and examine spices to see what they bring to our senses. In this we should be motivated by Blake's dictum that to know what is enough we must cross it: the most clarifying way to figure out what a spice is doing is to increase its proportions in a recipe ad absurdum, until the structure starts to crack and you glimpse what column of the edifice was being held up by that particular spice. Unfortunately, while this is the right way to conduct disciplined phenomenological inquiry, it is not the right way to make something to eat, and so we will scale our ambitions back and instead simply exaggerate the spice that is being studied and strip away some of the surrounding complexity. This is an ongoing project of mine, as I try to understand subcontinental food, and I'm particularly interested in collecting and devising one-note recipes that highlight a particular spice (see this article on pepper, for example).
Coriander fruits, also commonly called coriander seeds, are good for this kind of analysis. Their flavors are crucial to many subcontinental foods, and are part of what makes the cuisine distinctive. Yet, unlike a number of other spices, coriander tends to be gentle and forgiving. It's a friendly spice, with flavors of citrus and flowers mixed in with a warm spiciness. If you have coriander seeds in your pantry, chew on a few seeds as you read this and you'll smell and taste the flavors I mean (you can do this with the powder too, but it's less pleasant and it'll dry out your mouth). There's also a slight soapiness, which I'm told some people pick up on more than others. If you're curious about the chemistry of coriander, Harold McGee's book On Food and Cooking is wonderful (as usual).
by Hari Balasubramanian
You might wonder who is conversing with whom. The best description I have is that these are two voices or perspectives in my head debating each other.
"This thing called the sense of self, the ego or the 'I'. There are many claims floating around these days that confidently say that the sense of self is an illusion. Not sure what to make of this. If I accept such a claim then who or what is this 'I' that just accepted the illusory nature of the self? It's like walking around in circles, like a dog chasing its tail and going nowhere."
"You could say the 'I' is some kind of energy in our conscious experience that comes together in such a way as to create the illusion."
"Maybe so. But how does that help me? I still feel the sense of self exists; that's what is speaking right now! I can't just wish it away because somebody says it is an illusion. I can't wish it away even if my own intellect logically reasons out that it is an illusion. For example, I know very well that the body – the best proxy I have for the 'I' – had a certain shape in the womb, a different shape as a baby, something entirely different as an adult, and will disintegrate after death. So I can reason pretty clearly that what we call the body is ever changing, from one moment to the next, that there is nothing constant there. Yet each one of us, without fail, invariably points to his or her body to claim that this is me…"
"I agree that there is something that always seems to be hovering around. And it is quite practical in claiming an ever-changing and perishable body, among a host of other perishable things, for itself. But when examined closely, the 'I' cannot be pointed out as anything concrete – where is it?"
"It is right here, always the main point of reference, always claiming that this is me or this is not me. Or I like this or I do not like this, or I am neutral to this. We cannot even frame a sentence while conversing that does not have 'I' or 'you' or 'this' or 'that' in it. If consciousness of anything is there, the 'I' is very much there mixed up with it. This is why – unless I experience it myself firsthand: I don't know what that would be like – the idea that the self is an illusion does not affect me. It's as if one moment the 'I' feels strongly it exists, and then the very next moment the very same 'I' cleverly changes hats and declares: 'Well, I shouldn't take myself seriously, since there is strong evidence that I am an illusion!'"
"Still, don't you think there is some practical benefit to the idea of no-self, of not taking the ego seriously? When I observe my thoughts closely, I find there is very little control; I don't know where thoughts are coming from and what their source is. They just come and go; sometimes my mind is very busy, chaotic, and at other times very slow and relaxed. Everything – decisions, events, what captures my attention, how things unfold in time – seems so complex and intertwined. An emotion or idea or feeling or inspiration will surge up within me whether I want it or not. When this understanding sinks deep enough, I may learn to understand that others too are being driven by thoughts that are not under their control. So maybe the 'I' can observe and train itself. It may or may not work – there are never any guarantees – but you remind yourself, all the time, to not take the ego seriously."
"You have to do it all the time because this thing called the 'I' is present all the time!"
we unload the freight of day
as night wraps up what day has told
there’s not much more to say—
myself in shade, eagle in her hold
both are restless in day’s throes.
who among us really understands
what night becomes, where daylight goes,
who knows the ground, the place we stand?
still the worm in unturned earth makes way,
a cardinal, blood red, in a maple’s crown
is more tuned than I am to the stuff the earth displays:
what lifts it up, what presses down
what’s hidden keeps us on the edge
with those we love our only hedge
by Jim Culleny
“Her hands full of earth, she kneels, in red suede high heels:” Planting a New Language in Diaspo/Renga
by Shadab Zeest Hashmi
This past summer, news of the Gaza massacres came most revealingly in images and videos taken with cell phones— the devices originally intended to connect us through voice, chronicling instead the horrors befalling Palestinians in real time, horrors that defy conventional language, and will not be chronicled with fidelity by the news media: a suffering made more pronounced by being pushed out of language. Through those seven weeks of Israeli bombardment, the days and nights linked with images of mangled children and rubble and hysteria had the effect of a long nightmare in which the sleeper neither has the power to change the outcome of impending calamity, nor is able to wake up and disengage from it.
The ripple effects of genocide and silencing go farther than we can imagine; victims and perpetrators can end up looking like a paper doll chain: inhumane/dehumanized. It never occurred to me that the claustrophobic effect of this chain may be reversed by another kind of chain, one that brings moments of erasure back into language by linking voices in poetry.
At a recent RAWI (Radius of Arab American Writers) conference, I heard a chain of poems, a “Renga,” written by two poets of different backgrounds, who, despite the unique stylistic sensibilities that set them apart, speak from the experience of calling many places home, and whose work is imbued with a concern to translate culture for the cause of a nuanced understanding of “the other.” These two poets, I discovered, know each other in the way it is common for writers to know each other— through writing; they had never met until that particular poetry reading I attended. Marilyn Hacker, who resides in Paris, a celebrated author of many volumes of poetry, and Deema Shehabi, her younger counterpart in California, also a poet of multiple cultures, decided to assemble a series of linked poems. After four years and thousands of email exchanges containing drafts of poems, the work is now available in published form. The title of the book, Diaspo/Renga, is a play on the word “Diaspora” and “Renga,” a traditional Japanese collaborative form. The Renga is made up of linked Tanka. Explaining the form, Deema Shehabi says: “Traditionally, one poet would write the first Tanka, followed by the other poet’s Tanka. The syllable count for each Tanka is 5-7-5 then 7-7. Marilyn stuck to the original syllable count where I did not.” In their adaptation of the Renga form, each poet writes two Tanka as a single poem, ten lines in all.
by Akim Reinhardt
Part I of this essay appeared last month.
Thus continues my grand voyage, in which a rusty ‘98 Honda Accord shuttles me from one end of North America to the other and back again . . .
After stumbling half-way across the continent, I settled into the northern Great Plains for a spell. Determined to visit a variety of archives, I criss-crossed South Dakota to the tune of a thousand miles. It's a big state.
First I spent some time in the East River college towns of Vermillion and Brookings. A hop, skip, and a jump from the Minnesota border, this here is Prairie Home Companion country. It's a land of hot dishes (casseroles) and Lutheran churches. Of sprawling horizons and "Oh, ya know."
There's lots of tall people. Lots of blond people. Lots of tall, blond people. I like it.
But after a week of researching and visiting old friends, I left behind the Scandinavian heritage and Minnesota-style niceties of eastern South Dakota. I made my way west across the Missouri River and then headed north. Actually, I crossed the line into North Dakota; Sitting Bull College on the Standing Rock Reservation is in the NoDak town of Fort Yates.
I'm happy to give the tribe some money, so I spent a night at the tribally owned Prairie Knights hotel and casino. I had a mind to play some poker, but when I went downstairs to investigate, I found the card room was already in the thick of a Texas Hold ‘Em tournament. So I bought a sandwich, returned to my room, and watched Derek Jeter's last game at Yankee Stadium.
After Standing Rock, the plan was to go straight down the gut of central South Dakota to Rosebud Reservation, which sits near the Nebraska border, and then westward to Pine Ridge Reservation in the state's southwestern corner.
If you were to plot my herky-jerky route across South Dakota, I suspect it would create an exciting new shape that mathematicians would get wide-eyed about. And then they'd come up with a cool name for this strange but essential new shape. Maybe something like an "akimus." The akimus will shed new light on our understanding of trapezoids. And of course it will have some mysterious relationship to Pi.
I can imagine this because I haven't passed a math class since the 10th grade.
by Grace Boey
Lately, I’ve been thinking a lot about Internet trolls. I’d always been vaguely aware of their presence, and had read some articles here and there about the threats they pose to constructive debate—but I never truly realized the full nature of their pestilence until I had to deal with them myself. Since I started publicly writing and commenting online, I’ve encountered abusive, non-constructive comments and emails on an increasingly frequent basis. I also co-manage an atheist social media page; I’m not the direct target of the trolls that lurk there, thankfully, but I do have to trawl through their vile comments, where they often abusively attack (or embarrass) causes I care about deeply.
Naturally, none of this has been good for my blood pressure. Last month, I became irritated enough to start work on a long exposition of online trolling—in the process, targeting specific trolls I’d personally encountered. Yes, hell hath no fury like a woman trolled, and I spent more time than I’d care to admit compiling comprehensive records of at least three of these individuals’ online activities. I even uncovered the physical, non-virtual identity of one of them.
You’d think I’d be happy to have struck troll-hunter’s gold. Yet, the more I wrote and uncovered, the less I wanted to publish a piece bashing trolldom in general, let alone one that put specific individuals on the spot. Though I was pleased with the quality of the article, I refrained from running it. And very fortunately so—a couple of weeks after the piece would have been published, the Brenda Leyland troll-exposing controversy erupted.
Here’s what I've come to think: there’s very strong reason to believe that many compulsive Internet trolls need our active help. The impersonality of the internet makes it easy for them to dehumanize others, but for this same reason, it’s also easy for us to completely dehumanize them. But we must resist this temptation. Who are the people behind these monikers and computer screens, really? Why do they thrive on trolling, and why on earth don't they have anything better to do? How did they become this way? When we really stop to think about these questions, a disturbing social and psychological picture emerges. Virtual trolls may be a problem as much to their human selves as they are to their human victims.
Walter Johnston. Flaky Thorn Acacia. Timbavati, South Africa, 2014.
On safari in August we were told that a "gall making wasp" injects a kind of growth hormone into the thorn to make it expand (see below) and thus provide a well protected nourishing home for its eggs.
I have not been able to corroborate this. If someone else can, I'd love to learn.
Here's the best I have found:
"Myrmecophilous acacia are found in Eastern Africa and Mesoamerica ...
...They develop some to most of their stipular spines into inflated, globose, ovoid, fusiform or thick cylindrical armatures. Their spines look like galls or horns, leading to species names like White swollen thorn acacia (=A. bussei), Black-galled acacia (A. malacocephala), Hairy-galled acacia (=A. mbuluensis), Bull's Horn acacia, or Ant-galled acacia, also called Whistling thorn acacia
The swollen thorns are genetically fixed. They are not randomly generated by the sting of an insect, like the galls produced by a wasp that injects her chemicals into a leaf, which then forms galls. Therefore the so-called gall-thorns are not real galls.
The fresh thorn is drilled open by an ant queen. Then it is carved out and she lays her eggs inside, starting a new colony ...
The obligate mutualistic Acacia-ants (Pseudomyrex in Mesoamerica and Crematogaster in Africa) protect the plant in different ways: they fiercely attack browsing mammals, ravaging insects and epiphytic vines. They prevent any twig from neighbouring trees from touching their host – to prevent hostile ants from invading their tree. For the same reason they cut shoots of their tree that develop too far towards the canopy of neighboring trees."
Walter Johnston. Swollen thorn of the Flaky Thorn Acacia. Timbavati, South Africa, 2014.
More on acacias here.
Photographs posted with permission from Walter Johnston.
by Bill Benzon
“The interests of humanity may change, the present curiosities in science may cease, and entirely different things may occupy the human mind in the future.”
“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
Stanislaw Ulam, “Tribute to John von Neumann”
In scientific prognostication we have a condition analogous to a fact of archery--the farther back you are able to draw your longbow, the farther ahead you can shoot.
R. Buckminster Fuller, Critical Path
Let’s get started:
- A week ago The Guardian published a long piece in which Pankaj Mishra argued that the Western world no longer provides a model the rest of the world should or even can follow, if it ever did.
- A couple of weeks ago polymath David Byrne asserted that he’d lost interest in contemporary art, feeling it had devolved into “inoffensive tchotchkes for billionaires and the museums they fund,” a sentiment that the late Robert Hughes had been promulgating for some years.
- Back in 1996 science journalist John Horgan published The End of Science, in which he argued that many fields of science had reached a point where they were no longer intellectually productive. The big problems had been solved, more or less, and further investigations seemed to be running in circles without any clear sense of progress.
Not only am I sympathetic with each of these ideas, I think they all reflect the same underlying cause: the wellsprings of old ideas – about social organization, artistic expression, and scientific explanation, certainly, but also about fiction, legal codes, economics, education, music, gender and family, and a host of others – have run dry and new ones have not yet been discovered.
I’m quite familiar with this phenomenon in the case of literary studies, where I received my graduate training. The French landed in Baltimore in the Fall of 1966 and catalyzed three decades of intellectual invention. The invention all but stopped about twenty years ago, leaving literary studies afflicted with a sense of malaise that goes deeper than budget cuts and umbrage taken at silly articles in which humorists of The New York Times take potshots at papers presented at the annual convention of the Modern Language Association.
How could new ideas just stop? Have people gotten stupid or is something else going on? If so, what?