Thursday, October 23, 2014
John Quiggin in Crooked Timber:
Gough Whitlam, Prime Minister of Australia from 1972 to 1975, died on Tuesday. More than any other Australian political leader, and as much as any political figure anywhere, Gough Whitlam embodied social democracy in its ascendancy after World War II, its high water mark around 1970 and its defeat by what became known as neoliberalism in the wake of the crises of the 1970s.
Whitlam entered Parliament in 1952, having served in the Royal Australian Air Force during the War, and following a brief but distinguished legal career. Although Labor had already chosen a distinguished lawyer (HV Evatt) as leader, Whitlam’s middle-class professional background was unusual for Labor politicians.
Whitlam marked a clear break with the older generation of Labor politicians in many other respects. He was largely indifferent to the party’s socialist objective (regarding the failure of the Chifley government’s bank nationalisation referendum as having put the issue off the agenda) and actively hostile to the White Australia policy and protectionism, issues with which Labor had long been associated.
On the other hand, he was keen to expand the provision of public services like health and education, complete the welfare state for which previous Labor governments had laid the foundations, and make Australia a fully independent nation rather than being, in Robert Menzies’ words, ‘British to the bootstraps’.
Adam Shatz in the London Review of Books:
The Death of Klinghoffer, John Adams’s 1991 opera about the hijacking of the Achille Lauro by the Palestine Liberation Front in 1985, has achieved a rare distinction in contemporary classical music: it’s considered so dangerous by its critics that they’d like to have it banned. For its opponents – the Klinghoffer family, Daniel Pearl’s father, conservative Jewish organisations, and now the former New York mayor Rudy Giuliani and former New York governor George Pataki, who took part in a noisy demonstration outside the Met last night – Klinghoffer is no less a sacrilege than The Satanic Verses was to Khomeini and his followers. They haven’t issued a fatwa, but they have done their best to sabotage the production ever since the Met announced it.
Peter Gelb, the Met’s general manager, capitulated in the summer to pressure from the Anti-Defamation League (and, according to the New York Times, from ‘three or four’ major Jewish donors), cancelling a live broadcast to cinemas around the world. The rationale for the decision, made against the backdrop of the Gaza offensive, was that the opera might be exploited by anti-semites. How, they didn’t say. For some reason the opera’s enemies don’t seem concerned that its unflinching portrayal of the murder of an elderly Jew in a wheelchair might be ‘used’ to foment anti-Muslim sentiment.
The notion that Adams and his librettist, Alice Goodman, are justifying terrorism is absurd. The hijacking is depicted in all its horror, chaos and fear. The scene that raised accusations of anti-semitism, a dinner table conversation among ‘the Rumors’, an American-Jewish family, was excised from the libretto long ago.
David Remnick in The New Yorker:
Benjamin Crowninshield Bradlee, the most charismatic and consequential newspaper editor of postwar America, died at the age of ninety-three on Tuesday. Among his many bequests to the Republic was a catalogue of swaggering anecdotes rich enough to float a week of testimonial dinners. Bradlee stories almost always relate to his glittering surface qualities, which combined the Brahmin and the profane. Let’s get at least one good one out of the way:
During his reign, from 1968 to 1991, as the executive editor of the Washington Post, Bradlee took time periodically to dictate correspondence into a recorder. His letters in no way resembled those of Emily Dickinson. He was given neither to self-doubt nor to self-restraint. In his era, there may have been demands by isolated readers for greater transparency, for correction or explanation, but there was no Internet, no Twitter, to amplify them. Bradlee was, by today’s standards, unchallengeable, and he was expert in the art of florid dismissal. His secretary, Debbie Regan, was, in turn, careful to reflect precisely his language when transcribing his dictation. One day, Regan approached the house grammarian, an editor named Tom Lippman, and admitted that she was perplexed. “Look, I have to ask you something,” she said. “Is ‘dickhead’ one word or two?”
This sort of stuff was especially entertaining when you remembered that Bradlee’s family was a concoction of seventeenth-century Yankees and semi-comic Vanity Fair-like European royalty.
Rory Stewart in the New York Review of Books:
There is a consensus in Afghan society: violence…must end. National reconciliation and respect for fundamental human rights will form the path to lasting peace and stability across the country. The people’s aspirations must be represented in an accountable, broad-based, gender-sensitive, multi-ethnic, representative government that delivers daily value.
That was twelve years ago. No one speaks like that now—not even the new president. The best case now is presented as political accommodation with the Taliban, the worst as civil war.
Western policymakers still argue, however, that something has been achieved: counterterrorist operations succeeded in destroying al-Qaeda in Afghanistan, there has been progress in health care and education, and even the Afghan government has its strengths at the most local level. This is not much, given that the US-led coalition spent $1 trillion and deployed one million soldiers and civilians over thirteen years. But it is better than nothing; and it is tempting to think that everything has now been said: after all, such conclusions are now reflected in thousands of studies by aid agencies, multilateral organizations, foreign ministries, intelligence agencies, universities, and departments of defense.
But Anand Gopal’s No Good Men Among the Living shows that everything has not been said. His new and shocking indictment demonstrates that the failures of the intervention were worse than even the most cynical believed.
Billy Kung in ArtAsiaPacific:
From Dhaka, 32-year-old photographer Farzana Hossen has produced a harrowing document called “Lingering Scars” (2013), a series of photographs depicting women victims of acid attacks. Hossen is one of 13 photographers taking part in an exhibition called “Voice of Tacitness: Asian Women Photography,” currently running at the Hong Kong Arts Centre from October 19 until November 2. According to the Acid Survivors Foundation (ASF) in Bangladesh, from 1999 to 2011 there were 1,084 reported cases of acid assaults against women. Most of these attacks were related to marriage disputes or spurned lovers, and far too often women are blamed for family breakups and divorce. Worse still is the fact that both society and state have often turned a blind eye to the situation. Violence against women is an ongoing issue that has been addressed by many artists, but none with the sobering directness and honesty that are shown through Hossen’s body of work. Her bravery and sensitivity in the portrayal of these women are admirable, and the trust she has established with them comes across in her images. Upon first viewing the photographs, one is almost affronted by the horror, but those immediate responses are soon overtaken by an unbelievable sadness and, at the same time, by the courage and warmth displayed between the subjects and the photographer. Hossen wrote:
“I intend to work with women and girls who have survived an acid attack, and are trying to rebuild their lives despite carrying horrific mental and physical wounds. This is a profoundly personal undertaking and an important part of reflecting upon my past. I plan to travel to ten different districts of Bangladesh where groups of survivors have built support structures. I want to document their lives, their struggles, their sufferings and their resilience. The stories of these women will shed light on a very dark corner of human existence. They will give voice to individuals who’ve been silenced by their oppressors. They will tell the world that we need to campaign for women’s rights.”
Ewen Callaway in Nature:
In 2004, researchers announced the discovery of Homo floresiensis, a small relative of modern humans that lived as recently as 18,000 years ago. The ‘hobbit’ is now considered the most important hominin fossil in a generation. Here, the scientists behind the find tell its story. The hobbit team did not set out to find a new species. Instead, the researchers were trying to trace how ancient people travelled from mainland Asia to Australia. At least that was the idea when they began digging in Liang Bua, a large, cool cave in the highlands of Flores in Indonesia. The team was led by archaeologists Mike Morwood and Raden Soejono, who are now deceased.
Roberts: It was a very small body. That was the first thing that was immediately apparent — but also an incredibly small skull. We first thought, “Oh, it’s a child.” There was a guy who was working with us called Rokus. He did all the faunal identifications of the bones. But Rokus said, “No, no, no, it’s not a child. It’s not modern human at all. It’s a different species.”
Saptomo: Thomas drew the skeleton on paper, and he faxed the drawing to Mike and to Professor Soejono in Jakarta.
Sutikna: Mike called me at night. I couldn’t understand what he was saying over the phone, he was so excited.
John Crowley in Lapham's Quarterly:
“Then what is time?” St. Augustine asked himself in his Confessions. “I know what it is if no one asks; but if anyone does, then I cannot explain it.”
Augustine saw the present as a vanishing knife edge between the past, which exists no longer, and the future, which doesn’t yet. All that exists is the present; but if the present is always present and never becomes the past, it’s not time, but eternity. Augustine’s view is what the metaphysicians call “presentism,” which holds that a comprehensive description of what exists (an ontology) can and should include only what exists right now. But among the things that do exist now are surely such things as the memory of former present moments and what existed in them, and the archives and old calendars that denote or describe them. Like the dropped mitten in the Ukrainian tale that is able to accommodate animals of all sizes seeking refuge in it from the cold, the ever-vanishing present is weirdly capacious—“There’s always room for one more!”
Time is continuous, but calendars are repetitive. They end by beginning again, adding units to ongoing time just by turning in place, like a stationary bicycle. Most calendars these days are largely empty, a frame for our personal events and commitments to be entered in; but historically calendars have existed in order to control time’s passage with recurring feasts, memorials, sacred duties, public duties, and sacred duties done publicly—what the church I grew up in calls holy days of obligation. Such a calendar can model in miniature the whole of time, its first day commemorating the first day of Creation, its red-letter days the great moments of world time coming up in the same order they occurred in history, the last date the last day, when all of time begins again. The recent fascination with the Mayan “long count” calendar reflects this: the world cycle was to end when the calendar did.
It’s possible to live in more than one time, more than one history of the world, without feeling a pressing need to reconcile them. Many people live in a sacred time—what the religious historian Mircea Eliade called “a primordial mythical time made present”—and a secular time, “secular” from the Latin saeculum, an age or a generation. Sacred time, “indefinitely recoverable, indefinitely repeatable,” according to Eliade, “neither changes nor is exhausted.” In secular time, on the other hand, each year, month, second, is a unique and unrepeatable unit that disappears even as it appears in the infinitesimal present.
An Arundel Tomb
Side by side, their faces blurred,
The earl and countess lie in stone,
Their proper habits vaguely shown
As jointed armour, stiffened pleat,
And that faint hint of the absurd -
The little dogs under their feet.
Such plainness of the pre-baroque
Hardly involves the eye, until
It meets his left-hand gauntlet, still
Clasped empty in the other; and
One sees, with a sharp tender shock,
His hand withdrawn, holding her hand.
They would not think to lie so long.
Such faithfulness in effigy
Was just a detail friends would see:
A sculptor's sweet commissioned grace
Thrown off in helping to prolong
The Latin names around the base.
They would not guess how early in
Their supine stationary voyage
The air would change to soundless damage,
Turn the old tenantry away;
How soon succeeding eyes begin
To look, not read. Rigidly they
Persisted, linked, through lengths and breadths
Of time. Snow fell, undated. Light
Each summer thronged the grass. A bright
Litter of birdcalls strewed the same
Bone-littered ground. And up the paths
The endless altered people came,
Washing at their identity.
Now, helpless in the hollow of
An unarmorial age, a trough
Of smoke in slow suspended skeins
Above their scrap of history,
Only an attitude remains:
Time has transfigured them into
Untruth. The stone fidelity
They hardly meant has come to be
Their final blazon, and to prove
Our almost-instinct almost true:
What will survive of us is love.
by Philip Larkin
from The Whitsun Weddings, 1964
Zeeya Merali in Discover:
In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.
The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”
Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.
But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.
So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?
Evgeny Morozov in The Observer (Photograph: Mandel Ngan/AFP/Getty Images):
In the near future, Google will be the middleman standing between you and your fridge, you and your car, you and your rubbish bin, allowing the National Security Agency to satisfy its data addiction in bulk and via a single window.
This "smartification" of everyday life follows a familiar pattern: there's primary data – a list of what's in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice.
In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – "evidence-based" and "results-oriented," technology is here to help.
This new type of governance has a name: algorithmic regulation. In as much as Silicon Valley has a political programme, this is it. Tim O'Reilly, an influential technology publisher, venture capitalist and ideas man (he is to blame for popularising the term "web 2.0") has been its most enthusiastic promoter. In a recent essay that lays out his reasoning, O'Reilly makes an intriguing case for the virtues of algorithmic regulation – a case that deserves close scrutiny both for what it promises policymakers and the simplistic assumptions it makes about politics, democracy and power.
Wednesday, October 22, 2014
Rachel Trethewey in The Independent:
In the famous image of Vita Sackville-West, Lady with a Red Hat, the writer is the embodiment of the confident young aristocrat. Exuding a languid elegance, her heavy-lidded Sackville eyes gaze out from beneath the broad brim. But this portrait captures another element of Vita’s persona. It was painted in 1918, shortly after her sexual awakening with Violet Keppel, and beneath the flamboyant clothes and bright lipstick there is an androgynous quality. In Behind the Mask, the first biography of Vita for 30 years, Matthew Dennison focuses on this ambiguity, exploring the duality which was rooted in her genetic inheritance and her eccentric upbringing.
Vita’s identity embraced masculine and feminine elements; her stiff-upper-lip English ancestry was in conflict with the Latin blood from her grandmother Pepita, a Spanish dancer who was the mistress of Lionel, Baron Sackville. Among their illegitimate offspring was Vita’s mother Victoria, who by marrying her cousin became the mistress of the Sackvilles’ ancestral home, Knole in Kent. The author of acclaimed biographies of Queen Victoria and her daughter Princess Beatrice, Dennison is particularly good at analysing complex mother-daughter relationships. Here, he sees Victoria’s identity interwoven with Vita’s. The former was a capricious character, he explains: “The fairy godmother was also a witch.” She claimed she could not bear to look at Vita because she was so ugly; the cruelty in Vita’s treatment of her lovers was learnt from her mother. An only child, Vita was often left at Knole with nannies and governesses while her parents travelled abroad. The house became like a person to her; built like a medieval village, it fired her imagination. Tragically for Vita, because she was a female she could not inherit the house. Dennison sees her fiction as addressing this; in her fantasy life, she celebrated a heroic male version of herself.
Daniel C. Dennett in Prospect:
For several millennia, people have worried about whether or not they have free will. What exactly worries them? No single answer suffices. For centuries the driving issue was about God’s supposed omniscience. If God knew what we were going to do before we did it, in what sense were we free to do otherwise? Weren’t we just acting out our parts in a Divine Script? Were any of our so-called decisions real decisions? Even before belief in an omniscient God began to wane, science took over the threatening role. Democritus, the ancient Greek philosopher and proto-scientist, postulated that the world, including us, was made of tiny entities—atoms—and imagined that unless atoms sometimes, unpredictably and for no reason, interrupted their trajectories with a random swerve, we would be trapped in causal chains that reached back for eternity, robbing us of our power to initiate actions on our own.
Lucretius adopted this idea, and expressed it with such dazzling power in his Epicurean masterpiece, De Rerum Natura, that ever since the rediscovery of that poem in the 15th century, it has structured the thinking of philosophers and scientists alike. This breathtaking anticipation of quantum mechanics and its sub-atomic particles jumping—independently of all prior causation—from one state to another, has been seen by many to clarify the problem and enunciate its solution in one fell swoop: to have free will is to be the beneficiary of “quantum indeterminism” somewhere deep in our brains. But others have seen that an agent with what amounts to an utterly unpredictable roulette wheel in the driver’s seat hardly qualifies as an agent who is responsible for the actions chosen. Does free will require indeterminism or not? Many philosophers are sure they know the answer (I among them), but it must be acknowledged that nothing approaching consensus has yet been reached.
Kristin Hohenadel in Slate:
Held annually since 2009 in Grand Rapids, Michigan, ArtPrize is a democratic art competition open to anyone in the world over age 18, with generous cash prizes awarded by both a jury of experts and popular vote. For the first time, a single work—Intersections by Pakistan-born Anila Quayyum Agha—took this year’s public and juried grand prizes for a total of $300,000.
Agha’s stunning piece is an obvious crowd-pleaser, a 6½-foot square laser-cut, black lacquer wood cube suspended from the ceiling and lit with a single light bulb that casts breathtaking 32-foot-by-34-foot shadows to create instant architecture in an otherwise empty room.
The artist, who is now an associate professor of drawing at the Herron School of Art and Design in Indianapolis, explains on her website that the work is based on the geometrical patterns used in Islamic sacred spaces.
It was created to express what she describes as “the seminal experience of exclusion as a woman from a space of community and creativity such as a Mosque and translates the complex expressions of both wonder and exclusion that have been my experience while growing up in Pakistan.”
Michael Mark Cohen in Medium:
I am a white, middle class male professor at a big, public university, and every year I get up in front of a hundred and fifty to two hundred undergraduates in a class on the history of race in America and I ask them to shout white racial slurs at me.
The results are usually disappointing.
First of all, everyone knows that saying anything overtly racist in front of strangers is totally taboo. So the inhibitions to participation in this insane activity are already pretty great. Even so, most of these kids are not new to conversations about race; the majority of them are students of color, including loads of junior college transfers, student parents, vets, and a smattering of white kids, mostly freshmen. Of course some are just scared of speaking in front of so many people, no matter what the topic.
So I cajole a few of them into “Cracker” and “Red Neck.” We can usually get to “Hillbilly” or “Trailer Trash” or “White Trash,” possibly even “Peckerwood,” before folks recognize the “Cletus the slack-jawed yokel” pattern of class discrimination here. And being that we are at a top ranked west coast university, not only do we all share basic middle class aspirations, but we can feel pretty safe in the fact that there are no “Red Necks” here to insult.
Scientists in Cambridge, England have found hidden signatures in the brains of people in a vegetative state that point to networks that could support consciousness — even when a patient appears to be unconscious and unresponsive. The study could help doctors identify patients who are aware despite being unable to communicate. Although unable to move and respond, some patients in a vegetative state are able to carry out tasks such as imagining playing a game of tennis, the scientists note. Using a functional magnetic resonance imaging (fMRI) scanner, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain that deals with movement, in apparently unconscious patients asked to imagine playing tennis.
Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and graph theory to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The researchers showed that the connectome — the rich and diversely connected networks that support awareness in the healthy brain — are typically impaired in patients in a vegetative state. But they also found that some vegetative patients had well-preserved brain networks that look similar to those of healthy adults — these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.
Bridge-builder I am
between the holy and the damned
between the bitter and the sweet
between chaff and the wheat
Bridge-builder I am
between the goat and the lamb
between the sermon and the sin
between the princess and Rumpelstiltskin
Bridge-builder I am
between the yoni and the lingam
between the darkness and the light
between the left hand and the right
Bridge-builder I am
between the storm and the calm
between the nightmare and the sleeper
between the cradle and the reaper
Bridge-builder I am
between the hex and the hexagram
between the chalice and the cauldron
between the gospel and the Gorgon
Bridge-builder I am
between the serpent and the wand
between the hunter and the hare
between the curse and the prayer
Bridge-builder I am
between the hanger and the hanged
between the water and the wine
between the pearls and the swine
Bridge-builder I am
between the beast and the human
for who can stop the dance
of eternal balance?
by John Agard
from Poetry Archive
Stephen Hsu in Nautilus Magazine (Photo by Cinerama/Courtesy of Getty Images):
The possibility of super-intelligence follows directly from the genetic basis of intelligence. Characteristics like height and cognitive ability are controlled by thousands of genes, each of small effect. A rough lower bound on the number of common genetic variants affecting each trait can be deduced from the positive or negative effect on the trait (measured in inches of height or IQ points) of already discovered gene variants, called alleles.
The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability. They have shown that a handful of single-nucleotide polymorphisms in human DNA are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals.
If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen.
This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.
Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability. The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points.
Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.
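The excerpt’s arithmetic can be checked with a short sketch. The numbers here, 10,000 variants, a fair-coin chance of carrying each positive version, and 15 IQ points per standard deviation, are the article’s rough assumptions, not real genetic data:

```python
import math

# Assumptions taken from the excerpt (illustrative, not measured data):
n_variants = 10_000   # estimated total number of IQ-affecting variants
p = 0.5               # each variant carried in its positive form by chance

# For a sum of n independent coin-flip contributions, the population
# standard deviation of the positive-allele count grows like sqrt(n).
mean_positive = n_variants * p                        # 5000.0
sd_positive = math.sqrt(n_variants * p * (1 - p))     # sqrt(n)/2 = 50.0

# A person carrying ALL positive variants sits this many SDs above average:
sds_above_mean = (n_variants - mean_positive) / sd_positive   # 100.0
iq_gain = sds_above_mean * 15                                 # 1500.0 points
print(sds_above_mean, iq_gain)
```

Under this fair-coin model the constant is sqrt(n)/2 rather than the article’s rounder “about 100” alleles per standard deviation, but the square-root scaling, and the conclusion that a fully optimized genome would sit roughly 100 standard deviations (well over 1,000 IQ points) above the mean, both come out as stated.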
John Yargo in the LA Review of Books:
Bolaño’s biographers face a unique problem. The seductive popular image of him — something like a better-read Burroughs — is at odds with the voice of his fiction and his essays, which tends to be more generous, expansive, and penetrating than his image suggests. Even key events, like his arrest in Pinochet’s Chile or his “heroin addiction,” have been alternately credited as formative aspects of his personality, and discredited by his surviving family, friends, and rivals as erroneous planks of a legacy campaign.
What stands out in his fiction are the riotous voices, the contradictory and implausible characters, the restless equivocations and recapitulations: the polyphony. The first full-length biography in English, Bolaño: A Biography in Conversations, sidesteps “the authoritative biography” trap and attempts to recreate Bolaño-esque polyphony in telling the author’s own story. As the editor-in-chief of the Mexican edition of Playboy, Mónica Maristain conducted the last interviews, which appear with other conversations published between 1999 and 2005 in a handy collection, Roberto Bolaño: The Last Interview. In those interviews, Bolaño clearly relishes talking about books and contradicting himself and his image. If the interviews are not confiding in the usual sense of personal disclosures, to his credit, he is far more intimate and vulnerable when answering a question about Cervantes than when other authors are sharing sensitive details about their families.
As in the essay collection Between Parentheses, the picture that emerges from the interviews and the biography is a Bolaño that draws from different sources than contemporary Anglo-American literary fiction incubated in the university workshop. In place of Hemingway, Borges and Nicanor Parra; Carver is substituted by Breton; Denis Johnson usurped by Jacques Vaché and Witold Gombrowicz.
In Latin American fiction, he had a similar effect, shifting the terms on which authors would be understood.
Richard Marshall interviews Ofra Magidor in 3:AM Magazine:
3:AM: You say it’s important for linguistics, computer science – how so?
OM: In the case of linguistics, it is fairly obvious why category mistakes are important: one of the central tasks of linguistics is explaining why some sentences are fine and others are infelicitous. In fact, category mistakes are a particularly interesting case, because a plausible argument can be made for explaining their oddness in terms of each of syntax, semantics, and pragmatics – so this is a good phenomenon to explore for anyone who is interested in the distinction between these three realms of language. This is probably why in the late 1960s category mistakes played a key role in one of the central disputes in the foundations of linguistics – that between interpretative semanticists (who claimed that syntax is autonomous of semantics) and generative semanticists (who rejected the sharp divide between these two realms).
I should also note there was a period in the 1960s when there was quite a lot of discussion of category mistakes happening in parallel in linguistics and in philosophy, but there was practically no interaction at all between the two fields on this topic (they even used different terms – in linguistics authors usually refer to category mistakes as ‘selectional violations’). One thing I tried to do in the book was to bring together these two parallel debates. I’d like to think that these days there is much more co-operation between linguists and philosophers of language so this kind of divide is less likely to happen.
Moving to computer science: one straightforward way in which category mistakes are relevant is because of the field of computational linguistics. Suppose for example that you have an automatic translator which is given the sentence ‘John hit the ball’. If the translator looks up the word ‘ball’ in a dictionary, it will encounter (at least) two meanings: a spherical object that is used in games, and a formal gathering for dancing. It is obvious that the most natural interpretation of the sentence uses the former meaning, and one way to see that is to note that if ‘ball’ were interpreted in the ‘dance’ sense, the sentence would be a category mistake. So being able to recognize category mistakes can help the automatic translator reach the correct interpretation.
But there is also a more general way in which the topic is relevant to computer science: computer programs use variables of various types which are assigned values – and it is very common to encounter cases where the value is of the wrong type for the variable. So there is an issue about how the program is going to deal with this kind of type mismatch which is in some ways parallel to the question of how natural languages deal with category mistakes.
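The parallel with type mismatches can be made concrete with a small sketch. This is my own illustration, not an example from the interview; the field names and coercion policy are hypothetical. It shows the two broad options a program has when a value is of the wrong type for a variable: reject it outright, or reinterpret it where a sensible coercion exists – loosely parallel to treating a category mistake as meaningless versus reinterpreting it.

```python
def set_age(record, value):
    """Assign `value` to the integer-typed 'age' field of `record`."""
    if isinstance(value, int):
        record["age"] = value          # well-typed: accept as-is
    elif isinstance(value, str) and value.isdigit():
        record["age"] = int(value)     # mismatched but coercible: reinterpret
    else:
        # mismatched and uninterpretable: reject
        raise TypeError(f"age must be an int, got {type(value).__name__}")

person = {"name": "John", "age": 0}
set_age(person, 42)        # accepted
set_age(person, "7")       # coerced
print(person["age"])       # 7
```

Statically typed languages make the rejection happen at compile time; dynamically typed ones defer it to run time, but the underlying question – what to do with a value of the wrong category – is the same.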
Tuesday, October 21, 2014
The worst is yet to come, especially when we take into account the social and economic impact of the epidemic, which has so far hit only a small number of patients (by contrast, the combined death toll of Aids, tuberculosis and malaria, the ‘big three’ infectious pathogens, was six million a year as recently as 2000). Trade and commerce in West Africa have already been gravely affected. And Ebola has reached the heart of the Liberian government, which is led by the first woman to win a presidential election in an African democracy. There were rumours that President Ellen Johnson Sirleaf was not attending the UN meeting because she was busy dealing with the crisis, or because she faced political instability at home. But we knew that one of her staff had fallen ill with Ebola. A few days ago, we heard that another of our Liberian hosts, a senior health official, had placed herself in 21-day quarantine. Although she is without symptoms, her chief aide died of Ebola on 25 September. Such developments, along with the rapid pace and often spectacular features of the illness, have led to a level of fear and stigma which seems even greater than that normally caused by pandemic disease.
But the fact is that weak health systems, not unprecedented virulence or a previously unknown mode of transmission, are to blame for Ebola’s rapid spread. Weak health systems are also to blame for the high case-fatality rates in the current pandemic, which is caused by the Zaire strain of the virus.
I doubt that any other interview of the last ten years was more dramatic, more interesting as a clear statement of two positions or, in a sense, more absurdly grotesque than H.G. Wells’s interview with Stalin.
They met in Moscow on July 23 of last year and talked through an interpreter for nearly three hours. Wells gives a one-sided story in the last chapter of his “Experiment in Autobiography.” The official text of the interview can now be had in a pamphlet issued by International Publishers for two cents. A longer pamphlet, costing fifty cents in this country, was published in London by The New Statesman and Nation. It contains both the interview and an exchange of letters in which Bernard Shaw is keener and wittier than Wells or J.M. Keynes. There is, unfortunately, no letter from Stalin. We know what Wells thinks about him; it would be instructive to hear what Stalin thinks about Wells.
The drama of their meeting lay in the contrast between two systems of thought. Stalin, with full authority, was speaking for communism, for the living heritage of Marx and Engels and Lenin. Wells is not an official figure and was speaking for himself; but he spoke with the voice of Anglo-American liberalism.
The show eases, somewhat, the famous difficulty of telling a Picasso from a Braque in the woodshedding period of 1909-12, which is termed Analytic Cubism. A wall text—a welcome one among far too many that are prolix, making for an installation that is like a walk-through textbook—points out Braque’s tendency toward ruddy luminosity and Picasso’s toward dramatic shadow. Still, the works speak a single visual language of clustered forms that advance and recede in bumps and hollows, with shaded planes, often bodiless contours, and stuttering fragments of representation. It’s said that they rendered objects from different viewpoints simultaneously, but seeing the works that way is beyond me. You don’t take in an Analytic Cubist picture as a whole. Rather, you survey it, as with an aerial view of some terrain that you must then explore on foot.
Oddly, for a style that crowds the picture plane, spatial illusion is crucial to Cubism. You know that you’re on the right track when, to your eye, the “little cube” elements start to pop in and out, as if in low relief. There’s a vicarious tactility to the experience. What the elements represent matters far less than where they are, relative to one another. To see how this works, it helps to take note of an endemic formal problem of Cubist painting: what to do in the corners, where the third dimension can’t be sustained.
Peter Conrad in The Guardian:
Revolutions usually leave ancient institutions tottering, societies shaken, the streets awash with blood. But what Walter Isaacson calls the “digital revolution” has kept its promise to liberate mankind. Enrichment for the few has been balanced by empowerment for the rest of us, and we can all – as the enraptured Isaacson says – enjoy a “sublime user experience” when we turn on our computers. Wikipedia gives us access to a global mind; on social media we can chat with friends we may never meet and who might not actually exist; blogs “democratise public discourse” by giving a voice to those who were once condemned to mute anonymity. Has heaven really come down to our wired-up, interconnected Earth?
What Isaacson sees as an eruption of communal creativity began with two boldly irreligious experiments: an attempt to manufacture life scientifically, followed by a scheme for a machine that could think. After Mary Shelley’s Frankenstein stitched together his monster, Byron’s bluestocking daughter Ada Lovelace devised an “analytical engine” that could numerically replicate the “changes of mutual relationship” that occurred in God’s creation. Unlike Shelley’s mad scientist, Lovelace stopped short of challenging the official creator: her apparatus had “no pretension to originate anything”. A century later, political necessity quashed this pious dread. The computing pioneers of the 1930s, as Isaacson points out, served military objectives. At MIT, Vannevar Bush’s differential analyser churned out artillery firing tables, and at Bletchley Park, after the war began, an all-electronic computer called the Colossus deciphered German codes. Later, the US air force and navy gobbled up all available microchips, which were used for guiding warheads aimed at targets in Russia or Cuba; only when the price of the chips dropped could they be used to power consumer products, not just weapons.