Saturday, July 23, 2016
What my evening with Milo told me about Twitter’s biggest troll, the death of reason, and the crucible of A-list con-men that is the Republican National Convention
Laurie Penny in Welcome to the Screaming Room:
This is a story about how trolls took the wheel of the clown car of modern politics. It’s a story about the insider traders of the attention economy. It’s a story about fear and loathing and Donald Trump and you and me. It’s not a story about Milo Yiannopoulos, the professional alt-right provocateur who was just banned from Twitter permanently for sending racist abuse to actor Leslie Jones.
But it does start with Milo. So I should probably explain how we know each other and how, on a hot, weird night in Cleveland, I came to be riding in the backseat of his swank black trollmobile to the gayest neo-fascist rally at the RNC.
Esme Cribb in TPM Livewire:
New media writer Clay Shirky took to Twitter Friday afternoon to dismiss white liberals' response to Donald Trump as ineffective and self-indulgent – and to rally them to defeat Trump.
"Believe this: Trump could win," Shirky tweeted. "We can help stop him, but that means giving up on a lot of comfortable illusions."
The physicist Asimina Arvanitaki is thinking up ways to search gravitational wave data for evidence of dark matter particles orbiting black holes
Joshua Sokol in Quanta:
When physicists announced in February that they had detected gravitational waves firsthand, the foundations of physics scarcely rattled. The signal exactly matched the expectations physicists had arrived at after a century of tinkering with Einstein’s theory of general relativity. “There is a question: Can you do fundamental physics with it? Can you do things beyond the standard model with it?” said Savas Dimopoulos, a theoretical physicist at Stanford University. “And most people think the answer to that is no.”
Asimina Arvanitaki is not one of those people. A theoretical physicist at Ontario’s Perimeter Institute for Theoretical Physics, Arvanitaki has been dreaming up ways to use black holes to explore nature’s fundamental particles and forces since 2010, when she published a paper with Dimopoulos, her mentor from graduate school, and others. Together, they sketched out a “string axiverse,” a pantheon of as yet undiscovered, weakly interacting particles. Axions such as these have long been a favored candidate to explain dark matter and other mysteries.
In the intervening years, Arvanitaki and her colleagues have developed the idea through successive papers. But February’s announcement marked a turning point, where it all started to seem possible to test these ideas. Studying gravitational waves from the newfound population of merging black holes would allow physicists to search for those axions, since the axions would bind to black holes in what Arvanitaki describes as a “black hole atom.”
Kate Douglas in Evonomics:
Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias – our tendency to copy successful or prestigious individuals – influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.
We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders, whose strategies vary according to what they expect to be most profitable at any one time. “The economy should be considered as a complex adaptive system in which the agents constantly react to, influence and are influenced by the other individuals in the economy,” says Kirman.
This is where biologists might help. Some researchers are used to exploring the nature and functions of complex interactions between networks of individuals as part of their attempts to understand swarms of locusts, termite colonies or entire ecosystems. Their work has provided insights into how information spreads within groups and how that influences consensus decision-making, says Iain Couzin from the Max Planck Institute for Ornithology in Konstanz, Germany – insights that could potentially improve our understanding of financial markets.
Take the popular notion of the “wisdom of the crowd” – the belief that large groups of people can make smart decisions even when poorly informed, because individual errors of judgement based on imperfect information tend to cancel out. In orthodox economics, the wisdom of the crowd helps to determine the prices of assets and ensure that markets function efficiently. “This is often misplaced,” says Couzin, who studies collective behaviour in animals from locusts to fish and baboons.
By creating a computer model based on how these animals make consensus decisions, Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions – and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.
That’s because the individual decisions that make up the consensus are based on two types of environmental cue: those to which the entire group are exposed – known as high-correlation cues – and those that only some individuals see, or low-correlation cues. Couzin found that in larger groups, the information known by all members drowns out that which only a few individuals noticed.
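The trade-off Couzin describes can be illustrated with a toy simulation (my own sketch, not the model from the actual study). Here a handful of individuals see accurate private, low-correlation cues, while everyone else relies on a weakly informative shared, high-correlation cue; all the parameter values are invented for illustration:

```python
import random

def trial(group_size, n_informed=3, p_shared=0.55, p_private=0.9):
    """One consensus decision about a binary truth (+1 or -1)."""
    truth = random.choice([1, -1])
    # High-correlation cue: one shared signal the whole group sees.
    shared = truth if random.random() < p_shared else -truth
    votes = []
    for i in range(group_size):
        if i < n_informed:
            # Low-correlation cue: a private, more accurate signal
            # seen by only a few individuals.
            cue = truth if random.random() < p_private else -truth
        else:
            cue = shared
        votes.append(cue)
    consensus = 1 if sum(votes) > 0 else -1
    return consensus == truth

def accuracy(group_size, trials=20000):
    return sum(trial(group_size) for _ in range(trials)) / trials

for n in (5, 25, 101):
    print(n, round(accuracy(n), 3))
```

In small groups the few well-informed individuals can form a voting majority, so the consensus often tracks the accurate private cues; in large groups the shared cue's followers always outnumber them, and accuracy collapses to that of the shared signal alone, which is the "drowning out" effect described above.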
Rachel Aviv in The New Yorker:
Martha Nussbaum was preparing to give a lecture at Trinity College, Dublin, in April, 1992, when she learned that her mother was dying in a hospital in Philadelphia. She couldn’t get a flight until the next day. That evening, Nussbaum, one of the foremost philosophers in America, gave her scheduled lecture, on the nature of emotions. “I thought, It’s inhuman—I shouldn’t be able to do this,” she said later. Then she thought, Well, of course I should do this. I mean, here I am. Why should I not do it? The audience is there, and they want to have the lecture.
…Nussbaum is drawn to the idea that creative urgency—and the commitment to be good—derives from the awareness that we harbor aggression toward the people we love. A sixty-nine-year-old professor of law and philosophy at the University of Chicago (with appointments in classics, political science, Southern Asian studies, and the divinity school), Nussbaum has published twenty-four books and five hundred and nine papers and received fifty-seven honorary degrees. In 2014, she became the second woman to give the John Locke Lectures, at Oxford, the most eminent lecture series in philosophy. Last year, she received the Inamori Ethics Prize, an award for ethical leaders who improve the condition of mankind. A few weeks ago, she won five hundred thousand dollars as the recipient of the Kyoto Prize, the most prestigious award offered in fields not eligible for a Nobel, joining a small group of philosophers that includes Karl Popper and Jürgen Habermas. Honors and prizes remind her of potato chips; she enjoys them but is wary of becoming sated, like one of Aristotle’s “dumb grazing animals.” Her conception of a good life requires striving for a difficult goal, and, if she notices herself feeling too satisfied, she begins to feel discontent. Nussbaum is monumentally confident, intellectually and physically. She is beautiful, in a taut, flinty way, and carries herself like a queen. Her voice is high-pitched and dramatic, and she often seems delighted by the performance of being herself. Her work, which draws on her training in classics but also on anthropology, psychoanalysis, sociology, and a number of other fields, searches for the conditions for eudaimonia, a Greek word that describes a complete and flourishing life. At a time of insecurity for the humanities, Nussbaum’s work champions—and embodies—the reach of the humanistic endeavor. 
Nancy Sherman, a moral philosopher at Georgetown, told me, “Martha changed the face of philosophy by using literary skills to describe the very minutiae of a lived experience.”
Because our lives move in straight lines but our perceptions do not, we are forever trying to squeeze the latter’s unruliness into the former’s rigor. This, perhaps, explains why memoirs so often have the clean story arcs, senses of closure, and thematic consistencies that our lives never, ever have. Memoirs are lies; autobiographies are lies with footnotes. Somewhere in those footnotes, though, in those interstices clarifying and digressing from the main tales, lie glimmers of the real.
In The Child Poet, Homero Aridjis gives us such gleaming footnotes and green shoots of offhand mystery that we’re reminded that it’s not necessarily bad to be told lies, so long as the teller realizes that he is indeed lying. As Albert Camus said, fictions are lies that tell the truth. Through Aridjis’s memoir, the Mexican writer eschews the straight line and the tidy summation, opting instead for dark flashes and dream logic. The tale he tells of his childhood and adolescence isn’t, in the end, a tale at all but rather a series of vignettes. Some are lush with physical detail, while others are spare. Some vignettes are told at a remove even though it’s clear that Aridjis was present for the events. Others are visceral, immediate snapshots, even though they are hearsay; the memoirist captures the aura of events for which he wasn’t there.
But then other moments feel like a bit of both, in that they are conveyed with such tactile fervor that it’s easy to forget—and maybe he wants you to forget—that he couldn’t possibly remember the event being described, though he was undoubtedly present.
In a central scene in David Means’s debut novel, a dead Vietnam veteran delivers a powerful stream of consciousness directly into the mind of his former girlfriend. The horror of war, he explains with bitter resentment, cannot be “caught, bottled up, and taken back to the States”; there’s no fear that can be performed for the camera, no pain that can be massaged into a dispatch that will “make some kind of sense”. Yet after all they’ve gone through, Billy Thompson points out, the dead do not live to tell their own stories: “anything said by them is the pure fiction of the living and nothing more”.
Hystopia is the title of a novel within the novel, the full text of which is bookended by a series of editor’s and author’s notes, alongside fragmentary comments on the manuscript from various acquaintances of the purported author, Eugene Allen, an isolated 22-year-old veteran who has committed suicide. We’re warned from the outset that we may be at the mercy of an unreliable narrator: Allen suffered from a disease whose symptoms often include “delusional historical memories”.
Among the displays of assault rifles at the Mikhail Kalashnikov Museum in Izhevsk is a small lawnmower Kalashnikov designed to push about the grounds of his summer cottage. It is said that Mikhail Kalashnikov loved to care for his grass. Kalashnikov gave the lawnmower the same sensible qualities he gave the gun that bears his name. The lawnmower is light, simple, cheap to construct and easy to hold—something a child could use.
Kalashnikov didn’t regret inventing the Kalashnikov rifle. “I invented it for the protection of the Motherland,” he said. Still, he once mused that he would like to have been known as a man who helped farmers and gardeners. “I wanted to invent an engine that could run forever,” Kalashnikov once said. “I could have developed a new train, had I stayed in the railway.” But this was not to be.
Mikhail Kalashnikov was born in the rural locality of Kurya, the 17th child of peasants. When Kalashnikov was still a boy, his family’s property was confiscated and they were deported to Western Siberia. The farming was hard there, but harder was the shame of being exiled from the Soviet workers’ paradise. Kalashnikov was a sickly child and though his studies didn’t take him past secondary school, the future inventor dreamed of being a poet. After finishing the seventh grade, young Kalashnikov gathered his poetry books and worked as a technician on the Turkestan-Siberian railway, until he was conscripted into the Red Army in 1938. He worked with tanks and, in his spare time, tinkered with small arms. In 1941, Kalashnikov was wounded in battle. There, in the hospital, suffering from war wounds and shellshock, Kalashnikov had his vision. “I decided to build a gun of my own which could stand up to the Germans,” he would later say.
They say the chase ends where the earth is put together
by two halves, but no matter —because that is you
at thirty, perhaps forty:
corpus callosum of the brain,
two loaves opening and closing like a book.
Your arms spring out and lungs push and pull
rinsing the midnight air—
no matter, because you are there, chasing
the child of wonder and hope
through cities confined in smog.
You missile through firs, through mouths dusted
with mathematical chalk.
You follow the muddy-water spillways peppered with
Not the shadow that greets itself in the dark
but the utter collision of evaporating rain
leads you on.
Not the lightning’s sketch but the black puzzle of night,
as you appear and disappear among people,
chasing he who knows your name
but won’t tell.
by Victor Martinez
from Paper Dance: 55 Latino Poets
Persea Books, 1994
Jason Zengerle in The New York Times:
It’s an axiom of American politics that presidents become more popular once they are ex-presidents. Admittedly, George W. Bush had nowhere to go but up. With two months left in his second term, Bush’s approval rating sat at an abysmal 25 percent, just one point higher than Richard Nixon’s during Watergate. On the day of Barack Obama’s inauguration, when a Marine helicopter ferried the outgoing president away from the United States Capitol, many in the crowd serenaded him with chants of “Bye-bye Bush!” and “Go home to Texas!” Then the predictable happened. Bush’s absence from public life made Americans’ hearts for him grow fonder. Out of the spotlight, he busied himself painting oil portraits of family pets and world leaders; when he did dip his toe into political waters, it was for laudable and uncontroversial causes like fighting AIDS and malaria in Africa. His poll numbers began their inexorable climb. By June of last year, Bush’s favorability rating was 52 percent — higher than Obama’s at the time. His younger brother, Jeb, started his ill-fated 2016 presidential run with the declaration, “I am my own man.” But by the end of Jeb’s run, he was appearing alongside Dubya at rallies. Although Jeb’s fraternal Hail Mary ultimately fell short, his older brother’s re-emergence on the campaign trail only served to confirm that, fewer than eight years after being hounded from the White House, George W. Bush had become a less polarizing, fairly popular, at times even lovable figure.
Readers of the presidential historian Jean Edward Smith’s mammoth new biography, “Bush,” will surely be cured of this political amnesia. Smith — who has written biographies of Ulysses S. Grant, Franklin Delano Roosevelt and Dwight Eisenhower — is unsparing in his verdict on our 43rd president. “Rarely in the history of the United States has the nation been so ill-served as during the presidency of George W. Bush,” Smith writes in the first sentence of the preface. And then he gets harsh. In Smith’s clipped retelling of his subject’s early years, Bush was an unaccomplished, callow son of privilege who cashed in on his family’s connections for everything from his admission to Yale to his avoidance of Vietnam. Quoting Bush’s tautological explanation of his wasted youth — “When I was young and irresponsible, I behaved young and irresponsibly” — Smith concludes, “That pretty well says it all.” Being Texas governor “was scarcely a full-time job,” and his 2000 victory in the presidential race owed as much to the ineptness of his Democratic opponent, Al Gore — who “came across as wooden and self-important” — as it did to Bush’s “ease on the campaign trail.” None of this prepared Bush for the gravity of the responsibilities he would face as president, Smith argues, and time and again Bush failed to meet the challenges of the office.
Friday, July 22, 2016
From sin taxes to the Affordable Care Act’s individual mandate, from tax rebates for buying an electric car to performance-based school funding, governments extensively deploy material incentives to regulate citizens’ behaviors. The idea is straightforward: economic costs and benefits shape people’s choices, so changing those costs and benefits can change their actions.
This approach is intuitively appealing in our age, as it uses an enlightened mix of encouragement and coercion to advance public goals. But it works only if people act rationally in their own self-interest and respond accordingly to alterations in cost-benefit calculations. This may not seem much of an “if”; the notion that we all maximize our own good has been the basis of a long strain of economic thinking stretching back at least to Adam Smith, who asserted, “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” But is this really an accurate depiction of our behavior? And what is the significance of individual or collective political agency in a world of government-by-incentives?
Samuel Bowles’s new book The Moral Economy: Why Good Incentives Are No Substitute for Good Citizens provides a lucid and comprehensive answer to the first of these questions. Synthesizing findings from experimental and behavioral economics, psychology, and anthropology over the last two decades, Bowles convincingly argues that people do not act on the basis of amoral self-interest alone. Rather, we regularly proceed from “ethical and other-regarding motivations.” Furthermore, these “social preferences,” as Bowles calls them, can be crowded out and eventually eroded by policies that rely exclusively on manipulating material self-interest. He then moves these lessons from social science research into the realm of both policymaking and political theory, contending that the proper role of government is to construct a “policy paradigm of synergy between incentives and constraints, on the one hand, and ethical and other-regarding motivations, on the other.”
Bowles doesn’t explore the second question, which is about political agency. This is a striking omission because he emphasizes the need for public policy and governance to cultivate good citizens. Yet there is no place in his recommendations for the active citizen practicing democracy through political participation, protest, and social movements. The book is haunted by the absence of active responses to government-instituted incentives and policies.
Paul Krugman reviews Mervyn King's The End of Alchemy: Money, Banking, and the Future of the Global Economy, in the NYRB:
These days, of course, the pound sterling is much less widely used than the dollar, the euro, or even the yen or the yuan, and the Bank of England is correspondingly overshadowed in many ways by its much younger counterparts abroad. Yet the bank still punches above its weight in troubled times. In part that’s because London remains a great financial center. But it’s also thanks to the Bank of England’s intellectual adventurousness.
It was a big departure for the Federal Reserve—which has historically been run by bankers rather than academics—when Ben Bernanke, a distinguished monetary economist, was appointed as chairman in 2006. But Mervyn King, a former professor at the London School of Economics, was already running the Bank of England. And it was these two professors who guided the English-speaking world’s biggest economies through the recent financial crisis.
Now King, like Bernanke, has written a book inspired by his experiences. But it’s not at all the book one might have expected. It’s not a play-by-play of the crisis, or a tell-all, or a personal memoir. In fact, King not-so-subtly mocks the authors of such books, which “share the same invisible subtitle: ‘how I saved the world.’”
King’s book is, instead, devoted to “economic ideas.” It is rich in wide-ranging historical detail, with many stories I didn’t know—the desperate shortage of banknotes at the outbreak of World War I, the remarkable emergence of the “Swiss dinar” (old Iraqi notes printed from Swiss plates) in Kurdistan. But it is mainly an extended meditation on monetary theory and the methodology of economics.
And a fascinating meditation it is. As I’ll explain shortly, King takes sides in a long-running dispute between mainstream economic analysis and a more or less radical fringe that rejects the mainstream’s methods—and comes down on the side of the radical fringe. The policy implications of his methodological radicalism aren’t as clear or, I’d argue, as persuasive as one might like, but he definitely challenges policy as well as research orthodoxy.
You don’t have to agree with everything King says—and I don’t—to be impressed by his willingness to let his freak flag fly. His assertion that we haven’t done nearly enough to head off the next financial crisis will, I think, receive wide assent; I don’t know anyone who thinks, for example, that the US financial reforms enacted in 2010 were sufficient. But his assertion that the whole intellectual frame we’ve been using is more or less irreparably flawed is a brave position that should produce a lot of soul-searching among both economists and policy officials.
Cedric Johnson in Jacobin:
Former New York City mayor Rudolph Giuliani, the supreme booster of “broken windows” policing, was quick to attack Black Lives Matter activists, claiming that BLM is “inherently racist because, number one, it divides us.” He also chastised activists for allegedly ignoring violence within black communities and suggested that they were responsible for civilian-police conflicts because their criticism “puts a target on the backs of” police officers.
Other conservatives have echoed claims that the Obama administration and Black Lives Matter protests have created dangerous conditions for police officers. They are wrong. Policing is not the most hazardous occupation in the United States. In fact, it is not even in the top ten.
And contrary to the claim that the Obama administration — no unwavering supporter of anti–police brutality efforts — has enabled anti-police sentiment, violence against police officers has decreased during Obama’s tenure, especially when compared to the George W. Bush years. Over 70 percent of the violence against law enforcement that has occurred so far this year has been carried out by white men.
Finally, anti–police brutality struggles should not be reduced to the “movement for black lives.” Surely the hashtag and slogan, and the network of activists who align with BLM, have been instrumental in drawing national and international attention to the issue of police violence, but on the ground, protests are comprised of all manner of people representing victims’ families, traditional civil rights organizations, neighborhood and community groups, labor unions, civil liberties advocates, youth and student organizations, various left political tendencies, and solitary actors. And organizing against police brutality has a much longer lineage, one that certainly predates the birth of BLM’s millennial spokespersons.
In the hands of conservatives, Black Lives Matter has become an easy foil for dismissing a longer-standing set of struggles against police violence and mass incarceration.
Here’s a fun game: Tap your finger against a surface, but before you do, predict the sound it will make. Did you get it right? If not, you better practice, because pretty soon a robot’s going to play this game better than you can.
On the march to the robot apocalypse, the ability to perform such a quirky task may not seem especially portentous, but new research out of MIT demonstrates why such a capacity lays the foundation for far more sophisticated actions.
Andrew Owens, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory, and his collaborators presented the research at a conference in Las Vegas last month. There they explained how they’d engineered a computer algorithm that can “watch” silent video of a drumstick striking different kinds of objects and create a sound that, in many cases, closely matches the one generated by the actual event.
“The computer has to know a lot about the physical world,” says Owens. “It has to know what material you’re hitting, that a cushion is different than a rock or grass. It also has to know something about the action you’re taking. If you’re [striking the surface] hard, it should be a loud sound; if soft, a softer sound.”
Indifference, however, appears to be the norm. Most of us live in “carninormative” societies where meat eating is so normal that no matter how many qualms we might have about it, it just doesn’t feel wrong to most of us. This is most evident in the mismatch between the almost universal reflective disapproval of inhumane intensive farming and the unreflective buying choices of most consumers. Christopher Belshaw, in his contribution to The Moral Complexities of Eating Meat, is surely too optimistic when he claims it is unnecessary to say anything about factory farming because “there is little point either in defending the indefensible or in attacking a practice that almost every reader here will already condemn”. I am constantly amazed to see well-educated, thoughtful people order meat at restaurants without any questions about its provenance.
So we ought to be thinking more seriously about animal ethics. Yet it is often the case that the more rigorous we try to be, the more inadequate our conceptual tools look. The crudest tool of all is the utilitarian hedonic yardstick, which equates the good with whatever decreases suffering and increases happiness or pleasure. Utilitarianism starts with the undeniable premiss that the well-being of sentient creatures matters, yet ends with the incredible conclusion that all that matters is maximizing total well-being. Even if that were true, it’s difficult enough applying the principle to humans, since there are important qualitative differences between kinds of positive and negative human feelings. When we try to apply the principle across species, these problems multiply. How can you compare Hammy the hamster playing on his wheel and Miles Davis playing his trumpet?
Walking through the show, one can see how ultimately unsuited Martin was to be a hard-core Abstract Expressionist; the movement was too noisy, and what did she have to do with bop, the Beats, that wall of sound and bodies that wanted to shout the squares down in favor of “kicks”? Martin was interested not in discord but in harmony. While Jackson Pollock said he was nature, Martin strove to represent how nature made her feel or should make us feel—humble, free. Nature was to her what it was to Ralph Waldo Emerson’s “transparent eye” in his transcendentalist masterpiece, “Nature” (1836)—a space unrivaled in its ability to inspire and transform.
Emerson’s idealism—“Nature is made to conspire with spirit to emancipate us”—was not unlike Martin’s. Often, in her lovely, empathic writing, she tries to communicate what being an artist must mean if one is going to make real work: becoming a conduit of the beautiful, that which cannot be explained. In her 1989 piece, “Beauty Is the Mystery of Life,” Martin wrote:
When a beautiful rose dies beauty does not die because it is not really in the rose. Beauty is an awareness in the mind. It is a mental and emotional response that we make. We respond to life as though it were perfect. When we go into a forest we do not see the fallen rotting trees. We are inspired by a multitude of uprising trees…. The goal of life is happiness and to respond to life as though it were perfect is the way to happiness. It is also the way to positive art work.
Still, Martin would not be able to create “positive” artwork for years to come; the journey was long.
Azar Nafisi in The Guardian:
Do you remember the fox? Not just any fox, this one is a sage; the one that reveals the truth to the Little Prince, who reveals it to the pilot, who reveals it to us, the readers. As he says goodbye to his friend, the fox tells the Little Prince, “Here is my secret. It’s quite simple: One sees clearly only with the heart. Anything essential is invisible to the eyes.” When as a child I first heard my father read me The Little Prince in a sunny room in Tehran, I was not aware that the story, along with tales from Shahnameh: The Persian Book of Kings, Pinocchio, the work of Mulla Nasrudin, the Alice stories, The Wizard of Oz and The Ugly Duckling, among others, would become one of the main pillars of my “republic of imagination”. My father’s democratic way of introducing me to these stories shaped my attitude towards works of imagination as universal spaces, transcending the boundaries of geography, language, ethnicity, religion, gender, race, nationality and class. I knew that although this fox and his prince were products of a Frenchman’s mind, and although the book was written in a language foreign to me, at a time before I was born and in a place I had never seen, by virtue of hearing and later reading it, that story would also become my story, that Little Prince and fox belonged as much to me as Scheherazade and her 1,001 nights belonged to the French, American, British, Turkish, German and all other readers who would in reading cherish them and “tame” them, the way the Prince learned to tame the fox.
This is how I, a little girl from Iran, came to know and love France, through a little prince and a fox. I had met foxes before; in fact, my father introduced me to the animal in a fable by Jean de La Fontaine. In this story, like most stories, the fox is sly and clever, cheating a simple crow of his meal. Later, my father translated La Fontaine’s fables complete with their beautiful illustrations which he, an amateur painter, drew himself, copying from the originals. In those and most other illustrations the fox looked pretty, with a gorgeous bushy tail and wide eyes. The Little Prince’s fox was not pretty; its bushy tail, more like an upright broom, was not beautiful and its eyes were so narrow they could barely be seen. Yet this animal forever changed my attitude towards the fox – I began to see it in a different light. From this perspective, the fox’s slyness was not due to malice, but to the need to survive. Although I felt sorry for the chickens (which didn’t prevent me from eating them), the fox hunted them so that he could stay alive, unlike some human beings who not only kill and eat the chickens but hunt foxes for entertainment and sport. Gradually, I came to understand why those wide eyes, always brimming with anxiety and fear, seemed to be on the lookout for some invisible but very real menace.
Letter from My Ancestors
We wouldn’t write this,
wouldn’t even think of it. We are working
people without time on our hands. In the old country,
we milk cows or deliver the mail or leave,
scattering to South Africa, Connecticut, Missouri,
and finally, California for the Gold Rush –
Aaron and Lena run the Yosemite campground, general
store, a section of the stagecoach line. Morris comes
later, after the earthquake, finds two irons
and a board in the rubble of San Francisco.
Plenty of prostitutes need their dresses pressed, enough
to earn him the cash to open a haberdashery and marry
Sadie – we all have stories, yes, but we’re not thinking
stories. We have work to do, and a dozen children. They’ll
go on to pound nails and write up deals, not musings.
We document transactions. Our diaries record
temperatures, landmarks, symptoms. We
do not write our dreams. We place another order,
make the next delivery, save the next
dollar, give another generation – you,
maybe – the luxury of time
to write about us.
by Krista Benjamin
from The Best American Poetry 2006
Scribner Poetry, NY
Sara Chodosh in Scientific American:
Think about the first time you met your college roommate. You were probably nervous, talking a little too loudly and laughing a little too heartily. What else does that memory bring to mind? The lunch you shared later? The dorm mates you met that night? Memories beget memories, and as soon as you think of one, you think of more. Now neuroscientists are starting to figure out why. When two events happen in short succession, they feel somehow linked to each other. It turns out that apparent link has a physical manifestation in our brains, as researchers from the Hospital for Sick Children in Toronto (SickKids), the University of Toronto and Stanford University describe in this week’s Science. “Intuitively we know that there’s a structure to our memory,” says neuroscientist Paul Frankland, affiliated with both the University of Toronto and SickKids. “These experiments are starting to scratch the surface of how memories are linked in the brain.”
In your brain, and in the brains of lab mice, recollections are physically represented as collections of neurons with strengthened connections to one another. These clusters of connected cells are known as engrams, or memory traces. When a mouse receives a light shock to the foot in a particular cage, an engram forms to encode the memory of that event. Once that memory forms, the set of neurons that make up the engram are more likely to fire. Furthermore, more excitable neurons—that is, brain cells that activate easily—are more likely to be recruited into an engram, so if you increase the excitability of particular neurons, you can preferentially include them in a new engram. The question was, did that principle apply to two memories that happen close together in time? Neurons in a newly formed memory trace are subsequently more excitable than neighboring brain cells for a transient period of time. It follows, then, that a memory formed soon after the first might be encoded in an overlapping population of neurons, which is exactly what Frankland and study co-lead author Sheena Josselyn found.
Thursday, July 21, 2016
Julian Baggini in The Guardian:
“The way these cream cakes flaunt themselves,” says saucy Carry On star Barbara Windsor, glaring disapprovingly at a chocolate eclair bursting with whipped cream, “it’s enough to lead a girl astray.” Her frown turns into a giggle. “Given half a chance,” she adds before tucking in gleefully.
Nothing captures the peculiarly moralistic British attitude to food better than this 15-second advert from the 1970s. And if poetry is the art of capturing whole worlds in few words, then the immortal slogan “naughty but nice” is greater proof of its author’s artistry than the Booker prize that author, Salman Rushdie, would go on to win.
For as long as we can remember, the British have associated delicious food with depraved indulgence. Anything that tastes good has got to be bad for your body, soul or both. The marketing department of Magnum knew this when it called its 2002 limited edition range the Seven Deadly Sins. Nothing makes a product more enticing than its being naughty, or even better, wicked.
Ed Yong in The Atlantic:
In 1995, if you had told Toby Spribille that he’d eventually overthrow a scientific idea that’s been the stuff of textbooks for 150 years, he would have laughed at you. Back then, his life seemed constrained to a very different path. He was raised in a Montana trailer park, and home-schooled by what he now describes as a “fundamentalist cult.” At a young age, he fell in love with science, but had no way of feeding that love. He longed to break away from his roots and get a proper education.
At 19, he got a job at a local forestry service. Within a few years, he had earned enough to leave home. His meager savings and non-existent grades meant that no American university would take him, so Spribille looked to Europe.
Thanks to his family background, he could speak German, and he had heard that many universities there charged no tuition fees. His missing qualifications were still a problem, but one that the University of Göttingen decided to overlook. “They said that under exceptional circumstances, they could enroll a few people every year without transcripts,” says Spribille. “That was the bottleneck of my life.”
Will Davies at the Political Economy Research Center:
Given that Brexit was an event imagined and delivered from within the Conservative Party, one of the most important analyses of it is Matthew d’Ancona’s examination of how the idea shifted from the party’s margins to its mainstream over the post-Thatcher era. Two things in particular stand out in his account.
Firstly, the political plausibility of Brexit rose as a direct response to Tony Blair’s dogmatic assumption that European integration was a historical destiny, one that encompassed the UK. No doubt a figure such as Blair would have discovered a messianic agenda under any historical circumstances. But given that he gained power specifically in the mid-90s, he was one palpable victim of the fin de siècle ideology (stereotyped by Francis Fukuyama’s ‘end of history’ thesis, but also present in Anthony Giddens’ ‘Third Way’) that the world was programmed to converge around a single political system.
Neo-conservative faith in violent ‘democratisation’ was Blair’s worst indulgence on this front, but a view of European unification (and expansion) as inevitable was responsible for inciting the Tory reaction within Westminster. Europe could have been viewed as a particular historical path, adopted in view of the particular awfulness of the European 20th century. Instead, in a Hegelian fashion, the idea of Europe became entangled with the idea of ‘globalisation’, and the conservative reaction was to refuse both.
Secondly, Tory Brexiteers view the EU as an anti-market project, which blocks economic freedom. This is also weirdly ahistorical.