While the U.S.S.R. eventually won the space race in 1961 by sending Yuri Gagarin into orbit, the Americans stole the show again on July 16, 1969, when NASA launched a Saturn V rocket from the Kennedy Space Center in Merritt Island, Florida. Four days and nearly 240,000 miles later, the three-man crew of Michael Collins, Buzz Aldrin, and Neil Armstrong arrived at their destination. Collins piloted the command module Columbia as Aldrin and Armstrong descended toward the moon’s surface inside the lunar module named after the national bird of the United States: the Eagle.
Armstrong’s heart rate jumped from 77 bpm to 156 bpm as Aldrin called out the altitude readings: “750 feet, coming down at 23 degrees . . . 700 feet, 21 down . . . 400 feet, down at 9.” When they finally touched down, Armstrong quietly said, “Houston, Tranquility Base here. The Eagle has landed.”
The dusky seaside sparrow was still stuck back down on Earth. In fact, in that same year, one biologist observed that only thirty singing male dusky seaside sparrows remained on Merritt Island. The scientific community had been sounding the alarm about the disappearance of the dusky for years, but there was little concern shown beyond the small circle of ornithologists studying Florida’s Atlantic coast. The average sparrow is about as large as a human heart, though not nearly as important to the survival of actual humans. Perhaps the greatest thing the dusky seaside sparrow had working against it was that it was not as glorious or impressive as other species. It was no bald eagle. It was no heart. It was no moon.
From RM Vaughan’s interview with Paul Vermeersch about Self Defence for the Brave and Happy.
RM Vaughan: The book moves effortlessly between prophetic pronouncements and intimate, personal observations. Is it a goal of the book to conflate the two in order to make the reader more keenly aware that we live in prophetic times?
Paul Vermeersch: I think all times are equally prophetic and intimate. The lives of individuals unfold along with the cosmos. But the prophets only seem to get at half the picture, only the grand events. Perhaps one of the jobs of a poet is to be a prophet of the small things, too—to prophesy the taste of lobster, the pang of guilt, the fear of darkness. We can’t put small things on hold when big things happen. I think my poems encompass that spectrum: both the landscape and the figure within the landscape, both the star system and the escape pod within the star system.
From Henry Farrell’s “Philip K. Dick and the Fake Humans.”
Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).
In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore.
From Hariton Pushwagner’s Soft City.
From Alfred North Whitehead’s Science and the Modern World.
“Modern science has imposed on humanity the necessity for wandering. Its progressive thought and its progressive technology make the transition through time, from generation to generation, a true migration into uncharted seas of adventure. The very benefit of wandering is that it is dangerous and needs skill to avert evils. We must expect, therefore, that the future will disclose dangers. It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties.”
From Casey Weldon via fubiz.
From Douglas Coupland’s Escaping the superfuture.
Lately I’ve been experiencing a new temporal sensation that’s odd to articulate but that I do think is shared by most people. It’s this: until recently, the future was always something out there up ahead of us, something to anticipate or dread, but it was always away from the present.
But not any more. Somewhere in the past few years the present melted into the future. We’re now living inside the future 24/7 and this (weirdly electric and buzzy) sensation shows no sign of stopping — if anything, it grows ever more intense. Elsewhere I’ve labelled this experience “the extreme present” — or another label for this new realm might be “the superfuture”. In this superfuture I feel like I’m clamped into a temporal roller coaster and, at the crest of the first hill, I can see that my roller coaster actually runs off far into the horizon. Wait! How is this thing supposed to end?
It’s hard to accept that our new superfuture mind state is permanent and that it’s not going away — how could it? The devices that cause it aren’t going to go away. They’ll just get better and faster, and we’re going to embed ourselves in the superfuture ever more deeply.
It makes me wonder if the most important thing we could invent right now would be a technology that takes away our bottomless fear of missing out, our need to read the latest news update, our latest hook-up or our latest upgrade.
What kind of technology would that be?
Kevin Kelly on three breakthroughs in artificial intelligence.
In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.
In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.
What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. […]
Nonhuman intelligence is not a bug, it’s a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power.
As it does, it will help us better understand what we mean by intelligence in the first place. In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! or chess. But once AI did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.
But we haven’t just been redefining what we mean by AI—we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.