ONE HUNDRED AND EIGHTY ONE

From Aharon Appelfeld:

INTERVIEWER
The German spoken by your parents, you said later, was similar to the German of Franz Kafka.

APPELFELD
Yes, Franz Kafka—of all the writers, Franz Kafka. When I read him, he was immediately familiar to me.

INTERVIEWER
So you had a secular upbringing but with some knowledge of religion from your grandparents?

APPELFELD
Yes, I was very close to my maternal grandparents. My grandfather taught me a lot. To give you an example, he used to get up in the morning and pray, but before praying he would open the windows. He said to me, There should not be a barrier between us and God. If the windows are closed and the shutters are closed you cannot speak directly to God. This was something I will not forget. I’ll give you another example. He used to touch every object with great care. I am not just speaking about books. Hebrew books he used to kiss before opening and after closing the book, but he had care for everything—glasses and bottles, for instance.

INTERVIEWER
Why?

APPELFELD
Because they have something of the holy.

INTERVIEWER
Of the holy?

APPELFELD
Yes. You know, God is everywhere. He is in the human heart. He is in the plants. He is in the animals. Everywhere. You have to be very careful when you speak to human beings because the man who is standing in front of you has something divine in himself. Trees, they have something divine in them. Animals of course. And even objects, they have something of the divine.

ONE HUNDRED AND EIGHTY

Léopold Lambert, from The Speech of Things.

“A ruin is not, however, a proof as such. Only a narrative that would integrate the ruin as the object of its plot could transform it into one. The degree of consistency of this narrative is consequently proportional to the degree of truth that it will reach. In other words, we should avoid thinking of justice as the place where “true truth” is established, but rather as the forum—the same etymology as forensic—where “public truth” is debated. Similarly, the law should be seen less as the embodiment of a perfectly ethical set of rules than as the product of historical dominations crystallized into legislative norms. The public truth is therefore public insofar as it represents the ideological domination at work at the time this truth is constructed. In this way Forensis newly orients the designer’s expertise.

Part of this expertise is built on the ability to use specific tools that inform the discipline. In this regard, many passages of the book introduce a reconstruction of the witness architecture/object as a digital model. This constitutes a chronological inversion of the traditional architectural method, which first envisions a given building or object through this technique of modeling, then organizes its construction or production. Two particularly probing instances of such an inversion can be found in the book.”

ONE HUNDRED AND SEVENTY SEVEN

Sogyal Rinpoche, in The Tibetan Book of Living and Dying.

Looking in will require of us great subtlety and great courage—nothing less than a complete shift in our attitude to life and to the mind. We are so addicted to looking outside ourselves that we have lost access to our inner being almost completely. We are terrified to look inward, because our culture has given us no idea of what we will find. We may even think that if we do we will be in danger of madness. This is one of the last and most resourceful ploys of ego to prevent us discovering our real nature.
So we make our lives so hectic that we eliminate the slightest risk of looking into ourselves. Even the idea of meditation can scare people. When they hear the words “egoless” or “emptiness,” they think experiencing those states will be like being thrown out of the door of a spaceship to float forever in a dark, chilling void. Nothing could be further from the truth. But in a world dedicated to distraction, silence and stillness terrify us; we protect ourselves from them with noise and frantic busyness. Looking into the nature of our mind is the last thing we would dare to do.

Sometimes I think we don’t want to ask any real questions about who we are, for fear of discovering there is some other reality than this one. What would this discovery make of how we have lived? How would our friends and colleagues react to what we now know? What would we do with the new knowledge? With knowledge comes responsibility. Sometimes even when the cell door is flung open, the prisoner chooses not to escape.

ONE HUNDRED AND SEVENTY

Frank Gehry keeps it real:

“Let me tell you one thing,” he replied. “In this world we are living in, 98% of everything that is built and designed today is pure shit. There’s no sense of design, no respect for humanity or for anything else. They are damn buildings and that’s it.

“Once in a while, however, a group of people do something special. Very few, but God, leave us alone. We are dedicated to our work. I don’t ask for work … I work with clients who respect the art of architecture. Therefore, please don’t ask questions as stupid as that one.”

ONE HUNDRED AND SIXTY NINE

Kevin Kelly on three breakthroughs in artificial intelligence.

In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists.

In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr. Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.

What we want instead of intelligence is artificial smartness. Unlike general intelligence, smartness is focused, measurable, specific. It also can think in ways completely different from human cognition. […]

Nonhuman intelligence is not a bug, it’s a feature. The chief virtue of AIs will be their alien intelligence. An AI will think about food differently than any chef, allowing us to think about food differently. Or to think about manufacturing materials differently. Or clothes. Or financial derivatives. Or any branch of science and art. The alienness of artificial intelligence will become more valuable to us than its speed or power.

As it does, it will help us better understand what we mean by intelligence in the first place. In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! or chess. But once AI did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.

But we haven’t just been redefining what we mean by AI—we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

ONE HUNDRED AND SIXTY EIGHT

About a year and a half ago I had a voice injury that left me basically unable to speak and on steroid medication for about two weeks. Friends joked about being like the Little Mermaid, but it was actually a brutally challenging, alienating, and awkward experience. Earlier this week, I found myself with a sore throat again, and instead of being careful I just kept talking and singing (and occasionally yelling “woo!” in the appropriate social contexts). I woke up this morning feeling really scared when I realized I was only able to whisper, again.

This morning I also found out that Georgia Webb is writing a comic series about her experience with voice loss and long-term recovery called DUMB. It looks really beautiful. Preorder here.


ONE HUNDRED AND SIXTY SEVEN

Last night I couldn’t sleep because I was thinking about writing a paper on morality and robots. Are you allowed to write about Asimov in law school?

Powell’s radio voice was tense in Donovan’s ear: “Now, look, let’s start with the three fundamental Rules of Robotics — the three rules that are built most deeply into a robot’s positronic brain.” In the darkness, his gloved fingers ticked off each point.
“We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.”
“Right!”
“Two,” continued Powell, “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
“Right!”
“And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
“Right! Now where are we?”
“Exactly at the explanation. The conflict between the various rules is ironed out by the different positronic potentials in the brain. We’ll say that a robot is walking into danger and knows it. The automatic potential that Rule 3 sets up turns him back. But suppose you order him to walk into that danger. In that case, Rule 2 sets up a counterpotential higher than the previous one and the robot follows orders at the risk of existence.”
“Well, I know that. What about it?”
“Let’s take Speedy’s case. Speedy is one of the latest models, extremely specialized, and as expensive as a battleship. It’s not a thing to be lightly destroyed.”
“So?”
“So Rule 3 has been strengthened — that was specifically mentioned, by the way, in the advance notices on the SPD models — so that his allergy to danger is unusually high. At the same time, when you sent him out after the selenium, you gave him his order casually and without special emphasis, so that the Rule 2 potential set-up was rather weak. Now, hold on; I’m just stating facts.”
“All right, go ahead. I think I get it.”
“You see how it works, don’t you? There’s some sort of danger centering at the selenium pool. It increases as he approaches, and at a certain distance from it the Rule 3 potential, unusually high to start with, exactly balances the Rule 2 potential, unusually low to start with.”
Donovan rose to his feet in excitement. “And it strikes an equilibrium. I see. Rule 3 drives him back and Rule 2 drives him forward–”
“So he follows a circle around the selenium pool, staying on the locus of all points of potential equilibrium. And unless we do something about it, he’ll stay on that circle forever, giving us the good old runaround.” Then, more thoughtfully: “And that, by the way, is what makes him drunk. At potential equilibrium, half the positronic paths of his brain are out of kilter. I’m not a robot specialist, but that seems obvious. Probably he’s lost control of just those parts of his voluntary mechanism that a human drunk has. Ve-e-ery pretty.”
“But what’s the danger? If we knew what he was running from–”
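
Powell’s diagnosis is concrete enough to sketch in a few lines of code. Below is a minimal toy model in Python, not Asimov’s design and nothing like real robotics: I assume the strengthened Rule 3 repulsion decays with distance from the selenium pool, that the casually issued Rule 2 order exerts a weak constant pull, and every constant, name, and functional form here is invented purely for illustration.

import math

# Toy model of the Rule 2 / Rule 3 conflict in the excerpt above. All
# constants and functional forms are made up; Asimov gives no equations.
RULE2_PULL = 1.0      # weak potential from the casually given order
RULE3_GAIN = 25.0     # strengthened self-preservation response
DANGER_SCALE = 5.0    # distance over which the danger is felt

def rule2_potential(r):
    # Pull toward the selenium pool: a casual order, so weak and constant.
    return RULE2_PULL

def rule3_potential(r):
    # Push away from the danger, assumed to decay with distance r.
    return RULE3_GAIN * math.exp(-r / DANGER_SCALE)

def equilibrium_radius(lo=0.01, hi=100.0, tol=1e-6):
    # Bisect for the radius where the two potentials balance exactly:
    # the circle Speedy ends up walking.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if rule3_potential(mid) > rule2_potential(mid):
            lo = mid  # repulsion still wins, so equilibrium lies farther out
        else:
            hi = mid
    return (lo + hi) / 2.0

print(f"Speedy circles at r = {equilibrium_radius():.2f}")  # ~16.09 here

With these made-up numbers the balance falls at r = 5·ln(25) ≈ 16.09, and strengthening Rule 3 or weakening the order pushes the circle outward, which is exactly Powell’s account of the runaround.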

ONE HUNDRED AND SIXTY SIX

Jill Filipovic on the “trigger warning.”

“But generalized trigger warnings aren’t so much about helping people with PTSD as they are about a certain kind of performative feminism: they’re a low-stakes way to use the right language to identify yourself as conscious of social justice issues. Even better is demanding a trigger warning – that identifies you as even more aware, even more feminist, even more solicitous than the person who failed to adequately provide such a warning.”