Fundamentals: Afflict the Comfortable, Comfort the Afflicted

Been a while since I’ve done one of these, huh? If you’re not familiar, Fundamentals is a series where I discuss what I regard as fundamental ideas which underpin what I talk about on this site. These are the basic assumptions, How I Approach the World 101, written primarily so that I can point to them and say “go here” instead of having to periodically reiterate them. 

There’s an old maxim in journalism, which occasionally shows up in other fields: “Afflict the comfortable, comfort the afflicted.” Its meaning, in journalism at least, is fairly straightforward: avoid running stories in ways that make things worse for people in pain (for example, don’t publish the names of crime victims unless they want you to), and actively seek out stories that help people in trouble (for example, covering the negative impact of oppressive policies) or make life more difficult for people in positions of power (for example, uncovering a political scandal).

But I regard this as more than just a standard of journalistic ethics. It is a fundamental moral principle that underpins a lot of what I do, and so it’s worth unpacking a bit.

That “comfort the afflicted” is an important moral principle should go without saying. When people need help, you offer to help. (Helping, not saving, of course, but I’ve covered that elsewhere.) But why is it necessary to afflict the comfortable?

The answer is simple: communal responsibility. We are each of us responsible for bettering our own communities and cultures, which necessarily means subjecting them to scrutiny and change. This necessarily means that the members of our community who are comfortable with things as they are must be shaken up–if we are not disturbing them, then we are not improving our communities.

And this has wide-reaching implications. The common adage to “punch up, not kick down” is just a restatement of this principle. It’s why “reverse racism,” “men’s rights,” and “class war against the rich” are prima facie nonsensical, because whiteness, manhood, and wealth are excessively comfortable, safe positions in our society, and so puncturing their bubble of comfort is a necessary exercise in communal responsibility. That changes in our society are perceived as afflictions by the comfortable serves as evidence that these changes are a good idea–or, to put it another way, the comments thread on any post about feminism demonstrates the necessity of feminism.

It is not enough simply to help people who need help. Fundamental social change is required, and achieving it will necessarily mean making people uncomfortable.

Fundamentals: Apatheism

Let’s get this out of the way before we start: beliefs are a poor predictor of behavior. You can try to compare cherry-picked lists of wrongs committed by adherents of this religion or that philosophy all you like; actual scientific study of the question shows that our beliefs are predictors of the arguments we use to justify our behavior, not the behavior itself.

With that out of the way, it follows that there are really only two arguments for believing in anything: because it’s true, or because it’s not known to be false and believing it will make your life better.

So we can toss some religious beliefs out immediately because we know they’re not true. The world is quite a bit older than 6,000 years, and humans showed up way later than day six. Prayer and magic can alter the emotional states of the people performing them, or of those who know they’re being performed, but they have no nontrivial material effects.

But then we get into things like afterlives and spiritual realms and incorporeal universe-filling ethereal entities, none of which can be shown to be false. And around here is when your typical Internet atheist will bring up Russell’s teapot, which I’m going to assume you’re all familiar with.

So, here’s the point where I piss off my fellow atheists: Russell’s teapot is bullshit.

Here’s why: the teapot is a material object that exists within the material universe. What we could call the Teapot Proposition is a proposition about the existence, properties, and behaviors of objects within the material universe. It is a positive, scientific claim, and therefore rightly subject to the rules of the scientific endeavor. To be more precise, it is a statement with material consequences; that is, there are measurable differences between a universe where it’s true and one where it’s not. Quite difficult differences to measure, true, but nonetheless a universe with Russell’s teapot is not the same universe as one without Russell’s teapot. It is a claim at least theoretically subject to rigorous scientific testing.

Most religious claims aren’t.

Certainly some are. “The entire universe went from nothing to essentially its modern state over a six-day period 6,000 years ago” is a claim with material consequences, and thus one that we know is false. The claim that it’s possible to curse a person is one with material consequences, and thus we know it’s false unless the person both believes in curses and believes they’ve been cursed, at which point its effects are consistent with the placebo effect.

But the belief that after a person dies, their consciousness continues in some form outside the material universe? That’s definitionally not a statement with material consequences, thanks to “outside the material universe.” It is thus not subject to the rules of science, because–and this is important–such a statement cannot be false. Why? Because a false statement is one that fails to accurately describe the universe, and the statement in question isn’t talking about the universe. It’s neither true nor false, which means it can never be known to be false, which means the only reason to believe or not believe it is whether it makes your life better.

Your life. Personally, as an individual.

Typically, and tellingly, at this point some Internet atheist complains that this means any claim has to be accepted as long as it’s non-falsifiable. Which isn’t actually true, if you read closely, but also kind of makes my point about Internet atheists: that the real motivation for a lot of them is not a desire for knowledge or to avoid falsehood, but to win arguments and feel superior to religious people.

But also: yes. Accept all religious claims as being useful beliefs for the people who hold them, and don’t worry about it as long as they’re not being assholes. This is the philosophical position of apatheism, which I define as “I don’t care if there are gods, and I don’t care what you believe about it.” I’m interested in it, certainly, because I am interested in the beliefs and behaviors of human beings, but I’ve got zero interest in doing anything about it.

Fundamentals: Everything Ends

The one absolute certainty, the one thing we know, is death.

So of course we spend most of our lives trying to run or hide from it, because certainty is terrifying. We pretend that some aspect of the self survives death (which of course we all know instinctively isn’t true, which is why we mourn death more intensely than any other departure or separation), we pretend that we ourselves are immortal, or that something eternal exists–a perfect eternal state of bliss somewhere in the past or future or sideways from the everyday world of change and time and death.

And we do this knowing it’s false, because of the essential tragedy of the human condition, the need for unconditional love. We need to believe that love–some kind of love, be it familial or fraternal or romantic–is forever, but of course it never is; if nothing else it ends with the death of the lover. So we convince ourselves that there’s a way out, either a way to shed the need to be loved or a way to find something eternal. We lie to ourselves that there might be things without beginning or end, that there might be such a thing as “perfect.” All the while watching people die, endeavors fail, institutions fall, civilizations collapse.

But this doesn’t have to be a bad thing. Yes, everything we build must someday crumble. Yes, the day will come when the last person who knew you personally dies, and with them all direct memory of you vanishes from this Earth. Yes, even if you become a Shakespeare or an Alexander or a Siddhartha, sooner or later you will end up an Ozymandias.

But it also means that every corrupt and restraining authority will someday fall, that every unfair rule will someday cease to be enforced, that every bully’s strength will someday fail. It doesn’t matter what revolution you desire; wait long enough and the object of your rebellion will fall, if not in your lifetime then at some future point.

Nothing lasts forever, which means everything is always changing. Surely some of that change has to be for the better, at least some of the time, right?

Fundamentals: Where Morality Comes From

I’m a firm believer that the key to understanding some aspect of human behavior is to understand the motivations behind it. If you know why people do what they do, then understanding what they do becomes trivial.

Further, I firmly believe that you cannot prescribe until you first describe–that until you have done your best to understand what something is, you have no business arguing about what it should be. So it follows that, if I am going to talk about morality and ethics–and given that I regard morality, politics, and aesthetics as inextricably intertwined, I have talked and will continue to talk about them–it behooves me to first try to understand what motivates them.

So why do people want to be moral? The glib answer, of course, is the same reason anyone ever wants anything: they think it will feel better than the alternative. But what feelings, specifically, are at work with morality? I think it comes down, ultimately, to four emotions:

  • Shame: Being seen by others as immoral feels bad; it is intimately associated with rejection and negative judgment.
  • Guilt: Seeing oneself as immoral likewise feels bad; it is associated with failure and self-doubt.
  • Pride: Seeing oneself as moral (and being seen by others as moral) feels good, because it’s associated with acceptance, positive judgment, achievement, and self-esteem. (Note: Tentatively I place the sense of fairness here–that is, we wish to be treated fairly and to treat others fairly because of its impact on our sense of pride. It’s possible, however, to regard fairness as a separate, fifth emotion underlying morality.)
  • Empathy: Not exactly an emotion, but definitely emotional in nature and a strong motivator behind altruism.

Ultimately, moral behavior is a matter of avoiding shame and guilt, pursuing pride, and acting with empathy. Moral crises come about when it’s not possible to do all of these at once–for example, when avoiding social disapproval means failing one’s own standards and vice versa.

Of course, looked at this way, it becomes immediately obvious why no logically consistent moral code–regardless of the metaethics behind it–can really work: emotional states aren’t logically consistent. And we can’t actually reject this emotional basis, because without it there’s no reason to be moral. Nor can any one of these emotions be ignored: Shame is necessary because it’s how we learn to feel guilt. Guilt is necessary because it’s the moral equivalent of burning one’s hand on a hot stove. Pride is necessary because without it, the only reason to be moral rather than amoral is the fear of getting caught. And empathy is necessary because without it morality becomes an irrelevant abstraction, unconnected with the material wellbeing of real people in the real world. Together, shame and empathy prevent morality from becoming solipsistic or narcissistic; guilt and pride prevent it from becoming conformist.

So why bother with thinking about morality at all? Why not just go with kneejerk emotional responses to every situation? I think Daniel Dennett has a good answer here, and I recommend the relevant chapters in his Freedom Evolves on the topic. (And all the rest of it, for that matter.) But basically, thinking about moral questions and coming up with rules of thumb serves a few purposes.

The first reason is what Dennett describes by analogy to the story of Odysseus and the Sirens: Having principles is a way of metaphorically tying ourselves to the mast, so that when we face a situation “in the moment” we are better prepared to resist temptation. In other words, principles are about recognizing that we are imperfect actors and sometimes make decisions in the moment that, once we have time to think about them, we regret. Thinking about moral questions and adopting rules of thumb or broad principles is a kind of self-programming, training ourselves to feel extra guilt when we break them and extra pride when we follow them, thus increasing the likelihood of resisting temptation in the moment.

Another reason is communication. Part of morality is accepting responsibility for one’s community, and shame is a critical tool for policing that community. Shared principles are a key way for a community to define for itself how it will police its members by clarifying what kinds of behaviors are appropriate for other members of the community to shame. Of course members of the community may disagree, resulting in conflict, but conflict is an inevitable (and frequently desirable) part of being in a community.

To be clear, however: principles, lists of rules, and all other attempts to codify morality are models, which is to say they are necessarily not the thing modeled. Morality is not adherence to a set of principles, but rather a complex and irreducible social and emotional state, which is why excessive adherence to principles always leads to advocating obviously immoral behavior. Ethics, in other words, is rightly a descriptive, not prescriptive, branch of philosophy: journalistic ethics is a description of how good journalists behave, not a set of commandments handed down by the journalism gods from on high. Studying such models is obviously very useful in becoming a good journalist, but is not in itself sufficient–like any rule set, the point is to understand them well enough to know when to break them. Journalistic ethics are, of course, just an example–the same goes for any other kind of ethics.

Of course, if morality is emotional in nature, it follows that just as there is no “correct” way to feel about something, there is no “correct” morality. That said, just because there’s no correct way to feel doesn’t mean there are no incorrect ways; it’s simply factually untrue to say that there isn’t a broad consensus about certain behaviors in certain scenarios. Baby-eating, for example, is almost universally regarded as repulsive, and so we can fairly safely say that a model of morality which prescribes eating babies as a normal practice has failed to accurately depict its subject.

More to the point, it doesn’t actually matter that there’s no correct model: if my morality–which here includes both the ways in which I model morality through principles and reason and the underlying emotional reality–demands that I oppose someone else’s actions or attempts to make their model of morality dominant within the community, then it demands it. Which of course is why people give logically inconsistent answers to ethical dilemmas: the curious responses to the trolley problem are of course completely understandable once you recognize that while passive and active choices aren’t logically different, they feel different.

In the end, as with aesthetics, any prescriptive model will necessarily be imperfect. But that’s the human condition, isn’t it? Making do with imperfect materials, striving ever to replace our old mistakes with new ones.

Fundamentals: Stop Suspending Disbelief

At Anime USA last week, I mentioned in one of my panels–might have been Analyzing Anime 101, might have been Postmodern Anime, I don’t remember which and haven’t gone through the video yet–that “the concept of suspension of disbelief needs to die in a fire.” This, of course, led to some people coming up to ask me about it after the panel (because for some reason when I ask for questions at the end of a panel, nobody raises a hand, but the minute I start packing up, I’m swarmed with people wanting to ask questions).

Here is the problem with suspension of disbelief: it makes you less literate. I mean, it’s also fundamentally impossible, but even attempting it makes you less literate, because what suspending disbelief means is trying to forget that a story isn’t real. Which means, in turn, giving up the ability to recognize it as a deliberately constructed artifice, created by actual human hands for an audience of actual people, within the context of a culture.

That is a huge thing to ignore. It means losing all ability to examine technique, to think about the difference between portrayal and endorsement, to question a work’s positionality. By pretending that a work is a window to another world, you erase the distinction between author and historian. Everything that happens in a story is a choice by its storyteller; there is no otherworld where events proceed independently, and of which the storyteller is an objective, uninvolved observer dutifully recording the deeds of others.

Consider, since it is the main subject of this blog, a cartoon. To suspend disbelief is to pretend that its characters are real people within a real world that obeys consistent rules, which is anathema to cartoons like, say, Ren and Stimpy or Adventure Time, which depend on constantly twisting and warping settings, situations, and characters to surprise and entertain. To suspend disbelief is to ignore the animation itself, to refuse to examine how art styles, distortions of characters’ bodies, framing, and camera angles shape the story and convey the priorities and interests of its creators.

This is not to say that we should never consider the diegetic; that’s as absurd as only considering it, as “suspension of disbelief” demands. It is possible to talk about a character, to discuss their motivations and experiences, to have an emotional reaction to them, without pretending that they’re real. People have emotional reactions to the imaginary all the time, from anxiety about imagined scenarios for an upcoming task to sexual fantasies to happy daydreams. I can say, “Batman is driven by survivor guilt over his parents’ death,” or “Twilight Sparkle is prone to anxious overreaction,” and it remains true, even though the characters in question do not exist. Indeed, it is because they are characters, and thus far less complex and self-contradictory than real people, that I can make such straightforward claims about their behavior with little expectation of contradiction.

There is thus nothing at all to be gained from the suspension of disbelief. It does not add anything to the appreciation or exploration of narrative, and cuts off access to much. It is yet another example of how badly the emphasis in general education on basic literacy gets in the way of full literacy.

Fundamentals: Criticism and Social Justice

The world in which we live is deeply, horrifyingly unfair.

Some of that unfairness is inescapable, a consequence of the terrifying randomness and even more terrifying determinism of the universe. Our friends and loved ones are as likely to be hit by buses as our enemies. Babies who haven’t even figured out that other people exist yet, let alone tried to hurt them, get diseases that cause horrible lifelong suffering. Market forces tend to amplify initial small disparities in wealth. Trashy reality shows are more profitable than well-written, well-acted dramas, even though hardly anyone actually watches them.

But a lot of that unfairness was invented by humans, and is entirely under human control. This kind of unfairness can be divided into two categories, which is an entire article on its own, but we’re interested today in only one of them, systemic injustice: all of the ways in which the systems and power relations that comprise our society are structurally unfair, even in the absence of deliberate action by any one individual. In other words, for this particular topic we’re less interested in unfairness that arises from people cheating, and more interested in unfairness that arises from the rules themselves.

That’s where social justice comes in. The idea is simple, its execution hard: create a society in which as much systemic injustice as possible is eliminated or corrected for. More fundamentally, social justice is simply the idea that fixing systemic injustice wherever possible is a major moral imperative. That one is not personally responsible for any particular unfair act is irrelevant; systemic injustice is a problem of a community, rather than individuals, and therefore a matter of communal, rather than personal, responsibility.

Which brings us to the role of criticism in all this, and in particular a specific family of critical schools including the feminist, queer, and postcolonial schools, among others. The common thread is a particular function of critical analysis, namely the identification of ways in which the text expresses, reflects, encourages, or perpetuates systemic injustices. From a social justice perspective, this is an extremely important activity. Texts, after all, are a major component of a culture, and a community’s culture is the primary means by which it influences the behavior of individuals within the community. In other words, it is by means of culture that systemic injustices perpetuate themselves, and therefore it is in the realm of culture that they must be met, identified, and combatted.

The primary function of criticism in general, if such a thing exists, could be said to be thinking about culture and engaging with it more mindfully. The function of social justice criticism, then, is to engage with culture while being mindful of systemic injustices. Note that this is not necessarily the same thing as criticizing a particular culture; particularly when dealing with works that originate outside one’s own community, it’s important not to project one’s own community’s issues onto that other community. That said, the interpretation of a text is as much an expression of culture as the creation of the text, so it is entirely legitimate to look at how a text from one culture might read in one’s own culture, as part of a critique of one’s own culture.

Ultimately, the goal of this is not to say, for example, “This movie is racist and therefore bad.” (Though, of course, there are movies which are bad and racist, including ones where the racism is what makes them bad. But racism doesn’t automatically make a work bad; it makes it racist.) The goal is not to attack individual works or creators–though sometimes that is necessary, because one of the ways in which systemic injustice functions is by making it easy to ignore individual acts of injustice–but rather to, as a member of the community, participate in one’s communal responsibility to help identify and mitigate systemic unfairness.

The key point here is that social justice criticism is emphatically not about attacking another, because it’s not about the Other at all. It’s about confronting the darkness in the extended Self, one’s own communities and cultures, and exposing it to light so that it can be dealt with. It’s about embracing one’s own culpability in communal responsibility for the state of the culture, and choosing to be mindful of that responsibility as a first step toward performing it.

Fundamentals: Community, Culture, and Responsibility

“Fundamentals” is an irregular series in which I write about certain basic ideas underlying my work on this site.

No human being exists in isolation. Each and every one of us is a member of multiple communities, some joined by choice (e.g., fandoms), others thrust upon us as a consequence of birth or upbringing (e.g., family, ethnicity), of other choices (e.g., coworkers), or of external circumstances and pressures; some are permanent, others temporary. And every community has a culture: collective rules and values, stories, material products, and so on. We are shaped by the cultures of the communities to which we belong, and they in turn emerge from the actions of each individual within the community. This does not deny individual choice, free will, or any of that; rather, it simply notes the plain fact that we are neither mindless drones nor completely autonomous actors unaffected by our environments and interactions with others. We are both individuals and members of communities, and it is equally a mistake to overemphasize either.

Which brings us to a rather critical point about responsibility. There is a tendency among some, I think, to assume that responsibility is exclusive and zero-sum–in other words, that there are a finite number of responsibility points for any given occurrence, and if I take them all then no one else gets any. On this view, if Bob does something bad, to suggest that Bob was influenced by the surrounding culture is to deny, at least in part, that Bob was responsible for his actions.

This is nonsense. Take it as given that an individual is totally responsible for their actions and the consequences thereof. Culture emerges from the aggregate actions of all members of a community, and therefore all members of a community are responsible for their actions that contribute to that culture. All members of the community are shaped by that culture, and therefore their actions are influenced by–in other words, partial consequences of–the culture, which is to say the aggregate actions of all members of the community.

Consider, then, Alice, who shares a community and culture with Bob. Alice’s actions help shape the culture of that community, and therefore also Bob’s actions. Thus, Alice is partially responsible for the actions of Bob.

If we are totally responsible for our own actions and the consequences thereof, in other words, it follows that we are also responsible for the cultures we create and the ways in which they shape our own and others’ actions. Personal responsibility necessarily implies cultural and communal responsibility.

Which, let’s be clear on some things before anybody accuses me of saying something I’m not:

  • This does not mean that anything anyone does is the responsibility of every community to which they belong. Rather, it is necessary to first show how a particular culture influenced the person’s actions, and only then is it possible to assign responsibility to the community.
  • This does not apply only to “bad” actions and influences. Culture can have lots of positive influences, in which case every member of the community has some responsibility for that, too.
  • As I already said, this does not negate personal responsibility, but follows logically from it. A person is still entirely responsible for their own actions; it is simply also the case that there is communal responsibility. Like most seeming contradictions, this only appears to be one because of an unstated assumption: that responsibility is zero-sum and exclusive. Reject that notion, and it is completely possible for two people to be completely responsible for the same event, let alone one person completely responsible and another partially responsible.
  • Personal and social responsibility are not qualitatively the same. Personal responsibility, generally, is much more direct and concentrated; social responsibility tends to be diffuse by its very nature, spread thinly across many people. There are, of course, exceptions: when a prominent community leader deliberately creates a culture of hatred and fear, for example, they carry a much larger and more concentrated portion of the responsibility for members of the community who lash out than the rank and file do, though again that does not negate the responsibility of the rest of the community for accepting and perpetuating the culture.