Entries from March 2021 ↓

The battleground of names

Note: In 2021 I’ll publish one blog post per week. Here’s entry 12 of 52.

The image shows simple computer code that asks the user to input their name and age, then stores this data as a name dollar sign variable and an age dollar sign variable, and finally prints the information back out.
Screenshot of a tutorial for QBasic, a Microsoft programming language from the early nineties
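
For anyone curious what that kind of snippet looks like, here’s a minimal sketch along the lines of what the screenshot shows; the exact prompts and wording below are my stand-ins rather than a transcription of the tutorial, though the dollar-sign string variables work the same way:

' Ask for a name and an age, store them in string variables, then echo them back
INPUT "What is your name"; name$
INPUT "How old are you"; age$
PRINT "Hello, "; name$; "! You are "; age$; " years old."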

In 2004, I began asking others to call me by my first name, Douglas. Before that, I’d been called a variant of my middle name (which I won’t share here for mundane privacy purposes). I wanted a fresh start, because I was just entering my first semester at a university, and also, I was annoyed by the paperwork hassles that seem to crop up everywhere when your legal name and the name you go by differ. For example, class rosters that don’t specify what you actually go by encourage teachers to address you by whatever appears on the roster. Since many teachers labor under the unfair requirement of very large class sizes, and thus face far too many students to reliably memorize each one’s preference, try as they might, you as a student can go through months and months of unpleasantly trying to correct a teacher about your name, something that’s supposed to intimately characterize you. The indignity of being called by the wrong name is even more profound for those whose name changes signal giant shifts in their personhood, such as those who switch names as part of gender or religious transitions. Me, I just thought a fresh start and an end to the paperwork hassles would be nice.

The image shows Jim Carrey as Truman in the movie. He's standing atop steps and before a door. He has his arms and hands spread wide, and he's looking up, smiling.
Image from The Truman Show

It’s strange how names characterize us, isn’t it? Consider the eponymous name of the protagonist in the 1998 movie The Truman Show: unlike those around him pretending to be his neighbors, Truman is a true man. But names, at least when initially given, actually characterize the parents/caregivers, their aspirations for the infant who’s receiving some particular name in the first place. A sense of this reality is frequently missing from fiction, when authors pick a name to symbolize or allude to something about a character, rather than about that character’s parents or environment (including economic class). In other fiction, such disparities between a character’s true self and their name are portrayed, especially if the story involves a name change. While authors spend lots of time thinking over the given names characters go by among their peers, I think surnames in fiction don’t receive much scrutiny, particularly in terms of migration. If an author is writing a story set in 2030 in Nebraska, and currently in 2021 nearly all people alive with surname X live only in France, should the author provide backstory for why someone with surname X is living in Nebraska just nine years into the future? Or are surnames freebies for authors and readers alike? As long as it sounds good and plausible enough, maybe no bulletproof backstory is required. You could reduce such realism problems to absurdity by requiring an author depicting a coffeepot in a story to know how it got there, tracing it all the way back to the specific particles emerging from the Big Bang. On the other hand, books too often expect readers to assume narrators are white and show WASP-y names as the norm, presenting anything else as exceptions in need of explanations.

Since 2004, there’s been a certain discomfort with my first name, both for many of those using it and for me. I’m regularly asked the same question when meeting people: “Do you prefer Doug or Douglas?” The question stumped me for a very long time. Whenever I looked within, I discovered I legitimately don’t have a preference. Either is cool with me! So I couldn’t advise the question-askers, who, as far as I could make out, wanted to be caring and accommodating. Just about every time I replied that I have no preference, the question-asker became frustrated. They said I should have a preference. But I didn’t. Maybe I hurt their feelings, as though they were going out of their way in offering to remember my preference, and my not having one stung like a rebuke, in some transactional world they exist in. Only this past month have I finally figured out something more about the question. I’ve been doing core strengthening in physical therapy to help with one of my legs (two surgeries on it in my life so far), and the physical therapist is extremely knowledgeable and competent. I like him, and I’m really grateful to have his excellent help. He told me he has a thing for trying to remember the best names to call people by. We were both a bit flummoxed by my lack of a preference between Doug and Douglas. I thought it over. I think the fact that I get along well with this superb physical therapist enabled me to see something more about the question and my lack of a preference.

Here’s the answer, what’s been the answer all along: Doug and Douglas are the same name — just at different diction levels. Doug is informal; Douglas is formal. Compare “What’s up, Doug?” with “Listen, Douglas, we need to have a talk.” They’re the same name in two different forms. I don’t want to micromanage which level of formality my interlocutor picks for any particular conversation. I trust the appropriate diction level can just emerge naturally, simply from both parties’ interactions and the environment at large. Because I realized all this just a few weeks ago, I haven’t had the opportunity to test it out in real life yet. But the next time someone asks me, Do you prefer Doug or Douglas?, I’m going to tell them one’s for informal, the other’s for formal, and that they can select between the two as they think proper. I wonder what will happen. As long as they don’t call me Doogie.

The image shows a black book cover, with the title More Than Human at top, and at bottom, the author's name plus "The provocative novel of six people who became--together--a new form of humanity" and in reference to the author, "whose work is increasingly being called a classic of its kind"
Original hardback cover of Sturgeon’s best-known novel

A common thread in the above — asking others to use my first name (revealingly, sometimes frenemies from the past still don’t, pointedly refusing to honor my request); trying to justify to readers a surname’s presence in a geographic location and time period; attempting to explain to strangers that the two forms of a single name are for different diction levels — is a sense of individuals having to legitimize their names, and perhaps themselves, to others. Names are usually social, bestowed upon us and by us as we pass life down through generations. In Ursula K. Le Guin’s 1974 novel The Dispossessed, each character on an anarchist moon has but one name, handed out by a central computer registry to keep things organized. In contrast to this socially-focused system, in Theodore Sturgeon’s 1953 novel More Than Human, there’s a gripping moment when the first character we encounter, a lonely outsider, finally names himself. Initially, “Men turned away from him, women would not look”; however, after roughly five years living and working with the Prodd farming family, he learns to speak, though “always he preferred not to.” Eventually the farmer Mr Prodd asks him for his name (get it? Prodd as in prodding him). Because he has come to trust Mr Prodd, he’s able to fulfill this request. He thinks that a name “is the single thing which is me and what I have done and been and learned.” Despite his growing connection with the Prodds, he picks the name Alone, which he can manage to pronounce merely as a single syllable, Lone. That seems very individualist, but he chooses a name only when someone else asks him for it, so it’s an event both personal and social. (The book later follows Lone gathering what Sturgeon calls a gestalt, kind of a chosen family, from other outlier outsiders.) Does a person living on a desert island like a castaway need a name at all? Might they forget their own name? Finally, look at the concept of true names in Le Guin’s Earthsea fiction. Characters and objects in that universe have two names, a common one that’s safely shareable, and a second, secret, true name that empowers them and gives others power over them if it’s discovered. In neither case, however, are the names chosen.

The image shows an Internet Relay Chat window, with the user having typed "Hello to all you good people!" with a smiley emoticon. To the right of the image is a vertical list of various handles for users, such as Jolo and herbie and sat.
Internet Relay Chat, what I was doing in the nineties

Online, as in certain types of radio communication, users choose handles, also known as pseudonyms or simply nyms. These lessen tendencies in conversation/debate toward the logical fallacies of personal attacks and arguments from authority, where interlocutors waste time saying “You only believe that because you are [insert identity attribute here]”, as in, because you’re tall/short/rich/poor/white/of color, etc. With nyms, individuals can choose personally meaningful ways to describe themselves, and the handles can become so meaningful that among those heavily involved in computers (or perhaps simply involved in online chatting), it’s common to go by the handles even in face-to-face conversation, rather than by legal names. Some users, in contrast, choose random characters (for example: ang) to identify themselves, not wanting to give their personal story away to strangers. And some change nyms frequently, rebooting their name over and over, trying to prevent others from assuming things based on what might have been past interactions with the person. When I play around with it, this aspect of computing (akin to writing under a pseudonym) can feel very liberating.

At top, the image has text saying "If I was the teacher, i'd give this kid an A." Below that, the image shows a schoolwork assignment, which reads: Defend your answer. Rather than follow the assignment (whatever it was, perhaps math or English Language Arts), the student has drawn a fort around the word answer, and drawn a soldier with a machine gun saying "Sarge, I don't know how much longer we can hold them!"
Must everything be so stressful?

It seems names should be a touching aspect of life, and fun to ponder, but they’re commonly just another battleground. Picking a name can feel empowering (because how could an unchosen name really represent/express who you are?), while keeping a name bestowed by others can offer connection linking the past, present, and future together. Maybe, like successful accounts of trauma that provide healing, names need to be personally meaningful and, at the same time, effective and connecting in social contexts. Really thinking names through (as opposed to dissociating from life, “it’s all a blur”), along with good relationships in which to experiment with names, seems very helpful for individuals trying to determine their own best path.

Creative Commons License

This blog post, The battleground of names, by Douglas Lucas, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (human-readable summary of license). The license is based on a work at this URL: https://douglaslucas.com/blog/2021/03/26/battleground-names/. You can view the full license (the legal code aka the legalese) here. For learning more about Creative Commons, I suggest this article and the Creative Commons Frequently Asked Questions. Seeking permissions beyond the scope of this license, or want to correspond with me about this post one on one? Please email me: dal@riseup.net.

How I addressed a trauma anniversary that psychiatrists weren’t curious about

Note: In 2021 I’ll publish at least one blog post per week. Here’s entry 11 of 52.

Image shows a small gray notebook. On its front, the notebook says "notes" and "Cambridge edition."
The journal I use for logging my day. Available at that bastion of high culture, Tarjay, at least here in Seattle.

I used to not believe in trauma anniversaries, the distress a person can experience when a calendar date lines up with a past violation of their well-being. From my perspective back then, steeped unawares in the default corporate values, trauma anniversaries seemed too fantastical: how could a person’s nervous system remember all that, and how could it be tipped off that the fateful date was approaching? More importantly, multiple well-paid psychiatrists for decades, their corner offices fancy with diplomas and oak desks, never mentioned trauma anniversaries to me a single time, and consistently portrayed the mania I sometimes experienced as a meaningless, causeless brain fart. But during every April and May for seven straight years, indeed usually on the very date of May 31, I’d experience severe, hospitalizing mania. Despite the timing being as dependable as the Old Faithful geyser, the psychiatrists displayed zero curiosity about it, whereas friends would sometimes ask natural questions (“Why do you think it happens then?”). Unanimously, the psychiatrists told me (not so forthrightly of course): Just take these tranquilizers (“medicine”), these dopamine antagonists, pay up, and you might be able to have some sort of meager life over there in the corner, if you’re lucky. They didn’t add: all the while, the psych pills shrink brains and tardive dyskinesia looms at your door.

The image shows a black-and-white page from an academic catalog. It's a full page photo of six old white men in garb that is religious or academic or both: black robes, large crosses on necklaces, and so on. They are walking in a line, most of them smiling.
Page from the UD catalog back then. Pie Iesu Domine, dona eis requiem. Also.

It wasn’t fun. The stigma has been perhaps worse than the mania. I’ll give two examples of hundreds. In 2000-2001, I attended the University of Dallas on a full scholarship to study philosophy and classics (Latin and ancient Greek). It was a small Catholic school, and I was an atheist fish in the wrong, small pond. U.D., as it was called for short, made it a selling point of their school that students would all take a trip to Rome together sophomore year, and I was really excited about it. After mania prevented me from participating in classes for roughly three weeks — this was two decades ago, before psychiatric diagnoses were so common that universities created more explicit policies for mental health emergencies — U.D. informed me I wasn’t going to Rome with everyone else. (Not long after, I dropped out.) Their decision made some sense: what the hell would you do practically with a student suffering manic psychosis, in the hotel, in the airport, etc.? In some cases, it makes sense to give a manic person a tiny bit of benzodiazepine, to help them sleep, and once they wake up, everyone together figure out what’s going on using a process like Open Dialogue; but, colleges weren’t and aren’t prepared to intervene that substantially (although you can imagine it someday, what with K-12s employing special staff to attend to some students’ medical needs, and now campuses outfitting themselves for the horrible idea of in-person classes during coronavirus). Undergraduates in their twenties, with private school backgrounds, haven’t lately been expected to be adults capable of handling themselves. The whole setup was paternalistic to begin with: the U.D. authorities were to watch out for our well-being in these scary foreign lands filled with terrorists or whatever. Bottom line, they looked at me and said No. Just as my K-12 considered kicking me out for the same reason (manic episodes), in a dramatic meeting with my family. The unfortunate “help” I was given for the whole dilemma, the answer from Texas in general was, go to psychiatrists, who will say there are no causes you can do anything about, and take your piece off our game board, get out of everyone else’s way. A very few years later, one of my best friends was going to Japan to teach English (and then went to India for six months); I was going in and out of psych hospitals. It was really discouraging, and I routinely used an imaginative, puffed-up, hypomanic grandiosity to sustain myself, to not think about (to dissociate from) my problems and keep writing music/words and pursuing all my other interests in rude opposition to “having a good work ethic” since I didn’t want to go along with seemingly everyone else’s philosophy of Don’t think too hard, don’t care too much, get a job any job.

Example number two. Here in Seattle, I went to a party for Clarion West Writers Workshop (which I completed in 2008), sometime between 2016 and 2019, honoring an author whose name I can’t remember (she was writing fiction about presidential assassinations, if anyone recalls…to be clear, that is people assassinating presidents, not presidents assassinating people). A random party guest worked at Navos, a greater Seattle mental health clinic, as a therapist or in some related occupation. I happened to be standing in the small group to whom she was talking, merely happenstance party conversation, people holding drinks and the like. She asked if anyone was familiar with her workplace, this entity called Navos. I said yes. She blinked and said, “Wait, you volunteer there?” And I said, “No, as a patient.” She then literally raised her nose in disgust and turned away from me. The other surrounding partygoers followed suit, showing disgust and turning away from me also. The look of disgust is a common expression made at someone slotted into a negative image role. Before the pandemic, once patients were called up the stairs from the waiting room at brick-and-mortar Navos, where the security guard watched them from his desk, the therapists would use key cards to let them through locked doors, under the rarely correct assumption that these medicalized humans might act out dangerously. It felt like being a zoo animal. A zoo animal in the social services, mind-twisting, smiley face version of a prison.

Reasons for admission to the West Virginia Hospital for the Insane, 1864-1889. (Source)

It’s taken several years, but I’ve made a deep study of the extensive decades of literature disputing the genetic theory of manic-depression, how the twin studies are used, the chemical imbalance theory, and other falsehoods; I’ve also participated in a Hearing Voices Network chapter and devoured multiple books, podcasts, and documentaries detailing the success stories of psychiatric survivors (the secret that people have made full recoveries from repeated bouts of psychosis and tapered off their drugs is slowly becoming more widely known). I’m still studying this material and related helpful information, much of it published in peer-reviewed scientific journals, not that practicing psychiatrists read those (they’re busy going on ski trips with the money, possibly bringing their manipulated patients along for sex, too). But for those who might be unfamiliar with this vast literature, let’s just take the chemical imbalance theory briefly, a widely advertised theory that mainstream psychiatrists have lately had to start backpedaling on. Millions upon millions of people in the United States today swallow psychopharmaceuticals daily, often antidepressants or sleeping pills; taking “meds” for the psych diagnoses considered less severe has become ordinary, a recommended way to survive the impossibilities of paid-work, while those with the harsher labels (schizophrenia, psychosis, etc.) are considered an abnormal, bad underclass. These millions and millions of people, whether with the “normal” labels of depression etc. or the more severe ones, are commonly told they “have” chemical imbalances. Which I suppose is like “having” a pet rock, only it’s invisible. The mystique of the doctor in the white coat can take over, preventing patients from asking obvious questions. How often do we hear, in place of evidence and logic, about a doctor, politician, or other idealized figure: I trust him; he’s a good guy? Yet we don’t need to feel an affinity with a prescriber; we need to ask the prescriber questions obvious to an impartial observer and verify what’s going on. Which chemical is imbalanced? How much of that chemical per microliter is too much? How much of that chemical per microliter is too little? What’s the safe range, per microliter, for that chemical, whichever one it might be? Who invented the chemical imbalance theory? When was it invented? Was it initially published in a scientific journal, and if so, what’s the citation for that article (so you can obtain a copy)? These very basic who what when where why and how questions are too often not asked, among other reasons because patients sometimes outright fear their doctors, their legal powers, and their way of snapping back at questions they dislike. The patients’ brains are being dramatically altered without enough questioning from the patients, as if psychopharmaceutical treatment is simply taking clocks to repair shops, to use sociologist Erving Goffman’s analogy in his 1961 book Asylums. With no time or motivation for curiosity, customers taking broken clocks to repair shops do not ask the repair-workers, Who invented clocks? Why do clocks need springs? The customers simply expect the gadgets to be fixed, then they pay the fee and bring the clocks home. People treat their own brains just like that.
The error is supposed to be from birth — but sorry, there are no blood tests to prove it (no answers to the microliters questions), and all the vaunted genetics has persisted at a research level for a very long time, scrutinizing without holism people crammed into pigeonholes, nothing definitive found — and you are to take the pills to remediate your inherent wrongness and then get back to the miserable paid-work for evil corporations and their ancillaries. Mental health suffering is increasing, understandably because humanity, in big picture terms, is seconds from self-caused extinction; watching humanity kill itself and many other species, psychiatrists do not have much to offer for explanation or success stories, but their industry does have criminal convictions at Nuremberg for enabling genocide, and see also the American Psychological Association’s more recent participation in CIA torture. Trusting these people to make dramatic alterations to your brain without asking questions isn’t a good idea. It isn’t mental health.

The image is a popular meme of Captain Picard from Star Trek: The Next Generation. It shows him in his captain's chair, hand on forehead, exasperated. The image has text at the bottom reading: So much fail.
Shy? Correct that chemical imbalance, too little alcohol, by drinking daily!

The chemical imbalance theory came about because scientists began noticing that when people were given certain pharmaceuticals for unrelated physical conditions, they would also act in different ways. If it was considered good for them to act in those new ways, then they must, the scientists reasoned, lack enough of the chemical supplied by the pharmaceutical, and therefore need to swallow some of it regularly to act right. In other words, if you aren’t doing such-and-such, but this other thing makes you do such-and-such when you swallow it, you must have a deficiency of that other thing. This is very bad reasoning. It’s like this: imagine a shy person. The shy person is at a bar, they’re nervous about their clothes and hair, and they don’t know what to say to the other patrons, to the bartender, etc. But when at the bar we give them alcohol, they suddenly start talking more! Therefore they must need alcohol supplementation, a bit of booze each day, to correct their alcohol imbalance and act with the proper gregariousness. This specious reasoning — X makes you do Y, so not doing Y must be caused by a lack of X — fits multiple types of causal logical fallacies. Imagine a psychiatrist in a critical reasoning class! You’re not lying on the floor currently; however, when I punch you in the face, you fall to the ground. So, if you need to lie down, the obvious solution to your postural imbalance is to have me regularly punch you in the face a little bit each day for ongoing maintenance against your being-punched deficiency!

The trauma anniversary I was experiencing was combined with dissociation. Dissociation is tuning out in the face of overwhelming emotion. For instance, families in the hospital room of a dying family member will too often largely, or almost completely, ignore the dying person, and stare at their phones to distract themselves and prevent themselves from experiencing the intense emotions and meanings regarding the impending death. After all, why say goodbye to grandpa when you can scroll instead? Anyway, I did many things to help overcome dissociation to some extent, mainly noticing when I was doing it and then slowly testing out feeling and expressing the emotions instead, which, by the way, has physical analogues: feeling and expressing emotion isn’t just rearranging your internal world (like most of psychoanalysis is), but action-y, doing things outwardly, like cursing and kicking a trash can across the room if you’re really, really upset. This took me several years to get comfortable with; I still have a lot more to go. Further, the mania was dissociative in itself: escaping from overwhelm into delusional, grandiose fantasy. Sometimes it seems many people do not even know when they’re overwhelmed, since psychological education is insufficient or nonexistent, not to mention people understandably have blocks against considering what these terrifying topics mean for them. Even though for years and years, April and May meant mania for me, especially May 31, the calendar date of May 31 would roll around and I wouldn’t even know it was May 31. You would think this most consequential date in my life, one that sent me to in-patient lock-up over and over, would register on my radar as it neared. But it was too overwhelming, so out of habit I didn’t even realize when it was coming. There are two types of PTSD (I didn’t learn this from any psychiatrist): the popularly known one where you can’t stop thinking about the trauma, and the other type, the one there’s less awareness about (mine), where you don’t think about the trauma at all. Not looking for what was causing the trauma anniversary was as habitual as putting one foot in front of the other while walking: something I was later able to start working on a little at a time (baby steps), but for decades I was more comfortable just staying on autopilot, not thinking about it. Even if I tried to think about it, I could never pin down any specific trauma that happened to me during any long-ago April or May. My mind wouldn’t surface images or facts about any long-ago events in connection with the April/May period. Plus, it somehow didn’t seem “scientific” that something might have happened during those months in my past, a specific example of corporate propaganda (corporate portrayals of science) obscuring a person’s life from him. To top it all off, psychiatrists repeatedly found nothing about any of this worth talking about, same as the instance when an orderly physically assaulted me in a hospital, knocking me to the floor violently just for making a sarcastic comment, and multiple psychiatrists (attending and out-patient alike) said not a damn thing when I mentioned it. In fact, they used what educators call extinguishing. This is the classroom management technique where you ignore a student’s minor misbehavior, not reinforcing it, hoping it’ll disappear on its own, as it usually does (if indeed it is misbehavior; why should students be compelled to sit in cramped desks all day and penalized for “misbehavior” if they refuse?).
Whenever I brought these reasonable topics up to psychiatrists, they used extinguishing. They’d just be silent. And then they’d change the subject to something comfortably medical in vibe, like dosages or the side/adverse effect of hives I got from neuroleptic. The psychiatrists felt far more comfortable talking about little checkbox algorithms for physical symptoms. Like eliminative materialists in academic philosophy departments insisting that minds don’t even exist, the psychiatrists kept diligently away from topics such as dissociation, which are actually decently understood by trauma experts. But again, practicing clinicians don’t read that material; that’s why they bully you instead if you ask too many questions, a trick they probably pick up from grand rounds questioning in medical school among other sources. In Fort Worth around 2002 or so, I once saw an orthopod with a sign in his waiting room that said something to the effect of, Any material patients talk about from the Internet will be ignored. Before the widespread adoption of the Internet and especially social media, medical professionals could easily tell each other at conferences how much their patients loved them (perhaps mistaking fear for respect or love), but now I think they’re slowly seeing the pitchforks approaching their insular world. Though some of them still talk blithely on youtube’d recordings of their conventions, making fun of their patients (accustomed to what they are doing, the psychiatrists might consider it merely analyzing their patients for their colleagues’ benefit), maybe unaware that those outside their myopic cult hear them and disagree. If you show your psychiatrist recent articles like this one from earlier this year — “What I have learnt from helping thousands of people taper off antidepressants & other psychotropic medications” by Adele Framer/Altostrata, the founder of SurvivingAntidepressants.org, published in the peer-reviewed Therapeutic Advances in Psychopharmacology journal — it’s not like the psychiatrist is going to say Thank you, and I think we all know that. Maybe it’s time for people to stop identifying so dogmatically with psychiatric labels (voted into existence by psychiatrists at conferences) and obsessing over the band-aid commodities sold for those labels (marketing categories), as if it’s the patients’ fault rather than corporations’ for wage-slavery, widespread pollution, and the rest.

The image shows a page from my logbook. The page shows my writing as described in the post, and the month and date circled. A portion of the page is redacted for privacy.
Captain’s log, stardate March 8, 2021.

Trying to figure this stuff out, I went to a Seattle psychologist who was very knowledgeable about alternative views, and understood that emotional distress is a human problem, not a chemistry set or test tube problem. I gained some very good information from him, although I wasn’t really ready for it until later in my life. One thing he did with me was called brainspotting, an offshoot of EMDR (Eye Movement Desensitization and Reprocessing). I’ve heard the psychologist Daniel Mackler (different person) describe EMDR as a way to helpfully shortcut someone toward discovering what might be causing a traumatic reaction, though not something that heals the psychic injury on its own. A discovery tool, not the cure. So this other psychologist, the one in Seattle, pointed a red light at my eyes in accordance with the brainspotting procedure. It caused me to blurt out a single word. I won’t specify it here for the privacy of myself and others, but it was a proper noun, let’s call it R. A few years went by before I recognized the significance of it.

In the meantime, I decided the best way to engage with this mysterious trauma anniversary was to always know the calendar date, so I’d be prepared to use grounding techniques and anything else I needed when April, May, or May 31 arrived. I found a very helpful type of journal, pictured above left and at the start of this post, that lets you circle the date and month. That physical action (as opposed to, say, the endless musings of psychoanalysis) of finding the month and the day on the horizontal lists and circling them helps me always know the current calendar date. Before the logbook, when I was picking out a box of fresh spinach at the grocery, I’d have to check its expiration date against the date on my wristwatch. But now I always know the date and no longer need to do that. Whereas previously, the April/May, and/or May 31, time period would stay in my subconscious, below awareness, too scary to be confronted, I was now bringing this feared problem into my awareness every single day, and I still do this daily. (Makes me think of Jung’s shadow concept or Le Guin’s novel A Wizard of Earthsea.)

I use the logbook for other purposes too, most importantly to center my life on my calling of writing, which I’ll get to in a moment. I use the logbook to record my dreams each morning, if I remember them, and each night I use it for two exercises psychologist Terry Lynch recommends (his psychology courses are the most helpful material, bar none, that I’ve come across for understanding mental health issues). The exercises are writing down three things I did well that day and three things I’m grateful for from that day. The did-well exercise definitely makes me less susceptible to angry thoughts about how I’m supposedly no good at anything and the like; the exercise encourages me to have my own back, to defend myself from occasional automatic thoughts that are really internalized oppressions, not truths. The gratitude exercise makes me more optimistic in general. However, the benefit from both exercises has started to wear off somewhat, because over time I’ve reached the point where, eager to get to bed, I just scribble down the six things like rushing through a crap homework assignment. I’ve started reading the six things aloud to combat the unthinking, rushed behavior. Finally, I use the logbook to check off certain foods I try to eat each day for nutritional purposes (a large navel orange for myo-inositol, pumpkin seeds for zinc, and so on), plus certain tasks, a.k.a. areas, I attempt to work on daily: writing fiction (it’s set in 2036), nonfiction (a book about hacktivism), and self (journaling and reading psychology stuff or books that teach practical skills). In years past, when I tried to keep a record of what I was up to, I’d give up after a day or three. But now I’ve been using the logbook consistently for months and months (and I always know the date!).

Two principles have helped me stay consistent with using the logbook daily. One I call “focusing.” I looked at myself and thought, what do I really want to focus on with my life? Do I really, truly want to be investing free time in playing Dungeons & Dragons with online friends, or rehearsing Spanish vocabulary flashcards? Those would be nice to do, but I’m actually here to accomplish various specific writing work. Thus I made a powerful commitment to spend my time actually doing that, not distracting myself with secondary goals that might be nice someday (such as more Spanish skill). Implementing that helps with mental health, too, because I’m not hiding from the challenges of writing by doing something I deep down know is less important to me. I vigilantly circumscribe who I spend (very limited) time with, because all sorts of friends and frenemies habitually criticize me and how I spend my time, or tease me at length as to why I should be playing Dungeons & Dragons with them or coming to this or that offline event, maybe because what kind of weirdo writes longform blog posts anyway, who does that? But I have to protect my availability, especially since writing is exceptionally time-consuming work, particularly when I prefer a thorough and research-intensive style. Second, I jettisoned the idea of deadlines or pressuring myself to write however many words daily. Instead of trying to meet those perfectionist demands, I decided to follow my own curiosity and work on the projects however that curiosity leads me. I still task myself with, besides my day job, spending at least an hour a day on three writing areas — fiction, nonfiction, self — plus doing some form of exercise, so four or five hours total, but since all that is frequently not possible every day (yet), I came up with a simple solution, a way to look at the situation with compassionate objectivity (to borrow Hillary Rettig’s phrase). My real task every day is just to write Exercise: Fiction: Nonfiction: Self: on different lines in my logbook, in case I complete any of the areas and can check it off. That simple chore, which takes perhaps 15 seconds, means that I’m still focusing on these three/four primary areas of work. I’m still caring about and trying to do them, even if it’s just writing down those four words in my logbook. If I don’t work on, say, fiction some particular day, well, life is life, just do the best you can. So I jettisoned all the crazy stress about deadlines and words-per-day, which really came from other people’s expectations, like a lady who once randomly lectured me for not writing as fast as she thought I should at a writing workshop, even though she wasn’t even part of the writing workshop! (She was there hunting for business intelligence for her company, I think.) When you really look for it, and aim to stick up for yourself and others consistently, you realize there are many people circling around the world, prodding for weaknesses that they can mock you for if you’re vulnerable like a sitting duck, not skilled with firing back counter-insults or leaving the situation. I’ve learned to try not to ask others for their thoughts on these provocative topics too much offline, because bringing up a trouble or curiosity or passion I have all too often gives them an opening to mock or assert superiority without providing any sort of expertise to justify it.
So rather than hanging out, I much prefer writing down the four areas in my logbook, working on them if I can (longhand feels so much more connected and channeling than typing!), and then checking them off one by one. If you’re thinking about trying this logbook technique, it might help to recall that you don’t have to do it the exact same way as I do. Over time, you can learn to trust yourself and your judgement, if you don’t already (many people with mental health problems don’t, though they might not admit it, not even to themselves, like political radicals asking their psychiatrists for permission, or oh excuse me, if the psychiatrist would think it’d be a good idea, before becoming a water protector or the like). You can vary the logbook as you see fit.

Back to the trauma anniversary and R. The idea for the self area — for journaling every day for some 30-90 minutes — came largely from Daniel Mackler’s thought-provoking youtube videos and Terry Lynch’s amazing book Selfhood. I won’t here describe how precisely I do my journaling, as that’s enough to fill a whole separate blog post. The point is, when I first purchased my blue journal (pictured below to end this blog entry), I immediately had the thought come to mind that I should use the journal to write about R. A powerful felt sense told me that doing so was going to be extremely helpful, and I no longer needed anyone else to confirm this for me or debate it. As Lynch says in this hour-long video on recovery from bipolar disorder (where he also mentions how important it is to take baby steps out of comfort zones; and, how important it is for people with manic-depressive tendencies to notice when, in a precursor to psychosis/delusion, they start using grandiose fantasy, such as daydreams of being a superhero, as a coping strategy for avoidance anxiety / putting off addressing problems), when people have severe mental health diagnoses, a crucial piece of their trauma history might not be the big trauma everyone’s looking for, the really obvious horrible thing that happened to them that everybody knows about and talks about. It could be some event that seems small in comparison, or even mundane from a very macroscopic perspective, something that commonly occurs in most people’s lives. But that “small” traumatic event could still be very meaningful yet unresolved for the particular person; usually, it’s events in childhood or adolescence, through which later life can be filtered. That’s how it was for me with R. Over the next several months, working diligently and just about daily, I filled up the entire blue journal with my thoughts and feelings and notes, almost completely about R, sometimes using investigative journalism techniques, researching public records and maps and so on to ensure accuracy (it needs to be a story with personal meaning, but also a story with factual currency in the social world).

Guess what I discovered! The boiling point of the R situation happened in April 1997, and just days later, I exhibited strange emotional distress, something I’d never done before. (I obsessed over packing and unpacking a bookbag and couldn’t respond in conversation with my family, as if I couldn’t even hear them, when they were asking me from across the bedroom what was wrong.) That exact month, I was sent to a mental health provider for the very first time. Putting together these pieces wouldn’t be challenging for an impartial, outside observer with skill; in fact, they could probably do it in just a few minutes if presented with enough raw material about a client. But because I had/have the form of PTSD where I tended not to think in any detail about the trauma (except perhaps to haughtily dismiss its relevance), and because psychiatry was of no help (and in fact, with its extinguishing and its dodging of subjects like dissociation and abuse by orderlies, psychiatry made matters worse), solving this has taken me decades. It’s no longer difficult for me to acknowledge that people remember, even if only subconsciously or somatically, what happened to them long ago (see savants’ feats of memory for instance, or the fascinating book The Woman Who Can’t Forget by Jill Price), and that something like glancing at the clock at the corner of a laptop screen might inform the subconscious that the date is May 31, even while the conscious mind is running madly away from the trauma anniversary. There’s actually another trauma anniversary for me in August, of lesser strength; on August 24th, 1998 came my second incident of psychosis. It was August 24th, 1998 that got me put on psychopharmaceuticals. After the April and May months, August has statistically been the next most common time period for the mania episodes. Tomorrow I’ll start filling up my new, second journal about that August trauma anniversary, and that August 24th, 1998 event, whatever it was: currently, and for decades now, I have had only a single image of it accessible in my memory. So I’ll have to piece it together, with investigative journalism-type research, looking at archived computer files, finding old school yearbooks in libraries, and so on, as well as by describing and narrating that one single accessible memory-image in such immense detail that additional memories begin surfacing. I’m glad I filled up the blue journal about R; now I no longer fear the April and May time frame, and indeed, I’ve made it through April and May unscathed recently, with the seven-year nightmare stretch receding into the past.

Rather than psychosis, we should actually say extreme emotional distress. Whereas the word “psychosis” makes a person seem different, nonhuman, a deserving target of stigma and shunning, extreme emotional distress can happen to anyone, and it does. The handwaving about genetics and chemical imbalances, from which no conclusive evidence or tests have ever been provided, papers over the reality that millions upon millions of people are diagnosed with psychiatric labels and put on mind-altering, brain-shrinking drugs, some of which already went into shortage during the pandemic and might go into shortage again (there will come a day when these pills are no longer readily available in this or that region, and patients are left to dangerously cold turkey off them), that elders are being force-drugged with neuroleptic in nursing homes (to make them easier for staff to manage), and that any calamity, from another coup attempt in the United States to a hurricane or an earthquake to the loss of a beloved pet, can be the last straw that causes your mind to snap if you don’t know how to address the psychic violation, and sometimes even if you do. You’re not immune from humanity, and along with so many other psychiatrized people, I am not excluded from it, try as some might to exclude me.

I hope this post helps someone else suffering from trauma anniversaries and/or the type of PTSD where you don’t or can’t think about the specifics of the trauma, where you dissociate from them and can’t even remember them.

The image shows a blue hardback journal. The cover has impressionist-style art flowers, a tree, and a bay of sea.

Creative Commons License

This blog post, How I addressed a trauma anniversary that psychiatrists weren’t curious about, by Douglas Lucas, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (human-readable summary of license). The license is based on a work at this URL: https://douglaslucas.com/blog/2021/03/20/trauma-anniversary-curiosity/. You can view the full license (the legal code aka the legalese) here. For learning more about Creative Commons, I suggest this article and the Creative Commons Frequently Asked Questions. Seeking permissions beyond the scope of this license, or want to correspond with me about this post one on one? Please email me: dal@riseup.net

Views of happiness: Journey versus destination, part one of two

Note: In 2021 I’ll publish at least one blog post per week. Here’s entry 10 of 52.

Photo of Alki Beach in West Seattle by Laurel Mercury. (Source)

The Western philosophy I was taught in high school and undergrad, or picked up from books sitting on the shelves of my family’s house, typically presented happiness as something a person achieves once they laboriously bring multi-step efforts to a successful conclusion. This viewpoint is commonly expressed in everyday conversations about goals. If a person wants to become happy, for example, they might hear that they should take up exercise. The concept is presented as if they need to add something to their usual routines. With the flavor of the stress-inducing Protestant work ethic, exercise and happiness are made to seem a struggle: happiness-seekers should whip themselves into shape, and if they fail, the lack of joy is their fault. But what if being happy is instead about removing, about subtracting, the multitudinous problems that have piled up onto a person across the decades, the various vampiric leeches sucking away energy from adults who otherwise would be automatically — in their default or natural state — going about completing worthwhile, thrilling tasks on the regular, enthusiastically and with curiosity, like a toddler exploring a beach, running around saying, “What’s that? What’s that?” and eager to make a sandcastle?

I associate the It’s the destination, not the journey position with the ancient philosopher Aristotle. He, and much of subsequent Western philosophy, argues in ethics tracts that people ultimately seek the summum bonum of happiness (or flourishing, or thriving, if you prefer those words and perhaps some no true Scotsman). Summum bonum is a Latin phrase meaning “the greatest/highest good.”

In the Nicomachean Ethics, written around 340 BC, Aristotle argues for the destination over the journey. He writes:

Every art and every investigation, and likewise every practical pursuit or undertaking, seems to aim at some good: hence it has been well said that the Good is That at which all things aim. […] there are numerous pursuits and arts and sciences, it follows that their ends are correspondingly numerous: for instance, the end of the science of medicine is health, that of the art of shipbuilding a vessel, that of strategy victory, that of domestic economy wealth […] If therefore among the ends at which our actions aim there be one which we will for its own sake, while we will the others only for the sake of this, and if we do not choose everything for the sake of something else (which would obviously result in a process ad infinitum, so that all desire would be futile and vain), it is clear that this one ultimate End must be the Good, and indeed the Supreme Good. Will not then a knowledge of this Supreme Good be also of great practical importance for the conduct of life? Will it not better enable us to attain our proper object, like archers having a target to aim at?

A Roman bust of Aristotle. Has this one been smashed yet?

For Aristotle in the Nicomachean Ethics, happiness is the target humanity aims at. For instance, people might affix a mast to wood, for the sake of something else: making a finished vessel. Affixing the mast to wood is just a lesser bonum, relative to the greater bonum (happiness) that the vessel-builders are marching their way toward. Vessel done, they’ll use their finished ship for some third purpose, say sailing to another country. Once at the new country, it’s time for a fourth purpose, then a fifth, then a sixth, and so on. The only thing they don’t seek for some subsequent purpose is happiness itself, and thus for Aristotle in the Nicomachean Ethics, that shows happiness is each individual’s ultimate goal. Ultimate as in last in a sequence: the concluding object that a series of sweaty, hardworking steps gets you. All this is undergrad philosophy 101, which itself has the same project management, assembly line structure as Aristotle’s vision. Students don’t like smushing themselves into the too-small, infantilizing desks at universities that force them to install apps that vacuum up their personal information and surveil their body language for unapproved behavior, but they do it for the sake of something else: a credential (or the approval of Boomer parents, who lived during a different economic situation, when college degrees were scarce). That credential is valued for a third purpose, namely permission to serve in the trade economy of corporations and their ancillaries (or in the parents’ firms). That service leads to wages; the wages lead to television sets and Netflix subscriptions. Now they have the means to blot out the pain of having the happiness of the child on the beach stolen.

In the passage quoted above, Aristotle reveals himself when he says “all desire would be futile and vain” if “we do not choose everything for the sake of something else.” Of course people do labor to meet the needs of themselves and their networks, and rational, multi-step processes sometimes make sense in that regard as a practical matter or because people like the challenge and benefits of organizing their work. Yet exuberant people (or better put, people when they are exuberant) don’t experience life as a maze of action-steps and milestones sorted on clunky Microsoft Outlook calendars. Multiple youtube videos providing profiles of the various construction trades make it quite clear that one of the biggest reasons people go into that industry is simply that they want to be outside. They just want to be outside. That’s it. Feel the sun, feel strong hammering something, feel powerful running giant machines. But who does lack exuberance and get trapped in thought-conundrums of why do anything, when mortality / global warming / whatever will just kill us all? Depressed people do. They ask, “What’s the point?” As in, what’s the point of undertaking some particular action? Which is actually what, unfortunately, Aristotle recommends the world ask: all desire is futile and vain, he says, if whatever action is in question doesn’t make sense in terms of something else. Well, guess what. Life doesn’t make sense in terms of other things. You’re here, you’re going to die, might as well enjoy it. Why? Because that’s a person’s default, automatic state. Makes as much sense to ask why a fish swims. That’s just what it does; if it doesn’t do that, it’s somehow broken. If things are going well, people are happy and want to go around doing cool stuff. It’s the sedentary, depressed, crushed-by-others condition, sickened by corporate pollution and other problems, that needs to be understood and subtracted away, removed, so that the underlying euphoria of being alive can resume.

If Aristotle truly authored another text, the Magna Moralia (written around the same period as the Nicomachean Ethics; authorship is uncertain), then he does recognize that some activities don’t fit his blueprint of do this because it leads to that, which then leads to another thing, which finally gets you happiness. In the Magna Moralia, he says at 1211b:

to the flute-player the activity and end are the same (for to play the flute is both his end and activity); but not to the art of housebuilding (for it has a different end beyond the activity)

In other words, according to Aristotle, for the flute-player and presumably other artists, just performing and creating is where it’s at, a union of activity (playing the instrument) and goal (happiness); meanwhile, building houses has as its aim some other thing (obtaining shelter, let’s say). Yet it’s quite arbitrary to divide flute-playing from housebuilding. There’s no real reason to assume a flute-player has no additional ends in mind, such as seducing someone with the flute music as a higher (more important) goal later on, or busking to gain donations the next morning (they might even resent having to practice the flute for that), or any number of other destinations the flute-playing could lead to. As for Aristotle’s portrayal of housebuilding as a utilitarian task that is justified because of some further purpose (shelter), well, consider that Winston Churchill enjoyed bricklaying as a hobby. The prime minister didn’t lay bricks as a Plan B because he was worried about suddenly being out of a job and needing a backup skill that could also escort him toward happiness. He did bricklaying for fun, same as Aristotle’s flute player (with photo-ops added). So, if flute-playing can be part of a laborious, step-by-step schema to achieve happiness, and housebuilding can be a pleasant hobby on the side — both counterexamples to the Magna Moralia presentation of the two labors — then it seems none of Aristotle’s arguments about this are particularly clear or insightful. I think what’s actually going on is that Aristotle (assuming he authored the Magna Moralia) has doubts or perhaps conflicts about his own schema. But judging from my philosophy classes and readings over the years, it’s the do this because then it’ll lead to that which then leads to this other thing and finally you get happiness portion of Aristotle that’s really stuck with people over the centuries, regrettably. I myself tutored this Aristotle stuff approvingly for the university athletic department, teaching it to undergraduate players of women’s volleyball. Oops. They got what I was saying, but understandably couldn’t make sense of why Aristotle took the position he did.

Fast forward two and a half millennia to the noxious reactionary Ayn Rand, who endorsed much of Aristotle (“Whatever intellectual progress men have achieved rests on his achievements,” she wrote in her review of philosopher J.H. Randall’s book Aristotle). The laissez faire capitalist Rand took the Aristotle vibe that conveys a sense of having to complete task sequences — like laboring under a project manager, or an oppressed nation suffering the “rationalizing” forced economies of colonizer bureaucrats — in order to be happy, and put a somewhat simplified version of it on steroids. Her fiction, spreading her viewpoint on happiness, later flooded schools across the United States year in and year out as a result of the Ayn Rand Institute’s scholarship program, which rewards kids with money if they write winning essays on her work.

The cover of the 1964 paperback edition of The Virtue of Selfishness by Ayn Rand. It shows her standing on a New York City street.
She doesn’t look happy…

Two quotations from Rand on happiness give a concise snapshot of her stance. Both are from a 1961 essay reprinted in her 1964 nonfiction book The Virtue of Selfishness (the cover of the tattered paperback copy I read as a teen, pictured right, shows her standing in New York City with an austere expression, wearing a dollar sign brooch).

Happiness is that state of consciousness which proceeds from the achievement of one’s values.

Existentially, the activity of pursuing rational goals is the activity of maintaining one’s life; psychologically, its result, reward and concomitant is an emotional state of happiness.

Rand, in the above sentences, states that happiness is a reward following from the achievement of your values. She doesn’t present it as a default, automatic state that exists in the absence of problems. She’s pointing a finger at you, warning that you’d better do enough such-and-such, sweating from your brow like her character Howard Roark in a quarry, or your chances at happiness will slip away. When I was a teenager in Texas reading such garbage, it had a very negative impact on me, because I believed it, didn’t realize how miserable I actually was, and didn’t have good roadmaps for clearing up my situation. If I were a teen follower of Rand’s Objectivism philosophy reading this post, I would probably complain that dispensing with assembly line sequences of rigidly planned tasks that yield happiness only at the end leaves people with nothing to follow but their emotions, or what Rand called “whim-worship.” I’ll respond to that argument in part two.

In escaping the Ayn Rand position, I didn’t, and still don’t, adopt the tack most take. As far as I can make out, most people who are into this stuff eventually reject the Rand view, and rightfully so. They grow exhausted by its unnecessary, stress-inducing hectoring about completing tasks (expressed in her fiction and newsletters and elsewhere). But sadly it seems a lot of people then settle on hiding from challenges. Don’t think too hard, don’t care too much, get a job, any job, which seems to be about — in the face of overwhelming pressures and problems on all sides — staying in comfort zones, a permanent dissociation from life in general, all its mysteries and challenges and fascinations. There’s nothing wrong with ambitious goals and putting forth a lot of effort to achieve them (which sometimes makes bystanders uncomfortable, prompting the unsolicited advice, You know, there really is a lot of good TV lately). Rand can fuel people’s grandiosity, grandiosity that’s ultimately unhealthy, but Rand did seem, a little bit, to help me for a while as a young person to avoid the common view that all my goals should just go in the trash because vegging out is so much safer. There’s no conflict between valuing the journey and the ambition to achieve. It’s the journey, not the destination, despite its poorly stated form (in which it seems to dismiss the destination altogether), helpfully provides the insight that, the day whatever accomplishment is finally achieved, the achiever will still have to do the dishes, brush their teeth, and take care of other “mundane” tasks — so if a person centers everything around their far-off goals, they’re sure to be disappointed and unhappy. But it’s possible to do both: to have big goals and enjoy the little things too. It can be both the journey and the destination.

The cover of Ursula K. Le Guin's translation of the Tao Te Ching.
Ursula K. Le Guin’s version of the Tao Te Ching. (More here)

My early twenties thankfully saw me emerging, very slowly, from these severe, perfectionist views of complicated, multi-step requirements to someday become happy (or so I hoped). The undergraduate philosophy department at TCU (where I earned my bachelor’s degree) had on faculty Spencer Wertz, who stopped by the intro 101 class for a small segment each year to introduce the very uninterested Texan students to the Tao Te Ching and its Taoist concept of wu wei. Wu wei, which has various translations into English, means something like non-doing, or effortless action, or not-forcing. It’s a belief in refraining from needless effort, which still allows goals to be achieved and needs to be met. For instance, in the United States, protestors too often form rigid, stationary lines to square off with cops and other “security” forces. The conflict is very stark: a visible separation demarcates who are the activists and who are the authorities, as they form up against each other like offensive and defensive lines in U.S. football (except standing). The protestors aim to occupy or hold their particular position. Hong Kong activists, in contrast, encourage a principle evocatively called Be Water. The Be Water idea has protestors flowing through the streets like guerrilla parkour, never staying in any single place too long, surrounding the authorities, disappearing altogether, or even re-emerging, an unpredictable but dedicated river of people swirling around. (The HK-19 manual, an in-progress Google Doc in Burmese and English, promises to discuss Be Water in more detail eventually; I learned of the document from a Twitter account you might consider following.)

A Hong Konger holds up a sign that reads: "Be Water! We are formless. We are shapeless. We can flow. We can crash. We are like water. We are Hong Kongers!"
Photo credit Mary Hui. (Source)

The contrast in protest methods beautifully illustrates the difference between rigid requirements of arduous, assembly line task sequences (say, the chain of steps to get an academic degree, with the bureaucratic numbering systems that catalog different courses like points on some vaunted grid of geometry meeting economic efficiency) and wu wei. As Professor Wertz talked to our classes, some students whispered to one another, mocking the elderly man and his strange Chinese phrase. The rest of the philosophy faculty participated in various off-campus events together and with students — music shows and parties, both featuring alcohol and people bragging of their alcohol consumption, re-enacting the repetition compulsions of those angry intellectual male literary writers from the first half of the 20th century, spotlighted on stages boasting of their misery. These other philosophy professors, who had more Western orientations, were highly thought of by the majority of the students, who were eager to be picked from above for grad school. Meanwhile, as far as I could make out, Wertz kept mostly to himself. Yet here I am, more than a decade later, painting a picture of the Taoist idea he encouraged. Very wu wei of him! His teachings didn’t address happiness directly (as far as I recall), but they did contribute to my thoughts on the matter later becoming un-stuck from the Eurocentric colonizer theories of academia, with their assembly line vibe that to procure happiness you must add box-checking struggles to your life. Like gentle water, Wertz’s explanations of wu wei didn’t have an immediate impact on me, but what he said was definitely a raindrop in the storm of different ideas that would eventually help me shift from the It’s the destination, not the journey position to a perspective that values both. No longer did happiness necessarily have to be about straining toward a far-off destination; all that needless effort could be subtracted away, removed, as wu wei suggests. Happiness could be a gentler, daily experience.

In part two, I’ll describe how the writings of F.M. Alexander, Ursula K. Le Guin, Heather Marsh, and others further changed my view on happiness.

Creative Commons License

This blog post, Views of happiness: Journey versus destination, part one of two, by Douglas Lucas, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (human-readable summary of license). The license is based on a work at this URL: https://douglaslucas.com/blog/2021/03/13/happiness-views-journey-destination-1of2/. You can view the full license (the legal code aka the legalese) here. For learning more about Creative Commons, I suggest this article and the Creative Commons Frequently Asked Questions. Seeking permissions beyond the scope of this license, or want to correspond with me about this post one on one? Please email me: dal@riseup.net.

Vaccinated, first jab! Here’s how it went

Note: In 2021, I’m publishing one blog post per week. This is entry 9 of 52.

Note: Basic information about COVID-19 vaccination can be found at the World Health Organization here and here.

Update: This article at The Atlantic discusses differences between the vaccines. This article at Vox discusses how the vaccines do or don’t apply to the coronavirus variants.

The author, masked and seated, receiving a vaccine from a syringe, which a nurse wearing a face shield holds with gloved hands and presses into the author's right arm.
Vaccine selfie

On Tuesday, Washington Governor Jay Inslee issued a statement in response to President Joe Biden’s directive a few hours earlier that the 50 states prioritize childcare workers and educators (all staff for schools pre-K through grade 12) for coronavirus vaccinations. Inslee enabled Washingtonians in these occupations to get vaccinated immediately. As soon as I heard — my day job is in education — I got busy figuring out how to obtain my first shot.

Following emailed instructions from my employer, I checked out Washington state’s vaccine locator, a county-by-county tool that lists various clinics. The clinics’ websites had not yet been updated, since the news had just arrived; I was operating in a mild fog of war. Some of the busy health centers didn’t even have humans answering their phones. But using the vaccine locator, I saw a nearby place that appeared open and offered, of the three vaccines currently supplied in the United States, the Pfizer–BioNTech version. The Moderna vaccine seemed, from casual research, quite comparable to it, though I wasn’t thrilled by its higher dosage providing a tiny bit less effectiveness, and as for Johnson & Johnson, they knowingly put asbestos in baby powder, so I took their vaccine off my mental ideal list. Lawsuits have surrounded Pfizer too, but I had to draw the line somewhere. And I didn’t want to be picky: I decided that if I arrived at a clinic, and it turned out they were injecting people with the Moderna vaccine (the J&J wasn’t available in Washington state at that point), I’d just go ahead and get it.

Before heading off to the clinic, however, I printed a copy of my most recent pay stub and grabbed my most recent W-2, in case the healthcare workers wanted evidence of my employment. I also asked my primary care physician for one last serology/antibody blood test, and determined where to have one last PCR nasal swab done. Those were to confirm, as best as possible (the tests don’t reveal every case successfully), that I’ve never had COVID-19. I went to the medical facility; the phlebotomist drew my blood. After that, I went to a parking lot where a city fireman plunged a long stick, with a brush on its end, into each of my nostrils (or maybe he used two sticks/brushes total). If you’ve ever had the nasal swab done, you know it’s a very uncomfortable, but thankfully quick, procedure. While the stick-and-brush rooted around my nasal cavity, I distracted myself by thinking about how if there’s a hell, and I were burning in it, I’d be feeling a lot more agony than this, so don’t worry and just endure it. Having completed both tests (and both have since come back negative: no COVID), I headed for the clinic.

The place I’d located wasn’t answering its phone — well, only an unhelpful robot was — but I thought I could get answers in person. Sometimes people try to conduct the entirety of their research by calling or googling, methods that can save time, which matters when you’re crushed by paid-jobs or other stress, but I’ve found (what with falling into privileged categories and all) that it’s sometimes easier to simply find a sensible employee in the flesh and ask them face to face. Of course, this requires actually reaching the destination. While driving, I was unable to locate the correct street address, but I happened to pass a large, impersonal-looking building with several people lined up outside. That must be it, I thought. It turned out to be a different clinic! But one also offering the Pfizer–BioNTech vaccine! I got in line, grinning at my luck.

A screenshot of the video game Mike Tyson's Punch-Out!! from the 8-bit Nintendo Entertainment System. It shows a boxing ring, wherein the player, Little Mac, is jabbing an opponent, Bald Bull, with his left fist. Bald Bull is making a pained expression.
All this talk of jabs makes me think of boxing

The others waiting in line were mostly educators of various ages, some of whom had been released from duty by their principals to go get their jabs. A healthcare worker came out the front doors and explained to us that each person hoping for a first shot needed to put their name on a wait list. Every wait list expired at the end of the day, meaning that if a person didn’t receive a shot, they’d have to come back another day and put their name on a new wait list, starting all the way over. I put my name on the Wednesday wait list. The employee said a shot might become available in the next hour or two, and if so, the clinic would call to tell me. Something in his manner suggested that a first jab really would be in supply after some 90 minutes. That’s why I waited in my car. Sure enough, I received the phone call right on schedule. At the front door again, I showed the healthcare workers the documentation of my employment, but they said the evidence wasn’t necessary. I went inside.

Once the usual pandemic screening was completed in an entryway (temperature check, questions, etc.), I was guided to a chair in the next room, where I sat and filled out paperwork. The numerous pages listed the vaccine’s unpronounceable ingredients, said it was authorized only for emergency use and not FDA approved, and explained that the vaccination would be kept on record in an immunization information system to help with public health goals, such as ensuring that as many people are vaccinated as possible. I handed in my paperwork, waited a little longer, and finally was led to the seat where I was to receive my first jab.

The nurse and I made small talk as she raised the sleeve of my mock turtleneck and I prepared my phone for a selfie. She took out the long syringe. Then she slid the needle into my arm. In an instant, it was over. I barely felt a thing. But I managed to click my phone successfully. With the card in hand — the CDC one that shows when you received each jab and which lot numbers the shots came from — and another card showing my appointment later this month at the same clinic for my second, final jab (the booster shot), I walked to an adjacent area for fifteen minutes of post-vaccination observation. The healthcare workers observe individuals who are jabbed because, in exceedingly rare instances, people have allergic reactions. For me, as expected, nothing happened, so after the fifteen minutes elapsed, I exited the building and climbed into my car.

Heading home, I was suddenly breathing a lot easier. What good fortune, to accomplish all three things (serology bloodwork; PCR nasal swab; first jab vaccination) in a single day, within roughly 24 hours of the governor’s announcement. That evening and the next day, my upper arm was sore and I felt a bit tired, common side effects of coronavirus vaccination: triggered by the mRNA in the shot, the body works hard for a while, as if sick, building improved T-cell immune protection and antibodies, defenses that will then guard against COVID-19 in case of a real infection (but the vaccine does not contain any virus and cannot give you the disease). I wasn’t too tired overall, though; I was still able to wake at 5:30 a.m. the morning after the jab and go running for five miles. It felt like my path forward was now sunlit, no crazy coughing or long-term damage or potential death from the pandemic.

But many uncertainties still remain around COVID-19 vaccination. It’s unclear how much it will or won’t protect against the new strains (viruses mutate, after all). The B.1.1.7 and B.1.351 variants of coronavirus are here in King County / Seattle. Perhaps the variants will die out as more individuals are vaccinated, or perhaps people will have to get additional jabs to protect against them. It’s also unknown whether vaccinated people, while not getting sick themselves, might still carry the pathogen and transmit it to others. These two uncertainties are why, until humanity understands the coronavirus better, even those who are vaccinated should still mask up, physically distance, and follow other safety steps consistently. As the history of the 1918-1919 influenza epidemic shows, when people recklessly abandon safeguards, as Texas currently is doing, highly infectious diseases catch fire again, flaming up anew. The United States has suffered more than half a million deaths since the pandemic began — far more than any other country on the planet — and that number will continue to rise for months and months. At the places I usually go, mask compliance is basically 100%, but because King County / Seattle has one of the lowest coronavirus rates among populous U.S. counties, I think many don’t see deaths or COVID-19 illnesses firsthand, and as a result they feel skeptical that coronavirus is a threat (I saw new graffiti this week that says Hang Inslee). If monkey doesn’t see, monkey doesn’t do, in many cases, anyway. I certainly understand and share the well-warranted distrust of the medical industry generally (whether conventional or alternative providers), but vaccinations against viruses are one of the genuine feats of contemporary science. See Ebola or polio (though to be precise, neither of those has been eradicated yet).

I feel hopeful, and I look forward to getting my second jab done soon. Maybe this long nightmare is at last coming to an end; maybe a new beginning is finally emerging.

Creative Commons License

This blog post, Vaccinated, first jab! Here’s how it went, by Douglas Lucas, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (human-readable summary of license). The license is based on a work at this URL: https://douglaslucas.com/blog/2021/03/05/vaccinated-first-jab/. You can view the full license (the legal code aka the legalese) here. For learning more about Creative Commons, I suggest this article and the Creative Commons Frequently Asked Questions. Seeking permissions beyond the scope of this license, or want to correspond with me about this post one on one? Please email me: dal@riseup.net.