Monday, February 17, 2020

[Unpopular Opinion] [Spoilers] [Review] My Take on the Season Finale of Bojack Horseman (with Four Alternative Endings)

Bojack Horseman had an unorthodox ending.  It split its final season into two parts, much in the way it constantly splits its viewers' hearts into two parts every other episode.

This resulted in a several-month wait of nervous anticipation.  The first half of the 6th season showed Bojack, the troubled titular character, getting his shit together, but ended on a cliffhanger in which everything is about to go terribly, horribly wrong.  The second half of the 6th season, finally released on January 31st, resolved the cliffhanger and gave us an ending.

Was it the ending I wanted?

To be honest, no.


Naturally, I binge-watched the entire 8-episode second half in one sitting, the moment it dropped at 12 a.m. on January 31st.  I assume you did something similar, or you wouldn't be reading this right now.  I assume that you've seen the final season of Bojack Horseman, but if you haven't, go see it.

The second-to-last episode scared the shit out of me.  Granted, this was at about 3:30 a.m., so all of my emotions were pretty heightened.  Looking back, I felt like some of the symbolism was a little over-the-top.  The heavy-handed death allegory felt a tiny bit forced.  Black tendrils of water, a guy running down a narrow and impossibly long hallway... these images felt tired out.

But I can forgive that for three reasons.

First, this whole episode was alluded to / foreshadowed in one of the early episodes of the season, when Bojack is talking to the therapy horse Dr. Champ, and Dr. Champ says that Bojack has told him everything, "even that dream where you're at a dinner party."

Second, Secretariat's poem "The View from Halfway Down" justifies the entire episode.  The whole episode could have been garbage and this single poem (and the voice actor's delivery of it) really, really hammers home the terrible final thought processes of suicidal people.


Third, Herb's responses to Bojack.  When Bojack refers to the dream sequence as "this place," Herb says, "There is no place.  It's just your brain going through what it feels like it has to go through."  Later, Bojack says, "See you on the other side," and Herb gently replies, "Oh, Bojack, no.  There is no other side.  This is it."

One of my big complaints about this episode was the implication that you dream when you're dead, that you experience something, that there's some sort of fantastic dreamworld or meaningful experience you have in your final moments.  Herb's grounding the episode took care of that for me.  This episode wasn't for Bojack.  It was for us, the audience.  Bojack was incapable of having such a lucid dream while he was dead, and I felt like Herb's character helped clarify this.

But then we got the last episode.

In the very last episode, Bojack attends Princess Carolyn's wedding, and at the end, goes onto the roof to talk to Diane, who reveals she is also married and now lives in Texas.  Bojack "making it" felt like a bit of a cop-out.

But worse, Bojack's story ends with him going to jail.

First of all, one of the strengths of the show is that it's always been relatable.  Not all of us were stars in '90s sitcoms, but generally, Bojack's experiences are universal.  His need for approval, his fear that he's not good enough, his desperate chase after friendships that validate him.  It feels very human (even though he's a horse).  Going to jail is not a universal experience.  And it can't even be said to be Bojack's rock-bottom, either.  Bojack is forced to go sober, and let me tell you, not a lot of recovering addicts ever really get "forced."  Most hit a personal rock-bottom and then realize they have to fix themselves or they will end up dead.  In other words, pardon the cliché, you can lead a horse to water but you can't make him drink.

They made Bojack drink, and I hated that.  Bojack made his own bed and was constantly being forced to deal with the consequences of his own actions and choices.  Being sent to jail isn't the same as holding yourself personally accountable.  It's just being caught being shitty.  It says nothing of personal growth.  The show's relatability fell apart for me in the final episode.  The cheesy marriages of P.C. and Diane also felt a little cliché.

Bojack going to jail reminded me of Robert Downey Jr.'s '90s issues but, for most of the viewing audience, who are not Robert Downey Jr., it was totally inaccessible.  Also, please explain to me why Bojack was sent to a maximum-security facility on drug and trespassing charges.  Also, please explain to me why they would let him out for the weekend, if it's maximum security.  This episode was formulaic and hackneyed and, worst of all, it failed to deliver the relatability that made the rest of the show so totally phenomenal.  Bojack was a show built on exploring universal experiences, and the final episode felt all wrong to me.

A few things I liked.  I liked the reveal that Bojack had called Diane and left her a shitty, manipulative message wherein he basically held himself hostage.  This is classic suicidal drug abuser behavior and I love this final exploration of Bojack's and Diane's relationship. 

I also really, really liked the resolution with Hollyhock, which is that there wasn't one.  He fucked up and Hollyhock cut him out of her life because he was a toxic person, and even now that he's in recovery, he can't get it back.  That happens in real life and it's devastating and raw and real.  It's an open wound that never quite heals because you don't get the resolution or closure you want, no matter how much you get better.  I felt that.

As usual, I can't bring myself to complain about something without an attempt to fix it, so I thought up four alternative endings that I think would have worked better than Princess Carolyn's wedding, which had the gaudy, sitcommy weight of reassurance to it.  It felt out of place for the show, whose usual tone walks the line between absurd meaninglessness (i.e., Dadaism) and unsatisfying real-life stories that are relatable because of their open-ended rawness.

Alternative Ending #1: Bring Back the Baboon

One of the earliest moments in the series that really, truly resonated with the fan community was at the end of season 2 when the jogging baboon looms over Bojack and gives him some advice about running: "It gets easier… every day it gets a little easier.  But you gotta do it every day — that’s the hard part. But it does get easier."

Imagine if in the final episode, Bojack leaves prison, ends up in a crappy apartment, and drags himself to court-ordered AA meetings.  And... there's the baboon.

Baboon: You know what they say.  One day at a time.  I try to live my life by those words, but--
[Bojack is nodding.]
Baboon: --the truth is, it never really gets any easier.
Bojack: Wait a second!  You told me it gets easier.
Baboon: ...what are you talking about?
Bojack: When I was running!  You told me if I did it every day it would get easier!
Baboon: That was about running, man, not life!  You mean to tell me you try to ascribe meaning to every casual interaction in your life like it's some kind of sitcom?
Bojack: Well screw me for trying to find meaning!
[Bojack gets up to leave.]

Baboon: Wait.
Bojack: What?
Baboon: Look.  Life isn't like a sitcom, or training for a marathon, because there's no conclusions.  There's no moment where you're suddenly just... done.  That's why we say you gotta take it one day at a time.  And I'm not going to bullshit you.  It doesn't get easier.  But... you get stronger.
[Bojack contemplates this, and then sits back down.]
[In the final scene, the camera pans out, showing Bojack and the baboon talking outside after the meeting.]

What it means: Exactly what it says.  You live each day and try to do your best, and it's hard, but if you keep trying, then hopefully it will get better.  People love that goddamn baboon and he disappeared after season 2, so I think this would have been an amazing reappearance.  Also, the wise old baboon being in AA and being shown to struggle is a powerful message.  Everyone's dealing with something; we should help each other out.

Alternative Ending #2: Callback to the Bag of Mulch

Bojack leaves prison and/or fades into obscurity after the bad press regarding Sarah Lynn.  He moves into a crappy apartment next to the 110 freeway.  At the end of the episode, he goes outside and leans on his car, smoking, and looking up at the skyline, listening to the traffic in quiet contemplation.

Bojack [to himself]: Maybe being on that show for all those years gave me a false impression of neat conclusions.  That everything always wraps up nicely at the end.  That, no matter how messy things get, there's a point to it.  [pausing] I'd like to think there's a point to it.  I don't believe in God but I wish I did.  I wish I could.  ...God?  ...if you're there... give me a sign.
[There is a long, quiet pause.  Nothing happens.]
[Bojack sighs.]
[Suddenly, a bag of mulch falls from the overpass and lands on his car.]

[Bojack stares at it in shock, then looks up, then looks back down at it.  His incredulous expression turns to one of annoyance.]
Bojack [yelling]: Well what the hell is that supposed to mean?!
[Cut sharply to the end credits.]

What it means: Life is absurd and we are the ones who give it meaning.  This fits perfectly with the existentialism and Dadaism that are cornerstones of the show's aesthetic.  It's also a punchline to a confusing joke, which is about 90% of the show's plotlines.

Alternative Ending #3: Why the Long Face?

We got some of this in the final episode, in the form of a song that I personally didn't like much.  But for a long time, I had hoped, desperately, that the series finale would involve Bojack going to a bar and getting asked this question.

Ideally, I would have liked Bojack to have been driving and to have reached a crossroads.  (Literally.)  He parks, gets out of his car, and goes into a bar.  He sits down.  (Cameo appearance: Dr. Champ in the background, getting shit-faced.)  The bartender comes up to him.

Bartender: What'll it be?
Bojack: Huh?
Bartender: ...what'll it be?  You need to choose something, you know.
[Bojack stares pitifully at the drinks along the wall, realizing he needs to make a choice.]
Bojack: But... what if I make the wrong choice?
Bartender: Then I guess you order something else?
Bojack: But what if the first choice influences every other choice?  What if I set myself down an irrevocable path of self-destruction and I get in too deep and can't turn back?
Bartender: Um...
Bojack: I shouldn't even be here.  What am I doing?  ...I have to choose something, but it feels like, no matter what I choose, it's wrong.  And not choosing... that's a choice, too, isn't it?
Bartender: I guess?
Bojack: I could choose to get a drink or I could just... go home. But then what?  What's the point?  Be good now, so I can mess up later?  
[A long, awkward pause ensues.]

Bartender [leaning onto the bar]: Wow, man.  That's all really heavy.  Sounds like you're working through some shit.
Bojack: I'm trying to figure things out and I just... don't know how.
Bartender: Need someone to listen?
[Bojack nods.]
Bartender: Alright, go ahead.  Tell me what's up, big guy.  ...why the long face?

What it means:  I love the idea of six seasons of suffering building up to a played-out one-line joke.  It perfectly captures life's absurd pointlessness.  However,  I also like the idea of Bojack being shown making a choice.  Does he relapse?  Does he go home?  What's next?  No idea!  Doesn't matter.  The point is that our lives are made up of a series of choices, and to live meaningfully, we have to acknowledge those choices, as well as their consequences.

Alternative Ending #4: "What's Behind That Door?" 

The iconic opening sequence of the show, each season, has been Bojack floating through life to a funky jazz tune before falling into his swimming pool.  He sees Diane and Mr. Peanutbutter appear above him, looking worried, and then a bright light from a helicopter.  The bright light transitions into him floating on the surface of the pool in shorts and sunglasses on a bright southern California day, and the camera pans out to show his hillside home in the Hollywood Hills.

Now imagine this.  The final episode opens with the same opening sequence.  Bojack floats through life: his abusive childhood, his rise to stardom, his downward spiral, his falling into the pool... we see Diane and Mr. Peanutbutter appear and the bright light...

...and then it cuts to darkness.

For twenty-two minutes, we get a black screen.  No images.  No sound.  Just black.  And then it cuts to the credits.

What it means: As Herb already said, there's nothing behind the door.  When you die, you're dead, and your story ends.  This ending would be beyond frustrating for the viewers, but that's the point.  We got to know Bojack, to love him despite his faults, to want his story to continue.  By killing himself, he robbed us of that.  Death is finality and there's nothing else beyond it.

So make it count.

Monday, February 10, 2020

Climate Change Is Causing Wildfires. Yes, Really.

I'll admit it.  The title of this post was clickbait.  Climate change doesn't cause wildfires... at least, not directly.  But it certainly contributes to all of the conditions that feed wildfires, and that is what today's post is about.

Be forewarned: this is what my writing looks like when it's "forced."  I'm not entirely sure the structure here is as rock-solid as it could be, or that the transitional flows from one paragraph to the next are seamless.

I wrote this in response to a prompt for class.  The prompt: Please write an 800 word analysis on a topic of your choosing. Please remember: a news analysis is meant to examine, explain, illuminate, and suggest new and different ways of thinking about a topic than a straightforward news report. Analyses are not advocacy in the sense that they are prescriptive calls for action.

The professor advised us that news analyses often answer the question "how" or "why" about a breaking news story, and aim to answer that.  I had been thinking about the Australian brushfires, and decided to answer my own question: how does climate change actually cause wildfire?  The answer is below.

The view in Lake Conjola, New South Wales

The Australian brushfires of 2019 - 2020 received international attention as they grew into the nation’s largest wildfires on record, burning over 15 million acres of land and destroying over 6,000 buildings, almost half of them residential. The reasons cited for the fires include an intense heatwave and drought, but the severity of these conditions can be traced back to global warming. Climate change promotes conditions that affect the likelihood of wildfires, such as heat and dryness, and scientists have drawn a strong causal link between rising global temperatures and an increase in the frequency and severity of wildfires worldwide.

The first factor in Australia’s record wildfires was the extreme heat. 2019 was Australia’s warmest year on record. Over the past century, Australia’s climate has shifted by about one and a half degrees Celsius, according to the Australian Bureau of Meteorology. Intense heat waves have begun to occur more regularly, and are often coupled with drought.

2019 was also Australia’s driest year on record; rainfall was 40% lower than average. Drought is not uncommon in Australia; its low elevation and uniform landmass, aided by the cold ocean currents that move in from the South Pacific, prevent evaporation from occurring inland. This prevents rain cloud formation. Another factor in preventing rainfall is high-pressure systems, particularly prevalent in the southwestern area of the continent. Drought increases the probability of brushfires by drying out vegetation, creating a ready source of fuel for wildfires and lowering the ignition threshold. Couple these dry conditions with intense heat and the electrical storms that can spark fires, and it becomes clear why Australia’s dry season is also called “brushfire season.”

The heat from the fires melted the rims off of this car.

A third consequence of rising global temperatures for Australia is hotter summers, which bring an earlier onset of spring. This begets a more rapid melting of spring snowpack, which in turn causes the topsoil to dry out earlier and remain dry longer. The dry soil contributes to the dryness of the brush. The spring snowpack is also melting more rapidly simply because there is less of it; Australian rainfall has declined since 1970 in the southwest, and the below-average winter rainfall reduces the spring snowpack. A small spring snowpack that melts rapidly in high temperatures makes spring arrive earlier and the dry season last longer.

Longer, hotter summers and longer, warmer springs have a fourth consequence that impacts wildfire likelihood: insect populations that thrive in warmer temperatures have exploded. The Fourth National Climate Assessment, which studies global climate change but predominantly focuses on North America, notes that warmer temperatures and longer springs allow beetle populations to grow unchecked. Typically, cold climates limit their numbers; left unregulated, tree beetles and other insects negatively impact plant growth in forests, killing off new tree growth and susceptible forest populations, which, once dead, dry rapidly and act as kindling in wildfires.

A man and his dog watch their ranch burn.

Australia’s most populous state, New South Wales, was the most significantly impacted by the wildfires and exemplifies how the changing global climate can contribute to the severity of wildfires. The 2019 brushfire was the state’s worst fire on record, both in terms of the area it affected (9.9 million acres) and how long it burned. 25 of the 35 fatalities from the fire were in this state. New South Wales has also experienced the most significant drought conditions in Australia, with rainfall 77% below average. Rainfall in New South Wales has decreased overall by 10 - 20% over the last fifty years.

Narrowing the scope even further, an example of how global climate change can be said to “cause” wildfires is New South Wales’ Gospers Mountain forest fire. The Gospers Mountain fire started from a single ignition point, caused by lightning. Dry forests are more likely to ignite naturally from lightning strikes. The Gospers Mountain fire has now destroyed 860,000 acres - an area twice as large as Hong Kong. According to the Centre for Environmental Risk Management of Brushfires at the University of Wollongong, this is the largest forest fire in Australia’s history, and it was unavoidable; the fire was started naturally, and its rapid growth was a natural consequence of the drought that hit New South Wales particularly severely.

 No filters.  
 Evacuees have flooded Instagram with photos of the red hellscape that is New South Wales.

The Australian Bureau of Meteorology, in its annual State of the Climate report, states that Australia’s climate over the last century has shown an increase in “extreme weather,” with more warm extremes and fewer cool extremes. Projections for Australian climate, based on global trends, predict that Australian temperatures will continue to increase, with more extremely hot days, and that average rainfall will decrease in southern Australia. Based on these trends, it goes on to predict that extreme “fire weather” will continue to increase, and that the fire season will lengthen as climate change continues to promote ideal wildfire conditions.

The Australian government has acknowledged that climate change and global warming are negatively impacting Australia’s environment. The continent is uniquely susceptible to negative impacts due to its geography, already-arid inland, high-pressure systems, strong coastal winds, and unpredictable rainfall. Although techniques for fighting and containing wildfires have improved over the last century, severe wildfires, like those seen in southwestern Australia, are expected to continue to grow in frequency and intensity as global temperatures rise.

How you can help:

Monday, February 3, 2020

Eliminating FAFSA Benefits Students

The cost of higher education has always been a hot-button issue for presidential hopefuls, and it’s no wonder why; the cost of college tuition has increased eight times faster than wages since the 1980s, according to Forbes. But amid all the talks of reforming how much college costs, it’s rare to hear anyone speak of how college is paid for, and of the necessary re-examination of FAFSA, a 28-year-old piece of paperwork that’s letting students fall through the cracks.

FAFSA, the Free Application for Federal Student Aid, is a single form that determines student eligibility for all federal financial aid, and its link to college success cannot be overstated. The National College Access Network has found that students who file a FAFSA form are 72% more likely to finish college than those who don’t. According to the Department of Education, each year, FAFSA awards over $120 billion in grants, loans, and other types of aid, such as work-study programs. So why would anyone not fill out this form?

The answer is two-fold: the form is overly complicated and, worse, requires the income information of students’ parents if the students are under the age of 24.

FAFSA is 108 questions long and, using a student’s parents’ income information, it generates a Student Aid Report, including the EFC, or “expected family contribution,” which is the amount of money the government calculates that a family will contribute to the student’s college costs. The EFC determines how much financial aid a student is eligible for, such as how much they will get in grants (as opposed to loans), and whether their loans will be subsidized or not.

The students most negatively affected by this system are the socioeconomically disadvantaged ones that FAFSA was originally designed to help in the first place.

Let’s start with the simple fact that FAFSA does not look at how much a family actually contributes to a student’s tuition, but instead bases its calculations on tax returns. Tax forms do not, in any way, shape, or form, demonstrate how charitable one’s parents might feel; how much a parent can contribute does not necessarily reflect how much they will.

But the problem is deeper than this, because if a student under the age of 24 can’t get their parents’ income information on their FAFSA, then they are unable to file it at all and are ineligible for any form of student aid. According to the Harvard Political Review, each year, there are roughly $2.8 billion worth of Pell Grants that remain unclaimed.

It’s been over a decade since I had to fill out FAFSA, but I still remember it not-so-fondly as one of the worst parts of my college experience, because I was one of those students made ineligible due to a technicality. I was estranged from my parents and therefore could not complete the form. Mine was not a unique experience; each year, about 25% of FAFSA forms are left incomplete, according to the Federal Student Aid office.

There are plenty of reasons a parent might refuse to fill out their portion of the FAFSA form. Some have cited political protest; the FAFSA form is linked to the Selective Service System. Parents who are undocumented immigrants or who didn’t file their taxes might understandably prefer not to get mired in legal documentation with the federal government. An illiterate parent might find the process too complicated. And some parents, like my own, might simply have a distant relationship with their child.

There are, of course, stipulations. You can bypass the FAFSA’s “dependence” clause if you are married, a veteran, or a parent of a child. But I, like many 18-year-olds, was disinclined to get married, have a child, or go to war just to complete the form.

A college’s financial aid office has the authority to change a student’s status from “dependent” to “independent” under “unusual circumstances,” but according to the National Postsecondary Student Aid Study, only 0.5% of all undergraduate students are independent because of a dependency override. Parental refusal to fill out the form is not considered an unusual circumstance; students whose parents refuse to furnish their income information are routinely told to wait until they turn 24 to be able to complete the form and secure funds to attend college. This was the advice I, too, received from my alma mater’s financial aid office.

The question becomes whether or not FAFSA is even necessary to determine eligibility for federal financial aid in the first place. Susan Dynarski, a professor of public policy, education and economics at the University of Michigan, suggests that FAFSA could be eliminated altogether; she points out that “the information needed to calculate eligibility for that aid is already collected by the I.R.S.” The FAFSA is supposedly designed to help low-income students, but instead, it regularly fails them, thwarting attempts to receive aid based on technicalities, an inevitable byproduct of the form’s complexity.

FAFSA needs to be re-examined and simplified, and perhaps even eliminated. The poorest and most disenfranchised students that it is designed to help are the ones most likely to fall through its cracks, and it’s only by reforming FAFSA that these students can be saved. There are many factors that create a gap in education between low-income and high-income Americans. But FAFSA doesn’t have to be one of them.

Monday, January 27, 2020

The Editorial: Objectivity, Existentialism, and the Burden of Consumer Choice

Last week, I discussed journalism and the impossible modern standard of "objectivity."

This week, for one of my classes, we had the following prompt:

Please write an 800-word editorial on the importance of media literacy to better enable news consumers to distinguish between "fake news" items, blatantly partisan and biased news sources, and mainstream news stories that still must be critically examined and analyzed rather than taken merely at face value.

800 words isn't nearly enough for me to parse out this issue. My last post, of course, completely ignored purposeful fake news and blatant propaganda. There are so many moving parts in how news is made and how people consume it. Any discussion of bias has to take into account socioeconomic factors, rural v. urban communities, sexual and racial factors... you could write a whole book on bias and how it affects journalism. "Media literacy" is not a simple thing. I am inclined to write prescriptive editorials that offer solutions, but there's no real solution to eliminating bias or propaganda. Some people will always distrust the news. And honestly, the news reflects the people who consume it, so really, maybe fake news is what people want. Who are we to deny the consumer what they demand? News that enrages also engages, and people don't want to be informed so much as they want to be entertained and to feel something.

I wasn't really thrilled with my end product because what I wrote sounded very hopeful and also put a lot of trust in the average Joe.  I don't think the average person is fantastic at identifying and acting on their own self-interests.  I guess it's wishful thinking, that we should have confidence in the general public to educate themselves and consume what is best for them.  The question of where we draw the line between personal, individual choice and a "nanny state" that dictates what people are allowed to do is a moral and philosophical debate that you could write another damn book on, and it leaves me feeling unsettled, because I don't really believe in "benevolent control" but also don't really trust people to do the right thing.  As an existentialist I guess I'm inclined toward giving people as much freedom as possible, but with the awareness that they're gonna fuck up.

The original draft of my essay was actually just a perfect re-write of Albert Camus' "The Stranger."

Anyway, here's my best effort, in 800 words, to offer up a solution. Of course, if such an easy solution existed, then it would have already been discovered by someone more educated and experienced than I. Ultimately, writing this took a long time and really challenged me; I got pretty mired in a lot of different articles and my takeaway was that no one is quite sure where journalism is going next. I think one of the biggest conclusions we can draw is simply that we don't know how to address people's distrust of the news, and that the media landscape is rapidly evolving into something entirely new that we haven't seen before. It's only with the benefit of hindsight we'll be able to really analyze what's happening, and why, and how we should respond to it.

The Editorial
(presented without memes)

Since 1990, American obesity rates have climbed more than 10%. At the same time, Americans’ trust in the mass media to provide accurate news coverage has dropped to historic lows, with only one-third of Americans expressing confidence in the news. What do these things have in common? Only that no single, simple prescriptive solution exists. But efforts to combat rising obesity rates have offered some insight into why “fake news” is so hard to address, and some of the initial steps that can be taken to slow its growth.

It’s unlikely that fake news can ever be completely eradicated, or that people’s trust in their media can ever be fully restored. The impact of fake news, however, can be reduced, and perhaps, with time, some amount of public trust can be regained. And the first step is to start labeling the news so Americans know precisely what they’re consuming.

Efforts to combat the obesity epidemic have resulted in a plethora of labeling initiatives: the FDA regulation that will put calorie labels on menus, for example, or the re-design of nutritional labeling on store-bought products. Actions like these were designed to educate, encourage, or exemplify healthy lifestyles. But obesity rates have continued to climb. Ultimately, it is up to the discretion of the consumer to decide what to eat and how to behave. In a free, capitalist society, people are welcome to choose to consume what they want, and manufactured foods are designed to be tempting. Heavily processed and saturated with oil, sugar, and fat, the food in American grocery stores is engineered to be desirable.

In the digital age, where profitability can be measured in clicks, news is likewise engineered to be desirable and tempting. “Clickbait” is one such example of “junk food” media: flavorful and alluring, yet nutritionally lacking. According to a survey done by the Pew Research Center, about 70% of Americans have used Facebook. And more than half of Americans who use Facebook claim to have seen “fake news” there. But just as Americans won’t give up junk food, Americans won’t give up Facebook, and the ease of distributing news there - real or fake - means that it’s unlikely that fake news will ever be eliminated from the platform.

Yet Facebook has taken steps to try to help people identify fake news. Since December of 2016, it has employed third-party fact-checkers like Snopes and Politifact to identify fake news stories. Stories posted on Facebook often appear with a second link beneath them to a fact-checking site that proclaims the story false or misleading.

The labeling of news as “fake” does not stop it from being disseminated, any more than the caloric labeling of food prevents people from buying it. It does, however, educate people, allowing them to make a more informed decision on their own behalf. Freedom of choice, and freedom of speech, have always been strongly held American ideals; labeling fake news, satire, or propaganda gives people the freedom to read and distribute it while making its bias more transparent.

But does it work?

Consider tobacco consumption. Cigarette use in America peaked in the early 1960s. In 1966, the Federal Cigarette Labeling and Advertising Act went into effect, requiring a “conspicuous” warning label on packs of cigarettes. Every year since 1975, cigarette consumption per capita in the United States has dropped. Today, less than 20% of Americans smoke, compared to about half in the 1960s.

Cigarettes and junk food are here to stay; consumers demand them. Fake news, likewise, isn’t going anywhere. But labeling junk food to make it clear to consumers what they are buying allows them to make better decisions, if they choose to. And FDA labeling doesn’t only target junk food, but all food, allowing buyers to compare and contrast. Having third-party fact-checkers examine and label news stories as truthful, false, satirical, partially true, biased, or lacking in context is one way for people to have a better idea of the validity of the news they consume.

In the last five years, the growth of obesity rates in America has slowed. Educating people about what’s in the food they eat did not eliminate the problem, but it did appear to attenuate it. Likewise, labeling cigarettes as harmful, combined with public awareness campaigns, decreased consumption. In a free society that values a consumer’s right to choose, harmful products are difficult, if not impossible, to eliminate, and there will always be some portion of the population that chooses to consume the product, regardless of its effects. Thus, fake news, like junk food or cigarettes, is here to stay.

No single, simple, prescriptive solution exists to the fake news crisis, just as there is no single, simple, prescriptive solution to obesity. But third-party fact-checking and the clear labeling of news are a good place to start. And initiating the conversation about what constitutes “fake news” in the first place allows consumers to hold themselves, and each other, more accountable for the choices they make. Labeling the news with a tag from a third-party fact-checker ensures that the decisions people make are informed ones.

People don’t trust the news. Maybe it’s time for the news to start trusting the people.

 "Should I kill myself, or have a cup of coffee?" - me, while writing this
Also literally another Camus quote.

Monday, January 20, 2020

Objectivity Is Dead. But Journalism Doesn't Have To Be.

I'm in the final quarter of my journalism program and I have only two classes.  Those classes are "The Art of the Interview" and "Opinion Writing."  Opinion Writing (which gives an overview of subjective journalism such as reviews, personal essays, op-eds, and columnist-style work) is the one I'm most excited for because I have long felt that a lot of standard news journalism, the stuff you read in the paper, is lacking in any sort of human voice.  In a world where everything is algorithms, I like to think that language and the art of story-telling remains a defining trait of humanity.

In conversations about the future of journalism in a digital world, one of the central linchpins is the public’s trust of the press, or lack thereof. Arguably, the public’s increasing distrust of the media they consume is an inevitable symptom of untrustworthy “fake news” that has evolved with the rise of the internet. However, these conversations are predicated on the concept of digital news as duplicitous and consumers as hapless victims. Only a few of these conversations hint at the idea of news as a cooperative endeavor, one in which consumers share equal responsibility for the veracity of the news with the journalists who produce it.

The question of how to resolve or address the public’s growing distrust of the media likely doesn’t have a single answer, but I believe that confronting the inherent bias of reporting is a good place to start. Suggesting that bias in reporting is “inherent” is not a criticism of journalism; I have long believed that objectivity, a worthwhile goal to strive for, is also an unrealistic and unattainable one.  It's literally impossible to cover all viewpoints, and even if you did, giving all viewpoints an equal share of attention is itself biased... because not all ideas are created equal.

Dr. Mitchell Stephens asserts that journalism has only the “pretense” of objectivity, and suggests that such a pretense is harmful to the journalistic integrity of reporting news, as it creates an unachievable goal. Everything from word choice to paragraph organization can reveal a bias, and even a truly neutral reporter could have his articles interpreted as biased by a biased audience. Even the selection of the news itself reveals a bias.

What's more, the bland delivery of facts in news doesn’t make for a good or engaging story, and attempting to hide the human reporter behind a flimsy facade of objectivity denies the artistic and emotional side of journalism. This "hiding" of the reporter feels dishonest, and perhaps it's part of the reason that the public is disinclined to trust the media. A journalist who remains a shadowy, unknown figure with hidden motivations is not one people are going to put a lot of trust in.

I do not think holding a personal opinion is out of line with reporting the facts; the concept of “truth-telling” is a key principle of the SPJ Code of Ethics and is defined as the “first obligation” in Bill Kovach and Tom Rosenstiel’s “The Elements of Journalism.” But telling the truth does not mean one must be devoid of personal investment or opinion. In fact, I would prefer reporting that plainly states its personal investment, to provide context for the work itself. Pretending that a journalist can have no conflict of interest with the topic they are disseminating is holding the journalist to an inhuman standard. A passionate and personal reporter can still be truthful, but by revealing their own voice, I believe that they are disclosing to the reader their possible blind spots, and allowing readers to scrutinize their work more effectively.

Rule 0 of journalism: don't wear a bra. 
This allows readers to scrutinize your work more effectively, if you know what I mean. ( ͡° ͜ʖ ͡°)

Walter Lippmann describes journalism as a sort of scientific method, leaving room for a degree of subjectivity or interpretation in reporting. Scientists do not conduct an experiment, put the data into tables, and then publish it without commentary. Every scientific paper includes a conclusion, one that suggests what the data means, how it might be applied, and what further research is needed. Scientists are well within their rights to focus their attention on the most promising experiments, to propose significance, to suggest and even advocate for meaning, based on their evidence. The key is that scientists are founding their "advocacy" on a cornerstone of factual evidence, and are clear when their hypotheses are only hypotheses.

True objectivity is not a single, inflexible, universal truth. Facts are filtered through a personal lens. Having journalists give interpretations of the news is not necessarily mutually exclusive with objectivity; in fact, it can bolster objectivity, by revealing their bias and providing context to the facts they've uncovered.

Of course, this is putting a lot of trust in the readers to exercise their due diligence in consuming the news.  As much as readers don't trust journalists, journalists don't trust readers, either.

Readers who are educated in the ways to spot issues with trustworthiness, transparency, and sourcing are empowered to repair them. Arguably, digital media may be less stringent in its sourcing, relying instead on hyperlinks. But this makes its sourcing more transparent, as well, assuming that the readers click the link. Readers have the ability to hold writers accountable thanks to comment sections online, where they can offer real-time input and corrections. Digital media may have more errors, or, perhaps, its errors are simply more likely to be noticed. But this means they can also be corrected more readily.

Marc Fisher, a senior editor at the Washington Post, wrote an article for the Columbia Journalism Review titled "Who Cares If It's True?"  The article isn't as dismissive of the truth as the title implies.  Rather, Fisher proclaims that "traditional" fact-checking is increasingly being held to an "impossible standard."  But that "impossible standard" of air-tight fact-checking does not preclude fact-checking from journalism. Instead, it allows for errors to be made, and corrected in good faith. Fisher offers the hopeful viewpoint that modern readership is selecting for more copy-editing and that “the truth emerge[s] from trial and error.” This sentiment mirrors John Stuart Mill’s model of the “marketplace of ideas,” and places a degree of trust in the next generation of consumers to make the right choices in what media they read, share, and support.
Millennials be like...

Bob Cohn, in his article “Old-Media Values in New-Media Venues,” suggests that the gap between print news and digital news is closing, and that journalists are cross-training. I would argue that, as the news and the people who produce it evolve, so do their consumers, and that the readers of the news are themselves “cross-training.” People are no longer interacting with their news passively; they are ceasing to be consumers and instead are becoming participants. And that engagement could be the saving grace of journalism in the digital era.

Monday, January 13, 2020

Video Game Reviews: The Last of Us and Innocence, A Plague Tale

For all of my general nerdiness, I don't actually play a lot of video games.  Above all else, I'm a bookworm.  I like reading and writing, and maybe like, three Netflix shows.  (For the curious: Kimmy Schmidt, BoJack Horseman, and The Umbrella Academy.)

Part of the problem with video games, for me, is that they require a lot of investment in the medium.  I don't like video games for the same reason so many people don't like reading: it takes me too long.  Reading is less fun if you're not a strong reader.  For me, a person whose coordination is so bad that I can't chew gum and walk at the same time, video games are a difficult story-telling medium.  It's one of the only story-telling mediums in which, if you suck at the medium, you can actually get stuck in the story.  With a book you can technically skip parts you don't understand; in video games, if you are caught on a level, that's it.  For platforming games like Super Mario or Sonic, this didn't matter much, but for story-driven games, it's maddening.

This is one reason I like sandbox games like Skyrim. You can't really get "stuck," because it's an open world and at times feels truly limitless.

Disclaimer: I spend 90% of my time selling cabbages in the marketplace in Solitude, 
not actually doing quests.

Today I'd like to discuss two story-telling games I became acquainted with in the last year: The Last of Us (2013) and Innocence: A Plague Tale (2019).  The two have a surprising number of similarities.  Both are linear story-based horror genre games; both hinge on an "escort mission;" both heavily involve themes of childhood purity.  One is set in a future with zombies, and the other in a past with rats, but their strengths are similar, and both are games I'd recommend.


Let's start with The Last of Us.  It's a pretty popular game and you've likely heard of it; it won a ton of awards, including over 200 "Game of the Year"s.  Like many others born in the '80s who grew up with the arcade light-gun shooter "House of the Dead," I have a soft spot for zombie games.  House of the Dead introduced us to the big fat zombies and to the chainsaw zombies, which we see in most zombie games nowadays.  Before The Last of Us, my favorite zombie game was probably Dead Rising, which features the standard slow-moving Romero zombies and cartoonish fun ways to kill them.  (It also inexplicably features a boss battle with a chainsaw-wielding clown. The clown isn't a zombie.  He's just sorta... there.)

He dual-wields chainsaws and self-awareness.

The Last of Us does a great job of putting a fresh spin on its zombies.  The zombies are victims of a fungal infection, not a virus, and their movements are fantastic.  They stagger around in a convincingly horrific manner that calls to mind the disjointed mannerisms of an opioid addict, but when they spot you, they suddenly get fast.  We also get to see "Clickers," people in the late stages of the fungal infection whose faces are covered with fungal growths and who are sightless, resorting to clicking in order to navigate.  These zombies have a unique design and are also properly scary to look at.

The apocalypse itself is also properly horrible in The Last of Us.  The game is set 20 years after the outbreak, and we see plenty of military quarantine zones and compounds with people.  As the game progresses, there are fewer and fewer zombies, and more and more bad guys, including bandits, para-military organizations, cannibals, and crazed survivalists.  The game's message seems to be that the real danger is other people, not zombies, which I can get behind.

The scenery is breath-taking.

 Concept art.

The two biggest strengths of the game are, in my opinion, the story-telling and the voice acting.  There are several cut scenes that could have been cheesy but displayed some really ridiculous raw human emotion and talent.  Among them are when the main character, Joel, loses his daughter; when Joel's partner, Tess, is bitten by zombies and demands that Joel go on without her; when Henry commits suicide after his younger brother turns; and when Ellie, the girl Joel is escorting, makes her first kill.  These scenes could easily be over-acted, but in every one, you feel the full gravity of what happened to the characters, and hear it plainly as they choke up, their voices break, and they express their sorrow.

There are also moments of wonder that keep the game from being a purely nihilist experience.

The storytelling, like the voice acting, also packs a punch.  There are plenty of examples, but the one I'll give is this.  Many zombie games, such as Dead Rising or Left 4 Dead, lack children.  Generally, video games draw a heavy red line when it comes to shooting kids, even zombie kids.  It's a good line to draw.  Which is why it was so shockingly unsettling to me when, while exploring an abandoned home in The Last of Us, Joel suddenly stumbles into a bedroom with a crib in it.  This detail, implying the existence of a baby, was a punch to the gut, and was followed immediately by a second punch when Joel walks into the next bedroom to discover a brightly-colored child's room with twin bunk beds.  A note in the room explains that the family was evacuated safely (although their family dog was released into the wild).  Later in the game, a second home with a crib, pet carriers, and toys implies a family life, but this time, the fate of the family is left ambiguous.  Finally, while clearing out sewers, Joel finds evidence of a pacifist community that included children.  Toys and makeshift classrooms in the underground compound are all eerily unoccupied, until finally, Joel stumbles into a locked room, where a man has killed himself; beside him is a blanket thrown over several little bodies with their legs poking out.  Written on the floor beside this pile are three words: "THEY DIDN'T SUFFER."

Holy shit, Last of Us!  You really didn't shy away from shit here, and I commend you for it!  While a zombie child might have been tasteless, the desperation evident in the whole scene and the graceful way it's handled deliver a perfect emotional impact that not many games could pull off.

The Last of Us does a really good job of making a serious and emotionally impactful post-apocalyptic game.  It has some first-person shooter elements but manages to avoid ever having a cheesy or cartoonish style.  Any time you start to get desensitized, the game lobs a new whammy at you, ensuring that you never quite shake off the feeling of wrongness that so many other post-apocalypse zombie games fail to capture.

There are a few minor errors that can be pretty easily forgiven.  (For example, in one scene outside of Pittsburgh, you find a boat by the river with a note saying it came back from sea, which doesn't make sense, as Pittsburgh is a landlocked city whose rivers feed into the Mississippi, not the ocean.)  The gameplay mechanics are smooth and the AI isn't awful; unlike in so many other escort-mission games, the people you travel with don't interfere with your sneaking or get in your way much.

Innocence: A Plague Tale feels like a sister game to The Last of Us.  It's set in the past, the 1300s to be specific, instead of the future.  Playing as a young girl, you move through a village and later a town infested by hordes of plague rats, escorting your very young, frail brother.  Like The Last of Us, it has some scenes of upsetting, grotesque disease that hit an emotional note.  There's an ominous vibe from the beginning, when the main character, Amicia, loses her pet dog.  Later, the game does an exquisite job of balancing disgust with tension, as Amicia and her brother, Hugo, navigate through a farm where cow carcasses are stacked up high, and later, the aftermath of a battlefield covered with corpses.

As in The Last of Us, the story-telling does a great job with pacing and upping the ante.  The rats are more of a problem in the beginning, and as the game unfolds, you spend increasing amounts of time avoiding people, including English soldiers, town guards, and members of the Inquisition who are hunting you.  And as in The Last of Us, there are some truly devastating moments; Rodric's death near the end hits with a surprising amount of raw emotion.

The mechanics are similar in terms of crafting weapons and relying on ranged attacks; like The Last of Us, there's a lot of sneaking around.  There are more puzzle-like elements and a little light platforming, but the puzzles are never boring, and the scenery keeps the game from ever getting frustrating.  The voice acting here, too, is incredible; the children's voices and dialogue are well-done and manage to side-step the shrillness or babyishness that so many children's voices have in video games.  In particular, the main character, Amicia, has a great voice actor, who is able to communicate complex feelings toward her brother: affection and concern are often mixed with annoyance and impatience.

There's definitely a degree of hyperbole in this game; the numbers of rats, dead cows, bones in the catacombs, and so forth are beyond reasonable.  Yet the game manages to hold on to the same serious, grounded tone as The Last of Us.  Both games take slightly ridiculous concepts and use them as a backdrop to explore very real human emotions, and I think they're nuanced and elegant in the stories they choose to tell.

One curious contrast between these games comes in the animation.  The Last of Us uses a copious amount of water in its scenery and gameplay.

There's definitely a bit of flexing in the animation; one segment occurs in a snow storm, with flurries of snowflakes and prints in the ground showing a high degree of attention to detail.  Everything from the blood splatter to the shadows on the walls is incredibly detailed, and the studio that developed this game deserves some recognition for that.  The studio, Naughty Dog, was established in 1984, and according to Owler, it has a staff of 316 and generates an annual revenue of $100 million.  Innocence: A Plague Tale was produced by Asobo, which was established in 2002, has a staff of 140, and generates an annual revenue of $7 million.  Innocence isn't as polished as The Last of Us; the individual rats, for example, have a slick, rubbery appearance instead of a furry one.  A lot of the textures in this game are a little more "shiny" and inorganic, but it's still very easy to get sucked in.  The game feels like a much older game, at least animation-wise, simply because there were fewer development resources and less talent behind it.  But this isn't a criticism; considering what it was working with, Asobo produced a remarkable game that's lovely to watch.

"What kind of person watches video game play-throughs on YouTube?"
That's right.  I'm the target audience for those videos.  It was me all along.

Like any game with a very linear storyline, these games don't have a huge amount of replay value, but I still think they are well worth a purchase.  There are plenty of games out there for people who like gaming for the interactive elements, but too few that really invest in their characters and plots.  The Last of Us and Innocence: A Plague Tale are two rare gems that capture the human experience and don't cheap out on the artistic elements of their games, and I would be more inclined to play video games if I felt like all of them put as much effort as these two titles into the writing, acting, and story-telling.

Until then, it's Skyrim for me.

Monday, January 6, 2020

New Year, New Project: Introducing "The Kick"

I believe I've alluded periodically to my obsession with "The Umbrella Academy."  It's a comic published by Dark Horse and was recently made into a Netflix show.  It's got just about every element I like in fiction: dysfunctional superheroes, existential angst, a flamboyantly fabulous asshole who you can't help but love.

While discussing the ten-episode show for the umpteenth time, I said, "I wish the writers would just do what I want."  To which my husband offered a childishly uncomplicated solution: "Then you should either become a writer or publish your own work."

He's right, of course.  Most of the fictional content I generate uses existing franchises, which precludes it from being published, monetized, or useful in any way whatsoever.

In case you're wondering what I want from The Umbrella Academy, it's this.  I have a theory that Reginald Hargreeves works for the Temps Commission.  I believe that he orchestrated the apocalypse from beginning to end, beginning with, of course, his own death.  After all, for the apocalypse to happen, all of the Hargreeves had to come together in the first place; Klaus had to discard the notes about Vanya where Leonard could find them; Allison and Luther had to fuck up horribly to push Vanya over the edge; Five had to fuck up the timeline further to ensure everything went to shit.  This explains a lot, including how Reginald knew the apocalypse was about to happen in the first place, and why he was such a shit father.  Because he had to guarantee the kids were poorly-adjusted adults in order to cause the end of the world.  What's more, it explains how Grace knew about the apocalypse, and why Reginald was so adamant about keeping track of how long Five had been gone... because he knew all along that Five was working for the Temps.

I hope you've seen the show or else this article so far probably seems like incoherent rambling.

I believe that Season 2 would be amazing if all the kids went back in time and had to confront Reginald as an agent of the Temps Commission.

But this post isn't about my fan theory.

Although if it were I would show you this chart in which it's clear Reginald is the source of the apocalypse.

It's about Andrew's suggestion that I stop wasting my time on existing franchises and try to build something new.

"After all," he pointed out, "you have a ton of ideas.  If you just change the names of the characters, plenty of your stories are divergent enough to be a separate franchise."

My main character be like

Andrew was right, of course.  I have a lot of unrealized ideas, many of which I never endeavored to develop fully because I assumed they were "unmarketable."  But lately, dark and dysfunctional superheroes are in, and there are plenty of publishing houses (Dark Horse, Vertigo, and Image Comics, to name a few) who love the dark subversion of the superhero genre, especially when it includes familiar elements and tropes.
Umbrella Academy, for example: X-Men meets Arrested Development.

So, I ask you, dear reader, what is one of the most common tropes of the superhero genre that has inexplicably not been subverted?

Hint: I bitched about this well over a year ago.

The "young boy sidekick" theme is one that feels increasingly fake and unrelatable in today's culture, and incidentally, I already had an idea for a story exploring this.

Here's my general outline and characters for my newest entirely original comic book idea, "The Kick," hopefully hitting shelves within a year or two, depending on how quickly I can get a storyboard onto the right person's desk.

The Kick

  • Tomas Castiel: a 17-yr-old "Enhanced" individual whose superpowers have just manifested with puberty.  Bright-eyed and bushy-tailed, our young protagonist aspires to become a popular superhero.  He lives in Los Angeles with his two sisters and patrols the city at night with the aid of a police scanner he purchased off of Craigslist.
  • Alec Slick (aka "Tesla Man"): a tech mogul and ultra-popular superhero who is notable for saving the world from an alien invasion while he was in college.  Now 38 and the head of TL-Corp, Alec is a charming, handsome, clever, and filthy rich eccentric who spends a sizable chunk of his time hanging out in his personal resort and spa on an orbiting space station.
  • Aria Platsky: a burned-out superhero publicist who juggles various case files for a company called "Heroes on Dial," which is basically a mall-cop rental service, but for superheroes.
  • Beth: The first Kick.
  • Tristan: The second Kick.
  • The Oracle: an Enhanced with the power of shape-shifting, who has given up the superhero life since an accident left him horribly deformed and slightly mentally unstable.  He's Tomas's confidant and friend.

Volume One

Overview: Castiel is an aimless latchkey kid with recently discovered superpowers.  One night, while intervening in a drug store robbery, he comes face-to-face with Tesla Man, the world's most popular and beloved superhero.  Tesla Man expresses interest in mentoring Castiel and offers him a partnership.  To protect his identity, Castiel takes on the mantle of "The Kick."  The two proceed to go on various thrilling adventures against Tesla Man's various supervillain rivals, including The Entropy Institute, The Post-Master General, Professor Pandamonium, Nightshade, and rival corrupt businessman Gene Hamilton.

Alec Slick is everything Castiel could want in a mentor: supportive, encouraging, flattering.  The only one who seems suspicious of him is The Oracle, who believes that the closeness between Castiel and Slick is weird.  (Castiel believes Oracle is only jealous; Oracle is 27 and hardly has room to talk.)  As the two get closer and closer, Slick and Castiel eventually end up in a romantic relationship, kept secret to protect Castiel's identity from too much scrutiny.

It slowly dawns on Castiel that he has been groomed from the beginning and that Slick is a clever manipulator who orchestrated the entire "mentorship."  The final straw comes when Castiel realizes that "Kick" is short for "Sidekick," and they were never equals or partners; Slick has always seen him as lesser, and his insistence on secret identities and so forth was all designed to maintain a questionably appropriate relationship.  Full of guilt, shame, and embarrassment, Castiel resigns as "The Kick" and returns to a life of anonymity.

Volume Two

Two years later.  Castiel is now nineteen and a sophomore at a community college.  He has rebranded himself as "The Scorpion" and is trying to make a name for himself as an independent superhero.  Regrettably, he's got a lot of competition; Los Angeles is a major city for Enhanced trying to do the same.  He goes to Heroes On Dial to try to get some help rebranding, but lacks the money to purchase their services.

He discovers that Tesla Man has a new "Kick."  Remembering his own experience and worried it's happening again, he approaches the new Kick and tries to warn them, but is dismissed as being a jealous ex.  He goes back to Heroes On Dial, desperate for help to take on Tesla Man.  No one wants to help him, except for Aria Platsky.  Operating independently of Heroes On Dial, she takes on Castiel's case, trying to dig up dirt on Tesla Man and help Castiel rehab his own image.  (For one, she tells him to ditch the moniker Scorpion, as no one likes scorpions.  He renames himself Scorp Kid, which she says is even worse.  Unfortunately, it sticks after he foils a major plot by Nightshade to poison the city's water system.)

With Aria's help, Scorp Kid tracks down the previous "Kick," and learns the tragic story of the first Kick, who died.  Tristan, the second Kick, wants nothing to do with Scorp Kid's crusade against Tesla Man.  Fortunately, Scorp Kid manages to get enough dirt on Tesla Man to take him to trial.  The nation watches the drama unfold with bated breath as their beloved protector is accused of abusing his power, arrested, and put on trial.

While Slick is on trial, aliens attack the city, and Tesla Man defeats them.  He returns from fighting aliens in time to make his final trial appearance.  The jury is out for less than a minute and returns with a Not Guilty verdict, clearly grateful that he saved the world again and willing to forgive and forget all misdeeds.

The judge calls this a clear, gross miscarriage of justice and calls for a mistrial.  In the meantime, he sentences Slick to 15 months of house arrest for violating the conditions of his bail by going into orbit to fight aliens.  Castiel is deeply discouraged by what happened, but Aria suggests that at least he's getting some semblance of justice, and, more importantly, has disrupted the relationship between Slick and his newest "Kick."

Volume Three

One year later.  A jury has recently acquitted Alec Slick of charges of child endangerment and sexual misconduct.  Scorp Kid, walking home at dawn after a night of superhero patrol, watches in disgust as the story unfolds in the window of an electronics shop.  Slick transmits a message to the people of Earth from his space station, where he is still under house arrest.  He thanks the people for believing in him and states that he thinks the charges brought against him are indicative of a larger problem, which is that people don't trust Enhanced people.  He says that the problem is that the Enhanced have a very different culture and are not well understood by normal people, and that they aren't integrating well into society.  To rectify this, he says, he is starting up a "finishing school" for promising young Enhanced on his space station, where he can mentor classes of students and oversee their actions to ensure no one else ever has to go through what he has.

Castiel turns away from the storefront to walk home, hands in his pockets, a single thought bubble forming above him.  "God damn it..."