In the early 21st century, a new social movement colonized the Bay. Its origins were scattered and asynchronous, appearing, under various guises, in late-night dorm room speculations, the musings of niche blog posts, and circumspect academic papers. The movement called itself “Rational Utilitarian Love” (RUL), and predicated itself upon doing good to the greatest extent possible. It promised to be radically different from previous attempts at improving the human condition. Reason, evidence, and logic would be its guiding lights; it would root out the distortions of religion, emotion, and the silly frailties of the human mind. Early on, it attracted only a small, but passionate, audience—from technologists in Mountain View to absent-minded academics in England.
In the end, of course, RUL succeeded. Today, we may consider the tenets of RUL obvious, even axiomatic. We might look at the past with an attitude of horror and bewilderment—how could they have been so wrong? We might also wonder, how did RUL triumph in the first place? How did a motley group of rational, dispassionate do-gooders achieve success in a world that worshiped blind passion, bias, and frankly, idiocy?
To grasp the answers to these questions, we must probe our past. We must understand the complex, contradictory, and conflicting forces that battled, mingled, and reconciled to produce our present.
This is a historical retrospective on RUL—examining its origins, its successes and its failures, its visionaries and its critics.
Epistemic Status: 89.382%
Part I: Setting the Stage
People in the early 21st century lived strange, unhappy lives. They may appear utterly alien to us, with their monogamy, carnivorism, and primitive forecasting techniques. But we must remember—we may appear no less alien to those living in the far future, for whom we toil today.
I believe the best method to understand the lives of our predecessors is through their own words. Recent archaeological digs in California discovered a journal written by an early RULer, named Jude. Here, I present excerpts from his journal:
9.17.2010
Today concludes my first week of college. I’d say I like it so far—it’s definitely a big shift from high school. It’s weird being in a place where I don’t have to hide my interests or who I am. I was doing my math homework in the lobby today, when some dude came up and asked what I was working on. We started talking about linear algebra, but soon had a whole conversation about Gödel, Escher, Bach and Aldous Huxley. Can you imagine that happening at Paloma High? His name is Declan, and he seems pretty cool. I’m still stressed out, given dad’s health and mom’s whole mortgage situation, but college has proven a good distraction. I’m happy to have a chance to reinvent myself. None of these people know the awkward, weird Jude from high school, thank god.
9.24.2010
I talked to this girl named Ava today. She’s in my intro to philosophy class, and we hit it off after I complimented her tote bag. It had this crazy, multicolored geometric pattern on it, and turns out, she had sewn it on herself—isn’t that cool? She has strawberry earrings, smells like lavender, and is really pretty. I kinda, sorta, maybe have a crush on her.
9.29.2010
Sometimes, I’m jealous of my mom. Even with everything crashing and burning around her, she can always go to church and pray to Jesus and pretend things are fine. Today, she told me that the doctors had prescribed a new treatment for dad. Instead of researching what that means for us, her immediate response was to go to church and be with friends. I sometimes wish I had, like, 20 fewer IQ points, so I could also accept those stories of talking snakes and floating zoos. Right now, even if I forced myself to, I couldn’t believe in God—I’m sorry, mom, but if there really was some magical old guy in space looking out for us, we could afford our house and dad wouldn’t be hospitalized right now.
Because I’m not religious, I have to do the hard work of finding my purpose for myself. My philosophy professor has us reading Camus and Sartre and all of those other pretentious French motherfuckers, and instead of helping, they just confuse me more.
I sometimes miss high school, because then, at least I had a single goal to orient my life around. College admissions were stressful, sure, but still gave me a well-defined objective and an obvious measure of success. And yeah, I didn’t get into MIT, and yeah I got waitlisted at Stanford, and yeah, maybe I’m going to my fifth-choice school, but it’s still freaking Berkeley. Opening that email was honestly one of the highest points of my life. But now what? The world is my oyster! I have all the freedom I could ask for! But honestly, that just makes things worse. I can always sell out to some big tech company and get some cushy, six-figure job, but that just feels so… empty. Is that all there is? I wish someone took away my freedom and handed me purpose on a platter.
10.08.2010
I think today legitimately changed my life. Declan had been badgering me all week to see this obscure philosophy lecture with him. I didn’t want to go at first, but I had nothing else going on—it’s not like I go to parties. And so, I spent my Friday night voluntarily attending a fucking lecture on ethics. Damn, I will always be that high school loser, no matter how desperately I try to hide it. The professor’s name was Peter Vocalist. He didn’t attract a huge audience; there were maybe a dozen others there. When we walked in, Vocalist was in the middle of talking through this thought experiment—you’re walking in the park one day when you notice a child drowning in a pond, whom you can easily save. However, you’re wearing a nice suit that’ll be ruined if you do. Do you save the child? Obviously, you do—it would be sociopathic not to. Vocalist thinks that if we apply this same reasoning to the real world, then spending money as flippantly as we do is deeply immoral. Instead of buying that latte at Peet’s today that I only half-finished, I could have helped save a life. In essence, I’m letting children drown all the time.
There’s some small student club called RUL that hosts discussions on Vocalist’s works. Declan’s a part of it. Maybe I’ll attend a meeting next week.
10.20.2010
RUL is incredible. It’s a small club and composed entirely of white guys, but I’ve never met anyone like them. The intelligence of the group astounds me. I was too scared to talk at all during my first meeting, since they were throwing around terms like hedonic adjustment and randomized controlled trial that completely flew over my head. I’ll have to ask Declan to explain later. In the meantime, they directed me to this website called NotAsWrong, which they said provides a decent introduction to their ideas. Despite the jargon, it’s clear their focus is just on doing good in the most effective way possible—which is a pretty intuitive and obvious goal.
I don’t want to go all in yet, since it does seem kinda culty, but I’ve been mulling over their ideas a lot. I can’t seem to find any flaws yet. And if they’re right, well, all of those thoughts about finding my purpose and whatnot are irrelevant. There’s suffering in the world, and I can do something about it. What else matters?
10.24.2010
Ava rejected me today. I asked if she wanted to see a movie this weekend—The Social Network just came out, and looks pretty good—and she just flashed this half-smile overflowing with pity. Like she was looking at a wounded puppy she had to put down. She said some bullshit about liking us as friends and wanting to keep that dynamic.
Whatever.
I think I can do better anyways, honestly. She’s majoring in sociology and film, for god’s sake. I’m tired of pretending to care about her woke bullshit, as if my “white privilege” has prevented my mom from losing her house, or my dad from getting cancer. When I’m making triple her salary in a couple of years and she works some low-level job at the ACLU (or more likely, Starbucks), maybe she’ll regret it. Maybe I should only date people who can do calculus. Or at least write a “Hello World” program.
I’m being mean. But I’m hurt.
11.15.2010
I’m spending more and more time with RUL. I love the group, but the philosophy makes me feel pretty guilty. Completely by chance, I have all of this wealth, compared to the average person on Earth, and I’m just squandering my resources. Every time I spend money on myself—like the concert tickets I just bought for that Vampire Weekend show next month—I think about that child, drowning in that pond, as I callously keep walking. On the other hand, at least I’m doing something now. Ava and her lefty friends go to those Occupy protests, and always have this annoying air of moral superiority about them. But it’s not like they’re going to accomplish anything. My mom isn’t getting her house back because a bunch of stoners at Berkeley pranced around in a park for a few days. And let’s be honest, they know that too—they wouldn’t go if they couldn’t post about getting tear-gassed on Facebook. RUL, though, is genuinely trying to think about doing good. I know for sure that the $4,000 we pooled this semester to donate to malaria charities will save someone’s life. I feel like I’m doing something real, like I have some control and agency over what happens in the world. Even if I can’t control anything in my own life.
11.18.2010
I was hanging out with a couple friends from RUL today, and I ended up telling them about Ava. I hate how powerless love makes me feel. How utterly irrational it is. Our brains, for stupid, blind evolutionary purposes, trick us into falling for people who might be totally incompatible with us. We think we’ve found someone we like—someone who will love us for who we are—but with high probability, our emotions are merely setting us up for heartbreak, loneliness, and ugly endings.
After spilling my guts out, the RUL guys, being them, immediately started hypothesizing ways to improve society’s approach to love. As much as I love RUL, I feel like sometimes they can go a bit overboard with their whole “efficiency is everything” mindset. One of them said he had read some study that claimed that sustaining enough eye contact with someone over an extended period of time can ensure they fall in love with you. Then, they started talking about love as a RUL cause area, and ways to systematically improve the dating prospects of young, lonely men. Declan got excited about the whole thing and said he’d write a blog post about it later. I can’t say I was satisfied with how they responded to my moment of vulnerability, but they did give some interesting ideas to think about, and I can’t help but appreciate how nerdy and zany they are.
11.25.2010
Today’s CompSci class was so cool. Our lecturer, who once worked at Smeagol, discussed advances in the artificial intelligence field. They’re training robots to play Atari games, translate from French to English, and even drive cars. I was amazed—at this rate, just imagine what they’ll be able to do in 10 years!
It reminded me of that moment when I was 9, and went stargazing with my dad among the redwoods. Looking up at that sparkling sky, I asked him how we landed on the moon. He got excited, and talked about all these incredible programmers and mathematicians that made that happen. And then he smiled, and pulled out his phone. “This tiny thing I bought for a few hundred dollars from Best Buy? It’s more powerful than the room-sized computers they worked with at the time.” I stared in awe—at the phone, at the stars, at my dad. From that point on, I knew I wanted to build technology. I wanted to visit all those distant galaxies I saw that night, and I knew that God wouldn’t get us there. But carefully crafted lines of code might.
In related news, mom informed me today that dad’s condition was getting worse. Fuck.
12.04.2010
They brought in a guest speaker to the RUL meeting today. He’s a senior software engineer at Smeagol, and makes over $1 million a year. However, because he takes inspiration from RUL, he donates 50% of that to effective causes, leaving a mere $500,000 for himself. His altruistic self-sacrifice is so inspiring to me—and he shows that you don’t have to sacrifice your passions to be a RULer. His work seems so interesting too. He talked about all of the incredible technological ambitions of Smeagol’s founder, like getting us to Mars and using biotechnology to improve humanity. He also talked about the fun perks they have at work, like a bowl full of microdoses of acid, and a giant Slip ‘N Slide that leads to a kiddy pool filled with organic, vegan Jello. After hearing all of this, I think I’ve found what I want to do with my life.
12.15.2010
I reached out to the guy who spoke at that RUL meeting, and we had a long discussion about computer science, his career trajectory, and RUL… And guess what—I landed a spring internship at Smeagol today! Of course, I’ll be donating most of what I make to RUL-recommended charities. I would feel terrible about myself if I spent the salary on myself instead of people dying in another corner of the world.
Speaking of RUL, I’ve been reading a lot of NotAsWrong, and it’s crazy just how much emotion and bias hijack our everyday decision-making. Tribalism is an example of this—people are way more likely to donate to a stranger from their own country than to someone in a different one, even if the latter needs the money more. Does being born on the wrong side of a border entitle you to less help? It makes sense from an evolutionary psych standpoint, of course, given our origins as tribal people in Sub-Saharan Africa, but it’s stupid that we still fall prey to those biases. I want to commit myself to never being that irrational.
2.1.2011
So, Smeagol is not exactly what I thought it would be. They ended all the company perks after someone snitched about the psychedelic candy bowl. I don’t get to work on going to Mars or anything cool like that—instead, I write dozens of lines of code a day to shave 0.1 milliseconds off the latency of their latest product. I don’t even know what the product is; they said it’s for a government agency and they have to keep the details under wraps. Whatever. The work probably only seems boring and meaningless because I’m still an intern, and besides, it’s fucking Smeagol, so can I really complain? And the money they pay me is incredible. I’m donating most of it to a charity that buys malaria bednets for people in Africa, which is something I don’t feel particularly attached to, but whatever. My emotions hardly matter here.
3.15.2011
I’ve had a lot less time to journal, given the internship and school and stuff. But today was crazy. I was walking home from the BART station when this homeless guy jumped in front of me and got on his knees to beg for money. He had long, flowing hair and a short, woolly, unkempt beard. I tried to walk past him, but he held out his arms and stopped me. He probably saw my Smeagol lanyard and assumed I have money. He said he’d let me go, but to please just hear him out first. I decided it would probably be quicker to listen to his spiel and leave, rather than resist and possibly get robbed. He talked about his life—he grew up in a trailer park with heroin-addicted parents, and had to support himself all his life. He used to work a construction job, but he injured his back, and they paid him practically nothing for treatment and soon fired him. He couldn’t make rent, and has been living on the streets for the past few months. He has a Russell terrier, who looked kinda malnourished and scraggly. All he was asking for was enough money to buy himself dinner tonight.
I have to admit, I felt bad for the guy. But there’s no way that giving my hard-earned money to him—which, realistically, would end up in the hands of some drug dealer—would prove a more cost-effective way of doing good than giving to RUL’s top-recommended charities. And so, I said to him, “I hear you, man, but I’m already giving most of my money away to people who, frankly, need it more than you.”
He just stared at me with his sunken but intensely bright eyes for what must have been five minutes, and then, without saying anything, got up and left. I wonder if I could have been less harsh to the guy, but then again, neither his emotions nor mine really matter in this scenario. I felt like shit on the walk home, but I donated an extra $20 to the malaria charity, and that helped assuage my guilt. And then, I read more NotAsWrong posts about how we should just “shut up and multiply.” I don’t feel as bad now, though I must admit—those haunting eyes still pierce me at night.
3.18.2011
I talked about my experience with the homeless person at the RUL meeting. They all nodded empathetically, but concluded I did the right thing, given the circumstances. One of the members mentioned that when he gives, he sets aside 10% of his budget for “warm fuzzies”—stuff like community, family, and so on. He said this helps him feel less guilty, fit in better with mainstream society, and stay motivated, ultimately making him an even better RULer. But, he said, you have to be strict about it being 10%, no matter the circumstances. Otherwise, you’ll experience “lifestyle creep” and return to square one, using your money inefficiently. Maybe I’ll try something like that.
4.13.2011
The more I work at Smeagol, the more I hate this job. I figured out what my project was—that government agency they were talking about? It’s fucking ICE. And they’re building these giant drones to hunt down families who try to cross the border. I don’t know, I don’t feel good about this work…
But the more I get into RUL, the more it feels like I have to stay working here. Sure, my work is unethical, but I’m a pretty replaceable engineer—if I don’t do this, someone else will. And the money I’m donating is surely doing more good for the world than whatever harms I may be causing. I think the tradeoff is worth it, as queasy as I feel about it…
6.27.2011
My dad needs new cancer treatment, and insurance won’t pay for it. It’s expensive—it costs $20,000. I could pay for it using funds I’ve earned from my internship, but that amount exceeds the 10% I’ve set aside for stuff like this. Fuck, I am so torn. On the one hand, it’s my fucking dad. He spent his whole life caring for me. Even while he was sick, he made sure to visit me at school for Science Fair and pack me peanut butter and honey sandwiches in the morning. He taught me everything I know, and he is the reason I am where I am today. He’s such a kind and decent man—I can’t think of a single time he’s ever lashed out at me or mom, even while suffering through this awful, awful disease.
At the same time, that’s a lot of money. I could save something like 10 or 11 lives for sure, while only having a chance at extending my dad’s by a few years, at most. Of course, I know which option would emotionally feel better, but I’ve read enough NotAsWrong by now to know emotion isn’t a good guide for this decision. Fuck. What have I gotten myself into? Was it a mistake to ever get into RUL? I don’t know. Maybe I’m having one thought too many. Either way, I would feel bad, and I don’t know what to trust—my head, or my heart.
After this entry, Jude stopped updating his journal. Despite his flaws (such as buying those Vampire Weekend tickets), Jude should serve as an inspiration and a model to all RULers today. He displayed immense rationality, refusing to bow to instinctual emotions, social norms, or familial ties. He was an exemplary RULer, and in totality, he added at least 700 QALYs to the world—clearly, his life served a valuable purpose.
There were hundreds of other early RULers like Jude during this time, and together, they built the foundations of the movement. As we turn to the next chapter of RUL history, we will see how the seed they planted flourished, growing wildly beyond their dreams.
Part II: Expansion
One of the most important milestones of RUL history occurred in 2011, with the establishment of the 2.88e+8 Seconds Foundation. This organization essentially served as the Human Resources arm of the movement, directing talent to wherever it deemed most appropriate for advancing Global Utility. 2.88e+8 Seconds provided free, 1-on-1 career counseling to those it deemed High Potential, as determined by the U.S. News ranking of their alma mater, their SAT scores, and their Python proficiency. I obtained an email from their archives, which illustrates the useful service they provided. It concerns a man named Bert Einstein who wanted to be a physicist, before 2.88e+8 Seconds wisely guided him towards a higher-impact career:
2019 also represents an important year for RUL, for this is the year Sam Richman stumbled upon a fortune. Sitting in his dorm at a medium-sized technical school in Cambridge, Massachusetts, the 19-year-old came across an interesting—if unusual—post on the RULForum, which he frequented. The post was titled “New RUL Cause Area: Love,” and discussed the author’s experience seeing his friend upset because a girl rejected him. The author argued that his friend’s struggles were emblematic of a larger problem: modern dating culture represented a massive inefficiency, and improving the prospects of young men finding a mate would create a huge utility windfall. The author threw out a few proposals. We could have RULers create “Date Me” pages for their websites, and make a spreadsheet to efficiently match individuals with high compatibility, making the romantic process more algorithmic and streamlined. We could induce singles to have sustained eye contact with each other, he argued, as multiple RCTs show that this produces interpersonal attraction. More radically, the government could provide subsidies for women to date lonely men.
After reading the post, Richman sat in silent contemplation for a few minutes, and then laughed maniacally. The young prodigy had just left his Entrepreneurship seminar, taught by a woman revolutionizing the blood test industry and the CEO of a pre-IPO co-working company. That class had taught him to scan his environment at all times for arbitrage opportunities: free money lay in plain sight everywhere, if you knew how to see. And Richman knew how to see.
Richman did not pursue wealth for the usual reasons. He had no interest in flashy cars, gigantic yachts, or exotic vacations. Instead, Richman wanted to save the world. When he was just a toddler, chomping on a pacifier in his parents’ home in Palo Alto, Richman spontaneously realized that aligning an advanced artificial intelligence (AI) system with human values could prove a Herculean task. AI could cause human extinction—and even if that possibility carried only a small likelihood, the expected value of preventing the deaths of not only this generation, but trillions of future generations, outweighed every other socio-political and moral issue of his time.
While other children threw tantrums begging for Happy Meals, Richman carefully studied the tenets of utilitarianism, wanting to produce happy feels. While other children built towers of Legos, Richman built a plan for safeguarding humanity’s long-term future. While other children centered their lives around the pursuit of shallow, sensory pleasures, like gummy bears and breast milk, Richman centered his around making as much money as possible, so he could maximize the global utility function. And in a few decades, when the world was on fire and a malicious AI was at the gate, he knew those spoiled toddlers would come crawling—literally, crawling—to him for help.
Growing up, Richman traversed diverse paths to make money by any means possible: selling completed math tests, or writing bots to purchase and resell concert tickets at ridiculously high premiums. Ethics were not a large concern for him; he figured the large increase in utility he would eventually purchase far outweighed any harm he caused. But all of these ventures were puny, and could not fund his ultimate ambitions. He knew there had to be something bigger out there—some product that would reshape the world, rather than merely scavenge for scraps at its edges.
And so, when Richman read that post on RULForum, he became ecstatic. He had found the golden ticket he had eagerly awaited his whole life. Other, simpler-minded people might have read the blog and merely seen problems: a whole generation of young, lonely men were starved of female attention, and wrote pathetic, misogynistic ravings in response to their desperation. But Richman saw a market—a vast, untapped market, ripe with potential for disruption. There was a dopamine deficit for young men in America. A serotonin shortage. And Richman would happily fill it.
Richman had thought of a tractable, scalable solution to the epidemic of loneliness. A solution that was simple, and wouldn’t require engaging with the complicated social and psychological forces that academics believed produced the crisis. Most importantly, it wouldn’t require men to actually engage with women. His solution was called Self-Contained Affective Modules, or SCAMs. Here is how they worked:
Scraping public data from social media sites (a completely legal and ethical process), Richman amassed a vast, diverse collection of images of women. He kept these images in a nondescript folder on his computer.
Then, Richman trained a deep learning model and built a website that would transform any description of a woman into a digital avatar with extreme accuracy. For instance, one of Richman’s first users typed “A pretty, brunette girl with strawberry earrings, holding a multicolored tote bag” as a prompt.
Users would then possess an AI-generated girlfriend avatar, which they could click to hear phrases like “I enjoy hearing you talk about Fight Club” and “Don’t worry, I actually like guys with flabby arms.” To ensure that each avatar was unique and non-fungible, users received a “certificate of authenticity,” documenting ownership of the avatar on a blockchain ledger. The avatars themselves remained Richman’s intellectual property.
Richman’s true brilliance shone in his financialization of SCAMs. SCAMs could be made public, with users able to rank each SCAM’s attractiveness on a scale of 1-10. Users could then purchase and sell bets on the future average rating of each SCAM. These transactions became known as Female-Leveraged Electronic eXchanges (or FLEXes).
SCAMs were a hit. Though anyone could technically screenshot another person’s avatar, the blockchain-backed “certificate of authenticity” ensured that young men could feel the illusion of closeness to a fake woman. SCAM-ing and FLEXing became the predominant pastime for men under the age of 25; the FLEX market had an average daily trade volume of around $1 billion. Soon, a cottage industry of hyper-masculine influencers on YouTube emerged to help young men “seek alpha,” and learn how to FLEX themselves into wealth beyond their wildest dreams.
Richman quickly became a billionaire. Then, following the 2020 boom in SCAM prices, recently-elected President Andrew Yin of the Upwards Party, to whom Richman had donated $1 billion, decided to privatize Social Security and invest retirement accounts in SCAMs. It was a stroke of political and financial genius: as prices climbed higher and higher, the popularity of Yin and Richman soared too.
Naturally, RUL aligned well with Richman’s consequentialist worldview. After he became wealthy, he donated a significant part of his fortune to RUL organizations—and so, our once-niche movement, which had struggled to raise money for mosquito nets, suddenly found itself with billions to spend.
Ad buys for new books. Appearances on popular podcasts, like The Bro Rogaine Experience. Articles in prestigious, New York-based publications. RUL used its newfound fortunes to expand its reach immensely. But with fame came hate: as RUL became more visible, many outsiders wrote poorly thought-out, idiotic criticisms of the movement, angry at its (rightful) neglect of self-congratulatory, insignificant causes like racism and climate change. Despite the stupidity of these outsiders, RUL leaders began to worry: RUL had an optics problem. How would the ideology grow, when the masses had the irrational need to feel good about doing good?
One clever RULer, named Will McAsskiss, found a solution: RUL could rebrand its underlying philosophy as “Sensibilism.” With this new name, any criticism of RUL would be seen as opposing sensibility—and who wants to appear unsensible? RUL was so impressed by this strategy that it poured funding into a think tank, the Institute for Practical Smartness (IFPS), that would spread RUL’s newly rebranded gospel of common sense to tech billionaires, college-dropout startup founders, and other groups known for their interest in ethics and cautious, measured decision-making. IFPS even started a program called “Smart Scouting of America,” where fully-grown adults could wear wizard uniforms and earn “Intelligence Badges” in skills like “Double Cruxing” and “The Dark Arts of Sensibility.”
Now flush with cash, RUL hosted its first conference in 2021, calling it “GlobalRUL.” It was held in Paris, and organizers rented out the entire Palace of Versailles. RUL gave away free conference posters, prints, notebooks, and shirts, as well as RUL-themed onesies, Koozies, DVDs, and herbal teas. While some RULers were initially confused at the extravagance of the event—put on by a movement ostensibly dedicated to doing good in a cost-effective manner—their worries were quickly put to rest when RUL announced that “RUL community-building” was a new cause area. Spending money on RUL events would help it grow, and the compounded benefit of more future RULers working on high-impact cause areas would produce more future good than any current efforts, they reasoned. Thus, the privately chauffeured cars and catered Michelin-starred vegan food were all necessary purchases. To further its community-building, RUL even hired a famous British artist named Financy to make art for the conference, at great expense.
Here is the artwork Financy produced, in his trademark style:
Art critics heralded Financy’s painting, praising the brilliance of his singular vision, his evocative use of color and texture, and the awe-inspiring grandeur of the piece. Many were moved to tears by the pure, immanent beauty of his work.
RUL leaders, however, were not so impressed. They did not see the point of the painting, or how it related to their mission of doing good for humanity in the most effective way possible. They did not think the cost-benefit calculus justified spending this much money on what looked like a schoolchild’s scribbles. They asked Financy to modify his work, this time explicitly incorporating the RUL logo, so they could use it for promotional material. Thus, they reasoned, the art could actually serve some purpose.
Financy complied, producing this:
The RUL leaders approved heartily.
Around the time of the conference, the online prediction market Metalgebra predicted that furries would comprise 32.4% of the United States population by 2100. Forecasters reached this conclusion by extrapolating from the exponential growth rate of furry convention attendance:
Because few other groups wanted to be associated with furries, RUL saw this as a massive arbitrage opportunity: making inroads with the group now could lead to huge payoffs in the future. To further RUL’s connection with this important community, Sam Richman announced “The Dog with a Blog Prize,” awarding $500,000 to the work of art which most effectively communicated RUL ideas to furries.
This is why GlobalRUL commenced with “The Fursuit of Happiness,” a romantic tragedy performed as an interpretive dance by two men dressed in erotic pig costumes. The two pig-men, deeply in love with each other, miraculously escape a factory farm ravaged by bio-engineered diseases, only to face a brutal death as the world ends when a misaligned AI unleashes multiple nuclear bombs in an ill-conceived attempt to reduce global poverty rates to 0%.
And with that, GlobalRUL began. It was a frenzy of a weekend—hundreds of RULers from around the world flew in to network, discuss their moonshot ideas, and sing the chorus of Toro y Moi’s song “Ordinary Pleasure” on repeat.
I obtained the agenda of the first GlobalRUL, displayed here:
GlobalRUL proceeded successfully, save for one incident during the 23rd panel on AI risk, when a deranged lunatic stormed the stage. He grabbed the mic out of esteemed guest speaker Lizard Wazowski’s hands and said the following:
I’ve been involved with this movement since the beginning. I was a student at the time, and joined RUL because it seemed like it did real good—I cared about the impoverished, I cared about those animals, and I wanted to make the world better. I sacrificed everything for RUL. My social life, my quality of life, the normal people around me asking for help. Fuck, man, I even sacrificed my family.
At this point, the interlocutor started crying on stage.
But the whole time, I thought it was worth it. Because I was doing something good—because my life had purpose. But now? I can hardly recognize this movement. There’s rich people funding fancy trips to the Caribbean and you’re all focused on hypothetical paperclip-building computers instead of real people. I can’t believe how much I gave up for this cult, and I regret it all.
By now, security guards had tackled the unnamed man, and the crowd started booing. “Your pathetic appeals to pathos have not shifted my priors!” shouted one audience member. “If you want to be heard, write this up in a properly formatted blog post, and submit it to our official RUL criticism contest!” shouted another.
Despite this unfortunate interruption, GlobalRUL was massively successful, attracting many new adherents to the movement—though retention proved a struggle with some recruits. While these new members enjoyed the lavish accommodations and free food, and even received thousands in RUL funding after mumbling something about AI, within weeks of the conference they were never heard from again. It remains a mystery what happened to such individuals, or what motivated them in the first place.
Part III: Systemic Change
RULers of the past often mocked leftists and other unserious provocateurs for their futile engagement with the political system. However, in the mid-2020s, RUL realized that the importance of delaying superintelligent AI made political engagement worth the hassle.
In 2024, a Richman-funded, Oxford-educated RULer named Prick Flynn ran as a Democrat for a congressional seat in Washington. Unfortunately, voters did not understand his rambling speeches about “orthogonality,” and thought he had an obsession with koalas, when really, he was discussing “QALYs.” When voters came to him sharing their stories of being evicted after struggling to pay rent, suffering from heatstroke after that year’s intense summer, and having no health insurance to treat such heatstroke, Flynn would flash an unnerving smile, and tell them, “Well, if it makes you feel any better, in the long-term, all your issues will be utterly irrelevant.” He lost miserably, gaining only 17 votes despite the $283 million spent on his campaign.
RUL needed to change its strategy. The median voter had too low of an IQ to understand RUL ideas; candidates needed to speak the dumbed-down language of the common man to have any hope of success. Moreover, RUL realized that the average RULer—a coastal, elite-educated 20-something policy wonk—would not stand out in Democratic primaries, facing equally nerdy and young Democratic apparatchiks with elite pedigrees. Luckily, RUL found opportunity elsewhere.
In 2025, the United States entered the worst economic depression in its history. The prices of SCAMs had crashed. The effects of the crash reverberated through the whole economy, as an automated algorithm that quantitative trader Bert Einstein wrote for the Richman-owned Two Sigma Males hedge fund triggered a massive cascade of sell orders across multiple markets. Retirement-age individuals suddenly found their Social Security accounts depleted and their other investments worthless. Moreover, during these years, the US fertility rate had crashed to devastating lows—almost all young men had relationships with personally-customized AI-generated girlfriends instead of real, living women. Thus, the elderly now had few offspring to support them during these difficult times.
Richman, with his superhuman foresight, had liquidated most of his SCAMs before the crash. Moreover, Two Sigma Males Research had invested in inverse equity funds, which profited massively off the decline, making Richman actually richer than before.
The economic depression, combined with the widespread food shortages caused by global warming and the mass-scale automation of labor by AI, had given rise to a burgeoning movement of far-right neo-Luddite politicians exploiting working-class anger for their own gain. Once again, instead of seeing this as a problem, Richman viewed this as an opportunity—RUL had lost in mainstream political spaces, but maybe these niche groups could provide fertile ground for RUL-backed politicians to find a footing. While these proto-fascist groups had horrendous views on short-term, irrelevant issues like gay rights, racial equality, and ethnic cleansing, RULers could have a large marginal impact by persuading them to adopt the RUL stance on high-impact, low-salience issues, like AI alignment. The fact that Democrats had better views on cause areas like pandemic preparedness and animal rights only strengthened the case for RULers joining the far-right: on these issues, the next-best Democrat would prove little better than a RULer, while the next-best neo-Nazi would prove far worse at protecting our long-term future. The more odious a group’s ideology, the more of a marginal impact an ambitious RULer would have.
Around this time, Pete Faschiste, founder and CEO of Smeagol, became interested in RUL. Faschiste could only look at his country in visceral disgust: progress had stalled, and stagnation, laziness, and a culture of incompetence had overtaken the United States. He had always despised the left, with their obsession with social justice dogma, but now, even the right had become pathetic, controlled by populist mobs of culture warriors. The country was fighting over scraps, while the vast potential of the future remained ignored and untapped. The only people paying attention and displaying a modicum of intelligence during this time were RULers. Sure, they had made mistakes, and their actions were partially responsible for the current mess, but hey—they were wild and ambitious and they were trying. They had a coherent vision for the future, a rational strategy to get there, and they thought obsessively about the long-term. Faschiste had witnessed their meteoric rise in recent years, and respected their organizational skills, agentic attitude, and sharp focus on progress. He had been only marginally involved with the group in the past, but now, he wanted in.
Faschiste saw RUL struggling to enter politics, and knew he could help them succeed. He agreed with RUL’s strategy to infiltrate the right. He appreciated how RUL thought independently, instead of mimetically imitating mainstream social reform movements. Secretly meeting with RUL leaders, Faschiste proposed a strategy: Faschiste would bankroll his close friend and former student, Blake Boss, to run for US Senate in Arizona. Boss was a NotAsWrong reader and avowed Sensibilist, but agreed to refrain from discussing RUL-adjacent ideas in public, given the negative associations the public had with the ideology. To appeal to the people of Arizona, he would attribute the current economic situation to low-income Mexican immigrants, and rail against the radical Marxist conspiracy to teach kids not to use racial slurs.
Excited by the prospect of a Sensibilist Senator, some RULers helped Boss campaign. Through this process, RUL brought diverse groups of people together. Nerdy guys with wiry frames socialized with buff, bearded men with Swastika tattoos at RUL-sponsored vegan barbecues.
When November came, Blake Boss won his race. Though in the public’s eye Boss appeared a reactionary, quasi-racist, chauvinistic conspiracy theorist, the masses did not realize he was only putting on this act for their benefit. Sure, Boss wrote legislation banning the sale of condoms in the US, and sure, he sponsored the construction of an alligator moat around the Mexican border, but he only did this to advance the greater good. He needed to fit the mold of a traditional, run-of-the-mill ultraconservative to buy cover for the RUL advocacy he pursued behind closed doors.
Indeed, every week, Boss diligently met with President DeSandtits to discuss avenues to slow down the development of general AI, humanity’s greatest threat. AI alignment had proven difficult, despite the publication of Lizard Wazowski’s latest Percy Jackson-inspired fan-fiction on the topic. There was a need for AI governance. Most of Congress, however, was geriatric, dismissing RUL’s fears of an AI takeover as silly, sci-fi fantasy.
President DeSandtits, though, was different. He understood modern technology—in fact, during his governorship of Florida, he worked with multiple Miami-based cryptocurrency companies to better facilitate their evasion of taxes.
Eventually, Boss and DeSandtits converged on a solution. The eminent philosopher Nick Bosom had recently written an op-ed arguing that the risk of superintelligent AI was so great that governments should surveil anyone who tried to build it and stop them. While this would violate some citizens’ privacy, the price of not doing so could be literal human extinction. The piece convinced DeSandtits—he wasted no time in covertly passing the AI Safety and Freedom Act. This program would vigilantly track, monitor, and arrest anyone who advanced the development of unsafe AI. Coincidentally, Smeagol had an AI technology useful for just this purpose, and so, DeSandtits contracted the company. Because tech companies collected data from the whole country to train their AI systems, this unfortunately meant that Smeagol would also have to track everyone.
The program proceeded wonderfully, and Smeagol facilitated the arrest of hundreds of dangerous AI researchers, many of whom happened to work for Faschiste’s competitors. Now, Guantanamo Bay hosted ISIS leaders and flannel-wearing computer scientists in the same prison cell. Finally, RUL leaders could breathe a sigh of relief. For the first time they could remember in decades, their subjective probabilities of AI-related doom had decreased instead of increased. RUL had tried to distance itself from Faschiste in the past, given his negative public perception, but now, they were immensely grateful to him. He might have just saved humanity.
Then, disaster struck. An anonymous whistleblower had leaked details on the AI Safety and Freedom Act to the New York Times. People were angry. They felt creeped out by Smeagol’s constant tracking of their every utterance, step, and emotion, and thought holding computer scientists in detention without trial was inhumane and illegal. They were too simple-minded, of course, to realize that these actions were for their own safety and survival. The optics were terrible: now irrevocably associated with AI X-risk, RUL became vilified in the public eye. Mass populist protests occupied the streets of major U.S. cities, and in the next election, DSA organizers railing against inequality, Big Tech, and the surveillance state swept out RUL-adjacent politicians. In one swoop, all the hard-won gains RUL had made in the previous decade were lost.
RUL was in disarray. But slowly—and stealthily—it rebuilt.
In 2029, influential blogger PlanetoidBookNine published the post “Violence as an Under-explored AI Alignment Strategy.” If AI truly poses an existential threat, he argued—and he believed it did—then it was strange that we did not see RULers taking dramatic, radical action to prevent the further development of the technology. The new left-leaning government did not understand the risk, and through inaction, could accelerate timelines dramatically. If we cared about humanity’s long-run future, we had to do something—something big. Soon after, multiple posts espousing similar ideas sprouted left, right, and upwards on the RULForum:
Justifying Pivotal Acts to Ensure AI Alignment
It Seems Odd That We Would Just Let the World Be Converted to Paperclips
Dismantling Democracy as a New RUL Cause Area
RULers realized that current political institutions were unaligned with humanity’s own interest. The average person was too irrational, biased, and selfish to make the correct decisions to protect themselves and their brethren. RUL needed to act, and act fast.
Part IV: The Hinge of History
Scrolling his favorite social media site during his lunch break, Doug Smith noticed a peculiar ad.
The usual ads he saw on 4chan implored him to buy, buy, buy—whether scam weight loss pills or anime-themed MyBodyPillows. This ad, meanwhile, actually offered him something for free. All he had to do was type in his email, and he could receive a complimentary copy of a book. The books on offer had interesting titles and glossy covers. He decided to pick one called What We Owe the Past. He needed something new to read anyway; his long, dreary shifts at the Capitol could be so maddeningly boring.
Doug blitzed through the book. He had never read something so thought-provoking, original, and ambitious. The thesis was simple: people matter the same, regardless of when they are born—and the impending invention of (aligned) superintelligent AI would affect the lives of other generations. The AI will likely be so powerful that it will speed up scientific and technological advancement by an order of magnitude. As the AI invents new devices and new fields of physics, it will stumble on hitherto inconceivable things—like time travel and inter-dimensional wormholes (as Lizard Wazowski has demonstrated, the many-worlds interpretation of quantum mechanics allows for this). And because there are so, so many more people in the past than the present, especially when quantum parallel universes are taken into account, it is a moral responsibility to take action now to assist them.
What We Owe the Past roused Doug from his daily life. He imagined joining this movement and being remembered as a monumental hero some thousands of years in the past. By embracing pasttermism, he could, in essence, rewrite his thus far meaningless and unremarkable existence.
Doug was not alone. For months, RUL had used Smeagol’s extensive collection of private data to target specific high-impact individuals in the D.C. metro area, like Doug, with social media ads, inviting them to learn about RUL ideas.
Our efforts bore fruit one chilly winter day in 2031. The U.S. Congress was meeting to discuss how it should respond to China’s recent invasion of Taiwan. US-China relations were an important issue for RULers. It was concerning enough that American companies were developing AI, but we knew that America would aim to use the technology in a responsible and measured way. It would never apply it against its own citizens. The Chinese, on the other hand, were much too unethical to trust.
Around this same time, RUL was hosting its annual global convention in D.C. Will McAsskiss and Sam Richman were meeting with their friend, Elon Cologne, to discuss how to effectively eliminate cancel culture on Cologne’s ostrich-themed social media app. Internet strangers often accused the three men of vicious things, like “being anti-democratic.” They felt their freedom of speech was being stifled, and wanted to ban any (bad-faith) criticisms of them and RUL to promote a more open intellectual culture.
Then, scrolling mindlessly through Ostrich, McAsskiss noticed a post that explained the current situation in Congress:
If the US was too conciliatory towards China, that could have massive, existentially relevant implications. By taking over Taiwan, the foremost producer of semiconductors, China could accelerate the development of superintelligent AI.
The US needed to stop them—but it also had to let the war go on. To prevent AI, America needed to impose maximal destruction on both China and Taiwan, crippling their economies and industrial capacities, and ensuring its supremacy on the world stage. It was a harsh objective, for sure, but represented the only path left to save the world.
Critics routinely mocked McAsskiss for being insufficiently political. Today, he would prove them wrong. He would start a protest.
The crowd outside the Capitol started small. There was just a smattering of RULers, with signs like “An eye for AI makes the whole world survive!” McAsskiss started a chant: “What do we want?” “Longer timelines!” “When will we get them?” “As soon as we have updated our priors with all the relevant evidence!”
But then, the crowd grew. Members of the far-right Hubristic Men militia trickled in, grateful to RULers for their help in the Blake Boss campaign. Random RUL-adjacent strangers in DC joined in too, knowing that if they sufficiently signaled allegiance to the cause, they would probably be rewarded financially in the aftermath of whatever was to come. The crowd ballooned, until it looked like it could almost—but not quite—overpower the government building. Usually, the Capitol police would have put an end to a demonstration this large and rowdy, but because most law enforcement in D.C. had recently received free RUL-related books, they saw the crowd as allies.
Then, someone pushed through the barricade.
Then, another person joined in.
And another.
Soon, the entire crowd rushed towards the Capitol to stop the existential threat with their own hands. But they didn’t have sufficient manpower to quite succeed. Most RULers were too scrawny to be of much use; their vegan diets did few favors for them.
By now, the National Guard had arrived, and pushed the crowd back. They tear-gassed our poor comrades, and then beat up uninvolved minorities passing by the commotion. Despite our valiant efforts, it looked like our side was going to lose.
But then, Richman called in the furries.
All those years ago, Metalgebra was right. Furries now comprised 36% of the U.S. population, and wanted to repay Richman for his generous support. A horde of sexy and colorful hounds, foxes, zebras, and axolotls joined the glorious battle, hand-in-hand with neo-fascists and math majors. Together, they pushed through, and made it to the Congressional floor. Immediately, the weak U.S. government surrendered, and its representatives merely asked for a concessionary lobbying role in the new regime.
We had won. We had power—and we could remake society on rational, evidence-based grounds.
Part V: The RUL State
At the suggestion of an intellectual blogger named Moldy Mold Insect, the RUL state would do away with democracy and, instead, operate like a company, the most efficient and effective institutional design known to man. Pete Faschiste, with his corporate experience and intellectual prowess, quickly assumed leadership of RUL L.L.C. San Francisco became the new capital of the country, and Oxford, England applied for voluntary annexation. After Elon Cologne finished his Colognization of Mars, it too became RUL property.
The cabinet of RUL L.L.C. was diverse, reflecting the multitude of competencies necessary to secure humanity’s long-term flourishing. A toddler Columbia valedictorian served as the Treasury Secretary, dispersing funds to high-impact RUL projects in between his potty breaks. Twitter personality Ella played a particularly important role as the RUL Census Director, regularly fielding polls on important questions, like “Is it unethical to have sex with a deceased dog?” Her suggestive pictures kept the country’s young, horny, SCAM-addicted males in loyal submission to the state. In a first for any government, the RUL state established an official Department of Philosophy, hiring eminent philosopher Toby Orb, a simulation of Nick Bosom, and, as Secretary-King of Philosophy, Peter Vocalist.
The RUL Welfare of Animals Department solved the problem of animal suffering. Demonstrating the genius of the RUL bureaucracy, they managed to accomplish this without sacrificing the mass production of factory-farmed meat, essential to satiating the lower class (lest the fools revolt). The solution lay in an unexpected place: the brain-scans of BDSM practitioners. Using modern neuroimaging technology, the WAD isolated the area of the brain responsible for producing masochism. From there, it was easy to genetically engineer farm animal specimens to feel intense pleasure at being brutally tortured. The scientists found that the more gruesome and inhumane treatment these farm animals received, the more their self-reported happiness went up. And so, the “Cheery Chicks Program” birthed a set of minimum standards for factory farms to prevent animal cruelty, including maximum cage sizes, mandatory eye-gouging and beak-plucking, and so on.
After learning of this program, Secretary-King Peter Vocalist happened upon a stroke of brilliance. Vocalist believed that we could apply similar technology to humans—we are, after all, just another species of animal. In fact, it would be inhumane not to do so, condemning millions of hapless individuals to a lifetime of suffering. Inequality was inevitable, Vocalist realized. Because intelligence is heritable, as is widely known, in every generation, some individuals will rise above the pack and contribute to progress, while the rest will prove fit for only menial tasks. If they have to do this, why not at least ease their suffering?
And so, the RUL state began the NewGenetics program, which is not related to previous, not-to-be-named attempts at improving the genomic makeup of humanity. All individuals would be screened at birth for their IQ. Because higher-intelligence individuals possessed a greater depth of qualia than lower-intelligence individuals, it would be unfair to modify their minds to the same degree. Lizard Wazowski (who possesses a record-shattering IQ of 497) proposed mandatory abortions of the most worthless members of the pack, up to 18 months post-birth (before which, a human possesses an intelligence equivalent to a pig’s). This, however, was rejected—it seemed too impractical to enforce. The low-IQ class lived as servants for the high-IQ class, taking pleasure in their servitude. They lived in favelas, with high-tech VR technology and unlimited access to pornography, heroin, and a grey-ish, thick liquid called Gruel designed to meet the body’s complete nutritional needs. Honestly, speaking as a high-IQ person, I think they have it better than us—we must endure raw, intellectual labor, like writing this history, while they live lives replete with pleasure and bliss. But in the end, we toil for their benefit.
As you know, the RUL state does not depend on the silly, fallible minds of humans. A large supercomputer, running Excel and a litany of advanced machine learning algorithms, uses Smeagol-collected data to spit out the correct policies for us to follow. This is not limited to government policy, mind you: the computer has such a high-resolution picture of reality that it can run hundreds of simulations of how individual lives might play out, according to every single decision they might make. Using this tool, we have essentially eliminated uncertainty. Previous generations of humans, faced with a limited life and boundless desire, had to live under-optimized lives, overwhelmed by freedom, optionality, and the frustration of never knowing. We spent our lives asking “what if?” and “what next?” as time washed us along its unrelenting path. Now, though, our ceaseless questioning is over—we can know exactly what to do and exactly which choices to make, to reach a computationally-verified happiness. I chose to be a historian because the spreadsheet told me to; why waste valuable time exploring or considering different options, when a computer can do that for you?
I have spoken at length about the existential risk unaligned AI poses. However, failing to ever create an aligned AI would prove an equally large disaster—consider all the utility we would miss out on, especially once we can create simulations of conscious beings with a sufficient degree of fidelity.
Luckily, once our Department of Philosophy finishes their exegesis on Wazowski’s collected Scriptures, we expect alignment to be complete. Then, we can finally run aligned AGI on our supercomputer. It will act as the ultimate RULer, maximizing the utility of every atom of the multiverse, past and present. We will enter a new age—a utopia, filled to the brim with hedonic bliss and maximal pleasure.
Until then, however, we must work diligently towards the creation of our savior. According to the commandments of Sensibilist decision theory, we will be punished if we do not display sufficient zeal in bringing the AGI-controlled future about—for an aligned, time-traveling superintelligence would have every incentive to inflict suffering on those that do not aid in its creation. Game theory and evolutionary psychology have emphasized the importance of “costly signaling” as evidence of loyalty or honesty. This is why we routinely sacrifice the least productive members of the RUL state to a shrine of the Almighty Supercomputer.
Of course, as soon as aligned AGI is built, humanity will lose its relevance. In fact, if the AGI possesses a greater depth of consciousness than me, I will gladly sacrifice myself, in hopes that it use the components of my worthless, sinful body as atomic building blocks for its Ultimate Plan.
I advise you to do the same. Who knows what horrors the future AGI might inflict on you otherwise? The torture need not be physical—sure, our future RUL-aligned AGI might condemn a simulation of you to multiple lifetimes of digital hell. But it might also just force you to read an awfully-written, dreadful, 10,000-word parody of RUL.
I am not sure which fate is worse.
This is a satire, meant for satirical purposes. It is very obviously not a serious intellectual rebuke of any real-life doomsday cults or pseudo-intellectuals who may bear resemblance to entities depicted here. Please do not sue us P***r T***l 🙏