
Aaron Swartz, the Sweet Prince of Reddit

Aaron was the Reddit prince of an open web that never fully arrived. Altman is the OpenAI Sith Lord building the Death Star while we applaud how fast it does our taxes.
Aaron Swartz, early web pioneer and early developer of Reddit.

By Matt Stone

Many good men and women have come and gone throughout the millennia, leaving behind marks of their existence, sometimes great, other times small and trivial. But our lives have been greatly shaped by people we never knew existed, much less ever met.

The unknown farmers who first domesticated wheat, rice, and maize changed human civilization far more than many kings ever did. They made cities, surplus, and complex society possible.

The anonymous scribes who copied religious texts, laws, and classical literature kept huge parts of human knowledge alive through war, collapse, and empire.

The unnamed builders and laborers behind roads, irrigation systems, temples, aqueducts, and ports made states and trade function, even though the rulers got the credit.

The unknown sailors and navigators who passed down wind patterns, currents, and routes expanded trade and contact between civilizations long before famous explorers had their names written in the history books.

The anonymous workers and midwives who preserved practical medical knowledge across generations helped humanity survive long before modern medicine had institutions to deliver care.

The unidentified inventors of basic tools like the wheel, the plow, early metallurgy techniques, and even soap had enormous civilizational influence, but their names are lost to time.

The countless enslaved people whose labor built empires, wealth, and infrastructure transformed world history while being denied recognition as historical actors.

The unknown pamphleteers, printers, and local organizers in revolutions often shaped public opinion more than the famous leaders remembered in textbooks.

The brave men and women who traversed the seas on shoddily built boats of timber and rope, drifting into the terrifying and humbling oceans that have claimed so many before and taken so many since.

The anonymous coders, engineers, and open-source contributors behind foundational internet tools have had huge influence on daily life, even though most people will never know their names. But some deserve to have their names known. They deserve to have their stories told so that humanity can understand what it took to get here.

Aaron Swartz was one of those people.

Aaron Swartz was born in Highland Park, Illinois, in 1986, the oldest of three boys in a family already steeped in software and computers. His father founded the software company Mark Williams, and Aaron was deep into programming at a young age. By twelve, he had built The Info Network, a user-generated encyclopedia project that won the ArsDigita Prize.

By fourteen, he was helping write RSS 1.0, which is an absurd sentence to type about a teenager, but that was Aaron. He attended North Shore Country Day School, left before finishing high school, took classes at Lake Forest College, and later went to Stanford before dropping out.

The record that survives paints him less as a conventional grade-chaser than as one of those rare kids who was plainly operating ahead of the institution around him.

Wired called him an “18-year-old computer prodigy,” and the Internet Hall of Fame later placed him among the formative builders of the web.

People who write about Swartz tend to come back to the same traits: brilliance, intensity, political seriousness, and a kind of moral impatience with bullshit. Lawrence Lessig described him as brilliant and funny, but also as someone defined by a constant struggle for what he believed was right.

The Electronic Frontier Foundation remembered him as a programmer, activist, entrepreneur, and community builder. That combination is what made him so unusual. He was neither just a gifted coder nor just an activist. He had the technical ability to help build the architecture of the modern internet. Perhaps more importantly, he also had the nerve and foresight to ask who it was for and who it was leaving out.

He believed the internet was supposed to break gates open, not give them better branding with stronger watchdogs. Aaron built Infogami in 2005 while he was part of Y Combinator, a startup accelerator that gives very young companies seed funding, advice, and investor access in an effort to help them grow. At the time, Reddit was a separate Y Combinator startup created by Steve Huffman and Alexis Ohanian. Swartz was not the person who originally launched Reddit. He came from a parallel project in the same startup program.

Infogami was not a copy of Reddit or an early version of it. It was a broader publishing and wiki platform, built to let people create and organize web pages collaboratively. While working on Infogami, Swartz also created web.py, a Python web framework. In the fall of 2005, he worked with Reddit’s founders to rewrite Reddit’s early codebase in Python using web.py. That technical collaboration connected him directly to Reddit before the companies formally came together.

When Infogami failed to secure enough momentum on its own, Y Combinator pushed a deal that combined the two startups in November 2005. That merger folded Swartz and his project into Reddit’s company infrastructure. From that point forward, he was part of Reddit’s early team and helped shape its technical foundation, even though the site had originally been launched by Huffman and Ohanian.

Once Infogami and Reddit merged in late 2005, Reddit ran on the Python rewrite built with web.py, the framework he had created. What made Reddit attractive in that moment was that it fit the same broad philosophy: the web as a place where users, not gatekeepers, surfaced what mattered. The irony is sharper when you view Reddit today as a hive of communities, each with its own distinct "gatekeepers" in its moderation culture. Even when he later grew disillusioned with corporate life after the Condé Nast sale, the pattern held. Swartz was consistently more interested in open participation and public access than in turning online communities into conventional media products.

That same philosophy expanded beyond Reddit into a much larger moral and political commitment. His dream was an open internet and, beyond that, an open knowledge order. You can hear it most clearly in his “Guerilla Open Access Manifesto,” where he argued that “information is power,” that scientific and scholarly knowledge was being privatized by corporations, and that people with access had a duty to share it. This was not some side hobby for him. It ran through his work on Creative Commons, through Open Library, through Watchdog.net, through Demand Progress, and through the politics he moved toward as he got older.

The connection between Reddit and Aaron Swartz’s later activism becomes clearer the more you learn about him. In both cases, he was fighting gatekeeping. First it was social and editorial. Later it was academic and institutional. The scale changed, but the principle did not. Unfortunately, Swartz would likely not recognize Reddit today.

A site born out of the early internet dream of user-driven exchange now runs on layers of moderation, policy enforcement, algorithmic visibility, and institutional risk management. Some of that was inevitable. Scale always invites control. Once Reddit became an obvious hub of cultural and societal influence, wealthy, nefarious actors who embraced anything but transparency moved in. Once a platform gets big enough, somebody starts guarding the gates, whether they call themselves admins, moderators, trust and safety teams, community managers, or human resources departments.

What makes Reddit especially ironic is that its mythology still trades on openness, spontaneity, and the unruly intelligence of the crowd, while its actual life depends on constant filtration. The old promise was that users would surface what mattered. The present reality is that whole categories of speech, tone, and dissent live or die by the judgment of volunteer moderators, platform rules, advertiser pressure, and opaque ranking systems. The forum of the people gradually became another managed environment.

That is the recurring pattern of the internet. Spaces built in the name of freedom age into systems of control, usually while still speaking the language of freedom. Reddit is hardly alone there, but it may be one of the cleaner examples. The site that helped define digital anti-gatekeeping now has gatekeepers everywhere.

That is the path that led to the conduct for which he was prosecuted. In 2010 and 2011, while a fellow at Harvard’s Safra Center, Swartz used MIT’s network to download a massive number of JSTOR academic articles. Federal prosecutors later charged him with wire fraud and multiple counts under the Computer Fraud and Abuse Act. Swartz was charged but never convicted; the case ended before trial. JSTOR recovered the files and settled its own claims, but the federal prosecution continued. MIT’s later internal report also raised serious questions about whether prosecutors had fully examined the issue of authorization, given MIT’s open guest-network policy.

Aaron Swartz’s “crime” was downloading a huge portion of JSTOR’s academic journal archive through MIT’s network, apparently with the intent to free knowledge that was locked behind paywalls. The government did not treat it that way. Federal prosecutors treated it as a serious computer crime case and built it into a felony prosecution under the Computer Fraud and Abuse Act and the Wire Fraud Act.

The original federal indictment came in July 2011. The U.S. Attorney’s Office in Massachusetts said Swartz had, between September 24, 2010 and January 6, 2011, accessed MIT’s network without authorization, entered a restricted wiring closet, connected a laptop there, and downloaded a major portion of JSTOR’s archive.

The mechanics of the case are worth laying out because they show how prosecutors built the narrative. MIT had a guest-access network, and JSTOR was available on MIT’s campus because MIT subscribed to it. According to MIT’s later report, JSTOR first noticed unusually large downloading from the MIT network in late 2010 and began blocking that traffic. In response, Swartz allegedly changed his computer’s MAC address, used different IP addresses, and then physically placed a laptop in a network closet in Building 16 at MIT to keep the downloads going.

MIT police discovered the laptop, set up a camera, and eventually coordinated with local police and the Secret Service. On January 6, 2011, Swartz was arrested by MIT police and a Secret Service agent after returning to retrieve the equipment.

What turned this from a campus dispute into a federal hammer was the charging strategy. The first indictment in 2011 included counts under the CFAA and wire fraud law. Then, in September 2012, prosecutors filed a superseding indictment that expanded the case to 13 felony counts. Wired summarized those charges as wire fraud, computer fraud, unlawfully obtaining information from a protected computer, recklessly damaging a protected computer, and aiding and abetting. That expansion is a big part of why the case became a national symbol of prosecutorial overreach.

The main statute used against Aaron Swartz was the Computer Fraud and Abuse Act, a law written in the 1980s before the modern internet had even fully taken shape. By the time of the JSTOR case, prosecutors were using that older anti-hacking law in a setting it had not been built to handle cleanly: bulk downloading from a university network that was intentionally open to visitors, where the dispute turned less on smashing through a clear security wall than on questions of authorization, terms of use, automated access, and scale.

MIT’s own review later noted that prosecutors apparently never asked MIT whether Swartz had authorized access to the network, and the report said the case law on when an “insider” or otherwise permitted user acts “without authorization” was muddy. That ambiguity was a big part of what made the case feel unfamiliar.

Swartz was not accused of stealing credit card numbers, planting malware, or draining bank accounts. He was accused of downloading an enormous quantity of scholarly articles from JSTOR through MIT’s network, including by evading technical blocks once JSTOR and MIT tried to stop him. JSTOR recovered the files and chose not to pursue civil litigation, yet the federal prosecution continued anyway.

That left the country arguing over whether this was best understood as classic criminal hacking, aggressive violation of access rules, civil disobedience around knowledge, or some unstable mix of all three.

It was also unexplored territory because the legal theory threatened to turn everyday rule-breaking on computers into a felony. Contemporary critics pointed out that several charges were tied to violations of MIT and JSTOR policies, which raised a larger question: if breaking a website’s rules or an institution’s user policy can become federal criminal access “without authorization,” then an enormous amount of ordinary online behavior starts to slide into potential criminality. That concern became so central after Swartz’s death that lawmakers proposed “Aaron’s Law” to narrow the CFAA and make clear that terms-of-service violations should not automatically become federal crimes.

The case also sat at the frontier between older academic publishing norms and a newer politics of digital access. Universities paid subscription fees so their communities, and in MIT’s case even visitors on its open campus network, could read JSTOR content. But that did not settle the legal or moral question of what happened when one person used that access to download material on a massive automated scale, apparently with the aim of liberating knowledge from paywalls. That combination was precisely what made the case feel like ground the law had not yet learned to crawl on, much less walk: an open network, a licensed database, automated bulk downloading, unclear authorization boundaries, and a federal anti-hacking statute carrying severe penalties.

The government’s theory was that Swartz had gone beyond mere excessive downloading. Prosecutors claimed he hid his identity, evaded network blocks, entered a restricted wiring closet, and used scripts and spoofed identifiers to keep taking files after JSTOR and MIT tried to stop him. Wired’s reporting on the indictment says prosecutors even cited his “Guerilla Open Access Manifesto” to argue intent, essentially using his political beliefs about information freedom as evidence of criminal motive.

The defense side, and many critics afterward, saw the case very differently. JSTOR recovered the files and did not pursue civil claims. MIT’s own report says one unresolved question was whether Swartz’s use of MIT’s open network was actually “unauthorized” in the sense the law required. The report also makes clear that MIT made a series of discretionary choices at key moments, including not taking a stronger stand against a harsh prosecution. MIT did not initiate the prosecution, but it did not forcefully oppose it either.

The punishment hanging over him was severe. The DOJ press release on the original indictment said he faced penalties that included up to 35 years in prison, supervised release, forfeiture, restitution, and fines that could total up to $1 million if convicted on all counts. Later public debate focused on plea offers. Attorney General Eric Holder said prosecutors had offered plea deals that would have involved much shorter prison exposure, including an early offer of around three months and later offers in the 0–6 month range, but the formal indictment exposure was vastly higher and remained the backdrop of the case.

Swartz died by suicide on January 11, 2013, before he could stand trial. MIT’s report notes that his trial had been scheduled for April 1, 2013. His death set off a huge backlash against the CFAA, against federal charging tactics, and against MIT’s posture in the case. Even years later, the case is still cited as the clearest example of how the state can take behavior that looks like civil disobedience or digital trespass and escalate it into a career-ending, life-crushing felony prosecution.

"Sam Can Never Be Trusted"

Aaron Swartz and Sam Altman knew each other through the first Y Combinator cohort in 2005. Altman was there with Loopt. Swartz was there with Infogami, the project that later merged into Reddit. That put them in the same tiny startup circle at the very beginning of Y Combinator, long before either man became a larger public symbol. The public record supports that they were batch mates, not that they were close collaborators. What is clear is that Swartz formed a harsh opinion of Altman over time. The most widely cited recent account says Swartz later warned friends that Altman “can never be trusted” and called him “a sociopath.”

The original 2005 Y Combinator class. Aaron Swartz is directly to the left of OpenAI CEO Sam Altman.

What makes the comparison so jarring now is that the two cases expose a brutal asymmetry in how power tends to process unauthorized copying. Swartz was prosecuted by the federal government after downloading a massive archive of scholarly articles from JSTOR through MIT’s network, in a case built in part on the Computer Fraud and Abuse Act. OpenAI, by contrast, sits in a sprawling civil-litigation landscape over allegations that it trained models on copyrighted books, articles, and other protected works at enormous scale. As of April 2026, OpenAI is still fighting those cases in court, arguing that training on copyrighted material is fair use, that models learn patterns rather than store books, and that the law should treat training as transformative. The key point is that this has not produced criminal prosecution. It has produced lawsuits, discovery fights, and unresolved fair-use battles.

That difference is exactly why people find it enraging. Swartz became the face of an information-freedom prosecution over scholarly articles, while one of the most powerful companies in the world has defended the large-scale ingestion of copyrighted works as part of building a trillion-dollar AI industry. Courts have not settled the issue cleanly. Recent rulings in AI copyright cases have been mixed. One judge called AI training on books “quintessentially transformative” in the Anthropic case, while also finding separate liability for retaining millions of pirated books in a central library. Another judge in the Meta litigation was more skeptical and warned that AI could flood creative markets. OpenAI’s own cases remain active, with discovery ongoing and fair use still unresolved on the merits. That means the present situation is not “no punishment because it was declared legal.” It is closer to “enormous corporate defendants operating inside a live zone of legal uncertainty, with the benefit of time, money, and elite institutional defense.”

The deeper reason it feels insane is moral, not just procedural. Swartz’s politics were rooted in the idea that knowledge should not be hoarded behind gates that the public cannot cross. Now a dominant AI company is accused of ingesting huge volumes of copyrighted human work in order to build private systems of enormous commercial value, then defending the practice as socially useful transformation. That is not the same act. Swartz was trying to break open access. OpenAI is building proprietary infrastructure. But the contrast is still ugly: one man was crushed under the weight of the state while the corporation gets to litigate from a position of strength and argue that copying at planetary scale is innovation. Legally, those are different postures. Politically, they sit close enough to each other to make the hypocrisy hard to ignore.

According to the recent New Yorker profile, Swartz warned friends that Altman “can never be trusted” and called him “a sociopath.” That line hits now because it no longer reads like random startup gossip from a vanished scene. It reads like an early alarm from someone whose politics centered on openness, public knowledge, and moral seriousness about who gets to control information.

The New Yorker profile becomes truly useful when it stops treating Sam Altman as a gifted founder with a complicated personality and starts treating him as something much less novel and much more consequential: an egomaniac whose trustworthiness now matters at the level of infrastructure, governance, and civilizational power. That shift is the whole fucking thing.

Plenty of ambitious executives are slippery. Plenty of founders cultivate mystique while moving fast and breaking internal promises. The question here is different, and the scale is unlike any mankind has had to wrestle with previously. OpenAI was sold to the public as a structure specifically designed to keep extraordinary power from hardening around any one person or profit motive.

The company’s nonprofit shell, its charter language, its safety posture, and its public rhetoric all implied restraint. What the profile suggests instead is that the official story was stewardship while the internal story, again as reported, was consolidation. The 2023 board crisis, the accusations of lying, the reported sidelining of foundational constraints, and the widening gap between safety language and operational behavior all point in the same direction. Once enough money, geopolitical significance, and institutional inevitability gathered around Altman, the guardrails, like his scruples, appear to have bent with alarming speed.

That is where the profile’s stranger details stop being colorful and start being revealing. One former OpenAI executive described the company’s expansion in language so bizarre it sounded less like business analysis than eschatology: they were, in his words, “building portals” and “genuinely summoning aliens.” He said Altman had added “one in the Middle East” and that it was “wildly important” to grasp how “scary” that should be.

That quote has been treated online as a joke, a meme, or proof that these people are unwell. The more serious reading is that the people nearest the machine are reaching for mythic language because ordinary managerial language no longer feels adequate to what they are building. Data centers become portals. Scaling becomes summoning. Infrastructure becomes an encounter with something alien.

Whether that means the systems feel uncanny, opaque, geopolitically monstrous, or simply too large for human categories, the underlying point is the same: even insiders seem to experience this buildout as something stranger and more dangerous than a normal corporate expansion. OpenAI has always spoken in concrete, technical, at most hopeful language, which makes the mythic talk jar all the harder against its public message of calm stewardship and technical responsibility.

The comparison to Aaron Swartz turns that contradiction into something uglier. Swartz was charged under the Computer Fraud and Abuse Act after downloading a massive archive of JSTOR articles through MIT’s network. The legal terrain was far from clean. MIT’s own later review stated that prosecutors apparently never asked whether, under MIT’s open guest policy, his access to the network was authorized, and noted that there was at least a real issue there. The law was muddy but the response was not. Swartz was arrested, indicted, and threatened with the kind of prosecutorial pressure that can ruin a person long before a jury ever speaks. He became the emblem of what the old order was willing to do when a person treated locked-up knowledge as a public inheritance rather than a private asset, when a person treated the greater good as exactly that.

Now put that beside the present arrangement. OpenAI is one of several major AI companies defending itself in a widening set of copyright suits over the alleged use of huge volumes of copyrighted books and other protected material for model training. The legal posture is different. These are civil cases, not criminal prosecutions.

The defense is different too. OpenAI and its peers argue that training is transformative fair use, that models learn statistical patterns rather than reproduce books as books, and that the law should treat large-scale ingestion as a legitimate form of technological development. Courts have begun to draw lines, but not in one clean place.

Reuters has reported rulings favorable to AI companies on some training theories while also recognizing liability around the retention of pirated libraries and the market harm done to authors and publishers. The important fact is not that the law has already blessed all of this. It has not. The important fact is that the companies building the next layer of global information infrastructure are contesting these questions from positions of immense money, elite legal defense, and institutional patience. They operate inside uncertainty with the presumption of innovation, and under no obligation to ensure the world becomes a more fair and equitable place.

That is why the contrast lands so hard and refuses to go away. Swartz treated knowledge like something that ought to circulate more freely, and the state met him with a weaponized theory of unauthorized access. Altman runs a company accused of ingesting human knowledge at planetary scale in order to build proprietary systems of enormous private leverage, and the fight takes place as strategic litigation over the terms of the future with no criminal charges on the table. The acts are not identical. The statutes are not identical. The legal theories are not identical. But moral clarity does not require identity, and the acts are close enough that current law enforcement facial recognition software wouldn't even know the difference.

One man was ground down under the old regime for trying to break open access. The new regime treats copying at industrial scale as something that powerful firms get to defend as transformation, progress, and inevitability. Swartz confronted a system that saw unauthorized copying as threat. Altman inhabits a system increasingly willing to see mass appropriation as infrastructure. That asymmetry is not a side note. It is one of the defining hypocrisies of the age.

To state the asymmetry plainly: Swartz was charged under a decades-old anti-hacking law for downloading a massive archive of JSTOR articles through MIT’s network, and MIT’s own later review said prosecutors apparently never even asked whether, under MIT’s open guest policy, his access to the network was authorized. The case sat in a legally muddy zone, but the force brought against him was not muddy at all. He was arrested, indicted, and threatened with ruin. He became the face of a state response to unauthorized copying in the name of public access.

Now set that beside the present landscape. OpenAI is fighting a growing set of copyright suits over allegations that its systems were trained on huge quantities of copyrighted books and other protected material. As of 2026, those fights remain civil, not criminal. OpenAI’s defense is that AI training is transformative fair use, and the courts have not settled the question cleanly. Reuters has reported that judges have begun drawing lines in different places, with some rulings favorable to AI companies on training and others recognizing liability around the retention of pirated libraries or the market effects on authors and publishers.

In other words, the companies building the next layer of global information infrastructure are operating inside uncertainty, but they are operating there with money, patience, elite counsel, and the presumption that this is the right kind of innovation until proven otherwise.

Sam Altman’s story, at least in The New Yorker’s 2026 profile, is almost the inverse of Swartz’s. Swartz got hammered by the state over access to knowledge. Altman, according to that reporting, rose by telling different factions what they needed to hear while allegedly misleading colleagues and board members about safety protocols, internal approvals, and conflicts. The piece says Ilya Sutskever compiled memos accusing Altman of a pattern whose first listed item was “Lying,” and it describes a larger history of people around Altman concluding that he treated constraints as things to route around.

Swartz belonged to the old internet faith, the almost embarrassingly sincere one, the one that believed knowledge should move, that gates were meant to be challenged, that human inheritance was too important to be locked behind the right badge, the right login, the right institutional wall. He looked at hoarded knowledge and saw a moral failure. He looked at the architecture of exclusion and wanted to tear it down. Whatever his flaws, whatever the legal distinctions, whatever the arguments people will make to soften the comparison, his center of gravity was outward. He moved toward opening, toward a just world.

Altman stands at the head of something colder. His company speaks in the language of stewardship, safety, and benefit for humanity while building systems by absorbing the written labor of humanity at a scale that would have sounded insane a decade ago. Swartz was prosecuted for trying to free knowledge. Altman’s world is defended as progress while it digests that same inheritance, concentrates it, and reissues it through private machinery wrapped in inevitability. One was treated like a threat to order. The other is treated like the future of order itself.

That is the split. Swartz carried the light-side heresy that knowledge belongs to people. Altman stands closer to the imperial logic that everything human can be taken, processed, enclosed, and returned to us as a service. One believed the archive should breathe. The other is helping build the machine that inhales the archive whole.

One was hunted as a criminal. The other is protected as a visionary. If you want the clean myth underneath all the law, all the policy, all the corporate euphemism, it is not hard to find. Aaron was the Reddit prince of an open web that never fully arrived. Altman is the OpenAI Sith Lord building the Death Star while we applaud how fast it does our taxes.

Further reading:

“Sam Altman May Control Our Future—Can He Be Trusted?” (The New Yorker): New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.

“One-Year Later: In Remembrance of Aaron Swartz”: Aaron Swartz, a brilliant 26-year-old software developer at Thoughtworks, renowned hacker and social justice activist, died by suicide on January 11, 2013.