<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Responsible AI on Sebastian Spicker</title>
    <link>https://sebastianspicker.github.io/tags/responsible-ai/</link>
    <description>Recent content in Responsible AI on Sebastian Spicker</description>
    <image>
      <title>Sebastian Spicker</title>
      <url>https://sebastianspicker.github.io/og-image.png</url>
      <link>https://sebastianspicker.github.io/og-image.png</link>
    </image>
    <generator>Hugo -- 0.160.0</generator>
    <language>en</language>
    <lastBuildDate>Tue, 03 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://sebastianspicker.github.io/tags/responsible-ai/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Oppenheimer Didn&#39;t Have an Acceptable Use Policy</title>
      <link>https://sebastianspicker.github.io/posts/ai-warfare-anthropic-atom-bomb/</link>
      <pubDate>Tue, 03 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://sebastianspicker.github.io/posts/ai-warfare-anthropic-atom-bomb/</guid>
      <description>Anthropic has drawn a public line on military use of its models. The physics community spent the better part of the twentieth century working out what it means to draw that line after you have already built the thing. As a physicist watching this unfold, I find the parallels clarifying and the differences more unsettling than the parallels.</description>
      <content:encoded><![CDATA[<p><em>Physicists inherit, along with the formalism and the problem sets, a particular
kind of guilt. The profession has been working
through its relationship to weapons, state violence, and the gap between
scientific capability and ethical readiness since August 1945. This post is about
why I think the current moment in AI closely resembles that history, and why
Anthropic&rsquo;s decision to draw a line matters even if — especially if — you think
the line is imperfect.</em></p>
<hr>
<h2 id="what-just-happened">What Just Happened</h2>
<p>The news this week involves Anthropic and the question of whether and how large
language models should be available for military applications. Anthropic has stepped
back from a path toward unrestricted military use and restated a position: there
are things their models will not be used for, weapons development and autonomous
lethal systems among them. The response from parts of the defence and national
security community has been predictable — naïve, idealistic, unilateral disarmament,
your adversaries will not make the same choice.</p>
<p>These are not stupid objections. I want to take them seriously. But I also want
to explain why, as someone who spent years studying physics in the shadow of the
Manhattan Project&rsquo;s legacy, the framing of those objections sounds very familiar,
and why that familiarity is not reassuring.</p>
<hr>
<h2 id="what-the-physicists-thought-they-were-doing">What the Physicists Thought They Were Doing</h2>
<p>The scientists who built the atomic bomb were not, for the most part, indifferent
to what they were building. Many of them were refugees from European fascism.
They understood what a Nazi atomic weapon would mean. The urgency was real, the
moral reasoning was coherent, and the conclusion — build it before the other side
does — followed from the premises.</p>
<p>What the premises did not include was adequate weight for what happens after the
technical problem is solved.</p>
<p>By the time the Trinity test succeeded in July 1945, Germany had already
surrendered. The original justification — prevent the Nazis from getting there
first — had evaporated. What remained was a weapon, an infrastructure for building
more weapons, and a strategic and political logic that had largely moved beyond
the scientists&rsquo; control. The Franck Report, written by a group of Manhattan Project
scientists in June 1945, argued against using the bomb on a Japanese city without
prior demonstration. It was ignored. Oppenheimer, who chaired the Interim
Committee&rsquo;s scientific panel, signed off on the Hiroshima target recommendation.
He spent the rest of his life with that.</p>
<p>The lesson most physics students absorb from this history is something like: the
scientists were not the decision-makers, the decision was going to be made anyway,
and the presence of principled scientists in the room was better than their absence.
The system was going to do what it was going to do; all you could influence was
the margin.</p>
<p>I believed this for a long time. I am less sure of it now.</p>
<hr>
<h2 id="the-analogy-and-its-limits">The Analogy and Its Limits</h2>
<p>The comparison between the atom bomb and artificial general intelligence — or even
current large language models at the capability frontier — is made often enough
that it has become a cliché, which is usually the point at which people stop
thinking carefully about it. Let me try to be specific about where the analogy
holds and where it breaks.</p>
<p><strong>Where it holds:</strong></p>
<p>The core structural similarity is this: a small number of researchers, working
at the frontier of a capability that most people do not understand, are making
decisions that will constrain or enable uses they cannot fully anticipate, in
contexts they will not control. The physics community in 1942 had a clearer view
of what fission could do than any political or military decision-maker. The AI
research community in 2026 has a clearer view of what large language models can
do — and of what more capable successors will do — than most of the people who
will deploy them.</p>
<p>That epistemic position is not morally neutral. Knowing more than the decision-makers
does not mean you have unlimited responsibility, but it does mean you have more
responsibility than someone who does not know. Feigning ignorance about downstream
applications is not available to you.</p>
<p>The second similarity: once the capability exists and is demonstrated, the
normative landscape changes. Before Trinity, the question of whether to build nuclear
weapons was still open. After Trinity, it was no longer open in the same way — the
knowledge existed, the infrastructure existed, the geopolitical expectations had
already been set. The arms race was not caused by the bomb, but the bomb&rsquo;s existence
changed what the arms race meant and how fast it moved. We are somewhere in the
vicinity of that transition with frontier AI systems. The question of whether to
build them is still formally open for any given company or research group, but the
landscape is already different from what it was five years ago.</p>
<p><strong>Where it breaks:</strong></p>
<p>The atom bomb was a single-purpose physical object whose primary function was destroying
things. Large language models are general-purpose cognitive tools with a very wide
range of applications, the majority of which are not weapons-relevant. This matters
because it changes the policy space. You could, in principle, have chosen not to build
the atom bomb. You cannot choose not to build language models while still having
language models for medicine, education, scientific research, and the other
applications that are clearly beneficial. The dual-use problem for AI is more
severe, not less severe, than it was for physics.</p>
<p>The other important difference: the Manhattan Project was conducted in secret, under
wartime conditions, with a relatively well-defined adversarial structure. The current
AI landscape involves many organisations, many countries, public publication of
research, and no clear equivalent of the Axis/Allied framing. The game theory
of &ldquo;if we don&rsquo;t do it, they will&rdquo; is more complicated when &ldquo;they&rdquo; is not a single
identifiable adversary with symmetric interests.</p>
<hr>
<h2 id="what-anthropics-line-actually-says">What Anthropic&rsquo;s Line Actually Says</h2>
<p>Setting aside for a moment whether the line is in the right place, there is something
worth examining in the act of drawing it at all.</p>
<p>The standard criticism — that a unilateral ethical commitment in a competitive
field simply advantages less scrupulous actors — assumes that ethical commitments
are pure costs with no countervailing benefits. This is the argument the weapons
lobby has made about every arms control proposal in the history of arms control,
and it has sometimes been right. Unilateral disarmament without reciprocal
commitments can leave you worse off. This is not a trivial point.</p>
<p>But it smuggles in an assumption that deserves scrutiny: that the relevant
competition is primarily between AI companies, and that the only variable that
matters is relative capability. If you accept that framing, then any ethical
constraint is a handicap and the only rational strategy is to develop as fast as
possible with as few restrictions as possible.</p>
<p>That framing has a name in the arms control literature. It is called the arms race equilibrium, and
the physics community spent thirty years understanding what it produces. It produces
capability accumulation without a corresponding development of the normative
frameworks, institutional safeguards, and mutual verification mechanisms that
make the capability survivable. It produces Hiroshima, then the hydrogen bomb,
then MIRV, then the point at which the accumulated arsenal is large enough to
end complex life on Earth several times over, at which point you negotiate the
first real arms limitation treaties — from a starting position of vastly more
deployed capability than anyone needed and vastly less trust than anyone wanted.</p>
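<p>That equilibrium can be made concrete with a toy payoff matrix. The sketch below is my
own illustration rather than anything from the arms control literature; the numbers are
hypothetical ordinal payoffs, chosen so that mutual restraint is better for both sides
than a mutual arms race, while building remains each side&rsquo;s best response whatever the
other side does:</p>

```python
# Illustrative two-player security dilemma. The payoff values are
# hypothetical ordinal scores (higher is better for the row player),
# chosen so that mutual restraint Pareto-dominates a mutual arms race,
# yet "build" is a dominant strategy for each player.

from itertools import product

ACTIONS = ("restrain", "build")

# PAYOFF[(my_action, their_action)] -> my payoff
PAYOFF = {
    ("restrain", "restrain"): 3,  # mutual restraint: safe and cheap
    ("restrain", "build"):    0,  # unilateral restraint: worst outcome
    ("build",    "restrain"): 4,  # unilateral advantage
    ("build",    "build"):    1,  # arms race: costly and dangerous
}

def best_response(their_action: str) -> str:
    """Action that maximises my payoff against a fixed opposing action."""
    return max(ACTIONS, key=lambda a: PAYOFF[(a, their_action)])

def nash_equilibria() -> list[tuple[str, str]]:
    """Profiles in which each action is a best response to the other."""
    return [
        (a, b)
        for a, b in product(ACTIONS, ACTIONS)
        if best_response(b) == a and best_response(a) == b
    ]

print(nash_equilibria())  # -> [('build', 'build')]
# Both sides would prefer ('restrain', 'restrain'), but it is not stable:
print(PAYOFF[("restrain", "restrain")] > PAYOFF[("build", "build")])  # -> True
```

<p>The only stable profile is mutual build-up, even though both players rank mutual
restraint above it. That gap between the stable outcome and the outcome everyone
prefers is exactly what normative frameworks and mutual verification mechanisms
exist to close.</p>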
<p>The question Anthropic is implicitly asking is whether there is a path that does
not look like that. The answer is not obvious. But I think it is worth asking.</p>
<hr>
<h2 id="what-the-physicists-should-have-done">What the Physicists Should Have Done</h2>
<p>Here is the counterfactual that haunts the Manhattan Project&rsquo;s legacy: what if
the scientific community had treated the ethics of the bomb as seriously as the
physics, from the beginning?</p>
<p>Not naïvely. Not by refusing to work on it and ceding the possibility of influencing
it. But by making the ethical analysis parallel to the technical analysis, by
treating the question of use as a scientific question with as much rigour as the
question of yield, and by using the epistemic authority that came from being the
people who understood the capability to push, hard, for the normative frameworks
that did not yet exist.</p>
<p>Some scientists did this. Szilard circulated a petition, signed by 70 Manhattan
Project scientists, against the use of the bomb on Japanese cities without prior
warning. It did not work. But the effort was real, and the record of the effort
matters — both as evidence that the scientific community was not unanimous in its
acquiescence and as a model for what engaged dissent looks like from inside a
project that is going to proceed regardless.</p>
<p>What most scientists did not do, and what the profession largely did not do in the
decades that followed, was treat the ethical work as primary. Physics built its
identity around the technical capability — the extraordinary achievement of
understanding nature at the deepest level — and treated the ethical consequences
as someone else&rsquo;s department. The bomb was the military&rsquo;s problem. The cold war was
the politicians&rsquo; problem. The physicists kept doing physics.</p>
<p>This was comfortable and it was wrong.</p>
<hr>
<h2 id="what-i-want-from-ai-researchers">What I Want From AI Researchers</h2>
<p>I want AI researchers to do what the physicists did not, and to do it now, while
the critical decisions are still open.</p>
<p>Anthropic drawing a line is one version of this. It is imperfect — the line is
in a particular place, the enforcement mechanisms are limited, the competitive
dynamics are real. But it is a claim that the people who built the capability
have ongoing responsibility for how it is used, and that some uses are outside
the bounds of what should happen regardless of what is technically possible.</p>
<p>That claim is not naïve. It is, in fact, the claim the Franck Report was making
in 1945: that capability does not determine use, that scientists have a voice in
the normative question, and that using that voice is part of the job rather than
a distraction from it.</p>
<p>What I want beyond that is for the AI research community to treat the ethics
as primary rather than as footnotes. Not ethics review boards that approve research
post hoc. Not responsible AI teams that are consulted after the capability has
been developed. A genuine integration of the normative analysis into the research
process itself — asking, at each stage, what this capability makes possible and
who benefits from that possibility and who pays the cost.</p>
<p>The physics community got to August 1945 before it had that conversation in earnest.
The conversation has been going on ever since, and it has produced important
institutional frameworks — the Bulletin of the Atomic Scientists, the arms control
treaties, the export control regimes, the norms against first use. These things
matter. But they were built in reaction to a capability that had already been
deployed, and the shape of everything that followed was constrained by that
starting point.</p>
<p>The AI community is not there yet. The starting point is still being established.
That is what makes this moment consequential, and what makes Anthropic&rsquo;s line —
wherever exactly it is drawn — worth defending as an act of principle rather than
dismissing as an act of commercial positioning.</p>
<hr>
<h2 id="a-note-on-the-of-our-time-framing">A Note on the &ldquo;Of Our Time&rdquo; Framing</h2>
<p>I am aware that comparisons to the atom bomb are sometimes used to generate
unwarranted urgency, to short-circuit careful reasoning by invoking the most
extreme case. I want to be clear about what I am and am not claiming.</p>
<p>I am not claiming that current large language models are as immediately dangerous
as nuclear weapons. They are not.</p>
<p>I am claiming that the structural situation — researchers at the capability
frontier, ahead of the policy frameworks, making decisions that will constrain
future options, in a competitive environment with adversarial dynamics — is
similar enough that the lessons of the Manhattan Project period are directly
relevant. Not as prophecy. As a guide to the kind of mistakes that are available
to make.</p>
<p>The physicists had plenty of warning. Szilard had been worried since 1933.
Einstein wrote to Roosevelt in 1939. The Franck Report was written before
Hiroshima. The warnings were on the record. What was not on the record was
a scientific community that treated those warnings as actionable constraints
on its own behaviour rather than as advisories for policymakers.</p>
<p>That is the thing I want to be different this time.</p>
<hr>
<h2 id="references">References</h2>
<p>Franck, J. et al. (1945). <em>Report of the Committee on Political and Social Problems
(The Franck Report).</em> National Archives, Record Group 77.</p>
<p>Oppenheimer, J. R. (1965). Interview on <em>The Decision to Drop the Bomb</em> (NBC
documentary).</p>
<p>Rhodes, R. (1986). <em>The Making of the Atomic
Bomb.</em> Simon &amp; Schuster.</p>
<p>Russell, B., &amp; Einstein, A. (1955). <em>The Russell–Einstein Manifesto.</em>
Pugwash Conferences on Science and World Affairs.</p>
<p>Szilard, L. (1945). <em>A Petition to the President of the United States.</em> July 17,
1945. Available via the Atomic Heritage Foundation.</p>
<p>Bulletin of the Atomic Scientists (1945–present). <em>Doomsday Clock statements.</em>
<a href="https://thebulletin.org/doomsday-clock/">https://thebulletin.org/doomsday-clock/</a></p>
]]></content:encoded>
    </item>
  </channel>
</rss>
