<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Reed Bender</title><link>https://reedbender.com/</link><description>Recent content on Reed Bender</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 25 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://reedbender.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Data Is the Moat</title><link>https://reedbender.com/writing/data-is-the-moat/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/data-is-the-moat/</guid><description>&lt;p&gt;The AI discourse right now is fixated on two things: model improvements and agent frameworks. Which model is best. Which orchestration layer is cleanest. How many tools can you wire into a loop. How many agents can you run in parallel. Every week my X feed is filled with a new harness, a new benchmark, a new claim about reasoning.&lt;/p&gt;
&lt;p&gt;None of it matters as much as the data.&lt;/p&gt;
&lt;p&gt;The models are converging. A &lt;a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither"&gt;leaked Google memo&lt;/a&gt; called it in 2023: &amp;ldquo;We Have No Moat, And Neither Does OpenAI.&amp;rdquo; Since then, &lt;a href="https://huggingface.co/Qwen"&gt;Qwen&lt;/a&gt;, &lt;a href="https://www.deepseek.com/"&gt;DeepSeek&lt;/a&gt;, and Llama have all closed the gap. James Betker at OpenAI &lt;a href="https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dataset/"&gt;said it best&lt;/a&gt;:&lt;/p&gt;</description></item><item><title>Building the Lab: Why I Run My Own Infrastructure</title><link>https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/</guid><description>&lt;p&gt;I spend nearly all of my professional time building production infrastructure on AWS for biomedical research. Kubernetes clusters, Terraform modules, CI/CD pipelines, Postgres databases, agentic AI orchestration.&lt;/p&gt;
&lt;p&gt;My general inclination has always been towards cloud-native development. By every reasonable measure, the last thing I need is a rack of Raspberry Pis and a workstation that draws a kilowatt of power.&lt;/p&gt;
&lt;p&gt;I built it anyway. Because the thing I&amp;rsquo;m trying to understand can&amp;rsquo;t be rented.&lt;/p&gt;</description></item><item><title>The Magus: Building the Primary Workstation</title><link>https://reedbender.com/writing/building-the-lab/part-2-the-magus/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-2-the-magus/</guid><description>&lt;p&gt;This is where I write code, and where the agents do most of their thinking.&lt;/p&gt;
&lt;p&gt;A llama.cpp server sits on the LAN with an OpenAI-compatible API, serving a 12GB model out of VRAM to all 4 agents asynchronously. Heartbeat cycles, structured outputs, tool calls, internal coordination &amp;ndash; the always-on work stays local and costs nothing per token. When an agent needs real reasoning depth it routes to Sonnet. When the task is hard or externally visible it escalates to Opus. The local GPUs are the floor of the system. The cloud tiers only get called when the work actually warrants it.&lt;/p&gt;</description></item><item><title>The Rack: Physical Infrastructure and the Tarot Node Map</title><link>https://reedbender.com/writing/building-the-lab/part-3-the-rack/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-3-the-rack/</guid><description>&lt;p&gt;8 rack units, 10-inch form factor, sitting on a desk. It houses the agent cluster, the memory server, the monitoring node, all the networking, and a hardwired audio system that lets any machine on the LAN speak through passive speakers mounted on the rack rails.&lt;/p&gt;
&lt;p&gt;The whole thing draws under 200W.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://reedbender.com/images/lab/rack.jpg" alt="The rack"&gt;&lt;/p&gt;
&lt;h2 id="the-nodes"&gt;The Nodes&lt;/h2&gt;
&lt;p&gt;Every node in the lab is named after a Tarot card. The full mapping and the reasoning behind it are in &lt;a href="https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/#the-tarot-map"&gt;part 1&lt;/a&gt;. Here&amp;rsquo;s the physical layout:&lt;/p&gt;</description></item><item><title>Please Explain this Gap in your Resume</title><link>https://reedbender.com/writing/please-explain-this-gap/</link><pubDate>Sat, 21 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/please-explain-this-gap/</guid><description>&lt;p&gt;There&amp;rsquo;s a stretch on my resume between leaving a computational biology role at Flagship Pioneering in Cambridge and showing up as a Senior Data Engineer at Lantern Pharma in Dallas that doesn&amp;rsquo;t look anything like career progression.&lt;/p&gt;
&lt;p&gt;I was on a farm in North Carolina.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d spent two years at &lt;a href="https://www.flagshippioneering.com/companies/etiome"&gt;Etiome&lt;/a&gt; as the first software engineer, building ML infrastructure for single-cell RNA-sequencing and electronic medical records. We were modeling the biochemical factors of disease progression temporally, at single-cell resolution, trying to capture how disease actually evolves across patients where everybody is n=1. Great team, good data, interesting problems. But the higher the resolution got, the more it felt like I was getting a sharper and sharper image of the wrong thing. We could see what happened as disease progressed. We couldn&amp;rsquo;t see what caused it to progress. If the biochemical pathways were transistors, we were watching them flip without ever finding the programming language behind their orchestration. Those are two very different questions, and no amount of sequencer resolution was going to collapse them into one.&lt;/p&gt;</description></item><item><title>Our Superconducting Consciousness</title><link>https://reedbender.com/writing/our-superconducting-consciousness/</link><pubDate>Sun, 06 Feb 2022 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/our-superconducting-consciousness/</guid><description>&lt;figure&gt;
&lt;img src="https://reedbender.com/images/superconducting/hero-expand-amy-pennell.jpg" alt="Expand, by Amy Pennell."&gt;
&lt;figcaption&gt;Expand, by Amy Pennell.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Behind it all is surely an idea so simple, so beautiful, that when we grasp it &amp;ndash; in a decade, a century, or a millennium &amp;ndash; we will all say to each other, how could it have been otherwise?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;&amp;ndash; John Archibald Wheeler, 1986&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p class="post-note"&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you would prefer to read this as a PDF manuscript, it is available &lt;a href="https://superconductingconsciousnesspublic.s3.amazonaws.com/Bender_Superconducting_Consciousness.pdf"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p class="post-note"&gt;&lt;strong&gt;2026 Author's Note:&lt;/strong&gt; This essay was originally published in February 2022. In September 2022, &lt;em&gt;Nature&lt;/em&gt; retracted the Snider &lt;em&gt;et al.&lt;/em&gt; (2020) paper on room-temperature superconductivity in carbonaceous sulfur hydride, which is cited in Section 4. The retraction states: &lt;em&gt;"We have now established that some key data processing steps used a non-standard, user-defined procedure... these processing issues undermine confidence in the published magnetic susceptibility data as a whole."&lt;/em&gt; (&lt;a href="https://doi.org/10.1038/s41586-022-05294-9"&gt;Nature 610, 804, 2022&lt;/a&gt;). The broader argument of this essay, that biological superconductivity may be possible drawing on Cope, Becker, and Mikheenko, does not depend on the Snider result. The relevant passage has been annotated below. I have left the original text intact for transparency rather than silently removing it.&lt;/p&gt;</description></item><item><title>About</title><link>https://reedbender.com/about/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://reedbender.com/about/</guid><description>&lt;p&gt;I&amp;rsquo;ve been asking two questions for most of my career and it took me a while to realize they were different. The first: &amp;ldquo;what can I learn from this data?&amp;rdquo; That one keeps me grounded. The second: &amp;ldquo;what is &lt;a href="https://arxiv.org/abs/2505.15849"&gt;&lt;em&gt;life&lt;/em&gt;&lt;/a&gt; doing that this data can&amp;rsquo;t teach me?&amp;rdquo; That one keeps me up at night.&lt;/p&gt;
&lt;p&gt;I lead platform architecture at &lt;a href="https://lanternpharma.com"&gt;Lantern Pharma&lt;/a&gt;, where my team built &lt;a href="https://withzeta.ai"&gt;withZeta.ai&lt;/a&gt; to accelerate drug discovery for rare and resistant cancers. I run &lt;a href="https://reedbender.com/attune-intelligence/"&gt;Attune Intelligence&lt;/a&gt;, bootstrapped and self-funded, because that second question can&amp;rsquo;t survive a grant cycle. All of my published work is compiled under &lt;a href="https://reedbender.com/research/"&gt;research&lt;/a&gt;. The thinking behind it is in my &lt;a href="https://reedbender.com/writing/"&gt;writing&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Attune Intelligence</title><link>https://reedbender.com/attune-intelligence/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://reedbender.com/attune-intelligence/</guid><description>&lt;div class="attune-mark"&gt;
 &lt;a href="https://attuneintelligence.ai" target="_blank" rel="noopener" class="attune-mark__link"&gt;
 &lt;img src="https://reedbender.com/images/logos/attune.png" alt="Attune Intelligence" loading="lazy"&gt;
 &lt;/a&gt;
&lt;/div&gt;
&lt;p&gt;Attune Intelligence is an independent research lab. No investors, no series A, no institutional affiliation. The consulting funds the hardware. The questions drive the work.&lt;/p&gt;
&lt;hr&gt;
&lt;div class="section-label"&gt;Research&lt;/div&gt;
&lt;p&gt;The research asks what intelligence is as a natural phenomenon. Not a product feature or a benchmark score, but a physical process that cells have been channeling for billions of years before we named it. The boundary between living systems and silicon is less clear than most of the field assumes, and that matters for how we build.&lt;/p&gt;</description></item><item><title>Engineering</title><link>https://reedbender.com/engineering/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://reedbender.com/engineering/</guid><description>&lt;div class="github-activity"&gt;
 &lt;div class="github-activity__header"&gt;
 &lt;span class="section-label"&gt;GitHub Activity&lt;/span&gt;
 &lt;a href="https://github.com/mrbende" class="github-activity__link"&gt;@mrbende &amp;rarr;&lt;/a&gt;
 &lt;/div&gt;
 &lt;p class="github-activity__note"&gt;Public and open-source contributions only. Lantern Pharma work and private repos aren't reflected here.&lt;/p&gt;
 &lt;img src="https://ghchart.rshah.org/c0622d/mrbende" alt="GitHub contribution activity" loading="lazy" class="github-activity__grid"&gt;
&lt;/div&gt;
&lt;p&gt;Most of my career has been spent building systems that move biological data from raw to useful. Cloud infrastructure, production pipelines, databases, agentic AI. Across cancer drug discovery, genomics, and edtech.&lt;/p&gt;
&lt;p&gt;Good infrastructure disappears. When it&amp;rsquo;s working, nobody thinks about it. The database responds, the pipeline runs, the agents coordinate, the deployment holds under load. The craft is in building systems that stay invisible, that absorb complexity so the people using them never have to. Most of what I&amp;rsquo;ve built will never be seen by anyone other than the engineers who maintain it after me. That&amp;rsquo;s the point.&lt;/p&gt;</description></item><item><title>Research</title><link>https://reedbender.com/research/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://reedbender.com/research/</guid><description>&lt;div class="section-label"&gt;Featured&lt;/div&gt;
&lt;div class="pub-item pub-item--highlight"&gt;
 &lt;div class="pub-item__venue"&gt;arXiv preprint · 2025 · &lt;span class="pub-item__cofirst"&gt;first author&lt;/span&gt;&lt;/div&gt;
 &lt;div class="pub-item__title"&gt;&lt;a href="https://arxiv.org/abs/2505.15849"&gt;What Lives? A meta-analysis of diverse opinions on the definition of life&lt;/a&gt;&lt;/div&gt;
 &lt;div class="pub-item__authors"&gt;Reed Bender, Karina Kofman, Blaise Agüera y Arcas, Michael Levin&lt;/div&gt;
 &lt;div class="pub-item__description"&gt;
 68 expert definitions of life from 14 countries, analyzed by 3 LLMs using pairwise correlation, agglomerative clustering, and t-SNE projection. What most consider a taxonomy, we turned into a topology.
 &lt;/div&gt;
&lt;/div&gt;
&lt;hr&gt;
&lt;p&gt;A decade of systems engineering and applied ML in cancer genomics teaches you exactly where computation runs out of biology to explain.&lt;/p&gt;</description></item></channel></rss>