Physics · Foundations · Philosophy

Part 3 of 4: Foundations of Physics

Ignition

Unbounded.

Nicholas Webb · April 2026

I am not a physicist.

I need to say that up front because the rest of this is going to sound like I think I am one, and I don't. I have no physics degree. I took the standard college courses — physics, math, statistics — and I've been reading books and listening to podcasts about fundamental physics for years because I find it genuinely fascinating. But I have never taken a graduate course in quantum field theory. I cannot derive the Dirac equation off the top of my head. I have enormous respect for the people who can, and for the decades of rigorous work that built the Standard Model. I am not claiming to be one of them.

I am a software engineer. Fourteen years at companies you've heard of, building systems that process millions of transactions a day. Enterprise software is survival of the fittest, and it hones good systems instincts — most of the job is streamlining, making the structure match what the data actually is. Nature operates the same way. She only deals in MVPs. She is conservative by necessity. Systems at scale have to be. So when something in a complex system doesn't fit the pattern, I notice.

I understand the physics better than I can express it in standard notation. For me, mathematical formalism feels like swimming in scuba gear — you're in the water, you're moving, but you feel separated from the environment. The off-gassing is disruptive. It's awkward. Useful, but neither fluid nor intuitive. Euler and Ramanujan would disagree — they swam naked in that water — but even they were constrained by what the notation could express versus what they could see. Translate the same math to code, though, and I understand it fine. The logic is identical. The interface is different.

The Idea

I've been wrestling with the tension between general relativity and quantum mechanics for decades. Not professionally, just as someone who reads the books, listens to the lectures, and can't stop turning it over. The uneasiness between the two frameworks, the way they describe the same universe in mutually incompatible languages: that tension had been building in my head for years.

The spark was Bell. I'd encountered the theorem before and filed it away, but this time I had a tool that could actually follow the thread. His story — how he took Einstein's complaint seriously when the rest of the field had moved on, how he turned philosophy into a testable prediction — lit tinder that had been curing for years.

The specific idea, though, came from the Standard Model's parameters. The way they are presented struck me differently than it would strike a physicist.

A physicist sees 19 free parameters and thinks: these are inputs to the Lagrangian. Boundary conditions. You measure them, you plug them in, the machine runs.

I saw 19 parameters and thought: these aren't all the same data type.

Electric charge comes in exact thirds: a quark carries exactly one-third or two-thirds of the electron's charge, never anything in between. Spin is exactly one-half. Color comes in exactly three. These are clean, exact, primal, simple ratios that never vary, identical every time you measure them. They have the character of structure. Of architecture.

Then you have the masses. The electron mass is 0.511 MeV. The muon is 105.658 MeV. The top quark is 172,570 MeV. These span six orders of magnitude with no obvious pattern. They have the character of measurements — outputs of some process, not inputs to a design.

These two kinds of numbers are sitting in the same table, treated by the same formalism, as if they're the same kind of thing. But they're not. One kind is quantized. The other kind looks like it fell out of something.

In software, this is a code smell. When two fundamentally different data types are stored in the same field, it means someone isn't modeling the domain correctly. It means there's a missing abstraction.
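To make the smell concrete, here is a minimal sketch of the missing abstraction. This is my framing of the post's argument, not anything from the physics literature, and the names and uncertainties are illustrative:

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class StructuralConstant:
    """Exact rational, identical in every measurement — architecture."""
    name: str
    value: Fraction  # e.g. charge in units of e, spin in units of ħ

@dataclass(frozen=True)
class MeasuredValue:
    """Continuous output with an error bar — the result of some process."""
    name: str
    value_mev: float
    uncertainty_mev: float  # illustrative magnitudes, not PDG-exact

# The first kind never varies:
down_charge = StructuralConstant("down quark charge", Fraction(-1, 3))
electron_spin = StructuralConstant("electron spin", Fraction(1, 2))

# The second kind carries uncertainty and spans orders of magnitude:
electron_mass = MeasuredValue("electron mass", 0.511, 1e-8)
muon_mass = MeasuredValue("muon mass", 105.658, 2e-6)

# Dumping both into one undifferentiated "parameters" table is the smell:
# one type is an exact ratio, the other is a measurement with an error bar.
```

The point of the sketch is only that the type system forces the distinction the parameter table hides: you cannot put a `Fraction` where a `float` with an error bar belongs without noticing.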

That was the idea. It went on the shelf.

The Shelf

Everyone has a shelf. It's where the interesting ideas go when you don't have the time, the tools, or the credentials to pursue them. The shelf is where “what if?” goes to wither, quietly, while the next deadline, the next project, the next thing that actually pays takes priority. The ideas never die. That's the worst part. They just sit there.

Before AI, my shelf was full. The physics idea sat next to half a dozen others — things I'd noticed, patterns I'd wondered about, questions I didn't have the formalism to even properly ask. I'm not a physicist. I can't spend six months learning quantum field theory notation just to check whether a hunch holds up. Nobody with a day job and a family can.

The shelf is where outsider insight withers. Not because the ideas are bad — most of them probably are — but because the activation energy to check them is prohibitive. The gap between “interesting thought” and “let me test that” is measured in years of education and institutional access.

In the right hands, AI is a bridge across that gap. Not a shortcut — plenty of investigations still fall flat. But you get to dive further into them than you ever could alone.

The Method

I don't use AI the way most people do. I don't ask it to write my code or generate my content. I use it the way a researcher uses a graduate student who happens to have read every paper ever written and can do calculations at the speed of light — but has no original ideas and will cheerfully pursue whatever direction I point it in.

The key insight about AI that almost nobody talks about: it gives back what you give it. Point it at the ground, it describes dirt. Point it at the sky, it describes clouds. It doesn't decide where to look. You do.

From the first day, I imposed a discipline borrowed from software engineering: test-driven development. In software, TDD means you write the test before you write the code. The test defines what success looks like. Then you write the minimum code that passes the test. If the test fails, the code is wrong. No exceptions. No “well, it's close enough.” No “the test must be wrong.”

In physics, my version of TDD was: data is the source of truth. Every pattern I thought I saw had to be checked against measured values — PDG data, FLAG lattice averages, experimental results. Not approximately. Not “in the right ballpark.” To sub-percent precision or it doesn't count.

This is not how theoretical physics is normally done. Normally, you start with a theoretical framework, derive predictions, and then compare to data. I did it backwards. I started with the data, sorted it by type, looked for patterns, and refused to hypothesize until the patterns were undeniable.

The AI was the engine that made this practical. I'd have a hunch — “is this mixing angle the square root of that mass ratio?” — and instead of spending a week deriving, checking literature, and running edge cases, I'd test it in thirty seconds. Wrong? Next. Right? Push harder. That particular one was right, and it cracked the problem wide open.
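The post doesn't spell out which mixing-angle relation that was, but a textbook example of the same shape is the Gatto relation, sin θ_C ≈ √(m_d/m_s). A thirty-second check in that spirit, with the sub-percent gate from the TDD rule above, might look like this (rounded PDG-style inputs; my numbers, purely illustrative):

```python
import math

def within_percent(hypothesis: float, measured: float,
                   tolerance_pct: float = 1.0) -> bool:
    """TDD-style gate: the hypothesis passes only if it matches the
    measured value to within the stated percentage. No 'close enough'."""
    return abs(hypothesis - measured) / abs(measured) * 100 <= tolerance_pct

# Rounded light-quark masses (MS-bar at 2 GeV, illustrative precision).
m_down_mev = 4.67
m_strange_mev = 93.4
sin_cabibbo = 0.2250  # measured Cabibbo angle, approximately

hypothesis = math.sqrt(m_down_mev / m_strange_mev)  # ≈ 0.2236

print(within_percent(hypothesis, sin_cabibbo))  # → True, within ~0.6%
```

Wrong? The function returns False and you move on. Right? You push harder. The loop is the method; the relation above is just a stand-in for the kind of hunch it makes cheap to test.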

We tried two years of possibilities in a weekend.

What AI Is Not

I want to be clear about what the AI did and did not do.

It did not notice that the Standard Model parameters split into two categories. I did. The AI would never have looked at the problem that way — it's trained on physics literature that treats all parameters as the same kind of object. It would have given me a literature review and a shrug.

It did not ask “why would a wave stop at the boundary?” I did. I described a wave that rolls past the edge and comes back on itself, and the AI said “that's called an orbifold, here's the math.” The concept existed. Nobody had pointed it at this problem.

AI did not propose that the heaviest generation is the source and the lightest is the destination — a raindrop falling from a cloud, not a ladder being climbed. I did. The AI would have defaulted to the standard framework where generation number is just an index.

Every conceptual breakthrough came from a question the AI wouldn't have asked. Every question came from looking at the problem sideways, unburdened by the training that tells you how you're supposed to look at it.

What the AI did was make iteration instant. It turned “I wonder if...” into “let's check” without a six-month detour through a textbook. It was a tireless, infinitely patient collaborator that could calculate anything, recall any paper, and never once said “that's a stupid question.”

But it never once said “have you considered looking at it this way?” either. That part was mine.

The Outsider's Advantage

Michael Faraday was a bookbinder's apprentice. No formal education. No mathematics. He couldn't write an equation to save his life.

He discovered electromagnetic induction — the principle behind every electric motor, every generator, every transformer on Earth. He discovered diamagnetism. He discovered electrolysis. He introduced the concept of the field — the idea that forces act through space, not at a distance — which became the foundation of all modern physics.

Maxwell had to come along later and translate Faraday's physical intuitions into equations. The math came after the understanding. Not before.

There have always been two kinds of minds in physics. There's the formalist — Dirac, Bohr, von Neumann — who thinks in equations, whose native language is the notation itself. And there's the intuitive — Einstein, Faraday, Sagan — who thinks in pictures, in analogies, in physical models, and needs someone else to write it down. The rare mind like Feynman bridges both worlds. But they're both necessary. Always have been. The formalist without the intuitive has nothing new to formalize. The intuitive without the formalist can't prove anything.

Over the last hundred years, there has been less and less space for the Faradays of the world. The rigor that made physics precise also narrowed who gets to participate.

AI opens that door again.

Not because AI does the thinking. But because AI handles the formalism. The barrier that stopped a modern Faraday — “you can't check your intuition without ten years of mathematical training” — dissolves when you can describe your idea in plain language and get the equations back in thirty seconds.

This is the Unbounded era. Not the AI doing the breakthrough. The AI enabling the kind of mind that produces breakthroughs — the generalist, the pattern-spotter, the person who asks “why” instead of “how” — to operate in fields that had been gated by specialization.

When you stare at the same twenty numbers for fifty years, they stop being data and start being furniture. This isn't a criticism — it's neuroscience. Your brain edits out your optic nerve's blind spot so completely that you don't know it's there. Familiarity does the same thing to data. You stop seeing what the numbers are and start seeing what you've been told they are. The Standard Model parameters are “free parameters.” The mass hierarchy is “an open problem.” The boundary conditions are “Dirichlet.” These are labels that close the file.

I hadn't stared at them for fifty years. I came from a discipline where organizing data by type is the first thing you do — where if two values behave differently, they are different, and you model them accordingly. I brought enterprise system ontology to a physics problem, and the first thing it told me was that two very different kinds of data were being treated as the same thing.

I looked at the data and asked what it was trying to tell me.

The answer, when it came, was simple enough to be suspicious and precise enough to be hard to dismiss.

Previous: Part 2 — The Altitude Problem — The tool defines the view. Always has.
Next: Part 4 — The Looking Glass — Separating quantum from classical to find the pattern hiding in plain sight.