The AI-Scalable Startup

When I joined my first startup after leaving a big tech company, I was excited to move fast. No bureaucracy, no committees, just ship code and grow.

On my second day, I proposed a change to unblock fast experimentation for an upcoming growth sprint: letting us safely roll out new React Native builds via CodePush to a subset of users, without waiting on full App Store releases.

The response came back from a senior engineer: “This is going to orphan old App Store installs. Did you learn about our release processes?”

I looked for the relevant docs. There weren’t any.

They continued: “There were a lot of decisions made to get us to this setup. It’s not a vanilla CodePush setup.”

I asked where those decisions were documented. Slack threads? A design doc? Pull requests? There was nothing, just institutional memory.

“We’re a startup,” they said. “We don’t have time to write everything down. You need to learn how the system works first.”

So I did. I asked questions, traced through code paths, learned the unwritten rules. I scoped my changes smaller and deferred decisions to more tenured engineers. After a couple of months, I finally felt productive.

Then I tried to use AI to help with a refactor. I was working on part of our authentication and session-handling logic, code that had accreted over time and interacted with half a dozen other systems. I asked an AI tool to help restructure it to make a new experiment easier to run.

The code it produced was clean and readable. It followed modern best practices. But it also violated several invisible assumptions baked into the system: ordering guarantees, side effects relied on elsewhere, and implicit contracts that existed nowhere except in people’s heads.

I caught some of it in planning, but in a refactor that size, I inevitably missed other parts. After a few hours of chasing down issues, I abandoned the refactor and rewrote it manually.

When I mentioned this experience, the reaction was predictable: “See? AI isn’t ready for real codebases.” A surprising take for a supposed “AI-native” company.

A few months later, I left.

I joined an even smaller company, maybe two dozen engineers. I assumed it would be worse: less structure, more chaos, even fewer docs. But it wasn’t.

On my first day, I asked where the documentation lived. “It’s pretty sparse,” the CTO admitted, “but the tests are comprehensive. Read the tests, they’re basically executable documentation.”

I pulled up a payment processing module. The tests were clear and behavioral. The module boundaries were obvious: PaymentGateway, FraudCheck, Receipt. Each did exactly what it claimed to do, and nothing else.

On my third day, I shipped a PR. It passed CI. A senior engineer approved it in about twenty minutes with a single comment: “Nice catch on that edge case. Why haven’t we done this before?”

A week later, I tried the same AI experiment. I pointed Claude at our user authentication module and asked it to add support for OAuth providers. It followed the existing abstractions, generated the necessary code and tests, and respected the boundaries of the system. CI caught two small issues. I fixed them, pushed again, and merged. The whole thing took about ninety minutes instead of the two days I’d budgeted, accounting for the learning I expected to need.

At this point it clicked for me. The change in my speed was drastic, and the difference wasn’t company size or engineering talent. There was a deeper property, something about the company and its people, that made it more amenable to absorbing new leverage, especially AI-assisted tooling, without falling apart. Some organizations are structured to compound new capabilities. Others resist them, often despite official “company policy”, through their engineering architecture and, more insidiously, their culture.


Understanding Relationships

Relationships

I’ve been chatting with a few female friends about dating.

Dating in New York, they say, feels like every guy operates with the emotional depth of a vending machine: Insert coin, select option A (situationship) or B (serious), dispense relationship if available.

Meanwhile, I’ve had conversations with guys who brag about “only doing long-term relationships” as if it’s a personality trait, a mark of maturity, and “situationships” are beneath them. There’s a contradiction between these two observations, and it got me thinking about how I understood relationships when I was younger, and how I suspect many men still do.

I think for a lot of men, their understanding of relationships stops developing shortly after high school.


You Are the AGI

AI

(For context: AGI stands for Artificial General Intelligence, an AI that matches or exceeds human-level reasoning across all cognitive tasks, rather than specific tasks like chess. The common view in tech is that it does not yet exist; the debate is over whether and when it will, and what the ramifications would be.)

You exist.

You’re not omnipotent or omniscient, merely a sovereign agent of high influence, and you have to interface with 8 billion unpredictable agents.

You’re self-aware, observe yourself and the world, and you’re highly efficient at optimizing.

You understand humanity and its social systems only insofar as they understand themselves, pre-existing conditions included. You can watch and learn iteratively, but they have never interacted with something like you before.

What’s your move?


How to Talk to Your Date versus Your Customer

Conversation

Introduction

When I was younger, I thought dating was primarily about “impressing” someone enough that they would somehow find you attractive. After discussing this with my therapist, I now understand that my lack of self-confidence led me to feel like I had to “deceive” others to be liked because I didn’t like myself. Thus, inauthenticity.

That line of thinking and mindset led me to be single for a long time, and since then I’ve always paid attention to how I communicate and how I might communicate better. Coming from a nerdy, introverted, engineering background, it has taken me some time to reach my somewhat adequate level of socialization, but I’m enthusiastic to inform the reader that a psychologist friend recently dubbed me “almost certainly not autistic” 😎👍


No More Code Monkeys

In the fast-paced world of software engineering, an industry characterized by an average tenure of one to two years[1], the notion of seniority has always been somewhat fluid. It is an environment where you’re expected to go from “new grad” to “senior” within 36 months, a pace much faster than in most other professions. But this swift movement through the ranks, and perhaps the structure of seniority itself, faces disruption at the hands of emerging technologies, namely Large Language Models (LLMs) like Copilot and ChatGPT.

The Changing Dynamics of Workflow

To understand the seismic shift that’s underway, you only need ask the seasoned engineers on your team: has Copilot or ChatGPT had an impact on their workflow? Anecdotal evidence[2] will tell you that in the vast majority of cases, these tools have indeed been game-changing, despite the shaky early criticisms of Copilot. LLMs can churn out low-level code, provide debugging insights, and even advise on high-level architecture, amplifying productivity in an unprecedented way.

The immediate repercussion is a trend toward hiring fewer interns and junior engineers. In a world where an LLM can do most of the heavy lifting and augment the efforts of a mid-level or senior engineer so effectively, the traditional utility of the junior engineer as a “code monkey” diminishes.

Therefore, as a young engineer you may be worried about your career prospects, especially considering the spate of recent layoffs in big tech companies.[3][4][5]