Hi. I’m Dustin. I’m a creative and curious Sr. Frontend Engineer at True Anomaly in Los Angeles, CA. I specialize in front-end web development with JavaScript, React, and Node.js. You can also find me on GitHub and LinkedIn.

  • A Senior Engineer’s Guide to Learning a New Codebase

    My manager challenged me to learn a new part of the codebase that I’ve never worked on before. I figured I would use it as an opportunity to jot down the process I use. Learning a new codebase should be a structured transition from observation to mental modeling. There are three phases: mapping, tracing, and modification. This approach moves a developer from treating code as a “black box” to understanding its guts.

    1. Mapping and context

    Mapping involves understanding the system’s architecture and data flow before reading business logic. In large-scale applications, reading code without a high-level map often leads to cognitive overload.

    Documentation

    The first step is identifying the “why” behind the implementation. Documentation provides the historical context of technical choices. Architectural documents detail why specific frameworks or patterns – such as a local-first architecture or a specific state management library – were selected over alternatives. Understanding these business constraints prevents the logic from appearing arbitrary.

    Entry points

    Every software module has a defined entry point or “front door.” Identifying these allows a developer to narrow their focus to the most critical paths.

    • Identify web API route handlers and middleware chains.
    • Find the public API or the primary export file for libraries.
    • In the frontend, map props and the useEffect hooks that fire when a component mounts.
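    A minimal sketch of such a "front door" – every name here (logRequest, authenticate, handleGetUser) is hypothetical, and the middleware chain is hand-rolled for illustration rather than taken from a real framework:

```javascript
// Hypothetical sketch: a typical web module's entry point is a route
// handler at the end of a middleware chain. Tracing starts here.
const middlewareChain = [
  function logRequest(req) {
    console.log(`${req.method} ${req.url}`);
    return req;
  },
  function authenticate(req) {
    // a real implementation would verify a session or token here
    return { ...req, user: { id: 1, name: 'demo' } };
  },
];

function handleGetUser(req) {
  // the entry point proper: from here, calls fan out into services
  return { status: 200, body: req.user };
}

// compose the chain the way a framework like Express would
const respond = (req) =>
  handleGetUser(middlewareChain.reduce((r, mw) => mw(r), req));

console.log(respond({ method: 'GET', url: '/users/1' }));
```

    Once you can name the handler a request lands in, you have a concrete starting point for the tracing phase.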

    Data flow

    A core requirement of mapping is diagramming how data moves through the system. This includes identifying where data enters (ingestion), where it is transformed (business logic), and where it is stored (persistence). This involves tracing a request from the router to the controller, through the service layer, and finally to the database driver.
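    The router-to-database trace can be sketched in miniature – all of these names (userController, userService, dbDriver) are invented for illustration:

```javascript
// Minimal sketch of the layers a request passes through.
const dbDriver = {
  // persistence: stands in for a real database call
  findUserById: async (id) => ({ id, email: 'demo@example.com' }),
};

const userService = {
  // business logic: transform raw records before they leave this layer
  getUser: async (id) => {
    const record = await dbDriver.findUserById(id);
    return { ...record, displayName: record.email.split('@')[0] };
  },
};

const userController = {
  // ingestion: validate input, then delegate downward
  getUser: async (req) => {
    const id = Number(req.params.id);
    if (!Number.isInteger(id)) return { status: 400 };
    return { status: 200, body: await userService.getUser(id) };
  },
};

// the "router" maps a path to the controller; the trace follows this arrow
userController.getUser({ params: { id: '42' } }).then((res) =>
  console.log(res.body.displayName) // logs "demo"
);
```

    Diagramming even a toy version like this makes it obvious where to set breakpoints later: one per layer boundary.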

    2. Tracing and analysis

    Once the boundaries are defined, the developer needs to see the code working. Static analysis is insufficient for understanding complex state changes and asynchronous operations.

    Step debugging

    The next step involves using a debugger to step through execution. Rather than simulating logic mentally, developers set breakpoints at high-level user actions, such as an API request or a UI event. By observing the call stack and variable state in real-time, the developer gains a factual understanding of how data mutates across different functions.

    Test suites

    Automated tests are the most accurate documentation of a system’s current state. They define the “contract” of a module.

    1. Run existing unit and integration tests to ensure a stable environment.
    2. Intentionally modify a conditional statement or return value within the source code.
    3. Watch for which tests fail. The resulting failures show the dependencies and the blast radius of changes within that module.
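    The three steps above can be sketched with a toy module and contract – the function and its tests are hypothetical, and the assertion helper is hand-rolled to keep the example self-contained:

```javascript
// Sketch of the probe technique on an invented module.
function isTokenValid(token, now) {
  // step 2 of the technique: flip this comparison (> to <) in a local
  // branch, re-run the suite, and note which tests fail
  return token.expiresAt > now;
}

// these tests are the module's "contract": they pass against the current
// code (step 1) and fail loudly the moment the conditional is flipped
function check(actual, expected, label) {
  if (actual !== expected) throw new Error(`contract broken: ${label}`);
}

check(isTokenValid({ expiresAt: 2000 }, 1000), true, 'fresh token accepted');
check(isTokenValid({ expiresAt: 500 }, 1000), false, 'expired token rejected');
console.log('baseline is green');
```

    The failures you trigger on purpose (step 3) tell you exactly which behaviors depend on that one branch.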

    Version control

    Codebases are rarely static designs; they are historical records of bug fixes and shifting requirements. Using GitHub or tools like GitLens, look at the original Pull Request (PR) associated with a line of code. PR comments often contain the reasoning behind non-obvious workarounds or edge-case handling that the code itself does not explicitly state.
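    The command-line version of this archaeology, demonstrated against a throwaway repo (the file name and commit message are made up; assumes git is installed):

```shell
# build a disposable repo so the commands below have something to dig into
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
printf 'function add(a, b) {\n  return a + b;\n}\n' > math.js
git add math.js && git commit -qm "feat: add helper"

# line-level archaeology: every commit that touched lines 1-3 of math.js
git log -L 1,3:math.js

# who last touched each line (GitLens surfaces this inline in the editor)
git blame -L 1,3 -- math.js

# with GitHub's CLI you can then search for the PR that merged a commit, e.g.:
#   gh pr list --search "<commit-sha>" --state merged
```

    From the commit SHA, the associated PR and its review thread are one click away on GitHub.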

    3. Modification and validation

    The final phase is validating the mental model through direct, non-destructive interaction with the code.

    Commenting

    As a developer reads through complex files, they should add “scratchpad” comments to summarize the perceived function of specific blocks. For example: // Filters expired tokens prior to authentication check. If a block of logic cannot be summarized in a single sentence, the developer has identified a gap in their mental model. These comments are for personal clarification and are not intended for the final commit.

    Refactoring

    Attempting a local-only refactor is a high-signal method for testing assumptions. This involves renaming variables for clarity or extracting long functions into smaller helpers. Because these changes are kept in a local branch and not merged, the developer can experiment without risk. If the refactor breaks the system in an unexpected way, it indicates that the developer’s understanding was incomplete.
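    A sketch of what that local-only probe looks like – the cart function and helper are invented, and the equality check at the bottom is the whole point of the exercise:

```javascript
// Before: an inline formula buried in a loop.
function totalBefore(items) {
  let sum = 0;
  for (const item of items) {
    sum += item.price * item.qty * (1 - (item.discount || 0));
  }
  return sum;
}

// The refactor: the formula gets a name that states what it computes.
const lineTotal = (item) => item.price * item.qty * (1 - (item.discount || 0));

function totalAfter(items) {
  return items.reduce((sum, item) => sum + lineTotal(item), 0);
}

// if these ever disagree, the mental model of the original loop was wrong
const cart = [{ price: 10, qty: 2 }, { price: 5, qty: 1, discount: 0.5 }];
console.log(totalBefore(cart) === totalAfter(cart)); // true
```

    Keeping both versions side by side in a scratch branch turns the refactor into a falsifiable hypothesis about the code's behavior.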

    Contribution

    A developer’s “beginner’s mind” is a temporary asset that allows them to see gaps in new areas that established team members may overlook. The final step in gaining proficiency is contributing back to the project.

    • Correcting an outdated README or updating instructions.
    • Adding tests for edge cases discovered during the tracing phase.
    • Writing JSDoc comments for functions that lacked clarity.

    Synthesizing new knowledge into a documentation PR forces a final verification of the mental model while providing immediate value to the engineering team.

  • Matteo Collina on the Future of Software Engineering

    Matteo has some good points about The Future of the Software Engineering Career. For one, valuable engineers will have to rely on fundamentals to work with generated code:

    Algorithms. Distributed systems. Hardware architectures. Cache management techniques. Networking fundamentals. Database internals. These aren’t academic exercises anymore. They’re the foundation for evaluating AI-generated code.

    When an AI produces a sorting algorithm, can you tell if it’s appropriate for your data properties? When it suggests a caching layer, do you understand the trade-offs in consistency? When it generates a distributed system design, can you spot the failure modes?

    There are endless layers to study here. Computer science fundamentals that seemed theoretical suddenly matter for practical work. The student who deeply understands how things work will outperform the one who only knows how to use them.

    You know, the hard parts of software development.

    I completely agree with his prediction that there will be a boom in small business applications that can be built using AI:

    Remember when every business needed a website, and local web developers built them? We’re about to see the same thing, but for custom software applications…

    This is going to be a booming industry. And it favors people who can talk to clients, understand their real problems, and deliver working solutions. It favors generalists who can move quickly over specialists in narrow technologies. It favors judgment over raw coding speed.

  • Matthew Hansen on the Hard Parts of Software Development

    In AI Makes the Easy Part Easier and the Hard Part Harder, Matthew shares a lesson that has taken me more than 20 years to learn:

    Writing code is the easy part of the job. It always has been. The hard part is investigation, understanding context, validating assumptions, and knowing why a particular approach is the right one for this situation.

    The hardest part of software development, however, is and always has been, people. Politics, opinions, emotions. You can’t find an AI for that, although it can help.

  • The Shift From Programming to Engineering

    Speaking of everything bagels, Hampton Lintorn-Catlin has an interesting perspective about the role of engineers versus programmers in the AI age. AI agents now perform the bulk of code‑writing, shifting the valuable work from typing code to designing systems, orchestrating agents, reviewing output, and defining standards.

    In response to the notion that you can’t use AI in complex codebases, Hampton says “If agents aren’t working well in your codebase, you need to figure out why that is instead of throwing your hands up.” That’s the kind of leadership advice Satya Nadella would be proud of.

  • Satya Nadella’s Architecture for Executive Success

    Jeffrey Snover invites us behind the scenes of his promotion to Technical Fellow, when he joined Microsoft’s other Senior Executives. Satya Nadella gave this admonishment, paraphrased by Jeffrey:

    Congratulations… your days of whining are over. In this room, we deliver success, we don’t whine. Look, I’m not confused, I know you walk through fields of shit every day. Your job is to find the rose petals. Don’t come whining that you don’t have the resources you need. We’ve done our homework. We’ve evaluated the portfolio, considered the opportunities and allocated our available resources to those opportunities. That is what you have to work with. Your job is to manufacture success with the resources you’ve been allocated.

    This philosophy was what once made working at Netflix so great. The feeling of being empowered was incredible, but the drive to behave like a member of the Dream Team is what had the biggest impact on me. If you want a master class in leadership, read No Rules Rules, by Reed Hastings.

  • The Future of Programming Looks a Lot Like an Everything Bagel

    What does the future of programming look like?

    So, which is it – Agentic Engineering, SRE, or Product Engineering? I say it’s an everything bagel:

    Jobu Tupaki: I got bored one day – and I put everything on a bagel. Everything. All my hopes and dreams, my old report cards, every breed of dog, every last personal ad on craigslist. Sesame. Poppy seed. Salt. And it collapsed in on itself. ‘Cause, you see, when you really put everything on a bagel, it becomes this.

    Here’s a quick compare and contrast.

    What each role optimizes for

    • Agentic Engineering optimizes for throughput with correctness. The goal is to produce high‑quality software quickly by orchestrating AI agents, not by typing faster.
    • SRE optimizes for reliability over time. The mission is to keep systems predictable under load, failure, and change.
    • Product Engineering optimizes for user and business value. The focus is on shipping features that matter and iterating toward impact.

    How they work day to day

    Agentic Engineers

    They treat AI like a fast but unreliable junior developer. Their craft is decomposition, prompting, reviewing, and testing. They build scaffolds, guardrails, and workflows that turn raw model output into maintainable software.

    SREs

    They live in the world of SLIs, SLOs, error budgets, and incident response. Their work is automation-heavy and relentlessly focused on reducing toil, improving observability, and ensuring systems behave as expected.

    Product Engineers

    They sit closest to users. They translate product requirements into working features, iterate quickly, and balance speed with long‑term maintainability. They’re the connective tissue between design, backend, and business goals.

    How each role uses AI

    AI is reshaping all three roles, but in different ways:

    • Agentic Engineers build around AI. It’s the core tool, not an add‑on.
    • SREs use AI as an accelerator – log summarization, anomaly detection, config generation – but remain deeply skeptical. Reliability requires verification, not vibes.
    • Product Engineers use AI to multiply iteration speed: scaffolding features, generating UI variants, writing tests, and exploring alternatives faster than ever.

    The common thread: AI shifts the work upward, toward orchestration and decision‑making.

    Skills that define each discipline

    • Agentic Engineering: system design, decomposition, prompt engineering, test‑driven development, code review, architectural thinking.
    • SRE: distributed systems, observability, automation, incident response, performance engineering.
    • Product Engineering: UX intuition, full‑stack development, rapid prototyping, experimentation, cross‑functional communication.

    These skill sets overlap, but each role has a distinct emphasis.

    The downsides of each discipline

    • Agentic Engineering risks over‑trusting AI output, weak test coverage, and architectural drift from agent‑generated code.
    • SRE risks over‑engineering reliability, becoming a ticket‑ops team, or burning out under incident load.
    • Product Engineering risks shipping fast but accruing tech debt, or building features that don’t move metrics.

    Understanding these failure modes is part of understanding the craft.

    Where the roles converge

    AI is pushing all three disciplines toward a shared future:

    • Less manual coding, more orchestration
    • Less toil, more automation
    • Less focus on implementation, more on system‑level thinking
    • Less “just ship it,” more continuous verification

    But their centers of gravity remain distinct:

    • Agentic Engineering asks: How do we build software with AI?
    • SRE asks: How do we keep software reliable as it changes?
    • Product Engineering asks: How do we build the right software for users?

    Together, they form a triangle of modern engineering practice.

    The everything bagel

    If AI continues absorbing more of the implementation work, the future engineer – regardless of title – starts to look like an everything bagel:

    • the orchestration mindset of an Agentic Engineer
    • the reliability instincts of an SRE
    • the user‑centric judgment of a Product Engineer
    • Sesame. Poppy seed. Salt.

    In other words:

    The future of engineering is a system designer, reliability steward, and product thinker who directs AI to do the work.

    Classic software engineer answer.

  • How Software Will Survive with AI

    In Software Survival 3.0 Steve Yegge lays out a model for software survivability that states, roughly: AI is advancing exponentially, and in a world where agents can synthesize almost any software on demand, only certain kinds of software will survive. It basically assumes that AI is cheap and lazy and will reach for tools (aka your tools) that have the following characteristics:

    1. Knowledge dense. They embody decades of knowledge and would be too expensive for an AI to build from scratch.
    2. Efficient. Running them on CPUs is cheaper than replicating their behavior with GPU inference.
    3. Broadly useful. They are general-purpose tools with wide applicability.
    4. Familiar. Agents have to know your tool exists (via popularity, documentation, etc.), or at least how to find it.
    5. Low friction. Interestingly, an agent’s hallucinated usage should just work – the docs reinforce intuitive behavior.
    6. Appeal to humans. Whether it is human curated, human created, or about human experiences.

    There’s a fun story about Beads, which has grown to over 100 sub-commands, intended for use by AI.

  • AGENTS.md outperforms skills

    Interesting research from Vercel about the performance of AGENTS.md vs skills. Their pragmatic suggestion is to create an index of docs in your AGENTS.md; that approach scored 100% across the board.
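    A docs index along those lines might look something like this – a hedged sketch, with every file path invented:

```markdown
# AGENTS.md

## Documentation index
Read the relevant doc before touching the matching area of the codebase:

- `docs/architecture.md` – service boundaries and data flow
- `docs/conventions.md` – naming, lint, and commit rules
- `docs/testing.md` – how to run and write tests
```

    The agent reads the short index up front, then pulls in only the doc it needs.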

  • AI is not Inevitable

    I got nerd-sniped right before work and just had to write this post.

    In AI code and software craft, Alex looks at AI through the lens of Jacques Ellul’s “technique.”

    Jacques Ellul describes his concept of “technique” as the reduction of activity to a set of efficient means to a measured and defined end — a way of thinking dominant in modernity.

    He argues that an arts-and-crafts-style movement focused on craftsmanship can stave off the totality of technique. I’m all for arts and crafts, but that position is at odds with Ellul’s own: Ellul believed technique was “inevitable”, all-consuming, and unstoppable, like the smoke monster in Lost.

    Andrew Feenberg’s viewpoint is actually more in line with Alex’s conclusion. Feenberg took a more hopeful view that technology can be democratized by injecting human values. And there’s some evidence to back that up.

    For example, the “technique” of the 19th-century factory was brutal efficiency. But through unions and laws (human agency), we forced the technique to adapt to child labor laws and safety standards. Efficiency was curbed by social values.

    Feenberg showed us a few ways to push back against technique.

    1. Redefine efficiency

    Donald Knuth, a renowned computer scientist, invented literate programming, which redefined the way we write code – by putting us humans first. In literate programming you start with prose and interject code, rather than writing code and sprinkling in comments. He inverted the existing model, optimizing for ease of understanding rather than speed of implementation.

    Similarly, Feenberg would redefine AI by building AI tools that optimize for maintainability, readability, and beauty.

    2. Subversive rationalization

    In the 1980s, the French government distributed the Minitel (a proto-internet terminal) to millions of homes. The technique goal was bureaucratic efficiency: to modernize the phone directory and deliver government information. It was cold, rational, and top-down.

    Instead, users hacked the system. They ignored the government directories and turned the network into a massive, chaotic instant-messaging service. They used the machine for flirting, arguing, and socializing. The users subverted the rational design. They took a tool of control and turned it into a tool of communication.

    In other words, don’t just boycott AI. Misuse it.

    3. Primary vs. secondary instrumentalization

    Feenberg distinguishes between two layers of technology; to overcome technique, we have to re-integrate them. The primary instrumentalization is the raw technical aspect – for code, purely technical, decontextualized logic. The secondary instrumentalization is the social, aesthetic, and ethical context – code that is elegant and respects the user’s privacy. To unify the two, we must demand that the secondary instrumentalization be integrated into the primary.

    Alex is right to identify the “slop” as a threat, but wrong to suggest we can defeat it with nostalgia. Retreating to “software arts and crafts” doesn’t change anything (I’m still for it, though); it merely leaves the engine of modern society running on autopilot, optimized only for profit.

    Feenberg offers a harder, but more effective path: don’t abandon the machine – hack it. By embedding human values into our definitions of efficiency and refusing to accept raw functionality as the final standard, we stop being victims of technique. The goal is not to escape the future, but to shape it.

    Now it’s time for me to go write some code.

  • Watermark Your Writing to Prove You’re Human

    I’ve got a sick kid and I couldn’t sleep. My brain was going, so I’m up reading articles in the wee hours of the morning.

    By chance I came across the idea of “watermarking” your writing. It’s a clever way to authenticate your work. Since everyone’s accusing everyone of making “AI slop,” it helps to have some assurances tucked into your writing. It also adds a fun little challenge to the process.

    Although you should read the whole article, the gist is:

    1. Stir up the structure
    2. Add specific details
    3. Be messy, asymmetrical, uneven, and opinionated
    4. Explore nuances from your own voice

    Extra points if you use actual steganography to encode messages into your writing, like I did.
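    One common steganography trick is hiding bits as invisible zero-width characters. A minimal sketch (my own illustration, not the scheme the article or I actually used):

```javascript
// Hide a secret string inside visible text using zero-width characters.
const ZERO = '\u200B'; // zero-width space      -> bit 0
const ONE = '\u200C';  // zero-width non-joiner -> bit 1

function watermark(text, secret) {
  // turn each character of the secret into 8 bits, then into invisibles
  const bits = [...secret]
    .map((ch) => ch.charCodeAt(0).toString(2).padStart(8, '0'))
    .join('');
  const hidden = [...bits].map((b) => (b === '1' ? ONE : ZERO)).join('');
  return text + hidden; // looks identical on screen
}

function reveal(text) {
  // keep only the invisible characters and decode them back to bytes
  const bits = [...text]
    .filter((ch) => ch === ZERO || ch === ONE)
    .map((ch) => (ch === ONE ? '1' : '0'))
    .join('');
  const bytes = bits.match(/.{8}/g) || [];
  return bytes.map((b) => String.fromCharCode(parseInt(b, 2))).join('');
}

const marked = watermark('Definitely written by a human.', 'hi');
console.log(reveal(marked)); // "hi"
```

    Anyone pasting your paragraph elsewhere carries the invisible payload along with it, which is what makes the watermark useful as provenance.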