Hi. I’m Dustin. I’m a creative and curious Sr. Frontend Engineer at True Anomaly in Los Angeles, CA. I specialize in front-end web development with JavaScript, React, and Node.js. You can also find me on GitHub and LinkedIn.

  • The Future of Programming Looks a Lot Like an Everything Bagel

    What does the future of programming look like?

    Is it Agentic Engineering, SRE, or Product Engineering? I say it’s an everything bagel:

    Jobu Tupaki:  I got bored one day – and I put everything on a bagel. Everything. All my hopes and dreams, my old report cards, every breed of dog, every last personal ad on craigslist. Sesame. Poppy seed. Salt. And it collapsed in on itself. ‘Cause, you see, when you really put everything on a bagel, it becomes this.

    Here’s a quick compare and contrast.

    What each role optimizes for

    • Agentic Engineering optimizes for throughput with correctness. The goal is to produce high‑quality software quickly by orchestrating AI agents, not by typing faster.
    • SRE optimizes for reliability over time. The mission is to keep systems predictable under load, failure, and change.
    • Product Engineering optimizes for user and business value. The focus is on shipping features that matter and iterating toward impact.

    How they work day to day

    Agentic Engineers

    They treat AI like a fast but unreliable junior developer. Their craft is decomposition, prompting, reviewing, and testing. They build scaffolds, guardrails, and workflows that turn raw model output into maintainable software.

    SREs

    They live in the world of SLIs, SLOs, error budgets, and incident response. Their work is automation-heavy and relentlessly focused on reducing toil, improving observability, and ensuring systems behave as expected.

    Product Engineers

    They sit closest to users. They translate product requirements into working features, iterate quickly, and balance speed with long‑term maintainability. They’re the connective tissue between design, backend, and business goals.

    How each role uses AI

    AI is reshaping all three roles, but in different ways:

    • Agentic Engineers build around AI. It’s the core tool, not an add‑on.
    • SREs use AI as an accelerator—log summarization, anomaly detection, config generation—but remain deeply skeptical. Reliability requires verification, not vibes.
    • Product Engineers use AI to multiply iteration speed: scaffolding features, generating UI variants, writing tests, and exploring alternatives faster than ever.

    The common thread: AI shifts the work upward, toward orchestration and decision‑making.

    Skills that define each discipline

    • Agentic Engineering: system design, decomposition, prompt engineering, test‑driven development, code review, architectural thinking.
    • SRE: distributed systems, observability, automation, incident response, performance engineering.
    • Product Engineering: UX intuition, full‑stack development, rapid prototyping, experimentation, cross‑functional communication.

    These skill sets overlap, but each role has a distinct emphasis.

    The downsides of each discipline

    • Agentic Engineering risks over‑trusting AI output, weak test coverage, and architectural drift from agent‑generated code.
    • SRE risks over‑engineering reliability, becoming a ticket‑ops team, or burning out under incident load.
    • Product Engineering risks shipping fast but accruing tech debt, or building features that don’t move metrics.

    Understanding these failure modes is part of understanding the craft.

    Where the roles converge

    AI is pushing all three disciplines toward a shared future:

    • Less manual coding, more orchestration
    • Less toil, more automation
    • Less focus on implementation, more on system‑level thinking
    • Less “just ship it,” more continuous verification

    But their centers of gravity remain distinct:

    • Agentic Engineering asks: How do we build software with AI?
    • SRE asks: How do we keep software reliable as it changes?
    • Product Engineering asks: How do we build the right software for users?

    Together, they form a triangle of modern engineering practice.

    The everything bagel

    If AI continues absorbing more of the implementation work, the future engineer – regardless of title – starts to look like an everything bagel:

    • the orchestration mindset of an Agentic Engineer
    • the reliability instincts of an SRE
    • the user‑centric judgment of a Product Engineer
    • Sesame. Poppy seed. Salt.

    In other words:

    The future of engineering is a system designer, reliability steward, and product thinker who directs AI to do the work.

    Classic software engineer answer.

  • How Software Will Survive with AI

    In Software Survival 3.0, Steve Yegge lays out a model for software survivability that states, roughly: AI is advancing exponentially, and in a world where agents can synthesize almost any software on demand, only certain kinds of software will survive. It basically assumes that AI is cheap and lazy and will reach for tools (aka your tools) that have the following characteristics:

    1. Knowledge dense. They embody decades of knowledge and would be too expensive for an AI to build from scratch.
    2. Efficient. They run cheaply on CPUs, far more efficiently than doing the same work through GPU inference.
    3. Broadly useful. They are general-purpose tools with wide applicability.
    4. Familiar. Agents have to know your tool exists (via popularity, documentation, etc.), or at least know how to find it.
    5. Low friction. Interestingly, an agent’s hallucinated guesses about how your tool works should mostly turn out to be right; the docs should reinforce that intuitive behavior.
    6. Appeal to humans. Whether that means being human curated, human created, or grounded in human experience.

    There’s a fun story about Beads, which has evolved to over 100 sub-commands, intended for use by AI.

  • AGENTS.md outperforms skills

    Interesting research from Vercel about the performance of AGENTS.md vs. skills. Their pragmatic suggestion is to create an index of docs in your AGENTS.md; with that approach, the results were 100% scores across the board.
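
    For illustration, such a docs index might look something like this (the doc names are hypothetical, not taken from the Vercel post):

    ## Docs index
    Read the matching doc before changing that area of the code.
    - docs/architecture.md: high-level system design and data flow
    - docs/api.md: REST endpoints, auth, and error conventions
    - docs/testing.md: how to run and write tests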

  • AI is not Inevitable

    I got nerd-sniped right before work and just had to write this post.

    In AI code and software craft, Alex looks at AI through the lens of Jacques Ellul’s “technique.”

    Jacques Ellul describes his concept of “technique” as the reduction of activity to a set of efficient means to a measured and defined end — a way of thinking dominant in modernity.

    He argues that an arts-and-crafts-style movement focused on craftsmanship can stave off the totality of technique. I’m all for arts and crafts, but that view is at odds with Ellul’s. Ellul believed that technique was “inevitable,” all-consuming, and unstoppable, like the smoke monster in Lost.

    Andrew Feenberg’s viewpoint is actually more in line with Alex’s conclusion. Feenberg took a more hopeful view that technology can be democratized by injecting human values. And there’s some evidence to back that up.

    For example, the “technique” of the 19th-century factory was brutal efficiency. But through unions and laws (human agency), we forced the technique to adapt to child labor laws and safety standards. Efficiency was curbed by social values.

    Feenberg showed us a few ways to push back against technique.

    1. Redefine efficiency

    Donald Knuth, a renowned computer scientist, invented literate programming, which redefined the way we write code – by putting us humans first. In literate programming you start with prose and interject code, rather than writing code and sprinkling in comments. He inverted the existing model from speed of implementation to ease of understanding.
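
    A loose illustration of that inversion, using a noweb-style chunk (not Knuth’s exact WEB syntax) and hypothetical invoice data:

    We compute the invoice total by summing each line item's price times its
    quantity, then applying the customer's discount.

    <<compute invoice total>>=
    const items = [{ price: 40, qty: 2 }, { price: 5, qty: 4 }];
    const discount = 0.1;
    const total = items.reduce((sum, item) => sum + item.price * item.qty, 0);
    const due = total * (1 - discount); // 90
    @

    The prose carries the reasoning; the code chunk is just its mechanical consequence.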

    Similarly, Feenberg would redefine AI by building AI tools that optimize for maintainability, readability, and beauty.

    2. Subversive rationalization

    In the 1980s, the French government distributed the Minitel (a proto-internet terminal) to millions of homes. The technique goal was bureaucratic efficiency: to modernize the phone directory and deliver government information. It was cold, rational, and top-down.

    Instead, users hacked the system. They ignored the government directories and turned the network into a massive, chaotic instant-messaging service. They used the machine for flirting, arguing, and socializing. The users subverted the rational design. They took a tool of control and turned it into a tool of communication.

    In other words, don’t just boycott AI. Misuse it.

    3. Primary vs. secondary instrumentalization

    Feenberg distinguishes between two layers of technology; to overcome technique, we have to re-integrate them. The primary instrumentalization is the raw technical aspect: for code, that means purely technical, decontextualized logic. The secondary instrumentalization is the social, aesthetic, and ethical context: code that is elegant and respects the user’s privacy. To unify the two, we must demand that the secondary instrumentalization be integrated into the first.

    Wennerberg is right to identify the “slop” as a threat, but wrong to suggest we can defeat it with nostalgia. Retreating to “software arts and crafts” doesn’t change anything (I’m still for it though); it merely leaves the engine of modern society running on autopilot, optimized only for profit.

    Feenberg offers a harder, but more effective path: don’t abandon the machine – hack it. By embedding human values into our definitions of efficiency and refusing to accept raw functionality as the final standard, we stop being victims of technique. The goal is not to escape the future, but to shape it.

    Now it’s time for me to go write some code.

  • Watermark Your Writing to Prove You’re Human

    I’ve got a sick kid and I couldn’t sleep. My brain was going, so I’m up reading articles in the wee hours of the morning.

    By chance I came across the idea of “watermarking” your writing. It’s a clever way to authenticate your work. Since everyone’s accusing everyone of making “AI slop,” it helps to have some assurances tucked into your writing. It also adds a fun little challenge to the process.

    Although you should read the whole article, the gist is:

    1. Stir up the structure
    2. Add specific details
    3. Be messy, asymmetrical, uneven, and opinionated
    4. Explore nuances from your own voice

    Extra points if you use actual steganography to encode messages into your writing, like I did.
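
    If you want to go the steganography route, here’s a toy sketch of one approach in JavaScript, encoding a message as zero-width characters appended to your text (my own illustration, not the article’s scheme):

    // 0 -> zero-width space (U+200B), 1 -> zero-width non-joiner (U+200C)
    const toBits = (msg) =>
      [...msg].flatMap((c) => c.charCodeAt(0).toString(2).padStart(8, "0").split(""));

    const watermark = (text, msg) =>
      text + toBits(msg).map((b) => (b === "0" ? "\u200B" : "\u200C")).join("");

    const extract = (text) => {
      const bits = [...text]
        .filter((c) => c === "\u200B" || c === "\u200C")
        .map((c) => (c === "\u200B" ? "0" : "1"))
        .join("");
      return bits.match(/.{8}/g)?.map((b) => String.fromCharCode(parseInt(b, 2))).join("") ?? "";
    };

    console.log(extract(watermark("Totally human prose.", "hi"))); // "hi"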

  • The Most Extreme CSS Reset Ever Created: 10,000 Lines of Failure

    My CSS reset exploding into a mushroom cloud
    Free fire explosion image / CC0 1.0

    I appreciate the intentionality of Vale’s CSS reset – everything has a reason. But what I found most eye-opening were the links to the default stylesheets used by Chrome, Safari, and Firefox. These files are overwhelming, but the nerd in me really wanted to know what the differences were between them. In detail. Then I could make The One CSS Reset to Rule Them All. It would be better than normalize.css, better than the Meyer reset!

    Analyzing the Default CSS

    To do a proper analysis I needed to download the default CSS files and clean them. Safari’s stylesheet is shared across mobile, desktop, and visionOS, using directives to differentiate between them, so I removed everything that wasn’t for desktop. Next, I minified each of the files to remove comments and whitespace. With clean “data” I could now analyze the files using the npm package cssstats-cli. Here’s what I came up with.

    • Chrome is 48 KB and has 298 rules.
    • Safari is 26 KB and has 175 rules.
    • Firefox is 15 KB and has 143 rules.
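
    For reference, here’s roughly how that measurement step could be scripted. I’m assuming cssstats’ programmatic Node API (the CLI above wraps the same library), and the file paths are placeholders:

    const fs = require("fs");
    const cssstats = require("cssstats"); // assumed API: cssstats(css) returns { rules, selectors, ... }

    for (const browser of ["chrome", "safari", "firefox"]) {
      // Placeholder paths for the minified default stylesheets.
      const css = fs.readFileSync(`defaults/${browser}.min.css`, "utf8");
      const stats = cssstats(css);
      console.log(`${browser}: ${(css.length / 1024).toFixed(0)} KB, ${stats.rules.total} rules`);
    }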

    I wanted to see what was actually styled after the rules had been applied, but it turns out this is a hard problem. I thought maybe I could write a comprehensive test page, and then programmatically walk through the document to view the computed styles for each element. Then I could compare each of the three stylesheets to see where the actual differences were.

    Finding the Differences

    Starting with the Chrome stylesheet, I worked my way through about a quarter of it, making HTML to test each ruleset. It was then that I realized testing this was going to be a nightmare. There are just so many rules, targeting so many different scenarios. Many of those scenarios would rarely be triggered; some may be impossible to trigger at all. It was time for Claude to step in.

    I created a list of every selector in the default CSS files for Chrome, Safari, and Firefox. Then I asked Claude to create a single HTML file with elements that matched every selector. That gave me a massive file with about 600 elements.

    Generating the CSS Reset

    Next, I created a script to open the HTML file in each browser, using Playwright to grab the computed styles for every element. The script saved all of the computed styles to JSON files. Just a reminder that there are 520 CSS properties on every element! Finally, I created a script that compared the JSON files and for every difference, selected an appropriate default and wrote that style to a CSS file.
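
    Here’s a simplified sketch of that script; the test-page path and the output shape are stand-ins for what I actually used:

    const fs = require("fs");
    const { chromium, firefox, webkit } = require("playwright");

    (async () => {
      for (const [name, engine] of [["chrome", chromium], ["firefox", firefox], ["safari", webkit]]) {
        const browser = await engine.launch();
        const page = await browser.newPage();
        await page.goto("file://" + process.cwd() + "/test-page.html"); // placeholder path

        // For every element, record every computed property and its value.
        const styles = await page.evaluate(() =>
          [...document.querySelectorAll("body *")].map((el, index) => {
            const computed = getComputedStyle(el);
            const props = {};
            for (let i = 0; i < computed.length; i++) {
              props[computed[i]] = computed.getPropertyValue(computed[i]);
            }
            return { index, tag: el.tagName.toLowerCase(), props };
          })
        );

        fs.writeFileSync(`styles-${name}.json`, JSON.stringify(styles, null, 2));
        await browser.close();
      }
    })();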

    The result was a 10,000-line monstrosity of a CSS reset that basically created the lowest common denominator of the stylesheets. We’re talking Times New Roman on every element, font sizes set in pixels, etc. Upon visual inspection, I noticed that there were still differences. Cue the table flip. Claude and I added more code to normalize values, handling rounding of decimals, shorthand property differences, etc. The results were almost perfect, but there were still problems.

    Optimizing the Result

    After looking over the generated stylesheet, I could see that lots of similar properties were getting repeated. I thought maybe I needed an optimization step, so I configured the scripts to run CSSO on the generated stylesheet. That cut the size down to 5,400 lines, which was much better, but still far from what a CSS reset file should be doing. Also, it should be stated at this point that I was clearly in CSS normalization territory and not CSS reset territory. But the line between the two gets blurry.
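
    The optimization step itself is tiny. A sketch, assuming csso’s Node API and placeholder file names (restructuring is what merges the repeated declarations):

    const fs = require("fs");
    const csso = require("csso");

    const generated = fs.readFileSync("generated-reset.css", "utf8");
    const { css } = csso.minify(generated, { restructure: true });
    fs.writeFileSync("generated-reset.min.css", css);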

    Nuclear CSS Reset

    It’s at this point that I came to the conclusion that if you seriously want to normalize/reset the default styles of every browser, there’s only one way to do it. Destroy all user-agent styles and then build from the ground up, styling only what you need. This is the second most extreme CSS reset ever created:

    * { all: unset }

    rip

  • Chris Coyier on CSS Module Imports

    Wow! You can now import CSS module scripts in Firefox. This means they work in every browser except Safari. Via Frontend Masters Blog
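
    If you haven’t tried them yet, the gist looks like this (the stylesheet path is just an example):

    // Import a stylesheet as a constructable CSSStyleSheet and adopt it.
    import sheet from "./button.css" with { type: "css" };
    document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];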

  • Anthropic on AI-Resistant Interviews

    It’s ironic that Anthropic is looking to beat AI in their interviews, but aren’t we all? Here is a list of key principles from Designing AI-resistant technical evaluations.

    1. Test the process, not just the output

    Many tasks can be solved well by AI, so the final answer is no longer a good signal. Evaluations should:

    • Capture intermediate reasoning steps
    • Require explanations of trade‑offs
    • Examine decision‑making, not just correctness

    2. Use tasks that require contextual judgment

    AI is good at pattern‑matching and known problem types. It struggles more with:

    • Ambiguous requirements
    • Real‑world constraints
    • Messy or incomplete information
    • Prioritization under uncertainty

    Evaluations should lean into these.

    3. Incorporate novel or unseen problem types

    If a task is widely available online, an AI model has probably trained on it. Stronger evaluations:

    • Use fresh, unpublished tasks
    • Introduce domain‑specific constraints
    • Require synthesis across multiple knowledge areas

    4. Look for human‑specific signals

    Anthropic highlights qualities that AI still struggles to fake:

    • Personal experience
    • Tacit knowledge
    • Real‑time collaboration
    • Values‑driven reasoning
    • Long‑horizon planning with incomplete data

    Evaluations can intentionally probe these.

    5. Design for partial AI assistance

    Instead of pretending AI doesn’t exist, assume candidates will use it. Good evaluations:

    • Allow AI for some steps
    • Restrict AI for others
    • Measure how well a person integrates AI into their workflow

    This mirrors real‑world work more accurately.

  • Anil Dash on Codeless

    The next meaningful breakthrough that has emerged in AI‑assisted software development is orchestrating fleets of coding bots that can build entire features or apps from a plain‑English strategic plan, not line‑by‑line prompts. This isn’t “AI helps me code”; it’s “AI builds the code while I direct the strategy.” Anil Dash calls this “codeless,” and I think that’s a great name for it.

  • Raphael Amorim on Monozukuri and Software Development

    [AI prioritizes] time over quality. To achieve quality, a programmer needs to have experienced what is being built. Software development follows the path of craftsmanship, where an artisan—through years of experience, repeated attempts, occasional luck, or natural talent—can produce remarkable results. This idea aligns closely with a Japanese concept known as monozukuri.

    The Art of Craftsmanship (Monozukuri) in the Age of AI