Hi. I’m Dustin. I’m a creative and curious Sr. Frontend Engineer at True Anomaly in Los Angeles, CA. I specialize in front-end web development with JavaScript, React, and Node.js. You can also find me on GitHub and LinkedIn.

  • How Software Will Survive with AI

    In Software Survival 3.0, Steve Yegge lays out a model of software survivability that states, roughly: AI is advancing exponentially, and in a world where agents can synthesize almost any software on demand, only certain kinds of software will survive. The model assumes that AI is cheap and lazy and will reach for tools (a.k.a. your tools) with the following characteristics:

    1. Knowledge dense. They embody decades of knowledge and would be too expensive for an AI to build from scratch.
    2. Efficient. They run more efficiently on CPUs than on GPU inference.
    3. Broadly useful. They are general-purpose tools with wide applicability.
    4. Familiar. Agents have to know your tool exists (via popularity, documentation, etc.) or at least how to find it.
    5. Low friction. Interestingly, an agent’s hallucinations about your tool should just work; the docs should reinforce intuitive behavior.
    6. Appeal to humans. Whether through human curation, human creation, or human experience.

    There’s a fun story about Beads, which has grown to over 100 sub-commands intended for use by AI.

  • AGENTS.md outperforms skills

    Interesting research from Vercel about the performance of AGENTS.md vs. skills. Their pragmatic suggestion is to create an index of docs in your AGENTS.md; that approach scored 100% across the board in their tests.

  • AI is not Inevitable

    I got nerd-sniped right before work and just had to write this post.

    In AI code and software craft, Alex looks at AI through the lens of Jacques Ellul’s “technique.”

    Jacques Ellul describes his concept of “technique” as the reduction of activity to a set of efficient means to a measured and defined end — a way of thinking dominant in modernity.

    He argues that an arts-and-crafts-style movement focused on craftsmanship can stave off the totality of technique. I’m all for arts and crafts, but that conclusion is at odds with Ellul’s own view. Ellul believed that technique was “inevitable,” all-consuming, and unstoppable, like the smoke monster in Lost.

    Andrew Feenberg’s viewpoint is actually more in line with Alex’s conclusion. Feenberg took a more hopeful view that technology can be democratized by injecting human values. And there’s some evidence to back that up.

    For example, the “technique” of the 19th-century factory was brutal efficiency. But through unions and laws (human agency), we forced the technique to adapt to child labor laws and safety standards. Efficiency was curbed by social values.

    Feenberg showed us a few ways to push back against Technique.

    1. Redefine efficiency

    Donald Knuth, a renowned computer scientist, invented literate programming, which redefined the way we write code – by putting us humans first. In literate programming you start with prose and interject code, rather than writing code and sprinkling in comments. He inverted the existing model from speed of implementation to ease of understanding.

    Similarly, Feenberg would redefine efficiency for AI by building AI tools that optimize for maintainability, readability, and beauty.

    2. Subversive rationalization

    In the 1980s, the French government distributed the Minitel (a proto-internet terminal) to millions of homes. The technique goal was bureaucratic efficiency: to modernize the phone directory and deliver government information. It was cold, rational, and top-down.

    Instead, users hacked the system. They ignored the government directories and turned the network into a massive, chaotic instant-messaging service. They used the machine for flirting, arguing, and socializing. The users subverted the rational design. They took a tool of control and turned it into a tool of communication.

    In other words, don’t just boycott AI. Misuse it.

    3. Primary vs. secondary instrumentalization

    Feenberg distinguishes between two layers of technology; to overcome technique, we have to re-integrate them. The primary instrumentalization is the raw technical aspect – for code, that means purely technical, decontextualized logic. The secondary instrumentalization is the social, aesthetic, and ethical context – the code is elegant and respects the user’s privacy. To unify the two, we must demand that the secondary instrumentalization be integrated into the primary.

    Wennerberg is right to identify the “slop” as a threat, but wrong to suggest we can defeat it with nostalgia. Retreating to “software arts and crafts” doesn’t change anything (I’m still for it though); it merely leaves the engine of modern society running on autopilot, optimized only for profit.

    Feenberg offers a harder, but more effective path: don’t abandon the machine – hack it. By embedding human values into our definitions of efficiency and refusing to accept raw functionality as the final standard, we stop being victims of technique. The goal is not to escape the future, but to shape it.

    Now it’s time for me to go write some code.

  • Watermark Your Writing to Prove You’re Human

    I’ve got a sick kid and I couldn’t sleep. My brain was going, so I’m up reading articles in the wee hours of the morning.

    By chance I came across the idea of “watermarking” your writing. It’s a clever way to authenticate your work. Since everyone’s accusing everyone else of making “AI slop,” it helps to have some assurances tucked into your writing. It also adds a fun little challenge to the process.

    Although you should read the whole article, the gist is:

    1. Stir up the structure
    2. Add specific details
    3. Be messy, asymmetrical, uneven, and opinionated
    4. Explore nuances from your own voice

    Extra points if you use actual steganography to encode messages into your writing, like I did.
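
    One classic way to pull that off (not necessarily how the article or I did it) is hiding a message in zero-width characters. A small TypeScript sketch:

    // Hide an ASCII message in zero-width characters sprinkled into visible text.
    const ZW0 = '\u200b'; // zero-width space = bit 0
    const ZW1 = '\u200c'; // zero-width non-joiner = bit 1

    const encode = (cover: string, secret: string) => {
      // ASCII only, to keep the sketch simple: each character becomes 8 bits.
      const bits = [...secret]
        .map((ch) => ch.charCodeAt(0).toString(2).padStart(8, '0'))
        .join('');
      const hidden = [...bits].map((b) => (b === '0' ? ZW0 : ZW1)).join('');
      // Tuck the invisible payload after the first word of the cover text.
      return cover.replace(' ', ` ${hidden} `);
    };

    const decode = (text: string) => {
      const bits = [...text]
        .filter((ch) => ch === ZW0 || ch === ZW1)
        .map((ch) => (ch === ZW0 ? '0' : '1'))
        .join('');
      return bits.match(/.{8}/g)?.map((b) => String.fromCharCode(parseInt(b, 2))).join('') ?? '';
    };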

  • The Most Extreme CSS Reset Ever Created: 10,000 Lines of Failure

    My CSS reset exploding into a mushroom cloud
    Free fire explosion image / CC0 1.0

    I appreciate the intentionality of Vale’s CSS reset – everything has a reason. But what I found most eye-opening were the links to the default stylesheets used by Chrome, Safari, and Firefox. These files are overwhelming, but the nerd in me really wanted to know what the differences were between each of them. In detail. Then I could make The One CSS Reset to Rule Them All. It would be better than normalize.css, better than the Meyer reset!

    Analyzing the Default CSS

    To do a proper analysis I needed to download the default CSS files and clean them. Safari’s stylesheet covers mobile, desktop, and visionOS and uses directives to differentiate between them, so I removed everything that wasn’t for desktop. Next, I minified each of the files to remove comments and whitespace. With clean “data” I could now analyze the files using the NPM package cssstats-cli. Here’s what I came up with.

    • Chrome is 48 KB and has 298 rules.
    • Safari is 26 KB and has 175 rules.
    • Firefox is 15 KB and has 143 rules.
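
    For reference, that analysis step can look roughly like this, using cssstats’ Node API (cssstats-cli wraps the same package); the file names are placeholders, not my exact setup:

    import { readFile } from 'node:fs/promises';
    import cssstats from 'cssstats';

    // Assumes the minified user-agent stylesheets have already been saved locally.
    for (const browser of ['chrome', 'safari', 'firefox']) {
      const css = await readFile(`ua-${browser}.min.css`, 'utf8');
      const stats = cssstats(css);
      // stats.size is reported in bytes; stats.rules.total is the rule count.
      console.log(`${browser}: ${(stats.size / 1024).toFixed(0)} KB, ${stats.rules.total} rules`);
    }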

    I wanted to try to see what was actually styled after the rules had been set, but it turns out this is a hard problem. I thought maybe I could write a comprehensive test page, and then programmatically walk through the document to view the computed styles for each object. Then I could compare each of the three stylesheets to see where the actual differences were.

    Finding the Differences

    Starting with the Chrome stylesheet, I worked my way through about a quarter of it, making HTML to test each ruleset. It was then that I realized testing this was going to be a nightmare. There are just so many rules, targeting so many different scenarios. Many of these scenarios would rarely be triggered; some may even be impossible to trigger. It was time for Claude to step in.

    I created a list of every selector in the default CSS files for Chrome, Safari, and Firefox. Then I asked Claude to create a single HTML file with elements that matched every selector. That gave me a massive file with about 600 elements.
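
    Extracting the selector list itself could be scripted along these lines – a rough sketch, assuming postcss for parsing (file names are placeholders); building HTML that actually matches every selector was the part Claude handled:

    import { readFile, writeFile } from 'node:fs/promises';
    import postcss from 'postcss';

    const selectors = new Set<string>();

    for (const browser of ['chrome', 'safari', 'firefox']) {
      const css = await readFile(`ua-${browser}.min.css`, 'utf8');
      // Walk every rule and collect each individual selector.
      postcss.parse(css).walkRules((rule) => {
        for (const selector of rule.selectors) selectors.add(selector.trim());
      });
    }

    await writeFile('selectors.txt', [...selectors].sort().join('\n'));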

    Generating the CSS Reset

    Next, I created a script to open the HTML file in each browser, using Playwright to grab the computed styles for every element. The script saved all of the computed styles to JSON files. Just a reminder that there are 520 CSS properties on every element! Finally, I created a script that compared the JSON files and for every difference, selected an appropriate default and wrote that style to a CSS file.
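
    A simplified sketch of that capture step (the data-test-id attribute and file names are assumptions, not my exact setup):

    import { chromium, firefox, webkit, type BrowserType } from 'playwright';
    import { writeFile } from 'node:fs/promises';

    const engines: Record<string, BrowserType> = { chromium, firefox, webkit };

    for (const [name, engine] of Object.entries(engines)) {
      const browser = await engine.launch();
      const page = await browser.newPage();
      await page.goto(`file://${process.cwd()}/test-page.html`);

      // In the browser: for every tagged element, record all ~520 computed properties.
      const styles = await page.evaluate(() => {
        const out: Record<string, Record<string, string>> = {};
        document.querySelectorAll<HTMLElement>('[data-test-id]').forEach((el) => {
          const computed = getComputedStyle(el);
          const props: Record<string, string> = {};
          for (let i = 0; i < computed.length; i++) {
            const prop = computed.item(i);
            props[prop] = computed.getPropertyValue(prop);
          }
          out[el.dataset.testId as string] = props;
        });
        return out;
      });

      await writeFile(`computed-${name}.json`, JSON.stringify(styles, null, 2));
      await browser.close();
    }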

    The result was a 10,000-line monstrosity of a CSS reset that basically created the lowest common denominator of stylesheets. We’re talking Times New Roman on every element, font sizes set in pixels, etc. Upon visual inspection, I noticed that there were still differences. Cue the table flip. Claude and I added more code to normalize values, handling rounding of decimals, shorthand property differences, etc. The results were almost perfect, but there were still problems.
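
    The compare-and-normalize idea, sketched very roughly (the selector mapping and shorthand handling are simplified here):

    import { readFile, writeFile } from 'node:fs/promises';

    type StyleDump = Record<string, Record<string, string>>;

    // Crude normalization: round long decimals like "16.000001907px" so
    // sub-pixel noise doesn't register as a real difference.
    const normalize = (value: string) =>
      value.replace(/\d+\.\d{3,}/g, (n) => Number(n).toFixed(2));

    const [chrome, safari, firefox] = await Promise.all(
      ['chromium', 'webkit', 'firefox'].map(
        async (name) => JSON.parse(await readFile(`computed-${name}.json`, 'utf8')) as StyleDump,
      ),
    );

    let css = '';
    for (const [id, chromeProps] of Object.entries(chrome)) {
      const overrides: string[] = [];
      for (const [prop, value] of Object.entries(chromeProps)) {
        const values = [value, safari[id]?.[prop], firefox[id]?.[prop]].map((v) => normalize(v ?? ''));
        // Only emit a declaration when the engines disagree; Chrome's value wins here.
        if (new Set(values).size > 1) overrides.push(`  ${prop}: ${value};`);
      }
      // In the real version each element maps back to a proper selector;
      // the test id stands in for that in this sketch.
      if (overrides.length) css += `[data-test-id="${id}"] {\n${overrides.join('\n')}\n}\n`;
    }

    await writeFile('reset.generated.css', css);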

    Optimizing the Result

    After looking over the generated stylesheet, I could see that lots of similar properties were getting repeated. I thought maybe I needed an optimization step, so I configured the scripts to run CSSO on the generated stylesheet. That cut the size down to 5,400 lines, which was much better, but still far from what a CSS reset file should be doing. Also, it should be stated at this point that I was clearly in CSS normalization territory and not CSS reset territory. But the line between the two gets blurry.
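
    The optimization pass itself is small – a sketch, assuming csso’s minify() API and the file names from the sketches above:

    import { readFile, writeFile } from 'node:fs/promises';
    import { minify } from 'csso';

    const generated = await readFile('reset.generated.css', 'utf8');

    // Structural optimization (restructure) merges duplicate declarations
    // and collapses repeated rules.
    const { css } = minify(generated, { restructure: true });

    await writeFile('reset.optimized.css', css);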

    Nuclear CSS Reset

    It’s at this point that I came to the conclusion that if you seriously want to normalize/reset the default styles of every browser, there’s only one way to do it. Destroy all user-agent styles and then build from the ground up, styling only what you need. This is the second most extreme CSS reset ever created:

    * { all: unset }

    rip

  • Chris Coyier on CSS Module Imports

    Wow! You can now import CSS modules in Firefox. This means that it works in every browser except for Safari. Via Frontend Masters Blog
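
    If you haven’t used it yet, here’s the feature in a nutshell – a minimal sketch (the path is illustrative):

    // CSS module scripts: import a stylesheet as a CSSStyleSheet object
    // and attach it without a <link> tag.
    import sheet from './styles.css' with { type: 'css' };

    document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];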

  • Anthropic on AI-Resistant Interviews

    It’s ironic that Anthropic is looking to beat AI in their interviews, but aren’t we all? Here is a list of key principles from Designing AI-resistant technical evaluations.

    1. Test the process, not just the output

    Many tasks can be solved well by AI, so the final answer is no longer a good signal. Evaluations should:

    • Capture intermediate reasoning steps
    • Require explanations of trade‑offs
    • Examine decision‑making, not just correctness

    2. Use tasks that require contextual judgment

    AI is good at pattern‑matching and known problem types. It struggles more with:

    • Ambiguous requirements
    • Real‑world constraints
    • Messy or incomplete information
    • Prioritization under uncertainty

    Evaluations should lean into these.

    3. Incorporate novel or unseen problem types

    If a task is widely available online, an AI model has probably trained on it. Stronger evaluations:

    • Use fresh, unpublished tasks
    • Introduce domain‑specific constraints
    • Require synthesis across multiple knowledge areas

    4. Look for human‑specific signals

    Anthropic highlights qualities that AI still struggles to fake:

    • Personal experience
    • Tacit knowledge
    • Real‑time collaboration
    • Values‑driven reasoning
    • Long‑horizon planning with incomplete data

    Evaluations can intentionally probe these.

    5. Design for partial AI assistance

    Instead of pretending AI doesn’t exist, assume candidates will use it. Good evaluations:

    • Allow AI for some steps
    • Restrict AI for others
    • Measure how well a person integrates AI into their workflow

    This mirrors real‑world work more accurately.

  • Anil Dash on Codeless

    The next meaningful breakthrough that has emerged in AI‑assisted software development is orchestrating fleets of coding bots that can build entire features or apps from a plain‑English strategic plan, not line‑by‑line prompts. This isn’t “AI helps me code”; it’s “AI builds the code while I direct the strategy.” Anil Dash calls this “codeless,” and I think that’s a great name for it.

  • Raphael Amorim on Monozukuri and Software Development

    [AI prioritizes] time over quality. To achieve quality, a programmer needs to have experienced what is being built. Software development follows the path of craftsmanship, where an artisan—through years of experience, repeated attempts, occasional luck, or natural talent—can produce remarkable results. This idea aligns closely with a Japanese concept known as monozukuri.

    The Art of Craftsmanship (Monozukuri) in the Age of AI

  • Details Make the Design

    I was poking around on Detail, getting some inspiration, when it occurred to me that the small details really do make a big difference. I know that’s not a huge revelation, but modern CSS makes adding those details incredibly easy.

    Chris Coyier has a post about modern CSS two-liners that have a big impact on the design of a website. Here are two more (not as fresh) n <= 2 liners.

    Page transitions

    You can put a nice fade in/out effect on the entire site in one line. Now when you navigate to any other page on the site, it will look nice and polished.

    @view-transition { navigation: auto; }

    Selection color

    Let’s not forget selection color. Of course, you can make the selection color as wild as you want, but a simple inversion of the main site colors is usually enough to make it look intentional.

    ::selection {
      color: white;
      background-color: black;
    }