Category: Articles

  • The Most Extreme CSS Reset Ever Created: 10,000 Lines of Failure

    My CSS reset exploding into a mushroom cloud
    Free fire explosion image / CC0 1.0

    I appreciate the intentionality of Vale’s CSS reset – everything has a reason. But what I found most eye-opening were the links to the default stylesheets used by Chrome, Safari, and Firefox. These files are overwhelming, but the nerd in me really wanted to know what the differences were between each of them. In detail. Then I could make The One CSS Reset to Rule Them All. It would be better than normalize.css, better than the Meyer reset!

    Analyzing the Default CSS

    To do a proper analysis I needed to download the default CSS files and clean them. Safari runs on mobile, desktop, and visionOS, and its stylesheet uses directives to differentiate between them. I removed everything that wasn’t for desktop. Next, I minified each of the files to remove comments and whitespace. With clean “data” I could now analyze the files using the NPM package cssstats-cli. Here’s what I came up with.

    • Chrome is 48 KB and has 298 rules.
    • Safari is 26 KB and has 175 rules.
    • Firefox is 15 KB and has 143 rules.

    I wanted to see what was actually styled after the rules had been applied, but it turns out this is a hard problem. I thought maybe I could write a comprehensive test page, and then programmatically walk through the document to view the computed styles for each element. Then I could compare each of the three stylesheets to see where the actual differences were.

    Finding the Differences

    Starting with the Chrome stylesheet, I worked my way through about a quarter of it, making HTML to test each ruleset. It was then that I realized testing this was going to be a nightmare. There are just so many rules, targeting so many different scenarios. Many of these scenarios would rarely be triggered; some may even be impossible to trigger. It was time for Claude to step in.

    I created a list of every selector in the default CSS files for Chrome, Safari, and Firefox. Then I asked Claude to create a single HTML file with elements that matched every selector. That gave me a massive file with about 600 elements.

    Generating the CSS Reset

    Next, I created a script to open the HTML file in each browser, using Playwright to grab the computed styles for every element. The script saved all of the computed styles to JSON files. Just a reminder that there are 520 CSS properties on every element! Finally, I created a script that compared the JSON files and for every difference, selected an appropriate default and wrote that style to a CSS file.
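The comparison step can be sketched in plain JavaScript. This is a minimal illustration, not the actual script; the element, property names, and values below are made up for the example:

```javascript
// Given computed-style maps for the same element in two browsers,
// return only the properties whose values differ.
function diffStyles(stylesA, stylesB) {
  const diffs = {};
  for (const prop of Object.keys(stylesA)) {
    if (stylesA[prop] !== stylesB[prop]) {
      diffs[prop] = { a: stylesA[prop], b: stylesB[prop] };
    }
  }
  return diffs;
}

// Example: two browsers agree on font-size but disagree on margin-top.
const browserOneH1 = { "font-size": "32px", "margin-top": "21.44px" };
const browserTwoH1 = { "font-size": "32px", "margin-top": "21.4333px" };
console.log(diffStyles(browserOneH1, browserTwoH1));
// { "margin-top": { a: "21.44px", b: "21.4333px" } }
```

The real script would run this over all ~600 elements and their 520 properties each, then emit a CSS rule for every difference it finds.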

    The result was a 10,000-line monstrosity of a CSS reset that basically created the lowest common denominator of stylesheets. We’re talking Times New Roman on every element, font sizes set in pixels, etc. Upon visual inspection, I noticed that there were still differences. Cue the table flip. Claude and I added more code to normalize values, handling rounding of decimals, shorthand property differences, etc. The results were almost perfect, but there were still problems.
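To give a flavor of that normalization work, here is a sketch of one pass: rounding every numeric component of a CSS value so that, say, two browsers' slightly different computed margins compare as equal. This is an illustrative simplification, not the code from the actual project:

```javascript
// Round every decimal number inside a CSS value string to two places,
// so "21.4333px" and "21.43px" normalize to the same token.
function normalizeValue(value) {
  return value.replace(/-?\d*\.\d+/g, (n) => {
    const rounded = Number.parseFloat(n).toFixed(2);
    // Convert back through Number to strip trailing zeros ("21.40" -> "21.4").
    return String(Number(rounded));
  });
}

console.log(normalizeValue("21.4333px"));  // "21.43px"
console.log(normalizeValue("0.666666em")); // "0.67em"
```

Shorthand expansion (e.g. treating `margin: 8px` and four `margin-*` longhands as equivalent) needs similar, but fiddlier, handling.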

    Optimizing the Result

    After looking over the generated stylesheet, I could see that lots of similar properties were getting repeated. I thought maybe I needed an optimization step, so I configured the scripts to run CSSO on the generated stylesheet. That cut the size down to 5,400 lines, which was much better, but still far from what a CSS reset file should be doing. Also, it should be stated at this point that I was clearly in CSS normalization territory and not CSS reset territory. But the line between the two gets blurry.

    Nuclear CSS Reset

    It’s at this point that I came to the conclusion that if you seriously want to normalize/reset the default styles of every browser, there’s only one way to do it. Destroy all user-agent styles and then build from the ground up, styling only what you need. This is the second most extreme CSS reset ever created:

    * { all: unset }

    rip

  • Why I Chose Tauri for My Text Adventure Game

    When I started designing Head in the Cloud, a horror text-adventure game, I figured C would be the natural language of choice. There is a romanticism to writing a text-based game in C. But I wanted to ship the game, and I knew that C wasn’t the best choice for that (for me).

    I ultimately chose Tauri, a framework that allows you to build desktop applications using web technologies, over a traditional systems language. Here’s why.

    1. Avoiding the Language Learning Curve

    My biggest constraint was time. As a Lead Software Engineer with two decades of experience in web development, I’m most familiar with HTML, CSS, and JavaScript. Conversely, I don’t know C and my knowledge of Rust (another systems language) is novice at best.

    Choosing C would have turned this into a language learning exercise rather than a game development journey. By choosing Tauri, I eliminated the “language tax.” I can think in game logic—inventory arrays, state management, narrative branching—rather than syntax. The goal is to ship a game, not to learn a language.

    2. CSS is Great for Text, Layouts, and Graphics

    The web browser is the most sophisticated text rendering engine in existence. If I were to build in C, I would be giving that up. I also wanted to keep the option to modernize the game with graphics. A text-based game benefits a lot from illustrations.

    Using Tauri allows me to use CSS. I can utilize Flexbox and Grid for responsive layouts that look good on any screen size. I can load custom web fonts to set the retro atmosphere instantly. I can use CSS animations for subtle text fades or “glitch” effects that would be nightmarish to code from scratch in C. Tauri gives me an AAA-level UI layer for free.

    3. Smaller Build Sizes than Electron

    The immediate counter-argument to using web tech for desktop apps is usually “Electron bloat.” Electron bundles a version of the Chromium browser and Node.js into every single application installer. This leads to simple chat apps weighing in at 100MB+.

    Tauri solves this by relying on the Operating System’s native webview (WKWebView on macOS, WebView2 on Windows, WebKitGTK on Linux). It does not bundle a browser.

    The result is massive binary reduction. A basic Tauri app can be less than 5MB. For a text adventure game, keeping the footprint small is essential. I get the development experience of Electron without forcing the user to download an entire web browser just to play a text game.

    4. Simpler Architecture than Electron

    While I wanted the web environment, I did not want the Electron ecosystem. Electron is powerful, but it requires you to manage the complexity of the Main vs. Renderer processes, context bridges, and inter-process communication (IPC).

    Tauri simplifies this architecture. It is built on Rust, providing a secure, lightweight backend that communicates with the frontend. I don’t have to worry about spinning up worker threads manually or managing complex menu configurations just to get a window on the screen. It provides sensible defaults that let me focus on the JavaScript layer where my game logic lives.

    5. Iteration Velocity

    Game development requires constant tweaking. You change a line of dialogue, you tweak a color, you adjust a timing delay.

    In a C environment, this is a compile-run loop. In the Tauri environment, I have access to Hot Module Replacement (HMR). I can change the CSS of the game interface or the JavaScript logic of a puzzle, and the game window updates instantly without a restart. Over the course of a 6-month development cycle, those saved seconds compound into days of saved time.

    6. Safety by Default

    Writing a game in C opens the door to memory leaks and segfaults. One bad pointer arithmetic error can crash the user’s desktop.

    Tauri relies on Rust for its backend bindings. Even though I am a Rust novice, I benefit from Rust’s memory safety guarantees. I am writing high-level JavaScript, which is sandboxed, and the heavy lifting is done by a backend that is proven to be memory-safe. It is a safety net that C simply does not offer.

    7. Cross-Platform without the Pain

    Finally, compiling C for Windows, macOS, and Linux requires managing makefiles, compiler flags, and distinct build environments. Tauri abstracts this complexity. With a few commands, I can cross-compile binaries for the major operating systems. Since the UI is just a webview, I don’t have to rewrite the rendering logic for different OS window managers. It ensures Head in the Cloud is accessible to everyone, regardless of their machine.

    Conclusion

    There is no “best” language, only the best tool for the job at hand. For a high-fidelity 3D shooter, C++ or Rust is the answer. But for a narrative-driven text adventure built by a veteran web developer? Tauri offers the perfect intersection of performance, file size, and developer velocity. It lets me respect the user’s hardware while respecting my own time.

  • What I Care About as a Lead Engineer

    I care about different things now that I have grown into a lead engineer. There are some things that I have started to care more about, and some things that I have started to care less about.

    What a lead cares less about

    1. Personal Commit Volume
      • Junior/Senior View: “I need to ship a lot of code to prove my value.”
      • Lead View: “I need to unblock others so they can ship code.”
        • As a lead, you spend time on code reviews, design, answering questions, and clearing the path. As a result, the number of commits you make decreases.
    2. The Latest and Greatest Software
      • Junior/Senior View: “Let’s use this new beta framework; it’s faster, cooler, and more modern.”
      • Lead View: “Can we hire for this? Is it stable? Does it have good documentation?”
        • Leads focus on long-term maintainability and hiring over cutting-edge trends. As a lead, I shy away from big rewrites if they are not critical.
    3. Micro-Optimization & “Perfect” Code
      • Junior/Senior View: “This function can be 5 ms faster if I rewrite it three times.”
      • Lead View: “Is it good enough to ship today?”
        • Seniors often chase architectural purity. Leads are more open to taking on technical debt if it gives a business edge, like meeting a deadline. They need a plan to pay it off later.
    4. How the Team Solves the Problem (Implementation Details)
      • Junior/Senior View: “This is exactly how I would write this class.”
      • Lead View: “Does the interface match the spec? Does it pass tests? Then do it your way.”
        • Leads learn to delegate ownership. Nitpicking every line of a Senior’s code to match their style creates a bottleneck. It also demoralizes the team. They care about the contract (inputs/outputs), not the implementation.
    5. Being the “Smartest in the Room”
      • Junior/Senior View: “I need to have the answer to every technical question to show authority.”
      • Lead View: “I need to find the person who has the answer.”
        • A Lead’s value comes from synthesis and decision-making, not encyclopedic knowledge. They feel at ease saying, “I don’t know, let’s ask the database expert.” This change means they now route knowledge instead of trying to hold all of it themselves.
    6. Rigidity in Process
      • Junior/Senior View: “The ticket had incorrect formatting, so I won’t do it.”
      • Lead View: “This is urgent; I’ll fix the ticket later.”
        • While Leads usually enforce process, they also know when to break it. They focus less on sticking to strict rules, like needing 100% test coverage for a prototype. Instead, they focus on the practical needs of the business.

    What a lead cares more about

    Lead engineers change their focus from output, like writing code, to outcomes. This includes system reliability, team speed, and business value. Their focus is on the big picture. They focus on the product’s technical success. Their concerns are strategic, not tactical.

    1. The “Bus Factor” (Risk Mitigation)
      • The Concern: “If we lose someone tomorrow, does the project die?”
      • The Action: Leads obsess over knowledge silos. They push for rotating key tasks. They need documentation for complex systems. They also avoid obscure technologies that are hard to hire for.
    2. Force Multiplication (Developer Experience)
      • The Concern: “Why does it take 45 minutes to deploy a one-line change?”
      • The Action: Leads care about the feedback loop. They invest time in CI/CD pipelines, local dev environments, and linting tools. If they save 5 minutes for 10 developers, that’s 50 minutes of extra productivity per build.
    3. Observability & Average Recovery Time
      • The Concern: “How do we know it’s broken before the customers tweet about it?”
      • The Action: Seniors care that the code passes tests; Leads care that the code emits logs. They focus on metrics, dashboards, and alerting. This way, when issues arise, they can find the root cause in minutes, not days.
    4. Technical Debt as a Financial Instrument
      • The Concern: “Are we paying too much ‘interest’ on this legacy code?”
      • The Action: Leads don’t aim for zero technical debt; they aim for managed debt. They talk with product managers to “buy” time for refactoring. They explain the cost of inaction. For example, they say, “If we don’t fix this now, feature X will take twice as long to build next month.”
    5. Alignment with Business Goals
      • The Concern: “We are building a Ferrari, but the business needs a moving van.”
      • The Action: Leads are the filter between “cool tech” and “profitable tech.” They prevent over-engineering. If a basic CRUD app can solve the business problem, they will block the team from building a complex microservices architecture. Even if the team wants to do so.
    6. Consensus and “Disagree and Commit”
      • The Concern: “Is the team paralyzed by debate?”
      • The Action: Leads care about decision velocity. They lead technical talks and make sure everyone’s voice gets heard. They also finalize decisions to resolve ties, helping the team move forward.

    The transition from Senior to Lead is fundamentally about trading personal output for team output. It requires letting go of the immediate satisfaction of closing tickets to focus on the often invisible work that keeps the system healthy and the team moving. By prioritizing business alignment, risk mitigation, and developer experience over technical perfection and personal commit counts, a Lead Engineer ensures that the team doesn’t just ship code, but delivers sustainable value.

  • De Morgan’s Laws in Plain English

    Today my coworker made an interesting comment about some code I wrote. The code looked like this:

    const isParty = !cake || !iceCream ? false : true;

    My intention was to say “if there isn’t any cake or there isn’t any ice cream, then it’s not a party.” And I think we can all agree with that. My coworker said that maybe we should “De Morgan” this code. So I took to the internet to understand what that meant!

    Interestingly, there’s a whole field of study devoted to logic statements like this. And there are two rules in particular that apply directly to this scenario. So let’s take a look at them and see how they could improve our party code.

    De Morgan’s Laws

    1. The First Law, the “Not AND” Rule, says that not (A and B) is the same as (not A) or (not B). Since our party code is in the second form, we know we can refactor it to make our intention much more clear: const isParty = !(cake && iceCream) ? false : true;
    2. The Second Law, the “Not OR” Rule, says that not (A or B) is the same as (not A) and (not B). If we were less picky about our parties, we might have written: const isParty = !(cake || iceCream) ? false : true;. This can be refactored to: const isParty = !cake && !iceCream ? false : true;
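Both laws are easy to verify mechanically. Here is a quick, self-contained check that runs through every combination of the two booleans (the variable names just mirror the party example):

```javascript
// Exhaustively verify both of De Morgan's laws over all boolean inputs.
for (const cake of [true, false]) {
  for (const iceCream of [true, false]) {
    // First law: not (A and B) === (not A) or (not B)
    if (!(cake && iceCream) !== (!cake || !iceCream)) throw new Error("First law failed");
    // Second law: not (A or B) === (not A) and (not B)
    if (!(cake || iceCream) !== (!cake && !iceCream)) throw new Error("Second law failed");
  }
}
console.log("Both laws hold for all four input combinations");
```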

    Flipping De Morgan

    The goal of De Morgan’s laws in software engineering is to make the code easier to understand. And it does. But there’s still more to be desired. So here’s another principle:

    In software (and English) we affirm that something is true rather than negate that it is false, because negation makes things harder to understand.

    So instead of saying “it’s not a party if there isn’t any cake or ice cream,” we should say, “it’s a party if there is cake and ice cream.” Here’s how that looks in code. Given:

    const isParty = !(cake && iceCream) ? false : true;

    we can flip the condition so that it reads in the affirmative:

    const isParty = cake && iceCream ? true : false;

    Then we can refactor that by dropping the ternary altogether:

    const isParty = cake && iceCream;

    That’s MUCH easier to understand!

  • Packaging JS Apps with QuickJS

    QuickJS is a tiny JavaScript engine written in C. Its author, Fabrice Bellard, created FFMPEG and QEMU. QuickJS can run outside of traditional browser environments. This includes packaging JavaScript applications for distribution.

    As of writing, QuickJS supports ECMAScript 2023, so you can write modern JavaScript. It is also fast and well-tested.

    One of the engine’s features is compiling JavaScript code into bytecode. Bytecode is a compact set of instructions that an interpreter can execute. It makes execution more efficient, which is perfect where performance is important. Additionally, the engine supports compilation of JavaScript into dependency-free standalone executables. We will use this feature to package JavaScript apps.

    QuickJS’s small footprint is also suitable for embedded systems and resource-constrained environments.

    Recently, Amazon announced its Low Latency Runtime (LLRT), built on QuickJS. According to Amazon, LLRT starts 10x faster and is 2x less expensive than other JS runtimes on AWS Lambda. That’s a pretty impressive use case.

    Installation

    Installing QuickJS in your development environment is straightforward. Here’s how to get started. Note that on Windows, you can use WSL with Linux.

    Before jumping in, make sure you have Make and a C compiler like GCC or Clang.

    The first step is to get a copy of the source code. You can download the source from the QuickJS website or by cloning it from the GitHub repository. We’ll download it from the website and untar the file:

    wget https://bellard.org/quickjs/quickjs-2024-01-13.tar.xz
    tar -xJf quickjs-2024-01-13.tar.xz

    This command creates the directory quickjs-2024-01-13 with all the necessary source files. I’ll refer to this directory as quickjs from now on.

    Compatibility

    QuickJS does not rely on V8, WebKit, or Gecko. It is not compatible with Node.js or Deno APIs. It does have access to the OS through its own APIs. To make sure your code is compatible with QuickJS, ensure the following:

    • Your code uses ECMAScript 2023 or lower
    • You are not using browser-specific APIs
    • You are not using Node.js or Deno-specific APIs

    QuickJS focuses on the core JavaScript language, so your code should be platform-agnostic.

    Organization

    For this simple Hello World app we’re putting our files into the quickjs directory. In a business context, you should keep your code separate from QuickJS. Otherwise, you can structure your project as you would any other JavaScript project. You can even use modules.

    Building QuickJS

    Once you have the source code, the next step is to compile it. This will convert the code into an executable you can run on your computer. Navigate to the quickjs directory with cd quickjs.

    Finally, compile the source with the make command. QuickJS’s Makefile will detect your OS and choose the appropriate compiler and flags. Run:

    make

    On macOS, Make will use Clang as the default compiler, whereas on Linux, GCC is more common. Compilation will take a few moments. Once completed, several new files will be added to the quickjs directory. qjs is the command-line tool for executing JavaScript files. qjsc is a tool for compiling JavaScript into bytecode.

    Testing QuickJS

    You can run a simple JavaScript file to verify your installation. Create a file named hello.js with the following code:

    console.log("Hello, QuickJS!");

    Save the file into the quickjs directory and then execute it using the qjs binary:

    ./qjs hello.js

    If QuickJS is installed correctly, you will see the message “Hello, QuickJS!” Now that you have QuickJS set up and ready, it’s time to package an application.

    Packaging JavaScript

    To create a standalone executable, use the qjsc executable in the quickjs directory. You can execute it with these commands:

    ./qjsc -o hello hello.js
    ./hello

    If it works, you will see “Hello, QuickJS!” in your terminal. The resulting hello executable is the file that we will distribute.

    There are many flags that you can pass to qjsc. Try experimenting with each. For example, output bytecode instead of an executable. Or disable regular expressions to decrease binary size.

    Package Size

    The executable for this “Hello, World!” example is 4.6 MB. That may seem large for such a simple program, but consider the alternatives. Using deno compile I get an executable that is 76 MB. Compiling the hello.js file using Node 21 produces a 98 MB file. So, in perspective, 4.6 MB seems pretty good.

    Distribution

    You don’t need anything special to distribute a QuickJS-packaged application. Using the standalone executables, you have a range of distribution options.

    QuickJS is portable. That means that it can run in many environments. When preparing for distribution, consider the target platforms. If you want your application to work on Windows, macOS, and Linux, you must build the app on each system.

    Closing Thoughts

    Whether you’re developing IoT devices or building server-side tools, QuickJS is a stellar option. It’s easy to use, fast, and produces relatively tiny executables.

    Try experimenting with it, push its boundaries, and see how it can be used in your projects. What has your experience with QuickJS been? I’d love to hear your stories, successes, and lessons learned.

  • Pure Scrum

    Scrum is a simple approach to software development based on Agile. I have been using some form of Scrum, at least personally, since the mid-2000s, when I learned about Extreme Programming, then Test-Driven Development, Agile, Gherkin, Behavior-Driven Development, actual Scrum, and the Scaled Agile Framework. It’s safe to say that I have picked up a lot of non-Scrum practices along the way, and while they are not necessarily bad, they aren’t necessarily Scrum. So I thought I’d write a bit about those little differences.

    Daily Scrum

    Scrum calls Standups Daily Scrum. The mixup probably came from Extreme Programming which has a daily standup meeting. The typical Standup is focused on what each individual did yesterday, what they will do today, and whether anything is blocking them from getting their work done. But that isn’t the only approach.

    A better approach might be to look at the Sprint Backlog to see how things are coming along, discussing the items as needed. I like this because it puts the emphasis on the product and the team, not the individual.

    Product Backlog Items

    User Stories in Extreme Programming are called Product Backlog Items in Scrum, and Scrum doesn’t care how you write them as long as they are defined with sufficient detail.

    That means that my beloved As a <role> I can <capability>, so that <receive benefit> format isn’t necessary, and neither are Scenarios as defined in BDD and enhanced with Cucumber’s Given... When... Then... syntax. Both of which I lurve.

    Events

    Ceremonies are just Events in Scrum. By the way, there are no gaps between Sprints which means that all Events take place during a Sprint.

    • Sprints can last up to a month
    • Daily Scrum meetings are 15 minutes tops
    • Sprint Planning can last up to eight hours
    • Sprint Review up to four hours
    • Sprint Retrospectives are up to three hours

    That’s up to about 20 hours of meetings during a three week sprint. I can tell you from experience that, while it seems like a lot, it can really pay off.

    Sprint Review

    The Sprint Review isn’t supposed to be just a demo. It is supposed to be a collaborative working session with everyone on the team, including the stakeholders, to provide feedback on the product. The Product Backlog should be updated as a result of any feedback gathered during the session.

    It is also important that only releasable code is shown during the review. If it isn’t truly done, then it can’t be released, and therefore should not be part of the review.

    Roles

    There are three main roles in a Scrum team: Development Team, Product Owner, and Scrum Master. Scrum doesn’t prohibit the Development Team from interacting with the Product Owner, Stakeholders, or Customers (end-users); in fact, it encourages that kind of thing.

    Interestingly, Scrum seems to reduce the responsibilities of the Scrum Master to simply helping the team understand Scrum and removing any impediments to the process. The Scrum Master does not “drive” the team by handing out tasks or telling people what to do.

    Scrum allows Scrum Masters and Product Owners to be on the Development Team, but it recommends against doing so because of conflicts of interest and high workloads.

    Development Team

    The Development Team consists of everyone who is doing the work of creating the product during a sprint. This includes, but is not limited to, programmers, designers, marketers, writers, and researchers.

    Scrum stresses that the Development Team is completely self-organizing—only they can decide how to turn the backlog into functionality. It is also worth noting that the Development Team recognizes no titles or sub-teams such as Lead Developer, Architect, or testing team. The whole team pitches in to complete the increment regardless of individual specialization.


    There are probably many other modifications to Scrum that I have picked up over the years, but these are the main ones that came up during training. I don’t think that these modifications are necessarily wrong—honestly I think many of them are awesome, but I have a natural bias toward systems and processes—they just aren’t Scrum, or “pure” Scrum.