Hi, I’m Dustin Boston, a creative and curious Sr. Frontend Engineer at True Anomaly in Los Angeles, California. I specialize in frontend web development with JavaScript, React, and Node.js. You can also find me on GitHub and LinkedIn.

  • Why I Chose Tauri for My Text Adventure Game

    When I started designing Head in the Cloud, a horror text-adventure game, I figured C would be the natural language of choice. There is a romanticism to writing a text-based game in C. But I wanted to ship the game, and I knew that C wasn’t the best choice for that (for me).

    I ultimately chose Tauri, a framework that allows you to build desktop applications using web technologies, over a traditional systems language. Here’s why.

    1. Avoiding the Language Learning Curve

    My biggest constraint was time. As a Lead Software Engineer with two decades of experience in web development, I’m most familiar with HTML, CSS, and JavaScript. By contrast, I don’t know C, and my knowledge of Rust (another systems language) is novice at best.

    Choosing C would have turned this into a language learning exercise rather than a game development journey. By choosing Tauri, I eliminated the “language tax.” I can think in game logic—inventory arrays, state management, narrative branching—rather than syntax. The goal is to ship a game, not to learn a language.
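
    To make that concrete, here is a minimal sketch of the kind of game logic I mean (hypothetical names, not code from the actual game):

```javascript
// A hypothetical sketch of text-adventure state: an inventory array
// plus a tiny branch of narrative logic. Not code from the real game.
const state = {
  room: "attic",
  inventory: [],
};

function pickUp(item) {
  state.inventory.push(item);
  return `You take the ${item}.`;
}

function hasItem(item) {
  return state.inventory.includes(item);
}

function tryDoor() {
  // Narrative branch: the locked door only opens with the key.
  return hasItem("rusty key") ? "The door creaks open." : "The door is locked.";
}

console.log(tryDoor()); // "The door is locked."
pickUp("rusty key");
console.log(tryDoor()); // "The door creaks open."
```

    Everything here is ordinary JavaScript I already know: no borrow checker, no manual memory management.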

    2. CSS is Great for Text, Layouts, and Graphics

    The web browser is the most sophisticated text rendering engine in existence. If I were to build in C, I would be giving that up. I also wanted to keep the option of modernizing the game with graphics, since a text-based game benefits a lot from illustrations.

    Using Tauri allows me to use CSS. I can use Flexbox and Grid for responsive layouts that look good at any screen size. I can load custom web fonts to set the retro atmosphere instantly. I can use CSS animations for subtle text fades or “glitch” effects that would be nightmarish to code from scratch in C. Tauri gives me an AAA-level UI layer for free.

    3. Smaller Build Sizes than Electron

    The immediate counter-argument to using web tech for desktop apps is usually “Electron bloat.” Electron bundles a version of the Chromium browser and Node.js into every single application installer. This leads to simple chat apps weighing in at 100MB+.

    Tauri solves this by relying on the Operating System’s native webview (WKWebView on macOS, WebView2 on Windows, WebKitGTK on Linux). It does not bundle a browser.

    The result is a massive reduction in binary size: a basic Tauri app can be less than 5 MB. For a text adventure game, keeping the footprint small is essential. I get the development experience of Electron without forcing the user to download an entire web browser just to play a text game.

    4. Simpler Architecture than Electron

    While I wanted the web environment, I did not want the Electron ecosystem. Electron is powerful, but it requires you to manage the complexity of the Main vs. Renderer processes, context bridges, and inter-process communication (IPC).

    Tauri simplifies this architecture. It is built on Rust, providing a secure, lightweight backend that communicates with the frontend. I don’t have to worry about spinning up worker threads manually or managing complex menu configurations just to get a window on the screen. It provides sensible defaults that let me focus on the JavaScript layer where my game logic lives.

    5. Iteration Velocity

    Game development requires constant tweaking. You change a line of dialogue, you tweak a color, you adjust a timing delay.

    In a C environment, this is a compile-run loop. In the Tauri environment, I have access to Hot Module Replacement (HMR). I can change the CSS of the game interface or the JavaScript logic of a puzzle, and the game window updates instantly without a restart. Over the course of a 6-month development cycle, those saved seconds compound into days of saved time.

    6. Safety by Default

    Writing a game in C opens the door to memory leaks and segfaults. One bad pointer arithmetic error can crash the user’s desktop.

    Tauri relies on Rust for its backend bindings. Even though I am a Rust novice, I benefit from Rust’s memory safety guarantees. I am writing high-level JavaScript, which is sandboxed, and the heavy lifting is done by a backend that is proven to be memory-safe. It is a safety net that C simply does not offer.

    7. Cross-Platform without the Pain

    Finally, compiling C for Windows, macOS, and Linux requires managing makefiles, compiler flags, and distinct build environments. Tauri abstracts this complexity. With a few commands, I can cross-compile binaries for the major operating systems. Since the UI is just a webview, I don’t have to rewrite the rendering logic for different OS window managers. It ensures Head in the Cloud is accessible to everyone, regardless of their machine.

    Conclusion

    There is no “best” language, only the best tool for the job at hand. For a high-fidelity 3D shooter, C++ or Rust is the answer. But for a narrative-driven text adventure built by a veteran web developer? Tauri offers the perfect intersection of performance, file size, and developer velocity. It lets me respect the user’s hardware while respecting my own time.

  • Summary of the HTTP Archive 2025 Web Almanac

    The HTTP Archive’s 2025 Web Almanac is important but huge, at 15 chapters. I’ve summed up each chapter so you can get the gist without spending hours reading through it. Ready for this?

    Fonts

    • Web fonts are nearly universal
    • Self-hosting continues to rise; Google Fonts still dominates
    • Icon fonts are still around
    • WOFF2 is the de facto standard
    • Preconnect and preload are now mainstream
    • font-display: swap is the new normal
    • System fonts are rising for performance
    • Variable fonts are growing but not dominant

    WebAssembly

    • WebAssembly has evolved into a universal runtime
    • Adoption is growing steadily, especially among top-tier sites
    • Modules vary widely in size and usage
    • .NET and system level libraries dominate
    • Advanced WASM features are exploding in usage

    Third Parties

    • Third parties are nearly universal
    • Request volume is rising
    • Scripts, images, and other types dominate
    • Google services dominate the ecosystem
    • Low-ranked sites load more third-party requests
    • TCF is the most widely used consent framework
    • Third-parties recursively load more third-parties

    Generative AI

    • Has become a core part of the modern web
    • Cloud has higher quality models than local but at the cost of privacy
    • Local AI technologies like WebGPU, WebNN, etc. are maturing
    • Browsers are starting to ship AI APIs
    • Robots.txt is universal; llms.txt is new and barely adopted
    • There has been an increase in AI-favored words
    • Vibe coding platforms are accelerating site creation
    • The .ai domain exploded
    • Agentic browsers are the next frontier

    SEO

    • AI systems are beginning to rely on SEO practices
    • Visibility means being understood by AI, not just crawled
    • Crawlability and indexability are getting more complex
    • Titles and headings are consistent; meta descriptions are rebounding
    • JSON-LD is leading structured data adoption, AMP below 1%

    Accessibility

    • Progress is slow; Lighthouse scores are up slightly
    • Laws like EAA and ADA drive change
    • Core failures like contrast, focus indicators, and ARIA misuse remain
    • Gov and edu domains lead in accessibility

    Performance

    • Loading performance (LCP & FCP) improved modestly
    • Modern formats like WebP and AVIF are growing, but JPG still leads
    • Resource prioritization with fetchpriority is rising
    • Interactivity (INP & TBT) is strongest on desktop, mobile improved
    • Home pages outperform secondary pages
    • Visual stability (CLS) improved, unsized images are the worst offender

    Privacy

    • 75% of websites include at least one third-party tracker
    • Google Analytics and Facebook Pixel dominate the tracking ecosystem
    • Cookies remain core to tracking
    • Stateless tracking and fingerprinting remain and are hard to block
    • Trackers are adapting to browser protections
    • Regulatory compliance is patchy
    • Do Not Track is still detected even though it’s obsolete

    Security

    • DDoS attacks reached an unprecedented scale
    • Supply-chain compromises grew dramatically
    • There was a major React vulnerability
    • Transport security is nearly universal
    • Let’s Encrypt still leads with Google Trust Services gaining
    • Security headers adoption is rising
    • Isolation via COOP, CORP, and COEP is improving

    PWAs

    • PWAs have matured after a decade
    • All main browsers support installation
    • Service worker adoption has exploded
    • Manifest usage is stable but only 10% of sites have one
    • PWA technology usage has doubled since 2022
    • Notifications are mostly ignored

    CMSs

    • 54% of websites use a CMS
    • WordPress still leads, powering 64% of CMS sites
    • Shopify, Wix, and Squarespace grew modestly
    • Performance is defined more by implementation than platform
    • Page builders are widespread but add complexity
    • Wix leads in Core Web Vitals
    • Average page weight is over 2 MB on desktop and mobile
    • Structured data, semantic markup and clarity matter most to LLMs

    Ecommerce

    • Four models: SaaS, PaaS, self-hosted, and API-first
    • Subdirectory stores may be missed (make the store your home page)
    • Ecommerce is 20% of the web
    • WooCommerce dominates, Shopify is second and growing
    • Performance is a major differentiator
    • PayPal’s share dropped, Stripe and Google Pay gained

    Page Weight

    • Page weight still matters
    • Low end devices choke on large JS
    • Metered data makes heavy pages literally unaffordable
    • The web keeps getting heavier, from 845 KB in 2015 to 2,362 KB in 2025
    • Images and video dominate bytes
    • JavaScript sizes are massive
    • Request volume is rising
    • Adoption of compression, minification, caching is mixed
    • Page weight strongly correlates with Core Web Vitals

    CDNs

    • CDNs now drive modern web protocols, especially HTTP/3
    • Adoption of CDNs keeps rising with Cloudflare and Google dominating
    • Performance wins are huge
    • Brotli compression is now widespread
    • CDNs enforce strong security

    Cookies

    • 60% of cookies are from third parties
    • Most cookies come from ads and analytics networks
    • Security attributes (HttpOnly, Secure) are underused
    • Cloudflare leads in partitioning of cookies
    • Median cookie age is one year
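
    As a refresher on what those underused attributes look like (my illustration, not an example from the Almanac), a well-protected cookie header reads something like:

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax; Max-Age=31536000
```

    Secure keeps the cookie off plain HTTP, HttpOnly hides it from JavaScript, and Max-Age=31536000 matches the one-year median age noted above.
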
  • Conviviality as the Antidote to Enshittification

    A website to destroy all websites argues that friendliness is the cure for the enshittification of the internet (the process where online platforms degrade in quality over time as they shift from serving users to extracting maximum profit for shareholders).

    The article outlines three things that convey friendliness on the web:

    • Authoring and sharing content for free
    • Creating your own social networks
    • Learning how to write code

    Personal websites, the article argues, are the embodiment of conviviality on the web. It ends with an encouragement to make your own site and share it on PersonalSit.es.

  • Don’t Forget About JavaScript Iterators

    In Stop turning everything into arrays (and do less work instead) Matt Smith argues that modern JavaScript developers often default to converting data into arrays so they can chain familiar methods like .map(), .filter(), and .slice(). That habit, he says, leads to unnecessary work—extra memory usage, wasted computation, and slower apps.

    Instead, he encourages leaning on iterators and generator functions, which let you process data lazily and sequentially. With iterators:

    • Work happens only when consumed, not upfront.
    • You avoid building full arrays when you only need a slice of the results.
    • You can stream data from APIs or large collections without blowing up memory.
    • You can compose transformations without paying the cost until the final .toArray().

    His rule of thumb: If you don’t need the whole array, don’t create one. Iterators represent “work that hasn’t happened yet,” making them ideal for efficient pipelines, async data fetching, and scenarios where you only need a subset of results.
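
    A quick sketch of the idea (my own example, not code from Matt’s article), using plain generator functions so it runs in any modern runtime:

```javascript
// Lazily square and filter a list, stopping as soon as we have 3 results.
// No intermediate arrays are built along the way.
function* squares(nums) {
  for (const n of nums) yield n * n;
}

function* evens(iter) {
  for (const v of iter) if (v % 2 === 0) yield v;
}

function take(iter, n) {
  const out = [];
  for (const v of iter) {
    if (out.length === n) break;
    out.push(v);
  }
  return out;
}

const result = take(evens(squares([1, 2, 3, 4, 5, 6, 7, 8])), 3);
console.log(result); // [4, 16, 36]
```

    Newer runtimes also ship native iterator helpers (.map(), .filter(), .take(), and .toArray() directly on iterators), which express the same pipeline without the hand-rolled helpers.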

  • Code Review at AI-Scale

    Traditional Code Review Is Dead. What Comes Next? makes the case that humans won’t be able to keep up with code reviews when AI is at scale. It concludes that humans should move to testing behaviors in a preview environment (i.e. manual testing) instead of line-by-line reviews. Personally, I don’t see how you can ship AI-generated code without a traditional code review. However, I do agree that humans will be the bottleneck when AI is operating on code at scale. Is there a better solution?

  • AI-Ready Frontend Architecture

    To get the most out of AI it makes sense to architect your frontend in a well-known and predictable way. I’ve been thinking about what an AI-ready frontend architecture might look like but have yet to test and recommend a specific solution. However, Nelson Michael has an interesting take on the problem in the article A Developer’s Guide to Designing AI-Ready Frontend Architecture.

    Here are some key points:

    • AI amplifies weak architecture
    • Teach AI your architecture explicitly
    • Directory conventions matter more than ever
    • Use design systems as constraints
    • The use-case pattern is the backbone
    • Use middleware for cross-cutting concerns
    • Pay special attention to auth, testing, and observability
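
    To illustrate a few of these (my sketch, not a layout from the article), the kind of explicit, convention-driven structure they point toward might look like:

```
src/
  features/
    checkout/
      use-cases/      # application logic (the use-case pattern)
      components/     # feature-local UI built from the design system
  shared/
    design-system/    # constrained, reusable primitives
  middleware/         # cross-cutting concerns: auth, logging, observability
```
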
  • Dave Kiss on Ideas and Execution

    I was so disappointed when I first heard the adage “ideas are a dime a dozen.” I had plenty of ideas. The hard part, I learned, was actually building the damn thing. Plenty of people have pointed out how that barrier is gone now, but I like how Dave says it.

    Remember when coming up with a great idea was the easy part? Ideas were worthless. What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.

  • Simon Willison on Technical Blogging

    Simon gives some solid, and surprising, advice for technical bloggers:

    My number one tip for blogging is to lower your standards! Aim to hit publish while you are still actively unhappy with what you have written, because the only alternative is a huge folder full of drafts and never publishing anything at all.

  • Robots.txt is Required for Google

    Fix Your robots.txt or Your Site Disappears from Google

    Google now requires a robots.txt file or your site won’t get indexed.
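
    If your site is missing one, a minimal, fully permissive robots.txt (the sitemap URL here is a placeholder) is just:

```
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```
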

  • Attention Residue

    What Is Attention Residue? The Hidden Focus Killer That’s Sabotaging Your Productivity

    Attention residue is when human attention remains “stuck” on a previous task after switching to a new one. It happens because the brain can’t instantly switch “mental sets.” Instead of a clean break, a “background process” continues running on the unfinished or emotional previous task. So your focus gets fragmented and your cognitive capacity is reduced.

    The Cost of Context Switching

    The article argues that multitasking is a myth; it’s actually rapid task switching with heavy neurological penalties. I 100% agree with this.

    • Productivity Loss: Constant toggling can cost up to 40% of productive time.
    • Recovery Time: It takes an average of 9.5 minutes to fully return to a productive workflow after a digital interruption.
    • Cognitive Decline: Residue impairs working memory, problem-solving, and creativity. It mimics the effects of a lower IQ and contributes to chronic decision fatigue and burnout.
    • The Zeigarnik Effect: Incomplete tasks occupy more mental RAM than completed ones, exacerbating the residue.

    The Upside: Productive Residue

    Residue is only negative when your attention is scattered across different topics. When applied to a single deep-work task, residue can be an asset. The subconscious continues processing the single problem during breaks (incubation), leading to breakthroughs.

    7 Strategies to Eliminate Residue

    The author outlines seven methods to minimize cognitive drag:

    1. Master Single-Tasking: Create an environment where only one task is possible. Use blockers and dedicated devices to force focus.
    2. Design Transition Rituals: Perform a “brain dump” or physical movement (stretching, walking) between tasks to signal the brain to close the previous mental set.
    3. Strategic Time Blocking: Batch similar tasks (e.g., all emails at once) to reduce the frequency of mental gear-shifting.
    4. Control Information Diet: Turn off non-urgent notifications. Most workplace residue comes from treating email/Slack as a synchronous, always-on activity rather than a batched task.
    5. Create Boundaries: Use physical cues (headphones, closed doors) and digital hygiene (separate browser profiles for work vs. leisure) to compartmentalize focus.
    6. The “Parking Lot” Method: When a distracting thought or incomplete task pops up, write it down immediately to “offload” it from working memory, then return to the current task.
    7. Schedule Recovery: Build in buffer times between high-intensity tasks and use a “shutdown ritual” at the end of the day to sever the connection to work stress.