Recent Blog Posts

My Remote Working Setup, Early 2022 Edition

January 7, 2022 by Steven Ng

I've worked from home for years now, and my desk setup has gone through many iterations over the past decade. I recently revamped it again just before the holidays, and since it's the New Year, I figured I'd give a walkthrough of my latest setup.

Since my desk was a little messy, I opted not to put a photograph of the setup in the post. Maybe next time.

The Desk

Several years ago, I decided to hop on the standing desk trend, so my current desk is an inexpensive Ikea Skarsta manual sit/stand desk.

I did buy some keyboard tray rails and a shelf to add an extra wide keyboard tray, which is great.

After getting plantar fasciitis in both heels (yes, I was using a standing mat), I stopped standing. I keep my desk at standing height, however, as that allows me to keep storage underneath (using some Ikea cabinets), and I use a relatively inexpensive drafting stool that is surprisingly comfortable. I did add a gel seat cushion to it, but the stool is quite comfortable even without one because of its "tractor" style moulded seat design.

I have a power bar, gigabit switch and wire tray mounted to the bottom of the desk. On the legs, I mounted a bunch of headphone hooks and the amplifier for my desktop speakers.

Computers

On my desk, I am working off three computers at the same time:

  • Lenovo Thinkcentre M75s Gen 2 (AMD Ryzen 7, 64GB RAM, 1.5TB NVMe)
  • Asus Zenbook Pro UX (6th Gen i7, 24GB RAM, 1TB NVMe)
  • Surface Pro (1st gen, specs too bad to be worth mentioning)

My desktop was purchased at the beginning of 2021 to replace an aging fourth gen i7 whose motherboard capped RAM at 16GB. I chose a Ryzen 7 4750G CPU because of its speed and relatively low power consumption. Having 64GB of RAM available offers a lot of freedom in terms of how many concurrent applications I can run, and makes running well-provisioned virtual machines a lot easier.

Adding the laptop to the setup was actually a recent thing. I hadn't been using it much, and I wanted to start using virtual workspaces to increase my focus and productivity. The thing with virtual desktops is that new windows never open in the workspace I want them to. So for items that I always want in the foreground, like mail, my calendar and other comms, I have them open on the laptop.

By having the apps I need to always be visible running on separate screens driven by a different computer, using virtual workspaces becomes a lot less annoying, albeit still imperfect.

The Surface Pro is mainly a monitor for my security cameras, but it does come in handy for other things like documentation.

I use a single keyboard and mouse to control all three computers - Microsoft's Mouse without Borders app works great in this scenario. For the keyboard, I use an old discontinued SteelSeries Apex gaming keyboard (not mechanical), which I love because of its many programmable macro keys. I use a Kensington trackball, because using a mouse creates hand/arm strain for me in my old age (sigh).

In terms of other peripherals, I use a Stream Deck, which is like a context-sensitive macro keypad, and I have some discontinued Kanto Ben passive speakers driven by an inexpensive low-power audio amplifier that I mounted to my desk. Other random stuff includes card readers and multiple hubs.

In terms of networking, I have a gigabit switch mounted under my desk, and all my computers run wired. I'm not a fan of using wireless unless I absolutely have to. My networking (wired and wireless) is mostly Ubiquiti gear (mainly because of ease of use and setup), although I use OPNsense as my firewall. Because I don't like having ports open, I don't run a traditional VPN; I use ZeroTier instead.

I have other computers that aren't part of my desk but are important to my workflow. I bought some cheap old Dell refurbs that I keep in my basement for running Docker containers and virtual machines.

My desktop is also connected to a CyberPower UPS in case of a short power outage.

Screens

There's a direct correlation between my productivity and the amount of screen real estate I have. Unfortunately, since I hit my forties, I've needed reading glasses for my near vision, which really made things complicated (in other words, more expensive). I can't stand using bifocals or progressives. My fix for that was to work with my optometrist on a separate prescription for working on the computer. Basically, I have a pair of single-vision "work glasses", where everything is sharp between 21" and 27" from my eyes. The hardest part about having "work glasses" is remembering to swap them out for my normal glasses when I leave my desk.

My screen setup has five screens:

  • 2 x 27" 4K monitors running at 100% scaling
  • 1 x 15.6" 1080p portable USB monitor
  • 1 x 15.6" laptop screen
  • 1 x 10.6" original Surface Pro

The two 4K screens are my main screens, and are driven by my desktop computer. I use a gas spring dual arm mount for these two monitors. The dual mount ended up being a mistake, as it is much more limiting in terms of positioning the screens; having two discrete arms at different mount points is much more flexible. I'll probably change over on my next major desk overhaul.

The three small screens sit on my desk below my main screens. The two 15.6" screens are driven by my laptop computer.

Running my 4K screens at 100% scaling gives me a lot more screen real estate than 125%, 150% or 200% scaling (the last of which translates to 1080p in terms of real estate).

Comms

While my laptop has an integrated webcam, it's pretty much garbage in terms of image quality.

I have tried all sorts of things as a webcam: using an old Android phone via USB or Wi-Fi (great image quality but a little fussy), connecting an actual camera (great image quality but very fussy), some cheap webcams (when name-brand cams were unavailable or being scalped due to pandemic pricing) and document cameras (great quality but sometimes hard to position).

In the end, I use a document camera when I can. While document cameras are primarily used in education for showing documents, they're still just cameras at their core (Windows recognizes them as webcams, so the experience is seamless). The resolution and color balance tend to be a lot better, and many have built-in lighting.

I have an overhead LED panel that I mounted to my desk for extra lighting on video conferencing, as having more light always improves the picture quality of a webcam.

In terms of microphones, I've bounced back and forth between a Shure MV7 and a Logitech/Blue Yeti X, but I always come back to the Shure, because it's more directional and smaller. I use either microphone with a desktop stand as opposed to a boom arm (no room). Both are connected via USB. Both the Shure MV7 and the Yeti X have software that can tweak the profile of how your voice sounds, giving you a more radio-announcer sound.

When I can't do calls over speaker, I do have a headset that I use for comms as well. For that, I use a Bose QuietComfort 35 II Gaming Headset. It's basically a normal Bose QC35 II, except that it has a desktop controller and a removable boom microphone. It's one of the most comfortable headsets I've ever used.

In spite of having speakers on my desk, my two microphones have been pretty good in terms of echo cancellation, so I don't often have to resort to using earbuds or a headset.

Other Random Stuff

To prevent accidental keypresses on my laptop computer, I use an acrylic keyboard cover over the keyboard and a small acrylic box lid that I had lying around to cover its numeric keypad. The covers also let me put stuff on top of the keyboard to make up for lost desk space.

I use removable magnetic-tip USB cables for charging and peripheral connections. I have two types of these cables: some cheaper charge-only tips and some data-capable tips. I use the data-capable tips with peripherals like my microphones and smartphones (for charging, or in case I want to use one as a webcam).

In terms of gear, I have found that devices marked as "gaming" devices tend to be the best options for work use, which sucks, because they often have a tacky gamer esthetic. That's slowly starting to change, but there's still a long way to go.

I recently picked up a pair of RGB LED bars for the desk, not because I like tacky RGB, but because I thought they might be good supplemental lighting for video calls. Whether they last will depend on how effective they are.

For cable management, the best solution I found was no-name clips that came with 3M adhesive. I had tried using some 3M Command cable clips, but that didn't work out. While the 3M Command removable adhesive is great, the clips are made of brittle plastic that breaks easily. I do still buy 3M Command strips without any clips, as they work well with no-name clips and for mounting lightweight items in general.

I also display a few toys on my desk. Believe it or not, they're not there for a frivolous reason. They remind me to stay focused. I have a duck figurine, so that I can Ask The Duck. A squirrel figurine is there to remind me to avoid distractions (inspired by Dug's "Squirrel!" gag in the Pixar movie Up), and the Homer, a toy car inspired by an episode of The Simpsons, reminds me that perfect is the enemy of good.

I tweak my desk setup every few months in an attempt to optimize it for my work habits (or because I bought a new peripheral, etc.), but those tweaks usually aren't very significant. If I do another major desk revamp, I'll create a new post.


Microsoft Edge Annoyance: Disable "Mini Menu" on Text Selection

December 17, 2021 by Steven Ng

As is Microsoft's habit, they recently snuck a feature called the "mini menu" into their Edge browser.

Instead of telling me about the feature and letting me decide whether to enable it, it's enabled by default, because, as you know, Microsoft read my mind and knew that's obviously what I wanted.

The feature, in a nutshell, pops up a menu with a vertical ellipsis when you select some text, and in my opinion, really breaks the user experience, especially when using a web application (as opposed to a web site). It introduces unnecessary and unwanted friction if you are even remotely a power user.

Fortunately, you can turn it off.

To turn it off, go to Edge Settings > Appearance (you can use the URL edge://settings/appearance to jump there quickly), and scroll down to the Context Menus section. In the subsection Mini menu on text selection, set the switch for Show mini menu when selecting text to the off position.

If you sorta like the feature, you can blacklist sites, but honestly, you're probably better off turning the feature off altogether.

Once again, thanks, Microsoft!

PS. If you are looking for fixes to common Windows annoyances, you can check my periodically updated article on Microsoft Windows Annoyances, and How To Fix Them.


Ask the Duck

December 16, 2021 by Steven Ng

If you've ever had to help anyone troubleshoot an issue, then you're probably familiar with the frequently used opening statement of "it's not working", which is easily the most useless piece of information about a problem you'll ever get.

The only useful nugget of information in that statement is that there is possibly a problem with a system or possibly a problem with the user/requester.

From most experts, empty problem descriptions can elicit curt, disdainful responses, which are understandable to a degree, but bad form nonetheless.

An expert's time is valuable (as in expensive) and shouldn't be wasted on coaxing more information out of the requester, when it should be obvious that some homework or thought should go into the problem description.

The thing is, asking a question or writing a problem description is actually a skill. And it rarely occurs to organizations to provide this type of training to new recruits, even those in technical positions, irrespective of seniority.

I lurk a number of technical subreddits on occasion to answer questions, because I like the "puzzle" aspect of helping to solve someone's problem. Unfortunately, a lot of requests for help are empty problem descriptions with zero context about what the poster is trying to accomplish. And sadly, some of these questions come from people who are purportedly good at what they do.

And it seems like a lot of posters don't bother (or don't know to bother) to provide clearly written context for their problems to help the more knowledgeable members of the subreddit solve the issue. It usually takes a bit of back and forth before a clear picture of the problem even emerges.

At some point, a lot of the more knowledgeable members of a subreddit simply get tired of answering questions, or of providing nice responses, leading some newbies to incorrectly conclude that the subreddit is toxic (which is usually untrue).

In the end, however, it comes down to people not knowing how to ask the duck.

I don't know the true origin of the phrase, as there are different versions of the analogy. Nevertheless, it usually goes something like this:

Someone had a fake or toy duck, and when they had a problem, they would ask the question to the duck out loud. Since the duck has no knowledge of anything (it is a duck, after all), the person asking the question would provide more context to their problem than they would to a senior person or expert, and the answer/solution is often revealed in the process of asking the question itself.

A lot of people aren't familiar with the concept of asking the duck. It's easily taught, and it is an incredibly important part of problem solving and communicating. It's probably something that should be taught in schools.

For organizations, however, it should absolutely be included as part of the onboarding orientation. It is a skill every employed person should have, and it only takes a few minutes to teach or reinforce.

So the next time someone asks you for help on a problem without any context, ask them if they asked the duck first. And if they don't understand what that means, explain the concept to them nicely.


Who Needs SQL Server When You've Got Postgresql?

December 15, 2021 by Steven Ng

I don't know why I didn't post about Babelfish for Postgresql when it came out a while ago, but the topic came up today during lunch with a friend, so... better late than never?

If you're not familiar with Babelfish for Postgresql, it's an extension for Postgresql that gives you wire-protocol and T-SQL query compatibility with Microsoft SQL Server.

In plain English, it lets your Postgresql server pretend that it's a Microsoft SQL Server. What does that mean? It means Postgresql can be a literal drop-in replacement for SQL Server, sans any licensing fees.

Should you just decommission all your SQL Server instances and replace them with Babelfish-enabled Postgresql? That depends, but the short answer is probably still "no" for the time being.

If you're using enterprise software designed for MSSQL, your vendor (ahem, IBM) will probably not provide any support if you're not using MSSQL or another approved database server. So don't go using Babelfish for PG as your Cognos content store any time soon.

On a related topic, FerretDB is a similar project in that it makes Postgresql a MongoDB compatible server. In other words, FerretDB makes your Postgresql server "web scale" (audio on that link is nsfw).


How I Style My Svelte Components

December 14, 2021 by Steven Ng

Background

So this is just a not-so-little post about how I do styling (as a process) in Svelte, and spoiler alert, I don't use Tailwind.

I understand the appeal of Tailwind, but I think there's a herd mindset around it. Like most technologies, whether web frameworks or whatnot, people who mentally invest in them can get quite... religious about them. As a result, any attempt at discourse tends to be one-sided.

And of course, you can use Tailwind with Svelte. In fact, a lot of people do, and are very happy doing so. And that's great.

But unless a project dictates that I use it, I generally don't find a need for it.

On Premade Component Libraries

As a rule, I generally don't use a premade component library, like IBM's Carbon, Bootstrap or Material UI. Years ago, when I built a web based project management solution, I used Material UI for AngularJS, and had... mixed results.

The problem (for me) with these canned components is that they get you from point A to point B fairly quickly, but unless your end goal is a cookie cutter user interface, you'll end up rewriting or modifying components to make things work the way you want them to.

When you're trying to build something that is, in your mind, special, you often need to make your interface components do more than what canned libraries offer. You then end up forking the code (because nobody wants to be at the mercy of another party's development or fix schedule), and making your own UI components. I'm obviously not averse to using canned libraries if a project dictates that I do. If, however, my final objective is a better mousetrap, I rarely go in that direction.

Clearly I'm not the only one thinking that, because Tailwind (and notably Tailwind UI) has become a great, flexible alternative to canned component libraries. It has clearly struck a chord with a lot of front end developers, and it's probably a good option for users of a lot of web frameworks too. If I were still using AngularJS, I'd probably use Tailwind. But because I use Svelte, that ship has sailed.

My main argument, however, is that I don't consider Tailwind to be a must-use tool for all Svelte developers.

Svelte Styling

One of the reasons why I like Svelte is that styling is tightly scoped within components. Any CSS you create for a component is specific to that component unless you scope your CSS differently (using :global). Depending on the number of HTML elements in your Svelte component, you may not even need to use any classes. At all.

Take this obviously fake and oversimplified component:

<!-- TextInput.svelte -->
<script>
  export let name;
  export let label;
</script>
<div>
  <label>{label}</label>
  <input type="text" name="name" bind:value={name}/>
</div>

In Svelte, you don't need classes to style the component. Svelte already applies a unique class (at compile time) to the component to ensure that CSS that you write for the component is isolated to that component. So for CSS, you can simply apply the styles to the elements themselves:

<style>
  div {
    padding: 20px;
  }
  label {
    font-weight: bold;
  }
  input {
    border-color: blue;
  }
</style>

Unless you have multiples of the same element, you don't need any classes at all. Even if you do, your options aren't limited to classes. If it's appropriate, you can create child components with their own styling. You can also use attributes on your HTML elements: for example, <div type="sometype"> styled with [type="sometype"]. In any case, all the styles in your .svelte file's <style> block are still isolated to the component.
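As a quick sketch of the attribute approach (the attribute values here are just placeholders):

```svelte
<!-- Illustrative only: styling multiples of an element via an attribute -->
<div type="header">Title</div>
<div type="body">Content</div>

<style>
  /* Svelte still scopes these rules to this component */
  div[type="header"] {
    font-weight: bold;
  }
  div[type="body"] {
    padding: 20px;
  }
</style>
```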

If you have your own CSS library (like I do), you already have common rules about styling for most common components. There's no need to be cluttering your HTML markup with classes when most of the time the default styles in your .svelte file are exactly what you need.

If you need to theme or customize your look and feel, you have options. One of them is obviously Tailwind. Another is using CSS variables, like I do.
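As a sketch of what theming with CSS variables looks like, a theme can redefine the variables without touching any component code. The data-theme selector and color values below are hypothetical, not from my actual setup:

```css
/* Hypothetical dark theme: components that reference these
   variables pick up the new values automatically */
:root[data-theme="dark"] {
  --light-gray: #343a40;
  --component-border: #868e96;
}
```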

While people tend to firmly put themselves into one camp or another, I don't take the approach that there is one true way to do things. There is always more than one way to do something, and there may be good reasons for not doing it the way the herd is doing it.

Global Level Styling

For styling Svelte, I basically have a small CSS reset file and a library of CSS variables (a.k.a. CSS custom properties). CSS custom properties work in all modern browsers, so unless you're required to support IE 11, which has been EOL'ed, it's a better solution (in my opinion) than adding a pile of classes to every HTML element.

There are obviously some preconditions that need to be met in order for this system to work, and the most important one is that all your components are under your control. If you like to use components from other libraries "as is", then this strategy isn't great. As mentioned earlier, I personally don't like to use components from other libraries "as is", because those components tend to be designed to be everything to everyone, which often doesn't work out when your application has very specific requirements and UI behaviour expectations (which is more often than not).

CSS Resets

The purpose of a reset file is to make all browsers use the same defaults. If you're unfamiliar with resets, a quick search for "CSS reset" will turn up plenty of good explanations.

The job of the reset mostly comes in the form of setting margins, padding and line heights. This way, when you apply your CSS styles, your pages will render with a consistent look across all browsers and operating systems.

My reset file isn't very large, and I basically use it in all my projects as my CSS starting point.
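For illustration only (this isn't my actual file), a small reset along those lines might look like:

```css
/* Illustrative reset: normalize box model, margins, padding
   and line height so all browsers start from the same place */
*,
*::before,
*::after {
  box-sizing: border-box;
  margin: 0;
  padding: 0;
}

body {
  line-height: 1.5;
}
```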

CSS Variables

I guess the easiest way to describe how I style my web apps now is that it is driven by CSS variables. The approach is very similar to using Pollen. I hadn't heard of Pollen when I came up with my approach, but outside of variable naming conventions, it's virtually the same as what I do.

If you're unfamiliar with CSS variables, you declare them like this:

:root {
  --lightest-gray: #dee2e6;
  --lighter-gray: #ced4da;
  --light-gray: #adb5bd;
  --gray: #868e96;
  --dark-gray: #495057;
  --darker-gray: #343a40;
  --darkest-gray: #212529;
  --black: var(--darkest-gray);
  --white: #ffffff;
}

And to use the variable, you would do something like this:

.some-selector {
  background-color: var(--light-gray)
}

You can also assign variables to other variables:

:root {
  --light-gray: #eeeeee;
  --component-border: var(--light-gray);
}

Perhaps the handiest thing about CSS custom properties is that you can assign a fallback. This lets you set sensible defaults in the absence of a CSS variable:

background-color: var(--light-gray, #efefef);

If your browser can't find the --light-gray variable, the element's background will use #efefef as the default.

So in a Svelte component...

<script>
  // ...your code here
</script>

<div>
  I'm a custom component!
</div>

<style>
  div {
    background-color: var(--light-gray, #EFEFEF);
    border-color: var(--component-border, #000000);
  }
</style>

And in use:

<!-- the tailwind way -->
<MyCustomComponent class="bg-gray-100 border-black"/>

<!-- look ma, no classes! -->
<MyCustomComponent/>

Why This Approach Works For Me

So one gotcha with my approach is that if you don't have any existing CSS from past work, it can take a lot of effort to get to a workable solution.

Over the years, I've amassed my own building blocks for web based UI components that I have either built or forked from other libraries. It's an opinionated set of components that I have been reusing as the foundation of my web projects for quite some time. Because of this, all my apps share a very similar look and feel.

It did not take long to convert everything into CSS variables, nor did it take a long time to get the variables integrated into my Svelte code.

You may be unlucky and not have a foundation of CSS styling to start with. If, however, this approach seems like your preferred way of working, I would suggest using Pollen as your starting point, as it has all the fundamentals covered.

Relax Tailwinder, Keep On Tailwinding

Finally, if comments about Tailwind on Hacker News or Reddit are any indicator, some Tailwind enthusiasts will probably see red-900 and hate this post. ¯\_(ツ)_/¯

Instead of reacting, maybe take a breath. This post is just about my approach to styling Svelte. While I am saying Tailwind isn't for me, I did not say that it isn't for you.

Now get back to Tailwinding like a boss, and forget that you ever saw this post. You'll be much better for it.


The Rockstar Programming Language

December 8, 2021 by Steven Ng

So I guess I must have missed it when it was first announced in 2018, but on Hacker News yesterday, the Rockstar programming language hit the front page.

Rockstar is a language that is unlike any programming language I've seen. Every program you write is... a song. For example, the fizz buzz program goes like this:

Midnight takes your heart and your soul
While your heart is as high as your soul
Put your heart without your soul into your heart

Give back your heart

Desire is a lovestruck ladykiller
My world is nothing 
Fire is ice
Hate is water
Until my world is Desire,
Build my world up
If Midnight taking my world, Fire is nothing and Midnight taking my world, Hate is nothing
Shout "FizzBuzz!"
Take it to the top

If Midnight taking my world, Fire is nothing
Shout "Fizz!"
Take it to the top

If Midnight taking my world, Hate is nothing
Say "Buzz!"
Take it to the top

Whisper my world

I won't spoil anything by explaining the language's syntax, but there's an interesting presentation on Rockstar by the language's creator, Dylan Beattie. I've set the link to jump to the meaty part of his long presentation.

It appears that the language was created as a goof on the term "rock star programmer", but since the language's announcement, compiler implementations have actually been created. For all intents and purposes, it's a real programming language.

If we're lucky, we'll see another language in the future called "Ninja" too.


Optical Disc Pitfalls

December 7, 2021 by Steven Ng

I was recently doing a little purging to "spark joy", and some of that purging included backups on optical discs (CD and DVD) that go back more than twenty years.

I'm a little ashamed to admit that I am a bit of a data hoarder, but I'm getting better.

My general feeling now is "if you haven't looked at this in 10 years, you're probably not going to miss it." In any case, I separated my optical discs between data and non-data (e.g., very old software installers, etc.) and still decided to review the content on the data discs just in case there were some things I'd want to retain.

The one hard lesson I learned was that not all of the discs survived the test of time. The non-surviving discs saved me a lot of time on review (sort of, considering how slow optical discs are to read), but they did cause me to rethink my use of opticals as a backup and archive medium.

To be fair, the cheaper the discs, the more likely they were to fail. Having said that, many branded discs had failed too. Some discs worked on some drives but failed on others (I was using two different drives on two different computers to try to get through them faster).

An even bigger problem is that opticals have, in a way, become a bit of a dinosaur. Unless you have a wallet teeming with cash, tape drives are priced beyond the reach of most small businesses and consumers. Blu-ray writables provide a somewhat affordable method of cold storage, but my experience with them has been hit or miss.

Another conundrum? How do you destroy an optical disc before disposal? There are many shredders that can shred a CD, but it is a bloody mess. You will end up with tiny shards of sharp plastic and glitter flying all over the place. I discovered that wrapping an optical disc with saran wrap reduces the flying crap produced by the shredding process. Also, some of my older discs (namely Maxell CD-Rs) would jam my shredder.

There used to be a device that would emboss the readable surface of a disc, but they're no longer in production and are hard to find. I guess you could use a sander, but the dust particles produced probably aren't great for your lungs.

If you have a lot of discs to destroy, I would suggest buying a cheap, secondary shredder (to avoid ruining your paper shredder) and a lot of Saran Wrap.

It also sucks that where I live, there's no way to recycle old writable optical discs. They're basically destined for landfill.

Going back to the offline backup issue, however, what is a person to do? Well, there are cloud services. Something like OneDrive or Dropbox is a good way to back up, since they tend to be a relatively frictionless way to get your data into the cloud (and offer a little protection from ransomware attacks). External hard drives and SSDs are good, but you have to remember to disconnect them from your devices, lest they be exposed to a malicious attack or catastrophic user error.

I still wish there was an easy and affordable solution for offline write-once-read-many high capacity removable media backup, but alas, there's simply not enough demand to make that happen.



Housekeeping: Site Updates

November 25, 2021 by Steven Ng

Just some housekeeping notes related to the site that are neither here nor there.

I've moved the site off Github Pages recently, as I did a very minor rewrite of the site, and changed it from a static site to one that requires an actual server. As part of the site refresh, I have finally made the site responsive as it relates to phone screens. It's not as though the site was not readable on small screens before, but it is much better now.

I also recently dropped Google Analytics from the site. I never really looked at the stats, and I realized that I'm not that interested in maximizing my audience. I know my audience is microscopic, and overanalyzing my SEO options is frankly not a good use of my time. I'm not saying I won't revisit this in the future, but right now, analytics are off the table.

What this means is that on the main domain (www.braintapper.com and braintapper.com), no cookies are dropped into your browser. If I have embedded a video from YouTube, you might end up with some cookies for that.

While I currently don't have any applications running on any subdomains (e.g., someapp.braintapper.com), some of those applications may use cookies, but each of those applications will have their own privacy and cookie policy as required.

Because they are subdomains, you will see cookies under *.braintapper.com in your browser.

Also, I have updated content in the Articles section, including adding one for Svelte/SvelteKit resources.

Now that I have a content management system set up, I'm hoping it will result in more frequent posts, but I'm not making any promises there. Better to manage your expectations, right?


My Dockerfile for SvelteKit

November 24, 2021 by Steven Ng

I pretty much do most of my web development in SvelteKit these days. Because I mostly prefer to deploy to Docker containers, having a reusable Dockerfile template comes in handy.

I've got a template below that can be used with SvelteKit applications built with the Node adapter (adapter-node).

FROM node:16-alpine

ENV NODE_ENV production

RUN apk add --no-cache dumb-init bash

WORKDIR /home/app

COPY package*.json ./
COPY .env ./

RUN npm ci --only=production

COPY . .

EXPOSE 3000

CMD ["dumb-init", "node", "/home/app/build/index.js"]

The key points to know:

  1. I use /home/app as my working folder; feel free to substitute your own preferred path.
  2. I don't use my SvelteKit .env for storing environment variables. For me, it's more like a config file for my web application: it contains the names of the environment variables that I use in the app, which is why I copy it to the container. To access environment variables set in my Docker Compose file, I use process.env in the app.
  3. I like to have a bash shell in my containers in case something bad happens.
  4. I use dumb-init to start my process as PID 1.
  5. I copy . . because I have additional files beyond the build folder as part of my application (for database migrations, etc.). I use .dockerignore to exclude anything not required in my container image.
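On point 5, a minimal .dockerignore for this kind of setup might look something like this (these entries are typical examples, not my actual file):

```
node_modules
.git
.svelte-kit
Dockerfile
```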
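Point 2 above, in practice, looks something like the following sketch. The variable names (DB_HOST, PORT) and fallback values are made up for illustration and aren't from my actual app:

```javascript
// Hypothetical sketch: reading environment variables (set via Docker
// Compose or the shell) in server-side SvelteKit code via process.env.
const dbHost = process.env.DB_HOST || "localhost"; // fallback when unset
const port = Number(process.env.PORT || 3000);     // numeric with default

console.log(`connecting to ${dbHost}:${port}`);
```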