Tech Blog

Table of Contents
  1. The Tech Lead Role
  2. Why I Hate Jira and Confluence
  3. CodeMash 2023
  4. Beam Box
  5. Wildermyth Legacy
  6. Resume Feedback
  7. Unix Tools
  8. Quest Command
  9. Palette Cycle
  10. Server NAS
  11. Markdown with Kotlin
  12. No JS Website
  13. Github Actions
  14. Going Open Source
  15. A Short Return to Modding
  16. Advent of Code
  17. Readings
  18. Tech Blog Site
  19. Sprite Sheet Gifs
  20. Hacktoberfest 2020

The Tech Lead Role


While Tech Leads are present in almost every tech company, the ambiguity in their role can lead to misunderstanding and waste. I think Tech Leads provide the most value when they have a clear vision of their role and its responsibilities.

The Tech Lead (TL) role is widely and loosely defined. It can mean everything from "most senior dev on the team" to "HR People Manager over developers". I'd define it as "person with final say on technical decisions and responsible for all technical output of a team". In my definition, the TL works with and oversees the developers on their team, even though those devs usually report to a separate people manager.

Junior developers start their career by pulling individual cards and understanding the system just enough to complete that block of work. They don't decide which card to pull or why it's more valuable than other cards, though they should be learning the answers to those questions. As developers grow, they become more involved with the planning and design side of software: they attend grooming sessions, break down features into cards, and eventually move to decomposing user needs into those features. As a developer moves towards a TL or architect role, they continue to take on more tasks that require self (and team) direction, instead of fulfilling pre-designed tasks. This parallels business in general: as a foot soldier, you don't question orders, but as a CEO (though you may get many suggestions) there is no one to tell you what to do; you have to decide the path for your team.

A developer is responsible for completing their card. A tech lead is responsible for a team's technology holistically. There are many factors and compromises to be made, and the sphere of responsibility and the sphere of influence don't always fully overlap, but that doesn't change the goal of the job. As a TL, you have more insight into the team than any of the bosses in the chain above you, and the competent leaders above you are counting on you to be able to assess and redirect the team as needed to achieve its goals.

I love being a tech lead. In part that's because it's about as close as you can get to being a CTO without having to be far up the career ladder or take on as much personal risk. My job isn't to work a specific card or to develop a specific feature; it's (along with my other team leads) to tend and grow a mature software-building team. My role specifically is to guide, develop, and oversee the team from a software development practices side. I expect to share prioritization work with my product owner (though they have final say) and to assist my team's people manager, generally by giving insight into their people's technical skill level. In my domains, I don't expect to be told to make a different technical decision, nor what style or practices to focus on. I'm actively seeking advice in all these areas, but I have a responsibility and an expectation that making those decisions is my contribution to the company. This is a broad scope and naturally entails a level of ambiguity.

I believe the best tech leads are constantly context switching between translating business desires into features, working with the product owner and delivery manager/scrum master to estimate and prioritize work, collaborating with other areas and teams (security, architecture, integrations), working with the devs (coaching, reviews), and doing technical work (cards, proofs of concept, etc.). A good TL isn't waiting to be told which area to focus on, but is constantly evaluating the best use of their focus. They know they can't possibly get everything done that would be valuable for them to do, so they're constantly making priority calls on what to focus on. I think TLs' styles can and should vary, and I think that the same TL may operate very differently on different teams, and even on the same team at different times, as they adjust to fit what the team needs. These choices are subjective and messy.

In the course of my career, the average TL I see does card or sprint work about 20% of the time. Because I love coding, I've often been able to find ways (even if it means working more hours) to code closer to 60% of my TL time. I wouldn't recommend this to the average TL, and I believe I've been able to do it because I've ruthlessly pushed back against busywork. I fight against low value meetings, I've regularly invested time in tooling to eliminate tedious administrative tasks, and I've really focused on empowering devs to take over parts of my job (eventually helping them to become tech leads themselves). While sometimes a TL needs to shield a team by participating themselves in really low value busy work, it's even better to fight against having that low value work in the first place. I believe a necessary part of the TL's job is to identify time sinks, whether they are coming from an overly ambitious dev, a product owner that misunderstands time and effort for a feature, or even a boss or exec who is disconnected from the tech.

I do not believe that a TL's time should be factored into any kind of capacity estimation, as it's essential that a TL has the freedom to pivot to whatever is most needed for the team, at any time. That may be coaching a struggling dev one day and calming a concerned stakeholder the next. If the TL does not have this flexibility, it means they've been reduced to a 'senior dev' position, and that essential role of a TL is not being fulfilled. This is in line with how a product owner or people manager's time is not tracked as part of team capacity, even though they are necessary for the functioning of the team; they, like the TL, are a different role from a developer.

The TL position is not for everyone; it requires self-drive as well as an ability to feel comfortable with ambiguity. It's a challenging role, but that very challenge makes it more rewarding.


Why I Hate Jira and Confluence


Confluence and Jira have been used at a number of companies I've worked for, and at each one I've been frustrated by them. It's not that I think they're terrible products (though I do think they're not ideal), but that they're square pegs forced into round holes. They are clearly successful products, but I believe this is due more to their effective marketing than to the value they provide developers. For those unaware, Confluence acts as a knowledge store, where people can write and link to text pages (with formatting and pictures). Jira is a sprint board and sprint metrics analytics platform.

At their core, I believe that Confluence and Jira are marketed to and developed for business users, but they are used predominantly by developers. The feature set is designed for non-technical, non-power users, but is then forced on people who would be power users if they could. The end result is that at least half of their users are stuck taking tedious steps instead of accelerating. In this post I'm thinking purely from a development team perspective: Jira and Confluence may be ideal for business users, UX people, and product managers. That said, I believe the majority of their users are developers, and the tools are not a good fit for them.

I'm a huge fan of the Unix Philosophy, and Confluence and Jira clearly go against it. Both tools are kitchen-sink approaches that try to do everything in a single, semi-walled garden. The applications are not at all open to tinkering, but instead want to bring your code into their garden through extensions, which are often paid. The tools are meant to work for a large organization with centralized structures, as opposed to many small teams using a decentralized approach. Source code is available only to commercial license holders. When another dev and I spent a good chunk of time attempting to dockerize and upgrade Confluence, we ended up giving up because it was such a pain. Their organization lends itself to a "one size fits all" setup where every team shares the same Jira settings and therefore inherits a bunch of process and restrictions that don't fit the team's use case.

Confluence accepts markdown, but once you've pasted it in, it auto-converts it to its internal representation, and there is no going back. Just like its version control, its editing is a proprietary black box. This means most of the things that I hate about Microsoft Word (not hackable, destructive editing, etc.) are true of Confluence, coupled with the fact that I can't use any of my existing tooling for version control work (reviews, rollbacks, etc.). In essence, Confluence expects the user to be entirely GUI-based and to work only with its tools, without modifications. The freedom to work how I like to work, and to iterate on my processes like I do everywhere else in my development life, is lost here.

Jira works the same way. I give it points for its decent API, which I've been able to use to essentially eject from creating cards through the GUI. Outside of that, it's again focused on non-technical users. Card creation is an incredibly mouse-based experience where most fields companies consider mandatory (team, reporter, epic, points, etc.) must each be clicked, followed by a half-second delay while the field populates, before clicking again to confirm. The editor is the same editor from Confluence. There are automations that can be applied, but they're all low/no code: again focused on non-developers, to the pain of developers, as there's a loss of power and tooling when a dev has to translate code into dragging boxes around. There are a large number of dashboards and analytics that delivery managers and scrum masters have access to, but most that I've known export the raw data into Excel so they can manipulate it and see what they actually need.
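As an illustration of scripting around the GUI: Jira's public REST API documents a POST /rest/api/2/issue endpoint for creating issues. A minimal Kotlin sketch might look like the following; the host, project key, and field values are placeholders, and authentication is omitted entirely:

```kotlin
import java.net.URI
import java.net.http.HttpRequest

// Build the JSON body Jira's create-issue endpoint expects.
// (Project key, summary, and issue type here are made-up examples.)
fun issuePayload(projectKey: String, summary: String, issueType: String): String =
    """{"fields":{"project":{"key":"$projectKey"},"summary":"$summary","issuetype":{"name":"$issueType"}}}"""

fun main() {
    val body = issuePayload("TEAM", "Add retry to the importer", "Task")
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://jira.example.com/rest/api/2/issue")) // placeholder host
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    // Actually sending it needs credentials and a real host, e.g.:
    // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(request.method() + " " + request.uri())
}
```

Wrap that in a small CLI and card creation becomes one command instead of a dozen clicks and half-second waits.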

GitHub issues, wikis, and boards are so much nicer to use as a developer and offer a fairly complete alternative for developer purposes. Other products like Trello offer lighter weight boards that fulfill many of the same needs in a developer friendly way. I believe that Atlassian is such a juggernaut for several reasons. First, they market to the decision makers instead of the users (assuming these tools are used most by developers). Second, they court large companies that are generally more focused on busyness and conformity than productivity and innovation, and are therefore more interested in a tool that excels at tracking developers than one that allows developers to be productive.

Throughout this post, I've assumed that the majority of users of Confluence and Jira are developers. I think this is true in any ideal case. Where this falls down is when developers are separated from their users by multiple levels of analysts and middle management. Developers, like other craftsmen, should be purveyors of tools. We should be elevating our non-technical partners and teaching them how to use, tweak, and make better tooling. Instead, we often adopt a large, bloated, one-size-fits-all tool simply because we'd rather keep our heads down than improve our own and our partners' lives. When a developer acknowledges a problem, and then sighs and admits defeat because "it's always been this way" or because "that's what the company wants", it makes me think that developer has exchanged the fulfilling, exciting role of problem solving for a dehumanizing ticket-taker mentality.

CodeMash 2023



If you're here from my talk on pairing, thank you!

Feel free to check out my website, blog, and the talk slides. You may also be interested in the Beam Box that my brother and I designed and brought to display at the CodeMash Maker Space.

Beam Box




Thanks for scanning the QR Code! My brother and I designed and modeled this "Beam Box" to play with combining 3d printing and LED lights. The goal was to make a conversation/art piece that would be modern and fun. The name came from the idea of having a beam racing around in a box.

We finished this just in time for CodeMash, but we're considering uploading the files later. If we do upload them, I'll update this post with a link to them, so keep this page bookmarked if you're curious.

This was predominantly a pairing project between my brother and me. If you'd like to hear more about pairing, attend my "Pairing: From Pain to Profit" talk Friday at 4pm!

If you're curious to hear more about how we created the project, the mistakes we made and lessons we learned, read on!

Vision and Definitions

I've always enjoyed messing around in Blender, though I've never been really good at 3d modelling, or really anything that requires spatial skills. I bought a printer last November and have been using it as an excuse to learn more about modelling. I watched a great 35-part course on precision modelling that really helped me understand how to be more precise in my models. (There is a lot of self promotion / recap, but using YouTube's 1.5x speed and skipping intros, it was super informative.)

When talking with my brother about printing ideas, he came up with the concept for a "Beam Box": a "black cube whose inner walls are diffused led lights". With that vision, we spent a number of nights together modelling the cube and thinking through how we could make it printable and constructible.

We started with the Blender starter cube, and then made guesses as to how large the holes should be, how large the tubes should be, and how we'd insert the diffusers. We wanted to create something that could be assembled and reasonably disassembled, so that it was easier to insert or change the light strips. At the same time we wanted something sturdy enough to "pass the shake test", where we pick the cube up and shake it. If it fell apart, it wasn't robust enough; we didn't want someone bumping into it and it falling apart.

We started by measuring the exact width of the LED light strip, and then making tubes that were just a hair wider. We knew we needed to thread them through somehow, so we made holes in the corners for feeding. Before going any further, we went ahead and printed the bottom ring and the four pillars for scale. (The top ring would need to be printed separately because it's essentially all overhangs.) This scale model was super small, but it gave us something both to conceptualize size with and to talk through design ideas without having to draw or model an idea. We could pick up, rotate, and point to the model. (I think this speaks to how important prototyping is, regardless of your field; it's so fun to see software principles like 'thin vertical slice' and 'iteration' be relevant in other fields.)


Before I go further, it'd be good to define some terms, both from 3d printing and part names we came up with.

3d Printing Terms

Skirt: A small square outline printed around your print space.

Brim: When printing something tall and thin, you usually want a brim: a thin layer of plastic around the base of your print. It gives you more surface area so the vertical print doesn't shift or fall over during printing. You just rip it off the bottom of your piece after the print is done.

Stringing: This comes in a light form that looks like fuzz or small hairs, or in a heavy form of full filament-width strings where the plastic didn't stick to the base or the rest of your model.

Overhang: When the printer is printing over nothing. You can't print on thin air! If you try, you'll get extreme stringing, which at its worst means a ball of plastic yarn. I find these get worse if you make a turn while in mid-air.

Supports: Temporary parts of the print that hold up overhangs so you don't get stringing. You rip them out at the end.

Bridging: You can print over surprisingly large gaps if both ends of the gap have something to connect to, like building a bridge from one side to the other. Bridges that are too long will lead to stringing or sagging plastic.

Slicing: Taking a 3d model and slicing it into printable layers. I used PrusaSlicer.

Beam Box Part Naming


Tubes

Horizontal tubes that the LEDs run through. These are black and prevent light from shining through.



Pillars

Vertical tubes. Slightly different from the horizontal tubes because they have clasps to connect to the rings, and they hold the L-shaped diffusers.



Diffusers

Thinner white walls that diffuse the light / glow.


L Diffuser

An L shaped diffuser that is used to provide light for the pillars.



Corners

Corners are where a lot of the complexity rests, as the diffusers and pillars snap into them.



Clasps

Pillars use these to clasp onto the corners for a firm fit. They're connected to the pillars, and the corners have a matching indent. Also pictured are the inset bumps on the pillar that sheath into matching slots in the bottom of the corners.


Insets / Slots

Pillars use insets that fit into slots in the corner. These act as a 'backboard' for the clasp: the clasp is forced into its slot by the inset pushing on the slot. (In the pic, the corner is upside down on the left with its slots; the pillar is on the right with its insets.)



Ring

Our reference for corners plus tubes. Essentially one square.



Base

We printed one ring with all 4 pillars built in so we didn't have to snap them in or worry about printing double-sided pillars.



Initial Scaling

Printing the initial scale made us realize we wanted to make the box a lot bigger. We also realized that we wanted the diffuser panes to 'slide' into the tubes. We initially considered more complex designs, but we found that sliding the plate down through a snug slot in the corner was enough to keep the diffuser in, even in an upside down 'shake test'. We did eventually decide to replace the slide-down with a 'bend in the middle and slide into corners' approach, as that kept the snug fit, made things more symmetric, and helped with printing.

After work on the tube and a first pass on diffuser inserts and corners, we scaled up to do a 'hot spot' test. This was done to see if the light was even on the diffuser or if it made concentrated patches of light and dark. We also wanted to test the tube walls and make sure light only came from the diffuser. Our initial test (the larger gray tube below) with gray plastic found the walls to be too thin. As seen in the tube, we planned to do an inner support wall for the LEDs to run along, but we removed it for later iterations (which helped a lot with hot spots).

We also tried scaling up a second time by 1.5x. The large white rectangle below is half a diffuser at 1.5x. At that scale I couldn't print a single tube all in one piece; it would be too big for the printer, and use a lot of plastic. So we kept the original scale. (Technically we chopped off 2cm because I had misread my rectangular print bed as square and used the larger side.)


Connecting the Pieces

The next chapter was spent homing in on how to snap all the printed pieces together, focused on the corners. We went through a lot of iterations figuring out how to connect the diffusers and pillars to the corners. This challenge essentially centered on a few core ideas.

Most of the time that I thought I needed a more complicated design than a friction fit, I was wrong. The challenge mainly revolved around accounting for printer tolerance to get something snug without needing major force to fit together. Initially I added clasps, and while they worked great in the one-piece corner test, we found they were too hard to attach when we had four corners to connect, all of which were applying forces to each other. The L diffusers slid into the corners, but they were too loose to stay in for a shake test. I could (and probably should) have gently tweaked the width of the inset/gap, but instead I added a small nub that let me 'click in' the part and keep it secure.

It was really fun to realize I could print less and less material while still testing connection points. As I tried more variants (each part in the pic below is unique and a different attempt) I realized I only needed enough to represent the connection point, which saved me both material and time. I came to think of it as doing 'unit tests' for printing.

Once I was happy with my unit tests, I printed the black and white piece, which was a single corner and half a tube. This let me test my latest version of the pillar and L diffuser connections, as well as do an at-scale light test, while still not printing a whole tube.


CodeMash Plaque

When we realized we'd have it done in time for CodeMash, I came up with a plaque that people could scan to read more about our project! The trickiest part was creating the QR code. I found an online generator for the image, and then imported it into Blender as an image plane. I then used a Displace modifier to push white up and black down. This was really tricky in that, to get the resolution right, I had to subdivide the plane a huge number of times, which made Blender slow. Once I had it though, I was able to use the Boolean modifier to 'stamp' it onto a plain cube, and then use 'Limited Dissolve' to reduce the mesh to probably 1/100th of the original vertices. Then it was just a simple two-color print: black background, white lettering and QR code. I also "ironed" the letters and QR code to make them smoother. I'm really happy with how well it came out and how easy it was to scan the code in my tests. (Hopefully it was easy for you too!)


Mistakes We Made

For the first scale ring, I didn't use enough supports, and so we had a number of imperfections and some serious internal stringing. For later prints, I used supports for the entire upper lip of the tubes.

We didn't use enough glue on the bed for that ring either, and so it had some external boils and imperfections. For future prints, I made sure to put down a layer of glue before any large print.

A couple of times I missed smaller parts (like the nub on the L diffusers, or the clasps) when printing. Throughout the iterations I got more careful about doing thorough reviews while slicing the model.

Where the pillars connect to the corners, we have an inset and sleeve pattern, which I like. However, the spacing wasn't symmetric, and because the connection sat on a corner, it was hard to get pieces to snap in, especially with three other pillars that needed to go in and not much flex. This proved to be a pain.

I used clasps to make the corner-pillar connection snug, hoping to pass a shake test without glue. This was overkill: while a single corner (unit test) was easy to connect, connecting all four corners (integration test) didn't work at all with the clasps; there just wasn't enough give. It turns out snapping off the clasps and just gluing the sleeves/insets was much easier.

As mentioned above, a sign of our immature design was how hard it was to get things to snap together (moderately hard) and how much harder it was to remove those pieces. For instance, getting the diffusers in was much easier than removing them when it came time to insert the lights / LED tubes.

Speaking of inserting lights, it was hard to feed all the wire in. Having wider tubes would have given us more thumb room for positioning.

What We Learned

Measure Once, print twice. For real. The material is cheap if you're doing small parts, and no amount of wrestling in blender will be as good as seeing the real thing. This feels similar to a TDD / agile approach where no amount of up front documentation will prepare you for reality. It's also much easier to reason about something you can hold in your hand.

Print Minimum. If you're going to follow the above rule (or even if you don't) you want to print the minimum you can for each iteration and test. This saves you money (usually measured in cents, not dollars), but more importantly it saves you time.

Blender CAD is pretty cool. I had fun going through the tutorials and even more trying to actually build something with my new knowledge. It helped move those skills into longer term memory / muscle memory, and I could feel my design abilities grow while using it. Being able to snap to specific points and build with mm precision is really neat, and I'm excited to see what I can apply this knowledge to next.

Mirror Modifiers are fun. Mirror (and boolean) modifiers are really powerful, and let me design one part of the cube, and then just mirror it to all the other parts. This let me worry about one correct design, and not how I'd duplicate it elsewhere. Mirror and Boolean (subtract one piece from another, used for clasps and nubs) are comparatively simple to learn, but can be combined in powerful ways.

Snug is great. We spent time over-designing connections before realizing how powerful friction alone can be when used correctly.

Watch out for overhangs. A lot of time was spent figuring out how to make a design printable, where overhangs didn't distort it. Using print supports helped a lot, but connections couldn't take the distortion of 'residue' that's left behind when removing supports. For those we really had to think about how we could use slopes or angles to remove overhangs. This was surprisingly fun, as it made the design feel like solving a puzzle.

Print Modular. Doing a 13 hour base print was terrifying. I was so concerned it'd go bad 9 hours in. If I were to do this again I'd figure out how to print the pillars separately (double sided) so that the largest print is just a single base.


Wildermyth Legacy



Heroes of the Yondering Lands

Wildermyth is a wonderful procedural narrative game where you recruit randomly generated characters and then watch them change over their lives. They start as farmers and grow into famed warriors. They get married, confront their pasts, can lose their limbs or life in battle, and go through major transformations. The game's most common praise is that players become more attached to these procedurally generated, storied heroes than they do to fully authored characters in linear games. Characters that live to the end of a campaign (essentially a complete story) go to live in your "player legacy" and can be re-recruited for future stories. This adds an interesting Pokemon aspect to the game: you're not just catching new heroes, you're watching them evolve in their skills, histories, and relationships.

I was really looking for a game that had good characters, a narrative focus, and RPG elements, and I happened on Wildermyth through a random glowing review in my RSS feed (another benefit I've reaped from going open source). I've had a blast playing it both alone and as a multiplayer game with friends. (With two copies of the game and remote play, we've done 4-player campaigns where two people host and connect, and the other two each remote play with one of the hosts; each person then shares controls/characters with just one other person. It's even better with more copies of the game, as Wildermyth lets you assign characters to specific players.)

As important events happen to your characters, entries are added to their history. You can edit those history entries, but they're limited to 700 characters, and you can't add arbitrary entries. So when fun stories 'emerged', or relationships developed outside the game's knowledge (say a redemption arc, or an ironic series of events), there wasn't a built-in way to document those wider stories.

After a session with friends, I'd find I'd want to review the characters, or that I had a question about one of the stories from the session. I wanted to spend more time in that world, without having to load the game up. I realized I wanted a 'companion app' that would let me view my collection of heroes on my phone, and that would let me create and store additional info about those characters. The game's save files are in JSON, and the legacy has a built-in 'character export' that lets you export character data as well as separate png files for both their body and head.

Design Decisions

I set out with several design goals.

As I've spoken about in my blog post on Quest Command, I generally hate doing UI work, as it always seems to be the slowest part of any project I work on. The best UI experience I've had is with HTML and CSS, and given the ubiquity of browsers, a web app made sense for covering both desktop and mobile. Since the app is client-side focused, I also figured I could easily host it on GitHub Pages to get something live quickly.

I've used the normal JS frameworks, used game frameworks that compiled to JS (most of the games on my main page are built in KorGE), used Kotlin Multiplatform to run Quest Command as JS in the browser, and even built sites (like this one and my main page) with no JS at all. Lately I've flirted a bit with KotlinJS, first as part of the Quest Command multiplatform effort, and then in a spike at work. Impressed by its JS interop, its general ecosystem, and simply by how unexpectedly often things "just worked", I decided to give it a go for a larger project like this.

A Mobile Legacy

Writing a UI-focused single page web app in Kotlin has been surprisingly exciting, and in the first 20 days I slammed down over 130 commits. KotlinJS, while still feeling cutting edge, is surprisingly well supported. Getting to put my business logic in Kotlin is great, but being able to define typings for external JS, including npm packages, was neat, a little tricky, and very satisfying. With just a little mapping code I was able to pull in a library for reading zip files (and later one for local storage; more on that later) and then be back in a 'statically typed' world, even while using these vanilla JS libraries.

The app loads a default character in order to give users a 'sample', but the magic comes from compiling save files, exported characters, and a 'strings file' from the game into a zip and locally uploading it. The app parses the zip, reads all the characters out, identifies their pictures, and interpolates the text of their history events, turning templated base strings into lines customized by the character's name, gender, hometown, etc.
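The interpolation step is conceptually simple. Here's a hypothetical sketch of it; the placeholder syntax and field names below are illustrative assumptions, not Wildermyth's actual template format:

```kotlin
// Made-up stand-in for the fields pulled from a character's save data.
data class HistoryContext(val name: String, val hometown: String, val pronoun: String)

// Replace each (assumed) placeholder in a templated base string with
// the character's own details.
fun interpolate(template: String, c: HistoryContext): String =
    template
        .replace("<name>", c.name)
        .replace("<hometown>", c.hometown)
        .replace("<they>", c.pronoun)

fun main() {
    val hero = HistoryContext("Ulrich", "Greenbriar", "he")
    println(interpolate("<name> left <hometown>, sure <they> would return.", hero))
    // Ulrich left Greenbriar, sure he would return.
}
```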

One of the things that I love seeing when I interview devs is a list of hobby projects. I've found devs with hobby projects are less likely to try to reinvent the wheel at work, because they've already done so at home. Using KotlinJS, but no framework, I've allowed myself to reinvent a number of wheels, just for the fun of doing it. After I had basic functionality down, I implemented routing from scratch. I hit a couple of snags due to bad initial designs, but bad design on my part was really the only hiccup. I also created a fairly robust search that looks at character name, character aspects, personality, and class level. It took me an hour to implement; everything just worked. It's such a blast when each new feature just kind of flows out with hardly any resistance.
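The core of that search is just a filter over a few fields. A minimal sketch of the idea follows; the field names (aspects, personality, classLevel) are assumptions based on this post, not the app's actual data model:

```kotlin
// Assumed shape of a legacy character for illustration purposes.
data class LegacyCharacter(
    val name: String,
    val aspects: List<String>,
    val personality: List<String>,
    val classLevel: String,
)

// Case-insensitive match of the search term against every searchable field.
fun search(all: List<LegacyCharacter>, term: String): List<LegacyCharacter> {
    val t = term.lowercase()
    return all.filter { c ->
        c.name.lowercase().contains(t) ||
            c.aspects.any { it.lowercase().contains(t) } ||
            c.personality.any { it.lowercase().contains(t) } ||
            c.classLevel.lowercase().contains(t)
    }
}
```

With a statically typed model already parsed out of the zip, wiring this to a search box is mostly a matter of re-rendering the filtered list on each keystroke.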

The largest challenge I had was with local storage. In order to not require the user to re-import the zip on every page refresh, I wrote everything (including base64 encoded images!) to JSON and stored it in the browser's localStorage. This worked great, was synchronous, loaded quickly, and had an easy interface. It also had a 5MB limit, which I swiftly ran into with all the character images. In order to move forward, I dropped my re-import constraint and continued building other things out. For a faster test loop, I added a zip file to the project resources (and git-ignored it) and then told the website to grab that file and load it if it existed. This let me keep testing the full zip even without having local storage working.

I spent a good amount of time wrestling with whether I should scrap the HTML approach and make an Android app or something. Not being able to load files or use local storage was a bear. I couldn't auto-load a zip, and when I tried to let users enter a link to a zip (through Google Drive or something) and then store that link in local storage, I was (rightfully) blocked for making a CORS request. (Letting users load arbitrary links is a security nightmare.) Googling around for local storage limits and workarounds eventually led me to indexedDB, which I discovered is an async JS database built into modern browsers. And, to my surprise, it doesn't really have a size limit. Unfortunately, KotlinJS's standard library doesn't yet have support for indexedDB, so I couldn't natively call it.

Fortunately, KotlinJS has a neat feature where you can make calls to untyped, vanilla JS, and even wrap those calls in types. This is what I used to expose JS's Object.keys function:

object JsonObject {
    fun keys(obj: Any): List<String> {
        val raw = js("Object.keys(obj)") as Array<*>
        return raw.map { it as String }
    }
}


Unfortunately again, the more complicated, async nature of indexedDB made it too tricky to use the raw JS bridge. Instead I tried a couple npm packages and settled on the second one, localForage. Creating the mappings was really simple, and I was able to essentially persist my whole 'in memory store' to one key in indexedDB, then read the whole thing back out on page load. It's simple, was easy to implement and reason about, and so far loads really quickly. I'm even able to persist search criteria, so reloading also restores your most recent search (and the page uses anchors to scroll to the card you were looking at, or show the details page, so you can even bookmark your characters). Here's the external declaration I used:



external object LocalForage {
    fun setItem(key: String, value: Any): Promise<*>
    fun getItem(key: String): Promise<Any?>
}


There are a number of features I'd like to add, but I'm really proud of what I've been able to build so far. I'm kind of shocked and elated at how easy KotlinJS has been to work with. Part of me is nervous that I'm starting to focus too much on one language, but on the other hand, it's been so convenient that it's hard to imagine my next front end project not also using KotlinJS. It's also made me again wrestle with the usefulness of frameworks, but that thought would be better served by its own blog post.

Resume Feedback



Writing a Resume

As I've continued to update my resume over the years and as I've become more involved with interviewing developers for both my teams and other teams, I've become increasingly opinionated around what makes a "good" resume. I think a "good" resume is incredibly subjective and evolving, and I don't think I'm an expert by any means. That said, it seemed worth collecting my thoughts on what I think makes a resume effective.

Writing a resume is its own skillset and can be frustrating because of how much you can iterate on how little. Fortunately we get lots of practice as we update it over the years. I think it's worth spending that practice time being reflective and intentional. That investment not only creates a (hopefully) nice looking document, but also lets you think through elevator speeches, "a little about yourself" intros, and responses in interviews.


Who is your audience, how do they use resumes, and what do you want them to feel/think after looking at yours? It's worth thinking about these questions every time you work on your resume. A resume that makes you look great is nice, but the best resume gets you an interview because you look like you'll solve the potential employer's problem, whatever that is.

For me, a resume is something I'm immediately skeptical of: I expect to be lied to, and I'm looking for reasons to dismiss the candidate. There are 'triggers' I look for that switch me into 'rooting' for a candidate instead, until I find red flags. For an employer, a resume is a filtering tool to dismiss candidates before wasting time interviewing a bad fit. I'm looking for unsupported claims or lazy work masquerading as passion.

At this point I strongly believe every developer that's not already established in their career should have a website. A web page is so easy to create and host these days, and it gives you a portable resume and a place for self expression. If you don't have your own website (even if you're a predominantly back end developer), I question whether you're really "passionate" about development. You should put your resume on your website. This makes it easy for someone to always get the latest copy, and ensures you can grab a copy at a moment's notice.

On your website, you can 'frame' your resume with text around it and make it a focal point. Then you can link to your website from GitHub, LinkedIn, email, and any other 'funnel' that you have. Just like SEO and selling a product, you want channels that lead employers to click "buy" on you.

It used to be that 99% of the time a resume was printed before being read. These days it's probably 90% digital, but it's still good to make sure your resume is printable. That said, I like putting a QR code on my resume so if they do print it, they can scan the code to get back to my site. (You should also have your website written out on the resume).

I don't know anyone else that does it this way, but my resume is written in html. This gives me full control over exact display / placement, makes it easy to version control, helps me keep my web skills sharp, and is easy to convert into a pdf by 'printing to pdf' from chrome. It's also just a lot more pleasant than trying to format a word doc.

A resume is a time to be really anal and consider every word. Everything from layout to what story you're trying to tell is important, and you'll be fiddling with and incrementally improving it possibly over decades. (This is another great reason for version control. It's nice to compare to older versions and see your evolution).

Here's my resume for reference. I think my resume is currently a bit overcrowded, but otherwise hopefully it's an example of what I'm suggesting / what my opinion of a resume should be.


Resumes are generally looked at in batches. The resume has to make it through a recruiter and then often goes to a higher manager and tech lead. Lots of eyes are looking for a reason to not waste precious time interviewing or bringing an entire team in to interview a dud. Consequently, each pair of eyes is skimming the resume for green or red flags. If I see enough red flags I'm not even going to read the big chunks of text.

A resume that is multiple pages feels like an insult, as it's not the norm and indicates the candidate doesn't value my time. Large blocks of text are insulting in the same way, at a much lower level. A good candidate's resume makes it easy for me to evaluate them, just as a good employee is easy to manage. I want to make my boss's job easier, and that starts with a resume.

A resume that has a crazy font or layout tells me the person is not used to interviewing and/or is not professional; it's an easy reason to skip that person and focus on the others. A good resume stands out, but only slightly. It catches my attention because it feels clean and well designed. It feels readable and welcoming at a glance. Much of this has to do with proper spacing and is reflective of good web design. Sections should be skimmable and my eye should fall on the most important pieces, just like in a painting.

Generally I stay away from color in resumes because they're often printed. A nice, single accent color can highlight section headers and provide anchor points for the eye, but it should be previewed in grayscale to make sure it still looks ok when printed.

Common Flags

I read a resume like a newspaper; my eye skims all the sections, goes back up to the name (which I probably initially skipped), swims around for a github, and then often goes there before looking at the rest of the resume. A site or github tells me what they've done, and is more 'honest' than what they say they've done. After that I look at the most recent experience first and scan for accomplishments over participation, along with my list of green and red flags. At that point I start building my case / bias for why we should skip or interview. This is a gut reaction, and I don't think that's necessarily a bad thing. I should be strongly in favor of interviewing a candidate after reading the resume. If I'm apathetic towards them, it's not worth investing further. Reviewing a resume is a messy, subjective, bias filled process. After seeing so many resumes and noticing the correlations in the interview and hiring process, interviewers develop an intuition and a set of flags they look for.

If a candidate has a website and it looks well built, they have clear non-trivial hobby projects, and/or they have a github that's actually active, I get really excited because it's clear this developer is actually passionate about what they do, which is about the biggest green flag I can find in a resume. I'm also looking for "accomplishment" instead of "participation" language. I'm looking for a candidate whose experiences are filled with how they accomplished or delivered value instead of how they filled a seat or did exactly what was asked of them. This indicates that they're hungry to succeed, that they can lead and self start, and that they can communicate at a level above the weeds they're currently in.

On the other hand, there are a number of red flags that quickly bias me against a candidate. Any time I see that a developer has had a single-language career, or has spent five or more years in a single company (or worse, a single role), I assume that developer is stale, unmotivated, or a slow/unengaged learner. Exhaustive lists, whether of technologies known or experiences had, are also a turn off. They indicate to me either a lack of respect for my time or an attempt to bluff / overwhelm me. I'd rather see a much smaller list of what was accomplished over those experiences or built with those languages. Lists of technologies also don't indicate how strong the candidate is, and often in the interview I find very little knowledge of most of the languages listed. I'd much rather see a list of projects (and their languages) that I can verify myself. I also have a knee-jerk reaction to EE technologies like Spring being listed with great aplomb. Often it means the candidate has only used EE frameworks and blindly implements patterns instead of thinking through and understanding architecture. In general, anything that indicates big business bureaucracy often means the candidate is less agile, less of a self-starter, and less able to think outside rigid instructions.

Content Tips

Can you reduce your opening paragraph to a tag line? Can you reduce your experience bullet points from two sentences to one? If need be, you can (and should) increase the font size to something that's nicely readable. Fewer words are better on a resume, especially if you can distill the essence of what you want to communicate. When you've done a good job, you can lift phrases straight from your resume to respond to questions in interviews, and if you've spent that amount of time, you'll have those phrases memorized.

In the pdf version, anything that can be linked to should be a hyperlink so someone can easily verify your claim. Your website, github, email, and each specific project should link to its web page. Given the space, you can also spell out the address for your projects in the printed version, but so long as the hyperlink prints normally, it should be enough to make the existing text link properly. This lets someone who is interested verify that your projects are solidly done.

Wherever possible replace "I did" or "I completed" with "I accomplished" language. "Participated in Hackathon" tells me you showed up, but I don't know if you were a help to your team. "Led team in Hackathon" or "Won Best in Show at Hackathon" or "Completed playable game at Hackathon" tells me that you weren't just showing up, but also pushing things forward. Even better if you can link to the project, and better yet if they can see it running / play it. Employers don't want someone who shows up to work on time, leaves at the end of the day, and was a warm body in between; they want someone who will push to get things done without being hand-held. Any opportunity you have to show you're that latter person will make a potential employer more interested in you. I don't care that you worked at a place; I care that you accomplished things for that place, because I care that you'll accomplish things for me.

This is something I do bullet point by bullet point when I'm updating my resume. I look at each line and make sure it's stating what I accomplished by doing something. This says that I not only learned the tech or did what I was hired to do, but also how I exceeded expectations in a way that mattered to my employer.

Unix Tools


I want to continue to grow and be more familiar with the unix / linux toolset. I think there is great value in learning the tools that exist on any linux machine, but I think it's also worth exploring new tools. Here are some of the newer tools that I'd like to use more often.

Tool      Alias  Purpose
bat              Nicer file reading (cat)
bottom    btm    Graphical system usage (top)
duf              Graphical df
dust             Graphical du (disk usage)
fd-find   fd     Fast and intuitive find
glow             Markdown rendering in the terminal
httpie    http   Easier curl
jq               Parse JSON
jc               Convert many tools' output to JSON for easier parsing
lsd              Nicer ls
mcfly            Bash history search; I'm having trouble getting it working on WSL
mosh             Resilient SSH
navi             Cheatsheet tool
procs            Graphical ps alternative

Quest Command


Visual Log

Having started programming with Visual Basic for Oblivion and Skyrim, learning Java in order to mod an FTL save manager was a revelation. I remember reading Effective Java in those early years and learning the basics of more advanced programming. And so, while today I find Java nearly unbearable to write, it will always hold a special place in my heart, and for a long time it was my favorite language.

Programming started as a hobby for me, and it's continued to be that ever since. After messing with an existing save editor, essentially my first Java project was to spend six months, at least 20 hours a week, creating a Java spaceship game (like FTL). It was terrible, and I committed every programming sin you could, and then felt the great pain of living with those errors (which is why Clean Code was such a revelation). In my early years I repeated a cycle of working on a project as long as I could, until its complexity and scope grew to the point where continuing wasn't feasible: the knowledge domain and many layers of coupling required too much research to get anything done. In effect, to work on any one part of the app, I'd have to remember how every part of the app worked. It was about this time I was considering taking yet another stab at my Stars Between game, and was planning to try an event based approach to see if I could minimize coupling.

Work allowed me to attend CodeMash, and on a whim I signed up for a 'pre-compiler' (a longer session that occurs the first day of the conference) on a language called Kotlin. I believe it was a four hour session that was followed by another four hour session in the afternoon. After the first session I dropped all the other talks I had planned to attend and spent the full day learning about Kotlin. It seemed like a revelation to me. (Ironically, I believe I also attended a talk on Elixir that week, and was unimpressed.) To me, Kotlin is functional, except when it's convenient to be OOP; it's statically typed without being verbose or slow to write; and it's highly discoverable (I can follow methods and dig into source without having to abuse 'find'). It was probably within a week that I decided all my hobby projects going forward would use Kotlin and that I'd convert anything I was currently working on.

I quickly worked on converting my new, more event based version of Starship over to Kotlin (and then gave up, frustrated with the UI aspects). I also worked on creating my Palette Cycle app for Android. My third project was the culmination of several evolutions I was going through. I wanted to create a game as open and world-interactive as games like Runescape, Skyrim, and Breath of the Wild, but I knew I'd never accomplish something with such a large scope as a single dev working as a hobby. I also knew that I continued to run into the complexity issue (but that an event based system seemed promising). Finally, I had tried enough front end systems (HTML, Angular, React, AWT, Swing, Tornado FX, etc) to know that I just didn't enjoy trying to make GUIs, and that's where my efforts almost always ground to a halt. I thought about the evolution from Morrowind to its sequel's sequel, Skyrim. The latter game took many times the budget to make, but wasn't hugely more interactive than the earlier game; instead it seemed much of the budget went into graphical fidelity (coding, modeling, textures, voice actors, etc). This led me to the third conclusion that formed my goal: what if I didn't do a GUI at all?

Quest Command was born of those three goals: create an open, interactive world, use Kotlin and an event based system, and do so as a text only game. Quest Command has been my most active project since then; over the years I've produced over 800 commits on master and constantly refactored both small and large chunks of the game. Its scope grows as I feel like working on something interesting, and yet the event based system has worked well to keep me from ever falling off due to complexity. (I think being a more mature developer has helped as well.) On my work computer I keep the repo bookmarked, as it constantly provides references for how I've solved a problem before, or examples I can show a dev of how to approach a problem.

Quest Command and the Workplace

Creating a long running, larger project has been instrumental in my work career. First, it's allowed me to explore concepts in a safe space where I'm not wasting company time. I often see less mature (though not necessarily less experienced) developers try out their latest interest at work, not because it's the right fit for the task, but because they're curious and aren't doing that exploration in their free time. I also see developers unable to quickly discern a concept because they haven't practiced that exploration. Having a long running, non-trivial project lets you explore concepts and actually feel their effects in a way a trivial kata doesn't. It also makes lessons more real, because you have to live with their consequences. I think this alone has driven me to be a more mature developer and allowed me to address leadership concerns at work instead of being stuck at the technical level.

Having a long running project is a great way to generate interesting talks. In the past I've talked about Quest Command itself as well as used it as an example to talk about testing with Kotlin. Having talks that are portable across companies (that you don't lose when you change jobs) has allowed me to build a backlog, so I can jump into that space when opportunities arise without needing to re-invent the wheel. Presentations that focus on something I've spent time on and care about make it easy to be engaged while presenting and knowledgeable when asked questions.

Quest Command has been a blast to continue to tinker with. I highly recommend that a developer find something they really care about and stick with it over a long period of time. Doing so builds discernment, architectural intuition, and a storehouse of tools to bring to bear at the workplace. It grows both your skills and your career, while being enjoyable and satisfying on its own merits.

The Evolution of Quest Command

Quest Command has evolved significantly over time. One of the lessons I learned from one of the technically stronger devs I've ever worked with was a lack of fear to do large scale refactors when appropriate (which in itself requires discernment). In Quest Command I've done a number of larger scale refactors.

The first large one was to refactor my many types of 'targets' (things you could interact with in the game) into one large Target object. Initially I had separate Activators (things like levers or grain chutes), Actors (NPCs), and Items (things you could hold). As more and more of the behavior became shared (you can burn an NPC, or a pie, or a tree), it didn't make sense to use inheritance, composition with interfaces, etc. And so I consolidated all of them into a single Target (which I later renamed to Thing). This also led to a recursive idea: my map was a network of locations. A body became a network of locations, each location being a body part. Any Thing could have a body, and any body part could hold one or more Things. This makes things like saving confusing, but means I can reuse traversal and pathfinding for finding your way through the overworld, equipping a cloak, or climbing a large monster. Both Things and Locations become large objects, but can use composition to break out all their component pieces to stay manageable (and persistable).
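A minimal sketch of that recursive shape (hypothetical, heavily simplified names; the real classes carry far more behavior):

```kotlin
// A Location network can be the overworld, or the body of a Thing,
// so one traversal routine serves both.
class Location(val name: String) {
    val neighbors = mutableListOf<Location>()
    val things = mutableListOf<Thing>()
}

// Any Thing may itself contain a network of Locations (its body parts).
class Thing(val name: String, val body: List<Location> = emptyList())

// Breadth-first reachability works identically on a world map or a body.
fun reachable(start: Location): Set<Location> {
    val seen = mutableSetOf(start)
    val queue = ArrayDeque(listOf(start))
    while (queue.isNotEmpty()) {
        for (next in queue.removeFirst().neighbors) {
            if (seen.add(next)) queue.add(next)
        }
    }
    return seen
}
```

The payoff of the recursion is that "walk to town", "equip a cloak on the torso", and "climb the monster's arm" all reduce to the same graph operations.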

I also spent a couple weeks at one point refactoring out all the hard coded calls to the game's singular player, instead passing the player around in events. This opens the door for multiplayer: a server and client only need to pass text back and forth, and so long as we have a map of player id to Thing, there should be no other real work needed to suddenly have a server that allows multiple players to play in the same world. (Security and not abusing the system are entirely different matters. Without a queue and internal clock, a user could submit a long list of commands and essentially hijack the server / kill everyone in one command, but that would be a concern for making an actual MMO, not a localhost friendly coop match.)

Another large effort was spent ripping out the months of work I had done on a JSON 'game inquiry and rules' system that let me create dynamic interactions in JSON by letting the JSON 'ping' the game state and make different decisions depending on certain factors. I replaced this with a Kotlin DSL. This means that modding will be trickier (I'll need to take the Minecraft jar loading approach over just grabbing static JSON files), but it gave me a ton more power and convenience at the same time.
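For flavor, here's the general shape such a rules DSL might take. This is entirely hypothetical (the game's actual DSL and state types differ); it just illustrates replacing JSON condition/effect rules with builder lambdas:

```kotlin
// Hypothetical mini-DSL: a condition/effect pair replaces what the old JSON
// rules did by "pinging" game state and branching on the answer.
data class GameState(val hasTorch: Boolean)

class Interaction(
    val condition: (GameState) -> Boolean,
    val effect: (GameState) -> String
)

class InteractionBuilder {
    private var condition: (GameState) -> Boolean = { true }
    private var effect: (GameState) -> String = { "" }
    fun whenever(block: (GameState) -> Boolean) { condition = block }
    fun then(block: (GameState) -> String) { effect = block }
    fun build() = Interaction(condition, effect)
}

fun interaction(block: InteractionBuilder.() -> Unit) =
    InteractionBuilder().apply(block).build()

// Reads almost as declaratively as the JSON did, but with full language power.
val darkCave = interaction {
    whenever { it.hasTorch }
    then { "The torchlight reveals a hidden passage." }
}
```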

Finally, lately I've been working on removing the JVM specific code from the main app, so that I can wrap it with a multiplatform Korge 'terminal' and deploy the app to web and phones.

Palette Cycle


Mark's Art

As I mention in the Quest Command blog post, creating Palette Cycle was one of the first apps I ever built with Kotlin, and so far the only Android app that I've published. Before going any further, it's worth discussing the beautiful art behind this app. I spoke to it a bit in this presentation I gave at my workplace after building the first version of the app, but it's worth talking about here in a hopefully more focused fashion.

Palette Cycling Art

In the early 90s, before computer games could handle a lot of colors, let alone complex graphics, artists employed really clever tricks to create beauty in constrained digital environments. If you haven't seen it already, I highly recommend you watch the fascinating talk Mark Ferrari gives about that era and the unique art it created. This video blew my mind and sparked my fascination with palette cycling art.

Palette Cycling art works by treating images very differently than we think of them today. Instead of each pixel in an image pointing directly to a color, it points to a spot in a palette (think of a painter's palette). This spot in the palette in turn points to a color. This one level of abstraction is really powerful, because now we can cycle which color that spot in the palette points to, and thereby change the color of all the pixels that point to that part of the palette. By doing different kinds of cycles, we can create all sorts of interesting movement.
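The core trick fits in a few lines. Here's a simplified sketch of the indirection and a single cycle step (real scenes cycle multiple palette sub-ranges on timers, with blending between steps):

```kotlin
// Pixels hold palette indices, not colors; resolving a frame is one lookup
// per pixel.
fun render(pixels: IntArray, palette: IntArray): IntArray =
    IntArray(pixels.size) { palette[pixels[it]] }

// Rotate one slice of the palette a single step. Every pixel pointing into
// that slice appears to move, without touching the pixel data at all.
fun cycle(palette: IntArray, start: Int, end: Int): IntArray {
    val out = palette.copyOf()
    val last = out[end]
    for (i in end downTo start + 1) out[i] = out[i - 1]
    out[start] = last
    return out
}
```

Calling cycle repeatedly and re-rendering gives the illusion of flowing water or flickering fire from a single static image.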

Cycling Explanation

It's easier to understand by looking at it in motion, and quite frankly I find this art beautiful and highly recommend you check out the online demos here and here. It still blows my mind that you can have art look that good and in some ways surpass modern gifs (you're not limited to a tiny loop) with such clarity and grace.

The First Pass

Inspired by the art, and since the website's code was not obfuscated (merely minified), I set to work and created an Android app that lets you use the Living Worlds art as a live (moving) background for your phone. I did so by reading through the JavaScript and then replicating the logic in Kotlin in the Android environment. I made the same API calls that the website did, so I didn't distribute any of the copyrighted art, but instead consumed it like any browser would.

After deploying it to Google Play (for free), I got a warning from a fan of the original art telling me I should take the app down, thinking I had done something wrong. I explained that I hadn't, but since I followed Joseph Huckaby on Twitter (one of the original developers on Living Worlds, and the person who coded the website) I reached out to him to make sure. He kindly forwarded me to Ian Gilman, another of the original developers, who was now working on an Android app himself. It was a strange and sublime thing to interact with these developers who had gone before and so long ago made such a neat thing. They treated me like we were all 'just devs', and I was really encouraged.

Ian agreed that I hadn't done anything to violate copyright as long as I didn't sell the app, and even asked for some help on a rendering issue he was having with the new app. That's a feather I think I'll always keep in my cap, even if I wasn't able to help him a ton, since he was using JS and I had used Kotlin. When he released the Living Worlds Android app, I pointed my app's description at theirs in hopes it could drive people to buy the app. In my opinion, as someone who doesn't generally spend money on apps, it's well worth it to support the artists and devs. (It's still being updated today!)

Later Updates

As I've talked about in going open source, I'm trying to decrease my dependence on Google. When Google Play sent me an alert that they would remove my app from the store if I didn't update my privacy policy per their new terms, I was miffed. When they said you were now required to provide a privacy policy even if you didn't use / track any data, I was annoyed. When they said you couldn't just fill out a form on Google, but had to self-host the policy and provide a URL to it, I was fed up. I decided to move my app to F-Droid (an open source Google Play alternative) and let the Google Play version slowly die as Google killed it off. The PR/Issue is still open right now. It's a slower, more manual process, but I also feel like more of a human going through it.

This move required me to rebuild the app, which led to a fairly large refactor and update as I upgraded dependencies after a number of years unused. It also led to me implementing new features and generally tweaking things. My app lacks a significant amount of art and detail that was added for the Living Worlds app, as that art isn't available to just pull from a website. That said, the less detailed art on the website has many base scenes that aren't in the app, and so my app has more variety despite having less detail. It also has a good deal more customization, as you can force a time of day, use a parallax effect on the background, and generally have more control over what things look like.

Server NAS



My main computer clocks in with something like 6 hard drives totalling over 11 terabytes of space. That sounds like quite the flex until you realize that 90% of that is 5 year old spinning disks that have moved from computer to computer like a hermit crab changing its shell. Less than 3 of those terabytes are SSD.

I've done some level of backups within that mesh of drives, backing things up from drive A to drive R, etc. But all of those backups have been manual, and all within the same tower. One lightning bolt could take it all out.

For a long time I've known that I need to do some kind of backup for all those videos that I filmed in high school when I was learning to edit, or all the family photos. Growing up with terrible internet, I've become something of a digital hoarder. Never knowing when internet may go down for days developed within me a real distrust of any data not on local disk. I've never really considered backup services for that reason as well as for the cost, and as I discussed in going open source I'm not keen to pump my personal files up to an off premise service.

So when I finally decided on a backup solution, I came up with two main goals. First, the backup must be local, and accessible over LAN / WiFi. Second, the backup must be durable and resistant to data corruption. In my mind, this meant a RAID (Redundant Array of Inexpensive Disks) NAS (Network Attached Storage). RAID means storing data redundantly across two or more hard drives, so that even if the data on one disk is lost or corrupted, it can still be read from another; a single drive failure doesn't lose data.

I asked around at my company and got a number of recommendations for different ways to do RAID as well as good NAS units to buy. I was surprised at how expensive most NAS options were. Most focused on the streaming aspect and had large amounts of RAM; it seems the main use case is music and video streaming. This use case is adjacent to mine, where most media I consume is purchased and local to whatever device I'm using; I was looking more for a library to pull copies of media from, and a second resting place for documents.

And then a server admin coworker gave me a really strange suggestion: why not buy a whole server?

I had never considered buying an actual server, but my coworker made a great argument: they're dirt cheap, business grade, and much more powerful than a comparable NAS. The server I bought was slightly more expensive than the NAS I was considering, but it had significantly more hard drive space, something like 8x the RAM, and a much better processor. For most consumers it would have been way overkill and would require a server admin, but for me it would prove to be an ideal learning experience. As someone who has spent much of my career pushing software to "other people's servers", having my own server meant I could dip my toes into the server admin role without risking bringing real production down.

So far I've stumbled through setting up RAID for the drives and installed a GUI-less Ubuntu, which means the only way to interact with the server is through a remote terminal using shell commands. It's been a learning experience, and it's been great to shell in from my Windows desktop or Linux laptop to tweak something. I've also set up mounts so that both computers can browse files from the server, and copied over all the files that should be backed up. I've installed (and then disabled) Pi-hole and set up a Minecraft server that runs all my custom mods, running as a service with a huge amount of RAM. (As a side note, I created a utility app inspired by git that looks at the updated date of every file in a Minecraft world and saves only those that changed to different 'version' folders. It lets me take regular backups at 1/10th the normal size, but then roll back to whatever version if something goes wrong. It's in a private repo at the moment, but if I break it out I'll update here with a link.) I also set up a git server for private repos that I wouldn't want on github (like for pass).

Finally, I recently created a Basic Backup app that lets me recursively grab all files in a folder (and its children) and copy any new or updated files over to a destination, replacing the old version. It's not sophisticated at all, but it works over a network mounted folder and lets me idempotently back up any 'new pictures' etc with a single command. It has no versioning, but as anything that logically should be versioned lives in various git repos, it fits my use case nicely.
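The core of such a tool is small. Here's a sketch of the idea in Kotlin (not the actual Basic Backup code): walk the source tree and copy any file that's missing from the destination or newer than the copy there, comparing modification times.

```kotlin
import java.nio.file.Files
import java.nio.file.Path
import kotlin.io.path.*

// Sketch: mirror new/updated regular files from source into dest, preserving
// the relative directory structure. Returns how many files were copied.
fun backup(source: Path, dest: Path): Int {
    var copied = 0
    Files.walk(source).use { stream ->
        stream.filter { it.isRegularFile() }.forEach { file ->
            val target = dest.resolve(source.relativize(file))
            // Copy if the destination is missing or older than the source.
            if (!target.exists() ||
                file.getLastModifiedTime() > target.getLastModifiedTime()
            ) {
                target.parent?.createDirectories()
                file.copyTo(target, overwrite = true)
                copied++
            }
        }
    }
    return copied
}
```

Because unchanged files are skipped, running it twice in a row is idempotent: the second pass copies nothing.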

While not at all what I expected would be the outcome when I started looking into backup solutions, I couldn't be more pleased with running a local server. It's way overkill for my use case, but it's been a great learning experience and seems like a shell that I could continue growing into for years to come.

Markdown with Kotlin


In an earlier blog post about building this blog, I referenced a joke about how people build their own blog frameworks only to blog about that build, and then stop blogging. In keeping with that, it seemed time to write a second small post about how I built this blog framework. In this case I threw out the original version and rebuilt it using the Simple Site app I talked about building in my no js post. Interestingly, I've recently been seeing this build-your-own-tools attitude become more widespread.

The initial idea was to apply the same setup to blogging as to the site: use an alias to run a jar that watches your markdown files for changes, packs them up into HTML, and generates a table of contents.
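The post doesn't show the pipeline itself, but the table-of-contents step can be sketched as scanning each markdown document for its top-level heading and emitting anchor links. This is illustrative only; the real jar delegates the markdown-to-HTML conversion to a library:

```kotlin
// Sketch: derive a table of contents from the first "# " heading
// of each markdown document, linking to slugified anchors.
fun tableOfContents(markdownDocs: List<String>): String =
    markdownDocs.mapNotNull { doc ->
        doc.lineSequence().firstOrNull { it.startsWith("# ") }?.removePrefix("# ")
    }.joinToString("\n") { title ->
        val slug = title.lowercase().replace(Regex("[^a-z0-9]+"), "-").trim('-')
        """<a href="#$slug">$title</a>"""
    }
```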

I later realized that I needed to change the library in order to support tables (and have a bit more control over how I wanted to transform the markdown).

There was probably more here, but it took me so long to get around to writing this that I think I've forgotten most of the details!

No JS Website


Lighthouse Report

Sometimes developers find a hammer, and all the world is a nail. When I built my first website, it was all html and plain javascript. When a mentor introduced me to Angular (1.2 I think), it was a revelation. Templating and organizing my website made life so much easier. Since then I've used JQuery, JSPs, Angular 2 (and later), React, Elixir Live View, etc.

Every site I've built for a company has been a single page web app and used a large framework like Angular or React. It's increasingly felt like a hammer used to apply a stamp. Most of the time the single page app could have just been a single page website, but both business and developers are used to a large framework, and so that's what's used, even when it's completely overkill.

A couple years ago, when Stardew Valley updated and I wanted to update my Stardew Bin Tracker (source) site that tracks how much you've shipped, I decided to strip out the angular and try to make it as simple as possible. This was largely in response to how frustrated I was at work with how slow we were moving due to all of the framework baggage we were carrying. At the time, we were building a site in react, and it seemed like to add a single feature to a page you needed to touch twenty files that were all just shuffling data around. There was so much complexity and indirection it seemed ridiculous. To my joy and surprise, I was able to rebuild the tracker site with just JQuery and a library or two. It seemed so much easier and more maintainable (at least in theory).

When applying for my current position, I updated my website from Angular 5 to Angular 8. Since then, and related to my post on going more open source, I've been using Brave and turning javascript off on websites by default. When I decided it was time to modernize my website yet again, I decided to forgo Angular completely and challenge myself to rebuild my website without using javascript.

The two main problems I had were how I could use templates so that I didn't have to repeat myself, and how I could loop and if over those templates. If I had that ability, I felt like I didn't need javascript for anything else. I don't need an app, it's all static content, and I can do basic movement and responsiveness through plain old css.

After wrestling with a number of options, I created Simple Site. I use Kotlin to watch a couple of input folders (one for data and one for html). I then do some very basic html parsing to support simplistic extensions to html. In effect, the app grabs my html, and then uses the templates to fill in loops, if statements, values, and includes for other html files. It then concats the results into a single html file and a single css file. Because it's static, I can then run a standard browser watch to display changes. Technically I have two different programs watching folders, but it still live reloads faster than an Angular app. Simple Site can be built as a jar, and I created an alias so that I can run the jar from any project and have it watch and recompile those files.
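To give a feel for the kind of extension Simple Site layers on top of html (the real syntax may differ; this is a sketch), the value-substitution pass can be as small as a regex replace over a data map. Loops and includes build on the same idea:

```kotlin
// Sketch: fill {{name}} placeholders in an html template from a data map,
// the simplest of the template features (values); unknown keys are left alone.
fun fillTemplate(html: String, data: Map<String, String>): String =
    Regex("""\{\{(\w+)\}\}""").replace(html) { match ->
        data[match.groupValues[1]] ?: match.value
    }
```

Because everything resolves to plain html at build time, nothing needs to run in the browser.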

This kept mental overhead really low, as I was just writing html with very little extension. It's really lightweight, and still allows me to stay organized and create sections that include other sections or loop over components. Minus the interactivity of javascript, I don't feel like I lack anything, as my use case was already so static. On the deployment side, my app went from multiple megabytes to just a few kilobytes, as I have no libraries or extensive dependencies fetched from NPM. The site is blazing fast and in my opinion looks and works just as nicely. So other than the challenge of not having javascript, I hit basically no pain points, and find things better by every metric.

I've been using it for a while now to update my site, and have actually significantly expanded the site in the new no-js world, including a section on games built in KorGE (each linking to a playable version that does use javascript, via Kotlin Multiplatform). I've cleaned up some of the CSS and modernized the look of my mods. There is still work to be done, but I've found that removing so many layers of abstraction has given me better visibility into the actual html and css, and helped me improve those skills.

There is a certain kind of bliss that comes from being frustrated with something for years, coming up with and iterating on a solution, and eventually building a tool that you like and that relieves that frustration. My tool may never overtake the big name frameworks, but it works perfectly for my use case, and that's enough for me.

Github Actions


Github Actions

At my first company, we used Jenkins for all our CI/CD (continuous integration and continuous deployment, though really we weren't continuously doing either of those). Jenkins seems like the standard go-to for larger, older companies. While I found it serviceable at the time, I've grown to dislike it looking back: mostly because of its association with bad practices at the companies where I've used it, and partly because it's so GUI focused.

I've used a number of other CI/CD frameworks since then. When trying to get a job at my second company, I set up Travis to do CI/CD on my personal website (which I had just redone). I found it so much simpler than Jenkins, even if it was a bit less discoverable. I had something up and running (for free) very quickly, and then essentially never had to change it again. At my current company, we're using Concourse, and while it has a steep learning curve and can be really complex for certain things, I very much appreciate its unix-philosophy-inspired design (everything is CLI based, infra as code, immutable, etc).

I had previously toyed with Github Actions when it came out, on the Smart Columbus project, but didn't dig too deep as I just didn't have a need (my CI/CD projects were running on Travis without issue). However, when Travis decided to finally shut down in favor of (or maybe the opposite, I'm still confused), I figured I'd just port to Github Actions instead.

Github Actions seems incredibly easy to use. Setting a flow up for Quest Command was painless, fairly quick, and logical. While I miss some of the unixy design of Concourse, and haven't created my own action yet, cannibalizing examples and tweaking them to meet my needs has been really easy and straightforward.
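For reference, a minimal workflow for a Gradle-based Kotlin project like Quest Command might look something like the following. This is a sketch, not the project's actual workflow file, and the Java version is an assumption:

```yaml
# .github/workflows/build.yml (illustrative)
name: Build
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Build and test
        run: ./gradlew build
```

Because the file lives in the repo, changes to the pipeline are reviewed and versioned like any other code.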

The integration with the rest of github makes Github Actions really convenient. It's so nice to create actions as part of your code base, to be able to easily leverage crowd sourced actions, and to be able to see the results in the same site, instead of having to coordinate with a second site for CI/CD. Secrets management works really nicely, and as an added bonus, everything runs really quickly. In Travis I'd often have to wait in a queue for my build to run, whereas with Github Actions I've not seen that (yet).

It's so cool to see github create and support a feature like this.

Going Open Source


really bad brand work

After the Capitol riot in the US, I was somewhat shocked to see the response by big tech. From what I could tell, Parler (a startup competitor to Twitter) was no more complicit in enabling the planning of the riot than Facebook or Twitter. Yet somehow they were completely de-platformed, while the giants in that space were unaffected. Their servers were taken down, their security provider switched their services off, their database was hacked (due to that lack of security and some bad code), and user information, both of those who participated in the riot and of those who had nothing to do with it, was leaked online and shared in its totality.

I've always thought of Google and Facebook like a robot in an Asimov book. I've not been concerned about them collecting my info any more than I'm nervous that my bathroom walls see me get out of the shower. I'm such a small statistical speck in their eyes, I've never worried about them caring about me other than to target their ads at me (ads that I mostly block anyway). I still believe this is true today.

However, that's not how the customers of Parler were treated, nor the employees. That bulk data suddenly became very specific and very targeted, and the companies that are the backbone of the free internet had no qualms annihilating their competition (and using a hypocritical political stance to refuse service to a company doing the same thing they were doing). It's scary that the US is becoming so politically charged, that even discussing something like this can be shut down for being conspiratorial or political.

I don't see Google or Facebook 'coming for me'. I don't see them on that trajectory. But their responses made me realize how easy it would be for them to change from 'benevolent ad producer' to 'totalitarian regime' or something else scary and bad. I'm sure they already have enough information to deduce today where I'll be and what I'll be doing in 10 years, but for the first time I realized maybe in 10 years I won't want them to know.

To that end, I made a resolution: become less reliant on big tech, become diversified in what service providers I use, become more privacy minded, and heavily prefer open source software.

I made a list of all of the Google products I use (and what wonderful, well crafted, convenient products they are!). I looked at what other big tech I was using, as well as other apps that are not open source. I then methodically, ploddingly went through and tried open source alternatives. I've done this over a number of months, little by little. Some apps and services I deemed too important or convenient to replace with a much inferior open source app, but in most cases I've been really happy with the alternatives I've found.

Today, everything takes a little longer, breaks a little easier, and requires more attention. But I find that my digital world is much more tailored to my preferences, many apps I use make so much more sense to me, I feel in control instead of inundated by influencers, and I've learned about a lot of software. Should privacy not be a concern at all, I'd still be really glad I started being more intentional about the apps I use.

Here are a number of apps I've replaced so far, and the app I picked. Every app has a ton of alternatives. These are just the ones I've landed on. I'll probably go back and update this in the future as I make more choices, etc.

Google App | Alternative | Thoughts
Calendar | Thunderbird | It's surprisingly hard to find a CalDav calendar app that works across OS types (Android, Linux, Windows) and doesn't require a sign in. Thunderbird works on both Linux and Windows and syncs via CalDav to my local server, so my Android calendar app can sync up.
Chrome | Brave | Basically Chrome, but with more security and privacy, and less trackability
Chrome Passwords | Pass | While I still use Brave to remember passwords, I also use Pass, both personally and at my job. I love its unixy approach to password management
Contacts | Contacts | Open source equivalent
Files (Android) | Material Files | Works for SMB shares with my server as well
Gimp | Gimp | Not new, but still my go-to for picture editing
Gmail | TutaMail | A privacy focused email provider. Not as convenient as Gmail, but I appreciate their focus on privacy, and worst case I'm again splitting my personal information out from all being under one umbrella
Google Play | FDroid | Does the job well enough for the apps that it has
Intuit | US Taxes | Really nice open source tax filing software
Keep | Joplin | I now use Joplin both personally and professionally. It's a wonder for taking notes and staying organized
Maps | OSM | Open Street Maps is good, but it just can't compete (yet) with Google Maps
Multifactor Auth | Aegis | Honestly a much nicer user experience than Google Auth. Grouping keys by usage (work, finance) makes it easier to find codes, you can back up your vault (encrypted), and the codes are encrypted at rest and require a password or fingerprint to unlock
News | Liferea / Feeder | It is so refreshing to get to pick my news sources instead of constantly being 'influenced'. I was so sick of hearing about Covid, but Google wanted me to see it, and so I couldn't turn off covid articles or the big covid banner. With these open source apps I can pick my sources and filter out words in the topic header. It's much more granular and effective than Google News' "I'm not interested"
Office | Libre Office | I don't think it's great, but I've come to basically hate Office anyway, so this is passable. I find I use VSCode almost anywhere I used to use Word
Phone | Phone | Open source equivalent
Reddit | Libreddit | Less tracking and annoying behavior like pop ups
Search | SwissCows | A search engine that, while definitely inferior to Google, is more privacy focused and also filters out some level of crap. Even if they did keep all my traffic, I like that my browser and my search engine are built by different companies
SMS | SMS | Open source equivalent
Twitter | Nitter | Privacy, no need to log in, no need for JS, no annoying pop ups to use the app
Twitter | Nitter Redirect | Browser extension that redirects twitter links to Nitter links
Termux | Termux | Not really an alternative, but Termux has been really fun to play with on my Android phone

A Short Return to Modding



The Glory Days

I read 100 pages or so of C for Dummies when I was 12. I thought that programming seemed pretty dry and tedious.

It wasn't until highschool that I started programming a variant of visual basic in order to tweak my game, and other people's mods, so that I could make Oblivion more the way I wanted it to be. That was the first time that I felt that burning need to solve some programming problem, that same feeling I feel almost every day between work and hobby programming. (It's like having something on the tip of your tongue and also not feeling like you can change the subject until you solve the problem / understand the answer. It also feels like having boundless energy and the ability to manifest your will, ex nihilo).

Oblivion tweaking became Oblivion modding, and soon I was waist deep in grand ideas for adventures and epic mods. Of course, I had no idea what I was doing and mostly made spaghetti messes and unreleased experiments. It wasn't until the release of Skyrim in 2011 that I became completely, and more realistically, focused on modding. This peaked the summer of 2013, when I realized I may want to do something like this for a career. I decided to take two weeks and do nothing but make mods. I figured that if it was a fad, I'd get bored. Instead, I left the house once the entire time, and the two weeks felt like a near constant rush of adrenaline. I created Alternate Actors, my most downloaded mod, and I had a blast. In the glory days I made over 30 mods, and they were downloaded over 700k times. (More a testament to the environment and popularity of Skyrim than to my specific skills).

It was a wonderful experience to be part of, and it galvanized my desire to write code for a living. Looking back, I'm still proud of what I wrote, even if it's an obvious mess. It's clear I was really enjoying what I did (and what a gift, I still enjoy coding today, as much as or more than in that first experience!). I wrote my code in notepad++, had to run a command to manually compile it, and had to wait 1 to 3 minutes to start the game up and test. I generally had no concept of objects or small file sizes. And yet I took time to carefully document my code, and I clearly was having fun when I wrote all of my debug statements in Spanish!

The Departure

Starting a career took me away from modding for two reasons. First, I had less time to mod. Second, with my growing skills I could build actual games, tools, and other full projects, instead of sticking to tweaking someone else's body of work. To this day I probably get 1 to 2 requests to update, port, or build a mod, nearly a decade after those glory days.

Dipping in to Say Hello

When I caught Covid, I turned to Skyrim almost as comfort food. I decided to start a new save with modern mods and more clean modding practices. New tools like Vortex, and just generally understanding computers and modding principles, made the experience so much nicer. When I decided I wanted to be able to auto sort my mods, I created an Auto Sort Mod (source).

Coming back I was both hit with nostalgia and blown away by how bad my old workflow was. Papyrus doesn't even have an implementation of maps, and is really not general purpose. Fortunately I found an SKSE plugin that essentially lets scripts call out to a DLL and get a reference to an object and get and set values on that object.

Creating that mod and digging through the source I could find for any of my old Skyrim Mods reminded me of how grateful I am that I was able to spend time modding back in the day, and how grateful I am that I get to write code for a living now.

Advent of Code



I didn't hear about katas until applying to a Test Driven Development (TDD) focused company, which tested candidates by having them complete a kata (or coding problem) using TDD. The strict TDD kata involves a language agnostic word problem and challenges the user to complete the 'features'. The dev is supposed to write the smallest test they possibly can. Then, after running the test and seeing it fail, the dev writes the smallest amount of code they can to pass the test. As a developer, it's nearly impossible to resist the urge to do more than that, but if you can stay disciplined, almost obsessive, the exercise can really make you think. TDD katas force you to not only think about a solution, but about how to get to that solution in a disciplined way. They also force you to think about writing tests that are strong individually and that work together to provide a safety net for programming. I have found a good kata to be therapeutic, like a good puzzle.
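To make that red/green discipline concrete, here's how the first few cycles of a hypothetical kata might go in Kotlin: the smallest possible test, then the smallest (almost silly) code that passes it, repeated until real logic is forced to emerge:

```kotlin
// Cycle 1 (red): the smallest possible failing test.
//   check(fizzBuzz(3) == "Fizz")
// Cycle 1 (green): the smallest code that passes, even if it feels wrong.
//   fun fizzBuzz(n: Int) = "Fizz"
// Cycle 2: a second test, check(fizzBuzz(4) == "4"), forces a real branch.
fun fizzBuzz(n: Int): String = when {
    n % 3 == 0 -> "Fizz"
    else -> n.toString()
}
```

The point is less the final function than the discipline of letting each tiny test pull the next bit of code into existence.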

Advent of Code is something like a micro kata each day during the season of advent. Starting December first, each day reveals one problem in two parts (the second usually being more challenging). Each day is generally more difficult than the last, making the month something of a marathon where you watch peers slowly drop off. It's a great challenge to wrestle with in the evenings and then discuss with coworkers at lunch the next day.

In 2018 I was working in Elixir, and so attempted advent with my coworkers all in that language. I made it all of 2 days. I remember sheer frustration at the levels of abstraction I had to hold in parallel: 1) How do I actually solve this problem? 2) How do I drive that through tests? 3) How do I do that in Elixir? Elixir was new to me, and a significant paradigm shift from the OO (Object Oriented) languages I had previously used. Being unable to store global state or effectively use side effects really forced me to think differently. Generally it felt like a lot more hassle to do something within a tight set of constraints, but it also forced me to adopt a new perspective. It was also really frustrating to have such an easy answer to #1 above, a decent answer to #2, and then be stymied by seemingly needing to do something backwards, or write a lot of code, to answer #3. Many of my peers' solutions also felt cryptic, relying on terse code that lacked signifiers of what it was doing unless you already knew one-off language features. (I think a language has great value when someone who's never used it before can make a good guess as to what a chunk of code is doing; I don't think that's a feature of elixir.) In the end, it was just too painful to devote additional hours after working all day in elixir.

In 2020 I decided to give Advent of Code another go, this time in Kotlin. I was 4 months into a new job and playing the tech lead on a new project/product. Around that time we were pitching our first large use case, and leadership asked us to go back to the drawing board. Work was busy and a little stressful, but I was blown away by how much easier it was to do strict TDD and solve problems in Kotlin. I think we all have languages that we're drawn to, that work the way we think. I know several people who really did seem to think in elixir, and I found myself thinking about and planning to solve the problems often by sketching them out in Kotlin. The mix of functional and object oriented styles possible in Kotlin allowed me to be really flexible. At the same time the static typing and great editor hints seemed to let me focus on the problems instead of holding that information in my head, like you need to do with looser languages like node or python. In 10 days I amassed 108 test driven commits. Until the last day almost every problem came easily to me, and without needing hints. (On day 9 or 10 I could not figure out why my unit tests were all passing but the actual problem was failing. I got a correct answer from a friend and was able to work backwards from that. I eventually discovered that I needed a long instead of an int. My answer was right, but too big for its variable.)
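That long-vs-int bug is easy to reproduce: on the JVM, Int arithmetic silently wraps at 2^31, so a logically correct answer comes out mangled until the variable is widened to Long. A minimal illustration (the numbers are made up, not the actual puzzle input):

```kotlin
// Kotlin Ints are 32-bit and wrap silently on overflow.
val count = 2_000_000_000

// Summed as Int, the result wraps negative; widened to Long, it's correct.
val asInt: Int = count + count
val asLong: Long = count.toLong() + count.toLong()

fun main() {
    println("Int sum:  $asInt")   // wraps to -294967296
    println("Long sum: $asLong")  // 4000000000
}
```

The unit tests passed because their small example inputs never crossed the Int boundary; only the full puzzle input did.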

I would have liked to keep going, but I started to feel exhausted, and realized I needed to stop burning the candle at both ends. The next day or so I realized I needed to stop burning the candle at all. I ended up taking a day off of work and basically sleeping that and the weekend away, and losing my sense of smell. Covid had finally found me (or so I think), and that was the end of Advent 2020.

So long as I'm not overburdened in other areas of life, I'd like to take another stab in 2021. Hopefully I stay healthy and can make it a bit further next year!



Readings


TFA, where I spent a year reading and programming.

My Entrance into Programming

When I was around 12, I bought and read around 100 pages of C for Dummies, and while I found it kind of neat, I decided programming wasn't for me.

It wasn't until modding Oblivion in high school and then Skyrim (se) in college that I really started getting into programming. I went from the form of visual basic used in Oblivion and Skyrim to Java when I wanted to mod FTL. I found a 13 hour video course on programming Java, and shouted out loud twice while working through it, because I was so excited about how much better Java was than the visual basic I had been using.

The year after college I did a one year graduate program that included creating a 'product'. I spent evenings and weekends creating a terrible starship game in Java (and Swing!). I loved doing it, and abused basically every anti-pattern you can run into. I had no formal education or mentor to instruct me, and so I continually burnt myself, and learned how painful bad development practices can be.

Reading Fills a Gap

When I discovered Clean Code, it was a revelation. Its advice seemed extreme, but I could identify with many of the problems it listed, and getting to hear someone explain guidelines and why something was dangerous, or a better alternative, was wonderful. I ended up reading the book again in my first job as a developer and found it more agreeable; to this day I recommend it to new developers. (I read The Pragmatic Programmer much later. While it's heavily recommended alongside Clean Code by many experienced developers, I found it to be really dated at this point; it spends a chapter on setting up a good IDE, like the new Vim.)

Reading Clean Code made me realize the importance of supplementing hands on experience with books. As someone who is self taught, I missed the formal lecture of a boot camp or college computer science degree. While I've found that most of the best developers I have worked with were self taught, I think that there is great value in supplementing that apprentice style learning with formal/academic inquiry. Those focused looks help provide mental frameworks that take intuitive lessons and help you reason about and communicate them to others.

Ever since, I've tried to push myself to always be reading a coding book, a book in an adjacent space (Game Design, UX, Business Leadership, etc), and a fiction book (Sci Fi, comics, fantasy). This generally means I get through books very slowly, but every now and then I hit a month where I code less and read a lot more. In those early days, I found Effective Java fascinating, and its discussion on generics mind opening. Save the Cat!, while about screenwriting, was a great teacher on user engagement.

I was able to get my company to pay for several game design books (before I was even in software development) including A Theory of Fun (which is a breeze and delight to read) and Game Feel, which formed my early understanding of good UX. Later at the same company I was given The Devops Handbook as part of a reading club. While it felt like 50% marketing to middle management, the other 50% felt like great knowledge and a great set of weapons to fight for a more developer empowered world.

Recent Reading

When I joined one of the software companies I've worked for, it was near the end of the year, and I was given a month to spend (or lose) a $500 discretionary education allowance. After that month, we were told that due to the acquisition, we needed to spend next year's allowance (of the same amount) before February, or again lose it. Aside from the ~$100 I spent on Raspberry Pis, I bought nearly $1,000 of books in just a couple months. To do so, I quickly drained my 'to buy and read' list and then spoke with most of my new teammates to find the books that they had found impactful or that they were purchasing. I bought books like Programming Language Pragmatics, Thinking Fast and Slow, Concepts, Techniques, and Models of Computer Programming, Metaprogramming Elixir, An Introduction to Functional Programming Through Lambda Calculus, and Mythical Man-Month, all of which I haven't started reading yet. I also bought a book that I had read a couple chapters of in college and, having now read it fully, can say it's one of my favorite pieces of academic literature (and a comic at that!): Understanding Comics.

It was this perfect storm that created my deep backlog of unread non-fiction, but it also taught me another lesson. If you find a person to be competent, be biased towards buying and reading their book recommendations. A recommended book tells you both more about the person who recommended it, and also hopefully provides value to you like it did for them. Books are more money-cheap than time-cheap and if they're good, they seem almost always worth the investment.

In my final days at one of my companies, there was a good deal of downtime due to acquisition changes, and I had the opportunity to read and study on work time. I was finally able to push through and complete Pro Git. While sometimes reading like a textbook or talking about outdated content, I thought it was generally a great read. The author, who was one of the founders of Github, is genuinely passionate about git, and his enthusiasm comes through to the reader. It was also fantastic to get to spend some time really thinking about and trying to grok how git works. With how often we use git as developers (all day long) I thought it was a great investment, and I hope to spend more time bouncing between using git and reading about it, in order to get better at using that tool. It's a free ebook that I highly recommend.

One of the book recommendations that I got was to read The Art of Unix Programming. This non-secure basic html site felt revelatory to me in a similar way to how Clean Code once felt. After finishing Pro Git, I wanted to jump into this book, but didn't want to just read on my computer. This was the beginning of my Site Crawler, which I used to turn the website into an epub ebook, and which I have gone on to use to download captures from Xbox and old sprite sheets. It's a large book, and can at times drift into content that feels outdated or irrelevant. It spends a good deal of time on unix programs that I don't think I'll ever use. Some chapters felt like a slog. That said, the page on philosophy alone is incredibly worth the read.

The book as a whole gives a fantastic glimpse into the history of software development, and I was amazed at how many problems of today were thought through and solved then, only to be forgotten or not passed down to today's software developers. I've heard that 50% of the people in software today have been there less than 5 years, and that that trend continues due to our explosive growth and change as an industry. I heard at a conference that many of the problems of today were solved in white papers in the 50s and 60s, but they didn't have the computing power back then, and today we're the worst industry at knowing our own past, and so we miss those solutions and re-invent the wheel. It was fascinating to start to fill another gap that I possibly missed in college, and to hear an insider talk about what it was like to be there. It excited me to imagine what it was like, and to think about how we are still in an exciting, pioneering time of our industry. The Philosophy of Unix has helped me start to connect and unify the past and present of our industry, and while I have more holes in my understanding than solid parts, it was a really exciting start.

These days, my subtle code insult is to say "that looks clever", and my highest compliment is to say "that seems unixy". I love that they recognized in the 60s that developer time was more expensive than computer time, along with so many other fundamental lessons that the bulk of coders wrestle with even today. The disciplined focus on simplicity (humility) over cleverness (arrogance that causes confusion) now seems to me to be a fundamental part of software design, and a useful lens for so many situations. Just as Clean Code had given me a framework to reason about how my code should read, the Philosophy of Unix has given me a framework to think about the design and architecture of my code. It's also interesting to see how much of what we call "Agile" today was called Unix Philosophy then, and how certain behaviors and disciplines have stayed consistent throughout the life of our industry.

Reading Now

These days I'm reading Evil by Design, a mildly horrifying look at how UX manipulates people.

I've started (but am hoping to finish Evil by Design before really focusing on) The Design of Everyday Things per a colleague's recommendation. So far it's been both interesting and seems to be creating a great framework for talking about good design. As part of a small book club at work I'm also reading Accelerate, a sequel of sorts to the Devops Handbook (by most of the same authors). It focuses on how and what to measure to be able to understand and predict developer productivity.

At some point I want to read The Cathedral and the Bazaar, hopefully a smaller sibling to the Philosophy of Unix. I also have that massive backlog of programming textbooks to start chipping away at. It's exciting and intimidating to think of all the books lying around waiting to be read, and it's hard to balance work, hobby programming, and then spending even more time thinking about the same subject, but I'm convinced it's a worthwhile investment.

P.S. If you're in the mood for a laugh, check out MIT's The Tao of Programming

Tech Blog Site


EDIT: I flip back and forth about making my website repo open or private. If the repo links don't work and you're interested, let me know and I'll make these into gists or something.

It seems inevitable that every dev eventually starts and then abandons a blog. I've thought a bit about doing one as a sort of journal for myself. I know there are a lot of sites that provide that functionality, but I wanted to play around with re-inventing that wheel, and wanted something I could tweak and fiddle with. I also wanted to write version-controlled markdown and have it converted into html and pushed through a simple pipeline. Finally, I didn't want to have to learn or be dependent on a more mainstream content platform like Medium, Blogger, or Wordpress; instead I wanted to somehow embed it into my website. I ended up with this mess of a hacky solution that I had a lot of fun building.

My website has basically a three step deploy process: npm run build, npm run deploy-content, and then going to CloudFront to invalidate the cache (if I want to quickly check the results). I wanted something where, with one script, I could push files somewhere and have them dynamically pulled in without having to invalidate caches. I also wanted to convert the markdown to html so that I could preview locally almost exactly what the 'published' result would look like.

The Basic Setup

In between matches of Halo Friday night I started a simple node script to read all the markdown files in an input folder, use markdown-it to convert them to html, and then publish them to an output folder. Then I use the aws sdk to push it all to a bucket. (I figured I could use any number of hosting platforms, but pushing to the bucket was easy, cheap, and in line with how I host my site.)
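The core of that publisher script is small. Here's a hedged sketch of the shape of it: the real script uses markdown-it (and the AWS SDK to push the results), but in this sketch the renderer is pluggable and a toy heading converter stands in for markdown-it, so the file-mapping logic is the focus.

```typescript
// Map each input markdown file to an output html file using a renderer.
// In the real script, `render` would be markdown-it's md.render, and the
// outputs would be written to a folder and pushed to the bucket.
type Renderer = (markdown: string) => string;

function publishAll(
  inputs: Record<string, string>, // filename -> markdown content
  render: Renderer
): Record<string, string> {      // filename.html -> html content
  const outputs: Record<string, string> = {};
  for (const name of Object.keys(inputs)) {
    if (!name.endsWith(".md")) continue;          // skip non-markdown files
    const htmlName = name.replace(/\.md$/, ".html");
    outputs[htmlName] = render(inputs[name]);
  }
  return outputs;
}

// Toy renderer standing in for markdown-it: '# ' lines become <h1>,
// everything else becomes a <p>.
const renderHeading: Renderer = (md) =>
  md
    .split("\n")
    .map((line) =>
      line.startsWith("# ") ? `<h1>${line.slice(2)}</h1>` : `<p>${line}</p>`
    )
    .join("\n");
```

Swapping the toy renderer for markdown-it leaves the rest of the pipeline untouched, which is most of the appeal of keeping the conversion behind a function.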

Saturday was spent finishing the above script and working through the needed website changes. I had two main challenges: dynamically discovering what blog html files existed, and then pulling them into the site. I assumed I'd do a bucket list objects, and then pull them by accessing the public-read files. Figuring out the list objects rest call was trickier than I thought it would be, so I stubbed the file names I knew I had and worked on the second challenge.

Given a list of filenames, and knowing the bucket, I fetched each of the html files from AWS, converted them to strings, and then set the inner html of my 'blog-entry' components to that html. While a hacky and possibly unsafe operation, these files come from the same bucket as the rest of my website, and Angular does some sanitization automatically these days, so I figured it was 'good enough'. (I also had to fiddle with CORS in the bucket settings to grab the files locally.)

Listing Files

Next came trying to discover said files to pull. While I've done a lot of listing objects in buckets through the cli or sdk, I've never done the base api call. It took quite a bit of digging to find something that worked, and I also realized I had to update my bucket policy to allow that action. Annoyingly, it only comes back in xml, and so I had to parse the xml and navigate its nodes to get the keys (file names) that I cared about.
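Since node's standard library has no XML parser, a sketch of that key extraction can get away with a regex over the `<Key>` elements of S3's ListBucketResult response. This is an illustrative simplification, assuming plain key names with no XML entities, not a general XML parser:

```typescript
// Pull the object keys (file names) out of an S3 list-objects XML response.
// Assumes key names contain no XML entities or nested markup.
function extractKeys(listBucketXml: string): string[] {
  const keys: string[] = [];
  const pattern = /<Key>([^<]+)<\/Key>/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(listBucketXml)) !== null) {
    keys.push(match[1]);
  }
  return keys;
}
```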

This solution worked until I pushed to 'prod'. In production I got failures to load because of 'mixed content'. I hadn't noticed, but the list objects api call was http instead of https. Setting it to https returned content, but that endpoint didn't present a valid certificate itself, so despite my site having a cert, the call was still considered insecure. At this point I was pretty frustrated with what should have been a simple action, so I decided to take a step back and think of other ways I could solve the problem.

Because I was using a pipeline to generate and deploy the dynamic content, it already had knowledge of those files. So I decided to have the node publisher script keep track of the output file names and create an additional text file containing a line-separated list of filenames. This meant I could hardcode the website to find that 'index' file without a list-objects call, and then just read it to know what files to pull. While still somewhat hard coded, this gives me a nicer separation of concerns and lets the blog own which entries to show (in the future I could push multiple blog entries and only show some of them). This also made the website code simpler, as it only parses a text file. (This feels more unixy too.)
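The index file itself can be as simple as one filename per line, written by the publisher and split back apart by the site. A minimal sketch of both halves (function names are illustrative, not the actual ones in the repo):

```typescript
// Publisher side: join the published filenames into the index file body.
function buildIndex(filenames: string[]): string {
  return filenames.join("\n");
}

// Website side: split the fetched index back into filenames, ignoring
// blank lines so a trailing newline in the file is harmless.
function parseIndex(indexBody: string): string[] {
  return indexBody
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}
```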

Other Additions

Later I realized my blogs should be sorted by date. I considered using file created/modified dates, but that doesn't work for backdating blogs etc. Once again I relied on a hacky solution that works because I'm the only one who needs to follow the convention. When publishing the files, I read the third line of each, parse the date, and sort the filenames in my output file list using those dates. This means the output list is already in the right order, and the site just pulls and displays entries in the order it receives them. Not robust, but convenient, and it's easy to enforce a convention with just one dev.
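Sketched out, that convention looks something like this: each entry carries its date on the third line of its markdown, and the publisher sorts newest-first before writing the index. The exact line number and date format here are my own illustrative convention, not anything standard:

```typescript
// Each entry's markdown holds its date on the third line, e.g. "2020-11-07".
// Sort filenames newest-first by that embedded date.
function sortByEntryDate(
  contents: Record<string, string> // filename -> markdown content
): string[] {
  const dateOf = (name: string): number => {
    const thirdLine = (contents[name].split("\n")[2] ?? "").trim();
    return new Date(thirdLine).getTime();
  };
  return Object.keys(contents).sort((a, b) => dateOf(b) - dateOf(a));
}
```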

Once I had the html in my site, I wanted to make a couple of changes to it. I wanted a dynamically generated table of contents, and I wanted each entry's title to be an anchor tag that I could bookmark or share. I debated doing this as part of the publishing step (creating a json object with title, id, and content). In the end though, I liked that the blog publishing was just responsible for the content and its order, without knowing what the website would do with it.

Instead I transformed the html in my main blog page component. There I grabbed each header tag and turned it into an anchor tag with a link to itself (the anchor is actually added in the blog entry component). These ids/blog entry objects could then also be passed to my table of contents so it could create the links to each section. Finally, I had to add a lifecycle hook to navigate to an anchor tag on page render, so a shared/bookmarked link would scroll to the right post.
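The transform itself is mostly string surgery on the published html. A hedged sketch of the idea (the real version lives in Angular components; the slug rules here are my own simplification):

```typescript
// Derive a url-safe id from a heading's text.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumeric runs
    .replace(/^-|-$/g, "");       // trim leading/trailing dashes
}

// Turn each <h1> into a linkable anchor: give it an id derived from its
// text and wrap the text in a link pointing at that id.
function anchorHeadings(html: string): string {
  return html.replace(/<h1>([^<]+)<\/h1>/g, (_whole, title: string) => {
    const id = slugify(title);
    return `<h1 id="${id}"><a href="#${id}">${title}</a></h1>`;
  });
}
```

The ids produced this way are what a table of contents (and the on-render scroll hook) can link against.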


A note on testing: due to my relative lack of skill (and patience) with front ends, I compound my frustrations by not writing many tests. This, plus my general hacking together of front end solutions, means my website is not robust, and I often need to hunt down bugs and self-inflicted wastes of time. This is something I should do better at, but I find it hard to motivate myself when there are more exciting things I could spend my free time building.

I sank a ton of work into creating a rickety solution when there are so many robust, polished tools out there, but I'm really happy with my extremely personalized 'blogging platform'. I really like having a tool that works exactly how I'd like it to, and that I can customize further with any features I think up. Now I wonder: will I continue to use it, or will it become yet another abandoned dev blog?

The week after I made this post, I found this comic that rings pretty true (slight language warning).

Sprite Sheet Gifs



My current workplace makes heavy use of emojis in our Slack, and as I was thinking about adding more, I thought that old Nintendo pixel art would be a really good fit. Pixel art is already optimized for small display areas, and reads cleanly in those small reaction areas. Nintendo is also easily recognized and has a huge collection of great characters.


I found a website that has ripped or recreated a bunch of sprites from those old SNES games. However, saving each image one at a time was a pain, so I updated my site crawler to download all the sprites on a page.


Once I had a spritesheet, the next step was to combine the desired frames into a gif. Ideally I'd use the wonderful pixel art program Aseprite to read in the tile sheet as a grid and then export the gif. However, I soon realized that none of the tilesheets I had downloaded were in uniform grids. (It makes sense that the people extracting/recreating assets wouldn't care about making the grid uniform, since they aren't thinking about reading these sheets programmatically.)


I tried using GIMP to make the gifs by hand, and while that worked, it was laborious and meant I had to line up every frame manually. This got me thinking about how 'easy' it would be to create an app that takes a tilesheet, determines a grid, and then reprints the sprites on that grid. I thought about it for the rest of that week and then spent the weekend building AutoSprite.


AutoSprite reads in an image and finds the background color by counting which color has the most pixels. It then adds all non-background pixels (foreground or sprite pixels) to a list of 'seeds'. For each seed pixel, we walk all neighboring pixels and add them to the 'sprite' if they're also foreground pixels. Once we have a complete sprite, we remove all of its pixels from the seed list. We repeat until there are no more seeds, leaving a list of sprites.
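That pixel walk is essentially a flood fill over the non-background colors. A minimal TypeScript sketch of the idea (AutoSprite itself may be structured differently; colors are plain numbers here, and the real app also discards tiny sprites, which this sketch skips):

```typescript
// Group connected foreground pixels into sprites. The background is the
// most frequent color; connectivity is cardinal (up/down/left/right) only.
type Pixel = { x: number; y: number };

function findSprites(image: number[][]): Pixel[][] {
  // Tally colors to find the background (the most frequent color).
  const counts = new Map<number, number>();
  for (const row of image)
    for (const c of row) counts.set(c, (counts.get(c) ?? 0) + 1);
  let background = 0;
  let best = -1;
  counts.forEach((n, color) => {
    if (n > best) { best = n; background = color; }
  });

  const height = image.length;
  const width = image[0].length;
  const seen = image.map((row) => row.map(() => false));
  const sprites: Pixel[][] = [];

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (seen[y][x] || image[y][x] === background) continue;
      // Breadth-first walk from this seed; the queue doubles as the
      // collected sprite since every enqueued pixel belongs to it.
      const queue: Pixel[] = [{ x, y }];
      seen[y][x] = true;
      for (let i = 0; i < queue.length; i++) {
        const p = queue[i];
        for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
          const nx = p.x + dx;
          const ny = p.y + dy;
          if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
          if (seen[ny][nx] || image[ny][nx] === background) continue;
          seen[ny][nx] = true;
          queue.push({ x: nx, y: ny });
        }
      }
      sprites.push(queue);
    }
  }
  return sprites;
}
```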


The app then calculates the bounding box of a sprite by grabbing the min and max x and y values for any of its pixels. Once we have all the bounding boxes, we can figure out the minimum grid size needed to accommodate them all uniformly. With grid information we can now write a new image where the sprites are all spaced uniformly based on the grid. I can then load these images into Aseprite and because the grid is uniform, all the frames line up, I can do any manual adjustments, and then export as a gif.
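The bounding-box and grid math is straightforward min/max work. A self-contained sketch (pixel coordinates as plain objects; names are illustrative, not AutoSprite's actual API):

```typescript
type Pt = { x: number; y: number };
type Box = { minX: number; minY: number; maxX: number; maxY: number };

// Bounding box of one sprite: the min and max of its pixel coordinates.
function boundingBox(pixels: Pt[]): Box {
  const xs = pixels.map((p) => p.x);
  const ys = pixels.map((p) => p.y);
  return {
    minX: Math.min(...xs), minY: Math.min(...ys),
    maxX: Math.max(...xs), maxY: Math.max(...ys),
  };
}

// Smallest uniform cell that can hold every sprite's bounding box:
// the max width and max height across all boxes.
function gridCellSize(boxes: Box[]): { width: number; height: number } {
  let width = 0;
  let height = 0;
  for (const b of boxes) {
    width = Math.max(width, b.maxX - b.minX + 1);
    height = Math.max(height, b.maxY - b.minY + 1);
  }
  return { width, height };
}
```

With the cell size known, rewriting the sheet is just copying each sprite into its own uniformly spaced cell.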


This method isn't perfect though. For one, I lose some stray/orphan pixels (the app throws away sprites with fewer than 5 pixels etc). This is because I'm only crawling pixels in cardinal directions and I have no tolerance for background pixels. I also pick the largest grid for all sprites, so if a tilesheet has sprites of noticeably different sizes, the small sprites inherit an overly large grid. I thought about inferring multiple grids, but it seems better to let the user do that by creating multiple input images. Finally, I may reorder sprites, which could mess up natural frame progressions in the tilesheet, but sadly I couldn't think of a way to detect frame ordering.


After all that programming, I was too worn out to spend much time making gifs. Hopefully in the future I'll create a bunch more, like the ones in this post. Either way, it was a lot of fun to build these tools.


Hacktoberfest 2020



In 2019 I had the good fortune to get to write open source code for work. Almost our entire team participated in Hacktoberfest, as it was basically getting paid to write open source code and get a free T-Shirt to boot. Ironically half of my pull requests were reverting previous commits that I had made.

This year I'm no longer getting paid to write open source code, so I had to actually think about what PRs to make. While I spend a good chunk of my weekends writing code, it's generally for one off hobby projects where I'm the sole contributor, and branching only makes sense for longer running experiments; generally I'm just committing directly to master.

I knew I wanted to update my site-crawler to support scraping more than one type of website/book so that I could add another book, but coming up with the other two PRs proved more challenging. I didn't want to just cheese things with readme updates, but I didn't have another project that fit into discernible PR chunks (as opposed to spending a couple weeks on a larger branch).

In the end I decided to redo the roman numerals kata in Kotlin, and actually complete it. This chunked nicely into PRs and gave me the two I needed for Hacktoberfest, plus a couple extra. Finally, near the end of the month, I added a final PR for better collision in my platformer experiment/game Vex.