Tech Blog

  1. GitHub Actions
  2. Going Open Source
  3. A Short Return to Modding
  4. Advent of Code
  5. Readings
  6. Tech Blog Site
  7. Sprite Sheet Gifs
  8. Hacktoberfest 2020

GitHub Actions

5-08-2021

GitHub Actions

At my first company, we used Jenkins for all our CI/CD (continuous integration and continuous deployment, though really we weren't continuously doing either of those). Jenkins seems like the standard go-to for larger, older companies. While I found it serviceable at the time, I've grown to dislike it looking back, mostly because of its association with bad practices at the companies where I've used it, and partly because it seems so GUI focused.

I've used a number of other CI/CD frameworks since then. When trying to get a job at my second company, I set up Travis to do CI/CD on my personal website (which I had just redone). I found it so much simpler than Jenkins, if a bit less discoverable. I had something up and running (for free) very quickly, and then essentially never had to change it again. At my current company, we're using Concourse, and while it has a steep learning curve and can be really complex for certain things, I very much appreciate its Unix-philosophy-inspired design (everything is CLI based, infrastructure as code, immutable, etc.).

I had previously toyed with GitHub Actions when it came out, on the Smart Columbus project, but didn't dig too deep as I just didn't have a need (my CI/CD projects were running on Travis without issue). However, when Travis decided to finally shut down travis-ci.org in favor of travis-ci.com (or maybe the opposite, I'm still confused), I figured I'd just port to GitHub Actions instead.

GitHub Actions seems incredibly easy to use. Setting up a flow for Quest Command was painless, fairly quick, and logical. While I miss some of the unixy design of Concourse, and haven't created my own action yet, cannibalizing examples and tweaking them to meet my needs has been really easy and straightforward.

The integration with the rest of GitHub makes GitHub Actions really convenient. It's so nice to create actions as part of your code base, to be able to easily leverage crowd-sourced actions, and to be able to see the results on the same site, instead of having to coordinate with a second site for CI/CD. Secrets management works really nicely, and as an added bonus, everything runs really quickly. On Travis I'd often have to wait in a queue for my build to start, whereas with GitHub Actions I've not seen that (yet).

It's so cool to see GitHub create and support a feature like this.

Going Open Source

2-01-2021

really bad brand work

After the Capitol riot in the US, I was somewhat shocked to see the response by big tech. From what I could tell, Parler (a startup competitor to Twitter) was no more complicit in enabling the planning of the riot than Facebook or Twitter. Yet somehow they were completely de-platformed, while the giants in that space were unaffected. Their servers were taken down, their security provider switched their services off, their database was hacked (due to that lack of security and some bad code), and user information, both of those who participated in the riot and of those who had nothing to do with it, was leaked online and shared in its totality.

I've always thought of Google and Facebook like a robot in an Asimov book. I've not been concerned about them collecting my info any more than I'm nervous that my bathroom walls see me get out of the shower. I'm such a small statistical speck in their eyes that I've never worried about them caring about me other than to target their ads at me (which I mostly block anyway). I still believe this is true today.

However, that's not how the customers of Parler were treated, nor the employees. That bulk data suddenly became very specific and very targeted, and the companies that are the backbone of the free internet had no qualms about annihilating their competition (and using a hypocritical political stance to refuse service to a company doing the same thing they were doing). It's scary that the US is becoming so politically charged that even discussing something like this can be shut down for being conspiratorial or political.

I don't see Google or Facebook 'coming for me'. I don't see them on that trajectory. But their responses made me realize how easy it would be for them to change from 'benevolent ad producer' to 'totalitarian regime' or something else scary and bad. I'm sure they already have enough information to deduce today where I'll be and what I'll be doing in 10 years, but for the first time I realized maybe in 10 years I won't want them to know.

To that end, I made a resolution: become less reliant on big tech, become diversified in what service providers I use, become more privacy minded, and heavily prefer open source software.

I made a list of all of the Google products I use (and what wonderful, well crafted, convenient products they are!). I looked at what other big tech I was using, as well as other apps that are not open source. I then methodically, ploddingly went through and tried open source alternatives. I've done this over a number of months, little by little. Some apps and services I deemed too important or convenient to replace with a much inferior open source app, but in most cases I've been really happy with the alternatives I've found.

Today, everything takes a little longer, breaks a little easier, and requires more attention. But I find that my digital world is much more tailored to my preferences, many apps I use make so much more sense to me, I feel in control instead of inundated by influencers, and I've learned about a lot of software. Should privacy not be a concern at all, I'd still be really glad I started being more intentional about the apps I use.

Here are a number of apps I've replaced so far, and the alternatives I picked. Every app has a ton of alternatives; these are just the ones I've landed on. I'll probably go back and update this in the future as I make more choices.

Google App | Alternative | Thoughts
Maps | OSM | OpenStreetMap is good, but it just can't compete (yet) with Google Maps.
Keep | Joplin | I now use Joplin both personally and professionally. It's a wonder for taking notes and staying organized.
Chrome | Brave | Basically Chrome, but with more security and privacy, and less trackability.
Search | SwissCows | A privacy focused search engine that is definitely inferior to Google, but it also filters out some level of crap. Even if they did keep all my traffic, I like that my browser and my search engine are built by different companies.
Chrome Passwords | Pass | While I still use Brave to remember passwords, I also use Pass, both personally and at my job. I love its unixy approach to password management.
Termux | Termux | Not really an alternative, but Termux has been really fun to play with on my Android phone.
Gmail | TutaMail | A privacy focused email provider. Not as convenient as Gmail, but I appreciate their focus on privacy, and worst case I'm again splitting my personal information out from being all under one umbrella.
Office | LibreOffice | I don't think it's great, but I've come to basically hate Office anyway, so this is passable. I find I use VSCode almost anywhere I used to use Word.
Gimp | Gimp | Not new, but still my go-to for picture editing.
News | Liferea / Flym | It is so refreshing to get to pick my news sources instead of constantly being 'influenced'. I was so sick of hearing about Covid, but Google wanted me to see it, and so I couldn't turn off Covid articles or the big Covid banner. With these open source apps I can pick my sources and turn off words in the topic header. It's much more granular and effective than Google News' "I'm not interested".
Contacts | Contacts | Open source equivalent.
SMS | SMS | Open source equivalent.
Phone | Phone | Open source equivalent.
Google Play | F-Droid | Does the job well enough for the apps that it has.

A Short Return to Modding

1-13-2021

Mods

The Glory Days

I read 100 pages or so of C for Dummies when I was 12. I thought that programming seemed pretty dry and tedious.

It wasn't until high school that I started programming in a variant of Visual Basic in order to tweak my game, and other people's mods, so that I could make Oblivion more the way I wanted it to be. That was the first time I felt that burning need to solve some programming problem, the same feeling I feel almost every day between work and hobby programming. (It's like having something on the tip of your tongue and not feeling like you can change the subject until you solve the problem / understand the answer. It also feels like having boundless energy and the ability to manifest your will, ex nihilo.)

Oblivion tweaking became Oblivion modding, and soon I was waist-deep in grand ideas for adventures and epic mods. Of course, I had no idea what I was doing and mostly made spaghetti messes and unreleased experiments. It wasn't until the release of Skyrim in 2011 that I became completely, and more realistically, focused on modding. This peaked in the summer of 2013, when I realized I might want to do something like this for a career. I decided to take two weeks and do nothing but make mods. I figured that if it was a fad, I'd get bored. Instead, I left the house once the entire time, and the two weeks felt like a near constant rush of adrenaline. I created Alternate Actors, my most downloaded mod, and I had a blast. In the glory days I made over 30 mods, and they were downloaded over 700k times. (More a testament to the environment and popularity of Skyrim than to my specific skills.)

It was a wonderful experience to be part of, and it galvanized my desire to write code for a living. Looking back, I'm still proud of what I wrote, even if it's an obvious mess. It's clear I was really enjoying what I did (and what a gift that I still enjoy coding today, as much as or more than in that first experience!). I wrote my code in Notepad++, had to run a command to manually compile it, and had to wait 1 to 3 minutes to start the game up and test. I generally had no concept of objects or small file sizes. And yet I took time to carefully document my code, and I was clearly having fun when I wrote all of my debug statements in Spanish!

The Departure

Starting a career took me away from modding for two reasons. First, I had less time to mod. Second, with my growing skills I could build actual games, tools, and other full projects, instead of sticking to just tweaking someone else's body of work. To this day I probably get 1 to 2 requests to update, port, or build a mod, nearly a decade after those glory days.

Dipping in to Say Hello

When I caught Covid, I turned to Skyrim almost as comfort food. I decided to start a new save with modern mods and cleaner modding practices. New tools like Vortex, and just generally understanding computers and modding principles, made the experience so much nicer. Then I decided I wanted to be able to auto-sort my mods, and so set out to create an Auto Sort Mod (source).

Coming back, I was both hit with nostalgia and blown away by how bad my old workflow was. Papyrus doesn't even have an implementation of maps, and is really not general purpose. Fortunately I found an SKSE plugin that essentially lets scripts call out to a DLL, get a reference to an object, and get and set values on that object.

Creating that mod and digging through the source I could find for any of my old Skyrim Mods reminded me of how grateful I am that I was able to spend time modding back in the day, and how grateful I am that I get to write code for a living now.

Advent of Code

12-30-2020

advent

I didn't hear about katas until applying to a test-driven development (TDD) focused company. I remember wanting to apply because they tested candidates by having them complete a kata (or coding problem) using TDD. The strict TDD kata involves a language agnostic word problem and challenges the user to complete the 'features'. The dev is supposed to write the smallest test they possibly can. Then, after running the test and seeing it fail, the dev writes the smallest amount of code they can to pass the test. As a developer, it's nearly impossible to resist the urge to do more than that, but if you can stay disciplined, almost anal, the exercise can really make you think. TDD katas force you to think not only about a solution, but about how to get to that solution in a disciplined way. They also force you to think about writing tests that are strong individually and that work together to provide a safety net for programming. I have found a good kata to be therapeutic, like a good puzzle.
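
To make that rhythm concrete, here's roughly what the very first red/green cycle of, say, the Roman numerals kata looks like (a hypothetical TypeScript/Jest sketch, not pulled from any real kata repo):

```typescript
// roman.test.ts -- step 1: write the smallest failing test you can.
import { toRoman } from "./roman";

test("1 converts to I", () => {
  expect(toRoman(1)).toBe("I");
});
```

```typescript
// roman.ts -- step 2: write the least code that makes that test pass, even a
// hardcoded return, and only generalize when the next test forces you to.
export function toRoman(n: number): string {
  return "I";
}
```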

Advent of Code is something like a micro kata each day during the season of Advent. Starting December first, each day reveals one problem in two parts (the second usually being more challenging). Each day is generally more difficult than the last, making the month something of a marathon where you watch peers slowly drop off. It's a great challenge to wrestle with in the evenings and then discuss with coworkers at lunch the next day.

In 2018 I was working in Elixir, and so attempted Advent with my coworkers all in that language. I made it all of 2 days. I remember sheer frustration at the levels of abstraction I had to hold in parallel: 1) How do I actually solve this problem? 2) How do I drive that through tests? 3) How do I do that in Elixir? Elixir was new to me, and a significant paradigm shift from the OO (object oriented) languages I had previously used. Being unable to store global state or effectively use side effects really forced me to think differently. Generally it felt like a lot more hassle to do something within a tight set of constraints, but it also forced me to adopt a new perspective. It was also really frustrating to have such an easy answer to #1 above, a decent answer to #2, and then be stymied by seemingly needing to do something backwards, or write a lot of code, to answer #3. Many of my peers' solutions also felt cryptic, relying on terse code that lacked signifiers of what it was doing unless you already knew one-off language features. (I think a language has great value when someone who's never used it before can make a good guess as to what a chunk of code is doing; I don't think that's a feature of Elixir.) In the end, it was just too painful to devote additional hours after working all day in Elixir.

In 2020 I decided to give Advent of Code another go, this time in Kotlin. I was 4 months into a new job and playing the tech lead on a new project/product. Around that time we were pitching our first large use case, and leadership asked us to go back to the drawing board. Work was busy and a little stressful, but I was blown away by how much easier it was to do strict TDD and solve problems in Kotlin. I think we all have languages that we're drawn to, that work the way we think. I know several people who really did seem to think in Elixir, and I found myself thinking through and planning solutions by sketching them out in Kotlin. The mix of functional and object oriented styles possible in Kotlin allowed me to be really flexible. At the same time, the static typing and great editor hints seemed to let me focus on the problems instead of holding that information in my head, like you need to do with looser languages like Node or Python. In 10 days I amassed 108 test-driven commits. Until the last day almost every problem came easily to me, and without needing hints. (On day 9 or 10 I could not figure out why my unit tests were all passing but the actual problem was failing. I got a correct answer from a friend and was able to work backwards from that. I eventually discovered that I needed a long instead of an int; my answer was right, but too big for its variable.)

I would have liked to keep going, but I started to feel exhausted, and realized I needed to stop burning the candle at both ends. The next day or so I realized I needed to stop burning the candle at all. I ended up taking a day off of work and basically sleeping that and the weekend away, and losing my sense of smell. Covid had finally found me (or so I think), and that was the end of Advent 2020.

So long as I'm not overburdened in other areas of life, I'd like to take another stab in 2021. Hopefully I stay healthy and can make it a bit further next year!

Readings

11-29-2020

TFA, where I spent a year reading and programming.

My Entrance into Programming

When I was around 12, I bought and read around 100 pages of [C for Dummies], and while I found it kind of neat, I decided programming wasn't for me.

It wasn't until modding [Oblivion] in high school and then [Skyrim] in college that I really started getting into programming. I went from the form of Visual Basic used in Oblivion and Skyrim to Java when I wanted to mod [FTL]. I found a 13-hour video course on programming in Java, and shouted out loud twice while working through it, because I was so excited about how much better Java was than the Visual Basic I had been using.

The year after college I did a one year graduate program that included creating a 'product'. I spent evenings and weekends creating a terrible [starship game] in Java (and Swing!). I loved doing it, and abused basically every anti-pattern there is. I had no formal education or mentor to instruct me, and so I continually burnt myself, and learned how painful bad development practices can be.

Reading Fills a Gap

When I discovered Clean Code, it was a revelation. The author's advice seemed extreme, but I could identify with many of the problems he listed, and getting to hear someone explain guidelines, why something was dangerous, and what a better alternative looked like was wonderful. I ended up reading the book again in my first job as a developer and found it more agreeable; to this day I recommend it to new developers. (I read The Pragmatic Programmer much later. While it's heavily recommended alongside Clean Code by many experienced developers, I found it really dated at this point; it spends a chapter on setting up a good IDE, like the new Vim.)

Reading Clean Code made me realize the importance of supplementing hands-on experience with books. As someone who is self-taught, I missed the formal lectures of a boot camp or college computer science degree. While I've found that most of the best developers I have worked with were self-taught, I think there is great value in supplementing that apprentice-style learning with formal/academic inquiry. Those focused looks help provide mental frameworks that take intuitive lessons and help you reason about and communicate them to others.

Ever since, I've tried to push myself to always be reading a coding book, a book in an adjacent space (game design, UX, business leadership, etc.), and a fiction book (sci-fi, comics, fantasy). This generally means I get through books very slowly, but every now and then I hit a month where I code less and read a lot more. In those early days, I found Effective Java fascinating, and its discussion of generics mind opening. Save the Cat!, while about screenwriting, was a great teacher on user engagement.

I was able to get my company to pay for several game design books (before I was even in software development), including A Theory of Fun (which is a breeze and a delight to read) and Game Feel, which formed my early understanding of good UX. Later at the same company I was given The DevOps Handbook as part of a reading club. While it felt like 50% marketing to middle management, the other 50% felt like great knowledge and a great set of weapons to fight for a more developer-empowered world.

Recent Reading

When I joined one of the software companies I've worked for, it was near the end of the year, and I was given a month to spend (or lose) a $500 discretionary education allowance. After that month, we were told that due to the acquisition, we needed to spend next year's allowance (of the same amount) before February, or again lose it. Aside from the ~$100 I spent on Raspberry Pis, I bought nearly $1,000 of books in just a couple months. To do so, I quickly drained my 'to buy and read' list and then spoke with most of my new teammates to find the books that they had found impactful or that they were purchasing. I bought books like Programming Language Pragmatics, Thinking Fast and Slow, Concepts, Techniques, and Models of Computer Programming, Metaprogramming Elixir, An Introduction to Functional Programming Through Lambda Calculus, and The Mythical Man-Month, none of which I have started reading yet. I also bought a book that I had read a couple chapters of in college and, having now read it fully, can say it's one of my favorite pieces of academic literature (and a comic at that!): Understanding Comics.

It was this perfect storm that created my deep backlog of unread non-fiction, but it also taught me another lesson: if you find a person to be competent, be biased towards buying and reading their book recommendations. A recommended book tells you more about the person who recommended it, and hopefully provides the same value to you that it did for them. Books cost far more in time than in money, and if they're good, they almost always seem worth the investment.

In my final days at one of my companies, there was a good deal of downtime due to acquisition changes, and I had the opportunity to read and study on work time. I was finally able to push through and complete Pro Git. While it sometimes reads like a textbook or covers outdated content, I thought it was generally a great read. The author, who was one of the founders of GitHub, is genuinely passionate about git, and his enthusiasm comes through to the reader. It was also fantastic to get to spend some time really thinking about and trying to grok how git works. With how often we use git as developers (all day long), I thought it was a great investment, and I hope to spend more time bouncing between using git and reading about it, in order to get better at using that tool. It's a free ebook that I highly recommend.

One of the book recommendations I got was The Art of Unix Programming. This non-secure, basic HTML site felt revelatory to me in a similar way to how Clean Code once felt. After finishing Pro Git, I wanted to jump into this book, but didn't want to just read it on my computer. This was the beginning of my Site Crawler, which I used to turn the website into an EPUB ebook, and have gone on to use to download captures from Xbox and old sprite sheets. It's a large book, and it can at times drift into content that feels outdated or irrelevant. It spends a good deal of time on Unix programs that I don't think I'll ever use, and some chapters felt like a slog. That said, the page on philosophy alone is incredibly worth the read.

The book as a whole gives a fantastic glimpse into the history of software development, and I was amazed at how many problems of today were thought through and solved back then, only to be forgotten or not passed down to today's software developers. I've heard that 50% of the people in software today have been in it for less than 5 years, and that that trend continues due to our explosive growth and change as an industry. I heard at a conference that many of the problems of today were solved in white papers in the 50s and 60s, but they didn't have the computing power back then, and today we're the worst industry at knowing our own past, so we miss those solutions and re-invent the wheel. It was fascinating to start to fill another gap that I had possibly missed in college, and to hear an insider talk about what it was like to be there. It excited me to imagine what it was like, and to think about how we are still in an exciting, pioneering time for our industry. The philosophy of Unix has helped me start to connect and unify the past and present of our industry, and while I have more holes in my understanding than solid parts, it was a really exciting start.

These days, my subtle code insult is to say "that looks clever", and my highest compliment is to say "that seems unixy". I love that they recognized back in the 60s that developer time was more expensive than computer time, along with so many other fundamental lessons that the bulk of coders still wrestle with today. The disciplined focus on simplicity (humility) over cleverness (arrogance that causes confusion) now seems to me to be a fundamental part of software design, and a useful lens for so many situations. Just as Clean Code gave me a framework to reason about how my code should read, the philosophy of Unix has given me a framework to think about the design and architecture of my code. It's also interesting to see how much of what we call "Agile" today was called Unix philosophy then, and how certain behaviors and disciplines have stayed consistent throughout the life of our industry.

Reading Now

These days I'm reading Evil by Design, a mildly horrifying look at how UX manipulates people. I've also started (but am hoping to finish Evil by Design before really focusing on) The Design of Everyday Things, per a colleague's recommendation. So far it's been interesting and seems to be building a great framework for talking about good design. As part of a small book club at work I'm also reading Accelerate, a sequel of sorts to The DevOps Handbook (by most of the same authors). It focuses on how and what to measure in order to understand and predict developer productivity.

At some point I want to read The Cathedral and the Bazaar, a hopefully smaller sibling to The Art of Unix Programming. I also have that massive backlog of programming textbooks to start chipping away at. It's exciting and intimidating to think of all the books lying around waiting to be read, and it's hard to balance work, hobby programming, and then spending even more time thinking about that same subject, but I'm convinced it's a worthwhile investment.

P.S. If you're in the mood for a laugh, check out MIT's The Tao of Programming.

Tech Blog Site

11-21-2020

EDIT: I flip back and forth about making my website repo open or private. If the repo links don't work and you're interested, let me know and I'll make these into gists or something.

It seems inevitable that every dev eventually starts and then abandons a blog. I've thought a bit about doing one as a sort of journal for myself. I know there are a lot of sites that provide that functionality, but I wanted to play around with reinventing that wheel, and wanted something I could tweak and fiddle with. I also wanted to be able to write version-controlled markdown, and then have that converted into HTML and pushed through a simple pipeline. Finally, I didn't want to have to learn or be dependent on a more mainstream content platform like Medium, Blogger, or Wordpress; instead I wanted to somehow embed it into my website. I ended up with this mess of a hacky solution that I had a lot of fun building.

My website currently has a three-step deploy process: npm run build, then npm run deploy-content, and then going to CloudFront to invalidate the cache (if I want to quickly check the results). I wanted something where, with one script, I could push files somewhere and have them dynamically pulled in without having to invalidate caches. I also wanted to convert the markdown to HTML locally so that I could preview almost exactly what the 'published' result would look like.

The Basic Setup

In between matches of Halo on Friday night, I started a simple Node script to read all the markdown files in an input folder, use markdown-it to convert them to HTML, and then publish them to an output folder. From there I use the AWS SDK to push them to a bucket. (I figured I could use any number of hosting platforms, but pushing to the bucket was easy, cheap, and in line with how I host my site.)
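
The script boils down to something like this. It's a simplified sketch rather than the real source; the folder names, bucket name, and key prefix are made up, and it assumes markdown-it and the v2 aws-sdk:

```typescript
// publish.ts -- simplified sketch of the blog publisher (paths are placeholders).
import * as fs from "fs";
import * as path from "path";
import MarkdownIt from "markdown-it";
import { S3 } from "aws-sdk";

const md = new MarkdownIt();
const s3 = new S3();

async function publish(inputDir: string, outputDir: string, bucket: string) {
  for (const file of fs.readdirSync(inputDir).filter(f => f.endsWith(".md"))) {
    // Convert each markdown file to html so the 'published' result can be previewed locally.
    const html = md.render(fs.readFileSync(path.join(inputDir, file), "utf8"));
    const outName = file.replace(/\.md$/, ".html");
    fs.writeFileSync(path.join(outputDir, outName), html);

    // Push the generated html to the same bucket that hosts the rest of the site.
    await s3
      .putObject({
        Bucket: bucket,
        Key: `blog/${outName}`,
        Body: html,
        ContentType: "text/html",
        ACL: "public-read",
      })
      .promise();
  }
}

publish("blog/input", "blog/output", "my-website-bucket").catch(console.error);
```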

Saturday was spent finishing the above script and working through the needed website changes. I had two main challenges: dynamically discovering what blog HTML files existed, and then pulling them into the site. I assumed I'd do a bucket list-objects call, and then pull the files by accessing them as public-read objects. Figuring out the list-objects REST call was trickier than I thought it would be, so I stubbed the file names I knew I had and worked on the second challenge.

Given a list of filenames, and knowing the bucket, I fetched each of the HTML files from AWS, converted them to strings, and then set the inner HTML of my 'blog-entry' components to that HTML. While this is a hacky and possibly unsafe operation, these files come from the same bucket as the rest of my website, and Angular does do some sanitization automatically these days, so I figured it was 'good enough'. (I also had to fiddle with CORS in the bucket settings to grab the files locally.)
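
The component side is roughly this shape (again a simplified sketch with made-up bucket URL and names, assuming Angular's HttpClient):

```typescript
// blog-entry.component.ts -- simplified sketch; the real component has more going on.
import { Component, Input, OnInit } from "@angular/core";
import { HttpClient } from "@angular/common/http";

@Component({
  selector: "blog-entry",
  // Binding the fetched string straight into the DOM; Angular runs its default
  // sanitization on [innerHTML], which is part of why this felt 'good enough'.
  template: `<div [innerHTML]="content"></div>`,
})
export class BlogEntryComponent implements OnInit {
  @Input() fileName = "";
  content = "";

  constructor(private http: HttpClient) {}

  ngOnInit(): void {
    const url = `https://my-website-bucket.s3.amazonaws.com/blog/${this.fileName}`;
    this.http
      .get(url, { responseType: "text" })
      .subscribe(html => (this.content = html));
  }
}
```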

Listing Files

Next came trying to discover said files to pull. While I've listed objects in buckets plenty of times through the CLI or SDK, I had never made the underlying API call. It took quite a bit of digging to find something that worked, and I also realized I had to update my bucket policy to allow that action. Annoyingly, the response only comes back as XML, so I had to parse the XML and navigate its nodes to get the keys (file names) I cared about.
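
For reference, the browser-side version of that looked something like the sketch below (a reconstruction from memory; the bucket name and prefix are placeholders):

```typescript
// Sketch of the original approach: hit the bucket's list-objects endpoint and
// pull the keys out of the XML it returns.
async function listBlogFiles(): Promise<string[]> {
  const response = await fetch(
    "https://my-website-bucket.s3.amazonaws.com/?list-type=2&prefix=blog/"
  );
  const xml = await response.text();

  // The response is XML, so parse it and walk the <Key> nodes to get file names.
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  return Array.from(doc.getElementsByTagName("Key"))
    .map(node => node.textContent ?? "")
    .filter(key => key.endsWith(".html"));
}
```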

This solution worked until I pushed to 'prod'. In production I got failures to load because of 'mixed content'. I hadn't noticed, but the list-objects API call was HTTP instead of HTTPS. Switching it to HTTPS returned content, but that endpoint didn't have a valid certificate itself, so despite my site having a cert, the request was still considered insecure. At this point I was pretty frustrated with what should have been a simple action, so I decided to take a step back and think of other ways I could solve the problem.

Because I was using a pipeline to generate and deploy the dynamic content, that pipeline already had knowledge of the files. So I decided to have the Node publisher script keep track of the output file names and create an additional text file containing a line-separated list of them. This meant I could hardcode the website to find that 'index' file, without a list-objects call, and then just read that file to know what files to pull. While still somewhat hardcoded, this gives me a nicer separation of concerns and lets the blog own which entries to show (I could push multiple blog entries and only show some of them in the future). It also made the website code simpler, since it only parses a text file. (This feels more unixy too.)
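
The replacement is only a few lines on each side. As a sketch (outputFileNames stands in for the list the publisher loop already builds, and the index file name is made up):

```typescript
// Publisher side: after the conversion loop, write a line-separated 'index' of
// the generated html files and push it alongside them.
import * as fs from "fs";
import * as path from "path";

declare const outputFileNames: string[]; // collected while converting the markdown files

fs.writeFileSync(path.join("blog/output", "blog-index.txt"), outputFileNames.join("\n"));
```

```typescript
// Website side: fetch that index from its known key and split it into file names.
async function listBlogFiles(): Promise<string[]> {
  const response = await fetch(
    "https://my-website-bucket.s3.amazonaws.com/blog/blog-index.txt"
  );
  const text = await response.text();
  return text.split("\n").filter(line => line.trim().length > 0);
}
```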

Other Additions

Later I realized my blog posts should be sorted by date. I considered using the file created/modified date, but that wouldn't work for backdating posts, etc. Once again I relied on a hacky solution that works because I'm the only one who needs to follow the convention: when publishing the files, I read the third line of each post, parse the date, and sort the filenames in my output file list using those dates. This means the output list is in the right order, and the site just pulls and displays the entries in the same order it gets from the list. Not robust, but convenient, and possible since it's easier to enforce a convention with just one dev.
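
As a sketch, the convention looks something like this (markdownByFile here is a hypothetical map from output file name back to its markdown source):

```typescript
// Hypothetical sketch of the ordering step in the publisher.
declare const outputFileNames: string[];
declare const markdownByFile: Record<string, string>;

// Line 3 of each post is its date, written like "11-21-2020" (M-DD-YYYY).
function postDate(markdown: string): Date {
  const [month, day, year] = markdown.split("\n")[2].trim().split("-").map(Number);
  return new Date(year, month - 1, day);
}

// Newest first; the site just displays the index file's order as-is.
outputFileNames.sort(
  (a, b) => postDate(markdownByFile[b]).getTime() - postDate(markdownByFile[a]).getTime()
);
```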

Once I had the HTML in my site, I wanted to make a couple changes to it. I wanted a dynamically generated table of contents, and I wanted each entry's title to be an anchor tag that I could bookmark or share. I debated doing this as part of the publishing step (creating a JSON object with title, id, and content). In the end, though, I liked that the blog publishing was only responsible for the content and its order, without knowing what the website would do with it.

Instead I transformed the HTML in my main blog page component. Here I grabbed the header tag and turned it into an anchor tag with a link to itself (the anchor is actually added in the blog entry component). These ids/blog entry objects could then also be passed to my table of contents so that it could create the links to each section. Finally, I had to add a lifecycle hook to navigate to an anchor tag on page render, so a shared/bookmarked link would scroll to the right post.
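
Roughly, the transformation and the scroll behavior look like this (a simplified sketch; the real logic is spread across a couple of components, and the names here are invented):

```typescript
// Rough sketch of the per-entry transformation in the main blog page component.
interface BlogEntry {
  id: string; // also handed to the table of contents so it can build its links
  title: string;
  content: string;
}

function toBlogEntry(html: string): BlogEntry {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const title = doc.querySelector("h1")?.textContent ?? "Untitled";
  // Turn the title into a url-friendly id, e.g. "Tech Blog Site" -> "tech-blog-site".
  const id = title.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  return { id, title, content: html };
}

// Called from a lifecycle hook (something like ngAfterViewInit) so a bookmarked
// link such as /blog#tech-blog-site scrolls to the right post once it renders.
function scrollToFragment(): void {
  const fragment = window.location.hash.replace("#", "");
  if (fragment) {
    document.getElementById(fragment)?.scrollIntoView();
  }
}
```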

Conclusion

A note on testing: due to my relative lack of skill (and patience) with front ends, I compound my frustrations by not writing many tests. This, and my general hacking together of front end solutions, means that my website is not robust, and I often need to hunt down bugs and self-inflicted wastes of time. This is something I should do better at, but I find it hard to motivate myself when there are other, more exciting things I could spend my free time building.

I sunk a ton of work into creating a rickety solution when there are so many robust, polished tools out there, but I'm really happy with my extremely personalized 'blogging platform', and I really like having a tool that works exactly how I'd like it to, and that I can customize further with any features I think up. Now I wonder: will I continue to use it, or will it become yet another abandoned dev blog?

The week after I made this post, I found this comic that rings pretty true (slight language warning).

Sprite Sheet Gifs

11-14-2020

Mario

My current workplace makes heavy use of emojis in our Slack, and as I was thinking about adding more, I thought that old Nintendo pixel art would be a really good fit. Pixel art is already optimized for small display areas, and reads cleanly in those small reaction areas. Nintendo is also easily recognized and has a huge collection of great characters.

FlyingKoopa

I found a website that has ripped or recreated a bunch of sprites from those old SNES games. However, saving each image one at a time was a pain, so I updated my site crawler to download all the sprites on a page.

MarioFly

Once I had a spritesheet, the next step was to combine the desired frames into a gif. Ideally I'd use the wonderful pixel art program Aseprite to read in the tile sheet as a grid and then export the gif. However, I soon realized that none of the tilesheets I had downloaded were in uniform grids. (It makes sense that someone extracting/recreating assets wouldn't care about making the grid uniform, since the creators aren't thinking about reading these sheets programmatically.)

LinkStruggle

I tried using GIMP to make the gifs by hand, and while that worked, it was laborious and meant I had to line up every frame manually. This got me thinking about how 'easy' it would be to create an app that takes a tilesheet, determines a grid, and then reprints the sprites on that grid. I thought about it for the rest of that week and then spent the weekend building AutoSprite.

Knuckles

AutoSprite reads in an image and finds the background color by finding the color with the most pixels. It then finds all non-background pixels (foreground or sprite pixels) and adds them to a list of 'seeds'. For each seed pixel, we walk all neighbor pixels and add them to the 'sprite' if they're also foreground pixels. Once we have a complete sprite, we remove all of its pixels from our list of seeds. We repeat that until we have no more seeds and instead have a list of sprites.
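
In code, that sprite-finding pass looks roughly like the following (my own TypeScript reconstruction of the idea, not the actual AutoSprite source; isForeground is assumed to compare a pixel's color against the detected background color):

```typescript
// Flood fill from a seed pixel, collecting every connected foreground pixel.
interface Pixel { x: number; y: number; }

function findSprite(seed: Pixel, isForeground: (p: Pixel) => boolean, seeds: Set<string>): Pixel[] {
  const sprite: Pixel[] = [];
  const toVisit: Pixel[] = [seed];
  const visited = new Set<string>();

  while (toVisit.length > 0) {
    const pixel = toVisit.pop()!;
    const key = `${pixel.x},${pixel.y}`;
    if (visited.has(key) || !isForeground(pixel)) continue;
    visited.add(key);

    sprite.push(pixel);
    // Any pixel claimed by this sprite no longer needs to start a new one.
    seeds.delete(key);

    // Only the four cardinal neighbors are walked, which is why diagonal-only
    // stray pixels end up as separate (and usually discarded) sprites.
    toVisit.push(
      { x: pixel.x + 1, y: pixel.y },
      { x: pixel.x - 1, y: pixel.y },
      { x: pixel.x, y: pixel.y + 1 },
      { x: pixel.x, y: pixel.y - 1 }
    );
  }
  return sprite;
}
```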

Sonic

The app then calculates the bounding box of each sprite by grabbing the min and max x and y values across its pixels. Once we have all the bounding boxes, we can figure out the minimum grid size needed to accommodate them all uniformly. With that grid information we can write a new image where the sprites are all spaced uniformly on the grid. I can then load these images into Aseprite, and because the grid is uniform, all the frames line up; I can make any manual adjustments and then export as a gif.
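
The grid step is just min/max bookkeeping, roughly like this (again a reconstruction, reusing the Pixel type from the sketch above):

```typescript
// Bounding boxes per sprite, then the smallest uniform cell that fits them all.
interface Box { minX: number; minY: number; maxX: number; maxY: number; }

function boundingBox(sprite: Pixel[]): Box {
  const xs = sprite.map(p => p.x);
  const ys = sprite.map(p => p.y);
  return {
    minX: Math.min(...xs),
    minY: Math.min(...ys),
    maxX: Math.max(...xs),
    maxY: Math.max(...ys),
  };
}

// The uniform cell only has to fit the widest and tallest sprite found.
function gridCellSize(boxes: Box[]): { width: number; height: number } {
  return {
    width: Math.max(...boxes.map(b => b.maxX - b.minX + 1)),
    height: Math.max(...boxes.map(b => b.maxY - b.minY + 1)),
  };
}
```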

Boo

This method isn't perfect though. For one, I lose some stray/orphan pixels (the app throws away sprites with fewer than 5 pixels, etc.). This is because I'm only crawling pixels in the cardinal directions and I have no tolerance for background pixels. I also pick the largest grid for all sprites, so if a tilesheet has sprites of decently different sizes, the small sprites inherit an overly large grid. I thought about inferring multiple grids, but it seems better to let the user do that by creating multiple input images. Finally, I may reorder sprites, which could mess up natural frame progressions in the tilesheet, but sadly I couldn't think of a way to understand and detect frame ordering.

GoofyBoo

After all that programming, I was too worn out to spend much time making gifs. Hopefully in the future I'll create a bunch more, like the ones in this post. Either way, it was a lot of fun to build these tools.

Yoshi

Hacktoberfest 2020

10-31-2020

PRs

In 2019 I had the good fortune to get to write open source code for work. Almost our entire team participated in Hacktoberfest, as it was basically getting paid to write open source code and get a free T-shirt to boot. Ironically, half of my pull requests were reverts of commits I had previously made.

This year I'm no longer getting paid to write open source code, so I had to actually think about what PRs to make. While I spend a good chunk of my weekends writing code, it's generally for one-off hobby projects where I'm the sole contributor, and branching only makes sense for longer-running experiments; generally I'm just committing directly to master.

I knew I wanted to update my site crawler to support scraping more than one type of website/book so that I could add another book, but coming up with the other two PRs proved more challenging. I didn't want to just cheese things with updates to readmes, but I didn't have another project that fit into discernible PR chunks (as opposed to spending a couple weeks on a larger branch).

In the end I decided to redo the Roman numerals kata in Kotlin, and actually complete it. This chunked nicely into PRs and gave me the two I needed for Hacktoberfest, plus a couple extra. Finally, near the end of the month, I added one last PR for better collision in my platformer experiment/game, Vex.