Work Experience

My career path from Jun. 2015 to Oct. 2022—from when I graduated college to when I quit my latest job. My actual web dev career starts around 2019-2020; everything before that was my data analyst career.

~

Frontend Developer · 3i · Seoul, South Korea

Now that I think about it, the main reason I joined this company was probably that I wanted to experience something new—I knew they were using Vue (no experience) instead of React (what I was most familiar with), and I knew my job was going to have something to do with WebGL technology, which I had zero idea what to expect from. Well, I've also got to admit that this company—at least from what I'd gathered on their website and the products they were selling—gave off a different (in a positive way) vibe compared to all the Korean companies I had experienced before. In fact, I was about to take an offer I'd gotten from another company, but after the final interview with 3i, I changed my mind almost on a whim.

I guess challenging oneself with new adventures from time to time can be refreshing and rewarding, but this time there was just so much to take on that I was pretty much burned out by the end 😅. The biggest challenge was that there was practically no one in the company I could ask for help whenever I got stuck at the code level—all the major devs who had built these Vue.js and three.js apps were long gone by the time I got there. Nor was there any sort of written documentation/wiki about the codebases preserved within the company (I really should've run away at this point, but well, I didn't...).

At least I learned a lot about 3D graphics in this period while working on the complete rewrite of the company's JavaScript 3D engine—something very similar to Google Street View. It was a daunting task that felt impossible at first, but I managed to come up with two(!) versions (v1 & v2) of the rewrite in the end, each accompanied by fairly thorough documentation—the three.js manual and books like Introduction to Computer Graphics helped me immensely in grasping the core ideas of computer graphics, and I should also mention that Three.js Journey helped me a lot in organizing the structure of the codebase.

Speaking of code structure, it was around this time that I first got interested in design patterns, hoping to find proven patterns that fit my codebase. For example, I had a bunch of classes that were oblivious to each other's existence but nevertheless needed to exchange information. So basically I wanted a mechanism to deliver a state change to a target class (from random classes the target class doesn't know exist) and to subsequently propagate that change to whichever classes were interested—in the end, I think I resorted to copying mrdoob's eventdispatcher.js for the job. What I also learned while dealing with several classes was how tricky it could be to come up with the right level of abstraction—sometimes I exposed too few public APIs for a class to be flexible, and other times I exposed so many that the consumers of the class took almost complete control over its internals, which inevitably resulted in tight coupling between them. This whole journey of seeking good design patterns and abstractions left me wanting a college-level CS education even more than usual—in fact, this was one of the trigger points that made me leave the company and take a break from work altogether for a while.
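For the curious, here's a minimal TypeScript sketch of that event dispatcher idea (my own simplification of the pattern, not the actual three.js code):

```ts
// A minimal event dispatcher: a class can broadcast state changes
// without knowing who (or what) its listeners are.
type Listener<T> = (event: T) => void;

class EventDispatcher<Events extends Record<string, unknown>> {
  // One set of listeners per event type.
  private listeners = new Map<keyof Events, Set<Listener<any>>>();

  addEventListener<K extends keyof Events>(type: K, listener: Listener<Events[K]>): void {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type)!.add(listener);
  }

  removeEventListener<K extends keyof Events>(type: K, listener: Listener<Events[K]>): void {
    this.listeners.get(type)?.delete(listener);
  }

  dispatchEvent<K extends keyof Events>(type: K, event: Events[K]): void {
    // Copy before iterating so a listener can safely remove itself mid-dispatch.
    for (const listener of [...(this.listeners.get(type) ?? [])]) listener(event);
  }
}

// Usage: a camera controller broadcasts changes; a minimap reacts to them.
// Neither class needs a direct reference to the other.
const camera = new EventDispatcher<{ change: { zoom: number } }>();
camera.addEventListener("change", ({ zoom }) => console.log("minimap sees zoom:", zoom));
camera.dispatchEvent("change", { zoom: 2 });
```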

Apart from 3D graphics and three.js, I also got to experience some web technologies that were new to me. Most notably, it was my first time using Vue (both 2 & 3), my first time using TypeScript, and my first time with Rollup and Vite. For the docs, I used VitePress and Docusaurus.

It's weird because I can't really articulate why, but while using Vue—even though I didn't have major issues with it—I found myself craving React more. I still remember the impression I got while reading the (old) React docs. It felt like documentation written by very mature and smart computer scientists. For example, when I was reading this section of the docs, I was convinced that whoever wrote it must have known what they were doing; it conveyed such powerful authority. As much as I respect Evan You, I never quite got that impression while reading Vue's docs. And I guess it was also the people around React, such as Dan Abramov, whom I respect a lot as an engineer, that made me want to come back to React. All in all, my career at this company had nothing to do with React, but it ended up making me more loyal to it in the end 😄.

~

Frontend Developer · Dawinproperty · Seoul, South Korea

Having only worked at B2B companies before, I wanted to work on products that targeted customers more directly (B2C), so I was glad to join this company, which runs a real estate website where potential home buyers can come and interact with various information on a 2D area map. The website is also useful for home sellers, who can post their homes for sale.

The website was a React single-page application (SPA) using React Router for client-side routing and Redux-Saga to manage global state. The map itself was provided by a third-party service, the Kakao Maps API. In fact, one of the first things I did after joining the company was to add some additional features to the map, such as a satellite view and a distance-measuring tool, using the Kakao Maps APIs.
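To give a rough idea, the satellite view toggle looked something like this. It's a from-memory sketch, so the constant names (MapTypeId.HYBRID/ROADMAP) should be double-checked against the official Kakao Maps docs:

```ts
// Satellite view toggle with the Kakao Maps JavaScript SDK, which is loaded
// globally via a <script> tag (hence the `declare`). Constant names below
// are from memory; confirm them against the official docs.
declare const kakao: any;

const map = new kakao.maps.Map(document.getElementById("map")!, {
  center: new kakao.maps.LatLng(37.5665, 126.978), // around Seoul City Hall
  level: 5, // zoom level
});

let satellite = false;
function toggleSatelliteView(): void {
  satellite = !satellite;
  // HYBRID: satellite imagery with road/label overlay; ROADMAP: the default map
  map.setMapTypeId(satellite ? kakao.maps.MapTypeId.HYBRID : kakao.maps.MapTypeId.ROADMAP);
}
```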

I also had a chance to work with web forms—the form sellers need to fill out to post their home on the website. There was a lot of information the user needed to fill in, so naturally the form was divided into multiple steps, yet it didn't really have a mechanism to save progress as a draft—which meant the user would lose their form state and have to fill out the form again(!) if something went wrong while navigating back and forth between steps. So we decided to add a draft state of the application in the database and save the draft whenever the user navigated to the next/previous step. It was more of a backend task, but fortunately the backend app was a Node.js app (JavaScript yay) and I had some prior experience with SQL (our database was MySQL), so I didn't have too much trouble implementing Express API handlers and writing SQL queries for them.
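The gist of the draft-saving endpoint looked something like this (a simplified sketch with hypothetical table and column names, not the actual production code):

```ts
import express from "express";
import mysql from "mysql2/promise";

const app = express();
app.use(express.json());

// Hypothetical connection settings and table (listing_drafts) for the sketch.
const pool = mysql.createPool({ host: "localhost", user: "app", database: "dawin" });

// Save (or overwrite) the seller's draft whenever they move between steps.
app.put("/api/listings/draft", async (req, res) => {
  const { userId, step, formData } = req.body;
  await pool.execute(
    `INSERT INTO listing_drafts (user_id, step, form_data)
     VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE step = VALUES(step), form_data = VALUES(form_data)`,
    [userId, step, JSON.stringify(formData)]
  );
  res.status(204).end();
});

// Restore the draft when the user comes back to the form.
app.get("/api/listings/draft/:userId", async (req, res) => {
  const [rows] = await pool.execute(
    "SELECT step, form_data FROM listing_drafts WHERE user_id = ?",
    [req.params.userId]
  );
  res.json((rows as any[])[0] ?? null);
});

app.listen(3000);
```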

Speaking of Node.js, one of the tasks I was assigned had to do with creating an API endpoint—separate from our main backend app—that collected and aggregated some third-party data we wanted to use on our website. I remember having a hard time implementing the 'aggregation' part of the logic. It was in essence similar to a nested SQL GROUP BY followed by averaging or summing the group members (something like Array.reduce())—something I did pretty regularly as a data analyst in R, Python, or SQL. However, it was surprisingly harder to do the same thing in JavaScript. I vaguely remember coming up with some unnecessarily convoluted recursive groupBy() function and a reducer that used a lot of Array.flat() and Array.flatMap().
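In hindsight, the shape of the problem was roughly this (a toy reconstruction with made-up data, not the code I actually wrote):

```ts
// Group rows by a key, then aggregate each group: the JavaScript analogue of
// SELECT region, AVG(price) FROM rows GROUP BY region;
interface Row { region: string; district: string; price: number; }

function groupBy<T>(rows: T[], key: (row: T) => string): Record<string, T[]> {
  return rows.reduce((groups, row) => {
    (groups[key(row)] ??= []).push(row);
    return groups;
  }, {} as Record<string, T[]>);
}

const rows: Row[] = [
  { region: "Seoul", district: "Gangnam", price: 12 },
  { region: "Seoul", district: "Mapo", price: 8 },
  { region: "Busan", district: "Haeundae", price: 6 },
];

const avgPriceByRegion = Object.fromEntries(
  Object.entries(groupBy(rows, (r) => r.region)).map(([region, group]) => [
    region,
    group.reduce((sum, r) => sum + r.price, 0) / group.length,
  ])
);
console.log(avgPriceByRegion); // { Seoul: 10, Busan: 6 }

// Nesting (e.g. region -> district) just means running groupBy again inside
// each group, which is where my version got convoluted fast.
```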

Back to the frontend app, I want to mention that this React app was also suffering from bloated and unmanageable useEffect()s—something I had experienced at the previous company as well. The problem was worse here because there were a few giant React components that many other components depended on, and they were doing too much work—these massive components contained so many (nested) conditionals and so much business logic that it was hard to follow which branch corresponded to which case. I was very surprised that this was, in fact, an intentional design choice made by one of our main frontend developers—she favored having these bloated components around, and every time I needed to add something new, she would direct me to put more logic into them. I, on the other hand, wanted to break these monster components down into smaller pieces so that we'd have more modular and independent components. Unfortunately, every time I suggested or attempted to decompose the big components, she would strongly oppose it—she was a firm believer that things were easier to control and manage with a few big components that do a lot of work. This fundamental difference in views was actually a major reason I left the company.
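Just to illustrate the kind of decomposition I was after (with hypothetical component names): each section owns its own state and logic instead of one monster component juggling every branch.

```tsx
import React from "react";

// Hypothetical pieces: each section owns its own markup, state, and effects.
const ListingSummary = ({ listingId }: { listingId: string }) => <h2>Summary of {listingId}</h2>;
const PriceHistory = ({ listingId }: { listingId: string }) => <p>Price history of {listingId}</p>;
const ContactSellerForm = ({ listingId }: { listingId: string }) => <form id={`contact-${listingId}`} />;

// The page component is reduced to composition; no thousand-line conditionals.
export function ListingPage({ listingId }: { listingId: string }) {
  return (
    <>
      <ListingSummary listingId={listingId} />
      <PriceHistory listingId={listingId} />
      <ContactSellerForm listingId={listingId} />
    </>
  );
}
```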

~

Software Engineer · AgileSoDA · Seoul, South Korea

By repeatedly showing my interest in and aptitude for computer systems and software development, I was finally granted the transfer to the company's engineering team that I had wished for. We had a B2B SaaS product whose target users were themselves data analysts at our client companies, and I was excited to bring my perspective as a former data analyst—more of the actual user's perspective, I'd say—to the product. Old me back then was fairly confident that he could learn new things relatively quickly and get himself up to speed fast enough to handle the tasks thrown at him, but little did he know what kind of challenges were waiting—as the old saying goes, ignorance is bliss 😂.

I mean, I'd had my fair share of coding in R and Python (albeit not production software code) and some sysadmin/IT infra work on Linux, but now we were talking about a full-fledged backend application written in Java, a frontend written in React, MariaDB for the database, and underneath all that, a big (frankly unnecessary) infrastructure to serve and manage all these services, namely Docker and Kubernetes. I remember putting in around 3-4 extra hours after work for months, digging through the codebases and various docs websites—Kubernetes, Eclipse Vert.x, React, MariaDB, Dockerfile, etc.—to make myself useful faster.

Slowly but surely I became more self-sufficient, and the tasks assigned to me evolved from supporting minor feature development to implementing major features independently. As my understanding of the product's implementation grew, however, I began to question the decisions that had been made regarding our tech stack—such as "Was involving Kubernetes absolutely necessary, especially when a single machine would have sufficed to run all our services?", "If managing Docker containers was a concern, why couldn't we just go with docker compose instead of Kubernetes?", "Why this esoteric framework called Vert.x for our backend?", or "Why do we need a distributed storage solution like GlusterFS when we don't really have any storage scaling issues yet?"
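To make the docker compose point concrete: for a single-machine deployment, something like the following (with hypothetical service names, not our actual stack) would have replaced an entire Kubernetes control plane.

```yaml
# docker-compose.yml: one machine, three services, no control plane.
services:
  frontend:
    build: ./frontend
    ports:
      - "80:3000"
    depends_on:
      - backend
  backend:
    build: ./backend
    environment:
      DB_HOST: db
    depends_on:
      - db
  db:
    image: mariadb:10.6
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```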

I was especially disturbed by the complexity of the frontend codebase, which had originally been bootstrapped from this boilerplate project. The starter project (which was outdated and unmaintained) was meant for web apps that needed both server-side and client-side rendering, but we weren't using server-side rendering at all—we were only rendering the UI client-side, which means a static server would've been sufficient to serve the skeleton HTML, client-side JavaScript bundles, and other static assets. Instead we had a full-blown Node.js server and all the other complexity that comes with the isomorphic rendering technique. I was so frustrated with the frontend codebase that I gave an internal presentation at the company to raise my concerns.
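In other words, something as small as this sketch (assuming a typical build/ output directory) would have covered our actual serving needs:

```ts
import express from "express";

// Serve the built SPA assets, falling back to index.html so client-side
// routes still resolve on refresh.
const app = express();
app.use(express.static("build"));
app.get("*", (_req, res) => res.sendFile("index.html", { root: "build" }));
app.listen(8080);
```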

Shortly after, though, I was given an opportunity to implement the UI for the new product from scratch. I was in high spirits at the beginning because this time I was able to use React hooks (every React component in the existing codebase was a class component). While I tried hard not to include any unnecessary dependencies, I relied heavily on Blueprint.js and TanStack Table for complex components such as forms, dialogs, and tables. For charts and graphs, I used Apache ECharts and react-chartjs-2.

Despite my initial excitement for React hooks, however, I noticed some of my useEffect() hooks were getting incredibly complicated, to the point where I was no longer able to analyze what they did without slapping console.log()s here and there and checking the runtime behavior. I failed to manage the complexity of some React components, and I probably also overused the useEffect() hook unnecessarily due to my lack of understanding of React's rendering model at the time. I must admit that in the end I was afraid to even touch some of my useEffect()s because they were so brittle that the slightest change would break something unexpectedly—my own poor usage of useEffect()s left me traumatized and made me question the usefulness of React hooks for a while 😅.
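Here's a distilled example of the kind of useEffect() misuse I mean (not my actual code; the real effects were far gnarlier): mirroring derived data into state via an effect, when computing it during render would do.

```tsx
import React, { useEffect, useState } from "react";

// What I kept doing: mirroring derived data into state via an effect,
// which adds an extra render and a dependency list to keep in sync.
function FilteredListBad({ items, query }: { items: string[]; query: string }) {
  const [filtered, setFiltered] = useState<string[]>([]);
  useEffect(() => {
    setFiltered(items.filter((item) => item.includes(query)));
  }, [items, query]);
  return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
}

// What I should have done: derive it during render, with no effect at all.
function FilteredListGood({ items, query }: { items: string[]; query: string }) {
  const filtered = items.filter((item) => item.includes(query));
  return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
}
```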

~

Data Analyst · AgileSoDA · Seoul, South Korea

This was the third and last job I had as a data analyst. This particular company was where I was later able to switch positions to software engineer, which eventually evolved into an entirely frontend position. For almost half of my time here as a data analyst, I was involved in projects that attempted to predict future stock prices (or their directions), which—not surprisingly—failed spectacularly, resulting in models whose prediction accuracies were around 50% (equivalent to a coin toss 😂). In hindsight, we were trying too hard to find even the smallest signal in stock market data, which I believe is inherently random, with no meaningful pattern to detect and capture with a model. Relatedly, it was around this time that I started to lose interest in data analysis as a profession. I felt especially frustrated when the data I was analyzing didn't really have any meaningful or interesting patterns—in that case, no amount of work on my end could turn the results around. It was basically garbage in, garbage out.

In contrast, my interest in computer systems and software engineering grew steadily while doing things like shell scripting, spawning processes, killing malfunctioning processes, cron jobs, SSH tunneling, etc., that were necessary for our data analysis team to do its job—I mostly volunteered for this kind of peripheral work because it was fun to do.

One particular project I was involved in had to do with data migration—from the client's Oracle database to our data analytics cluster, which used HDFS as its default file system. It sounded daunting at first, but a tool called Sqoop helped me move the entire Oracle database and its tables to HDFS as Hive tables without too much trouble. I also needed to port the Oracle SQL queries we were using in the project to HiveQL so that our queries would still work after the migration was done. This was supposed to be a job for the project's engineer, but his workload was so heavy at the time that he couldn't come to the project site (the client's office where we—the data analysts—were working alongside the client) as much as we wanted him to, so I ended up doing it myself. In fact, episodes like this later helped me make the case to management that I wanted to move to an engineering position.
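From what I remember, the Sqoop invocation looked roughly like this (connection details and names are placeholders, and the flags are from memory, so verify against the Sqoop docs):

```sh
# Pull one Oracle table into HDFS and register it as a Hive table.
sqoop import \
  --connect jdbc:oracle:thin:@//oracle-host:1521/ORCL \
  --username analyst \
  --password-file /user/me/.oracle_pw \
  --table SALES \
  --hive-import \
  --hive-table sales
```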

Personal Project · Mini Hadoop Cluster

Circa 2016-2017, I was more than fascinated by the concept of distributed computing, represented primarily by Hadoop and Spark. The very idea that multiple computers, connected through a network, could work together as a single entity (a cluster) to handle expensive computations beyond any single machine was just phenomenal to me.

I could've just spun up multiple virtual machines on my laptop to provision and play with a Hadoop cluster, but instead I decided to create one composed of physically separate machines—which, now that I think about it, was totally unnecessary and kind of a waste of money, but at the time I thought it'd be more fun that way... and well, it actually was 😄. Although I didn't really use this cluster in any meaningful way after I was done configuring it, it would later be transformed into my Kubernetes dev cluster and serve me well at work.

~

Data Analyst · Mobigen · Seoul, South Korea

It was my second job as a data analyst. I remember this company had a much bigger engineering department than my first company, probably because it already had a B2B SaaS product or two being sold when I joined—they were putting together a data analytics team for the first time when they hired me. I didn't know the details of the products, but the software engineers developing them used Python as their primary language of choice (or so they said, since I never saw or touched the actual product codebases). Naturally, there were quite a lot of competent Python programmers in the company, including the then-CTO, who led a weekly Python study group for new hires, which I was a part of. There were also a few data analysts already working at the company by the time I joined, and they too mostly used Python for their tasks. So it felt natural for me to learn and explore data analysis tools in the Python ecosystem—I remember using tools like scikit-learn, pandas, and matplotlib.

To me, 2016 and 2017 were the prime time for machine learning. Well, I think it wasn't just me; there was a lot of excitement in general around machine learning in that period, thanks to people like Andrew Ng, Christopher Bishop, and Yann LeCun giving public, accessible lectures and talks. It was also the period of artificial neural networks, deep learning, and TensorFlow as I remember it.

~

Data Analyst · NowDream · Seoul, South Korea

This was my first job as a data analyst, fresh out of college. With my memory still relatively fresh—which is no longer the case—I remember trying very hard to apply what I'd learned at school to the tasks I was assigned.

It was also around this time that I started developing an interest in the computer systems behind the scenes. You see, when I was in school, my typical IDE was a desktop app called RStudio, and that was it. But now my IDE was running on a server, and I accessed my RStudio through the browser (in the form of a web app). At the time, this RStudio "server" and the web UI I could connect to through the browser felt almost like dark magic 🤯. And to control this mysterious server—which was nothing but a desktop PC quietly sitting in a corner of the office—I had to connect to it through something called an SSH client (I mostly used PuTTY back then), which would instantly start a Bash session once connected—it was the first time I had interacted with a Linux OS. I was so fascinated by the Linux shell environment that I would randomly connect to the server just to play around with Bash and its utilities.

It was also the period when the big data hype was at its peak, and I had a chance to play a little bit with Hadoop and Hive—as a matter of fact, HiveQL was my first experience with SQL.

UCLA · Statistics, B.S. · Los Angeles, CA

Some group project slides I found still sitting in my Google Drive 😂