The day I gave up on the title naming scheme
Ok, to cut straight to the chase this time — what did I do today? The quick and simple answer to that is that I spent a lot of my time continuing to work on improving statusbot and trying to add in as many features as possible before I have to go. The long answer? Well, that’s going to be quite interesting…
“Pizza time” – Spider-Man
So, at the start of the day, I was anticipating yet another perfectly normal workday. I’d come in, get my workstation set up, boot up my computer, and find a new problem/feature to solve/implement. But then the first thing arose — the monthly(?) scrum demo session. I’ve had this on my work calendar for a while, but I definitely did not understand what it would be at first glance. I mean, what do you think a scrum demo session would be? Well, it turns out that it’s a pretty interesting event where all the various teams at Pendo (Ion Chefs is a team, for example) get together and demo what they accomplished over the past sprint. Some of the teams had “physical” products to show, such as improved analytics software, while other groups had slightly less tangible things to present — the Perf Serfs (another backend team, focused on performance) simply reported a “33% speedup of [something] for desktop clients”. Honestly, the presentations let me see more clearly the difference between the Ion Chefs and the other teams, given that the Ion Chefs operate on a Kanban system instead of scrum. The key difference was that the other teams all worked towards their own features to implement, whereas the Ion Chefs, much like the Perf Serfs, seem to work towards more arbitrary goals or scattered tasks that don’t really fit into other categories. Because of that, they have fewer tangible products to show, but in turn they can say that they directly touched a lot more of the code and shipped many fixes.
But moving on — the second obstacle of the day — project ownership migration. As all good things do, this temporary internship must come to an end. Because I have to leave on Friday, Pendo has to figure out what to do with the code that I’ve been working on. Obviously I can’t continue to work on it, and it’s pretty hard to hand it to someone already on a team and tell them to just keep going, so the obvious solution is to relegate it to a summer intern. And that’s where the next major challenge comes into play for me — remember how long it took for me to get used to and fully understand the codebases that I’ve been working with? I now need to help make sure that it’s easy for Riley, the summer intern, to pick up where I’m leaving off and keep the ball rolling on the project. Interestingly enough, Riley was actually an intern for some time last year, so she already has some experience with the codebases that I’ve been working on — she originally implemented one of the commands that I had to reimplement for statusbot! But back to the project transition: my goals for tomorrow will be drastically different from what they’ve been in the past, as I’ll not only need to get as much done with statusbot as possible, but I’ll also need to make sure that Riley knows enough about the codebase to pick up smoothly from where I’ll be leaving off. But that’s all boring human stuff. I’m sure you’re here for the [TECHNICAL STUFF WARNING]
Here’s literally everything I’ve written for statusbot, but obfuscated because I suck at screenshots and this definitely isn’t intentional.
So, what’s the issue of the day that I get to talk about? I mentioned this yesterday, but webhooks rely almost entirely on JSON and HTTP POST requests. Now, when the parser you use to handle the inbound POST requests doesn’t like the data that it’s getting, your entire codebase usually hits the fan pretty hard. And, of course, the parser isn’t usually able to provide verbose information about whatever issue arises, so you end up knowing only that there was an “error parsing payload”. I mean, given all of that, what do you do? Well, in order to avoid the (on average) four-minute deploy cycle to Google Cloud just to test the application, I decided it would be a pretty good idea to figure out how to host the application locally instead. But that in itself leads to even more issues — locally you don’t have access to the tokens that you need (except I fixed that before with the hidden JSON file), you don’t have access to App Engine’s Datastore (so you don’t have the real testing data), and you definitely don’t have access to the internet, so you can’t even connect to the services you’re testing in the first place. But, as it turns out, the solutions to all of those were pretty easy — take the lazy route. The only real issues that can’t be fixed by just hardcoding values are handling the inbound webhooks and outbound API calls.
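To make the “error parsing payload” problem concrete: the trick is to log the raw body whenever parsing fails, so you have something to replay locally later. None of statusbot’s actual code appears here — this is just a minimal sketch in Python (statusbot’s real stack may well differ), with a made-up handler name and a made-up Slack-shaped payload:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("webhook")

def parse_payload(raw_body: bytes):
    """Parse an inbound webhook body, logging the raw bytes on failure.

    Instead of surfacing only "error parsing payload", keep the evidence
    around so the exact request can be reproduced locally later.
    """
    try:
        return json.loads(raw_body)
    except json.JSONDecodeError as e:
        # Log the offending payload verbatim (truncated) -- this is what
        # makes local reproduction possible once you download the logs.
        log.error("error parsing payload at char %d: %r", e.pos, raw_body[:200])
        return None

# A Slack-style payload parses fine; truncated JSON gets logged instead.
ok = parse_payload(b'{"type": "event_callback", "event": {"type": "app_mention"}}')
bad = parse_payload(b'{"type": "event_callback", "event": ')
```

The point isn’t the parsing itself — it’s that the failure path preserves the input instead of throwing it away.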
So what’s the easiest solution to that problem? Pretending to be Slack/GitHub, of course. Basically, when you run the application in a testing environment, it turns into a mini version of the cloud server that it’ll ultimately be deployed to and run on. That means you can still send requests to the application running in the local testing environment. So, how do you do that? A few jankily written scripts that become defunct the moment you remember that curl is able to make POST requests. What I basically ended up doing was logging all of the inbound packets to the cloud application, downloading and processing them, and then sending them to the local testing application. It’s definitely a (probably over)complicated solution to the problem, but it works. And that’s what software development is about, after all.
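The log-and-replay loop above can be sketched end to end. This is a hedged, self-contained Python toy, not statusbot’s code: the `FakeBotHandler` stands in for the locally hosted app, and `replay` plays the role of Slack/GitHub by POSTing a previously logged payload at it (all names and the payload shape are made up for illustration).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the locally hosted bot: it parses the JSON it
# receives and echoes the event type back, so we can see the round trip.
class FakeBotHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        reply = json.dumps({"received": event.get("type", "unknown")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass

def replay(payload: dict, url: str) -> dict:
    """Pretend to be Slack/GitHub: POST a previously logged payload."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

server = HTTPServer(("127.0.0.1", 0), FakeBotHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# In practice this dict would come from the downloaded cloud logs.
logged_payload = {"type": "event_callback", "event": {"type": "app_mention"}}
result = replay(logged_payload, f"http://127.0.0.1:{server.server_port}/webhook")
server.shutdown()
```

And of course, once the payload is sitting in a file, curl does the same job in one line: `curl -X POST -H "Content-Type: application/json" -d @payload.json http://localhost:8080/webhook` (path and port hypothetical).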
Some fun stats — preemptive end results of the internship (if you care about stats):
- Lines of code written (statusbot): ~1350
- Lines of code debugged (in pankbot): ~950
- Commits authored (total): ~20
- Enjoyment (total): ∞
- Words written for this entry: too many
~John