Intern Interviews

Over the past week or so, my teammates and I have been interviewing potential candidates for a summer intern position. It’s a development role, so of course, we have been asking some basic coding questions. We have also asked the candidates to solve a couple of simple programming challenges, both before and during the interview.

If you’ve ever interviewed for a programming position before, the idea of coding as part of an interview probably sounds pretty normal. It is quite common to ask these sorts of questions, even of programmers applying to much more senior positions, strictly because of the small but real number of “phony” programmers applying to highly skilled positions who actually cannot code their way out of a wet paper bag. We’ve all read the horror stories.

But that’s not what this post is about. This post is about what we do after we have concluded that the candidate actually knows how to solve very trivial programming problems. In our current batch of interviews, as the last question, we have been presenting a coding problem that consists of three phases:

The First Phase

The first phase is just a simple text processing problem that we expect the candidate to solve in a specific manner. So far, no one has strayed outside of our assumptions, and no one has failed to solve this portion. Even so, this first part actually provides a smidgen of useful information. The main thing we have taken notice of is the candidate’s choice of programming language.

Due to the disconnected bubble of CS academia, all of these candidates know at least C and Java. And without fail, these are the languages that they claim to be the strongest in. A handful of our candidates chose to do the text processing problem in C, which is an immediate handicap. And indeed, all of the candidates who tried to solve the problem in C have had issues with pointer manipulation. It’s not so much the problems with pointers that concern me, rather it is the default choice of C to solve a problem that would be much easier in a language like Java (or Python, or C#, or whatever). For every candidate, we specified that they are allowed to use any language, even pseudocode, to solve this problem. So why would anyone choose C?
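To illustrate the point, here is a hypothetical stand-in of roughly the same flavor as our problem (the task and the function name are invented for this post, not our actual question): counting word frequencies in a string. In a high-level language this is a few lines.

```python
from collections import Counter

def word_frequencies(text):
    """Count how often each word appears, ignoring case."""
    return Counter(text.lower().split())

print(word_frequencies("the quick brown fox jumps over the lazy dog"))
```

In C, the same exercise means manual tokenizing, buffer management, and a hand-rolled hash table, which is exactly where the pointer troubles crept in.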

The Second Phase

The second phase of our question is just a test that the candidate understands and can apply a basic CS concept. In this case, the concept is recursion. I have begun asking this question in two steps, first by simply saying something like “can you solve this problem without using any looping keywords, like ‘for’ or ‘while’?” If the candidate seems stumped by this, we will connect the dots for them and mention that the concept they will need is called “recursion”, and so far everyone has been on board from at least that point.
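Our actual prompt builds on the phase-one program, but a toy version of the same idea — no ‘for’, no ‘while’, just recursion — might look like this sketch (the problem here is invented for illustration):

```python
def reverse(s):
    # Base case: the empty string (or a single character) is its own reverse.
    if len(s) <= 1:
        return s
    # Recursive case: last character first, then the reverse of the rest.
    return s[-1] + reverse(s[:-1])
```

The shape is what we’re after: a candidate who can name the base case and the recursive case out loud is demonstrating the concept, whatever the specific problem.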

I don’t believe that you can get even a year into a CS degree without being introduced to the concept of recursion, so I’m not too worried that asking this question is alienating anybody. It’s somewhat arbitrary, but I think problems that can be solved recursively come up in the real world much more frequently than anything involving sorting algorithms or red-black trees, for example.

The Third Phase (a.k.a., The Perl Phase)

Early on in the interviewing process, one of the guys on the team suggested a new type of coding question. As I mentioned in my last post, a large part of what my teammates and I deal with is a messy, legacy Perl codebase. So my teammate suggested, in not so many words:

Why don’t we have them try to write some Perl? Just say, “here you go, you have the full power of the internet”, and let them try to figure out how to solve a simple problem in a programming language that they’ve never used before?

This actually turned out to be a stroke of genius. Perl is not a bad language by any stretch, but it is not exactly en vogue right now, so needless to say, nobody we interviewed knew even the first thing about it. This is a good thing. Remember, these are intern candidates we’re interviewing. A major part of being an intern is learning new things. If all you know is C and Java, you better believe you’re going to be learning some new languages if you’re going to work on our team.

Chris Shaver summed it up in a recent post:

To find out what the candidate knows, ask them about something they know…

In a similar vein, to find out how a candidate learns, ask them to code something in a language that they don’t know. For anyone who made it through the first two stages with no major issues, we presented this exact scenario to them. We gave them a clean browser opened to google, a text editor with Perl syntax highlighting enabled, and a terminal. Then we instructed them to do whatever they needed to do to write the same program as before in Perl.

Don’t get me wrong, we’re not cruel. For every candidate, we clarified that we weren’t really looking for a working solution at the end of it; rather, we were just interested in how the candidate tries to solve a new problem. Let’s be fair here: the majority of what a developer does in day-to-day work is “advanced googling”. We wanted to find out how good these candidates were at searching for information and learning from it. There are a few things that I learned from this experiment:

  1. Anyone capable of writing a basic computer program must also have some basic googling chops, and can come to some useful pages without too much trouble. Surprisingly, this did not turn out to be a particularly enlightening part of the process. No one really struggled to come up with good sources.

  2. The way a developer consumes the information seems to be much more pertinent than actually being able to find it. One candidate in particular was skimming through many pages very quickly. While he may have come across some good sources, they were rendered useless by the candidate skipping over large portions of important information in favor of quick, small code samples. On the other hand, the only candidate who actually ended up with a working solution worked through the sources slowly, with an apparent goal of understanding the information.

  3. This process can give insight into how a candidate works through problems. One candidate in particular, with a bit of prodding, showed that he was able to segregate the different pieces of logic in his program and test the correctness of each portion to find out where an issue might be present. One flaw in this part of the exercise was exposed by the sole candidate who produced a working solution: she arrived at it before ever running the application. That gave great insight into her ability to search for and process information on the internet, but none into the way she works through problems.

All in all, I think this was a useful experiment in interviewing. During technical interviews, we are expected to do some very unnatural things, and this was a way to see how a candidate works in a more realistic environment. When is the last time you wrote a program longhand on a whiteboard, without consulting any sort of documentation, and without being able to run the code? It was probably while interviewing for your current job.

Legacy Code Non-Proliferation

At work, one of the projects that my team maintains is a legacy Perl codebase. This project originally contained both the client and server elements of a homegrown build system, as well as a homegrown SCM system, among other things. This project was initially written by a sole developer in the 90s, and over the years it has been maintained by probably 10 or so developers, none of whom likely knew Perl before working on this project.

Obviously, this is not an ideal situation for code quality. And indeed, this project has some pretty bad code. I’m talking single letter identifier names, global variables, no use of OOP whatsoever, massive subroutines, and very minimal use of the module system. In short, it’s a mess.

Fortunately, a lot of this code is not actually used anymore.

It isn’t uncommon to come across a file that was last modified over 10 years ago. In certain source files, if you grab the name of a function from its definition, there’s a decent chance that a grep won’t return any other results for that name in the entire codebase.
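That grep check is easy to automate. Here is a rough sketch of the scan, in Python rather than Perl and with invented names, assuming a tree of .pl and .pm files: count how many lines in the tree mention a sub’s name. A name that shows up exactly once (its own definition) is a strong pruning candidate.

```python
import os
import re

def count_occurrences(root, name):
    """Count mentions of `name` as a whole word across a Perl source tree."""
    hits = 0
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            if not fname.endswith((".pl", ".pm")):
                continue
            with open(os.path.join(dirpath, fname), errors="ignore") as f:
                for line in f:
                    hits += len(re.findall(r"\b%s\b" % re.escape(name), line))
    return hits
```

This is deliberately naive — it will count mentions in comments and strings too — but as a first-pass signal it matches what we get out of grep by hand.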

For this reason, I have proposed a policy of legacy code non-proliferation to my teammates. The gist of this policy is that any time a developer is tinkering in a specific area of this project, any clearly redundant or unused functionality should be dropped along with any other changes that are occurring. The goal here is not to go out of our way to weed out unused code, but if we happen to notice any “dead branches” in the area we’re working in, we will prune them.

When dropping this old code, we will segregate the removal into its own submission, and it will be tagged with a standard bug number. That way, if some legacy functionality that we’ve dropped actually turns out to be super important, it’s a simple process to restore it. After ripping out the old code, we can continue working in this area of the project with less of a mental burden.

So how did we even get into this situation?

There are a few specific styles of unused legacy code that I have come across:

  • One thing I have noticed a few times is instances where someone refactored a piece of commonly used code into its own function in a module, but the developer failed to update all of the instances of this functionality to reference the module. Or perhaps they did do the refactoring, but they left the old implementation commented out, presumably for the sake of posterity. The fix here is simple. Replace the redundant code with the generalized implementation that someone has already written for us.

  • Another thing I’ve noticed is the use of boolean flags to toggle between new and old functionality. This practice is fine when actually testing out a new way of doing things, but once you actually make the switch for good, it is important to remove the dead code path. Again, fixing this is fairly straightforward. Find every branching structure where we toggle between the old and new functionality, and simply remove the old branch and any code that is exclusive to it.

  • The last scenario is simply code paths, or full scripts, that are no longer used. An example of this is a script I came across that used to be called by users directly, on the command line. There were several different ways that a user could call this script, with a handful of different arguments. Today, no one uses this script directly, rather it is called indirectly from a UI application in a fairly restricted manner. A large part of the original functionality is no longer required. Again, our task is relatively simple. Remove the unused script or code path.
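The flag-toggle case above is worth a concrete sketch. In Python rather than Perl, and with entirely hypothetical names, the before-and-after looks something like this:

```python
def legacy_parse(record):
    # Old implementation, reachable only through the stale flag below.
    return record.split(",")

USE_NEW_PARSER = True  # flipped for good years ago; now effectively a constant

def parse(record):
    # Before cleanup: the else-path is dead once the flag is permanently True.
    if USE_NEW_PARSER:
        return [field.strip() for field in record.split(",")]
    return legacy_parse(record)

# After cleanup: the flag, the branch, and legacy_parse all disappear.
def parse_cleaned(record):
    return [field.strip() for field in record.split(",")]
```

The two functions behave identically for every input that the live code path can see, which is what makes the removal safe to do in its own tagged submission.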

By no means am I suggesting that we all go diving into our old legacy codebases and ripping out anything that seems unnecessary. However, if you are actively working in a legacy codebase, maybe employ some of these techniques to try to reduce the amount of cruft, with the goal of eventually honing down the codebase into a useful nugget of core functionality. Removing unused legacy code is the first step in making it easier to reason about the overall design of the application and facilitating future improvements.

What's the Matter with Kansas?

As January wrapped up, I completed the first book in my reading list. The full title is What’s the Matter with Kansas? How Conservatives Won the Heart of America by Thomas Frank.

The book covers the rise of right-wing populism among conservatives in US politics, and Frank uses the changes in his home state of Kansas as a token for the movement as a whole. Using a barrage of references from literature, radio, and television pundits, he outlines why and how the movement started, and then how its supporters actually began to succeed in getting people elected to public office. Frank refers to it as the “backlash”, and it is essentially what turned into the tea party movement.

The book is overall well-written, and appears to be built on the results of extensive research. Frank can get a bit long-winded at points, diving deep into exposition of his interactions with local residents rather than simply squeezing out the key morsel of information that he’s working towards. Apart from a few instances of this, I really enjoyed the read, and I highly recommend this book to anyone interested in US politics.

What’s amazing about this book, though, is that it came out in 2004. Let that sink in for a minute. That’s about 5 years before the beginning of the tea party movement. It’s 12 years before Donald Trump was elected as a president (supposedly) representing conservative populism.

Reading this book in the current political climate makes Frank seem like a soothsayer. Adding to this feeling is a small section in the epilogue that talks about the state of the Democratic party. Frank posits that the Democrats are becoming too friendly with big corporations, and are alienating working-class voters. It’s almost as if he had seen the entire 2016 presidential election play out 12 years prior. Frank has a new book dedicated to this topic called Listen, Liberal: Or What Ever Happened to the Party of the People?. Stay tuned for a future review of this one.

Being a flaming liberal myself, I have been immersed over the past couple weeks in the various movements that have sprung up post-inauguration to attempt to bring the Democratic party back to its progressive roots to win back some political ground. These are, you guessed it, populist movements that aim to get money out of politics and elect modern Democratic candidates who work for the people, and not big corporations. Who knows, maybe we’ll see a successful left-wing backlash in the coming election cycles. Hopefully it’s not too late…


Recently I saw a Facebook post from my cousin that caught my eye. He said that he would be reading 52 books this year, one book per week. This resonated with me when I saw it. I like reading, and I like the idea of a well-defined, easily validated goal.

Don’t get me wrong, I do not have the willpower to read a book every week. However, I was inspired enough to aim for a similar goal of my own. Last weekend I created a rough list of 12 books on a post-it note that I will aim to read this year. The full list is:

  • Jan - What’s the Matter with Kansas - Thomas Frank
  • Feb - Don Quixote - Miguel de Cervantes
  • Mar - Our Revolution: A Future to Believe In - Bernie Sanders
  • Apr - We - Yevgeny Zamyatin
  • May - Scar Tissue - Anthony Kiedis
  • Jun - On the Road - Jack Kerouac
  • Jul - God Is Not Great: How Religion Poisons Everything - Christopher Hitchens
  • Aug - Breakfast of Champions - Kurt Vonnegut
  • Sep - The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity - Alan Cooper
  • Oct - The Great Gatsby - F. Scott Fitzgerald
  • Nov - Walden - Henry David Thoreau
  • Dec - Love in the Time of Cholera - Gabriel García Márquez

I may end up swapping one or two out here and there, but this seems like a pretty good starting point to me. The list alternates between fiction and nonfiction, and I’ve attempted to keep a reasonably varied selection to keep things interesting.

The main goal here is to keep myself reading, and I have created a similar goal to attempt to keep myself blogging, so keep an eye out for monthly posts about each book at the very least. I’m currently wrapping up What’s the Matter with Kansas, so there will be a post about it in the works very soon.

Apart from that, stay tuned for some posts that are actually about software development maybe?

NOFX - The Decline


Yesterday I attended a meetup at the Microsoft New England Research and Development Center in Cambridge! Despite being in a different city, it was only a few short miles from my overpriced apartment in Boston.

Networking and Pizza

This meetup was more organized than most, and included an actual schedule. The first order of business was eating pizza and networking with other developers. I chatted with a developer who works remotely for StackOverflow, and another dev who is currently looking for a C++ gig.

Being my first meetup in the area, I didn’t know anybody, so it was good to talk to a few people at least. During the networking portion I was mainly distracted/bemused by the sights of Shawn Wildermuth and Miguel de Icaza just…hanging out with everybody. It feels stupid to admit, but I guess I didn’t realize that developers who have attained a “rock star” type reputation are still only rock stars in the relatively small world of software development. They don’t have handlers. They don’t appear from backstage all of a sudden when the lights dim. They enjoy the free pizza just as much as you do.

Hello World

The primary objective of this meetup was to record an episode of Shawn’s Hello World Podcast, which is currently on tour visiting a bunch of cities across America along with a handful of international stops. The podcast uses an interview format where Shawn talks with developers who speak about their roots and history in the development field. In this instance, the subject of the episode was Miguel de Icaza.

Miguel talked about his roots in open source, and how he contributed as a hardcore OSS developer for a long time. He eventually moved into the world of closed-source software, only to be bought out by Microsoft, who then proceeded to open source his product and give it away for free in an exercise of ridiculous irony.

An interesting tidbit from the podcast was that Miguel considers Gnumeric, a spreadsheet application, to be the most beautiful code he has ever written, and he’s not above admitting it. On programming forums, you’ll often see people asking for examples of beautiful code that they can learn from. Maybe this could be a good example.

The podcast was recorded live at the event, but based on the upload schedule, it should be available on Shawn’s site within the next few weeks. I highly recommend giving it a listen once it’s available.


After the podcast, Shawn did about an hour-long demo/introduction to ASP.NET Core. If you’ve seen any of his Pluralsight courses, you’ll know that he is somewhat of a subject matter expert when it comes to web development on the Microsoft stack, so I was very excited for this part.

After seeing Shawn’s presentation, it has never been more clear to me that ASP.NET Core is effectively Microsoft’s response to the development style of Node.js with Express. This is not a bad thing. In fact it’s great! C# is a beautiful language, and JavaScript…isn’t. Bringing the design ideology behind Express to ASP.NET is a great thing in my opinion.

With the standard ASP.NET MVC project template, you’re given a massive pile of included features whose purposes you don’t even know, and virtually every resource for learning the framework just rolls with it. It reminds me of people starting with Java as their first programming language and being told to ignore most of the boilerplate they’re required to write just to get a simple “Hello, world!” printed to the console.

So Shawn started his demo from basically nothing. All he had was a console application that listened for web requests. During the talk, he slowly added and configured pieces of middleware from NuGet, demonstrating the flexibility of the new framework. Even fundamental things that you’d assume are included by default were actually added to the base web app:

  • Application configuration
  • Support for serving static files
  • The .NET Framework itself

These were all included as NuGet packages and tied into the web application as middleware. It was a pretty compelling demo, and like the podcast, the code should be available in the coming weeks at Shawn’s GitHub repo for the tour.

Overall, this was a fun and interesting event. I definitely hope to go to more like this in the Boston area. I met other local developers, I learned new things, and I even got some free pizza out of it. What more is there to ask for?

Delta Sleep - Lake Sprinkle Sprankle

Get Involved

Hello, my name is Evan. This is the first post of my new blog, “Evan Is Great”. The focus will mostly be on software development, however I will likely sneak in a post about homebrewing or other topics that I’m interested in from time to time.

The inspiration for this blog was provided, in part, by a documentary by Scott Hanselman and Rob Conery called Get Involved!. You can watch it for free on Pluralsight. The purpose of the documentary is to present information on getting involved (never would have guessed) in the software development community, and an entire chapter is dedicated to blogging. Scott encourages developers to blog about things they’re “jazzed” about, and I intend to do just that.

To start out, however, I will probably be blogging about getting involved in the community a lot. See, I recently relocated from my hometown of Milwaukee, WI to Boston, MA for a job opportunity, and I know a grand total of 0 people in this city. Due to this, I consider getting involved to be a genuine necessity should I desire not to go insane and become a hermit in but a year’s time.

Next week I will be attending an event hosted by Microsoft DevBoston wherein Shawn Wildermuth will record a podcast with Miguel de Icaza as part of his Hello World Road Trip tour, then give an hour-long presentation on ASP.NET Core. I’m jazzed about every word in that sentence, so it should make Scott happy.

Stay tuned for my first real foray into “getting involved” in Boston next week.

Ditch Croaker - Library Shrine

© 2017 Evan Ogas