Stop Being So Clever

One of my favorite things about software engineering is the short feedback loop. It is one of very few professions that allow you to postulate, implement, and test a solution to a problem within hours or even minutes. And if it doesn't work, you simply try something else. The cost of trying new things seems pretty low when all you need to do is tweak some text files.

And when you do solve a problem it's a rush. When your solution is elegant it's almost intoxicating. We love solving problems elegantly through:

  • Efficient data structures and smart algorithms
  • Clever abstraction
  • Clean code, with little or no duplicated logic

Elegant solutions are where I think engineers tend to think of themselves as craftsmen. However, elegant solutions are not always good solutions. Elegant solutions are often the beginnings of technical debt.

Elegance is not an end. This is obvious. But I think the line between high quality solutions and elegant solutions is often blurred. It's often hard to recognize in the moment that we're going too far, that we need to stop being so clever.

My Lesson in Elegance

When I was a consultant, I worked on a large project for a private equity fund-of-funds client, replacing their proprietary investment transaction system. The old system was written in Delphi and was starting to fail badly. A lot of the application relied on database triggers and stored procedures to do the heavy lifting of calculations and business logic. The Windows application would deadlock on large calculations like the year-end valuations of funds. Almost all other technology in their stack ran on the .NET Framework as internal web applications or services, which were much easier to administer and deploy.

Accountants spent a large amount of time doing very manual calculations to break out transactions pro rata (by ownership percentage). In particular, their clients invest annually in their funds, and those funds make investments in external partnerships. However, the cash flows and returns are often sent at different levels in this hierarchy, sometimes through other investment companies or commingled funds. Another time sink was moving data between different systems.

So in short, the general goals were to:

  • Move the technology to the .NET web stack
  • Make the user interface fast and similar to Excel
  • Integrate with other services to speed up data entry

At the onset of the project, my main focus was on these problems:

  • We can't really trust JavaScript to calculate numbers correctly on the front end, so all calculations needed to be validated on the backend. How do we keep things in sync?
  • The transactions could be entered and edited at both the parent and child levels of the transactions.
    • How do we reconcile the dollars entered at different levels of the hierarchy? How do we maintain the pro-rata constraints?
    • How do we deal with rounding error? (See the sketch after this list.)
  • How do we share this logic across all transactions?
  • How do we serialize and deserialize this information to and from a database row?
  • How do we validate that the data was entered correctly for different transactions?
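
To make the rounding problem concrete, here is a minimal sketch of one common approach; the method and names are mine for illustration, not the project's code. Allocate each child's rounded share, then push any residual rounding error into the largest allocation so the children always sum exactly to the parent amount.

using System;

internal static class ProRataMath
{
    //hypothetical illustration: split a parent amount across children by
    //ownership percentage, rounding to cents and absorbing the residual
    //error in the largest allocation
    public static decimal[] Distribute(decimal total, decimal[] ownership)
    {
        var allocated = new decimal[ownership.Length];
        decimal sum = 0m;
        for (int i = 0; i < ownership.Length; i++)
        {
            allocated[i] = Math.Round(total * ownership[i], 2);
            sum += allocated[i];
        }

        decimal residual = total - sum;
        if (residual != 0m)
        {
            int largest = 0;
            for (int i = 1; i < allocated.Length; i++)
            {
                if (allocated[i] > allocated[largest])
                {
                    largest = i;
                }
            }
            allocated[largest] += residual;
        }
        return allocated;
    }
}

For example, distributing $100.00 across three children owning one third each yields $33.34, $33.33, and $33.33, so the allocations sum to exactly $100.00 rather than $99.99.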

I wrote a lot of code to make all of this work in what I felt at the time was a clever, elegant solution. The hierarchical nature of these transactions led me to use a genericized tree holding an abstract Flow class, e.g., Tree<TransactionFlow> (sketched below). The calculations were all done by traversing the tree, either up or down depending on the way the data was entered. Each concrete implementation of the flow object handled its own validation and database object. I became obsessed with refactoring out as much of the commonality as I could. And I wrote all of this before we built out the rest of our phases.
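
The shape of the abstraction was roughly this; it's reconstructed from memory, and the names are approximations:

using System.Collections.Generic;

//reconstructed sketch; the real names and members were different
public abstract class TransactionFlow
{
    public decimal Amount { get; set; }

    //each concrete flow (cash draw, distribution, valuation, ...) supplied
    //its own validation and persistence behavior
    public abstract bool Validate();
}

public class Tree<TFlow> where TFlow : TransactionFlow
{
    public TFlow Value { get; set; }
    public decimal OwnershipPercentage { get; set; }
    public Tree<TFlow> Parent { get; set; }
    public List<Tree<TFlow>> Children { get; private set; }

    public Tree(TFlow value)
    {
        Value = value;
        Children = new List<Tree<TFlow>>();
    }
}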

But as the project dragged on and as I was getting ready to leave my job, it became more and more clear that I had saddled the team with technical debt. I was trying hard to build something maintainable and elegant, but the deep levels of abstraction were hard to understand and follow. The changing requirements made a lot of the code obsolete or difficult to maintain. At times it was important for a user to just override the calculation workflow or the validations altogether. Stock distributions just didn't work the way cash draws and distributions did, and neither did valuations. What was at first a simple algorithm to distribute dollars from parents to children became a lot more complicated.

What went wrong?

Premature Optimization

At the time I started on this project, I had just finished reading the book Code Complete 2. This is a fantastic book and it has been very impactful on my career. However, the lessons from this book and others like it need to be applied pragmatically. I was refactoring for the sake of refactoring, cleaning up code before it was really a problem.

My favorite quote about this is from Christopher Miles' post, Java Doesn't Need to be So Bad:

I'm advocating an approach where you're mindful of these design patterns but you keep them in the background. When you have working code that is starting to get messy, or when someone on your team has been banging their head on a problem for way too long, that's when you take out the patterns book. Because at that point you have a pretty good idea of what the problem actually is. You even have a solution that's mostly working and that solution can be re-factored to fit the pattern that seems most appropriate.

Producing high quality code is good, but balancing this goal with the ability to easily understand and maintain the solution is more important. My solution could have been a lot easier to understand if I had been willing to let things be a little more repetitive at the beginning. The commonalities would have been much easier to factor out mid-project than at the very outset.

Lack of Feedback

When you are writing elegant code, you think you're making rational decisions. I thought my abstraction of the transaction model made a lot of sense, and I knew that we had to build many more flavors of the same thing in the coming phases of the project. What I failed to recognize is that the requirements were more fluid, and that the user might need more flexibility in how they worked with the software.

This is why getting regular feedback on your code is critical. If your colleagues can't understand your code with some basic explanation, that's a bad sign. This is also why it's important for multiple people to work on the same code base.

Focus on Building for Today, not Tomorrow

Having focus is so important. It's one of the lessons I have learned well while working for startups in San Francisco. You can have a vision for the company and your projects, but right now you need to solve today's problems, not tomorrow's. I was building for problems months in advance, when they weren't yet fully understood. Even now, it's occasionally very tempting to build in anticipation of problems a month out, but I have to stop myself. I don't have enough information to make those decisions.

Recognizing the Problem

It's pretty easy to get mad at the people who wrote over-engineered code that you have to maintain, chalking it up to incompetence or even malice. I'm guilty of this, too.

But the nuance here is that I didn't do this out of malice and I think of myself as a good engineer. I was trying to do the right thing. That's the irony of most technical debt. It's built with the best intentions.

It's easier to see this problem in others but not necessarily ourselves, as John Mathis points out:

Unfortunately, a lot of software engineering education and even our interview processes are built around these age old questions:

  • How efficient is your code?
  • What's the runtime of your algorithm?
  • How can you make it faster?
  • How clean is your code?
  • Etc, etc, etc.

It's drilled into our heads that elegant code is a good thing, that it's the mark of a good software engineer.

While all of these concepts are important to understand, I think it's more important to know when these things really matter and when they don't. When to apply these principles and when not to. That's the mark of a great software engineer.

HTTP in Unity

I just spent the past month building the new version of our Playnomics PlayRM SDK for Unity. Unity is designed to be a cross-platform, write-once, deploy-everywhere game engine, making it a very popular tool for publishers looking to reach web, native desktop, and mobile audiences with a single game. Most of my background has not been in games, so learning Unity was both an opportunity to see how our customers work with it and a chance to build some great new features for our SDK.

Our SDK provides game developers with tools for tracking player behavior and engagement so that they can:
  • Better understand and segment their audience
  • Reach out to new like-minded players
  • Retain their current audience
  • Ultimately generate more revenue for their games

The SDK is designed to be easy to install, and it needs to be lightweight. It should minimize the amount of work done on the UI thread, and it should never crash or negatively affect game play.

The more I worked with Unity, the more I realized that Unity's feature set is constrained by the fact that it supports so many different platforms. While this is a completely valid architectural constraint, it made working with something as simple as an HTTP request challenging. I spent a lot of time looking for answers, only to get frustrated with Unity's docs. Forum Q&A was helpful, but most of the information was scattered, and some of it was just plain wrong. This post is an attempt to assemble all of this information so that you can avoid some of the same headaches we had.

C# ... but not all of it

Before I joined Playnomics, most of my background was concentrated in the .NET Framework, with some exposure to open-source projects and languages. (Since then I have seen the light from working in a Linux environment, but that's for another blog post.) I got excited when I learned Unity supported writing code in C# (via the Mono Project implementation of the C# ECMA spec). I still think C# is probably one of the best programming languages out there; it has a myriad of features that Java is still lacking, and it's working hard to support some of the nicer features of languages like Ruby and Python: lambdas, dynamic typing, etc.

A major component of PlayRM, like many SDKs, is an HTTP client that can communicate with a RESTful web service. Many of the HTTP requests in the SDK are fire-and-forget, because the game rarely needs any feedback that the request completed successfully. Our SDK internally manages a queue of requests and saves them back to local storage when they can't be processed or the game is being shut down. To keep the SDK lightweight, I was hoping to run most of our SDK calls on a background thread.

Unity does provide a class as part of the UnityEngine.dll called WWW, but I wasn't very pleased with its offering:

  • You have no control over timeouts.
  • Its error property is just a string, making it hard to discern exactly why something failed. Typed exceptions are really helpful for that.
  • No generic response object: the responses are canned to either a text string or texture image.
  • Not to mention, no library in the UnityEngine.dll can be considered thread-safe. Yuck!

Having worked with C# before, I was already familiar with the System.Net library; it makes sending HTTP requests and reading their responses dead simple. With that library, you get typed exceptions, and you can read the stream into a generic byte array for a nice separation of concerns: let the caller decide what they want to do with the response.

while(taskQueue.Count > 0)
{
    WebRequest httpRequest = null;
    HttpWebResponse response = null;
    Stream stream = null;
    bool requestSucceeded = false;

    var request = taskQueue.Dequeue();
    request.Attempts++;
    try {
        Logger.Log(LogType.Log, "Starting Request {0}, Attempt {1}", request.Url, request.Attempts);

        httpRequest = WebRequest.Create(request.Url);
        response = (HttpWebResponse) httpRequest.GetResponse();

        if(response.StatusCode == HttpStatusCode.OK)
        {
            byte[] data = null;
            if(response.ContentLength > 0)
            {
                stream = response.GetResponseStream();
                byte[] buffer = new byte[16*1024];
                using (MemoryStream ms = new MemoryStream())
                {
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        ms.Write(buffer, 0, read);
                    }
                    data = ms.ToArray();
                }
            }
            var apiResponse = new ApiResponse(request, data);
            completedTasksQueue.Enqueue(apiResponse);
            //the request is done; don't re-enqueue it
            requestSucceeded = true;
            Logger.Log(LogType.Log, "Request succeeded");
        } else {
            Logger.Log(LogType.Log, "Request failed, response.StatusCode {0}", response.StatusCode);
        }

    } catch (System.Net.WebException wex) {
        Logger.Log(LogType.Warning, "Web Exception {0}: {1}", wex.GetType().Name, wex.Message);
    } catch (System.IO.IOException iox) {
        Logger.Log(LogType.Warning, "IO Exception {0}: {1}", iox.GetType().Name, iox.Message);
    } finally {
        if(stream != null){
            stream.Close();
        }
        if(response != null){
            response.Close();
        }
    }
    LastEventTimeStamp = DateTime.UtcNow;

    if(!requestSucceeded && request.Attempts < maxRetryCount){
        Logger.Log(LogType.Log, "Request failed, but retrying ... ");
        taskQueue.Enqueue(request);
    }
}

This worked perfectly in the web player and the Unity Editor, but when we started testing on mobile phones nothing worked. It was pretty anticlimactic after so much work. I spent a lot of time cycling through the logging-building-debugging-repeat process. I eventually found out that Unity doesn't support System.Net.HttpWebRequest on every platform! Officially, Unity only supports TCP/IP sockets via System.Net.Sockets or the WWW class.

Now you might say, "Hey, you big dummy! You should have looked this up first!", and given how much time was poured into this pursuit, I'd agree with you. However, I feel like this is an oversight on Unity's part. MonoTouch and Mono for Android, projects which let you write C# code and deploy it to iOS and Android, support System.Net. There are some security considerations when deploying Unity games to the web, which I'll discuss later, but that's a question of usage, not of implementation. HTTP is such a common protocol that Unity's inflexible implementation of it can really make things difficult.

At the very least, the Unity build should have generated warnings when I used libraries that aren't supported on all platforms.

Sockets ... but only if you pay me

So the alternative was to rewrite the HTTP client using TCP/IP sockets. While it's frustrating that you need to do this just to have a better HTTP client, it is a viable solution. The downside is that if you want to use System.Net.Sockets on all platforms, you need a Unity Pro license, which costs a pretty good chunk of change. Playnomics builds tools to help game studios of every size, from Ubisoft to a one-man army, produce and manage games; we can't afford to just cut off potential customers.

We were stuck with using WWW.

Solution ... coroutines

While Unity may not provide the best support for what we needed, they do understand the need to keep HTTP requests from blocking the UI thread. They do this through coroutines. Coroutines allow a subroutine to yield execution back to a caller while maintaining state, so that everything continues where it left off the next time the subroutine is called. It's essentially a snapshot of the stack frame. The canonical example is an iterator over a list or some long-running process.
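
Here's a minimal illustration of a Unity coroutine; this isn't from our SDK, just the canonical wait-then-resume pattern:

using UnityEngine;
using System.Collections;

public class CoroutineExample : MonoBehaviour
{
    void Start()
    {
        //StartCoroutine returns immediately; the engine resumes the
        //subroutine later from wherever it last yielded
        StartCoroutine(WaitThenLog());
    }

    private IEnumerator WaitThenLog()
    {
        Debug.Log("Before the wait");
        //yield control back to the engine for three seconds
        yield return new WaitForSeconds(3f);
        Debug.Log("Three seconds later, resumed right here");
    }
}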

In Unity, coroutines are a major building block of the engine. It's a neat trick to let an animation run for a set amount of time, or to wait until an HTTP request has completed. From what I have seen in forum posts, some speculate that Unity maintains an internal data structure of each coroutine's state when yield is called, and then checks each state after the Update event is called on the MonoBehaviour. We can't be sure unless we inspect the code Unity generates, but the important takeaways are that if you use WWW:

  • You should take advantage of coroutines.
  • You should yield the call on the http request so that you don't block the UI thread.
  • All of the work with the HTTP request, before and after it completes, needs to be done on the UI thread.
  • Whatever class encapsulates this logic must inherit from MonoBehaviour.

If you look at this, you might be thinking: man, this sucks; what if I have lots of parallel requests? The truth is you can have them! It's a very subtle thing with coroutines, which you'll see when you look at some of our final code:

private IEnumerator ProcessRequests()  
{
    if(isProcessing || taskQueue.Count == 0){
        yield break;
    }

    isProcessing = true;
    //get a temp variable, otherwise we could be stuck in an infinite-loop (offline scenario)
    int itemsToProcess = taskQueue.Count;
    while(itemsToProcess > 0){
        ApiRequest request = taskQueue.Peek();
        if((!request.FutureProcessDate.HasValue) || (request.FutureProcessDate.Value < DateTime.UtcNow))
        {
            //no need to wait here, so don't yield return
            StartCoroutine( DoRequest(taskQueue.Dequeue()) );
        }
        itemsToProcess --;
    }
    isProcessing = false;
}

private IEnumerator DoRequest(ApiRequest request)  
{
    request.Attempts ++;
    Logger.Log(LogType.Log, "Starting Request (Attempt {1}) {0}", request.Url, request.Attempts);
    WWW httpRequest = new WWW(request.Url);

    //yield control back to the caller so that we're not waiting for the download to complete,
    //will resume the call when the web call completes
    yield return httpRequest;

    if(httpRequest.isDone && string.IsNullOrEmpty(httpRequest.error))
    {
        if(request.RequestCompleteHandler != null){
            request.RequestCompleteHandler(request.Url, httpRequest);
        }
        LocalCache.instance.UpdateLastEventEpochTime(DateTime.UtcNow);
        Logger.Log(LogType.Log, "Request succeeded");

    } else {
        Logger.Log(LogType.Warning, "Request failed with Error: {0}", httpRequest.error);

        if(request.ShouldPersist && request.Attempts < maxRetryCount){
            double retryMinutes = Math.Pow((double)futureTaskIncrementMinutes, (double)request.Attempts);
            request.FutureProcessDate = DateTime.UtcNow.AddMinutes(retryMinutes);
            Logger.Log(LogType.Log, "Request failed, but retrying in {0} minutes.", retryMinutes);
            taskQueue.Enqueue(request);
        } else {
            Logger.Log(LogType.Log, "Request failed, not retrying.");
        }
    }
}

A word about threading

If we had been able to work with sockets on a separate thread, we would need a way for code on the background thread to talk to code on the Unity thread, because there are scenarios where we want to notify the UI that some request completed: say, a creative for a message has been downloaded. Communication going the other way is a trivial problem. Android and iOS both provide avenues for notifying the UI of some event, but Unity's lack of thread-safety makes this a little precarious.

Your best bet is to build a thread-safe data structure that both threads can interact with. The Mono Project doesn't include System.Collections.Concurrent, but you can write a pretty simple thread-safe queue like this:

using System;  
using System.Collections.Generic;

namespace PlaynomicsPlugin  
{
    internal class ConcurrentQueue<T>{
        private readonly object syncLock = new object();
        private Queue<T> queue;

        public int Count
        {
            get
            {
                lock(syncLock) 
                {
                    return queue.Count;
                }
            }
        }

        public ConcurrentQueue()
        {
            this.queue = new Queue<T>();
        }

        public T Peek()
        {
            lock(syncLock)
            {
                return queue.Peek();
            }
        }   

        public void Enqueue(T obj)
        {
            lock(syncLock)
            {
                queue.Enqueue(obj);
            }
        }

        public T Dequeue()
        {
            lock(syncLock)
            {
                return queue.Dequeue();
            }
        }

        public void Clear()
        {
            lock(syncLock)
            {
                queue.Clear();
            }
        }

        public T[] CopyToArray()
        {
            lock(syncLock)
            {
                if(queue.Count == 0)
                {
                    return new T[0];
                }

                T[] values = new T[queue.Count];
                queue.CopyTo(values, 0);
                return values;
            }
        }

        public static ConcurrentQueue<T> InitFromArray(IEnumerable<T> initValues)
        {
            var queue = new ConcurrentQueue<T>();

            if(initValues == null)  
            {
                return queue;
            }

            foreach(T val in initValues)
            {
                queue.Enqueue(val);
            }

            return queue;
        }
    }
}

You can then poll the data structure in the Update call of a MonoBehaviour:

void Update()
{
    //the http worker has a method which returns (dequeues)
    //an IEnumerable set of completed requests;
    //internally it is enqueueing the completed requests
    foreach(ApiResponse response in httpWorker.GetCompletedResponses())
    {
        if(response.Request.RequestCompleteHandler != null)
        {
            //we want to notify an object that we have completed the request
            response.Request.RequestCompleteHandler(response.Request.Url, response.ResponseData);
        }
    }
}

Security

In the Unity web player, the browser restricts which resources you can access, because of cross-origin scripting protections. By default, you can only access resources from the same origin domain, e.g., if your game is hosted on http://myawesomegame.com, it can only GET or POST to resources on http://myawesomegame.com. The one exception is that you can retrieve images for textures from different sites, with some limitations. If you need to hit a REST service outside of your own domain, the other domain needs to add a crossdomain.xml file to the root of its domain:

<?xml version="1.0"?>  
<cross-domain-policy>  
    <allow-access-from domain="*"/>
</cross-domain-policy>  

These security issues also apply to sockets, but the implementation is a little different; it depends on which ports you are opening and which port serves the policy. You can read more about this and other security considerations in the Unity web player here.

Detecting Offline Mode

iOS and Android both provide ways of detecting whether the device has connectivity, but Unity doesn't appear to have any insight into this. You can, of course, call those native libraries through Unity's plugin architecture.

Due to time constraints, we didn't add this functionality to our SDK. Our current heuristic is to apply a geometrically increasing wait time to each failed web request, so that we wait longer between retries with each failure (e.g., with a two-minute base, the first retry waits 2 minutes, then 4, then 8).

Going forward

Despite some of the nuances with HTTP in Unity, it is still a great game engine with a great community of developers; we're happy to be one of their partners. We realize that making a great HTTP client is not Unity's main focus; they are focused on building great technologies for game studios, not technologies for third-party tools.

Unity also supports marshaling of C++ code through its plugin architecture; this is one avenue we may look into for improving the overall performance of our SDK in future releases, because we could utilize cURL, our own process queue, and separate worker threads allocated as we see fit. However, using native C++ is limited to desktop and mobile platforms, and customers writing code for the desktop would also need a Unity Pro license.

This somewhat fractured solution highlights our challenge as a third-party tool for Unity game developers. The solution going forward may be C++ marshaling for mobile, and WWW for web and desktop games. It's not ideal, but performance and loss of connectivity are much larger issues for mobile games.

I'll keep you posted on our progress.

Heading West: From Chicago Consultant to San Francisco Startup

Consulting Bliss

I graduated from Northwestern University, and unlike many of the people in my generation, I was lucky enough to have a job secured when I got my diploma. I was also going to work in consulting, a field that as an undergrad I had heard lauded as one of the best career opportunities for college graduates. For many it's the stepping stone into business, the best-paid break before graduate school, or the best way to not really know what you want to do but still have a white-collar job. I had been passed up by the prestigious management consulting firms (brainteaser questions and case interviews were never my specialty), but I was going to work for OpenBI, LLC, a small Chicago-based open source business intelligence consultancy. I was excited to start the next chapter of my life. I was quickly immersed in the world of Linux, ROLAP, and Pentaho, but found out just as quickly that I missed software development.

I left OpenBI after a few months on a client and joined West Monroe Partners, LLC. WMP had a far larger staff, and I would be able to work on custom software again. Although I had a few years of experience with the .NET Framework and C#, I immediately started learning new things, very quickly. If I had to isolate one positive thing about consulting and about my experience at WMP, it's that I learned an incredible amount very early in my career. While a lot of my learning was motivated by my own curiosity and drive to get better at my trade, being a consultant affords you the opportunity to experiment, especially when you possess an intangible technical expertise like software development. You are the "expert" and you are, in the majority of my experiences, given free rein to build solutions as you see fit. Your major constraint, of course, is your client's budget and time, but as we know in software, those rarely end up as we expect.

The Honeymoon is Over

Within a year, I was working for a major client and helping to manage small teams and larger projects. By all standards, things at my job were going well: I'd been promoted. I'd gotten raises and bonuses for my hard work. I had a good circle of friends at work. Yet I felt increasingly dissatisfied. I had picked up a CLR book and read it in my free time, hoping to glean a better understanding of how Microsoft's framework actually functioned. This earned me odd looks from people in the office. I was pushing the limits of many of my managers' technical knowledge, and felt that very few people could mentor or guide me; there just isn't time to do this in a leveraged business model. While I did work with some very technical people, many of my managers had become better salespeople than engineers. These people were not dumb; they were quite smart, but at a certain point their ability to be technical had stopped mattering. I quickly realized that in professional services it becomes less about what you can do, and more about how you can work with and sell to clients. I didn't enjoy sales and I wanted to keep growing technically. Advice from a former boss at a past internship echoed through my brain as I became increasingly disenchanted: "If you want to keep fixing older code, stay in Chicago; if you want to build new things, go to San Francisco."

The Hunt Begins

For months, a friend had been tempting me to move to California; ironically, he is originally from the Midwest, and I from California. His father is also, conveniently, a headhunter. I called him to discuss how I could move from Chicago to the San Francisco Bay Area. As our phone conversation continued, I started to air my doubts about whether this was the right move ... "Was I being too impulsive? How would this look on my resume?" ... He retorted, "Don't worry, Jared. You're going to make mistakes in your career ... If you want to move to SF, then go for it." He was right; in April of 2012, I finally bit and started my job hunt.

I placed my resume on Dice.com and scanned Hacker News for jobs that I thought would be a good match for me. After countless phone screenings, resume drops, and two separate interview days in California, I finally landed at Dynamic Signal as a software engineer.

The Result

On my first day at DySi, I made some quip about Node.js; at WMP this would have resulted in some chuckles and "reinventing the wheel" remarks, but within a minute one of our other backend engineers explained that he had just gone to a conference, and he rattled off some specific benefits of the framework. Caught off guard, I felt incredibly stupid, but then I got really happy. Finally, a culture that really valued being technical. I still feel that way after a few months of working here, and I am really happy I made the move.

However, moving to San Francisco and transitioning to a startup has had some interesting twists and turns that might be worth mentioning:

  1. Renting in San Francisco is Hard
    The San Francisco rental market is insanely expensive and supply-constrained, and searching for a place to live here can be nerve-racking. Craigslist is the de facto tool for apartment searching, and now that PadMapper has been killed, you're stuck with that awful UI. Make sure you plan ahead and can buffer yourself for moving expenses and temporary housing.

    That being said, SF is awesome and the ideal place for the crazy, the weird, the quirky, the outdoorsy, even the brogrammer.

  2. Silicon Valley is Like Hollywood for Engineers
    A friend of mine compared Hollywood and Silicon Valley, and I found his explanation very intuitive. Like actors, startup engineers here are hoping for that one big break that might never come. Some colleagues have worked at startups that have repeatedly failed, leaving them with no equity to speak of. The downside of this, of course, is that in SF if you're a programmer, you're really not that special.
  3. Equity Might Be Meaningless
    It's almost as if equity, at this point, is just table stakes needed to hire an engineer here. It really only matters if you IPO or sell, and it is highly rare that, as an engineer, you will retire early because of it. Most engineers I have met appear to live comfortable lives, but few are splendidly rich. I'm not saying that it isn't nice to have, but it's something you learn not to expect to come to fruition.
  4. Failure is Ok
    The culture of the Valley is try, fail, and then repeat until you get it right. You have to enjoy that to keep working at startups.
  5. Advancement is Ambiguous
    Coming from a company culture of metrics, goals, and annual reviews, it was a bit of a culture shock joining a startup. Titles rarely matter, and cultures are generally very flat. Promotions are not really on your mind; keeping the product alive is what drives you.
  6. Agile, but A La Carte
    In the consulting world, Agile and LEAN methodologies have become almost like a religion, a holy grail that experts and firms offer as expertise. A startup is rarely dogmatic about its implementation (we have a daily scrum, but our size doesn't necessitate an Agile planning game). It will pick and choose what it feels is helpful and relevant to building the product. In general, the most constant behavior I can observe is that as an engineer and as a company you have to be flexible and adjust to priorities quickly.

Moving Forward

While my time thus far in the startup scene has been very limited, I do feel satisfied that I made the right choice. Consulting exposed me to different industries and helped me to grow quickly at a very early stage of my career. I learned a lot and made some great friends along the way, all while in one of the best cities in the US: Chicago. However, the transition to a startup was ultimately the right choice for me, and I think it's one of the more exciting avenues for people who want to continue developing their engineering skills. In time that may change, because the downside of working at a startup is that the endeavor is ultimately a gamble. Many, many startups fail, leaving the participants without a job and without any equity. Fortunately, the demand for software engineers in the Valley is insatiable, and there are many great established firms here as well. For now, I am excited to keep enjoying the ride. I have already learned so much in just two months and I can't wait to share some of it here.

Enterprise Software Sucks

Why does Enterprise Software suck so much?

It's pretty hard not to notice how quickly computing technologies, and even their accompanying languages and frameworks, have evolved recently. Think about the first time you started using a web browser or a software application. Now think of your experience today. What if I then told you that you had to go back to using older technologies in your daily life: no Gmail, no iTunes, Excel before the 2010 release? Ugh, that would really suck. Clearly, evolution in technologies has improved our relationship with computers.

So why do so many enterprise applications seem so stuck in the past? Why do we have to use cumbersome applications in our work environments?

  1. The first major reason is that enterprise applications typically support key business drivers - revenue, sales, reporting, accounting - so we can't just swap in new technologies without thoroughly understanding the implications of doing so.
  2. The second major reason is cost. Developing custom applications or even purchasing licenses can be very expensive, and it may be difficult to prove the ultimate ROI of these costs.

However, I would argue that there is an answer to both of these issues.

Inflexibility is the Symptom of Highly-Coupled Architectures

A fundamental problem with business software is that it's often difficult to replace parts without gutting the entire application and starting from scratch. A change in database (going from Oracle to SQL Server, or even to Redis) or a dramatic change in the UI (going from classic ASP.NET to MVC3) can literally require an application rewrite. It doesn't need to be this way; this is typically an architectural problem. The issue is that the application was developed in a highly-coupled way.

Poor Separation of Concerns

In my experience, this rears its ugly head most with applications that rely on a SQL database. A major trend of the early 2000s was to build relational databases where stored procedures, triggers, and views allowed developers to create data-driven applications. A stored procedure was often used to perform basic CRUD operations, and even to create highly complex, business-driven data operations. In principle this sounded like a great idea, but it starts to break down over time:

  • They are difficult to test in an automated way. Tracking bugs and issues in stored procedures is much more time intensive, and dependent on the data available.
  • They don't properly separate concerns between business logic and pure CRUD logic. It is far easier for a developer to rewrite logic in code than it is to modify highly coupled data access. In general, database access code should be very simple and generic.
  • Performance may degrade over time. A stored procedure written today may slow down tomorrow because the tables have grown much larger. However, code is often a lot easier to tune.
  • Result sets from stored procedures are weakly typed when they are returned from the database, which means that extra code needs to be written to transform them into typed objects. If a developer doesn't do this, they introduce performance issues and potential run-time bugs. For instance, boxing/unboxing data from a DataTable object can be 300% slower than just using a typed object; this can be very expensive when running algorithms over large data sets. Columns in the stored procedure may also change names or get dropped, and there is no way for the developer to know this until they hit a run-time bug.

Thankfully there are many, many ORM tools available now. If your application is using stored procedures, you should start moving over to an ORM tool soon.
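
To make the weak-typing point concrete, here's a small sketch of doing that transformation by hand at the data-access boundary; an ORM automates this kind of mapping for you. The Fund entity and column names here are made up:

using System.Collections.Generic;
using System.Data;

public class Fund
{
    public int Id { get; set; }
    public decimal NetAssetValue { get; set; }
}

public static class FundMapper
{
    //weakly typed in, strongly typed out: do the string lookups and casts
    //exactly once, at the boundary, so the rest of the code works with
    //compiler-checked properties instead of DataRow indexing
    public static List<Fund> Map(DataTable table)
    {
        var funds = new List<Fund>();
        foreach (DataRow row in table.Rows)
        {
            funds.Add(new Fund
            {
                Id = (int)row["Id"],
                NetAssetValue = (decimal)row["NetAssetValue"]
            });
        }
        return funds;
    }
}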

Lack of Inversion of Control

This issue can be a little more subtle. A lot of .NET-based applications are written in the traditional N-Tier pattern, where code is typically separated into layers for the user interface, business logic, and data access. These code libraries are meant to create high-level separations so that you can move an application from one UI framework to another easily, or even share business logic across many applications. It also makes automated testing easier.

In general this architectural pattern can be very powerful. But for applications that have very long lifecycles or complex business logic, you may want to consider architectures that utilize inversion of control. These architectures promote highly-cohesive, loosely-coupled code, and in the end more flexible software. The code is highly cohesive because programmers are forced to program to interfaces, and loosely coupled because the interfaces are bound to implementations at run-time. The actual implementation of your code becomes a delayed decision, making your development more agile and flexible.
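
Here's a hedged sketch of what programming to interfaces with constructor injection looks like; the names are illustrative, not from any particular codebase or container:

public class Invoice
{
    public decimal Total { get; set; }
}

//the consumer depends only on this interface
public interface IInvoiceRepository
{
    Invoice GetById(int id);
}

public class InvoiceService
{
    private readonly IInvoiceRepository repository;

    //the concrete repository (SQL-backed, in-memory, a test mock) is
    //injected at run-time by whatever composes the application
    public InvoiceService(IInvoiceRepository repository)
    {
        this.repository = repository;
    }

    public decimal GetTotal(int invoiceId)
    {
        return repository.GetById(invoiceId).Total;
    }
}

Swapping Oracle for SQL Server, or the real database for a fake in a unit test, then means writing a new IInvoiceRepository implementation instead of touching InvoiceService.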

The onion architecture is one of these patterns in the .NET world. You can check out a presentation that I put together on this here. These blog articles also describe it very well: Peeling Back the Onion Architecture by Tony Sneed and The Onion Architecture by Jeff Palermo. I found Tony Sneed's demo very helpful for learning. Jeff Palermo is the creator of the Onion Architecture. This idea originally started with Alistair Cockburn in his Hexagonal Architecture.

For a general understanding of inversion of control, check out this blog post. Lots of good examples here.

Inflexible Software is Costing You More than You Think

As a consultant, I have seen how legacy systems can make a user's work life painful. People develop workarounds and intricate processes just to work with older, inflexible software. Oftentimes a process can take up hours of their day, and because it is so inefficient, they are less enthusiastic and motivated to use the product. That's less time spent working with customers, driving revenue, or growing the business. Training becomes difficult and cumbersome. Data quality and integrity might also suffer if the application can't flex to new requirements.

When assessing the return on investment in software development, businesses need to take into account time savings. They also need to understand that a user's relationship with software can dictate how productive they are willing to be. Great tools are exciting and they motivate people in powerful ways.

In the end, enterprise software doesn't have to be so bad. A good user experience shouldn't be exclusive to the likes of Facebook; enterprise applications matter, too.

Improve Your Career and Life, Learn a Programming Language

Software is Eating the World, and You Should Join In

Marc Andreessen stated last year that "software is eating the world." While the rapid growth of software is disrupting major parts of our economy (for better or for worse), it is creating a very real need for people who understand what software is capable of. That talent lies not only in the engineers who can write this code, but in the product managers, salespeople, and most importantly the customers and users who shape products. I also believe that even non-engineers would benefit from an understanding of software. To stay competitive and relevant in the Information Age, you owe it to yourself to understand programming languages and how they can be used to improve your business, and even your personal life. You may never write a full application, but you will be able to make a bigger impact in everything you do.

Coding is Not Impossible

Most people hear the words "software engineering" and immediately want to run. I don't blame them; just the word "engineering" scares people. Like calculus or linear algebra, software engineering may seem like a set of abstract and intangible principles, but at its core all of these things are tools. They are tools created to solve problems; a means to an end. Isaac Newton created calculus as a tool for his studies in physics; we use software to solve computational and optimization problems every day. While it's easy to become trapped in the theories behind software, at the end of the day it exists to solve problems with real value.

And like math, code is not impossible. It just takes practice. While I wholeheartedly believe that great software only happens when you understand the fundamental ideas of computer science (algorithms, design patterns, and software architectures), you shouldn't let these be a barrier to your entry into the software world. Sometimes it's easier to learn to play a song before you learn the music theory behind it. You'll probably have more fun, too.

Common Roots

There is currently a plethora of languages out there, especially in the web development community. Scala, Ruby, Clojure, C#, PHP, JavaScript ... the list goes on and on, and it will continue to grow. The important thing to remember is that, no matter what, all languages have to talk to a processor at the end of the day. Like Romance languages, programming languages share common roots. Many web languages are implemented in C under the hood, and everything ultimately becomes assembly language, which your computer hardware can understand and turn into electrical signals. Thanks to the work of a lot of people, you don't have to worry about that when you write code, but it has important implications for performance and the ability to deploy software on different operating systems.

If you take the time to master one language, you will learn others very quickly. Major differences come into play when you compare languages by:
  • Statically- vs. Dynamically-Typed Variables
  • Interpreted vs. Compiled Code
  • Scripting vs. Procedural vs. Functional Languages
  • Managed vs. Unmanaged Code

If you're unfamiliar with these terms, don't feel out of the loop; Wikipedia and StackOverflow can be great resources.

Enterprise software engineers typically like languages that are statically typed, compiled, and object-oriented, like Java and C#. Both of these languages support functional programming, but F#, Haskell, and Clojure focus more heavily on it.

Many start-ups prefer Ruby and its sibling web framework, Rails, because the syntax of Ruby is very natural and eliminates a lot of code overhead. Python is very similar.

Right Tool for the Right Job

So if I've piqued your interest in software, you're probably wondering what language you should start with. The answer as usual is: it depends. Seasoned developers will typically say their language is the "best" one, but try to avoid this bias. Remember these languages are tools, and they were created to serve a purpose - some of them have very specific purposes.

  • If you're interested in working in Enterprise software, you should learn C# or Java. C# runs on Microsoft servers, and that comes with a cost, but also a very robust server environment. Visual Studio is also a very sophisticated development tool, and in my opinion it blows the de facto Java tool, Eclipse, out of the water. However, large technology companies typically have an affinity for Java because it's free and it runs on Linux and Unix. Both Scala and Clojure are designed to work with the Java Virtual Machine and the Microsoft .NET Framework, the building blocks of these managed languages.
  • If you need to make a quick content-based site for your business, you should try WordPress. It is free and incredibly flexible. Even though it is branded as a blogging tool, it is so flexible that I have helped many people start a company website with it.
  • If you are immersed in data and spreadsheets, you should learn SQL and Excel VBA. These two tools will save you many, many hours and are incredibly powerful.
  • If you have interests in finance, particularly high-volume trading, you would do well to learn C++. C++ is also still used very heavily in video game development and graphics software.

Keep in mind, though, that to stay relevant you need to keep learning about these tools and what they can do. Also remember that while open-source languages are free, they come with the price of a potentially steeper learning curve and unpredictable support. I think this ultimately makes a stronger developer, but it can create headaches sometimes. Java or C# may be the best way to start learning, because they are very forgiving and structured.

My Disclaimer

Okay, so here is my disclaimer. I am a software engineer who works in Enterprise software. I have worked mostly in the Microsoft stack and I use statically-typed languages. I like using both procedural and functional languages. I am not heavily practiced in Ruby on Rails, although I do work with other MVC frameworks. My first language was Java. I have also worked with Perl, Scheme, and PHP.

Display None Bottlenecks in Internet Explorer

While Internet Explorer has certainly improved in recent years, it can still be a nightmare to work with. Ask any web developer: they spend a lot of time navigating the treacherous world of cross-browser development. The reality, however, is that Internet Explorer is still used prevalently. If you work on enterprise web applications like I do, you'll find that most businesses run on IE.

Most often you will just need to alter your CSS rules or HTML layout to get things back to normal. This is typically an iterative process, but something that becomes easier with experience.

The Problem

A more subtle issue is that IE has a hard time processing large numbers of elements which are marked as invisible; the page will load very slowly or even crash. Usually invisible elements have a negligible effect on page load times, but when dealing with a lot of data the cost becomes very noticeable.

For instance, you might have a grid which displays a table of items. To display this table in ASP.NET, you might utilize a DataGrid or Repeater control. Each item might have a link that then opens another div so that you can edit or delete the item. Let's also assume that the edit div has some drop-down lists which help you edit the item. Typically, to keep this div hidden on the initial page load, you would create something like:

<div id="divEditItem" runat="server" style="display:none">  
    <!-- Your edit controls here -->
</div>  
<div runat="server" id="divGrid" style="display:block">  
    <table>
        <thead>
            <tr>
                <th>
                    <!-- table headers here -->
                </th>
            </tr>
        </thead>
        <tbody>
            <asp:Repeater id="rptGrid" runat="server">
                <ItemTemplate>
                    <tr>
                        <!-- row level data from your data objects -->
                    </tr>
                </ItemTemplate>
            </asp:Repeater>
        </tbody>
    </table>
</div>  

Now, let's suppose you want to bind all of your drop-down lists on the initial page load so that editing these items will be fairly fast. You might even try to keep divEditItem hidden until the asynchronous PostBack completes, using a JavaScript endRequest callback function. Normally this doesn't cause any issues; the page will load just fine. But if your lists are sufficiently large, or you are binding a lot of data to the invisible control, you may notice that the page loads slowly or IE may even crash.

These problems can also arise when using jQuery. For instance, if you initialize a large jQuery DataTable while it is invisible, the script might crash.

The Solution

Try to bind the data-intensive controls only when you are showing the control. If you want to avoid rebinding controls on every post-back, use a ViewState boolean to keep track of whether or not the control has been loaded. The cost of the ViewState variable will be far less taxing on your user's experience than waiting for IE to process and render invisible elements in your DOM. Expensive jQuery operations like the DataTables initialization should only be called on visible DOM elements.
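
As a sketch, the code-behind might look something like this; lnkEdit and BindEditDropDowns are hypothetical names, while divEditItem and divGrid follow the markup above:

//in the page's code-behind (illustrative sketch)
private bool EditControlsBound
{
    get { return ViewState["EditControlsBound"] != null && (bool)ViewState["EditControlsBound"]; }
    set { ViewState["EditControlsBound"] = value; }
}

protected void lnkEdit_Click(object sender, EventArgs e)
{
    //bind the expensive drop-down lists only the first time the edit
    //panel is actually shown; the ViewState flag survives post-backs
    if (!EditControlsBound)
    {
        BindEditDropDowns();
        EditControlsBound = true;
    }
    divEditItem.Style["display"] = "block";
    divGrid.Style["display"] = "none";
}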

While this doesn't create the slickest UI, your users will be much more frustrated with applications that crash their browser.