Discipline Is the Foundation of Innovation
Why achieving big things requires executing everything else by the book
“I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying ‘no’ to 1,000 things.” – Steve Jobs
Books about famous innovators like Steve Jobs or Jeff Bezos are filled with tales of revolutionary new ideas that disrupt the status quo.
What you don’t often hear about, however, is the extreme focus and discipline it takes behind the scenes to make innovation possible.
If you want to revolutionize your industry, your mental effort must go toward your biggest challenges, and you should execute everything else by the book.
The goal of minimal engineering is to be this book for people who build software – the missing chapters of every success story detailing the battles innovators chose not to fight, which are just as important as the battles that they won.
This article highlights a few areas where I’ve seen people (including my former self) waste the most time struggling against immovable laws of software engineering, with the hope that you can steer clear of them and take a shorter road to innovation.
Long-term roadmaps are important
Agile software development emerged in the 90s as an antidote to the inefficiencies of the waterfall model.
The core problem with waterfall is that extensive planning happens up front, followed by a long development process. By the time the software ships, it is often out of date because the requirements were finalized months or years earlier.
Long software development cycles and changing requirements are a real issue, but some agile proponents have thrown the baby out with the bathwater by eschewing long-term roadmaps entirely, claiming they are harmful because “you can’t predict the future.”
You should be highly skeptical of practices that are based on broad generalizations like “you can’t predict the future.”
The obvious truth is that sometimes you can predict the future, and sometimes you can’t.
Hindsight bias is a real challenge: it causes people to think the future is more predictable than it actually is.
However, failing to plan for things that are predictable is foolish, and a good long-term roadmap can be extremely beneficial.
Jeff Bezos is famous for basing Amazon’s strategy around things that won’t change, like that customers will always want low prices and fast delivery.
On a software team, you may not be able to predict what features customers will want or even what line of business you’ll be in next year. But if you stop and think, there may be more invariants than you realize.
If you sell software to businesses, for example, you will need to handle data, you will need security, and you will need a reliable billing system. You will probably also need role-based access control and audit logging for larger customers.
It’s possible you’ll stop building business software or go bankrupt. However, if you’re 95% sure you’ll have to do something, long-term roadmap planning dramatically improves your chances of success. It helps you make better decisions now about resources, architecture, and systems that you will need in the future.
Disciplined roadmap planning requires flexibility in adapting to future uncertainty, but also diligence in assessing the things that won’t change.
Smaller tasks are better
The reason people fall into the trap of avoiding long-term planning is that it can be dangerous: biting off too much at once to achieve a big vision dramatically increases the chances of failure.
A key tenet of lean methodology’s approach for reducing waste is to keep task sizes small.
The Scrum methodology involves “sprints” that are usually two weeks long, and other frameworks like Shape Up cap iterations at six weeks.
The efficiency of small tasks is fairly common knowledge, but people still struggle to put it into practice.
The reason is that when you first plan a new piece of functionality, that plan describes the end state. It often contains dependencies that prevent it from really working until the whole thing is finished.
By default, projects tend to start out large.
It usually takes significant effort beyond initial planning to break dependencies and deliver code, features, and user value in smaller pieces.
This extra effort might not seem worthwhile at the start when you intend to complete the whole project. Why spend an extra day to break up a 4-week project into two 2-week milestones when you could use that day to start working?
The issue, of course, is that things rarely go as planned, and the larger the plan, the more likely it is to go off course.
Discipline with small task sizes means assessing whether people have devoted enough effort to breaking down tasks up front, and then looking back at task sizes in project retrospectives to continuously improve.
What Every CEO Should Know About Software Planning covers this topic in more detail.
Some tasks are inherently large
Sometimes teams recognize the importance of long-term roadmap planning and small task sizes, but still experience massive project cost overruns.
The issue is that some initiatives really do require months or more of well-executed work to realize their full value, no matter how hard you try to break them down.
Examples of large tasks include migrating to a new platform, overhauling a major system, or making changes to core software architecture.
Most work doesn’t fall into this category, but I’ve never seen a real business that doesn’t encounter major technical initiatives from time to time.
If you commit to such an initiative and start working on small pieces without carefully planning the full scope and how everything will fit together, you’re asking for trouble.
When my last company was acquired and we merged engineering teams, the buyer was nine months into a three-month effort to replace their payment system. I quickly learned that the culprit was refusal to plan more than two weeks ahead because doing so “wasn’t agile.”
There’s a difference between implementing software in small batches and planning incompletely.
Disciplined project execution requires working on tasks in small iterations, but also planning each iteration in detail to avoid unpleasant surprises.
If people say this “isn’t agile”, then too bad. Neither is working on multi-month projects in the first place, but sometimes that’s the reality.
Standardized processes are more efficient
Some big and successful tech companies like Facebook are known for giving their teams freedom to work in whatever way they want.
This approach resonates with engineers who like doing things their own way.
It also avoids overly restrictive processes that can emerge at large organizations and stifle innovation.
While this freedom may be good for teams in the short term, it comes at a significant cost.
In the long run, process fragmentation makes things like training, switching teams, reporting on activity, and managing multiple teams a lot harder.
It also adds cognitive burden as people debate low-value process choices rather than focusing on bigger challenges.
(Keep in mind that Facebook is flush with cash and is able to hire the most talented engineers in the world, so their teams’ capacity to self-manage may mitigate these costs more so than at other companies.)
The truth is that much like the argument of tabs vs. spaces, most process choices don’t have a major impact one way or another, but inconsistency does.
Organizations are generally better off just standardizing things like version control systems, ticket tracking systems, and even sprint processes.
Disciplined process standardization necessitates weighing the global, long-term cost of fragmentation against the potential benefit of flexibility, and is a microcosm of overall disciplined innovation.
Quality is important
Underinvesting in quality didn’t use to be as common a problem, but The Lean Startup movement and Facebook’s mantra of “move fast and break things” changed that.
Modern innovators are under tremendous pressure to discover customer needs and build valuable products as quickly as possible.
Like with agile, some practitioners have taken the idea of a minimum viable product (MVP) too far and saddled their business with major quality problems.
The heart of the issue is the distinction between a prototype and production software.
This distinction is muddied by having early users pay for prototypes, which can shift their perspective and make their feedback more valuable.
Once customers are paying for a prototype, it’s also tempting for start-ups with limited runway to keep selling the prototype rather than switching modes and investing in production software.
There’s no clear line between prototype and production, but you should ask yourself: if there’s a medium-severity bug, will customers expect you to fix it?
If the answer is yes, then you need production quality, which involves observability, automated testing, on-call rotations, and prioritization practices like fixing all your bugs.
Failure to maintain quality discipline will lead to engineers spending all of their time dealing with urgent interruptions rather than building innovative new functionality.
Fixed-scope deadlines hurt quality
All software businesses face pressure to deliver on time and on budget.
Inexperienced leaders often make the mistake of believing it is possible to do so without sacrificing quality.
In reality, when leadership asks a team to complete a fixed scope of work by a deadline without compromising quality, the team is forced to cut corners in ways that are not immediately apparent.
This might involve shipping sloppy code that is difficult to read and maintain, or forgoing automation of important tests.
Quality will ultimately suffer down the road, making future development slower and creating a downward spiral if management fails to ease up on their expectations.
Teams that repeatedly cut quality are not fun places to work.
The better approach is to give engineering teams autonomy over quality by making project scope flexible.
As projects evolve, engineers will discover that some features are easier to implement than expected, and some are much more difficult. Empowering them to actively discuss scope reduction with product managers throughout the project greatly improves your overall return on investment and helps maintain quality standards.
Your architecture and development system is a product
When you first create new software, you don’t really have your own architecture or development environment. Instead, everything is built on third-party systems.
As you create more specialized functionality for your business, those third-party systems become increasingly inadequate for building the software that you need.
It is important to recognize from day one that the software you use to build your software (your “platform”) is a product itself, and is critical to your competitive advantage.
To manage your platform well, you need clear ownership and resources.
A dedicated platform team may not make sense for smaller organizations, but inattention to the platform and core architecture can lead to spiraling technical debt and inability to get anything done.
Some entrepreneurs take a cavalier approach to tech debt, claiming there will be more resources to fix it later and all that matters is product demand.
Sure, some highly sticky businesses like Twitter have pulled out of a tech debt spiral, but others with fewer resources may not be able to do so. Also, even though Twitter survived, fixing their tech debt was extremely costly and letting it accumulate was probably not a good decision.
A disciplined software team should be able to identify who owns decisions about platform investments, measure how much effort is being dedicated to them, and assess whether the level of investment is appropriate for their stage of growth.
Data analysis is important
Many organizations claim to be data driven, but struggle to use data effectively for making decisions.
The most important thing that people fail to understand about data is that they’re already using it every day. Making a decision “without data” is impossible – this actually just means relying on your memory of information you’ve gathered over time.
The human mind is incredibly powerful and can arrive at insights that are quite hard to derive from quantitative analysis.
However, the human mind is also incredibly biased.
The availability heuristic can dramatically skew one’s sense of the frequency and severity of events, especially when drawing from a limited sample size like things you’ve heard in conversations.
The double-whammy is layering on confirmation bias and only focusing on data that confirms your pre-existing beliefs.
If you want to build software with discipline following the principles outlined above, you must also instill discipline around data analysis so you can accurately assess your progress.
Effectively using data requires analysis by someone who is trained and has the proper context to interpret the results.
This is not to say that people without “data” in their job title shouldn’t be allowed to analyze data, but they do need proper training for their domain.
Part of this training involves statistical literacy to avoid common pitfalls like mistaking correlation for causation or concluding that a number “changed” when the change is within the normal range of random variation.
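To make the second pitfall concrete, here is a minimal sketch (using hypothetical cycle-time numbers and only the Python standard library) of how a simple permutation test can tell you whether an apparent week-over-week change is larger than what random variation alone would produce:

```python
import random
import statistics

# Hypothetical cycle times (in hours) for two consecutive weeks.
last_week = [12, 30, 8, 45, 22, 17, 28, 35, 10, 26]
this_week = [15, 40, 9, 50, 30, 20, 33, 38, 14, 31]

observed_diff = statistics.mean(this_week) - statistics.mean(last_week)

# Permutation test: shuffle the combined samples many times and count how
# often a random split produces a difference at least as large as observed.
combined = last_week + this_week
n = len(last_week)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(combined)
    diff = statistics.mean(combined[n:]) - statistics.mean(combined[:n])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"Observed change: {observed_diff:+.1f} hours, p-value: {p_value:.2f}")
# A large p-value means the "change" is well within normal random variation,
# so it is not evidence that cycle times actually shifted.
```

With these made-up samples the average moves by a few hours, but the test shows a random split of the same data produces swings that large quite often, which is exactly the kind of check that statistical literacy buys you.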
A more subtle error that even trained data analysts make is failing to understand the idiosyncrasies of their source data. Real data often contains significant errors or gaps. If you don’t work with the data regularly and have a solid understanding of how it was collected, it is easy to overlook these issues.
For example, if you’re looking at code commit activity to assess task size, you can miss data if you don’t properly handle squash and rebase merges.
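As a minimal sketch of one such idiosyncrasy (the repository path, branch name, and use of author e-mails are assumptions for illustration, not a description of any particular tool), the snippet below counts commits two ways. When a repository uses regular merge commits, the two counts diverge sharply; with squash or rebase merges they match, because the per-branch commits were never preserved on the main branch and a naive commit count silently loses that detail:

```python
import subprocess
from collections import Counter

REPO = "."        # path to a local git clone (assumption for illustration)
BRANCH = "main"   # primary branch name (assumption for illustration)

def commits_by_author(extra_args):
    """Return a Counter of author e-mails for commits reachable from BRANCH."""
    out = subprocess.run(
        ["git", "-C", REPO, "log", BRANCH, "--format=%ae", *extra_args],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

# All commits reachable from the branch, including those brought in by merges.
all_commits = commits_by_author([])

# Only commits on the branch's first-parent line: direct pushes, merge
# commits, and squash merges. Individual branch commits are invisible here.
first_parent_only = commits_by_author(["--first-parent"])

print("all reachable commits:", sum(all_commits.values()))
print("first-parent commits: ", sum(first_parent_only.values()))
```

Comparing the two totals is one quick way to see how much granularity a squash- or rebase-heavy workflow hides before you base task-size conclusions on commit counts.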
Using a third-party vendor (like minware) can help fill in a lot of this context and make data more self-service.
However, no vendor will know the full context of your business, so it’s important to have an in-house expert to configure vendor tools and curate accurate reports.
For example, when we create reports about development activity with minware, we provide expertise on interpreting Git commit data to avoid problems with squash and rebase merges mentioned above. However, we can’t know (without having someone embedded in the company) whether a person with lower output is part-time, an intern, or has other non-development responsibilities that make their level of contribution in line with expectations.
It is particularly important for leaders to demonstrate discipline in this area, because they rarely have the context to analyze data themselves. They should solicit analysis when consuming data and foster skill development so that their organization can successfully use data to make better decisions.
Conclusion
To innovate, you need to be aggressively revolutionary in your business, but also maintain focus to avoid distraction from your core purpose.
At the same time, software engineering is a complex endeavor filled with many choices and pitfalls.
I and many others have navigated these pitfalls the hard way, wasting a lot of time on things that weren’t on the critical path to innovation.
The goal of minimal engineering is to share timeless, hard-won knowledge and provide a framework for software engineering discipline to maximize your chances of revolutionizing your industry.
We have covered a few core tenets of minimal engineering in this article, and plan to expand its breadth and depth in future articles.