3 Steps to Effective Software Design

It is quite an astonishing experience to see the world thriving together, using the computer as a major means of running its daily course. People continually push to expand their knowledge of computer technology. What is even more astonishing is that from a world that began with sharpened rocks and two-stone fires, we now live in a generation renovated by sharpened minds, illuminated by the knowledge we have acquired through the trial and error of time. Thanks to the power of computing, this world is living in transcendence, and will likely become so much more.


Computers have crept into practically every crevice of our daily lives. In fact, we can go as far as to say that our lifestyle is now based on what our computers can offer. In our social structure, we are witnessing a rising number of people using social networking sites, or SNS. They have become so in demand that almost everyone has their hands full with Facebook, Twitter, Instagram, and many other social networking platforms.

Even the field of business has taken notice of computers’ powerful appeal. Websites are now a primary requirement for businesses that want to provide content for the people dubbed ‘netizens’. WordPress, a prevailing tool on the internet, has become widely used; according to this WordPress hosting test, the platform is on the rise and will continue to rise given the consistency of its sites. Looking at the bigger picture, businesses that harness the power of computing create jobs for people trained in the disciplines of computer engineering and computer science, as well as information technology.

We can plainly see that computer technology provides a link between different and varied fields. From the social to the economic to the academic, it offers a common thread. This is the same reason people nowadays are leaning towards understanding its processes, the very fundamentals of this broad subject.

One of the most important aspects of computer technology is software design. What is software design, really? Software design refers to the process of devising software solutions to one or more sets of problems. Frankly speaking, we cannot use any of our computers without software. This is why it is important for us, especially those interested in the field, to take a look at the process of designing software.

Here are three guidelines to understand the software design process.

Planning is Key

Good planning is THE phase that all successful projects share. For software design, the right planning makes all the difference, and the first step towards successful software design planning is critical analysis.


Software designers must interact, extract, recognize, and analyse. Software design is an intricate process that starts with a good grasp of the objectives to be met. Designers should have good interaction skills with the client, so that they can extract the client’s essential purpose and preferences.

Recognizing and distinguishing which requirements need to be checked and rechecked is also an important part of the planning phase. In addition, developers should perform a high-level analysis of what steps to take and how to take them.

The second phase of planning focuses on specification: the task of accurately defining the software to be written. The rigorous, often mathematical, description produced in the specification task forms the backbone of software design.


In practice, most specifications are written to interpret and fine-tune applications that are already well built, although safety-critical software systems are often carefully specified prior to development. Specifications also play a more significant role for external interfaces that must stay stable.

The last stage of planning is the architecture of the software. Software architecture is an abstract representation of the system: the construction that makes sure the requirements and specifications are addressed properly. It is the blueprint of the software to be implemented.

Make it Happen

After all the elements are made clear during the planning stage, the execution phase comes next. In software design, the execution stages are implementation and testing.

Implementation is where the plan is carried out; in this case, it is the coding part. Programming the design is the most visible job of the software engineer. Various programming languages can be used at this stage of execution, so different approaches can be taken in designing software.


After the code has been written, the project must be tested and reviewed in search of any glitches. This opens the door to finding which workarounds solve any underlying problems. Testing is a demanding and laborious task for the software engineer, but it proves to be vital in the development of the actual software. Without testing, software design cannot be proofed or verified.

Maintain it, Sustain It

Software design maintenance is the cherry on top of planning and execution. The truth of the matter is that issues in the software are inevitable. That is why sustaining it with a solid maintenance model is just as important as the development itself. Documentation, training and support, and primary maintenance are the key factors in software sustenance.

One of the most significant things people in the software design field forget is consumer training and support. Even once the software is out, if nobody knows how to operate it, the whole thing is pointless. With ample training, support, and fast feedback, everything can run smoothly. But how do software design companies conduct training and support?


Software designers and other involved parties hold seminars and training sessions for customers and clients before they start using the application. This way, the client can freely ask questions while learning how to handle the product.

In addition, documentation is an integral part of maintenance. In the field of software design, documentation is the task of recording the internal design of the software to create references for future maintenance and enhancement. In effect, it is a diary of what worked and what did not during implementation, and it goes hand in hand with testing. In a nutshell, documentation is the recording and citation part of the whole development process.

Having documented all the problems and technicalities, primary maintenance now comes into play. All the processes mentioned come down to maintenance: a vigorous process that needs ample time. There will almost always be problems in the software, which is why there is often more work in maintenance than in planning and coding combined. For the full-fledged software designer, it is all about continuous growth and continuous learning.

Software design is definitely a complicated process that includes various phases. It requires a lot of work and a whole lot more dedication. With the right planning, designers can learn which demands to meet and how to provide specific solutions. Implementation comes afterwards, where the programming and testing phases bring out the beauty of the technology. Lastly, the maintenance of the software must be watched closely, since problems will certainly arise.

Our relentless reliance on technology attests that we are a race still evolving. To evolve is to accept that change is inevitable, especially in meeting the changing needs of our technologically demanding times. With the right conviction, we can keep expanding our knowledge of computer technology for the advancement of our lives.

Software design is a key ingredient in this endeavour, and having the right knowledge to go about its process is a significant factor for both professionals and aspirants alike.

The Role Of Design In Software Development Process

Anytime we want to build or create something, we first have to imagine what it will look like. Of course, “function” is important as well, perhaps even more than “form” in some cases, but when it comes to software development and modern methods of coding, design is equally important and valuable. Some software developers even say that you should always design first and code later, but this old geek “joke” cannot be applied to all cases, and there are many ways in which good software can be developed if you code first.


The reason this old concept is being forgotten, even becoming obsolete, is that modern programming tools allow faster construction of code, which also enables developers to find and fix problems easily, without performing numerous tests or spending a lot of time verifying the stability and performance of the algorithm.

Software design focuses on both main elements of the code – algorithm and architecture, and this makes it so important. Since design affects the end user in a lot of ways, sometimes even more than the back-end part of the project, it is vital that all elements of a software solution are placed in the right position and that the final design suits the needs of the customers.

Designers have to be in constant communication with the rest of the team, simply because software development is not a “one man’s job” anymore. Several people, at least, are involved in this process, and large companies often deploy dozens of experts to work on a single piece of software. Therefore, communication is key, and being able to understand what the rest of the team needs is imperative for a good designer.

Project managers, consultants, developers, content writers, testers, users, and others are all good sources of information, and a good designer will find the perfect balance between all of these parties.

The process of design is basically about problem-solving and planning, but it can be divided into three main stages.

In the first stage, you brainstorm ideas, create concepts, and make plans about the project and what your end product should look like. Once you find a suitable idea, you move on to the second stage, which has one goal: to create a wireframe of the main elements that make up the architecture of the software. It is important to arrange everything in a way that is simple but functional, and we all know how hard simplicity is to achieve.

After this step comes the third stage, which can be called the “actual design”. This part of the process is concerned with the shapes, colors, textures, and similar features of the elements that make up the design. According to the client’s wishes and preferences, the product receives its final form, and aesthetics matter in this final stage as well. Designers who make everything look “nice” will have a lot of satisfied customers, and they will justify the importance of good design in the process of software development.

Learn Vim Fast: Moving In and Getting Around

If you started learning Vim with my last post, and you’ve been practicing the handful of commands we covered, then you’re probably pretty tired of how you have to move around and edit things with only those rudimentary commands. It’s time to expand our command set and crank up our efficiency in getting things done with Vim, but before we do that, let’s take a look at how to make Vim a bit more comfortable to look at.

Moving In

Normally I use gvim, the variant of Vim with a GUI window. Out of the box, gvim looks like this:


[Screenshot: New gvim window]

It looks somewhat uninviting to my eyes. Personally, I prefer editing code on a dark background, so I’d like to change the color scheme. Also, the bar of icons at the top of the window only encourages you to use the mouse for those common tasks, but we don’t need them. The extra space would be better, so we’ll get rid of them. To change these settings and more, we need to create a Vim configuration file. Putting this file in your home directory will cause Vim to load the configuration whenever it launches. With Vim open, type :e ~/.vimrc to create this file for editing, and enter the following lines:


[Screenshot: Listing of .vimrc]

Don’t be overwhelmed. We’ll go through each one of these settings, but I wanted to lay them all out as one listing first.
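Reconstructed as text, the listing reads something like the sketch below. Where the screenshot's exact choices aren't spelled out in the post (window size, font names, ignored extensions), the values here are illustrative stand-ins, not the originals:

```vim
" Sketch of the .vimrc described in this post; sizes, fonts, and
" wildignore extensions are illustrative guesses.
set nocompatible                 " disable vi compatibility mode
set history=100                  " remember the last 100 command-line commands
set incsearch                    " jump to the first match while typing a search
set hlsearch                     " highlight all matches of the last search
set backspace=indent,eol,start   " make <Backspace> behave as expected
set go-=T                        " short for guioptions-=T: hide the toolbar
set tabstop=2                    " a tab displays as 2 spaces
set expandtab                    " insert spaces instead of tab characters
set shiftwidth=2                 " shift commands indent by 2 spaces
set lines=50 columns=100         " initial GUI window size (illustrative)
set number                       " show line numbers
set autoread                     " reload a buffer changed outside of Vim
if has("gui_win32")
  set guifont=Consolas:h10       " illustrative per-OS font choices
elseif has("gui_macvim")
  set guifont=Menlo:h12
else
  set guifont=DejaVu\ Sans\ Mono\ 10
endif
set wildignore=*.o,*.obj,*.pyc   " extensions Vim should never open (illustrative)
" Trim trailing whitespace just before every buffer is written
autocmd BufWritePre * :%s/\s\+$//e
syntax on                        " enable syntax highlighting
" Put the color scheme file in ~/.vim/colors/
colorscheme vividchalk
```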

Starting at the top, set nocompatible disables compatibility mode so that some new features of Vim work correctly. This setting is likely already set this way, but we’ll be extra sure.

Next, set history=100 allows you to use the up and down arrows at the command line (by typing ‘:’) to search back and forth through your history of commands, and you can execute any previous command by going back to it, editing it if needed, and hitting <Enter>. The history is set to some number I don’t remember by default, but 100 is a good large number. You can set it to whatever you want.

The next two settings, incsearch and hlsearch, cause the cursor to advance to the first matching search term and highlight all matching search terms when you’re searching, respectively. These will come into play later when we cover searching, but just know that they are nice to have on and feel quite natural with the search command.

The set backspace=indent,eol,start setting makes it so the <Backspace> key will delete indents, end-of-line characters, and characters not added in the current Insert Mode session when you’re in Insert Mode. It basically makes the <Backspace> key work as you’d expect, and in recent experience, it seems that backspace is set this way by default. It doesn’t hurt to be explicit, though.

The set go-=T setting is short for guioptions-=T, and this gets rid of the toolbar because you aren’t gonna need it. Besides, it just wastes vertical space, which is at a premium with today’s wide screen monitors.

The next set of options set up how Vim handles tabs. The tab stop is set to 2 spaces, tabs are expanded into spaces, and shifting tabs back and forth will shift them by 2 spaces. This is somewhat of a personal preference. Some people prefer other tab sizes, but this setup has generally worked for me to make tabs behave in a reasonable way. If you want a different tab size, just change the 2s to something else.

The line and column settings force the GUI window to open with that number of lines and columns. You can still resize the window, but these values are a good fit for most of the monitors I use. I might tweak them for larger monitors. I also turn on line numbers with set number, and make Vim automatically update a buffer when the file in the buffer was changed outside of Vim with set autoread.

The section with the set guifont settings changes the font depending on which operating system Vim is running on. Each of the fonts is one that is available by default on the corresponding system and a nice font to look at for coding.

The set wildignore option forces Vim to ignore certain file extensions that I really don’t want it to ever open.

The next command is pretty cool. The autocmd BufWritePre command attaches a task to a hook that executes just prior to writing a buffer to a file. This particular one trims all whitespace from the end of every line. It’s a nice little add-on, and there are a number of other hooks that you can attach commands to. I haven’t found anything else that I want to execute automatically on other actions, but the option is there and the sky’s the limit.

The last two settings turn syntax highlighting on (even though it’s most likely already on) and set the color scheme to my preferred one. You can see the comment before the last line that reminds me where to put the Vim color scheme file. I always forget when setting up a new Vim. That’s where you should put it, too. My favorite is vividchalk, gleaned from this list of color schemes.

Now that you have all of these settings entered, you can save the file and then execute :source ~/.vimrc to actually load the Vim configuration for the currently running Vim session. Some of the changes are immediately visible, while the rest will become apparent as you work in Vim.


[Screenshot: Listing of .vimrc with new settings sourced]

Ahh, that looks much better. Now we can start exploring some ways to get around more easily in Vim than what we’ve been doing with just the arrow, page up, and page down keys.


Getting Around

For the rest of the commands we are going to cover, we’ll use a snippet of code as our canvas to experiment with what these commands do. It’s a Ruby module that implements a few basic statistics functions, but what it does isn’t very important. We’ll simply use it as something to look at in the context of learning new Vim commands. Here’s what it looks like:


[Screenshot: Ruby code listing in gvim]

And here’s the code, so you can copy and paste it into a text file.

module Statistics
  def self.sum(data)
    data.inject(0.0) { |s, val| s + val }
  end

  def self.mean(data)
    sum(data) / data.size
  end

  def self.variance(data)
    mu = mean data
    squared_diff = data.map { |val| (mu - val) ** 2 }
    mean squared_diff
  end

  def self.stdev(data)
    Math.sqrt variance(data)
  end
end

data = [1,2,2,3,3,3,3.5,3.5,3.5,4,4,4,5,5,6]
p Statistics.mean(data)
p Statistics.stdev(data)
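For reference, here is a standalone snippet (independent of the module above) that computes the same statistics for this data, so you can see the values the two p calls should print:

```ruby
# Quick standalone check of the statistics for the sample data above.
data = [1, 2, 2, 3, 3, 3, 3.5, 3.5, 3.5, 4, 4, 4, 5, 5, 6]

mean = data.inject(0.0) { |s, v| s + v } / data.size
variance = data.map { |v| (mean - v) ** 2 }.inject(0.0) { |s, v| s + v } / data.size
stdev = Math.sqrt(variance)

puts mean            # 3.5
puts stdev.round(4)  # 1.2383
```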

Now, let’s say we want to go down to the variance method so we can change the name to var. The cursor is currently at the start of the buffer. Up until now, we’d have to use the arrow keys to get there, but it’s 10 lines down and 12 characters over. It’s so tedious to use the arrow keys, so we want a faster way.

One marginally faster way to move is to use the h, j, k, and l keys. The h and l keys move left and right, respectively, and the j and k keys move down and up, respectively. You can keep these commands straight by remembering that the h key is on the left, the l key is on the right, the j key has a hook down, and the k key has a line going up. If you can get used to using these keys, great. I never have, and I continue to use the arrow keys for small movements. For larger movements like this, we have better options.

Type 10G to move the cursor directly to the beginning of line 10. This is why it’s nice to have the line numbers listed. It makes it really easy to jump to exactly the line you want to go to. The G command can be thought of as Goto, and it will make the cursor go to the line number corresponding to whatever number was typed immediately before it. Most Vim commands are like this, where you can specify a number before the command. In this case, it denotes the line number, but in most other cases it will specify how many times to repeat the command.

Another way to get to the tenth line is to type 10gg. It essentially has the same effect, but G and gg have a subtle difference. If you type G without specifying a number, it will move the cursor to the last line of the buffer, while if you do the same with gg, it will take you to the first line of the buffer. It’s pretty handy if you don’t know how long the buffer is, but you want to get to the end as fast as possible. A 1G would suffice for getting to the beginning of the buffer, but gg is a bit faster to type. Anyway, here’s where we are now:


[Screenshot: Goto line 10 in gvim]

To go the rest of the way, we want to move quickly across this line without having to tap the l key 12 times. We can do better with the w command. This command moves the cursor to the beginning of the next word, which is the first alphanumeric character after a non-alphanumeric character. (An underscore is considered an alphanumeric character.) The lowercase w will also stop at punctuation marks, like the ‘.’, ‘(‘, and ‘)’ in the code. A capital W will only stop at characters following a whitespace character. I’m sure I’m missing some subtleties here, but after experimenting with it for a while, you’ll get a sense for what characters w and W will stop on.

For this case, we want to type www to get to the ‘v’. Finally, we type lll to move over to the ‘i’, and we’re ready to change the name. For now, that means typing i to get into Insert Mode, and hitting the <Delete> key five times to delete ‘iance’.


[Screenshot: Rename variance in gvim]

Most Vim commands have opposite commands, and since w will only move the cursor forward, we need the opposite command for moving backward one word at a time. That command is b, for backward (or maybe backword, so we’ve got word and backword), and W has the corresponding B as its opposite command. All of these commands accept a repeat number before them, so we could have just as easily gotten to the ‘v’ by typing 3w instead of www.

Vim also has quick commands for moving to the beginning or end of a line. If you know regular expressions, these will look familiar. A ^ will move the cursor to the beginning of the line the cursor is on, and a $ will move it to the end of the line.

One more way to get where we want to go even faster is to use the search function. So far we’ve gotten to the desired position in 8 characters (10gg3w3l), but search does it in fewer, and we don’t have to do any mental counting. Starting back at the beginning of the file, we could type /ia, and we’re already there. The / starts the search, and with every character typed after that, the cursor moves to the first matching string of characters typed so far. We only need ‘ia’ to get to ‘iance’, and we can hit <Enter> to end the search. We’re now ready to go into Insert Mode and make the change.


[Screenshot: Search for 'ia' in gvim]

Notice how two instances of ‘ia’ have been highlighted. To get to the second one and make the same change, we can type n to move to the next matching instance of the most recent search string. Now we’re starting to pick up some speed. If the last change you made was deleting ‘iance’ from the first ‘variance’, then you can type . to make the same change to the next ‘variance’ once the cursor is in the right place.

The search and next match functions also have opposites. To search backwards, use ?, and to move to the previous match, use N. If you combine the two by searching with ? and moving to the previous match with N, then the cursor will actually move forward in the buffer when moving to the previous match. Most of the time, I use / and n, but the variants can come in handy sometimes.

For the last set of movement commands, we’ll combine moving and entering Insert Mode. Notice that the last couple lines print out calculations of the mean and standard deviation of some data, but we haven’t tried the variance method directly. Let’s quickly add a line to do that. First, type 22gg to get to line 22. Then type o to open a line below line 22 with the insertion point at the beginning of the new line.


[Screenshot: Open a new line in gvim]

Finally, you can type p Statistics.var(data), remember to hit <Esc>, and you’re done. The open new line command has an opposite as well. The O command opens a new line above the line with the cursor. At this point you may be asking yourself if i also has a capital I version, and of course, the answer is yes. The I command is not the opposite of i, though. It enters Insert Mode with the insertion point before the first non-whitespace character on the line that the cursor is on instead of immediately before the character that the cursor is on. The opposite command of i is actually a, for append, and a puts the insertion point just after the character that the cursor is on. To round things out, a capital A puts the insertion point at the end of the current line. So there are plenty of ways to enter Insert Mode, with each one coming in handy in certain situations.

It may seem a bit overwhelming at this point, but with a little practice, these keystrokes become second nature. Many of them make sense from the letters that were picked to be associated with commands, and making commands orthogonal allows for some great flexibility. To sum up, in this post we’ve covered the following new commands:

  • h, j, k, l – Move left, down, up, and right
  • gg, G, <n>gg – Move to first, last, or specified line
  • w, W, b, B – Move forward or backward a word
  • ^, $ – Move to beginning or end of the line
  • /, ? – Search forward or backward
  • n, N – Move to next or previous match
  • . – Repeat last edit
  • o, O – Open a line in Insert Mode after or before the current line
  • a, A – Append to current character or end of the current line
  • I – Insert at beginning of the current line

After practicing these movements, you’ll be able to fly around your code files with ease. Keep practicing, and next time we’ll cover many of the fast ways we can modify text in Vim.


The 5 Best Free FTP Clients

Transferring files to and from your web host or server is best done with what’s commonly known as an FTP client, though the term is a bit dated because there are more secure alternatives such as SFTP and FTPS.

When I was putting together this list, these were my criteria:

  • Supports secure file transfer protocols: FTP isn’t secure. Among its many flaws, plain FTP doesn’t encrypt the data you’re transferring. If your connection is intercepted en route to its destination, your credentials (username and password) and your data can easily be read. SFTP (which stands for SSH File Transfer Protocol) is a popular secure alternative, but there are many others.
  • Has a GUI: There are some awesome FTP clients with a command-line interface, but for a great number of people, a graphical user interface is more approachable and easier to use.

Topping the list is FileZilla, an open source FTP client. It’s fast, being able to handle simultaneous transmissions (multi-threaded transfers), and supports SFTP and FTPS (which stands for FTP over SSL). What’s more, it’s available on all operating systems, so if you work on multiple computers — like if you’re forced to use Windows at work but you have a Mac at home — you don’t need to use a different application for your file-transferring needs.


Available on Windows, Mac OS and Linux

Cyberduck can take care of a ton of your file-transferring needs: SFTP, WebDav, Amazon S3, and more. It has a minimalist UI, which makes it super easy to use.


Available on Windows and Mac OS

This Mozilla Firefox add-on gives you a very capable FTP/SFTP client right within your browser. It’s available on all platforms that can run Firefox.


Available on Windows, Mac OS and Linux

Classic FTP is a file transfer client that’s free for non-commercial use. It has a very simple interface, which is a good thing, because it makes it easy and intuitive to use. I like its “Compare Directories” feature that’s helpful for seeing differences between your local and remote files.

Classic FTP

Available on Windows and Mac OS

This popular FTP client has a very long list of features, and if you’re a Windows user, it’s certainly worth a look. WinSCP can deal with multiple file-transfer protocols (SFTP, SCP, FTP, and WebDav). It has a built-in text editor for making quick text edits more convenient, and has scripting support for power users.


Available on Windows

Honorable Mention: Transmit

For this post, I decided to focus on free software. But it just doesn’t seem right to leave out Transmit (which costs $34) in a post about FTP clients because it’s a popular option used by web developers on Mac OS. It has a lot of innovative features and its user-friendliness is unmatched. If you’ve got the cash to spare and you’re on a Mac, this might be your best option.

Transmit (Source: panic.com)

Available on Mac OS

Which FTP client do you use?

There are a great many FTP clients out there. If your favorite FTP client isn’t on the list, please mention it in the comments for the benefit of other readers. And if you’ve used any of the FTP clients mentioned here, please do share your thoughts about them too.

Jacob Gube is the founder of Six Revisions. He’s a front-end developer. Connect with him on Twitter and Facebook.



12 Brackets Extensions That Will Make Your Life Easier

Brackets is a great source code editor for web designers and front-end web developers. It has a ton of useful features out of the box. You can make your Brackets experience even better by using extensions.

These Brackets extensions will help make your web design and front-end web development workflow a little easier.

Quickly see the current level of browser support a certain web technology has without leaving Brackets. This extension sources its data from Can I use.


HTML Skeleton helps you set up your HTML files quickly by automatically inserting basic markup such as the doctype declaration, <html>, <head>, <body>, etc.

HTML Skeleton

Related: A Generic HTML5 Template
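The exact markup depends on the extension’s settings, but the kind of skeleton such an extension inserts looks along these lines (illustrative, not the extension’s literal output):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title></title>
</head>
<body>

</body>
</html>
```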

Rapidly mark up a list of text into list items (<li>), table rows (<tr>), hyperlinks (<a>), and more with HTML Wrapper.

HTML Wrapper

This is a super simple extension that adds file icons in Brackets’s sidebar. The icons are excellent visual cues that make it much easier to identify the file you’d like to work on.

Brackets Icons

Automatically and intelligently add vendor prefixes to your CSS properties with the Autoprefixer extension. It uses browser support data from Can I use to decide whether or not a vendor prefix is needed. It’ll also remove unnecessary vendor prefixes.
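As an illustration, a rule like the one below might be expanded as shown; the exact prefixes vary with the browser-support data in effect at the time:

```css
/* What you write: */
.box {
  display: flex;
  transition: transform 0.3s;
}

/* What Autoprefixer might output (prefixes depend on the support data): */
.box {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  -webkit-transition: -webkit-transform 0.3s;
  transition: transform 0.3s;
}
```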


This extension will remove unneeded characters from your JavaScript and CSS files. This process is called minification, and it can improve your website’s speed.

JS CSS Minifier

This extension highlights CSS errors and code-quality issues. The errors and warnings reported by this extension are based on CSS Lint rules.


Emmet is a collection of tools and keyboard shortcuts that can speed up HTML- and CSS-authoring.
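For example, typing the Emmet abbreviation `ul.nav>li*3>a` and expanding it produces roughly the following markup:

```html
<ul class="nav">
  <li><a href=""></a></li>
  <li><a href=""></a></li>
  <li><a href=""></a></li>
</ul>
```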


Need some text to fill up your design prototype? The Lorem Ipsum Generator extension helps you conveniently generate dummy text. (And if you need image placeholders, have a look at Lorem Pixel or DEVimg.)

Lorem Ipsum Generator

This extension will help keep your HTML, CSS, and JavaScript code consistently formatted, indented, and — most importantly — readable. An alternative option to check out is the CSSComb extension.


Make sure you don’t forget your project tasks by using the Simple To-Do extension, which allows you to create and manage to-do lists for each project within Brackets.

Simple To-Do

Transferring and syncing your project’s files to your web host or server requires FTP or SFTP, but such a fundamental web development feature doesn’t come with Brackets. To remedy the situation, use the eqFTP extension, an FTP/SFTP client that you can operate from within Brackets.


How to Install Brackets Extensions

The quickest way to install Brackets extensions is by using the Extension Manager — access it by choosing File > Extension Manager in Brackets’s toolbar.

Brackets Extension Manager

If I didn’t mention your favorite Brackets extension, please talk about it in the comments.




This was published on Mar 28, 2016



7 Free UX E-Books Worth Reading

The best designers are lifelong students. While nothing beats experience in the field, the amount of helpful online resources certainly helps keep our knowledge sharp.

In this post, I’ve rounded up some useful e-books that provide excellent UX advice and insights.

This is a free e-book by usability consultancy firm Userfocus. The best part of this book is its casual tone. Acronyms like “the CRAP way to usability” and analogies to The Beatles make the book’s lessons a lot easier to remember, and make for an interesting read. That’s why this book is one of my favorites.

50 User Experience Best Practices

As the book’s title implies, 50 User Experience Best Practices delivers UX tips and best practices. It delves into subjects such as user research and content strategy. One of the secrets to this book’s success is its creative and easy-to-comprehend visuals. This e-book was written and published by the now-defunct UX design agency, Above the Fold.

UX Design Trends Bundle

Over at UXPin, my team and I have written and published a lot of free e-books. For this post, I’d like to specifically highlight our UX Design Trends Bundle. It’s a compilation of three of our e-books: Web Design Trends 2016, UX Design Trends 2015 & 2016, and Mobile UI Design Trends 2015 & 2016. Totaling 350+ pages, this bundle examines over 300 excellent designs.

UX Storytellers: Connecting the Dots

Published in 2009, UX Storytellers: Connecting the Dots, continues to be a very insightful read. This classic e-book stays relevant because of its unique format: It collects stand-alone stories and advice from 42 UX professionals. At 586 pages, there’s a ton of content in this book. Download it now to learn about the struggles — and solutions — UX professionals can expect to face.

The UX Reader

This e-book covers all the important components of the UX design process. It’s full of valuable insights, making it appealing to both beginners and veterans alike. The book is divided into five categories: Collaboration, Research, Design, Development, and Refinement. Each category contains a series of articles written by different members of MailChimp’s UX team.

Learn from Great Design

Only a portion of this book, 57 pages, is free.

In this e-book, web designer Tom Kenny does in-depth analyses of great web designs, pointing out what they’re doing right and what they could do better. For those who learn best by looking at real-world examples, this book is a great read.

The full version of this e-book contains 20 case studies; the free sample only has 3 of those case studies.

The Practical Interaction Design Bundle

I’ll end this list with another UXPin selection. This bundle contains three of our IxD e-books: Interaction Design Best Practices Volume 1 and Volume 2, as well as Consistency in UI Design.

  • Interaction Design Best Practices Volume 1 covers the “tangibles” — visuals, words, and space — and explains how to implement signifiers, how to construct a visual hierarchy, and how to make interactions feel like real conversations.
  • Interaction Design Best Practices Volume 2 covers the “intangibles” — time, responsiveness, and behavior — and covers topics from animation to enjoyment.
  • Consistency in UI Design explains the role that consistency plays in learnability, reducing friction, and drawing attention to certain elements.

Altogether, the bundle includes 250 pages of best practices and 60 design examples.

Did I leave out your favorite UX e-book? Let me know in the comments.

About the Author

Jerry Cao is a content strategist at UXPin. In the past few years, he’s worked on improving website experiences through better content, design, and information architecture (IA). Join him on Twitter: @jerrycao_uxpin.


This was published on Mar 21, 2016



(Over)using with in Elixir 1.2

Elixir 1.2 introduced a new expression type, with. It’s so new that the syntax highlighter I use in this blog doesn’t know about it.

with is a bit like let in other functional languages, in that it defines a local scope for variables. This means you can write something like

owner = "Jill"
with name  = "/etc/passwd",
     stat  = File.stat!(name),
     owner = stat.uid,
do:  IO.puts "#{name} is owned by user ##{owner}"
IO.puts "And #{owner} is still Jill"

The with expression has two parts. The first is a list of expressions; the second is a do block. The initial expressions are evaluated in turn, and then the code in the do block is evaluated. Any variables introduced inside a with are local to that with. In the case of the example code, this means that the line owner = stat.uid will create a new variable, and not change the binding of the variable of the same name in the outer scope.

On its own, this is a big win, as it lets us break apart complex function call sequences that aren’t amenable to a pipeline. Basically, we get temporary variables. And this makes reading code a lot more fun.

For example, here’s some code I wrote a year ago. It handles the command-line options for the Earmark markdown parser:

defp parse_args(argv) do
  switches = [ help: :boolean, version: :boolean ]
  aliases  = [ h: :help, v: :version ]
  parse = OptionParser.parse(argv, switches: switches, aliases: aliases)
  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Quick! Scan this and decide how many times the switches variable is used in the function. You have to stop and parse the code to find out. And given the ugly case expression at the end, that isn’t trivial.

Here’s how I’d have written this code this morning:

defp parse_args(argv) do
  parse = with switches = [ help: :boolean, version: :boolean ],
               aliases  = [ h: :help, v: :version ],
          do:  OptionParser.parse(argv, switches: switches, aliases: aliases)

  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Now the scope of switches and aliases is explicit—we know they can’t be used in the case expression.

There’s still the parse variable, though. We could handle this with a nested with, but that would probably make our function harder to read. Instead, I think I’d refactor this into two helper functions:

defp parse_args(argv) do
  argv
  |> parse_into_options
  |> options_to_values
end

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do:  OptionParser.parse(argv, switches: switches, aliases: aliases)
end

defp options_to_values(options) do
  case options do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Much better: easier to read, easier to test, and easier to change.

Now, at this point you might be wondering why I left the with expression in the parse_into_options function. A good question, and one I’ll try to answer after looking at the second use of with.

with and Pattern Matching

The previous section parsed command line arguments. Let’s change it up (slightly) and look at validating options passed between functions.

I’m in the middle of writing an Elixir interface to GitLab, the open source GitHub contender. It’s a simple but wide JSON REST API, with dozens, if not hundreds, of available calls. And most of these calls take a set of named parameters, some required and some optional. For example, the API to create a user has four required parameters (email, name, password, and username) along with a bunch of optional ones (bio, Skype and Twitter handles, and so on).

I wanted my interface code to validate that the parameters passed to it met the GitLab API spec, so I wrote a simple option checking library. Here’s some idea of how it could be used:

@create_options_spec %{
  required: MapSet.new([ :email, :name, :password, :username ]),
  optional: MapSet.new([ :admin, :bio, :can_create_group, :confirm,
                         :extern_uid, :linkedin, :projects_limit, :provider,
                         :skype, :twitter, :website_url ])
}

def create_user(options) do
  { :ok, full_options } = Options.check(options, @create_options_spec)
  API.post("users", full_options)
end

The options specification is a Map with two keys, :required and :optional. We pass it to Options.check, which validates that the options passed to the API contain all required values and that any additional values are in the optional set.

Here’s a first implementation of the option checker:

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |> MapSet.new,
  do:  if opts_required(keys, spec) == :ok && opts_optional(keys, spec) == :ok do
         { :ok, given }
       else
         :error
       end
end

We extract the keys from the options we are given, then call two helper functions to verify that all required values are there and that any other keys are in the optional list. These both return :ok if their checks pass, {:error, msg} otherwise.

Although this code works, we sacrificed the error messages to keep it compact. If either checking function fails to return :ok, we bail and return :error.

This is where with shines. In the list of expressions between the with and the do we can use <-, the new conditional pattern match operator.

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |> MapSet.new,
       :ok <- opts_required(keys, spec),
       :ok <- opts_optional(keys, spec),
  do:  { :ok, given }
end

The <- operator does a pattern match, just like =. If the match succeeds, then the effect of the two is identical—variables on the left are bound to values if necessary, and execution continues.

= and <- diverge if the match fails. The = operator will raise an exception. But <- does something sneaky: it terminates the execution of the with expression, but doesn’t raise an exception. Instead, the with returns the value that couldn’t be matched.

In our option checker, this means that if both the required and optional checks return :ok, we fall through and the with returns the {:ok, given} tuple.

But if either fails, it will return {:error, msg}. As the <- operator won’t match, the with clause will exit early. Its value will be the error tuple, and so that’s what the function returns.

The Point, Labored

The new with expression gives you two great features in one tidy package: lexical scoping and early exit on failure.

It makes your code better.

Use it.

A lot.

Here’s Where I Differ with José

Johnny Winn interviewed José for the Elixir Fountain podcast a few weeks ago.

The discussion turned to the new features of Elixir 1.2, and José described with. At the end, he somewhat downplayed it, saying you rarely needed it, but when you did it was invaluable. He mentioned that there were perhaps just a couple of times it was used in the Elixir source.

I think that with is more than that. You rarely need it, but you’d often benefit from using it. In fact, I am experimenting with using it every time I create a function-level local variable.

What I’m finding is that this discipline drives me to create simpler, single-purpose functions. If I have a function where I can’t easily encapsulate a local within a with, then I spend a moment thinking about splitting it into two. And that split almost always improves my code.

So that’s why I left the with in the parse_into_options function earlier.

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do:  OptionParser.parse(argv, switches: switches, aliases: aliases)
end

It isn’t needed, but I like the way it delineates the two parts of the function, making it clear what is incidental and what is core. In my head, it has a narrative structure that simple linear code lacks.

This is just unfounded opinion. But you might want to experiment with the technique for a few weeks to see how it works for you.

Two is Too Many

There is a key rule that I personally operate by when I’m doing incremental development and design, which I call “two is too many.” It’s how I implement the “be only as generic as you need to be” rule from the Three Flaws of Software Design.

Essentially, I know exactly how generic my code needs to be by noticing that I’m tempted to cut and paste some code, and then instead of cutting and pasting it, designing a generic solution that meets just those two specific needs. I do this as soon as I’m tempted to have two implementations of something.

For example, let’s say I was designing an audio decoder, and at first I only supported WAV files. Then I wanted to add an MP3 parser to the code. There would definitely be common parts to the WAV and MP3 parsing code, and instead of copying and pasting any of it, I would immediately make a superclass or utility library that did only what I needed for those two implementations.

The key aspect of this is that I did it right away—I didn’t allow there to be two competing implementations; I immediately made one generic solution. The next important aspect of this is that I didn’t make it too generic—the solution only supports WAV and MP3 and doesn’t expect other formats in any way.
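The WAV/MP3 example can be sketched in a few lines of Python (hypothetical class and method names, not from the article): as soon as the second format appears, the shared logic moves into one small base class that supports exactly those two formats and nothing more.

```python
class AudioParser:
    """Shared logic needed by exactly the two formats we support."""

    def __init__(self, data: bytes):
        self.data = data

    def read_bytes(self, offset: int, length: int) -> bytes:
        # Common byte-level access used by both parsers.
        return self.data[offset:offset + length]


class WavParser(AudioParser):
    def format_name(self) -> str:
        return "WAV"


class Mp3Parser(AudioParser):
    def format_name(self) -> str:
        return "MP3"
```

Note what is deliberately absent: no plugin registry, no abstract "format descriptor" machinery—nothing a third, purely hypothetical format would need.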

Another part of this rule is that a developer should ideally never have to modify one part of the code in a similar or identical way to how they just modified a different part of it. They should not have to “remember” to update Class A when they update Class B. They should not have to know that if Constant X changes, you have to update File Y. In other words, it’s not just two implementations that are bad, but also two locations. It isn’t always possible to implement systems this way, but it’s something to strive for.

If you find yourself in a situation where you have to have two locations for something, make sure that the system fails loudly and visibly when they are not “in sync.” Compilation should fail, a test that always gets run should fail, etc. It should be impossible to let them get out of sync.
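One cheap way to fail loudly, sketched in Python with made-up names: when a value must appear in two places, an always-run test asserts they agree, so drift breaks the build instead of shipping silently.

```python
# Two locations that must stay in sync (hypothetical example):
# a config constant, and a user-facing message that repeats it.
MAX_UPLOAD_MB = 50
ERROR_MESSAGE = "Files larger than 50 MB are rejected."


def test_limit_and_message_in_sync():
    # Fails the moment someone changes one location but not the other.
    assert str(MAX_UPLOAD_MB) in ERROR_MESSAGE, (
        "MAX_UPLOAD_MB changed -- update ERROR_MESSAGE to match"
    )


test_limit_and_message_in_sync()
```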

And of course, the simplest part of this rule is the classic “Don’t Repeat Yourself” principle—don’t have two constants that represent the same exact thing, don’t have two functions that do the same exact thing, etc.

There are likely other ways that this rule applies. The general idea is that when you want to have two implementations of a single concept, you should somehow make that into a single implementation instead.

When refactoring, this rule helps find things that could be improved and gives some guidance on how to go about it. When you see duplicate logic in the system, you should attempt to combine those two locations into one. Then if there is another location, combine that one into the new generic system, and proceed in that manner. That is, if there are many different implementations that need to be combined into one, you can do incremental refactoring by combining two implementations at a time, as long as combining them does actually make the system simpler (easier to understand and maintain). Sometimes you have to figure out the best order in which to combine them to make this most efficient, but if you can’t figure that out, don’t worry about it—just combine two at a time and usually you’ll wind up with a single good solution to all the problems.

It’s also important not to combine things when they shouldn’t be combined. There are times when combining two implementations into one would cause more complexity for the system as a whole or violate the Single Responsibility Principle. For example, if your system’s representation of a Car and a Person have some slightly similar code, don’t solve this “problem” by combining them into a single CarPerson class. That’s not likely to decrease complexity, because a CarPerson is actually two different things and should be represented by two separate classes.

This isn’t a hard and fast law of the universe—it’s a more of a strong guideline that I use for making judgments about design as I develop incrementally. However, it’s quite useful in refactoring a legacy system, developing a new system, and just generally improving code simplicity.


Immutability, State, and Functions

Let’s start with the obligatory call to authority:

In functional programming, programs are executed by evaluating expressions, in contrast with imperative programming where programs are composed of statements which change global state when executed. Functional programming typically avoids using mutable state.


Well, that seems pretty definitive. “Functional programming typically avoids mutable state.” Seems pretty clearcut.

But it’s wrong.

Explaining why I think that will involve a trip down the path I’ve been exploring over the last year or so, as I have tried to crystallize my thinking on the new styles of programming, and the role of transformation as both a top-down and bottom-up coding and design technique.

Let’s start by thinking about state.

Where Does a Program Keep Its State?

Programs run on computers, and at the lowest level their model of computation is tied to that of the machines on which they execute. Down at that low level, the state of a program is the state of the computer—the values in memory and the values in registers. Some of those registers are used internally by the processor for housekeeping. Perhaps the most important of these is the program counter (PC). You can think of the PC as a pointer to the next instruction to execute.

We can take this up a level. Here’s a simple program:

"Cat"
|> String.downcase    # => "cat"
|> String.codepoints  # => [ "c", "a", "t" ]
|> Enum.sort          # => [ "a", "c", "t" ]

The |> notation is syntactic sugar for passing the result of a function as the first parameter of the next function. The preceding code is equivalent to

Enum.sort(String.codepoints(String.downcase("Cat")))

Thrilling stuff, eh?

Let’s imagine we’d just finished executing the first line. What is our state?

Somewhere in memory, there’s a data structure representing the string “Cat”. That’s the first part of our state. The second part is the value of the program counter. Logically, it’s pointing to the start of line 2.

Execute one more line. String.downcase is passed the string “Cat”. The result, another string containing “cat”, is stored in a different place in our computer. The PC now points to the start of line 3.

And so it goes. With each step, the state of the computer changes, meaning that the state of our program changes.

State is not immutable.

Is This Splitting Hairs?

Yes and no.

Yes, because no one would argue that the state of a computer is unchanged during the execution of a program.

No, because people still say that immutable state is a characteristic of functional programming. That’s wrong. Worse, that also leads us to model programming wrongly. And that’s what the rest of this post is about.

What Is Immutable?

Let’s get this out of the way first. In a functional program, values are immutable. Look at the following code.

person = get_user_details("Dave")
IO.inspect person
do_something_with(person)
IO.inspect person

Let’s assume that get_user_details returns some structured data, which we dump out to some log file on line two. In a language with immutable values, that data can never be changed. We know that nothing in the function do_something_with can change the data referenced by the person variable, and so the debugging we write on line 4 is guaranteed to be the same as that created on line 2.

If we wanted to change the information for Dave, we’d have to create a copy of Dave’s data:

person1 = change_subscription_status(person, :active)

Now we have the variable person bound to the initial value of the Dave person, and person1 references the version with a changed subscription status.

If you’ve been using languages with mutable data, at this point you’ll have intuitively created a mental picture where person and person1 reference different chunks of memory. And you might be thinking that this is remarkably inefficient. But in an immutable world, it needn’t be. Because the runtime knows that the original data will never be changed, it can reuse much of it in person1. In principle, you could have a runtime that represented new values as nothing more than a set of changes to be applied to the original.
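To make the reuse idea concrete, here is a minimal sketch in Python (the article’s own examples are Elixir; the dict and field names below are hypothetical). Python dicts are mutable, so this only illustrates the sharing, not true immutability: the “new” value reuses the unchanged parts of the old one instead of copying them.

```python
person = {
    "name": "Dave",
    "subscription": "inactive",
    "history": ["signed_up", "downgraded"],  # potentially large data
}

# Build a new mapping that shares the untouched values with the original.
person1 = {**person, "subscription": "active"}

# The history list is the very same object in both versions...
assert person1["history"] is person["history"]
# ...and the original binding still sees the old status.
assert person["subscription"] == "inactive"
assert person1["subscription"] == "active"
```

A real immutable runtime takes this further: because nothing can ever mutate `person["history"]`, sharing it is always safe, with no defensive copies needed.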

Anyway, back to state.

person = get_user_details("Dave")
person1 = change_subscription_status(person, :active)
IO.inspect person1

Let’s represent the state as a table showing, for each value of the pseudo program counter, the values bound to variables.

Line | person | person1
-----+--------+--------
  2  | value1 |
  3  | value1 |
  4  | value1 | value2


How to Handle Code Complexity in a Software Company

Here’s an obvious statement that has some subtle consequences:

Only an individual programmer can resolve code complexity.

That is, resolving code complexity requires the attention of an individual person on that code. They can certainly use appropriate tools to make the task easier, but ultimately it’s the application of human intelligence, attention, and work that simplifies code.

So what? Why does this matter? Well, to be clearer:

Resolving code complexity usually requires detailed work at the level of the individual contributor.

If a manager just says “simplify the code!” and leaves it at that, usually nothing happens, because (a) they’re not being specific enough, (b) they don’t necessarily have the knowledge required about each individual piece of code in order to be that specific, and (c) part of understanding the problem is actually going through the process of solving it, and the manager isn’t the person writing the solution.

The higher a manager’s level in the company, the more true this is. When a CTO, Vice President, or Engineering Director gives an instruction like “improve code quality” but doesn’t get much more specific than that, what tends to happen is that a lot of motion occurs in the company but the codebase doesn’t significantly improve.

It’s very tempting, if you’re a software engineering manager, to propose broad, sweeping solutions to problems that affect large areas. The problem with that approach to code complexity is that the problem is usually composed of many different small projects that require detailed work from individual programmers. So, if you try to handle everything with the same broad solution, that solution won’t fit most of the situations that need to be handled. Your attempt at a broad solution will actually backfire, with software engineers feeling like they did a lot of work but didn’t actually produce a maintainable, simple codebase. (This is a common pattern in software management, and it contributes to the mistaken belief that code complexity is inevitable and nothing can be done about it.)

So what can you do as a manager, if you have a complex codebase and want to resolve it? Well, the trick is to get the data from the individual contributors and then work with them to help them resolve the issues. The sequence goes roughly like this:

  1. Ask each member of your team to write down a list of what frustrates them about the code. The symptoms of code complexity are things like emotional reactions to code, confusion about code, feeling like a piece will break if you touch it, difficulties optimizing, etc. So you want the answers to questions like, “Is there a part of the system that makes you nervous when you modify it?” or “Is there some part of the codebase that frustrates you to work with?”

    Each individual software engineer should write their own list. I wouldn’t recommend implementing some system for collecting the lists—just have people write down the issues for themselves in whatever way is easiest for them. Give them a few days to write this list; they might think of other things over time.

    The list doesn’t just have to be about your own codebase, but can be about any code that the developer has to work with or use.

    You’re looking for symptoms at this point, not causes. Developers can be as general or as specific as they want, for this list.

  2. Call a meeting with your team and have each person bring their list and a computer that they can use to access the codebase. The ideal size for a team meeting like this is about six or seven people, so you might want to break things down into sub-teams.

    In this meeting you want to go over the lists and get the name of a specific directory, file, class, method, or block of code to associate with each symptom. Even if somebody says something like, “The whole codebase has no unit tests,” then you might say, “Tell me about a specific time that that affected you,” and use the response to that to narrow down what files it’s most important to write unit tests for right away. You also want to be sure that you’re really getting a description of the problem, which might be something more like “It’s difficult to refactor the codebase because I don’t know if I’m breaking other people’s modules.” Then unit tests might be the solution, but you first want to narrow down specifically where the problem lies, as much as possible. (It’s true that almost all code should be unit tested, but if you don’t have any unit tests, you’ll need to start off with some doable task on the subject.)

    In general, the idea here is that only code can actually be fixed, so you have to know what piece of code is the problem. It might be true that there’s a broad problem, but that problem can be broken down into specific problems with specific pieces of code that are affected, one by one.

  3. Using the information from the meeting, file a bug describing the problem (not the solution, just the problem!) for each directory, file, class, etc. that was named. A bug could be as simple as “FrobberFactory is hard to understand.”

    If a solution was suggested during the meeting, you can note that in the bug, but the bug itself should primarily be about the problem.
  4. Now it’s time to prioritize. The first thing to do is to look at which issues affect the largest number of developers the most severely. Those are high priority issues. Usually this part of prioritization is done by somebody who has a broad view over developers in the team or company. Often, this is a manager.

    That said, sometimes issues have an order that they should be resolved in that is not directly related to their severity. For example, Issue X has to be resolved before Issue Y can be resolved, or resolving Issue A would make resolving Issue B easier. This means that Issue A and Issue X should be fixed first even if they’re not as severe as the issues that they block. Often, there’s a chain of issues like this, and the trick is to find the issue at the bottom of the stack. Handling this part of prioritization incorrectly is one of the most common and major mistakes in software design. It may seem like a minor detail, but in fact it is critical to the success of efforts to resolve complexity. The essence of good software design in all situations is taking the right actions in the right sequence. Forcing developers to tackle issues out of sequence (without regard for which problems underlie which other problems) will cause code complexity.

    This part of prioritization is a technical task that is usually best done by the technical lead of the team. Sometimes this is a manager, but other times it’s a senior software engineer.

    Sometimes you don’t really know which issue to tackle first until you’re doing development on one piece of code and you discover that it would be easier to fix a different piece of code first. With that said, if you can determine the ordering up front, it’s good to do so. But if you find that you’d have to get into actually figuring out solutions in order to determine the ordering, just skip it for now.

    Whether you do it up front or during development, it’s important that individual programmers do realize when there is an underlying task to tackle before the one they have been assigned. They must be empowered to switch from their current task to the one that actually blocks them. There is a limit to this (for example, rewriting the whole system into another language just to fix one file is not a good use of time) but generally, “finding the issue at the bottom of the stack” is one of the most important tasks a developer has when doing these sorts of cleanups.

  5. Now you assign each bug to an individual contributor. This is a pretty standard managerial process, and while it definitely involves some detailed work and communication, I would imagine that most software engineering managers are already familiar with how to do it.

    One tricky piece here is that some of the bugs might be about code that isn’t maintained by your team. In that case you’ll have to work appropriately through the organization to get the appropriate team to take responsibility for the issue. It helps to have buy-in from a manager that you have in common with the other team, higher up the chain, here.

    In some organizations, if the other team’s problem is not too complex or detailed, it might also be possible for your team to just make the changes themselves. This is a judgment call that you can make based on what you think is best for overall productivity.

  6. Now that you have all of these bugs filed, you have to figure out when to address them. Generally, the right thing to do is to make sure that developers regularly fix some of the code quality issues that you filed along with their feature work.

    If your team makes plans for a period of time like a quarter or six weeks, you should include some of the code cleanups in every plan. The best way to do this is to have developers first do cleanups that would make their specific feature work easier, and then have them do that feature work. Usually this doesn’t even slow down their feature work overall. (That is, if this is done correctly, developers can usually accomplish the same amount of feature work in a quarter that they could even if they weren’t also doing code cleanups, providing evidence that the code cleanups are already improving productivity.)

    Don’t stop normal feature development entirely to just work on code quality. Instead, make sure that enough code quality work is being done continuously that the codebase’s quality is always improving overall rather than getting worse over time.

If you do those things, that should get you well on the road to an actually-improving codebase. There’s actually quite a bit to know about this process in general—perhaps enough for another entire book. However, the above plus some common sense and experience should be enough to make major improvements in the quality of your codebase, and perhaps even improve your life as a software engineer or manager, too.
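The dependency-ordering rule from step 4 ("find the issue at the bottom of the stack") can be sketched as a small topological sort. This is a hypothetical illustration, not part of the process description above; the issue names are invented, and it uses Python's standard-library graphlib.

```python
from graphlib import TopologicalSorter

# blocked_by[issue] = the set of issues that must be fixed before it.
blocked_by = {
    "refactor FrobberFactory":    {"add unit tests for Frobber"},
    "split billing module":       {"refactor FrobberFactory"},
    "add unit tests for Frobber": set(),
}

# static_order() yields prerequisites before the issues they block,
# so the "bottom of the stack" comes out first.
order = list(TopologicalSorter(blocked_by).static_order())
```

In practice the graph is rarely known completely up front; as the article notes, developers discover blocking issues mid-task, so treat any such ordering as a living estimate rather than a fixed plan.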


P.S. If you do find yourself wanting more help on it, I’d be happy to come speak at your company. Just let me know.

Test-Driven Development and the Cycle of Observation

Today there was an interesting discussion between Kent Beck, Martin Fowler, and David Heinemeier Hansson on the nature and use of Test-Driven Development (TDD), where one writes tests first and then writes code.

Each participant in the conversation had different personal preferences for how they write code, which makes sense. However, from each participant’s personal preference you could extract an identical principle: “I need to observe something before I can make a decision.” Kent often (though not always) liked writing tests first so that he could observe their behavior while coding. David often (though not always) wanted to write some initial code, observe that to decide on how to write more code, and so on. Even when they talked about their alternative methods (Kent talking about times he doesn’t use TDD, for example) they still always talked about having something to look at as an inherent part of the development process.

It’s possible to minimize this point and say it’s only relevant to debugging or testing. It’s true that it’s useful in those areas, but when you talk to many senior developers you find that this idea is actually a fundamental basis of their whole development workflow. They want to see something that will help them make decisions about their code. It’s not something that only happens when code is complete or when there’s a bug—it’s something that happens at every moment of the software lifecycle.

This is such a broad principle that you could say the cycle of all software development is:

Observation → Decision → Action → Observation → Decision → Action → etc.

If you want a term for this, you could call it the “Cycle of Observation” or “ODA.”


What do I mean by all of this? Well, let’s take some examples to make it clearer. When doing TDD, the cycle looks like:

  1. See a problem (observation).
  2. Decide to solve the problem (decision).
  3. Write a test (action).
  4. Look at the test and see if the API looks good (observation).
  5. If it doesn’t look good, decide how to fix it (decision), change the test (action), and repeat Observation → Decision → Action until you like what the API looks like.
  6. Now that the API looks good, run the test and see that it fails (observation).
  7. Decide how you’re going to make the test pass (decision).
  8. Write some code (action).
  9. Run the test and see that it passes or fails (observation).
  10. If it fails, decide how to fix it (decision) and write some code (action) until the test passes (observation).
  11. Decide what to work on next, based on principles of software design, knowledge of the problem, or the data you gained while writing the previous code (decision).
  12. And so on.
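The middle of that loop (steps 3 through 9) can be sketched concretely. Here is a minimal, hypothetical example in Python—the `slugify` function and its tests are purely illustrative, not from any particular codebase:

```python
import re

# Step 3 (action): write the test first. Reading the calls below is how we
# judge whether the API looks good (steps 4-5) before any code exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  a   b  ") == "a-b"

# Step 6 (observation): running test_slugify() at this point fails with a
# NameError, because slugify doesn't exist yet. That failure is the
# observation that drives steps 7-8: decide on an approach and write it.
def slugify(text: str) -> str:
    # Lowercase the input, keep only alphanumeric runs, and join them
    # with single hyphens.
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

test_slugify()  # Step 9 (observation): run the test again and see it pass.
```

Each run of the test is an Observation; each change to the test or the code is a Decision followed by an Action.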

Another valid way to go about this would be to write the code first. The difference from the above sequence is that Step 3 would be “write some code” rather than “write a test.” Then you observe the code itself to make further decisions, or you write tests after the code and observe those.

There are many valid processes.

Development Processes and Productivity

What’s interesting is that, as far as I know, every valid development process follows this cycle as its primary guiding principle. Even large-scale processes like Agile that cover a whole team have this built into them. In fact, Agile is to some degree an attempt to have shorter Observation-Decision-Action cycles (every few weeks) for a team than previous broken models (Waterfall, aka “Big Design Up Front”) which took months or years to get through a single cycle.

So, shorter cycles seem to be better than longer cycles. In fact, it’s possible that most of the goal of developer productivity could be accomplished simply by shortening the ODA cycle down to the smallest reasonable time period for the developer, the team, or the organization.

Usually you can accomplish these shorter cycles just by focusing on the Observation step. Once you’ve done that, the other two parts of the cycle tend to speed up on their own. (If they don’t, there are other remedies, but that’s another post.)

There are three key factors to address in Observation:

  • The speed with which information can be delivered to developers. (For example, having fast tests.)
  • The completeness of information delivered to the developers. (For example, having enough test coverage.)
  • The accuracy of information delivered to developers. (For example, having reliable tests.)

This helps us understand the reasons behind the success of certain development tools in recent decades. Continuous Integration, production monitoring systems, profilers, debuggers, better error messages in compilers, IDEs that highlight bad code—almost everything that’s “worked” has done so because it made Observation faster, more accurate, or more complete.

There is one catch—you have to deliver the information in such a way that it can actually be received by people. If you dump a huge sea of information on people without making it easy for them to find the specific data they care about, the data becomes useless. If nobody ever receives a production alert, then it doesn’t matter. If a developer is never sure of the accuracy of information received, then they may start to ignore it. You must successfully communicate the information, not just generate it.

The First ODA

There is a “big ODA cycle” that represents the whole process of software development—seeing a problem, deciding on a solution, and delivering it as software. Within that big cycle there are many smaller ones (see the need for a feature, decide on how the feature should work, and then write the feature). There are even smaller cycles within that (observe the requirements for a single change, decide on an implementation, write some code), and so on.

The trickiest part is the first ODA cycle in any of these sequences, because you have to make an observation with no previous decision or action.

For the “big” cycle, it may seem like you start off with nothing to observe. There’s no code or computer output to see yet! But in reality, you start off with at least yourself to observe. You have your environment around you. You have other people to talk to, a world to explore. Your first observations are often not of code, but of something to solve in the real world that will help people somehow.

Then when you’re doing development, sometimes you’ll come to a point where you have to decide “what do I work on next?” This is where knowing the laws of software design can help, because you can apply them to the code you’ve written and the problem you observed, which lets you decide on the sequence to work in. You can think of these principles as a form of observation that comes second-hand—the experience of thousands of person-years compressed into laws and rules that can help you make decisions now. Second-hand observation is completely valid observation, as long as it’s accurate.

You can even view the process of Observation as its own little ODA cycle: look at the world, decide to put your attention on something, put your attention on that thing, observe it, decide based on that to observe something else, etc.

There are likely infinite ways to use this principle; all of the above represents just a few examples.


The Secret of Fast Programming: Stop Thinking

When I talk to developers about code complexity, they often say that they want to write simple code, but deadline pressure or underlying issues mean that they just don’t have the time or knowledge necessary to both complete the task and refine it to simplicity.

Well, it’s certainly true that putting time pressure on developers tends to lead to them writing complex code. However, deadlines don’t have to lead to complexity. Instead of saying “This deadline prevents me from writing simple code,” one could equally say, “I am not a fast-enough programmer to make this simple.” That is, the faster you are as a programmer, the less your code quality has to be affected by deadlines.

Now, that’s nice to say, but how does one actually become faster? Is it a magic skill that people are born with? Do you become fast by being somehow “smarter” than other people?

No, it’s not magic or in-born at all. In fact, there is just one simple rule that, if followed, will eventually solve the problem entirely:

Any time you find yourself stopping to think, something is wrong.

Perhaps that sounds incredible, but it works remarkably well. Think about it—when you’re sitting in front of your editor but not coding very quickly, is it because you’re a slow typist? I doubt it—“having to type too much” is rarely a developer’s productivity problem. Instead, the pauses where you’re not typing are what make it slow. And what are developers usually doing during those pauses? Stopping to think—perhaps about the problem, perhaps about the tools, perhaps about email, whatever. But any time this happens, it indicates a problem.

The thinking is not the problem itself—it is a sign of some other problem. It could be one of many different issues:


The most common reason developers stop to think is that they did not fully understand some word or symbol.

This happened to me just the other day. It was taking me hours to write what should have been a really simple service. I kept stopping to think about it, trying to work out how it should behave. Finally, I realized that I didn’t understand one of the input variables to the primary function. I knew the name of its type, but I had never gone and read the definition of the type—I didn’t really understand what that variable (a word or symbol) meant. As soon as I looked up the type’s code and docs, everything became clear and I wrote that service like a demon (pun partially intended).

This can happen in almost infinite ways. Many people dive into a programming language without learning what (, ), [, ], {, }, +, *, and % really mean in that language. Some developers don’t understand how the computer really works. Remember when I wrote The Singular Secret of the Rockstar Programmer? This is why! Because when you truly understand, you don’t have to stop to think. It’s also a major motivation behind my book—understanding that there are unshakable laws to software design can eliminate a lot of the “stopping to think” moments.

So if you find that you are stopping to think, don’t try to solve the problem in your mind—search outside of yourself for what you didn’t understand. Then go look at something that will help you understand it. This even applies to questions like “Will a user ever read this text?” You might not have a User Experience Research Department to really answer that question, but you can at least make a drawing, show it to somebody, and ask their opinion. Don’t just sit there and think—do something. Only action leads to understanding.


Sometimes developers stop to think because they can’t hold enough concepts in their mind at once—lots of things are relating to each other in a complex way and they have to think through it. In this case, it’s almost always more efficient to write or draw something than it is to think about it. What you want is something you can look at, or somehow perceive outside of yourself. This is a form of understanding, but it’s special enough that I wanted to call it out on its own.


Sometimes the problem is “I have no idea what code to start writing.” The simplest solution here is to just start writing whatever code you know that you can write right now. Pick the part of the problem that you understand completely, and write the solution for that—even if it’s just one function, or an unimportant class.

Often, the simplest piece of code to start with is the “core” of the application. For example, if I were going to write a YouTube app, I would start with the video player. Think of it as an exercise in continuous delivery—write the code that would actually make a product first, no matter how silly or small that product is. A video player without any other UI is a product that does something useful (play video), even if it’s not a complete product yet.

If you’re not sure how to write even that core code yet, then just start with the code you are sure about. Generally I find that once a piece of the problem becomes solved, it’s much easier to solve the rest of it. Sometimes the problem unfolds in steps—you solve one part, which makes the solution of the next part obvious, and so forth. Whichever part doesn’t require much thinking to create, write that part now.

Skipping a Step

Another specialized understanding problem is when you’ve skipped some step in the proper sequence of development. For example, let’s say our Bike object depends on the Wheels, Pedals, and Frame objects. If you try to write the whole Bike object without writing the Wheels, Pedals, or Frame objects, you’re going to have to think a lot about those non-existent classes. On the other hand, if you write the Wheels class when there is no Bike class at all, you might have to think a lot about how the Wheels class is going to be used by the Bike class.

The right solution there would be to implement enough of the Bike class to get to the point where you need Wheels. Then write enough of the Wheels class to satisfy your immediate need in the Bike class. Then go back to the Bike class, and work on that until the next time you need one of the underlying pieces. Just like the “Starting” section, find the part of the problem that you can solve without thinking, and solve that immediately.
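That back-and-forth can be sketched in a few lines. This is a toy illustration of the hypothetical Bike and Wheels classes from the example—all the names and behaviors here are made up for demonstration:

```python
class Wheels:
    # Written only at the point where Bike first needed it, and only as much
    # of it as Bike needs right now: Bike so far just spins the wheels.
    def __init__(self, count: int = 2):
        self.count = count
        self.spinning = False

    def spin(self):
        self.spinning = True


class Bike:
    def __init__(self):
        # We got this far into Bike, hit the need for wheels, paused Bike,
        # and wrote just enough of Wheels (above) to continue.
        self.wheels = Wheels()

    def ride(self):
        # The next time Bike needs an underlying piece (say, Pedals), we
        # repeat the same dance: pause here, build just enough there.
        self.wheels.spin()
        return self.wheels.spinning


bike = Bike()
assert bike.ride()  # the minimal Bike works with the minimal Wheels
```

Neither class is complete, but at every moment there is working code to observe, which is exactly what keeps you from having to stop and think about classes that don’t exist yet.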

Don’t jump over steps in the development of your system and expect that you’ll be productive.

Physical Problems

If I haven’t eaten enough, I tend to get distracted and start to think because I’m hungry. It might not be thoughts about my stomach, but I wouldn’t be thinking if I were full—I’d be focused. This can also happen with sleep, illness, or any sort of body problem. It’s not as common as the “understanding” problem from above, so first always look for something you didn’t fully understand. If you’re really sure you understood everything, then physical problems could be a candidate.


When a developer becomes distracted by something external, such as noise, it can take some thinking to remember where they were in their solution. The answer here is relatively simple—before you start to develop, make sure that you are in an environment that will not distract you, or make it impossible for distractions to interrupt you. Some people close the door to their office, some people put on headphones, some people put up a “do not disturb” sign—whatever it takes. You might have to work together with your manager or co-workers to create a truly distraction-free environment for development.


Sometimes a developer sits and thinks because they feel unsure about themselves or their decisions. The solution to this is similar to the solution in the “Understanding” section—whatever you are uncertain about, learn more about it until you become certain enough to write code. If you just feel generally uncertain as a programmer, it might be that there are many things to learn more about, such as the fundamentals listed in Why Programmers Suck. Go through each piece you need to learn until you really understand it, then move on to the next piece, and so on. There will always be learning involved in the process of programming, but as you know more and more about it, you will become faster and faster and have to think less and less.

False Ideas

Many people have been told that thinking is what smart people do, thus, they stop to think in order to make intelligent decisions. However, this is a false idea. If thinking alone made you a genius, then everybody would be Einstein. Truly smart people learn, observe, decide, and act. They gain knowledge and then use that knowledge to address the problems in front of them. If you really want to be smart, use your intelligence to cause action in the physical universe—don’t use it just to think great thoughts to yourself.


All of the above is the secret to being a fast programmer when you are sitting and writing code. If you are caught up all day in reading email and going to meetings, then no programming happens whatsoever—that’s a different problem. Some aspects of it are similar (it’s a bit like the organization “stopping to think”), but it’s not the same.

Still, there are some analogous solutions you could try. Perhaps the organization does not fully understand you or your role, which is why they’re sending you so much email and putting you in so many meetings. Perhaps there’s something about the organization that you don’t fully understand, such as how to go to fewer meetings and get less email. :-) Maybe even some organizational difficulties can be resolved by adapting the solutions in this post to groups of people instead of individuals.