The 5 Best Free FTP Clients

Transferring files to and from your web host or server is best done with what’s commonly known as an FTP client, though the term is a bit dated because there are more secure alternatives such as SFTP and FTPS.

When I was putting together this list, these were my criteria:

  • Supports secure file transfer protocols: FTP isn’t secure. Among its many flaws, plain FTP doesn’t encrypt the data you’re transferring. If your traffic is intercepted en route to its destination, your credentials (username and password) and your data can easily be read. SFTP (which stands for SSH File Transfer Protocol) is a popular secure alternative, but there are many others.
  • Has a GUI: There are some awesome FTP clients with a command-line interface, but for a great number of people, a graphical user interface is more approachable and easier to use.
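To make the first criterion concrete, here’s a minimal sketch of an encrypted transfer using Python’s standard-library ftplib. The host, credentials, and file names are placeholders, not a recommendation for any particular client:

```python
from ftplib import FTP_TLS

def upload_over_ftps(host, user, password, local_path, remote_name):
    """Upload a file over FTPS (FTP over TLS), so the credentials and the
    data travel encrypted -- unlike plain FTP."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)  # sent over the TLS-protected control channel
    ftps.prot_p()               # encrypt the data channel too
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

SFTP, by contrast, runs over SSH and isn’t part of Python’s standard library; graphical clients like the ones below handle both protocols for you.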

Topping the list is FileZilla, an open source FTP client. It’s fast, able to handle simultaneous transmissions (multi-threaded transfers), and supports SFTP and FTPS (FTP over SSL). What’s more, it’s available on all major operating systems, so if you work on multiple computers — like if you’re forced to use Windows at work but you have a Mac at home — you don’t need to use a different application for your file-transferring needs.


Available on Windows, Mac OS and Linux

Cyberduck can take care of a ton of your file-transferring needs: SFTP, WebDav, Amazon S3, and more. It has a minimalist UI, which makes it super easy to use.


Available on Windows and Mac OS

This Mozilla Firefox add-on gives you a very capable FTP/SFTP client right within your browser. It’s available on all platforms that can run Firefox.


Available on Windows, Mac OS and Linux

Classic FTP is a file transfer client that’s free for non-commercial use. It has a very simple interface, which is a good thing, because it makes it easy and intuitive to use. I like its “Compare Directories” feature that’s helpful for seeing differences between your local and remote files.

Classic FTP

Available on Windows and Mac OS

This popular FTP client has a very long list of features, and if you’re a Windows user, it’s certainly worth a look. WinSCP can deal with multiple file-transfer protocols (SFTP, SCP, FTP, and WebDav). It has a built-in text editor for making quick text edits more convenient, and has scripting support for power users.


Available on Windows

Honorable Mention: Transmit

For this post, I decided to focus on free software. But it just doesn’t seem right to leave out Transmit (which costs $34) in a post about FTP clients because it’s a popular option used by web developers on Mac OS. It has a lot of innovative features and its user-friendliness is unmatched. If you’ve got the cash to spare and you’re on a Mac, this might be your best option.


Available on Mac OS

Which FTP client do you use?

There are plenty of FTP clients out there. If your favorite FTP client isn’t on the list, please mention it in the comments for the benefit of other readers. And if you’ve used any of the FTP clients mentioned here, please do share your thoughts about them too.

Jacob Gube is the founder of Six Revisions. He’s a front-end developer. Connect with him on Twitter and Facebook.



12 Brackets Extensions That Will Make Your Life Easier

Brackets is a great source code editor for web designers and front-end web developers. It has a ton of useful features out of the box. You can make your Brackets experience even better by using extensions.

These Brackets extensions will help make your web design and front-end web development workflow a little easier.

Quickly see the current level of browser support a certain web technology has without leaving Brackets. This extension sources its data from Can I use.


HTML Skeleton helps you set up your HTML files quickly by automatically inserting basic markup such as the doctype declaration, <html>, <head>, <body>, etc.

HTML Skeleton

Related: A Generic HTML5 Template

Rapidly mark up a list of text into list items (<li>), table rows (<tr>), hyperlinks, (<a>), and more with HTML Wrapper.

HTML Wrapper

This is a super simple extension that adds file icons in Brackets’s sidebar. The icons are excellent visual cues that make it much easier to identify the file you’d like to work on.

Brackets Icons

Automatically and intelligently add vendor prefixes to your CSS properties with the Autoprefixer extension. It uses browser support data from Can I use to decide whether or not a vendor prefix is needed. It’ll also remove unnecessary vendor prefixes.


This extension will remove unneeded characters from your JavaScript and CSS files. This process is called minification, and it can improve your website’s speed.

JS CSS Minifier
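As a rough illustration of what minification does (a toy sketch, not the extension’s actual algorithm), consider stripping comments and collapsing whitespace in a CSS string:

```python
import re

def minify_css(css: str) -> str:
    """Toy CSS minifier: drops comments and squeezes whitespace.
    Real minifiers do much more (and much more carefully)."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace runs
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # no spaces around punctuation
    return css.strip()

print(minify_css("body {\n  color: red;  /* theme */\n}"))
# → body{color:red;}
```

Fewer bytes on the wire means faster page loads, which is the whole point.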

This extension highlights CSS errors and code-quality issues. The errors and warnings reported by this extension are based on CSS Lint rules.


Emmet is a collection of tools and keyboard shortcuts that can speed up HTML- and CSS-authoring.


Need some text to fill up your design prototype? The Lorem Ipsum Generator extension helps you conveniently generate dummy text. (And if you need image placeholders, have a look at Lorem Pixel or DEVimg.)

Lorem Ipsum Generator

This extension will help keep your HTML, CSS, and JavaScript code consistently formatted, indented, and — most importantly — readable. An alternative option to check out is the CSSComb extension.


Make sure you don’t forget your project tasks by using the Simple To-Do extension, which allows you to create and manage to-do lists for each project within Brackets.

Simple To-Do

Transferring and syncing your project’s files to your web host or server requires FTP or SFTP, but such a fundamental web development feature doesn’t come with Brackets. To remedy the situation, use the eqFTP extension, an FTP/SFTP client that you can operate from within Brackets.


How to Install Brackets Extensions

The quickest way to install Brackets extensions is by using the Extension Manager — access it by choosing File > Extension Manager in Brackets’s toolbar.

Brackets Extension Manager

If I didn’t mention your favorite Brackets extension, please talk about it in the comments.

Jacob Gube is the founder of Six Revisions. He’s a front-end developer. Connect with him on Twitter and Facebook.



This was published on Mar 28, 2016



7 Free UX E-Books Worth Reading

The best designers are lifelong students. While nothing beats experience in the field, the wealth of helpful online resources certainly helps keep our knowledge sharp.

In this post, I’ve rounded up some useful e-books that provide excellent UX advice and insights.

This is a free e-book by usability consultancy firm Userfocus. The best part of this book is its casual tone. Acronyms like “the CRAP way to usability” and The Beatles analogies make remembering the book’s lessons a lot easier and make for an interesting read. That’s why this book is one of my favorites.

50 User Experience Best Practices

As the book’s title implies, 50 User Experience Best Practices delivers UX tips and best practices. It delves into subjects such as user research and content strategy. One of the secrets to this book’s success is its creative and easy-to-comprehend visuals. This e-book was written and published by the now-defunct UX design agency, Above the Fold.

UX Design Trends Bundle

Over at UXPin, my team and I have written and published a lot of free e-books. For this post, I’d like to specifically highlight our UX Design Trends Bundle. It’s a compilation of three of our e-books: Web Design Trends 2016, UX Design Trends 2015 & 2016, and Mobile UI Design Trends 2015 & 2016. Totaling 350+ pages, this bundle examines over 300 excellent designs.

UX Storytellers: Connecting the Dots

Published in 2009, UX Storytellers: Connecting the Dots continues to be a very insightful read. This classic e-book stays relevant because of its unique format: It collects stand-alone stories and advice from 42 UX professionals. At 586 pages, there’s a ton of content in this book. Download it now to learn about the struggles — and solutions — UX professionals can expect to face.

The UX Reader

This e-book covers all the important components of the UX design process. It’s full of valuable insights, making it appealing to both beginners and veterans alike. The book is divided into five categories: Collaboration, Research, Design, Development, and Refinement. Each category contains a series of articles written by different members of MailChimp’s UX team.

Learn from Great Design

Only a portion of this book, 57 pages, is free.

In this e-book, web designer Tom Kenny does in-depth analyses of great web designs, pointing out what they’re doing right, and also what they could do better. For those who learn best by looking at real-world examples, this book is a great read.

The full version of this e-book contains 20 case studies; the free sample only has 3 of those case studies.

The Practical Interaction Design Bundle

I’ll end this list with another UXPin selection. This bundle contains three of our IxD e-books: Interaction Design Best Practices Volume 1 and Volume 2, as well as Consistency in UI Design.

  • Interaction Design Best Practices Volume 1 covers the “tangibles” — visuals, words, and space — and explains how to implement signifiers, how to construct a visual hierarchy, and how to make interactions feel like real conversations.
  • Interaction Design Best Practices Volume 2 covers the “intangibles” — time, responsiveness, and behavior — and covers topics from animation to enjoyment.
  • Consistency in UI Design explains the role that consistency plays in learnability, reducing friction, and drawing attention to certain elements.

Altogether, the bundle includes 250 pages of best practices and 60 design examples.

Did I leave out your favorite UX e-book? Let me know in the comments.

About the Author

Jerry Cao is a content strategist at UXPin. In the past few years, he’s worked on improving website experiences through better content, design, and information architecture (IA). Join him on Twitter: @jerrycao_uxpin.


This was published on Mar 21, 2016



(Over)using with in Elixir 1.2

Elixir 1.2 introduced a new expression type, with. It’s so new that the syntax highlighter I use in this blog doesn’t know about it.

with is a bit like let in other functional languages, in that it defines a local scope for variables. This means you can write something like

owner = "Jill"

with name  = "/etc/passwd",
     stat  = File.stat!(name),
     owner = stat.uid,
do: IO.puts "#{name} is owned by user ##{owner}"

IO.puts "And #{owner} is still Jill"

The with expression has two parts. The first is a list of expressions; the second is a do block. The initial expressions are evaluated in turn, and then the code in the do block is evaluated. Any variables introduced inside a with are local to that with. In the case of the example code, this means that the line owner = stat.uid will create a new variable, and not change the binding of the variable of the same name in the outer scope.

On its own, this is a big win, as it lets us break apart complex function call sequences that aren’t amenable to a pipeline. Basically, we get temporary variables. And this makes reading code a lot more fun.

For example, here’s some code I wrote a year ago. It handles the command-line options for the Earmark markdown parser:

defp parse_args(argv) do
  switches = [ help: :boolean, version: :boolean ]
  aliases  = [ h: :help, v: :version ]
  parse    = OptionParser.parse(argv, switches: switches, aliases: aliases)
  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Quick! Scan this and decide how many times the switches variable is used in the function. You have to stop and parse the code to find out. And given the ugly case expression at the end, that isn’t trivial.

Here’s how I’d have written this code this morning:

defp parse_args(argv) do
  parse = with switches = [ help: :boolean, version: :boolean ],
               aliases  = [ h: :help, v: :version ],
          do: OptionParser.parse(argv, switches: switches, aliases: aliases)
  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Now the scope of switches and aliases is explicit—we know they can’t be used in the case.

There’s still the parse variable, though. We could handle this with a nested with, but that would probably make our function harder to read. Instead, I think I’d refactor this into two helper functions:

defp parse_args(argv) do
  argv
  |> parse_into_options
  |> options_to_values
end

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do: OptionParser.parse(argv, switches: switches, aliases: aliases)
end

defp options_to_values(options) do
  case options do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Much better: easier to read, easier to test, and easier to change.

Now, at this point you might be wondering why I left the with expression in the parse_into_options function. A good question, and one I’ll try to answer after looking at the second use of with.

with and Pattern Matching

The previous section parsed command line arguments. Let’s change it up (slightly) and look at validating options passed between functions.

I’m in the middle of writing an Elixir interface to GitLab, the open source GitHub contender. It’s a simple but wide JSON REST API, with dozens, if not hundreds, of available calls. And most of these calls take a set of named parameters, some required and some optional. For example, the API to create a user has four required parameters (email, name, password, and username) along with a bunch of optional ones (bio, Skype and Twitter handles, and so on).

I wanted my interface code to validate that the parameters passed to it met the GitLab API spec, so I wrote a simple option checking library. Here’s some idea of how it could be used:

@create_options_spec %{
  required: [ :email, :name, :password, :username ],
  optional: [ :admin, :bio, :can_create_group, :confirm, :extern_uid,
              :linkedin, :projects_limit, :provider, :skype, :twitter,
              :website_url ]
}

def create_user(options) do
  { :ok, full_options } = Options.check(options, @create_options_spec)"users", full_options)
end

The options specification is a Map with two keys, :required and :optional. We pass it to Options.check, which validates that the options passed to the API contain all required values and that any additional values are in the optional set.

Here’s a first implementation of the option checker:

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |>,
  do:
    if opts_required(keys, spec) == :ok && opts_optional(keys, spec) == :ok do
      { :ok, given }
    else
      :error
    end
end

We extract the keys from the options we are given, then call two helper methods to verify that all required values are there and that any other keys are in the optional list. These both return :ok if their checks pass, {:error, msg} otherwise.

Although this code works, we sacrificed the error messages to keep it compact. If either checking function fails to return :ok, we bail and return :error.

This is where with shines. In the list of expressions between the with and the do we can use <-, the new conditional pattern match operator.

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |>,
       :ok  <- opts_required(keys, spec),
       :ok  <- opts_optional(keys, spec),
  do: { :ok, given }
end

The <- operator does a pattern match, just like =. If the match succeeds, then the effect of the two is identical—variables on the left are bound to values if necessary, and execution continues.

= and <- diverge if the match fails. The = operator will raise an exception. But <- does something sneaky: it terminates the execution of the with expression, but doesn’t raise an exception. Instead, the with returns the value that couldn’t be matched.

In our option checker, this means that if both the required and optional checks return :ok, we fall through and the with returns the {:ok, given} tuple.

But if either fails, it will return {:error, msg}. As the <- operator won’t match, the with clause will exit early. Its value will be the error tuple, and so that’s what the function returns.

The Point, Labored

The new with expression gives you two great features in one tidy package: lexical scoping and early exit on failure.

It makes your code better.

Use it.

A lot.

Here’s Where I Differ with José

Johnny Winn interviewed José for the Elixir Fountain podcast a few weeks ago.

The discussion turned to the new features of Elixir 1.2, and José described with. At the end, he somewhat downplayed it, saying you rarely needed it, but when you did it was invaluable. He mentioned that there were perhaps just a couple of times it was used in the Elixir source.

I think that with is more than that. You rarely need it, but you’d often benefit from using it. In fact, I am experimenting with using it every time I create a function-level local variable.

What I’m finding is that this discipline drives me to create simpler, single-purpose functions. If I have a function where I can’t easily encapsulate a local within a with, then I spend a moment thinking about splitting it into two. And that split almost always improves my code.

So that’s why I left the with in the parse_into_options function earlier.

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do: OptionParser.parse(argv, switches: switches, aliases: aliases)
end

It isn’t needed, but I like the way it delineates the two parts of the function, making it clear what is incidental and what is core. In my head, it has a narrative structure that simple linear code lacks.

This is just unfounded opinion. But you might want to experiment with the technique for a few weeks to see how it works for you.

Two is Too Many

There is a key rule that I personally operate by when I’m doing incremental development and design, which I call “two is too many.” It’s how I implement the “be only as generic as you need to be” rule from the Three Flaws of Software Design.

Essentially, I know exactly how generic my code needs to be by noticing that I’m tempted to cut and paste some code, and then instead of cutting and pasting it, designing a generic solution that meets just those two specific needs. I do this as soon as I’m tempted to have two implementations of something.

For example, let’s say I was designing an audio decoder, and at first I only supported WAV files. Then I wanted to add an MP3 parser to the code. There would definitely be common parts to the WAV and MP3 parsing code, and instead of copying and pasting any of it, I would immediately make a superclass or utility library that did only what I needed for those two implementations.

The key aspect of this is that I did it right away—I didn’t allow there to be two competing implementations; I immediately made one generic solution. The next important aspect of this is that I didn’t make it too generic—the solution only supports WAV and MP3 and doesn’t expect other formats in any way.

Another part of this rule is that a developer should ideally never have to modify one part of the code in a similar or identical way to how they just modified a different part of it. They should not have to “remember” to update Class A when they update Class B. They should not have to know that if Constant X changes, you have to update File Y. In other words, it’s not just two implementations that are bad, but also two locations. It isn’t always possible to implement systems this way, but it’s something to strive for.

If you find yourself in a situation where you have to have two locations for something, make sure that the system fails loudly and visibly when they are not “in sync.” Compilation should fail, a test that always gets run should fail, etc. It should be impossible to let them get out of sync.
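A minimal sketch of “fail loudly” in Python, with made-up names: the duplicated value is guarded by an assertion that runs every time the module is imported, so the two copies cannot silently drift apart.

```python
# Hypothetical example: a wire-format version that unavoidably lives in
# two places (say, a client module and a server module).
CLIENT_PROTOCOL_VERSION = 3
SERVER_PROTOCOL_VERSION = 3

# Runs at import time -- any divergence fails immediately and visibly.
assert CLIENT_PROTOCOL_VERSION == SERVER_PROTOCOL_VERSION, \
    "client and server protocol versions are out of sync"
```

An always-run unit test containing the same assertion works just as well; the point is that the check cannot be forgotten.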

And of course, the simplest part of this rule is the classic “Don’t Repeat Yourself” principle—don’t have two constants that represent the same exact thing, don’t have two functions that do the same exact thing, etc.

There are likely other ways that this rule applies. The general idea is that when you want to have two implementations of a single concept, you should somehow make that into a single implementation instead.

When refactoring, this rule helps find things that could be improved and gives some guidance on how to go about it. When you see duplicate logic in the system, you should attempt to combine those two locations into one. Then if there is another location, combine that one into the new generic system, and proceed in that manner. That is, if there are many different implementations that need to be combined into one, you can do incremental refactoring by combining two implementations at a time, as long as combining them does actually make the system simpler (easier to understand and maintain). Sometimes you have to figure out the best order in which to combine them to make this most efficient, but if you can’t figure that out, don’t worry about it—just combine two at a time and usually you’ll wind up with a single good solution to all the problems.

It’s also important not to combine things when they shouldn’t be combined. There are times when combining two implementations into one would cause more complexity for the system as a whole or violate the Single Responsibility Principle. For example, if your system’s representation of a Car and a Person have some slightly similar code, don’t solve this “problem” by combining them into a single CarPerson class. That’s not likely to decrease complexity, because a CarPerson is actually two different things and should be represented by two separate classes.

This isn’t a hard and fast law of the universe—it’s a more of a strong guideline that I use for making judgments about design as I develop incrementally. However, it’s quite useful in refactoring a legacy system, developing a new system, and just generally improving code simplicity.


Immutability, State, and Functions

Let’s start with the obligatory call to authority:

In functional programming, programs are executed by evaluating expressions, in contrast with imperative programming where programs are composed of statements which change global state when executed. Functional programming typically avoids using mutable state.

Well, that seems pretty definitive. “Functional programming typically avoids mutable state.” Seems pretty clear-cut.

But it’s wrong.

Explaining why I think that will involve a trip down the path I’ve been exploring over the last year or so, as I have tried to crystallize my thinking on the new styles of programming, and the role of transformation as both a top-down and bottom-up coding and design technique.

Let’s start by thinking about state.

Where Does a Program Keep Its State?

Programs run on computers, and at the lowest level their model of computation is tied to that of the machines on which they execute. Down at that low level, the state of a program is the state of the computer—the values in memory and the values in registers.1 Some of those registers are used internally by the processor for housekeeping. Perhaps the most important of these is the program counter (PC). You can think of the PC as a pointer to the next instruction to execute.

We can take this up a level. Here’s a simple program:

"Cat"
|> String.downcase    # => "cat"
|> String.codepoints  # => [ "c", "a", "t" ]
|> Enum.sort          # => [ "a", "c", "t" ]

The |> notation is syntactic sugar for passing the result of a function as the first parameter of the next function. The preceding code is equivalent to

Enum.sort(String.codepoints(String.downcase("Cat")))


Thrilling stuff, eh?

Let’s imagine we’d just finished executing the first line. What is our state?

Somewhere in memory, there’s a data structure representing the string “Cat”. That’s the first part of our state. The second part is the value of the program counter. Logically, it’s pointing to the start of line 2.

Execute one more line. String.downcase is passed the string “Cat”. The result, another string containing “cat”, is stored in a different place in our computer. The PC now points to the start of line 3.

And so it goes. With each step, the state of the computer changes, meaning that the state of our program changes.

State is not immutable.

Is This Splitting Hairs?

Yes and no.

Yes, because no one would argue that the state of a computer is unchanged during the execution of a program.

No, because people still say that immutable state is a characteristic of functional programming. That’s wrong. Worse, that also leads us to model programming wrongly. And that’s what the rest of this post is about.

What Is Immutable?

Let’s get this out of the way first. In a functional program, values are immutable. Look at the following code.

person = get_user_details("Dave")
IO.inspect person
do_something_with(person)
IO.inspect person

Let’s assume that get_user_details returns some structured data, which we dump out to some log file on line two. In a language with immutable values, that data can never be changed. We know that nothing in the function do_something_with can change the data referenced by the person variable, and so the debugging we write on line 4 is guaranteed to be the same as that created on line 2.

If we wanted to change the information for Dave, we’d have to create a copy of Dave’s data:

person1 = change_subscription_status(person, :active)

Now we have the variable person bound to the initial value of the Dave person, and person1 references the version with a changed subscription status.

If you’ve been using languages with mutable data, at this point you’ll have intuitively created a mental picture where person and person1 reference different chunks of memory. And you might be thinking that this is remarkably inefficient. But in an immutable world, it needn’t be. Because the runtime knows that the original data will never be changed, it can reuse much of it in person1. In principle, you could have a runtime that represented new values as nothing more than a set of changes to be applied to the original.
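Elixir’s runtime gives you this sharing automatically. As a rough analogy in Python (a sketch, not how Elixir is implemented), a frozen dataclass plus replace produces a changed copy while the unchanged fields still point at the very same objects:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Person:
    name: str
    subscription: str
    friends: tuple  # immutable, so it is safe to share

person = Person("Dave", "inactive", ("Jill", "Pete"))
person1 = replace(person, subscription="active")  # copy with one change

print(person.subscription)   # → inactive  (the original is untouched)
print(person1.subscription)  # → active
print(person.friends is person1.friends)  # → True  (unchanged parts are shared)
```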

Anyway, back to state.

person = get_user_details("Dave")
IO.inspect person
person1 = change_subscription_status(person, :active)
IO.inspect person1

Let’s represent the state using a tuple containing the pseudo program counter and the values bound to variables.

Line   person   person1
2      value1
3      value1
4      value1   value2


How to Handle Code Complexity in a Software Company

Here’s an obvious statement that has some subtle consequences:

Only an individual programmer can resolve code complexity.

That is, resolving code complexity requires the attention of an individual person on that code. They can certainly use appropriate tools to make the task easier, but ultimately it’s the application of human intelligence, attention, and work that simplifies code.

So what? Why does this matter? Well, to be clearer:

Resolving code complexity usually requires detailed work at the level of the individual contributor.

If a manager just says “simplify the code!” and leaves it at that, usually nothing happens, because (a) they’re not being specific enough, (b) they don’t necessarily have the knowledge required about each individual piece of code in order to be that specific, and (c) part of understanding the problem is actually going through the process of solving it, and the manager isn’t the person writing the solution.

The higher a manager’s level in the company, the more true this is. When a CTO, Vice President, or Engineering Director gives an instruction like “improve code quality” but doesn’t get much more specific than that, what tends to happen is that a lot of motion occurs in the company but the codebase doesn’t significantly improve.

It’s very tempting, if you’re a software engineering manager, to propose broad, sweeping solutions to problems that affect large areas. The problem with that approach to code complexity is that the problem is usually composed of many different small projects that require detailed work from individual programmers. So, if you try to handle everything with the same broad solution, that solution won’t fit most of the situations that need to be handled. Your attempt at a broad solution will actually backfire, with software engineers feeling like they did a lot of work but didn’t actually produce a maintainable, simple codebase. (This is a common pattern in software management, and it contributes to the mistaken belief that code complexity is inevitable and nothing can be done about it.)

So what can you do as a manager, if you have a complex codebase and want to resolve it? Well, the trick is to get the data from the individual contributors and then work with them to help them resolve the issues. The sequence goes roughly like this:

  1. Ask each member of your team to write down a list of what frustrates them about the code. The symptoms of code complexity are things like emotional reactions to code, confusions about code, feeling like a piece will break if you touch it, difficulties optimizing, etc. So you want the answers to questions like, “Is there a part of the system that makes you nervous when you modify it?” or “Is there some part of the codebase that frustrates you to work with?”

    Each individual software engineer should write their own list. I wouldn’t recommend implementing some system for collecting the lists—just have people write down the issues for themselves in whatever way is easiest for them. Give them a few days to write this list; they might think of other things over time.

    The list doesn’t just have to be about your own codebase, but can be about any code that the developer has to work with or use.

    You’re looking for symptoms at this point, not causes. Developers can be as general or as specific as they want, for this list.

  2. Call a meeting with your team and have each person bring their list and a computer that they can use to access the codebase. The ideal size for a team meeting like this is about six or seven people, so you might want to break things down into sub-teams.

    In this meeting you want to go over the lists and get the name of a specific directory, file, class, method, or block of code to associate with each symptom. Even if somebody says something like, “The whole codebase has no unit tests,” then you might say, “Tell me about a specific time that that affected you,” and use the response to that to narrow down what files it’s most important to write unit tests for right away. You also want to be sure that you’re really getting a description of the problem, which might be something more like “It’s difficult to refactor the codebase because I don’t know if I’m breaking other people’s modules.” Then unit tests might be the solution, but you first want to narrow down specifically where the problem lies, as much as possible. (It’s true that almost all code should be unit tested, but if you don’t have any unit tests, you’ll need to start off with some doable task on the subject.)

    In general, the idea here is that only code can actually be fixed, so you have to know what piece of code is the problem. It might be true that there’s a broad problem, but that problem can be broken down into specific problems with specific pieces of code that are affected, one by one.

  3. Using the information from the meeting, file a bug describing the problem (not the solution, just the problem!) for each directory, file, class, etc. that was named. A bug could be as simple as “FrobberFactory is hard to understand.”

    If a solution was suggested during the meeting, you can note that in the bug, but the bug itself should primarily be about the problem.
  4. Now it’s time to prioritize. The first thing to do is to look at which issues affect the largest number of developers the most severely. Those are high priority issues. Usually this part of prioritization is done by somebody who has a broad view over developers in the team or company. Often, this is a manager.

    That said, sometimes issues have an order that they should be resolved in that is not directly related to their severity. For example, Issue X has to be resolved before Issue Y can be resolved, or resolving Issue A would make resolving Issue B easier. This means that Issue A and Issue X should be fixed first even if they’re not as severe as the issues that they block. Often, there’s a chain of issues like this, and the trick is to find the issue at the bottom of the stack. Handling this part of prioritization incorrectly is one of the most common and major mistakes in software design. It may seem like a minor detail, but in fact it is critical to the success of efforts to resolve complexity. The essence of good software design in all situations is taking the right actions in the right sequence. Forcing developers to tackle issues out of sequence (without regard for which problems underlie which other problems) will cause code complexity.

    This part of prioritization is a technical task that is usually best done by the technical lead of the team. Sometimes this is a manager, but other times it’s a senior software engineer.

    Sometimes you don’t really know which issue to tackle first until you’re doing development on one piece of code and you discover that it would be easier to fix a different piece of code first. With that said, if you can determine the ordering up front, it’s good to do so. But if you find that you’d have to get into actually figuring out solutions in order to determine the ordering, just skip it for now.

    Whether you do it up front or during development, it’s important that individual programmers do realize when there is an underlying task to tackle before the one they have been assigned. They must be empowered to switch from their current task to the one that actually blocks them. There is a limit to this (for example, rewriting the whole system into another language just to fix one file is not a good use of time) but generally, “finding the issue at the bottom of the stack” is one of the most important tasks a developer has when doing these sorts of cleanups.
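    The “issue at the bottom of the stack” idea is essentially a dependency ordering, and you can sketch it mechanically: record which issues block which, then topologically sort. The sketch below uses Python’s standard-library `graphlib`; the issue names are illustrative, not from any real tracker:

    ```python
    from graphlib import TopologicalSorter

    # blocks[issue] = set of issues that must be fixed before it.
    blocks = {
        "Y: refactor FrobberFactory": {"X: add unit tests for Frobber"},
        "B: split the Frobber module": {"A: document Frobber's interface"},
    }

    # static_order() yields issues with prerequisites first, so the
    # "bottom of the stack" issues come out at the front of the list.
    order = list(TopologicalSorter(blocks).static_order())
    print(order)
    ```

    This only captures the mechanical part of the ordering; deciding that Issue X actually blocks Issue Y is still the technical lead’s judgment call.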

  5. Now you assign each bug to an individual contributor. This is a pretty standard managerial process, and while it definitely involves some detailed work and communication, I would imagine that most software engineering managers are already familiar with how to do it.

    One tricky piece here is that some of the bugs might be about code that isn’t maintained by your team. In that case you’ll have to work through the organization to get the appropriate team to take responsibility for the issue. It helps here to have buy-in from a manager higher up the chain whom you have in common with the other team.

    In some organizations, if the other team’s problem is not too complex or detailed, it might also be possible for your team to just make the changes themselves. This is a judgment call that you can make based on what you think is best for overall productivity.

  6. Now that you have all of these bugs filed, you have to figure out when to address them. Generally, the right thing to do is to make sure that developers regularly fix some of the code quality issues that you filed along with their feature work.

    If your team makes plans for a period of time like a quarter or six weeks, you should include some of the code cleanups in every plan. The best way to do this is to have developers first do cleanups that would make their specific feature work easier, and then have them do that feature work. Usually this doesn’t even slow down their feature work overall. (That is, if this is done correctly, developers can usually accomplish the same amount of feature work in a quarter that they could even if they weren’t also doing code cleanups, providing evidence that the code cleanups are already improving productivity.)

    Don’t stop normal feature development entirely to just work on code quality. Instead, make sure that enough code quality work is being done continuously that the codebase’s quality is always improving overall rather than getting worse over time.

If you do those things, that should get you well on the road to an actually-improving codebase. There’s actually quite a bit to know about this process in general—perhaps enough for another entire book. However, the above plus some common sense and experience should be enough to make major improvements in the quality of your codebase, and perhaps even improve your life as a software engineer or manager, too.


P.S. If you do find yourself wanting more help on it, I’d be happy to come speak at your company. Just let me know.

Test-Driven Development and the Cycle of Observation

Today there was an interesting discussion between Kent Beck, Martin Fowler, and David Heinemeier Hansson on the nature and use of Test-Driven Development (TDD), where one writes tests first and then writes code.

Each participant in the conversation had different personal preferences for how they write code, which makes sense. However, from each participant’s personal preference you could extract an identical principle: “I need to observe something before I can make a decision.” Kent often (though not always) liked writing tests first so that he could observe their behavior while coding. David often (though not always) wanted to write some initial code, observe that to decide on how to write more code, and so on. Even when they talked about their alternative methods (Kent talking about times he doesn’t use TDD, for example) they still always talked about having something to look at as an inherent part of the development process.

It’s possible to minimize this point and say it’s only relevant to debugging or testing. It’s true that it’s useful in those areas, but when you talk to many senior developers you find that this idea is actually a fundamental basis of their whole development workflow. They want to see something that will help them make decisions about their code. It’s not something that only happens when code is complete or when there’s a bug—it’s something that happens at every moment of the software lifecycle.

This is such a broad principle that you could say the cycle of all software development is:

Observation → Decision → Action → Observation → Decision → Action → etc.

If you want a term for this, you could call it the “Cycle of Observation” or “ODA.”


What do I mean by all of this? Well, let’s take some examples to make it clearer. When doing TDD, the cycle looks like:

  1. See a problem (observation).
  2. Decide to solve the problem (decision).
  3. Write a test (action).
  4. Look at the test and see if the API looks good (observation).
  5. If it doesn’t look good, decide how to fix it (decision), change the test (action), and repeat Observation → Decision → Action until you like what the API looks like.
  6. Now that the API looks good, run the test and see that it fails (observation).
  7. Decide how you’re going to make the test pass (decision).
  8. Write some code (action).
  9. Run the test and see that it passes or fails (observation).
  10. If it fails, decide how to fix it (decision) and write some code (action) until the test passes (observation).
  11. Decide what to work on next, based on principles of software design, knowledge of the problem, or the data you gained while writing the previous code (decision).
  12. And so on.
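As a concrete illustration of one pass through that loop, here’s a tiny test-first example in Python. The `slugify` function and its spec are my own invented example, not anything from the discussion:

```python
import re

# Steps 1-5: write the test first and look at the API it implies
# (observation). If the call signature looked awkward, we would
# change the test before writing any implementation.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 6: running the test now fails, because slugify doesn't exist
# yet (observation). Steps 7-8: decide on an approach and write it
# (decision, action).
def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into single
    # hyphens, and trim hyphens from the ends.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Step 9: run the test again and see it pass (observation).
test_slugify()
```

Each run of the test is an Observation that feeds the next Decision; the loop ends when the observation matches what you decided you wanted.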

Another valid way to go about this would be to write the code first. The difference from the above sequence is that Step 3 would be “write some code” rather than “write a test.” Then you observe the code itself to make further decisions, or you write tests after the code and observe those.

There are many valid processes.

Development Processes and Productivity

What’s interesting is that, as far as I know, every valid development process follows this cycle as its primary guiding principle. Even large-scale processes like Agile that cover a whole team have this built into them. In fact, Agile is to some degree an attempt to have shorter Observation-Decision-Action cycles (every few weeks) for a team than previous broken models (Waterfall, aka “Big Design Up Front”) which took months or years to get through a single cycle.

So, shorter cycles seem to be better than longer cycles. In fact, it’s possible that most of the goal of developer productivity could be accomplished simply by shortening the ODA cycle down to the smallest reasonable time period for the developer, the team, or the organization.

Usually you can accomplish these shorter cycles just by focusing on the Observation step. Once you’ve done that, the other two parts of the cycle tend to speed up on their own. (If they don’t, there are other remedies, but that’s another post.)

There are three key factors to address in Observation:

  • The speed with which information can be delivered to developers. (For example, having fast tests.)
  • The completeness of information delivered to the developers. (For example, having enough test coverage.)
  • The accuracy of information delivered to developers. (For example, having reliable tests.)

This helps us understand the reasons behind the success of certain development tools in recent decades. Continuous Integration, production monitoring systems, profilers, debuggers, better error messages in compilers, IDEs that highlight bad code—almost everything that’s “worked” has done so because it made Observation faster, more accurate, or more complete.

There is one catch—you have to deliver the information in such a way that it can actually be received by people. If you dump a huge sea of information on people without making it easy for them to find the specific data they care about, the data becomes useless. If nobody ever receives a production alert, then it doesn’t matter. If a developer is never sure of the accuracy of information received, then they may start to ignore it. You must successfully communicate the information, not just generate it.

The First ODA

There is a “big ODA cycle” that represents the whole process of software development—seeing a problem, deciding on a solution, and delivering it as software. Within that big cycle there are many smaller ones (see the need for a feature, decide on how the feature should work, and then write the feature). There are even smaller cycles within that (observe the requirements for a single change, decide on an implementation, write some code), and so on.

The trickiest part is the first ODA cycle in any of these sequences, because you have to make an observation with no previous decision or action.

For the “big” cycle, it may seem like you start off with nothing to observe. There’s no code or computer output to see yet! But in reality, you start off with at least yourself to observe. You have your environment around you. You have other people to talk to, a world to explore. Your first observations are often not of code, but of something to solve in the real world that will help people somehow.

Then when you’re doing development, sometimes you’ll come to a point where you have to decide “what do I work on next?” This is where knowing the laws of software design can help, because you can apply them to the code you’ve written and the problem you observed, which lets you decide on the sequence to work in. You can think of these principles as a form of observation that comes second-hand—the experience of thousands of person-years compressed into laws and rules that can help you make decisions now. Second-hand observation is completely valid observation, as long as it’s accurate.

You can even view the process of Observation as its own little ODA cycle: look at the world, decide to put your attention on something, put your attention on that thing, observe it, decide based on that to observe something else, etc.

There are likely infinite ways to use this principle; all of the above represents just a few examples.


The Secret of Fast Programming: Stop Thinking

When I talk to developers about code complexity, they often say that they want to write simple code, but deadline pressure or underlying issues mean that they just don’t have the time or knowledge necessary to both complete the task and refine it to simplicity.

Well, it’s certainly true that putting time pressure on developers tends to lead to them writing complex code. However, deadlines don’t have to lead to complexity. Instead of saying “This deadline prevents me from writing simple code,” one could equally say, “I am not a fast-enough programmer to make this simple.” That is, the faster you are as a programmer, the less your code quality has to be affected by deadlines.

Now, that’s nice to say, but how does one actually become faster? Is it a magic skill that people are born with? Do you become fast by being somehow “smarter” than other people?

No, it’s not magic or in-born at all. In fact, there is just one simple rule that, if followed, will eventually solve the problem entirely:

Any time you find yourself stopping to think, something is wrong.

Perhaps that sounds incredible, but it works remarkably well. Think about it—when you’re sitting in front of your editor but not coding very quickly, is it because you’re a slow typist? I doubt it—“having to type too much” is rarely a developer’s productivity problem. Instead, the pauses where you’re not typing are what make it slow. And what are developers usually doing during those pauses? Stopping to think—perhaps about the problem, perhaps about the tools, perhaps about email, whatever. But any time this happens, it indicates a problem.

The thinking is not the problem itself—it is a sign of some other problem. It could be one of many different issues:


The most common reason developers stop to think is that they did not fully understand some word or symbol.

This happened to me just the other day. It was taking me hours to write what should have been a really simple service. I kept stopping to think about it, trying to work out how it should behave. Finally, I realized that I didn’t understand one of the input variables to the primary function. I knew the name of its type, but I had never gone and read the definition of the type—I didn’t really understand what that variable (a word or symbol) meant. As soon as I looked up the type’s code and docs, everything became clear and I wrote that service like a demon (pun partially intended).

This can happen in almost infinite ways. Many people dive into a programming language without learning what (, ), [, ], {, }, +, *, and % really mean in that language. Some developers don’t understand how the computer really works. Remember when I wrote The Singular Secret of the Rockstar Programmer? This is why! Because when you truly understand, you don’t have to stop to think. It’s also a major motivation behind my book—understanding that there are unshakable laws to software design can eliminate a lot of the “stopping to think” moments.

So if you find that you are stopping to think, don’t try to solve the problem in your mind—search outside of yourself for what you didn’t understand. Then go look at something that will help you understand it. This even applies to questions like “Will a user ever read this text?” You might not have a User Experience Research Department to really answer that question, but you can at least make a drawing, show it to somebody, and ask their opinion. Don’t just sit there and think—do something. Only action leads to understanding.


Sometimes developers stop to think because they can’t hold enough concepts in their mind at once—lots of things are relating to each other in a complex way and they have to think through it. In this case, it’s almost always more efficient to write or draw something than it is to think about it. What you want is something you can look at, or somehow perceive outside of yourself. This is a form of understanding, but it’s special enough that I wanted to call it out on its own.


Sometimes the problem is “I have no idea what code to start writing.” The simplest solution here is to just start writing whatever code you know that you can write right now. Pick the part of the problem that you understand completely, and write the solution for that—even if it’s just one function, or an unimportant class.

Often, the simplest piece of code to start with is the “core” of the application. For example, if I was going to write a YouTube app, I would start with the video player. Think of it as an exercise in continuous delivery—write the code that would actually make a product first, no matter how silly or small that product is. A video player without any other UI is a product that does something useful (play video), even if it’s not a complete product yet.

If you’re not sure how to write even that core code yet, then just start with the code you are sure about. Generally I find that once a piece of the problem becomes solved, it’s much easier to solve the rest of it. Sometimes the problem unfolds in steps—you solve one part, which makes the solution of the next part obvious, and so forth. Whichever part doesn’t require much thinking to create, write that part now.

Skipping a Step

Another specialized understanding problem is when you’ve skipped some step in the proper sequence of development. For example, let’s say our Bike object depends on the Wheels, Pedals, and Frame objects. If you try to write the whole Bike object without writing the Wheels, Pedals, or Frame objects, you’re going to have to think a lot about those non-existent classes. On the other hand, if you write the Wheels class when there is no Bike class at all, you might have to think a lot about how the Wheels class is going to be used by the Bike class.

The right solution there would be to implement enough of the Bike class to get to the point where you need Wheels. Then write enough of the Wheels class to satisfy your immediate need in the Bike class. Then go back to the Bike class, and work on that until the next time you need one of the underlying pieces. Just like the “Starting” section, find the part of the problem that you can solve without thinking, and solve that immediately.

Don’t jump over steps in the development of your system and expect that you’ll be productive.
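The back-and-forth between Bike and Wheels can be sketched in code. This is only an illustration of the sequencing idea; the class names come from the text above, and the bodies are invented:

```python
class Wheels:
    # Written only at the moment Bike needed it, and containing only
    # what Bike needed so far: a way to spin.
    def spin(self):
        return "wheels spinning"


class Bike:
    def __init__(self):
        # Writing this constructor is the point at which we discovered
        # we needed a Wheels class at all, so we paused Bike, wrote
        # just enough of Wheels (above), and then came back here.
        self.wheels = Wheels()

    def ride(self):
        # The next underlying piece (Pedals, Frame, ...) gets written
        # the same way: only when riding actually requires it.
        return self.wheels.spin()


print(Bike().ride())
```

Neither class had to be designed in a vacuum: each piece of Wheels was written against a real, immediate need in Bike, so there was nothing to stop and think about.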

Physical Problems

If I haven’t eaten enough, I tend to get distracted and start to think because I’m hungry. It might not be thoughts about my stomach, but I wouldn’t be thinking if I were full—I’d be focused. This can also happen with sleep, illness, or any sort of body problem. It’s not as common as the “understanding” problem from above, so first always look for something you didn’t fully understand. If you’re really sure you understood everything, then physical problems could be a candidate.


When a developer becomes distracted by something external, such as noise, it can take some thinking to remember where they were in their solution. The answer here is relatively simple—before you start to develop, make sure that you are in an environment that will not distract you, or make it impossible for distractions to interrupt you. Some people close the door to their office, some people put on headphones, some people put up a “do not disturb” sign—whatever it takes. You might have to work together with your manager or co-workers to create a truly distraction-free environment for development.


Sometimes a developer sits and thinks because they feel unsure about themselves or their decisions. The solution to this is similar to the solution in the “Understanding” section—whatever you are uncertain about, learn more about it until you become certain enough to write code. If you just feel generally uncertain as a programmer, it might be that there are many things to learn more about, such as the fundamentals listed in Why Programmers Suck. Go through each piece you need to learn until you really understand it, then move on to the next piece, and so on. There will always be learning involved in the process of programming, but as you know more and more about it, you will become faster and faster and have to think less and less.

False Ideas

Many people have been told that thinking is what smart people do, thus, they stop to think in order to make intelligent decisions. However, this is a false idea. If thinking alone made you a genius, then everybody would be Einstein. Truly smart people learn, observe, decide, and act. They gain knowledge and then use that knowledge to address the problems in front of them. If you really want to be smart, use your intelligence to cause action in the physical universe—don’t use it just to think great thoughts to yourself.


All of the above is the secret to being a fast programmer when you are sitting and writing code. If you are caught up all day in reading email and going to meetings, then no programming happens whatsoever—that’s a different problem. Some aspects of it are similar (it’s a bit like the organization “stopping to think”), but it’s not the same.

Still, there are some analogous solutions you could try. Perhaps the organization does not fully understand you or your role, which is why they’re sending you so much email and putting you in so many meetings. Perhaps there’s something about the organization that you don’t fully understand, such as how to go to fewer meetings and get less email. :-) Maybe even some organizational difficulties can be resolved by adapting the solutions in this post to groups of people instead of individuals.


Make It Never Come Back

When solving a problem in a codebase, you’re not done when the symptoms stop. You’re done when the problem has disappeared and will never come back.

It’s very easy to stop solving a problem when it no longer has any visible symptoms. You’ve fixed the bug, nobody is complaining, and there seem to be other pressing issues. So why continue to do work on it? It’s fine for now, right?

No. Remember that what we care about the most in software is the future. The way that software companies get into unmanageable situations with their codebases is by not really handling problems until they are done.

This also explains why some organizations cannot get their tangled codebase back into a good state. They see one problem in the code, they tackle it until nobody’s complaining anymore, and then they move on to tackling the next symptom they see. They don’t put a framework in place to make sure the problem is never coming back. They don’t trace the problem to its source and then make it vanish. Thus their codebase never really becomes “healthy.”

This pattern of failing to fully handle problems is very common. As a result, many developers believe it is impossible for large software projects to stay well-designed–they say, “All software will eventually have to be thrown away and re-written.”

This is not true. I have spent most of my career either designing sustainable codebases from scratch or refactoring bad codebases into good ones. No matter how bad a codebase is, you can resolve its problems. However, you have to understand software design, you need enough manpower, and you have to handle problems until they will never come back.

In general, a good guideline for how resolved a problem has to be is:

A problem is resolved to the degree that no human being will ever have to pay attention to it again.

Accomplishing this in an absolute sense is impossible–you can’t predict the entire future, and so on–but that’s more of a philosophical objection than a practical one. In most practical circumstances you can effectively resolve a problem to the degree that nobody has to pay attention to it now and there’s no immediately-apparent reason they’d have to pay attention to it in the future either.


Let’s say you have a web page and you write a “hit counter” for the site that tracks how many people have visited it. You discover a bug in the hit counter–it’s counting 1.5 times as many visits as it should be counting. You have a few options for how you could solve this:

You could ignore the problem.
The rationale here would be that your site isn’t very popular and so it doesn’t matter if your hit counter is lying. Also, it’s making your site look more successful than it is, which might help you.

The reason this is a bad solution is that there are many future scenarios in which this could again become a problem–particularly if your site becomes very successful. For example, a major news publication publishes your hit numbers–but they are false. This causes a scandal, your users lose trust in you (after all, you knew about the problem and didn’t solve it) and your site becomes unpopular again. One could easily imagine other ways this problem could come back to haunt you.

You could hack a quick solution.
When you display the hits, just divide them by 1.5 and the number is accurate. However, you didn’t investigate the underlying cause, which turns out to be that it counts 3x as many hits from 8:00 to 11:00 in the morning. Later your traffic pattern changes and your counter is completely wrong again. You might not even notice for a while because the hack will make it harder to debug.

Investigate and resolve the underlying cause.
You discover it’s counting 3x hits from 8:00 to 11:00. You discover this happens because your web server deletes many old files from the disk during that time, and that interferes with the hit counter for some reason.

At this point you have another opportunity to hack a solution–you could simply disable the deletion process or make it run less frequently. But that’s not really tracing down the underlying cause. What you want to know is, “Why does it miscount just because something else is happening on the machine?”

Investigating further, you discover that if you interrupt the program and then restart it, it will count the last visit again. The deletion process was using so many resources on the machine that it was interrupting the counter two times for every visit between 8:00 and 11:00. So it counted every visit three times during that period. But actually, the bug could have added infinite (or at least unpredictable) counts depending on the load on the machine.

You redesign the counter so that it counts reliably even when interrupted, and the problem disappears.

Obviously the right choice from that list is to investigate the underlying cause and resolve it. That causes the problem to vanish, and most developers would believe they are done there. However, there’s still more to do if you really want to be sure the problem will never again require human attention.

First off, somebody could come along and change the code of the hit counter, reverting it back to a broken state in the future. Obviously the right solution for that is to add an automated test that assures the correct functioning of the hit counter even when it is interrupted. Then you make sure that test runs continuously and alerts developers when it fails. Now you’re done, right?

Nope. Even at this point, there are some future risks that have to be handled.

The next issue is that the test you’ve written has to be easy to maintain. If the test is hard to maintain–it changes a lot when developers change the code, the test code itself is cryptic, it would be easy for it to return a false positive if the code changes, etc.–then there’s a good chance the test will break or somebody will disable it in the future. Then the problem could again require human attention. So you have to assure that you’ve written a maintainable test, and refactor the test if it’s not maintainable. This may lead you down another path of investigation into the test framework or the system under test, to figure out a refactoring that would make the test code simpler.

After this you have concerns like the continuous integration system (the test runner)–is it reliable? Could it fail in a way that would make your test require human attention? This could be another path of investigation.

All of these paths of investigation may turn up other problems that then have to be traced down to their sources, which may turn up more problems to trace down, and so on. You may find that you can discover (and possibly resolve) all your codebase’s major issues just by starting with a few symptoms and being very determined about tracing down underlying causes.

Does anybody really do this? Yes. It might seem difficult at first, but as you resolve more and more of these underlying issues, things really do start to get easier and you can move faster and faster with fewer and fewer problems.

Down the Rabbit Hole

Beyond all of this, if you really want to get adventurous, there’s one more question you can ask: why did the developer write buggy code in the first place? Why was it possible for a bug to ever exist? Is it a problem with the developer’s education? Was it something about their process? Should they be writing tests as they go? Was there some design problem in the system that made it hard to modify? Is the programming language too complex? Are the libraries they’re using not well-written? Is the operating system not behaving well? Was the documentation unclear?

Once you get your answer, you can ask what the underlying cause of that problem is, and continue asking that question until you’re satisfied. But beware: this can take you down a rabbit hole and into a place that changes your whole view of software development. In fact, theoretically this system is unlimited, and would eventually result in resolving the underlying problems of the entire software industry. How far you want to go is up to you.

Installing multiple versions of Ruby using RVM

Ruby Version Manager (RVM) is a tool that allows you to install multiple versions of Ruby side by side and switch between them. Very handy for those who have to maintain different applications using different versions of Ruby.

To start, download RVM and install the latest stable version of Ruby:

$ curl -L https://get.rvm.io | bash -s stable --ruby
$ source ~/.bash_profile

Install an old version of Ruby:

$ rvm install 1.8.6
$ rvm use 1.8.6 --default
$ ruby -v
ruby 1.8.6

Create a Gem set and install an old version of Rails:

$ rvm gemset create rails123
$ gem install rails -v 1.2.3
$ rails -v
Rails 1.2.3

Switch back to your system:

$ rvm system
$ rails -v
Rails 2.3.5

Switch back to your RVM environment:

$ rvm 1.8.6@rails123

And, if you want to remove Rails 1.2.3, just delete the Gem set:

$ rvm gemset delete rails123

Alternatively to RVM, you also might look into rbenv.


Go is gaining momentum

The Go language is gaining momentum among software engineers and PaaS/IaaS vendors. I think they all see potential in a simple, reliable, efficient and native-compiling language with a solid baseline API.

Thanks to its simplicity, performance, intuitive design and Google’s commitment to its future, Go could spark a change in the industry like we haven’t seen since the rise of Ruby in 2004. Ruby gained popularity among web developers and system administrators after David Heinemeier Hansson created Rails and Luke Kanies created Puppet. Python, on the other hand, is still more attractive to teachers and academics and seems to be the preferred teaching language in many top universities, along with C/C++ and Java.

Go is more of an evolutionary than revolutionary language, and a great alternative to languages like Ruby, Python and C/C++, or platforms like Node.js. Experts predict that Go will become the dominant language for systems work in IaaS, Orchestration, and PaaS in the next couple of years. It has the potential to make a significant impact on server-side software; however, it still needs more adoption outside Google.