Technologies That Changed How We Communicate

Communication has changed immensely in the past few decades. From crackling, ringing telephone lines to multitasking on the latest smartphones, mankind has witnessed connecting with people evolve to a whole new level. Thanks to technological visionaries like Steve Jobs, the market for communication devices (like his brainchild, the iPhone) is projected to grow in astounding numbers.

Behind these marvelous devices are even more spectacular applications that make these communication advancements progressively more powerful.

An old telephone.

Through computer technology and software, communication tools have become integral elements of every industry in every corner of the planet. Software for staying connected with other people has brought efficiency and comfort to what was previously a hard and complicated task. With the continuously increasing demand for unending connectivity (through social media), mobile application downloads are similarly on the rise.

Fax Machines

Basic communication tools (like telephones and fax machines) have evolved so much that their technology has become a cultural fixture of today’s generation. A telephone system with caller ID was once an exciting advancement. Now, fax machines have been given a reinvigorating makeover through their integration with software. Fax messaging has gone far beyond what it was expected to be as it enters the internet: Gmail now provides a more efficient and less expensive way of sending fax messages. Check out Gmail Fax Help for more information.

An old fax machine.

Technological evolution has definitely shown how much easier and more convenient mankind can make its lives. Since the advantages of software engineering and design are much more evident nowadays, it is only apt to take a look at the applications and technologies that sparked this masterful evolution of communication.

Up High with Skype

Since its founding in August 2003, Skype Technologies has been making waves in video chatting and voice call services across the globe. The company leads worldwide communication with an excellently integrated video calling application that lets you see the person on the other side.

A guy waving through Skype.

This idea held huge potential back when the internet was still starting to snowball. The concept of the application is to make long-distance communication with another person more personal by using video. As everyone knows by now, both parties need a webcam for the system to work. With the press of a button (or the dialing of numbers) in the application, the unique experience of a video conversation can begin.

From this simple idea, world communication has improved by leaps and bounds.

Skype has been a major player in the video communications market and continues operating to this day. People still use Skype for everything from conducting online interviews to simply chatting with friends in the farthest regions of the planet. Even with big competitors like Facebook Video Call and FaceTime, Skype remains a thriving application that deserves much appreciation for its innovative ways.

Mail Time in No Time

The earliest method of communication started with writing letters and mailing them locally and even across the globe. Obviously, the process took a very long time to complete because of the travel time of each trip.

Fast forward a couple of years (or decades, even): snail mail improved when local post offices took a step forward and started sending letters with a faster, more systematic approach to delivery. The travel time of each letter became a bit more efficient, but it was still considerably slow.

Today, sending messages has not only become paper-less, it also has become an instant message.

Yahoo Messenger.

Leading company Yahoo! provided services that allowed users to communicate through online chat. Who could ever forget the iconic smiley of the Yahoo Messenger application when it signed in? Yahoo Messenger became an instant brand for quick and exciting messaging that everyone in the world would know. It was at the forefront of what was called Instant Messaging, or IM, in the dial-up internet era.

What Yahoo! Messenger (or YM) did was revolutionize the way chatting works. Since most people would rather glance at an application than check their emails from time to time, the company made extreme efforts to provide a service that was free and easy to use. One only needed a Yahoo! account to set up YM and build a messenger list. The application also boasted emoticons, now more popularly known as emojis. These emoticons established YM as a brand that promotes communication in a fun and easy manner.

Social networking sites are not necessarily new to the business. True, they are more apparent now than ever with the recent advances of Facebook, Twitter and Instagram. Nonetheless, social networking actually began way back, when Myspace dominated the internet.

Myspace is a social media site that incorporates internet sharing, commenting and posting in a more old-fashioned way. Photos and videos can be linked on one’s profile to show off what he or she has been up to lately. This was a brilliant way for people to share their lives and build an online personality.

Myspace music.

Though never at the level of today’s Facebook, MySpace was once the perennial favorite. It got there by tempting the key young-adult demographic with music, videos, and a funky, feature-filled environment. It looked and felt more hip and trendy than major competitor Friendster right from the start. True enough, it conducted a campaign in its early days to show alienated Friendster users just what they were missing.

The company behind Myspace recognized the innovative approach it needed to take and stamped social media firmly on the internet. Though Myspace was not an application per se, it was a platform on which social networking sites built their foundations.

As such, Myspace is considered one of the most successful technologies ever produced, linking communication and technology in an engaging and interactive set-up during its time.


People now reside in a world where connecting with each other is as easy as 1, 2, 3. Communication has grown so much over the years, almost in an instant, that every company is scrambling left and right to find the one right recipe for something new, something that stands out. This attitude can be traced back to what software designers and software engineers did in the past. Modern communication machinery is founded on the efforts of enthusiasts who established the beneficial link between communication and technology.

Applications and technologies in the past paved the way for an extreme communications evolution.


Simply put, many technologies served as the foundation for the technologies people have right now. Thanks to them, people around the world can link together harmoniously, knowing that everything, especially communication, can be made fast and easy.

How Agile Software Development Can Fix Technical Debt

Traditionally, programmers are taught that software programs follow a phase-based approach to development: feature development, alpha, beta, and golden master (GM). This has been the staple for a very long time now.

The thing is, software nowadays sells against its competitors on features. For example, you would choose an organizer app that has the bonus of bookmark widgets over one that only has a calendar. Features are now the selling point of most software, and they have introduced many challenges for programmers to address.


Feature Development

Feature development is the phase where new features are built, and (ideally) residual issues from the most recent release are addressed when possible. The development cycle reaches “alpha” when each feature is implemented and ready for testing. “Beta” then hits when enough bugs have been fixed to enable customer feedback.

In unfortunate cases, while programmers are busy trying to fix enough bugs to reach beta, new bugs appear. This is one of the most common, and most annoying, phenomena in programming, especially when the designs are revolutionary and complex.

It’s a classic case of whack-a-mole: fix one bug, and two more pop up. Frustrating for most, but this is where the bottleneck in programming productivity comes from.

Technical debt.

Finally, after a long game of whack-a-mole with the bugs, the release reaches the golden master milestone when there are zero open bugs. Then again, this is usually achieved by fixing just the issues that could stop the program from leaving the beta phase. Simply put, programmers only remove the bugs that are outright apparent, and leave the unnoticeable ones for later releases, after users have reported them.

Technical Debt: The Ultimate Programmer’s Challenge

Constantly procrastinating on bugs that need to be fixed is a dangerous way to make software. As the bug count grows, tackling it becomes increasingly daunting, resulting in a vicious death spiral of technical debt.

Technical debt is a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution. Technical debt is usually associated with ultra-complex programming, especially in the context of refactoring.
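To make that concrete with a small invented example (the pricing code below isn’t from any particular codebase), compare a quick inline fix with the cleaner solution it defers:

```ruby
# Quick fix: special-case every discount inline. Easy to ship today,
# but each new customer type adds another branch to this method.
def price_with_discount_quick(price, customer_type)
  if customer_type == :student
    price * 0.9
  elsif customer_type == :senior
    price * 0.8
  else
    price
  end
end

# Paying down the debt: discounts become data, so adding a new
# customer type no longer means editing the pricing logic.
DISCOUNTS = { student: 0.9, senior: 0.8 }.freeze

def price_with_discount(price, customer_type)
  price * DISCOUNTS.fetch(customer_type, 1.0)
end
```

Both versions give the same answers today; the difference is the interest you pay later, every time the quick version has to grow another branch.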

To make matters worse, schedules get derailed because coding around the bugs slows down development. Meanwhile, customers experience death by a thousand cuts from unfixed defects. As such, many programming experts have begun addressing the growing problem of technical debt and introduced Agile Software Development.

Agile Software Development: The Main Answer to Technical Debt

In software application development, Agile Software Development (ASD) is technically defined as “a methodology for the creative process that anticipates the need for flexibility and applies a level of pragmatism into the delivery of the finished product.”

The Agile Development process.

Agile software development focuses on keeping code simple, testing often, and delivering functional bits of the application as soon as they’re ready. The goal of ASD is to build upon small client-approved parts as the project progresses, as opposed to delivering one large application at the end of the project.

Agile puts a “quality factor” into the iterative development approach so that the programming team maintains a consistent level of quality in every release. If a feature is half-baked, it is essentially thrown in the trash. Good programmers now rely on one simple trick: redefining what “done” means.

For traditional teams, “done” means “good enough” for Quality Assurance (QA) to begin. The problem with this definition is that only the obvious bugs are apparent early in the release cycle. As a result, by the time QA gets its hands on the product, it is saddled with layers upon layers of defects that weren’t easily noticed.

Agile teams, however, define “done” as ready to release; this doesn’t just mean that the work could be handed to users. It also means developers don’t move on to the next story or feature until their current item is practically in the customer’s hands. To speed things along, they use techniques like feature branching workflows, automated testing, and continuous integration throughout the development cycle.
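As a rough sketch of what the automated-testing piece can look like (using Ruby’s bundled Minitest; the slugify function and its tests are invented for illustration):

```ruby
require "minitest/autorun"

# A tiny piece of a feature: the unit under test.
def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

# Automated checks that run on every build; under the agile definition,
# the story is only "done" when these pass in continuous integration.
class SlugifyTest < Minitest::Test
  def test_replaces_spaces_and_punctuation
    assert_equal "hello-world", slugify("Hello, World!")
  end

  def test_trims_leading_and_trailing_separators
    assert_equal "agile", slugify("  Agile?  ")
  end
end
```

Because tests like these run automatically on every commit, “done” stays done: a later change that breaks the feature fails the build instead of silently joining the bug backlog.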

The Agile Manifesto

Agile development is not a methodology in itself. It is the collective term for several agile methodologies. It is basically the product of a collaboration between many software developers who valued quality and sought a good way to help the coming surge of software programmers.

SCRUM process.

At the signing of the Agile Manifesto in 2001, these methodologies included Scrum, XP, Crystal, FDD, and DSDM. Since then, lean practices have also emerged as a valuable agile methodology and so are included under the agile development umbrella. The Agile Manifesto’s principles are:

  1.   Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2.   Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3.   Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4.   Business people and developers must work together daily throughout the project.
  5.   Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6.   The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7.   Working software is the primary measure of progress.
  8.   Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9.   Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Ultimately, Agile Software Development is an approach to programming that is lean in nature, much like Lean Manufacturing: reduce waste, include the clients every step of the way, and address problems as soon as possible.

Other Benefits of Agile Software Development

Agile provides the stakeholders (the client or the target users) and the team multiple opportunities for engagement before, during, and after each phase. The flexibility, versatility, and efficiency of ASD allow stakeholders to be involved every step of the way.

It’s like getting a Gmail account: an all-around email service, fax messaging capability, and access to practically every essential media communication tool on the web.

By involving the client in every step of the project, there is a high degree of collaboration between the client and project team, providing more opportunities for the team to truly understand the client’s vision. Rendering quality software frequently increases stakeholders’ trust in the team’s ability to deliver high-quality working software, and encourages them to be more deeply engaged in the project.

A tech team meeting.

While the team needs to stay focused on delivering the agreed subset of the product’s features during each iteration, there is an opportunity to constantly refine and re-prioritize the overall product backlog. New or changed backlog items can be planned for the next iteration, providing the opportunity to introduce changes within a few weeks.

Also, an Agile approach provides a unique opportunity for clients to be involved throughout the project – from prioritizing features and planning iterations to review sessions and frequent software builds containing new features. In exchange for this added transparency, however, clients must understand that they are seeing a work in progress.

By breaking down the project into manageable units, the project team can focus on high-quality development, testing, and collaboration. Also, by producing frequent builds and conducting testing and reviews in every iteration, quality is improved by finding and fixing defects quickly and identifying expectation mismatches early.

3 Steps to Effective Software Design

It is quite astonishing to see the world thriving together in using the computer as a major means of running its daily course. People are continually expanding their knowledge of computer technology. What is even more astonishing is that from a world that started with sharpened rocks and two-stone fires, we now live in a generation renovated by sharpened minds, illuminated by the knowledge we have acquired over the trials and errors of time. Thanks to the power of computing, this world is living in transcendence and will likely become so much more.

a computer graphic

Computers have crept into practically every crevice of our daily lives. In fact, we can go as far as to say that our lifestyle is now based on what our computers can offer. In our social structure, we are witnesses to the rising number of people utilizing the wonders of social networking sites, or SNS. They have become so in demand that almost everyone has their hands full on Facebook, Twitter, Instagram and many other social networking platforms.

Even the field of business has taken notice of computers’ powerful appeal. Websites are now a primary requirement for businesses to provide content for the people dubbed ‘netizens’. A prevailing tool on the internet called WordPress has become widely used; in fact, according to this WordPress hosting test, the platform is on the rise and will continue to rise given the consistency of its sites. Looking at the bigger picture, we can see that businesses utilizing the power of computing produce jobs for people learned in the disciplines of computer engineering and computer science, as well as information technology.

It is plain to see that computer technology links different and varying fields. From the social to the economic to the academic, there is a common thread that computer technology offers. This is the same reason people nowadays are leaning towards understanding its processes, the very fundamentals of this broad subject.

One of the most important aspects of computer technology is software design. What is software design, really? Software design refers to the process of applying software solutions to one or more sets of problems. Frankly speaking, we cannot use any of our computers without software. This is why it is important for us, especially those interested in the field, to take a look at the process of designing software.

Here are three guidelines to understand the software design process.

Planning is Key

Good planning is THE phase all successful projects share. For software design, the right planning makes all the difference. Thus, the first step towards successful software design planning is critical analysis.

keys displaying "needs" and "wants"

Software designers must interact, extract, recognize and analyse. Software design is an intricate process that starts with a good grasp of the objectives to meet. Designers should have good interaction skills with the client so that they can extract the client’s essential purpose and preferences.

Recognizing and distinguishing which requirements need to be checked and rechecked is also an important part of the planning phase. In addition, the developers have to perform a high-level analysis of what steps to take and how to take them.

The second phase of planning focuses more on specification. The backbone of software design is the mathematical description that will be used rigorously in the specification task. Specification is the task of accurately defining the software to be written.
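One lightweight way to make a specification this precise is to write it down as executable checks. A hypothetical sketch (the withdraw contract below is invented for illustration):

```ruby
# The specification for a withdraw operation, written as an
# executable contract rather than prose.
def withdraw(balance, amount)
  # Precondition from the spec: amount is positive and covered by balance.
  raise ArgumentError, "amount must be positive" unless amount > 0
  raise ArgumentError, "insufficient funds" if amount > balance

  new_balance = balance - amount

  # Postcondition from the spec: balance decreases by exactly amount.
  raise "postcondition violated" unless new_balance == balance - amount
  new_balance
end
```

Stated this way, the specification is unambiguous: any implementation either satisfies the pre- and postconditions or fails loudly, which is exactly the rigor the mathematical description is meant to provide.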

a venn diagram of specification

Most effective specifications are actually prepared to interpret and fine-tune applications that were already well built, although safety-critical software systems are often carefully specified prior to application development. In addition, specifications play a more significant role for external interfaces that must stay stable.

The last stage of planning is the architecture of the software. Software architecture is an abstract representation of the system to be built. Basically, it is the construction that makes sure the requirements and specifications are addressed properly; it’s the blueprint of the software to be implemented.

Make it Happen

After all the elements are made clear during the planning stage, the execution phase comes next. In software design, the execution stages are implementation and testing.

Implementation is where the plan is put into action: in this case, the coding part. Programming the design proves to be the most visible job of the software engineer. Various programming languages can be used at this stage of execution, so different approaches can be taken in designing software.

keep calm and continue testing

After the code has been written, it is important that the project be tested and reviewed in search of any glitches. This opens the door to finding out which workarounds solve any underlying problems. Testing is a demanding and laborious task for the software engineer, but it proves vital in the development of the actual software. Without testing, software design cannot be proofed or verified.
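A tiny invented example of the kind of glitch testing turns up: a naive average function looks fine on typical inputs, but a test with an empty list exposes the problem, and a guarded version is the fix that testing motivates:

```ruby
# Naive version: works on typical inputs, so it survives a casual review.
def average_naive(numbers)
  numbers.sum(0.0) / numbers.size
end

# A test with an empty list reveals the glitch: average_naive([])
# divides 0.0 by zero and returns NaN instead of something sensible.
# Guarded version, written after the test exposed the problem:
def average(numbers)
  return 0.0 if numbers.empty?
  numbers.sum(0.0) / numbers.size
end
```

The point is not this particular function but the habit: each glitch a test uncovers becomes a guard in the code and a regression test for the future.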

Maintain it, Sustain It

Software design maintenance is the cherry on top of planning and execution. The truth of the matter is that issues in software are inevitable. That is why sustaining it with a solid maintenance model is just as important as the development itself. Documentation, training and support, and primary maintenance are the key factors in software sustenance.

One of the most significant things most people in the software design field forget is consumer training and support. If the software is out but nobody knows how to operate it, the whole thing is pointless. With ample training, support and fast feedback, everything can run smoothly. So how do software design companies conduct training and support?

software repair

Software designers and involved parties give out seminars and training to customers and clients before they start using the application. In this way, the client can freely ask questions while learning how to handle the product.

In addition, documentation is an integral part of maintenance. Documentation in the field of software design deals with recording the internal design of the software to create references for future maintenance and enhancement. In the literal sense, it’s a diary of what worked and what did not during implementation. It goes hand in hand with testing. In a nutshell, documentation is the recording and citation part of the whole development process.

Having documented all the problems and technicalities, primary maintenance now comes into play. All the processes mentioned come down to maintenance. It’s a vigorous process that needs ample time. Almost always, there will be problems in the software. That is the reason there is more work in maintenance than in planning and coding combined. It is all about continuous growth and continuous learning for full-fledged software designers.

Software design is definitely a complicated process that includes various phases. It requires a lot of work and a whole lot more dedication. With the right planning, designers can learn which demands to meet and how they can provide specific solutions. The implementation comes afterwards where the programming and testing phase brings out the beauty of the technology. Lastly, the maintenance of this software is to be watched out for since problems will certainly arise.

Our relentless reliance on technology attests that we are a race still evolving. To evolve is to accept that change is inevitable, especially in meeting the needs of our technologically demanding times. With the right conviction, we continually expand our knowledge in the field of computer technology for the advancement of our lives.

Software design is a key ingredient in this endeavour, and having the right knowledge to go about its process is a significant factor for both professionals and aspirants alike.

The Role Of Design In Software Development Process

Anytime we want to build or create something, we first have to imagine how it will look. Of course, “function” is important as well, perhaps even more than “form” in some cases, but when it comes to software development and modern methods of coding, design is equally important and valuable. Some software developers even say that you should always design first and code later, but this old geek “joke” cannot be applied to all cases, and there are many ways in which good software can be developed if you code first.


The reason this old concept is being forgotten, and is even becoming obsolete, lies in the fact that modern programming tools allow faster construction of code. This also enables developers to find problems and fix them easily, without needing to perform numerous tests or spend a lot of time verifying the stability and performance of the algorithm.

Software design focuses on both main elements of the code – algorithm and architecture, and this makes it so important. Since design affects the end user in a lot of ways, sometimes even more than the back-end part of the project, it is vital that all elements of a software solution are placed in the right position and that the final design suits the needs of the customers.

Designers have to be in constant communication with the rest of the team, simply because software development is not a “one man’s job” anymore. Several people, at least, are involved in this process, and large companies often deploy dozens of experts to work on a single piece of software. Therefore, communication is key, and being able to understand what the rest of the team needs is imperative for a good designer.

Project managers, consultants, developers, content writers, testers, users, etc. are all good sources of information, and a good designer will find the perfect balance between all of these parties.

The process of design is basically about problem-solving and planning, but it can be divided into three main stages.

In the first stage, you brainstorm ideas, create concepts and make plans about the project and how your end product should look. Once you find a suitable idea, you move on to the second stage, which has one goal: to create a wire-frame of the main elements that make up the architecture of the software. It is important to arrange everything in a way that is simple but functional, and we all know how hard simplicity is to achieve.

After this step comes the third stage, which can be called the “actual design”. This part of the process is concerned with the shapes, colors, textures and all similar features of the elements that make up the design. According to the client’s wishes and preferences, the product receives its final form, and aesthetics matter in this final stage as well. Designers who make everything look “nice” will have a lot of satisfied customers, and they will justify the importance of good design in the software development process.

Learn Vim Fast: Moving In and Getting Around

If you started learning Vim with my last post, and you’ve been practicing the handful of commands we covered, then you’re probably pretty tired of how you have to move around and edit things with only those rudimentary commands. It’s time to expand our command set and crank up our efficiency in getting things done with Vim, but before we do that, let’s take a look at how to make Vim a bit more comfortable to look at.

Moving In

Normally I use gvim, the variant of Vim with a GUI window. Out of the box, gvim looks like this:


New gvim window

It looks somewhat uninviting to my eyes. Personally, I prefer editing code on a dark background, so I’d like to change the color scheme. Also, the bar of icons at the top of the window only encourages you to use the mouse for those common tasks, but we don’t need them. The extra space would be better, so we’ll get rid of them. To change these settings and more, we need to create a Vim configuration file. Putting this file in your home directory will cause Vim to load the configuration whenever it launches. With Vim open, type :e ~/.vimrc to create this file for editing, and enter the following lines:


Listing of .vimrc
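In case the screenshot doesn’t come through, here is roughly what the listing contains, reconstructed from the walkthrough that follows (the exact fonts, window dimensions, and ignore patterns in particular are my guesses; adjust them to taste):

```vim
" ~/.vimrc
set nocompatible
set history=100

" Searching
set incsearch
set hlsearch

" Make backspace behave as expected in Insert Mode
set backspace=indent,eol,start

" Drop the GUI toolbar (go is short for guioptions)
set go-=T

" Tabs: 2 spaces, expanded into spaces
set tabstop=2
set expandtab
set shiftwidth=2

" Window size, line numbers, auto-reload of externally changed files
set lines=40
set columns=100
set number
set autoread

" Pick a font per operating system (example font names)
if has("gui_gtk2") || has("gui_gtk3")
  set guifont=DejaVu\ Sans\ Mono\ 10
elseif has("gui_macvim")
  set guifont=Menlo:h12
elseif has("gui_win32")
  set guifont=Consolas:h10
endif

" File extensions Vim should never offer to open
set wildignore=*.o,*.obj,*.pyc,*.class

" Trim trailing whitespace just before every write
autocmd BufWritePre * %s/\s\+$//e

" Syntax highlighting and color scheme
" (color scheme files go in ~/.vim/colors)
syntax on
colorscheme vividchalk
```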

Don’t be overwhelmed. We’ll go through each one of these settings, but I wanted to lay them all out as one listing first.

Starting at the top, set nocompatible disables compatibility mode so that some new features of Vim work correctly. This setting is likely already set this way, but we’ll be extra sure.

Next, set history=100 allows you to use the up and down arrows at the command line (by typing ‘:’) to search back and forth through your history of commands, and you can execute any previous command by going back to it, editing it if needed, and hitting <Enter>. The history is set to some number I don’t remember by default, but 100 is a good large number. You can set it to whatever you want.

The next two settings, incsearch and hlsearch, cause the cursor to advance to the first matching search term and highlight all matching search terms when you’re searching, respectively. These will come into play later when we cover searching, but just know that they are nice to have on and feel quite natural with the search command.

The set backspace=indent,eol,start setting makes it so the <Backspace> key will delete indents, end-of-line characters, and characters not added in the current Insert Mode session when you’re in Insert Mode. It basically makes the <Backspace> key work as you’d expect, and in recent experience, it seems that backspace is set this way by default. It doesn’t hurt to be explicit, though.

The set go-=T setting is short for guioptions-=T, and it gets rid of the toolbar because you aren’t gonna need it. Besides, it just wastes vertical space, which is at a premium with today’s wide screen monitors.

The next set of options set up how Vim handles tabs. The tab stop is set to 2 spaces, tabs are expanded into spaces, and shifting tabs back and forth will shift them by 2 spaces. This is somewhat of a personal preference. Some people prefer other tab sizes, but this setup has generally worked for me to make tabs behave in a reasonable way. If you want a different tab size, just change the 2s to something else.

The line and column settings force the GUI window to open with that number of lines and columns. You can still resize the window, but these values are a good fit for most of the monitors I use. I might tweak them for larger monitors. I also turn on line numbers with set number, and make Vim automatically update a buffer when the file in the buffer was changed outside of Vim with set autoread.

The section with the set guifont settings changes the font depending on which operating system Vim is running on. Each of the fonts is one that is available by default on the corresponding system and a nice font to look at for coding.

The set wildignore option forces Vim to ignore certain file extensions that I really don’t want it to ever open.

The next command is pretty cool. The autocmd BufWritePre command attaches a task to a hook that executes just prior to writing a buffer to a file. This particular one trims all whitespace from the end of every line. It’s a nice little add-on, and there are a number of other hooks that you can attach commands to. I haven’t found anything else that I want to execute automatically on other actions, but the option is there and the sky’s the limit.

The last two settings turn syntax highlighting on (even though it’s most likely already on) and set the color scheme to my preferred one. You can see the comment before the last line that reminds me where to put the Vim color scheme file. I always forget when setting up a new Vim. That’s where you should put it, too. My favorite is vividchalk, gleaned from this list of color schemes.

Now that you have all of these settings entered, you can save the file and then execute :source ~/.vimrc to actually load the Vim configuration for the currently running Vim session. Some of the changes are immediately visible, while the rest will become apparent as you work in Vim.


Listing of .vimrc with new settings sourced

Ahh, that looks much better. Now we can start exploring some ways to get around more easily in Vim than what we’ve been doing with just the arrow, page up, and page down keys.


Getting Around

For the rest of the commands we are going to cover, we’ll use a snippet of code as our canvas to experiment with what these commands do. It’s a Ruby module that implements a few basic statistics functions, but what it does isn’t very important. We’ll simply use it as something to look at in the context of learning new Vim commands. Here’s what it looks like:


Ruby code listing in gvim

And here’s the code, so you can copy and paste it into a text file.

module Statistics
  def self.sum(data)
    data.inject(0.0) { |s, val| s + val }
  end

  def self.mean(data)
    sum(data) / data.size
  end

  def self.variance(data)
    mu = mean data
    squared_diff = { |val| (mu - val) ** 2 }
    mean squared_diff
  end

  def self.stdev(data)
    Math.sqrt variance(data)
  end
end

data = [1,2,2,3,3,3,3.5,3.5,3.5,4,4,4,5,5,6]

p Statistics.mean(data)
p Statistics.stdev(data)

Now, let’s say we want to go down to the variance method so we can change the name to var. The cursor is currently at the start of the buffer. Up until now, we’d have to use the arrow keys to get there, but it’s 10 lines down and 12 characters over. Using the arrow keys is tedious, so we want a faster way.

One marginally faster way to move is to use the h, j, k, and l keys. The h and l keys move left and right, respectively, and the j and k keys move down and up, respectively. You can keep these commands straight by remembering that the h key is on the left, the l key is on the right, the j key has a hook down, and the k key has a line going up. If you can get used to using these keys, great. I never have, and I continue to use the arrow keys for small movements. For larger movements like this, we have better options.

Type 10G to move the cursor directly to the beginning of line 10. This is why it’s nice to have the line numbers listed. It makes it really easy to jump to exactly the line you want to go to. The G command can be thought of as Goto, and it will make the cursor go to the line number corresponding to whatever number was typed immediately before it. Most Vim commands are like this, where you can specify a number before the command. In this case, it denotes the line number, but in most other cases it will specify how many times to repeat the command.

Another way to get to the tenth line is to type 10gg. It essentially has the same effect, but G and gg have a subtle difference. If you type G without specifying a number, it will move the cursor to the last line of the buffer, while if you do the same with gg, it will take you to the first line of the buffer. It’s pretty handy if you don’t know how long the buffer is, but you want to get to the end as fast as possible. A 1G would suffice for getting to the beginning of the buffer, but gg is a bit faster to type. Anyway, here’s where we are now:


Goto line 10 in gvim

To go the rest of the way, we want to move quickly across this line without having to tap the l key 12 times. We can do better with the w command. This command moves the cursor to the beginning of the next word, which is the first alphanumeric character after a non-alphanumeric character. (An underscore is considered an alphanumeric character.) The lowercase w will also stop at punctuation marks, like the ‘.’, ‘(‘, and ‘)’ in the code. A capital W will only stop at characters following a whitespace character. I’m sure I’m missing some subtleties here, but after experimenting with it for a while, you’ll get a sense for what characters w and W will stop on.

For this case, we want to type www to get to the ‘v’. Finally, we type lll to move over to the ‘i’, and we’re ready to change the name. For now, that means typing i to get into Insert Mode, and hitting the <Delete> key five times to delete ‘iance’.


Rename variance in gvim

Most Vim commands have opposite commands, and since w will only move the cursor forward, we need the opposite command for moving backward one word at a time. That command is b, for backward (or maybe backword, so we’ve got word and backword), and W has the corresponding B as its opposite command. All of these commands accept a repeat number before them, so we could have just as easily gotten to the ‘v’ by typing 3w instead of www.

Vim also has quick commands for moving to the beginning or end of a line. If you know regular expressions, these will look familiar. A ^ will move the cursor to the first non-blank character of the line the cursor is on (0 moves to the very first column), and a $ will move it to the end of the line.

One more way to get where we want to go even faster is to use the search function. So far we’ve gotten to the desired position in 8 characters (10gg3w3l), but search does it in fewer, and we don’t have to do any mental counting. Starting back at the beginning of the file, we could type /ia, and we’re already there. The / starts the search, and with every character typed after that, the cursor moves to the first matching string of characters that were typed so far. We only need ‘ia’ to get to ‘iance’, and we can hit <Enter> to end the search. We’re now ready to go into Insert Mode and make the change.


Search for 'ia' in gvim

Notice how two instances of ‘ia’ have been highlighted. To get to the second one and make the same change, we can type n to move to the next matching instance of the most recent search string. Now we’re starting to pick up some speed. If the last change you made was deleting ‘iance’ from the first ‘variance’, then you can type . to make the same change to the next ‘variance’ once the cursor is in the right place.

The search and next match functions also have opposites. To search backwards, use ?, and to move to the previous match, use N. If you combine the two by searching with ? and moving to the previous match with N, then the cursor will actually move forward in the buffer when moving to the previous match. Most of the time, I use / and n, but the variants can come in handy sometimes.

For the last set of movement commands, we’ll combine moving and entering Insert Mode. Notice that the last couple lines print out calculations of the mean and standard deviation of some data, but we haven’t tried the variance method directly. Let’s quickly add a line to do that. First, type 22gg to get to line 22. Then type o to open a line below line 22 with the insertion point at the beginning of the new line.


Open a new line in gvim

Finally, you can type p Statistics.var(data), remember to hit <Esc>, and you’re done. The open new line command has an opposite as well. The O command opens a new line above the line with the cursor. At this point you may be asking yourself if i also has a capital I version, and of course, the answer is yes. The I command is not the opposite of i, though. It enters Insert Mode with the insertion point before the first non-whitespace character on the line that the cursor is on instead of immediately before the character that the cursor is on. The opposite command of i is actually a, for append, and a puts the insertion point just after the character that the cursor is on. To round things out, a capital A puts the insertion point at the end of the current line. So there are plenty of ways to enter Insert Mode, with each one coming in handy in certain situations.

It may seem a bit overwhelming at this point, but with a little practice, these keystrokes become second nature. Many of them make sense from the letters that were picked to be associated with commands, and making commands orthogonal allows for some great flexibility. To sum up, in this post we’ve covered the following new commands:

  • h, j, k, l – Move left, down, up, and right
  • gg, G, <n>G or <n>gg – Move to first, last, or specified line
  • w, W, b, B – Move forward or backward a word
  • ^, $ – Move to beginning or end of the line
  • /, ? – Search forward or backward
  • n, N – Move to next or previous match
  • . – Repeat last edit
  • o, O – Open a line in Insert Mode after or before the current line
  • a, A – Append after the current character or at the end of the current line
  • I – Insert at beginning of the current line

After practicing these movements, you’ll be able to fly around your code files with ease. Keep practicing, and next time we’ll cover many of the fast ways we can modify text in Vim.


The 5 Best Free FTP Clients

Transferring files to and from your web host or server is best done with what’s commonly known as an FTP client, though the term is a bit dated because there are more secure alternatives such as SFTP and FTPS.

When I was putting together this list, these were my criteria:

  • Supports secure file transfer protocols: FTP isn’t secure. Among its many flaws, plain FTP doesn’t encrypt the data you’re transferring. If your data is compromised en route to its destination, your credentials (username and password) and your data can easily be read. SFTP (which stands for SSH File Transfer Protocol) is a popular secure alternative, but there are many others.
  • Has a GUI: There are some awesome FTP clients with a command-line interface, but for a great number of people, a graphical user interface is more approachable and easier to use.

Topping the list is FileZilla, an open source FTP client. It’s fast, being able to handle simultaneous transmissions (multi-threaded transfers), and supports SFTP and FTPS (which stands for FTP over SSL). What’s more, it’s available on all operating systems, so if you work on multiple computers — like if you’re forced to use Windows at work but you have a Mac at home — you don’t need to use a different application for your file-transferring needs.


Available on Windows, Mac OS and Linux

Cyberduck can take care of a ton of your file-transferring needs: SFTP, WebDav, Amazon S3, and more. It has a minimalist UI, which makes it super easy to use.


Available on Windows and Mac OS

This Mozilla Firefox add-on gives you a very capable FTP/SFTP client right within your browser. It’s available on all platforms that can run Firefox.


Available on Windows, Mac OS and Linux

Classic FTP is a file transfer client that’s free for non-commercial use. It has a very simple interface, which is a good thing, because it makes it easy and intuitive to use. I like its “Compare Directories” feature that’s helpful for seeing differences between your local and remote files.

Classic FTP

Available on Windows and Mac OS

This popular FTP client has a very long list of features, and if you’re a Windows user, it’s certainly worth a look. WinSCP can deal with multiple file-transfer protocols (SFTP, SCP, FTP, and WebDav). It has a built-in text editor for making quick text edits more convenient, and has scripting support for power users.


Available on Windows

Honorable Mention: Transmit

For this post, I decided to focus on free software. But it just doesn’t seem right to leave out Transmit (which costs $34) in a post about FTP clients because it’s a popular option used by web developers on Mac OS. It has a lot of innovative features and its user-friendliness is unmatched. If you’ve got the cash to spare and you’re on a Mac, this might be your best option.


Available on Mac OS

Which FTP client do you use?

There are plenty of FTP clients out there. If your favorite FTP client isn’t on the list, please mention it in the comments for the benefit of other readers. And if you’ve used any of the FTP clients mentioned here, please do share your thoughts about them too.

Jacob Gube is the founder of Six Revisions. He’s a front-end developer. Connect with him on Twitter and Facebook.



12 Brackets Extensions That Will Make Your Life Easier

Brackets is a great source code editor for web designers and front-end web developers. It has a ton of useful features out of the box. You can make your Brackets experience even better by using extensions.

These Brackets extensions will help make your web design and front-end web development workflow a little easier.

Quickly see the current level of browser support a certain web technology has without leaving Brackets. This extension sources its data from Can I use.


HTML Skeleton helps you set up your HTML files quickly by automatically inserting basic markup such as the doctype declaration, <html>, <head>, <body>, etc.

HTML Skeleton

Related: A Generic HTML5 Template

Rapidly mark up a list of text into list items (<li>), table rows (<tr>), hyperlinks (<a>), and more with HTML Wrapper.

HTML Wrapper

This is a super simple extension that adds file icons in Brackets’s sidebar. The icons are excellent visual cues that make it much easier to identify the file you’d like to work on.

Brackets Icons

Automatically and intelligently add vendor prefixes to your CSS properties with the Autoprefixer extension. It uses browser support data from Can I use to decide whether or not a vendor prefix is needed. It’ll also remove unnecessary vendor prefixes.


This extension will remove unneeded characters from your JavaScript and CSS files. This process is called minification, and it can improve your website’s speed.

JS CSS Minifier

This extension highlights CSS errors and code-quality issues. The errors and warnings reported by this extension are based on CSS Lint rules.


Emmet is a collection of tools and keyboard shortcuts that can speed up HTML- and CSS-authoring.


Need some text to fill up your design prototype? The Lorem Ipsum Generator extension helps you conveniently generate dummy text. (And if you need image placeholders, have a look at Lorem Pixel or DEVimg.)

Lorem Ipsum Generator

This extension will help keep your HTML, CSS, and JavaScript code consistently formatted, indented, and — most importantly — readable. An alternative option to check out is the CSSComb extension.


Make sure you don’t forget your project tasks by using the Simple To-Do extension, which allows you to create and manage to-do lists for each project within Brackets.

Simple To-Do

Transferring and syncing your project’s files to your web host or server requires FTP or SFTP, but such a fundamental web development feature doesn’t come with Brackets. To remedy the situation, use the eqFTP extension, an FTP/SFTP client that you can operate from within Brackets.


How to Install Brackets Extensions

The quickest way to install Brackets extensions is by using the Extension Manager — access it by choosing File > Extension Manager in Brackets’s toolbar.

Brackets Extension Manager

If I didn’t mention your favorite Brackets extension, please talk about it in the comments.




This was published on Mar 28, 2016



7 Free UX E-Books Worth Reading

The best designers are lifelong students. While nothing beats experience in the field, the amount of helpful online resources certainly helps keep our knowledge sharp.

In this post, I’ve rounded up some useful e-books that provide excellent UX advice and insights.

This is a free e-book by usability consultancy firm Userfocus. The best part of this book is its casual tone. Acronyms like “the CRAP way to usability” and The Beatles analogies make remembering the book’s lessons a lot easier, and makes for an interesting read. That’s why this book is one of my favorites.

50 User Experience Best Practices

As the book’s title implies, 50 User Experience Best Practices delivers UX tips and best practices. It delves into subjects such as user research and content strategy. One of the secrets to this book’s success is its creative and easy-to-comprehend visuals. This e-book was written and published by the now-defunct UX design agency, Above the Fold.

UX Design Trends Bundle

Over at UXPin, my team and I have written and published a lot of free e-books. For this post, I’d like to specifically highlight our UX Design Trends Bundle. It’s a compilation of three of our e-books: Web Design Trends 2016, UX Design Trends 2015 & 2016, and Mobile UI Design Trends 2015 & 2016. Totaling 350+ pages, this bundle examines over 300 excellent designs.

UX Storytellers: Connecting the Dots

Published in 2009, UX Storytellers: Connecting the Dots, continues to be a very insightful read. This classic e-book stays relevant because of its unique format: It collects stand-alone stories and advice from 42 UX professionals. At 586 pages, there’s a ton of content in this book. Download it now to learn about the struggles — and solutions — UX professionals can expect to face.

The UX Reader

This e-book covers all the important components of the UX design process. It’s full of valuable insights, making it appealing to both beginners and veterans alike. The book is divided into five categories: Collaboration, Research, Design, Development, and Refinement. Each category contains a series of articles written by different members of MailChimp’s UX team.

Learn from Great Design

Only a portion of this book, 57 pages, is free.

In this e-book, web designer Tom Kenny does in-depth analyses of great web designs, pointing out what they’re doing right, and also what they could do better. For those that learn best by looking at real-world examples, this book is a great read.

The full version of this e-book contains 20 case studies; the free sample only has 3 of those case studies.

The Practical Interaction Design Bundle

I’ll end this list with another UXPin selection. This bundle contains three of our IxD e-books: Interaction Design Best Practices Volume 1 and Volume 2, as well as Consistency in UI Design.

  • Interaction Design Best Practices Volume 1 covers the “tangibles” — visuals, words, and space — and explains how to implement signifiers, how to construct a visual hierarchy, and how to make interactions feel like real conversations.
  • Interaction Design Best Practices Volume 2 covers the “intangibles” — time, responsiveness, and behavior — and covers topics from animation to enjoyment.
  • Consistency in UI Design explains the role that consistency plays in learnability, reducing friction, and drawing attention to certain elements.

Altogether, the bundle includes 250 pages of best practices and 60 design examples.

Did I leave out your favorite UX e-book? Let me know in the comments.

About the Author

Jerry Cao is a content strategist at UXPin. In the past few years, he’s worked on improving website experiences through better content, design, and information architecture (IA). Join him on Twitter: @jerrycao_uxpin.


This was published on Mar 21, 2016



(Over)using with in Elixir 1.2

Elixir 1.2 introduced a new expression type, with. It’s so new that the syntax highlighter I use in this blog doesn’t know about it.

with is a bit like let in other functional languages, in that it defines a local scope for variables. This means you can write something like

owner = "Jill"
with name  = "/etc/passwd",
     stat  = File.stat!(name),
     owner = stat.uid,
do:  IO.puts "#{name} is owned by user ##{owner}"
IO.puts "And #{owner} is still Jill"

The with expression has two parts. The first is a list of expressions; the second is a do block. The initial expressions are evaluated in turn, and then the code in the do block is evaluated. Any variables introduced inside a with are local to that with. In the case of the example code, this means that the line owner = stat.uid will create a new variable, and not change the binding of the variable of the same name in the outer scope.

On its own, this is a big win, as it lets us break apart complex function call sequences that aren’t amenable to a pipeline. Basically, we get temporary variables. And this makes reading code a lot more fun.

For example, here’s some code I wrote a year ago. It handles the command-line options for the Earmark markdown parser:

defp parse_args(argv) do
  switches = [ help: :boolean, version: :boolean ]
  aliases  = [ h: :help, v: :version ]
  parse    = OptionParser.parse(argv, switches: switches, aliases: aliases)
  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Quick! Scan this and decide how many times the switches variable is used in the function. You have to stop and parse the code to find out. And given the ugly case expression at the end, that isn’t trivial.

Here’s how I’d have written this code this morning:

defp parse_args(argv) do
  parse = with switches = [ help: :boolean, version: :boolean ],
               aliases  = [ h: :help, v: :version ],
          do:  OptionParser.parse(argv, switches: switches, aliases: aliases)
  case parse do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Now the scope of the switches and aliases is explicit—we know they can’t be used in the case.

There’s still the parse variable, though. We could handle this with a nested with, but that would probably make our function harder to read. Instead, I think I’d refactor this into two helper functions:

defp parse_args(argv) do
  argv
  |> parse_into_options
  |> options_to_values
end

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do:  OptionParser.parse(argv, switches: switches, aliases: aliases)
end

defp options_to_values(options) do
  case options do
    { [ {switch, true} ], _, _ } -> switch
    { _, [ filename ], _ }       -> open_file(filename)
    { _, [ ], _ }                -> :stdio
    _                            -> :help
  end
end

Much better: easier to read, easier to test, and easier to change.

Now, at this point you might be wondering why I left the with expression in the parse_into_options function. A good question, and one I’ll try to answer after looking at the second use of with.

with and Pattern Matching

The previous section parsed command line arguments. Let’s change it up (slightly) and look at validating options passed between functions.

I’m in the middle of writing an Elixir interface to GitLab, the open source GitHub contender. It’s a simple but wide JSON REST API, with dozens, if not hundreds of available calls. And most of these calls take a set of named parameters, some required and some optional. For example, the API to create a user has four required parameters (email, name, password, and username) along with a bunch of optional ones (bio, Skype and Twitter handles, and so on).

I wanted my interface code to validate that the parameters passed to it met the GitLab API spec, so I wrote a simple option checking library. Here’s some idea of how it could be used:

@create_options_spec %{
  required: [ :email, :name, :password, :username ],
  optional: [ :admin, :bio, :can_create_group, :confirm, :extern_uid,
              :linkedin, :projects_limit, :provider, :skype, :twitter,
              :website_url ]
}

def create_user(options) do
  { :ok, full_options } = Options.check(options, @create_options_spec)"users", full_options)
end

The options specification is a Map with two keys, :required and :optional. We pass it to Options.check, which validates that the options passed to the API contain all required values and that any additional values are in the optional set.

Here’s a first implementation of the option checker:

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |>,
  do:  if opts_required(keys, spec) == :ok && opts_optional(keys, spec) == :ok do
         { :ok, given }
       else
         :error
       end
end

We extract the keys from the options we are given, then call two helper methods to verify that all required values are there and that any other keys are in the optional list. These both return :ok if their checks pass, {:error, msg} otherwise.

Although this code works, we sacrificed the error messages to keep it compact. If either checking function fails to return :ok, we bail and return :error.

This is where with shines. In the list of expressions between the with and the do we can use <-, the new conditional pattern match operator.

def check(given, spec) when is_list(given) do
  with keys = given |> Dict.keys |>,
       :ok  <- opts_required(keys, spec),
       :ok  <- opts_optional(keys, spec),
  do:  { :ok, given }
end

The <- operator does a pattern match, just like =. If the match succeeds, then the effect of the two is identical—variables on the left are bound to values if necessary, and execution continues.

= and <- diverge if the match fails. The = operator will raise an exception. But <- does something sneaky: it terminates the execution of the with expression, but doesn’t raise an exception. Instead, the with returns the value that couldn’t be matched.

In our option checker, this means that if both the required and optional checks return :ok, we fall through and the with returns the {:ok, given} tuple.

But if either fails, it will return {:error, msg}. As the <- operator won’t match, the with clause will exit early. Its value will be the error tuple, and so that’s what the function returns.

The Point, Labored

The new with expression gives you two great features in one tidy package: lexical scoping and early exit on failure.

It makes your code better.

Use it.

A lot.

Here’s Where I Differ with José

Johnny Winn interviewed José for the Elixir Fountain podcast a few weeks ago.

The discussion turned to the new features of Elixir 1.2, and José described with. At the end, he somewhat downplayed it, saying you rarely needed it, but when you did it was invaluable. He mentioned that there were perhaps just a couple of times it was used in the Elixir source.

I think that with is more than that. You rarely need it, but you’d often benefit from using it. In fact, I’m experimenting with using it every time I create a function-level local variable.

What I’m finding is that this discipline drives me to create simpler, single-purpose functions. If I have a function where I can’t easily encapsulate a local within a with, then I spend a moment thinking about splitting it into two. And that split almost always improves my code.

So that’s why I left the with in the parse_into_options function earlier.

defp parse_into_options(argv) do
  with switches = [ help: :boolean, version: :boolean ],
       aliases  = [ h: :help, v: :version ],
  do:  OptionParser.parse(argv, switches: switches, aliases: aliases)
end

It isn’t needed, but I like the way it delineates the two parts of the function, making it clear what is incidental and what is core. In my head, it has a narrative structure that simple linear code lacks.

This is just unfounded opinion. But you might want to experiment with the technique for a few weeks to see how it works for you.

Two is Too Many

There is a key rule that I personally operate by when I’m doing incremental development and design, which I call “two is too many.” It’s how I implement the “be only as generic as you need to be” rule from the Three Flaws of Software Design.

Essentially, I know exactly how generic my code needs to be by noticing that I’m tempted to cut and paste some code, and then instead of cutting and pasting it, designing a generic solution that meets just those two specific needs. I do this as soon as I’m tempted to have two implementations of something.

For example, let’s say I was designing an audio decoder, and at first I only supported WAV files. Then I wanted to add an MP3 parser to the code. There would definitely be common parts to the WAV and MP3 parsing code, and instead of copying and pasting any of it, I would immediately make a superclass or utility library that did only what I needed for those two implementations.

The key aspect of this is that I did it right away—I didn’t allow there to be two competing implementations; I immediately made one generic solution. The next important aspect of this is that I didn’t make it too generic—the solution only supports WAV and MP3 and doesn’t expect other formats in any way.
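To make the WAV/MP3 example concrete, here is a minimal Ruby sketch (the class names and the trivial decode logic are hypothetical stand-ins): the common frame-splitting step is hoisted into a base class the moment the second format appears, and the base class supports exactly these two formats, nothing more.

```ruby
# Shared behavior both decoders need: split a byte stream into frames
# and decode each frame. Only what WAV and MP3 require lives here.
class AudioDecoder
  def decode(bytes)
    bytes.each_slice(frame_size).map { |frame| decode_frame(frame) }
  end
end

# Each subclass supplies only its format-specific pieces.
class WavDecoder < AudioDecoder
  def frame_size; 4; end
  def decode_frame(frame); frame.sum; end   # stand-in for real PCM decoding
end

class Mp3Decoder < AudioDecoder
  def frame_size; 2; end
  def decode_frame(frame); frame.max; end   # stand-in for real MP3 decoding
end
```

With this shape, adding real decoding logic changes only the subclasses; for example, WavDecoder.new.decode([1, 2, 3, 4, 5, 6, 7, 8]) splits the input into two 4-byte frames and returns [10, 26].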

Another part of this rule is that a developer should ideally never have to modify one part of the code in a similar or identical way to how they just modified a different part of it. They should not have to “remember” to update Class A when they update Class B. They should not have to know that if Constant X changes, you have to update File Y. In other words, it’s not just two implementations that are bad, but also two locations. It isn’t always possible to implement systems this way, but it’s something to strive for.

If you find yourself in a situation where you have to have two locations for something, make sure that the system fails loudly and visibly when they are not “in sync.” Compilation should fail, a test that always gets run should fail, etc. It should be impossible to let them get out of sync.
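One way to get that loud failure, sketched in Ruby (the module and constant names here are hypothetical): an assertion that runs with every test suite, so the two locations can never silently drift apart.

```ruby
# Two locations that must agree -- say, a wire-protocol version that both
# a client and a server module define independently.
module Client
  PROTOCOL_VERSION = 3
end

module Server
  PROTOCOL_VERSION = 3
end

# Run on every build: fails loudly the moment the two constants diverge.
def check_in_sync!
  unless Client::PROTOCOL_VERSION == Server::PROTOCOL_VERSION
    raise "Client (#{Client::PROTOCOL_VERSION}) and Server " \
          "(#{Server::PROTOCOL_VERSION}) protocol versions are out of sync"
  end
  true
end
```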

And of course, the simplest part of this rule is the classic “Don’t Repeat Yourself” principle—don’t have two constants that represent the same exact thing, don’t have two functions that do the same exact thing, etc.

There are likely other ways that this rule applies. The general idea is that when you want to have two implementations of a single concept, you should somehow make that into a single implementation instead.

When refactoring, this rule helps find things that could be improved and gives some guidance on how to go about it. When you see duplicate logic in the system, you should attempt to combine those two locations into one. Then if there is another location, combine that one into the new generic system, and proceed in that manner. That is, if there are many different implementations that need to be combined into one, you can do incremental refactoring by combining two implementations at a time, as long as combining them does actually make the system simpler (easier to understand and maintain). Sometimes you have to figure out the best order in which to combine them to make this most efficient, but if you can’t figure that out, don’t worry about it—just combine two at a time and usually you’ll wind up with a single good solution to all the problems.

It’s also important not to combine things when they shouldn’t be combined. There are times when combining two implementations into one would cause more complexity for the system as a whole or violate the Single Responsibility Principle. For example, if your system’s representation of a Car and a Person have some slightly similar code, don’t solve this “problem” by combining them into a single CarPerson class. That’s not likely to decrease complexity, because a CarPerson is actually two different things and should be represented by two separate classes.

This isn’t a hard and fast law of the universe—it’s more of a strong guideline that I use for making judgments about design as I develop incrementally. However, it’s quite useful in refactoring a legacy system, developing a new system, and just generally improving code simplicity.


Immutability, State, and Functions

Let’s start with the obligatory call to authority:

In functional programming, programs are executed by evaluating expressions, in contrast with imperative programming where programs are composed of statements which change global state when executed. Functional programming typically avoids using mutable state.

Well, that seems pretty definitive. “Functional programming typically avoids mutable state.” Seems pretty clearcut.

But it’s wrong.

Explaining why I think that will involve a trip down the path I’ve been exploring over the last year or so, as I have tried to crystallize my thinking on the new styles of programming, and the role of transformation as both a top-down and bottom-up coding and design technique.

Let’s start by thinking about state.

Where Does a Program Keep Its State?

Programs run on computers, and at the lowest level their model of computation is tied to that of the machines on which they execute. Down at that low level, the state of a program is the state of the computer—the values in memory and the values in registers. Some of those registers are used internally by the processor for housekeeping. Perhaps the most important of these is the program counter (PC). You can think of the PC as a pointer to the next instruction to execute.

We can take this up a level. Here’s a simple program:

"Cat"
|> String.downcase   # => "cat"
|> String.codepoints # => [ "c", "a", "t" ]
|> Enum.sort         # => [ "a", "c", "t" ]

The |> notation is syntactic sugar for passing the result of the expression on its left as the first argument of the function on its right. The preceding code is equivalent to

Enum.sort(String.codepoints(String.downcase("Cat")))

Thrilling stuff, eh?

Let’s imagine we’d just finished executing the first line. What is our state?

Somewhere in memory, there’s a data structure representing the string “Cat”. That’s the first part of our state. The second part is the value of the program counter. Logically, it’s pointing to the start of line 2.

Execute one more line. String.downcase is passed the string “Cat”. The result, another string containing “cat”, is stored in a different place in our computer. The PC now points to the start of line 3.

And so it goes. With each step, the state of the computer changes, meaning that the state of our program changes.

State is not immutable.

Is This Splitting Hairs?

Yes and no.

Yes, because no one would argue that the state of a computer is unchanged during the execution of a program.

No, because people still say that immutable state is a characteristic of functional programming. That’s wrong. Worse, that also leads us to model programming wrongly. And that’s what the rest of this post is about.

What Is Immutable?

Let’s get this out of the way first. In a functional program, values are immutable. Look at the following code.

person = get_user_details("Dave")
IO.inspect person
do_something_with(person)
IO.inspect person

Let’s assume that get_user_details returns some structured data, which we dump out to some log file on line two. In a language with immutable values, that data can never be changed. We know that nothing in the function do_something_with can change the data referenced by the person variable, and so the debugging we write on line 4 is guaranteed to be the same as that created on line 2.
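That guarantee can be sketched outside Elixir as well. Here is a minimal Python illustration (the Person type and its fields are invented for the example) of a value that cannot be changed once created:

```python
from dataclasses import dataclass, FrozenInstanceError

# A hypothetical immutable record, standing in for the structured data
# that something like get_user_details might return.
@dataclass(frozen=True)
class Person:
    name: str
    subscription: str

person = Person(name="Dave", subscription="inactive")

# Any attempt to mutate the value raises rather than silently changing it,
# so two inspections of `person` are guaranteed to show the same data.
try:
    person.subscription = "active"
except FrozenInstanceError:
    print("person cannot be modified in place")
```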

If we wanted to change the information for Dave, we’d have to create a copy of Dave’s data:

person1 = change_subscription_status(person, :active)

Now we have the variable person bound to the initial version of Dave’s data, and person1 references the version with a changed subscription status.

If you’ve been using languages with mutable data, at this point you’ll have intuitively created a mental picture where person and person1 reference different chunks of memory. And you might be thinking that this is remarkably inefficient. But in an immutable world, it needn’t be. Because the runtime knows that the original data will never be changed, it can reuse much of it in person1. In principle, you could have a runtime that represented new values as nothing more than a set of changes to be applied to the original.
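A rough Python analogy for that reuse (the field names are invented for the example): building a new mapping from an old one copies only the top-level references, so unchanged sub-structures are shared rather than duplicated.

```python
# A hypothetical user record; `history` stands in for some large sub-structure.
person = {
    "name": "Dave",
    "subscription": "inactive",
    "history": ["signup", "login"],
}

# "Change" the subscription by building a new top-level record.
person1 = {**person, "subscription": "active"}

# Only the one changed entry differs; the large unchanged part is the
# very same object in memory, not a copy.
print(person1["history"] is person["history"])  # True
print(person["subscription"], person1["subscription"])  # inactive active
```

Unlike a truly persistent data structure, the shared Python list is still mutable, so this is only an analogy for the sharing, not for the immutability guarantee itself.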

Anyway, back to state.

person = get_user_details("Dave")
IO.inspect person
person1 = change_subscription_status(person, :active)
IO.inspect person1

Let’s represent the state using a tuple containing the pseudo program counter and the values bound to variables.

Line   person   person1
  2    value1
  3    value1
  4    value1   value2
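That table can be read as a sequence of distinct, individually immutable states. A small Python sketch of that model (the instruction list is invented for the example):

```python
# Model program state as an immutable (pc, bindings) pair. Each "instruction"
# is a pure function from old bindings to new bindings; executing a step
# produces a brand-new state and leaves every earlier state untouched.
def step(state, instr):
    pc, env = state
    return (pc + 1, instr(env))

program = [
    lambda env: {**env, "person": "value1"},   # person = get_user_details(...)
    lambda env: env,                           # IO.inspect person
    lambda env: {**env, "person1": "value2"},  # person1 = change_subscription_status(...)
]

states = [(1, {})]
for instr in program:
    states.append(step(states[-1], instr))

for pc, env in states[1:]:
    print(pc, env)
```

Every value here is only ever constructed, never modified, yet the sequence of states as a whole changes with each step, which is exactly the distinction the post is drawing.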


How to Handle Code Complexity in a Software Company

Here’s an obvious statement that has some subtle consequences:

Only an individual programmer can resolve code complexity.

That is, resolving code complexity requires the attention of an individual person on that code. They can certainly use appropriate tools to make the task easier, but ultimately it’s the application of human intelligence, attention, and work that simplifies code.

So what? Why does this matter? Well, to be clearer:

Resolving code complexity usually requires detailed work at the level of the individual contributor.

If a manager just says “simplify the code!” and leaves it at that, usually nothing happens, because (a) they’re not being specific enough, (b) they don’t necessarily have the knowledge required about each individual piece of code in order to be that specific, and (c) part of understanding the problem is actually going through the process of solving it, and the manager isn’t the person writing the solution.

The higher a manager’s level in the company, the more true this is. When a CTO, Vice President, or Engineering Director gives an instruction like “improve code quality” but doesn’t get much more specific than that, what tends to happen is that a lot of motion occurs in the company but the codebase doesn’t significantly improve.

It’s very tempting, if you’re a software engineering manager, to propose broad, sweeping solutions to problems that affect large areas. The problem with that approach to code complexity is that the problem is usually composed of many different small projects that require detailed work from individual programmers. So, if you try to handle everything with the same broad solution, that solution won’t fit most of the situations that need to be handled. Your attempt at a broad solution will actually backfire, with software engineers feeling like they did a lot of work but didn’t actually produce a maintainable, simple codebase. (This is a common pattern in software management, and it contributes to the mistaken belief that code complexity is inevitable and nothing can be done about it.)

So what can you do as a manager, if you have a complex codebase and want to resolve its complexity? Well, the trick is to get the data from the individual contributors and then work with them to help them resolve the issues. The sequence goes roughly like this:

  1. Ask each member of your team to write down a list of what frustrates them about the code. The symptoms of code complexity are things like emotional reactions to code, confusion about code, feeling like a piece will break if you touch it, difficulties optimizing, etc. So you want the answers to questions like, “Is there a part of the system that makes you nervous when you modify it?” or “Is there some part of the codebase that frustrates you to work with?”

    Each individual software engineer should write their own list. I wouldn’t recommend implementing some system for collecting the lists—just have people write down the issues for themselves in whatever way is easiest for them. Give them a few days to write this list; they might think of other things over time.

    The list doesn’t just have to be about your own codebase, but can be about any code that the developer has to work with or use.

    You’re looking for symptoms at this point, not causes. Developers can be as general or as specific as they want, for this list.

  2. Call a meeting with your team and have each person bring their list and a computer that they can use to access the codebase. The ideal size for a team meeting like this is about six or seven people, so you might want to break things down into sub-teams.

    In this meeting you want to go over the lists and get the name of a specific directory, file, class, method, or block of code to associate with each symptom. If somebody says something like, “The whole codebase has no unit tests,” you might say, “Tell me about a specific time that that affected you,” and use the response to narrow down which files it’s most important to write unit tests for right away. You also want to be sure that you’re really getting a description of the problem, which might be something more like “It’s difficult to refactor the codebase because I don’t know if I’m breaking other people’s modules.” Then unit tests might be the solution, but you first want to narrow down specifically where the problem lies, as much as possible. (It’s true that almost all code should be unit tested, but if you don’t have any unit tests, you’ll need to start off with some doable task on the subject.)

    In general, the idea here is that only code can actually be fixed, so you have to know what piece of code is the problem. It might be true that there’s a broad problem, but that problem can be broken down into specific problems with specific pieces of code that are affected, one by one.

  3. Using the information from the meeting, file a bug describing the problem (not the solution, just the problem!) for each directory, file, class, etc. that was named. A bug could be as simple as “FrobberFactory is hard to understand.”

    If a solution was suggested during the meeting, you can note that in the bug, but the bug itself should primarily be about the problem.
  4. Now it’s time to prioritize. The first thing to do is to look at which issues affect the largest number of developers the most severely. Those are high priority issues. Usually this part of prioritization is done by somebody who has a broad view over developers in the team or company. Often, this is a manager.

    That said, sometimes issues have an order that they should be resolved in that is not directly related to their severity. For example, Issue X has to be resolved before Issue Y can be resolved, or resolving Issue A would make resolving Issue B easier. This means that Issue A and Issue X should be fixed first even if they’re not as severe as the issues that they block. Often, there’s a chain of issues like this, and the trick is to find the issue at the bottom of the stack. Handling this part of prioritization incorrectly is one of the most common and major mistakes in software design. It may seem like a minor detail, but in fact it is critical to the success of efforts to resolve complexity. The essence of good software design in all situations is taking the right actions in the right sequence. Forcing developers to tackle issues out of sequence (without regard for which problems underlie which other problems) will cause code complexity.

    This part of prioritization is a technical task that is usually best done by the technical lead of the team. Sometimes this is a manager, but other times it’s a senior software engineer.

    Sometimes you don’t really know which issue to tackle first until you’re doing development on one piece of code and you discover that it would be easier to fix a different piece of code first. With that said, if you can determine the ordering up front, it’s good to do so. But if you find that you’d have to get into actually figuring out solutions in order to determine the ordering, just skip it for now.

    Whether you do it up front or during development, it’s important that individual programmers do realize when there is an underlying task to tackle before the one they have been assigned. They must be empowered to switch from their current task to the one that actually blocks them. There is a limit to this (for example, rewriting the whole system into another language just to fix one file is not a good use of time) but generally, “finding the issue at the bottom of the stack” is one of the most important tasks a developer has when doing these sorts of cleanups.

  5. Now you assign each bug to an individual contributor. This is a pretty standard managerial process, and while it definitely involves some detailed work and communication, I would imagine that most software engineering managers are already familiar with how to do it.

    One tricky piece here is that some of the bugs might be about code that isn’t maintained by your team. In that case you’ll have to work appropriately through the organization to get the appropriate team to take responsibility for the issue. It helps here to have buy-in from a manager, higher up the chain, that you have in common with the other team.

    In some organizations, if the other team’s problem is not too complex or detailed, it might also be possible for your team to just make the changes themselves. This is a judgment call that you can make based on what you think is best for overall productivity.

  6. Now that you have all of these bugs filed, you have to figure out when to address them. Generally, the right thing to do is to make sure that developers regularly fix some of the code quality issues that you filed along with their feature work.

    If your team makes plans for a period of time like a quarter or six weeks, you should include some of the code cleanups in every plan. The best way to do this is to have developers first do cleanups that would make their specific feature work easier, and then have them do that feature work. Usually this doesn’t even slow down their feature work overall. (That is, if this is done correctly, developers can usually accomplish the same amount of feature work in a quarter that they could even if they weren’t also doing code cleanups, providing evidence that the code cleanups are already improving productivity.)

    Don’t stop normal feature development entirely to just work on code quality. Instead, make sure that enough code quality work is being done continuously that the codebase’s quality is always improving overall rather than getting worse over time.
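The dependency ordering described in step 4, where the issue at the bottom of the stack gets fixed first, can be computed mechanically with a topological sort. A minimal Python sketch using the standard library's graphlib (the issue names and dependencies are invented for the example):

```python
from graphlib import TopologicalSorter

# Map each issue to the set of issues that must be resolved before it.
# "Issue X has to be resolved before Issue Y" means X is a prerequisite of Y.
blocked_by = {
    "Y: refactor module safely":    {"X: add unit tests"},
    "B: simplify FrobberFactory":   {"A: untangle config loading"},
    "X: add unit tests":            set(),
    "A: untangle config loading":   set(),
}

# static_order() yields prerequisites before the issues they block,
# i.e. the issues at the bottom of the stack come out first.
work_order = list(TopologicalSorter(blocked_by).static_order())
print(work_order)
```

graphlib also raises CycleError when the dependency graph contains a cycle, which in this setting is a useful signal that two "blocking" issues actually need to be broken down further.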

If you do those things, that should get you well on the road to an actually-improving codebase. There’s actually quite a bit to know about this process in general—perhaps enough for another entire book. However, the above plus some common sense and experience should be enough to make major improvements in the quality of your codebase, and perhaps even improve your life as a software engineer or manager, too.


P.S. If you do find yourself wanting more help on it, I’d be happy to come speak at your company. Just let me know.