Algorithms 101 for Software Applications

In general I feel that deep algorithm knowledge is overrated in the software industry. In the early 90’s, you needed to know about common algorithms because you had to actually implement them yourself. Today in the 00’s and 10’s you mainly need to know which algorithms are appropriate to use, since most common algorithms are already implemented in the standard library of modern programming languages. Therefore I would like to give a summary of the minimum that I think modern coders actually need to know to do their job effectively.

All advice in this article is directed towards those who write applications: primarily rich web applications, desktop applications, and server-side tools. In this context the most advanced type of program you would be asked to write would be a compiler or other program transformation tool.

If on the other hand you work in a low-level environment close to the hardware1, are performing scientific calculations2, or are regularly processing insanely large data sets3, you will need to know more about algorithms than what I cover here.

Collections

In general the main algorithmic work you’ll be doing when writing programs will involve the use of collections included in your language’s standard library. Collections hold values. The algorithms for manipulating these collections will also typically be included in the standard library.

Basic collections

The following kinds of collections are so common as to be fundamental. Practically all programs involve the efficient manipulation of data stored in these collections.

  • List - An ordered sequence of elements.
  • Set - A collection of unique elements; duplicates are disallowed.
  • Map - A structure that associates keys with values.

Sets and Maps in particular come in different flavors depending on what kind of ordering guarantee (if any) they provide:

  • Unordered Set/Map
    • Elements can be in any order.
  • Stable Set/Map
    • Elements appear in the order that they were added.
  • Sorted Set/Map
    • Elements are always sorted via either their natural ordering or a custom comparison function.

Other, more advanced collections exist - such as Bidirectional Maps, Counted Bags, Partitioning Bags, and Priority Queues - but they will not be considered here.4

Basic operations

All collections support operations for adding elements, removing elements, and iterating over elements.5 However they do so at different speeds. It is important to choose the best kind of collection to use based on which operations your program will perform on it most frequently.

Below are the operations that each collection type supports most efficiently, along with the operation’s speed in Big-Oh notation6 when using the best7 implementation available:

  • List
    • Append element to end - O(1)
    • Get element at random index - O(1)
    • Set element at random index to new element - O(1)
    • Iterate over all elements - O(n)
    • Contains a specific element? - O(n)! (consider using a Set instead)
    • Remove first occurrence of an element - O(n)! (consider using a Bag instead)
    • Remove all occurrences of an element - O(n)! (consider using a Set instead)
  • Set
    • Add element (if not already present) - O(1)†
    • Remove element - O(1)†
    • Contains a specific element? - O(1)†
    • Iterate over all elements - O(n)
  • Map
    • Set value associated with key - O(1)†
    • Get value associated with key - O(1)†
    • Contains a specific key? - O(1)†
    • Iterate over all key-value pairs - O(n)

† = Assumes using an Unordered or Stable implementation with a well-designed hash function. If using a Sorted implementation then performance degrades from O(1) to O(log(n)). If using a poor hash function then performance degrades from O(1) to as low as O(n).
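
To make these costs concrete, here is a quick Python sketch (Python’s built-in list, set, and dict correspond to an Array-Based List, a Hash Set, and a hash-based, stable Map):

```python
items_list = ["apple", "banana", "cherry"]
items_set = set(items_list)

# Membership tests: a list scans every element, a set hashes.
print("cherry" in items_list)  # O(n): walks the list
print("cherry" in items_set)   # O(1): hash lookup

# Map operations on a dict.
ages = {"alice": 30, "bob": 25}
ages["carol"] = 41              # O(1): set value for key
print("bob" in ages)            # O(1): contains-key
for name, age in ages.items():  # O(n): iterate over all pairs
    print(name, age)
```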

Common algorithms exist for performing these operations. As mentioned previously it is rare that you will need to code such collection-manipulating algorithms manually unless you are working in a constrained environment. Instead you just need to be a smart shopper when picking a collection implementation to use from your language’s standard library.

Common collection implementations

Generally speaking the collections available in the standard library of a programming language will include one or more of the following specific implementations, possibly using different names:

  • List
    • Array-Based List (recommended)
      • O(1) amortized8 append
      • O(1) random get and set
      • But O(n) insert, remove, and prepend
    • Linked List
      • O(1) prepend
      • O(1) append if tail pointer is tracked, otherwise O(n) append
      • But O(n) random get, set, insert, and remove
      • But causes memory fragmentation
    • Array
      • O(1) random get and set
      • But does not support resizing after construction
  • Set
    • Linked Hash Set (recommended) - a stable set
      • O(1) add, remove, and contains
      • O(n) iteration in consistent (but not sorted) order
    • Hash Set - an unordered set
      • O(1) add, remove, and contains
      • O(n) iteration, but in random order
    • Tree Set - a sorted set
      • O(log(n)) add, remove, and contains
      • O(n) iteration in sorted order
  • Map
    • Set-Based Map (recommended)
      • Uses an underlying set to hold the keys.
      • O(1) get, set, and contains-key for stable and unsorted sets
      • O(log(n)) get, set, and contains-key for sorted sets
      • O(n) iteration and contains-value
    • Association List
      • Is just a list of key-value pairs.
      • O(n) iteration and contains-value
      • But O(n) get, set, and contains-key
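
To make the Association List concrete, here is a toy version in Python (illustrative only; in real code you would use a dict):

```python
class AssociationList:
    """A map implemented as a plain list of (key, value) pairs."""

    def __init__(self):
        self._pairs = []

    def get(self, key):
        for k, v in self._pairs:  # O(n): scans every pair
            if k == key:
                return v
        raise KeyError(key)

    def set(self, key, value):
        for i, (k, _) in enumerate(self._pairs):  # O(n) scan
            if k == key:
                self._pairs[i] = (key, value)
                return
        self._pairs.append((key, value))  # key not found: append

m = AssociationList()
m.set("a", 1)
m.set("b", 2)
print(m.get("b"))  # 2
```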

Here are the implementations available in several common programming languages:

  • Java - ArrayList and LinkedList; HashSet, LinkedHashSet, and TreeSet; HashMap, LinkedHashMap, and TreeMap.
  • Python - list; set (unordered); dict (stable as of Python 3.7).
  • C++ - std::vector and std::list; std::unordered_set and std::set (sorted); std::unordered_map and std::map (sorted).
  • JavaScript - Array; Set (stable); Map (stable).

Sorting

The only advanced operation you commonly need to perform on a collection is to sort it.

The following sorting algorithms are worth knowing about for actual use:

  • Insertion Sort
    • O(n²) average and worst case performance
    • Usually faster in practice than other algorithms for small numbers of elements.
  • Merge Sort
    • O(n⋅log(n)) average and worst case performance, optimal for a comparison-based sort.
    • Can be done on disk rather than in RAM, for very large data sets.
  • Quicksort
    • O(n⋅log(n)) average case performance
    • But O(n²) worst case performance
    • But is not a stable sort
  • Heap Sort
    • O(n⋅log(n)) average and worst case performance, optimal for a comparison-based sort.
    • But is not a stable sort
    • But is complex to implement since it involves heaps.

It’s also worth knowing that there are even faster sort algorithms that are not based on comparisons. For example:

  • Radix Sort
    • O(w⋅n) average and worst case performance, where w is the width of the sort keys (e.g. the number of digits), which is usually a constant

In interviews, knowing about and being able to implement merge sort, quicksort, and insertion sort has always been sufficient for me.
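
For practice, here is a minimal Merge Sort sketch in Python (for interviews only; in production code, call the library sort):

```python
def merge_sort(items):
    if len(items) <= 1:  # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # one of these is non-empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```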

Standard libraries vary in what specific sorting algorithms they offer:

  • Python uses Timsort, a hybrid of Merge Sort and Insertion Sort.
  • Java previously used either Merge Sort or Insertion Sort depending on the size of the input list but later switched to Timsort.
  • Perl previously used Quicksort but later switched to Merge Sort.
  • C++ does not mandate a particular algorithm but does specify that whatever algorithm is provided will have worst case performance of O(n⋅log(n)) or better.
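
In everyday code you simply call into whichever of these the library provides. For example, in Python:

```python
people = [("alice", 30), ("bob", 25), ("carol", 30)]

# sorted() uses Timsort, which is stable: alice stays ahead of
# carol because they compare equal on age.
by_age = sorted(people, key=lambda p: p[1])
print(by_age)  # [('bob', 25), ('alice', 30), ('carol', 30)]

# list.sort() sorts in place; reverse=True for descending order.
people.sort(key=lambda p: p[1], reverse=True)
print(people)  # [('alice', 30), ('carol', 30), ('bob', 25)]
```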

Trees

Beyond collections, it’s useful to know how to implement tree data structures - particularly N-Ary Trees - and how to traverse and manipulate them.

N-Ary Trees are made of Nodes, starting at a root node, where each node contains a list of children nodes. A node with no children is a leaf. A node can be related to another node as a child, parent, descendant, ancestor, or cousin. A node may or may not track who its parent is, depending on how it is implemented.

Trees can be traversed in either Depth-First Order or Breadth-First Order. Most programs use Depth-First Order because it is easy to implement via recursion. Breadth-First Order traversal may be useful, however, in writing algorithms related to the shortest path from the tree root.
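
Here is a minimal N-Ary Tree sketch in Python showing both traversal orders:

```python
from collections import deque

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []  # no children = leaf

def depth_first(node):
    """Visit a node, then recurse into each child in turn."""
    print(node.value)
    for child in node.children:
        depth_first(child)

def breadth_first(root):
    """Visit nodes level by level using a queue."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        print(node.value)
        queue.extend(node.children)

tree = Node("root", [Node("a", [Node("a1"), Node("a2")]),
                     Node("b")])
depth_first(tree)    # root, a, a1, a2, b
breadth_first(tree)  # root, a, b, a1, a2
```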

In interviews, I’ve found the ability to improvise new tree-related algorithms useful - for example, an algorithm to find the nearest common ancestor.
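
For example, here is one improvised solution for nearest common ancestor, assuming nodes that track their parents (a sketch of one possible approach, not the definitive one):

```python
class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent

def nearest_common_ancestor(a, b):
    ancestors = set()
    while a is not None:  # collect a and all of its ancestors
        ancestors.add(a)
        a = a.parent
    while b is not None:  # first hit walking up from b wins
        if b in ancestors:
            return b
        b = b.parent
    return None           # the nodes are from different trees

root = Node("root")
a = Node("a", parent=root)
a1 = Node("a1", parent=a)
b = Node("b", parent=root)
print(nearest_common_ancestor(a1, b).value)  # root
```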

Graphs

It is rare that you will need to use graphs in real-world application development except for the most demanding kinds of applications, such as compilers and program transformation tools. Therefore I will only cover them briefly here.

A Graph is made of Nodes (sometimes called “vertices”) and Edges. Edges connect pairs of nodes together. Beyond that:

  • In a “directed” graph (or “digraph”), each edge has a head and tail. By contrast in an “undirected” graph, each edge is unoriented and just joins two nodes together.

  • Some graphs are “weighted”, meaning that nodes and/or edges are annotated with numerical weights.

  • Some graphs are “multi-graphs”, meaning that they explicitly allow multiple edges between the same pair of nodes.

Graphs are useful for representing and manipulating connected structures, such as the intersections and roads on a map, the statements and their successor statements in source code, and the squares on a game board together with their neighbors.
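
In application code a graph often doesn’t need a dedicated class; a dictionary of adjacency sets is frequently enough. A Python sketch:

```python
# An undirected graph of map intersections: each node maps to the
# set of its neighbors. A digraph would store only outgoing edges.
roads = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
print(roads["C"])  # the intersections adjacent to C
```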

Algorithms for working with graphs are similar to those for working with trees but are more complicated, since each node in a graph can typically be reached along multiple paths.

Knowing how to traverse a graph in both Depth-First Order and Breadth-First Order without visiting any node twice is extremely useful, both in practice and for interviews. For example, how could you implement the paint bucket tool in a drawing program, coloring in a region of neighboring pixels without coloring in any pixel twice?
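
One possible answer, sketched in Python as a breadth-first flood fill (the pixel grid acts as the graph, and edges connect neighboring pixels of the same original color):

```python
from collections import deque

def flood_fill(image, x, y, new_color):
    old_color = image[y][x]
    if old_color == new_color:
        return  # nothing to do, and avoids an infinite loop
    queue = deque([(x, y)])
    while queue:
        px, py = queue.popleft()
        if not (0 <= py < len(image) and 0 <= px < len(image[0])):
            continue  # off the edge of the image
        if image[py][px] != old_color:
            continue  # wrong color, or already filled
        image[py][px] = new_color  # recoloring doubles as "visited"
        queue.extend([(px + 1, py), (px - 1, py),
                      (px, py + 1), (px, py - 1)])

image = [[0, 0, 1],
         [0, 1, 1],
         [0, 0, 0]]
flood_fill(image, 0, 0, 9)
print(image)  # [[9, 9, 1], [9, 1, 1], [9, 9, 9]]
```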

Odds and Ends

There are a few other miscellaneous concerns worth mentioning:

  • Strings in many languages (Java, Python, JavaScript) are immutable. Therefore appending to a string takes O(n) time. By itself this is not a problem; however, you need to be careful not to repeatedly append to a string inside a loop, because then your runtime will be closer to O(n²). Instead, append the string parts to a temporary list and then join the list together afterwards using the appropriate standard library function. That will get you back to O(n) time. (See the first sketch after this list.)

  • Implementing binary search in a list correctly is a lot harder than you think. The first couple of attempts in the academic literature actually had flawed algorithms. (A sketch follows this list.)

  • Algorithms involving array rearrangement and index manipulation are good to practice for interviews, since some interviewers ask for low-level tricky algorithms of this sort. For example the Dutch National Flag problem is a classic. (See the final sketch below.)
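
Here is the string-building pitfall and its fix, sketched in Python:

```python
records = ["alpha", "beta", "gamma"]

# The trap: each += copies the whole string built so far,
# so the loop's total work approaches O(n²).
report = ""
for record in records:
    report += record

# The O(n) fix: collect parts in a list, then join once.
parts = []
for record in records:
    parts.append(record)  # O(1) amortized per append
report = "".join(parts)   # one final O(n) pass
print(report)             # alphabetagamma
```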
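
And a binary search sketch, with the details that are famously easy to get wrong called out in comments:

```python
def binary_search(items, target):
    """Return an index of target in sorted items, or -1 if absent."""
    low, high = 0, len(items) - 1  # inclusive bounds
    while low <= high:  # <=, not <: a one-element range still counts
        # In fixed-width languages, write low + (high - low) / 2
        # instead, to avoid integer overflow.
        mid = (low + high) // 2
        if items[mid] < target:
            low = mid + 1   # the +1 is what guarantees termination
        elif items[mid] > target:
            high = mid - 1
        else:
            return mid
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```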
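
Finally, a sketch of the Dutch National Flag problem itself: partition a list of 0s, 1s, and 2s in place, in a single pass:

```python
def dutch_flag_sort(items):
    """Partition a list of 0s, 1s, and 2s in place in one pass."""
    low, mid, high = 0, 0, len(items) - 1
    while mid <= high:
        if items[mid] == 0:    # swap 0s down to the front
            items[low], items[mid] = items[mid], items[low]
            low += 1
            mid += 1
        elif items[mid] == 2:  # swap 2s up to the back
            items[mid], items[high] = items[high], items[mid]
            high -= 1  # don't advance mid: swapped-in value is unexamined
        else:                  # a 1: already in the middle region
            mid += 1

values = [2, 0, 1, 2, 1, 0]
dutch_flag_sort(values)
print(values)  # [0, 0, 1, 1, 2, 2]
```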

Series

This article is part of the Programming for Perfectionists series.


  1. Low-level environments typically lack rich standard libraries. So you’re back to implementing linked lists and friends by hand.

  2. Scientific calculations involve lots of floats, which have a host of additional considerations to worry about - chiefly float imprecision and numerical stability. The data sets involved are also typically very large.

  3. Web services that have the popularity of Facebook, Amazon, Apple, Microsoft, or Google are dealing with unbelievable data volumes that merit using specialized algorithms. Other websites, although perhaps still popular, don’t receive anywhere near the same level of traffic.

  4. Further, other useful non-collection data structures exist, such as Trees, Directed Graphs, and Undirected Graphs; these are covered briefly later in this article.

  5. The runtimes quoted here are for operations that are performed in-place, modifying the original collection value. If you require operations that efficiently return modified copies of the original collection, look into persistent data structures, such as those used by Rich Hickey in the Clojure standard library.

  6. In Big-Oh notation, O(1) is a constant amount of time, O(log(n)) is time proportional to the order of magnitude of the collection’s size, and O(n) is time proportional to the size of the collection. Other common runtimes include O(n⋅log(n)) and O(n²).

  7. Considerably worse performance can result if you use a poor implementation. For example if you attempt to append (not prepend) an element to a list in a functional language that uses the default singly-linked list implementation, it will take O(n) time rather than the usual O(1) amortized time.

  8. O(1) amortized time means that in aggregate the per-operation time cost will average out to O(1). However individual operations may take O(1) in the average case or O(n) in the worst case.