Are linked lists slower or faster than arrays?

Posted on 08/03/2019 by Ken Gregg

The answer depends on what you’re doing to the data structure.

A data structure itself isn’t inherently slower or faster than another data structure. What you can compare are the specific operations you perform on a data structure (i.e., the algorithms associated with it) against similar operations on other data structures.

Some operations are more efficient for an array than for a linked list, and some operations are more efficient for a linked list than for an array.

The question in the title of this post doesn’t mention whether we’re dealing with a singly-linked list or a doubly-linked list. That subtle difference can alter the efficiency of some operations. For simplicity, we’ll assume we’re dealing with a singly-linked list in the remainder of this discussion.

Let’s also assume that an array is implemented as one contiguous chunk of memory, exactly large enough to hold the current number of items in the array. This is not always the case in all environments, but it does match many common implementations.

For a conceptually linear data structure, like a linked list or an array, several common operations are typically defined:

  1. From the head (beginning) of the list:
    1. Access an existing item (read or alter the contents)
    2. Add a new item
    3. Remove an existing item
  2. From the tail (end) of the list:
    1. Access an existing item (read or alter the contents)
    2. Add a new item
    3. Remove an existing item
  3. From an arbitrary position (not the head or tail) within the list:
    1. Access an existing item (read or alter the contents)
    2. Add a new item
    3. Remove an existing item
  4. Sort the entire list by some criterion contained in the data items.

As a first example, let’s take a look at adding a new item to the head of the list (operation 1.2 above):

  • For a linked list, this operation requires constant time, O(1), typically involving the manipulation of a few pointers. The operation is independent of the number of items already in the list.
  • For an array, this operation typically requires reallocating a new array to accommodate the extra item (which might involve a round trip to the OS to get more memory), storing the new item in the first position of the new array, and copying all the items from the original array to the new array. This is an O(N) operation, which depends on how many items N are in the array.

In this case, this specific operation on a linked list is far more efficient than the same operation on an array, as N grows large. This is one example of an operation that demonstrates the scalability of a linked list.
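
To make this concrete, here is a minimal sketch in C. The `Node` type and the function names `push_front` and `array_push_front` are purely illustrative assumptions, not from any particular library; the array version follows the “exactly sized, contiguous” assumption made earlier.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical singly-linked list node, for illustration only. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Linked list: O(1). Allocate one node and point it at the old head.
 * (Error checking omitted for brevity.) */
Node *push_front(Node *head, int value)
{
    Node *node = malloc(sizeof *node);
    node->value = value;
    node->next = head;
    return node;            /* node is the new head */
}

/* Array: O(N). Under the "exactly sized" assumption above, we must
 * allocate a new block one slot larger, store the new item first,
 * and copy all N existing items over. */
int *array_push_front(int *arr, size_t n, int value)
{
    int *bigger = malloc((n + 1) * sizeof *bigger);
    bigger[0] = value;
    if (n > 0)
        memcpy(bigger + 1, arr, n * sizeof *arr);
    free(arr);
    return bigger;
}
```

The list version does a fixed amount of pointer work no matter how long the list is; the array version’s `memcpy` touches every existing item.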

As a second example, consider just reading one arbitrary item from the list, given its position, or index, within the list. We don’t know if the position index will be at the head or tail of the list, so assume it’s just some arbitrary location within the list (operation 3.1 above):

  • For a linked list, this operation requires that we start at the head of the list and walk through it sequentially until we reach the item at the desired position. (There are ways to speed this up, but then the data structure may cease to be a singly-linked list.) So, accessing an arbitrary item within a linked list is an O(N) operation. The worst case occurs when the item we’re looking for is at the tail.
  • For an array, this operation takes constant time, O(1), because all that needs to happen is the conversion of the position index into an offset from the beginning of the array. In other words, one address calculation is all that’s required to get to a specific item of the array.

In this case, this specific operation on a linked list is far less efficient than the same operation on an array, as N grows large.
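
Again as a rough C sketch, with the same hypothetical `Node` type and illustrative names `list_at` and `array_at`: reaching position `index` in the list means following that many `next` pointers, while indexing the array is a single address calculation.

```c
#include <stddef.h>

/* Hypothetical singly-linked list node, for illustration only. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Linked list: O(N). Starting at the head, follow `next` pointers
 * until we reach the desired position. Returns NULL if the list has
 * fewer than index + 1 items. */
Node *list_at(Node *head, size_t index)
{
    Node *cur = head;
    while (cur != NULL && index > 0) {
        cur = cur->next;
        index--;
    }
    return cur;
}

/* Array: O(1). One address calculation: base address plus offset. */
int array_at(const int *arr, size_t index)
{
    return arr[index];      /* equivalent to *(arr + index) */
}
```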

We’re just looking at two operations here. There are other examples that illustrate cases where a specific operation on one of these data structures is more efficient than the same operation on the other, when the number of items is large.

The point of all this is that data structures, by themselves, aren’t inherently faster or slower, better or worse, than other data structures. It depends on what operations you need to perform on them. If you need a data structure that grows and shrinks over time, without incurring the overhead of reallocation and copying every time you add or remove an item, then a linked list is a better choice than an array. If the number of items is known up front and is fixed over time, then an array is probably the better choice.

algorithm analysis choosing data structures data structure operations efficiency software development software engineering programming data structures and algorithms algorithms and data structures algorithms