
A Lesson In Computational Complexity


What is ‘Computational Complexity’?

Basically, it is a theory describing the scalability of algorithms: “As the size of the input to an algorithm increases, how do the running time and memory requirements of the algorithm change?” Wikipedia has a more complete description. Complexity can be considered in terms of:

  • Time Complexity – the number of steps that it takes to solve an instance of the problem as a function of the size of the input, using the most efficient algorithm.
  • Space Complexity – the amount of space, or memory, required by the algorithm as a function of the size of the input.

Consider an array with n elements, and suppose that solving a problem on this array takes n times n (or n²) steps. You could say the algorithm used has a complexity of n². With different programming languages there may be additional steps in the algorithm necessary to solve the problem, so to describe a general level of complexity across any language, Big Oh notation is used. The time complexity for this algorithm would therefore be written O(n²).

‘O’ basically indicates that we are ignoring any constant, language-dependent factors, allowing complexity to be expressed purely in terms of the size of the input.
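As a rough illustration (my own sketch, not from the original listing), here is a tiny step counter for a nested loop over an array of n elements. The constant work per step varies between languages, but the total always grows as n², which is all that O(n²) claims.

def count_quadratic_steps(nums)
  steps = 0
  nums.each do |a|
    nums.each do |b|
      steps += 1   # one unit of work per (a, b) pair
    end
  end
  steps            # always n * n, whatever the per-step overhead is
end

puts count_quadratic_steps((1..10).to_a)    # => 100
puts count_quadratic_steps((1..100).to_a)   # => 10000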

An Example

Take a simple problem like finding the ‘mode’ average of an array, i.e. the most frequently occurring element in a sequence (let’s say of numbers). You could do this a number of ways. Here is a brute force attempt (in Ruby).

def calculate_mode(nums)
  hi_count = 0
  mode = nums[0]
  # for each candidate element, count how many times it appears
  nums.each do |search_for|
    count = 0
    nums.each do |num|
      count += 1 if search_for == num
    end
    # keep the candidate with the highest count seen so far
    if count > hi_count
      hi_count = count
      mode = search_for
    end
  end
  mode
end
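A quick usage example (the values here are just for illustration):

calculate_mode([3, 1, 3, 7, 3, 1])  # => 3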

It should be obvious that this algorithm has a complexity of O(n²), since every element of the array is compared against every element of the array (n × n comparisons) to find the highest count value (and hence the mode).

Could it be done any faster? If the array of elements were sorted (ordered numerically), the mode could be found by locating the longest run of equal values in the array, which should only take n iterations. We can employ a quicksort first, which sorts (on average) in O(n log n):

def quicksort(a)
  return a if a.size <= 1
  pivot = a[0]
  # partition around the pivot and recursively sort each side
  quicksort(a.select { |i| i < pivot }) +
    a.select { |i| i == pivot } +
    quicksort(a.select { |i| i > pivot })
end
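A quick sanity check (example values are mine): quicksort([3, 1, 2]) # => [1, 2, 3]. Note that always choosing a[0] as the pivot degrades to O(n²) on input that is already sorted, which is why the O(n log n) figure above is an average-case claim.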

def calculate_mode_fast(nums)
  # sort the array first so that equal values sit next to each other
  sorted_nums = quicksort(nums)

  hi_count = 0
  mode = sorted_nums[0]
  count = 1
  idx = 1

  # single pass: track the longest run of consecutive equal values
  sorted_nums.each do |search_for|
    if search_for == sorted_nums[idx]
      count += 1
      if count > hi_count
        hi_count = count
        mode = search_for
      end
    else
      count = 1
    end
    idx += 1 if idx < sorted_nums.length
  end

  mode
end
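Again, a quick check (illustrative values):

calculate_mode_fast([5, 2, 5, 9, 5, 2])  # => 5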

So we could say that this approach has (on average) a complexity of O(n + n log n), which simplifies to O(n log n) and is significantly less than O(n²).
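To get a feel for the difference, here is a back-of-envelope calculation of my own for n = 1000:

n = 1000
puts n * n                         # => 1000000 steps for the brute force approach
puts (n + n * Math.log2(n)).round  # => 10966, roughly, for sort-then-scan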

I’ve posted this code listing, which runs both algorithms to compute the modal average on identical arrays of random integers. You can vary n (the number of elements) to see how each method performs. From the results (below) it’s clear that the second algorithm is more time-efficient in this case.

calculate_mode
====================
mode => 3 (it occurs 44 times)
array length was 1000
num loops was 1000000
completed in 0.71737

calculate_mode_fast
=========================
mode => 3 (it occurs 44 times)
array length was 1000
num loops was 1040
completed in 0.011247
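The posted listing isn’t reproduced here, but a harness along these lines (the names and structure are my own guess, not the original code) produces comparable timings:

require 'benchmark'

n = 1000
nums = Array.new(n) { rand(1..50) }   # identical random input for both runs

slow_mode = nil
fast_mode = nil
slow_time = Benchmark.realtime { slow_mode = calculate_mode(nums) }
fast_time = Benchmark.realtime { fast_mode = calculate_mode_fast(nums) }

puts "calculate_mode      => #{slow_mode} in #{slow_time.round(6)}s"
puts "calculate_mode_fast => #{fast_mode} in #{fast_time.round(6)}s"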