Most coding beginners don’t know how to measure the efficiency of their code, so they try to rely on execution time or the number of code lines. Let’s see how it should be done!
Let’s start with the basics. Imagine we have a program that contains just one line of code:
Console.WriteLine("Hi");
In order to perform this operation, our CPU will need, let’s say, 1 time unit. Then the following two commands will be evaluated as 2 units:
Console.WriteLine("Hi");
Console.WriteLine("Hi");
Ok, now let’s test if you got it right. Please evaluate the following piece of code:
for (int i = 0; i < 100; i++)
{
    Console.WriteLine("Hi");
}
I hope you gave it a try, and the answer, of course, is 100 units.
Now you see that two lines of code can have different execution times. That means we cannot just count the lines. The truth is that some lines of code are “heavier” than others in terms of performance.
But what if I told you that not all expressions can be measured this way? How is that possible? Check the example below and try to evaluate it:
foreach (string name in ListOfNames)
{
    Console.WriteLine("Hi");
}
Quite challenging, isn’t it? Here the number of units depends on the number of names in the list. That is why we cannot rely on the method I showed you before. To avoid this kind of problem we use O() (big O notation), which describes only how our algorithm scales in the worst-case scenario. What is the worst-case scenario?
Let’s start with the best scenario first. Imagine that the list from our last example is empty. The loop body will never be executed, and we will spend 0 units on it. A cost that doesn’t grow with the input is constant, so the complexity of such a case is O(1) (big O of one).
Now we should try to evaluate the worst case. In the end, it might be any number of units, depending on the length of the list, so let’s represent that length as a variable n. The complexity of the last code example is then O(n) (read as “big O of n”).
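To make the worst case concrete, here is a minimal sketch of a linear search (my own illustrative example, not code from the article above): in the worst case the value sits at the very end of the array, or is missing entirely, so we have to inspect all n elements.
// A minimal sketch of linear search: in the worst case (the key is
// absent or sits in the last position) we check all n elements, so it is O(n)
public static int LinearSearch(int[] arr, int key)
{
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] == key)
        {
            return i; // found: return the index
        }
    }
    return -1; // not found after n checks
}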
The following diagram represents O(n). Y represents the number of operations, and X represents the amount of data to process. Here you see that Y grows linearly with X. That means the time complexity directly depends on the amount of data.

Ok, now we have to understand that any algorithm that executes a constant number of operations will be evaluated as O(1).
Let’s take a look at the three examples below. I’m sure the execution time for each of them is different. But in the context of scalability, we evaluate them equally. Why? Because in any scenario, this code performs the same number of operations, so the time complexity is constant.
💡 That means that as long as the complexity of an expression is constant, we can treat them all as equal and denote them O(1).
int i = 1;
################
int k = i + 1;
################
for (int c = 0; c < 10; c++)
{
    // a constant number of iterations (always 10), regardless of any input
}
################
The next two examples represent the kind of complexity you want to avoid in your interview solutions.
In this case, we want to count the number of possible pairs of elements in an array, and for this purpose we use a nested loop. That gives us a time complexity of O(n^2).
public int GetNumberOfPairs(int[] arr)
{
    int result = 0;
    // Compare positions rather than values, so arrays that contain
    // duplicate values are counted correctly
    for (int i = 0; i < arr.Length; i++)
    {
        for (int j = 0; j < arr.Length; j++)
        {
            if (i != j)
            {
                result++;
            }
        }
    }
    return result;
}
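A quick usage sketch (the 3-element array is just an illustrative value, and we assume the method is called from inside the same class): the nested loops execute 3 × 3 = 9 times, and 6 of those iterations see two different positions.
int[] sample = { 1, 2, 3 };
// The inner body runs 3 * 3 = 9 times; 6 of those iterations
// see two different positions
Console.WriteLine(GetNumberOfPairs(sample)); // prints 6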
But why is that so bad? Because the number of operations grows quadratically with the size of the array.

I want to give you an example of how bad it is. For an array of length 3, our CPU should process 9 operations (the first point on the diagram), but for an array of length 10, that increases to 100 operations (the second point on the diagram).
And the worst thing you can do is to add another nested loop.
public int GetNumberOfTriplets(int[] arr)
{
    int result = 0;
    // Three nested loops over the same array give O(n^3)
    for (int i = 0; i < arr.Length; i++)
    {
        for (int j = 0; j < arr.Length; j++)
        {
            for (int k = 0; k < arr.Length; k++)
            {
                if (i != j && i != k && j != k)
                {
                    result++;
                }
            }
        }
    }
    return result;
}
This code is even more time-consuming. The same array of 3 elements now gives us 27 operations, and the array of 10 elements costs 1,000 operations.
But is there anything better than O(n) when we work with arrays? The answer is yes: there is O(log n), and that is what we should aim for.
If you don’t know what a logarithm is, I recommend watching at least the introductory videos of the following Khan Academy course:
Intro to logarithms (video) | Logarithms | Khan Academy
Let’s take a look at an example of such an algorithm. The classic one is binary search:
Here we have a sorted array of the numbers 1 to 7, where we want to find the value 6.

Pseudocode:

Find the value in the middle of the array.
Check if this value is equal to 6. If it is, return its index.
Otherwise, check whether it is greater than 6 or less than it:
- if the middle value is greater, take the left half of the array and repeat the previous steps;
- otherwise, take the right half of the array and repeat the previous steps.
That means that in each iteration, we cut the search area in half.
Below you can see the C# code for this algorithm.
/// arguments here are a sorted array and the value to be found
public static int BinarySearch(int[] arr, int key)
{
    int minNum = 0;
    int maxNum = arr.Length - 1;
    while (minNum <= maxNum)
    {
        int mid = (minNum + maxNum) / 2;
        if (key == arr[mid])
        {
            return mid; // found the key
        }
        else if (key < arr[mid])
        {
            maxNum = mid - 1; // continue in the left half
        }
        else
        {
            minNum = mid + 1; // continue in the right half
        }
    }
    return -1; // the key is not in the array
}
It is the classic binary search algorithm, which finds the index of a given element in a sorted array with a time complexity of O(log n).
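A quick usage sketch (the array and the key are just illustrative values): each iteration halves the search area, so even a long array needs only a handful of steps.
int[] sorted = { 1, 2, 3, 4, 5, 6, 7 };
// The search area shrinks 7 -> 3 -> 1, so at most 3 iterations are needed
Console.WriteLine(BinarySearch(sorted, 6)); // prints 5 (the index of 6)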

We use big O notation to identify our time complexity.
The heaviest parts of the code usually contain loops, methods that call loops, or recursion. But loops are not equally heavy: a loop with a constant number of iterations has constant complexity, just like the simple straight-line code we saw earlier. When the number of iterations depends on the algorithm’s input, the time complexity may range from O(log n) to O(n^X), depending on how many loops are nested.
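Recursion deserves a quick illustration too. Here is a minimal sketch (a hypothetical helper of my own, not one of the examples above): a method that halves its input on every call does the same kind of work as the binary search loop, so it also runs in O(log n).
// A minimal sketch: a method that halves its input on every recursive
// call makes about log2(n) calls in total, so it runs in O(log n)
public static int CountHalvings(int n)
{
    if (n <= 1)
    {
        return 0; // base case: nothing left to halve
    }
    return 1 + CountHalvings(n / 2); // one unit of work per halving
}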