The calculation of the time and space complexity of algorithms

In this section, we will introduce the concept of the time and space complexity of an algorithm, which helps us measure an algorithm's efficiency. This is important because optimizing an algorithm for a specific application often involves a trade-off: a time-efficient version may take up more memory. Time and space complexity will help us analyze the performance of various algorithms and determine their use cases accordingly. We will then look at the notations used to express complexity, and finally, we will analyze and calculate the time and space complexity of a few algorithms.

Time and space complexity

Whenever we deal with algorithms, we are concerned with the speed and memory requirements of a computer program, because certain tasks require immediate, real-time outputs. An example is a self-driving car: the camera captures real-time data from the road, and the computer processes it in real time to produce numerical outputs, which are fed into the computer controlling the car. That computer then sends control signals based on those outputs to the accelerator and brakes. For this reason, it is very important to calculate the complexity of any algorithm. Time is a crucial computational resource, and one that is central to the design and optimization of algorithms.

The two complexities that we are concerned with are time and space complexity.

The time complexity of an algorithm is defined as the amount of time the algorithm takes to execute, expressed as a function of the size of its input. This time is not actual time in seconds, minutes, or hours; rather, it is the number of times the computer runs the algorithm's instructions. Space complexity is defined as the amount of memory (usually RAM) the algorithm needs to execute, again expressed as a function of the size of its input.

Let's dive into the mathematical notation used to calculate the time and space complexity of algorithms.

Notation for time and space complexity

In the previous section, we saw the importance of time and space for a computer algorithm, illustrated with an example of a linear search operation in Python.
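
Although the linear search listing from the previous section is not repeated here, a minimal sketch along the same lines (the function name and sample data are illustrative, not the earlier listing) looks like this:

def linear_search(arr, target):
    # Scan each element in turn; in the worst case this
    # inspects all N elements of the array
    for i, value in enumerate(arr):
        if value == target:
            return i  # found the target at index i
    return -1  # the target is not in the array

print(linear_search([4, 8, 15, 16, 23, 42], 16))  # prints 3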

For the calculation of the time and space complexity of algorithms, there are several asymptotic notations that are used to describe various aspects of the complexity of the algorithm. Let's see their definitions now.

  • Big O notation: For the asymptotic upper bound, we use the O (big O) notation.

    For a given function g(n), O(g(n)) is defined as follows:

    O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }.

  • Big Ω notation: For the asymptotic lower bound, we use the Ω (big Omega) notation. Mathematically, it is defined very similarly to big O notation.

    For a given function g(n), Ω(g(n)) is defined as follows:

    Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }.

  • Big Θ notation: For the asymptotic tight bound, we use the Θ (big Theta) notation.

    For a given function g(n), Θ(g(n)) is defined as follows:

    Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }.
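
As a quick worked example of these definitions (not taken from the original text), consider the function f(n) = 3n² + 2n:

    f(n) = 3n² + 2n ≤ 3n² + 2n² = 5n²  for all n ≥ 1

so choosing c = 5 and n₀ = 1 satisfies the big O definition, giving f(n) = O(n²). Likewise, f(n) ≥ 3n² for all n ≥ 1, so c = 3 and n₀ = 1 give f(n) = Ω(n²); together, these two bounds give f(n) = Θ(n²).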

Now that we have covered the essential notations, let's look at the calculation and analysis of a few algorithms to see how these notations are used.

Calculation and analysis of the complexity of algorithms

For the analysis of algorithms, we mostly use big O notation to describe the worst-case and average-case behavior of algorithms. The upper bound given by big O notation is a very useful measure of an algorithm's performance. When calculating big O, we always ignore the lower-order terms and keep only the highest-order term, because the lower-order terms become insignificant when the input given to a program is very large. Big O notation thus focuses on the number of steps a program or function takes to complete its execution.

For example, let's consider the case of linear search, which we discussed earlier. There, you will observe that the loop runs over the length of the array, which means that if there are N elements in the array, the loop may run N times to find the given input. In other words, the algorithm can take N steps to complete, and we write this as O(N). It is also possible that the given input element is found at the very first index of the array; in that case, the algorithm takes only a single step to find the element, and the complexity becomes O(1). This means that O(1) is the best-case scenario and O(N) is the worst-case scenario.

However, as we discussed earlier, big O notation is generally used to describe the worst-case scenario of an algorithm. It is also worth remembering that when the number of steps remains constant, the complexity is O(1); this is true for reading, inserting, or deleting an element at the end of an array, no matter how many elements the array has.
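
For instance, in Python, where the built-in list serves as an array (an illustrative aside, not from the original text), these end-of-array operations take a constant number of steps however long the list is:

numbers = list(range(1_000_000))

last = numbers[-1]  # reading the last element: O(1)
numbers.append(7)   # inserting at the end: O(1) (amortized)
numbers.pop()       # deleting from the end: O(1)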

We have described O(1) and O(N); there is another complexity, O(log N), which describes the binary search algorithm. Since binary search keeps dividing the array into halves, for an array of N = 8 elements it takes only log₂ N steps to find the element, meaning only 3 steps!
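
A minimal binary search sketch (again, the names here are illustrative, not a listing from the book) makes this halving explicit:

def binary_search(arr, target):
    # arr must be sorted; each iteration halves the search range,
    # so the loop body runs at most about log2(N) times
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 2, 3, 4, 5, 6, 7, 8], 6))  # prints 5 after at most 3 halvings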

Let's now look at some practical code examples in Python 3 for which we can calculate the time complexity. The first program we consider checks whether a number is prime:

def prime(n):
    # Numbers below 2 are not prime
    if n < 2:
        return False
    # Trial division: the loop runs from 2 up to n - 1,
    # so the number of steps grows linearly with n
    for i in range(2, n):
        if n % i == 0:
            return False  # found a divisor, so n is not prime
    return True  # no divisor found, so n is prime

If you consider the preceding code, you can see that the loop runs from 2 up to the number provided as input by the user. This means that the number of steps taken by the code grows linearly with the input number, so the code has O(N) complexity.
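
As an aside (a sketch of a well-known refinement, not part of the original listing), the same check can stop at the square root of n: if n has a divisor larger than √n, it must also have one smaller than √n, so trial division up to √n suffices, and the complexity drops from O(N) to O(√N):

import math

def prime_sqrt(n):  # illustrative name, not from the book
    # Trial division only up to the integer square root of n,
    # which takes O(sqrt(N)) steps instead of O(N)
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True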

Another important point to consider is that big O notation throws away the constants, which means that O(50N) is equal to O(N). This is because, for some algorithms, an O(N²) algorithm may be faster than an O(50N) one up to a certain point (here, while N < 50), but after that point the O(50N) algorithm becomes faster again, so big O notation ignores the constants; the sketch below illustrates the crossover. This is true, for example, when distinguishing the sorting algorithms bubble sort and selection sort: both have O(N²) complexity, even though selection sort takes roughly N²/2 steps and remains that way no matter the size of the array, because big O discards the constant factor of 1/2. It is, of course, still natural to see that O(N) is faster than O(N²).
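
A quick numerical sketch (illustrative, not from the book) shows where that crossover between N² and 50N happens:

for n in (10, 50, 100, 1000):
    print(n, n ** 2, 50 * n)

# n = 10:   N^2 = 100     < 50N = 500    (N^2 is smaller below N = 50)
# n = 50:   N^2 = 2500    = 50N = 2500   (crossover at N = 50)
# n = 100:  N^2 = 10000   > 50N = 5000   (50N is smaller above N = 50)
# n = 1000: N^2 = 1000000 > 50N = 50000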

There are many other time complexities as well, such as O(N log N) and O(N!), but they are beyond the scope of this book. Computational complexity is a very broad discipline of computer science, comprising various data structures and algorithms, and a proper analysis of all their time and space complexities would be difficult to cover in a short book. There are many good books and online courses on data structures and algorithms that discuss these concepts in much more detail.

The concepts and calculation of time and space complexity will be extremely useful when we discuss quantum algorithms and compare them with their classical versions in later chapters. For example, Grover's search algorithm will be discussed in Chapter 9, Quantum Algorithms II – Grover's Search Algorithm and Simon's Algorithm, where we will see that it takes only O(√N) time to find an element in an unsorted list!
