Many people, hearing the word "algorithm", think of something relatively novel, non-existent before the modern computer era. But one of the most widely used algorithms in today's world (crucial, for example, in cryptographic applications) is, in fact, more than two thousand years old. The famous computer scientist Donald Knuth called it "the granddaddy of all algorithms". This algorithm was described by the ancient Greek mathematician Euclid and finds the greatest common divisor of two integers. Let's see how it works!
Division with remainder
Firstly, let's recall the procedure known as division with remainder. Suppose we have two integers a and b, such that b > 0. Marking all multiples of b on a number line, we see that a either coincides with one of them or lies between two marks. If qb is the closest mark to the left of a (possibly coinciding with a itself), the "distance" a − qb from qb to a is strictly less than b.
Keeping this picture in mind, we may state the following:
For any integer a and any non-zero integer b, there exist uniquely determined integers q and r, such that a = qb + r and 0 ≤ r < |b|.
We will say that a divided by b has quotient q and remainder r. We will also denote r as a mod b. Notice that q = ⌊a/b⌋, meaning "the greatest integer that is less than or equal to a/b" (where a/b is the floating-point result of ordinary division).
For example, 17 = 3 · 5 + 2, so 17 divided by 5 has quotient 3 and remainder 2 (and we can write that 17 mod 5 = 2).
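This can be checked directly in Python, whose divmod built-in follows exactly the floor-quotient convention above for a positive divisor (the concrete values here are illustrative):

```python
a, b = 17, 5          # illustrative values for division with remainder
q, r = divmod(a, b)   # q = floor(a / b), r = a - q * b
print(q, r)           # prints: 3 2
assert a == q * b + r and 0 <= r < b
```

For a positive divisor, Python's % operator returns the same remainder r in the range [0, b).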
Euclid's algorithm
Suppose now we want to find the greatest common divisor gcd(a, b) of two integers a and b. One might think the most obvious approach to compute gcd(a, b) is to first represent a and b as products of primes, and then multiply together their common factors. For instance, 12 = 2 · 2 · 3 and 18 = 2 · 3 · 3, so their greatest common divisor is 2 · 3 = 6.
However, despite centuries of effort, nobody has invented an efficient algorithm for factoring. Is there some other way to compute the greatest common divisor? Fortunately, yes. The key fact is that
If a and b are positive integers, and r = a mod b, then gcd(a, b) = gcd(b, r).
Why is it true? Well, if r = a mod b, then r = a − qb, so necessarily any integer that divides both a and b must divide r. On the other hand, any integer that divides both b and r must also divide a = qb + r. So, as the sets of all common divisors of the pairs (a, b) and (b, r) coincide, the greatest common divisors also coincide.
Hence we can easily write a recursive algorithm, always coming down to smaller arguments until one of them is zero (and you can easily prove that gcd(n, 0) = n for any positive integer n). Applying it to 48 and 18, for example, we obtain gcd(48, 18) = gcd(18, 12) = gcd(12, 6) = gcd(6, 0) = 6.
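The recursive procedure just described can be sketched in Python like this (a minimal version for non-negative arguments, not both zero):

```python
def gcd(a, b):
    """Greatest common divisor by Euclid's algorithm (non-negative inputs)."""
    if b == 0:            # base case: gcd(n, 0) = n
        return a
    return gcd(b, a % b)  # replace (a, b) with (b, a mod b)

print(gcd(48, 18))  # prints: 6
```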
Is it really an efficient algorithm? For the answer, we need to understand how quickly the arguments decrease with each recursive call. In a single round, the pair of arguments (a, b) becomes (b, a mod b). Either b ≤ a/2, and then a mod b < b ≤ a/2. Otherwise, b > a/2, and then, as a mod b = a − b, it follows that a mod b < a/2. In the next round, a mod b will become the first argument. Here, we always write the greater of the two arguments in the first place. So, after any two consecutive rounds of the algorithm, the greater of the two arguments is at the very least halved in value. That means we will reach zero in at most 2 log₂ a divisions, which is quite fast: for a 24-digit a (in our ordinary decimal system), there will be fewer than 170 divisions.
Extended Euclid's algorithm
Tweaking Euclid's algorithm a bit allows us to express gcd(a, b) as an integer linear combination of a and b, that is, in the form gcd(a, b) = xa + yb,
where x and y are some integers. Such a representation is always possible and will help us to do division in modular arithmetic (an operation making possible, for example, the key generation in the famous RSA cryptosystem).
Before giving the general algorithm, we illustrate it with an example. We will compute gcd(240, 46):

    240 = 5 · 46 + 10
    46 = 4 · 10 + 6
    10 = 1 · 6 + 4
    6 = 1 · 4 + 2
    4 = 2 · 2 + 0

We can look at the second line from below and write:

    2 = 6 − 1 · 4

Then, looking one line up, we notice we can substitute 4 with the expression 10 − 1 · 6, so:

    2 = 6 − (10 − 1 · 6) = 2 · 6 − 1 · 10

Next, we substitute 6 with 46 − 4 · 10, hence:

    2 = 2 · (46 − 4 · 10) − 1 · 10 = 2 · 46 − 9 · 10

See the pattern? In the end (check it!) we obtain:

    2 = 2 · 46 − 9 · (240 − 5 · 46) = (−9) · 240 + 47 · 46

Proceeding in the same way, we can always express gcd(a, b) in the form xa + yb, where x and y are integers, that is, as a linear combination of a and b. This process can be implemented as a recursive algorithm. Suppose we know the coefficients x′ and y′ in the representation of gcd(b, a mod b) as x′b + y′(a mod b). Now we can obtain the coefficients x and y in the representation gcd(a, b) = xa + yb. Since a mod b = a − qb with q = ⌊a/b⌋,

    gcd(a, b) = gcd(b, a mod b) = x′b + y′(a − qb) = y′a + (x′ − qy′)b.
So, x = y′ and y = x′ − ⌊a/b⌋ · y′. The base case for this recursion is again gcd(n, 0) = n, when x = 1 and y = 0.
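This recursion translates directly into code. Here is a sketch in Python for non-negative inputs:

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that g = gcd(a, b) and g == x*a + y*b."""
    if b == 0:                          # base case: gcd(n, 0) = 1*n
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)  # g == x1*b + y1*(a mod b)
    # Substitute a mod b = a - (a // b) * b back into the identity:
    return g, y1, x1 - (a // b) * y1

g, x, y = extended_gcd(240, 46)
print(g, x, y)   # prints: 2 -9 47
assert x * 240 + y * 46 == g
```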
Working with non-positive integers
In the previous paragraphs, we discussed finding gcd(a, b) only when one argument is strictly positive and the other is non-negative (the case when it is zero was considered as the base of the recursion). How can we handle negative numbers? Firstly, we notice that flipping the sign of either argument doesn't change the greatest common divisor, so we can always reduce the problem to finding the gcd of two non-negative integers. And then the only case not mentioned before is gcd(0, 0). Well, there exists a somewhat counterintuitive convention that gcd(0, 0) = 0. Now we can use Euclid's algorithm to find the gcd of any two integers.
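Putting these observations together, here is a sketch that accepts any pair of integers:

```python
def gcd_any(a, b):
    """gcd of any two integers, with the convention gcd(0, 0) = 0."""
    a, b = abs(a), abs(b)  # flipping signs does not change the gcd
    while b:
        a, b = b, a % b    # Euclid's step
    return a               # when b == 0, a is the answer (0 if both were 0)

print(gcd_any(-48, 18))  # prints: 6
print(gcd_any(0, 0))     # prints: 0
```

This matches the behavior of Python's own math.gcd, which also uses absolute values and returns 0 for gcd(0, 0).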
Conclusion
In this module, we have explored one of the most important algorithms, Euclid's algorithm. Here are the main take-aways:
- Euclid's algorithm is an efficient algorithm for finding the greatest common divisor of two integers.
- It can be written in a recursive way and consists in replacing the greater of the two positive arguments with its remainder modulo the other argument.
- An extended version of Euclid's algorithm allows us to represent the gcd of two integers as their linear combination.