Operations on polynomials and series¶
Problems in competitive programming, especially the ones involving enumeration of some kind, are often solved by reducing the problem to computing something on polynomials and formal power series.
This includes concepts such as polynomial multiplication, interpolation, and more complicated ones, such as polynomial logarithms and exponents. In this article, a brief overview of such operations and common approaches to them is presented.
Basic notions and facts¶
In this section, we focus more on the definitions and "intuitive" properties of various polynomial operations. The technical details of their implementation and complexities will be covered in later sections.
Polynomial multiplication¶
Definition
Univariate polynomial is an expression of the form $A(x) = a_0 + a_1 x + \dots + a_n x^n$.
The values $a_0, \dots, a_n$ are called the coefficients of the polynomial. They are typically taken from some field, meaning that the operations of addition, subtraction, multiplication and division are well-defined for them (except for division by $0$) and they generally behave in a similar way to real numbers.
Typical example of such a field is the field of remainders modulo a prime number $p$.
For simplicity we will drop the term univariate, as this is the only kind of polynomials we consider in this article. We will also write $A$ instead of $A(x)$ wherever possible, which will be understandable from the context.
Definition
The product of two polynomials is defined by expanding it as an arithmetic expression:

$$A(x) B(x) = \left(\sum\limits_i a_i x^i\right)\left(\sum\limits_j b_j x^j\right) = \sum\limits_{i,j} a_i b_j x^{i+j} = \sum\limits_k \left(\sum\limits_{i+j=k} a_i b_j\right) x^k = \sum\limits_k c_k x^k.$$

The sequence $c_k = \sum\limits_{i+j=k} a_i b_j$ of coefficients of the product is called the convolution of the sequences $a_0, \dots, a_n$ and $b_0, \dots, b_m$.
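As a concrete illustration, here is a minimal sketch of the convolution computed straight from the definition in $O(nm)$ time; the function name convolve is ours, and the faster $O(n \log n)$ multiplication via FFT is discussed later in this article:

```cpp
#include <vector>
using namespace std;

// Convolution straight from the definition: c[k] = sum of a[i] * b[j]
// over all pairs with i + j = k. Assumes both inputs are non-empty.
vector<long long> convolve(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> c(a.size() + b.size() - 1, 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size(); j++)
            c[i + j] += a[i] * b[j];
    return c;
}
```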
Definition
The degree of a polynomial $A$ with $a_n \neq 0$ is defined as $\deg A = n$.
For consistency, the degree of $A(x) = 0$ is defined as $\deg A = -\infty$.
In this notion, $\deg AB = \deg A + \deg B$ for arbitrary polynomials $A$ and $B$.
Convolutions are the basis of solving many enumerative problems.
Example
You have $n$ objects of the first kind and $m$ objects of the second kind.
Objects of the first kind are valued $a_1, \dots, a_n$, and objects of the second kind are valued $b_1, \dots, b_m$.
You pick a single object of the first kind and a single object of the second kind. How many ways are there to get the total value $k$?
Solution
Consider the product $(x^{a_1} + \dots + x^{a_n})(x^{b_1} + \dots + x^{b_m})$. If you expand it, each monomial will correspond to a pair $(a_i, b_j)$ and contribute to the coefficient near $x^{a_i + b_j}$. In other words, the answer is the coefficient near $x^k$ in the product.
Example
You throw a $6$-sided die $n$ times and sum up the results of all throws. What is the probability of getting the sum $k$?
Solution
The answer is the number of outcomes having the sum $k$, divided by the total number of outcomes, which is $6^n$.
What is the number of outcomes having the sum $k$?
For $n = 1$, it may be represented as the coefficient near $x^k$ in the polynomial $A(x) = x + x^2 + x^3 + x^4 + x^5 + x^6$, as there is exactly one outcome of a single throw for each sum from $1$ to $6$. Expanding the product $A(x)^n$, each monomial corresponds to a particular sequence of $n$ throws, so the number of outcomes having the sum $k$ after $n$ throws is the coefficient near $x^k$ in $A(x)^n$.
That being said, the answer to the problem is the coefficient near $x^k$ in $A(x)^n$, divided by $6^n$.
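To make the solution tangible, here is a small self-contained sketch that expands $A(x)^n$ by repeated naive multiplication and reads off the coefficient near $x^k$; the sample values $n = 3$, $k = 10$ are ours:

```cpp
#include <cstdio>
#include <vector>
using namespace std;

// Counts the outcomes of n six-sided dice summing to k by expanding
// (x + x^2 + ... + x^6)^n with repeated naive multiplication.
int main() {
    int n = 3, k = 10;                            // sample parameters
    vector<long long> A = {0, 1, 1, 1, 1, 1, 1};  // A(x) = x + ... + x^6
    vector<long long> P = {1};                    // running product A(x)^t
    for (int t = 0; t < n; t++) {
        vector<long long> C(P.size() + A.size() - 1, 0);
        for (size_t i = 0; i < P.size(); i++)
            for (size_t j = 0; j < A.size(); j++)
                C[i + j] += P[i] * A[j];
        P = C;
    }
    long long outcomes = k < (int)P.size() ? P[k] : 0;
    long long total = 1;
    for (int t = 0; t < n; t++) total *= 6;
    printf("%lld / %lld\n", outcomes, total);     // prints 27 / 216
}
```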
Formal power series¶
Definition
A formal power series is an infinite sum $A(x) = a_0 + a_1 x + a_2 x^2 + \dots$, considered regardless of its convergence properties.
In other words, when we consider e.g. a sum $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 2$, we imply that it converges to $2$ in a numeric sense. A formal power series, on the other hand, is treated purely as an algebraic object defined by its sequence of coefficients, and no numeric value is assigned to the infinite sum.
Definition
The product of formal power series $A(x)$ and $B(x)$ is also defined by expanding it as an arithmetic expression:

$$A(x) B(x) = \left(\sum\limits_i a_i x^i\right)\left(\sum\limits_j b_j x^j\right) = \sum\limits_k c_k x^k,$$

where the coefficients $c_k$ are computed as the convolution

$$c_k = \sum\limits_{i+j=k} a_i b_j.$$

The sequence $c_0, c_1, \dots$ is well-defined, as each $c_k$ only involves a finite number of summands.
Thus, polynomials may be considered as formal power series with finitely many non-zero coefficients.
Formal power series play a crucial role in enumerative combinatorics, where they're studied as generating functions for various sequences. A detailed explanation of generating functions and the intuition behind them is, unfortunately, out of scope for this article, therefore the curious reader is referred e.g. here for details about their combinatorial meaning.
However, we will very briefly mention that if $A(x)$ and $B(x)$ are generating functions counting some objects by weight (the coefficient near $x^k$ being the number of objects of weight $k$), then the product $A(x) B(x)$ counts pairs of such objects, one from each family, the weight of a pair being the sum of the weights of its elements.
Example
Let $A(x) = B(x) = 1 + x + x^2 + \dots$ count non-negative integers by their value, one object per weight. Then the coefficient near $x^k$ in $A(x) B(x)$ equals $k + 1$, which is exactly the number of ordered pairs of non-negative integers with the sum $k$.
In a similar way, there is an intuitive meaning to some other functions over formal power series.
Long polynomial division¶
Similar to integers, it is possible to define long division on polynomials.
Definition
For any polynomials $A(x)$ and $B(x) \neq 0$, one can uniquely represent $A(x)$ as

$$A(x) = D(x) \cdot B(x) + R(x),$$

where $R(x)$ is a polynomial remainder with $\deg R < \deg B$, and $D(x)$ is called the quotient.
Denoting $D(x) = A(x) \operatorname{div} B(x)$ and $R(x) = A(x) \bmod B(x)$, we obtain a direct analogue of the Euclidean division of integers.
Definition
If $R(x) = 0$, we say that $B(x)$ divides $A(x)$.
Polynomial long division is useful because of its many important properties:
- $A(x) \equiv R(x) \pmod{B(x)}$, that is, every polynomial is congruent to its remainder modulo $B(x)$.
- It implies that $A_1(x) \equiv A_2(x) \pmod{B(x)}$ if and only if $A_1(x) - A_2(x)$ is divisible by $B(x)$.
- In particular, congruence modulo $C(x) \cdot D(x)$ implies congruence modulo $C(x)$ and modulo $D(x)$ separately.
- For any linear polynomial $x - r$ it holds that $A(x) \equiv A(r) \pmod{x - r}$.
- It implies that $A(x)$ is divisible by $x - r$ if and only if $A(r) = 0$.
- For modulo being $x^k$, it holds that $A(x) \equiv a_0 + a_1 x + \dots + a_{k-1} x^{k-1} \pmod{x^k}$.
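A schoolbook implementation of the division defined above might look as follows; this is our own $O(nm)$ sketch with double coefficients for brevity, whereas a real implementation would rather use a modular field type:

```cpp
#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

// Schoolbook O(nm) long division: returns {D, R} with A = D * B + R
// and deg R < deg B. Assumes the leading coefficient of B is non-zero.
pair<vector<double>, vector<double>> divmod(vector<double> A, const vector<double>& B) {
    int n = A.size() - 1, m = B.size() - 1;
    if (n < m) return {{0}, A};
    vector<double> D(n - m + 1, 0);
    for (int i = n; i >= m; i--) {        // kill the leading term of A
        double coef = A[i] / B[m];
        D[i - m] = coef;
        for (int j = 0; j <= m; j++)
            A[i - j] -= coef * B[m - j];
    }
    A.resize(max(m, 1));                  // what's left is the remainder
    return {D, A};
}
```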
Note that long division can't be properly defined for formal power series. Instead, for any $A(x)$ such that $a_0 \neq 0$, it is possible to define an inverse formal power series $A^{-1}(x)$ such that $A(x) A^{-1}(x) = 1$. This fact, in turn, can be used to compute the result of long division for polynomials quickly.
Basic implementation¶
Here you can find the basic implementation of polynomial algebra.
It supports all trivial operations and some other useful methods. The main class is poly<T> for polynomials with coefficients of type T.

All arithmetic operations +, -, *, % and / are supported, % and / standing for remainder and quotient in the Euclidean division.

There is also the class modular<m> for performing arithmetic operations on remainders modulo a prime number m.

Other useful functions:

- deriv(): computes the derivative $P'(x)$ of $P(x)$.
- integr(): computes the indefinite integral $Q(x) = \int P(x)$ of $P(x)$ such that $Q(0) = 0$.
- inv(size_t n): calculate the first $n$ coefficients of $P^{-1}(x)$ in $O(n \log n)$.
- log(size_t n): calculate the first $n$ coefficients of $\ln P(x)$ in $O(n \log n)$.
- exp(size_t n): calculate the first $n$ coefficients of $\exp P(x)$ in $O(n \log n)$.
- pow(size_t k, size_t n): calculate the first $n$ coefficients of $P^k(x)$ in $O(n \log nk)$.
- deg(): returns the degree of $P(x)$.
- lead(): returns the coefficient of $x^{\deg P(x)}$.
- resultant(poly<T> a, poly<T> b): computes the resultant of $a$ and $b$ in $O(|a| \cdot |b|)$.
- bpow(T x, size_t n): computes $x^n$.
- bpow(T x, size_t n, T m): computes $x^n \pmod{m}$.
- chirpz(T z, size_t n): computes $P(1), P(z), P(z^2), \dots, P(z^{n-1})$ in $O(n \log n)$.
- vector<T> eval(vector<T> x): evaluates $P(x)$ in all points of x in $O(n \log^2 n)$.
- poly<T> inter(vector<T> x, vector<T> y): interpolates a polynomial by a set of pairs $P(x_i) = y_i$ in $O(n \log^2 n)$.
- And some more, feel free to explore the code!
Arithmetic¶
Multiplication¶
The very core operation is the multiplication of two polynomials. That is, given the polynomials $A(x) = a_0 + a_1 x + \dots + a_n x^n$ and $B(x) = b_0 + b_1 x + \dots + b_m x^m$, you have to compute the polynomial

$$C(x) = A(x) B(x) = \sum\limits_{k=0}^{n+m} \left(\sum\limits_{i+j=k} a_i b_j\right) x^k.$$

It can be computed in $O(n \log n)$ via the Fast Fourier transform, and almost all methods described here will use it as a subroutine.
Inverse series¶
If $A(0) \neq 0$ there always exists an infinite formal power series $A^{-1}(x) = q_0 + q_1 x + q_2 x^2 + \dots$ such that $A^{-1} A = 1$. It often proves useful to compute the first $k$ coefficients of $A^{-1}$ (that is, to compute it modulo $x^k$). There are two major ways to calculate it.
Divide and conquer¶
This algorithm was mentioned in Schönhage's article and is inspired by Graeffe's method. It is known that for $B(x) = A(x) A(-x)$ it holds that $B(x) = B_0(x^2)$, that is, $B(x)$ is an even polynomial. It means that

$$A^{-1}(x) = \frac{1}{A(x)} = \frac{A(-x)}{A(x) A(-x)} = \frac{A(-x)}{B_0(x^2)} = A(-x) \cdot B_0^{-1}(x^2).$$

Note that $B_0(x)$ can be computed with a single polynomial multiplication, after which we're only interested in the first half of the coefficients of its inverse series. This effectively reduces the initial problem of computing $A^{-1}(x) \pmod{x^k}$ to computing $B_0^{-1}(x) \pmod{x^{\lceil k/2 \rceil}}$.

The complexity of this method can be estimated as

$$T(n) = T(n/2) + O(n \log n) = O(n \log n).$$
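Below is a sketch of this reduction with naive multiplication and double coefficients (the names mul and inverse are ours); substituting an FFT-based product into mul yields the stated $O(n \log n)$ bound:

```cpp
#include <algorithm>
#include <vector>
using namespace std;
typedef vector<double> poly; // real code would use a modular field type

poly mul(const poly& a, const poly& b, size_t k) { // product modulo x^k
    poly c(min(k, a.size() + b.size() - 1), 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size() && i + j < k; j++)
            c[i + j] += a[i] * b[j];
    return c;
}

// First k coefficients of 1/A via A^{-1}(x) = A(-x) / B0(x^2),
// recursing into the inverse of B0 with half as many coefficients.
poly inverse(const poly& A, size_t k) {
    if (k == 1) return {1 / A[0]};
    poly C = A;                            // C(x) = A(-x)
    for (size_t i = 1; i < C.size(); i += 2) C[i] = -C[i];
    poly B = mul(A, C, 2 * k);             // A(x) A(-x) = B0(x^2), even
    poly B0((k + 1) / 2, 0);
    for (size_t i = 0; i < B0.size(); i++)
        if (2 * i < B.size()) B0[i] = B[2 * i];
    poly T = inverse(B0, (k + 1) / 2);     // recurse on half the length
    poly T2(2 * T.size() - 1, 0);          // T(x^2)
    for (size_t i = 0; i < T.size(); i++) T2[2 * i] = T[i];
    return mul(C, T2, k);
}
```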
Sieveking–Kung algorithm¶
The generic process described here is known as Hensel lifting, as it follows from Hensel's lemma. We'll cover it in more detail further below, but for now let's focus on an ad hoc solution. "Lifting" here means that we start with the approximation $B_0(x) = q_0 = a_0^{-1}$, which is valid modulo $x$, and then iteratively lift it from $\bmod\ x^a$ to $\bmod\ x^{2a}$.

Let $B_k(x) \equiv A^{-1}(x) \pmod{x^a}$. The next approximation needs to follow the equation $A(x) B_{k+1}(x) \equiv 1 \pmod{x^{2a}}$ and may be represented as $B_{k+1}(x) = B_k(x) + x^a C(x)$. From this follows the equation

$$A(x) \left(B_k(x) + x^a C(x)\right) \equiv 1 \pmod{x^{2a}}.$$

Let $A(x) B_k(x) \equiv 1 + x^a D(x) \pmod{x^{2a}}$, then the equation above implies

$$x^a \left(D(x) + A(x) C(x)\right) \equiv 0 \pmod{x^{2a}} \implies D(x) \equiv -A(x) C(x) \pmod{x^a} \implies C(x) \equiv -B_k(x) D(x) \pmod{x^a}.$$

From this, one can obtain the final formula, which is

$$x^a C(x) \equiv -B_k(x) x^a D(x) \equiv B_k(x) \left(1 - A(x) B_k(x)\right) \pmod{x^{2a}} \implies B_{k+1}(x) \equiv B_k(x) \left(2 - A(x) B_k(x)\right) \pmod{x^{2a}}.$$

Thus starting with $B_0(x) \equiv a_0^{-1} \pmod{x}$ we will compute the sequence $B_k$ such that $B_k \equiv A^{-1} \pmod{x^{2^k}}$ with the complexity

$$T(n) = T(n/2) + O(n \log n) = O(n \log n).$$
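The resulting iteration is compact. Here is a self-contained sketch with double coefficients (the mul helper is repeated from the previous sketch so that the block stands on its own); each round doubles the number of correct coefficients:

```cpp
#include <algorithm>
#include <vector>
using namespace std;
typedef vector<double> poly;

poly mul(const poly& a, const poly& b, size_t k) { // product modulo x^k
    poly c(min(k, a.size() + b.size() - 1), 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size() && i + j < k; j++)
            c[i + j] += a[i] * b[j];
    return c;
}

// First k coefficients of 1/A by Hensel lifting: B <- B * (2 - A * B).
poly inverse(const poly& A, size_t k) {
    poly B = {1 / A[0]};                   // correct modulo x
    for (size_t a = 1; a < k; a *= 2) {
        poly C = mul(A, B, 2 * a);         // A * B = 1 + x^a * D
        C.resize(2 * a, 0);
        for (size_t i = 0; i < 2 * a; i++) C[i] = -C[i];
        C[0] += 2;                         // C = 2 - A * B
        B = mul(B, C, 2 * a);              // B (2 - A B) modulo x^{2a}
    }
    B.resize(k);
    return B;
}
```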
The algorithm here might seem a bit more complicated than the first one, but it has a very solid and practical reasoning behind it, as well as a great generalization potential if looked at from a different perspective, which will be explained further below.
Euclidean division¶
Consider two polynomials $A(x)$ and $B(x)$ of degrees $n$ and $m$ respectively. As it was said earlier, you can rewrite $A(x)$ as

$$A(x) = B(x) D(x) + R(x), \quad \deg R < \deg B.$$

Let $n \geq m$. Then $\deg D = n - m$ and $\deg R < m$, thus the highest $n - m + 1$ coefficients of $A(x)$ don't depend on $R(x)$ at all, which means $D(x)$ can be recovered from them alone if you consider it a system of equations.

The system of linear equations we're talking about can be written in the following form:

$$\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_m \end{bmatrix} = \begin{bmatrix} b_m & 0 & \dots & 0 \\ b_{m-1} & b_m & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \dots & \dots & \dots & b_m \end{bmatrix} \begin{bmatrix} d_{n-m} \\ d_{n-m-1} \\ \vdots \\ d_0 \end{bmatrix},$$

where $b_i$ with negative indices are assumed to be zero. From the looks of it, we can conclude that with the introduction of reversed polynomials

$$A^R(x) = x^n A(x^{-1}) = a_n + a_{n-1} x + \dots + a_0 x^n,$$

$$B^R(x) = x^m B(x^{-1}) = b_m + b_{m-1} x + \dots + b_0 x^m,$$

$$D^R(x) = x^{n-m} D(x^{-1}) = d_{n-m} + d_{n-m-1} x + \dots + d_0 x^{n-m},$$

the system may be rewritten as

$$A^R(x) \equiv B^R(x) D^R(x) \pmod{x^{n-m+1}}.$$

From this you can unambiguously recover all coefficients of $D(x)$:

$$D^R(x) \equiv A^R(x) \left(B^R(x)\right)^{-1} \pmod{x^{n-m+1}}.$$

And from this, in turn, you can recover $R(x)$ as $R(x) = A(x) - B(x) D(x)$.

Note that the matrix above is a so-called triangular Toeplitz matrix and, as we see here, solving a system of linear equations with an arbitrary triangular Toeplitz matrix is, in fact, equivalent to polynomial inversion. Moreover, its inverse matrix would also be a triangular Toeplitz matrix, and its entries, in the terms used above, are the coefficients of $\left(B^R(x)\right)^{-1} \pmod{x^{n-m+1}}$.
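In code, the reduction is just two reversals around one inverse-series call; this fragment builds on the mul, inverse and divmod sketches from the previous sections (all names ours):

```cpp
// Quotient via reversed polynomials: D^R = A^R * (B^R)^{-1} mod x^{n-m+1}.
// The remainder is then recovered as R = A - B * D.
poly quotient(const poly& A, const poly& B) {
    int n = A.size() - 1, m = B.size() - 1;
    if (n < m) return {0};
    poly AR(A.rbegin(), A.rend());         // A^R: coefficients reversed
    poly BR(B.rbegin(), B.rend());         // B^R
    poly DR = mul(AR, inverse(BR, n - m + 1), n - m + 1);
    DR.resize(n - m + 1, 0);               // pad: deg D = n - m exactly
    return poly(DR.rbegin(), DR.rend());   // reverse back to get D
}
```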
Calculating functions of a polynomial¶
Newton's method¶
Let's generalize the Sieveking–Kung algorithm. Consider an equation $F(P) = 0$ where $P(x)$ should be a polynomial and $F(P)$ is some polynomial function defined as

$$F(P) = \sum\limits_{i=0}^\infty \alpha_i (P - \beta)^i,$$

where $\beta$ is some constant. It can be proven that if we introduce a new formal variable $y$, we can express $F(P)$ as

$$F(P) = F(y) + (P - y) F'(y) + (P - y)^2 G(P, y),$$

where $F'(x)$ is the formal derivative of $F(x)$ and $G(P, y)$ is some formal power series in $P$ and $y$.

With this result we can find the coefficients of the solution iteratively.

Let $F(Q_k) \equiv 0 \pmod{x^a}$. We need to find $Q_{k+1} \equiv Q_k \pmod{x^a}$ such that $F(Q_{k+1}) \equiv 0 \pmod{x^{2a}}$.

Substituting $P = Q_{k+1}$ and $y = Q_k$ in the formula above, we get

$$F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) + (Q_{k+1} - Q_k)^2 G(Q_{k+1}, Q_k) \pmod{x^{2a}}.$$

Since $Q_{k+1} - Q_k \equiv 0 \pmod{x^a}$, it also holds that $(Q_{k+1} - Q_k)^2 \equiv 0 \pmod{x^{2a}}$, thus

$$0 \equiv F(Q_{k+1}) \equiv F(Q_k) + (Q_{k+1} - Q_k) F'(Q_k) \pmod{x^{2a}}.$$

The last formula gives us the value of $Q_{k+1}$:

$$Q_{k+1} = Q_k - \dfrac{F(Q_k)}{F'(Q_k)} \pmod{x^{2a}}.$$

Thus, knowing how to invert polynomials and how to compute $F(Q_k)$, we can find $n$ coefficients of $P$ with the complexity

$$T(n) = T(n/2) + f(n),$$

where $f(n)$ is the time needed to compute $F(Q_k)$ and $F'(Q_k)^{-1}$, which is usually $O(n \log n)$.
The iterative rule above is known in numerical analysis as Newton's method.
Hensel's lemma¶
As was mentioned earlier, formally and generically this result is known as Hensel's lemma, and it may in fact be used in an even broader sense when we work with a series of nested rings. In this particular case we worked with a sequence of polynomial remainders modulo $x$, $x^2$, $x^3$ and so on.
Another example where Hensel's lifting might be helpful are the so-called p-adic numbers, where we, in fact, work with the sequence of integer remainders modulo $p$, $p^2$, $p^3$ and so on.
Logarithm¶
For the function $\ln P(x)$ it's known that

$$\left(\ln P(x)\right)' = \dfrac{P'(x)}{P(x)},$$

thus we can calculate the first $n$ coefficients of $\ln P(x)$ in $O(n \log n)$ by computing the right-hand side modulo $x^n$ and integrating it termwise.
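With an inverse-series routine at hand, the logarithm is one multiplication plus termwise integration; here is a sketch reusing the mul and inverse helpers from the sketches above (the name log_series is ours), assuming $P(0) = 1$:

```cpp
// First k coefficients of ln P for P(0) = 1: integrate P' / P termwise.
poly log_series(const poly& P, size_t k) {
    poly D(max<size_t>(P.size() - 1, 1), 0);   // derivative P'
    for (size_t i = 1; i < P.size(); i++) D[i - 1] = i * P[i];
    poly Q = mul(D, inverse(P, k), k);         // P' / P modulo x^k
    poly L(k, 0);                              // L(0) = 0 by convention
    for (size_t i = 1; i < k; i++)
        if (i - 1 < Q.size()) L[i] = Q[i - 1] / i;
    return L;
}
```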
Inverse series¶
Turns out, we can get the formula for $A^{-1}$ using Newton's method. For this we take the equation $A = Q^{-1}$, thus:

$$F(Q) = Q^{-1} - A,$$

$$F'(Q) = -Q^{-2},$$

$$Q_{k+1} \equiv Q_k (2 - A Q_k) \pmod{x^{2^{k+1}}}.$$
Exponent¶
Let's learn to calculate $e^{P(x)} = Q(x)$. It should hold that $\ln Q = P$, thus:

$$F(Q) = \ln Q - P,$$

$$F'(Q) = Q^{-1},$$

$$Q_{k+1} \equiv Q_k (1 + P - \ln Q_k) \pmod{x^{2^{k+1}}}.$$
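A sketch of this iteration, built on top of log_series above (exp_series is our name; $P(0) = 0$ is assumed):

```cpp
// First k coefficients of exp P for P(0) = 0 via Q <- Q * (1 + P - ln Q).
poly exp_series(const poly& P, size_t k) {
    poly Q = {1};                          // exp(P) = 1 modulo x
    for (size_t a = 1; a < k; a *= 2) {
        poly L = log_series(Q, 2 * a);     // ln Q modulo x^{2a}
        poly C(2 * a, 0);                  // C = 1 + P - ln Q
        C[0] = 1;
        for (size_t i = 0; i < 2 * a; i++) {
            if (i < P.size()) C[i] += P[i];
            if (i < L.size()) C[i] -= L[i];
        }
        Q = mul(Q, C, 2 * a);
    }
    Q.resize(k);
    return Q;
}
```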
$k$-th power¶
Now we need to calculate $P^k(x)$ modulo $x^n$. This may be done via the following formula:

$$P^k = \exp\left(k \ln P\right).$$

Note though, that you can calculate the logarithms and the exponents correctly only if you can find some initial $Q_0$.
To find it, you should calculate the logarithm or the exponent of the constant coefficient of the polynomial.
But the only reasonable way to do it is if $P(0) = 1$ for $Q = \ln P$, so that $Q(0) = 0$, and if $P(0) = 0$ for $Q = e^P$, so that $Q(0) = 1$.
Thus you can use the formula above only if $P(0) = 1$. Otherwise, if $P(x) = \alpha x^t T(x)$ where $T(0) = 1$, you can write that:

$$P^k(x) = \alpha^k x^{kt} \exp\left[k \ln T(x)\right].$$

Note that you also can calculate some $k$-th root of a polynomial if you can calculate $\sqrt[k]{\alpha}$, for example for $\alpha = 1$.
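For the main case $P(0) = 1$ this translates directly into code (pow_series is our name); the general case additionally multiplies by $\alpha^k$ and shifts by $kt$, as in the formula above:

```cpp
// P^k modulo x^n for P(0) = 1, computed as exp(k * ln P).
poly pow_series(const poly& P, long long k, size_t n) {
    poly L = log_series(P, n);
    for (size_t i = 0; i < L.size(); i++) L[i] *= k;
    return exp_series(L, n);
}
```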
Evaluation and Interpolation¶
Chirp-z Transform¶
For the particular case when you need to evaluate a polynomial in the points $x_r = z^{2r}$ you can do the following:

$$A(z^{2r}) = \sum\limits_{k=0}^n a_k z^{2kr}.$$

Let's substitute $2kr = r^2 + k^2 - (r - k)^2$. Then this sum rewrites as

$$A(z^{2r}) = z^{r^2} \sum\limits_{k=0}^n \left(a_k z^{k^2}\right) z^{-(r-k)^2},$$

which is, up to the factor $z^{r^2}$, equal to the convolution of the sequences $u_k = a_k z^{k^2}$ and $v_k = z^{-k^2}$.

Note that $u_k$ has indexes from $0$ to $n$ here, and $v_k$ has indexes from $-n$ to $m$, where $m$ is the maximum power of $z$ which you need.

Now if you need to evaluate a polynomial in the points $x_r = z^{2r+1}$ you can reduce it to the previous task by the transformation $a_k \to a_k z^k$.

It gives us an $O((n + m) \log(n + m))$ algorithm when you need to compute values in powers of $z$, thus you may compute the DFT for sizes other than powers of two.

Another observation is that $kr = \binom{k+r}{2} - \binom{k}{2} - \binom{r}{2}$, which avoids the even/odd case distinction altogether:

$$A(z^r) = z^{-\binom{r}{2}} \sum\limits_{k=0}^n \left(a_k z^{-\binom{k}{2}}\right) z^{\binom{k+r}{2}}.$$

The coefficient near $x^{n+r}$ in the product of the reversed sequence $a_{n-k} z^{-\binom{n-k}{2}}$ and the sequence $z^{\binom{k}{2}}$ is exactly the sum above, so a single multiplication again recovers all the values $A(z^r)$ at once.
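Here is a self-contained sketch of the transform based on the last identity (the name chirpz mirrors the library method listed earlier; double coefficients and a naive correlation loop are used for clarity, while the real speed-up comes from doing that correlation as a single FFT multiplication):

```cpp
#include <cmath>
#include <vector>
using namespace std;

// Evaluates A at 1, z, z^2, ..., z^{m-1} using the identity
// kr = C(k+r,2) - C(k,2) - C(r,2), which turns the evaluation into
// one correlation of u_k = a_k z^{-C(k,2)} with w_j = z^{C(j,2)}.
vector<double> chirpz(const vector<double>& a, double z, int m) {
    int n = a.size() - 1;
    auto zb = [&](long long t) {           // z^{C(t,2)} = z^{t(t-1)/2}
        return pow(z, (double)(t * (t - 1) / 2));
    };
    vector<double> u(n + 1), w(n + m);
    for (int k = 0; k <= n; k++) u[k] = a[k] / zb(k);
    for (int j = 0; j < n + m; j++) w[j] = zb(j);
    vector<double> res(m);
    for (int r = 0; r < m; r++) {          // naive correlation, O(nm)
        double s = 0;
        for (int k = 0; k <= n; k++) s += u[k] * w[k + r];
        res[r] = s / zb(r);                // strip the z^{C(r,2)} factor
    }
    return res;
}
```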
Multi-point Evaluation¶
Assume you need to calculate $A(x_1), \dots, A(x_n)$. As mentioned earlier, $A(x) \equiv A(x_i) \pmod{x - x_i}$. Thus you may do the following:

- Compute a segment tree such that in the segment $[l, r)$ stands the product $P_{l, r}(x) = (x - x_l)(x - x_{l+1}) \cdots (x - x_{r-1})$.
- Starting with $l = 1$ and $r = n$ at the root node, let $m = \lfloor (l + r) / 2 \rfloor$ and descend to $[l, m)$ with the polynomial $A(x) \bmod P_{l,m}(x)$.
- This will recursively compute $A(x_l), \dots, A(x_{m-1})$; afterwards, do the same for $[m, r)$ with $A(x) \bmod P_{m,r}(x)$.
- Concatenate the results from the first and second recursive call and return them.

The whole procedure will run in $O(n \log^2 n)$.
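A recursive sketch of this procedure, reusing the mul and divmod helpers from the sketches above; here the products $P_{l,r}$ are rebuilt on the fly instead of being stored in an explicit tree, so the arithmetic is naive, while FFT-based mul and divmod plus a precomputed tree give the stated $O(n \log^2 n)$. Call it as evaluate(A, xs, 0, n, out) with out sized n:

```cpp
// Evaluates A at xs[l..r): reduces A modulo the product of (x - x_i)
// over each half and recurses; a leaf holds A mod (x - x_i) = A(x_i).
void evaluate(const poly& A, const vector<double>& xs, int l, int r,
              vector<double>& out) {
    if (r - l == 1) {
        double v = 0;                      // Horner's rule for A(x_l)
        for (int i = (int)A.size() - 1; i >= 0; i--) v = v * xs[l] + A[i];
        out[l] = v;
        return;
    }
    int m = (l + r) / 2;
    for (int half = 0; half < 2; half++) {
        int lo = half ? m : l, hi = half ? r : m;
        poly P = {1};                      // P_{lo,hi}(x), built on the fly
        for (int i = lo; i < hi; i++)
            P = mul(P, poly{-xs[i], 1}, P.size() + 1);
        evaluate(divmod(A, P).second, xs, lo, hi, out);
    }
}
```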
Interpolation¶
There's a direct formula by Lagrange to interpolate a polynomial, given a set of pairs $(x_i, y_i)$:

$$A(x) = \sum\limits_{i=1}^n y_i \prod\limits_{j \neq i} \dfrac{x - x_j}{x_i - x_j}.$$

Computing it directly is a hard thing, but turns out we may compute it in $O(n \log^2 n)$ with a divide and conquer approach.

Consider $P(x) = (x - x_1) \cdots (x - x_n)$. To know the denominators in $A(x)$ we should compute the products

$$P_i = \prod\limits_{j \neq i} (x_i - x_j).$$

But if you consider the derivative $P'(x)$ you'll find out that $P'(x_i) = P_i$. Thus you can compute all the $P_i$'s via multi-point evaluation in $O(n \log^2 n)$.

Now consider the recursive algorithm done on the same segment tree as in the multi-point evaluation. It starts in the leaves with the value $\dfrac{y_i}{P_i}$ in each leaf.

When we return from the recursion we should merge the results from the left and the right vertices as $A_{l,r} = A_{l,m} P_{m,r} + P_{l,m} A_{m,r}$.

In this way, when you return back to the root you'll have exactly $A(x)$ in it. The total procedure also works in $O(n \log^2 n)$.
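A sketch of the whole procedure on top of the previous helpers (mul and evaluate); the recursion returns the pair $(P_{l,r}, A_{l,r})$ and merges exactly by the rule above:

```cpp
// Returns {P_{l,r}, A_{l,r}} where P_{l,r} = (x - x_l)...(x - x_{r-1})
// and A_{l,r} interpolates the leaf values c_i = y_i / P'(x_i).
pair<poly, poly> build(const vector<double>& xs, const vector<double>& c,
                       int l, int r) {
    if (r - l == 1) return {poly{-xs[l], 1}, poly{c[l]}};
    int m = (l + r) / 2;
    pair<poly, poly> L = build(xs, c, l, m), R = build(xs, c, m, r);
    size_t full = L.first.size() + R.first.size();  // no truncation
    poly P = mul(L.first, R.first, full);
    poly A = mul(L.second, R.first, full);          // A_{l,m} P_{m,r}
    poly t = mul(L.first, R.second, full);          // P_{l,m} A_{m,r}
    for (size_t i = 0; i < t.size(); i++) A[i] += t[i];
    return {P, A};
}

poly interpolate(const vector<double>& xs, const vector<double>& ys) {
    int n = xs.size();
    poly P = {1};                                   // P(x) = prod (x - x_i)
    for (int i = 0; i < n; i++)
        P = mul(P, poly{-xs[i], 1}, P.size() + 1);
    poly D(P.size() - 1, 0);                        // derivative P'(x)
    for (size_t i = 1; i < P.size(); i++) D[i - 1] = i * P[i];
    vector<double> d(n), c(n);
    evaluate(D, xs, 0, n, d);                       // d_i = P'(x_i)
    for (int i = 0; i < n; i++) c[i] = ys[i] / d[i];
    return build(xs, c, 0, n).second;
}
```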
GCD and Resultants¶
Assume you're given polynomials $A(x) = a_0 + a_1 x + \dots + a_n x^n$ and $B(x) = b_0 + b_1 x + \dots + b_m x^m$.
Let $\lambda_1, \dots, \lambda_n$ be the roots of $A(x)$ and let $\mu_1, \dots, \mu_m$ be the roots of $B(x)$, counted with their multiplicities.
You want to know if $A(x)$ and $B(x)$ have any roots in common. There are two interconnected ways to do that.
Euclidean algorithm¶
Well, we already have an article about it. For an arbitrary domain you can write the Euclidean algorithm as easily as:
```cpp
template<typename T>
T gcd(const T &a, const T &b) {
    return b == T(0) ? a : gcd(b, a % b);
}
```
It can be proven that for polynomials $A(x)$ and $B(x)$ of degrees $n$ and $m$ it will work in $O(nm)$.
Resultant¶
Let's calculate the product $A(\mu_1) \cdots A(\mu_m)$. It will be equal to zero if and only if some $\mu_j$ is a root of $A(x)$, that is, if and only if $A(x)$ and $B(x)$ have a common root.
For symmetry we can also multiply it with $b_m^n$ and rewrite the whole thing in the following form:

$$\mathcal{R}(A, B) = b_m^n \prod\limits_{j=1}^m A(\mu_j) = b_m^n a_n^m \prod\limits_{i=1}^n \prod\limits_{j=1}^m (\mu_j - \lambda_i) = (-1)^{nm} a_n^m \prod\limits_{i=1}^n B(\lambda_i).$$

The value defined above is called the resultant of the polynomials $A(x)$ and $B(x)$. From the definition above these important properties follow:
- If $A(x) \equiv R(x) \pmod{B(x)}$, then $\mathcal{R}(A, B) = b_m^{\deg A - \deg R} \mathcal{R}(R, B)$, because $A(\mu_j) = R(\mu_j)$ for every root $\mu_j$ of $B(x)$.
- From this follows, together with the symmetry $\mathcal{R}(A, B) = (-1)^{nm} \mathcal{R}(B, A)$ visible in the formula above, that the resultant can be computed by Euclid-like remainder steps, staying in the ring of coefficients the whole time.
Miraculously, it means that the resultant of two polynomials is actually always from the same ring as their coefficients!
Also, these properties allow us to calculate the resultant alongside the Euclidean algorithm, which works in $O(nm)$:
```cpp
template<typename T>
T resultant(poly<T> a, poly<T> b) {
    if(b.is_zero()) {
        return T(0);
    } else if(b.deg() == 0) {
        // for a constant b the resultant is simply lead(b)^{deg a}
        return bpow(b.lead(), a.deg());
    } else {
        int pw = a.deg();
        a %= b;                // R(a, b) = lead(b)^{deg a - deg r} R(r, b)
        pw -= a.deg();
        T mul = bpow(b.lead(), pw) * T((b.deg() & a.deg() & 1) ? -1 : 1);
        T ans = resultant(b, a);   // swapping arguments costs (-1)^{nm}
        return ans * mul;
    }
}
```
Half-GCD algorithm¶
There is a way to calculate the GCD and resultants in $O(n \log^2 n)$.
The procedure to do so implements a $2 \times 2$ linear transform which maps a pair of polynomials $a(x), b(x)$ into another pair $c(x), d(x)$ such that $\deg d(x) \leq \frac{\deg a(x)}{2}$. If you're careful enough, you can compute the half-GCD of any pair of polynomials with at most $2$ recursive calls to polynomials which are at least $2$ times smaller.
The specific details of the algorithm are somewhat tedious to explain, however you can find its implementation in the library, as the half_gcd function.
After half-GCD is implemented, you can repeatedly apply it to polynomials until you're reduced to the pair of $\gcd(a, b)$ and $0$.