A Cyclic Redundancy Code (CRC) is commonly used to detect errors in data transmission or storage.
The fundamental mathematics behind the CRC is polynomial division. An arbitrary message (a fixed block of k information bits) is treated as if each bit were the binary coefficient of a polynomial of degree k-1. Suppose we augment that message by appending some number of additional bits, which we will call the parity bits, to the end of the message. If the parity bits are chosen such that the new message (original message + parity bits), which we will refer to as the code word, is evenly divisible by a known polynomial, which we will call the generator polynomial, then the receiver can assume that no transmission errors occurred. In practice, however, some error patterns in the received message are undetectable for a given generator polynomial. Many of today's communications protocols, such as HDLC and Ethernet, use 16-bit and 32-bit CRCs, respectively. The native implementation for computing and checking a CRC operates bit by bit, which typically makes hardware a more natural fit than software.
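For example, the 4-bit message 1011 (k = 4) corresponds to the degree-3 polynomial:

- m(x) = x^3 + x + 1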
CRC Theory
Let's assume that we form a code word c(x) from our message polynomial m(x) of degree k-1 by multiplying it by an arbitrary polynomial g(x). We will refer to g(x) as the generator polynomial.
- c(x) = m(x)g(x)
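For example, multiplying the 2-bit message m(x) = x + 1 by the generator g(x) = x^3 + x^2 + 1 (with coefficients added modulo 2) gives:

- c(x) = x^4 + x^2 + x + 1

The resulting code word bits 10111 no longer contain the original message bits 11 as a separate field, which motivates the alternative form below.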
We can also write the code word as the sum of two polynomials: the original message m(x) with each term increased in degree by (n-k), where n is the total number of bits in the code word, plus a remainder polynomial r(x) of degree less than (n-k). This form has the advantage of not disturbing the original message and is the basis for the CRC algorithm. For c(x) to be evenly divisible by g(x), r(x) must be the remainder of m(x)x^(n-k) divided by g(x); because addition and subtraction are the same operation in modulo-2 arithmetic, appending that remainder makes the shifted message an exact multiple of g(x).
- c(x) = m(x)x^(n-k) + r(x)
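As a small worked example (using an illustrative degree-3 generator rather than one of the standard CRC polynomials), take the 4-bit message 1101 and the generator 1011:

- m(x) = x^3 + x^2 + 1, g(x) = x^3 + x + 1, n-k = 3
- m(x)x^3 = x^6 + x^5 + x^3 (bits 1101000)
- r(x) = m(x)x^3 mod g(x) = 1 (parity bits 001)
- c(x) = x^6 + x^5 + x^3 + 1 (code word 1101001), which is evenly divisible by g(x)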
The binary coefficients of the remainder polynomial r(x) are the parity bits, which get appended to the end of the original message. What we wind up with is a code word that is simply the original message followed by a tail of parity bits. The example below shows the resulting code word derived from the input ASCII hex values for the test sequence 123456789 and the CRC computed with the CRC-16 generator polynomial.
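The following is a minimal bit-at-a-time sketch in C of that computation, assuming the common CRC-16/ARC convention for the 16-bit generator x^16 + x^15 + x^2 + 1 (reflected bit order, zero initial value, no final XOR); other CRC-16 variants change these parameters and therefore the resulting parity bits. The function name crc16_arc is simply illustrative.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time CRC-16, assuming the CRC-16/ARC convention:
 * generator x^16 + x^15 + x^2 + 1 (0x8005), reflected input/output,
 * initial value 0x0000, no final XOR. */
static uint16_t crc16_arc(const uint8_t *data, size_t len)
{
    uint16_t crc = 0x0000;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];                     /* bring in the next byte, LSB first */
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x0001)
                crc = (crc >> 1) ^ 0xA001;  /* 0xA001 is 0x8005 bit-reversed */
            else
                crc >>= 1;
        }
    }
    return crc;
}

int main(void)
{
    const char *msg = "123456789";          /* the customary CRC check string */
    uint16_t crc = crc16_arc((const uint8_t *)msg, strlen(msg));
    printf("CRC-16 of \"%s\" = 0x%04X\n", msg, crc);
    return 0;
}
```

With these parameters the computed check value for 123456789 is 0xBB3D; those 16 bits are the r(x) parity bits appended to the message bytes to form the code word.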