Talk:BCH code

Terrible Explication
This is a general comment on the BCH entry ... it is AWFUL, TERRIBLE, OBTUSE, BAD, ....

Classic case of nerds writing for nerds. Most of us are just trying to figure out how things work, get things done, and make a little money - and don't have time to be nerds.

This entry should explain BCH codes ... not the mathematics behind them. This page should be moved to a separate referenced "BCH theory" page.

Replace this entry in entirety with something more intuitive and tutorial ... (... PLEASE ...) —Preceding unsigned comment added by 75.107.115.229 (talk) 05:25, 6 June 2010 (UTC)


 * I'm not sure what sort of explanation you're looking for, beyond the fact that BCH is a cyclic block code! The fact that it is useful is due to its particular mathematical properties, which can only really be explained by presenting them.


 * However, an "applications" section would probably benefit the article. Oli Filth (talk|contribs) 11:14, 6 June 2010 (UTC)

This article is exactly as criticized above - OBTUSE and of LITTLE USE to anyone who does not already have a sophisticated mathematical background. The observation that a BCH is a "cyclic block code" is of little help, and even seems to be a bit of a put-down or an attempt at "witty repartee" rather than a genuine attempt to be helpful or to understand the problem. There most certainly is a helpful, intuitive level of explanation between "it's a cyclic block code" and the highly technical and inaccessible treatment given in this article. This isn't a criticism that should be dismissed or ridiculed - it goes to the heart of the public service that Wikipedia is trying to provide. Hopefully someone who has both an understanding of the subject and a desire to help others understand will make the needed improvements. Alan.A.Mick (talk) 14:58, 30 December 2011 (UTC)

Really? Describing BCH as a "cyclic block code" is seen as "witty repartee"? I think the entry is perfect just the way it is... If you think otherwise, then take a crack at rewriting it... *THAT* is the heart of the public service that Wikipedia provides... — Preceding unsigned comment added by 173.73.54.71 (talk) 17:04, 22 October 2012 (UTC)

It sounds like the above comments are not complaining about the description of BCH as a cyclic block code, but rather that this information is not sufficient to understand what BCH is or how it is used. A layman's explanation of what the jargon means, particularly why it is relevant, would be an improvement. Even better, set the jargon aside for the theory section, and explain what is being done and why. Is the term "Galois Field" meaningful in any way to someone learning about BCH on Wikipedia? Does the phrase "primitive narrow-sense BCH codes" help the reader understand why they are different from general BCH codes, why one might prefer either of them, or help the reader understand how to construct BCH codes? It is understandable that every article cannot be a hand-holding tutorial that gingerly leads the reader along to illumination, but surely there is some happy medium between that and strict mathematical formalism.


 * Some helpful information might be how to get the corrections for a given code. For example, if the message is 512 bytes and the BCH length is 8, then the maximum correction is 4 (always?). Or can different GF(2^n) fields be chosen to alter the corrective power? I think that a section on fields of GF(2^n) would be helpful for the people in the binary world. It is mathematically interesting for other fields, but a lot of people accessing the page will just be trying to understand GF(2^n) only; I don't know if there are then short-cuts/simplifications to some of the explanations?


 * CRC32 and Reed-Solomon have some similar subject matter. Not everyone will be familiar with algebraic fields. However, those criticizing could be constructive by saying what is better in those pages. BCH and Reed-Solomon are technically more difficult, so I would expect them to be a little more difficult to understand (require more reading). I think some worked binary examples would be helpful, along with the mathematical notation. Golay codes have some historical use. I think BCH could include technology that currently uses them; this is in the lead. Thanks to whoever has contributed to this page. 71.19.161.130 (talk) 17:11, 16 December 2013 (UTC)

Although BCH code is a class of cyclic error correction code, the fact that it is cyclic only makes clear that the maximum size of a message for a field based on GF(2^n) is (2^n - 1) elements. There is nothing in the encoding or decoding process of common error correcting codes where the fact that such codes are cyclic provides any benefit. Note that original view Reed Solomon code is not a cyclic code unless the evaluated data points are powers of the field primitive. Outside of academics, the most common usage for Reed Solomon original view encoding is some implementations of jerasure, an erasure-only code used for data storage, where typically a set of m evaluated data points is {0, 1, ..., m-1}, resulting in a non-cyclic code, and for GF(2^n), the maximum message length is 2^n (as opposed to 2^n - 1). Rcgldr (talk) 19:24, 16 December 2020 (UTC)

Another problem with the references
The last two links in the references are broken. 194.171.252.100 09:53, 14 September 2007 (UTC)

Problem with reference
The second reference, to this PDF, is not accessible to people without an account on the relevant database. As such, it seems that a full citation would be more appropriate: A Simple Step-by-Step Decoding of Binary BCH Codes CHR et al. IEICE Trans Fundamentals.2005; E88-A: 2236-2239 --Dyfrgi 07:55, 11 January 2007 (UTC)

Encoding/Decoding
The examples are not very elaborate. How does one create the generator polynomial $$m_{1,3, ...}$$ from $$m_{1}$$, $$m_{3}$$, ...? What is the importance of $$x^4 + x + 1$$? The way to determine the cycles of roots of $$m_{1}$$, $$m_{3}$$, $$m_{5}$$ and $$m_{7}$$ is not explained. It's not clarified that $$C_{r}$$ means the redundancy polynomial. How do I calculate the syndrome values? How do I calculate the error locator polynomials? Shall I continue?

Can anyone please provide us with a complete algorithm description, detailed enough so it can be replicated on paper or in source code?

--Zom-B 16:54, 24 June 2007 (UTC)
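For what it's worth, the syndrome step asked about above can be sketched in a few lines of Python (this is my own illustration, not from the article; it assumes GF(2^4) with reducing polynomial x^4 + x + 1 and the (15,5) generator x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, and the function names are mine):

```python
# GF(2^4) with reducing polynomial x^4 + x + 1 (0b10011); alpha = x = 2.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

def gf_pow(a, e):
    # alpha^15 = 1, so exponents are taken mod 15
    r = 1
    for _ in range(e % 15):
        r = gf_mul(r, a)
    return r

def syndrome(received, j):
    # S_j = r(alpha^j); bit i of 'received' is the coefficient of x^i
    s = 0
    for i in range(15):
        if (received >> i) & 1:
            s ^= gf_pow(2, i * j)
    return s

# Any multiple of the generator g(x) is a codeword; g itself will do.
g = 0b10100110111  # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
assert all(syndrome(g, j) == 0 for j in range(1, 7))

# Flip bit 6: the first syndrome becomes alpha^6, locating the error.
assert syndrome(g ^ (1 << 6), 1) == gf_pow(2, 6)
```

All syndromes of a valid codeword are zero, and for a single error the first syndrome directly points at the error position.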


 * "What is the importance of $$x^4 + x + 1$$?" - it is just an example of a primitive polynomial used as the basis for GF(2^4). All non-zero numbers for this field are powers of x (hex 2). "How to calculate syndromes ... error locator polynomial" - I'm assuming the article has been updated since the question was asked, since it now explains the process. "cycles of roots" - I don't see this in the article anymore. I updated the article to explain the minimum polynomials for powers of 2 modulo the field primitive polynomial. The least common multiples are simply the product of non-duplicate minimum polynomials. Rcgldr (talk) 19:33, 16 December 2020 (UTC)
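To make the minimum-polynomial remark concrete, here is a short Python sketch (my own, not from the article) that builds the minimum polynomials from the conjugacy classes of powers of alpha in GF(2^4) with reducing polynomial x^4 + x + 1, and multiplies the non-duplicate ones into the (15,5) generator:

```python
M = 0b10011  # reducing polynomial x^4 + x + 1; alpha = x = 2

def gf_mul(a, b):
    # multiplication in GF(2^4)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= M
        b >>= 1
    return r

def alpha_pow(e):
    r = 1
    for _ in range(e % 15):
        r = gf_mul(r, 2)
    return r

def min_poly(i):
    # conjugacy class {i, 2i, 4i, ...} mod 15
    cls, c = set(), i % 15
    while c not in cls:
        cls.add(c)
        c = (2 * c) % 15
    # product of (x + alpha^c) over the class; coefficients low to high
    poly = [1]
    for c in cls:
        root = alpha_pow(c)
        nxt = [0] * (len(poly) + 1)
        for j, co in enumerate(poly):
            nxt[j + 1] ^= co            # co * x
            nxt[j] ^= gf_mul(root, co)  # co * root
        poly = nxt
    # the coefficients collapse into GF(2)
    assert all(co in (0, 1) for co in poly)
    return sum(co << j for j, co in enumerate(poly))

def poly_mul(a, b):
    # carry-less multiply = product in GF(2)[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

m1, m3, m5 = min_poly(1), min_poly(3), min_poly(5)
assert (m1, m3, m5) == (0b10011, 0b11111, 0b111)
g = poly_mul(poly_mul(m1, m3), m5)
assert g == 0b10100110111  # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1
```

Since the conjugacy classes are disjoint, the least common multiple really is just the product of the distinct minimum polynomials, as stated above; min_poly(7) here comes out as x^4 + x^3 + 1.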

The example didn't seem to be right so I have expanded it. The encoding section needs to be revised accordingly but I don't have time to do that right now. Richard Pinch 06:50, 30 June 2007 (UTC)

I have created a Czech version of the page (based on material from this one); now I am going to update the English version in a similar way. The explanation of the unreadable-characters correction, leading to a simple reuse of the Euclidean algorithm with the Forney formula, will be added in the near future (no new source material, just simplifying the reasoning).

Explanation of \Lambda could be simplified or rather fully replaced by the explanation with unreadable characters.

I have generalised the BCH code definition in the Czech version to cover Reed-Solomon, which was somewhat inconsistently suggested on this page. I am not sure if I am allowed to do the same here in the English version. Hippo.69 (talk) 18:39, 29 December 2012 (UTC)


 * Encoding should just be the process of calculating the $$m_y(x)$$ polynomials and appending the values to the message? All methods for calculating CRCs, etc., can be used, such as table look-ups and calculations in parallel. If it is this simple, I don't think an extra section is needed. 71.19.161.130 (talk) 16:54, 16 December 2013 (UTC)
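It is indeed that simple in the systematic view. A minimal Python sketch (my own illustration, assuming the (15,5) binary BCH code with generator g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1): encoding is a CRC-style polynomial remainder.

```python
G = 0b10100110111  # (15,5) generator g(x), degree 10

def bch_encode(msg):
    # msg is 5 bits; codeword = msg * x^10 + (msg * x^10 mod g)
    r = msg << 10
    for i in range(14, 9, -1):       # long division over GF(2)
        if r & (1 << i):
            r ^= G << (i - 10)
    return (msg << 10) | r

def poly_mod(a, g):
    # remainder of a(x) divided by g(x) over GF(2)
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

cw = bch_encode(0b10101)
assert cw >> 10 == 0b10101       # systematic: the message is the top bits
assert poly_mod(cw, G) == 0      # every codeword is a multiple of g(x)
```

Exactly as with a CRC, the inner loop can be replaced by table look-ups or a carry-less multiply for speed.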

Dispute on Graph
I don't know how this is going to be useful, but to the best of my knowledge this is the graph I obtained from running my simulation code. The main program is given here (ancillary functions omitted for want of space). You can get the full program from here.


#!/usr/bin/octave -q
# A simple communications channel,

%BPSK uncoded.
System.BER=[]; System.Error=[]; System.Length=36*1000; System.Mesg=zeros(1,System.Length); System.Coded=[];

%BCH coded.
System.BCHError=[]; System.BCHBER=[]; System.BCHLength=System.Length*(63.0/36.0); System.BCHCoded=zeros(1,System.BCHLength); System.BCHDeCoded=[];

%BPSK parameters.
System.Modulated=[]; System.DeModulated=[]; System.DeCoded=[]; System.CarrierRate=10; System.DataRate=2; System.SNRdb=0:0.5:10;

gen_poly=BCH_poly;

Length=length(System.SNRdb); System.BCHActualErrors=zeros(1,Length); System.BCHCorrected=zeros(1,Length);

%Additive White Gaussian Noise, power.
System.N0=5;

for iter=1:Length

disp('=>')
System.SNRdb(iter)
System.BCHBER

disp('Simulating AWGN channel with BCH & BPSK')

%Value of SNR
System.SNR=10^(System.SNRdb(iter)/10);
%Signal strength, power.
Eb=System.SNR*System.N0;
System.Eb=Eb;
Ec=Eb*36.0/63.0;
%Source Message
System.Mesg=mod(floor(randn(1,System.Length)),2);

Rxword=[];
Chunks=length(System.Mesg)/36;
for biter=1:Chunks
bmesg=System.Mesg((biter-1)*36+1:biter*36);
cwmesg=GF_product(gen_poly,GF_polarize(bmesg));

%convert from the 0,-1 mode to the 1,0 mode.
cwmesgTx=cwmesg+1;

System.BCHCoded((biter-1)*63+1:(biter)*63)=cwmesg;

%Modulate BPSK
bch1=bpskmod(cwmesgTx,20,10,sqrt(Ec));

%Add noise
bch2=bch1+sqrt(System.N0)*boxmuller(length(bch1));

%Demodulate BPSK
cwRx=bpskdemod(bch2,20,10,sqrt(Ec));

%This is in the +1,+0 mode,
%convert to a 0,-1 mode
cwrecv=cwRx-1;

Rxword=[Rxword cwrecv];

%errpos gives roots w.r.t. index of 0.
errpos=BCH_decode_berlekamp(cwrecv,63,36,5); %_berlekamp

%so add +1, to convert error position locations from 0 index to +1 index.
errpos=errpos+1;

if isempty(errpos)
%disp('Codeword is correct')
correctw=cwrecv;
else
%printf('Correcting %d errors\n',length(errpos))
correctw=cwrecv;
for i=1:length(errpos)
%flip the bits,
correctw(errpos(i))=-(1+correctw(errpos(i)));
end
end

%disp('Corrected CW')
%correctw
%fgets(stdin)

CorrectError=length(errpos);
System.BCHCorrected(iter)=System.BCHCorrected(iter)+CorrectError;

System.BCHDeCoded((biter-1)*63+1:biter*63)=correctw;
end
System.BCHActualErrors(iter)=Difference(System.BCHCoded,Rxword);
System.BCHError(iter)=Difference(System.BCHCoded,System.BCHDeCoded);
System.BCHBER(iter)=System.BCHError(iter)/System.BCHLength;

%BPSK part of simulation,
System.Coded=System.Mesg;

%Modulate
System.Modulated=bpskmod(System.Coded,System.CarrierRate,System.DataRate,sqrt(System.Eb));
%Channel with AWGN
System.Channel=System.Modulated + sqrt(System.N0)*randn(1,length(System.Modulated));

%DeModulate
System.DeModulated=bpskdemod(System.Channel,System.CarrierRate,System.DataRate,sqrt(System.Eb));

%Channel Decode
System.DeCoded=System.DeModulated;

%Error Calculation
System.Error(iter)=Difference(System.Coded,System.DeCoded);
System.BER(iter)=System.Error(iter)/System.Length;

end

disp('System Error')
System.Error

disp('System BER')
System.BER

disp('System.BCHError')
System.BCHError

disp('System.BCHBER')
System.BCHBER

disp('System BCH Actual')
System.BCHActualErrors

disp('System BCH Corrected')
System.BCHCorrected

disp('System SNRdb')
System.SNRdb

semilogy(System.SNRdb,System.BER,";Simulated BPSK;",System.SNRdb,0.5*erfc(sqrt(10.^(System.SNRdb/10))),";Theoretical BPSK;",System.SNRdb,System.BCHBER,";Simulated BCH, [63,36];")
title "BPSK Uncoded, Vs BCH Coded systems"
save bchchannel_sim System
fgets(stdin)
fgets(stdin)

Let me know what's the problem, if there is any. --பராசக்தி 16:48, 16 September 2006 (UTC)

I tried to run your code in my Octave, but I don't have the required other functions. Perhaps it is better to consider the theoretical error rate. The (63,36) binary BCH code with Berlekamp-Massey decoding should correct all error patterns of weight <= 5 and no error patterns of weight > 5. Since the theoretical BER for BPSK is

$$Q(\sqrt{2 E_s / N_0}) = \frac{1}{2} \operatorname{erfc} (\sqrt{E_s / N_0})$$,

we can write

$$ P_b = \sum_{j=6}^{63} \frac{j}{n} {n\choose j}p^j(1-p)^{n-j} $$

where $$ p = Q( \sqrt{2 E_s/N_0} ) $$ is the raw error rate at the input of the decoder (due to the rate-adjusted SNR). This further assumes that all decoder failures are detected, which is not a terrible assumption for this code.

I think that $$p$$ should be equal to your "simulated BPSK" error rate. Perhaps you could add these theoretical curves to your graph to see what is going on.
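The bound above is also easy to evaluate directly; here is a small Python sketch of the same formula (mine, not from the original posts), with n = 63, t = 5 and math.comb from the standard library:

```python
from math import comb

def coded_ber(p, n=63, t=5):
    # P_b = sum_{j=t+1}^{n} (j/n) * C(n,j) * p^j * (1-p)^(n-j)
    return sum(j / n * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

assert coded_ber(0.0) == 0.0
# Below the correction radius the decoder helps a lot:
assert 0.0 < coded_ber(0.01) < 0.01
# A very noisy channel gains nothing:
assert coded_ber(0.5) > 0.1
```

Sweeping p over the raw channel error rates gives the theoretical "after decoder" curve discussed here.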

Here is my Matlab code (apologies but my Octave installation has never quite worked for me).

n = 63; k = 36; t = 5; rate = k/n;

ebno = 0:0.5:10;

bpsk_ber = 0.5*erfc(sqrt(10.^(ebno/10)));
coded_raw_ber = 0.5*erfc(sqrt(10.^(rate*ebno/10)));
coded_ber = 0*coded_raw_ber;

b = (0:n)/n;
b(1:(t+1)) = 0;
for i=1:length(ebno)
coded_ber(i) = dot(binopdf(0:n,n,coded_raw_ber(i)),b);
end

semilogy(ebno,bpsk_ber,'r-',ebno,coded_raw_ber,'g-',ebno,coded_ber,'b-');
title('(63,36) BCH with bounded distance decoding');
legend('BPSK','Input to decoder','After decoder',3);
grid on

I have given a link to the code tarball on my website. Please use that for the other functions. AFAIK I think it is correct. The method may be different, but I think it is statistically significant, as I ran the code for 36000 bits and got a BER of 1e-4 (10x more bits than 1/BER). Also, the right way would be to run any simulation just until about 10 or 100 errors accumulate at each SNR value, as I understand from the books by Moore on ECC. Let me know, now that the other functions are available. I will try & check your claims as well. --பராசக்தி 00:38, 19 September 2006 (UTC)

---

Here is an updated version of my code which uses the Matlab Communications Toolbox for the simulation. The theoretical BER formula/curve was updated to count only message bit errors.

n = 63; k = 36; t = 5; rate = k/n;

ebno = 0:0.5:10;

ebno_lin = 10.^(ebno/10);
bpsk_ber = 0.5*erfc(sqrt(ebno_lin));
coded_raw_ber = 0.5*erfc(sqrt(rate*ebno_lin));
coded_ber = 0*coded_raw_ber;

% Theory
[X,Y] = meshgrid(0:k,0:n-k);
T = (((X+Y)>t)+0).*X;
for i=1:length(ebno)
msg_err = binopdf(0:k,k,coded_raw_ber(i));
par_err = binopdf(0:(n-k),n-k,coded_raw_ber(i))';
joint = msg_err(ones(1,n-k+1),:) .* par_err(:,ones(1,k+1));
coded_ber(i) = sum(sum(T.*joint))/k;
end

% Simulation (all zeros codeword)
M = 500;
sim_ber = zeros(1,length(ebno));
for i=1:2:15
y = -1 + randn(M,n)/sqrt(2*rate*ebno_lin(i));
ybit = (y>0)+0;
xhat = bchdec(gf(ybit,1),n,k);
sim_ber(i) = sum(sum(xhat~=0))/M/k;
end

semilogy(ebno,bpsk_ber,'r-',ebno,coded_raw_ber,'g-',ebno,coded_ber,'b-',ebno,sim_ber,'bo');
title('(63,36) BCH with bounded distance decoding');
legend('BPSK','Input to decoder','After decoder','Simulation',3);
xlabel('E_b / N_0');
ylabel('BER');
grid on;
set(gca,'YLim',[1e-7 1]);

The results of this script are plotted here. You will notice the theory and simulation now match because they are both correct.



Ideas from Wolfram
Wolfram Alpha writes the function $$GF(q^m)$$ like this: $$G_m[q^m](z)$$ and $$GeneratingFunction[q^m, m, z]$$

The $$n=q^m-1$$ is written alternatively as

$$q^m = 1 + n$$

or $$q=(1 + n)^{m^{-1}}$$ or more simply $$q=(1 + n)^{1/m}$$

More to come... -- James Michael DuPont (talk) 07:34, 29 August 2010 (UTC)

Decoding example
From Glrx's recent edit summary: "single digit symbols and errors make this a poor example." Thanks for your input. I feel that the example is a good one, as it is simple and practical. Additional examples could be added to fully illustrate the complexity of the subject, but I don't know how much value that would add to the article. Thoughts? Bobmath (talk) 15:30, 28 July 2011 (UTC)

Euclidean algorithm
For RS codes, the Euclidean decoder is another popular algorithm. I don't know if this is true for BCH codes, but since the Peterson decoder section of the Reed–Solomon error correction article refers to this BCH article, perhaps the Euclidean decoder could be mentioned here as well with a link to the Euclidean decoder section. Rcgldr (talk) 04:30, 3 November 2011 (UTC)

I agree; for nonbinary codes or for decoding with unreadable characters I think it is the best method, as it computes $$\Omega$$ during the $$\Lambda$$ calculation, so the Forney formula can be used easily afterwards. The Peterson algorithm is good only for explanation purposes; computing several determinants cannot be faster than a gcd computation. The Massey algorithm is probably comparable, especially in the binary case. Both favourite algorithms compute $$\Lambda$$ without 'guessing' its degree. Hippo.69 (talk) 18:56, 29 December 2012 (UTC)

Simplified BCH Example
The article says that in $$GF(2^4)$$ it is true that:

$$(a+b)^2 = a^2 + 2ab + b^2 = a^2 + b^2$$

But I think that is true in $$GF(2)$$.

Page 29 of Error Control Coding, by Lin and Costello, seems to support this case. 82.57.60.212 (talk) 11:23, 14 January 2012 (UTC) Damix


 * I don't have Lin and Costello, but addition in GF(2^k) is just exclusive-or. The 2ab does not represent a field multiply of 2 times ab, but rather ab + ab, which is zero. The same confusion arises in formal differentiation. Glrx (talk) 20:11, 15 January 2012 (UTC)


 * To answer Damix's original concern, I would like to note the following mathematical fact. The characteristic of $$\mathrm{GF}(p^m)$$ (for prime $$p$$) is $$p$$. That is, if you sum any element of $$\mathrm{GF}(p^m)$$ $$p$$ times, you get zero. Arun Kandasamy 12:34, 25 February 2012 (UTC)  — Preceding unsigned comment added by Arun ks (talk • contribs)
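Both replies can be checked exhaustively in a few lines of Python (my own sketch, assuming GF(2^4) with reducing polynomial x^4 + x + 1; any reducing polynomial gives the same result):

```python
def gf_mul(a, b):
    # multiplication in GF(2^4), reducing polynomial x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

for a in range(16):
    # characteristic 2: a + a = 0, so the 2ab term vanishes
    assert a ^ a == 0
    for b in range(16):
        s = a ^ b  # addition is exclusive-or
        # (a + b)^2 = a^2 + b^2 for every pair of field elements
        assert gf_mul(s, s) == gf_mul(a, a) ^ gf_mul(b, b)
```

So the identity holds for the whole field GF(2^4), not just GF(2), because the characteristic is 2.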

Syndromes numbering
Hmm, I should reindex the syndromes so that they are indexed the same way in all sections ... in the next edit probably Hippo.69 (talk) 00:47, 30 December 2012 (UTC) I hope this is corrected now Hippo.69 (talk) 20:51, 4 January 2013 (UTC)

Possible error in the Forney algorithm explanation
I am not familiar with Wikipedia editing or anything like that, but I want to point out something which seems to me like a mistake. Close to the beginning of the Forney algorithm explanation, the exponent in the multiplicative formula for lambda(x) is negative, which contradicts the definition that its roots are the reciprocals of the error locations (which in fact will be the error locations themselves under this definition). If this is a mistake I hope someone will fix it, and if it's not, please explain to me why. Also excuse me for the lack of formatting or any spelling mistakes; I was typing this on my phone. — Preceding unsigned comment added by 37.26.148.151 (talk) 12:57, 16 September 2015 (UTC)

External links modified
Hello fellow Wikipedians,

I have just added archive links to 1 external link on BCH code. Please take a moment to review my edit. If necessary, add after the link to keep me from modifying it. Alternatively, you can add to keep me off the page altogether. I made the following changes:
 * Added archive http://web.archive.org/web/20120930164509/http://www.stanford.edu/class/ee387/handouts/notes7.pdf to http://www.stanford.edu/class/ee387/handouts/notes7.pdf

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at ).

Cheers.—cyberbot II  Talk to my owner :Online 16:24, 22 March 2016 (UTC)

Chess hashing
I just read that two early chess programs used BCH coding for their hash functions. What advantage does BCH coding add to hashing? (I am hoping it might eliminate collisions. Was/is it not used more widely due to speed considerations? Chess is always looking for maximum speed, accepting some degree of error in the process.) I was looking for a simple explanation, but it appears this is not a simple subject. Thanks 2601:181:8301:4510:54EA:A058:77AF:BF1B (talk) 13:15, 24 June 2016 (UTC)

Example
In the example it reads:


 * $$m_i\left(\alpha^i\right) \bmod \left(x^4 + x + 1\right) = 0.$$

Seems an error to me. Madyno (talk) 21:46, 14 October 2021 (UTC)


 * - that part of the example is correct. The confusing part is stating that $$\alpha = x$$, but then reusing $$x$$ on the following lines for $$m_1\left(x\right)$$, $$m_2\left(x\right)$$, ... . It might have been clearer if another variable was used. For example, $$m_2\left(z = x^2\right) = z^4 + z + 1 = x^8 + x^2 + 1$$, and $$x^8 + x^2 + 1 \bmod \left(x^4 + x + 1\right) = 0.$$ Rcgldr (talk) 20:49, 15 October 2021 (UTC)
 * Thank you for your explanation. However, being a mathematician, it makes no sense to me. $$x^4 + x + 1$$ looks like a polynomial, but if it is meant to be the number $$\alpha^4 + \alpha + 1$$, why not say so. Madyno (talk) 21:16, 15 October 2021 (UTC)
 * - for this article, a finite field $$GF(2^n)$$ is defined by a reducing polynomial of degree $$n$$. In the example, the reducing polynomial for $$GF(2^4)$$ is $$x^4 + x + 1$$, with a primitive element value $$\alpha = x$$. There are a total of $$\phi(2^4-1)=8$$ primitive element values: $$x, x+1, x^2, x^2+1, x^3+1, x^3+x+1, x^3+x^2+1, x^3+x^2+x$$, but using $$\alpha = x$$ is the simplest choice. In the following lines, $$x$$ is redefined as a power of alpha, which is why I think switching to using $$z$$ would have made this simpler to understand, with the goal of finding the minimum polynomial that satisfies $$m_i\left(z = \alpha^i\right) \bmod \left(x^4 + x + 1\right) = 0$$, since it's the 1 bit coefficients of $$m_i\left(z\right)$$ that define the minimum polynomials. Rcgldr (talk) 23:20, 15 October 2021 (UTC)
 * Thanks again, I do understand this all, except for the statement: $$\alpha = x$$. As far as I know, $$\alpha$$ is a primitive element. The other 7 are $$\alpha+1, \alpha^2, \alpha^2+1, \alpha^3+1, \alpha^3+\alpha+1, \alpha^3+\alpha^2+1, \alpha^3+\alpha^2+\alpha$$. Extending $$GF(2)$$ with any one of these will generate $$GF(2^4)$$. The confusing part is the (possible) redefinition of the generic variable $$x$$ as being the same as $$\alpha$$ from thereon.Madyno (talk) 11:13, 16 October 2021 (UTC)

We're running around in circles. Compare it with the complex numbers. Extend the reals with a root of the polynomial $$x^2 + 1$$. Call this root $$i$$. As far as I know, no one will say $$i=x$$. Of course there is the sloppy way of saying: for $$x=i$$ it holds that $$x^2 + 1 = 0$$. This doesn't mean $$x$$ and $$i$$ are the same, but merely: if you give $$x$$ the value $$i$$, the polynomial $$x^2 + 1$$ will have the value 0. No more than that. Madyno (talk) 20:02, 16 October 2021 (UTC)
 * - To avoid having two meanings for x, I changed the terms for the reducing polynomial from x to z, following the example on page 6 of bch_code.pdf. Rcgldr (talk) 02:43, 17 October 2021 (UTC)
 * Defining the 8 primitive elements as $$\{\alpha, \alpha+1, \alpha^2, \alpha^2+1, \alpha^3+1, \alpha^3+\alpha+1, \alpha^3+\alpha^2+1, \alpha^3+\alpha^2+\alpha\}$$ assumes $$\alpha(x) = x$$, so why not just define the 8 primitive elements as $$\{x, x+1, x^2, x^2+1, x^3+1, x^3+x+1, x^3+x^2+1, x^3+x^2+x\}$$, which is what I see in textbooks for error correction code. Since I've switched the terms for the reducing polynomial from x to z, then $$\alpha(z) = z$$, matches the table on page 6 of bch_code.pdf, which lists $$\alpha^1 = z = 0010_2 = 2$$. Rcgldr (talk) 12:11, 17 October 2021 (UTC)
 * The point is you're confusing different representations of $$K=GF(2^4)$$. One way is the 16 four-bit strings: 0000, 0001, ..., 1111. They form a variant of the 16 vectors: (0,0,0,0), (0,0,0,1), ..., (1,1,1,1), showing $$K$$ as a 4-dimensional vector space over $$GF(2)$$. Another, equivalent, representation is with the polynomials $$a_3x^3 + a_2x^2 + a_1x + a_0$$ where $$a=(a_0,a_1,a_2,a_3)$$ with $$a_i\in GF(2)$$ is one of the vectors. Multiplication, also for the vector representation, is defined as for polynomials, but modulo for instance $$x^4 + x + 1$$ or another irreducible polynomial of degree 4. A third way is, as I wrote, extending $$GF(2)$$ with a primitive element $$\alpha$$, a root of an irreducible polynomial over $$GF(2)$$ with degree 4, for instance $$x^4 + x + 1$$. In the representation with polynomials, the polynomial $$x$$ is a primitive root of unity. It generates $$K$$. The successive powers are:
 * $$x(=0010)$$
 * $$x^2(=0100)$$
 * $$x^3(=1000)$$
 * $$x^4=x+1(=0011)$$
 * $$x^5=x^2+x(=0110)$$
 * $$x^6=x^3+x^2(=1100)$$
 * $$x^7=x^4+x^3=x^3+x+1(=1011)$$
 * $$x^8=x^4+x^2+x=x^2+1(=0101)$$
 * $$x^9=x^3+x(=1010)$$
 * $$x^{10}=x^4+x^2=x^2+x+1(=0111)$$
 * $$x^{11}=x^3+x^2+x(=1110)$$
 * $$x^{12}=x^4+x^3+x^2=x^3+x^2+x+1(=1111)$$
 * $$x^{13}=x^4+x^3+x^2+x=x^3+x^2+1(=1101)$$
 * $$x^{14}=x^4+x^3+x=x^3+1(=1001)$$
 * $$x^{15}=x^4+x=1(=0001)$$


 * Hence $$x^2$$ is also a primitive root of unity. But $$x^3$$ is not: $$(x^3)^5=x^{15}=1$$. Madyno (talk) 19:31, 17 October 2021 (UTC)
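The table of powers above checks out numerically; a quick Python verification (my own sketch), with field elements stored as 4-bit integers and the polynomial $$x$$ represented by 0010 = 2:

```python
def gf_mul(a, b):
    # GF(2^4) multiply, reducing polynomial x^4 + x + 1 (0b10011)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

# successive powers x^1 .. x^15, matching the list above
powers, p = [], 1
for _ in range(15):
    p = gf_mul(p, 2)
    powers.append(p)
assert powers == [0b0010, 0b0100, 0b1000, 0b0011, 0b0110, 0b1100,
                  0b1011, 0b0101, 0b1010, 0b0111, 0b1110, 0b1111,
                  0b1101, 0b1001, 0b0001]

def order(e):
    # multiplicative order of a non-zero field element
    k, p = 1, e
    while p != 1:
        p = gf_mul(p, e)
        k += 1
    return k

assert order(0b0100) == 15  # x^2 is primitive
assert order(0b1000) == 5   # x^3 is not: (x^3)^5 = 1
```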


 * Most (if not all) of my textbooks for error correcting codes define a primitive element $$\alpha$$ as a polynomial in x, such as $$\{x, x+1, ..., x^3+x^2+x\}$$. Getting back to the article, changing the terms of the reducing polynomial from x to z, should eliminate the confusing part of the article where x previously had two meanings. Rcgldr (talk) 20:35, 17 October 2021 (UTC)

In the end the confusion arises from the statement $$\alpha=x$$, where $$\alpha$$ is a polynomial. Formally this should read: $$\alpha(x)=x$$. Madyno (talk) 07:27, 18 October 2021 (UTC)
 * I made your suggested change $$\alpha(z)=z$$. My confusion about this is due to textbooks and articles I've read. One of my textbooks, Error Control Coding by Shu Lin and Daniel J Costello Jr, has an example in $$GF(2^4)$$ where they define $$\alpha$$ as a binary value: $$\alpha = 0010_2$$. The pdf file I linked to above just lists powers of $$\alpha$$ in a table, where $$\alpha^1 = z$$. In the textbook Error Correcting Codes by Peterson and Weldon, for $$GF(2)$$ modulo $$X^4+X+1$$ ... "let $$\alpha$$ denote the residue class that contains X", followed by a table where $$\alpha = 0010_2$$. None of these use the syntax $$\alpha(z) = z$$, even though that is more mathematically correct. The books and articles target implementations rather than theory. Rcgldr (talk) 08:10, 18 October 2021 (UTC)
 * If interested, I have example code that matches this wiki article for 3 error correction, using Sugiyama's extended Euclid decoder (note the target audience for this was a hardware group, so the decoder emulates a pair of shift registers). bch15.c. It is written for Visual Studio and used intrinsics for the carryless multiply instruction, PCLMULQDQ. Rcgldr (talk) 08:18, 18 October 2021 (UTC)
 * Okay, I rest my case. Madyno (talk) 12:19, 18 October 2021 (UTC)

Scope of Galois field
Should the scope of the Galois field be $$q^m$$ rather than $$q$$? RossBoswell (talk) 21:00, 23 April 2024 (UTC)

Decoding based on extended Euclidean algorithm
Is there a reason for using matrix-style notation versus the simpler notation shown in the other Wikipedia articles about the extended Euclidean algorithm or the Reed–Solomon implementation of the extended Euclidean algorithm? For decoding purposes, there is no need to calculate t as seen in Extended Euclidean algorithm, which results in the still simpler Reed–Solomon error correction, which renames s to A. There is also the issue of always using $$\alpha^i$$, or using negative constant (versus variable) powers such as $$\alpha^{-7}$$ instead of $$\alpha^8$$; it's not clear that $$\alpha^{4}+\alpha^{-6}+\alpha^{-1} = 0$$, or that $$2\alpha^{7}$$ means $$\alpha^{7}+\alpha^{7} = 0$$, since the syntax is also used for actual multiplication versus repeated addition depending on context. It could be simpler to follow if the example just chose a specific instance of $$GF(2^4)$$ and used 4-bit binary numbers such as $$0101_2$$ where appropriate. Rcgldr (talk) 04:43, 11 July 2024 (UTC)
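On the notation point: in a concrete $$GF(2^4)$$ (reducing polynomial $$x^4 + x + 1$$, elements as 4-bit integers), the identities in question are easy to check in Python (my own sketch, not from the article):

```python
def gf_mul(a, b):
    # GF(2^4), reducing polynomial x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
        b >>= 1
    return r

def alpha_pow(e):
    # alpha = x = 2; exponents are taken mod 15 since alpha^15 = 1
    r = 1
    for _ in range(e % 15):
        r = gf_mul(r, 2)
    return r

# alpha^-7 is the same element as alpha^8 ...
assert gf_mul(alpha_pow(7), alpha_pow(8)) == 1
assert alpha_pow(-7) == alpha_pow(8)
# ... and "2*alpha^7" read as repeated addition is alpha^7 + alpha^7 = 0
assert alpha_pow(7) ^ alpha_pow(7) == 0
```

With a fixed field like this, every $$\alpha^{-k}$$ can be printed as the 4-bit value of $$\alpha^{15-k}$$, which is essentially the suggestion above.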