Redheffer matrix

In mathematics, a Redheffer matrix, often denoted $$A_n$$, is a square (0,1) matrix whose entries $$a_{ij}$$ are 1 if i divides j or if j = 1, and 0 otherwise. It is useful in some contexts to express Dirichlet convolutions, or convolved divisor sums, in terms of matrix products involving the transpose of the $$n^{th}$$ Redheffer matrix.

Variants and definitions of component matrices
Since the invertibility of the Redheffer matrices is complicated by the initial column of ones, it is often convenient to write $$A_n := C_n + D_n$$, where $$C_n := [c_{ij}]$$ is the (0,1) matrix whose entries are one if and only if $$j=1$$ and $$i \neq 1$$. The remaining one-valued entries of $$A_n$$ then correspond to the divisibility condition, reflected by the matrix $$D_n$$, which, as an application of Möbius inversion shows, is always invertible with inverse $$D_n^{-1} = \left[\mu(j/i) M_i(j)\right]$$. This yields a characterization of the singularity of $$A_n$$ expressed by $$\det\left(A_n\right) = \det\left(D_n^{-1}C_n+I_n\right).$$

If we define the function


 * $$M_j(i) := \begin{cases} 1, & \text{if } j \mid i; \\ 0, & \text{otherwise,} \end{cases}$$

then we can define the $$n^{th}$$ Redheffer (transpose) matrix to be the $$n \times n$$ square matrix $$R_n = [M_j(i)]_{1 \leq i,j \leq n}$$ in the usual matrix notation. We will continue to make use of this notation throughout the next sections.
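As a concrete illustration (a minimal sketch, not from the source; the function names are my own), both $$A_n$$ and the transpose variant $$R_n$$ can be generated directly from their defining conditions:

```python
def redheffer(n):
    """Return A_n: a_ij = 1 if i divides j or j == 1, else 0 (1-indexed)."""
    return [[1 if (j == 1 or j % i == 0) else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def redheffer_transpose(n):
    """Return R_n = [M_j(i)]: entry 1 exactly when j divides i."""
    return [[1 if i % j == 0 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]
```

For example, `redheffer(4)` reproduces the top-left $$4 \times 4$$ corner of the $$12 \times 12$$ example shown below.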

Examples
The matrix below is the 12 × 12 Redheffer matrix. In the split sum-of-matrices notation $$A_{12} := C_{12} + D_{12}$$, the entries below corresponding to the initial column of ones in $$C_{12}$$ are marked in blue.
 * $$\left(\begin{matrix}

1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ {\color{blue}\mathbf{1}} & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ {\color{blue}\mathbf{1}} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ {\color{blue}\mathbf{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{matrix}\right)$$

A corresponding application of the Möbius inversion formula shows that the $$n^{th}$$ Redheffer transpose matrix is always invertible, with inverse entries given by


 * $$R_n^{-1} = \left[M_j(i) \cdot \mu\left(\frac{i}{j}\right)\right]_{1 \leq i,j \leq n},$$

where $$\mu(n)$$ denotes the Möbius function. In this case, the $$12 \times 12$$ inverse Redheffer transpose matrix is given by


 * $$R_{12}^{-1} = \left(\begin{matrix}

1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{matrix}\right)$$
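This inverse can be verified numerically; the self-contained sketch below (my own code, not from the source; the `mobius` and `matmul` helpers are ad hoc) builds $$R_n$$ and the claimed inverse $$[M_j(i)\,\mu(i/j)]$$ and confirms their product is the identity:

```python
def mobius(n):
    """Möbius function by trial division (adequate for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0            # squared prime factor => mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 12
R = [[1 if i % j == 0 else 0 for j in range(1, n + 1)] for i in range(1, n + 1)]
Rinv = [[mobius(i // j) if i % j == 0 else 0 for j in range(1, n + 1)]
        for i in range(1, n + 1)]
identity = [[int(i == j) for j in range(n)] for i in range(n)]
assert matmul(R, Rinv) == identity
```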

Determinants
The determinant of the n × n Redheffer matrix is given by the Mertens function M(n); that is, $$\det(A_n) = M(n)$$. In particular, the matrix $$A_n$$ is not invertible precisely when the Mertens function is zero. As a corollary of the disproof of the Mertens conjecture, it follows that the Mertens function changes sign, and is therefore zero, infinitely many times, so the Redheffer matrix $$A_n$$ is singular at infinitely many natural numbers.

The determinants of the Redheffer matrices are immediately tied to the Riemann Hypothesis through this relation with the Mertens function, since the Hypothesis is equivalent to showing that $$M(x) = O\left(x^{1/2+\varepsilon}\right)$$ for all (sufficiently small) $$\varepsilon > 0$$.
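The determinant identity $$\det(A_n) = M(n)$$ can be sanity-checked for small n; the sketch below (my own code, using sympy for exact integer arithmetic; helper names are mine) compares both sides for n up to 12:

```python
from sympy import Matrix, mobius

def redheffer(n):
    """A_n as a sympy Matrix (0-indexed lambda over 1-indexed conditions)."""
    return Matrix(n, n, lambda i, j: 1 if (j == 0 or (j + 1) % (i + 1) == 0) else 0)

def mertens(n):
    """Mertens function M(n) = sum of mu(k) for k <= n."""
    return sum(mobius(k) for k in range(1, n + 1))

for n in range(1, 13):
    assert redheffer(n).det() == mertens(n)
```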

Factorizations of sums encoded by these matrices
In a somewhat unconventional construction that reinterprets the (0,1) matrix entries as denoting inclusion in an increasing sequence of index sets, we can see that these matrices are also related to factorizations of Lambert series. For a fixed arithmetic function f, the coefficients of the Lambert series expansion over f provide a so-called inclusion mask for the indices over which we sum f to arrive at the series coefficients of these expansions. Notably, observe that


 * $$\sum_{d|n} f(d) = \sum_{k=1}^n M_k(n) \cdot f(k) = [q^n]\left(\sum_{n \geq 1} \frac{f(n) q^n}{1-q^n}\right).$$
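This identity is straightforward to verify by truncated series expansion; the sketch below (my own code, with `f(k) = k**2` as an arbitrary test function) compares both sides for n up to 12:

```python
def lambert_coefficients(f, N):
    """Coefficients [q^0..q^N] of sum_{m>=1} f(m) * q^m / (1 - q^m)."""
    coeffs = [0] * (N + 1)
    for m in range(1, N + 1):
        # q^m/(1-q^m) = q^m + q^{2m} + ... contributes f(m) at multiples of m
        for n in range(m, N + 1, m):
            coeffs[n] += f(m)
    return coeffs

f = lambda k: k * k
c = lambert_coefficients(f, 12)
for n in range(1, 13):
    divisor_sum = sum(f(d) for d in range(1, n + 1) if n % d == 0)
    assert c[n] == divisor_sum
```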

Now in the special case of these divisor sums, which the above expansion shows are codified by boolean (zero-one valued) inclusion in the sets of divisors of a natural number n, it is possible to reinterpret the Lambert series generating functions that enumerate these sums via yet another matrix-based construction. Namely, Merca and Schmidt (2017–2018) proved invertible matrix factorizations expanding these generating functions in the form


 * $$\sum_{n \geq 1} \frac{f(n) q^n}{1-q^n} = \frac{1}{(q; q)_{\infty}} \sum_{n \geq 1} \left(\sum_{k=1}^n s_{n,k} f(k)\right) q^n,$$

where $$(q; q)_{\infty}$$ denotes the infinite q-Pochhammer symbol and where the lower triangular matrix sequence is generated exactly as the coefficients $$s_{n,k} = [q^n] \frac{q^k}{1-q^k} (q; q)_{\infty}$$, though these terms also have interpretations as differences of special even (odd) indexed partition functions. Merca and Schmidt (2017) also proved a simple inversion formula which allows the implicit function f to be expressed as a sum over the convolved coefficients $$\ell(n) = (f \ast 1)(n)$$ of the original Lambert series generating function, in the form


 * $$f(n) = \sum_{d|n} \sum_{k=1}^n p(d-k) \mu(n/d) \left[\sum_{j \geq 0 \atop k-j \geq 0} \ell(k - j) [q^j](q; q)_{\infty}\right],$$

where p(n) denotes the partition function, $$\mu(n)$$ is the Möbius function, and the coefficients of $$(q; q)_{\infty}$$ inherit a quadratic dependence on j through the pentagonal number theorem. This inversion formula is compared with the inverses (when they exist) of the Redheffer matrices $$A_n$$ for completeness here.
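The factorization can be checked with truncated polynomial arithmetic; in the sketch below (my own code, with the test function f(k) = k and truncation order N = 16 as arbitrary choices), the coefficients $$s_{n,k}$$ are read off from the product $$\frac{q^k}{1-q^k} (q; q)_{\infty}$$, and the coefficient identity behind the displayed factorization is verified term by term:

```python
N = 16

def poly_mul(a, b):
    """Multiply two coefficient lists, truncating at order N."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    out[i + j] += ai * bj
    return out

# Truncated (q; q)_inf = prod_{m=1}^{N} (1 - q^m), exact up to q^N.
qpoch = [1] + [0] * N
for m in range(1, N + 1):
    factor = [0] * (N + 1)
    factor[0], factor[m] = 1, -1
    qpoch = poly_mul(qpoch, factor)

def geometric(k):
    """Truncated q^k/(1-q^k) = q^k + q^{2k} + ..."""
    g = [0] * (N + 1)
    for n in range(k, N + 1, k):
        g[n] = 1
    return g

# s[n-1][k-1] = s_{n,k} = [q^n] q^k/(1-q^k) * (q; q)_inf
cols = [poly_mul(geometric(k), qpoch) for k in range(1, N + 1)]
s = [[cols[k - 1][n] for k in range(1, N + 1)] for n in range(1, N + 1)]

f = lambda k: k                      # arbitrary test function
lambert = [0] * (N + 1)
for k in range(1, N + 1):
    for n in range(k, N + 1, k):
        lambert[n] += f(k)           # Lambert series coefficients of f
lhs = poly_mul(qpoch, lambert)       # (q; q)_inf times the Lambert series
for n in range(1, N + 1):
    assert lhs[n] == sum(s[n - 1][k - 1] * f(k) for k in range(1, n + 1))
```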

Since the underlying so-termed mask matrix that specifies the inclusion of indices in the divisor sums at hand is invertible, this type of construction need not be limited to the Redheffer-like matrices for the forms classically studied here. For example, in 2018 Mousavi and Schmidt extended such matrix-based factorization lemmas to the cases of Anderson–Apostol divisor sums (of which Ramanujan sums are a notable special case) and to sums indexed over the integers relatively prime to each n (for example, the tally classically denoted by the Euler phi function). More to the point, the examples considered in the applications section below suggest a study of the properties of what can be considered generalized Redheffer matrices representing other special number-theoretic sums.

Spectral radius and eigenspaces

 * If we denote the spectral radius of $$A_n$$ by $$\rho_n$$, i.e., the dominant maximum modulus eigenvalue in the spectrum of $$A_n$$, then
 * $$\lim_{n \rightarrow \infty} \frac{\rho_n}{\sqrt{n}} = 1,$$

which bounds the asymptotic behavior of the spectrum of $$A_n$$ when n is large. It can also be shown that $$1+\sqrt{n-1} \leq \rho_n < \sqrt{n}+O(\log n)$$, and by a careful analysis (see the characteristic polynomial expansions below) that $$\rho_n = \sqrt{n} + \log\sqrt{n} + O(1)$$.
 * The matrix $$A_n$$ has eigenvalue one with multiplicity $$n - \left\lfloor \log_2(n) \right\rfloor - 1$$.
 * The dimension of the eigenspace $$E_{\lambda}(A_n)$$ corresponding to the eigenvalue $$\lambda := 1$$ is known to be $$\left\lfloor \frac{n}{2} \right\rfloor - 1$$. In particular, this implies that $$A_n$$ is not diagonalizable whenever $$n \geq 5$$.
 * For all other eigenvalues $$\lambda \neq 1$$ of $$A_n$$, the dimension of the corresponding eigenspace $$E_{\lambda}(A_n)$$ is one.
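The stated multiplicity of the eigenvalue one can be checked exactly for small n; the sketch below (my own code, using sympy's characteristic polynomial; `multiplicity_of_one` is an ad hoc helper) divides out factors of (x − 1):

```python
import math
from sympy import Matrix, Poly, symbols

x = symbols('x')

def redheffer(n):
    return Matrix(n, n, lambda i, j: 1 if (j == 0 or (j + 1) % (i + 1) == 0) else 0)

def multiplicity_of_one(n):
    """Algebraic multiplicity of the eigenvalue 1 of A_n."""
    p = Poly(redheffer(n).charpoly(x).as_expr(), x)
    m = 0
    while p.eval(1) == 0:
        p = p.quo(Poly(x - 1, x))
        m += 1
    return m

for n in range(2, 13):
    assert multiplicity_of_one(n) == n - int(math.log2(n)) - 1
```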

Characterizing eigenvectors
We have that $$[a_1,a_2,\ldots,a_n]$$ is an eigenvector of $$A_n^{T}$$ corresponding to some eigenvalue $$\lambda \in \sigma(A_n)$$ in the spectrum of $$A_n$$ if and only if for $$n \geq 2$$ the following two conditions hold:


 * $$\lambda a_n = \sum_{d|n} a_d \quad\text{ and }\quad \lambda a_1 = \sum_{k=1}^n a_k.$$

If we restrict ourselves to the so-called non-trivial cases where $$\lambda \neq 1$$, then given any initial eigenvector component $$a_1$$ we can recursively compute the remaining n-1 components according to the formula


 * $$a_j = \frac{1}{\lambda-1} \sum_{d|j \atop d < j} a_d.$$

With this in mind, for $$\lambda \neq 1$$ we can define the sequences $$v_{\lambda}$$ by


 * $$v_{\lambda}(n) := \begin{cases} 1, & n = 1; \\ \frac{1}{\lambda-1} \sum_{d|n \atop d \neq n} v_{\lambda}(d), & n \geq 2. \end{cases}$$

There are a couple of curious implications related to the definitions of these sequences. First, we have that $$\lambda \in \sigma(A_n)$$ if and only if


 * $$\sum_{k=1}^n v_{\lambda}(k) = \lambda.$$
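This eigenvalue criterion can be illustrated numerically; in the sketch below (my own code, using the dominant eigenvalue of $$A_{12}$$, with tolerances chosen heuristically), $$v_{\lambda}$$ is built by the recursion and its partial sum is checked against $$\lambda$$:

```python
import numpy as np

n = 12
A = np.array([[1 if (j == 1 or j % i == 0) else 0
               for j in range(1, n + 1)] for i in range(1, n + 1)], float)
# The dominant eigenvalue of the nonnegative matrix A_n is real (Perron),
# so the maximum of the real parts recovers it.
lam = max(np.linalg.eigvals(A).real)

v = [0.0] * (n + 1)
v[1] = 1.0
for m in range(2, n + 1):
    # v_lam(m) = (1/(lam-1)) * sum of v_lam(d) over proper divisors d of m
    v[m] = sum(v[d] for d in range(1, m) if m % d == 0) / (lam - 1)

assert abs(sum(v[1:]) - lam) < 1e-6
```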

Secondly, for fixed $$\lambda \neq 1$$ we have an established formula for the Dirichlet series, or Dirichlet generating function, over these sequences, valid for all $$\Re(s) > 1$$:


 * $$ \sum_{n \geq 1} \frac{v_{\lambda}(n)}{n^s} = \frac{\lambda-1}{\lambda-\zeta(s)},$$

where $$\zeta(s)$$ denotes, as usual, the Riemann zeta function.

Bounds and properties of non-trivial eigenvalues
A graph-theoretic interpretation of evaluating the zeros of the characteristic polynomial of $$A_n$$ and bounding its coefficients is given in the literature, as are estimates of the sizes of the Jordan blocks of $$A_n$$ corresponding to the eigenvalue one. A brief overview of the properties of a modified approach to factorizing the characteristic polynomial, $$p_{A_n}(x)$$, of these matrices is given here, without the full scope of the somewhat technical proofs justifying the bounds from the references cited above. Namely, let $$s := \lfloor \log_2(n) \rfloor$$ and define a sequence of auxiliary polynomial expansions according to the formula


 * $$f_n(t) := \frac{p_{A_n}(t+1)}{t^{n-s-1}} = t^{s+1} - \sum_{k=1}^s v_{nk} t^{s-k}.$$

Then we know that $$f_n(t)$$ has two real roots, denoted by $$t_n^{\pm}$$, which satisfy


 * $$t_n^{\pm} = \pm \sqrt{n} + \log\sqrt{n} + \gamma - \frac{3}{2} + O\left(\frac{\log^2(n)}{\sqrt{n}}\right),$$

where $$\gamma \approx 0.577216$$ is the Euler–Mascheroni constant, and where the remaining coefficients of these polynomials are bounded by


 * $$ |v_{nk}| \leq \frac{n \cdot \log^{k-1}(n)}{(k-1)!}.$$

The remaining zeros of $$f_n(t)$$, not characterized by these two dominant real roots, are remarkably size-constrained by comparison: a plot reproduced in the freely available article cited above, for $$n \sim 10^6$$, shows only the roughly 20 remaining complex zeros, all of comparatively small modulus.
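These spectral estimates can be probed numerically; the sketch below (my own code, with a loose tolerance chosen heuristically for the unquantified O(1) term) checks the firm lower bound $$1+\sqrt{n-1} \leq \rho_n$$ and the leading-order asymptotic $$\rho_n \approx \sqrt{n} + \log\sqrt{n}$$ for a few moderate n:

```python
import math
import numpy as np

def spectral_radius(n):
    """Numerical spectral radius of the n x n Redheffer matrix A_n."""
    A = np.array([[1 if (j == 1 or j % i == 0) else 0
                   for j in range(1, n + 1)] for i in range(1, n + 1)], float)
    return max(abs(np.linalg.eigvals(A)))

for n in (50, 100, 200):
    rho = spectral_radius(n)
    assert rho >= 1 + math.sqrt(n - 1)
    # Loose check of rho_n = sqrt(n) + log(sqrt(n)) + O(1).
    assert abs(rho - math.sqrt(n) - math.log(math.sqrt(n))) < 3.0
```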

Applications and generalizations
We provide a few examples of the utility of the Redheffer matrices interpreted as (0,1) matrices whose entries denote inclusion in an increasing sequence of index sets. These examples should serve to freshen up some of the at-times dated historical perspective of these matrices as footnote-worthy chiefly by virtue of the inherent, and deep, relation of their determinants to the Mertens function and equivalent statements of the Riemann Hypothesis. This interpretation is a great deal more combinatorial in construction than typical treatments of the special Redheffer matrix determinants. Nonetheless, this combinatorial twist on enumerating special sequences of sums has been explored more recently in a number of papers and is a topic of active interest in pre-print archives. Before diving into the full construction of this spin on the Redheffer matrix variants $$R_n$$ defined above, observe that this type of expansion is in many ways just another variation on the use of a Toeplitz matrix to represent truncated power series, where the matrix entries are coefficients of the formal variable in the series. Let's explore an application of this view of a (0,1) matrix as masking the inclusion of summation indices in a finite sum over some fixed function. See the references cited for existing generalizations of the Redheffer matrices to general arithmetic function cases; there, the inverse matrix terms are referred to as a generalized Möbius function within the context of sums of this type.

Matrix products expanding Dirichlet convolutions and Dirichlet inverses
First, given any two non-identically-zero arithmetic functions f and g, we can provide explicit matrix representations which encode their Dirichlet convolution in rows indexed by the natural numbers $$1 \leq n \leq x$$:


 * $$D_{f,g}(x) := \left[M_d(n) f(d) g(n/d)\right]_{1 \leq d,n \leq x} = \begin{bmatrix} 0 & 0 & \cdots & 0 & g(x) \\ 0 & 0 & \cdots & g(x-1) & g(x) \\ \ldots & \ldots & \ddots & \ddots & \cdots \\ g(1) & g(2) & \cdots & g(x-1) & g(x) \end{bmatrix} \begin{bmatrix} 0 & 0 & \cdots & 0 & f(1) \\ 0 & 0 & \cdots & f(2) & f(1) \\ \ldots & \ldots & \ddots & \ddots & \cdots \\ f(x) & f(x-1) & \cdots & f(2) & f(1) \end{bmatrix} R_x^{T}. $$

Then, letting $$e^{T} := [1,1,\ldots,1]$$ denote the vector of all ones, it is easily seen that the $$n^{th}$$ entry of the vector-matrix product $$e^{T} \cdot D_{f,g}(x)$$ gives the convolved Dirichlet sum


 * $$(f \ast g)(n) = \sum_{d|n} f(d) g(n/d),$$

for all $$1 \leq n \leq x$$ where the upper index $$x \geq 2$$ is arbitrary.
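In practice, each entry of that product is just a divisor-masked sum; a minimal sketch (my own code, skipping the explicit Toeplitz factors) computes the convolution for all n ≤ x directly from the mask $$M_d(n)$$:

```python
def dirichlet_convolution(f, g, x):
    """Return [(f*g)(1), ..., (f*g)(x)] using the divisor mask M_d(n)."""
    return [sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)
            for n in range(1, x + 1)]

# Example: 1 * 1 = d(n), the number-of-divisors function.
one = lambda k: 1
assert dirichlet_convolution(one, one, 8) == [1, 2, 2, 3, 2, 4, 2, 4]
```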

One task that is particularly onerous given an arbitrary function f is to determine its Dirichlet inverse exactly without resorting to the standard recursive definition of this function, yet another convolved divisor sum involving the same function f and its as-yet-undetermined inverse:



 * $$f^{-1}(n) = \frac{-1}{f(1)} \mathop{\sum_{d \mid n}}_{d < n} f\left(\frac{n}{d}\right) f^{-1}(d), \quad n > 1, \text{ where } f^{-1}(1) := 1 / f(1).$$

It is clear that in general the Dirichlet inverse $$f^{-1}(n)$$ of f, i.e., the uniquely defined arithmetic function such that $$(f^{-1} \ast f)(n) = \delta_{n,1}$$, involves sums of nested divisor sums of depth from one to $$\omega(n)$$, where $$\omega(n)$$ is the prime omega function counting the number of distinct prime factors of n. As this example shows, we can formulate an alternate way to construct the Dirichlet inverse function values via matrix inversion with our variant Redheffer matrices, $$R_n$$.
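The displayed recursion is straightforward to implement; the sketch below (my own code, using exact rational arithmetic) computes $$f^{-1}$$ and checks the classical fact that the Dirichlet inverse of the constant function 1 is the Möbius function:

```python
from fractions import Fraction

def dirichlet_inverse(f, x):
    """Return dict n -> f^{-1}(n) for 1 <= n <= x (requires f(1) != 0)."""
    inv = {1: Fraction(1, 1) / f(1)}
    for n in range(2, x + 1):
        # Sum over proper divisors d < n with d | n, per the recursion.
        s = sum(f(n // d) * inv[d] for d in range(1, n) if n % d == 0)
        inv[n] = -s / f(1)
    return inv

one = lambda k: Fraction(1)
inv = dirichlet_inverse(one, 12)
# Möbius values mu(1..12):
assert [inv[n] for n in range(1, 13)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0]
```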

Generalizations of the Redheffer matrix forms: GCD sums and other matrices whose entries denote inclusion in special sets
Several often-cited articles establish expansions of number-theoretic divisor sums, convolutions, and Dirichlet series (to name a few) through matrix representations. Besides non-trivial estimates on the corresponding spectra and eigenspaces, the underlying machinery in representing sums of these forms by matrix products is to define a so-termed masking matrix whose zero-or-one valued entries denote inclusion in an increasing sequence of subsets of the natural numbers $$\{1,2,\ldots,n\}$$. To see that this jargon sets up a matrix-based system for representing a wide range of special summations, consider the following construction: let $$\mathcal{A}_n \subseteq [1, n] \cap \mathbb{Z}$$ be a sequence of index sets, and for any fixed arithmetic function $$f: \mathbb{N} \longrightarrow \mathbb{C}$$ define the sums


 * $$S_{\mathcal{A},f}(n) \mapsto S_f(n) := \sum_{k \in \mathcal{A}_n} f(k).$$

One of the classes of sums considered by Mousavi and Schmidt (2017) defines the relatively prime divisor sums by setting the index sets in the last definition to be


 * $$\mathcal{A}_n \mapsto \mathcal{G}_n := \{1 \leq d \leq n: \gcd(d, n) = 1\}.$$

This class of sums can be used to express important special arithmetic functions of number theoretic interest, including Euler's phi function (where classically we define $$m := 0$$) as


 * $$\varphi(n) = \sum_{d \in \mathcal{G}_n} d^m,$$

and even the Mobius function through its representation as a discrete (finite) Fourier transform:


 * $$\mu(n) = \sum_{\stackrel{1\le k \le n }{ \gcd(k,\,n)=1}} e^{2\pi i \frac{k}{n}}.$$
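Both identities are easy to verify computationally; a minimal sketch (my own code, not from the source) follows:

```python
import cmath
import math

def coprime_set(n):
    """The index set G_n = {1 <= k <= n : gcd(k, n) = 1}."""
    return [k for k in range(1, n + 1) if math.gcd(k, n) == 1]

def phi(n):
    """Euler phi as the gcd sum with exponent m = 0."""
    return sum(k ** 0 for k in coprime_set(n))

def mu_via_fourier(n):
    """Möbius function as a sum of primitive n-th roots of unity."""
    total = sum(cmath.exp(2j * cmath.pi * k / n) for k in coprime_set(n))
    return round(total.real)   # the sum is a real integer up to fp error

assert [phi(n) for n in range(1, 11)] == [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]
assert [mu_via_fourier(n) for n in range(1, 11)] == [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```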

Citations in the full paper provide other examples of this class of sums, including applications to cyclotomic polynomials (and their logarithms). The referenced article by Mousavi and Schmidt (2017) develops a factorization-theorem-like treatment of these sums, analogous to the Lambert series factorization results given in the previous section. The associated matrices and their inverses for this definition of the index sets $$\mathcal{A}_n$$ then allow us to perform the analog of Möbius inversion for divisor sums, which can be used to express the summand functions f as a quasi-convolved sum over the inverse matrix entries and the left-hand-side special functions, such as $$\varphi(n)$$ or $$\mu(n)$$, from the last pair of examples. These inverse matrices have many curious properties (a good reference pulling together a summary of all of them is currently lacking) which are best conveyed to new readers by inspection. With this in mind, consider the case of the upper index $$x := 21$$ and the relevant matrices defined for this case as follows:


 * $$\left(\begin{smallmatrix}

1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\ \end{smallmatrix}\right)^{-1} = \left(\begin{smallmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 
& 0 \\ -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 2 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 1 & 0 & -1 & 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & -2 & 0 & -2 & 0 & 2 & 0 & -1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -3 & 0 & 1 & 0 & 3 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -2 & 0 & 0 & 1 & 0 & 0 & 1 & -1 & 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 \\ -3 & 0 & 2 & 0 & 2 & 0 & -2 & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 3 & 0 & -2 & 0 & -2 & 0 & 2 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & -1 & 1 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & -1 & 1 & 1 & 0 & -1 & 2 & -1 & -1 & 1 & -1 & 1 & 1 & -1 & 0 & 0 & -1 & 1 \\ \end{smallmatrix}\right)$$
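The displayed example can be reconstructed programmatically; in the hedged sketch below (my own reading of the example, assuming rows correspond to n = 2, ..., x and columns to k = 1, ..., x − 1), the coprimality mask matrix is lower triangular with unit diagonal, so its exact inverse is integer-valued, matching the right-hand side above:

```python
from math import gcd
from sympy import Matrix

x = 21
# Entry (row n = i+2, column k = j+1) is 1 iff k <= n and gcd(k, n) = 1.
T = Matrix(x - 1, x - 1,
           lambda i, j: 1 if (j + 1) <= (i + 2) and gcd(j + 1, i + 2) == 1 else 0)
Tinv = T.inv()
# All inverse entries are integers, as in the displayed 20 x 20 example.
assert all(entry == int(entry) for entry in Tinv)
```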

Examples of invertible matrices which define other special sums with non-standard, but clear, applications should be catalogued in this generalizations section for completeness. An existing summary of inversion relations, and in particular of exact criteria under which sums of these forms can be inverted and related, is found in many references on orthogonal polynomials. Other good examples of this type of factorization treatment for inverting relations between sums over sufficiently invertible, or well-enough-behaved, triangular sets of weight coefficients include the Möbius inversion formula, the binomial transform, and the Stirling transform, among others.