New results on the convergence of random matrices

This paper extends the previous convergence results in Cerqueti and Costantini (2008) to a more general case, using a larger normed set of functions. In this regard, the weight-based convergence of the random matrices and of their generalized eigenvalues is obtained under less restrictive requirements on the weights.


Introduction
This paper extends the previous convergence results in Cerqueti and Costantini (2008) to a more general context. The main limitation of the approach of the quoted paper lies in the narrowness of the functional spaces used to obtain the convergence of a class of random matrices and their generalized eigenvalues. More precisely, the weights introduced in Cerqueti and Costantini (2008) belong to rather small functional sets, and this leads to a convergence result of limited generality.
In this paper, we relax the restrictive assumptions on the weights of the random matrices and we provide a more general approach to obtain the convergence of the generalized eigenvalues. In this respect, we prove that the Sobolev spaces used in Cerqueti and Costantini (2008) can be embedded into some larger normed sets of functions. Specifically, the weights considered in Cerqueti and Costantini (2008) can also be used in our more general framework, but the converse is not true. It is worth noting that, as we will prove, the new functional spaces are so wide that they also contain a large class of polynomials and the functions that are bounded in [0, 1].
The remaining part of the paper is organized as follows. Section 2 describes the data generating process and the weighted random matrices. In Section 3 the main properties of the weights are studied. Section 4 presents the generalized eigenvalue problem and the main convergence results.

Preliminaries
This section contains the notation and the preliminary definitions used throughout the paper. Consider the following p-variate integrated process of a real nonnegative order d, I(d), i.e.:

∆^d Y_t = u_t, (1)

where u_t = (u_{1,t}, . . . , u_{p,t}) is a zero-mean stationary process, L is the lag operator, i.e. L Y_t := Y_{t−1}, and ∆ := 1 − L.
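As a purely illustrative numerical sketch (not part of the original analysis), an I(d) path of this kind can be simulated by applying the inverse of ∆ = 1 − L, i.e. a cumulative sum, d times to a stationary driver; here the driver u_t is hypothetically taken to be Gaussian white noise:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, d = 500, 3, 2

# Zero-mean stationary driver (here: Gaussian white noise, a simplifying assumption).
u = rng.standard_normal((n, p))

# An I(d) path: applying the inverse of Delta = 1 - L (cumulative sum) d times,
# so that differencing the result d times recovers the stationary driver u.
Y = u.copy()
for _ in range(d):
    Y = np.cumsum(Y, axis=0)

# Check: the d-th difference of Y coincides with u, up to the first d observations.
dY = np.diff(Y, d, axis=0)
```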
In order to make this study as self-contained as possible, we report the main Assumptions on the process Y_t already stated in Cerqueti and Costantini (2008), together with some discussion.

Assumptions on the data generating process
We suppose that the process Y t is defined as in (1) and satisfies the following conditions.
(ii) The hypotheses of the Wold decomposition theorem are satisfied, i.e.:

∆^d Y_t = C(L) v_t, (2)

where the v_t are i.i.d. zero-mean p-variate Gaussian random variables with variance equal to the identity matrix of order p, I_p, and C(L) is a p-squared matrix of lag polynomials in the lag operator L.
(iii) There exist p-squared matrices C_1(L) and C_2(L) of lag polynomials in the lag operator L such that all the roots of det C_1(L) lie outside the complex unit circle and C(L) = C_1(L)^{−1} C_2(L).
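Condition (iii) can be checked numerically in simple cases. For a hypothetical VAR-type factor C_1(L) = I − A L (an illustrative assumption, not the paper's specification), the roots of det C_1(L) lie outside the unit circle exactly when the eigenvalues of A lie strictly inside it:

```python
import numpy as np

# Hypothetical 2x2 lag polynomial C_1(L) = I - A L (a VAR(1)-type factor);
# the root requirement of condition (iii), "all roots of det C_1(L) outside
# the unit circle", is equivalent to all eigenvalues of A having modulus < 1.
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])

eigvals = np.linalg.eigvals(A)
stable = bool(np.all(np.abs(eigvals) < 1.0))  # True -> root requirement holds
```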
Conditions (ii) and (iii) can be interpreted as follows.
The lag polynomial C(L) − C(1) attains the value zero at L = 1 with algebraic multiplicity equal to d. Thus, there exists a lag polynomial D(L) such that C(L) − C(1) = ∆^d D(L). Therefore, we can write:

∆^d Y_t = C(1) v_t + ∆^d D(L) v_t. (3)

Let us define w_t := D(L) v_t. Then, substituting w_t into (3), we get:

∆^d Y_t = C(1) v_t + ∆^d w_t. (4)

(4) implies that, given Y_t ∼ I(d), we can write Y_t recursively in terms of its own lags, C(1) v_t and w_t, where rank(C(1)) = p − r < p.
By Assumptions (ii) and (iii), we have that C(L)v t and D(L)v t are well-defined stationary processes.
(iv) Let R_r be the matrix of the eigenvectors of C(1) C(1)^T corresponding to the r zero eigenvalues. Then the matrix R_r^T D(1) D(1)^T R_r is nonsingular.

Assumption (iv) prevents Y_t from being an integrated process of order greater than d. Indeed, suppose that Y_t is integrated of some order d̃ > d. Then D(L) admits a unit root with algebraic multiplicity d̃ − d, and so D(1) is singular. Therefore R_r^T D(1) D(1)^T R_r is singular, and Assumption (iv) does not hold.
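A small numerical sketch of Assumption (iv), with hypothetical stand-ins C1 for C(1) and D1 for D(1) (both matrices are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical stand-ins: C1 plays the role of C(1), D1 the role of D(1).
C1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])          # rank(C(1)) = p - r = 1, so r = 1
D1 = np.array([[0.8, 0.1],
               [0.2, 0.9]])          # D(1) nonsingular

# Eigen-decomposition of the symmetric matrix C(1) C(1)^T.
eigvals, eigvecs = np.linalg.eigh(C1 @ C1.T)
zero_mask = np.isclose(eigvals, 0.0)
R_r = eigvecs[:, zero_mask]          # eigenvectors of the r zero eigenvalues

M = R_r.T @ D1 @ D1.T @ R_r          # the r x r matrix of Assumption (iv)
nonsingular = bool(abs(np.linalg.det(M)) > 1e-12)
```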

Random matrices and weights
The random matrices involved in the generalized eigenvalue problem depend on an integer number m ≥ p, and they are based on the following property of integrated processes: Y_t ∼ I(d) is nonstationary, while ∆^d Y_t is a stationary process.
Given k = 1, . . . , m and h = 2, . . . , d, we introduce in a very general way the weight functions F_k and G_{k,h}. By means of these weights and of suitable normalizing coefficients a_{n,k}, we then define the random matrices A_m and B_m, which represent the random matrices related to the nonstationary and to the stationary part of the process, respectively. In order to obtain convergence results for the random matrices A_m and B_m, the weights F_k and G_{k,h} are defined as follows.
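To fix ideas, the construction can be sketched numerically. The cosine weights F_k(x) = cos(2kπx) below follow the choice in Bierens (1997); the exact normalizations a_{n,k} and the G_{k,h}-weighted terms of the present paper are not reproduced, so this is only a hedged illustration of how A_m and B_m separate the nonstationary and stationary parts:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 400, 2, 5

# Simulated p-variate I(1) process: cumulative sum of stationary noise.
Y = np.cumsum(rng.standard_normal((n, p)), axis=0)

# Bierens-type weights F_k(x) = cos(2*k*pi*x); the paper's weight classes
# F_m and G_{m,d} are more general than this particular choice.
t = np.arange(1, n + 1) / n
M = np.stack([(Y * np.cos(2 * k * np.pi * t)[:, None]).mean(axis=0)
              for k in range(1, m + 1)])          # m x p weighted means

# Sketch of the weighted random matrices: A_m built from the levels
# (nonstationary part), B_m from the first differences (stationary part).
A_m = M.T @ M
dY = np.diff(Y, axis=0)
Md = np.stack([(dY * np.cos(2 * k * np.pi * t[1:])[:, None]).mean(axis=0)
               for k in range(1, m + 1)])
B_m = n * (Md.T @ Md)

# Both are p x p symmetric positive semidefinite matrices.
```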
Definition 2.1 provides the functional spaces in which the weights must lie in order to ensure the convergence results. The set F_m has already been explored in Bierens (1997), while the space G_{m,d} contains functions satisfying the asymptotic condition (17). It is worth noting that an asymptotic condition on the weights is indeed required to obtain our convergence result. We also stress that, under the previous definition, the convergence result for the random matrices and their corresponding generalized eigenvalues is obtained in a more general context than that of Cerqueti and Costantini (2008).
The next section provides the main properties of the weights.

Properties of the weights
The properties of the functional class F_m are shown in Bierens (1997); therefore, we analyze only G_{m,d}. The first result shows that G_{m,d} is not empty and is wide enough to contain a large class of polynomial functions.
Proposition 3.1. The polynomial functions defined on [0, 1] belong to G_{m,d}.

Proof. Let us fix h = 2, . . . , d; a direct computation then shows that (17) holds, and the proposition is completely proved.
Other important features of the functional spaces G's can be shown. These properties provide further support for the fact that the choice of weights belonging to the G's is not restrictive, since these spaces contain a very large class of functions.
We summarize them in the following result.
If (k_1, h_1) = (k_2, h_2), then the left-hand side of (17) can be rewritten in a simpler form, and the proof is complete.
A standard comparison result then gives the required bound.

In order to make the analysis here more general than the one in Cerqueti and Costantini (2008), we now show that the functional space G_{m,d} contains the weights used in their approach.

Theorem 3.4. Let G_n be a weight function as in Cerqueti and Costantini (2008), satisfying condition (20). Then G_n ∈ G_{m,d}.

Remark 3.3. Theorem 3.2-(ii) assures that every function that is bounded in [0, 1] belongs to G_{m,d}.
Proof. Hölder's inequality gives, for each p, q > 1 such that p^{−1} + q^{−1} = 1, the estimate in (22). The right-hand side of (22) implies that a sufficient condition for G_n ∈ G_{m,d} is (23). The theorem is proved, since (20) implies (23).
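As a quick numerical sanity check of the Hölder step (with arbitrary illustrative vectors and conjugate exponents p = 3, q = 3/2; none of these choices come from the paper):

```python
import numpy as np

# Numerical check of Hölder's inequality:
#   sum |f g| <= (sum |f|^p)^(1/p) * (sum |g|^q)^(1/q)
# for conjugate exponents p, q > 1 with 1/p + 1/q = 1 (here p = 3, q = 3/2).
p, q = 3.0, 1.5
assert abs(1 / p + 1 / q - 1.0) < 1e-12

rng = np.random.default_rng(7)
f = rng.standard_normal(100)
g = rng.standard_normal(100)

lhs = np.sum(np.abs(f * g))
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)
holder_holds = bool(lhs <= rhs + 1e-12)
```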

Convergence of the generalized eigenvalues
This section contains the statement of the generalized eigenvalue problem, together with the related convergence results. Using the findings of the previous sections, we generalize the outcomes in Cerqueti and Costantini (2008). To this end, the functional spaces containing the weights are broadened (see Theorem 3.4).
First of all, we introduce the notation that will be used in this section. For k = 1, . . . , m, we define the functions Ψ_k in terms of f_k, where f_k is the derivative of F_k.
Moreover, we define suitable p-variate standard normally distributed random vectors, from which we construct the matrix V_{r,m}. We can now state the main convergence result.

(I) Let λ_{1,m} ≥ · · · ≥ λ_{p−r,m} be the ordered solutions of the generalized eigenvalue problem associated with the random matrices. Then the corresponding convergence in distribution holds as n → +∞.

(II) Let λ*_{1,m} ≥ · · · ≥ λ*_{r,m} be the ordered solutions of the companion generalized eigenvalue problem. Then the corresponding convergence in distribution holds as n → +∞.

Proof. The proof is grounded on Anderson et al. (1983) and Bierens (1997).
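A generic numerical sketch of an ordered generalized eigenvalue problem det(A − λB) = 0, with small hypothetical symmetric positive definite matrices standing in for the random matrices of the theorem:

```python
import numpy as np

# Hypothetical stand-ins for the matrices of the generalized eigenvalue
# problem det(A - lambda * B) = 0; B is positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Since B is nonsingular, the generalized eigenvalues are the eigenvalues
# of B^{-1} A; they are real here because A, B are symmetric and B > 0.
lam = np.linalg.eigvals(np.linalg.solve(B, A)).real
lam_ordered = np.sort(lam)[::-1]        # lambda_1 >= lambda_2 >= ...

# Each ordered solution annihilates det(A - lambda * B).
residuals = [abs(np.linalg.det(A - l * B)) for l in lam_ordered]
```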
We show the result for d = 2 and then we generalize the proof for d > 2.
By definition of the data generating process, we can write Y_t recursively as in (27). By (27), we obtain, as n → +∞, the decomposition in (29). Since G_{k,h} ∈ G_{m,d}, the second addend of the right-hand side of (29) vanishes as n → +∞. Thus, the set of Assumptions in Subsection 2.1 and F_k ∈ F_m lead to the hypotheses of a well-known and rather technical convergence in distribution result due to Bierens (1997, Lemma 1), which can be written as (30), as n → +∞, jointly for k = 1, . . . , m.
By (30) we obtain the thesis.
Analogously, for d > 2, using the Assumptions in Subsection 2.1, F_k ∈ F_m and Lemma 1 in Bierens (1997), we obtain the thesis.
The theorem is completely proved.