Archive for June, 2014

Under the assumption that e_it is i.i.d. N(0, σ²), this has the simple form Q_NT(Λ_0, F) = (NT)⁻¹ Σ_{i=1}^N Σ_{t=1}^T (X̂_it − λ_i0′F_t)² (plus terms that do not depend on Λ_0 or F), where X̂_it = E[X_it | X, F̂^(j), Λ̂^(j)] = X_it if I_it = 1 and = λ̂_i^(j)′F̂_t^(j) if I_it = 0. The arguments about concentrating the likelihood above apply, so the updated factor estimates F̂^(j+1) are computed as the eigenvectors corresponding to the largest eigenvalues of N⁻¹ Σ_{i=1}^N X̂_i X̂_i′, where X̂_i is defined analogously to X_i above except using X̂_it rather than X_it as needed for estimates of the missing observations. The unbalanced-panel quasi-MLEs are obtained by iterating this process to convergence. Note also that this approach extends to other data irregularities, in particular situations with mixed sampling frequencies: for example, some variables might be observed monthly while others are observed either as end-of-quarter values (stocks) or as quarterly averages (flows).
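The iteration just described can be sketched numerically. The following is an illustrative reconstruction under the stated assumptions, not the authors' code; the function name `em_factor_estimates`, the starting values, and the simulated check are all ours.

```python
import numpy as np

def em_factor_estimates(X, mask, k, n_iter=100, tol=1e-8):
    """Iterate: fill missing X_it with the fitted common component,
    then re-estimate the factors as principal components of the filled panel."""
    T, N = X.shape
    filled = np.where(mask, X, 0.0)
    col_means = filled.sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
    Xhat = np.where(mask, X, col_means)        # initial fill: observed column means
    prev = np.inf
    for _ in range(n_iter):
        S = Xhat @ Xhat.T / N                  # T x T matrix N^{-1} sum_i Xhat_i Xhat_i'
        _, vecs = np.linalg.eigh(S)            # eigenvalues in ascending order
        F = np.sqrt(T) * vecs[:, -k:]          # factor estimates, normalized F'F/T = I
        Lam = Xhat.T @ F / T                   # least squares loadings given F
        fitted = F @ Lam.T
        Xhat = np.where(mask, X, fitted)       # refill missing cells with fitted values
        obj = np.mean(np.where(mask, X - fitted, 0.0) ** 2)
        if abs(prev - obj) < tol:
            break
        prev = obj
    return F, Lam

# Quick check on a simulated one-factor panel with roughly 20% of cells missing.
rng = np.random.default_rng(0)
T, N = 100, 60
F0 = rng.standard_normal((T, 1))
L0 = rng.standard_normal((N, 1))
X = F0 @ L0.T + 0.1 * rng.standard_normal((T, N))
mask = rng.random((T, N)) > 0.2                # True where observed (I_it = 1)
F, Lam = em_factor_estimates(X, mask, k=1)
corr = abs(np.corrcoef(F[:, 0], F0[:, 0])[0, 1])   # near 1 up to sign
```

The estimated factor is identified only up to sign and scale, hence the absolute correlation as the accuracy measure.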

We therefore take a different approach and estimate the dynamic factor model in its static (or stacked) form. The approach here is quasi-MLE, in the sense that the estimator is motivated by strong parametric assumptions, but the consistency of the estimated factors is shown under the weaker nonparametric assumptions given in section 3. To motivate the estimation strategy, we suppose that h = 0 so Λ_t = Λ_0, and that e_it is i.i.d. N(0, σ²) and independent across series. We also diverge from the treatment of F_t in dynamic factor models, in which F_t is modeled as obeying a stochastic process, and instead treat {F_t} as a T×r-dimensional unknown nonrandom parameter to be estimated. With this notation and under these restrictive assumptions, the maximum likelihood estimator for (Λ_0, F) solves the nonlinear least squares problem with objective function Q_NT(Λ_0, F) = (NT)⁻¹ Σ_{i=1}^N Σ_{t=1}^T (X_it − λ_i0′F_t)².
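The concentration argument behind this least squares problem can be checked numerically: minimizing over Λ given F leaves the residual sum of squares tr(X′X) − tr(P_F XX′), so the minimizing F consists of the top eigenvectors of XX′ and the minimized objective equals tr(X′X) minus the sum of the k largest eigenvalues of XX′. A small sketch on simulated data (the data-generating choices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, k = 80, 40, 2
F0 = rng.standard_normal((T, k))
Lam0 = rng.standard_normal((N, k))
X = F0 @ Lam0.T + rng.standard_normal((T, N))

# Closed-form minimum: tr(X'X) minus the sum of the k largest eigenvalues of XX'.
eigvals = np.linalg.eigvalsh(X @ X.T)          # ascending order
Q_min = (X ** 2).sum() - eigvals[-k:].sum()

# Check against the objective evaluated at the principal components solution.
_, vecs = np.linalg.eigh(X @ X.T)
F = vecs[:, -k:]                               # orthonormal, so F'F = I
Lam = X.T @ F                                  # least squares loadings given F
Q_direct = ((X - F @ Lam.T) ** 2).sum()
```

Here `Q_min` and `Q_direct` agree to machine precision, which is exactly the statement that the quasi-MLE for the factors is a principal components estimator.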


These representations exploit the fact that a_i(L) has finite order. If it has infinite order, then these static representations have infinitely many factors. Parametric time-domain dynamic factor models explored to date in the literature assume finite-order a_i(L) (cf. Sargent (1989), Stock and Watson (1991), and Quah and Sargent (1993)). Whether this is a problem is an empirical issue.
It is convenient at this point to introduce some additional notation. Let X_it denote the observation on variable i at time t and let X_i = (X_i1, X_i2, …, X_iT)′ denote the T×1 vector of those observations. Also let F_jt be the observation on the jth factor at time t, let F = (F_1, F_2, …, F_T)′ denote the T×r matrix of factors, let P_F = F(F′F)⁻¹F′, and let Λ_t = (λ_1t, λ_2t, …, λ_Nt)′, where λ_it is the r×1 vector of factor loadings on variable i at time t. Let F⁰ denote the true value of F, and let λ_it⁰′ denote the ith row of Λ_t⁰. Throughout, c and d denote generic finite positive constants. For any matrix M, its (i,j) element is M_ij and its norm is ‖M‖ = {tr(M′M)}^{1/2}. For a real symmetric matrix M, mineval(M) and maxeval(M) denote its minimum and maximum eigenvalues.
Finally, to address the problem of missing data in an unbalanced panel, let I_it be a nonrandom indicator function, where I_it = 1 if the ith variable is observed at date t and I_it = 0 otherwise.
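The norm and projection notation translates directly into code; a small sketch (the helper names `frob_norm` and `projector` are ours):

```python
import numpy as np

def frob_norm(M):
    # ||M|| = {tr(M'M)}^(1/2), the (Frobenius) norm used throughout.
    return np.sqrt(np.trace(M.T @ M))

def projector(F):
    # P_F = F (F'F)^{-1} F', the projection onto the column space of F.
    return F @ np.linalg.solve(F.T @ F, F.T)

M = np.arange(6.0).reshape(3, 2)
rng = np.random.default_rng(2)
F = rng.standard_normal((5, 2))
P = projector(F)
# P_F is idempotent and reproduces F: P @ P == P and P @ F == F.
```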

By suitable redefinition of the factors and the idiosyncratic disturbances, the dynamic factor model can be rewritten in the form (2.1) with Λ_t constant. To see this, let Z_t denote an n×1 vector of time series variables which are assumed to satisfy the dynamic factor model,

Z_it = a_i(L)′f_t + u_it, with g_i(L)u_it = v_it,

for i = 1, …, n, where f_t is a vector of factors and L is the lag operator. In the econometric literature using dynamic factor models, {f_t} and {v_it}, i = 1, …, n, are taken to be mutually independent. Let a_i(L) have order q and let g_i(L) be a finite-order lag polynomial with roots outside the unit circle. (Typically normality of these disturbances is further assumed to motivate using the Kalman filter to compute the maximum likelihood estimates of the factors.) The model is completed by an additional assumption specifying the stochastic process followed by the factors, such as a Gaussian vector autoregression, where the factors are distributed independently of {v_it}.
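The stacking argument can be verified numerically: with a_i(L) of order q, the dynamic common component a_i(L)′f_t is exactly a static model with stacked factor F_t = (f_t′, f_{t−1}′, …, f_{t−q}′)′ and constant loadings. A sketch with one dynamic factor and the idiosyncratic term omitted for clarity (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, q = 50, 5, 2                     # q = order of the lag polynomial a_i(L)
f = rng.standard_normal(T + q)         # one dynamic factor, with q presample values
A = rng.standard_normal((n, q + 1))    # A[i, j] = coefficient on f_{t-j} for series i

# Dynamic form: X_it = a_i(L) f_t = sum_j A[i, j] * f_{t-j}
X_dyn = np.empty((T, n))
for t in range(T):
    for i in range(n):
        X_dyn[t, i] = sum(A[i, j] * f[q + t - j] for j in range(q + 1))

# Static (stacked) form: X_t = Λ F_t with F_t = (f_t, f_{t-1}, ..., f_{t-q})'
Fs = np.column_stack([f[q - j : q - j + T] for j in range(q + 1)])  # T x (q+1)
X_stat = Fs @ A.T
# X_dyn and X_stat are identical: the dynamic model is the static model in disguise.
```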


The model

Let y_t be a scalar time series variable and let X_t be an N-dimensional multiple time series variable. Throughout, y_t is taken to be the variable to be forecast, while X_t is the vector time series variable that contains useful information for forecasting y_{t+1}. It is assumed that X_t can be represented by the factor structure,

(2.1) X_t = Λ_t F_t + e_t,

where F_t is the r×1 common factor and e_t is the N×1 idiosyncratic disturbance. The idiosyncratic disturbances are in general correlated across series and over time; the specific assumptions used for the asymptotic analysis are given in section 3.
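The forecasting use of this structure can be sketched end to end: simulate a panel from the factor structure, estimate F_t by principal components, and regress y_{t+1} on the estimated factors. Everything below (the data-generating process, sample sizes, coefficients) is an illustrative assumption, not the paper's design:

```python
import numpy as np

# Simulated DGP: X_t = Λ F_t + e_t, with y_{t+1} depending on F_t.
rng = np.random.default_rng(4)
T, N, r = 200, 100, 2
F0 = rng.standard_normal((T, r))                  # true factors
Lam0 = rng.standard_normal((N, r))                # true loadings
X = F0 @ Lam0.T + rng.standard_normal((T, N))     # observed panel
beta = np.array([1.0, -0.5])                      # forecasting coefficients
y = np.concatenate([[0.0], F0[:-1] @ beta + 0.1 * rng.standard_normal(T - 1)])

# Step 1: estimate the factors by principal components of XX'.
_, vecs = np.linalg.eigh(X @ X.T)                 # eigenvalues ascending
Fhat = np.sqrt(T) * vecs[:, -r:]                  # estimated factors

# Step 2: regress y_{t+1} on the estimated F_t, then forecast y_{T+1}.
b, *_ = np.linalg.lstsq(Fhat[:-1], y[1:], rcond=None)
forecast = Fhat[-1] @ b
fitted = Fhat[:-1] @ b
r2 = 1 - ((y[1:] - fitted) ** 2).sum() / ((y[1:] - y[1:].mean()) ** 2).sum()
```

With N large the estimated factors track the space spanned by the true factors closely, so the fit of the forecasting regression is nearly as good as if F_t were observed.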

The second related literature is the large body of work that uses approximate factor structures to study asset prices. Contributions include Chamberlain and Rothschild (1983), Connor and Korajczyk (1986, 1988, 1993), Mei (1993), Schneeweiss and Mathes (1995), Bekker et al. (1996), Geweke and Zhou (1996), and Zhou (1997); also see the survey in Campbell, Lo, and MacKinlay (1996, chapter 6).

The work in these literatures most closely related to the present paper is by Connor and Korajczyk (1986, 1988, 1993) and Forni and Reichlin (1996, 1997, 1998); both consider the determination of the number of factors and their estimation in large systems. Working within a static approximate factor model that allows some cross-sectional dependence among the idiosyncratic errors, Connor and Korajczyk (1986, 1988, 1993) show that factors estimated by principal components are consistent (at a given date) as N → ∞ with T fixed. They apply their methods to evaluating the arbitrage pricing theory of asset prices. Forni and Reichlin (1998), working with a dynamic factor model with mutually uncorrelated idiosyncratic errors, show that cross-sectional averages consistently estimate a certain scalar linear combination of the factors. They use this insight in Forni and Reichlin (1996, 1998) to motivate heuristically a dynamic principal components procedure for estimating the vector of common factors and for studying the dynamic properties of the factors. Forni and Reichlin (1997) suggest an alternative estimator, which they motivate by dynamic principal components, although no proofs of consistency of the estimated factors are provided. In the first applications of this large cross-section approach to macroeconomic data, they apply their methods to large regional and sectoral data sets; for example, Forni and Reichlin (1998) analyze productivity and output for 450 U.S. industries.

Asymptotic results are presented in section 3. The asymptotic framework is motivated by the application to macroeconomic forecasting. Because the number of time series (N) far exceeds the number of observation dates (T), N and T are modeled as tending to infinity with T/N → 0. Because macroeconomic theory does not clearly suggest finitely many factors, the number of factors (r) is treated as tending to infinity, but much more slowly than T. Because r is not known, the number of estimated factors (k) is not assumed to equal the number of true factors. In this framework, it is shown that, if k ≥ r, the estimated factors are uniformly consistent (they span the space of the true factors, uniformly in the time index). Given this result and some additional conditions, it is then shown that, if k ≥ r, an information criterion will consistently estimate the number of factors entering the forecasting equation for the variable of interest, and the resulting forecasts are asymptotically as efficient as if the true factors were observed. These theoretical predictions are examined and supported in a Monte Carlo experiment reported in section 4.

First, because empirical evidence suggests that time variation in macroeconomic relations is widespread (e.g. Stock and Watson [1996]), the factor loadings are permitted to evolve over time. Second, the factor structure is approximate, in the sense that the idiosyncratic errors can be correlated across series. Third, the model is nonparametric, in the sense that the correlation structures and distributions of the idiosyncratic terms and the factors, and the precise lag structure by which the factors enter, are not specified parametrically. Fourth, a practical concern when working with a large number of time series is that a large break or outlier arising from a data entry error or a redefinition might go undetected, and this possibility is introduced into the analysis. Fifth, because economic time series are typically available over different spans, the model and estimation procedures are developed for the cases of both a balanced and unbalanced panel.

Recent advances in information technology now make it possible to access in real time, at a reasonable cost, literally thousands of economic time series for major developed economies.
This raises the prospect of a new frontier in macroeconomic forecasting, in which a very large number of time series are used to forecast a few key economic quantities such as output or inflation. Time series models currently used for macroeconomic forecasting, however, incorporate only a handful of series: vector autoregressions, for example, typically contain a half-dozen to one dozen variables, rarely more. Although thousands of time series are available in real time, a theoretical framework for using these data for time series forecasting remains undeveloped.

The co-decision procedure, introduced in the Maastricht Treaty (Article 189b) and amended by the 1997 Treaty of Amsterdam, elevates the European Parliament, as the locally elected body in the Union's decision-making structure, to equal legislative standing with the Council of Ministers. The 1986 SEA created the cooperation procedure, in which the Parliament could accept, reject, or amend Council decisions. Under cooperation, however, rejected and amended decisions returned to the Council for final decision, and the Parliament had no further say in policy choice. At best the Parliament was an agenda-setter, and only if the Commission, as the true agenda-setter, acquiesced. Maastricht's co-decision procedure now gives the Parliament joint say with the Council of Ministers over the final specification of all EMU policies except monetary policy. Policies rejected or amended by Parliament but once again approved by Council, perhaps in another amended form, must now be agreed to by an absolute majority of Parliament. Disagreements between the Council of Ministers and Parliament are to be resolved through a Conciliation Committee composed of members from both bodies. Committee compromises return to their respective chambers, where final approval requires a qualified majority in the Council and an absolute majority in the Parliament. The net effect of the co-decision procedure has been to create two equally powerful legislative bodies, each capable of blocking the preferred outcomes of the other. Under Maastricht, negotiations between a broadly elected Parliament and a country-appointed Council replace the non-elected Commission as the central locus of EMU policy-making. The EMU decision-making structure now closely resembles that of the United States: an institutionally weak executive, a state (country-specific) Senate, and a district (local region-specific) House. It is the constitutional form of democratic federalism.



