## Problem Statement

In economics, a production function relates the output of a production process to its inputs. The Cobb–Douglas (C-D) production function, a particular functional form of the production function, is widely used to represent the technological relationship between the amounts of two or more inputs, typically physical capital and labor, and the amount of output those inputs can produce. The C-D production function is a special case of the Constant Elasticity of Substitution (CES) production function. Production functions with a concave functional form, such as CES functions, are popular in economic modeling because they exhibit diminishing returns: the marginal (incremental) output of a production process decreases as the amount of a single factor of production is incrementally increased while the amounts of all other factors of production stay constant.

Here, we estimate the Cobb-Douglas function in its most standard form, a single good with two factors, labor and capital, using Mizon's 1977 data set. In his 1977 paper, Mizon estimated a variety of specifications for production functions, including the C-D and CES forms, allowing both additive and multiplicative error terms. (Example 2 will cover the CES models.) In his production function estimations, Mizon used U.K. data on capital, labor use, and a common output measure for 24 industries covering the years 1954, 1957, and 1960.

To estimate the vector of unknowns of the Cobb-Douglas function, we minimize the Sum of Squared Errors (SSE) in a nonlinear least squares model. We also report standard regression statistics, such as standard errors, t-statistics (also called T values), and p-values, at the estimation point.

## Mathematical Formulation

In a typical nonlinear least squares problem, we estimate a vector of unknowns $\theta$ by solving a constrained optimization problem. To be more specific, we search for estimators of $\theta$, i.e., $\hat{\theta}$, that minimize the SSE subject to some constraints:

$$\hat{\theta} = \arg\min_{\theta}\sum_{t=1}^{n}\mu_t^2,$$

$$\mbox{s.t.} \quad q_t = f_t(x_t,\theta) + \mu_t.$$

Based on the data set from Mizon (1977), we observe exogenous data $x_t = (k_t, l_t)$, where $k_t$ is capital used at time $t$ and $l_t$ is labor employed at time $t$, together with the dependent variable $q_t$ (quantity of output). We are interested in estimating the unknown vector $\theta = (\theta_{i0}, \theta_{i1}, \theta_{i2})'$, in which $\theta_{i0}$ is a scale factor and $\theta_{i1}$ and $\theta_{i2}$ are the output elasticities of the input factors. The constraints are:

$$ q_t = \theta_{i0}\, k_t^{\theta_{i1}}\, l_t^{\theta_{i2}} + \mu_t.$$

In more general cases, with more than two inputs, we can denote the inputs as $V_{1}, V_{2}, \dots, V_{m}$, $m > 2$. Then the fitting constraints are:

$$ q_t = \theta_{i0}\,(V_{1t})^{\theta_{i1}}(V_{2t})^{\theta_{i2}}\cdots(V_{mt})^{\theta_{im}} + \mu_t,$$

and we need to estimate an unknown vector $\theta = (\theta_{i0}, \theta_{i1}, \theta_{i2},\dots, \theta_{im})'$ with $m + 1$ elements. Both the standard and the general form of the Cobb-Douglas production function are included in the demo of example 1.
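The case study itself carries out this estimation in GAMS, but the same least squares fit can be sketched in a few lines of Python with `scipy.optimize.least_squares`. The capital, labor, and output values below are made-up illustrative numbers (not Mizon's observations), generated from known parameters so the fit can be checked:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data, for illustration only (NOT Mizon's 1977 observations).
k = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])        # capital k_t
l = np.array([8.0, 15.0, 21.0, 26.0, 30.0, 33.0])         # labor l_t
rng = np.random.default_rng(0)
# Output generated from known parameters plus noise, so the fit can be checked.
q = 1.5 * k**0.4 * l**0.6 + rng.normal(0.0, 0.5, k.size)  # output q_t

def residuals(theta):
    """mu_t = q_t - theta0 * k_t^theta1 * l_t^theta2 (the fitting constraint)."""
    return q - theta[0] * k**theta[1] * l**theta[2]

# Minimizing the sum of squared residuals gives the SSE estimator of theta.
fit = least_squares(residuals, x0=[1.0, 0.5, 0.5])
theta_hat = fit.x
print("estimated theta:", theta_hat)
```

The solver searches over $\theta$ to drive the residual vector toward zero, exactly the SSE minimization stated above; only the modeling environment differs from the GAMS version.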

To report standard errors at the estimated point, we need to estimate the covariance matrix of the coefficients, $\hat{V}_{\hat{\theta}}$. Following standard econometrics references such as Greene (2011),

$$\hat{V}_{\hat{\theta}} = \frac{\sum_t\left(q_t - f_t(x_t, \hat{\theta})\right)^2}{n-m}\,{(J^TJ)^{-1}}.$$

Here $\frac{\sum_t(q_t - f_t(x_t, \hat{\theta}))^2}{n-m}$ is the estimated variance of the residuals, where $n-m$ is the degrees of freedom, with $n$ the number of observations and $m$ the number of unknowns. The Jacobian matrix $J$ is an $n \times m$ matrix whose $(t,i)^{th}$ element is $J_{t,i} = \frac{\partial{\mu_t(\theta)}}{\partial{\theta_i}}$, where $t=1,2,\dots, n$ and $i=1,2, \dots, m$. GAMS provides a mechanism to generate the Jacobian matrix $J$ at the solution point $\theta$. As shown in the nonlinear least squares example cd_gdx.gms (Cobb-Douglas model with gdx input), we rely on the **convertd** solver with the options *DictMap* and *Jacobian* to generate a dictionary map from the solver to GAMS and the Jacobian matrix at the solution point, saved in the data files *dictmap.gdx* and *jacobian.gdx*, respectively. Combining the information from these two files gives us the Jacobian matrix $J$ at the solution point $\theta$.
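Outside GAMS, the same covariance computation can be sketched in Python: `scipy.optimize.least_squares` returns the Jacobian of the residuals at the solution, playing the role that *jacobian.gdx* plays in the GAMS workflow. The data below are again hypothetical, for illustration only:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data (illustrative only, not Mizon's data set).
k = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])
l = np.array([8.0, 15.0, 21.0, 26.0, 30.0, 33.0, 35.0, 36.0])
rng = np.random.default_rng(1)
q = 1.5 * k**0.4 * l**0.6 + rng.normal(0.0, 0.5, k.size)

res = least_squares(lambda th: q - th[0] * k**th[1] * l**th[2],
                    x0=[1.0, 0.5, 0.5])

n, m = q.size, res.x.size                   # observations, unknowns
sigma2 = np.sum(res.fun ** 2) / (n - m)     # SSE / (n - m): residual variance
J = res.jac                                 # Jacobian of residuals at the solution
cov = sigma2 * np.linalg.inv(J.T @ J)       # V_hat = sigma2 * (J'J)^{-1}
std_err = np.sqrt(np.diag(cov))             # standard errors of the coefficients
print("standard errors:", std_err)
```

Each diagonal entry of `cov` is the estimated variance of one coefficient, so its square root is the standard error reported alongside the estimate.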

Once we have the estimators and their corresponding standard errors, we can address the common question of how good the estimators are. To test whether an estimator is significantly different from zero, we can compute a t-statistic for a given hypothesis. Following Greene (2011), let $\hat{\beta}$ be an estimator of the true value $\beta$; then the t-statistic for $\beta$ is defined as

$$t_{\hat{\beta}} = \frac{\hat{\beta}-\beta_0}{s.e.(\hat{\beta})}.$$

Note that we usually set $\beta_0 = 0$ when testing the significance of an estimator. In general, when the t-statistic is large enough (larger than the critical value for some significance level $\alpha$), we reject the null hypothesis that the true coefficient is zero. The p-value is defined as the probability of obtaining a result equal to or "more extreme" than what was actually observed, assuming that the hypothesis under consideration is true. In this case, we can read the p-value as the probability of observing the current data under the assumption that the true coefficient is zero. If the p-value is less than the significance level $\alpha$, then it is "unlikely" that the null hypothesis is true, and we reject it. A large t-statistic therefore corresponds to a small p-value for the same statistical test.
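As a small worked example, the t-statistic and its two-sided p-value can be computed with `scipy.stats`. The estimate, standard error, and sample sizes below are made-up numbers for illustration, not results from Mizon's data:

```python
from scipy import stats

# Hypothetical numbers, for illustration only.
beta_hat = 0.42    # estimated coefficient
se = 0.08          # its standard error
beta_0 = 0.0       # null-hypothesis value: the true coefficient is zero
n, m = 72, 3       # e.g. 24 industries x 3 years of observations, 3 unknowns

t_stat = (beta_hat - beta_0) / se                  # t = (beta_hat - beta_0)/s.e.
p_value = 2 * stats.t.sf(abs(t_stat), df=n - m)    # two-sided p-value, df = n - m
alpha = 0.05

print(f"t = {t_stat:.3f}, p = {p_value:.2e}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```

Here the t-statistic of 5.25 far exceeds the usual critical values, so the p-value is tiny and the null hypothesis of a zero coefficient is rejected at the 5% level.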

## Demo

Check out the demo of example 1 to experiment with a nonlinear least squares model for estimating and statistically testing the Cobb-Douglas model.

## Model

A printable version of the nonlinear least squares model is here: cd_gdx.gms (Cobb-Douglas model with gdx input) or cd_text.gms (Cobb-Douglas model with text input).

## References

- Mizon, Grayham E. 1977. "Inferential Procedures in Nonlinear Models: An Application in a UK Industrial Cross Section Study of Factor Substitution and Returns to Scale." *Econometrica* **45**(5), 1221–1242.
- Kalvelagen, Erwin. 2007. *Least Squares Calculations with GAMS*. Available for download at http://www.amsterdamoptimization.com/pdf/ols.pdf.
- Greene, William. 2011. *Econometric Analysis*, 7th ed. Prentice Hall, Upper Saddle River, NJ.