Spurious Regression
Regressing I(d) processes on one another is fraught with the danger of misspecification and nonsense estimates. We often see analyst reports where a stock is regressed against a composite index to find the stock's beta, or two stocks are regressed against each other to find a relative beta. If one looks at those numbers closely from a statistical perspective, most of the estimates are complete nonsense.
Most often, knowingly or unknowingly, one applies regression to non-stationary series and then uses the estimates as though the regression had been done on stationary series. In the references of a paper I was reading, much praise was showered on a paper by Phillips (a Yale professor) titled "Understanding Spurious Regressions in Econometrics" (1986). It made me curious enough to go through it carefully.
Here are the key points from the paper:

t-ratios used to assess the significance of coefficients in I(d) regressions do not have limiting distributions; they diverge as the sample size grows

Coefficient estimates do not converge in probability to constants, but to random variables

Suitably normalized, the intercept and slope estimates have nondegenerate asymptotic distributions (functionals of Brownian motion) rather than converging in probability

Asymptotically, the Durbin-Watson statistic converges in probability to 0

Serial correlation coefficients of the regression residuals converge in probability to unity

A high R-squared together with a low Durbin-Watson statistic should make you suspicious of the regression results
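These symptoms are easy to reproduce. Below is a minimal sketch (numpy only; the seed and sample size are arbitrary choices of mine) that regresses one simulated random walk on another, completely independent one, and reports the conventional diagnostics:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500

# Two independent random walks: y_t = y_{t-1} + e_t
y = np.cumsum(rng.standard_normal(T))
x = np.cumsum(rng.standard_normal(T))

# OLS of y on x with an intercept, done by hand
X = np.column_stack([np.ones(T), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R-squared
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Conventional t-ratio for the slope
s2 = resid @ resid / (T - 2)
se_slope = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_slope = beta[1] / se_slope

# Durbin-Watson statistic of the residuals
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

print(f"slope t-ratio: {t_slope:.2f}, R^2: {r2:.3f}, DW: {dw:.3f}")
```

Even though x carries no information about y, a run like this will typically show an inflated t-ratio, a respectable R-squared, and a Durbin-Watson statistic sitting near zero: the classic spurious-regression signature.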
Most of the above results are intuitively obvious if you merely simulate two random walks, regress one on the other, observe all the relevant statistics, and check how they behave asymptotically. What is empirically observed in such a simulation is actually proven in this paper, which is probably why it is so widely referenced in the unit root testing literature. Closed-form solutions always give you a different feel for a result, and this paper allows one to instantly relate to the Monte Carlo results.
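The asymptotic claims can be checked the same way by brute force. The sketch below (again numpy only; the replication count and sample sizes are arbitrary choices of mine) averages the Durbin-Watson statistic and the residual first-order serial correlation over Monte Carlo replications at increasing sample sizes; the former drifts toward 0 and the latter toward 1, as the paper proves:

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_stats(T, reps=200):
    """Mean DW and residual AR(1) coefficient from regressing
    one random walk on another, over Monte Carlo replications."""
    dws, rhos = [], []
    for _ in range(reps):
        y = np.cumsum(rng.standard_normal(T))
        x = np.cumsum(rng.standard_normal(T))
        X = np.column_stack([np.ones(T), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        u = y - X @ beta
        dws.append(np.sum(np.diff(u) ** 2) / np.sum(u ** 2))
        rhos.append((u[1:] @ u[:-1]) / (u[:-1] @ u[:-1]))
    return np.mean(dws), np.mean(rhos)

for T in (50, 200, 800):
    dw, rho = spurious_stats(T)
    print(f"T={T:4d}  mean DW={dw:.3f}  mean rho(1)={rho:.3f}")
```

Nothing here depends on the particular seed: as T grows, the residuals behave more and more like an integrated series themselves, which is exactly what drives DW to 0 and the residual autocorrelation to unity.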