These are all variants of LASSO; they provide the entire sequence of coefficients and fits, starting from zero and ending at the least squares fit.

lars.lsa(
  Sigma0,
  b0,
  n,
  type = c("lar", "lasso"),
  max.steps = NULL,
  eps = .Machine$double.eps,
  adaptive = TRUE,
  para = NULL
)

Arguments

Sigma0

A Gram / covariance matrix of pathway predictors.

b0

An eigenvector of Sigma0.

n

The sample size.

type

The model type: either "lar" or "lasso". Defaults to "lasso".

max.steps

The maximum number of steps the LAR or LASSO algorithm should take. Defaults to 8 times the pathway dimension.

eps

The numerical tolerance below which a value is treated as zero. Defaults to the machine's precision limit for doubles (.Machine$double.eps).

adaptive

Internal; ignore this argument.

para

Internal; ignore this argument.

Value

An object of class "lars".

Details

LARS is described in detail in Efron, Hastie, Johnstone and Tibshirani (2002). With the "lasso" option, it computes the complete LASSO solution simultaneously for all values of the shrinkage parameter, at the same computational cost as a least squares fit. This function is adapted from the lars function in the lars package to apply to covariance or Gram pathway design matrices.
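
The full-path property described above can be seen directly with the lars function from the CRAN lars package, which this function adapts. The sketch below is illustrative only (the simulated data, seed, and variable names are assumptions, not part of this package):

set.seed(1)
X <- scale(matrix(rnorm(100 * 5), nrow = 100))   # 100 observations, 5 predictors
y <- X[, 1] - 0.5 * X[, 2] + rnorm(100)          # only the first two predictors matter
path_fit <- lars::lars(X, y, type = "lasso")
coef(path_fit)  # one row of coefficients per step, from all zeros to the least squares fit

Each row of the returned coefficient matrix corresponds to one breakpoint of the piecewise-linear LASSO path, so the entire sequence of fits is recovered in a single call.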


Examples

# DO NOT CALL THIS FUNCTION DIRECTLY.
# Use AESPCA_pVals() instead.
if (FALSE) {
  X_mat <- as.matrix(colonSurv_df[, 5:50])
  X_mat <- scale(X_mat)
  XtX <- t(X_mat) %*% X_mat
  A_mat <- svd(XtX)$v
  lars.lsa(
    Sigma0 = XtX,
    b0 = A_mat[1, ] * sign(A_mat[1, 1]),
    n = ncol(X_mat)
  )
}