
ML notes personal 4

Gradient Boosting

How it works?
- Step 1: model 1 always returns the average value of the output column → pred1
- Step 2: calculate residual 1 → res1 = actual − pred1
- Step 3: make a decision tree on iq, cgpa to predict res1
- Step 4: make pred2 from that tree
- Step 5: res2 = actual − (pred1 + LR × pred2)
- Step 6: again make a decision tree on iq, cgpa to predict res2, and repeat till the residuals are nearly 0 (see the sketch below)

A. AdaBoost vs B. Gradient Boost
- A (AdaBoost): max depth of each decision tree = 1 (stumps)
- B (Gradient Boost): trees limited by max leaf nodes, typically 8 to 32
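A minimal sketch of that loop, assuming scikit-learn's DecisionTreeRegressor as the weak learner; the tiny iq/cgpa dataset is made up purely for illustration:

```python
# Sketch of the gradient-boosting steps above (not a production implementation).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: columns are iq and cgpa (names taken from the notes), y is the target.
X = np.array([[90, 8.1], [100, 7.4], [110, 6.8], [120, 9.0]])
y = np.array([3.0, 4.0, 5.5, 7.0])

lr = 0.1          # learning rate (LR in the notes)
n_rounds = 100

pred = np.full_like(y, y.mean())   # step 1: model 1 = average of the output column
trees = []
for _ in range(n_rounds):
    res = y - pred                 # steps 2/5: residual = actual - current prediction
    tree = DecisionTreeRegressor(max_leaf_nodes=8)  # step 3: tree on iq, cgpa vs residual
    tree.fit(X, res)
    pred = pred + lr * tree.predict(X)              # steps 4/5: add LR * pred of this tree
    trees.append(tree)

def predict(x_new):
    # Final model = average + LR * (sum of all tree outputs), matching step 5.
    return y.mean() + lr * sum(t.predict(x_new) for t in trees)

print(predict(X))  # close to y once the residuals are nearly 0
```

After enough rounds the training residuals shrink toward 0, which is exactly the stopping idea in step 6.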

ML notes personal 3

🧠 Ridge Regression aka L2 Regularization (because of the square)

📌 What is Ridge Regression?
Ridge Regression is a regularized version of linear regression that helps reduce overfitting by penalizing large coefficients.

🧮 Key Formula:

$$\text{Loss} = \text{RSS} + \lambda \sum_{j=1}^{n} \beta_j^2$$

Where:
- RSS = Residual Sum of Squares (the normal linear regression loss)
- λ (lambda) = regularization strength (also called the tuning parameter)
- βⱼ = model coefficients
- The penalty term is the (squared) L2 norm

🎯 Why Use Ridge?
- Controls model complexity
- Prevents overfitting
- Useful when features are highly correlated (multicollinearity)

⚙️ How It Works
- Adds a penalty on the squared values of the coefficients
- Forces coefficients to be smaller, but not zero
- Helps with the bias-variance tradeoff (increases bias, reduces variance)

🔁 Difference from Other Methods

| Method | Penalty | Shrinks Coefficients to Zero |
...
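A minimal sketch of the shrinkage effect, assuming scikit-learn's Ridge (its alpha parameter plays the role of λ above); the toy dataset with two nearly identical columns is made up to mimic the multicollinearity case:

```python
# Sketch: ridge shrinks coefficients as lambda grows, but never to exactly zero.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 1] + 0.01 * rng.normal(size=50)   # two highly correlated features
y = X @ np.array([2.0, 1.0, 1.0]) + rng.normal(scale=0.1, size=50)

# Plain linear regression: coefficients on the correlated pair are unstable.
print("OLS  :", LinearRegression().fit(X, y).coef_)

for lam in (0.1, 1.0, 10.0):
    # alpha = lambda in Loss = RSS + lambda * sum(beta_j^2)
    print(f"λ={lam:<4}:", Ridge(alpha=lam).fit(X, y).coef_)
```

As λ increases the coefficients get smaller and more stable, but none of them is forced exactly to zero, matching the "smaller, but not zero" point above.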