Bias-variance tradeoff vs approximation-estimation tradeoff
Recently I’ve been reading a textbook called ‘Data Science and Machine Learning: Mathematical and Statistical Methods’, co-authored by my former professor Zdravko I. Botev, and I came across an interesting notion: the ‘approximation-estimation tradeoff’. Until now I, like most people, was only aware of the bias-variance tradeoff as the usual framing for the difficulties of optimising machine learning algorithms. So the aim of this post is to clarify the difference between the two, with the aid of that textbook and the article ‘Bias/Variance is not the same as Approximation/Estimation’ by Gavin Brown and Riccardo Ali.
Addressing Perspectives
Many people are aware that machine learning and data science originated from the fields of statistics and mathematics, yet most courses taught today focus on the practical side: using training data to optimise machine learning algorithms.
The important distinction is this: the approximation-estimation tradeoff was published first and originates from statistical learning theory, where it decomposes the excess risk of a model relative to the Bayes-optimal predictor. The bias-variance tradeoff, by contrast, takes a more practical view: it decomposes the risk of a model trained on a particular dataset, which is closer to the problems faced in industry today.
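To make the contrast concrete, here is a sketch of the two decompositions for squared loss. The notation ($\mathcal{H}$, $f^*$, $f_{\mathcal{H}}$, $\hat{f}_T$) is my own shorthand for the standard statistical-learning setup, not lifted verbatim from either source. The approximation-estimation tradeoff splits the excess risk of a learned model $\hat{f}$ over the Bayes-optimal predictor $f^*$:

$$
\underbrace{L(\hat{f}) - L(f^*)}_{\text{excess risk}}
\;=\;
\underbrace{L(f_{\mathcal{H}}) - L(f^*)}_{\text{approximation error}}
\;+\;
\underbrace{L(\hat{f}) - L(f_{\mathcal{H}})}_{\text{estimation error}}
$$

where $f_{\mathcal{H}}$ is the best predictor available in the hypothesis class $\mathcal{H}$. The bias-variance decomposition instead averages over random training sets $T$: at a fixed input $x$, with $y = f^*(x) + \varepsilon$ and noise variance $\sigma^2$,

$$
\mathbb{E}_{T,\,y}\big[(y - \hat{f}_T(x))^2\big]
\;=\;
\underbrace{\big(\mathbb{E}_T[\hat{f}_T(x)] - f^*(x)\big)^2}_{\text{bias}^2}
\;+\;
\underbrace{\mathbb{E}_T\Big[\big(\hat{f}_T(x) - \mathbb{E}_T[\hat{f}_T(x)]\big)^2\Big]}_{\text{variance}}
\;+\;
\underbrace{\sigma^2}_{\text{noise}}
$$

The first is a statement about a fixed learned model and its hypothesis class; the second is a statement about the learning procedure averaged over training sets. That is exactly why, as Brown and Ali’s title says, the two tradeoffs should not be conflated.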