I understand that some machine learning models tend to be low-bias, whereas others tend to be low-variance (source). For example, a linear regression will tend to have low variance error and high bias error; in contrast, a decision tree will tend to have high variance error and low bias error. Intuitively this makes sense, because a decision tree is prone to overfitting the data, whereas a linear regression is not. However, is there a more quantitative way to determine whether a class of algorithms tends to produce low-bias or low-variance models?
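To make the question concrete, here is the kind of empirical estimate I have in mind: repeatedly draw training sets, fit each model, and measure the bias² and variance of the predictions at fixed test points. This is only a sketch with a made-up target function and noise level, not an established procedure I am citing:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def true_f(x):
    # hypothetical nonlinear target, chosen just for illustration
    return np.sin(2 * x)

x_test = np.linspace(0, 3, 50).reshape(-1, 1)
n_rounds, n_train, noise = 200, 30, 0.3  # arbitrary simulation settings

def bias_variance(make_model):
    # fit the model on many independently drawn training sets
    preds = np.empty((n_rounds, len(x_test)))
    for i in range(n_rounds):
        x = rng.uniform(0, 3, n_train).reshape(-1, 1)
        y = true_f(x).ravel() + rng.normal(0, noise, n_train)
        preds[i] = make_model().fit(x, y).predict(x_test)
    mean_pred = preds.mean(axis=0)
    # bias²: squared gap between the average prediction and the truth
    bias_sq = np.mean((mean_pred - true_f(x_test).ravel()) ** 2)
    # variance: spread of predictions across training sets
    variance = preds.var(axis=0).mean()
    return bias_sq, variance

lin_b, lin_v = bias_variance(LinearRegression)
tree_b, tree_v = bias_variance(DecisionTreeRegressor)
print(f"linear: bias^2={lin_b:.3f}, var={lin_v:.3f}")
print(f"tree:   bias^2={tree_b:.3f}, var={tree_v:.3f}")
```

On this toy setup the linear model shows the larger bias² and the unpruned tree the larger variance, matching the intuition above — but I am asking whether there is a more principled, analytical way to make that comparison for a whole class of algorithms rather than one simulated dataset.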