In ML vocabulary, accuracy is used mostly for classification problems, i.e. the count of correct predictions out of the total.

In plain language, it means the predictive correctness of the model, especially on test data.

My understanding is that it's the same as Score, which can be calculated simply as

*regressor.score(X_test, Y_test)*

I am assuming that you are using scikit-learn; the *score* method of DecisionTreeRegressor returns the **R-squared coefficient**. Official Link

*score(self, X, y[, sample_weight])*

Return the coefficient of determination R^2 of the prediction.
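As a quick sanity check (a minimal sketch on made-up toy data), `score` returns exactly the same number as `sklearn.metrics.r2_score` on the model's predictions:

```
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Toy data, purely illustrative
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, size=100)

X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

regressor = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)

# score() is just R^2 computed on the given data
assert np.isclose(regressor.score(X_test, y_test),
                  r2_score(y_test, regressor.predict(X_test)))
```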

**What you should do -**

*You should calculate two metrics* - **R-square and MAE/MSE**.

**Reason** being - for an end-user/business person, MAE is useful, e.g. saying that *the model's prediction will be ~$250 away from the correct value on average*.
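For instance (a tiny sketch with invented price values), MAE is just the average absolute error, expressed in the same dollar units as the target:

```
import numpy as np
from sklearn.metrics import mean_absolute_error

# Hypothetical true and predicted prices
y_true = np.array([1000, 1500, 2000, 2500])
y_pred = np.array([1100, 1400, 2300, 2400])

# Average absolute dollar error - directly explainable to a business user
mae = mean_absolute_error(y_true, y_pred)
print(mae)  # 150.0, i.e. "off by ~$150 on average"
```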

**Challenge** with MAE/MSE is that it doesn't tell you whether the model is good unless you have an idea of the underlying data, e.g. *creating two models on pricing data of two different cities - Boston/Tokyo - where the MAE is $1000/$1500*.

**You can't conclude** that the former is a better model from these numbers alone, because the price scales of the two markets differ.

R-square helps here.
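To see why (a toy sketch with synthetic prices; the values and the 10x scale factor are made up), an error metric like MSE grows with the scale of the target, while R-square is unchanged when the whole problem is rescaled:

```
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.RandomState(42)

# Hypothetical market A: prices around $300k, predictions off by ~$10k
y_a = rng.normal(300_000, 50_000, size=200)
pred_a = y_a + rng.normal(0, 10_000, size=200)

# Hypothetical market B: same relative structure, 10x the price scale
y_b = y_a * 10
pred_b = pred_a * 10

# MSE scales by 10^2 = 100; R^2 is identical for both markets
print(mean_squared_error(y_a, pred_a), mean_squared_error(y_b, pred_b))
print(r2_score(y_a, pred_a), r2_score(y_b, pred_b))
```

Because R-square is a ratio of variances, it is unitless and comparable across datasets with different scales.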

**Adjusted R-square** (another regression metric) - if your feature set is fixed, you need not check this metric. It was devised to fix an issue with R-square when the feature set differs between the models being compared.

Snippet to get RMSE, R-square, and Adjusted R-square:

```
# https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def reg_metrics(y_test, y_pred, X_train):
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))
    r2 = r2_score(y_test, y_pred)
    # scikit-learn doesn't have adjusted R-square, hence custom code
    n = y_pred.shape[0]   # number of samples
    k = X_train.shape[1]  # number of features
    adj_r_sq = 1 - (1 - r2) * (n - 1) / (n - 1 - k)
    print(rmse, r2, adj_r_sq)
```

Links to study -

Statistics by Jim

Wikipedia