Python can't take input while using functions


While building my model, I figured that it makes sense to reuse some of the code that I've been using for the train dataset on the test set as well, so I moved the code performing the shared operations into one function definition. In this function I am handling missing values, and I use its return value to perform one-hot encoding before fitting a random forest regression. However, it's throwing the following error:

Traceback (most recent call last):
  File "C:/Users/security/Downloads/AP/Boston-Kaggle/", line 56, in <module>, y_train)
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\sklearn\feature_selection\", line 196, in fit, y, **fit_params)
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\sklearn\ensemble\", line 249, in fit
    X = check_array(X, accept_sparse="csc", dtype=DTYPE)
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\sklearn\utils\", line 542, in check_array
    allow_nan=force_all_finite == 'allow-nan')
  File "C:\Users\security\AppData\Roaming\Python\Python37\site-packages\sklearn\utils\", line 56, in _assert_all_finite
    raise ValueError(msg_err.format(type_err, X.dtype))
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

I did not have this problem while using the same code without organizing it into a function. def feature_selection_and_engineering(df) is the function in question. The following is my entire code.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

train = pd.read_csv("")
test = pd.read_csv("")

def feature_selection_and_engineering(df):
    # Creating a series of how many NaN's are in each column
    nan_counts = df.isna().sum()

    # Creating a template list
    nan_columns = []

    # Iterating over the series and, if the value is more than 0 (i.e. there are some NaN's present),
    # appending that column's name to the list
    for i in range(0, len(nan_counts)):
        if nan_counts[i] > 0:
            nan_columns.append(nan_counts.index[i])

    # Iterating through all the columns which are known to have NaN's
    for i in nan_columns:
        if df[i].dtype == 'float64':
            df[i] = df[i].fillna(df[i].mean())
        elif df[i].dtype == 'object':
            df[i] = df[i].fillna('XX')

    # Creating a template list
    categorical_columns = []

    # Iterating across all the columns,
    # checking if they're of the object datatype and if they are, appending them to the categorical list
    for i in range(0, len(df.dtypes)):
        if df.dtypes[i] == 'object':
            categorical_columns.append(df.columns[i])

    return categorical_columns

# Perform one-hot encoding
OHE_sdf = pd.get_dummies(feature_selection_and_engineering(train))

# drop the old categorical column from original df
train.drop(columns = feature_selection_and_engineering(train), axis = 1, inplace = True)

# attach one-hot encoded columns to original data frame
train = pd.concat([train, OHE_sdf], axis = 1, ignore_index = False)

# Dividing the training dataset into train/test sets with the test size being 20% of the overall dataset.
x_train, x_test, y_train, y_test = train_test_split(train, train['SalePrice'], test_size = 0.2, random_state = 42)

randomForestRegressor = RandomForestRegressor(n_estimators=1000)

# Invoking the Random Forest Classifier with a threshold of 1.25x the mean to select correlating features
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100), threshold = '1.25*mean')
sel.fit(x_train, y_train)

selected = sel.get_support()

randomForestRegressor.fit(x_train, y_train)

# Assigning the accuracy of the model to the variable "accuracy"
accuracy = randomForestRegressor.score(x_train, y_train)

# Predicting for the data in the test set
predictions = randomForestRegressor.predict(feature_selection_and_engineering(test))

# Writing the predictions to a new CSV file
submission = pd.DataFrame({'Id': test['PassengerId'], 'SalePrice': predictions})
filename = 'Boston-Submission.csv'
submission.to_csv(filename, index=False)

print(accuracy*100, "%")





There might be two reasons why you get this error.

One is that you probably have +/-inf values in your float columns. Such values are not replaced by fillna, so you need to replace them yourself (with numpy imported as np), like this:

 df.loc[(df[col] == np.float64('inf')) | (df[col] == -np.float64('inf')), col] = 0.0

You need to do this for all float columns, for example with a loop like the sketch below.
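A minimal sketch, assuming df is your dataframe and that 0.0 is an acceptable stand-in for infinite values:

import numpy as np

# Replace +/-inf with 0.0 in every float column
for col in df.select_dtypes(include=['float']).columns:
    df.loc[(df[col] == np.inf) | (df[col] == -np.inf), col] = 0.0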

Then maybe you have other types in your dataframe that contain NaN, None, inf or -inf, e.g. other float types such as float32, object, or Int64. Before training/applying the model it would probably be best to use the same datatype, e.g. float64, for all numeric columns. If you'd like to do that, you can simply do:

dt = df.dtypes
for col in dt.index[dt.map(lambda t: t.kind).isin(list('bifc'))].to_list():
    df[col] = df[col].astype('float64')

And then run your NaN/inf-replacement code. A quick way to verify that everything is finite afterwards is sketched below.
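As a sanity check before fitting (a sketch, assuming the numeric columns have been cast to float64 as above):

import numpy as np

# Raises an AssertionError if any NaN or +/-inf survived the cleanup
assert np.isfinite(df.select_dtypes(include=[np.number]).to_numpy()).all()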

Btw., in the same way you can also replace your loop, changing:

# Creating a template list
nan_columns = []

# Iterating over the series and if the value is more than 0 (i.e. there are some NaN's present)
for i in range(0, len(nan_counts)):
    if nan_counts[i] > 0:
        nan_columns.append(nan_counts.index[i])

into:

nan_columns = nan_counts.index[nan_counts > 0].to_list()
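The loop that collects the object-typed columns can be condensed in the same way (a sketch based on the question's code):

categorical_columns = df.columns[df.dtypes == 'object'].to_list()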

