Issues with pandas chunk merge


I'm trying to solve a Kaggle competition.

Since the data is too large to fit in memory at once, I'm trying to clean, process, and save the data back to disk in chunks. This is the code I'm using:

import json
from ast import literal_eval

import pandas as pd
from pandas.io.json import json_normalize

# Process individual chunks to normalize JSON columns and remove unwanted columns
def process_chunk(df):
    df['hits'] = df['hits'].apply(literal_eval)
    df['hits'] = df['hits'].str[0]
    df = df[pd.notnull(df['hits'])]

    df['customDimensions'] = df['customDimensions'].apply(literal_eval)
    df['customDimensions'] = df['customDimensions'].str[0]
    df = df[pd.notnull(df['customDimensions'])]

    JSON_COLUMNS = ['device', 'geoNetwork', 'totals', 'trafficSource', 'hits', 'customDimensions']

    for column in JSON_COLUMNS:
        column_as_df = json_normalize(df[column])
        column_as_df.columns = [f"{column}.{subcolumn}" for subcolumn in column_as_df.columns]
        df.drop(column, axis=1, inplace=True)
        df = df.merge(column_as_df, right_index=True, left_index=True)

    # remove some columns to reduce size
    # we only need a few hits columns, I suppose
    want_hits_col = ['hits.type', 'hits.hour', 'hits.hitNumber']
    all_hits = df[df.columns[pd.Series(df.columns).str.startswith('hits')]].columns
    hits_columns_remove = list(set(all_hits) - set(want_hits_col))

    df = df.drop(hits_columns_remove, axis=1)
    return df

# divide the data into chunks and save processed chunks incrementally
def save_parse(csv_path, chunksize=500):
    JSON_COLUMNS = ['device', 'geoNetwork', 'totals', 'trafficSource']

    chunks = pd.read_csv(csv_path,
                         converters={column: json.loads for column in JSON_COLUMNS},
                         dtype={'fullVisitorId': 'str'},  # Important!!
                         chunksize=chunksize)
    chunk_counter = 1
    for chunk in chunks:
        processed = process_chunk(chunk)
        # show me the progress
        print("Processed chunk number - {} with shape {}".format(chunk_counter, processed.shape))
        chunk_counter += 1
        # save to csv (write the header only for the first chunk)
        processed.to_csv('train_chunks_appended.csv', mode='a', header=(chunk_counter == 2))
    print("Done parsing")

For the first chunk, the process_chunk method outputs the expected result (df.shape = (211, 59)). Strangely, from the second chunk onwards, the chunks are not processed properly.

I think the problem is in merging the normalized JSON columns back into the dataframe at this line:

df = df.merge(column_as_df, right_index=True, left_index=True)

which is why the shape becomes (0, 27) for the rest of the chunks. I am not able to figure out why this is happening. Any pointers would be appreciated.
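A minimal illustration of what could produce an empty result here: `merge` with `left_index=True, right_index=True` performs an inner join on index labels, so two frames with disjoint indexes (e.g. a later chunk whose index starts at 500, merged against a freshly built frame with a default `RangeIndex`) yield zero rows. The frames below are made up for illustration.

```python
import pandas as pd

# 'left' mimics a later chunk whose index is offset (500..502);
# 'right' mimics a json_normalize result with a default 0-based index.
left = pd.DataFrame({'a': [1, 2, 3]}, index=[500, 501, 502])
right = pd.DataFrame({'b': [4, 5, 6]})  # RangeIndex 0..2

# Inner join on index labels: no labels in common, so no rows survive.
merged = left.merge(right, left_index=True, right_index=True)
print(merged.shape)  # (0, 2)
```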


Posted 2018-12-01T19:46:00.673

Reputation: 11

Check if the index in one dataframe exists in the other. It could be that they all have different indexes and therefore merge does not find the intersection of the two datasets – – 2018-12-01T21:30:00.287

You're right. From the second chunk onwards, the chunk dataframe's index starts at the chunk offset (i.e. chunk size * chunk number), but column_as_df (the one from json_normalize) starts at 0. Now, is there a way to preserve the index during the normalization process? – Sany – 2018-12-01T23:57:54.700

Save the index in a variable, do the merge without the index, and assign the index afterwards. – – 2018-12-02T08:41:56.543

Sorry, I was caught up in something. I tried that approach, but the JSON data produces a variable number of columns and messes up the entire structure. I'm still trying to figure out a possible solution. Thank you for your suggestions – Sany – 2018-12-04T02:44:20.193
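For reference, one way the comments' diagnosis could be addressed (a sketch, not tested against the competition data, and assuming each cell of the JSON column normalizes to exactly one row): reassign the chunk's index to the normalized frame before merging, so the index-on-index join lines up regardless of how many columns `json_normalize` produces. Uses `pd.json_normalize` (importable as `json_normalize` from `pandas.io.json` in older pandas); the `chunk` frame here is made up for illustration.

```python
import pandas as pd

# A toy chunk whose index is offset, as happens for the second chunk onwards.
chunk = pd.DataFrame({'totals': [{'visits': 1}, {'visits': 2}]},
                     index=[500, 501])

column_as_df = pd.json_normalize(chunk['totals'])  # default RangeIndex 0..1
column_as_df.index = chunk.index                   # realign to the chunk's index

merged = chunk.drop('totals', axis=1).merge(
    column_as_df, left_index=True, right_index=True)
print(merged.shape)  # (2, 1)
```

With the indexes aligned, the inner join keeps every row of the chunk instead of returning an empty frame.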

No answers