I've been working on the Titanic Kaggle competition using LogisticRegression from scikit-learn.
I've noticed something interesting: no amount of feature engineering or parameter tuning changes my base model's score by more than a percentage point in either direction.
At this point I'm completely self-taught, so I figure one of two things is happening (maybe both):
1. I'm doing logistic regression wrong.
2. Logistic regression isn't the right choice for this problem.
Is either of these true? My notebooks are linked below. They're a bit long, but if you're bored during quarantine, I'd appreciate any feedback:
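For context, the kind of baseline I mean is a plain scikit-learn pipeline like the sketch below. This is not my actual notebook code, just a minimal illustration with a made-up stand-in for the Titanic `train.csv` (the column names `Pclass`, `Sex`, `Age`, `Survived` match the real dataset; the rows here are fabricated so it runs standalone):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for Kaggle's train.csv (real columns, fake rows)
df = pd.DataFrame({
    "Pclass":   [1, 3, 3, 1, 2, 3, 1, 3, 2, 3, 1, 2],
    "Sex":      ["female", "male", "male", "female", "male", "female",
                 "male", "male", "female", "female", "female", "male"],
    "Age":      [38, 22, 35, 35, 27, 14, 54, 2, 28, 4, 58, 20],
    "Survived": [1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
})

# One-hot encode the categorical column; keep numeric features as-is
X = pd.get_dummies(df[["Pclass", "Sex", "Age"]],
                   columns=["Sex"], drop_first=True)
y = df["Survived"]

# Cross-validated accuracy of a vanilla logistic regression baseline
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=3, scoring="accuracy")
print(round(scores.mean(), 3))
```

My feature-engineering attempts amount to adding or transforming columns in `X` before the fit, which is the part that barely moves the score.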