Similarity measure for time series


In my work I have an observed time series and simulated ones. I want to compare the light curves and check for similarity, to find out which simulated curve fits best, or rather which parameters simulate the light curve best.

At the moment I do it with the cross-correlation function from NumPy. But I am not sure that is the best option, because the light curve with the highest cross-correlation coefficient does not always look like the best fit compared to other simulations with a lower coefficient. Is there another way to measure similarity? I read something about the chi-square statistic, but I am not sure how it works or how it could be applied to my problem.
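For reference, here is a minimal sketch of how the chi-square statistic could be applied to this problem (the per-point uncertainties `sigma` are an assumption; both curves must be sampled on the same time grid):

```python
import numpy as np

def chi_square(observed, simulated, sigma=None):
    """Chi-square distance between an observed and a simulated light curve.

    `sigma` holds per-point measurement uncertainties; if omitted, every
    point is weighted equally. Smaller values mean a better fit.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    if sigma is None:
        sigma = np.ones_like(observed)
    return np.sum(((observed - simulated) / sigma) ** 2)

# Toy data: sim_a tracks the observation more closely than sim_b.
obs = np.array([1.0, 2.0, 3.0, 2.5])
sim_a = np.array([1.1, 2.1, 2.9, 2.4])
sim_b = np.array([0.5, 1.5, 3.5, 3.0])
print(chi_square(obs, sim_a))  # → 0.04
print(chi_square(obs, sim_b))  # → 1.0
```

Unlike cross-correlation, which rewards matching shape regardless of scale, chi-square penalizes point-by-point amplitude differences, so the two measures can rank simulations differently.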

The observation data I use is not evenly binned, so I used the interpolation function of SciPy. Should I also smooth the observation data, or would I lose true features of my data? I thought about using Savitzky-Golay smoothing.
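The resampling and optional smoothing step could look like this (a sketch with made-up sample data; the grid size and the Savitzky-Golay window/order are illustrative choices, not values from my data):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import savgol_filter

# Hypothetical unevenly sampled observation (times and fluxes are made up).
t_obs = np.array([0.0, 0.3, 0.9, 1.4, 2.2, 2.6, 3.5, 4.1, 4.8, 5.0])
rng = np.random.default_rng(0)
flux_obs = np.sin(t_obs) + 0.1 * rng.normal(size=t_obs.size)

# Resample onto an evenly spaced grid so curves can be compared point-wise.
t_even = np.linspace(t_obs.min(), t_obs.max(), 50)
flux_even = interp1d(t_obs, flux_obs, kind="linear")(t_even)

# Optional Savitzky-Golay smoothing: window_length must be odd and larger
# than polyorder. A short window preserves more of the real features.
flux_smooth = savgol_filter(flux_even, window_length=7, polyorder=2)
```

Whether smoothing is safe depends on the timescale of the features: a window shorter than the narrowest real feature mostly removes noise, while a longer one starts to flatten genuine structure.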

At the moment I am using a brute-force method: I try out all possible parameter combinations and simulate the corresponding light curve. The problem is that this takes a lot of time with 20 parameters. The parameters are more or less dependent on each other, so I can't use a least-squares fit, because there are multiple possible minima. Is there a simple method that I overlooked, or is a restricted brute-force fit my best option?
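By "restricted brute force" I mean something like a coarse grid search that could later be refined around the best cell. A minimal two-parameter sketch (the `simulate` function and the grids are placeholders, not my actual model):

```python
import itertools
import numpy as np

def simulate(params):
    # Stand-in for the real light-curve simulator: amplitude * sine
    # at a given frequency, sampled on a fixed time grid.
    amplitude, frequency = params
    t = np.linspace(0.0, 1.0, 100)
    return amplitude * np.sin(2 * np.pi * frequency * t)

def chi_square(obs, sim):
    # Unweighted chi-square as the goodness-of-fit measure.
    return np.sum((obs - sim) ** 2)

# Pretend the observation was generated with amplitude=1.0, frequency=3.0.
obs = simulate((1.0, 3.0))

# Coarse grids over each parameter; with 20 parameters the full product
# explodes, which is why the grid must stay coarse and then be refined
# only around the current best combination.
grids = [np.linspace(0.5, 1.5, 5),   # amplitude candidates
         np.linspace(1.0, 5.0, 5)]   # frequency candidates
best = min(itertools.product(*grids),
           key=lambda p: chi_square(obs, simulate(p)))
print(best)  # → (1.0, 3.0)
```

The cost still grows exponentially with the number of parameters, which is exactly the problem described above; the sketch only shows the restriction idea, not a way around that growth.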

In the picture below you'll see one plot with the simulation and the observation data.

Thanks for all suggestions.

[Figure: red — simulation curve, blue — observed, interpolated data]


Posted 2019-08-08T22:45:18.853

