## measuring flip-flop behaviour across several topics


I'm trying to analyze a user behavior I call "sentiment flipping" in a dataset, but I'm not sure how to proceed.

Suppose I have two groups of users; call them good and bad users.

My dataset contains N tweets classified into 6 topics. The tweets were created by both the bad and the good users.

The 6 topics cover general issues, but 3 of them concern organizations/individuals that the "bad" users support (A), while the other 3 concern ones that go against their ideologies (B).

The difference between the bad and good users in their tweeting behavior is:

• A good user posts tweets in some (and maybe all) of the topics without forcing a "positive" or "negative" sentiment on them.
• A bad user's tweets carry negative sentiment on the topics against her/his ideologies and positive sentiment on the topics she/he supports. The clearest difference between the two is that the bad user profusely posts negative sentiment on B topics and positive sentiment on A topics.

How can I measure/show this flipping behavior as a score/value, given that each tweet is represented by a vector like <# of Pos words, # of Neg words>?

I think a good solution would take into account how dense and ideologically clear the bad user's behavior is.
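To make the setup concrete, here is a minimal sketch (my own illustration, not an established method) of one way such a score could be built from the <# of Pos words, # of Neg words> vectors: compute a signed polarity per tweet, average it within the supported (A) and opposed (B) topic groups, and take the difference. The tuple format `(group, pos, neg)` and the group labels `'A'`/`'B'` are assumptions based on the description above.

```python
# Hypothetical sketch: a topic-aware "flipping" score built from the
# <# of Pos words, # of Neg words> tweet vectors described above.
from collections import defaultdict

def polarity(pos, neg):
    """Signed sentiment of one tweet in [-1, 1]; 0 if no sentiment words."""
    total = pos + neg
    return (pos - neg) / total if total else 0.0

def flip_score(tweets):
    """tweets: list of (group, pos, neg), where group is 'A' (supported)
    or 'B' (opposed).  Returns mean polarity on A minus mean polarity
    on B, so a user who is consistently positive on A and negative on B
    scores close to 2, while a neutral user scores near 0."""
    by_group = defaultdict(list)
    for group, pos, neg in tweets:
        by_group[group].append(polarity(pos, neg))
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(by_group['A']) - mean(by_group['B'])

# Illustrative made-up users: the "bad" user is positive on A, negative on B;
# the "good" user is roughly neutral everywhere.
bad_user = [('A', 5, 0), ('A', 4, 1), ('B', 0, 6), ('B', 1, 5)]
good_user = [('A', 2, 2), ('B', 3, 2), ('A', 1, 1)]
print(flip_score(bad_user))   # close to 2
print(flip_score(good_user))  # close to 0
```

This is only one possible operationalization; it ignores how many tweets a user posted per topic, which the "how dense" requirement would also need.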

This image summarizes the previous description:

What if you gave every user an emotionalism and bias rating? – Tasty213 – 2019-08-23T13:08:03.687

@Tasty213 yes, this is what I am looking for, but with the topics taken into account. – user_007 – 2019-08-24T21:36:49.863

So I have a theory for this. We want to be able to distinguish users who are emotional and biased from those who are unemotional and unbiased.

$$
\begin{aligned}
M &= \text{emotionality} \\
B &= \text{bias} \\
t_i &= \text{tweet number } i \\
b_i &= \text{number of negative words in } t_i \\
p_i &= \text{number of positive words in } t_i \\
w_i &= \text{total number of words in } t_i \\
M &= \frac{b_i + p_i}{w_i} \\
B &= \frac{(b_i - p_i)^2}{w_i}
\end{aligned}
$$
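These two ratings are easy to compute per tweet; aggregating them per user (e.g. by averaging over that user's tweets) is my assumption, not part of the formulas above. A minimal sketch:

```python
def emotionality(b, p, w):
    """M = (b + p) / w: fraction of sentiment-bearing words in a tweet."""
    return (b + p) / w

def bias(b, p, w):
    """B = (b - p)^2 / w: grows when one sentiment dominates,
    and is 0 for a perfectly balanced tweet."""
    return (b - p) ** 2 / w

# A 10-word tweet with 6 negative and 1 positive word (one-sided)
# versus a 10-word tweet with 3 of each (balanced but emotional).
print(emotionality(6, 1, 10), bias(6, 1, 10))  # 0.7 2.5
print(emotionality(3, 3, 10), bias(3, 3, 10))  # 0.6 0.0
```

Note that both tweets score similarly on emotionality, but only the one-sided tweet gets a high bias score, which is exactly the separation the answer is after.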