Disregarding possible computational constraints, are there general applications where lemmatization would be a counterproductive step when analyzing text data?
For example, is lemmatization something that is typically skipped when building a context-aware model?
For reference, dictionary.com defines lemmatization as the act of grouping together the inflected forms of a word for analysis as a single item.
For example, 'cook' is the lemma of 'cooking', so lemmatization means replacing the word 'cooking' with 'cook' after the text has been tokenized. Similarly, 'worse' has 'bad' as its lemma, so lemmatization would replace 'worse' with 'bad'.
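The replacement step described above can be sketched as a simple lookup applied after tokenization. The `LEMMAS` table below is a hypothetical toy example for illustration only; real lemmatizers (e.g. WordNet-based ones) rely on full dictionaries and morphological rules:

```python
# Toy lemma table: hypothetical, for illustration only.
LEMMAS = {
    "cooking": "cook",  # inflected verb form -> lemma
    "worse": "bad",     # irregular comparative -> lemma
}

def lemmatize(tokens):
    """Replace each token with its lemma, if one is known."""
    return [LEMMAS.get(token, token) for token in tokens]

tokens = ["this", "cooking", "is", "worse"]
print(lemmatize(tokens))  # ['this', 'cook', 'is', 'bad']
```

In practice the lookup is combined with part-of-speech information, since the correct lemma can depend on how the word is used in context.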