Alternatives to TF-IDF and Cosine Similarity when comparing documents of differing formats



I've been working on a small, personal project which takes a user's job skills and suggests the most suitable career for them based on those skills. I use a database of job listings to achieve this. At the moment, the code works as follows:

1) Process the text of each job listing to extract skills that are mentioned in the listing

2) For each career (e.g. "Data Analyst"), combine the processed text of the job listings for that career into one document

3) Calculate the TF-IDF of each skill within the career documents

After this, I'm not sure which method I should use to rank careers based on a list of a user's skills. The most popular method that I've seen would be to treat the user's skills as a document as well, then to calculate the TF-IDF for the skill document, and use something like cosine similarity to calculate the similarity between the skill document and each career document.
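For reference, that popular approach can be sketched in a few lines, using made-up TF-IDF vectors over a shared vocabulary (all numbers are purely illustrative):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up TF-IDF vectors over a shared 4-term vocabulary:
# ["python", "sql", "excel", "communication"]
career_vec = np.array([0.40, 0.35, 0.30, 0.10])   # a career document
user_vec   = np.array([0.50, 0.50, 0.00, 0.00])   # the user's skills, treated as a "document"

score = cosine_similarity(user_vec, career_vec)
```

The user would get one such score per career document, and the careers would be ranked by it.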

This doesn't seem like the ideal solution to me, since cosine similarity is best used when comparing two documents of the same format. For that matter, TF-IDF doesn't seem like the appropriate metric to apply to the user's skill list at all. For instance, if a user adds additional skills to their list, the TF of each existing skill will drop. In reality, I don't care what the frequencies of the skills in the user's list are -- I just care that they have those skills (and maybe how well they know them).

It seems like a better metric would be to do the following:

1) For each skill that the user has, calculate the TF-IDF of that skill in the career documents

2) For each career, sum the TF-IDF results for all of the user's skills

3) Rank the careers by the above sum
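For concreteness, the three steps above might look like this (the careers, skills, and TF-IDF weights are all made up for illustration):

```python
# Hypothetical per-career TF-IDF weights for each skill.
tfidf = {
    "Data Analyst":    {"python": 0.40, "sql": 0.35, "excel": 0.30},
    "Data Engineer":   {"python": 0.55, "sql": 0.45, "spark": 0.50},
    "Project Manager": {"excel": 0.25, "communication": 0.50},
}
user_skills = ["python", "sql"]

# For each career, sum the TF-IDF weights of the skills the user has.
scores = {
    career: sum(weights.get(skill, 0.0) for skill in user_skills)
    for career, weights in tfidf.items()
}

# Rank careers by the summed score, highest first.
ranking = sorted(scores, key=scores.get, reverse=True)
```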

Am I thinking along the right lines here? If so, are there any algorithms that work along these lines, but are more sophisticated than a simple sum? Thanks for the help!

Richard Knoche

Posted 2017-01-02T20:41:13.493

Reputation: 131

Check out Doc2vec; Gensim has an implementation – Blue482 – 2017-01-03T11:55:15.797



Perhaps you could use word embeddings to better represent the distance between certain skills. For instance, "Python" and "R" should be closer together than "Python" and "Time management", since the first two are both programming languages.

The whole idea is that words that appear in the same context should be closer.

Once you have these embeddings, you would have a set of skills for the candidate, and sets of skills of various sizes for the jobs. You could then use Earth Mover's Distance to calculate the distance between the sets. This distance measure is rather slow to compute exactly (it requires solving an optimization problem over all pairs of points), so it might not scale well if you have many jobs to go through.

To deal with the scalability issue, you could first pre-rank the jobs by how many skills they share with the candidate, and only run the full distance computation on the most promising ones.
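As a rough illustration of Earth Mover's Distance on embedding sets, here is a minimal sketch that solves the underlying transportation problem as a linear program with SciPy. The 2-D "embeddings" are toy placeholders, not real word vectors, and both sets are weighted uniformly:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def earth_movers_distance(X, Y):
    """EMD between two uniformly weighted point sets,
    solved as a transportation linear program."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n, m = len(X), len(Y)
    C = cdist(X, Y)                      # pairwise Euclidean costs, shape (n, m)
    A_eq, b_eq = [], []
    for i in range(n):                   # each supply point ships exactly 1/n
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(1.0 / n)
    for j in range(m):                   # each demand point receives exactly 1/m
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(1.0 / m)
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy 2-D "embeddings" for a candidate's skills and a job's skills.
candidate = [[0.0, 0.0], [1.0, 0.0]]
job       = [[0.0, 0.1], [1.0, 0.1], [0.5, 0.9]]
distance = earth_movers_distance(candidate, job)
```

The LP formulation here is the textbook definition; specialized solvers are considerably faster in practice, which is why the pre-ranking step above matters.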

Valentin Calomme


Reputation: 4 666


A common and simple method to match "documents" is to use TF-IDF weighting, as you have described. However, as I understand your question, you want to rank each career (-document) based on a set of the user's skills.

If you create a "query vector" from the skills, you can multiply the vector with your term-career matrix (with all the tf-idf weights as values). The resulting vector would give you a ranking score per career-document which you can use to pick the top-k careers for the set of "query skills".

For example, if your query vector $\bar{q}$ consists of zeros and ones and is of size $1 \times |terms|$, and your term-document matrix $M$ is of size $|terms| \times |documents|$, then $\bar{q} M$ results in a vector of size $1 \times |documents|$ whose elements equal the sum of every query term's TF-IDF weight per career document.
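As a sketch of that matrix product (the terms, careers, and weights are all made up for illustration):

```python
import numpy as np

# Hypothetical TF-IDF weights: 4 terms (rows) x 3 career documents (columns).
terms   = ["python", "sql", "excel", "communication"]
careers = ["Data Analyst", "Data Engineer", "Project Manager"]
M = np.array([
    [0.40, 0.55, 0.05],   # python
    [0.35, 0.45, 0.10],   # sql
    [0.30, 0.05, 0.25],   # excel
    [0.10, 0.05, 0.50],   # communication
])

# Binary query vector: the user knows python and sql.
q = np.array([1, 1, 0, 0])

scores = q @ M            # one summed TF-IDF score per career document
ranking = [careers[i] for i in np.argsort(scores)[::-1]]
```

Each entry of `scores` is exactly the sum of the query terms' TF-IDF weights in that career document, which matches the summing scheme proposed in the question.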

This method of ranking is one of the simplest and many variations exist. The TF-IDF entry on Wikipedia also describes this ranking method briefly. I also found this Q&A on SO about matching documents.



Reputation: 66

Surprisingly, a simple average of word embeddings is often as good as a weighted average of embeddings done with TF-IDF weights. – wacax – 2018-03-26T21:38:02.013


Use the Jaccard Index. This will very much serve your purpose.
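A minimal sketch, treating the user's skills and each career's skills as plain sets (the skill names are hypothetical):

```python
def jaccard_index(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B|, ranging from 0 to 1."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0          # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

user_skills   = {"python", "sql", "excel"}
career_skills = {"python", "sql", "spark", "hadoop"}
score = jaccard_index(user_skills, career_skills)   # 2 shared / 5 total
```

Note that, unlike the TF-IDF approaches above, this ignores how important each skill is to a career; it only measures set overlap.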

Himanshu Rai


Reputation: 1 608


You can try using "gensim". I did a similar project with unstructured data. Gensim gave better scores than standard TF-IDF, and it also ran faster.

Harsha Reddy


Reputation: 21