The tf-idf-Statistic For Keyword Extraction

The tf-idf-statistic (“term frequency – inverse document frequency”) is a common tool for extracting keywords from a document by considering not just the single document but all documents of the corpus. In terms of tf-idf a word is important for a specific document if it shows up relatively often within that document and rarely in the other documents of the corpus. I used tf-idf for extracting keywords from protocols of sessions of the German Bundestag and am quite happy with the results. Given that I was dealing with (so far) 18 documents, together containing more than one million words which would have to be aggregated for the term frequency, then outer joined and finally fed to the formula, I was at first a bit worried about how R would perform. To my surprise the whole processing, from reading the files from disk to the final table of tf-idf-values, took about 8 seconds. That’s not bad at all.

The tf part

The tf-formula is the ratio of a term’s occurrences in a document to the number of occurrences of the most frequent word within the same document: tf(t, d) = f(t, d) / max(f(w, d)) over all words w in d. Why this makes sense is pretty self-explanatory. But obviously we would end up with stop words yielding high scores – and even if those had been discarded beforehand, a lot of words naturally show up often in a long text without being relevant to the specific document.
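A minimal sketch of the tf-computation in R (the function name and the toy document are mine, just for illustration):

    # term frequency: a term's count divided by the count of the
    # most frequent term within the same document
    term_freq <- function(tokens) {
      counts <- table(tokens)
      counts / max(counts)
    }

    doc <- c("budget", "debate", "budget", "minister", "budget", "debate")
    round(term_freq(doc), 2)
    #   budget   debate minister
    #     1.00     0.67     0.33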

The idf part

And this is exactly where the idf-factor comes into play, as it represents the inverse of the share of documents in which the regarded term can be found: idf(t) = log(N / n_t), where N is the size of the corpus and n_t the number of documents containing the term t. The lower the number of containing documents relative to the size of the corpus, the higher the factor. The reason why this ratio is not used directly but rather its logarithm is that otherwise the effective scoring penalty for showing up in a second document would be too extreme – without the logarithm, the value for a term found in just one document would be twice the value for a term found in two. This would heavily bias the ranking in favor of super-rare words, even if the tf-factor indicates a high relevance. And it is very unlikely that a word is of high relevance in one document but never used anywhere else.
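A matching sketch for the idf-part (again, the function name is mine; the corpus is assumed to be a list of token vectors):

    # inverse document frequency: log of corpus size over the
    # number of documents containing the term
    inv_doc_freq <- function(term, corpus) {
      n_containing <- sum(sapply(corpus, function(doc) term %in% doc))
      log(length(corpus) / n_containing)
    }

    # with 20 documents: the raw ratios 20/1 and 20/2 differ by a
    # factor of 2, the logged versions only by a factor of ~1.3
    log(20 / 1) / log(20 / 2)  # ~1.3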

tf and idf together

Both factors react positively to higher relevance – one using local information, the other taking a more global perspective. So we can simply take their product and we have the “traditional” tf-idf-formula: tfidf(t, d) = tf(t, d) * idf(t). But this formula is not set in stone, and depending on whether you want to put emphasis on the tf- or rather on the idf-part, it makes sense to get a feeling for the ranking behaviour in your use case and apply adjustments to it until you are satisfied with the results.

For example, in the case of the Bundestag protocols I came to the conclusion that the final rankings are more useful if I put some more weight on the idf-part – which effectively penalizes a word’s ranking if the word shows up in more than one document. This seems to make sense right now, as my corpus holds only 18 documents. So what I did is add the idf-part, divided by the size of the corpus, to the product: tfidf’(t, d) = tf(t, d) * idf(t) + idf(t) / N. This has the effect that the penalization of non-exclusivity decreases as the corpus grows in size (more protocols being published).

On that note, let’s see how the two versions of tf-idf compare – assuming a corpus containing 20 documents and the most frequent word of the regarded document showing up 4000 times.

Let’s for example compare a word A occurring 20 times in the document d and contained in 2 documents of the corpus to another word B occurring 10 times in the document d and contained in only 1 document (d, of course) of the corpus. With the traditional formula, the tf-idf-statistic of B would be about 35% lower than the tf-idf-statistic of A. With the alternative tf-idf-formula, on the other hand, B’s tf-idf-statistic would be about 24% higher than A’s! So it makes sense to think a bit about how exactly one would like the score to behave and rank words.
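Those figures are easy to check in R – a quick sketch, assuming natural logarithms (which reproduce both percentages):

    N <- 20          # corpus size
    max_freq <- 4000 # occurrences of the most frequent word in d

    tf     <- function(f) f / max_freq
    idf    <- function(n) log(N / n)
    tfidf  <- function(f, n) tf(f) * idf(n)              # traditional formula
    tfidf2 <- function(f, n) tf(f) * idf(n) + idf(n) / N # tfidf'

    # word A: 20 occurrences in d, contained in 2 documents
    # word B: 10 occurrences in d, contained in 1 document
    tfidf(10, 1)  / tfidf(20, 2)  - 1  # ~ -0.35
    tfidf2(10, 1) / tfidf2(20, 2) - 1  # ~ +0.24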

I didn’t want to spend too much time on this sub-project, so I kept it at my rule-of-thumb optimization. But when you start thinking about the tf-idf idea, a lot of possible tweaks come to mind. To give an example, true to the motto “once won’t do any harm”, we could restrict the denominator of the idf to taking only documents into account where the term shows up at least twice.
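As a sketch, this tweak would only change the counting rule in the idf-denominator (the function name and the guard against zero qualifying documents are my additions):

    # idf variant: only count documents in which the term
    # shows up at least twice
    inv_doc_freq_min2 <- function(term, corpus) {
      n <- sum(sapply(corpus, function(doc) sum(doc == term) >= 2))
      log(length(corpus) / max(n, 1))  # max() guards against log(Inf)
    }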

Constraints

Obviously the tf-idf approach to quantifying a keyword’s relevance has its limits. For example, it is easily conceivable that a word shows up in a lot of documents of the corpus and yet plays a central role in each of them. Or a subject is covered in several documents because it is very important – but tf-idf would penalize terms typical for this subject for exactly that reason. This is why tf-idf is most certainly not the answer to everything. I came across another idea, described in a paper from 2009, where the density of a term is used to infer its relevance. The basic idea is that a very relevant word will show relatively strong local densities, compared to a common word with a more uniform density. Below you see a density approximation for three stop words (“and”, “the” (male) and “the” (female)) and the densities for the three terms that scored highest with respect to tf-idf in protocol #11.

[Figures: density estimates for the three stop words and for the three top-scoring tf-idf terms of protocol #11]

If it is possible to take the distribution into account, then we can even determine relevancy for isolated single documents.
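A minimal sketch of that density idea in R (my own illustration, not the method from the cited paper):

    # approximate the positional density of a term within a document;
    # a relevant keyword should show pronounced local peaks,
    # a stop word a rather uniform curve
    plot_term_density <- function(tokens, term) {
      positions <- which(tokens == term)
      plot(density(positions, from = 1, to = length(tokens)),
           main = term, xlab = "position in document")
    }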


(original article published on www.joyofdata.de)
