
## What is smoothing in the context of language models?

The term smoothing refers to adjusting the maximum likelihood estimator of a language model so that it becomes more accurate. At a minimum, smoothing must not assign a zero probability to unseen words.

## What is a unigram model?

A unigram model can be treated as the combination of several one-state finite automata. It splits the probability of a context across its individual terms: the probability of each word depends only on that word’s own probability in the document, so each unit is a one-state finite automaton.
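As a minimal sketch of this idea (the function name and toy corpus are illustrative, not from the article), a maximum-likelihood unigram model is just normalized word counts:

```python
from collections import Counter

def unigram_probs(tokens):
    """Maximum-likelihood unigram estimate: P(w) = count(w) / N."""
    counts = Counter(tokens)
    total = len(tokens)
    return {w: c / total for w, c in counts.items()}

probs = unigram_probs("the cat sat on the mat".split())
# P("the") = 2/6, since "the" occurs twice among six tokens
```

Because each word is scored independently, the probability of a sentence under this model is simply the product of the per-word probabilities.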

**What is bigram language model?**

As the name suggests, the bigram model approximates the probability of a word given all the preceding words by using only the conditional probability of the single preceding word. In other words, you approximate it with the probability: P(the | that)
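To make the approximation concrete, here is a minimal MLE bigram estimator; the function name and toy corpus are my own assumptions, not from the text:

```python
from collections import Counter

def bigram_prob(tokens, prev, word):
    """MLE bigram estimate: P(word | prev) = count(prev, word) / count(prev)."""
    pair_counts = Counter(zip(tokens, tokens[1:]))
    hist_counts = Counter(tokens[:-1])
    if hist_counts[prev] == 0:
        return 0.0
    return pair_counts[(prev, word)] / hist_counts[prev]

tokens = "that the cat saw that the dog ran".split()
p = bigram_prob(tokens, "that", "the")
# "that" occurs twice, both times followed by "the", so P(the | that) = 1.0
```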

### What is bigram and trigram models?

A 2-gram (or bigram) is a two-word sequence of words, like “I love”, “love reading”, or “Analytics Vidhya”. And a 3-gram (or trigram) is a three-word sequence of words like “I love reading”, “about data science” or “on Analytics Vidhya”.
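Extracting bigrams and trigrams from a sentence is a simple sliding window over the word sequence; this helper is a sketch (the name `ngrams` is my own):

```python
def ngrams(words, n):
    """Return all n-grams (as tuples) in a word sequence."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

words = "I love reading about data science".split()
bigrams = ngrams(words, 2)   # ("I", "love"), ("love", "reading"), ...
trigrams = ngrams(words, 3)  # ("I", "love", "reading"), ...
```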

### What are the smoothing methods of language model?

Common smoothing methods include:

- Laplace smoothing (also called add-one smoothing)
- Additive smoothing
- Good-Turing smoothing
- Kneser-Ney smoothing

**What is add smoothing?**

Add-1 smoothing (also called Laplace smoothing) is a simple smoothing technique that adds 1 to the count of every n-gram in the training set before normalizing the counts into probabilities.
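A minimal sketch of add-1 smoothing over unigrams, assuming a fixed vocabulary `vocab` (the function name and toy data are illustrative):

```python
from collections import Counter

def laplace_unigram(tokens, vocab):
    """Add-1 (Laplace) smoothing: P(w) = (count(w) + 1) / (N + V)."""
    counts = Counter(tokens)
    N, V = len(tokens), len(vocab)
    return {w: (counts[w] + 1) / (N + V) for w in vocab}

tokens = "the cat sat on the mat".split()
vocab = set(tokens) | {"dog"}          # "dog" never occurs in training
probs = laplace_unigram(tokens, vocab)
# probs["dog"] == 1/12: the unseen word no longer gets zero probability
```

Adding 1 to every count moves probability mass from frequent words to rare and unseen ones, which is exactly the adjustment the definition of smoothing above calls for.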

## Is Bert a language model?

BERT is an open source machine learning framework for natural language processing (NLP). BERT is designed to help computers understand the meaning of ambiguous language in text by using surrounding text to establish context.

## What is Witten Bell smoothing?

Witten-Bell smoothing is a smoothing algorithm generally credited to Witten and Bell, although it was arguably first described by Moffat. It is significant in the field of text compression and is relatively easy to implement.
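The article gives no formula, so as an assumption, here is a sketch of the common interpolated form for bigrams, where the weight given to the lower-order model is estimated from the number of distinct word types observed after the history:

```python
from collections import Counter

def witten_bell_bigram(tokens, prev, word):
    """Interpolated Witten-Bell estimate:
    P(w | h) = (c(h, w) + T(h) * P_uni(w)) / (c(h) + T(h)),
    where T(h) is the number of distinct word types seen after history h."""
    pair = Counter(zip(tokens, tokens[1:]))
    hist = Counter(tokens[:-1])
    uni = Counter(tokens)
    p_uni = uni[word] / len(tokens)
    T = sum(1 for h, _ in pair if h == prev)   # distinct continuations of prev
    c_h = hist[prev]
    if c_h + T == 0:
        return p_uni                           # unseen history: fall back to unigram
    return (pair[(prev, word)] + T * p_uni) / (c_h + T)

tokens = "the cat sat on the mat".split()
p = witten_bell_bigram(tokens, "the", "cat")
```

Intuitively, a history that has been followed by many different words is more likely to be followed by a new word, so it backs off more heavily to the unigram distribution.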

**What is absolute discounting?**

Absolute discounting involves subtracting a fixed discount, D, from each nonzero count and redistributing this probability mass to n-grams with zero counts. We implement absolute discounting using an interpolated model; Kneser-Ney smoothing combines this notion of discounting with a backoff model.
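A sketch of interpolated absolute discounting for bigrams, assuming D = 0.75 and an MLE unigram as the lower-order model (the discount value and all names are assumptions, not from the text):

```python
from collections import Counter

def absolute_discount_bigram(tokens, prev, word, D=0.75):
    """Interpolated absolute discounting:
    P(w | h) = max(c(h, w) - D, 0) / c(h) + lambda(h) * P_uni(w),
    where lambda(h) = D * N1+(h) / c(h) redistributes the discounted mass
    and N1+(h) is the number of distinct types seen after history h."""
    pair = Counter(zip(tokens, tokens[1:]))
    hist = Counter(tokens[:-1])
    uni = Counter(tokens)
    p_uni = uni[word] / len(tokens)
    c_h = hist[prev]
    if c_h == 0:
        return p_uni                              # unseen history: back off entirely
    distinct = sum(1 for h, _ in pair if h == prev)
    lam = D * distinct / c_h                      # total mass subtracted by the discount
    return max(pair[(prev, word)] - D, 0) / c_h + lam * p_uni

tokens = "the cat sat on the mat".split()
p = absolute_discount_bigram(tokens, "the", "cat")
```

Because exactly `D * N1+(h)` of probability mass is removed from the seen bigrams, setting `lambda(h)` to that amount makes the distribution sum to one.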

### What is a unigram language model?

In natural language processing, an n-gram is a sequence of n words. For example, “statistics” is a unigram (n = 1), “machine learning” is a bigram (n = 2), “natural language processing” is a trigram (n = 3), and so on.

### What is an un-smoothed unigram model?

At the other extreme, the un-smoothed unigram model is the over-fitting model: it gives excellent probability estimates for the unigrams in the training text, but misses the mark for unigrams in a different text.

**Do smoothed unigram models fit better to Dev2?**

This fits well with our earlier observation that a smoothed unigram model with a similar proportion (80–20) fits better to dev2 than the un-smoothed model does. In fact, the more different the evaluation text is from the training text, the more we need to interpolate our unigram model with the uniform.
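The 80–20 interpolation described above can be sketched as a mixture of the MLE unigram and the uniform distribution; `lam = 0.8` mirrors that proportion, and the names are my own:

```python
from collections import Counter

def interpolated_unigram(tokens, vocab, lam=0.8):
    """Mix the MLE unigram with the uniform distribution:
    P(w) = lam * count(w) / N + (1 - lam) * 1 / V."""
    counts = Counter(tokens)
    N, V = len(tokens), len(vocab)
    return {w: lam * counts[w] / N + (1 - lam) / V for w in vocab}

tokens = "the cat sat on the mat".split()
vocab = set(tokens) | {"dog"}
probs = interpolated_unigram(tokens, vocab)
# the unseen word "dog" receives the uniform share (1 - lam) / V = 0.2 / 6
```

Raising `1 - lam` shifts more mass to the uniform component, which is what the passage above recommends when the evaluation text diverges further from the training text.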

## What is wrong with the unigram model?

There is a big problem with the above unigram model: for a unigram that appears in the evaluation text but not in the training text, its count in the training text — hence its probability — will be zero.