Probability distribution
A probability distribution is a function that maps each outcome in a sample space to its probability.
The probabilities of all possible outcomes in an experiment must sum to \(1\), because an experiment is guaranteed to produce some outcome from the sample space.
Just for fun, imagine an unfair coin where heads is \(3\) times more likely to come up than tails. Then, \(p(H) = \frac{3}{4}\) and \(p(T) = \frac{1}{4}.\) Adding up the probabilities of every outcome in the sample space \(\set{H, T}\) results in \(\frac{3}{4} + \frac{1}{4} = 1\), as expected.
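To make this concrete, here is a minimal Python sketch, assuming we represent a discrete distribution as a dictionary from outcomes to probabilities (the names `unfair_coin`, `"H"`, and `"T"` are just illustrative choices, not anything prescribed above):

```python
from fractions import Fraction

# A discrete probability distribution modelled as a mapping from
# each outcome in the sample space to its probability.
unfair_coin = {"H": Fraction(3, 4), "T": Fraction(1, 4)}

# The probabilities over the whole sample space must sum to 1.
assert sum(unfair_coin.values()) == 1
```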
Let's also consider a biased die where \(1\) is \(5\) times more likely to be rolled than any other number. Writing \(x\) for the probability of each of the other faces, we need \(5x + 5x = 10x = 1\), so \(x = \frac{1}{10}\). Then, \(p(1) = \frac{5}{10}\) and \(p(2) = p(3) = p(4) = p(5) = p(6) = \frac{1}{10}.\) The probability of the event of rolling an odd number is simply the sum of the probabilities of the outcomes that are odd numbers (the elements of that event): \(p(1) + p(3) + p(5) = \frac{5}{10} + \frac{1}{10} + \frac{1}{10} = \frac{7}{10}.\)
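The same dictionary-based sketch works for the biased die; again, the variable names `biased_die` and `odd_event` are just illustrative:

```python
from fractions import Fraction

# Biased die: face 1 is five times as likely as each other face.
biased_die = {1: Fraction(5, 10), **{face: Fraction(1, 10) for face in range(2, 7)}}
assert sum(biased_die.values()) == 1

# The probability of an event is the sum of the probabilities of the
# outcomes it contains. Here the event is "roll an odd number".
odd_event = {1, 3, 5}
p_odd = sum(biased_die[outcome] for outcome in odd_event)
print(p_odd)  # 7/10
```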