Event


An event is a set of outcomes for an experiment. In other words, it is a subset of that experiment's sample space.

If the probability distribution \(p\) is known, then the probability of an event \(E\) in the sample space \(S\) is: $$p(E) = \sum\limits_{s \in E}p(s)$$

When an element of an event turns out to be the actual outcome of an experiment, then that event is said to have occurred. The chance of this happening is the event's probability.

The event of rolling an even number with a die is \(\set{2, 4, 6}.\) Recall that the sample space in that experiment is \(\set{1, 2, 3, 4, 5, 6}.\)
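
Since a probability distribution is just a map from outcomes to probabilities and an event is just a set of outcomes, the summation above translates directly into code. Here's a minimal Python sketch (the names `S`, `p`, and `event_probability` are ours for illustration, not from any library), applied to the even-roll event:

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die, with a uniform
# distribution: each outcome has probability 1/6.
S = {1, 2, 3, 4, 5, 6}
p = {s: Fraction(1, 6) for s in S}

def event_probability(E, p):
    """p(E) = the sum of p(s) over every outcome s in the event E."""
    return sum(p[s] for s in E)

even = {2, 4, 6}                   # the event "the die shows an even number"
print(event_probability(even, p))  # 1/2
```

Using exact `Fraction`s rather than floats keeps the arithmetic exact, so the results in the sketches below can be compared with `==`.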

Mutual exclusivity


If two events are mutually exclusive, there is no way for both to occur in an experiment. Stated mathematically:

For mutually exclusive events \(E_1\) and \(E_2\), the probability of both occurring is \(0\): $$p(E_1 \cap E_2) = 0$$
⚠ Mutually exclusive events are disjoint because the sets of outcomes they represent have no overlap. In terms of set operations, \(E_1 \cap E_2 = \emptyset.\)

Another important property of mutually exclusive events is that the chance of either occurring in an experiment is simply the sum of their probabilities:

For mutually exclusive events \(E_1\) and \(E_2\), the probability of either occurring is the sum of their probabilities: $$p(E_1 \cup E_2) = p(E_1) + p(E_2)$$
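
Both properties are easy to check mechanically if events are modeled as Python sets. A small sketch, using two made-up disjoint events on a single fair die:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
p = {s: Fraction(1, 6) for s in S}

def prob(E):
    return sum(p[s] for s in E)

E1 = {1, 2}  # the die shows a number less than 3
E2 = {5, 6}  # the die shows a number greater than 4

# Disjoint sets of outcomes, so p(E1 ∩ E2) = 0...
assert E1 & E2 == set()
assert prob(E1 & E2) == 0

# ...and the probability of the union is the sum of the probabilities.
assert prob(E1 | E2) == prob(E1) + prob(E2)  # 2/6 + 2/6 = 4/6
```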

Consider an experiment where two dice are rolled. What's the probability that the sum of the numbers they show is \(4\) or less? If we break up that event into mutually exclusive "sub-events," we can be more organized in our counting. The event that the sum is \(\leq 4\) consists of \(3\) mutually exclusive sub-events: that the sum is \(2\), that the sum is \(3\), and that the sum is \(4.\) These are all mutually exclusive because the sum cannot be two different numbers at the same time. A sum of \(2\) comes from \(1\) outcome, \((1, 1)\); a sum of \(3\) from \(2\) outcomes, \((1, 2)\) and \((2, 1)\); and a sum of \(4\) from \(3\) outcomes, \((1, 3)\), \((2, 2)\), and \((3, 1).\) Therefore, we can just add up their probabilities to find the probability of their union (that any of them happen): \(\frac{1}{36} + \frac{2}{36} + \frac{3}{36} = \frac{6}{36} = \frac{1}{6}.\)
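
If you'd rather not trust the hand count, the same answer falls out of brute-force enumeration of all \(36\) outcomes. A quick sketch (the helpers `prob` and `sum_is` are hypothetical names, not standard functions):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(len(event), len(outcomes))

def sum_is(n):
    """The sub-event that the two dice add up to exactly n."""
    return {(a, b) for (a, b) in outcomes if a + b == n}

# The three mutually exclusive sub-events of "the sum is at most 4".
sub_events = [sum_is(2), sum_is(3), sum_is(4)]

print(sum(prob(E) for E in sub_events))  # 1/6, i.e. 1/36 + 2/36 + 3/36
```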

By the inclusion-exclusion principle, for events \(E_1\) and \(E_2\) that are not mutually exclusive: $$p(E_1 \cup E_2) = p(E_1) + p(E_2) - p(E_1 \cap E_2)$$
⚠ The correction provided by the inclusion-exclusion principle ensures that no outcome in either event has its probability counted more than once: outcomes in the intersection would otherwise contribute to both \(p(E_1)\) and \(p(E_2).\)

To show two events are not mutually exclusive, come up with an outcome that causes both events to occur at the same time.

In an experiment, a die is rolled. What is the probability that the number it shows is less than \(3\) or odd? Well, there are \(2\) outcomes where it's less than \(3\): rolling \(1\) or \(2.\) There are \(3\) outcomes where it's odd: rolling \(1\), \(3\), or \(5.\) Those two events are not mutually exclusive because \(1\) is in both of them. Therefore, we'll need to subtract the probability of their intersection from the sum of their probabilities: \(\frac{2}{6} + \frac{3}{6} - \frac{1}{6} = \frac{4}{6} = \frac{2}{3}.\)
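
This, too, can be verified by enumeration. A sketch of the inclusion-exclusion check on the same single-die experiment (the event names are illustrative):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}

def prob(E):
    return Fraction(len(E), len(S))  # uniform distribution on the die

less_than_3 = {1, 2}
odd = {1, 3, 5}

# Not mutually exclusive: the outcome 1 lies in both events.
assert less_than_3 & odd == {1}

# Inclusion-exclusion: p(E1 ∪ E2) = p(E1) + p(E2) - p(E1 ∩ E2).
lhs = prob(less_than_3 | odd)
rhs = prob(less_than_3) + prob(odd) - prob(less_than_3 & odd)
assert lhs == rhs
print(lhs)  # 2/3
```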

⚠ The probability that one event or another occurs is the probability of their union occurring. This is a little hint back to the underlying logical analogue of the union operation: disjunction.

Complement


The complement of an event is the set of all outcomes in the sample space that are not a part of that event. Naturally, the probability of an event occurring plus the probability of it not occurring must add up to \(1.\)

For any event \(E\) in a sample space \(S\): $$p(\overline{E}) = 1 - p(E)$$
⚠ The universal set in this context is the sample space, so \(\overline{E} = S - E.\)
⚠ If an event does not occur, you can deduce that its complement must have occurred.

Since an event and its complement share no outcomes, they are mutually exclusive. Additionally, their union represents the entire sample space, that is to say, for any event \(E\), it's always true that \(E \cup \overline{E} = S\), where \(S\) is the sample space of the experiment.
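
In code, the complement is literally a set difference against the sample space. A short sketch verifying mutual exclusivity, coverage of the sample space, and the complement rule all at once:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}

def prob(E):
    return Fraction(len(E), len(S))

E = {2, 4, 6}
E_bar = S - E  # the complement: everything in the sample space not in E

assert E & E_bar == set()          # mutually exclusive
assert E | E_bar == S              # together they cover the sample space
assert prob(E_bar) == 1 - prob(E)  # the complement rule
```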

As you might already know from the complement rule, the complement of a set can sometimes be a lot easier to count than the original set. Consider an experiment where a coin is flipped \(5\) times in a row. Here, the outcomes are \(5\)-element sequences where each element is heads \((H)\) or tails \((T).\) What is the probability that heads appears at least once in the sequence? Well, heads could appear once in multiple ways, or twice, or three times, or four... this is going to be tedious to count. That is, it would be, if the complement rule were not in our toolbox! If we simply count by complement, we find the probability that heads does not appear at least once in the sequence (in other words, that only tails appears) is just \(\frac{1}{2^5}\), since the all-tails sequence is a single outcome out of the \(2^5\) total given by the rule of product. Then, all that needs to be done is subtract that probability from \(1\) to get the probability we were originally looking for: \(1 - \frac{1}{2^5} = \frac{31}{32} = 0.96875.\) So, yeah, you'll almost certainly get at least \(1\) heads if you flip a coin \(5\) times in a row.
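
With only \(2^5 = 32\) sequences, counting by complement is also easy to sanity-check by brute force. A sketch comparing the complement-rule answer against direct enumeration of the original event:

```python
from fractions import Fraction
from itertools import product

# All 2^5 = 32 equally likely sequences of five coin flips.
outcomes = list(product("HT", repeat=5))

# Counting by complement: "heads at least once" fails only for TTTTT,
# a single outcome, so its complement has probability 1/32.
p_no_heads = Fraction(1, len(outcomes))
p_at_least_one_head = 1 - p_no_heads  # 31/32

# Cross-check against direct enumeration of the original event.
direct = Fraction(sum("H" in seq for seq in outcomes), len(outcomes))
assert direct == p_at_least_one_head
print(float(p_at_least_one_head))  # 0.96875
```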
