Dependent and independent random events. Basic formulas for addition and multiplication of probabilities


The concepts of dependence and independence of random events. Conditional probability. Formulas for the addition and multiplication of probabilities for dependent and independent random events. The total probability formula and Bayes' formula.

Addition theorems

Let us find the probability of the sum of events A and B (under the assumption of their compatibility or incompatibility).


Theorem 2.1. The probability of the sum of a finite number of incompatible events is equal to the sum of their probabilities:

P(A1 + A2 + ... + An) = P(A1) + P(A2) + ... + P(An).



Example 1. The probability that a pair of men's shoes of size 44 will be sold in a store is 0.12; of size 45, 0.04; of size 46 and larger, 0.01. Find the probability that a pair of men's shoes of at least size 44 will be sold.


Solution. The desired event D will occur if a pair of shoes of size 44 (event A), of size 45 (event B), or of size 46 and larger (event C) is sold, i.e. the event D is the sum of the events A, B and C. The events A, B and C are incompatible. Therefore, by the theorem on the addition of probabilities, we obtain

P(D) = P(A) + P(B) + P(C) = 0.12 + 0.04 + 0.01 = 0.17.



Example 2. Under the conditions of Example 1, find the probability that the next pair of shoes sold will be smaller than size 44.


Solution. The events "a pair of shoes smaller than size 44 will be sold" and "a pair of shoes of size not less than 44 will be sold" are opposite. Therefore, by formula (1.2), the probability of the desired event is

P = 1 - P(D) = 1 - 0.17 = 0.83,

because P(D) = 0.17, as found in Example 1.


Theorem 2.1 on the addition of probabilities is valid only for incompatible events. Using it to find the probability of joint events can lead to incorrect and sometimes absurd conclusions, as the following example clearly shows. Let Electra Ltd fulfill an order on time with probability 0.7. What is the probability that the firm will complete at least one of three orders on time? Denote by A1, A2 and A3 the events that the company fulfills the first, second and third order on time, respectively. If we apply Theorem 2.1 to find the desired probability, we get P(A1 + A2 + A3) = 0.7 + 0.7 + 0.7 = 2.1. The probability of the event turned out to be greater than one, which is impossible. This is because the events are joint: the fulfillment of the first order on time does not exclude the fulfillment of the other two on time.
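For illustration, if we additionally assume that the three orders are fulfilled independently (an assumption not stated in the text), the correct probability of at least one on-time order is found through the complement event. A minimal Python sketch:

import itertools

p = 0.7  # probability that a single order is completed on time

# Wrong: Theorem 2.1 applied to joint events
wrong = p + p + p  # 2.1, greater than one - impossible

# Correct: complement of "all three orders are late"
correct = 1 - (1 - p) ** 3  # 0.973

# Cross-check by enumerating all 2^3 on-time/late combinations
check = sum(
    (p if a else 1 - p) * (p if b else 1 - p) * (p if c else 1 - p)
    for a, b, c in itertools.product([True, False], repeat=3)
    if a or b or c
)
print(wrong, correct, round(check, 3))  # 2.1 0.973 0.973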


Let us formulate the probability addition theorem for the case of two joint events (the probability of their joint occurrence will be taken into account).


Theorem 2.2. The probability of the sum of two joint events is equal to the sum of the probabilities of these events minus the probability of their joint occurrence:

P(A + B) = P(A) + P(B) - P(AB).


Dependent and independent events. Conditional Probability

We distinguish between dependent and independent events. Two events are said to be independent if the occurrence of one of them does not change the probability of occurrence of the other. For example, if two automatic lines that are not technologically interconnected operate in a workshop, then stoppages of these lines are independent events.


Example 3. A coin is flipped twice. The probability of heads appearing in the first trial (event A) does not depend on the appearance or non-appearance of heads in the second trial (event B). In turn, the probability of heads in the second trial does not depend on the result of the first trial. Thus, the events A and B are independent.


Several events are called collectively independent if any of them does not depend on any other event and on any combination of the others.


Events are called dependent if one of them affects the probability of occurrence of the other. For example, two production plants may be connected by a single technological cycle; then the probability of failure of one of them depends on the state of the other. The probability of an event B, calculated under the assumption that another event A has occurred, is called the conditional probability of the event B and is denoted P(B|A).


The condition of independence of an event B from an event A is written as P(B|A) = P(B), and the condition of its dependence as P(B|A) ≠ P(B). Let us consider an example of calculating the conditional probability of an event.

Example 4. A box contains 5 cutters: two worn and three new. Two cutters are drawn in succession. Determine the conditional probability that a worn cutter is drawn the second time, given that the cutter drawn the first time is not returned to the box.


Solution. Denote by A the drawing of a worn cutter the first time, and by Ā the drawing of a new one. Then P(A) = 2/5, P(Ā) = 3/5. Since the drawn cutter is not returned to the box, the ratio between the numbers of worn and new cutters changes. Therefore, the probability of drawing a worn cutter the second time depends on which event occurred before.


Denote by B the event that a worn cutter is drawn the second time. The conditional probabilities for this event are:

P(B|A) = 1/4, P(B|Ā) = 2/4 = 1/2.

Therefore, the probability of the event B depends on whether the event A occurred or not.
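These conditional probabilities can be checked by brute-force enumeration of all ordered draws; a minimal Python sketch:

from itertools import permutations

# 'W' = worn cutter, 'N' = new cutter; the box from Example 4
box = ['W', 'W', 'N', 'N', 'N']

# All ordered pairs of draws without replacement
pairs = list(permutations(box, 2))

# P(B|A): worn on the second draw given worn on the first
given_a = [p for p in pairs if p[0] == 'W']
p_b_given_a = sum(p[1] == 'W' for p in given_a) / len(given_a)

# P(B|not A): worn on the second draw given new on the first
given_not_a = [p for p in pairs if p[0] == 'N']
p_b_given_not_a = sum(p[1] == 'W' for p in given_not_a) / len(given_not_a)

print(p_b_given_a, p_b_given_not_a)  # 0.25 0.5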

Probability multiplication formulas

Let the events A and B be independent, and let the probabilities of these events be known. Let us find the probability of the joint occurrence (product) of the events A and B.


Theorem 2.3. The probability of the joint occurrence of two independent events is equal to the product of the probabilities of these events:

P(AB) = P(A) · P(B).



Corollary 2.1. The probability of the joint occurrence of several events that are independent in the aggregate is equal to the product of the probabilities of these events:

P(A1 A2 ... An) = P(A1) · P(A2) · ... · P(An).


Example 5. Three boxes contain 10 parts each. The first box contains 8 standard parts, the second 7, the third 9. One part is taken at random from each box. Find the probability that all three parts taken out are standard.


Solution. The probability that a standard part is taken from the first box (event A) is P(A) = 8/10 = 0.8. The probability that a standard part is taken from the second box (event B) is P(B) = 7/10 = 0.7. The probability that a standard part is taken from the third box (event C) is P(C) = 9/10 = 0.9. Since the events A, B and C are independent in the aggregate, the desired probability (by the multiplication theorem) is

P(ABC) = P(A) · P(B) · P(C) = 0.8 · 0.7 · 0.9 = 0.504.



Let the events A and B be dependent, and let the probabilities P(A) and P(B|A) be known. Let us find the probability of the product of these events, i.e. the probability that both the event A and the event B occur.


Theorem 2.4. The probability of the joint occurrence of two dependent events is equal to the product of the probability of one of them by the conditional probability of the other, calculated under the assumption that the first event has already occurred:

P(AB) = P(A) · P(B|A).



Corollary 2.2. The probability of the joint occurrence of several dependent events is equal to the product of the probability of one of them by the conditional probabilities of all the others, where the probability of each subsequent event is calculated under the assumption that all the previous events have already occurred:

P(A1 A2 ... An) = P(A1) · P(A2|A1) · ... · P(An|A1 A2 ... A(n-1)).

Example 6. An urn contains 5 white balls, 4 black and 3 blue. Each trial consists of drawing one ball at random without returning it to the urn. Find the probability that a white ball appears at the first trial (event A), a black one at the second (event B), and a blue one at the third (event C).


Solution. The probability of a white ball appearing at the first trial is P(A) = 5/12. The probability of a black ball appearing at the second trial, calculated under the assumption that a white ball appeared at the first trial, i.e. the conditional probability, is P(B|A) = 4/11. The probability of a blue ball appearing at the third trial, calculated under the assumption that a white ball appeared at the first trial and a black one at the second, is P(C|AB) = 3/10. The desired probability is

P(ABC) = P(A) · P(B|A) · P(C|AB) = (5/12) · (4/11) · (3/10) = 1/22.
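The chain of conditional probabilities can be verified with exact arithmetic; a small Python sketch:

from fractions import Fraction

# 5 white, 4 black, 3 blue; draws without replacement
p = Fraction(5, 12) * Fraction(4, 11) * Fraction(3, 10)
print(p)  # 1/22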


Total Probability Formula

Theorem 2.5. If an event A can occur only given the occurrence of one of the events B1, B2, ..., Bn, which form a complete group of incompatible events, then the probability of the event A is equal to the sum of the products of the probabilities of each of the events Bi by the corresponding conditional probability of the event A:

P(A) = P(B1) · P(A|B1) + P(B2) · P(A|B2) + ... + P(Bn) · P(A|Bn).



In this case, the events Bi are called hypotheses, and the probabilities P(Bi) are called a priori probabilities. This formula is called the total probability formula.


Example 7. The assembly line receives parts from three machines. The productivity of the machines is not the same: the first machine produces 50% of all parts, the second 30%, the third 20%. The probability of a quality assembly when using a part made on the first, second and third machine is 0.98, 0.95 and 0.8, respectively. Determine the probability that an assembly coming off the conveyor is of high quality.


Solution. Denote by A the event that the assembled unit is of good quality, and by B1, B2 and B3 the events that the part was made on the first, second and third machine, respectively. Then

P(B1) = 0.5, P(B2) = 0.3, P(B3) = 0.2,
P(A|B1) = 0.98, P(A|B2) = 0.95, P(A|B3) = 0.8.

The desired probability is

P(A) = 0.5 · 0.98 + 0.3 · 0.95 + 0.2 · 0.8 = 0.49 + 0.285 + 0.16 = 0.935.


Bayes formula

This formula is used in solving practical problems when an event A, which can appear only together with one of the events B1, ..., Bn forming a complete group, has occurred and a quantitative reassessment of the probabilities of the hypotheses is required. The a priori (before the experiment) probabilities P(B1), ..., P(Bn) are known. It is required to calculate the a posteriori (after the experiment) probabilities, i.e., in essence, the conditional probabilities P(Bi|A). For the hypothesis Bi, Bayes' formula looks like this:

P(Bi|A) = P(Bi) · P(A|Bi) / P(A),

where P(A) is calculated by the total probability formula.
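Reusing the data of Example 7, here is a minimal Python sketch of the total probability formula followed by the Bayesian reassessment of the hypotheses:

priors = [0.5, 0.3, 0.2]          # P(B1), P(B2), P(B3): shares of the machines
likelihoods = [0.98, 0.95, 0.8]   # P(A|Bi): probability of a quality assembly

# Total probability formula
p_a = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' formula: a posteriori probabilities of the hypotheses given A
posteriors = [p * l / p_a for p, l in zip(priors, likelihoods)]

print(round(p_a, 3))                      # 0.935
print([round(x, 3) for x in posteriors])  # [0.524, 0.305, 0.171]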

Probability definitions

Classical definition

The classical "definition" of probability proceeds from the notion of equal possibility as an objective property of the phenomena being studied. Equipossibility is an undefinable concept and is established from general considerations of the symmetry of the phenomena under study. For example, when tossing a coin, it is assumed that, due to the supposed symmetry of the coin, the homogeneity of its material and the absence of bias in the toss, there is no reason to prefer "tails" over "heads" or vice versa; that is, the appearance of either side can be considered equally likely (equiprobable).

Along with the concept of equiprobability, the classical definition in the general case also requires the concept of an elementary event (outcome) that favors or does not favor the event A under study. We are talking about outcomes whose occurrence excludes the possibility of the occurrence of other outcomes, i.e. incompatible elementary events. For example, when a die is rolled, the appearance of a particular number excludes the appearance of the other numbers.

The classical definition of probability can be formulated as follows:

The probability of a random event A is the ratio of the number n of incompatible equally probable elementary events that make up the event A to the number N of all possible elementary events:

P(A) = n / N.

For example, suppose two dice are tossed. The total number of equally possible outcomes (elementary events) is obviously 36 (6 possibilities on each die). Let us estimate the probability of getting 7 points. Getting 7 points is possible in the following ways: 1+6, 2+5, 3+4, 4+3, 5+2, 6+1. That is, there are 6 equally likely outcomes favoring the event A, getting 7 points. Therefore, the probability is 6/36 = 1/6. For comparison, the probability of getting 12 points or 2 points is only 1/36, six times less.
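The classical definition amounts to counting favorable outcomes; a short Python enumeration confirming the figures above:

from itertools import product

# All 36 equally likely outcomes of rolling two dice
outcomes = list(product(range(1, 7), repeat=2))

for target in (7, 12, 2):
    n = sum(1 for a, b in outcomes if a + b == target)  # favorable outcomes
    print(target, n, n / len(outcomes))
# 7 -> 6/36 = 1/6;  12 -> 1/36;  2 -> 1/36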

Geometric definition

Although the classical definition is intuitive and derived from practice, it cannot be applied directly if the number of equally possible outcomes is infinite. A striking example of an infinite number of possible outcomes is a bounded geometric region G, for example on a plane, with area S. A randomly "thrown" "point" can land with equal probability at any point of this region. The problem is to determine the probability of the point falling into some subregion g with area s. In this case, generalizing the classical definition, we arrive at the geometric definition of the probability of falling into the subregion g:

P = s / S.

In view of the equipossibility, this probability does not depend on the shape of the region g; it depends only on its area. This definition naturally generalizes to a space of any dimension, where the concept of "volume" is used instead of area. Moreover, it is this definition that leads to the modern axiomatic definition of probability. The concept of volume is generalized to the concept of a "measure" of some abstract set, on which the same requirements are imposed that "volume" has in the geometric interpretation: first of all, non-negativity and additivity.
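As an illustration (with an assumed subregion, not taken from the text), let g be the disc of radius 0.5 inscribed in the unit square G, so that geometrically P = s/S = π/4 ≈ 0.785. A Monte Carlo sketch in Python:

import random

random.seed(0)
n, hits = 100_000, 0

# Region G: the unit square; subregion g: the inscribed disc
for _ in range(n):
    x, y = random.random(), random.random()
    if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
        hits += 1

print(hits / n)  # ~0.785, close to pi/4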

Frequency (statistical) definition

When applied to complex problems, the classical definition encounters insurmountable difficulties. In particular, in some cases it may not be possible to identify equally likely outcomes. Even in the case of a coin, as is well known, there is a clearly not equally probable possibility of landing on its "edge", which cannot be estimated from theoretical considerations (one can only say that it is unlikely, and this consideration is rather practical). Therefore, at the dawn of probability theory, an alternative "frequency" definition of probability was proposed. Namely, formally, the probability can be defined as the limit of the frequency of observations of the event A, assuming the homogeneity of the observations (i.e. the sameness of all observation conditions) and their independence from each other:

P(A) = lim (n → ∞) nA / n,

where n is the number of observations and nA is the number of occurrences of the event A.

Although this definition rather points to a way of estimating an unknown probability, by means of a large number of homogeneous and independent observations, it nevertheless reflects the content of the concept of probability. Namely, if a certain probability is attributed to an event as an objective measure of its possibility, then this means that under fixed conditions and multiple repetitions we should obtain a frequency of its occurrence close to that probability (the closer, the more observations). Actually, this is the original meaning of the concept of probability. It is based on an objectivist view of natural phenomena. Below we consider the so-called laws of large numbers, which provide a theoretical basis (within the framework of the modern axiomatic approach presented below), including for the frequency estimate of probability.
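A simulated fair coin shows the frequency drifting toward the probability 0.5 as the number of observations grows; a minimal Python sketch:

import random

random.seed(1)
for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # the frequency approaches 0.5 as n grows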

Axiomatic definition

In the modern mathematical approach, probability is defined by Kolmogorov's axiomatics. It is assumed that a space of elementary events X is given. Subsets of this space are interpreted as random events. The union (sum) of subsets (events) is interpreted as the event consisting in the occurrence of at least one of these events. The intersection (product) of subsets (events) is interpreted as the event consisting in the occurrence of all these events. Disjoint sets are interpreted as incompatible events (their joint occurrence is impossible). Accordingly, the empty set means an impossible event.

A probability (probability measure) is a measure (numerical function) P defined on the set of events and having the following properties:

1) non-negativity: P(A) ≥ 0 for any event A;
2) normalization: P(X) = 1;
3) additivity: P(A + B) = P(A) + P(B) for any two incompatible events A and B.

If the space of elementary events X is finite, the stated additivity condition for two arbitrary incompatible events is sufficient, and additivity then follows for any finite number of incompatible events. However, in the case of an infinite (countable or uncountable) space of elementary events this condition is not enough, and so-called countable or sigma-additivity is required, that is, the fulfillment of the additivity property for any at most countable family of pairwise incompatible events. This is necessary to ensure the "continuity" of the probability measure.

The probability measure may not be defined for all subsets of the set X. It is assumed to be defined on some sigma-algebra Σ of subsets of X. These subsets are called measurable with respect to the given probability measure, and it is they that are the random events. The triple (X, Σ, P), that is, the set of elementary events, the sigma-algebra of its subsets and the probability measure, is called a probability space.

Continuous random variables. In addition to discrete random variables, whose possible values form a finite or infinite sequence of numbers not completely filling any interval, there are often random variables whose possible values form a whole interval. An example of such a random variable is the deviation from the nominal of a certain dimension of a part in a properly adjusted technological process. Random variables of this kind cannot be specified by a probability distribution law p(x). However, they can be specified by means of the probability distribution function F(x). This function is defined in exactly the same way as in the case of a discrete random variable:

F(x) = P(X < x).

Thus, here too the function F(x) is defined on the whole number axis, and its value at the point x is equal to the probability that the random variable takes a value less than x. Formula (19) and properties 1° and 2° are valid for the distribution function of any random variable. The proof is carried out similarly to the case of a discrete variable. A random variable is called continuous if there exists a non-negative piecewise-continuous function* f(x) that satisfies, for any value x, the equality

F(x) = ∫_{-∞}^{x} f(t) dt.

Based on the geometric meaning of the integral as an area, we can say that the probability of fulfilling the inequalities x1 < X < x2 is equal to the area of the curvilinear trapezoid with base [x1, x2] bounded above by the curve y = f(x) (Fig. 6).

Since F(x) = ∫_{-∞}^{x} f(t) dt, then based on formula (22)

P(x1 ≤ X < x2) = F(x2) - F(x1) = ∫_{x1}^{x2} f(x) dx.

Note that for a continuous random variable the distribution function F(x) is continuous at any point x where the function f(x) is continuous. This follows from the fact that F(x) is differentiable at these points. Based on formula (23), setting x1 = x, x2 = x + Δx, we have

P(x ≤ X < x + Δx) = F(x + Δx) - F(x).

Due to the continuity of the function F(x), we get that

lim (Δx → 0) P(x ≤ X < x + Δx) = 0.

Hence

P(X = x) = 0.

Thus, the probability that a continuous random variable takes any single value x is zero. It follows from this that the events consisting in the fulfillment of each of the inequalities

x1 ≤ X ≤ x2,  x1 < X ≤ x2,  x1 ≤ X < x2,  x1 < X < x2

have the same probability, i.e.

P(x1 ≤ X ≤ x2) = P(x1 < X ≤ x2) = P(x1 ≤ X < x2) = P(x1 < X < x2).

Indeed, for example,

P(x1 ≤ X ≤ x2) = P(x1 < X < x2) + P(X = x1) + P(X = x2) = P(x1 < X < x2),

since P(X = x1) = P(X = x2) = 0.

Remark. As we know, if an event is impossible, then the probability of its occurrence is zero. Under the classical definition of probability, when the number of test outcomes is finite, the converse also holds: if the probability of an event is zero, then the event is impossible, since in this case none of the test outcomes favors it. In the case of a continuous random variable, the number of its possible values is infinite. The probability that this variable takes any particular value x1, as we have seen, is zero. However, it does not follow from this that this event is impossible, since as a result of the test the random variable can, in particular, take the value x1.

Therefore, in the case of a continuous random variable, it makes sense to speak of the probability that the random variable falls into an interval, and not of the probability that it takes some particular value. For example, in the manufacture of a roller we are not interested in the probability that its diameter equals the nominal value exactly; what matters is the probability that the diameter of the roller does not go outside the tolerance.

Example. The distribution density of a continuous random variable is given as follows:

The graph of the function is shown in Fig. 7. Determine the probability that the random variable takes a value satisfying the given inequalities. Find the distribution function of this random variable. (Solution)

The next two paragraphs are devoted to two distributions of continuous random variables that are often encountered in practice: the uniform and normal distributions.

* A function is called piecewise continuous on the entire numerical axis if on any segment it is either continuous or has a finite number of discontinuity points of the first kind.

** The rule for differentiating an integral with a variable upper bound, derived for the case of a finite lower bound, remains valid for integrals with an infinite lower bound. Indeed,

d/dx ∫_{-∞}^{x} f(t) dt = d/dx [∫_{-∞}^{a} f(t) dt + ∫_{a}^{x} f(t) dt] = f(x),

since the integral ∫_{-∞}^{a} f(t) dt is a constant value.
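Since the example's density is not reproduced above, here is a minimal Python sketch with an assumed density f(x) = 2x on [0, 1] (hypothetical, chosen so that F(x) = x² can be checked by hand); it computes F(x) numerically and the probability of falling into an interval:

# Assumed density for illustration: f(x) = 2x on [0, 1], 0 elsewhere
def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

def F(x, n=100_000):
    # Distribution function F(x) = integral of f from -inf to x,
    # computed with a simple midpoint rule (the density is 0 below 0)
    if x <= 0:
        return 0.0
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

a, b = 0.25, 0.5
print(F(b) - F(a))  # P(a < X < b); exact value b^2 - a^2 = 0.1875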


Probability density is one of the ways to specify a probability measure on a Euclidean space. When the probability measure is the distribution of a random variable, one speaks of the density of the random variable.

Probability density. Let P be a probability measure on R^n, that is, a probability space (R^n, B(R^n), P) is defined, where B(R^n) denotes the Borel σ-algebra on R^n. Let λ denote the Lebesgue measure on R^n.

Definition 1. The probability P is called absolutely continuous (with respect to the Lebesgue measure), written P ≪ λ, if any Borel set of zero Lebesgue measure also has probability zero:

λ(A) = 0 implies P(A) = 0.

If the probability P is absolutely continuous, then by the Radon-Nikodym theorem there exists a non-negative Borel function f such that

P(A) = ∫_A f(x) λ(dx),

where the common abbreviation f(x) dx is used for f(x) λ(dx), and the integral is understood in the sense of Lebesgue.

Definition 2. More generally, let (X, F) be an arbitrary measurable space, and let μ and ν be two measures on this space. If there exists a non-negative f that allows expressing the measure ν in terms of the measure μ in the form

ν(A) = ∫_A f dμ,

then this function is called the density of the measure ν with respect to μ, or the Radon-Nikodym derivative of ν with respect to μ, and is denoted dν/dμ = f.

If the probability of an event A does not change when an event B occurs, then the events A and B are called independent.

Theorem: The probability of the joint occurrence of two independent events A and B (of the product AB) is equal to the product of the probabilities of these events.

Indeed, since the events A and B are independent, P(B|A) = P(B). In this case the formula for the probability of the product of the events A and B takes the form P(AB) = P(A) · P(B).

Events A1, A2, ..., An are called pairwise independent if any two of them are independent.

Events A1, A2, ..., An are called collectively independent (or simply independent) if every two of them are independent and each event is independent of all possible products of the others.

Theorem: The probability of the product of a finite number of events that are independent in the aggregate is equal to the product of the probabilities of these events:

P(A1 A2 ... An) = P(A1) · P(A2) · ... · P(An).

Let us illustrate the difference in applying the probability formulas for dependent and independent events using examples.

Example 1. The probability of hitting the target by the first gun is 0.85, by the second 0.8. The guns each fired one shot. What is the probability that at least one projectile hit the target?

Solution: P(A + B) = P(A) + P(B) - P(AB). Since the shots are independent,

P(A + B) = P(A) + P(B) - P(A) · P(B) = 0.85 + 0.8 - 0.85 · 0.8 = 0.97.

Example 2. An urn contains 2 red and 4 black balls. Two balls are drawn from it in succession. What is the probability that both balls are red?

Solution. Case 1. Event A is the appearance of a red ball at the first draw, event B at the second. Event C is the appearance of two red balls.

P(C) = P(A) · P(B|A) = (2/6) · (1/5) = 1/15.

Case 2. The first ball drawn is returned to the urn.

P(C) = P(A) · P(B) = (2/6) · (2/6) = 1/9.
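Both cases can be confirmed by enumerating ordered draws; a minimal Python sketch:

from itertools import permutations, product

balls = ['R', 'R', 'B', 'B', 'B', 'B']  # 2 red, 4 black

# Case 1: without replacement -- ordered pairs of distinct positions
no_repl = list(permutations(range(6), 2))
p1 = sum(balls[i] == balls[j] == 'R' for i, j in no_repl) / len(no_repl)

# Case 2: with replacement -- the same position may repeat
with_repl = list(product(range(6), repeat=2))
p2 = sum(balls[i] == balls[j] == 'R' for i, j in with_repl) / len(with_repl)

print(p1, 1 / 15)  # ~0.0667 = 1/15
print(p2, 1 / 9)   # ~0.1111 = 1/9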

Total Probability Formula.

Let the event A occur only together with one of the incompatible events B1, B2, ..., Bn forming a complete group. For example, a store receives the same products from three enterprises, in different quantities. The probability of producing low-quality products differs across these enterprises. One of the products is randomly selected. It is required to determine the probability that this product is of poor quality (event A). Here the events B1, B2, B3 are the choice of a product from the output of the corresponding enterprise.

In this case, the event A can be considered as the sum of the incompatible events AB1, AB2, ..., ABn.

By the addition theorem for the probabilities of incompatible events, we obtain P(A) = P(AB1) + P(AB2) + ... + P(ABn). Using the probability multiplication theorem, we find

P(A) = P(B1) · P(A|B1) + P(B2) · P(A|B2) + ... + P(Bn) · P(A|Bn).

The resulting formula is called the total probability formula.

Bayes formula

Let the event A occur together with one of the incompatible events B1, B2, ..., Bn, whose probabilities P(Bi) (i = 1, ..., n) are known before the experiment (a priori probabilities). An experiment is performed, as a result of which the occurrence of the event A is registered, and it is known that this event had certain conditional probabilities P(A|Bi) (i = 1, ..., n). It is required to find the probabilities of the events Bi given that the event A has happened (a posteriori probabilities).

The problem is that, having new information (the event A has occurred), we must reassess the probabilities of the events Bi.

Based on the theorem on the probability of the product of two events,

P(A Bi) = P(A) · P(Bi|A) = P(Bi) · P(A|Bi),

whence

P(Bi|A) = P(Bi) · P(A|Bi) / P(A),

where P(A) is given by the total probability formula. The resulting formulas are called Bayes' formulas.

Basic concepts of combinatorics.

When solving a number of theoretical and practical problems, it is required to make various combinations from a finite set of elements according to given rules and to count the number of all possible such combinations. Such tasks are called combinatorial.

When solving combinatorial problems, the rules of sum and product are used.

When assessing the probability of the occurrence of a random event, it is very important to understand in advance whether the probability of the event of interest depends on how other events develop. In the case of the classical scheme, when all outcomes are equally likely, we can already estimate the probability of an individual event on our own. We can do this even if the event is a complex collection of several elementary outcomes. But what if several random events occur simultaneously or sequentially? How does this affect the probability of the event of interest?

If I roll a die a few times wanting to get a six and I am always unlucky, does that mean I should increase my bet, because, according to probability theory, I am about to get lucky? Alas, probability theory says nothing of the sort. Neither dice, nor cards, nor coins can remember what they showed us last time. It does not matter to them at all whether it is the first or the tenth time today that I test my fate. Every time I roll again, I know only one thing: this time, too, the probability of rolling a six is one-sixth. Of course, this does not mean that the number I need will never come up. It only means that the outcome of the first toss and the outcome of any other toss are independent events.

Events A and B are called independent if the realization of one of them does not affect the probability of the other event in any way. For example, the probability of hitting a target with the first of two guns does not depend on whether the other gun hit the target, so the events "the first gun hit the target" and "the second gun hit the target" are independent. If two events A and B are independent and the probability of each of them is known, then the probability of the simultaneous occurrence of both the event A and the event B (denoted AB) can be calculated using the following theorem.

Probability multiplication theorem for independent events

P(AB) = P(A) · P(B): the probability of the simultaneous occurrence of two independent events is equal to the product of the probabilities of these events.

Example 1. The probabilities of hitting the target when firing the first and second guns are respectively p1 = 0.7 and p2 = 0.8. Find the probability of a hit by both guns simultaneously with one volley.

As we have already seen, the events A (hit by the first gun) and B (hit by the second gun) are independent, i.e. P(AB) = P(A) · P(B) = p1 · p2 = 0.7 · 0.8 = 0.56. What happens to our estimates if the initial events are not independent? Let us change the previous example a little.

Example 2. Two shooters in a competition shoot at targets, and if one of them shoots accurately, the opponent starts to get nervous and his results worsen. How can this everyday situation be turned into a mathematical problem, and how can we outline ways to solve it?

It is intuitively clear that the two scenarios must somehow be separated, that in fact we must compose two different problems. In the first case, if the opponent misses, the scenario is favorable for the nervous athlete and his accuracy will be higher. In the second case, if the opponent has decently realized his chance, the probability of hitting the target for the second athlete decreases. To separate the possible scenarios (often called hypotheses), we will frequently use the "probability tree" scheme. This diagram is similar in meaning to the decision tree, which you have probably already encountered. Each branch is a separate scenario, only now it has its own value of the so-called conditional probability (q1, q2, 1 - q1, 1 - q2).

This scheme is very convenient for analyzing successive random events. It remains to clarify one more important question: where do the initial probability values come from in real situations? After all, probability theory does not work only with coins and dice, does it? Usually these estimates are taken from statistics, and when statistics are not available, we conduct our own research, and we often have to start it not with collecting data but with the question of what information we need at all.

Example 3. Suppose that in a city of 100,000 inhabitants we need to estimate the size of the market for a new non-essential product, for example a conditioner for color-treated hair. Consider the "probability tree" scheme. In this case, we need to roughly estimate the probability value on each "branch". So, our estimates of market capacity:

1) 50% of all residents of the city are women,

2) of all women, only 30% dye their hair often,

3) of these, only 10% use balms for colored hair,

4) of these, only 10% can muster up the courage to try a new product,

5) 70% of them usually buy everything not from us, but from our competitors.


According to the law of multiplication of probabilities, we determine the probability of the event of interest A = {a city resident buys this new balm from us}: P(A) = 0.5 · 0.3 · 0.1 · 0.1 · 0.3 = 0.00045. Multiplying this probability by the number of inhabitants of the city, we get only 45 potential buyers, and given that one vial of this product lasts several months, the trade is not very lively.

Still, there are benefits from our assessment. First, we can compare the forecasts of different business ideas: they will have different "forks" in the diagrams, and of course the probability values will also differ. Second, as we have already said, a random variable is not called random because it does not depend on anything at all; it is just that its exact value is not known in advance. We know that the average number of buyers can be increased (for example, by advertising the new product). So it makes sense to focus on those "forks" where the distribution of probabilities does not particularly suit us, on those factors we are able to influence. Let us consider another quantitative example of consumer behavior research.
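The chain of conditional "branch" probabilities multiplies out as in this minimal Python sketch (note that item 5 contributes the factor 1 - 0.7 = 0.3, the share who buy from us):

population = 100_000

branch = [
    0.5,   # resident is a woman
    0.3,   # dyes her hair often
    0.1,   # uses balms for colored hair
    0.1,   # willing to try a new product
    0.3,   # buys from us rather than from competitors (1 - 0.7)
]

p = 1.0
for q in branch:
    p *= q

print(p, round(population * p))  # 0.00045 -> about 45 buyers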

Example 4. An average of 10,000 people visit the food market per day. The probability that a market visitor walks into the dairy pavilion is 1/2. It is known that an average of 500 kg of various products is sold in this pavilion per day. Can it be argued that the average purchase in the pavilion weighs only 100 g?

Discussion.

Of course not. It is clear that not everyone who entered the pavilion ended up buying something there.


As shown in the diagram, to answer the question about the average purchase weight, we must find the probability that a person entering the pavilion buys something there. If we do not have such data at our disposal but need them, we will have to obtain them ourselves by observing the visitors of the pavilion for some time. Suppose our observations show that only a fifth of the visitors buy something. Once these estimates are obtained, the task becomes simple: of the 10,000 people who come to the market, 5,000 will enter the dairy pavilion, and there will be only 1,000 purchases. The average purchase weight is therefore 500 kg / 1,000 = 500 grams. It is interesting to note that, to build a complete picture of what is happening, the logic of conditional "branching" must be defined at each stage of our reasoning just as clearly as if we were working with a "concrete" situation and not with probabilities.

Tasks for self-examination.

1. Consider an electrical circuit consisting of n series-connected elements, each of which operates independently of the others. The probability p of failure-free operation of each element is known. Determine the probability of proper operation of the entire section of the circuit (event A).


2. A student knows 20 of the 25 exam questions. Find the probability that the student knows all three questions put to him by the examiner.

3. Production consists of four successive stages, at each of which equipment operates whose probabilities of failure during the next month are respectively p1, p2, p3 and p4. Find the probability that in a month there is no stoppage of production due to equipment failure.

The dependence of events is understood in the probabilistic sense, not the functional one. This means that from the appearance of one of the dependent events one cannot unambiguously judge the appearance of the other. Probabilistic dependence means that the occurrence of one of the dependent events only changes the probability of the occurrence of the other. If the probability does not change, the events are considered independent.

Definition: Let (Ω, F, P) be an arbitrary probability space and let A, B be some random events. The event A is said not to depend on the event B if its conditional probability coincides with its unconditional probability:

P(A|B) = P(A).

If P(A|B) ≠ P(A), then we say that the event A depends on the event B.

The concept of independence is symmetric: if the event A does not depend on the event B, then the event B does not depend on the event A. Indeed, let P(A|B) = P(A). Then

P(B|A) = P(AB)/P(A) = P(B) · P(A|B)/P(A) = P(B).

Therefore, one simply says that the events A and B are independent.

The following symmetric definition of the independence of events follows from the rule of multiplication of probabilities.

Definition: Events A and B defined on the same probability space are called independent if

P(AB) = P(A) · P(B).

If P(AB) ≠ P(A) · P(B), then the events A and B are called dependent.

Note that this definition is also valid when P(A) = 0 or P(B) = 0.

Properties of independent events.

1. If the events A and B are independent, then so are the following pairs of events: (A, B̄), (Ā, B), (Ā, B̄).

▲ Let us prove, for example, the independence of the events A and B̄. Represent the event A as A = AB + AB̄. Since the events AB and AB̄ are incompatible, P(A) = P(AB) + P(AB̄), and by the independence of the events A and B we get P(AB̄) = P(A) - P(A) · P(B) = P(A)(1 - P(B)) = P(A) · P(B̄). Hence P(AB̄) = P(A) · P(B̄), which means the independence of A and B̄. ■

2. If the event A does not depend on the events B1 and B2, which are incompatible (B1B2 = ∅), then the event A does not depend on their sum B1 + B2.

▲ Indeed, using the axiom of additivity of probability and the independence of the event A from the events B1 and B2, we have:

P(A(B1 + B2)) = P(AB1 + AB2) = P(AB1) + P(AB2) = P(A) · P(B1) + P(A) · P(B2) = P(A) · (P(B1) + P(B2)) = P(A) · P(B1 + B2). ■

Relationship between the concepts of independence and incompatibility.

Let A and B be any events with non-zero probability: P(A) > 0, P(B) > 0. If the events A and B are incompatible (AB = ∅), then P(AB) = 0, and therefore the equality P(AB) = P(A) · P(B) can never hold. Thus, incompatible events are dependent.

When more than two events are considered simultaneously, their pairwise independence does not sufficiently characterize the connection between the events of the entire group. In this case, the concept of independence in the aggregate is introduced.

Definition: Events A1, A2, ..., An defined on the same probability space are called collectively independent if for any 2 ≤ m ≤ n and any combination of indices 1 ≤ i1 < i2 < ... < im ≤ n the following equality holds:

P(Ai1 Ai2 ... Aim) = P(Ai1) · P(Ai2) · ... · P(Aim).

For m = 2, independence in the aggregate implies pairwise independence of the events. The converse is not true.


Example (S. N. Bernstein).

A random experiment consists in tossing a regular tetrahedron. The face on which it lands is observed. The faces of the tetrahedron are colored as follows: the 1st face is white, the 2nd black, the 3rd red, and the 4th contains all three colors.

Consider the events:

A = {white appears}; B = {black appears}; C = {red appears}.

Then P(A) = P(B) = P(C) = 2/4 = 1/2, and P(AB) = P(AC) = P(BC) = 1/4, so that P(AB) = P(A) · P(B), P(AC) = P(A) · P(C), P(BC) = P(B) · P(C).

Therefore, the events A, B and C are pairwise independent.

However, P(ABC) = 1/4 ≠ 1/8 = P(A) · P(B) · P(C).

Therefore, the events A, B and C are not collectively independent.
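Bernstein's example is small enough to verify by enumeration; a minimal Python sketch:

from itertools import combinations

# Faces of Bernstein's tetrahedron: the set of colors on each face
faces = [{'white'}, {'black'}, {'red'}, {'white', 'black', 'red'}]

def prob(colors):
    # Probability that all given colors appear on the landing face
    return sum(colors <= face for face in faces) / len(faces)

events = {'A': {'white'}, 'B': {'black'}, 'C': {'red'}}

# Pairwise independence: P(XY) == P(X) * P(Y) for every pair
for (x, ex), (y, ey) in combinations(events.items(), 2):
    print(x + y, prob(ex | ey) == prob(ex) * prob(ey))  # True three times

# Collective independence fails: P(ABC) = 1/4, not 1/8
abc = events['A'] | events['B'] | events['C']
print(prob(abc), prob(events['A']) * prob(events['B']) * prob(events['C']))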

In practice, as a rule, the independence of events is not established by checking the definition; on the contrary, events are considered independent from external considerations or from the circumstances of the random experiment, and this independence is then used to find the probabilities of products of events.

Theorem (multiplication of probabilities for independent events).

If events A1, A2, ..., An defined on the same probability space are independent in the aggregate, then the probability of their product is equal to the product of their probabilities:

P(A1 A2 ... An) = P(A1) · P(A2) · ... · P(An).

▲ The proof follows from the definition of the collective independence of events, or from the general probability multiplication theorem, taking into account that in this case P(Ak | A1 ... A(k-1)) = P(Ak). ■

Example 1 (a typical example of finding conditional probabilities, the concept of independence, and the probability addition theorem).

An electrical circuit consists of three independently operating elements. The failure probabilities of the elements are respectively p1, p2 and p3.

1) Find the probability of circuit failure.

2) The circuit is known to have failed.

What is the probability that the failed element was:

a) 1st element; b) 3rd element?

Solution. Consider the events Ak = {the k-th element failed}, k = 1, 2, 3, and the event A = {the circuit failed}. Then the event A is represented in the form:

A = A1 + A2 + A3.

1) Since the events A1, A2 and A3 are not incompatible, the axiom of additivity of probability (P3) is not applicable, and to find the probability one should use the general probability addition theorem, according to which

P(A) = P(A1) + P(A2) + P(A3) - P(A1A2) - P(A1A3) - P(A2A3) + P(A1A2A3).
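With hypothetical values p1 = 0.1, p2 = 0.2, p3 = 0.3 (not given in the text) and the independence of the elements, the inclusion-exclusion formula can be cross-checked against the complement event "no element fails"; a Python sketch:

# Hypothetical failure probabilities, for illustration only
p1, p2, p3 = 0.1, 0.2, 0.3

# Inclusion-exclusion (the general addition theorem); independence lets
# us write P(AiAj) = pi * pj and P(A1A2A3) = p1 * p2 * p3
incl_excl = (p1 + p2 + p3
             - p1 * p2 - p1 * p3 - p2 * p3
             + p1 * p2 * p3)

# Cross-check via the complement: the circuit works iff no element fails
complement = 1 - (1 - p1) * (1 - p2) * (1 - p3)

print(incl_excl, complement)  # both 0.496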
