
Stochastic processes
A stochastic process is a family of random variables that depends on a parameter, t. A stochastic process is specified using the following notation:

{Xt, t ∈ T}
Here, t is a parameter, and T is the set of possible values of t.
Usually, time is indicated by t, so a stochastic process is a family of time-dependent random variables. The variability range of t, that is, the set, T, can be a set of real numbers, possibly coinciding with the entire time axis. But it can also be a discrete set of values.
The random variables, Xt, are defined on the set, X, called the space of states. This can be a continuous set, in which case the process is called a continuous stochastic process, or a discrete set, in which case it is called a discrete stochastic process.
Consider the following elements:

x ∈ X
This means the values that the random variables, Xt, can take are called system states and represent the possible results of an experiment. The Xt variables are linked together by dependency relationships. We can know a random variable if we know both the values it can assume and the probability distribution. So, to understand a stochastic process, it is necessary not only to know the values that Xt can take but also the probability distributions of the variables and the joint distributions between the values. Simpler stochastic processes, in which the variability range of t is a discrete set of time values, can also be considered.
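To make this definition concrete, here is a minimal Python sketch (the die-rolling process and all names in it are our own illustrative choices, not taken from the text) of a discrete-time, discrete-state stochastic process, showing one sampled realization:

import numpy as np

# A discrete-time, discrete-state stochastic process:
# Xt is the result of a die roll at each time index t in T = {0, 1, ..., 9}.
# The space of states is the discrete set X = {1, 2, 3, 4, 5, 6}.
rng = np.random.default_rng(seed=0)

T = range(10)                            # discrete set of time indices
path = [rng.integers(1, 7) for t in T]   # one realization of the process
print(path)

Each run of the script produces a different path, which is exactly the point: the process is the whole family of time-indexed random variables, while a single run is just one of its realizations.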
Important note
In practice, there are numerous phenomena that are studied through the theory of stochastic processes. A classic application in physics is the study of the motion of a particle in a medium, the so-called Brownian motion. This study is carried out statistically using a stochastic process. There are processes where, even by knowing the past and the present, the future cannot be determined; whereas, in other processes, the future is determined by the present without considering the past.
Types of stochastic process
Stochastic processes can be classified according to the following characteristics:
Space of states
Time index
Type of stochastic dependence between random variables
The state space can be discrete or continuous. In the first case, the stochastic process with a discrete space is also called a chain, and the space is often taken to be the set of non-negative integers. In the second case, the set of values assumed by the random variables is neither finite nor countable, and the stochastic process is said to be in continuous space.
The time index can also be discrete or continuous. A discrete-time stochastic process is also called a stochastic sequence and is denoted as follows:

{Xt, t ∈ T}
Here, the set, T, is finite or countable.
In this case, the changes of state are observed only at certain instants: finite or countable. If state changes occur at any instant in a finite or infinite set of real intervals, then there is a continuous-time process, which is denoted as follows:

{X(t), t ∈ T}
The stochastic dependence between the random variables, X(t), for different values of t characterizes a stochastic process and sometimes simplifies its description. A stochastic process is stationary in the strict sense if the distribution function is invariant with respect to a shift on the time axis, T. A stochastic process is stationary in the broad sense if the first two moments of the distribution are independent of the position on the T axis.
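As a rough numerical illustration (the white-noise process and the parameter values are our own assumptions, chosen only for this sketch), the following Python snippet estimates the first two moments of a process at different time positions; for a process that is stationary in the broad sense, these estimates should not depend on the position on the T axis:

import numpy as np

rng = np.random.default_rng(seed=1)

# 1,000 realizations of a white-noise process observed at 100 time steps.
# Each column holds samples of the random variable X(t) for one value of t.
paths = rng.normal(loc=0.0, scale=1.0, size=(1000, 100))

# For a broad-sense stationary process, the mean and the variance
# estimated at each time index should be approximately constant.
print(paths.mean(axis=0)[:5])   # close to 0 at every t
print(paths.var(axis=0)[:5])    # close to 1 at every t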
Examples of stochastic processes
The mathematical treatment of stochastic processes seems complex, yet we find cases of stochastic processes every day. For example, the number of patients admitted to a hospital as a function of time, observed at noon each day, is a stochastic process in which the space of states is discrete, being a finite subset of natural numbers, and time is discrete. Another example of a stochastic process is the temperature measured in a room as a function of time, observed at every instant, with continuous state space and continuous time. Let's now look at a number of structured examples that are based on stochastic processes.
The Bernoulli process
The concept of a random variable allows us to formulate models that are useful for the study of many random phenomena. An important early example of a probabilistic model is the Bernoulli distribution, named in honor of the Swiss mathematician, James Bernoulli (1654-1705), who made important contributions to the field of probability.
Many random experiments consist of repeatedly performing a given trial. For example, we may want to know the probability of getting heads when tossing a coin 1,000 times.
In examples like this, we look for the probability of obtaining x successes in n trials. If x indicates the number of successes, then n - x will be the number of failures.
A sequence of Bernoulli trials consists of repeated trials that satisfy the following hypotheses:
There are only two possible mutually exclusive results for each trial, arbitrarily called success and failure.
The probability of success, p, is the same for each trial.
All tests are independent.
Independence means that the result of a test is not influenced by the result of any other test. For example, the event, the third test was successful, is independent of the event, the first test was successful.
The toss of a coin is a Bernoulli trial: the heads event can be considered a success, and the tails event a failure. In this case, the probability of success is p = 1/2. In rolling two dice, the event, the sum of the points is seven, can be considered a success, and the complementary event a failure. This, too, is a Bernoulli trial, and the probability of success is p = 1/6.
Important note
Two events are said to be complementary when the occurrence of the first excludes the occurrence of the second but one of the two will certainly occur.
Let p denote the probability of success in a Bernoulli trial. The random variable, X, which counts the number of successes in n trials, is called a binomial random variable with parameters n and p. X can take integer values between 0 and n.
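As a quick sketch (the parameter values are arbitrary choices for illustration), we can simulate a sequence of Bernoulli trials with NumPy and count the successes, which yields one draw of the binomial random variable:

import numpy as np

rng = np.random.default_rng(seed=2)

n, p = 1000, 0.5   # number of trials and probability of success

# A sequence of Bernoulli trials: n independent trials, each a success (1)
# with probability p and a failure (0) with probability 1 - p.
trials = rng.random(n) < p

# X counts the successes in n trials: a binomial(n, p) random variable.
x = trials.sum()
print(f"{x} successes and {n - x} failures in {n} trials")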
Random walk
The random walk is a discrete-parameter stochastic process in which the random variable, Xt, describes the position taken at time t by a moving point. The term, random walk, refers to the mathematical formalization of the statistics that describe the displacement of an object that moves randomly. This kind of simulation is extremely important for a physicist and has applications in statistical mechanics, fluid dynamics, and quantum mechanics.
Random walks represent a mathematical model that is used universally to simulate a path formalized by a succession of random steps. This model can assume a variable number of degrees of freedom, depending on the system we want to describe. From a physical point of view, the path traced over time will not necessarily simulate a real motion, but it will represent the trend of the characteristics of the system over time. Random walks find applications in chemistry, biology, and physics, but also in other fields such as economics, sociology, and information technology.
The one-dimensional random walk is a model that is used to simulate the movement of a particle along a straight line. There are only two possible movements on the allowed path: one step to the right (with a probability that is equal to p) or one step to the left (with a probability that is equal to q) of the current position. Each step has a constant length and is independent of the others, as shown in the following diagram:

Figure 2.1 – One-dimensional walking
The position of the point in each instant is identified by its abscissa, X(n). This position, after n steps, will be characterized by a random term. Our aim is to calculate the probability of the point passing through the starting point again after n movements. Obviously, nothing assures us that the point will return to the starting position. The variable, X(n), returns the abscissa of the particle after n steps. It is a discrete random variable with a binomial distribution.
At each instant, the particle steps right or left based on the value returned by a random variable, Z(n). This variable can take only two values: +1 and -1. It assumes a +1 value with a probability of p > 0 and a value of -1 with a probability that is equal to q. The sum of the two probabilities is p + q = 1. The position of the particle at instant n is given by the following equation:

X(n) = Z(1) + Z(2) + … + Z(n)
Let us now evaluate the average number of returns of the particle to the origin. If p denotes the probability of a single return, the average number of returns is given by the following geometric series:

p + p² + p³ + … = p / (1 − p)
For the symmetric walk, the probability of the particle returning to the origin tends to 1, so the series diverges and the average number of returns is infinite. This means that although the frequency of the returns decreases as the number of steps increases, the returns keep accumulating without bound as the number of steps tends to infinity. So, we can conclude that a particle with an equal probability of left and right movement, left free to walk randomly without limit, returns, with probability 1, infinitely many times to the point from which it started.
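A short simulation makes this behavior concrete (a sketch under the assumption p = q = 1/2; the number of steps and the seed are arbitrary). We build the walk as the cumulative sum of the Z(n) steps and count how often it revisits the origin:

import numpy as np

rng = np.random.default_rng(seed=3)

n_steps = 100_000

# Z(n) takes the values +1 and -1 with equal probability p = q = 1/2.
z = rng.choice([-1, 1], size=n_steps)

# X(n) = Z(1) + Z(2) + ... + Z(n): the particle's position after n steps.
x = np.cumsum(z)

returns = np.count_nonzero(x == 0)
print(f"The walk returned to the origin {returns} times in {n_steps} steps")

Increasing n_steps shows the returns continuing to accumulate, in line with the argument above, even though they become less and less frequent.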
The Poisson process
There are phenomena in which certain events, with reference to a certain interval of time or space, rarely happen. The number of events that occur in that interval varies from 0 to n, and n cannot be determined a priori. For example, the number of cars passing through an uncrowded street in a randomly chosen 5-minute time frame can be considered a rare event. Similarly, the number of accidents at work that happen at a company in a week, or the number of printing errors on a page of a book, is rare.
In the study of rare events, a reference to a specific interval of time or space is fundamental. For the study of rare events, the Poisson probability distribution is used, named in honor of the French mathematician, Simeon Denis Poisson (1781-1840), who first obtained the distribution. The Poisson distribution is used as a model in cases where the events or realizations of a process, distributed randomly in space or time, are counts, that is, discrete variables.
The binomial distribution is based on a set of hypotheses that define the Bernoulli trials, and the same happens for the Poisson distribution. The following conditions describe the so-called Poisson process:
The realizations of the events are independent, meaning that the occurrence of an event in a time or space interval has no effect on the probability of the event occurring a second time in the same, or another, interval.
The probability of a single realization of the event in each interval is proportional to the length of the interval.
In any arbitrarily small part of the interval, the probability of the event occurring more than once is negligible.
An important difference between the Poisson distribution and the binomial distribution is the number of trials and successes. In a binomial distribution, the number, n, of trials is finite and the number, x, of successes cannot exceed n; in a Poisson distribution, the number of tests is essentially infinite and the number of successes can be infinitely large, even if the probability of having x successes becomes very small as x increases.
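To see this limit numerically, the following sketch (the value of lambda, the grid of n values, and the helper function are our own illustrative choices) compares binomial probabilities, with n growing and p shrinking so that np = lambda stays fixed, against the corresponding Poisson probability:

from math import comb, exp, factorial

lam = 3.0   # fixed mean number of events: lambda = n * p
x = 2       # number of successes whose probability we compare

# Binomial probability of x successes in n trials with p = lam / n.
def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

for n in (10, 100, 10_000):
    print(n, binom_pmf(x, n, lam / n))

# Poisson probability of x events when the mean number of events is lam.
print("Poisson:", exp(-lam) * lam**x / factorial(x))

As n grows, the binomial values approach the Poisson value, which is why the Poisson distribution serves as a model for rare events counted over many small subintervals.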