# CDS 110b: Stochastic Systems


*Revision as of 18:50, 7 January 2007*

WARNING: This page is for a previous year. See the current course homepage to find the most recent version.


This set of lectures presents an overview of random processes and stochastic systems. We begin with a short review of continuous random variables and then consider random processes and linear stochastic systems. Basic concepts include probability density functions (pdfs), joint probability, covariance, correlation, power spectral density, spectral response and spectral factorization.

## Course Materials

## References and Further Reading

- Friedland, Chapter 10
- Hoel, Port and Stone, Introduction to Probability Theory - this is a good reference for basic definitions of random variables
- Apostol II, Chapter 14 - another reference for basic definitions in probability and random variables

## Frequently Asked Questions

**Q (2006): Can you explain the jump from pdfs to correlations in more detail?**

The probability density function (pdf), $p(x; t)$, tells us how the value of a random process is distributed at a particular *time*:

$$
P(a \leq x(t) \leq b) = \int_a^b p(x; t)\, dx.
$$

You can interpret this by thinking of $x(t)$ as a separate random variable for each time $t$.

The *correlation* for a random process tells us how the value of the process at one time, $t_1$, is related to its value at a different time, $t_2$. This relationship is probabilistic, so it is also described in terms of a distribution. In particular, we use the *joint probability density function*, $p(x_1, x_2; t_1, t_2)$, to characterize it:

$$
P(a_1 \leq x(t_1) \leq b_1,\ a_2 \leq x(t_2) \leq b_2) = \int_{a_1}^{b_1} \int_{a_2}^{b_2} p(x_1, x_2; t_1, t_2)\, dx_1\, dx_2.
$$

Given any random process, $p(x_1, x_2; t_1, t_2)$ describes (as a density) how the value of the random variable at time $t_1$ is related (or "correlated") with the value at time $t_2$. We can thus describe a random process by its joint probability density function.

In practice, we don't usually describe random processes in terms of their pdfs and joint pdfs. It is usually easier to describe them in terms of their statistics (mean, variance, etc.). In particular, we almost never describe the correlation in terms of joint pdfs, but instead use the *correlation function*:

$$
\rho(t, \tau) = E\{X(t) X(\tau)\} = \int_{-\infty}^\infty \int_{-\infty}^\infty x_1 x_2\, p(x_1, x_2; t, \tau)\, dx_1\, dx_2.
$$

The utility of this function is seen primarily through its application: if we know the correlation for a random process and we "filter" that process through a linear system, we can compute the correlation for the corresponding output process.
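The expectation $E\{X(t) X(\tau)\}$ can also be estimated numerically by averaging over many sample paths. The sketch below is my own illustration (not part of the course materials): it estimates $\rho(k, l)$ by Monte Carlo for an assumed discrete-time process built by passing white Gaussian noise through a one-step moving average, for which the true values are easy to compute by hand.

```python
import random

# Monte Carlo estimate of rho(k, l) = E{ x_k x_l } for the process
# x_k = (w_k + w_{k-1}) / 2, where w is unit-variance Gaussian white noise.
# Analytically: rho(k, k) = 1/2, rho(k, k+1) = 1/4, rho(k, l) = 0 otherwise.
def sample_path(n, rng):
    w = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]
    return [(w[k + 1] + w[k]) / 2.0 for k in range(n)]

def estimate_rho(k, l, trials=20000, n=10, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = sample_path(n, rng)
        total += x[k] * x[l]
    return total / trials

print(estimate_rho(4, 4))  # roughly 0.5
print(estimate_rho(4, 5))  # roughly 0.25
print(estimate_rho(4, 7))  # roughly 0.0
```

With 20,000 trials the estimates land within a few percent of the analytical values, which is the Monte Carlo error one expects at this sample size.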

**Q (2006): What is the meaning of a white noise process?**

A white noise process is defined as a Gaussian process with constant power spectral density. The intuition behind this definition is that the spectral content of the process is the same at all frequencies. The term "white" noise comes from the fact that the color white corresponds to light being present at all frequencies.

Another interpretation of white noise is through the power spectrum of a signal. In this case, we simply compute the Fourier transform of a signal $F(t)$; the signal is said to be white if its spectrum is constant across all frequencies.

More information:

- [Wikipedia entry on power spectrum](http://en.wikipedia.org/wiki/Power_spectrum)
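The flat-spectrum property can be seen empirically. The following sketch (my own example, with assumed parameter choices) averages the periodogram of many realizations of simulated Gaussian white noise; for a white process the averaged periodogram should be roughly constant, near the noise variance, at every frequency.

```python
import cmath
import random

# Periodogram |X_m|^2 / n of a length-n signal, via a direct DFT.
def periodogram(x):
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                    for k in range(n))) ** 2 / n
            for m in range(n)]

# Average the periodogram over many realizations of unit-variance
# Gaussian white noise; the result should be flat (close to 1.0).
def averaged_white_spectrum(n=64, trials=200, seed=1):
    rng = random.Random(seed)
    avg = [0.0] * n
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for m, p in enumerate(periodogram(x)):
            avg[m] += p / trials
    return avg

spec = averaged_white_spectrum()
low = sum(spec[1:9]) / 8     # average power in a low-frequency band
high = sum(spec[24:32]) / 8  # average power near the Nyquist frequency
print(low, high)             # both should be close to 1.0
```

A single periodogram is very noisy (each bin has roughly 100% relative standard deviation), which is why the sketch averages over trials before comparing frequency bands.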

**Q (2006): What is a random process (in relation to a transfer function)?**

Formally, a *random process* is a continuous collection of random variables $x(t)$. It is perhaps easiest to think first of a discrete-time random process $x_k$. At each time instant $k$, $x_k$ is a random variable with some distribution. If the process is white, then there is no correlation between $x_k$ and $x_l$ when $k \neq l$. If, on the other hand, the value of $x_k$ gives us information about what $x_l$ will be, then the values are correlated and $\rho(k, l)$ is the correlation function.

These concepts can also be written in continuous time, in which case each $x(t)$ is a random variable and $\rho(t, s)$ is the correlation function. This takes some time to get used to, since $x$ is not a signal but rather a description of a class of signals (satisfying some probability measures).

A *transfer function* describes how we map *signals* in the frequency domain (see Åström and Murray). We can use transfer functions to describe how random processes are mapped through a linear system (this is called the spectral response; see the lecture notes or text).

More information:

- [Wikipedia entry on stochastic process](http://en.wikipedia.org/wiki/Stochastic_process)
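The white-versus-correlated distinction is easy to see in simulation. The sketch below (my own illustration; the AR(1) process and its parameter are assumptions for the example) compares white noise, where $x_k$ tells you nothing about $x_{k+1}$, with the first-order process $x_k = a\,x_{k-1} + w_k$, where consecutive samples have correlation coefficient $a$.

```python
import random

# Sample correlation coefficient between the two entries of each pair.
def correlation(pairs):
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in pairs) / n
    vx = sum((p[0] - mx) ** 2 for p in pairs) / n
    vy = sum((p[1] - my) ** 2 for p in pairs) / n
    return cov / (vx * vy) ** 0.5

# Draw (x_k, x_{k+1}) pairs from the AR(1) process x_k = a*x_{k-1} + w_k,
# after a burn-in so the process is near steady state.  Setting a = 0
# gives white noise.
def sample_pairs(a, trials=10000, burn=40, seed=2):
    rng = random.Random(seed)
    pairs = []
    for _ in range(trials):
        x = 0.0
        for _ in range(burn):
            x = a * x + rng.gauss(0.0, 1.0)
        x_next = a * x + rng.gauss(0.0, 1.0)
        pairs.append((x, x_next))
    return pairs

print(correlation(sample_pairs(a=0.0)))  # white noise: roughly 0
print(correlation(sample_pairs(a=0.8)))  # AR(1): roughly 0.8
```

Knowing $x_k$ in the second case genuinely narrows down $x_{k+1}$, which is exactly what a nonzero correlation function expresses.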

**Q (2006): What is the transfer function for a parallel combination of $H_1(s)$ and $H_2(s)$?**

If two transfer functions are in parallel (meaning they receive the same input and the output is the sum of the outputs from the individual transfer functions), the net transfer function is $H_1(s) + H_2(s)$. Note that this is different from the formula you get for parallel interconnections of resistors in electrical engineering. When two outputs come together in a circuit diagram, the voltage is constrained to be the same at the corresponding terminals, whereas in a block diagram we *sum* the output signals.
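The parallel rule can be verified numerically. As a sketch (my own example, using a discrete-time analogue with two assumed FIR filters), filtering an input through $H_1$ and $H_2$ separately and summing the outputs should match filtering once through the filter whose coefficients are the sums:

```python
import random

# Two simple FIR filters: H1(z) = 1 + 0.5 z^-1 and H2(z) = 2 - z^-1,
# so their parallel combination is H1(z) + H2(z) = 3 - 0.5 z^-1.
def fir_filter(coeffs, u):
    return [sum(c * u[k - i] for i, c in enumerate(coeffs) if k - i >= 0)
            for k in range(len(u))]

h1 = [1.0, 0.5]
h2 = [2.0, -1.0]
h_parallel = [a + b for a, b in zip(h1, h2)]  # coefficient-wise sum

rng = random.Random(3)
u = [rng.uniform(-1, 1) for _ in range(100)]  # a common input signal

y_sum = [a + b for a, b in zip(fir_filter(h1, u), fir_filter(h2, u))]
y_combined = fir_filter(h_parallel, u)

err = max(abs(a - b) for a, b in zip(y_sum, y_combined))
print(err)  # zero up to floating-point rounding
```

This works because the filters are linear: summing the outputs of two linear maps applied to the same input is itself a linear map, with transfer function equal to the sum.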