A stochastic process satisfying the Markov property: the distribution of future states, given the current state, does not depend on the past states. Use this tag for general state space processes (in both discrete and continuous time); use (markov-chains) for countable state space ...


0 votes | 0 answers | 31 views

Markov processes and presheaves

Can one consider Markov processes as certain presheaves of vector spaces on posets (in this case the poset of natural numbers with the usual order)? The image of each natural number would be ...
0 votes | 1 answer | 39 views

Markov Chain discarding balls from urn

The following question has me stumped. Any ideas on how to get started? An urn contains $n$ green balls and $n+2$ red balls. A ball is picked at random: if it is green then a red ball is also ...
0 votes | 0 answers | 19 views

Similarities and differences between Memoryless property and Markovian property

I want to ask about the similarities and differences between the memoryless property (MLP) and the Markovian property (MKP), which I have been curious about for a long time. According to what I have searched and ...
3 votes | 1 answer | 35 views

For a generator $G$ of a Markov process in continuous time and finite state space, how would one prove that the entries of $e^{tG}$ are non-negative?

I have a generator matrix $G$ for a Markov chain in continuous time with finite state space, and I am looking to prove that the entries of $e^{tG}$ are non-negative. By definition $G = P'(0)$, with entries $g_{ij} ...
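A sketch of the standard argument, assuming the usual conventions that $g_{ij}\ge 0$ for $i\ne j$ and each row of $G$ sums to $0$: write the exponential as a limit of powers,
$$e^{tG}=\lim_{n\to\infty}\left(I+\tfrac{t}{n}G\right)^{n}.$$
For $n$ large enough that $1+\tfrac{t}{n}g_{ii}\ge 0$ for every $i$, the matrix $I+\tfrac{t}{n}G$ is entrywise non-negative, and entrywise non-negativity is preserved under matrix products and limits.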
2 votes | 2 answers | 58 views

Expected state of a Markov chain

Let's start with a slightly trivial Markov chain defined as follows: the beginning state is called $1$ and the set of states is $\mathbb{N}$. At each step, when the current state is $n$, the ...
0 votes | 1 answer | 58 views

Inner product space and Reversible Markov chain

I am trying to clarify the following: Suppose $P$ is the $N\times N$ transition matrix of a finite-state Markov chain, with invariant distribution given by the $N$-vector $\mu$ (i.e. $\mu^T=\mu^T P$). ...
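For reference, the computation this question is usually driving at (a sketch, assuming $\mu_i>0$ for all $i$ and detailed balance $\mu_i p_{ij}=\mu_j p_{ji}$): define $\langle f,g\rangle_\mu=\sum_{i}\mu_i f_i g_i$; then
$$\langle Pf,g\rangle_\mu=\sum_{i,j}\mu_i p_{ij}f_j g_i=\sum_{i,j}\mu_j p_{ji}f_j g_i=\langle f,Pg\rangle_\mu,$$
so a reversible $P$ is self-adjoint in this inner product.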
0 votes | 0 answers | 28 views

Propagation of standard deviation for random variable with Markov Property

I have a discrete random variable, $X \in \{0,1,2,3\}$. Define the indicator function: $$ 1_{k}\left(x\right) = \begin{cases} 1, & \text{if $x=k$} \\ 0, & \text{otherwise} \\ \end{cases}$$ ...
0 votes | 0 answers | 29 views

Formal Theory Regarding M/M/s Queue

I have some difficulties with formally deducing the Q-matrix, or infinitesimal generator, for M/M/s queues. Although I understand the intuitive idea, I would like to know the real formal definition of ...
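For orientation, the generator most textbooks write down for the M/M/s queue, with state $i$ counting jobs in the system (a sketch, not a substitute for the formal construction the asker wants):
$$q_{i,i+1}=\lambda,\qquad q_{i,i-1}=\min(i,s)\,\mu\ \ (i\ge 1),\qquad q_{ii}=-\lambda-\min(i,s)\,\mu,\qquad q_{ij}=0\ \text{otherwise}.$$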
0 votes | 1 answer | 24 views

Question about the transformation of a Markov process

I have a question about Markov processes. Let $X_t=(X_t^1, X_t^2,\dots, X_t^n)$ be a Markov process with respect to the filtration $\mathcal{F}_t$, and let $Y_t:=\max_{1\leq k\leq n}X_t^k$; is $Y_t$ a ...
-1 votes | 1 answer | 76 views

What is the expectation of all 3 ants meeting at the same point?

Say we have 3 ants at the three corners of a triangle. What is the expectation that all 3 ants meet, given that each ant can move in any direction? Just by looking at it I figured out that in 2 ...
0 votes | 1 answer | 43 views

Estimating the transition matrix given the stationary distribution

Let's say we are given a Markov chain for variable $X = [x_1, ..., x_n]$; also we are given a desired stationary distribution for this graph $P_\infty = [p_1, ..., p_n]^\top$. How can we design an ...
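One standard answer is a Metropolis-style construction: choose any symmetric proposal and accept a move from $x_i$ to $x_j$ with probability $\min(1, p_j/p_i)$; the resulting chain satisfies detailed balance with respect to $P_\infty$. A minimal Python sketch (the uniform proposal and the function name are illustrative assumptions, not part of the question):

    import numpy as np

    def metropolis_matrix(p):
        # Transition matrix with stationary distribution p, built from a
        # uniform proposal and Metropolis acceptance probabilities.
        n = len(p)
        P = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    P[i, j] = (1.0 / n) * min(1.0, p[j] / p[i])
            P[i, i] = 1.0 - P[i].sum()  # leftover mass: stay at i
        return P

    p = np.array([0.1, 0.2, 0.3, 0.4])
    print(np.allclose(p @ metropolis_matrix(p), p))  # True: p^T P = p^T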
0 votes | 1 answer | 23 views

Is it possible to reverse a probabilistic automaton?

Is it possible to reverse a probabilistic automaton (PA), i.e. calculate the probability of the previous state given the current state? Will the reversed automaton be a PA (Markov?), i.e. will the next probability ...
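For the stationary case, Bayes' rule gives a concrete reversed kernel, and it is again a stochastic matrix (a sketch, assuming a stationary distribution $\pi$ with $\pi_j>0$ and the chain run in stationarity):
$$\hat{p}_{ji}=P(X_{n}=i\mid X_{n+1}=j)=\frac{\pi_i\,p_{ij}}{\pi_j},\qquad \sum_i \hat{p}_{ji}=\frac{\sum_i \pi_i p_{ij}}{\pi_j}=1.$$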
1 vote | 1 answer | 31 views

Absorbing state for a collection of random walks

Further to this question, having learned some things since I posed it: consider a collection of random walks $X_i$ which take finite integer values. These evolve as time-inhomogeneous Markov chains. ...
2 votes | 1 answer | 25 views

Markov property of a random process (a solution of piece-wise deterministic equations)

Consider a piece-wise deterministic (Markov!) process \begin{eqnarray} \dot{x}(t) & = & A_{\theta(t,x(t))}x(t)\\ x(0) & = & x_0 \in \mathbb{R}^n \notag \end{eqnarray} where ...
0 votes | 0 answers | 25 views

Stability condition for queueing model with 2 servers, non-identical hypoexponential-2 and FCFS

We consider a queueing model with 2 servers and a shared queue, let us label the servers as $A$ and $B$. Jobs arrive in batches of size $N$ with rate $\lambda$. Each server has a hypoexponential ...
2 votes | 0 answers | 37 views

References for basics of Piecewise-Deterministic Markov Processes

I am looking for introductory/pedagogical material on Piecewise-Deterministic Markov Processes (see http://en.wikipedia.org/wiki/Piecewise-deterministic_Markov_process). (For the moment I am interested ...
0 votes | 0 answers | 47 views

Stopping time inequality for a Markov process

Let $X$ be a right-continuous Feller-Dynkin process and define the stopping time $$\nu_{r}=\inf\{t\geq 0\mid ||X_{t}-X_{0}||\geq r\}$$ Let $B_{x}(\epsilon)=\{y\mid ||y-x|| \leq \epsilon\}$, for $x$ not ...
0 votes | 0 answers | 20 views

Max. reachability in infinite-state MDP

Following [1], the maximum probability to reach a set of states $B\subseteq S$ from state $s\in S$ in a Markov decision process with finite state space $S$ can be expressed as the unique solution to ...
0 votes | 0 answers | 26 views

Question on MRF with Efficient Approximations [Boykov98]

My question is about this paper. Although most of the explanations are perfectly clear to me, there is one part I feel unsure about. If you could enlighten me, I would really appreciate it. ...
2 votes | 0 answers | 50 views

An equivalent definition of the Feller process.

I saw this in Liggett's book (p. 95). Let $S=\mathbb{N}$, and suppose $\left( X_{t}\right)_{t\geq 0}$ is a continuous-time Markov process with ...
1 vote | 1 answer | 41 views

A question about the infinitesimal generator of a Feller process

Let $S=\mathbb{R}$, and consider the Feller process $\left( X_{t}\right)_{t\geq 0}$ with state space $S$ such that $X_{t}=t+X_{0}$ for all ...
1 vote | 1 answer | 71 views

Discrete-time Markov chain properties

A Markov chain in discrete time is irreducible, has state space $\{0,1,\dots\}$ and starts at $1$. It is both a branching process and a martingale. Determine the probability of hitting $0$.
4 votes | 0 answers | 82 views

Conditional probability and integrating out part of a random walk

Suppose that I have a random walk process defined by $\alpha_{t+1} \sim N(\alpha_t, \omega^2)$. Given $\alpha_t$ and $\alpha_{t+2}$, I understand why the conditional formula for ...
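The conditional formula the excerpt is headed toward is the Gaussian bridge (a sketch: multiply the two transition densities and complete the square):
$$p(\alpha_{t+1}\mid\alpha_t,\alpha_{t+2})\propto p(\alpha_{t+1}\mid\alpha_t)\,p(\alpha_{t+2}\mid\alpha_{t+1})\ \Longrightarrow\ \alpha_{t+1}\mid\alpha_t,\alpha_{t+2}\ \sim\ N\!\left(\frac{\alpha_t+\alpha_{t+2}}{2},\ \frac{\omega^2}{2}\right).$$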
0 votes | 1 answer | 81 views

Canonical Markov Process

Let $X$ be a canonical, right-continuous Markov process with values in a Polish state space $E$, equipped with the Borel $\sigma$-algebra $\mathcal{E}$, and assume that $t\mapsto E_{X_{t}}f(X_{s})$ ...
0 votes | 1 answer | 55 views

Three-state Markov chain

A male and a female go to a 2-table restaurant on the same day. Each day the male sits at one or the other of the 2 tables, starting at table 1, with Markov chain transition matrix: ...
0 votes | 1 answer | 60 views

Rewriting Markov process

Let $X$ be a Markov process with state space $(E,\mathcal{E})$, initial distribution $\nu$, and transition function $P_{t}$, so $$E_{\nu}(f(X_{t+s})\mid\mathcal{F}_{s})=P_{t}f(X_{s})$$ Suppose that ...
0 votes | 1 answer | 34 views

Question on Markov chains of expected number of states

I am confused by a statement from my probability book that has to do with Markov chains. I hope someone could clarify it, if possible. Consider a Markov chain for which $P_{11}=1$ and ...
1 vote | 2 answers | 69 views

Diffusion process. Distribution vs transition probability.

I need confirmation on the following problem. Take an SDE of the form \begin{equation} dX_t=a(X_t,t)\,dt+b(X_t,t)\,dW_t \end{equation} where all the conditions such that the solution $X_t$ is defined ...
0 votes | 1 answer | 39 views

General State Space Markov Chain

I am having some difficulty understanding some early results of Markov Chain theory on a general state space. We have a function (Kernel) $K:E \times E \rightarrow \mathbb{R}$, and a distribution ...
0 votes | 0 answers | 34 views

Variability in estimations over a non-ergodic/non-regular Markov process

Imagine we have a non-ergodic/non-regular Markov process with $n$ states. Among these $n$ states, there are $k$ absorbing states. For each of the $n-k$ non-absorbing states, it is not possible ...
1 vote | 1 answer | 51 views

Identity in Markov Processes

I want to know if my reasoning here is correct; it seems simple enough, but I just want clarification. (I am considering the proof that if a Markov process satisfies the detailed balance condition, then ...
0 votes | 0 answers | 66 views

Markov Chains Worked Example (Stirzaker)

I have a Markov chain with state space the non-negative integers. The rules of the chain are that when it is in state $i \neq 0$, it moves to one of $\{0,1,2,\ldots,i+1\}$ with probability $1/(i+2)$ ...
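A quick Monte Carlo estimate of the mean time to hit $0$ for such a chain (a minimal Python sketch under the stated rule; the helper names are hypothetical, and the truncated excerpt does not say what the chain does after reaching $0$):

    import random

    def step(i):
        # From state i != 0, move to one of {0, 1, ..., i+1},
        # each with probability 1/(i+2).
        return random.randint(0, i + 1)

    def time_to_hit_zero(start, max_steps=10**6):
        i, t = start, 0
        while i != 0 and t < max_steps:
            i, t = step(i), t + 1
        return t if i == 0 else None

    samples = [time_to_hit_zero(1) for _ in range(10_000)]
    samples = [t for t in samples if t is not None]
    print(sum(samples) / len(samples))  # estimate of the mean hitting time of 0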
0 votes | 0 answers | 23 views

On discrete-time stochastic attractivity of linear systems

Let $m$ be a probability measure on $Y \subseteq \mathbb{R}^p$, so that $m(Y)=1$. Consider a continuous function $f: \mathbb{R}^n \rightarrow \mathbb{R}^p$. Assume that $f(0) = 0$, and that there ...
1 vote | 1 answer | 50 views

Have discrete-time continuous-state Markov processes been studied?

I have seen discrete-time discrete-state Markov processes (such as random walks), continuous-time discrete-state Markov processes (such as Poisson processes), and continuous-time continuous-state ...
0 votes | 0 answers | 44 views

Continuous-time Markov chain

Jobs arrive at a central computer according to a $PP(\lambda)$. The job processing times are i.i.d. $\exp(\mu)$. The computer processes them one at a time in the order of arrival. The computer is ...
0 votes | 1 answer | 47 views

On discrete-time stochastic attractivity

Let $m$ be a probability measure on $Y \subseteq \mathbb{R}^p$, so that $m(Y)=1$. Consider a function $f: \mathbb{R}^n \times Y \rightarrow \mathbb{R}^n$, continuous in its first argument, ...
3 votes | 2 answers | 119 views

Probability of Extinction in a simple Birth and Death Process

We are asked to show that the probability of extinction $\zeta=\lim_{t\to \infty} P\left(X(t)=0\right)$ is given by: $$\zeta=\begin{cases}1&\text{if }\lambda\le \mu,\\ \left(\frac \mu\lambda ...
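For the start-from-one-individual case, the usual first-step argument (sketched here; the displayed formula above is cut off, so this only indicates the route): conditioning on whether the first event is a birth or a death,
$$\eta=\frac{\lambda}{\lambda+\mu}\,\eta^{2}+\frac{\mu}{\lambda+\mu},$$
whose roots are $1$ and $\mu/\lambda$; the extinction probability is the smaller root, $\min(1,\mu/\lambda)$, and independence of the lines of descent handles a general initial population.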
0 votes | 1 answer | 54 views

Metropolis-Hastings definition: proving $\pi(x)$ is the invariant density of our transition matrix

I'm currently working through the proof of the Metropolis-Hastings algorithm, using two sources (page 328, section 3; pages 1704-1705). I have a good understanding of most of the proof until ...
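The crux of that part of the proof is usually detailed balance (a sketch, writing $q$ for the proposal density and $\alpha(x,y)=\min\!\left(1,\frac{\pi(y)q(y,x)}{\pi(x)q(x,y)}\right)$ for the acceptance probability):
$$\pi(x)\,q(x,y)\,\alpha(x,y)=\min\bigl(\pi(x)q(x,y),\ \pi(y)q(y,x)\bigr)=\pi(y)\,q(y,x)\,\alpha(y,x),$$
and integrating both sides over $x$ shows that $\pi$ is invariant.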
1 vote | 0 answers | 27 views

Single evaluation for using exponential sampling until past a point

I am trying to improve an algorithm that looks like the following (and am getting stumped). I am provided with a starting time, a rate, and a target time. I then use an exponential distribution to ...
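If the loop repeatedly draws exponential gaps until passing the target, memorylessness collapses it to one draw: the first arrival after the target is the target plus an independent exponential. A Python sketch (names are illustrative; this assumes the loop is simulating a Poisson stream, which the truncated excerpt suggests but does not confirm):

    import random

    def first_arrival_after(start, rate, target):
        # By memorylessness, the overshoot of a Poisson process past any
        # fixed time is Exp(rate), so a single draw suffices.
        return max(start, target) + random.expovariate(rate)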
1 vote | 0 answers | 133 views

Why Markov matrices always have 1 as an eigenvalue

Also called a stochastic matrix. Let $A=[a_{ij}]$ be a matrix over $\mathbb{R}$ with $0\le a_{ij} \le 1$ for all $i,j$ and $\sum_{j}a_{ij}=1$ for all $i$, i.e. the sum along each row of $A$ is 1. I ...
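With the row-sum convention above, the claim is one line (sketch): the all-ones vector $\mathbf{1}=(1,\dots,1)^{T}$ satisfies
$$(A\mathbf{1})_i=\sum_j a_{ij}=1\quad\text{for every } i,\qquad\text{so}\qquad A\mathbf{1}=\mathbf{1},$$
i.e. $1$ is an eigenvalue of $A$ with eigenvector $\mathbf{1}$.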
1 vote | 1 answer | 52 views

Do hitting times of a Markov chain/process always have finite moments?

Consider an irreducible ergodic Markov chain on a finite state space $\Omega$. Then any state is positive recurrent and this should suffice to conclude that the mean hitting time of state $s \in ...
0 votes | 2 answers | 44 views

Specifying a differential equation that describes a particular set of dynamics.

There are $S$ individuals who are susceptible to infection, and $I$ who are infectious. $S + I = N$, where $N$ is the total size of the population. Each infectious individual transmits the disease to a ...
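Under the standard mass-action assumption (a contact rate $\beta$ per infectious individual, which the truncated excerpt does not pin down), the dynamics usually meant here are
$$\frac{dS}{dt}=-\beta\,\frac{SI}{N},\qquad \frac{dI}{dt}=\beta\,\frac{SI}{N},\qquad S+I=N.$$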
4 votes | 1 answer | 58 views

Showing a process is not Markov

I keep searching but I can't find any place that gives a good method of showing a process is NOT Markov. The definition I am using is that for every $s<t$ and $g$ bounded Borel there is $f$ Borel ...
2 votes | 1 answer | 40 views

Amount of information a hidden state can convey (HMM)

In this paper (Products of Hidden Markov Models, http://www.cs.toronto.edu/~hinton/absps/aistats_2001.pdf), the authors say that: The hidden state of a single HMM can only convey $\log K$ bits of ...
0 votes | 2 answers | 39 views

How do you explain $f(x_4|x_3)f(x_3|x_2)f(x_2|x_1)f(x_1) = f(x_4,x_3,x_2,x_1)$?

Let $x_1=x(n_1)$, $x_2=x(n_2)$, $x_3=x(n_3)$ and $x_4=x(n_4)$ be samples of a Markov random process $(n_1 < n_2 < n_3 < n_4)$. I don't understand the identity given below for their probability density ...
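The identity is the chain rule of probability combined with the Markov property (a sketch): by the chain rule,
$$f(x_4,x_3,x_2,x_1)=f(x_4\mid x_3,x_2,x_1)\,f(x_3\mid x_2,x_1)\,f(x_2\mid x_1)\,f(x_1),$$
and the Markov property collapses each conditional to its most recent argument, e.g. $f(x_4\mid x_3,x_2,x_1)=f(x_4\mid x_3)$.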
-2 votes | 2 answers | 96 views

Is a first-order moving average a Markov process?

Given the first-order moving average $$ x(n) = e(n) + ce(n-1) $$ where $e(n)$ is a sequence of Gaussian random variables with zero mean and unit variance which are independent of each other, and $c$ is ...
3 votes | 0 answers | 118 views

Is the monotone class theorem used in one of these steps?

In Rogers & Williams, "Diffusions, Markov Processes and Martingales", they introduce the resolvent as: $$R_\lambda f(x):=\int_{[0,\infty)}e^{-\lambda t}P_tf(x)dt=\int_ER_\lambda(x,dy)f(y)$$ where ...
1 vote | 1 answer | 49 views

A book on finite state continuous time Markov chain

I want to read in detail about finite state continuous time Markov chains. Can anybody suggest a book which deals with this topic in detail?
2 votes | 1 answer | 59 views

Random Process derived from Markov process

I have a query on a random process derived from a Markov process. I have been stuck on this problem for more than 2 weeks. Let $r(t)$ be a finite-state Markov jump process described by ...
0 votes | 2 answers | 156 views

Expected value of stochastic process

I have the following problem: $X_1,X_2,...$ are positive identically distributed random variables with the distribution function $F(x) :=P(X_n \leq x)$ and we assume that $F(0)<1$ for all $n$. Let ...