The Law of the Iterated Logarithm for Random Dynamical System with Jumps and State-Dependent Jump Intensity


In this paper we focus on a Markov chain associated with a certain piecewise-deterministic Markov process with a state-dependent jump intensity, for which exponential ergodicity was obtained in [4]. Using the results of [3], we show that the law of the iterated logarithm holds for such a model.


Introduction
We conduct our considerations for a subclass of piecewise-deterministic Markov processes (PDMPs). These processes are governed by deterministic semiflows which are interrupted by jumps. PDMPs were introduced by Davis in [5] and have found applications, among others, in modeling phenomena in biology, e.g., as stochastic models for gene expression ([2,14,15]).
Most results on such processes are formulated in the situation where the waiting times between jumps are exponentially distributed with a constant rate λ. We are interested in the properties of systems of this type in the case when the jump intensity depends on the trajectory of the process. The asymptotic stability and exponential ergodicity of a model in which the intensity of jumps depends on the state of the system were examined in [4,11]. In this work we focus on proving one of the limit theorems, namely the law of the iterated logarithm (LIL), for such a process. Limit theorems for Markov processes have recently been the subject of intense research (see e.g. [1,6,9,10,16]). The LIL specifies a range in which, with probability 1, the trajectories of a stochastic process eventually remain. In other words, the LIL describes the greatest deviations of a stochastic process from its mean. In strength it lies between the strong law of large numbers and the central limit theorem. It was originally formulated by A. Khintchine in [7] and independently by A. Kolmogorov in [8].
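For orientation, recall the classical i.i.d. form of the LIL, to which the Markovian versions considered below reduce in the independent case: if X_1, X_2, ... are i.i.d. with mean 0 and variance σ² > 0, then

```latex
\limsup_{n\to\infty}\frac{X_1+\dots+X_n}{\sqrt{2\sigma^2\, n\ln\ln n}} = 1
\qquad\text{and}\qquad
\liminf_{n\to\infty}\frac{X_1+\dots+X_n}{\sqrt{2\sigma^2\, n\ln\ln n}} = -1
\qquad\text{a.s.}
```

so the partial sums eventually oscillate within the envelope ±(1+ε)√(2σ² n ln ln n) for every ε > 0.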
The article consists of three parts. In Section 2 we introduce notation and formulate basic definitions and facts related to Markov operators. Section 3 contains a formal description and the main assumptions of the considered model. In the last section we formulate and prove the LIL for the process described in Section 3.

Basic notation and definitions
Let (S, d) be a Polish metric space and let B_S denote the σ-field of all Borel subsets of S. As usual, by B(x, r) we denote the open ball in (S, d) with center x ∈ S and radius r > 0. We use the symbols 1_A and δ_x for the indicator function of a set A ⊂ S and the Dirac measure at a point x ∈ S, respectively. We use the letters R and N to denote the sets of real and natural numbers, respectively. Additionally, R_+ stands for the set of nonnegative real numbers and N_0 for N ∪ {0}.
Within the set B(S), which stands for the set of all bounded, Borel measurable functions f : S → R, we distinguish two subsets: C(S) and Lip(S), consisting of all continuous functions and all Lipschitz-continuous functions, respectively.
Let M_s(S) be the set of all finite, countably additive set functions on B_S. By M(S) and M_1(S) we denote the subsets of M_s(S) consisting of all nonnegative measures and all probability measures, respectively.
We write M^L_{1,k}(S) for the set of all µ ∈ M_1(S) satisfying

∫_S L(y) µ(dy) ≤ k,

where k > 0 and L : S → R_+ is a Lyapunov function, i.e. L is continuous, bounded on bounded sets and, for some x* ∈ S, L(x) → ∞ as d(x, x*) → ∞. For f ∈ B(S) and µ ∈ M_s(S) we write

⟨f, µ⟩ := ∫_S f(x) µ(dx).   (1)

The set M_s(S) is considered with the Fortet-Mourier norm ||·||_FM ([12,13]), given by

||µ||_FM = sup{ |⟨f, µ⟩| : f ∈ Lip(S), |f| ≤ 1, |f(x) − f(y)| ≤ d(x, y) for x, y ∈ S }.
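As a simple illustration (using the test-function class |f| ≤ 1, Lip(f) ≤ 1 appearing in the norm above), the Fortet-Mourier distance between two Dirac measures can be computed explicitly:

```latex
\|\delta_x-\delta_y\|_{FM}
=\sup\{\,|f(x)-f(y)| : |f|\le 1,\ \mathrm{Lip}(f)\le 1\,\}
=\min\{2,\,d(x,y)\},
```

the upper bound coming from the two constraints, and equality being attained by the 1-Lipschitz function f(u) = max(−1, min(1, d(u, y) − d(x, y)/2)). In particular, unlike the total variation distance, this metric is small for Dirac measures at nearby points, which is what makes it suitable for coupling arguments.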
For a given stochastic kernel K : S × B_S → [0, 1] we can always define two mappings P : M(S) → M(S) and U : B(S) → B(S) by the formulas

Pµ(A) = ∫_S K(x, A) µ(dx) for µ ∈ M(S), A ∈ B_S,   (2)

Uf(x) = ∫_S f(y) K(x, dy) for f ∈ B(S), x ∈ S.   (3)

Then P is a Markov operator and U is its dual operator. Let us notice that, using (1) and (3), we obtain

⟨Uf, µ⟩ = ⟨f, Pµ⟩ for f ∈ B(S), µ ∈ M_1(S).

We want to emphasize that the operator P can be extended to a linear operator defined on the space of all bounded below Borel functions, keeping the duality property (1).
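On a finite state space the kernel K is simply a row-stochastic matrix, and the duality ⟨Uf, µ⟩ = ⟨f, Pµ⟩ can be checked by direct computation. The following is an illustrative sketch (the matrix, measure and function are arbitrary toy choices, not objects from the paper):

```python
import numpy as np

# Toy state space S = {0, 1, 2}; K[x, y] = K(x, {y}) is a row-stochastic matrix.
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def P(mu):
    # Markov operator on measures: (P mu)({y}) = sum_x K(x, {y}) mu({x})
    return mu @ K

def U(f):
    # dual operator on functions: (U f)(x) = sum_y f(y) K(x, {y})
    return K @ f

mu = np.array([0.2, 0.5, 0.3])     # a probability measure
f = np.array([1.0, -2.0, 0.5])     # a bounded function

lhs = np.dot(U(f), mu)             # <Uf, mu>
rhs = np.dot(f, P(mu))             # <f, P mu>
print(abs(lhs - rhs) < 1e-12)      # prints True: the duality holds
```

The identity is just associativity of the matrix product µ K f, which is exactly what the duality expresses in the finite case.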
A regular Markov operator P is called Feller if U f ∈ C(S) for any f ∈ C(S).
The operator P is said to be exponentially ergodic if there exist an invariant measure µ* ∈ M_1(S) and a constant β ∈ [0, 1) such that, for every µ ∈ M^L_{1,1}(S) and some constant C_µ ∈ R, we have

||P^n µ − µ*||_FM ≤ C_µ β^n for n ∈ N.
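In the finite-state case exponential ergodicity can be observed directly: the error ||P^n µ − µ*|| decays geometrically at the rate given by the second eigenvalue of the kernel. A toy sketch (the 2-state matrix and the use of the l¹ norm in place of ||·||_FM are illustrative assumptions):

```python
import numpy as np

K = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # toy kernel; eigenvalues are 1 and 0.7
mu = np.array([1.0, 0.0])           # initial distribution delta_0

# invariant measure mu*: normalized left eigenvector of K for eigenvalue 1
w, v = np.linalg.eig(K.T)
mu_star = np.real(v[:, np.argmax(np.real(w))])
mu_star = mu_star / mu_star.sum()   # here mu* = (2/3, 1/3)

errs = []
for n in range(1, 21):
    mu = mu @ K                     # mu_n = P^n mu_0
    errs.append(np.abs(mu - mu_star).sum())

# geometric decay: err_n ~ C * beta^n with beta = 0.7 (second eigenvalue)
ratios = [errs[i + 1] / errs[i] for i in range(10)]
print(all(abs(r - 0.7) < 1e-6 for r in ratios))   # prints True
```

For this matrix the initial error vector happens to lie exactly along the eigenvector of the second eigenvalue, so every step contracts it by precisely β = 0.7.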

Markov chains
It is well known that, given a stochastic kernel K and a measure µ ∈ M_1(S), we can always define, on a suitable probability space, say (Ω, F, P_µ), a time-homogeneous Markov chain (χ_n)_{n∈N_0} for which

P_µ(χ_0 ∈ A) = µ(A) and P_µ(χ_{n+1} ∈ A | χ_n = x) = K(x, A) for x ∈ S, A ∈ B_S, n ∈ N_0.   (5)

If we consider the Markov operator P for the kernel (5) according to formula (2), then P^n µ is the distribution of χ_n for every n ∈ N_0. In our further considerations we use the symbol E_µ for the expectation with respect to P_µ. If µ = δ_x for some fixed x ∈ S, we write E_x instead of E_{δ_x}.
We say that a time-homogeneous Markov chain evolving on the space S² is a Markovian coupling of a stochastic kernel K : S × B_S → [0, 1] whenever its stochastic kernel J : S² × B_{S²} → [0, 1] satisfies

J((x, y), A × S) = K(x, A) and J((x, y), S × A) = K(y, A)

for all x, y ∈ S and A ∈ B_S. Let us underline that if J̃ : S² × B_{S²} → [0, 1] is a substochastic kernel satisfying

J̃((x, y), A × S) ≤ K(x, A) and J̃((x, y), S × A) ≤ K(y, A)

for all x, y ∈ S and A ∈ B_S, then we are always able to construct a Markovian coupling of K whose stochastic kernel J satisfies J̃ ≤ J.
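The simplest Markovian coupling satisfying the two marginal conditions is the independent coupling J((x, y), A × B) = K(x, A) K(y, B). A finite-state sanity check of the marginal conditions (the matrix is an illustrative toy choice):

```python
import numpy as np

K = np.array([[0.5, 0.5],
              [0.3, 0.7]])                 # toy kernel on S = {0, 1}
n = K.shape[0]

# independent coupling on S x S: J[(x,y),(x',y')] = K[x,x'] * K[y,y']
J = np.einsum('ac,bd->abcd', K, K)         # axes: (x, y, x', y')

# marginal conditions of a Markovian coupling:
#   J((x,y), A x S) = K(x, A)  and  J((x,y), S x A) = K(y, A)
first = J.sum(axis=3)                      # integrate out y' -> should equal K(x, .)
second = J.sum(axis=2)                     # integrate out x' -> should equal K(y, .)
print(np.allclose(first, K[:, None, :].repeat(n, axis=1)),
      np.allclose(second, K[None, :, :].repeat(n, axis=0)))   # prints True True
```

The independent coupling is rarely useful on its own; the point of the substochastic-kernel statement above is that one may prescribe a "good" part J̃ (e.g. one that forces the two coordinates to move closer) and complete it to a full coupling.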

The law of the iterated logarithm for Markov chains
Let µ ∈ M_1(S) be an initial distribution of the Markov chain (χ_n)_{n∈N_0}. For any n ∈ N and f ∈ Lip(S) we define

S_n(f) := Σ_{k=0}^{n−1} f(χ_k),

and we put ψ(n) := √(2n ln ln n) for n > e and ψ(n) := 1 for n ≤ e.

Suppose that there exists a unique invariant measure µ* ∈ M_1(S) for (χ_n)_{n∈N_0}. We say that the LIL holds for the Markov chain (f(χ_n))_{n∈N_0} if, for some σ > 0,

lim sup_{n→∞} (S_n(f) − n⟨f, µ*⟩)/ψ(n) = σ and lim inf_{n→∞} (S_n(f) − n⟨f, µ*⟩)/ψ(n) = −σ, P_µ-a.s.

The following theorem is proved in [3, Theorem 4.7].
Our considerations are focused on a discrete-time dynamical system, described in detail in [4], determined by a stochastic process ((Y(t), ξ(t)))_{t≥0} evolving through random jumps in the space X.
On the time interval [t_{n−1}, t_n) the process (Y(t))_{t≥0} is driven by the semiflow Π_i, where the index i is determined by (ξ(t))_{t≥0}.
At the moment of a jump, i.e. at time t_n, the process (Y(t))_{t≥0} jumps to a new state by means of a map q_θ : Y → Y, and the current semiflow Π_i is replaced by Π_j. The map q_θ is randomly picked from a given set {q_θ : θ ∈ Θ}. We assume here that Y × Θ ∋ (y, θ) → q_θ(y) ∈ Y is continuous and that the probability of choosing q_θ is determined by a density function Θ ∋ θ → p_θ(y), y ∈ Y, such that (θ, y) → p_θ(y) is continuous. The switching of semiflows is governed by a matrix of continuous probabilities π_ij : Y → [0, 1], i, j ∈ I, satisfying Σ_{j∈I} π_ij(y) = 1 for i ∈ I, y ∈ Y.
In this work we examine only the sequence of random variables given by the post-jump locations, that is, (Y_n, ξ_n) := (Y(τ_n), ξ(τ_n)), n ∈ N_0, where τ_n is the random variable describing the jump time t_n.
We now restate the above intuitive description of the model in the language of random variables. Let (Ω, F, P_µ) be a probability space on which we define ((Y_n, ξ_n))_{n∈N_0}. Let (Y_0, ξ_0) : Ω → X be a random variable with an arbitrary and fixed distribution µ ∈ M_1(X). Further, we introduce sequences (τ_n)_{n∈N_0}, (ξ_n)_{n∈N}, (η_n)_{n∈N} and (Y_n)_{n∈N} of random variables which fulfill the following conditions: • τ_n : Ω → R_+, n ∈ N_0, where τ_0 = 0, form a strictly increasing sequence such that τ_n → ∞ a.e., and the increments ∆τ_n = τ_n − τ_{n−1} are mutually independent and have the conditional distributions given by

P_µ(∆τ_{n+1} ≤ t | Y_n = y, ξ_n = i) = 1 − e^{−Λ_i(t,y)}

whenever y ∈ Y and i ∈ I, where Λ_i is given by Λ_i(t, y) = ∫_0^t λ(Π_i(s, y)) ds.
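The post-jump chain described above can be simulated directly; the state-dependent jump times can be sampled by Lewis-Shedler thinning whenever the intensity λ is bounded. The sketch below is purely illustrative: the semiflows, the intensity, the jump maps q_θ, the density of θ and the switching matrix are all toy assumptions, not the concrete model of [4].

```python
import math
import random

random.seed(0)

def flow(i, t, y):
    # two toy semiflows Pi_i(t, y) on Y = R
    return y * math.exp(-t) if i == 0 else y + t

def lam(y):
    # toy state-dependent jump intensity, bounded: 1 <= lam(y) <= 2
    return 1.0 + 1.0 / (1.0 + y * y)

LAM_MAX = 2.0  # upper bound for thinning

def next_jump_time(i, y):
    """Sample Delta tau with survival function exp(-int_0^t lam(Pi_i(s, y)) ds)
    by thinning: propose events at rate LAM_MAX, accept with prob lam/LAM_MAX."""
    t = 0.0
    while True:
        t += random.expovariate(LAM_MAX)
        if random.random() < lam(flow(i, t, y)) / LAM_MAX:
            return t

def jump_map(theta, y):
    # toy family {q_theta}: contraction toward theta
    return 0.5 * (y + theta)

def step(y, i):
    dt = next_jump_time(i, y)
    y_pre = flow(i, dt, y)                    # position just before the jump
    theta = random.uniform(-1.0, 1.0)         # toy choice of theta
    y_new = jump_map(theta, y_pre)
    i_new = 1 - i if random.random() < 0.5 else i   # toy switching probabilities
    return y_new, i_new

y, i = 0.0, 0
traj = [(y, i)]
for _ in range(1000):
    y, i = step(y, i)
    traj.append((y, i))
print(len(traj))   # prints 1001
```

Sample averages of f(Y_n) along such a trajectory are exactly the sums S_n(f) whose fluctuations the LIL controls.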
Proof. The following proof is based on the techniques used in the proof of [3, Theorem 5.2]. First we show that (12) implies that inequality (10) holds for L_q = L̃_q^{1/(2+r)}. To this end, let us assume (12) and suppose, contrary to (10), that for L_q = L̃_q^{1/(2+r)} we have

L L_q λ + α ≥ λ.   (13)