Metric entropy 2

I am reading the article “ENTROPY THEORY OF GEODESIC FLOWS”.

Now we focus on the upper semi-continuity of the metric entropy map. The object we investigate is (X,T,\mu), where \mu is a T-invariant measure.

The reason this kind of problem is interesting is that it is part of a variational problem: one asks for the existence of a certain object, ranging over a certain moduli space, at which some quantity attains a critical value (maximum or minimum). The simplest examples are perhaps the isoperimetric inequality and the Dirichlet principle for the Laplace equation. In any case, a classical approach to establishing such an existence result is to prove upper semi-continuity and boundedness of the associated energy of the problem. In our case the semi-continuity concerns the regularity of the entropy map:

E:M(X,T)\to [0,\infty],\quad \mu\mapsto h_{\mu}(T).
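To fix ideas, upper semi-continuity of E at \mu\in M(X,T) is the following statement, written with respect to the weak* topology (this is just the standard formulation, recorded here for reference):

\mu_n\to \mu \ \text{weakly*}\quad \Longrightarrow\quad \limsup_{n\to\infty}h_{\mu_n}(T)\leq h_{\mu}(T).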

We define the entropy at infinity:

\sup_{(\mu_n)}\limsup_{n\to \infty}h_{\mu_n}(T),

where (\mu_n)_{n=1}^{\infty} ranges over all sequences of measures converging to 0 in the vague sense, i.e. \lim_{n\to \infty} \mu_{n}(A)=0 for every compact set A\subset X.

Compact case

We first say something about the compact case. Here we have finite partitions into smaller and smaller cubes, which can be understood as a sequence of smaller and smaller scales. An example that explains the difference with the non-compact case is (\mathbb N^{\mathbb N},\sigma), the shift map on a countable alphabet.
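To see how entropy can escape on \mathbb N^{\mathbb N} (my own illustration, not taken from the article): fix m and let \mu_k be the Bernoulli measure giving weight 1/m to each of the symbols k+1,\dots,k+m. Every \mu_k is \sigma-invariant and ergodic with

h_{\mu_k}(\sigma)=\log m,

and \mu_k(A)\to 0 as k\to\infty for every compact A\subset \mathbb N^{\mathbb N}, because a compact set only involves finitely many symbols in each coordinate. Since m is arbitrary, the entropy at infinity of the full shift on \mathbb N is infinite, which is the kind of phenomenon that cannot happen in the compact case.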

Because of the existence of such finite partitions at all scales in the compact case, there is a good symbolic model. For h-expansive systems, and more generally asymptotically h-expansive systems on a compact metric space $X$, it has been proved that the corresponding entropy map is upper semi-continuous.

In particular, C^{\infty} diffeomorphisms of a compact manifold are asymptotically h-expansive.

 

 

A natural problem which I do not understand very well:

Why is it natural to assume the measure is a probability measure on a non-compact space?

 

Non-compact case

(X,d) metric space

T:X\longrightarrow X is a continuous map.

Define d_n(x,y)=\sup_{0\leq k\leq n-1}d(T^kx,T^ky); then d_{n} is still a metric (the Bowen metric).

It is easy to see that \frac{1}{n}h_{\mu}(T^n)=h_{\mu}(T). This identity can be proved using the characterization of entropy by \delta-separated sets and \delta-covering sets.
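As a concrete illustration of d_n, here is a minimal sketch in Python (my own, not from the article; it assumes the map T and the base metric d are given as functions, and uses the doubling map on the circle as an example):

```python
def bowen_metric(T, d, x, y, n):
    """d_n(x, y) = max_{0 <= k <= n-1} d(T^k x, T^k y)."""
    dist = 0.0
    for _ in range(n):
        dist = max(dist, d(x, y))   # distance of the current pair of iterates
        x, y = T(x), T(y)           # advance both orbits one step
    return dist

# Example: the doubling map on the circle R/Z with the circle metric.
T = lambda x: (2.0 * x) % 1.0
d = lambda x, y: min(abs(x - y), 1.0 - abs(x - y))
print(bowen_metric(T, d, 0.1, 0.11, 5))   # = 2^4 * 0.01 = 0.16
```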

 

Katok's theorem:

Let X be compact. For every ergodic measure \mu and every \delta\in(0,1) the following formula holds:

h_{\mu}(T)=\lim_{\epsilon \to 0}\limsup_{n\to \infty}\frac{1}{n}\log N_{\mu}(n,\epsilon,\delta),

where h_{\mu}(T) is the measure-theoretic entropy of \mu and N_{\mu}(n,\epsilon,\delta) is the minimal number of \epsilon-balls in the metric d_n needed to cover a set of \mu-measure at least 1-\delta.

Riquelme proved that the same formula holds for Lipschitz maps on topological manifolds.
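As a rough numerical illustration of the growth rate appearing in this formula (my own sketch, not from the article): for the doubling map x\mapsto 2x mod 1 with Lebesgue measure the entropy is \log 2, and (n,\epsilon)-separated sets should grow at that exponential rate. The snippet below greedily builds (n,\epsilon)-separated sets from a grid of points; it counts points in the whole circle rather than in a set of measure 1-\delta, which for this example gives the same growth rate.

```python
import numpy as np

def dn(x, y, n):
    # Bowen distance d_n for the doubling map T(x) = 2x mod 1 on the circle R/Z.
    dist = 0.0
    for _ in range(n):
        dist = max(dist, min(abs(x - y), 1.0 - abs(x - y)))
        x, y = (2.0 * x) % 1.0, (2.0 * y) % 1.0
    return dist

def separated_count(n, eps, grid=4000):
    # Greedily extract an (n, eps)-separated set from a finite grid of points;
    # its size is a lower bound for the maximal (n, eps)-separated set.
    pts = np.linspace(0.0, 1.0, grid, endpoint=False)
    chosen = []
    for p in pts:
        if all(dn(p, q, n) > eps for q in chosen):
            chosen.append(p)
    return len(chosen)

eps = 0.1
counts = {n: separated_count(n, eps) for n in range(2, 7)}
for n in range(3, 7):
    # successive growth rates log(N(n)/N(n-1)) should be close to log 2 ~ 0.693
    print(n, counts[n], np.log(counts[n] / counts[n - 1]))
```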

 

 

Let M(X,T) denote the space of T-invariant probability measures on X.

Let M_e(X,T) denote the space of ergodic T-invariant probability measures.

Simplified entropy formula:

(X,d,T) satisfies the simplified entropy formula if for every sufficiently small \epsilon>0, every \delta\in (0,1), and every \mu\in M_e(X,T),

h_{\mu}(T)=\limsup_{n\to \infty}\frac{1}{n}\log N_{\mu}(n,\epsilon,\delta).

Simplified entropy inequality:

If \epsilon>0 is sufficiently small, \mu \in M_{e}(X,T), and \delta\in (0,1), then

h_{\mu}(T)\leq \limsup_{n\to \infty}\frac{1}{n}\log N_{\mu}(n,\epsilon,\delta).

Weak entropy dense:

M_e(X,T) is weak entropy dense in M(X,T) if for every \lambda>0 and every \mu\in M(X,T) there exists a sequence \mu_n\in M_e(X,T) satisfying:

  1. \mu_n\to \mu weakly;
  2. h_{\mu_n}(T)>h_{\mu}(T)-\lambda.

Metric entropy 1

Some basic things, including the definition of metric entropy, were introduced in my earlier blog post.

Among other things, there are a few points we need to focus on:

1. Definition of metric entropy and, more generally, topological entropy.

2. The description of entropy via spanning sets and separated sets.

3. Abramov's theorem (the standard computation is recalled right after this list):

h_{\mu}(T)=\frac{1}{n}h_{\mu}(T^n).
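For my own record, the standard computation behind Abramov's theorem goes as follows (for a finite partition \alpha, writing \alpha^{(n)}=\vee_{i=0}^{n-1}T^{-i}\alpha):

h_{\mu}(T^n,\alpha^{(n)})=\lim_{m\to\infty}\frac{1}{m}H_{\mu}\big(\vee_{j=0}^{m-1}T^{-nj}\alpha^{(n)}\big)=\lim_{m\to\infty}\frac{1}{m}H_{\mu}\big(\vee_{i=0}^{nm-1}T^{-i}\alpha\big)=n\,h_{\mu}(T,\alpha).

Since \alpha^{(n)} refines \alpha, taking suprema over all finite partitions \alpha on both sides gives h_{\mu}(T^n)=n\,h_{\mu}(T).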

Now we state the result of Margulis and Ruelle:

Let M be a compact Riemannian manifold, f:M\to M a diffeomorphism, and m an f-invariant measure.

Entropy is always bounded above by the sum of the positive Lyapunov exponents, i.e.,

h_{m}(f)\leq \int_{M}\sum_{i}\lambda_i^{+}(x)\dim E_i(x)\,dm(x),

where \dim E_i(x) is the multiplicity of \lambda_i(x) and a^{+}=\max(a,0).
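A standard sanity check (my own example, not from the article): take the hyperbolic toral automorphism f_A of \mathbb T^2 induced by the matrix A=\begin{pmatrix}2&1\\1&1\end{pmatrix}, with m the Lebesgue (Haar) measure. The Lyapunov exponents are \pm\log\frac{3+\sqrt 5}{2}, each with multiplicity one, so

\int_{\mathbb T^2}\sum_i\lambda_i^{+}\dim E_i\,dm=\log\frac{3+\sqrt 5}{2}\approx 0.9624=h_m(f_A).

So in this example the Margulis–Ruelle inequality is an equality, consistent with Pesin's formula below.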

Pesin showed that the inequality is in fact an equality if f\in C^2 and m is equivalent to the Riemannian measure on M, so this equality is also sometimes known as Pesin's formula.

F. Ledrappier and L.-S. Young generalized the result of Pesin.

One of their main results is:

Let f:M\to M be a C^2 diffeomorphism of a compact Riemannian manifold M and let m be an f-invariant Borel probability measure. Then

h_m(f)=\int_{M}\sum_i\lambda_i^{+}\dim(V_i)\,dm

if and only if the conditional measures m_{\xi} of m on the unstable manifolds W^u (with respect to a measurable partition \xi subordinate to the unstable foliation) are absolutely continuous.

Remark: according to my understanding, the equality just means that in some sense we have the reverse estimate:

h_{m}(f)\geq \int_{M}\sum_i\lambda_i^{+}\dim(V_i)\,dm.

This may just mean that near the fixed points of f, i.e. the places where the topology of the foliation changes, we have the reverse estimate. Such a reverse estimate would give a control on the singularity of the push-forward measure m_{\xi} on the quotient, so m_{\xi} would have good regularity. But this idea is not enough to solve the problem.

Now we try to give a geometric explanation which will lead to a rigorous proof of the inequality:

h_{m}(f)\leq \int_{M}\sum_i\lambda_i^{+}(x)\dim E_i(x)\,dm(x).

First we observe that the long-time average \lim_{n\to \infty}\frac{1}{n}\log||Df^n|| of Df can be diagonalized (this is the Oseledets theorem). Assume that after diagonalization the exponents are

\lambda_1\leq \lambda_2\leq \dots\leq \lambda_{d-1}\leq \lambda_d,\quad d=\dim M.

These exponents divide into three parts: <0, =0, >0.

This gives a direct sum decomposition of the tangent bundle TM:

TM\simeq E_{u}\oplus E_s\oplus E_c,

where $E_u$ is the part corresponding to the positive exponents. For this part we consider the finer decomposition

E_u=\oplus_{k=1}^rV_k,

where V_k is the Oseledets subspace of \lambda_k, of dimension $\dim V_k$.
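To make these exponents concrete, here is a minimal numerical sketch (my own, using the usual QR/Benettin scheme; it is not from the article). The cat map below is only an illustration: its derivative is the constant matrix A, so its exponents are \pm\log\frac{3+\sqrt 5}{2}, but the same loop applies whenever a point-dependent Jacobian is supplied.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # Arnold's cat map on the 2-torus

def cat(x):
    return (A @ x) % 1.0

def jacobian(x):
    # For the cat map Df is the constant matrix A; for a general map this
    # would depend on the point x.
    return A

def lyapunov_exponents(x, n=5000):
    # Benettin/QR scheme: push an orthonormal frame forward by Df along the
    # orbit of x and average the logarithms of the stretching factors.
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n):
        Q, R = np.linalg.qr(jacobian(x) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        x = cat(x)
    return np.sort(sums / n)[::-1]        # exponents in decreasing order

print(lyapunov_exponents(np.array([0.1, 0.2])))
print(np.log((3 + np.sqrt(5)) / 2))        # expected positive exponent ~0.9624
```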

On the other hand, we have an identity for the metric entropy:

h_{m}(f)=\frac{1}{n}h_{m}(f^n)=\sup_{\alpha}\frac{1}{n}h_m(f^n,\alpha),

where the supremum runs over all finite measurable partitions \alpha of M.

For the latter, note that a measurable partition \alpha can always be refined to a finer partition \beta, and we have:

h_{m}(f,\alpha)\leq h_{m}(f,\beta).

Now we arrive at the central point of the proof:

every partition can be refined by a partition in which the boundaries of almost all cubes are parallel to the foliation. So we restrict ourselves to a partition \beta all of whose cubes have boundaries parallel to the Oseledets subspaces.

In this situation we only need to estimate the number of elements of \vee_{i=1}^nT^i\beta. This estimate is not very difficult; we only need to observe the following two things:

1. The limit \lim_{n\to \infty}\frac{1}{n}\log||Df^n(x)|| exists for a.e. x\in M. So the foliation is defined almost everywhere, i.e. outside a set of measure zero. In fact this exceptional set is the set of fixed points of f.

2. After a rescaling, every point which is not a fixed point of f can be regarded as being far away from the fixed points. The foliation can then be understood locally as a product. The directions whose eigenvalues are less than 1 cannot increase the number of elements of \vee_{i=1}^nT^i\beta. The directions with eigenvalue equal to 1 only contribute polynomial growth to the number of elements of \vee_{i=1}^nT^i\beta. The central point is that the directions with eigenvalue larger than 1 make the number of elements of \vee_{i=1}^nT^i\beta grow at rate e^{\lambda_i}; taking the product over these directions we get:

h_{m}(f)\leq \int_{M}\sum_i\lambda_i^{+}\dim(V_i)\,dm.
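Schematically, the counting behind this last step is (my own back-of-the-envelope version of the argument above): a cube of \beta is stretched by roughly e^{n\lambda_i} in the direction V_i after n steps, so the image of one cube meets about

\prod_{\lambda_i>0} e^{n\lambda_i \dim V_i}

cubes of \beta. Hence

\frac{1}{n}\log \#\big(\vee_{i=1}^{n}T^{i}\beta\big)\lesssim \sum_i \lambda_i^{+}\dim V_i+o(1),

and integrating the pointwise exponents against m gives the inequality above.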

In fact the proof only needs f to be C^1.