Fractal discrete network approximations and Poincaré invariance?

The link below points to an interesting paper by Sabine Hossenfelder about the symmetry of finite networks under the Poincaré group. According to this thesis, locally finite networks cannot be Poincaré invariant precisely because of their finiteness: under a Lorentz boost a finite network can become arbitrarily large, so it is not possible to define a large-scale limit in such a way that Lorentz invariance is restored in the continuum.

But what about networks that are not locally finite? Here I mean specifically fractal ones, which have no intrinsic scale, so a Lorentz boost cannot magnify any characteristic shape or direction, yet they may still have a finite number of degrees of freedom (in contrast to locally non-finite irregular ones). Are calculations still possible for such networks?

([1504.06070] A No-go theorem for Poincaré-invariant networks)

Can average acceleration be exactly determined from discrete points separated by equal time intervals?

This question already has an answer here:

  • How to approximate acceleration from a trajectory’s coordinates? (3 answers)

I am using the American dialect of the English language. The following definitions distinguish my question from the similar, previously asked question:

https://ahdictionary.com/word/search.html?q=approximation

  1. Mathematics An inexact result adequate for a given purpose.

https://ahdictionary.com/word/search.html?q=average

  1. Mathematics a. A number that typifies a set of numbers of which it is a function.

My question: is there an exact expression for the time-average
of acceleration in terms of projectile position coordinates measured
at instants in time separated by a constant, known time period?

I am not asking how to approximate acceleration based on
this data, though I am interested in that matter as well.

I am writing a short discussion of projectile motion, primarily for
my own edification. I had originally intended to give exact expressions
for both average velocity and average acceleration, based on my hypothetical
input data. After playing with the math a bit, it occurred to
me that it is not possible to give such an exact formulation of average
acceleration. I wish to say as much in my essay, but it seems prudent
to verify the fact before committing it to an affirmative statement.

By an exact expression for the time average of acceleration, I mean an
expression similar to that used below to define the time-average of velocity.

The following is my method of approximating the acceleration
near the times $t_{p}:$

Two-dimensional projectile position coordinates
are recorded at instants in time denoted by

$$
t_{p}=p\,\Delta t,
$$

where $p$ is an integer index increasing with time, and $\Delta t$
is a fixed time increment. These known coordinates are written

$$
\mathfrak{r}_{p}=\begin{bmatrix}y_{p}\\ z_{p}\end{bmatrix}.
$$

We assume projectile coordinates have some unknown functional
dependence on time, denoted by
$$
\mathfrak{r}\left[t\right]=\begin{bmatrix}y\left[t\right]\\ z\left[t\right]\end{bmatrix},\text{ and }\mathfrak{r}_{p}=\mathfrak{r}\left[t_{p}\right];
$$

where $\mathfrak{r}\left[t\right]$ is sufficiently differentiable.
Projectile displacement during a time interval $\Delta t_{p}=\left[\left(p-1\right)\Delta t,p\Delta t\right]=\left[t_{p-1},t_{p}\right]$
is determined from these measurements to be
$$
\Delta\mathfrak{r}_{p}=\mathfrak{r}_{p}-\mathfrak{r}_{p-1}.
$$

The time-average of velocity over a time interval $\Delta t_{p}$
is defined by the equation

$$
\left\langle \mathfrak{v}_{p}\right\rangle \equiv\frac{\Delta\mathfrak{r}_{p}}{\Delta t}.
$$

First, suppose $\Delta t$ is large compared to the time scale over
which the actual velocity changes; we denote the velocity as the
unknown differentiable function of time,
$$
\mathfrak{v}\left[t\right]=\frac{d\mathfrak{r}}{dt}\left[t\right].
$$

Hypothetically, the velocity could be zero over most of an interval $\Delta t_{p},$
then change rapidly during some small final sub-interval, so that
the entire displacement $\Delta\mathfrak{r}_{p}$ occurs in that final
period. In such a case the time-average of velocity over the entire
interval $\Delta t_{p}$ will not equal the numerical average of the
velocities at the endpoints:

$$
\left\langle \mathfrak{v}_{p}\right\rangle \ne\frac{\mathfrak{v}_{p-1}+\mathfrak{v}_{p}}{2}=\mathfrak{v}_{p-1}+\frac{\Delta\mathfrak{v}_{p}}{2}.
$$
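For a concrete instance of this (my own illustrative example, assuming the velocity vanishes over the first nine tenths of the interval and equals a constant $\mathfrak{V}$ over the final tenth):

$$
\left\langle \mathfrak{v}_{p}\right\rangle =\frac{\Delta\mathfrak{r}_{p}}{\Delta t}=\frac{\mathfrak{V}\,\Delta t/10}{\Delta t}=\frac{\mathfrak{V}}{10},\qquad\text{whereas}\qquad\frac{\mathfrak{v}_{p-1}+\mathfrak{v}_{p}}{2}=\frac{0+\mathfrak{V}}{2}=\frac{\mathfrak{V}}{2}.
$$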

Alternatively, if velocity were to change with time at a constant
rate

$$
\frac{d\mathfrak{v}}{dt}=\mathfrak{a}\text{, so that}
$$

$$
\mathfrak{v}\left[t\right]=\mathfrak{v}_{0}+t\,\mathfrak{a},\text{ and}
$$

$$
\mathfrak{v}_{p}-\mathfrak{v}_{p-1}=\Delta t\,\mathfrak{a}=\Delta\mathfrak{v}_{p}=\Delta\mathfrak{v},
$$

we would find that

$$
\left\langle \mathfrak{v}_{p}\right\rangle =\mathfrak{v}_{p-1}+\frac{\mathfrak{v}_{p}-\mathfrak{v}_{p-1}}{2}=\mathfrak{v}_{p-1}+\frac{\Delta t}{2}\mathfrak{a}
$$

$$
=\mathfrak{v}_{p}-\Delta\mathfrak{v}_{p}+\frac{\Delta t}{2}\mathfrak{a}=\mathfrak{v}_{p}-\frac{\Delta t}{2}\mathfrak{a}.
$$

In such a case the exact relationships

$$
\left\langle \mathfrak{v}_{p}\right\rangle =\mathfrak{v}\left[t_{p}-\frac{\Delta t}{2}\right]=\mathfrak{v}_{p}-\frac{\Delta t}{2}\mathfrak{a},\text{ and}
$$

$$
\left\langle \mathfrak{v}_{p+1}\right\rangle =\mathfrak{v}\left[t_{p}+\frac{\Delta t}{2}\right]=\mathfrak{v}_{p}+\frac{\Delta t}{2}\mathfrak{a}
$$

would hold. Thus, in the case of constant acceleration,
$$
\frac{\left\langle \mathfrak{v}_{p+1}\right\rangle -\left\langle \mathfrak{v}_{p}\right\rangle }{\Delta t}=\mathfrak{a}=\mathfrak{a}_{p}.
$$

Still assuming constant acceleration, position as a function of time,
with constants $\mathfrak{r}_{0},\mathfrak{v}_{0},\mathfrak{a}$, is
written

$$
\mathfrak{r}\left[t\right]=\mathfrak{r}_{0}+\mathfrak{v}_{0}t+\frac{1}{2}\mathfrak{a}t^{2},\text{ and}
$$

$$
\mathfrak{r}_{p}=\mathfrak{r}_{0}+\mathfrak{v}_{0}\,p\Delta t+\frac{1}{2}\mathfrak{a}\left(p\Delta t\right)^{2}
$$

$$
=\mathfrak{r}_{p-1}+\mathfrak{v}_{p-1}\Delta t+\frac{1}{2}\mathfrak{a}\,\Delta t^{2}.\text{ So that}
$$

$$
\Delta\mathfrak{r}_{p}=\mathfrak{r}_{p}-\mathfrak{r}_{p-1}=\left(\mathfrak{v}_{p-1}+\frac{\Delta t}{2}\mathfrak{a}\right)\Delta t
$$

$$
=\left(\mathfrak{v}_{p}-\frac{\Delta t}{2}\mathfrak{a}\right)\Delta t,\text{ and}
$$

$$
\Delta\mathfrak{r}_{p+1}=\mathfrak{r}_{p+1}-\mathfrak{r}_{p}=\left(\mathfrak{v}_{p}+\frac{\Delta t}{2}\mathfrak{a}\right)\Delta t.
$$

Taking the difference $\Delta\mathfrak{r}_{p+1}-\Delta\mathfrak{r}_{p}$,
dividing by $\Delta t$, and applying the definition of average velocity,
we arrive at

$$
\left\langle \mathfrak{v}_{p+1}\right\rangle -\left\langle \mathfrak{v}_{p}\right\rangle =\frac{\Delta\mathfrak{r}_{p+1}-\Delta\mathfrak{r}_{p}}{\Delta t}
$$

$$
=\left(\mathfrak{v}_{p}+\frac{\Delta t}{2}\mathfrak{a}\right)-\left(\mathfrak{v}_{p}-\frac{\Delta t}{2}\mathfrak{a}\right)
$$

$$
=\Delta t\,\mathfrak{a}.
$$

Dividing by $\Delta t$ again gives us an expression for the acceleration
in terms of our known data:

$$
\mathfrak{a}_{p}=\mathfrak{a}=\frac{\left\langle \mathfrak{v}_{p+1}\right\rangle -\left\langle \mathfrak{v}_{p}\right\rangle }{\Delta t}
$$

$$
=\frac{1}{\Delta t}\left(\frac{\Delta\mathfrak{r}_{p+1}}{\Delta t}-\frac{\Delta\mathfrak{r}_{p}}{\Delta t}\right)
$$

$$
=\frac{1}{\Delta t^{2}}\left(\begin{bmatrix}y_{p+1}-y_{p}\\ z_{p+1}-z_{p}\end{bmatrix}-\begin{bmatrix}y_{p}-y_{p-1}\\ z_{p}-z_{p-1}\end{bmatrix}\right)
$$

$$
=\frac{1}{\Delta t^{2}}\left(\mathfrak{r}_{p+1}+\mathfrak{r}_{p-1}-2\mathfrak{r}_{p}\right).
$$
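As a check on this result, here is a minimal numerical sketch (my own addition, in Python with numpy; the trajectory, time step, and constants are assumed purely for illustration) showing that the second difference reproduces a constant acceleration exactly, up to floating-point round-off:

import numpy as np

dt = 0.1                      # fixed time increment Delta t (assumed value)
t = np.arange(50) * dt        # sample instants t_p = p * dt

r0 = np.array([0.0, 0.0])     # initial position [y_0, z_0]
v0 = np.array([3.0, 10.0])    # initial velocity
a  = np.array([0.0, -9.81])   # constant acceleration

# r[t] = r0 + v0*t + (1/2)*a*t^2, evaluated at the sample instants; shape (50, 2)
r = r0 + np.outer(t, v0) + 0.5 * np.outer(t**2, a)

# second difference (r_{p+1} + r_{p-1} - 2 r_p) / dt^2 at the interior samples
a_p = (r[2:] + r[:-2] - 2.0 * r[1:-1]) / dt**2

print(np.allclose(a_p, a))    # True: exact for constant acceleration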

Now we consider the case in which the acceleration changes with time,
but the time increment $\Delta t$ is small compared to the time scale
over which the acceleration changes. Our assumption of sufficient
differentiability for $\mathfrak{r}$ ensures that the velocity near
$t_{p}$ is linear to first order in time. Thus our previous result
provides a good approximation for the acceleration at $t_{p}:$

$$
\mathfrak{a}_{p}\approx\frac{1}{\Delta t^{2}}\left(\mathfrak{r}_{p+1}+\mathfrak{r}_{p-1}-2\mathfrak{r}_{p}\right).
$$

This is an approximation of acceleration, not a precisely defined
average.
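To see that this really is only an approximation when the acceleration varies, here is a companion sketch (again my own, assuming the one-dimensional trajectory $z(t)=\sin t$, whose exact acceleration is $-\sin t$); the error of the second difference shrinks roughly as $\Delta t^{2}$:

import numpy as np

def second_difference_error(dt):
    t = np.arange(0.0, 10.0, dt)
    z = np.sin(t)                                     # sampled positions
    a_est = (z[2:] + z[:-2] - 2.0 * z[1:-1]) / dt**2  # second-difference estimate
    a_true = -np.sin(t[1:-1])                         # exact acceleration at interior samples
    return np.max(np.abs(a_est - a_true))

for dt in (0.1, 0.05, 0.025):
    print(dt, second_difference_error(dt))            # error drops by about 4x when dt is halved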

L1 distance between empirical and true distribution for discrete distributions

I have a distribution over the discrete set $\mathcal{A} = \{1, \ldots, d\}$ where the pmf is $p(\cdot)$. That is, $p(i)$ is the probability of obtaining $i$ from $\mathcal{A}$. Given a dataset with $n$ i.i.d. samples from $\mathcal{A}$, i.e. $X \sim \mathcal{A}^n$, I compute its empirical distribution as $q(\cdot)$. That is, if $X = (x_1, \ldots, x_n)$, I compute $q(i) = \frac{1}{n} \sum_{j=1}^n \mathbf{1}_{x_j = i}$ for each $i \in \mathcal{A}$. I want to bound from above and below $E_{X \sim \mathcal{A}^n}\left[\lVert p(\cdot) - q(\cdot)\rVert_1\right]$. I would think that this is something well known, but I just can’t seem to find a good reference. I tried using the DKW inequality and then applying Markov’s inequality, but was unable to get anything from that. I also tried using Pinsker’s inequality, but I couldn’t bound the KL divergence.

This is not a homework question. I’d greatly appreciate any pointers/help.

(EDITED to include more rigorous notation).
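For what it may be worth, here is a small Monte Carlo sketch (my own, in Python with numpy; the dimension, sample size, and true pmf are arbitrary illustrations) that estimates $E\lVert p - q\rVert_1$ and compares it with the elementary bound $\sum_i \sqrt{p(i)(1-p(i))/n}$, which follows from Jensen’s inequality because $n\,q(i)$ is Binomial$(n, p(i))$:

import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 1000
p = rng.dirichlet(np.ones(d))              # an arbitrary true pmf on {1, ..., d}

def expected_l1(p, n, trials=2000):
    # Monte Carlo estimate of E ||p - q||_1, q the empirical pmf of n i.i.d. draws
    counts = rng.multinomial(n, p, size=trials)
    q = counts / n
    return np.mean(np.abs(q - p).sum(axis=1))

estimate = expected_l1(p, n)
bound = np.sqrt(p * (1 - p) / n).sum()     # E|q(i)-p(i)| <= sqrt(Var q(i)) = sqrt(p(i)(1-p(i))/n)
print(estimate, bound)                     # the estimate should sit below the bound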

Does anybody know the name of the discrete distribution with these properties?

I’m looking for a distribution which has the following properties. I don’t know what it’s called so I’m having a hard time finding references to it.

Properties:

  • Domain is over a finite range of integers (distribution is discrete and truncated)
  • Range is over the reals
  • The sum over the distribution is equal to 1
  • The first and second moments (mean and variance) are defined and are independent of one another
  • The entropy of the distribution is maximized given the above constraints.

A normal distribution would fit these criteria if the domain was over all of the reals. Likewise, a truncated normal distribution would fit if the domain were in a range of reals.

The binomial distribution can’t be right because there’s only one free parameter, $p$, which affects both the mean and the variance. Likewise the hypergeometric distribution doesn’t fit either.

Does anybody know if this distribution has a name?
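If I recall the standard maximum-entropy argument correctly, the entropy-maximizing pmf on a fixed finite integer support, subject to prescribed first and second moments, has the exponential-family form $p(k) \propto \exp(\lambda_1 k + \lambda_2 k^2)$. The sketch below (my own, in Python with scipy; the support and target moments are illustrative) solves numerically for the multipliers:

import numpy as np
from scipy.optimize import fsolve

k = np.arange(0, 21)                    # finite integer support {0, ..., 20}
target_mean, target_var = 8.0, 5.0      # illustrative target moments

def pmf(lam):
    # maximum-entropy form: p(k) proportional to exp(lam1*k + lam2*k^2), normalized
    logw = lam[0] * k + lam[1] * k**2
    logw -= logw.max()                  # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

def moment_mismatch(lam):
    p = pmf(lam)
    mean = (p * k).sum()
    var = (p * k**2).sum() - mean**2
    return [mean - target_mean, var - target_var]

lam = fsolve(moment_mismatch, x0=[0.0, -0.1])
p = pmf(lam)
print((p * k).sum(), (p * k**2).sum() - (p * k).sum()**2)   # reproduces the target moments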


Can the space-time be a discrete lattice?

I would like to know to what extent conjectures about spacetime being discrete are valid. There is a short article here:

https://www.meta-religion.com/Physics/Quantum_physics/is_space.htm

One reason mentioned there is that current physical models suggest that the information content of any finite spacetime region is finite. Therefore, spacetime cannot be continuous, because otherwise the position of a particle might contain an infinite amount of information. How correct is this?

Generally, what do people mean when they say that spacetime is quantized? Do they mean that it is a discrete grid with certain allowed positions?

How to implement an improper discrete-time transfer function into Simulink

I am trying to model a discrete-time control system in Simulink. The control system contains an inner loop that utilizes the inverse of the plant model. The plant in question takes the form of:

$P(s) = \frac{3.332\,s + 4.679}{s^{2} + 30.97\,s + 45.5}$

s = tf('s');
P = (3.332*s + 4.679)/(s^2+30.97*s+45.5);

I inverted the plant, $P(s)^{-1}$:

P_inv = inv(P);

Now I have an inverted model; however, you cannot explicitly implement an improper transfer function in Simulink. I followed this discussion, which has been referred to multiple times across various forums.

% Partial-fraction expansion of the improper inverse:
% r = residues, p = poles, k = coefficients of the direct term k(1)*s + k(2)
[r,p,k]=residue(P_inv.num{1},P_inv.den{1});
sys1 = tf(r(1),[1,-p(1)]);   % proper first-order term r(1)/(s - p(1))
sysk1 = k(1)*s;              % pure derivative term (still improper)
sysk2 = k(2);                % constant gain term

Once the improper function is broken down, I discretized each component separately.

Ts = 0.005;                        % sample time in seconds
d_sys1 = c2d(sys1,Ts)              % the proper term discretizes without trouble
d_sysk1 = c2d(sysk1,Ts,'matched')  % <--- problem

A problem now exists in d_sysk1. It results in yet another improper function taking the form of:

d_sysk1 =
    59.99 z - 59.99
Sample time: 0.005 seconds
Discrete-time transfer function.

In my simulation I want to implement a discretized version of the improper inverse plant, $P(s)^{-1}$. Is there something I did incorrectly?

Shifting signal smaller than discrete step

I have an image that I need to shift by less than a pixel.

My plan was to take a Fourier transform and multiply the signal by $e^{-i(au+bv)}$, where $a, b$ are the shifts in the x and y directions. This makes sense in theory, but in practice I don’t really know how to represent $e^{-i(au+bv)}$.

I am using numpy to do a 2-D FFT of the image which, of course, is represented as a matrix of complex values. From this point on I’m somewhat lost: how do I multiply this with $e^{-i(au+bv)}$?

It seems like something is fundamentally wrong, as if I need to temporarily represent the image at a higher resolution, shift it slightly, and then collapse it back to the original resolution.
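Here is a minimal numpy sketch of one way I imagine the phase ramp could be built (my own assumption: the frequency grids $u$, $v$ come from np.fft.fftfreq, and the resulting shift is circular, i.e. it wraps around the image borders):

import numpy as np

def subpixel_shift(img, dy, dx):
    # Shift a 2-D (grayscale) array by (dy, dx) pixels, fractions allowed,
    # by multiplying its FFT with a linear phase ramp exp(-2*pi*i*(u*dy + v*dx)).
    M, N = img.shape
    u = np.fft.fftfreq(M)[:, None]   # frequencies along rows, shape (M, 1), cycles/sample
    v = np.fft.fftfreq(N)[None, :]   # frequencies along columns, shape (1, N)
    ramp = np.exp(-2j * np.pi * (u * dy + v * dx))
    shifted = np.fft.ifft2(np.fft.fft2(img) * ramp)
    # a small imaginary residue (from the Nyquist bins) may remain for fractional
    # shifts of a real image; taking the real part is the usual practice
    return shifted.real

The sign in the exponent sets the shift direction (flip it to shift the other way), and if this is right, no intermediate higher-resolution representation should be needed.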