The ultimate conclusion of Quantum Physics, that properties don't exist until they are measured, makes all the preceding 'requirements' of a valid physical theory (say, for example, symmetry) risible.
I am with Einstein in believing this is impossible. Einstein co-authored the 'EPR Paradox' paper, but after his death Bell derived Bell's Theorem, which shows that properties are either non-local or don't exist until they are measured, a result since confirmed by countless entanglement experiments.
Looking at the math, I suggested that what we might have is a 2D universe, one in which the Bell inequalities become equalities. So rather than there being hidden properties carrying more information, as Einstein imagined, there is actually less information than we think there is.
To make this work, the two-dimensionality of the quantum domain would have to somehow, in a way I couldn't explain, give rise to the 3D universe we experience.
Anyway, from the 3D perspective, a 2D universe does represent non-locality!
Well, now I have a different explanation, drawn from a particular sampling problem I've been looking at, where you have samples of samples. Rather than the utility of information being asymptotic, at some point the value of added information becomes negative, worse than presuming no information at all.
That produces both the seeming truncation effects of quantum theory and Bell's-Theorem-style results.
So what is the sampling here? I imagine it like this. Imagine that we 'see' only particular frames of a very high-speed movie: we are sampling the true universe. We might or might not be in the same position in each natural bundle of frames (if there is any such thing), or exactly the same number of frames apart, but the issue is easiest to see if we are always in exactly the same position, say the first frame of each bundle.
Now further assume that this is not necessarily true of photons we generate. They may actually start in some frame other than ours, so by the time we measure them, we catch them at a frame that is not in series with our starting one, offset by a particular number of frames. That would be true of their entangled partners as well.
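Here's a minimal sketch of that picture. Everything in it is illustrative and made up for the sketch (the bundle size of 10, the function name, the assumption that we always catch the first frame of each bundle); it just shows how a process that started a few frames before ours stays permanently out of series with our observations:

```python
FRAMES_PER_BUNDLE = 10  # hypothetical bundle size; we 'see' only frame 0 of each bundle

def observed_frames(start_offset, n_bundles):
    """Internal frames of a process that we actually catch, if the process
    began start_offset frames ahead of our sampling grid."""
    # We look at absolute frames 0, B, 2B, ...; a process that started
    # start_offset frames early is at internal frame k*B - start_offset
    # each time we look.
    return [k * FRAMES_PER_BUNDLE - start_offset for k in range(n_bundles)]

ours = observed_frames(0, 5)    # a process in step with our sampling
theirs = observed_frames(3, 5)  # a photon that started 3 frames 'early'

print(ours)    # [0, 10, 20, 30, 40]
print(theirs)  # [-3, 7, 17, 27, 37]
# every observation of the offset process is 3 internal frames out of
# series with ours, and the offset never goes away
```

The point of the toy model is only that the offset is invisible frame-to-frame but constant across every measurement, which is the flavor of correlation the analogy needs.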
So that's one possible explanation of entanglement and all that. There's another, more hand-waving one, and that is to note how the sample variance divides by N-1 (Bessel's correction) rather than by N as for a population, even though a sample and a population otherwise look like identical data, the same number of objects. That also means the statistic for a sample of just one element isn't 0; it's 0/0, indeterminate. Basically it goes wild as you approach one element in the sample of a sample. It has too high a kurtosis to be useful statistically. Other statistical procedures behave like this as well.
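The N-1 point can be checked with Python's standard library. This is just an illustration of Bessel's correction, not of the physics: the same three numbers give different spreads depending on whether you treat them as a population or a sample, and with a single element the sample variance is literally 0/0, which the `statistics` module refuses to compute:

```python
import statistics

data = [2.0, 4.0, 6.0]  # mean 4, squared deviations sum to 8

print(statistics.pvariance(data))  # population variance: 8 / N     = 8/3
print(statistics.variance(data))   # sample variance:     8 / (N-1) = 4.0

# With one element the sample variance is 0/0, so it is simply undefined:
try:
    statistics.variance([5.0])
except statistics.StatisticsError as err:
    print("n = 1:", err)
```

So the two formulas only agree asymptotically, and the sample formula blows up exactly at the one-element end, which is the behavior the analogy leans on.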
These are both kinds of non-locality, but kind of in reverse: we aren't experiencing the full spacetime that exists, just a sample. We jump from one frame to another with a lot in between.
There might be other ways of applying the sampling thing, I'm still working on the math of it.