I'm going to assume that the science category is a reasonable place for maths.
So, in mathematical computer science there's a thing called Kolmogorov Complexity. Roughly speaking, the Kolmogorov Complexity of something is the length of the shortest possible description of that thing. For instance, the pieces of text "to be or not to be, that is the question" and "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" have the same length (40 characters). But while the second is very easy to convey quickly ("40 'a's", for instance, at seven characters), the first is quite difficult ("start of Hamlet's speech" is a fair try, but it's still 24 characters and somewhat ambiguous). In this sense, the first piece of text is more complex than the second.
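To make that concrete, here's a quick, informal Python illustration of the same point (my own sketch, nothing standard): both strings are 40 characters long, but only one of them has a description much shorter than the string itself.

    # Both strings are 40 characters, but the second has a far shorter program
    # that produces it, so its Kolmogorov Complexity is lower.

    hamlet = "to be or not to be, that is the question"
    repetitive = "a" * 40

    assert len(hamlet) == len(repetitive) == 40

    # A short "description" of the repetitive string: the 8-character program '"a" * 40'.
    short_description = '"a" * 40'
    assert eval(short_description) == repetitive
    print(len(short_description))   # 8 -- far shorter than the string it describes

    # For the Hamlet line it's hard to do much better than quoting it outright:
    literal_description = repr(hamlet)
    print(len(literal_description)) # 42 -- the string itself plus two quote marks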
More formally, the Kolmogorov Complexity of a string of symbols is defined with respect to a way of describing an instance of a model of computation (such as a Turing machine or lambda term), and is the length of the shortest description of such an instance whose output is that string. For those not familiar with them, you can read 'Turing machine' or 'lambda term' as 'computer program (that takes no input)'.
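For the notation-inclined, one common way of writing this (with U standing for the chosen machine or interpreter, p ranging over descriptions, and |p| meaning the length of p) is roughly:

    K_U(s) = min { |p| : U(p) = s }

i.e. the length of the shortest description that U turns into exactly s.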
Kolmogorov Complexity has a property which I find kind of unsettling, called the Chaitin Incompleteness Theorem: for each language it can be defined with respect to, there is a finite, often surprisingly low number such that it is impossible to prove, for any specific string, that its Kolmogorov Complexity is greater than that number (even though such strings provably exist). That's a little awkward to say, so let me express it more concretely. It is almost certainly the case that no program that can fit on your laptop, no matter how long it runs for, can output a full and accurate description of the entire universe. But proving that would amount to proving that such a description has a Kolmogorov Complexity greater than the amount of code your laptop can hold, and that figure is comfortably above the number I referred to for many languages you could run on a laptop, so you can't prove it. (As is often the case in computer science, this is subject to the caveat that your laptop has infinite RAM, but in practice I think it has enough that no proof that exploits the fact that it's finite is practically achievable.)
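Stated slightly more carefully (and still glossing over details that vary between presentations): fix a description language via a machine U, and fix a sound, effectively axiomatised proof system. Then there is a constant c, depending on both, such that no statement of the form

    K_U(s) > c

is provable for any specific string s, even though all but finitely many strings actually satisfy it (there are only finitely many descriptions of length at most c, so only finitely many strings can have complexity c or less).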
The proof of this property is as follows. Imagine a program P that behaves like this:
    the length of this program is a number called L
    for every possible formal logical proof, one at a time:
        if this proof is a valid proof that a specific string S has a Kolmogorov Complexity greater than L:
            output S and halt the program
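To pin down the shape of that, here's a rough Python skeleton of P. The names and structure are mine and purely illustrative, and the two helper functions are deliberate placeholders: filling them in is exactly the work discussed further down.

    # A rough skeleton of P, with the two hard parts left as placeholders.

    L = 100_000  # stand-in for "the length of this program", known once it's finished

    def every_possible_proof():
        """Yield every finite string of symbols, shortest first (sketched further down)."""
        raise NotImplementedError

    def proves_high_complexity(candidate_proof, bound):
        """If candidate_proof is a valid proof that some specific string S has a
        Kolmogorov Complexity greater than bound, return S; otherwise return None."""
        raise NotImplementedError

    def run_P():
        for proof in every_possible_proof():      # every possible proof, one at a time
            s = proves_high_complexity(proof, L)
            if s is not None:
                print(s)                          # output S
                return                            # and halt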
Does P ever halt? If it did, that would mean it had found a valid proof that some specific string S has a Kolmogorov Complexity greater than L. But when it halts, it outputs S, so S clearly has a Kolmogorov Complexity of at most L, since a program of length L outputs it. And if S really has a complexity of at most L, then (assuming our proof system is sound) there can be no valid proof that its Kolmogorov Complexity is greater than L, no matter what S is. Thus, P never halts.
But if P never halts, and if given enough time P would reach any specific proof, then there can be no valid proof that any specific string has a Kolmogorov Complexity greater than L. And since this argument works for any program with the described behaviour, the bound can be taken to be the length L of the shortest possible program with those semantics.
That's pretty cool, but here's the scary part: you can write P. It's not trivial, but you could totally do it.
Checking whether a formal proof is valid is a relatively simple matter of going through each step and ensuring that it follows certain purely structural rules with respect to the preceding steps.
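To give a flavour of what "purely structural rules" means, here's a toy checker for a deliberately minimal made-up system (not any standard logic): formulas are atoms or implications written as nested tuples, and each step of a proof must either be an axiom or follow from earlier steps by modus ponens.

    def follows_by_modus_ponens(formula, earlier_steps):
        """True if some earlier step is an implication (X -> formula) whose
        antecedent X is also among the earlier steps."""
        for step in earlier_steps:
            if (isinstance(step, tuple) and len(step) == 3 and step[0] == "->"
                    and step[1] in earlier_steps and step[2] == formula):
                return True
        return False

    def check_proof(axioms, steps):
        """Go through each step and make sure it's structurally justified."""
        so_far = []
        for formula in steps:
            if formula not in axioms and not follows_by_modus_ponens(formula, so_far):
                return False
            so_far.append(formula)
        return True

    axioms = ["A", ("->", "A", "B")]
    print(check_proof(axioms, ["A", ("->", "A", "B"), "B"]))  # True
    print(check_proof(axioms, ["B"]))                         # False

A real checker for a serious formal system is bigger, of course, but it's the same kind of bookkeeping: every step either is an axiom or cites earlier steps according to a fixed rule.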
An algorithm that, given enough time, generates every possible valid proof is easy, as long as you don't mind it also generating plenty of invalid ones (the validity check from the previous step filters those out anyway).
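Here's that part as a sketch; it's the sort of thing the every_possible_proof placeholder above would be built from (a real P would enumerate whatever symbols its proof format actually uses):

    from itertools import count, islice, product

    def every_finite_string(alphabet):
        """Yield every finite string over `alphabet`, shortest first, so any
        particular candidate proof is reached after finitely many steps."""
        for length in count(0):
            for symbols in product(alphabet, repeat=length):
                yield "".join(symbols)

    # The first few strings over a two-symbol alphabet:
    print(list(islice(every_finite_string("ab"), 7)))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']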
If you will eventually reach every possible valid proof, you don't need to recognise every way a proof might express that a string S has complexity greater than L; you just need to fix one way of phrasing that conclusion and check for it. If any equivalent statement is provable, then eventually a proof ending in your chosen phrasing will turn up too.
Pinning down that one phrasing is the most difficult part, since it requires modelling the language you're using inside a formal logic, but it's by no means beyond the bounds of possibility (as shown by the fact that people have written interpreters for nontrivial languages in Prolog). I don't know how big the program would be in the end (though I hope to get around to trying it), but I'd be willing to bet it would be smaller than some programs you use every day.
All of which is to say: the shortest complete description of the locations of every molecule in the Andromeda galaxy at some moment in time might be shorter than the shortest complete description of the locations of every molecule in a potted plant, but we can never know for sure.