“Duck!” or the anthropic principle:
From Treon Verdery@21:1/5 to All on Tue Feb 28 07:49:48 2023
Then there is also the possibility that the MWI, as well as unitary MWI with adjustability, came up with a source branch with consciousness, and that consciousness then propagated to its further branches, and just possibly figured out how to traverse to other, previously unvisited branches. This is a little like the anthropic principle being used as a generator and screen, this time of conscious beings, but so what? It has entertainment value.
(I have no idea how I could have thought of a way to make unitary MWI adjustable, but I thought it.)
Naming seems to kind of do things to human comprehension. I am confused as to why Dave thinks naming a thing “quantum unresolved neurons” (or some other cells or tissues at a human) (Schrödinger's neurons) has a basis for producing something like the ability to be.
I perceive from his writings that he ponders the idea that all things are actually quantum superposed and unresolved, including the observer, like a human; that we are actually running sketches or simulations of what folks together call reality; and that the least fancied-up or elaborate interpretation of the Schrödinger equations could be all superposition all the time. Dave might be looking for consciousness in quantum effects, possibly because at the least elaborate interpretation (MWI, and/or all-superposed everything) consciousness would seem to be pseudo-inevitably made up of quantum stuff. The thing is, I think there can be non-quantum stuff. So it might be valid to find testable hypotheses about presence of being, isness, consciousness in that stuff which is non-quantum, just as much as hypothesizing sources of isness at quantum things.
So far I mentioned eigenvalue math; another one might be those things about the observed universe that are nonfinite and analog (the spherical-coordinate direction of photon emission).
The thing is that going for the most parsimonious statement of quantum mechanics and its way of structuring and hypothesizing about what it would do, and then testing those hypotheses, goes with Occam's razor: “simplest supported explanation.”
Occam's razor and parsimony might only describe some systems, particularly systems with fewer parts. But does statistical math actually support Occam's razor? It is possible that humans note, and can mathematically find, things, systems, and explanations likely to be right, that is, true, outside of Occam's razor. Occam's razor makes sense as a basis for doing science, but might be improvable with math, even giving the ability to assist in generating strengthened hypotheses that treat data and gather data in a way that causes Occam's razor to have greater applicability and productivity.
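There is already some math pointed this direction: information criteria like AIC penalize extra parameters, a numerical form of Occam's razor. A minimal Python sketch, where the synthetic linear data, the noise level, and the polynomial degrees compared are just illustrative assumptions:

    # Information criteria as a numerical form of Occam's razor.
    # The data, noise level, and degrees compared are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = 2.0 * x + rng.normal(0, 0.2, x.size)  # truly linear data plus noise

    def aic_for_degree(deg):
        # Fit a degree-deg polynomial; return its Akaike Information Criterion.
        coeffs = np.polyfit(x, y, deg)
        resid = y - np.polyval(coeffs, x)
        sigma2 = np.mean(resid ** 2)
        k = deg + 1  # number of fitted parameters
        log_lik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
        return 2 * k - 2 * log_lik  # lower AIC = better fit/parsimony tradeoff

    for deg in (1, 9):
        print(deg, aic_for_degree(deg))
    # The degree-9 fit has smaller residuals yet usually a worse (higher) AIC:
    # the extra parameters get penalized, which is Occam's razor made numerical.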
So the thing that to my perception causes Occam's razor to be sometimes, possibly predictably, nonapplicable is: if you think of a distribution, the likelihood that the distribution will be mathematically identical to its fitted equational curve depends on more than just the number of samples.
If you look at 10 or 300 individual distributions, some of them are likely, that is, supported by the mathematics of probability, to have some variance between each other.
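A minimal Python sketch of that probability-supported variance, where the 300 normal samples of 100 points each are just assumptions:

    # Even samples drawn from the SAME law differ measurably from each other.
    # The sample count and sizes are illustrative assumptions.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    samples = [rng.normal(0, 1, 100) for _ in range(300)]  # 300 "individual distributions"

    for i in range(5):  # Kolmogorov-Smirnov distance between a few neighboring pairs
        stat, p = ks_2samp(samples[i], samples[i + 1])
        print(f"pair {i}: KS={stat:.3f}, p={p:.3f}")
    # Some pairs show noticeable variance between each other from sampling noise alone.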
Some math shapes amplify, so if at some pile of distributions there is also a distribution of which items on that list of previously measured distributions get amplified, then that would generate at least some large, observationally prominent anisotropies (anomalous assemblages), where if you tried to describe them with the simplest thing (Occam's razor), that is, the equation that fits the aggregate of the 300 measured separate distributions, you would actually have described it wrong, with something that has high prediction value yet misses predicting the giant blobs from the aggregates.
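A Python sketch of that amplification effect; the 3% amplified fraction and the gain of 10 are just assumptions:

    # A second distribution picks WHICH sub-distributions get amplified;
    # the single aggregate fit then misses the resulting blobs.
    import numpy as np

    rng = np.random.default_rng(2)
    samples = [rng.normal(0, 1, 100) for _ in range(300)]

    gains = np.where(rng.random(300) < 0.03, 10.0, 1.0)  # amplify a random 3%
    amplified = np.concatenate([g * s for g, s in zip(gains, samples)])

    # The Occam's-razor description: one normal fit to the whole aggregate.
    mu, sigma = amplified.mean(), amplified.std()
    tail = np.mean(np.abs(amplified - mu) > 3 * sigma)
    print(f"fit: N({mu:.2f}, {sigma:.2f}); fraction beyond 3 sigma: {tail:.4f}")
    # A single normal predicts about 0.003 beyond 3 sigma; the amplified blobs
    # push the observed fraction well above that: prominent, unpredicted anisotropies.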
So someone who knows math could take a system with n as a measured variable and K repetitions, find the quantity of distributions non-identical to the model, and then look at the combined effect of grouping the new distributions together, perhaps as a simple combinatoric, to mathematically describe a system where Occam's razor was only functional, say, 2/3 of the time; then, at actual things at the observable universe, scientists and other researchers and technologists could look for statistical/distributional features that suggest some particular thing or phenomenon might be only 10% likely (or perhaps 90% likely) to have an Occam's razor basis, parsimoniousness, for saying how it works.
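A Python sketch of that "only functional some fraction of the time" measurement; K, n, the gain, and the 0.05 threshold are assumptions:

    # Estimate how often the single parsimonious model actually fits the
    # K sub-distributions it is supposed to summarize.
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(3)
    K, n = 300, 100
    gains = np.where(rng.random(K) < 0.33, 3.0, 1.0)  # a third get amplified
    samples = [g * rng.normal(0, 1, n) for g in gains]

    # The parsimonious model: one normal fit to everything pooled together.
    pooled = np.concatenate(samples)
    mu, sigma = pooled.mean(), pooled.std()

    # Fraction of sub-distributions the pooled model describes (KS test, alpha = 0.05):
    ok = sum(kstest(s, "norm", args=(mu, sigma)).pvalue > 0.05 for s in samples)
    print(f"the Occam's-razor model fits {ok}/{K} sub-distributions ({ok / K:.0%})")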
So is there any physics experiment that would look for anisotropies at what the Schrödinger equation predicts, not as a means of finding an exception that produces a new physics thing to explain, but measuring the variance of the Schrödinger equation's predictions, noting that when you clump variance together, large anisotropies predictably arise? These might really vary as to distributions (noting that arranging the distributions, possibly where they can amplify each other, produces anisotropies, anisotropies better explained with something less parsimonious than the Schrödinger equation).
At some technologically doable experiment, then doing the thing where, if there are a million measurements of that variance, a few will be really big and, at another round of piling up or sorting, will look like, or be, big anisotropies; those anisotropies would then have an alternative mathematical statement. That alternative mathematical statement would appear non-parsimonious while still being higher in accuracy.
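A Python sketch of the million-measurement version; the two-outcome measurement and its Born probability p are assumptions:

    # Out of a million simulated two-outcome quantum measurements, block-wise
    # variance estimates occasionally land far from the predicted p*(1-p);
    # piling the most deviant blocks together looks like a large anisotropy.
    import numpy as np

    rng = np.random.default_rng(4)
    p = 0.5  # assumed Born-rule probability of one outcome
    outcomes = rng.random(1_000_000) < p  # a million measurements
    blocks = outcomes.reshape(10_000, 100)  # 10,000 blocks of 100

    block_var = blocks.var(axis=1)  # per-block variance estimates
    dev = np.abs(block_var - p * (1 - p))
    print("largest block deviations:", np.sort(dev)[-5:])
    # Each extreme block is plain sampling variance, but sorted and clumped
    # together the extremes read as one big, non-parsimoniously-shaped anomaly.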
Note: I thought of an exception: if there are a million pieces of data, one equation that predicts them with precision is just to have each and every one of the million things listed as an array element, or the element of a set (it reminds me of FEA), so it would be fully descriptive yet not predictive.
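A tiny Python sketch of that exception; the linear data is an assumption:

    # A lookup table is fully descriptive of its data yet not predictive,
    # while a two-parameter fit generalizes to points it never saw.
    import numpy as np

    rng = np.random.default_rng(5)
    x_train = rng.random(1000)
    y_train = 3.0 * x_train + rng.normal(0, 0.1, 1000)

    lookup = dict(zip(x_train, y_train))  # one "array element" per datum
    a, b = np.polyfit(x_train, y_train, 1)  # parsimonious 2-parameter model

    x_new = 0.5
    print(lookup.get(x_new))  # None: the table says nothing about new data
    print(a * x_new + b)  # the simple model still predicts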
So the variance at a group of distributions sometimes regularly creating a nonparsimonious explanation, from its being a more predictive structure for thinking about things, says: there are predictably occasions where an Occam's razor approach is less functional than another equation.
So is it possible to sift sideways, or replace the bottom-up, thing-follows-thing Occam's razor with a different approach? Well, Occam's razor, with its sequential-seeming thing, reminds me of a compare-two, favor-one bubble sort. There are other sorts at computer science. One of them that I do not really comprehend is heap sort; it starts building its heap at the middle of the array, and it is much more effective than bubble sort (on the order of n log n steps rather than n squared). So, what would a human or software do with a heap of tested hypotheses that they wanted to come up with an equation to explain? There might be the scientific method equivalent of the heap sort (or some even more advanced computer science sort). And, beneficially, if the human or software knew it was going to come up with a descriptive equation (explanation) using a heap-sort scientific method (compared with going linear/bubble sort from axioms), then it could suggest completely new kinds of experiments. If I were more thoughtful I might be able to think of some versions of these experiments.
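A Python sketch comparing the two sorts by counting comparisons; the input size is an assumption:

    # Counting comparisons in bubble sort vs. heap sort on the same data,
    # to make the bottom-up vs. start-at-the-middle contrast concrete.
    import random

    def bubble_sort_comparisons(items):
        a, count = list(items), 0
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                count += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return count

    def heap_sort_comparisons(items):
        a, count = list(items), 0
        n = len(a)

        def sift_down(root, end):
            nonlocal count
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end:
                    count += 1
                    if a[child] < a[child + 1]:
                        child += 1
                count += 1
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        for start in range(n // 2 - 1, -1, -1):  # heap building starts at the middle
            sift_down(start, n - 1)
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]
            sift_down(0, end - 1)
        return count

    random.seed(6)
    data = [random.random() for _ in range(1000)]
    print("bubble sort comparisons:", bubble_sort_comparisons(data))
    print("heap sort comparisons:  ", heap_sort_comparisons(data))
    # Bubble sort needs about n^2/2 comparisons; heap sort about 2n*log2(n),
    # which at n = 1000 is dozens of times fewer.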
There are a bunch of different sorts at computer science; humans, or possibly software and AI, could look to see if shared midrange features, rather than axiom-like bases, could generate models that are more predictive. Notably, some sorting algorithms are parallel, and although humans think widely, I perceive with unenunciated parallelism and wide scope, they might enunciate, in their minds and their writings, sequential explanation. It is possible that the existence of a variety of computer science sorting algorithms could, to a mathematically literate, mechanisms-of-science-studying person, suggest new ways to do and look at things.
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)