• Math:

    From Treon Verdery@21:1/5 to All on Tue Feb 28 04:40:00 2023
    “a computation theoretical schemata for an online recursive self feeding automata which can be by way of tail recursion, which can be implemented using iteration” reminds me of a way to get cellular-automaton-based data compression to work better:
    figuring out a seed and tree that generate an uncompressed data object could work better if you feed the tree branches, or the strings they generate, [][][][][][] (2d data as list) back into the [][][][][] (2d data as list). So basically: grow a tree, find
    relationships or equations that say something about what generates what, then use those equations as CA rules, with the tree re-feeding its own base seed/technique. Primitive thinking on my part, and a mathematician could do better, but sometimes
    there are things where, if you have two values, there are rules/equations about what can be found between them.
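    The re-feeding loop could be sketched minimally, assuming a 1-D elementary cellular automaton stands in for the "tree": each generation is appended as a row (2-D data as a list), and the final row is fed back in as the next seed. All function names and parameters here are invented for illustration, not from any existing compressor.

```python
# A minimal sketch of "grow a tree, then feed its output back in as the seed".
# An elementary CA (Wolfram rule numbering) plays the role of the tree-growth
# rule; rows of cells are the [][][][] 2-D data as a list of lists.

def ca_step(row, rule=110):
    """Apply an elementary CA rule to one row (periodic boundary)."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def grow_tree(seed, generations, rule=110):
    """Grow the 2-D 'tree' from a seed: a list of successive rows."""
    rows = [seed]
    for _ in range(generations):
        rows.append(ca_step(rows[-1], rule))
    return rows

def refeed(seed, generations, rounds, rule=110):
    """Feed the tree's last row back in as the next seed, repeatedly."""
    for _ in range(rounds):
        seed = grow_tree(seed, generations, rule)[-1]
    return seed

seed = [0] * 15 + [1] + [0] * 15
print(refeed(seed, generations=8, rounds=3))
```

    The point of the sketch is only the shape of the loop: the output of one growth phase becomes the input of the next, which is the "online recursive self feeding" part, implemented with plain iteration.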

    So if the data is a phone book, an equation or rule set that narrows the possible tree output to things with lots of vowels, or spaces between 3-10-letter strings, could make a first approximation that is structurally near-accurate (
    and, with vowel frequency, statistically nearer the actual goal data string) to sequentially refine. That would reduce the number of CPU cycles needed to compress the big data string initially.
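    The structural narrowing could be as cheap as a couple of string statistics. A sketch, where the vowel-fraction and 3-10-letter-chunk tests are taken from the paragraph above but the weights and thresholds are invented here:

```python
# A cheap "looks, structurally, like a phone book" score: fraction of
# letters that are vowels, plus fraction of space-separated chunks whose
# length falls in the 3-10 range. Weights are arbitrary placeholders.

VOWELS = set("aeiouAEIOU")

def vowel_fraction(s):
    letters = [c for c in s if c.isalpha()]
    return sum(c in VOWELS for c in letters) / len(letters) if letters else 0.0

def chunks_in_range(s, lo=3, hi=10):
    words = s.split()
    return sum(lo <= len(w) <= hi for w in words) / len(words) if words else 0.0

def structural_score(s):
    """Higher means structurally nearer the goal class of strings."""
    return 0.5 * vowel_fraction(s) + 0.5 * chunks_in_range(s)

print(structural_score("Smith John 555 0123"))  # word-like chunks score higher
print(structural_score("zzkrw qqq x"))          # consonant soup scores lower
```

    A candidate tree output failing this score could be discarded before any expensive exact comparison against the goal data string.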

    Then again, generating a “near miss” to the data string CPU-cycle-cheaply and then adjusting the seed and CA ruleset to upgrade the “near miss” to the actual, specific data string to be compressed might take just as much, more, or less computation. I
    do not have any idea; it is just an alternative approach. So the idea is that tree re-feedback could generate the “near miss” with a test of “looks, structurally, like the data string,” then just make smaller, easier-to-compute changes to the seed,
    possibly using “recursive self feeding automata,” once the CA generates a structural match to the goal string, like lots of vowels or regular space characters.
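    One cheap way to do the "adjust the seed to upgrade the near miss" step is simple hill climbing: flip one seed bit, re-run the CA, and keep the flip only if the output moved closer to the goal. Everything here (the CA rule, Hamming distance as the closeness test, the single-bit mutation) is a stand-in for illustration, not a claim about the best refinement scheme:

```python
# A sketch of "near miss, then refine": start from a cheap seed, make small,
# easy-to-compute changes, and keep any change whose CA output is nearer the
# goal string.

import random

def ca_run(seed, generations, rule=30):
    row = list(seed)
    for _ in range(generations):
        n = len(row)
        row = [
            (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)
        ]
    return row

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def refine_seed(goal, generations=4, steps=500, rng=random.Random(0)):
    seed = [0] * len(goal)
    best = hamming(ca_run(seed, generations), goal)
    for _ in range(steps):
        i = rng.randrange(len(seed))
        seed[i] ^= 1                              # flip one seed bit
        d = hamming(ca_run(seed, generations), goal)
        if d <= best:
            best = d                              # keep the improvement
        else:
            seed[i] ^= 1                          # revert: this near miss got worse
    return seed, best

goal = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
seed, dist = refine_seed(goal)
print(seed, dist)
```

    Whether this beats compressing the exact string directly is exactly the open question in the paragraph above; the sketch only shows that the refinement step itself is cheap per iteration.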

    This structure-first, then-refine approach could also decrease the number of CA tree-growth increments; that is, it would generate the tree with fewer CPU cycles. Also, it seems possible to find different rulesets, seeds, and numbers of tree generations
    that produce the same goal data string: more than one way to produce the goal string. Among the multiple ways found, one could be fastest to expand (decompress) from having the fewest tree-line generations.

    On automata: “Each variable in the CFG [context free grammar] corresponds to a language; this language is recursively defined using other variables. We hence look upon each variable as a module; and define modules that accept words by calling other
    modules recursively” http://msl.cs.uiuc.edu/~btovar/cs475/hw/recaut.pdf reminds me of producing a durable island of meaning, where the island is the CFG (made up of automata output and little stored and produced part-values); it causes me to think, “
    it, the grammar, and the language, is the drift or gist of a bunch of functions and numbers that comprise it.” Perhaps a normal distribution, its graph, its generating equation, and its part/area-isolating equations are a drift or gist that creates a language
    that says “normal distribution.” Then again, that is kind of overly broad, as it seems to just restate “equations can describe a system.” My loose reinterpretation of the word “language” in the quote is kind of just saying that once you
    have the equations of the normal distribution, you can recombine those equations, as a sort of words, to make completely new statements about a normal distribution (possibly also doing some exciting grammar thing where what comes first and next, and
    possibly what/where a verb is, has a definite pattern, possibly a CFG-generated grammar). Gee, it just sounds like “math says things, and the math you use to say things can be reordered to say new things and make new descriptions,” which seems very
    very well known.
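    The "each variable as a module that accepts words by calling other modules recursively" line has a very direct reading in code: each CFG variable becomes a function, and a recursive production becomes a recursive call. A sketch for a tiny invented grammar, S -> a S b | ab (the language a^n b^n, n >= 1):

```python
# Each CFG variable as a module: the variable S is a function that, given a
# start position in the word, returns every end position S can derive to.
# Recursive production S -> a S b becomes a recursive call to the module.

def accept_S(word, i):
    """Return every index j such that S derives word[i:j]."""
    results = set()
    if word[i:i + 2] == "ab":                 # base case: S -> ab
        results.add(i + 2)
    if i < len(word) and word[i] == "a":      # S -> a S b: call module S recursively
        for j in accept_S(word, i + 1):
            if word[j:j + 1] == "b":
                results.add(j + 1)
    return results

def in_language(word):
    return len(word) in accept_S(word, 0)

print([w for w in ("ab", "aabb", "aab", "ba", "aaabbb") if in_language(w)])
```

    This is just recursive descent, but it makes the quote concrete: the "language corresponding to a variable" is literally the set of words the function accepts, and it is defined by calls to itself and, in bigger grammars, to the other variables' functions.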

    The same paper says, “Intuitively, if q′∈δm(q, m′), then Xq can generate a word of the form xy where x is accepted using a call to module m and y is accepted from the state q” (perhaps it is saying: a newly generated thing meets a
    previously stored state). That line from the paper seems like an actual, parts-that-work-and-do-something, actually usable mathematical way of saying: we generated a CFG where a variable can be placed next to another variable in a relationship. That is, it generates
    something like an axiom of grammar, while simultaneously building things from previously stored state.

    The automaton that generates the context free grammar, which might be doing specific instances of “math says things, and the math you use to say things can be reordered to say new things and make new descriptions,” has one line of a multi-line
    description that says, “Intuitively, a transition within a module is simulated by generating the letter on the transition and generating a variable that stands for the language generated from the next state.” OK, that is exciting to read, and it seems
    like the English explanation of the “Intuitively, if q′∈δm(q, m′)…(more text)” descriptive/defining, actual-math or logic-like language of the paper. It might sort of say: one part of one thing, and another previous-like thing, together make
    up a durable grammar (a way of saying things), which is completely different than the interpretation at the start of this paragraph, but I will leave the first part of the paragraph as it is because it is also entertaining.
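    That quoted line describes the textbook automaton-to-grammar construction: a transition q --a--> q′ becomes the production X_q -> a X_{q′}, where X_{q′} "stands for the language generated from the next state," and an accepting state q adds X_q -> ε. A sketch with a small invented automaton (over {a, b}, accepting strings ending in "ab"); the construction is standard, the automaton is not from the paper:

```python
# Simulate each transition of a finite automaton by "generating the letter
# on the transition and generating a variable that stands for the language
# generated from the next state": q --letter--> q2 gives X_q -> letter X_q2.

delta = {                      # transitions: (state, letter) -> next state
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q0",
}
accepting = {"q2"}

def automaton_to_grammar(delta, accepting):
    """Each state becomes a variable for the language from that state."""
    productions = []
    for (q, letter), q2 in sorted(delta.items()):
        productions.append(f"X_{q} -> {letter} X_{q2}")
    for q in sorted(accepting):
        productions.append(f"X_{q} -> ε")       # accepting state: stop generating
    return productions

for p in automaton_to_grammar(delta, accepting):
    print(p)
```

    The resulting grammar is right-linear, which is the "one part of one thing next to another previous-like thing" reading: each production is a generated letter followed by a stored-state variable.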

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)