• Neural networks cover rule-based systems in zero-order logic (Was: Higher Order

    From Mild Shock@21:1/5 to All on Sat Mar 15 16:13:51 2025
    A storm of symbolic differentiation libraries
    was posted. But what can these Prolog code
    fossils do?

    Does one of these libraries support Python symbolic
    Piecewise? For example, one can define the rectified
    linear unit (ReLU) with it:

                    /   x      x >= 0
        ReLU(x) := <
                    \   0      otherwise

    With the above one can already translate a
    propositional logic program that uses negation
    as failure into a neural network:

    NOT     \+ p             1 - x
    AND     p1, ..., pn      ReLU(x1 + ... + xn - (n-1))
    OR      p1; ...; pn      1 - ReLU(-x1 - ... - xn + 1)
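
    As a minimal sketch (the helper names relu/2, not_neuron/2,
    and_neuron/2 and or_neuron/2 are made up here for illustration,
    not taken from any of the posted libraries), these three
    translations can be written as plain Prolog arithmetic:

    % relu(+X, -Y): rectified linear unit
    relu(X, Y) :- ( X >= 0 -> Y = X ; Y = 0 ).

    % not_neuron(+X, -Y): negation as failure, 1 - x
    not_neuron(X, Y) :- Y is 1 - X.

    % and_neuron(+Xs, -Y): ReLU(x1 + ... + xn - (n-1))
    and_neuron(Xs, Y) :-
       length(Xs, N),
       sum_list(Xs, S),
       T is S - (N - 1),
       relu(T, Y).

    % or_neuron(+Xs, -Y): 1 - ReLU(-x1 - ... - xn + 1)
    or_neuron(Xs, Y) :-
       sum_list(Xs, S),
       T is 1 - S,
       relu(T, U),
       Y is 1 - U.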

    For clauses, just use the Clark Completion: it makes the defined
    predicate a new neuron, dependent on the other predicate neurons
    through a network of intermediate neurons. Because of the constant
    shift in AND and OR, the neurons will have a bias b.

    So rule-based systems in zero-order logic are a subset
    of neural networks.
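
    For instance (a sketch reusing the illustrative and_neuron/2 and
    or_neuron/2 from above), the program  p :- q, r.  p :- s.
    completes to  p <-> (q, r); s , which becomes one hidden neuron
    for the conjunction and one output neuron for p:

    % p_neuron(+Xq, +Xr, +Xs, -Y): neuron for the completed p
    p_neuron(Xq, Xr, Xs, Y) :-
       and_neuron([Xq, Xr], A),   % hidden neuron for (q, r)
       or_neuron([A, Xs], Y).     % output neuron for (q, r); s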

    Python symbolic Piecewise
    https://how-to-data.org/how-to-write-a-piecewise-defined-function-in-python-using-sympy/

    rectified linear unit (ReLU)
    https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

    Clark Completion
    https://www.cs.utexas.edu/~vl/teaching/lbai/completion.pdf

  • From Mild Shock@21:1/5 to Mild Shock on Sat Mar 15 17:04:08 2025
    Hi,

    There are some ideas for realizing the kind of neuron used in
    belief networks on a quantum computer, maybe via so-called
    “Repeat-Until-Success” (RUS) circuits?

    See also:

    Towards a Real Quantum Neuron
    Wei Hu - 2018
    https://www.scirp.org/journal/paperinformation?paperid=83091

    Quantum Neuron
    Yudong Cao et al. - 2017
    https://arxiv.org/abs/1711.11240

    Bye

  • From Mild Shock@21:1/5 to Mild Shock on Sun Mar 16 22:59:13 2025
    Ok, a small progress report here. I currently have a
    library(linear) in the works, which is only a few lines
    of code but provides vectors and matrices. One can use
    the library to define matrix exponentiation by squaring:

    matexp(M, 1, M) :- !.
    matexp(M, N, R) :- N mod 2 =:= 0, !,
       I is N // 2,
       matexp(M, I, H),
       matmul(H, H, R).
    matexp(M, N, R) :-
       I is N-1,
       matexp(M, I, H),
       matmul(H, M, R).
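
    The fib/2 used in the query below is only sketched here; a
    minimal version on top of matexp/3, assuming the matmul/3 of
    library(linear) and the usual [[1,1],[1,0]] Fibonacci matrix,
    could look like this:

    % fib(+N, -F): N-th Fibonacci number, read off the matrix power
    fib(0, 0) :- !.
    fib(N, F) :-
       matexp([[1,1],[1,0]], N, [[_,F|_]|_]).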

    And then one can do fancy stuff, like answering the question:
    what are the last 8 digits of fibonacci(1000000)?

    ?- time((fib(1000000, _X), Y is _X mod 10^8)).
    % Zeit 28 ms, GC 0 ms, Lips 88857, Uhr 16.03.2025 22:48
    Y = 42546875

    The 28 ms execution time is not bad, given that the modulo was not
    integrated into matexp/3, so it computes the full fibonacci(1000000)
    before taking the modulo. Not sure whether JavaScript bigint is
    faster or slower than GMP?
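
    Integrating the modulo would only mean reducing every entry after
    each multiplication; a hypothetical matexpmod/4 (the helpers
    matexpmod/4, matmod/3, rowmod/3 and elemmod/3 are made up here,
    not part of library(linear)) might look like this:

    % matexpmod(+M, +N, +Mod, -R): matrix exponentiation modulo Mod
    matexpmod(M, 1, Mod, R) :- !,
       matmod(M, Mod, R).
    matexpmod(M, N, Mod, R) :- N mod 2 =:= 0, !,
       I is N // 2,
       matexpmod(M, I, Mod, H),
       matmul(H, H, P),
       matmod(P, Mod, R).
    matexpmod(M, N, Mod, R) :-
       I is N-1,
       matexpmod(M, I, Mod, H),
       matmul(H, M, P),
       matmod(P, Mod, R).

    % matmod(+M, +Mod, -R): reduce every matrix entry modulo Mod
    matmod(M, Mod, R) :- maplist(rowmod(Mod), M, R).
    rowmod(Mod, Row, Out) :- maplist(elemmod(Mod), Row, Out).
    elemmod(Mod, X, Y) :- Y is X mod Mod.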

    So what can we do with library(linear) besides implementing
    eval/3 and back/3? We can finally update a neural network,
    and do this iteratively, using a very simple random pick
    to choose a training data sample:

    update([V], _, [V]) :- !.
    update([V,M|L], [_,M3|R], [V,M4|S]) :-
       maplist(maplist(compose(add,mul(0.1))), M3, M, M4),
       update(L, R, S).

    iter(0, _, N, N) :- !.
    iter(I, Z, N, M) :-
       random(R), K is floor(R*4)+1,
       call_nth(data(Z, X, Y), K),
       eval(N, X, U),
       back(U, Y, V),
       update(U, V, W),
       J is I-1,
       iter(J, Z, W, M).

    Disclaimer: This is only a proof of concept. It most likely
    doesn't have all the finesse of Python torch.autograd. Also
    it uses a very simple update of the weights via μ·Δwij with
    μ = 0.1. But you can already use it to learn an AND
    or to learn an XOR.
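
    For reference, iter/4 samples one of four data(Task, Input, Target)
    facts per task, matching the pick K is floor(R*4)+1. A hypothetical
    training set (the task names and the fact layout are made up here
    for illustration) for AND and XOR could simply be:

    % data(+Task, -Input, -Target): four training samples per task
    data(and, [0,0], [0]).
    data(and, [0,1], [0]).
    data(and, [1,0], [0]).
    data(and, [1,1], [1]).

    data(xor, [0,0], [0]).
    data(xor, [0,1], [1]).
    data(xor, [1,0], [1]).
    data(xor, [1,1], [0]).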

    Mild Shock wrote:
    Somehow I shied away from implementing call/n for
    my new Prolog system. I thought my new Prolog system
    has only monomorphic caches, so I would never be able to
    replicate what I did for my old Prolog system with
    arity polymorphic caches. This changed when I had
    the idea to dynamically add a cache for the duration
    of a higher order loop such as maplist/n, foldl/n etc.

    So this is the new implementation of maplist/3:

    % maplist(+Closure, +List, -List)
    maplist(C, L, R) :-
       sys_callable_cacheable(C, D),
       sys_maplist(L, D, R).

    % sys_maplist(+List, +Closure, -List)
    sys_maplist([], _, []).
    sys_maplist([X|L], C, [Y|R]) :-
       call(C, X, Y),
       sys_maplist(L, C, R).

    It is similar to the SWI-Prolog implementation in that
    it reorders the arguments for better first argument
    indexing. But the new thing is sys_callable_cacheable/2,
    which prepares the closure to be called more
    efficiently. The invocation of the closure is already
    quite fast, since call/3 is implemented natively,
    but the cache adds a bit more speed. Here are some
    measurements that I did:

    /* SWI-Prolog 9.3.20 */
    ?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
       maplist(succ,L,_),fail; true)), fail.
    % 2,003,000 inferences, 0.078 CPU in 0.094 seconds

    /* Scryer Prolog 0.9.4-350 */
    ?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
       maplist(succ,L,_),fail; true)), fail.
        % CPU time: 0.318s, 3_007_105 inferences

    /* Dogelog Player 1.3.1 */
    ?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
       maplist(succ,L,_),fail; true)), fail.
    % Zeit 342 ms, GC 0 ms, Lips 11713646, Uhr 10.03.2025 09:18

    /* Trealla Prolog 2.64.6-2 */
    ?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
        maplist(succ,L,_),fail; true)), fail.
    % Time elapsed 1.694s, 15004003 Inferences, 8.855 MLips

    Not surprisingly, SWI-Prolog is fastest. What was
    a little surprising is that Scryer Prolog can do it quite
    fast; possibly because they heavily use maplist/n all
    over the place, they came up with things like '$fast_call'
    etc. in their call/n implementation. Trealla Prolog is
    a little bit disappointing at the moment.

