• ILP is still dreaming of higher order (Was: Prolog Education Group clue

    From Mild Shock@21:1/5 to Mild Shock on Fri Mar 7 23:58:10 2025
    The first deep learning breakthrough was
    AlexNet by Alex Krizhevsky, Ilya Sutskever
    and Geoffrey Hinton:

    In 2011, Geoffrey Hinton started reaching out
    to colleagues about “What do I have to do to
    convince you that neural networks are the future?”

    https://en.wikipedia.org/wiki/AlexNet

    Meanwhile ILP is still dreaming of higher order logic:

    We pull it out of thin air. And the job that does
    is, indeed, that it breaks up relations into
    sub-relations or sub-routines, if you prefer.

    You mean this here:

    Background knowledge (Second Order)
    -----------------------------------
    (Chain) ∃.P,Q,R ∀.x,y,z: P(x,y) ← Q(x,z), R(z,y)

    https://github.com/stassa/vanilla/tree/master/lib/poker
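
    To make the Chain metarule concrete, here is a minimal sketch in
    plain Prolog (not the Poker/Vanilla API; the predicate names are
    made up). Instantiating the existentially quantified P, Q and R
    turns the second-order template into an ordinary first-order
    clause that splits one relation into two sub-relations:

    parent(anna, bob).
    parent(bob, carol).

    % Chain instance with P = grandparent, Q = R = parent:
    grandparent(X, Y) :-      % P(x,y) ←
        parent(X, Z),         %     Q(x,z),
        parent(Z, Y).         %     R(z,y)

    % ?- grandparent(anna, Y).
    % Y = carol.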

    That’s too general; it doesn’t address
    analogical reasoning.

    Mild Shock wrote:
    Concerning this boring nonsense:

    https://book.simply-logical.space/src/text/2_part_ii/5.3.html#

    Funny idea that anybody would be interested just now, in
    the year 2025, in things like teaching breadth-first
    search versus depth-first search, or even be “mystified”
    by such stuff. It’s extremely trivial stuff:

    Insert your favorite tree traversal pictures here.
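
    Or, in place of pictures, a minimal Prolog sketch of the kind of
    thing meant (the graph facts and both traversal predicates are
    made up for illustration):

    edge(a, b). edge(a, c). edge(b, d). edge(c, e).

    % Depth-first: just Prolog’s native search order.
    dfs(Goal, Goal, [Goal]).
    dfs(Node, Goal, [Node|Path]) :-
        edge(Node, Next),
        dfs(Next, Goal, Path).

    % Breadth-first: expand a queue of reversed paths, shortest first.
    bfs(Start, Goal, Path) :-
        bfs_([[Start]], Goal, Rev),
        reverse(Rev, Path).

    bfs_([[Goal|Rest]|_], Goal, [Goal|Rest]).
    bfs_([[Node|Rest]|Queue], Goal, Path) :-
        findall([Next,Node|Rest], edge(Node, Next), Children),
        append(Queue, Children, Queue1),
        bfs_(Queue1, Goal, Path).

    % ?- dfs(a, e, P).   % P = [a, c, e]
    % ?- bfs(a, e, P).   % P = [a, c, e]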

    It’s not even artificial intelligence, nor does it have anything
    to do with mathematical logic; rather, it belongs to computer
    science and discrete mathematics, which you get in first-year
    university courses, making it moot to call it “simply logical”.

    It reminds me of the idea of teaching how wax candles work
    to dumb down students, when light bulbs have just been
    invented. If this is the outcome of the Prolog Education
    Group 2.0, then good night.


  • From Mild Shock@21:1/5 to Mild Shock on Sat Mar 8 00:00:34 2025
    You are probably aiming at the decomposition
    of an autoencoder or transformer into an encoder
    and a decoder, making the split automatically
    from a more general ILP framework.

    The H is the bottleneck on purpose:

    relation(X, Y) :- encoder(X, H), decoder(H, Y).
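
    As a toy illustration (all facts made up, not a learned model),
    H is deliberately much narrower than X and Y; together with the
    relation/2 clause above this is runnable Prolog:

    % encoder/2 compresses an input to a short latent code,
    % decoder/2 reconstructs the output from that code.
    encoder([a,b,a,b], h1).
    encoder([c,c,c],   h2).
    decoder(h1, [a,b,a,b]).
    decoder(h2, [c,c,c]).

    % ?- relation([a,b,a,b], Y).
    % Y = [a, b, a, b].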

    OK, you missed the point. Let’s assume for the
    moment that the H is there on purpose, not something
    that happens accidentally through a more general
    learning algorithm, but a design feature
    of how we want to learn. Can we incorporate
    analogical reasoning, the parallelogram?

    Yeah, relatively simple: just add more input
    and output layers. The new parameter K indicates
    how the representation was chosen:

    relation(X, Y) :-
        similar(X, A, K),
        encoder(A, H),
        decoder(H, B),
        similar(Y, B, K).
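
    As a toy sketch of the parallelogram (all facts made up; think
    king : queen = man : woman), similar/3 factors an offset K out of
    the input and re-applies it to the output, so the encoder/decoder
    pair only has to learn the core mapping:

    % X lies at offset K from its anchor A.
    similar(king,  man,   royal).
    similar(queen, woman, royal).
    similar(man,   man,   plain).
    similar(woman, woman, plain).

    % Core mapping learned by encoder/decoder: flip the gender.
    encoder(man,   [male]).
    decoder([male], woman).
    encoder(woman, [female]).
    decoder([female], man).

    % ?- relation(king, Y).   % using the relation/2 clause above
    % Y = queen.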

    It’s again an autoencoder or transformer, with a bigger
    latent space. Prominent additional input layers that work
    here are convolutional neural networks.

    Things like max pooling or self-attention pooling:

    relation(X, Y) :-
        encoder2(X, J),
        decoder2(J, Y).

    encoder2(X, [K|H]) :-
        similar(X, A, K),
        encoder(A, H).

    decoder2([K|H], Y) :-
        decoder(H, B),
        similar(Y, B, K).

    You can learn the decoder2/2 as a whole in your
    autoencoder and transformer learning framework,
    provided it can deal with many layers, i.e.
    if it has deep learning techniques.
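
    With the same toy facts as in the parallelogram sketch above
    (still made up for illustration), the offset K simply gets packed
    into the enlarged latent representation [K|H]:

    % ?- encoder2(king, L), decoder2(L, Y).
    % L = [royal, male],
    % Y = queen.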
