The H is the bottleneck on purpose:
relation(X, Y) :- encoder(X, H), decoder(H, Y).
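The bottleneck point can be made concrete in a few lines of Prolog. This is a hypothetical sketch (the facts and constants are invented for illustration): H ranges over a small vocabulary of hidden codes, so relation/2 can only pass information from X to Y through that narrow interface, just like an autoencoder's hidden layer.

```prolog
% Hypothetical sketch: many inputs are squeezed onto a few
% hidden codes, and the code alone determines the output.
encoder(apple,  fruit).
encoder(banana, fruit).
encoder(carrot, vegetable).

decoder(fruit,     sweet).
decoder(vegetable, savoury).

relation(X, Y) :- encoder(X, H), decoder(H, Y).

% ?- relation(apple, Y).   % Y = sweet
% ?- relation(carrot, Y).  % Y = savoury
```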
The first deep learning breakthrough was
AlexNet by Alex Krizhevsky, Ilya Sutskever
and Geoffrey Hinton:
In 2011, Geoffrey Hinton started reaching out
to colleagues about “What do I have to do to
convince you that neural networks are the future?”
https://en.wikipedia.org/wiki/AlexNet
Meanwhile, ILP is still dreaming of higher-order logic:
We pull it out of thin air. And the job that does
is, indeed, that it breaks up relations into
sub-relations or sub-routines, if you prefer.
You mean this here:
Background knowledge (Second Order)
-----------------------------------
(Chain) ∃.P,Q,R ∀.x,y,z: P(x,y) ← Q(x,z), R(z,y)
https://github.com/stassa/vanilla/tree/master/lib/poker
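Once the existentially quantified second-order variables of the Chain metarule are instantiated, it yields an ordinary first-order clause. A textbook instantiation (my own example, not taken from the linked library) with P = grandparent and Q = R = parent:

```prolog
% Instantiating Chain: P(x,y) <- Q(x,z), R(z,y)
% with P = grandparent, Q = R = parent.
parent(abe,   homer).
parent(homer, bart).

grandparent(X, Y) :- parent(X, Z), parent(Z, Y).

% ?- grandparent(abe, Who).  % Who = bart
```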
That’s too general; it doesn’t address
analogical reasoning.
Mild Shock schrieb:
Concerning this boring nonsense:
https://book.simply-logical.space/src/text/2_part_ii/5.3.html#
Funny idea that anybody would be interested, now in
the year 2025, in things like teaching breadth-first
search versus depth-first search, or even be “mystified”
by such stuff. It’s extremely trivial stuff:
Insert your favorite tree traversal pictures here.
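The DFS-versus-BFS distinction dismissed above really does fit in a few lines of Prolog. A minimal agenda-based sketch (my own toy arc/2 graph, not the book’s code, and with no cycle detection): prepending children to the agenda gives a stack and hence depth-first search, appending them gives a queue and hence breadth-first search.

```prolog
% Toy example graph.
arc(a, b). arc(a, c). arc(b, d). arc(c, e).

% Depth-first: children go on the FRONT of the agenda (stack).
dfs([Goal|_], Goal).
dfs([Node|Rest], Goal) :-
    findall(C, arc(Node, C), Children),
    append(Children, Rest, Agenda),
    dfs(Agenda, Goal).

% Breadth-first: children go on the BACK of the agenda (queue).
bfs([Goal|_], Goal).
bfs([Node|Rest], Goal) :-
    findall(C, arc(Node, C), Children),
    append(Rest, Children, Agenda),
    bfs(Agenda, Goal).

% ?- dfs([a], e).  % true
% ?- bfs([a], e).  % true
```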
It’s not even artificial intelligence, nor does it have
anything to do with mathematical logic; rather, it belongs
to computer science and discrete mathematics, which you
get in first-year university courses, making it moot to
call it “simply logical”. It reminds me of the idea of
dumbing students down by teaching them how wax candles
work just after light bulbs have been invented. If this
is the outcome of the Prolog Education Group 2.0, then
good night.