**DIE ANTINOMIEN DER MENGENLEHRE** (The Antinomies of Set Theory)
E. Specker, Dialectica, Vol. 8, No. 3 (15. 9. 1954)
https://www.jstor.org/stable/42964119?seq=7
Hi,
That is extremely embarrassing. I don't know
what you are bragging about when you wrote
the below. You are wrestling with a ghost!
Maybe you didn't follow my superb link:
"seemingly interesting paper. In particular,
his final coa[l]gebra theorem"
The link I gave behind Hopcroft and Karp (1971),
a handout on Bisimulation and Equirecursive
Equality, has a coalgebra example
from which I derived pairs.pl:
https://www.cs.cornell.edu/courses/cs6110/2014sp/Lectures/lec35a.pdf
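To make that concrete, here is a minimal sketch in the spirit of the handout (my own reconstruction, not the actual pairs.pl): a pair system given by destructor facts, checked with a Hopcroft-Karp style bisimulation that accumulates state pairs already assumed equal, so it terminates even on cyclic systems.

```prolog
% Minimal sketch, not the actual pairs.pl: two coinductive "pair
% systems" given by destructor facts fst/2 and snd/2. The states s
% and t both unfold to the infinite pair (a,(a,(a,...))).
fst(s, a).  snd(s, s).
fst(t, a).  snd(t, u).
fst(u, a).  snd(u, t).

% bisim/2: Hopcroft-Karp style check. Seen accumulates state pairs
% already assumed bisimilar, which makes cyclic systems terminate.
bisim(X, Y) :- bisim(X, Y, []).

bisim(X, Y, Seen) :-
    memberchk(X-Y, Seen), !.        % coinductive hypothesis: assume equal
bisim(X, Y, Seen) :-
    fst(X, A), fst(Y, A),           % first components must coincide
    snd(X, X1), snd(Y, Y1),         % second components must be bisimilar
    bisim(X1, Y1, [X-Y|Seen]).

% ?- bisim(s, t).
% true.
```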
Bye
Mild Shock wrote:
Inductive logic programming at 30
https://arxiv.org/abs/2102.10556
The paper contains not a single reference to autoencoders!
Still, they show this example:
Fig. 1 ILP systems struggle with structured examples that
exhibit observational noise. All three examples clearly
spell the word "ILP", with some alterations: 3 noisy pixels,
shifted and elongated letters. If we were to learn a
program that simply draws "ILP" in the middle of the picture,
without noisy pixels and elongated letters, that would
be a correct program.
I guess ILP is 30 years behind the AI boom. An early autoencoder
turned into a transformer was already reported here (*):
SERIAL ORDER, Michael I. Jordan - May 1986
https://cseweb.ucsd.edu/~gary/PAPER-SUGGESTIONS/Jordan-TR-8604-OCRed.pdf
Well, ILP might have its merits; maybe we should not ask
for a marriage of LLMs and Prolog, but of autoencoders and ILP.
But it's tricky: I am still trying to decode the da Vinci code of
things like stacked tensors. Are they related to k-literal clauses?
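My rough reading, an assumption on my side and not something the paper states: a k-literal clause is just a definite clause with k body literals, for example:

```prolog
% Hedged illustration, not taken from the ILP-at-30 paper:
% a k-literal clause is a definite clause with k body literals.
% Here k = 2:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

parent(ann, bob).
parent(bob, cid).

% ?- grandparent(ann, cid).
% true.
```

Differentiable ILP in the style of Evans & Grefenstette's dILP enumerates such clause templates, attaches weights to them, and represents deduced atoms as valuation vectors; stacking those over predicates and inference steps is, as far as I can tell, where the stacked tensors come from.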
(*) The paper I referenced is featured in this excellent video:
The Making of ChatGPT (35 Year History)
https://www.youtube.com/watch?v=OFS90-FX6pg