On 10/26/2024 8:22 AM, olcott wrote:
On 10/25/2024 11:07 PM, Richard Damon wrote:
On 10/25/24 6:11 PM, olcott wrote:
On 10/25/2024 10:45 AM, Richard Damon wrote:
On 10/25/24 9:56 AM, olcott wrote:
On 10/25/2024 7:27 AM, Richard Damon wrote:
On 10/24/24 8:56 PM, olcott wrote:
On 10/24/2024 6:23 PM, Richard Damon wrote:
On 10/24/24 9:36 AM, olcott wrote:
On 10/23/2024 9:48 PM, Richard Damon wrote:
On 10/23/24 9:51 PM, olcott wrote:
ChatGPT does completely understand this.
But, it is just a stupid idiot that has been taught to repeat what it has been told.
It is a brilliant genius that seems to infallibly deduce all of the subtle nuances of each of the consequences on the basis of a set of premises.
I guess you don't understand how "Large Language Models" work, do you.
It has NO actual intelligence, or ability to "deduce" nuances; it is just a massive pattern matching system.
All you are doing is proving how little you understand about what you are talking about.
Remember, at the bottom of the page is a WARNING that it can make mistakes. And feeding it LIES, like you do, is one easy way to do that.
There is much more to this than your superficial
understanding. Here is a glimpse:
https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
The bottom line is that ChatGPT made no error in its
evaluation of my work when this evaluation is based on
pure reasoning. It is only when my work is measured
against arbitrary dogma that cannot be justified with
pure reasoning that ChatGPT and I seem incorrect.
If we use your same approach to these things, we could say that
ZFC stupidly fails to have a glimmering of understanding of
Naive set theory. From your perspective ZFC is a damned liar.
The article says no such thing.
*large-language-models-amazing-but-nobody-knows-why*
They are much smarter and can figure out all kinds of
things. Their original designers have no idea how they
do this.
In fact, it comments about the problem of "overfitting", where the
processing gets the wrong answers because it overgeneralizes.
This is because the modeling process has no concept of actual
meaning, and thus of truth, only the patterns that it has seen.
AIs don't "Reason"; they pattern match and compare.
Note that the "arbitrary dogma" that you try to reject is the
RULES and DEFINITIONS of the system that you claim to be working in.
How about we stipulate that the system that I am
working in is termination analysis for the x86 language,
as my system software says in its own name: x86utm.
But it doesn't actually know
I said that the underlying formal mathematical system
of DDD/HHH <is> the x86 language.
Can't be.
That isn't a formal logic system.
Sure it is. A formal logic system is any system
that applies finite string transformation rules
to finite strings.
The simplest concrete example of such a system
transforms pairs of ASCII digits into their sum.
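A minimal sketch of such a digit-pair system in C (illustrative only, not part of x86utm; the function name "transform" is hypothetical):

#include <stdio.h>

/* One transformation rule: map a two-character input string of
   ASCII digits ("00".."99") to the decimal string of their sum. */
void transform(const char *in, char *out)
{
    int sum = (in[0] - '0') + (in[1] - '0');  /* decode both digits */
    sprintf(out, "%d", sum);                  /* encode the sum as a string */
}

int main(void)
{
    char out[4];
    transform("79", out);
    printf("79 -> %s\n", out);  /* prints: 79 -> 16 */
    return 0;
}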
Every computable function can be construed as applying
finite string transformation rules to an input finite
string and deriving an output finite string.
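As a sketch of that construal in C (again illustrative, with a hypothetical function name), the successor function on decimal numerals can be written purely as a string-to-string transformation, never treating the input as anything but a finite string of digit characters:

#include <stdio.h>
#include <string.h>

/* Successor on decimal numerals as a finite string transformation:
   e.g. input "1999" yields output "2000". */
void successor(const char *in, char *out)
{
    size_t n = strlen(in);
    memcpy(out, in, n + 1);           /* start from a copy of the input */
    int i = (int)n - 1;
    while (i >= 0 && out[i] == '9')   /* carry propagates across 9s */
        out[i--] = '0';
    if (i >= 0)
        out[i]++;                     /* bump the first non-9 digit */
    else {                            /* all 9s: prepend a leading 1 */
        memmove(out + 1, out, n + 1);
        out[0] = '1';
    }
}

int main(void)
{
    char out[32];                     /* caller supplies enough room */
    successor("1999", out);
    printf("1999 -> %s\n", out);      /* prints: 1999 -> 2000 */
    return 0;
}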