Somehow I shied away from implementing call/n for
my new Prolog system. I thought that since my new
Prolog system has only monomorphic caches, I would
never be able to replicate what I did for my old
Prolog system with arity-polymorphic caches. This
changed when I had the idea to dynamically add a
cache for the duration of a higher-order loop such
as maplist/n, foldl/n, etc. So this is the new
implementation of maplist/3:
% maplist(+Closure, +List, -List)
maplist(C, L, R) :-
   sys_callable_cacheable(C, D),
   sys_maplist(L, D, R).

% sys_maplist(+List, +Closure, -List)
sys_maplist([], _, []).
sys_maplist([X|L], C, [Y|R]) :-
   call(C, X, Y),
   sys_maplist(L, C, R).
It is similar to the SWI-Prolog implementation in
that it reorders the arguments for better
first-argument indexing. But the new thing is
sys_callable_cacheable/2, which prepares the closure
so that it can be called more efficiently. The
invocation of the closure is already quite fast,
since call/3 is implemented natively, but the cache
adds a notch more speed. Here are some measurements
I did:
/* SWI-Prolog 9.3.20 */
?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
maplist(succ,L,_),fail; true)), fail.
% 2,003,000 inferences, 0.078 CPU in 0.094 seconds
/* Scryer Prolog 0.9.4-350 */
?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
maplist(succ,L,_),fail; true)), fail.
% CPU time: 0.318s, 3_007_105 inferences
/* Dogelog Player 1.3.1 */
?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
maplist(succ,L,_),fail; true)), fail.
% Zeit 342 ms, GC 0 ms, Lips 11713646, Uhr 10.03.2025 09:18
/* Trealla Prolog 2.64.6-2 */
?- findall(X,between(1,1000,X),L), time((between(1,1000,_),
maplist(succ,L,_),fail; true)), fail.
% Time elapsed 1.694s, 15004003 Inferences, 8.855 MLips
Not surprisingly, SWI-Prolog is the fastest. What was
a little surprising is that Scryer Prolog is also
quite fast, possibly because they use maplist/n
heavily all over the place and came up with things
like '$fast_call' etc. in their call/n implementation.
Trealla Prolog is a bit disappointing at the moment.
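The same wrapping works for the other higher-order
loops mentioned above. A minimal sketch of foldl/4 in
this style, assuming the same sys_callable_cacheable/2
and a hypothetical sys_foldl/4 helper (this is not the
actual system code):

% foldl(+Closure, +List, +Acc0, -Acc)
foldl(C, L, A, B) :-
   sys_callable_cacheable(C, D),
   sys_foldl(L, D, A, B).

% sys_foldl(+List, +Closure, +Acc0, -Acc)
sys_foldl([], _, A, A).
sys_foldl([X|L], C, A, B) :-
   call(C, X, A, H),
   sys_foldl(L, C, H, B).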
What can we do with these new toys? We can implement
vector operations and matrix operations, and then
apply them, for example, to layered neural networks
by representing them as:
/**
 * A network is represented as [N0,M1,N1,...,Mn,Nn]
 * - where N0 is the input neuron vector
 * - where N1 .. Nn-1 are the hidden neuron vectors
 * - where Nn is the output neuron vector
 * - where M1 .. Mn are the transition weight matrices
 */
?- mknet([3,2], X).
X = [''(-1, 1, 1), ''(''(1, 1, -1), ''(1, 1, -1)), ''(-1, 1)].
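mknet/2 is used here but not shown. A hypothetical
sketch of how it could build the [N0,M1,N1,...,Mn,Nn]
shape with ''(...) compounds, guessing from the example
above that weights are initialised to ±1 (the helper
names mkvec/2, mkmat/3 and randweight/1 are mine, not
taken from the post):

% mknet(+LayerSizes, -Network)
mknet([N], [V]) :- !,
   mkvec(N, V).
mknet([N,M|L], [V,W|R]) :-
   mkvec(N, V),
   mkmat(M, N, W),
   mknet([M|L], R).

% a vector of N random weights as a ''(...) compound
mkvec(N, V) :-
   length(Xs, N),
   maplist(randweight, Xs),
   V =.. [''|Xs].

% a Rows x Cols matrix as a ''(...) compound of row vectors
mkmat(Rows, Cols, M) :-
   length(Rs, Rows),
   maplist(mkvec(Cols), Rs),
   M =.. [''|Rs].

% assumed ±1 initialisation
randweight(W) :-
   random(R),
   ( R < 0.5 -> W = -1 ; W = 1 ).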
The model evaluation at a data point
is straightforward:
eval([V], [V]) :- !.
eval([V,M,_|L], [V,M|R]) :- !,
   matmul(M, V, H),
   vecact(H, expit, J),
   eval([J|L], R).
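matmul/3, vecact/3 and expit are primitives of
library(linear) that are not shown in the post. As a
rough, list-based illustration of what they presumably
compute (the real library apparently stores vectors as
''(...) compounds, so take this only as a sketch of the
semantics):

% expit(+X, -Y): the logistic function
expit(X, Y) :-
   Y is 1/(1+exp(-X)).

% vecact(+Vector, +Unary, -Vector): apply a unary closure elementwise
vecact([], _, []).
vecact([X|Xs], F, [Y|Ys]) :-
   call(F, X, Y),
   vecact(Xs, F, Ys).

% matmul(+Matrix, +Vector, -Vector): dot each row with the vector
matmul([], _, []).
matmul([Row|Rows], V, [X|Xs]) :-
   dot(Row, V, X),
   matmul(Rows, V, Xs).

dot([], [], 0).
dot([A|As], [B|Bs], S) :-
   dot(As, Bs, S0),
   S is S0 + A*B.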
The backward calculation of the deltas
is straightforward:
back([V], U, [D]) :- !,
   vecact(U, V, sub, E),
   vecact(E, V, mulderiv, D).
back([V,M,W|L], U, [D2,M,D|R]) :-
   back([W|L], U, [D|R]),
   mattran(M, M2),
   matmul(M2, D, E),
   vecact(E, V, mulderiv, D2).
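The remaining primitives used by back/3 are also not
shown. Continuing the list-based sketch above, my
guesses would be sub/3 as target minus activation,
mulderiv/3 folding in the derivative of expit, i.e.
V*(1-V), and mattran/2 as matrix transposition; all of
this is assumed, not taken from library(linear):

% vecact(+V1, +V2, +Binary, -V3): combine two vectors elementwise
vecact([], [], _, []).
vecact([X|Xs], [Y|Ys], F, [Z|Zs]) :-
   call(F, X, Y, Z),
   vecact(Xs, Ys, F, Zs).

% difference of target and activation
sub(X, Y, Z) :-
   Z is X - Y.

% multiply an error term by the derivative of expit at activation V
mulderiv(E, V, D) :-
   D is E*V*(1-V).

% transpose of a list-of-rows matrix
mattran([], []) :- !.
mattran([[]|_], []) :- !.
mattran(M, [Col|Cols]) :-
   firsts_rests(M, Col, Rest),
   mattran(Rest, Cols).

firsts_rests([], [], []).
firsts_rests([[X|Xs]|Rs], [X|Col], [Xs|Rest]) :-
   firsts_rests(Rs, Col, Rest).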
You can use this to compute weight changes
and drive a gradient-descent algorithm.
Neural Networks
Rolf Pfeifer et al. - 2012
FANN - Fast Artificial Neural Network
Package for SWI-Prolog - 2018
https://www.swi-prolog.org/pack/list?p=plfann
A Gentle Introduction to torch.autograd
Next, we load an optimizer, in this case SGD with a
learning rate of 0.01 and momentum of 0.9. We register all
the parameters of the model in the optimizer.
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
OK, a progress report here. I currently have a
library(linear) in the works, which is only a few
lines of code but provides vectors and matrices.
One can use the library to define matrix
exponentiation:
matexp(M, 1, M) :- !.
matexp(M, N, R) :- N mod 2 =:= 0, !,
   I is N // 2,
   matexp(M, I, H),
   matmul(H, H, R).
matexp(M, N, R) :-
   I is N-1,
   matexp(M, I, H),
   matmul(H, M, R).
And then do fancy stuff like answering the question:
what are the last 8 digits of fibonacci(1000000)?
?- time((fib(1000000, _X), Y is _X mod 10^8)).
% Zeit 28 ms, GC 0 ms, Lips 88857, Uhr 16.03.2025 22:48
Y = 42546875
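fib/2 itself is not shown in the post. A self-contained,
hypothetical sketch in the same spirit, restricted to
2x2 matrices as lists of row lists (mat2mul/3 and
mat2exp/3 are my own stand-ins, not library(linear)),
reads fib(N) off [[1,1],[1,0]]^N:

% 2x2 matrix product, matrices as lists of row lists
mat2mul([[A,B],[C,D]], [[E,F],[G,H]], [[I,J],[K,L]]) :-
   I is A*E+B*G, J is A*F+B*H,
   K is C*E+D*G, L is C*F+D*H.

% exponentiation by squaring, same scheme as matexp/3 above (N >= 1)
mat2exp(M, 1, M) :- !.
mat2exp(M, N, R) :- N mod 2 =:= 0, !,
   I is N // 2,
   mat2exp(M, I, H),
   mat2mul(H, H, R).
mat2exp(M, N, R) :-
   I is N-1,
   mat2exp(M, I, H),
   mat2mul(H, M, R).

% fib(N) is the off-diagonal entry of [[1,1],[1,0]]^N
fib(N, F) :-
   mat2exp([[1,1],[1,0]], N, [[_,F],[_,_]]).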
The 28 ms execution time is not bad, since the modulo
was not integrated into matexp/3, so it computes the
full fibonacci(1000000) before taking the modulo. Not
sure whether JavaScript bigint is faster or slower
than GMP here.
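Folding the modulo into the exponentiation, as hinted
at above, could look like this. Again only a sketch,
reusing mat2mul/3 from the previous block and reducing
each entry mod B after every multiplication:

matexp_mod(M, 1, _, M) :- !.
matexp_mod(M, N, B, R) :- N mod 2 =:= 0, !,
   I is N // 2,
   matexp_mod(M, I, B, H),
   mat2mul(H, H, R0),
   mat2mod(R0, B, R).
matexp_mod(M, N, B, R) :-
   I is N-1,
   matexp_mod(M, I, B, H),
   mat2mul(H, M, R0),
   mat2mod(R0, B, R).

% reduce every entry of a 2x2 matrix mod B
mat2mod([[A,C],[D,E]], B, [[A2,C2],[D2,E2]]) :-
   A2 is A mod B, C2 is C mod B,
   D2 is D mod B, E2 is E mod B.

% last 8 digits of fibonacci(1000000) without the full bignum:
% ?- B is 10^8, matexp_mod([[1,1],[1,0]], 1000000, B, [[_,F],[_,_]]).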
So what can we do with library(linear) besides
implementing eval/3 and back/3? We can finally update
a neural network and do this iteratively, using a
very simple random pick to choose a training data
sample:
update([V], _, [V]) :- !.
update([V,M|L], [_,M3|R], [V,M4|S]) :-
   maplist(maplist(compose(add,mul(0.1))), M3, M, M4),
   update(L, R, S).
iter(0, _, N, N) :- !.
iter(I, Z, N, M) :-
   random(R), K is floor(R*4)+1,
   call_nth(data(Z, X, Y), K),
   eval(N, X, U),
   back(U, Y, V),
   update(U, V, W),
   J is I-1,
   iter(J, Z, W, M).
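The helpers add, mul(0.1) and compose(add, mul(0.1))
used in update/3 are not shown either. Guessing from
the call pattern, a sketch could be the following, so
that the combined closure computes X + 0.1*Y
elementwise; the exact definitions in library(linear)
may differ:

add(X, Y, Z) :-
   Z is X + Y.

mul(K, X, Y) :-
   Y is K*X.

% compose(F, G, X, Y, Z): apply G to Y, then F to X and the result
compose(F, G, X, Y, Z) :-
   call(G, Y, H),
   call(F, X, H, Z).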
Disclaimer: this is only a proof of concept. It most
likely doesn't have all the finesse of Python's
torch.autograd. Also, it uses a very simple update of
the weights via μ·Δwij with μ = 0.1. But you can
already use it to learn an AND or an XOR.
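For completeness, a hypothetical set of data/3 facts
for the AND case, guessing the argument layout
(dataset tag, input vector, target vector) from the
call_nth(data(Z, X, Y), K) call in iter/4 above; the
real training data is not shown in the post:

data(and, ''(0,0), ''(0)).
data(and, ''(0,1), ''(0)).
data(and, ''(1,0), ''(0)).
data(and, ''(1,1), ''(1)).

% hypothetical usage, training 1000 steps from a fresh [2,1] net:
% ?- mknet([2,1], N0), iter(1000, and, N0, N).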