Hello,
More of my philosophy about my next software projects and more of my thoughts..
I am a white arab from Morocco, and I think I am smart since I have also invented many scalable algorithms..
More of my philosophy about non-linear regression and more..
I think I am highly smart, and I have just finished the software implementation of the Levenberg–Marquardt algorithm and of the simplex algorithm for solving non-linear least squares problems. I have also just implemented PSO and a genetic algorithm that solve non-linear least squares problems, and I will soon implement a generalized way, using artificial intelligence, for the software to solve non-linear "multiple" regression. I have also noticed that in mathematics you have to take care of the variability of the y values in non-linear least squares problems when approximating. The Levenberg–Marquardt algorithm (LMA, or just LM) that I have just implemented, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The Levenberg–Marquardt algorithm is used in many software applications for solving generic curve-fitting problems; it has been found to be an efficient, fast and robust method with good global convergence properties, and for these reasons it has been incorporated into many good commercial packages performing non-linear regression. But my way of implementing non-linear "multiple" regression in the software will be much more powerful than the Levenberg–Marquardt algorithm, and of course I will share with you many parts of my software project, so stay tuned!
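To make the Levenberg–Marquardt idea concrete, here is a minimal, self-contained sketch in Python (plain lists, a forward-difference Jacobian, and a hypothetical `lm_fit` helper; this is a generic textbook LM, not my Delphi implementation):

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lm_fit(model, xs, ys, p0, iters=200, lam=1e-3, h=1e-7):
    # Levenberg-Marquardt: damped Gauss-Newton steps on the residuals.
    p = list(p0)
    residuals = lambda q: [model(x, q) - y for x, y in zip(xs, ys)]
    cost = lambda r: sum(v * v for v in r)
    r = residuals(p)
    c = cost(r)
    npar = len(p)
    for _ in range(iters):
        # Forward-difference Jacobian J[i][j] = d r_i / d p_j.
        J = []
        for x in xs:
            base = model(x, p)
            row = []
            for j in range(npar):
                q = p[:]; q[j] += h
                row.append((model(x, q) - base) / h)
            J.append(row)
        JTJ = [[sum(J[i][a] * J[i][b] for i in range(len(xs))) for b in range(npar)] for a in range(npar)]
        JTr = [sum(J[i][a] * r[i] for i in range(len(xs))) for a in range(npar)]
        # Damped normal equations: (J^T J + lam * diag(J^T J)) delta = -J^T r.
        A = [[JTJ[a][b] + (lam * JTJ[a][a] if a == b else 0.0) for b in range(npar)] for a in range(npar)]
        delta = solve(A, [-g for g in JTr])
        trial = [pi + di for pi, di in zip(p, delta)]
        rt = residuals(trial)
        ct = cost(rt)
        if ct < c:
            p, r, c, lam = trial, rt, ct, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                                # reject: move toward gradient descent
    return p

# Fit y = a * exp(b * x) to noiseless synthetic data (true a = 2, b = 0.5).
xs = [i * 0.25 for i in range(8)]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
a, b = lm_fit(lambda x, p: p[0] * math.exp(p[1] * x), xs, ys, [1.0, 0.0])
```

The damping factor `lam` is what distinguishes LM from plain Gauss-Newton: it is decreased when a step improves the fit and increased when it does not.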
I think I am highly smart, and I have just talked to you above about my new software project, so read below about it. My next software project is a professional software for mathematics and operational research: a sophisticated solver of linear and non-linear programming problems using artificial intelligence. I am currently thinking about how to implement the sensitivity analysis part, and of course my software will avoid premature convergence, and it will also be much more scalable by using multicores so that artificial intelligence can search for the global optimum much faster.
More of my philosophy about artificial intelligence and about non-linear regression..
And I will talk to you more about my interesting software project for mathematics. My new software project uses artificial intelligence to implement a generalized way of solving non-linear "multiple" regression, and it is much more powerful than the Levenberg–Marquardt algorithm, since I am implementing a smart algorithm, using artificial intelligence, that avoids premature convergence, which is one of the most important things. It will also be much more scalable by using multicores so that artificial intelligence can search for the global optimum much faster. I am doing it this way to be really professional, and I will give you a tutorial that explains my algorithms that use artificial intelligence so that you can learn from them.
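As one concrete example of the kind of population-based technique that helps avoid premature convergence, here is a minimal particle swarm optimization (PSO) sketch minimizing a least-squares objective. This is a generic textbook PSO, not my implementation, and the objective and parameters are illustrative assumptions:

```python
import random

def pso(f, dim, lo, hi, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=42):
    # Standard global-best particle swarm optimization.
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                      # personal bests
    Pf = [f(x) for x in X]
    g = min(range(particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Velocity: inertia + pull toward personal and global bests.
                V[i][d] = (w * V[i][d]
                           + c1 * rnd.random() * (P[i][d] - X[i][d])
                           + c2 * rnd.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx      # update personal best
                if fx < Gf:
                    G, Gf = X[i][:], fx        # update global best
    return G, Gf

# Least-squares objective: residuals of y = a*x + b against sample points.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # exact line y = 2x + 1
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
best, cost = pso(sse, dim=2, lo=-10.0, hi=10.0)
```

Because many particles explore the search space simultaneously, the swarm is less likely to stall in a local minimum than a single-trajectory method, which is the premature-convergence concern mentioned above.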
More of my philosophy about Parallel "Stable" Sort and more of my thoughts..
I think I am smart, and I have just looked at the following software project by Arthur V. Ratz (read about him here:
https://www.codeproject.com/Articles/arthurratz
):
How To Implement The Parallel "Stable" Sort Using Intel® MPI Library And Deploy It To A Multi-Node Computational Cluster
https://www.codeproject.com/Articles/5267405/How-To-Implement-The-Parallel-Stable-Sort-Using-In
And here are the steps of the above software project:
1. Generate an array of N-objects to be sorted
2. Split up an entire array into the k-subsets of objects
3. Sort each subset of objects in parallel
4. Merge all k-subsets into the ordered array of objects
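The four steps above can be sketched in a few lines (Python here rather than MPI, just to show the split/sort/merge structure; `sorted()` is stable, and `heapq.merge` yields equal keys from the earlier chunk first, so the overall result is a stable sort):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor
from operator import itemgetter

def parallel_stable_sort(items, k=4, key=None):
    # 1. the array of N objects is `items`
    # 2. split the array into k contiguous subsets
    step = (len(items) + k - 1) // k
    chunks = [items[i:i + step] for i in range(0, len(items), step)]
    # 3. sort each subset in parallel (sorted() is a stable sort)
    with ThreadPoolExecutor(max_workers=k) as pool:
        sorted_chunks = list(pool.map(lambda c: sorted(c, key=key), chunks))
    # 4. k-way merge of the ordered subsets (heapq.merge keeps ties in chunk order)
    return list(heapq.merge(*sorted_chunks, key=key))

# Objects are (sort key, original position); sort by key only.
data = [(i % 5, i) for i in range(100)]
result = parallel_stable_sort(data, k=4, key=itemgetter(0))
```

Note that in this sketch step 4 is a single sequential k-way merge.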
So I think that step 4 above is not good, since you have to look at my following open-source parallel software project, my own invention, which is much better and which parallelizes step 4 above in an efficient manner. You can read about my software project here (I will soon port it to MPI so that it runs on a multi-node computational cluster, and I will soon enhance this invention of mine so that it is fully scalable), so read about it carefully here:
My Parallel Sort Library that is more efficient
https://sites.google.com/site/scalable68/parallel-sort-library-that-is-more-efficient
More of my philosophy about Loop parallelization and more of my thoughts..
I invite you to read the following interesting article from
Arthur V. Ratz, a Ukrainian who graduated from L'viv State Polytechnic University with a master's degree in computer science and information technology, and who has 20 years of experience in his field:
https://www.codeproject.com/Articles/1184743/Parallel-Scalable-Burrows-Wheeler-Transformation-B
And notice in the above interesting article that the important parts in parallelism are:
1- Tight Loop Parallelization
2- Nested Parallelism
3- Collapsing Nested Loops
And it is related to my following important thoughts about Loop parallelization and more, so read them carefully:
I have just read the following web page:
Parallelization: Harder than it looks
https://www.jayconrod.com/posts/29/parallelization--harder-than-it-looks
Notice that MIT's Cilk is using a divide and conquer approach to calculate the grainsize for the Parallel For, here it is:
-------
void run_loop(int first, int last) {
  if (last - first < grainsize) {
    for (int i = first; i < last; ++i) LOOP_BODY;
  } else {
    int mid = first + (last - first) / 2;
    cilk_spawn run_loop(first, mid);
    run_loop(mid, last);
  }
}
-------
But as you will notice if I do a simulation of it by running my following Delphi program:
----------------------
program test;

var c,d:uint64;

begin
  c:=high(uint64);
  d:=0;
  repeat
    c:=c div 2;
    d:=d+1;
  until c<=1000;
  writeln(c);
  writeln(d);
end.
-------
So as you can see, for a grainsize of 1000 the above Delphi program prints 511, which means that Cilk's divide-and-conquer approach to calculating the grainsize for the Parallel For is "not" good.

This is why you have to take a look at my Threadpool engine with priorities that scales very well; it is really powerful because it scales very well on multicore and NUMA systems, and it also comes with a ParallelFor() that scales very well on multicore and NUMA systems. Take a look at its source code to notice that I am calculating the grainsize for ParallelFor() much more precisely and correctly than Cilk's divide-and-conquer approach, which is "not" good.
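I won't reproduce my library's exact formula here, but a common alternative heuristic (a generic sketch, not the code from my Threadpool; the chunk factor is an illustrative assumption) divides the iteration space by a multiple of the worker count, so every worker gets several chunks for load balancing:

```python
def compute_grainsize(n, workers, chunks_per_worker=4):
    # Give each worker several chunks so faster workers can pick up
    # extra work; the grainsize never drops below one iteration.
    return max(1, n // (workers * chunks_per_worker))

# 100000 iterations on 8 workers -> chunks of 3125 iterations each.
g = compute_grainsize(100000, 8)
```

Unlike blind halving, this ties the chunk size directly to the iteration count and the number of workers.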
And today I will talk about data dependency and parallel loops..

For a loop to be parallelized, every iteration must be independent of the others. One way to check this is to execute the loop in the direction of the incremented index and in the direction of the decremented index and verify that the results are the same. A data dependency happens if memory is modified: a loop has a data dependency if an iteration writes a variable that is read or written in another iteration of the loop. There is no data dependency if only one iteration reads or writes a variable, or if many iterations read the same variable without modifying it. So these are the "general" rules.

Now it remains to know how to construct the parallel for loop when there is an induction variable or a reduction operation, and I will give an example of each:
If we have the following (the code looks like Algol or modern Object Pascal):

IND:=0;
For I:=1 to N
Do
Begin
  IND := IND + I;
  A[I]:=B[IND];
End;
So as you are noticing, since IND is an induction variable, to parallelize the loop you have to replace it with its closed form as follows:
For I:=1 to N
Do
Begin
  IND:=(I*(I+1)) div 2;
  A[I]:=B[IND];
End;
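The closed form IND = I*(I+1)/2 is the triangular-number solution of the recurrence IND := IND + I, so every iteration can compute its own IND independently. A quick self-contained check (plain Python, just to confirm the two forms agree):

```python
def serial(n):
    # The recurrence form: IND := IND + I, evaluated for I = 1..n.
    ind, out = 0, []
    for i in range(1, n + 1):
        ind += i
        out.append(ind)
    return out

def closed_form(n):
    # The parallel-friendly closed form, usable from any iteration independently.
    return [i * (i + 1) // 2 for i in range(1, n + 1)]

agree = serial(1000) == closed_form(1000)
```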
Now for the reduction operation example: you will notice that my invention, my Threadpool with priorities that scales very well (read about it below), supports a Parallel For that scales very well and that supports a "grainsize". The grainsize can be used in the ParallelFor() with a reduction operation, and my following powerful scalable Adder is also used in this scenario; here it is:

https://sites.google.com/site/scalable68/scalable-adder-for-delphi-and-freepascal
So here is the example with a reduction operation in modern Object Pascal:
TOTAL:=0.0;
For I := 1 to N
Do
Begin
  TOTAL:=TOTAL+A[I];
End;
So with my powerful scalable Adder and with my powerful invention, my ParallelFor() that scales very well, you will parallelize the above like this:
procedure test1(j:integer;ptr:pointer);
begin
t.add(A[J]); // "t" is my scalable Adder object
end;
// Let's suppose that N is 100000
// In the following, 10000 is the grainsize
obj.ParallelFor(1,N,test1,10000,pointer(0));
TOTAL:=T.get();
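The same chunked reduction can be sketched in a generic way (Python here, standing in for the Delphi code; the per-chunk partial sums play the role that the scalable Adder plays above):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(a, grainsize):
    # Split the index range into grainsize-sized chunks, sum each chunk
    # in parallel, then combine the partial sums (the reduction step).
    chunks = [a[i:i + grainsize] for i in range(0, len(a), grainsize)]
    with ThreadPoolExecutor() as pool:
        partials = pool.map(sum, chunks)
    return sum(partials)

A = [0.5] * 100000
total = parallel_sum(A, grainsize=10000)   # 10 chunks of 10000 elements
```

The grainsize here plays the same role as in the ParallelFor() call above: it sets how many iterations each parallel task handles.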
And read the following to understand how to use the grainsize of my ParallelFor() that scales well:
About my ParallelFor() that scales very well that uses my efficient Threadpool that scales very well:
With ParallelFor() you have to:

1- Ensure Sufficient Work

Each iteration of a loop involves a certain amount of work, so you have to ensure a sufficient amount of work; read below about the "grainsize" that I have implemented.
2- In OpenMP we have that:
Static and Dynamic Scheduling
One basic characteristic of a loop schedule is whether it is static or dynamic:
• In a static schedule, the choice of which thread performs a particular iteration is purely a function of the iteration number and number of
threads. Each thread performs only the iterations assigned to it at the beginning of the loop.
• In a dynamic schedule, the assignment of iterations to threads can
vary at runtime from one execution to another. Not all iterations are
assigned to threads at the start of the loop. Instead, each thread
requests more iterations after it has completed the work already
assigned to it.
But with my ParallelFor() that scales very well, since it is using my efficient Threadpool that scales very well, it uses round-robin scheduling and also work stealing, so I think that this is sufficient.
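The static/dynamic distinction above can be made concrete with a small sketch (generic Python, not OpenMP and not my Threadpool): static assignment is a pure function of the iteration number, while dynamic assignment hands out iterations from a shared queue as threads finish:

```python
import queue
import threading

def static_assignment(n_iters, n_threads):
    # Static schedule: thread t owns iterations t, t+n_threads, t+2*n_threads, ...
    return {t: list(range(t, n_iters, n_threads)) for t in range(n_threads)}

def dynamic_assignment(n_iters, n_threads, chunk=4):
    # Dynamic schedule: threads repeatedly grab the next chunk from a shared queue.
    q = queue.Queue()
    for start in range(0, n_iters, chunk):
        q.put(range(start, min(start + chunk, n_iters)))
    done = {t: [] for t in range(n_threads)}
    def worker(t):
        while True:
            try:
                block = q.get_nowait()
            except queue.Empty:
                return
            done[t].extend(block)
    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for th in threads: th.start()
    for th in threads: th.join()
    return done

static = static_assignment(16, 4)
dynamic = dynamic_assignment(16, 4)
```

In the static case the mapping is fixed before the loop starts; in the dynamic case which thread gets which chunk depends on runtime timing, but every iteration is still processed exactly once.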
Read the rest:
My Threadpool engine with priorities that scales very well is really powerful because it scales very well on multicore and NUMA systems, also it comes with a ParallelFor() that scales very well on multicores and NUMA systems.
You can download it from:
https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well
Here is the explanation of my ParallelFor() that scales very well:
I have also implemented a ParallelFor() that scales very well, here is the method:
procedure ParallelFor(nMin, nMax:integer;aProc: TParallelProc;GrainSize:integer=1;Ptr:pointer=nil;pmode:TParallelMode=pmBlocking;Priority:TPriorities=NORMAL_PRIORITY);
nMin and nMax parameters of the ParallelFor() are the minimum and maximum integer values of the variable of the ParallelFor() loop, aProc parameter of ParallelFor() is the procedure to call, and GrainSize integer parameter of ParallelFor() is the
following:
The grainsize sets a minimum threshold for parallelization.
A rule of thumb is that grainsize iterations should take at least 100,000 clock cycles to execute.
For example, if a single iteration takes 100 clocks, then the grainsize needs to be at least 1000 iterations. When in doubt, do the following experiment:
1- Set the grainsize parameter higher than necessary. The grainsize is specified in units of loop iterations.
If you have no idea of how many clock cycles an iteration might take, start with grainsize=100,000.
The rationale is that each iteration normally requires at least one clock per iteration. In most cases, step 3 will guide you to a much smaller value.
2- Run your algorithm.
3- Iteratively halve the grainsize parameter and see how much the algorithm slows down or speeds up as the value decreases.
A drawback of setting a grainsize too high is that it can reduce parallelism. For example, if the grainsize is 1000 and the loop has 2000 iterations, the ParallelFor() method distributes the loop across only two processors, even if more are available.
And you can pass a parameter in Ptr as a pointer to ParallelFor(), and you can set the pmode parameter to pmBlocking so that ParallelFor() is blocking, or to pmNonBlocking so that it is non-blocking; the Priority parameter is the priority of ParallelFor(). Look inside the test.pas example to see how to use it.
And now more of my philosophy about cycling and the simplex method and more of my thoughts..
Please look below, I have just cleaned up the code of the simplex algorithm for Delphi and Freepascal some more:
More of my philosophy about the Simplex algorithm and method
and more of my thoughts..
I think I am highly smart, and I say that the Linear Programming Problem is by far the most widely used optimization model. Its impact on economic and government modeling is immense. The Simplex Method for solving the Linear Programming (LP) Problem, due to George Dantzig, has been an extremely efficient computational tool for almost four decades. In general, the simplex algorithm is extremely powerful; it usually takes 2m to 3m iterations at most (here, m denotes the number of equality constraints), and it converges in expected polynomial time for certain distributions of random inputs. That is why I have come up with the Simplex algorithm and method below for the Freepascal and Delphi compilers, to solve the Linear Programming (LP) Problem. I have enhanced it from a past release of a simplex program so that it works with both Delphi and Freepascal and so that it uses dynamic arrays in order to "scale". I will also say that cycling, which prevents convergence of the simplex algorithm and method to the optimal solution, is a rare phenomenon; in fact, constructing a Linear Programming (LP) Problem on which the simplex algorithm and method will cycle is difficult, and that is why I have not implemented a countermeasure against cycling, such as the classic perturbation method or the lexicographic method. So I hope that you will be happy with my work below. And here is my Freepascal and Delphi unit with a test program in Delphi and Freepascal that works perfectly:
First, here is the source code of the unit for Freepascal and Delphi that I have enhanced (you can download the Freepascal 64-bit compiler from here:
https://www.freepascal.org/ ):
--------------------------------------------------------------------
{***************************************************************
* LINEAR PROGRAMMING: THE SIMPLEX METHOD *
* ------------------------------------------------------------ *
* SAMPLE RUN: *
* Maximize z = x1 + x2 + 3x3 -0.5x4 with conditions: *
* x1 + 2x3 <= 740 *
* 2x2 - 7x4 <= 0 *
* x2 - x3 + 2x4 >= 0.5 *
* x1 + x2 + x3 +x4 = 9 *
* and all x's >=0. *
* *
* Number of variables in E.F.: 4 *
* Number of <= inequalities..: 2 *
* Number of >= inequalities..: 1 *
* Number of = equalities.....: 1 *
* Input Economic Function: *
* Coefficient # 1: 1 *
* Coefficient # 2: 1 *
* Coefficient # 3: 3 *
* Coefficient # 4: -0.5 *
* Constant term..: 0 *
* Input constraint # 1: *
* Coefficient # 1: 1 *
* Coefficient # 2: 0 *
* Coefficient # 3: 2 *
* Coefficient # 4: 0 *
* Constant term..: 740 *
* Input constraint # 2: *
* Coefficient # 1: 0 *
* Coefficient # 2: 2 *
* Coefficient # 3: 0 *
* Coefficient # 4: -7 *
* Constant term..: 0 *
* Input constraint # 3: *
* Coefficient # 1: 0 *
* Coefficient # 2: 1 *
* Coefficient # 3: -1 *
* Coefficient # 4: 2 *
* Constant term..: 0.5 *
* Input constraint # 4: *
* Coefficient # 1: 1 *
* Coefficient # 2: 1 *
* Coefficient # 3: 1 *
* Coefficient # 4: 1 *
* Constant term..: 9 *
* *
* Input Table: *
* 0.00 1.00 1.00 3.00 -0.50 *
* 740.00 -1.00 0.00 -2.00 0.00 *
* 0.00 0.00 -2.00 0.00 7.00 *
* 0.50 0.00 -1.00 1.00 -2.00 *
* 9.00 -1.00 -1.00 -1.00 -1.00 *
* *
* Maximum of E.F. = 17.02500 *
* X1 = 0.000000 *
* X2 = 3.325000 *
* X3 = 4.725000 *
* X4 = 0.950000 *
* *
* ------------------------------------------------------------ *
* Reference: "Numerical Recipes By W.H. Press, B. P. Flannery, *
* S.A. Teukolsky and W.T. Vetterling, Cambridge *
* University Press, 1986" [BIBLI 08]. *
* *
* Past release 1.0 By J-P Moreau, Paris *
* New release by Amine Moulay Ramdane *
* *
* Note: This unit was enhanced by Amine Moulay Ramdane *
* so that to work with both Delphi and Freepascal *
* and so that to use Dynamic arrays so that to scale. * ***************************************************************}
unit TSimplex;
//Uses Crt;
interface
Type
MAT = array of array of double;
IVEC = array of integer;
Var
A: MAT;
IPOSV, IZROV: IVEC;
ICASE,N,M,M1,M2,M3 {i,j,}: Integer;
R: REAL;
Procedure simplx(var a:MAT; m, n, m1, m2, m3: Integer; var icase:Integer; var izrov, iposv:IVEC);
implementation
//Label 3;
Procedure simp1(var a:MAT; mm:integer; ll:IVEC; nll, iabf: integer; var kp: integer;
var bmax:double); Forward;
Procedure simp2(var a:MAT; m, n:integer; l2:IVEC; nl2:integer; var ip:integer;
kp:integer; var q1:double); Forward;
Procedure simp3(var a:MAT; i1,k1,ip,kp:integer); Forward;
Procedure simplx(var a:MAT; m, n, m1, m2, m3: Integer; var icase:Integer; var izrov, iposv:IVEC);
{-----------------------------------------------------------------------------------------
USES simp1,simp2,simp3.
Simplex method for linear programming. Input parameters a, m, n, m1, m2, and m3, and
output parameters a, icase, izrov, and iposv are described above (see reference).
Constants: MMAX is the maximum number of constraints expected; NMAX is the maximum number
of variables expected; EPS is the absolute precision, which should be adjusted to the
scale of your variables.
-----------------------------------------------------------------------------------------}
Label 1,2,10,20,30, return;
Var
i,ip,ir,is1,k,kh,kp,m12,nl1,nl2: Integer;
l1, l2, l3: IVEC;
bmax,q1,EPS: double;
Begin
setlength(l1,n+1);
setlength(l2,m+1);
setlength(l3,m+1);
EPS:=1e-6;
if m <> m1+m2+m3 then
begin
writeln(' Bad input constraint counts in simplx.');
goto return
end;
nl1:=n;
for k:=1 to n do
begin
l1[k]:=k; {Initialize index list of columns admissible for exchange.}
izrov[k]:=k {Initially make all variables right-hand.}
end;
nl2:=m;
for i:=1 to m do
begin
if a[i+1,1] < 0.0 then
begin
writeln(' Bad input tableau in simplx, Constants bi must be nonnegative.');
goto return
end;
l2[i]:=i;
iposv[i]:=n+i
{-------------------------------------------------------------------------------------------------
Initial left-hand variables. m1 type constraints are represented by having their slack variable
initially left-hand, with no artificial variable. m2 type constraints have their slack
variable initially left-hand, with a minus sign, and their artificial variable handled implicitly
during their first exchange. m3 type constraints have their artificial variable initially
left-hand.
-------------------------------------------------------------------------------------------------}
end;
for i:=1 to m2 do l3[i]:=1;
ir:=0;
if m2+m3 = 0 then goto 30; {The origin is a feasible starting solution. Go to phase two.}
ir:=1;
for k:=1 to n+1 do {Compute the auxiliary objective function.}
begin
q1:=0.0;
for i:=m1+1 to m do q1 := q1 + a[i+1,k];
a[m+2,k]:=-q1
end;
10: simp1(a,m+1,l1,nl1,0,kp,bmax); {Find max. coeff. of auxiliary objective fn}
if(bmax <= EPS) and (a[m+2,1] < -EPS) then
begin
icase:=-1; {Auxiliary objective function is still negative and can’t be improved,}
goto return {hence no feasible solution exists.}
end
else if (bmax <= EPS) and (a[m+2,1] <= EPS) then
{ Auxiliary objective function is zero and can’t be improved; we have a feasible starting vector.
Clean out the artificial variables corresponding to any remaining equality constraints by
goto 1’s and then move on to phase two by goto 30. }
begin
m12:=m1+m2+1;
if m12 <= m then
for ip:=m12 to m do
if iposv[ip] = ip+n then {Found an artificial variable for an equality constraint.}
begin
simp1(a,ip,l1,nl1,1,kp,bmax);
if bmax > EPS then goto 1; {Exchange with column corresponding to maximum}
end; {pivot element in row.}
ir:=0;
m12:=m12-1;
if m1+1 > m12 then goto 30;
for i:=m1+1 to m1+m2 do {Change sign of row for any m2 constraints}
if l3[i-m1] = 1 then {still present from the initial basis.}
for k:=1 to n+1 do
a[i+1,k] := -1.0 * a[i+1,k];
goto 30 {Go to phase two.}
end;
simp2(a,m,n,l2,nl2,ip,kp,q1); {Locate a pivot element (phase one). }
if ip = 0 then {Maximum of auxiliary objective function is}
begin {unbounded, so no feasible solution exists.}
icase:=-1;
goto return
end;
1: simp3(a,m+1,n,ip,kp);
{ Exchange a left- and a right-hand variable (phase one), then update lists.}
if iposv[ip] >= n+m1+m2+1 then {Exchanged out an artificial variable for an}
begin {equality constraint. Make sure it stays
out by removing it from the l1 list. }
for k:=1 to nl1 do
if l1[k] = kp then goto 2;
2: nl1:=nl1-1;
for is1:=k to nl1 do l1[is1]:=l1[is1+1];
end
else
begin
if iposv[ip] < n+m1+1 then goto 20;
kh:=iposv[ip]-m1-n;
if l3[kh] = 0 then goto 20; {Exchanged out an m2 type constraint.}
l3[kh]:=0 {If it's the first time, correct the pivot column for the
minus sign and the implicit artificial variable. }
end;
a[m+2,kp+1] := a[m+2,kp+1] + 1.0;
for i:=1 to m+2 do a[i,kp+1] := -1.0 * a[i,kp+1];
20: is1:=izrov[kp]; {Update lists of left- and right-hand variables.}
izrov[kp]:=iposv[ip];
iposv[ip]:=is1;
if ir <> 0 then goto 10; {if still in phase one, go back to 10.
End of phase one code for finding an initial feasible solution. Now, in phase two, optimize it.}
30: simp1(a,0,l1,nl1,0,kp,bmax); {Test the z-row for doneness.}
if bmax <= EPS then {Done. Solution found. Return with the good news.}
begin
icase:=0;
goto return
end;
simp2(a,m,n,l2,nl2,ip,kp,q1); {Locate a pivot element (phase two).}
if ip = 0 then {Objective function is unbounded. Report and return.}
begin
icase:=1;
goto return
end;
simp3(a,m,n,ip,kp); {Exchange a left- and a right-hand variable (phase two),}
goto 20; {update lists of left- and right-hand variables and
return for another iteration.}
return: End;
{The preceding routine makes use of the following utility subroutines: }
Procedure simp1(var a:MAT; mm:integer; ll:IVEC; nll, iabf: integer; var kp: integer;
var bmax:double);
{ Determines the maximum of those elements whose index is contained in the supplied list
ll, either with or without taking the absolute value, as flagged by iabf. }
Label return;
Var
k: integer;
test: double;
Begin
kp:=ll[1];
bmax:=a[mm+1,kp+1];
if nll < 2 then goto return;
for k:=2 to nll do
begin
if iabf = 0 then
test:=a[mm+1,ll[k]+1]-bmax
else
test:=abs(a[mm+1,ll[k]+1])-abs(bmax);
if test > 0.0 then
begin
bmax:=a[mm+1,ll[k]+1];
kp:=ll[k]
end
end;
return: End;
Procedure simp2(var a:MAT; m, n:integer; l2:IVEC; nl2:integer; var ip:integer;
kp:integer; var q1:double);
Label 2,6, return;
Var EPS: double;
i,ii,k: integer;
q,q0,qp: double;
Begin
EPS:=1e-6;
{ Locate a pivot element, taking degeneracy into account.}
ip:=0;
if nl2 < 1 then goto return;
for i:=1 to nl2 do
if a[i+1,kp+1] < -EPS then goto 2;
goto return; {No possible pivots. Return with message.}
2: q1:=-a[l2[i]+1,1]/a[l2[i]+1,kp+1];
ip:=l2[i];
if i+1 > nl2 then goto return;
for i:=i+1 to nl2 do
begin
ii:=l2[i];
if a[ii+1,kp+1] < -EPS then
begin
q:=-a[ii+1,1]/a[ii+1,kp+1];
if q < q1 then
begin
ip:=ii;
q1:=q
end
else if q = q1 then {We have a degeneracy.}
begin
for k:=1 to n do
begin
qp:=-a[ip+1,k+1]/a[ip+1,kp+1];
q0:=-a[ii+1,k+1]/a[ii+1,kp+1];
if q0 <> qp then goto 6
end;
6: if q0 < qp then ip:=ii
end
end
end;
return: End;
Procedure simp3(var a:MAT; i1,k1,ip,kp:integer);
{ Matrix operations to exchange a left-hand and right-hand variable (see text).}
Var
ii,kk:integer;
piv:double;
Begin
piv:=1.0/a[ip+1,kp+1];
if i1 >= 0 then
for ii:=1 to i1+1 do
begin
if ii-1 <> ip then
begin
a[ii,kp+1] := a[ii,kp+1] * piv;
for kk:=1 to k1+1 do
if kk-1 <> kp then
a[ii,kk] := a[ii,kk] - a[ip+1,kk]*a[ii,kp+1]
end
end;
for kk:=1 to k1+1 do
if kk-1 <> kp then a[ip+1,kk] :=-a[ip+1,kk]*piv;
a[ip+1,kp+1]:=piv
End;
end.
{ end of file tsimplex.pas}
-----------------------------------------------------------------
And here is the test program that uses the above unit:
-------------------------------------------------------------------
Program test_simplex;
uses TSimplex;
Label 3;
var i,j:integer;
{main program}
BEGIN
writeln;
write(' Number of variables in E.F.: '); readln(TSimplex.N);
write(' Number of <= inequalities..: '); readln(TSimplex.M1);
write(' Number of >= inequalities..: '); readln(TSimplex.M2);
write(' Number of = equalities.....: '); readln(TSimplex.M3);
TSimplex.M:=TSimplex.M1+TSimplex.M2+TSimplex.M3; {Total number of constraints}
setlength(TSimplex.A,TSimplex.m+3,TSimplex.n+2);
setlength(IPOSV,TSimplex.m+1);
setlength(IZROV,TSimplex.n+1);
for i:=1 to TSimplex.M+2 do
for j:=1 to TSimplex.N+1 do
TSimplex.A[i,j]:=0.0;
writeln(' Input Economic Function:');
for i:=2 to TSimplex.N+1 do
begin
write(' Coefficient #',i-1,': ');
readln(TSimplex.A[1,i])
end;
write(' Constant term : ');
readln(TSimplex.A[1,1]);
{ input constraints }
for i:=1 to TSimplex.M do
begin
writeln(' Input constraint #',i,':');
for j:=2 to TSimplex.N+1 do
begin
write(' Coefficient #',j-1,': ');
readln(TSimplex.R);
TSimplex.A[i+1,j] := -TSimplex.R
end;
write(' Constant term : ');
readln(TSimplex.A[i+1,1])
end;
writeln;
writeln(' Input Table:');
for i:=1 to TSimplex.M+1 do
begin
for j:=1 to TSimplex.N+1 do write(TSimplex.A[i,j]:8:2);
writeln
end;
simplx(TSimplex.A,TSimplex.M,TSimplex.N,TSimplex.M1,TSimplex.M2,TSimplex.M3,TSimplex.ICASE,TSimplex.IZROV,TSimplex.IPOSV);
if TSimplex.ICASE=0 then {result ok.}
begin
writeln;
writeln(' Maximum of E.F. = ', TSimplex.A[1,1]:12:6);
for i:=1 to TSimplex.N do
begin
for j:=1 to TSimplex.M do
if TSimplex.IPOSV[j] = i then
begin
writeln(' X',i,' = ', TSimplex.A[j+1,1]:12:6);
goto 3;
end;
writeln(' X',i,' = ', 0.0:12:6);
3: end
end
else
writeln(' No solution (error code = ', TSimplex.ICASE,').');
writeln;
END.
--------------------------------------------------------------
Thank you,
Amine Moulay Ramdane.
--- SoupGate-Win32 v1.05
* Origin: fsxNet Usenet Gateway (21:1/5)