Re: on distinguishing memoization and dynamic programming

    From Tim Rentsch@21:1/5 to Julieta Shem on Fri Jan 12 16:41:52 2024
    Julieta Shem <jshem@yaxenu.org> writes:

    > I was trying to distinguish memoization from dynamic programming
    > --- in a technical way --- and I failed.  Can you write something
    > like a mathematical definition of each one?

    Memoization is a stand-alone technique for improving the performance
    of pure functions, which is to say functions whose result depends
    only on their inputs.  The idea is that when a result is calculated
    for a particular set of inputs, the result is saved so that it
    needn't be calculated again for the same set of inputs.  If lookup
    is cheaper than recomputing, saving the result is a win (up to some
    amount of saved results, obviously).  By analogy, memoization is
    more or less a software cache at the level of individual functions.
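
    As a concrete sketch (my example, not from the original post),
    here is a minimal memoizing wrapper in Python, caching results in
    a dictionary keyed by the argument tuple:

        def memoize(f):
            cache = {}                      # maps argument tuples to results
            def wrapper(*args):
                if args not in cache:       # cheap lookup first ...
                    cache[args] = f(*args)  # ... compute and save on a miss
                return cache[args]
            return wrapper

    (Python's standard library provides functools.lru_cache for the
    same purpose; the hand-rolled version just shows the mechanism.)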

    Note that there needn't be any particular relationship between the
    inputs and the results.  A function might be recursive, like the
    Fibonacci function, or the different results might be completely
    independent, with no recursion at play.  Memoization doesn't care --
    the only things that matter are that the output is a pure function
    of the inputs, and that lookup is cheaper than recalculation.  When
    those two conditions hold, memoization can improve performance (as
    before, depending on how much storage is needed to accomplish that).
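
    For instance, wrapping a naive recursive Fibonacci with the
    memoize sketch above turns an exponential-time computation into a
    linear-time one, and the function itself needs no knowledge of how
    its results relate to one another:

        @memoize
        def fib(n):
            # Naive recursive definition; memoization eliminates the
            # exponential blowup from repeated subcalls, so each
            # fib(k) is computed only once.
            return n if n < 2 else fib(n - 1) + fib(n - 2)

        print(fib(90))  # 2880067194370816120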

    Dynamic programming is used when there is a single large-scale
    problem whose solution requires solving a significant number of
    smaller-scale problems, and there are dependencies among the
    smaller problems that force some to be solved before others.  The
    key idea of dynamic programming is to determine, based on an
    analysis of the overall problem, an ordering for the smaller
    problems that allows each smaller problem to be solved exactly
    once.  It is common in dynamic programming that the smaller
    problems make up a regular array (of whatever dimensionality), so
    that results can be stored in a regular structure, and looking up
    previously solved subproblems is just a normal indexing operation.
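
    As a concrete illustration (my choice of example, not one from
    the original post), here is a bottom-up Python sketch of the
    classic longest-common-subsequence problem.  The subproblems form
    a regular two-dimensional table, filled in an order that
    guarantees each entry's dependencies have already been computed:

        def lcs_length(a, b):
            # table[i][j] = length of the LCS of a[:i] and b[:j];
            # row 0 and column 0 stay 0 (LCS with an empty string).
            table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        table[i][j] = table[i - 1][j - 1] + 1
                    else:
                        table[i][j] = max(table[i - 1][j],
                                          table[i][j - 1])
            return table[len(a)][len(b)]

        print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"

    Note that each subproblem is solved exactly once, and lookups are
    plain array indexing -- no search of a cache is needed.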

    An abstract example of a dynamic programming problem is calculating
    a function F of three arguments I, J, K, where computing F(I,J,K)
    depends on several values F(i,j,k) with i <= I, j <= J, and
    k <= K.  By tabulating F in increasing order of i+j+k, we arrange
    for the earlier values to be available when computing F(I,J,K).
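
    A concrete instance of this pattern (my illustration, using a
    standard textbook recurrence) is counting monotone lattice paths
    from (0,0,0) to (I,J,K), where
    F(i,j,k) = F(i-1,j,k) + F(i,j-1,k) + F(i,j,k-1).  Visiting cells
    in increasing order of i+j+k guarantees every value on the
    right-hand side has already been filled in:

        def count_paths(I, J, K):
            # F[i][j][k] = number of paths from the origin to (i,j,k),
            # where each step increments exactly one coordinate.
            F = [[[0] * (K + 1) for _ in range(J + 1)]
                 for _ in range(I + 1)]
            F[0][0][0] = 1
            # Tabulate in increasing order of i+j+k, as in the text.
            cells = sorted(((i, j, k)
                            for i in range(I + 1)
                            for j in range(J + 1)
                            for k in range(K + 1)), key=sum)
            for i, j, k in cells:
                if (i, j, k) != (0, 0, 0):
                    F[i][j][k] = ((F[i-1][j][k] if i else 0) +
                                  (F[i][j-1][k] if j else 0) +
                                  (F[i][j][k-1] if k else 0))
            return F[I][J][K]

        print(count_paths(2, 2, 2))  # 90 = 6! / (2! * 2! * 2!)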

    Incidentally, it can happen that F(I,J,K) is a function of lesser
    values of i, j, or k, and also of F(I,J,K) itself.  When that
    happens we get an /equation/ that must be solved for F(I,J,K), and
    the equation can be solved because values for all the dependent
    (smaller) sub-problems are already known.
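
    A small worked instance of this situation (my example): the
    expected number of fair-die rolls to travel from square i to
    square N on a linear board, where a roll that would overshoot N
    leaves you in place.  With s denoting the number of overshooting
    faces, the recurrence is

        E(i) = 1 + (s/6) * E(i) + (1/6) * sum of E(i+d) over valid d

    which mentions E(i) on both sides; solving this linear equation
    gives E(i) = (6 + sum of E(i+d)) / (6 - s):

        def expected_rolls(N):
            # E[i] = expected rolls to reach N from square i.  Filled
            # in decreasing i, so all larger squares are already
            # known; the self-reference is eliminated algebraically.
            E = [0.0] * (N + 1)
            for i in range(N - 1, -1, -1):
                valid = [d for d in range(1, 7) if i + d <= N]
                s = 6 - len(valid)   # overshooting faces: stay put
                E[i] = (6 + sum(E[i + d] for d in valid)) / (6 - s)
            return E[0]

        print(expected_rolls(1))  # 6.0: only a roll of 1 advances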

    I wanted to answer your question because memoization and dynamic
    programming are really nothing like each other, and I thought it
    would be good to clarify the two concepts.  Hopefully you got some
    value out of my explanations here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)