Product of elements of a list



Sum is for functions; Total is for a list. Product is for a function; what is the corresponding command for a list? I'd have guessed Times, but that's not it. Using tricks like Times @@ <list> I can make it work, but is there a built-in command for the product of the elements of a list?


Posted 2014-10-13T06:52:59.110


I'm curious, what is it you don't like about Times @@ list? – RunnyKine – 2014-10-13T06:56:18.860

Tr[{1, 2, 3, 4}, Times] works too, but is not really better than using Apply. – Yves Klett – 2014-10-13T06:59:04.523



The usual way to multiply all elements of a list is Times @@ list.

The less trivial problem is to calculate the product as fast as possible. Some performance tests (from slowest to fastest):

a = RandomReal[{0.999, 1.001}, 10000000];

Det@DiagonalMatrix[a] // AbsoluteTiming
(* I don't have 1 PB RAM :) *)

Product[a[[i]], {i, Length[a]}] // AbsoluteTiming
(* {5.516518, 0.323758} *)

1 ## & @@ a // AbsoluteTiming
(* {4.724335, 0.323758} *)

Tr[a, Times] // AbsoluteTiming
(* {4.140270, 0.323758} *)

Times @@ a // AbsoluteTiming
(* {3.742130, 0.323758} *)

Fold[Times, 1., a] // AbsoluteTiming
(* {1.592785, 0.323758} *)

Times @@ Times @@ Partition[a, 2000] // AbsoluteTiming
(* {0.249095, 0.323758} *)

Exp@Total@Log@a // AbsoluteTiming
(* {0.185068, 0.323758} *)

cTimes = Compile[{{x, _Real, 1}},
   Module[{res = 1.},
    Do[res *= x[[i]], {i, Length[x]}];
    res],
   CompilationTarget -> "C", RuntimeOptions -> "Speed"];
cTimes[a] // AbsoluteTiming
(* {0.032927, 0.323758} *)

Exp@Total@Log has a big advantage: it doesn't underflow or overflow when you work with a wide range of numbers (the numbers must be positive; see Steve's answer).
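A quick illustration of that advantage (my sketch, not part of the original answer): with entries drawn from (0, 1), the direct product leaves the machine-number range, while the sum of logs stays an ordinary machine number:

```mathematica
b = RandomReal[{0, 1}, 100000];
Times @@ b
(* a number far below $MinMachineNumber: Mathematica falls back
   to slow arbitrary-precision arithmetic *)

MachineNumberQ[Total@Log@b]
(* True: the intermediate sum never leaves the machine range,
   so only the final Exp can under- or overflow *)
```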

One can go further and allow the compiler to use vector instructions:

srcTimes = "#include \"WolframLibrary.h\"
  DLLEXPORT mint WolframLibrary_getVersion(){return WolframLibraryVersion;}
  DLLEXPORT int WolframLibrary_initialize(WolframLibraryData libData){return 0;}
  DLLEXPORT void WolframLibrary_uninitialize(WolframLibraryData libData){return;}
  DLLEXPORT int times(WolframLibraryData libData,
            mint Argc, MArgument *Args, MArgument Res) {
    double r = 1.0;
    double *data;
    MTensor T0 = MArgument_getMTensor(Args[0]);
    mint len = libData->MTensor_getFlattenedLength(T0);
    data = libData->MTensor_getRealData(T0);
    mint i;
    for(i = 0; i < len-1; i+=2){
        r *= data[i]*data[i+1];
    }
    for(; i < len; i++){
        r *= data[i];
    }
    MArgument_setReal(Res, r);
    return LIBRARY_NO_ERROR;
  }";
Needs["CCompilerDriver`"];
libTimes = CreateLibrary[srcTimes, "times"];
cTimes2 = LibraryFunctionLoad[libTimes, "times", {{_Real, 1, "Constant"}}, _Real];

cTimes[a] // AccurateTiming
cTimes2[a] // AccurateTiming
(* 0.0279596 *)
(* 0.0171085 *)


Posted 2014-10-13T06:52:59.110

Reputation: 41 907

I see that you chose to interpret the question as equivalent to my own from Stack Overflow: Is there a fast product operation for PackedArrays? +1 :-)

– Mr.Wizard – 2014-10-13T18:07:16.597

p.s. belisarius's comment still cracks me up: "Ohh well, my sense of humor is asymptotically approaching idiocy as years go by. – belisarius" – Mr.Wizard – 2014-10-13T18:13:05.080

I always use Exp@Total@Log@a, glad to see it performs well since that is the one that makes the most sense to me. – Tim Mayes – 2014-10-13T21:54:31.397

I think this timing test may be deceptive. If you change the character of the input to a = RandomReal[{-1, 1}, 10000000]; then the order of timings is completely scrambled. – bill s – 2015-11-15T17:49:19.487

@bills It is because you obtain overflow and non-numeric values. – ybeltukov – 2015-11-15T18:17:38.353

@ybeltukov -- you still get scrambled timings (without underflow) if you only multiply a million items (instead of 10 million). – bill s – 2015-11-15T20:18:36.777

@bills I have the same order of timings for RandomReal[{0.95, 1.05}, 1000000]. For bigger variance I obtain underflow. – ybeltukov – 2015-11-15T20:25:57.210

What I just ran was: a = RandomReal[{-2.5, 2.5}, 1000000]; You do get small numbers 10^-36112, but no underflow. – bill s – 2015-11-15T20:34:39.167

@bills Actually it is machine underflow. It is much less than $MinMachineNumber. You can set SetSystemOptions["CatchMachineUnderflow" -> False] to obtain the desired timings (with the result rounded to 0.).

– ybeltukov – 2015-11-15T20:45:54.040


Ah well, for fun and posterity:

Tr[#, Times]&

Yves Klett

Posted 2014-10-13T06:52:59.110

Reputation: 14 743


At present I don't believe there is a core language function for this operation. Times @@ list is standard practice from what I have observed. Just for fun you could also use:

1 ## & @@ {a, b, c, d}
(* a b c d *)

I was remiss not to link my own question on the subject with a nice answer from Simon:

Be aware that there is a trade-off of precision for speed with the Log method.


Posted 2014-10-13T06:52:59.110

Reputation: 259 163

That's what you get for asking a perfectly innocent question around here :D – Yves Klett – 2014-10-13T07:18:56.367


Why is there no equivalent of Total for products?

There isn't an equivalent of Total for products, and I believe there is a reason for that.

The high performance of Total (compared to Plus) matters most for large arrays. Adding together lots of numbers is usually unproblematic, but multiplying them carries a high risk of overflow or underflow.


Times @@ RandomReal[1, 10000]
(* 3.49264640062689*10^-4292 *)

Times @@ RandomReal[10, 10000]
(* 1.26694123858894*10^5708 *)

These results are well under and over the smallest and largest representable machine numbers:

{$MinMachineNumber, $MaxMachineNumber}
(* {2.22507*10^-308, 1.79769*10^308} *)

Mathematica can deal with such huge and tiny numbers, but it must use arbitrary-precision arithmetic, which is slow.

We must keep this in mind in the experiments we will do below.

What are fast ways to multiply numbers in a list?

To minimize the risk of under/overflow, let us use the following array for benchmarking:

arr = RandomReal[E, 100000];

The simplest way is Times @@ arr, but this is not particularly fast. The reason is unpacking.

Times @@ arr // RepeatedTiming
(* {0.014, 2.55876*10^191} *)
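To see the unpacking directly, one can switch on the standard packing diagnostics (a side demonstration, not from the original answer):

```mathematica
(* Sketch: "Packing" diagnostics report whenever a packed array
   gets unpacked. Applying Times via @@ forces arr to be unpacked. *)
On["Packing"];
Times @@ arr;   (* a packing message reports that arr is unpacked *)
Off["Packing"];
```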

Several of the tricks shown in this thread, like Tr[arr, Times], are basically the same thing.

The simplest fast way is

Product[a, {a, arr}] // RepeatedTiming
(* {0.0014, 2.55876*10^191} *)

I generally recommend this approach.

The fastest way, at the cost of accuracy, is transforming the product to a sum using logarithms:

Exp@Total@Log[arr] // RepeatedTiming
(* {0.000106, 2.55876*10^191} *)

This approach makes sense given the nature of the operation: the results will typically span orders of magnitude. The under/overflow is delayed until the final Exp. The thing to watch out for is precision and the presence of negative values.

What about compilation?

Direct procedural approaches with Compile are equivalent to (and perform similarly to)

cf = Compile[{{arr, _Real, 1}},
  Product[a, {a, arr}]]

Benchmarking shows that this is no better than a bare Product. Perhaps Product is already auto-compiled?

cf[arr] // RepeatedTiming
(* {0.0014, 2.55876*10^191} *)

It does become faster if we compile to C:

cf = Compile[{{arr, _Real, 1}},
  Product[a, {a, arr}],
  CompilationTarget -> "C"]

cf[arr] // RepeatedTiming
(* {0.00014, 2.55876*10^191} *)

The important thing to pay attention to here is under- and overflow. Because compiled functions operate with machine numbers, under- or overflow will not only affect performance. It will also affect the result.


arr = RandomReal[2, 100000];

cf[arr]
(* 0. *)

The result is not really zero. The function returns zero only because of underflow. The true result is

Product[a, {a, arr}] // RepeatedTiming
(* {0.023, 1.87228152952573*10^-13556} *)

Notice that after underflow, Product becomes much slower, even slower than Times @@ .... This is due to having to switch to arbitrary precision arithmetic. But it does give us the correct result.

The fix is to explicitly instruct Compile to handle under/overflows:

cf = Compile[{{arr, _Real, 1}},
  Product[a, {a, arr}],
  CompilationTarget -> "C",
  RuntimeOptions -> {"CatchMachineUnderflow" -> True, "CatchMachineOverflow" -> True}]

cf[arr] // RepeatedTiming

During evaluation of CompiledFunction::cfne: Numerical error encountered; proceeding with uncompiled evaluation.

During evaluation of CompiledFunction::cfne: Numerical error encountered; proceeding with uncompiled evaluation.
(* {0.024, 1.87228152952573*10^-13556} *)

Now for this array, the compiled function is not any faster than the plain Product, because it must resort to uncompiled evaluation anyway. But it does return the correct result, and the compiled function still has the potential to be fast for other arrays. Unfortunately, under- and overflow checking in Compile does seem to introduce a performance penalty.

To sum up

  1. The fastest way is transforming the product to a sum: Exp@Total@Log[arr]. This may reduce accuracy and does not work for negative numbers.
  2. The fastest direct way is Product[a, {a, arr}].
  3. If you compile solution (2), expect a speedup only with CompilationTarget -> "C", and remember to turn on catching under- and overflows using RuntimeOptions -> {"CatchMachineUnderflow" -> True, "CatchMachineOverflow" -> True}.


Posted 2014-10-13T06:52:59.110

Reputation: 213 047

I had a look to see if GeometricMean did anything clever and found Statistics`DescriptiveDump`ApplyToPacked which effectively switches on autocompile for Apply so that it works without unpacking. For this problem it's no better than using Product but I thought it worth mentioning. – Simon Woods – 2017-05-09T21:37:16.193


Using the Exp-Total-Log form should be avoided in general, especially if some values are negative, due to accuracy issues. Consider the example above with the signs flipped, shown below. Since 10 million is an even number, the product should be the same.

a = RandomReal[{0.999, 1.001}, 10000000];

AbsoluteTiming[Times @@ a]
(* {1.4060804, 0.243018} *)

AbsoluteTiming[Exp@Total@Log@a]
(* {0.1490085, 0.243018} *)

Log[-3]
(* I π + Log[3] *)

AbsoluteTiming[Exp@Total@Log@(-a)]
(* {0.7520430, 0.243011 + 0.00184219 I} *)

AbsoluteTiming[Tr[a, Times]]
(* {1.4640838, 0.243018} *)

AbsoluteTiming[Times @@ (-a)]
(* {1.2720727, 0.243018} *)

Steve C

Posted 2014-10-13T06:52:59.110

Reputation: 41

+1 You are right. Here is a workaround for the Exp-Total-Log method.

– ybeltukov – 2015-11-15T20:55:30.000
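The linked workaround isn't reproduced above; one common variant (a sketch of my own, with signedProduct a hypothetical name) tracks the sign separately and takes logs of absolute values:

```mathematica
(* Sketch: make the Exp-Total-Log trick safe for negative entries
   by counting sign flips separately. Assumes the list has no zeros. *)
signedProduct[list_] :=
  (-1)^Count[list, _?Negative] Exp[Total[Log[Abs[list]]]]

signedProduct[{-2., 3., -4.}]  (* 24. *)
```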


Det[DiagonalMatrix[#]] &



Posted 2014-10-13T06:52:59.110

Reputation: 129 207


This is not terse but just wanted to play:

Fold[#1 #2 &, #, ##2] &@{a, b, c, d}
(* a b c d *)


Posted 2014-10-13T06:52:59.110

Reputation: 53 491

As ybeltukov shows, I am being unnecessarily complex: Fold[Times, 1, #] & – ubpdqn – 2014-10-13T13:44:00.497


{1, 2, 3, 4} /. List -> Times


Posted 2014-10-13T06:52:59.110

Reputation: 19 067

For {{1, 2}, {1, 2}} one could expect the result {1, 4}, not 4. – Kuba – 2014-10-13T07:12:19.683

how about Flatten :-) – Algohi – 2014-10-13T07:13:45.573