First, machine integers and machine floats have similar speeds; integers are a little faster, as long as the task is numeric computation. The pitfall with integers in *Mathematica* is that they are treated as exact expressions. In complicated calculations, the integers may grow beyond machine size, division produces exact rationals, and special functions, like `Sin[2]`, remain unevaluated and are treated as symbolic expressions. When such non-machine values are introduced in a computation, *Mathematica*'s software routines are invoked instead of native CPU arithmetic. Naturally, the software routines are slower.
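A few quick evaluations illustrate the difference between exact and machine values:

```
(* Exact inputs stay exact or symbolic; machine reals use native CPU arithmetic *)
Sin[2]       (* stays as Sin[2], a symbolic expression *)
Sin[2.]      (* 0.909297, a machine float *)
1/3 + 1/6    (* 1/2, an exact rational *)
1./3 + 1./6  (* 0.5, machine arithmetic *)
```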

As a symbolic software system, *Mathematica* can do some unexpected things, and it takes a while to learn them all. Most iterative commands, like `Table`, `Sum`, `Map`, etc., will compile their expressions if the number of iterations is high enough (see `SystemOptions["CompileOptions"]` and scan for options ending in `"CompileLength"`). What can be compiled would take a long explanation. In the present example, `f[i]` fails to be compiled, but its value `i^2` will be compiled.
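For reference, those thresholds can be inspected, and changed with `SetSystemOptions`; the exact option names and default values vary by version:

```
(* List the auto-compilation settings of the current kernel *)
SystemOptions["CompileOptions"]
(* contains entries such as "SumCompileLength" and "TableCompileLength" *)

(* e.g., raise the iteration count at which Sum auto-compiles its summand *)
SetSystemOptions["CompileOptions" -> {"SumCompileLength" -> 1000}]
```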

```
(* OP's form -- slow, uncompiled *)
Table[Sum[f[i], {i, 1., 100000}], {j, 1, 100}]; // AbsoluteTiming
(* {6.84558, Null} *)
```

With `f[i]` evaluated via `Evaluate`, or with `i^2` substituted directly, it is somewhat faster due to compilation of the summand:

```
Table[Sum[Evaluate@f[i], {i, 1., 100000}], {j, 1, 100}]; // AbsoluteTiming
(* {1.04318, Null} *)
Table[Sum[i^2, {i, 1., 100000}], {j, 1, 100}]; // AbsoluteTiming
(* {1.04277, Null} *)
```

With integers, it's even faster:

```
Table[Sum[Evaluate@f[i], {i, 1, 100000}], {j, 1, 100}]; // AbsoluteTiming
(* {0.160928, Null} *)
Table[Sum[i^2, {i, 1, 100000}], {j, 1, 100}]; // AbsoluteTiming
(* {0.148675, Null} *)
```

For real speed, use packed arrays and the vectorized forms of arithmetic operations and many built-in functions. Integers are still faster than floats:

```
Table[Total[f[Range[1., 100000]]], {j, 1, 100}]; // AbsoluteTiming
(* {0.071032, Null} *)
Table[Total[f[Range[1, 100000]]], {j, 1, 100}]; // AbsoluteTiming
(* {0.046477, Null} *)
```
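One can check whether a vectorized computation stays in the fast packed-array representation; `Developer`PackedArrayQ` is the standard test (a quick sketch):

```
arr = Range[1., 10];
Developer`PackedArrayQ[arr]        (* True: Range of machine reals is packed *)
Developer`PackedArrayQ[arr^2]      (* True: vectorized arithmetic preserves packing *)
Developer`PackedArrayQ[{1, 2, x}]  (* False: a symbolic entry prevents packing *)
```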

If the integers become bigger than machine integers (bigger than `2^63 - 1` on a 64-bit system), then the integer computation will slow down.

```
Table[Total[f[2^Range[1., 1000]]], {j, 1, 1000}]; // AbsoluteTiming
(* {0.336848, Null} *)
Table[Total[f[2^Range[1, 1000]]], {j, 1, 1000}]; // AbsoluteTiming
(* {1.00608, Null} *)
```
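The packing test shows why the integer version loses here (assuming a typical 64-bit kernel; `2.^1000` is still below the machine-real overflow threshold):

```
Developer`PackedArrayQ[2^Range[1, 1000]]   (* False: entries exceed machine-integer size *)
Developer`PackedArrayQ[2^Range[1., 1000]]  (* True: machine reals, still packed *)
```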

To see a bigger difference, consider the more complicated summand `f[i/(i + 1)]`, which won't be compilable over the integers (the exact rationals defeat compilation) but will be compilable over the floats:

```
Table[Sum[Evaluate@f[i/(i + 1)], {i, 1, 10000}], {j, 1, 100}]; // AbsoluteTiming
(* {4.94459, Null} *)
Table[Sum[Evaluate@f[i/(i + 1)], {i, 1., 10000}], {j, 1, 100}]; // AbsoluteTiming
(* {0.31874, Null} *)
```

---

**Comments:**

Why not `Do[Total[Range[1., 100000.]^2], 1000] // AbsoluteTiming`? – ilian – 2018-06-05T03:01:33.270

Most of the time goes to summing either symbolically or perhaps element-wise, I am not sure which. Could use `Compile` if you want to reduce memory use, or else "vectorize", e.g. as `sumSquares[n_] := Total[Range[n]^2]`. – Daniel Lichtblau – 2018-06-05T03:09:42.250

I see something substantially similar was posted while I was running tests... – Daniel Lichtblau – 2018-06-05T03:10:17.167