## Perfect numbers


The question given to me:

a. Find the perfect numbers between $1$ and $10^6$

b. Find the abundant numbers between $1$ and $1000$

For a, I wrote

Select[Range[1, 10^6], All]


I think I'm supposed to use Divisors in there somewhere, but I'm not sure where to put it, since selecting all the numbers between $1$ and $10^6$ won't give me just the perfect numbers...

For b, I think it's similar to part a, so I'm at a dead end there too.

I'm trying to find a simple input since I'm still a beginner and want to stay with the basics.

### Answer (score 8)

Perfect numbers:

Select[ Range[10^6], Total[Divisors @ #] == 2 # &]

{6, 28, 496, 8128}
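
The test works because a number is perfect when it equals the sum of its proper divisors, i.e. when the sum of *all* its divisors, including the number itself, is twice the number. A quick check with 28:

```mathematica
Divisors[28]
(* {1, 2, 4, 7, 14, 28} *)

Total[Divisors[28]] == 2*28
(* True *)
```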


Abundant numbers:

Select[ Range[10^3], Total[ Most @ Divisors @ #] > # &]//Short

{ 12, 18, 20, 24, 30, 36, 40, 42, 48, <<228>>,
968, 972, 978, 980, 984, 990, 992, 996, 1000}


I used Short to get only a few, since there are:

Count[ Range[10^3], _?(Total[Most@Divisors@#] > # &)]

246


of them.
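
For instance, 12 is the smallest abundant number: its proper divisors sum to 16 > 12.

```mathematica
Most @ Divisors @ 12
(* {1, 2, 3, 4, 6} *)

Total[Most @ Divisors @ 12]
(* 16 *)
```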

Edit

As RunnyKine pointed out, using DivisorSigma[1, #] & is more efficient than Total @ Divisors @ # &. Another improvement exploits the fact that there are no known odd perfect numbers; it has been verified that there are none below 10^1500. The largest known perfect number (the 48th) has 34850340 digits, i.e. IntegerLength[2^(57885161 - 1) (2^57885161 - 1)]. Taking this into account and restricting the search to even numbers, we get a 2× speed-up:

Pick[ #, DivisorSigma[1, #] - 2 #, 0]& @ Range[2, 10^6, 2]


However, there do exist odd abundant numbers, though they are sparsely distributed among the even ones. Below 1000 there is only one (while there are 245 even ones):

Pick[ #, UnitStep[ DivisorSigma[1, #] - 2 # - 1], 1]& @ Range[1, 10^3, 2]

{945}


Below 10^6 there are

Length @ Pick[ #, UnitStep[ DivisorSigma[1, #] - 2 # - 1], 1] & /@
{Range[ 2, 10^6, 2], Range[ 1, 10^6, 2]}

{245549, 1996}


even and odd abundant numbers respectively.

It is remarkable that

    Length @ Pick[#, UnitStep[DivisorSigma[1, #] - 2 # - 1], 1] &@ Range[10^6]

is faster than:

    Count[ Range[10^6], _?(DivisorSigma[1, #] > 2 # &)]

Good stuff. I wonder if there are any odd perfect numbers. – RunnyKine – 2013-10-20T04:05:54.913

@RunnyKine Many have asked this question for centuries. Check out MathWorld or http://www.oddperfect.org/

– KennyColnago – 2013-10-20T17:26:38.860

I just started learning this information, but what exactly does @ do? – asik – 2013-10-20T18:27:04.010

@asik f @ x is equivalent to f[x]; another possible form is x // f. See e.g. the "Special Ways to Input Expressions" tutorial in the documentation.

– Artes – 2013-10-20T18:29:49.943

Thank you! Kept seeing it, but I wanted to make sure. – asik – 2013-10-21T03:21:41.053

### Answer (score 9)

A faster approach to finding perfect numbers uses DivisorSigma:

 Select[Range[10^6], DivisorSigma[1, #] == 2 # &]


{6, 28, 496, 8128}

Here's an even faster approach:

Pick[#, MapThread[Equal, {DivisorSigma[1, #], 2 #}], True] &[Range[10^6]]


and a little bit faster:

Pick[#, DivisorSigma[1, #] - 2 #, 0] &@Range[10^6]
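
This works because DivisorSigma threads over a list, and Pick keeps exactly the elements whose corresponding entry matches 0. On a tiny sample:

```mathematica
DivisorSigma[1, {6, 7, 8}] - 2 {6, 7, 8}
(* {0, -6, -1} *)

Pick[{6, 7, 8}, {0, -6, -1}, 0]
(* {6} *)
```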


For Abundant numbers do:

Select[Range[10^6], (DivisorSigma[1, #] - #) > # &]


and a faster approach as above:

Pick[#, MapThread[Greater, {DivisorSigma[1, #] - #, #}], True] &[Range[10^6]]


By giving Greater the Listable attribute (Greater is Protected, so it must be unprotected first) we can squeeze out some more performance:

Unprotect[Greater]; SetAttributes[Greater, Listable]; Protect[Greater];
Pick[#, DivisorSigma[1, #] > 2 #, True] &@Range[10^6]


Edit:

For this particular problem, since we know that all the perfect numbers below 10^6 are in fact less than 10^4, we can get into the millisecond regime using ParallelMap:

ParallelMap[Pick[#, DivisorSigma[1, #] - 2 #, 0] &, {Range[2, 5*^3, 2],
Range[5*^3 + 2, 1*^4, 2]}] // Flatten // AbsoluteTiming


{0.015627, {6, 28, 496, 8128}}

Of course this will also give us a speed-up if we scan the entire range.

A little faster yet: Pick[#, Unitize[DivisorSigma[1, #] - 2 #], 0] &@Range[10^6] – Michael E2 – 2013-10-20T01:29:11.790

@MichaelE2. Nice, but almost the same time as the MapThread method on my machine. – RunnyKine – 2013-10-20T01:29:30.767

On mine it's 4.7 (MapThread) to 4.2. – Michael E2 – 2013-10-20T01:37:27.713

@MichaelE2, it's definitely a little bit faster. – RunnyKine – 2013-10-20T01:40:29.677

@MichaelE2, You actually don't need Unitize and that makes it a tiny bit faster. – RunnyKine – 2013-10-20T01:48:11.773

D'oh. Of course. Thanks. I used UnitStep for the abundant and didn't think. – Michael E2 – 2013-10-20T01:50:02.933

@MichaelE2. Thanks for leading me to the idea. – RunnyKine – 2013-10-20T02:04:53.567

### Answer (score 8)

Or how about the connection between even perfect numbers and Mersenne primes?

With[{p = Prime[Range[20]]},
Pick[p, PrimeQ[2^p - 1]] /. q_ -> 2^(q - 1) (2^q - 1)]
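
This rests on the Euclid–Euler theorem: $n$ is an even perfect number if and only if

$$n = 2^{p-1}\left(2^p - 1\right)$$

with $2^p - 1$ a (Mersenne) prime; the replacement rule simply maps each such exponent $p$ to the corresponding perfect number.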


Update for perfect numbers only.

As noted in previous answers, DivisorSigma[1,n] is faster than summing Divisors, and Pick is faster than Select. So even perfect numbers may be found by using

RepeatedTiming[Pick[#, DivisorSigma[1, #] - 2 #, 0] &@Range[2, 10^6, 2]]


which gives a timing of 1.34 s on my machine. These approaches are slow compared to the clever divisor sum by @MichaelE2. The repeated timing of his function is a tiny 0.056 s.

@QuantumDot points out the new v10.4 functions PerfectNumber[n] and PerfectNumberQ[n]. However, the following takes a glacial 4 seconds! Why?

RepeatedTiming[Pick[#, Map[PerfectNumberQ, #]] &@Range[2, 10^6, 2]]


The Help page for the new v10.4 function MersennePrimeExponent shows how to instantly calculate even perfect numbers.

RepeatedTiming[2^(# - 1)*(2^# - 1) &[MersennePrimeExponent[Range[4]]]]


### Answer (score 3)

With the newly released version 10.4, you can use the built-in function PerfectNumber.

I don't have version 10.4 yet; it would be nice to see how it compares with the answers given here...

but... based on my reading of the documentation page (under the section Details), I can only conclude one thing: PerfectNumber is bloatware. Arguments contrary to my conclusion are welcome in the comments section.

Can you elaborate why the new built-in PerfectNumber function is bloatware? – shrx – 2016-03-07T08:19:11.933

@shrx In my view, there are just a small handful of known perfect numbers so the function should be part of a separate package rather than littering the System context namespace. The implementation of that function may as well be a table of known values, since I'd imagine finding new ones on a personal computer is hopeless. – QuantumDot – 2016-03-18T12:35:29.410

### Answer (score 3)

DivisorSigma is no doubt fast on individual integers, but the problem of finding the sum of divisors over a large contiguous range of integers can be solved with a fast C-style program, perfect for compiling as long as the sums stay within the size of a machine integer.

The idea is to turn the problem of summing the divisors around: an integer n is a divisor of each of its multiples. We keep a list of sums and, for every integer n, add n to each sum whose position in the list is a multiple of n, up to some pre-specified limit lim. At the end, the list contains the sum of the divisors of each integer up to lim.

divsum = Compile[{{lim, _Integer}},
Module[{dsum},
dsum = Table[1, {lim}];
Do[
dsum[[k]] += n,
{n, 2, lim}, {k, n, lim, n}];
dsum],
RuntimeOptions -> "Speed", CompilationTarget -> "C"
];
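
As a sanity check on a tiny range, the compiled sieve agrees with DivisorSigma:

```mathematica
divsum[10]
(* {1, 3, 4, 7, 6, 12, 8, 15, 13, 18} *)

DivisorSigma[1, Range[10]]
(* {1, 3, 4, 7, 6, 12, 8, 15, 13, 18} *)
```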

divsum[10^6]; // AbsoluteTiming
(* {0.119278, Null} *)

DivisorSigma[1, Range[10^6]]; // AbsoluteTiming
(* {3.985526, Null} *)


We can define functions to find perfect and abundant numbers, using Pick to pick out the numbers according to the divisor-sum property.

perfect[lim_] := Pick[#, divsum[lim] - 2 #, 0] &@Range[lim];
abundant[lim_] := Pick[#, UnitStep[2 # - divsum[lim]], 0] &@Range[lim]

perfect[10^6] // AbsoluteTiming
(* {0.159563, {6, 28, 496, 8128}} *)

abundant[10^6] // Short // AbsoluteTiming
(* {0.168468, {12, 18, 20, 24, <<247537>>, 999992, 999996, 999999, 1000000}} *)


Do you know why the alternate line Do[dsum[[Range[n, lim, n]]] += n, {n, 2, lim}] is 3 times faster than Do[dsum[[k]] += n, {n, 2, lim}, {k, n, lim, n}] in an uncompiled version of divsum, and yet 7 times slower in a compiled version of divsum? – KennyColnago – 2013-10-30T21:31:07.823

@KennyColnago The one is faster in the uncompiled version because Part is "vectorized", but I'm not sure why it's so much slower in the compiled version. Its compiled code is 75% longer than the other's, which may contribute some. It could be that the vectorized read/write (with ..[[list]]) is slower than single element read/write when compiled. I have to go, but perhaps if I think of something, I'll let you know. – Michael E2 – 2013-10-30T22:15:19.787

@KennyColnago First, Do[r = Range[n, lim, n]; dsum[[r]] += n, {n, 2, lim}] is a little more efficient, with compiled code similar to my code, but still much slower. Examining the C code (CCodeGenerate) of each shows mine has rather simple, direct C array addressing -- probably impossible to beat for speed. In the other loop, library functions are called to deal with the list of parts, one to get the parts and another to set the parts. This is the main difference in the code, and I suppose must account for the difference in speed. – Michael E2 – 2013-10-31T03:01:27.767

Wow, thanks for the sleuthing! It helps a lot with my never ending search for efficiency. – KennyColnago – 2013-10-31T21:34:01.687