Can digital computers understand infinity?



As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things, whether abstract (like numbers) or real.

For example, consider the integers. We can, in principle, think about and "understand" infinitely many numbers displayed on a screen. Nowadays, we are trying to design artificial intelligence that is at least as capable as a human being. However, I am stuck on infinity. I am trying to find a way to teach a model (deep or not) to understand infinity. I define "understanding" in a functional way: for example, if a computer can differentiate 10 different numbers or things, then it must somehow really understand these different things. This is the basic, straightforward approach to "understanding".

As I mentioned before, humans understand infinity because they are capable, at least in principle, of counting infinitely many integers. From this point of view, if I want to create a model (which, in the abstract, is just a function), that model must differentiate infinitely many numbers. Since computers are digital machines with only finite capacity to model such an infinite function, how can I create a model that differentiates infinitely many integers?

For example, take a deep learning vision model that recognizes the number on a card. The model must assign a label to each different card to differentiate each integer. Since there are infinitely many integers, how can the model, running on a digital computer, assign a different label to each integer, like a human being can? If it cannot differentiate infinitely many things, how can it understand infinity?

If I take real numbers into account, the problem becomes much harder.

What is the point that I am missing? Are there any resources that focus on the subject?


Posted 2019-10-05T00:18:38.083

Reputation: 577

Comments are not for extended discussion; this conversation has been moved to chat.

– nbro – 2020-03-06T01:22:14.997



I think this is a fairly common misconception about AI and computers, especially among laypeople. There are several things to unpack here.

Let's suppose that there's something special about infinity (or about continuous concepts) that makes them especially difficult for AI. For this to be true, it must both be the case that humans can understand these concepts while they remain alien to machines, and that there exist other concepts that are not like infinity that both humans and machines can understand. What I'm going to show in this answer is that wanting both of these things leads to a contradiction.

The root of this misunderstanding is the problem of what it means to understand. Understanding is a vague term in everyday life, and that vague nature contributes to this misconception.

If by understand, we mean that a computer has the conscious experience of a concept, then we quickly become trapped in metaphysics. There is a long running, and essentially open debate about whether computers can "understand" anything in this sense, and even at times, about whether humans can! You might as well ask whether a computer can "understand" that 2+2=4. Therefore, if there's something special about understanding infinity, it cannot be related to "understanding" in the sense of subjective experience.

So, let's suppose that by "understand", we have some more specific definition in mind: something that would make a concept like infinity more complicated for a computer to "understand" than a concept like arithmetic. Our more concrete definition of "understanding" must relate to some objectively measurable capacity or ability related to the concept (otherwise, we're back in the land of subjective experience). Let's consider what capacity or ability we might pick that would make infinity a special concept, understood by humans and not machines, unlike, say, arithmetic.

We might say that a computer (or a person) understands a concept if it can provide a correct definition of that concept. However, if even one human understands infinity by this definition, then it should be easy for them to write down the definition. Once the definition is written down, a computer program can output it. Now the computer "understands" infinity too. This definition doesn't work for our purposes.

We might say that an entity understands a concept if it can apply the concept correctly. Again, if even one person understands how to apply the concept of infinity correctly, then we need only record the rules they are using to reason about the concept, and we can write a program that reproduces the behavior of this system of rules. Infinity is actually very well characterized as a concept, captured in ideas like the Aleph Numbers. It is not impractical to encode these systems of rules in a computer, at least up to the level at which any human understands them. Therefore, computers can "understand" infinity up to the same level of understanding as humans by this definition as well. So this definition doesn't work for our purposes.
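To make that concrete, here is a minimal sketch (my own illustration, not taken from any real theorem prover; the class name and the selection of rules are my assumptions) of how a few of the absorption rules for the cardinal $\aleph_0$ can be encoded:

```python
class Cardinal:
    """A cardinal number: either a finite int or the symbol "aleph0"."""
    def __init__(self, value):
        self.value = value  # an int, or the string "aleph0"

    def is_infinite(self):
        return self.value == "aleph0"

    def __add__(self, other):
        # Absorption rule: aleph0 + n = aleph0 for any finite n
        if self.is_infinite() or other.is_infinite():
            return Cardinal("aleph0")
        return Cardinal(self.value + other.value)

    def __mul__(self, other):
        # aleph0 * 0 = 0, but aleph0 * n = aleph0 for finite n > 0,
        # and aleph0 * aleph0 = aleph0 (e.g. the rationals are countable)
        if self.value == 0 or other.value == 0:
            return Cardinal(0)
        if self.is_infinite() or other.is_infinite():
            return Cardinal("aleph0")
        return Cardinal(self.value * other.value)

print((Cardinal("aleph0") + Cardinal(7)).value)  # aleph0
```

Any system of rules that can be written down this precisely can, by the argument above, be executed by a machine.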

We might say that an entity "understands" a concept if it can logically relate that concept to arbitrary new ideas. This is probably the strongest definition, but we would need to be pretty careful here: very few humans (proportionately) have a deep understanding of a concept like infinity. Even fewer can readily relate it to arbitrary new concepts. Further, algorithms like the General Problem Solver can, in principle, derive any logical consequences from a given body of facts, given enough time. Perhaps under this definition computers understand infinity better than most humans, and there is certainly no reason to suppose that our existing algorithms will not further improve this capability over time. This definition does not seem to meet our requirements either.

Finally, we might say that an entity "understands" a concept if it can generate examples of it. For example, I can generate examples of problems in arithmetic, and their solutions. Under this definition, I probably do not "understand" infinity, because I cannot actually point to or create any concrete thing in the real world that is definitely infinite. I cannot, for instance, actually write down an infinitely long list of numbers, merely formulas which express ways to create ever longer lists by investing ever more effort in writing them out. A computer ought to be at least as good as me at this. This definition also does not work.
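As an aside, the "formulas which express ways to create ever longer lists" are exactly what a program encodes. A trivial sketch in Python (my own illustration, with a made-up function name):

```python
def first_n_naturals(n):
    """A finite rule that yields an arbitrarily long, but never
    infinite, list: more effort (larger n) buys a longer list."""
    return list(range(1, n + 1))

print(first_n_naturals(5))  # [1, 2, 3, 4, 5]
```

Neither the human nor the machine ever produces the whole list; both hold only the finite rule.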

This is not an exhaustive list of possible definitions of "understands", but we have covered "understands" as I understand it pretty well. Under every definition of understanding, there isn't anything special about infinity that separates it from other mathematical concepts.

So the upshot is that, either you decide a computer doesn't "understand" anything at all, or there's no particularly good reason to suppose that infinity is harder to understand than other logical concepts. If you disagree, you need to provide a concrete definition of "understanding" that does separate understanding of infinity from other concepts, and that doesn't depend on subjective experiences (unless you want to claim your particular metaphysical views are universally correct, but that's a hard argument to make).

Infinity has a sort of semi-mystical status among the lay public, but it's really just like any other mathematical system of rules: if we can write down the rules by which infinity operates, a computer can do them as well as a human can (or better).

John Doucette

Reputation: 7 904


I think your premise is flawed.

You seem to assume that to "understand"(*) infinities requires infinite processing capacity, and imply that humans have just that, since you present them as the opposite to limited, finite computers.

But humans also have finite processing capacity. We are beings built of a finite number of elementary particles, forming a finite number of atoms, forming a finite number of nerve cells. If we can, in one way or another, "understand" infinities, then surely finite computers can also be built that can.

(* I used "understand" in quotes because I don't want to go into, e.g., the definition of sentience. I also don't think it matters for this question.)

As human beings, we can think about infinity. In principle, if we have enough resources (time, etc.), we can count infinitely many things, whether abstract (like numbers) or real.

Here, you actually say it out loud. "With enough resources." Would the same not apply to computers?

While humans can, e.g., use infinities when calculating limits, and can think of the idea of something getting arbitrarily larger, we can only do it in the abstract, not in the sense of being able to process arbitrarily large numbers. The same rules we use for mathematics could also be taught to a computer.
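In fact, they already have been. Computer algebra systems apply the same formal limit rules humans use; a small example with the sympy library (assuming it is installed):

```python
from sympy import symbols, limit, oo  # oo is sympy's symbol for infinity

x = symbols('x')

# The program applies the same rules a calculus student would:
print(limit(1 / x, x, oo))   # 0   (a quantity shrinking without bound)
print(limit(x ** 2, x, oo))  # oo  (a quantity growing without bound)
```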


Reputation: 289


TL;DR: The subtleties of infinity are made apparent in the notion of unboundedness. Unboundedness is finitely definable. "Infinite things" are really things with unbounded natures. Infinity is best understood not as a thing but as a concept. Humans theoretically possess unbounded abilities, not infinite abilities (e.g., the ability to count to any arbitrary number, as opposed to "counting to infinity"). A machine can be made to recognize unboundedness.

Down the rabbit hole again

How to proceed? Let's start with "limits."


Our brains are not infinite (unless you subscribe to some metaphysics). So, we do not "think infinity". Thus, what we purport to be infinity is best understood as some finite mental concept against which we can "compare" other concepts.

Additionally, we cannot "count infinite integers." There is a subtlety here that is very important to point out:

Our concept of quantity/number is unbounded. That is, for any finite value, we have a finite/concrete way of producing another value that is strictly larger/smaller. And, provided finite time, we can only count finite amounts.

You cannot be "given infinite time" to "count all the numbers"; this would imply a "finishing," which directly contradicts the notion of infinity (unless you believe humans have metaphysical properties which allow them to "consistently" embody a paradox). Additionally, how would you answer: what was the last number you counted? With no "last number" there is never a "finish," and hence never an "end" to your counting. That is, you can never "have enough" time/resources to "count to infinity."

I think what you mean is that we can fathom the notion of a bijection between infinite sets. But this notion is a logical construction (i.e., it is a finite way of wrangling what we understand to be infinite).

However, what we are really doing is this: within our bounds, we are talking about our bounds, and, whenever we need to, we can expand our bounds (by a finite amount). We can even talk about the nature of expanding our bounds. Thus:


A process/thing/idea/object is deemed unbounded if, given some measure of its quantity/volume/existence, we can in a finite way produce an "extension" of that object which has a measure we deem "larger" (or "smaller," in the case of infinitesimals) than the previous measure, and this extension process can be applied to the nascent object (i.e., the process is recursive).

Canonical case number one: The Natural Numbers

Additionally, our notion of infinity prevents any "at-ness" or "upon-ness" unto infinity. That is, one never "arrives" at infinity nor does one ever "have" infinity. Rather, one proceeds unboundedly.

Thus how do we conceptualize infinity?


It seems that "infinity" as a word is misconstrued to mean that there is a thing that exists called "infinity" as opposed to a concept called "infinity". Let's smash atoms with the word:

Infinite: limitless or endless in space, extent, or size; impossible to measure or calculate.

in- :a prefix of Latin origin, corresponding to English un-, having a negative or privative force, freely used as an English formative, especially of adjectives and their derivatives and of nouns (inattention; indefensible; inexpensive; inorganic; invariable). (source)

Finite: having limits or bounds.

So in-finity is really un-finity, which is not having limits or bounds. But we can be more precise here, because we can all agree the natural numbers are infinite, yet any given natural number is finite. So what gives? Simple: the natural numbers satisfy our unboundedness criterion, and thus we say "the natural numbers are infinite."

That is, "infinity" is a concept. An object/thing/idea is deemed infinite if it possesses a property/facet that is unbounded. And, as we saw above, unboundedness is finitely definable.

Thus, if the agent you speak of were programmed well enough to spot the pattern in the numbers on the cards, and to spot that the numbers all come from the same set, it could deduce the unbounded nature of the sequence and hence define the set of all those numbers as infinite, purely because the set has no upper bound. That is, the progression of the natural numbers is unbounded and hence definably infinite.
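A toy sketch of that deduction (my own illustration; the function name and the bound-sampling strategy are assumptions for demonstration): given a successor rule, the agent checks that every proposed bound can be exceeded in finitely many steps, and concludes "unbounded."

```python
def exceeds_any_bound(successor, start, bounds):
    """Check the unboundedness criterion on a sample of proposed bounds:
    for each bound, finitely many applications of `successor` must
    produce a value strictly larger than that bound."""
    for b in bounds:
        x = start
        steps = 0
        while x <= b:
            x = successor(x)
            steps += 1
            if steps > 1_000_000:  # give up: criterion not witnessed
                return False
    return True

# The natural numbers under +1: every sampled bound is exceeded,
# so the agent deems the progression unbounded ("infinite").
print(exceeds_any_bound(lambda n: n + 1, 0, [10, 999, 10**5]))  # True
```

Of course this only samples finitely many bounds; the honest conclusion is "unbounded as far as tested," which is exactly the finite handle on infinity described above.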

Thus, to me, infinity is best understood as a general concept for identifying when processes/things/ideas/objects possess an unbounded nature. That is, infinity is not independent of unboundedness. Try defining infinity without comparing it to finite things or the bounds of those finite things.


It seems feasible that a machine could be programmed to represent and detect instances of unboundedness or when it might be admissible to assume unboundedness.



Reputation: 987

I think you should clarify the statement: "Humans possess unbounded properties not infinite properties". – nbro – 2019-10-08T01:44:58.437

@nbro Good critique, I see the unclarity of the original statement. I have updated to capture better the intended meaning. – respectful – 2019-10-08T20:09:20.843


In Haskell, you can type:

print [1..]

and it will print out the infinite sequence of numbers, starting with:

[1,2,3,4,5,6,7,8,9,10,...

It will do this until your console runs out of memory.

Let's try something more interesting.

double x = x * 2
print (map double [1..])

And here's the start of the output:

[2,4,6,8,10,12,14,16,18,20,...

These examples show infinite computation. In fact, you can keep infinite data structures in Haskell, because Haskell has the notion of non-strictness-- you can do computation on entities that haven't been fully computed yet. In other words, you don't have to fully compute an infinite entity to manipulate that entity in Haskell.

Reductio ad absurdum.



Reputation: 231

Your argument is no different from symbol manipulation where you have $\infty$ representing infinity. – nbro – 2019-10-07T00:19:37.917

@nbro Symbol manipulation of a symbol that represents infinity, with the properties and implications appropriate to that concept, is IMHO the definition of "understanding infinity". – Peteris – 2019-10-07T09:00:21.337

@Peteris Your definition of understanding is similar to the one provided by John Doucette. See the Chinese room argument. I claim that you cannot write a program that is able to apply the concept of infinity to all cases. – nbro – 2019-10-07T12:14:58.390

@nbro "I claim that you cannot write a program that is able to apply the concept of infinity to all cases." Indeed, this is an intuitive conclusion of the halting problem: you can imagine a machine that can solve any problem, including the halting problem for Turing machines; call this a "Super-Turing" machine. But, on that machine, you could pose a problem that this "Super-Turing" machine could not solve, say, whether or not a Super-Turing program will halt, and you would need a "Super-super-Turing machine" to solve that. And so on. It is like Gödel's incompleteness theorem: no language – noɥʇʎԀʎzɐɹƆ – 2019-10-07T15:12:06.010

may express everything the universe has to offer. – noɥʇʎԀʎzɐɹƆ – 2019-10-07T15:12:17.693


I believe humans can be said to understand infinity since at least Georg Cantor, because we can recognize different types of infinities (chiefly countable vs. uncountable) via the concept of cardinality.

Specifically, a set is countably infinite if it can be mapped to the natural numbers, which is to say there is a 1-to-1 correspondence between its elements and the natural numbers. The set of all reals is uncountable, as is the set of all combinations (subsets) of natural numbers, because there are always more combinations than natural numbers, resulting in a set with a greater cardinality. (The first formal proofs of uncountability can be found in Cantor, and the topic is a subject of the philosophy of math.)
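Such a 1-to-1 correspondence can be written down explicitly. For example (my own illustration), the integers are countably infinite because they can be paired off with the naturals:

```python
def nat_to_int(n):
    """Bijection from the naturals 0, 1, 2, 3, 4, ... to the integers
    0, 1, -1, 2, -2, ...; odd naturals map to positives, even naturals
    to zero and the negatives."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

print([nat_to_int(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```

A finite formula like this is the whole content of the claim "the integers are countable"; no infinite enumeration ever has to be carried out.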

Understanding of infinity involves logic as opposed to arithmetic, because we can't express, for instance, all of the decimals of a transcendental number; we can only use approximations. Logic is a fundamental capability of what we think of as computers.

  • An analytic process (AI) that can recognize a function that produces an infinite loop, such as using $\pi$ to draw a circle, might be said to understand infinity...

"Never ending" is a definition of infinity, with the set of natural numbers as an example (there is a least number, 1, but no greatest number.)

Intractability vs. Infinity

Outside of the special case of infinite loops, I have to wonder if an AI is more oriented toward computational intractability as opposed to infinity.

A problem is said to be intractable if there is not enough time and space to completely represent it, and this can be extended to many real numbers.

$\pi$ may be understood to be infinite because it arises from/produces a circle, but I'm not sure this is the case with all real numbers with an intractable number of decimals.

Would the AI assume such a number were infinite or merely intractable? The latter case is concrete as opposed to abstract--either it can finish the computation or not.

This leads to the halting problem.

  • Turing's proof that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist could be taken as an indication that an algorithm based on the Church-Turing model of computation cannot have a perfect understanding of infinity.

If an alternate computational model arose that could solve the halting problem, it might be argued that an algorithm could have a perfect understanding, or at least demonstrate an understanding comparable to humans.


Reputation: 5 886


(There's a summary at the bottom for those who are too lazy or pressed for time to read the whole thing.)

Unfortunately to answer this question I will mainly be deconstructing the various premises.

As I mentioned before, humans understand infinity because they are capable, at least in principle, of counting infinitely many integers.

I disagree with the premise that humans would actually be able to count to infinity. To do so, said human would need an infinite amount of time, an infinite amount of memory (like a Turing machine's tape) and, most importantly, an infinite amount of patience - in my experience most humans get bored before they even count to 1,000.

Part of the problem with this premise is that infinity is not actually a number; it's a concept that expresses an unlimited quantity of 'things'. Said 'things' can be anything: integers, seconds, lolcats. The important point is that, collectively, those things are not finite.

See this relevant SE question for more details:

To put it another way: if I asked you "what number comes before infinity?", what would your answer be? This hypothetical super-human would have to count to that number before they could count to infinity. And they'd need to know the number before that first, and the one before that, and the one before that...

Hopefully this demonstrates why the human would not be able to actually count to infinity - because infinity does not exist at the end of the number line, it is the concept that explains the number line has no end. Neither man nor machine can actually count up to it, even with infinite time and infinite memory.

For example, if a computer can differentiate 10 different numbers or things, then it must somehow really understand these different things.

Being able to 'differentiate' between 10 different things doesn't imply the understanding of those 10 things.

A well-known thought experiment that questions the idea of what it means to 'understand' is John Searle's Chinese Room experiment:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.

The thing to take away from this experiment is that the ability to process symbols does not imply that one actually understands those symbols. Many computers process natural languages every day in the form of text (characters encoded as integers, typically in a Unicode-based encoding like UTF-8), but they do not necessarily understand those languages. On a simpler level, effectively all computers are able to add two numbers together, but they do not necessarily understand what they are doing.

In other words, even in the 'deep learning vision model' the computer arguably does not understand the numbers (or 'symbols') it is being shown, it is merely the algorithm's ability to simulate intelligence that allows it to be classed as artificial intelligence.

For example, take a deep learning vision model that recognizes the number on a card. The model must assign a label to each different card to differentiate each integer. Since there are infinitely many integers, how can the model, running on a digital computer, assign a different label to each integer, like a human being can? If it cannot differentiate infinitely many things, how can it understand infinity?

If you were to perform the same card test on a human, and continually increased the number of cards used, eventually a human wouldn't be able to keep track of them all due to lack of memory. A computer would experience the same problem, but could theoretically outperform the human.

So now I ask you, can a human really differentiate infinite things? Personally I suspect the answer is no, because all humans have limited memory, and yet I would agree that humans most likely can understand infinity to some degree (some can do so better than others).

As such, I think the question "If it cannot differentiate infinite things, how does it understand infinity?" has a flawed premise - being able to differentiate infinite things is not a prerequisite for understanding the concept of infinity.


Essentially your question hinges on what it means to 'understand' something.

Computers can certainly represent infinity: the IEEE floating-point specification defines both positive and negative infinity, and all modern processors are capable of processing floating-point values (either in hardware or through software).
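For instance, Python's floats follow IEEE 754, so the representable infinity already obeys the expected ordering and absorption rules:

```python
import math

inf = math.inf                 # IEEE 754 positive infinity
print(inf > 1e308)             # True: greater than any finite float
print(inf - 1e308 == inf)      # True: absorbs finite subtraction
print(-inf < -1e308)           # True: negative infinity likewise
print(math.isnan(inf - inf))   # True: inf - inf is indeterminate (NaN)
```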

If AIs are ever capable of actually understanding things then theoretically they might be able to understand the concept of infinity, but we're a long way off being able to definitively prove this either way, and we'd have to come to a consensus about what it means to 'understand' something first.



Reputation: 210


I strongly believe that digital computers cannot understand concepts such as infinity, real numbers or, in general, continuous concepts, in a similar way that Flatlanders do not understand the 3-dimensional world. Also have a look at the book Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the 10th Dimension (1994), by Michio Kaku, which discusses these topics in more detail. Of course, in this answer, the concept of understanding is not rigorously defined, but only used intuitively.


Reputation: 19 783


The premise assumes that humans "understand" infinity. Do we?

I think you'd need to tell me what criterion you would use, if you wanted to know whether I "understand" infinity, first.

In the OP, the idea is given that I could "prove" I "understand" infinity, because "in principle, if we have enough resources (time, etc.), we can count infinitely many things, whether abstract (like numbers) or real."

Well, that's simply not true. Worse, if it were true (which it isn't), then it would be equally true for a computer. Here's why:

  1. Yes, you can in principle count integers, and see that counting never ends.
  2. But even if you had enough resources, you could never "count infinitely many things". There would always be more. That's what "infinite" means.
  3. Worse, there are multiple orders ("cardinalities") of infinity. Most of them you can't count, even with infinite time. They are, in the technical sense, uncountable: they cannot be put into one-to-one correspondence with the set of integers, and you cannot order them in such a way that they could be counted, even in principle.
  4. Even worse, how do you do that bit where you decide "in principle" what I can do, when I clearly can't ever do it, or even the tiniest part of it? That step feels like a layman's assumption, one that glosses over the difficulty of making it rigorous. It may not be trivial.
  5. Last, suppose this was your actual test, like in the OP. So if I could "in principle, with enough resources (time, etc.), count infinitely many things", it would be enough for you to decide I "understood" infinity (whatever that means). Then so could a computer with sufficient resources (RAM, time, algorithm). So the test itself would be satisfied trivially by a computer, if you gave the computer the same criterion.

I think maybe a more realistic line of logic is that what this question actually shows is that most (probably all?) humans do not actually understand infinity. So understanding infinity is probably not a good choice of test/requirement for AI.

If you doubt this, ask yourself: do you honestly, truly, and seriously "understand" a hundred trillion years (the possible life of a red dwarf star)? Can you really comprehend what it's like, experiencing a hundred trillion years, or is it just a 1 with lots of zeros? What about a femtosecond? Or a time interval of about 10^-42 seconds? Can you truly "understand" that? It is a timescale so short that, compared to it, one of your heartbeats lasts as long as a billion billion times the present age of this universe. Can you really "understand infinity", yourself? Worth thinking about...


Reputation: 223


By adding some rules for infinity in arithmetic (such as infinity minus a large finite number is infinity, etc.), the digital computer can appear to understand the notion of infinity.

Alternatively, the computer can simply replace the number n with its log-star value. Then, it can differentiate the numbers at a different scale, and can learn that any number with log-star value > 10 is practically equivalent to infinity.
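For reference, log-star (the iterated logarithm) is easy to implement; here is a small sketch (my own illustration):

```python
import math

def log_star(n):
    """Iterated logarithm: the number of times log2 must be applied
    before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(65536))  # 4: 65536 -> 16 -> 4 -> 2 -> 1
```

log-star grows so slowly that even astronomically large numbers have tiny log-star values, which is what makes it a plausible finite proxy for "practically infinite."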

Amrinder Arora


Reputation: 211

Representing only infinity, or a finite set which includes infinity, is not enough for us to believe that the model understands infinity. Unfortunately, your response is totally useless from my perspective. – verdery – 2019-10-05T20:55:42.600

@verdery Very true. I believe that my response is probably a starting point. Hence the community wiki marker. I quite like John Doucette's answer. – Amrinder Arora – 2019-10-06T00:31:39.700


I think the concept that is missing in the discussion, so far, is symbolic representation. We humans represent and understand many concepts symbolically. The concept of Infinity is a great example of this. Pi is another, along with some other well-known irrational numbers. There are many, many others.

As it is, we can easily represent and present these values and concepts, both to other humans and to computers, using symbols. Both computers and humans can manipulate and reason with these symbols. For example, computers have been performing mathematical proofs for a few decades now. Likewise, commercial and/or open source programs are available that can manipulate equations symbolically to solve real-world problems.

So, as @JohnDoucette has reasoned, there isn't anything that special about Infinity vs many other concepts in math and arithmetic. When we hit that representational brick wall, we just define a symbol that represents "that" and move forward.

Note, the concept of infinity has many practical uses. Any time you have a ratio whose denominator "goes to" zero, the value of the expression "approaches" infinity. This isn't a rare thing, really. So, while your average person on the street isn't conversant with these ideas, lots and lots of scientists, engineers, mathematicians and programmers are. It's common enough that software has been dealing with infinity symbolically for at least a couple of decades now. E.g., Mathematica:
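(Mathematica's own syntax isn't reproduced here; as an analogous open-source illustration, the sympy library manipulates its symbolic infinity, oo, in the same spirit:)

```python
from sympy import oo  # sympy's symbolic infinity

# Symbolic infinity obeys the expected rules:
print(oo - 5)    # oo  (infinity minus a finite number is infinity)
print(1 / oo)    # 0
print(oo + oo)   # oo
print(oo - oo)   # nan (indeterminate)
```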

Charlie Reitzel


Reputation: 131


A Turing machine is the main mathematical model of computation of modern digital computers. A Turing machine is defined as an object that manipulates symbols, according to certain rules (which represent the program that the Turing machine executes), on an infinite tape that is subdivided into discrete cells. Therefore, a Turing machine is a symbol manipulation system, which, given a certain input, produces a certain output or does not halt.

If you assume that understanding is equivalent to symbol manipulation, then a Turing machine is capable of understanding many concepts, even though the difficulty of understanding each of these concepts is variable, with respect to time and space. (The branch of theoretical computer science (TCS) that studies the difficulty of certain computational problems is called computational complexity theory. The branch of TCS that studies the computability of certain problems is called computability theory).

To understand the concept of infinity, a Turing machine needs to manipulate the symbol for infinity correctly in all possible cases. A Turing machine cannot represent all real numbers, because the set of real numbers is uncountable while the set of Turing machines is countable. Take a real number $r$ that cannot be represented (or computed) by a Turing machine, for example, Chaitin's constant $\Omega$; then $r$ can never be manipulated by a Turing machine. Consequently, there are cases in mathematics where a Turing machine cannot apply the concept of infinity. For example, a Turing machine cannot understand $\lim_{x \to \infty} \frac{x}{r} = \infty$.

This proves that a Turing machine cannot manipulate the concept of infinity in all possible cases, because a Turing machine can never experience certain real numbers. However, a Turing machine may be able to manipulate the concept of infinity in many cases (that involve countable sets), so a Turing machine may have a partial understanding of the concept of infinity, provided that understanding is equivalent to symbol manipulation.


Posted 2019-10-05T00:18:38.083

Reputation: 19 783

Comments are not for extended discussion; this conversation has been moved to chat.

– nbro – 2020-03-06T01:19:17.590


Computers don't understand "infinity" or even "zero", just like a screwdriver does not understand screws. It is a tool made for processing binary signals.

In fact, a computer's equivalent in wetware is not a person but a brain. Brains don't think, persons do. The brain is just the platform persons are implemented with. It's a somewhat common mistake to conflate the two since their connection tends to be rather inseparable.

If you wanted to assign understanding, you'd at least have to move to actual programs instead of computers. Programs may or may not have representations for zero or infinity, and may or may not be able to manipulate either skillfully. Most symbolic math programs fare better here than many people who are required to work with math as part of their job.


Posted 2019-10-05T00:18:38.083



The Questions That Computers Can Never Answer - Wired (magazine)

Computers might not be able to reach infinity at all, never mind actually understand it.

Computation and computers do have implications for "hard limits of systems."


Tautological Revelations

Posted 2019-10-05T00:18:38.083

Reputation: 268


John Doucette's answer covers my thoughts on this pretty well, but I thought a concrete example might be interesting. I work on a symbolic AI called Cyc, which represents concepts as a web of logical predicates. We often like to brag that Cyc "understands" things because it can elucidate logical relationships between them. It knows, for example, that people don't like paying their taxes, because paying taxes involves losing money and people are generally averse to that. In reality, I think most philosophers would agree that this is an incomplete "understanding" of the world at best. Cyc might know all of the rules that describe people, taxes, and displeasure, but it has no real experience of any of them.

In the case of infinity, though, what more is there to understand? I would argue that as a mathematical concept, infinity has no reality beyond its logical description. If you can correctly apply every rule that describes infinity, you've grokked infinity. If there's anything that an AI like Cyc can't represent, maybe it's the emotional reaction that such concepts tend to evoke for us. Because we live actual lives, we can relate abstract concepts like infinity to concrete ones like mortality. Maybe it's that emotional contextualization that makes it seem like there's something more to "get" about the concept.


Posted 2019-10-05T00:18:38.083

Reputation: 21


I would think that a computer couldn't understand infinity, primarily because the systems, and the parts of those systems, that drive the computer are themselves finite.

lockheed silverman

Posted 2019-10-05T00:18:38.083

Reputation: 27


The "concept" of infinity is 1 thing to understand. I can represent it with 1 symbol (∞).

As I mentioned before, humans understand infinity because they are capable, at least, counting infinite integers, in principle.

By this definition humans do not understand infinity. Humans are not capable of counting infinitely many integers; they will die (run out of compute resources/power) at some point. It would probably be easier, in fact, to get a computer to count towards infinity than to get a human to do so.


Posted 2019-10-05T00:18:38.083

Reputation: 111

Of course, we do not understand infinity because we are able to count to infinity in practice. However, in theory, would we be able to count to infinity, given infinite resources? Furthermore, of course, the symbol $\infty$ is just a symbol that has a meaning in mathematics, but this meaning could have been given to another symbol or, in other words, we could have denoted the concept of infinity by another symbol. So, your arguments are quite superfluous, in my opinion. – nbro – 2019-10-08T02:13:21.740

Given infinite resources, both humans and computers could count to infinity. The symbol ∞ is a placeholder for the "concept" of infinity. Most humans know very little about this concept. They know it is bigger than any other number. They don't have any rules for multiplication or addition of the concept, but they "feel" that 2 * ∞ is bigger than 1 * ∞, etc. Some mathematicians have different definitions of the concept, or even multiple concepts of infinity, depending on the field. – Pace – 2019-10-08T02:32:32.367


Just food for thought: how about if we try to program infinity not in theoretical but in practical terms? If we deem anything that a computer cannot calculate, given its resources, to be infinity, that would fulfill the purpose. Programmatically, it can be implemented as follows: if the input fits in available memory, it's not infinity; conversely, infinity can be defined as anything that returns an out-of-memory error on an evaluation attempt.


Posted 2019-10-05T00:18:38.083

Reputation: 111


It's arguable whether we humans understand infinity. We just invent new concepts to patch old mathematics whenever we run into this problem. For division involving infinity, a machine can handle it the same way we do:

double* xd = new double;
*xd = ...;                  // some value (elided in the original)
if (*xd / y < 0.00...1) {   // i.e., the quotient is below some tiny threshold
    int* xi = new int;
    *xi = (int) (*xd);      // fall back to an integer representation
    delete xd;
}
When a human thinks of infinity, he or she just imagines a huge number in the current context. So the key to writing the algorithm is finding the scale that the AI is currently working with. And by the way, this problem must have been solved years ago: the people designing float/double must have been conscious of what they were doing. Shifting the exponent of a double is a linear operation.


Posted 2019-10-05T00:18:38.083

Reputation: 21


Well -- just to touch on the question of people and infinity -- my father has been a mathematician for 60 years. Throughout this time, he's been the kind of geek who prefers to talk and think about his subject over pretty much anything else. He loves infinity and taught me about it from a young age. I was first introduced to the calculus in 5th grade (not that it made much of an impression). He loves to teach, and at the drop of a hat, he'll launch into a lecture about any kind of math. Just ask.

In fact, I would say that there are few things he is more familiar with than infinity. His mother's face, perhaps? I wouldn't count on it. If a human can understand anything, my father understands infinity.


Posted 2019-10-05T00:18:38.083

Reputation: 111


Humans certainly don't understand infinity. Currently, computers cannot understand things that humans cannot, because computers are programmed by humans. In a dystopian future that may not be the case.

Here are some thoughts about infinity. The set of natural numbers is infinite. It has also been proved that the set of prime numbers, which is a subset of the natural numbers, is infinite. So we have an infinite set within an infinite set. It gets worse: between any 2 real numbers there is an infinite number of real numbers. Have a look at Hilbert's paradox of the Grand Hotel to see how confusing infinity can get.

Paul McCarthy

Posted 2019-10-05T00:18:38.083

Reputation: 111


I think the property humans have which computers do not is some sort of parallel process that runs alongside everything else they are thinking, and tries to assign an importance weighting to everything you are doing. If you ask a computer to run the program: A = 1; DO UNTIL (A < 0); A = A + 1; END;

The computer will. If you ask a human, another process interjects: "I'm bored now... this is taking ages... I'm going to start a new parallel process to examine the problem, project where the answer lies, and look for a faster route to the answer." Then we discover that we are stuck in an infinite loop that will never be "solved", interject with an interrupt that flags the issue, kill the boring process, and go to get a cup of tea :-) Sorry if that is unhelpful.

Andy Evans

Posted 2019-10-05T00:18:38.083

Reputation: 1

The question is not "Can AI understand infinity?" but "In what way is infinity useful to an AI, and how do we represent it for that purpose?" As a human, you have a huge number of "subsumption processes" that are bound to your survival in your environment. One of those systems manages your resources and flags up when an undertaking is demanding or large (possibly tending to infinity), so you are bound to a real concept of what infinity could mean for you. What does it need to mean to an AI? Time resource? Number of nodes assigned? How important/accurate is the answer? – Andy Evans – 2019-10-16T14:45:39.970