Where is the knowledge that AI's "knowledge representations" represent?

I find this really confusing. AI often says its computer systems "know" things, but when AI explains how to program a computer to be intelligent, it talks only about "knowledge representation" - see, e.g., Russell and Norvig's Artificial Intelligence: A Modern Approach.

In part III, for example, a part titled "Knowledge and Reasoning", the authors talk only about knowledge representation, e.g. at the start of the first chapter of part III: "This chapter introduces knowledge-based agents. The concepts that we discuss - the representation of knowledge and the reasoning processes that bring knowledge to life - are central to the entire field of artificial intelligence" [original emphasis].

Why talk about representation? Why not talk about knowledge per se (that which is represented)? Where is the actual thing - knowledge? We seem to know where the representations are - inside the computer. But where is the actual knowledge? Inside the human programmer? Do AI's computer systems really know nothing, in themselves?

Roddus

Posted 2018-05-31T23:07:13.403

Reputation: 457

It is analogous to asking where redness itself is, apart from red objects, in other words, it reifies a fictitious entity introduced for convenience of phrasing. The "actual knowledge" detached from its representations would have to consist of some kind of Platonic ideas, and most AI researchers are not platonists. To them only representations and their conversions are real, but this is not to say that the fiction can not be useful for capturing conversion invariant features. – Conifold – 2018-05-31T23:36:37.627

@Conifold. So a red object is red but the universal, redness, has no independent existence. All there is is red objects. Particular horses exist but there is no such thing as horseness (except as a neural construct or abstraction in human brains). So particular neural structures are embodiments of (are) knowledge, but there is no such thing as knowledge-in-general existing out there in the Universe separate from particular instances inside brains. Well, that's fine. So where are these particular instances inside AI's computer systems - alleged artificial brains - and what are they made of? – Roddus – 2018-05-31T23:48:21.377

You are still reifying too much. These terms are supposed to account for an activity, namely the activity of correlating behavior with environment. The material (neural) side of representations is only one aspect of this activity; another aspect is the relation to their referents that it maintains (as in representations of horses to real horses). However, while representations and referents at least have objects for the material side, although treating them as just that is misleading, things like knowledge do not. It makes sense to talk about AI's knowledge, etc. only in the context of its interactions. – Conifold – 2018-06-01T00:04:51.217

@Conifold I see that the exercise of knowledge can be a matter of correlating behaviour with environment (e.g., in seeking to survive in the wild). Isn't knowledge what determines interactions? You say that talking about knowledge only makes sense in the context of interaction. So if there is no interaction there is no knowledge. If I'm in a coma, I have no knowledge of anything? Yet most would say I might not be expressing knowledge, but that knowledge still exists. Dispositional concepts of knowledge seem unhelpful. Treating it as structure/process, as most do, seems much more useful. – Roddus – 2018-06-01T04:41:58.623

Knowledge is an abstraction. Knowledge has no physical reality. Knowledge representations are concrete and real - words on paper, bits in a computer's memory, etc. – Solomon Slow – 2018-06-01T17:37:30.250

If you are in a coma there are plenty of interactions going on in your brain, but yes, certain things count as "representing" knowledge only because you got to interact with them in the past. A book as an object is nothing more than paper and ink; by itself it contains no representations and certainly no "knowledge", and the brain is no different. It is only because of all the activities that went (and can go again) into developing language, relating symbols to referents, composing and recombining representations, etc., that they become "repositories of knowledge". – Conifold – 2018-06-01T21:38:16.820

@james large OK, so knowledge has no physical reality, and KRs do. Are the KRs (symbols) purely syntactic, and do they therefore in themselves give no indication of what they refer to? If so, then there must be some connection between the symbol "Eiffel Tower" and the tall metal referent in Paris, which connection is not part of the symbol or the tower. What, then, about this third <something> that associates the KR with the tower? Does this connecting thing have physical reality? And if so, why not call this connecting thing knowledge? And if not, how does a symbol refer? – Roddus – 2018-06-02T23:56:08.790

@Conifold When you say "These terms are supposed to account for an activity, namely the activity of correlating behavior with environment", do you mean the term "knowledge"? So the term "knowledge" refers to behaviour? If a system can survive in the wild, then the fact of survival not only indicates that the system has knowledge but the actions of the system in response to the environment are the knowledge. So "knowledge" does not refer to any internal process or structure? This seems a way to avoid the issue of internal structure/process. – Roddus – 2018-06-03T22:10:26.227

Nothing so crude. One can paraphrase "knowledge" and other such terms out of the language, but it will lengthen expressions considerably. "Knowledge" does not refer (directly); it does help express a "process", if you want, but in its dynamic aspects, and is irreducible to static "records" (which is the naive stereotype). I think "survival indicates knowledge" has the same circularity problems as Spencer's "survival of the fittest", or "success indicates talent", and is a tribute to the said stereotype, but that is a side issue. – Conifold – 2018-06-03T22:25:28.633

@Conifold Maybe that's a good idea - paraphrase talk of knowledge, or even stop talking about knowledge (in the context of AI). Maybe the idea of knowledge is a roadblock. One big problem of AI is how to get the machine to generalize. What about defining "generalize" and then specifying the tests that need to be passed? Though this has been attempted linguistically: Moore's paradox; responses to the questions "the police arrested the protesters because they were drunk" and "the police beat the protesters because they were drunk" (who does "they" refer to?), etc. – Roddus – 2018-06-08T00:26:25.337

Answers

In the context of AI and artificial intelligent agents, it appears that "know" is just the primitive connecting those agents to their representations of knowledge.

In the 1995 edition of Artificial Intelligence: A Modern Approach, section 6.3 Representation, Reasoning and Logic, Russell and Norvig write that "the object of knowledge representation is to express knowledge in a computer-tractable form." This is defined by two aspects: syntax, how sentences are represented in a computer, and semantics, which determines "the facts in the world to which the sentences refer." Their subsequent Figure 6.5 and its accompanying explanation clarify: "Facts are part of the world, whereas their representations must be encoded."

In this context, the ordinary knowledge process (reasoning) is the inference of facts from facts. In contrast, the representational knowledge process encodes the relevant facts of the world as sentences, conducts logical (syntactic) inference on those sentences, and translates the resulting sentences back into facts about the world (ibid.). To put it simply, in artificial intelligence, knowledge representation depends explicitly on the encoding and translating parts of the process.
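As a rough illustration of that encode-infer-decode loop, here is a sketch of my own in Python - not code from the book; the predicate names and the toy forward-chaining routine are invented for the example. Everything the program manipulates is an uninterpreted string; the translation back into facts happens in the reader.

    # A toy knowledge base: sentences are plain strings (pure syntax).
    # The English glosses in the comments are the "semantics" - they live
    # with us, not in the program.
    facts = {"Tower(eiffel)", "InParis(eiffel)"}

    # "Anything that is a tower and is in Paris is a landmark" - again, the
    # gloss is ours; the program only sees strings with an 'x' placeholder.
    rules = [({"Tower(x)", "InParis(x)"}, "Landmark(x)")]

    def forward_chain(facts, rules, constants=("eiffel",)):
        """Naive forward chaining: derive new strings from old strings."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                for c in constants:
                    grounded = {p.replace("x", c) for p in premises}
                    new_sentence = conclusion.replace("x", c)
                    if grounded <= derived and new_sentence not in derived:
                        derived.add(new_sentence)
                        changed = True
        return derived

    print(forward_chain(facts, rules))
    # Prints a set containing 'Landmark(eiffel)'. Translating that string back
    # into a fact about the metal tower in Paris is done by the reader, not by
    # the machine.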

Here knowledge depends on semantics, which you might expect the programmer or user of the system to know. So in one interpretation, it is the system of machine, user (and programmer) that has knowledge. However, in AI you might colloquially say that a machine or system "knows" things by considering it an agent. As for whether the machine has genuine knowledge, as opposed to just holding representations and performing syntactic manipulations, that's a question of epistemology and theory of mind.

Edit: For an in-depth discussion on whether computers can understand, see The Chinese Room Argument in the Stanford Encyclopedia of Philosophy. Searle's argument and the replies there are relevant to theory of mind and reflect some of the diversity of opinion on ascribing knowledge to machines.

Greg S

Posted 2018-05-31T23:07:13.403

Reputation: 372

Can I ask: R&N say "the object of knowledge representation is to express knowledge in a computer-tractable form". (1) Does "express knowledge" mean write/type symbols, e.g. "The Eiffel Tower is a tall metal tower in Paris, France"? (2) Do these shapes contain their meanings within themselves? (3) When encoded and inside a computer, does the encoding contain the meaning of the shapes? (4) If not, when the encodings are inside the computer, where does the intelligent computer get the meaning from? – Roddus – 2018-06-03T02:40:23.143

1) Yes, amongst other methods, like speaking or, e.g., truth tables, symbolic logic, relational graphs, database entries, and their bit representations (syntax). 2) No. It requires a semantic interpretation. 3) No. 4) Arguably, it can "know" something without understanding or "getting the meaning". See John Searle's Chinese Room argument. – Greg S – 2018-06-03T05:43:45.880
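To illustrate the formats in that list (a toy example of my own, with made-up identifiers, not drawn from the book or the comment): the same fact can be written in several of these forms, and each is equally syntactic until someone interprets the symbols.

    # The same fact in several of the formats listed above - all equally
    # syntactic until someone interprets the symbols. Identifiers are made up.
    logic_sentence = "TallMetalTower(eiffel_tower, paris)"               # symbolic logic
    graph_triple = ("eiffel_tower", "located_in", "paris")               # relational graph
    db_row = {"name": "Eiffel Tower", "city": "Paris", "height_m": 300}  # database entry
    bit_pattern = logic_sentence.encode("utf-8")                         # bit representation

    # Nothing inside these objects connects them to the actual tower; that link
    # is supplied from outside, by programmers, users, or annotators.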

@ Greg S So who/what does the semantic interpretation? The human can, but the computer can't (because all it has is the intrinsically meaningless encoded shapes). Given that having a semantics is necessary for human-like intelligence, the machine has no human-like intelligence. So on what basis are R&N talking about artificial intelligence? If the interpretation is inside the human, then they are actually talking about human intelligence, not AI. So Searle was right: computers have a syntax alone, and since semantics is necessary for thought, computers will never think? – Roddus – 2018-06-03T22:00:23.610

@Roddus: In the introduction of AI (1995), R&N explicitly note that there is disagreement on how to interpret artificial intelligence. We're left with a similar ambiguity of epistemological framework, suggesting we should interpret R&N's uses of "know" as an informal, primitive (colloquial) term. However, their definition of knowledge representation implies external semantics and is consistent with Searle as a main line of interpretation. See A1-A3 -> C1; also replies to Searle as suggesting other approaches to epistemology or intelligence. – Greg S – 2018-06-04T05:47:34.000

Added a reference for The Chinese Room argument (more in-depth than the one in the comment). I left understanding and knowing as different terms here, although I think the argument is relevant to both, since knowledge can get a bit complicated. – Greg S – 2018-06-04T06:26:10.730

@ Greg S That's well put ("their definition of knowledge representation implies external semantics and is consistent with Searle as a main line of interpretation"). Don't you think there's a problem with the terminology, though, especially when AI says that AI systems know things, and AI does say this quite a lot? Now that AI systems are controlling self-driving vehicles, the public gets primed with the idea that AI systems really do know things (like how not to run into concrete barriers, pedestrians pushing bikes, tractor-trailers, or the backs of parked fire trucks), but they don't. – Roddus – 2018-06-04T10:48:02.497

Actually...I'll edit to include perhaps a better interpretation of R&N - that in the context of artificial agents, know is just the primitive connecting them to their representations of knowledge. In that case, interpreting that knowledge still implicitly depends on external semantics. It could be problematic to say that AI knows things, since it ignores the importance of how the problem is framed and presented (by e.g. programmers and users). There are also potential philosophical and ethical concerns about anthropomorphizing machines. Maybe R&N could add a short section on this. – Greg S – 2018-06-04T23:44:41.913

@Greg S That would help, for sure. There certainly was an early start to anthropomorphizing (e.g., McCarthy, 1959, "Programs With Common Sense"), but the concern is that the relentless anthropomorphizing since then conceals the external semantics of the machine and effectively prevents the big problem from being addressed: how the machine can get its own semantics. Perhaps a case in point: the "visual" "perception" systems of self-driving cars, where deep "learning" uses a gazillion human-annotated images, i.e., external semantics. Surely AI needs to address the issue of internal semantics. – Roddus – 2018-06-07T23:30:47.773
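A minimal sketch of that point about human-annotated images, with purely hypothetical filenames and labels (not any real dataset or self-driving pipeline):

    # Hypothetical filenames and labels - not a real dataset or pipeline.
    # The labels (the "semantics" of the images) are supplied by human
    # annotators; the learner only fits a map from pixel arrays to label ids.
    training_data = [
        ("frame_0001.png", "pedestrian_pushing_bike"),
        ("frame_0002.png", "concrete_barrier"),
        ("frame_0003.png", "back_of_parked_fire_truck"),
    ]
    label_ids = {label: i for i, (_, label) in enumerate(training_data)}
    # What 'concrete_barrier' means is fixed outside the system, by the people
    # who chose and applied the labels.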

    0

Just a pragmatic approach: for example, we want to know more about limb movement by using cameras and AI systems. That is, we are searching for a type of knowledge we don't have (a posteriori), based on some knowledge we already have (a priori) about limbs.

Can we ask a machine for such knowledge? No. A machine does not know what a limb is or what a movement is. We need to represent such a priori knowledge in the machine. For that, you usually define an ontology, using an application like Protégé to model and represent the a priori knowledge. There, we represent all the entities, including those we have knowledge about and those we need to get knowledge of.
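For illustration only - a hand-rolled sketch, not what Protégé actually produces (Protégé edits OWL/RDF ontologies), and the class and property names are invented:

    # A stand-in for a priori knowledge one might model in an ontology editor.
    ontology = {
        "classes": ["Limb", "Arm", "Leg", "Movement"],
        "subclass_of": [("Arm", "Limb"), ("Leg", "Limb")],
        "properties": [("Movement", "performed_by", "Limb")],
    }

    def is_subclass(ontology, cls, ancestor):
        """Walk the subclass hierarchy - the machine traverses structure,
        but what 'Leg' refers to is known only to us."""
        if cls == ancestor:
            return True
        return any(is_subclass(ontology, parent, ancestor)
                   for child, parent in ontology["subclass_of"] if child == cls)

    print(is_subclass(ontology, "Leg", "Limb"))  # True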

After that, the AI system is built upon the ontology. Now the machine has a representation of knowledge inside, and it is loaded with a set of rules allowing it to learn (get a posteriori knowledge) using some mechanism.

The result is a model (a large set of numbers) representing knowledge about some entities in the ontology. For example, it can tell that legs make less relative effort than arms.

    "Where is the knowledge"? In our heads. The machine has nothing more than a set of numbers, a knowledge model. Numbers are not knowledge.

    "Why talk about representation? Why not talk about knowledge per se?" Because knowledge is not a physical object, it does not exist out of our minds. In order for it to exist outside of our minds, we need to create a representation of it.

Actual knowledge exists only in our minds. Machines are just able to represent such knowledge in some way.

RodolfoAP

Posted 2018-05-31T23:07:13.403

Reputation: 2 572

    By "Actual knowledge exists only in our minds. Machines are just able to represent such knowledge in some way." do you mean Searle was right: computers are purely syntactic devices, there is no way to get semantics from syntax, and since semantics is necessary to thought, computers will never think? This seems the problem AI needs to solve: how a computer can get its own semantics. But AI's concepts seem to make it impossible for AI to find a solution. Dennett said as much in the 1980's, I think, saying something like: AI needs a complete re-thinking of the semantic-level setting. – Roddus – 2018-06-07T23:46:50.193

Changing subject, OK. If computers were able to process semantic content, the same should be possible for structures, rocks or gas particles. Is that so? Can we talk about semantics in any context other than our minds? Perhaps. A Martian could perceive that the atomic state of our brain is a representation of knowledge he doesn't get. And a theoretical God would probably state that our semantics cannot be considered as such, and that we are just syntactic processors. – RodolfoAP – 2018-06-08T01:01:04.010