What are/were the main criticisms of logical positivism?

Logical positivism, later called logical empiricism, was a school of analytic philosophy famously connected with the Vienna Circle and with a significant following up until the 1950s.

What were the main criticisms that were articulated to refute logical positivism, who articulated them, and why were they so successful in displacing the movement from its previous stature?

eMansipater

Posted 2011-08-27T17:00:09.400

Reputation: 1 440

@Mitch Newtonian mechanics predicts things in a useful way. Logical Positivism does not provide anything helpful. One does not have to believe logic is the underpinning of all reality in order to use logic, or to prefer it. So the theory itself does not introduce any useful perspective. It is just an obsessive overstatement of the obvious, to a degree that it is no longer true. – None – 2018-11-30T22:45:57.430

SEP has a wonderful article on the Vienna Circle which goes through the major doctrines and their criticisms; in particular, I would recommend this subsection, which places the various responses to the Vienna Circle in a wider historical context: http://plato.stanford.edu/entries/vienna-circle/#VieCirHis

– Joseph Weissman – 2011-08-28T16:42:24.903

Though logical positivism is often said to have been 'killed' or 'refuted', it's sort of like saying that Newtonian mechanics is wrong (in that it was superseded by the 'correct' theory of relativity, but really is still an extremely useful approximation to the later theory). – Mitch – 2011-08-28T21:08:24.540

@Mitch Newtonian mechanics is unlike LP in that refutations have been offered for LP which have seemed to many to be definitive. – adrianos – 2012-01-06T11:13:46.653

Answers

Though Popper's critique of the inductive nature of the verification principle was influential, it is the related arguments of Reichenbach, Quine, Hempel, and Sellars which most definitively refute verificationism in its Logical Positivist form. Reichenbach, Quine (in "Two Dogmas of Empiricism") and Sellars (in "Empiricism and the Philosophy of Mind") point out in similar ways that there is no way to semantically separate sentences about sense experience from other sentences, a distinction upon which LP rested in its attempt to show that all sentences could be reduced to a formal language about sense experience. Hempel focused on the fact that the verification principle was not itself verifiable. Some of the key Logical Positivists themselves, including Carnap, seem to have anticipated these problems to some extent.

adrianos

Posted 2011-08-27T17:00:09.400

Reputation: 1 212

Hempel's criticism is mentally challenged, and I won't deal with it. The more valid criticism is that of Reichenbach and Quine, that sentences regarding sense-impressions are not easy to separate from other sentences. But this is not quite true, as it is possible to frame it in another way where the problem doesn't arise: namely, you can make formal sentences (systems, i.e. computational models) predict sense experience, reject the systems which fail, accept the ones that succeed, and declare two systems equivalent when they make the same predictions. Equivalence is the important thing. – Ron Maimon – 2013-12-12T01:56:15.787

The statement that "system A is equivalent to system B if A and B predict the same sense impressions" does not require one to isolate sense impressions to form system A or system B; it just makes an identity between the systems when their sense predictions coincide, so that the systems can refer to all sorts of concepts. You just don't assign objective truth to these concepts unless there is some difference in a prediction, and in this case, the observations determine which. Further, there is Occam's razor here, in that you generally prefer the simplest deductive system. – Ron Maimon – 2013-12-12T01:58:24.967
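A minimal Python sketch of the accept/reject-and-equate scheme described in the two comments above may make it concrete. Everything in it is a hypothetical toy: the candidate systems are small expression strings (so that source length can crudely stand in for simplicity), and a finite prediction horizon stands in for "all predictions":

    # Toy "formal systems": expression strings mapping a time step t to a
    # predicted sense datum. All names and values are hypothetical examples.
    systems = {
        "constant": "1",
        "identity": "t",
        "double": "2 * t",
        "padded_identity": "t + 0",  # different code, same predictions as "identity"
    }

    observed = [0, 1, 2, 3, 4]  # a toy stream of sense data

    def predict(expr, t):
        return eval(expr, {"t": t})

    # 1. Reject the systems which fail to predict the data; accept the rest.
    accepted = {name: expr for name, expr in systems.items()
                if all(predict(expr, t) == datum
                       for t, datum in enumerate(observed))}

    # 2. Declare two systems equivalent when they make the same predictions
    #    (checked over a finite horizon, standing in for "all predictions").
    def signature(expr, horizon=20):
        return tuple(predict(expr, t) for t in range(horizon))

    classes = {}
    for name, expr in accepted.items():
        classes.setdefault(signature(expr), []).append(name)

    # 3. Occam's razor: within an equivalence class, prefer the simplest
    #    system, with expression length as a crude complexity measure.
    for names in classes.values():
        simplest = min(names, key=lambda n: len(systems[n]))
        print(f"equivalent systems {names}: prefer {simplest!r}")

Note that the sketch simply takes the observed sense data as given; whether such data can be identified and isolated in the first place is exactly what the next comment disputes.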

@RonMaimon I find Hempel's criticism very convincing; I think others do too. On your proposal to have systems which predict or fail to predict sense experience, you seem to assume - just as LP did - that there is a way to identify sentences that are about sense experience and so isolate them from the rest. But this is just what is being rejected by the critics of LP. There are no protocol sentences - or, if there were, there would be no formal way to apply them to experience. – adrianos – 2013-12-14T17:22:50.753

Quine: '[Carnap's system] provides no indication, not even the sketchiest, of how a statement of the form 'Quality q is at x; y; z; t' could ever be translated into Carnap's initial language of sense data and logic. The connective 'is at' remains an added undefined connective; the canons counsel us in its use but not in its elimination.' – adrianos – 2013-12-14T17:29:30.857

I understand Hempel's criticism better after your explanation, thanks. That theories can predict or fail to predict sense experience is just true, as I can program a computer to do this: evolve codes to predict properties of pixels on the camera attached to it, and either a code matches the prediction or it does not. When you can program a computer to do it, there should be little debate about whether it is possible in principle. But the question you ask about "how do I know whether a bit is coming from sensory data or from internal computation" is valid... – Ron Maimon – 2013-12-14T20:21:59.187

The same issue arises in biological modelling, and in this context I ran into it, and it did take some thinking to resolve. You have some physical bits of data, and when you are modelling biology, some of the bits, like the phosphorylation of the p53 protein, are clearly, clearly biologically important (the protein will bind to DNA with enough phosphorylations), while other modifications, like the rotation of water molecules, are really obviously irrelevant. The goal of a theory is to throw away the irrelevant bits, and focus only on the relevant ones. The method is teleological, so it is defined... – Ron Maimon – 2013-12-14T20:33:49.830

in relation to faraway things that one already knows are relevant, for example, the cell dying, or reproducing, some obviously relevant event. You then extend the model when you find the bits are required for modelling already-relevant bits, and so iteratively produce a growing model. The resulting definition is benignly circular, but it requires thinking about it to see that you don't end up including everything (because a simpler model will prune irrelevant data and leave the biological behavior the same) and also that you don't end up including nothing (you will slowly include more biology)... – Ron Maimon – 2013-12-14T20:35:54.370

In a practical sense, this also defines how to separate out the data which is reliable sensory data. You make models which predict the behavior of sensory data, and you verify whether the sensory data is legitimate or a hallucination in the same way, by predicting further consistent sense data from the report. In this way, you can exclude drug-hallucination sense data, even if it is reproducible, and focus on that which survives in the infinite-system limit. But it is sort of teleological: it requires an evolutionary end state, that you will converge to something, and this is a theological notion. – Ron Maimon – 2013-12-14T20:39:23.260
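Again as a purely hypothetical toy (the data, the "report", and the constant-step model below are all invented for illustration), the validation step might look like this: a report is kept only if the model fitted to the prior stream keeps predicting the data that follows it:

    # Toy validation of a sense report: accept it only if the model that fits
    # the prior stream also predicts the data that follows the report.
    def fit_constant_step(stream):
        # The simplest model used here: a constant step between data points.
        steps = [b - a for a, b in zip(stream, stream[1:])]
        return steps[0] if steps else 0

    def is_consistent(prior, report, following):
        step = fit_constant_step(prior)
        predicted = [prior[-1] + step * (k + 1)
                     for k in range(1 + len(following))]
        return predicted == [report] + list(following)

    print(is_consistent([0, 1, 2], 3, [4, 5]))  # True: the report coheres
    print(is_consistent([0, 1, 2], 9, [4, 5]))  # False: a "hallucinated" datum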

Regarding your example, I don't know what "quality q is at x,y,z,t" means. All you have is "I saw this sense data come into my brain", and time is just more sense data, as is position. – Ron Maimon – 2013-12-14T20:40:22.420

@RonMaimon My only comment is that nobody ever saw sense data coming into their brain. It's just philosophers who talk this way. – adrianos – 2014-02-20T14:47:49.093

The main element of logical positivism was verificationism. This led to the attitude that only propositions provable by verification are worth discussing in philosophy. So metaphysics and ontology had to be ignored, according to the Vienna Circle, as they cannot be proved by objective methods.

The problem with this view, articulated mainly by Karl Popper, was that to verify a proposition, one needs absolute truths on which to base the verification. This is known as the Münchhausen trilemma. Popper, as the founder of falsificationism within his critical rationalism, a kind of historical fallibilism, stated that there is nothing we can be 100% sure about; there is ALWAYS the chance of being wrong. We can only falsify current knowledge and develop more exact, consistent and coherent theories. This is the current paradigm accepted by most natural scientists.

I found only a German diagram of this famous trilemma; use translation if it is not self-explanatory.

[Diagram: the Münchhausen trilemma]

Hauser

Posted 2011-08-27T17:00:09.400

Reputation: 451

The trilemma plays a very tiny role in Popper's critique of logical positivism. You have also misunderstood what is meant by falsificationism, which has very little to do with fallibilism. Rather, the possibility of falsification is a criterion for the demarcation of science. – adrianos – 2012-01-06T10:48:14.200

@adrianos I disagree. While I admit that falsificationism is historically better called a scientific method, it's obvious that Popper's elaboration of this concept is based on the epistemological views of fallibilism (which at the time covered more the logical side of epistemology, not the practical scientific one). But the result is the same: we have no absolute truths from an epistemological point of view, and today's scholars try to falsify "proven" theories to enlarge our knowledge. The historical link between the two terms is pretty obvious to me. – Hauser – 2012-01-06T16:17:18.037

I think @adrianos is right in his comment. Also, all logical positivists were fallibilists; most of them, however, rejected the claims of Popper's falsificationism, i.e. 1) that corroboration cannot rationally lead to more confidence in the theory being tested; 2) that falsification is definitive (in opposition to verification). – DBK – 2012-03-09T19:05:58.043

Addendum: to imply that verificationism is about "there is NO chance of being wrong" (or something like that) is a caricature at best. Further, I don't know of any logical empiricist who would have claimed that conclusive verification is possible. Indeed, Moritz Schlick claimed in his most radical verificationist phase that because of this impossibility, general laws are not proper statements, but statement-schemata from which meaningful statements can be derived. – DBK – 2012-03-09T19:31:13.560

Different kinds of knowledge have different natures; hence, the tools to verify them differ. First, this paradigm overstated the notion of verification. Second, it failed to take into account that there are spheres which cannot be proven in a scientific way.

Science takes a better alternative position: it simply does not deal with any notion that cannot be scientifically tested. In this way, its neutrality towards metaphysics remains an important factor in its large-scale acceptance, even in traditional societies.

Logical positivism, on the contrary, was aggressive and rejectionist, and so it invited attacks of its own, since no human knowledge is perfect; it has to go through a constant process of revision. In this way, as an alternative discourse on what things are and how they should and should not be, it failed miserably at attracting a large following.

Zubair Ahmed Chandio

Posted 2011-08-27T17:00:09.400

Reputation: 11

I made some edits which you may roll back or continue editing. Welcome to this SE! – Frank Hubeny – 2018-11-30T15:37:35.633