Popper on choosing between theories

December 22, 2011

I, by contrast, propose that the first thing to be taken into account should be the severity of tests… And I hold that what ultimately decides the fate of a theory is the result of a test, i.e. an agreement about basic statements… for me the choice is decisively influenced by the application of the theory and the acceptance of the basic statements in connection with this application…

This stands in opposition to preferring the simpler theory on aesthetic grounds. More importantly, he suggests that what we agree on are basic statements, not universals.

He draws a long analogy to trial by jury:

The verdict is reached in accordance with a procedure which is governed by rules. These rules are based on certain fundamental principles which are chiefly, if not solely, designed to result in the discovery of objective truth. They sometimes leave room not only for subjective convictions but even for subjective bias.

The ideal these days, I guess, is that everyone can play juror if data are made available. Of course, taking data as basic (or near-basic) statements requires a decision.

The empirical basis of objective science has thus nothing ‘absolute’ about it. Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp.

I see no reason not to believe this. The question is, then, to what extent can the theories built upon the swamp be objective — in particular, when most measurements have an associated error? We need to get into Popper’s treatment of probability before we can deal with this question.

Popper on instrumentalism and conventionalism

December 16, 2011

[The scientist’s] aim is to find explanatory theories (if possible, true explanatory theories); that is to say, theories which describe certain structural properties of the world, and which permit us to deduce, with the help of initial conditions, the effects to be explained.

“Initial conditions” are singular statements that apply to a specific event in question. Combining these with universal laws produces predictions. Popper doesn’t require that every event can be deductively predicted from universal laws. But science has to search for such laws that causally explain events. Popper contends that while scientific laws are not verifiable, they are falsifiable.
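
A schematic rendering of that structure, in my own notation rather than Popper's: a universal law together with a singular initial condition deductively entails the prediction (the effect to be explained).

```latex
% Illustrative deductive schema: law + initial condition |- prediction
\underbrace{\forall x\,\bigl(F(x)\rightarrow G(x)\bigr)}_{\text{universal law}},\quad
\underbrace{F(a)}_{\text{initial condition}}
\;\vdash\;
\underbrace{G(a)}_{\text{predicted effect}}
```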

One angle from which the primacy of falsification might be challenged is instrumentalism. Berkeley suggested abstract theories are instruments for the prediction of observable phenomena, and not genuine assertions about the world. The difference is that between “all models are wrong” and “all models are falsifiable”.

Popper rejects instrumentalism on the grounds that abstract, theoretical terms pervade even ordinary speech.

There is no sharp dividing line between an ‘empirical language’ and a ‘theoretical language’: we are theorizing all the time, even when we make the most trivial singular statement.

We are always using models, so we’re always wrong. Personally, I can live with this. Under instrumentalism, the crucial question becomes “how wrong”. As long as measurements are taken to be real features of the world, the answer to this can be used in falsificationism.
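
A minimal sketch of what answering "how wrong" could look like in practice, assuming Gaussian measurement errors with stated standard deviations and a chi-squared discrepancy as the falsification criterion (the data, model, and threshold are all hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical measurements with stated standard errors, taken as real features of the world.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
sigma = np.full_like(y, 0.3)

def theory(x):
    """A candidate theory's prediction, y = 2x (purely illustrative)."""
    return 2.0 * x

# "How wrong": the discrepancy between theory and measurement, in units of measurement error.
chi2 = float(np.sum(((y - theory(x)) / sigma) ** 2))
p_value = stats.chi2.sf(chi2, df=len(y))  # no parameters were fitted to these data

# A falsificationist reading: reject the theory if the discrepancy is implausibly
# large given the stated measurement errors.
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```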

But what if measurements are dependent on assumptions? This is an implication of conventionalism. Duhem held that universal laws are merely human conventions. Since measurements depend on these laws, a conventionalist might argue that theoretical systems are not only unverifiable but also unfalsifiable. Popper makes a value judgement against conventionalism, not because it’s demonstrably wrong but because it allows explaining away, rendering it useless for science. He quotes Joseph Black:

A nice adaptation of conditions will make almost any hypothesis agree with the phenomena. This will please the imagination but does not advance our knowledge.

Statistics makes such adaptation even easier: an awkward observation can always be written off as merely improbable. The rise of probabilistic models makes it all the more valuable to guard against ad hoc adaptations.

Brad reads Popper

December 14, 2011

I’m finding important contrasts between The Logic of Scientific Discovery and my fourth-hand preconceptions of the book. Popper differentiates between four kinds of tests:

  1. “the logical comparison of the conclusions among themselves, by which the internal consistency of the system is tested”
  2. “the investigation of the logical form of the theory, with the object of determining whether it has the character of an empirical or scientific theory”
  3. “the comparison with other theories, chiefly with the aim of determining whether the theory would constitute a scientific advance should it survive our various tests”
  4. “the testing of the theory by way of empirical applications of the conclusions which can be derived from it”

Statistics is largely concerned with the last of these, and so it should be. But it's worth reminding ourselves and JPSP editors that the first three kinds of tests exist and are worth doing.

The demarcation problem — “finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other” — is something I think about a lot. I hadn’t previously connected this to the induction problem, and will have to think about whether accepting a convention for demarcation lets us build science without induction.

Popper says that scientific statements are objective in the sense that they can be criticised “inter-subjectively”. In practice this seems to mean that other scientists can test the statements. This means “there can be no ultimate statements in science”, which I am satisfied with.

Quote of the week: The methodology of economic research programmes

May 19, 2011

A critical implication of Blanchard’s haiku metaphor is that the DSGE approach had failed to generate a truly progressive [in the Lakatos sense] scientific research program. A new project in the DSGE framework will typically, as Blanchard indicates, begin with the standard general equilibrium model, disregarding the modifications made to that model in previous work examining other ways in which the real economy deviated from the modeled ideal.

By contrast, a scientifically progressive program would require a cumulative approach, in which empirically valid adjustments to the optimal general equilibrium framework were incorporated into the standard model taken as the starting point for research. Such an approach would imply the development of a model that moved steadily further and further away from the standard general equilibrium framework, and therefore became less and less amenable to the standard techniques of analysis associated with that model.

John Quiggin, Zombie Economics, pp. 105–106

Edit: Bonus quote:

The prevailing emphasis on mathematical and logical rigor has given economics an internal consistency that is missing in other social sciences. But there is little value in being consistently wrong. (p. 211)

300 years of Hume: From “Of the Association of Ideas”

May 13, 2011

Though it be too obvious to escape observation, that different ideas are connected together; I do not find that any philosopher has attempted to enumerate or class all the principles of association; a subject, however, that seems worthy of curiosity. To me, there appear to be only three principles of connexion among ideas, namely, Resemblance, Contiguity in time or place, and Cause or Effect.

An Enquiry Concerning Human Understanding, section III

Hume makes it obvious here that the set of associations is a superset of the set of causations. In other parts of the Enquiry, the distinction is less clear.

Peripheral aside: Pace Tufte, the statement “correlation is not causation” is correct. The concept that we call correlation is not the same as the concept that we call causation. See above!
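
A minimal simulated illustration of the distinction, under the hypothetical assumption of a common cause z that drives both x and y while x has no effect on y:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical common cause: z drives both x and y; x does not cause y.
z = rng.normal(size=100_000)
x = z + rng.normal(scale=0.5, size=z.size)
y = z + rng.normal(scale=0.5, size=z.size)

# Association without causation: x and y are strongly correlated...
print("corr(x, y) =", round(np.corrcoef(x, y)[0, 1], 2))  # about 0.8

# ...yet setting x by hand (a crude stand-in for intervening on it) leaves y
# untouched, because the only connexion between them runs through z.
x_set = rng.permutation(x)
print("corr(x_set, y) =", round(np.corrcoef(x_set, y)[0, 1], 2))  # about 0
```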

Zombie Statistics in Zombie Economics

May 6, 2011

Let’s put some quotes here for me to think about later.

The econometric tests reported in studies of the Great Moderation showed a statistically significant change occurring in the mid-1980s. However, it is an open secret in econometrics that such tests mean very little, since the same set of time series data that suggests a given hypothesis must be used to test it. This is quite unlike the biomedical problems for which the statistical theory of significance was developed, where a hypothesis is developed first, and then an experiment is designed to test it.
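
A toy simulation of the problem described above, with everything hypothetical: each series is pure noise containing no break at all, yet if we scan the same data for the most convincing break date and then test that date with a naive two-sample t-test, we "detect" a break far more often than the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims, rejections = 120, 2000, 0

for _ in range(n_sims):
    series = rng.normal(size=n)  # no true break anywhere in the series
    # Let the same data suggest the hypothesis: pick the most dramatic break date...
    best_p = min(
        stats.ttest_ind(series[:k], series[k:]).pvalue
        for k in range(10, n - 10)
    )
    # ...then "test" that data-suggested break as if it had been specified in advance.
    rejections += best_p < 0.05

print(f"nominal level: 0.05, actual rejection rate: {rejections / n_sims:.2f}")
```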

Book club: Gigerenzer and the illusion of certainty

December 25, 2010

Gerd Gigerenzer has produced the most challenging response to Kahneman and Tversky’s work on (quote-unquote) rationality as it relates to human judgement of uncertainty. In this series, I’ll work through Gigerenzer’s pop-sci books Calculated Risks and Gut Feelings and pick out bits I can apply to my thinking and teaching.

I’m not an absolutist who holds that nothing in the future is certain. It’s certain that the sun will rise tomorrow. Pedantic spiritualists and/or quantumists may argue that there is some non-zero probability that in the next few hours the sun will turn into a supermassive slice of cheesecake. Yet if the concept of certainty is to have any use, complement probabilities of the order of 10^(-10^10) should be considered negligible. Certainty is best employed as an approximate concept—like approximately everything else.

Yet problem zero in statistical thinking is not overstatement but understatement of uncertainty. It’s not just undergrads who make this mistake. We pros know that our estimates are almost surely wrong, and append uncertainties to them. Yet we have a habit of assuming our quantification of uncertainty is exact, when in all but the simplest real-world problems, our chance model will be wrong. We should be more concerned with teaching students how the real world can deviate from our models than with teaching them when to divide by n and when to divide by n-1.
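
A small simulation of that habit, with everything hypothetical: treat autocorrelated data as if it were independent, compute the textbook 95% interval for the mean, and watch the actual coverage fall well below the nominal level:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_sims, covered = 100, 5000, 0

for _ in range(n_sims):
    # True data-generating process: mean-zero AR(1) noise (observations are not independent).
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = 0.7 * x[t - 1] + e[t]

    # Our (wrong) chance model: independent observations, textbook interval for the mean.
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)
    covered += abs(x.mean()) <= half_width

print(f"nominal coverage: 0.95, actual coverage: {covered / n_sims:.2f}")
```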
