The Mad Monk’s modelling mockery

Tony Abbott has tried his hand at modelling the economic costs of carbon emissions reduction. The results are a little disturbing. Unless Abbott was being deliberately, deceptively simplistic in order to appeal to the burn-the-elitists demographic of Australian society, he truly doesn’t have a clue what he’s talking about:

He says given a 5 per cent reduction in greenhouse gas emissions will cost Australian taxpayers $120 billion, the cost of the emissions trading scheme’s 10-year aim of a 25 per cent reduction will be much greater.

“The Federal Government has never released the modelling,” Mr Abbott said.

“Now if there is modelling that shows the costs of a 15 per cent and a 25 per cent emissions reduction, let’s see the modelling, let’s release the figures.

“I think it’s reasonable to assume in the absence of other plausible evidence that five times that reduction, a 25 per cent reduction in emissions, might cost five times the price – half a trillion dollars, 50 per cent of Australia’s annual GDP.”

I’m no economist, but I suspect the experts might shy away from confidently predicting that 5 times the reduction implies 5 times the cost. We’re talking about billions of dollars flowing through all the intricate structures that make up the economy. There are feedback mechanisms, economies of scale, and the little fact that a “5%” reduction in CO2 is measured relative to 2000 levels, while the projected cost is based on 2020 levels (because that’s when it happens). Even a “0%” change from 2000 levels represents a substantial cut to what our 2020 CO2 emissions would otherwise have been, yet according to Abbott’s model this scenario would cost nothing.
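To make the implicit arithmetic explicit, here is the model Abbott appears to be assuming: a straight line through the origin (the $24 billion per percentage point below is just his own $120 billion divided by five, not a figure from any actual modelling):

```latex
\[
  \mathrm{Cost}(r) \;\approx\; \frac{\$120\,\mathrm{bn}}{5} \times r
                   \;=\; \$24\,\mathrm{bn} \times r,
  \qquad r = \text{percentage cut relative to 2000 levels}
\]
```

The zero intercept is the giveaway: the model assigns no cost at all to holding 2020 emissions at 2000 levels, which is exactly the absurdity noted above.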

Why even have economists if a constant factor is all it takes to convert a percentage CO2 reduction into a dollar amount? If Tony, our alternative Prime Minister, thinks it’s “reasonable to assume” such things, perhaps we can get him to try out this approach to economic modelling in a controlled environment where he can’t hurt anyone else. Say, in a padded cell with Monopoly money.

Meta-engineering

I’m beginning to think I should have approached this maths modelling stuff from an engineering point of view: with a requirements document, version control and unit testing. Constructing a reasonably complicated mathematical model seems to have enough in common with software development that such things could be quite useful.

I’m calling this “meta-engineering”, because I’d be engineering the development of a model which itself describes (part of) the software engineering process.

The only problem is that formal maths notation can’t just be compiled and executed like source code, and source code is far too verbose (and lacking in variety of symbols) to give you a decent view of the maths.

Fortunately, Bayesian networks provide a kind of high-level design notation; perhaps the UML of probability analysis. Mine look like some sort of demented public transport system. However, drawing them in LaTeX using TikZ/PGF gives me a warm fuzzy feeling.

What am I doing?

Over the past few weeks I’ve had numerous questions of the form: “how’s your work going?” I find I can only ever answer this with banalities like “good” or “meh”.

It’s not that I don’t know what I’m doing. At any given point in time, I have a list of minor challenges written up on the whiteboard (which accumulate with monotonous regularity). However, my first problem is that I never remember what these are when I’m not actually working on them. I write them down so that I don’t have to remember, of course.

My second problem is that, even if I did remember what I was supposed to be doing, there just isn’t any short explanation. Currently I have on the whiteboard such startling conversation pieces as “Express CI in terms of S and U”. This may or may not tickle your curiosity (depending on how much of a nerd you are), but explaining what it means – and granted, I’ll have to do that eventually anyway – demands as much mental energy as solving the problem itself.

My third problem is that I regularly shuffle around the meaning of the letters, to ensure I don’t run out of them and also to resolve inconsistencies. I’m currently using the entire English alphabet in my equations and a large proportion of the Greek one, so naming variables is a minor headache in itself. For instance, since I wrote the todo item “Express CI in terms of S and U”, I’ve decided to rename the variable “CI” to “CS”. Also, “S” used to be “T”, and “U” used to be two separate variables. This is mostly cosmetic, but I recoil at the prospect of explaining something so obviously in flux.

I choose to believe that I’ll be able to explain everything once I’ve written my thesis… and hopefully as I’m writing my thesis.

Artificial intelligence

A thought occurs, spurred on by my use of Bayesian networks. They’re used in AI (so I’m led to believe), though I’m using them to model the comprehension process in humans. However, I do also work in a building filled with other people applying AI techniques.

My question is this: how long until Sarah Connor arrives and blows up level 4? And if she doesn’t, does that mean that the machines have already won? Or does it simply mean that we’re all horrible failures and that nothing will ever come of AI?

A good friend (you know who you are) is working with and discovering things about ferrofluids. In my naivety, I now find myself wondering if you could incorporate some kind of neural structure into it, and get it to reform itself at will…

The doomsday argument

This has recently been the source of much frustration for some of my friends, as I’ve attempted to casually plough through a probabilistic argument that most people would instinctively recoil at. So, I thought, it might work better when written down. Of course, plenty of others have also written it down, including Brandon Carter (its originator) and Stephen Baxter, a science fiction author who referred to it as the “Carter Catastrophe” in his novel Time.

The main premise of the argument is the Copernican principle. Copernicus, of course, heretically suggested that the Earth was not the centre of the universe. Thus, the Copernican principle is the idea that the circumstances of our existence are not special in any way (except insofar as they need to be special for us to exist in the first place).

We are now quite comfortable with the Copernican principle applied to space, but the doomsday argument applies it to time. Just as we do not live in any particularly special location, so we do not live at any particularly special moment. This intuition is distorted by the fact that the human population has exploded in the last century, to the point where about 10% of all the humans ever to have lived (over the course of Homo sapiens’ ~200,000-year history) are still alive today. We can deal with this distortion by (conceptually) assigning a number to each human, in chronological order of birth, from 1 to N (where N is the total number of humans that have lived, are currently alive, or will ever be born in the future). We can then say, instead, that we are equally likely to have been assigned any number in that range.

In probability theory, this is equivalent to saying that you have been randomly selected from a uniform distribution. Yes, it must be you (the observer) and not someone else, because from your point of view you’re the only person who has a number selected from the entire range – past, present and future. You could have been assigned a number at any point along the human timeline (by the Copernican principle), but you still cannot observe the future, and so by selecting any other specific individual you’d automatically be restricting the range to the past and present. The number you’ve actually been assigned is something on the order of 60 billion (if we estimate that to be the total number of humans to have ever lived so far).

So where does that leave us? Well, in a uniform distribution, any randomly selected value is 95% likely to fall in the final 95% of the range. If your randomly selected number is 60 billion, then it’s 95% likely that the total number of humans ever to live will be less than 60 billion × 20 = 1.2 trillion. Similarly, it’s 80% likely that the total number will be less than 60 billion × 5 = 300 billion, and 50% likely that it will be less than 60 billion × 2 = 120 billion. Now, 50%, 20% and 5% probabilities do crop up, but we must draw the line somewhere, because you cannot demand absolute certainty (or else science would be impossible).
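For those who want the multipliers in symbols: if n is your birth rank, N is the total number of humans ever to be born, and q is any fraction, then the uniform-selection premise gives (a minimal derivation, nothing beyond the argument above):

```latex
\[
  P(n > qN) \;\approx\; 1 - q
  \qquad\Longrightarrow\qquad
  P\!\left(N < \frac{n}{q}\right) \;\approx\; 1 - q
\]
\[
  n \approx 6 \times 10^{10}: \qquad
  \frac{n}{0.05} = 1.2 \times 10^{12}, \qquad
  \frac{n}{0.2} = 3 \times 10^{11}, \qquad
  \frac{n}{0.5} = 1.2 \times 10^{11}
\]
```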

This should make us think. The doomsday argument doesn’t give an exact number, nor does it directly give us a date, but one can be estimated from trends in population growth. However, the prospect of a scenario in which humanity spreads out beyond the solar system and colonises the galaxy, producing a population of countless trillions over tens of thousands or even millions of years, seems vanishingly unlikely under this logic. Even the prospect that humanity will survive at roughly its current population on Earth for more than a few thousand years seems remote.

It’s also worth pointing out, as others have, that the doomsday argument is entirely independent of the mechanism by which humanity’s downfall might occur. That is, if you accept the argument, then there is nothing we can do to stop it.

Needless to say, the objections to this reasoning come thick and fast, especially if, like me, you bumble through a hasty verbal explanation (hopefully I’ve been more accurate and articulate in this blog post). One should bear in mind that this isn’t simply some apocalyptic pronouncement from a random, unstable individual (it wasn’t my idea). It was published some time ago, independently, by three physicists (Brandon Carter, J. Richard Gott and Holger Bech Nielsen) in peer-reviewed journals. That’s not to say it’s without fault, but given the level of scrutiny already applied, one might at least pause before dismissing it out of hand.

The objections I’ve heard (so far) to the doomsday argument usually fall along the following lines:

  1. Often the objection discards the notion that the observer is randomly selected, thus reaching a different (and trivial) conclusion. One can point out that there always has to be a human #1, and a human #2, and so on, and that this says nothing about the numbers that come after. However, in pointing this out, one is not randomly selecting those numbers, and random selection is the premise of the argument.
  2. They object that a sample size of one is useless. Indeed, in the normal course of scientific endeavour, a sample size of one is useless, but that’s just because in a practical setting we’re trying to achieve precision. If we’re just trying to make use of what we know, one sample is infinitely more useful than no samples at all. The doomsday argument does not at any point assume that its single randomly-selected number represents anything more than a single randomly-selected number. If we had more than one random sample, we’d be able to make a stronger case, but that does not imply there’s currently no case at all (the simulation sketch after this list makes the same point).
  3. Sometimes they object on the grounds of causality – that we simply can’t know the future. I think this is just a manifestation of personal incredulity. There is no physical law that says we cannot know the future, and here we’re not talking about some divine revelation or prophecy. We’re only talking about broad probabilistic statements about the future, and we make these all the time (meteorology, climatology, city planning, resource management, risk analysis, software development, etc. ad infinitum).
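On the sample-size-of-one objection, here’s a quick simulation sketch (mine, purely illustrative) showing that a bound derived from a single uniform draw really does hold about 95% of the time:

```python
import random

def doomsday_bound_holds(total_humans: int, multiplier: float = 20.0) -> bool:
    """Draw one birth rank uniformly from 1..total_humans, then check whether
    the doomsday-style bound (total <= rank * multiplier) actually holds."""
    rank = random.randint(1, total_humans)
    return total_humans <= rank * multiplier

def hit_rate(trials: int = 100_000, total_humans: int = 1_000_000) -> float:
    """Fraction of trials in which the 95% bound (multiplier 20) holds."""
    hits = sum(doomsday_bound_holds(total_humans) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Prints something close to 0.95, whatever total_humans happens to be:
    # each trial uses only a single sample, yet the bound is right ~95% of the time.
    print(hit_rate())
```

The point isn’t precision; a single sample gives only a wide interval, but the interval is still an honest one.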

However, I’m sure that won’t be the end of it.

The Zim desktop wiki

I’ve discovered that Zim is a great little brainstorming tool, for me at least. While I occasionally “think in images”, my brain usually works in words and symbols. A wiki – especially one that sports a LaTeX equation editor – seems to be a powerful way to assist a text-based brainstorming session. Being a desktop application (rather than a web application), Zim is also very simple to set up and use.

I spent today and yesterday using it to construct some arcane maths involving matrix multiplication. Said maths mostly turned out to be wrong, of course, but that’s all part of the process.

Theoretical frameworks, part 3

The first and second instalments of this saga discussed the thinking and writing processes. However, I also need to face up to reality and do some measuring.

A theoretical framework is not a theory. The point of a theoretical framework is to frame theories – to provide all the concepts and variables that a theory might then make predictions about. (If I were a physicist, these might be things like light and mass.) You can test whether a theory is right or wrong by comparing its predictions to reality. You can’t do that for theoretical frameworks, because there are no predictions, only concepts and variables. The best you can do is determine whether those concepts and variables are useful. This really means you have to demonstrate some sort of use.

And so it falls to me to prove that there’s a point to all my cogitations, and to do so I need data. In fact, I need quite complex data, and in deference to approaching deadlines and my somewhat fatigued brain, I need someone else’s quite complex data.

The truth is, I’m probably not going to get it; at least, not all of it. Ideally, I need data on:

  • the length of time programmers take to assimilate specific pieces of knowledge about a piece of software;
  • the specific types of knowledge required to assimilate other specific types of knowledge;
  • the probability that programmers will succeed in understanding something, including the probability that they find a defect;
  • the probability that a given software defect will be judged sufficiently important to correct;
  • the precise consequences, in terms of subsequent defect removal efforts, of leaving a defect uncorrected;
  • the cost to the end user of a given software defect;
  • the propensity of programmers to find higher-cost defects; and
  • the total number of defects present in a piece of software in the first place.

I also need each of these broken down according to some classification scheme for knowledge and software defects. I also need not just ranges of values but entire probability distributions. Such is the pain of a theoretical framework that attempts to connect rudimentary cognitive psychology to economics via software engineering.
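If it helps to picture what “entire probability distributions, broken down by classification” could look like as actual data, here is a purely hypothetical sketch; the class names, fields and classification scheme are mine, invented for illustration, not taken from any existing data set:

```python
from dataclasses import dataclass

@dataclass
class Distribution:
    """A probability distribution, stored as a family name plus parameters,
    e.g. Distribution("lognormal", {"mu": 1.2, "sigma": 0.4})."""
    family: str
    params: dict

@dataclass
class DefectClassData:
    """What I would ideally know about one (hypothetical) class of defect."""
    defect_type: str                  # e.g. "omission", "ambiguity"
    detection_time: Distribution      # time for a reader to find one such defect
    detection_probability: float      # chance a given reader finds it at all
    correction_probability: float     # chance it is judged important enough to fix
    end_user_cost: Distribution       # cost if it survives to the end user
```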

With luck, I may be able to stitch together enough different sources of data to create a usable data set. I hope to demonstrate usefulness by using this data to make recommendations about how best to find defects in software.

Theoretical frameworks

One of the chapters of my much-delayed thesis describes (or rather will describe) a theoretical framework, which is academic-speak for “a way of understanding stuff” in a given field. In my case, stuff = software inspections, and my way of understanding them is a mixture of abstractions of abstractions of abstractions and some slightly crazy maths, just to give it that extra bit of abstractedness that seemed to be lacking.

It’s very easy when engaged in abstract theorising to forget what it is you’re actually modelling. All those boxes and lines look positively elegant on a whiteboard, but when you come to describe what the concepts represent and how someone would actually use it, things frequently go a bit pear-shaped. The problem, as far as I’ve been able to tell, is the limited short-term memory available for all this mental tinkering. What you need is to keep the concrete and the abstract in your head simultaneously, but this is easier said than done (especially if one’s head is full of concrete to begin with). When the abstract gets very abstract and there’s lots of it, the real-world stuff slips quietly out of your consciousness without telling you.

Sometimes it’s only a small thing that gets you. Sometimes you realise that it all mostly makes sense, if only this box was called something else. Then there are times when you finish your sketch with a dramatic flourish, try to find some way of describing the point of the whole thing, and shortly after sit back in an embarrassing silence.

My latest accomplishment, or perhaps crime against reason, is the introduction of integrals into my slightly crazy maths (already liberally strewn with capital sigmas). An integral, for the uninitiated, looks a bit like an S, but is pronounced “dear god, no”. You can think of it as the sum of an infinite number of infinitely small things, which of course is impossible. However, it does allow my theoretical framework to abstract… no, never mind.
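For the record, the non-flippant version is that an integral is just the limit of those capital-sigma sums as the pieces shrink:

```latex
\[
  \int_{a}^{b} f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i)\,\Delta x,
  \qquad \Delta x = \frac{b-a}{n}
\]
```

which is why swapping some of the sigmas for integrals is less of a leap than it sounds.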