
I hope the Climate Activists are proud of the effect their lies are having on the younger generation.

If this survey is genuine, the messages these young people are receiving are completely wrong.

We need to reduce our impact on our planet, but CO2 is a complete red herring. The current ECS estimate (the equilibrium temperature increase for a doubling of atmospheric CO2) is centred around 3°C (IPCC AR6). The claim that 2°C of warming will destroy civilisation is simply made up.

Parents
  • Aivar Usk:

    Peter Bernard Ladkin:
    Safety engineering has been based on the identification and assessment of risk and its mitigation for a quarter century.

    ……. let us not forget the ALARP principle …..

    The ALARP principle is an English legal principle deriving from Asquith 1949, not an engineering principle. You can't formulate it as a process or an algorithm such that, if you have followed that process or used that algorithm and your system or installation failed in such a way as to kill or injure people, you are thereby immune from possibly-successful prosecution. (There are still plenty of engineers who don't understand that.) ALARP is used in common-law jurisdictions (because they follow Asquith), but some common-law jurisdictions, such as the US, tend not to follow judgements from outside their jurisdiction, of which Asquith is one. 

    ALARP is not present, as either a legal principle or an attempted engineering principle, in non-common-law jurisdictions such as those of France or Germany. In Germany and France, the requirement is that the system you put in place shall be at least as safe as the system or activity it replaces (German: MGS, Mindestens gleiche Sicherheit; French: au moins la même sécurité). 

    So if you want to argue that we should approach the dangers of global warming by using ALARP, most of the world will not be convinced. 

     

    I would suggest that the empirical evidence relating smoking to various health issues is much stronger than that identifying CO2 as the main driver of climate change. 

    My point in introducing the smoking example was only to show that risk is the appropriate decision criterion, not certainty. I wasn't intending further comparison (although of course you are free to do so if you think it pertinent).

     

    Regarding the extent of anthropogenic effects on climate: Professor Ross McKitrick has recently published an article, "The IPCC’s attribution methodology is fundamentally flawed", which refers to his paper in Climate Dynamics criticising the mathematics behind the “Optimal Fingerprinting” methodology relied upon in attributing climate change to greenhouse gases. The paper was peer reviewed and has not been refuted; it was even recommended by at least one of the AT99 authors. 

    Yes, well, having looked at the blog post and the paper, I observe that he certainly seems to like to overstate his case. He spends a lot of his blog time talking in detail about the peer-review history of his paper; his lack of success in getting the targets of his observations to write some kind of reply; and his technical qualifications as an economist, as distinct from what he considers to be the lower qualifications of the authors whose paper he criticises. As if anybody cares (although I guess you do). Really, those kinds of things are just everyday business. And the suggestion that Allen is less good at stats than he is seems simply gratuitous. Over the years, Allen has made a number of key contributions to statistical methods in climate science, and McKitrick has contributed …….. what?

    The McKitrick paper is a piece of what I would call mathematical fundamentalism in statistics. I've encountered it in another context, that of the statistical evaluation of software. He is saying “in order to apply this method and rigorously draw these conclusions, this-and-this-and-this mathematical condition should apply. The authors have not shown these conditions apply.” That may well be right (or not), but so what (see below)? Notice that in the blog he goes further; he says they don't apply, which is clearly his considered view. But he didn't get to say that in the paper; he got to say some other strident things but not that. 
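    For orientation: at its core, the disputed method is a generalised-least-squares regression, in which observations are regressed on model-simulated response patterns, weighted by an estimate of the covariance of internal variability; the mathematical conditions at issue are the classical prerequisites for such a regression. The following is a schematic toy sketch with synthetic data and invented dimensions, not AT99's actual procedure or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fingerprint" regression: observations y as a linear combination
# of simulated response patterns X, plus noise whose covariance C
# stands in for internal climate variability.
n, k = 50, 2
X = rng.normal(size=(n, k))             # response patterns (signals)
beta_true = np.array([1.0, 0.5])        # true scaling factors
C = np.diag(rng.uniform(0.5, 2.0, n))   # noise covariance (diagonal here)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C)

# GLS estimate: beta = (X' C^-1 X)^-1 X' C^-1 y
Ci = np.linalg.inv(C)
beta_hat = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
```

    Whether the regression's prerequisites hold exactly in the real application is precisely what is in dispute; the sketch only shows what kind of calculation is being argued about.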

    The thing about statistics is that it is quite a practical science. You have a bunch of numbers, and you want to know what they say. You try to model the situation, as we say. You try to find a model which more or less fits the situation, and enables you thereby feasibly to estimate some probabilistic parameters of the underlying stochastic process. My experience, and that of most professional statisticians, is that mathematical fundamentalism tends to inhibit that process rather than enhance it. 

    Take SW stateval (the statistical evaluation of software), for instance. You can often model continuously-operating software as a Poisson process (or some other renewal process). Now, a Poisson process has certain mathematical prerequisites that the operation of SW manifestly does not fulfil: for example, that, from any point in time, the probability that the software will fail in the next thirty seconds is constant throughout the operation. This is manifestly not literally the case; here is a proof. Suppose the SW fails at time T by generating a HW interrupt. To actually fail, it has to have started on an execution path leading to that interrupt. So at some time t < T, it becomes inevitable that the interrupt will be generated at time T. It follows that over the interval (t, T) the probability of failure is 100%, which (if your SW is half-way usable) is different from the probability that the SW will fail within time (T − t) of startup. QED. But, in fact, you do very well by modelling the SW operation as a Poisson process anyway and estimating key parameters, such as expected time to failure. It is a practical approximation that can generally be seen to work very well here. 
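    The practical estimation just described can be sketched in a few lines. This is a toy illustration, not anyone's production method: it simulates inter-failure times from an exponential distribution (the Poisson-process assumption) and recovers the expected time to failure as the sample mean; the 500-hour MTTF and the sample size are invented for the example.

```python
import random

def estimate_mttf(inter_failure_times):
    """Estimate mean time to failure as the sample mean of observed
    inter-failure times, under the (approximate) Poisson-process model,
    whose inter-failure times are exponentially distributed."""
    if not inter_failure_times:
        raise ValueError("need at least one observed failure")
    return sum(inter_failure_times) / len(inter_failure_times)

# Simulate observations from a process whose true MTTF is 500 hours.
random.seed(42)
true_mttf = 500.0
observations = [random.expovariate(1.0 / true_mttf) for _ in range(1000)]

estimate = estimate_mttf(observations)
```

    Even though real software violates the constant-hazard assumption, as argued above, an estimate of this kind is typically close enough to the truth to be useful.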

    So mathematical fundamentalism about statistical procedures is sometimes, even often, inappropriate, in that it would hinder you from applying statistical processes that in fact give you some very useful information about the stochastic processes underlying your real-world example. 

    That may be why Allen and Tett didn't see an immediate need to reply. Just guessing, though.

    I have been involved in a similar situation, from 2016 and ongoing. I started some practical work in 2009 with an eminent statistician, trying to rewrite a brief piece on SW stateval in a widely-used international standard. The statistics involved are straightforward first-year-undergraduate material, which my colleague wrote up in brief (it is of course all over the Internet in any case). We had someone on the committee who maintained: “this is wrong; you're missing out this and this and this condition; to apply this math you have to ensure these conditions hold.” Actually, no, you don't. You get quite decent results in circumstances in which the modelling is approximate rather than exact. 
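    For a flavour of the first-year-undergraduate statistics at issue, here is one classic result, sketched under the same exponential/Poisson model (and not a quote from any standard): if the software has run for t hours failure-free, the upper (1 − α) confidence bound on its failure rate λ is −ln(α)/t, since the probability of seeing no failure in t hours is e^(−λt).

```python
import math

def failure_rate_upper_bound(failure_free_hours, confidence=0.99):
    """Upper confidence bound on the failure rate lambda after observing
    only failure-free operation for the given number of hours, under an
    exponential model: solve exp(-lambda * t) = 1 - confidence."""
    alpha = 1.0 - confidence
    return -math.log(alpha) / failure_free_hours

# About 46,052 failure-free hours support lambda <= 1e-4 per hour
# at 99% confidence (since ln(100) is roughly 4.605).
bound = failure_rate_upper_bound(46052)
```

    Note how modest the mathematics is; the committee dispute was over side conditions, not over anything this simple being wrong.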
