
This author has written 591 posts for Larvatus Prodeo.


26 responses to “How the Wivenhoe engineers fell foul of the Floods Commission”

  1. desipis

    W2 is not clear.

    I think it’s clearer when it’s read in context. W1A–E are about gradually accepting losses at the various crossings. When the dam reaches the designated level, the risk to other objectives is a greater concern than maintaining the lower-level objectives. With W2 the focus is on Lowood, until Brisbane is at too much risk. What I find unclear is whether the W1 strategies should include the “natural flows” limit. There is an implication that there is a floor of 2000 cumecs in the application of the “natural flows” limit.

    It is an optional strategy and can be bypassed.

    The difference is that there is (presumably) no specific dam level limit identifiable for making the W2 to W3 transition, as the dynamics of the event would count more than a particular dam level in determining if Lowood could be saved. The way it skips to W3 is basically saying if you think you won’t be able to save Lowood don’t bother trying.

    When you have 3,500 at Lowood, 4,000 at Moggill would not be far away.

    It depends on where the rain falls, and so it might be worth trying to save Lowood.

    Even more curious is the requirement that releases from Wivenhoe must be below the natural flow at Lowood, excluding water from the Wivenhoe. There is a similar clause relating to Moggill. (The statement in the manual is actually ambiguous, but the alternative meaning is nonsensical.)

    The effect of these sections is pretty clear:
    a) don’t cause more flooding than would have occurred without Wivenhoe dam existing (“natural peak flow”);
    b) don’t cause flooding damaging to urban areas (the 3500 and 4000 values).

    Once you’ve exceeded the natural peaks at Lowood or Moggill, there is no benefit to reducing outflows back to those peaks, as whatever damage was done won’t be undone by doing so. Additionally, if you’re concerned about the dam reaching capacity later in the flood event, reducing outflows could result in a higher peak later on. The overall point I would take from W2 & W3 is to minimise the risk to urban areas, but do so by focusing on minimising the overall peak at Lowood (W2), and only rush to maximum release in order to lower the dam level if it is clearly necessary to avoid urban inundation due to future inflows (W3).
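    [Ed.: the transition logic described in this comment can be sketched in code. This is a hypothetical reading, not the manual’s actual procedure; the function name and structure are invented, and only the threshold figures (lake level 68.5, 3,500 cumecs at Lowood, 4,000 at Moggill) come from the discussion above.]

```python
# Hypothetical sketch of the W1/W2/W3 transition logic as read above.
# Threshold values (EL 68.5, 3,500 and 4,000 cumecs) come from the thread;
# the function name and structure are illustrative, not the manual's.

def select_strategy(lake_level_m, predicted_lowood_flow, predicted_moggill_flow,
                    lowood_savable=True):
    """Pick a flood strategy label from the dam level and flow predictions."""
    if lake_level_m <= 68.5:
        return "W1"  # gradually accept losses at the low-level crossings
    if predicted_moggill_flow > 4000 or not lowood_savable:
        return "W3"  # urban inundation risk governs: move to maximum release
    if predicted_lowood_flow > 3500:
        return "W3"
    return "W2"      # focus on minimising the overall peak at Lowood

print(select_strategy(69.0, 3000, 3500))  # W2: above trigger level, Lowood savable
print(select_strategy(69.0, 3600, 4200))  # W3: urban flow thresholds exceeded
print(select_strategy(68.0, 1000, 1500))  # W1: below the trigger level
```

    Note the "skip to W3" behaviour desipis describes falls out of the `lowood_savable` flag: if you think you won’t be able to save Lowood, you don’t bother trying.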

  2. desipis

    If Lowood (population about 1,000) is important, why not mention it in W3?

    This goes to the essence of the entire section. The strategies are essentially the logical conclusion of the relative importance of the objectives and the geography of the dam and the river. Any action is essentially possible in any strategy because all the objectives remain and their order of importance doesn’t change (although it’s arguable that some actions may effect a change in strategy). The ‘primary’ objective is simply a guide to what is likely to be the focus given a particular set of circumstances. The strategy section is descriptive in nature. It’s about how the dam is expected to be operated; presumably written with an educational/informative intent. It isn’t sufficiently detailed or clear to be read as prescriptive, and I think it would be a mistake to interpret or apply it that way. It’s important to note that the “maximum release” values (and in the case of W3 the primary objective itself) are part of the conditions for a strategy, indicating the strategies need to be considered in a holistic manner, not in a straightforward “X+Y therefore Z” manner.

    Part of the “communication” problem is caused by the fact there are two (or more) different ways to describe (or label) what strategy you’re in. One is to describe it based on the start of the decision making process. The other is to describe it based on the end of the decision making process, or in other words based on which objective is effectively governing releases. It’s quite possible for you to start your thinking process based on W2 while having the primary objective from W1 governing releases (because the primary objective from W2 isn’t at risk). In such a case I think it’s reasonable to describe the situation as “applying W1” even though the reasoning is rooted in W2. Alternatively you could describe it as “being in W2”, or in neither, or both. This creates a problem when the commission decides that the only way to describe a strategy is based on the starting point, and then interprets all the uses of the labels on that basis, despite the fact the people using the labels held a different understanding at the time they were used.

    The best way of describing it depends on what you’re trying to communicate. If you’re trying to provide a straightforward explanation of why the releases are/were at particular levels (e.g. in the briefing note to the minister), then it’s reasonable to describe the strategy based on the governing objective (i.e. “the rates we’ve set are intended to protect crossing X as per strategy WY”), particularly as the governing objective will be the issue at the forefront of the engineer’s mind at the time. Once you shift to justifying your actions later, it becomes clear that it would be best to describe the strategy in terms of where you started your thinking rather than where you ended it. This is (I suspect) why there’s a distinction between how the strategy was described/labelled during the event and how it was described/labelled in the report.

    It might be reasonable for the commission to make a decision on how to interpret what ‘being in/applying/etc a particular strategy’ means based on the manual. They would have to do so in order to make a determination on whether the actions complied with the manual, just as the engineers would have needed to do in order to write the compliance section of their report. The engineers made a professional judgement on the meaning of the terms in the manual. They prepared the report based on that meaning. The table in chapter 10 of the report does not indicate that the label “WX” itself was part of the information used to determine the outcome. The labels there are used to identify the substance of the strategies used, based on the meaning of the labels in the manual. I don’t think there’s any problem with the contents of those tables.

    The purpose of the report is to communicate what actually happened. It’s not to enter into a philosophical discussion on the different interpretations of the manual, or to catalogue the inconsequential labelling errors made by the engineers; attempting to do either of those would undermine its primary purpose. It might be open to the commission to decide on a different meaning, and hence come to a different conclusion on compliance with the manual. However, to conclude (at the end of section 16.11.2) that the engineers were intending to mislead, based on the fact that the commission not only disagreed with but completely rejected the engineers’ understanding of the manual (an understanding mostly supported by the expert evidence), seems grossly premature and more than a little contrived.

    When assessing the conduct of the engineers in writing the report, the commission fails to acknowledge the ambiguity in what the terms mean, or the engineers’ understanding of exactly what the manual requires. This is done despite the obviousness of the issue in the submissions and evidence. In section 16.11.3 the state of mind of the engineers is talked about in terms of “W1”, “W2”, etc., but there is no mention of what the engineers understood those terms to actually mean. For example, when discussing Mr Tibaldi’s state of mind:

    According to the summary he prepared for the ministerial briefing note, he was operating the dam in W1 during his 8-9 January shift; his first drafts of the March flood event report have W2 applying. He could not have made those entries if he had any belief he was operating the dam in W3 during that shift.

    There’s no consideration of the fact that the labelling, rather than the substance of the strategy, is what Tibaldi was correcting. I see those changes as reflecting an initial labelling of the strategy based on the focused objective (the crossings), a brief transition to labelling based on the dam level, and then a final labelling based on the whole strategy at the time. Looking at the situation reports I’d reconstruct their thinking at the time as:
    – Dam level > 68.5, release predicted to be less than 3500 cumecs, consider protecting urban areas as primary objective.
    – Protecting urban areas is the primary objective. There’s no justification at this stage for reducing flows. We can’t meet the natural flows requirement and should target minimising further damage.
    – There’s no clear risk to urban areas at this stage so we’ll avoid impacting crossings (the next highest objective).
    I think it’d be possible to mistakenly label this strategy as W1, particularly to provide an explanation for higher releases after a risk to urban areas is assessed. I also think it’d be easy to miss the significance of the natural flows issue and label it as W2 when the initial labelling mistake is noticed. I would personally still label the above strategy as W2, and consider it the appropriate strategy (don’t abandon Lowood yet) to be in given the circumstances (8:00am on the 8th), which puts me at odds with both the engineers and the commission, who seem to agree that W3 is the correct strategy. At no stage do I think Tibaldi was intending to misrepresent the substance of the strategy. The commission seemed predisposed to concluding the engineers were dishonest, possibly because they didn’t have the ability to understand the issue themselves.

    To me it all creates the appearance of a politically motivated finding. It wouldn’t surprise me if they were driven partially by their ego being bruised by the media reports.

  3. BilB

    To my thinking, a piece of software should have been developed to handle the judgement process and non-peak execution, with a human intervention option at the ultimate peak level purely to provide checking against circumstances outside the scope of the software. The engineers’ roles should be maintaining data authentication, data input, and software operational integrity.

    Any fault really lies with there being an incomplete system. All of your points above, Brian, are completely correct and verified by any number of air accidents which clearly demonstrate that humans are not able to cope with massive amounts of divergent information in real time unless that information flow is repeatedly and specifically pre-rehearsed.

    The coverup is the inquiry itself.

  4. BilB

    What the intent of the strategy is saying is that

    W + L + M should always be less than Lp + Mp

    and so where BR = Lp + Mp , W = 0.

    So you really have a problem where BR = Lp + Mp and Wl = 100% and it is raining like Moses.
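    [Ed.: one way to read BilB’s shorthand, sketched as code. The interpretation of the symbols (W = Wivenhoe release, L and M = other flows at Lowood and Moggill, Lp and Mp = the natural peaks, BR = Brisbane River flow, Wl = dam level) is an assumption, not stated explicitly in the comment.]

```python
# One reading of the shorthand above: Wivenhoe release (W) plus the other
# flows at Lowood (L) and Moggill (M) should stay under the sum of the
# natural peaks (Lp + Mp). When the river is already at that combined
# peak, the rule forces the dam release to zero.

def max_release(L, M, Lp, Mp):
    """Largest Wivenhoe release allowed under W + L + M <= Lp + Mp."""
    return max(0.0, (Lp + Mp) - (L + M))

print(max_release(L=1000, M=800, Lp=3500, Mp=4000))   # headroom remains: 5700
print(max_release(L=3500, M=4000, Lp=3500, Mp=4000))  # river at peak: W = 0
```

    The problem BilB names is then exactly the case where this returns zero while the dam is full and still filling.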

  5. desipis

    Brian, I also have other things to do and I apologise if my long comments have been taking over the threads.

    That would create a hellava flood at Moggill. How can you limit the flow in the Brisbane River to less than what it is resulting from water that is not under your control?

    As Bilb is pointing at, the issue is with the peaks. If you’ve got lots of water in the dam, and there’s a “wave” of water coming from somewhere else, you’re supposed to time the dam releases to miss that wave (and be no worse overall than the peak of that wave). Unless by doing so you’re going to snooker yourself into releasing a bigger wave of water later anyway (at which point you go to W3). That’s what I meant by W2 being about the ‘dynamics of the event’ above.
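    [Ed.: the timing idea in the comment above can be shown with a toy example. All the numbers here are invented for illustration; nothing models the actual event.]

```python
# Toy illustration (made-up numbers) of the point above: time a fixed
# dam release to avoid adding to the tributary "wave", keeping the
# combined flow at or below the natural peak.

tributary = [500, 800, 2000, 3400, 2800, 1500, 700, 500]  # hourly cumecs, invented
natural_peak = 3400  # the peak the tributary wave would reach on its own
release = 1500       # a constant release we want to fit in somewhere

# Release only in the hours where the wave leaves enough headroom.
schedule = [release if t + release <= natural_peak else 0 for t in tributary]
combined = [t + r for t, r in zip(tributary, schedule)]

print(schedule)                       # release pauses through the wave's peak hours
print(max(combined) <= natural_peak)  # overall no worse than the natural peak
```

    The W3 escape hatch is the case where pausing like this would just store up a bigger wave for later.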

    Bilb, to be honest, as a software/systems engineer, this is the last place I’d recommend a software solution. I mean, I’m sure the engineers have modelling software to do all the heavy number crunching, possibly highlighting the points where various strategies should be employed. But these would be nowhere near an authoritative or automated control system. It wouldn’t surprise me if it took tens if not hundreds of millions of dollars to develop a software-based solution that met safety specifications. At the end of the day it’d be something that was rarely used, therefore likely never trusted, and probably overridden anyway. Looking at the mess that is the revised manual, I don’t hold much hope that any attempt at a full software solution would be successful.

  6. BilB

    What I am referring to is “executive” software. In most engineering fields there is very powerful purpose-built software to manage the complex flow of information to do with that field of enterprise. Increasingly we see “executive” software that aims to take the conclusions of engineering software and merge them together with the same from other fields. In this case there is no doubt software that calculates river flows using topography and level monitoring data to calculate outcomes. The BOM uses other software to predict rainfall, wind and temperature. Councils use software to manage vegetation. No doubt there is also software that calculates tidal information, and there should be software that manages human flows to predict risks to human and other life in any area. The executive software would take the results of the source software to determine a best outcome. Such software is location and circumstance specific and therefore less likely to be “off the shelf”.

    What I would assume Jaques is referring to, at least in part,

    “Interestingly, decision makers under extreme pressure use a highly instinctive decision making process that does not resemble the process taught in decision theory”

    is the process of “thought without words”. Theory involves conscious and symbolically driven mental processes. Intuition and instinct are where the subconscious takes over and the body operates without conscious thought. This is the territory where so-called “miracles” occur.

    Give it a go. See how far you can go with an idea without using thought words in your mind.

    I have a Programmable Logic Controller that I use for various projects which very clearly demonstrates the difference. This controller was developed by two guys, Bart Schroder and Paul Handly, for GEC in Wellington NZ. Bart now has a business that makes the Cleverscope, a must for software and circuit board developers. This PLC, which was dubbed the IDS (Intelligent Drive System), has two operating planes: one is called the connect plane, and the other is the programming plane. In the connect plane you can link items together in a matrix, and outputs react automatically to inputs, although there can be various devices in between such as PID blocks and scaling blocks. The second plane has a state-based, fully multitasking programming environment where a program can run to monitor and manage the I/O. This operates independently of the connect plane; however it can interact with, modify and override the connect plane.

    This is precisely how our brains work, and Jaques’ comment refers, by my way of thinking at least, to the output of the connect plane in Bart and Paul’s spectacularly clever Intelligent Drive System.

  7. BilB

    I disagree with you, Desipis. Yes, such software would cost a hundred million dollars to develop. But look at the cost of not having such software: it is in the billions of dollars. The software would be running continuously in mild mode but would be able to, and required to, run simulations regularly to both back-analyse and forward-analyse extreme events. The alternative is to spend tens of millions of dollars periodically on pointless political post mortems. One approach is smart, the other is dumb. We live with Dumb. If Abbott gets in we will have to learn to live with Dumber!

  8. BilB

    The cost of the Queensland floods? Between 3 and 30 billion dollars depending on which paper you read.

    http://www.heraldsun.com.au/businessold/counting-cost-of-queensland-floods/story-e6frfh4f-1225988029312

    Now executive software cannot eliminate risk, but it can minimise it.

    We really need to have a national software budget that ensures that all future climate change risks are not only predicted as well as possible but also managed intelligently, rather than leaving “evaluation” to various pundits and political ideologues, with all of the uncertainty that comes from that.

    And this is one of the key inputs that would be constantly checked.

    http://www.bom.gov.au/climate/glossary/soi.shtml

    Where is the redline going next?

  9. John D

    In the previous post on this subject I extracted these figures from the Flood Event Report. What was interesting about these figures was the long delay between the setting up of the control center and significant flows from the dam. For example, it took 28 hrs before the flood gates were opened at all, 96 hrs before outflows reached 2000 m3/s, 120 hrs before flows reached 3000 m3/s and 130 hrs before the peak dam height and flow (7464 m3/s) were reached.
    The difference between the volume held in the dam at peak and the point where flow reached 3000 m3/s was only 180 GL. It would take 50 hrs @ 1000 m3/s to discharge 180 GL (or 420 m3/s over 120 hrs). The point I am making here is that small increases in the flows at the early stage of the crisis may have averted the need to go over 3500 m3/s and thus avoided most of the damage.
    It is also worth noting that, during the early stages of the crisis the police asked the engineers to delay closing the bridges and BCC officers told the dam engineers that damage to Brisbane would have been caused by lower flows than those mentioned in the manual.
    The key messages for the future are that it is important to:
    Start raising the floodgates as soon as the dam level gets much above the end-of-wet-season target.
    Have accurate predictions of the effect of flows on damage to property and serious inconvenience.
    Invest in the areas that will increase the flows that can be used without causing problems (raising key bridges, protecting low-lying houses, etc.).
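    [Ed.: John D’s arithmetic can be checked directly, using only the figures quoted in the comment above.]

```python
# Quick check of the 180 GL figure: drain it at 1,000 m3/s,
# or spread it evenly over 120 hours.

volume_m3 = 180e9 / 1000                     # 180 GL expressed in cubic metres
hours_at_1000 = volume_m3 / 1000 / 3600      # time to discharge at 1,000 m3/s
flow_over_120h = volume_m3 / (120 * 3600)    # steady flow to discharge in 120 hrs

print(round(hours_at_1000))    # 50 hours, as stated
print(round(flow_over_120h))   # about 417 m3/s, i.e. roughly the 420 quoted
```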

  10. John D

    I have had plenty of opportunities during commissioning to see what the combination of long hours, pressure to get the plant going and the stress of wondering whether your brilliant design is going to work does to myself and others. It is very easy to become obsessed with a problem that someone else has brought up. The police ask you to delay bridge closures? It’s easy to let this become the overwhelming issue when the key issue is that there is a real risk that enough rain will fall to require dam-saving discharges that are high enough to cause major damage downstream.
    Another key problem is that, when problems do come up, people latch on to a particular solution (and pursue it to the end) instead of identifying a number of potential solutions and pursuing more than one of these at the same time.
    The final key problem I have found is that I can do things like run a control room when tired but find it very difficult to do routine calculations accurately, even with computer models to help. Not a good thought when dam engineers need to do quite a bit of modelling.
    Part of the solution is to organize the engineers so that nobody is working 12 hr shifts. (Four engineers could have been on overlapping 9 hr shifts for example.) The other part of the solution is to have at least one person, who is not overtired, asking the dumb questions and reminding people of what the real objectives are.
    It also helps to have people from different professions on the team. The team handling this crisis would have been a lot stronger with weather people on the team to provide a better picture of the risks and uncertainties.
    Bilb: My experience is that control systems contribute a lot to commissioning problems – I am a bit wary of a control system telling us all what to do, particularly when overriding the control system will be grounds for damage claims. Events moved slowly during this crisis so there was time for humans to consider what should be done.

  11. BilB

    That is ridiculous, JohnD. Very little of modern industry can possibly operate without computer control systems. The reality is, the more computer management there is, the safer we are. Control systems allow humans to apply their great intelligence with detail and care at the rate that they are best suited for, not at the rate that random circumstances can conspire to inflict as in a climate emergency.

  12. John D

    Reliable like the Qld Health payroll system, Bilb? Compared with most control systems the speed required for the dam is glacial, so humans do have time to think before they act. We are also talking about a system that may not be used in anger for 50 to 100 yrs.
    Brian: We had about 870 GL in the flood box by the time the outflow was finally raised to 3000 m3/s, 120 hrs after the control group was formed. That is an increase in water held of 7.25 GL/hr (2014 m3/s). The 3000 m3/s level was reached after the start of the second rain peak. We could have kept the flood peak well below what it ended up being if there had been less in the flood box 120 hrs after the start – it should have been possible to keep max flow below 3000 m3/s.
    I doubt that anyone knew how desperate things would become when the dam first went over the full level. So if we are going to deal with a crisis of similar size to the 2011 flood, we have to start running at flows that are high enough to close bridges from the start, and perhaps flood Lowood. Most of the time the disruption will turn out not to be necessary, but very occasionally it will be.
    After years of unjustified inconvenience things will get slack and when the next monster hits we will have a repeat of what happened this time. There is a real need to spend the money necessary to handle higher flows without too much inconvenience.
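    [Ed.: John D’s 870 GL figure likewise checks out; again using only the numbers quoted in his comment.]

```python
# Checking the figure above: 870 GL accumulated over the 120 hours
# before outflow reached 3,000 m3/s.

stored_m3 = 870e9 / 1000                     # 870 GL expressed in cubic metres
gl_per_hour = stored_m3 / 120 / 1e6          # accumulation rate in GL/hr
avg_excess_flow = stored_m3 / (120 * 3600)   # same rate as an average flow

print(round(gl_per_hour, 2))   # 7.25 GL/hr, as stated
print(round(avg_excess_flow))  # 2014 m3/s, as stated
```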

  13. Socrates

    As an engineer, thanks to Brian for this excellent series of posts and the analysis and reading behind them. I would not pretend that engineers never make mistakes, but the adversarial nature of this commission is concerning. Unless someone can show that the flood would not have been as bad if the engineers had acted differently (based rationally on the evidence they had at the time), then to me they have done nothing wrong. So far, the evidence I have read suggests that their deviations from the manual may have reduced the number of houses flooded. Surely that is the real question, not debates over procedure.

    I don’t believe it is possible to create any system that will achieve perfect decisions in such circumstances. You must make decisions given imperfect present information, with unreliable forecasts about an uncertain future. There is no zero risk strategy. Release too much now and you flood people. Release too little now and you may flood many more people later. Judgement is required, and there is not time to brief politicians and defer to them. I think the engineers in this case have been poor communicators, but that does not make them incompetent as engineers.

    I also wonder why there has been no discussion about building approvals. Why had so many new buildings been given planning approval on what were clearly flood-prone sites between the 1974 flood and now? After the 1974 flood, flood levels were extensively studied and the risks with Wivenhoe in place were well known. The flood levels in 2011 were below 1974 levels, yet some new buildings built since 1974 in flood-prone locations were flooded. Who let new buildings be built in flood-prone locations? I thought there were rules for the degree of flood immunity required to permit residential building?

  14. BilB

    Well you clearly know everything, JohnD, we’ll leave it all to you.

  15. BilB

    To hell with it, I’ll wade in.

    You guys obviously know nothing at all about control systems and what they can do. This particular case is one that desperately needs a Proportional Integral Derivative calculation constantly being applied to meet the conflicting needs of water management releases in flood and the water retention needs when in drought. The manual that the engineers were following is a clumsy, ham-fisted method for achieving what any type of motion controller is doing all day long. The machinery that I use can be spinning at thousands of revolutions per minute one second and pull to zero the next in exactly one particular spot, to within one ten-thousandth of a revolution, with absolutely no overrun. Every one of my machines has at least three controllers that can achieve that. The Brisbane water management is no different in principle, just a lot more independently varying inputs.

    This is no massively complicated achievement these days. These systems are all around you. To say that this is too hard to achieve “look at what a mess the hospital pay system is” is just total nonsense that engineers should be ashamed to be espousing. What you’re suggesting might be ok for Warragamba dam here near me where we usually drink the water faster than it can flow into the dam for a decade at a time, but the river system in the Brisbane area is vastly more complex with dams spilling into other dams which then spill into rivers that combine with other rivers. That needs a true proportional total system integrated control which is constantly looking ahead and adjusting flow rates to meet the clearly defined target river levels. And integrated also with the weather radar and tides.

    This bucket brigade calculation and discussion that has been happening upthread is fun as a way to play with the maths, but in real life should never be done by humans these days. Humans make too many mistakes, and the first of them is to say that the floods can only happen every 50 years. The last time that parts of Brisbane were flooded was within the life of this blog, and of John Quiggin’s, and that was not fifty years ago. The most important realisation is that there is absolutely no weather certainty from here on in.

  16. John D

    Socrates: I have said in a number of comments during the dam discussions that the engineers made reasonable decisions on the basis of the information they had at the time. Their decisions would also have been correct if the second rain event hadn’t happened, or had moved through as fast as the forecast predicted.
    What this event taught us is that strategies that are optimal for relatively small floods (like the 1974 flood) are not optimal when confronted with something more serious.
    What was needed was a commission that was competent to investigate the lessons of this flood and come up with recommendations for future strategies that recognize we may have to face an even larger flood at some time in the future.

  17. John D

    Bilb: The spinning disk exercise you talked about may sound impressive but it is really a simple system controlling something that is easy to model and measure. It is also something that has to be done by control systems because it happens so fast. The dam is far more complex and fuzzy and moves at a pace where humans have got enough time to be involved.

    In the meantime I will miss our conversations now that LP is about to shut down. Best of luck with your business.

  18. BilB

    JohnD,

    I am going to labour the point, because it is important. I implore you to find an up to date process and control engineer and ask him what his industry is all about.

    PID control is all about assessing information and reacting to arrive at a precise point in the most efficient way. It applies to motion control, temperature control, time management, compounding and mixing, even commerce and economics. In CNC machinery you can have many motors operating different axes (dimensions) all working in unison to arrive at a precise point within submicron accuracy. All robotics depend on this calculation.

    A PID control function applied to the Wivenhoe/Brisbane River system would very likely have been letting some water go from the dam before the rain began to fall, as it would be looking at the precipitation data and making forward decisions based on the total amount of information available. Humans cannot reliably operate in this manner. We can define the process and establish the parameters, but we cannot compute the data with anything like the precision achievable with computers.
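    [Ed.: for readers unfamiliar with the PID calculation BilB keeps invoking, here is a minimal discrete sketch driving a toy “dam level” toward a target. Every number here, the gains, the 68.5 m target, and the linear level response, is invented for illustration; nothing models Wivenhoe itself.]

```python
# Minimal discrete PID loop of the kind described above. The gains and
# the one-line "plant" response are invented; purely illustrative.

def pid_step(setpoint, measured, state, kp=2.0, ki=0.01, kd=0.5, dt=1.0):
    """One PID update; `state` carries the integral and previous error."""
    error = measured - setpoint                       # positive when above target
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "last_error": 0.0}
level, target = 70.0, 68.5        # metres; illustrative numbers only
for _ in range(500):
    outflow = pid_step(target, level, state)
    level -= 0.01 * outflow       # invented linear response of level to outflow

print(round(level, 1))            # the level has been driven close to the target
```

    The point of the example is only the shape of the calculation: the controller reacts to the current error, its accumulated history, and its rate of change, which is exactly the “constantly looking ahead and adjusting” behaviour BilB describes.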

    Now I am not trying to be a smart arse here. I do not have a command of this level of computation and programming, but I do know what it can achieve, and I know when to call the engineers and what to ask them to do, for my class of industry. All I am suggesting is: ask more questions. Talk to more people about this. I would expect that you would be fascinated, and you might find another dimension that operates in so many parts of our lives without our even knowing about it.

  19. Chris

    BilB @ 24 – One difference between your examples and the dam is that the data about the inputs, and even how the system is going to behave, has much, much larger levels of uncertainty.

    I think the engineers should have software which can do the modelling that you talk about. However it also needs to be able to clearly explain why it makes the recommendations it does and the engineers have to understand how it works, what it takes into account and most importantly what it doesn’t.

    Just as importantly, it should be seen as a resource that engineers can use (like a calculator), and perhaps even as an aid to remind them what is important during stressful situations, but they should not be expected to just blindly follow its advice. Otherwise you end up with situations like with GPSs, where people will go the wrong way up one-way streets or get lost on bad dirt roads in the country because the GPS told them to do it.

  20. John D

    Bilb: I have been designing, operating and commissioning mineral processing plant for yonks. This has included preparing P&IDs, debugging new plants, and convincing conservative control engineers that they can actually program the subtle control systems I craved. Yonks goes back to an era when digital control in mineral processing was at its earliest beginnings and most of the control consisted of things like operators using manual valves and an educated eye.
    Automatic control is not a magic answer. Its strengths and limitations need to be understood.