


IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 27, NO. 2, MAY 2012


Risk Assessment of Cascading Outages: Methodologies and Challenges
Prepared by the Task Force on Understanding, Prediction, Mitigation, and Restoration of Cascading Failures of the IEEE PES Computing and Analytical Methods Subcommittee (CAMS)
M. Vaiman (Lead), K. Bell, Y. Chen, B. Chowdhury, I. Dobson, P. Hines, M. Papic, S. Miller, and P. Zhang

Abstract—Cascading outages can cause large blackouts, and a variety of methods are emerging to study this challenging topic. The Task Force on Understanding, Prediction, Mitigation, and Restoration of Cascading Failures, under the IEEE PES Computing and Analytical Methods Subcommittee (CAMS), seeks to consolidate and review the progress of the field towards methods and tools for assessing the risk of cascading failure. This paper discusses the challenges of cascading failure and summarizes a variety of state-of-the-art analysis and simulation methods, including analysis of observed data and simulations relying on various probabilistic, deterministic, approximate, and heuristic approaches. Limitations to the interpretation and application of analytical results are highlighted, and directions and challenges for future developments are discussed.

Index Terms—Cascading failure, power transmission system reliability, preventing cascades, risk analysis, sequential contingency analysis.

I. INTRODUCTION

A cascading outage is a sequence of events in which an initial disturbance, or set of disturbances, triggers a sequence of one or more dependent component outages (based on [1] and [2]). In some cases cascading outages halt before the sequence results in the interruption of electricity service. However, in many notable cases, such as the blackouts in North America on August 14, 2003 [3], Europe on November 4, 2006 [4], and Brazil on November 10, 2009 [5], cascading outages have resulted in massive disruptions to electricity service. Although such large blackouts are infrequent, they contribute significantly to blackout risk and to perceptions of electricity service reliability. There has recently been considerable progress in advancing methods for analyzing cascading outages, but the advances are somewhat scattered. The goal of this paper, by the IEEE Task Force on Understanding, Prediction, Mitigation, and Restoration of Cascading Failures, is to summarize and consolidate the state of the art to enable further progress and to highlight the remaining challenges.

Cascading outages are influenced by the details of the system state, such as components out for maintenance and the patterns

Manuscript received September 24, 2010; revised February 16, 2011 and May 22, 2011; accepted July 17, 2011. Date of publication December 27, 2011; date of current version April 18, 2012. Paper no. TPWRS-00754-2010. Task Force Contributing Members: M. Vaiman (Lead), K. Bell, Y. Chen, B. Chowdhury, I. Dobson, P. Hines, M. Papic, S. Miller, and P. Zhang. Digital Object Identifier 10.1109/TPWRS.2011.2177868

of power transfers, and the automatic and manual system procedures. The initiating events for a cascading outage can include a wide variety of exogenous disturbances such as high winds, lightning, natural disasters (hurricanes, earthquakes, etc.), contact between conductors and vegetation, or human error. Moreover, there are many mechanisms by which subsequent outages can propagate beyond the initial outages. Generally, the dependent component outages occur when relays or humans trip circuit breakers. The apparent immediate causes of such trips are multifarious and include:
• overloaded transmission lines that subsequently contact vegetation;
• overcurrent/undervoltage conditions triggering distance relay actions;
• hidden failures or inappropriate settings in protection devices, which are exposed by a change in operating state;
• voltage collapse;
• insufficient reactive power resources;
• stalled motors triggered by low voltages or off-nominal frequency;
• generator rotor dynamic instability;
• small signal instability;
• over (or under) excitation in generators;
• over (or under) speed in generators;
• operator or maintenance personnel error;
• computer or software errors and failures;
• errors in operational procedures.
The dependent events in large cascading outages typically include several, or even most, of these failure mechanisms.

Cascading failure risk assessment is the estimation of the risk associated with blackouts that could result from the range of all disturbances that could initiate a cascading failure. According to [6, p. 1], risk “involves an exposure to a chance of injury or loss.” It is thus the combination of probability (chance or uncertainty) and cost (injury or loss). Risk assessment generally involves both the characterization of uncertainties associated with a problem and the estimation of the costs associated with deleterious outcomes. For the cascading outage problem, uncertainties are associated with three primary sources: 1) the initiating events, 2) the sequence of dependent events that could unfold as a result of the initiating events, and 3) the ultimate costs of a blackout with a known size. Regarding this last uncertainty (#3), it is typical to assume that measuring the cost (or impact) of a blackout in terms of its size in MW or MWh is sufficient as a proxy variable for direct cost. However, it is notable that different



Fig. 1. Annual number of large blackouts, after removing events caused by large natural disasters (hurricanes, ice storms, earthquakes). Data from NERC for 1984–2006.

blackouts have large variations in indirect costs. Compare, for example, the different social outcomes in New York City during the 1977 blackout, when widespread social disorder erupted as a result of the power outage, and in 2003, when city residents endured the blackout in relative calm.

The remainder of this paper is divided into five sections. Sections II and III describe the need for risk assessment related to cascading failures and how risk assessment can be used in power system operations. Section IV describes classes of methodologies for risk assessment of cascading outages. Section V provides suggestions for future work in the area of cascading failure risk analysis. The committee's conclusions from this review are summarized in Section VI.

II. NEED FOR RISK ASSESSMENT OF CASCADING OUTAGES

Many current industry reliability procedures, such as the N-1 criterion, tend to inhibit cascading failure. However, cascading outages do occasionally occur. A number of newer reliability standards therefore require that sets of initiating events do not cause cascading outages and that the potential for cascading events be monitored continually. For example, while this is longstanding good industry practice and reflects the significant role of protection operation in propagating cascades, NERC Standard PRC-023 [7] requires that protective relays reliably detect all fault conditions but are not set so conservatively that they unduly restrict the system power transfer capability. In addition, NERC Standard FAC-011-2 [8] requires that system operating limits be established such that all single contingencies and certain multiple contingencies do not result in cascading outages.

Despite the fact that regulations require utilities to consider cascading outages, the tools for directly assessing and mitigating large cascading failures are not yet well developed. Furthermore, cascading outages continue to occur around the world, and it might be argued that many major blackouts involve cascading outages which start from single initiating events (rather than “multiple contingencies”) with the occurrence of further outages in rapid succession, in sequences that show causal relationships, often stochastic. For example, of the 26 major unreliability events reviewed in [9], 19 were triggered by losses of single transmission elements, although many of these events were exacerbated by other problems. Fig. 1 shows the annual frequency of large blackouts in North America, after removing events caused by extreme natural events (hurricanes, ice storms, etc.) and supply shortages. Many, if not all, of these blackouts were made worse by cascading outages. Relatively small disturbances can initiate very

Fig. 2. Relative blackout frequency and contributions to blackout risk from large North American blackouts in various size categories. Data from NERC for 1984–2006.

large cascading failures. Cascading failures, along with competing pressures to optimize reliability and economic efficiency, lead to a power-law tail in the distribution of blackout sizes in both theoretical models and empirical blackout data [10], [11]. This power-law distribution has been observed in blackout data from North America [10], Sweden [12], Norway [13], New Zealand [14], and China [15]. The impact of this power law is evident in Fig. 2, which shows the frequencies of blackouts in various size categories and their overall contributions to blackout risk in North America, in terms of the total amount of demand interrupted. The costs of cascading failures have enormous variance. Because of limited data availability there is some debate as to whether cascading failure risk is increasing over time [16], [17]. However, it is clear that cascading failures continue to contribute significantly to blackout risk.

The following factors are argued in [17] to contribute to system stress, and may now be adding to the risk of cascading outages:
• changes in transmission system operations policy that reduce the focus on mutual assistance at times of high stress on individual systems and increase the focus on facilitating long-distance energy transactions;
• shortening market “gate closure” times to aid the market, which has the effect of increasing uncertainty for power system operators and limiting the range of possible actions;
• continued difficulty in obtaining permits for new transmission lines [18];
• increased need for quantified economic justification of actions by power system planners and operators;
• increased dependency in power system operation on a greater number of individual, independently owned actors;
• limited flexibility in much of the existing generation capacity, such as first-generation wind farms or older combined cycle gas turbines;
• increased uncertainty in power transfers due to uncertainty in wind generation.

The U.S. electricity industry is currently revising its reliability standards to require utilities to monitor not only single contingencies, but also N-k contingencies (where k ≥ 2) that could initiate cascades. Such “multiple contingencies” could be thought of as a combination of outages that occurs within such a short period of time that corrective action has not been possible


before the next one occurs. From an operator's perspective when they consider a response (or anticipate the situation and take preventive action), it is the resulting combination that is observed. However, given the dynamic nature of a system, the outcome of the multiple contingency depends not only on the combination but also on the sequence in which the outages occur. Indeed, some of the outages considered simply as part of the “multiple contingency” may actually be consequences of earlier outages and the particular dynamic responses of the system; such consequential outages are not always predictable using conventional, “deterministic” power system analysis tools. Furthermore, at the point at which an operator might observe the system's state, if appropriate action is not taken, it may only be a matter of time before further outages are triggered.

If the order of the contingencies is neglected, then the number of possible N-k contingency combinations among n components is

n! / (k!(n-k)!)    (1)

If sequence always matters, the number of contingencies increases to n!/(n-k)!. Given a large system with tens of thousands of components, estimating the impact of each contingency with even k = 3 would require on the order of 10^11 or more simulations, which is computationally infeasible for a simulator with any fidelity. However, sets of tens, or even hundreds, of outages do occasionally occur and result in very large blackouts. It is thus necessary that risk analysis methodologies systematically reduce the computational complexity of the problem to provide useful information about risk without excessive computational delay. In order to do so, methodologies that are concerned with estimation of risk need to consider the probability of initiating event(s), the probability distribution of cascading failure sizes that would result from a given initiating event, and the impacts of the blackout that results from various cascading failure sizes. However, since most, but not all, current cascading failure simulation tools treat system state and model parameters (pre-contingency configuration, branch impedances, etc.) deterministically, the size of the cascading failure that results from a given initiator is typically not modeled as a probability distribution. Assuming that the blackout size and cost are deterministic, and using the standard definition of risk as the product of probability and cost, the blackout risk posed by contingency i, which has probability p_i and results in a blackout of size S_i and cost C(S_i), is

R_i = p_i C(S_i)    (2)

One approach to estimating system risk, rather than risk associated with an individual contingency, is to choose a subset of all possible contingencies (a subset of the contingencies i), and then simulate each to estimate the size of the blackout that results from each. Given a space of many events with differing costs and probabilities, it is common to aggregate system risk by summing the individual risks. However, this approach tends to mix risks from low probability, high cost events with those of high probability, low cost events. It is likely to be more useful to provide information about risks stemming from different size events separately.
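As a small numerical illustration of (1) and (2), the sketch below counts N-k contingency combinations for a system with tens of thousands of components and aggregates risk over a handful of hypothetical contingencies. The component count, probabilities, blackout sizes, and linear cost model are invented for illustration only and are not taken from any study.

# Small numerical illustration of (1) and (2). The component count is generic;
# the contingency probabilities, blackout sizes, and linear cost model below are
# invented for illustration and do not come from any real system or study.
from math import comb

n = 20_000                      # components in a large interconnection
for k in (2, 3):
    print(f"N-{k} combinations: {comb(n, k):.2e}")

# Risk of each contingency i: R_i = p_i * C(S_i), here with cost taken as
# proportional to blackout size S_i (MW). Summing R_i mixes small and large
# events, so the per-contingency terms are also worth reporting separately.
COST_PER_MW = 10_000.0          # hypothetical cost coefficient ($ per MW interrupted)
contingencies = [               # (annual probability, blackout size in MW)
    (1e-1, 50.0),
    (1e-3, 2_000.0),
    (1e-5, 20_000.0),
]
risks = [p * COST_PER_MW * size for p, size in contingencies]
for (p, size), r in zip(contingencies, risks):
    print(f"p={p:.0e}, size={size:>8.0f} MW, risk=${r:,.0f}/yr")
print(f"aggregate risk: ${sum(risks):,.0f}/yr")

Even though the aggregate sums to a single number, the per-contingency breakdown preserves the distinction between frequent small events and rare large ones that the text above cautions about.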

It is also important to note that risk as perceived by the public is frequently more difficult to quantify. Many risks with a low total risk from engineering estimates, but substantial uncertainty in the cost (e.g., nuclear power accidents), can be perceived as very risky [19]. The possible or intended use of a risk estimate in driving decisions should be clearly understood when deciding what approach to take when estimating that risk. The next section of this paper summarizes the range of uses and the impacts they have on the risk assessment approach. Section IV then goes on to discuss a number of those approaches.

III. USES OF A RISK ASSESSMENT FOR ANALYSES OF CASCADING OUTAGES

It is clear from Section II that there is a need for greater understanding of cascading outages and how major blackouts arise. However, both the emphasis of a methodology and the uses of the improved understanding that results from risk analysis tools depend on the context and, in particular, the time available to choose and actuate risk mitigating actions.

A. Facilitation of Decision Making

One purpose of risk assessment is to enable good decisions regarding actions to mitigate stress in the system. The set of actions available to operators depends on the timescales at which the risk assessment is done. In developing risk assessment methodologies, it is important to consider three different timescales: 1) real time or near real time system operation; 2) operational planning, i.e., day-ahead to month-ahead preparation for operation of the system; and 3) long-term planning, in which changes to infrastructure or regulation are feasible. Risk assessment can also be useful in developing strategies to increase the resilience of interacting infrastructure systems to power system failures.

In operational timescales, only a limited set of actions is available to the operator, but the consequences of erroneous decisions can be enormous. During emergency operations there is often little to no time for operators to correct an erroneous control action. This implies a need for analytical accuracy, which may include the need for assessment of short- or mid-term stability, which in turn implies a need for modeling sophistication and large quantities of accurate data that are often difficult or impossible to obtain and manage in real time. Compromises between speed and accuracy are thus necessary. This might be resolved, for example, by designing the analysis to err on the side of caution in order to minimize the number of false positives, i.e., cases that are reported to be safe but are not, even at the expense of an increased number of false negatives, i.e., cases that are reported to be relatively unsafe but actually have a low risk.

Given good information about ongoing problems, operators have a limited set of actions available to mitigate short-term risk. They can, for example, impose limits on key power transfers. Defense measures such as system integrity protection schemes [20] may be “armed” or not. (If they are “armed”, there is always the chance of inadvertent operation, i.e., operation when it was not necessary, which would give rise to other problems.) Operators can open circuit breakers in order to separate the grid into


islands or increase the impedance of a particular path (but not to make it infinite). Or, under extreme circumstances and where the facility is available, specific relays (such as Zone 3 distance protection) can be adjusted or temporarily disabled.

In operational planning timescales, there is some uncertainty about exactly what conditions will arise on the system, but also a larger set of control actions available. Operators can, for example, warm up cold generating units to prepare for coming risks. Some types of cascading outages that are propagated by tripping of generation could be arrested through the use of generation reserves. The appropriate dispatch of reserves can alter power flows, provide additional voltage support, or ramp up to compensate for generator outages. The operational planner may also prefer to rely on reactive power reserves in particular locations or from particular sources, and take steps to ensure that these resources are available. Restoration plans can also be developed during operational planning timescales.

In investment planning timescales, there are the greatest uncertainties about the operating states of the system, but also the fullest scope for different actions available to manage risk. As implied by NERC standard PRC-023 [7] and noted in [9], inadvertent interactions between different local protection and control systems can make a bad situation on the system worse. System engineers should, on a regular basis, check protection settings and, where necessary, modify them to accord better with changed system conditions, decide to make certain defense measures available, test the performance of equipment, and/or evaluate compliance with grid code rules such as those concerned with generating units' required operational capabilities [9].

In longer timescales, it is possible to make additional capital investment, such as building new transmission or generation. It is also feasible to modify industry rules through regulatory processes to improve the robustness of the overall power system and ensure adequate contributions from different actors. For example, some improvements to generator performance may be required, such as the introduction of low voltage ride-through capability for wind farms. Alternatively, if the system operator believes some improved performance from particular generators to be necessary, they may enter into a contract with generators for these services. Not all generators are always able to comply with existing rules and may have applied for derogations against them. Some indication of future risk may lead to such a derogation being dropped, meaning that the generator must then comply with the original rule. In terms of what is more directly within a network utility's remit, another example of action is the commissioning of surveys, which may help the utility understand more precisely the rating of particular circuits or whether some kind of enhanced vegetation management is needed.

A difficult question for a power system engineer or operator to answer is whether some action is necessary. As a high-consequence event becomes more probable, the motivation for action should increase, but it is often difficult to judge the threshold of acceptable risk. As was discussed above, risk includes dimensions of both the probability of an event or final state and the impact were that to occur.

In addition to informing practical and immediate actions, there is another possible broad category of use of a tool: to

allow some kind of “fundamental”, “scientific”, or “conceptual” understanding of power system behavior. This has great value in informing future developments in power systems engineering even if it has no immediate impact on the way power systems are planned or operated.

B. Industry Treatments of Risks of Cascades

A number of utilities in the U.S. are already addressing risks of cascades. For example, Con Edison has developed an automated approach to predict cascading outages [21] with the purpose of enabling system planners to quantify the system's ability to withstand cascading outages caused by thermal overloads. Idaho Power has a methodology for ranking contingencies based on the size of the secure operating region, by which the most limiting contingencies are identified. The effect of mitigation measures on alleviating violations, such as thermal, voltage, and voltage stability violations, can be measured by monitoring the change in size of the secure operating region [22]. ISO New England performs tests using quite severe contingencies to test vulnerability to major disturbances [23]. For the steady state test, all elements at the station being tested are opened, and the power flow case is solved. For transient stability tests, a three-phase fault is applied to the test bus and left un-cleared locally, assuming no communications from the station under test to the remote terminals. Remote terminals are opened based on the expected design fault clearing time.

Transmission operators and planners in Europe have long been accustomed to working with reliability rules such as “N-1” (secure against the loss of a single primary component) or “N-D” (secure against the loss of a double circuit overhead line). However, there is a growing recognition of the need to fully comprehend the consequences of unplanned outage events [23]. For example, alongside the introduction of a single GB-wide security standard in Great Britain in 2005 [24], there was a clarification that, following any power system disturbance, protection and control equipment may normally be expected to respond automatically. Assessment of secured events should therefore take account of the responses that are consequential to them, e.g., cascade tripping of circuits, auto-switching, switching of capacitor banks, AVR responses, and transformer tapping. In particular, it should be established that a new steady state is reached that lies within normal operating limits. Otherwise, suitable preventive actions should be taken.

There is increasing focus on minimizing the extent to which the network acts as a barrier to inter-area trades of electric energy. This has led to increasing adoption of system integrity protection schemes to facilitate automatic post-fault actions and reduce pre-fault constraint of power transfers. However, major disturbances such as those in Italy in 2003 and Western Europe in 2006 have focused attention on the need to study the consequences not only of initiating events but also of the actions of these schemes, including dynamic responses. Tools such as SICRE in Italy [25] and Assess [26] in France have been developed, at least in part, for that purpose.
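The steady-state part of such severe-contingency testing can be sketched in a few lines. The snippet below is only an illustrative outline of the "open all elements at a station, then re-solve" screening idea described above; the Branch class, the solve_power_flow callback, and the returned violation fields are hypothetical placeholders, not ISO New England's actual tools or criteria.

# Minimal sketch of a "loss of station" steady-state screen: open every element
# terminating at the station, re-solve, and classify the outcome. The Branch
# class, the solve_power_flow callback, and its returned fields are hypothetical
# placeholders, not any utility's actual tools or criteria.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Branch:
    name: str
    from_bus: str
    to_bus: str

def station_outage_screen(branches: List[Branch],
                          station_bus: str,
                          solve_power_flow: Callable[[List[Branch]], Dict]) -> str:
    remaining = [b for b in branches if station_bus not in (b.from_bus, b.to_bus)]
    result = solve_power_flow(remaining)  # assumed to report convergence and violations
    if not result.get("converged", False):
        return "power flow diverged: vulnerability to widespread outages not ruled out"
    if result.get("thermal_violations") or result.get("voltage_violations"):
        return "violations remain: further (e.g., dynamic) study warranted"
    return "no violations: secure for this test"

The transient-stability part of the test described above would need a dynamic simulation and is outside what a steady-state sketch like this can show.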


IV. METHODOLOGIES FOR RISK ASSESSMENT OF CASCADING OUTAGES

This section presents classes of theoretical methods being developed or applied for identifying contingencies that could initiate a cascade, and/or for estimating cascading failure risk. Since cascading is very complicated and complete enumeration of all possibilities is impossible, there are necessarily compromises and limitations in assessing cascading risk. The criteria for comparison of risk assessment methodologies are first described, followed by the general nature of these compromises. These are followed by a discussion of methods based on detailed modeling and simulation. “Bulk analysis” methods that use high level models are then discussed.

A. Criteria in Comparison of Risk Assessment Methodologies

Which methodology for the assessment of risk due to cascading outages should be adopted depends on the context of its use. For a utility, knowing what decision to take will often depend on some understanding of the mechanism of a cascade. The ability of some methodologies, and of tools based on them, to “explain” a mechanism under particular circumstances will be very important in some contexts. On the other hand, other methodologies, while perhaps offering advantages in terms of speed for the computation of a risk index, do not lend themselves to explanation. In addition, the ability to easily test the sensitivity of results to changes in modeled events or system parameters may be an important feature of a chosen methodology. The criteria for comparison of methodologies proposed by the IEEE Task Force on Understanding, Prediction, Mitigation, and Restoration of Cascading Failures are:
• accuracy of reproduction of real phenomena;
• computational complexity and speed of execution;
• degree of dependency on large volumes of data;
• degree to which results may be reviewed in detail and explained;
• accuracy of modeling of the power system (AC or DC power flow, limitations on the size of the model, modeling of dynamic responses of control devices, etc.);
• need for quantification of event probabilities or frequencies of occurrence.
Normally, a trade-off between accuracy and speed is required. As discussed in this paper, the result of this trade-off is likely to depend on the timescale in which a tool is to be used. Similarly, the ease with which sensitivities to different assumptions might be tested will depend on the speed of execution and the volumes of data required.

B. Assumptions Used in Cascading Outages Methodologies

All current methods based on detailed modeling and simulation can capture only a subset of the many mechanisms of cascading failure. Due to the challenge of modeling the actions of human operators and complex interactions, human factors or wider systems issues are not typically represented. Different developers of methodologies have concentrated on different selections of cascading mechanisms to be represented; this is necessary and healthy at this stage of development of the field. It is valuable to explore different combinations of mechanisms so that eventually progress can be made in determining which

mechanisms are more important to model and what compromises in their modeling detail are needed for practicality in simulation times and data availability.

In addition to selecting a subset of cascading mechanisms to model, assumptions are needed regarding the triggers of cascading failure. To have a potentially useful picture of risk from simulation methods, many sequences, each of which will include one or more triggers and the potential for a subsequent cascade, need to be sampled and simulated. Doing so requires modeling of a subset of all possible exogenous triggers, such as storms, malicious behavior, or operator error. Probabilistic sampling requires assumptions about the relative probabilities of these potential triggers. Data regarding outage frequencies, such as those used in generator adequacy reliability modeling [27], are particularly valuable in choosing outage probabilities. Hidden failures, such as defective relays or overgrown vegetation, are common contributors to cascading failure and can be included in the set of triggers [28].

To compute aggregate risk from many simulated cascades, it is useful to compare the sizes of the cascades that result from modeling with the empirical data on cascade sizes and frequencies. Obtaining a sufficiently uniform and numerous joint sample of initial conditions, initiating events, and event sequences remains a challenging problem. There are also large uncertainties in determining the cost of cascades, especially the large ones. While there is progress towards assessing risk with simulated cascades with uniform sampling as described above, many current authors forgo a risk assessment and instead generate a non-uniform sample of cascades and apply various heuristics to recommend actions that may mitigate risk based on the sample of cascades. All the heuristics strongly prune the cases considered. Common heuristics include the following:
1) Model only the initial stages of cascading.
2) Model only the most probable, or most consequential, entire cascading sequences.
3) Consider only the risk, probability, or impact of the next stage of the cascade, or the stages in the cascade up until the current state. For example, unlikely next stages may be neglected, even if there may be very many such next stages.
4) Model only a subset of initial conditions or initiating events.
5) Assume that cascades proceed deterministically.
These heuristics all appear sensible and might be effective, but none have been firmly validated.

C. Example Modeling and Simulation Methods

Many have deployed combinations of heuristics to produce modeling and simulation methods, for both research and commercial purposes, that capture aspects of cascading failure. This section discusses a subset of existing simulation methods in order to highlight different approaches to the problem. It is important to note that this review focuses on sampling and simulation methods that use sequences of steady state AC or DC power flow calculations, rather than full dynamic simulations. Studies of past cascading failures [29], [30] clearly show that dynamic phenomena (voltage collapse, rotor instability, etc.) are important contributors to cascading failure. Although some tools have


Fig. 3. Cluster-based representation of a power system network.

been developed that provide facilities for the study of such phenomena subject to uncertainty, they require large volumes of data, can be unwieldy, and depend on specialist users [25], [26]. Additional research is needed to develop simulation methodologies that capture the interactions between continuous machine dynamics and discrete relay actions in a convenient way.

1) Cluster-Based Approach: As previously noted, it is practically impossible to assess all N-k contingency combinations in a bulk power system. A “cluster” approach can quickly identify potential cascading modes due to thermal overloads. A power system network may be represented as a number of “clusters” (e.g., groups of buses) that are connected to the rest of the network via “critical” lines (e.g., cutsets) [31].¹ Clusters of sources or sinks may be identified by virtue of their having similar minimal cutsets. Outaging any line in a cutset usually causes large overloads on another branch (or branches). If an overloaded branch (or branches) is switched off as a system protection measure, this may lead to a cascading effect.

The power system network is divided into three types of clusters: 1) load clusters; 2) generator clusters; and 3) a connecting cluster. This may be considered an electrical division of the power system. A cluster view of the system is shown in Fig. 3, where clusters are represented by small dots with cluster IDs shown next to each cluster, and cutsets are drawn as lines connecting these dots. Generator clusters and their IDs are shown in light grey, load clusters and their IDs in black, and the connecting cluster and its ID in dark grey.
¹A “cutset” is that set of branches of a network that, if removed from the network, would completely disconnect a source of power from a sink. A “minimal cutset” is a cutset in which all the branches in the cutset must be removed from the network in order to bring about the disconnection of source and sink [32].
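As a small illustration of the cutset notion defined in the footnote above, the following sketch uses the networkx graph library to find a minimum edge cutset separating a generation bus from a load bus on a tiny fictitious network. The bus names and topology are invented purely for illustration and have no connection to the clustering tool of [31].

# Minimal sketch: find a minimum cutset of branches whose removal disconnects a
# source of power from a sink, illustrating the footnote above. The bus names
# and topology are invented for illustration only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("GEN_A", "BUS_1"), ("GEN_A", "BUS_2"),
    ("BUS_1", "BUS_3"), ("BUS_2", "BUS_3"),
    ("BUS_3", "LOAD_X"), ("BUS_2", "LOAD_X"),
])

# Set of branches whose simultaneous removal disconnects GEN_A from LOAD_X.
cutset = nx.minimum_edge_cut(G, "GEN_A", "LOAD_X")
print(sorted(cutset))  # e.g., the two branches feeding LOAD_X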

If an initiating event causes cascading on a cutset, then the line that connects the two affected clusters is shown as a solid line. If an initiating event causes cascading inside a cluster, then this cluster is shown as a black dot.

After clusters are formed and cutsets are identified, the next step is the selection of the initiating events. The selection is made using heuristic rules, such as the values of the power flows on the outaged lines. Because of the pruning of the search space that can be achieved, their application need not be restricted to particular control areas. Thus, initiating events that may be located outside of the utility's/ISO's footprint but cascade into their footprint can be identified. In addition to the list of initiating events identified as a result of the “cluster”-based approach, any pre-defined or automatically created generic list may be used as a list of initiating events [33].

Following an initiating event, cascading chains are automatically identified. A cascading chain is a series of consecutive tripping events (each referred to as a “tier”) following an initiating event, which are caused by overloads exceeding the branch tripping threshold, or by low or high voltage violations below or above load/generator tripping thresholds. All violated elements can be identified during the process, but only those at which violations exceed the user-specified tripping thresholds are automatically tripped in the implementation of the approach described in [33]. Thus, through consideration of the chain of tripping events, the N-k contingencies are identified that cause stability violation, large loss of load/generation, or islanding, where k is the cascading tier at which the stability violation, islanding, or large loss of load/generation occurs. Multiple elements may be tripped at each cascading tier. If all initiating events, and not only those identified using the heuristic rules, are considered, this process may be considered a uniform enumeration method.

Since analyses of past blackouts in North America, South America, and Europe show that over 50% of blackouts involved many cascading elements and were “slow” in progression, remedial actions should be identified and implemented in order to alleviate or reduce the spread and impact of cascading outages. There are two possible approaches for implementing remedial actions during the analysis of cascading outages [21]: 1) preventing the spread of cascading outages; and 2) mitigating the consequences of cascading. The first approach determines and applies remedial actions before the cascading starts: remedial actions are applied after an initiating event and at each cascading tier to completely prevent or decrease the spread of cascading outages. The second approach determines remedial actions after cascading has occurred. Possible remedial actions include: transformer tap change, transformer phase-shifter adjustment, capacitor and reactor switching, MVAr dispatch, MW dispatch, line switching, and load curtailment. Examples of software in which remedial actions are modeled include [26] and [34]. The speed with which an action is taken can be important for the arrest of a cascade; in [26], this can be modeled via standard representations of control systems.
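A minimal sketch of the tier-by-tier logic just described is given below: after an initiating outage, every branch loaded above its tripping threshold is tripped, flows are recomputed, and the process repeats until no further violations occur. The "flow recomputation" here is a toy redistribution rule invented for illustration; a real tool such as [33] would re-solve an AC or DC power flow and also check voltage and stability criteria.

# Minimal sketch of a tier-by-tier cascading chain: after an initiating outage,
# any line loaded above its tripping threshold is tripped, flows are recomputed,
# and the process repeats. recompute_flows() is a crude stand-in invented for
# illustration; a real tool would re-solve an AC or DC power flow.

TRIP_THRESHOLD = 1.2   # loading (per unit of rating) above which a line is assumed to trip

def recompute_flows(in_service, base_flows):
    """Toy flow model: flow of tripped lines is shared equally among survivors."""
    lost = sum(f for name, f in base_flows.items() if name not in in_service)
    if not in_service:
        return {}
    return {name: base_flows[name] + lost / len(in_service) for name in in_service}

def simulate_cascade(base_flows, limits, initiating_outage):
    in_service = set(base_flows) - {initiating_outage}
    tiers = []
    while True:
        flows = recompute_flows(in_service, base_flows)
        tripped = {name for name, f in flows.items() if f / limits[name] > TRIP_THRESHOLD}
        if not tripped:
            break
        tiers.append(sorted(tripped))   # all violated elements at this tier trip together
        in_service -= tripped
    return tiers

# Tiny invented example: three lines, with line "L1" as the initiating outage.
base_flows = {"L1": 0.9, "L2": 0.8, "L3": 0.5}
limits = {"L1": 1.0, "L2": 1.0, "L3": 1.0}
print(simulate_cascade(base_flows, limits, "L1"))   # e.g., [['L2'], ['L3']]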


2) Enumeration of Likely Cascade Paths: Reference [36] discusses a method for systematically eliminating less probable causes for cascading outages and identifying and organizing the most probable cascading events. The underlying assumption is that a system state that does not have criteria violations will not cascade. The method requires the following: a defined list of single contingencies vetted for criteria violations, a means of rapidly discounting combinations, an objective criterion for discounting vulnerability to widespread outages, and a means of creating a priority list of significant contingencies. Presuming all single contingencies meet these criteria or have otherwise been addressed, impacts for all of these contingencies are calculated and facilities impacted by more than a specified amount are recorded. If two contingencies do not impact any of the same facilities, they are independent. Contingencies are discounted by eliminating those whose impacts are independent. For those that are not independent, the system is simulated with a method that is similar to the cluster method. The process repeats until one of four conditions is reached: 1) the case solves without violations; 2) the next load drop would exceed the user-specified maximum load drop; 3) a low-voltage condition is encountered, indicating that load drop is warranted, but there is no load in the vicinity of the voltage violation to drop; or 4) the power flow case cannot be solved even after application of the load drop procedure. If the case solves without violations, it is concluded that there is not a substantial vulnerability to widespread outages. If one of the other conditions applies, it is concluded that cascading cannot be precluded.

Once all initiating combinations of multiple contingencies that might cascade have been identified, they are prioritized to create a list that can be analyzed for solutions. The algorithm is simple and intuitive: for each single contingency that is a part of a double contingency initiating event, a count is accumulated. The list of single contingencies is then sorted by the accumulated totals, and the list is processed starting from the most infrequently occurring contingency to the most frequently occurring, eliminating contingencies in the list until a double contingency occurs only once in the list. The remaining list is a list of single contingencies that most impact the potential failure of the system. These contingencies can be addressed by engineering and analysis, and the model rerun with the proposed improvement.

3) Uniform Sampling: The uniform sampling approach examines a random and representative subset of all possible cascading scenarios. Depending on the statistic being sought, if the sampling is done well and there are a sufficient number of random trials, the risk statistics that result can approach what one would obtain from an exhaustive analysis [11], [27], [37]. Cascades may arise from single initiating events or from multiple independent events that, together, take the system into a state from which a cascade begins. Taking such a possibility into account, a typical approach to sampling might, for one trial, randomly sample one possible, independent initiating event. The resulting state in the trial would then be examined and would be subject to similar judgments concerning the modeling of the system and remedial actions as described above. These would include judgments regarding the modeling

of detailed engineering phenomena. Given suitable data and software to incorporate relevant models, in theory these could be reproduced precisely. However, computational time and the lack of accurate data often dictate that phenomena such as operator response and operation of protection (which might, for example, cause network branches or generating units to be lost from service) are treated as stochastic. These events might also be sampled in the same set of trials. One challenge is that most of the individual events are very rare. This leads to many trials being needed for a particular degree of confidence to be achieved for any given risk statistic. The number of trials, and hence the computation time for a risk statistic, can be reduced by appropriate strategies under the collective heading of variance reduction. For example, importance sampling biases the sampling towards those events that are most likely to test the phenomena under investigation, so that they appear more often, and then corrects the bias in the summary statistics. Examples of a sampling approach include [38] and [39]. Reference [37] describes an extension of the work described in [38] to include variance reduction and compare different operational scenarios in a reduced computation time. A simulation approach that represents the complex systems feedbacks that shape the slow upgrade of the power system is discussed in [11]. One aspect is that combinations of operating states, initiating events, and the way the cascade progresses are all sampled.

4) Enumeration Technique Including Operator Intervention and Automatic Protection Characteristics: Another approach to cascading failure risk assessment is to use a sequential Monte Carlo model [40]. First, using the forced outage and unavailability rates for the generators and lines of a system, an hour-by-hour state model of the operating components in the system can be determined. Then, using projected loading data for the system, an hourly power flow is calculated to determine how the system will behave given the current availability of components. The calculation of this power flow is contingent on whether or not any components are unavailable for a given hour, with the assumption that system operation with no unscheduled outages should also suffer no instabilities, thermal overloads, or voltage violations.

If a line or generator has failed during a given hour, a stability test is performed first using a direct method which can produce results without the need for detailed dynamic data. If it is determined that the system has arrived at an unstable point of operation, checks are performed to see if system stability can be regained via some action performed by the protection system or the human operators. If not, then the area of impact for the instability will be determined, as well as the monetary cost of this catastrophic failure. If, however, the system is found to be stable, then a power flow is calculated to identify thermal overloads or voltage violations. If thermal overloads are found, a test is performed to determine whether the overloading has exceeded the short-term or the long-term loading limit. If the short-term limit has been breached, it is assumed that the protection system will operate with a given probability of success, tripping the line and removing it from service. If the protection system does not operate successfully, there is an additional chance that the


operator monitoring the system will be able to take some actions, such as load shedding, in order to mitigate the overload. If the long-term limit has been breached, it is assumed that the operator has sufficient time to react to the disturbance and has the option of re-dispatching generation, shedding load, or closing relays in order to mitigate the problem. In both cases, there is an inherent probability that no action will be taken whatsoever, corresponding to such instances as a relay failing to operate or an error in Supervisory Control and Data Acquisition (SCADA) transmission resulting in faulty data. In the instance of over- and under-voltages, the operator may also react by switching on reactor/capacitor banks or shedding load in order to mitigate the voltage violations. In any case in which action is taken in order to avert a system problem, another power flow is performed and the thermal and voltage limits are re-examined. This back and forth cycling continues until no violations remain, the power flow cannot be solved, or a set number of iterations is reached, the latter signifying the presence of a probable cascading event.

After the responses of the system, protection, and operator actions are determined given the hourly state of loading and components, the information pertaining to the event is logged and an hourly risk assessment representing the cost of failure is stored for this hour. The current state of components after any relay or operator action is then examined to determine any effect on the next hour's state of components, such as the time taken before a tripped relay can be reclosed, returning a previously overloaded line to service.

D. Bulk Analysis Methods

Bulk analysis models seek to assess risk at a high level without a detailed system model. These methods are complementary to detailed simulation and provide different capabilities.

1) Historical Blackout Data: The most basic method of assessing blackout risk is to identify trends in historical blackout records [10], [16], [41]. In this approach, records of the timing and size of transmission line outages or demand interrupted are compiled into aggregate measures related to system risk. Blackout data can be used to estimate, for example, the probability of blackouts in various size ranges [10], the rate at which outages branch into dependent outages [42], or temporal trends in blackout frequencies [16]. Historical data in many regions show that the distribution of blackout sizes has a heavy tail, and this, together with the large cost of large blackouts, makes the risk of large blackouts non-negligible. This fact underscores the importance of understanding large cascading failures and attempting to mitigate them. The historical distribution of blackout sizes is also very important in providing a benchmark against which to validate different simulation methods. Historical methods cannot be used to directly measure real-time risk, since state data are not typically included in the analysis. It may, however, be possible to adjust the historical average risk based on current conditions (such as the time of day, season, etc.). Historical blackout data methods obviously have no modeling assumptions, but they are limited to the historical record in terms of studying changes in risk and thus have little to no predictive capability.
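As a minimal sketch of this kind of bulk analysis, the snippet below bins a fictitious list of blackout records by size (MW interrupted) and reports each bin's event count and share of the total demand interrupted, a crude proxy for each size category's contribution to risk in the spirit of Fig. 2. The records and bin edges are invented for illustration and are not NERC data.

# Minimal sketch of bulk analysis of historical blackout records: bin events by
# size and report each bin's frequency and share of total MW interrupted, a
# crude proxy for its contribution to blackout risk. The records and bin edges
# are invented for illustration; they are not NERC data.
from collections import Counter

records_mw = [320, 450, 900, 1200, 60, 75, 5000, 150, 2300, 40, 18000, 700]
bin_edges = [0, 300, 1000, 3000, 10000, float("inf")]

def bin_label(mw):
    for lo, hi in zip(bin_edges, bin_edges[1:]):
        if lo <= mw < hi:
            return f"{lo}-{hi} MW"

counts = Counter(bin_label(mw) for mw in records_mw)
mw_by_bin = Counter()
for mw in records_mw:
    mw_by_bin[bin_label(mw)] += mw

total_mw = sum(records_mw)
for label in counts:
    print(f"{label}: {counts[label]} events, "
          f"{100 * mw_by_bin[label] / total_mw:.1f}% of total MW interrupted")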

As regulatory organizations, such as NERC, increase data collection regarding historical reliability events, new types of analyses will become feasible and valuable. For example, some analysis methods screen N-2 or higher-order contingencies to sharply reduce the number of contingencies studied. With increased historical data collection, screening procedures could be empirically evaluated by determining the number of historical blackouts that would have been identified by screening, or whether the blackouts would have been mitigated with the remedial actions based on the screened contingencies.

2) High-Level Statistical Models: High-level statistical models of cascading may be useful in quantifying and monitoring cascading data that are either observed in the power system or produced by simulations. High level models are complementary to detailed analyses in that they summarize some key features of the cascading process and neglect most of the details of the cascading. The parameters of these high level models can be estimated from much shorter observations or many fewer simulation runs than directly estimating the distribution of blackout size requires. (Directly estimating the distribution of blackout size by waiting for enough rare large blackouts to occur for good statistics to be accumulated generally takes too long.) For example, simple branching process models have parameters that measure the average size of the initiating failures and an average tendency for the failures to propagate. The distribution of blackout sizes, and hence blackout risk, can be estimated from these two parameters. Any high-level statistical model requires validation, and there is some evidence that branching process models [42] and the CASCADE model [43] can produce blackout size distributions similar to those observed in power systems [1], [44] and produced by simulations of cascading line overloads [42]. Much of the testing to date considers the number of transmission lines outaged as a measure of blackout size, but the load shed is also initially tested in [45].

In branching process and CASCADE probabilistic models, there are many identical components that can fail, but no direct representation of the power system. The failures of the components are produced in generations or stages. In the branching process, each failure independently produces a given distribution of failures in the next generation, and the process stops when either no new failures are produced or all the components have failed (a minimal simulation sketch of such a branching process is given at the end of this section). In the CASCADE model, each component starts with a random loading in some specified range and has a threshold load at which it fails. When a component fails, the load of all the other components increases, possibly causing further failures. These models are simple enough that, in their simplest forms, there are analytic formulas for the distribution of the total number of components failed.

E. Strengths and Weaknesses of Risk Analysis Methodologies

Table I summarizes the opinion of the Task Force regarding the strengths and weaknesses of the methodologies described in the present paper. It may be noted that deterministic simulation produces a set of cascades resulting from a contingency list of credible contingencies, whereas probabilistic simulation uniformly samples from the possible cascades to evaluate event probabilities and risks.
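The following is the minimal simulation sketch referred to above: a Galton-Watson branching process in which each failure independently produces a Poisson-distributed number of failures in the next generation, with mean equal to a propagation parameter lam. The parameter values are arbitrary illustrations; this is not the estimation procedure of [42] nor the CASCADE model of [43].

# Minimal sketch of a Galton-Watson branching process as a high-level cascade
# model: generation 0 is the set of initiating failures, and each failure
# independently produces Poisson(lam) failures in the next generation. The
# parameter values are arbitrary illustrations, not estimates from real data.
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson random variable (fine for small lam)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def branching_cascade_size(initial_failures, lam, rng, max_failures=10_000):
    total = current = initial_failures
    while current > 0 and total < max_failures:
        current = sum(poisson(lam, rng) for _ in range(current))
        total += current
    return total

rng = random.Random(0)
sizes = [branching_cascade_size(initial_failures=2, lam=0.8, rng=rng)
         for _ in range(10_000)]
print(f"mean cascade size: {sum(sizes) / len(sizes):.1f}, "
      f"P(size >= 20): {sum(s >= 20 for s in sizes) / len(sizes):.3f}")

With a subcritical propagation parameter (lam < 1), the expected total size starting from m initiating failures is m/(1 - lam); as lam approaches 1, the distribution of total size develops a heavy tail, which is the feature such high-level models use to reproduce observed blackout size distributions.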


TABLE I SUMMARY TABLE OF CASCADING TOOLS

V. FUTURE WORK TO ENHANCE ANALYSIS OF CASCADING OUTAGES

In addition to a general need for understanding and correctly framing the risk of cascading failure, the Task Force considers that the most important directions for future work are as follows.
1) Validation of all methods against observed real data. A challenge to validation is that industry data are either not systematically collected or are kept confidential, which prevents tool developers and researchers from performing the validation that is needed to advance the state of the art. We recommend that industry organizations work to develop methods so that data can be shared with research and industry parties, under appropriate confidentiality agreements.
2) Improving methods for sampling the initial conditions and events that trigger cascades. More work is needed to validate heuristic methods for pruning the set of simulations to perform, as well as to establish the statistical validity of sampling methods.
3) Re-evaluating the cascade mechanisms that need to be modeled and the modeling detail that is required.
With respect to point 3 above, modeling more mechanisms and increasing the detail of the modeling is expected to be required to address some questions. However, indiscriminate increases in modeling detail are not feasible. Statistical models may sometimes be needed for tractability. Bulk analysis methods that may leverage understanding of cascading and require less modeling detail should also be pursued. It is particularly important, as new methods for risk assessment of cascading failure are developed, that the community is open to new ideas in their nascent forms and develops methods that can be thoroughly grounded in science and industry practice. We now discuss some specific directions in which simulation methods may be improved.

A. Steady-State Analysis: More Accurate Modeling of the Protection Devices

Sequential steady-state (power-flow) analysis of cascading outages will continue to be an important methodology. One of the major assumptions of this methodology is the use of an arbitrary value for the line tripping threshold, which varies from study to study and from utility to utility. A line tripping threshold is used to simulate the operation of protection devices in a steady-state environment. It is often assumed that if an initiating event causes branches to become loaded above a threshold, protection schemes will trip the overloaded elements. (In some methods, in order to represent variation in actual thresholds, the tripping action is sampled; see the sketch after this subsection.) This is a sensitivity parameter, and study results depend significantly on its values. Since protective relays were involved in 75% of major disturbances reported by NERC from 1984 to 1988 [28], it is very important to improve the modeling of protective relays in cascading outage analysis. More detailed current approaches range from modeling with protection control groups [46] to detailed relay models [39]. Future work should concentrate on 1) developing requirements for the corresponding input data for analysis and modeling, and 2) investigating what the minimal necessary set of additional data on relays and their set points should be.
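A minimal sketch of sampled tripping is given below: instead of a single hard threshold, each branch loaded above its rating trips with a probability that rises linearly up to an upper threshold at which tripping is certain. The ramp shape and the numbers are invented for illustration and are not drawn from any of the tools cited above.

# Minimal sketch of sampled line tripping: rather than tripping deterministically
# at a fixed threshold, a branch loaded between 100% of its rating and an upper
# threshold trips with a probability that rises linearly with loading. The ramp
# and the numbers are invented for illustration, not taken from the cited tools.
import random

UPPER_THRESHOLD = 1.4   # loading (per unit of rating) at which tripping is certain

def trip_probability(loading_pu):
    if loading_pu <= 1.0:
        return 0.0
    if loading_pu >= UPPER_THRESHOLD:
        return 1.0
    return (loading_pu - 1.0) / (UPPER_THRESHOLD - 1.0)

def sample_trips(loadings_pu, rng):
    """Return the set of branches sampled to trip at this cascade tier."""
    return {name for name, pu in loadings_pu.items()
            if rng.random() < trip_probability(pu)}

rng = random.Random(1)
loadings = {"L1": 0.95, "L2": 1.15, "L3": 1.5}
print(sample_trips(loadings, rng))   # L3 always trips; L2 trips about 38% of the time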

B. Analysis of Cascading Outages From a Stability Perspective

Another very important direction is the analysis of cascading outages from a transient or mid-term stability perspective, since many power system networks are already stability limited. The importance of this can be highlighted with respect to planned levels of renewable generation. Most new wind turbines are doubly-fed induction generators (DFIGs); these will continue to be deployed in large numbers in the near future alongside synchronous generators connected via fully rated converters (FRCs). While many grid codes now specify that wind farms should be capable of riding through certain low voltage conditions, both DFIGs and FRC-connected machines, without specialized control, do not contribute to the system inertia. Currently, under/over frequency relays (e.g., protection devices) are designed for systems with significant amounts of traditional generation; without appropriate modification of protection settings, the replacement of traditional generators with DFIG or FRC wind turbines could result in false tripping of relays, which increases the possibility of cascading outages. Further stability issues arise from higher transfers of power, both to facilitate more widely integrated markets and to accommodate the operation of wind farms. This often brings with it voltage stability problems, exacerbated by the relative lack of reactive power capability from wind turbines and the replacement of reactive power reserves held by traditional generators with capacitor banks or SVCs.

C. Identifying Remedial Actions/Special Protection Schemes to Prevent/Mitigate Cascades

One important use for risk assessment methods is to enable system planners and operators to identify the need for remedial action schemes and to quantify their benefits (or hazards). An appropriate level of detail and explanation is necessary if


investments in new facilities are to be made and operators can have confidence in them [9].

D. Increasing the Speed of Computations

While complete enumeration approaches to risk analysis are infeasible no matter what the computing configuration is, much can be done to decrease the amount of time required for individual cascading failure simulations (see, e.g., [47]). Similarly, Monte Carlo approaches lend themselves to parallel computing methods, which is an area of ongoing power systems research and practice [48]–[51].

E. Enhancing Power Flow Models for Analysis of Cascading Outages

At present, the models used to analyze cascading failure events in a planning (offline) environment are in general different from the models within the real-time Energy Management System (EMS). Traditionally, planning models use a bus-branch representation and EMS models a node-breaker representation. However, cascade mechanisms generally depend on specific circuit breaker openings, meaning that bus-branch models, except in a bulk analysis mode, are insufficient. Maintaining two separate models has led to a huge expenditure of person-hours to align all the data necessary for this cascading type of contingency analysis. By better aligning operation and planning models, several benefits will come of the effort:
• Model maintenance becomes greatly simplified because there is only one model to maintain, the full-topology model.
• Seamless exchange of data between operations and planning (online data structures for contingencies, interface definitions, etc.).
• Full interoperability among models would significantly improve analysis of cascading failure events.
• Benchmarking of operation and planning results becomes relatively consistent.

VI. CONCLUSION

The present paper has presented the work of the IEEE Task Force on Understanding, Prediction, Mitigation, and Restoration of Cascading Failures. As cascading failures continue to contribute significantly to blackout risk, there is a need for greater understanding of cascading outages and how major blackouts arise. This understanding is necessary for appropriate decisions to be taken in operational, operational planning, and investment planning timescales, and for informing regulatory and utility policy.

Risk assessment methodologies currently utilized by the industry for analysis of cascading outages have been summarized. Two classes of methods for analysis of cascading outages have been discussed: detailed modeling and simulation methods, and bulk analysis methods. The complexity of cascading outages makes enumeration of all possibilities impossible. Some degree of approximation is therefore necessary, whether in terms of the individual events by which cascades might be propagated or the modeling of the physical phenomena they involve. Many current approaches involve a pruning of the set of combinations of individual outages;

these generally fall into one of two classes: random sampling or the use of heuristics. A review of the state of the art as expressed through these general approaches has been presented, along with some examples and a discussion of modeling approximations. The risk that rare, very large disturbances are masked by more frequent, smaller disturbances has been highlighted. The criteria by which different approaches to the study of cascading outages might be judged have been presented in order to help utilities and regulators understand the state of the art and the uses to which new methodologies might be put. These criteria include the accuracy of simulation, the computation time, the dependency on large sets of data, and whether probabilities are required to be computed. A number of recommendations are made for future work to enhance analysis of cascading outages. These include: validation of all methods against observed real data; improvement in the methods for sampling cascades; and a re-evaluation of the cascade mechanisms that need to be modeled and the modeling detail that is required. Given the scale of the effort required and the enormity of the challenges ahead, collaboration among policy makers, utilities, vendors, and research organizations is essential to solve this challenging problem.
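To make the node-breaker versus bus-branch distinction of Section V-E concrete, the following is a minimal, hypothetical topology-processing sketch (it is not taken from any of the tools cited in this paper): nodes joined by closed switching devices are merged into buses with a union-find pass, so opening a single breaker can split a bus, which a fixed bus-branch model cannot represent without re-deriving its topology. All names and the example data are illustrative assumptions.

```python
"""Illustrative sketch: collapse a full node-breaker model into bus-branch buses."""
from typing import Dict, List, Tuple

def build_buses(nodes: List[str],
                breakers: List[Tuple[str, str, bool]]) -> Dict[str, int]:
    """Map each physical node to a bus index. `breakers` holds
    (from_node, to_node, is_closed) for breakers and disconnectors."""
    parent = {n: n for n in nodes}

    def find(x: str) -> str:             # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b, closed in breakers:
        if closed:                       # only closed devices merge nodes into one bus
            parent[find(a)] = find(b)

    roots = {find(n) for n in nodes}
    index = {r: i for i, r in enumerate(sorted(roots))}
    return {n: index[find(n)] for n in nodes}

# Opening the breaker between N2 and N3 splits one bus into two.
nodes = ["N1", "N2", "N3", "N4"]
closed_all = [("N1", "N2", True), ("N2", "N3", True), ("N3", "N4", True)]
one_open = [("N1", "N2", True), ("N2", "N3", False), ("N3", "N4", True)]
print(build_buses(nodes, closed_all))  # {'N1': 0, 'N2': 0, 'N3': 0, 'N4': 0}
print(build_buses(nodes, one_open))    # {'N1': 0, 'N2': 0, 'N3': 1, 'N4': 1}
```

Because a full-topology model carries breaker statuses as inputs, the bus-branch view used by a cascading simulation can be re-derived after every trip; a stand-alone bus-branch model discards that information.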

REFERENCES
[1] H. Ren and I. Dobson, "Using transmission line outage data to estimate cascading failure propagation in an electric power system," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 55, no. 9, pp. 927–931, Sep. 2008.
[2] R. Baldick, B. Chowdhury, I. Dobson, Z. Dong, B. Gou, D. Hawkins, H. Huang, M. Joung, D. Kirschen, F. Li, J. Li, Z. Li, C.-C. Liu, L. Mili, S. Miller, R. Podmore, K. Schneider, K. Sun, D. Wang, Z. Wu, P. Zhang, W. Zhang, and X. Zhang, "Initial review of methods for cascading failure analysis in electric power transmission systems," in Proc. IEEE Power and Energy Society General Meeting, Jul. 2008.
[3] Final Report on the August 14, 2003 Blackout in the United States and Canada, US-Canada Power System Outage Task Force, Tech. Rep., 2004.
[4] Final Report, System Disturbance on 4 Nov. 2006, Union for the Co-ordination of Transmission of Electricity, Tech. Rep., 2007.
[5] "Dam failure triggers huge blackout in Brazil," CNN, 2009.
[6] M. Morgan and M. Henrion, Uncertainty: A Guide to Dealing With Uncertainty in Quantitative Risk and Policy Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1990.
[7] Transmission Relay Loadability, NERC Standard PRC-023, Feb. 2008.
[8] System Operating Limits Methodology for the Operations Horizon, NERC Standard FAC-011-2, Nov. 2006. [Online]. Available: http://www.nerc.com/files/FAC-011_2.pdf
[9] CIGRE Working Group C1.17, Planning to Manage Power Interruptions, Technical Brochure. Paris, France: CIGRE, 2010.
[10] B. A. Carreras, D. E. Newman, I. Dobson, and A. B. Poole, "Evidence for self-organized criticality in a time series of electric power system blackouts," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 51, no. 9, pp. 1733–1740, Sep. 2004.
[11] I. Dobson, B. A. Carreras, V. E. Lynch, and D. E. Newman, "Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization," Chaos, vol. 17, 026103, Jun. 2007.
[12] A. Holmgren and C. Molin, "Using disturbance data to assess vulnerability of electric power delivery systems," J. Infrast. Syst., vol. 12, no. 4, pp. 243–251, 2006.
[13] J. Ø. H. Bakke, A. Hansen, and J. Kertész, "Failures and avalanches in complex networks," Europhys. Lett., vol. 76, no. 4, pp. 717–723, 2006.
[14] G. Ancell, C. Edwards, and V. Krichtal, "Is a large scale blackout of the New Zealand power system inevitable?," in Proc. Electricity Engineers Association 2005 Conf. "Implementing New Zealand's Energy Options", Auckland, New Zealand, 2005.


[15] X. Weng, Y. Hong, A. Xue, and S. Mei, "Failure analysis on China power grid based on power law," J. Control Theory Appl., vol. 4, no. 3, pp. 235–238, Aug. 2006.
[16] P. Hines, J. Apt, and S. Talukdar, "Large blackouts in North America: Historical trends and policy implications," Energy Policy, vol. 37, no. 12, 2009.
[17] CIGRE Working Group C1.2, Maintenance of Acceptable Reliability in an Uncertain Environment, Technical Brochure 334. Paris, France: CIGRE, Dec. 2007.
[18] S. Vajjhala and P. Fischbeck, "Quantifying siting difficulty: A case study of U.S. transmission line siting," Energy Policy, vol. 35, pp. 650–671, 2007.
[19] P. Slovic, B. Fischhoff, and S. Lichtenstein, "Facts and fears: Understanding perceived risk," Policy and Practice in Health and Safety, vol. 38, no. 2, pp. 65–102, 1979, Supplement.
[20] CIGRE Task Force C2.02.24, Defense Plan Against Extreme Contingencies, Technical Brochure 316. Paris, France: CIGRE, Apr. 2007.
[21] M. K. Koenig, P. Duggan, J. Wong, M. Y. Vaiman, M. M. Vaiman, and M. Povolotskiy, "Prevention of cascading outages in Con Edison's network," in Proc. IEEE T&D Conf., Apr. 2010.
[22] M. Papic, M. Y. Vaiman, and M. M. Vaiman, "Determining a secure region of operation for Idaho Power Company," in Proc. IEEE Power Engineering Society General Meeting, San Francisco, CA, Jun. 2005.
[23] Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures of the IEEE Computing & Analytical Methods (CAMS) Subcommittee, "Survey of tools for risk assessment of cascading outages," in Proc. IEEE Power & Energy Society General Meeting, Detroit, MI, Jul. 2011.
[24] A. Berizzi and M. Sforna, "Dynamic security issues in the Italian deregulated power system," in Proc. IEEE Power Engineering Society General Meeting, 2006.
[25] National Grid, GB Security and Quality of Supply Standard, Issue 1, Sep. 2004.
[26] J.-P. Paul and K. R. W. Bell, "A flexible and comprehensive approach to the assessment of large-scale power system security under uncertainty," Int. J. Elect. Power Energy Syst., vol. 26, no. 4, pp. 265–272, 2004.
[27] R. Billinton and W. Li, Reliability Assessment of Electrical Power Systems Using Monte Carlo Methods. New York: Plenum, 1994.
[28] A. G. Phadke and J. S. Thorp, "Expose hidden failures to prevent cascading outages," IEEE Comput. Appl. Power, vol. 9, no. 3, pp. 20–23, Jul. 1996.
[29] D. Kosterev, C. Taylor, and W. Mittelstadt, "Model validation for the August 10, 1996 WSCC system outage," IEEE Trans. Power Syst., vol. 14, no. 3, pp. 967–979, Aug. 1999.
[30] B. Yang, V. Vittal, and G. Heydt, "Slow-coherency-based controlled islanding: A demonstration of the approach on the August 14, 2003 blackout scenario," IEEE Trans. Power Syst., vol. 21, no. 4, pp. 1840–1847, Nov. 2006.
[31] N. Bhatt, S. Sarawgi, R. O'Keefe, P. Duggan, M. Koenig, M. Leschuk, S. Lee, K. Sun, V. Kolluri, S. Mandal, M. Peterson, D. Brotzman, S. Hedden, E. Litvinov, S. Maslennikov, X. Luo, E. Uzunovic, B. Fardanesh, L. Hopkins, A. Mander, K. Carman, M. Y. Vaiman, M. M. Vaiman, and M. Povolotskiy, "Assessing vulnerability to cascading outages," in Proc. IEEE PSCE 2009, Mar. 15–18, 2009, pp. 1–9.
[32] X. Wang and V. Vittal, "System islanding using minimal cutsets with minimum net flow," in Proc. IEEE PES Power Systems Conf. Expo., 2004.

[33] Physical and Operational Margin (POM) Program Manual, V&R Energy Systems Research, Inc., Los Angeles, CA, 2010.
[34] Optimal Mitigation Measures (OPM) Program Manual, V&R Energy Systems Research, Inc., Los Angeles, CA, 2010.
[35] S. S. Miller, "Extending traditional planning methods to evaluate the potential for cascading failures in electric power grids," in Proc. 2008 IEEE PES General Meeting (Transmission 2000), panel paper 08GM1365.
[36] R. Y. Rubinstein, Simulation and the Monte Carlo Method. New York: Wiley, 1981.
[37] K. R. W. Bell, D. S. Kirschen, R. N. Allan, and P. Kelen, "Efficient Monte Carlo assessment of the value of security," in Proc. 12th Power Systems Computation Conf., Trondheim, Norway, Jun. 1999.
[38] D. S. Kirschen, K. R. W. Bell, D. P. Nedic, D. Jayaweera, and R. N. Allan, "Computing the value of security," Proc. Inst. Elect. Eng., Gen., Transm., Distrib., vol. 150, no. 6, pp. 673–678, Nov. 2003.
[39] X. Yu and C. Singh, "A practical approach for integrated power system vulnerability analysis with protection failures," IEEE Trans. Power Syst., vol. 19, no. 4, pp. 1811–1820, Nov. 2004.
[40] J. Rossmaier and B. H. Chowdhury, "The development of a new indicator of system vulnerability, the cascade failure risk index," in Proc. IEEE PES General Meeting, Minneapolis, MN, Jul. 25–29, 2010.
[41] D. Cornforth, "Long tails from the distribution of 23 years of electrical disturbance data," in Proc. IEEE PES Power Systems Conf. Expo., Seattle, WA, Mar. 2009.
[42] I. Dobson, J. Kim, and K. R. Wierzbicki, "Testing branching process estimators of cascading failure with data from a simulation of transmission line outages," Risk Anal., vol. 30, no. 4, pp. 650–662, 2010.
[43] I. Dobson, B. A. Carreras, and D. E. Newman, "A loading-dependent model of probabilistic cascading failure," Probab. Eng. Inf. Sci., vol. 19, no. 1, pp. 15–32, Jan. 2005.
[44] Q. Chen, C. Jiang, W. Qiu, and J. D. McCalley, "Probability models for estimating the probabilities of cascading outages in high-voltage transmission network," IEEE Trans. Power Syst., vol. 21, no. 3, pp. 1423–1431, Aug. 2006.
[45] J. Kim and I. Dobson, "Propagation of load shed in cascading line outages simulated by OPA," in Proc. COMPENG 2010: Complexity in Engineering, Rome, Italy, Feb. 2010.
[46] Transmission Reliability Evaluation for Large-Scale Systems (TRELSS): Version 6.0 User's Manual, EPRI, Palo Alto, CA, 2000, 1001035.
[47] S. Khaitan, J. McCalley, and Q. Chen, "Multifrontal solver for online power system time-domain simulation," IEEE Trans. Power Syst., vol. 23, no. 4, pp. 1727–1737, Nov. 2008.
[48] C. Lemaitre and B. Thomas, "Two applications of parallel processing in power system computation," IEEE Trans. Power Syst., vol. 11, no. 1, pp. 246–253, Feb. 1996.
[49] C. Borges, D. Falcao, J. Mello, and A. Melo, "Composite reliability evaluation by sequential Monte Carlo simulation on parallel and distributed processing environments," IEEE Trans. Power Syst., vol. 16, no. 2, pp. 203–209, May 2001.
[50] Q. Morante, N. Ranaldo, A. Vaccaro, and E. Zimeo, "Pervasive grid for large-scale power systems contingency analysis," IEEE Trans. Ind. Informat., vol. 2, no. 3, pp. 165–175, Aug. 2006.
[51] W. Li, Risk Assessment of Power Systems: Models, Methods, and Applications. New York: Wiley-IEEE, 2004.

