by the system performance [171] and not by the health indexes of individual components, that is, they are system‐level failures rather than component‐level failures [172–174]. Control systems may still operate normally even if the component health indexes exceed failure thresholds [157,175]. In addition, the feedback control mechanisms hide the explicit mapping from the individual component degradation state to the control performance loss [175]. It is easy to estimate the RUL of an individual component, but difficult for degraded control systems, because their failure time cannot be described by any particular distribution [171,176]. Therefore, when applied to degraded control systems, general CBM models should connect the component degradation and the control performance loss, so that maintenance activities depend on the reduction of control performance [176].
The Wiener process is often employed in degradation models because of its favorable mathematical properties; in particular, because its independent increments are normally distributed, it can capture the non‐monotonic degradation signals frequently encountered in practical applications [172]. Therefore, this stochastic process has been widely applied to characterize the path of degradation in realistic scenarios, where fluctuations are observed in the degradation process, for example, brake‐pad wear for automobiles [177], bearing degradation [174,178], gyroscopes in inertial navigation systems [172,179,180], contact image sensors in copy machines [181], the resistance of carbon‐film resistors [159], and the pitting corrosion process [167]. To overcome the aforementioned limitations, the Wiener degradation model with unit‐to‐unit variability is introduced to describe gas turbines exhibiting different lifetimes. The Wiener model considers the random starting time of the degradation process, which follows a non‐homogeneous Poisson process [167,182]. Furthermore, the drift parameter, which denotes the aging rate, is also variable and follows a normal distribution, where the mean denotes the average aging rate and the variance represents the unit‐to‐unit variability in the aging rate [170,183–186]. These parameters can be estimated from the lifetime dataset via an expectation‐maximization (EM) algorithm [172,180,183–186].
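For illustration, the following Python sketch simulates a Wiener degradation path with unit‐to‐unit variability: the drift (aging rate) is drawn once per unit from a normal distribution, and degradation starts at a random time. The parameter values, and the use of an exponential start time as a simple stand‐in for the more general non‐homogeneous Poisson process, are illustrative assumptions rather than the models of the cited works; the EM step for estimating the parameters from fleet data is likewise omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_wiener_degradation(t_end=500.0, dt=1.0,
                                mu_drift=0.02, sigma_drift=0.005,  # mean aging rate and its unit-to-unit variability (assumed)
                                sigma_B=0.1,                       # diffusion coefficient (assumed)
                                start_rate=0.01):                  # rate of the (here homogeneous) start-time process
    """Simulate one unit's degradation path X(t).

    The drift is drawn once per unit from N(mu_drift, sigma_drift^2),
    the start time t0 from an exponential distribution (a simplified
    stand-in for a non-homogeneous Poisson process), and the path
    evolves as X(t) = drift * (t - t0) + sigma_B * B(t - t0).
    """
    t = np.arange(0.0, t_end + dt, dt)
    drift = rng.normal(mu_drift, sigma_drift)   # unit-specific aging rate
    t0 = rng.exponential(1.0 / start_rate)      # random degradation start time
    x = np.zeros_like(t)
    active = t > t0
    increments = drift * dt + sigma_B * np.sqrt(dt) * rng.normal(size=active.sum())
    x[active] = np.cumsum(increments)
    return t, x

# Example: simulate a small fleet and record first-passage times of a failure threshold
threshold = 5.0
lifetimes = []
for _ in range(1000):
    t, x = simulate_wiener_degradation()
    crossed = np.nonzero(x >= threshold)[0]
    lifetimes.append(t[crossed[0]] if crossed.size else np.inf)
lifetimes = np.asarray(lifetimes)
print("median simulated lifetime:", np.median(lifetimes[np.isfinite(lifetimes)]))
```

The spread of the simulated first‐passage times reflects both the randomness of the degradation path and the unit‐to‐unit variability of the aging rate.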
Thus, the data‐driven degradation model with unit‐to‐unit variability is integrated into the control system model described by a control block diagram, resulting in a real‐time simulation model. In such control models, the interplay among the reduction in the control signal due to component degradation, the transfer functions of the subsystems, and the feedback control loop provides the mapping between the component degradation states and the system performance loss. This interaction is modeled via control‐block diagrams, which implement the feedback control mechanism and quantify the control signal by comparing the control performance to the setpoint. Therefore, such an integrated model does not require an explicit mapping from the component degradation states to the system performance loss and is well suited to represent a degraded control system. As such, this simulation model realistically predicts the performance of the control system at different operating times and degradation stages.
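A minimal sketch of such an integrated simulation is given below, assuming a first‐order plant under PI control in which the degradation level simply scales down the effective actuation; the plant, controller gains, and the linear degradation‐to‐gain‐loss mapping are illustrative choices, not the specific control block diagram of the case study.

```python
import numpy as np

def simulate_degraded_loop(degradation, dt=0.01, t_end=20.0,
                           kp=2.0, ki=1.0, tau=1.0, setpoint=1.0):
    """Closed-loop response of a first-order plant dy/dt = (-y + u_eff)/tau
    under a PI controller, where the effective control signal is reduced
    by the (assumed) degradation level of the actuating component:
    u_eff = (1 - degradation) * u.
    """
    n = int(t_end / dt)
    y, integ = 0.0, 0.0
    err_hist = np.empty(n)
    for k in range(n):
        err = setpoint - y                 # feedback: compare performance to setpoint
        integ += err * dt
        u = kp * err + ki * integ          # PI control law
        u_eff = (1.0 - degradation) * u    # component degradation reduces the control signal
        y += dt * (-y + u_eff) / tau       # first-order plant dynamics (Euler step)
        err_hist[k] = err
    # control performance loss quantified as integrated squared error
    return np.sum(err_hist**2) * dt

for d in (0.0, 0.3, 0.6, 0.9):
    print(f"degradation {d:.1f} -> integrated squared error {simulate_degraded_loop(d):.3f}")
```

The feedback loop partially compensates for the degradation, so the performance loss grows nonlinearly with the degradation level rather than through any explicit component‐to‐system mapping.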
1.3.2 Ensuring Cybersecurity of CPSs
Attacks on complex systems, for example, CPSs, are fundamentally different from traditional internal failures (e.g., degradation and design) and external failures (e.g., natural disasters) [187–190]. Many attack models for complex systems adopt a partial perspective that focuses only on component vulnerability and neglects the dependence of system performance on it [191–193]. As a result, the insights provided by these models are inadequate for deriving general recommendations in realistic applications. To address this limitation, recent studies investigate the influence of component vulnerability (attacks at the component level) on system performance [194–197].
Pioneering works [192,198–202] develop optimal defense strategies to minimize the vulnerability of parallel systems to attacks, assuming that attackers maximize either the damage probability or the expected damage over a time horizon. They also consider general features, that is, imperfect false targets alongside genuine targets [201,203]. These defense strategies reach a trade‐off between increasing the protection of existing components and providing redundancy by allocating additional components [192,203–205].
System performance is an essential feature of CPSs, which can still operate when some components are unavailable and are, therefore, characterized by multiple performance levels [206–211]. System performance degrades as components are destroyed or become unavailable; if the system performance level decreases, the required demand may be only partially satisfied. Two risk measures can be used for multi‐state complex systems [203–205]: 1) the probability that the demand is not satisfied, for complex systems that fail if performance cannot meet demand, for example, automatic train protection and block systems [212,213] and power system dynamic security systems [190]; 2) the expected damage proportional to the unsupplied demand, for complex systems that can operate even if the demand is only partially supplied, for example, mobile ad hoc networks [191], NCSs [214,215], supervisory control and data acquisition (SCADA) systems [216,217], water distribution networks [218], and electric power grids [219–221].
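The two risk measures can be computed as follows for a hypothetical multi‐state system whose post‐attack performance distribution is known; the performance levels, probabilities, demand, and unit damage cost in this sketch are purely illustrative.

```python
import numpy as np

# Hypothetical post-attack performance distribution of a multi-state system
levels = np.array([0.0, 40.0, 70.0, 100.0])   # available performance levels (assumed units)
probs  = np.array([0.05, 0.15, 0.30, 0.50])   # probability of each level
demand = 80.0                                  # required demand (assumed)
unit_damage = 1_000.0                          # cost per unit of unsupplied demand (assumed)

# Risk measure 1: probability that the demand is not satisfied
p_unmet = probs[levels < demand].sum()

# Risk measure 2: expected damage proportional to the unsupplied demand
unsupplied = np.clip(demand - levels, 0.0, None)
expected_damage = unit_damage * np.dot(probs, unsupplied)

print(f"P(demand not satisfied) = {p_unmet:.2f}")
print(f"Expected damage         = {expected_damage:.0f}")
```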
Several works consider both the vulnerability and the performance of complex systems subject to attacks [201–207,222,223]. These works generally describe the problem as a dynamic contest between an attacker and a defender, combining a component vulnerability model with a multi‐state system performance model. The number of destroyed components determines the demand loss and the expected damage cost [200,205]. To make this contest more realistic, uncertainties in the attack time and the attacker's preference regarding the attack time should be considered.
In the literature, two different approaches exist for determining the attack time, that is, strategic selection and selection based on probability distributions. In the former, the attacker strategically decides whether to attack now or at a later point in time, based on the outcome of the game given that the attack occurs at a specific time [224]. Thus, complex attack and defense strategies can be derived from a two‐stage min‐max multi‐period game. Extensive attack or defense effort in one period limits the effort that can be exerted in the next period, and vice versa; players therefore strategically choose whether to exert effort now or in the future [224–226]. The defender may determine optimal resource allocation strategies for redundancy [192] and protection, that is, individual or overarching protection [205,227–229], whereas the attacker may distribute the constrained resources optimally across sequential attacks [230–233].
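The following toy enumeration illustrates the min‐max logic of such a sequential contest: the defender first splits a protection budget across two periods, and the attacker then best‐responds with its own budget. The contest success function a/(a+d), the equal damage values, the budgets, and the grid resolution are illustrative assumptions, not the full multi‐period formulation of the cited works.

```python
import numpy as np

def expected_damage(attack, defense, damage_per_period=(1.0, 1.0)):
    """Expected damage over two periods with a simple contest success
    function: the attack in period i succeeds with probability
    a_i / (a_i + d_i) (taken as 0 when both efforts are zero)."""
    total = 0.0
    for a, d, v in zip(attack, defense, damage_per_period):
        p = a / (a + d) if (a + d) > 0 else 0.0
        total += p * v
    return total

A_BUDGET, D_BUDGET, STEP = 1.0, 1.0, 0.05
grid = np.arange(0.0, A_BUDGET + 1e-9, STEP)

best_defense, best_value = None, np.inf
for d1 in grid:                                   # defender moves first (outer minimization)
    defense = (d1, D_BUDGET - d1)
    # attacker best-responds (inner maximization) given the observed defense
    worst = max(expected_damage((a1, A_BUDGET - a1), defense) for a1 in grid)
    if worst < best_value:
        best_defense, best_value = defense, worst

print(f"min-max defense allocation: ({best_defense[0]:.2f}, {best_defense[1]:.2f}), "
      f"guaranteed damage <= {best_value:.3f}")
```

With symmetric damage values and budgets, the enumeration recovers the intuitive result that the defender should spread protection evenly across the periods.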
In the second approach, the attacker prefers to conduct the attack at the time of a critical event [206]. Indeed, the attacks in Nice, Berlin, Manchester, and London occurred on or close to Bastille Day, Christmas, a concert, and the 2017 Champions League Final, respectively. In these cases, the defenders increased the protection level in the immediate aftermath, so it was neither worthwhile nor cost‐effective for the attacker to launch another attack within a short period. Because the attacks occurred at critical times, they could greatly influence public opinion. As a result, the attacker aims to maximize the system loss by strategically selecting a set of elements to attack based on the two‐stage min‐max game [234]. Because the distribution of the time at which the critical event occurs can be predicted, the attack time can be inferred from a data‐driven probability distribution [192]. Both approaches maximize the outcome of the game, given that the attack occurs at a specific time, under a similar system structure with variable resources.
The truncated normal distribution is used to describe the uncertainty of the most probable attack time, that is, the time of the critical events, and the accuracy of the defender's estimate of it [206,235,236]. The truncated normal distribution has been adopted to represent uncertainties in many realistic applications, for example, traffic peaks of online video websites [193], the peak season of power supplies [237–239], the peak demand of water distribution systems [240], and the rush hour of public transportation [241]. Accounting for the influence of this uncertainty increases the relevance of the insights gained for the optimal resource allocation strategy against attacks.
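As an illustration, the uncertainty in the most probable attack time can be represented with SciPy's truncated normal distribution; the event time, standard deviation, and truncation bounds below are assumed values, not parameters from the cited works.

```python
from scipy.stats import truncnorm

# Most probable attack time: centered on the critical event (illustrative values)
t_event, sigma = 100.0, 5.0      # e.g., day of the critical event and the defender's estimation uncertainty
t_lo, t_hi = 90.0, 110.0         # planning horizon within which the attack is expected

# SciPy parameterizes the truncation bounds in standard-deviation units
a, b = (t_lo - t_event) / sigma, (t_hi - t_event) / sigma
attack_time = truncnorm(a, b, loc=t_event, scale=sigma)

print("P(attack within 2 days of the event):",
      attack_time.cdf(t_event + 2) - attack_time.cdf(t_event - 2))
print("sampled attack times:", attack_time.rvs(size=5, random_state=0))
```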
CPSs are a new class of engineered complex systems characterized by tight interactions between cyber and physical components. The corruption of a small subset of their components can trigger system‐level failures that disrupt the performance of the entire system [191,215,221,242]. Previous studies on attack vulnerability and performance