A Distributed Multimodel Cosimulation Platform to Assess General Purpose Services in Smart Grids

Future smart grids with more distributed generation and flexible demand require well-verified control and management services. This article presents a distributed multimodel cosimulation platform based on the smart grid architecture model to foster general purpose services in smart grids. It aims at providing developers with support to easily set up a test-bed environment, where they can simulate realistic scenarios to assess their algorithms and services. The proposed platform takes advantage of internet-of-things communication paradigms and protocols to enable interoperability among the different models and virtual or physical devices that compose a use case. Moreover, the integration of digital real-time simulators unlocks hardware-in-the-loop features. To test the functionality of our platform, a novel scheme of fault detection, isolation, and restoration is developed, in which communication and interoperability of different functions and devices are crucial. This service is applied on a realistic portion of a power grid in Turin, Italy, where devices communicate over the Internet. Finally, the laboratory experimental results achieved during a real-time cosimulation are discussed.

I. INTRODUCTION

To manage future smart grids, general purpose services need to be developed. For this purpose, the smart grid architecture model (SGAM) offers support to design them with an architectural approach by which interoperability viewpoints can be well represented [1].
As in any other ex-ante analysis, laboratory facilities and simulation systems are needed to test and verify the performance of new methods or tools for different use cases. As a crucial requirement for the test and validation of new solutions, the use of a near real-world environment is recommended [2]. In this regard, real-time simulations (RTS) offer a path that can effectively support research efforts to meet emerging lab test requirements [3], [4].
For power system electromagnetic transient (EMT) analysis, such as fault analysis, RTSs are very reliable and offer flexibility and scalability thanks to their parallel computation features. However, to study interoperability in smart grids based on SGAM, a combination of control and communication simulators with a grid real-time simulator is needed. An overview of existing cosimulation frameworks is discussed in [5]. Some of these frameworks do not run simulations in real time, some are not scalable in terms of the number of interconnected devices (e.g., smart meters, relays, actuators, etc.), and some are quite costly in terms of lab setup.
On these premises, this article, which extends our previous work [6], proposes a distributed multimodel cosimulation platform that takes advantage of internet-of-things (IoT) communication paradigms and protocols to allow interoperability across different models [7] and many virtual or physical devices [8] (i.e., circuit breakers (CBs), reclosers, remote terminal units, smart meters, etc.) composing the simulation scenario. It also integrates digital real-time simulators (DRTS) to enable hardware-in-the-loop (HIL) simulations. HIL is very beneficial for testing and evaluating the performance of physical prototypes or newly developed algorithms as embedded software, before deploying them in real-world systems. The proposed platform has been designed to simulate, even concurrently, various general purpose services for smart grid management following an event-driven approach. Nevertheless, particular emphasis is given to addressing fast transient phenomena in smart grids, which require a fast system response, like fault detection, isolation and restoration (FDIR).
Compared with most existing cosimulation platforms, the one proposed in this article is able to concurrently meet the testing requirements of use cases with the following characteristics: 1) when many information and communication technology (ICT)-enabled devices (i.e., smart meters, actuators, bay controllers, etc.) are distributed widely in the network and their coordination through the communication network is important; 2) when smart functions or management schemes deal with fast transient phenomena; and 3) when communication latency would directly impact the performance of an application.
At its core, the platform integrates different network simulators for data transmission. Thus, realistic metropolitan area networks (MANs) can be modeled to evaluate the impact of new IoT devices and services communicating over the Internet in terms of congestion and message delays. Indeed, such delays in data transmission can impact the operational status of the smart grid [9]. We believe our platform can provide valuable support, with a minimal amount of effort required, to set up the test-bed environment and to migrate from a fully simulated environment to the real world.
The rest of this article is organized as follows. Section II reviews relevant literature solutions highlighting our contribution. Section III presents the proposed distributed multimodel cosimulation platform. Section IV introduces our use case for FDIR. Section V describes the case study and its test-bed setup. Section VI presents the experimental results. Finally, Section VII concludes this article.

II. RELATED WORKS
This section reviews literature solutions on cosimulation platforms for smart grids, which are complex networks of systems where different entities cooperate by exchanging heterogeneous information.
In [10], Hopkinson et al. present a cosimulation environment, called EPOCHS, that is based on the high level architecture (HLA) standard [20]. EPOCHS is a breakthrough technology, since it is the first known cosimulator for realizing joint simulations of power systems and communication networks. To evaluate electromagnetic scenarios involving communication networks, EPOCHS integrates 1) ns-2 [21], a communication network simulator; 2) PSCAD/EMTDC, a commercial electromagnetic transient simulator; and 3) PSLF, a commercial electro-mechanical transient simulator. INSPIRE [11] is another cosimulation platform based on HLA technology. It interconnects a power system simulator, DIgSILENT PowerFactory, with a communication network simulator, OPNET [22], and continuous time-based applications modeled in MATLAB, JAVA, GNU R, and C++. It is based on both the IEC 61850 standard and the IEC 61968/61970 common information model. Thus, INSPIRE is limited to simulating the interactions between the component and communication layers in SGAM. Shum et al. [12] presented a more complex ad-hoc cosimulation platform based on HLA and JADE, a popular multiagent platform. The platform integrates OPNET with the commercial electromagnetic transient simulator PSCAD/EMTDC. The framework aims to ease the understanding, evaluation, and debugging of distributed smart grid software.
Compared to [10]–[12], GECO [13] exploits a different approach to create a cosimulation environment between PSLF and ns-2. In GECO, cosimulation is achieved through a global event-driven mechanism that manages the cosimulation orchestration instead of HLA technology. Yang et al. [14] presented a cosimulation framework for validating distributed controls in smart grids, based on both hardware- and software-in-the-loop approaches. Controls are designed exploiting a model-driven approach based on the IEC 61499 standard. The communication among power plants and the controllers, which are developed in MATLAB/Simulink, is achieved through user datagram protocol (UDP) and transmission control protocol (TCP) sockets. Mosaik [23] is a framework for the modular simulation of active components in smart grids that also follows an event-driven approach. Mosaik has been used by Nguyen et al. [15] to integrate a DRTS (i.e., RTDS) with Omnet++ (a communication network simulator) and perform power HIL simulations.
For analyzing the performance of smart grid technology, Bian et al. [16] presented a real-time cosimulation platform that integrates Opal-RT and OPNET. The platform provides the possibility to interface the OPNET simulation environment with the real world through the Internet protocol (IP). Venkataramanan et al. [17] developed an ad-hoc cosimulation platform that can be used as a test-bed for microgrid cyber-physical analysis. It combines 1) an RTDS, 2) the common open research emulator (a.k.a. CORE), and 3) open platform communication based on FreeOPCUA. OOCoSim [18] is another cosimulation framework that integrates both OPNET and OpenDSS (a power system simulator). The OOCoSim framework has been applied only to study demand response applications in a smart grid scenario. Finally, Mirz et al. [19] proposed a cosimulation architecture for power system, communication, and market analysis in smart grids. The authors proposed a data model to integrate and allow the communication among 1) market models in Python, 2) power system simulations in Modelica, and 3) network models in ns-3.
The presented literature solutions, with the exception of [19], do not follow SGAM, which supports the design of services by highlighting the interoperability among the actors in the smart grid. Moreover, [10], [13], [17] are designed and implemented for a specific use-case scenario with particular requirements. However, these kinds of cosimulation platforms must be designed to be as flexible as possible, so as to simulate general purpose services, even concurrently, in smart grids. Thus, a multimodel approach is needed to allow cooperation among different simulation models, possibly distributed over the Internet and communicating by exploiting standard protocols and data-formats. This eases the definition of more complex smart grid scenarios. Finally, literature solutions lack the integration of IoT devices and third-party platforms that can feed the algorithm under test with real-world data, even in (near-) real-time. Furthermore, the integration of IoT communication paradigms and protocols enables the required flexibility and scalability in performing HIL cosimulations. Thus, a platform that integrates IoT protocols is ready to integrate next generation devices (e.g., real-world smart meters).
This article proposes a distributed multimodel cosimulation platform to assess and evaluate general purpose services in smart grids by implementing an event-driven approach. It exploits communication paradigms peculiar to IoT platforms to implement a flexible framework where different power grid scenarios and simulations can be executed. At its core, it integrates 1) different software simulation environments, 2) DRTS, and 3) three different communication network simulators (to be used alternately according to cosimulation requirements). Thanks to the integration of DRTS, the platform is also ready to perform HIL simulations, while by using IoT protocols, it allows interoperability with real-world IoT devices and third-party platforms.
The strength of this platform comes from its capability to host new use cases, input data, or models by means of its modules in a plug-and-play fashion, with minimal effort to change the overall setup. As an example, several distributed generators (DGs) can be integrated as either independent simulators or standalone physical generators thanks to our IoT-based architecture, which is a novel contribution to the cosimulation of electric energy systems with respect to conventional setups. The proposed platform is designed for general purpose services, but its distinguishing capability, compared to many existing platforms, is the cosimulation and realistic analysis of use cases in which low communication latency is crucial and electromagnetic analysis (EMA) is required. The coordination of switching of converters connecting emerging DGs to low-inertia systems for voltage or frequency control, and improving network reliability through advanced outage management systems, are two main examples of such use cases. In these applications, a large number of devices are also involved, which makes their fast coordination and connectivity a challenge. We believe IoT-based solutions would tackle this challenge and improve performance. Regarding advanced outage management systems, which is the context of the use case discussed in this article, a newly developed self-healing scheme named decentralized fault detection and isolation, and centralized restoration (De-FDI-CR) has been simulated by using the proposed platform. Successful deployment of such schemes creates self-healing distribution grids with high reliability and customer satisfaction.
To emphasize our contribution, Table I compares our solution with the reviewed smart-grid cosimulation environments. It highlights the following.
1) What kind of cosimulation is performed.
2) If the framework is compliant with SGAM.
3) If and what network simulator has been integrated in the framework.
4) If a DRTS is used.
5) Which are the integrated software frameworks.
6) If HIL simulations are supported.
7) If the simulation environment interacts with IoT devices.

III. DISTRIBUTED MULTIMODEL COSIMULATION PLATFORM

Fig. 1 shows the proposed distributed multimodel cosimulation platform, which has been designed to simulate and assess general purpose services for the smart management of power grids by exploiting an event-driven approach. It follows the interoperability and inter-dependency among layers depicted in SGAM [1]. Its modularity takes advantage of the microservices software design pattern, which consists of developing software as a suite of small services, each running in its own process and communicating with lightweight mechanisms [24]. This increases flexibility and maintainability because services are small, highly decoupled, and focused on doing a small task [25]. The platform implements IoT communication paradigms and technologies to allow fast bidirectional communication among its entities, either hardware or software. In particular, it exploits the two main communication paradigms: 1) publish/subscribe [26] through the MQTT protocol [27] and 2) request/response offering representational state transfer (REST) web services [28]. Publish/subscribe allows asynchronous communication that complements request/response. Furthermore, publish/subscribe enables the development of scalable, loosely coupled, event-driven systems, services, and applications that can react in (near-) real-time to certain events [29]. All these features ease developers in creating new simulation scenarios in a plug-and-play fashion, also integrating third-party software. Moreover, our solution provides support to migrate from a fully simulated environment to the real world. The rest of this section describes all the modules of each layer in the platform.

A. Component Layer
The component layer represents all the interacting entities, either hardware or software, needed to build a service operating in a smart grid scenario following a multimodel approach. As shown in Fig. 1, it is made up of three macroblocks: 1) the simulated environment, 2) the real-world environment, and 3) the integration module.
On the one hand, the simulated environment consists of both DRTSs (e.g., Opal-RT and RTDS) and different software simulation frameworks (e.g., MATLAB and Simulink), where the models are executed. Thanks to the real-time capabilities provided by this macroblock, no significant latency is introduced into the simulations. On the other hand, the real-world environment consists of heterogeneous hardware devices that exploit different protocols, either wireless or wired, to communicate (e.g., Wi-Fi, ZigBee, and power line communication (PLC)).
The components of these two macroblocks are integrated into our cosimulation platform by exploiting the integration module. To allow bidirectional communication, it acts as a bridge between the underlying technologies and the rest of the platform, following the methodology in [30]. In this perspective, each component in the simulated or in the real-world environment needs its own software- or device-integrator, respectively. Once the corresponding integrator receives new data from its low-level technology, this data is translated into a common data-format and sent to the rest of the platform by exploiting either MQTT or REST, and vice-versa.
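As an illustrative example, the following Python sketch shows how a device-integrator of this kind might be structured, assuming the paho-mqtt client library (1.x API); the broker address, topic names, and payload fields are hypothetical placeholders, not part of the platform specification.

```python
# Minimal device-integrator sketch (assumptions: paho-mqtt 1.x API;
# broker address, topic names, and payload fields are hypothetical).
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.org"

def to_common_format(raw_reading):
    # Translate a device-specific reading into the platform's common JSON format.
    return json.dumps({
        "deviceId": raw_reading["id"],
        "type": raw_reading["kind"],        # e.g., "measurement" or "alarm"
        "value": raw_reading["value"],
        "unit": raw_reading["unit"],
        "timestamp": raw_reading["ts"],
    })

def on_message(client, userdata, msg):
    # Commands from the platform are translated back to the device protocol.
    command = json.loads(msg.payload)
    print("Actuate low-level device:", command)   # device-specific call goes here

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe("grid/commands/IED-MV3")
client.loop_start()

# Publish a translated low-level reading to the rest of the platform.
raw = {"id": "IED-MV3", "kind": "measurement", "value": 21.8, "unit": "kV", "ts": 1620000000}
client.publish("grid/measurements/IED-MV3", to_common_format(raw))
```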
In a nutshell, the component layer provides features to enable a modular multimodel approach, where the different models, running in their own simulation frameworks, are combined together to build smart grid scenarios. The integration of DRTS allows simulations that realistically reproduce electromagnetic transients in distribution and transmission networks, also enabling HIL. Finally, devices deployed in the real world can be integrated in our platform to feed and assess novel services with real data.

B. Communication Layer
Future smart grids will be equipped with several internet-connected devices that will communicate, even with each other, to provide a service [31]. Thus, evaluating the impact of data transmission in existing communication networks is of paramount importance to address service requirements, including in terms of data transmission latency [9]. In this view, the communication layer helps in connecting the different entities, either hardware or software, in our cosimulation platform.
As shown in Fig. 1, the first module is the MQTT message broker, which is the main actor allowing bidirectional and asynchronous communication based on the MQTT protocol. Thus, it routes data from publishers to subscribers.
This layer also provides different simulators to realistically emulate communication networks. Thus, developers can optionally switch from the real-world MAN to a virtual MAN to study their simulation scenarios also from the data transmission viewpoint, i.e., transmission latency, bandwidth, low-level protocols, and access media (e.g., fiber optic, LTE, NB-IoT, and WiMax). Following the methodology in [32] and [33], we embedded three main communication network simulators that can be alternatively chosen by developers according to their assessments: 1) Mininet [34], 2) Omnet++ [35], and 3) ns-3 [36]. Mininet is a network emulator for building virtual networks consisting of hosts, switches, controllers, and links, exploiting both the real Linux kernel and the real network stack. It is recommended for evaluating the end-to-end time latency and maximum bandwidth of communicating devices. Omnet++ and ns-3, instead, are two frameworks for in-depth simulations of different physical layers and protocols in communication networks. They also allow assessments of the overall IP suite. Both Omnet++ and ns-3 are suitable for performing stack message exchange analysis to design innovative protocols. These simulators implement real-time software schedulers, hence this layer also does not introduce significant latency into the whole simulation scenario. In a test-bed cosimulation environment, the selected communication network simulator can be deployed in a server working as a virtual router where the virtual MAN runs. Thus, all the communication traffic generated by the entities in the platform is routed through this virtual router to analyze delays, congestion, and packet losses. This is valid for communication flows based on either MQTT or REST.

C. Information Layer
The information layer in Fig. 1 defines the data-formats and information models for the data exchanged between the entities, either hardware or software, in our cosimulation platform. By default, it proposes open-standard formats that are completely language-independent, such as JSON and XML. Both data-formats are widely used in distributed software architectures and web applications. In particular, JSON is becoming a standard for data exchange, even among IoT devices, because it is 1) lightweight; 2) easy to understand, manipulate, and generate; and 3) self-describing and human readable. Nonetheless, our platform is open to integrating other data-formats according to the requirements of the different simulation scenarios.
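For instance, a trip alarm serialized in the common JSON data-format could look as in the short sketch below; the field names are illustrative assumptions rather than a normative schema (the messages exchanged in Section VI are of comparable size, about 85 bytes).

```python
# Illustrative JSON payload for an IED trip alarm (field names are assumptions).
import json

alarm = {
    "deviceId": "IED-MV3",
    "event": "TRIP",
    "phase": "A",
    "timestamp": "2021-05-03T10:15:42Z",
}
payload = json.dumps(alarm)                    # lightweight and self-describing
print(len(payload.encode("utf-8")), "bytes")   # same order as the ~85-byte messages of Section VI
```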
Regarding the information models, our solution is compliant with both IEC 61850 [37] and ISO 17800 [38], which are the two main standards for communication between intelligent electronic devices (IEDs) and control centers. The former is used for information exchanged between electrical substations and IEDs, while the latter is used for data exchanged between control systems and end-use IEDs.

D. Function Layer
The function layer in Fig. 1 includes the main functionality that is commonly used by the other entities in the cosimulation platform, i.e., 1) asset management, to manage information related to smart grid equipment and their geo-referenced locations; 2) device management, to manage interactions between devices and other entities in the platform (e.g., services); and 3) data acquisition and storage, to store real and/or simulated data coming from other entities in the platform into a scalable nonrelational database (e.g., MongoDB), as sketched below. This layer also defines the functionality that a general purpose service provides for the smart management of an electrical distribution network, such as 1) optimal power flow (OPF) and 2) decentralized fault detection and isolation, and centralized restoration, a.k.a. De-FDI-CR (see Section IV). This article focuses on services that make smart grids self-healing and resilient to faults. However, thanks to its modularity, the platform is flexible in defining and including additional new services that can either run concurrently or be replaced in a plug-and-play fashion.
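As an example of the data acquisition and storage function mentioned above, the following is a minimal sketch using pymongo; the database and collection names are hypothetical.

```python
# Sketch of the data acquisition and storage function (assumptions: a local
# MongoDB instance reached via pymongo; database/collection names are hypothetical).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
measurements = client["smartgrid"]["measurements"]

def store(message):
    # Persist a platform message already expressed in the common JSON format.
    measurements.insert_one(message)

store({"deviceId": "IED-MV3", "value": 21.8, "unit": "kV", "ts": 1620000000})
```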

E. Sequence Diagram of a Generic Smart Grid Scenario
This section describes the communication flow in our platform for a generic smart grid scenario to be simulated. Fig. 2 reports a sequence diagram with the interactions among the possible actors in the cosimulation, each of them running a different model. In this example, we suppose that the actors are running on four different servers and a DRTS connected in a real dedicated local area network designed to minimize network latency (e.g., by exploiting gigabit Ethernet technology).
As depicted in Fig. 2, the DRTS runs a model reproducing the behavior of a distribution network that exchanges information with the other actors in the system by exploiting the MQTT protocol. When the DRTS publishes Message-1 with, for instance, some alerts or measurements to the MQTT message broker, this message is routed through the virtual MAN simulated in one of the integrated communication network simulators to realistically assess the transmission delay until it reaches the subscribed Service-A. To fulfill its computation, Service-A invokes Service-B by sending Message-2, again via MQTT routed through the virtual MAN. After a post-process performed by Service-B, it replies to Service-A by sending the resulting Message-3. Finally, Service-A ends its computation and sends Message-4 with, for example, an actuation command to the DRTS.
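A minimal sketch of Service-A's side of this sequence is given below, assuming the paho-mqtt client library; the topic names are hypothetical placeholders for the actual platform topics.

```python
# Service-A's side of the sequence in Fig. 2 (assumptions: paho-mqtt 1.x API,
# hypothetical topic names; all traffic transits the virtual MAN).
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    if msg.topic == "drts/alerts":                        # Message-1 from the DRTS
        request = {"task": "post-process", "payload": data}
        client.publish("service-b/requests", json.dumps(request))   # Message-2
    elif msg.topic == "service-a/replies":                # Message-3 from Service-B
        command = {"actuate": data["result"]}
        client.publish("drts/commands", json.dumps(command))        # Message-4

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)
client.subscribe([("drts/alerts", 0), ("service-a/replies", 0)])
client.loop_forever()
```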

IV. SERVICE: FDIR
In this section, a self-healing scheme for modern distribution systems is described, and it is mapped to SGAM according to the proposed cosimulation platform. The scheme aims to perform De-FDI-CR.
FDIR aims to automatically detect and isolate the faulted portion of the distribution system and restore the service for customers on the healthy sections of the grid [39]. It makes decisions using over-current protection relays, loss of voltage on any phase, preset capacity limits, and, typically, what the other switches see in the system. FDIR does not locate faults [40]; rather, it intends to restore most customers as fast as possible, which avoids extra techno-economic costs in emergencies [41].
There are several logic architectures for FDIR, including centralized logic and decentralized logic. In decentralized logic, switch/recloser devices communicate among each other to locate and isolate the fault, and eventually restore the healthy part of the feeder [42]. IEC 61850-based peer-to-peer communication technology is often used to deploy this scheme [43]. This logic is usually faster and more reliable, but restoration decisions are not necessarily the best solutions; the lack of real-time load information from neighboring or supporting feeders for reconfiguration may result in line congestion or even over-currents causing consequent faults.
In the centralized logic, the distribution management system (DMS) performs a high-level optimization of feeder reconfiguration in case of fault occurrence. Power flow algorithms support an effective load distribution by using the existing switches in the system [44]. Currently, the challenge is to establish direct and efficient communication between the devices in the network and the DMS, and to provide accurate load model information for an effective near real-time response.
This article proposes an FDIR scheme that takes advantage of both the centralized and the decentralized logic. In this scheme, fault detection and isolation are managed very close to the field thanks to distributed intelligence, while the restoration is done by a reconfiguration decision made by a centralized algorithm. In other words, the scheme performs De-FDI-CR. This hybrid solution aims to minimize the abovementioned challenges of the centralized logic by enhancing communication among devices using IoT paradigms, and by integrating near real-time load data collected periodically from substation smart meters.
Once a fault occurs, the closest upstream device (Device-A) detects it and opens a switch/recloser toward the fault. This device sends a signal to the next downstream device to request opening its switch/recloser and isolate the faulty section. As the effect of the fault might trigger the main feeder protection relay at the primary substation, the CB of the feeder would be opened. Once the fault is isolated, the CB and all upstream switches/reclosers that have opened should be closed. Upstream switches might be open due to their relay settings as a reaction to the fault current. In order to close them all in a distributed way without any distant coordination, Device-A sends a command to the next upstream IED to close its switch, if it is open, and the latter device passes the same signal to the next one upstream. Switch reclosing commands or trip messages are transferred to all next upstream devices to close them all, including finally the CB. At this stage, the actions of fault detection and isolation are completed, but the rest of the feeder is not re-energized yet. In our scheme, a central restoration agent is in charge of restoring the supply of the other loads in the feeder. It gets the most updated load and generation data, which is periodically uploaded to a data storage (e.g., every minute or every 15 min). It also accesses the network topology data in the data storage at the DMS to check all available normally open (NOP) switches that can be closed to supply the rest of the affected feeder.
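The following sketch condenses this decentralized detection/isolation/reclosing logic as it could run on each IED; send() is a hypothetical stand-in for the peer-to-peer and MQTT channels, and the state handling is deliberately simplified.

```python
# Conceptual sketch of the decentralized logic on each IED (Device-A in the
# text). send() is a hypothetical stand-in for the peer-to-peer/MQTT channels.
def send(target, message):
    print(f"-> {target}: {message}")      # placeholder for the real transport

class FieldIED:
    def __init__(self, name, upstream=None, downstream=None):
        self.name, self.upstream, self.downstream = name, upstream, downstream
        self.switch_closed = True

    def on_fault_detected(self):
        self.switch_closed = False                            # open toward the fault
        send(self.downstream, "OPEN")                         # isolate the faulted section
        if self.upstream:
            send(self.upstream, "RECLOSE_CHAIN")              # re-close tripped upstream switches
        send("restoration-agent", {"alarm": "PERMANENT_FAULT", "device": self.name})

    def on_command(self, cmd):
        if cmd == "OPEN":
            self.switch_closed = False
        elif cmd == "RECLOSE_CHAIN":
            self.switch_closed = True                         # close if it had tripped
            if self.upstream:
                send(self.upstream, "RECLOSE_CHAIN")          # propagate up to the feeder CB

FieldIED("MV3", upstream="MV2", downstream="MV4").on_fault_detected()
```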
Device-A sends an alarm message to the central restoration agent reporting a permanent fault at its downstream section. Since some of the other upstream devices might sense the fault and send similar alarms, the agent supposes the fault to be right after the device farthest from the head of the feeder. It then creates a set of possible network reconfigurations based on the existing NOPs, and calls an OPF tool in the DMS to eventually rank the countermeasures. Different objectives may be considered; however, to improve system reliability, the best choices are the ones that would not cause other failures due to over-current or reverse power flow in other lines. In our scheme, we minimize the line losses and the number of NOP switching operations, as sketched after this paragraph.

Fig. 3(a) maps the main actors of our service to different zones within the distribution domain. The devices include all physical components of the field. Fig. 3(b) describes the protocols and mechanisms for the interoperable exchange of data among components in the context of the underlying service. Fault detection and then isolation are performed thanks to peer-to-peer communication among devices in the field. Power measurements to be used for power flow analysis are transferred to the operation zone through a data concentrator. MQTT is used for communication between the operation zone and the station or field zones. Fig. 3(c) represents the data types exchanged among actors; devices in the field exchange commands to pass the detection alarm and eventually isolate the faulted section. They also send trip alarms of breakers or reclosers directly to the message broker, and listen to it to receive actuation commands for network reconfiguration and supply restoration. Fig. 3(d) maps the specific functions involved in this service to the appropriate zones within the distribution domain. As discussed earlier, De-FDI-CR performs fault detection and isolation through a distributed logic using intelligent devices, and restores the supply for healthy, yet interrupted, sections of the network through a central logic.
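The centralized restoration step can be summarized by the ranking sketch below; run_opf() stands in for the OPF tool invoked in the DMS (via REST in our setup), and its result fields are illustrative assumptions.

```python
# Sketch of the centralized restoration (CR) ranking: one OPF run per candidate
# NOP closure; infeasible options (over-current or reverse power flow) are
# discarded, the rest are ranked by line losses and switching operations.
# run_opf() and its result fields are illustrative stand-ins for the DMS tool.
def run_opf(configuration):
    raise NotImplementedError("placeholder for the DMS OPF tool invoked via REST")

def rank_reconfigurations(nop_switches, base_topology):
    candidates = []
    for nop in nop_switches:
        result = run_opf(base_topology.with_closed(nop))
        if result["overcurrent"] or result["reverse_power_flow"]:
            continue                                    # would cause consequent failures
        candidates.append((result["line_losses"], result["n_switchings"], nop))
    candidates.sort()                                   # losses first, then switchings
    return [nop for _, _, nop in candidates]
```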

V. CASE STUDY AND EXPERIMENTAL SETUP
To demonstrate the performance of our developed FDIR scheme as well as the functionality of the cosimulation platform, we use two cases: the IEEE 34-node radial network [45] and a portion of an urban distribution grid. The former is used to present the kind of FDIR scheme we integrate into our overall platform, while the latter aims to demonstrate the whole self-healing process, including the communication system and the test platform.
In this section, we introduce the realistic case used for testing the cosimulation platform, and we explain the lab setup built according to SGAM to verify the applicability of the proposed method. We use a small portion of an urban distribution grid with three feeders and three NOP switches allowing different reconfiguration choices. This network is extracted from the real distribution grid of Turin, a city in the Northwest of Italy. Fig. 4 represents the topology of the case study and the location of the fault. It is composed of 3 primary and 18 secondary substations, with 220 kV on the high voltage and 22 kV on the medium voltage side. Fourteen nodes supply residential loads, two feed industrial loads, and one substation is connected to a generation plant. In our tests, a constraint on the radial operation of the grid is considered. Three of the branches (between nodes MV4 and MV11, between nodes MV7 and MV8, and between nodes MV5 and MV19) are structurally connected and considered as NOP. Supply restoration is performed by closing one of these NOP switches once the fault is isolated.
The model of this grid is implemented in MATLAB Simulink as the development environment of the eMEGAsim solver of the Opal-RT real-time simulator. The model is compiled by RT-Lab and executed on a 5600 Opal-RT digital simulator. For this EMA, a 250-microsecond time-step is considered, which provides suitable sampling for the modeled over-current relay. A single-phase to ground fault is triggered between substations MV3 and MV4.
Once the fault is triggered, the upstream switch/recloser at substation MV3 opens following an IED relay command, and it sends an opening command to the downstream switch/recloser at MV4. In this way, the faulted section is isolated.
Once the fault is isolated, the IED at MV3 sends an alarm signal to the restoration algorithm of De-FDI-CR, and a reclosing command to its closest upstream substation (i.e., MV2 in this grid). The latter is to ensure that all switches at upstream substations are closed, because faults could also trigger relays at other upstream substations, especially the one at the beginning of the feeder. Therefore, all substations from MV3 to the head of the feeder pass the reclosing command one by one through peer-to-peer communication. The last switch to be reclosed, in case it was open, is the main CB of the feeder. This in-field process ensures the restoration of all interrupted loads before the location of the fault.
Supply restoration of the rest of the feeder requires a high-level, centrally supervised coordination of the several existing NOP switches to achieve an optimal reconfiguration of the network. Since reliability of supply is the main objective of our use case, this case study does not consider a network reconfiguration over a wide area involving many other feeders. This is also a practical approach that prevents unnecessary switching, which may accelerate switch ageing. Regarding the case study, the central agent of De-FDI-CR should find which of the three NOPs to close safely to guarantee a stable restoration of supply. The integration of DG in smart grids implies that the network may face operational problems like reverse power flow, line congestion, and over/under voltage situations if it is not well managed. Therefore, an ex-ante OPF analysis is needed prior to actuating the appropriate NOP for reconfiguration. The CR logic gets the most updated load and generation data, which is periodically uploaded to a data storage (e.g., every 15 min). During the fault restoration process, it also accesses the network topology data to check all available NOP switches that can be closed to supply the rest of the affected feeder. To minimize losses while respecting all network constraints, including acceptable voltage and current ranges, CR invokes an OPF tool in the DMS for the three possible network reconfiguration options based on the existing NOP status. This is performed through the REST protocol.

Fig. 5 shows the scheme of a realistic MAN backbone modeled to test and evaluate the performance of the De-FDI-CR service, also in terms of communication latency with the IEDs. Typically, modern MAN backbones leverage fiber optic links deployed across metropolitan areas and connecting different backbone routers in a ring configuration. These MANs, based on fiber optic technology, can guarantee connections at 100 Mbps. In our model, the links between two routers forming the ring (red lines) are full-duplex with a max length of about 10 km each and zero losses. We supposed that all IEDs deployed in the portion of the distribution grid are connected to the Internet through the backbone router R1 in Fig. 5. IEDs communicate with the De-FDI-CR service via MQTT through the message broker; the service and the broker are connected to routers R3 and R5, respectively. R1, R3, and R5 are connected with their respective subnetworks through 10 Mbps full-duplex links (blue lines) with a max length of about 1 km and zero losses. Finally, routers R2, R4, and R6 serve three other subnetworks that generate background traffic with different rates to realistically congest the MAN. To avoid bottlenecks between these last three routers and their traffic generators, 200 Mbps full-duplex links have been chosen (green lines), with a max length of about 1 km and zero losses. Thus, in our simulations, the IEDs, the De-FDI-CR service, and the message broker are in different locations in the same metropolitan area and communicate over the Internet. For these simulations, Mininet has been exploited, as sketched at the end of this section.

Fig. 6 reports our laboratory setup, where the entities and models of our cosimulation platform have been deployed for this scenario. In particular, an OPAL-RT DRTS and four computers have been involved, used to run the MQTT message broker, both the data storage and the OPF algorithm, the Mininet network simulator, and the restoration algorithm of De-FDI-CR, respectively.
These computers and the OPAL-RT are connected through a gigabit Ethernet switch that introduces a negligible communication latency of about 10 ns. Gigabit Ethernet equipment provides backward compatibility with older 100 Mbps and 10 Mbps legacy Ethernet devices [46]. The computers and the OPAL-RT belong to the same dedicated local area network and communicate following the sequence diagram described in Section III-E. Thus, all the traffic generated by these actors is routed through the virtual router where the virtual MAN runs in Mininet.
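A minimal Mininet sketch of this virtual MAN is reported below. It is a simplification under the stated link parameters (propagation delays assume roughly 5 microseconds per km of fiber); backbone routers are approximated with switches, and each subnetwork is collapsed into a single host.

```python
# Minimal Mininet sketch of the MAN backbone of Fig. 5 (assumptions: routers
# approximated with switches, one host per subnetwork, ~5 us/km fiber delay).
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class ManRing(Topo):
    def build(self):
        ring = [self.addSwitch(f"r{i}") for i in range(1, 7)]        # R1..R6
        for i in range(6):                                           # 100 Mbps fiber ring, ~10 km links
            self.addLink(ring[i], ring[(i + 1) % 6], bw=100, delay="50us", loss=0)
        for name, router in (("ieds", 0), ("service", 2), ("broker", 4)):
            host = self.addHost(name)                                # IEDs at R1, De-FDI-CR at R3, broker at R5
            self.addLink(host, ring[router], bw=10, delay="5us", loss=0)
        for i in (1, 3, 5):                                          # background traffic at R2, R4, R6
            gen = self.addHost(f"bg{i + 1}")
            self.addLink(gen, ring[i], bw=200, delay="5us", loss=0)

net = Mininet(topo=ManRing(), link=TCLink)
net.start()
net.pingAll()        # sanity check before routing the De-FDI-CR traffic
net.stop()
```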

VI. EXPERIMENTAL RESULTS
This section presents the experimental results of applying our developed FDIR scheme on both the IEEE 34-node radial network [45] and a portion of an urban distribution grid (see Section V). Fig. 7 shows a single line diagram of the IEEE 34-node system. In order to examine different reconfiguration options, we added three NOP switches between nodes 864 and 842, 890 and 834, and 856 and 838. The model of this case is implemented in MATLAB Simulink and loaded into the Opal-RT real-time simulator to run an RTS. A single-phase to ground fault is triggered between nodes 858 and 834.

A. IEEE 34-Node Case Study
After detecting the fault and isolating the branch between nodes 858 and 842, the FDIR central agent should decide which NOP to close, not only to restore the rest of the network, but also to minimize the power loss and achieve the best possible voltage profile. Fig. 8 plots the different voltage profiles corresponding to the different configurations. Config1, Config2, and Config3 are the configurations of the network after closing the switches 864-842, 890-834, and 856-838, respectively. Fig. 9 compares the power flows over the lines considering the different configuration options. The results plotted in Figs. 8 and 9 show that Config3 has the best system performance compared to the other options.
To quantify this difference, we calculated the voltage deviation index (VDI), defined as the sum of the squared voltage deviations at each node:

$\mathrm{VDI} = \sum_{i=1}^{N} \left(V_i - V_{\mathrm{rated}}\right)^2$  (1)

where $N$ is the number of nodes, $V_i$ is the calculated voltage at node $i$, and $V_{\mathrm{rated}}$ is the rated voltage, all in p.u. The VDI for Config1, Config2, and Config3 is 0.0455 p.u., 0.0463 p.u., and 0.0373 p.u., respectively. From the power flow results, the total power loss of the system is the lowest with Config3: the total power loss for Config1, Config2, and Config3 is 5011.6 kW, 4921.5 kW, and 4813.5 kW, respectively. According to these results, FDIR finally chooses Config3 to restore the system after the fault.
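For reference, (1) can be evaluated directly from the per-node voltage magnitudes, as in the short sketch below; the voltage profile used here is made up for illustration.

```python
# Voltage deviation index (VDI) of (1): sum of squared deviations from the
# rated voltage (1 p.u.). The voltage profile below is made up for illustration.
def vdi(voltages_pu, v_rated=1.0):
    return sum((v - v_rated) ** 2 for v in voltages_pu)

print(round(vdi([0.98, 1.01, 0.97, 1.02]), 4))   # 0.0018 for this illustrative profile
```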

B. Three-Feeder Realistic Case Study in Turin
This section presents the experimental results of the proposed FDIR scheme applied to the three-feeder network shown in Fig. 4. This also demonstrates how our cosimulation platform supports an interoperability analysis of a use case following SGAM.
Once the fault is triggered, an inverse definite minimum time (IDMT) over-current relay at MV3 detects the violation and triggers a sectionalizer to disconnect the supply of the rest of the feeder from MV3. Fig. 10 presents the voltage and current signals captured by the relay IED next to the fault location (i.e., at MV3). In our case, all relays are tuned with the same IDMT settings. Hence, neglecting the delay of waveform propagation along the line, the occurrence of a fault anywhere on the feeder would trigger all upstream relays. This is intentionally set in order to stress the FDIR reaction with more peer-to-peer command exchanges. The reaction is the chain of reclosing commands starting from MV3 to the beginning of the feeder. Reclosing all open switches on the upstream side is part of the supply restoration to re-energize all the loads of the feeder before the location of the fault.
Prior to restoration, the decentralized logic isolates the section between MV3 and MV4. This is done by sending a command from MV3 to MV4 right after detecting the fault.
After isolating the fault, MV3 initiates the chain of peer-to-peer command passing as discussed above. At the same time, MV3 also sends an alarm message to the centralized logic of De-FDI-CR, which contains the hardware ID of the IED installed at MV3. This launches the algorithm to create three possible network configurations based on the status of NOP1, NOP2, and NOP3, and to select the best option to restore the rest of the feeder. The three configurations formed by closing NOP1, NOP2, and NOP3 are named Config1, Config2, and Config3, respectively.
The affected feeder has a distributed generator with 29.05 kW and 3.1 kVAr production at the time of study. All the load and generation values are collected by meter IEDs (i.e., smart meters) installed at the substations. The data is reported every 15 min through the communication system. In our case, meter and relay IEDs are the same devices, combining the metering and protection features as well as event recording.
Considering the distributed generator as uncontrollable generation, an OPF is called for the three configurations. We use the standard AC OPF, whose optimization vector consists of the 18 voltage angles and magnitudes and the 2 generator real and reactive power injections. Since there are only two controllable generators, which belong to the same network operator, the objective is loss minimization rather than cost minimization. The equality constraints are simply the full set of 36 nonlinear real and reactive power balance equations, as there are 18 involved nodes. The inequality constraints consist of two sets of 17 branch flow limits as nonlinear functions of the bus voltage angles and magnitudes. The variable limits include an equality constraint on the reference bus angle and upper and lower limits on all bus voltage magnitudes and real and reactive generator injections. The limits for the node voltage magnitudes are set between 0.95 p.u. and 1.05 p.u.
The central logic algorithm is written in Python and calls MATLAB MATPOWER to execute the OPF. Changing the configurations and running the OPF for the three cases resulted in three different sets of line power flows (see Fig. 11). We obtained 8.20 kW, 9.16 kW, and 7.48 kW of power loss for Config1, Config2, and Config3, respectively. Fig. 12 depicts the voltage profile of the three cases; as shown in this figure, Config3 has a much better voltage profile compared to Config1 and Config2 in most of the substations.
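A minimal sketch of this Python-to-MATPOWER bridge is shown below, assuming the MATLAB Engine API for Python is installed and MATPOWER is on the MATLAB path; the case name is a hypothetical placeholder for one candidate configuration.

```python
# Sketch of the Python central logic invoking MATPOWER's AC OPF through the
# MATLAB Engine API for Python (assumptions: MATPOWER on the MATLAB path; the
# case name 'case_turin_config3' is a hypothetical placeholder).
import matlab.engine

eng = matlab.engine.start_matlab()
eng.eval("r = runopf('case_turin_config3');", nargout=0)   # standard AC OPF
success = eng.eval("r.success", nargout=1)                 # 1.0 if the OPF converged
objective = eng.eval("r.f", nargout=1)                     # objective value (loss-oriented here)
print(success, objective)
eng.quit()
```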
As for the IEEE 34-node radial network, we calculated the VDI following (1). The voltage deviation is given by the difference between the calculated voltage and the rated voltage, equal to 1 p.u. (i.e., 22 kV). The VDI for Config1, Config2, and Config3 is 0.0012 p.u., 0.0021 p.u., and 0.0008 p.u., respectively.
Since no violations or reverse power flows are observed in the three cases, all are acceptable choices; however, ranking them based on either power losses or VDI makes CR send the actuation command to NOP3 to reconfigure the network as Config3, shown in Fig. 13.

Fig. 14 reports the results on communication latency when a fault is located and restored at different congestion rates, neglecting the computation time of the De-FDI-CR service. The computation time of the algorithm did not exceed a few milliseconds (10-20 ms), which is negligible in our case. The reported latency is the time needed to send an alarm from the smart meters to De-FDI-CR and to send back an actuation command from De-FDI-CR to the NOPs via MQTT across the MAN, as described in Section V. Both alarms and commands are sent as messages of about 85 bytes each, compliant with the JSON data-format.
As shown in Fig. 14, our analysis started by considering the MAN with a congestion rate of 1%, where routers are almost completely unloaded and thus process incoming packets almost immediately; in this scenario, the median value of the communication latency is minimal. Increasing the overall MAN congestion up to 50% by increasing the background traffic, the median value of the communication latency rose almost constantly without exceeding 100 ms. This is due to the impact of the generated background traffic on the routers. Indeed, the time spent by a packet in a router before being processed increases, and this creates queues on the routers, as expected.
Growing the congestion from 60% to 80%, the median value of the communication latency increased from about 110 ms to 160 ms. Finally, with MAN congestion equal to 90% and 100%, the median time delay is about 280 ms and slightly lower than 500 ms, respectively.
The IEC 61850 standard [9] defines the communication requirements to be addressed in power distribution networks. Thanks to our distributed multimodel cosimulation platform, we are able to evaluate these communication requirements for a specific service (i.e., De-FDI-CR) in a realistic simulation scenario. Indeed, referring to the performance classes defined by the standard (Table II, communication requirements and performance classes for power systems defined by IEC 61850 [9]), our experimental results on time delay satisfy the requirements of classes TT0, TT1, and TT2. This is also confirmed when the MAN congestion rate is 100%, which is a very critical situation for a communication network. However, even in this worst-case scenario, only a few packets exceeded 500 ms, reaching a max time delay of about 650 ms (see the red portion in Fig. 14). It is worth noting that Internet providers and network managers try to avoid this critical situation which, over long periods, could lead to the collapse of the MAN itself.

VII. CONCLUSION
The deployment of new algorithms and control strategies in smart grids requires interoperability analysis and systematic transitions. This implies using reference models like SGAM to map the new use cases. Mapping a use case onto SGAM enables an interoperability analysis that is overlooked in classic laboratory tests.
This article demonstrated how our multimodel cosimulation platform can be used to map and simulate new strategies and algorithms with an SGAM-based approach. The integration of DRTSs enables the development of HIL setups for testing physical devices while integrating new algorithms and strategies. Moreover, the integration of communication network simulators in the platform makes it possible to test and develop use cases by taking into account the performance of the communication networks and protocols.
The experimental results of simulating a self-healing scheme by means of our platform demonstrated both the capability of the platform and the effectiveness of the proposed service. A new FDIR approach, named De-FDI-CR, is proposed, combining the capabilities of both centralized and decentralized FDIR logic while overcoming the limitations of each. It respects the communication latency constraints of the IEC 61850 standard using the MQTT protocol over a MAN for exchanging messages, which was a limitation of conventional centralized FDIR. Less than 25% of the messages violate the maximum latency constraint, and only in the condition of a fully congested MAN. While we also exploit centralized intelligence, the fast restoration capability of the decentralized logic is preserved. Our cosimulation platform demonstrated the feasibility of deploying this scheme in a real-world system.