A Comparative Analysis of Deadlock Avoidance and Prevention Algorithms for Resource Provisioning in Intelligent Autonomous Transport Systems Over 6G Infrastructure

6G is the future of intelligent connectivity, with Artificial Intelligence (AI) as its backbone. Multi-Access Edge Computing (MEC) based 6G infrastructure helps achieve the near-zero latency required by autonomous Intelligent Transport Systems (ITS), offering low power consumption, low end-to-end latency, minimal processing/transmission overheads, and high throughput and reliability. MEC platforms are prone to deadlock due to the limited amount of available computational resources, resulting in fatal delays in Vehicle-to-Vehicle (V2V), Vehicle-to-Road-Side-Unit (RSU) and RSU-to-ITS communication. An unresolved deadlock may entail higher energy consumption that adversely affects Quality of Service (QoS) in terms of safety and reliability, with the potential of causing fatal accidents. Therefore, it is imperative to resolve deadlocks in MEC to comply with the QoS parameters of MEC-based autonomous vehicles. These goals can be achieved by employing an intelligent and adaptive deadlock resolution strategy. In this paper, a deadlock-aware, collaborative edge decision algorithm is proposed to facilitate the seamless communication of autonomous vehicles over MEC. Additionally, deadlock avoidance and prevention schemes have been evaluated using the Banker's resource-request avoidance algorithm and the wound-wait and wait-die algorithms for resource provisioning in collaborative MEC. Furthermore, the effectiveness of deadlock avoidance and prevention algorithms in real-time MEC scenarios has been analyzed. The metrics used for the comparative analysis include round-trip time, queue wait time and CPU utilization. The proposed algorithm shows promising results when compared with prevalent techniques.

Emeka E. Ugwuanyi, Member, IEEE, Muddesar Iqbal, Member, IEEE, and Tasos Dagiuklas, Member, IEEE

I. INTRODUCTION
Multi-Access Edge Computing (MEC) is one of the 6G enabling technologies proposed to meet the advertised ultra-low-latency standards. According to [1], [2], MEC is a hub of access points with storage and computational capacity for a wide range of consumer devices such as blockchain nodes, terrestrial networks, IoT devices and autonomous vehicles (AVs) [3]. Low latency is crucial for the emerging AV networks that require ubiquitous user connectivity and real-time computational offload response. To achieve Ultra-Reliable Low Latency Communication (URLLC) in 6G networks, the QoS parameters of reliability and availability take precedence. These factors play an important role in ensuring the smooth delivery of time-critical applications and have been widely accepted as key concerns of network service providers [4] when coupled with AVs [5], [6]. In the current research, it is assumed that the AV network requires optimal computational and storage resources [7], [8]; hence, a large portion of its workload is offloaded. Due to the proximity of MEC to end-users, the workload of these devices is offloaded to the nearest MEC [9], [10].
A substantial increase in such resource-constrained AVs, in turn, increases the number of devices sharing and competing for the limited resources provided by the MEC platform [11]. Due to the limited amount of resources available on the edge node, there is a need to effectively manage MEC and AV resources to prevent over-provisioning of resources and deadlock [6]. Deadlock may arise when a process requests resources that are held by another waiting process, leading to a circular wait state [12]. Our prior research [10], [13], [14] has shown that such an unplanned circular wait increases the energy consumption during offloading, and that the MEC platform is prone to deadlock due to its limited resources. Therefore, effective precautions are needed to avoid over-provisioning of the edge node to AV clients, so that low latency can be maintained and deadlock in the AV network is eradicated. Deadlock is an undesirable phenomenon in real-time systems that host time-critical, latency-sensitive applications. In [12], deadlock is defined as an event that happens in a multi-programming environment where several processes compete for a finite number of resources. A process enters a waiting state when it requests resources that are not available at the time of the request. If the waiting process is never able to change state, because the resources it has requested are held by another waiting process, then the system is said to be in a deadlock. Deadlock has been studied extensively [15]-[17] in various multi-programming environments, and these results can be applied to AVs. However, there is limited research on deadlock management in real-time systems in the context of MEC for AVs.
According to [12], real-time algorithms are used when a rigid time requirement is placed on the operation of a processor or the flow of data; they are therefore used as a control mechanism in dedicated applications. IoT systems are usually real-time driven because the data obtained by IoT sensors must be analyzed within a given time for timely and accurate decisions to be made. It is therefore important to consider the effect of deadlock strategies on a real-time system.
One of the most important aspects of offloading is choosing the best candidate to offload a given task to. This decision is crucial, as a suitable candidate must be selected to avoid missing the task deadline, and it must be optimized with respect to the computational resources of the MEC and the network constraints to avoid re-offloading, over-provisioning of resources and deadlock in AV networks. This paper addresses the highlighted problems by proposing a deadlock-aware collaborative edge computing offloading algorithm to effectively select the best candidate for offload within an AV network. The proposed algorithm ensures reliable and low-latency network communication. Furthermore, the current work aims to provide a comparative analysis of different strategies that can be used in building a real-time and reliable 6G network system by minimizing the chances of deadlock during resource provisioning for AVs/IoT devices in MEC. Hence, different deadlock avoidance and prevention algorithms have been evaluated and compared. In this regard, six algorithms have been examined, combining three deadlock strategies (Banker's algorithm, wound-wait and wait-die) with two real-time schemes (Earliest Deadline First and Rate Monotonic Scheduling). These algorithms were chosen for their effectiveness and competitive time complexity. Experiments have been conducted to compare CPU utilization, waiting time, round-trip time and offload handling performance. Each algorithm has been evaluated based on its key performance indicators, and the algorithms that performed better under the experimental constraints are discussed.
The paper is structured as follows. An extensive review of relevant and related literature is presented in Section II. A comparative analysis is given in Section III, which includes the system model and the communication and computation models. Section IV details the experimental setup used for comparison. Section V presents the outcome of the experiments. Section VI concludes the paper. Finally, future work is presented in Section VII.

II. RELATED WORK

A. Resource Provisioning in MEC
Resource provisioning in MEC is a challenging problem for Internet Service Providers due to its impact on the efficiency of the system and the Quality of Service (QoS). Researchers have previously addressed this resource provisioning problem in cloud computing. However, resource provisioning in MEC is more challenging, mainly because edge servers are more resource-constrained than cloud servers and are deployed as a distributed environment rather than a centralized cloud. Enabling distributed computing and storage capabilities at the edge of the network will benefit delay-sensitive and computation-intensive mobile applications.
There has been a considerable amount of work done in the area of resource provisioning in MEC. Badri et al. [18] have proposed a risk-based optimization for resource provisioning in MEC. In their work, they have assumed that the resource requirements of mobile applications are stochastic. Therefore, they formulated a chance-constrained stochastic program problem. They have resolved this using the Sample Average Approximation method.
Kherraf et al. [9] have studied resource provisioning and workload assignment in MEC and formulated the problem as a mixed-integer program to jointly decide on the number of nodes, the location of MECs and applications to deploy. They have solved this by decomposing it into two problems, a delay aware load assignment sub-problem and dimensioning edge servers sub-problem. They have proposed optimized provisioning of edge computing resources with a heterogeneous workload in IoT networks. They concluded that the proposed tool could be used by network operators to develop cost-effective strategies for edge network planning and design.
Chang and Miao [19] have studied resource provisioning in MEC in the area of minimizing energy consumption of cellular networks. In their research, they investigated both the communication and computation aspect of resource provisioning to improve energy efficiency. They modelled the system as tandem queues and studied the trade-off between the subsystems on energy consumption and service latency. Based on this, they proposed an algorithm to determine the optimal provisioning of both communication and computation resources to minimize the overall energy consumption without sacrificing the performance of service latency.
Yu and Langar [20] have proposed a collaborative computation offloading framework for MEC. The authors have considered an offloading scenario where multiple mobile users offload duplicated computation tasks to the edge servers. Hence, creating an opportunity for edge servers to share computational results. The aim is to develop an optimal collaborative offloading strategy with data caching enhancements to reduce end-user latency. The problem has been formulated as a multi-label classification in which a Deep Supervised Learning approach has been employed to address the issue. Numerical results have shown that the proposed scheme achieves reduced delay and energy consumption compared to other schemes.
Zhou et al. [21] have proposed a resource provisioning scheme for heterogeneous IoT applications on cloud-edge platforms. The scheme has been aimed at minimizing long-term operational costs while guaranteeing both hard and soft deadlines for heterogeneous IoT applications. The proposed framework employs a Lyapunov optimization technique to make online resource provisioning greedy decisions without prior knowledge of the resource statistics of the edge system. The authors have evaluated the efficiency of the proposed approach using realistic traffic and cost traces.
Ma et al. [22] have proposed a mobility-aware and delay-sensitive service provisioning scheme for mobile edge cloud networks. The authors have formulated two novel optimization problems of user service request admission, focusing on maximizing the accumulative network utility and throughput. They have utilized a constant-approximation algorithm and an online algorithm to address the formulated problems, and have demonstrated the efficiency of the proposed scheme using experimental simulations.
There have been other proposals for resource provisioning techniques to offload mobile application workloads on MEC [23]. Nevertheless, none of the previous works on MEC considers deadlock during offloading and resource provisioning which is a concern for distributed systems such as MEC [13].

B. Deadlock Handling Strategies
In a trusted computing scenario using edge nodes and IoT devices, high availability and reliability are crucial factors for a good user experience. Therefore, deadlock-free operation is important in achieving this goal. Without deadlock strategies to detect, recover from or eradicate deadlock, such a system's performance may deteriorate and energy may be used ineffectively, as a deadlock might occur without the system having any way of recognizing what has happened. The standard tool for deadlock detection is the Wait-For Graph (WFG). The WFG models the relationship between the processes and the resources involved: each node represents a process, and an arc is drawn from a process waiting for a resource to the process holding that resource. There are four main ways of handling deadlock: (i) ignore, (ii) detect and recover, (iii) prevention and (iv) avoidance. Furthermore, there are four conditions necessary for a deadlock to occur: (i) mutual exclusion, (ii) hold and wait, (iii) no preemption and (iv) circular wait [12]. A simultaneous occurrence of these four conditions leads the system to an unsafe state, where it risks getting stuck due to unmanaged distribution of resources. Therefore, each of the deadlock handling strategies eradicates deadlock by ensuring that at least one of these conditions does not hold.
1) Deadlock Detection: In the design and development of a multi-threaded system, deadlock detection could be chosen as a way of handling system deadlock. If the employed algorithm detects a deadlock, the next step is to recover the system from it; therefore, deadlock detection and recovery go hand in hand. In this scenario, one algorithm is employed to examine the state of the system and determine whether a deadlock has occurred, after which another algorithm is used to recover the system from the deadlock. To detect the presence of deadlock in systems with a single instance of each resource type, a resource-allocation graph and a corresponding WFG are used. Note that in this strategy deadlock can happen, after which the system detects the event and attempts to recover itself. This causes a run-time overhead of maintaining the necessary information and executing the detection algorithm, and there may be further losses inherent in recovering from a deadlock [12].
To detect deadlock for a single instance of each resource type using the WFG, an edge P_i → P_j implies that process P_i is waiting for process P_j to release a resource that P_i needs. The edge P_i → P_j exists in a WFG only if the corresponding resource-allocation graph contains the two edges P_i → R_q and R_q → P_j for some resource R_q. A deadlock exists in the system if there is a cycle in the WFG. Using this, the detection algorithm requires a runtime of order n² operations, where n is the number of vertices in the graph. For multiple instances of each resource type, the runtime order to detect a deadlock is m × n², where m is the number of resource types [12].
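The cycle test described above can be sketched as a depth-first search over the WFG. This is an illustrative implementation, not the paper's code; the adjacency-map representation of the graph is an assumption:

```python
def has_deadlock(wait_for):
    """Detect deadlock as a cycle in a wait-for graph (WFG).

    wait_for maps each process id to the set of process ids it waits on.
    Iterative DFS; a grey (in-progress) node reached again is a back
    edge, i.e. a cycle, i.e. a deadlock.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(start):
        stack = [(start, iter(wait_for.get(start, ())))]
        color[start] = GREY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color.get(nxt, WHITE) == GREY:   # back edge => cycle
                    return True
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GREY
                    stack.append((nxt, iter(wait_for.get(nxt, ()))))
                    break
            else:
                color[node] = BLACK                 # fully explored
                stack.pop()
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(wait_for))
```

For example, `{1: {2}, 2: {3}, 3: {1}}` (a circular wait among three processes) is reported as a deadlock, while the same graph with the edge 3 → 1 removed is not.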
There is extensive research on deadlock detection schemes. Farajzadeh et al. [24] have proposed a distributed deadlock detection algorithm based on history-based edge chasing, which resolves the deadlock as soon as it is detected. According to their research, this reduces the average persistence time of the deadlock compared to other detection algorithms. Izumi et al. [25] have proposed a deadlock detection algorithm for distributed processes. In their research, they have formulated a deadlock detection scheduling problem in the presence of system failures and derived a deadlock detection time that minimizes the long-run average cost per unit time. They have concluded that the number of distributed processes and the system failure probability have a great effect on the long-run average message complexity per unit time, but not on the deadlock scheduling time. Other research on deadlock detection includes Lamport's algorithm [26], a mutual exclusion algorithm that uses logical clocks for event synchronization, and the Chandy-Misra-Haas algorithm [27], which uses messages called probes to detect the presence of deadlock in a system.
2) Deadlock Prevention: Deadlock prevention algorithms handle deadlock by preventing one of the previously mentioned four conditions required for a deadlock to occur. In a typical distributed system, there is at least one non-sharable resource, so the mutual exclusion condition must hold; deadlock therefore cannot be prevented by denying the mutual exclusion principle. To prevent deadlock by eliminating hold and wait, two possible protocols could be used. The first requires that all resources a process needs be allocated to it before the start of execution; this eradicates hold and wait but may lead to under-utilization of the system. The second allows a process to request new resources only after releasing its current set of resources; however, this protocol may lead to starvation.
Deadlock can also be prevented by preempting resources from a process if the resources are required by a higher-priority process. This strategy of process termination during execution is inappropriate for real-time systems in which the elapsed execution time of the process must be predictable [28]. The final method of preventing deadlock is by eliminating the circular wait condition. To ensure that this condition never holds, a protocol can be used to impose a total ordering of all resource types and require that each process requests resources in increasing order of enumeration. For example, if R = {R_1, R_2, ..., R_z} is a set of resource types assigned unique integer numbers from 1 to z, a process can only request resources in increasing order of enumeration; therefore, if a process has requested R_i, it can subsequently only request resources R_j with j > i.

Many deadlock prevention strategies have been proposed by researchers in different fields of computer science to eradicate deadlock by preventing one of the four necessary conditions. In the field of Service-Oriented Architecture (SOA) infrastructure, Lin Lou et al. [29] have proposed a deadlock prevention strategy to eradicate the possibility of deadlock caused by resource locking based on the two-phase commit protocol, which requires that each transaction obtain all needed locks before the second commit phase. To solve this, they have utilized a timestamp-based restart policy for global resource allocation.
In the context of web SOA, where competition for web resources by web services could lead to deadlock, Ding et al. [30] have proposed a method to analyze and verify deadlock prevention solutions using the trace semantics of communicating sequential processes. The proposed formal modelling approach has proved useful in the verification of the deadlock solutions analyzed in the paper. Furthermore, in the context of Grid systems with resource sharing capabilities, simultaneous requests for the co-allocation of resources by multiple applications could lead to deadlock. To address this problem, Chuanfu et al. [31] have proposed a deadlock prevention method for fast allocation of grid resources based on atomic transactions. Using this method, all resources required by a process are specified at the time of the request, and the request succeeds only if all the required resources are available.
In this research, the two preventive algorithms that have been investigated are wound-wait and wait-die. Both algorithms use timestamp-based techniques and favour older processes, i.e., those with an older timestamp. These algorithms have been used due to their efficiency, competitive time complexity and practical applications, especially in database systems [32]. a) Wound-wait algorithm: The wound-wait deadlock prevention algorithm is a preemptive technique. Here, when an older process requests a resource that is currently held by a younger process, the younger process is wounded and rolled back. However, when a younger process requests a resource that is held by an older process, the younger process waits. If P_i and P_j are processes and P_i requests a resource held by P_j, then P_j is rolled back if t(P_i) < t(P_j), i.e., P_i is older, where t(P_i) and t(P_j) are timestamps; otherwise, P_i waits.
b) Wait-die algorithm: The wait-die deadlock prevention algorithm is a non-preemptive technique. In this scenario, when an older process requests a resource that is held by a younger process, the older process waits. However, when a younger process requests a resource that is held by an older process, it dies. If P_i and P_j are processes and P_i requests a resource held by P_j, then P_i is allowed to wait if t(P_i) < t(P_j), i.e., P_i is older, where t(P_i) and t(P_j) are timestamps; otherwise, P_i is rolled back (dies).
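Assuming that smaller timestamps denote older processes, the two rules above can be condensed into a single decision function. This is an illustrative sketch, not the authors' implementation; the function name and return strings are assumptions:

```python
def resolve(requester_ts, holder_ts, scheme):
    """Timestamp-based conflict resolution for a resource request.

    requester_ts / holder_ts: timestamps of the requesting and holding
    processes; a smaller timestamp means an older process.
    scheme: "wound-wait" (preemptive) or "wait-die" (non-preemptive).
    Returns the action taken to break the potential circular wait.
    """
    requester_is_older = requester_ts < holder_ts
    if scheme == "wound-wait":
        # Older requester wounds (rolls back) the younger holder;
        # a younger requester simply waits.
        return "roll back holder" if requester_is_older else "wait"
    if scheme == "wait-die":
        # Older requester waits; a younger requester dies (is rolled
        # back) and is later restarted with its original timestamp.
        return "wait" if requester_is_older else "roll back requester"
    raise ValueError("unknown scheme")
```

Note that in both schemes the older process always survives the conflict, which prevents starvation when a rolled-back process is restarted with its original timestamp.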
3) Deadlock Avoidance: The drawbacks of using a deadlock prevention method include low device utilization and reduced system throughput. Alternatively, the deadlock avoidance mechanism could be used. In contrast to the prevention method, it requires additional information about the complete sequence of resource requests. With this prior information on requests and resources, the system decides whether a process should run or wait in order to avoid a possible deadlock. For the avoidance method to work, the simplest model requires that the maximum amount of each resource type a process may need is declared. With these data, the model ensures that a circular-wait condition never arises during dynamic resource allocation, and any potentially unsafe resource request is denied. The system is in a safe state if there exists a safe sequence of processes [P_1, P_2, P_3, ..., P_n], i.e., an ordering in which the resource requests that each process P_i could still make can be satisfied by the currently available resources plus the resources held by all P_j with j < i [12]. A system is in an unsafe state if it cannot be guaranteed that every possible sequence of future requests avoids deadlock. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock. There are two well-known deadlock avoidance algorithms: the resource-allocation graph algorithm and the Banker's algorithm.
a) Resource-allocation graph: The resource-allocation graph algorithm is only used if the system has a single instance of each resource type. While using the resource-allocation graph for deadlock avoidance, if a process P_i requests a resource R_j (P_i → R_j), the request is only granted if converting it to an assignment edge R_j → P_i does not create a cycle in the resource-allocation graph. The safety of the system is checked using a cycle-detection algorithm, which requires an order of n² operations, where n is the number of processes in the system. b) Banker's algorithm: The resource-allocation graph algorithm cannot be applied to a system with multiple instances of each resource type. However, the Banker's algorithm can. When the Banker's algorithm is applied, each process must declare the maximum amount of resources of each resource type that it will require to complete execution. This declared number must not exceed the total amount of each resource type in the system; otherwise, the system cannot guarantee a safe state. Four data structures must be maintained while using the Banker's algorithm:
• Available: a vector of length m of the number of available resources of each resource type, where m is the number of resource types.
• Max: an n × m matrix of the maximum resource demand of each process, where n is the number of processes.
• Allocation: an n × m matrix of the number of resources of each type currently allocated to each process.
• Need: an n × m matrix of the remaining resources that each process needs, computed as Need = Max − Allocation.

The time complexity to determine whether a state is safe is of order m × n² [12]. There has been a great deal of research on the improvement of the Banker's algorithm over the years. In each case, the algorithm is extended, improved or applied to a different area of computer science. The most notable adjustments were made in 1999 [33], 2000 [34] and 2006 [35]. Sheau-Dong Lang [33] has assumed that the control flow of the resource-related calls of processes forms rooted trees. Based on this, a quadratic-time algorithm has been proposed. The algorithm decomposes trees into regions and computes the associated maximum resource claims before process execution. The information collected is used at runtime to verify the safety of the system using the original Banker's algorithm. Tricas et al. [34] have applied the Banker's algorithm in the field of flexible manufacturing systems. They have modelled the problem using Petri nets and proposed two improvements based on knowledge of the process structure. Their research has shown that the improved algorithm permits much more concurrency than the original Banker's algorithm.
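As a concrete illustration, the safety check over the four data structures above can be sketched as follows. This is the classic Banker's safety algorithm; variable and function names are illustrative:

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety check: return a safe sequence, or None if unsafe.

    available: free units per resource type (length m)
    max_demand, allocation: one row of length m per process (n rows)
    Need is derived as Max - Allocation; overall cost is O(m * n^2).
    """
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)      # resources free at this point
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(m)):
                # Pretend process i runs to completion and releases
                # everything it currently holds.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None         # no process can finish: unsafe state
    return sequence
```

Running this on the textbook five-process, three-resource example (Available = [3, 3, 2]) yields the safe sequence [1, 3, 4, 0, 2]; a request is granted only if the state that would result still passes this check.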
Other deadlock avoidance algorithms that have been developed include the graphical deadlock avoidance algorithm proposed by El-Kafrawy [36], which solves the deadlock avoidance problem in sequential resource allocation systems using a polynomial graphical solution; the graph updates dynamically each time a new resource is requested. Another example is the deadlock avoidance algorithm for streaming applications proposed by Li et al. [37], which uses both a propagating and a non-propagating algorithm.

C. Real-Time Scheduling
In a trusted computing environment where high availability and reliability are important factors, there is often a specific response-deadline time constraint that the system must meet. If this is the case, the system is said to be a real-time system. Whether the system meets this time demand depends mainly on its capacity to perform the computations in the given time. In a real-time application, there are multiple tasks with different criticality levels; a task can be soft real-time, hard real-time or firm real-time. For a given set of tasks T = {t_1, t_2, t_3, ..., t_n}, task t_i is said to be a hard real-time task if the execution of t_i must be completed by a given deadline D_i and W_i ≤ D_i, where W_i is the worst-case execution time of t_i. Task t_i is said to be a soft real-time task if the penalty it pays increases as r_i increases, where r_i is the time elapsed between the deadline of t_i and its actual completion time; the penalty function is P(t_i) = 0 if W_i ≤ D_i, else P(t_i) > 0. Task t_i is said to be a firm real-time task if its reward increases the earlier t_i finishes its computation before the given deadline D_i; the reward function is R(t_i) = 0 if W_i ≥ D_i, else R(t_i) > 0. In this research, two optimal real-time scheduling algorithms have been studied: the Rate Monotonic Scheduling (RMS) algorithm and the Earliest Deadline First (EDF) algorithm. These algorithms were selected because they are well-known baseline scheduling algorithms for real-time systems [38] and have competitive time complexity.
1) Rate Monotonic Scheduling Algorithm: The Rate Monotonic Scheduling (RMS) algorithm is a priority-driven algorithm whose priorities are known before the arrival of a task. These priorities are determined by the period of each task and are the same for all instances of the same task. RMS is the most widely used and studied real-time algorithm [38]. Some assumptions are made while using the RMS algorithm: (i) the tasks have no precedence constraints and all tasks are independent; (ii) only processing requirements are significant; (iii) the tasks have no non-preemptable section and the cost of preemption is negligible; (iv) the tasks are periodic and are prioritized by period: the shorter the period, the higher the priority. If a lower-priority task t_j is running and a higher-priority task t_i is waiting to run, t_i will preempt t_j. RMS therefore assigns higher priority to tasks that use the CPU more often. A set of n tasks is guaranteed schedulable under RMS if its total utilization does not exceed the bound n(2^(1/n) − 1). RMS is referred to as an optimal static-priority real-time algorithm because if a given set of processes cannot be scheduled by RMS, then it cannot be scheduled by any other algorithm that uses static priorities [12].
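The utilization bound n(2^(1/n) − 1) above yields a simple sufficient schedulability test for RMS, sketched below (illustrative only; the task representation is an assumption):

```python
def rms_schedulable(tasks):
    """Sufficient (Liu & Layland) schedulability test for RMS.

    tasks: list of (worst_case_exec_time, period) pairs.
    The set is schedulable if total utilization sum(C_i / T_i) is at
    most n * (2**(1/n) - 1). The test is sufficient, not necessary:
    failing it does not by itself prove the set unschedulable.
    """
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound
```

For two tasks the bound is about 0.828, so a set with utilization 0.375 passes, while a three-task set with utilization above 1 necessarily fails.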
2) Earliest Deadline First Algorithm: The Earliest Deadline First (EDF) algorithm [38] is also a priority-driven algorithm, which assigns priorities according to task deadlines. EDF gives a higher priority to a task t_i that has an earlier deadline d_i; t_i will always preempt a lower-priority task t_j with a later deadline d_j. EDF uses dynamic priority assignment: priorities are assigned as tasks arrive, based on their deadline requirements, and the priorities of other tasks are adjusted to reflect the deadlines of newly runnable processes. The following assumptions are made when using the EDF scheduling algorithm: (i) the tasks have no precedence constraints and all tasks are independent; (ii) only processing requirements are significant; (iii) the tasks have no non-preemptable section and the cost of preemption is negligible. The EDF algorithm has a worst-case runtime of O((N + α)²), where α is the number of aperiodic tasks and N is the total number of requests in each hyper-period of the n periodic tasks in the system [39]. EDF is referred to as an optimal uniprocessor real-time scheduling algorithm because it schedules tasks to meet their deadline requirements at up to 100% CPU utilization; if EDF cannot feasibly schedule a set of tasks on a uniprocessor, then no other algorithm can. This is proved using the time-slice swapping technique [38].
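EDF's deadline-ordered dispatch can be sketched with a priority queue keyed on absolute deadlines. This is an illustrative sketch of the dispatch order for a set of ready tasks, not the evaluated implementation:

```python
import heapq

def edf_order(tasks):
    """Return the order in which EDF dispatches a set of ready tasks.

    tasks: list of (name, absolute_deadline) pairs. A min-heap keyed
    on the deadline always pops the earliest-deadline task next, which
    is exactly EDF's priority rule; in a live scheduler a newly arrived
    task with an earlier deadline would preempt the running one.
    """
    heap = [(deadline, name) for name, deadline in tasks]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

For example, ready tasks with deadlines 10, 5 and 7 are dispatched in the order of deadlines 5, 7, 10, regardless of arrival order.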

D. Research Contributions
The main contributions of this paper are listed as follows:
• Using a case study for AVs as a real-time system, a comparison of how deadlock avoidance and prevention mechanisms perform in real-time scenarios using RMS or EDF for prioritizing workloads. In this analysis, different metrics have been considered, including Round-trip time, Queue waiting time, CPU utilization and the ratio of local execution to collaborative MEC to cloud.
Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply.

III. COMPARATIVE ANALYSIS
In this paper, a comparative study is carried out for deadlock avoidance and deadlock prevention algorithms for multi-access mobile edge computing environments. Six case study algorithms have been considered, based on the structure proposed in our previous study on deadlock in multi-access mobile edge computing for industrial IoT [13]. Each compared algorithm is composed of a deadlock algorithm and a real-time scheduling algorithm. The algorithms used for this design can be seen in Table I, and the six compared algorithms produced as a result can be seen in Table II. The algorithm workflow for each of the six algorithms follows the same structure as Figure 1. Tasks are sent from the RSU to the local edge node for resource provisioning. In the MEC node, tasks are put into a job queue and the queue is prioritized using a real-time scheduling algorithm. A deadlock algorithm is then employed to reduce or eradicate the chances of deadlock. Thereafter, the waiting time is calculated for each task received and an assumed finishing time P_{t_i} for each task t_i is predicted. If P_{t_i} < D_{t_i}, where D_{t_i} is the deadline for task t_i, then a MEC M_i is identified that meets the deadline requirement using a collaborative decision algorithm, detailed in subsection F. If no such MEC is found, the task is sent to the central cloud to be executed and the MEC node acts as a proxy. For each of the compared algorithms this structure remains the same, with the appropriate real-time algorithm and deadlock algorithm utilized. Further details about the algorithm structure are given in our previous study [13].

A. Deadlock in Distributed MEC
To describe the deadlock condition in distributed MEC, let's assume a set of processes P = {p_1, p_2, ..., p_n} and a set of resources R = {r_1, r_2, ..., r_m}, where n and m are the number of processes and resources respectively. These resources and processes are present in the collaborative MEC space; however, they might not reside in the same MEC. Deadlock occurs if a process p_i is waiting for a resource r_a that is currently held by another process p_j, while p_j is waiting for a resource r_b that is currently held by p_i. If neither p_i nor p_j can be preempted while in the waiting state, the system is in a deadlock.
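The condition above is equivalent to a cycle in the wait-for graph of the collaborative MEC space. A minimal sketch, assuming each process waits on at most one resource holder at a time:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as a dict mapping each
    process to the process it is waiting on (or None if not waiting).

    A cycle such as p_i -> p_j -> p_i means neither process can proceed,
    i.e. the deadlock condition described above.
    """
    for start in wait_for:
        # Floyd's cycle detection: 'fast' advances two hops per step
        slow = fast = start
        while True:
            fast = wait_for.get(wait_for.get(fast))
            slow = wait_for.get(slow)
            if fast is None:      # chain ends: no cycle from this start
                break
            if slow == fast:      # pointers met inside a cycle
                return True
    return False
```

With multiple resources per process the same idea generalises to cycle detection over a resource allocation graph.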

B. System Model
In this research, a distributed architecture which consists of a pool of MEC nodes is considered as a platform for resource provisioning. Consider a cluster of edge servers. A finite non-empty set of edge servers in the same cluster is denoted as M = {M_1, M_2, ..., M_n}. Let's assume that a finite non-empty set of end devices U = {u_1, u_2, ..., u_n} is connected to the edge network such that u_i ∈ U and M_j ∈ M maintain a disjoint many-to-one cardinality. Here an edge server is connected to many end devices, but no end device is connected to multiple edge servers. Each u_i has a workload W_{u_i} = [T_{u_i,1}, T_{u_i,2}, ..., T_{u_i,n}] which contains an array of tasks to be executed. For each T_{u_i,j} in W_{u_i}, the u_i computes an offloading decision a_j ∈ {0, 1}, where a_j = 0 and a_j = 1 represent "execute locally" and "offload", respectively. It is assumed that the end device makes an offloading decision based on its battery life and computational resources. Let's assume that each u_i is connected to the closest M_j and hence offloads all T_{u_i,j} ∈ W_{u_i} such that a_j = 1. For each T_{u_i,j} that is offloaded, the u_i also sends a requirement vector REQ = {c_i, m_i, l_i, s_i}, characterized by the number of CPU cycles, memory, maximum latency and data size respectively.

C. Computational Model
Let's denote the computation capacity of each MEC M_j in M as S_{M_j}; this is its CPU frequency. Let's assume that each M_j maintains a queue Q_{M_j} = [T_{u_1}, T_{u_2}, ..., T_{u_n}] of tasks offloaded to M_j. The execution time of a task T_{u_i} offloaded to M_j is

e_i = c_i / S_{M_j}    (1)

The waiting time for a newly added task T_{u_{n+1}} is

W_{T_{n+1}} = Σ_{k=1}^{n} c_k / S_{M_j}    (2)

Therefore, the total processing delay K_i for T_{u_i} is

K_i = W_{T_i} + e_i    (3)
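With execution time taken as CPU cycles divided by CPU frequency, the total processing delay K_i (waiting plus execution) can be computed as in this sketch:

```python
def processing_delay(cycles, queued_cycles, cpu_freq):
    """Total processing delay K_i = waiting time + execution time.

    cycles: CPU cycles c_i of the new task.
    queued_cycles: CPU cycles of the tasks already queued ahead of it.
    cpu_freq: computation capacity S_Mj of the MEC node.
    """
    exec_time = cycles / cpu_freq                       # e_i = c_i / S_Mj
    wait_time = sum(c / cpu_freq for c in queued_cycles)  # W_T over the queue
    return wait_time + exec_time                        # K_i
```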

D. Communication Model
For a task T_{u_i} offloaded to an edge node M_j, the communication cost of offloading the task, H_i, can be expressed as

H_i = trans_i + prop_i    (4)

where trans_i and prop_i are the transmission delay and propagation delay respectively. The transmission delay can be expressed as

trans_i = s_i / tr_{M_j}    (5)

where tr_{M_j} is the transmission rate of M_j. Substituting eq (5) in eq (4), the communication cost can be expressed as

H_i = s_i / tr_{M_j} + prop_i    (6)

E. Modelling of the Algorithms in Table II
While using any of the algorithms in Table II, an identification id is required to uniquely identify each process. To define the requirements for a set of processes P for any of the defined algorithms, the variable constraints for the algorithm components are first defined.

1) Banker's Algorithm:
In using the Banker's algorithm, for each process sent to the MEC for resource provisioning, two vectors are required: the resource types required by the process, RES, and the maximum resource for each resource type, MAX. Therefore, for each process the resource-type constraint is

RES = {r_i | i ∈ {1, ..., |RT|}},  r_i ∈ {0, 1}    (7)

where |RT| is the number of resource types. When r_i = 0, the resource type at the i-th position is not required; otherwise r_i = 1 specifies that the resource type is required. It is assumed there are three resource types (CPU, Memory and Storage) that can be claimed by each process. Likewise, the maximum resource for each resource type is defined as

MAX = {max_i | i ∈ {1, ..., |RT|}}    (8)

where max_i is the maximum amount of the i-th resource type that the process may claim.
2) RMS: While using the RMS algorithm, for each process that is sent to the MEC for resource provisioning, the computation time or capacity C_i and the time period t_i will be required. Therefore, the variable constraints needed for RMS are {C_i, t_i}.
3) EDF: While using the EDF algorithm, for each process sent to the MEC for resource provisioning, the computation time or capacity C_i, the time period t_i and the deadline D_i will be required. Therefore, the variable constraints needed for EDF are {C_i, t_i, D_i}.
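For illustration, the textbook Banker's safety check over the three assumed resource types can be sketched as follows; the RES/MAX encoding above reduces to the `need` matrix here, and this is not the paper's own implementation:

```python
def bankers_safe(available, max_claim, allocation):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe.

    available: free units per resource type (e.g. CPU, memory, storage).
    max_claim / allocation: one row per process, one column per type.
    """
    n, m = len(max_claim), len(available)
    need = [[max_claim[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend p_i runs to completion and releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None  # unsafe: no remaining process can finish
    return sequence
```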

4) Wound-Wait/Wait Die:
For each process sent to the MEC for resource provisioning while using wound-wait or wait-die, the resource types required RES and the timestamp T_s for each process are required. Here, RES is obtained the same way as in eq (7) and T_s = {t_i | i ∈ {1, ..., |P|}}. Therefore, the variable constraints for wound-wait W_w and wait-die W_d are {RES, T_s}.

F. Deadlock Constraint of the Algorithms in Table II
1) Rate Monotonic Scheduling and Banker Algorithm: In this algorithm, for a set of processes P, combining 7 and 9, let RMS(x) and BA(x) be functions of the RMS and Banker's algorithms respectively. Then,

RMS(P) → P_rt ⊆ P    (15)

where P_rt is a set of tasks that can be executed in real-time. Then, putting P_rt in the Banker's function,

BA(P_rt) → P_safe    (16)

where P_safe is the deadlock-free safe sequence.
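The two-stage mapping RMS(P) → P_rt followed by BA(P_rt) → P_safe can be sketched end to end; the dictionary fields ('period', 'need', 'allocation') are hypothetical stand-ins for the constraint vectors defined earlier:

```python
def rms_then_banker(processes, available):
    """P -> RMS -> P_rt -> BA -> P_safe, as a self-contained sketch.

    Each process is a dict with a 'period' plus per-resource 'need'
    and 'allocation' vectors (assumed names, for illustration only).
    Returns the safe sequence, or None if no safe sequence exists.
    """
    # Stage 1, RMS(P): shorter period = higher priority
    p_rt = sorted(processes, key=lambda p: p['period'])
    # Stage 2, BA(P_rt): greedy Banker's safety scan in priority order
    work = list(available)
    p_safe, pending = [], list(p_rt)
    while pending:
        for p in pending:
            if all(n <= w for n, w in zip(p['need'], work)):
                # p can finish; it releases its allocation back to work
                work = [w + a for w, a in zip(work, p['allocation'])]
                p_safe.append(p)
                pending.remove(p)
                break
        else:
            return None  # no process can proceed: unsafe state
    return p_safe
```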

2) Earliest Deadline First and Banker Algorithm:
In this algorithm, for a set of processes P, combining 7, 9 and 12, let EDF(x) and BA(x) be functions of the EDF and Banker's algorithms respectively. Then,

EDF(P) → P_rt ⊆ P

where P_rt is a set of tasks that can be executed in real-time, and

BA(P_rt) → P_safe    (19)

where P_safe is the deadlock-free safe sequence.

3) Rate Monotonic Scheduling and Wound Wait:
In this algorithm, for a set of processes P, combining equations 11 and 13, let RMS(x) and W_w(x) be functions of the RMS and wound-wait algorithms respectively. Then,

RMS(P) → P_rt ⊆ P    (21)

where P_rt is a set of tasks that can be executed in real-time, and

W_w(P_rt) → P_safe    (22)

where P_safe is the deadlock-free safe sequence.

4) Rate Monotonic Scheduling and Wait Die:
In this algorithm, for a set of processes P, combining equations 11 and 13, let RMS(x) and W_d(x) be functions of the RMS and wait-die algorithms respectively. Then,

RMS(P) → P_rt ⊆ P    (24)

where P_rt is a set of tasks that can be executed in real-time, and

W_d(P_rt) → P_safe

where P_safe is the deadlock-free safe sequence.

5) Earliest Deadline First and Wound Wait:
In this algorithm, for a set of processes P, combining equations 12 and 13, let EDF(x) and W_w(x) be functions of the EDF and wound-wait algorithms respectively. Then,

EDF(P) → P_rt ⊆ P

where P_rt is a set of tasks that can be executed in real-time, and

W_w(P_rt) → P_safe

where P_safe is the deadlock-free safe sequence.

6) Earliest Deadline First and Wait Die:
In this algorithm, for a set of processes P, combining equations 12 and 13, let EDF(x) and W_d(x) be functions of the EDF and wait-die algorithms respectively. Then,

EDF(P) → P_rt ⊆ P

where P_rt is a set of tasks that can be executed in real-time, and

W_d(P_rt) → P_safe

where P_safe is the deadlock-free safe sequence.
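All four wound-wait/wait-die combinations share the same timestamp rule, which can be sketched as follows (smaller timestamp means older process; the return labels are illustrative):

```python
def resolve_conflict(requester_ts, holder_ts, scheme):
    """Timestamp-based conflict resolution when a requesting process
    wants a resource the holding process owns.

    Returns 'wait', 'abort_requester' or 'preempt_holder'.
    """
    if scheme == 'wound-wait':
        # an older requester wounds (preempts) the younger holder;
        # a younger requester simply waits
        return 'preempt_holder' if requester_ts < holder_ts else 'wait'
    if scheme == 'wait-die':
        # an older requester waits; a younger requester dies (aborts)
        return 'wait' if requester_ts < holder_ts else 'abort_requester'
    raise ValueError(scheme)
```

Both schemes order processes by age so that the wait-for graph can never form a cycle, which is why they prevent deadlock rather than avoid it.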

G. Collaborative Offloading Decision
In this sub-section, the collaborative offloading decision making for the proposed algorithm is described. In the proposed algorithm, the offloading decision is made by considering the time constraint of a task T_{u_i} and the deadlock constraint of the MEC resources. For the time constraint, the following must be satisfied for execution:

TC < l_i    (32)

where TC is the time cost, a summation of the computational cost and the communication cost:

TC = K_i + H_i    (33)

Substituting (3) and (6) in (33),

TC = W_{T_i} + c_i / S_{M_j} + s_i / tr_{M_j} + prop_i    (34)

The deadlock constraint is determined using one of the algorithm models in the previous section. For simplicity, let's assume that the constraint is determined by BA(x) in (16). BA(x) returns a safe sequence, or false if the system is not in a safe state. M_j makes an offloading decision a_i for each newly added task T_{u_i}, with a_i ∈ {0, 1, 2}, where a_i = 0 means execute locally, a_i = 1 means send to another edge node, and a_i = 2 means offload to the central cloud.
If 35 is true, then a_i = 0. Otherwise, an MEC that fits the description is sought. If such an MEC exists, then T_{u_i} is offloaded to it; else it is offloaded to the cloud. To ensure that an edge node M_j can evaluate 35 for another edge node M_k, each M_j multicasts its status M_s^j to the MEC cluster after each update to W_{T_n}. To reduce the communication overhead, the number of MECs in a cluster is minimized.
where Mem_{M_j} is the memory utilization of M_j. Therefore, each M_j maintains the corresponding status vector for every MEC in the cluster. The collaborative algorithm is presented in Algorithm 1.
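Algorithm 1 is not reproduced here, but the decision a_i ∈ {0, 1, 2} described above can be sketched as follows; the status field names and the `is_safe` callback (standing in for the deadlock check BA(x)) are assumptions for illustration:

```python
def offload_decision(task, local_status, peer_statuses, is_safe):
    """Sketch of the collaborative decision a_i in {0, 1, 2}.

    task: dict with 'cycles', 'size' and 'max_latency' (the l_i bound).
    Each status dict is assumed to hold 'wait_time', 'cpu_freq',
    'trans_rate' and 'prop_delay' from the multicast status vector.
    """
    def time_cost(s):
        k_i = s['wait_time'] + task['cycles'] / s['cpu_freq']    # eq (3)
        h_i = task['size'] / s['trans_rate'] + s['prop_delay']   # eq (6)
        return k_i + h_i                                         # eq (33)

    if time_cost(local_status) < task['max_latency'] and is_safe(local_status, task):
        return 0  # execute locally
    for peer in peer_statuses:
        if time_cost(peer) < task['max_latency'] and is_safe(peer, task):
            return 1  # re-offload to a neighbouring MEC
    return 2  # offload to the central cloud; the MEC acts as a proxy
```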

H. Time Complexity
The differences in the time complexity of the algorithms used in this research to design each compared algorithm can be seen in Table II. Comparing the deadlock algorithms, the Banker's algorithm has the highest order of time complexity. Comparing the scheduling algorithms, the EDF algorithm has the highest order of complexity. The time complexity graph in Figure 2 compares the time complexity of each of the compared algorithms, showing the scalability of each with an increase in the number of processes and resources. The graph illustrates that ALG_1 and ALG_2 are the most scalable case study algorithms as the number of processes and number of resource types increase. On the other hand, ALG_3 and ALG_4 are the least scalable of the six algorithms.

IV. EXPERIMENTAL SETUP
In this section, the experimental setup is presented and how the compared algorithms are tested is outlined. The components that make up the system and the associated tools are discussed. The objective of this section is to evaluate the performance of the algorithms using two different environmental setups and evaluate the results obtained to better understand the strengths and limitations of the algorithms.
This section is broken down into two subsections, as two different experimental setups have been used to evaluate and compare the case study algorithms. Both experiments have been carried out using the Graphical Network Simulator-3 (GNS3) platform [40]. GNS3 is a network software emulator, first released in 2008, that can be used to emulate complex networks with a combination of virtual and real devices. GNS3 has been used because it provides a platform to emulate real networks using virtualization concepts, in contrast to simulation platforms like CloudSim [41] or ns-3 [42]. Figure 3 illustrates the high-level diagram of the experimental deployment. The Edge layer consists of the network and application planes. The Edge layer extends the conventional infrastructure by providing compute and storage capacities to the RSU for resource provisioning. Compute and storage decisions in the Edge layer are made through the edge application plane. The Edge servers in the MEC layer collaborate among themselves using the network plane to support the demand from the IoT devices/UE. Routing and forwarding decisions are made by the SDN controller in the control plane. The MEC uses multicast for cooperative communication. In the experimental setup, MQTT broker-based [39] multicast communication is used. MQTT is used to simulate multicast cooperative communication between the RSUs during experiments. Other cooperative communication mediums could also be used; comparisons between these mediums are outside the scope of this research. The RSUs reach the edge layer through a cellular communication link. Tasks are sent to the cloud if they cannot be scheduled in the edge layer. In the experimental setup on the GNS3 platform [43], the MEC leverages the cloud-native philosophy using containerised Open vSwitch hosts communicating through an SDN controller. Each of the algorithms is implemented using Python [43] and is deployed as an MEC service on each MEC node.
To avoid time stealing during the experimentation, the CPU and memory utilization of each MEC host has been limited to the values in Table V. The experiment has been conducted using two physical compute nodes whose specifications can also be seen in Table V. Each of the compute nodes has an Intel(R) Core(TM) i7-8550U processor. Compute 1 (C1) runs the GNS3 emulator software and GNS3 VM1, while GNS3 VM2 runs on compute 2 (C2).

A. Deployment Architecture
During the experiment, each node in the MEC layer runs one of the algorithms listed in Table II. The end devices are emulated using Ubuntu Linux containers. Each end device generates tasks based on predefined experimental task profiles, which are sent to the MEC for processing. The task profile includes the CPU, memory, data size and latency constraints. The MEC then applies the scheduling algorithm followed by the deadlock algorithm for each task. The task is either executed locally, re-offloaded to another MEC, or sent to the cloud, depending on the result from Algorithm 1. A cluster of Linux servers running the task processing application simulates the cloud service.
A pair of experimental setups have been used to evaluate the algorithms. For each setup, the experiment has been conducted with 4, 7 and 10 MECs. Additionally, each MEC receives 2600 requests, where each request contains |T_i| tasks with |T_i| ∈ {1, 2, ..., n}. The higher the value of n, the greater the load on the MEC and the more difficult it is to meet the deadline; n has been set to 3 in the following experiments. The task arrival at each MEC node is assumed to follow a Poisson arrival process with a varying arrival rate λ_t (t ∈ {1, 2, ..., n}). The difference between the two experimental setups is the client request distribution.
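The assumed Poisson arrival process can be generated by summing exponential inter-arrival gaps, as in this sketch (the seed and horizon are illustrative parameters, not values from the experiments):

```python
import random

def poisson_arrivals(rate, horizon, seed=42):
    """Arrival times on [0, horizon) of a Poisson process with the
    given rate (lambda), built from exponential inter-arrival gaps."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)  # next inter-arrival gap ~ Exp(rate)
        if t >= horizon:
            return times
        times.append(t)
```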

B. Experimental Setup 1 (Exp1)
In this experiment, each MEC node receives the same number of total requests in each run. Figure 4 (Exp1) depicts the request distribution used for experimental setup 1. This setup simulates a scenario during cooperative offload where the MECs are equally busy.

C. Experimental Setup 2 (Exp2)
In this experiment, each MEC node receives an unequal number of requests in each run. Figure 4 (Exp2) depicts the request distribution used for experimental setup 2. This setup simulates a scenario during cooperative offload where the MECs are unequally busy.

V. PERFORMANCE COMPARISON RESULTS
In this section, the performance of the compared algorithms obtained during the experiment has been evaluated. The metrics used for these comparisons are listed below.
• CPU utilization of the MEC node
• Round-trip time
• Queue waiting time
• The ratio of tasks re-offloaded or executed locally on the MEC

A. Experimental Setup 1: Results
This section contains experimental results obtained during experimentation using the Exp1 setup.
1) CPU Comparison: The CPU utilization for each of the algorithms obtained after the experiments is presented in Figure 5. From the results obtained, it can be deduced that the CPU utilization of the MEC platform decreases gradually with an increase in MEC nodes. This occurs because the task loads are balanced among MECs while the number of tasks remains the same, thereby reducing the number of tasks processed per MEC node. ALG_4 achieves the best CPU utilization convergence, as depicted in Figure 5, while ALG_1 shows the least CPU utilization convergence.
2) Round Trip Time: Figure 6 shows the round-trip time comparison of each algorithm during the experiment. Each MEC (M_i) periodically monitors the round-trip time to reach the other MECs in the cluster.
Fig. 9. Comparison of the ratio of processes that met their deadline to the processes that failed to meet it. TP (Timely Process) denotes processes that meet the execution deadline while UP (Untimely Process) denotes processes that missed the deadline, for Exp1.
4) Offload vs. Local: A comparison of the ratio of tasks executed locally, re-offloaded to another MEC and tasks executed on the cloud has been made in this section. This is shown in Figure 8. According to the figure, the maximum percentage of tasks executed locally on the MEC is 78%. The outcome displayed on the graph depends on the scheduling algorithm employed. There are some noticeable similarities and differences among the six algorithms. An increase in the number of nodes has very little effect on ALG_1 and ALG_6. However, for ALG_2 and ALG_5 (both use EDF), an increase in the number of nodes increases the number of tasks re-offloaded to MEC and cloud. This has the opposite effect on ALG_3 and ALG_4, which both use RMS for scheduling.

5) Comparison of the Ratio of Processes That Meet Execution Deadline:
In this section, the ratio of tasks that meet their execution deadline during the experiment for Exp1 is compared for all six algorithms. These results are obtained from the end device. Each task sent out by the end device to the MEC node has a deadline constraint and is monitored to make sure that the constraint is met as the task travels through the MEC platform and back to the end node. The percentage of tasks that meet the deadline constraint is labelled TP (Timely Process) while the percentage of tasks that do not is labelled UP (Untimely Process). It can be seen in Figure 9 that more tasks meet the deadline as the number of MECs increases. It can also be seen that more tasks meet their deadline using ALG_4 with 10 MECs compared to the other algorithms in Exp1. Furthermore, ALG_6 obtains the least number of untimely processes with 4 MECs.

B. Experimental Results for Experimental Setup 2 (Exp2)
This section contains experimental results obtained during experimentation using the Exp2 setup.
1) CPU Comparison: Figure 10 shows the CPU utilization results obtained for the algorithms during Exp2. As depicted in the figure, the CPU utilization for each of the compared algorithms decreases with an increase in the number of MEC nodes. This gradual decrease is similar to the behaviour in Exp1 and can be attributed to the sharing of the total workload sent by clients among the MEC nodes. It can also be seen that the CPU utilization of the case study algorithms for Exp2 is lower than for Exp1. This is expected, since in Exp1 the MECs were equally busy throughout the experiment, which is not the case in Exp2. Averaging the results of the 3 sub-experiments with 4, 7 and 10 MECs, ALG_2 obtains the highest CPU utilization while ALG_6 achieves the lowest during the experiments. ALG_2 uses a deadlock avoidance algorithm while ALG_6 uses a deadlock prevention algorithm.
Fig. 13. Comparison between the ratio of processes executed locally in the MEC and processes re-offloaded, for Exp2.

2) Round Trip Time:
The round-trip time obtained for Exp2 can be seen in Figure 11. Each participating MEC records the RTT to each of the other MECs in the platform, to be used for offloading decisions. The RTT is obtained in the Exp2 setup in the same way as in the Exp1 setup. The round-trip time for Exp2 ranged between 0.89 and 1.93 milliseconds. Averaging the results of the 3 sub-experiments with 4, 7 and 10 MECs, ALG_4 obtains the lowest overall RTT while ALG_6 has the highest. ALG_4 and ALG_6 both use deadlock prevention; however, ALG_4 uses RMS for task scheduling while ALG_6 uses EDF.

3) Waiting Time:
The waiting time convergence comparison for Exp2 is depicted in Figure 12. The Exp2 setup shows a more predictable convergence than Exp1, as the waiting time converges between 0.89 and 1.63 milliseconds for each of the experimental runs from 4 to 10 MECs. On average, ALG_4 attains the lowest waiting time while ALG_1 has the highest during the 3 sub-experiments from 4 to 10 MECs. ALG_4 utilises a deadlock prevention algorithm while ALG_1 employs a deadlock avoidance algorithm.

4) Offload vs. Local:
The comparisons between the ratio of tasks executed locally, re-offloaded to the cloud or re-offloaded to another MEC are presented in this section. It can be seen in Figure 13 that an increase in the number of MECs leads to an increase in the percentage of tasks re-offloaded to a neighbouring MEC for all compared algorithms. The overall behaviour here is similar to what is shown in Exp1. ALG_6 and ALG_1 had the best performance, with the number of tasks executed locally for each run above 78%. However, an increase in the number of nodes has very little effect on ALG_6 and ALG_1, as in Exp1. ALG_2 achieves the highest increase rate in tasks re-offloaded to a neighbouring MEC as the number of MECs increases.
Fig. 14. Comparison of the ratio of processes that met their deadline to the processes that failed to meet it. TP (Timely Process) denotes processes that meet the execution deadline while UP (Untimely Process) denotes processes that missed the deadline, for Exp2.

5) Comparison of the Ratio of Processes That Meet Execution Deadline:
In this section, the ratio of tasks that meet their execution deadline during the experiment for Exp2 has been compared for all six algorithms. These results are obtained from the end device's perspective. Each task sent out by the end device to the MEC node has a deadline constraint and is monitored to make sure that the constraint is met as the task travels through the MEC platform and back to the end node. The percentage of tasks that meet the deadline constraint is labelled TP (Timely Process) while the percentage of tasks that do not is labelled UP (Untimely Process). It can be seen in Figure 14 that more tasks meet the deadline as the number of MECs increases. It can also be seen that more tasks meet their deadline while using ALG_2 for each experimental run from 4 MECs to 10 MECs compared to the other algorithms. ALG_1 obtains the lowest TP for 10 MECs while ALG_4 achieves the highest TP. ALG_4 utilises a deadlock prevention algorithm while ALG_1 employs a deadlock avoidance algorithm.
6) Exp1 and Exp2 Comparison: Figure 15 summarizes the difference in the experimental outcomes of Exp1 and Exp2. The figure shows the average outputs of each of the experimental runs (4, 7 and 10 MECs). The figure shows that ALG_3 obtains better CPU utilization compared to the other algorithms. ALG_6 and ALG_1 obtain a better percentage of tasks executed locally compared to the other algorithms in Exp1 and Exp2. Comparing algorithms by the overall percentage of tasks executed on time, ALG_4 provides the best performance among all the algorithms under study. ALG_1 and ALG_2 use a deadlock avoidance algorithm while ALG_3 uses a deadlock prevention algorithm. To generalize, avoidance algorithms do better in the percentage of tasks executed locally and the overall percentage of tasks executed on time, while prevention algorithms obtain better CPU utilization. Additionally, the key difference between these two classes of algorithms is the ability to keep the system in a safe state. Exp1 and Exp2 exhibit similar behavioural patterns for the algorithms when comparing the CPU utilization and the offload vs. local ratios.

VI. CONCLUSION
In this paper, a comparative analysis of deadlock avoidance and prevention algorithms for resource provisioning in collaborative edge computing has been presented for Autonomous Vehicles (AVs). The study has been carried out as a step towards building reliable and readily available MEC platforms that can deliver the low-latency requirements of 6G case study scenarios. Using a case study of real-time AV systems, comparisons were made of deadlock handling in real-time systems, with the Rate Monotonic Scheduling algorithm or the Earliest Deadline First algorithm used to prioritize the workloads. In this research, our hypothesis was established based on our previous study on deadlocks in MEC by comparing six algorithms. Two experimental setups were designed on the GNS3 platform for evaluating the compared algorithms. The metrics used in the comparison include Round-trip time, Queue waiting time, CPU utilization and the ratio of tasks executed locally to those re-offloaded. The contributions made in this paper also have the potential to be useful for deadlock avoidance and prevention for efficient resource provisioning in AV networks. One major limitation of the research is that it has been carried out in a closed environment with limited specifications. It would be interesting to know whether the results obtained are reproducible in different environments.

VII. FUTURE WORK
Future work in this area may be to explore deadlock prediction in MEC using data collected over time. In this scenario, the Resource Allocation Graph (RAG) maps the resource consumption over time for each MEC. Additionally, the fluctuation of resource consumption for each edge is monitored which results in a time series. Furthermore, an autoregressive function may be used to estimate the probability of the RAG to form a cycle. Hence, a proactive deadlock algorithm is used to deprioritise processes that are more likely to result in a deadlock as a preventive measure.
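As a minimal stand-in for the autoregressive estimator suggested here, a one-step AR(1) forecast of a per-edge utilization series could look as follows; this is illustrative only, and the model choice and the mapping from forecasts to cycle probability remain open design questions:

```python
def ar1_forecast(series, steps=1):
    """Forecast a resource-utilization time series with a fitted AR(1)
    model x_t = mean + phi * (x_{t-1} - mean)."""
    n = len(series)
    mean = sum(series) / n
    # lag-1 autocovariance over variance gives the AR(1) coefficient phi
    num = sum((series[i] - mean) * (series[i - 1] - mean) for i in range(1, n))
    den = sum((x - mean) ** 2 for x in series) or 1.0
    phi = num / den
    x = series[-1]
    for _ in range(steps):
        x = mean + phi * (x - mean)
    return x
```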