Abstract
For years, unmanned aerial vehicles (UAVs) have been widely applied across a broad range of domains. Enhanced with computer vision and artificial intelligence, a UAV can automatically recognize objects in its environment and detect events occurring in the real scene. Deploying collaborative UAVs offers diverse interpretations that support a multiperspective view of the scene. Because the individual interpretations of the UAVs usually deviate from one another, the UAVs require a consensus interpretation of the scenario. To this end, this study presents an original consensus-based method that guides multi-UAV systems to reach consensus on their observations and to construct a group situation-based depiction of the scenario. Furthermore, a fuzzy neural network generalized predictive control system, known as a recurrent self-evolving fuzzy neural network, is used to ensure stability through a gradient-descent online learning rule, in the spirit of evolutionary biological design. The UAVs are modeled as experts in a group decision-making problem whose goal is to define the conditions that best describe the scene. First, the method allows each UAV to build high-level conditions from detected events through fuzzy-based aggregation. These aggregated events are modeled by a fuzzy ontology, which allows each UAV to report its preferences over the conditions. The interpretations of the individual UAVs are then fused to achieve a collective interpretation of the situation. Finally, the consensus and proximity measures confirm the reliability of the final group decision. The rated consensus indicates how well the collective interpretation of the scene matches each UAV's point of view.
Keywords
Introduction
For decades, there has been deep interest in applying unmanned aerial vehicles (UAVs) to complete complicated assignments. UAVs allow people to avoid direct involvement in dangerous tasks or in places that are hard to reach. UAVs have been utilized in various realms, both military (e.g. enemy attack, crowd monitoring, detection of military intelligence deployments) and civilian (e.g. forest firefighting, breeding monitoring, agriculture management). 1–4 Sensor-equipped UAVs, augmented with the techniques of computer vision 1 and artificial intelligence, 5 manipulate tracking data to detect people, targets, and environments and to identify events occurring in the real scene (e.g. vehicles driving on a highway). Since most problems involve various factors such as weather conditions, sensor capabilities, reliability of the applied method, and environmental patterns, the performance of a single UAV is unsatisfactory in most cases. Moreover, a single-view interpretation yields a considerably limited description of the scenario, particularly when only a single vehicle is used. Thus, teams comprising diverse UAVs, augmented with many technologies and sensors, can offer a precise multi-view observation of a real scene, that is, a complete comprehension of the scenario, 5 provided consensus is achieved on every UAV's individual scene perspective.
Figure 1 displays a brief overview of a multi-UAV system applied to monitor an urban district. Every UAV observes the scene from its own viewpoint and produces its own interpretation of the occurring situation, which can differ from those of the other UAVs. Thus, a consensus on scene comprehension is required, that is, the most truthful interpretation of the scenario. This case can be resolved as a group decision-making (GDM) problem that requires collective consensus among the different UAV views of the dynamics of the practical scene.

The different viewpoint interpretations of the UAVs require collective agreement on the most probable description of the scene. UAV: unmanned aerial vehicle.
For GDM problems, the application of a consensus measure can help reach agreement among experts 6,7 while producing a final solution that satisfies each expert's interpretation. 8,9 Through consensus modeling, this study proposes a decision-making approach that supports multi-UAV systems in reaching consensus on the conditions encountered in the observed environment. The method allows each UAV to communicate its preference over conditions through fuzzy aggregation of detection events. In GDM tasks, the UAVs are modeled as experts whose goal is to determine which conditions best describe the scene. 10–14 Thus, the collective interpretation of the situation is achieved through consensus over the individual UAV preferences. Applying consensus-based GDM provides a common consensus for synthetic scenarios. 15–17 Artificial neural networks (ANNs) have established themselves as potential problem solvers due to their unique properties such as massively parallel processing, adaptive learning capability, self-organization, and robustness. 18–20 However, the main problem with ANNs is that the number of hidden neurons has a direct and strong effect on performance: operation time must be sacrificed to fulfill the efficiency and accuracy of the computations, which makes the NN tool hard to utilize online or in real time. Moreover, in Bayesian methods for estimating unknown parameters, it is often difficult to assume probabilistic prior information. In addition, the observed data may be vague (fuzzy) rather than certain (crisp). In this article, a probabilistic approach to deal with such situations is described by introducing the concept of a likelihood function for fuzzy data in probabilistic models. This approach employs probability distributions to model prior information.
Thus, traditional NN methods, such as multilayer perceptrons, have limited applicability to time-varying signals or systems due to their static structure. To address this issue, fuzzy neural networks (FNNs) are considered a flexible and plausible alternative, as they combine biologically inspired learning with the mechanisms of human thought. By tuning the mechanism with fuzzy and recursive self-growth schemes, stability and performance are improved, as demonstrated in this article. Combined with nonlinear activation functions, recurrent neural networks can handle complex spatiotemporal patterns. Therefore, this article focuses on the recurrent self-evolving FNN (RSEFNN) with local feedback for classifying cognitive system states in various UAV applications.
This method generates significantly beneficial results for multi-UAV condition awareness, for example, the reliability evaluation of consensus-based group decisions. When a UAV team participates in a rescue mission, if its detection results reach a decision with a high degree of agreement, then the rescuers can regard the UAV team's scene interpretation as dependable. Otherwise, the rescuers cannot trust the UAV team's results. Thus, the consensus evaluation is significant in indicating whether the final result satisfies the scenario interpretations of the individual UAVs.
This study is arranged as follows. The second section presents preliminary knowledge of GDM procedures, such as consensus modeling and fuzzy ontologies, and refers to the cognition of multi-UAV systems. The third section describes the study's method, emphasizing how UAV preferences over conditions are generated and how the consensus-based decision-making model is constructed. The fourth section displays the operating principle of the method in a classic case scenario. The fifth section discusses the merits and limitations of the proposed approach and compares it with the other approaches mentioned in the article. The final section presents the conclusions.
System description of RSEFNN
FNNs are mainly used to represent fuzzy "if–then" rules in network structures. At the same time, the fuzzy "if–then" rules can be trained using known learning algorithms for ANNs. Key components include the fuzzy rules, the inference process, and the fuzzy knowledge base. The fuzzy rules, determined by antecedents and consequents, model the relationship between control inputs and outputs. The inference process defines the aggregation operators, such as fuzzy conjunction, and the fuzzy inference method. The proposed algorithm is used to adjust the parameters of the neuro-fuzzy network. The proposed evolutionary algorithm can take into account the influence of partial solutions and provide an appropriate search space to increase the probability of reaching the global solution. The ith rule of the fuzzy dynamic model has the form: Plant Rule i
where
with
for all t. Therefore
for all t.
The concept of parallel distributed compensation is applied to develop a fuzzy controller that stabilizes the abovementioned Takagi–Sugeno (TS) continuous fuzzy model. 21 The idea is to design a compensator for each local model, so that linear control design methods can be used for each rule. The resulting global nonlinear fuzzy controller is a "fuzzy mixture" of the individual linear controllers. The fuzzy controller uses the same fuzzy sets as the fuzzy system. Based on the above fuzzy model, the FNN can combine the following modeling schemes.
Consider a multiple-NN system N consisting of L interconnected subsystems, where the lth isolated subsystem is
We assume that v represents the transfer functions
where
where
Subsequently, the min–max matrix
Moreover, based on the method of interpolation, we could have
where
Suppose that there exist bounding matrices
for the trajectory
where
Namely,
The recurrent structure of the RSEFNN is obtained by feeding the firing strengths of the fuzzy rules back into the system itself, thus avoiding additional external registers to store past states. Figure 2 shows the structure of the RSEFNN model. The functions of all layers of the RSEFNN are detailed below, 22 where u(l) denotes the output of a node in the lth layer. (1) Layer 1 (Input Layer): the inputs are denoted by X = (x1…xn). Layer 1 performs no calculation; each node in this layer corresponds to an input variable and simply passes the input value to the next layer.

The structure of the RSEFNN model. RSEFNN: recurrent self-evolving fuzzy neural network.
(2) Layer 2 (Fuzzification Layer): this layer is also called the membership function layer. Each node uses a Gaussian membership function corresponding to a linguistic label of one of the Layer 1 input variables. The membership values computed in Layer 2 give
in which,
where
in which
(5) Layer 5 (Consequent Layer): nodes in Layer 5 are called consequent nodes. Each recurrent node in Layer 4 has a corresponding consequent node in Layer 5. Each consequent node produces a linear combination of the input variables. The output of Layer 5 is calculated as
(6) Layer 6 (Output Layer): the output node implements defuzzification, synthesizing the operations of the consequent nodes in Layer 5 and the recurrent nodes in Layer 4. This layer adopts a weighted-average defuzzification approach
where y is the output of the RSEFNN model and R is the total number of fuzzy rules.
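As a concrete illustration, the six-layer pass described above can be sketched in Python. This is a minimal sketch, not the article's implementation: the class name RSEFNNSketch, the product t-norm for firing strengths, the recurrent gain lam, and all initial parameter values are assumptions introduced only for illustration.

```python
import numpy as np

def gaussian_mf(x, mean, sigma):
    """Layer 2: Gaussian membership value of input x."""
    return np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

class RSEFNNSketch:
    """Minimal sketch of one forward pass through the six RSEFNN layers.

    n inputs, R rules. The recurrent gain `lam` feeds each rule's
    previous firing strength back into itself (Layer 4 local feedback).
    """

    def __init__(self, n_inputs, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        self.mean = rng.uniform(-1, 1, (n_rules, n_inputs))   # MF centers
        self.sigma = np.full((n_rules, n_inputs), 0.5)        # MF widths
        self.lam = np.full(n_rules, 0.3)                      # recurrent gains (assumed)
        self.a = rng.uniform(-1, 1, (n_rules, n_inputs + 1))  # TSK consequents
        self.prev_phi = np.zeros(n_rules)                     # stored past firing strengths

    def forward(self, x):
        x = np.asarray(x, dtype=float)
        # Layers 1-2: pass inputs through Gaussian membership functions.
        mu = gaussian_mf(x[None, :], self.mean, self.sigma)    # (R, n)
        # Layer 3: rule firing strength via product t-norm.
        f = np.prod(mu, axis=1)                                # (R,)
        # Layer 4: local recurrent node mixes current and past strengths.
        phi = self.lam * self.prev_phi + (1.0 - self.lam) * f
        self.prev_phi = phi
        # Layer 5: TSK linear consequent for each rule.
        y_rule = self.a[:, 0] + self.a[:, 1:] @ x              # (R,)
        # Layer 6: weighted-average defuzzification.
        return float(np.sum(phi * y_rule) / (np.sum(phi) + 1e-12))
```

A usage example: `net = RSEFNNSketch(2, 3); y = net.forward([0.5, -0.2])` produces one scalar output while updating the stored firing strengths for the next time step.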
For simplicity, we consider the single-output case; the goal is then to minimize the error function
The parameter result vector is updated as follows
where
The mean of the Gaussian membership function is updated in equation (10), and the proof of the stability criterion for the fuzzy neural LMIs is given in the Appendix.
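A hedged sketch of such an online gradient-descent step for the consequent parameters follows, assuming the output is the weighted average of TSK consequents as in Layer 6. The learning rate and shapes are illustrative, and the article's update of the Gaussian means in equation (10) is omitted here.

```python
import numpy as np

def online_update(a, phi, x, y_target, eta=0.05):
    """One online gradient-descent step on the consequent parameters `a`
    (shape R x (n+1)) for the squared error E = 0.5 * (y_target - y)^2,
    where y is the weighted-average defuzzified output."""
    x = np.asarray(x, dtype=float)
    xb = np.concatenate(([1.0], x))          # bias-augmented input
    norm = phi / (np.sum(phi) + 1e-12)       # normalized firing strengths
    y = float(norm @ (a @ xb))               # Layer 6 output
    err = y_target - y
    # dE/da_ij = -err * norm_i * xb_j (chain rule through the weighted average),
    # so gradient descent adds eta * err * norm_i * xb_j.
    a_new = a + eta * err * np.outer(norm, xb)
    return a_new, y
```

Repeated application on the same sample drives the output toward the target, which is the behavior the online learning rule relies on.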
A group interpretation scenario
Figure 3 displays the logical viewpoint of the study's framework. Various types of UAVs patrol a district and detect events in the observed scene. Every UAV is equipped with the technical background to perform event detection (the UAV event detection team). The UAVs can detect mobile targets in the scene via video tracking algorithms and use a scene ontology with contextual knowledge to fuse this information. 25,26

From the UAV team's video event detection to the team's final scene interpretation. UAV: unmanned aerial vehicle.
To model the events detected by the UAVs and their frequency values, we extended the Track Stick ontology to a fuzzy ontology. All types of UAV-detected events, together with their frequency values, are appended to the system ontology as axioms. Each axiom is expressed as the triple
Event descriptors that describe types of events are modeled as concepts in a fuzzy ontology based on frequency-of-occurrence values. 27–33 Figure 4 displays the definition of three event descriptors for a specific event e: Low_E, Medium_E, and High_E. That is, the event e is modeled as a linguistic variable (with fuzzy linguistic terms) over the three fuzzy concepts, depicted by the fuzzy membership functions in the figure. These three concepts describe different densities of vehicles (or people) related to the event type e in the observed scene. Depending on the frequency value, the event descriptors describe the participation in the event type through a fuzzy membership value. For example, if the frequency value of e is low, Low_E describes the participation in this event type better than Medium_E and High_E.

The three descriptors of event e.
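The three descriptors can be illustrated with simple membership functions. This is a hypothetical sketch: the triangular shape and the breakpoints are assumptions (the article's Figure 4 defines the actual curves), chosen only so that a low frequency yields the highest Low_E degree.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return float(np.clip(min((x - a) / (b - a + 1e-12),
                             (c - x) / (c - b + 1e-12)), 0.0, 1.0))

def event_descriptors(freq):
    """Map a normalized frequency value in [0, 1] to the three fuzzy
    concepts Low_E, Medium_E, High_E (breakpoints are illustrative)."""
    return {
        "Low_E": trimf(freq, -0.001, 0.0, 0.5),
        "Medium_E": trimf(freq, 0.0, 0.5, 1.0),
        "High_E": trimf(freq, 0.5, 1.0, 1.001),
    }
```

For a low frequency such as 0.2, `event_descriptors(0.2)` assigns the highest degree to Low_E, matching the behavior described for event e above.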
As soon as every UAV conveys its preferences over the conditions, module M2 first permits the UAVs to establish a group decision and then evaluates the team agreement and detects which UAVs dominate the decision by using consensus reaching processes. Condition understanding based on a multi-UAV system is formulated as a GDM problem 34–40; the conditions are regarded as the alternatives, and every UAV in the team is treated as an expert. Thus, every UAV expresses its preferences over the detected conditions (refer to the RSEFNN section). Formally, given n UAVs and m conditions, every UAV conveys its preferences over the m conditions. The preferences expressed by the ith UAV are represented by the vector
These UAV systems can comprise various kinds of UAVs (e.g. aerial, ground, sensor-based), each possessing distinct functions and abilities. Further, the weather, for example, luminosity and humidity, or other environmental characteristics (e.g. dense forests, radioactive regions) may decrease the capabilities of some UAVs. Therefore, each UAV has a reliability level; specifically, wi denotes the reliability weight associated with the ith UAV. For example, consider a team of three UAVs (UAV#1, UAV#2, and UAV#3), in which UAV#1 and UAV#3 are equipped with action cameras and UAV#2 is equipped with an infrared camera.
This model summarizes the UAV preferences and defines the collective preference vector over the conditions. The collective preference vector
in which the
where
The CS degree determines on which conditions the UAVs diverge and hence whether the team's decision is reliable for each condition. The CS degree over all conditions (cr) is computed as a power mean of the CS degrees. The consensus on the relation (cr) offers a single cumulative gauge for assessing the consistency among the UAVs in the team over all conditions. The closer cr is to zero, the higher the consistency of the UAVs over all conditions, and the higher the reliability of the final group decision (ccp). The collective cumulative preference (ccp) is computed as the arithmetic mean of the factors of the collective preferences
where
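The aggregation steps above can be sketched as follows, assuming (as a simplification) a weighted arithmetic mean for the collective preference and a complement-of-distance similarity between preference vectors; the article's equations (1), (3), and (4) may use different operators (e.g. a power mean), so this is illustrative only.

```python
import numpy as np

def collective_preference(P, w):
    """Weighted mean of the n UAV preference vectors (rows of P),
    weighted by the reliability weights w."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    return w @ np.asarray(P, dtype=float)

def similarity(pi, pj):
    """Per-condition similarity between two preference vectors in [0, 1]."""
    return 1.0 - np.abs(np.asarray(pi) - np.asarray(pj))

def consensus_degrees(P):
    """cs: consensus degree per condition (mean pairwise similarity);
    cr: mean divergence over all conditions (closer to 0 = higher
    consistency, matching the reading of cr in the text)."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    pairs = [similarity(P[i], P[j]) for i in range(n) for j in range(i + 1, n)]
    cs = np.mean(pairs, axis=0)
    cr = float(1.0 - np.mean(cs))
    return cs, cr
```

With this convention, two identical preference vectors give cs = 1 on every condition and cr = 0, i.e. perfect agreement.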
Numerical case
A case study is presented in this section to demonstrate how our model acts in a practical scene. Consider the experimental scene displayed in Figure 5, which involves some people crossing the road and others walking nearby. A group of six UAVs arrives at the site to monitor the area; every UAV simultaneously detects five people in the scene via video tracking, while other moving objects are filtered out (as shown for obj_6 in the figure). Each UAV uses the constructed ontology to record detected events as system ontology axioms (i.e. subject-predicate-object triples), in which the event type (predicate) relates the involved person (subject) to the position where the event takes place (object). For example, such axioms describe the events detected by UAV#1 relating the detected people and POIs. 49–55

Experimental study illustrating the six UAVs' observations and the interpretation of a practical scenario. UAV: unmanned aerial vehicle.
Following the "A group interpretation scenario" section, module M1 implements an initial step, identified as (0), in which the UAVs configure their preferences; module M2 then guides the UAVs to the final group interpretation through the subsequent steps.
(0) Situation and preference generation: the frequencies related to each event type detected by the UAVs are calculated. Then, based on query-based maximum concept satisfiability, it is possible to calculate the preference of UAV#1 for the people-marching situation. In general, the preference of a UAV for a certain condition is produced by querying the maximum concept satisfiability of the UAV instance together with its event-type frequencies. Table 1 reports the preferences for the people-marching condition produced by the six UAVs. Considering the concept of people marching defined in Listing 3, the query is applied to the UAVs to obtain the frequency values of the four event types included in the concept (second to fifth columns). The final column reports the query result, which represents the preference value of each UAV for this condition. The higher the preference value, the more suitable the UAV considers the condition for describing the observed scene. In this case, the people-marching situation is regarded as very suitable for the scene description by UAV#4.
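A hypothetical sketch of this preference generation, assuming each situation concept requires certain descriptors for its event types and that the degrees are aggregated by a mean; the membership functions `high` and `low` and the event frequencies used are invented for illustration and do not reproduce the article's ontology query.

```python
def high(x):
    """Hypothetical 'frequent' descriptor, piecewise linear on [0, 1]."""
    return max(0.0, min(1.0, 2.0 * x - 1.0))

def low(x):
    """Hypothetical 'rare' descriptor, piecewise linear on [0, 1]."""
    return max(0.0, min(1.0, 1.0 - 2.0 * x))

def uav_preference(freqs, mfs):
    """Mean of the membership degrees of each event-type frequency under
    the descriptor that the situation concept requires for that type."""
    degrees = [mf(f) for f, mf in zip(freqs, mfs)]
    return sum(degrees) / len(degrees)

# E.g. a people-marching-like concept might require a walking-type event
# to be frequent and a crossing-type event to be rare (assumed pairing):
pref = uav_preference([0.9, 0.1], [high, low])
```

Here a UAV observing frequent walking (0.9) and rare crossing (0.1) obtains a high preference for the concept, mirroring how UAV#4 rates people marching as very suitable.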
Preferences generated by the UAVs from the monitored events for the people-marching situation.
UAV: unmanned aerial vehicle; GT: GOINGTOWARDS; VRUNNING: VEHICLE RUNNING; WIR: WALKINGINSIDETHEROUTE; CRS: CROSSING.
(1) Collective preferences: five situations can be recognized in the scene displayed in Figure 5: simple crossing, people marching, traffic, shopping, and men working on the road. The UAVs produce preference values for these situations, and the results are reported in Table 2. Using (1), we calculated the collective preference (CP) vector of the team, whose values are reported in Table 3. Based on the results, simple crossing (CRS) is the most suitable situation to describe the observed scene, while traffic (TRF) and men working on the road (WRK) are the least suitable. This constitutes the final team decision. Because the scene does not present any condition that might affect any UAV's performance, for simplicity each UAV is assumed to have the same reliability, with all weights set to one.
Six UAVs preferences on five situations.
UAV: unmanned aerial vehicle; WRK: MEN ON THE ROAD; CRS: SIMPLE CROSSING; TRF: TRAFFIC; MAR: PEOPLE MARCHING; SHO: SHOPPING.
The collective preferences.a
WRK: MEN ON THE ROAD; CRS: SIMPLE CROSSING; TRF: TRAFFIC; MAR: PEOPLE MARCHING; SHO: SHOPPING.
a Values represent collective decisions in each situation.
(2) Consensus: once the collective preferences are produced, the consensus gauges described in the previous section permit assessing the consistency level among the UAVs. As mentioned previously, our consensus model consists of three aggregation levels, as depicted in Section 3-B. At level 1, the similarity vectors between pairs of UAVs evaluate the resemblance between UAV pairs. The similarity vectors are the rows of Table 4, which express the consistency of the UAV pairs on the different situations. For instance, for the simple crossing situation, the most consistent UAV pairs are (UAV#1, UAV#4), (UAV#2, UAV#3), (UAV#2, UAV#5), and (UAV#3, UAV#5). Combinations of more than two UAVs are also feasible; this article uses pairs only for brevity in the case study. The aggregation of the similarity vectors over the UAVs by the cs measure (3) permits assessing the consensus degree among the UAVs on every situation. The results are expressed as the vector cs and listed in Table 5. Note that the team agrees most on the traffic (TRF) situation and disagrees most on the shopping (SHO) situation. Starting from the vector cs, the consensus on the relation cr is given by (4). Its value is 0.46, which indicates that the average consistency of the UAVs over all situations is 54%; that is, they partially agree on all situations.
Assessed similarity vectors among the UAVs.
UAV: unmanned aerial vehicle; WRK: MEN ON THE ROAD; CRS: SIMPLE CROSSING; TRF: TRAFFIC; MAR: PEOPLE MARCHING; SHO: SHOPPING.
Consensus among UAVs on situation (vector cs).
UAV: unmanned aerial vehicle; WRK: MEN ON THE ROAD; CRS: SIMPLE CROSSING; TRF: TRAFFIC; MAR: PEOPLE MARCHING; SHO: SHOPPING.
Comparisons
To detect the UAVs that guide the team's decision-making, the proximity to the group ps (5) of each single UAV was evaluated. The resulting vectors ps are illustrated in Table 6. The values in the ith row express the differences between the preference of the ith UAV and the preference of the team on the various situations. This indicator detects the most inconsistent situations between a single UAV and the team, and which UAVs dominate the team's decision-making in each case. For instance, UAV#2 and UAV#5 differ most from the team's preference on the simple crossing (CRS) situation, while UAV#4 and UAV#1 guide the decision process on this situation. The decision-making leaders (UAV#1, UAV#5, and UAV#6) are most numerous on the people marching (MAR) situation. To detect the UAVs that guide the decision-making process over all situations, the cumulative proximity gauge (8) is adopted. Table 7 reports the cps vectors. UAV#5 guides the team decisions over all situations, while UAV#2 and UAV#4 represent the decisions most distinct from the final team decision. Figure 6 shows that the modeling error of the overall fuzzy neural approximation remains bounded: the dashed line (the real modeling error) lies entirely within the solid line (the allowed error of the system states), which guarantees the stability and stabilization of the controlled system. From Table 8, the comparison of the controllable range with time lag shows that the proposed methodology is much more flexible for the controlled system in applications. The allowable time delay for the controllable range in the studies of Zhen et al. 4 and Coyle et al. 18 is smaller than for the method proposed in this article, and, similarly, the modeling error of the traditional techniques cannot be guaranteed to be bounded. Therefore, Figure 6 and Table 8 demonstrate better performance compared with existing published work.
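The proximity measures can be sketched as follows, assuming (as an illustration) that ps is the per-situation absolute distance between a UAV's preference and the collective preference, and cps its mean over situations; the article's equations (5) and (8) define the actual gauges, so the operators here are assumptions.

```python
import numpy as np

def proximity(P, cp):
    """ps[i, k]: distance between UAV i's preference and the collective
    preference cp on situation k (0 means UAV i leads the group on k)."""
    return np.abs(np.asarray(P, dtype=float) - np.asarray(cp, dtype=float)[None, :])

def cumulative_proximity(ps):
    """cps[i]: mean distance of UAV i from the group over all situations;
    the UAV with the smallest cps guides the team decisions."""
    return ps.mean(axis=1)
```

Under this convention, the row of Table 7 with the smallest cps value would correspond to the decision-leading UAV (UAV#5 in the case study).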
The individual UAV proximity on the five situations.
UAV: unmanned aerial vehicle; WRK: MEN ON THE ROAD; CRS: SIMPLE CROSSING; TRF: TRAFFIC; MAR: PEOPLE MARCHING; SHO: SHOPPING.
UAV cumulative proximity.a
UAV: unmanned aerial vehicle.
a Each row illustrates how an individual UAV's decision differs from the group decision over all situations.

The approximation error of the overall FNN model derived in the Appendix. FNN: fuzzy neural network.
Allowable maximum time lag τ.
Discussion, conclusion, and future study
In this article, we use a modified Lyapunov evolving NN, a biomimetic algorithm with a high convergence rate, easy parameter tuning, and a memory function. A gray evolutionary neural network trains the RSEFNN and generates random seeds in the decision space to simulate consensus decision-making, using algorithmic formulas to approximate optimal solutions. The improved linear differentiation method for biological neural networks modifies the convergence coefficient to increase the share of global searches, avoid becoming trapped in local optima, increase memory capacity, improve convergence efficiency, and assign different weights to different positions during the search. 56–58 The search direction is kept explicit, and greedy strategies are used to avoid excessive unnecessary searches. The new RSEFNN combines neural network linear differentiation schemes with Lyapunov stabilization methods for nonlinear systems and UAV applications.
This study proposed a new method to support multi-UAV systems in making decisions about what happens in the scene. A GDM model with consensus modeling is employed in multi-UAV control. The model permits multi-UAV surveillance systems to make various decisions and to instantly assess the results of situation detection through environmental observation. The proposed model adds a new "ability" to multi-UAV systems for processing scene interpretation. The collective preferences allow the multi-UAV system to express the global team's judgment on what is happening in the scene. The model offers an assessment of the reliability of the decision-making that can assist an autonomous ground station or human operators in taking action. Based on the consensus achieved by the UAVs, the ground station can determine whether to replan a task to obtain more knowledge and enhance the scene interpretation. To improve the performance of real-time applications, knowledge of the recurrent structure is useful because it enables neural networks to remember past events. To test the generality of the method, we used an interdisciplinary approach to evaluate the effectiveness of the proposed recurrent-architecture-based prediction system. The performance of the RSEFNN-based system was evaluated using a generalized between-subjects method, and the results showed that the RSEFNN model is feasible, stable, and validated. The highlights of the contributions are as follows: (1) a novel online gradient-descent learning rule for the evolved biological algorithm is realized; (2) the approach allows UAVs to build high-level situations from the detected events through fuzzy-based aggregation; (3) the consensus and proximity measures support the evaluation of the reliability of the final group decision.
Future study will emphasize a multi-agent paradigm for UAV system design. On the basis of the proposed consensus-based GDM model, we will work on defining cooperative assignment activities aimed at UAV consensus in scene interpretation. Furthermore, additional experiments and simulation-based verifications will be conducted in future research.
Footnotes
Authors’ note
All analyzed data and measurements during the present study are included in the article.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Appendix 1
Let the energy function for the neural network (NN) be defined as
where
