Abstract
In this article, we study dynamic task allocation with multiple agent responsibilities in distributed multi-agent systems. Agents carry two responsibilities: communication and task execution. Movements during task execution change the system's network structure, which in turn affects communication. Agents therefore need to reconstruct the communication network autonomously to perform well on task execution. First, we analyze the relationships between the two responsibilities. Then, we design a multi-responsibility–oriented coalition formation framework for dynamic task allocation with two parts, namely, task execution and self-adaptive communication. For the former, we integrate our previously proposed algorithm into the framework for task execution coalition formation. For the latter, we develop a constrained Bayesian overlapping coalition game model to formulate the communication network. A task-allocation efficiency–oriented communication coalition utility function is defined to optimize the coalition structure of this game. Considering the geographical location dependence between the two responsibilities, we define constrained agent strategies that map agent strategies to potential location choices. Based on this design, we propose a distributed location pruning self-adaptive algorithm for constrained Bayesian overlapping coalition formation. Finally, we evaluate our framework, the multi-responsibility–oriented coalition formation framework, with simulation experiments. The results demonstrate that it outperforms two other distributed algorithms on task completion rate (by over 9.4% and over 65% on average, respectively).
Introduction
Nowadays, distributed multi-agent systems (DMASs) are increasingly employed in practice to deal with sundry tasks, for example, disaster rescue,1 military operations,2–5 and distributed sensor networks.6 Since tasks in these systems are increasingly complicated (e.g. arising dynamically and demanding heterogeneous resources), agents generally need to cooperate with others to respond to newly emerging tasks. Owing to the expressiveness of coalition game theory for describing cooperation, some researchers have applied it to task allocation problems7–10 by forming agent coalitions to respond to tasks. Meanwhile, due to practical limitations, social networks are generally adopted for agent communication.11–14 A social network allows agents to communicate directly with their neighbors and to transmit information outward via them.
As shown in Figure 1, agents in this DMAS take two responsibilities, that is, communication and task execution.

A typical social network in a disaster rescue system. (Agents are heterogeneous with respect to resources; t1 and t2 are two newly emerging random tasks.)
However, most established task allocation algorithms do not consider communication. Chapman et al.1 propose an overlapping potential game approximation algorithm (OPGAA) for dynamic rescue allocation, a distributed algorithm for overlapping coalition formation. However, OPGAA does not take resource optimization into account, which leads to insufficient agent utilization. Bin et al.15 study a multiple task allocation problem without considering the relationships among tasks. Tomasz et al.9 study a dynamic heterogeneous coalition formation problem and propose the centralized greedy coalition formation algorithm (CGCFA), which focuses on generating an optimal dynamic coalition structure. CGCFA provides an approach to dynamic coalition formation, but it relies on a central operator that knows the information of the entire system and performs all the computation; it therefore ignores communication limits and is not feasible for DMASs. In our former work,16 we proposed a mutual-selecting market-based algorithm (MSMA) for heterogeneous coalition formation, which does not consider the impact of dynamic changes in the communication structure.
In addition, studies on communication mainly focus on data transmission and spectrum sharing, without considering the limitations of communication range or multiple responsibilities.17–19 Although agents in such DMASs, for example, disaster rescue or military operations, usually need to fulfill multiple responsibilities, this issue has rarely received attention.
Multiple agent responsibilities, for example, communication and task execution, usually affect each other. Agents in Figure 1 need to move geographically to respond to dynamic tasks, which changes agent locations and the structure of the social network. Evidently, some changes, such as single points of failure in communication networks caused by agent movements, will break an effective network structure for information transmission; that is, they will decrease the system's performance on both communication and task execution. Therefore, a method for handling multiple agent responsibilities in DMASs is urgently needed, one that enables agents to reconstruct the network and respond to emerging tasks self-adaptively.
Our main contribution in this article is a multi-responsibility–oriented coalition formation framework (MOCFF) for dynamic task allocation with multiple agent responsibilities. The framework forms two kinds of coalitions, for communication and task execution, respectively. Our previously proposed algorithm, MSMA, is adopted for task coalition formation, so this article focuses on the framework and self-adaptive communication coalition (CC) reconstruction. The contributions are summarized as follows:
- We study the two responsibilities of agents in this problem, that is, communication and task execution, based on which we design the MOCFF.
- We analyze the relationship between the two responsibilities and, based on this analysis, propose a task-responding–oriented utility function to describe the relationship between CCs and task execution coalitions.
- We develop a constrained Bayesian overlapping coalition game (CBOCG) model with location constraints to formulate the CC formation problem. Furthermore, a distributed location pruning self-adaptive algorithm (LPSA) is proposed to calculate its solution, which is equivalent to a Nash equilibrium.20,21
The rest of this article is organized as follows. In the second section, we introduce system assumptions and related preliminaries. In the third section, we present our main work, an integrated self-adaptive framework, which includes two parts: self-adaptive CC formation and task execution coalition formation. In the fourth section, we introduce two algorithms, for self-adaptive CC formation and task execution coalition formation, respectively. In the fifth section, we present the experiment settings, results, and related analysis, compare our work with established studies, and summarize. Finally, possible future research directions are mentioned in the sixth section.
Related preliminaries and system assumptions
In this section, we first introduce related system assumptions and preliminaries. Based on those assumptions and definitions, we analyze the relationship between the two agent responsibilities and between the two kinds of coalitions. We then analyze self-adaptive CC formation and propose related self-adaptation principles; at the end of this section, we propose agent self-adaptation strategies based on the analysis and principles above.
System assumptions and definitions
We consider a scenario with N rational agents transmitting information via their CCs. Agents' communication ranges overlap, as in Figure 2, and overlapping agents serve as relay stations for information transmission. As illustrated in Figure 2, the CCs of agents 1, 3, and 5 overlap; agent 1 can transmit information to agent 5 via agent 3.

Overlapping CCs. CC: communication coalition.
The communication range of an agent, which is mainly decided by its type, is assumed to be a fixed constant in this article. A social network is adopted for agent communication in the DMAS. Here, the neighbors of agent ai are the agents within its communication range, and they form its neighborhood. Meanwhile, each agent is assumed to have a limited number of neighbors owing to its limited communication capability.12,13
First, we introduce some formal definitions. In this article, we use notation A to denote an agent set, notation C to denote a coalition, and a calligraphic C to denote a set of coalitions.
Definition 1
[Overlapping coalitions] In a set of coalitions
Definition 2
[Overlapping Coalition Game] Let a tuple
Also, we give out a formal definition of neighbor and neighbor set in definition 3.
Definition 3
[Neighbor, Neighbor set] Agents in agent ai's communication range are defined as agent ai's neighbors. The set of agent ai's neighbors is called its neighbor set, denoted as
Definition 4
[Sub-neighbor, Sub-neighbor set] Agent aj is defined as agent ai's sub-neighbor if aj is not a neighbor of ai but is in the CC of a neighbor of ai. The set of agent ai's sub-neighbors is called its sub-neighbor set.
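Definitions 3 to 5 can be made concrete with a short sketch. The position table, per-agent communication ranges, and helper names below are illustrative assumptions, not the article's notation:

```python
import math

def neighbors(a, positions, comm_range):
    """Neighbor set (Definition 3): agents within a's communication range."""
    return {b for b in positions
            if b != a and math.dist(positions[a], positions[b]) <= comm_range[a]}

def sub_neighbors(a, positions, comm_range):
    """Sub-neighbor set (Definition 4): agents that are not neighbors of a
    but belong to the CC (neighbor set plus the agent itself, Definition 5)
    of one of a's neighbors."""
    nbrs = neighbors(a, positions, comm_range)
    subs = set()
    for n in nbrs:
        subs |= neighbors(n, positions, comm_range) | {n}
    return subs - nbrs - {a}
```

With agents 1, 3, and 5 placed on a line as in Figure 2, agent 5 comes out as a sub-neighbor of agent 1 via agent 3.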
As shown in Figure 2, the network's structure is overly complicated for problem formulation. Thus, we propose the notion of a CC to describe the network structure, which helps to formulate the problem under study and to define its utility function.
Definition 5
[Communication coalition] For agent ai, its CC is the set of agents comprising its neighbors and itself. We denote agent ai's CC by
Here,
Relationship between the two coalitions
Agents obtain information to form task execution coalitions via their social network; thus, task execution coalitions rely on CCs. The relationship between the two kinds of coalitions is illustrated in Figure 3.

Relationship between the two kinds of coalitions.
Because tasks have deadlines, executing them faster yields more system reward. That is, a good CC structure helps to improve the efficiency of task execution coalition formation. Self-adaptive CC reconstruction requires topological optimization in social networks.
Analysis of topological optimization in social networks
As mentioned above, we adopt a social network for agent communication. Taking agent 3 in Figure 2 as an example, information transmission proceeds as follows: when agent 3 receives a task announcement, it transmits the announcement to its neighbors, which, upon receipt, also forward it to their neighbors for task coalition formation.
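The forwarding scheme above is essentially a bounded flood over the social network. A minimal sketch, assuming the adjacency map has been built from the agents' neighbor sets and using a hop limit to match the three-level local network discussed later:

```python
def propagate_announcement(announcer, adjacency, hops=2):
    """Flood a task announcement over the social network for a fixed
    number of hops; returns the set of agents that receive it.
    hops=2 covers the announcer, its neighbors, and its sub-neighbors."""
    informed = {announcer}
    frontier = {announcer}
    for _ in range(hops):
        # each informed agent forwards once to neighbors not yet informed
        frontier = {n for a in frontier for n in adjacency[a]} - informed
        informed |= frontier
    return informed
```

On the Figure 2 topology, an announcement from agent 3 reaches agents 1, 5, and 6 within two hops.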
The adopted social network can be regarded as a combination of agents' local social networks. An overlapping CC of a certain agent includes some overlapping nodes, such as neighbors or sub-neighbors, which results in repeated and useless information transmission. Therefore, a good network structure helps to avoid or alleviate these issues. We first analyze topological optimization of local social networks.
Agents 3, 5, and 6, as shown in Figure 2, are neighbors of each other, which creates a closed loop in agent 3's local social network. Taking agent 3's local network as an example, Figure 4 gives a detailed description of its tree structure, where arrows indicate the directions in which information flows.

Tree structure of agent 3’s local social network.
Considering the issues discussed above, an ideal local social network should be able to transmit task announcements over as wide a range as possible so that task-executing coalitions form in time. That is, expanding an agent's information transmission range helps to avoid useless and unnecessary announcement transmission. The performance of the social network can therefore be improved based on the following considerations:16,21
- Local performance: Agents should make personal decisions that improve their contribution to the global utility in three respects. First, to avoid unnecessary information transmission, closed loops should be removed from the topology, which means removing some edges. Second, to form optimal or suboptimal executing coalitions, task announcements should reach as many agents as possible, which supports optimization toward a better solution. Third, considering the time and fuel agents consume moving to task locations, we only consider three levels of the local social network: a task announcer (the center agent of the local network), its neighbors, and its sub-neighbors.
- Self-adaptive triggers: There are many possible triggers for self-adaptive network reconstruction. An agent decides whether to adapt based on its performance and local network structure, such as the agent density in its neighborhood.
- Rewiring: How does an agent decide to remove a certain connection, or select an agent for a new connection? The utility of the individual agent is our main consideration.
Since “a local Bayesian equilibrium is formally equivalent to a Nash equilibrium of a standard game,” 21 we utilize the utility of an individual agent to find a local Bayesian equilibrium, which is an equilibrium of our problem.
When its local network self-adapts, an agent has two states,
Self-adaptation principles and uncertainty in agent strategies
An ideal local social network of an agent should have enough nodes but as few redundant neighbors as possible. Based on the concept of agent density in a neighborhood, we classify agent local neighborhood networks into three categories: sparse, reasonable, and crowded local networks.
Agents not in reasonable networks need to adjust their locations to reconstruct their local social networks for better information delivery. As in previous studies,10,11 a self-adaptive network is an agent interaction topology that results from local rewiring decisions made by individual agents. Based on the analysis above, we propose self-adaptation principles for reorganized networks.
Definition 6: Self-adaptation principles
For social network topology optimization, any agent in an unreasonable local social network should make its self-adaptive decision based on its local network state:
- [Sparse local network] An agent in a sparse local network will try to improve its communication by adding a new neighbor via a location change.
- [Crowded local network] An agent in a crowded local network will remove repeated or useless nodes from its neighbor or sub-neighbor sets via a location change.
Considering the uncertainties, each agent has two self-adaptation choices: maintaining its current local social network or reconstructing it. Because location affects the network structure, choices such as leaving or staying in a certain CC correspond to two choices about the agent's current location, that is, leaving or staying. Thus, for any agent in the system:
- An available agent will stay at its current location when leaving cannot enhance its utility.
- An available agent will leave its current location when doing so increases the local social network utility by adding a new neighbor or removing a redundant one.
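The density-based classification and the resulting stay-or-leave decision can be sketched as follows. The neighbor-count thresholds `lo` and `hi` and the returned action names are assumptions standing in for the article's elided criteria:

```python
def classify_local_network(n_neighbors, lo, hi):
    """Classify a local neighborhood as sparse, reasonable, or crowded by
    agent density (approximated here by the neighbor count; lo and hi are
    assumed thresholds)."""
    if n_neighbors < lo:
        return "sparse"
    if n_neighbors > hi:
        return "crowded"
    return "reasonable"

def self_adapt_decision(n_neighbors, lo, hi):
    """Self-adaptation principles of Definition 6: a sparse network moves
    to add a neighbor, a crowded one moves to shed a redundant neighbor,
    and a reasonable one stays."""
    state = classify_local_network(n_neighbors, lo, hi)
    if state == "sparse":
        return "leave_to_add_neighbor"
    if state == "crowded":
        return "leave_to_remove_neighbor"
    return "stay"
```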
Here, an agent is supposed to add or remove only one neighbor at a time. More specifically, the self-adaptation strategies can be defined as follows:
Constrained agent strategy
As discussed above, the aim of an agent's self-adaptation is to keep its local social network reasonable, avoiding crowded or sparse networks. Apparently, an agent's self-adaptation strategies are constrained by its current location. Agents are assumed to share information with their neighbors honestly, so each agent can obtain the information of its neighbors and sub-neighbors.
Since the problem is formulated as a coalition game, to ensure that the calculation converges to a Bayesian equilibrium, we impose rules on agent location self-adaptation.
Rule 1 ensures that the algorithm proposed for self-adaptive communication formation in the "Algorithms in MOCFF" section converges to a Bayesian equilibrium; the proof is given in that section. The location self-adaptation calculation process is shown in Figure 5.

Agent location self-adaptation process.
Freedom and probability calculation
By updating its beliefs about the agents in its CC based on observations, an agent can estimate those agents' strategies about staying or leaving. Besides, CCs can overlap in our problem, which reflects practical settings.22–24 Although agents make decisions independently, their decisions mutually affect each other. To model the uncertainty in CC reconstruction, we use Bayesian coalition game theory25–28 to formulate the problem, so that we can calculate probabilities over agents' strategies.
Since agents make decisions simultaneously, each agent builds observations to estimate the decisions of other agents. Agent ai will stay at its current location with probability ς; in other words, the probability that it leaves its current location will be
where
Definition 7
[Freedom] The freedom degree of an agent is the ratio of its neighbor count to the upper limit on neighbors. It is denoted as in equation (2)
where
Based on the Bayesian equilibrium definition,23 the more neighbors ai has, the lower the probability that it leaves. An agent is totally free to make any decision on location change when it has no neighbors.
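Since equation (2) is elided here, the following is one plausible reading of Definition 7 consistent with the surrounding text: freedom is maximal (1) with no neighbors and shrinks toward 0 as the neighbor count approaches the upper limit, and the leave probability tracks the freedom degree. The exact functional forms are assumptions:

```python
def freedom(n_neighbors, n_max):
    """Freedom degree (Definition 7), read as 1 - n/n_max: an agent with
    no neighbors is totally free (freedom 1), and freedom shrinks as
    neighbors accumulate toward the upper limit n_max."""
    return 1.0 - n_neighbors / n_max

def leave_probability(n_neighbors, n_max):
    """Assumed coupling: the probability of leaving equals the freedom
    degree, so more neighbors means a lower chance of leaving; the stay
    probability is its complement."""
    return freedom(n_neighbors, n_max)
```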
For agent ai, we define a set of staying and leaving acknowledgements as
Besides, a Bayesian equilibrium is also a Nash equilibrium, 22,23 thus a Nash-stable coalitional structure is an equivalent solution to a Bayesian equilibrium. 20
Definition 8
A coalitional structure
Based on definition 8, if a coalition structure (i.e. a set of coalitions) is stable, then:
- no agent ai has an incentive to leave its current coalition;
- given an agent's beliefs about the others, no agent will have an incentive to leave its current coalition.
An integrated framework for dynamic task allocation
In this section, we first introduce an integrated framework for dynamic task allocation under multiple agent responsibilities. This framework is designed to cope with the two responsibilities by forming two kinds of coalitions, that is, CCs and task execution coalitions.
Then, we introduce our CBOCG model for CC formation. Considering agents' obligations and costs, we define a utility function for an overlapping CC, which casts our issue as an optimal utility calculation problem. After that, we present the belief update calculation. At the end of this section, we introduce a previously proposed algorithm for task allocation,16 which is integrated into this framework for task execution coalition formation.
A multi-responsibility–oriented coalition formation framework
Agents need to form two kinds of coalitions to deal with the two responsibilities. Here, agents are assumed to have different states with regard to the two kinds of coalitions. For task execution, agents have three states
Considering the system reward, task execution has higher priority. That is, at each time step, idle agents first check whether they have received task coalition announcements. When agents receive no task coalition information and remain idle, they are permitted to self-adapt in that step. The agent state transition is shown in Figure 6.

Agent state transition.
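The priority rule above can be sketched as a per-step decision function. The concrete state names (`idle`, `responding`, `executing`, `adapting`) are assumed labels, since the article's three task-execution states are elided in this extraction:

```python
def step(agent_state, has_task_offer, wants_to_adapt):
    """One decision step per time unit. Task execution has priority: an
    idle agent first checks for task coalition announcements and is
    granted CC self-adaptation only if none arrived."""
    if agent_state != "idle":
        return agent_state      # busy agents keep their current activity
    if has_task_offer:
        return "responding"     # join task execution coalition formation
    if wants_to_adapt:
        return "adapting"       # granted CC self-adaptation this step
    return "idle"
```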
Based on agent state transition, we designed a MOCFF, as shown in Figure 7.

The framework of a multi-responsibility–oriented coalition formation.
As illustrated in Figure 7, the two kinds of coalitions, CCs and task execution coalitions, correspond to the two responsibilities, respectively. Agents update their states for multi-responsibility–oriented coalition formation via our two proposed algorithms.
Task execution coalition formation
In this article, we use a distributed task allocation algorithm proposed by Bing et al.,16 the MSMA, for dynamic coalition formation. MSMA is designed on market-based methods and utilizes auctions to realize the optimization. The process of MSMA is shown in Figure 8 (interested readers can refer to our previous work16 for details).

The process of MSMA. MSMA: mutual-selecting market-based algorithm.
CC formation
For CC formation, we consider a social network consisting of N agents, and the set of those N agents is denoted by
- Agents. The set A consists of N rational agents, denoted by
- Coalitions. The set
- Constrained strategy. The strategy space can be denoted by
- Observation. The observation space is denoted as
- Probability distribution. p is a common a priori probability over the CLC set
- Payoff.
where
Preference.
In addition, we use a tuple to represent an agent, so that the problem can be described more specifically. For agent
In the rest of this article, we use the tuple to describe agents.
A solution of a CBOCG
A solution of the CBOCG is to find its Bayesian equilibrium, which is also a Nash equilibrium of a dynamic game.25 Given beliefs about the types of, and strategies played by, other agents, a Bayesian equilibrium is a strategy profile that maximizes the expected payoff of each agent.
Based on the definition of the Bayesian equilibrium,25 the strategy each agent adopts should aim at optimizing its utility.
Thus, each agent in the CBOCG should choose its strategy to seek its optimal payoff. In other words, for any agent
In the following, we define coalition utility and cost functions, based on which we define a payoff function.
A coalition payoff function definition
A coalition payoff function can be defined as
Resource excess or insufficiency causes a low agent utilization rate. Also, for a certain task, the earlier it gets processed, the more effective the system will be. In a CC, agents generally have two obligations: first, they need to transmit task announcements for forming task coalitions; second, they need to execute tasks with others in coalition form. Thus, the resources possessed by an agent and its CC, and the number of outer agents that CC members can transmit the CC's task announcements to, directly impact whether a qualified task coalition can be formed in time.
Based on those considerations, we define the payoff of an agent's CC as consisting of three parts: the CC's resource utility, its transmission utility, and the self-adaptation cost. A CC's resource utility depends on how many resources it possesses. Transmission utility measures how many outer-coalition resources a CC can directly transmit task announcements to, that is, how many resources the CC can call on for task execution in time.
Thus, we can give a payoff function of
where ur is
Resource utility and transmission utility
For resource utility
where
The transmission utility
Thereafter, the utility of
where
Cost
The topological reconstruction of a local social network comes with a cost, which can be expressed as a function. Since self-adaptation consumes no resources, a CC's resource utility does not change; the cost of an agent's self-adaptation is therefore mainly determined by its movement. Specifically, a location change incurs movement cost, that is, time cost. Thus, we define a cost function as in equation (9)
where
where
Considering all the cost above,
Based on the CC utility function, we design an agent strategy-making model.
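The three-part payoff described above can be sketched numerically. The linear weights `w_r`, `w_t`, `w_c` and the concrete utility forms are assumptions, since equations (5) to (9) are elided in this extraction; the sketch only mirrors the stated structure of resource utility plus transmission utility minus movement (time) cost:

```python
def cc_payoff(cc_resources, reachable_outer_resources,
              move_distance, speed, w_r=1.0, w_t=1.0, w_c=1.0):
    """Sketch of a CC payoff: resource utility (resources the CC holds)
    plus transmission utility (outer-coalition resources it can call on)
    minus the self-adaptation cost (movement time for a location change)."""
    u_resource = w_r * sum(cc_resources)
    u_transmit = w_t * sum(reachable_outer_resources)
    cost = w_c * (move_distance / speed)   # time cost of relocation
    return u_resource + u_transmit - cost
```

Under this sketch, a location change is worthwhile only if the gain in resource or transmission utility exceeds the movement time cost, which is exactly the stay-or-leave criterion stated earlier.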
Belief update
In a CBOCG, each agent holds beliefs about other agents' decisions. Agent ai calculates its best reply based on its beliefs about other agents' potential strategies. Due to the dynamics of the DMAS, agents need to update their beliefs about their neighbors via related acknowledgments and observations.
Under our assumption, each agent can build observations about its neighbors and sub-neighbors, which helps update the corresponding beliefs for more accurate utility computation.
Let
Given an observation
where
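Since the update equation is elided above, the following sketch shows the standard Bayes rule it describes, applied to a neighbor's two strategies. The likelihood table, which models how an observation relates to each strategy, is an assumption:

```python
def update_belief(prior, likelihood, observation):
    """Bayesian belief update over a neighbor's strategies
    ('stay' / 'leave'): posterior(s) is proportional to
    prior(s) * P(observation | s), then normalized."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}
```

For example, observing that a neighbor has moved shifts belief mass toward the "leave" strategy in proportion to how much more likely movement is under "leave" than under "stay".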
Algorithms in MOCFF
In this section, we will introduce two distributed algorithms adopted in the MOCFF. One is the MSMA for task execution coalition formation; the other is the LPSA for self-adaptive CC formation.
First, we will give a brief introduction of MSMA. Then, we will introduce the LPSA and prove that its result converges to a Nash-stable coalition structure. At the end of this section, the working process of MOCFF will be presented in detail.
MSMA for task execution formation
As introduced in “An integrated framework for dynamic task allocation” section, the MSMA is designed for task execution coalition formation. The detailed process of the MSMA is given by Bing et al. 16
LPSA for CC formation
Here, we design the LPSA based on agents’ strict preferences and constraints on locations, as presented in definition 5, to find a solution of the CBOCG.
Algorithm 2 works in the following steps: first, at time step τ, each agent updates its current CC and sub-neighbor set; then, each agent available for self-adaptation forms a strategy space based on its network state; after that, each available agent estimates the probabilities of potential strategies and computes the corresponding expected payoffs. If the utility of strategy
Distributed location pruning self-adaptive algorithm at time step τ
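The per-step control flow that the text lists for algorithm 2 can be sketched as follows. The helper callables (`candidate_strategies`, `expected_payoff`, `is_available`) are assumptions standing in for the article's elided formulas; the strict-improvement comparison mirrors the strict preferences on which LPSA is built:

```python
def lpsa_step(agents, current_cc, candidate_strategies,
              expected_payoff, is_available):
    """One LPSA iteration at time step tau: each available agent builds
    its constrained strategy space, evaluates expected payoffs, and
    switches only to a strategy that strictly beats staying put."""
    decisions = {}
    for a in agents:
        if not is_available(a):
            continue                       # busy agents skip self-adaptation
        best = current_cc[a]
        best_value = expected_payoff(a, best)
        for s in candidate_strategies(a):
            v = expected_payoff(a, s)
            if v > best_value:             # strict preference only
                best, best_value = s, v
        decisions[a] = best
    return decisions
```

Because each switch strictly increases the mover's expected payoff, repeating this step drives the structure toward the Nash-stable outcome argued for in theorem 1.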
Next, we will consider whether any coalition structure obtained from algorithm 2 is Nash-stable.
Theorem 1
Algorithm 2 will converge to a Nash-stable coalitional structure.
Proof
According to definition 8 of the Nash-stable coalitional structure, an agent can move to another coalition only when no member of the new coalition is made worse off (i.e. the agents believe that they will not be worse off in expected payoff after the agent joins). Based on a CC's payoff in equation (5), a new agent's joining always increases the payoff of a coalition.
Starting with any coalition structure CS, we suppose that algorithm 2 will not converge to a Nash-stable coalition structure. In other words, for
Based on algorithm 2, agent ai finds a new coalition to join, which means that it finds a new coalition satisfying
The process of MOCFF
The working process of MOCFF is shown in Figure 9.

The process of MOCFF. MOCFF: multi-responsibility–oriented coalition formation framework; CC: communication coalition.
Task execution takes priority over communication self-adaptation; therefore, idle agents are available for self-adaptation in their CCs only when they have not received any offers for task execution coalitions.
Result analysis
In this section, we test MOCFF on a dynamic task allocation problem. In the same simulation experiments, it is compared with three established algorithms, namely, MSMA, OPGAA, and CGCFA. Here, the centralized CGCFA serves as a reference for the rationality of the simulation data.
Firstly, we will introduce the settings of our simulation experiments; then, three performance measures will be defined for algorithm performance evaluation. Finally, we will set two groups of experiments for the algorithm performance comparison and analyze the results.
Simulation settings
For simplicity, we assume that each agent possesses two kinds of resources and that the quantity it carries varies randomly. Tasks are assumed to have different resource requirements, and their locations are generated randomly.
When dynamic heterogeneous tasks are allocated, task requirements usually cannot be satisfied exactly due to agents' heterogeneity and distribution. Therefore, the total resources a coalition possesses for a task might exceed the task's requirement, which directly impacts the task completion rate (TCR).
For comprehensive performance comparison under different settings, we set two groups of experiments. To achieve statistical significance, each group will be set with a series of scenarios. Each scenario will be run 10 times with different task flow simulations.
Experiments group 1 is designed to compare the performance of the four algorithms. Here, we set three scenarios, 1-1, 1-2, and 1-3, in group 1. The detailed settings in group 1 are given in Table 1.
Simulation settings in group 1.
Group 2 is designed to identify improvements in MOCFF when compared to MSMA. Firstly, we propose a parameter, task density, to help analyze the results. The definition is given as follows
where
For detailed comparisons under different settings, that is, task reduction and total time, we divide group 2 into two subgroups, 2-1 and 2-2.
Here, we set 9 scenarios in subgroups 2-1 and 2-2 respectively. The settings of subgroup 2-1 are listed in Table 2, including scenarios 2-1 to 2-9.
Settings in simulation subgroup 2-1.
The settings of group 2-2 are listed in Table 3, including scenarios 3-1 to 3-9.
Settings in simulation subgroup 2-2.
For each scenario, we generate and run 10 random groups of task data for the statistical analysis of results. In our simulation experiments, tasks are generated randomly with each resource requirement
Performance evaluation
We adopt three standard measurements, M1, M2, and TCR, to review the performances of these four algorithms. The results are analyzed with respect to three performance measures:
Here, we give the definition of TCR as in equation (15)

TCR = Nc / Nt (15)

where Nt is the total number of tasks during a run, and Nc is the number of tasks completed in the same period.
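As a trivial but executable check of equation (15):

```python
def task_completion_rate(n_completed, n_total):
    """TCR of equation (15): completed tasks (Nc) over total tasks (Nt)."""
    return n_completed / n_total
```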
Result analysis
In this section, we present the performance comparisons for the two groups of experiments. Firstly, in scenarios with sufficient agents, we compare the performance of the four coalition formation algorithms and give related analysis. Then, we present performance comparisons of MSMA and MOCFF in scenarios with resource shortages.
Comparison of four algorithms
Here, we give the results of the experiments in group 1. The performances of the four methods (CGCFA, OPGAA, MSMA, and MOCFF) are compared in terms of TCR. The results are shown in Figure 10.

TCR comparison: Comparison of TCR in scenarios 1-1, 1-2, and 1-3. TCR: task completion rate.
Figure 10 shows a comparison of TCRs of the four algorithms, where blue, green, and red indicate scenarios 1-1, 1-2, and 1-3, respectively. From this figure, we can see that our algorithm helps to improve the performance of task response, especially when agents are insufficient. MOCFF performs better than the other two distributed algorithms, that is, MSMA and OPGAA.
The results on M1, M2, and TCR of the three distributed algorithms (OPGAA, MSMA, and MOCFF) are presented in Table 4.
Results in simulation experiments.
TCR: task completion rate; MSMA: mutual-selecting market-based algorithm; MOCFF: multi-responsibility oriented coalition formation framework; OPGAA: overlapping potential game approximation algorithm.
In Table 4, M1 and M2 are statistical mean values of the total experiments in scenarios 1-1, 1-2, and 1-3 respectively.
As presented above, MOCFF performs better on average than the other two distributed algorithms, OPGAA and MSMA, in all three scenarios. This illustrates that MOCFF improves the practical efficiency of coalition formation through more efficient communication. Specifically, MOCFF performs increasingly better as the number of agents decreases, with improvement rates over MSMA of 7.11% in scenario 1-1, 10.15% in scenario 1-2, and 11.46% in scenario 1-3.
Comparison between MOCFF and MSMA
In this section, we present the results of the experiments in group 2. The performances of the two algorithms (MSMA and MOCFF) are compared with respect to the TCR.
First, we test the performance of MOCFF and MSMA in the experiments of group 2-1, that is, scenarios of task reduction. The results are shown in Figure 11.

Comparison of task completion rate with MSMA and MOCFF in 100 time units. (a) TCR in scenarios 2-1, 2-2, and 2-3 with 40 tasks; (b) TCR in scenarios 2-4, 2-5, and 2-6 with 50 tasks; (c) TCR in scenarios 2-7, 2-8, and 2-9 with 60 tasks. MSMA: mutual-selecting market–based algorithm; MOCFF: multi-responsibility–oriented coalition formation framework; TCR: task completion rate.
Figure 11 presents a comparison of scenarios involving 40, 50, and 60 tasks, respectively, within 100 time units. Figure 11(a) illustrates the comparison among scenarios 2-1, 2-2, and 2-3; Figure 11(b), among scenarios 2-4, 2-5, and 2-6; and Figure 11(c), among scenarios 2-7, 2-8, and 2-9.
As seen directly from Figure 11, MOCFF performs better than MSMA in each scenario of group 2-1, regardless of whether agents are sufficient. When the number of tasks decreases, the total requirement of all tasks declines, and therefore both algorithms perform better. When the number of agents decreases under a fixed number of tasks, the growth rate of the TCR of MOCFF increases compared to that of MSMA. This means that our self-adaptation mechanism works more efficiently when agents are insufficient. Also, as shown in Figure 11, under the same
Then, we test the performance of MOCFF and MSMA in experiments of group 2-2, that is, scenarios with different total time. The results of comparisons are shown in Figure 12.

TCR comparison with MSMA and MOCFF with 100 agents. (a) TCR in scenarios 3-1, 3-2, and 3-3 with 40 tasks; (b) TCR in scenarios 3-4, 3-5, and 3-6 with 50 tasks; (c) TCR in scenarios 3-7, 3-8, and 3-9 with 60 tasks. MSMA: mutual-selecting market-based algorithm; MOCFF: multi-responsibility–oriented coalition formation framework; TCR: task completion rate.
As seen in Figure 12, MOCFF performs better than MSMA in each scenario of group 2-2 as well. When the numbers of agents and tasks remain the same, the total time apparently has little impact on the performance of either algorithm.
When the number of agents remains the same, the influence of dt on the performance of MOCFF is not obvious compared with its influence on MSMA. Thus, we plot the TCR against dt over all scenarios from 2-1 to 3-9, as shown in Figure 13.

Comparison of the task completion rates of MSMA and MOCFF with 100 agents. MSMA: mutual-selecting market-based algorithm; MOCFF: multi-responsibility–oriented coalition formation framework.
In Figure 13, we arrange dt in ascending order and plot the corresponding TCR. The TCR trend generally descends as dt ascends, which indicates that dt has an impact on the performance of MOCFF. In other words, the number of agents, the number of tasks, and the total time of tasks influence the effect of our self-adaptive mechanism because of the time cost the mechanism requires. Also, Figures 8 and 9 show that the performance of MOCFF relies on the structure of the original communication network.
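The arrangement used for Figure 13 can be sketched as follows; the (dt, TCR) pairs here are purely illustrative placeholders, not measured data from the experiments:

```python
# Hypothetical (dt, TCR) pairs for a handful of scenarios.
samples = [(120, 0.82), (100, 0.85), (140, 0.78), (160, 0.74)]

# As in Figure 13: arrange the scenarios by ascending dt
# and read off the corresponding TCR values.
samples.sort(key=lambda pair: pair[0])
tcrs = [tcr for _, tcr in samples]

# Check the generally descending TCR trend noted in the text.
descending = all(a >= b for a, b in zip(tcrs, tcrs[1:]))
```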
Concluding remarks
This article presents MOCFF, a multi-responsibility–oriented coalition formation framework for dynamic task allocation. In the proposed self-adaptive framework, two agent responsibilities are identified, namely, communication and task execution. Accordingly, the framework forms two kinds of coalitions to fulfill the two responsibilities, and two algorithms, LPSA and MSMA, are employed in the framework to address the two kinds of coalition formation.
Our formerly proposed algorithm, MSMA, is integrated for forming task execution coalitions. The self-adaptive mechanism enables the social network to achieve better task information announcement. With the self-adaptive CC formation algorithm, the framework works more efficiently on dynamic task allocation. The experimental results show that, compared to other methods such as OPGAA and CGCFA, our framework performs the optimization of coalition formation more efficiently.
Additionally, for the proposed CBOCG model, we intend to further study its features and additional equilibria. We also find that the problem studied here is a complex network problem. 30 Thus, in future work, we plan to formulate this problem based on complex network theory, which helps to analyze the unknown topological structure of a dynamical network for a better solution. Besides, we do not consider the impact of the task execution procedure on the CC formation procedure in this work, which is also a valuable subject for future research.
Acknowledgements
The authors are grateful to the editor and the anonymous referees for their helpful comments and suggestions, which have been valuable for improving the article.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
