Abstract
This paper tackles the issue of spectrum sharing and medium access control among heterogeneous secondary users. Two solutions are proposed. The first can be used in a centralized fashion, where a central entity decides the transmission power for all secondary users; it tries to minimize the time required by secondary users to clear their queues. The second solution assumes the autonomy of secondary users, where the decision to update transmission power is distributed among them. A dynamical system approach is used to model system behavior. The trajectory of the interference noise level suffered by secondary users is used to update transmission power at the beginning of each time frame based on the proposed dynamic power assignment rule. This rule couples the responses of all secondary users in a way that simplifies forecasting of future interference noise. A forecasting engine based on a deep neural network is proposed, which gives secondary users the ability to acquire useful knowledge from the surrounding wireless environment. As a result, better transmission power allocation is achieved. Evaluation experiments confirmed that adopting a deep neural network improves performance by 46% on average, and all of the proposed solutions achieved strong performance.
1. Introduction
Cognitive radio is a major step in the evolution of wireless communications [1]. It provides wireless systems with the ability to adapt to changes in the environment and to increase their utilization. By observing their surroundings, wireless systems should update their communication parameters in a way that increases their performance. The most important issue in system utilization is spectrum usage, which can be considered the main motivation behind the cognitive radio paradigm. The available wireless spectrum is a very scarce resource and highly underutilized [2]. Cognitive radio systems (secondary users) should seize the opportunity and try to use the underutilized spectrum to improve their performance. This is easier said than done: a serious problem arises if multiple cognitive systems adopt a naïve policy of unorganized attempts to utilize the same spectrum. Such naïve use of idle spectrum leads to chaos and highly competitive situations among cognitive radio systems, and other resources such as power and computation are wasted. Hence, secondary users should figure out a way to share the underutilized spectrum without creating more problems for each other.
Spectrum sharing is a key challenge in cognitive radio technologies. There are two types of spectrum sharing. The first is primary-secondary sharing where the primary users allow secondary users to use their spectrum under some conditions. This first type is divided into two categories which are primary-secondary underlay sharing and primary-secondary overlay sharing [3]. In underlay paradigm, the secondary user operates on the primary user channel with promise of not causing interference higher than some threshold. In overlay paradigm, the primary user allows the secondary user to use its channel in return for some service such as relaying. The second type of spectrum sharing is secondary-secondary sharing where secondary users utilize the CR spectrum based on some sharing policy.
1.1. Related Works
Numerous works in the literature have tried to tackle the first type (primary-secondary) of the spectrum sharing issue [4–24]. For example, convex optimization was used by [4] to show the possibility of having both the primary and the secondary user operating in the same spectrum. Their approach is based on using multiple antennas for the secondary user transmission so that spatial multiplexing can be used to guarantee low interference constraints at the primary user. In addition, their approach is able to operate on multiple channels at the same time, which means it offers a high level of reconfigurability in the spatial, time, and frequency domains.
Another example of primary-secondary sharing is provided by [5], which followed a game theory approach based on oligopoly market strategies. In their approach, primary users dynamically adjust the prices at which they offer spectrum access to secondary users. The cost for primary users is measured in terms of the quality of service degradation of primary transmission. Niyato and Hossain's analysis found that the Nash equilibrium of their game is not optimal from the primary users' perspective. Therefore, they forced primary users to choose strategies which achieve global optimality, and they provided a mechanism to punish deviating primary users (i.e., primary users who do not follow globally optimal strategies) to ensure stability. However, their approach is not fully robust because it requires that all primary users be aware of the punishment mechanism.
A similar market-based approach is adopted by [8], where auction theory is used. The proposed mechanism is based on secondary users submitting bids for spectrum auctioned by primary users. The primary user then chooses the best bid which achieves a stable equilibrium. To generalize the proposed solution to multiple primary users, several conditions have to be satisfied to achieve stability, which is the main weakness of this approach. For example, primary users must always consider other wireless stations' interests; if they do not, there should be an entity that punishes uncooperative primary users. In addition, every primary user must be aware of all other primary users; otherwise, the proposed solution will not be stable.
The work in [16] used simulated annealing to form its solution, which is based on primary users allowing secondary users to operate in the spectrum while obeying an interference cap. In general, their approach had good performance. However, it requires a centralized global optimization where simulated annealing is applied. Reference [15] followed a different path to tackle the spectrum sharing issue. They assumed that secondary networks have relay nodes which can be used to ease cognitive radio operations. Each secondary user is supposed to select transmission through a relay node which guarantees the interference constraint of primary users. Their solution showed very good performance. Nevertheless, it requires an existing supportive relay infrastructure to be effective.
The literature lacks research tackling the second type (secondary-secondary) of spectrum sharing. Most references working on the second type assumed homogeneous secondary users and idle primary channels [25–33]. For instance, [26] proposed a mechanism which distributes channels among secondary users in a way that minimizes interference among them. They followed a game-theoretic approach to develop their solution. To improve performance, Pigouvian taxation was incorporated into the proposed mechanism. On the other hand, [33] was concerned with spectrum sharing and MAC in wide secondary networks where each secondary user can access a different subset of the available channels. They formulated such a system as a mixed-integer nonlinear problem, which is very hard to solve. As a result, linearization and relaxation techniques were used to ease the problem. The proposed solution is iterative in nature, and it was able to achieve near-optimal performance according to their problem formulation.
A few works tried to handle heterogeneous cognitive radio settings [34–41]. One example is [34], where the authors suggested a two-phase approach: the first phase distributes channels among secondary users, and the second phase assigns transmission power on these channels. Their approach was able to perform well. However, it requires a centralized scheduler, which introduces communication and cooperation overhead. Similarly, [35] tried to tackle heterogeneous spectrum sharing and MAC by proposing a cross-layer approach. It is based on classifying channels probabilistically, after which the Hungarian algorithm is used to schedule channels among secondary users. The proposed solution was able to outperform greedy techniques.
1.2. Motivation and Contributions
Most of the discussed related works assumed some sort of cooperation among secondary users. These facts motivated the authors of this paper to investigate the second type of spectrum sharing and medium access control in a harsher wireless environment. In this environment, secondary users are assumed to be totally heterogeneous without any cooperation among them. In addition, primary user channels are assumed to be heterogeneous as well, with different bandwidths and channel characteristics (e.g., fading and shadowing).
This paper tries to contribute to the existing cognitive radio literature through three main contributions. (1) The concept of Spectrum-Time Duality is highlighted, which states that it is beneficial for secondary users to reduce their level of competition with other secondary users by giving spectrum in return for time and vice versa. The idea depends on the fact that secondary users can be idle once they have transmitted all their data; for a secondary user which still has data to send, its job gets easier after the others become idle. Based on this concept, a centralized transmission power mechanism is introduced whose main objective is to clear the secondary users' queues as soon as possible so that the competition faced by other secondary users is reduced. (2) A distributed transmission power rule is developed so that secondary users can assign their transmission power according to a closed dynamical system evolution. By using this rule, the trajectory evolution of many system parameters can be predicted, since it follows strict dynamics. Examples of such parameters are the queue backlogs of secondary users, the interference noise caused by secondary users, and the actual transmission power of all secondary users. (3) An advanced method is proposed to employ a deep neural network (DNN) [42], a very promising machine learning technique. This method utilizes the trajectory evolution of interference noise levels to forecast future noise levels. As a result, the performance of the transmission power rule is greatly enhanced. By employing DNNs in cognitive radio technology, real cognition capabilities can be realized: secondary users will be able to generate decisions without the intervention of the system designer. To the best of our knowledge, this is the first attempt in the literature to employ a DNN in cognitive radio networks.
Note that the proposed solutions are designed for cognitive radio technology since it is the main technology of next generation wireless communications. Nevertheless, these solutions can be easily implemented in conventional wireless technologies.
This paper is organized as follows: Section 2 defines the adopted system model to develop the proposed solutions; Section 3 introduces the proposed centralized spectrum sharing and access mechanism; Section 4 develops distributed transmission power rule and noise forecasting engine as the second proposed solution; intensive evaluation experiments are discussed in Section 5 and the conclusion is delivered as Section 6.
2. System Model
In this section, the general formulation of the spectrum sharing problem in cognitive radio networks is introduced. Then, this formulation will be modified to incorporate important aspects which can be used to ease the proposed solution development.
2.1. General Spectrum Sharing Problem
Consider a set of several secondary users pairs (transmitter and receiver)
The transmitter in every secondary user pair
Most works in literature assume homogeneity of channels bandwidth. However, future systems have to deal with heterogeneous channels. Therefore, let us assume that every channel
Keep in mind that this formulation assumes that every secondary user has something to transmit all the time. In other words, secondary users do not have idle periods. Such an assumption may be valid in a controlled environment of homogeneous secondary users (e.g., wireless sensor networks). However, it is not valid for a heterogeneous secondary users' environment (e.g., a mix of multiple femtocells and WLANs). Therefore, we introduce another formulation in the next subsection.
2.2. Spectrum-Time Duality
One of the most important observations in heterogeneous secondary users environment is that every user has its own packet size (
Equation (3) does not guarantee minimum
So far, we considered
If there is a central coordinator which assigns the transmission power to all users during
Equation (5) is for centralized control, while the previous one is for distributed control. As a starting point, we will focus on the centralized version.
The maximum number of packets that can be transferred in interval
The next step is calculating how user's queues will evolve over time. The arrival rate of packets at user
Assuming that the arrival rate of all users is a stationary process where
The aim of the last constraint is to guarantee that no user is idle during the interval
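The queue evolution described above can be sketched in code. This is an illustrative model only: the function and argument names are placeholders, not the paper's symbols, and it simply assumes that each user's backlog grows with arrivals, shrinks with served packets, and is clipped between zero and the maximum queue size.

```python
def queue_evolution(q0, arrivals, departures, q_max):
    """Simulate the backlog of one secondary user over discrete frames.

    q0: initial backlog (packets); arrivals/departures: per-frame packet
    counts; q_max: maximum queue size. All names are illustrative
    stand-ins for the paper's symbols.
    """
    q = q0
    trajectory = [q]
    for a, d in zip(arrivals, departures):
        # backlog grows with arrivals, shrinks with served packets,
        # and is clipped to the interval [0, q_max]
        q = min(max(q + a - d, 0), q_max)
        trajectory.append(q)
    return trajectory
```

For example, a user starting with 5 backlogged packets, 2 arrivals and 4 departures per frame, clears its queue after three frames.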
3. Centralized Medium Sharing and Access
The main purpose of this section is to develop a general approach to allocate transmission power in a centralized setting. However, such an approach cannot be used in a heterogeneous secondary users' environment. Therefore, the next section will develop a distributed mechanism to achieve the same goal. The findings of this section can be easily implemented in classical wireless technologies where a centralized control entity is assumed.
First, let us call the interval
The value inside the brackets can be controlled by only manipulating the transmission power levels. To highlight the relationship between
The update of transmission power is represented by
The next step is to calculate the change in data rate (
Let us write both of
By substituting both of (14) and (15) in (13) and performing numerous algebraic operations, we have the following equation:
Equation (16) gives us a direct link between the change in data rate and the update of transmission power. It can be shown that the relationship among transmission power updates of all users is governed by
Note that (17) can be written in matrix format for every kth channel as
By using the invertible matrix theorem [43] which states that matrix
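The role of the invertible matrix theorem here can be sketched numerically: if the per-channel coupling matrix is invertible, the linear system of (18) has a unique solution for the power updates. The matrix `A` and vector `b` below are placeholders for the quantities defined in the paper, not their exact entries.

```python
import numpy as np

def solve_power_updates(A, b):
    """Solve the per-channel linear system A @ dp = b for the vector of
    power updates dp, assuming A (the coupling matrix among users) is
    invertible. A and b are illustrative stand-ins for Eq. (18).
    """
    # A unique solution exists exactly when A is invertible
    # (nonzero determinant), per the invertible matrix theorem.
    if not np.isclose(np.linalg.det(A), 0.0):
        return np.linalg.solve(A, b)
    raise ValueError("coupling matrix is singular: no unique power update")
```

For ill-conditioned coupling matrices, a least-squares solve (`np.linalg.lstsq`) would be a more robust choice than an explicit determinant check.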
Based on this finding, we reformulate the optimization problem in (9) by using
3.1. Reformulation of Power Allocation Problem
At the beginning, we need to revisit optimization problem (9) constraints. The first constraint (
The vector
Similarly, the second constraint of optimization problem (
The third constraint needs to be changed to an equivalent constraint since it depends only on
The last step is to rewrite the global objective function. Note that each user would like to minimize
Trying to find the optimal
Having our problem in this form, we can find the best possible
3.2. Balancing Weight
In the problem of (25), due to the heterogeneity of both primary channels and secondary users, the obtained solution may lead to unfair power allocation among secondary users. Therefore,
Formula (26) describes the default approach, where all users are treated equally. Formula (27) gives users with larger queues higher priority, while formula (28) favors users with faster packet arrival rates.
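The intent of the three weighting schemes can be sketched as follows. The paper's exact formulas (26)-(28) are not reproduced here; this illustrative version only shows the three priority criteria, normalized so the weights sum to one.

```python
import numpy as np

def balancing_weights(queues, rates, scheme="equal"):
    """Illustrative normalized weights mirroring the intent of
    formulas (26)-(28): equal treatment, queue-size priority, or
    arrival-rate priority. The paper's exact formulas may differ.
    """
    n = len(queues)
    if scheme == "equal":          # (26): same weight for every user
        w = np.ones(n)
    elif scheme == "queue":        # (27): larger backlog, higher priority
        w = np.asarray(queues, dtype=float)
    elif scheme == "rate":         # (28): faster arrivals, higher priority
        w = np.asarray(rates, dtype=float)
    else:
        raise ValueError(scheme)
    return w / w.sum()
```

For instance, under the queue-size scheme a user holding three times the backlog of another receives three times the weight.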
4. Distributed Medium Sharing and Access
The previous section developed an approach to share and access the medium among multiple secondary users. However, such an approach requires a centralized coordinator to assign a transmission power level to each secondary user, so it can be implemented only among cooperative secondary users. For example, base stations in a mobile telecommunication system (e.g., LTE or WiMAX) which belong to the same provider can adopt the centralized approach since they can coordinate among themselves easily. Conversely, an environment with noncooperative secondary users requires a passive distributed approach due to the lack of coordination among secondary users.
Heterogeneous secondary users shall decide the appropriate transmission power based solely on locally acquired information regarding the surrounding wireless environment. Assuming that every user knows its queue size, expected packet arrival rate, and Channel State Information (CSI), a relationship between the required transmission power and time can be formulated based on the objective of clearing the queue. In other words, each user would like to clear its queue within a specific maximum time frame (
Note that using the noise level as a parameter to decide the transmission power couples all interfering secondary users, since most of the noise is interference noise generated by them. As a result, we have a dynamical system which evolves over time. This dynamical system is discrete due to the fact that all digital communication technologies transmit data in packets. A discrete dynamical system requires a global time frame to be defined, which can be tricky in a heterogeneous environment. To simplify the formulation of the transmission power assignment rule, we will assume that such a global time frame is defined as
Any transmission power formula should try to honor four limits which are maximum queue size for the ith secondary user (
4.1. Time Frame Length Assignment
The first step in designing the transmission power allocation rule is to link the length of time frame (
Formula (29) does not honor the maximum time frame limit, which means that
The added parameter
The term
4.2. Transmission Power Rules
As mentioned before, the primary goal for any secondary user is to clear its queue as soon as possible. Based on this objective, a general transmission power assignment rule on the kth channel can be formed by algebraically manipulating the following equation:
Equation (32) is based on (8). The first term distributes the load (current and new arriving packets) among channels based on their bandwidth. Other approaches to distribute the load can be used. However, this approach is adopted for simplicity. After some algebraic manipulations, general transmission power is calculated as follows:
It is clear that
Term
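A minimal sketch of a power rule in this spirit can be given under an assumed Shannon-capacity rate model, R = B·log2(1 + gP/N): pick the power that clears the per-channel load within one time frame, then honor the maximum power limit. All argument names are illustrative stand-ins; the paper's rule (33) includes additional terms not reproduced here.

```python
def required_power(load_bits, frame_sec, bandwidth_hz, noise_w, gain, p_max):
    """Transmission power that clears `load_bits` of queued data within
    one frame of `frame_sec` seconds on a channel of `bandwidth_hz`,
    assuming the rate model R = B * log2(1 + g * P / N). This is an
    assumed sketch, not the paper's exact rule (33).
    """
    target_rate = load_bits / frame_sec          # bits/s needed to clear
    # invert R = B * log2(1 + g * P / N) for P
    p = (2.0 ** (target_rate / bandwidth_hz) - 1.0) * noise_w / gain
    # honor the maximum transmission power limit
    return min(p, p_max)
```

Note how the required power grows exponentially with the per-hertz load, which is why the rule caps it at the channel's maximum power.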
4.3. Inference Engine
At this point, we have constructed a model for heterogeneous cognitive radio networks where the transmission power of different secondary users over different channels forms a tightly coupled dynamical system. The next step is to develop an inference engine to forecast the future noise level so that better transmission power values are assigned. This paper proposes the use of a deep neural network (DNN) [42] to construct such an engine. The capability of DNNs to extract features from any series of correlated input data is further strengthened when the inputs come from a tightly coupled dynamical system. Also, the nonlinearity of any dynamical system can be approximated by a neural network. These facts advocate for the use of a DNN as the base for the proposed engine.
4.3.1. Engine Input
The first step to develop the proposed noise forecasting mechanism is to define the appropriate input design. Any input data should be based on local information only, since secondary users are assumed to be heterogeneous with no active cooperation. Inspired by Takens' theorem [46], the input is defined as a vector of delayed noise values over time (i.e.,
Based on Takens' theorem, the number of delayed value samples should be more than twice the dimensionality of the space which contains the manifold. In our case, the dynamical system forms
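Constructing such delay-embedded input vectors can be sketched as follows. Each row stacks `dim` consecutive past noise samples, as Takens' theorem suggests; the embedding dimension `dim` and the sampling are illustrative choices, not the paper's exact design.

```python
import numpy as np

def delay_embedding(noise_series, dim):
    """Build DNN input vectors of delayed noise samples: each row is
    [n(t-dim+1), ..., n(t)]. Per Takens' theorem, `dim` should exceed
    twice the attractor dimensionality; the paper's exact choice is
    not reproduced here.
    """
    s = np.asarray(noise_series, dtype=float)
    rows = [s[t - dim + 1 : t + 1] for t in range(dim - 1, len(s))]
    return np.stack(rows)
```

Each row then serves as one training or inference input, with the next noise sample as the forecasting target.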
4.3.2. DNN Architecture
Deep neural networks can be divided into two main categories: the Deep Autoencoder (DAE) [47] and the Deep Belief Network (DBN) [48]. Both types use neurons whose activation function is a sigmoid. They differ in how the value of the activation function is interpreted: a DAE interprets it as the neuron output, while a DBN uses it as the probability of selecting one of two binary values as the neuron output. This paper adopts the first type as the main architecture for the inference engine due to its deterministic behavior.
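The deterministic behavior of the DAE-style network can be illustrated with a bare forward pass: each layer's sigmoid activation value is used directly as that layer's output, unlike a DBN's stochastic binary units. The layer shapes below are arbitrary placeholders, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dae_forward(x, layers):
    """Deterministic forward pass through a stack of sigmoid layers,
    DAE-style: the activation value IS the neuron output. `layers` is
    a list of (W, b) weight/bias pairs; shapes are illustrative.
    """
    h = np.asarray(x, dtype=float)
    for W, b in layers:
        h = sigmoid(W @ h + b)   # activation used directly as output
    return h
```

A DBN would instead sample a binary output from each activation value, making repeated passes over the same input nondeterministic.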
Neurons in a DNN are arranged in layers, each representing an extracted feature of the input data. The first layer is called the input layer, while the last layer is the output layer. Any layer between them is called a hidden layer. Classical neural networks have one hidden layer because of the extreme difficulty of training multiple hidden layers [49] using conventional learning techniques. This was the case until 2006, when Hinton et al. [48] proposed a technique to train deep neural networks in which several hidden layers are stacked on top of each other. This finding revolutionized the field of neural networks and its applications, especially after similarities between this training technique and how the mammalian brain operates were highlighted.
Stacking several hidden layers allows a DNN to extract several features from the input data. To illustrate what we mean by features, let us use an object recognition example, since it is easier to visualize. Imagine the input data is a stream of grayscale images. After feeding these images to the DNN, the first layer will be able to distinguish different types of edges, while the second layer will be able to distinguish different combinations of the edges recognized by the first layer. Continuing this behavior, the third layer will differentiate between different object subparts based on the second-layer output, and the fourth layer will use the third-layer output to recognize the object type. This hierarchical process can be repeated to extract higher-abstraction features depending on the application objective.
In our case, a similar approach is used to extract several features of the wireless environment surrounding secondary users. The noise level can then be forecasted locally by every secondary user based on these extracted features. In general, we do not really care about the exact nature of these features as long as they enhance the inference engine's forecasting capabilities. Similarly, in the previous object recognition example, features such as light intensity may not have a big impact on how well the DNN can recognize objects; nonetheless, such features may help emphasize more important ones, which improves the DNN's learning capabilities. The adopted learning technique has the ability to extract any features necessary to achieve the learning objectives [47].
4.3.3. Engine Integration
At the beginning, the inference engine will not be able to accurately forecast future noise level because it requires time to learn the surrounding wireless environment dynamics. Therefore, to calculate transmission power using (31) and (35), the forecasted noise level should be integrated in a way which takes forecasting error into consideration. Let
Instead of using
The weight term ϵ relates to the error in noise forecasting. Its value should be between zero and one where approaching zero means higher confidence in noise forecasting accuracy. The adopted formula for ϵ is
It is clear that as
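One natural way to integrate the forecast with the forecasting error, consistent with the description above, is a convex combination of the forecasted and last measured noise levels weighted by ϵ. This is an assumed form for illustration; the paper's exact formulas for ϵ and the integration are not reproduced here.

```python
def blended_noise(forecast, measured, eps):
    """Combine the DNN noise forecast with the last measured noise
    level. eps in [0, 1] reflects forecasting error: eps -> 0 means
    high confidence in the forecast, eps -> 1 falls back to the
    measurement. An assumed convex-combination sketch, not the
    paper's exact rule.
    """
    assert 0.0 <= eps <= 1.0
    return (1.0 - eps) * forecast + eps * measured
```

Early on, when the engine is untrained, ϵ stays near one and the rule relies on measurements; as forecasting accuracy improves, ϵ shrinks and the forecast dominates.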
To summarize, the integrated transmission power allocation rule for the ith secondary user on the kth channel is
Also, computing
Note that using this rule will result in a tightly coupled dynamical system as well. Consequently, the inference engine's performance is guaranteed to keep improving with more experience. Figures 1 and 2 give a general illustration of how the proposed solutions operate and integrate.
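The coupling at the heart of this dynamical system can be sketched in one step: the noise each user sees is ambient noise plus interference from every other user's transmission, weighted by cross gains. The gain matrix `G` and its symbols are illustrative placeholders for the quantities defined in the system model.

```python
import numpy as np

def interference_step(powers, gains, ambient):
    """One step of the coupled dynamic: user i's observed noise is the
    ambient noise plus interference from every other user j, weighted
    by the cross gain gains[i][j]. Own-signal terms are excluded.
    Illustrative placeholders, not the paper's exact model.
    """
    P = np.asarray(powers, dtype=float)
    G = np.asarray(gains, dtype=float)
    cross = G @ P - np.diag(G) * P   # subtract each user's own signal
    return ambient + cross
```

Feeding these noise levels back into the power rule, frame after frame, produces exactly the coupled trajectory the inference engine learns from.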

Inference engine integration.

Illustration of the proposed system model.
4.4. Global Time Frame
So far, it has been assumed that a global time frame is well defined for all secondary users. However, such an assumption is not practical in a heterogeneous environment. Therefore, secondary users need to find the global time frame boundaries based on local information. The first thing to note is that the global time frame boundaries have to align with the local time frame boundaries of all secondary users. Thus, from each secondary user's perspective, the end of the local time frame is the end of the global time frame as well. In addition, each secondary user will update its transmission power by the end of its local time frame; hence, the noise level experienced by other secondary users will change.
With the last three facts in mind, the global time frame can be approximated locally by secondary users. The simplest approach is to monitor noise level variation: whenever a sudden change in the noise level is noticed, the time of this change is marked as a boundary of the global time frame. Using this approach leads to rapid transmission power updates for all secondary users and, as a result, faster convergence of the inference engine's learning.
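The monitoring approach just described can be sketched as a simple change detector over the locally observed noise trace. The jump threshold is an assumption on our part; the paper does not specify how it should be chosen.

```python
def detect_frame_boundaries(noise, threshold):
    """Mark candidate global-frame boundaries wherever the locally
    observed noise level jumps by more than `threshold` between
    consecutive samples, per the monitoring approach described above.
    The threshold choice is an assumption, not taken from the paper.
    """
    return [t for t in range(1, len(noise))
            if abs(noise[t] - noise[t - 1]) > threshold]
```

In practice a relative threshold (a fraction of the running noise level) would likely be more robust than a fixed absolute one.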
5. Evaluation and Discussion
Several simulation experiments were conducted to evaluate the solutions proposed in this paper. Parameters in these simulations were chosen randomly, the reasoning being to test the evaluated solutions' performance in the most general way; a uniform distribution was used to choose the parameter values. Heterogeneous secondary users and heterogeneous wireless channels were assumed during the simulation experiments. Secondary users have different arrival rates, different queue sizes, different packet sizes, and different quality of service levels, represented by the minimum allowed SINR. On the other hand, wireless channels have different bandwidths and different fading behaviors.
The simulation time was set to 1000 seconds. The time frame value was chosen randomly between two extremes: a minimum of 1 ms and a maximum of 10 ms (0.01 s). Each simulation experiment was repeated 100 times, and the average of the simulation results was taken as the final result. Two parameters were chosen as the performance variables (x-axis): the number of secondary users and the number of wireless channels. On the other hand, three performance metrics (y-axis) were selected to study the proposed solutions: queue size, achieved data rate, and power consumption.
Secondary user pairs (transmitter and receiver) were distributed randomly where the average distance between any two pairs is 5 m. The average distance between the transmitter and receiver in any secondary user pair is 3 m. Parameters such as minimum SINR threshold, arrival rate, the maximum queue size, and channel bandwidth were assigned in a way that amplifies their effect on the general performance. A random value for each one of these parameters was chosen between two extreme levels at the beginning of each simulation experiment. Then, this chosen value was multiplied by some coefficient which depends on secondary user identity. For example, the third secondary user may use 7 as its coefficient for maximum queue size which means that the maximum queue size for the third secondary user is seven times larger than the chosen value for this parameter at the beginning of simulation experiment.
These coefficients were selected in an ordered fashion which reflects secondary users and wireless channel indexes. For instance, the first secondary user coefficient is 1 and the second secondary user coefficient is 2. By using this coefficient approach to assign parameter values, the effect of the parameter will be apparent in the simulation results. For example, coefficients for the maximum queue size were set in ascending order. Therefore, as the number of coexisting secondary users increases, buffering capability of overall system increases as well. Now, we can see if introducing more heterogeneous secondary users in terms of queue size will improve the performance. Similar to maximum queue size, minimum SINR threshold and channel bandwidth were set in ascending order, where packets arrival rates were set in descending order. The initial maximum queue size is randomly chosen in interval
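The coefficient-based parameter assignment described above can be sketched as follows. The value ranges and coefficient scheme here are illustrative only; the paper's actual base intervals and per-parameter orderings are given in its simulation tables.

```python
import random

def assign_parameters(base_low, base_high, n_users, ascending=True):
    """Mimic the described setup: draw one base value uniformly at the
    start of an experiment, then scale it by per-user integer
    coefficients 1, 2, ..., n in ascending or descending order so the
    parameter's effect is visible in the results. The ranges are
    placeholders, not the paper's values.
    """
    base = random.uniform(base_low, base_high)
    coeffs = list(range(1, n_users + 1))
    if not ascending:
        coeffs.reverse()
    return [base * c for c in coeffs]
```

For instance, maximum queue size, minimum SINR threshold, and channel bandwidth would use `ascending=True`, while packet arrival rates would use `ascending=False`.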
The packet size for all secondary users is 64 bytes. This parameter is identical for all secondary users to increase the influence of the arrival rate and queue size on overall system performance. The maximum transmission power for each wireless channel was set to 0.1 watts. Also, the probability of detection (
Simulation parameters.
Neural network parameters.
The simulation experiments were programmed in the MATLAB environment. Several utilities were taken from the LTE system-level simulator [50], developed by the Institute of Telecommunications at the Vienna University of Technology. It is freely available for noncommercial and academic usage, is very flexible, and provides more organized and less complex simulation methods.
5.1. Queue Size Evaluation
The first performance metric to investigate is the average queue size of all secondary users. Keep in mind that secondary users have heterogeneous buffering capabilities. Figure 3(a) shows how the average queue size for DMSA-D, DMSA-N, and MAX-POWER increases as the number of coexisting secondary users increases. This behavior is due to the fact that introducing more secondary users increases the competition over the fixed set of available channels, which leads to a lower data rate for all users. Furthermore, adding more users with higher maximum queue sizes, as explained in the previous section, allows the average queue size to increase. For MAX-POWER, the average queue size increases almost linearly, while both DMSA versions perform much better. It is clear that adopting the deep learning approach has improved the performance noticeably.

Queue size performance. (a) Average queue size where the number of channels is 10 and the number of secondary users is increased. (b) Average queue size where the number of secondary users is 10 and the number of channels is increased.
On the other hand, the CMSA solutions achieved the best performance. The average queue size when these solutions were used is very close to zero, especially when the number of available channels is larger than the number of secondary users, as inferred from Figure 3(b). In the latter figure, different versions of CMSA perform differently when the number of channels is 2 and the number of secondary users is 10. Here, the Interior-Point method was not able to find a feasible transmission power assignment that clears all secondary users' queues. CMSA-Q had the best performance in this harsh environment since its optimization depends on the current queue size of secondary users (27). In Figure 3(b), the average queue size decreases as the number of available channels increases; the reduction rate of MAX-POWER is very small compared to the other solutions. Note that, in Figures 3(a) and 3(b), DMSA-N performance approaches DMSA-D performance as overall system complexity increases: the improvement DMSA-N achieves over DMSA-D shrinks as the number of secondary users and channels increases. One way to keep this improvement from shrinking is to use a deeper architecture; however, a deeper architecture requires more computation power and training time. In addition, DMSA-N is able to exploit increased system complexity to find better transmission power assignments, but once complexity passes some threshold, DMSA-N performance starts degrading similarly to the other solutions.
5.2. Data Rate Evaluation
The data rate is measured for secondary users only if they are not in outage; otherwise, it is taken as zero. From the wireless channel perspective, the actual experienced data rate is used in the figures. Fairness evaluations of the proposed solutions were conducted using Jain's index [51]; however, due to the limited space available for this paper, a brief discussion of the fairness evaluations is provided without fairness figures. The average data rate achieved by CMSA-L is the highest, as seen in Figures 4(a) and 4(b), but its fairness performance is the lowest. It can be concluded that optimizing based on the arrival rate of secondary users achieves the highest average data rate. CMSA-Q comes second after CMSA-L and has better fairness performance than CMSA-L due to its dependency on the current queue size of secondary users.

Data rate performance. (a) Average data rate where the number of channels is 10 and the number of secondary users is increased. (b) Average data rate where the number of secondary users is 10 and the number of channels is increased.
Both DMSA solutions achieve very high fairness. DMSA-N performs very close to CMSA-E in terms of average data rate when the system is simple. It is reasonable to hypothesize that increasing the inference engine's capability through a deeper architecture would narrow the gap between DMSA-N and CMSA-E in average data rate. MAX-POWER performs worst: secondary users were in outage during most of the experiments of Figure 4(a) when MAX-POWER was used.
5.3. Power Consumption Evaluation
The average power consumption of MAX-POWER is fixed across all experiments and depends solely on the number of wireless channels: as this number increases, the average power consumption increases linearly with a slope of one. Similarly, the fairness of MAX-POWER is fixed at the highest possible value in all experiments since all users transmit at the same power on all channels. The DMSA solutions also show gradually increasing power consumption; however, the growth rate shrinks as the numbers of secondary users and wireless channels increase. In Figure 5(a), this growth rate decreases because of the heightened competition caused by the larger number of secondary users. Higher competition forces secondary users to reduce their transmission power so that the interference suffered by the other secondary users is mitigated. The behavior observed in Figure 5(a) is a testament to the proposed dynamical system's ability to infer the interference and load states of the surrounding secondary users; this inference ability allows secondary users to assign transmission power more judiciously. Figure 5(b) shows that as the number of wireless channels increases, the need for aggressive competition through higher transmission power decreases. This behavior can be greatly improved by using a deep learning approach.
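The slope-one claim above follows directly from the MAX-POWER policy: every user transmits at full power on every channel, so per-user consumption scales only with the channel count. A minimal sketch, assuming the per-channel maximum power is normalized to one unit (which is what makes the slope exactly one):

```python
def max_power_avg_consumption(num_channels, p_max=1.0):
    """Average per-user consumption under the MAX-POWER policy.

    MAX-POWER transmits at p_max on every available channel regardless of
    queue or interference state, so consumption is p_max * num_channels,
    independent of the number of secondary users.
    """
    return p_max * num_channels

# With normalized p_max = 1.0, consumption grows with slope one in the
# number of channels:
print(max_power_avg_consumption(2))   # 2.0
print(max_power_avg_consumption(10))  # 10.0
```

This also explains why MAX-POWER fairness is pinned at its maximum: every user's consumption vector is identical, so no allocation disparity can arise.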

Power consumption performance. (a) Average power consumption where the number of channels is 10 and the number of secondary users is increased. (b) Average power consumption where the number of secondary users is 10 and the number of channels is increased. (c) Average logarithmic power consumption where the number of channels is 10 and the number of secondary users is increased. (d) Average logarithmic power consumption where the number of secondary users is 10 and the number of channels is increased.
Fairness of the DMSA solutions is very high compared to the CMSA solutions. As the numbers of secondary users and wireless channels increase, DMSA fairness decreases. Again, this behavior is desirable since it affirms that the DMSA solutions intelligently treated every secondary user differently based on its distinct situation. The average power consumption of the CMSA solutions is very small compared to the other solutions. To show the magnitude and general behavior of the CMSA solutions in terms of average power consumption, a logarithmic scale is used, as depicted in Figures 5(c) and 5(d). The Interior-Point method was able to find very small transmission power levels that still achieve high SINR values for secondary users. It appears that the thermal noise is what kept the transmission power at these levels rather than allowing further reduction. Keep in mind that SINR is a ratio: it does not depend on the actual values of the numerator and denominator as long as the ratio between them is preserved. Therefore, mathematically speaking, even lower transmission power values could be found if thermal noise were ignored, which is not realistic.
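The ratio argument above can be made concrete. With zero thermal noise, scaling every user's transmission power by the same factor leaves SINR unchanged, so the optimizer could shrink power without bound; a nonzero noise term in the denominator breaks that scale invariance and sets a floor. A minimal numerical sketch (the function and the gain values are illustrative, not from the paper's model):

```python
def sinr(p_signal, gain_s, interference_powers, gains_i, noise):
    """SINR = received signal power / (received interference + thermal noise)."""
    received = p_signal * gain_s
    interference = sum(p * g for p, g in zip(interference_powers, gains_i))
    return received / (interference + noise)

# Without noise, scaling all powers down by 100x preserves SINR exactly:
a = sinr(1.0, 1.0, [0.5], [1.0], noise=0.0)
b = sinr(0.01, 1.0, [0.005], [1.0], noise=0.0)
# a == b == 2.0

# With thermal noise, the same scaling degrades SINR, so the optimizer
# cannot reduce power indefinitely:
c = sinr(1.0, 1.0, [0.5], [1.0], noise=0.001)
d = sinr(0.01, 1.0, [0.005], [1.0], noise=0.001)
# c > d
```

This is consistent with the Interior-Point results in Figures 5(c) and 5(d): the solver drives power down until the noise term dominates the denominator, and no further.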
The average transmission power levels of both CMSA-Q and CMSA-L decrease as the numbers of secondary users and wireless channels increase. CMSA-E behaves similarly when the number of wireless channels increases; however, the heightened competition in Figure 5(c) forces CMSA-E to raise the transmission power of secondary users. CMSA-Q and CMSA-L are more sensitive than CMSA-E to the load states of secondary users; in contrast, CMSA-E's equal treatment of secondary users leads to increasing transmission power as the number of secondary users grows. The heterogeneity of the added wireless channels causes CMSA-E fairness to decrease. As noted earlier, observing this impact of heterogeneity confirms the dynamic nature of the proposed solutions.
6. Conclusion
This paper proposed two solutions, with five versions in total, to allocate transmission power for cognitive radio systems in a heterogeneous environment. Three of these versions are centralized mechanisms in which the decision is generated by a central entity that has all the necessary information. The remaining two versions are distributed mechanisms in which each cognitive system observes its surrounding wireless environment and uses only its own sensory data to generate transmission power decisions. The designs of all proposed solutions were inspired by the Spectrum-Time Duality concept, which states that cognitive systems may intentionally reduce their spectrum utilization in favor of other cognitive systems in return for longer usage time after those systems clear their queues. The optimization problem in the centralized solutions tries to clear the queues of coexisting cognitive radio systems in a way that reduces competition in future time frames. The distributed solutions assign transmission power by taking into account the queue size and interference level of all cognitive radio systems.
Dynamical system theory was used to design the distributed solutions. A transmission power allocation rule was proposed that guarantees the overall system evolves in a predictable fashion. In addition, a very powerful machine learning technique, the deep neural network, was applied to the proposed dynamical system and led to much better performance. To the best of our knowledge, this paper is the first attempt to utilize deep learning in cognitive radio networks. The results suggest that using deeper and more sophisticated neural networks with only local information may produce performance comparable to the centralized solutions, where all global information is available. One of the most important recommendations of this work for future research is to apply advanced machine learning techniques (e.g., deep learning) in cognitive radio networks. The increased complexity of these networks requires a level of sophistication and self-adaptability in any proposed solution that such techniques can provide to some extent.
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This research is supported by TWAS-COMSTECH research fund: Intelligent Spectrum Sensing and Sharing in Cognitive Radio Networks (Project Code: 12-202 RG/ITC/AS_C).
