Abstract
Information centric sensor networks are a promising technology for multimedia sensor networking, because in-network caches increase transmission rates and reduce latency. Since each sensor node has limited energy, energy efficiency is a crucial concern in information centric sensor networks. In this paper, we address the energy efficient management problem in such networks, considering both transmission energy and caching energy. We derive an exact analytical expression for the energy consumption in such networks and propose a cooperative energy efficient management scheme for multimedia information dissemination. We carry out extensive performance evaluations, and our results show that the energy consumption can be minimized by cooperatively tuning key parameters such as the cache size and the caching probability.
1. Introduction
Sensor networks have drawn significant attention in recent years due to their low cost, small size, and intelligence [1]. Sensor networks provide a wide variety of functionalities, such as monitoring humidity, temperature, the movement of an object, and noise [2]. This leads to different kinds of sensor networks, such as terrestrial sensor networks, underground sensor networks, and multimedia sensor networks [1]. In particular, multimedia sensor networks are deployed to track or monitor activity in the form of images, video, and audio. In such networks, each sensor node is equipped with a microcamera or microphone for collecting images and sound, and the nodes are normally deployed in a relatively fixed way in the environment [3].
Since multimedia is the major content disseminated in multimedia sensor networks, a major challenge is the demand for large bandwidth and quality of service (QoS) provisioning. To improve content delivery efficiency and QoS, information centric networking (ICN) has recently been proposed, and several projects are investigating various options for ICN architectures, such as the data oriented network architecture (DONA) [4], TRIAD [5], named data networking [6], and PURSUIT [7]. Generally, in ICN, content objects are decoupled from end host locations and contents are accessed by their names [8]. Through in-network caches, contents are delivered to end users over much shorter routing paths and with less delay [9].
These benefits of ICN have driven research into combining ICN with sensor network design [10–12]. In particular, information centric wireless sensor and actor networks have been proposed with an emphasis on coordination and interoperability [10]. Furthermore, the protocol design and implementation of a sensor network with content centric networking were introduced and proved feasible in [12]. By applying the data caching capability of ICN to sensor networks, data can be obtained more efficiently.
On the other hand, energy consumption is a major concern in information centric multimedia sensor networks, since transmitting multimedia data requires substantial energy. Many research works have focused on minimizing the energy consumption of networks [13–18]. In [13], the authors proposed adjusting the service capacity to the actual traffic demand in order to reduce power consumption. In [14], the problem of minimizing energy consumption for enterprise and data center devices was discussed. The authors of [15] investigated the potential for energy saving in wireline networks, and an energy saving solution for the access network was proposed in [16]. An energy aware routing algorithm was investigated in [17], and a load balancing scheme for achieving energy efficiency was presented in [18].
Several works have recently been proposed to address the energy efficiency problem in ICN [19–25]. In particular, in [19, 20], the authors investigated the energy consumption of various content dissemination architectures using simple trace-based results and concluded that the information centric model can consume significantly less energy than the host centric model. However, that work does not consider caching energy and assumes that popular content can be retrieved within one hop. In [21, 22], the authors analyzed the power consumption of content centric networking (CCN) with caching energy taken into account and proposed an optimal cache allocation scheme. The optimization of cache placement to minimize energy consumption was investigated in [23]. Similarly, an energy efficient cache optimization solution was considered in a general ICN framework [24]. However, in the above research the transmission energy is approximated without considering the details of the caching schemes. More recently, the authors of [25] proposed a heuristic energy efficient scheme and performed numerical simulations to evaluate it; that scheme, however, lacks theoretical analysis.
In this paper, we propose an energy efficient management scheme for information centric sensor networks. Our contributions lie in three aspects. First, we develop a closed-form analytical framework for analyzing the energy efficient management scheme, considering the energy consumed by both traffic transmission and traffic caching. Second, based on the theoretical results, we propose a cooperative energy efficient design for information centric sensor networks. Third, we perform extensive evaluations to analyze the impact of node cache size, caching probability, network size, and transmission rate on energy consumption.
The rest of the paper is organized as follows. In Section 2, the system model for the information centric sensor network is presented. We present the analytical evaluation of energy consumption and the cooperative energy efficient design in Section 3. Extensive evaluation results are shown in Section 4, and we conclude the paper in Section 5.
2. System Model
2.1. System Model of Content Centric Network with Identifier Mapping
The system model for a content centric network with an identifier mapping system is shown in Figure 1. Two types of identifier are used: the data object name identifier and the router related name identifier. The identifier mapping system is responsible for the mappings between these two identifiers. We consider a content centric sensor network with an identifier mapping system because such a system can reduce the size of the routing table by using router related names for packet forwarding [26], which suits sensor networks given the limited storage space in each sensor node. We consider two typical types of topology in this paper: linear topology and tree topology. Two kinds of packet exist in the above system, namely, the interest packet and the data packet. An interest packet mainly contains the name of the data object and a description of the name, such as the nonce, generated for each interest to avoid interest forwarding loops, and the type of service of the requested content. A data packet contains the name of the data object and the carried data.

System model of content centric network with identifier mapping.
The naming in the above system is hierarchical and unique, quite similar to a uniform resource identifier (URI). A name consists of a list of character strings of different lengths that identifies a data object. For example, as shown in Figure 1, the original interest has the name “/Status/Room1.” In addition, each node has a unique name marking its identity, shown as “/RID1” in Figure 1. The mapping relationships between the names of data objects and the identities of routers are maintained in the identifier mapping system. When an original interest arrives at the network, it is encapsulated with the RID related name and then transmitted within the network.
Each router in the system maintains three data structures: the forwarding information base (FIB), the content store (CS), and the pending interest table (PIT). The FIB maintains the outgoing interfaces toward each RID related name and is used for forwarding interests toward the sources of matching data. Each FIB entry maintains a list of interfaces, each of which can be associated with status information (RED/YELLOW/GREEN), a routing cost, and the round trip time (RTT) measured by the forwarding plane. Based on this stateful information, different forwarding strategies can be used [27]. In this paper, we consider the best route approach, in which the highest-ranking GREEN interface is used for transmitting an interest; if no GREEN interface exists, the interest is forwarded to the highest-ranking YELLOW interface. Interfaces in the FIB are ranked by the routing cost of reaching each RID.
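The best route interface selection described above can be sketched in Python as follows; the `Interface` structure, status strings, and function name are our own illustrative assumptions, not part of the forwarding specification in [27]:

```python
# Illustrative sketch of best-route selection: prefer the highest-ranking
# GREEN interface; fall back to YELLOW if no GREEN interface exists.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Interface:
    name: str
    status: str        # "GREEN", "YELLOW", or "RED"
    routing_cost: int  # lower cost ranks higher

def best_route(fib_entry: List[Interface]) -> Optional[Interface]:
    """Select the outgoing interface according to the best route strategy."""
    for wanted in ("GREEN", "YELLOW"):
        candidates = [i for i in fib_entry if i.status == wanted]
        if candidates:
            # Within a status class, interfaces are ranked by routing cost.
            return min(candidates, key=lambda i: i.routing_cost)
    return None  # only RED interfaces: no viable route toward the RID

entry = [Interface("if0", "YELLOW", 1),
         Interface("if1", "GREEN", 5),
         Interface("if2", "RED", 0)]
print(best_route(entry).name)  # "if1": a GREEN interface wins even at higher cost
```

Note that the GREEN/YELLOW preference dominates the cost ranking: a more expensive GREEN interface is still chosen over a cheap YELLOW one.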
The CS is used to temporarily store the data objects collected by sensor nodes and those received from other nodes; in this paper, we mainly consider the data objects received from other nodes. Cached content is updated according to a content replacement strategy. Typical caching algorithms include least recently used (LRU), least frequently used (LFU), and random replacement (RR). With LRU, the least recently used data objects in the cache are replaced; with LFU, the least frequently used data objects are replaced; and RR replaces data objects in the cache at random. We adopt the LRU approach in this paper.
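The LRU replacement behavior of the CS can be illustrated with the following minimal Python sketch; the class and method names are our own and do not come from any CCN implementation:

```python
# Minimal LRU content store: a hit marks the object most recently used;
# inserting beyond capacity evicts the least recently used object.
from collections import OrderedDict

class LRUContentStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # maps object name -> data object

    def get(self, name):
        if name not in self.store:
            return None                   # cache miss
        self.store.move_to_end(name)      # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cs = LRUContentStore(2)
cs.put("/Status/Room1", "23C")
cs.put("/Status/Room2", "21C")
cs.get("/Status/Room1")          # Room1 becomes most recently used
cs.put("/Status/Room3", "25C")   # capacity exceeded: Room2 is evicted
print(sorted(cs.store))          # ['/Status/Room1', '/Status/Room3']
```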
The PIT keeps a record of the incoming interfaces of interests in order to send data packets back to the requesting node. A PIT is a table recording the interest name, the incoming interfaces, and the nonces associated with a particular interest name. In addition, the PIT records the outgoing interfaces, together with the time at which the interest was forwarded via each interface, which is used for calculating the RTT.
2.2. Processing of Interest and Data Packet
We further illustrate the forwarding of interest packet in Figure 2. When an interest is received by the node, the following steps are undertaken.
First, the node checks whether a matching data object exists in the cache. If the interest is matched, the data object is returned and the processing of the interest ends. If not, the node checks whether the received interest has already been encapsulated with an RID. If it has not, the node queries the local cache or the identifier mapping system for the corresponding RID; if no mapping information exists, the processing of the interest ends here, and otherwise the original interest is encapsulated with the RID. Next, the node checks whether an existing PIT entry matches the interest. If so, the node adds the incoming interface of the interest to the entry and the processing terminates. If there is no existing PIT entry, the node further checks whether there is a matching FIB entry for the RID. If an FIB entry exists, the interest is sent out through the recorded outgoing interfaces using the RID, and a new PIT entry is created. If no FIB entry exists, the interest is dropped.

Processing of incoming interest packet.
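The interest-processing steps above can be summarized in the following Python sketch; the `Node` and `Interest` structures and the `lookup_rid` helper are simplified placeholders of our own, not part of the protocol specification:

```python
# Illustrative sketch of interest processing: CS lookup, RID encapsulation,
# PIT aggregation, and FIB-based forwarding, in that order.
from dataclasses import dataclass, field

@dataclass
class Interest:
    name: str
    in_face: str
    rid: str = None       # filled in once the interest is encapsulated

@dataclass
class Node:
    cs: dict = field(default_factory=dict)       # name -> cached data
    pit: dict = field(default_factory=dict)      # name -> set of incoming faces
    fib: dict = field(default_factory=dict)      # RID -> outgoing faces
    mapping: dict = field(default_factory=dict)  # name -> RID (mapping system)

    def lookup_rid(self, name):
        return self.mapping.get(name)

def process_interest(node: Node, interest: Interest):
    # 1. Content store lookup: return the data object on a hit.
    data = node.cs.get(interest.name)
    if data is not None:
        return ("DATA", data)
    # 2. Encapsulate with the RID if the interest is not yet encapsulated.
    if interest.rid is None:
        rid = node.lookup_rid(interest.name)
        if rid is None:
            return ("DROP", None)        # no mapping information exists
        interest.rid = rid
    # 3. Existing PIT entry: aggregate the incoming interface and stop.
    if interest.name in node.pit:
        node.pit[interest.name].add(interest.in_face)
        return ("AGGREGATE", None)
    # 4. FIB lookup on the RID: forward and create a PIT entry, else drop.
    faces = node.fib.get(interest.rid)
    if faces:
        node.pit[interest.name] = {interest.in_face}
        return ("FORWARD", faces)
    return ("DROP", None)

node = Node(fib={"/RID1": ["if1"]}, mapping={"/Status/Room1": "/RID1"})
print(process_interest(node, Interest("/Status/Room1", "if0")))  # ('FORWARD', ['if1'])
```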
The processing of data packet is further illustrated in Figure 3. When a data packet is received, the following steps are needed to process the data packet.
First of all, the node checks whether a matching PIT entry exists. If there is no matching PIT entry, the data packet is dropped. If there is a matching PIT entry, the data packet is stored in the CS under the original name of the data object and then forwarded through the incoming interfaces recorded in the PIT entry.

Processing of incoming data packet.
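The data-packet handling above is simple enough to sketch in a few lines of Python; `node` is a simplified placeholder of our own holding a CS dictionary and a PIT mapping names to interface sets:

```python
# Illustrative sketch: drop unsolicited data packets; otherwise cache the
# object and forward it back through the PIT-recorded incoming interfaces.
from types import SimpleNamespace

def process_data(node, name, data):
    """Process an incoming data packet at a node."""
    faces = node.pit.pop(name, None)   # consume the matching PIT entry, if any
    if faces is None:
        return ("DROP", None)          # no matching PIT entry: unsolicited data
    node.cs[name] = data               # store under the original object name
    return ("FORWARD", sorted(faces))  # send via recorded incoming interfaces

node = SimpleNamespace(cs={}, pit={"/Status/Room1": {"if0", "if2"}})
print(process_data(node, "/Status/Room1", "23C"))  # ('FORWARD', ['if0', 'if2'])
print(process_data(node, "/Status/Room2", "21C"))  # ('DROP', None)
```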
3. Cooperative Energy Efficient Design
In this section, we perform analytical evaluation of energy consumption considering both transmission energy and caching energy and propose the cooperative energy efficient design.
3.1. Energy Consumption Model
The energy consumption mainly consists of two parts: transmission energy and caching energy.
Similar to [25], we assume that the total size of the requested contents is much larger than the caching capacity; then, within a time period t, the caching energy is mainly determined by the cache size of a router and is expressed as
To summarize, the total energy consumption is derived as
3.2. Caching Strategy ϵ-LRU
To further analyze (6), we discuss the caching strategy in this section. We first consider the basic LRU scheme, in which a node caches every data object and replaces the least recently used data objects. When energy consumption is taken into account, caching data objects at every node they pass through may increase the caching energy. We therefore propose a revised version of the LRU scheme, named the ϵ-LRU scheme, in which each node caches data objects with probability ϵ. When data objects are received by a node, they are processed as shown in Figure 4.
The node generates a random variable R, uniformly distributed between 0 and 1, and compares R with ϵ. If R < ϵ, the data object is cached in the CS. If R ≥ ϵ, the data object is forwarded without being cached.

Data objects processing for ϵ-LRU scheme.
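Assuming a uniform draw on [0, 1), the ϵ-LRU caching decision can be sketched as follows; the function name and cache interface are illustrative choices of our own:

```python
# Illustrative sketch of epsilon-LRU: cache a received object only with
# probability eps; eviction itself follows plain LRU on an OrderedDict.
from collections import OrderedDict
import random

def epsilon_lru_receive(cache, name, data, eps, capacity, rng=random):
    """Return True if the object was cached, False if only forwarded."""
    r = rng.random()                   # R drawn uniformly from [0, 1)
    if r >= eps:
        return False                   # forward only; do not cache
    if name in cache:
        cache.move_to_end(name)        # refresh recency on a re-cache
    cache[name] = data
    if len(cache) > capacity:
        cache.popitem(last=False)      # evict the least recently used object
    return True                        # cached (and forwarded)

cache = OrderedDict()
print(epsilon_lru_receive(cache, "/Status/Room1", "23C", eps=1.0, capacity=2))  # True
print(epsilon_lru_receive(cache, "/Status/Room2", "21C", eps=0.0, capacity=2))  # False
```

With ϵ = 1 the scheme degenerates to plain LRU (every object is cached); with ϵ = 0 nothing is ever cached.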
3.3. Theoretic Solution
Based on the caching strategy described above, we further approximate (6) as
The challenge for analyzing (7) is to find the solution of
We use
Particularly, we have
Similarly, we can represent
Furthermore, based on the results in [28], we have the miss probability at ν-level
Thus far, we have solved all equations for finding
In addition, we assume that, under steady state, the average number of received data packets equals the number of interest requests. The average transmission rate ℛ can accordingly be approximated as
Therefore, we have (7) expressed as
3.4. Cooperative Energy Efficient Design
The objective of cooperative energy efficient design is to minimize the energy consumption or, mathematically,
We can observe that the energy consumption is determined by multiple variables. We list three major factors in Figure 5: the cache size 𝒞, the caching probability ϵ, and the request rate λ. By tuning these factors, the energy consumption can be changed dynamically. For example, increasing the cache size leads to more caching energy but less transmission energy. Similarly, increasing the caching probability may reduce transmission energy but increase caching energy. We therefore need to design the energy efficient scheme cooperatively, taking the various factors into consideration.

Cooperative energy efficient design.
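As a purely illustrative example (this is a toy model with assumed functional forms, not the paper's closed-form expressions), suppose the transmission energy decays with cache size 𝒞 while the caching energy grows linearly in 𝒞; a simple offline sweep then locates the cache size minimizing the total, in the spirit of the precalculation step:

```python
# Toy energy model (alpha, beta are illustrative constants, not parameters
# from the paper): transmission energy falls with cache size C while caching
# energy grows linearly, so the total exhibits an interior minimum.
def total_energy(C, alpha=100.0, beta=2.0):
    transmission = alpha / (1.0 + C)   # shrinks as more content is cached nearby
    caching = beta * C                 # grows linearly with cache size
    return transmission + caching

# Offline sweep over candidate cache sizes picks the minimizing C.
best_C = min(range(51), key=total_energy)
print(best_C, round(total_energy(best_C), 2))  # 6 26.29
```

The same sweep structure applies whatever the actual energy expressions are, which is why the cache size can be precalculated offline once the other parameters are fixed.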
Since we have derived the theoretical expression for the total energy consumption, we propose the following methodology for cooperative energy efficient design. We first determine the cache size of each node, assuming that all other parameters are given; that is, the cache size is precalculated heuristically and offline. Once the cache size is fixed, we can adjust the other parameters, and we consider two situations. In the first, a network node can adjust its parameters dynamically, so the caching probability can easily be changed. In the second, the node may not be able to change its parameters, but the interest requester can dynamically adjust parameters such as the request rate. Both situations can be handled online, in real time.
The offline method is applied before the system parameters are configured, which avoids the time and complexity of exchanging information within the network. The online, real-time design is computed dynamically using real-time information collected from the network, achieving accurate and timely control for energy efficient design. By combining the two methods, we achieve energy efficient design over both the long term and the short term.
4. Performance Analysis
In this section, we perform extensive analysis on the cooperative energy efficient design and investigate the impact of cache size, caching probability, interest request rate, content popularity, and network size on energy consumption. For all simulations, we assume that the energy consumption per node per bit is
4.1. Impact of Cache Size on Energy Consumption
Figure 6 shows the energy consumption as a function of cache size for different popularity indexes. As observed from the results, for popular data objects the energy consumption first decreases as the cache size increases, reaches a minimum value, and then increases again. The reason is that, as the cache size increases, the transmission energy falls due to the shorter transmission distance, while the caching energy rises at the same time; this yields the minimum energy consumption shown in Figure 6(a). However, if the data objects are unpopular, increasing the cache size produces only a very slight decrease in transmission energy. Since the caching energy increases linearly, the total energy consumption grows nearly linearly for unpopular data objects, as presented in Figure 6(b).

Energy consumption as a function of cache size.
4.2. Impact of Data Object Caching Probability on Energy Consumption
The energy consumption with respect to caching probability ϵ is illustrated in Figure 7. The index of popularity is assumed to be

Energy consumption as a function of ϵ.
4.3. Impact of Interest Request Rate on Energy Consumption
We next plot Figure 8 to show the energy consumption as a function of the interest request rate. Interestingly, we observe that the energy consumption decreases as the interest request rate increases. This is because the transmission energy per bit remains the same as the interest request rate changes, while, as observed from (14), the caching energy consumption is an inverse function of the interest request rate and therefore decreases. The results in Figure 8 indicate that we can adjust the interest request rate for the cooperative energy efficient design.

Energy consumption as a function of interest request rate.
4.4. Impact of Content Popularity on Energy Consumption
We take a further step to observe the energy consumption with respect to content popularity. We assume the average data object size σ as

Energy consumption as a function of content popularity index.
4.5. Impact of Network Size on Energy Consumption
Finally, the energy consumption as a function of network topology level is shown in Figure 10. We assume that the popularity index is

Energy consumption as a function of network topology level.
5. Conclusion
In this paper, we have developed a comprehensive theoretic framework for analyzing the energy consumption in multimedia sensor networks. The energy consumption model has been derived theoretically in closed form. Based on the theoretic model, we have proposed a cooperative energy efficient management design that considers the node cache size, the caching probability, and the interest request rate. We have presented extensive numerical results on energy consumption under the impact of various factors. The results show that the cache size, caching probability, interest request rate, content popularity, and network size should be tuned cooperatively and dynamically, since each has a different impact on energy consumption. In the future, we aim to extend the cooperative energy efficient design for information centric sensor networks to different types of sensor network topologies.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is partially supported by National Natural Science Foundation of China (61102049, 61232017, and 61271202), the National Basic Research Program of China “973” Program (2013CB329101), the National Science and Technology Major Projects of the Ministry of Science and Technology of China (W13GY00040, 2012ZX03005003), Beijing Natural Science Foundation (4132053, 4122060), the Scientific Research Foundation of the Returned Overseas Chinese Scholars of State Education Ministry (W13C300010), and Specialized Research Fund for the Doctoral Program of Higher Education (20110009120004). The authors also gratefully acknowledge the helpful comments from the editors and reviewers, which have improved the paper quality.
