Abstract
Establishing trust and reputation to evaluate message reliability is key to vehicular ad hoc networks (VANETs). Most previous reputation management systems focus on their effectiveness in handling liars who send false service messages. However, these systems have two drawbacks. One is that they are vulnerable to tactical attacks such as self-promoting attacks and bad-mouthing attacks. The other is that they may violate location privacy because they assume every vehicle communicates with a unique ID. Our research investigates robustness against these tactical attacks, as well as privacy preservation, by integrating trust management with the pseudonym technique. To resist tactical attacks in VANETs, we present a reputation model that builds both service reputation and feedback reputation. Moreover, we apply information entropy and the majority rule in the reputation aggregation algorithms to counter false feedback. To defend against the reputation link attack during pseudonym changes, we propose a hidden-zone strategy and a k-anonymity strategy. Simulation results show that our scheme is robust to these tactical attacks and preserves privacy against the reputation link attack during pseudonym changes.
1. Introduction
A vehicular ad hoc network (VANET) typically consists of roadside infrastructure and vehicles that are connected in a self-organized way. It supports safety, convenience, and commercial applications, such as collision warning, congested road notification, and infotainment. These applications are highly dependent on reliable messages generated by vehicles. Because false messages may lead to failures, injuries, and even deaths, and because forwarding false information degrades the overall performance of the network, such information should not be relayed by neighbor nodes [1]. Traditional network security approaches such as firewalls, access control, and authorized certification have been extensively investigated to secure VANETs. These approaches, known as hard security mechanisms [2], are very effective in protecting VANETs from external attackers. However, what if some internal attackers (vehicles with valid legal IDs) intentionally produce false messages to cheat others? For example, a malicious internal vehicle may send a false congestion warning to make other vehicles take unnecessary detours. In such a case, validating a sender's ID cannot guarantee the truth of the message. A potential way to address such problems is trust management [3], also known as a soft security mechanism [2].
1.1. Motivation
Many reputation systems have already been proposed for other environments, such as peer-to-peer networks [4], ad hoc networks [5], and wireless sensor networks [6]. However, these reputation systems are not feasible for VANETs because they do not account for the characteristics of VANETs, such as high dynamics and privacy concerns. Recently, some trust management systems [7–14] have been proposed for VANETs. However, they have two drawbacks.
First, these trust management systems may be vulnerable to tactical attacks. In the preliminary stage, trust management systems in VANETs were designed to deal with the basic attack of lying, in which liars individually send false service messages. However, once a trust management system is incorporated into a VANET, malicious vehicles are likely to target the trust management system itself with more complicated attacks, which we call tactical attacks. For example, a group of vehicles can form a collusion and quickly promote the reputation of a member with enormous positive feedback; a service provider vehicle can bad-mouth its rival by intentionally giving negative feedback. Therefore, the VANET suffers not only from the malicious behaviors of vehicles, but also from attacks on the trust management system itself [15]. Unfortunately, the aforementioned trust management systems for VANETs do not consider these tactical attacks in their trust models. The trust model, which is the core of a trust management system, is responsible for profiling honest and malicious behaviors. A trust model that is robust against tactical attacks in VANETs must therefore be designed.
Second, current trust management systems [16, 17] proposed for VANETs do not consider the issue of privacy preservation. Most of these previous trust management systems assume that every vehicle communicates with a unique ID rather than pseudonyms, which is not practical in VANETs due to privacy issues. The RaBTM [18] (roadside unit and beacon-based trust management system) employs pseudonyms in the reputation system to enhance privacy preservation. We compare our work with RaBTM and highlight its features in Section 6. In fact, an adversary vehicle can easily track a target vehicle by following the target's unique ID in its messages, which greatly violates the target vehicle's location privacy. Unlike other ad hoc networks, every vehicle in a VANET is tightly related to a user (driver). Disclosure of the location information of a vehicle is a potential threat to the user's privacy. Location privacy preservation is a significant issue in a VANET because people's privacy concerns will prevent them from accepting the VANET. An efficient way to address this issue is the pseudonym technique [19, 20]. In a pseudonym-enabled VANET, every vehicle has multiple pseudonyms. To preserve their location privacy, vehicles periodically change their pseudonyms when broadcasting messages. When designing a trust and reputation management system for a VANET, it is therefore necessary to employ pseudonyms instead of unique IDs for vehicles.
However, integrating trust management with the pseudonym technique is not as easy as it seems. As the reputation value becomes visible information in most messages, we find that a reputation link attack may threaten location privacy. The reputation link attack refers to the fact that adversaries can link pseudonyms by linking the reputations involved in the target's messages. The classical mix-zone (a zone designed for pseudonym changes that tries to decrease the probability of linking) [20] can deal with linking of velocity, location, acceleration, and so forth, but it does not work against the reputation link attack. For instance (Figure 1), assume only pseudonym and velocity are visible while an attacker is tracking a target vehicle.

Figure 1: An example in which the mix-zone decreases the probability of a velocity link to 50%.

Figure 2: An example in which the mix-zone cannot preserve privacy against the reputation link attack.
The reputation link attack, which may violate the untraceability of VANETs, is an important issue in designing a trust management system because the reputation link attack makes it easier for an attacker to keep track of a target's location information. Furthermore, with the location information and some additional knowledge, such as home/work location pairs, the attacker may identify the driver with a probability of over 50% [21]. For example, an attacker tracks the competitor and gets his most-visited locations. Then the attacker may identify the driver by correlating these locations with the home/work location pairs obtained from the web, for example, online social networks.
1.2. Contributions
In this paper, we present a reputation management scheme that overcomes these two drawbacks.
First, we propose a reputation model, which is not only effective in dealing with false messages, but also robust to the tactical attacks. We assume false messages are generated by malicious vehicles for their own benefit. We model service reputation for every vehicle, which can be used to identify malicious vehicles that provide low quality services. Most collusion attacks are with false feedback, so we propose feedback reputation to model the quality of feedback. In addition, we propose robust aggregation algorithms, which use the majority rule and information entropy to mitigate the threat of collusion attacks.
Second, we present a hidden-zone strategy and a k-anonymity strategy to preserve location privacy against reputation link attacks. The hidden-zone strategy hides the reputation value in a distance, which is effective in dealing with local active attackers (LAAs) [22]. The k-anonymity strategy generalizes the reputation values to achieve a k-anonymity, which is effective in handling both LAAs and global passive attackers (GPAs) [22].
Third, we model the tradeoff between privacy and utility mathematically for the k-anonymity strategy.
The rest of the paper is organized as follows. Section 2 introduces a pseudonym-enabled VANET model, a reputation management mechanism, and attack models. Section 3 describes the proposed reputation model. Section 4 presents our strategies against the reputation link attack. Section 5 evaluates our scheme via simulations. Section 6 discusses the related work. Section 7 concludes this study.
2. System Overview
In this section, we describe the pseudonym-enabled VANET model and the reputation management mechanism on it. Then we introduce some attack models.
2.1. Network Model
The network involved in our RPRep scheme consists of three entities (Figure 3): vehicles, RSUs, and servers. Every vehicle is equipped with an onboard unit (OBU), which is able to broadcast and receive messages via wireless communication. Applications run on vehicles, and each vehicle can be a service provider. We assume most vehicles are honest, while some are malicious and send false service messages or false feedback messages. The RSU is a wireless communication device deployed along the roadside. It is connected with the servers and acts as a communication interface between vehicles and servers. We assume that RSUs can be trusted. In our scheme, the pseudonym server and the reputation server are also trusted authorities. The pseudonym server issues and verifies pseudonym certificates. The reputation server maintains the reputation of vehicles and issues reputation certificates.

Figure 3: The pseudonym-enabled VANET model.
2.2. Reputation Management Mechanism
We incorporate the reputation management mechanism into the pseudonym-enabled VANET. We describe the reputation management mechanism using an example. The notations used are listed in the List of Notations shown at the end of the paper.
Step 1.
A vehicle
Vehicles are equipped with pseudonyms and their corresponding secret keys. When sending a message, a vehicle signs it with its secret key and attaches the signature and the pseudonym certificate to the message so that receivers can verify the signature.
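The signing step described above can be sketched in Python. This is a minimal illustration, not the paper's protocol: a real OBU would use an asymmetric signature (e.g., ECDSA) bound to the pseudonym certificate, whereas HMAC stands in here only to keep the sketch self-contained, and all names are hypothetical.

```python
import hashlib
import hmac

def sign_message(payload: bytes, secret_key: bytes) -> bytes:
    # Stand-in for the asymmetric signature a real OBU would compute
    # with the secret key of its current pseudonym.
    return hmac.new(secret_key, payload, hashlib.sha256).digest()

def build_signed_message(payload: bytes, secret_key: bytes,
                         pseudonym_cert: str) -> dict:
    # The signature and the pseudonym certificate are attached, so
    # receivers can verify the sender without learning its real identity.
    return {"payload": payload,
            "signature": sign_message(payload, secret_key),
            "pseudonym_cert": pseudonym_cert}

def verify_message(msg: dict, secret_key: bytes) -> bool:
    # Receivers recompute the signature over the payload and compare.
    expected = sign_message(msg["payload"], secret_key)
    return hmac.compare_digest(expected, msg["signature"])
```

Tampering with the payload after signing makes verification fail, which is the property the scheme relies on.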
Step 2.
The
Step 3.
The
Step 4.
When
Step 5.
The reputation server collects the feedback messages, aggregates the reputation values, and updates the records of the corresponding vehicles.
2.3. Attack Models
We assume servers and RSUs are protected against fraudulent access by well-established security mechanisms. We also assume the malicious vehicles cannot fake signatures and pseudonym certificates, and the Sybil attack is avoided. Denial-of-service attacks are considered out of the scope of this paper.
Apart from “lying,” on which the aforementioned reputation systems focus, there are a few other potential malicious behaviors in the VANET. We model these attacks in the following.
2.3.1. Self-Promoting Attack
In self-promoting attacks [25], attackers tend to unfairly inflate their own reputation. Typically, in a VANET, a group of malicious vehicles stays with a target vehicle in a separate space and collaborates to give numerous positive feedback messages that increase the target's reputation. With the manipulated high reputation, the target can convince more vehicles of the truth of its messages even when they are actually false. One feature of the self-promoting attack is that the positive ratings on the attacker come largely from a small crowd of colluding vehicles.
2.3.2. Bad-Mouthing Attack
In some cases, malicious vehicles collude to give negative feedback on a target in order to corrupt its reputation. This attack often happens among competitor vehicles that provide similar services. The attackers can use the bad-mouthing attack [26] to slander their competitors. The bad-mouthing attack differs from the self-promoting attack in that bad-mouthing attackers cannot collude with the target vehicle to isolate it from other honest vehicles. In addition to the ratings provided by the attackers, the reputation server also receives a large number of ratings from other honest vehicles, which helps identify false feedback messages. Therefore, modeling the feedback reputation may be effective in mitigating bad-mouthing attacks.
2.3.3. Reputation Link Attack
In the process of a pseudonym change, the vector
For example (Table 1), a cluster of vehicles—
Table 1: Original table, changed table, and generalization table. (a) Original table (OT) before pseudonym change; (b) changed table (CT) after pseudonym change; (c) generalization table (GT), where
The objective of the attack is to track vehicles and breach their location privacy. Such an attacker can be abstracted as (i) a global passive attacker (GPA) [22] that can eavesdrop on all communications of any vehicle in a monitored area and (ii) a local active attacker (LAA) [22] that can follow a target and eavesdrop on its messages.
3. Reputation Model
In this section, we propose a robust reputation model. Unlike previous trust models in VANETs, our model is designed to deal with both liars and tactical attackers.
3.1. Basic Concepts
We first introduce some basic concepts involved in our reputation model.
3.1.1. Service Reputation and Feedback Reputation
The concepts of trust and reputation in human society have been extended to VANETs. Since vehicles in a VANET run various applications that can interact intelligently with each other just as humans do, introducing trust and reputation in this kind of network is natural. Basically, reputation is public knowledge and represents the collective opinion about a node in a network. In this paper, we adopt the following definition [27].
Definition 1.
Reputation is the global perception of a vehicle's trustworthiness in a VANET.
Interactions and feedback of vehicles provide information to measure trust among them [28]. There are mainly two types of messages in our scheme, service messages and feedback messages, so we model service reputation and feedback reputation accordingly. The service reputation models the reliability of service messages. We assume false service messages are intentionally sent by malicious vehicles, so a low service reputation implies a malicious vehicle, thereby allowing service consumer vehicles to make better decisions. In our scheme, the service reputation will be published in the reputation certificate as Rep, which is discussed in Section 2.2. The feedback reputation evaluates the quality of ratings in the feedback messages. We use it to identify false feedback, which is often exploited by self-promoting and bad-mouthing attackers.
3.1.2. Experience-Based Trust and Role-Based Trust
We consider a vehicle trustworthy if it has a good history of providing service or if it holds a specialized or authority role, such as a bus, ambulance, or police car. So, in our reputation model, reputation can be built up from experience-based trust and role-based trust. The experience-based trust refers to the degree to which the feedbacker believes a target vehicle's service message. A role in the reputation model represents the set of vehicles that hold that role. The role-based trust refers to the general belief in a role. Specialized or authority vehicles can be verified by a trustworthy government department and registered with the VANET.
In particular, because there is insufficient feedback in the beginning of a VANET, which is also known as the cold start issue [29], the reputation is mainly built up by role-based trust. Then, the vehicles communicate and the reputation server accumulates the feedback. When there is sufficient feedback, we calculate the reputation largely using experience-based trust.
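The shift from role-based trust (cold start) toward experience-based trust can be sketched as follows. The blending weight n/(n + n0), with n the accumulated feedback count, is an illustrative assumption, not the paper's formula.

```python
def reputation(role_trust: float, experience_trust: float,
               feedback_count: int, n0: int = 50) -> float:
    # With little feedback, role-based trust dominates; as feedback
    # accumulates, experience-based trust takes over. The half-way
    # point n0 is a hypothetical tuning parameter.
    w = feedback_count / (feedback_count + n0)
    return (1.0 - w) * role_trust + w * experience_trust
```

With no feedback the result is exactly the role-based trust; with abundant feedback it converges to the experience-based trust.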
3.2. Rating Generation Module
Recall the example in Section 2.2;
The rating is defined by
When
The confidence c consists of time closeness, location closeness, and sensing condition. The TC is the time closeness between the observation and the event, where
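Since the exact closeness formulas are elided above, the following sketch only illustrates the idea: confidence decays with the time gap and the distance between observation and event, scaled by the sensing condition. The exponential decay form and the constants tau and d0 are assumptions.

```python
import math

def confidence(dt: float, dist: float, sensing: float,
               tau: float = 60.0, d0: float = 200.0) -> float:
    # Time closeness TC and location closeness LC decay with the time
    # gap dt (seconds) and distance dist (meters) between the
    # observation and the event; sensing in [0, 1] reflects the
    # vehicle's sensing condition.
    tc = math.exp(-dt / tau)
    lc = math.exp(-dist / d0)
    return tc * lc * sensing
```

An observation made at the event's time and place under perfect sensing gets full confidence; older or more distant observations count less.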
3.3. Reputation Aggregation Module
As Figure 4 shows, when the reputation server receives feedback

Figure 4: Reputation aggregation procedures.
Then,
3.4. Service-Reputation Aggregation Algorithm
In our model, feedback messages and events are steadily accumulated by the reputation server. Typically, an event
In the self-promoting attack, a set of vehicles gives enormous positive feedback on a member. In our collusion detection method, we use information entropy to evaluate the feedback in the window. A large information entropy implies that a great many vehicles participate in feedback, in which case it is difficult for attackers to raise or slander a reputation. On the contrary, a low information entropy probably implies collusion. According to information theory, the probability of a feedbacker vehicle
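The entropy-based collusion check can be sketched as follows; the entropy threshold is an illustrative parameter, not a value from the paper.

```python
import math
from collections import Counter

def feedback_entropy(feedbacker_ids) -> float:
    # Shannon entropy (bits) of the feedbacker distribution in a window.
    # Low entropy means a few vehicles dominate the feedback (a hint of
    # collusion); high entropy means many independent raters.
    counts = Counter(feedbacker_ids)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def looks_collusive(feedbacker_ids, threshold_bits: float = 2.0) -> bool:
    # threshold_bits is an illustrative cut-off, not the paper's value.
    return feedback_entropy(feedbacker_ids) < threshold_bits
```

A window rated by a single vehicle has zero entropy, while eight equally active raters yield 3 bits, so dominated windows are flagged.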
Trust in an event is calculated by
Then, the weight of each event is defined by
Lastly, the service reputation of
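Putting the pieces together, the confidence-weighted event trust and the per-event weights can be combined as in this sketch. The exact formulas above are elided, so both weighted means are assumptions made for illustration.

```python
def event_trust(ratings) -> float:
    # ratings: list of (rating, confidence) pairs for one event.
    # Sketched as a confidence-weighted mean of the ratings.
    num = sum(r * c for r, c in ratings)
    den = sum(c for _, c in ratings)
    return num / den if den > 0 else 0.0

def service_reputation(events) -> float:
    # events: list of (trust, weight) pairs, where each weight would
    # come from the entropy-based weighting described above.
    num = sum(t * w for t, w in events)
    den = sum(w for _, w in events)
    return num / den if den > 0 else 0.0
```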
3.5. Feedback Reputation Aggregation Algorithm
We use the majority rule to evaluate the quality of a rating. The function
Tactical attacks may target the feedback reputation as well. Two steps in Figure 4 can also mitigate this type of attack. First, the number of feedback messages on the event must be greater than a threshold
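The majority-rule evaluation can be sketched as follows, assuming binary ratings and a linear reputation step; both choices are illustrative, not the paper's exact rule.

```python
def majority_opinion(ratings) -> int:
    # Majority opinion on an event, with binary ratings in {0, 1};
    # ties count as positive.
    return 1 if sum(ratings) * 2 >= len(ratings) else 0

def update_feedback_reputation(frep: float, rating: int,
                               ratings, step: float = 0.05) -> float:
    # Move a vehicle's feedback reputation toward 1 when its rating
    # agrees with the majority and toward 0 otherwise.
    if rating == majority_opinion(ratings):
        return min(1.0, frep + step)
    return max(0.0, frep - step)
```

Honest feedbackers, who usually agree with the crowd, accumulate feedback reputation, while bad-mouthing colluders who contradict the honest majority lose it.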
3.6. Reputation Update and Vehicle Revocation
The reputation server keeps track of every vehicle with records of (VID, SRep, FRep, Roletrust, FMsgRecord, FEventRecord, isRevoked). In our example, when the reputation aggregations are completed, SRep, FMsgRecord, and FEventRecord of vehicle
If
4. Location Privacy Preservation
In this section, we present strategies to preserve location privacy against the reputation link attack.
In a classical VANET, attackers can track a target vehicle by linking information in its messages, such as location, velocity, and acceleration. Mix-zone technologies have been studied to address this problem; they are mainly classified into two categories: the location-based mix-zone [20] and the silent mix-zone [19]. The location-based mix-zone technique suggests changing pseudonyms at social spots where many vehicles gather, for example, road intersections. The basic idea of the silent mix-zone is that vehicles should not transmit messages for a period and should change pseudonyms during these silent periods.
In a VANET with a reputation management system, the reputation value in most messages is observable. The reputation value can also be exploited by adversaries for tracking. However, the mix-zone techniques mentioned above cannot address the reputation link attack.
We propose two strategies to defend against the reputation link attack: one uses a hidden-zone to prevent the attacker from gathering sufficient knowledge, and the other uses k-anonymity to make the reputation values indistinguishable. Specifically, we describe the strategies in a location-based mix-zone scenario; they can also be adapted to silent mix-zones.
4.1. Hidden-Zone Strategy
In a reputation link attack, the attacker needs both the original table (OT) and the changed table (CT) to link the target's reputation values. Keeping the attacker from learning the changed table (CT) therefore helps to defend against the attack. Suppose the attacker is a local active attacker (LAA); if the target and its neighbors hide their reputation values within the attacker's transmission range, the attacker can hardly identify the target. To this end, we present the hidden-zone strategy for pseudonym changes. The idea of the hidden-zone is inspired by the silent mix-zone [19], but they are quite different: the hidden-zone hides reputation values in a restricted zone, while the silent mix-zone forbids all messages for a random period.
Protocol (Hidden-Zone Strategy)
A vehicle When the traffic light turns to green, When
For example (Figure 5), a cluster of vehicles

Figure 5: An example of the hidden-zone strategy.
The performance of the hidden-zone strategy relates to the intersection type. Suppose there are n directions in which a vehicle can leave a mix-zone, for example, three directions in a four-way intersection and two directions in a three-way intersection. Let
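The intuition can be illustrated with a small Monte Carlo sketch (the closed-form analysis above is elided): the more vehicles change pseudonyms together, and the fewer distinct exits they spread over, the harder it is for an LAA to link the target. The attacker model below is a plain assumption made for illustration.

```python
import random

def linking_success_rate(m: int, n: int,
                         trials: int = 20000, seed: int = 1) -> float:
    # m vehicles change pseudonyms in a hidden-zone and each leaves
    # through one of n directions uniformly at random. With reputation
    # values hidden, the attacker can only guess uniformly among the
    # vehicles that exit in the target's direction.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target_dir = rng.randrange(n)
        same_dir = 1 + sum(rng.randrange(n) == target_dir
                           for _ in range(m - 1))
        if rng.randrange(same_dir) == 0:  # uniform guess among candidates
            hits += 1
    return hits / trials
```

With a single vehicle the attacker always succeeds; adding co-changing vehicles sharply lowers the linking success rate.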
The hidden-zone strategy has some drawbacks. First, it is ineffective against a global passive attacker (GPA) because the GPA's monitored area is large and unknown, which makes it hard to construct a hidden-zone that covers it. As Figure 5 shows, the hidden-zone cannot keep the GPA from obtaining the original table (OT) and the changed table (CT). Second, hiding reputation certificates may cause inconsistency for services that depend on reputation values. So the hidden-zone strategy is suitable for delay-tolerant services but not feasible for time-critical services. Therefore, we present a k-anonymity strategy that overcomes these drawbacks.
4.2. k-Anonymity Strategy
The k-anonymity concept was introduced by Sweeney [30] for publishing private medical records from a data center. Inspired by it, we define the k-anonymity of a vehicle.
Definition 2 (k-anonymity of a vehicle).
Let
The k-anonymity strategy is to make the changed table's tuples k-anonymous. Generalization is an approach to implement k-anonymity. For example, with the Value Generalization Hierarchy (VGH) in Figure 6, we generalize the changed table (CT) to a generalized table (GT) (Table 1(c)) where three vehicles

Figure 6: Value Generalization Hierarchy of Rep.
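A generalization procedure over such a hierarchy can be sketched as follows. The concrete VGH levels used here (exact value, nearest 0.5, full suppression) are illustrative stand-ins for the hierarchy in Figure 6.

```python
from collections import Counter

def generalize_to_k_anonymity(reps, k, levels):
    # Pick the least-general VGH level at which every published value
    # is shared by at least k vehicles in the mix-zone.
    #   reps   : exact reputation values of the co-changing vehicles
    #   levels : VGH levels as functions, least general first
    for level in levels:
        generalized = [level(r) for r in reps]
        counts = Counter(generalized)
        if all(counts[g] >= k for g in generalized):
            return generalized
    # Fall back to full suppression at the VGH root.
    return ["*"] * len(reps)

# Illustrative hierarchy: exact value -> nearest 0.5 -> suppressed.
vgh = [lambda r: r, lambda r: round(r * 2) / 2, lambda r: "*"]
```

Three vehicles with close but distinct reputations are published under one shared generalized value, so an observer of the changed table cannot tell them apart.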
The k-anonymity strategy is as follows.
Protocol (k-Anonymity Strategy)
A vehicle When the traffic light turns to green,
The k-anonymity strategy overcomes the weaknesses of the hidden-zone strategy. It is effective against not only LAAs but also GPAs because it works even if the attacker has full knowledge of the vehicles in a mix-zone. For example (Figure 7), a cluster of vehicles

Figure 7: An example of the k-anonymity strategy.
However, one issue remains with the k-anonymity strategy: how to balance privacy and utility when generalizing the reputation values.
Given that
Definition 3 (generalization degree, GD).
The generalization degree of a reputation value
Definition 4 (effectiveness of a reputation value Rep).
Consider
The relationship between GD and Eff can be inferred so that
To address this issue, we use an Eff threshold M to balance Eff and GD. Every vehicle must not generalize its reputation value when the Eff is lower than the threshold M.
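Because Definitions 3 and 4 are abbreviated above, the following formalization is an assumption made for illustration: GD is taken as the share of the full reputation range covered by the generalized interval, and Eff as its complement.

```python
def generalization_degree(low: float, high: float,
                          rep_min: float = 0.0,
                          rep_max: float = 1.0) -> float:
    # GD: share of the full reputation range covered by the
    # generalized interval [low, high] (hypothetical formalization).
    return (high - low) / (rep_max - rep_min)

def effectiveness(low: float, high: float) -> float:
    # Eff shrinks as GD grows: an exact value is fully effective,
    # a fully generalized value carries no information.
    return 1.0 - generalization_degree(low, high)

def may_generalize_further(low: float, high: float, M: float) -> bool:
    # A vehicle stops generalizing once Eff would drop below M.
    return effectiveness(low, high) >= M
```

Under this reading, a wider generalized interval buys more anonymity but lowers Eff, and the threshold M caps how much utility a vehicle may trade away.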
5. Evaluation
In this section, we present the evaluation of our RPRep scheme. We use the Veins [31] simulator which is based on OMNeT++, an event-based network simulator, and SUMO, a road traffic simulator. We use a map of the University of Erlangen-Nuremberg, Germany. Figure 8 shows a snapshot of one of our simulation runs. For the simulations in Sections 5.1 and 5.2, we fix the total number of vehicles to 50.

Figure 8: Map for simulating the VANET.
5.1. Effectiveness
In our first simulation, we evaluate the effectiveness of RPRep in countering liars. The VANET consists of 10% authority vehicles and 90% common vehicles. Some of the common vehicles are liars who broadcast false congestion messages every 90 seconds to cheat others. When a vehicle is deceived by a congestion message, it makes a detour, traveling a longer route instead of the planned one. We vary the proportion of liars from 0% to 30% and use the average distance traveled by vehicles in the VANET to measure the change.
The results are presented in Figure 9. When there is no liar in the VANET, the average distance of all vehicles is about 3280 meters. As the number of liars increases, the average distance grows because vehicles are misled by the false congestion messages. When 30% of the vehicles are liars, the average distance in the traditional VANET reaches 3956 meters, which is much longer than in the VANET with RPRep. Vehicles in the VANET with RPRep are less disturbed by false messages because RPRep helps them identify the liars. As expected, our RPRep scheme is effective in countering malicious vehicles that send false service messages.

Figure 9: Average distance of all vehicles under different percentages of liars.
5.2. Robustness
5.2.1. Robustness of RPRep under Self-Promoting Attack
This experiment evaluates the robustness of RPRep under a self-promoting attack. We randomly designate a vehicle

Figure 10: Reputation of the malicious vehicle in the self-promoting attack.
5.2.2. Robustness of RPRep under Bad-Mouthing Attack
This experiment is in a scenario with bad-mouthing attack. The target

Figure 11: Reputation of the target in the bad-mouthing attack.
5.3. Location Privacy Preservation
We define a metric “tracking rate” for the evaluations of location privacy preservation.
Definition 5 (tracking rate).
Tracking rate
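Since Definition 5 is abbreviated above, the following sketch formalizes the metric under a plain assumption: the tracking rate is the fraction of the target's pseudonym changes across which the attacker links the old and new pseudonyms correctly.

```python
def tracking_rate(link_attempts) -> float:
    # link_attempts: one boolean per pseudonym change of the target,
    # True when the attacker linked the old and new pseudonyms.
    if not link_attempts:
        return 0.0
    return sum(link_attempts) / len(link_attempts)
```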
Second, we investigate the impact of k in the k-anonymity strategy. In this simulation, the attackers are GPAs. As k increases from 2 to 4, the tracking rate decreases slightly (Figure 13(a)), but the effectiveness of reputation values increases sharply (Figure 13(b)), especially when there are only a few vehicles. Effective reputation values are essential for the trust management system. Therefore, to make a good tradeoff between the tracking rate and the utility of reputation values, we let

Figure 12: Performance of strategies under the tracking attack.

Figure 13: Impact of k.
6. Related Work
The implementation of trustworthy security and privacy mechanisms remains one of the main challenges in VANETs [32]. Cooperative VANET applications rely on the messages sent by vehicles. An open question [33] is the following: how can one vehicle trust a message received from another? Thus, trust must be built to allow each vehicle to detect dishonest vehicles and the malicious data they send, and to give vehicles incentives to behave honestly [34].
A number of research studies have proposed digital signature and authentication schemes to enable trustworthy communication in VANETs. The work in [35, 36] designed an endorsement mechanism based on threshold signatures to deal with internal attacks. The work in [37] introduced an ID-based proxy signature technique with ECDSA for authentication in VANETs, which combines the properties of an identity-based signature with the features of a proxy signature. This scheme was later improved to resist a private key reveal attack [8]. The work in [38] introduced a conditional privacy-preserving communication protocol for VANETs based on proxy resignature, in which the trusted authority designates the RSUs to translate signatures computed by the OBUs into ones that are valid with respect to the trusted authority's public key. The work in [39] proposed a decentralized lightweight authentication scheme called the trust-extended authentication mechanism (TEAM) for highly dynamic VANETs. These cryptographic methods ensure integrity and nonrepudiation and increase the vehicles' confidence in communications, but they cannot guarantee the quality or reliability of the message data itself.
Some trust and reputation mechanisms have been designed for VANETs and the main issues studied by them can be categorized as follows.
(A) What should be considered as evidence to build trust and reputation in VANETs? Wei and Chen [18] and Chen and Wei [13] proposed a beacon-based trust management system called RaBTM that aims to thwart internal malicious attackers more precisely in VANETs. In their system, by computing the similarity between the claimed position, velocity, and direction with the estimated values, they determine the trustworthiness of the vehicle that is sending the beacons. Antolino Rivas and Guerrero-Zapata [40] studied chains of trust to share information about points of interest in VANETs. They let users append their signed reviews to the original message and use these reviews to build trust among the users. Minhas et al. [9] developed a framework that models the trustworthiness of the agents of vehicles in order to receive the most effective information. Their multifaceted trust-modeling approach incorporates direct experiences and vehicle roles to build trust. Actually, since every vehicle is highly related to a human driver, both vehicle attributes and drivers' social relationships can be used to build trust for VANETs.
(B) What is specific to trust management in VANETs? Lo and Tsai [7] consider a false traffic warning message to be one with inaccurate traffic information, for two reasons. First, the status of traffic events, such as location and size, changes dynamically over time. Second, vehicles may have different sensing capabilities since they have different sensors. Based on these assumptions, they presented a reputation system that uses a cooperative event observation mechanism. Wang et al. [41] argue that, in a VANET, a source vehicle must rely on other vehicles to forward its packets over multihop routes to the destination. Thus, they highlight trust propagation in routing and presented a trust routing method by introducing the concept of attribute similarity. Abumansoor and Boukerche [42] describe a non-line-of-sight case caused by obstacles, which interrupts direct communication among vehicles and prevents vehicles from properly monitoring their neighboring nodes, leading vehicles to evaluate the trustworthiness of others unfairly. To address this issue, they proposed a trust evaluation model based on location information and verification.
(C) Should the trust management system be decentralized or centralized? Gómez Mármol and Martínez Pérez [10] surveyed the trust management in P2P networks, wireless sensor networks, and ad hoc networks and presented a decentralized scheme TRIP to quickly and accurately distinguish malicious or selfish nodes spreading false messages throughout the VANETs. Zhang et al. [14] presented a trust-modeling framework for message propagation and evaluation in VANETs. Their model allows vehicles to make decisions about whether to follow the message by evaluation in a distributed way while taking into account others' opinions. However, these decentralized trust systems, which rely on large numbers of interactions with neighbors, are not practical in the highly dynamic environment of a VANET. To handle the dynamics of VANETs, Huang et al. [43] presented a situation-aware trust scheme SAT, which includes an attribute based policy control model and a proactive trust model to build trust among vehicles. The weakness is that the users may need rich knowledge to define and adjust a policy. Li et al. [11] designed an efficient announcement scheme that takes advantage of the centralized infrastructure in a highly dynamic environment of VANETs. In their scheme, a trusted reputation server issues reputation certificates to vehicles and later they use these reputation certificates in communication with other vehicles. Because the feedback is collected from all vehicles in the VANET instead of neighbors, the issue of lacking interactions faced by decentralized trust systems is well addressed in this scheme. Our scheme is centralized, which is similar to Li et al.'s. Nevertheless, Li et al.'s scheme only considers the naïve liars and assumes a unique identity for each vehicle rather than pseudonyms. So their scheme is less robust against tactical attacks and less effective in preserving privacy.
(D) How can we model the misbehavior? Minhas et al. [9] model liars in their multifaceted trust model, which may send false traffic information to mislead other vehicles and cause traffic congestion. Gazdar et al. [44] model two different misbehaviors: malicious and selfish. In their trust model, a malicious vehicle can broadcast a message about an unreal event, and a selfish vehicle can reduce its forwarding rate in order to keep its throughput only for its own data transmission. A lot of work considers only the liars; however, when a trust management system is integrated into a VANET, the malicious vehicles may carry out many tactical attacks on it. In our scheme, we address some of the potential tactical attacks, including self-promoting attack and bad-mouthing attack.
(E) How can we preserve privacy? Driving privacy, or vehicle anonymity, is a critical concern in VANETs [45]. Li and Chigan [12] state that privacy protection and reputation management impose conflicting requirements in VANETs for the following reasons: privacy protection makes it challenging to maintain the reputation history of any node, while reputation management requires real-time reputation manifestation at the risk of easier vehicle tracking. They presented JPRA, a decentralized scheme that uses 1-hop neighbors and a partially blind signature technique to reconcile these conflicting requirements. This leads to other concerns, such as ensuring that the neighbors are trustworthy. In general, most previous trust management systems for VANETs pay more attention to trust issues than to privacy preservation.
The RaBTM scheme proposed by Wei and Chen [13, 18] is one of the most closely related works. They incorporate pseudo-identities in their scheme for privacy preservation. However, they do not address the reputation link attack; as a result, in RaBTM an adversary can trace a vehicle by linking its reputation values. Another closely related work is IncogniSense, proposed by Christin et al. [46]. IncogniSense introduces the concept of reputation cloaking to address the reputation link problem in the scenario of participatory sensing. One reputation cloaking technique is to perturb the reputation slightly to make it more generic; for example, reputations 5.1 and 5.2 can both be floored to 5. IncogniSense mainly protects a client's privacy from the server, while our scheme preserves a vehicle's privacy from other vehicles in the process of message broadcasting. In participatory sensing, all clients change pseudonyms; in VANET broadcasting, however, the pseudonym changers are usually only a small number of vehicles in a mix-zone (e.g., an intersection). The floor function of IncogniSense can therefore probably preserve privacy in participatory sensing, but it may not guarantee k-anonymity in a VANET. In our scheme, we combine a reputation management system with the pseudonym technique to enhance privacy preservation in VANETs, strengthen the trust model to deal with tactical attacks, and control the tradeoff between privacy preservation and reputation utility.
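The argument above can be illustrated with a minimal sketch. The `cloak` and `anonymity_set_size` helpers below are our own illustrative names (not part of IncogniSense); they show how floor-based cloaking yields a large anonymity set when many clients share similar reputations, but may single out a vehicle in a small mix-zone.

```python
def cloak(reputation: float, granularity: int = 1) -> int:
    """Floor the reputation to a coarser value, e.g., 5.1 -> 5."""
    return int(reputation // granularity) * granularity

def anonymity_set_size(reputations, target, granularity=1):
    """Count vehicles whose cloaked reputation matches the target's."""
    cloaked_target = cloak(target, granularity)
    return sum(1 for r in reputations if cloak(r, granularity) == cloaked_target)

# Participatory sensing: many clients, similar reputations -> large set.
crowd = [5.1, 5.2, 5.7, 5.9, 5.4]
print(anonymity_set_size(crowd, 5.1))     # 5 vehicles share the cloaked value 5

# VANET mix-zone: few pseudonym changers -> cloaking may single one out.
mix_zone = [5.1, 7.8, 3.2]
print(anonymity_set_size(mix_zone, 5.1))  # 1: no k-anonymity for k >= 2
```

The numbers are hypothetical; the point is only that the anonymity set is bounded by the number of simultaneous pseudonym changers, which is small at an intersection.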
7. Conclusion
Trust management is a key technique for handling internal attackers in VANETs. Unlike in other networks such as peer-to-peer networks and wireless sensor networks, the design of a trust management system for a VANET must take pseudonyms into account due to privacy concerns. In this paper, we present a reputation management scheme for pseudonym-enabled VANETs. Our scheme builds service reputations to identify liars who provide false service messages, as well as feedback reputations to deal with tactical attackers. In addition, we propose the hidden-zone strategy and the k-anonymity strategy to resist the reputation link attack during pseudonym changes, thereby protecting the location privacy of vehicles. Experimental results demonstrate that our scheme works efficiently against lying, self-promoting, and bad-mouthing behaviors, as well as reputation link-based tracking attacks.
Our scheme is designed for VANETs. Nevertheless, it can also be applied to the Internet of Vehicles (IoV) [47]. According to recent predictions [48], billions of "things," including vehicles, will be connected to the Internet by 2020. An IoV treats each vehicle as a smart device equipped with powerful sensors, V2V communication technologies, and IP-based connectivity to the Internet. Like a VANET, the IoV is susceptible to false messages, and a centralized trust management system such as our RPRep can be employed to address this issue.
Our scheme does not distinguish between the reputation of a vehicle and that of its driver. For future work, we will consider the social networks in which drivers are involved. The usage of social networks has become pervasive, which leads to a new paradigm for solving "trust" problems. Building a connection between drivers and online social networks may enable cross-domain trust relationships. A trust management system in a VANET would benefit from these relationships, especially in the early stage of the VANET.
For future work, we also plan to consider the threat of corruption [49] in the k-anonymity strategy. For example (Figure 7), Alex, Brent, and Carl form a 3-anonymity set; but if Brent and Carl are corrupted and reveal their pseudonyms to the attacker, Alex is disclosed. As we analyze in Section 5.3, k is usually a small integer (e.g., 2), which means the attacker needs to corrupt only a small number of vehicles. Therefore, we would like to cope with the threat of corruption to further improve the robustness of the k-anonymity strategy.
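The corruption threat can be sketched as follows. The helper `effective_anonymity` and the names are illustrative only, not part of our scheme; the sketch shows how each corrupted member shrinks the anonymity set by one, until the remaining vehicle is disclosed.

```python
def effective_anonymity(anonymity_set, corrupted):
    """Members still hidden after corrupted members reveal their pseudonyms."""
    return [v for v in anonymity_set if v not in corrupted]

group = ["Alex", "Brent", "Carl"]  # a 3-anonymity set formed in a mix-zone
remaining = effective_anonymity(group, {"Brent", "Carl"})
print(remaining)  # ['Alex']: with k - 1 members corrupted, the target is disclosed
```

In general, an attacker who corrupts k - 1 members of a k-anonymity set defeats it entirely, which is why a small k (such as 2) makes the strategy fragile under corruption.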
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
The support provided by the Third Jiangsu Overseas Research & Training Program for University Prominent Young & Middle-Aged Teachers and Presidents is acknowledged. The partial support from the National Natural Science Foundation of China (no. 61402244) is also acknowledged.
