Abstract
With the rapid development of cloud storage, more and more resource-constrained data owners employ cloud storage services to reduce their heavy local storage overhead. However, the data owners thereby lose direct control over their data, and all operations over the outsourced data, such as data transfer and deletion, are executed by the remote cloud server. As a result, data transfer and deletion have become two security issues, because a selfish remote cloud server might not execute these operations honestly for economic benefits. In this article, we design a scheme that makes the data transfer and the transferred data deletion operations more transparent and publicly verifiable. Our proposed scheme is based on vector commitment (VC), which we use to solve the problem of public verification during data transfer and deletion. More specifically, our new scheme provides the data owner with the ability to verify the data transfer and deletion results. In addition, by exploiting the advantages of VC, our proposed scheme does not require any trusted third party. Finally, we prove that the proposed scheme not only achieves the expected security goals but is also efficient and practical.
Introduction
Cloud computing, an emerging and promising computing paradigm, was first put forward by Google. 1 Cloud computing connects a large number of computing resources, network bandwidths, and storage spaces via the Internet.2,3 Using these tremendous resources, the cloud service provider is able to offer tenants on-demand self-service conveniently and ubiquitously, for example, outsourcing services,4–6 identity authentication, 7 cloud storage services, and data search.8–10 In the cloud storage service paradigm, the cloud storage service provider can offer resource-constrained data owners almost boundless storage and network resources. Thanks to these irresistible advantages, the cloud storage service has been widely adopted in daily life and work. To save the overhead of storing and maintaining data, more and more tenants, including corporations and individuals, prefer to store their files in a remote cloud data center. One investigation shows that 82% of organizations benefit from embracing the cloud storage service. 11
Although the cloud storage service has plenty of attractive advantages, it inevitably faces some novel security issues and challenges. 12 In the cloud storage service, the data owners lose direct control over their outsourced files, which prevents them from operating on their outsourced data directly. 13 Therefore, all operations over the outsourced data, such as moving the data from one cloud server to another and removing the outsourced file from the storage medium, are executed by the remote cloud server. As a result, outsourced data transfer and deletion have become two new problems, because the selfish cloud server might not execute these operations sincerely for economic benefits. To permanently remove outsourced data, many researchers have focused on the problem of data deletion over the past decade and have put forward plenty of methods, such as deletion by overwriting14–19 and deletion by cryptography.20–24 Although a number of deletion schemes have been proposed, some problems and challenges remain in deleting outsourced data.
First of all, many of the existing data deletion schemes are inefficient, especially those that realize data deletion by overwriting the disk. In the overwriting methods, when the data owners require the cloud server to delete the data, the remote cloud server deletes the data by overwriting the physical medium with random data and then returns a deletion result. To make the deletion operation more secure, some researchers argue that the disk should be overwritten more than once. However, deletion by overwriting is inefficient in practical applications, especially in distributed storage systems, because it needs to overwrite all the storage media that hold data backups. Besides, Gutmann 25 pointed out that simply overwriting the physical medium cannot truly delete the data, because some physical remanence is left on the disk, and an attacker can use this remanence to recover the deleted files. Hence, it is important to improve both the efficiency and the security of data deletion schemes.
Secondly, most of the existing data deletion methods cannot achieve public verification of the deletion results. Many deletion schemes can be described as “one-bit-return” schemes: the data owners send a command requiring the storage medium to delete the data; after receiving the deletion command, the storage system removes the corresponding data and then sends a one-bit result (Success/Failure) back to indicate the status of the deletion operation. In these schemes, the data owners must trust the returned deletion result because they cannot verify it conveniently and efficiently. However, the storage system might maliciously keep data backups for economic interests and send a false result to cheat the data owners. Later, some schemes aimed to give the data owners the ability to verify the deletion outcome, but many of them need a trusted third party (TTP). For example, Hao et al.’s 26 data deletion scheme needs a trusted platform module (TPM). Nevertheless, such a TTP is very hard to find in practical applications. Therefore, data deletion schemes should achieve public verifiability without requiring a TTP.
Finally, moving cloud data among different cloud servers and deleting the transferred data from the original cloud server have become two fundamental requirements for data owners. Data transfer is frequent in plenty of real-world applications, for example, smart homes, power control systems, and medical data-management systems. A report shows that cloud data traffic will increase by 19.3 exabytes per year. 27 By 2018, 9% of the total cloud traffic was predicted to be data traffic among different clouds, a 2% increase compared with the end of 2013. However, only a few existing schemes can realize outsourced data transfer and deletion simultaneously,28–30 and all of them need a third-party auditor. For example, Ni et al.’s 28 scheme uses an improved BCP encryption scheme and polynomial-based authenticators to ensure the integrity of the transferred data on the new cloud, and then utilizes a proxy-encryption technique to delete the transferred data from the original cloud server. However, it needs a third-party auditor, which can become a bottleneck. As far as we know, there is no prior work on an efficient and publicly verifiable outsourced data transfer and deletion scheme that does not require a TTP under the dishonest-cloud-server model. As a consequence, we design a vector commitment-based construction that makes the outsourced data transfer and deletion processes publicly verifiable, without requiring any TTP.
Main contributions
In this article, we design a new vector commitment-based scheme that not only achieves publicly verifiable cloud data deletion but also realizes provable data transfer between two different cloud servers. In our scheme, we use the primitive of vector commitment to generate proofs that can be used to verify the data transfer and deletion results. The main contributions of this article are twofold:
We put forward a new vector commitment-based scheme that not only achieves publicly verifiable cloud data deletion but also realizes provable data transfer between two different cloud servers. If the selfish original cloud server does not honestly transfer the data to the target cloud server, or does not sincerely delete the transferred data, the proposed scheme offers the data owner the ability to discover the dishonest operations by verifying the returned evidence.
We apply vector commitment (VC) to achieve public verifiability of the outsourced data transfer and deletion. By utilizing the advantages of VC, our proposed scheme achieves public verifiability without requiring any TTP, which distinguishes it from most previous solutions. Besides, our novel scheme is also quite efficient in both communication and computation.
This work is an extension of our previous paper, which was presented at the International Conference on Information and Communications Security (ICICS) 2018. 31 The main differences between this article and the conference version are as follows. Firstly, we describe the related work in more detail in section “Introduction.” Secondly, we present a more detailed scheme and add a high-level description of our scheme in section “Our construction.” We also describe the design goals for our scheme in section “Design goals” and prove that our proposed scheme satisfies these security properties in section “Security analysis.” Finally, we add an experimental simulation of our novel scheme and an overhead comparison between a previous scheme and ours in section “Performance evaluation.”
Related work
In cloud storage, plenty of researchers have focused on verifiable cloud data transfer and deletion for a long time, resulting in a large number of solutions. For data deletion, although unlinking can remove the link to a file efficiently, the contents of the outsourced file still remain on the storage medium, and an attacker can recover them by using tools to scan the disk. 32 Therefore, it is particularly important to study and put forward more secure outsourced data deletion schemes.
In 2010, in order to delete the contents of a file and provide the data owner with the ability to verify the deletion result, Paul and Saxena 33 put forward a verifiable data deletion protocol named “Proof of Erasability” (PoE). In addition, Perito and Tsudik 34 put forward a similar solution called “Proofs of Secure Erasure” (PoSE-s), which aims to permanently delete digital data from embedded devices. In 2011, Wei et al. 35 proposed a scheme to reliably erase data from flash-based solid-state drives. Luo et al. 36 put forward a permutation-based verifiable cloud data erasure protocol in 2016. In their protocol, they assume that the cloud server is self-serving and merely maintains the latest version of the outsourced data. Besides, all backups are assumed to be made consistent when the data owner updates the data. Therefore, they turn the overwriting operation into a data update operation; that is, they delete the data by updating them. Finally, the data owner can use a challenge–response protocol to verify the deletion outcome.
In order to permanently and efficiently delete files from the Yet Another Flash File System (YAFFS), Lee et al. 37 put forward a secure data deletion protocol in 2010. In their scheme, they modify YAFFS to encrypt different files with different encryption keys and store the keys in the file headers. They then realize data deletion by deleting the file header, which makes the ciphertext unusable. In 2012, Reardon et al. 38 proposed a data node encrypted file system, which can also be used to remove data from flash memory. They use a unique key to encrypt every data block and store the corresponding keys in a key storage area. If the data owner wants to remove some data blocks, the related keys are deleted; specifically, the key storage area is replaced with a new one that does not contain the decryption keys for the deleted blocks. In 2014, Xiong et al. 39 presented a data self-destructing protocol. In their scheme, each file is labeled with a time instant, and every private key is associated with a time interval. The ciphertext can be decrypted only if the attributes related to it satisfy the access structure and the time instant falls within the specified interval. Upon expiration, the sensitive information is self-destructed securely.
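The key-deletion idea behind these schemes (each block encrypted under its own key, so erasing the small key renders the large ciphertext unrecoverable) can be sketched as follows. This is a toy illustration, not the construction of any cited scheme: a SHA-256-derived keystream stands in for a real cipher such as AES, and a Python dict stands in for the secure key storage area.

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHA-256-derived keystream.
    # Stands in for a real cipher; do not use in practice.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class KeyDeletionStore:
    """Each block gets its own key; erasing the key 'deletes' the block."""
    def __init__(self):
        self.blocks = {}   # block_id -> ciphertext (may persist on disk)
        self.keys = {}     # block_id -> key (small, easy to erase securely)

    def store(self, block_id: str, plaintext: bytes) -> None:
        key = secrets.token_bytes(32)
        self.keys[block_id] = key
        self.blocks[block_id] = keystream_encrypt(key, plaintext)

    def read(self, block_id: str) -> bytes:
        if block_id not in self.keys:
            raise KeyError("block deleted: key no longer exists")
        # XOR with the same keystream decrypts.
        return keystream_encrypt(self.keys[block_id], self.blocks[block_id])

    def delete(self, block_id: str) -> None:
        # Only the key is erased; the ciphertext may remain on the medium
        # but is unrecoverable without the key.
        del self.keys[block_id]
```

Deleting a block leaves its ciphertext in `blocks` untouched, mirroring the observation above that the data remain on the medium but become unreadable once the key is gone.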
In order to manage cloud data more transparently and efficiently, Du et al. 40 studied the Merkle hash tree, pre-deleting sequences, and some other basic cryptographic techniques and designed a deletion scheme for multi-copy cloud storage. Their scheme offers the data owner the ability to check the deletion result by verifying the deletion evidence. In 2018, Yang et al. 41 put forward a novel blockchain-based outsourced data deletion scheme with public verifiability. In their scheme, the remote cloud server stores and maintains the outsourced data; upon receiving a valid deletion command, it removes the related data and generates the corresponding deletion evidence at the same time. Finally, the deletion proof is published on the blockchain, and any verifier can check the deletion result by verifying the proof without requiring any TTP. Yang and Tao 42 put forward a new cloud data deletion scheme with public verifiability and efficient tracking. They use the Merkle hash tree to handle public verification and offer the data owner the ability to verify the deletion result.
Furthermore, how to achieve provable cloud data transfer and deletion simultaneously has caught plenty of researchers’ attention. In 2015, Yu et al. 27 presented a verifiable cloud data possession protocol characterized by provable data transfer and deletion. After the data are transferred to a new cloud server, the data owners can check the integrity of the transferred data on the new cloud server and then delete them from the original cloud server. In 2017, Xue et al. 29 designed a verifiable cloud data transfer scheme from provable data possession (PDP) and cloud data deletion. In their scheme, when the data owners want to change the service provider, they can move the data blocks to a new cloud server and check the integrity of the transferred blocks there. Moreover, the original cloud removes the corresponding transferred blocks and utilizes a rank-based Merkle hash tree (RMHT) to generate a deletion proof; the data owners can then check the deletion result by verifying the proof. Later, Wang et al. 11 presented a verifiable cloud data transfer and erasure protocol. Their scheme provides the data owner with the ability to transfer data between two different cloud servers. Besides, they use homomorphic authenticators and homomorphic encryption to offer the data owner the ability to verify both the deletion result on the original cloud and the integrity of the transferred data on the new cloud.
Organization
The rest of the article is organized as follows. We present some preliminaries in section “Preliminaries,” including the bilinear pairings and the VC. In section “Problem statement,” we describe the problem statement in detail. Then, in section “Our construction,” we describe our novel scheme in detail. In section “Analysis of our scheme,” we first give a brief security analysis of our scheme and then compare our novel scheme with an existing scheme. Finally, we conclude the proposed data transfer and deletion scheme in section “Conclusion.”
Preliminaries
In this section, we first present the basic definitions and main properties of the bilinear pairings. Secondly, we give a short description of the Computational Diffie–Hellman (CDH) problem. Finally, we describe the cryptographic primitive of VC, which is particularly important for realizing public verifiability.
Bilinear pairings
We assume that G1 and GT are two multiplicative cyclic groups of the same prime order p and that g is a generator of G1. A bilinear pairing is a map e: G1 × G1 → GT with the following properties: (1) bilinearity: for all u, v ∈ G1 and a, b ∈ Zp*, e(u^a, v^b) = e(u, v)^(ab); (2) non-degeneracy: e(g, g) ≠ 1; and (3) computability: e(u, v) can be computed efficiently for all u, v ∈ G1.
The CDH problem in G1 is defined as follows.
Definition 1
We assume that a and b are both randomly chosen from Zp*. Given a tuple (g, g^a, g^b) ∈ G1^3, the CDH problem is to compute g^(ab). The CDH assumption holds in G1 if no probabilistic polynomial-time algorithm can solve the CDH problem with non-negligible probability.
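As a concrete toy instance (with a deliberately insecure Mersenne prime chosen for readability; real schemes work in pairing-friendly elliptic-curve groups), the following sketch shows the asymmetry at the heart of CDH: computing g^(ab) is easy for a party holding a or b, while an adversary sees only (g, g^a, g^b).

```python
import secrets

# Toy parameters: p = 2^127 - 1 is prime (a Mersenne prime), g = 3.
# Far too small and too structured for real use; illustration only.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1   # secret exponent of one party
b = secrets.randbelow(p - 2) + 1   # secret exponent of the other party

g_a = pow(g, a, p)   # public value g^a
g_b = pow(g, b, p)   # public value g^b

# CDH: given only (g, g_a, g_b), compute g^(ab).
# With the secret a it is easy; by Fermat's little theorem this equals
# g^(ab mod (p-1)) mod p.
g_ab = pow(g_b, a, p)
assert g_ab == pow(g, (a * b) % (p - 1), p)
```

Without knowledge of a or b, no efficient way to obtain `g_ab` from the public values is known; this hardness is what the security of the pairing-based VC rests on.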
Vector commitment
As a fundamental cryptographic primitive, the commitment scheme plays an important role in plenty of security protocols, for instance, zero-knowledge proofs, identification protocols, digital voting schemes, and verifiable databases. 43
Informally, a commitment protocol can intuitively be seen as a sealed envelope: when the sender wants to commit to a message m, he puts the message m into the envelope and sends the sealed envelope to the receiver. At a later moment, the sender can open the sealed envelope to publicly reveal the committed message m. A commitment scheme is expected to satisfy two properties. The first one is called “hiding”: the receiver learns nothing about the committed message before the envelope is opened. The second one is called “binding”: once the commitment is generated, the sender cannot change the committed message.
In 2013, Catalano and Fiore 45 put forward a new cryptographic primitive derived from commitment, which is called vector commitment (VC). Generally speaking, a VC scheme is very closely related to zero-knowledge sets. Without loss of generality, a VC scheme allows the committer to commit to an ordered sequence of messages (m1, m2, …, mq) in such a way that he can later open the commitment at a specific position i to prove that mi is the i-th committed message. A VC scheme is required to be position binding: no adversary can open the commitment to two different messages at the same position. Moreover, a VC scheme supports updates: the committer can change the message at some position and update the commitment accordingly.
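Our construction uses the pairing-based VC of Catalano and Fiore; as a self-contained illustration of the VC interface only (commit to an ordered vector, open one position, verify the opening), here is a Merkle-tree-based sketch. It is position binding under the collision resistance of SHA-256, but it is not the scheme used in this article, and its openings are logarithmic rather than constant size.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def vc_commit(messages):
    """Commit to an ordered vector of messages; returns (commitment, aux)."""
    level = [H(b"leaf:" + m) for m in messages]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree

def vc_open(tree, i):
    """Opening proof for position i: sibling hashes along the root path."""
    proof = []
    for level in tree[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[i ^ 1], i % 2))     # (sibling, am-I-right-child)
        i //= 2
    return proof

def vc_verify(commitment, message, proof):
    """Check that `message` opens the commitment at the position encoded
    by the direction bits carried in `proof`."""
    h = H(b"leaf:" + message)
    for sibling, is_right in proof:
        h = H(sibling + h) if is_right else H(h + sibling)
    return h == commitment

# Example: commit to five messages, then open and verify position 3.
c, tree = vc_commit([b"m0", b"m1", b"m2", b"m3", b"m4"])
assert vc_verify(c, b"m3", vc_open(tree, 3))
assert not vc_verify(c, b"forged", vc_open(tree, 3))
```

An update (changing one mi) only requires recomputing the hashes on the path from leaf i to the root, which is the property our scheme relies on when data are deleted.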
Problem statement
In this section, we first describe the system model of our proposed scheme. Then we formalize the main security threats that are considered in our scheme. Finally, we identify our principal design goals in detail.
System model
In this subsection, we present the system model of our newly proposed scheme. Without loss of generality, the system model contains two remote cloud servers, the original cloud server S1 and the target cloud server S2, as well as a data owner O, as shown in Figure 1.
A data owner O is an entity that owns restricted computing, network, and storage resources. In order to save local storage overhead, O prefers to outsource his personal data to the remote cloud server S1. Later, due to some controllable or uncontrollable factors, O may transfer the outsourced data to a new cloud server S2. Thereafter, O wants to permanently delete the transferred data from S1 and to verify the transfer and deletion results by checking the returned evidence.
An original cloud server S1 is an entity that has powerful computing ability, plenty of storage resources, and abundant transmission bandwidth. In our system model, we assume that S1 transfers the specified data to cloud server S2. After that, S1 is requested to permanently remove the transferred data. Finally, S1 computes corresponding proofs to convince O that the data transfer and deletion operations have been executed honestly.
A target cloud server S2 is another entity that also owns ample storage space and provides on-demand cloud storage services for resource-constrained data owners. In our scheme, S2 is the target cloud server; that is, S2 receives the transferred data from S1 and stores them. After that, S2 generates a new commitment and returns evidence to O to indicate the success of the data transfer.
Figure 1. The system model of our scheme.
Security threats
In our scheme, we assume that S1 is a “semi-honest-but-curious” cloud server, which may not follow our protocol honestly for financial incentives. Besides, some malicious users (hackers) might try to unlawfully access the outsourced data. Therefore, two types of attacks should be considered in our scheme: external attacks and internal attacks. External attackers, such as hackers and malicious users, might try to dig sensitive information out of the outsourced data. Internal attackers, such as dishonest administrators of the cloud storage system, may learn the privacy of O from the outsourced data. Besides, S1 may not transfer the data or delete the transferred data sincerely, for economic benefits. More specifically, we mainly consider the following four security challenges:
Data privacy exposure: data privacy exposure is a very common security threat to the data owner in the cloud storage service, for the following three reasons. Firstly, curious attackers might try their best to illegally access the outsourced data to dig out private information. Secondly, the remote cloud server might share the outsourced data with its partners for financial incentives. Last but not least, the cloud server might maliciously keep some copies to extract implicit benefits from the data. Therefore, the outsourced data suffer from the threat of privacy exposure.
Data pollution: the outsourced data may be polluted due to the following factors. Firstly, the selfish remote cloud server might delete some rarely accessed outsourced data to save storage resources. Secondly, to save transmission bandwidth, S1 may send only part of the data when O downloads or transfers the data. Finally, external attackers may destroy the data maliciously. Hence, the outsourced data suffer from data pollution.
Dishonest data transfer: in practical applications, the local data owner may transfer the data from S1 to S2 for some objective or subjective reasons. To transfer the data successfully, S1 must expend some computation and communication resources. For economic reasons, S1 may transfer only part of the data to S2 or, even worse, may not transfer the data at all, while still claiming that the data have been transferred to S2 honestly and returning an erroneous transfer result to mislead O. Therefore, dishonest data transfer is another security threat.
Malicious data reservation: when the data have been transferred to S2, O prefers to permanently delete the transferred file from S1. However, the selfish S1 might maliciously keep some backups of the transferred data, for the following reasons. Firstly, S1 needs to expend some additional overhead to delete the transferred data. Secondly, S1 can arbitrarily obtain implicit benefits from the reserved data, which may lead to data privacy exposure. Therefore, from O’s point of view, malicious reservation of the transferred data is also a security threat.
Moreover, we assume that S1 and S2 will not collude to cheat O because they belong to different enterprises; that is, the two cloud servers follow our protocol independently. Furthermore, we assume that S2 will not maliciously slander S1, in order to keep its good reputation, and that S1 will not store any data backups with other subcontractors.
Design goals
In our new scheme, we aim to achieve provable data transfer and verifiable data deletion in cloud storage. According to the above security threats, we need to realize the following four properties:
Data confidentiality: in order to protect the sensitive data contained in the outsourced file, the scheme must prevent attackers from accessing the sensitive data illegally. This implies that we need to encrypt the outsourced file with a secure encryption algorithm and then outsource the corresponding ciphertext. In addition, the decryption key should be kept secret so that only the data owner knows it.
Data integrity: in order to prevent the outsourced data from being polluted, on one hand, O should be able to verify data integrity during the data decryption process; on the other hand, S2 should check whether the transferred data blocks are intact before storing them. If the data are not intact, both O and S2 can detect the data pollution.
Verifiability: in order to ensure that the outsourced data have been successfully transferred to S2 and permanently deleted from S1, O and S2 should be able to check the results of the data transfer and deletion operations. If S1 does not faithfully transfer or delete the transferred data, it cannot forge valid evidence to prove that the file has been transferred or deleted honestly.
Accountability: upon performing operations over the outsourced data, neither O nor S1 can deny its behavior. If one of them denies its performance and slanders the other, the vilified party can prove its innocence and disclose the slanderer’s malicious behavior.
Our construction
Overview
In this article, we adopt the primitive of VC to construct a new scheme that aims to solve the problem of verifiable outsourced data transfer and deletion under a commercial model.
The main processes of our newly proposed scheme are demonstrated in Figure 2. The outsourced file often contains sensitive information that, from the data owner’s point of view, should be kept secret. Therefore, the data owner O should first encrypt the file to protect data confidentiality and then outsource the ciphertext to the original cloud server S1. The original cloud server S1 maintains the data and returns a storage proof. After that, the data owner O can check the storage result and delete the local backups. Later, when the data owner O needs the outsourced file, he downloads the ciphertext from the original cloud server S1 and decrypts it to obtain the plaintext. Meanwhile, due to some controllable or uncontrollable factors, the data owner O may want to change the cloud storage service provider. In that case, the data owner O transfers the data from the original cloud server S1 to the target cloud server S2 and checks the data transfer result. Finally, the data owner O wants to delete the transferred data from the original cloud server S1. The original cloud server S1 executes the data deletion command and returns a data deletion evidence, which the data owner O uses to verify the data deletion result.
Figure 2. The main process of our scheme.
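The flow described above can be simulated end to end in a few lines. This is a schematic sketch only: a plain hash over the ordered ciphertext blocks stands in for the vector commitment, Python objects stand in for the cloud APIs, the deletion evidence is a placeholder, and all names are illustrative.

```python
import hashlib

def commitment(blocks):
    # Stand-in for the vector commitment over the ordered blocks.
    h = hashlib.sha256()
    for b in blocks:
        h.update(hashlib.sha256(b).digest())
    return h.hexdigest()

class CloudServer:
    def __init__(self):
        self.blocks = None

    def store(self, blocks):
        self.blocks = list(blocks)
        return commitment(self.blocks)        # storage proof for O

    def transfer_to(self, target):
        # The target recomputes the commitment over what it actually
        # received, so a partial transfer yields mismatching evidence.
        return target.store(self.blocks)

    def delete_all(self):
        self.blocks = None
        return "deletion-evidence"            # placeholder evidence

# O encrypts locally, keeps only the commitment, and outsources to S1.
ciphertext_blocks = [b"enc-0", b"enc-1", b"enc-2"]
owner_commitment = commitment(ciphertext_blocks)

s1, s2 = CloudServer(), CloudServer()
assert s1.store(ciphertext_blocks) == owner_commitment   # storage check
assert s1.transfer_to(s2) == owner_commitment            # transfer check
s1.delete_all()                                          # deletion request
assert s1.blocks is None
```

If S1 dropped a block before the transfer, `s1.transfer_to(s2)` would return a commitment different from `owner_commitment`, and O would detect the dishonest transfer.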
It is easy to see that our newly proposed scheme achieves data confidentiality, provable data transfer, and verifiable data deletion, which is very similar to the previous solutions.29,47 Moreover, our new construction achieves public verifiability without requiring any TTP, whereas the previous schemes29,47,48 need to introduce a third-party auditor. In real-world applications, the third-party auditor has become a bottleneck that impedes the rapid development of verifiable data transfer and deletion. Therefore, we believe that our newly proposed solution is more attractive and practical.
The concrete construction
In this subsection, we introduce our novel provable outsourced data transfer and deletion scheme in detail. We first describe some symbols that will be utilized in our novel scheme. First of all, we assume that O has passed the identification and become a legal client of the two cloud storage service providers S1 and S2. After that, O can set a secret and unique identity number for each outsourced file.
KeyGen
First of all, it is required to generate ECDSA public/private key pairs for the participants; these key pairs are used to sign the protocol messages so that no party can later deny its operations.
Encrypt
In order to prevent the sensitive information from disclosure, the data owner O encrypts the file with a secure symmetric encryption algorithm and divides the ciphertext into blocks before outsourcing them to S1.
StoreCheck
After uploading the outsourced data to S1, the data owner O can verify the storage result by checking the storage proof returned by S1; if the verification passes, O deletes the local backups.
Decryption
When O wants to access the outsourced file, he downloads the ciphertext blocks from S1, verifies their integrity, and then decrypts them to recover the plaintext.
Transfer
For some objective or subjective reasons, O may require S1 to transfer the outsourced data to the target cloud server S2. S2 checks the integrity of the received data blocks, stores them, generates a new commitment, and returns a transfer evidence to O, who can then verify the transfer result.
Deletion

After the data have been transferred to S2, O requires S1 to permanently delete the transferred data. S1 removes the data and returns a deletion evidence, which O (or any other verifier) can check to confirm the deletion result.
Analysis of our scheme
In this section, we provide a detailed analysis of our newly proposed scheme. First, we prove the security properties that our new scheme satisfies. Secondly, we compare Hao et al.’s scheme 26 with our scheme. Finally, we present the efficiency evaluation comparisons of Hao et al.’s 26 scheme and our proposed scheme.
Security analysis
In the following, we will prove that our proposed scheme can satisfy the desired security properties.
Data confidentiality
Data confidentiality means that the attackers cannot obtain any plaintext information about the outsourced data unless they obtain the corresponding data decryption key. In our proposed scheme, the local data owner uses the IND-CPA secure Advanced Encryption Standard (AES) to encrypt the outsourced data and keeps the encryption/decryption keys secret, which ensures that no attacker can get the keys illegally. That is, no attacker can maliciously obtain the plaintext. Therefore, the proposed scheme satisfies outsourced data confidentiality.
Data integrity
Our proposed provable cloud data transfer and deletion scheme guarantees the integrity of the cloud data.
After outsourcing the personal data to the original cloud server S1, the local data owner O keeps no local copy. When O later downloads the ciphertext for decryption, he can verify the integrity of every downloaded block against the commitment; any polluted or missing block will fail this verification, so O can detect the data pollution before decryption.
In the data transfer phase, S2 likewise checks the integrity of every received data block against the commitment before storing it. If S1 transfers only part of the data or transfers polluted blocks, the verification fails and S2 can detect the dishonest transfer.
Therefore, we can say that our new proposed scheme is able to guarantee the outsourced data integrity in the data transfer and decryption phases.
Public verifiability
Our proposed scheme satisfies the property of public verifiability. After deleting the transferred data from the storage medium, the remote original cloud S1 computes a deletion evidence and returns it to the data owner O. Because the evidence is bound to the public vector commitment, any verifier, not only O, can check the deletion result by verifying the evidence, without relying on any TTP. If S1 does not honestly delete the transferred data, it cannot forge a valid evidence that passes the verification.
Accountable traceability
Our newly proposed scheme satisfies the property of accountable traceability. More specifically, we analyze the accountable traceability in the data transfer and data deletion processes, respectively.
In data transfer process
Our new scheme achieves provable data transfer between two different cloud servers. We mainly consider the following two scenarios: the dishonest data owner O and the malicious original cloud server S1.
Dishonest O: if O is dishonest, he may falsely claim that S1 did not execute the data transfer honestly. However, S1 holds O’s signed transfer command as well as the transfer evidence generated by S2, so S1 can prove that it has executed the transfer honestly and disclose O’s slander.
Malicious S1: if S1 is malicious, it may behave dishonestly in the data transfer process. On one hand, S1 may transfer only part of the data blocks; in this case, S2 will detect the pollution when verifying the received blocks. On the other hand, S1 may not transfer the data at all while claiming the opposite; in this case, S1 cannot produce a valid transfer evidence, so O can discover the dishonest behavior by verifying the returned evidence.
In data deletion process
Similarly, we describe the accountable traceability when a dispute arises in the data deletion process.
Dishonest O: if O denies having issued the deletion command and slanders S1 for deleting the data maliciously, S1 can present O’s signed deletion command to prove its innocence and disclose O’s malicious behavior.
Malicious S1: first of all, S1 may maliciously reserve some backups of the transferred data instead of deleting them. In this case, S1 cannot forge a valid deletion evidence that passes the verification, so O can detect the malicious data reservation by checking the returned evidence.
Comparison
In order to demonstrate the overhead and efficiency more intuitively, we compare our novel VC-based scheme with Hao et al.’s scheme 26 in this section.
Both schemes need to execute a few one-time computations to prepare the corresponding keys. Secondly, our proposed scheme achieves public verifiability without requiring any TTP, whereas Hao et al.’s scheme 26 needs a TPM. Moreover, our proposed scheme achieves provable outsourced data transfer and deletion, which Hao et al.’s scheme 26 does not. Finally, the remote cloud server maintains the outsourced data; that is, the remote cloud server executes most of the computation operations (the same holds in Hao et al.’s scheme 26 ).
In our novel scheme, the outsourced file is divided into blocks before encryption and outsourcing, so the computation overhead of each operation grows with the number of processed data blocks.
From Tables 1 and 2, it can easily be seen that our scheme simultaneously achieves verifiable outsourced data transfer and deletion without relying on any TTP, which is different from Hao et al.’s scheme. 26
Moreover, our novel scheme costs much less overhead to encrypt and decrypt a file of the same size. The overhead of deleting the data and verifying the deletion result in the two schemes is summarized in Table 2.
Table 1. Function comparison of the two schemes.
TTP: trusted third party.
Table 2. Computational overhead comparison of the two schemes.
Performance evaluation
In this section, we provide the performance evaluation of our proposed verifiable cloud data transfer and deletion scheme. We simulate both our novel scheme and Hao et al.’s scheme 26 with the OpenSSL library and the pairing-based cryptography library. More specifically, we execute the simulation experiments on the same Linux machine, equipped with 4GB main memory and an Intel(R) Core(TM) i5-4590 processor running at 3.30 GHz, and we simulate all the entities on this machine. In this way, we can evaluate the overhead of the proposed scheme precisely through the simulations.
In order to protect the confidentiality of the outsourced data, the local data owner should encrypt the outsourced file before outsourcing it to the remote cloud server. The encryption efficiency comparison of our novel scheme and Hao et al.’s scheme is presented in Figure 3. From the simulation results, we can see that the overhead of the encryption operation increases with the size of the encrypted file. To encrypt a plaintext of the same size, Hao et al.’s scheme requires much more time than ours, and its growth rate is also much higher. Therefore, our novel scheme is much more efficient in the encryption phase.
Figure 3. The time cost of encrypting.
After outsourcing the personal data to the remote cloud server, our scheme gives the local data owner the ability to verify the storage result. The main computation comes from generating and verifying the storage proofs: the remote cloud server performs the proof-generation operations, whose cost grows with the number of outsourced data blocks, and the data owner then verifies the returned proofs. The time cost of storage verification is shown in Figure 4.
Figure 4. The time cost of storage verification.
The local data owner keeps no data copy after outsourcing the file to the remote cloud server. Therefore, when the data owner requires the outsourced data, he needs to download the ciphertext from the remote cloud server and then obtain the file by decrypting it. To perform the experiment conveniently, we assume that the data owner stores 100 data blocks on the cloud server and downloads 20 data blocks in the decryption phase. The decryption efficiency comparison of our proposed scheme and Hao et al.’s scheme is shown in Figure 5. From Figure 5, we can see that our proposed scheme costs much less overhead to decrypt a ciphertext of the same size. Furthermore, the time cost increases with the size of the decrypted file; nevertheless, the growth rate of Hao et al.’s scheme is much higher than that of our newly proposed scheme. As a result, our newly proposed scheme is much more efficient at decrypting the file.
Figure 5. The time cost of decrypting.
Both our novel scheme and Hao et al.’s scheme satisfy public verifiability of the data deletion result. To delete a file, Hao et al.’s scheme needs to execute one signature generation operation and one signature verification operation, whereas our scheme generates and verifies the corresponding deletion evidence. The deletion efficiency comparison of the two schemes is shown in Figure 6.
Figure 6. The time cost of deleting.
Conclusion
Due to the lack of trust between the local data owner and the remote cloud server, it is challenging to transfer outsourced data between clouds and to verifiably delete the transferred data. In this article, we proposed a vector commitment-based scheme that achieves publicly verifiable data transfer between two different cloud servers and verifiable deletion of the transferred data from the original cloud server, without relying on any trusted third party. The security analysis and the experimental results show that the proposed scheme is both secure and efficient.
Footnotes
Handling Editor: Marcin Wozniak
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Open Projects of State Key Laboratory of Integrated Service Networks (ISN) of Xidian University (grant no. ISN19-13), the Natural Science Foundation of Guangxi (grant no. 2016GXNSF AA380098), and the Science and Technology Program of Guangxi (grant no. AB17195045).
