How can the DSS (Digital Signature Standard) scheme be compromised by an attacker?

Performance of Digital Signature Schemes on Mobile Devices

D.Y.W. Liu, ... M.H. Au, in Mobile Security and Privacy, 2017

1.1 Our Contribution

We present a performance analysis of two well-known digital signature schemes from pairing-based cryptography on mobile devices running the Android (Google, 2016) platform. The two schemes are from Boneh et al. (2004b) (BLS) and Paterson and Schuldt (2006) (PS). The efficiency of these schemes is evaluated in terms of computation time and energy consumption during signature generation and verification, as well as the time to generate the message digest. Messages of various sizes and types, chosen to reflect practical settings, are used in our experiments. We present the results and discuss their implications.


URL: https://www.sciencedirect.com/science/article/pii/B9780128046296000122

Password-Based Authenticated Key Establishment Protocols

Jean Lancrenon, ... Feng Hao, in Computer and Information Security Handbook (Third Edition), 2013

Relying on Public-Key Infrastructure (PKI)

Another type of preestablished long-term keying material that can be used is certified public key/secret key pairs. This requires a trusted CA to digitally sign messages binding parties' identities to their public keying material. These signed messages are known as certificates. Let pk_CA and sk_CA be the CA's public verification and private signature keys, respectively. (See Chapter 46 for information on digital signatures.) The certificate of a given party will also contain descriptions of the instantiated mathematical objects used in the public-key algorithms. Let Cert_A (resp., Cert_B) denote A's (resp., B's) certificate. Below we describe the main flows of the well-known Station-to-Station protocol (STS; see Ref. [3]). Let G be a cyclic group of order n in which DDH is believed to hold, g be a generator for G, E be a symmetric encryption algorithm, and S be the signature algorithm of a digital signature scheme. (See Chapter 46.) The protocol runs as follows (a code sketch appears after the message flows):

A sends to B the data Cert_A and g^x, where x is chosen at random, and Cert_A contains the group parameters and a description of E. It also contains A's public signature verification key, pk_A.

B first verifies the CA's signature on Cert_A using the CA's public key pk_CA. If this check fails, B aborts the protocol. Otherwise, he replies to A with Cert_B, g^y, and c_B := E_K(S_skB(g^y, g^x)), where y is chosen at random, K = g^(xy) is computed by B, S_skB(g^y, g^x) is a digital signature under B's private key sk_B on (g^y, g^x), and c_B is the encryption of that signature under K.

A checks the CA's signature on Cert_B using pk_CA. If this check fails, the protocol is aborted. Otherwise, she computes K := g^(xy) and decrypts c_B using K to obtain S_skB(g^y, g^x). She verifies this signature on the message (g^y, g^x) using B's public verification key pk_B (taken from Cert_B). If this check fails, the protocol is aborted. Otherwise, she computes a signature S_skA(g^x, g^y) on (g^x, g^y) using her private signing key sk_A, and then computes an encryption c_A := E_K(S_skA(g^x, g^y)) that she sends back to B.

Finally, B decrypts c_A using K and verifies the obtained signature using A's public verification key pk_A. The session key is set to K = g^(xy).
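The following is a minimal sketch of these flows in Python, using X25519 for the Diffie-Hellman exchange over G, Ed25519 for the signature scheme S, and Fernet (after an HKDF step) for the symmetric cipher E. These concrete primitives, the key-derivation step, and all variable names are illustrative substitutions, and certificate checking is omitted, so treat this as a sketch of the message flow rather than a faithful STS implementation.

```python
import base64

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ed25519, x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def raw(pub) -> bytes:
    """Serialize an ephemeral public value (stands in for g^x or g^y)."""
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)


def session_key(shared_secret: bytes) -> Fernet:
    """Derive the symmetric key K from the Diffie-Hellman shared secret."""
    k = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"sts").derive(shared_secret)
    return Fernet(base64.urlsafe_b64encode(k))


# Long-term signature keys; the public halves would be carried in Cert_A and Cert_B.
sk_A, sk_B = ed25519.Ed25519PrivateKey.generate(), ed25519.Ed25519PrivateKey.generate()
pk_A, pk_B = sk_A.public_key(), sk_B.public_key()

# Flow 1: A -> B : g^x
x = x25519.X25519PrivateKey.generate()
gx = raw(x.public_key())

# Flow 2: B -> A : g^y, c_B = E_K(S_skB(g^y, g^x))
y = x25519.X25519PrivateKey.generate()
gy = raw(y.public_key())
K_B = session_key(y.exchange(x25519.X25519PublicKey.from_public_bytes(gx)))
c_B = K_B.encrypt(sk_B.sign(gy + gx))

# A: compute K, decrypt c_B, verify B's signature, reply with c_A = E_K(S_skA(g^x, g^y))
K_A = session_key(x.exchange(x25519.X25519PublicKey.from_public_bytes(gy)))
pk_B.verify(K_A.decrypt(c_B), gy + gx)   # raises InvalidSignature if the check fails
c_A = K_A.encrypt(sk_A.sign(gx + gy))

# Flow 3 at B: decrypt c_A and verify A's signature; K is now the agreed session key.
pk_A.verify(K_B.decrypt(c_A), gx + gy)
```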

In this scheme, the authentication is basically provided by the digital signatures. A party is certain that a message was indeed signed by another entity if the signature verification equation under that entity's public key holds. The CA's role is to make sure that an adversary cannot simply replace an honest party's public key with her own in a certificate, since this would require forging a signature under the CA's key. Also, similarly to the Needham–Schroeder protocol, the values g^x and g^y can be viewed as numbers that, in addition to computing a joint session key, serve as unique identifiers for the key exchange, in order to prevent replay attacks. Notice also that the session key K is actually used in the protocol to encrypt the signatures, allowing the parties to demonstrate to one another that they have computed the correct session key.


URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000491

Introduction

Zhe-Ming Lu, Shi-Ze Guo, in Lossless Information Hiding in Images, 2017

1.4.2 Fragile Image Watermarking

1.4.2.1 Background

Digital watermarking has also been proposed as a possible solution for data authentication and tamper detection. The invisible authenticator, a sensitive watermark, is inserted by exploiting the visual redundancy of the human visual system (HVS) and is altered or destroyed when the cover image is modified by various linear or nonlinear transformations. Changes in the authentication watermark can be used to detect modification of the marked image and even to locate the tampered area. Because the watermark is embedded in the content of the image itself, it remains effective throughout the image's whole lifecycle.

1.4.2.2 Classification

Authentication watermarks can be classified into fragile and semifragile watermarks according to their fragility and sensitivity. A fragile watermark is very sensitive and is designed to detect every possible change in the marked image; it is therefore suited to verifying data integrity and is viewed as an alternative verification solution to a standard digital signature scheme. However, in most multimedia applications, minor data modifications are acceptable as long as the content is authentic, so the semifragile watermark was developed and is widely used in content verification. A semifragile watermark is robust to acceptable content-preserving manipulations (compression, enhancement, etc.) but fragile to malicious distortions such as feature addition or removal. It is therefore suitable for verifying the trustworthiness of the content.

1.4.2.3 Requirements

A watermarking-based authentication system can be considered effective if it satisfies the following requirements:

1.

Invisibility: The embedded watermark is invisible. This is the basic requirement for preserving the commercial quality of watermarked images. The watermarked image must be perceptually identical to the original under normal observation.

2.

Tampering detection: An authentication watermarking system should detect any tampering in a watermarked image. This is the most fundamental property for reliably testing an image's authenticity.

3.

Security: The embedded watermark cannot be forged or manipulated. In such systems the marking key is private; it should be difficult to deduce the marking key from the detection information, and the insertion of a mark by unauthorized parties should be infeasible.

4.

Identification of manipulated areas: The authentication watermark should be able to detect the location of altered areas and verify other areas as authentic. The detector should also be able to estimate what kind of modification has occurred.

1.4.2.4 Watermarking-Based Authentication System

The process of digital watermarking–based authentication is similar to that of any watermarking system; it is composed of two parts: embedding of the authentication watermark, and extraction and verification of the authentication watermark.

1.4.2.4.1 Authentication Watermark Embedding

The general description of watermark embedding is:

(1.16) c′ = E(c, a, w, K_pr)

where E(·) is the watermark embedding operator; c and c′ are the image pixels or coefficients before and after watermark embedding; w is the embedded watermark sample, generated by a pseudorandom sequence generator or a chaotic sequence; a is a tuning parameter determining the strength of the watermark so as to ensure invisibility, and it can be a constant or a just-noticeable-difference (JND) function derived from the HVS [17]; and K_pr is the private key that controls the generation of the watermark sequence or selects the locations for embedding.
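As a purely illustrative sketch of the embedding operator E(·) in Eq. (1.16), the snippet below embeds a key-seeded pseudorandom watermark into the least significant bit plane of a spatial-domain image, i.e., the tuning parameter a is fixed to one bit plane; the function name and parameters are assumptions, not part of the cited schemes.

```python
import numpy as np


def embed_watermark(c: np.ndarray, key: int) -> tuple[np.ndarray, np.ndarray]:
    """Fragile LSB embedding: write a key-seeded pseudorandom bit sequence w
    into the least significant bit of every pixel of the uint8 image c."""
    rng = np.random.default_rng(key)                      # K_pr seeds the watermark generator
    w = rng.integers(0, 2, size=c.shape, dtype=np.uint8)  # watermark bits
    c_marked = (c & np.uint8(0xFE)) | w                   # clear the LSB plane, then insert w
    return c_marked, w
```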

1.4.2.4.2 Authentication Watermark Extraction and Verification

The general description of watermark extraction is:

(1.17) w′ = D(I1, K_pu)

where D(·) is the watermark extraction operator, I1 is the questionable marked image, and K_pu is the public key corresponding to K_pr [18]. If the Hamming distance between the extracted and original watermarks is less than a predefined threshold, the modification of the marked image is considered acceptable and the image content is deemed authentic; otherwise the marked image is deemed unauthentic. The tampered area can be located from the differences between the extracted and original watermarks: the watermark differences of a tampered image are most likely concentrated in a particular area, whereas the differences caused by incidental manipulations such as compression are sparse and spread widely over the entire image. The tampered area can therefore be determined.
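Continuing the sketch above, a possible form of the extraction and verification step of Eq. (1.17) is shown below; for simplicity it regenerates the watermark from the same secret key (a symmetric variant rather than the K_pr/K_pu pair of the text), and the threshold value is arbitrary.

```python
import numpy as np


def verify_watermark(img: np.ndarray, key: int, threshold: float = 0.01):
    """Extract the LSB watermark from a possibly tampered uint8 image and compare
    it with the regenerated original watermark via a normalized Hamming distance."""
    rng = np.random.default_rng(key)
    w_orig = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    w_extracted = img & np.uint8(1)                 # recover the embedded bit plane
    diff = w_extracted != w_orig                    # per-pixel disagreement map
    authentic = diff.mean() <= threshold            # small distance -> content accepted
    return authentic, diff                          # diff localizes the tampered pixels
```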

1.4.2.5 Overview of Techniques

Many early authentication watermarking systems embed the mark in the spatial domain of an image. Some watermark schemes can easily detect random changes to an image but fail to locate the tampered area. An example is the fragile mark embedded in the least significant bit (LSB) plane of an image [19].

Later authentication watermarking schemes were developed in transform domains, such as the DCT and wavelet domains. The properties of a transform can be used to characterize how the image has been damaged, and the choice of watermark embedding locations allows the sensitivity of the authentication watermark to be adjusted flexibly. For example, if one is only interested in determining whether an image has been tampered with, one could use a special type of signal that is easily destroyed by slight modifications, e.g., an encrypted JPEG-compressed image file. On the other hand, if one is interested in determining which part of an image has been altered, one should embed the watermark in each DCT block or wavelet detail subband to find out which part has been modified. Some authentication watermarking schemes are developed from spread-spectrum-based robust watermarking algorithms [20,21]. The semifragile watermarks are added to the low-to-middle-frequency DCT coefficients or the low-resolution wavelet detail subbands as additive white Gaussian noise. At the detector, the correlation between the original watermark sequence and the extracted watermark or marked image is used to determine the authenticity of the test image. Because incidental manipulations such as compression have little influence on the low-to-middle-frequency coefficients, whereas tampering affects them significantly, these algorithms can detect whether an image has been tampered with but cannot locate the tampered area.

Because the authentication watermark is sensitive to noise, quantization techniques are widely used in authentication schemes; as a result, the effect of the noise contributed by the cover image is concealed. Kundur [22,23] proposed a semifragile watermarking authentication scheme based on the wavelet transform. The image is decomposed using Haar wavelets, and both the embedding and extraction of the authentication watermark depend on quantizing wavelet transform coefficients selected by a secret key. The spatial-frequency localization of the wavelet transform helps to locate and characterize the tampered area. Yu et al. [24] extended Kundur's scheme, modeling the probabilities of watermark errors caused by malicious tampering and by incidental distortion as Gaussian distributions with large and small variances, respectively, and computing the best number of coefficients in which to embed the watermark at each scale so that the trade-off between robustness and fragility is optimized; the scheme can thus detect maliciously tampered areas while tolerating some incidental distortions.
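Kundur's scheme quantizes secret-key-selected wavelet coefficients to carry watermark bits. The snippet below is a generic quantization-index-modulation (QIM) sketch of that embed/extract idea with an arbitrary step size; the wavelet decomposition, the key-based coefficient selection, and the specific quantizer of [22,23] are omitted, so treat it only as an illustration of the quantization step.

```python
import numpy as np


def qim_embed(coeff: float, bit: int, delta: float = 8.0) -> float:
    """Quantize a (e.g., wavelet) coefficient onto the lattice associated with the bit."""
    offset = delta / 2 if bit else 0.0
    return float(np.round((coeff - offset) / delta) * delta + offset)


def qim_extract(coeff: float, delta: float = 8.0) -> int:
    """Return the bit whose lattice lies closest to the received coefficient."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return int(d1 < d0)
```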


URL: https://www.sciencedirect.com/science/article/pii/B9780128120064000012

Securing Biometric Data

Anthony Vetro, ... Jonathan S. Yedidia, in Distributed Source Coding, 2009

One class of methods for securing biometric systems is “transform-based.” Transform-based approaches essentially extract features from an enrollment biometric using a complicated transform. Authentication is performed by pattern matching in the transform domain. Security is assumed to come from the choice of a good transform that masks the original biometric data. In some cases, the transform itself is assumed to be kept secret, and design considerations must be made to ensure this secrecy. Particularly when the transform itself is compromised, it is difficult to prove rigorously the security of such systems. Notable techniques in this category include cancelable biometrics [2, 3], score matching-based techniques [4], and threshold-based biohashing [5].

This chapter focuses on an alternative class of methods that are based on using some form of “helper data.” In such schemes, user-specific helper data is computed and stored from an enrollment biometric. The helper data itself and the method for generating this data can be known and is not required to be secret. To perform authentication of a probe biometric, the stored helper data is used to reconstruct the enrollment biometric from the probe biometric. However, the helper data by itself should not be sufficient to reconstruct the enrollment biometric. A cryptographic hash of the enrollment data is stored to verify bitwise exact reconstruction.

Architectural principles underlying helper data-based approaches can be found in the information-theoretic problem of “common randomness” [6]. In this setting, different parties observe dependent random quantities (the enrollment and the probe) and then through finite-rate discussion (perhaps intercepted by an eavesdropper) attempt to agree on a shared secret (the enrollment biometric). In this context, error-correction coding (ECC) has been proposed to deal with the joint problem of providing security against attackers, while accounting for the inevitable variability between enrollment and probe biometrics. On the one hand, the error-correction capability of an error-correcting code can accommodate variations between multiple measurements of the same biometric. On the other hand, the check bits of the error-correction code perform much the same function as a cryptographic hash of a password on conventional access-control systems. Just as attackers cannot invert the hash and steal the password, they cannot use the check bits to recover and steal the biometric.

An important advantage of helper data-based approaches relative to transform-based approaches is that the security and robustness of helper data-based schemes are generally easier to quantify and prove. The security of transform-based approaches is difficult to analyze since there is no straightforward way to quantify security when the transform algorithm itself is compromised. In helper data-based schemes, this information is known to an attacker, and the security is based on the performance bounds of error-correcting codes, which have been deeply studied.

To the best of our knowledge, Davida, Frankel, and Matt were the first to consider the use of ECC in designing a secure biometrics system for access control [7]. Their approach seems to have been developed without knowledge of the work on common randomness in the information theory community. They describe a system for securely storing a biometric and focus on three key aspects: security, privacy, and robustness. They achieve security by signing all stored data with a digital signature scheme, and they achieve privacy and robustness by using a systematic algebraic error-correcting code to store the data. A shortcoming of their scheme is that the codes employed are only decoded using bounded distance decoding. In addition, the security is hard to assess rigorously and there is no experimental validation using real biometric data.

The work by Juels and Wattenberg [8] extends the system of Davida et al. [7] by introducing a different way of using error-correcting codes. Their approach is referred to as "fuzzy commitment." In the enrollment stage the initial biometric is measured, and a random codeword of an error-correcting code is chosen. The hash of this codeword, along with the difference between the enrollment biometric and the codeword, is stored. During authentication, a second measurement of the user's biometric is obtained, the difference between this probe biometric and the stored difference is computed, and error correction is then carried out to recover the codeword. Finally, if the hash of the resulting codeword matches the hash of the original codeword, access is granted. Since the hash is difficult to invert, the codeword is not revealed. The value of the initial biometric is hidden by subtracting a random codeword from it, so the secure biometric hides both the codeword and the biometric data. This scheme relies heavily on the linearity/ordering of the encoded space to perform the difference operations. In reality, however, the feature space may not match such linear operations well.
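A toy sketch of the fuzzy commitment idea is given below, with a repetition code standing in for a real error-correcting code and SHA-256 as the hash; the function names, the choice of code, and the bit-vector representation of the biometric are all illustrative assumptions.

```python
import hashlib
import secrets

import numpy as np


def _encode(bits: np.ndarray, rep: int = 5) -> np.ndarray:
    """Toy ECC: repeat every bit `rep` times (a stand-in for a real BCH/RS code)."""
    return np.repeat(bits, rep)


def _decode(bits: np.ndarray, rep: int = 5) -> np.ndarray:
    """Majority-vote decoding of the repetition code."""
    return (bits.reshape(-1, rep).sum(axis=1) > rep // 2).astype(np.uint8)


def enroll(biometric: np.ndarray, rep: int = 5):
    """Store hash(codeword) plus the offset biometric XOR codeword (the helper data)."""
    k = len(biometric) // rep
    secret_bits = np.frombuffer(secrets.token_bytes(k), dtype=np.uint8) % 2
    codeword = _encode(secret_bits, rep)
    helper = biometric[: k * rep] ^ codeword                       # reveals neither input alone
    commitment = hashlib.sha256(codeword.tobytes()).hexdigest()
    return helper, commitment


def authenticate(probe: np.ndarray, helper: np.ndarray, commitment: str, rep: int = 5) -> bool:
    """Recover the codeword from a noisy probe and check it against the stored hash."""
    noisy = probe[: len(helper)] ^ helper                          # codeword plus biometric noise
    rebuilt = _encode(_decode(noisy, rep), rep)
    return hashlib.sha256(rebuilt.tobytes()).hexdigest() == commitment
```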

A practical implementation of a fuzzy commitment scheme for iris data is presented in [9]. The authors utilize a concatenated-coding scheme in which Reed–Solomon codes are used to correct errors at the block level of an iris (e.g., burst errors due to eyelashes), while Hadamard codes are used to correct random errors at the binary level (e.g., background errors). They report a false reject rate of 0.47 percent at a key length of 140 bits on a small proprietary database including 70 eyes and 10 samples for each eye. As the authors note, however, the key length does not directly translate into security, and they estimate a security of about 44 bits. It is also suggested in [9] that passwords could be added to the scheme to substantially increase security.

In [10] Juels and Sudan proposed the fuzzy vault scheme. This is a cryptographic construct designed to work with unordered sets of data. The fuzzy vault scheme essentially combines the polynomial reconstruction problem with ECC. Briefly, a set of t values is extracted from the enrollment biometric, and a length-k vector of secret data (i.e., the encryption key) is encoded using an (n, k) ECC. For each element of the enrollment biometric, a measurement–codeword pair is stored as part of the vault. Additional random "chaff" points are also stored, with the objective of obscuring the secret data. In order to unlock the vault, an attacker must be able to separate the chaff points from the legitimate points in the vault, which becomes increasingly difficult as the number of chaff points grows. To perform authentication, a set of values from a probe biometric is used to initialize a codeword, which is then subjected to erasure and error decoding to attempt recovery of the secret data.
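A toy sketch of the vault-locking step is shown below, over a small prime field with Horner evaluation of the secret polynomial; the field size, the chaff count, and the integer representation of the biometric features are assumptions, and the unlocking step (matching probe features against the vault and running errors-and-erasures decoding) is omitted.

```python
import random

PRIME = 2**31 - 1  # toy prime field, for illustration only


def _poly_eval(coeffs: list[int], x: int) -> int:
    """Evaluate the secret polynomial (coefficients = secret data) at x over GF(PRIME)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc


def lock_vault(features: list[int], secret_coeffs: list[int], num_chaff: int = 200):
    """Genuine points lie on the secret polynomial; chaff points deliberately do not."""
    vault = [(x, _poly_eval(secret_coeffs, x)) for x in features]
    genuine_x = set(features)
    while len(vault) < len(features) + num_chaff:
        x = random.randrange(PRIME)
        y = random.randrange(PRIME)
        if x not in genuine_x and y != _poly_eval(secret_coeffs, x):
            vault.append((x, y))                 # chaff point off the polynomial
    random.shuffle(vault)                        # hide which points are genuine
    return vault
```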

One of the main contributions of the fuzzy vault work was to realize that the set overlap noise model described in [10] can effectively be transformed into a standard errors and erasures noise model. This allowed application of Reed-Solomon codes, which are powerful codes and sufficiently analytically tractable to obtain some privacy guarantees. The main shortcoming is that the set overlap noise model is not realistic for most biometrics since feature points typically vary slightly from one biometric measurement to the next rather than either matching perfectly or not matching at all.

Nonetheless, several fuzzy vault schemes applied to various biometrics have been proposed. Clancy et al. [11] proposed to use the X–Y locations of fingerprint minutiae points to encode the secret polynomial, and they describe a random point-packing technique to fill in the chaff points. The authors estimate 69 bits of security and report a false reject rate of 30 percent. Yang and Verbauwhede [12] also used the minutiae point locations of fingerprints for their fuzzy vault scheme. However, they convert minutiae points to a polar coordinate system with respect to an origin that is determined based on a similarity metric over multiple fingerprints. This scheme was evaluated on a very small database of 10 fingers, and a false reject rate of 17 percent was reported.

There do exist variants of the fuzzy vault scheme that do not employ ECC. For instance, the work of Uludag et al. [13] employs cyclic redundancy check (CRC) bits to identify the actual secret from several candidates. Nandakumar et al. [14] further extended this scheme in a number of ways to increase the overall robustness of this approach. On the FVC2002-DB2 database [15], this scheme achieves a 9 percent false reject rate (FRR) and a 0.13 percent false accept rate (FAR). The authors also estimate 27 to 40 bits of security depending on the assumed distribution of minutiae points.

As is evident from the literature, error-correcting codes indeed provide a powerful mechanism to cope with variations in biometric data. While the majority of schemes have been proposed in the context of fingerprint and iris data, there also exist schemes that target face, signature, and voice data. Some schemes that make use of multibiometrics are also beginning to emerge. Readers are referred to review articles on biometrics and security for further information on work in this area [16, 17].

In the sections that follow, the secure biometrics problem is formulated in the context of distributed source coding. We first give a more formal description of the problem setup, and we then describe solutions using techniques that draw from information theory, probabilistic inference, signal processing, and pattern recognition. We quantify security and robustness and provide experimental results for a variety of different systems.


URL: https://www.sciencedirect.com/science/article/pii/B9780123744852000160

Authentication Techniques and Methodologies used in Wireless Body Area Networks

Munir Hussain, ... Zeeshan Iqbal, in Journal of Systems Architecture, 2019

2.4.3 Digital signature scheme

A digital signature scheme is a mathematical technique used in network security that applies a hash function to the message/data in order to provide integrity, non-repudiation, and authenticity [21]. The technique generally relies on public-key cryptography. Whenever a node wants to send a message to another node, the original message is first hashed to produce a message digest; the digest is then signed with the sender's private key, and the message together with the signature is forwarded towards the destination. Because only the sender holds the private key, intermediate nodes cannot alter the original message and forge a matching signature (note that the signature by itself does not hide the message content). Once the message is received at the other end, the signature is verified with the sender's public key against a freshly computed hash of the received message; if the verification succeeds, the message is accepted as authentic, otherwise it is treated as an attack.
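As a hedged illustration of this hash-then-sign-then-verify flow, the sketch below uses SHA-256 and Ed25519 from the Python cryptography library; the message, variable names, and choice of algorithms are assumptions made for illustration and are not taken from the cited WBAN schemes.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sender (e.g., a body sensor node): keys are generated once; the public key is
# distributed to verifiers in advance.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"patient heart rate: 72 bpm"
digest = hashlib.sha256(message).digest()   # 1. hash the message to a fixed-size digest
signature = private_key.sign(digest)        # 2. sign the digest with the private key
# The node forwards (message, signature) towards the destination.

# Receiver: recompute the digest and verify the signature with the public key.
try:
    public_key.verify(signature, hashlib.sha256(message).digest())
    print("signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("verification failed: treat the message as a possible attack")
```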


URL: https://www.sciencedirect.com/science/article/pii/S138376211930462X

Comprehensive survey on privacy-preserving protocols for sealed-bid auctions

Ramiro Alvarez, Mehrdad Nojoumian, in Computers & Security, 2020

2.3 Digital signature scheme

A digital signature scheme (Diffie and Hellman, 1976) confirms that the sender of a message is its claimed source and that the message is the original, unmodified message. In other words, digital signatures can be used to provide properties such as authenticity and integrity. One way to construct a digital signature scheme is to use a public-key cryptosystem along with a hash function. The digital signature is generated by hashing the original message and encrypting the hash value with the private key rather than the public key. The signature and the message are then sent to the receiving party. Using the public key, the receiver decrypts the signature to recover the hash of the original message. If the hash value computed over the received message matches the hash value recovered from the signature, the receiver accepts the message as authentic and unchanged.
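A minimal sketch of this hash-and-sign construction is given below using RSA with PSS padding from the Python cryptography library; note that modern libraries expose signing as a single padded private-key operation rather than a literal "encrypt the hash with the private key", and the message and key size here are illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

bid = b"sealed bid: 1500 USD for lot 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Sender: the library hashes the message and applies the private-key signing operation.
signature = private_key.sign(bid, pss, hashes.SHA256())

# Receiver: rehash the received message and check it against the signature.
try:
    public_key.verify(signature, bid, pss, hashes.SHA256())
    print("bid accepted: authentic and unchanged")
except InvalidSignature:
    print("bid rejected: signature does not match")
```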


URL: https://www.sciencedirect.com/science/article/pii/S0167404818306631

On perspective of security and privacy-preserving solutions in the internet of things

Lukas Malina, ... Jiri Hosek, in Computer Networks, 2016

4.3 Group signatures and ring signatures

Common digital signature schemes are usually linkable and traceable to a user identity. If the user identity is decoupled from the verification procedure, then the privacy, authentication, and unlinkability of a user can be ensured. Group Signature (GS) schemes allow users to authenticate themselves on behalf of a group without using certificates or user identities. A user who is a member of a group can sign a message on behalf of the group and send it anonymously to a verifier. The signature is produced using the member's group secret key and is verified with a single group public key that is publicly distributed in the system.

Group signature schemes could be used in many privacy-preserving services and applications. GS, first introduced in 1991 by Chaum [46], have been investigated by many researchers, who have presented numerous schemes, for example, the scheme proposed by Boneh, Boyen and Shacham [47], that of Delerablée and Pointcheval [48], the scheme proposed by Boyen and Waters [49], and Libert, Peters and Yung's scheme [50]. Many papers, for example, [51–54], try to apply group signature schemes in Mobile Ad-hoc Networks (MANETs), Vehicular Ad hoc Networks (VANETs) and other broadcast communication systems where privacy and anonymity of senders are needed. These vehicular networks and ad hoc systems can be a subset of the IoT infrastructure.

Nevertheless, group signature schemes are not suitable for constrained devices because of many expensive operations such as modular exponentiations and bilinear pairings. The signing and verification phases of some group signature schemes take too much time even on computationally powerful nodes. For example, the signing phase of the Boneh, Boyen and Shacham scheme [47] takes several seconds on smartphones. Some GS schemes also produce larger signatures (e.g., around 6 kB in the scheme [50]) and use longer keys than classic signature schemes such as RSA or ECDSA. Therefore, the bandwidth restrictions of the IoT infrastructure and the memory restrictions of IoT devices prevent the implementation of group signature schemes in privacy-preserving IoT services.

Ring Signcryption/Ring Signature (RS) schemes can protect the sender's privacy because a receiver only knows that a ciphertext/signature comes from some member of a ring. Li et al. [55] propose a ring signcryption scheme for heterogeneous IoT data transmission between sensors and a server. Their scheme achieves confidentiality, integrity, authentication, non-repudiation, and anonymity without the need for certificates. The signcryption takes n+2 point multiplications and a few additions, hash evaluations, and XOR operations; for example, n=100 members in the ring need about 80 s to perform the signcryption on the MICA2 device with the ATmega128 8-bit processor [55]. The unsigncryption takes n point multiplications, 2 pairing operations, and a few less expensive operations (hashes, additions, etc.). Therefore, the receiver needs a powerful device (e.g., a server).


URL: https://www.sciencedirect.com/science/article/pii/S1389128616300779

Structures and data preserving homomorphic signatures

Naina Emmanuel, ... Muhammad Khurram Khan, in Journal of Network and Computer Applications, 2018

3.4 Secure short signatures

Group signatures, a central cryptographic primitive, support anonymity and accountability. Revocation is necessary for adopting such digital signature schemes in practice. The revocation scheme proposed by Libert et al. (2012) is based on the Naor–Naor–Lotspiech (NNL) framework, making it scalable and efficient in the standard model. The scheme is history-independent (members do not need to update their keys) and has very low verification cost. NNL ciphertexts are used as a revocation list in the group signature. The hardness of the proposed technique rests on the q-SFP assumption, where q is a polynomial function. The scheme's construction is based on the following algorithms (a minimal interface sketch follows the list):

Setup(λ, N): Takes a security parameter λ ∈ ℕ and the permitted number of users N.

Join: The joining user is assigned a membership certificate/key.

Revoke: Used for the revocation of unauthorized users.

Sign: Signs the message through the generated one-time signature key pair.

Verify: Returns 1 if the signature is accepted and 0 otherwise.

Open: Given the opening key, this algorithm reveals the identity of the signer of a valid signature.
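Purely as an illustration of this algorithm tuple, a possible interface could be sketched as follows; the method names, argument types, and return types are assumptions and do not reflect the actual construction of Libert et al. (2012).

```python
from abc import ABC, abstractmethod


class RevocableGroupSignature(ABC):
    """Skeleton mirroring the Setup/Join/Revoke/Sign/Verify/Open tuple listed above."""

    @abstractmethod
    def setup(self, security_param: int, max_users: int) -> None:
        """Initialize the group public key and the manager's issuing/opening keys."""

    @abstractmethod
    def join(self, user_id: int) -> bytes:
        """Issue a membership certificate/signing key to a joining user."""

    @abstractmethod
    def revoke(self, user_ids: list[int]) -> bytes:
        """Publish an updated (NNL-style) revocation list for the current epoch."""

    @abstractmethod
    def sign(self, signing_key: bytes, message: bytes) -> bytes:
        """Sign a message anonymously on behalf of the group."""

    @abstractmethod
    def verify(self, message: bytes, signature: bytes) -> bool:
        """Return True (1) if the signature is accepted and False (0) otherwise."""

    @abstractmethod
    def open(self, opening_key: bytes, message: bytes, signature: bytes) -> int:
        """Reveal the identity of the signer of a valid signature."""
```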

In Alperin-Sheriff and Apon (2017), the authors propose a technique that provides a verification key smaller by a linear factor. The verification and signing algorithms have an execution time of one second. The construction makes use of weak pseudorandom functions instead of standard pseudorandom functions (PRFs). The technique also proposes a randomized inversion of the gadget matrix G, which reduces noise growth in the homomorphic evaluations. In contrast to strong PRFs, whose outputs must be unpredictable even on adversarially chosen inputs, weak PRFs only require unpredictability on random inputs and are therefore easier to compute.

An open problem is to construct a short signature scheme with very tight security based on the SIS hardness assumption and an instantiated PRF. Boyen and Li (2016) contains a number of techniques, including lattice-based IBE, key-homomorphic, and Wang signature schemes. This scheme uses tightly secure PRFs, which imply adaptively secure short signatures; short signatures are particularly important for low-bandwidth channels. In a nutshell, the contribution of that work shows that tightly secure PRFs that are efficiently computable by Boolean circuits suffice to build tightly secure lattice signatures based on the SIS/LWE hardness assumptions.

Boneh and Zhandry (2013) present a signature scheme that is secure against quantum chosen message attacks. The authors introduce two compilers for converting classically secure signatures into quantum-secure signatures and apply these compilers to post-quantum signatures. The resulting signatures are quantum secure under generic assumptions. This yields systems that are secure against superposition attacks, which means hardware designers need to worry less about such attacks.

Definition 19

Any signature scheme, say S, is a tuple of algorithms (G, Sign, Verify), as follows:

G(λ) generates a secret/public key pair, where λ is the security parameter.

Sign outputs the signature and a new state. If the state is non-empty, then the state depends on the messages that have been signed; if the state is always empty, the scheme is considered stateless and the state variables are dropped altogether.

Verify either accepts or rejects a signature.


URL: https://www.sciencedirect.com/science/article/pii/S1084804517303739

Security and privacy for innovative automotive applications: A survey

Van Huynh Le, ... Nicola Zannone, in Computer Communications, 2018

6.2.1 PKIs

Several projects, government bodies, and standards [36,73,133,155–157] have proposed the adoption of a PKI and digital signature schemes based on ECDSA to ensure security and privacy in V2V and V2I communication. Accordingly, each vehicle should employ a private cryptographic key to sign messages. One private key can be associated with multiple short-term certificates, so-called pseudonyms, which are issued by pseudonym certificate authorities. Pseudonyms can be used to verify messages signed with the private key; however, unlike certificates in PKIs employed in other domains, pseudonyms do not contain identifying information. As a result, message integrity can be ensured without revealing the identity of the vehicle. If a legal investigation is required, authorities that have enough information (e.g., a database mapping issued pseudonyms to vehicle IDs, or suitable cryptographic keys) can perform pseudonym-to-vehicle identity resolution. In addition, authorities should be able to revoke the certificates of misbehaving vehicles.

The use of digital signatures and certificates largely satisfies integrity, authentication, and non-repudiation requirements. This approach also ensures a degree of revocable privacy. When combined with message timestamps, it also ensures message freshness. However, digital signatures and certificates introduce computational overhead in the form of complex cryptographic operations and transmission overhead in the form of certificate transmission. There have been several proposals to alleviate these problems. The EVITA project presents a hardware security module (HSM) to accelerate cryptographic operations (and to securely store keys and generate random numbers) [36,133]. Krishnan and Weimerskirch [163] propose to verify only relevant incoming messages. A disadvantage of this approach is that it requires a complex cross-layer design: the relevancy of a message is only known at the application level [192]. Various certificate omission schemes have been proposed to reduce transmission overhead [158–162]. In an omission scheme, the receivers cache incoming certificates and the sender omits certificates from selected messages; the messages can be verified if their certificates have already been cached.

Last but not least, pseudonyms may be insufficient to prevent location tracking: an attacker could deduce a complete travel path by combining pseudonyms and location information [193]. To address this, various pseudonym-changing strategies have been proposed. For example, vehicles can abstain from sending messages during random periods to ensure unlinkability between pseudonyms [164]. In particular, several vehicles can form a group such that only one group member broadcasts messages while the other members stay silent for a period, enhancing location privacy. However, silent periods are unsuitable for the periodically broadcast messages required by several safety applications. Another work proposes that vehicles trade their pseudonyms [165]. While this method improves privacy, the fact that vehicles can obtain pseudonyms through exchanges makes non-repudiation more difficult to achieve and opens opportunities for Sybil attacks.


URL: https://www.sciencedirect.com/science/article/pii/S014036641731174X

Applications of blockchain in unmanned aerial vehicles: A review

Tejasvi Alladi, ... Mohsen Guizani, in Vehicular Communications, 2020

4.3.2 Secure communication channel for swarms

Blockchain can provide reliable peer-to-peer communication channels to swarm agents and ways to overcome possible threats and attacks. In the blockchain encryption scheme, public-key cryptography and digital signature schemes are used. A pair of complementary keys, called the public and private keys, is created for each agent to provide the required capabilities. Public keys are like account numbers, publicly accessible information, whereas private keys are like passwords, secret information used to authenticate an agent's identity and the functions that it executes. In the context of UAV swarm systems, the digital signature scheme and public-key cryptography are shown in Figs. 7 and 8, respectively. Any UAV can send data to any other UAV in the system, since the public keys of all UAVs are known to every other UAV. However, only the UAV holding the private key that matches the public key used to encrypt the data will be able to decrypt it, since private keys remain private to the individual UAVs. Because the public key cannot be used for decryption, the message is secured from third parties even when they use the same channel.
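A minimal sketch of the public-key encryption pattern described above is shown below, using RSA-OAEP from the Python cryptography library purely for illustration; real UAV deployments would typically wrap a symmetric session key rather than raw telemetry, and all names and values here are assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair of the receiving UAV; the private key never leaves that agent.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()   # shared with the whole swarm

telemetry = b"waypoint update: lat 48.8566, lon 2.3522, alt 120 m"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Any UAV in the swarm can encrypt for the receiver using its public key.
ciphertext = receiver_public.encrypt(telemetry, oaep)

# Only the receiver can decrypt, because only it holds the matching private key.
assert receiver_private.decrypt(ciphertext, oaep) == telemetry
```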


URL: https://www.sciencedirect.com/science/article/pii/S2214209620300206

What are the potential security attacks on digital signatures?

Chosen Message Attack: The attacker tricks the genuine user into digitally signing a message that the user would not normally intend to sign. As a result, the attacker obtains a valid pair consisting of the signed message and its digital signature.

How can DSC be misused?

If the private key is not stored securely, it can be misused to sign an electronic record without the knowledge of the owner of the private key. In the paper world, the date and place where a document was signed are recorded, and court proceedings rely on that record.

Can digital signatures be hacked?

Properly done, a digital signature verifies the data's authenticity and integrity. However, done improperly (for example, with flawed randomness during signature generation), it can even reveal the user's private key.

What are the properties of and attacks on digital signatures?

As stated above, digital signatures provide us with three very important properties. These are authentication, integrity and non-repudiation. Authentication is the process of verifying that the individual who sends a message is really who they say they are, and not an impostor.