IMAGE Signal & Security mini-workshop
ABSTRACTS
Deep Learning in Steganography and Steganalysis since 2015
Marc Chaumont
Montpellier University, LIRMM, FRANCE
For almost ten years, the detection of a message hidden in an image was mainly carried out by computing a Rich Model (RM), followed by classification with an Ensemble Classifier (EC). In 2015, the first study using a convolutional neural network (CNN) obtained "deep learning" steganalysis results approaching those of the two-step approach (EC + RM). Since 2015, numerous publications have shown that better performance can be obtained, notably in spatial steganalysis, JPEG steganalysis, selection-channel-aware steganalysis, quantitative steganalysis, steganalysis of images of arbitrary size, etc.
In this presentation, we will discuss the infancy of CNNs in steganography / steganalysis. We will recall the purposes of steganography and steganalysis, and the generic structure of a convolutional neural network. Then we will present the best network (as of the beginning of 2018) for spatial steganalysis. Finally, if time permits, we will discuss steganography by GAN and the perspectives of the field.
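As a rough illustration of the generic CNN structure such detectors share, here is a minimal sketch in Python/PyTorch. It is not the network from the presentation; the fixed "KV" high-pass kernel and all layer sizes are illustrative assumptions, following the common pattern of early steganalysis CNNs (residual extraction, convolutional blocks, binary cover/stego classifier).

```python
# Minimal CNN steganalyser sketch (illustrative only, not the network
# from the talk). Assumes grayscale inputs, e.g. 256x256 images.
import torch
import torch.nn as nn

# Fixed high-pass "KV" residual filter, commonly used as preprocessing
# in early steganalysis CNNs to suppress image content.
KV = torch.tensor([[-1,  2, -2,  2, -1],
                   [ 2, -6,  8, -6,  2],
                   [-2,  8,-12,  8, -2],
                   [ 2, -6,  8, -6,  2],
                   [-1,  2, -2,  2, -1]], dtype=torch.float32) / 12.0

class TinyStegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.hpf = nn.Conv2d(1, 1, 5, padding=2, bias=False)
        self.hpf.weight.data = KV.view(1, 1, 5, 5)
        self.hpf.weight.requires_grad = False      # keep the filter fixed
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(32, 2)          # cover vs. stego

    def forward(self, x):
        r = self.hpf(x)                             # noise residual
        f = self.features(r).flatten(1)
        return self.classifier(f)

logits = TinyStegNet()(torch.randn(4, 1, 256, 256))  # smoke test
```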
Combining Image search and Digital Watermarking in Real life
Vivien Chappelier
Lamark, FRANCE
Lamark is a technical provider of digital imaging technologies, in particular similar-image search and digital watermarking. These technologies have long competed in the realm of Digital Rights Management, as both aim at identifying digital content. The presentation will introduce both technologies and their drawbacks, then show how a system combining the best of the two worlds creates a highly efficient tool for content tracking: a watermarking technique that is extremely robust, perfectly secure, fast to detect, and with unlimited capacity.
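The abstract does not detail the combined system, but the following toy Python sketch shows one plausible way the two components can reinforce each other: similar-image search retrieves the registered original, which turns watermark detection into a non-blind correlation test. Every function and parameter here is an illustrative assumption, not Lamark's actual technology.

```python
# Toy combination of image search and watermarking (assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def embed(img, image_id, strength=2.0):
    """Add a pseudo-random +/-1 pattern keyed by the image id."""
    w = np.random.default_rng(image_id).choice([-1.0, 1.0], size=img.shape)
    return img + strength * w

def tiny_hash(img, size=8):
    """Very crude perceptual hash: block means thresholded at the mean."""
    h, w = img.shape
    blocks = img[:h - h % size, :w - w % size]
    blocks = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).ravel()

def search(query, database):
    """Return the id of the registered original closest in hash space."""
    qh = tiny_hash(query)
    return min(database, key=lambda iid: np.sum(tiny_hash(database[iid]) != qh))

def detect(query, database):
    """Non-blind detection: subtract the retrieved original, correlate."""
    iid = search(query, database)
    residual = query - database[iid]
    w = np.random.default_rng(iid).choice([-1.0, 1.0], size=query.shape)
    score = np.mean(residual * w)          # ~strength if watermarked
    return iid, score

originals = {7: rng.normal(128, 40, (64, 64)), 9: rng.normal(128, 40, (64, 64))}
marked = embed(originals[7], 7)
print(detect(marked + rng.normal(0, 1, marked.shape), originals))
```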
Learnable Encoding: from Imaging to Security
Slava Voloshynovskiy
University of Geneva, SWITZERLAND
In this talk, we will give an overview of a basic framework of learnable encoding that encodes input data into a sparse ternary representation and ensures reconstruction of the input data, in connection with Shannon rate-distortion theory. We extend the basic framework to multi-layer systems and link it to successive refinement and deep factorizations.
Being intuitive to train and simple to encode, while flexibly incorporating various priors and constraints, this framework offers interesting possibilities for various regression and classification problems. In this talk, we will briefly demonstrate applications covering the basic elements of imaging systems, from signal sampling in large-scale architectures, compression, denoising, and single-image super-resolution, to more complex scenarios such as classification systems facing adversarial attacks.
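To make "sparse ternary representation" concrete, here is a minimal one-layer sketch in Python: project the input, keep only the k largest-magnitude responses, ternarize them to {-1, 0, +1}, and reconstruct with the transpose. The random projection and all parameter choices are assumptions; the talk's framework learns the encoder rather than fixing it.

```python
# One-layer sparse ternary encoding sketch (random projection standing
# in for a learned encoder; dimensions are illustrative).
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 512, 32                     # input dim, code dim, sparsity

W = rng.normal(size=(m, n)) / np.sqrt(n)   # stand-in for a learned encoder

def encode(x):
    z = W @ x
    t = np.zeros(m)
    top = np.argsort(np.abs(z))[-k:]       # keep k largest magnitudes
    t[top] = np.sign(z[top])               # ternarize: {-1, 0, +1}
    return t

def decode(t, alpha):
    return alpha * (W.T @ t)               # linear decoder with scale alpha

x = rng.normal(size=n)
t = encode(x)
# Least-squares scale for the reconstruction (one scalar suffices here).
xhat_dir = W.T @ t
alpha = (x @ xhat_dir) / (xhat_dir @ xhat_dir)
mse = np.mean((x - decode(t, alpha)) ** 2)
print(f"nonzeros={int(np.sum(t != 0))}, mse={mse:.3f}")
```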
Securing Binary Detection by Feature Randomisation
Mauro Barni
University of Siena, ITALY
In order to gain an advantage in the arms race with the attacker, the designer of a binary detector may rely on a subset of features chosen at random from a large set of meaningful features. Ignorant of the exact feature set, the adversary must attack a version of the detector based on the entire feature set. The effectiveness of the attack thus diminishes, since there is no guarantee that attacking a detector working in the full feature space will result in a successful attack against the reduced-feature detector. We prove both theoretically and experimentally - by applying the proposed procedure to a multimedia forensics scenario - that, thanks to random feature selection, the security of the detector increases significantly at the expense of a negligible loss of performance in the absence of attacks.
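A minimal Python sketch of the idea (a toy linear detector, not the paper's forensic experiments): the defender trains on a secret random subset of features, the attacker optimizes against the full-feature detector, and the perturbation transfers only partially.

```python
# Toy demonstration of securing a linear detector by random feature
# selection (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(2)
d, n = 200, 2000                                     # feature dim, samples

# Two Gaussian classes separated along a random direction.
mu = 0.3 * rng.normal(size=d)
X0 = rng.normal(size=(n, d)) - mu                    # class 0 (pristine)
X1 = rng.normal(size=(n, d)) + mu                    # class 1 (forged)

def train(idx):
    """Matched-filter detector using only the features in idx."""
    w = np.zeros(d)
    w[idx] = X1[:, idx].mean(0) - X0[:, idx].mean(0)
    return w / np.linalg.norm(w)

w_full = train(np.arange(d))                         # public detector
w_sub = train(rng.choice(d, d // 4, replace=False))  # secret random subset

# The attacker knows w_full only: move each forged sample just past the
# full detector's boundary (minimum-distortion attack).
scores = X1 @ w_full
X1_att = X1 - np.outer(scores + 0.1, w_full)

rate = lambda w, X: np.mean(X @ w > 0)               # detection rate
print("clean:    full %.2f  subset %.2f" % (rate(w_full, X1), rate(w_sub, X1)))
print("attacked: full %.2f  subset %.2f" % (rate(w_full, X1_att), rate(w_sub, X1_att)))
```

In this toy, the attack drives the full-feature detector to zero while the secret-subset detector still catches a sizeable fraction of attacked samples, at a small cost in clean detection rate.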
A Natural Steganography Embedding Scheme Dedicated to Color Sensors in the JPEG Domain
Patrick Bas
CNRS Centrale Lille, FRANCE
Natural steganography embeds a payload by adding noise (a stego signal) to an image acquired at sensitivity ISO1 in order to make it look like an image acquired at a higher sensitivity ISO2 > ISO1. This approach has been shown to achieve both high capacity and statistical undetectability, provided the embedder is able to correctly model the added noise. In the spatial domain, implementations have been proposed for monochrome sensors, which do not perform demosaicing, with development pipelines that apply quantization, a Gamma transform [bas:WIFS-16], and downsampling [bas:ICASSP-17]. In the JPEG domain, our recent work [denemark:hal-01687194] highlighted that models considering only first-order marginal statistics (histograms) work well for monochrome sensors, but that the embedding is highly detectable for color sensors. This is because demosaicing introduces dependencies between DCT modes belonging to the same DCT block as well as to adjacent blocks.
In this paper, we first model these dependencies by estimating the covariance matrix of the DCT coefficients of sensor noise of a given power within a (3×64)×(3×64) neighborhood. Then, using three different sublattices and computing conditional probabilities from the multivariate normal distribution, we are able to compute the embedding probability associated with each DCT coefficient. This probability is later transformed into costs and used with syndrome-trellis codes (STCs) [filler2011minimizing] to perform practical embedding. We show that this sampling strategy reduces detectability, especially for medium JPEG quality factors.
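The conditioning step follows from standard multivariate normal identities. A hedged Python sketch is given below: a small synthetic 8x8 covariance stands in for the paper's (3×64)×(3×64) estimate, and the probability-to-cost conversion shown is the generic binary relation rho = ln(1/beta - 1), which may differ from the paper's exact mapping.

```python
# Conditioning a multivariate normal and turning change probabilities
# into additive costs (illustrative sketch, not the paper's pipeline).
import numpy as np

rng = np.random.default_rng(3)

def conditional_gaussian(mu, cov, idx, x_given):
    """Mean/variance of component idx given all other components."""
    rest = np.array([i for i in range(len(mu)) if i != idx])
    s11 = cov[idx, idx]
    s12 = cov[np.ix_([idx], rest)]                 # 1 x (d-1)
    s22_inv = np.linalg.inv(cov[np.ix_(rest, rest)])
    mu_c = mu[idx] + (s12 @ s22_inv @ (x_given - mu[rest]))[0]
    var_c = s11 - (s12 @ s22_inv @ s12.T)[0, 0]
    return mu_c, var_c

# Synthetic positive-definite covariance standing in for the estimated
# covariance of DCT-domain sensor noise in a neighborhood.
d = 8
A = rng.normal(size=(d, d))
cov = A @ A.T + d * np.eye(d)
mu = np.zeros(d)
x = rng.multivariate_normal(mu, cov)

mu_c, var_c = conditional_gaussian(mu, cov, 0, x[1:])

# Map an example change probability beta to an additive embedding cost.
beta = 0.2
rho = np.log(1.0 / beta - 1.0)
print(f"cond. mean={mu_c:.3f}, cond. var={var_c:.3f}, cost={rho:.3f}")
```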
Anti-collusion Codes and Forensic Watermarking
Gwenaël Doërr
ContentArmor, FRANCE
Piracy increasingly jeopardizes the content value chain of the entertainment industry. VOD services and live events of major sport leagues are routinely redistributed illegally. Forensic watermarking and anti-collusion codes are mature technologies with decades of research behind them. On paper, combining these two techniques grants the ability to trace back the source of piracy and to enforce relevant remediation strategies. In practice, however, deploying such tracing mechanisms raises down-to-earth considerations that can turn a marriage made in heaven into a highway to hell. In this talk, G. Doërr, a forensic watermarking practitioner with 15 years of experience, will share his personal experience on this topic.
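For readers unfamiliar with anti-collusion codes, here is a compact Python sketch of a symmetric Tardos-style fingerprinting code, the textbook construction from the literature rather than any deployed system; code length, cutoff, and the simple majority-vote collusion are illustrative assumptions.

```python
# Symmetric Tardos-style anti-collusion code sketch (textbook version).
import numpy as np

rng = np.random.default_rng(4)
n_users, m, c = 50, 4000, 3                 # users, code length, colluders

# Per-position bias p_i with arcsine density on [t, 1-t].
t = 1.0 / (300 * c)
theta = rng.uniform(np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1 - t)), m)
p = np.sin(theta) ** 2

X = (rng.random((n_users, m)) < p).astype(int)     # user codewords

# Colluders mix their copies symbol by symbol (majority vote here).
colluders = rng.choice(n_users, size=c, replace=False)
y = (X[colluders].sum(0) * 2 > c).astype(int)

# Symmetric accusation score: reward agreement with the pirated copy y,
# penalize disagreement, weighted by the rarity of the symbol.
g1 = np.sqrt((1 - p) / p)                   # weight for a '1' position
g0 = np.sqrt(p / (1 - p))                   # weight for a '0' position
S = np.where(y == 1, np.where(X == 1, g1, -g0),
                     np.where(X == 0, g0, -g1)).sum(axis=1)

# Simplification: accuse the c highest scores instead of thresholding.
accused = np.argsort(S)[-c:]
print("colluders:", sorted(colluders), "accused:", sorted(accused))
```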
Fingerprint Template Protection using Minutia-pair Spectral Representations
Boris Skoric
Eindhoven University of Technology, THE NETHERLANDS
Storage of biometric data requires some form of template protection in order to preserve the privacy of people enrolled in a biometric database. One approach is to use a Helper Data System, which requires transforming the raw biometric measurement into a fixed-length representation. In this paper we extend the spectral function approach of Stanko and Skoric [WIFS2017], which provides such a fixed-length representation for fingerprints. First, we introduce a new spectral function that captures different information from the minutia orientations; it is complementary to the original spectral function, and we use both to extract information from a fingerprint image. Second, we construct a helper data system consisting of zero-leakage quantisation followed by the Code Offset Method. We present empirical data demonstrating that our helper data system incurs only a small performance penalty compared to fingerprint authentication based on the unprotected spectral functions.
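The Code Offset Method itself is standard; here is a minimal Python sketch using a repetition code (the code choice, repetition factor, and bit-error rate are illustrative assumptions, and the paper pairs the method with zero-leakage quantisation rather than the raw binarization assumed here).

```python
# Code Offset Method (fuzzy commitment) sketch with a repetition code.
import numpy as np

rng = np.random.default_rng(5)
k, r = 16, 9                                 # secret bits, repetition factor

def rep_encode(bits):
    return np.repeat(bits, r)

def rep_decode(bits):
    return (bits.reshape(-1, r).sum(1) * 2 > r).astype(int)

# Enrollment: random secret s, helper data w = encode(s) XOR x.
x = rng.integers(0, 2, k * r)                # binarized biometric template
s = rng.integers(0, 2, k)
w = rep_encode(s) ^ x                        # stored helper data

# Verification: noisy re-measurement x2 of the same finger.
noise = (rng.random(k * r) < 0.10).astype(int)   # 10% bit-error rate
x2 = x ^ noise
s_hat = rep_decode(w ^ x2)                   # w XOR x2 = encode(s) XOR e

print("secret recovered:", np.array_equal(s, s_hat))
```

The helper data w reveals (ideally) nothing about s on its own, while any sufficiently close re-measurement lets the error-correcting code recover the secret.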
Fun with Covert Channels - Gyroscopes and more
Stefan Katzenbeisser
Technische Universität Darmstadt, GERMANY
Covert channels make it possible to transmit data between a sender and a receiver over a channel that was never intended for data transmission. We investigate several covert channels on smartphones that emerge from the use of advanced sensors and actuators. Furthermore, we discuss possible attacks against the privacy of users that exploit covert channels, from data exfiltration to user tracking.
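As a toy illustration of the principle, here is a pure-simulation Python sketch (no real sensor or actuator API): bits are keyed as vibration bursts, and a simulated gyroscope trace is decoded by thresholding per-slot signal energy. All rates, amplitudes, and thresholds are illustrative assumptions.

```python
# Toy covert channel: on-off keying through a simulated gyroscope trace.
import numpy as np

rng = np.random.default_rng(6)
fs, slot = 200, 50                           # sample rate (Hz), samples/bit

def transmit(bits):
    """Vibration bursts for '1' bits, silence for '0', plus sensor noise."""
    sig = np.concatenate([
        (0.8 * np.sin(2 * np.pi * 30 * np.arange(slot) / fs) if b
         else np.zeros(slot)) for b in bits])
    return sig + rng.normal(0, 0.1, sig.size)   # gyroscope noise floor

def receive(trace):
    """Decode by comparing per-slot signal energy to a fixed threshold."""
    energy = (trace.reshape(-1, slot) ** 2).mean(1)
    return (energy > 0.05).astype(int)

bits = rng.integers(0, 2, 16)
print("sent:    ", bits)
print("received:", receive(transmit(bits)))
```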