Cyber-Physical Systems Virtual Organization
Fostering collaboration among CPS professionals in academia, government, and industry
adversarial examples
biblio
Fidelity: Towards Measuring the Trustworthiness of Neural Network Classification
Submitted by aekwall on Mon, 12/07/2020 - 11:32am
Keywords: security of data, Task Analysis, Trusted Computing, learning (artificial intelligence), pubcrawl, composability, Computational modeling, Sociology, Statistics, neural nets, Neural networks, pattern classification, machine learning, machine learning model, trustworthiness, Perturbation methods, adversarial examples, adversarial attack detection, adversarial settings, neural network classification, neural network system, security-critical tasks
biblio
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Submitted by aekwall on Mon, 09/21/2020 - 2:36pm
Keywords: security of data, learning (artificial intelligence), Resiliency, pubcrawl, Robustness, neural nets, Neural networks, Training, testing, Predictive models, deep neural networks, machine learning tasks, adversarial examples, DNNs, composability, adversarial deep learning, adversarial inputs, benign inputs, black-box adversarial attacks, cross-layer model diversity ensemble framework, defense success rates, defense-attack arms race, ensemble defense, ensemble diversity, Manifolds, MODEF, noise reduction, representative attacks, supervised model verification ensemble, unsupervised model, verification cross-layer ensemble, Cross Layer Security
biblio
Creation of Adversarial Examples with Keeping High Visual Performance
Submitted by grigby1 on Fri, 09/11/2020 - 10:46am
Keywords: learning (artificial intelligence), machine learning, CNN, convolutional neural nets, Neural networks, pubcrawl, Human behavior, security, Mathematical model, composability, Perturbation methods, Resistance, image classification, adversarial examples, visualization, convolutional neural network, image recognition, captchas, CAPTCHA, artificial intelligence, character images, character recognition, character string CAPTCHA, convolutional neural network (CNN), FGSM, high visual performance, human readability, image recognition technology
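The entry above indexes FGSM (the Fast Gradient Sign Method), a standard way to craft adversarial examples by stepping the input in the direction of the sign of the loss gradient. As a hedged sketch not drawn from any listed paper — the toy logistic-regression model, weight vector `w`, and `eps` budget below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step against a toy logistic-regression 'model'.

    With binary cross-entropy loss, dL/dx = (sigmoid(w.x) - y) * w,
    so the attack returns x + eps * sign(dL/dx), elementwise.
    """
    z = sum(wi * xi for wi, xi in zip(w, x))
    err = sigmoid(z) - y                       # dL/dz
    grad = [err * wi for wi in w]              # dL/dx
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

For an input classified as the positive class, the perturbed input scores strictly lower under the same weights, i.e. the attack moves it toward the decision boundary.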
biblio
A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input
Submitted by grigby1 on Fri, 09/04/2020 - 3:11pm
Keywords: security of data, query processing, Conferences, optimisation, pubcrawl, composability, Metrics, Resiliency, resilience, learning (artificial intelligence), neural nets, Cyberspace, machine-to-machine communications, regression analysis, Iterative methods, deep neural networks, face recognition, adversarial perturbations, gradient methods, adversarial examples, approximation theory, black-box approach, black-box setting, CNNs, data science, extensive recent works, generate adversarial examples, generating adversarial samples, high dimensional, image classification, learning models, linear fine-grained search, linear regression model, minimizing noncontinuous function, model parameters, noncontinuous step function problem, numerous advanced image classifiers, queries, white-box setting, Zeroth order, zeroth order optimization algorithm, zeroth-order optimization method, Black Box Security
biblio
Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection
Submitted by grigby1 on Fri, 06/19/2020 - 10:49am
Keywords: adversarial examples, adversarial images, computer architecture, computer vision, computer vision tasks, convolutional neural nets, deep learning, deep neural networks, discriminative noise injection strategy, distortion, dominant layers, false positive rate, false trust, layer directed discriminative noise, learning (artificial intelligence), machine learning, MobileNet, natural images, natural scenes, Neural networks, noninvasive universal perturbation attack, Perturbation methods, policy-based governance, Policy-Governed Secure Collaboration, pubcrawl, resilience, Resiliency, Scalability, Sensitivity, Training
biblio
Certified Robustness to Adversarial Examples with Differential Privacy
Submitted by aekwall on Mon, 04/20/2020 - 9:31am
Keywords: Cryptography, data privacy, security, learning (artificial intelligence), pubcrawl, Metrics, Robustness, standards, neural nets, Databases, Mathematical model, Measurement, Predictive models, differential privacy, deep neural networks, machine learning models, adversarial examples, Adversarial-Examples, certified defense, certified robustness, cryptographically-inspired privacy formalism, Deep-learning, defense, Google Inception network, ImageNet, machine-learning, norm-bounded attacks, PixelDP, Sophisticated Attacks, privacy models and measurement
biblio
Malware Evasion Attack and Defense
Submitted by grigby1 on Tue, 02/18/2020 - 10:53am
Keywords: adversarial example, adversarial examples, Adversarial Machine Learning, black-box attacks, composability, Data models, defense, defense approaches, Detectors, Evasion Attack, grey-box evasion attacks, invasive software, learning (artificial intelligence), machine learning classifiers, malware, malware detection systems, malware evasion attack, Metrics, ML classifier, ML-based malware detector, pattern classification, Perturbation methods, pubcrawl, resilience, Resiliency, security, Training, Training data, white box cryptography, White Box Security, white-box evasion attacks
file
Sharif_Gen_framework_adv_examples_Bauer.pdf
Submitted by Jamie Presken on Mon, 07/08/2019 - 9:33am
Keywords: adversarial examples, face recognition, machine learning, Neural networks, 2019: July, CMU, Metrics, Resilient Architectures, Safety Critical ML, Securing Safety-Critical Machine Learning Algorithms
biblio
A General Framework for Adversarial Examples with Objectives
Submitted by Jamie Presken on Mon, 07/08/2019 - 9:33am
Keywords: 2019: July, adversarial examples, CMU, face recognition, machine learning, Metrics, Neural networks, Resilient Architectures, Safety Critical ML, Securing Safety-Critical Machine Learning Algorithms
biblio
Defending IT Systems against Intelligent Malware
Submitted by grigby1 on Mon, 06/10/2019 - 1:02pm
Keywords: adversarial examples, Antivirus Software vendors, ART, Classification algorithms, dynamic analysis, Gallium nitride, generative adversarial network, generative adversarial networks, Human behavior, intelligent malware, invasive software, IT systems, learning (artificial intelligence), machine learning, machine learning algorithms, malware, Malware Analysis, malware binaries, malware classification, malware detection, malware families, malware images, malware variants, Metrics, neural nets, privacy, pubcrawl, resilience, Resiliency, Signatures, static analysis, Training, unsupervised deep neural networks