Biblio

Found 2385 results

Filters: Keyword is composability
2019-12-11
Zhao, Jianfeng.  2018.  Case Study: Discovering Hardware Trojans Based on Model Checking. Proceedings of the 8th International Conference on Communication and Network Security. :64–68.

A hardware Trojan may alter system functions, leak system information, and damage or paralyze a system. Following the standard hardware Trojan taxonomy, this paper considers a Trojan that is inserted at the design stage, described at the behavioral level, internally triggered, implemented in combinational logic, and that changes the function of the processor. Research institutions worldwide have proposed a variety of methods for detecting hardware Trojans. In this paper, model checking, one of the formal methods, is applied to the RTL source code of the open-source OR1200 processor to detect a hardware Trojan with a combinational-logic trigger. The experiments use the open-source EBMC model checker, with the RTL source code as the model input and SystemVerilog Assertions (SVA) as the property input. The experimental results show that model checking can serve as an effective hardware Trojan detection method.
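
As a concrete illustration of what such a check amounts to, the toy Python sketch below hides a combinational trigger in a 4-bit ALU and exposes it by exhaustively verifying a functional property. EBMC does this symbolically over the actual RTL with SVA properties; everything here is an illustrative stand-in.

```python
from itertools import product

# Toy illustration (not EBMC): a combinational ALU with a hidden Trojan
# that flips the output when a rare input pattern triggers it.
def alu(a, b, op):
    result = (a + b) & 0xF if op == 0 else (a - b) & 0xF
    trojan_trigger = (a == 0xA and b == 0x5)   # rare combinational trigger
    return result ^ 0xF if trojan_trigger else result

# "Property": the ALU must implement add/sub exactly. Exhaustive checking
# over the full input space plays the role of the model checker here.
def check_property():
    for a, b, op in product(range(16), range(16), range(2)):
        expected = (a + b) & 0xF if op == 0 else (a - b) & 0xF
        if alu(a, b, op) != expected:
            return (a, b, op)                  # counterexample, like a model checker's trace
    return None

print("counterexample:", check_property())     # reveals the trigger (10, 5, 0)
```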

Laud, Peeter, Pettai, Martin, Randmets, Jaak.  2018.  Sensitivity Analysis of SQL Queries. Proceedings of the 13th Workshop on Programming Languages and Analysis for Security. :2–12.

The sensitivity of a function is the maximum change of its output for a unit change of its input. In this paper we present a method for determining the sensitivity of SQL queries, seen as functions from databases to datasets, where the change is measured in the number of rows that differ. Given a query, a database schema and a number, our method constructs a formula that is satisfiable only if the sensitivity of the query is bigger than this number. Our method is composable, and can also be applied to SQL workflows. Our results can be used to calibrate the amount of noise that has to be added to the output of the query to obtain a certain level of differential privacy.
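
The final claim, calibrating noise to the computed sensitivity, corresponds to the standard Laplace mechanism. A minimal sketch with hypothetical values, not the paper's tooling:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy query result with epsilon-differential privacy.

    The noise scale is sensitivity / epsilon, which is where a sensitivity
    bound like the one certified by the paper's analysis is plugged in.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical numbers: suppose the analysis certifies sensitivity 1 for a
# COUNT query (one changed input row changes the output by at most one row).
print(laplace_mechanism(true_value=1000, sensitivity=1, epsilon=0.5))
```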

Ugwuoke, Chibuike, Erkin, Zekeriya, Lagendijk, Reginald L..  2018.  Secure Fixed-Point Division for Homomorphically Encrypted Operands. Proceedings of the 13th International Conference on Availability, Reliability and Security. :33:1–33:10.

Due to the privacy threats associated with computation over outsourced data, processing data in the encrypted domain has become a viable alternative. Secure computation over encrypted data is relevant for analysing datasets in areas (such as genome processing, private data aggregation, and cloud computation) that require basic arithmetic operations. Division over fully encrypted inputs has not been achieved with homomorphic schemes in non-interactive modes, and in interactive protocols the cost of obtaining an encrypted quotient from encrypted values is computationally expensive. To the best of our knowledge, existing homomorphic solutions for encrypted division are often relaxed to assume a public or private divisor. We acknowledge that other techniques, such as secret sharing and garbled circuits, have been adopted to compute secure division, but we are interested in homomorphic solutions. We propose an efficient, interactive two-party protocol that computes the fixed-point quotient of two encrypted inputs, using an efficient and secure comparison protocol as a sub-protocol. Our proposal provides a computational advantage, with complexity linear in the digit precision of the quotient. We provide a proof of security in the universally composable framework, along with complexity analyses. We present experimental results for two cryptosystem implementations in order to compare performance: an efficient prototype of our protocol implemented with an additively homomorphic scheme (Paillier), and a less efficient fully homomorphic scheme (BGV) version presented as a proof of concept, together with analyses of our proposal.
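
For a feel of the interactive setting, the sketch below shows a generic blinding pattern over Paillier ciphertexts using the python-paillier (phe) library. It is illustrative only, not the paper's protocol, and not secure as written:

```python
import random
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

# Party A holds Enc(x) and wants Enc(x // d) without learning x.
# Generic blinding pattern (NOT the paper's protocol; it also leaks
# information about x + r and ignores remainder handling in general).
x, d = 91, 7
enc_x = pub.encrypt(x)

# A blinds additively with a random r and sends Enc(x + r) to B, the key holder.
r = random.randrange(1, 10**6)
blinded = enc_x + r                      # homomorphic addition of a plaintext

# B decrypts, divides, re-encrypts, and returns Enc((x + r) // d).
quotient_blinded = pub.encrypt(priv.decrypt(blinded) // d)

# A removes the blinding contribution r // d. This is exact here because x is
# a multiple of d; real protocols handle the remainder interactively.
enc_q = quotient_blinded - (r // d)
print(priv.decrypt(enc_q))               # 13, i.e. 91 // 7
```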

Hasumi, Daichi, Shima, Shigeyoshi, Takakura, Hiroki.  2018.  Speculating Incident Zone System on Local Area Networks. Proceedings of the 2018 Workshop on Traffic Measurements for Cybersecurity. :40–45.

The triage process in incident handling lacks the ability to assess the overall risk posed by modern cyber attacks. Zoning local area networks by measuring internal network traffic in response to such risks is therefore important. We propose a SPeculating INcident Zone (SPINZ) system for supporting the triage process. SPINZ analyzes internal network flows and outputs an incident zone, composed of the devices related to the incident. We evaluate the performance of SPINZ through simulations using incident flow datasets generated from open internal-traffic data and lateral-movement traffic. As a result, we confirm that SPINZ can detect an incident zone, although removing unrelated devices from the zone is an issue requiring further investigation.
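
A minimal sketch of the zoning idea (not the SPINZ algorithm itself): treat flow records as graph edges and take the connected component around a triaged alert host as the candidate zone. The flow data below is hypothetical.

```python
import networkx as nx

# (src, dst) pairs from internal flow records, hypothetical data
flows = [("10.0.0.5", "10.0.0.9"), ("10.0.0.9", "10.0.0.12"),
         ("10.0.0.3", "10.0.0.4")]

g = nx.Graph()
g.add_edges_from(flows)

alert_host = "10.0.0.5"
incident_zone = nx.node_connected_component(g, alert_host)
print(incident_zone)   # {'10.0.0.5', '10.0.0.9', '10.0.0.12'}
```

As the abstract notes, the hard part is the converse step: pruning devices that fall into the component but are unrelated to the incident.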

Kerber, Thomas, Kiayias, Aggelos, Kohlweiss, Markulf, Zikas, Vassilis.  2019.  Ouroboros Crypsinous: Privacy-Preserving Proof-of-Stake. 2019 IEEE Symposium on Security and Privacy (SP). :157–174.
We present Ouroboros Crypsinous, the first formally analyzed privacy-preserving proof-of-stake blockchain protocol. To model its security we give a thorough treatment of private ledgers in the (G)UC setting that might be of independent interest. To prove our protocol secure against adaptive attacks, we introduce a new coin evolution technique relying on SNARKs and key-private forward secure encryption. The latter primitive, and the associated construction, can be of independent interest. We stress that existing approaches to private blockchains, such as the proof-of-work-based Zerocash, are analyzed only against static corruptions.
Yan-Tao, Zhong.  2018.  Lattice Based Authenticated Key Exchange with Universally Composable Security. 2018 International Conference on Networking and Network Applications (NaNA). :86–90.

The Internet of Things (IoT) has developed rapidly in recent years, while its security and privacy remain a major challenge. One of the main security goals for the IoT is to build secure and authenticated channels between IoT nodes. A common way to achieve this goal is to use an authenticated key exchange protocol. However, with the progress of quantum computation, most authenticated key exchange protocols in use today are threatened by the rise of quantum computers. In this study, we address this problem by using a ring-SIS-based KEM and a hash function to construct an authenticated key exchange scheme, so that the scheme rests on lattice problems believed to be hard even against quantum attacks. We also prove the universally composable security of our scheme. The scheme can hence remain secure while running in complex environments.
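
The construction follows the familiar KEM-plus-hash pattern: encapsulate a shared secret against the peer's public key, hash the secret together with the transcript into a session key, and confirm it. The sketch below shows only that message flow, with an insecure stub standing in for the ring-SIS KEM; it is a schematic of the pattern, not the paper's scheme.

```python
import hashlib, hmac, os

def kem_keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()   # stand-in; a real KEM derives pk differently
    return pk, sk

def kem_encap(pk):
    shared = os.urandom(32)
    ct = shared                        # stub: ct IS the secret (insecure, flow only)
    return ct, shared

def kem_decap(sk, ct):
    return ct                          # stub: a real KEM recovers the secret via sk

pk_b, sk_b = kem_keygen()
ct, ss_a = kem_encap(pk_b)             # A -> B: ciphertext
ss_b = kem_decap(sk_b, ct)

transcript = pk_b + ct
k_a = hashlib.sha256(ss_a + transcript).digest()   # session key = H(secret || transcript)
k_b = hashlib.sha256(ss_b + transcript).digest()
tag = hmac.new(k_b, b"key-confirm", "sha256").digest()  # B -> A: key confirmation
assert hmac.compare_digest(tag, hmac.new(k_a, b"key-confirm", "sha256").digest())
```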

Hogan, Kyle, Maleki, Hoda, Rahaeimehr, Reza, Canetti, Ran, van Dijk, Marten, Hennessey, Jason, Varia, Mayank, Zhang, Haibin.  2019.  On the Universally Composable Security of OpenStack. 2019 IEEE Cybersecurity Development (SecDev). :20–33.
We initiate an effort to provide a rigorous, holistic and modular security analysis of OpenStack. OpenStack is the prevalent open-source, non-proprietary package for managing cloud services and data centers. It is highly complex and consists of multiple inter-related components which are developed by separate, loosely coordinated groups. All of these properties make the security analysis of OpenStack both a worthy mission and a challenging one. We base our modeling and security analysis in the universally composable (UC) security framework. This allows specifying and proving security in a modular way, a crucial feature when analyzing systems of such magnitude. Our analysis has the following key features: 1) It is user-centric: it stresses the security guarantees given to users of the system in terms of privacy, correctness, and timeliness of the services. 2) It considers the security of OpenStack even when some of the components are compromised. This departs from the traditional design approach of OpenStack, which assumes that all services are fully trusted. 3) It is modular: it formulates security properties for individual components and uses them to prove security properties of the overall system. This work concentrates on the high-level structure of OpenStack, leaving the further formalization and more detailed analysis of specific OpenStack services to future work. Specifically, we formulate ideal functionalities that correspond to some of the core OpenStack modules, and then prove security of the overall OpenStack protocol given the ideal components. As demonstrated within, the main challenge in the high-level design is to provide adequately fine-grained scoping of permissions to access dynamically changing system resources. We demonstrate security issues with current mechanisms in case of failure of some components, propose alternative mechanisms, and rigorously prove adequacy of the new mechanisms within our modeling.
Skrobot, Marjan, Lancrenon, Jean.  2018.  On Composability of Game-Based Password Authenticated Key Exchange. 2018 IEEE European Symposium on Security and Privacy (EuroS P). :443–457.

It is standard practice that the secret key derived from an execution of a Password Authenticated Key Exchange (PAKE) protocol is used to authenticate and encrypt some data payload using a Symmetric Key Protocol (SKP). Unfortunately, most PAKEs of practical interest are studied using so-called game-based models, which – unlike simulation models – do not guarantee secure composition per se. However, Brzuska et al. (CCS 2011) have shown that a middle ground is possible in the case of authenticated key exchange that relies on Public-Key Infrastructure (PKI): the game-based models do provide secure composition guarantees when the class of higher-level applications is restricted to SKPs. The question that we pose in this paper is whether or not a similar result can be exhibited for PAKE. Our work answers this question positively. More specifically, we show that PAKE protocols secure according to the game-based Real-or-Random (RoR) definition with the weak forward secrecy of Abdalla et al. (S&P 2015) allow for safe composition with arbitrary, higher-level SKPs. Since there is evidence that most PAKEs secure in the Find-then-Guess (FtG) model are in fact secure according to the RoR definition, we can conclude that nearly all provably secure PAKEs enjoy a certain degree of composition, one that at least covers the case of implementing secure channels.

Canetti, Ran, Stoughton, Alley, Varia, Mayank.  2019.  EasyUC: Using EasyCrypt to Mechanize Proofs of Universally Composable Security. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :167–16716.
We present a methodology for using the EasyCrypt proof assistant (originally designed for mechanizing the generation of proofs of game-based security of cryptographic schemes and protocols) to mechanize proofs of security of cryptographic protocols within the universally composable (UC) security framework. This allows, for the first time, the mechanization and formal verification of the entire sequence of steps needed for proving simulation-based security in a modular way: Specifying a protocol and the desired ideal functionality; Constructing a simulator and demonstrating its validity, via reduction to hard computational problems; Invoking the universal composition operation and demonstrating that it indeed preserves security. We demonstrate our methodology on a simple example: stating and proving the security of secure message communication via a one-time pad, where the key comes from a Diffie-Hellman key-exchange, assuming ideally authenticated communication. We first put together EasyCrypt-verified proofs that: (a) the Diffie-Hellman protocol UC-realizes an ideal key-exchange functionality, assuming hardness of the Decisional Diffie-Hellman problem, and (b) one-time-pad encryption, with a key obtained using ideal key-exchange, UC-realizes an ideal secure-communication functionality. We then mechanically combine the two proofs into an EasyCrypt-verified proof that the composed protocol realizes the same ideal secure-communication functionality. Although formulating a methodology that is both sound and workable has proven to be a complex task, we are hopeful that it will prove to be the basis for mechanized UC security analyses for significantly more complex protocols and tasks.
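
The running example composes two pieces that are easy to render concretely: a Diffie-Hellman exchange and a one-time pad keyed by its result. A toy sketch with illustrative parameters; the security argument lives in the EasyCrypt proofs, not in this code.

```python
import random

# Toy composed protocol: DH key exchange, then one-time-pad encryption.
p, g = 2**127 - 1, 3                      # illustrative group (p is a Mersenne prime)

a = random.randrange(2, p - 1)            # Alice's secret exponent
b = random.randrange(2, p - 1)            # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)         # exchanged over authenticated channels

key = pow(B, a, p)                        # shared key
assert key == pow(A, b, p)

msg = 42
cipher = msg ^ (key & 0xFF)               # one-time pad on a small message
print(cipher ^ (key & 0xFF))              # receiver decrypts: 42
```
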
2019-12-10
Ponuma, R, Amutha, R, Haritha, B.  2018.  Compressive Sensing and Hyper-Chaos Based Image Compression-Encryption. 2018 Fourth International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). :1-5.

A 2D compressive sensing and hyper-chaos-based image compression-encryption algorithm is proposed. The 2D image is compressively sampled and encrypted using two measurement matrices. A chaos-based measurement matrix construction is employed: the construction of the measurement matrix is controlled by the initial and control parameters of the chaotic system, which serve as the secret key for encryption. The linear measurements of the sparse coefficients of the image are then subjected to a hyper-chaos-based diffusion, which results in the cipher image. Numerical simulation and security analysis are performed to verify the validity and reliability of the proposed algorithm.
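
A sketch of the key-controlled measurement-matrix idea, with a 1-D logistic map standing in for the paper's hyper-chaotic system; the map's initial value and parameter play the role of the secret key, and all parameters are illustrative.

```python
import numpy as np

def logistic_matrix(m, n, x0=0.3579, mu=3.99):
    """Build an m x n measurement matrix from a logistic-map orbit.

    (x0, mu) act as the secret key: a different key gives a different matrix.
    """
    x, vals = x0, []
    for _ in range(m * n):
        x = mu * x * (1 - x)               # logistic map iteration
        vals.append(x)
    phi = (np.array(vals).reshape(m, n) - 0.5) * 2.0
    return phi / np.sqrt(m)                # roughly normalized rows

n, m = 256, 64                             # compress 256 samples to 64 measurements
phi = logistic_matrix(m, n)
x = np.zeros(n); x[[5, 90, 200]] = [1.0, -0.7, 0.4]   # sparse test signal
y = phi @ x                                # CS measurements (then diffused/encrypted)
print(y.shape)
```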

Tian, Yun, Xu, Wenbo, Qin, Jing, Zhao, Xiaofan.  2018.  Compressive Detection of Random Signals from Sparsely Corrupted Measurements. 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). :389-393.

Compressed sensing (CS) integrates sampling and compression into a single step to reduce the amount of data to be processed. However, CS reconstruction generally suffers from high complexity. To address this problem, compressive signal processing (CSP) has recently been proposed to carry out some signal processing tasks directly in the compressive domain, without reconstruction. Among various CSP techniques, compressive detection performs signal detection on the CS measurements themselves. This paper investigates the compressive detection of random signals when the measurements are corrupted. Unlike current studies, which consider only dense noise, our study considers both dense noise and sparse error. The theoretical performance is derived, and simulations are provided to verify the derived theoretical results.
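
A minimal numpy illustration of the setting (not the paper's detector or its theoretical statistic): measurements are corrupted by both dense noise and a sparse error, and a simple clipped energy detector is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 128

phi = rng.standard_normal((m, n)) / np.sqrt(m)       # measurement matrix
s = rng.standard_normal(n)                           # random signal under H1
dense = 0.1 * rng.standard_normal(m)                 # dense noise
sparse = np.zeros(m)
sparse[rng.choice(m, 5, replace=False)] = 3.0        # sparse error (few big hits)

y_h1 = phi @ s + dense + sparse                      # signal present
y_h0 = dense + sparse                                # signal absent

# Simple robustness step: clip large-magnitude entries to damp the sparse
# error, then apply an energy detector (illustrative, not the paper's test).
def detect(y, thresh, clip=1.0):
    return np.sum(np.clip(y, -clip, clip) ** 2) > thresh

print(detect(y_h1, thresh=30.0), detect(y_h0, thresh=30.0))  # True, False
```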

Shiddik, Luthfi Rakha, Novamizanti, Ledya, Ramatryana, I N Apraz Nyoman, Hanifan, Hasya Azqia.  2019.  Compressive Sampling for Robust Video Watermarking Based on BCH Code in SWT-SVD Domain. 2019 International Conference on Sustainable Engineering and Creative Computing (ICSECC). :223-227.

The security and confidentiality of data can be guaranteed by a technique called watermarking. In this study, compressive sampling is designed and analyzed for video watermarking. Before the compression process, the watermark is encoded with the Bose-Chaudhuri-Hocquenghem (BCH) code. The watermark is then processed using the Discrete Sine Transform (DST) and the Discrete Wavelet Transform (DWT), and inserted into the host video using the Stationary Wavelet Transform (SWT) and Singular Value Decomposition (SVD) methods. Our system achieves a PSNR of 47.269 dB, an MSE of 1.712, and a BER of 0.080, and is resistant to Gaussian blur and rescaling noise attacks.
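
Of the several stages, the SVD embedding step is the easiest to make concrete. A numpy sketch of that step alone, with hypothetical data and embedding strength; the full scheme additionally BCH-encodes the watermark, transforms it with DST/DWT, and operates on SWT sub-bands of the video frames.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((64, 64))               # stand-in for one SWT sub-band
watermark = rng.integers(0, 2, 64)         # payload bits, hypothetical

u, s, vt = np.linalg.svd(frame)
alpha = 0.01                               # embedding strength (assumed)
s_marked = s + alpha * watermark           # additively modulate singular values
marked = (u * s_marked) @ vt               # reassemble the marked sub-band

print(np.abs(marked - frame).max())        # distortion stays small
```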

Zhou, Guorui, Zhu, Xiaoqiang, Song, Chenru, Fan, Ying, Zhu, Han, Ma, Xiao, Yan, Yanghui, Jin, Junqi, Li, Han, Gai, Kun.  2018.  Deep Interest Network for Click-Through Rate Prediction. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :1059-1068.

Click-through rate prediction is an essential task in industrial applications such as online advertising. Recently, deep learning based models have been proposed that follow a similar Embedding&MLP paradigm: large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector regardless of the candidate ads. This fixed-length vector becomes a bottleneck, making it difficult for Embedding&MLP methods to capture a user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit that adaptively learns the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help train industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
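
The local activation unit can be sketched in a few lines: behavior embeddings are weighted by their relevance to the candidate ad rather than pooled into one fixed vector. Below, a dot-product scorer stands in for DIN's small feed-forward activation network; shapes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_behaviors = 16, 8
behaviors = rng.standard_normal((n_behaviors, d))   # user's historical items
candidate_ad = rng.standard_normal(d)               # embedding of the candidate ad

scores = behaviors @ candidate_ad                   # relevance to this specific ad
weights = np.maximum(scores, 0.0)                   # kept unnormalized (no softmax)
user_interest = weights @ behaviors                 # ad-dependent user representation

print(user_interest.shape)                          # (16,): varies per candidate ad
```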

Tai, Kai Sheng, Sharan, Vatsal, Bailis, Peter, Valiant, Gregory.  2018.  Sketching Linear Classifiers over Data Streams. Proceedings of the 2018 International Conference on Management of Data. :757-772.

We introduce a new sub-linear space sketch—the Weight-Median Sketch—for learning compressed linear classifiers over data streams while supporting the efficient recovery of large-magnitude weights in the model. This enables memory-limited execution of several statistical analyses over streams, including online feature selection, streaming data explanation, relative deltoid detection, and streaming estimation of pointwise mutual information. Unlike related sketches that capture the most frequently-occurring features (or items) in a data stream, the Weight-Median Sketch captures the features that are most discriminative of one stream (or class) compared to another. The Weight-Median Sketch adopts the core data structure used in the Count-Sketch, but, instead of sketching counts, it captures sketched gradient updates to the model parameters. We provide a theoretical analysis that establishes recovery guarantees for batch and online learning, and demonstrate empirical improvements in memory-accuracy trade-offs over alternative memory-budgeted methods, including count-based sketches and feature hashing.
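
A compact sketch of the core data structure, simplified from the paper: a Count-Sketch-style table that accumulates signed gradient updates rather than counts, with weight estimates recovered by taking medians across rows.

```python
import numpy as np

R, W, D = 5, 64, 10_000                   # rows, table width, feature dimension
rng = np.random.default_rng(1)
table = np.zeros((R, W))
h = rng.integers(0, W, size=(R, D))       # bucket hashes (fixed random functions)
s = rng.choice([-1.0, 1.0], size=(R, D))  # sign hashes

def update(grad_indices, grad_values, lr=0.1):
    """Apply a sparse gradient step to the sketched weight vector."""
    for j, g in zip(grad_indices, grad_values):
        table[np.arange(R), h[:, j]] -= lr * g * s[:, j]

def estimate(j):
    """Estimate weight j as the median of its signed cells."""
    return np.median(table[np.arange(R), h[:, j]] * s[:, j])

update([7, 42], [1.5, -2.0])
print(estimate(7), estimate(42))          # about -0.15 and +0.20
```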

Deng, Lijin, Piao, Yan, Liu, Shuo.  2018.  Research on SIFT Image Matching Based on MLESAC Algorithm. Proceedings of the 2Nd International Conference on Digital Signal Processing. :17-21.

Differences between sensor devices and camera position offsets lead to geometric differences between the images being matched. The traditional SIFT image matching algorithm produces a large number of incorrect matching point pairs, so its matching accuracy is low. To solve this problem, a SIFT image matching method based on the Maximum Likelihood Estimation Sample Consensus (MLESAC) algorithm is proposed. Compared with the traditional SIFT, SURF, and RANSAC feature matching algorithms, the proposed algorithm effectively removes false matching point pairs during the matching process. Experimental results show that the proposed algorithm has higher matching accuracy and faster matching efficiency.
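
The pipeline is straightforward to sketch with OpenCV, which ships RANSAC and USAC robust estimators; the paper's MLESAC scorer would take the place of the flag used below. Image file names are hypothetical.

```python
import cv2
import numpy as np

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
# Robust model fitting rejects false matches; MLESAC replaces this scorer.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(mask.sum())} of {len(good)} matches kept")
```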

Feng, Chenwei, Wang, Xianling, Zhang, Zewang.  2018.  Data Compression Scheme Based on Discrete Sine Transform and Lloyd-Max Quantization. Proceedings of the 3rd International Conference on Intelligent Information Processing. :46-51.

With the growth of mobile equipment and transmission data, the Common Public Radio Interface (CPRI) between the Base Band Unit (BBU) and the Remote Radio Unit (RRU) must carry ever-increasing amounts of data. Compressing the data on the CPRI is essential if more data is to be transferred without congestion while restricting fiber consumption. A data compression scheme based on the Discrete Sine Transform (DST) and Lloyd-Max quantization is proposed for the distributed Base Station (BS) architecture. The time-domain samples are transformed by the DST according to the characteristics of Orthogonal Frequency Division Multiplexing (OFDM) baseband signals, and the transformed coefficients are then quantized by the Lloyd-Max quantizer. The simulation results show that the proposed scheme can operate at various Compression Ratios (CRs) while keeping the Error Vector Magnitude (EVM) within the limits specified by 3GPP.
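
The two stages are easy to sketch: a DST on the time-domain samples, then scalar quantization of the coefficients. Since Lloyd-Max is Lloyd's algorithm in one dimension, 1-D k-means stands in for the quantizer design below; the signal and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dst, idst
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
samples = rng.standard_normal(1024)            # stand-in for OFDM baseband samples

coeffs = dst(samples, type=2, norm="ortho")    # transform stage

# Quantization stage: 1-D k-means approximates the Lloyd-Max quantizer.
levels, labels = kmeans2(coeffs.reshape(-1, 1), k=16, seed=0)  # 4-bit quantizer
quantized = levels[labels].ravel()

recovered = idst(quantized, type=2, norm="ortho")
evm = np.linalg.norm(recovered - samples) / np.linalg.norm(samples)
print(f"EVM ~ {100 * evm:.1f}%")
```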

Huang, Lilian, Zhu, Zhonghang.  2018.  Compressive Sensing Image Reconstruction Using Super-Resolution Convolutional Neural Network. Proceedings of the 2Nd International Conference on Digital Signal Processing. :80-83.

Compressed sensing (CS) can recover a signal that is sparse in some representation from samples taken at a rate far below the Nyquist rate. However, limited by the accuracy of atom matching in traditional reconstruction algorithms, CS struggles to reconstruct the initial signal at high resolution. Meanwhile, researchers have found that trained neural networks have a strong ability to solve such inverse problems. Thus, we propose a Super-Resolution Convolutional Neural Network (SRCNN) that consists of three convolutional layers, each with a fixed number of kernels and its own specific function. A classical compressed sensing algorithm first processes the input image, after which the output images are coded via the SRCNN, yielding a higher-resolution image. The simulation results show that the proposed method improves the PSNR value and the visual quality.
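
The three-layer structure matches the original SRCNN design (patch extraction, non-linear mapping, reconstruction). A minimal PyTorch rendering, with kernel and channel sizes taken from the original SRCNN paper since the abstract does not list them:

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# A coarse CS-reconstructed image goes in; a refined image comes out.
coarse = torch.rand(1, 1, 64, 64)      # stand-in for the classical CS output
print(SRCNN()(coarse).shape)           # torch.Size([1, 1, 64, 64])
```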

Sun, Jie, Yu, Jiancheng, Zhang, Aiqun, Song, Aijun, Zhang, Fumin.  2018.  Underwater Acoustic Intensity Field Reconstruction by Kriged Compressive Sensing. Proceedings of the Thirteenth ACM International Conference on Underwater Networks & Systems. :5:1-5:8.

This paper presents a novel Kriged Compressive Sensing (KCS) approach for the reconstruction of underwater acoustic intensity fields sampled by multiple gliders following sawtooth sampling patterns. Blank areas in between the sampling trajectories may cause unsatisfactory reconstruction results. The KCS method leverages spatial statistical correlation properties of the acoustic intensity field being sampled to improve the compressive reconstruction process. Virtual data samples generated from a kriging method are inserted into the blank areas. We show that by using the virtual samples along with real samples, the acoustic intensity field can be reconstructed with higher accuracy when coherent spatial patterns exist. Corresponding algorithms are developed for both unweighted and weighted KCS methods. By distinguishing the virtual samples from real samples through weighting, the reconstruction results can be further improved. Simulation results show that both algorithms can improve the reconstruction results according to the PSNR and SSIM metrics. The methods are applied to process the ocean ambient noise data collected by the Sea-Wing acoustic gliders in the South China Sea.
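
A toy 1-D rendering of the weighted-KCS idea, with plain interpolation standing in for kriging and a weighted least-squares fit in a DCT basis standing in for the compressive reconstruction; all data and weights are illustrative.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
field = np.sin(2 * np.pi * 3 * t / n)             # smooth "intensity field"

real_idx = np.sort(rng.choice(n, 20, replace=False))          # glider samples
virt_idx = np.setdiff1d(np.arange(0, n, 4), real_idx)         # blank-area grid
virt_vals = np.interp(virt_idx, real_idx, field[real_idx])    # kriging stand-in

Psi = idct(np.eye(n), axis=0, norm="ortho")       # columns = DCT atoms
rows = np.concatenate([real_idx, virt_idx])
b = np.concatenate([field[real_idx], virt_vals])
# Weighting: virtual samples count less than real ones in the fit.
w = np.concatenate([np.ones(len(real_idx)), 0.3 * np.ones(len(virt_idx))])
coef, *_ = np.linalg.lstsq(Psi[rows] * w[:, None], b * w, rcond=None)
print(np.linalg.norm(Psi @ coef - field) / np.linalg.norm(field))  # relative error
```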

Cui, Wenxue, Jiang, Feng, Gao, Xinwei, Zhang, Shengping, Zhao, Debin.  2018.  An Efficient Deep Quantized Compressed Sensing Coding Framework of Natural Images. Proceedings of the 26th ACM International Conference on Multimedia. :1777-1785.

Traditional image compressed sensing (CS) coding frameworks solve an inverse problem based on measurement coding tools (prediction, quantization, entropy coding, etc.) and an optimization-based image reconstruction method. These CS coding frameworks face the challenge of improving coding efficiency at the encoder while also suffering from high computational complexity at the decoder. In this paper, we move a step forward and propose a novel deep network based CS coding framework for natural images, which consists of three sub-networks: a sampling sub-network, an offset sub-network and a reconstruction sub-network, responsible for sampling, quantization and reconstruction, respectively. By cooperatively utilizing these sub-networks, the framework can be trained end-to-end with a proposed rate-distortion optimization loss function. The proposed framework not only improves coding performance but also dramatically reduces the computational cost of image reconstruction. Experimental results on benchmark datasets demonstrate that the proposed method achieves superior rate-distortion performance against state-of-the-art methods.
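
A skeleton of the three cooperating sub-networks the abstract names, in PyTorch; all layer choices, shapes, and the rounding quantizer are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CSCodec(nn.Module):
    def __init__(self, block=32, ratio=0.25):
        super().__init__()
        m = int(block * block * ratio)           # measurements per image block
        # sampling sub-network: learned block-wise measurement operator
        self.sample = nn.Conv2d(1, m, kernel_size=block, stride=block, bias=False)
        # offset sub-network: shifts measurements so plain rounding loses less
        self.offset = nn.Conv2d(m, m, kernel_size=1)
        # reconstruction sub-network: maps quantized measurements back to pixels
        self.recon = nn.Sequential(
            nn.Conv2d(m, block * block, kernel_size=1),
            nn.PixelShuffle(block),
        )

    def forward(self, x):
        y = self.sample(x)
        y_hat = torch.round(y + self.offset(y))  # quantizer; end-to-end training
        return self.recon(y_hat)                 # would need a straight-through pass

print(CSCodec()(torch.rand(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```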

Braverman, Mark, Kol, Gillat.  2018.  Interactive Compression to External Information. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :964-977.

We describe a new way of compressing two-party communication protocols to get protocols with potentially smaller communication. We show that every communication protocol that communicates C bits and reveals I bits of information about the participants' private inputs to an observer that watches the communication, can be simulated by a new protocol that communicates at most poly(I) · loglog(C) bits. Our result is tight up to polynomial factors, as it matches the recent work separating communication complexity from external information cost.

Huang, Xuping.  2018.  Mechanism and Implementation of Watermarked Sample Scanning Method for Speech Data Tampering Detection. Proceedings of the 2Nd International Workshop on Multimedia Privacy and Security. :54-60.

The integrity and reliability of speech data are important for probative use. Besides digital signatures, watermarking technology supplies an alternative solution to guarantee the authenticity of multimedia data. This work proposes a novel digital watermarking scheme based on a reversible compression algorithm with sample scanning to detect tampering in the time domain. To detect tampering precisely, the digital speech data is divided into fixed-length frames, and content-based hash information for each frame is calculated and embedded into the speech data for verification. The Huffman compression algorithm is applied to the four least significant bits of each sample after pulse-code modulation processing, achieving low distortion and high capacity for the hidden payload. Experiments on audio quality, detection precision, and robustness against attacks were conducted; the results show effective tampering detection, with an error of around 0.032 s for a 10 s speech clip. Distortion is imperceptible, with an average signal-to-noise ratio of 22.068 dB for the Huffman-based method and 24.139 dB for the intDCT-based method, and average MOS scores of 3.478 and 4.378, respectively. The bit error rate (BER) between the stego data and the attacked stego data, in both the time and frequency domains, is approximately 28.6% on average, which indicates the robustness of the proposed hiding method.
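
A minimal sketch of the verification idea the scheme builds on: a content-based hash of each fixed-length frame is embedded into that frame's sample LSBs. The frame length here is chosen so a SHA-256 digest exactly fills a frame's LSBs; the paper's design additionally Huffman-compresses the low bits to make room for the payload.

```python
import hashlib
import numpy as np

FRAME = 256                              # 256 samples = one SHA-256 digest of bits

def embed(samples):                      # samples: int16 PCM array
    out = samples.copy()
    for start in range(0, len(out) - FRAME + 1, FRAME):
        frame = out[start:start + FRAME]
        content = (frame >> 1).tobytes()             # hash everything but the LSBs
        bits = np.unpackbits(np.frombuffer(
            hashlib.sha256(content).digest(), np.uint8))
        out[start:start + FRAME] = (frame & ~1) | bits   # write hash into LSBs
    return out

def verify(samples):                     # True per frame iff LSBs match the hash
    ok = []
    for start in range(0, len(samples) - FRAME + 1, FRAME):
        frame = samples[start:start + FRAME]
        content = (frame >> 1).tobytes()
        bits = np.unpackbits(np.frombuffer(
            hashlib.sha256(content).digest(), np.uint8))
        ok.append(np.array_equal(frame & 1, bits))   # any tampering flips this
    return ok

audio = (np.sin(np.linspace(0, 80, 4096)) * 3000).astype(np.int16)
print(verify(embed(audio)))              # [True, True, ..., True] (16 frames)
```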

2019-12-09
Nozaki, Yusuke, Yoshikawa, Masaya.  2018.  Area Constraint Aware Physical Unclonable Function for Intelligence Module. 2018 3rd International Conference on Computational Intelligence and Applications (ICCIA). :205-209.

Artificial intelligence technology such as neural networks (NNs) is widely used in intelligence modules for the Internet of Things (IoT). On the other hand, the risk of illegal attacks on IoT devices has been pointed out, so security countermeasures such as authentication are very important. In the field of hardware security, physical unclonable functions (PUFs) have attracted attention as authentication techniques for preventing semiconductor counterfeits. However, implementing dedicated hardware for both the NN and the PUF increases circuit area. This study therefore proposes a new area-constraint-aware PUF for intelligence modules. The proposed PUF utilizes the propagation delay time from the input layer to the output layer of the NN, and reduces circuit area by sharing components with the NN's operations. Experiments using a field-programmable gate array evaluate circuit area and PUF performance. The results show that the proposed PUF is smaller than conventional PUFs, and favorable results were obtained for the PUF performance metrics of steadiness, diffuseness, and uniqueness.

Khokhlov, Igor, Jain, Chinmay, Miller-Jacobson, Ben, Heyman, Andrew, Reznik, Leonid, Jacques, Robert St..  2018.  MeetCI: A Computational Intelligence Software Design Automation Framework. 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1-8.

Computational Intelligence (CI) algorithms/techniques are packaged in a variety of disparate frameworks/applications that all vary with respect to specific supported functionality and implementation decisions that drastically change performance. Developers looking to employ different CI techniques are faced with a series of trade-offs in selecting the appropriate library/framework. These include resource consumption, features, portability, interface complexity, ease of parallelization, etc. Considerations such as language compatibility and familiarity with a particular library make the choice of libraries even more difficult. The paper introduces MeetCI, an open source software framework for computational intelligence software design automation that facilitates the application design decisions and their software implementation process. MeetCI abstracts away specific framework details of CI techniques designed within a variety of libraries. This allows CI users to benefit from a variety of current frameworks without investigating the nuances of each library/framework. Using an XML file, developed in accordance with the specifications, the user can design a CI application generically, and utilize various CI software without having to redesign their entire technology stack. Switching between libraries in MeetCI is trivial and accessing the right library to satisfy a user's goals can be done easily and effectively. The paper discusses the framework's use in design of various applications. The design process is illustrated with four different examples from expert systems and machine learning domains, including the development of an expert system for security evaluation, two classification problems and a prediction problem with recurrent neural networks.

Tsochev, Georgi, Trifonov, Roumen, Yoshinov, Radoslav, Manolov, Slavcho, Pavlova, Galya.  2019.  Improving the Efficiency of IDPS by Using Hybrid Methods from Artificial Intelligence. 2019 International Conference on Information Technologies (InfoTech). :1-4.

This paper describes some of the results obtained in the Faculty of Computer Systems and Technology at the Technical University of Sofia in a project on applying intelligent methods to increase security in computer networks. A survey of existing hybrid methods, which combine several artificial intelligence techniques for cyber defense, is also presented. The paper introduces a model for intrusion detection systems in which multi-agent systems form the basis and artificial intelligence is applied by means of simple real-time models constructed in a laboratory environment.

Cococcioni, Marco.  2018.  Computational Intelligence in Maritime Security and Defense: Challenges and Opportunities. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :1964-1967.

Computational Intelligence (CI) has great potential in Security & Defense (S&D) applications. Nevertheless, this potential still seems underexploited. In this work we first review CI applications in the maritime domain carried out in past decades by NATO nations. Then we discuss challenges and opportunities for CI in S&D. Finally, we argue that a review of the academic training of military officers is highly advisable, in order to enable them to understand, model and solve new problems using CI techniques.