Biblio

Found 278 results

Filters: First Letter Of Last Name is F
F, A. K., Mhaibes, H. Imad.  2018.  A New Initial Authentication Scheme for Kerberos 5 Based on Biometric Data and Virtual Password. 2018 International Conference on Advanced Science and Engineering (ICOASE). :280–285.

Kerberos is a widely used third-party authentication protocol that enables computers to connect securely over an insecure channel using a single sign-on. It proves the identity of clients and encrypts all communications between them to ensure data privacy and integrity. Typically, Kerberos comprises three communication phases to establish a secure session between any two clients. Authentication is based on a password-derived secret long-term key shared between the client and the Kerberos server; consequently, Kerberos suffers from password-guessing attacks, its main drawback. In this paper, we overcome this limitation by modifying the initial phase to use a virtual password and biometric data. In addition, the proposed protocol provides strong authentication against multiple types of attacks.

F. Hassan, J. L. Magalini, V. de Campos Pentea, R. A. Santos.  2015.  "A project-based multi-disciplinary elective on digital data processing techniques". 2015 IEEE Frontiers in Education Conference (FIE). :1-7.

Today's era of the internet-of-things, cloud computing, and big data centers calls for more fresh graduates with expertise in digital data processing techniques such as compression, encryption, and error-correcting codes. This paper describes a project-based elective that covers these three main digital data processing techniques and can be offered to three different undergraduate majors: electrical engineering, computer engineering, and computer science. The course has been offered successfully for three years. Registration statistics show equal interest from the three majors. Assessment data show that students have successfully completed the different course outcomes. Students' feedback shows that they appreciate the knowledge gained from this elective and consider the workload, relative to other courses of equal credit, to be as expected.

F. Quader, V. Janeja, J. Stauffer.  2015.  "Persistent threat pattern discovery". 2015 IEEE International Conference on Intelligence and Security Informatics (ISI). :179-181.

Advanced Persistent Threat (APT) refers to a complex (advanced) cyber-attack (threat) carried out against specific targets over long periods of time (persistent) by nation states or terrorist groups with highly sophisticated expertise, in order to establish entries into organizations critical to a country's socio-economic status. The key identifier of such persistent threats is that their patterns are long term, may be high priority, and occur consistently over a period of time. This paper focuses on identifying persistent threat patterns in network data, particularly data collected from Intrusion Detection Systems. We utilize Association Rule Mining (ARM) to detect persistent threat patterns in network data, identifying potential patterns that are frequent yet unusual compared with the other frequent patterns.
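
As a rough sketch of the ARM building blocks the paper relies on (not the authors' implementation), support and confidence over windows of IDS alerts can be computed directly; the alert names below are illustrative:

```python
# Toy IDS alert log: each "transaction" is the set of alert types seen
# in one time window (alert names are illustrative, not from the paper).
windows = [
    {"port_scan", "ssh_brute", "dns_query"},
    {"port_scan", "ssh_brute"},
    {"port_scan", "ssh_brute", "exfil"},
    {"dns_query"},
]

def support(itemset, transactions):
    """Fraction of windows containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) over the windows."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# The rule {port_scan} -> {ssh_brute} is both frequent and strong in this log.
print(support({"port_scan"}, windows))                    # 0.75
print(confidence({"port_scan"}, {"ssh_brute"}, windows))  # 1.0
```

A "frequent but unusual" pattern, in the paper's sense, would be one whose support is high yet whose composition deviates from the other frequent itemsets.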

Fabian, Benjamin, Ermakova, Tatiana, Lentz, Tino.  2017.  Large-Scale Readability Analysis of Privacy Policies. Proceedings of the International Conference on Web Intelligence. :18–25.

Online privacy policies notify users of a website how their personal information is collected, processed and stored. Against the background of rising privacy concerns, privacy policies seem to represent an influential instrument for increasing customer trust and loyalty. However, in practice, consumers seem to actually read privacy policies only in rare cases, possibly reflecting the common assumption that policies are hard to comprehend. By designing and implementing an automated extraction and readability analysis toolset that embodies a diversity of established readability measures, we present the first large-scale study that provides current empirical evidence on the readability of nearly 50,000 privacy policies of popular English-language websites. The results empirically confirm that on average, current privacy policies are still hard to read. Furthermore, this study presents new theoretical insights for readability research, in particular, to what extent practical readability measures are correlated. Specifically, it shows the redundancy of several well-established readability metrics such as SMOG, RIX, LIX, GFI, FKG, ARI, and FRES, thus easing future choice-making processes and comparisons between readability studies, as well as calling for research towards a readability measures framework. Moreover, a more sophisticated privacy policy extractor and analyzer as well as a solid policy text corpus for further research are provided.
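
Several of the metrics compared in the study are simple closed-form formulas; for example, the Flesch Reading Ease Score (FRES) is computed from word, sentence, and syllable counts. A minimal sketch (the counts below are illustrative):

```python
def fres(words, sentences, syllables):
    """Flesch Reading Ease Score: higher scores indicate easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Illustrative counts: text with long sentences and many syllables per word
# scores lower (harder to read) than plain prose.
print(round(fres(100, 5, 130), 3))  # 76.555  (fairly easy)
print(round(fres(100, 3, 180), 3))  # much lower: long sentences, complex words
```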

Fabre, Arthur, Martinez, Kirk, Bragg, Graeme M., Basford, Philip J., Hart, Jane, Bader, Sebastian, Bragg, Olivia M..  2016.  Deploying a 6LoWPAN, CoAP, Low Power, Wireless Sensor Network: Poster Abstract. Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. :362–363.

In order to integrate equipment from different vendors, wireless sensor networks need to become more standardized. Using IP as the basis of low power radio networks, together with application layer standards designed for this purpose is one way forward. This research focuses on implementing and deploying a system using Contiki, 6LoWPAN over an 868 MHz radio network, together with CoAP as a standard application layer protocol. A system was deployed in the Cairngorm mountains in Scotland as an environmental sensor network, measuring streams, temperature profiles in peat and periglacial features. It was found that RPL provided an effective routing algorithm, and that the use of UDP packets with CoAP proved to be an energy efficient application layer. This combination of technologies can be very effective in large area sensor networks.

Fachkha, C., Bou-Harb, E., Debbabi, M..  2014.  Fingerprinting Internet DNS Amplification DDoS Activities. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-5.

This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneer work on inferring Distributed Denial of Service (DDoS) using darknet, this work shows that we can extract DDoS activities without relying on backscattered analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities such as detection period, attack duration, intensity, packet size, rate and geolocation in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
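
One of the DDoS parameters such an approach can exploit is the bandwidth amplification factor of a DNS exchange. A minimal, illustrative heuristic (not the authors' detection criteria; the threshold is an assumption) might be:

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification factor of a single DNS exchange."""
    return response_bytes / request_bytes

def looks_amplified(request_bytes, response_bytes, threshold=10.0):
    """Flag exchanges whose responses dwarf the query (threshold is illustrative)."""
    return amplification_factor(request_bytes, response_bytes) >= threshold

# A 64-byte ANY query answered with a 3,000-byte response is suspicious;
# an ordinary 120-byte answer is not.
print(amplification_factor(64, 3000))  # 46.875
print(looks_amplified(64, 3000))       # True
print(looks_amplified(64, 120))        # False
```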

Facon, A., Guilley, S., Lec'Hvien, M., Schaub, A., Souissi, Y..  2018.  Detecting Cache-Timing Vulnerabilities in Post-Quantum Cryptography Algorithms. 2018 IEEE 3rd International Verification and Security Workshop (IVSW). :7-12.

When implemented on real systems, cryptographic algorithms are vulnerable to attacks that observe their execution behavior, such as cache-timing attacks. Protected implementations must be designed with knowledge and validation tools as early as possible in the development cycle. In this article we propose a methodology to assess the robustness of the candidates in the NIST post-quantum standardization project against cache-timing attacks. To this end we have developed a dedicated vulnerability research tool. It performs a static analysis with taint propagation of sensitive variables across the source code and detects leakage patterns. We use it to assess the security of the NIST post-quantum cryptography project submissions. Our results show that more than 80% of the analyzed implementations have at least one potential flaw, and three submissions total more than 1000 reported flaws each. Finally, this comprehensive study of the candidates' security allows us to identify the most frequent weaknesses amongst candidates and how they might be fixed.

Facon, Adrien, Guilley, Sylvain, Ngo, Xuan-Thuy, Perianin, Thomas.  2019.  Hardware-enabled AI for Embedded Security: A New Paradigm. 2019 3rd International Conference on Recent Advances in Signal Processing, Telecommunications Computing (SigTelCom). :80–84.

As chips become more and more connected, they are increasingly exposed to both network and physical attacks; therefore, one must ensure they enjoy a sufficient level of protection, and security within chips is accordingly becoming a hot topic. Incident detection and reporting is one novel function expected of chips. In this talk, we explain why it is worthwhile to resort to Artificial Intelligence (AI) for security event handling. The drivers are the need to aggregate multiple heterogeneous security sensors and the need to digest this information quickly into exploitable results, all while maintaining a low false-positive detection rate. Key features are adequate learning procedures and fast, secure classification accelerated by hardware. One challenge is to embed such security-oriented AI logic without compromising the chip's power budget and silicon area. This talk accounts for the opportunities permitted by the symbiotic encounter between chip security and AI.

Fadhilah, D., Marzuki, M. I..  2020.  Performance Analysis of IDS Snort and IDS Suricata with Many-Core Processor in Virtual Machines Against DoS/DDoS Attacks. 2020 2nd International Conference on Broadband Communications, Wireless Sensors and Powering (BCWSP). :157—162.

The rapid development of technology makes it possible to convert a physical machine into virtual machines that run multiple operating systems simultaneously while connected to the internet. DoS/DDoS attacks are cyber-attacks that threaten the telecommunications sector because they disrupt services and make them difficult to access. Several software tools monitor abnormal activities on the network, such as IDS Snort and IDS Suricata. Previous studies found IDS Suricata superior to IDS Snort version 2 because Suricata supports multi-threading, while Snort version 2 supports only single-threading. This paper tests IDS Snort version 3.0, which now supports multi-threading, against IDS Suricata. The research was carried out on a virtual machine with 1-core, 2-core, and 4-core processor settings, measuring CPU usage, memory usage, and packet capture under attack for both systems. The attack scenario has two parts: a DoS attack using 1 physical computer, and a DDoS attack using 5 physical computers. Based on the overall testing, IDS Snort version 3.0 is in general better than IDS Suricata. With a maximum of 4 processor cores, Snort 3.0's CPU usage was stable at 55% - 58% with a maximum memory of 3,000 MB, and it detected DoS attacks of 27,034,751 packets and DDoS attacks of 36,919,395 packets. IDS Suricata, by contrast, showed better CPU usage at only 10% - 40% and a maximum memory of 1,800 MB, but detected fewer packets: 3,671,305 for DoS and 7,619,317 for DDoS in a TCP Flood attack test.

Faghihi, Farnood, Abadi, Mahdi, Tajoddin, Asghar.  2018.  SMSBotHunter: A Novel Anomaly Detection Technique to Detect SMS Botnets. 2018 15th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :1–6.

Over the past few years, botnets have emerged as one of the most serious cybersecurity threats faced by individuals and organizations. After infecting millions of servers and workstations worldwide, botmasters have started to develop botnets for mobile devices. Mobile botnets use different mediums to communicate with their botmasters. Although significant research has been done to detect mobile botnets that use the Internet as their command and control (C&C) channel, little research has investigated SMS botnets per se. In order to fill this gap, in this paper, we first divide SMS botnets based on their characteristics into three families, namely, info stealer, SMS stealer, and SMS spammer. Then, we propose SMSBotHunter, a novel anomaly detection technique that detects SMS botnets using textual and behavioral features and one-class classification. We experimentally evaluate the detection performance of SMSBotHunter by simulating the behavior of human users and SMS botnets. The experimental results demonstrate that most of the SMS messages sent or received by info stealer and SMS spammer botnets can be detected using textual features exclusively. It is also revealed that behavioral features are crucial for the detection of SMS stealer botnets and will improve the overall detection performance.

Fahad, S.K. Ahammad, Yahya, Abdulsamad Ebrahim.  2018.  Inflectional Review of Deep Learning on Natural Language Processing. 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE). :1–4.

In the age of knowledge, Natural Language Processing (NLP) is in demand across a huge range of applications. NLP formerly relied on statistical treatment of static data; today it works extensively with corpora, lexicon databases, and pattern recognition. Because Deep Learning (DL) methods apply artificial Neural Networks (NN) to nonlinear processing, NLP tools have become increasingly accurate and efficient, marking a breakthrough. Multi-layer neural networks are gaining importance in NLP for their capability, combining good speed with consistent output. Hierarchical designs pass data through recurring processing layers to learn, and with this arrangement DL methods handle several NLP practices. This paper strives to review the tools and the necessary methodology so as to present a clear understanding of the association between NLP and DL. Efficiency and execution in NLP are improved by Part-of-Speech Tagging (POST), Morphological Analysis, Named Entity Recognition (NER), Semantic Role Labeling (SRL), Syntactic Parsing, and Coreference Resolution. Artificial Neural Networks (ANN), Time Delay Neural Networks (TDNN), Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks are discussed alongside Dense Vectors (DV), the Window Approach (WA), and Multitask Learning (MTL) as characteristics of Deep Learning. Beyond purely statistical methods, the collaboration of DL with the individual forms of the NLP process has established a fundamental connection.

Fahl, Sascha, Harbach, Marian, Perl, Henning, Koetter, Markus, Smith, Matthew.  2013.  Rethinking SSL Development in an Appified World. Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. :49–60.

The Secure Sockets Layer (SSL) is widely used to secure data transfers on the Internet. Previous studies have shown that the state of non-browser SSL code is catastrophic across a large variety of desktop applications and libraries as well as a large selection of Android apps, leaving users vulnerable to Man-in-the-Middle attacks (MITMAs). To determine possible causes of SSL problems on all major appified platforms, we extended the analysis to the walled-garden ecosystem of iOS, analyzed software developer forums and conducted interviews with developers of vulnerable apps. Our results show that the root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm. Based on our findings, we derive a proposal to rethink the handling of SSL in the appified world and present a set of countermeasures to improve the handling of SSL using Android as a blueprint for other platforms. Our countermeasures prevent developers from willfully or accidentally breaking SSL certificate validation, offer support for extended features such as SSL Pinning and different SSL validation infrastructures, and protect users. We evaluated our solution against 13,500 popular Android apps and conducted developer interviews to judge the acceptance of our approach and found that our solution works well for all investigated apps and developers.
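
One of the countermeasures discussed, SSL pinning, reduces at its core to comparing a presented certificate's fingerprint against a value pinned at build time. This toy sketch (with placeholder certificate bytes, not real TLS handling) illustrates the check:

```python
import hashlib

def fingerprint(cert_der):
    """SHA-256 fingerprint of a certificate (raw DER bytes)."""
    return hashlib.sha256(cert_der).hexdigest()

# Pin recorded at build time (placeholder bytes stand in for a real certificate).
PINNED = fingerprint(b"placeholder certificate bytes")

def pin_ok(presented_der):
    # Accept only if the presented certificate hashes to the pinned value;
    # any other certificate, even one signed by a trusted CA, is rejected.
    return fingerprint(presented_der) == PINNED

print(pin_ok(b"placeholder certificate bytes"))  # True
print(pin_ok(b"some other certificate"))         # False
```

In practice the fingerprint would be taken from the certificate presented during the TLS handshake, which is exactly the validation step the paper found developers willfully or accidentally disabling.
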
Fahrbach, M., Miller, G. L., Peng, R., Sawlani, S., Wang, J., Xu, S. C..  2018.  Graph Sketching against Adaptive Adversaries Applied to the Minimum Degree Algorithm. 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). :101–112.

Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in polylogarithmic time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely-used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating interleaved queries and updates to this randomized data structure may be of independent interest.

Fahrenkrog-Petersen, Stephan A., van der Aa, Han, Weidlich, Matthias.  2019.  PRETSA: Event Log Sanitization for Privacy-aware Process Discovery. 2019 International Conference on Process Mining (ICPM). :1—8.

Event logs that originate from information systems enable comprehensive analysis of business processes, e.g., by process model discovery. However, logs potentially contain sensitive information about individual employees involved in process execution that are only partially hidden by an obfuscation of the event data. In this paper, we therefore address the risk of privacy-disclosure attacks on event logs with pseudonymized employee information. To this end, we introduce PRETSA, a novel algorithm for event log sanitization that provides privacy guarantees in terms of k-anonymity and t-closeness. It thereby avoids disclosure of employee identities, their membership in the event log, and their characterization based on sensitive attributes, such as performance information. Through step-wise transformations of a prefix-tree representation of an event log, we maintain its high utility for discovery of a performance-annotated process model. Experiments with real-world data demonstrate that sanitization with PRETSA yields event logs of higher utility compared to methods that exploit frequency-based filtering, while providing the same privacy guarantees.
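
PRETSA's k-anonymity guarantee can be illustrated with a minimal check (not the paper's prefix-tree algorithm): an event log is k-anonymous over trace variants if every distinct activity sequence occurs at least k times, so no execution is uniquely identifiable. The log below is illustrative:

```python
from collections import Counter

def is_k_anonymous(traces, k):
    """True iff every distinct activity sequence (trace variant) occurs
    at least k times in the log."""
    variant_counts = Counter(tuple(trace) for trace in traces)
    return all(count >= k for count in variant_counts.values())

# Illustrative log: the "reject" variant occurs only once, so the employee
# who executed it could be re-identified from the trace alone.
log = [
    ["register", "check", "approve"],
    ["register", "check", "approve"],
    ["register", "check", "reject"],
]
print(is_k_anonymous(log, 2))  # False: one variant is unique
```

A sanitizer in the spirit of PRETSA would transform or suppress the unique variant until the check passes, while preserving as much of the log's utility as possible.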

Faith, B. Fatokun, Hamid, S., Norman, A., Johnson, O. Fatokun, Eke, C. I..  2020.  Relating Factors of Tertiary Institution Students’ Cybersecurity Behavior. 2020 International Conference in Mathematics, Computer Engineering and Computer Science (ICMCECS). :1—6.

Humans are widely identified as the weakest link in cybersecurity. Tertiary institution students face many cybersecurity issues due to their constant Internet exposure, yet the literature on tertiary institution students' cybersecurity behaviors is lacking. This research aimed at linking the factors responsible for tertiary institution students' cybersecurity behavior, via validated cybersecurity factors: Perceived Vulnerability (PV); Perceived Barriers (PBr); Perceived Severity (PS); Security Self-Efficacy (SSE); Response Efficacy (RE); Cues to Action (CA); Peer Behavior (PBhv); Computer Skills (CS); Internet Skills (IS); Prior Experience with Computer Security Practices (PE); Perceived Benefits (PBnf); and Familiarity with Cyber-Threats (FCT), thus exploring the relationship between these factors and the students' Cybersecurity Behaviors (CSB). A cross-sectional online survey was used to gather data from 450 undergraduate and postgraduate students from tertiary institutions within Klang Valley, Malaysia. Correlation analysis in SPSS version 25 was used to find the relationships among the cybersecurity behavioral factors. Results indicate that all factors apart from Perceived Severity were significantly related to the cybersecurity behaviors of the students. Practically, the study highlights the need for more cybersecurity training and practice in tertiary institutions.

Fajri, M., Hariyanto, N., Gemsjaeger, B..  2020.  Automatic Protection Implementation Considering Protection Assessment Method of DER Penetration for Smart Distribution Network. 2020 International Conference on Technology and Policy in Energy and Electric Power (ICT-PEP). :323—328.

Due to the geographical conditions of Indonesia, technologies such as hydro and solar photovoltaics are very attractive to develop and deploy. Distributed Energy Resources (DER) schemes are implemented to achieve optimal operation with respect to plant location and capacity. The Gorontalo sub-system network was chosen as a case study, with both micro-hydro and PV contributing supply to the grid. A smart electrical system is required to improve reliability and power quality and to adapt to changing circumstances during DER operation. Topology changes over time, intermittent DER output, and bidirectional power flow can all be handled by smart grid systems. In this study, an automation algorithm has been developed to aid engineers in solving the protection problems caused by DER implementation. The Protection Security Assessment (PSA) method is used to evaluate the state of the protection system, and relay settings are determined with an adaptive rule-based expert system. An application with a Graphical User Interface (GUI) has been developed to make it easier for users to obtain relay settings and locations that are sensitive, fast, reliable, and selective.

Falcon, R., Abielmona, R., Billings, S., Plachkov, A., Abbass, H..  2014.  Risk management with hard-soft data fusion in maritime domain awareness. Computational Intelligence for Security and Defense Applications (CISDA), 2014 Seventh IEEE Symposium on. :1-8.

Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of the hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) as well as soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.

Falk, E., Repcek, S., Fiz, B., Hommes, S., State, R., Sasnauskas, R..  2017.  VSOC - A Virtual Security Operating Center. GLOBECOM 2017 - 2017 IEEE Global Communications Conference. :1–6.

Security in virtualised environments is becoming increasingly important for institutions, not only for a firm's own on-site servers and network but also for data and sites that are hosted in the cloud. Today, security is either handled globally by the cloud provider, or each customer needs to invest in its own security infrastructure. This paper proposes a Virtual Security Operation Center (VSOC) that allows users to collect, analyse and visualize security related data from multiple sources. For instance, a user can forward log data from their firewalls, applications and routers in order to check for anomalies and other suspicious activities. The security analytics provided by the VSOC are comparable to those of commercial security incident and event management (SIEM) solutions, but are deployed as a cloud-based solution with the additional benefit of using big data processing tools to handle large volumes of data. This allows us to detect more complex attacks that cannot be detected with today's signature-based (i.e., rule-based) SIEM solutions.

Fan Yang, University of Illinois at Urbana-Champaign, Santiago Escobar, Universidad Politécnica de Valencia, Spain, Catherine Meadows, Naval Research Laboratory, Jose Meseguer, University of Illinois at Urbana-Champaign, Paliath Narendran, University at Albany-SUNY.  2014.  Theories for Homomorphic Encryption, Unification and the Finite Variant Property. 16th International Symposium on Principles and Practice of Declarative Programming (PPDP 2014).

Recent advances in the automated analysis of cryptographic protocols have aroused new interest in the practical application of unification modulo theories, especially theories that describe the algebraic properties of cryptosystems. However, this application requires unification algorithms that can be easily implemented and easily extended to combinations of different theories of interest. In practice this has meant that most tools use a version of a technique known as variant unification. This requires, among other things, that the theory be decomposable into a set of axioms B and a set of rewrite rules R such that R has the finite variant property with respect to B. Most theories that arise in cryptographic protocols have decompositions suitable for variant unification, but there is one major exception: the theory that describes encryption that is homomorphic over an Abelian group.

In this paper we address this problem by studying various approximations of homomorphic encryption over an Abelian group. We construct a hierarchy of increasingly richer theories, taking advantage of new results that allow us to automatically verify that their decompositions have the finite variant property. This new verification procedure also allows us to construct a rough metric of the complexity of a theory with respect to variant unification, or variant complexity. We specify different versions of protocols using the different theories, and analyze them in the Maude-NPA cryptographic protocol analysis tool to assess their behavior. This gives us greater understanding of how the theories behave in actual application, and suggests possible techniques for improving performance.

Fan, Chun-I, Chen, I-Te, Cheng, Chen-Kai, Huang, Jheng-Jia, Chen, Wen-Tsuen.  2018.  FTP-NDN: File Transfer Protocol Based on Re-Encryption for Named Data Network Supporting Nondesignated Receivers. IEEE Systems Journal. 12:473–484.

Due to users' network flow requirements and usage amounts nowadays, TCP/IP networks face various problems. For one, users of video services may simultaneously access the same content, which leads to extra costs for the host. Second, although nearby nodes may hold the file a user wants to access, the user cannot directly verify the file itself; this leads the user to connect to a remote host rather than the nearby nodes and greatly increases network traffic. Therefore, the Named Data Network (NDN), which is based on the data itself, was introduced to deal with the aforementioned problems. In NDN, all users can access a file from nearby nodes, and they can directly verify the file themselves rather than trusting the specific host that holds it. However, NDN still has no complete, standard, secure file transfer protocol that supports ciphertext transmission and handles the problem of unknown potential receivers. The straightforward solution, in which a sender uses the receiver's public key to encrypt a file before sending it to NDN nodes, limits the behavior of users and incurs significant storage costs at NDN nodes. This paper presents a complete secure file transfer protocol, which combines data re-encryption, satisfies the requirement of secure ciphertext transmission, solves the problem of unknown potential receivers, and saves significant storage costs at NDN nodes. The proposed protocol is the first to achieve data confidentiality and solve the problem of unknown potential receivers in NDN. Finally, we also provide formal security models and proofs for the proposed FTP-NDN.

Fan, Chun-I, Tseng, Yi-Fan, Cheng, Chen-Hsi, Kuo, Hsin-Nan, Huang, Jheng-Jia, Shih, Yu-Tse.  2019.  Anonymous Authentication and Key Agreement Protocol for LTE Networks. 2019 2nd International Conference on Communication Engineering and Technology (ICCET). :68—71.
In 2008, 3GPP proposed Long Term Evolution (LTE) in Release 8, a high-speed wireless communication standard for mobile terminals. It allows subscribers to access the Internet via a specific base station after authentication. These authentication processes are defined in the 3GPP standards TS 33.401 and TS 33.102; the scheme inherits the authentication and key agreement protocol of RFC 3310, adapted for LTE. In the original LTE authentication scheme, a subscriber must transmit its International Mobile Subscriber Identity (IMSI) in plaintext, so the IMSI can be intercepted and traced by fake base stations. In this work, we propose a new scheme that uses a pseudo IMSI, so fake stations cannot obtain the real IMSI or trace the subscriber. The subscriber remains anonymous while the base station can still confirm its legitimacy, and the pseudo identity is unlinkable to the subscriber. The proposed scheme not only enhances security but does so at only a modest extra cost for signature generation and verification compared with the original scheme.
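The unlinkable-pseudonym idea can be sketched as follows (our own toy illustration, not the paper's protocol): the IMSI is masked with a keystream derived from a key shared with the home network and a fresh nonce, so each attach request yields a different pseudonym while only the key holder can recover the real identity.

```python
import hashlib
import secrets

def conceal_imsi(imsi: bytes, shared_key: bytes):
    """Return (nonce, pseudo_imsi); a fresh nonce makes pseudonyms unlinkable."""
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(shared_key + nonce).digest()
    pseudo = bytes(a ^ b for a, b in zip(imsi, stream))
    return nonce, pseudo

def recover_imsi(nonce: bytes, pseudo: bytes, shared_key: bytes) -> bytes:
    """Only the holder of shared_key (the home network) can undo the masking."""
    stream = hashlib.sha256(shared_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(pseudo, stream))
```

A fake base station observing two attach requests sees two unrelated byte strings, which is the unlinkability property the abstract claims.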
Fan, H., Ji, X. Y., Chen, S..  2015.  A hybrid algorithm for reactive power optimization based on bi-level programming. International Conference on Renewable Power Generation (RPG 2015). :1–4.

This paper establishes a bi-level programming model for reactive power optimization that accounts for grid voltage-reactive power control. The upper-level and lower-level objectives are minimization of grid loss and of voltage deviation, respectively. Because the two levels differ in their variables and solution spaces, a primal-dual interior point algorithm is used at the upper level, which handles continuous variables such as active and reactive power sources; the upper-level model guarantees sufficient reactive power in the system. At the lower level, discrete variables such as transformer taps are optimized by a random forests algorithm (RFA), which regulates the voltage within a feasible range. Finally, a case study illustrates the speed and robustness of the method.
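The alternating structure of such a bi-level scheme can be sketched with a deliberately simplified stand-in (our illustration only; the paper uses a primal-dual interior point method and random forests, both replaced here by exhaustive search, and the surrogate objectives and coefficients are made up): the upper level tunes a continuous reactive-power setpoint against a loss surrogate, while the lower level picks a discrete tap position against a voltage-deviation surrogate.

```python
# Toy surrogate objectives (hypothetical coefficients, for illustration only)
def grid_loss(q, tap):
    return (q - 0.6) ** 2 + 0.01 * tap ** 2          # upper-level objective

def voltage_dev(q, tap):
    return abs(1.0 + 0.02 * tap + 0.1 * q - 1.05)    # lower-level objective

TAPS = range(-8, 9)   # discrete transformer tap positions
q = 0.0               # continuous reactive-power setpoint

for _ in range(10):   # alternate between the two levels until stable
    tap = min(TAPS, key=lambda t: voltage_dev(q, t))            # lower level: enumerate taps
    q = min((i / 100 for i in range(101)),                      # upper level: grid search
            key=lambda qq: grid_loss(qq, tap))
```

Each pass fixes one level's variables while optimizing the other's, which is the coordination pattern between the continuous and discrete layers that the abstract describes.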

Fan, Jiasheng, Chen, Fangjiong, Guan, Quansheng, Ji, Fei, Yu, Hua.  2016.  On the Probability of Finding a Receiver in an Ellipsoid Neighborhood of a Sender in 3D Random UANs. Proceedings of the 11th ACM International Conference on Underwater Networks & Systems. :51:1–51:2.
We consider a 3-dimensional (3D) underwater random network (UAN) in which nodes are uniformly distributed in a cuboid region. We derive the closed-form probability of finding a receiver in an ellipsoid neighborhood of an arbitrary sender. Computer simulation shows that the analytical result is generally consistent with the simulated result.
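The quantity in question is easy to cross-check numerically (our own sketch, with made-up dimensions): drop many uniformly distributed receivers in a cuboid and count how often one falls inside an axis-aligned ellipsoid centred on the sender. When the ellipsoid lies fully inside the cuboid, the probability reduces to the volume ratio.

```python
import math
import random

def hit_probability(box, axes, sender, n=200_000, seed=1):
    """Monte Carlo estimate of P(uniform receiver lies in the ellipsoid around sender)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        recv = [rng.uniform(0.0, side) for side in box]
        # Inside the ellipsoid iff sum((x_i - c_i)^2 / a_i^2) <= 1
        if sum(((r - s) / a) ** 2 for r, s, a in zip(recv, sender, axes)) <= 1.0:
            hits += 1
    return hits / n

box = (10.0, 10.0, 10.0)    # cuboid side lengths (made-up)
axes = (2.0, 2.0, 2.0)      # ellipsoid semi-axes (a sphere here)
sender = (5.0, 5.0, 5.0)    # sender at the centre, ellipsoid fully inside

est = hit_probability(box, axes, sender)
exact = (4.0 / 3.0) * math.pi * axes[0] * axes[1] * axes[2] / (box[0] * box[1] * box[2])
```

With 200,000 samples the estimate agrees with the exact volume ratio to within a few parts in a thousand.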
Fan, M., Yu, L., Chen, S., Zhou, H., Luo, X., Li, S., Liu, Y., Liu, J., Liu, T..  2020.  An Empirical Evaluation of GDPR Compliance Violations in Android mHealth Apps. 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). :253—264.

The purpose of the General Data Protection Regulation (GDPR) is to provide improved privacy protection. If an app controls personal data from users, it needs to be compliant with GDPR. However, GDPR lists general rules rather than exact step-by-step guidelines about how to develop an app that fulfills the requirements. Therefore, there may exist GDPR compliance violations in existing apps, which would pose severe privacy threats to app users. In this paper, we take mobile health applications (mHealth apps) as a peephole to examine the status quo of GDPR compliance in Android apps. We first propose an automated system, named HPDROID, to bridge the semantic gap between the general rules of GDPR and the app implementations by identifying the data practices declared in the app privacy policy and the data relevant behaviors in the app code. Then, based on HPDROID, we detect three kinds of GDPR compliance violations, including the incompleteness of privacy policy, the inconsistency of data collections, and the insecurity of data transmission. We perform an empirical evaluation of 796 mHealth apps. The results reveal that 189 (23.7%) of them do not provide complete privacy policies. Moreover, 59 apps collect sensitive data through different measures, but 46 (77.9%) of them contain at least one inconsistent collection behavior. Even worse, among the 59 apps, only 8 apps try to ensure the transmission security of collected data. However, all of them contain at least one encryption or SSL misuse. Our work exposes severe privacy issues to raise awareness of privacy protection for app users and developers.
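The "inconsistency of data collections" check that HPDROID automates reduces, at its core, to a set comparison between what the privacy policy declares and what the code actually collects. A minimal sketch of that idea, with hypothetical data categories (this is not the tool itself):

```python
# Hypothetical example: personal-data categories from a policy vs. from app code
declared_in_policy = {"location", "contact_list"}
collected_in_code = {"location", "contact_list", "device_id", "phone_number"}

# An inconsistent collection behavior: data gathered but never declared
undeclared = collected_in_code - declared_in_policy
if undeclared:
    print("GDPR compliance risk - collected but not declared:", sorted(undeclared))
```

The hard part the paper addresses is producing these two sets in the first place: mapping policy text and app bytecode behaviors onto a common vocabulary of data practices.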