Biblio

Found 727 results

Filters: Keyword is Internet
2021-01-15
Zeid, R. B., Moubarak, J., Bassil, C.  2020.  Investigating The Darknet. 2020 International Wireless Communications and Mobile Computing (IWCMC). :727–732.

Cybercrime is growing dramatically in today's technological world. World Wide Web criminals exploit the personal information of internet users and use it to their advantage. Unethical users leverage the dark web to buy and sell illegal products or services, and sometimes they manage to gain access to classified government information. Illegal activities found on the dark web include buying and selling hacking tools, stolen data, digital fraud, terrorist activities, drugs, weapons, and more. The aim of this project is to collect evidence of malicious activity on the dark web by using computer security mechanisms known as honeypots as traps.
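As a rough illustration of the trapping mechanism such projects rely on (our sketch, not the authors' code), the following minimal low-interaction honeypot accepts TCP connections and logs each peer's address and first probe; the port and log path are arbitrary placeholders.

```python
import socket
import datetime

def run_honeypot(port=2222, logfile="honeypot.log"):
    """Minimal low-interaction honeypot: accept connections, record the
    peer address and its first bytes, never provide a real service."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        conn.settimeout(5)
        try:
            probe = conn.recv(1024)   # capture the attacker's first probe
        except socket.timeout:
            probe = b""
        with open(logfile, "a") as f:
            f.write(f"{datetime.datetime.utcnow().isoformat()} "
                    f"{addr[0]}:{addr[1]} {probe!r}\n")
        conn.close()

if __name__ == "__main__":
    run_honeypot()
```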

Korolev, D., Frolov, A., Babalova, I.  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680–1683.
This paper describes a method for classifying onion sites. Based on the results of the research, a model of the most widespread type of site in onion space is built. To create such a model, a specially trained neural network is used. The classification is based on five different features: use of an authentication system, corporate email, readable URL, feedback, and type of onion site. Statistics on the most widespread types of websites on the Dark Net are given.
Park, W.  2020.  A Study on Analytical Visualization of Deep Web. 2020 22nd International Conference on Advanced Communication Technology (ICACT). :81–83.
Nowadays, there is a flood of harmful data such as explicit photos and child pornography, and drugs are distributed through unknown dark channels. In particular, most of these transactions are made through the Deep Web. "Deep Web refers to an encrypted network that is not detected on search engine like Google etc. Users must use Tor to visit sites on the dark web" [4]. In other words, the Dark Web uses Tor's encryption client; users can visit multiple sites on the dark Web without the initiator of a site being known. Based on the current status of such crimes, this paper proposes the key idea behind, and develops, a crime-information visualization system for the Deep Web. The status of the deep web is analyzed and the data is visualized using Java. It is expected that the program will support more efficient management and monitoring of crime on hidden parts of the web such as the deep web and torrents.
Li, Y., Yang, X., Sun, P., Qi, H., Lyu, S.  2020.  Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3204–3213.
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for datasets of DeepFake videos. However, current DeepFake datasets suffer from low visual quality and do not resemble the DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenge posed by Celeb-DF.
2021-01-11
Saleh, I., Ji, H.  2020.  Network Traffic Images: A Deep Learning Approach to the Challenge of Internet Traffic Classification. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0329–0334.
The challenge of network traffic classification lies at the heart of many networking tasks aimed at improving the overall user experience and usability of the internet. Current techniques, such as deep packet inspection, depend heavily on network administrators and engineers to maintain up-to-date stores of application network signatures and the infrastructure required to utilize them effectively. In this paper, we introduce Network Traffic Images, a 2-dimensional (2D) formulation of a stream of packet header lengths, which enables us to employ deep convolutional neural networks for network traffic classification. Five different image orientation mappings are carefully designed to determine the best way to transform a 1-dimensional packet subflow into a 2D image. Two different mapping strategies, one packet-relative and the other time-relative, are used to map the packets of a flow to the pixels of the image. Experiments show that high classification accuracy can be achieved with minimal manual effort using network traffic images in deep learning.
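To make the 2D formulation concrete, here is a minimal sketch of one plausible packet-relative mapping: header lengths fill a fixed-size grid in arrival order and are scaled to pixel intensities. The grid size and scaling are our illustrative choices, not the paper's five designed mappings.

```python
import numpy as np

def subflow_to_image(header_lengths, side=28):
    """Map a 1-D sequence of packet header lengths to a 2-D grayscale
    image (row-major, packet-relative order), padding or truncating to
    side*side packets and scaling lengths into [0, 255]."""
    seq = np.zeros(side * side, dtype=np.float32)
    vals = np.asarray(header_lengths[: side * side], dtype=np.float32)
    seq[: len(vals)] = vals
    if seq.max() > 0:
        seq = seq / seq.max() * 255.0
    return seq.reshape(side, side).astype(np.uint8)

# The resulting image can be fed to a standard 2-D CNN classifier.
img = subflow_to_image([40, 52, 40, 1500, 60] * 20)
print(img.shape)  # (28, 28)
```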
Malik, A., Fréin, R. de, Al-Zeyadi, M., Andreu-Perez, J.  2020.  Intelligent SDN Traffic Classification Using Deep Learning: Deep-SDN. 2020 2nd International Conference on Computer Communication and the Internet (ICCCI). :184–189.
Accurate traffic classification is fundamentally important for various network activities such as fine-grained network management and resource utilisation. Port-based approaches, deep packet inspection and machine learning are widely used techniques to classify and analyse network traffic flows. However, over the past several years, the growth of Internet traffic has been explosive due to the greatly increased number of Internet users, and both port-based and deep packet inspection approaches have become inefficient: the exponential growth in Internet applications incurs high computational cost. The emerging paradigm of software-defined networking has reshaped the network architecture by detaching the control plane from the data plane, resulting in a centralised network controller that maintains a global view of the whole network in its domain. In this paper, we propose Deep-SDN, a new deep learning model for software-defined networks that can accurately identify a wide range of traffic applications in a short time. The performance of the proposed model was compared against the state-of-the-art, and better results were reported in terms of accuracy, precision, recall, and f-measure; an overall accuracy of 96% can be achieved with the proposed model. Based on the obtained results, some further directions are suggested towards achieving further advances in this research area.
Rajapkar, A., Binnar, P., Kazi, F.  2020.  Design of Intrusion Prevention System for OT Networks Using Deep Neural Networks. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
Automation industries that use Supervisory Control and Data Acquisition (SCADA) systems are highly vulnerable to network threats. Even systems that are air-gapped and isolated from the internet are affected by insider attacks such as spoofing, DoS and malware, which compromise the confidentiality, integrity and availability of Operational Technology (OT) system elements and degrade their performance despite the security measures taken. In this paper, a behavior-based intrusion prevention system (IPS) is designed for OT networks. The proposed system is implemented on a SCADA test bed with two systems that replicate automation scenarios in industry. The paper describes four main classes of cyber-attacks against SCADA systems, with their subclasses, along with the methodology and design of the IPS components, database creation, baselines, and deployment of the system in the environment. The IPS identifies not only IT protocols but also the Industrial Control System (ICS) protocols Modbus and DNP3, including their internal communication fields, using deep packet inspection (DPI). The analytical results show 99.89% accuracy for binary classification and 97.95% accuracy for multiclass classification of the different attack vectors performed on the network, with a low false positive rate. These results are also validated by actual deployment of the IPS in SCADA systems with the prevention of a DoS attack.
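To illustrate the ICS-protocol DPI such a system performs, the sketch below parses the Modbus/TCP MBAP header and checks the function code against a whitelist. The allowed codes are hypothetical, and a real IPS would inspect many more fields and learn baselines rather than use a static list.

```python
import struct

ALLOWED_FUNCTION_CODES = {3, 4, 6, 16}  # example whitelist: read/write registers

def inspect_modbus_tcp(payload: bytes) -> bool:
    """Deep-packet-inspect a Modbus/TCP PDU: parse the MBAP header and
    function code, and reject anything outside the whitelist."""
    if len(payload) < 8:
        return False                     # too short to be valid Modbus/TCP
    tid, proto, length, unit, fcode = struct.unpack(">HHHBB", payload[:8])
    if proto != 0:                       # Modbus/TCP protocol id must be 0
        return False
    return fcode in ALLOWED_FUNCTION_CODES

# Example: function code 0x05 (write single coil) is rejected here.
print(inspect_modbus_tcp(bytes.fromhex("000100000006010500010000")))
```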
Papadogiannaki, E., Deyannis, D., Ioannidis, S.  2020.  Head(er)Hunter: Fast Intrusion Detection using Packet Metadata Signatures. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
More than 75% of Internet traffic is now encrypted, and this percentage is constantly increasing. The majority of communications are secured using common encryption protocols such as SSL/TLS and IPsec to ensure security and protect the privacy of Internet users. Yet, encryption can be exploited to hide malicious activities. Traditionally, network traffic inspection is based on techniques like deep packet inspection (DPI). Common applications for DPI include, but are not limited to, firewalls, intrusion detection and prevention systems, L7 filtering and packet forwarding. The core functionality of such DPI implementations is pattern matching, which enables searching for specific strings or regular expressions inside the packet contents. With the widespread adoption of network encryption, however, DPI tools that rely on packet payload content are becoming less effective, demanding the development of more sophisticated techniques in order to adapt to current network encryption trends. In this work, we present HeaderHunter, a fast signature-based intrusion detection system that operates even on encrypted network traffic. We generate signatures using only network packet metadata extracted from packet headers. Also, to cope with ever-increasing network speeds, we accelerate the inner computations of our proposed system using off-the-shelf GPUs.
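A toy version of metadata-only matching in this spirit: the sketch below checks whether a signature expressed as a sequence of packet sizes occurs within a flow. The signature values are made up, and the real system uses richer metadata and GPU-accelerated matching.

```python
def matches_signature(flow_sizes, signature, tolerance=0):
    """Return True if the signature (a sequence of packet payload sizes)
    occurs as a contiguous subsequence of the flow's observed sizes,
    allowing +/- tolerance bytes per packet."""
    n, m = len(flow_sizes), len(signature)
    for i in range(n - m + 1):
        if all(abs(flow_sizes[i + j] - signature[j]) <= tolerance
               for j in range(m)):
            return True
    return False

# Hypothetical size signature of a malicious handshake inside TLS.
sig = [517, 1460, 1460, 330]
print(matches_signature([64, 517, 1460, 1460, 330, 99], sig, tolerance=8))
```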
Bahaa, M., Aboulmagd, A., Adel, K., Fawzy, H., Abdelbaki, N.  2020.  nnDPI: A Novel Deep Packet Inspection Technique Using Word Embedding, Convolutional and Recurrent Neural Networks. 2020 2nd Novel Intelligent and Leading Emerging Sciences Conference (NILES). :165–170.
Traffic characterization, application identification, per-application classification, and VPN/non-VPN traffic characterization have been among the most notable research topics over the past few years. Deep Packet Inspection (DPI) promises an increase in Quality of Service (QoS) for Internet Service Providers (ISPs), simplifies network management, and plays a vital role in content censoring. DPI has been used to help ease the flow of network traffic. For instance, high-priority or mission-critical messages can be identified and passed through immediately, ahead of ordinary browsing packets. Throttling, or slowing the rate of data transfer, can be applied to certain traffic types such as peer-to-peer downloads. DPI can also enhance the ability of ISPs to prevent the exploitation of Internet of Things (IoT) devices in Distributed Denial-of-Service (DDoS) attacks by blocking malicious requests from devices. In this paper, we introduce a novel DPI architecture using neural networks, built from layers of word embedding, convolutional neural networks and bidirectional recurrent neural networks, which proved to have promising results in this task. The proposed architecture introduces a new mix of layers that outperforms previously proposed approaches.
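A minimal sketch of this layer mix, assuming a Keras/TensorFlow stack and illustrative hyperparameters (the paper's exact architecture and dimensions may differ): bytes are embedded like words, a convolution extracts local features, and a bidirectional recurrent layer models longer-range structure.

```python
from tensorflow.keras import layers, models

def build_nndpi_like(vocab_size=256, seq_len=1500, n_classes=12):
    """Embedding -> CNN -> BiGRU classifier over raw packet bytes.
    All sizes here are placeholders, not the paper's values."""
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, 64)(inp)         # bytes embedded like words
    x = layers.Conv1D(128, 5, activation="relu")(x)   # local n-gram features
    x = layers.MaxPooling1D(3)(x)
    x = layers.Bidirectional(layers.GRU(64))(x)       # longer-range structure
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_nndpi_like().summary()
```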
2020-12-28
Hynek, K., Čejka, T., Žádník, M., Kubátová, H.  2020.  Evaluating Bad Hosts Using Adaptive Blacklist Filter. 2020 9th Mediterranean Conference on Embedded Computing (MECO). :1–5.

Publicly available blacklists are popular tools for capturing and spreading information about misbehaving entities on the Internet. In some cases, their straightforward utilization leads to many false positives. In this work, we propose a system that combines blacklists with network flow data while introducing automated evaluation techniques to avoid reporting unreliable alerts. The core of the system is formed by an Adaptive Filter together with an Evaluator module. The assessment of the system was performed on data obtained from a national backbone network. The results show the contribution of such a system to the reduction of unreliable alerts.
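A toy sketch of the evaluation idea, under our own assumptions about data layout and thresholds: a blacklist hit is only reported when corroborated by enough flow evidence, which suppresses stale or unreliable list entries.

```python
def evaluate_alert(ip, blacklists, flows, min_lists=2, min_flows=10):
    """Adaptive-filter-style evaluation: raise an alert for a blacklisted
    IP only when it is corroborated by observed flow activity.
    Thresholds are illustrative placeholders."""
    listed_on = [bl for bl, entries in blacklists.items() if ip in entries]
    observed = [f for f in flows if ip in (f["src"], f["dst"])]
    if len(listed_on) >= min_lists and len(observed) >= min_flows:
        return {"ip": ip, "lists": listed_on, "flows": len(observed)}
    return None  # suppressed as unreliable

blacklists = {"feed_a": {"203.0.113.7"}, "feed_b": {"203.0.113.7"}}
flows = [{"src": "10.0.0.5", "dst": "203.0.113.7"}] * 12
print(evaluate_alert("203.0.113.7", blacklists, flows))
```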

Riaz, S., Khan, A. H., Haroon, M., Latif, S., Bhatti, S.  2020.  Big Data Security and Privacy: Current Challenges and Future Research perspective in Cloud Environment. 2020 International Conference on Information Management and Technology (ICIMTech). :977–982.

Cloud computing is an Internet-based technology that has emerged rapidly in the last few years due to the popular and in-demand services required by various institutions, organizations, and individuals. Structured, unstructured, and semi-structured data are transferred to cloud servers at a record pace. Institutions, businesses, and organizations are shifting more and more workloads onto cloud servers; given the high cost, space and maintenance issues raised by big data, cloud computing will become a natural choice for data storage. In a cloud environment, it is clear that data is not yet completely secure from inside and outside attacks and intrusions, because cloud servers are under the control of a third party. The security of data becomes an important concern when sensitive data is stored in a cloud environment. In this paper, we give an overview of the characteristics and state of the art of big data, discuss the top threats, open issues and current challenges in data security and privacy and their impact on business from a future research perspective, and review and analyze previous and recent frameworks and architectures for data security that are continuously being developed against these threats, to inform how data can be kept and stored securely in the cloud environment.

Liu, H., Di, W.  2020.  Application of Differential Privacy in Location Trajectory Big Data. 2020 International Conference on Intelligent Transportation, Big Data Smart City (ICITBS). :569–573.

With the development of mobile internet technology, GPS technology and social software have become widely used in people's lives, and the problem of privacy protection for location-trajectory big data is becoming more and more serious. Traditional location-trajectory privacy protection methods require certain background knowledge and are difficult to adapt to massive data. Differential privacy protects against attacks on the data by randomly perturbing the raw data. The method used in this paper is to first sample the position trajectory, form irregular polygons from the high-frequency access points among the sampled position data, and calculate the center of gravity of each polygon; a differential privacy algorithm then adds noise to each center of gravity to form a new one, and the new centers of gravity are connected to form a new trajectory. This achieves the goal of protecting the position trajectory. It is demonstrated that the differential privacy algorithm can effectively protect the position trajectory by adding noise.
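The noise-addition step can be sketched as follows. The mean of the sampled hotspot points stands in for the polygon's center of gravity, and the sensitivity value is a placeholder that would have to be derived from the data domain in a real deployment.

```python
import numpy as np

def private_centroid(points, epsilon=1.0, sensitivity=1.0):
    """Compute a centroid from high-frequency trajectory points and
    perturb it with Laplace noise of scale sensitivity/epsilon,
    in the spirit of the paper's method."""
    c = np.mean(np.asarray(points, dtype=float), axis=0)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=2)
    return c + noise

hotspot = [(39.90, 116.40), (39.91, 116.41), (39.92, 116.39)]
print(private_centroid(hotspot, epsilon=0.5))
```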

2020-12-21
Liu, Q., Wu, W., Liu, Q., Huangy, Q.  2020.  T2DNS: A Third-Party DNS Service with Privacy Preservation and Trustworthiness. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–11.
We design a third-party DNS service named T2DNS. T2DNS serves client DNS queries with the following features: protecting clients from channel and server attackers, providing trustworthiness proof to clients, being compatible with the existing Internet infrastructure, and introducing bounded overhead. T2DNS's privacy preservation is achieved by a hybrid protocol of encryption and obfuscation, and its service proxy is implemented on Intel SGX. We overcome the challenges of scaling the initialization process, bounding the obfuscation overhead, and tuning practical system parameters. We prototype T2DNS, and experimental results show that T2DNS is fully functional, has acceptable overhead in comparison with other solutions, and is scalable in the number of clients.
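The obfuscation half of such a hybrid protocol can be illustrated roughly as below, using dnspython and made-up decoy names: the real query is hidden among dummy queries issued in random order, so a channel observer cannot tell which name was intended. This is our sketch of the general idea only, not T2DNS's actual protocol, which also encrypts queries and attests the proxy via SGX.

```python
import random
import dns.resolver  # dnspython

DECOYS = ["example.com", "example.org", "example.net"]  # illustrative

def obfuscated_lookup(real_name, n_dummies=3):
    """Issue the real DNS query hidden among dummy queries so an observer
    of the channel cannot tell which lookup was intended."""
    names = random.sample(DECOYS, k=min(n_dummies, len(DECOYS))) + [real_name]
    random.shuffle(names)
    answer = None
    for name in names:
        result = dns.resolver.resolve(name, "A")
        if name == real_name:
            answer = [r.address for r in result]
    return answer

print(obfuscated_lookup("www.ietf.org"))
```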
Ayers, H., Crews, P., Teo, H., McAvity, C., Levy, A., Levis, P.  2020.  Design Considerations for Low Power Internet Protocols. 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). :103–111.
Low-power wireless networks provide IPv6 connectivity through 6LoWPAN, a set of standards to aggressively compress IPv6 packets over small maximum transfer unit (MTU) links such as 802.15.4. The entire purpose of IP was to interconnect different networks, but we find that different 6LoWPAN implementations fail to reliably communicate with one another. These failures are due to stacks implementing different subsets of the standard out of concern for code size. We argue that this failure stems from 6LoWPAN's design, not implementation, and is due to applying traditional Internet protocol design principles to low-power networks. We propose three design principles for Internet protocols on low-power networks, designed to prevent similar failures in the future. These principles are based around the importance of providing flexible tradeoffs between code size and energy efficiency. We apply these principles to 6LoWPAN and show that the modified protocol provides a wide range of implementation strategies while allowing implementations with different strategies to reliably communicate.
Karthiga, K., Balamurugan, G., Subashri, T.  2020.  Computational Analysis of Security Algorithm on 6LowPSec. 2020 International Conference on Communication and Signal Processing (ICCSP). :1437–1442.
To support the development of IoT, the IETF developed a standard named 6LoWPAN to extend the use of IPv6 to tiny, low-power smart objects. Generally, the 6LoWPAN radio link needs end-to-end (e2e) security for its IPv6 communication process, and 6LoWPAN requires a lightweight variant of the security solutions in IPSec. 6LowPSec is a new security approach for 6LoWPAN, operating at the adaptation layer, that provides e2e security with lightweight IPSec. The existing IPSec protocol is not suitable for the 6LoWPAN IoT environment because of its heavy demands on memory, power and duty cycle, and its additional transmission overhead; in particular, IPSec incurs packet overhead because the secret key is shared between the two communicating peers via the IKE (Internet Key Exchange) protocol. Hence, existing IPSec solutions do not meet the lightweight security needs of 6LoWPAN IoT. This paper describes the 6LowPSec protocol with the AES-CCM (Counter with Cipher Block Chaining-Message Authentication Code) cryptographic algorithm, with a key size of 128 bits and minimal power consumption and duty cycle.
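For reference, the AES-CCM primitive itself can be exercised with the Python cryptography library as below; on a constrained 6LoWPAN node this would typically be the radio's hardware AES rather than a software library, but the inputs (128-bit key, short nonce, authenticated header) are the same.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# 128-bit AES-CCM as used by 6LowPSec: one primitive provides both
# confidentiality (counter mode) and integrity (CBC-MAC tag).
key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(key)
nonce = os.urandom(13)                  # CCM nonce, 7-13 bytes

plaintext = b"sensor reading: 21.5C"
header = b"6lowpan-hdr"                 # authenticated but not encrypted
ciphertext = aesccm.encrypt(nonce, plaintext, header)
assert aesccm.decrypt(nonce, ciphertext, header) == plaintext
```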
2020-12-17
Mukhandi, M., Portugal, D., Pereira, S., Couceiro, M. S.  2019.  A novel solution for securing robot communications based on the MQTT protocol and ROS. 2019 IEEE/SICE International Symposium on System Integration (SII). :608–613.

With the growing use of the Robot Operating System (ROS), it can be argued that it has become a de-facto framework for developing robotic solutions. ROS is used to build robotic applications for industrial automation, home automation, medical and even automatic robotic surveillance. However, whenever ROS is utilized, security is one of the main concerns that needs to be addressed in order to ensure secure network communication of robots. Cyber-attacks may hinder the evolution and adaptation of most ROS-enabled robotic systems for real-world use over the Internet. Thus, it is important to address and prevent security threats associated with the use of ROS-enabled applications. In this paper, we propose a novel approach for securing ROS-enabled robotic systems by integrating ROS with the Message Queuing Telemetry Transport (MQTT) protocol. We manage to secure robots' network communications by providing authentication and data encryption, therefore preventing man-in-the-middle and hijacking attacks. We also perform real-world experiments to assess how the performance of a ROS-enabled robotic surveillance system is affected by the proposed approach.
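A minimal sketch of the transport-level protections described, assuming the paho-mqtt client library and placeholder broker, credential and certificate values: TLS encrypts the channel, and credentials plus client certificates provide authentication.

```python
import paho.mqtt.client as mqtt

# Broker address, credentials and certificate paths are placeholders.
client = mqtt.Client(client_id="robot-01")        # paho-mqtt 1.x constructor style
client.username_pw_set("robot-01", "secret")      # authentication
client.tls_set(ca_certs="ca.crt",                 # encrypted channel,
               certfile="robot.crt",              # mutual TLS with a
               keyfile="robot.key")               # per-robot certificate

client.connect("broker.example.local", 8883)
client.publish("robots/robot-01/pose", '{"x": 1.2, "y": 0.4}', qos=1)
client.disconnect()
```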

Abeykoon, I., Feng, X.  2019.  Challenges in ROS Forensics. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1677–1682.

The usage of robots is growing rapidly in our society. Communication links and applications connect the robots to their clients or users, normally through some kind of network connection. This network system is open to attack and vulnerable to security threats, so securing it is a critical part of ensuring security and privacy for robotic platforms. The paper also discusses several cyber-physical security threats that are specific to robotic platforms, including the peer-to-peer applications used on such platforms, with threats targeting integrity, availability and confidentiality. A Remote Administration Tool (RAT) was introduced for specific security attacks, and an impact-oriented process was performed to analyze the outcomes of those attacks. Tests and experiments with attacks were performed both in a simulation environment based on the Gazebo TurtleBot simulator and physically on the robot, using a software tool for simulating, debugging and experimenting on the ROS platform. Integrity attacks modified commands and manipulated the robot's behavior. Availability attacks caused Denial-of-Service (DoS), so that the robot did not listen to TurtleBot commands. Integrity and availability attacks also exposed sensitive information on the robot.

2020-12-14
Yu, L., Chen, L., Dong, J., Li, M., Liu, L., Zhao, B., Zhang, C.  2020.  Detecting Malicious Web Requests Using an Enhanced TextCNN. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :768–777.
This paper proposes an approach that combines a deep learning-based method and a traditional machine learning-based method to efficiently detect malicious requests received by Web servers. The first few layers of a Convolutional Neural Network for Text Classification (TextCNN) are used to automatically extract powerful semantic features, while transferable statistical features are defined to boost the detection ability, specifically for Web request parameter tampering. The semantic features from the TextCNN and the artificially designed transferable statistical features are grouped together and fed into a Support Vector Machine (SVM), which replaces the last layer of the TextCNN for classification. To facilitate the understanding of the abstract features extracted by the TextCNN as numerical vectors, this paper designs trace-back functions that map max-pooling outputs back to words in the Web requests. After investigating the currently available datasets for Web attack detection, HTTP Dataset CSIC 2010 is selected to test and verify the proposed approach. Compared with other deep learning models, the experimental results demonstrate that the proposed approach is competitive with the state-of-the-art.
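The hybrid TextCNN-plus-SVM idea can be sketched as follows, with dummy data and illustrative dimensions. In the paper the CNN is trained before its pooled features are reused; this sketch omits that training step for brevity.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.svm import SVC

# TextCNN front end: its pooled activations serve as semantic features.
inp = layers.Input(shape=(200,))
x = layers.Embedding(input_dim=128, output_dim=32)(inp)
x = layers.Conv1D(64, 3, activation="relu")(x)
feat = layers.GlobalMaxPooling1D()(x)
extractor = models.Model(inp, feat)

# Dummy requests: token ids plus hand-crafted statistical features
# (e.g. URL length, parameter count) - placeholders for the real ones.
tokens = np.random.randint(0, 128, size=(100, 200))
stats = np.random.rand(100, 4)
labels = np.random.randint(0, 2, size=100)

semantic = extractor.predict(tokens, verbose=0)
combined = np.hstack([semantic, stats])        # grouped feature vector
svm = SVC(kernel="rbf").fit(combined, labels)  # SVM replaces the last layer
print(svm.score(combined, labels))
```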
Kavitha, R., Malathi, K., Kunjachen, L. M.  2020.  Interference of Cyber Endanger using Support Vector Machine. 2020 International Conference on Computer Communication and Informatics (ICCCI). :1–4.
Cyberbullying refers to persistent and repeated harm caused through the use of computer systems, mobile phones, and other digital devices. For instance, Hinduja and Patchin reported that 10–40% of surveyed children admitted having experienced it, either as a victim or as a bystander, as more and more young people use technology to insult, threaten, embarrass, or otherwise harass their peers. Cyber harassment has been reported as causing great harm to society and the economy. Advances in technology around web comments and the variety of online communities make the detection and tracing of such patterns difficult. This paper describes a web framework for automated detection and monitoring of cyberbullying cases in online exchanges and online communities. The system is built mainly around the detection of three common natural-language elements: insults, swears, and second-person references. A classification engine and ontology-based reasoning are employed to check the frequency of such elements in discussion-board and web documents, which may trigger a message to security so that appropriate action can be taken. The tool has been evaluated on real communities and achieves reasonable detection rates.
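A toy version of the three-entity detection rule might look like the following; the lexicons are illustrative stand-ins for the system's actual word lists and ontology-based reasoning.

```python
import re

INSULTS = {"idiot", "loser"}          # illustrative lexicons, not the
SWEARS = {"damn", "hell"}             # paper's actual word lists
SECOND_PERSON = {"you", "your", "u"}

def bullying_score(message):
    """Flag a message when an insult or swear co-occurs with a
    second-person reference - the three entity classes the
    system monitors in conversation threads."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    hits = {
        "insult": bool(words & INSULTS),
        "swear": bool(words & SWEARS),
        "second_person": bool(words & SECOND_PERSON),
    }
    hits["alert"] = hits["second_person"] and (hits["insult"] or hits["swear"])
    return hits

print(bullying_score("You are such a loser"))
```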
2020-12-11
Huang, S., Chuang, T., Huang, S., Ban, T.  2019.  Malicious URL Linkage Analysis and Common Pattern Discovery. 2019 IEEE International Conference on Big Data (Big Data). :3172–3179.

Malicious domain names are constantly changing, and it is challenging to keep blacklists of malicious domain names up to date because of the time lag between a domain's creation and its detection. Even if a website is itself clean, it does not necessarily mean that it won't be used as a pivot point to redirect users to malicious destinations. To address this issue, this paper demonstrates how to use linkage analysis and open-source threat intelligence to visualize the relationships among malicious domain names while verifying their categories, e.g., drive-by download, unwanted software, etc. Featuring a graph-based model that can present the interconnectivity of malicious domain names in a dynamic fashion, the proposed approach proved to be helpful for revealing the group patterns of different kinds of malicious domain names. When applied to analyze a blacklisted set of URLs in a real enterprise network, it showed better effectiveness than traditional methods and yielded a clearer view of the common patterns in the data.
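The linkage idea can be sketched with a small directed graph, assuming networkx and made-up domains: nodes that funnel several sources toward categorized malicious destinations surface as pivot candidates.

```python
import networkx as nx

# Edges are illustrative "redirects-to" relations harvested from
# open-source threat intelligence; all names are made up.
edges = [
    ("clean-blog.example", "redir.bad.example"),
    ("redir.bad.example", "dropper.bad.example"),
    ("other-site.example", "redir.bad.example"),
]
categories = {"dropper.bad.example": "drive-by download"}

g = nx.DiGraph()
g.add_edges_from(edges)

# A pivot point: a node that funnels many sources toward malicious sinks.
for node in g.nodes:
    if g.in_degree(node) > 1 and g.out_degree(node) >= 1:
        dests = [categories.get(d, "unknown") for d in g.successors(node)]
        print(f"pivot candidate: {node}, reaches {dests}")
```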

2020-12-07
Lemes, C. I., Naessens, V., Vieira, M.  2019.  Trustworthiness Assessment of Web Applications: Approach and Experimental Study using Input Validation Coding Practices. 2019 IEEE 30th International Symposium on Software Reliability Engineering (ISSRE). :435–445.
The popularity of web applications and their worldwide use to support business-critical operations have raised the interest of hackers in exploiting security vulnerabilities to perform malicious operations. Fostering trust calls for assessment techniques that provide indicators of the quality of a web application from a security perspective. This paper studies the problem of using coding practices to characterize the trustworthiness of web applications from a security perspective. The hypothesis is that applying feasible security practices results in applications having fewer unknown vulnerabilities, which can therefore be considered more trustworthy. The proposed approach is instantiated for the concrete case of input validation practices, and includes a Quality Model to compute trustworthiness scores that can be used to compare different applications, or different code elements within the same application. Experimental results show that higher scores are obtained for more secure code, suggesting that the approach can be used in practice to characterize trustworthiness, while also providing guidance to compare and/or improve the security of web applications.
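A hypothetical quality model in this spirit (the weights and practice names are our placeholders, not the paper's) could score a code element as the weighted fraction of input-validation practices it applies:

```python
# Hypothetical quality model: each input-validation practice carries a
# weight, and a code element's trustworthiness score is the weighted
# fraction of practices it implements.
PRACTICE_WEIGHTS = {
    "type_check": 0.2,
    "length_limit": 0.15,
    "whitelist_chars": 0.3,
    "encode_output": 0.2,
    "parameterized_queries": 0.15,
}

def trustworthiness(applied_practices):
    total = sum(PRACTICE_WEIGHTS.values())
    got = sum(w for p, w in PRACTICE_WEIGHTS.items() if p in applied_practices)
    return got / total

print(trustworthiness({"type_check", "whitelist_chars", "parameterized_queries"}))
```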
2020-12-02
Islam, S., Welzl, M., Gjessing, S.  2019.  How to Control a TCP: Minimally-Invasive Congestion Management for Datacenters. 2019 International Conference on Computing, Networking and Communications (ICNC). :121–125.

In multi-tenant datacenters, the hardware may be homogeneous but the traffic often is not. For instance, customers who pay an equal amount of money can get an unequal share of the bottleneck capacity when they do not open the same number of TCP connections. To address this problem, several recent proposals try to manipulate the traffic that TCP sends from the VMs. VCC and AC/DC are two new mechanisms that let the hypervisor control traffic by influencing the TCP receiver window (rwnd). This avoids changing the guest OS, but has limitations (it is not possible to make TCP increase its rate faster than it normally would). Seawall, on the other hand, completely rewrites TCP's congestion control, achieving fairness but requiring significant changes to both the hypervisor and the guest OS. There seems to be a need for a middle ground: a method to control TCP's sending rate without requiring a complete redesign of its congestion control. We introduce a minimally-invasive solution that is flexible enough to cater for needs ranging from weighted fairness in multi-tenant datacenters to potentially offering Internet-wide benefits from reduced interflow competition.
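The rwnd-based control that VCC and AC/DC exemplify can be sketched conceptually with scapy: clamping the advertised receive window on ACKs bounds the peer's sending rate without modifying the guest OS. The cap value is arbitrary, and a real hypervisor datapath would rewrite windows inline per flow rather than on crafted packets.

```python
from scapy.all import IP, TCP  # scapy

def clamp_rwnd(pkt, cap_bytes=16384):
    """Hypervisor-style rwnd control: rewrite the advertised receive
    window on ACKs heading to a guest's peer, bounding the peer's
    sending rate. Conceptual sketch only."""
    if TCP in pkt and pkt[TCP].window > cap_bytes:
        pkt[TCP].window = cap_bytes
        del pkt[TCP].chksum       # force checksum recomputation on send
        del pkt[IP].chksum
    return pkt

ack = IP(dst="198.51.100.9") / TCP(sport=5001, dport=80, flags="A",
                                   window=65000)
print(clamp_rwnd(ack)[TCP].window)  # 16384
```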

Ayar, T., Budzisz, Ł., Rathke, B.  2018.  A Transparent Reordering Robust TCP Proxy To Allow Per-Packet Load Balancing in Core Networks. 2018 9th International Conference on the Network of the Future (NOF). :1–8.

The idea of using multiple paths to transport TCP traffic seems very attractive due to the potential benefits it may offer for both redundancy and better utilization of available resources through load balancing. Fixed and mobile network providers frequently employ load-balancers that use multiple paths on either a per-flow or per-destination level, but very seldom on a per-packet level. Despite the benefits of packet-level load balancing mechanisms (e.g., low computational complexity and high bandwidth utilization), network providers can't use them, mainly because of the TCP packet reorderings that harm TCP performance. Emerging network architectures also support multiple paths, but they face the same obstacle in balancing their load across multiple paths. Indeed, packet-level load balancing research is paralyzed by the reordering vulnerability of TCP. A couple of TCP variants exist that deal with the TCP packet reordering problem, but due to a lack of end-to-end transparency they were not widely deployed and adopted. In this paper, we revisit TCP's packet reordering problem and present a transparent and lightweight algorithm, Out-of-Order Robustness for TCP with Transparent Acknowledgment (ACK) Intervention (ORTA), to deal with out-of-order deliveries. ORTA works as a transparent thin layer below TCP and hides the harmful side-effects of packet-level load balancing. ORTA monitors all TCP flow packets and uses ACK traffic shaping, without any modifications to either the TCP sender or receiver sides. Since it is transparent to TCP end-points, it can be easily deployed on TCP sender end-hosts (EHs), gateway (GW) routers, or access points (APs). ORTA opens a door for network providers to use per-packet load balancing. The proposed ORTA algorithm is implemented and tested in NS-2. The results show that ORTA can prevent the TCP performance decrease when per-packet load balancing is used.

Jie, Y., Zhou, L., Ming, N., Yusheng, X., Xinli, S., Yongqiang, Z.  2018.  Integrated Reliability Analysis of Control and Information Flow in Energy Internet. 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). :1–9.
In this paper, following the electricity business process of collecting and transmitting power information and sending control instructions, a coupling model of control-communication flow is built, composed of three main matrices: the control-communication, communication-communication, and communication-control incidence matrices. Furthermore, the change in effective paths between two communication nodes is analyzed, and a method for calculating the connectivity probability of the information network is proposed for the case of a breakdown in communication links. Then, based on Bayesian conditional probability theory, the effect of communication interruption on the energy Internet is analyzed and the metric matrix of controllability under communication congestion is given. Several cases are given at the end of the paper to verify the effectiveness of the proposed method for calculating the controllability matrix under different link interruption scenarios. This probability index can be regarded as a quantitative measure of the controllability of a power service based on communication-transmitted instructions, and can be used in power business decision-making to improve the control reliability of the energy Internet.
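The connectivity-probability quantity can be illustrated in simplified form, assuming independent link failures and brute-force enumeration over link states (the paper works with incidence matrices instead; this sketch only conveys what is being computed):

```python
import itertools
import networkx as nx

def connectivity_probability(g, src, dst, p_fail):
    """Probability that at least one path between src and dst survives,
    assuming independent link failures with probability p_fail."""
    links = list(g.edges)
    prob = 0.0
    for state in itertools.product([0, 1], repeat=len(links)):  # 1 = link up
        up = [l for l, s in zip(links, state) if s]
        p = 1.0
        for s in state:
            p *= (1 - p_fail) if s else p_fail
        h = nx.Graph(up)
        if src in h and dst in h and nx.has_path(h, src, dst):
            prob += p
    return prob

g = nx.Graph([("A", "B"), ("B", "C"), ("A", "C")])
print(connectivity_probability(g, "A", "C", p_fail=0.1))
```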
Naik, D., Nikita, De, T.  2018.  Congestion aware traffic grooming in elastic optical and WiMAX network. 2018 Technologies for Smart-City Energy Security and Power (ICSESP). :1–9.

In recent years, the integration of Passive Optical Networks (PON) and WiMAX (Worldwide Interoperability for Microwave Access) networks has attracted huge interest among researchers. The continuous demand for large bandwidth with wider coverage area is the key driver of this technology, and the integration has led to a high-speed and cost-efficient solution for internet accessibility. This paper investigates the issues related to traffic grooming, routing and resource allocation in such hybrid networks, where an Elastic Optical Network forms the backbone and is integrated with WiMAX. In this novel approach, traffic grooming is carried out using a light-trail technique to minimize the bandwidth blocking ratio and reduce network resource consumption. The simulation is performed on different network topologies, with traffic routed through three modes, namely the pure Wireless network, the Wireless-Optical/Optical-Wireless network, and the pure Optical network, keeping network congestion in mind. The results confirm a reduction in bandwidth blocking ratio in all the given networks, coupled with minimal network resource utilization.
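The target metric can be stated compactly; a minimal sketch, assuming each request is a (demanded bandwidth, blocked?) pair:

```python
def bandwidth_blocking_ratio(requests):
    """Bandwidth blocking ratio: blocked bandwidth over total requested
    bandwidth - the quantity the grooming heuristic tries to minimize."""
    total = sum(d for d, _ in requests)
    blocked = sum(d for d, b in requests if b)
    return blocked / total if total else 0.0

reqs = [(10, False), (40, True), (10, False), (100, False)]
print(bandwidth_blocking_ratio(reqs))  # 40 / 160 = 0.25
```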