Biblio

Filters: Keyword is scheduling
2021-05-05
Samriya, Jitendra Kumar, Kumar, Narander.  2020.  Fuzzy Ant Bee Colony For Security And Resource Optimization In Cloud Computing. 2020 5th International Conference on Computing, Communication and Security (ICCCS). :1–5.

Cloud computing (CC) systems have become the prevailing computational paradigm for offering highly scalable and elastic services. Computing resources in a cloud environment must be scheduled so that providers utilize their resources efficiently and users obtain low-cost applications. The most prominent need in job scheduling is to ensure Quality of Service (QoS) to the user. Because scheduling takes place within the boundary of a third party, assuring its security is a significant requirement. The main objective of this work is to offer QoS, i.e., cost, makespan, and minimized task migration, with security enforcement; moreover, the proposed algorithm guarantees that admitted requests are executed without violating the service level agreement (SLA). These objectives are attained by the proposed Fuzzy Ant Bee Colony algorithm. The experimental outcome confirms that the proposed algorithm attains the secured job scheduling objective with assured QoS.
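As a rough illustration of the kind of multi-objective fitness such a scheduler optimizes, here is a minimal Python sketch that scores a task-to-VM assignment by weighted makespan and cost, with plain random search standing in for the ant/bee colony dynamics; all task data, rates, and weights are invented, not the paper's.

```python
import random

# Invented data: exec_time[t][v] = runtime of task t on VM v; cost_rate[v] = price per time unit.
exec_time = [[4, 6, 3], [5, 2, 7], [3, 8, 4], [6, 5, 2]]
cost_rate = [0.9, 0.5, 0.7]

def fitness(assign):
    """Lower is better: weighted sum of makespan and monetary cost."""
    loads = [0.0] * len(cost_rate)
    cost = 0.0
    for t, v in enumerate(assign):
        loads[v] += exec_time[t][v]
        cost += exec_time[t][v] * cost_rate[v]
    return 0.6 * max(loads) + 0.4 * cost  # weights are assumptions

def search(iters=2000):
    """Random search as a stand-in for the ant/bee exploration loop."""
    best, best_f = None, float("inf")
    for _ in range(iters):
        cand = [random.randrange(len(cost_rate)) for _ in exec_time]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

print(search())
```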

2021-04-08
Lin, X., Zhang, Z., Chen, M., Sun, Y., Li, Y., Liu, M., Wang, Y., Liu, M..  2020.  GDGCA: A Gene Driven Cache Scheduling Algorithm in Information-Centric Network. 2020 IEEE 3rd International Conference on Information Systems and Computer Aided Education (ICISCAE). :167–172.
The disadvantages and inextensibility of traditional networks call for novel thoughts on the future network architecture. ICN (Information-Centric Network) is an information-centered and self-caching network that is deeply rooted in the 5G era, whose concept is user-centered and content-centered. Although ICN enables cache replacement of content, an information distribution scheduling algorithm is still needed to allocate resources properly because of limited cache capacity. This paper starts with data popularity, information epilepsy, and other data-related attributes in the ICN environment. It then analyzes the factors affecting the cache and proposes the concept and calculation method of the Gene value. Since ICN is still in a theoretical state, this paper describes an ICN scenario close to reality and proposes a greedy caching algorithm named GDGCA (Gene Driven Greedy Caching Algorithm). GDGCA aims at an optimal simulation model based on throughput balance and satisfaction degree (SSD), and is compared with regular distributed scheduling algorithms in related research fields using QoE indexes and satisfaction degree under different Poisson data volumes and cycles. The final simulation results prove that GDGCA performs better in cache scheduling at ICN edge routers, especially with the aid of the Information Gene value.
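To make the gene-driven greedy idea concrete, here is a toy cache that evicts the entry with the lowest score; the scoring formula below (popularity decayed by age) is an assumption for illustration only, since the paper defines its own Gene value calculation.

```python
import time

class GeneCache:
    """Toy cache that greedily evicts the entry with the lowest 'gene value'.
    The gene-value formula here (hit count decayed by age) is an assumption,
    not the paper's calculation method."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # name -> [hits, last_access, content]

    def gene_value(self, name):
        hits, last, _ = self.store[name]
        age = time.time() - last + 1e-9
        return hits / age  # popular, recently used content scores high

    def get(self, name):
        entry = self.store.get(name)
        if entry:
            entry[0] += 1
            entry[1] = time.time()
            return entry[2]
        return None

    def put(self, name, content):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self.gene_value)  # greedy eviction
            del self.store[victim]
        self.store[name] = [1, time.time(), content]
```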
2021-03-29
Salim, M. N., Hutahaean, I. W., Susanti, B. H..  2020.  Fixed Point Attack on Lin et al.’s Modified Hash Function Scheme based on SMALLPRESENT-[8] Algorithm. 2020 International Conference on ICT for Smart Society (ICISS). CFP2013V-ART:1–7.
Lin et al.'s scheme is a block-cipher-based hash function Message Authentication Code (MAC) scheme composed of compression functions. Fixed point messages have previously been found for the SMALLPRESENT-[s] algorithm, and the vulnerability of a block cipher algorithm to fixed point attacks can carry over to block-cipher-based hash function schemes. This paper applies a fixed point attack to Lin et al.'s modified scheme based on the SMALLPRESENT-[8] algorithm. The attack uses a fixed point message from the SMALLPRESENT-[8] algorithm as the Initial Value (IV) on a scheme branch. The results show that eight fixed point messages are discovered on the B1 branch. The fixed point messages discovered on the B1 and B2 branches form 18 fixed point messages on Lin et al.'s modified scheme with different IVs and keys. The discovery of fixed point messages shows that Lin et al.'s modified scheme is vulnerable to fixed point attack.
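The underlying principle is easiest to see on the classic Davies-Meyer construction f(h, m) = E_m(h) XOR h, where h = D_m(0) is a fixed point for any message block m. The sketch below demonstrates this on an invented toy cipher; it is not SMALLPRESENT-[8], and the construction is illustrative rather than Lin et al.'s exact scheme.

```python
def toy_encrypt(key, block, rounds=4):
    """Toy invertible 8-bit 'block cipher' (NOT SMALLPRESENT-[8])."""
    for r in range(rounds):
        block = ((block ^ key) + r) % 256
        block = ((block << 3) | (block >> 5)) & 0xFF  # rotate left by 3
    return block

def toy_decrypt(key, block, rounds=4):
    for r in reversed(range(rounds)):
        block = ((block >> 3) | (block << 5)) & 0xFF  # rotate right by 3
        block = ((block - r) % 256) ^ key
    return block

def davies_meyer(h, m):
    """Compression function f(h, m) = E_m(h) XOR h."""
    return toy_encrypt(m, h) ^ h

# f(h, m) = h exactly when E_m(h) = 0, so h = D_m(0) is a fixed point
# for ANY message block m.
for m in range(256):
    h = toy_decrypt(m, 0)
    assert davies_meyer(h, m) == h
print("every message block yields a fixed point via h = D_m(0)")
```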
2020-12-21
Portaluri, G., Giordano, S..  2020.  Gambling on fairness: a fair scheduler for IIoT communications based on the shell game. 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD). :1–6.
The Industrial Internet of Things (IIoT) paradigm is nowadays the cornerstone of industrial automation: it has introduced new features and services for different environments and grants the connection of industrial machine sensors and actuators both to local processing and to the Internet. One of the most advanced network protocol stacks developed for IoT-IIoT networks is 6LoWPAN, which supports IPv6 on top of Low-power Wireless Personal Area Networks (LoWPANs). 6LoWPAN is usually coupled with the IEEE 802.15.4 low-bitrate, low-energy MAC protocol, which relies on the time-slotted channel hopping (TSCH) technique. In TSCH networks, a coordinator node synchronizes all end-devices and specifies whether (and when) they can transmit, in order to improve their energy efficiency. In this scenario, the scheduling strategy adopted by the coordinator plays a crucial role and dramatically impacts network performance. In this paper, we present a novel scheduling strategy for time-slot allocation in IIoT communications that aims to improve overall network fairness. The proposed strategy mimics the well-known shell game, turning the totally unfair mechanics of this game into a fair scheduling strategy. We compare our proposal with three allocation strategies and evaluate the fairness of each scheduler, showing that our allocator outperforms the others.
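A minimal sketch of the shell-game intuition, assuming the "shells" are node positions that get reshuffled every slotframe before slots are dealt round-robin; the parameters are invented, and fairness is checked with Jain's index.

```python
import random

def shell_game_schedule(nodes, slots_per_frame, frames):
    """Each slotframe, shuffle the node order ('move the shells') and deal
    time slots round-robin, so the expected allocation is uniform.
    A toy stand-in for the paper's allocator, for illustration only."""
    counts = {n: 0 for n in nodes}
    for _ in range(frames):
        order = list(nodes)
        random.shuffle(order)          # the shell-game permutation
        for s in range(slots_per_frame):
            counts[order[s % len(order)]] += 1
    return counts

alloc = shell_game_schedule(["n1", "n2", "n3", "n4", "n5"], 8, 1000)
vals = list(alloc.values())
jain = sum(vals) ** 2 / (len(vals) * sum(v * v for v in vals))
print(alloc)
print(f"Jain fairness index: {jain:.4f}")  # 1.0 = perfectly fair
```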
2020-12-17
Staschulat, J., Lütkebohle, I., Lange, R..  2020.  The rclc Executor: Domain-specific deterministic scheduling mechanisms for ROS applications on microcontrollers: work-in-progress. 2020 International Conference on Embedded Software (EMSOFT). :18–19.

Robots are networks of a variety of computing devices, ranging from powerful computing platforms to tiny microcontrollers. The Robot Operating System (ROS) is the dominant framework for the powerful computing devices. While ROS version 2 adds important features like quality of service and security, it cannot be directly applied to microcontrollers because of its large memory footprint. The micro-ROS project has ported the ROS 2 API to microcontrollers. However, the standard ROS 2 concepts are not enough for real-time performance: in the ROS 2 release "Foxy", the standard ROS 2 Executor, the central component responsible for handling timers and incoming message data, is neither real-time capable nor deterministic. Domain-specific requirements of mobile robots, like sense-plan-act control loops, cannot be addressed with the standard ROS 2 Executor. In this paper, we present an advanced Executor for the ROS 2 C API which provides deterministic scheduling and supports domain-specific requirements. A proof of concept is demonstrated on a 32-bit microcontroller.
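The determinism idea can be sketched in a few lines: take one snapshot of which handles have data, then process them in a fixed, user-configured order. This Python sketch mimics only that processing discipline; the real rclc Executor is a C API with trigger conditions and budgets not modeled here, and the handle layout below is hypothetical.

```python
def spin_one_round(handles):
    """One deterministic executor round, sketched: snapshot which handles
    are ready, then run their callbacks in a fixed, user-defined order
    (instead of re-polling and reordering per incoming message)."""
    ready = [h for h in handles if h["has_data"]()]      # single snapshot
    for h in sorted(ready, key=lambda h: h["order"]):    # static order
        h["callback"]()

# Hypothetical handles: a timer and two subscriptions.
log = []
handles = [
    {"order": 0, "has_data": lambda: True,  "callback": lambda: log.append("timer")},
    {"order": 2, "has_data": lambda: False, "callback": lambda: log.append("lidar")},
    {"order": 1, "has_data": lambda: True,  "callback": lambda: log.append("imu")},
]
spin_one_round(handles)
print(log)  # -> ['timer', 'imu'], in the same order every run
```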

2020-12-02
Niz, D. de, Andersson, B., Klein, M., Lehoczky, J., Vasudevan, A., Kim, H., Moreno, G..  2019.  Mixed-Trust Computing for Real-Time Systems. 2019 IEEE 25th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :1–11.

Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash), including its timing guarantees. This framework is the first to allow the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
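The temporal-enforcement idea can be sketched as a deadline watchdog: if the untrusted part has not produced a result by its budget, the trusted fallback runs. This is only an illustration in Python threads; the paper's framework enforces this with a hypervisor timer and memory protection, and all names below are invented.

```python
import threading
import time

def run_mixed_trust_job(untrusted, trusted_fallback, deadline_s):
    """Sketch of temporal enforcement: if the untrusted part has not
    finished by the enforcement deadline, activate the trusted part
    to preserve safety."""
    result = {}
    worker = threading.Thread(target=lambda: result.update(value=untrusted()),
                              daemon=True)
    worker.start()
    worker.join(timeout=deadline_s)
    if "value" in result:
        return ("untrusted", result["value"])
    return ("trusted", trusted_fallback())  # e.g., a safe hover controller

# The slow untrusted planner misses its 50 ms budget, so the trusted part runs.
slow_planner = lambda: (time.sleep(0.2), "optimized plan")[1]
print(run_mixed_trust_job(slow_planner, lambda: "safe hover", 0.05))
```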

2020-12-01
Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132–141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML could be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
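As a hedged sketch of what performance-centric node classification might look like in its simplest form, here is a nearest-centroid classifier over invented node features; the paper's models and features are richer, and everything below is a placeholder.

```python
from collections import defaultdict

def nearest_centroid(samples, labels, query):
    """Minimal nearest-centroid classifier standing in for
    performance-centric node classification."""
    sums, counts = {}, defaultdict(int)
    for x, y in zip(samples, labels):
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] += 1
    centroids = {y: [v / counts[y] for v in s] for y, s in sums.items()}
    sqdist = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b))
    return min(centroids, key=lambda y: sqdist(centroids[y], query))

# Features per node: (CPU steal %, cache miss rate, I/O wait %) - invented.
X = [(2, 0.1, 1), (3, 0.2, 2), (25, 0.6, 18), (30, 0.7, 22)]
y = ["fast", "fast", "straggler-prone", "straggler-prone"]
print(nearest_centroid(X, y, (28, 0.5, 20)))  # -> straggler-prone
```

A scheduler could then avoid placing latency-sensitive tasks on nodes classified as straggler-prone, which is the mitigation step the case study targets.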

2020-11-30
Cheng, D., Zhou, X., Ding, Z., Wang, Y., Ji, M..  2019.  Heterogeneity Aware Workload Management in Distributed Sustainable Datacenters. IEEE Transactions on Parallel and Distributed Systems. 30:375–387.
The tremendous growth of cloud computing and large-scale data analytics highlight the importance of reducing datacenter power consumption and environmental impact of brown energy. While many Internet service operators have at least partially powered their datacenters by green energy, it is challenging to effectively utilize green energy due to the intermittency of renewable sources, such as solar or wind. We find that the geographical diversity of internet-scale services can be carefully scheduled to improve the efficiency of applying green energy in datacenters. In this paper, we propose a holistic heterogeneity-aware cloud workload management approach, sCloud, that aims to maximize the system goodput in distributed self-sustainable datacenters. sCloud adaptively places the transactional workload to distributed datacenters, allocates the available resource to heterogeneous workloads in each datacenter, and migrates batch jobs across datacenters, while taking into account the green power availability and QoS requirements. We formulate the transactional workload placement as a constrained optimization problem that can be solved by nonlinear programming. Then, we propose a batch job migration algorithm to further improve the system goodput when the green power supply varies widely at different locations. Finally, we extend sCloud by integrating a flexible batch job manager to dynamically control the job execution progress without violating the deadlines. We have implemented sCloud in a university cloud testbed with real-world weather conditions and workload traces. Experimental results demonstrate sCloud can achieve near-to-optimal system performance while being resilient to dynamic power availability. sCloud with the flexible batch job management approach outperforms a heterogeneity-oblivious approach by 37 percent in improving system goodput and 33 percent in reducing QoS violations.
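The placement intuition can be illustrated with a greedy heuristic: route each request to the site with the most spare green power that still has capacity, falling back to brown energy only when green power is exhausted. This is only a sketch of the intuition, not the paper's constrained nonlinear program, and the numbers are invented.

```python
def place_requests(requests, sites):
    """Greedy green-aware placement sketch.
    sites: name -> {"green": watts available, "cap": request slots free}."""
    placement = {}
    for req, watts in requests:
        ok = [s for s, v in sites.items() if v["cap"] > 0 and v["green"] >= watts]
        if not ok:
            ok = [s for s, v in sites.items() if v["cap"] > 0]  # brown fallback
        best = max(ok, key=lambda s: sites[s]["green"])
        sites[best]["green"] = max(0, sites[best]["green"] - watts)
        sites[best]["cap"] -= 1
        placement[req] = best
    return placement

sites = {"east": {"green": 120, "cap": 3}, "west": {"green": 40, "cap": 3}}
print(place_requests([("r1", 50), ("r2", 50), ("r3", 50)], sites))
# -> r1, r2 follow the green power to "east"; r3 falls back to "west"
```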
2020-11-16
Huyck, P..  2019.  Safe and Secure Data Fusion — Use of MILS Multicore Architecture to Reduce Cyber Threats. 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC). :1–9.
Data fusion, as a means to improve aircraft and air traffic safety, is a recent focus of some researchers and system developers. Increases in data volume and processing needs necessitate more powerful hardware and more flexible software architectures to satisfy these needs. Such improvements in processed data also mean the overall system becomes more complex and, correspondingly, presents a potentially significantly larger cyber-attack space. Today's multicore processors are one means of satisfying the increased computational needs of data fusion-based systems. When coupled with a real-time operating system (RTOS) capable of flexible core and application scheduling, large cabinets of (power hungry) single-core processors may be avoided. The functional and assurance capabilities of such an RTOS can be critical elements in providing the application isolation, constrained data flows, and restricted hardware access (including covert channel prevention) necessary to reduce the overall cyber-attack space. This paper examines fundamental considerations of a multiple independent levels of security (MILS) architecture when supported by a multicore-based real-time operating system. The paper draws upon assurance activities and functional properties associated with a previous Common Criteria evaluation assurance level (EAL) 6+ / High-Robustness Separation Kernel certification effort and contrasts those with activities performed as part of a MILS multicore related project. The paper discusses key characteristics and functional capabilities necessary to achieve overall system security and safety. The paper defines architectural considerations essential for scheduling applications on a multicore processor to reduce security risks. For civil aircraft systems, the paper discusses the applicability of the security assurance and architecture configurations to system providers looking to increase their resilience to cyber threats.
2020-11-02
Wang, Nan, Yao, Manting, Jiang, Dongxu, Chen, Song, Zhu, Yu.  2018.  Security-Driven Task Scheduling for Multiprocessor System-on-Chips with Performance Constraints. 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :545–550.

The high penetration of third-party intellectual property (3PIP) brings a high risk of malicious inclusions and data leakage in products due to planted hardware Trojans, and system-level security constraints have recently been proposed to protect MPSoCs against hardware Trojans. However, secret communication can still be established in the context of the proposed security constraints, and thus another type of security constraint is also introduced to fully prevent such malicious inclusions. In addition, fulfilling the security constraints incurs serious schedule-length overhead, so a two-stage performance-constrained task scheduling algorithm is proposed to maintain most of the security constraints. In the first stage, the schedule length is iteratively reduced by assigning sets of adjacent tasks to the same core after calculating the maximum weight independent set of a graph consisting of all timing-critical paths. In the second stage, tasks are assigned to proper IP vendors and scheduled to time periods with a minimal number of cores required. The experimental results show that our work reduces the schedule length of a task graph while only a small number of security constraints are violated.
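The first stage's core subproblem, maximum weight independent set, is easy to demonstrate by brute force on a small conflict graph; everything below (node names, weights, edges) is invented to show the computation, not the paper's instance.

```python
from itertools import combinations

def max_weight_independent_set(weights, edges):
    """Brute-force maximum-weight independent set (fine for small graphs).
    In the paper's first stage this would select non-conflicting merges of
    adjacent tasks on timing-critical paths."""
    nodes = list(weights)
    best, best_w = set(), 0
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            chosen = set(subset)
            if any(u in chosen and v in chosen for u, v in edges):
                continue  # not independent: both endpoints of an edge chosen
            w = sum(weights[n] for n in chosen)
            if w > best_w:
                best, best_w = chosen, w
    return best, best_w

# Nodes = candidate task merges, weight = schedule-length saving (invented);
# an edge means two merges conflict because they share a task.
weights = {"ab": 5, "bc": 3, "cd": 4, "de": 6}
edges = [("ab", "bc"), ("bc", "cd"), ("cd", "de")]
print(max_weight_independent_set(weights, edges))  # -> ({'ab', 'de'}, 11)
```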

2020-10-06
Zaman, Tarannum Shaila, Han, Xue, Yu, Tingting.  2019.  SCMiner: Localizing System-Level Concurrency Faults from Large System Call Traces. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). :515–526.

Localizing concurrency faults that occur in production is hard because (1) detailed field data, such as user input, file content, and interleaving schedule, may not be available to developers to reproduce the failure; (2) it is often impractical to assume the availability of multiple failing executions to localize the faults using existing techniques; (3) it is challenging to search for buggy locations in an application given limited runtime data; and (4) concurrency failures at the system level often involve multiple processes or event handlers (e.g., software signals), which cannot be handled by existing tools for diagnosing intra-process (thread-level) failures. To address these problems, we present SCMiner, a practical online bug diagnosis tool that helps developers understand how a system-level concurrency fault happens based on the logs collected by the default system audit tools. SCMiner achieves online bug diagnosis to obviate the need for offline bug reproduction. SCMiner does not require code instrumentation on the production system or rely on the assumption of the availability of multiple failing executions. Specifically, after the system call traces are collected, SCMiner uses data mining and statistical anomaly detection techniques to identify the failure-inducing system call sequences. It then maps each abnormal sequence to specific application functions. We have conducted an empirical study on 19 real-world benchmarks. The results show that SCMiner is both effective and efficient at localizing system-level concurrency faults.
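A toy version of the mining step might flag system-call n-grams that appear in a failing trace but never in passing ones; SCMiner's statistics are considerably richer, and the traces below are invented.

```python
from collections import Counter

def ngrams(trace, n=3):
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def anomalous_sequences(passing_traces, failing_trace, n=3):
    """Flag system-call n-grams that occur in the failing run but never
    in passing runs - a minimal stand-in for the anomaly-detection step."""
    baseline = Counter()
    for trace in passing_traces:
        baseline.update(ngrams(trace, n))
    return [g for g in set(ngrams(failing_trace, n)) if baseline[g] == 0]

ok_runs = [["open", "read", "write", "close"] * 3]
failing = ["open", "read", "kill", "write", "close"]
print(anomalous_sequences(ok_runs, failing))  # n-grams around the stray 'kill'
```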

Baruah, Sanjoy, Burns, Alan.  2019.  Incorporating Robustness and Resilience into Mixed-Criticality Scheduling Theory. 2019 IEEE 22nd International Symposium on Real-Time Distributed Computing (ISORC). :155–162.

Mixed-criticality scheduling theory (MCSh) was developed to allow for more resource-efficient implementation of systems comprising different components that need to have their correctness validated at different levels of assurance. As originally defined, MCSh deals exclusively with pre-runtime verification of such systems; hence many mixed-criticality scheduling algorithms that have been developed tend to exhibit rather poor survivability characteristics during run-time. (For example, MCSh allows less-important ("LO-criticality") workloads to be completely discarded in the event that run-time behavior is not compliant with the assumptions under which the correctness of the LO-criticality workload was verified.) Here we seek to extend MCSh to incorporate survivability considerations by proposing quantitative metrics for the robustness and resilience of mixed-criticality scheduling algorithms. Such metrics allow us to make quantitative assertions regarding the survivability characteristics of mixed-criticality scheduling algorithms, and to compare different algorithms from the perspective of their survivability. We propose that MCSh seek to develop scheduling algorithms that possess superior survivability characteristics, thereby obtaining algorithms with better survivability properties than current ones (which, having been developed within a survivability-agnostic framework, tend to focus exclusively on pre-runtime verification and ignore survivability issues entirely).

2020-07-20
Guelton, Serge, Guinet, Adrien, Brunet, Pierrick, Martinez, Juan Manuel, Dagnat, Fabien, Szlifierski, Nicolas.  2018.  [Research Paper] Combining Obfuscation and Optimizations in the Real World. 2018 IEEE 18th International Working Conference on Source Code Analysis and Manipulation (SCAM). :24–33.
Code obfuscation is the de facto standard to protect intellectual property when delivering code in an unmanaged environment. It relies on additive layers of code tangling techniques, white-box encryption calls, and platform-specific or tool-specific countermeasures to make it harder for a reverse engineer to access critical pieces of data or to understand core algorithms. The literature provides plenty of different obfuscation techniques that can be used at compile time to transform data or control flow in order to provide some kind of protection against different reverse engineering scenarios. Scheduling code transformations to optimize a given metric is known as the pass scheduling problem, a problem known to be NP-hard, but solved in a practical way using hard-coded sequences that are generally satisfactory. Adding code obfuscation to the problem introduces two new dimensions. First, as a code obfuscator needs to find a balance between obfuscation and performance, pass scheduling becomes a multi-criteria optimization problem. Second, obfuscation passes transform their inputs in unconventional ways, which means some pass combinations may not be desirable or even valid. This paper highlights several issues met when blindly chaining different kinds of obfuscation and optimization passes, emphasizing the need for a formal model to combine them. It proposes a non-intrusive formalism that leverages sequential pass management techniques. The model is validated on real-world scenarios gathered during the development of an industrial-strength obfuscator on top of the LLVM compiler infrastructure.
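One role of such a formalism, ruling out invalid pass combinations, can be sketched as a simple ordering check; the pass names and the incompatibility rule below are hypothetical, not drawn from the paper or from LLVM.

```python
def valid_sequence(passes, forbidden_after):
    """Check a pass schedule against constraints of the form 'once A has
    run, B must not run later' - a toy stand-in for the formalism's job of
    rejecting undesirable obfuscation/optimization combinations."""
    seen = set()
    for p in passes:
        if any(a in seen for a, b in forbidden_after if b == p):
            return False
        seen.add(p)
    return True

rules = [("flatten-cfg", "vectorize")]  # hypothetical incompatibility
print(valid_sequence(["inline", "flatten-cfg", "vectorize"], rules))  # False
print(valid_sequence(["inline", "vectorize", "flatten-cfg"], rules))  # True
```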
Stroup, Ronald L., Niewoehner, Kevin R..  2019.  Application of Artificial Intelligence in the National Airspace System – A Primer. 2019 Integrated Communications, Navigation and Surveillance Conference (ICNS). :1–14.

The National Airspace System (NAS), as a portion of the US transportation system, has not yet begun to model or adopt integration of Artificial Intelligence (AI) technology. However, users of the NAS, e.g., air transport operators and UAS operators, are beginning to use this technology throughout their operations. At issue within the broader aviation marketplace is the continued search for a solution set to the persistent daily delays and schedule perturbations that occur within the NAS. Despite billions invested through the NAS Modernization Program, the delays persist in the face of reduced demand for commercial routings. Every delay represents an economic loss to commercial transport operators, passengers, freighters, and any business depending on the transportation performance. Therefore, the FAA needs to begin to address, from an advanced concepts perspective, how this wave of new technology will affect various operational performance parameters, including safety, security, efficiency, and resiliency solution sets. This paper is the first in a series of papers we are developing to explore the application of AI in the National Airspace System (NAS). This first paper is meant to get everyone in the aviation community on the same page, a primer if you will, to start the technical discussions. This paper will define AI; the capabilities associated with AI; current use cases within the aviation ecosystem; and how to prepare for insertion of AI in the NAS. The next series of papers will look at NAS Operations Theory utilizing AI capabilities, eventually leading to a future intelligent NAS (iNAS) environment.

2020-07-16
Ma, Siyou, Feng, Gao, Yan, Yunqiang.  2019.  Study on Hybrid Collaborative Simulation Testing Method Towards CPS. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :51–56.

CPS is generally complex to study, analyze, and design. Simulation testing is an important means to ensure the correctness of CPS design and implementation, but it is difficult to fully test, verify, and evaluate the components or subsystems in a CPS system due to their inconsistent development progress. To address this problem, we designed a hybrid P2P-based collaborative simulation test framework composed of full physical nodes, hardware-in-the-loop (HIL) nodes, and full digital nodes to simulate components or subsystems at different stages of development. Based on this framework, we then proposed a collaborative simulation control strategy comprising sliding-window-based clock synchronization, dynamic adaptive time advancement, and multi-priority task scheduling with a preemptive time threshold. Experiments showed that the hybrid collaborative simulation testing method proposed in this paper can test CPS more fully and effectively.
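A minimal sketch of the sliding-window clock-synchronization idea, assuming the window simply averages recent offset samples between two nodes; the window size and smoothing policy are assumptions, not the paper's design.

```python
from collections import deque

class SlidingWindowClockSync:
    """Smooth the clock offset between two simulation nodes with a
    sliding-window average of recent offset observations."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)

    def observe(self, local_ts, remote_ts):
        self.samples.append(remote_ts - local_ts)

    def offset(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

sync = SlidingWindowClockSync(window=4)
for local, remote in [(0.0, 0.8), (1.0, 1.9), (2.0, 2.7), (3.0, 3.9)]:
    sync.observe(local, remote)
print(round(sync.offset(), 3))  # 0.825 - smoothed correction for the local clock
```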

2020-05-15
Wang, Jihe, Zhang, Meng, Qiu, Meikang.  2018.  A Diffusional Schedule for Traffic Reducing on Network-on-Chip. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :206–210.
Tasks on a NoC (Network-on-Chip) are less efficient because of long-distance data synchronization. An inefficient task schedule strategy can lead to a large amount of remote data accessing that ruins the speedup of parallel execution of multiple tasks. Thus, we propose an energy-efficient task schedule that reduces task traffic with a diffusional pattern. The task mapping algorithm optimizes traffic distribution by limiting tasks to a small area to reduce NoC activities. Compared to application-layer optimization, our task mapping obtains 20% energy saving and 15% latency reduction on average.
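One way to picture a "diffusional" mapping is BFS order outward from a seed core, which packs communicating tasks into a small area; this is a sketch of the idea only, not the paper's algorithm, and the mesh dimensions are invented.

```python
from collections import deque

def diffusional_map(num_tasks, mesh_w, mesh_h, seed=(0, 0)):
    """Map tasks to NoC cores by 'diffusing' outward from a seed core in
    BFS order, keeping tasks in a compact region to cut long-distance
    traffic."""
    order, seen, frontier = [], {seed}, deque([seed])
    while frontier and len(order) < num_tasks:
        x, y = frontier.popleft()
        order.append((x, y))
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < mesh_w and 0 <= ny < mesh_h and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return {t: core for t, core in enumerate(order)}

print(diffusional_map(5, 4, 4))  # 5 tasks packed around core (0, 0)
```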
2020-04-13
Wu, Qiong, Zhang, Haitao, Du, Peilun, Li, Ye, Guo, Jianli, He, Chenze.  2019.  Enabling Adaptive Deep Neural Networks for Video Surveillance in Distributed Edge Clouds. 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). :525–528.
In the field of video surveillance, the demands of intelligent video analysis services based on Deep Neural Networks (DNNs) have grown rapidly. Although most existing studies focus on the performance of DNNs pre-deployed at remote clouds, the network delay caused by computation offloading from network cameras to remote clouds is usually long and sometimes unbearable. Edge computing can enable rich services and applications in close proximity to the network cameras. However, owing to the limited computing resources of distributed edge clouds, it is challenging to satisfy low latency and high accuracy requirements for all users, especially when the number of users surges. To address this challenge, we first formulate the intelligent video surveillance task scheduling problem that minimizes the average response time while meeting the performance requirements of tasks and prove that it is NP-hard. Second, we present an adaptive DNN model selection method to identify the most effective DNN model for each task by comparing the feature similarity between the input video segment and pre-stored training videos. Third, we propose a two-stage delay-aware graph searching approach that presents a beneficial trade-off between network delay and computing delay. Experimental results demonstrate the efficiency of our approach.
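The model-selection step can be sketched as picking the DNN whose training-video feature profile is most similar to the input segment; cosine similarity and the feature vectors below are placeholder assumptions, not the paper's actual similarity measure or features.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_model(segment_features, model_profiles):
    """Pick the DNN whose stored training-video profile best matches the
    incoming segment - a sketch of adaptive model selection."""
    return max(model_profiles,
               key=lambda m: cosine(segment_features, model_profiles[m]))

profiles = {"night-model": [0.1, 0.9, 0.3],
            "day-model":   [0.8, 0.2, 0.5],
            "crowd-model": [0.4, 0.4, 0.9]}
print(select_model([0.7, 0.3, 0.4], profiles))  # -> day-model
```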
2020-02-10
Yang, Weiyong, Liu, Wei, Wei, Xingshen, Lv, Xiaoliang, Qi, Yunlong, Sun, Boyan, Liu, Yin.  2019.  Micro-Kernel OS Architecture and its Ecosystem Construction for Ubiquitous Electric Power IoT. 2019 IEEE International Conference on Energy Internet (ICEI). :179–184.

The operating system is extremely important for both "Made in China 2025" and the ubiquitous electric power Internet of Things. By investigating five key requirements for the ubiquitous electric power Internet of Things at the OS level (performance, ecosystem, information security, functional security, developer framework), this paper introduces the intelligent NARI microkernel operating system and its innovative schemes. It is implemented with a microkernel architecture based on trusted computing. Technologies such as a process-based fine-grained real-time scheduling algorithm, sigma0 efficient message channels, and service process binding on multicore are applied to improve system performance. For better ecosystem expansion, the POSIX standard API is compatible, and Linux containers, embedded virtualization, and intelligent interconnection technology are supported. Native process sandboxing and mimicry defense are considered in the security mechanism design. Multi-level exception handling and multidimensional partition isolation are adopted to provide high reliability. Theorem-assisted proof tools based on Isabelle/HOL are used to verify the design and implementation of the NARI microkernel OS. A developer framework including tools, kits, and specifications is discussed for developing both system software and user software on this IoT OS.

2019-12-17
Huang, Jeff.  2018.  UFO: Predictive Concurrency Use-After-Free Detection. 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE). :609–619.

Use-After-Free (UAF) vulnerabilities are caused by the program operating on a dangling pointer and can be exploited to compromise critical software systems. While there have been many tools to mitigate UAF vulnerabilities, UAF remains one of the most common attack vectors. UAF is particularly difficult to detect in concurrent programs, in which a UAF may only occur with rare thread schedules. In this paper, we present a novel technique, UFO, that can precisely predict UAFs based on a single observed execution trace with a provably higher detection capability than existing techniques with no false positives. The key technical advancement of UFO is an extended maximal thread causality model that captures the largest possible set of feasible traces that can be inferred from a given multithreaded execution trace. By formulating UAF detection as a constraint solving problem atop this model, we can explore a much larger thread scheduling space than classical happens-before based techniques. We have evaluated UFO on several real-world large complex C/C++ programs including Chromium and Firefox. UFO scales to real-world systems with hundreds of millions of events in their execution and has detected a large number of real concurrency UAFs.
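For contrast with UFO's predictive approach, the simplest non-predictive check scans a single observed trace for a use after a free; UFO goes further by reordering events permitted by its causality model to predict UAFs in other schedules, which this sketch does not model. The trace format is invented.

```python
def find_uaf(trace):
    """Flag any use of a pointer after its free within one observed trace.
    Events: (op, pointer, thread_id), with op in {'use', 'free'}."""
    freed = set()
    for i, (op, ptr, thread) in enumerate(trace):
        if op == "free":
            freed.add(ptr)
        elif op == "use" and ptr in freed:
            yield i, ptr, thread  # dangling-pointer use detected

trace = [("use", "p1", 1), ("free", "p1", 2), ("use", "p1", 1)]
print(list(find_uaf(trace)))  # -> [(2, 'p1', 1)]
```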

2019-12-09
Li, Wenjuan, Cao, Jian, Hu, Keyong, Xu, Jie, Buyya, Rajkumar.  2019.  A Trust-Based Agent Learning Model for Service Composition in Mobile Cloud Computing Environments. IEEE Access. 7:34207–34226.
Mobile cloud computing has the features of resource constraints, openness, and uncertainty, which lead to high uncertainty in its quality of service (QoS) provision and serious security risks. Therefore, when faced with complex service requirements, an efficient and reliable service composition approach is extremely important. In addition, preference learning is also a key factor in improving user experiences. In order to address them, this paper introduces a three-layered trust-enabled service composition model for mobile cloud computing systems. Based on the fuzzy comprehensive evaluation method, we design a novel and integrated trust management model. Service brokers are equipped with a learning module enabling them to better analyze customers' service preferences, especially in cases when the details of a service request are not totally disclosed. Because traditional methods cannot totally reflect the autonomous collaboration between mobile cloud entities, a prototype system based on the multi-agent platform JADE is implemented to evaluate the efficiency of the proposed strategies. The experimental results show that our approach improves the transaction success rate and user satisfaction.
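The fuzzy comprehensive evaluation method the model builds on has a standard aggregation step, B = W x R, where W weights the criteria and each row of R gives one criterion's membership degrees over the rating grades; the criteria, grades, and values below are illustrative, not the paper's.

```python
def fuzzy_comprehensive_eval(weights, membership):
    """Aggregation step of fuzzy comprehensive evaluation: B = W x R,
    where row i of R holds criterion i's membership degrees over grades."""
    grades = len(membership[0])
    return [sum(w * row[g] for w, row in zip(weights, membership))
            for g in range(grades)]

# Criteria: reliability, latency, cost; grades: (high, medium, low) trust.
W = [0.5, 0.3, 0.2]
R = [[0.7, 0.2, 0.1],   # reliability mostly supports 'high trust'
     [0.4, 0.4, 0.2],
     [0.2, 0.5, 0.3]]
B = fuzzy_comprehensive_eval(W, R)
print(B, "->", ["high", "medium", "low"][B.index(max(B))])  # -> high
```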
2019-06-24
Chouikhi, S., Merghem-Boulahia, L., Esseghir, M..  2018.  Energy Demand Scheduling Based on Game Theory for Microgrids. 2018 IEEE International Conference on Communications (ICC). :1–6.

The advent of smart grids offers us the opportunity to better manage the electricity grids. One of the most interesting challenges in the modern grids is the consumer demand management. Indeed, the development in Information and Communication Technologies (ICTs) encourages the development of demand-side management systems. In this paper, we propose a distributed energy demand scheduling approach that uses minimal interactions between consumers to optimize the energy demand. We formulate the consumption scheduling as a constrained optimization problem and use game theory to solve this problem. On one hand, the proposed approach aims to reduce the total energy cost of a building's consumers. This imposes the cooperation between all the consumers to achieve the collective goal. On the other hand, the privacy of each user must be protected, which means that our distributed approach must operate with a minimal information exchange. The performance evaluation shows that the proposed approach reduces the total energy cost, each consumer's individual cost, as well as the peak to average ratio.
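The game-theoretic core can be sketched as best-response dynamics: each consumer repeatedly moves its load to the cheapest slot given everyone else's schedule, and with convex (here quadratic) slot pricing the loop settles into a Nash equilibrium that flattens the peak. This is a minimal sketch of the idea, not the paper's protocol; demands and pricing are invented.

```python
def best_response_scheduling(demands, slots, rounds=20):
    """Best-response loop for shifting each consumer's load to its
    cheapest slot under quadratic (load-squared) slot pricing."""
    schedule = {c: 0 for c in demands}          # everyone starts in slot 0
    def slot_load(s):
        return sum(d for c, d in demands.items() if schedule[c] == s)
    for _ in range(rounds):
        changed = False
        for c, d in demands.items():
            def cost(s):
                # marginal cost of placing d units into slot s
                others = slot_load(s) - (d if schedule[c] == s else 0)
                return (others + d) ** 2 - others ** 2
            best = min(range(slots), key=cost)
            if best != schedule[c]:
                schedule[c], changed = best, True
        if not changed:
            break                                # equilibrium reached
    return schedule

print(best_response_scheduling({"c1": 3, "c2": 3, "c3": 2, "c4": 2}, 2))
# -> loads split 6/4 across the two slots instead of 10/0
```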

2019-03-22
Sheikh, Abdullah, Munro, Malcolm, Budgen, David.  2018.  SSM: Scheduling Security Model for a Cloud Environment. Proceedings of the 2018 2nd International Conference on Cloud and Big Data Computing. :11–15.

Scheduling in the cloud is a complex task due to the number and variety of resources available and the volatility of resource-usage patterns, considering that the resource setting is on the service provider's side. This complexity is compounded further when security issues and Quality of Service (QoS) are also factored in. The aim of this paper is to describe a Scheduling Security Model (SSM) that treats security as a key element on which cloud services rely, and which affects the performance, cost, and time concerns of the cloud service within its security constraints. The paper defines the Scheduling Security Model (SSM) and evaluates it through a worked example, showing that it can meet customer requirements for cost and quality of service in the required time.

2018-12-10
Versluis, L., Neacsu, M., Iosup, A..  2018.  A Trace-Based Performance Study of Autoscaling Workloads of Workflows in Datacenters. 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :223–232.

To improve customer experience, datacenter operators offer support for simplifying application and resource management. For example, running workloads of workflows on behalf of customers is desirable, but requires increasingly more sophisticated autoscaling policies, that is, policies that dynamically provision resources for the customer. Although selecting and tuning autoscaling policies is a challenging task for datacenter operators, so far relatively few studies investigate the performance of autoscaling for workloads of workflows. Complementing previous knowledge, in this work we propose the first comprehensive performance study in the field. Using trace-based simulation, we compare state-of-the-art autoscaling policies across multiple application domains, workload arrival patterns (e.g., burstiness), and system utilization levels. We further investigate the interplay between autoscaling and regular allocation policies, and the complexity cost of autoscaling. Our quantitative study focuses not only on traditional performance metrics and on state-of-the-art elasticity metrics, but also on time- and memory-related autoscaling-complexity metrics. Our main results give strong and quantitative evidence about previously unreported operational behavior, for example, that autoscaling policies perform differently across application domains and allocation and provisioning policies should be co-designed.
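For readers unfamiliar with the policy families such studies compare, one of the simplest is a reactive policy that sizes the cluster to the backlog; the target and thresholds below are placeholders, not any policy from the paper.

```python
def reactive_autoscaler(pending_tasks, busy, idle, target_per_node=2):
    """Toy reactive policy: provision enough nodes to keep backlog per
    node near a target. Returns the scaling decision."""
    want = max(1, -(-pending_tasks // target_per_node))  # ceil division
    have = busy + idle
    return want - have   # >0: scale out, <0: scale in, 0: hold

print(reactive_autoscaler(pending_tasks=9, busy=2, idle=1))  # -> +2 nodes
```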

2018-05-09
Rahbari, D., Kabirzadeh, S., Nickray, M..  2017.  A security aware scheduling in fog computing by hyper heuristic algorithm. 2017 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS). :87–92.

Fog computing provides a new architecture for the implementation of the Internet of Things (IoT), which can connect sensor nodes to the cloud using the edge of the network. This structure improves latency and energy consumption relative to the cloud. In this heterogeneous and distributed environment, resource allocation is very important; hence, scheduling is a challenge for increasing productivity and allocating resources appropriately to the tasks. Programs that run in this environment should be protected from intruders. We consider three parameters, authentication, integrity, and confidentiality, to maintain security in fog devices. These parameters have time and computational overhead. In the proposed approach, we schedule the modules to run on fog devices with heuristic algorithms based on a data mining technique. The objective function includes CPU utilization, bandwidth, and security overhead. We compare the proposed algorithm with several heuristic algorithms. The results show that our proposed algorithm improves average energy consumption by 63.27% and cost by 44.71% relative to the PSO, ACO, and SA algorithms.
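A minimal sketch of such an objective: a weighted sum of CPU utilization, bandwidth, and the overhead of the chosen security services. The per-service overhead factors and weights are assumptions for illustration, not the paper's measured values.

```python
def module_score(cpu_util, bandwidth, security_services,
                 weights=(0.4, 0.3, 0.3)):
    """Objective sketch for security-aware fog scheduling (lower is better):
    balance CPU utilization, bandwidth use, and security-service overhead."""
    overhead = {"authentication": 0.05, "integrity": 0.10,
                "confidentiality": 0.20}  # assumed overhead factors
    sec_cost = sum(overhead[s] for s in security_services)
    w_cpu, w_bw, w_sec = weights
    return w_cpu * cpu_util + w_bw * bandwidth + w_sec * sec_cost

# Compare two candidate placements for one module (values invented).
print(module_score(0.6, 0.3, ["authentication", "integrity"]))                    # 0.375
print(module_score(0.8, 0.1, ["authentication", "integrity", "confidentiality"]))  # 0.455
```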

Lu, Z., Chen, F., Cheng, G., Ai, J..  2017.  A secure control plane for SDN based on Bayesian Stackelberg Games. 2017 3rd IEEE International Conference on Computer and Communications (ICCC). :1259–1264.

The vulnerability of the controller caused by the separation of control and forwarding exposes SDN to threats in which an attacker can gain remote access to the controller. The current work proposes a controller architecture called the secure control plane (SCP) that enhances security and increases the difficulty of attack through rotation of heterogeneous, multiple controllers. Specifically, a dynamic scheduling method based on Bayesian Stackelberg games is put forward to maximize the security reward of the defender during each migration. Second, a self-cleaning mechanism combined with the game strategy aims at improving the security level and forms a closed-loop defense mechanism. Finally, experiments quantitatively show that the defender obtains more security gain with the game strategy than with traditional strategies (pure and random), and that the self-cleaning mechanism brings the control plane to a higher level of security.
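Once the Stackelberg game has been solved offline for the defender's mixed strategy, the runtime rotation step reduces to sampling the next controller type from that distribution; the sketch below shows only this sampling step, with controller names and probabilities as placeholders rather than the paper's equilibrium.

```python
import random

def rotate_controller(mixed_strategy):
    """Sample the next controller variant from the defender's mixed
    strategy (probabilities computed offline from the Stackelberg game)."""
    variants, probs = zip(*mixed_strategy.items())
    return random.choices(variants, weights=probs, k=1)[0]

mixed = {"controller-A": 0.5, "controller-B": 0.3, "controller-C": 0.2}
print([rotate_controller(mixed) for _ in range(5)])  # rotation sequence
```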