Bibliography
Cloud computing systems (CCSs) enable the sharing of physical computing resources through virtualisation, where a group of virtual machines (VMs) can share the same physical resources of a given machine. However, this sharing can lead to a so-called side-channel attack (SCA), widely recognised as a potential threat to CCSs. Specifically, malicious VMs can capture information from target VMs, i.e., those holding sensitive information, merely by being co-located with them on the same physical machine. As such, a VM allocation algorithm needs to be cognizant of this issue and attempt to allocate malicious and target VMs onto different machines; that is, the allocation algorithm needs to be security-aware. This paper investigates the allocation patterns of VM allocation algorithms that are more likely to lead to a secure allocation. A driving objective is to reduce the number of VM migrations during allocation. We also propose a graph-based secure VM allocation algorithm (GbSRS) to minimise SCA threats. Our results show that algorithms following a stacking-based behaviour are more likely to produce secure VM allocations than those following spreading or random behaviours.
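To make the stacking-versus-spreading distinction concrete, here is a minimal Python sketch (not the paper's GbSRS algorithm; the placement functions, VM labels, and co-location count are illustrative assumptions) that places labelled VMs with each behaviour and counts machines on which a malicious VM is co-located with a target VM:

    # Illustrative sketch only: contrasts stacking vs. spreading placement
    # and counts insecure co-locations of "malicious" and "target" VMs.

    def place_stacking(vms, capacity):
        """Fill each machine to capacity before opening a new one (stacking)."""
        machines = [[]]
        for vm in vms:
            if len(machines[-1]) >= capacity:
                machines.append([])
            machines[-1].append(vm)
        return machines

    def place_spreading(vms, n_machines):
        """Round-robin VMs across all machines (spreading)."""
        machines = [[] for _ in range(n_machines)]
        for i, vm in enumerate(vms):
            machines[i % n_machines].append(vm)
        return machines

    def insecure_machines(machines):
        """Machines where a malicious VM is co-located with a target VM."""
        return sum(1 for m in machines if "malicious" in m and "target" in m)

    # VMs arriving in per-user batches: stacking keeps each batch together,
    # while round-robin spreading interleaves batches across machines.
    vms = ["malicious", "malicious", "target", "target", "normal", "normal"]
    print(insecure_machines(place_stacking(vms, capacity=2)))      # 0
    print(insecure_machines(place_spreading(vms, n_machines=3)))   # 1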
Edge computing supports the deployment of ubiquitous, smart services by providing computing and storage closer to terminal devices. However, ensuring the full security and privacy of computations performed at the edge is challenging due to resource limitations. This paper responds to this challenge and proposes an adaptive approach to defense randomization among the edge data centers via a stochastic game, whose solution corresponds to the optimal security deployment at the network's edge. Moreover, security risk is evaluated subjectively based on Prospect Theory to reflect realistic scenarios in which the attacker and the edge system do not perceive the status of the infrastructure in the same way. The results show that a non-deterministic defense policy yields better security than a static defense strategy.
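A minimal sketch of the two ingredients the abstract combines, under assumed parameters: a randomized (mixed) defense policy sampled per time slot, and the standard Tversky-Kahneman probability-weighting function commonly used to model Prospect Theory's subjective risk perception. This is a generic formulation, not necessarily the paper's exact game model:

    import random

    def pt_weight(p, gamma=0.61):
        """Prospect Theory probability weighting (Tversky & Kahneman, 1992):
        small attack probabilities are over-weighted, large ones under-weighted."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def sample_defense(mixed_strategy):
        """Draw one edge data center to harden this time slot from a mixed
        strategy; a static policy would always return the same site."""
        sites, probs = zip(*mixed_strategy.items())
        return random.choices(sites, weights=probs, k=1)[0]

    print(pt_weight(0.05))                                       # ~0.13 > 0.05
    print(sample_defense({"dc1": 0.5, "dc2": 0.3, "dc3": 0.2}))  # random site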
Research on the design of data center infrastructure is increasing, in both academia and industry, due to the rapid development of cloud-based applications such as search engines, social networks, and large-scale computing. At large scale, data centers can consist of hundreds to thousands of servers that require high performance and low downtime. As the infrastructure of applications and services grows to meet the needs of a dynamic data center network, the network topology must be designed to guarantee availability and security. One way to achieve this is by implementing a zero trust security model based on micro-segmentation. Zero trust is a security concept based on the principle of "never trust, always verify", in which no network traffic is implicitly trusted: under the zero trust model, all traffic is treated as untrusted. Micro-segmentation is a way to achieve zero trust by dividing a network into smaller logical segments to restrict the traffic between them. In this research, the performance of a software-defined data center network with a zero trust security model using micro-segmentation is evaluated on a testbed simulation of Cisco Application Centric Infrastructure by measuring round trip time, jitter, and packet loss. The evaluation results show that micro-segmentation adds an average round trip time of 4 μs and jitter of 11 μs with no packet loss, so security can be improved without significantly affecting the network performance of the data center.
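The micro-segmentation idea reduces to a default-deny policy between logical segments. The sketch below (hypothetical segment names and rules, not Cisco ACI configuration) captures the "never trust, always verify" principle in a few lines of Python:

    # Default-deny between segments: only explicitly contracted flows pass.
    ALLOWED = {
        ("web", "app", 8080),   # web tier may reach the app tier on 8080
        ("app", "db", 5432),    # app tier may reach the database on 5432
    }

    def permit(src_segment, dst_segment, port):
        """Zero trust check: traffic is untrusted unless explicitly allowed."""
        return (src_segment, dst_segment, port) in ALLOWED

    assert permit("web", "app", 8080)
    assert not permit("web", "db", 5432)   # no direct web-to-db traffic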
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics exhibited in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper we describe and discuss how ML can be used to understand both workloads and environments automatically, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tail-latency stragglers. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study on performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
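As a rough illustration of performance-centric node classification (assumed features, labels, and model choice; not the paper's method), one can train a classifier on per-node metrics and steer new tasks away from nodes predicted to produce stragglers:

    from sklearn.tree import DecisionTreeClassifier

    # Features per node: [cpu_load, mem_pressure, disk_util]; labels derived
    # from whether past tasks on the node landed in the latency tail.
    X = [[0.2, 0.3, 0.1], [0.9, 0.8, 0.7], [0.3, 0.2, 0.2], [0.8, 0.9, 0.9]]
    y = ["fast", "slow", "fast", "slow"]
    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

    def schedulable_nodes(nodes, metrics):
        """Keep only nodes predicted 'fast' (straggler avoidance)."""
        return [n for n, p in zip(nodes, clf.predict(metrics)) if p == "fast"]

    print(schedulable_nodes(["n1", "n2"],
                            [[0.25, 0.2, 0.15], [0.85, 0.9, 0.8]]))  # ['n1']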
We consider the scenario where a cloud service provider (CSP) operates multiple geo-distributed datacenters to provide Internet-scale service. Our objective is to minimize the total electricity and bandwidth cost by jointly optimizing electricity procurement from wholesale markets and geographical load balancing (GLB), i.e., dynamically routing workloads to locations with cheaper electricity. Under the ideal setting where exact values of market prices and workloads are given, this problem reduces to a simple linear program and is easy to solve. However, under the realistic setting where only the distributions of these variables are available, the problem unfolds into a non-convex infinite-dimensional one and is challenging to solve. One of our main contributions is to develop an algorithm that is proven to solve this challenging problem optimally, by exploring the full design space of strategic bidding. Trace-driven evaluations corroborate our theoretical results, demonstrate the fast convergence of our algorithm, and show that it can reduce the CSP's cost by up to 20% compared with baseline alternatives. This paper highlights the intriguing role of uncertainty in workloads and market prices, measured by their variances. While uncertainty in workloads deteriorates the cost-saving performance of joint electricity procurement and GLB, counter-intuitively, uncertainty in market prices can be exploited to achieve a cost reduction even larger than in the setting without price uncertainty.
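The "ideal setting" reduction the abstract mentions is easy to reproduce: with exact prices and workloads known, joint procurement and GLB becomes a small linear program. A sketch with made-up numbers and an assumed linear energy model (using scipy's linprog, not the paper's code):

    from scipy.optimize import linprog

    prices = [0.04, 0.07, 0.05]    # $/kWh at each datacenter
    bw_cost = [0.02, 0.01, 0.03]   # bandwidth $ per unit of routed workload
    energy = 1.5                   # kWh per unit of workload (assumed linear)
    capacity = [60.0, 80.0, 50.0]  # workload capacity per datacenter
    total_load = 100.0

    # Cost per unit routed to site i: electricity plus bandwidth.
    c = [p * energy + b for p, b in zip(prices, bw_cost)]
    res = linprog(c,
                  A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_load],  # serve all load
                  bounds=[(0.0, cap) for cap in capacity])
    print(res.x, res.fun)   # optimal routing and minimum total cost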