Biblio

Found 203 results

Filters: First Letter of Last Name is V
V
Volkov, A. I., Semin, V. G., Khakimullin, E. R..  2020.  Modeling the Structures of Threats to Information Security Risks based on a Fuzzy Approach. 2020 International Conference Quality Management, Transport and Information Security, Information Technologies (IT QM IS). :132—135.

The article deals with the development and implementation of a method for synthesizing structures of threats and risks to information security based on a fuzzy approach. We consider a method for modeling threat structures based on the structural abstractions of aggregation, generalization and association. It is shown that these forms of structural abstraction allow implementing ascending and descending inheritance of threat characteristics. A database of fuzzy rules based on procedural abstractions has been developed and implemented in the Fuzzy Logic tool environment.

Vollala, S., Varadhan, V.V., Geetha, K., Ramasubramanian, N..  2014.  Efficient modular multiplication algorithms for public key cryptography. Advance Computing Conference (IACC), 2014 IEEE International. :74-78.

The modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems like the Rivest-Shamir-Adleman, Diffie-Hellman and ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in the modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of the modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. We propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time and at the same time demand only minimal additional space to store the pre-computed values. These algorithms are suitable for devices with low computational power and limited storage.
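As a point of reference for the operations mentioned in the abstract, the following Python sketch shows the baseline square-and-multiply method for a^x mod n and Shamir's simultaneous-exponentiation trick for a^x b^y mod n. These are standard textbook techniques, not the BFW, SRW or SFW algorithms proposed in the paper, and serve only to make the operations concrete.

```python
# Baseline modular exponentiation routines; illustrative only, without the
# paper's pre-computation tables.

def mod_exp(a: int, x: int, n: int) -> int:
    """Compute a^x mod n by scanning the bits of x, most significant first."""
    result = 1
    a %= n
    for bit in bin(x)[2:]:
        result = (result * result) % n       # square on every bit
        if bit == '1':
            result = (result * a) % n        # multiply when the bit is set
    return result

def dual_mod_exp(a: int, x: int, b: int, y: int, n: int) -> int:
    """Compute a^x * b^y mod n in a single pass (Shamir's trick)."""
    ab = (a * b) % n
    result = 1
    for i in range(max(x.bit_length(), y.bit_length()) - 1, -1, -1):
        result = (result * result) % n
        xi, yi = (x >> i) & 1, (y >> i) & 1
        if xi and yi:
            result = (result * ab) % n
        elif xi:
            result = (result * a) % n
        elif yi:
            result = (result * b) % n
    return result

# Sanity checks against Python's built-in three-argument pow
assert mod_exp(7, 560, 561) == pow(7, 560, 561)
assert dual_mod_exp(3, 45, 5, 77, 1009) == (pow(3, 45, 1009) * pow(5, 77, 1009)) % 1009
```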
 

Vollmer, T., Manic, M., Linda, O..  2014.  Autonomic Intelligent Cyber-Sensor to Support Industrial Control Network Awareness. Industrial Informatics, IEEE Transactions on. 10:1647-1658.

The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) A flexible two-level communication layer based on autonomic computing and service oriented architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.

Völp, Marcus, Lackorzynski, Adam, Decouchant, Jérémie, Rahli, Vincent, Rocha, Francisco, Esteves-Verissimo, Paulo.  2016.  Avoiding Leakage and Synchronization Attacks Through Enclave-Side Preemption Control. Proceedings of the 1st Workshop on System Software for Trusted Execution. :6:1–6:6.

Intel SGX is the latest processor architecture promising secure code execution despite large, complex and hence potentially vulnerable legacy operating systems (OSs). However, two recent works identified vulnerabilities that allow an untrusted management OS to extract secret information from Intel SGX's enclaves, and to violate their integrity by exploiting concurrency bugs. In this work, we re-investigate delayed preemption (DP) in the context of Intel SGX. DP is a mechanism originally proposed for L4-family microkernels as disable-interrupt replacement. Recapitulating earlier results on language-based information-flow security, we illustrate the construction of leakage-free code for enclaves. However, as long as adversaries have fine-grained control over preemption timing, these solutions are impractical from a performance/complexity perspective. To overcome this, we resort to delayed preemption, and sketch a software implementation for hypervisors providing enclaves as well as a hardware extension for systems like SGX. Finally, we illustrate how static analyses for SGX may be extended to check confidentiality of preemption-delaying programs.

Volz, V., Majchrzak, K., Preuss, M..  2018.  A Social Science-based Approach to Explanations for (Game) AI. 2018 IEEE Conference on Computational Intelligence and Games (CIG). :1–2.

The current AI revolution provides us with many new, but often very complex, algorithmic systems. This complexity not only limits understanding but also the acceptance of, e.g., deep learning methods. In recent years, explainable AI (XAI) has been proposed as a remedy. However, this research is rarely supported by publications on explanations from the social sciences. We suggest a bottom-up approach to explanations for (game) AI, starting from a baseline definition of understandability informed by the concept of limited human working memory. We detail our approach and demonstrate its application to two games from the GVGAI framework. Finally, we discuss our vision of how additional concepts from social sciences can be integrated into our proposed approach and how the results can be generalised.

von Hof, Vincent, Fögen, Konrad, Kuchen, Herbert.  2017.  Detecting Spring Configurations Errors. Proceedings of the Symposium on Applied Computing. :1505–1512.
Dependency injection frameworks such as the Spring framework rely on dynamic language features of Java. Errors arising from the improper usage of these features bypass the compile-time checks of the Java compiler. This paper discusses the application of static code analysis as a means to restore compile-time checking for Spring-related configuration errors. First, possible errors in the configuration of Spring are identified and classified. Attributed grammars are applied in order to formally detect the errors and a prototypical compiler extension is implemented based on Java's pluggable annotation processing API.
von Maltitz, Marcel, Carle, Georg.  2018.  Leveraging Secure Multiparty Computation in the Internet of Things. Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services. :508–510.
Centralized systems in the Internet of Things—be it local middleware or cloud-based services—fail to fundamentally address privacy of the collected data. We propose an architecture featuring secure multiparty computation at its core in order to realize data processing systems which already incorporate support for privacy protection in the architecture.
Vora, Keval, Tian, Chen, Gupta, Rajiv, Hu, Ziang.  2017.  CoRAL: Confined Recovery in Distributed Asynchronous Graph Processing. Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems. :223–236.
Existing distributed asynchronous graph processing systems employ checkpointing to capture globally consistent snapshots and roll back all machines to the most recent checkpoint to recover from machine failures. In this paper we argue that recovery in distributed asynchronous graph processing does not require the entire execution state to be rolled back to a globally consistent state, due to the relaxed asynchronous execution semantics. We define the properties required in the recovered state for it to be usable for correct asynchronous processing and develop CoRAL, a lightweight checkpointing and recovery algorithm. First, this algorithm carries out confined recovery that only rolls back the graph execution states of the failed machines to effect recovery. Second, it relies upon lightweight checkpoints that capture locally consistent snapshots with a reduced peak network bandwidth requirement. Our experiments using real-world graphs show that our technique recovers from failures and finishes processing 1.5x to 3.2x faster compared to the traditional asynchronous checkpointing and recovery mechanism when failures impact 1 to 6 machines of a 16 machine cluster. Moreover, capturing locally consistent snapshots significantly reduces the intermittent high peak bandwidth usage required to save the snapshots – the average reduction in 99th percentile bandwidth ranges from 22% to 51% while 1 to 6 snapshot replicas are being maintained.
Vorobeychik, Yevgeniy, Mayo, Jackson R., Armstrong, Robert C., Ruthruff, Joseph R..  2011.  Noncooperatively Optimized Tolerance: Decentralized Strategic Optimization in Complex Systems. Phys. Rev. Lett.. 107:108702.

We introduce noncooperatively optimized tolerance (NOT), a game theoretic generalization of highly optimized tolerance (HOT), which we illustrate in the forest fire framework. As the number of players increases, NOT retains features of HOT, such as robustness and self-dissimilar landscapes, but also develops features of self-organized criticality. The system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between players, and at the same time exhibits increasing resilience against changes in the environment, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes.

Vorobiev, E. G., Petrenko, S. A., Kovaleva, I. V., Abrosimov, I. K..  2017.  Organization of the Entrusted Calculations in Crucial Objects of Informatization under Uncertainty. 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM). :299–300.

The paper considers the urgent task of organizing confidential calculations in crucial objects of informatization on the basis of domestic TPM (Trusted Platform Module) technologies. Recommendations and architectural concepts are proposed for a special hardware TPM module that is built into a computing platform and realizes a so-called "root of trust". As a result, confidential calculations can be organized on a domestic electronic component base.

Vorobiev, E. G., Petrenko, S. A., Kovaleva, I. V., Abrosimov, I. K..  2017.  Analysis of computer security incidents using fuzzy logic. 2017 XX IEEE International Conference on Soft Computing and Measurements (SCM). :369–371.

The work proposes and justifies a processing algorithm of computer security incidents based on the author's signatures of cyberattacks. Attention is also paid to the design pattern SOPKA based on the Russian ViPNet technology. Recommendations are made regarding the establishment of the corporate segment SOPKA, which meets the requirements of Presidential Decree of January 15, 2013 number 31c “On the establishment of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation” and “Concept of the state system of detection, prevention and elimination of the consequences of cyber-attacks on information resources of the Russian Federation” approved by the President of the Russian Federation on December 12, 2014, No K 1274.

Voronkov, Oleg Yu..  2019.  Synergetic Synthesis of the Hierarchical Control System of the “Flying Platform”. 2019 III International Conference on Control in Technical Systems (CTS). :23—26.
The work is devoted to the synthesis of an aircraft control system using synergetic control theory. The paper contains a general description of the apparatus and its control system, a synthesis of control laws, and a computer simulation. The relevance of the work lies in the need to create a vertical take-off aircraft of the “flying platform” type in order to increase the efficiency of rescue operations in disaster zones where helicopters and other modern means can't cope with the task. The scientific novelty of the work consists in the application of synergetic approaches to the development of a hierarchical system for balancing the vehicle's spatial position and to the coordinating energy-saving control of electric motors that receive energy from a turbine generator.
Voronych, Artur, Nyckolaychuk, Lyubov, Vozna, Nataliia, Pastukh, Taras.  2019.  Methods and Special Processors of Entropy Signal Processing. 2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM). :1–4.

The analysis of applied tasks and methods of entropy signal processing is carried out in this article. Theoretical comments are given on specific schemes of special processors for the determination of probability and correlation activity. The prospects of using C. Shannon's probabilistic entropy in cipher signal receivers are reviewed. Examples of entropy-manipulated signals and system characteristics of the proposed special processors are given.
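A minimal illustration of the quantity at the heart of the abstract: the sketch below estimates Shannon entropy from a histogram of signal samples using numpy. This generic software estimator is only an illustration and does not reflect the authors' special-processor schemes.

```python
# Histogram-based estimate of Shannon entropy (in bits) of a sampled signal.
import numpy as np

def shannon_entropy(signal: np.ndarray, bins: int = 32) -> float:
    """Estimate the entropy of a signal from the empirical distribution of its samples."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()          # empirical probabilities per bin
    p = p[p > 0]                       # drop empty bins (0 * log 0 -> 0)
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
noise = rng.normal(size=10_000)                       # broadband noise signal
tone = np.sin(np.linspace(0, 200 * np.pi, 10_000))    # deterministic tone
print("noise entropy:", shannon_entropy(noise), "tone entropy:", shannon_entropy(tone))
```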

Voskuilen, G., Vijaykumar, T.N..  2014.  Fractal++: Closing the performance gap between fractal and conventional coherence. Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on. :409-420.

Cache coherence protocol bugs can cause multicores to fail. Existing coherence verification approaches incur state explosion at small scales or require considerable human effort. As protocols' complexity and multicores' core counts increase, verification continues to be a challenge. Recently, researchers proposed fractal coherence which achieves scalable verification by enforcing observational equivalence between sub-systems in the coherence protocol. A larger sub-system is verified implicitly if a smaller sub-system has been verified. Unfortunately, fractal protocols suffer from two fundamental limitations: (1) indirect-communication: sub-systems cannot directly communicate and (2) partially-serial-invalidations: cores must be invalidated in a specific, serial order. These limitations disallow common performance optimizations used by conventional directory protocols: reply-forwarding, where caches communicate directly, and parallel invalidations. Therefore, fractal protocols lack performance scalability while directory protocols lack verification scalability. To enable both performance and verification scalability, we propose Fractal++ which employs a new class of protocol optimizations for verification-constrained architectures: decoupled-replies, contention-hints, and fully-parallel-fractal-invalidations. The first two optimizations allow reply-forwarding-like performance while the third optimization enables parallel invalidations in fractal protocols. Unlike conventional protocols, Fractal++ preserves observational equivalence and hence is scalably verifiable. In 32-core simulations of single- and four-socket systems, Fractal++ performs nearly as well as a directory protocol while providing scalable verifiability, whereas the best-performing previous fractal protocol performs 8% worse on average (and up to 26% worse) with a single socket and 12% worse on average (and up to 34% worse) with a longer-latency multi-socket system.
 

Vougioukas, Michail, Androutsopoulos, Ion, Paliouras, Georgios.  2017.  A Personalized Global Filter To Predict Retweets. Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. :393–394.

Information shared on Twitter is ever increasing and users (recipients) are overwhelmed by the number of tweets they receive, many of which are of no interest. Filters that estimate the interest of each incoming post can alleviate this problem, for example by allowing users to sort incoming posts by predicted interest (e.g., "top stories" vs. "most recent" in Facebook). Global and personal filters have been used to detect interesting posts in social networks. Global filters are trained on large collections of posts and reactions to posts (e.g., retweets), aiming to predict how interesting a post is for a broad audience. In contrast, personal filters are trained on posts received by a particular user and the reactions of the particular user. Personal filters can provide recommendations tailored to a particular user's interests, which may not coincide with the interests of the majority of users that global filters are trained to predict. On the other hand, global filters are typically trained on much larger datasets compared to personal filters. Hence, global filters may work better in practice, especially with new users, for which personal filters may have very few training instances (the "cold start" problem). Following Uysal and Croft, we devised a hybrid approach that combines the strengths of both global and personal filters. As in global filters, we train a single system on a large, multi-user collection of tweets. Each tweet, however, is represented as a feature vector with a number of user-specific features.

Voyiatzis, I., Sgouropoulou, C., Estathiou, C..  2015.  Detecting untestable hardware Trojan with non-intrusive concurrent on line testing. 2015 10th International Conference on Design Technology of Integrated Systems in Nanoscale Era (DTIS). :1–2.

Hardware Trojans are an emerging threat that intrudes into the design and manufacturing cycle of chips and has gained much attention lately due to the severity of the problems it poses to the chip supply chain. Typically, hardware Trojans are not detected during the usual manufacturing testing because they are activated by a rare event. A class of published HTs is based on the geometrical characteristics of the circuit and claims to be undetectable, in the sense that their activation cannot be detected. In this work we study the effect of continuously monitoring the inputs of the module under test with respect to the detection of HTs possibly inserted in the module, either in the design or the manufacturing stage.

Vrbančič, Grega, Fister, Jr., Iztok, Podgorelec, Vili.  2018.  Swarm Intelligence Approaches for Parameter Setting of Deep Learning Neural Network: Case Study on Phishing Websites Classification. Proceedings of the 8th International Conference on Web Intelligence, Mining and Semantics. :9:1-9:8.

In recent decades, the web and online services have revolutionized the modern world. However, as our dependence on online services increases, online security threats are also increasing rapidly. One of the most common online security threats is the so-called phishing attack, the purpose of which is to mimic a legitimate website such as an online banking, e-commerce or social networking website in order to obtain sensitive data such as usernames, passwords, financial and health-related information from potential victims. The problem of detecting phishing websites has been addressed many times using various methodologies, from conventional classifiers to more complex hybrid methods. Recent advancements in deep learning approaches suggest that the classification of phishing websites using deep learning neural networks should outperform traditional machine learning algorithms. However, the results of utilizing deep neural networks heavily depend on the setting of different learning parameters. In this paper, we propose a swarm intelligence based approach to parameter setting of a deep learning neural network. By applying the proposed approach to the classification of phishing websites, we were able to improve their detection when compared to existing algorithms.
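The following sketch shows the general shape of a swarm intelligence parameter search, here a minimal particle swarm optimization (PSO) loop in Python. The fitness function is a stand-in; in the paper's setting it would be the validation accuracy of the phishing classifier trained with the candidate learning parameters, and the particular swarm variant used by the authors may differ.

```python
# Minimal PSO over a box-constrained parameter space; higher fitness is better.
import numpy as np

def pso(fitness, bounds, particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((particles, dim)), rng.random((particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Hypothetical parameters: (log10 learning rate, dropout rate).
bounds = np.array([[-4.0, -1.0], [0.0, 0.5]])
stand_in_fitness = lambda p: -((p[0] + 2.5) ** 2 + (p[1] - 0.2) ** 2)  # peak at (-2.5, 0.2)
best_params, best_fit = pso(stand_in_fitness, bounds)
print(best_params, best_fit)
```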

Vu, Ly, Bui, Cong Thanh, Nguyen, Quang Uy.  2017.  A Deep Learning Based Method for Handling Imbalanced Problem in Network Traffic Classification. Proceedings of the Eighth International Symposium on Information and Communication Technology. :333–339.

Network traffic classification is an important problem in network traffic analysis. It plays a vital role in many network tasks including quality of service, firewall enforcement and security. One of the challenging problems of classifying network traffic is the imbalanced property of network data: usually, the amount of traffic in some classes is much higher than in others. In this paper, we propose an application of a deep learning approach to address the imbalanced data problem in network traffic classification. We use a recently proposed deep network for unsupervised learning called Auxiliary Classifier Generative Adversarial Network to generate synthesized data samples for balancing the minor and the major classes. We tested our method on a well-known network traffic dataset and the results showed that our proposed method achieved better performance compared to a recently proposed method for handling the imbalanced problem in network traffic classification.

Vu, Q. H., Ruta, D., Cen, L..  2017.  An ensemble model with hierarchical decomposition and aggregation for highly scalable and robust classification. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). :149–152.

This paper introduces an ensemble model that solves the binary classification problem by incorporating basic Logistic Regression with two recent advanced paradigms: extreme gradient boosted decision trees (xgboost) and deep learning. To obtain the best result when integrating sub-models, we introduce a solution to split and select sets of features for the sub-model training. In addition to the ensemble model, we propose a flexible, robust and highly scalable new scheme for building a composite classifier that tries to simultaneously implement multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of classification errors. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, based on game state information. The excellent predictive performance of our model was acknowledged by second place in the final ranking among 188 competing teams.
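A rough sketch of the kind of ensemble described above, assembled with scikit-learn: a logistic regression, a gradient-boosted tree model and a small neural network combined by a meta-learner. GradientBoostingClassifier and MLPClassifier stand in for the xgboost and deep-learning sub-models, and the paper's feature splitting and multi-layer decomposition scheme is not reproduced.

```python
# Stacked binary classifier: three heterogeneous sub-models aggregated by a
# logistic-regression meta-learner, on synthetic data for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gbt", GradientBoostingClassifier(random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),   # aggregation layer over sub-model outputs
    cv=3,
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```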

Vu, Thang X., Vu, Trinh Anh, Lei, Lei, Chatzinotas, Symeon, Ottersten, Björn.  2019.  Linear Precoding Design for Cache-aided Full-duplex Networks. 2019 IEEE Wireless Communications and Networking Conference (WCNC). :1–6.
Edge caching has received much attention as a promising technique to overcome the stringent latency and data-hungry challenges in future generation wireless networks. Meanwhile, full-duplex (FD) transmission can potentially double the spectral efficiency by allowing a node to receive and transmit simultaneously. In this paper, we study a cache-aided FD system via delivery time analysis and optimization. In the considered system, an edge node (EN) operates in FD mode and serves users via wireless channels. Two optimization problems are formulated to minimize the largest delivery time based on the two popular linear beamforming designs, zero-forcing and minimum mean square error. Since the formulated problems are non-convex due to the self-interference at the EN, we propose two iterative optimization algorithms based on the inner approximation method. The convergence of the proposed iterative algorithms is analytically guaranteed. Finally, the impacts of caching and the advantages of the FD system over the half-duplex (HD) counterpart are demonstrated via numerical results.
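To make the two precoding designs named in the abstract concrete, the sketch below computes the standard zero-forcing and regularized (MMSE-style) precoders for a multi-user downlink channel in numpy. It does not reproduce the paper's delivery-time formulation, self-interference modelling or iterative inner-approximation algorithm.

```python
# Standard linear precoders for a channel matrix H (users x antennas).
import numpy as np

def zf_precoder(H: np.ndarray) -> np.ndarray:
    """Zero-forcing: right pseudo-inverse W = H^H (H H^H)^{-1}, so H W = I."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

def mmse_precoder(H: np.ndarray, noise_var: float, power: float) -> np.ndarray:
    """Regularized ZF (MMSE-style): W = H^H (H H^H + (K * noise_var / power) I)^{-1}."""
    K = H.shape[0]
    reg = (K * noise_var / power) * np.eye(K)
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + reg)

rng = np.random.default_rng(0)
H = (rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))) / np.sqrt(2)  # 3 users, 4 antennas
print(np.round(H @ zf_precoder(H), 6))   # close to identity: inter-user interference is nulled
```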
Vu, Xuan-Son, Jiang, Lili, Brändström, Anders, Elmroth, Erik.  2017.  Personality-based Knowledge Extraction for Privacy-preserving Data Analysis. Proceedings of the Knowledge Capture Conference. :44:1–44:4.
In this paper, we present a differential privacy preserving approach, which extracts personality-based knowledge to serve privacy-guaranteed data analysis on sensitive personal data. Based on this approach, we further implement an end-to-end privacy guarantee system, KaPPA, to provide researchers with iterative data analysis on sensitive data. The key challenge for differential privacy is determining a reasonable amount of privacy budget to balance privacy preservation and data utility. Most previous work applies a unified privacy budget to all individual data, which leads to insufficient privacy protection for some individuals while over-protecting others. In KaPPA, the proposed personality-based privacy preserving approach automatically calculates a privacy budget for each individual. Our experimental evaluations show a significant trade-off between sufficient privacy protection and data utility.
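As a loose illustration of a per-individual privacy budget, the sketch below perturbs each record's contribution to a query with the Laplace mechanism using its own epsilon. The budget values are placeholders; KaPPA derives them from personality-based knowledge, which is not modelled here.

```python
# Per-record Laplace perturbation with individual epsilons (illustration only).
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release value plus Laplace noise calibrated to sensitivity / epsilon."""
    return value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([23.0, 35.0, 41.0, 29.0, 52.0])
# Hypothetical per-individual budgets: privacy-sensitive users get a smaller
# epsilon and hence more noise on their contribution.
epsilons = np.array([0.1, 0.5, 1.0, 0.2, 0.8])

# Noisy mean assembled from individually perturbed contributions; one record's
# contribution to the mean is bounded by an assumed age range of [0, 100].
sensitivity = 100.0 / len(ages)
noisy_contributions = [laplace_mechanism(a / len(ages), sensitivity, e, rng)
                       for a, e in zip(ages, epsilons)]
print("true mean:", ages.mean(), "noisy mean:", sum(noisy_contributions))
```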
Vuppalapati, C., Ilapakurti, A., Kedari, S., Vuppalapati, R., Vuppalapati, J., Kedari, S..  2020.  The Role of Combinatorial Mathematical Optimization and Heuristics to improve Small Farmers to Veterinarian access and to create a Sustainable Food Future for the World. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :214–221.
The global demand for agriculture and dairy products is rising and is expected to double by 2050. This will challenge agriculture markets in a way we have not seen before: for instance, unprecedented demand to increase the productivity of already shrinking dairy farms; the need of small dairy farms, the economic engines of developing countries, for untethered, perpetual access to veterinarians for animal husbandry; and, finally, an unprecedented need to increase the productivity of veterinarians, who are already understaffed, over-stressed and resource-constrained in meeting current global dairy demands. The lack of innovative solutions to this challenge would be a major obstacle to achieving a sustainable food future and a colossal roadblock to ending economic disparities. The paper proposes a novel data-driven framework, fed by data generated by dairy sensors and by mathematical formulations using solvers, that generates a prioritized daily farm-visit list for each veterinarian, so that the most needy farms are covered in time and small farmers' access to veterinarians, a precious and severely strained resource, is improved.
Vural, Serdar, Minerva, Roberto, Carella, Giuseppe A., Medhat, Ahmed M., Tomasini, Lorenzo, Pizzimenti, Simone, Riemer, Bjoern, Stravato, Umberto.  2018.  Performance Measurements of Network Service Deployment on a Federated and Orchestrated Virtualisation Platform for 5G Experimentation. 2018 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). :1–6.
The EU SoftFIRE project has built an experimentation platform for NFV and SDN experiments, tailored for testing and evaluating 5G network applications and solutions. The platform is a fully orchestrated virtualisation testbed consisting of multiple component testbeds across Europe. Users of the platform can deploy their virtualisation experiments via the platform's Middleware. This paper introduces the SoftFIRE testbed and its Middleware, and presents a set of KPI results for evaluation of experiment deployment performance.
Vyakaranal, S., Kengond, S..  2018.  Performance Analysis of Symmetric Key Cryptographic Algorithms. 2018 International Conference on Communication and Signal Processing (ICCSP). :0411–0415.
Data security, an important aspect of today's internet, is gaining more importance day by day. With the increase in online data exchange, transactions and payments, secure payment and secure data transfers have become an area of concern. Cryptography makes data transmission over the internet secure through various methods and algorithms, and helps prevent unauthorized access to data through authentication, confidentiality, integrity and non-repudiation. Many cryptographic algorithms are available for secure data transmission, but the algorithm used should be robust, efficient, cost-effective, high-performance and easily deployable. Choosing an algorithm that suits the customer's requirements is a task of utmost importance. The proposed work discusses different symmetric key cryptographic algorithms, namely DES, 3DES, AES and Blowfish, by considering encryption time, decryption time, entropy, memory usage, throughput, avalanche effect and energy consumption, using a practical implementation in Java. The practical implementation highlights performance trade-offs in terms of the cost of various parameters rather than mere theoretical concepts. Battery consumption and the avalanche effect of the algorithms are also discussed. The results reveal that AES performs best in the overall performance analysis among the considered algorithms.
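A hedged sketch of the kind of measurement the paper reports: timing AES-CBC encryption and estimating the avalanche effect. The paper's implementation is in Java; this illustration uses Python's cryptography package instead, and the specific key size, mode and data size are assumptions.

```python
# Time AES-256-CBC encryption and estimate the avalanche effect (fraction of
# ciphertext bits that change when a single plaintext bit is flipped).
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)   # AES-256 key, CBC IV
data = os.urandom(1 << 20)                 # 1 MiB of random plaintext (multiple of block size)

def encrypt(plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

t0 = time.perf_counter()
ct = encrypt(data)
elapsed = time.perf_counter() - t0
print(f"encryption time: {elapsed * 1000:.1f} ms, "
      f"throughput: {len(data) / elapsed / 2**20:.1f} MiB/s")

# Flip the first plaintext bit and compare ciphertexts bit by bit.
flipped = bytes([data[0] ^ 0x80]) + data[1:]
ct2 = encrypt(flipped)
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(ct, ct2))
print(f"avalanche effect: {diff_bits / (len(ct) * 8):.3f} of ciphertext bits changed")
```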