Phishing (IEEE) (2014 Year in Review), Part 2

SoS Newsletter- Advanced Book Block



Phishing (IEEE)
(2014 Year in Review)
Part 2


This set of bibliographical references is about phishing.  All works cited here appeared in the IEEE library during 2014.  They are presented in two parts.


Kharraz, A.; Kirda, E.; Robertson, W.; Balzarotti, D.; Francillon, A., "Optical Delusions: A Study of Malicious QR Codes in the Wild," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 192-203, 23-26 June 2014. doi: 10.1109/DSN.2014.103 QR codes, a form of 2D barcode, allow easy interaction between mobile devices and websites or printed material by removing the burden of manually typing a URL or contact information. QR codes are increasingly popular and are likely to be adopted by malware authors and cyber-criminals as well. In fact, while a link can "look" suspicious, malicious and benign QR codes cannot be distinguished by simply looking at them. However, despite public discussions about increasing use of QR codes for malicious purposes, the prevalence of malicious QR codes and the kinds of threats they pose are still unclear. In this paper, we examine attacks on the Internet that rely on QR codes. Using a crawler, we performed a large-scale experiment by analyzing QR codes across 14 million unique web pages over a ten-month period. Our results show that QR code technology is already used by attackers, for example to distribute malware or to lead users to phishing sites. However, the relatively few malicious QR codes we found in our experiments suggest that, on a global scale, the frequency of these attacks is not alarmingly high and users are rarely exposed to the threats distributed via QR codes while surfing the web.
Keywords: Internet; Web sites; computer crime; invasive software; telecommunication security; 2D barcode; Internet; URL; Web crawler; Web sites; contact information; malicious QR code; mobile device; optical delusion; phishing sites; Crawlers; Malware; Mobile communication; Servers; Smart phones; Web pages; Mobile devices; malicious QR codes; malware; phishing (ID#: 15-4459)


Yang Xiao; Chung-Chih Li; Ming Lei; Vrbsky, S.V., "Differentiated Virtual Passwords, Secret Little Functions, and Codebooks for Protecting Users From Password Theft," Systems Journal, IEEE, vol. 8, no. 2, pp. 406-416, June 2014. doi: 10.1109/JSYST.2012.2183755 In this paper, we discuss how to prevent users' passwords from being stolen by adversaries in online environments and automated teller machines. We propose differentiated virtual password mechanisms in which a user has the freedom to choose a virtual password scheme ranging from weak security to strong security, where a virtual password requires a small amount of human computing to secure users' passwords. The tradeoff is that the stronger the scheme, the more complex the scheme may be. Among the schemes, we have a default method (i.e., traditional password scheme), system recommended functions, user-specified functions, user-specified programs, and so on. A function/program is used to implement the virtual password concept with a tradeoff of security for complexity requiring a small amount of human computing. We further propose several functions to serve as system recommended functions and provide a security analysis. For user-specified functions, we adopt secret little functions in which security is enhanced by hiding secret functions/algorithms.
Keywords: security of data; automated teller machines; codebooks; differentiated virtual password mechanism; online environments; password theft protection; secret algorithms; secret little functions; security analysis; strong security; user passwords; user-specified functions; virtual password scheme; weak security; Authentication; Electronic mail; Encryption; Humans; Optimized production technology; Servers; Codebooks; differentiated virtual passwords; key logger; phishing; secret little functions; shoulder-surfing (ID#: 15-4460)
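As a toy illustration of the "secret little function" idea in the abstract above, the following hypothetical user-chosen function shifts the digits of a fixed password by a server-issued random value, so the string typed at login differs every time; the paper's actual recommended functions differ.

```python
def secret_little_function(fixed_password: str, random_salt: int) -> str:
    """Shift each digit of the fixed password by a server-issued salt,
    so the value typed at login never repeats (an illustrative choice)."""
    out = []
    for ch in fixed_password:
        out.append(str((int(ch) + random_salt) % 10) if ch.isdigit() else ch)
    return "".join(out)

salt = 7                                           # fresh random value per login
typed = secret_little_function("a1b2c3", salt)     # what the user computes
expected = secret_little_function("a1b2c3", salt)  # what the server recomputes
assert typed == expected
print(typed)
```

A keylogger that captures `typed` learns nothing directly reusable, since the next login uses a different salt.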


Gupta, S.; Pilli, E.S.; Mishra, P.; Pundir, S.; Joshi, R.C., "Forensic Analysis Of E-Mail Address Spoofing," Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, pp. 898-904, 25-26 Sept. 2014. doi: 10.1109/CONFLUENCE.2014.6949302 E-mail is the most widely used application on the internet. However, the E-mail application is not a totally reliable and safe communication medium, as loopholes in its protocols enable attackers to misuse it for sending spoofed E-mails. E-mail sender spoofing is a major problem of the E-mail system. It is a malicious activity in which the source is modified and the E-mail is presented as if it comes from the intended sender, whereas the original sender is an attacker. This paper presents the behavior of different E-mail client applications when receiving sender-spoofed E-mails. We propose an investigation algorithm for sender spoofing which checks for spoofed addresses in E-mail by performing extensive analysis of E-mail header fields. We take four fields into consideration: Received-SPF, DKIM, DKIM-Signature, and DMARC. Our algorithm checks for valid values of these fields; any invalid value indicates an unauthorized E-mail. We created a dataset of spoofed and legitimate E-mails in our lab and performed the analysis on E-mail headers for invalid values. Our proposed algorithm is able to detect address-spoofed E-mails.
Keywords: Internet; digital forensics; protocols; unsolicited e-mail; DKIM-Signature; DMARC; Internet; e-mail address spoofing; forensic analysis; loopholes; protocols; reliable communication medium; safe communication medium; Algorithm design and analysis; Authentication; Electronic mail; Forensics; Postal services; Receivers; Servers; E-mail Forensic; E-mail Investigation; E-mail Sender spoofing; E-mail Spoofing; Phishing (ID#: 15-4461)
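The header-field check described above could be sketched as follows; the pass/fail vocabulary follows common practice for Received-SPF and Authentication-Results headers, and the rule set is a simplified stand-in for the paper's algorithm.

```python
from email import message_from_string

def looks_spoofed(raw_email: str) -> bool:
    """Flag a message if SPF, DKIM, or DMARC results carry a failing value."""
    msg = message_from_string(raw_email)
    spf = (msg.get("Received-SPF") or "").lower()
    auth = (msg.get("Authentication-Results") or "").lower()
    if spf.startswith("fail") or spf.startswith("softfail"):
        return True
    # DKIM and DMARC verdicts are commonly reported in Authentication-Results.
    return any(mech in auth for mech in ("dkim=fail", "dmarc=fail"))

spoofed = "Received-SPF: fail (sender IP not authorized)\nFrom: a@b.com\n\nhi"
legit = ("Received-SPF: pass\n"
         "Authentication-Results: mx.example.com; dkim=pass; dmarc=pass\n"
         "From: a@b.com\n\nhi")
print(looks_spoofed(spoofed), looks_spoofed(legit))
```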


Bicakci, K.; Unal, D.; Ascioglu, N.; Adalier, O., "Mobile Authentication Secure against Man-in-the-Middle Attacks," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on, pp. 273-276, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.43 Current mobile authentication solutions put a cognitive burden on users to detect and avoid Man-In-The-Middle attacks. In this paper, we present a mobile authentication protocol named Mobile-ID which prevents Man-In-The-Middle attacks without relying on a human in the loop. With Mobile-ID, the message signed by the secure element on the mobile device incorporates the context information of the connected service provider. Hence, upon receiving the signed message the Mobile-ID server could easily identify the existence of an on-going attack and notify the genuine service provider.
Keywords: message authentication; mobile communication; mobile computing; telecommunication security; Mobile-ID; man-in-the-middle attack; mobile authentication protocol; Authentication; Context; Mobile communication; Mobile handsets; Protocols; Servers; Man-In-The-Middle attack; authentication; mobile signature; phishing; secure element; security protocol (ID#: 15-4462)
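A rough sketch of the idea: the signed message binds in the identity of the provider the user is actually connected to, so a challenge relayed through a man-in-the-middle no longer verifies. HMAC stands in for the secure element's signature, and all names and values are hypothetical.

```python
import hmac, hashlib

DEVICE_KEY = b"secret-key-in-secure-element"  # never leaves the device

def sign_challenge(challenge: bytes, connected_provider: str) -> bytes:
    # The identity of the provider the device is actually talking to
    # becomes part of the signed payload.
    payload = challenge + b"|" + connected_provider.encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def server_verifies(challenge: bytes, genuine_provider: str, sig: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY,
                        challenge + b"|" + genuine_provider.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

# A phishing proxy relays the bank's challenge, but the device signs the
# proxy's identity, so verification against the genuine provider fails.
ch = b"nonce-123"
sig = sign_challenge(ch, "evil-proxy.example")
print(server_verifies(ch, "bank.example", sig))   # mismatch detected
print(server_verifies(ch, "bank.example", sign_challenge(ch, "bank.example")))
```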


Grzonkowski, S.; Mosquera, A.; Aouad, L.; Morss, D., "Smartphone Security: An Overview Of Emerging Threats," Consumer Electronics Magazine, IEEE, vol. 3, no. 4, pp. 40-44, Oct. 2014. doi: 10.1109/MCE.2014.2340211 The mobile threat landscape has undergone rapid growth as smartphones have increased in popularity. The first generation of mobile threats saw attackers relying on various scams delivered through SMS. As the technology progressed and Web browsers, e-mail clients, and custom applications became standard on smartphones, attackers started exploiting new possibilities beyond traditional e-mail spam and phishing attacks. The landscape continues to evolve with mobile bitcoin miners, botnets, and ransomware.
Keywords: computer crime; invasive software; online front-ends; smart phones; telecommunication security; unsolicited e-mail; SMS; Web browsers; attackers; botnets; custom applications; e-mail clients; e-mail spam; emerging threats; mobile bitcoin miners; mobile threat landscape; phishing attacks; ransomware; scams; smartphone security; Computer security; Malware; Mobile communication; Network security; Privacy; Smart phones; Software development (ID#: 15-4463)


Bhat, S.Y.; Abulaish, M.; Mirza, A.A., "Spammer Classification Using Ensemble Methods over Structural Social Network Features," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol. 2, pp. 454-458, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.133 With their overwhelming growth and popularity, online social networks are also facing the issue of spamming, which mainly leads to uncontrolled dissemination of malware/viruses, promotional ads, phishing, and scams. It also consumes large amounts of network bandwidth, leading to less revenue and significant financial losses to organizations. In literature, various machine learning techniques have been extensively used to detect spam and spammers in online social networks. Most commonly, individual classifiers are learnt over content-based features extracted from users' interactions and profiles to label them as spam/spammers or legitimate. Recently, new network structure-based features have also been proposed for the spammer detection task, but their significance using ensemble learning methods has not been extensively evaluated yet. In this paper, we evaluate the performance of some ensemble learning methods using community-based structural features extracted from an interaction network for the task of spammer detection in online social networks.
Keywords: computer crime; computer viruses; feature extraction; invasive software; learning (artificial intelligence); pattern classification; social networking (online); community-based structural feature extraction; content-based feature extraction; ensemble learning methods; interaction network; machine learning techniques; malware; network structure-based features; online social networks; phishing; promotional ads; scams; spammer classification; spammer detection; spamming; structural social network features; viruses; Bagging; Boosting; Communities; Conferences; Feature extraction; Social network services; Stacking; Classifier ensemble; Machine learning; Social network security; Spam detection (ID#: 15-4464)


Moura, G.; Sadre, R.; Pras, A., "Bad Neighborhoods On The Internet," Communications Magazine, IEEE, vol. 52, no. 7, pp. 132-139, July 2014. doi: 10.1109/MCOM.2014.6852094 Analogous to the real world, sources of malicious activities on the Internet tend to be concentrated in certain networks instead of being evenly distributed. In this article we formally define and frame such areas as Internet Bad Neighborhoods. By extending the reputation of malicious IP addresses to their neighbors, the bad neighborhood approach ultimately enables attack prediction from unforeseen addresses. We investigate spam and phishing bad neighborhoods, and show how their underlying business models, counter-intuitively, influence the location of the neighborhoods (both geographically and in the IP addressing space). We also show how bad neighborhoods are highly concentrated at a few Internet Service Providers and discuss how our findings can be employed to improve current network and spam filters and incentivize botnet mitigation initiatives.
Keywords: Internet; computer network security; information filters; invasive software; unsolicited e-mail; Internet bad neighborhoods; attack prediction; botnet mitigation initiatives; malicious IP addresses; malicious activities; phishing; spam filters; Business; Computer security; Databases; IP networks; Internet; Unsolicited electronic mail (ID#: 15-4465)
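The neighborhood aggregation can be illustrated with a toy example that lifts per-IP reputation to the surrounding /24 prefix; the article's actual scoring is more elaborate.

```python
from collections import Counter
from ipaddress import ip_network

def bad_neighborhoods(malicious_ips, min_hosts=2):
    """Count offenders per /24 prefix; prefixes with several distinct
    malicious hosts form a 'bad neighborhood'."""
    counts = Counter(str(ip_network(ip + "/24", strict=False))
                     for ip in malicious_ips)
    return {prefix: n for prefix, n in counts.items() if n >= min_hosts}

# Hypothetical blocklist entries (documentation-range addresses).
ips = ["203.0.113.5", "203.0.113.77", "203.0.113.200", "198.51.100.9"]
print(bad_neighborhoods(ips))
```

A previously unseen address inside a flagged prefix can then inherit the prefix's bad reputation, which is the attack-prediction idea in the abstract.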


Armknecht, F.; Hauptmann, M.; Roos, S.; Strufe, T., "An Additional Protection Layer for Confidential OSN Posts," Communications (ICC), 2014 IEEE International Conference on, pp. 3746-3752, 10-14 June 2014. doi: 10.1109/ICC.2014.6883904 The design of secure and usable access schemes to personal data represents a major challenge of online social networks (OSNs). The state of the art requires prior interaction to grant access. Sharing with users who are not subscribed, or who have not previously been accepted as contacts, is only possible via public posts, which can easily be abused by automatic harvesting for user profiling, targeted spear-phishing, or spamming. Moreover, users are restricted to the access rules defined by the provider, which may be overly restrictive, cumbersome to define, or insufficiently fine-grained. We suggest a complementary approach that can be easily deployed in addition to existing access control schemes, does not require any interaction, and includes even public, unsubscribed users. It exploits the fact that different social circles of a user share different experiences, and hence encrypts arbitrary posts under that shared knowledge. Assembling only well-established cryptographic primitives, we prove that the security of our scheme is determined by the entropy of the required knowledge. We consequently analyze the efficiency of an informed dictionary attack and assess the entropy to be on par with common passwords. A fully functional implementation is used for performance evaluations, and is available for download on the Web.
Keywords: authorisation; cryptography; social networking (online); Web; access control schemes; access rules; confidential OSN posts; cryptographic primitives; dictionary attack; online social networks; protection layer; public posts; spamming; spear-phishing; user profiling; Access control; Ciphers; Dictionaries; Entropy; Social network services; Secret Sharing; Social Network Security; Spam Protection (ID#: 15-4466)
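A minimal sketch of the underlying idea: derive a key from circle-shared knowledge with PBKDF2 and use it to encrypt the post, so no prior interaction or key exchange is needed. The SHA-256 counter-mode keystream below is for illustration only; it is not the paper's construction and not a secure cipher.

```python
import hashlib

def key_from_shared_knowledge(answers, salt: bytes) -> bytes:
    # Normalize the shared answers so trivial case/whitespace
    # variations do not change the derived key.
    secret = "|".join(a.strip().lower() for a in answers).encode()
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream: SHA-256 in counter mode, XORed with the data.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

salt = b"post-1234"                      # public, attached to the post
key = key_from_shared_knowledge(["Prague", "2009"], salt)
ct = xor_stream(key, b"secret post for close friends")
pt = xor_stream(key, ct)                 # anyone with the knowledge decrypts
print(pt)
```

As the abstract notes, the security of such a scheme rests entirely on the entropy of the shared knowledge, which the authors assess to be on par with common passwords.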


Janbeglou, M.; Naderi, H.; Brownlee, N., "Effectiveness of DNS-Based Security Approaches in Large-Scale Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 524-529, 13-16 May 2014. doi: 10.1109/WAINA.2014.87 The Domain Name System (DNS) is widely seen as a vital protocol of the modern Internet. For example, popular services like load balancers and Content Delivery Networks heavily rely on DNS. Because of its important role, DNS is also a desirable target for malicious activities such as spamming, phishing, and botnets. To protect networks against these attacks, a number of DNS-based security approaches have been proposed. The key insight of our study is to measure the effectiveness of security approaches that rely on DNS in large-scale networks. For this purpose, we answer the following questions: How often is DNS used? Are most of the Internet flows established after contacting DNS? In this study, we collected data from the University of Auckland campus network with more than 33,000 Internet users and processed it to find out how DNS is being used. Moreover, we studied the flows that were established with and without contacting DNS. Our results show that less than 5 percent of the observed flows use DNS. Therefore, we argue that those security approaches that solely depend on DNS are not sufficient to protect large-scale networks.
Keywords: Internet; computer network security; protocols; DNS-based security approaches; Internet protocol; botnets; content delivery networks; domain name system; large-scale networks; load balancers; malicious activities; phishing; spamming; Databases; Educational institutions; Electronic mail; IP networks; Internet; Ports (Computers); Servers; DNS; large-scale network; network measurement; passive monitoring; statistical analysis (ID#: 15-4467)


Algarni, A.; Yue Xu; Chan, T., "Social Engineering in Social Networking Sites: The Art of Impersonation," Services Computing (SCC), 2014 IEEE International Conference on, pp. 797-804, 27 June-2 July 2014. doi: 10.1109/SCC.2014.108 Social networking sites (SNSs), with their large number of users and large information base, seem to be the perfect breeding ground for exploiting the vulnerabilities of people, who are considered the weakest link in security. Deceiving, persuading, or influencing people to provide information or to perform an action that will benefit the attacker is known as "social engineering." Fraudulent and deceptive people use social engineering traps and tactics through SNSs to trick users into obeying them, accepting threats, and falling victim to various crimes such as phishing, sexual abuse, financial abuse, identity theft, and physical crime. Although organizations, researchers, and practitioners recognize the serious risks of social engineering, there is a severe lack of understanding and control of such threats. This may be partly due to the complexity of human behaviors in approaching, accepting, and failing to recognize social engineering tricks. This research aims to investigate the impact of source characteristics on users' susceptibility to social engineering victimization in SNSs, particularly Facebook. Using grounded theory method, we develop a model that explains what and how source characteristics influence Facebook users to judge the attacker as credible.
Keywords: computer crime; fraud; social aspects of automation; social networking (online); Facebook; SNS; attacker; deceptive people; financial abuse; fraudulent people; grounded theory method; human behaviors complexity; identity theft; impersonation; large information base; phishing; physical crime; security; sexual abuse; social engineering traps; social engineering victimization; social engineering tactics; social networking sites; threats; user susceptibility; Encoding; Facebook; Interviews; Organizations; Receivers; Security; impersonation; information security management; social engineering; social networking sites; source credibility; trust management (ID#: 15-4468)


Gupta, N.; Aggarwal, A.; Kumaraguru, P., "Deep Dive Into Short URL Based E-Crime Detection," Electronic Crime Research (eCrime), 2014 APWG Symposium on, pp. 14-24, 23-25 Sept. 2014. doi: 10.1109/ECRIME.2014.6963161 Existence of spam URLs over emails and Online Social Media (OSM) has become a massive e-crime. To counter the dissemination of long complex URLs in emails and the character limit imposed on various OSM (like Twitter), the concept of URL shortening has gained a lot of traction. URL shorteners take as input a long URL and output a short URL with the same landing page (as in the long URL) in return. With their immense popularity over time, URL shorteners have become a prime target for attackers, giving them an advantage to conceal malicious content. Bitly, a leading service among all shortening services, is being exploited heavily to carry out phishing attacks, work-from-home scams, pornographic content propagation, etc. This imposes additional performance pressure on Bitly and other URL shorteners to be able to detect and take a timely action against the illegitimate content. In this study, we analyzed a dataset of 763,160 short URLs marked suspicious by Bitly in the month of October 2013. Our results reveal that Bitly is not using its claimed spam detection services very effectively. We also show how a suspicious Bitly account goes unnoticed despite prolonged, recurrent illegitimate activity. Bitly displays a warning page on identification of suspicious links, but we observed this approach to be weak in controlling the overall propagation of spam. We also identified some short URL based features and coupled them with two domain specific features to classify a Bitly URL as malicious or benign and achieved an accuracy of 86.41%. The feature set identified can be generalized to other URL shortening services as well.
To the best of our knowledge, this is the first large-scale study to highlight the issues with the implementation of Bitly's spam detection policies and to propose suitable countermeasures.
Keywords: computer crime; social networking (online); unsolicited e-mail; Twitter; URL based e-crime detection; URL shortening; emails; online social media; phishing attack; pornographic content propagation; spam URL; spam detection; work-from-home scam; Accuracy; Communities; Data collection; Facebook; Real-time systems; Twitter; Uniform resource locators (ID#: 15-4469)
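Features of the kind the study mentions ("short URL based features") might look like the following hypothetical lexical extractor over the expanded URL; the paper's actual feature set and classifier are not reproduced here.

```python
import math
from collections import Counter
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Lightweight lexical features often used to score URLs as suspicious."""
    host = urlparse(url).netloc
    counts = Counter(url)
    entropy = -sum((c / len(url)) * math.log2(c / len(url))
                   for c in counts.values())
    return {
        "length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),  # raw-IP hosts are a red flag
        "char_entropy": round(entropy, 3),
    }

f = url_features("http://192.0.2.7/login/secure/update.php?id=9912")
print(f["has_ip_host"], f["num_digits"])
```

Such feature vectors would then be fed to an ordinary classifier alongside the domain-specific features the paper describes.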


Enache, A.-C.; Sgarciu, V., "Spam Host Classification Using PSO-SVM," Automation, Quality and Testing, Robotics, 2014 IEEE International Conference on, pp. 1-5, 22-24 May 2014. doi: 10.1109/AQTR.2014.6857840 Search engines have become a de facto place to start information acquisition on the Internet. Sabotaging the quality of the results retrieved by search engines can lead users to doubt the search engine provider. Spam websites can serve as a means of phishing. This paper shows a spam host detection approach that uses support vector machines (SVM) for classification. We create a parallel version of standard Particle Swarm Optimization (PSO) to determine the free parameters of the SVM classifier and apply our proposed model to a content web spamming dataset, WEBSPAM-UK2011. Our implementation of the parallel PSO is constructed on a pool of threads, and each thread executes tasks associated with a particle from the swarm. Experiments showed that our proposed model can achieve a higher accuracy than regular SVM and outperforms other classifiers (C4.5, Naive Bayes). Furthermore, the parallel version of standard Particle Swarm Optimization (PSO) can efficiently select parameters for SVM.
Keywords: Internet; Web sites; parallel algorithms; particle swarm optimisation; search engines; security of data; support vector machines; unsolicited e-mail; Internet; PSO; SVM; Web spamming dataset; particle swarm optimization; phishing; search engines; spam Websites; spam host classification; spam host detection; support vector machines; Accuracy; Kernel; Particle swarm optimization; Sensitivity; Standards; Support vector machines; Unsolicited electronic mail; Particle Swarm Optimization; Support Vector Machine; parallelism; spam host (ID#: 15-4470)
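The parameter search can be sketched with a plain, single-threaded PSO; a stand-in quadratic objective replaces the cross-validated SVM accuracy that the paper would minimize, and all swarm constants are illustrative.

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over a box; returns the best position found."""
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

random.seed(0)
# Stand-in for cross-validated SVM error over (C, gamma); minimum at (3, 0.5).
best = pso(lambda p: (p[0] - 3) ** 2 + (p[1] - 0.5) ** 2,
           bounds=[(0.01, 10), (0.001, 2)])
print(best)
```

In the paper's setting, each particle's objective evaluation (an SVM training run) is what gets dispatched to a thread pool.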


Enache, A.-C.; Patriciu, V.V., "Spam Host Classification Using Swarm Intelligence," Communications (COMM), 2014 10th International Conference on, pp. 1-4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866669 Web Spam, or Spamdexing, is a form of Search Engine Optimization (SEO) spamming that hinders the efficiency of search engines. These types of exploits use unethical methods in order to place a web page into the first rank. Sabotaging the quality of the results retrieved by search engines can lead users to mistrust the search engine provider. Moreover, spam websites can be a starting point for phishing or malware attacks. Over the last decade Web Spamming has become an important problem. This paper shows a spam host detection approach that uses swarm intelligence. We test our model on two datasets (WEBSPAM-UK2011 and WEBSPAM-UK2007) and show that it can obtain a good accuracy. Moreover, we compared our approach with other popular classifiers (C4.5, SVM and Logistic Regression) and empirically demonstrated that it can outperform them in some cases.
Keywords: Internet; optimisation; search engines; swarm intelligence; unsolicited e-mail; WEBSPAM-UK2007; WEBSPAM-UK2011; Web spamming; search engine optimization spamming; spam host classification; spam host detection; spamdexing; swarm intelligence; Accuracy; Data mining; Feature extraction; Particle swarm optimization; Support vector machines; Training; Unsolicited electronic mail; ant colony classification algorithm; host spam; swarm intelligence (ID#: 15-4471)


Drew, Jake; Moore, Tyler, "Automatic Identification of Replicated Criminal Websites Using Combined Clustering," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 116-123, 17-18 May 2014. doi: 10.1109/SPW.2014.26 To be successful, cyber criminals must figure out how to scale their scams. They duplicate content on new websites, often staying one step ahead of defenders that shut down past schemes. For some scams, such as phishing and counterfeit-goods shops, the duplicated content remains nearly identical. In others, such as advanced-fee fraud and online Ponzi schemes, the criminal must alter content so that it appears different in order to evade detection by victims and law enforcement. Nevertheless, similarities often remain, in terms of the website structure or content, since making truly unique copies does not scale well. In this paper, we present a novel combined clustering method that links together replicated scam websites, even when the criminal has taken steps to hide connections. We evaluate its performance against two collected datasets of scam websites: fake-escrow services and high-yield investment programs (HYIPs). We find that our method more accurately groups similar websites together than does existing general-purpose consensus clustering methods.
Keywords: Clustering algorithms; Clustering methods; HTML; Indexes; Investment; Manuals; Sociology (ID#: 15-4472)


Khan, M.S.; Ferens, K.; Kinsner, W., "A Chaotic Measure For Cognitive Machine Classification Of Distributed Denial Of Service Attacks," Cognitive Informatics & Cognitive Computing (ICCI*CC), 2014 IEEE 13th International Conference on, pp. 100-108, 18-20 Aug. 2014. doi: 10.1109/ICCI-CC.2014.6921448 Today's evolving cyber security threats demand new, modern, and cognitive computing approaches to network security systems. In the early years of the Internet, a simple packet inspection firewall was adequate to stop the then-contemporary attacks, such as Denial of Service (DoS), ports scans, and phishing. Since then, DoS has evolved to include Distributed Denial of Service (DDoS) attacks, especially against the Domain Name Service (DNS). DNS based DDoS amplification attacks cannot be stopped easily by traditional signature based detection mechanisms because the attack packets contain authentic data, and signature based detection systems look for specific attack-byte patterns. This paper proposes a chaos based complexity measure and a cognitive machine classification algorithm to detect DNS DDoS amplification attacks. In particular, this paper computes the Lyapunov exponent to measure the complexity of a flow of packets, and classifies the traffic as either normal or anomalous, based on the magnitude of the computed exponent. Preliminary results show the proposed chaotic measure achieved a detection (classification) accuracy of about 66%, which is greater than that reported in the literature. This approach is capable of not only detecting offline threats, but has the potential of being applied over live traffic flows using DNS filters.
Keywords: Internet; firewalls; pattern classification; DNS DDoS amplification attacks; DNS filters; Internet; attack-byte patterns; chaos based complexity measure; classification accuracy; cognitive computing approach; cognitive machine classification algorithm; cyber security threats; distributed denial-of-service attacks; domain name service; network security systems; signature based detection mechanisms; simple packet inspection firewall; Chaos; Classification algorithms; Computer crime; Internet; Mathematical model; Nonlinear dynamical systems; Time series analysis; Anomaly Detection; Chaos; Cognitive Machine Learning; Cyber threats; DDoS Amplification; DNS; Data traffic; Fractal; Internet; Lyapunov exponent (ID#: 15-4473)
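The complexity measure can be illustrated on a known chaotic system: estimating the largest Lyapunov exponent of the logistic map (whose exponent at r = 4 is ln 2) and thresholding it, as the paper does for packet-flow time series; the threshold here is hypothetical.

```python
import math

def lyapunov_logistic(r: float, x0: float = 0.3, n: int = 100_000) -> float:
    """Average log-derivative along the orbit of f(x) = r*x*(1-x)."""
    x, total = x0, 0.0
    for _ in range(n):
        # |f'(x)| = |r * (1 - 2x)| for the logistic map.
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

def classify(exponent: float, threshold: float = 0.0) -> str:
    # Positive exponent => sensitive dependence on initial conditions
    # => flag the flow as anomalous (illustrative decision rule).
    return "anomalous" if exponent > threshold else "normal"

lam = lyapunov_logistic(4.0)
print(round(lam, 3), classify(lam))   # estimate should be close to ln 2
```

For real traffic, the same thresholding would be applied to an exponent estimated from an observed packet-flow series rather than from a synthetic map.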


Dainotti, A.; King, A.; Claffy, K.; Papale, F.; Pescape, A., "Analysis of a “/0” Stealth Scan from a Botnet," Networking, IEEE/ACM Transactions on, vol. 23, no. 2, pp. 341-354, April 2015. doi: 10.1109/TNET.2013.2297678 Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
Keywords: Animation; Geology; IP networks; Internet; Ports (Computers); Servers; Telescopes; Botnet; Internet background radiation; Internet telephony; Network Telescope; VoIP; communication system security; darknet; network probing; scanning (ID#: 15-4474)


Li, Zhou; Alrwais, Sumayah; Wang, XiaoFeng; Alowaisheq, Eihal, "Hunting the Red Fox Online: Understanding and Detection of Mass Redirect-Script Injections," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 3-18, 18-21 May 2014. doi: 10.1109/SP.2014.8 Compromised websites that redirect web traffic to malicious hosts play a critical role in organized web crimes, serving as doorways to all kinds of malicious web activities (e.g., drive-by downloads, phishing etc.). They are also among the most elusive components of a malicious web infrastructure and extremely difficult to hunt down, due to the simplicity of redirect operations, which also happen on legitimate sites, and extensive use of cloaking techniques. Making the detection even more challenging is the recent trend of injecting redirect scripts into JavaScript (JS) files, as those files are not indexed by search engines and their infections are therefore more difficult to catch. In our research, we look at the problem from a unique angle: the adversary's strategy and constraints for deploying redirect scripts quickly and stealthily. Specifically, we found that such scripts are often blindly injected into both JS and HTML files for a rapid deployment, changes to the infected JS files are often made minimum to evade detection and also many JS files are actually JS libraries (JS-libs) whose uninfected versions are publicly available. Based upon those observations, we developed JsRED, a new technique for the automatic detection of unknown redirect-script injections. Our approach analyzes the difference between a suspicious JS-lib file and its clean counterpart to identify malicious redirect scripts and further searches for similar scripts in other JS and HTML files.
This simple, lightweight approach is found to work effectively against redirect injection campaigns: our evaluation shows that JsRED captured most of the compromised websites with almost no false positives, significantly outperforming a commercial detection service in terms of finding unknown JS infections. Based upon the compromised websites reported by JsRED, we further conducted a measurement study that reveals interesting features of redirect payloads and a new Peer-to-Peer network the adversary constructed to evade detection.
Keywords: Browsers; Feeds; HTML; Libraries; Payloads; Security; Servers; Compromised Web Sites; Differential Analysis; Web Redirection (ID#: 15-4475)
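The differential analysis can be sketched with a standard-library diff: compare a suspicious JS file against its clean counterpart and keep added lines that look like redirect logic. The snippets and the marker list below are fabricated for illustration; JsRED's actual analysis is more involved.

```python
import difflib

REDIRECT_MARKERS = ("window.location", "document.location", "top.location")

def injected_redirects(clean_js: str, suspicious_js: str) -> list:
    """Return lines added relative to the clean file that contain redirect logic."""
    diff = difflib.unified_diff(clean_js.splitlines(),
                                suspicious_js.splitlines(), lineterm="")
    added = [line[1:] for line in diff
             if line.startswith("+") and not line.startswith("+++")]
    return [l for l in added if any(m in l for m in REDIRECT_MARKERS)]

clean = "function init(){\n  render();\n}"
infected = ("function init(){\n"
            "  window.location='http://evil.example';\n"
            "  render();\n}")
print(injected_redirects(clean, infected))
```

The scripts isolated this way could then serve as signatures for scanning other JS and HTML files, mirroring the paper's second step.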


Manek, Asha S; Sumithra, V; Shenoy, P Deepa; Mohan, M. Chandra; Venugopal, K R; Patnaik, L M, "DeMalFier: Detection of Malicious Web Pages Using An Effective Classifier," Data Science & Engineering (ICDSE), 2014 International Conference on, pp. 83-88, 26-28 Aug. 2014. doi: 10.1109/ICDSE.2014.6974616 The web has become an indispensable global platform that glues together daily communication, sharing, trading, collaboration and service delivery. Web users often store and manage critical information that attracts cybercriminals who misuse the web and the internet to exploit vulnerabilities for illegitimate benefits. Malicious web pages are an increasingly threatening issue on the internet because of their prevalence and their capability to influence users. Detecting and analyzing them is very costly because of their qualities and intricacies. The complexity of attacks is increasing day by day because attackers are using blended approaches drawn from various existing attacking techniques. In this paper, a model DeMalFier (Detection of Malicious Web Pages using an Effective ClassiFier) has been developed to apply supervised learning approaches to identify malicious web pages relevant to malware distribution, phishing, drive-by-download and injection by extracting the content of web pages, URL-based features and features based on host information. Experimental evaluation of the DeMalFier model achieved 99.9% accuracy, recommending the impact of our approach for real-life deployment.
Keywords: Accuracy; Crawlers; Data models; Feature extraction; HTML; Uniform resource locators; Web pages; DeMalFier; Malicious Web Pages; Pre-Processing Techniques; Supervised Learning; Web Security (ID#: 15-4476)
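DeMalFier's pipeline extracts page-content, URL-based and host-based features before training a classifier. The snippet below is an illustrative sketch (not the authors' code) of the URL-based portion: each URL is turned into a small feature vector of the kind a supervised learner could consume. The specific features chosen here are common in the literature, not necessarily those used in the paper.

```python
import re
from urllib.parse import urlparse

# Tokens often found in phishing URLs (an assumed, illustrative list).
SUSPICIOUS_TOKENS = ("login", "verify", "update", "secure", "account")

def url_features(url: str) -> dict:
    """Extract simple URL-based features for a malicious-page classifier."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "num_dots": host.count("."),
        # Raw IP addresses in place of a hostname are a classic phishing cue.
        "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "num_suspicious_tokens": sum(t in url.lower() for t in SUSPICIOUS_TOKENS),
        "uses_https": parsed.scheme == "https",
    }

feats = url_features("http://192.168.0.1/secure-login/verify.php")
print(feats)
```

Vectors like this, combined with content and host features, would then be fed to any standard supervised classifier.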


Lee, J.; Bauer, L.; Mazurek, M., "Studying the Effectiveness of Security Images in Internet Banking," Internet Computing, IEEE, vol. 19, no. 1, pp. 54,62, Jan.-Feb. 2015.  doi: 10.1109/MIC.2014.108 Security images are often used as part of the login process on internet banking websites, under the theory that they can help foil phishing attacks. Previous studies, however, have yielded inconsistent results about users' ability to notice that a security image is missing. This paper describes an online study of 482 users that attempts to clarify to what extent users notice and react to the absence of security images. The majority of our participants (73 percent) entered their password when we removed the security image and caption. We found that changing the appearance and other characteristics of the security image generally had little effect on whether users logged in when the security image was absent. Additionally, we subjected the passwords created by participants to a password-cracking algorithm and found that participants with stronger passwords were less likely (64.7 percent vs. 80.1 percent) to enter their passwords when the security image was missing.
Keywords: Banking; Electronic mail; Internet; Maintenance engineering; Online banking; Security; Visualization (ID#: 15-4477)


Singh, Surbhi; Sharma, Sangeeta, "Improving Security Mechanism To Access HDFS Data By Mobile Consumers Using Middleware-Layer Framework," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp.1,7, 11-13 July 2014. doi: 10.1109/ICCCNT.2014.6963051 Revolution in the field of technology has led to the development of cloud computing, which delivers on-demand, easy access to large shared pools of online stored data, software and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and lack of confidentiality and integrity. This research work therefore addresses the core problem of providing absolute security to mobile consumers of the public cloud, improving user mobility by securely accessing data stored on the public cloud using tokens, without depending upon a third party to generate them. This paper presents an approach that simplifies the process of authenticating and authorizing mobile users by implementing a middleware-centric framework called the MiLAMob model with the large online data storage system HDFS. It allows consumers to access data from HDFS via mobiles or through social networking sites, e.g., Facebook, Gmail, Yahoo, etc., using the OAuth 2.0 protocol. For authentication, tokens are generated using a one-time password generation technique and then encrypted using the AES method. By implementing flexible user-based policies and standards, this model improves the authorization process.
Keywords: Authentication; Cloud computing; Data models; Mobile communication; Permission; Social network services; Authentication; Authorization; Computing; HDFS; MiLAMob; OAuth 2.0; Security; Token (ID#: 15-4478)
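The abstract's token flow (generate a one-time password, then AES-encrypt it) can be sketched as follows. The paper does not name the OTP algorithm, so HOTP (RFC 4226) is used here as a plausible stand-in; the AES encryption step would require a third-party library such as pycryptodome and is omitted from this self-contained sketch.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password per RFC 4226 (assumed OTP scheme)."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224".
print(hotp(b"12345678901234567890", 0))  # → 755224
```

In a MiLAMob-style flow, the middleware would generate such a token per login attempt, AES-encrypt it, and hand it to the mobile client for presentation to HDFS alongside the OAuth 2.0 identity assertion.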


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.