Attribution 2015

SoS Newsletter - Advanced Book Block







Attribution of the source of an attack or the author of malware is a continuing problem in computer forensics. For the Science of Security community, it is an important issue related to human behavior, metrics, and composability. The work cited here was presented in 2015.

Novino Nirmal. A, Kyung-Ah Sohn, and T. S. Chung, “A Graph Model Based Author Attribution Technique for Single-Class E-Mail Classification,” Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, Las Vegas, NV, 2015, pp. 191-196. doi: 10.1109/ICIS.2015.7166592
Abstract: Electronic mails have increasingly replaced all written modes of communications for important correspondences including personal and business transactions. An e-mail is given equal significance as a signed document. Hence email impersonation through compromised accounts has become a major threat. In this paper, we have proposed an email style acquisition and classification model for authorship attribution that serves as an effective tool to prevent and detect email impersonation. The proposed model gains knowledge of the author’s email style by being trained only with the sample email texts of the author and then identifies if a given email text is a legitimate email of the author or not. Extracting the significant features that represent an author’s style from the available concise emails is a big challenge in email authorship attribution. We have proposed to use a graph-based model to precisely extract the unique feature set of the author. We have used one-class SVM classifier to deal with the single-class sample data that consists of only true positive samples. Two classification models have been designed and compared. The first one is a probability model which is based on the probability of occurrence of a feature in the specific email. The second technique is based on inclusive compound probability of a feature to appear in a sentence of an email. Both the models have been evaluated against the public Enron dataset.
Keywords: electronic mail; graph theory; support vector machines; author attribution technique; business transactions; classification model; electronic mails; email style acquisition; graph model; inclusive compound probability; one-class SVM classifier; personal transactions; public Enron dataset; single-class e-mail classification; single-class sample data; Compounds; Context; Electronic mail; Sensitivity; Support vector machines; Testing; Training; Author Attribution; Graph Model; Information Retrieval; One Class SVM; Stylometry; Text Classification (ID#: 16-10894)
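As a rough illustration of the first classification model described in this abstract (scoring an e-mail by the probability of its features occurring in the author's training e-mails), the following sketch uses simple token-presence features and an arbitrary acceptance threshold; the paper's actual graph-based feature set and one-class SVM are not reproduced here:

```python
from collections import Counter

def train_author_profile(emails):
    """Estimate per-feature occurrence probabilities from an author's
    training e-mails. Features here are simply lowercase tokens, counted
    once per e-mail (presence, not frequency)."""
    counts = Counter()
    for text in emails:
        counts.update(set(text.lower().split()))
    n = len(emails)
    return {tok: c / n for tok, c in counts.items()}

def score_email(profile, text):
    """Average occurrence probability of the e-mail's features under the
    author's profile; features never seen in training contribute zero."""
    tokens = set(text.lower().split())
    if not tokens:
        return 0.0
    return sum(profile.get(t, 0.0) for t in tokens) / len(tokens)

def is_legitimate(profile, text, threshold=0.3):
    """Single-class decision: accept if the style score clears a threshold
    (0.3 is an illustrative choice, not from the paper)."""
    return score_email(profile, text) >= threshold
```

Trained only on true-positive samples of the author, this mirrors the single-class setting: there are no negative examples, only a score against the learned profile.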


A. F. Ahmed, R. Mohamed, B. Mostafa, and A. S. Mohammed, “Authorship Attribution in Arabic Poetry,” Intelligent Systems: Theories and Applications (SITA), 2015 10th International Conference on, Rabat, 2015, pp. 1-6. doi: 10.1109/SITA.2015.7358411
Abstract: In this paper, we present Arabic poetry as an authorship attribution task. Several features, such as characters, sentence length, word length, rhyme, and the first word in a sentence, are used as input data for Markov Chain methods. The data is filtered by removing the punctuation and alphanumeric marks that were present in the original text. The experimental dataset was divided into two groups: a training dataset with known authors and a test dataset with unknown authors. In the experiment, a set of thirty-three poets from different eras was used. The experiment shows interesting results, with a classification precision of 96.96%.
Keywords: Markov processes; humanities; natural language processing; pattern classification; Arabic poetry; Markov chain classifier; alphanumeric marks; authorship attribution task; classification precision; poets; punctuation marks; rhyme; word length; Context; Feature extraction; Sea measurements; Strips; Training; Training data; Arabic Poetry; Authorship attribution; Markov Chain; Text Classification (ID#: 16-10895)
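A minimal sketch of Markov-chain authorship attribution in the spirit of this abstract: train one character-level transition model per known author, then assign a disputed text to the author whose model gives it the highest smoothed log-likelihood. The add-one smoothing and the 128-symbol alphabet size are illustrative assumptions, not details from the paper:

```python
import math
from collections import defaultdict, Counter

def train_markov(text):
    """First-order character transition counts for one author's corpus."""
    trans = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        trans[a][b] += 1
    return trans

def log_likelihood(trans, text, alpha=1.0, vocab=128):
    """Smoothed log-likelihood of a text under a transition model."""
    ll = 0.0
    for a, b in zip(text, text[1:]):
        row = trans.get(a, {})
        total = sum(row.values())
        ll += math.log((row.get(b, 0) + alpha) / (total + alpha * vocab))
    return ll

def attribute(models, text):
    """Return the author whose chain scores the test text highest."""
    return max(models, key=lambda author: log_likelihood(models[author], text))
```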


E. Castillo, D. Vilariño, O. Cervantes, and D. Pinto, “Author Attribution Using a Graph Based Representation,” Electronics, Communications and Computers (CONIELECOMP), 2015 International Conference on, Cholula, 2015, pp. 135-142. doi: 10.1109/CONIELECOMP.2015.7086940
Abstract: Authorship attribution is the task of determining the real author of a given anonymous document. Even though different approaches exist in the literature, this problem has rarely been dealt with by using document representations that employ graph structures. Actually, most research works in the literature handle this problem by employing simple sequences of n words (n-grams), such as bigrams and trigrams. In this paper, an exploration of the use of graphs for representing document sentences is presented. These structures are used for carrying out experiments on the problem of authorship attribution. The experiments presented here attain approximately 79% accuracy, showing that the graph-based representation could be a way of encapsulating various levels of natural language description in a simple structure.
Keywords: graph theory; natural language processing; text analysis; anonymous document; author attribution; document sentence representation; graph based representation; graph structures; natural language descriptions; Feature extraction; Kernel; Semantics; Support vector machines; Syntactics; Topology; Writing (ID#: 16-10896)
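One simple way to turn a document sentence into a graph, broadly in the spirit of this abstract, is a directed word-adjacency graph with edge counts. This sketch uses plain whitespace tokenisation; the paper's representation encapsulates richer linguistic levels than adjacency alone:

```python
from collections import defaultdict

def sentence_graph(sentence):
    """Directed word-adjacency graph: each word points to the words that
    immediately follow it, with edge multiplicities as weights."""
    graph = defaultdict(lambda: defaultdict(int))
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        graph[a][b] += 1
    # freeze into plain dicts so lookups of absent nodes raise KeyError
    return {a: dict(nbrs) for a, nbrs in graph.items()}
```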


M. Khonji, Y. Iraqi, and A. Jones, “An Evaluation of Authorship Attribution Using Random Forests,” Information and Communication Technology Research (ICTRC), 2015 International Conference on, Abu Dhabi, 2015, pp. 68-71. doi: 10.1109/ICTRC.2015.7156423
Abstract: Electronic text (e-text) stylometry aims at identifying the writing style of authors of electronic texts, such as electronic documents, blog posts, tweets, etc. Identifying such styles is quite attractive for identifying authors of disputed e-texts, identifying their profile attributes (e.g. gender, age group, etc.), or even enhancing services such as search engines and recommender systems. Despite the success of Random Forests, their performance has not been evaluated on authorship attribution problems. In this paper, we present an evaluation of Random Forests in the problem domain of authorship attribution. Additionally, we have taken advantage of Random Forests' robustness against noisy features by extracting a diverse set of features from the evaluated e-texts. Interestingly, the resultant model achieved the highest classification accuracy in all problems except one, where it misclassified only a single instance.
Keywords: feature extraction; text analysis; author attribution problems; authorship attribution; e-text stylometry; electronic text; feature extraction; random forests; Accuracy; Authentication; Feature extraction; Noise measurement; Radio frequency; Testing; Training
(ID#: 16-10897)
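The Random Forest idea evaluated above, bagging plus majority voting over randomized trees, can be illustrated with a toy forest of bootstrap-trained decision stumps. Real Random Forests also subsample features at each split and grow full trees; this is only a sketch of the ensemble mechanism:

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Best single-feature threshold split by training error; degenerate
    (single-class or unsplittable) samples collapse to a constant leaf."""
    if len(set(y)) == 1:
        return ("leaf", y[0])
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            err = sum(yi != lmaj for yi in left) + sum(yi != rmaj for yi in right)
            if best is None or err < best[0]:
                best = (err, f, t, lmaj, rmaj)
    if best is None:
        return ("leaf", Counter(y).most_common(1)[0][0])
    _, f, t, lmaj, rmaj = best
    return ("split", f, t, lmaj, rmaj)

def stump_predict(stump, row):
    if stump[0] == "leaf":
        return stump[1]
    _, f, t, lmaj, rmaj = stump
    return lmaj if row[f] <= t else rmaj

def fit_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the training data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, row):
    """Majority vote across the ensemble."""
    votes = Counter(stump_predict(s, row) for s in forest)
    return votes.most_common(1)[0][0]
```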


E. Nunes, N. Kulkarni, P. Shakarian, A. Ruef, and J. Little, “Cyber-Deception and Attribution in Capture-the-Flag Exercises,” 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Paris, 2015, pp. 962-965. doi: 10.1145/2808797.2809362
Abstract: Attributing the culprit of a cyber-attack is widely considered one of the major technical and policy challenges of cyber-security. The lack of ground truth for an individual responsible for a given attack has limited previous studies. Here, we overcome this limitation by leveraging DEFCON capture-the-flag (CTF) exercise data where the actual ground-truth is known. In this work, we use various classification techniques to identify the culprit in a cyberattack and find that deceptive activities account for the majority of misclassified samples. We also explore several heuristics to alleviate some of the misclassification caused by deception.
Keywords: pattern classification; security of data; DEFCON CTF exercise data; DEFCON capture-the-flag exercise data; capture-the-flag exercises; classification techniques; culprit attribution; cyber-attack; cyber-deception; cyber-security; Computer crime; Decision trees; Logistics; Payloads; Social network services; Support vector machines; Training (ID#: 16-10898)


R. D. Guttman, J. A. Hammerstein, J. A. Mattson, and A. L. Schlackman, “Automated Failure Detection and Attribution in Virtual Environments,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-5. doi: 10.1109/THS.2015.7225309
Abstract: Creating reproducible experiments in a realistic cyber environment is a non-trivial challenge [1] [2]. Utilizing the STEPfwd platform, we have developed a system for easily delivering high-fidelity cyber environments to research participants anywhere in the world. In this paper we outline the operation method of the STEPfwd platform. Special focus will be given to realism enhancing capabilities offered in the environment, such as simulated user behavior and traffic generation. We then discuss how these realism enhancing capabilities are leveraged to perform automated failure detection and attribution within the environment.
Keywords: digital simulation; fault tolerant computing; virtual reality; STEPfwd platform; automated failure detection; high-fidelity cyber environments; realistic cyber environment; traffic generation; user behavior simulation; virtual environments; Electronic mail; Monitoring; Servers; Videos; Virtual machine monitors; Virtual machining (ID#: 16-10899)


Barathi Ganesh H B, Reshma U, and Anand Kumar M, “Author Identification Based on Word Distribution in Word Space,” Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, Kochi, 2015, pp. 1519-1523. doi: 10.1109/ICACCI.2015.7275828
Abstract: Author attribution has become increasingly challenging over the past decade. It has become an inevitable task in many sectors, such as forensic analysis, law, and journalism, as it helps to detect the author of a document. Here unigram/bigram features along with latent semantic features from word space were taken, and the similarity of a particular document was tested using Random Forest, Logistic Regression, and Support Vector Machine classifiers in order to create a global model. The dataset from the PAN Author Identification shared task 2014 is taken for processing. It has been observed that the proposed model shows state-of-the-art accuracy of 80%, which is significantly greater than the Author Identification PAN results of 2014.
Keywords: digital forensics; law; regression analysis; support vector machines; trees (mathematics); author attribution; author identification; forensic analysis; journalism; latent semantic features; logistic regression; random forest tree; support vector machine; unigram/bigram features; word distribution; word space; Accuracy; Computational modeling; Feature extraction; Logistics; Semantics; Support vector machines; Vegetation; Author attribution; Logistic Regression; PAN Author Identification 2014; Random forest tree; Support Vector Machine (ID#: 16-10900)


J. Rivera, “Achieving Cyberdeterrence and the Ability of Small States to Hold Large States at Risk,” Cyber Conflict: Architectures in Cyberspace (CyCon), 2015 7th International Conference on, Tallinn, 2015, pp. 7-24. doi: 10.1109/CYCON.2015.7158465
Abstract: Achieving cyberdeterrence is a seemingly elusive goal in the international cyberdefense community. The consensus among experts is that cyberdeterrence is difficult at best and perhaps impossible, due to difficulties in holding aggressors at risk, the technical challenges of attribution, and legal restrictions such as the UN Charter’s prohibition against the use of force. Consequently, cyberspace defenders have prioritized increasing the size and strength of the metaphorical “walls” in cyberspace over facilitating deterrent measures.
Keywords: security of data; UN Charter prohibition; cyberdeterrence; cyberspace defenders; Cyberspace; Force; Internet; Lenses; National security; Power measurement; attribution; deterrence; use of force (ID#: 16-10901)


C. Wei, T. Wu, and H. Fu, “Plain-to-Plain Scan Registration Based on Geometric Distributions of Points,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1194-1199. doi: 10.1109/ICInfA.2015.7279468
Abstract: Scan registration plays a critical role in odometry, mapping and localization for Autonomous Ground Vehicles. In this paper, we propose to adopt a probabilistic framework to model the locally planar patch distributions of candidate points from two or more consecutive scans instead of the original point-to-point mode. This can be regarded as a plain-to-plain measurement metric which ensures a very high confidence in the normal orientation of aligned patches. We take into account the geometric attribution of the scanning beam to pick out feature points, which reduces the number of selected points to a lower level. The optimization of the transform is achieved by the combination of high-frequency but coarse scan-to-scan motion estimation and low-frequency but fine scan-to-map batch adjustment. We validate the effectiveness of our method by qualitative tests on our collected point clouds and quantitative comparisons on the public KITTI odometry datasets.
Keywords: SLAM (robots); automatic guided vehicles; distance measurement; motion estimation; optimisation; probability; transforms; autonomous ground vehicle; coarse scan-to-scan motion estimation; geometric attribution; geometric distributions; localization; mapping; odometry; plain-to-plain measurement metric; plain-to-plain scan registration; planar patch distributions; probabilistic framework; qualitative tests; scan-to-map batch adjustment; transform optimization; Feature extraction; Laser radar; Probabilistic logic; Three-dimensional displays; Trajectory; Transforms; Vehicles; batch adjustment (ID#: 16-10902)


M. Spitters, F. Klaver, G. Koot, and M. v. Staalduinen, “Authorship Analysis on Dark Marketplace Forums,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 1-8. doi: 10.1109/EISIC.2015.47
Abstract: Anonymity networks like Tor harbor many underground markets and discussion forums dedicated to the trade of illegal goods and services. As they are gaining in popularity, the analysis of their content and users is becoming increasingly urgent for many different parties, ranging from law enforcement and security agencies to financial institutions. A major issue in cyber forensics is that anonymization techniques like Tor’s onion routing have made it very difficult to trace the identities of suspects. In this paper we propose classification set-ups for two tasks related to user identification, namely alias classification and authorship attribution. We apply our techniques to data from a Tor discussion forum mainly dedicated to drug trafficking, and show that for both tasks we achieve high accuracy using a combination of character-level n-grams, stylometric features and timestamp features of the user posts.
Keywords: law; marketing; security of data; stock markets; Tor harbor; Tor onion routing; anonymity networks; authorship analysis; dark marketplace forums; financial institutions; illegal goods; illegal services; law enforcement; security agencies; underground markets; Discussion forums; Distance measurement; Drugs; Message systems; Roads; Security; Writing; alias detection; author attribution; dark web; machine learning; stylometric analysis; text mining (ID#: 16-10903)
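Of the three feature families the abstract combines, the character-level n-grams are the easiest to sketch: profile each post as a bag of character trigrams and compare profiles by cosine similarity. The trigram length and the 0.5 decision threshold are illustrative assumptions, and the stylometric and timestamp features the paper also uses are omitted:

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(c * q.get(g, 0) for g, c in p.items())
    norm_p = math.sqrt(sum(c * c for c in p.values()))
    norm_q = math.sqrt(sum(c * c for c in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def same_author(post_a, post_b, threshold=0.5):
    """Crude alias check on char-3-gram profiles alone."""
    return cosine(char_ngrams(post_a), char_ngrams(post_b)) >= threshold
```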


Zhi-Ying Lv, Xi-Nong Liang, Xue-Zhang Liang, and Li-Wei Zheng, “A Fuzzy Multiple Attribute Decision Making Method Based on Possibility Degree,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 450-454. doi: 10.1109/FSKD.2015.7381984
Abstract: A fuzzy multiple attribute decision making method is investigated, in which the weights are given by interval numbers and the attribute values are given in the form of triangular fuzzy numbers or linguistic terms. A possibility degree formula for the comparison between two trapezoidal fuzzy numbers is proposed. According to the ordered weighted average (OWA) operator, an approach is presented to aggregate the possibility matrices based on attributes, and then the most desirable alternative is selected. This fuzzy multiple attribute decision making method is used in the field of financial investment evaluation; the set of attributes of the decision making program is built by analysis of financial and accounting reports in the same industry. Finally, a numerical example is provided to demonstrate the practicality and the feasibility of the proposed method.
Keywords: accounting; decision theory; financial management; fuzzy set theory; investment; possibility theory; OWA operator; accounting reports; financial investment evaluation; fuzzy multiple attribute decision making method; linguistic terms; ordered weighted average operator; possibility degree formula; trapezoidal fuzzy numbers; triangular fuzzy numbers; Decision making; Indexes; Industries; Information technology; Investment; Open wireless architecture; Pragmatics; OWA aggregation Operators; investment options; multiple attribution decision making; possibility degree; trapezoidal fuzzy number (ID#: 16-10904)
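The two ingredients named in this abstract can be sketched directly: a standard possibility-degree formula for comparing interval numbers (assumed non-degenerate), and the OWA operator, whose weights apply to the values after sorting them in descending order. The paper works with trapezoidal fuzzy numbers, so its exact formula may differ; this interval version is only illustrative:

```python
def possibility_degree(a, b):
    """Possibility degree p(a >= b) for interval numbers a = [a1, a2] and
    b = [b1, b2], clipped to [0, 1]. Assumes at least one interval has
    positive width."""
    la, lb = a[1] - a[0], b[1] - b[0]
    return max(0.0, min(1.0, (a[1] - b[0]) / (la + lb)))

def owa(values, weights):
    """Ordered weighted average: weights attach to rank positions of the
    descending-sorted values, not to the values themselves."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))
```

Note that identical intervals yield a possibility degree of exactly 0.5, as the symmetry of the formula requires.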


O. Granichin, N. Kizhaeva, D. Shalymov, and Z. Volkovich, “Writing Style Determination Using the KNN Text Model,” Intelligent Control (ISIC), 2015 IEEE International Symposium on, Sydney, NSW, 2015, pp. 900-905. doi: 10.1109/ISIC.2015.7307296
Abstract: The aim of this paper is writing style investigation. The method used is based on a re-sampling approach. We present the text as a series of characters generated by distinct probability sources. A re-sampling procedure is applied in order to simulate samples from the texts. To check whether samples are generated from the same population we use a KNN-based two-sample test. The proposed method shows a high ability to distinguish a variety of different texts.
Keywords: probability; sampling methods; text analysis; KNN text model; KNN-based two-sample test; distinct probability sources; resampling approach; writing style determination; writing style investigation; Gaussian distribution; Histograms; Lead; Probability distribution; Sociology; Testing; Writing; Writing style; authorship attribution; re-sampling; twosample test (ID#: 16-10905)
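The statistic behind a KNN two-sample test can be sketched in one dimension: pool both samples and measure how often a point's nearest neighbours come from its own sample. A fraction near 1 suggests the two samples were drawn from different populations. The paper applies the idea to re-sampled character-series representations of texts; this sketch shows only the bare statistic on numeric samples:

```python
def knn_same_fraction(x, y, k=1):
    """KNN two-sample statistic: pool samples x and y, then return the
    fraction of k-nearest-neighbour links that stay within the same
    sample (1-D points, brute-force distances)."""
    pooled = [(v, 0) for v in x] + [(v, 1) for v in y]
    same = 0
    for i, (vi, li) in enumerate(pooled):
        dists = sorted((abs(vi - vj), lj)
                       for j, (vj, lj) in enumerate(pooled) if j != i)
        same += sum(1 for _, lj in dists[:k] if lj == li)
    return same / (k * len(pooled))
```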


Kyounghun Kim, Yunseok Noh, and S. B. Park, “Detecting Multiple Userids on Korean Social Media for Mining TV Audience Response,” TENCON 2015 - 2015 IEEE Region 10 Conference, Macao, 2015, pp. 1-4. doi: 10.1109/TENCON.2015.7373121
Abstract: Possession of multiple userids by a single user happens when two or more userids actually belong to the same user. In analysis of the audience response to a TV program, it is important to detect these multi-id users because they often use the multiple ids to manipulate audience response or to take illegal profits. Detecting multiple userids of a single user is similar in nature to authorship attribution in terms of identifying authorship for given arbitrary texts. The conventional supervised techniques for authorship attribution, however, are difficult to employ directly for the problem of multiple userid detection. This is because we do not know the real authors, and multiple userids may belong to the same author. In addition, since we cannot have all authors in advance, userids cannot be treated as classes. This paper proposes a method of learning the element-wise differences between multiple userids. Each userid is represented as a feature vector from its postings on web social media. Then the similarity vector between two userid vectors can be obtained by taking their element-wise difference. With the similarity vectors, we learn the similarity patterns for detecting whether multiple userids belong to the same user or not. In order to solve the problem successfully, we present six features which are effective for Korean social media. We conducted comprehensive experiments on a Korean social media dataset. The experimental results show that the proposed similarity learning method with all presented features is successful at detecting multiple userids on Korean social media.
Keywords: data mining; learning (artificial intelligence); social networking (online); television broadcasting; Korean social media; TV audience response mining; Web social media; authorship attribution; element-wise difference; illegal profits; multiple userids detection; similarity learning method; supervised technique; Feature extraction; Learning systems; Media; Probabilistic logic; TV; Training; Writing (ID#: 16-10906)
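The pairing trick in this abstract (turning two userid feature vectors into one difference vector, then learning a same-user/different-user rule over such vectors) can be sketched with a simple perceptron. The six Korean-specific features and the paper's actual learner are not reproduced; the toy vectors below are placeholders:

```python
def elementwise_diff(u, v):
    """Similarity vector: absolute element-wise difference of two
    userid feature vectors."""
    return [abs(a - b) for a, b in zip(u, v)]

def train_perceptron(diffs, labels, epochs=50, lr=0.1):
    """Learn a linear rule over difference vectors; label 1 means
    'same user'. Small differences should map to the positive class."""
    w, b = [0.0] * len(diffs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(diffs, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                delta = y - pred
                w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
                b += lr * delta
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```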


I. Chepurna and M. Makrehchi, “Exploiting Class Bias for Discovery of Topical Experts in Social Media,” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 64-71. doi: 10.1109/ICDMW.2015.198
Abstract: Discovering the contexts of a user's expertise can be a challenging task, especially if there is no explicit attribution provided. With more professionals adopting social networks as a means of communicating with their colleagues and broadcasting updates on their area of competence, it is crucial to detect such individuals automatically. This would not only allow for better follower recommendation, but would also help to mine valuable insights and emerging signals in different communities. We posit that topical groups have their own unique semantic signatures. Hence, we can treat identification of an expert's topical attribution as a binary classification task, exploiting the class bias to generate a training sample without any manual labor. In this work, we present profile- and behavior-based models to explore experts' topicality. While the former focuses on the static profile of user activity, the latter takes into account the consistency and dynamics of a topic in the user's feed. We also propose a naive baseline tailored to the domain used in evaluation. All models are assessed on a case study of the Twitter investment community.
Keywords: social networking (online); Twitter investment community; behavior-based models; binary classification task; class bias; experts topicality; follower recommendation; profile-based models; semantic signatures; social media; social networks; topic consistency; topic dynamics; topical attribution; topical experts discovery; user activity; valuable insights; Context; Media; Semantics; Stock markets; Training; Twitter (ID#: 16-10907)


C. Napoli, E. Tramontana, G. L. Sciuto, M. Wozniak, R. Damaevicius, and G. Borowik, “Authorship Semantical Identification Using Holomorphic Chebyshev Projectors,” Computer Aided System Engineering (APCASE), 2015 Asia-Pacific Conference on, Quito, 2015, pp. 232-237. doi: 10.1109/APCASE.2015.48
Abstract: Text attribution and classification, for both information retrieval and analysis, have become one of the main issues in the matter of security, trust and copyright preservation. This paper proposes an innovative approach for text classification using Chebyshev polynomials and holomorphic transforms of the coefficients space. The main advantage of this choice lies in the generality and robustness of the proposed semantical identifier, which can be applied to various contexts and lexical domains without any modification.
Keywords: copyright; information retrieval; pattern classification; security of data; text analysis; trusted computing; Chebyshev polynomial; authorship semantical identification; coefficients space; copyright preservation; holomorphic Chebyshev projector; holomorphic transform; information analysis; information retrieval; security; semantical identifier; text attribution; text classification; trust; Chebyshev approximation; Databases; Feature extraction; Modeling; Polynomials; Radiation detectors; Transforms; authorship identification; data mining; natural language; neural networks; text mining (ID#: 16-10908)
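Chebyshev polynomials of the first kind, the building block of the projectors in this paper, satisfy the recurrence T0(x) = 1, T1(x) = x, T(n+1)(x) = 2x·Tn(x) - T(n-1)(x). A minimal evaluator follows; how the paper maps text into the coefficient space, and the holomorphic transforms applied there, are not shown:

```python
def chebyshev_T(n, x):
    """Evaluate the degree-n Chebyshev polynomial of the first kind at x
    via the three-term recurrence (numerically stable for |x| <= 1)."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1
```

For instance, T3(x) = 4x^3 - 3x, so T3(0.5) = -1.0, matching the closed form.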


B. Caillat, B. Gilbert, R. Kemmerer, C. Kruegel, and G. Vigna, “Prison: Tracking Process Interactions to Contain Malware,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1282-1291. doi: 10.1109/HPCC-CSS-ICESS.2015.297
Abstract: Modern operating systems provide a number of different mechanisms that allow processes to interact. These interactions can generally be divided into two classes: inter-process communication techniques, which a process supports to provide services to its clients, and injection methods, which allow a process to inject code or data directly into another process’ address space. Operating systems support these mechanisms to enable better performance and to provide simple and elegant software development APIs that promote cooperation between processes. Unfortunately, process interaction channels introduce problems at the end-host that are related to malware containment and the attribution of malicious actions. In particular, host-based security systems rely on process isolation to detect and contain malware. However, interaction mechanisms allow malware to manipulate a trusted process to carry out malicious actions on its behalf. In this case, existing security products will typically either ignore the actions or mistakenly attribute them to the trusted process. For example, a host-based security tool might be configured to deny untrusted processes from accessing the network, but malware could circumvent this policy by abusing a (trusted) web browser to get access to the Internet. In short, an effective host-based security solution must monitor and take into account interactions between processes. In this paper, we present Prison, a system that tracks process interactions and prevents malware from leveraging benign programs to fulfill its malicious intent. To this end, an operating system kernel extension monitors the various system services that enable processes to interact, and the system analyzes the calls to determine whether or not the interaction should be allowed. Prison can be deployed as an online system for tracking and containing malicious process interactions to effectively mitigate the threat of malware. 
The system can also be used as a dynamic analysis tool to aid an analyst in understanding a malware sample’s effect on its environment.
Keywords: Internet; application program interfaces; invasive software; online front-ends; operating system kernels; software engineering; system monitoring; Prison; Web browser; code injection; dynamic analysis tool; host-based security solution; host-based security systems; injection method; interprocess communication technique; malicious action attribution; malware containment; operating system kernel extension; process address space; process interaction tracking; process isolation; software development API; trusted process; Browsers; Kernel; Malware; Monitoring; inter-process communication; prison; windows (ID#: 16-10909)


L.-Y. Chen, P.-M. Lee, and T.-C. Hsiao, “A Sensor Tagging Approach for Reusing Building Blocks of Knowledge in Learning Classifier Systems,” Evolutionary Computation (CEC), 2015 IEEE Congress on, Sendai, 2015, pp. 2953-2960. doi: 10.1109/CEC.2015.7257256
Abstract: During the last decade, the extraction and reuse of building blocks of knowledge for the learning process of the Extended Classifier System (XCS) in the Multiplexer (MUX) problem domain has been demonstrated to be feasible by using Code Fragments (CF) (i.e., a tree-based structure ordinarily used in the field of Genetic Programming (GP)) as the representation of classifier conditions (the resulting system was called XCSCFC). However, the use of the tree-based structure may lead to the bloating problem and an increase in time complexity when the tree grows deep. Therefore, we propose a novel representation of classifier conditions for the XCS, named Sensory Tag (ST). The XCS with the ST as the input representation is called XCSSTC. The experiments of the proposed method were conducted in the MUX problem domain. The results indicate that the XCSSTC is capable of reusing building blocks of knowledge in the MUX problems. The current study also discusses two different aspects of the reuse of building blocks of knowledge. Specifically, we propose the “attribution selection” part and the “logical relation between the attributes” part.
Keywords: feature selection; learning (artificial intelligence); pattern classification; trees (mathematics); CF; XCS; attribution selection; code fragment; extended classifier system; knowledge building block; learning process; sensor tagging approach; tree-based structure; Accuracy; Encoding; Impedance matching; Indexes; Multiplexing; Sociology; Statistics; Building Blocks; Extended Classifier System (XCS); Hash table; Pattern Recognition; Scalability; Sensory Tag (ID#: 16-10910)


S. Y. Chang, Y. C. Hu, and Z. Liu, “Securing Wireless Medium Access Control Against Insider Denial-of-Service Attackers,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 370-378. doi: 10.1109/CNS.2015.7346848
Abstract: In a wireless network, users share a limited resource in bandwidth. To improve spectral efficiency, the network dynamically allocates channel resources and, to avoid collisions, has its users cooperate with each other using a medium access control (MAC) protocol. In a MAC protocol, the users exchange control messages to establish more efficient data communication, but such MAC assumes user compliance and can be detrimental when a user misbehaves. An attacker who compromised the network can launch a two-pronged denial-of-service (DoS) attack that is more devastating than an outsider attack: first, it can send excessive reservation requests to waste bandwidth, and second, it can focus its power on jamming those channels that it has not reserved. Furthermore, the attacker can falsify information to skew the network control decisions to its favor. To defend against such insider threats, we propose a resource-based channel access scheme that holds the attacker accountable for its channel reservation. Building on the randomization technology of spread spectrum to thwart outsider jamming, our solution comprises of a bandwidth allocation component to nullify excessive reservations, bandwidth coordination to resolve over-reserved and under-reserved spectrum, and power attribution to determine each node’s contribution to the received power. We analyze our scheme theoretically and validate it with WARP-based testbed implementation and MATLAB simulations. Our results demonstrate superior performance over the typical solutions that bypass MAC control when faced against insider adversary, and our scheme effectively nullifies the insider attacker threats while retaining the MAC benefits between the collaborative users.
Keywords: access control; access protocols; radio access networks; telecommunication security; DoS attack; MAC protocol; MATLAB; WARP; channel reservation; data communication; denial-of-service attackers; medium access control protocol; over-reserved spectrum; power attribution; resource-based channel access; spectral efficiency; under-reserved spectrum; wireless medium access control; wireless network; Bandwidth; Communication system security; Data communication; Jamming; Media Access Protocol; Wireless communication (ID#: 16-10911)


C. H. Lin and J. L. Shih, “A Preliminary Study of Using Laban Movement Analysis to Observe the Change of Teenagers’ Bodily Expressions of Emotions in Game-Based Learning,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 329-334. doi: 10.1109/IIAI-AAI.2015.260
Abstract: This study aims to develop a recognition system of bodily expressions of emotions (BEE) that can be applied to digital kinetic games. There were three stages of the study. First, 76 emotion and action terms frequently used by teenagers were selected; they were quantized into four quadrants and gathered into 16 clusters. The second stage was to construct the database of emotional motions. The researchers collected data on body motions expressed in accordance with the 16 emotion terms and classified the attributes of body motions using the Effort and Space Harmony factors of Laban Movement Analysis (LMA). The last stage was to construct the algorithms of BEE based on the attribution database built in the earlier stage. Finally, the researchers would compare the results of the digital recognition system with the database built in the second stage to verify the accuracy of the recognition system. Once the body recognition system is built, it will be an important research tool and mechanism for recognizing the emotional changes of the player during kinetic game play, and for comparing learning conditions with learning effectiveness in further investigations.
Keywords: computer aided instruction; computer games; database management systems; emotion recognition; BEE algorithms; Laban movement analysis; attribution database; bodily expressions of emotions recognition system; body motions; digital kinetic games; digital recognition system; emotional motion database; learning conditions; learning effectiveness; space harmony; teenager bodily expression change; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Databases; Emotion recognition; Games; Kinetic theory; bodily expressions of emotions; emotional motions; kinetic game (ID#: 16-10912)


H. Ghaemmaghami, D. Dean, and S. Sridharan, “A Cluster-Voting Approach for Speaker Diarization and Linking of Australian Broadcast News Recordings,” Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, South Brisbane, QLD, 2015, pp. 4829-4833. doi: 10.1109/ICASSP.2015.7178888
Abstract: We present a clustering-only approach to the problem of speaker diarization to eliminate the need for the commonly employed and computationally expensive Viterbi segmentation and realignment stage. We use multiple linear segmentations of a recording and carry out complete-linkage clustering within each segmentation scenario to obtain a set of clustering decisions for each case. We then collect all clustering decisions, across all cases, to compute a pairwise vote between the segments and conduct complete-linkage clustering to cluster them at a resolution equal to the minimum segment length used in the linear segmentations. We use our proposed cluster-voting approach to carry out speaker diarization and linking across the SAIVT-BNEWS corpus of Australian broadcast news data. We compare our technique to an equivalent baseline system with Viterbi realignment and show that our approach can outperform the baseline technique with respect to the diarization error rate (DER) and attribution error rate (AER).
Keywords: pattern clustering; speaker recognition; speech synthesis; AER; Australian broadcast news recordings; DER; SAIVT-BNEWS; Viterbi segmentation; attribution error rate; cluster-voting approach; complete-linkage clustering; diarization error rate; minimum segment length; multiple linear segmentations; segmentation scenario; speaker diarization; Adaptation models; Hidden Markov models; Joining processes; Measurement; Reliability; Speech; Viterbi algorithm; Viterbi realignment; cluster-voting (ID#: 16-10913)
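The cluster-voting idea can be illustrated with a toy reconstruction (an assumption-laden sketch, not the authors' code): clustering decisions from several segmentation cases are accumulated into a pairwise vote matrix, and complete-linkage agglomeration merges two clusters only if their worst pairwise vote count clears a threshold.

```python
# Illustrative sketch of cluster voting: accumulate pairwise votes across
# several clustering cases, then agglomerate with complete linkage on the
# vote counts (merge only if the weakest cross-pair still has enough votes).

def vote_matrix(cases, n):
    """cases: list of label lists, one label per segment, per case."""
    votes = [[0] * n for _ in range(n)]
    for labels in cases:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    votes[i][j] += 1
    return votes

def complete_linkage(votes, n, min_votes):
    """Merge clusters while the *worst* pairwise vote count between two
    clusters is still at least min_votes (complete linkage)."""
    clusters = [{i} for i in range(n)]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                worst = min(votes[i][j] for i in clusters[a] for j in clusters[b])
                if worst >= min_votes:
                    clusters[a] |= clusters.pop(b)
                    merged = True
                    break
            if merged:
                break
    return sorted(sorted(c) for c in clusters)

# Three segmentation cases vote on four segments.
cases = [[0, 0, 1, 1], [0, 0, 1, 2], [0, 0, 2, 2]]
v = vote_matrix(cases, 4)
print(complete_linkage(v, 4, min_votes=2))
```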


F. Palmetshofer, D. Schmidt, D. Heinke, N. Gehrke, and U. Steinhoff, “The Volume Fraction of Iron Oxide in a Certain Particle Size Range Determines the Harmonic Spectrum of Magnetic Tracers,” Magnetic Particle Imaging (IWMPI), 2015 5th International Workshop on, Istanbul, 2015, pp. 1-1. doi: 10.1109/IWMPI.2015.7107057
Abstract: Magnetic Particle Spectroscopy (MPS) is a well-established method to characterize the signal strength of MPI tracers. To better understand the signal generation, it is of considerable interest to identify the optimum particle size or size range causing a maximum signal. Yet these tracers often exhibit a broad distribution of core diameters, impeding attribution of the signal strength to a specific particle size or size range. In our work, we present a combined evaluation of dynamic and static magnetization measurements to better understand the relation between particle size distribution and dynamic signal generation.
Keywords: biomagnetism; biomedical imaging; iron compounds; magnetic particles; magnetisation; particle size; Fe2O3; core diameters; dynamic magnetization measurements; dynamic signal generation; harmonic spectrum; iron oxide; magnetic particle spectroscopy; magnetic tracers; optimum particle size; particle size distribution; signal strength; static magnetization measurements; volume fraction; Atmospheric measurements; Correlation; Harmonic analysis; Iron; Magnetic field measurement; Particle measurements; Size measurement (ID#: 16-10914)


H. Cai and T. Wolf, “Source Authentication and Path Validation with Orthogonal Network Capabilities,” Computer Communications Workshops (INFOCOM WKSHPS), 2015 IEEE Conference on, Hong Kong, 2015, pp. 111-112. doi: 10.1109/INFCOMW.2015.7179368
Abstract: In-network source authentication and path validation are fundamental primitives for constructing security mechanisms such as DDoS mitigation, path compliance, packet attribution, or protection against flow redirection. Unfortunately, most existing approaches are based on cryptographic techniques. The high computational cost of cryptographic operations makes these techniques fall short in the data plane of the network, where potentially every packet needs to be checked at gigabit-per-second link rates in the future Internet. In this paper, we propose a new protocol that uses a set of orthogonal sequences as credentials, enabling low-overhead verification in routers. Our evaluation of a prototype experiment demonstrates the fast verification speed and low storage consumption of our protocol, while providing reasonable security properties.
Keywords: Internet; authorisation; computer network security; cryptographic protocols; Gigabit per second link rates; cryptographic operations; in-network source authentication; orthogonal network capabilities; path validation; Authentication; Conferences; Cryptography; Optimized production technology; Routing protocols (ID#: 16-10915)
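As a hypothetical instantiation of "orthogonal sequences as credentials" (the paper does not specify Walsh-Hadamard codes; that choice is an assumption here), a router can verify a credential with a single inner product: the product equals the sequence length for the legitimate row and zero for any other.

```python
# Hypothetical sketch: Walsh-Hadamard rows are mutually orthogonal, so a
# router can check a credential with one inner product instead of a
# cryptographic operation.

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def verify(credential, expected_row):
    """Inner product is len(row) for the matching row, 0 for any other row."""
    return sum(a * b for a, b in zip(credential, expected_row)) == len(expected_row)

H = hadamard(8)
print(verify(H[3], H[3]), verify(H[3], H[5]))
```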


A. Thekkilakattil and G. Dodig-Crnkovic, “Ethics Aspects of Embedded and Cyber-Physical Systems,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 39-44. doi: 10.1109/COMPSAC.2015.41
Abstract: The growing complexity of software employed in the cyber-physical domain is calling for a thorough study of both its functional and extra-functional properties. Ethical aspects are among the important extra-functional properties, covering the whole life cycle, with different stages from design, development, and deployment/production to use of cyber-physical systems. One of the ethical challenges involved is the question of identifying the responsibilities of each stakeholder associated with the development and use of a cyber-physical system. This challenge is made even more pressing by the introduction of autonomous, increasingly intelligent systems that can perform functionalities without human intervention, because of the lack of experience, best practices, and policies for such technology. In this article, we provide a framework for responsibility attribution based on the amount of autonomy and automation involved in AI based cyber-physical systems. Our approach enables traceability of anomalous behaviors back to the responsible agents, be they human or software, allowing us to identify and separate the “responsibility” of the decision-making software from human responsibility. This provides us with a framework to accommodate the ethical “responsibility” of the software for AI based cyber-physical systems that will be deployed in the future, underscoring the role of ethics as an important extra-functional property. Finally, this systematic approach makes apparent the need for rigorous communication protocols between different actors associated with the development and operation of cyber-physical systems, which further identifies the ethical challenges involved in the form of group responsibilities.
Keywords: artificial intelligence; computational complexity; decision making; embedded systems; ethical aspects; AI based cyber-physical systems; cyber-physical domain; decision-making software; ethical responsibility; ethics aspects; extra-functional properties; group responsibilities; human responsibility; intelligent systems; software complexity; Artificial intelligence; Cyber-physical systems; Ethical aspects; Ethics; Safety; Software; Stakeholders; Extra-functional Properties; Software-responsibility (ID#: 16-10916)


D. S. Romanovskiy, A. V. Solomonov, S. A. Tarasov, I. A. Lamkin, G. B. Galiev, and S. S. Pushkarev, “Low Temperature Photoluminescence and Photoreflectance of Metamorphic HEMT Structures with High Mole Fraction of In,” Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), 2015 IEEE NW Russia, St. Petersburg, 2015, pp. 36-39. doi: 10.1109/EIConRusNW.2015.7102227
Abstract: Low-temperature photoluminescence and photoreflectance have been studied in several metamorphic HEMT (MHEMT) heterostructures with In0.7Ga0.3As active regions and different buffer layer designs. It was found that structures with a step-graded metamorphic buffer have better quality. It was also shown that mismatched superlattices in the metamorphic buffer can influence the half-width of the photoluminescence spectra. The possible attribution of the photoluminescence spectral lines and their thermal behaviour are critically discussed. The photoreflectance spectrum shows many features in the energy region where no PL features are observed.
Keywords: III-V semiconductors; gallium arsenide; high electron mobility transistors; indium compounds; photoluminescence; photoreflectance; superlattices; In high mole fraction; In0.7Ga0.3As; MHEMT; low temperature photoluminescence; metamorphic HEMT structures; mismatched superlattices; photoluminescence spectra; photoreflectance spectrum; step-graded metamorphic buffer; Biology; Indium gallium arsenide; mHEMTs; HEMTs; low temperature; metamorphic buffer (ID#: 16-10917)


Y. Wang, S. Qiu, C. Gao, and J. Wei, “Cluster Analysis on the Service Oriented Alliance Enterprise Manufacturing Resource,” Advanced Mechatronic Systems (ICAMechS), 2015 International Conference on, Beijing, 2015, pp. 1-4. doi: 10.1109/ICAMechS.2015.7287118
Abstract: Alliance manufacturing enterprises face resource problems such as heterogeneous types, non-uniform standards, and high repeatability. Under a cloud manufacturing service platform, the use of the cluster analysis method to classify the intelligent resources of an alliance enterprise was studied. The same or similar resources are divided into one resource cluster, which resolves the uniqueness attribution of resources. According to cloud users' needs, resources are searched and matched within a cluster rather than across the entire resource set. Using an evaluation method of dynamic manufacturing capacity, optimal resources are provided, which improves the utilization ratio of the resources. Finally, the approach was verified with an instance.
Keywords: cloud computing; customer services; manufacturing systems; production engineering computing; statistical analysis; cloud manufacturing service platform; cloud user needs; cluster analysis method; intelligent resources; manufacturing dynamic capacity; optimal resource; resource use ratio improvement; service-oriented alliance enterprise manufacturing resource; Dynamic scheduling; Indexes; Manufacturing; Measurement; Resource management; Standards; Symmetric matrices; Cloud manufacturing; cluster analysis; manufacturing capacity; resource cluster (ID#: 16-10918)


P. Bakalov, E. Hoel, and W. L. Heng, “Time Dependent Transportation Network Models,” Data Engineering (ICDE), 2015 IEEE 31st International Conference on, Seoul, 2015, pp. 1364-1375. doi: 10.1109/ICDE.2015.7113383
Abstract: Network data models are frequently used as a mechanism to solve a wide range of problems typical of GIS applications, and transportation planning in particular. They do this by modelling the two most important aspects of such systems: connectivity and attribution. For a long time, attributes such as travel time associated with a transportation network model have been considered static. With advances in technology, data vendors now have the capability to capture more accurate information about street speeds at different times of the day and provide this data to customers. The network attributes are no longer static but associated with a particular time instance (i.e., time-dependent). In this paper we describe our time-dependent network model tailored to the needs of transportation network modelling. Our solution is based on existing database functionality (tables, joins, sorting algorithms) provided by a standard relational DBMS; it has been implemented and tested, and is currently shipped as part of the ESRI ArcGIS 10.1 platform and all subsequent releases.
Keywords: geographic information systems; relational databases; GIS applications; database functionality; network data model; relational DBMS; time dependent transportation network models; transportation planning; Algorithm design and analysis; Analytical models; Data models; Geometry; Junctions; Roads (ID#: 16-10919)
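A minimal sketch of routing over such a time-dependent model (illustrative only; the bucketed timetable and FIFO assumption are ours, not ESRI's implementation): edge travel times are looked up from a per-edge table keyed by departure time, and a Dijkstra-style search propagates arrival times instead of static costs.

```python
# Illustrative sketch, assuming a FIFO network: travel time depends on the
# departure-time bucket, and the search tracks earliest arrival per node.

import heapq

def travel_time(timetable, edge, depart):
    buckets = timetable[edge]            # e.g. hour-of-day -> minutes
    return buckets[int(depart) % len(buckets)]

def earliest_arrival(graph, timetable, src, dst, t0):
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            arrive = t + travel_time(timetable, (u, v), t)
            if arrive < best.get(v, float("inf")):
                best[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return None

graph = {"A": ["B"], "B": ["C"]}
# Two one-hour buckets: congested at even hours, free-flowing at odd hours.
timetable = {("A", "B"): [2, 1], ("B", "C"): [5, 1]}
print(earliest_arrival(graph, timetable, "A", "C", 0))
```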


X. Pan, C. j. He, and T. Wen, “A SOS Reliability Evaluate Approach Based on GERT,” Reliability and Maintainability Symposium (RAMS), 2015 Annual, Palm Harbor, FL, 2015, pp. 1-7. doi: 10.1109/RAMS.2015.7105085
Abstract: Since an SOS (system-of-systems) has properties of independence, connectivity, attribution, diversity, emergence, etc., traditional reliability modeling and assessment approaches are not fully suitable for SOS. Typically, an SOS is task-oriented, and a specified task of the SOS will generate a particular logic process. Therefore, the reliability of an SOS can be evaluated by assessing the SOS task process. First, this paper uses the ABM (Activity Based Methodology) method, which is based on DODAF (Department of Defense Architecture Framework), to decompose an SOS task into activity models. Second, a GERT (Graphical Evaluation and Review Technique) hierarchical stochastic network is used to describe the task process of the SOS, which is dynamic and stochastic. Third, a reliability assessment method for SOS based on DES (Discrete Event Simulation) of the SOS task process is presented. At last, a case study is given to illustrate the method.
Keywords: discrete event simulation; reliability; stochastic processes; ABM; DES; DODAF; Department of Defense architecture framework; GERT; SOS reliability evaluate approach; SOS tasks process; activity based methodology; graphical evaluation and review technique; hierarchical stochastic network; reliability assessment; system-of-systems; Aircraft; Atmospheric modeling; Reliability theory; Stochastic processes; system of systems (ID#: 16-10920)
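A plain Monte Carlo stand-in for the task-process evaluation can convey the idea (a sketch under strong assumptions: independent activities with known success probabilities and simple AND/OR task logic, which is far simpler than a hierarchical GERT network):

```python
# Illustrative Monte Carlo sketch (not the paper's GERT/DES method): the
# SOS task is decomposed into activities with success probabilities, and
# task reliability is estimated by repeated simulation of the logic tree.

import random

def simulate_task(activities, logic, trials=100_000, seed=7):
    """activities: {name: success probability}; logic: nested
    ('and'|'or', [children]) tree over activity names."""
    random.seed(seed)

    def evaluate(node, outcome):
        if isinstance(node, str):
            return outcome[node]
        op, children = node
        results = [evaluate(c, outcome) for c in children]
        return all(results) if op == "and" else any(results)

    ok = 0
    for _ in range(trials):
        outcome = {a: random.random() < p for a, p in activities.items()}
        ok += evaluate(logic, outcome)
    return ok / trials

# Task succeeds if sensing works AND at least one of two comm links works;
# analytically 0.95 * (1 - 0.1 * 0.2) = 0.931.
acts = {"sense": 0.95, "link1": 0.9, "link2": 0.8}
task = ("and", ["sense", ("or", ["link1", "link2"])])
print(round(simulate_task(acts, task), 3))
```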


D. Regan and S. K. Srivatsa, “Adaptive Artificial Bee Colony Based Parameter Selection for Subpixel Mapping Multiagent System in Remote-Sensing Imagery,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-8. doi: 10.1109/ICIIECS.2015.7193224
Abstract: Remote sensing has become an important source of land use/cover information at a range of spatial and temporal scales. The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundance of different classes in a pixel to solve the mixed pixel problem, the subpixel spatial attribution of the pixel will still be unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. This confirms that MASSM is appropriate for the subpixel mapping of remote-sensing images. However, a major problem is that parameter selection rests on assumptions; to overcome this, the proposed work focuses on adaptive parameter selection based on optimization methods, which automatically selects parameter values during classification and improves classification results on remote-sensing imagery. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed artificial bee colony based optimization subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
Keywords: backpropagation; image classification; remote sensing; adaptive artificial bee colony based parameter selection; adaptive subpixel mapping; back-propagation neural network; boundary-mixed pixel; decision agents; feature detection agents; linear subpixel; soft classification; spatial attraction model; spectral unmixing techniques; subpixel mapping agents; subpixel mapping multiagent system; Classification algorithms; Image reconstruction; Indexes; Lead; Remote sensing; Spatial resolution; Artificial Bee Colony; Enhancement; Hyperspectral Image Sub-Pixel Mapping; Remote Sensing; Resolution; Subpixel Mapping; Super-Resolution Mapping (ID#: 16-10921)
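To illustrate one of the baselines mentioned above, here is a simplified spatial attraction model (not the proposed ABC-optimized method; the grid, distance weighting, and quota rule are assumptions): each subpixel inside a mixed coarse pixel is attracted to the classes of neighboring coarse pixels in proportion to their fractions and inversely to distance.

```python
# Simplified spatial attraction model (a baseline sketch, not the paper's
# ABC-optimized method): subpixels of a mixed coarse pixel are labeled by
# greedy assignment on attraction scores, honoring the pixel's class quota.

import math

def subpixel_map(fractions, row, col, scale):
    """fractions: 2D grid of {class: fraction}; returns a scale x scale
    label grid for coarse pixel (row, col)."""
    center = fractions[row][col]
    quota = {c: round(f * scale * scale) for c, f in center.items()}
    scores = []
    for i in range(scale):
        for j in range(scale):
            # Subpixel centre in coarse-pixel coordinates.
            y, x = row + (i + 0.5) / scale, col + (j + 0.5) / scale
            attract = {c: 0.0 for c in center}
            for r, line in enumerate(fractions):
                for s, frac in enumerate(line):
                    if (r, s) == (row, col):
                        continue
                    d = math.hypot(y - (r + 0.5), x - (s + 0.5))
                    for c in attract:
                        attract[c] += frac.get(c, 0.0) / d
            for c in attract:
                scores.append((attract[c], c, (i, j)))
    labels = [[None] * scale for _ in range(scale)]
    # Greedy: strongest attractions claim subpixels first, within quota.
    for a, c, (i, j) in sorted(scores, reverse=True):
        if labels[i][j] is None and quota.get(c, 0) > 0:
            labels[i][j] = c
            quota[c] -= 1
    return labels

# Mixed centre pixel; class 0 pure on the left, class 1 pure on the right.
fractions = [[{0: 1.0}, {0: 0.5, 1: 0.5}, {1: 1.0}]]
print(subpixel_map(fractions, 0, 1, 2))
```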


B. Amjad, M. Hirano, and N. Minato, “Loosely Coupled SME Network in Manufacturing Industry: A Challenge in Niigata Sky Project,” Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, Portland, OR, 2015, pp. 295-301. doi: 10.1109/PICMET.2015.7273172
Abstract: As the emerging nations add influence to the state of the global economy, new demand in air logistics is expected to catapult the increase in aircraft production, introducing a new horizon for potential suppliers to enter the supply chain regime. This scenario may capture the attention not only of industry institutions, but also of public entities interested in plotting the economic viability of the underlying jurisdiction. Toward such an objective, the aviation cluster incubation project “Niigata Sky Project (NSP),” implemented by the City of Niigata, Japan, implies a new perspective on implementing a sustainable industry cluster. Present commercial aircraft production is configured on the fundamental of securing a high standard of quality assurance. This prevalence has embedded a structure that requires a highly competitive supplier selection policy, a public case of which can be observed in Germany. Niigata, in this regard, demonstrates a lenient, non-competitive attribution, where suppliers’ competency is developed through their collaborative participation in the project. This case can be represented as “collaboratively institutionalized” versus the German case of a “competitively institutionalized” cluster formation. Defining the character of these distinctive practices and finding their applicable phases in the cluster’s lifecycle will provide a new view for cluster managers and SMEs in developing their organizational strengths for regional development.
Keywords: aerospace industry; quality assurance; small-to-medium enterprises; supply chain management; sustainable development; Japan; Niigata Sky Project; air logistics; aircraft production; competitively institutionalized cluster formation; economic viability; loosely coupled SME network; manufacturing industry; small-to-medium sized enterprise; supplier selection policy; supply chain regime; sustainable industry cluster; Aircraft; Cities and towns; Companies; Industries; Joints; Manufacturing; Production (ID#: 16-10922)


W. Tengteng and T. Lijun, “An Analysis to the Concentric Relaxation Vulnerability Area of Voltage Sag in Power System,” Power and Energy Engineering Conference (APPEEC), 2015 IEEE PES Asia-Pacific, Brisbane, QLD, 2015, pp. 1-5. doi: 10.1109/APPEEC.2015.7380901
Abstract: This paper presents a new definition of the vulnerability area of voltage sags, the concentric relaxation vulnerability area (CRVA): the area of voltage sags caused by a fault, with the fault point as the center. Analyzing the CRVA greatly reduces the workload involved in traditional voltage sag vulnerability areas and provides a reference for attributing liability between the power supply department and power users. Based on these conclusions, we use MATLAB's GUI tools to develop general software for analyzing the CRVA caused by voltage sags. Meanwhile, we take into account the influence on the CRVA of the transformer connection mode when the fault occurs on the low- or high-voltage side of the transformer. The above algorithm enables us to derive the CRVA of the IEEE-30 bus system and build a model in PSCAD to demonstrate the accuracy of the CRVA analysis software in a simulated manner.
Keywords: IEEE standards; graphical user interfaces; power engineering computing; power supply quality; power system faults; power system reliability; power transformer protection; IEEE-30 bus system; MATLAB GUI; PSCAD; graphical user interface; power supply department; power system fault; transformer connection mode; transformer high-voltage; transformer low-voltage fault; voltage sag concentric relaxation vulnerability area analysis; Circuit faults; Graphical user interfaces; MATLAB; Power quality; Voltage fluctuations; concentric relaxation; short circuit fault; voltage sag; vulnerability area (ID#: 16-10923)


M. Teramoto and J. Nakamura, “Application of Applied KOTO-FRAME to the Five-Story Pagoda Aseismatic Mechanism,” 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, 2015, pp. 742-748. doi: 10.1109/ICDMW.2015.173
Abstract: We have created and are proposing KOTO-FRAME, previously called the dynamic quality function deployment (DQFD) technique, which evolved from quality function deployment (QFD). This method was applied to aseismatic mechanisms and is recognized as tacit knowledge for creating a logical structure from the architecture of a five-story pagoda by experimenting with a model of the steric balancing-toy principle. Consequently, without complex calculations, we were able to define the corresponding data structure in the attribution table of experiment or evaluation, which is worth applying not only to past data but also to future data obtained via experiment or evaluation, utilizing the idea of a “market of data.”
Keywords: buildings (structures); history; knowledge management; quality function deployment; structural engineering; DQFD technique; KOTO-FRAME; dynamic quality function deployment; five-story pagoda aseismatic mechanism; steric balancing toy principle; tacit knowledge; Conferences; Data structures; Decision support systems; Floors; Quality function deployment; Stability analysis; Structural beams; MoDAT; QFD; construction; data; tacit knowledge (ID#: 16-10924)


P. Heyer, J. J. Rivas, L. E. Sucar, and F. Orihuela-Espina, “Improving Classification of Posture Based Attributed Attention Assessed by Ranked Crowd-Raters,” Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2015 9th International Conference on, Istanbul, 2015, pp. 277-279. doi: 10.4108/icst.pervasivehealth.2015.259171
Abstract: Attribution of attention from observable body posture is plausible, providing additional information for affective computing applications. We previously reported a promising F-measure of 69.72 ± 10.50 (μ ± σ) using posture as a proxy for attributed attentional state, with implications for affective computing applications. Here, we aim at improving that classification rate by reweighting raters’ votes, giving higher confidence to those raters who are representative of the rater population. An increase to 75.35 ± 11.66 in F-measure was achieved. The improvement in the classifier’s predictive power is welcome, and its impact is still being assessed.
Keywords: cognition; human computer interaction; learning (artificial intelligence); F-measure; posture classification; posture-based attributed attention; ranked crowd-rater; semisupervised learning; Affective computing; Head; Human computer interaction; Sensitivity; Sensors; Sociology; Statistics; adaptation; attention; neurorehabilitation; posture; semi-supervised learning (ID#: 16-10925)
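One plausible reading of the rater-reweighting step (a sketch; the abstract does not give the authors' exact weighting scheme) is to weight each rater's vote by their agreement with the unweighted majority, so representative raters count for more:

```python
# Hypothetical sketch of crowd-rater reweighting: a rater's weight is their
# agreement rate with the unweighted per-item majority label.

def rater_weights(ratings):
    """ratings: list of per-rater label lists over the same items."""
    n_items = len(ratings[0])
    majority = []
    for k in range(n_items):
        col = [r[k] for r in ratings]
        majority.append(max(set(col), key=col.count))
    return [sum(r[k] == majority[k] for k in range(n_items)) / n_items
            for r in ratings]

def weighted_label(ratings, weights, k):
    """Weighted vote for item k."""
    tally = {}
    for r, w in zip(ratings, weights):
        tally[r[k]] = tally.get(r[k], 0.0) + w
    return max(tally, key=tally.get)

ratings = [["att", "att", "inatt"],
           ["att", "inatt", "inatt"],
           ["inatt", "att", "inatt"]]
w = rater_weights(ratings)
print(w, weighted_label(ratings, w, 0))
```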


J. Li and D. Liu, “DPSO-Based Clustering Routing Algorithm for Energy Harvesting Wireless Sensor Networks,” Wireless Communications & Signal Processing (WCSP), 2015 International Conference on, Nanjing, 2015, pp. 1-5. doi: 10.1109/WCSP.2015.7341030
Abstract: Energy harvesting wireless sensor networks (EH-WSNs) have been widely used in various areas in recent years. Unlike battery-powered wireless sensor networks, EH-WSNs are powered by energy harvested from the ambient environment. This change calls for new designs of routing protocols in EH-WSNs. This paper proposes a novel centralized clustering routing algorithm based on discrete particle swarm optimization (DPSO). The base station (BS) gathers status information from all nodes, then runs a modified DPSO algorithm to find the optimal topology for the wireless sensor network. Cluster-head election and member-node attribution are treated as a single overall problem and optimized simultaneously. Simulation results show that the DPSO-based clustering routing has a stronger ability to balance energy consumption among sensor nodes in EH-WSNs and increases network throughput by 15% over sLEACH.
Keywords: energy harvesting; particle swarm optimisation; pattern clustering; routing protocols; telecommunication network topology; wireless sensor networks; BS; DPSO-based clustering routing algorithm; EH-WSN; base station; centralized clustering routing protocol algorithm; discrete particle swarm optimization; energy consumption; energy harvesting wireless sensor network; optimal topology; sLEACH; Algorithm design and analysis; Clustering algorithms; Energy harvesting; Optimization; Routing; Routing protocols; Wireless sensor networks (ID#: 16-10926)
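A generic binary-PSO sketch of cluster-head election can stand in for the paper's modified DPSO (the fitness function, sigmoid position update, and all parameters here are illustrative assumptions): each particle is a bit vector marking cluster heads, and the base station searches for a head set that balances residual energy against the desired head count.

```python
# Generic binary PSO for cluster-head election (illustrative stand-in for
# the paper's modified DPSO; fitness and parameters are assumptions).

import math, random

def fitness(bits, energy, target_heads):
    heads = [i for i, b in enumerate(bits) if b]
    if not heads:
        return float("-inf")
    head_energy = sum(energy[i] for i in heads) / len(heads)
    # Reward energetic heads, penalize deviation from the desired count.
    return head_energy - abs(len(heads) - target_heads)

def dpso(energy, target_heads, n_particles=20, iters=60, seed=1):
    random.seed(seed)
    n = len(energy)
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=lambda p: fitness(p, energy, target_heads))[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n):
                vel[p][d] = (0.7 * vel[p][d]
                             + 1.5 * random.random() * (pbest[p][d] - pos[p][d])
                             + 1.5 * random.random() * (gbest[d] - pos[p][d]))
                # Sigmoid maps velocity to the probability of the bit being 1.
                pos[p][d] = int(random.random() < 1 / (1 + math.exp(-vel[p][d])))
            if fitness(pos[p], energy, target_heads) > fitness(pbest[p], energy, target_heads):
                pbest[p] = pos[p][:]
            if fitness(pbest[p], energy, target_heads) > fitness(gbest, energy, target_heads):
                gbest = pbest[p][:]
    return gbest

energy = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3]   # residual energy per node
heads = dpso(energy, target_heads=2)
print([i for i, b in enumerate(heads) if b])
```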


A. R. Estrada and L. T. Schlemer, “Taxonomy of Faculty Assumptions About Students,” Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-5. doi: 10.1109/FIE.2015.7344036
Abstract: If you are part of a faculty meeting, a committee, or a learning community of instructors, you will sooner or later hear the same conversation - the conversation that begins with complaints about students. Attributions about lack of engagement, focus on grades, or the entitlement of this generation are common, and though typically unexamined, such complaints are not completely ungrounded. This narrative creates a community around a shared “problem.” This camaraderie is natural, but what are the consequences? Beyond whether such statements are “true,” we believe these assumptions about students are affecting student learning. There is a phenomenon in education known as the “self-fulfilling prophecy,” where what we believe about students becomes manifest, in part because instructors behave in ways that bring about what they initially expect. As a first step in exploring these assumptions, 150 participants in a Teaching Professor Conference in May 2014 generated a list of assumptions they held about students. These assumptions were categorized into four dimensions - Motivation, Behavior, Preparation, and Systems - with each dimension having a continuum. This paper describes the taxonomy and references theories to support the organization. The paper gives examples of the assumptions and discusses the next steps to validate the ideas.
Keywords: teaching; faculty assumptions taxonomy; faculty meeting; instructor learning community; self-fulfilling prophesy; student learning; teaching professor conference; Context; Engineering education; Organizations; Psychology; Space exploration; Taxonomy; Faculty assumptions; psychology (ID#: 16-10927)


J. Albadarneh et al., “Using Big Data Analytics for Authorship Authentication of Arabic Tweets,” 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), Limassol, Cyprus, 2015, pp. 448-452. doi: 10.1109/UCC.2015.80
Abstract: Authorship authentication of a certain text is concerned with correctly attributing it to its author based on its contents. It is a very important problem with deep roots in history, as many classical texts have doubtful attributions. The information age and the ubiquitous use of the Internet are further complicating this problem and adding more dimensions to it. We are interested in the modern version of this problem, where the text whose authorship needs authentication is an online text found in online social networks. Specifically, we are interested in the authorship authentication of tweets. This is not the only challenging aspect we consider here. Another challenging aspect is the language of the tweets. Most current works and existing tools support English. We chose to focus on the very important, yet largely understudied, Arabic language. Finally, we add another challenging aspect to the problem at hand by addressing it at a very large scale. We present our effort to employ big data analytics to address the authorship authentication problem of Arabic tweets. We start by crawling a dataset of more than 53K tweets distributed across 20 authors. We then use preprocessing steps to clean the data and prepare it for analysis. The next step is to compute the feature vector of each tweet. We use the Bag-Of-Words (BOW) approach and compute the weights using the Term Frequency-Inverse Document Frequency (TF-IDF). Then, we feed the dataset to a Naive Bayes classifier implemented on Hadoop, a parallel and distributed computing framework. To the best of our knowledge, none of the previous works on authorship authentication of Arabic text addressed the unique challenges associated with (1) tweets and (2) large-scale datasets. This makes our work unique on many levels. The results show that the testing accuracy is not very high (61.6%), which is expected in the very challenging setting that we consider.
Keywords: Algorithm design and analysis; Authentication; Big data; Clustering algorithms; Electronic mail; Machine learning algorithms; Support vector machines (ID#: 16-10928)
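A single-machine miniature of the described pipeline can make the steps concrete (the Hadoop parallelization and Arabic-specific preprocessing are out of scope here, and the toy corpus is invented): bag-of-words TF-IDF weights feed a multinomial Naive Bayes classifier.

```python
# Miniature sketch of the TF-IDF + Naive Bayes pipeline (toy data; the
# distributed Hadoop implementation in the paper is not reproduced).

import math
from collections import Counter, defaultdict

def tfidf(docs):
    """One TF-IDF weight vector (dict) per document."""
    df = Counter(w for d in docs for w in set(d.split()))
    n = len(docs)
    return [{w: tf / len(d.split()) * math.log(n / df[w])
             for w, tf in Counter(d.split()).items()} for d in docs]

def train_nb(vectors, authors):
    """Accumulate per-author feature mass and class priors."""
    weight = defaultdict(lambda: defaultdict(float))
    prior = Counter(authors)
    for vec, a in zip(vectors, authors):
        for w, x in vec.items():
            weight[a][w] += x
    return weight, prior, len(vectors)

def predict(weight, prior, n, text, alpha=0.1):
    """Multinomial NB with Lidstone smoothing over TF-IDF mass."""
    best, best_lp = None, float("-inf")
    vocab = len({w for ws in weight.values() for w in ws})
    for a in prior:
        total = sum(weight[a].values())
        lp = math.log(prior[a] / n)
        for w in text.split():
            lp += math.log((weight[a][w] + alpha) / (total + alpha * vocab))
        if lp > best_lp:
            best, best_lp = a, lp
    return best

docs = ["the market rises today", "goal scored in the match",
        "stocks and market news", "the match ended in a draw"]
authors = ["finance", "sports", "finance", "sports"]
vecs = tfidf(docs)
w, p, n = train_nb(vecs, authors)
print(predict(w, p, n, "market news today"))
```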


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.