Computer science projects involve the design and development of various application-based software. Computer science project topics can be implemented with a variety of tools, for example .NET, Java, Android, PHP, MATLAB, NS2, and VLSI. Nxtlogic Software Solution offers IEEE 2016 computer science projects in domains such as Data Mining, Image Processing, Cloud Computing, Network Security, and Mobile Computing.
S.No | Project Code | Project Title | View Abstract

BIG DATA
Clustering techniques have been widely adopted in many real-world data analysis applications, such as customer behavior analysis, targeted marketing, and digital forensics. With the explosion of data in today's big data era, a major trend in handling clustering over large-scale datasets is outsourcing it to public cloud platforms, because cloud computing offers not only reliable services with performance guarantees but also savings on in-house IT infrastructure. However, since datasets used for clustering may contain sensitive information, e.g., patient health information, commercial data, and behavioral data, directly outsourcing them to public cloud servers inevitably raises privacy concerns.
1 | BD17NXT01 | Practical Privacy-Preserving MapReduce Based K-means Clustering over Large-scale Dataset |  |
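For readers new to the technique, the following is a minimal plaintext sketch of one k-means iteration expressed as map and reduce phases in Python. It illustrates only the clustering computation that the paper outsources, not its privacy-preserving protocol; all data and names here are illustrative.

```python
from math import dist  # Python 3.8+: Euclidean distance

def map_phase(points, centroids):
    """Map: emit (nearest-centroid-index, point) pairs."""
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
        yield idx, p

def reduce_phase(pairs, centroids):
    """Reduce: average the points assigned to each centroid."""
    k = len(centroids)
    sums = [[0.0, 0.0] for _ in range(k)]
    counts = [0] * k
    for idx, (x, y) in pairs:
        sums[idx][0] += x
        sums[idx][1] += y
        counts[idx] += 1
    # keep the old centroid when a cluster receives no points
    return [[s[0] / n, s[1] / n] if n else c
            for s, n, c in zip(sums, counts, centroids)]

points = [(1, 1), (1.5, 2), (8, 8), (9, 9)]
centroids = [[1, 1], [8, 8]]
for _ in range(5):  # a few synchronous MapReduce rounds
    centroids = reduce_phase(map_phase(points, centroids), centroids)
print(centroids)  # -> [[1.25, 1.5], [8.5, 8.5]]
```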
Community question answering services (CQAS) (e.g., Yahoo! Answers) provide a platform where people post questions and answer questions posed by others. Previous works analyzed answer quality (AQ) based on answer-related features but neglected the effect of question-related features on AQ. Other previous work analyzed how asker- and question-related features affect question quality (QQ) in terms of the amount of attention from users, the number of answers, and the question-solving latency, but neglected the correlation between QQ and AQ (measured by the rating of the best answer), which is critical to quality of service (QoS). We handle this problem from two aspects. First, we additionally use QQ in measuring AQ, and analyze the correlation between a comprehensive list of features (including answer-related features) and QQ. Second, we propose the first method that estimates the probability that a given question obtains high AQ. Our analysis of the Yahoo! Answers trace confirms that our identified features exert influence on AQ, which in turn determines QQ. For the correlation analysis, previous classification algorithms cannot consider the mutual interactions between multiple (>2) classes of features, so we propose a novel Coupled Semi-Supervised Mutual Reinforcement-based Label Propagation (CSMRLP) algorithm for this purpose. Our extensive experiments show that CSMRLP outperforms Mutual Reinforcement-based Label Propagation (MRLP) and five other traditional classification algorithms in the accuracy of AQ classification, and demonstrate the effectiveness of our proposed method in AQ prediction. Finally, we provide suggestions on how to create a question that will receive high AQ, which can be exploited to improve the QoS of CQAS.
2 | BD17NXT02 | Question Quality Analysis and Prediction in Community Question Answering Services with Coupled Mutual Reinforcement |  |
Many data owners are required to release their data in a variety of real-world applications, since it is of vital importance to discover the valuable information behind the data. However, existing re-identification attacks on the AOL and ADULTS datasets have shown that publishing such data directly may pose tremendous threats to individual privacy. It is therefore urgent to resolve re-identification risks by recommending effective de-identification policies that guarantee both the privacy and the utility of the data. De-identification policies are one model that can meet such requirements; however, the number of de-identification policies is exponentially large due to the broad domain of quasi-identifier attributes. To better control the trade-off between data utility and data privacy, skyline computation can be used to select such policies, but efficient skyline processing over a large number of policies remains challenging. In this paper, we propose a parallel algorithm called SKY-FILTER-MR, based on MapReduce, which overcomes this challenge by computing skylines over large-scale de-identification policies represented as bit-strings. To further improve performance, a novel approximate skyline computation scheme is proposed that prunes unqualified policies using approximate domination relationships. With the approximate skyline, the filtering power in the policy-space generation stage is greatly strengthened, effectively decreasing the cost of skyline computation over alternative policies. Extensive experiments over both real-life and synthetic datasets demonstrate that our proposed SKY-FILTER-MR algorithm substantially outperforms the baseline approach, running up to four times faster in the best case, which indicates good scalability over large policy sets.
3 | BD17NXT03 | Efficient Recommendation of De-identification Policies using MapReduce |  |
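As background, the skyline (dominance) computation mentioned above can be illustrated with a naive single-machine sketch in Python. The paper's MapReduce parallelism, bit-string encoding, and approximate pruning are omitted, and the scoring dimensions below are made-up placeholders.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one. Here lower values are better."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(policies):
    """Naive O(n^2) skyline: keep policies not dominated by any other."""
    return [p for p in policies
            if not any(dominates(q, p) for q in policies if q is not p)]

# each tuple: (privacy risk, utility loss) of a candidate policy
candidates = [(0.2, 0.9), (0.5, 0.4), (0.3, 0.95), (0.9, 0.1), (0.6, 0.5)]
print(skyline(candidates))  # -> [(0.2, 0.9), (0.5, 0.4), (0.9, 0.1)]
```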
Attribute-based encryption (ABE) has been widely used in cloud computing where a data provider outsources his/her encrypted data to a cloud service provider, and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with the prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality while existing systems only achieve it by defining a weaker security notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies without revealing the underlying plaintext.
4 | BD17NXT04 | Attribute Based Storage Supporting Secure Deduplication of Encrypted Data in Cloud |  |
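To make the deduplication idea concrete, here is a toy Python sketch of tag-based duplicate detection of the kind a private cloud might perform. The tag here is simply a SHA-256 digest of the content, standing in for the paper's actual attribute-based machinery; all class and field names are hypothetical.

```python
import hashlib

class PrivateCloudDedup:
    """Toy duplicate detector: the private cloud stores only content-derived
    tags; the actual ciphertexts would live on the public cloud."""
    def __init__(self):
        self.tags = {}  # tag -> storage reference on the public cloud

    def check_and_register(self, data: bytes, ref: str) -> str:
        tag = hashlib.sha256(data).hexdigest()  # deterministic per content
        if tag in self.tags:
            return self.tags[tag]   # duplicate: reuse the existing copy
        self.tags[tag] = ref        # first upload: keep a single copy
        return ref

d = PrivateCloudDedup()
print(d.check_and_register(b"patient record 42", "blob-001"))  # blob-001
print(d.check_and_register(b"patient record 42", "blob-002"))  # blob-001 (dedup)
```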
Secure data deduplication can significantly reduce the communication and storage overheads in cloud storage services, and has potential applications in our big-data-driven society. Existing data deduplication schemes are generally designed either to resist brute-force attacks or to ensure efficiency and data availability, but not both. We are also not aware of any existing scheme that achieves accountability, in the sense of reducing duplicate information disclosure (e.g., determining whether the plaintexts of two encrypted messages are identical). In this paper, we investigate a three-tier cross-domain architecture and propose an efficient and privacy-preserving big data deduplication scheme for cloud storage (hereafter referred to as EPCDD). EPCDD achieves both privacy preservation and data availability, and resists brute-force attacks. In addition, we take accountability into consideration to offer better privacy assurances than existing schemes. We then demonstrate that EPCDD outperforms existing competing schemes in terms of computation, communication, and storage overheads. In addition, the time complexity of duplicate search in EPCDD is logarithmic.
5 | BD17NXT05 | Achieving Efficient and Privacy-Preserving Cross-Domain Big Data Deduplication in Cloud |  |
Privacy has become a considerable issue as big data applications grow dramatically in cloud computing. These emerging technologies have improved or changed service models and application performance from various perspectives. However, the remarkable growth in data volume has also resulted in many practical challenges. The execution time of data encryption is one of the serious issues during data processing and transmission, and many current applications abandon data encryption in order to reach an acceptable performance level, despite the privacy concerns. In this paper, we concentrate on privacy and propose a novel data encryption approach called the Dynamic Data Encryption Strategy (D2ES). The proposed approach aims to selectively encrypt data, using privacy classification methods under timing constraints.
6 | BD17NXT06 | Privacy-Preserving Data Encryption Strategy for Big Data in Mobile Cloud Computing |  |
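The selective-encryption idea can be sketched roughly as follows: order items by privacy level and encrypt as many of the most sensitive ones as the time budget allows. This is a schematic Python illustration, not the paper's D2ES algorithm; the `encrypt` stub and all field names are placeholders.

```python
import time

def encrypt(data: bytes) -> bytes:
    """Stand-in for a real cipher (e.g., AES); NOT secure, demo only."""
    return bytes(b ^ 0x5A for b in data)

def selective_encrypt(items, time_budget_s):
    """Encrypt the most privacy-sensitive items first, within a budget."""
    items = sorted(items, key=lambda it: it["privacy_level"], reverse=True)
    deadline = time.monotonic() + time_budget_s
    out = []
    for it in items:
        if time.monotonic() < deadline:
            out.append({**it, "payload": encrypt(it["payload"]), "enc": True})
        else:  # budget exhausted: remaining low-sensitivity data sent as-is
            out.append({**it, "enc": False})
    return out

batch = [{"privacy_level": 3, "payload": b"ssn"},
         {"privacy_level": 1, "payload": b"weather"}]
print([(i["privacy_level"], i["enc"])
       for i in selective_encrypt(batch, 0.01)])
```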
The k-nearest neighbors (k-NN) query is a fundamental primitive in spatial and multimedia databases, with extensive applications in location-based services, classification, clustering, and so on. With the promise of confidentiality and privacy, massive data are increasingly outsourced to the cloud in encrypted form to enjoy the advantages of cloud computing (e.g., reduced storage and query processing costs). Recently, many schemes have been proposed to support k-NN queries on encrypted cloud data. However, prior works all assume that the query users (QUs) are fully trusted and know the key of the data owner (DO), which is used to encrypt and decrypt the outsourced data. This assumption is unrealistic in many situations, since many users are neither fully trusted nor in possession of the key. In this paper, we propose a novel scheme for secure k-NN queries on encrypted cloud data with multiple keys, in which the DO and each QU hold their own different keys and do not share them with each other; meanwhile, the DO encrypts and decrypts the outsourced data using his/her own key. Our scheme is constructed from a distributed two-trapdoor public-key cryptosystem (DT-PKC) and a set of secure two-party computation protocols, which not only preserves data confidentiality and query privacy but also supports an offline data owner.
7 | BD17NXT07 | Secure k-NN Query on Encrypted Cloud Data with Multiple Keys |  |
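For context, the plaintext primitive that such a scheme protects is ordinary k-NN search; a minimal Python baseline is shown below, with the cryptographic protocols (DT-PKC and the two-party computations) entirely omitted and the data invented.

```python
from math import dist
import heapq

def knn(points, q, k):
    """Return the k points nearest to query q (plaintext baseline;
    the paper runs this same primitive over encrypted data)."""
    return heapq.nsmallest(k, points, key=lambda p: dist(p, q))

db = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
print(knn(db, q=(6, 3), k=2))  # -> [(5, 4), (7, 2)]
```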
In order to analyze a news-article dataset, we first extract important information such as the title, date, and body paragraphs, while removing unnecessary information such as images, captions, footers, advertisements, navigation, and recommended news. The problem is that the formats of news articles change over time and also vary across news sources, and even across sections of the same source. So it is important for a model to generalize when predicting unseen news-article formats. We confirmed through experiments that a machine-learning-based model predicts new data better than a rule-based model. We also show that noise in the body can be removed because we define the classification unit as a leaf node itself. General machine-learning-based models, by contrast, cannot remove such noise: since they treat the classification unit as an intermediate node consisting of a set of leaf nodes, they cannot classify a leaf node itself.
8 | BD17NXT08 | SVM-based web content mining with leaf classification unit from DOM-tree |  |
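A rough sketch of the approach, assuming scikit-learn and invented per-leaf features (text length, link density, DOM depth), might look like the following; the paper's actual feature set and training data are not shown here.

```python
# pip install scikit-learn
from sklearn.svm import SVC

# Hypothetical per-leaf-node features: [text length, link density, DOM depth]
X = [[400, 0.02, 5],   # long paragraph, few links   -> body text
     [ 12, 0.90, 7],   # short, mostly links         -> navigation noise
     [350, 0.05, 6],
     [  8, 1.00, 8],
     [ 20, 0.10, 4],   # caption-like                -> noise
     [500, 0.00, 5]]
y = ["body", "noise", "body", "noise", "noise", "body"]

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([[420, 0.01, 5], [10, 0.95, 7]]))  # expected: ['body' 'noise']
```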
An analysis of the Federal Aviation Administration (FAA) certification requirements has revealed a potential error in the temperature correction formula the FAA requires manufacturers to use to process engine cooling data for compliance with Part 23 of Title 14 of the Code of Federal Regulations (14 CFR Part 23). The FAA engine cooling performance temperature correction formula, which predicts the critical temperature of engine components, was quantitatively evaluated using data acquired with a single-engine aircraft powered by a normally aspirated, air-cooled, reciprocating engine. Previous research has failed to analyze the effect ambient air temperature has on oil temperature and has failed to propose a more accurate model for predicting critical cylinder head and oil temperature. Engine cylinder head and oil temperature data were acquired during FAA-defined, cooling-performance climbs performed on three days with different ambient air temperatures. The acquired data revealed the FAA correction formula, when applied to cylinder head temperature, did not correct data to the most critical test condition, potentially leaving certified aircraft vulnerable to overheating. The acquired data showed the FAA correction formula, when applied to oil temperature, worked fairly well. Nonetheless, there was room for improvement. Thus, new temperature correction formulas were developed to predict both critical engine cylinder head and oil temperature for compliance with 14 CFR Part 23. The FAA can evaluate these correction formulas for use in 14 CFR Part 23 to ensure engine cooling performance is properly tested and general aviation aircraft remain safe from overheating.
9 | BD17NXT09 | A new temperature correction algorithm for FAA engine cooling tests |  |
Patent documents provide a significant source of knowledge about future technologies, and many attempts have been made to mine important knowledge from patents to analyze new technology trends. In this paper, we analyze implicit knowledge derived from the patent dataset of the Big Data domain from KIPRIS. Keywords occurring in patent titles are classified into three categories (Approach, Goal Object, and Goal Predicate) in order to model the relations among title patterns. The same keywords found in each timeline interval are analyzed and illustrated as patent patterns that depict the relationship between the goals and approaches of patents occurring in different time intervals. As a result, implicit trends and knowledge related to specific technology keywords in each time interval can be obtained. Search results using Goal Object, Goal Predicate, and Approach pattern queries are also found to be efficient and to meet user enquiries about related technologies over the timeline.
10 | BD17NXT10 | Study on extracting implicit patterns of patent data based on timeline |  |
CLOUD COMPUTING
Many cloud service providers (CSPs) provide data storage services with datacenters distributed worldwide. These datacenters offer different get/put latencies and unit prices for resource utilization and reservation. Thus, when selecting datacenters of different CSPs, cloud customers of globally distributed applications (e.g., online social networks) face two challenges: 1) how to allocate data to worldwide datacenters to satisfy application service level objective (SLO) requirements, including both data retrieval latency and availability; and 2) how to allocate data and reserve resources in datacenters belonging to different CSPs to minimize the payment cost. To handle these challenges, we first model the cost minimization problem under SLO constraints using integer programming. Due to its NP-hardness, we then introduce a heuristic solution, including a dominant-cost-based data allocation algorithm and an optimal resource reservation algorithm. We further propose three enhancement methods to reduce the payment cost and service latency: 1) coefficient-based data reallocation; 2) multicast-based data transfer; and 3) request-redirection-based congestion control. We finally introduce an infrastructure that enables the deployment of these algorithms. Our trace-driven experiments on a supercomputing cluster and on real clouds (i.e., Amazon S3, Windows Azure Storage, and Google Cloud Storage) show the effectiveness of our algorithms for SLO-guaranteed services and customer cost minimization.
1 | CC17NXT01 | Minimum-Cost Cloud Storage Service across Multiple Cloud Providers |  |
Along with the development of cloud computing, an increasing number of enterprises have started to adopt cloud services, which has promoted the emergence of many cloud service providers. For cloud service providers, how to configure their cloud service platforms to obtain the maximum profit has increasingly become a focus of attention. In this paper, we take customer satisfaction into consideration to address this problem. Customer satisfaction affects the profit of cloud service providers in two ways. On one hand, the cloud configuration affects the quality of service, which is an important factor affecting customer satisfaction. On the other hand, customer satisfaction affects the request arrival rate of a cloud service provider. However, few existing works take customer satisfaction into consideration when solving the profit maximization problem, and those that do consider it do not give a properly formalized definition of it. Hence, we first draw on the definition of customer satisfaction in economics and develop a formula for measuring customer satisfaction in cloud computing. A detailed analysis is then given of how customer satisfaction affects profit. Finally, taking into consideration customer satisfaction, service-level agreements, renting price, energy consumption, and so forth, a profit maximization problem is formulated and solved to obtain the optimal configuration that maximizes profit.
2 | CC17NXT02 | Customer-Satisfaction-Aware Optimal Multiserver Configuration for Profit Maximization in Cloud Computing |  |
Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique for fine-grained access control of outsourced data in the cloud. However, some drawbacks of key management hinder the popularity of its application. One drawback in urgent need of a solution is the key escrow problem. We observe that front-end client devices such as smartphones generally have limited privacy protection, so if private keys are entirely held by them, clients risk a key exposure that was hardly noticed but inherently existed in previous research. Furthermore, the enormous client-side decryption overhead limits the practical use of ABE. In this paper, we propose a collaborative key management protocol in CP-ABE. Our construction realizes distributed generation, issue, and storage of private keys without adding any extra infrastructure. A fine-grained and immediate attribute revocation is provided for key update. The proposed collaborative mechanism effectively solves not only the key escrow problem but also key exposure, and meanwhile it markedly reduces the client decryption overhead. A comparison with other representative CP-ABE schemes demonstrates that our scheme has somewhat better performance for cloud-based outsourced data sharing on mobile devices. Finally, we provide proof of security for the proposed protocol.
3 | CC17NXT03 | A Collaborative Key Management Protocol in Ciphertext Policy Attribute-Based Encryption for Cloud Data Sharing |  |
Cloud storage services allow users to outsource their data to cloud servers to save on local data storage costs. However, unlike with local storage devices, users do not physically manage the data stored on cloud servers; therefore, the integrity of the outsourced data becomes an issue. Many public verification schemes have been proposed to enable a third-party auditor to verify data integrity for users, but these schemes make an impractical assumption: that auditors have enough computation capability to bear expensive verification costs. In this paper, we propose a novel public verification scheme for cloud storage using indistinguishability obfuscation, which requires only lightweight computation on the auditor and delegates most computation to the cloud. We further extend our scheme to support batch verification and dynamic data operations, so that multiple verification tasks from different users can be performed efficiently by the auditor and the cloud-stored data can be updated dynamically. Compared with other existing works, our scheme significantly reduces the auditor's computation overhead; moreover, the batch verification overhead on the auditor side is independent of the number of verification tasks. Our scheme is practical in scenarios where data integrity verifications are executed frequently and the number of verification tasks (i.e., the number of users) is large; even if the auditor is equipped with a low-power device, it can verify data integrity efficiently. We prove the security of our scheme under the strongest security model proposed by Shi et al. (ACM CCS 2013). Finally, we conduct a performance analysis to demonstrate that our scheme is more efficient than other existing works in terms of the auditor's communication and computation efficiency.
4 | CC17NXT04 | Efficient Public Verification of Data Integrity for Cloud Storage Systems from Indistinguishability Obfuscation |  |
The unabated plethora of research activities to augment multifarious mobile devices by leveraging cloud resources has created a new research field called Mobile Cloud Computing (MCC). Researchers have found MCC to be a promising venture, combining the advantages of both the cloud and mobility to give users access to cloud resources wherever they are using their mobile phones. Despite its benefits, many challenges still remain with MCC. This paper addresses recent challenges ranging from limited computational capacity, connectivity, security, and latency to heterogeneity. Solutions to these challenges are suggested, including offloading mobile processes, using HTML5 technologies to compensate for connectivity loss, using SDN centralized control features for data security, utilizing the cloud for time-consuming applications, and using cross-platform applications to neutralize heterogeneous technologies. Finally, the paper concludes with an extracted list of new research ideas for scholars to work on to further enhance MCC.
5 | CC17NXT05 | A critical overview of latest challenges and solutions of Mobile Cloud Computing Systems |  |
Attribute-based encryption (ABE) has been widely used in cloud computing where a data provider outsources his/her encrypted data to a cloud service provider, and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with the prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality while existing systems only achieve it by defining a weaker security notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies without revealing the underlying plaintext.
6 | CC17NXT06 | Attribute-Based Storage Supporting Secure Deduplication of Encrypted Data in Cloud |  |
It is important for cloud service brokers to provide a multi-cloud storage service that minimizes their payment cost to cloud service providers (CSPs) while providing service level objective (SLO) guarantees to their customers. Many multi-cloud storage services have been proposed for payment cost minimization or SLO guarantee. However, no previous work fully leverages current cloud pricing policies (such as resource reservation pricing) to reduce the payment cost, and few works achieve both cost minimization and SLO guarantee. In this paper, we propose a multi-cloud Economical and SLO-guaranteed Storage Service (ES3), which determines data allocation and resource reservation schedules with payment cost minimization and SLO guarantee. ES3 incorporates (1) a coordinated data allocation and resource reservation method, which allocates each data item to a datacenter and determines the resource reservation amount on datacenters by leveraging all the pricing policies; and (2) a genetic-algorithm-based data allocation adjustment method, which reduces data Get/Put rate variance in each datacenter to maximize the reservation benefit. We also propose several algorithms to enhance the cost efficiency and SLO guarantee performance of ES3, including i) dynamic request redirection, ii) grouped Gets for cost reduction, iii) lazy update for cost-efficient Puts, and iv) concurrent requests for rigid Get SLO guarantee. Our trace-driven experiments on a supercomputing cluster and on real clouds (i.e., Amazon S3, Windows Azure Storage, and Google Cloud Storage) show the superior performance of ES3 in payment cost minimization and SLO guarantee in comparison with previous methods.
7 | CC17NXT07 | An Economical and SLO-Guaranteed Cloud Storage Service across Multiple Cloud Service Providers |  |
Technology offers the potential to improve healthcare service delivery. The objective of a healthcare Web site is to provide services and updated information at low cost to all people, regardless of their abilities and disabilities, which can reduce overcrowding in hospitals and reduce the spread of disease. In a developing country like India, where hospitals are overcrowded, healthcare Web sites can play a major role in delivering updated healthcare services. Designing an effective healthcare Web site is therefore becoming essential, as numerous people access the Web to gather information about hospitals and their healthcare services. Since no specific guidelines are available for designing healthcare Web sites, it has become extremely important to evaluate hospital Web sites and address their design issues. The objective of this paper is to evaluate the accessibility, usability, and security of hospital Web sites in the metro cities of India. Accessibility is evaluated using the WCAG 2.0 accessibility guidelines; usability is evaluated using readability scores and language analysis; and the security analysis takes the content management system into account.
8 | CC17NXT08 | Evaluating the accessibility, usability and security of Hospitals websites: An exploratory study |  |
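As one concrete example of the readability analysis mentioned above, the Flesch Reading Ease score is 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words). The Python sketch below uses a crude vowel-group heuristic for syllable counting and is illustrative only; the paper's exact metric may differ.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease(
    "Visit our clinic. Our doctors provide quality care for every patient."), 1))
```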
Cloud computing provides individuals and enterprises with massive computing power and scalable storage capacities to support a variety of big data applications in domains like health care and scientific research; therefore, more and more data owners outsource their data to cloud servers for the great convenience in data management and mining. However, datasets such as electronic health records usually contain sensitive information, which raises privacy concerns if the documents are released or shared with partially untrusted third parties in the cloud. A practical and widely used technique for data privacy preservation is to encrypt data before outsourcing to the cloud servers, which however reduces data utility and makes many traditional data analytic operators, like keyword-based top-k document retrieval, obsolete. In this paper, we investigate the multi-keyword top-k search problem for big data encryption against privacy breaches, and attempt to identify an efficient and secure solution to this problem. Specifically, to address the privacy concern over query data, we construct a special tree-based index structure and design a random traversal algorithm, which makes even identical queries produce different visiting paths on the index while keeping query accuracy unchanged under stronger privacy. To improve query efficiency, we propose a group multi-keyword top-k search scheme based on the idea of partitioning, where a group of tree-based indexes is constructed for all documents. Finally, we combine these methods into an efficient and secure approach to our proposed top-k similarity search. Extensive experimental results on real-life datasets demonstrate that our proposed approach significantly improves the capability of defending against privacy breaches, as well as the scalability and time efficiency of query processing, over state-of-the-art methods.
9 | CC17NXT09 | Privacy-Preserving Multi-keyword Top-k Similarity Search Over Encrypted Data |  |
Cloud computing has generated much interest in the research community in recent years for its many advantages, but it has also raised security and privacy concerns. The storage of and access to confidential documents have been identified as one of the central problems in the area. In particular, many researchers have investigated solutions for searching over encrypted documents stored on remote cloud servers. While many schemes have been proposed to perform conjunctive keyword search, less attention has been paid to more specialized searching techniques. In this paper, we present a phrase search technique based on Bloom filters that is significantly faster than existing solutions, with similar or better storage and communication costs. Our technique uses a series of n-gram filters to support the functionality. The scheme exhibits a trade-off between storage and false positive rate, and is adaptable to defend against inclusion-relation attacks. A design approach based on an application's target false positive rate is also described.
10 | CC17NXT10 | Fast Phrase Search for Encrypted Cloud Storage |  |
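To illustrate the building blocks, here is a toy Python Bloom filter indexing the word bigrams of an (unencrypted) document, with a phrase query checking that every consecutive bigram of the phrase is present. The paper's encrypted construction and its attack defenses are not reproduced here, and all parameters are arbitrary.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):  # k independent hash positions
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):  # false positives possible, never negatives
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def word_bigrams(text):
    words = text.lower().split()
    return [f"{a} {b}" for a, b in zip(words, words[1:])]

doc = "the quick brown fox jumps over the lazy dog"
bf = BloomFilter()
for g in word_bigrams(doc):
    bf.add(g)

# phrase query: all consecutive bigrams of the phrase must be present
phrase = "quick brown fox"
print(all(g in bf for g in word_bigrams(phrase)))  # -> True (probabilistic)
```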
We present a set of Virtual Machine (VM) allocators for Cloud Data Centers (DCs) that perform the joint allocation of computing and network resources. VM requests are defined in terms of system resources (CPU, RAM, and disk) and network resources (bandwidth). For the former, we allocate VM resources following two different policies, Best Fit and Worst Fit, corresponding to consolidation and spreading strategies respectively. For each server, the allocators choose the network path that minimizes electrical power consumption, evaluated according to a precise model designed specifically for network switches. More specifically, we implemented different allocation algorithms based on fuzzy logic and on single- and multi-objective optimization. Simulation tests have been carried out to evaluate the performance of the allocators in terms of the number of allocated VMs under each policy.
11 | CC17NXT11 | Power Consumption-Aware Virtual Machine Placement in Cloud Data Center |  |
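The two system-resource policies can be sketched in a few lines of Python over a single CPU dimension; the power model, fuzzy logic, and multi-objective variants from the paper are omitted, and the capacities below are made up.

```python
def place_vm(servers, demand, policy="best_fit"):
    """servers: list of free-CPU capacities. Returns chosen server index.
    best_fit  -> consolidation: tightest server that still fits
    worst_fit -> spreading: roomiest server"""
    feasible = [(free, i) for i, free in enumerate(servers) if free >= demand]
    if not feasible:
        return None  # request cannot be allocated
    free, i = (min if policy == "best_fit" else max)(feasible)
    servers[i] -= demand
    return i

servers = [16, 8, 4]          # free CPU cores per server
print(place_vm(servers, 4, "best_fit"))   # -> 2 (consolidate onto tightest)
print(place_vm(servers, 4, "worst_fit"))  # -> 0 (spread onto roomiest)
print(servers)                            # -> [12, 8, 0]
```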
Cloud storage provides convenient, massive, and scalable storage at low cost, but data privacy is a major concern that prevents users from trustingly storing files on the cloud. One way of enhancing privacy from the data owner's point of view is to encrypt files before outsourcing them to the cloud and decrypt them after downloading. However, data encryption imposes a heavy overhead on mobile devices, and the data retrieval process incurs complicated communication between the data user and the cloud. Given the typically limited bandwidth capacity and battery life, these issues introduce heavy computing and communication overhead as well as higher power consumption for mobile device users, which makes encrypted search over the mobile cloud very challenging. In this paper, we propose TEES (Traffic and Energy saving Encrypted Search), a bandwidth- and energy-efficient encrypted search architecture over the mobile cloud.
12 | CC17NXT12 | TEES: An Efficient Search Scheme over Encrypted Data on Mobile Cloud |  |
In a virtualization environment that serves multiple tenants (independent organizations), storage consolidation at the filesystem level is desirable because it enables data sharing, administration efficiency, and performance optimizations. The scalable deployment of filesystems in such environments is challenging due to intermediate translation layers required for networked file access or identity management. First we define the entities involved in a multitenant filesystem and present relevant security requirements. Then we introduce the design of the Dike authorization architecture. It combines native access control with tenant namespace isolation and compatibility to object-based filesystems. We introduce secure protocols to authenticate the participating entities and authorize the data access over the network. We alternatively use a local cluster and a public cloud to experimentally evaluate a Dike prototype implementation that we developed. At several thousand tenants, our prototype incurs limited performance overhead below 21%, unlike a solution from industry whose multitenancy overhead approaches 84% in some cases.
13 | CC17NXT13 | Multitenant Access Control for Cloud-Aware Distributed Filesystems |  |
Nowadays, it is hard for cloud computing to effectively sustain the implementation of the commercial model of Internet service globalization. There is a growing trend toward building a cloud service environment, with the capacity to serve anytime and anywhere, through mutual cooperation between cloud service providers around the world. However, this tendency raises a key issue: how to provide a benign environment that allows self-collaboration and fair competition among different cloud service providers with diverse stakeholders. Inspired by the concept and structure of Service-Oriented Architecture (SOA) services, this paper proposes a structure named the JointCloud Corporation Environment (JCCE), which offers a mutually beneficial, win-win JointCloud environment for global cloud service providers. JCCE contains three core services: Distributed Cloud Transaction, Distributed Cloud Community, and Distributed Cloud Supervision. Facing different cloud service participants, JCCE also offers three main service modes for their consumption, supply, and coordination. This study plays a significant role in supporting the sharing and self-collaboration of multiple cloud entities, and in promoting the healthy and orderly development of the cloud service market.
14 | CC17NXT14 | Corporation Architecture for Multiple Cloud Service Providers in JointCloud Computing |  |
One of the challenges faced by data-intensive computing is the problem of stragglers, which can significantly increase the job completion time. Various proactive and reactive straggler mitigation techniques have been developed to address the problem. The straggler identification scheme is a crucial part of these mitigation techniques, as only when stragglers are detected not only correctly but also early enough can the improvement in job completion time make a real difference. Although the classical standard deviation method is a widely adopted straggler identification scheme, it is not an ideal solution due to certain inherent limitations. In this paper, we present the Tukey method, another statistical method for outlier detection, which is more suitable for the identification of stragglers for two reasons. First, it is robust to extreme observations from stragglers. Second, it can identify stragglers and, more importantly, start speculative execution earlier than the standard deviation method. Our extensive simulation results confirm that Tukey's method can remarkably outperform the standard deviation method.
15 | CC17NXT15 | An Improved Straggler Identification Scheme for Data-Intensive Computing on Cloud Platforms |  |
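Tukey's method flags as outliers the observations beyond the fences Q1 - 1.5 * IQR and Q3 + 1.5 * IQR; for stragglers only the upper fence matters. A minimal Python sketch with invented task runtimes:

```python
import statistics

def tukey_stragglers(durations, k=1.5):
    """Flag tasks whose runtime exceeds Q3 + k*IQR (Tukey's upper fence)."""
    q1, _, q3 = statistics.quantiles(durations, n=4)  # quartiles
    upper_fence = q3 + k * (q3 - q1)
    return [i for i, d in enumerate(durations) if d > upper_fence]

task_runtimes = [10.2, 9.8, 10.5, 11.0, 10.1, 9.9, 38.7, 10.4]
print(tukey_stragglers(task_runtimes))  # -> [6]: task 6 is a straggler
```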
DATA MINING
In the information technology world, the ability to effectively process massive datasets has become integral to a broad range of scientific and other academic disciplines. We are living in an era of data deluge, and as a result the term "Big Data" is appearing in many contexts, ranging from meteorology, genomics, complex physics simulations, biological and environmental research, and finance and business to healthcare. Big Data refers to data streams of higher velocity and higher variety. The infrastructure required to support the acquisition of Big Data must deliver low, predictable latency both in capturing data and in executing short, simple queries; it must be able to handle very high transaction volumes, often in a distributed environment, and support flexible, dynamic data structures. Data processing is considerably more challenging than simply locating, identifying, understanding, and citing data. For effective large-scale analysis, all of this has to happen in a completely automated manner, which requires differences in data structure and semantics to be expressed in forms that are computer-understandable and then "robotically" resolvable. There is a strong body of work in data integration, mapping, and transformation; however, considerable additional work is required to achieve automated error-free difference resolution. This paper proposes a framework based on recent research for data mining using Big Data.
1 | DM17NXT01 | Data Mining with Big Data |  |
Web search engines are composed of thousands of query processing nodes, i.e., servers dedicated to processing user queries. So many servers consume a significant amount of energy, mostly attributable to their CPUs, but they are necessary to ensure low latencies, since users expect sub-second response times (e.g., 500 ms). However, users can hardly notice response times that are faster than their expectations. Hence, we propose the Predictive Energy Saving Online Scheduling Algorithm (PESOS) to select the most appropriate CPU frequency to process a query on a per-core basis. PESOS aims to process queries by their deadlines, leveraging high-level scheduling information to reduce the CPU energy consumption of a query processing node. PESOS bases its decision on query efficiency predictors, which estimate the processing volume and processing time of a query. We experimentally evaluate PESOS on the TREC ClueWeb09B collection and the MSN2006 query log.
2 | DM17NXT02 | Energy-efficient Query Processing in Web Search Engines |  |
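The core scheduling rule, stripped of the paper's predictors and queueing details, is to run each query at the lowest CPU frequency whose predicted processing time still meets the deadline. Below is a schematic Python sketch with a made-up cycle-count predictor; it is not the PESOS algorithm itself.

```python
def pick_frequency(freqs_ghz, predict_time_s, deadline_s):
    """Choose the lowest CPU frequency whose predicted processing time
    still meets the query's deadline (schematic, PESOS-style rule)."""
    for f in sorted(freqs_ghz):  # lower frequency = lower energy
        if predict_time_s(f) <= deadline_s:
            return f
    return max(freqs_ghz)  # deadline unmeetable: run flat out

# toy predictor: assume this query needs about 1.2e9 CPU cycles
predict = lambda f: 1.2 / f  # seconds when running at f GHz
print(pick_frequency([1.0, 1.8, 2.6, 3.4], predict, deadline_s=0.5))  # -> 2.6
```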
We present a new two-level composition model for crowdsourced Sensor-Cloud services based on dynamic features such as spatio-temporal aspects. The proposed approach is defined based on a formal Sensor-Cloud service model that abstracts the functionality and non-functional aspects of sensor data on the cloud in terms of spatio-temporal features. A spatio-temporal indexing technique based on the 3D R-tree to enable fast identification of appropriate Sensor-Cloud services is proposed. A novel quality model is introduced that considers dynamic features of sensors to select and compose Sensor-Cloud services. The quality model defines Coverage as a Service which is formulated as a composition of crowdsourced Sensor-Cloud services. We present two new QoS-aware spatio-temporal composition algorithms to select the optimal composition plan.
3 | DM17NXT03 | Crowdsourced Coverage as a Service: Two-Level Composition of Sensor Cloud Services |  |
Getting back to previously viewed web pages is a common yet difficult task for users due to the large volume of personally accessed information on the web. This paper leverages humans' natural recall process of using episodic and semantic memory cues to facilitate recall, and presents a personal web revisitation technique called WebPagePrev that works through context and content keywords. Underlying techniques for the acquisition, storage, decay, and utilization of context and content memories for page re-finding are discussed. A relevance feedback mechanism is also included to tailor to an individual's memory strength and revisitation habits. Our six-month user study shows that: (1) compared with the existing web revisitation tool Memento, the History List Searching method, and the Search Engine method, the proposed WebPagePrev delivers the best re-finding quality in finding rate (92.10 percent), average F1-measure (0.4318), and average rank error (0.3145); and (2) our dynamic management of context and content memories, including the decay and reinforcement strategy, can mimic users' retrieval and recall mechanisms.
4 | DM17NXT04 | Personal Web Revisitation by Context and Content Keywords with Relevance Feedback |  |
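The decay-and-reinforcement idea can be sketched with exponentially decaying keyword weights. The paper's exact memory model is not specified here, so the half-life form below is an assumption, and all class and field names are illustrative.

```python
import math, time

class KeywordMemory:
    """Keyword weights decay over time and are reinforced on revisits."""
    def __init__(self, half_life_days=30):
        self.decay = math.log(2) / (half_life_days * 86400)  # per second
        self.store = {}  # keyword -> (weight, last_update_ts)

    def _current(self, kw, now):
        w, t = self.store.get(kw, (0.0, now))
        return w * math.exp(-self.decay * (now - t))

    def reinforce(self, kw, amount=1.0, now=None):
        now = now or time.time()
        self.store[kw] = (self._current(kw, now) + amount, now)

    def score(self, kw, now=None):
        return self._current(kw, now or time.time())

m = KeywordMemory()
m.reinforce("conference deadline")
print(round(m.score("conference deadline", now=time.time() + 30 * 86400), 2))
# -> 0.5 : the weight has halved after one half-life
```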
We study the problem of preserving user privacy in the publication of location sequences. Consider a database of trajectories, corresponding to movements of people, captured by their transactions when they use credit cards, RFID debit cards, or NFC-compliant devices. We show that, if such trajectories are published exactly (hiding only the identities of the persons that followed them), one can use partial trajectory knowledge as a quasi-identifier for the remaining locations in the sequence. We devise four intuitive techniques, based on combinations of location suppression and trajectory splitting, and we show that they can prevent privacy breaches while keeping the published data accurate for aggregate query answering and frequent-subset data mining.
5 | DM17NXT05 | Local Suppression and Splitting Techniques for Privacy Preserving Publication of Trajectories |  |
Mining the most influential location set finds the k locations traversed by the maximum number of unique trajectories in a given spatial region. These influential locations are valuable for resource allocation applications, such as selecting charging stations for electric automobiles and suggesting locations for placing billboards. This problem is NP-hard and usually calls for an interactive mining process involving a user's input, e.g., changing the spatial region and k, or removing some locations that are not eligible for an application according to domain knowledge. Efficiency is the major concern in conducting this human-in-the-loop mining. To this end, we propose a complete mining framework, which includes an optimal method for the light setting (i.e., small region and k) and an approximate method for the heavy setting (i.e., large region and k). The optimal method leverages vertex grouping and best-first pruning techniques to expedite the mining process. The approximate method provides a performance guarantee by utilizing the greedy heuristic, and comprises an efficient updating strategy, index partitioning, and workload-based optimization techniques. We evaluate the efficiency and effectiveness of our methods on two taxi datasets from China and one check-in dataset from New York.
6 | DM17NXT06 | Mining the Most Influential k-Location Set From Massive Trajectories |  |
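The greedy heuristic underlying the approximate method is standard max-coverage: repeatedly pick the location that covers the most not-yet-covered trajectories, which yields the usual (1 - 1/e) approximation guarantee. A minimal Python sketch with toy data:

```python
def greedy_influential_locations(coverage, k):
    """coverage: {location: set of trajectory ids passing through it}.
    Greedy (1 - 1/e)-approximation for the max-coverage objective."""
    covered, chosen = set(), []
    for _ in range(k):
        loc = max(coverage, key=lambda l: len(coverage[l] - covered))
        if not coverage[loc] - covered:
            break  # nothing new to cover
        chosen.append(loc)
        covered |= coverage[loc]
    return chosen, covered

coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
print(greedy_influential_locations(coverage, k=2))
# -> (['A', 'C'], {1, 2, 3, 4, 5, 6})
```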
Web search engines are composed of thousands of query processing nodes, i.e., servers dedicated to processing user queries. So many servers consume a significant amount of energy, mostly attributable to their CPUs, but they are necessary to ensure low latencies, since users expect sub-second response times (e.g., 500 ms). However, users can hardly notice response times that are faster than their expectations. Hence, we propose the Predictive Energy Saving Online Scheduling Algorithm (PESOS) to select the most appropriate CPU frequency to process a query on a per-core basis. PESOS aims to process queries by their deadlines, leveraging high-level scheduling information to reduce the CPU energy consumption of a query processing node. PESOS bases its decision on query efficiency predictors, which estimate the processing volume and processing time of a query. We experimentally evaluate PESOS on the TREC ClueWeb09B collection and the MSN2006 query log. Results show that PESOS can reduce the CPU energy consumption of a query processing node by up to ~48 percent compared to a system running at maximum CPU core frequency. PESOS also outperforms the best state-of-the-art competitor with a ~20 percent energy saving, while the competitor requires fine parameter tuning and may incur uncontrollable latency violations.
7 | DM17NXT07 | Energy-Efficient Query Processing in Web Search Engines |  |
This paper addresses the problem of automatic analytical pole-zero extraction for multi-stage operational amplifiers with frequency compensation. Traditional methods mainly rely on numerical references to derive approximate pole-zero expressions without incorporating any design knowledge; such methods suffer from poor interpretability of the auto-generated results. This paper takes a topological approach and attempts to show that a certain form of design knowledge can be incorporated into the symbolic term selection process for pole-zero generation. The generation engine selects the dominant terms by a formal inspection of the token patterns that are correlated with gain factors and compensation elements. Since the gain factors and compensation elements of an opamp are pertinent to the topological details of a circuit, the proposed pole-zero extraction method is closer to design conception than other numerical-reference-based methods; consequently, the generated pole/zero results are more interpretable. Application to a class of multi-stage operational amplifiers with a variety of compensation structures demonstrates that the proposed method is effective and can match human-derived results.
8 | DM17NXT08 | Topological Approach to Symbolic Pole-Zero Extraction Incorporating Design Knowledge |  |
Biomolecular controlled annotations have become pivotal in computational biology, because they allow scientists to analyze large amounts of biological data to better understand test results, and to infer new knowledge. Yet biomolecular annotation databases are, like our knowledge of biology, incomplete by definition, and might contain errors and inconsistent information. In this context, machine-learning algorithms able to predict and prioritize new annotations are both effective and efficient, especially when compared with time-consuming trials of biological validation. To limit the possibility that these techniques predict obvious and trivial high-level features, and to help prioritize their results, we introduce a new element that can improve the accuracy and relevance of the results of an annotation prediction and prioritization pipeline. We propose a novelty indicator able to state the level of "originality" of the annotations predicted for a specific gene to Gene Ontology (GO) terms. This indicator, jointly with our previously introduced prediction steps, helps prioritize the most novel and interesting annotations predicted. We performed an accurate biological functional analysis of the prioritized annotations predicted with high accuracy by our indicator and previously proposed methods. The relevance of our biological findings proves the effectiveness and trustworthiness of our indicator and of its prioritization of predicted annotations.
9 | DM17NXT09 | Novelty Indicator for Enhanced Prioritization of Predicted Gene Ontology Annotations |  |
Finding an effective and efficient representation is very important for image classification. The most common approach is to extract a set of local descriptors and then aggregate them into a high-dimensional, more semantic feature vector, as in unsupervised bag-of-features and weakly supervised part-based models. The latter is usually more discriminative than the former due to its use of information from image labels. In this paper, we propose a weakly supervised strategy that uses multi-instance learning (MIL) to learn discriminative patterns for image representation. Specifically, we extend traditional multi-instance methods to explicitly learn more than one pattern in the positive class, and to find the "most positive" instance for each pattern. Furthermore, as the positiveness of an instance is treated as a continuous variable, we can use stochastic gradient descent to maximize the margin between different patterns while respecting MIL constraints. To make the learned patterns more discriminative, local descriptors extracted by deep convolutional neural networks are chosen instead of hand-crafted descriptors.
10 | DM17NXT10 | Learning Multi-instance Deep Discriminative Patterns for Image Classification |  |
FORENSICS AND INFORMATION SECURITY
As an important application in cloud computing, cloud storage offers users scalable, flexible, and high-quality data storage and computation services. A growing number of data owners choose to outsource data files to the cloud. Because cloud storage servers are not fully trustworthy, data owners need dependable means to check possession of the files outsourced to remote cloud servers. To address this crucial problem, some remote data possession checking (RDPC) protocols have been presented, but many existing schemes have vulnerabilities in efficiency or data dynamics. In this paper, we provide a new efficient RDPC protocol based on a homomorphic hash function. The new scheme is provably secure against forgery, replace, and replay attacks under a typical security model. To support data dynamics, an operation record table (ORT) is introduced to track operations on file blocks, and we give a new optimized implementation of the ORT that makes the cost of accessing it nearly constant. Moreover, a comprehensive performance analysis shows that our scheme has advantages in computation and communication costs, and a prototype implementation and experiments show that the scheme is feasible for real applications.
1 | FIS17NXT01 | A Novel Efficient Remote Data Possession Checking Protocol in Cloud Storage |  |
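A classic example of a homomorphic hash, in the spirit of (though not necessarily identical to) the paper's construction, is H(x) = g^x mod p, which satisfies H(a) * H(b) = H(a + b) (mod p); this additive property lets a verifier check an aggregated block against aggregated tags. A toy Python demonstration with deliberately insecure parameters:

```python
# Toy homomorphic hash H(x) = g^x mod p: H(a) * H(b) == H(a + b) (mod p).
# Tiny, insecure parameters for illustration only.
p = 2**127 - 1   # a Mersenne prime (fine for a demo)
g = 3

def H(x: int) -> int:
    return pow(g, x, p)

a, b = 123456789, 987654321
assert (H(a) * H(b)) % p == H(a + b)
print("homomorphic check passed")
```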
Sharing of resources on the cloud can be achieved on a large scale, since it is cost effective and location independent. Despite the hype surrounding cloud computing, organizations are still reluctant to deploy their businesses in the cloud computing environment due to concerns in secure resource sharing. In this paper, we propose a cloud resource mediation service offered by cloud service providers, which plays the role of trusted third party among its different tenants. This paper formally specifies the resource sharing mechanism between two different tenants in the presence of our proposed cloud resource mediation service. The correctness of permission activation and delegation mechanism among different tenants using four distinct algorithms (activation, delegation, forward revocation, and backward revocation) is also demonstrated using formal verification. The performance analysis suggests that the sharing of resources can be performed securely and efficiently across different tenants of the cloud.
2 | FIS17NXT02 | A Cross Tenant Access Control (CTAC) Model for Cloud Computing: Formal Specification and Verification |  |
Data access control is a challenging issue in public cloud storage systems. Ciphertext-policy attribute-based encryption (CP-ABE) has been adopted as a promising technique to provide flexible, fine-grained, and secure data access control for cloud storage with honest-but-curious cloud servers. However, in the existing CP-ABE schemes, the single attribute authority must execute the time-consuming user legitimacy verification and secret key distribution, and hence, it results in a single-point performance bottleneck when a CP-ABE scheme is adopted in a large-scale cloud storage system. Users may be stuck in the waiting queue for a long period to obtain their secret keys, thereby resulting in low efficiency of the system. Although multi-authority access control schemes have been proposed, these schemes still cannot overcome the drawbacks of single-point bottleneck and low efficiency, due to the fact that each of the authorities still independently manages a disjoint attribute set. In this paper, we propose a novel heterogeneous framework to remove the problem of single-point performance bottleneck and provide a more efficient access control scheme with an auditing mechanism. Our framework employs multiple attribute authorities to share the load of user legitimacy verification. Meanwhile, in our scheme, a central authority is introduced to generate secret keys for legitimacy verified users. Unlike other multi-authority access control schemes, each of the authorities in our scheme manages the whole attribute set individually. To enhance security, we also propose an auditing mechanism to detect which attribute authority has incorrectly or maliciously performed the legitimacy verification procedure. Analysis shows that our system not only guarantees the security requirements but also makes great performance improvement on key generation.
3 | FIS17NXT03 | RAAC: Robust and Auditable Access Control with Multiple Attribute Authorities for Public Cloud Storage |  |
Cloud storage systems provide facilitative file storage and sharing services for distributed clients. To address integrity, controllable outsourcing, and origin-auditing concerns over outsourced files, we propose an identity-based data outsourcing (IBDO) scheme equipped with desirable features advantageous over existing proposals in securing outsourced data. First, our IBDO scheme allows a user to authorize dedicated proxies to upload data to the cloud storage server on her behalf; e.g., a company may authorize some employees to upload files to the company's cloud account in a controlled way. The proxies are identified and authorized with their recognizable identities, which eliminates the complicated certificate management of typical secure distributed computing systems. Second, our IBDO scheme facilitates comprehensive auditing: it not only permits regular integrity auditing as in existing schemes for securing outsourced data, but also allows auditing of the information on the origin, type, and consistency of outsourced files. Security analysis and experimental evaluation indicate that our IBDO scheme provides strong security with desirable efficiency.
4 | FIS17NXT04 | Identity-Based Data Outsourcing with Comprehensive Auditing in Clouds |  |
With the rapid advancement of technology, healthcare systems have been quickly transformed into a pervasive environment, where both challenges and opportunities abound. On the one hand, the proliferation of smartphones and advances in medical sensors and devices have driven the emergence of wireless body area networks for remote patient monitoring, also known as mobile health (M-health), thereby providing a reliable and cost-effective way to improve the efficiency and quality of health care. On the other hand, the advances of M-health systems also generate extensive medical data, which could crowd today's cellular networks. Device-to-device (D2D) communications have been proposed to address this challenge, but unfortunately, security threats are also emerging because of the open nature of D2D communications between medical sensors and the highly privacy-sensitive nature of medical data. Even more disconcerting, healthcare systems have many characteristics that make them more vulnerable to privacy attacks than other applications. In this paper, we propose a light-weight and robust security-aware D2D-assisted data transmission protocol for M-health systems using a certificateless generalized signcryption (CLGSC) technique. Specifically, we first propose a new efficient CLGSC scheme, which can adaptively work as any one of three cryptographic primitives (signcryption, signature, or encryption) within one single algorithm. The scheme is proved to be secure, simultaneously achieving confidentiality and unforgeability.
5 | FIS17NXT05 | Light-weight and Robust Security-Aware D2D-assist Data Transmission Protocol for Mobile-Health Systems |  |
Spatial data have wide applications, e.g., location-based services, and geometric range queries (i.e., finding points inside geometric areas such as circles or polygons) are one of the fundamental search functions over spatial data. The rising demand for data outsourcing is moving large-scale datasets, including large-scale spatial datasets, to public clouds. Meanwhile, due to concerns about insider attackers and hackers on public clouds, the privacy of spatial datasets should be cautiously preserved while querying them at the server side, especially for location-based and medical usage. In this paper, we formalize the concept of Geometrically Searchable Encryption, and propose an efficient scheme, named FastGeo, to protect the privacy of clients' spatial datasets stored and queried at a public server. With FastGeo, a novel two-level search over encrypted spatial data, an honest-but-curious server can efficiently perform geometric range queries and correctly return the data points inside a geometric range to a client, without learning sensitive data points or the private query. FastGeo supports arbitrary geometric areas, achieves sublinear search time, and enables dynamic updates over encrypted spatial datasets. Our scheme is provably secure, and our experimental results on real-world spatial datasets in a cloud platform demonstrate that FastGeo can improve search time by over 100 times.
6 | FIS17NXT06 | FastGeo: Efficient Geometric Range Queries on Encrypted Spatial Data |  |
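To make the two-level idea concrete, here is a minimal plaintext sketch of a geometric range query: a coarse grid index first narrows the candidates, and an exact point-in-circle test is then applied to the survivors. Everything here (the grid cell size, the circle query, the helper names) is an illustrative assumption; FastGeo's actual structures operate over encrypted data.

```python
import math
from collections import defaultdict

CELL = 10.0  # illustrative grid cell size, not a FastGeo parameter

def build_grid_index(points):
    """Level 1: bucket points by coarse grid cell."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // CELL), int(p[1] // CELL))].append(p)
    return grid

def circle_range_query(grid, cx, cy, r):
    """Level 2: exact point-in-circle test, limited to candidate cells."""
    hits = []
    x0, x1 = int((cx - r) // CELL), int((cx + r) // CELL)
    y0, y1 = int((cy - r) // CELL), int((cy + r) // CELL)
    for gx in range(x0, x1 + 1):
        for gy in range(y0, y1 + 1):
            for (px, py) in grid.get((gx, gy), []):
                if math.hypot(px - cx, py - cy) <= r:
                    hits.append((px, py))
    return hits

points = [(3, 4), (15, 18), (22, 9), (40, 40)]
grid = build_grid_index(points)
print(circle_range_query(grid, 14, 14, 8))  # -> [(15, 18)]
```

The same filter-then-verify pattern underlies the encrypted two-level search described above.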
Data integrity, a core security issue in reliable cloud storage, has received much attention. Data auditing protocols enable a verifier to efficiently check the integrity of outsourced data without downloading the data. A key research challenge associated with existing designs of data auditing protocols is the complexity of key management. In this paper, we seek to address the complex key management challenge in cloud data integrity checking by introducing fuzzy identity-based auditing, the first in such an approach, to the best of our knowledge. More specifically, we present the primitive of fuzzy identity-based data auditing, where a user's identity can be viewed as a set of descriptive attributes. We formalize the system model and the security model for this new primitive. We then present a concrete construction of a fuzzy identity-based auditing protocol by utilizing biometrics as the fuzzy identity. The new protocol offers the property of error-tolerance: a private key bound to one identity can be used to verify the correctness of a response generated with another identity, if and only if both identities are sufficiently close. We prove the security of our protocol based on the computational Diffie-Hellman assumption and the discrete logarithm assumption in the selective-ID security model. Finally, we develop a prototype implementation of the protocol, which demonstrates the practicality of the proposal.
7 | FIS17NXT07 | Fuzzy Identity-Based Data Integrity Auditing for Reliable Cloud Storage Systems |  |
Delay tolerant networks (DTNs) are often encountered in military network environments where end-to-end connectivity is not guaranteed due to frequent disconnection or delay. This work proposes a provenance-based trust framework, namely PROVEST (PROVEnance-baSed Trust model) that aims to achieve accurate peer-to-peer trust assessment and maximize the delivery of correct messages received by destination nodes while minimizing message delay and communication cost under resource-constrained network environments. Provenance refers to the history of ownership of a valued object or information. We leverage the interdependency between trustworthiness of information source and information itself in PROVEST. PROVEST takes a data-driven approach to reduce resource consumption in the presence of selfish or malicious nodes while estimating a node’s trust dynamically in response to changes in the environmental and node conditions.
8 | FIS17NXT08 | PROVEST: Provenance-based Trust Model for Delay Tolerant Networks |  |
More and more users are attracted by P2P networks characterized by decentralization, autonomy, and anonymity. However, users' unconstrained behavior makes it necessary to use a trust model when establishing trust relationships between peers. Most existing trust models are based on recommendations, which, however, suffer from the shortcomings of slow convergence and high complexity of trust computations, as well as huge network traffic overhead. Inspired by the establishment of trust relationships in human society, a guarantee-based trust model, GeTrust, is proposed for Chord-based P2P networks. A service peer needs to choose guarantee peer(s) for the service it is going to provide, and both are required to pledge reputation mortgages for the service. The requesting peer evaluates all candidate service peers by referring to their service reputations and their guarantee peers' reputations, and selects the one with the highest evaluation as its service provider. In order to enhance GeTrust's availability and prevent malicious behavior, we also present an incentive mechanism and an anonymous reputation management strategy. Simulation results show that GeTrust is effective and efficient in terms of improving the successful transaction rate, resisting complex attacks, reducing network overhead, and lowering computational complexity.
9 | FIS17NXT09 | GeTrust: A guarantee-based trust model in Chord-based P2P networks |  |
Recently proposed solutions for demand-side energy management leverage the two-way communication infrastructure provided by modern smart meters and share usage information with other users. In this paper, we first highlight the privacy and security issues involved in distributed demand management protocols. We propose a novel protocol to share the required information among users while providing privacy, confidentiality, and integrity. We also propose a new clustering-based, distributed multi-party computation (MPC) protocol. Through simulation experiments we demonstrate the efficiency of our proposed solution. Existing solutions typically thwart selfish and malicious behavior of consumers by deploying billing mechanisms based on total consumption during a few time slots. However, in smart grids billing is typically based on the total usage in each time slot. In the second part of this paper, we formally prove that under a per-slot charging policy, users have an incentive to deviate from the proposed protocols. We also propose a protocol to identify untruthful users in these networks. Finally, considering a repeated interaction among honest and dishonest users, we derive the conditions under which the smart grid can enforce cooperation among users and prevent dishonest declarations of consumption.
10 | FIS17NXT10 | Secure and Private Data Aggregation for Energy Consumption Scheduling in Smart Grids |  |
The new paradigm of outsourcing data to the cloud is a double-edged sword. On the one hand, it frees data owners from technical management and makes it easier for them to share their data with intended users. On the other hand, it poses new challenges for privacy and security protection. To protect data confidentiality against the honest-but-curious cloud service provider, numerous works have been proposed to support fine-grained data access control. However, to date, no scheme supports both fine-grained access control and time-sensitive data publishing. In this paper, by embedding timed-release encryption into CP-ABE (Ciphertext-Policy Attribute-Based Encryption), we propose a new time and attribute factors combined access control scheme for time-sensitive data in public cloud storage (named TAFC). Based on the proposed scheme, we further propose an efficient approach to designing access policies that satisfy diverse access requirements for time-sensitive data. Extensive security and performance analysis shows that our proposed scheme is highly efficient and satisfies the security requirements for time-sensitive data storage in public clouds.
11 | FIS17NXT11 | TAFC: Time and Attribute Factors Combined Access Control on Time-Sensitive Data in Public Cloud |  |
Cloud computing is a new resource provisioning mechanism, which represents a convenient way for users to access different computing resources. Periodical workflow applications commonly exist in scientific and business analysis, among many other fields. One of the most challenging problems is to determine the right amount of resources for multiple periodical workflow applications. In this paper, the periodical workflow applications scheduling problem with total renting cost minimization is considered. The novelty of this work relies precisely on this objective function, which is more realistic in practice than the more commonly considered makespan minimization. An integer programming model is constructed for the problem under study. A Precedence Tree based Heuristic (PTH) is developed which considers three types of initial schedule construction methods. Based on the initial schedule, two improvement procedures are presented. The proposed methods are compared with existing algorithms for the related makespan based multiple workflow scheduling problem. Experimental and statistical results demonstrate the effectiveness and efficiency of the proposed algorithm.
12 | FIS17NXT12 | Resource renting for periodical cloud workflow applications |  |
There are different techniques for measuring microdisplacements. The purpose of this paper was to ascertain whether a method of video motion magnification (VMM) can be used for measuring such displacements. For this, standard video devices (a digital single-lens reflex camera and a webcam) were used to record subtle movements of an object, and the results of the VMM technique were contrasted with an air-coupled ultrasonic sensing method that could achieve submicrometer accuracy. The results of the VMM technique highly correlate with those achieved using the ultrasonic sensor, showing that the former can accurately measure displacements in the range from about 5 to 40 µm from a distance of about 1 m. The temporal characteristics of the moving object were well preserved. The VMM technique is an alternative to other modalities for measuring microdisplacements and has the advantage of being noncontact, long-range, and relatively low-cost.
13 | FIS17NXT13 | Assessing the Feasibility of the Use of Video Motion Magnification for Measuring Microdisplacements |  |
Recently, a number of high-fidelity reversible data hiding algorithms have been developed based on prediction-error expansion (PEE) and pixel sorting. In PEE, prediction is made using either a full-enclosed or a half-enclosed predictor. In PEE with pixel sorting, the local complexity (LC), which is usually assumed to be proportional to the magnitude of the prediction-error (PE), is exploited to reduce the embedding distortion. However, this assumption may not hold in all conditions. In this letter, a directional enclosed predictor is proposed to detect the locations where LC is not proportional to PE. A directionally enclosed prediction and expansion (DEPE) scheme is then developed for efficient reversible data hiding. With DEPE, data embedding is restricted to pixels where LC correlates with PE in a proportional relationship. Experimental results show that, compared to the full-enclosed or half-enclosed prediction schemes, DEPE significantly improves image fidelity while providing a considerable payload.
14 | FIS17NXT14 | High-Fidelity Reversible Data Hiding Using Directionally-Enclosed Prediction Modulation |  |
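As background, the base prediction-error expansion step that DEPE refines can be sketched in a few lines. The 1-D signal and the simple left-neighbor predictor below are assumptions made for illustration; real schemes use 2-D directional predictors, an embedding threshold, and overflow handling.

```python
def pee_embed(pixels, bits):
    """Embed payload bits by expanding left-neighbor prediction errors."""
    assert len(bits) == len(pixels) - 1
    marked = [pixels[0]]                       # anchor pixel carries no data
    for i, b in enumerate(bits, start=1):
        pred = pixels[i - 1]                   # predict from original neighbor
        err = pixels[i] - pred
        marked.append(pred + 2 * err + b)      # expanded error carries bit b
    return marked

def pee_extract(marked):
    """Recover the payload and restore the original pixels exactly."""
    restored, bits = [marked[0]], []
    for y in marked[1:]:
        pred = restored[-1]                    # same predictor as the encoder
        err2 = y - pred
        bits.append(err2 & 1)                  # Python's & and >> handle
        restored.append(pred + (err2 >> 1))    # negative errors correctly
    return restored, bits

pixels = [128, 130, 129, 133, 131]
marked = pee_embed(pixels, [1, 0, 1, 1])
restored, bits = pee_extract(marked)
assert restored == pixels and bits == [1, 0, 1, 1]
```

Restricting which pixels participate in this expansion, as DEPE does based on the LC/PE relationship, is what controls the embedding distortion.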
Unmanned aerial vehicle (UAV) networks have not yet received considerable research attention. Specifically, security issues are a major concern because such networks, which carry vital information, are prone to various attacks. In this paper, we design and implement a novel intrusion detection and response scheme, which operates at the UAV and ground station levels, to detect malicious anomalies that threaten the network. In this scheme, a set of detection and response techniques are proposed to monitor UAV behaviors and categorize them into the appropriate list (normal, abnormal, suspect, or malicious) according to the detected cyber-attack. We focus on the most lethal cyber-attacks that can target a UAV network, namely, false information dissemination, GPS spoofing, jamming, and black hole and gray hole attacks. Extensive simulations confirm that the proposed scheme performs well in terms of attack detection even with a large number of UAVs and attackers, since it exhibits a high detection rate, a low number of false positives, and prompt detection with low communication overhead.
15 | FIS17NXT15 | A Hierarchical Detection and Response System to Enhance Security against Lethal Cyber-Attacks in UAV Networks |  |
DIGITAL IMAGE PROCESSING |
Estimating transformations from degraded point sets is necessary for many computer vision and pattern recognition applications. In this paper, we propose a robust non-rigid point set registration method based on spatially constrained context-aware Gaussian fields. We first construct a context-aware representation (e.g., shape context) for assignment initialization. Then, we use graph Laplacian regularized Gaussian fields to estimate the underlying transformation from the likely correspondences. On the one hand, the intrinsic manifold is considered and used to preserve the geometrical structure, and a priori knowledge of the point set is extracted. On the other hand, by using deterministic annealing, the presented method is extended to a projected high-dimensional feature space, i.e., a reproducing kernel Hilbert space, through a kernel trick to solve the transformation, in which the local structure is propagated by a coarse-to-fine scaling strategy. In this way, the proposed method gradually recovers many more correct correspondences, and then estimates the transformation parameters accurately and robustly in the face of degradations. Experimental results on 2D and 3D synthetic and real data (point sets) demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.
1 | IP17NXT01 | Robust Non-rigid Point Set Registration Using Spatially Constrained Gaussian Fields |  |
Single-molecule localization based super-resolution microscopy, by localizing a sparse subset of stochastically activated emitters in each frame, achieves sub-diffraction-limit spatial resolution. Its temporal resolution, however, is constrained by the maximal density of activated emitters that can be successfully reconstructed. The state-of-the-art three-dimensional (3D) reconstruction algorithm based on compressed sensing suffers from high computational complexity and gridding error due to model mismatch. In this paper, we propose a novel super-resolution algorithm for 3D image reconstruction, dubbed TVSTORM, which promotes the sparsity of activated emitters without discretizing their locations. Several strategies are pursued to improve the reconstruction quality under the Poisson noise model and reduce the computational time by an order of magnitude. Numerical results on both simulated and cell imaging data are provided to validate the favorable performance of the proposed algorithm.
2 | IP17NXT02 | Super-Resolution Image Reconstruction for High-Density 3D Single-Molecule Microscopy Variation |  |
In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction objectives simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.
3 | IP17NXT03 | Single-View 3D Scene Reconstruction and Parsing by attribute Grammar |  |
Blind image restoration is a non-convex problem involving the restoration of images using unknown blur kernels. The success of the restoration process depends on three factors: 1) the amount of prior information concerning the image and blur kernel; 2) the algorithm used to perform the restoration; and 3) the initial guesses made by the algorithm. Prior information about an image can often be used to restore the sharpness of edges. By contrast, there is no consensus concerning the use of prior information in the restoration of images from blur kernels, due to the complex nature of image blurring processes. In this paper, we model a blur kernel as a linear combination of basic 2-D patterns. To illustrate this process, we constructed a dictionary comprising atoms of Gaussian functions derived from the Kronecker product of 1-D Gaussian sequences. Our results show that the proposed method is more robust than other state-of-the-art methods in noisy environments, as measured by the improvement in signal-to-noise ratio (ISNR). This approach also proved more stable than the other methods, exhibiting a steady increase in ISNR as the number of iterations increases.
4 | IP17NXT04 | Mixture of Gaussian Blur Kernel Representation for Blind Image Restoration |  |
Many visual applications have benefited from the outburst of web images, yet the imprecise and incomplete tags arbitrarily provided by users, as the thorn of the rose, may hamper the performance of retrieval or indexing systems relying on such data. In this paper, we propose a novel locality sensitive low-rank model for image tag completion, which approximates the global nonlinear model with a collection of local linear models. To effectively infuse the idea of locality sensitivity, a simple and effective pre-processing module is designed to learn suitable representations for data partition, and a global consensus regularizer is introduced to mitigate the risk of overfitting. Meanwhile, low-rank matrix factorization is employed for the local models, where the local geometric structures are preserved for the low-dimensional representation of both tags and samples. Extensive empirical evaluations conducted on three datasets demonstrate the effectiveness and efficiency of the proposed method, which outperforms previous ones by a large margin.
5 | IP17NXT05 | A Locality Sensitive Low-Rank Model for Image Tag Completion |  |
Peer-to-peer networking offers a scalable solution for sharing multimedia data across the network. With a large amount of visual data distributed among different nodes, it is an important but challenging issue to perform content-based retrieval in peer-to-peer networks. While most of the existing methods focus on indexing high dimensional visual features and have limitations of scalability, in this paper we propose a scalable approach for content-based image retrieval in peer-to-peer networks by employing the bag-of-visual-words model. Compared with centralized environments, the key challenge is to efficiently obtain a global codebook, as images are distributed across the whole peer-to-peer network. In addition, a peer-to-peer network often evolves dynamically, which makes a static codebook less effective for retrieval tasks. Therefore, we propose a dynamic codebook updating method by optimizing the mutual information between the resultant codebook and relevance information, and the workload balance among nodes that manage different codewords. In order to further improve retrieval performance and reduce network cost, indexing pruning techniques are developed. Our comprehensive experimental results indicate that the proposed approach is scalable in evolving and distributed peer-to-peer networks, while achieving improved retrieval accuracy.
6 | IP17NXT06 | A Scalable Approach for Content-Based Image Retrieval in Peer-to-Peer Networks |  |
In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from users' search behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationships between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on the public large-scale click log data set Clickture and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than other state-of-the-art methods.
7 | IP17NXT07 | Learning of Multimodal Representations with Random Walks on the Click Graph |  |
Distance metric learning (DML) is an important technique to improve similarity search in content-based image retrieval. Despite being studied extensively, most existing DML approaches typically adopt a single-modal learning framework that learns the distance metric on either a single feature type or a combined feature space where multiple types of features are simply concatenated. Such single-modal DML methods suffer from some critical limitations: (i) some type of features may significantly dominate the others in the DML task due to diverse feature representations; and (ii) learning a distance metric on the combined high-dimensional feature space can be extremely time-consuming using the naive feature concatenation approach. To address these limitations, in this paper, we investigate a novel scheme of online multi-modal distance metric learning (OMDML), which explores a unified two-level online learning scheme: (i) it learns to optimize a distance metric on each individual feature space; and (ii) then it learns to find the optimal combination of diverse types of features. To further reduce the expensive cost of DML on high-dimensional feature space, we propose a low-rank OMDML algorithm which not only significantly reduces the computational cost but also retains highly competing or even better learning accuracy. We conduct extensive experiments to evaluate the performance of the proposed algorithms for multi-modal image retrieval, in which encouraging results validate the effectiveness of the proposed technique.
8 | IP17NXT08 | Online Multi-Modal Distance Metric Learning with Application to Image Retrieval |  |
Social media sharing websites allow users to annotate images with free tags, which significantly contributes to the development of web image retrieval. Tag-based image search is an important method for finding images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with consideration of topic coverage. First, we construct a tag graph based on the similarity between tags. Then, a community detection method is applied to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. Besides, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE data sets show the effectiveness of the proposed approach.
9 | IP17NXT09 | Tag Based Image Search by Social Re-ranking |  |
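The inter-community ranking step relies on a random walk over a weighted graph. A minimal sketch of a random walk with restart on a made-up tag-affinity matrix is shown below; the affinity values and the restart constant are assumptions for illustration, not values from the paper.

```python
def random_walk_rank(adj, restart=0.15, iters=100):
    """Rank nodes by the stationary probability of a random walk with restart.
    adj[i][j] is a non-negative affinity from node i to node j."""
    n = len(adj)
    # Row-normalize the affinity matrix into transition probabilities.
    trans = []
    for row in adj:
        s = sum(row)
        trans.append([v / s if s else 1.0 / n for v in row])
    rank = [1.0 / n] * n
    for _ in range(iters):
        rank = [
            restart / n
            + (1 - restart) * sum(rank[i] * trans[i][j] for i in range(n))
            for j in range(n)
        ]
    return rank

# Toy tag-affinity graph: tags 0 and 1 are strongly related, 2 is peripheral.
affinity = [[0, 5, 1],
            [5, 0, 1],
            [1, 1, 0]]
scores = random_walk_rank(affinity)
print(sorted(range(3), key=lambda t: -scores[t]))  # -> [0, 1, 2]
```

An adaptive variant, as used above, would additionally modulate the transition weights by per-community information.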
Alignment-free fingerprint cryptosystems, which perform matching using relative information between minutiae (e.g., local minutiae structures), are promising because they can avoid the recognition errors and information leakage caused by template alignment/registration. However, as most local minutiae structures only contain relative information about a few minutiae in a local region, they are less discriminative than the global minutiae pattern. Besides, the similarity measures for trivially/coarsely quantized features in existing work cannot provide a robust way to deal with nonlinear distortions, a common form of intra-class variation. As a result, the recognition accuracy of current alignment-free fingerprint cryptosystems is unsatisfactory. In this paper, we propose an alignment-free fuzzy vault-based fingerprint cryptosystem using highly discriminative pair-polar (P-P) minutiae structures. The fine quantization used in our system largely retains information about a fingerprint template and enables the direct use of a traditional, well-established minutiae matcher. In terms of template/key protection, the proposed system fuses cancelable biometrics and biocryptography. Transforming the P-P minutiae structures before encoding destroys the correlations between them and can provide privacy-enhancing features, such as revocability and protection against cross-matching, by setting distinct transformation seeds for different applications. Comparison with other minutiae-based fingerprint cryptosystems shows that the proposed system performs favorably on selected publicly available databases and has strong security.
10 | IP17NXT10 | A Security-Enhanced Alignment-Free Fuzzy Vault-Based Fingerprint Cryptosystem Using Pair-Polar Minutiae Structures |  |
KNOWLEDGE AND DATA ENGINEERING |
Given a positive integer k, a social network G, and a certain propagation model M, influence maximization aims to find a set of k nodes that has the largest influence spread. The state-of-the-art method IMM is based on the reverse influence sampling (RIS) framework. By using the martingale technique, it greatly outperforms previous methods in efficiency. However, IMM still has limitations in scalability due to the high overhead of deciding a tight sample size. In this paper, instead of spending effort on deciding a tight sample size, we present a novel bottom-k sketch based RIS framework, namely BKRIS, which brings the order of samples into the RIS framework. By applying the sketch technique, we can derive early termination conditions to significantly accelerate the seed set selection procedure. Moreover, we provide several optimization techniques to reduce the cost of generating and processing samples. Finally, we conduct experiments over 10 real social networks to demonstrate the efficiency and effectiveness of the proposed method. Further details are reported in [1].
1 | KD17NXT01 | Bring Order into the Samples: A Novel Scalable Method for Influence Maximization |  |
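A bottom-k sketch itself is easy to illustrate: keep the k smallest hash values of a set; two such sketches yield a similarity estimate, and the ordering of hash values is what makes early-termination conditions possible. A minimal sketch, with SHA-1 standing in for whatever hash function the paper actually uses:

```python
import hashlib
import heapq

def h(item):
    """Map an item to a pseudo-random value in [0, 1)."""
    d = hashlib.sha1(str(item).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def bottom_k(items, k):
    """Keep the k smallest hash values of the set of items."""
    return sorted(heapq.nsmallest(k, (h(x) for x in set(items))))

def resemblance(sk_a, sk_b, k):
    """Estimate Jaccard similarity from two bottom-k sketches."""
    merged = sorted(set(sk_a) | set(sk_b))[:k]   # sketch of the union
    both = set(sk_a) & set(sk_b)
    return sum(1 for v in merged if v in both) / len(merged)

a = bottom_k(range(0, 1000), 64)
b = bottom_k(range(500, 1500), 64)
print(round(resemblance(a, b, 64), 2))  # roughly 1/3 (true Jaccard = 500/1500)
```

Because the sketch entries arrive in hash order, a seed-selection loop can stop as soon as the remaining hash values can no longer change the outcome, which is the intuition behind BKRIS's early termination.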
In the era of “big data”, one of the key challenges is to analyze large amounts of data in meaningful and scalable ways. The field of process mining is concerned with the analysis of data of a particular nature, namely data that results from the execution of business processes. The analysis of such data can be negatively influenced by the presence of outliers, which reflect infrequent behavior or “noise”. In process discovery, where the objective is to automatically extract a process model from the data, this may result in rarely travelled pathways that clutter the process model. This paper presents an automated technique for removing infrequent behavior from event logs. The proposed technique is evaluated in detail, and it is shown that its application in conjunction with certain existing process discovery algorithms significantly improves the quality of the discovered process models and that it scales well to large datasets.
2 | KD17NXT02 | Filtering out Infrequent Behavior from Business Process Event Logs |  |
The efficient processing of document streams plays an important role in many information filtering systems. Emerging applications, such as news update filtering and social network notifications, demand presenting end-users with the most relevant content to their preferences. In this work, user preferences are indicated by a set of keywords. A central server monitors the document stream and continuously reports to each user the top-k documents that are most relevant to her keywords. Our objective is to support large numbers of users and high stream rates, while refreshing the top-k results almost instantaneously. Our solution abandons the traditional frequency-ordered indexing approach. Instead, it follows an identifier-ordering paradigm that suits better the nature of the problem. When complemented with a novel, locally adaptive technique, our method offers (i) proven optimality w.r.t. the number of considered queries per stream event, and (ii) an order of magnitude shorter response time (i.e., time to refresh the query results) than the current state-of-the-art.
3 | KD17NXT03 | Continuous Top-k Monitoring on Document Streams |  |
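Independent of the indexing strategy, every top-k monitor needs the per-query bookkeeping sketched below: a bounded min-heap of current results whose minimum acts as a rejection threshold for new stream documents. The term-overlap relevance score is a stand-in assumption, not the paper's scoring function.

```python
import heapq
import itertools

K = 3
counter = itertools.count()  # tie-breaker so the heap never compares dicts

class TopKQuery:
    def __init__(self, keywords):
        self.keywords = set(keywords)
        self.heap = []                          # min-heap of (score, seq, doc)

    def threshold(self):
        """Smallest score still inside the current top-k (0 if not full)."""
        return self.heap[0][0] if len(self.heap) >= K else 0.0

    def consume(self, doc):
        score = len(self.keywords & doc["terms"])   # toy relevance measure
        if score <= self.threshold():
            return                                   # cheap early rejection
        heapq.heappush(self.heap, (score, next(counter), doc))
        if len(self.heap) > K:
            heapq.heappop(self.heap)                 # evict the weakest result

q = TopKQuery(["cloud", "privacy", "audit"])
for i, terms in enumerate([{"cloud"}, {"cloud", "privacy"}, {"sports"},
                           {"cloud", "privacy", "audit"}, {"privacy"}]):
    q.consume({"id": i, "terms": terms})
print([(d["id"], s) for s, _, d in sorted(q.heap, reverse=True)])
```

The contribution described above lies in deciding, per stream event, which queries need to run this check at all.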
With the rapid development of computer technology, cloud-based services have become a hot topic. They not only provide users with convenience, but also bring many security issues, such as data sharing and privacy issues. In this paper, we present an access control system with privilege separation based on privacy protection (PS-ACS). In the PS-ACS scheme, we logically divide users into a private domain (PRD) and a public domain (PUD). In the PRD, to achieve read and write access permissions, we adopt Key-Aggregate Encryption (KAE) and an Improved Attribute-based Signature (IABS), respectively. In the PUD, we construct a new multi-authority ciphertext-policy attribute-based encryption (CP-ABE) scheme with efficient decryption to avoid the issues of a single point of failure and complicated key distribution, and design an efficient attribute revocation method for it. The analysis and simulation results show that our scheme is feasible and superior in protecting users' privacy in cloud-based services.
4 | KD17NXT04 | Privacy Protection based Access Control Scheme in Cloud-based Services |  |
As cloud computing becomes increasingly popular, consumers' tasks from around the world arrive in cloud data centers. A private cloud provider aims to maximize profit by intelligently scheduling tasks while guaranteeing the service delay bound of delay-tolerant tasks. However, the aperiodicity of arriving tasks poses a challenging problem: how to dynamically schedule all arriving tasks given that the capacity of a private cloud provider is limited. Previous works usually provide admission control to intelligently refuse some arriving tasks. Nevertheless, this decreases the throughput of a private cloud and causes revenue loss. This paper studies the problem of how to maximize the profit of a private cloud in hybrid clouds while guaranteeing the service delay bound of delay-tolerant tasks. We propose a profit maximization algorithm (PMA) to discover the temporal variation of prices in hybrid clouds. The temporal task scheduling provided by PMA can dynamically schedule all arriving tasks for execution in private and public clouds. The subproblem in each iteration of PMA is solved by the proposed hybrid heuristic optimization algorithm, simulated annealing particle swarm optimization (SAPSO). Besides, SAPSO is compared with existing baseline algorithms. Extensive simulation experiments demonstrate that the proposed method can greatly increase the throughput and the profit of a private cloud while guaranteeing the service delay bound.
5 | KD17NXT05 | Temporal Task Scheduling With Constrained Service Delay For Profit Maximization In Hybrid Clouds |  |
Because existing database systems are increasingly difficult to use, improving their quality and usability has gained tremendous momentum over the last few years. In particular, the feature of explaining why some expected tuples are missing from the result of a query has received more attention. In this paper, we study the problem of explaining missing answers to top-k queries in the context of SQL (i.e., with selection, projection, join, and aggregation). To approach this problem, we use the query-refinement method. That is, given as inputs the original top-k SQL query and a set of missing tuples, our algorithms return to the user a refined query that includes both the missing tuples and the original query results. Case studies and experimental results show that our algorithms are able to return high-quality explanations efficiently.
6 | KD17NXT06 | Explaining Missing Answers to Top-k SQL Queries |  |
A query facet is a significant list of information nuggets that explains an underlying aspect of a query. Existing algorithms mine the facets of a query by extracting frequent lists contained in the top search results. The coverage of facets and facet items mined by this kind of method can be limited, because only a small number of search results are used. To address this problem, we propose mining query facets using knowledge bases, which contain high-quality structured data. Specifically, we first generate facets based on the properties of the entities that are contained in Freebase and correspond to the query. Second, we mine initial query facets from search results, and then expand them by finding similar entities from Freebase. Experimental results show that our proposed method can significantly improve the coverage of facet items over state-of-the-art algorithms.
7 | KD17NXT08 | Generating Query Facets using Knowledge Bases |  |
Fraudulent behaviors in Google Play, the most popular Android app market, fuel search rank abuse and malware proliferation. To identify malware, previous work has focused on app executable and permission analysis. In this paper, we introduce FairPlay, a novel system that discovers and leverages traces left behind by fraudsters to detect both malware and apps subjected to search rank fraud. FairPlay correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from Google Play app data (87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year) in order to identify suspicious apps. FairPlay achieves over 95 percent accuracy in classifying gold standard datasets of malware, fraudulent, and legitimate apps. We show that 75 percent of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer's detection technology. FairPlay also helped discover more than 1,000 reviews, reported for 193 apps, that reveal a new type of “coercive” review campaign: users are harassed into writing positive reviews and into installing and reviewing other apps.
8 | KD17NXT09 | Search Rank Fraud and Malware Detection in Google Play |  |
This paper focuses on seeking a new heuristic scheme for the influence maximization problem in social networks: how to economically select a subset of individuals (so-called seeds) to trigger a large cascade of further adoptions of a new behavior based on a contagion process. Most existing works on seed selection assumed that a constant number k of seeds could be selected, irrespective of each individual's intrinsic susceptibility to being influenced (e.g., it may be costly to persuade some seeds to adopt a new behavior). In this paper, a price-performance-ratio inspired heuristic scheme, PPRank, is proposed, which investigates how to economically select seeds within a given budget while trying to maximize the diffusion process. Our paper's contributions are threefold. First, we explicitly characterize each user with two distinct factors: the susceptibility of being influenced (SI) and the influential power (IP), representing the ability to actively influence others, and formulate users' SIs and IPs according to their social relations; then, a convex price-demand curve-based model is utilized to properly convert each user's SI into a persuasion cost (PC) representing the cost of successfully making the individual adopt a new behavior. Furthermore, a novel cost-effective selection scheme is proposed, which adopts both the price-performance ratio (PC-IP ratio) and users' IP as an integrated selection criterion while explicitly taking the overlapping effect into account. Finally, simulations using both artificially generated and real-trace network data illustrate that, under the same budgets, PPRank achieves a larger diffusion range than other heuristic and brute-force greedy schemes that do not take users' persuasion costs into account.
9 | KD17NXT10 | PPRank: Economically Selecting Initial Users for Influence Maximization in Social Networks |  |
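Stripped of the overlap-aware refinements, the budgeted selection idea reduces to a greedy sweep over price-performance ratios. The sketch below uses invented IP/PC values and deliberately ignores influence overlap, which the full PPRank scheme explicitly accounts for.

```python
def pprank_greedy(candidates, budget):
    """Greedy seed selection by influence-per-cost ratio under a budget.
    candidates: list of (name, influential_power, persuasion_cost)."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    seeds, spent = [], 0.0
    for name, ip, pc in ranked:
        if spent + pc <= budget:      # take the candidate only if affordable
            seeds.append(name)
            spent += pc
    return seeds, spent

users = [("u1", 9.0, 3.0), ("u2", 4.0, 1.0), ("u3", 7.0, 4.0), ("u4", 2.0, 2.0)]
print(pprank_greedy(users, budget=5.0))   # -> (['u2', 'u1'], 4.0)
```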
A network with n nodes contains O(n²) possible links. Even for networks of modest size, it is often difficult to evaluate all pairwise possibilities for links in a meaningful way. Further, even though link prediction is closely related to missing value estimation problems, it is often difficult to use sophisticated models such as latent factor methods because of their computational complexity on large networks. Hence, most known link prediction methods are designed to evaluate the link propensity on a specified subset of links, rather than on the entire network. In practice, however, it is essential to perform an exhaustive search over the entire network. In this article, we propose an ensemble-enabled approach to scaling up link prediction by decomposing traditional link prediction problems into subproblems of smaller size. These subproblems are each solved with latent factor models, which can be effectively implemented on networks of modest size. By incorporating the characteristics of link prediction, the ensemble approach further reduces the sizes of subproblems without sacrificing prediction accuracy. The ensemble-enabled approach has several advantages in terms of performance, and our experimental results demonstrate the effectiveness and scalability of our approach.
10 | KDH17NXT07 | An Ensemble Approach to Link Prediction |  |
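A toy version of the decompose-then-ensemble recipe: sample node subsets as subproblems, fit a latent factor model (here, plain truncated SVD, an assumption) on each induced subgraph, and average the scores a node pair receives across the subproblems that contain it. The graph, the subproblem count, and the subset size are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_scores(adj, rank=2):
    """Latent factor link scores for one subproblem via truncated SVD."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

def ensemble_link_scores(adj, n_sub=5, sub_size=6):
    """Average latent-factor scores over random node-subset subproblems."""
    n = adj.shape[0]
    total, count = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(n_sub):
        nodes = rng.choice(n, size=sub_size, replace=False)
        sub = adj[np.ix_(nodes, nodes)]
        total[np.ix_(nodes, nodes)] += latent_scores(sub)
        count[np.ix_(nodes, nodes)] += 1
    return np.divide(total, count, out=np.zeros_like(total), where=count > 0)

# Toy graph: two triangles bridged by one edge, plus two pendant nodes.
adj = np.zeros((8, 8))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5),
             (2, 3), (6, 0), (7, 4)]:
    adj[a, b] = adj[b, a] = 1
scores = ensemble_link_scores(adj)
print(np.round(scores[1, 6], 3))  # score for an unobserved pair (0 if never co-sampled)
```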
MOBILE COMPUTING |
With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.
1 | MC17NXT01 | Searching Trajectories by Regions of Interest |  |
The convergence of mobile communications and cloud computing facilitates cross-layer network design and content-assisted communication. Mobile video broadcasting can benefit from this trend by utilizing joint source-channel coding and strong information correlation in clouds. In this paper, a knowledge-enhanced mobile video broadcasting (KMV-Cast) framework is proposed. KMV-Cast is built on linear video transmission instead of a traditional digital video system, and exploits a hierarchical Bayesian model to integrate correlated information into the video reconstruction at the receiver. The correlated information is distilled to obtain its intrinsic features, and a Bayesian estimation algorithm is used to maximize the video quality. The KMV-Cast system consists of both likelihood broadcasting and prior-knowledge broadcasting. Simulation results show that the proposed KMV-Cast scheme outperforms the typical linear video transmission scheme Softcast, achieving 8 dB more peak signal-to-noise ratio (PSNR) gain on low-SNR channels (i.e., -10 dB) and 5 dB more PSNR gain on high-SNR channels (i.e., 25 dB).
2 | MC17NXT02 | Knowledge-Enhanced Mobile Video Broadcasting (KMV-Cast) Framework with Cloud Support |  |
Over the past decades, the classroom scheduling problem has posed significant challenges to educational programmers and teaching secretaries. To alleviate the burden on programmers, this paper presents SmartClass, which allows programmers to solve this problem using web services. By introducing a service-oriented architecture (SOA), SmartClass is able to provide classroom scheduling services with back-stage design space exploration and greedy algorithms. Furthermore, the SmartClass architecture can be dynamically coupled to different scheduling algorithms (e.g., Greedy, DSE, etc.) to fit specific demands. A typical case study demonstrates that SmartClass provides a new efficient paradigm for the traditional classroom scheduling problem, achieving high flexibility through software service reuse and easing the burden on educational programmers. Evaluation results on efficiency, overheads, and scheduling performance demonstrate that SmartClass has lower scheduling overhead and higher efficiency.
3 | MC17NXT03 | A Classroom Scheduling Service for Smart Classes |  |
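As an illustration of the kind of greedy scheduling such a service can expose, the sketch below assigns classes to (room, time-slot) pairs largest-first, subject to capacity and conflict constraints. The data model and the largest-first rule are assumptions for illustration, not SmartClass's actual algorithm.

```python
def greedy_schedule(classes, rooms, slots):
    """Assign each class the first conflict-free (room, slot) with enough seats.
    classes: list of (name, size); rooms: list of (room, capacity)."""
    occupied = set()                      # (room, slot) pairs already taken
    schedule = {}
    for name, size in sorted(classes, key=lambda c: -c[1]):   # largest first
        for room, cap in sorted(rooms, key=lambda r: r[1]):   # tightest fit
            if cap < size:
                continue
            slot = next((s for s in slots if (room, s) not in occupied), None)
            if slot is not None:
                occupied.add((room, slot))
                schedule[name] = (room, slot)
                break
    return schedule

classes = [("Math", 60), ("AI", 120), ("DBMS", 45)]
rooms = [("R101", 70), ("Hall-A", 150)]
print(greedy_schedule(classes, rooms, ["Mon-9", "Mon-11"]))
```

Wrapping such an algorithm behind a web service endpoint, so it can be swapped for a DSE-based scheduler, is the architectural point made above.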
Protecting the privacy of mobile phone users is extremely important for mobile phone sensing applications. In this paper, we study how an aggregator can expeditiously compute the minimum value or the k-th minimum value of all users' data without knowing them. We construct two secure protocols using probabilistic coding schemes and a cipher system that allows homomorphic bitwise XOR computations. Following the standard cryptographic security definition in the semi-honest model, we formally prove our protocols' security. Our protocols support time-series data and do not require a trusted aggregator. Moreover, unlike existing protocols based on secure arithmetic sum computations, our protocols are based on secure bitwise XOR computations and are thus more efficient.
4 | MC17NXT04 | Efficient and Privacy-preserving Min and k-th Min Computations in Mobile Sensing Systems |  |
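The bitwise flavor of these protocols can be previewed on plaintext data: if each user encodes its value as a one-hot bit vector over the data domain, aggregation reduces to bitwise operations and the minimum is the lowest set bit. The sketch below omits exactly what makes the real protocols interesting, namely the probabilistic coding and the XOR-homomorphic encryption that hide individual values from the aggregator.

```python
DOMAIN = 16   # values are assumed to lie in 0 .. DOMAIN-1

def encode(value):
    """One-hot bit vector over the domain; bit i is set iff value == i."""
    return 1 << value

def min_from_bit_vectors(values):
    """Aggregate user encodings with bitwise OR; min = lowest set bit.
    In the real protocol each vector is encrypted, and the aggregator
    combines ciphertexts without ever seeing an individual value."""
    agg = 0
    for v in values:
        agg |= encode(v)
    for i in range(DOMAIN):
        if agg & (1 << i):
            return i

print(min_from_bit_vectors([9, 4, 12, 4]))   # -> 4
```

The k-th minimum follows the same pattern: scan the aggregated vector and return the position of the k-th set bit.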
Smartphones are widely used, with a vast array of sensitive and private information stored on these devices. To secure such information from being leaked, user authentication schemes are necessary. Current password/pattern-based user authentication schemes are vulnerable to shoulder surfing attacks and smudge attacks. In contrast, stroke/gait-based schemes are secure but inconvenient for users to input. In this paper, we propose ShakeIn, a handy user authentication scheme for securely unlocking a smartphone by simply shaking the phone. With embedded motion sensors, ShakeIn can effectively capture unique and reliable biometrical features of how users shake. In this way, even if an attacker sees a user shaking his/her phone, the attacker can hardly reproduce the same behaviour. Furthermore, by allowing users to customise how they shake the phone, ShakeIn endows users with maximum operational flexibility. We implement ShakeIn and conduct both intensive trace-driven simulations and real experiments on 20 volunteers with about 530,555 shaking samples collected over multiple months. The results show that ShakeIn achieves an average equal error rate of 1.2% with a small number of shakes using only 35 training samples, even in the presence of shoulder-surfing attacks.
5 | MC17NXT05 | ShakeIn: Secure User Authentication of Smartphones with Habitual Single-handed Shakes |  |
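The reported metric, equal error rate, is the operating point where the false-accept rate and the false-reject rate coincide; it can be computed from genuine and impostor match scores as below. The score values are fabricated for illustration.

```python
def equal_error_rate(genuine, impostor):
    """Find the threshold where false accepts ~= false rejects.
    genuine/impostor: match scores (higher = more likely the same user)."""
    best = (1.0, None)
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accept rate
        frr = sum(s < t for s in genuine) / len(genuine)     # false reject rate
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine = [0.91, 0.85, 0.88, 0.79, 0.95, 0.83]
impostor = [0.30, 0.45, 0.52, 0.81, 0.28, 0.40]
print(equal_error_rate(genuine, impostor))   # ~0.167 for these made-up scores
```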
Communication between vehicles enables a wide array of applications and services ranging from road safety to traffic management and infotainment. Each application places distinct quality of service (QoS) constraints on the exchange of information. The required performance of the supported services differs considerably in terms of bandwidth, latency, and communication reliability. For example, high-bandwidth applications, such as video streaming, require highly reliable communication. However, the attenuation of the IEEE 802.11p/DSRC communication link, due to static and mobile obstructing objects, degrades the link quality and can compromise the QoS requirements of the supported applications. On the other hand, a dual-interface hybrid architecture may have a failover or backup mechanism and benefit from more reliable alternatives, such as cellular networks for occasionally offloading data transmission by radio access technology (RAT) selection and vertical handover process. Since 4G/Long-Term Evolution (LTE) is generally not free, it is, therefore, highly desirable to minimize the time during which the cellular interface is used and to return to the IEEE 802.11p/DSRC interface. This paper proposes a hybrid communication approach based on 4G/LTE and the IEEE 802.11p technologies to support a V2X video streaming application. The proposed approach includes details on the underlying communication architecture, a procedure for selecting the best RAT, a real test platform complemented by a standard software protocol stack, and finally an extensive performance evaluation of the proposed solution based on field test measurements. The results indicate that the proposed approach significantly improves the overall reliability of communication with respect to packet and frame delivery metrics.
6 | MC17NXT06 | QoS-Aware Video Transmission Over Hybrid Wireless Network for Connected Vehicles |  |
Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, the cloud does not handle well local issues involving a large number of networked elements (IoT devices), and it is not responsive enough for many applications that require the immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and is also well positioned to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premises. However, data security is a critical challenge in fog computing, especially when fog nodes and their data move frequently in the environment. This paper addresses the data protection and performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes in the locations of users and fog devices. The implementation results demonstrate the feasibility and efficiency of our proposed framework.
7 | MC17NXT07 | A data protection model for fog computing |  |
Location-based services (LBSs) can be seen everywhere today in our smartphones and devices that use GPS, and this service has become invaluable to customers. LBSs, however, do have their flaws. Users are forced to reveal location data if they want to use the service, which can be a risk to their own privacy and security. Therefore, several techniques have been proposed in the literature to provide an optimal solution for privacy-preserving queries in LBSs. This paper first explores the use of Bloom filters in existing research and their inherent limitation. While using Bloom filters can be straightforward, finding good hash functions can be challenging. We propose a method to automatically generate good, independent hash functions, with the goal of reducing information leakage while also creating an automated performance measure.
8 | MC17NXT08 | Privacy preserving queries for LBS: Hash function secured (HFS) |  |
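One well-known way to generate a whole family of hash functions automatically from two base digests is the Kirsch-Mitzenmacher double-hashing construction, sketched below. Whether this matches the paper's generation method is not stated in the abstract, so treat the construction as an assumption.

```python
import hashlib

class BloomFilter:
    """Bloom filter whose k hash functions are derived automatically
    from two base digests: g_i(x) = h1(x) + i * h2(x) mod m."""
    def __init__(self, m_bits=1024, k=7):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1   # force odd so strides vary
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

bf = BloomFilter()
bf.add("cell_42")
print("cell_42" in bf, "cell_99" in bf)   # True, (almost certainly) False
```

The one-sided error of the filter (false positives only) is what makes it attractive for location queries: a membership test never wrongly denies, and the false-positive rate is tunable via m and k.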
An authenticated key exchange (AKE) protocol allows a user and a server to authenticate each other and generate a session key for subsequent communications. With the rapid development of low-power and highly efficient networks, such as pervasive and mobile computing networks, many efficient AKE protocols have been proposed to achieve user privacy and authentication in communications. Besides secure session key establishment, these AKE protocols offer other useful functionalities, such as two-factor user authentication and mutual authentication. However, most of them have one or more weaknesses, such as vulnerability to lost-smart-card attacks, offline dictionary attacks, or de-synchronization attacks, or the lack of forward secrecy, user anonymity, or untraceability. Furthermore, an AKE scheme under the public key infrastructure may not be suitable for light-weight computational devices, and the standard security model of AKE captures neither user anonymity nor resistance to lost-smart-card attacks. In this paper, we propose a novel dynamic ID-based anonymous two-factor AKE protocol, which addresses all the above issues. Our protocol also supports smart card revocation and password update without centralized storage. Further, we extend the security model of AKE to support user anonymity and resistance to lost-smart-card attacks, and the proposed scheme is provably secure in the extended security model. The low computational and bandwidth cost indicates that our protocol can be deployed for pervasive computing applications and mobile communications in practice.
9 | MC17NXT09 | Provably Secure Dynamic ID-Based Anonymous Two-Factor Authenticated Key Exchange Protocol With Extended Security Model |  |
Personal information is often gathered and processed in a decentralized fashion. Examples include health records and governmental databases. To protect the privacy of individuals, no unique user identifier should be used across the different databases. At the same time, the utility of the distributed information needs to be preserved, which requires that it nevertheless be possible to link different records if they relate to the same user. Recently, Camenisch and Lehmann (CCS 15) proposed a pseudonym scheme that addresses this problem with domain-specific pseudonyms. Although unlinkable, these pseudonyms can be converted by a central authority (the converter). To protect users' privacy, conversions are done blindly, without the converter learning the pseudonyms or the identity of the user. Unfortunately, their scheme sacrifices a crucial privacy feature: transparency. Users are no longer able to inquire with the converter and audit the flow of their personal data. Indeed, such auditability appears to be diametrically opposed to the goal of blind pseudonym conversion. In this paper we address these seemingly conflicting requirements and provide a system where user-centric audit logs are created by the oblivious converter while maintaining all privacy properties. We prove our protocol to be UC-secure and give an efficient instantiation using novel building blocks.
10 | MC17NXT10 | Privacy-Preserving User-Auditable Pseudonym Systems |  |
NETWORKING / NETWORK SECURITY / IOT – JAVA TITLES |
Affording secure and efficient big data aggregation methods is very attractive in wireless sensor network (WSN) research. In real settings, WSNs have been broadly applied, for example in target tracking and remote environment monitoring. However, data can easily be compromised by a vast number of attacks, such as data interception and data tampering. In this paper, we mainly focus on data integrity protection and give an identity-based aggregate signature (IBAS) scheme with a designated verifier for WSNs. Thanks to the advantages of aggregate signatures, our scheme not only keeps data integrity, but also reduces bandwidth and storage cost for WSNs. Furthermore, the security of our IBAS scheme is rigorously presented based on the computational Diffie-Hellman assumption in the random oracle model.
1 | NP17NXT01 | A Secure and Efficient ID-Based Aggregate Signature Scheme for Wireless Sensor Networks |  |
The Internet of Things is expanding the network by integrating a huge number of surrounding objects, which requires secure and reliable transmission of the high volume of generated data, and the mobile relay technique is one of the efficient ways to meet the on-board data explosion in LTE-Advanced (LTE-A) networks. However, the use of mobile relays poses potential threats to information security during the handover process. Therefore, to address this challenge, in this paper we propose a secure handover session key management scheme via mobile relay in LTE-A networks. Specifically, in the proposed scheme, to achieve forward and backward key separation, the session key shared between the on-board user equipment (UE) and the connected donor evolved node B (DeNB) is first generated by the on-board UE and then securely distributed to the DeNB. Furthermore, to reduce the communication overhead and the computational complexity, a novel proxy re-encryption technique is employed, where session keys initially encrypted with the public key of the mobility management entity (MME) are re-encrypted by a mobile relay node (MRN), so that other DeNBs can later decrypt the session keys with their own private keys without the direct involvement of the MME. Detailed security analysis shows that the proposed scheme can successfully establish session keys between the on-board UEs and their connected DeNB, achieving backward and forward key separation and resisting collusion between the MRN and the DeNB at the same time. In addition, performance evaluations via extensive simulations are carried out to demonstrate the efficiency and effectiveness of the proposed scheme.
2 | NP17NXT02 | Achieve Secure Handover Session Key Management via Mobile Relay in LTE-Advanced Networks |  |
The relay selection problem is considered in large-scale energy harvesting (EH) networks. It is known that if channel state information (CSI) is available at EH relays, a diversity order equal to the number of relays can be obtained, however at the penalty of a feedback overhead (necessary to obtain accurate CSI) that is not suitable for energy-limited devices intended, e.g., for Internet-of-Things applications. In this paper, we therefore propose a new EH relay selection scheme based on the residual energy in each relay's battery and on information about the distribution of the channels between relays and the destination. The method thus minimizes both the outage probability and the feedback cost. Whereas previous work on relay selection based on channel distribution information considers only the small-scale fading distribution, we employ a stochastic geometry approach to consider jointly the geometrical distribution (i.e., large-scale fading) and small-scale fading, yielding a simple relay selection criterion that furthermore utilizes only rough information on a relay's location, i.e., an ordinal number from the destination. The outage probability of the proposed relay selection scheme is analytically derived, and the achievable diversity order of the proposed approach is investigated.
3 | NP17NXT03 | Robust Relay Selection for Large-Scale Energy-Harvesting IoT Networks |  |
Ride sharing can reduce the number of vehicles in the streets by increasing the occupancy of vehicles, which can ease traffic and reduce crashes and the number of needed parking slots. Autonomous vehicles can make ride sharing convenient, popular, and also necessary because of the elimination of driver effort and the expected high cost of the vehicles. However, the organization of ride sharing requires users to disclose sensitive detailed information not only on the pick-up/drop-off locations but also on the trip time and route. In this paper, we propose a scheme to organize ride sharing and address the unique privacy issues. Our scheme uses a similarity measurement technique over encrypted data to preserve the privacy of trip data. The ride sharing region is divided into cells, and each cell is represented by one bit in a binary vector. Each user represents trip data as binary vectors and submits the encryptions of the vectors to a server.
4 | NP17NXT04 | Privacy-Preserving Ride Sharing Scheme for Autonomous Vehicles in Big Data Era |  |
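A minimal sketch of the trip encoding described above: the region grid maps each trip to a binary vector with one bit per cell, and matching reduces to an overlap measure between vectors. The paper evaluates such a measure over encrypted vectors; the encryption layer is omitted here, and the Jaccard-style measure is an illustrative choice.

```python
def trip_to_bitvector(cells_visited, grid_w, grid_h):
    """Encode a trip as a binary vector: one bit per cell of the region grid."""
    vec = [0] * (grid_w * grid_h)
    for (x, y) in cells_visited:
        vec[y * grid_w + x] = 1
    return vec

def similarity(v1, v2):
    """Jaccard-style overlap of two trips (illustrative measure; the actual
    scheme evaluates similarity over the *encrypted* vectors)."""
    inter = sum(a & b for a, b in zip(v1, v2))
    union = sum(a | b for a, b in zip(v1, v2))
    return inter / union if union else 0.0

rider  = trip_to_bitvector([(0, 0), (1, 0), (2, 0), (2, 1)], 4, 4)
driver = trip_to_bitvector([(0, 0), (1, 0), (2, 0), (3, 0)], 4, 4)
print(f"trip overlap: {similarity(rider, driver):.2f}")
```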
Distributed energy resources (ERs), featured with small-scale power generation technologies and renewable energy sources, are considered necessary supplements for the smart grid. To ensure that merged resources contribute effectively to the grid, data generated on the consumer side should be shared among the ERs. However, this also introduces challenges in protecting consumer privacy. To address these difficulties, we propose a new framework to share data in the smart grid by leveraging new advances in homomorphic encryption and proxy re-encryption. Our proposed framework allows ERs to analyze consumer data while ensuring consumer privacy. An additional benefit of our proposed framework is that consumer data is transmitted over the smart grid only once. Furthermore, we present a concrete scheme falling into the proposed framework. Extensive analysis shows that the concrete scheme is secure and efficient.
5 | NP17NXT05 | A Privacy-Preserving Data Sharing Framework for Smart Grid |  |
A jammed wireless scenario is considered where a network operator aims to schedule users to maximize network performance while guaranteeing a minimum performance level to each user. We consider the case where no information about the position and the triggering threshold of the jammer is available. We show that the network performance maximization problem can be modeled as a finite-horizon joint power control and user scheduling problem, which is NP-hard. To find the optimal solution of the problem, we exploit dynamic programming techniques. We show that the obtained problem can be decomposed, i.e., the power control problem and the user scheduling problem can be sequentially solved at each slot. We investigate the impact of uncertainty on the achievable performance of the system and we show that such uncertainty leads to the well-known exploration-exploitation tradeoff. Due to the high complexity of the optimal solution, we introduce an approximation algorithm by exploiting state aggregation techniques. We also propose a performance-aware online greedy algorithm to provide a low-complexity sub-optimal solution to the joint power control and user scheduling problem under minimum quality-of-service requirements.
6 | NP17NXT06 | Optimal Power Allocation and Scheduling Under Jamming Attacks |  |
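The finite-horizon structure in the abstract above lends itself to backward induction. The toy below assumes the jammer's triggering probability per power level is already known, which sidesteps the exploration-exploitation issue the paper addresses; the power levels, rate model, and trigger model are all illustrative.

```python
def finite_horizon_dp(T, powers, p_jam, rate):
    """Backward induction for a toy finite-horizon power-control problem.

    T        -- number of slots
    powers   -- available transmit power levels
    p_jam(p) -- probability the jammer triggers at power p (assumed known;
                learning it online is what creates the paper's
                exploration-exploitation trade-off)
    rate(p)  -- throughput achieved at power p when not jammed
    """
    V = [0.0] * (T + 1)             # value-to-go; terminal value is zero
    policy = [None] * T
    for t in range(T - 1, -1, -1):  # solve the slots backwards
        best_p = max(powers, key=lambda p: (1 - p_jam(p)) * rate(p))
        policy[t] = best_p
        V[t] = (1 - p_jam(best_p)) * rate(best_p) + V[t + 1]
    return V[0], policy

value, policy = finite_horizon_dp(
    T=5,
    powers=[0.5, 1.0, 2.0],
    p_jam=lambda p: min(1.0, 0.3 * p),  # higher power, higher trigger risk
    rate=lambda p: p ** 0.5)            # diminishing returns in power
print(round(value, 3), policy)
```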
To achieve the potential in providing high throughput for cellular networks by device-to-device (D2D) communications, the interference among D2D links should be carefully managed. In this paper, we propose an opportunistic cooperation strategy for D2D transmission by exploiting the caching capability at the users to control the interference among D2D links. We consider overlay inband D2D, divide the D2D users into clusters, and assign different frequency bands to cooperative and non-cooperative D2D links. To provide high opportunity for cooperative transmission, we introduce a caching policy. To maximize the network throughput, we jointly optimize the cluster size and bandwidth allocation, where the closed-form expression of the bandwidth allocation factor is obtained. Simulation results demonstrate that the proposed strategy can provide a 400% to 500% throughput gain over traditional D2D communications when the content popularity distribution is skewed, and a 60% to 80% gain even when the content popularity distribution is uniform.
7 | NP17NXT07 | High Throughput Opportunistic Cooperative Device-to-Device Communications with Caching |  |
In the context of wireless mobile ad hoc networks, node clustering is a well-known solution for handling the scalability issue. While existing work has focused on unstructured (i.e., flat) networks, this paper investigates a clustering algorithm that maintains stable size-restricted clusters for structured (i.e., group-based) networks. In addition, we have identified that the ad hoc network clustering literature lacks a theoretical framework. This paper fills this gap by proposing to use coalition game theory, identifying coalitions with clusters and players with nodes. This theoretical framework allows us to derive a novel generic distributed node clustering algorithm. The algorithm is proved to converge to Nash-stable partitions. It is based on the concept of switch operations, where nodes decide whether or not to leave their current coalition based on the coalition values. These decisions are made independently of any node's individual payoff, meaning that the coalition formation game has a transferable utility. This generic algorithm is then tailored to both structured and unstructured networks by judiciously defining the value functions and the heuristics dedicated to selecting suitable switch operations. Based on extensive simulations, we show that our proposed solutions outperform the existing ones, especially in terms of cluster size and stability.
8 | NP17NXT08 | A Coalition Formation Game for Distributed Node Clustering in Mobile Ad Hoc Networks |  |
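A sketch of the switch-operation dynamics described in the abstract above: each node checks whether moving to another coalition increases total value and moves if so; the loop stops at a partition where no node wants to switch (Nash-stable). The value function (rewarding size up to a cap) is an illustrative stand-in for the paper's tailored functions.

```python
def coalition_value(coalition, max_size):
    """Toy value function: reward size up to a cap, penalize oversized clusters."""
    n = len(coalition)
    return n if n <= max_size else n - 2 * (n - max_size)

def switch_until_stable(nodes, partition, max_size, max_rounds=100):
    """Repeat switch operations until no node benefits from moving,
    i.e., the partition is Nash-stable."""
    for _ in range(max_rounds):
        moved = False
        for node in nodes:
            current = next(c for c in partition if node in c)
            for target in partition:
                if target is current:
                    continue
                # Gain in total value if `node` switches to `target`.
                gain = (coalition_value(target | {node}, max_size)
                        + coalition_value(current - {node}, max_size)
                        - coalition_value(target, max_size)
                        - coalition_value(current, max_size))
                if gain > 0:
                    current.discard(node)
                    target.add(node)
                    moved = True
                    break
        partition = [c for c in partition if c]   # drop emptied coalitions
        if not moved:
            return partition
    return partition

nodes = list(range(8))
partition = [{0, 1, 2, 3, 4, 5}, {6}, {7}]
print(switch_until_stable(nodes, partition, max_size=3))
```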
Uncertain data clustering has been recognized as an essential task in data mining research. Many centralized clustering algorithms have been extended by defining new distance or similarity measurements to tackle this issue. With the fast development of network applications, these centralized methods show their limitations in conducting data clustering in a large dynamic distributed peer-to-peer network, due to privacy and security concerns or the technical constraints brought by distributed environments. In this paper, we propose a novel distributed uncertain data clustering algorithm, in which the centralized global clustering solution is approximated by performing distributed clustering. To shorten the execution time, a reduction technique is then applied to transform the proposed method into its deterministic form by replacing each uncertain data object with its expected centroid. Finally, the attribute-weight-entropy regularization technique enhances the proposed distributed clustering method to achieve better results in data clustering and to extract the essential features for cluster identification. The experiments on both synthetic and real-world data have shown the efficiency and superiority of the presented algorithm.
9 | NP17NXT09 | Uncertain Data Clustering in Distributed Peer-to-Peer Networks |  |
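The deterministic reduction step mentioned above is easy to picture: each uncertain object (a set of possible positions with probabilities) is collapsed to its expected centroid, after which ordinary clustering applies. This toy runs centralized k-means on the reduced objects; the paper's distributed execution and attribute-weight-entropy regularization are omitted.

```python
import random

def expected_centroid(samples):
    """Collapse an uncertain object, given as (position, probability) samples,
    to its expected position (the deterministic reduction step)."""
    d = len(samples[0][0])
    return tuple(sum(p * x[i] for x, p in samples) for i in range(d))

def kmeans(points, k, iters=20):
    """Plain k-means on the reduced objects (centralized toy version)."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for pt in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(pt, centers[c])))
            clusters[j].append(pt)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    return centers, clusters

# Each uncertain object: a list of (position, probability) samples.
uncertain = [[((0, 0), 0.5), ((1, 1), 0.5)],
             [((9, 9), 0.7), ((8, 8), 0.3)],
             [((0, 1), 1.0)]]
reduced = [expected_centroid(obj) for obj in uncertain]
print(kmeans(reduced, k=2))
```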
Traffic patterns associated with different primary user (PU) channels may provide different spectral access and energy harvesting opportunities in wireless-powered cognitive radio networks (WP-CRNs). Considering this, we propose a traffic-specific optimal spectrum sensing policy such that the expected transmission rate of the secondary user (SU) is maximized under the energy causality and PU collision constraints in an orthogonal frequency division multiple access (OFDMA)-based WP-CRN. Towards this, we cluster N subcarriers (subchannels) into K clusters (where K ≪ N) and derive an optimal energy detection threshold for the SU under each traffic pattern. Using traffic features, we propose an unsupervised and nonparametric classification technique to determine the number of unique traffic patterns K over all subchannels. Then, the traffic patterns are used to predict the idle/busy period statistics for subchannels, based upon which the SU identifies harvest and transmit PU subchannels for energy harvesting and data transmission, respectively. We derive an optimal detection threshold based on the harvested energy such that it maximizes the expected transmission rate of the SU while protecting the PU from collision. We demonstrate the effectiveness of the proposed scheme in terms of rate gains under design constraints and show the optimal detection threshold under various energy arrival rates.
10 | NP17NXT10 | Energy Arrival-aware Detection Threshold in Wireless-Powered Cognitive Radio Networks |  |
This paper addresses the consensus problem for a continuous-time multiagent system (MAS) with Markovian network topologies and external disturbance. Different from some existing results, global jumping modes of the Markovian network topologies are not required to be completely available for consensus protocol design. A network topology mode regulator (NTMR) is first developed to decompose unavailable global modes into several overlapping groups, where overlapping groups refer to the scenario in which there exist commonly shared local modes between any two distinct groups. The NTMR schedules which group modes each agent may access at every time step. Then a new group mode-dependent distributed consensus protocol, based on relative measurement outputs of neighboring agents, is delicately constructed. In this sense, the proposed consensus protocol relies only on group and partial modes and eliminates the need for complete knowledge of global modes. Sufficient conditions on the existence of the desired distributed consensus protocols are derived to ensure consensus of the MAS with a prescribed H∞ performance level. Two examples are provided to show the effectiveness of the proposed consensus protocol.
11 | NP17NXT11 | Consensus of Multiagent Systems Subject to Partially Accessible and Overlapping Markovian Network Topologies |  |
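For intuition, a bare-bones discrete-time consensus iteration is shown below: each agent repeatedly moves toward the average of its neighbors, and all states converge to a common value. The Markovian topology switching, output-feedback protocol, and H∞ disturbance attenuation of the paper are omitted; the ring topology and step size are illustrative.

```python
import random

def consensus_step(x, neighbors, eps=0.2):
    """One discrete-time consensus update: each agent moves toward its
    neighbors' states. (The paper's protocol additionally handles Markovian
    topology switching and disturbance rejection, omitted in this toy.)"""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

x = [random.uniform(0, 10) for _ in range(5)]
# Fixed ring topology; in the paper the topology jumps between Markovian modes.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
for _ in range(100):
    x = consensus_step(x, neighbors)
print([round(v, 3) for v in x])   # all values close to the initial average
```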
To improve on the traditional clone detection technique, whose performance may be affected by dynamic changes of supply chains and by misreads, we present a novel and effective clone detection approach, termed double-track detection, for radio frequency identification-enabled supply chains. As part of a tag's attributes, verification information is written into tags so that the set of all verification information in the collected tag events forms a time-series sequence. Genuine tags can be differentiated from clone tags due to the discrepancy in their verification sequences, which are constructed as products flow along the supply chain. The verification sequence, together with the sequence formed by business actions performed during the supply chain, yields two tracks which can be assessed to detect the presence of clone tags. Theoretical analysis and experimental results show that our proposed mechanism is effective, reasonable, and has a relatively high clone detection rate when compared with a leading method in this area.
12 | NP17NXT12 | DTD: A Novel Double-Track Approach to Clone Detection for RFID-enabled Supply Chains |  |
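The double-track idea above can be illustrated on the verification track alone: each genuine read extends the tag's verification sequence deterministically, so a clone that cannot keep up produces divergent continuations of the same value. The successor-fork test and the "previous value + 1" update below are illustrative simplifications of the paper's verification-information construction.

```python
def detect_clone(tag_events):
    """Check one tag ID's collected verification sequence for consistency.

    tag_events: chronologically collected verification values for a tag ID.
    Each genuine read derives a fresh value from the previous one (modelled
    here as prev + 1, a stand-in for the real update rule). A clone that
    missed some writes produces a fork: two different successors for the
    same value.
    """
    seen_successors = {}
    for prev, cur in zip(tag_events, tag_events[1:]):
        if prev in seen_successors and seen_successors[prev] != cur:
            return True        # two divergent continuations -> clone suspected
        seen_successors[prev] = cur
    return False

genuine = [1, 2, 3, 4, 5]
with_clone = [1, 2, 3, 2, 4]   # a clone replays a stale verification value
print(detect_clone(genuine), detect_clone(with_clone))   # False True
```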
Wireless sensor networks (WSNs) have been widely used in a wide range of applications. To achieve higher efficiency for data collection, WSNs are often partitioned into several disjoint clusters, each with a representative cluster head in charge of the data gathering and routing process. Such a partition is balanced and effective if the distance between each node and its cluster head can be bounded within a constant number of hops, and any two cluster heads are connected. Finding such a cluster partition with the minimum number of clusters and connectors between cluster heads is defined as the minimum connected d-hop dominating set (d-MCDS) problem, which is proved to be NP-complete. In this paper, we propose a distributed approximation named CS-Cluster to address the d-MCDS problem under the unit disk graph model. CS-Cluster constructs a sparser d-hop maximal independent set (d-MIS), connects the d-MIS, and finally checks and removes redundant nodes. We prove the approximation ratio of CS-Cluster is (2d+1)λ, where λ is a parameter related to d but is no more than 18.4. Compared with the previous best result of O(d²), our approximation ratio is a great improvement. Our evaluation results demonstrate the outstanding performance of our algorithm compared with previous works.
13 | NP17NXT13 | A Novel Approximation for Multi-Hop Connected Clustering Problem in Wireless Networks |  |
Data aggregation in WSNs (Wireless Sensor Networks) can effectively reduce communication overheads and the energy consumption of sensor nodes. A WSN needs to be not only energy efficient but also secure. Various attacks may make data aggregation insecure. We investigate the reliable and secure end-to-end data aggregation problem considering selective forwarding attacks and modification attacks in homogeneous cluster-based WSNs, and propose two data aggregation approaches. Our approaches, namely Sign-Share and Sham-Share, use secret sharing and signatures to allow aggregators to aggregate the data without understanding the contents of messages, and the base station to verify the aggregated data and retrieve the raw data from the aggregated data. We have performed extensive simulations to compare our approaches with the two state-of-the-art approaches PIP and RCDA-HOMO. The simulation results show both Sign-Share and Sham-Share are faster in processing and aggregating data.
14 | NP17NXT14 | Reliable and Secure End-to-End Data Aggregation Using Secret Sharing in WSNs |  |
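The secret-sharing side of the approach above can be pictured with plain additive sharing: aggregators combine shares, and thus aggregate readings, without learning any individual value, while the base station reconstructs the total. This sketch uses generic additive sharing and omits the paper's signature/verification layer; the field size and share counts are illustrative.

```python
import random

PRIME = 2**31 - 1   # field size (illustrative)

def make_shares(value, n):
    """Split a sensor reading into n additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three sensors, each splitting its reading between two aggregators.
readings = [17, 25, 8]
shares = [make_shares(r, 2) for r in readings]

# Each aggregator sums the shares it received -- without seeing any reading.
agg0 = sum(s[0] for s in shares) % PRIME
agg1 = sum(s[1] for s in shares) % PRIME

# The base station combines the two aggregates to recover the total.
total = (agg0 + agg1) % PRIME
print(total == sum(readings))    # True
```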
We study the data gathering problem in Rechargeable Sensor Networks (RSNs) with a mobile sink, where rechargeable sensors are deployed into a region of interest to monitor the environment and a mobile sink travels along a pre-defined path to collect data from sensors periodically. In such RSNs, optimal data gathering is challenging because the energy required for data transmission changes with the movement of the mobile sink and the available energy is time-varying. In this paper, we formulate the data gathering problem as a network utility maximization problem, which aims at maximizing the total amount of data collected by the mobile sink while maintaining the fairness of the network. Since the instantaneous optimal data gathering scheme changes with time, in order to obtain the globally optimal solution, we first transform the primal problem into an approximate network utility maximization problem by shifting the energy consumption conservation and analyzing necessary conditions for the optimal solution. As a result, each sensor does not need to estimate the amount of harvested energy, and the problem dimension is reduced. Then, we propose a Distributed Data Gathering Approach (DDGA), which can be operated distributively by sensors, to obtain the optimal data gathering scheme. Extensive simulations are performed to demonstrate the efficiency of the proposed algorithm.
15 | NP17NXT15 | Near Optimal Data Gathering in Rechargeable Sensor Networks with a Mobile Sink |  |
The introduction of fast-response battery energy storage systems (BESSs) provides a number of advantages in addressing the new challenges of smart grids, including improving the reliability and security of power grids. This paper proposes a real-time distributed control algorithm for the corrective security-constrained DC optimal power flow problem in transmission networks. The objective is to minimize the adjustment of the BESSs while maintaining the supply-demand balance and ensuring no security constraint violations in the post-contingency state for the short-term period. Compared with conventional centralized methods, only simple local computation and information exchange with neighbors are required to update the local control signal, which leads to fast response of the BESSs to alleviate the impact of a transmission line outage. Real-time simulation results on the modified 6-bus and 24-bus systems demonstrate that the dynamic performance of the proposed distributed algorithm satisfies standard requirements and indicates its applicability to practical power grids.
16 | NP17NXT16 | Real-Time Distributed Control of Battery Energy Storage Systems for Security Constrained DC-OPF |  |
The need for fast and strong image cryptosystems motivates researchers to develop new techniques to apply traditional cryptographic primitives in ways that exploit the intrinsic features of digital images. One of the most popular and mature techniques is the use of complex dynamic phenomena, including chaotic orbits and quantum walks, to generate the required key stream. In this paper, under the assumption of plaintext attacks, we investigate the security of a classic diffusion mechanism (and of its variants) used as the core cryptographic primitive in some image cryptosystems based on the aforementioned complex dynamic phenomena. We have theoretically found that, regardless of the key schedule process, the data complexity for recovering each element of the equivalent secret key from these diffusion mechanisms is only O(1). The proposed analysis is validated by means of numerical examples. Some additional cryptographic applications of this paper are also discussed.
17 | NP17NXT17 | On the Security of a Class of Diffusion Mechanisms for Image Encryption |  |
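To see why the data complexity can be O(1) per key element, consider a representative XOR-chain diffusion. Feeding the mechanism a single chosen all-zero image makes each equivalent key element drop out of one ciphertext difference, regardless of how the key stream was produced. The concrete chain below is an illustrative variant, not the exact mechanism the paper analyzes.

```python
def diffuse(plain, key, c0=0):
    """A classic XOR-chain diffusion: c[i] = p[i] ^ k[i] ^ c[i-1]."""
    out, prev = [], c0
    for p, k in zip(plain, key):
        prev = p ^ k ^ prev
        out.append(prev)
    return out

def recover_key(ciphertext, c0=0):
    """Chosen-plaintext recovery: with an all-zero image, k[i] = c[i] ^ c[i-1],
    i.e., O(1) data per key element, independent of how the key stream was
    generated (chaotic orbits, quantum walks, ...)."""
    key, prev = [], c0
    for c in ciphertext:
        key.append(c ^ prev)
        prev = c
    return key

key = [0x3c, 0xa1, 0x5e, 0x77]
zero_img = [0, 0, 0, 0]
print(recover_key(diffuse(zero_img, key)) == key)   # True
```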
In this paper, we propose new location privacy preserving schemes for database-driven cognitive radio networks that protect secondary users’ (SUs) location privacy while allowing them to learn spectrum availability in their vicinity. Our schemes harness probabilistic set membership data structures to exploit the structured nature of spectrum databases (DBs) and SUs’ queries. This enables us to create a compact representation of DB that could be queried by SUs without having to share their location with DB, thus guaranteeing their location privacy. Our proposed schemes offer different cost-performance characteristics. Our first scheme relies on a simple yet powerful two-party protocol that achieves unconditional security with a plausible communication overhead by making DB send a compacted version of its content to SU which needs only to query this data structure to learn spectrum availability. Our second scheme achieves significantly lower communication and computation overhead for SUs, but requires an additional architectural entity which receives the compacted version of the database and fetches the spectrum availability information in lieu of SUs to alleviate the overhead on the latter. We show that our schemes are secure, and also demonstrate that they offer significant advantages over existing alternatives for various performance and/or security metrics.
18 | NP17NXT18 | Location Privacy Preservation in Database-driven Wireless Cognitive Networks through Encrypted Probabilistic Data Structures |  |
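The probabilistic set-membership structure at the heart of the first scheme above can be illustrated with a Bloom filter: the DB inserts its available (cell, channel) pairs and ships the compact bit array to the SU, which then answers availability queries locally without revealing its location. The sizes, hash counts, and key format are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Probabilistic set-membership structure of the kind used to compact
    the spectrum database (parameters here are illustrative)."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], 'big') % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# DB compacts (cell, channel) availability and ships it to the SU;
# the SU then queries locally, never revealing its location to the DB.
db = BloomFilter()
db.add("cell_42:chan_7")
db.add("cell_42:chan_11")
print("cell_42:chan_7" in db)    # True
print("cell_13:chan_7" in db)    # False (up to a small false-positive rate)
```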
IP traceback plays an important role in cyber investigation processes, where the sources and the traversed paths of packets need to be identified. It has a wide range of applications, including network forensics, security auditing, network fault diagnosis, and performance testing. Despite a plethora of research on IP traceback, the Internet is yet to see a large-scale practical deployment of traceback. Some of the major challenges that still impede an Internet-scale traceback solution are the concern of disclosing Internet Service Providers' (ISPs') internal network topologies (in other words, the concern of privacy leaks), poor incremental deployability, and the lack of incentives for ISPs to provide traceback services. In this paper, we argue that cloud services offer better options for the practical deployment of an IP traceback system. We first present a novel cloud-based traceback architecture, which possesses several favorable properties encouraging ISPs to deploy traceback services on their networks. While this makes the traceback service more accessible, regulating access to the traceback service in a cloud-based architecture becomes an important issue. Consequently, we address the access control problem in cloud-based traceback. Our design objective is to prevent illegitimate users from requesting traceback information for malicious intentions (such as ISP topology discovery). To this end, we propose a temporal token-based authentication framework, called FACT, for authenticating traceback service queries. FACT embeds temporal access tokens in traffic flows and then delivers them to end-hosts in an efficient manner. The proposed solution ensures that the entity requesting the traceback service is an actual recipient of the packets to be traced. Finally, we analyze and validate the proposed design using real-world Internet data sets.
19 | NP17NXT19 | FACT: A Framework for Authentication in Cloud-based IP Traceback |  |
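A temporal token of the kind FACT embeds can be sketched as a keyed MAC over a flow identifier and a coarse time window: only a host that actually received the tagged traffic holds a fresh token, and the provider can verify it without a database lookup. The HMAC construction, window length, and expiry rule here are illustrative, not FACT's exact format.

```python
import hmac, hashlib, time

SECRET = b"provider-side key (illustrative)"

def issue_token(flow_id, window):
    """Temporal access token bound to a flow and a coarse time window;
    in FACT such tokens are embedded in the traffic delivered to end-hosts."""
    msg = f"{flow_id}:{window}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(flow_id, window, token, max_age_windows=2):
    """Accept a traceback query only from a host holding a fresh token
    for the flow it wants traced."""
    now = int(time.time()) // 60
    if now - window > max_age_windows:
        return False                        # token expired
    expected = issue_token(flow_id, window)
    return hmac.compare_digest(expected, token)

window = int(time.time()) // 60             # 1-minute windows (assumption)
tok = issue_token("10.0.0.7->203.0.113.9", window)
print(verify_request("10.0.0.7->203.0.113.9", window, tok))   # True
print(verify_request("10.0.0.7->198.51.100.2", window, tok))  # False
```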
Knowledge discovery from various health data repositories requires the incorporation of healthcare data from diversified sources. Maintaining record linkage during the integration of medical data is an important research issue. Researchers have proposed different solutions to this problem that are applicable in developed countries, where electronic health records of patients are maintained with identifiers like the social security number (SSN), universal patient identifier (UPI), health insurance number, etc. These solutions cannot be used correctly for record linkage of health data in developing countries because of missing data, ambiguity in patient identification, and a high amount of noise in patient information. Also, identifiable health data in electronic health repositories may pose a significant risk to patient privacy and make health information systems vulnerable to hackers. In this paper, we have analyzed the practical problems of collecting and integrating healthcare data in Bangladesh for developing a national health data warehouse. We have proposed a privacy-preserving secure record linkage architecture that can support the constrained health data of developing countries such as Bangladesh. Our technique can anonymize identifiable private data of patients while maintaining record linkage in integrated health repositories to facilitate the knowledge discovery process. Experimental results show that our proposed method successfully linked records with acceptable accuracy for noisy data in the absence of any standard ID like the SSN.
20 | NP17NXT20 | Health data integration with Secured Record Linkage: A practical solution for Bangladesh and other developing countries |  |
WEB MINING |
With the accessibility to information, users often face the problem of selecting one item (a product or a service) from a huge search space. This problem is known as information overload. Recommender systems (RSs) personalize content to a user's interests to help them select the right item in information overload scenarios. Group RSs (GRSs) recommend items to a group of users. In GRSs, a recommendation is usually computed by a simple aggregation method over individual information. However, the aggregations are rigid and overlook certain group features, such as the relationships between the group members' preferences. In this paper, we propose a GRS based on opinion dynamics that considers these relationships, using a smart weights matrix to drive the process. In some groups, opinions do not agree; hence, the weights matrix is modified to reach a consensus value. The impact of ensuring agreed recommendations is evaluated through a set of experiments. Additionally, a sensitivity analysis studies its behavior. Compared to existing group recommendation models and frameworks, the proposal based on opinion dynamics has the following advantages: 1) a flexible aggregation method; 2) member relationships; and 3) agreed recommendations.
1 | WM17NXT01 | Opinion Dynamics-Based Group Recommender Systems |  |
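The aggregation mechanism above can be pictured with DeGroot-style opinion dynamics: members' ratings are repeatedly mixed through a row-stochastic weights matrix until they converge to a consensus rating for the group. The fixed example matrix below stands in for the paper's smart, relationship-derived weights.

```python
def group_recommendation(ratings, W, iters=50):
    """DeGroot-style opinion dynamics: each member's rating is repeatedly
    averaged with the others' according to the weights matrix W (rows sum
    to 1). The fixed point is the group's consensus rating."""
    x = ratings[:]
    for _ in range(iters):
        x = [sum(W[i][j] * x[j] for j in range(len(x)))
             for i in range(len(x))]
    return x

ratings = [4.0, 2.0, 5.0]          # members' ratings for one item
W = [[0.6, 0.2, 0.2],              # member 0 weights mostly its own opinion
     [0.3, 0.4, 0.3],
     [0.2, 0.2, 0.6]]
print([round(v, 3) for v in group_recommendation(ratings, W)])
```

Because W is row-stochastic and well-connected, all entries converge to the same consensus value, which is the agreed group rating.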
Getting back to previously viewed web pages is a common yet difficult task for users due to the large volume of personally accessed information on the web. This paper leverages humans' natural recall process of using episodic and semantic memory cues to facilitate recall, and presents a personal web revisitation technique called WebPagePrev based on context and content keywords. Underlying techniques for context and content memory acquisition, storage, decay, and utilization for page re-finding are discussed. A relevance feedback mechanism is also involved to tailor to an individual's memory strength and revisitation habits. Our 6-month user study shows that: (1) Compared with the existing web revisitation tool Memento, the History List Searching method, and the Search Engine method, the proposed WebPagePrev delivers the best re-finding quality in finding rate (92.10 percent), average F1-measure (0.4318), and average rank error (0.3145). (2) Our dynamic management of context and content memories, including the decay and reinforcement strategy, can mimic users' retrieval and recall mechanisms. With relevance feedback, the finding rate of WebPagePrev increases by 9.82 percent, average F1-measure increases by 47.09 percent, and average rank error decreases by 19.44 percent compared to the stable memory management strategy. Among the time, location, and activity context factors in WebPagePrev, activity is the best recall cue, and context+content based re-finding delivers the best performance, compared to context-based re-finding and content-based re-finding.
2 | WM17NXT02 | Personal Web Revisitation by Context and Content Keywords with Relevance Feedback |  |
Multilabel learning deals with examples having multiple class labels simultaneously. It has been applied to a variety of applications, such as text categorization and image annotation. A large number of algorithms have been proposed for multilabel learning, most of which concentrate on multilabel classification problems and only a few of them are feature selection algorithms. Current multilabel classification models are mainly built on a single data representation composed of all the features which are shared by all the class labels. Since each class label might be decided by some specific features of its own, and the problems of classification and feature selection are often addressed independently, in this paper, we propose a novel method which can perform joint feature selection and classification for multilabel learning, named JFSC. Different from many existing methods, JFSC learns both shared features and label-specific features by considering pairwise label correlations, and builds the multilabel classifier on the learned low-dimensional data representations simultaneously. A comparative study with state-of-the-art approaches manifests a competitive performance of our proposed method both in classification and feature selection for multilabel learning.
3 | WM17NXT03 | Joint Feature Selection and Classification for Multilabel Learning |  |
It is observed that distinct words in a given document have either strong or weak ability in delivering facts (i.e., the objective sense) or expressing opinions (i.e., the subjective sense) depending on the topics they associate with. Motivated by the intuitive assumption that different words have varying degrees of discriminative power in delivering the objective or the subjective sense with respect to their assigned topics, a model named identified objective-subjective latent Dirichlet allocation (iosLDA) is proposed in this paper. In the iosLDA model, the simple Pólya urn model adopted in traditional topic models is modified by incorporating it with a probabilistic generative process, in which the novel "Bag-of-Discriminative-Words" (BoDW) representation for the documents is obtained; each document has two different BoDW representations with regard to the objective and subjective senses, respectively, which are employed in the joint objective and subjective classification instead of the traditional Bag-of-Topics representation. The experiments reported on documents and images demonstrate that: 1) the BoDW representation is more predictive than the traditional ones; 2) iosLDA boosts the performance of topic modeling via the joint discovery of latent topics and the different objective and subjective power hidden in every word; and 3) iosLDA has lower computational complexity than supervised LDA, especially under an increasing number of topics.
4 | WM17NXT04 | Identifying Objective and Subjective Words via Topic Modeling |  |
The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed models over state-of-the-art approaches.
5 | WM17NXT05 | Learning Supervised Topic Models for Classification and Regression from Crowds |  |
Precise friend recommendation is an important problem in social media. Although most social websites provide some kinds of auto friend searching functions, their accuracies are not satisfactory. In this paper, we propose a more precise auto friend recommendation method with two stages. In the first stage, by utilizing the information of the relationship between texts and users, as well as the friendship information between users, we align different social networks and choose some “possible friends.” In the second stage, with the relationship between image features and users, we build a topic model to further refine the recommendation results. Because some traditional methods, such as variational inference and Gibbs sampling, have their limitations in dealing with our problem, we develop a novel method to find out the solution of the topic model based on series expansion. We conduct experiments on the Flickr dataset to show that the proposed algorithm recommends friends more precisely and faster than traditional methods.
6 | WM17NXT06 | Two-Stage Friend Recommendation Based on Network Alignment and Series-Expansion of Probabilistic Topic Model |  |
We propose a computational methodology for automatically estimating human behavioral patterns using the multiple instance learning (MIL) paradigm. We describe the incremental diverse density algorithm, a particular formulation of multiple instance learning, and discuss its suitability for behavioral coding. We use a rich multi-modal corpus comprised of chronically distressed married couples having problem-solving discussions as a case study to experimentally evaluate our approach. In the multiple instance learning framework, we treat each discussion as a collection of short-term behavioral expressions which are manifested in the acoustic, lexical, and visual channels. We experimentally demonstrate that this approach successfully learns representations that carry relevant information about the behavioral coding task. Furthermore, we employ this methodology to gain novel insights into human behavioral data, such as the local versus global nature of behavioral constructs as well as the level of ambiguity in the expression of behaviors through each respective modality. Finally, we assess the success of each modality for behavioral classification and compare schemes for multimodal fusion within the proposed framework.
7 | WM17NXT07 | Multiple Instance Learning for Behavioral Coding |  |
In this work, we focus on modeling user-generated review and overall rating pairs, and aim to identify semantic aspects and aspect-level sentiments from review data as well as to predict overall sentiments of reviews. We propose a novel probabilistic supervised joint aspect and sentiment model (SJASM) to deal with these problems in one go under a unified framework. SJASM represents each review document in the form of opinion pairs, and can simultaneously model aspect terms and corresponding opinion words of the review for hidden aspect and sentiment detection. It also leverages sentimental overall ratings, which often come with online reviews, as supervision data, and can infer the semantic aspects and aspect-level sentiments that are not only meaningful but also predictive of overall sentiments of reviews. Moreover, we develop an efficient inference method for parameter estimation of SJASM based on collapsed Gibbs sampling. We evaluate SJASM extensively on real-world review data, and experimental results demonstrate that the proposed model outperforms seven well-established baseline methods for sentiment analysis tasks.
8 | WM17NXT08 | Analyzing Sentiments in One Go: A Supervised Joint Topic Modeling Approach |  |
Online shopping is becoming more and more common in our daily lives. Understanding users' interests and behavior is essential to adapt e-commerce websites to customers' requirements. The information about users' behavior is stored in the Web server logs. The analysis of such information has focused on applying data mining techniques, where a rather static characterization is used to model users' behavior, and the sequence of the actions performed by them is not usually considered. Therefore, incorporating a view of the process followed by users during a session can be of great interest to identify more complex behavioral patterns. To address this issue, this paper proposes a linear temporal logic model-checking approach for the analysis of structured e-commerce Web logs. By defining a common way of mapping log records according to the e-commerce structure, Web logs can be easily converted into event logs where the behavior of users is captured. Then, different predefined queries can be performed to identify different behavioral patterns that consider the different actions performed by a user during a session. Finally, the usefulness of the proposed approach has been studied by applying it to a real case study of a Spanish e-commerce website. The results have identified interesting findings that have made it possible to propose some improvements in the website design with the aim of increasing its efficiency.
9 | WM17NXT09 | Analysis of users' behaviour in structured e-commerce websites |  |
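A flavor of the temporal queries involved: once log records are mapped to per-session event traces, a pattern such as "every add-to-cart is eventually followed by a payment" (in LTL terms, G(a → F b)) can be checked per trace. The event names and the single predefined query below are illustrative.

```python
def eventually_follows(trace, a, b):
    """Responds-to pattern ('every <a> is eventually followed by <b>'),
    an LTL-style query G(a -> F b) evaluated on one finite session trace."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

# Session event traces mapped from web-server log records (illustrative).
sessions = [
    ["search", "view_item", "add_to_cart", "checkout", "pay"],
    ["search", "view_item", "add_to_cart", "search", "exit"],
]
abandoned = [s for s in sessions
             if not eventually_follows(s, "add_to_cart", "pay")]
print(len(abandoned), "session(s) abandoned the cart")
```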
To automatically test web applications, crawling-based techniques are usually adopted to mine the behavior models, explore the state spaces, or detect the violated invariants of the applications. However, their broad use is limited by the required manual configurations for input value selection, GUI state comparison, and clickable detection. In existing crawlers, the configurations are usually string-matching based rules looking for tags or attributes of DOM elements, and are often application-specific. Moreover, in input topic identification, it can be difficult to determine which rule suggests a better match when several rules match an input field to more than one topic. This paper presents a natural-language approach based on semantic similarity to address the above issues. The proposed approach represents DOM elements as vectors in a vector space formed by the words used in the elements. The topics of input fields encountered during crawling can then be inferred from their similarities with those in a labeled corpus. Semantic similarity can also be applied to suggest whether a GUI state is newly discovered and whether a DOM element is clickable under an unsupervised learning paradigm. We evaluated the proposed approach in input topic identification with 100 real-world forms and in GUI state comparison with real data from industry. Our evaluation shows that the proposed approach has comparable or better performance than the conventional techniques. Experiments in input topic identification also show that the accuracy of the rule-based approach can be improved by up to 22% when integrated with our approach.
10 | WM17NXT10 | Using Semantic Similarity in Crawling-based Web Application Testing |  |
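The input-topic step above can be sketched with a plain vector-space similarity: words harvested from a DOM element form a vector, which is compared against labeled topic vectors, and the most similar topic wins. The bag-of-words cosine below is a simple stand-in for the paper's vector space; the corpus entries are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two bags of words."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def words(text):
    return Counter(text.lower().split())

# Labeled corpus of input-field topics (illustrative entries).
corpus = {
    "email":    words("email e-mail address contact mail"),
    "password": words("password pass pwd secret login"),
    "date":     words("date day month year birthday dob"),
}

# Words harvested from a DOM input element (id, name, label, placeholder).
field = words("your e-mail address")
topic = max(corpus, key=lambda t: cosine(corpus[t], field))
print(topic)    # email
```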
WEB SERVER |
Cloud applications built on service-oriented architectures generally integrate a number of component services to fulfill certain application logic. The changing cloud environment highlights the need for these applications to remain resilient against QoS variations of their component services so that end-to-end quality-of-service (QoS) can be guaranteed. Runtime service adaptation is a key technique to achieve this goal. To support timely and accurate adaptation decisions, effective and efficient QoS prediction is needed to obtain real-time QoS information of component services. However, current research has focused mostly on QoS prediction of working services that are being used by a cloud application, but little on predicting QoS values of candidate services that are equally important in determining optimal adaptation actions. In this paper, we propose an adaptive matrix factorization (namely AMF) approach to perform online QoS prediction for candidate services. AMF is inspired by the widely-used collaborative filtering techniques in recommender systems, but significantly extends the conventional matrix factorization model with new techniques of data transformation, online learning, and adaptive weights. Comprehensive experiments, as well as a case study, have been conducted based on a real-world QoS dataset of Web services (with over 40 million QoS records). The evaluation results demonstrate AMF's superiority in achieving accuracy, efficiency, and robustness, which are essential to enable optimal runtime service adaptation.
1 | WS17NXT01 | Online QoS Prediction for Runtime Service Adaptation via Adaptive Matrix Factorization |  |
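The online flavor of matrix-factorization QoS prediction can be sketched as a stream of SGD updates: each newly observed (user, service, QoS) record nudges the latent factors, so predictions track drifting QoS without retraining from scratch. AMF's data transformation and adaptive weights are omitted here; the dimensions, rates, and synthetic QoS model are illustrative.

```python
import random

def online_mf_update(U, V, user, item, qos, lr=0.01, reg=0.05):
    """One online SGD step of a matrix-factorization QoS predictor: refresh
    the latent factors as each new QoS observation streams in."""
    pred = sum(u * v for u, v in zip(U[user], V[item]))
    err = qos - pred
    for f in range(len(U[user])):
        u, v = U[user][f], V[item][f]
        U[user][f] += lr * (err * v - reg * u)
        V[item][f] += lr * (err * u - reg * v)

random.seed(0)
n_users, n_items, dim = 20, 15, 4
U = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_users)]
V = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_items)]

# Stream of (user, service, observed response time) records.
for _ in range(5000):
    u, i = random.randrange(n_users), random.randrange(n_items)
    observed = 0.1 * (u % 3) + 0.05 * (i % 4)   # synthetic ground truth
    online_mf_update(U, V, u, i, observed)

pred = sum(a * b for a, b in zip(U[3], V[7]))   # prediction vs. ground truth
print(round(pred, 3), round(0.1 * (3 % 3) + 0.05 * (7 % 4), 3))
```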
Attribute-based encryption, especially ciphertext-policy attribute-based encryption, can fulfill the functionality of fine-grained access control in cloud storage systems. Since users' attributes may be issued by multiple attribute authorities, multi-authority ciphertext-policy attribute-based encryption is an emerging cryptographic primitive for enforcing attribute-based access control on outsourced data. However, most of the existing multi-authority attribute-based systems are either insecure in attribute-level revocation or lack efficiency in communication overhead and computation cost. In this paper, we propose an attribute-based access control scheme with two-factor protection for multi-authority cloud storage systems. In our proposed scheme, any user can recover the outsourced data if and only if this user holds sufficient attribute secret keys with respect to the access policy and the authorization key in regard to the outsourced data. In addition, the proposed scheme enjoys the properties of constant-size ciphertext and small computation cost. Besides supporting attribute-level revocation, our proposed scheme allows the data owner to carry out user-level revocation. The security analysis, performance comparisons, and experimental results indicate that our proposed scheme is not only secure but also practical.
2 | WS17NXT02 | Two-factor Data Access Control with Efficient Revocation for Multi-authority Cloud Storage Systems |  |
Automatic resource provisioning is a challenging and complex task. It requires applications, services, and underlying platforms to be continuously monitored at multiple levels and time intervals. The complex nature of this task lies in the ability of the monitoring system to automatically detect runtime configuration changes in a cloud service due to elasticity action enforcement. Moreover, with the adoption of open cloud standards and library stacks, cloud consumers are now able to migrate their applications or even distribute them across multiple cloud domains. However, current cloud monitoring tools are either bound to specific cloud platforms or limited in their portability to provide elasticity support. In this article, we describe the challenges of monitoring elastically adaptive multi-cloud services. We then introduce a novel automated, modular, multi-layer, and portable cloud monitoring framework. Experiments on multiple clouds and real-life applications show that our framework is capable of automatically adapting when elasticity actions are enforced on either the cloud service or the monitoring topology. Furthermore, it is recoverable from faults introduced in the monitoring configuration, with proven scalability and a low runtime footprint. Most importantly, our framework is able to reduce network traffic by 41%, and consequently the monitoring cost, which is both billable and noticeable in large-scale multi-cloud services.
3 | WS17NXT03 | Monitoring Elastically Adaptive Multi-Cloud Services |  |
Spatial data have wide applications, e.g., location-based services, and geometric range queries (i.e., finding points inside geometric areas, e.g., circles or polygons) are one of the fundamental search functions over spatial data. The rising demand for outsourcing data is moving large-scale datasets, including large-scale spatial datasets, to public clouds. Meanwhile, due to the concern of insider attackers and hackers on public clouds, the privacy of spatial datasets should be cautiously preserved while querying them at the server side, especially for location-based and medical usage. In this paper, we formalize the concept of Geometrically Searchable Encryption, and propose an efficient scheme, named FastGeo, to protect the privacy of clients' spatial datasets stored and queried at a public server. With FastGeo, which is a novel two-level search over encrypted spatial data, an honest-but-curious server can efficiently perform geometric range queries and correctly return data points that are inside a geometric range to a client without learning sensitive data points or this private query. FastGeo supports arbitrary geometric areas, achieves sublinear search time, and enables dynamic updates over encrypted spatial datasets. Our scheme is provably secure, and our experimental results on real-world spatial datasets in a cloud platform demonstrate that FastGeo can speed up search by over 100 times.
4 | WS17NXT04 | FastGeo: Efficient Geometric Range Queries on Encrypted Spatial Data |  |
Over the last two decades, XML has gained momentum as the standard for web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as (a.k.a.) automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It consists of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, web and mobile services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding the XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity.
5 | WS17NXT05 | An Overview on XML Semantic Disambiguation from Unstructured Text to Semi-Structured Data: Background, Applications, and Ongoing Challenges |  |
DISTRIBUTED NETWORK |
In this paper, we show how to construct low-density lattice code (LDLC) lattices shaped using convolutional code lattices. First, we give an explicit method to find the generator matrices of convolutional code lattices. The shaping gain of convolutional code lattices based on rate-1/2 convolutional codes for short block lengths is found by evaluating the normalized second moment; a shaping gain as high as 1.24 dB for dimension n = 200 was found. Then, nested LDLC lattices are constructed. We design LDLC lattices that satisfy conditions necessary for forming nested lattice codes, and give a specific example. For an n = 36 dimensional lattice, a lattice code based on LDLC lattices, shaped using convolutional code lattices, has a shaping gain of 0.87 dB over hypercube shaping.
1 | DN17NXT01 | Shaping LDLC Lattices Using Convolutional Code Lattices |  |
This paper studies the performance of a massive MIMO system with pilot contamination in a multi-cell multiuser network. RCI (Regularized Channel Inversion) precoding is considered due to its good performance. Instead of using a large-system method, which assumes the number of antennas and users tends to infinity with a fixed ratio, we analyze the system performance in a more general situation where the number of transmit antennas M and the number of users K are finite. We first derive the closed-form expression of the optimal RCI regularization factor α_opt, and then study the impact of pilot contamination on the system performance. Results show that more factors must be considered in the finite situation. It is shown that our analysis is more accurate, especially for small values of M and K, and that our derived α_opt enables much better performance than the traditional large-system result.
2 | DN17NXT02 | Performance Analysis of RCI Precoding with Pilot Contamination in Finite Massive MIMO System |  |
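For reference, the RCI precoder itself is a one-liner in matrix form, W ∝ H^H (H H^H + αI)^(-1). The sketch below applies it with the classical large-system regularizer α = K/SNR, the very choice the paper's finite-(M, K) α_opt is designed to improve upon; the dimensions and normalization are illustrative.

```python
import numpy as np

def rci_precoder(H, alpha):
    """Regularized channel inversion: W = H^H (H H^H + alpha*I)^(-1),
    scaled to satisfy a unit total power constraint."""
    K = H.shape[0]                                   # number of users
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return W / np.linalg.norm(W)                     # power normalization

rng = np.random.default_rng(1)
M, K, snr = 8, 4, 10.0                               # finite M and K
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

# Classical large-system choice alpha = K/SNR; the paper derives a
# closed-form alpha_opt for finite M and K that outperforms it.
W = rci_precoder(H, alpha=K / snr)
print(np.round(np.abs(H @ W), 2))   # near-diagonal => suppressed interference
```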
This paper addresses the problem of resource and power allocation for hybrid access femtocells. We introduce a refund mechanism to incentivize femtocell base stations (FBSs) to serve macrocell users (MUEs) suffering from a low signal to interference and noise ratio (SINR) in order to enhance overall performance. Our goal is to guarantee quality of service for users while allowing spectrum sharing between the macrocell base station (MBS) and the underlying FBSs. We exploit overheard user channel quality indicator (CQI) reports using different channel models in order to assess the interfered channel state and channel parameter distribution. We analyze the distribution of the SINR for both femtocell users and MUEs. Based on the analytical results, our solution decomposes the scheduling and power allocation problem into two sub-problems and tackles them sequentially. Using problem reduction/transformation, we convert the decomposed problems into well-known reduced forms and provide solutions accordingly. Finally, we verify the presented results through simulations.
3 | DN17NXT03 | Scheduling and Power Allocation for Hybrid Access Cognitive Femtocells |  |
To address the exponentially rising demand for wireless content, the use of caching is emerging as a potential solution. It has been recently established that joint design of content delivery and storage (coded caching) can significantly improve performance over conventional caching. Coded caching is well suited to emerging heterogeneous wireless architectures which consist of a dense deployment of local-coverage wireless access points (APs) with high data rates, along with sparsely-distributed, large-coverage macro-cell base stations (BS). This enables design of coded caching-and-delivery schemes that equip APs with storage, and place content in them in a way that creates coded-multicast opportunities for combining with macro-cell broadcast to satisfy users even with different demands. Such coded-caching schemes have been shown to be order-optimal with respect to the BS transmission rate, for a system with single-level content, i.e., one where all content is uniformly popular. In this paper, we consider a system with non-uniform popularity content which is divided into multiple levels, based on varying degrees of popularity. The main contribution of this paper is the derivation of an order-optimal scheme which judiciously shares cache memory among files with different popularities. To show order-optimality we derive new information-theoretic lower bounds, which use a sliding-window entropy inequality, effectively creating a non-cut-set bound. We also extend the ideas to when users can access multiple caches along with the broadcast. Finally, we consider two extreme cases of user distribution across caches for the multi-level popularity model: a single user per cache (single-user setup) versus a large number of users per cache (multi-user setup), and demonstrate a dichotomy in the order-optimal strategies for these two extreme cases.
4 | DN17NXT04 | Coded Caching for Multi-level Popularity and Access |  |
The advancements in networking technologies and hand-held devices enable mobile users to concurrently receive real-time multimedia streaming (e.g., high-definition video) with different radio interfaces (e.g., Wi-Fi and LTE networks). The stream control transmission protocol (SCTP) is an important transport-layer solution to implement concurrent multipath transfer (CMT) over heterogeneous wireless networks with multihomed terminals. However, it is challenging to distribute multimedia content to resource-limited mobile devices because of the contradiction between energy consumption and streaming quality. To deliver energy-efficient and quality-aware multimedia streaming over multiple wireless networks, this paper presents an Energy and goodPut Optimized CMT (EPOC) solution. First, we develop an analytical framework to model the relationship between energy consumption and goodput performance for real-time multimedia transmission to multihomed mobile devices. Second, we propose a joint Forward Error Correction (FEC) coding and rate allocation scheme to minimize energy consumption while satisfying a goodput constraint. EPOC effectively leverages the energy-goodput tradeoff and multipath diversity to optimize mobile multimedia transmission in heterogeneous networking environments. We conduct the performance evaluation through extensive semi-physical emulations in Exata involving H.264 video streaming. Compared with the reference CMT schemes using SCTP, EPOC achieves appreciable improvements in energy conservation, goodput, and video Peak Signal-to-Noise Ratio (PSNR).
5 | DN17NXT05 | Energy-Aware Concurrent Multipath Transfer for Real-Time Video Streaming over Heterogeneous Wireless Networks |  |