IOSR Journal of Computer Engineering (IOSR-JCE)

International Conference on Future Technology in Engineering – ICFTE’16

Volume 2

Paper Type : Research Paper
Title : Digital Image Forgery Detection
Country : India
Authors : Archa Ajith || Bindhu J S

Abstract: A forgery detection scheme using feature point matching and adaptive over-segmentation is proposed in this paper, integrating keypoint-based and block-based forgery detection methods. The adaptive over-segmentation algorithm divides the host image into irregular, non-overlapping blocks in an adaptive manner. Feature points are then extracted from each block as block features, and these block features are matched with one another to locate labeled feature points, which indicate the suspected forgery regions...........

Keywords - Forgery region extraction, copy-move forgery, adaptive over-segmentation
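As a rough illustration of the block-matching idea behind such schemes, the sketch below divides an image into blocks and flags any block whose feature recurs elsewhere. The fixed 8×8 blocks and raw-pixel features are simplifying assumptions; the paper's method uses adaptive over-segmentation and extracted feature points instead.

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Flag suspected copy-move regions by matching identical block features.

    Simplified sketch: fixed square blocks and raw-pixel features stand in
    for adaptive over-segmentation and keypoint features.
    """
    h, w = img.shape
    seen = {}      # feature -> first block position
    matches = []   # (source_pos, duplicate_pos) pairs
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            feat = img[y:y + block, x:x + block].tobytes()
            if feat in seen:
                matches.append((seen[feat], (y, x)))
            else:
                seen[feat] = (y, x)
    return matches

# Forge an image by copying one block onto another location.
img = np.arange(32 * 32, dtype=np.int32).reshape(32, 32)
img[16:24, 16:24] = img[0:8, 0:8]          # the forgery
print(detect_copy_move(img))               # reports the duplicated block pair
```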

[1] S. Bayram, H. T. Sencar, and N. Memon, "An efficient and robust method for detecting copy-move forgery," in Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, 2009, pp. 1053-1056.
[2] G. Li, Q. Wu, D. Tu, and S. Sun, "A sorted neighborhood approach for detecting duplicated regions in image forgeries based on DWT and SVD," in Multimedia and Expo, 2007 IEEE International Conference on, 2007, pp. 1750-1753.
[3] A. C. Popescu and H. Farid, "Exposing digital forgeries by detecting duplicated image regions," Dept. Comput. Sci., Dartmouth College, Tech. Rep. TR2004-515, 2004
[4] B. Shivakumar and L. D. S. S. Baboo, "Detection of region duplication forgery in digital images using SURF," IJCSI International Journal of Computer Science Issues, vol. 8, 2011
[5] S. Ryu, M. Lee, and H. Lee, "Detection of copy-rotate-move forgery using Zernike moments," in Information Hiding, 2010, pp. 51-65.

Paper Type : Research Paper
Title : Progressive Algorithms for Efficient Duplicate Detection
Country : India
Authors : Anusha Kenno || Bindhu J S

Abstract: Duplicate detection is the technique of identifying all groups of records within a dataset that represent the same real-world entity. Duplicate detection methods can process large datasets in a short time, but maintaining the quality of the dataset becomes difficult, which is a major data-quality concern in large databases. To address this, a progressive algorithm has been proposed that significantly increases the efficiency of finding duplicates when the execution time is limited and improves the quality of the records..........
Keywords: Duplicate detection, blocking, progressive algorithm, windowing, data cleaning
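The progressive idea can be sketched with a sorted-neighborhood pass that widens its comparison window step by step, so the most promising pairs are compared first and duplicates surface early if the time budget expires. The record set, the difflib similarity measure, and the 0.9 threshold are illustrative assumptions, not the paper's algorithm.

```python
from difflib import SequenceMatcher

def progressive_duplicates(records, max_window=3, threshold=0.9):
    """Progressive sorted-neighborhood sketch: emit likely duplicates early.

    Records are sorted once; pairs at sorting distance 1 are compared first,
    then distance 2, and so on, so the most promising comparisons happen
    before any execution-time limit is reached.
    """
    order = sorted(range(len(records)), key=lambda i: records[i])
    for dist in range(1, max_window + 1):          # widen the window stepwise
        for pos in range(len(order) - dist):
            a, b = order[pos], order[pos + dist]
            sim = SequenceMatcher(None, records[a], records[b]).ratio()
            if sim >= threshold:
                yield tuple(sorted((a, b)))        # pair of record indices

people = ["jon smith", "john smith", "mary jones", "marry jones", "alice wu"]
print(list(progressive_duplicates(people)))
```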

[1] S. E. Whang, D. Marmaros, and H. Garcia-Molina, "Pay-as-you-go entity resolution," IEEE Trans. Knowl. Data Eng., vol. 25, no. 5, pp. 1111–1124, May 2013.
[2] U. Draisbach, F. Naumann, S. Szott, and O. Wonneberg, "Adaptive windows for duplicate detection," in Proc. IEEE 28th Int. Conf. Data Eng., 2012, pp. 1073–1083.
[3] J. Madhavan, S. R. Jeffery, S. Cohen, X. Dong, D. Ko, C. Yu, and A. Halevy, Web-scale data integration: You can only afford to pay as you go, in Proc. Conf. Innovative Data Syst. Res., 2007.
[4] U. Draisbach and F. Naumann, "A generalization of blocking and windowing algorithms for duplicate detection," in Proc. Int. Conf. Data Knowl. Eng., 2011, pp. 18–24.
[5] C. Xiao, W. Wang, X. Lin, and H. Shang, "Top-k set similarity joins," in Proc. IEEE Int. Conf. Data Eng., 2009, pp. 916–927.

Paper Type : Research Paper
Title : Effective Query Answering for Graph Patterns Using Views
Country : India
Authors : Reshma Ravi || Remya R

Abstract: Queries over relational and semi-structured data can be effectively answered using Answering Queries Using Views. This paper focuses on the problem of solving graph pattern queries with the help of graph simulation. A pattern query can be answered using views if and only if the pattern query is contained in the views. This paper investigates efficient algorithms for determining the containment problem (minimal and minimum) of pattern queries, along with a maximally contained rewriting algorithm for finding approximate answers. These methods can efficiently answer queries on large real-world graphs.

Keywords- Pattern containment problem, Views, Graph pattern query, Directed Graph.
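Graph simulation, on which this line of work rests, can be sketched as a standard fixpoint computation; below, a hypothetical two-node pattern is matched against a small data graph. This is the textbook maximum-simulation routine, not the paper's view-based rewriting algorithm.

```python
def graph_simulation(q_nodes, q_edges, g_nodes, g_edges):
    """Compute the maximum graph simulation of pattern Q in data graph G.

    q_nodes/g_nodes map node id -> label; edges are sets of (u, v) pairs.
    Returns {pattern node: set of matching graph nodes}; an empty set
    means the pattern node has no match.
    """
    # Start from label-compatible candidates, then prune to a fixpoint.
    sim = {u: {v for v in g_nodes if g_nodes[v] == q_nodes[u]} for u in q_nodes}
    changed = True
    while changed:
        changed = False
        for (u, u2) in q_edges:                 # every pattern edge u -> u2
            for v in list(sim[u]):
                # v must have some successor that matches u2
                if not any((v, w) in g_edges and w in sim[u2] for w in g_nodes):
                    sim[u].remove(v)
                    changed = True
    return sim

# Pattern: PM -> DBA.  Data graph: PM node 1 -> DBA node 2; PM node 3 has
# no outgoing DBA edge, so it is pruned from the match set.
q = graph_simulation({"pm": "PM", "dba": "DBA"}, {("pm", "dba")},
                     {1: "PM", 2: "DBA", 3: "PM"}, {(1, 2)})
print(q)   # {'pm': {1}, 'dba': {2}}
```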

[1]. A. Y. Halevy, "Answering queries using views: A survey," VLDB J., vol. 10, no. 4, pp. 270–294, 2001.
[2]. X. Wu, D. Theodoratos, and W. H. Wang, "Answering XML queries using materialized views revisited," in Proc. 18th ACM Conf. Inf. Knowl. Manage., 2009, pp. 475–484.
[3]. W. Fan, X. Wang, and Y. Wu, "Distributed graph simulation: Impossibility and possibility," Proc. VLDB Endowment, vol. 7, no. 12, pp. 1083–1094, 2014.
[4]. Y. Papakonstantinou and V. Vassalos, "Query rewriting for semistructured data," in Proc. ACM SIGMOD Int. Conf. Manage. Data, 1999, pp. 455–466.
[5]. J. Wang, J. X. Yu, and J. Li, "Answering tree pattern queries using views: A revisit," in Proc. 14th Int. Conf. Extending Database Technol., 2011, pp. 153–164.

Paper Type : Research Paper
Title : A Distributed Representation Framework for Mining Human Trajectory Data
Country : India
Authors : Revathy S.B || Remya R

Abstract: Trajectory data represents the traces of moving objects. Nowadays, various technologies provide methods to track human mobility, including user-generated geo-tagged content, check-in services, and mobile apps. Trajectory data has many applications, such as location recommendation, social link prediction, behavior analysis, and path discovery. Trajectory data mining is a challenging task due to the complex characteristics of human mobility. Trajectory data is a kind of sequential data, and the surrounding contexts must be considered for trajectory mining...............

Keywords: Trajectory data , location recommendation, location prediction, contextual information

[1]. M. Ye, P. Yin, W.-C. Lee, and D.-L. Lee, Exploiting geographical influence for collaborative point-of-interest recommendation, in Proc. 34th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 2011, pp. 325–334.
[2]. P. Bhargava, T. Phan, J. Zhou, and J. Lee, Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data, in Proc. Int. Conf. World Wide Web, 2015, pp. 130–140.
[3]. Q. V. Le and T. Mikolov, Distributed representations of sentences and documents, in Proc. Int. Conf. Mach. Learn., 2014, pp. 1188–1196.
[4]. H. Yin, Y. Sun, B. Cui, Z. Hu, and L. Chen, LCARS: A location-content-aware recommender system, in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2013, pp. 221–229.
[5]. H. Pham, C. Shahabi, and Y. Liu, EBM: An entropy-based model to infer social strength from spatiotemporal data, in Proc. ACM SIGMOD Int. Conf. Manage. Data, 2013, pp. 265–276.

Paper Type : Research Paper
Title : Building a Fingerprint Based Deduplication Detection and Elimination Scheme
Country : India
Authors : Jisha Mariam Jacob || Sowmya K S

Abstract: As digital data grows uncontrollably, data reduction has become an important task in storage systems. For large-scale data reduction, it is important to maximally detect and eliminate redundancy at low overheads. Data deduplication is a data reduction technique that reduces storage space by eliminating redundant data so that only one instance of the data is retained on the storage media. Delta compression is an efficient method for removing redundancy among non-duplicate but very similar data files and chunks...........

Keywords:- Data deduplication, Data reduction, Delta compression, Fingerprint, Resemblance detection, Super-feature.
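The core deduplication step can be sketched as fingerprint lookup: each chunk is hashed, and a chunk whose fingerprint is already stored is replaced by a reference. Fixed-size chunking and SHA-1 are simplifying assumptions; production systems use content-defined chunking and, as the abstract notes, delta compression for near-duplicate chunks.

```python
import hashlib

def deduplicate(data, chunk_size=4):
    """Fingerprint-based deduplication sketch: store each unique chunk once
    and represent the stream as an ordered list of fingerprints."""
    store = {}      # fingerprint -> chunk bytes (the single retained copy)
    recipe = []     # fingerprints, in order, to reconstruct the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)        # keep only the first instance
        recipe.append(fp)
    return store, recipe

store, recipe = deduplicate(b"ABCDABCDEFGH")
print(len(recipe), "chunks referenced,", len(store), "stored")  # 3 chunks referenced, 2 stored
restored = b"".join(store[fp] for fp in recipe)
assert restored == b"ABCDABCDEFGH"         # lossless reconstruction
```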

[1]. The data deluge [Online]. Available:
[2]. J. Gantz and D. Reinsel, Extracting value from chaos, IDC Rev., vol. 1142, pp. 1–12, 2011.
[3]. L. DuBois, M. Amaldas, and E. Sheppard, Key considerations as deduplication evolves into primary storage, White Paper 223310, Framingham, MA, USA: IDC, Mar. 2011.
[4]. S. Quinlan and S. Dorward, Venti: A new approach to archival storage, in Proc. USENIX Conf. File Storage Technol., Jan. 2002, pp. 89–101.
[5]. D. T. Meyer and W. J. Bolosky, A study of practical deduplication, ACM Trans. Storage, vol. 7, no. 4, p. 14, 2012.

Paper Type : Research Paper
Title : Deep Learning for the identification of Interstitial Lung Diseases
Country : India
Authors : Sruthy P S || Dr.Dheeba J

Abstract: Computer-Aided Diagnosis (CAD) systems for Interstitial Lung Diseases (ILDs) have been proposed to enhance the accuracy of ILD diagnosis by physicians, since automatic tissue characterization is a crucial component of a CAD system. To raise the quality of medical image analysis for the classification of lung patterns, the concept of a deep Convolutional Neural Network (CNN) is presented. The CNN, designed for the classification of interstitial lung diseases, consists of five convolutional layers with 2×2 kernels and LeakyReLU activation functions, and uses the Adam optimizer algorithm for the classification of ILD patterns. Experimental results prove the superior performance and efficiency of the proposed approach through a comparative analysis of the CNN against previous methods.

Keywords: Interstitial Lung Diseases, Convolutional Neural Network, texture classification.
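The stated building blocks (a 2×2 convolution kernel followed by a LeakyReLU activation) can be sketched in NumPy as follows. This shows a single layer's computation on a toy single-channel input, not the paper's full five-layer network or its Adam-based training.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """LeakyReLU: passes positives unchanged, scales negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def conv2d_2x2(img, kernel):
    """Valid 2D cross-correlation with a 2x2 kernel (single channel,
    stride 1), as in each of the paper's five convolutional layers."""
    h, w = img.shape
    out = np.empty((h - 1, w - 1))
    for y in range(h - 1):
        for x in range(w - 1):
            out[y, x] = np.sum(img[y:y + 2, x:x + 2] * kernel)
    return out

img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
kernel = np.array([[1., 0.],
                   [0., -1.]])          # simple diagonal-difference kernel
feat = leaky_relu(conv2d_2x2(img, kernel))
print(feat)
```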

[1] R. Uppaluri et al., "Computer recognition of regional lung disease patterns," Am. J. Respir. Crit. Care Med., vol. 160, no. 2, pp. 648–654, 1999.
[2] K. R. Heitmann, "Automatic detection of ground glass opacities on lung HRCT using multiple neural networks," Eur. Radiol., vol. 7, no. 9, pp. 1463–1472, 1997.
[3] M. Demedts and U. Costabel, "ATS/ERS international multidisciplinary consensus classification of the idiopathic interstitial pneumonias," Eur. Respiratory J., vol. 19, no. 5, pp. 794–796, 2002.
[4] Y. Xu, "Computer-aided classification of interstitial lung diseases via MDCT: 3D adaptive multiple feature method (3D AMFM)," Acad. Radiol., vol. 13, no. 8, pp. 969–978, 2006.
[5] P. D. Korfiatis, A. N. Karahaliou, A. D. Kazantzi, C. Kalogeropoulou, and L. I. Costaridou, "Texture-based identification and characterization of interstitial pneumonia patterns in lung multidetector CT," IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 3, pp. 675–680, May 2010.

Paper Type : Research Paper
Title : A Novel Approach to Improving Security and Content Access Control in Named Data Networking
Country : India
Authors : Shyama Francis || Deepa K Daniel

Abstract: Named Data Networking (NDN) is an emerging technique for the future Internet. In NDN, the use of IP addresses is replaced by content names. Content accessed through NDN is secure because contents are signed by the content provider before delivery, and NDN nodes verify the integrity and authenticity of the content by verifying the signature associated with it. Traditional heavyweight signature generation and verification algorithms are not appropriate for this task; as a result, content pollution and denial-of-service attacks occur in NDN. Caching and location-independent content access...........

Keywords— Access control, content provider, Data security, NDN
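One classic lightweight alternative to per-packet signatures, cited as reference [3] below, is a Merkle tree: the producer signs a single root hash that authenticates every content segment. A minimal sketch, assuming SHA-256 and with an unpaired node promoted unchanged:

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle-tree root over content segments: signing one root
    authenticates every segment, avoiding a signature per packet."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(h(level[i] + level[i + 1]))   # hash the pair
            else:
                nxt.append(level[i])                     # odd node promoted
        level = nxt
    return level[0]

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
root = merkle_root(segments)
# A consumer recomputing the root from received segments detects tampering:
assert merkle_root(segments) == root
assert merkle_root([b"seg0", b"EVIL", b"seg2", b"seg3"]) != root
```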

[1] V. Jacobson, D. K. Smetters, J. D. Thornton, M. Plass, N. Briggs, and R. Braynard, "Networking named content," Commun. ACM, vol. 55, no. 1, pp. 117–124, 2012.
[2] N. Fotiou, G. F. Marias, and G. C. Polyzos, "Access control enforcement delegation for information-centric networking architectures," in Proc.ACM SIGCOMM Workshop Inf.-Centric Netw., 2012, pp. 85–90.
[3] R. C. Merkle, "A digital signature based on a conventional encryption function," in Proc. CRYPTO, 1987, pp. 369–378.
[4] K. Zhang, "Efficient protocols for signing routing messages," in Proc. NDSS, 1998, pp. 1–7.
[5] S. DiBenedetto, P. Gasti, G. Tsudik, and E. Uzun, "ANDaNA: Anonymous named data networking application," in Proc. NDSS, 2012.

Paper Type : Research Paper
Title : Well-Organized Privacy Preserving and Corrupted Packet Dropping in Networks
Country : India
Authors : Shilpa Ann Varghese || Devi Dath

Abstract: In networks, security is always a serious concern, and proper security must be provided to safeguard them. Link errors and corrupted packets are the two main issues, and packet dropping is one of the most serious. When a packet loss happens, we have to check whether it happened due to a link error alone or due to a combination of link errors and corrupted packets. In this paper we focus on the insider attack. To boost the accuracy of detection, we consider the correlation between the missing packets, and to safeguard the truthful calculation of these correlations, we develop a Homomorphic Linear Authenticator (HLA).

Keywords - Identification, Corrupted Packet, Homomorphic Linear Authenticator, Link Error, Packet Missing.
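The property an HLA provides can be illustrated with a toy linear tag: because tagging is linear, tags of individual packets combine into a valid tag for any linear combination of those packets, letting an auditor verify aggregate reports without seeing the packets. The modulus, key, and tag function below are purely illustrative; real HLA constructions rely on public-key cryptography.

```python
# Toy illustration of the homomorphic property an HLA relies on.
# The secret key k and prime modulus p are illustrative only.
p = 2_147_483_647          # a Mersenne prime as the working modulus
k = 123_456_789            # toy secret tagging key

def tag(m):
    """Linear tag over Z_p: tag(a*m1 + b*m2) == a*tag(m1) + b*tag(m2)."""
    return (k * m) % p

m1, m2 = 42, 99            # two packet contents (as numbers)
a, b = 7, 13               # public combination coefficients
combined_msg = (a * m1 + b * m2) % p
combined_tag = (a * tag(m1) + b * tag(m2)) % p
assert combined_tag == tag(combined_msg)   # homomorphic check succeeds
print("linear combination verified")
```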

[1] R. Rao and G. Kesidis, "Detecting malicious packet dropping using statistically regular traffic patterns in multihop wireless networks that are not bandwidth limited," in Proc. IEEE GLOBECOM Conf., 2003, pp. 2957–2961.
[2] L. Buttyan and J. P. Hubaux, "Stimulating cooperation in self-organizing mobile ad hoc networks," ACM/Kluwer Mobile Networks and Applications, vol. 8, no. 5, pp. 579–592, Oct. 2003.
[3] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-preserving public auditing for data storage security in cloud computing," in Proc. IEEE INFOCOM Conf., Mar. 2010, pp. 1–9.
[4] J. Eriksson, M. Faloutsos, and S. Krishnamurthy, "Routing amid colluding attackers," in Proc. IEEE Int. Conf. Netw. Protocols, 2007, pp. 184–193.
[5] W. Kozma Jr. and L. Lazos, "REAct: Resource-efficient accountability for node misbehavior in ad hoc networks based on random audits," in Proc. ACM Conf. Wireless Netw. Secur., 2009, pp. 103–110.

Paper Type : Research Paper
Title : A Review on Top-K Dominating Queries on Incomplete Data
Country : India
Authors : Sreelekshmi B || Anoop S

Abstract: Data mining is a powerful way to discover knowledge within large amounts of data. Incomplete data is common, so finding and querying such data has recently become important. The top-k dominating (TKD) query returns the k objects that dominate the maximum number of objects in a given dataset. It merges the advantages of skyline and top-k queries and plays an important role in many decision-support applications. Incomplete data occurs in real datasets due to device failure, privacy preservation, data loss............

Keywords: Algorithm , Dominance relationship, Incomplete data, Query processing, Top-k dominating query.
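On complete data, the TKD query can be sketched by brute force: count, for each object, how many others it dominates, and return the k highest counts. The hotel tuples and the smaller-is-better convention are illustrative assumptions; the surveyed algorithms handle incomplete data, which this sketch does not.

```python
def top_k_dominating(points, k):
    """Top-k dominating query: return the k points that dominate the most
    other points. p dominates q if p <= q on every dimension and p < q on
    at least one (smaller is better). Brute-force, complete-data sketch."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    # Score each point by its dominance count, then take the top k.
    scored = [(sum(dominates(p, q) for q in points), p) for p in points]
    scored.sort(key=lambda s: -s[0])
    return [p for _, p in scored[:k]]

hotels = [(1, 1), (2, 3), (3, 2), (4, 4)]   # e.g. (price, distance)
print(top_k_dominating(hotels, 2))          # (1, 1) dominates all three others
```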

[1]. W. Zhang, X. Lin, Y. Zhang, J. Pei, and W. Wang, "Threshold-based probabilistic top-k dominating queries," The Int. J. Very Large Data Bases, vol. 19, no. 2, pp. 283–305, 2010.
[2]. M. Kontaki, A. N. Papadopoulos, and Y. Manolopoulos, "Continuous top-k dominating queries," IEEE Trans. Knowl. Data Eng., vol. 24, no. 5, pp. 840–853, May 2012.
[3]. D. Papadias, Y. Tao, G. Fu, and B. Seeger, "Progressive skyline computation in database systems," ACM Trans. Database Syst., vol. 30, no. 1, pp. 41–82, 2005.
[4]. M. L. Yiu and N. Mamoulis, "Efficient processing of top-k dominating queries on multi-dimensional data," in Proc. 33rd Int. Conf. Very Large Data Bases, 2007, pp. 483–494.
[5]. M. L. Yiu and N. Mamoulis, "Multi-dimensional top-k dominating queries," The Int. J. Very Large Data Bases, vol. 18, no. 3, pp. 695–718, 2009.

Paper Type : Research Paper
Title : Anaphora Method Based Context-Diversification for Keyword Queries
Country : India
Authors : Mrudula.M.S || Anoop.S

Abstract: Keyword-based searching is an important part of the research domain. The search can be applied to structured and/or semi-structured information, and queries are used to fetch large amounts of data from databases. Due to the ambiguity problem, systems cannot effectively answer short and vague queries. To solve this ambiguity problem, different contexts in XML data are used. Here an XML keyword diversification model based on the Anaphora method is proposed. Anaphora is the phenomenon of referring to an antecedent (metonymically, also a referring expression); its subtypes are pronouns and definite NPs.........

Keywords – Anaphora method, XML Data, keyword query.

[1]. Y. Chen, W. Wang, Z. Liu, and X. Lin, "Keyword search on structured and semi-structured data," in Proc. SIGMOD Conf., 2009, pp. 1005–1010.
[2]. L. Guo, F. Shao, C. Botev, and J. Shanmugasundaram, "XRANK: Ranked keyword search over XML documents," in Proc. SIGMOD Conf., 2003, pp. 16–27.
[3]. C. Sun, C. Y. Chan, and A. K. Goenka, "Multiway SLCA-based keyword search in XML data," in Proc. 16th Int. Conf. World Wide Web, 2007, pp. 1043–1052.
[4]. Y. Xu and Y. Papakonstantinou, "Efficient keyword search for smallest LCAs in XML databases," in Proc. SIGMOD Conf., 2005, pp. 537–538.
[5]. J. Li, C. Liu, R. Zhou, and W. Wang, "Top-k keyword search over probabilistic XML data," in Proc. IEEE 27th Int. Conf. Data Eng., 2011, pp. 673–684.

Paper Type : Research Paper
Title : A Review on Rule Based Method for Entity Resolution
Country : India
Authors : Sreejam M || Praveen K Wilson

Abstract: In real-world scenarios an entity may appear in multiple data sources, so the entity may have quite different descriptions. Hence, it is necessary to identify the records referring to the same real-world entity, which is termed Entity Resolution (ER). This paper highlights ER as one of the most important problems in data cleaning, arising in many applications such as information integration and information retrieval. Traditional ER approaches rely on pairwise similarity comparisons, assuming that records referring to the same entity are more similar to each other than otherwise, and can be ineffective at finding such records........

Keywords: Data Cleaning, Entity Resolution, Rule Learning
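The contrast with a single pairwise-similarity threshold can be sketched with hand-written matching rules: a rule may declare a match on strong evidence (an identical phone number) even when overall string similarity is modest. The record fields, the two rules, and the 0.85 cut-off are illustrative assumptions, not the learned rules discussed in the paper.

```python
from difflib import SequenceMatcher

def same_entity(r1, r2):
    """Rule-style ER sketch: two records match if a hand-written rule
    fires, rather than by one global similarity threshold."""
    name_sim = SequenceMatcher(None, r1["name"].lower(),
                               r2["name"].lower()).ratio()
    same_phone = r1["phone"].replace("-", "") == r2["phone"].replace("-", "")
    # Rule 1: identical phone numbers identify the entity outright.
    # Rule 2: otherwise require a strongly similar name AND the same city.
    return same_phone or (name_sim >= 0.85 and r1["city"] == r2["city"])

a = {"name": "Jon Smith",  "phone": "555-0100", "city": "Pune"}
b = {"name": "John Smith", "phone": "5550100",  "city": "Pune"}
c = {"name": "John Smith", "phone": "555-0199", "city": "Delhi"}
print(same_entity(a, b), same_entity(b, c))   # True False
```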

[1] N. Koudas, S. Sarawagi, and D. Srivastava, "Record linkage: Similarity measures and algorithms," in Proc. ACM SIGMOD Int. Conf. Manage. Data, 2006, pp. 802–803.
[2] M. Bilenko and R. J. Mooney, "Adaptive duplicate detection using learnable string similarity measures," in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2003, pp. 39–48.
[3] W. W. Cohen, "Integration of heterogeneous databases without common domains using queries based on textual similarity," ACM SIGMOD Rec., vol. 27, no. 2, pp. 201–212, 1998.
[4] L. Gravano, P. G. Ipeirotis, N. Koudas, and D. Srivastava, "Text joins in an RDBMS for web data integration," in Proc. 12th Int. Conf. World Wide Web, 2003, pp. 90–101.
[5] M. A. Jaro, "Advances in record-linkage methodology as applied to matching the 1985 census of Tampa, Florida," J. Amer. Statist. Assoc., vol. 84, no. 406, pp. 414–420, 1989.

Paper Type : Research Paper
Title : A Survey Paper on Clustering Data using Incremental Affinity Propagation
Country : India
Authors : Sreeja Ajithkumar || Praveen K Wilson

Abstract: Clustering is a vital part of data mining and is widely used in different applications. In this paper we focus on the affinity propagation (AP) clustering algorithm, which was presented recently to overcome many clustering problems in different clustering applications. Many clustering applications deal with static data, and AP clustering supports only static-data applications; hence it becomes a research problem how to deal with incremental (dynamic) data using AP. To solve this problem, Incremental Affinity Propagation (IAP) clustering was recently proposed. Two strategies are proposed to overcome the difficulties in IAP clustering..........

Keywords: Affinity propagation, Data streams, Incremental Affinity Propagation, K-Medoid, Nearest neighbor assignment.
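For reference, the static AP algorithm that IAP extends passes two kinds of messages, responsibilities and availabilities, until exemplars emerge. A bare-bones NumPy sketch on a tiny 1-D dataset follows; the damping factor, iteration count, and preference value are illustrative choices, and the incremental strategies surveyed above are not shown.

```python
import numpy as np

def affinity_propagation(S, iters=200, damping=0.9):
    """Bare-bones affinity propagation on similarity matrix S (with
    exemplar preferences on the diagonal). Returns, for each point,
    the index of its chosen exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: how suited k is to serve as exemplar for i.
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: accumulated evidence that k is a good exemplar.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        col = Rp.sum(axis=0)
        Anew = np.minimum(0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - Rp.diagonal())
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)

# Two well-separated 1-D groups should elect two exemplars.
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -np.abs(x[:, None] - x[None, :]) ** 2    # negative squared distance
np.fill_diagonal(S, -1.0)                    # illustrative preference
labels = affinity_propagation(S)
print(labels)                                 # e.g. one label per group
```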

[1] X. Zhang, C. Furtlehner, and M. Sebag, "Frugal and Online Affinity Propagation," Proc. Conf. Francophone sur l'Apprentissage (CAP '08), 2008.
[2] J. Beringer and E. Hullermeier, "Online Clustering of Parallel Data Streams," Data and Knowledge Eng., vol. 58, no. 2, pp. 180-204, Aug. 2006.
[3] X.H. Shi, R.C. Guan, L.P. Wang, Z.L. Pei, and Y.C. Liang, "An Incremental Affinity Propagation Algorithm and Its Applications for Text Clustering," Proc. Int'l Joint Conf. Neural Networks (IJCNN '09), pp. 2914-2919, June 2009.
[4] A.M. Bagirov, J. Ugon, and D. Webb, "Fast Modified Global k-Means Algorithm for Incremental Cluster Construction," Pattern Recognition, vol. 44, no. 4, pp. 866-876, Nov. 2011.
[5] H. Geng, X. Deng, and H. Ali, "A New Clustering Algorithm Using Message Passing and Its Applications in Analyzing Microarray Data," Proc. Fourth Int'l Conf. Machine Learning and Applications (ICMLA '05), 2005.

Paper Type : Research Paper
Title : Taint Analysis and False Positive Prediction for Web Application Vulnerability Deportation
Country : India
Authors : Ameena.S || Devi Dath

Abstract: Web applications are increasing day by day, and they play a prominent role in the life of the common man. It is required to keep web applications secure from security loopholes. Web applications that reside on the web server have access to the database for executing user queries. Along with the increase in security protection alternatives, hackers are trying all possible ways to breach security. Hackers are particularly interested in attacks like SQL injection (SQLi), which can make alterations in the source code and access the database, affecting the security of the entire web application.............

Keywords: False positives, Sanitization function, Static analysis, Taint analysis, Web Application.
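The taint-analysis idea can be sketched at runtime: values from untrusted sources carry a taint mark that only a sanitization function removes, and the query sink rejects tainted input. The Tainted wrapper, sanitize(), and sink_query() below are hypothetical stand-ins for the static analysis the paper describes, which tracks the same source-to-sink flows without executing the program.

```python
class Tainted(str):
    """A string that remembers it came from an untrusted source."""

def source(user_input):
    return Tainted(user_input)                 # taint enters at the source

def sanitize(value):
    # Escaping removes the taint; str() drops the Tainted subclass.
    return str(value.replace("'", "''"))

def sink_query(fragment):
    # The SQL sink refuses any value still carrying taint.
    if isinstance(fragment, Tainted):
        raise ValueError("tainted data reached SQL sink")
    return "SELECT * FROM users WHERE name = '%s'" % fragment

raw = source("alice' OR '1'='1")
print(sink_query(sanitize(raw)))               # sanitized: allowed
try:
    sink_query(raw)                            # unsanitized: flagged
except ValueError as e:
    print("blocked:", e)
```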

[1] A. Sabelfeld and A. C. Myers, Language-based information-flow security, IEEE J. Sel. Areas Commun., 21(1), 2003, 5–19.
[2] Y.-W. Huang et al., Securing web application code by static analysis and runtime protection, Proc. 13th Int. Conf. World Wide Web, 2004, 40–52
[3] W. Halfond and A. Orso, AMNESIA: analysis and monitoring for neutralizing SQL-injection attacks, Proc. 20th IEEE/ACM Int. Conf. Automated Software Engineering, 2005, 174–183.
[4] S. Neuhaus, T. Zimmermann, C. Holler, and A. Zeller, Predicting vulnerable software component, Proc. 14th ACM Conf. Computer and Communications Security, 2007, 529–540.
[5] S. Lessmann, B. Baesens, C. Mues, and S. Pietsch, Benchmarking classification models for software defect prediction: A proposed framework and novel findings, IEEE Trans. Softw. Eng., 34(4), 2008, 485–496.

IOSR Journals are published both in online and print versions.