IOSR Journal of Computer Engineering (IOSR-JCE)

July - Aug 2014 Volume 16 - Issue 4


Paper Type : Research Paper
Title : Classification and Quality Analysis of Food Grains
Country : India
Authors : Megha R. Siddagangappa, Assoc. Prof. A. H. Kulkarni
DOI : 10.9790/0661-16430110
Abstract: In the present grain-handling scenario, grain type and quality are identified manually by visual inspection, which is tedious and inaccurate. There is a need for a fast, accurate and objective system for determining the quality of food grains. An automated system is introduced that identifies rice type (i.e. Basmati, Boiled and Delhi) and grade (i.e. grade 1, grade 2 and grade 3) using a Probabilistic Neural Network. This paper proposes a model that uses color and geometrical features as attributes for classification.
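The abstract names a Probabilistic Neural Network over colour and geometrical features. A minimal sketch of how a PNN classifies, assuming hypothetical two-dimensional feature vectors and kernel width (the paper's actual feature set is not given here):

```python
import math

def pnn_classify(sample, train, sigma=0.35):
    """Probabilistic Neural Network: score each class with a
    Parzen-window (Gaussian kernel) density estimate, pick the max."""
    scores = {}
    for label, patterns in train.items():
        s = 0.0
        for p in patterns:
            d2 = sum((a - b) ** 2 for a, b in zip(sample, p))
            s += math.exp(-d2 / (2 * sigma ** 2))
        scores[label] = s / len(patterns)
    return max(scores, key=scores.get)

# Hypothetical feature vectors: (mean hue, length/width ratio), both
# normalised to [0, 1] -- stand-ins for colour and geometrical attributes.
train = {
    "basmati": [(0.20, 0.90), (0.22, 0.85), (0.18, 0.95)],
    "boiled":  [(0.55, 0.40), (0.60, 0.45), (0.58, 0.38)],
}
print(pnn_classify((0.21, 0.88), train))   # lands in the basmati cluster
```

Each class's score is the average kernel response over its training patterns, so no iterative training phase is needed, one reason PNNs suit this kind of grading task.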

[1]. Neelamma K. Patil, Ravi M. Yadahalli, "Classification of Food Grains Using HSI Color Model by Combining Color and Texture", Third International Conference on Computer Vision, Pattern Recognition and Graphics, 2011.
[2]. Lai FS, Zayas I, Pomeranz Y, Application of pattern recognition techniques in the analysis of cereal grains, Cereal Chemistry, 1986; 63(2): 168-174.
[3]. Zayas I., Pomeranz Y., and F. S. Lai, "Discriminate between wheat and non-wheat components in a grain sample", Cereal Chem., vol. 66, no. 3, 1989.
[4]. N. S. Visen, J. Paliwal, D. S. Jayas, and N. D. G. White, "Image analysis of bulk grain samples using neural networks", Can. BioSyst. Eng., vol. 46, 2004.

Paper Type : Research Paper
Title : Survey on Symmetric and Asymmetric Key Cryptosystems
Country : India
Authors : Srinivas Madhira, Porika Sammulal
DOI : 10.9790/0661-16431118

Abstract : Cryptography is the collection of techniques used to hide information securely from eavesdroppers during its transmission over the network. Cryptography is centuries old, and over the course of time a number of techniques have been proposed and developed by researchers. Some of these techniques have become popular and are widely used today in a variety of applications. This paper discusses the two most important categories of techniques: symmetric and asymmetric cryptosystems.
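The two categories can be contrasted in a few lines. This is an illustrative toy only, an XOR keystream for the symmetric case and textbook RSA with tiny primes for the asymmetric case, not a secure implementation of either:

```python
# Symmetric: one shared secret key both encrypts and decrypts.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"attack at dawn"
assert xor_cipher(xor_cipher(msg, b"secret"), b"secret") == msg

# Asymmetric: textbook RSA with toy primes (p=61, q=53).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (modular inverse)
m = 65                      # a message block, m < n
c = pow(m, e, n)            # encrypt with the public key (e, n)
assert pow(c, d, n) == m    # decrypt with the private key (d, n)
```

The symmetric half needs the same key at both ends; the asymmetric half lets anyone encrypt with (e, n) while only the private-key holder can decrypt, the key-distribution difference the survey turns on.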

Keywords: symmetric cryptosystem, asymmetric cryptosystem, secret key, private key, public key, authentication, confidentiality, integrity.

[1] Whitfield Diffie and Martin E. Hellman, "New Directions in Cryptography", presented at the IEEE Information Theory Workshop, Lenox, MA, June 23–25, 1975 and the IEEE International Symposium on Information Theory in Ronneby, Sweden, June 21–24, 1976.
[2] W. Diffie and M. E. Hellman, "Multiuser cryptographic techniques", presented at National Computer Conference, New York, June 7–10, 1976.
[3] R. L. Rivest, A. Shamir, and L. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", Communications of the ACM, vol. 21, no. 2, 1978. This research was supported by National Science Foundation grant MCS76-14294 and Office of Naval Research grant number N00014-67-A-0204-0063.
[4] William Stallings, "Cryptography and Network Security", Principles and Practices, Pearson Education, 3rd Edition, 2003.

Paper Type : Research Paper
Title : A Generalized Study on Encryption Techniques for Location Based Services
Country : India
Authors : Y. Lakshmi Prasanna, Prof. E. Madhusudhan Reddy
DOI : 10.9790/0661-16431926

Abstract : Location-based service (LBS) denotes applications that integrate geographic location (i.e., spatial coordinates) with the general notion of services. Examples of such applications include emergency services, car navigation systems, and tourist tour planning. The increasing spread of LBSs has led to renewed research interest in the security of these services. To ensure the credibility and availability of LBSs, access control, authentication and privacy issues must be addressed. This paper presents a study of the encryption techniques used to ensure the security of LBSs. According to our discussion, the approach can meet the confidentiality, authentication, simplicity, and practicability requirements of these security issues. As a result, the proposed encryption techniques can also meet the demands of mobile information systems.
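One of the surveyed ideas, geo-encryption (ref. [2]), ties the decryption key to the receiver's position. A minimal sketch under assumed details: coordinates snapped to a grid cell, hashed with a shared secret, and used as an XOR keystream. Real schemes use proper ciphers and anti-spoofing measures; the coordinates and grid size below are illustrative only.

```python
import hashlib

def geo_key(lat, lon, secret: bytes, grid=0.01) -> bytes:
    """Derive a key from a location: coordinates are snapped to a grid
    cell so any receiver inside the same cell reproduces the same key."""
    cell = f"{round(lat / grid)},{round(lon / grid)}".encode()
    return hashlib.sha256(secret + cell).digest()

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"shared-secret"
ct = xor_encrypt(b"meet here", geo_key(48.8584, 2.2945, secret))
# A receiver a few metres away (same grid cell) recovers the plaintext:
assert xor_encrypt(ct, geo_key(48.8585, 2.2946, secret)) == b"meet here"
# Outside the cell the derived key differs, so decryption fails:
assert xor_encrypt(ct, geo_key(48.9000, 2.3000, secret)) != b"meet here"
```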

Keywords: Cryptography, Data Security, Location-based encryption, Location-based Services, Trusted Location Devices

[1] Jochen Schiller, Agnès Voisard, Location-Based Services (San Francisco, CA: Morgan Kaufmann Publishers/Elsevier).
[2] Logan Scott, Dorothy E. Denning, A Location Based Encryption Technique and Some of Its Applications, Proceedings of the 2003 National Technical Meeting of The Institute of Navigation, Anaheim, CA, January 22-24, 2003, 734-740.
[3] Hsien-Chou Liao, Po-Ching Lee, Yun-Hsiang Chao, and Chin-Ling Chen, A Location-Dependent Data Encryption Approach for Enhancing Mobile Information System Security, Proceedings of the 9th International Conference on Advanced Communication Technology (ICACT 2007), Korea, February 12-14, 2007, 625-628.
[4] V. Rajeswari, V. Murali, A.V.S. Anil, A Novel Approach to Identify Geo-Encryption with GPS and Different Parameters (Locations and Time), International Journal of Computer Science and Information Technologies, Vol. 3 (4), 2012, 4917-4919.
[5] Yu Chen and Wei-Shinn Ku, Self-Encryption Scheme for Data Security in Mobile Devices, CCNC'09, Las Vegas, NV, USA, Jan. 10-13, 2009.

Paper Type : Research Paper
Title : A Survey on Scheduling Algorithms in Cloud Computing
Country : India
Authors : Mrs. M. Padmavathi, Prof. Shaik Mahabbob Basha, Mr. Srinivas Pothapragada (CEO, Ostilio)
DOI : 10.9790/0661-16432732

Abstract : Cloud computing is everywhere. It gets its name as a metaphor for the Internet, and it has changed the model of storing and managing data for scalable, real-time, internet-based applications, with resources satisfying end users' needs. Cloud computing services are usually backed by large-scale data centers built to serve many users and many disparate applications, for which virtualization is a perfect match. Resource scheduling is a complicated task in a cloud computing environment because there are many alternative computers with varying capacities. In this paper, a review of existing scheduling algorithms is performed and the limitations of existing systems are listed. As a future enhancement, a new algorithm will be developed.

Key words: Cloud Computing, Virtualization, CPU Scheduling algorithms

[1]. Vignesh V, Sendhil Kumar KS, Jaisankar N, "Resource Management and Scheduling in Cloud Environment", International Journal of Scientific and Research Publications, Volume 3, Issue 6, June 2013, ISSN 2250-3153, pp. 1-4.
[2]. Amit Agarwal, Saloni Jain, "Efficient Optimal Algorithm of Task Scheduling in Cloud Computing Environment", International Journal of Computer Trends and Technology (IJCTT), volume 9, number 7, Mar 2014, pp. 345-349.
[3]. M. Vijayalakshmi, V. Venkatesa Kumar, "Investigations on Job Scheduling Algorithms in Cloud Computing", International Journal of Advanced Research in Computer Science & Technology (IJARCST 2014), Vol. 2, Issue Special 1, Jan-March 2014, ISSN: 2347-8446 (Online), ISSN: 2347-9817 (Print), pp. 157-161.
[4]. Arabi E. Keshk, Ashraf El-Sisi, Medhat A. Tawfeek, F. A. Torkey, "Intelligent Strategy of Task Scheduling in Cloud Computing for Load Balancing", International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), Vol. 2, Issue 6, November-December 2013, ISSN 2278-6856, pp. 12-22.
[5]. Shaminder Kaur, Amandeep Verma, "An Efficient Approach to Genetic Algorithm for Task Scheduling in Cloud Computing Environment", I.J. Information Technology and Computer Science, 2012, 10, pp. 74-79.

Paper Type : Research Paper
Title : Privacy-Preserving Public Auditing For Secure Cloud Storage
Country : India
Authors : Salve Bhagyashri, Prof. Y.B.Gurav
DOI : 10.9790/0661-16433338

Abstract : Using cloud storage, users can access applications, services and software whenever they require over the internet. Users can store their data remotely in cloud storage and benefit from on-demand services and applications from its resources. The cloud must ensure the integrity and security of the user's data, yet issues about the integrity and privacy of that data can arise. To overcome this issue, we present a public auditing process for cloud storage in which users can make use of a third-party auditor (TPA) to check the integrity of their data. Beyond verifying data integrity, the proposed system also supports data dynamics; prior work in this line lacks data dynamics and true public auditability. The auditing task monitors data modifications, insertions and deletions. The proposed system supports public auditability and data dynamics, and multiple TPAs are used for the auditing process. We also extend our concept to ring signatures, for which the HARS scheme is used. A Merkle Hash Tree is used to improve block-level authentication. Further, we extend our result to enable the TPA to perform audits for multiple users simultaneously through batch auditing.
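The Merkle Hash Tree mentioned in the abstract is what makes block-level verification cheap. A minimal sketch of computing a root over file blocks; the paper's actual construction (and the HARS ring-signature layer) is not reproduced here:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Fold the leaf hashes pairwise until one root remains; an auditor
    then needs only this root plus a log-sized path to verify any block."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)
# Any single-block modification changes the root, so tampering is detected:
assert merkle_root([b"block-0", b"tampered", b"block-2", b"block-3"]) != root
```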
Index Terms: Cloud Storage, Data Dynamics, Public Auditing, Privacy Preserving, Ring Signatures

[1]. C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Storage Security in Cloud Computing," Proc. IEEE INFOCOM '10, Mar. 2010.
[2]. P. Mell and T. Grance, "Draft NIST Working Definition of Cloud Computing," computing/index.html, June 2009.
[3]. Pearson, S. 2012. Privacy, Security and Trust in Cloud Computing. Privacy and Security for Cloud Computing,3-42.
[4]. Cloud Security Alliance, "Top Threats to Cloud Computing," 2010.
[5]. M. Arrington, "Gmail Disaster: Reports of Mass Email Deletions," disasterreports- of-mass-email-deletions/, 2006

Paper Type : Research Paper
Title : Information Retrieval Using Crawler & Protecting Social Networking Data from Information Leakage
Country : India
Authors : S. S. Wangikar, S. N. Deshmukh
DOI : 10.9790/0661-16433949

Abstract : Online social networks such as Facebook, Twitter, Yahoo! and Google+ are used by many people. These networks allow users to publish details about themselves and to connect to their friends. Some of the information revealed inside these networks is meant to be private. Yet it is possible to run learning algorithms on released data to predict private information, and to apply classification algorithms to the collected data. In this paper, we explore how to obtain social networking data in order to predict information about users. We then devise possible classification techniques that could be used in various situations and explore their effectiveness. We collect different information from user groups, from which we derive the classification of that data; using various algorithms we can predict information about users. For profile collection we use crawler programs: we constructed a spider that crawls and indexes Facebook. In this paper we focus on crawler programs, which proved to be an effective tool for building the database, and we elaborate on the use of data mining techniques to help retailers identify user profiles for a retail store. The aim is to judge the accuracy of different data mining algorithms on various data sets. The performance analysis depends on many factors, encompassing the test mode, the nature of the data sets, and the size of the data set.
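The indexing core of such a spider can be sketched with the standard-library HTML parser. The page snippet and URL paths below are hypothetical, and a real crawler would add fetching, politeness delays and de-duplication on top of this:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Core of a crawler's indexing step: pull the href of every
    anchor tag out of a fetched page to build the next frontier."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/profile/1">A</a> <a href="/profile/2">B</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)        # the queue of URLs to visit next
```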
Keywords: Social network analysis, data mining, social network data, WEKA.

[1]. R. Heatherly, M. Kantarcioglu, and B. Thuraisingham, "Preventing Private Information Inference Attacks on Social Networks", IEEE Transactions on Knowledge and Data Engineering, Vol. 25, No. 8, August 2013.
[2]. Terremark Worldwide, Inc., "Facebook Expands Operations at Terremark's NAP West Facility", Tuesday November 1, 8:30 am ET.
[3]. Risvik, K. M. and Michelsen, R., Search Engines and Web Dynamics, Computer Networks, Vol. 39, pp. 289-302, June 2002.
[4]. Chakrabarti, S., Mining the Web: Analysis of Hypertext and Semi-Structured Data, New York: Morgan Kaufmann, 2003.
[5]. Pant, G., Srinivasan, P., Menczer, F., "Crawling the Web", in Levene, M., Poulovassilis, A. (eds.), Web Dynamics: Adapting to Change in Content, Size, Topology and Use, Springer, pp. 153-178, 2004.
[6]. Brin, S., Page, L., The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems, 30 (1-7), 107-117, 1998.

Paper Type : Research Paper
Title : Image Segmentation Techniques: An Overview
Country : India
Authors : Maninderjit Kaur, Er. Navdeep Singh
DOI : 10.9790/0661-16435058

Abstract : Image segmentation means extracting part of an image or any object. In digital image processing, image segmentation plays a vital role and is one of the most important steps leading to the analysis of processed image data. The goal of image segmentation is pattern recognition and image analysis, besides simplifying the representation of an image into something more meaningful. It is used in various applications such as medical imaging, machine vision, traffic control systems, and image recognition tasks. In this paper, various image segmentation techniques are reviewed and compared.

Keywords: image segmentation, quad tree, Otsu's method, clustering, k-means clustering
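Of the surveyed techniques, Otsu's thresholding is compact enough to sketch in full: it picks the grey level that maximises the between-class variance of the histogram, splitting pixels into background and object. The pixel data below is a synthetic two-population example:

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the grey level that maximises the
    between-class variance of the two resulting pixel groups."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]                            # pixels at or below t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                           # background mean
        m1 = (total_sum - sum0) / (total - w0)   # foreground mean
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two well-separated intensity populations: dark background, bright object.
pixels = [20] * 50 + [25] * 50 + [200] * 30 + [210] * 20
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]                   # the segmentation itself
```

The threshold lands between the two populations, so the mask cleanly separates the 50 bright pixels from the 100 dark ones.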

[1]. C. Pantofaru, M. Hebert, A comparison of image segmentation algorithms, Tech. Rep. CMU-RI-TR-05-40, CMU, 2005.
[2]. Rafael C. Gonzalez & Richard E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.
[3]. W. K. Pratt, "Image Segmentation," in Digital Image Processing, 4th ed., Wiley, 2008, pp. 579-622.
[4]. V. K. Dehariya, S. K. Shrivastava, R. C. Jain, "Clustering of Image Data Set Using K-Means and Fuzzy K-Means Algorithms", International Conference on CICN, pp. 386-391, 2010.

Paper Type : Research Paper
Title : Dynamic Load Balancing For Cloud Computing Using Heuristic Data and Load on Server
Country : India
Authors : Vinay Darji, Jayna Shah, Rutvik Mehta
DOI : 10.9790/0661-16435969

Abstract : Cloud computing is an emerging trend in the Information Technology community. Cloud resources are delivered to cloud users based on their requirements. Because of the services provided by the cloud, it is becoming more popular among internet users, and hence the number of cloud users is increasing day by day. As a result, the load on the cloud server needs to be managed for optimum resource utilization. This research proposes a new load balancing algorithm which considers parameters such as the weight of each task, the execution time of each task, and the current and future load on the server. Current load balancing algorithms do not consider these parameters together. The proposed scheme selects the best node based on these parameters to achieve optimum use of resources.
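The min-min heuristic named in the index terms can be sketched as a baseline. Task costs and server count below are hypothetical; the paper's proposed scheme layers task weights and future-load estimates on top of heuristics like this:

```python
def min_min(tasks, loads):
    """Min-min heuristic: repeatedly pick the task whose earliest
    completion time (server load + task cost) is smallest, and assign
    it to the server achieving that time."""
    loads = list(loads)                  # current load per server
    assignment = {}
    pending = dict(tasks)                # task -> execution cost
    while pending:
        finish, task, srv = min(
            (load + cost, task, srv)
            for task, cost in pending.items()
            for srv, load in enumerate(loads)
        )
        assignment[task] = srv
        loads[srv] = finish
        del pending[task]
    return assignment, loads

tasks = {"t1": 4, "t2": 2, "t3": 8, "t4": 1}
assignment, final_loads = min_min(tasks, [0, 0])
print(assignment, final_loads)
```

Short tasks are placed first, which is why plain min-min can starve long tasks, one of the limitations that motivates the extra parameters in the proposed scheme.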

Index terms: Cloud computing, load balancing, min-min algorithm, resource utilization

[1] Martin Randles, David Lamb, A. Taleb-Bendiab, "A Comparative Study into Distributed Load Balancing Algorithms for Cloud Computing," IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, pp. 551-556, 2010.
[2] Jianzhe Tai, Juemin Zhang, Jun Li, Waleed Meleis and Ningfang Mi, "ARA: Adaptive Resource Allocation for Cloud Computing Environments under Bursty Workloads", 978-1-4673-0012-4/11 ©2011 IEEE.
[3] J. F. Yang and Z. B. Chen, "Cloud Computing Research and Security Issues," International Conference on Computational Intelligence and Software Engineering (CiSE), Wuhan, 10-12 December 2010, pp. 1-3.
[4] K. Ramana, A. Subramanyam and A. Ananda Rao, Comparative Analysis of Distributed Web Server System Load Balancing Algorithms Using Qualitative Parameters, VSRD-IJCSIT, Vol. 1 (8), 2011, 592-600
[5] Chaczko, Z., Mahadevan, V., Aslanzadeh, S., & Mcdermid, C., "Availability of Load Balancing in Cloud Computing", International Conference on Computer and Software Modeling, 2011.

Paper Type : Research Paper
Title : Fixed-Rank Representation for Image Classification Using ANN and KNN
Country : India
Authors : M. A. Zahed Javeed, Prof. Shubhangi Sapkal, Dr. R. R. Deshmukh
DOI : 10.9790/0661-16437074

Abstract : This study focuses on developing an application of an efficient algorithm for subspace clustering and feature extraction, with the extracted features used for classification. Some previous techniques for subspace clustering, based on rank minimization and sparse representation such as SSC and LRR, are computationally expensive and may degrade clustering performance. To solve the problems of existing techniques, an algorithm known as Fixed-Rank Representation (FRR), based on matrix factorization, is used for subspace clustering and feature extraction. For classification we use k-nearest neighbour and neural network classifiers and compare their accuracy in classifying the features extracted by FRR.
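The k-nearest-neighbour half of the comparison is simple enough to sketch. The two-dimensional "features" and labels below are hypothetical stand-ins for FRR-extracted features:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """k-NN vote: rank training samples by Euclidean distance to the
    query and return the majority label among the k nearest."""
    ranked = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D feature vectors, e.g. two FRR-extracted components.
train = [((0.10, 0.20), "class A"), ((0.20, 0.10), "class A"), ((0.15, 0.25), "class A"),
         ((0.90, 0.80), "class B"), ((0.80, 0.90), "class B"), ((0.85, 0.75), "class B")]
print(knn_predict(train, (0.12, 0.18)))
```

Unlike the neural-network classifier it is compared against, k-NN has no training phase; all cost is paid at query time, which is part of the accuracy/performance trade-off the abstract refers to.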

Keywords: ANN, FRR, K-NN, LRR, SSC.

[1] R. S. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Matrix completion for multi-label image classification. In NIPS, 2011.
[2] E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis. Journal of the ACM, 58(1): 1-37, 2011.
[3] S. Rao, R.Tron, R. Vidal, and Y. Ma. Motion segmentation in the presence of outlying, incomplete, and corrupted trajectories. IEEE Trans. on PAMI, 32(10):1832–1845, 2010.
[4] E. Elhamifar and R. Vidal. Sparse subspace clustering. In CVPR, 2009.

Paper Type : Research Paper
Title : Handwritten Character Recognition Based on Zoning Using Euler Number for English Alphabets and Numerals
Country : India
Authors : Rachana R. Herekar, Prof. S. R. Dhotre
DOI : 10.9790/0661-16437588

Abstract : Handwritten character recognition has been a challenging research domain due to its diverse application environments. Handwriting has always been, and will possibly continue to be, a means of communication. There is a need to convert handwritten documents into an editable format, which can be achieved by handwritten character recognition systems; this also considerably reduces the storage space required. In this paper the focus is on offline handwritten English alphabets and numerals. Feature extraction is performed using the zoning method together with the concept of the Euler number. This increases the accuracy and speed of recognition, as the search space can be reduced by dividing the character set into three groups.
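The Euler number (connected components minus holes) is what allows the character set to be split into groups: 'C' has no hole (Euler number 1), 'A' has one (0), 'B' has two (-1). A sketch using Gray's bit-quad formula for 4-connectivity, E = (Q1 - Q3 + 2*Qd) / 4, on tiny binary images; the paper's zoning step is not reproduced here:

```python
def euler_number(img):
    """Euler number (components minus holes) of a binary image via
    bit-quad counts: Q1/Q3 = 2x2 windows with one/three set pixels,
    Qd = diagonal pairs."""
    rows, cols = len(img), len(img[0])
    # Pad with a background border so edge pixels form full 2x2 quads.
    padded = ([[0] * (cols + 2)]
              + [[0] + list(r) + [0] for r in img]
              + [[0] * (cols + 2)])
    q1 = q3 = qd = 0
    for r in range(rows + 1):
        for c in range(cols + 1):
            quad = (padded[r][c], padded[r][c + 1],
                    padded[r + 1][c], padded[r + 1][c + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1
    return (q1 - q3 + 2 * qd) // 4

solid = [[1, 1], [1, 1]]                  # one component, no hole -> 1
ring = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]  # one component, one hole -> 0
print(euler_number(solid), euler_number(ring))
```

Because the value is a count of topological features, it is unchanged by the size or slant of the character, which is what makes it a cheap pre-classification test.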

Keywords: Aspect Ratio, Euler Number, End Points, Offline Handwritten Character Recognition, Zoning

[1] Manju Rani and Yogesh Kumar Meena, "An Efficient Feature Extraction Method for Handwritten Character Recognition", Malaviya National Institute of Technology, Jaipur, India, in B. K. Panigrahi et al. (Eds.): SEMCCO 2011, Part II, LNCS 7077, pp. 302-309, 2011.
[2] J. Pradeep, E. Srinivasan and S. Himavathi, "Diagonal Based Feature Extraction for Handwritten Alphabets Recognition System Using Neural Network", Department of ECE and Department of EEE, Pondicherry Engineering College, Pondicherry, India, International Journal of Computer Science & Information Technology (IJCSIT), Vol. 3, No. 1, Feb 2011.
[3] D. Impedovo, G. Pirlo, "Zoning methods for handwritten character recognition: A survey", Pattern Recognition, Department of Informatics, University of Bari Aldo Moro, Via Orabona 4, 70125 Bari, Italy, 2013.
[4] Anita Pal & Dayashankar Singh, "Handwritten English Character Recognition Using Neural Network", International Journal of Computer Science & Communication, Vol. 1, No. 2, pp. 141-144, July-December 2010.
[5] U. Pal, T. Wakabayashi and F. Kimura, "Handwritten numeral recognition of six popular scripts", Ninth International Conference on Document Analysis and Recognition (ICDAR 07), Vol. 2, pp. 749-753, 2007.

Paper Type : Research Paper
Title : Version Control in Open Source Software
Country : India
Authors : Gurpal Singh (M.Tech CSE)
DOI : 10.9790/0661-16438992

Abstract : Open source software is software whose source code is freely available to anyone. Open source software can be redistributed to other users, who can use it according to their own needs. Version control is a process commonly used in development, with the help of which a team of people may change the same files at the same time or at different times. Each version is associated with a time stamp and the person making the change. There are many open source software projects, such as OpenOffice and Mozilla Firefox, but in this dissertation JFreeChart has been taken and the variation of classes in each version observed. Various metrics have been observed with the help of the CCC tool, such as lines of comments, coupling, number of children, and the depth of the inheritance tree.

Keywords: basic introduction, examples of open source software, open source software licenses, advantages of using open source software, general idea of version control, JFreeChart, various versions of JFreeChart

[1]. C. M. Pilato, "Version Control with Subversion", O'Reilly and Associates.
[2]. Nathan Newman, "The Origin and Future of Open Source Software", NetAction white paper.
[3]. Joseph Feller, Brian Fitzgerald, "Understanding Open Source Software Development", 2002, pp. 15-150.
[4]. Ben Collins-Sussman, Brian W. Fitzpatrick, C. Michael Pilato.
[5]. Joseph Feller, Brian Fitzgerald, 2006, white paper on version control.

Paper Type : Research Paper
Title : Performance Analysis of Hybrid (supervised and unsupervised) method for multiclass data set
Country : India
Authors : Rahul R. Chakre, Dr. Radhakrishna Naik
DOI : 10.9790/0661-16439399

Abstract : Due to the increasing demand for multivariate data analysis in various applications, dimensionality reduction has become an important task for representing data in a low-dimensional space for robust data representation. In this paper, multivariate data are analyzed using a new approach combining SVM and ICA to enhance classification accuracy, so that data can be presented in a more condensed form. Traditional methods are classified into two types, namely standalone and hybrid: a standalone method uses either a supervised or an unsupervised approach, whereas a hybrid method uses both. In this paper we use SVM (support vector machine) as the supervised approach and ICA (independent component analysis) as the unsupervised approach to improve classification on the basis of dimensionality reduction. SVM uses the SRM (structural risk minimization) principle, which minimizes an upper bound on the expected risk and is very effective compared with ERM (empirical risk minimization), which minimizes the error on the training data; ICA improves performance by maximizing independence. Perpendicular (right-angle) projection is used to avoid redundancy and to improve the dimensionality reduction. In the last step a classification algorithm classifies the data samples and the classification accuracy is measured. Experiments are performed on various two-class as well as multiclass datasets, and the performance of the hybrid and standalone approaches is compared.
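Only the supervised half is sketched below: a linear SVM trained with hinge-loss sub-gradient steps (a Pegasos-style variant without a bias term). The ICA projection step is not shown, and the toy data is hypothetical and linearly separable through the origin:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Hinge-loss sub-gradient descent for a linear SVM (no bias term).
    Labels must be +1 / -1; lam is the regularization strength."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in range(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                       # inside the margin: push out
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

X = [(1.0, 2.0), (2.0, 3.0), (2.5, 1.5), (-1.0, -2.0), (-2.0, -1.5), (-1.5, -3.0)]
y = [1, 1, 1, -1, -1, -1]
w = train_linear_svm(X, y)
pred = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1 for x in X]
```

In the hybrid pipeline the inputs to such a classifier would be the ICA-projected components rather than raw coordinates.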

Keywords: Dimensionality Reduction, Hybrid Methods, Supervised Learning, Unsupervised Learning, Support Vector Machine (SVM), Independent Component Analysis (ICA).

[1] Sangwoo Moon and Hairong Qi, "Hybrid Dimensionality Reduction Method Based on Support Vector Machines and Independent Component Analysis", IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 5, May 2012.
[2] A. M. Martinez and A. C. Kak, "PCA versus LDA," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 228-233, Feb. 2001.
[3] C. J. C. Burges, A tutorial on support vector machines for pattern recognition, Data Mining and Knowledge Discovery, 2(2): 121-167, 1998.
[4] L. Cao, K. Chua, W. Chong, H. Lee, and Q. Gu, A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine, Neurocomputing, 55: 321-336, 2003.

[5] Hyvarinen, A., Karhunen, J., Oja, E., Independent Component Analysis and Its Applications, John Wiley & Sons, Inc., 2001.
[6] L.-F. Chen, H.-Y. M. Liao, M.-T. Ko, J.-C. Lin, and G. J. Yu, "A new LDA-based face recognition system which can solve the small sample size problem," Pattern Recognit., vol. 33, no. 10, pp. 1713-1726, 2000.
[7] H. Park, M. Jeon, and J. B. Rosen, "Lower dimensional representation of text data based on centroids and least squares," BIT Numerical Math., vol. 43, no. 2, pp. 427-448, 2003.

Paper Type : Research Paper
Title : Implementation and Result Analysis of Polyalphabetic Approach to Caesar Cipher
Country : India
Authors : Prachi Patni
DOI : 10.9790/0661-1643100106

Abstract : In the modern world, with the drastic rise in the use of the internet for our daily work, there is a need to keep our information safe and secure so that an intruder cannot misuse it. Cryptography was established to solve this problem. Cryptography is the art of transforming information (plain text), using encryption algorithms, into a form that is not readable (cipher text) without access to the corresponding decryption algorithm. In this paper the author presents a novel approach to cryptographic techniques, illustrates the results and analysis of the proposed algorithm, and points out its improved security against many kinds of attacks. The paper is organized as follows: Section 1 contains a basic introduction to cryptography and the Caesar cipher, Section 2 describes the proposed system, Section 3 contains a performance analysis in which the proposed system is compared with other techniques, Section 4 gives the conclusion and future scope, and the last section contains references.
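A classic polyalphabetic extension of the Caesar cipher (Vigenère-style) illustrates the idea, though it is not necessarily the author's exact scheme: each letter's shift is drawn cyclically from a keyword, so repeated plaintext letters encrypt to different ciphertext letters, defeating simple frequency analysis:

```python
def poly_caesar(text: str, key: str, decrypt: bool = False) -> str:
    """Polyalphabetic Caesar: shift each letter by an amount taken from
    the key, cycling through the key on letters only."""
    out, k = [], 0
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[k % len(key)].lower()) - ord('a')
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            k += 1
        else:
            out.append(ch)           # pass punctuation/spaces through
    return "".join(out)

ct = poly_caesar("ATTACK AT DAWN", "LEMON")
print(ct)                            # -> "LXFOPV EF RNHR"
assert poly_caesar(ct, "LEMON", decrypt=True) == "ATTACK AT DAWN"
```

Note how the three A's of the plaintext become L, O and E: a single-shift Caesar cipher would map them all to the same letter.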

Keywords: Cryptography, Caesar, Cipher

[1]. Prachi Patni, A Poly-alphabetic approach to Caesar Cipher Algorithm, International Journal of Computer Science and Information Technologies (IJCSIT), Volume 4, 2013, 954-959, ISSN: 0975-9646

[2]. Amit Joshi and Bhavesh Joshi, A Randomized Approach for Cryptography in Emerging Trends in Networks and Computer Communications (ETNCC), 22-24 April 2011.
[3]. S G Srikantaswamy and Dr. H D Phaneendra, Improved Caesar Cipher with Random Number Generation Technique and Multistage Encryption, International Journal on Cryptography and Information Security (IJCIS), Vol.2, No.4, December 2012.

[4]. Ramandeep Sharma, Richa Sharma and Harmanjit Singh, "Classical Encryption Techniques" published in International Journal of Computers & Technology, Volume 3. No. 1, AUG, 2012.
[5]. O.P. Verma, Ritu Agarwal, Dhiraj Dafouti and Shobha Tyagi, Performance Analysis of Data Encryption Algorithms, IEEE Delhi Technological University, India, 2011.

Paper Type : Research Paper
Title : Image Restoration - A Survey
Country : India
Authors : Ravneet Kaur, Er. Navdeep Singh
DOI : 10.9790/0661-1643107111

Abstract : Image restoration is the process of restoring a degraded or corrupted image back to its original form. It is an initial step of image processing. Noise is added to an image while sending it from one place to another via satellite or wireless links, or during the image acquisition process. There are various types of noise, such as salt-and-pepper (impulse) noise and Gaussian noise. The main goal of image restoration is to recover or improve the quality of an image: the restoration process identifies the type of noise and attempts to reverse it, using a priori knowledge of the degradation process, by applying the appropriate inverse process to recover the corrupted image. In this paper, various spatial domain filters used to remove noise from images are discussed.
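Among the spatial-domain filters discussed, the median filter is the standard remedy for salt-and-pepper noise. A minimal sketch on a synthetic 5x5 grey patch:

```python
def median_filter(img, k=3):
    """k x k median filter: a spatial-domain filter that removes
    salt-and-pepper (impulse) noise while keeping edges fairly sharp."""
    rows, cols, r = len(img), len(img[0]), k // 2
    out = [row[:] for row in img]        # borders are left unchanged
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = sorted(img[i + di][j + dj]
                            for di in range(-r, r + 1)
                            for dj in range(-r, r + 1))
            out[i][j] = window[len(window) // 2]
    return out

# A flat grey patch corrupted by one "salt" (255) and one "pepper" (0) pixel.
img = [[100] * 5 for _ in range(5)]
img[2][2] = 255
img[1][3] = 0
restored = median_filter(img)
print(restored[2][2], restored[1][3])   # both impulses replaced by 100
```

Because the median ignores extreme outliers in the window, isolated impulses vanish, whereas a mean filter would smear them into their neighbours.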

[1]. Rafael C.Gonzalez, Digital Image Processing, Pearson Education India, 2009.
[2]. S. Oudaya Coumal, P. Rajesh, and S. Sadanandam, Image restoration filters and image quality assessment using reduced reference metrics, Sri Manakula Vinayagar Engineering College, India.
[3]. Anmol Sharma, Jagroop Singh, Image denoising using spatial domain filters: A quantitative study, Jalandhar, India.
[4]. Xudong Jiang, Senior Member, IEEE, Iterative Truncated Arithmetic Mean Filter and Its Properties.
[5]. Weiwen Lv, Peng Wang, Bing An, Qiangxiang Wang, Yiping Wu, A spatial filtering algorithm in low frequency wavelet domain for X-ray inspection image denoising, Institution of Materials Science and Engineering, Wuhan, China.
[6]. J.S. Lee, "Digital image enhancement and noise filtering by use of local statistics", IEEE Trans. Pattern Anal. Machine Intell., Vol. 2, pp. 165-168, 1980.

Paper Type : Research Paper
Title : A Fast & Memory Efficient Technique for Mining Frequent Item Sets from a Data Set
Country : India
Authors : Richa Mathur, Virendra Kumar
DOI : 10.9790/0661-1643112115

Abstract : Frequent/periodic itemset mining is an extensively used data mining method for market basket analysis and privacy preserving, and it is also a favourite theme for researchers. Substantial work has been devoted to this research and tremendous progress has been made in this field so far. Frequent/periodic itemset mining is used to search for and recover relationships in a given data set. This paper introduces a new approach to frequent itemset mining that is more efficient in time and space. Our method scans the database only once, whereas previous algorithms scan the database many times, which uses more time and memory than the new one. In this way, the new algorithm reduces the complexity (time and memory) of frequent pattern mining. We present efficient techniques to implement the new approach.
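A single-scan counting scheme can be sketched as follows. This is an illustrative reading of the idea (enumerate each transaction's subsets once and keep in-memory counts), not the paper's exact algorithm, and it is practical only for short transactions:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Single-scan sketch: enumerate every subset of each transaction
    once, accumulate counts in memory, then keep the itemsets meeting
    the minimum support threshold (MST). The database is read exactly
    once, unlike Apriori's one scan per itemset size."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, len(items) + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

transactions = [
    ["bread", "milk"],
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["milk"],
]
result = frequent_itemsets(transactions, min_support=2)
print(result[("bread", "milk")])   # bread and milk bought together twice
```

The trade-off is visible in the code: time is saved by avoiding repeated scans, but memory must hold counts for every subset seen, which is exactly the time/space balance the abstract claims to improve.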

Keywords: Incremental Association Rule Mining, Minimum Support Threshold (MST), Transactional Data Set.

[1]. Yongjian Fu, "Data Mining: Applications, Tasks and Techniques".
[2]. Jyoti Jadhav, Lata Ragha and Vijay Katkar, "Incremental Frequent Pattern Mining", IJEAT, 2012.
[3]. R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," Proceedings of the 20th International Conference on Very Large Data Bases, Morgan Kaufmann, 1994.
[4]. R. Agrawal and G. Psaila, "Active Data Mining," Proceedings of the 1st International Conference on Knowledge Discovery and Data Mining, Montreal, 1995.
[5]. J. Han, J. Pei and Y. Yin, "Mining frequent patterns without candidate generation," The ACM SIGMOD International Conference on Management of Data, 2000.
[6]. C.-W. Lin, T.-P. Hong and W.-H. Lu, "The Pre-FUFP algorithm for incremental mining," Expert Systems with Applications, 2009.
