Current Publications

STUDY OF CLOUD COMPUTING TECHNIQUES: RESOURCE PROVISIONING ASPECT

Pooja Chopra1*, Dr. RPS Bedi2

1Asst. Professor, Department of Computer Applications and IT, DIPS IMT, I.K.G PTU, Jalandhar, Punjab, India

2Controller of Examinations, I.K.G PTU, Jalandhar, Punjab, India

 

ABSTRACT:

 

Objectives: The objective of this paper is to analyze various resource provisioning techniques; it reviews the resource provisioning techniques already available in the literature. Methods/Statistical analysis: Various studies on resource provisioning techniques in cloud computing systems have been considered and compared, and a relative analysis has been made to categorize these techniques. Findings: Cloud computing is one of the prominent technologies that has brought dramatic changes to the field of Information Technology. It is a pay-as-you-go service model that delivers services instantly, as per user needs. Resource provisioning is a challenging task in cloud computing; its main objective is to allocate minimum resources to consumers while providing maximum satisfaction. Various resource provisioning mechanisms are prevalent in the existing literature, such as ontology-based, failure-based and semantic-based approaches, each with its own features, advantages and disadvantages. Application/Improvements: This work is useful for researchers working in the field of cloud computing, especially those working on resource provisioning.

 

Keywords: Cloud Computing, Resource Provisioning, Reliability, Ontology, SLA, QoS.

 

Volume XI, Issue XI | November 16th, 2017

BIG DATA ANALYTICS FOR AGRICULTURAL E-COMMERCE

Dr. Ashfaq Shaikh1

1Department of Information Technology, M.H. Saboo Siddik College of Engineering, Mumbai, India

 

ABSTRACT

 

Big Data analytics refers to the analysis of large data sets collected from various data sources; it plays a very important role in finding useful information in large collections of data, and e-commerce already uses cloud computing and Big Data for this purpose. This paper focuses on the agricultural domain, particularly the Indian agricultural system, which has so far been left out of these emerging technologies, and proposes the use of Big Data analysis in the agricultural sector. The paper proposes how a farmer can use e-commerce to sell crops and publish production details, and presents a new agricultural framework that manages the complete cycle of the Indian agricultural system and connects this information to the various stakeholders who may be potential customers for the farmers. The system provides direct, bid-like price negotiation between farmer and buyer, and supplies images and data showing crop quality, with the bidding approach evaluated on a manually created database of documents in the area of Information Retrieval. The system collects real-time data from farmers in the form of text, images and SMS; all this information is stored in the Hadoop File System for processing, where the designed algorithms analyze it to categorize crop quality and regions, and the filtered information is shown to the various stakeholders.

 

Keywords: Big Data Analytics, Cloud Computing, Hadoop, Agricultural Framework.

 

Volume XI, Issue XI | November 16th, 2017

DESIGN OF DADDA MULTIPLIER WITH OPTIMIZED POWER USING ANT ARCHITECTURE

M. Sukanya1, Dr. B. Rama Rao2, Y. Srinivasa Rao3

1PG Student [VLSI], Dept. of ECE, AITAM, Tekkali, A.P., India.

2Professor, Dept. of ECE, AITAM, Tekkali, A.P., India.

3Assistant Professor, Dept. of ECE, AITAM, Tekkali, A.P., India.

 

ABSTRACT: 

 

One of the most important hardware blocks in DSP systems is the multiplier: it plays the key role in DSP applications such as digital filtering, communication and digital signal analysis. Present-day digital applications are designed to be portable, battery-powered devices, so power dissipation becomes an important constraint in system design. Multipliers are typically complex circuits that must run at high clock rates, so reducing their delay is essential to satisfying overall design performance.

In this paper, two different multipliers are designed based on the ANT architecture. The simulation and synthesis results are obtained using Xilinx ISE 12.3i. The modified Dadda multiplier and an array multiplier are each designed in combination with a truncated multiplier. In the fixed-width reduced-precision replica, the multiplier circuit area can be lowered by 39.99% and the power consumption reduced accordingly.
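To make the fixed-width idea concrete, the following minimal Python sketch (an illustration only, not the paper's hardware design) models a truncated multiplier that keeps just the upper half of an 8×8-bit product, trading a small error for the dropped low-order partial products:

```python
def full_product(a: int, b: int) -> int:
    """Exact 16-bit product of two 8-bit operands."""
    return a * b

def truncated_product(a: int, b: int, width: int = 8) -> int:
    """Fixed-width product: keep only the upper `width` bits.

    Hardware truncated multipliers omit the low-order partial products
    to save area and power; this models the resulting value without the
    compensation circuitry a real design (e.g. the paper's ANT-based
    one) would add.
    """
    return (a * b) >> width

a, b = 0xB7, 0x5C
exact = full_product(a, b)
approx = truncated_product(a, b) << 8   # re-scale for comparison
print(f"exact={exact}  truncated={approx}  error={exact - approx}")
```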

 

Keywords: Truncated multiplier, Array multiplier, Dadda multiplier, Multiplexer.

 

Volume XI, Issue XI | November 16th, 2017

AN EFFICIENT COLOR-BASED OBJECT DETECTION AND TRACKING IN VIDEOS

 Rachna Verma1

1Department of Computer Science and Engineering, Faculty of Engineering, JNV University, Jodhpur, Rajasthan, India

 

 ABSTRACT: 

 

In this paper, a new, efficient color-based method for detecting and tracking a moving object in a video is discussed. It is based on a new formula, proposed by the author, for converting an RGB image into an intensity image. The proposed formula has a great discriminating ability: it highlights a shade of a particular primary color in an image and suppresses all other colors. This ability is used to detect an object of any primary color shade very efficiently, as it eliminates many additional processing steps, such as segmentation and histogram matching, used in previously reported color-based trackers. In future work, the proposed concepts will be extended to track objects of any color.
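The author's conversion formula itself is not reproduced in the abstract, so the sketch below is only a stand-in illustration of the idea: map RGB to a single channel that is bright where one primary color dominates and near zero elsewhere, so that a simple threshold isolates the object.

```python
import numpy as np

def red_emphasis(rgb: np.ndarray) -> np.ndarray:
    """Intensity image that highlights predominantly red pixels.

    NOT the paper's formula (the abstract does not give it); a common
    stand-in: red channel minus the mean of the other two, clipped at 0.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(r - (g + b) / 2.0, 0, 255).astype(np.uint8)

# Thresholding this intensity image yields a binary mask of the red
# object; the mask centroid can then be tracked from frame to frame.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = (200, 30, 40)          # a small "red object"
mask = red_emphasis(frame) > 100
print(np.argwhere(mask).mean(axis=0))    # object centroid (row, col)
```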

 


Volume XI, Issue XI | November 16th, 2017

PERFORMANCE EVALUATION OF WEIGHTING FUNCTIONS BASED ON NONLOCAL MEANS ALGORITHM FOR IMAGE DE-NOISING

K. Pushpalatha1, Dr. M. Jaya Manmadha Rao2, V. Laxmi3

1M.Tech Student, 2Professor, 3Assistant Professor

1,2,3Department of Electronics and Communication Engineering, Aditya Institute of Technology and Management, Tekkali, Andhra Pradesh, India.

 ABSTRACT: 

 

The non-local means algorithm is one of the best known and most widely used methods for image de-noising. It de-noises the centre patch using a weighted combination of all patches in a search neighbourhood. However, this search neighbourhood can include some dissimilar patches, which leave tracks of noise in regions of the image. In this work, a non-local means filter and a local post-processing filter are proposed to eliminate those tracks of noise in particular regions of the image. Although these methods prevent the loss of image details, they still leave scope for improving the quality of the recovered image based on the distribution of distances between noisy similar patches. Parameters such as PSNR and SSIM are calculated for the girl, cameraman and eight images, among others, with the corresponding weighting functions; by observing these values it is confirmed which weighting function gives better results compared to the others.
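As a minimal sketch of the two weighting families named in the keywords, the snippet below computes non-local means weights from squared patch distances with either the default exponential kernel or a bisquare kernel; these are assumed textbook forms, since the paper's exact parameterisation is not given in the abstract.

```python
import numpy as np

def nlm_weights(dist2: np.ndarray, h: float, kind: str = "default") -> np.ndarray:
    """Non-local means weights from squared patch distances.

    'default'  : classical exponential kernel exp(-d^2 / h^2).
    'bisquare' : Tukey bisquare (1 - (d/h)^2)^2, zero beyond d = h,
                 so clearly dissimilar patches get no vote at all.
    """
    if kind == "default":
        return np.exp(-dist2 / (h * h))
    if kind == "bisquare":
        u = dist2 / (h * h)
        w = (1.0 - u) ** 2
        w[u >= 1.0] = 0.0
        return w
    raise ValueError(f"unknown weighting function: {kind}")

def denoise_pixel(candidates: np.ndarray, dist2: np.ndarray,
                  h: float, kind: str = "default") -> float:
    """Weighted average of candidate centre pixels = de-noised value."""
    w = nlm_weights(dist2, h, kind)
    return float(np.sum(w * candidates) / np.sum(w))
```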

 

KEYWORDS: Pre-processing, Image de-noising, Non-local means filter, KLT post-processing, Default (existing) and Bisquare weighting functions.

 

Volume XI, Issue XI | November 16th, 2017

ENHANCING AND DETECTING THE DIGITAL TEXT BASED IMAGES USING SOBEL AND LAPLACIAN

PL. Chithra1, B. Ilakkiya Arasi2

1,2Department of Computer Science, University of Madras, Chennai, India

ABSTRACT:

This paper focuses on enhancing and detecting text in digital images. Digital images contain shimmer and shading, which give them a complex background against which to detect text. This paper presents a solution that enhances text by removing noise with filters and improves sharpness using Sobel and Laplacian gradients. The method gives better results on blurred and shimmering digital text images and is equally applicable to normal text images. Text detection and recognition are usually performed on handwriting images and historical documents in various languages such as Tamil, English, Telugu and Hindi, and the same applies to numerals. These days most processes are going digital (card swiping, ATMs, mobiles, and LED boards in railway stations and buses), so it is necessary to enhance and detect text in digital images. To achieve better results, this paper combines the Sobel and Laplacian gradients.
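A minimal NumPy/SciPy sketch of the pipeline described above: median filtering against noise, then a Sobel + Laplacian combination. The exact way the paper fuses the two gradients is not specified in the abstract, so they are simply summed here.

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def enhance_text(gray: np.ndarray) -> np.ndarray:
    """De-noise, then sharpen text edges with Sobel + Laplacian."""
    smooth = median_filter(gray.astype(float), size=3)  # remove shimmer
    gx = convolve(smooth, SOBEL_X)
    gy = convolve(smooth, SOBEL_Y)
    sobel_mag = np.hypot(gx, gy)                 # first-order edges
    lap = np.abs(convolve(smooth, LAPLACIAN))    # second-order detail
    return np.clip(sobel_mag + lap, 0, 255)
```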

Keywords: Enhancement, Shimmers, Sobel and Laplacian filters, and Digital text Images

Volume XI, Issue IX | November 15th, 2017

CATTLE MONITORING AND FARM AUTOMATION USING IoT AND DATA ANALYTICS

Merly K, Sarjun S S, Aleena Ann Siby

Department of MCA, Kristu Jayanti College (Autonomous), Bangalore

ABSTRACT:
Cattle farming is a field that requires attention and knowledge, as the requirements of each animal vary with its breed and geographical location. The major concerns in the dairy industry include the lack of data availability among farmers, financial issues, and lack of awareness about the health and well-being of the cattle. These concerns significantly affect the growth of the dairy industry in India. The main solution to these problems is the implementation of technologies such as IoT, sensors and data analytics to collect data from the various environments connected to dairy farms, which would also drive the overall development of the industry and open new possibilities for it. Sensors that monitor cattle health, behaviour and breeding cycles would greatly improve the yield from cattle as well. IoT would develop the industry further, as the information would spread across the entire population, inspiring people to invest in the industry and increasing employment. The data collected from various locations can bring clarity to the information currently available to the public and government agencies; such accurate data gives farmers a better chance of a good yield every year, with success rates improving as time elapses. As more data is collected, cloud services can be effectively introduced into the industry, further increasing employment and contributing to the development of the Indian economy.

Keywords: Cattle Farming, IoT, Data Analytics

 

Volume XI, Issue IX | November 15th, 2017

SMART CARD BASED DIGITALIZATION OF INDIA USING RFID TECHNOLOGY

Sandhya Jagdale, Ajay Kanade, Priyanka Gaikwad, Akash Chalva, Geeta Atkar

Department of Computer Engineering, G.H. Raisoni College of Engineering and Management, Pune University.

 

ABSTRACT:

 

There is rising demand in industry for secure systems that are dependable and quick to respond. RFID (Radio Frequency Identification) is one of the more consistent and faster means of identifying material objects. In the past, barcodes were preferred over RFID because of cost, but nowadays RFID tags are easily available and cost-competitive, and research has made their programming shorter and easier. To remove the need to show physical documents to a particular government officer, we are developing a system that saves time and hassle for an officer who wants to check the documents of a particular user whose information is stored in the database.

 

Keywords: RFID Card, Reader, Controller, Digitalization.

 

 

Volume XI, Issue XI | November 10th, 2017

A LOW POWER NON-VOLATILE LUT BASED ON FRAM

Pavani Kodamanchili1, Prema Kumar Medapati2

1,2Department of Electronics and Communications,

Shri Vishnu Engineering College for Women (Autonomous), Bhimavaram, INDIA

 

ABSTRACT:

 

Emerging non-volatile memories such as MRAM (Magnetic Random Access Memory), PRAM and RRAM have been widely investigated as replacements for SRAM configuration bits in FPGAs, but they bring reliability issues. An RRAM slice has been introduced to overcome these issues, but it consumes more power; hence an FRAM cell is proposed here in order to reduce the power.

 

Keywords: Logic in memory, Low power, NVLUT, FRAM (Ferroelectric RAM)

 

 

Volume XI, Issue XI | November 10th, 2017

CORRELATION BETWEEN PADDY SEED FEATURES FOR SEED IDENTIFICATION

Dr. Archana Chaugule

PCCOER, Rawet, Pune, India

 

ABSTRACT:

 

The objective of this work is to find the correlation between different features of paddy seeds. Seeds of four paddy (rice) varieties, viz. Karjat-6 (K6), Karjat-2 (K2), Ratnagiri-4 (R4) and Ratnagiri-24 (R24), were used. Correlation was computed between well-established features such as colour, shape and texture, as well as between the newly extracted features. It was found that colour was not related to any other feature, and that there was a relation between a few shape and texture features.
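The core computation is a feature-by-feature correlation matrix. A small sketch with hypothetical feature values (real numbers would come from the colour/shape/texture extraction the paper describes):

```python
import numpy as np

# Hypothetical rows = seeds, columns = features
# (mean hue, area, eccentricity, texture energy).
features = np.array([
    [0.12, 410.0, 0.91, 0.33],   # K6
    [0.11, 395.0, 0.93, 0.31],   # K2
    [0.45, 520.0, 0.84, 0.52],   # R4
    [0.47, 505.0, 0.82, 0.55],   # R24
])

# Pearson correlation between every pair of feature columns;
# near-zero entries indicate unrelated feature pairs.
corr = np.corrcoef(features, rowvar=False)
print(np.round(corr, 2))
```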

Keywords: Angle, Correlation, Features, Paddy

 

Volume XI, Issue XI | November 10th, 2017

A REVIEW OF VARIOUS FEATURE EXTRACTION TECHNIQUES OF IRIS RECOGNITION SYSTEM

Suman Bhajia1, SS Shekhawat2

1,2Department of Computer Science and Engineering, Jaipur, India

 

ABSTRACT

 

The iris is highly accurate and reliable as a biometric because its characteristics remain stable throughout life. The general pipeline of an iris recognition system is image acquisition, pre-processing, segmentation, feature extraction and matching/classification, and the performance of such a biometric system largely depends on the selection of features. Many different feature extraction techniques are used in iris recognition systems to extract iris features; in this paper, various feature extraction methods for iris recognition are analyzed.
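As one concrete example of the wavelet-based extraction methods surveyed (assuming the PyWavelets package and a normalized iris strip as input), subband energies of a multi-level Haar DWT can serve as the feature vector:

```python
import numpy as np
import pywt  # PyWavelets

def haar_features(iris_strip: np.ndarray, levels: int = 3) -> np.ndarray:
    """Subband-energy features from a multi-level 2-D Haar DWT.

    `iris_strip` is the normalized (unwrapped) iris region. Using the
    mean energy of each detail subband is one common wavelet feature;
    the surveyed papers differ in the exact choice.
    """
    coeffs = pywt.wavedec2(iris_strip, "haar", level=levels)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) per level
        for band in detail:
            feats.append(float(np.mean(band ** 2)))
    return np.asarray(feats)
```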

 

 Keywords: Feature Extraction methods, Gabor filter, DWT, Haar wavelet

 

Volume XI, Issue XI | November 10th, 2017

PRIVACY AND SECURITY CHALLENGES IN MOBILE CLOUD COMPUTING

 Mr. C.Arun1, Dr. K.Prabu2

1Research Scholar, Department of Computer Science, Sudharsan College of Arts & Science, Pudukkottai, India.

2Assistant Professor, Department of Computer Science, Sudharsan College of Arts & Science, Pudukkottai, India.

 

ABSTRACT

Mobile cloud computing (MCC) is an emerging technology of this century: the union of mobile computing and cloud computing, formed by adapting the merits of both. It is the computing of mobile applications through the cloud, bringing rich computational resources to mobile users, network operators, and cloud computing providers. MCC can be realised in many ways, but its ultimate goal is to enable the execution of rich mobile applications with a rich user experience. Mobility is one of its main characteristics: users can continue their work regardless of movement. However, problems remain in MCC, including high energy consumption, communication cost, execution time and data security during transactions. Security and data privacy are major problems that deter users from adopting this technology. This survey paper throws light on the privacy and security issues of mobile cloud computing.

 

Keywords: Mobile cloud computing, Mobile computing, Cloud computing, Energy computations, Mobile application, Privacy and Security.

 

Volume XI, Issue XI | November 10th, 2017

A NOVEL APPROACH TOWARDS SECURING DATA IN DATA GRID

Raafiya Gulmeher1, Dr. Mohammed Abdul Waheed2, Dr. Asma Parveen3

1Research Scholar, CSE Dept., JJT University, Jhunjhunu, Rajasthan, India

2Associate Professor & Chairman, Dept. of Studies in Computer Application, VTU PG Studies, Regional Office, Gulbarga, Karnataka, India

3Associate Professor, HOD CSE Dept., KBNCE, Gulbarga, Karnataka, India

 

ABSTRACT: 

 

There are presently a huge number of projects with a wide variety of novel and evolving Grid development approaches, and there are different techniques and models for delivering Grid resource administration frameworks. Most of the frameworks we surveyed centre on a computational web or an administration web. The main data Grid project that we have surveyed is the CERN Data Grid, which is in the initial phases of development. We have combined data distribution schemes (such as secret-sharing or erasure-coding schemes) with active replication to gain data survivability, security and access performance in data grids; the copies of the divided data need to be correctly allotted to obtain the real performance gain. A data grid is a distributed computing framework that unites a large number of data and computing assets into a single local data evaluation scheme.
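The simplest instance of the erasure-coding idea mentioned above is single-parity XOR coding, sketched below; production grids would use stronger codes such as Reed-Solomon, combined with replication as the abstract describes.

```python
from functools import reduce

def xor_parity(blocks: list) -> bytes:
    """Parity block for equal-length data blocks (RAID-4 style).

    With k data blocks plus this parity block, any single lost
    block can be reconstructed.
    """
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover(surviving: list, parity: bytes) -> bytes:
    """Rebuild the one missing data block from survivors + parity."""
    return xor_parity(surviving + [parity])

data = [b"grid", b"node", b"file"]                 # equal-length blocks
parity = xor_parity(data)
assert recover([data[0], data[2]], parity) == data[1]   # block restored
```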

 

Keywords: Data Grid, Middleware, Performance, Reliability, Security, Replication, Resources, Interoperability

 

Volume XI, Issue XI | November 10th, 2017

STUDY OF PARALLELIZATION OF ALGORITHMS USING OPENMP

Rahul Dadhich1, Neha Mahala2, Prakash Choudhary3, P K Bhagat4

1Samsung Company, Bangalore, India. 2ISM Dhanbad, India. 3,4National Institute of Technology Manipur, India

 

ABSTRACT: 

 

In this era, we all look to achieve high performance across a wide field of computational requirements. Highly efficient, scalable and fast processors are available on the market today, but the software side has not scaled up in the same fashion. To scale up software efficiency, OpenMP offers a shared-memory parallel programming model. OpenMP is an emerging standard for parallelizing programs in a shared-memory environment; it provides a set of pragmas with which programmers parallelize their code. This paper explains the concepts of parallel processing and OpenMP and how they are used in C/C++ programs to effectively utilize the available cores/processors. It gives a brief overview of the basics of parallel computing with OpenMP, and analyzes algorithms from different fields: matrix multiplication, saddle-point search, and Cholesky decomposition. The observations and results show how effective OpenMP pragmas are in ordinary C/C++ programs, how the results vary with the inputs and the available number of threads, and that OpenMP is useful only when working on large data sets or when large computations are involved in the given problem.
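OpenMP itself targets C/C++/Fortran; as a language-neutral sketch of the same row-parallel pattern used in the paper's matrix-multiplication study, here is a Python process-pool version that splits the row loop across workers. This is an analogy to `#pragma omp parallel for`, not OpenMP itself.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Fixed seed so that worker processes, which re-import this module,
# reconstruct exactly the same matrices.
rng = np.random.default_rng(0)
A = rng.random((512, 512))
B = rng.random((512, 512))

def row_block(bounds):
    """One worker's share of the row loop: a horizontal slice of
    A @ B, like one OpenMP thread's chunk under `parallel for`."""
    lo, hi = bounds
    return A[lo:hi] @ B

if __name__ == "__main__":
    chunks = [(i, min(i + 128, 512)) for i in range(0, 512, 128)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        C = np.vstack(list(pool.map(row_block, chunks)))
    assert np.allclose(C, A @ B)   # matches the serial product
```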

 

Keywords: OpenMP, Parallel programming, Matrix multiplication, Cholesky decomposition, Saddle point.

 

Volume XI, Issue XI | November 10th, 2017

A NOVEL PIXEL REDUCTION TECHNIQUE FOR ACCURATE CIRCLE DETECTION

Nitin Bhatia1

1Department of Computer Science, DAV College, Jalandhar, India

ABSTRACT: 

 

Feature extraction is a leading and distinguishing trait of digital image processing; the application province considered in this paper is circle detection. This paper proposes a novel pixel reduction algorithm that allows circles to be detected accurately even with a reduced number of edge pixels. Experiments are conducted extensively on a large number of images. The proposed edge pixel reduction process reduces the number of edge pixels by up to 98%. The work is compared to the classical Hough transform: with the edge pixel reduction process included, the proposed technique outperforms the classical Hough transform for circle detection by a maximum margin of 99.69%.
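For reference, below is a minimal version of the classical Hough voting the paper benchmarks against, for circles of one known radius. Since every edge pixel casts a full ring of votes, removing 98% of the edge pixels removes 98% of the voting work, which is where the proposed reduction saves time.

```python
import numpy as np

def hough_circle_centre(edge_pixels: np.ndarray, shape: tuple,
                        radius: float) -> tuple:
    """Classical Hough transform for circles of a known radius.

    edge_pixels : (N, 2) array of (row, col) edge coordinates; in the
    paper these would first be thinned by the pixel reduction step.
    Returns the accumulator's best centre estimate (row, col).
    """
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for r, c in edge_pixels:
        rows = np.round(r - radius * np.sin(thetas)).astype(int)
        cols = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
        np.add.at(acc, (rows[ok], cols[ok]), 1)   # one ring of votes
    return np.unravel_index(np.argmax(acc), acc.shape)
```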

 

Keywords: Circle detection, Hough transform, Classical Hough Transform, Pixel reduction

 

Volume XI, Issue XI | November 8th, 2017

PROPOSED APPROACH FOR OPTIMIZED THRESHOLD BASED LOAD BALANCING IN DFS USING DATA MIGRATION

Mrs. Smita Landge1, Mrs. Minal Bodke2

1,2Department of Computer Engineering, Pimpri Chinchwad College of Engineering & Research, Pune

 

ABSTRACT: 

 

A novel approach is proposed that adjusts threshold differences on the basis of the current network I/O load, disk I/O load and disk capacity load of each data server; load balancing is then achieved through data migration. Furthermore, the proposed technique provides an optimized way to handle an excess of client requests beyond what the system can serve, by monitoring the maximum number of data servers that can be overloaded in the overall system, which results in more efficient data migration statistics. The relevance of the proposed threshold to the overall cost/performance count is mathematically proven to be an improvement of more than 70%.

 

Keywords: Distributed System, Distributed File System, Data migration, Denial of Service, Load Balancing, Threshold

 

 

Volume XI, Issue XI | November 8th, 2017

TO MEASURE THE PERFORMANCE OF ATTRIBUTE EVALUATOR WRAPPER BY DIFFERENT METHODS

Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

ABSTRACT: 

 

Attribute evaluators and search methods create an environment for a large consumer-claims data set. Feature subset selection is a technique for reducing the attribute space of a feature set; in other words, it identifies a subset of features by removing irrelevant or redundant ones. A good feature set contains highly relevant features that improve the efficiency of the classification algorithms and help them classify accurately. In this paper we examine Wrapper + Random, Wrapper + Greedy and Wrapper + Best First as feature selection configurations, using randomized selection of instances in the wrapper. The efficiency and effectiveness of the approach is evaluated on consumer-claims data sets, with Naive Bayes and J48 used as the classifiers. The classification results, in terms of classification accuracy and the size of the feature subspace, show the performance of the wrapper algorithm.
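The paper works in Weka; as a scikit-learn analogue of the Wrapper + Random configuration, the sketch below scores random feature subsets with the target classifier itself (Naive Bayes here, on a stand-in data set rather than the consumer-claims data):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer    # stand-in data set
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)

best_score, best_subset = 0.0, None
for _ in range(50):                       # 50 random candidate subsets
    k = int(rng.integers(3, X.shape[1] + 1))
    subset = rng.choice(X.shape[1], size=k, replace=False)
    # Wrapper evaluation: score the subset with the actual classifier.
    score = cross_val_score(GaussianNB(), X[:, subset], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, np.sort(subset)

print(f"best accuracy {best_score:.3f} with {len(best_subset)} features")
```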

 

Keywords: Feature Selection, Wrapper Evaluator, Methods: Random, Greedy and Best First, Weka

 

November 7th, 2017

COMPARATIVE ANALYSIS OF BAYES AND LAZY CLASSIFICATION ALGORITHMS

Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

ABSTRACT: 

 

Data mining applications are used in areas such as sales, marketing, banking, finance, health care, insurance and medicine. Data mining provides an environment in which to compare two different types of classifier: Bayesian classifiers and lazy classifiers, each of which creates its own model of the consumer-claims data set. In this paper we analyze the ROC performance of Bayesian and lazy classifiers on a large consumer-claims data set. Two algorithms from the Bayesian family are used, namely BayesNet and Naive Bayes, and three from the lazy family, namely IB1, IBk and KStar.
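A scikit-learn sketch of the same kind of ROC comparison, with Naive Bayes standing in for the Bayesian family and k-NN for the lazy family, on a stand-in data set rather than the consumer-claims data:

```python
from sklearn.datasets import load_breast_cancer    # stand-in data set
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Bayesian (Naive Bayes)": GaussianNB(),
    "Lazy (k-NN, cf. Weka's IBk)": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```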

 

Keywords: Bayes Classifiers: BayesNet, Naive Bayes; Lazy Classifiers: IB1, IBk, KStar; Weka.

 

November 7th, 2017

EVALUATE THE SUPPORT AND METRIC OF CONSUMER CLAIMS BY APRIORI AND PREDICTIVE APRIORI ALGORITHMS

Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

ABSTRACT:  

 

This work explores knowledge in the consumer-claims data set: association rules generate a result for whether a client claim is disputed or not. Frequent pattern mining is an important endeavour in data mining. The Apriori approach is applied to generate frequent item sets, with pruning techniques used to satisfy the specified objective. This paper shows how Apriori and Predictive Apriori achieve the objective of frequent-pattern mining, along with the complexity required to perform the task.
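A compact pure-Python version of Apriori's level-wise search with subset pruning, run on toy claim transactions (illustrative only; the paper uses WEKA's implementations):

```python
from itertools import combinations

def apriori(transactions: list, min_support: int) -> dict:
    """Frequent item sets via Apriori's level-wise candidate pruning."""
    items = sorted({i for t in transactions for i in t})
    frequent, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Candidates of size k+1: unions of frequent k-sets, kept only
        # if every k-subset is itself frequent (the Apriori prune).
        current = list({a | b for a, b in combinations(level, 2)
                        if len(a | b) == k + 1
                        and all(frozenset(s) in level
                                for s in combinations(a | b, k))})
        k += 1
    return frequent

claims = [{"dispute", "card"}, {"dispute", "loan"},
          {"dispute", "card", "loan"}, {"card", "loan"}]
print(apriori(claims, min_support=2))   # frequent singletons and pairs
```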

 

Keywords: Data Mining, Apriori, Frequent Pattern Mining, Predictive Apriori, WEKA.

 

November 7th, 2017

MEASURE THE GROWTH OF INSTANCES BY APRIORI AND FILTERED ASSOCIATOR ALGORITHMS

 Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

 

ABSTRACT: 

 

Data mining provides a new way to discover new information, in terms of patterns or rules, from the consumer-claims data set, finding frequent relationships between attributes. It is primarily focused on finding frequent co-occurring associations among a collection of items. In this paper we measure the performance of association rule mining algorithms; each algorithm is measured in terms of processor time while varying the number of instances, using different confidence values.

 

Keywords: Apriori, Confidence, Data Mining, Filtered Associator, WEKA.

 

November 7th, 2017

A COMPARATIVE STUDY OF CONSUMER CLAIMS DATA SET BY CLUSTERING ALGORITHMS

 Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

 

ABSTRACT: 

 

This paper discusses the development of an application for both consumers and companies or banks. Products, pricing and policies differ from country to country, and each company or bank has its own rules and regulations for purchasing. With the help of clustering algorithms, we can easily analyze whether a claim dispute is YES or NO in the consumer-claims data set. We use three major clustering algorithms, K-Means, hierarchical clustering and density-based clustering, and compare their performance in terms of their ability to build correct class-wise clusters. The performance of these techniques is presented and compared using the clustering tool WEKA.
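A scikit-learn sketch of the same three-way comparison, on synthetic two-class data standing in for the claims data set; the adjusted Rand index plays the role of "correct class-wise cluster building":

```python
from sklearn.cluster import AgglomerativeClustering, DBSCAN, KMeans
from sklearn.datasets import make_blobs      # stand-in for claims data
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=300, centers=2, random_state=0)

clusterers = {
    "K-Means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "Hierarchical": AgglomerativeClustering(n_clusters=2),
    "Density-based (DBSCAN)": DBSCAN(eps=1.0, min_samples=5),
}
for name, algo in clusterers.items():
    labels = algo.fit_predict(X)
    # Agreement with the known dispute YES/NO labels.
    print(f"{name}: adjusted Rand index = "
          f"{adjusted_rand_score(y_true, labels):.2f}")
```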

 

Keywords: Clusterer: K-Means, Hierarchical clustering, Density-based clustering algorithms, Weka.

 

November 7th, 2017

A COMPARATIVE ANALYSIS OF BAGGING, DECORATE AND DAGGING ALGORITHMS

Dhyan Chandra Yadav

S.N.P.G. College, Narahi, Balia (U.P.), Department of Computer Application

 

ABSTRACT: 

 

Some clients do not understand their credit-card arrangement and suffer from a lack of funding backup in transactions; classification techniques simply compute explicit and predictive models to predict continuous-valued functions. Usually, classification is the process of organizing data into categories for its most effective use: the data classification technique makes disputed data meaningful and easy to find and retrieve. In this paper the performance of three meta-classifier algorithms, namely Bagging, Decorate and Dagging, is analyzed. The consumer-claims data set is used to estimate the performance of the algorithms using the training set. Finally, a comparative analysis is performed using criteria such as classification accuracy and error rates for all the algorithms.
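Of the three meta-classifiers, Bagging has a direct scikit-learn counterpart; the sketch below shows the common idea of combining base classifiers into a meta-classifier (Decorate and Dagging are Weka-specific and have no direct equivalent here), again on a stand-in data set:

```python
from sklearn.datasets import load_breast_cancer    # stand-in data set
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Bagging: train many trees on bootstrap resamples and let them vote.
bagging = BaggingClassifier(DecisionTreeClassifier(),
                            n_estimators=25, random_state=0)
acc = cross_val_score(bagging, X, y, cv=5).mean()
print(f"Bagging mean accuracy = {acc:.3f}")
```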

 

Keywords: Meta classifiers: Bagging, Decorate, Dagging; Training set; Weka.

 

November 7th, 2017

SURVEY OF FACTORS THAT RISK SOFTWARE DEVELOPMENT PROCESS

Nitin Bhatia1

 1Department of Computer Science, DAV College, Jalandhar, India

 

ABSTRACT: 

 

Software engineering is a profession that provides high-quality software to customers through a systematic approach to the analysis, design, implementation, maintenance and re-engineering of software. But there are many risks involved in creating high-quality software. Risks have no exact values; they are based on uncertainties, and there is indeed a long list of troubles that have been depressing software projects for a long time. In order to manage software projects successfully, we must learn to identify, analyze and control software risks. Controlling risks has a cost, but so does leaving risks unaddressed when they do indeed bite us, and there is no magic solution for overcoming them. In this paper we are concerned with the study of risk factors, grounded in a strong knowledge of software engineering and management practices.

 

Keywords: Risk analysis, Risk Identification, Risk Estimation, Risk Evaluation, Risk Likelihood, Risk Impact, SEM, SPI, CMMI.

 

Volume XI, Issue XI | November 6th, 2017

TIME-AWARE CONSTRAINT OPTIMIZATION BASED PAIR-WISE TEST SUITE SIMILARITY FOR IMPROVING SOFTWARE SYSTEM QUALITY

M. Bharathi1, Dr. V. Sangeetha2

1Department of Computer Science, Periyar University College of Arts and Science, Pennagaram, Tamilnadu, India.

2Department of Computer Science, Periyar University College of Arts and Science, Pappireddipatti, Tamilnadu, India.

 

ABSTRACT: 

 

Testing a software product line is difficult due to the huge number of possible products. Combinatorial testing presents one possibility: test a subset of all possible products for software quality management. Recently, many research works have addressed testing software product lines using combinatorial testing. However, optimizing the product lines to be tested and reducing the time complexity involved in combinatorial testing remained unaddressed. To overcome these limitations, the Time-aware Constraint Optimization based Pair-wise Test Suite Similarity (TCO-PTSS) framework is proposed. Initially, the TCO-PTSS framework develops a Time-aware Constraint Optimization (TCO) algorithm that generates a set of test suites of optimized size within a specified time interval, based on constraints such as pairwise coverage, number of products and testing cost. A timeout value is set in the TCO algorithm to terminate the test suite generation process; the algorithm optimizes the product lines to be tested by extracting an optimal set of test suites. After that, the TCO-PTSS framework uses Pair-wise Test Suite Similarity (PTSS) to optimize the generated test suites and reduce the execution of interaction sets in pair-wise testing. For the optimal test suites generated by the TCO algorithm, Jaccard similarity is measured between two test suites based on the interactions they cover in the given software program, in order to minimize the number of interactions to be executed in pair-wise testing. With the aid of the measured similarity values, the TCO-PTSS framework then selects optimized test suites for combinatorial testing. As a result, the framework efficiently improves software quality. The performance of the TCO-PTSS framework is measured in terms of metrics such as test suite generation time, testing cost, coverage rate and scalability. The experimental results demonstrate that the TCO-PTSS framework increases the coverage rate of combinatorial testing, predicting more faults in the software program, and also minimizes the testing cost, enhancing software system quality when compared to state-of-the-art works.
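The PTSS step reduces to a Jaccard computation over the interaction sets two suites cover; a small sketch with hypothetical pairwise interactions:

```python
def jaccard(interactions_a: set, interactions_b: set) -> float:
    """Jaccard similarity of the interaction sets two test suites
    cover: |A intersect B| / |A union B|."""
    if not interactions_a and not interactions_b:
        return 1.0
    return (len(interactions_a & interactions_b)
            / len(interactions_a | interactions_b))

# Hypothetical pairwise (feature=value, feature=value) interactions.
s1 = {("os=linux", "db=mysql"), ("os=linux", "ui=web"), ("db=mysql", "ui=web")}
s2 = {("os=linux", "db=mysql"), ("os=win", "ui=web"), ("db=mysql", "ui=web")}

print(f"similarity = {jaccard(s1, s2):.2f}")   # 2 shared of 4 -> 0.50
# A high similarity means the second suite adds few new interactions
# and can be dropped from the pair-wise testing run.
```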

 

Keywords: Combinatorial testing, coverage, Interaction, Jaccard’s similarity, product lines, software quality, test case, test suite.

 

Volume XI, Issue XI | November 6th, 2017

BLOCKING ARTIFACTS SUPPRESSION IN BLOCK-CODED GREY SCALE IMAGES USING AN ADAPTIVE FILTERING

Amanpreet Kaur1, Jagroop Singh Sidhu2, Jaskarn Singh Bhullar3

1Ph.D. Research Scholar, IK Gujral Punjab Technical University, Jalandhar 144601, Punjab, India

2Faculty, Department of Electronics & Communication Engg., DAVIET, Jalandhar 144001, Punjab, India.

3Faculty, Department of Applied Sciences, MIMIT, Malout 152107, Punjab, India.

 

ABSTRACT: 

 

Images coded at high compression ratios mostly suffer from significant compression artifacts that degrade their visual quality; this degradation arises from the coarse quantization of the 8×8 discrete cosine transform (DCT) coefficients. In this paper, an adaptive post-processing technique is proposed to suppress the blocking artifacts that generally occur in JPEG-decoded images, especially at low bit rates. Comprehensive experimental results illustrate that the proposed technique is more effective and stable in alleviating blocking artifacts than existing techniques. Compared with other techniques, it achieves better detail preservation and blocking-effect removal with lower computational complexity.
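For reference, here is the base quality measure used in this line of work; the PSNR-B variant named in the keywords additionally penalises block-boundary discontinuities, which plain PSNR (below) does not:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB) between two grey-scale images.

    Plain PSNR; PSNR-B additionally penalises discontinuities at the
    8x8 block boundaries that blocking artifacts create.
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)
```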

 

Keywords: Directional filter, Blocking artifacts, Pixel vectors, PSNR-B

Volume XI, Issue XI | November 6th, 2017

HADOOP AND THE DATA WAREHOUSE: FRIENDS OR FOES?

Neeru Mago

Department of Computer Science and Application, Panjab University SSG Regional Centre, Hoshiarpur

ABSTRACT: 

 

We are drawn to the latest and greatest new technologies, and when fascinating platforms like Hadoop emerge, they are frequently accompanied by a substantial amount of hype. When it comes to Hadoop, though, there is authentic substance behind the hype: just look at the growing number of code contributions to the Apache Hadoop projects and at Hadoop adoption rates in businesses of every scale. The focus of this paper is to compare and contrast the relative strengths of Hadoop technologies and relational databases. It is imperative to understand how this new technology relates to present technologies and business practices; in the case of Hadoop, one should know how it will influence the field of data management in various enterprises.

 

Keywords: NoSQL, ACID, Hadoop, Hive, HBase, Giraph, RDBMS

 

Volume XI, Issue XI | November 5th, 2017

MULTI CLASSIFICATION TECHNIQUE FOR LOGO BASED DOCUMENT VERIFICATION

Vaijinath V. Bhosle1, Dr.Vrushsen P. Pawar2 , Dr. H. S. Fadewar3,Dr. N. S. Zulpe4

1 College of Computer Science and Information Technology (COCSIT), Latur, Maharashtra, India,

2 Science Faculty, Professor and Head in Water and Land Management Institute (WALMI), Aurangabad, Maharashtra, India,  

3 Science Faculty, Assistant Professor in School of Computational Sciences Swami Ramanand Teerth Marathwada University, Nanded, Maharashtra, India,  

4 Science Faculty, Principal in College of Computer Science And Information Technology, Latur, Maharashtra, India

 

ABSTRACT

 

Feature extraction is a very important process in image analysis. First the image is pre-processed (the RGB image is converted to grey scale and resized); then features are extracted using different methods and used for document verification and classification. Verifying documents is a tedious task for a human being; this paper presents an easy way to identify the issuing organization of a document based on its logo, using the GLCM, SURF and LBP techniques. In the LBP method, images are divided into a number of blocks and the centre pixel of each neighbourhood is compared with its neighbours to form a binary pattern feature; it is time-consuming compared with GLCM and SURF. Using GLCM we obtained a 95.0% result, SURF gave a 96.0% result, and LBP gave a 97.0% result; combining the three methods gives 100% accuracy.
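A minimal NumPy version of the basic 8-neighbour LBP operator described above; histograms of these per-pixel codes form the texture feature vector used for classification:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern codes for a grey image.

    Each pixel's 8 neighbours are thresholded against the centre pixel
    and packed into one byte; a histogram of the codes is the feature.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                      # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dr, dc) in enumerate(offsets):
        neigh = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        code |= (neigh >= c).astype(np.int32) << bit
    return code
```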

 

Keywords: Document Images, GLCM, SURF, LBP, ANN.

 

Volume XI, Issue XI | November 5th, 2017

DISCRIMINATIVE FEATURE EXTRACTION OF TEXT COMPONENTS FROM COMPLEX COLORED IMAGES

Mr. G.Sathyanarayanan1, Dr. G.Gayathri Devi2, Dr. C.P.Sumathi3

1Senior Professional Project Management, DXC Technology,

2,3 Department of Computer Science,  SDNB Vaishnav College for Women, Chennai, India.

 

ABSTRACT

 

The objective of the proposed approach is to present a new methodology for extracting text features from the text components of images. The most important step in recognition is the selection of the feature information of the text component; the feature extraction algorithm identifies the relevant features in the raw text-component image. The proposed feature extraction algorithm computes features of the text image based on attributes such as loops, horizontal lines, vertical lines and slant lines.

 

Keywords: Text extraction, Feature extraction, Recognition

 

Volume XI, Issue XI | November 5th, 2017

INTELLIGENT PRIVACY PRESERVING SCHEME OVER CLOUD ENVIRONMENT WITH PROPER SECURITY MAINTENANCE STRATEGIES

1P. RAVINDER RAO, 2V. SUCHARITA

1Associate Professor, 2Professor, 1,2Department of Computer Science Engineering

1Anurag Group of Institutions, Hyderabad, TS, INDIA, 2Narayana Engineering College, Gudur, AP, INDIA.

 

ABSTRACT:

Cloud computing is a term that rules the communication and Information Technology industry nowadays. With powerful, global support, the cloud stands tall and extends its rule all over the world: business people, individuals, and commercial and non-commercial organizations all attain and utilize its benefits. The market created by the cloud is huge, so the needs and expectations placed on this environment are high, and issues like security and privacy preservation need to be resolved to provide better support to clients. One of the best ways to protect data or information held in a cloud server is cryptography. Powerful cryptographic solutions supporting the cloud computing environment are already on the market, but all of them raise issues at some point: some when the data size is huge, some as the number of users/clients increases, and some over privacy norms, among others. So a new algorithm, the "Advanced Cryptographic Standard (ACS)", is required to resolve these issues; it provides data security and addresses privacy concerns by means of a powerful 256-bit encryption and decryption scheme. The scope of this paper concentrates on cloud security, on maintaining privacy over the remote server, and on providing the best end features to clients. The proposed system efficiently proves that the implemented approach resolves the issues of privacy preservation and data security.
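The paper's ACS scheme is not specified in the abstract; as a generic illustration of a 256-bit symmetric encrypt/decrypt round trip with integrity checking, here is AES-256-GCM via PyCryptodome (an assumed stand-in, not the proposed ACS):

```python
from Crypto.Cipher import AES           # PyCryptodome
from Crypto.Random import get_random_bytes

# NOT the paper's ACS (unspecified); standard AES-256-GCM, which also
# authenticates the ciphertext (data integrity, per the keywords).
key = get_random_bytes(32)              # 256-bit key

def encrypt(plaintext: bytes) -> tuple:
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return cipher.nonce, ciphertext, tag

def decrypt(nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)   # raises if tampered

nonce, ct, tag = encrypt(b"client record stored in the cloud")
assert decrypt(nonce, ct, tag) == b"client record stored in the cloud"
```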

Keywords: Cloud Computing, Privacy Preserving, Data Security, Advanced Cryptographic Standard, ACS, Data Integrity, Remote Server.

 

Volume XI, Issue XI | November 5th, 2017

SET PARTITIONING IN HIERARCHICAL TREES BASED VIDEO CODEC IN WAVELET DOMAIN

Ilam Parithi1, Murugan2, Balasubramanian3

1Research Scholar, 2Assistant Professor, 3Professor

1,3Department of Computer Science & Engineering Manonmaniam Sundaranar University

Tirunelveli, Tamil Nadu, India

2Department of Computer Science, M.S University Constituent College, Kadayanallur

Tirunelveli, Tamil Nadu, India

 

ABSTRACT: 

 

Nowadays video processing has become very popular and necessary, but video data requires large amounts of storage. To reduce the storage required, we need to remove the redundant data present in the video sequence, while preserving quality. In this paper, we present a new approach to video compression using the wavelet domain and the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The proposed method is tested and compared with a recent version of H.264/AVC and a recent Wavelet Based Video Codec (WBVC), using parameters such as Peak Signal-to-Noise Ratio (PSNR), compressed size and computation time. The average PSNR of the proposed method is shown to be 2.66 dB higher than H.264/AVC and 1.62 dB higher than WBVC.

 

Keywords: Wavelet, SPIHT, Group of Pictures

 

Volume XI, Issue XI | November 5th, 2017