SVM Model for Identification of human GPCRs [ Full-Text ]
Sonal Shrivastava, K. R. Pardasani and M. M. Malik
G-protein coupled receptors (GPCRs) constitute a broad class of cell-surface receptors in eukaryotes and possess seven transmembrane α-helical domains. GPCRs are usually classified into several functionally distinct families that play a key role in cellular signalling and the regulation of basic physiological processes. Statistical models based on these common features can be developed to classify proteins, to predict new members, and to study the sequence–function relationship of this protein functional group. In this study, an SVM-based classification model has been developed for the identification of human GPCR sequences. Sequences of the Level 1 subfamilies of Class A (rhodopsin-like) GPCRs are considered as a case study. An attempt has been made to classify GPCRs on the basis of species: the model separates human GPCR sequences from those of all other species available in GPCRDB. Classification is based on specific information derived from the N-terminal region and extracellular loops of the sequences, some physicochemical properties, and the amino acid composition of the corresponding GPCR sequences. Our method classifies the Level 1 subfamilies of GPCRs with 94% accuracy.
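A minimal sketch of the kind of composition-plus-SVM pipeline the abstract describes is given below; the toy sequences and labels are purely illustrative, and the paper's actual feature set additionally uses N-terminal/extracellular-loop information and physicochemical properties.

```python
# Illustrative sketch: classify protein sequences as human vs. non-human GPCRs
# from amino-acid composition features using an SVM (scikit-learn).
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 standard amino acids in the sequence."""
    seq = seq.upper()
    n = max(len(seq), 1)
    return [seq.count(a) / n for a in AMINO_ACIDS]

# hypothetical toy data: (sequence fragment, label) with 1 = human, 0 = other species
data = [("MNGTEGPNFYVPFSNKTGVV", 1), ("MDVLSPGQGNNTTSPPAPFE", 0)]
X = [composition(s) for s, _ in data]
y = [label for _, label in data]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)                       # with a real dataset, report cross-validated accuracy
print(clf.predict([composition("MNGTEGPNFYVPFSNKTGVV")]))
```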
Effect of Embedding Watermark on Compression of the Digital Images [ Full-Text ]
Er. Deepak Aggarwal and Er. Kanwalvir Singh Dhindsa
Image compression plays a very important role in image processing, especially when images are transmitted over the Internet. Threats to information on the Internet are increasing, and images are no exception. Generally, an image is sent over the Internet in compressed form to make optimal use of the network bandwidth. But while on the network, at any intermediate node the image can be changed, intentionally or unintentionally. To make sure that the correct image is delivered at the other end, a watermark is embedded in the image. The watermarked image is then compressed and sent over the network. When the image is decompressed at the receiving end, the watermark can be extracted to verify that the image is the same one that was sent. Although watermarking increases the size of the uncompressed image, this has to be done to achieve a high degree of robustness, i.e., the ability of an image to withstand attacks on it. The present paper is an attempt to make transmission of images secure against intermediate attacks while applying the commonly used compression transforms.
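The embed-compress-extract workflow can be sketched as follows; the LSB watermark and JPEG compression used here are illustrative stand-ins, not the specific transforms evaluated in the paper, and the low survival rate of an LSB mark under lossy compression is exactly the robustness trade-off the abstract refers to.

```python
# Illustrative sketch: embed a binary watermark in the least-significant bits of a
# grayscale image, JPEG-compress it, then check how much of the mark survives.
import io
import numpy as np
from PIL import Image

def embed_lsb(img, mark):
    """Replace the LSB of each pixel with the corresponding watermark bit."""
    flat = img.flatten()
    bits = np.resize(mark, flat.shape)            # tile the mark over the image
    return ((flat & 0xFE) | bits).reshape(img.shape).astype(np.uint8)

def extract_lsb(img):
    return img.flatten() & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # stand-in for a real image
mark = rng.integers(0, 2, 64, dtype=np.uint8)

watermarked = embed_lsb(cover, mark)

buf = io.BytesIO()
Image.fromarray(watermarked).save(buf, format="JPEG", quality=90)   # lossy compression
decompressed = np.array(Image.open(buf))

survived = np.mean(extract_lsb(decompressed) == np.resize(mark, decompressed.size))
print(f"fraction of watermark bits surviving compression: {survived:.2f}")
```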
Supervised Learning of Digital Image Restoration based on Quantization Nearest Neighbor Algorithm [ Full-Text ]
Md. Imran Hossain and Syed Golam Rajib
In this paper, an algorithm is proposed for image restoration. The algorithm differs from traditional approaches in this area by utilizing priors that are learned from similar images. Original images and their versions degraded by known degradation operators are used to design the quantization codebook. The code vectors are designed from the blurred images, and for each such vector the high-frequency information obtained from the original images is also available. During restoration, the high-frequency information of a given degraded image is estimated from its low-frequency information based on the artificial noise. For the restoration problem, a number of techniques are designed corresponding to various versions of the blurring function. Given a noisy and blurred image, one of these techniques is chosen based on a similarity measure, thereby providing identification of the blur. To make the restoration process computationally efficient, Quantization Nearest Neighbor approaches are utilized.
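A rough sketch of the codebook idea: blurred (low-frequency) patches index a learned codebook whose entries carry the high-frequency detail of the corresponding original patches. The patch size, the identity of the degradation operator and the nearest-neighbour lookup below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch: build code vectors from blurred patches, store the matching
# high-frequency residuals, and restore a degraded patch by nearest-neighbour lookup.
import numpy as np
from scipy.ndimage import gaussian_filter

def patches(img, size=8):
    h, w = img.shape
    return np.array([img[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(1)
original = rng.random((64, 64))
blurred = gaussian_filter(original, sigma=1.5)     # known degradation operator

low = patches(blurred)                             # code vectors from the blurred image
high = patches(original) - low                     # associated high-frequency information

def restore_patch(degraded_patch, codebook=low, residuals=high):
    """Add the stored high-frequency detail of the closest code vector."""
    idx = np.argmin(np.linalg.norm(codebook - degraded_patch, axis=1))
    return degraded_patch + residuals[idx]

test = patches(gaussian_filter(original, sigma=1.5))[0]
print(np.linalg.norm(restore_patch(test) - patches(original)[0]))   # restoration error
```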
——————————————————————————————————————————————————————————————————–
The government of state’s power bodies by means of the Internet [ Full-Text ]
Bercea L., Nemţoi G. and Ungureanu C.
Electronic government involves developing the information society, which refers to an economy and a society in which the access to, acquisition, storage, retrieval, transmission, dissemination and use of knowledge play a decisive role. The information society involves changes in the domains of administration (e-government), business (electronic commerce and e-business), education (distance education), culture (multimedia centres and virtual libraries), mass media (TV, video advertising panels), and in the manner of working (tele-work and virtual commuting). E-government refers to the interaction of the Government, the Parliament and other public institutions with citizens by electronic means.
——————————————————————————————————————————————————————————————————–
Optimized reversible BCD adder using new reversible logic gates [ Full-Text ]
H. R. Bhagyalakshmi and M. K. Venkatesha
Reversible logic has received great attention in recent years due to its ability to reduce power dissipation, which is the main requirement in low-power digital design. It has wide applications in advanced computing, low-power CMOS design, optical information processing, DNA computing, bioinformatics, quantum computation and nanotechnology. This paper presents an optimized reversible BCD adder using a new reversible gate. A comparative result is presented which shows that the proposed design is more optimized in terms of number of gates, number of garbage outputs and quantum cost than the existing designs.
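As a concrete illustration of reversibility: a gate is reversible exactly when its input-to-output mapping is a bijection, so no information (and, ideally, no energy) is lost. The check below uses the well-known 3x3 Peres gate as an example; the new gate proposed in the paper is not reproduced here.

```python
# Reversibility check: a logic gate is reversible iff its truth table is a bijection.
# The Peres gate (A, B, C) -> (A, A xor B, (A and B) xor C) is used as an example.
from itertools import product

def peres(a, b, c):
    return (a, a ^ b, (a & b) ^ c)

outputs = [peres(*bits) for bits in product((0, 1), repeat=3)]
assert len(set(outputs)) == 8, "not reversible: two inputs map to the same output"
print("Peres gate is reversible; its quantum cost is 4.")
```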
——————————————————————————————————————————————————————————————————–
Determining the quality evaluation procedures using the expert systems [ Full-Text ]
Holban N., Ditoiu V. and Iancu E.
At this time, quality is a strategic instrument of an entity's global management, but it is also a determining element of its competitiveness. The importance given to quality is amply reflected in the preoccupations of the European Union's Council of Ministers, which has elaborated documents with a high impact on the quality of products and services in particular, and of organizations in general. We live in an era in which the evolution of social life places ever greater emphasis on quality, as it results from various processes across the different domains of economic and social development.
——————————————————————————————————————————————————————————————————–
Improvement in RUP Project Management via Service Monitoring: Best Practice of SOA [ Full-Text ]
Sheikh Muhammad Saqib, Shakeel Ahmad, Shahid Hussain, Bashir Ahmad and Arjamand Bano
Management of project planning, monitoring, scheduling, estimation and risk management are critical issues faced by a project manager during the development life cycle of software. In RUP, project management is considered a core discipline whose activities are carried out in all phases of software product development. On the other side, service monitoring is considered a best practice of SOA which supports availability, auditing, debugging and tracing. In this paper, the authors define a strategy to incorporate the service monitoring of SOA into RUP to improve the artifacts of project management activities. Moreover, the authors define rules to implement the features of service monitoring, which help the project manager to carry out activities in a well-defined manner. The proposed framework is implemented on an RB (Resuming Bank) application and yields improved results for PM (Project Management) work.
——————————————————————————————————————————————————————————————————–
The Role of the XBRL Standard in Optimizing the Financial Reporting [ Full-Text ]
Grosu V., Hlaciuc E., Iancu E., Petris R. and Socoliuc M.
When financial information is difficult to produce, interpret, compare and analyze, we face inconvenient consequences with negative repercussions: the investor may give up the investment (with negative consequences for the risk equity market), the banks may not grant loans, an auditor may not consider the financial statements credible, etc. These facts lead to this paper's main objective, the eXtensible Business Reporting Language (XBRL), an open, independent and international standard for the timely, correct, efficient and low-cost treatment of financial and economic information. XBRL is analyzed in the second part of the paper: the history of this electronic communication language is described, as are the organizations promoting it, the base technology (the Web and XML architecture, which will be the next stage of Internet programming), and the role it plays within the reporting chain between the XBRL consortium and the international accounting organizations IASB-CI. This taxonomy clearly serves every item of accounting and extra-accounting information produced by the company. Such information, which is currently handled in various formats and structures (often incompatible with one another and with their owners), will be standardized through XBRL.
——————————————————————————————————————————————————————————————————–
E-Courseware Design and Implementation Issues and Strategies [ Full-Text ]
Shakeel Ahmad, Adli Mustafa, Zahid Awan, Bashir Ahmad, Najeebullah and Arjamand Bano
Over the last few years, electronic learning has been used mostly by corporate institutes in the form of computer-aided instruction and computer-based training. The scope of such use has not been limited to introductory courses for beginners and working people, but also extends to imparting knowledge in the higher education sector. Due to increasing market demands and the prevailing law and order situation in the area (during which the university has remained closed for uncertain periods of time on many occasions), Gomal University D.I.Khan, Pakistan is planning to introduce e-learning at undergraduate and postgraduate level in computer and management sciences for smooth and uninterrupted delivery of quality education to local and distant students. The obvious result of e-learning will be twofold: first, it will meet market demands along with smooth, uninterrupted delivery of quality education; secondly, it will address the growing shortage of experts caused by the current law and order situation. This paper investigates the main issues involved in designing and implementing effective electronic courseware for students with diverse backgrounds belonging to this remote area. Some effective strategies for electronic delivery of courses to local and distant students are also presented, along with some examples of implementation.
——————————————————————————————————————————————————————————————————–
FPGA Implementation of LS Code Generator for CDM Based MIMO Channel Sounder [ Full-Text ]
M. Habib Ullah, Md. Niamul Bari and A. Unggul Priantoro
MIMO (Multiple Input Multiple Output) wireless communication is an innovative solution for improving bandwidth efficiency by exploiting the multipath richness of the propagation environment. The degree of multipath richness of the channel determines the capacity gain attainable by MIMO deployment. Therefore, it is very important to have accurate knowledge of the propagation environment, i.e. the radio channel, before MIMO implementation. The radio channel behaviour can be estimated by channel measurement, or channel sounding. CDM (Code Division Multiplexing) is one of the channel sounding techniques that allow accurate measurement at the cost of hardware complexity. A CDM-based channel sounder requires codes with excellent auto-correlation and cross-correlation properties, which are generally difficult to achieve simultaneously. Theoretical analysis and computer simulation results demonstrate that Loosely Synchronous (LS) code sequences, which have excellent correlation properties, perform efficiently. Finally, an efficient LS code generator serving as the data source for the transmitter is implemented in a Xilinx FPGA and can be integrated into a complete CDM-based 2×2 MIMO channel sounder.
——————————————————————————————————————————————————————————————————–
A Modified ck-Secure Sum Protocol for Multi-Party Computation [ Full-Text ]
Rashid Sheikh, Beerendra Kumar and Durgesh Kumar Mishra
Secure Multi-Party Computation (SMC) allows multiple parties to compute some function of their inputs without disclosing the actual inputs to one another. Secure sum computation is an easily understood example and a building block of various SMC solutions: it allows the parties to compute the sum of their individual inputs without disclosing those inputs to one another. In this paper, we propose a modified version of our ck-Secure Sum protocol that provides more security when a group of the computing parties conspires to learn the data of some party.
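For context, the basic (non-ck) secure sum idea can be sketched as follows: the initiating party masks its input with a random number, each party in turn adds its own input to the running total, and the initiator finally removes the mask. This illustrates plain secure sum only, not the modified ck-Secure Sum protocol proposed in the paper.

```python
# Basic secure sum sketch: each party adds its private input to a running total
# that is masked by the initiator's random value, so no party sees another's input.
import random

def secure_sum(private_inputs):
    mask = random.randint(0, 10**6)
    running = mask + private_inputs[0]        # initiator sends mask + own input
    for x in private_inputs[1:]:              # each remaining party adds its input
        running += x
    return running - mask                     # initiator removes the mask

print(secure_sum([12, 7, 30, 5]))             # 54, with no individual input revealed
```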
——————————————————————————————————————————————————————————————————–
Multi-Objective Geometric Programming Problem Being Cost Coefficients as Continuous Function with Weighted Mean Method [ Full-Text ]
A. K. Ojha and A.K. Das
Geometric programming problems occur frequently in engineering design and management. In multi-objective optimization, the trade-off information between different objective functions is probably the most important piece of information in a solution process for reaching the most preferred solution. In this paper we discuss the basic concepts and principles of multiple-objective optimization problems and develop a solution procedure, based on the weighted mean method, for solving such problems when the cost coefficients are continuous functions, so as to obtain the non-inferior solutions.
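A minimal sketch of the weighted (scalarization) idea referred to above: the objectives are combined into a single objective with non-negative weights and minimized, each weight vector yielding one non-inferior (Pareto) solution. The posynomial objectives, weights and solver below are illustrative assumptions, not taken from the paper.

```python
# Weighted-sum scalarization of a two-objective geometric-programming-style problem:
# minimize w1*f1(x) + w2*f2(x) over x > 0, where f1, f2 are posynomials.
from scipy.optimize import minimize

def f1(x):  # illustrative posynomial objective (e.g. a cost term)
    return 2 * x[0] ** -1 * x[1] ** -2 + 3 * x[0] * x[1]

def f2(x):  # second illustrative posynomial objective
    return x[0] ** 2 + 5 * x[1] ** -1

def weighted(x, w1=0.6, w2=0.4):
    return w1 * f1(x) + w2 * f2(x)

res = minimize(weighted, x0=[1.0, 1.0], bounds=[(1e-6, None), (1e-6, None)])
print(res.x, f1(res.x), f2(res.x))   # one non-inferior point for this weight vector
```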
——————————————————————————————————————————————————————————————————–
A Cluster-based Approach for Outlier Detection in Dynamic Data Streams (KORM: k-median OutlieR Miner) [ Full-Text ]
Parneeta Dhaliwal, MPS Bhatia and Priti Bansal
Outlier detection in data streams has gained wide importance recently due to the increasing number of fraud cases in various data stream applications. Techniques for outlier detection can be divided into statistics-based, distance-based, density-based and deviation-based methods. Until now, most of the work in the field of fraud detection has been distance based, but this is inefficient from a computational point of view. In this paper we introduce a new clustering-based approach, which divides the stream into chunks and clusters each chunk, using k-median, into a variable number of clusters. Instead of storing a complete data stream chunk in memory, we replace it with the weighted medians found after mining that chunk and pass this information, along with the newly arrived data chunk, to the next phase. The weighted medians found in each phase are tested for outlierness, and after a given number of phases each is declared either a real outlier or an inlier. Our technique is theoretically better than k-means, as it does not fix the number of clusters to k but instead allows a range, providing a more stable and better solution that runs in poly-logarithmic space.
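A simplified sketch of the chunk-and-cluster idea: each incoming chunk is clustered, only the (weighted) medians are carried forward, and medians supported by very few points are flagged as candidate outliers. The cluster count, support threshold and the simple 1-D k-median routine here are illustrative simplifications of the method described in the paper.

```python
# Simplified sketch of cluster-based outlier detection on a stream chunk:
# cluster the chunk with k-median, keep weighted medians, flag weakly supported ones.
import numpy as np

def k_median(points, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)
        centers = np.array([np.median(points[labels == j]) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    weights = np.array([np.sum(labels == j) for j in range(k)])
    return centers, weights

stream_chunk = np.concatenate([np.random.default_rng(1).normal(0, 1, 200), [15.0]])
medians, weights = k_median(stream_chunk, k=4)

support_threshold = 0.02 * len(stream_chunk)           # weakly supported medians
candidates = medians[weights < support_threshold]       # are candidate outliers
print("candidate outliers:", candidates)
```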
——————————————————————————————————————————————————————————————————–
Nature inspired artificial intelligence based adaptive traffic flow distribution in computer network [ Full-Text ]
Manoj Kumar Singh
Because of the stochastic nature of the traffic requirement matrix, it is very difficult to obtain the optimal traffic distribution that minimizes delay, even with an adaptive routing protocol, in a fixed-connection network where the capacity of each link is already defined. Hence there is a need for a method that can generate the optimal solution quickly and efficiently. This paper presents a new concept for providing adaptive, optimal traffic distribution under dynamic traffic-matrix conditions using nature-inspired intelligence methods. For a defined load and fixed link capacities, the average packet delay is minimized using several variants of evolutionary programming and particle swarm optimization, and a comparative study of their converging speed is given. The universal approximation capability, a key feature of feed-forward neural networks, is applied to predict the flow distribution on each link that minimizes the average delay for the total load currently present on the network. For any variation in the total load, a new flow distribution yielding minimum delay in the network can be generated by the neural network immediately. With the inclusion of this information, the performance of the routing protocol improves considerably.
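A toy sketch of the delay-minimization step: a basic particle swarm distributes a fixed total load over links of given capacity so that an M/M/1-style aggregate delay, the sum of f_i / (C_i - f_i) over links, is minimized. The link capacities, load value and PSO parameters below are illustrative assumptions, not the configurations studied in the paper.

```python
# Toy PSO: distribute total traffic over links with fixed capacities to minimize
# the aggregate M/M/1-style delay  sum_i f_i / (C_i - f_i).
import numpy as np

rng = np.random.default_rng(0)
C = np.array([10.0, 20.0, 30.0])       # illustrative link capacities
LOAD = 40.0                             # total traffic to distribute

def delay(frac):
    f = LOAD * frac / frac.sum()        # normalize a particle to a valid flow split
    if np.any(f >= C):
        return np.inf                   # infeasible: a link would saturate
    return np.sum(f / (C - f))

n, dim = 30, len(C)
pos = rng.random((n, dim)) + 1e-3
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([delay(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-3, None)
    vals = np.array([delay(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("flow split:", LOAD * gbest / gbest.sum(), "delay objective:", delay(gbest))
```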
——————————————————————————————————————————————————————————————————–
Improved NSGA-II Based on a Novel Ranking Scheme [ Full-Text ]
Rio G. L. D’Souza, K. Chandra Sekaran and A. Kandasamy
The Non-dominated Sorting Genetic Algorithm (NSGA) has established itself as a benchmark algorithm for multiobjective optimization. The determination of Pareto-optimal solutions is the key to its success. However, the basic algorithm suffers from a high order of complexity, which renders it less useful for practical applications. Among the variants of NSGA, several attempts have been made to reduce the complexity. Though successful in reducing the run-time complexity, there is scope for further improvement, especially considering that the populations involved are frequently of large size. We propose a variant which reduces the run-time complexity using the simple principle of space-time trade-off. The improved algorithm is applied to the problem of classifying types of leukemia based on microarray data. Results of comparative tests are presented, showing that the improved algorithm performs well on large populations.
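For reference, the Pareto-dominance test and the straightforward non-dominated sorting that the NSGA family builds on (and whose cost the paper aims to reduce) can be sketched as below; the paper's novel ranking scheme itself is not reproduced here.

```python
# Straightforward non-dominated sorting (minimization): assign each solution to a
# Pareto front; front 0 contains the non-dominated solutions, and so on.
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(population):
    remaining = list(range(len(population)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(population[j], population[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

objectives = [(1, 5), (2, 3), (3, 1), (4, 4), (2, 2)]   # illustrative 2-objective values
print(non_dominated_sort(objectives))                    # [[0, 2, 4], [1], [3]]
```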
——————————————————————————————————————————————————————————————————–
Text/Graphics Separation and Skew Correction of Text Regions of Business Card Images for Mobile Devices [ Full-Text ]
Ayatullah Faruk Mollah, Subhadip Basu and Mita Nasipuri
Separation of text regions from background texture and graphics is an important step of any optical character recognition system for images containing both text and graphics. In this paper, we present a novel text/graphics separation technique and a method for skew correction of text regions extracted from business card images captured with a cell-phone camera. At first, the background is eliminated at a coarse level based on intensity variance, which makes the foreground components distinct from each other. Then the non-text components are removed using various characteristic features of text and graphics. Finally, the text regions are skew corrected for further processing. Experimenting with business card images of various resolutions, we have found an optimum performance of 98.25% (recall) with 0.75 MP images, which takes 0.17 seconds of processing time and 1.1 MB of peak memory on a moderately powerful computer (Dual-Core 1.73 GHz processor, 1 GB RAM, 1 MB L2 cache). The developed technique is computationally efficient and consumes little memory, making it applicable on mobile devices.
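A simplified sketch of the first (coarse background elimination) step: the image is scanned in blocks, and blocks whose intensity variance falls below a threshold are treated as background. The block size and threshold are illustrative choices, not the paper's tuned values.

```python
# Coarse background removal: blocks with low intensity variance are flagged as
# background, leaving foreground (text/graphics) components distinct.
import numpy as np

def remove_background(gray, block=16, var_threshold=100.0):
    out = gray.copy()
    h, w = gray.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = gray[i:i+block, j:j+block]
            if patch.var() < var_threshold:       # flat region -> background
                out[i:i+block, j:j+block] = 255   # white it out
    return out

rng = np.random.default_rng(0)
card = np.full((128, 256), 200, dtype=np.uint8)          # stand-in for a card image
card[40:60, 30:120] = rng.integers(0, 80, (20, 90))      # "text-like" high-variance region
cleaned = remove_background(card.astype(float)).astype(np.uint8)
print((cleaned == 255).mean())        # fraction of pixels classified as background
```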
——————————————————————————————————————————————————————————————————–
Word level Script Identification from Bangla and Devanagri Handwritten Texts mixed with Roman Script [ Full-Text ]
Ram Sarkar, Nibaran Das, Subhadip Basu, Mahantapas Kundu, Mita Nasipuri and Dipak Kumar Basu
India is a multilingual country where the Roman script is often used alongside different Indic scripts in a text document. To develop a script-specific handwritten Optical Character Recognition (OCR) system, it is therefore necessary to identify the scripts of handwritten text correctly. In this paper, we present a system which automatically separates the scripts of handwritten words from a document written in Bangla or Devanagri mixed with the Roman script. In this script separation technique, we first extract the text lines and words from document pages using a script-independent Neighboring Component Analysis technique [1]. Then we design a Multi Layer Perceptron (MLP) based classifier for script separation, trained with 8 different word-level holistic features. Two equal-sized datasets, one with Bangla and Roman scripts and the other with Devanagri and Roman scripts, are prepared for the system evaluation. On the respective independent text samples, word-level script identification accuracies of 99.29% and 98.43% are achieved.
——————————————————————————————————————————————————————————————————–
Handwritten Bangla Basic and Compound character recognition using MLP and SVM classifier [ Full-Text ]
Nibaran Das, Brindaban Das, Ram Sarkar, Subhadip Basu, Mahantapas Kundu and Mita Nasipuri
A novel approach for recognition of handwritten compound Bangla characters, along with the basic characters of the Bangla alphabet, is presented here. Compared with Roman script for languages such as English, one of the major stumbling blocks in Optical Character Recognition (OCR) of handwritten Bangla script is the large number of complex-shaped character classes of the Bangla alphabet. In addition to 50 basic character classes, there are nearly 160 complex-shaped compound character classes in the Bangla alphabet. Dealing with such a large variety of handwritten characters with a suitably designed feature set is a challenging problem. Uncertainty and imprecision are inherent in handwritten script. Moreover, such a large variety of complex-shaped characters, some of which bear close resemblance to one another, makes the problem of OCR of handwritten Bangla characters more difficult. Considering the complexity of the problem, the present approach attempts to identify compound character classes from the most frequently to the less frequently occurring ones, i.e., in order of importance, so as to develop a framework for incrementally increasing the number of learned compound character classes along with the basic characters. On experimentation, the technique is observed to produce average recognition rates of 79.25% using MLP and 80.51% using SVM after three-fold cross validation of the data, with future scope for improvement and extension.
——————————————————————————————————————————————————————————————————–
Improving Term Extraction Using Particle Swarm Optimization Techniques [ Full-Text ]
Mohammad Syafrullah and Naomie Salim
Term extraction is one of the layers in the ontology development process, its task being to extract automatically all the terms contained in the input document. The purpose of this process is to generate a list of terms that are relevant to the domain of the input document. In the literature there are many approaches, techniques and algorithms used for term extraction. In this paper we propose a new approach using particle swarm optimization techniques in order to improve the accuracy of term extraction results. We choose five features to represent the term score. The approach has been applied to the domain of religious documents. We compare the precision of our term extraction method with TFIDF, Weirdness, GlossaryExtraction and TermExtractor. The experimental results show that our proposed approach achieves better precision than those four algorithms.
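A minimal sketch of the scoring idea: each candidate term receives a score that is a weighted combination of several features, with the weights tuned (in the paper, by particle swarm optimization) against the domain. The feature names, values and fixed weights below are illustrative stand-ins, not the paper's actual five features.

```python
# Term scoring sketch: rank candidate terms by a weighted sum of feature values;
# in the paper the weights are optimized with PSO, here they are fixed for illustration.
import numpy as np

FEATURES = ["tf", "idf", "first_occurrence", "term_length", "domain_relevance"]
weights = np.array([0.30, 0.25, 0.15, 0.10, 0.20])     # would come from PSO in practice

candidates = {                          # hypothetical feature vectors, one per term
    "prayer":    [0.9, 0.7, 0.8, 0.3, 0.9],
    "page":      [0.6, 0.2, 0.4, 0.2, 0.1],
    "scripture": [0.7, 0.8, 0.6, 0.5, 0.8],
}

scores = {t: float(np.dot(weights, f)) for t, f in candidates.items()}
for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{term:12s} {score:.3f}")    # ranked term list for the domain
```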
——————————————————————————————————————————————————————————————————–
Equal Power Distribution and Dynamic Subcarrier Assignment in OFDM Using Minimum Channel Gain Flow with Robust Optimization Uncertain Demand [ Full-Text ]
F.A. Hla Myo Tun, S.B. Aye Thandar Phyo and T.C. Zaw Min Naing
In this paper, the minimum channel gain flow with uncertainty in the demand vector is examined. The approach is based on a transformation of uncertainty in the demand vector into uncertainty in the gain vector. OFDM systems are known to overcome the impairments of the wireless channel by splitting the given system bandwidth into parallel sub-carriers on which data symbols can be transmitted simultaneously. This enables the possibility of enhancing the system's performance by deploying adaptive mechanisms, namely power distribution and dynamic sub-carrier assignment. The performance of maximizing the minimum throughput has been analyzed using MATLAB.
——————————————————————————————————————————————————————————————————–
Supervised Classification Performance of MultiSpectral Images [ Full-Text ]
K. Perumal and R. Bhaskaran
Nowadays, government and private agencies use remote sensing imagery for a wide range of applications, from military applications to farm development. The images may be panchromatic, multispectral, hyperspectral or even ultraspectral, amounting to terabytes of data. Remote sensing image classification is one of the most significant applications of remote sensing. A number of image classification algorithms have demonstrated good precision in classifying remote sensing data. But, of late, due to the increasing spatiotemporal dimensions of remote sensing data, traditional classification algorithms have exposed weaknesses, necessitating further research in the field of remote sensing image classification. An efficient classifier is therefore needed to classify remote sensing images and extract information. We experiment with both supervised and unsupervised classification, comparing the different classification methods and their performances. It is found that the Mahalanobis classifier performed best in our classification.
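For reference, the Mahalanobis classifier named as the best performer assigns each pixel (feature vector) to the class whose mean is closest in Mahalanobis distance, i.e. a distance weighted by the inverse class covariance. A small sketch with synthetic two-band data follows; the class names and statistics are illustrative only.

```python
# Mahalanobis distance classifier: assign each pixel vector to the class with the
# smallest Mahalanobis distance to the class mean.
import numpy as np

def fit(samples_by_class):
    stats = {}
    for label, X in samples_by_class.items():
        X = np.asarray(X, dtype=float)
        cov = np.cov(X, rowvar=False)
        stats[label] = (X.mean(axis=0), np.linalg.inv(cov))
    return stats

def classify(x, stats):
    def dist(mean, inv_cov):
        d = x - mean
        return float(d @ inv_cov @ d)              # squared Mahalanobis distance
    return min(stats, key=lambda lab: dist(*stats[lab]))

rng = np.random.default_rng(0)
train = {"water":      rng.normal([20, 60],  [3, 5], (100, 2)),
         "vegetation": rng.normal([80, 120], [6, 8], (100, 2))}
stats = fit(train)
print(classify(np.array([25.0, 65.0]), stats))     # expected: "water"
```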
——————————————————————————————————————————————————————————————————–
Intrusion Detection System: Overview [ Full-Text ]
Hamdan O. Alanazi, Rafidah Md Noor, B. B. Zaidan and A. A. Zaidan
Network Intrusion Detection (NID) is the process of identifying network activity that can lead to the compromise of a security policy. In this paper, we look at four intrusion detection approaches: ANN (Artificial Neural Network), SOM, fuzzy logic and SVM. The ANN is one of the oldest systems used for Intrusion Detection Systems (IDS) and represents supervised learning methods. In this research we also consider the SOM, or Self-Organizing Map, which is an ANN-based system but applies unsupervised methods, and fuzzy-logic-based IDS, which likewise applies unsupervised learning methods. Lastly, we look at the SVM, or Support Vector Machine, for IDS. The goal of this paper is to outline hybrid approaches that combine these supervised and unsupervised methods.
——————————————————————————————————————————————————————————————————–
A Hough Transform based Technique for Text Segmentation [ Full-Text ]
Satadal Saha, Subhadip Basu, Mita Nasipuri and Dipak Kr. Basu
Text segmentation is an inherent part of an OCR system, irrespective of its domain of application. The OCR system contains a segmentation module in which the text lines, words and ultimately the characters must be segmented properly for successful recognition. The present work implements a Hough transform based technique for line and word segmentation from digitized images. The proposed technique is applied not only to a document image dataset but also to datasets for a business card reader system and a license plate recognition system. To standardize the assessment of the system's performance, the technique is also applied to the public domain dataset published on the website of CMATER, Jadavpur University. The document images consist of multi-script printed and handwritten text lines with varying script and line spacing within a single document image. The technique performs quite satisfactorily when applied to mobile camera captured business card images of low resolution. The usefulness of the technique is verified by applying it in a commercial project for localization of vehicle license plates from surveillance camera images through the segmentation process itself. The accuracy of the technique for word segmentation, as verified experimentally, is 85.7% for document images, 94.6% for business card images and 88% for surveillance camera images.
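A compact sketch of the Hough transform voting used for line detection in such segmentation: each foreground pixel votes for all (rho, theta) lines passing through it, and accumulator peaks correspond to text lines. The accumulator resolution and peak threshold are illustrative choices, not the paper's settings.

```python
# Hough transform sketch: foreground pixels vote in (rho, theta) space;
# accumulator peaks correspond to straight text lines.
import numpy as np

def hough_lines(binary, n_theta=180, peak_fraction=0.5):
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1           # one vote per theta
    peaks = np.argwhere(acc >= peak_fraction * acc.max())
    return [(rho - diag, np.rad2deg(thetas[t])) for rho, t in peaks]

img = np.zeros((60, 100), dtype=np.uint8)
img[20, 10:90] = 1                                    # a horizontal "text line"
print(hough_lines(img)[:3])                           # peaks near (rho=20, theta=90)
```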
——————————————————————————————————————————————————————————————————–
Optimization Digital Image Watermarking Technique for Patent Protection [ Full-Text ]
Mahmoud Elnajjar, A. A. Zaidan, B. B. Zaidan, Mohamed Elhadi M. Sharif and Hamdan O. Alanazi
The rapid development of multimedia and internet allows for wide distribution of digital media data. It becomes much easier to edit, modify and duplicate digital information besides that, digital documents are also easy to copy and distribute, therefore it will be faced by many threats. It is a big security and privacy issue. Another problem with digital document and video is that undetectable modifications can be made with very simple and widely available equipment, which put the digital material for evidential purposes under question With the large flood of information and the development of the digital format, it become necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information ,therefore multimedia technology and popularity of internet communications they have great interest in using digital watermarks for the purpose of copy protection and content authentication. Digital watermarking is a technique used to embed a known piece of digital data within another piece of digital data .A digital data may represent a digital signature or digital watermark that is embedded in the host media. The signature or watermark is hidden such that it’s perceptually and statistically undetectable. Then this signature or watermark can be extracted from the host media and used to identify the owner of the media.