Journal of Communication and Computer
Volume 7, Number 11, November 2010
Abstract: This paper presents shape recognition for object retrieval in image databases using skeleton-based and contour-based representations obtained by discrete curve evolution and by two consecutive primitive edges. Humans tend to use high-level concepts in everyday life, and object segmentation and recognition is the primary step by which computer vision achieves image retrieval and high-level image analysis. Both contour-based and skeleton-based representations are important for object recognition in different areas. Compared with a skeleton-based approach built on a good skeleton pruning method, contour-based approaches are more sensitive to noise; skeletons, however, support only a rough shape classification, since they do not represent any shape details. In this paper, we propose a novel method that integrates contour-based and skeleton-based object representations: the contour-based representation is built with the proposed two-consecutive-primitive-edges method and the skeleton-based representation with discrete curve evolution. Experimental results demonstrate that the proposed algorithm is superior to Torsello and Hancock’s method in terms of retrieval accuracy.
Key words: Skeleton, shape similarity measure, visual parts, discrete curve evolution.
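Discrete curve evolution simplifies a polygonal contour by repeatedly deleting the least relevant vertex. A minimal sketch using the textbook relevance measure K(v) = β(v)·l1·l2/(l1 + l2), where β is the turn angle at v and l1, l2 the adjacent edge lengths (an assumed standard formulation, not the authors' implementation):

```python
# Sketch of discrete curve evolution on a closed polygon: delete the
# vertex with the lowest relevance K(v) until `target` vertices remain.
import math

def relevance(p_prev, p, p_next):
    """Relevance K(v) = beta * l1 * l2 / (l1 + l2) at vertex p."""
    l1 = math.dist(p_prev, p)
    l2 = math.dist(p, p_next)
    a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    beta = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))  # turn angle
    return beta * l1 * l2 / (l1 + l2)

def dce(points, target):
    """Simplify a closed polygon down to `target` vertices."""
    pts = list(points)
    while len(pts) > target:
        n = len(pts)
        j = min(range(n),
                key=lambda i: relevance(pts[i - 1], pts[i], pts[(i + 1) % n]))
        del pts[j]
    return pts
```

A collinear vertex has turn angle zero, hence relevance zero, so it is removed first; noise bumps go next, while sharp corners that define the shape survive longest.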
Regalado Alejandro, Báez Juan and Peralta Ever
Abstract: This work is focused on a project that integrates the curricula of biochemistry, linear algebra and computer programming. The purpose is for students to develop a software tool that calculates enzyme kinetic parameters from proposed data. The program estimates these parameters by linear regression on one of the linearised forms of the Michaelis-Menten equation, and it characterizes the confidence of the linear fit with the correlation coefficient. Once the different proposed steps were accomplished, we concluded that the purpose was satisfactorily reached, with an increase in creative ability. Most importantly, the failure rate among students fell from 57% to 50%, 28% and 18% over the years 2005 to 2008, respectively.
Key words: Enzyme kinetic parameters, flow diagram, least squares, linear regression, problem-based learning.
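The kind of calculation the students' tool performs can be sketched with the Lineweaver-Burk linearisation 1/v = (Km/Vmax)·(1/[S]) + 1/Vmax: an ordinary least-squares fit of 1/v against 1/[S] yields Vmax and Km, and the correlation coefficient r characterizes the fit. This is an assumed minimal version, not the students' actual program:

```python
# Estimate Michaelis-Menten parameters from (substrate, rate) data via
# least squares on the Lineweaver-Burk (double-reciprocal) form.
import math

def michaelis_menten_fit(s, v):
    """Return (Vmax, Km, r) from substrate concentrations s and rates v."""
    x = [1.0 / si for si in s]              # 1/[S]
    y = [1.0 / vi for vi in v]              # 1/v
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx                       # Km / Vmax
    intercept = my - slope * mx             # 1 / Vmax
    r = sxy / math.sqrt(sxx * syy)          # confidence of the linear fit
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km, r
```

For data generated exactly from v = Vmax·[S]/(Km + [S]), the fit recovers Vmax and Km and r is 1 up to floating-point error; noisy data lowers |r|, which is what the tool reports as fit confidence.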
Adel Nadhem Naeem, Sureswaran Ramadass and Chan Huah Yong
Abstract: In grid computing systems, several grid monitoring tools can be used to monitor the grid. Ganglia, one of them, has three components: Gmetad, Gmond, and a web interface. In this article, the authors enhanced a program to add one or a group of external sensor networks to Ganglia using Gmetric, and studied the scalability of sensor networks on one node. They also studied the factors that cause packet loss between the sensor networks and Ganglia, and the effect of the trash value in decreasing the lost packets by about 10%. The results show that up to a hundred thousand sensor networks can be added to Ganglia.
Key words: Grid monitoring tools, ganglia, sensor network scalability, packet loss, trash value.
Matthias Vodel, Mirko Caspar, Mirko Lippmann and Wolfram Hardt
Abstract: The primary objective of wireless sensor networks is the monitoring of a system or area by measuring different kinds of sensor data. Depending on the application scenario, a specific measurement scheduling scheme is preferred. Based on this scheduling, an optimisation of the available energy resources in each sensor node, and accordingly in the whole network topology, is possible. At the same time, an application-specific synchronisation of the measurement points ensures maximum quality for the merged sensor information. One essential requirement for any kind of scheduling scheme is a synchronised time base across the entire network topology. In order to define capable measurement schemes for any kind of available sensor node hardware, lightweight synchronisation concepts are necessary. In this paper, the authors present a distributed scheduling concept for embedded, resource-limited sensor and ad hoc networks. The concept operates on the application layer and enables application-specific, energy-efficient measurement schemes. To this end, the proposal uses a resource-optimised, flexible time synchronisation algorithm. For proof of concept, the approaches were implemented on a real-world sensor network hardware platform. Several test scenarios evaluate the feasibility and demonstrate excellent usability in wireless sensor network environments.
Key words: Wireless Sensor Networks (WSN), scheduling, synchronisation, high-level, application-specific, measurement schemes.
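The core of any lightweight time-base agreement is estimating the clock offset between two nodes. A common two-way message exchange (the classic sender-receiver scheme also used in TPSN-style protocols; shown here as an illustration, not the authors' algorithm) needs only four timestamps:

```python
# Two-way clock-offset estimation between a local and a remote node.
# t1: request sent (local clock), t2: request received (remote clock),
# t3: reply sent (remote clock),  t4: reply received (local clock).
def clock_offset(t1, t2, t3, t4):
    """Return (offset of remote clock vs local, one-way delay estimate)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay
```

With symmetric link delays the offset estimate is exact; the residual error is bounded by the delay asymmetry, which is why such schemes suit resource-limited nodes that cannot afford heavier synchronisation traffic.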
Dennys Robson Girardi, Claudia Maria Moro and Hugo Bulegon
Abstract: Leprosy is an infectious disease caused by Mycobacterium leprae. The disease generally compromises the neural fibers, leading to the development of disability. Disabilities are changes that limit the daily activities or social life of a normal individual. In leprosy, the study of disability considers the functional limitation (physical disabilities), the limitation of activity, and social participation, which are measured respectively by the EHF, SALSA and Participation scales. The objective of this work is to propose SeyeS, a system based on Bayesian Networks (BNs) for leprosy patients, which builds on the eye-related information of the EHF scale. It is expected that the proposed system will be applied in monitoring the patient during treatment and after healing therapy of the disease. SeyeS presents specificity 1 and sensitivity
Key words: Leprosy, medical Informatics, decision support system, disability, eyes.
Amin Zribi, Sonia Zaibi, Ramesh Pyndiah and Ammar Bouallègue
Abstract: It was demonstrated in recent contributions that Joint Source/Channel (JSC) decoding can be a good approach to error correction in the case of transmission of entropy-encoded data. This paper addresses a new scheme for JSC decoding of Variable-Length Codes (VLC) and Arithmetic Codes (AC) based on Maximum a posteriori (MAP) sequence estimation. Previous contributions used a trellis description of the entropy encoding machine to perform soft-input decoding. Compared with classical hard-input decoding, significant improvements are achieved. Nevertheless, for realistic contexts, the complexity of the trellis-based technique becomes intractable. The decoding algorithm we propose performs Chase-like decoding using a priori knowledge of the source symbol sequence and the compressed bit-stream lengths. Performance in the case of transmission on an Additive White Gaussian Noise (AWGN) channel is evaluated in terms of Packet Error Rate (PER). Simulation results show that the proposed decoding algorithm leads to significant performance gain in comparison to classical VLC and AC decoding while exhibiting very low complexity. The practical relevance of the proposed technique is validated in the case of image transmission across the AWGN channel. Lossless and lossy image compression schemes are considered, and the Chase-like entropy decoder shows excellent results in terms of PER and reconstructed image quality.
Key words: Arithmetic coding, chase-like decoding, communication system performance, variable-length code.
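The Chase idea can be illustrated on a toy VLC: flip combinations of the p least-reliable received bits, keep only candidates that parse to the known symbol count, and return the candidate closest to the received soft values. The codebook, reliability ordering and squared-Euclidean metric below are illustrative assumptions, not the paper's exact scheme:

```python
# Chase-like soft decoding of a prefix-free VLC using a priori knowledge
# of the source symbol count. Bit 1 is mapped to +1, bit 0 to -1.
from itertools import product

def parse_vlc(bits, codebook):
    """Greedy parse of a bit list with a prefix-free codebook, or None."""
    inv = {code: sym for sym, code in codebook.items()}
    syms, cur = [], ''
    for b in bits:
        cur += str(b)
        if cur in inv:
            syms.append(inv[cur])
            cur = ''
    return syms if cur == '' else None

def chase_vlc_decode(soft, codebook, n_symbols, p=3):
    """Try all flips of the p least-reliable bits; keep valid parses."""
    hard = [1 if s > 0 else 0 for s in soft]
    weak = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[:p]
    best, best_metric = None, float('inf')
    for pattern in product([0, 1], repeat=p):
        cand = hard[:]
        for pos, flip in zip(weak, pattern):
            cand[pos] ^= flip
        syms = parse_vlc(cand, codebook)
        if syms is not None and len(syms) == n_symbols:
            metric = sum((s - (1 if b else -1)) ** 2
                         for s, b in zip(soft, cand))
            if metric < best_metric:
                best, best_metric = syms, metric
    return best
```

Only 2^p candidates are tested instead of a full trellis search, which is where the low complexity comes from; the known symbol count prunes hard-decision errors that would otherwise parse to the wrong number of symbols.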
Abstract: In digital communication, implementing an RC (Raised Cosine) filter digitally has been an important topic. The current method is a scalar-based algorithm. In this paper, a new vector-based algorithm is proposed and implemented in MATLAB. It is also realized with Motorola’s DSP
Key words: RC filter, convolution, correlation, digital communication.
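A vector-based realisation means computing the whole filter response as one convolution rather than sample-by-sample scalar updates. A sketch (standard raised-cosine definitions, not the paper's MATLAB or DSP code): the impulse response is h(t) = sinc(t/T)·cos(πβt/T) / (1 − (2βt/T)²), with the singular points t = ±T/(2β) filled by the limit value.

```python
# Vector-based raised-cosine pulse shaping: build the tap vector once,
# then shape the whole symbol stream with a single convolution.
import numpy as np

def rc_impulse(beta, sps, span):
    """RC taps: rolloff beta, sps samples/symbol, span symbols per side."""
    t = np.arange(-span * sps, span * sps + 1) / sps  # time in symbol units
    num = np.sinc(t) * np.cos(np.pi * beta * t)
    den = 1.0 - (2.0 * beta * t) ** 2
    h = np.zeros_like(t)
    regular = np.abs(den) > 1e-12
    h[regular] = num[regular] / den[regular]
    h[~regular] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))  # limit value
    return h

def shape(symbols, h, sps):
    """Upsample the symbol stream by sps and convolve with the RC taps."""
    up = np.zeros(len(symbols) * sps)
    up[::sps] = symbols
    return np.convolve(up, h)
```

The taps are symmetric and cross zero at every nonzero multiple of the symbol period, which is the Nyquist no-ISI property that makes the RC filter attractive in the first place.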
Xia Zhuge and Koji Nakano
Abstract: The halftoning technique is used to convert a continuous-tone image into a binary image with pure black and white pixels. This technique is necessary when printing or displaying a monochrome or color image on a device with limited color levels. The main contribution of this paper is a halftoning method that conceals a small binary image inside a large binary image. More specifically, two distinct gray-scale images are given, the smaller of which is to be hidden in the larger one. Our halftoning method generates two binary images that reproduce the tone of the corresponding original gray-scale images. Each pixel of the small binary image is hidden in some pixel of the large binary image through our halftoning method. The small hidden image can be seen by picking out the pixels of the large binary image at premeditated locations; without the location information, the hidden image cannot be seen. Another contribution of this paper is to extend our halftoning method to hide a small image of any size in a correspondingly larger image. The resulting images show that our halftoning method hides and recovers the original images. Hence, our halftoning technique can be used for watermarking as well as for amusement purposes.
Key words: Digital halftoning, error diffusion, image hiding, watermarking.
Borislav Stoyanov, Aleksandar Milev and Anatoli Nachev
Key words: Feedback with carry shift register, self-shrinking sequence, 2-adic stream cipher.
Izzat Alsmadi, Faisal Alkhateeb, Eslam Al Maghayreh, Samer Samarah and Iyad Abu Doush
Abstract: In software projects, one of the main challenges, and a major source of success or failure, is the effective use of available resources. Using effective techniques in regression testing is important to reduce the amount of required resources; this is accomplished by reducing the number of executed test cases without affecting coverage. In this research, genetic algorithms and optimization-theory concepts are applied to test case generation and reduction. The method starts by generating an initial pool of test cases by selecting valid paths in the GUI graph, which is generated dynamically from the tested software by an in-house developed tool. The selected test cases are then improved by measuring and evaluating fitness functions; the two fitness functions used in this research were the test-set generation speed and the test-set coverage. Optimization theory is also used to find the best set, measured according to a particular fitness function, that can best represent the whole testing database while preserving all other constraints.
Key words: Test case generation, software testing, software engineering, genetic algorithms, optimization theory, GUI graph, test automation.
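A GA-based test-suite reduction can be sketched as follows: each chromosome is a bit vector selecting test cases, and the fitness rewards covered requirements while penalising suite size. The fitness weighting and GA operators below are illustrative assumptions, not the paper's in-house tool:

```python
# Toy genetic algorithm for test-suite reduction. `coverage[i]` is the
# set of requirements (e.g. GUI paths) covered by test case i.
import random

def fitness(chrom, coverage):
    """Covered requirements minus a small penalty per selected test."""
    covered = set()
    for gene, covers in zip(chrom, coverage):
        if gene:
            covered.update(covers)
    return len(covered) - 0.1 * sum(chrom)

def reduce_suite(coverage, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    n = len(coverage)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, coverage), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                  # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, coverage))
```

The size penalty is what drives reduction: of two suites with equal coverage, the smaller one has higher fitness, so the population converges toward minimal covering subsets.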
★ Database of EBSCO, Massachusetts, USA
★ Chinese Database of CEPS, Airiti Inc. & OCLC
★ Chinese Scientific Journals Database, VIP Corporation, Chongqing, P.R.China
★ CSA Technology Research Database
★ Ulrich Periodicals Directory
★ Summon Serials Solutions
★ Google Scholar
★ Electronic Journals Library
★ CiteFactor (USA)
★ Scientific Indexing Services
★ INNO SPACE