International Journal of Computer Science & Information Security
© IJCSIS PUBLICATION
Vol. 1, No. 1, May 2009
ISSN 1947-5500

Editorial Message from the Managing Editor

Welcome to the first issue of IJCSIS (International Journal of Computer Science and Information Security). The primary aim of this journal is to disseminate new knowledge and technology for the benefit of everyone, from the academic and professional research communities to industry practitioners, across a range of topics in computer science and engineering in general, and in information and communication security, mobile and wireless networking, and wireless communication systems in particular. It also provides a venue for researchers, students and professionals to submit ongoing research and developments in their areas of interest.

40 papers were submitted to IJCSIS for the May 2009 Special Issue, and the acceptance rate was 30%. The revision process for all papers was rigorous, including peer review by at least two qualified reviewers. The accepted papers are of good quality and cover a wide range of topics, such as computer networks, wireless security and algorithms. It is our pleasure to introduce these accepted papers to you.

Available at http://sites.google.com/site/ijcsis/
IJCSIS Vol. 1, No. 1, 31 May 2009.

IJCSIS EDITORIAL BOARD
Gregorio Martinez Perez, Associate Professor (Profesor Titular de Universidad), University of Murcia (UMU), Spain
Dr. Sanjay Jasola, Professor and Dean, School of Information and Communication Technology, Gautam Buddha University, Greater NOIDA (Gautam Buddha Nagar)
Dr. Riktesh Srivastava, Assistant Professor, Information Systems, Skyline University College, University City of Sharjah, Sharjah, PO 1797, UAE

REVIEWERS' LIST of IJCSIS Publications 2009
1. Dr. Lam Hong Lee, Universiti Tunku Abdul Rahman, Malaysia
2. Assoc. Prof. N. Jaisankar, VIT University, Vellore, Tamilnadu, India
3. Dr. Amogh Kavimandan, The Mathworks Inc., USA
4. Dr. Ramasamy Mariappan, Vinayaka Missions University, India
5. Dr. Neeraj Kumar, SMVD University, Katra (J&K), India
6. Dr. Junjie Peng, Shanghai University, P. R. China
7. Dr. Ilhem Lengliz, HANA Group - CRISTAL Laboratory, Tunisia
8. Prof. Dr. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, Indore, MP, India
9. Prof. Dr. C. Suresh Gnana Dhas, Anna University, India
10. Prof. Pijush Biswas, RCC Institute of Information Technology, India
11. Dr. A. Arul Lawrence, Royal College of Engineering & Technology, India
12. Mr. Wongyos Keardsri, Chulalongkorn University, Bangkok, Thailand
13. Mr. Somesh Kumar Dewangan, CSVTU Bhilai (C.G.) / Dimat Raipur, India
14. Mr. Hayder N. Jasem, University Putra Malaysia, Malaysia
15. Mr. A. V. Senthil Kumar, C. M. S. College of Science and Commerce, India
16. Mr. R. S. Karthik, C. M. S. College of Science and Commerce, India
17. Mr. P. Vasant, University Technology Petronas, Malaysia
18. Mr. Wong Kok Seng, Soongsil University, Seoul, South Korea
19. Mr. Praveen Ranjan Srivastava, BITS PILANI, India
20. Mr. Kong Sang Kelvin, The Hong Kong Polytechnic University, Hong Kong
21. Mr. Mohd Nazri Ismail, Universiti Kuala Lumpur, Malaysia
22. Dr. Rami J. Matarneh, Al-Isra Private University, Amman, Jordan
23. Dr. Ojesanmi Olusegun Ayodeji, Ajayi Crowther University, Oyo, Nigeria
24. Dr. Siddhivinayak Kulkarni, University of Ballarat, Ballarat, Victoria, Australia
25. Dr. Riktesh Srivastava, Skyline University, UAE
26. Dr. Oras F. Baker, UCSI University - Kuala Lumpur, Malaysia
27. Dr. Ahmed S.
Ghiduk, Faculty of Science, Beni-Suef University, Egypt, and Department of Computer Science, Taif University, Saudi Arabia
28. Assist. Prof. Tirthankar Gayen, CIT, West Bengal University of Technology, India
29. Ms. Huei-Ru Tseng, National Chiao Tung University, Taiwan
30. Prof. Ning Xu, Wuhan University of Technology, China
31. Mr. Mohammed Salem Binwahlan, Hadhramout University of Science and Technology, Yemen & Universiti Teknologi Malaysia, Malaysia

Strategies and Performances of Soft Input Decryption

Natasa Zivic (natasa.zivic@uni-siegen.de)
University of Siegen, Hoelderlinstrasse 3, Siegen, Germany

Manuscript received April 2009; revised May 2009.

Summary
This paper analyzes performance aspects of Soft Input Decryption and of L-values. Soft Input Decryption is a novel method which uses the L-values (soft output) of a SISO channel decoder for the correction of the input of Soft Input Decryption (SID blocks) which has been modified during transmission over a noisy channel. The method is based on a combination of cryptography and channel coding, improving the characteristics of both. The algorithm, strategies and scenarios of Soft Input Decryption are described. The results of the tested performance of L-values show how many L-values are necessary for the correction of SID blocks. This number is higher than the number of wrong bits, and the difference is presented. The number of L-values is estimated for different lengths of SID blocks as well as for different Eb/N0 ratios. Spatial characteristics of L-values are not analyzed, because they depend on the SISO decoding algorithm used; in this paper, the Maximum A-Posteriori (MAP) algorithm is used. The time performance of Soft Input Decryption depends on the cryptographic mechanism used for the verification of the cryptographic check values: digital signatures (based on Elliptic Curve Cryptography), MACs (based on CBC-DES) and H-MACs (based on SHA-1). The combination of the performance of L-values and the time performance gives an estimation of whether and when Soft Input Decryption can be performed in practice. Further optimizations are outlined.

Key words: Soft Input Decryption, SISO Channel Decoding, Joint Channel Coding and Cryptography, Digital Signatures, Elliptic Curve Cryptography.

1. Introduction

The combination of a SISO decoder and a decryptor, which is analyzed in this paper, can be considered as a concatenation of codes: the en-/decryptor has the role of an outer en-/decoder and the channel encoder/SISO channel decoder has the role of an inner en-/decoder (Fig. 1). Concatenation of codes, presented as an outer and an inner code, was devised by Forney in 1966 [1]. In the literature it is known as concatenated codes [2], general concatenated codes [3] or super channel codes. The inner code is generally short and decoded with a soft decision decoding algorithm, while the outer code is generally longer and decoded with an algebraic decoding method [2]. In most cases a convolutional code is used as the inner code, in combination with a Reed-Solomon code or another convolutional code as the outer code. This type of concatenated code can be compared to the combination of codes investigated in this work (Fig. 1). Two good characteristics result from such a concatenated scheme: good error performance because of the use of the SISO principle, and good security performance as a result of the use of cryptographic mechanisms.
Another point this work has in common with previous work in coding is the use of reliability in decoding. Several works explore reliability-based soft-decision decoding algorithms for linear block codes, using the concept of error correction by ordering the decoded bits by their reliability values. The soft outputs of the decoder are used as reliability values. The idea of inverting the least probable bits (those with the lowest reliability values) originated from the Chase decoding algorithms [4] in 1972, which were a generalization of the GMD (Generalized Minimum Distance) algorithms from 1966 [1]. These algorithms have been applied to binary (n, k) linear block codes and are referred to as LRP (Least Reliability Positions) algorithms. Chase algorithms generate a list of candidate code words by complementing all possible combinations of the bits with the lowest reliability values; the candidate with the best metric is the decoded solution. The similarity to Soft Input Decryption is the use of L-values which are reordered and iteratively tested. The difference is that Soft Input Decryption uses two decoders (inner and outer) and a non-linear block code (cryptographic algorithms); two codes enable the use of feedback from the outer to the inner code. The next group of algorithms using decoding based on the ordering of L-values is the group of MRIP (Most Reliable Independent Positions) reprocessing decoding algorithms, such as the Most Reliable Basis (MRB), the Least Reliable Basis (LRB) and the Ordered Statistic Decoding algorithm [2].

Joint source-channel coding is another topic related to this work. The cooperation between the source and channel decoder enables a better use of the information of both decoders and better decoding results [5]. It is based on the turbo principle, as are Softbit Source Decoding [6] and Iterative Source-Channel Decoding [7]. The similarity to Soft Input Decryption is the iterative information exchange between two elements of the receiver: the channel and source decoder in the case of joint source-channel coding, and the channel decoder and decryptor in the case of Soft Input Decryption.

2. Soft Input Decryption Algorithm

Soft Input Decryption [8] is a method for the correction of SID blocks which contain cryptographic check values (digital signatures, MACs, H-MACs), using the L-values output by the SISO channel decoder. Cryptographic check values provide data integrity, data origin authentication and non-repudiation [9]. Soft Input Decryption works block oriented. The input for Soft Input Decryption contains data which are secured by cryptographic check values. The block which has to be corrected by Soft Input Decryption after channel decoding is called a SID block (Soft Input Decryption block). It may contain data and cryptographic check values, or just cryptographic check values, depending on the scenario used (see Section 5).

In Fig. 2 the standard verification process without Soft Input Decryption is presented.

Fig. 2 Verification of a SID block without Soft Input Decryption.

The algorithm of Soft Input Decryption (Fig. 3) is as follows: Soft Input Decryption is successfully completed if the verification of the cryptographic check value is successful, i.e. the output is "true".
If the verification is negative, the soft output of the channel decoder is analyzed and the bits with the lowest |L|-values are flipped (XOR 1). Then the decryptor performs the verification process and checks the result of the verification again. If the verification is again negative, bits with another combination of the lowest |L|-values are changed. This iterative process stops when the verification is successful or the available resources are consumed. If the attempts at correction fail, the number of modified bits is too large, as a result of a very noisy channel, a very long SID block or an attack, so that the resources are not sufficient to find the correct content of the SID block. It may happen that the attempts at correction of a SID block succeed, but the content of the SID block is not equal to the original one: a collision has happened. This case has a negligible probability if the lengths of the cryptographic check values are chosen under security aspects (for example, considering the "birthday paradox").

Fig. 3 Algorithm of Soft Input Decryption.

3. Simulations and Results

The chosen length of the SID block is 320 bits. The reason for this length is that it is the size of a digital signature based on Elliptic Curve Cryptography over GF(2^160) or GF(p) with ld(p) = 160. Furthermore, BPSK modulation, an AWGN channel model, and a convolutional and a turbo code [10] are used in the simulations. The convolutional encoder has a code rate r = 1/2 and a constraint length m = 2. The turbo encoder with r = 1/3 is based on two parallel RSC convolutional encoders with r = 1/2 and a block interleaver of depth 17 [12]. The turbo decoder performs 3 iterations. The decoder (convolutional and turbo) uses the MAP algorithm [11]. The results in Fig. 4 are shown for Eb/N0 > 1.5 dB. The simulations have been programmed in the C/C++ programming language. For each point of the curves, 50,000 tests are performed, which is sufficient to be representative.

CCER (Cryptographic Check Error Rate) is defined as:

\[ \mathrm{CCER} = \frac{\text{number of incorrect SID blocks}}{\text{number of received SID blocks}} \]  (1)

An incorrect SID block is a SID block which could not be successfully verified, or a SID block which has been falsely successfully verified, i.e. a collision has happened. Conversely, a correct SID block is a successfully verified SID block for which no collision has happened, i.e. it is identical to the sent SID block. The complement of CCER is:

\[ \overline{\mathrm{CCER}} = 1 - \mathrm{CCER} = \frac{\text{number of correct SID blocks}}{\text{number of received SID blocks}} \]  (2)

CCER is presented in Fig. 4 in relation to Eb/N0. Soft Input Decryption is performed with trials using up to the 8 or the 16 lowest |L|-values (i.e. 2^8 or 2^16 trials, respectively). The Soft Input Decryption gain [dB] is the reduction of CCER by Soft Input Decryption depending on Eb/N0. SID blocks can be successfully transmitted over a communication channel with an Eb/N0 of 2.5 dB using Soft Input Decryption. For example, if 1 of 10 SID blocks cannot be verified without Soft Input Decryption (at 4 dB), all SID blocks can be corrected in the case of convolutional codes. In the case of a turbo code, CCER can be reduced from 1/30 to 0 (at 2.5 dB). The results in Fig. 5 are achieved for different lengths of SID blocks using convolutional coding: 128, 160, 320 and 384 bits, with up to 16 flipped bits corresponding to the lowest positions of |L|-values.
As expected, the coding gain of Soft Input Decryption is influenced by the length of a SID block: it is lower when a SID block is longer. In the following figures and sections only a convolutional code is used, because the simulations of convolutional codes are faster than those of turbo codes, and the coding gains of turbo codes are similar to those of convolutional codes (compare Fig. 4).

Fig. 4 Decrypting gain of a 320 bit SID block [8]:
a) convolutional code without Soft Input Decryption
b) convolutional code with Soft Input Decryption using up to the 8 lowest |L|-values
c) as b), but using up to the 16 lowest |L|-values
d) turbo code without Soft Input Decryption
e) turbo code with Soft Input Decryption using up to the 8 lowest |L|-values
f) as e), but using up to the 16 lowest |L|-values

Fig. 5 CCER coding gain of Soft Input Decryption using up to the 16 lowest |L|-values, for different lengths of SID blocks

4. Strategies of Soft Input Decryption

The module "Changing of bits of SID block" (see Fig. 3) contains the strategy that determines in which sequence bits and combinations of bits of the SID block are changed before the next verification is attempted. Depending on the strategy of Soft Input Decryption, different schedules of bit correction are possible. The static strategy of Soft Input Decryption is used for the results of Section 3. Proposals for other strategies are also given in this section.

4.1 Static Strategy

If the first verification after starting Soft Input Decryption is not successful, the bit with the lowest |L|-value of the SID block is flipped, assuming that the wrong bits are probably those with the lowest |L|-values. If the verification is again not successful, the bit with the second lowest |L|-value is changed. The next trial flips the bits with the lowest and the second lowest |L|-values, then the bit with the third lowest |L|-value, etc. The process is limited by the number of bits with the lowest |L|-values which should be tested. The strategy follows the representation of an increasing binary counter, whereby the lowest bit corresponds to the bit with the lowest |L|-value, etc. The strategy is defined by the following algorithm. The static strategy orders the bits of the SID block by their |L|-values, starting with the lowest one and monotonically increasing. The output of the sorting algorithm is represented by an example in Fig. 6: j is the increasing sequence of positions of |L|-values, and Pj indicates the position of the bit with the j-th lowest |L|-value in the original SID block. The length of the SID block is w.

Fig. 6 Sorted sequence of bits of a SID block (an example).

The function "Changing of bits of SID block" of the Soft Input Decryption algorithm (Fig. 3) is shown in Fig. 7 in detail. To control the strategy, an incrementing counter i = 1, ..., 2^Nmax − 1 is used in a binary representation of fixed length Nmax with coefficients c_{i,j} ∈ {0, 1}:

\[ i = \sum_{j=1}^{N_{max}} c_{i,j} \, 2^{j-1} \]  (3)

Nmax is the maximum number of bits to be flipped, so 2^Nmax − 1 is the maximum number of trials if all verifications fail. Each value of the counter i describes one trial. The indices j of those coefficients c_{i,j} which are set (c_{i,j} = 1) indicate the positions of the L-values of the bits to be flipped. These positions can be found by using the sorted table as in Fig. 6. The sorted sequence Pj can be limited to P_{Nmax}. i is reset to 0 at the beginning of Soft Input Decryption.

Fig. 7 Algorithm of the static strategy
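To make the enumeration order concrete, the following Python sketch illustrates the static strategy. It is not the authors' implementation (the paper's simulations were written in C/C++ and are not shown); the names soft_input_decryption and verify are hypothetical, and verify stands for the verification of the cryptographic check value:

```python
# Minimal sketch of the static strategy of Soft Input Decryption.
# soft_input_decryption and verify are hypothetical names (assumptions).
def soft_input_decryption(sid_block, l_values, n_max, verify):
    """sid_block: list of bits; l_values: soft outputs of the SISO decoder."""
    if verify(sid_block):
        return sid_block                       # first verification succeeded
    # Sort bit positions by |L|-value, lowest first (P_1 ... P_Nmax of Fig. 6).
    pos = sorted(range(len(sid_block)), key=lambda p: abs(l_values[p]))[:n_max]
    for i in range(1, 2 ** n_max):             # counter i of equation (3)
        trial = list(sid_block)
        for j in range(n_max):
            if (i >> j) & 1:                   # coefficient c_{i,j} = 1
                trial[pos[j]] ^= 1             # flip bit with (j+1)-th lowest |L|
        if verify(trial):
            return trial                       # verification successful
    return None                                # resources consumed: failure

# Toy usage: "verification" succeeds when the block equals a known codeword.
target = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 0, 1, 0, 0, 1, 1]                 # two transmission errors
l_vals = [5.1, 4.2, 0.3, 3.9, 2.8, 3.0, 4.8, 0.7]   # wrong bits have low |L|
print(soft_input_decryption(received, l_vals, 4, lambda b: b == target))
```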
The strategy is based on the following assumption: if bits are wrongly decoded by the channel decoder, then they have the lowest |L|-values. Unfortunately, this assumption is not always true, because L-values are probability based and give only an orientation as to which bits may be wrongly decoded. It may happen, for example, that combinations of up to 7 bits have to be tested when only 3 bits are wrong.

4.2 Dynamic Strategy

The static strategy sorts the L-values of single bits. The dynamic strategy calculates the L-values of groups of bits to decide which trial should be performed next. It can happen that a group of bits results in a lower |L|-value than the |L|-value of a single bit, i.e. that this specific group of bits is probably wrong. In this way it is possible to find the group of wrong bits in only a few trials, without testing all combinations of flipped bits until the right combination is found, as in the static strategy. The calculation is based on the use of the L-algebra [12] and is much more complex than the static strategy. The elaboration of the dynamic strategy is left for further study.

4.3 BER Based Strategy

The BER based strategy analyzes which number of errors is the most probable in the SID block, under consideration of Eb/N0 or, respectively, the BER. To reduce the potential number of tests, L-values are taken into account. Example: if Nmax is 16 and 4 bit errors are most probable for SID blocks of 320 bits, then up to \(\binom{16}{4}\) instead of \(\binom{320}{4}\) tests are performed. If the \(\binom{16}{4}\) tests are not successful, \(\binom{16}{3}\) tests of 3 bit errors and \(\binom{16}{5}\) tests of 5 bit errors have to be performed. The BER strategy is also left for further study.

5. Application Scenarios

5.1 Scenario 1

In Scenario 1 (Fig. 7) digital signatures giving message recovery are used [13]. In this type of digital signature, the signature is not appended to the message but contains the message; practically, the digital signature replaces the message. If the verification process is successful, the message is also correctly recovered. The length of the message including redundancy is limited by the size of the underlying mathematical structure of the asymmetric cryptography used. For example, the length has to be shorter than 1024 bits if RSA is used with a modulus length of 1024 bits, or shorter than 160 bits if ECC (elliptic curve cryptography) is used over GF(2^160). Algorithms can be found in [13] and [14]. This scenario can often be found in transaction oriented applications exchanging short messages which have to be authentic. Typical examples are measurement values in industrial metering systems (electricity, water, gas etc.), data generated by sensors, as well as stock exchange rates.

5.2 Scenario 2

Scenario 2 (Fig. 8) considers signatures with appendix [15]. Messages signed by signatures with appendix have arbitrary lengths, but Soft Input Decryption can only be applied to SID blocks of limited length. (Note: the application of a collision free one-way hash function is considered here to be part of the signature generation and verification process.)
In this scenario it is assumed that the message and the signature are transmitted separately and the SID block consists only of the signature: the message is transmitted via a communication channel different from the one used for the transmission of the digital signature (out of band), or via the same communication channel (in band). The message itself is transmitted first and, if modified during transmission, is corrected by redundancy within the message, by repetition or by agreement of the communication partners. It is therefore assumed that the message is finally received correctly by the receiver. The digital signature is transmitted afterwards, either when it is generated on request or when the action described by the message should be executed. Typical examples are legal contracts or bank transactions which are prepared in advance, with the digital signature transmitted at the requested moment. If an error occurs during the signature verification, the signature is not correct: it has been manipulated, or errors occurred during the transmission which cannot be corrected by the channel decoder. In this case, the Soft Input Decryption block consists of one signature block used for the verification of the message.

Fig. 8 Scenario of signatures with appendix

5.3 Scenario 3

Scenario 3 (Fig. 9) is the most general scenario and considers messages with a cryptographic check value: a MAC [16], an H-MAC [17] or a digital signature with appendix. In this case, the SID block consists of a message extended by a cryptographic check value. Typical examples are credit/debit applications and other banking applications.

On the one hand, from the point of view of Soft Input Decryption it does not matter which scenario is applied: Soft Input Decryption always tries to correct the SID block as long as the verification fails. On the other hand, it is interesting to show that there are many applications for Soft Input Decryption, even if the lengths of SID blocks are relatively short.

6. Performance Aspects of L-values

6.1 The number of L-values required for correction

If the performance of Soft Input Decryption is to be estimated, it is first necessary to consider the reliability of the soft input. Fig. 10 shows the number of L-values needed for the correction of 95 % of SID blocks for different lengths of SID blocks: w = 128, 160, 320 and 384. In the case of SID blocks of length 320 bits, the use of up to 8 L-values is sufficient for the correction if Eb/N0 > 4 dB. The number of L-values necessary for correction increases exponentially with decreasing S/N: for Eb/N0 < 3.5 dB, 13 L-values are needed for 95 % correction.

Fig. 10 Number of L-values needed for correction of 95 % of SID blocks

6.2 The theoretical minimum needed for correction

The problem is that the reliability values themselves are not reliable. So the question arises what the number of errors in a SID block is, which forms the lower limit of the number of L-values to be used by Soft Input Decryption. The probability P_{w,i} that a code word of length w – in this case a SID block – contains i errors is:

\[ P_{w,i} = \binom{w}{i} p^i (1-p)^{w-i} \]  (4)

where p represents the BER. Soft Input Decryption tests all possible combinations of up to Nmax L-values.
The probability Pw that, after correction by Soft Input Decryption, only error patterns with more than Nmax wrong bits remain uncorrected is [18]:

\[ P_w = \sum_{i=N_{max}+1}^{w} \binom{w}{i} p^i (1-p)^{w-i} \]  (5)

Soft Input Decryption corrects the data received from channel decoding. Therefore p is the bit error rate after the channel decoder; p depends on the quality of the MAP decoder. In the case of 95 % corrected SID blocks, Pw = 0.05:

\[ 0.05 = P_w = \sum_{i=N_{max}+1}^{w} \binom{w}{i} p^i (1-p)^{w-i} = 1 - \sum_{i=0}^{N_{max}} \binom{w}{i} p^i (1-p)^{w-i} \]  (6)

The calculation of Nmax for the correction of more than 95 % of SID blocks has been done for SID block lengths of w = {128, 160, 320, 384} bits (Fig. 11). Nmax in (5) and (6) is the minimum Nmax, because this case considers the number of L-values needed for the correction of Pw wrong bits. The results are shown in Fig. 11.

Fig. 11 Theoretical minimum number of bits (min Nmax) to be changed

Now the results of Figs. 10 and 11 can be compared. Fig. 12 shows the comparison for a SID block of 320 bits. The reason for the significant difference is the incorrect allocation of |L|-values to wrong bits, due to imperfections of the MAP decoding algorithm.

Fig. 12 Comparison of the needed and the theoretical number of L-values for a SID block of length 320 bits

6.3 Estimation of the number of L-values for the correction of long SID blocks

In this section, the number of L-values needed for the correction of SID blocks longer than 384 bits is considered, because the application of Soft Input Decryption to longer SID blocks seems very interesting (see Scenario 3 in Section 5, or consider RSA digital signatures instead of ECC, which are discussed later in this section). The estimation of the number of required |L|-values is done by interpolation of the results for SID blocks of lengths 128, 160 and 320 bits and extrapolation for longer SID blocks. The results for 384 bits, known from simulations, are used for the verification of the extrapolation.

Fig. 13 (w = 320, Eb/N0 = 3 dB) shows the percentage of corrected SID blocks per |L|-value: 20 % of SID blocks are corrected with the lowest |L|-value; an additional 13 % are corrected with the second lowest |L|-value; with the third lowest |L|-value an additional 11 %, etc.; with the 15th lowest |L|-value, 1 % more SID blocks are corrected than with the 14th lowest |L|-value.

Fig. 13 The percentage of corrected SID blocks per |L|-value

The percentage of corrected SID blocks per |L|-value was tested for lengths of 128, 160 and 320 bits and for different Eb/N0. The results of the tests show curves of negative exponential form (see Fig. 13):

\[ y = k e^{-ax} \]  (7)

The coefficients k and a depend on the length w of the SID block and on Eb/N0. This dependence is derived in this section and then applied to other values. As the total percentage of corrected SID blocks is 100 % after testing all L-values, the following condition holds:

\[ \sum_{i=1}^{w} y_i = \sum_{i=1}^{w} k e^{-ai} = k \, \frac{e^{-a}\left(1 - e^{-aw}\right)}{1 - e^{-a}} = 1 \]  (8)

or, by expressing e^a as an infinite sum:

\[ k = \frac{e^{a} - 1}{1 - e^{-aw}} = \frac{a + \frac{a^2}{2!} + \frac{a^3}{3!} + \dots}{1 - e^{-aw}} \approx \frac{a}{1 - e^{-aw}} \]  (9)

The negative exponential function is derived for w = {128, 160, 320} and Eb/N0 = [1, 5.5] dB in steps of 0.5 dB.
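The fit itself is straightforward. The following sketch is an illustration only, not the authors' code: it recovers a from synthetic per-position correction fractions (placeholders, not measured values from the paper), with k tied to a through equation (8):

```python
import numpy as np
from scipy.optimize import curve_fit

w = 320  # SID block length in bits (as in Fig. 13)

def model(x, a):
    # Equation (7) with k tied to a via equation (8):
    # k = (e^a - 1) / (1 - e^{-aw}); equation (9) approximates e^a - 1 by a.
    k = (np.exp(a) - 1.0) / (1.0 - np.exp(-a * w))
    return k * np.exp(-a * x)

x = np.arange(1, 16)                      # positions of the lowest |L|-values
rng = np.random.default_rng(0)
y = model(x, 0.23) * (1 + 0.05 * rng.standard_normal(x.size))  # synthetic data

(a_hat,), _ = curve_fit(model, x, y, p0=[0.1], bounds=(1e-6, 2.0))
k_hat = (np.exp(a_hat) - 1.0) / (1.0 - np.exp(-a_hat * w))
print(f"a = {a_hat:.3f}, k = {k_hat:.3f}")  # Table 1 gives a = 0.23, k = 0.258 here
```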
For each function, a and k are derived approximately and shown in Table 1. The relation between the exponent a and Eb/N0 is observed in the next step. S/N is used instead of Eb/N0, respecting the code rate of 1/2:

\[ \frac{S}{N} = \frac{E_b}{N_0} - 3\,\mathrm{dB} \]  (10)

The approximations are done using the S/N ratio, which is easier for calculation. The a-entries of Table 1 are used for an approximation by a second order polynomial:

\[ a = A \left(\frac{S}{N}\right)^2 + B \left(\frac{S}{N}\right) + C \]  (11)

The calculated coefficients A, B and C are shown in Table 2 for SID blocks of lengths 128, 160 and 320 bits. The approximation of the dependence of a on S/N is shown in Fig. 15. The coefficients A, B and C take different values depending on the length of the SID block, and they show a linear relation to w. Therefore A, B and C are approximated by linear functions of w:

\[ A = A(w) = K_A \cdot w + N_A \]  (12)
\[ B = B(w) = K_B \cdot w + N_B \]  (13)
\[ C = C(w) = K_C \cdot w + N_C \]  (14)

The values of the linear coefficients are shown in Table 3.

Eb/N0 [dB]      1      1.5    2      2.5    3      3.5    4      4.5    5
w = 128   k     0.083  0.127  0.271  0.323  0.418  0.461  0.646  0.733  0.821
          a     0.08   0.12   0.24   0.28   0.35   0.38   0.5    0.55   0.6
w = 160   k     0.030  0.078  0.127  0.221  0.284  0.47   0.582  0.792  1.059
          a     0.038  0.075  0.12   0.2    0.25   0.38   0.46   0.6    0.7
w = 320   k     0.02   0.025  0.062  0.221  0.258  0.419  0.733  1.222  1.454
          a     0.02   0.025  0.06   0.2    0.23   0.35   0.5    0.8    0.9

Table 1 Coefficients k and a for SID blocks of w = 128, 160 and 320 bits

w    128    160    320
A    0.04   0.04   0.037
B    0.206  0.201  0.177
C    0.288  0.28   0.24

Table 2 Coefficients A, B and C

KA        NA     KB        NB     KC        NC
-0.00002  0.043  -0.00015  0.225  -0.00025  0.32

Table 3 Values of the linear coefficients KA, NA, KB, NB, KC and NC of the coefficients A, B and C

Fig. 15 Coefficient a in dependence on Eb/N0 for different lengths of SID block

Using equations (7) and (9), the percentage of corrected bits (y in (7)) can be calculated as a function of the position x of the |L|-value, of S/N and of the length w of the SID block:

\[ y = k e^{-ax} = \frac{a(S/N, w)}{1 - e^{-a(S/N, w)\,w}} \; e^{-a(S/N, w)\,x} \]  (15)

with

\[ a\!\left(\frac{S}{N}, w\right) = A(w)\left(\frac{S}{N}\right)^2 + B(w)\,\frac{S}{N} + C(w) = (K_A w + N_A)\left(\frac{S}{N}\right)^2 + (K_B w + N_B)\,\frac{S}{N} + (K_C w + N_C) \]  (16)

If the number of |L|-values needed for correction is denoted by x, a normalized distribution function of the percentage of correction by the x-th L-value can be expressed as:

\[ y^{*} \approx \frac{a \, e^{-ax}}{1 - e^{-aw}} \]  (17)

where a stands for a(S/N, w) as expanded in equation (16). If the number x0 of L-values needed for the correction of a percentage y(x0) of errors is to be found, equation (7) is summed up to x0:

\[ y(x_0) \approx \sum_{i=1}^{x_0} k e^{-ai} = k \, \frac{e^{-a}\left(1 - e^{-a x_0}\right)}{1 - e^{-a}} = \frac{1 - e^{-a x_0}}{1 - e^{-aw}} \]  (18)

and x0 is found as:

\[ x_0 \approx -\frac{1}{a} \ln\!\left(1 - y(x_0)\left(1 - e^{-aw}\right)\right) \]  (19)

It is noted that equation (19) is based on an interpolation over Eb/N0 = [1, 4] dB (i.e. S/N = [-2, 2] dB) and w = {128, 160, 320}.
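Equations (10) to (19), together with the Table 3 coefficients, can be evaluated directly. The following sketch is an illustration based on the rounded coefficients above, not code from the paper; with these values it yields about 13 and 47 L-values, close to the 14 and 48 quoted in the next paragraph:

```python
import math

# Linear coefficients from Table 3.
KA, NA = -0.00002, 0.043
KB, NB = -0.00015, 0.225
KC, NC = -0.00025, 0.32

def a_coeff(eb_n0_db: float, w: int) -> float:
    """Equations (10)-(16): exponent a as a function of Eb/N0 [dB] and length w."""
    sn = eb_n0_db - 3.0                      # equation (10), code rate 1/2
    A = KA * w + NA                          # equation (12)
    B = KB * w + NB                          # equation (13)
    C = KC * w + NC                          # equation (14)
    return A * sn**2 + B * sn + C            # equation (11)/(16)

def x0(eb_n0_db: float, w: int, y: float = 0.95) -> float:
    """Equation (19): lowest |L|-values needed to correct a fraction y of SID blocks."""
    a = a_coeff(eb_n0_db, w)
    return -math.log(1.0 - y * (1.0 - math.exp(-a * w))) / a

# Eb/N0 = 3 dB: roughly 13 for w = 320 and 47 for w = 1024 with these rounded
# coefficients; the paper reports about 14 and 48.
print(math.ceil(x0(3.0, 320)), math.ceil(x0(3.0, 1024)))
```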
y(x0) can be between 0 and 100 %, but it is set to 95 % in this paper. The maximum position of the minimum |L|-value can then be found for different lengths of SID blocks and different Eb/N0. In order to check equation (19), the number of L-values needed for the correction of 95 % of SID blocks of length 384 bits (extrapolation) is compared to the results of the tests (Fig. 16):

Fig. 16 Comparison of the L-values needed for correction of 95 % of SID blocks (w = 384) based on equation (19) and on simulations (Fig. 10)

Fig. 16 shows that equation (19) can be trusted and used for further extrapolations. Equation (19) is now applied to SID blocks of length w = 1024 in Fig. 17. The length of 1024 bits is chosen for comparison because RSA signatures with a signature length of 1024 bits offer the same security level as digital signatures based on ECC with a signature length of 320 bits. It may appear attractive to use RSA signatures because the verification time is very short when a public exponent with a low Hamming weight is used. Assuming Eb/N0 = 3 dB, the number of L-values is 48, compared to 14 when using ECC digital signatures of 320 bits. Therefore, on the one hand, it becomes clear that the use of RSA digital signatures is not suitable for Soft Input Decryption. On the other hand, the application of Soft Input Decryption to blocks longer than 1024 bits is realistic if Eb/N0 is not too low.

Fig. 17 Comparison of the L-values needed for correction of 95 % of SID blocks with w = 320 and w = 1024

Fig. 17 shows that 48 L-values, i.e. 2^48 tests, are necessary in the case of RSA digital signatures, compared to 14 L-values, i.e. 2^14 tests, in the case of ECC digital signatures.

7. Time Performance

Under practical aspects it is important to consider the resources consumed by Soft Input Decryption, specifically the time needed for the correction of SID blocks. The needed time depends on the cryptographic algorithm and on the number of L-values which are necessary to correct the errors. Therefore, the success of Soft Input Decryption is highly dependent on the quality of the soft output of the channel decoder. The times needed to perform 2^8 verifications of digital signatures of 320 bits with appendix (based on ECDSA, without the hash calculation, which is not necessary in Scenario 2), of MACs (CBC-MAC-DES with a key size of 56 bits) and of H-MACs (SHA-1 with an output size of 160 bits) are shown in Tables 4–6. The numbers are based on the use of a PC with a Pentium 4 at 1.7 and 1.8 GHz [19]. The second row of the tables shows the number of verifications and the consumed time needed for testing and correcting bits with up to the 8 lowest |L|-values. In the case of MAC/H-MAC, a new MAC/H-MAC is computed if the changed bit is in the message part. If the changed bit is in the MAC/H-MAC part, only a comparison to the previously computed MAC/H-MAC is performed, which costs almost no time. It is assumed that the L-values used for correction are equally distributed over the message part and the cryptographic check value. In the table with the results for digital signatures (Table 4) there is an additional third row which assumes an optimization of the digital signature verifications (100 times faster verification) following the first verification.
The arithmetic of subsequent verifications is much more efficient and faster if only one bit of the input of the verification is changed compared to one of the preceding verifications, because only a single point addition has to be executed for the correction. Example: in the case of ECKDSA [13], only one single point addition is necessary for the correction. The most interesting result is that digital signatures are more suitable for Soft Input Decryption than MACs and H-MACs, if the assumption that succeeding verifications need only 1 % of the time of the first one is true. Even if succeeding verifications need less than 25 % of the time of the first one, Soft Input Decryption with digital signatures is faster than Soft Input Decryption with H-MACs or MACs.

   Number of verifications   Time [s] 1.7 GHz   Time [s] 1.8 GHz
1  1                         0.0049             0.0044
2  256                       1.2544             1.1264
3  1st + 255 similar         0.0174             0.0156

Table 4 Computation times for the verification of digital signatures

   Number of verifications   Time [s] 1.7 GHz   Time [s] 1.8 GHz
1  1                         0.002              0.00187
2  256 · m / (m + n)         0.3072             0.2872

Table 5 Computation times for the verification of MACs (n = 128 bits) of a message (m = 192 bits)

   Number of verifications   Time [s] 1.7 GHz   Time [s] 1.8 GHz
1  1                         0.00424            0.004
2  256 · m / (m + n)         0.543              0.512

Table 6 Computation times for the verification of H-MACs (n = 160 bits) of a message (m = 160 bits)
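The entries of Tables 4–6 follow from simple expected-value arithmetic, as the following sketch illustrates (the helper names are hypothetical; the single-verification times are taken from the first rows of the tables for the 1.7 GHz machine):

```python
def sig_time(t_verify: float, trials: int = 256, speedup: float = 1.0) -> float:
    """Rows 2/3 of Table 4: one full verification, then (trials-1) at t/speedup."""
    return t_verify + (trials - 1) * t_verify / speedup

def mac_time(t_mac: float, m: int, n: int, trials: int = 256) -> float:
    """Row 2 of Tables 5-6: a new MAC is computed only when the flipped bit
    falls into the message part, i.e. for a fraction m/(m+n) of the trials."""
    return trials * m / (m + n) * t_mac

print(round(sig_time(0.0049), 4))                 # 1.2544 s (Table 4, row 2)
print(round(sig_time(0.0049, speedup=100), 4))    # 0.0174 s (Table 4, row 3)
print(round(mac_time(0.002, m=192, n=128), 4))    # 0.3072 s (Table 5)
print(round(mac_time(0.00424, m=160, n=160), 4))  # ~0.543 s (Table 6)
```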
8. Conclusion and Further Work

In this paper the principles of Soft Input Decryption are presented, as well as the results of various simulations. The application of the Soft Input Decryption method is shown in different scenarios. The coding gain in cryptographic check error rate, which is more than 2 dB in the case of SID blocks of length 320 bits, shows that Soft Input Decryption is a promising method whose characteristics should be examined for further use and improvement. Different strategies for the changing of bits are possible, depending on the system used and on the error distribution. In this paper a static strategy is used, but other possible strategies, such as the dynamic and the BER based strategy, are also mentioned. These and other suggestions for new strategies are left for further study.

The performance of L-values is analyzed by a comparison of test results and theoretical calculations. It is shown that the L-values are not reliable, because bits with lower |L|-values are not necessarily wrong bits. For that reason, many more L-values have to be tested than would be necessary if the wrong bits really had the lowest |L|-values. The results show that L-values become less reliable with decreasing Eb/N0. The existing results on the performance of L-values (for SID blocks of lengths 128, 160 and 320 bits) are used for the derivation of equations which approximately give the number of L-values needed for the correction of a defined percentage of SID blocks. The results of the approximation are confirmed by comparison to the known test results for SID blocks of length 384 bits. By extrapolation of the calculated equations to SID blocks of length 1024 bits, the number of needed L-values has been calculated instead of performing long Soft Input Decryption tests.

The time performance of Soft Input Decryption depends on the cryptographic mechanism used (digital signature, MAC or H-MAC). The surprising result is that digital signatures show very good performance when the time needed for the verification of "neighboured" signatures is improved. Future work should focus on the improvement of the arithmetical efficiency of the cryptography used for Soft Input Decryption, to achieve a faster verification process within the scope of Soft Input Decryption and to prove the assumption used in row 3 of Table 4. In Sections 4.2 and 4.3, more sophisticated strategies of Soft Input Decryption, beyond the static strategy, have been outlined. Further studies will elaborate these strategies and test their performance. Soft Input Decryption should also be applied to the standardized turbo codes for 3G, using the fact that the remaining errors after turbo decoding are very often grouped (flipping of corresponding groups of bits).

References
[1] Forney, G. D. Jr., Concatenated Codes, MIT Press, Cambridge, 1966.
[2] Lin, S., Costello, D. J., Error Control Coding, Pearson Prentice Hall, USA, 2004.
[3] Bossert, M., Kanalcodierung, B. G. Teubner, Stuttgart, 1998.
[4] Chase, D., A Class of Algorithms for Decoding Block Codes with Channel Measurement Information, IEEE Trans. Inform. Theory, IT-18, pp. 170-182, January 1972.
[5] Hagenauer, J., Source-Controlled Channel Coding, IEEE Trans. on Communications, pp. 2449-2457, September 2004.
[6] Adrat, M., Picard, J.-M., Vary, P., Analysis of Extrinsic Information from Softbit-Source Decoding Applicable to Iterative Source-Channel Decoding, in Proc. ITG Conference on Source and Channel Coding, Berlin, January 2002.
[7] Görtz, N., Iterative Source-Channel Decoding by Channel Coded Optimal Estimation, in Proc. 3rd ITG Conference on Source and Channel Coding, Munich, Germany, pp. 267-272, January 2000.
[8] Živić, N., Ruland, C., Softinput Decryption, 4th Turbocode Conference / 6th Source and Channel Code Conference, VDE/IEEE, Munich, April 3-7, 2006.
[9] Ruland, C., Informationssicherheit in Datennetzen, DATACOM Verlag, Bergheim, 1993, ISBN 3-89238-081-3.
[10] Berrou, C., Glavieux, A., Thitimajshima, P., Near Shannon Limit Error Correcting Coding and Decoding: Turbo Codes, in Proc. IEEE International Conference on Communications, Geneva, Switzerland, vol. 2/3, pp. 1064-1070, 1993.
[11] Bahl, L., Cocke, J., Jelinek, F., Raviv, J., Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate, IEEE Transactions on Information Theory, IT-20, pp. 284-287, March 1974.
[12] Fu-Hua Huang, Evaluation of Soft Output Decoding for Turbo Codes, thesis, Virginia Polytechnic Institute, May 1997, http://scholar.lib.vt.edu/theses/available/etd-71897-15815/unrestricted/
[13] ISO/IEC 15946-4, Information technology – Security techniques – Cryptographic techniques based on elliptic curves – Part 4: Digital signatures giving message recovery, 2004.
[14] ISO/IEC 14888-3, Information technology – Security techniques – Digital signatures giving message recovery – Part 3:
[15] ISO/IEC 14888-1, Information technology – Security techniques – Digital signatures with appendix – Part 1: General, 1998.
[16] ISO/IEC 9797-1, Information technology – Security techniques – Message Authentication Codes (MACs) – Part 1: Mechanisms using a block cipher, 1999.
[17] ISO/IEC 9797-2, Information technology – Security techniques – Message Authentication Codes (MACs) – Part 2: Mechanisms using a hash-function, 2000.
[18] Drajić, D. B., Uvod u teoriju informacija i kodovanje, Akademska misao, Beograd, 2004.
[19] Performance of Optimized Implementations of the NESSIE Primitives, NESSIE Project Report, Version 2, 2003.

Natasa Zivic, Dr., born 1975 in Belgrade, Serbia, graduated from the Faculty of Electrical Engineering (Electronics, Telecommunications and Automatics) of Belgrade University in 1999 at the Telecommunications Department. After postgraduate studies at the same faculty (Telecommunications Division), she defended her Magister Thesis (Acoustics) in 2002. From October 2004 she was a scientific assistant at the University of Siegen, Germany, at the Institute for Data Communications Systems, as a DAAD and University of Siegen scholarship holder. In 2007 she defended her doctoral thesis at the same university. The main focus of her work in Siegen is coding and cryptography. From 2000 until 2004 she worked at the Public Enterprise of PTT "Serbia", Belgrade, as a senior engineer. Currently she is employed as an Assistant Professor at the University of Siegen.

Fig. 1 Communication system using concatenated codes.
Fig. 7 Scenario of signatures giving message recovery
Fig. 9 Scenario of messages with cryptographic check values

AUTOMATIC DEFECT DETECTION AND CLASSIFICATION TECHNIQUE FROM IMAGE: A SPECIAL CASE USING CERAMIC TILES

G. M. Atiqur Rahaman (1)
(1) Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh, atiq99@hotmail.com

Md. Mobarak Hossain (2)
(2) Computer Science and Engineering Department, Asian University of Bangladesh, Dhaka, Bangladesh, mobarak10jan@yahoo.com

Abstract—Quality control is an important issue in the ceramic tile industry. At the same time, maintaining the rate of production with respect to time is also a major issue in ceramic tile manufacturing. Furthermore, the price of ceramic tiles depends on the purity of texture, accuracy of color, shape etc. Considering these criteria, an automated defect detection and classification technique is proposed in this paper that can ensure better quality of tiles in the manufacturing process as well as a better production rate. Our proposed method plays an important role in the ceramic tile industry in detecting defects and controlling the quality of ceramic tiles. This automated classification method helps us to acquire knowledge about the pattern of a defect within a very short period of time, and also to decide about the recovery process, so that defective tiles are not mixed with fresh tiles.

Keywords: Quality Control; Pattern of Defect; Defective Ceramic Tiles; Fresh Tiles

I. INTRODUCTION

Image processing is one of the fastest growing areas in computer science. As technology advances, analog imaging is nowadays being switched to digital systems. Every day we capture huge amounts of images, which are very difficult to maintain manually within a certain period of time, so the concepts and applications of digital imaging grow rapidly. Digital image processing is used to extract various features from images. This is done by computers automatically, with little or no human intervention.
One of the most important operations on digital images is to identify and classify various kinds of defects. Methods to detect defects in an image are established at three levels. At the lowest level, techniques deal directly with the raw, possibly noisy pixel values, with de-noising and edge detection being good examples. In the middle are algorithms which utilize the low level results, such as segmentation and edge linking. At the highest level are those methods which attempt to extract semantic meaning from the information provided by the lower levels.

The ceramic tile industry is now a very important manufacturing sector. All production phases are technically controlled until the final stage of the manufacturing process. Checking is needed to determine whether the ceramic tiles are able to serve customer needs, i.e. to find defective tiles. So it is an important task to categorize the ceramic tiles after production based on surface defects. The manual method of defect inspection is labor intensive, slow and subjective. Although automated sorting and packing lines have been in existence for a number of years, the complexity of inspecting tiles for damage and selecting them against the criteria of a manufacturer, i.e. automated inspection of defective tiles, has not been possible. Moreover, human judgment is influenced by expectations and prior knowledge. In many detection tasks, for example edge detection, there is a gradual transition from presence to absence. On the other hand, in "obvious" cases most naive observers agree that the defect is there, even when they cannot identify the structure. Such a monitoring task is, of course, tedious, subjective and expensive. For all these reasons, no one can deny the significance of an automated defect detection and classification system. The objective of our research is to propose an efficient defect detection and classification technique which is able to find image defects at a high rate within a very short time.

The outline of this paper is as follows. Section II reviews the existing works briefly. Section III illustrates our proposed techniques. Section IV presents the experimental results and comparison. Finally, the conclusion is presented in Section V.

II. EXISTING METHODS FOR DEFECT DETECTION

In previous years, several defect detection methods have been proposed to find image defects, but they have some limitations, which can be described briefly as follows.

In [3], H. Elbehiery et al. presented some techniques to detect defects in ceramic tiles. They divided their method into two parts. The first part takes the captured images of tiles as input and outputs the intensity adjusted or histogram equalized image. After that, they used the output of the first part as input for the second part. In the second part of their algorithm, different individual complementary image processing operations are used in order to identify various kinds of defects. The prevailing task emphasized human visual inspection of the defects in the industry, but their system is not automated, which is very necessary in the manufacturing process. Moreover, their proposed method is operation redundant, because the whole second part is applied to every test image to identify the various types of defects.
In addition, their proposed method is very time consuming.

In [4], C. Boukouvalas et al. considered the problem of automatic inspection of ceramic tiles using computer vision. They applied techniques ranging from pinhole and crack detectors for plain tiles, based on a set of separable line filters, through a textured tile crack detector based on the Wigner distribution and a novel conjoint spatial/spatial-frequency representation of texture, to a color texture tile defect detection algorithm which looks for abnormalities in both the chromatic and the structural properties of textured tiles. But using separate filtering techniques for the different types of defects is not a good idea at all, because the high computational time of applying a large number of operations then becomes a major issue. Moreover, their procedure is an automated visual inspection system in which the defects are only made clearly visible, so that the defects found in the image are easier to detect.

In [5], Se Ho Choi et al. presented a real time defect detection method for high speed bar in coil. To enhance the performance of the detection, they used an edge preserving method for noise reduction; to separate image regions with different gray levels they used the Laplacian filter (2nd derivative); and after that they used double thresholding to binarize the image. But the major drawback of their process is that, because of the use of the Laplacian filter, their method is not able to find the orientation of an edge [7], which is needed for the defects of ceramic tiles. The technique using the Laplacian filter malfunctions for corners and curves.

We have found a total of eight types of defects in the existing defect detection methods. These types of defects are shown in the following Table I.

TABLE I. TYPES OF CERAMIC TILE DEFECTS

Name of defect   Description
Crack            Break down of the tile
Pinhole          Scattered isolated black-white pinpoint spots
Blob             Water drop spot on the tile surface
Spot             Discontinuity of color on the surface
Corner           Break down of a tile corner
Edge             Break down of the tile edge
Scratch          General scratch on the surface
Glaze            Blurred surface on the tile

III. MATERIALS AND METHODS

A. Proposed Approach For Defect Detection And Classification

The complete flow chart of our proposed defect detection and classification technique is rendered in Fig. 1.

Figure 1. Flow Chart of the Proposed Method (nodes: Start; Image Acquisition; Image Enhancement; Noise Reduction; Edge Detection; Saving Image as Matrix; Is Defect Detection Successful?; Morphological Operation; Defect Classification; Final Result)

In this section we propose a new method for defect detection and classification. It can make the defect detection rate higher than the existing methods and can reduce the total computational time. Here, a common sequence of operations is applied to detect different types of defects in ceramic tiles, and some proposed algorithms are employed to classify them. The proposed defect detection and classification approach is performed in the following three steps:
Step 1: Performing some image preprocessing operations.
Step 2: Applying the proposed defect detection process.
Step 3: Classifying the defect using the proposed algorithms.

B. Performing Some Image Preprocessing Operations

First, we have to employ some image preprocessing operations on the input image before applying the proposed defect detection process. Preprocessing steps are necessary to convert the captured RGB image obtained from the real source so that binary operations can be performed on it. In our proposed method, we apply preprocessing steps such as image enhancement and noise reduction. At every preprocessing step there are many options for using various kinds of methods in different applications.

1) Image Acquisition

Image acquisition is the process of obtaining a digitized image from a real world source. Each step in the acquisition process may introduce random changes into the values of pixels in the image, which is referred to as noise. A ceramic tile image is captured and stored in the computer for further processing. This may be achieved by taking a photograph with a conventional camera, having the film made into a print and scanning the print into a computer. In our method, we used a SAMSUNG DIGIMAX-W54 digital camera for image capturing. Then this image is trimmed to m × n (width and height) to make all images of equal size.

2) Image Enhancement

The purpose of image enhancement is to make the image clearer, so that various operations can be performed easily on it. For this, the captured RGB image is first converted to a gray level image. Contrast stretching (often called normalization) is a simple image enhancement technique that attempts to improve the contrast in an image by stretching the range of intensity values it contains to span a desired range of values, e.g. the full range of pixel values that the image type concerned allows. Low contrast images result from poor illumination, lack of dynamic range in the imaging sensor, or a wrong setting of the lens. The idea behind contrast stretching is to increase the dynamic range of intensity levels in the processed image. The general contrast stretching operation [1] on a grayscale image applies the following equation to each pixel of the input image to form the corresponding output image pixel:

\[ O(x,y) = \frac{\left(I(x,y) - \min\right) n_i}{\max - \min} + i \]

where O(x,y) represents the output image and I(x,y) represents the x-th pixel in the y-th column of the input image. In this equation, n_i represents the number of intensity levels, i represents the initial intensity level, and "min" and "max" represent the minimum and the maximum intensity value in the current image, respectively. Here the number of intensity levels is the total number of intensity values that can be assigned to a pixel. For example, in gray-level images the lowest possible intensity is normally 0 and the highest intensity value is 255, so the number of intensity levels is 256. The contrast stretching operation is applied to grayscale images in two passes: in the first pass the algorithm calculates the minimum and the maximum intensity values in the image, and in the second pass through the image the above formula is applied to the pixels.
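As an illustration, here is a minimal numpy-based sketch of the two-pass contrast stretching described above. It is not the authors' code, and the 8-bit output range is an assumption:

```python
import numpy as np

def contrast_stretch(img: np.ndarray, n_i: int = 256, i0: int = 0) -> np.ndarray:
    """Two-pass contrast stretching: pass 1 finds min/max, pass 2 maps each pixel.

    img: 8-bit grayscale image; n_i: number of intensity levels; i0: initial level.
    """
    lo, hi = int(img.min()), int(img.max())    # pass 1: min and max intensity
    if hi == lo:                               # flat image: nothing to stretch
        return img.copy()
    # pass 2: O = (I - min) * n_i / (max - min) + i0, clipped to the valid range
    out = (img.astype(np.float64) - lo) * n_i / (hi - lo) + i0
    return np.clip(out, 0, n_i - 1).astype(np.uint8)

# Example: a low-contrast ramp occupying only levels 100..150 is stretched to 0..255.
tile = np.tile(np.linspace(100, 150, 64, dtype=np.uint8), (64, 1))
print(tile.min(), tile.max(), "->", contrast_stretch(tile).min(), contrast_stretch(tile).max())
```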
In the proposed method, we enhance the gray level image to improve its visual quality and machine recognition accuracy using the following formula, described in [1]:

G = INTRANS(F, 'stretch', M, E)

Here, INTRANS performs the intensity (gray level) transformation, and G computes a contrast stretching transformation using the following MATLAB expression:

Contrast = 1 ./ (1 + (M ./ (F + eps)) .^ E)

where the parameter M must be in the range [0, 1]. The default value for M is mean2(im2double(F)) and the default value for E is 4. Here F is the gray-level image, and M is the result found by applying the image-to-double conversion and median filtering operations to F. eps returns the distance from 1.0 to the next largest double-precision number, i.e. eps = 2^(-52).

3) Noise Reduction

Noise reduction is the process of removing noise from a captured image. To remove noise, some filtering techniques [1] can be used, as follows. One method is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. For example, the Gaussian mask comprises elements determined by a Gaussian function. This convolution brings the value of each pixel into closer harmony with the values of its neighbors. In general, a smoothing filter sets each pixel to the average value, or a weighted average, of itself and its nearby neighbors; the Gaussian filter is just one possible set of weights. But smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighborhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction.

For the above reason, we propose to use a non-linear filter, the median filter. It is very good at preserving image detail if it is designed properly. To run a median filter:

a) Consider each pixel in the image.
b) Sort the neighboring pixels into order based upon their intensities.
c) Replace the original value of the pixel with the median value from the list.

A median filter is a rank-selection (RS) filter, a particularly harsh member of the family of rank-conditioned rank-selection (RCRS) filters [2]. A much milder member of that family, for example one that selects the closest of the neighboring values when a pixel's value is extreme in its neighborhood and leaves it unchanged otherwise, is sometimes preferred, especially in photographic applications. The median filter technique is good at removing salt and pepper noise from an image, causes relatively little blurring of edges, and hence is often used in computer vision applications.

4) Edge Detection

An edge may be regarded as a boundary between two dissimilar regions in an image. These may be different surfaces of the object, or perhaps a boundary between light and shadow falling on a single surface. In principle, an edge is easy to find, since differences in pixel values between regions are relatively easy to calculate by considering gradients. Many edge extraction techniques [3] can be broken up into two distinct phases:

• Finding pixels in the image where edges are likely to occur by looking for discontinuities in gradients.
• Linking these edge points in some way to produce descriptions of edges in terms of lines, curves etc.

For the proposed method, we detect edges using the Sobel edge detection method [6] on the resulting image.
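A minimal sketch of this preprocessing pair, median filtering followed by Sobel edge detection and binarization, is shown below using scipy. This is an assumption for illustration; the paper's own implementation is not shown, and the threshold is a free parameter not specified in the text:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def preprocess(gray: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Median-filter a grayscale image, then binarize the Sobel gradient magnitude."""
    smooth = median_filter(gray, size=3)             # 3x3 median: removes salt-and-pepper noise
    gx = sobel(smooth.astype(float), axis=1)         # horizontal gradient
    gy = sobel(smooth.astype(float), axis=0)         # vertical gradient
    magnitude = np.hypot(gx, gy)                     # edge strength per pixel
    return (magnitude > threshold).astype(np.uint8)  # 1 = edge pixel, 0 = background
```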
There are many kinds of edge detectors. We use a first-derivative edge detector (Sobel) to detect the edges of the image, because its computation is simple and fast. If we used a second-derivative operator such as the Laplacian of Gaussian, we would be unable to find the orientation of the edge because of the Laplacian filter. Other Gaussian edge detectors, such as the Canny, Shen-Castan and Boie-Cox operators, are more complex to compute [7].

C. Applying the Proposed Defect Detection Process

All preprocessing operations are applied to the reference image, stored in the computer database, so that it can be compared with the test image. Let the resulting image be I2, and let I1 be the resulting image obtained from the test image after applying all preprocessing operations. We propose a new technique here. We store I1 and I2 in matrix form in a file; call these two matrices m1 and m2. We then count the total number of black pixels (represented in binary as 1) in m1 and in m2, and compare the two counts. If the number of black pixels in m1 is greater than the number of black pixels in m2, we decide that a defect is present in the test image; otherwise, no defect is present. To understand this concept clearly, consider the following example. Suppose we have a test image and a reference image of equal size (5 × 5). Applying the preprocessing steps to the test image gives the matrix m1:

m1 = [0 1 0 0 0
      1 0 1 0 0
      0 0 0 0 0
      0 0 1 0 0
      0 1 0 0 1]

Applying the same preprocessing operations to the reference image gives the matrix m2:

m2 = [0 0 0 0 0
      0 0 0 0 0
      0 0 0 0 0
      0 0 1 0 0
      0 0 0 0 1]

Here the number of black pixels is 2 for the reference image and 6 for the test image. Since 6 > 2, we decide that a defect is present in the test image. The detailed block diagram of the proposed defect detection step is shown in Fig. 2 (the preprocessed test and reference images are reduced to defected-pixel counts n1 and n2, and a defect is reported when n1 > n2).

Figure 2. Block Diagram of Defect Detection Part
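The comparison rule of Fig. 2 reduces to counting nonzero entries in the two preprocessed binary matrices. A minimal sketch, reusing the 5×5 worked example above (the function name detect_defect is ours, not the paper's):

```python
import numpy as np

def detect_defect(test_bin, ref_bin):
    """Defect decision rule of Fig. 2: compare the defected-pixel
    counts of the preprocessed test and reference images."""
    n1 = int(np.count_nonzero(test_bin))   # defected pixels in test image
    n2 = int(np.count_nonzero(ref_bin))    # defected pixels in reference image
    return n1 > n2

# The 5x5 worked example from the text: m1 holds 6 ones, m2 holds 2.
m1 = np.array([[0,1,0,0,0],[1,0,1,0,0],[0,0,0,0,0],[0,0,1,0,0],[0,1,0,0,1]])
m2 = np.array([[0,0,0,0,0],[0,0,0,0,0],[0,0,0,0,0],[0,0,1,0,0],[0,0,0,0,1]])
print(detect_defect(m1, m2))               # True: 6 > 2, defect found
```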
D. Classifying the Defect Using All Proposed Algorithms

Our detailed algorithms for classifying a total of six kinds of defects are given below, step by step.

1) Initial Steps before Classification
a) Plane Tiles
Step 1. After finding the binary image, apply a morphological operation to it.
Step 2. Check every pixel element of the resulting image.
Step 3. If any pixel element has value 1, change its value to 2.
Step 4. Set to 2 those coordinates of the binary image for which the resulting image has value 2.
Step 5. Finally, save this resulting matrix into a text file.

b) Printed Tiles
Step 1. Check the reference image after processing and, if any of its coordinates has value 1, save those coordinates.
Step 2. After finding the binary image from the test image, apply a morphological operation to it.
Step 3. Check every pixel element of this resulting image.
Step 4. If any pixel element has value 1, change its value to 2.
Step 5. Set to 2 those coordinates of the binary test image for which the resulting image has value 2.
Step 6. Set the previously saved coordinate values of the binary test image to 0.
Step 7. Finally, save this resulting matrix into a text file.

2) Algorithm to Determine Pinhole Defects
Let p_count be a variable for the pinhole count, c_range the range of the corner, e_range the range of the edge, and row and col the maximum numbers of image pixels along any row and any column, respectively.
Step 1. Set temp_a = c_range and temp_b = e_range.
Step 2. Divide the total searching area for pinholes into three regions.
Step 3. For the left-side region, for rows consider the range temp_a+1 to row-c_range-1, and for columns the range temp_b+1 to c_range.
(a) Check whether each pixel coordinate is 0.
(b) If so, then
(i) for each coordinate (i,j), check all eight of its neighbors;
(ii) if the (i,j-1), (i,j+1), (i-1,j) and (i+1,j) positions have value 1 and the rest are 0, increment p_count by 1.
Step 4. For the right-side region, the row range is temp_a+1 to row-c_range-1 and the column range is col-temp_a+1 to col-e_range. The rest is the same as Step 3.
Step 5. For the remaining middle elements, the row range is temp_b+1 to row-e_range and the column range is col-temp_a+1 to col-c_range. The rest is the same as Step 3.
Step 6. Finally, check the value of p_count: if p_count > 0, a pinhole is found; otherwise it is not.

3) Algorithm to Determine Crack Defects
Let c_length be the range of a crack.
Step 1. Check every pixel coordinate (i,j) from left to right up to the last pixel element.
Step 2. If any (i,j) has value 1, then
(a) consider its eight adjacent pixels and find which of them are 1;
(b) if any adjacent pixel has value 1, update the current pixel coordinate to it;
(c) apply the backtracking process to find all connected pixels and count the length.
Step 3. Apply Step 2 to all pixels, and for each pixel find the length of its connected pixels.
Step 4. Among all the lengths of connected pixels found in Steps 2 and 3, find the maximum and set it to c_count.
Step 5. Finally, apply Step 2 to mark the crack-defected pixel coordinates so that other types of defects are not affected by them.
Step 6. If c_count > c_length, decide that a crack is found; otherwise it is not.

4) Algorithm to Determine Blob Defects
Let matx be the size of a blob, and row and col the maximum numbers of image pixels along any row and any column, respectively. (A sketch of this window scan is given after the algorithm.)
Step 1. Let start = (matx/2)+1; here start is the middle element of the (matx × matx) window.
Step 2. Check every pixel coordinate (i,j) from left to right up to the last pixel element, with rows in the range start to row-start+1 and columns in the range start to col-start+1.
(a) If any pixel coordinate (i,j) is 2, then
(i) considering it as the middle element, check the (matx × matx) elements around it to find how many 2s exist in this region;
(ii) let the total number of 2s be b_length;
(iii) if b_length = (matx × matx), decide that a blob defect is found and exit the loop.
(b) Otherwise, move to the next pixel coordinate in Step 2.
Step 3. If, after searching every pixel coordinate, no b_length matches (matx × matx), decide that no blob defect is found.
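Here is a compact sketch of the blob window scan just described, under the assumption that matx is odd so the window has a well-defined centre; the 0-based indexing replaces the paper's 1-based start = (matx/2)+1, and matx = 3 is an illustrative default.

```python
import numpy as np

def find_blob(mat, matx=3):
    """Blob scan: slide a matx-by-matx window over the labelled matrix;
    a blob is declared when an entire window consists of pixels
    labelled 2. The spot scan below is identical with its own matx."""
    rows, cols = mat.shape
    half = matx // 2                        # 0-based offset of the window centre
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            if mat[i, j] == 2:
                window = mat[i - half:i + half + 1, j - half:j + half + 1]
                if np.count_nonzero(window == 2) == matx * matx:
                    return True             # fully filled window: blob found
    return False
```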
5) Algorithm to Determine Spot Defects
Let matx be the size of a spot, and row and col the maximum numbers of image pixels along any row and any column, respectively.
Step 1. Let start = (matx/2)+1; here start is the middle element of the (matx × matx) window.
Step 2. Check every pixel coordinate (i,j) from left to right up to the last pixel element, with rows in the range start to row-start+1 and columns in the range start to col-start+1.
(a) If any pixel coordinate (i,j) is 2, then
(i) considering it as the middle element, check the (matx × matx) elements around it to find how many 2s exist in this region;
(ii) let the total number of 2s be s_length;
(iii) if s_length = (matx × matx), decide that a spot defect is found and exit the loop.
(b) Otherwise, move to the next pixel coordinate in Step 2.
Step 3. If, after searching every pixel coordinate, no s_length matches (matx × matx), decide that no spot defect is found.

6) Algorithm to Determine Edge Defects
Let c_range be the range of the corner, and row and col the maximum numbers of image pixels along any row and any column, respectively. (A sketch of the border scan follows the corner algorithm below.)
Step 1. Initially, take a variable e_count and set its value to 0.
Step 2. Consider four regions: on the upper, lower, left and right sides.
Step 3. For the upper edge, the row is fixed at 1 and the column range is c_range+1 to col-c_range+1. Consider each pixel coordinate (i,j):
(i) if coordinate (i,j) has value 1 or 2, increment e_count by 1 and go to Step 7;
(ii) otherwise, continue.
Step 4. For the lower edge, the row has value row and the column range is c_range+1 to col-c_range+1. The rest is the same as Step 3.
Step 5. For the left edge, the column has value 1 and the row range is c_range+1 to row-c_range+1. The rest is the same as Step 3.
Step 6. For the right edge, the column has value col and the row range is c_range+1 to row-c_range+1. The rest is the same as Step 3.
Step 7. If e_count > 0, decide that an edge defect is found; otherwise it is not.

7) Algorithm to Determine Corner Defects
Let row and col be the maximum numbers of image pixels along any row and any column, respectively.
Step 1. Initially, take a variable c_count and set its value to 0.
Step 2. Check every pixel coordinate (i,j) within the ranges of the four corner regions; if any coordinate has value 2, increment c_count by 1.
Step 3. If c_count equals the total corner area, decide that a corner defect is found; otherwise it is not.
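The edge scan (and, analogously, the corner scan) only needs to inspect the image borders. Below is a minimal 0-based sketch of the edge-defect check; c_range = 2 is an illustrative default rather than a value from the paper.

```python
import numpy as np

def find_edge_defect(mat, c_range=2):
    """Edge scan: inspect the four border strips, excluding the corner
    ranges; any pixel labelled 1 or 2 on a border means an edge defect."""
    rows, cols = mat.shape
    strips = [mat[0, c_range:cols - c_range],         # upper edge
              mat[rows - 1, c_range:cols - c_range],  # lower edge
              mat[c_range:rows - c_range, 0],         # left edge
              mat[c_range:rows - c_range, cols - 1]]  # right edge
    return any(np.any((s == 1) | (s == 2)) for s in strips)
```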
IV. EXPERIMENTAL RESULTS AND DISCUSSION

A. Defect Detection
The proposed system detects defects successfully. In this section we show the results of the step-by-step process of the proposed defect detection method. The defect detection rate and time efficiency are compared with those of the existing method [3], and we also classify the different types of defects found through the detection process. It should be mentioned that during production, many tiles of the same color, shape and pattern are produced in industry at the same time, so all tiles of one shape are compared with the standard tile of that particular type when processed through the computer. To illustrate the proposed defect detection process, we apply the method to a sample RGB image of a plane one-colored tile and check whether any kind of defect exists in this test image. For this, we apply our proposed preprocessing operations (image enhancement, noise reduction and edge detection) to the image, as shown in Fig. 3, which also shows the reference RGB image for that test image and the output after applying the preprocessing operations to it.

Fig. 3: Proposed Preprocessing Steps Applied on Test Image and Reference Image: (a) Test Image, (b) Gray Level Image, (c) Image after Contrast Stretching, (d) Image after Edge Detection, (e) Reference Image, (f) Image after Preprocessing

Now consider Fig. 3(d) and Fig. 3(f). The image in Fig. 3(d) is obtained from the test RGB image after applying all the proposed preprocessing operations; the total number of defected pixels counted in this image is 348. The image in Fig. 3(f) is obtained from the reference RGB image after applying the same operations; its total number of defected pixels is 105. According to the proposed defect detection method, n1 = 348 and n2 = 105. Since n1 > n2, we decide that a defect is present in the test image.

We also apply the proposed defect detection method to a sample test RGB image of a printed tile. As before, we check whether any kind of defect exists in the test image and apply our proposed preprocessing operations to it, as shown in Fig. 4, together with the reference RGB image and its output after preprocessing.

Fig. 4: (a) Reference RGB Image, (b) Final Image after Preprocessing for Reference Image, (c) Test RGB Image, (d) Final Image after Preprocessing for Test Image

Consider Fig. 4(d) and Fig. 4(b). The image in Fig. 4(d) is obtained from the test RGB image after applying all the proposed preprocessing operations; its total number of defected pixels is 982. The image in Fig. 4(b) is obtained from the reference RGB image after the same operations; its total number of defected pixels is 714. Thus n1 = 982 and n2 = 714, and since n1 > n2 we decide that a defect is present in the test image.

We have tested the proposed defect detection method on about 50 ceramic tile images, and compared its detection efficiency with that of the existing method [3]; the detection rate of the proposed method is better. Table II shows the efficiency comparison between the existing work and the proposed work, and Fig. 5 presents the detection efficiency as a chart.

TABLE II. EFFICIENCY COMPARISON BETWEEN EXISTING WORK AND OUR WORK

Number of Tiles | Existing Work | Our Work
10              | 90%           | 100%
20              | 85%           | 90%
30              | 86.67%        | 90%
40              | 87.5%         | 92.5%
50              | 88%           | 92%
Average         | 87.4%         | 93%

Fig. 5: Efficiency Comparison Chart between Existing Work and Our Work (detection rate versus number of tiles)

We also compare the time required by the proposed method and by the existing method [3].
The proposed method needs less time than the existing one. Table III shows the time comparison between the existing work and the proposed work, and Fig. 6 presents the comparison as a chart.

TABLE III. TIME COMPARISON BETWEEN EXISTING WORK AND OUR WORK

Number of Tiles | Existing Method (sec) | Our Method (sec)
10              | 38.6363               | 10.3023
20              | 77.4986               | 20.4579
30              | 116.941               | 30.5645
40              | 157.204               | 40.2381
50              | 193.887               | 49.8988

Fig. 6: Time Comparison Chart between Existing Work and Our Work (time in seconds versus number of tiles)

B. Defect Classification
After defect detection, the various kinds of defects must be classified. For this purpose we store the output of the resulting image above into a file in matrix form, and then apply the previously described algorithms to that matrix. Table IV shows the results of applying the proposed classification algorithms to both the plane and the printed tiles discussed above.

TABLE IV. CLASSIFICATION RESULT

Type of Defect | Result (Plane Tiles) | Result (Printed Tiles)
Pinhole        | Found                | Not Found
Crack          | Not Found            | Found
Blob           | Found                | Found
Spot           | Not Found            | Not Found
Edge           | Not Found            | Not Found
Corner         | Not Found            | Not Found

Table V presents the classification efficiency of the proposed method for both plane and printed tiles, and Fig. 7 shows this efficiency as a chart.

TABLE V. CLASSIFICATION EFFICIENCY OF THE PROPOSED METHOD FOR 100 TILES

Name of Defect | Classification Rate for Both Plane and Printed Tiles (%)
Pinhole        | 93.48
Crack          | 86.49
Blob           | 87.50
Spot           | 90.00
Edge           | 96.77
Corner         | 93.55

Fig. 7: Classification Efficiency Chart for the Proposed Method (classification rate versus type of defect)

In this section we have shown the results of the proposed defect detection method applied to particular RGB images, and compared the existing method [3] with the proposed method in terms of detection efficiency and time efficiency. The proposed method performs better than the existing one: its average detection rate is 93% over the tested numbers of tiles, against 87.4% for the existing method, and it requires a total of 49.8988 seconds to process 50 tiles, whereas the existing method requires a much longer total of 193.887 seconds. We have also shown the classification efficiency for the different types of defects.

V. CONCLUSION AND FUTURE WORK

We have proposed a defect detection method for ceramic tiles and compared our technique with the existing defect detection method [3], in terms of both defect detection and processing time. The detection rate of the proposed method is better than that of the existing method, and it needs less time to detect defects. We have also established defect classification algorithms for the different types of defects, and calculated the classification efficiency. The proposed method fails to detect glaze and scratch faults, so detecting and classifying these faults remains future work. We propose no method for improving the computational time of the proposed classification technique; reducing the total computational time of defect classification is another direction for future work.

REFERENCES
[1] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", Pearson Education (Singapore) Pte. Ltd., Indian Branch, 482 F.I.E. Patparganj, 2005-2006.
[2] Puyin Liu and Hongxing Li, "Fuzzy Neural Network Theory and Application", World Scientific, 2004.
[3] H. Elbehiery, A. Hefnawy, and M. Elewa, "Surface Defects Detection for Ceramic Tiles Using Image Processing and Morphological Techniques", Proceedings of World Academy of Science, Engineering and Technology, vol. 5, pp. 158-160, April 2005, ISSN 1307-6884.
[4] C. Boukouvalas, J. Kittler, R. Marik, M. Mirmehdi, and M. Petrou, "Ceramic Tile Inspection for Colour and Structural Defects", under BRITE-EURAM project no. BE5638, p. 6, University of Surrey, 2006.
[5] Se Ho Choi, Jong Pil Yun, Boyeul Seo, Young Su Park, and Sang Woo Kim, "Real-Time Defects Detection Algorithm for High-Speed Steel Bar in Coil", Proceedings of World Academy of Science, Engineering and Technology, vol. 21, January 2007, ISSN 1307-6884.
[6] Se Ho Choi, Jong Pil Yun, Boyeul Seo, Young Su Park, and Sang Woo Kim, "Real-Time Defects Detection Algorithm for High-Speed Steel Bar in Coil", Proceedings of World Academy of Science, Engineering and Technology, vol. 21, January 2007, ISSN 1307-6884.
[7] Mohamed Roushdi, "Comparative Study of Edge Detection Algorithms Applying on the Grayscale Noisy Image Using Morphological Filter", GVIP Journal, vol. 6, issue 4, December 2006.

AUTHORS PROFILE

G. M. Atiqur Rahaman received his B.Sc. in Computer Science and Engineering with Distinction from the Computer Science and Engineering Discipline of Khulna University, Bangladesh, in 2003. Currently he is an Assistant Professor in the same Discipline of the same University. He has published about eight papers in national and international journals as well as international conference proceedings. His areas of interest include Machine Intelligence, Data Mining, Computer Vision, Digital Image Processing, Color Informatics, etc.

High Transmission Bit Rate of Athermal Arrayed Waveguide Grating (AWG) Module in Passive Optical Networks

Abd El-Naser A. Mohammed (1), Ahmed Nabih Zaki Rashed (2*), Gaber E. S. M. El-Abyad (3) and Abd El-Fattah A. Saad (4)
(1,2,3,4) Electronics and Electrical Communication Engineering Department, Faculty of Electronic Engineering, Menouf 32951, Menoufia University, EGYPT
(1) E-mail: abd_elnaser6@yahoo.com, (2*) E-mail: ahmed_733@yahoo.com
Tel.: +2 048-3660-617, Fax: +2 048-3660-617

Abstract―In the present paper, the high transmission bit rate of an athermal arrayed waveguide grating (AWG), composed of lithium niobate (LiNbO3)/polymethyl methacrylate (PMMA) hybrid materials on a silicon substrate, in Passive Optical Networks (PONs) has been parametrically analyzed and investigated over a wide range of the affecting parameters. We have theoretically investigated how the temperature-dependent wavelength shift of the arrayed waveguide grating (AWG) depends on the refractive indices of the materials and on the size of the waveguide. Athermalization of the AWG can be realized by selecting proper values of the material and structural parameters of the device. Moreover, we have analyzed the data transmission bit rate of the athermal AWG in passive optical networks (PONs) based on the Maximum Time Division Multiplexing (MTDM) technique.
Keywords−PONs; arrayed waveguide gratings (AWGs); integrated optics; optical planar waveguide; optical fiber communications; MTDM technique.

I. INTRODUCTION

With the explosive growth of end-user demand for higher bandwidth, various types of passive optical networks (PONs) have been proposed. PONs can be roughly divided into two categories: time-division multiplexing (TDM) and ultra-wide wavelength-division multiplexing (UW-WDM) methods [1]. Compared with TDM-PONs, WDM-PON systems allocate a separate wavelength to each subscriber, enabling the delivery of dedicated bandwidth per optical network unit (ONU). Moreover, this virtual point-to-point connection enables large guaranteed bandwidth, protocol transparency, high quality of service, excellent security, bit-rate independence, and easy upgradeability. In particular, recent progress on athermal arrayed waveguide gratings (AWGs) and cost-effective colorless ONUs [2] has established WDM-PON as an optimum solution for the access network. However, a fiber link failure from the optical line terminal (OLT) to the ONU leads to an enormous loss of data; thus fault monitoring and network protection are crucial issues for network operators seeking a reliable network. To date [3], many methods have been proposed for network protection. In the ITU-T recommendation on PONs (G.983.1), duplicated network resources such as fiber links or ONUs are required. The periodic and cyclic properties of AWGs are used to interconnect two adjacent ONUs by a piece of fiber.

In recent years, arrayed waveguide gratings (AWGs) have emerged as one of the attractive candidates for high-channel-count Mux/DeMux devices that process optical signals in a parallel manner. Their low chromatic dispersion [4], typically ±5 ps/nm to ±10 ps/nm, makes them usable in 40 Gbit/s systems. However, it is well known that manufacturing AWGs involves a series of complex production processes and requires bulky facilities [5], so their cost remains a big issue. Furthermore, the technical complexity leads to low yield and poor performance: the former, no doubt, further increases the production cost, while the latter degrades the signal quality and the system's performance [6], exhibiting high insertion loss, high channel crosstalk, low channel uniformity, and high polarization-dependent loss. More vitally, AWGs require active temperature control in order to stabilize the thermal wavelength drift and the temperature-dependent loss variations [7]. Owing to its capability to increase the aggregate transmission capacity of a single-strand optical fiber, the arrayed waveguide grating (AWG) multiplexer is considered a key component in the construction of dense wavelength-division multiplexing systems [8]. However, an AWG made of silica is so sensitive to the ambient temperature that the output wavelength changes by as much as 0.66 nm/ºC [9].

In the present study, a hybrid-material waveguide with a lithium niobate (LiNbO3) core and PMMA cladding is considered the most attractive athermal structure, because it counteracts the thermo-optic sensitivity of the materials. First, the principle of the athermal AWG with LiNbO3/PMMA hybrid materials is described, and the relevant formulas are derived for analyzing the temperature dependence of the AWG. Second, the data transmission bit rate of the athermal AWG in PONs based on the MTDM technique is analyzed theoretically. Finally, a conclusion is reached based on the analysis and general discussion.
Manuscript received April 17, 2009; revised May 2009.

II. ATHERMAL AWG MODULE IN PON ARCHITECTURE MODEL

The network architecture is shown in Fig. 1. It is based on two cascaded athermal arrayed waveguide gratings (AWGs). The first stage is an N×1 athermal AWG located at the optical line terminal (OLT) or central office (CO). The function of this athermal AWG is to route the optical signals generated by the OLT laser-diode stack to each of the network branches the OLT serves [4]. The second stage is a 1×M athermal AWG located at the remote node. Its task is to demultiplex the M incoming wavelengths to each of the output ports, which connect to the optical network channels and then to the supported numbers of users. The entire network routing intelligence is located at the CO in order to provide easy upgradeability and easy integration with the backbone. The two cascaded athermal AWGs are connected to each other by single-mode optical fiber cables [5].

Fig. 1. Simplified passive optical network architecture model: laser diodes at the OLT/central office (CO) feed an N×1 athermal AWG, which connects over single-mode fiber links to 1×M athermal AWGs at the remote nodes, serving the optical network channels and the supported numbers of users.

III. FEATURES OF THE ATHERMAL ARRAYED WAVEGUIDE GRATING MODULE

The arrayed waveguide grating (AWG), which performs the function of a wavelength multiplexer/demultiplexer, is extensively used in configuring optical communication networks, which are becoming more diversified. Since the transmission wavelength of an AWG is temperature dependent, it was common practice to control its temperature using heaters or Peltier elements. The AWG consists of input waveguides, an arrayed waveguide, slab waveguides and output waveguides, constituting a diffraction grating that takes advantage of the optical path difference in the arrayed waveguide. Fig. 2 shows a schematic view of the AWG circuitry. As the ambient temperature changes, the phase front at each wavelength generated by the arrayed waveguide tilts, owing to the change in the refractive index of the optical waveguide and the linear expansion of the silicon substrate, causing a shift of the focusing point on the output waveguide within the slab waveguide [10].

Fig. 2. Schematic view of the athermal AWG.

As shown in Fig. 3, the athermal arrayed waveguide grating (AWG) cross-section is designed as a square of core width a, with lithium niobate (LiNbO3) core material and PMMA polymer overcladding and undercladding material on a silicon substrate.

Fig. 3. Structural view of the cross-section and refractive-index profile of the LiNbO3/PMMA hybrid materials: a LiNbO3 core of index n1 and width a, with PMMA overcladding and undercladding of index n2, on a Si substrate.

IV. REFRACTIVE INDEX OF HYBRID MATERIALS

IV.1. Lithium Niobate (LiNbO3) Core Material
Investigating both the thermal and the spectral variations of the waveguide refractive index requires a Sellmeier equation. The set of parameters required to completely characterize the temperature dependence of the refractive index n1 is given below; the Sellmeier equation takes the form [11]:

n_1^2 = A_1 + A_2 H + \frac{A_3 + A_4 H}{\lambda^2 - (A_5 + A_6 H)^2} + \frac{A_7 + A_8 H}{\lambda^2 - A_9^2} - A_{10}\,\lambda^2    (1)

where λ is the optical wavelength in μm and H = T^2 - T_0^2. T is the temperature of the material in ºC, and T0 is the reference temperature, taken as 27 ºC.
The Sellmeier coefficients for LiNbO3, recast and dimensionally adjusted, are [11]: A1 = 5.35583, A2 = 4.629 × 10^-7, A3 = 0.100473, A4 = 3.862 × 10^-8, A5 = 0.20692, A6 = -0.89 × 10^-8, A7 = 100, A8 = 2.657 × 10^-5, A9 = 11.34927, and A10 = 0.01533. Equation (1) can be simplified to the following expression:

n_1^2 = A_{12} + \frac{A_{34}}{\lambda^2 - A_{56}^2} + \frac{A_{78}}{\lambda^2 - A_9^2} - A_{10}\,\lambda^2    (2)

where A12 = A1 + A2 H, A34 = A3 + A4 H, A56 = A5 + A6 H, and A78 = A7 + A8 H. Differentiating Eq. (2) with respect to λ gives:

\frac{dn_1}{d\lambda} = -\left(\frac{\lambda}{n_1}\right)\left[\frac{A_{34}}{(\lambda^2 - A_{56}^2)^2} + \frac{A_{78}}{(\lambda^2 - A_9^2)^2} + A_{10}\right]    (3)

In the same way, the second differentiation with respect to λ yields:

\frac{d^2 n_1}{d\lambda^2} = \frac{1}{n_1}\left[\frac{A_{34}\left[4\lambda^2 - (\lambda^2 - A_{56}^2)\right]}{(\lambda^2 - A_{56}^2)^3} + \frac{A_{78}\left[4\lambda^2 - (\lambda^2 - A_9^2)\right]}{(\lambda^2 - A_9^2)^3} - A_{10} - \left(\frac{dn_1}{d\lambda}\right)^2\right]    (4)

Also, differentiating with respect to T gives:

\frac{dn_1}{dT} = \left(\frac{T}{n_1}\right)\left[A_2 + \frac{A_4}{\lambda^2 - A_{56}^2} + \frac{2 A_6 A_{56} A_{34}}{(\lambda^2 - A_{56}^2)^2} + \frac{A_8}{\lambda^2 - A_9^2}\right]    (5)

IV.2. PMMA Polymer Cladding Material
The Sellmeier equation for the refractive index is expressed as follows [12]:

n_2^2 = 1 + \frac{C_1\,\lambda^2}{\lambda^2 - C_2^2} + \frac{C_3\,\lambda^2}{\lambda^2 - C_4^2} + \frac{C_5\,\lambda^2}{\lambda^2 - C_6^2}    (6)

The Sellmeier coefficients for PMMA, as functions of temperature, are [12]: C1 = 0.4963, C2 = 0.07180 (T/T0), C3 = 0.6965, C4 = 0.1174 (T/T0), C5 = 0.3223, and C6 = 9.237. Differentiating Eq. (6) with respect to T then yields:

\frac{dn_2}{dT} = \frac{0.0718\,\lambda^2}{T_0\, n_2}\left[\frac{C_1 C_2}{(\lambda^2 - C_2^2)^2} + \frac{1.635\, C_3 C_4}{(\lambda^2 - C_4^2)^2}\right]    (7)

V. THEORETICAL MODEL ANALYSIS

V.1. Model of the Athermal Arrayed Waveguide Grating (AWG)
We present the athermal condition and the corresponding formulas for the LiNbO3/PMMA hybrid-material AWG on a silicon substrate. The temperature dependence of the AWG center wavelength is expressed as [13, 14]:

\frac{d\lambda_c}{dT} = \frac{\lambda_c}{n_c}\left(\frac{dn_c}{dT} + \alpha_{sub}\, n_c\right)    (8)

where T is the ambient temperature in ºC, λc is the center wavelength of the arrayed waveguide grating in μm, nc is the effective refractive index of the arrayed waveguide grating, αsub is the coefficient of thermal expansion of the Si substrate, and dnc/dT is the thermo-optic (TO) coefficient of the waveguide. Integrating Eq. (8) gives the following expression:

\lambda_c = C\, n_c\, e^{\alpha_{sub} T}    (9)

where C is a constant of integration. Assuming that λc = λ0 and nc = nc0 when T = T0 at room temperature, we can determine C as:

C = \frac{\lambda_0}{n_{c0}}\, e^{-\alpha_{sub} T_0}    (10)

Substituting Eq. (10) into Eq. (9), we obtain:

\lambda_c = \frac{\lambda_0}{n_{c0}}\, n_c\, e^{\alpha_{sub}(T - T_0)}    (11)

From Eq. (11) we obtain the central wavelength shift caused by the temperature variation:

\Delta\lambda = \lambda_c - \lambda_0 = \frac{\lambda_0}{n_{c0}}\left[n_c\, e^{\alpha_{sub}(T - T_0)} - n_{c0}\right]    (12)

Setting Δλ = 0 in Eq. (12) gives the athermal condition of the AWG:

T - T_0 = \frac{1}{\alpha_{sub}}\,\ln\left(\frac{n_{c0}}{n_c}\right)    (13)
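To see how Eqs. (1)-(12) combine, the sketch below evaluates the two hybrid-material indices and the central wavelength shift of Eq. (12), with nc taken from Eq. (17) below. The operating wavelength (1.55 μm) and the substrate expansion coefficient (αsub = 2.6 × 10^-6/ºC, a typical value for silicon) are our assumptions, not parameters stated in this section.

```python
import numpy as np

T0 = 27.0  # reference temperature, deg C

def n1_linbo3(lam, T):
    """LiNbO3 core index from Eq. (2); lam in micrometres, T in deg C."""
    H = T**2 - T0**2
    A12 = 5.35583 + 4.629e-7 * H
    A34 = 0.100473 + 3.862e-8 * H
    A56 = 0.20692 - 0.89e-8 * H
    A78 = 100.0 + 2.657e-5 * H
    n_sq = (A12 + A34 / (lam**2 - A56**2)
            + A78 / (lam**2 - 11.34927**2) - 0.01533 * lam**2)
    return np.sqrt(n_sq)

def n2_pmma(lam, T):
    """PMMA cladding index from Eq. (6); C2 and C4 scale with T/T0."""
    C2, C4 = 0.07180 * T / T0, 0.1174 * T / T0
    n_sq = 1 + (0.4963 * lam**2 / (lam**2 - C2**2)
                + 0.6965 * lam**2 / (lam**2 - C4**2)
                + 0.3223 * lam**2 / (lam**2 - 9.237**2))
    return np.sqrt(n_sq)

def wavelength_shift(T, lam0=1.55, alpha_sub=2.6e-6):
    """Central wavelength shift of Eq. (12), with nc from Eq. (17);
    alpha_sub is a typical silicon CTE (an assumption, not a paper value)."""
    nc = lambda t: np.sqrt((n1_linbo3(lam0, t)**2 + n2_pmma(lam0, t)**2) / 2)
    return (lam0 / nc(T0)) * (nc(T) * np.exp(alpha_sub * (T - T0)) - nc(T0))

print(wavelength_shift(70.0) * 1e3, "nm")  # shift at 70 deg C, in nm
```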
Then, by differentiating Eq. (13), the athermal condition of the AWG can also be expressed in another form [14]:

\frac{dn_c}{dT} = -\alpha_{sub}\, n_c    (14)

The effective refractive index of the arrayed waveguide grating (AWG) is given by [15]:

n_c = \frac{\beta}{k}, \qquad \beta = k\sqrt{b\,(n_1^2 - n_2^2) + n_2^2}, \qquad n_c^2 = b\,(n_1^2 - n_2^2) + n_2^2    (15)

where β is the propagation constant of the fundamental mode, k is the wave number, and b is the normalized propagation constant, given by [15]:

b = \left(1.1428 - \frac{0.9960}{V}\right)^2    (16)

where V is the normalized frequency. For a single-mode step-index optical fiber waveguide, the cut-off normalized frequency is approximately V = Vc = 2.405; substituting in Eq. (16) gives a normalized propagation constant at cut-off of b ≈ 0.5, and substituting this in Eq. (15) gives:

n_c^2 = \frac{1}{2}\,(n_1^2 + n_2^2)    (17)

Taking the square root of Eq. (17) yields:

n_c = 0.7\,(n_1^2 + n_2^2)^{1/2}    (18)

The cut-off normalized frequency for a single-mode step-index optical fiber waveguide is given by the following expression [15]:

V_c = \frac{2\pi a}{\lambda_{cut\text{-}off}}\,(n_1^2 - n_2^2)^{1/2}    (19)

Assuming that the cut-off wavelength equals the central wavelength, so that only the fundamental mode is transferred, i.e. λcut-off = λc, Eq. (19) can be rewritten as:

a = \frac{2.405\,\lambda_c\, n_c}{1.4\,\pi\,(n_1^4 - n_2^4)^{1/2}}    (20)

Equation (20) can be simplified to:

a = \frac{\lambda_c\, n_c}{1.83\,(n_1^4 - n_2^4)^{1/2}}    (21)

which can be expressed in another form as:

n_c^2 = \frac{3.35\, a^2\,(n_1^4 - n_2^4)}{\lambda_c^2}    (22)

Substituting Eq. (22) into Eq. (14) yields:

\frac{dn_c}{dT} = -\frac{3.35\, a^2\,(n_1^4 - n_2^4)}{n_c\,\lambda_c^2}\,\alpha_{sub}    (23)

The effective refractive index nc depends on the refractive indices of the materials and on the size and shape of the waveguide; hence, by selecting proper materials and structural parameters of the waveguide to satisfy Eq. (23), an athermal arrayed waveguide grating (AWG) can be designed.

V.2. Theoretical Model Analysis of High Data Transmission Bit Rate
The total bandwidth is based on the total chromatic dispersion coefficient Dt [16], where:

D_t = D_m + D_w    (24)

Here Dm is the material dispersion coefficient in sec/m², and Dw is the waveguide dispersion coefficient in sec/m². For the fundamental mode, both are given by [17]:

D_m = -\frac{\lambda}{C}\,\frac{d^2 n_1}{d\lambda^2} \quad \text{sec/m}^2    (25)

D_w = -\frac{n_2\,\Delta n}{C\,\lambda}\, Y \quad \text{sec/m}^2    (26)

where C is the velocity of light, 3 × 10^8 m/sec, n1 is the core refractive index, n2 is the cladding refractive index, and Y is a function of the wavelength λ [17]. The relative refractive-index difference Δn is defined as [17]:

\Delta n = \frac{n_1^2 - n_2^2}{2 n_1^2}    (27)

The total pulse broadening due to the total chromatic dispersion coefficient Dt is given by [17]:

\Delta\tau = D_t\, L\, \Delta\lambda \quad \text{nsec}    (28)

where Δλ is the spectral line width of the optical source in nm, and L is the length of the single-mode fiber waveguide in m. The maximum time division multiplexing (MTDM) transmission bit rate is given by [17]:

B_{rm} = \frac{1}{4\,\Delta\tau} = \frac{0.25}{\Delta\tau} \quad \text{Gbit/sec}    (29)

The optical signal wavelength span 1 μm ≤ λsi ≤ 1.65 μm is divided into intervals per link as follows:

\Delta\lambda = \frac{\lambda_f - \lambda_i}{N_L} = \frac{0.65}{N_L} \quad \mu\text{m/link}    (30)

Then the MTDM bit rate per fiber cable link is given by the following expression:

B_{rLink} = \frac{0.25 \times N_{ch}}{\Delta\tau} \quad \text{Gbit/sec/link}    (31)

where Nch is the number of optical network channels in the fiber cable link, and NL is the number of links in the fiber cable core, up to 24 links/core.
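A rough numerical sketch of Eqs. (25), (28) and (29), reusing n1_linbo3 from the previous sketch. Because Y(λ) in Eq. (26) is not specified here, the waveguide dispersion Dw is omitted and Dt is approximated by Dm alone; the link length and source line width are illustrative values, not parameters from the paper.

```python
import numpy as np

C_LIGHT = 3e8  # velocity of light, m/s

def material_dispersion(lam_um, T=27.0, h=1e-3):
    """Dm of Eq. (25) via a central-difference second derivative of the
    LiNbO3 core index; result in sec/m^2 (1e12 converts per-um^2 to per-m^2)."""
    d2n = (n1_linbo3(lam_um + h, T) - 2 * n1_linbo3(lam_um, T)
           + n1_linbo3(lam_um - h, T)) / h**2
    return -(lam_um * 1e-6 / C_LIGHT) * d2n * 1e12

def mtdm_bit_rate(lam_um, L=1000.0, dlam_nm=0.1):
    """Eqs. (28)-(29): pulse broadening over an L-metre link for a source
    line width dlam_nm, then Brm = 0.25/dtau. Dt approximated by Dm alone."""
    dtau = abs(material_dispersion(lam_um)) * L * dlam_nm * 1e-9  # seconds
    return 0.25 / dtau                                            # bit/s

print(mtdm_bit_rate(1.55) / 1e9, "Gbit/s")
```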
Fig. 4. Variation of the thermo-optic (TO) coefficient dnc/dT [×10^-6/ºC] versus temperature T [ºC] for n1 = 2.33, n2 = 1.52, a = 5 μm.

Fig. 5. Variation of the central wavelength shift Δλ [nm] versus temperature for different core refractive indices (n1 = 2.33, 2.32, 2.30), with the athermal case indicated.

Fig. 6. Variation of the central wavelength shift Δλ [nm] versus temperature for different cladding refractive indices (n2 = 1.52, 1.51, 1.50), with the athermal case indicated.

Fig. 7. Variation of the central wavelength shift Δλ [nm] versus temperature for different core widths of the athermal AWG (a = 5 μm, 4.5 μm, 4 μm).

Fig. 8. Total chromatic dispersion coefficient Dt [psec/nm.m] versus optical signal wavelength λ [μm] for Δn = 0.3, 0.4, 0.5, at the assumed set of parameters.

Fig. 9. MTDM transmission bit rate per channel Brm [Gbit/sec] versus optical signal wavelength λ [μm] for Δn = 0.3, 0.4, 0.5, at the assumed set of parameters.