Tuesday, April 2, 2019
Compression Techniques used for Medical Image
1.1 Introduction
Image compression has been an important research issue over the past years. Several techniques and methods have been presented to achieve the common goal of representing the information of an image sufficiently well with a smaller data size and a high compression rate. These techniques can be classified into two categories, lossless and lossy compression techniques. Lossless techniques, such as Huffman encoding, Run Length Encoding (RLE), Lempel-Ziv-Welch coding (LZW) and Area coding, are used when the data are critical and loss of information is not acceptable; hence, many medical images should be compressed by lossless techniques. On the other hand, lossy compression techniques such as Predictive Coding (PC), the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are more efficient in terms of storage and transmission needs, but there is no guarantee that they can preserve the characteristics needed in medical image processing and diagnosis [1-2].

Data compression is the process that converts data files into smaller ones, which is effective for storage and transmission. It presents the information in digital form as binary sequences which contain spatial and statistical redundancy. The relatively high cost of storage and transmission makes data compression worthwhile; compression is considered a necessary and essential key for creating image files with manageable and transmittable sizes [3]. The basic goal of image compression is to reduce the bit rate of an image, and hence the required channel capacity or digital storage memory, while maintaining the important information in the image [4]. The bit rate is measured in bits per pixel (bpp). Almost all methods of image compression are based on two fundamental principles:
The first principle is to remove the redundancy, or duplication, from the image. This approach is called redundancy reduction.
The second principle is to remove parts or details of the image that will not be noticed by the user. This approach is called irrelevancy reduction.
Image compression methods are based on redundancy reduction or irrelevancy reduction separately, while most compression methods exploit both; in other methods the two cannot be easily separated [2]. Several image compression techniques encode transformed image data instead of the original images [5-6].

In this thesis, an approach is developed to enhance the performance of Huffman compression coding. A new hybrid image compression technique that combines lossless and lossy compression, named the LPC-DWT-Huffman (LPCDH) technique, is proposed to maximize the compression so that a higher compression ratio can be obtained. The image is first passed through the LPC transformation, the wavelet transformation is then applied to the LPC output, and finally the wavelet coefficients are encoded by Huffman coding. Compared with the Huffman, LPC-Huffman and DWT-Huffman (DH) techniques, the proposed model achieves a higher compression ratio.
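To make the staging of such a chain concrete, the following is a minimal Python sketch, not the thesis implementation: a simple previous-pixel predictor stands in for the LPC stage, PyWavelets (pywt) supplies a one-level Haar DWT, and the achievable Huffman bit rate is only estimated from the entropy of the rounded coefficients instead of building the actual code. The function names, the rounding step and the test image are all illustrative assumptions.

```python
# Minimal sketch of an LPC -> DWT -> Huffman-style chain (illustrative only).
# Assumptions: previous-pixel predictor for the LPC stage, one-level Haar DWT
# via PyWavelets, and the Huffman bit rate approximated by the entropy bound.
import numpy as np
import pywt

def lpc_residual(img):
    """Previous-pixel prediction along each row: e(n) = x(n) - x(n-1)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]          # predict each pixel from its left neighbour
    return img - pred                  # residual image fed to the next stage

def entropy_bits_per_symbol(data):
    """Shannon entropy of the symbol stream = lower bound on the coded bpp."""
    _, counts = np.unique(data, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def lpcdh_bitrate_estimate(img):
    residual = lpc_residual(img)                       # stage 1: prediction (LPC-like)
    cA, (cH, cV, cD) = pywt.dwt2(residual, 'haar')     # stage 2: one-level 2D DWT
    coeffs = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
    coeffs = np.rint(coeffs).astype(np.int32)          # round before entropy coding
    return entropy_bits_per_symbol(coeffs)             # stage 3: Huffman bound (entropy)

if __name__ == "__main__":
    test = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # smooth synthetic "image"
    print("raw bpp: 8, estimated coded bpp:", round(lpcdh_bitrate_estimate(test), 2))
```

On a smooth synthetic image such as the one above, the residual and wavelet stages concentrate the energy into a few symbol values, so the estimated bit rate drops well below the raw 8 bpp; the individual stages are described in detail in the following sections.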
However, more work is still needed, especially with the advancement of medical imaging systems offering high resolution and video recording. Medical images are at the forefront of the diagnosis, treatment and follow-up of different diseases. Therefore, nowadays, many hospitals around the world routinely use medical image processing and compression tools.

1.1.1 Motivations
Most hospitals store medical image data in digital form using picture archiving and communication systems, due to the extensive digitization of data and increasing telemedicine use. However, the need for data storage capacity and transmission bandwidth continues to exceed the capability of available technologies. Medical image processing and compression have become an important tool for the diagnosis and treatment of many diseases, so we need a hybrid technique to compress medical images without any loss of the image information that is important for medical diagnosis.

1.1.2 Contributions
Image compression plays a critical role in telemedicine. It is desired that either single images or sequences of images be transmitted over computer networks across large distances so that they can be used for a multitude of purposes. The main contribution of this research is to compress medical images to a small size, in a reliable, improved and fast way, to facilitate the medical diagnosis performed by many medical centers.

1.2 Thesis Organization
The thesis is organized into six chapters, as follows:
Chapter 2 describes the basic background on image compression techniques, including lossless and lossy methods, and describes the types of medical images.
Chapter 3 provides a literature survey for medical image compression.
Chapter 4 describes the implementation of the LPC-DWT-Huffman (proposed) algorithm. The objective is to achieve a good compression ratio as well as better quality of reproduction of the image with low power consumption.
Chapter 5 provides simulation results for the compression of several medical images and compares them with other methods using several metric functions.
Chapter 6 provides some conclusions drawn from this work and some suggestions for future work.
Appendix A provides a Huffman example and a comparison between the methods over the last years.
Appendix B provides the Matlab codes.
Appendix C provides various medical images compressed using LPCDH.

1.3 Introduction
Image compression is the process of obtaining a compact representation of an image while maintaining all the necessary information important for medical diagnosis. The target of image compression is to reduce the image size in bytes without affecting the quality of the image. The decrease in image size allows images to take up less memory space. Image compression methods are mainly categorized into two central types, lossless and lossy methods. The major objective of each type is to rebuild the original image from the compressed one without affecting any of its numerical or physical values [7].
Lossless compression, also called noiseless coding, means that the original image can be perfectly recovered, pixel value by pixel value, from the compressed (encoded) image, but it has a low compression rate. Lossless compression methods are often based on redundancy reduction, which uses statistical decomposition techniques to eliminate or remove the redundancy (duplication) in the original image.
Lossless image coding is also important in applications where no information loss is allowed during compression. Due to its cost, it is used only for a few applications with stringent requirements such as medical imaging [8-9]. In lossy compression techniques there is a slight loss of data but a high compression ratio. The original and reconstructed images are not perfectly matched; however, they are practically close to each other, and the difference is represented as noise. Data loss may be unacceptable in many applications, in which case the compression must be lossless. For medical images, compression using lossless techniques does not give enough advantage in transmission and storage, while compression using lossy techniques may discard critical data required for diagnosis [10]. This thesis presents a combination of lossy and lossless compression to obtain a highly compressed image without data loss.

1.4 Lossless Compression
If the data have been losslessly compressed, the original data can be exactly reconstructed from the compressed data. This is generally used for applications that cannot allow any variation between the original and reconstructed data. The types of lossless compression are shown in Figure 2.1.

Figure 2.1 Lossless compression methods.

Run Length Encoding
Run length encoding (RLE), also called recurrence coding, is one of the simplest lossless data compression algorithms. It is based on the idea of encoding consecutive occurrences of the same symbol, and it is effective for data sets that consist of long sequences of a single repeated character [50]. Compression is performed by replacing a series of repeated symbols with a count and the symbol; that is, RLE finds the number of repeated symbols in the input image and replaces them with a two-byte code, the first byte for the count and the second for the symbol. As a simple illustrative example, the string AAAAAABBBBCCCCC is encoded as A6B4C5, which saves nine bytes (i.e. compression ratio = 15/6 = 5/2). However, in some cases there is not much consecutive repetition, which reduces the compression ratio; for example, RLE encodes the original data 12000131415000000900 as 120313141506902 (i.e. compression ratio = 20/15 = 4/3). Moreover, if the data is random, RLE may fail to achieve any compression at all [30-49].

Huffman Encoding
Huffman encoding is the most popular lossless compression technique for removing coding redundancy. It starts by computing the probability of each symbol in the image. These symbol probabilities are sorted in descending order, creating the leaf nodes of a tree. The Huffman code is designed by merging the two least probable nodes into a new node with their combined probability, and this process is continued until only the two last probabilities are left. The code tree is thus obtained, and the Huffman codes are formed by labelling the tree branches with 0 and 1 [9]. The Huffman code for each symbol is obtained by reading the branch digits sequentially from the root node to the leaf. The Huffman code procedure is based on the following three observations:
1) More frequently occurring (higher probability) symbols have shorter code words than symbols that occur less frequently.
2) The two symbols that occur least frequently have the same code length.
3) The Huffman codes are variable-length codes and prefix codes.
For more details, a Huffman example is presented in Appendix (A-I).
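As a concrete illustration of the merging procedure described above, the sketch below builds a Huffman code in Python using the standard heapq module from symbol counts. It is a simplified illustration; the variable names are ours and the tie-breaking order (and hence the exact code words) may differ from the tables in Appendix A, although the code lengths agree.

```python
# Illustrative Huffman code construction: repeatedly merge the two least
# probable nodes, building each symbol's code from root to leaf by prefixing
# 0 (left branch) or 1 (right branch) at every merge.
import heapq
from collections import Counter

def huffman_codes(symbols):
    counts = Counter(symbols)
    # each heap entry: (count, tie_breaker, [list of (symbol, code) pairs])
    heap = [(c, i, [(s, "")]) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate case: a single symbol
        return {heap[0][2][0][0]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)   # least probable node
        c2, _, right = heapq.heappop(heap)  # second least probable node
        merged = [(s, "0" + code) for s, code in left] + \
                 [(s, "1" + code) for s, code in right]
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return dict(heap[0][2])

if __name__ == "__main__":
    data = "AAAAAABBBBCCCCC"                # same string as the RLE example
    codes = huffman_codes(data)
    coded_bits = sum(len(codes[s]) for s in data)
    print(codes, "->", coded_bits, "bits instead of", 8 * len(data))
```

For the example string, A receives a 1-bit code and B and C receive 2-bit codes, so the 120-bit input is represented in 24 bits, consistent with the observations listed above.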
The entropy H describes the possible compression of the image in bits per pixel. It must be noted that no lossless code can achieve an average code length smaller than the entropy. The entropy of an image is calculated as the average information content [12]:

H = − Σ_{k=0}^{L−1} P_k log2(P_k)   (2.1)

where P_k is the probability of symbol k, k is the intensity value, and L is the number of intensity values used to represent the image.

The average code length L_avg is given by the sum of the products of each symbol's probability and the number of bits used to encode it, L_avg = Σ_k P_k l_k. More information can be found in [13-14], and the Huffman code efficiency is calculated as:

η = H / L_avg   (2.2)

LZW Coding
LZW (Lempel-Ziv-Welch) coding is based on the work of J. Ziv and A. Lempel in 1977 [51]; T. Welch's refinements to the algorithm were published in 1984 [52]. LZW compression replaces strings of characters with single codes. It does not do any analysis of the input text; instead, it adds every new string of characters to a table of strings, and compression occurs when the output is a single code instead of a string of characters. LZW is a dictionary-based coding which can be static or dynamic: in static coding the dictionary is fixed during the encoding and decoding processes, while in dynamic coding the dictionary is updated. LZW is widely used in the computer industry and is implemented as the compress command on UNIX [30]. The output codes of the LZW algorithm can be of arbitrary length, but they must have more bits than a single character. The first 256 codes are by default assigned to the standard character set; the remaining codes are assigned to strings as the algorithm proceeds. There are three best-known applications of LZW: UNIX compress (file compression), GIF image compression, and V.42bis (compression over modems) [50].
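A minimal LZW encoder sketch in Python is shown below. It follows the dictionary-growth idea just described, initializing the table with the 256 single-byte codes and emitting one code per longest known string; it is a simplified illustration (plain integer codes, no bit packing or code-width management), not the UNIX compress or GIF implementation.

```python
# Simplified LZW encoder: the dictionary starts with all 256 single-byte
# strings; each time an unseen string appears, it is added to the table and
# the code of its longest known prefix is emitted.
def lzw_encode(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            output.append(dictionary[current])  # emit code for longest known string
            dictionary[candidate] = next_code   # grow the string table
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

if __name__ == "__main__":
    text = b"TOBEORNOTTOBEORTOBEORNOT"           # classic LZW demo string
    codes = lzw_encode(text)
    print(len(text), "input bytes ->", len(codes), "output codes:", codes)
```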
Area Coding
Area coding is an enhanced form of RLE and is more advanced than the other lossless methods. Area coding algorithms find rectangular regions with the same properties; these regions are coded in a specific form as an element with two points and a certain structure. This coding can be highly effective, but it has the drawback of being a nonlinear method that cannot easily be designed in hardware [9].

1.5 Lossy Compression
Lossy compression techniques deliver greater compression percentages than lossless ones, but some information is lost and the data cannot be reconstructed exactly. In some applications, exact reconstruction is not necessary. The lossy compression methods are given in Figure 2.2. In the following subsections, several lossy compression techniques are reviewed.

Figure 2.2 Lossy compression methods.

Discrete Wavelet Transform (DWT)
Wavelet analysis is known as an efficient approach for representing data (signals or images). The Discrete Wavelet Transform (DWT) relies on filtering the image with a high-pass filter and a low-pass filter. In the first stage, the image is filtered row by row (horizontal direction) with the two filters, and the filter outputs are downsampled by keeping the even-indexed columns. This produces two sets of DWT coefficients, each of size N × N/2. In the second stage, these coefficients are filtered column by column (vertical direction) with the same two filters and subsampled by keeping the even-indexed rows, giving two further sets of DWT coefficients, each of size N/2 × N/2. The output is defined by approximation and detail coefficients, as shown in Figure 2.3.

Figure 2.3 Filter stages in the 2D DWT [15].

LL coefficients: low-pass in the horizontal direction and low-pass in the vertical direction.
HL coefficients: high-pass in the horizontal direction and low-pass in the vertical direction, thus following horizontal edges more than vertical edges.
LH coefficients: high-pass in the vertical direction and low-pass in the horizontal direction, thus following vertical edges more than horizontal edges.
HH coefficients: high-pass in both the horizontal and vertical directions, thus preserving diagonal edges.

Figure 2.4 shows the LL, HL, LH and HH sub-bands when a one-level wavelet is applied to a brain image. It can be noticed that the LL sub-band contains almost all the information of the image while its size is a quarter of the original image size if we discard the HL, LH and HH sub-bands; the three detail coefficient sets show the horizontal, vertical and diagonal details. The compression ratio increases as the number of wavelet coefficients equal to zero increases. This implies that a one-level wavelet can provide a compression ratio of four [16].

Figure 2.4 Wavelet decomposition applied to a brain image.

The Discrete Wavelet Transform of a sequence consists of two series expansions, one for the approximation and the other for the details of the sequence. The formal definition of the DWT of an N-point sequence x[n], 0 ≤ n ≤ N − 1, extended to a two-dimensional N1 × N2 image E(n1, n2), is given by [17]:

Q_{j,k1,k2}(n1, n2) = 2^(j/2) Q(2^j n1 − k1, 2^j n2 − k2)   (2.3)

W_Q(j0, k1, k2) = (1/√(N1 N2)) Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} E(n1, n2) Q_{j0,k1,k2}(n1, n2)   (2.4)

W_i(j, k1, k2) = (1/√(N1 N2)) Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} E(n1, n2) ψ^i_{j,k1,k2}(n1, n2)   (2.5)

where Q(n1, n2) is the approximation (scaling) function, E(n1, n2) is the image, W_Q(j0, k1, k2) is the approximation DWT and W_i(j, k1, k2) is the detailed DWT, where i represents the direction index (vertical V, horizontal H, diagonal D) [18].

To reconstruct the original image from the LL (cA), HL (cD(h)), LH (cD(v)) and HH (cD(d)) coefficients, the inverse 2D DWT (IDWT) is applied as shown in Figure 2.5.

Figure 2.5 One-level inverse 2D DWT [19].

The equation of the IDWT that reconstructs the image E(n1, n2) is given by [18]:

E(n1, n2) = (1/√(N1 N2)) [ Σ_{k1} Σ_{k2} W_Q(j0, k1, k2) Q_{j0,k1,k2}(n1, n2) + Σ_{i∈{H,V,D}} Σ_{j≥j0} Σ_{k1} Σ_{k2} W_i(j, k1, k2) ψ^i_{j,k1,k2}(n1, n2) ]   (2.6)

The DWT has different wavelet families such as Haar and Daubechies (db); the achievable compression ratio can vary from one wavelet type to another, depending on which one can represent the signal with fewer coefficients.
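The following sketch, assuming PyWavelets (pywt) is available, performs the one-level 2D decomposition just described on a small test array, prints the sub-band sizes, and verifies that the inverse transform reconstructs the input. The Haar wavelet, the 8 × 8 random test array and the variable names are chosen only for illustration.

```python
# One-level 2D DWT and its inverse with PyWavelets: the four sub-bands are the
# approximation (LL) and the three detail coefficient sets.
import numpy as np
import pywt

image = np.random.randint(0, 256, size=(8, 8)).astype(float)   # toy 8x8 "image"

cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')     # analysis: approximation + details
print("approximation:", cA.shape, "details:", cH.shape, cV.shape, cD.shape)

# Zeroing the three detail sub-bands keeps a quarter of the coefficients,
# which is the "compression ratio of four" idea mentioned above (lossy).
approx_only = pywt.idwt2((cA, (None, None, None)), 'haar')
print("approximation-only reconstruction shape:", approx_only.shape)

reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')          # synthesis (IDWT)
print("perfect reconstruction:", np.allclose(reconstructed, image))
```

Keeping all four sub-bands gives perfect reconstruction (to floating-point tolerance), while keeping only the approximation sub-band trades reconstruction error for the factor-of-four reduction in coefficients.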
Predictive Coding (PC)
The main component of the predictive coding method is the predictor, which exists in both the encoder and the decoder. The encoder computes the predicted value of a pixel, denoted x̂(n), based on the known values of its neighboring pixels. The residual error, which is the difference between the actual value of the current pixel x(n) and the predicted value x̂(n), is computed for all pixels. The residual errors are then encoded by any encoding scheme to generate a compressed data stream [21]. The residual errors must be small to achieve a high compression ratio.

e(n) = x(n) − x̂(n)   (2.7)

e(n) = x(n) − Σ_k α_k x(n − k)   (2.8)

where k is the pixel order and α_k is a prediction coefficient with a value between 0 and 1 [20].

The decoder also computes the predicted value of the current pixel x̂(n), based on the previously decoded values of the neighboring pixels, using the same method as the encoder. The decoder decodes the residual error for the current pixel and performs the inverse operation to restore the value of the current pixel [21].

x(n) = e(n) + x̂(n)   (2.9)

Linear Predictive Coding (LPC)
The techniques of linear prediction have been applied with great success to many problems in speech processing. This success suggests that similar techniques might be useful in modelling and coding of 2-D image signals. Due to the extensive computation required for its implementation in two dimensions, only the simplest forms of linear prediction have received much attention in image coding [22]. One-dimensional predictor schemes make predictions based only on the value of the previous pixel on the current line, as shown in the following equation:

Z = X − D   (2.10)

where Z denotes the output of the predictor, X is the current pixel and D is the adjacent (previous) pixel.

The two-dimensional prediction scheme is based on the values of previous pixels in a left-to-right, top-to-bottom scan of the image. In Figure 2.6, X denotes the current pixel and A, B, C and D are the adjacent pixels. If the current pixel is the top leftmost one, then there is no prediction, since there are no adjacent pixels and no prior information for prediction [21].

Figure 2.6 Neighbor pixels used for prediction.

Z = (B + D) / 2   (2.11)

Then the residual error E, which is the difference between the actual value of the current pixel X and the predicted value Z, is given by the following equation:

E = X − Z   (2.12)

Discrete Cosine Transform (DCT)
The Discrete Cosine Transform (DCT) was first proposed by N. Ahmed [57] and has become more and more important in recent years [55]. The DCT is similar to the discrete Fourier transform in that it transforms a signal or image from the spatial domain to the frequency domain, as shown in Figure 2.7.

Figure 2.7 Image transformation from the spatial domain to the frequency domain [55].

The DCT represents a finite series of data points as a sum of harmonic cosine functions. DCT representations have been used for numerous data processing applications, such as lossy coding of audio signals and images. It has been found that a small number of DCT coefficients is capable of representing a large sequence of raw data. The transform has been widely used in signal processing of image data, especially in coding for compression, because of its near-optimal performance. The DCT helps to separate the image into spectral sub-bands of differing importance with respect to the image's visual quality [55]. The use of cosine rather than sine functions is much more efficient in image compression, since the cosine function is capable of representing edges and boundaries; as described below, fewer coefficients are needed to approximate and represent a typical signal.

The two-dimensional DCT is useful in the analysis of two-dimensional (2D) signals such as images. The 2D DCT is separable in the two dimensions and is computed in a simple way: the 1D DCT is applied to each row of the image s, and then to each column of the result. Thus, the transform of the image s(x, y) is given by [55]:

S(u, v) = (2/√(n m)) C(u) C(v) Σ_{x=0}^{n−1} Σ_{y=0}^{m−1} s(x, y) cos[(2x + 1)uπ/(2n)] cos[(2y + 1)vπ/(2m)]   (2.13)

where C(u), C(v) = 1/√2 for u, v = 0 and 1 otherwise, and (n × m) is the size of the block on which the DCT is applied. Equation (2.13) calculates one entry (u, v) of the transformed image from the pixel values of the original image matrix [55], where u and v are the sample indices in the frequency domain.

The DCT is widely used for image compression in both encoding and decoding. In the encoding process, the image is divided into N × N blocks and the DCT is then performed on each block. In practice, JPEG compression uses the DCT with 8 × 8 blocks. Quantization is applied to the DCT coefficients to compress the blocks, so the choice of quantization method affects the compression value. The compressed blocks are saved in storage memory with a significant space reduction. In the decoding process, the compressed blocks are loaded and de-quantized by reversing the quantization process; the inverse DCT is then applied to each block and the blocks are merged into an image that is similar to the original one [56].
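The sketch below, assuming SciPy is available, applies the 2D DCT to a single 8 × 8 block, applies a uniform quantization step (a simplified stand-in for the JPEG quantization tables), and reconstructs the block with the inverse DCT; the step size and the test block are illustrative assumptions, not values from this thesis.

```python
# 8x8 block DCT, uniform quantization, and inverse DCT (JPEG-like, simplified).
import numpy as np
from scipy.fft import dctn, idctn

def dct2(block):
    return dctn(block, type=2, norm='ortho')     # separable 2D DCT-II

def idct2(coeffs):
    return idctn(coeffs, type=2, norm='ortho')   # inverse 2D DCT

block = np.arange(64, dtype=float).reshape(8, 8)      # toy smooth 8x8 block
step = 16.0                                           # uniform quantization step (illustrative)

coeffs = dct2(block)
quantized = np.rint(coeffs / step)                    # quantization: most entries become 0
dequantized = quantized * step                        # decoder reverses the scaling
reconstructed = idct2(dequantized)

print("nonzero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("max reconstruction error:", float(np.abs(reconstructed - block).max()))
```

For a smooth block, only a few low-frequency coefficients survive quantization, which is where the compression comes from; coarser quantization steps increase the compression at the cost of a larger reconstruction error.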
Vector Quantization (VQ)
Vector Quantization (VQ) is a lossy compression method. It uses a codebook containing pixel patterns with a corresponding index for each of them. The main idea of VQ is to represent arrays of pixels by an index into the codebook. In this way, compression is achieved because the size of the index is usually a small fraction of that of the block of pixels. The image is subdivided into blocks, typically of a fixed size of n × n pixels. For each block, the nearest codebook entry under the distance metric is found and the ordinal number of the entry is transmitted. On reconstruction, the same codebook is used and a simple look-up operation is performed to produce the reconstructed image [53].

The main advantages of VQ are the simplicity of its idea and the possibility of an efficient implementation of the decoder. Moreover, VQ is theoretically an efficient method for image compression, and superior performance is gained for large vectors. However, in order to use large vectors, VQ becomes complex and requires many computational resources (e.g. memory, computations per pixel) to efficiently construct and search a codebook. More research on reducing this complexity has to be done in order to make VQ a practical image compression method with superior quality [50].

Learning Vector Quantization is a supervised learning algorithm which can be used to modify the codebook if a set of labeled training data is available [13]. For an input vector x, let the nearest code-word index be i and let j be the class label for the input vector. The learning-rate parameter is initialized to 0.1 and then decreases monotonically with each iteration. After a suitable number of iterations, the codebook typically converges and the training is terminated. The main drawback of conventional VQ coding is the computational load needed during the encoding stage, as an exhaustive search through the entire codebook is required for each input vector. An alternative approach is to cascade a number of encoders in a hierarchical manner that trades off accuracy and speed of encoding [14, 54].

1.6 Medical Image Types
Medical imaging techniques allow doctors and researchers to view activities or problems within the human body without invasive neurosurgery. There are a number of accepted and safe imaging techniques such as X-rays, magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET) and electroencephalography (EEG) [23-24].

1.7 Conclusion
In this chapter, many compression techniques used for medical images have been discussed. There are several types of medical images, such as X-rays, magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET) and electroencephalography (EEG). Image compression has two categories, lossy and lossless compression. Lossless compression includes Run Length Encoding, Huffman Encoding, Lempel-Ziv-Welch, and Area Coding. Lossy compression includes Predictive Coding (PC), the Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT), and Vector Quantization (VQ). Several existing compression techniques already offer methods that are faster, more accurate, more memory efficient and simpler to use. These methods will be discussed in the next chapter.