OpenCores
URL https://opencores.org/ocsvn/jpegencode/jpegencode/trunk

Subversion Repositories jpegencode

jpegencode/trunk/document/JPEG Encoder.doc - Diff between revs 5 and 6


JPEG Encoder IP Core

This document describes the JPEG Encoder IP Core provided at www.opencores.org.  The core is written in Verilog and is designed to be portable to any target device.  The core does not perform chroma subsampling, so the resulting JPEG image uses 4:4:4 sampling.

Inputs

The top level module is jpeg_top, in the file jpeg_top.v.  

The inputs to the core are kept to a minimum.  The first three inputs are the clock, enable, and reset lines.  One global clock is used throughout the design, and all of the registers are synchronized to the rising edge of this clock.  The enable signal should be brought high when the data for the first pixel of the image is ready, and it must stay high while data is being input to the core.  Each 8x8 block of data must be input to the core on 64 consecutive clock cycles.  After the 64 pixels of a block have been input, the enable signal must stay high for at least 33 more clock cycles, and no new data should be presented during this delay.  The enable signal is then brought low for one clock cycle and brought high again as the next 8x8 block of data is input to the core.  This pattern repeats for each 8x8 block of the image.
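
As an illustration of this timing, the following Python sketch models the per-block drive schedule described above.  Only the 64-cycle data phase, the 33-cycle gap with enable held high, the one-cycle enable drop, and the end_of_file_signal behaviour (described in the paragraphs below) come from this document; everything else, including the function name, is illustrative.

def drive_schedule(blocks, gap_cycles=33):
    """Yield (enable, data, end_of_file_signal) values, one tuple per clock cycle.
    `blocks` is a list of 8x8 blocks, each a list of 64 pre-packed 24-bit words."""
    for block_index, block in enumerate(blocks):
        last_block = block_index == len(blocks) - 1
        # 64 consecutive cycles of pixel data with enable high.
        for cycle, pixel in enumerate(block):
            eof = last_block and cycle == 0  # end_of_file_signal: first cycle of the last block
            yield (1, pixel, int(eof))
        # Enable stays high for at least 33 more cycles; no new data is presented.
        for _ in range(gap_cycles):
            yield (1, None, 0)
        # Enable drops for one clock cycle before the next block begins.
        if not last_block:
            yield (0, None, 0)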

The data bus is 24 bits wide.  The red, green, and blue pixel values are input on this bus.  Each pixel value is represented in 8 bits, corresponding to a value between 0 and 255.  The pixel values can be extracted directly from a .tif file, for example.  The blue pixel value is in bits [23:16], green is in bits [15:8], and red is in bits [7:0] of the data bus.
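
A minimal sketch of that packing, assuming 8-bit red, green, and blue values; the function name is only for illustration.

def pack_pixel(red, green, blue):
    """Pack 8-bit R, G, B values into the 24-bit data bus word:
    blue in bits [23:16], green in [15:8], red in [7:0]."""
    assert all(0 <= v <= 255 for v in (red, green, blue))
    return (blue << 16) | (green << 8) | red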

The only other input is the end_of_file_signal.  This signal needs to go high on the first clock cycle of valid data of the final 8x8 block of the image.  It lets the core know that it needs to output all of the bits from this last block.  The output bitstream is a 32-bit bus, and normally, between blocks, any bits that don't fill the whole 32-bit output bus are not output.  Instead, they are held and combined with the initial bits from the next 8x8 block of the image.  On the last 8x8 block, the core outputs any remaining bits so that no bits are missing from the image.

Outputs

The JPEG bitstream is output on the signal JPEG_bitstream, a 32-bit bus.  The first 8 bits are in positions [31:24], the next 8 bits are in [23:16], and so on.  The data in JPEG_bitstream is valid when the signal data_ready is high.  data_ready is high for only one clock cycle to indicate valid data.  On the final block of data, if the last bits do not fill the 32-bit bus, the signal eof_data_partial_ready is high for one clock cycle while the extra bits are on JPEG_bitstream.  The number of extra bits is indicated by the 5-bit signal end_of_file_bitstream_count.
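
A sketch of how a consumer of these outputs might collect the scan data, sampling the signals once per clock cycle.  The signal behaviour follows the description above; the assumption that the valid bits of the final partial word sit in the most significant positions is mine, since the words fill from bit [31] downward.

def collect_bitstream(samples):
    """`samples` holds one tuple per clock cycle:
    (data_ready, eof_data_partial_ready, JPEG_bitstream, end_of_file_bitstream_count).
    Returns the collected scan data as a string of '0'/'1' characters."""
    bits = []
    for data_ready, eof_partial, word, count in samples:
        if data_ready:
            bits.append(format(word, "032b"))          # full 32-bit word, bit [31] first
        elif eof_partial:
            bits.append(format(word, "032b")[:count])  # only the top `count` bits are valid
    return "".join(bits)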


Operation of the JPEG Encoder core

Color Space Transformation

The first operation of the JPEG Encoder core is converting the red, green, and blue pixel values to their corresponding Luminance and Chrominance (Y, Cb, and Cr) values.  This operation is performed in the RGB2YCBCR module.  The operation is based on the following formulas:

Y  =  0.299 * Red + 0.587 * Green + 0.114 * Blue
Cb = -0.1687 * Red - 0.3313 * Green + 0.5 * Blue + 128
Cr =  0.5 * Red - 0.4187 * Green - 0.0813 * Blue + 128

These operations are performed with fixed-point multiplications.  All of the constant values in the above 3x3 matrix are multiplied by 2^14 (16384).  The multiplications are performed on one clock cycle, then all of the products are added together on the next clock cycle.  This is done to achieve a fast clock frequency during synthesis.  The sums are then divided by 2^14, which is implemented by discarding the 14 LSBs of the sums rather than performing an actual division.  Rounding is performed by looking at the 13th LSB and adding 1 to the shifted result if that bit is 1.
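
A fixed-point model of this stage in Python, a sketch rather than the RTL.  The scaled constants and the shift by 14 come from the description above; the exact rounding bit is an assumption (adding 2^13 before the shift gives round-half-up).

SCALE = 1 << 14  # constants are scaled by 2^14 = 16384

# Coefficients from the formulas above, pre-scaled to fixed point.
Y_COEF  = (round(0.299 * SCALE),   round(0.587 * SCALE),   round(0.114 * SCALE))
CB_COEF = (round(-0.1687 * SCALE), round(-0.3313 * SCALE), round(0.5 * SCALE))
CR_COEF = (round(0.5 * SCALE),     round(-0.4187 * SCALE), round(-0.0813 * SCALE))

def _fixed_dot(coef, rgb, offset):
    # Multiply-accumulate in fixed point, then divide by 2^14 by dropping 14 LSBs,
    # rounding via the bit just below the ones that are kept.
    acc = sum(c * v for c, v in zip(coef, rgb)) + offset * SCALE
    return (acc + (1 << 13)) >> 14

def rgb_to_ycbcr(red, green, blue):
    """Illustrative model of the RGB2YCBCR stage."""
    rgb = (red, green, blue)
    return (_fixed_dot(Y_COEF, rgb, 0),
            _fixed_dot(CB_COEF, rgb, 128),
            _fixed_dot(CR_COEF, rgb, 128))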

Discrete Cosine Transform

The next step after calculating the Y, Cb, and Cr values is performing the Discrete Cosine Transform (DCT).  This is commonly referred to as a 2D DCT.  The actual formula is the following:

DY = T * Y * inv(T)

T is the DCT matrix.  Y is the matrix of Y values for the 8x8 image block.  DY is the resultant matrix after the 2D DCT.  The DCT needs to be performed separately on the Y, Cb, and Cr values for each block.  The DCT of the Y values is performed in the y_dct module.  The DCT of the Cb and Cr values occurs in the cb_dct and cr_dct modules.  I will only describe the y_dct module here, as the cb_dct and cr_dct modules are essentially the same.

You may have noticed that I did not center the Y, Cb, and Cr values on 0 in the previous stage.  To do that, I would have subtracted 128 from the final Y value and not added 128 to the final Cb and Cr values.  To perform the DCT, the values of Y, Cb, and Cr are supposed to be centered around 0, in the range -128 to 127.  However, I perform a few tricks in the DCT module that allow me to keep the Y, Cb, and Cr values in the range 0 to 255.  I do this because it makes the implementation of the DCT easier.

The DCT matrix, which I call T, is multiplied by the constant value 16384, or 2^14.  Every row of the T matrix except the first has entries that sum to zero.  Because rows 2-8 sum to zero, it does not matter that I have not centered the Y values on 0: when the T rows are multiplied by the Y columns of data, the extra 128 in each of the Y values is cancelled out by these zero-sum rows.  The first row, however, does not sum to zero - it has a constant value of .3536, or 5793 after it is multiplied by 2^14.  Since I have not centered Y on 0, the extra 128 in each value results in an extra 128*8*5793 = 5932032 in the final sum.  So, to make up for the fact that I have not centered the Y values on 0, I subtract 5932032 from the result of the first row of T multiplied by each of the 8 columns of the Y matrix.  This means I perform a total of 8 subtractions for an 8x8 matrix of Y values; if I had subtracted 128 from each Y value before the DCT module, I would have needed to perform 64 subtractions.
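
The following Python sketch illustrates the first matrix product, T * Y, on uncentered Y values with the row-0 correction described above.  It is a behavioural model under the stated assumptions (a 2^14-scaled DCT-II matrix), not the pipelined RTL.

import math

N = 8
SCALE = 1 << 14  # T entries are scaled by 2^14

# 8x8 DCT matrix T, scaled by 2^14: row 0 is the constant .3536 (5793 after scaling);
# before rounding, every other row sums to zero.
T = [[round(SCALE * math.sqrt((1 if k == 0 else 2) / N)
            * math.cos((2 * n + 1) * k * math.pi / (2 * N)))
      for n in range(N)] for k in range(N)]

# Correction for leaving the Y values in 0-255 instead of centering them on 0:
# only row 0 sees the extra 128s, contributing 128 * 8 * 5793 = 5932032 per column.
ROW0_CORRECTION = 128 * 8 * 5793

def t_times_y(y_block):
    """First product of the 2D DCT, T * Y, on uncentered 0-255 Y values."""
    out = [[0] * N for _ in range(N)]
    for k in range(N):            # row of T
        for col in range(N):      # column of Y
            acc = sum(T[k][n] * y_block[n][col] for n in range(N))
            if k == 0:
                acc -= ROW0_CORRECTION  # 8 subtractions per 8x8 block
            out[k][col] = acc
    return out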

After multiplying the T matrix by the Y matrix, the resulting matrix is multiplied by the inverse of the T matrix.  This operation is coded with the goal of achieving the highest possible clock frequency for the design.  As a result, the code may look overly confusing, but I tried many different schemes before settling on the one used in the code.  I would simulate the code, verify it worked, then synthesize it to see what clock speed I could achieve, and I repeated this process many times until I reached about 300 MHz as the best clock speed.  I targeted a Xilinx Virtex 5 FPGA to achieve this speed.

Quantization

The next step is fairly straightforward.  The y_quantizer module comes next for the Y values; the Cb and Cr values go through the cb_quantizer and cr_quantizer modules.  The 64 quantization values are stored in the parameters Q1_1 through Q8_8.  I used final values of 1 for my core, but you can change these values to any quantization you want.  I simulated different quantization values during testing and settled on values of 1, corresponding to Q = 100, because this stressed my code the most and I was trying to break the core in my final testing.  The core did not break; it worked, and I left the quantization values as they were.

As in previous stages, I avoid performing an actual division, as this would be an unnecessary and burdensome calculation.  I create additional parameters QQ1_1 through QQ8_8, where each value is 4096 divided by the corresponding Q value; for example, QQ1_1 = 4096 / Q1_1.  This division is performed when the code is compiled, so it does not require a divider in the FPGA.

The input values are multiplied by their corresponding parameter values, QQ1_1 through QQ8_8.  Then, the bottom 12 bits are chopped off the product.  This gets rid of the 4096, or 2^12, that was used to create the parameters QQ1_1 through QQ8_8.  The final values are rounded based on the value in the 11th LSB.
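
A sketch of the quantizer arithmetic in Python.  The 4096 scaling and the 12-bit shift come from the description; the integer division in the reciprocal table mirrors the compile-time parameter calculation, and the exact rounding bit is an assumption (adding 2^11 before the shift gives round-half-up).

PRECISION = 1 << 12  # reciprocals are scaled by 4096 = 2^12

def make_reciprocals(q_table):
    """Precompute the QQ values (4096 / Q) for an 8x8 quantization table."""
    return [[PRECISION // q for q in row] for row in q_table]

def quantize_block(block, qq_table):
    """Multiply each coefficient by its scaled reciprocal, then drop the bottom
    12 bits, rounding on the bit just below the kept bits."""
    out = [[0] * 8 for _ in range(8)]
    for r in range(8):
        for c in range(8):
            product = block[r][c] * qq_table[r][c]
            out[r][c] = (product + (1 << 11)) >> 12
    return out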

Huffman Encoding

The y_huff module performs the Huffman encoding of the quantized Y values coming out of the y_quantizer module.  The cb_huff and cr_huff modules perform the Huffman encoding for the Cb and Cr values.  The yd_q_h module combines the y_dct, y_quantizer, and y_huff modules.  The values from y_quantizer are transposed (rows swapped with columns) as they are input to the y_huff module.  This is done so that the inputs of each 8x8 block to the top module, jpeg_top, can be written in the traditional left-to-right order.  Performing the DCT requires matrix multiplication, in which the rows of the T matrix are multiplied by the columns of the Y matrix, so the Y values would otherwise need to be entered in column order, from top to bottom.  Instead, the Y values can be entered in the traditional row order, from left to right, and by transposing the values as they pass between the y_quantizer and y_huff modules, the proper organization of Y values is regained.

The Huffman table can be changed by editing the values in this module; the specific lines of code containing the Huffman table are lines 1407-1930.  However, the core does not allow the Huffman table to be changed on the fly; you will have to recompile the code to change it.  You should define a full Huffman table, even if you have a small image file and do not expect to use all of the Huffman codes.  The calculations in this core may differ slightly from your own, and if you use a Huffman table without all of the possible values defined, the core may need a Huffman code that is not stored in the RAM, and the result will be an incorrect output bitstream.

The DC component is calculated first, then the AC components are calculated in zigzag order.  The output from the y_huff module is a 32-bit signal containing the Huffman codes and amplitudes for the Y values.
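
The zigzag scan itself is the standard JPEG ordering; a short sketch of generating that coefficient order for an 8x8 block, purely for illustration:

def zigzag_order(n=8):
    """Return the (row, col) visit order of the standard JPEG zigzag scan."""
    order = []
    for s in range(2 * n - 1):  # walk each anti-diagonal
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diag if s % 2 else reversed(diag))
    return order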

Creating the Output JPEG Bitstream

The outputs from the y_huff, cb_huff, and cr_huff modules are combined in the pre_fifo module, along with the RGB2YCBCR module.  The pre_fifo module groups those modules together but does not add any additional logic or functions.  The next module in the hierarchy is the fifo_out module, which combines the pre_fifo module with three sync_fifo_32 modules.

The sync_fifo_32 modules are necessary to hold the outputs from the y_huff, cb_huff, and cr_huff modules.  Each sync_fifo_32 module is 16 registers deep.  The depth of the FIFOs should be increased if the quantization table is small, which could cause the FIFOs to overflow.  I did not see an overflow with any of the images I encoded, but if one block produces more than 512 (32*16) bits of data on the Cb or Cr path, the data will overflow the FIFO.  The Y block FIFO is read more often than the Cb and Cr FIFOs, so it will not overflow.  The output JPEG bitstream combines the Y, Cb, and Cr Huffman codes: for each 8x8 block of the image it starts with the Y Huffman codes, followed by the Cb Huffman codes, and finally the Cr Huffman codes.  Then the Huffman codes from the next 8x8 block of the image are put into the bitstream.

After the fifo_out module comes the ff_checker module.  The ff_checker module looks for any 'FF' bytes in the bitstream that occur on byte boundaries.  When an 'FF' is found, a '00' is inserted into the bitstream immediately after the 'FF', and the rest of the bitstream follows.  The ff_checker module uses a sync_fifo module to store data while it checks the bytes for 'FF's.
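
A byte-level sketch of the stuffing rule, for illustration; the actual module operates on the bitstream inside the core.

def stuff_ff_bytes(scan_bytes):
    """Insert a 0x00 byte after every 0xFF byte found in the scan data."""
    out = bytearray()
    for b in scan_bytes:
        out.append(b)
        if b == 0xFF:
            out.append(0x00)
    return bytes(out)

# Example: 0x12 0xFF 0x34 becomes 0x12 0xFF 0x00 0x34.
assert stuff_ff_bytes(b"\x12\xff\x34") == b"\x12\xff\x00\x34"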

The top level module of the JPEG Encoder core is the jpeg_out module.  This module combines the ff_checker module and the fifo_out module. 

Testbench

The testbench file, jpeg_top_TB.v, inputs the data from the image 'ja.tif' into the JPEG Encoder core.  I used a Matlab program to extract the red, green, and blue pixel values directly from the 'ja.tif' file and write them in the correct testbench format.  This testbench was used to simulate the core and to verify its correct operation.  The output from the core during simulation was the JPEG scan data bitstream, which was used to create the JPEG image file 'ja.jpg'.  The output from the core is only the scan data portion of the JPEG file.  The header was copied from a separate JPEG image that also had dimensions of 96x96 pixels, and I used the Huffman and Quantization tables from that separate image to create ja.jpg.  These Huffman and Quantization tables are also the ones I used in the code of this core; otherwise, the resulting bitstream would not correspond to the JPEG header I used.  Also, the end of the JPEG image needs the end of image marker, 'FFD9'.
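
A sketch of assembling the final file as described in this paragraph.  The header_bytes input (the copied header with matching Huffman and Quantization tables) and the function name are assumptions for illustration; the scan data is the byte stream collected from the core, and FFD9 is appended at the end.

def assemble_jpeg(header_bytes, scan_data, out_path="ja.jpg"):
    """Write a viewable JPEG: copied header, the core's scan data, then FFD9."""
    with open(out_path, "wb") as f:
        f.write(header_bytes)   # header copied from a 96x96 reference image
        f.write(scan_data)      # byte-stuffed scan data produced by the core
        f.write(b"\xff\xd9")    # end of image (EOI) marker
    return out_path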

David Lundgren
davidklun@gmail.com

