
Blob Field With Compression Data Is Not Valid


by braserplumre1970 2020. 1. 31. 21:24


14.12.5 How Compression Works for InnoDB Tables

This section describes some internal implementation details about compression for InnoDB tables. The information presented here may be helpful in tuning for performance, but is not necessary to know for basic use of compression.

Compression Algorithms

Some operating systems implement compression at the file system level. Files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads to fragmentation. Every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make this compression technique unsuitable for use in an update-intensive database system.

InnoDB implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm. This compression algorithm is mature, robust, and efficient in both CPU utilization and in reduction of data size.


Because BLOBs and related data are always stored together, there is no possibility of a mismatch. BLOBs are transferred from one server to another along with the usual data transfer process, so no manual effort is needed to keep them in sync, and BLOB data is backed up along with routine SQL Server data. The following code creates a table called 'emp' in an Oracle database with three fields: id, name, and photo. The photo field is a BLOB field into which you can insert an image as the value. The image may be large, even gigabytes in size, since a BLOB field in Oracle can store up to 4GB.
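The post does not include the code itself; a minimal sketch of what it describes might look like this (the column types and sizes are assumptions):

  CREATE TABLE emp (
      id    NUMBER(10)     PRIMARY KEY,   -- numeric employee id
      name  VARCHAR2(100),                -- employee name
      photo BLOB                          -- binary image data, up to 4GB
  );

  -- Insert a row with an empty BLOB locator; the actual image bytes are
  -- then streamed into the locator from application code (for example
  -- via JDBC setBinaryStream or PL/SQL DBMS_LOB.LOADFROMFILE).
  INSERT INTO emp (id, name, photo) VALUES (1, 'Alice', EMPTY_BLOB());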

The algorithm is "lossless", so that the original uncompressed data can always be reconstructed from the compressed form. LZ77 compression works by finding sequences of data that are repeated within the data to be compressed. The patterns of values in your data determine how well it compresses, but typical user data often compresses by 50% or more.


Note
Prior to MySQL 5.5.62, InnoDB supports the zlib library up to version 1.2.3. In MySQL 5.5.62 and later, InnoDB supports the zlib library up to version 1.2.11.

Unlike compression performed by an application, or compression features of some other database management systems, InnoDB compression applies both to user data and to indexes. In many cases, indexes can constitute 40-50% or more of the total database size, so this difference is significant. When compression is working well for a data set, the size of the InnoDB data files (the .ibd files) is 25% to 50% of the uncompressed size or possibly smaller.
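As an illustration (the table and column names here are hypothetical), compression is enabled per table with ROW_FORMAT=COMPRESSED, and KEY_BLOCK_SIZE selects the compressed page size; the resulting .ibd file on disk can then be compared against an uncompressed copy of the same data:

  SET GLOBAL innodb_file_per_table = 1;
  SET GLOBAL innodb_file_format = 'Barracuda';

  CREATE TABLE articles (
      id   INT NOT NULL PRIMARY KEY,
      body TEXT
  ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;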

Depending on the workload, this smaller database can in turn lead to a reduction in I/O, and an increase in throughput, at a modest cost in terms of increased CPU utilization.

InnoDB Data Storage and Compression

All user data in InnoDB tables is stored in pages comprising a B-tree index (the clustered index). In some other database systems, this type of index is called an "index-organized table". Each row in the index node contains the values of the (user-specified or system-generated) primary key and all the other columns of the table.

Secondary indexes in InnoDB tables are also B-trees, containing pairs of values: the index key and a pointer to a row in the clustered index. The pointer is in fact the value of the primary key of the table, which is used to access the clustered index if columns other than the index key and primary key are required.
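A short sketch of this layout (hypothetical table): the secondary index on name stores (name, id) pairs, and a query needing other columns uses the stored primary key value to probe the clustered index:

  CREATE TABLE emp_idx_demo (
      id   INT NOT NULL PRIMARY KEY,   -- clustered index key
      name VARCHAR(50),
      dept VARCHAR(20),
      KEY idx_name (name)              -- secondary index: stores (name, id)
  ) ENGINE=InnoDB;

  -- Matching entries are found in idx_name, then the stored primary key
  -- values are used to fetch dept from the clustered index.
  SELECT dept FROM emp_idx_demo WHERE name = 'Smith';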

Secondary index records must always fit on a single B-tree page.

The compression of B-tree nodes (of both clustered and secondary indexes) is handled differently from compression of overflow pages used to store long VARCHAR, BLOB, or TEXT columns, as explained in the following sections.

Compression of B-Tree Pages

Because they are frequently updated, B-tree pages require special treatment. It is important to minimize the number of times B-tree nodes are split, as well as to minimize the need to uncompress and recompress their content.

One technique InnoDB uses is to maintain some system information in the B-tree node in uncompressed form, thus facilitating certain in-place updates. For example, this allows rows to be delete-marked and deleted without any compression operation.

In addition, InnoDB attempts to avoid unnecessary uncompression and recompression of index pages when they are changed. Within each B-tree page, the system keeps an uncompressed "modification log" to record changes made to the page.

Updates and inserts of small records may be written to this modification log without requiring the entire page to be completely reconstructed. When the space for the modification log runs out, InnoDB uncompresses the page, applies the changes, and recompresses the page. If recompression fails (a situation known as a compression failure), the B-tree nodes are split and the process is repeated until the update or insert succeeds.
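Compression failures can be observed through the compression statistics in INFORMATION_SCHEMA; a high ratio of failed to attempted compression operations suggests the compressed page size is too small for the workload:

  SELECT page_size,
         compress_ops,                    -- attempted page compressions
         compress_ops_ok,                 -- compressions that succeeded
         compress_ops - compress_ops_ok
             AS compress_failures         -- failures trigger page splits
  FROM   INFORMATION_SCHEMA.INNODB_CMP;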

Generally, InnoDB requires that each B-tree page can accommodate at least two records. For compressed tables, this requirement has been relaxed. Leaf pages of B-tree nodes (whether of the primary key or secondary indexes) only need to accommodate one record, but that record must fit, in uncompressed form, in the per-page modification log. Starting with InnoDB storage engine version 1.0.2, and if innodb_strict_mode is ON, the InnoDB storage engine checks the maximum row size during CREATE TABLE or CREATE INDEX. If the row does not fit, the following error message is issued: ERROR HY000: Too big row.

If you create a table when innodb_strict_mode is OFF, and a subsequent INSERT or UPDATE statement attempts to create an index entry that does not fit in the size of the compressed page, the operation fails with ERROR 42000: Row size too large. (This error message does not name the index for which the record is too large, or mention the length of the index record or the maximum record size on that particular index page.) To solve this problem, rebuild the table with ALTER TABLE and select a larger compressed page size (KEY_BLOCK_SIZE), shorten any column prefix indexes, or disable compression entirely with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPACT.
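For example (the table name is hypothetical), the row-size check can be enabled at DDL time, and an over-tight table can later be rebuilt with a larger compressed page size or with compression disabled:

  SET SESSION innodb_strict_mode = ON;  -- report row-size problems at DDL time

  -- Rebuild with a larger compressed page size...
  ALTER TABLE articles ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16;

  -- ...or disable compression entirely.
  ALTER TABLE articles ROW_FORMAT=DYNAMIC;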

Compressing BLOB, VARCHAR, and TEXT Columns

In an InnoDB table, BLOB, VARCHAR, and TEXT columns that are not part of the primary key may be stored on separately allocated overflow pages. We refer to these columns as off-page columns. Their values are stored on singly-linked lists of overflow pages.

For tables created in ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, the values of BLOB, TEXT, or VARCHAR columns may be stored fully off-page, depending on their length and the length of the entire row.

For columns that are stored off-page, the clustered index record only contains 20-byte pointers to the overflow pages, one per column. Whether any columns are stored off-page depends on the page size and the total size of the row. When the row is too long to fit entirely within the page of the clustered index, MySQL chooses the longest columns for off-page storage until the row fits on the clustered index page. As noted above, if a row does not fit by itself on a compressed page, an error occurs.

Note
For tables created in ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, TEXT and BLOB columns that are less than or equal to 40 bytes are always stored in-line.

Tables created in older versions of InnoDB use the Antelope file format, which supports only ROW_FORMAT=REDUNDANT and ROW_FORMAT=COMPACT. In these formats, MySQL stores the first 768 bytes of BLOB, VARCHAR, and TEXT columns in the clustered index record along with the primary key. The 768-byte prefix is followed by a 20-byte pointer to the overflow pages that contain the rest of the column value.

When a table is in COMPRESSED format, all data written to overflow pages is compressed "as is"; that is, InnoDB applies the zlib compression algorithm to the entire data item. Other than the data, compressed overflow pages contain an uncompressed header and trailer comprising a page checksum and a link to the next overflow page, among other things. Therefore, very significant storage savings can be obtained for longer BLOB, TEXT, or VARCHAR columns if the data is highly compressible, as is often the case with text data.
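A sketch of the file-format distinction (the table name is hypothetical): the newer Barracuda file format must be enabled for the DYNAMIC and COMPRESSED row formats, which store long column values fully off-page instead of keeping a 768-byte prefix in the clustered index record:

  SET GLOBAL innodb_file_format = 'Barracuda';
  SET GLOBAL innodb_file_per_table = 1;

  CREATE TABLE documents (
      id   INT NOT NULL PRIMARY KEY,
      body LONGTEXT                 -- long values stored fully off-page
  ) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;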

Image data, such as JPEG, is typically already compressed and so does not benefit much from being stored in a compressed table; the double compression can waste CPU cycles for little or no space savings.

The overflow pages are of the same size as other pages. A row containing ten columns stored off-page occupies ten overflow pages, even if the total length of the columns is only 8K bytes. In an uncompressed table, ten uncompressed overflow pages occupy 160K bytes. In a compressed table with an 8K page size, they occupy only 80K bytes. Thus, it is often more efficient to use the compressed table format for tables with long column values.

Using a 16K compressed page size can reduce storage and I/O costs for BLOB, VARCHAR, or TEXT columns, because such data often compresses well, and might therefore require fewer overflow pages, even though the B-tree nodes themselves take as many pages as in the uncompressed form.
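A minimal sketch of that configuration (hypothetical table): with KEY_BLOCK_SIZE=16 the B-tree pages are not made any smaller, but overflow pages holding compressible long columns still shrink:

  -- Requires innodb_file_format='Barracuda' and innodb_file_per_table=1,
  -- as in the earlier example.
  CREATE TABLE logs (
      id      INT NOT NULL PRIMARY KEY,
      payload TEXT                  -- compressible long column
  ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16;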

Compression and the InnoDB Buffer Pool

In a compressed InnoDB table, every compressed page (whether 1K, 2K, 4K, or 8K) corresponds to an uncompressed page of 16K bytes. To access the data in a page, InnoDB reads the compressed page from disk if it is not already in the buffer pool, then uncompresses the page to its original form. This section describes how InnoDB manages the buffer pool with respect to pages of compressed tables.

To minimize I/O and to reduce the need to uncompress a page, at times the buffer pool contains both the compressed and uncompressed form of a database page. To make room for other required database pages, InnoDB may evict from the buffer pool an uncompressed page, while leaving the compressed page in memory. Or, if a page has not been accessed in a while, the compressed form of the page might be written to disk, to free space for other data. Thus, at any given time, the buffer pool might contain both the compressed and uncompressed forms of the page, or only the compressed form of the page, or neither.

InnoDB keeps track of which pages to keep in memory and which to evict using a least-recently-used list, so that hot (frequently accessed) data tends to stay in memory.

When compressed tables are accessed, MySQL uses an adaptive LRU algorithm to achieve an appropriate balance of compressed and uncompressed pages in memory. This adaptive algorithm is sensitive to whether the system is running in an I/O-bound or CPU-bound manner. The goal is to avoid spending too much processing time uncompressing pages when the CPU is busy, and to avoid doing excess I/O when the CPU has spare cycles that can be used for uncompressing compressed pages (that may already be in memory). When the system is I/O-bound, the algorithm prefers to evict the uncompressed copy of a page rather than both copies, to make more room for other disk pages to become memory resident. When the system is CPU-bound, MySQL prefers to evict both the compressed and uncompressed page, so that more memory can be used for "hot" pages, reducing the need to uncompress data that is in memory only in compressed form.
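The amount of buffer pool memory occupied by compressed pages can be inspected through the buddy-allocator statistics in INFORMATION_SCHEMA:

  SELECT page_size,
         pages_used,    -- compressed pages of this size in the buffer pool
         pages_free     -- freed blocks available for reuse
  FROM   INFORMATION_SCHEMA.INNODB_CMPMEM;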

Compression and the InnoDB Redo Log Files

Before a compressed page is written to a data file, MySQL writes a copy of the page to the redo log (if it has been recompressed since the last time it was written to the database). This is done to ensure that redo logs are usable for crash recovery, even in the unlikely case that the zlib library is upgraded and that change introduces a compatibility problem with the compressed data. Therefore, some increase in the size of log files, or a need for more frequent checkpoints, can be expected when using compression. The amount of increase in the log file size or checkpoint frequency depends on the number of times compressed pages are modified in a way that requires reorganization and recompression.

Note that compressed tables use a different file format for the redo log and the per-table tablespaces than in MySQL 5.1 and earlier. The MySQL Enterprise Backup product supports this latest Barracuda file format for compressed InnoDB tables.
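Given this extra redo volume, it may help to watch redo generation and size the log files accordingly; a rough check using standard variables:

  SHOW VARIABLES LIKE 'innodb_log_file_size';       -- size of each redo log file
  SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';  -- bytes written to the redo log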
