# Redshift Column Compression Settings

This is part 4 of a series on Amazon Redshift maintenance.

## What is column compression

Because Amazon Redshift is a columnar database, it can pick a more effective compression scheme for each column than a row-store database can. Contiguous blocks on disk hold values from a single column, and those values tend to share similar patterns and distributions, which compress well.

## Why column compression is important

Column compression can significantly affect query performance. Compressed columns take up less space on disk, so queries that touch them require fewer disk I/O operations.

Redshift picks a column compression encoding when you first copy data into an empty table. As data is added and deleted from that table, the optimum compression algorithm might change.
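For example, the automatic analysis happens during the first `COPY` into an empty table when `COMPUPDATE` is on. A sketch of such a load (the bucket path and IAM role ARN below are placeholders you would replace with your own):

```sql
copy table_name_here
from 's3://your-bucket/path/'                            -- placeholder S3 location
iam_role 'arn:aws:iam::123456789012:role/YourRole'       -- placeholder IAM role
compupdate on;  -- let Redshift sample the data and choose column encodings
```

Subsequent loads into a non-empty table do not revisit that choice, which is why the encodings can drift away from optimal as the data changes.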

## SQL to see current column encoding

```sql
select "column", type, encoding
from pg_table_def
where tablename = 'table_name_here';
```
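Note that `pg_table_def` only returns tables in schemas on your `search_path`. If the query comes back empty, add the table's schema first (`my_schema` below is a placeholder):

```sql
set search_path to '$user', public, my_schema;
```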


## What Redshift recommends

If you suspect that the right column compression encoding might differ from what's currently in use, you can ask Redshift to analyze the columns and report its suggestions:

```sql
analyze compression table_name_here;
```


which will output:

| Table           | Column         | Encoding | Est_reduction_pct |
|-----------------|----------------|----------|-------------------|
| table_name_here | listid         | delta32k | 0.00              |
| table_name_here | sellerid       | delta32k | 0.00              |
| table_name_here | eventid        | delta32k | 0.00              |
| table_name_here | dateid         | lzo      | 96.94             |
| table_name_here | numtickets     | delta    | 0.00              |
| table_name_here | priceperticket | lzo      | 15.74             |
| table_name_here | totalprice     | lzo      | 18.06             |
| table_name_here | listtime       | lzo      | 20.98             |

The `analyze compression` command does not change the compression encoding; it is purely a reporting tool. To actually change the encoding of a column, you have to recreate the table with the desired encodings and deep-copy the contents of the existing table into it.
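A sketch of that deep copy, using the columns from the sample output above (the column types are assumptions; match them to your actual schema):

```sql
-- 1. Recreate the table with the recommended encodings.
create table table_name_here_new (
    listid         integer       encode delta32k,
    sellerid       integer       encode delta32k,
    eventid        integer       encode delta32k,
    dateid         smallint      encode lzo,
    numtickets     smallint      encode delta,
    priceperticket decimal(8,2)  encode lzo,
    totalprice     decimal(8,2)  encode lzo,
    listtime       timestamp     encode lzo
);

-- 2. Deep-copy the existing rows.
insert into table_name_here_new
select * from table_name_here;

-- 3. Swap the tables once the copy is verified.
alter table table_name_here rename to table_name_here_old;
alter table table_name_here_new rename to table_name_here;
drop table table_name_here_old;
```

Because the copy duplicates the table's data, make sure the cluster has enough free disk space before running it, and avoid writing to the old table while the swap is in progress.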
