Hi,
I have a table of around 500 GB. It contains around 30M rows, but the rows themselves are large.
I do not have enough free disk space to copy the whole table, and I also need to do this live, or at least within a window of only a few hours.
I already deleted data from that table (in one column), but the table size didn't go down. I suspect an OPTIMIZE needs to run, but I don't think that runs without blocking.
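For reference, this is roughly the rebuild I think would be needed to actually give the space back (just a sketch; big_table is a placeholder name, and the space only goes back to the filesystem when innodb_file_per_table is on):

SHOW VARIABLES LIKE 'innodb_file_per_table';

-- OPTIMIZE is mapped to a full table rebuild on InnoDB and needs roughly
-- the table size in free disk space for the temporary copy, which I don't have
OPTIMIZE TABLE big_table;
-- equivalent rebuild, same space requirement:
ALTER TABLE big_table ENGINE=InnoDB;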
I have the following ideas:
- Delete the data in one column, then run OPTIMIZE to reduce the table size, then hope that the conversion to Barracuda doesn't take too long (see sketch 1 below)
-> Issue: absolutely unclear whether this finishes in time, and the table size didn't go down when I deleted the data in one column
- Delete data, then create a new table with compression and move the data over in chunks (see sketch 2 below)
-> Issue: the table size is not going down
- Partition the table, then optimize the small partitions one by one (see sketch 3 below)
-> Issue: I don't think this can run in parallel
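Sketch 1, for the first idea (the column name payload is a placeholder; ROW_FORMAT=COMPRESSED needs innodb_file_per_table=ON and, on 5.5/5.6, innodb_file_format=Barracuda):

-- clear the big column (in reality I would do this in chunks, not one statement)
UPDATE big_table SET payload = NULL;
-- rebuild to reclaim the freed space
OPTIMIZE TABLE big_table;
-- convert to the compressed Barracuda row format
ALTER TABLE big_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;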
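Sketch 2, for the second idea (id is a placeholder for the primary key; the old .ibd file never shrinks while I delete from it, so this only pays off once the old table can be dropped at the end):

CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- copy one id range at a time, then continue with the next range
INSERT INTO big_table_new SELECT * FROM big_table WHERE id BETWEEN 1 AND 100000;
DELETE FROM big_table WHERE id BETWEEN 1 AND 100000;

-- once everything is moved over:
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
DROP TABLE big_table_old;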
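Sketch 3, for the partition idea (range partitioning on a placeholder integer key id; as far as I know the initial PARTITION BY itself copies the whole table once, which is the same space and time problem):

ALTER TABLE big_table
  PARTITION BY RANGE (id) (
    PARTITION p0 VALUES LESS THAN (10000000),
    PARTITION p1 VALUES LESS THAN (20000000),
    PARTITION p2 VALUES LESS THAN (30000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );

-- afterwards each partition could be rebuilt on its own, one after another
ALTER TABLE big_table REBUILD PARTITION p0;
ALTER TABLE big_table REBUILD PARTITION p1;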
Does anyone have an idea what to do with this huge table?