
Re: Late mate column store error while copying table


Hi Pavan

 

I saw this thread and was wondering where you got the information that there are 150 GB of free memory available.

The OOM trace excerpt you posted shows that the allocation failed when the system was using 292.54 GB in total.
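
If the 150 GB figure came from the operating system level (e.g. free), keep in mind that SAP HANA manages its own allocators, so it is better to ask the database itself. A minimal sketch against the standard monitoring view M_SERVICE_MEMORY (column names as documented; adjust to your revision):

SELECT HOST, SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2)     AS used_gb,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS limit_gb
  FROM M_SERVICE_MEMORY;  -- memory in use vs. allocation limit, per service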

 

Most of it is in use for dealing with column store data (the remaining 139 GB are used for other purposes):

 

3: System:                                Pool/PersistenceManager  (62.99gb)

4: Statement Execution & Interm. Results: Pool/itab                (54.05gb)

6: Column Store Tables:                   Pool/malloc/libhdbcs.so  (36.88gb)

                                                              SUM (153.92gb)
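
You can reproduce such a breakdown yourself; a sketch using the standard M_HEAP_MEMORY monitoring view (top allocators by exclusive size in use, comparable to the ranking in the OOM trace):

SELECT CATEGORY,
       ROUND(EXCLUSIVE_SIZE_IN_USE / 1024 / 1024 / 1024, 2) AS in_use_gb
  FROM M_HEAP_MEMORY
 ORDER BY EXCLUSIVE_SIZE_IN_USE DESC
 LIMIT 10;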

 

As this looks like an SAP BW system, I assume that you don't shut down the system and unload all tables before you try to copy the table.
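
To see what is occupying the column store while the system is up, a sketch using the standard M_CS_TABLES view:

SELECT SCHEMA_NAME, TABLE_NAME, LOADED,
       ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS mem_gb
  FROM M_CS_TABLES
 WHERE LOADED <> 'NO'            -- fully or partially loaded tables only
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC
 LIMIT 20;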

 

Now, while you copy a column store table, this single activity ends up as multiple separate tasks in SAP HANA:

1) the source table's main and delta store are kept in memory completely (including indexes, existing join translation tables, etc.)

 

2) an index of visible rows is kept in memory to ensure that a consistent view of the data gets transferred.

 

3) the data needs to get "materialised" and then inserted into the delta store of the target table. The materialisation happens in chunks, so this requires memory for the materialisation and memory for the whole delta store of the target table.

 

4) the delta store does not compress the data but in fact blows it up to a certain degree.

 

5) if the delta merge has not been disabled and is left at its default (delta merge to disk), a second copy of the target table gets created every now and then. When the delta merge kicks in, we end up with:

 

  orig. table (main+delta)

+ target table (main1 + delta1 + main2 + delta2)

+ delta merge processing memory

+ materialisation buffer

 

6) part of the usual delta merge is also to write the results to disk. This again requires memory, as space in the persistency needs to be allocated.

Once this is done, the main1 and delta1 of the target table can be deallocated.

The new main2 should be considerably smaller than main1+delta1 before the merge.

But since the insert operation continues while the delta merge is happening, delta2 will have taken up a considerable amount of data by now (see the sketch below for one way to keep this in check).
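
If the aim is just to get the copy done, one way to address points 3) to 6) is to postpone the merge until the load is finished. This is only a sketch with placeholder names ("MYSCHEMA"."SRC" and "MYSCHEMA"."TGT"); it trades the temporary second copy of the main store against a larger delta store, so it only helps when the delta blow-up is the smaller of the two:

CREATE COLUMN TABLE "MYSCHEMA"."TGT"
    AS (SELECT * FROM "MYSCHEMA"."SRC") WITH NO DATA;

-- no intermediate merges, hence no main1 + main2 double copy during the load
ALTER TABLE "MYSCHEMA"."TGT" DISABLE AUTOMERGE;

INSERT INTO "MYSCHEMA"."TGT" SELECT * FROM "MYSCHEMA"."SRC";
COMMIT;

-- one explicit merge at the end moves the data from the delta into a compressed main
MERGE DELTA OF "MYSCHEMA"."TGT";
ALTER TABLE "MYSCHEMA"."TGT" ENABLE AUTOMERGE;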

 

So much for the processing side of the story.

 

You mentioned that the table is currently 25 GB in size. How did you arrive at this value?

Were all columns fully loaded into memory when you retrieved the numbers?
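
This matters because M_CS_TABLES only accounts for columns that are actually in memory, so a partially loaded table looks smaller than it is. A sketch for cross-checking ('MYSCHEMA'/'MYTABLE' are placeholders):

SELECT LOADED,
       ROUND(MEMORY_SIZE_IN_MAIN  / 1024 / 1024 / 1024, 2)              AS main_gb,
       ROUND(MEMORY_SIZE_IN_DELTA / 1024 / 1024 / 1024, 2)              AS delta_gb,
       ROUND(ESTIMATED_MAX_MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS est_max_gb
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'MYSCHEMA' AND TABLE_NAME = 'MYTABLE';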

 

Looking at this thread so far, I have the feeling that this would be best handled via a support message.

So I recommend opening one and having the colleagues check in detail why there is not enough memory for this table copy.

 

 

- Lars

