All
I have an InfoCube which had a lot of data loads (hence partitions), each one being around 4m records. Each of these loads had a specific "snapshot" date = load date. There were over 700 of these load packets in the InfoCube, so the cube was pretty large. However, query performance was fine as the user always selects the "snapshot" date in the query selection.
However, as part of a project the data in this cube was copied into another cube, modified, the original cube cleared down, and the data loaded back from the copy cube (in large chunks of around 10 days' worth). This means there are far fewer partitions, but each partition has a lot more records in it (though each record still has its original snapshot date).
We have noticed a significant degradation in query performance.
I expected the smaller number of partitions to improve performance, but I am wondering if the larger size of each partition is hurting it instead.
When I look in RSRT I can see it is the Data Manager step that is taking a lot longer.
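For what it's worth, here is a rough back-of-the-envelope on rows read per query. It assumes the database only prunes at whole-partition granularity and that each consolidated chunk covers roughly 10 snapshot dates (the figures are the ones from the cube above; the 10x factor is only a sketch, not a measurement):

```python
# Sketch of rows the Data Manager must read for one snapshot-date query,
# assuming partition pruning stops at whole-partition boundaries.
ROWS_PER_SNAPSHOT = 4_000_000  # ~4m records per original load packet
DAYS_PER_CHUNK = 10            # consolidated reload chunk size (~10 days)

# Before: one partition per snapshot date, so a query restricted to one
# snapshot date touches exactly one partition.
rows_before = ROWS_PER_SNAPSHOT

# After: one partition per ~10-day chunk, so the same query has to scan
# the whole chunk unless the DB can prune further inside the partition.
rows_after = ROWS_PER_SNAPSHOT * DAYS_PER_CHUNK

print(rows_before, rows_after, rows_after // rows_before)
# -> 4000000 40000000 10, i.e. roughly a 10x increase in rows read
```

If that assumption holds, the slowdown in the Data Manager step would be explained by each query now reading the whole multi-day partition rather than a single day's load.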
Any thoughts would be welcome.
Cheers
Simon