I saw in the logs that the cache cleanup fails with a BUFFER UNDERFLOW error. Can this be resolved by increasing the Collector's memory allocation?
Also, our client has a blobs.dtsf file of approximately 177 GB. Is there a way to reduce this file, e.g. by following the steps in KB-484 ("How to fix a class cache explosion"), or should it sort itself out once we resolve the buffer underflow issue?
The client is running dT 6.0.
Thanks Allan. Unfortunately they're still on 6.0, and their recent attempt to migrate to 6.1 failed; we assume this was due to a corrupted migration archive, so we rolled back to 6.0. They believe the massive blobs.dtsf file could have caused the corruption/failure, so we've been asked to help them reduce its size. I assume that file is critical if we want to avoid them having to restart agents?