05 Jul 2024 06:47 PM
I have followed this article exactly, but I always get 'Prediction run failed!'. I have even tried to isolate it to one system, to no avail.
Any ideas, or is this code outdated?
Automate predictive capacity management with Davis AI for Workflows (dynatrace.com)
08 Jul 2024 01:18 AM
If you trigger the workflow manually, there should be some messages written to the Execution log which might give a clue as to why the workflow is failing. Do you see anything written there?
04 Mar 2025 12:26 PM
I also have an issue: I followed the steps outlined in the docs and initially my workflow was successful, but lately I'm getting the below error in the predict_disk_capacity part of the workflow:
Action results and produced logs together are too large. They can't exceed 1MB each. Try changing the task inputs to return a smaller result.
How do I get around this? I've decreased the datapoints to predict and changed the timeframe and forecast offset, without any joy.
Adapting the DQL to only include some hosts/host groups seems counterproductive, as I'd have to set up multiple workflows to cover all the hosts in an environment.
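For what it's worth, one way to shrink the query result without splitting hosts across multiple workflows is to coarsen the resolution instead of the host set. A rough DQL sketch — the metric key and interval here are assumptions, so adjust them to whatever the article's original query uses:

```
// Fewer datapoints per series => smaller action result.
// A coarser interval (e.g. 1d instead of the default) cuts the
// payload roughly proportionally while keeping all hosts in scope.
timeseries avail = avg(dt.host.disk.avail.pct),
  by: { dt.entity.host, dt.entity.disk },
  from: -30d,
  interval: 1d
```

The forecast generally needs fewer historical points than a fine-grained query returns, so reducing the sampling interval is usually a cheaper lever than filtering entities.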
04 Mar 2025 12:41 PM
While I can't speak to the workings of the forecast action, the 1MB limit on action results is in the process of being raised to 6MB within the next couple of days. This should hopefully resolve the issue for now.
04 Mar 2025 01:19 PM
That's great news, thanks @ChristopherHejl!