Does anyone have experience monitoring Control-M generated batch jobs with OneAgent? Control-M already has internal monitoring features like alerting if a job is delayed or failed, but we're looking for deeper performance statistics, down to the individual DB statements executed by Control-M. I believe Control-M is Java-based, so from that perspective it sounds doable. On the other hand, we're talking about short-lived processes, so I'll probably need a custom PG detection rule and then a custom service definition. Has anyone tried to implement something like this, and if so, did you end up with good results and visibility?
We haven't monitored Control-M ourselves, but one thing is worth noting. If this is a single process that runs continuously and executes the jobs, there shouldn't be a problem. But if each job starts a new process that has to be instrumented, and those jobs are fast, it may produce overhead, because each new Java instance has to be instrumented again and again. With a long-running process that is instrumented once at startup, short jobs shouldn't be an issue. Creating a custom service can be done via the GUI. I would start the process with monitoring enabled and check whether Dynatrace detects something automatically; if not, check the background threads on Method hotspots to find entry points.
Thanks Sebastian, this seems worth a PoC nonetheless. I can update this discussion later on with my findings...
Hi @Kalle L.
Can you update us with your findings, please?
All the best
We managed to monitor the batch jobs quite well by defining custom services via the Dynatrace Configuration API with the following parameters:
1. Fully qualified class name
2. Entry point method
3. The method’s fully qualified return type
The reason for using the API instead of the Dynatrace UI was that, due to the often short-lived nature of the batch job processes, OneAgent didn't find an active handle to the process that could be used for pulling the config details. Pushing them in via the API, by contrast, doesn't require the process to be running at the time.
Feedback from the developers was that it's quite time-consuming to gather those entry point method details for hundreds of batch jobs, so I suppose that may be the biggest hurdle with the setup. Configuring them via the API is then pretty straightforward, once you have a template for it and all the source data required for steps 1-3.
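To illustrate, here's a rough sketch of pushing one such definition to the Configuration API's Java custom-services endpoint (`/api/config/v1/service/customServices/java`). The tenant URL, token, job name, and class/method names are placeholders, and the payload fields reflect the Config API v1 schema as I understand it, so verify against the docs before using:

```python
import json
import urllib.request

# Hypothetical tenant and token -- replace with your own.
TENANT = "https://example.live.dynatrace.com"
API_TOKEN = "dt0c01.XXXX"

def build_payload(job_name, class_name, method_name, return_type):
    """Build a custom service definition from the three details above:
    1. fully qualified class name, 2. entry point method, 3. return type."""
    return {
        "name": job_name,
        "enabled": True,
        "rules": [
            {
                "enabled": True,
                "className": class_name,
                "methodRules": [
                    {
                        "methodName": method_name,
                        "argumentTypes": [],  # adjust if the entry point takes arguments
                        "returnType": return_type,
                    }
                ],
            }
        ],
    }

def push_custom_service(payload):
    """POST the definition; the batch process does not need to be
    running at the time for the config to be accepted."""
    req = urllib.request.Request(
        f"{TENANT}/api/config/v1/service/customServices/java",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Api-Token {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    payload = build_payload(
        "NightlyInvoiceJob",             # hypothetical batch job name
        "com.example.batch.InvoiceJob",  # 1. fully qualified class name
        "run",                           # 2. entry point method
        "void",                          # 3. method's return type
    )
    print(json.dumps(payload, indent=2))
    # push_custom_service(payload)  # uncomment once tenant/token are real
```

With a template like this, looping over a CSV of class/method/return-type triples covers the "hundreds of batch jobs" case in one script.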
Here's a link to the API documentation:
Thanks @Kalle L. for your reply and the insight as well
Appreciate it!