We're using Dynatrace to monitor our on-prem and cloud application stacks. This includes the software we develop and sell to our customers as well as the tooling and services our application relies on to function. As part of our solution we're using SnapLogic to perform various ETL and integration tasks. As our system grows, more of our processes depend on the SnapLogic integration piece. We want visibility into SnapLogic so we can visualize its performance, see how it fits into our flows, and be alerted to errors.
In our instance of SnapLogic we have a "groundplex" installed on a machine in our on-prem data center, and OneAgent is installed on that machine. This means we can currently look at the performance of the machine and draw some conclusions about SnapLogic's health at an aggregate level.
We would like to take this a step further. We would like to know if anyone is tracking information that goes deeper than the health of the host. Is there a way to get pipeline run metrics into Dynatrace? Most importantly, we would like to know if there is any known way or best practice for creating logging within Snap pipelines that can be consumed and aggregated with logs in Dynatrace. It is highly desirable to have Dynatrace be on the lookout for pipeline errors or "warnings" that a pipeline could detect.
When it comes to monitoring SnapLogic, you'll be fine with basic pipeline run information rather than deep instrumentation.
What I would do is start monitoring SnapLogic using the tool's built-in REST interface; the entire Pipeline Monitoring API is available for this.
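As a rough sketch of that polling approach: the snippet below queries the SnapLogic runtime endpoint and filters out failed runs. The pod URL, org name, query parameters, and response field names (`response_map`, `entries`, `state`) are assumptions based on the public Pipeline Monitoring API; verify them against your own pod and the SnapLogic API documentation before relying on this.

```python
import base64
import json
import urllib.request

# Assumptions: your pod URL and org name -- replace with your own values.
SNAPLOGIC_POD = "https://elastic.snaplogic.com"
ORG = "my-org"

def fetch_pipeline_runs(user, password, last_hours=1):
    """Poll recent pipeline runtime entries from the Pipeline Monitoring API.

    Makes a network call; the URL path and response shape are assumptions
    to be checked against the SnapLogic docs for your release.
    """
    url = f"{SNAPLOGIC_POD}/api/1/rest/public/runtime/{ORG}?last_hours={last_hours}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumption: runtime entries live under response_map.entries.
    return body.get("response_map", {}).get("entries", [])

def failed_runs(entries):
    """Keep only runtime entries whose state indicates a failed run."""
    return [e for e in entries if e.get("state") == "Failed"]
```

A small scheduled job (cron, or a Dynatrace ActiveGate extension) could call `fetch_pipeline_runs` every few minutes and alert on anything `failed_runs` returns.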
You can then push the metrics you collect from the SnapLogic API into Dynatrace using the Dynatrace Metrics API (the v2 metric ingest endpoint).
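To illustrate that ingest step: the Dynatrace Metrics API v2 accepts a plain-text line protocol (`metric.key,dim=value gauge,<value> <timestamp-ms>`) POSTed to `/api/v2/metrics/ingest` with an API token that has the metric-ingest scope. The environment URL, token, and metric key below are placeholders; the line-protocol format and endpoint are from the Dynatrace documentation.

```python
import urllib.request

# Placeholders -- substitute your environment URL and an API token
# with the metrics.ingest scope.
DYNATRACE_ENV = "https://YOUR_ENV.live.dynatrace.com"
API_TOKEN = "dt0c01.YOUR_TOKEN"

def metric_line(key, value, dimensions=None, timestamp_ms=None):
    """Build one line of Dynatrace metric ingest line protocol."""
    dims = ""
    if dimensions:
        dims = "," + ",".join(f"{k}={v}" for k, v in dimensions.items())
    ts = f" {timestamp_ms}" if timestamp_ms is not None else ""
    return f"{key}{dims} gauge,{value}{ts}"

def push_metrics(lines):
    """POST line-protocol metrics to the v2 ingest endpoint (network call)."""
    req = urllib.request.Request(
        f"{DYNATRACE_ENV}/api/v2/metrics/ingest",
        data="\n".join(lines).encode("utf-8"),
        method="POST",
        headers={
            "Authorization": f"Api-Token {API_TOKEN}",
            "Content-Type": "text/plain; charset=utf-8",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Dynatrace returns 202 Accepted on success
```

For example, a count of failed SnapLogic runs could be sent as `metric_line("snaplogic.pipeline.failed", 3, {"org": "my-org"})`, giving you a metric you can chart and alert on like any other Dynatrace metric.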