I'm not familiar with the new Dynatrace features, and I'm trying to understand the correct steps to set up this kind of configuration.
I'd like to showcase a business flow within Dynatrace, but most of the critical data can only be retrieved via external APIs (an agent-based approach is not possible).
I'd like to understand the best path to follow with the newest Dynatrace SaaS capabilities.
My understanding is that I should start with a Notebook that calls the external API and retrieves JSON data (I have tried that and it works).
But then I'll need to store those JSON payloads somewhere and poll the API at a scheduled frequency (should this still be configured in the notebook?).
After that, I'll have to manipulate the data inside those JSON documents with DQL (maybe), and finally display it?
Is there a general picture that describes the correct steps?
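In case it helps to make the first steps concrete, here is a minimal Python sketch of fetching JSON from an external API and shaping each record for ingestion. The source URL, provider name, and event type are placeholders of my own; `event.provider` and `event.type` follow the attribute naming of the Dynatrace business-events ingest API, but verify against your environment's API docs:

```python
import json
import urllib.request

# Hypothetical external API endpoint -- replace with your real data source.
SOURCE_URL = "https://example.com/api/sales-orders"

def fetch_orders(url: str) -> list[dict]:
    """Step 1: fetch raw JSON records from the external API."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def to_bizevents(orders: list[dict]) -> list[dict]:
    """Step 2: shape raw records into business-event documents.

    'event.provider' and 'event.type' are the identifying attributes
    the business-events ingest API expects on each event; the values
    used here are assumptions for illustration.
    """
    return [
        {
            "event.provider": "external-sales-api",  # assumed name
            "event.type": "sales.order",             # assumed name
            **order,
        }
        for order in orders
    ]
```

Once the events are in Grail, they become queryable with DQL, which covers the manipulation and display steps you describe.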
A Notebook runs one-time polling. From what I know, you will need to use a Workflow to schedule the polling and store the results in Grail.
Here is some insight about your use case:
If you want to poll data periodically from the Dynatrace API via the classic environment SDK and ingest it into Grail, then currently the best way is to go via Workflows, because a workflow lets you set its execution interval.
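As a sketch of what each scheduled run would do, the snippet below assembles the POST request against the business-events ingest endpoint. Note this is illustrative logic only: Workflow actions execute JavaScript/TypeScript, the environment URL and token are placeholders, and you should confirm the endpoint path and token scope in the API reference for your tenant:

```python
import json
import urllib.request

DT_ENV = "https://abc12345.live.dynatrace.com"  # placeholder environment URL
INGEST_PATH = "/api/v2/bizevents/ingest"        # business-events ingest endpoint

def build_ingest_request(events: list[dict], api_token: str) -> urllib.request.Request:
    """Assemble the POST a scheduled run would send to push events into Grail."""
    body = json.dumps(events).encode("utf-8")
    return urllib.request.Request(
        DT_ENV + INGEST_PATH,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json; charset=utf-8",
            "Authorization": f"Api-Token {api_token}",
        },
    )

# In a real run you would then call urllib.request.urlopen(request)
# inside the scheduled action and check for an HTTP 202 response.
```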
Don't hesitate to ask your questions in the Developers section.
We're still mainly using the extensions framework for getting data from external sources, as it can be bundled with dashboards, alerts, screens, log patterns, and metric metadata. It can also run on your existing ActiveGates, has built-in configuration flows, and supports polling any technology.
That's not to say there isn't overlap with Workflow use cases, but the extensions framework is usually the go-to for the solutions we build for customers.
In my case I'm supposed to track differences in sales orders between platforms (D365, SAP CPI, and SAP S/4HANA ERP).
The first two are completely SaaS, so an ActiveGate with an extension running wouldn't make much sense, mainly for two reasons:
For the third platform (which is on-premises), the database query extension on top of an ActiveGate could be a solution to capture valuable data to cross-reference.
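A minimal sketch of the cross-referencing itself, assuming each platform's API can be reduced to an order-ID-to-amount map (the field names and shape are assumptions; once all three feeds are ingested into Grail, this comparison could instead be expressed as a DQL join):

```python
def diff_orders(d365: dict[str, float], s4hana: dict[str, float]) -> dict[str, list]:
    """Compare sales orders between two platforms, keyed by order ID.

    Returns order IDs missing on each side and IDs present in both
    but with differing amounts.
    """
    return {
        "missing_in_s4hana": sorted(d365.keys() - s4hana.keys()),
        "missing_in_d365": sorted(s4hana.keys() - d365.keys()),
        "amount_mismatch": sorted(
            oid for oid in d365.keys() & s4hana.keys()
            if d365[oid] != s4hana[oid]
        ),
    }
```

The same pairwise comparison extends to SAP CPI, and the resulting discrepancy lists are exactly the kind of records worth ingesting as business events so the mismatches show up on a dashboard.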