We have DCRUM 2017 and Splunk 7.1.1. I've read the other posts and docs about Splunk plugins; the current plugin is compatible with DC RUM 12.4 and 17.0, and with Splunk 6.3.x and 6.4.x.
When will a new Splunk plugin for DCRUM (not Dynatrace) be available? Also, is it possible to turn the AMD into a forwarder? Splunk has several apps that can use PCAP and other packet data, and we want to forward the raw packets from the AMD to our Splunk indexers.
Any ideas/success stories/pitfalls?
Thanks and God bless,
I'm afraid we don't have plans to work on the Splunk integration in the near future.
I'm aware that the example dashboard/app does not work with newer Splunk releases, but the retrieval mechanism using the DMI REST API should still work, unless a new Splunk release has discontinued support for the Python pull mechanism we used to get data from DMI.
There are no plans at all to push packet data to external services directly from the AMD. There was, however, a beta version of an API for accessing AMD-level data straight from the AMD Gate, though we never took it into production; if you are interested, I can double-check whether it is still available in the new release. The API follows the same design as the one delivered by DMI for integration purposes, i.e., it provides access to files each holding one monitoring interval's worth of data. I therefore believe our example Python script used to pull data could be updated to pull from the AMD Gate API.
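Because the AMD Gate API was described as following the same file-based design as the DMI one (a directory listing plus per-file retrieval), the pull logic itself can be written server-agnostically. A minimal sketch — the function name and the injected `list_dir`/`get_entry` callables are mine, not part of any shipped script:

```python
def pull_new(list_dir, get_entry, report, seen):
    """Fetch every not-yet-seen interval file whose name matches `report`.

    list_dir:  callable returning the current list of data file names
    get_entry: callable fetching one file's contents by name
    seen:      mutable set tracking files already retrieved
    Returns a list of (file_name, contents) pairs, oldest first.
    """
    out = []
    for name in list_dir():
        if report in name and name not in seen:
            seen.add(name)
            out.append((name, get_entry(name)))
    return out
```

Plugging in real HTTP calls for `list_dir` and `get_entry` would give the same behavior against either a DMI CAS or, presumably, the AMD Gate endpoint; calling it once per monitoring interval keeps the two sides in step.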
Thank you for your quick response.
I'm not a programmer by trade, though I did some programming (Teradata, MS SQL, VBA, BASIC) in the distant past. I am not familiar with coding against REST APIs. Do you have a sample, or procedural documentation on how to do it?
Splunk is the direction my organization is going. It is a requirement that our tools be able to ingest data into Splunk (push or pull); otherwise, a new tool will need to be researched.
Thanks again and God bless,
The actual code that retrieves data from DC RUM over the DMI REST API is here: https://github.com/Dynatrace/DCRUM-Splunk-Applica... (this is part of the source code of the whole example Splunk app listed here: https://github.com/Dynatrace/DCRUM-Splunk-Applica... )
The whole idea is to create a report based on cached data and add it to the configuration (see the Splunk app documentation above), so that each monitoring interval the data generated by that report is dumped into a file with a timestamp suffix. The API lets you list all the data files available and retrieve any single one of them.
Below is a simple bash script I wrote some time ago to illustrate how this API works. For simplicity it authenticates with a UID, but in production you'd be better off using tokens. The script shows which URLs to call to list all data available for retrieval (LIST), fetch the most recent data file for a given report name (LAST), fetch an exact data file (GET), or fetch the next available data file for the given report name (NEXT).
I will request update to the documentation to cover this API.
```shell
#!/bin/bash
# Usage: rtmdata.sh <cas-server-url> <dmi-uid> <LIST|LAST|GET|NEXT> [report-or-file-name]
SERVER="$1"; CASUID="$2"; CMD="$3"; PARAM="$4"

if [ "$CMD" == "LIST" ]; then
  # List all data files currently available for retrieval
  curl -s "$SERVER/RtmDataAPIServlet?cmd=get_dir&uid=$CASUID"
elif [ "$CMD" == "LAST" ] && [ "$PARAM" != "" ]; then
  # Fetch the most recent data file for the given report name and remember it
  FILE_NAME=$(curl -s "$SERVER/RtmDataAPIServlet?cmd=get_dir&uid=$CASUID" | grep "$PARAM" | tail -n 1)
  curl -s "$SERVER/RtmDataAPIServlet?cmd=get_entry&entry=$FILE_NAME&uid=$CASUID"
  echo "$FILE_NAME" > .last-entry.tmp
elif [ "$CMD" == "GET" ] && [ "$PARAM" != "" ]; then
  # Fetch an exact data file by name
  curl -s "$SERVER/RtmDataAPIServlet?cmd=get_entry&entry=$PARAM&uid=$CASUID"
  echo "$PARAM" > .last-entry.tmp
elif [ "$CMD" == "NEXT" ] && [ "$PARAM" != "" ] && [ "$(cat .last-entry.tmp 2>/dev/null)" != "" ]; then
  # Fetch the file listed right after the one retrieved last time
  LAST_FILE_NAME=$(cat .last-entry.tmp)
  FILE_NAME=$(curl -s "$SERVER/RtmDataAPIServlet?cmd=get_dir&uid=$CASUID" | grep "$PARAM" \
    | sed -n -e "/$LAST_FILE_NAME/,\$p" | head -n 2 | tail -n 1)
  if [ "$FILE_NAME" != "$LAST_FILE_NAME" ] && [ "$FILE_NAME" != "" ]; then
    curl -s "$SERVER/RtmDataAPIServlet?cmd=get_entry&entry=$FILE_NAME&uid=$CASUID"
    echo "$FILE_NAME" > .last-entry.tmp
  fi
fi
```
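The NEXT selection in the script — given the directory listing, pick the file immediately after the one fetched last time — is the piece you'd reimplement if you update the Python puller. A minimal sketch of just that logic (the function name and fallback behavior are my own, not from the shipped app):

```python
def next_entry(listing, report, last=None):
    """Pick the next data file for `report` from a get_dir listing.

    Returns the newest matching file when `last` is unknown (the LAST
    behavior), the file right after `last` otherwise, or None when
    nothing newer is available yet.
    """
    files = [f for f in listing if report in f]
    if not files:
        return None
    if last is None or last not in files:
        return files[-1]  # newest file, like the script's LAST command
    i = files.index(last)
    return files[i + 1] if i + 1 < len(files) else None
```

Calling this once per monitoring interval, feeding the returned name to a `get_entry` download, reproduces the script's LAST/NEXT cycle without the `.last-entry.tmp` side file.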