Comments have been closed on this page. Please use the AppMon & UEM Open Q&A forum for questions about this plugin.

Looking for Splunk Application?

Go here


  1. Anonymous (login to see details)

    Looks very interesting!

  2. Anonymous (login to see details)

    Thanks to Tan, Vinson we finally got the Windows Splunk version to work. Thanks again!

  3. Anonymous (login to see details)


    I tried to implement it on Windows as documented above, but it does not listen on port 4321. If I run the batch file runFlume.bat manually, it starts listening on port 4321 and I can see data in Splunk.

    Is this how it works, or have I done something wrong?


    1. Anonymous (login to see details)

      Did you try manually defining JAVA_HOME in your system environment variables?

    2. Anonymous (login to see details)

      It should just start. Do you have the latest version? I fixed some things to make it auto-start; please try downloading it again.

      JAVA_HOME should not be needed, but Java needs to be installed (which it is, or it wouldn't work at all). The bat file checks the registry for Java 6 and 7, 32- and 64-bit. What do you have installed?

      If it doesn't work, please check the Splunk log file and see if you get any errors when it tries to run runFlume.bat.

  4. Anonymous (login to see details)

    For Splunk (Flume) to run automatically on Windows, I had to make a few changes to the following files.

    1. etc\apps\CompuwareAPM\local\inputs.conf

    The Windows file separators and directory locations are always a problem. I had to change it to make it relative, but with some funky slashes:

    disabled = 0
    interval = 300
    sourcetype = log4j

    Second, in case you want to disable the other scripts, just set disabled = 1 and Splunk won't execute them.

    2. runFlume.bat: You have to specify the complete path for the CLASSPATH and the conf files. It seems the Splunk ExecProcessor cannot read from relative paths. I have attached the file.

    3. etc\apps\CompuwareAPM\bin\flume-conf.properties
    Again, the sink directory for Flume has to be a complete path with \\\ as file separators.

    e.g. agent1.sinks.purepath.sink.directory = C:\\\Progra~1\\\Splunk\\\etc\\\apps\\\CompuwareAPM\\\log\\\bt-export\\\pp

    Attached are all three files in splunk-app-for-windows.zip.

    Note: You will have to change the absolute paths based on where you have installed Java and Splunk.
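
    For reference, the relative-path stanza described in item 1 might look something like the following sketch. The [script://...] path is an assumption based on the "funky slashes" remark (Splunk's Windows scripted inputs use a ".\" prefix for paths relative to the app directory); only disabled, interval, and sourcetype are taken from the comment above.

```ini
# Hypothetical scripted-input stanza for etc\apps\CompuwareAPM\local\inputs.conf
# (the script path is an assumption, not taken from the app's shipped config)
[script://.\bin\runFlume.bat]
disabled = 0
interval = 300
sourcetype = log4j
```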


  5. Anonymous (login to see details)


    Is there a webinar planned for this?

    Would really be interested in a Splunk integration demo.


    1. Anonymous (login to see details)

      It's on the list; there is no definitive date yet, but it will surely be sometime in the next two months.
      Is your interest just in Splunk, or more generally in how to get BT data from dynaTrace?

      1. Anonymous (login to see details)

        Hi George:

        Would you like us to set up a tour for you?



  6. Anonymous (login to see details)

    Getting a parse error when executing runDashboard.

    * About to connect() to dynatrace.dillards.com port 12021 (#0)
    * Trying connected
    * Connected to dynatrace.dillards.com ( port 12021 (#0)
    * Server auth using Basic with user 'admin'
    > GET /rest/management/reports/create/Extranet?type=XML&format=XML+Export&filter=tf:Last5Min HTTP/1.1
    > Authorization: Basic YWRtaW46aUhvbzRDaGE=
    > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/ zlib/1.2.3 libidn/1.18 libssh2/1.4.2
    > Host: dynatrace.dillards.com:12021
    > Accept: */*
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{ [data not shown]
    0 7 0 7 0 0 3026 0 --:--:-- --:--:-- --:--:-- 7000* Connection #0 to host dynatrace.dillards.com left intact

    * Closing connection #0
    /tmp/tmp.4CiwEy9Vr4:1: parser error : Start tag expected, '<' not found

    unable to parse /tmp/tmp.4CiwEy9Vr4

  7. Anonymous (login to see details)


    Can you edit the runDashboard.sh and provide the right user name and password to curl?

    The default is admin/admin:

    -u admin:admin
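
    If you are scripting the REST call outside of curl, the Authorization header curl builds from -u is just the base64 of "user:password". A minimal Python sketch, using the default credentials mentioned above:

```python
import base64

def basic_auth_header(user, password):
    # HTTP Basic auth: base64-encode "user:password" and prefix with "Basic "
    token = base64.b64encode("{0}:{1}".format(user, password).encode("utf-8")).decode("ascii")
    return "Basic " + token

print(basic_auth_header("admin", "admin"))  # Basic YWRtaW46YWRtaW4=
```

    This matches the "Authorization: Basic ..." line visible in the curl trace above.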


  8. Anonymous (login to see details)

    I was under the impression that the username/password were what we used on our dynaTrace setup?


  9. Anonymous (login to see details)

    Installed Splunk and the APM app on my laptop without issues. But when I installed Splunk on my dynaTrace server VM, the Flume channel could not be started:

    5 Jun 2013 12:48:59,267 ERROR [conf-file-poller-0] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents:117)  - Error while starting org.apache.flume.channel.MemoryChannel{name: pageaction}
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(Unknown Source)
        at org.apache.flume.lifecycle.LifecycleSupervisor.supervise(LifecycleSupervisor.java:140)
        at org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents(DefaultLogicalNodeManager.java:114)
        at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:225)
        at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
        at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
        at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(Unknown Source)
        at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(Unknown Source)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

  10. Anonymous (login to see details)

    Do you have more? Can you send the full log file my way?

  11. Anonymous (login to see details)

    Hi Michael,

    Log sent. Thanks in advance.


    1. Anonymous (login to see details)

      Update :

      • The problem turned out to be a missing Java environment on the Splunk server.
      • Once Java was installed, the application initialized correctly.
  12. Anonymous (login to see details)



    Any reason you can think of that we would stop receiving PPs? We still see PAs coming in.


  13. Anonymous (login to see details)

    Only if the PPs in question are not tagged with the BT that you export.

  14. Anonymous (login to see details)

    They are; the BTs haven't changed... Still enabled with HTTP export. I have the BTs set up to go to both the Performance Warehouse and HTTP, but they just don't show up in Splunk. Wondering if something is screwed up on the Splunk side.

    Would re-installing the application be advisable?


  15. Anonymous (login to see details)

    On Linux I found that looking in the splunkd.log file was useful in determining where my problems were. It is in the splunk/var/log/splunk directory. It helped me identify that I had a permissions issue.


  16. Anonymous (login to see details)

    You can also use Splunk to look at the splunkd.log file and search for the runDashboard.sh or runDashboard.bat command. This is the quickest way to identify whether there are permission issues or problems with your temporary directory.

    Here is the command used in Splunk to allow you to quickly search for what you are looking for.

    index=_internal source="*splunkd.log" *rundashboard*


  17. Anonymous (login to see details)

    The Splunk application looks like it's configured for a single all-purpose Splunk instance. What if you're running a distributed Splunk setup, where search and indexer(s) run on separate hosts? How would I set up the export to multiple instances?

    1. Anonymous (login to see details)

      The Splunk App still needs to be split up into its separate parts. If you have experience with this and Splunk, it would be great if you could help us with this.

    2. Anonymous (login to see details)

      You have a few options here. The Compuware Splunk App has two components: the Web/Search pages and the Data Inputs. You will have to install the Data Inputs/Flume components of the Splunk app on the forwarder, which can forward the data to the indexer; or, if the indexer can also ingest data, you can use that instead of the forwarder. Make sure that when you configure the dynaTrace export URL, you point to the forwarder where the Flume server is running.

      The searches/web pages can go on the search head.

      This is not out of the box, since every situation is different, but this can be easily done.
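
      To sketch the forwarder-to-indexer piece described above: the Data Inputs/Flume component would run on a forwarder whose outputs.conf points at the indexer. The group name and host below are placeholders, not values from this app; 9997 is Splunk's conventional receiving port, but use whatever your indexer is configured to listen on.

```ini
# Forwarder-side outputs.conf (sketch; group name and host are hypothetical)
[tcpout]
defaultGroup = dynatrace_indexers

[tcpout:dynatrace_indexers]
server = indexer.example.com:9997
```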

  18. Anonymous (login to see details)


    Documentation is a little thin on the ground for this capability. Does one have to edit the runDashboard.bat? If so, some instructions around this would be good. Is there more comprehensive documentation, other than what is in this community post?

    1. Anonymous (login to see details)



      Check the subsection Pulling Measurement data into Splunk. Yes, you need to edit the runDashboard.sh/bat file: once to activate the feature to retrieve dashboard data, and also to define the server and which dashboard to pull. Understand this is just for cyclic dashboard data. It is not needed for the transactional data, which is the primary purpose of this application.

      If you check the file it should be pretty much self-explanatory. If something is unclear, please let me and Rajesh know via mail and we will see that the documentation gets enhanced.

  19. Anonymous (login to see details)

    Trying to install the plugin I get the following,

    01-23-2014 10:21:29.674 -0500 WARN LocalAppsAdminHandler - There was a problem creating directory while unarchiving bundle /Volumes/EXTHDD1/splunk/splunk/var/run/30950c87ed1ec821/APM_dynatrace/appserver/: Permission denied
    01-23-2014 10:21:29.674 -0500 ERROR LocalAppsAdminHandler - Error during app install: failed to extract app from /var/folders/dc/70zczyyn009dqztdk6g8l16m0000gn/T/tmp4vm9ko to /Volumes/EXTHDD1/splunk/splunk/var/run/30950c87ed1ec821: Permission denied

    I have Splunk running on MAC OSX.  What should the permissions be on the folders?

  20. Anonymous (login to see details)

    Are you installing it as the same user as Splunk? Let me double-check that the permissions in the bundled jar file are 755.

    1. Anonymous (login to see details)

      I have uploaded the new tar/spl file with permissions set to 755. Cygwin on Windows had for some reason changed the permissions to 000.

      Can you double-check that this works?

      1. Anonymous (login to see details)

        yes, this worked.

  21. Anonymous (login to see details)

    I have it working for the most part, it shows data coming in.

    Screen Shot 2014-01-27 at 1.09.08 PM.png

    I'm only seeing the visitor metrics nothing more.

    Screen Shot 2014-01-27 at 1.09.30 PM.png

    There are also errors in the Splunk log around ERROR ExecProcessor - message ...



  22. Anonymous (login to see details)

    Worked with Jeff to resolve this issue. The index name has changed in version 2.0, and hence the search queries had to be changed to add index=dynatrace. I have uploaded the changes to this page and the Splunk app page.

  23. Anonymous (login to see details)

    Following the section Pulling Measurement data into Splunk, I tried to change the dashboard to an existing dashboard but it did not work. I found that I also needed to change the default line "/usr/bin/xsltproc reportdynamic.xsl $TMPFILE" to "/usr/bin/xsltproc report.xsl $TMPFILE".


    We might want to mention the difference between the two.


  24. Anonymous (login to see details)

    We have configured dynaTrace and Splunk through hourly dashboard data scripts, and it is working fine so far. But I see the following issue with reporting metric data from dashboards. We have a system profile with two agents and want to capture metrics like Memory Utilization and Thread Utilization from both, so I created a chart in the dashboard with the metric split by agent. When I exported the chart in XML format, it comes out in the format below. In dynaTrace dashboard reports the metric data is grouped under each category, but when it gets exported to Splunk, the metric type is not in the measure description, causing the data to get skewed.


    <measure count="110" avg="64.961" measure="AgentA[Host]">

    <measure count="177" avg="69.969" measure="AgentB[Host]">

    <measure count="287" avg="68.393" measure="Memory Utilization (split by Agent Name)">


    As a workaround, I created a dynaTrace measure for each agent, like below. With that, I see the right data getting exported to Splunk. But if we start adding agents to the System Profile, then I have to create more measures and manage the dashboards.

    <measure count="110" avg="64.961" measure="Memory Utilization - AgentA[Host]">

    <measure count="177" avg="69.969" measure="Memory Utilization - AgentB[Host]">

    Is there any other way of doing this without creating measures?
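
    One avenue that might avoid per-agent measures is to post-process the exported XML before it reaches Splunk, prefixing each split measure with its parent metric name. The sketch below assumes the per-agent <measure> elements are nested inside the split-by-agent parent; that layout is inferred from the comment above, not from the actual export schema.

```python
import xml.etree.ElementTree as ET

def prefix_split_measures(xml_text):
    # Rename nested split measures from "AgentA[Host]" to
    # "<parent metric> - AgentA[Host]" so the metric type survives in Splunk.
    root = ET.fromstring(xml_text)
    for parent in root.iter("measure"):
        # Strip the "(split by ...)" suffix from the parent's measure name
        base = parent.get("measure", "").split(" (split by")[0]
        for child in parent.findall("measure"):
            child.set("measure", base + " - " + child.get("measure"))
    return ET.tostring(root, encoding="unicode")

# Hypothetical export fragment, following the values quoted above
sample = ('<measurement>'
          '<measure count="287" avg="68.393" measure="Memory Utilization (split by Agent Name)">'
          '<measure count="110" avg="64.961" measure="AgentA[Host]"/>'
          '<measure count="177" avg="69.969" measure="AgentB[Host]"/>'
          '</measure>'
          '</measurement>')
print(prefix_split_measures(sample))
```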

  25. Anonymous (login to see details)

    How do we integrate our instance of dynaTrace into Splunk?

    1. Anonymous (login to see details)

      Hi, do you have a concrete problem that you wish to have solved? If so, please provide additional information.

      I am currently working on a blog post on apmblog.compuware.com that will show the benefits and the basic tools you need to get dynaTrace and Splunk up and running. There will also be a webinar that will show these steps in more detail.

  26. Anonymous (login to see details)

    I am looking for the zip file Rajesh said was attached on June 19, 2013.

  27. Anonymous (login to see details)

    We are getting the following error when running the app on windows:

    Error we are seeing on splunk side.


    F:\Splunk\bin>splunk cmd f:\Splunk\etc\apps\APM_dynatrace\bin\runFlume.bat

    Using the following JAVA_HOME: f:\program files\java\jre6

    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flume/node


    Caused by: java.lang.ClassNotFoundException: org.apache.flume.node.Application

            at java.net.URLClassLoader$1.run(Unknown Source)

            at java.security.AccessController.doPrivileged(Native Method)

            at java.net.URLClassLoader.findClass(Unknown Source)

            at java.lang.ClassLoader.loadClass(Unknown Source)

            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)

            at java.lang.ClassLoader.loadClass(Unknown Source)

    Could not find the main class: org.apache.flume.node.Application.  Program will


    1. Anonymous (login to see details)

      Hi Curtis

      This is a classic Windows path issue. I have a Python version of runFlume.bat which will work cross-platform. Can you test it? If it solves your issue, then I can replace the bat/sh files.

      I will attach the Python file to this page by today.




  28. Anonymous (login to see details)

    Hi Rajesh, any luck on the Python script? Thanks, Curtis

    1. Anonymous (login to see details)


      I have emailed you the script and also attached it to the page. 


  29. Anonymous (login to see details)

    We are fetching dynaTrace data into Splunk in two ways: first, CPU measures and Business Transaction chart data through the runDashboard.sh script; second, Business Transactions over HTTP. But we are seeing that some of the data is missing in Splunk for both the "pp" and "metrics" source types. Do you have any recommendations to resolve this issue?

  30. Anonymous (login to see details)

    Hi Lakshman, 

    Can you elaborate on what data is missing? Are you seeing any PP or none? Do you see the files created under the bt-exports directory?




  31. Anonymous (login to see details)

    Hi Rajesh,

    We have set the runDashboard.sh script interval to 1 hour, and I believe Business Transactions over HTTP is real time. When comparing dynaTrace data and Splunk data, we see some discrepancies like those below. Not sure why some events get missed in Splunk.

    Transaction    dynaTrace count    Splunk count
    BT A           110                95
    BT B           50                 45

     I even modified the runDashboard.sh script so as not to miss even one second's data, but I am still seeing the issue.

    # Record the current epoch time and the last run's timestamp
    CURRTIME=`date +%s`
    PREVTIME=`cat prop/dt_LastTime.prop`
    # Convert seconds to milliseconds for the dynaTrace REST API
    ENDTIME=`echo "($CURRTIME) * 1000" | bc`
    # Pull the dashboard report as XML and transform it for Splunk
    /usr/bin/curl -s -k -u $USERNAME:$PASSWORD "http://$DTSERVER/rest/management/reports/create/$DASHBOARD?type=XML&format=XML+Export&filter=tf:$TIMEFRAME" > $TMPFILE
    /usr/bin/xsltproc report.xsl $TMPFILE
    # Store the next start time (current time + 1s, in milliseconds)
    CURRTIME=`echo "($CURRTIME+1) * 1000" | bc`
    echo $CURRTIME > prop/dt_LastTime.prop
    rm $TMPFILE


  32. Anonymous (login to see details)


    I have successfully set up the integration between dynaTrace and Splunk, and I am seeing visits, user actions and PurePath data in Splunk. It appears that the data is running on a 30-minute delay. For example, I have a business transaction set up to split on UserID; I can see in dynaTrace that there were multiple transactions within the last few minutes, but Splunk only shows the transactions that occurred more than 30 minutes ago.

    Has anybody seen this before?  Is there a setting?



  33. Anonymous (login to see details)

    Note to everyone :

    Please download the newest version of the App (2.1) from the attachments or from the Splunk App store. This version changes the startup scripts for Flume and the input script for dashboard (metrics) data to Python.



  34. Anonymous (login to see details)

    This is how I got the visits_on_a_map view to work.

    1. Edit Splunk\etc\apps\maps\default\geoip.conf, set database_file to C:\progra~1\Splunk\etc\apps\maps\bin\GeoLiteCity.dat.

    2. Edit Splunk\etc\apps\maps\default\commands.conf, add the following after [geoip]:
    local = true

    3. http://SPLUNK_SERVER:8000/en-US/manager/search/data/ui/views?ns=APM_dynatrace

    4. Click on visits_on_a_map.

    5. Look for the following block in the XML:
    <module name="GoogleMaps" group="Visitors Across the Globe - Today" layoutPanel="panel_row1_col1">
    <param name="height">500px</param>
    <param name="center"/>
    <param name="scrollwheel">off</param>

    6. Remove the line: <param name="center"/>

    7. Change the zoomLevel to 1.

    8. Save.

    1. Anonymous (login to see details)

      Awesome. Step 2 is important to run the geoip command outside the app context. Thanks for documenting the steps. I will add them to the App documentation.

      1. Anonymous (login to see details)

        Rajesh, did you make the xml changes to the app?

        This also worked for me and I now get the visits on the map.

    2. Anonymous (login to see details)

      Thank you for this. We were struggling to get the maps view to work. On one machine we made all of the recommended changes and this resolved the issue. On another machine we simply performed recommended steps 3 to 7 and this worked. Zoom levels from 1 to 3 appear to work fine.

    3. Anonymous (login to see details)

      Thanks, these instructions were great!  

      Note that for version 2.2.3, the URL in step 3 should be:


    4. Anonymous (login to see details)

      Hi, I tried the above settings, but it doesn't seem to show the data for me. Any input would be appreciated, thanks.

      Versions used for the integration:

      Compuware APM app, latest version...


  35. Anonymous (login to see details)

    Can we get the JMX values from dynaTrace? They want to put the Splunk JMX plugin on the server and not use dynaTrace, but we would like to have dynaTrace provide all data to Splunk.

  36. Anonymous (login to see details)

    Hey Craig, I'm not sure what exactly you want to do, but if you want to get JMX metrics from a server into Splunk, why do you need dynaTrace? Or do you want the dynaTrace JMX Monitor's data sent to Splunk without going through dynaTrace?

  37. Anonymous (login to see details)

    I have been using this feature for a couple of months. It is working great, but here are the issues I am facing with the default implementation.

    • It only collects chart data. We have business transaction and visit chartlets in our dashboard.
    • If a chart has data split by agent, it is not parsed the right way and ends up skewed in Splunk.
    • I do not need all elements like avg, min, max, sum, and count for all measurements; all these values are too big, with 10 decimal places.

    So I modified the report.xsl file to address these issues. Currently I am using this template on one dashboard and I am not seeing any issues.

    I am not a Splunk or XML parsing expert, so I want your opinion on these changes. Please find the attached report_laks.xsl file in the attachments (not sure whether I can add attachments or not).


  38. Anonymous (login to see details)


    We tried to get some data via the dashboard API. Do I have to start the script separately?

    When I try it, we get this error:

    /splunk/etc/apps/APM_dynatrace/bin> python runDashboard.py

    Traceback (most recent call last):

      File "runDashboard.py", line 51, in <module>

        xslt = ET.parse(xsl_file)

      File "lxml.etree.pyx", line 2957, in lxml.etree.parse (src/lxml/lxml.etree.c:56299)

      File "parser.pxi", line 1526, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:82331)

      File "parser.pxi", line 1555, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:82624)

      File "parser.pxi", line 1455, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:81663)

      File "parser.pxi", line 1002, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:78623)

      File "parser.pxi", line 569, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:74567)

      File "parser.pxi", line 650, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:75458)

      File "parser.pxi", line 588, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:74760)

    IOError: Error reading file 'bin/report.xsl': failed to load external entity "bin/report.xsl"


    And it would be good to note that lxml is a prerequisite for this script.



    EDIT: Solved this problem by changing


    But I still don't have data in Splunk for this dashboard...




    1. Anonymous (login to see details)

      Hi Jan


      Were you able to solve this problem, or are you still not getting any data?





      1. Anonymous (login to see details)


        Yes, I was able to capture data. I had forgotten to edit the inputs.conf.

        It seems that clean-flume.py does not work. Might it be a problem with some Linux distributions?




  39. Anonymous (login to see details)

    I'd like to see this changed so that data isn't pulled at the search head. It would be nice to have a TA which grabs the data, and then a separate dashboard piece. We discourage apps that grab data from running on the search head, using it only for its described purpose.

  40. Anonymous (login to see details)

    When setting up the Splunk app I missed the part about 'add the two Business Transactions from the Splunk Business Transactions template profile to your own System Profile'. This resulted in an error mentioning a missing 'clientip'.

    After I copied the Business Transactions Detailed Visit Data and Detailed User Actions together with their Measures, everything worked OK. I guess the part 'By simply enabling the HTTP export of any Business Transactions you can now use them in splunk' made me think that I could just use any of my already existing Business Transactions, but none of these had the 'clientip'. Thanks Rajesh for helping me find this!

  41. Anonymous (login to see details)

    Installed Splunk as per the instructions but am getting no data. I am getting the following error in the Flume log:

    20 Oct 2014 16:49:04,339 ERROR [lifecycleSupervisor-1-3] (org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run:238) - Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.http.HTTPSource{name:dynaTraceServer,state:IDLE} } - Exception follows.
    java.lang.IllegalStateException: Running HTTP Server found in source: dynaTraceServer before I started one.Will not attempt to start.
    at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
    at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:119)
    at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
    at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:236)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)




    1. Anonymous (login to see details)

      Nico, can you check if you have multiple Flume processes running? ps -ef | grep flume

  42. Anonymous (login to see details)

    Hi Rajesh,

    In the Windows Task Manager I see 3 splunkd processes and one splunkweb process, but no Flume processes running.

    1. Anonymous (login to see details)

      OK, on Windows this is a java process. In the Task Manager, look for java processes, and make sure you have the "Command Line" column enabled. Check how many java processes you have with a command line containing "apache flume".

      This might be a Windows issue with processes... On Windows, when in doubt, "Reboot".

      If this doesn't work, let me know and I will set up a quick WebEx.

  43. Anonymous (login to see details)

    I would appreciate it if we could do a WebEx. Do you want me to open a ticket?




  44. Anonymous (login to see details)

    Could we also send the system profile name to Splunk?

    We are trying to store events for certain profiles in a specific Splunk index rather than the default dynatrace index.
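
    Until the export itself carries the profile name, one Splunk-side sketch is index routing with a props/transforms pair, keying off something that already appears in the raw events (a BT name, for example). All names below are hypothetical; the pattern only works if the events contain something that identifies the profile.

```ini
# props.conf -- attach a routing transform to an exported sourcetype
[pp]
TRANSFORMS-profile_route = route_profile_a

# transforms.conf -- send matching events to a dedicated index
[route_profile_a]
# REGEX must match something in the raw event that identifies the profile
REGEX = ProfileA
DEST_KEY = _MetaData:Index
FORMAT = profilea_index
```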


  45. Anonymous (login to see details)

    Hi Rajesh,

    runFlume.sh failed to process the events and is logging the stack trace below in the Splunk log. Any insight into what is causing this?

    4-06-2015 03:05:01.811 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" 2015-04-06 03:05:01,811 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
    04-06-2015 03:05:01.811 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" org.apache.flume.EventDeliveryException: Failed to process transaction
    04-06-2015 03:05:01.811 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" Caused by: java.io.IOException: Stale file handle
    = ... 4 more
    04-06-2015 03:06:02.819 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" 2015-04-06 03:06:02,819 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
    04-06-2015 03:06:02.819 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" org.apache.flume.EventDeliveryException: Failed to process transaction
    04-06-2015 03:06:02.819 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:218)
    at java.lang.Thread.run(Thread.java:662)
    04-06-2015 03:06:02.819 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" Caused by: java.io.IOException: Stale file handle
    04-06-2015 03:06:02.819 -0700 ERROR ExecProcessor - message from "/splunk/etc/apps/APM_dynatrace/bin/runFlume.sh" at java.io.FileOutputStream.writeBytes(Native Method)
    at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:195)


  46. Anonymous (login to see details)

    I am doing the Business Transaction Feed. The problem is that I am seeing duplicate entries in Splunk. I have two Splunk installations, and the interesting thing is that on one of them I don't get the duplicates. I opened a support ticket with Splunk; they said that since there is an intermediary (the Flume server), I have to look to dynaTrace for support. Can anyone help me with this problem, please? Thanks.


    1. Anonymous (login to see details)

      Hi Delton,

      Assuming you are using the big data Business Transaction Bridge, that you followed the documentation correctly, and all of that... I'd say open up a support case with us.


      1. Anonymous (login to see details)

        I opened a support case.  They referred me to this page.

    2. Anonymous (login to see details)

      Hi Delton

      Here is some quick feedback from the lab to give you some hints on where to start looking:

      • Check the Flume configuration: maybe you have configured duplicate data sinks, which means it will send the same data twice to the same Splunk instance.
      • Duplicated BTs? Make sure you haven't accidentally copied Business Transactions that are all delivering the same data.


      1. Anonymous (login to see details)

        Thanks Andi. I am getting the data more than twice. Where should I look for the data sinks? Is it in \splunk_home\etc\apps\CompuwareAPM\bin\flume-conf.properties?

        I checked the BTs; there are no duplicates.

  47. Anonymous (login to see details)

    I have just installed the app into Splunk, configured the BT export, and selected BTs for export. I see this error in the Splunk log:

    07-23-2015 15:27:22.231 -0400 ERROR FrameworkUtils - Incorrect path to script: /opt/splunk-search/etc/apps/compuwareapm/bin/runDashboard.py. Script must be located inside $SPLUNK_HOME/bin/scripts

     The file is actually installed in the /splunk/splunk-searchpool/etc/compuware/bin

    Has anyone seen this?


  48. Anonymous (login to see details)

    Correction: the file exists in /splunk/search-pool/etc/apps/compuwareapm/bin

  49. Anonymous (login to see details)

    How is it possible to correlate PP and PA? I didn't find any common field in Splunk.




  50. Anonymous (login to see details)

    Not sure how much support I can expect but I have Dynatrace 6.3.1 and Splunk 6.3. All seems to work well except for the purepath data which does not seem to reach Splunk.

    In other words, the 'pp' sourcetype does not get populated with any data (i.e. the bt-export/pp log files are empty) whereas the pa and visit sourcetypes get populated with data. 

    There is no obvious error in the flume logs and I have tried restarting the flume process to no avail. On the Dynatrace side the "Purepath reference" option is ticked in my exported business transaction.

    Has anyone encountered the issue before?


    1. Anonymous (login to see details)

      Romain, are you exporting any PurePath-based business transactions? The pp sourcetype is only populated for BTs based on server-side PurePaths, not for BTs based on user actions or visits. Usually the failure scenario here is pretty binary; we will usually see all data or no data in a failure case.

      Actually, never mind; the PurePath reference is only available for server-side PurePath business transactions.

      Can I assume there have been new BTs created since you enabled the feed?



      1. Anonymous (login to see details)

        Hi Michael, thank you for the prompt comment!

        OK, so the two example BTs (the ones that come with the app) will not populate the pp sourcetype then? That explains why there is no PurePath data being exported.

        1. Anonymous (login to see details)

          Yeah, the examples are for visit/user-action data as the sample dashboards in the app are focused primarily around user-experience data. The export functionality and Splunk app are focused on UEM data.

          Glad you were able to get some PP data exported!



      2. Anonymous (login to see details)

        I have just created a BT that captures server-side PurePaths and exported it, and I now have data in the pp sourcetype.

  51. Anonymous (login to see details)

    Has anyone ever tried deploying the latest dynaTrace app with Splunk version 6.4? Are there any gotchas I need to be aware of ahead of time? About a month ago, Mike Villiger shared that the visits dashboard didn't work but the BT-based events were fine.



  52. Anonymous (login to see details)

    I see an earlier comment from Sreerag Moolekattil asking about getting system profile name into the data we send to Splunk.

    This would be very valuable, as sending BT data from multiple profiles will result in dashboards in Splunk that mix data from multiple sources, and users of Splunk will have to find ways to filter this data to be meaningful if they are primarily interested in/responsible for the health of only a single application.

    I understand that the Business Transactions feed is already defined, and not a part of this plugin, but could the plugin be enhanced (on the Splunk side) to provide filtering options?  If we include an application or profile name in the BT names, could this potentially be used as filter criteria, or could the Splunk component be configured with a specific list of keywords to filter on?


  53. Anonymous (login to see details)


    This has been added to the next release of Dynatrace! Hooray


    1. Anonymous (login to see details)

      Reinhard, can you spell out which release you are referring to?




      1. Anonymous (login to see details)

        I believe Reinhard is referring to 6.5 which includes the system profile name in the BT export.

  54. Anonymous (login to see details)

    So in Dynatrace 7.x we should see the system profile as one of the fields in the protobuf definition?

    In the meantime, what is involved in potentially filtering on keywords in the existing data?


  55. Anonymous (login to see details)

    I tried the procedure described on this page.

    Dynatrace data is shown in Splunk; however, the "Drill down to CompuwareAPM" option does not appear.
    What should I do to get the "Drill down to CompuwareAPM" option?

    ----- Environment info -----
    AppMon version: 6.3
    Splunk version: 6.4.0
    APM with Dynatrace App version: 2.2.4
    Both AppMon and Splunk are installed on RHEL6 64-bit

  56. Anonymous (login to see details)

    Hi, the reason you are not seeing this option in your events is the app permissions for the drill-down workflow. In Splunk, go to Settings -> Workflows -> dT drill down workflow and include all apps in the permissions. BTW, for me the drill-down only works a small percentage of the time. You may see an error in the dT client; just ignore it and let the PurePath search run in the background for a few minutes.





    1. Anonymous (login to see details)

      Hi Tahir

      Thank you for your support.  I found two workflows (openDynaTraceClient and openInsidentInDynaTraceClient).  I selected all apps and checked both read and write for Role=Everyone.  However, I still cannot see the "Drill down to CompuwareAPM" option.  Is there any other configuration needed to see the drill-down option?

  57. Anonymous (login to see details)

    Hi Takahiro, 

    You did everything right. Just reporting the correct path for other readers: Settings > Fields > Workflow actions

    In the events list, choose an event and click the down arrow in the left column of the event: https://dl.dropboxusercontent.com/u/59440855/Splunk%20Event%20dT%20Drill%20Down%20Workflow.pdf

    I hope this helps. Thanks,



    1. Anonymous (login to see details)

      Hi Tahir

      Thank you very much for your kind support.  I missed the sentence "In both cases the CompuwareAPM dynaTrace client must be already running on your local machine!" at the bottom of this page.

      AppMon Server and Splunk Enterprise are installed on the same RHEL6 host on AWS, so I can only use the CLI and cannot use the AppMon Client on that host.  Therefore I thought I should configure Workflow actions > URI of Link Configuration; however, the AppMon Client REST Interface seems to allow only localhost (Client REST Interface).

      So in my environment, I think it is impossible to get the "Drill down to CompuwareAPM" option.  Anyway, I really appreciate your help.

      1. Anonymous (login to see details)

        Hi Tahir & Takahiro,

        I am trying to drill down to DynaTrace from Splunk. I am able to see "DrillDown To DynaTrace" in all the apps, but when I try to drill down it says "Unable To open the requested PurePath".

        Could you please tell me if there are any prerequisites for integrating Splunk and DynaTrace?

  58. Anonymous (login to see details)

    Has anyone integrated Dynatrace with Splunk v6.4?




    1. Anonymous (login to see details)

      The "visits on a map" sample dashboard as it exists in version 2.2.3 of the plugin does not work in versions of Splunk greater than 6.2 as the advanced XML dashboard format was deprecated. The data feed into Splunk will continue to function just fine. 

    2. Anonymous (login to see details)

      Yes, we have Dynatrace 6.3.3 integrated with Splunk 6.4 in our Dev environment


  59. Anonymous (login to see details)

    thanks Mike for the note. Much appreciated.

  60. Anonymous (login to see details)

    Just wanted to add a note that this error indicates that java isn't installed or sourced correctly.

    ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/compuwareapm/bin/runFlume.py" Execution failed: [Errno 2] No such file or directory
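A quick check, assuming a POSIX shell, for whether `java` resolves in the environment Splunk actually runs under (the wording of the message is made up for this sketch):

```shell
#!/bin/sh
# splunkd launches scripted inputs with its own (non-interactive) environment,
# so a java set up only in a login profile may not be visible to it. Run this
# as the same user that runs splunkd.
if command -v java >/dev/null 2>&1; then
    java -version 2>&1 | head -n 1
else
    echo "java not found on PATH for user $(id -un)" >&2
fi
```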

  61. Anonymous (login to see details)

    We keep receiving the following errors in the splunkd.log:


    08-09-2016 14:21:32.972 -0400 INFO ViewstateReaper - Failed to reap viewstate flashtimeline:hlo9sfoe (user: nobody, app:compuwareapm, root: /usr/splunkdata/splunk/etc): Not removable: /nobody/compuwareapm/viewstates/flashtimeline:hlo9sfoe
    08-09-2016 14:21:32.972 -0400 INFO ViewstateReaper - Failed to reap viewstate flashtimeline:hloacq2k (user: nobody, app:compuwareapm, root: /usr/splunkdata/splunk/etc): Not removable: /nobody/compuwareapm/viewstates/flashtimeline:hloacq2k
    08-09-2016 14:21:32.972 -0400 INFO ViewstateReaper - Failed to reap viewstate flashtimeline:hloadrx9 (user: nobody, app:compuwareapm, root: /usr/splunkdata/splunk/etc): Not removable: /nobody/compuwareapm/viewstates/flashtimeline:hloadrx9
    08-09-2016 14:21:32.972 -0400 INFO ViewstateReaper - Failed to reap viewstate flashtimeline:hlobdbp9 (user: nobody, app:compuwareapm, root: /usr/splunkdata/splunk/etc): Not removable: /nobody/compuwareapm/viewstates/flashtimeline:hlobdbp9
    08-09-2016 14:21:39.431 -0400 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='/clocal/splunk/user/splunk/SPLUNK_FORWARD/Mainframe_health_check.log'.
    08-09-2016 14:21:39.432 -0400 INFO WatchedFile - Will begin reading at offset=0 for file='/clocal/splunk/user/splunk/SPLUNK_FORWARD/Mainframe_health_check.log'.
    08-09-2016 14:21:58.505 -0400 ERROR ExecProcessor - message from "python /usr/splunkdata/splunk/etc/apps/compuwareapm/bin/runFlume.py" java version "1.8.0_101"
    08-09-2016 14:21:58.506 -0400 ERROR ExecProcessor - message from "python /usr/splunkdata/splunk/etc/apps/compuwareapm/bin/runFlume.py" Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
    08-09-2016 14:21:58.506 -0400 ERROR ExecProcessor - message from "python /usr/splunkdata/splunk/etc/apps/compuwareapm/bin/runFlume.py" Java HotSpot(TM) Server VM (build 25.101-b13, mixed mode)
    08-09-2016 14:21:58.507 -0400 ERROR ExecProcessor - message from "python /usr/splunkdata/splunk/etc/apps/compuwareapm/bin/runFlume.py" Process already running as PID 24706


    The app appears to be running fine. What would be the cause of this?

    1. Anonymous (login to see details)

      Jacob Potter, those messages relating to the ExecProcessor/Python are normal. They are being incorrectly logged at ERROR level, but sadly that cannot be easily changed.

  62. Anonymous (login to see details)

    I was able to remove one by changing stderr to stdout in one part of the script, but the others keep appearing. Are these actually being printed from a different Python library? I cannot find the print statement for the Java messages in the script.


    And what about the ViewstateReaper messages?

  63. Anonymous (login to see details)

    Hi Michael Villiger,

    Will this plugin work with Splunk 6.5?




    1. Anonymous (login to see details)

      Bo, the Splunk app does not currently (as of October 24th) support Splunk 6.5. 

      1. Anonymous (login to see details)

        Hi Mike,  

        What Splunk version do you recommend at this point? We are currently using DT AppMon.

        Also, any idea on the timeline for when this is expected to work well with Splunk 6.5?



        1. Anonymous (login to see details)

          The app is fine up through Splunk 6.3; Dynatrace versioning is unimportant.

          With version 6.4 of Splunk, the visits-on-map dashboard will not function, as it utilizes functionality that has been deprecated by Splunk, but everything else should continue to work.

          Version 6.5 of Splunk introduced some further changes under the covers and the app has not been validated against this version yet. I can't comment on a timeline just yet, but I can reach out when we know more!

          1. Anonymous (login to see details)

            That would be greatly appreciated!



  64. Anonymous (login to see details)

    Hi Michael,

    Thanks for the clarification!



  65. Anonymous (login to see details)

    Hi Michael,

    We raised a case (SUPDT-26333) regarding the DT Client REST interface, as it intermittently returned errors during 'drill down to Dynatrace client' from Splunk.

    We were guided to include the "SP" and "tf" tags in the drill-down workflow, and that seems to have overcome the issue.

    Additionally, AppMon support was kind enough to provide the write-up below on enhancements/improvements to avoid this issue:

    With the current implementation of the Splunk plug-in, it tries to drill down to a PurePath using only the PT, PA, and PS values.

    This results in a timeout when performing the drill-down for some end users.

    The following suggestion was provided by the tech support team:

    The reason for the timeout is that the server doesn't know in which session the data about this PurePath is stored and has to search for it.
    On servers with many stored sessions this can take a while.
    For this reason the "RS" tag (ID of the Recording Stored session) or the "SP" tag (System Profile name) should be specified, in order to speed up the search.

    Note: The AppMon 6.3 documentation already covers the "SP" tag, so you can refer to that.

    This means the Splunk plugin needs to be modified so that, if Continuous Transaction Storage is enabled (as it is by default), the "SP" tag is specified.
    For an additional speedup, a timeframe can be specified within the REST call, so that the server can further reduce the amount of data to search.
    See the documentation (https://community.dynatrace.com/community/display/DOCDT63/REST+Filters#RESTFilters-TimeframeFilter) on how to specify the timeframe.
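For illustration, the extra tags could be assembled into the drill-down URL roughly like this. The endpoint path, port, and filter separator shown here are assumptions for this sketch, not taken from the plugin source; only the PT/SP/tf tag names come from the write-up above:

```python
from urllib.parse import urlencode

def build_drilldown_url(host, port, purepath_id, system_profile, timeframe):
    """Assemble a client REST drill-down URL carrying the SP (System Profile)
    and tf (timeframe) tags alongside the PurePath id, so the server can
    narrow its session search. All values are placeholders."""
    # AppMon REST filters are key:value pairs; the exact separator and
    # endpoint used here are assumptions, not documented behavior.
    filters = "PT:{0};SP:{1};tf:{2}".format(purepath_id, system_profile, timeframe)
    return "http://{0}:{1}/rest/integration/opendt?{2}".format(
        host, port, urlencode({"filter": filters}))

url = build_drilldown_url("localhost", 8030, "42", "MyProfile",
                          "CUSTOM:1470766800000:1470770400000")
```

Specifying SP avoids a full scan of all stored sessions, and tf lets the server restrict the search window further.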



  66. Anonymous (login to see details)


    We want the Transaction Flow details to be passed to Splunk as well, which would help us drill down further.

    Could anyone please let us know whether this is possible?


    1. Anonymous (login to see details)

      Transaction Flow details are currently not available in the data feed we send to external tools such as Splunk. But you can probably extract a lot of the data you find on the Transaction Flow through result measures that you put on the Business Transactions you stream, e.g. Execution Time, Database Time, ...

      Another approach would be to query the Transaction Flow dashlet using our REST API and then push this data to Splunk. That's just a different thought.
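As a rough sketch of that second approach, measures pulled from the REST API could be wrapped into Splunk HTTP Event Collector payloads. The sourcetype name and measure fields below are made up for illustration:

```python
import json

def to_hec_events(measures, sourcetype="dt_transactionflow"):
    """Wrap a list of measure dicts (e.g. parsed from the Transaction Flow
    dashlet's REST XML) into newline-delimited Splunk HTTP Event Collector
    payloads. Field names here are illustrative only."""
    return "\n".join(
        json.dumps({"sourcetype": sourcetype, "event": m}) for m in measures)

payload = to_hec_events([
    {"measure": "Execution Time", "value": 123.4},
    {"measure": "Database Time", "value": 45.6},
])
# POST the payload to https://<splunk-host>:8088/services/collector/event
# with an "Authorization: Splunk <token>" header (token is a placeholder).
```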

      1. Anonymous (login to see details)

        Hi Andreas,

        Thanks for your reply.

        Could you please let me know how we can send Execution Time, Database Time, and the other fields mentioned in your response to Splunk? I am currently sending only a few fields to Splunk.



        1. Anonymous (login to see details)

          The way our integration works is that you select the Business Transactions you want to "stream out" using our "Real Time Data Feed" feature. The BTs marked for the data feed are streamed out including all the measures you have defined in each BT: filters, splittings, and result measures. So what you need to do is define a BT that you want to send to Splunk and add Execution Time, Database Time, etc. as result measures.

          Check out my GitHub project where I showed how our Real Time Data Feed works in general. You will find a screenshot of a BT that I've used and a list of metrics I added to the Result List: https://github.com/Dynatrace/Dynatrace-Real-Time-Data-Feed-Listener

          I hope that helps

          1. Anonymous (login to see details)

            Thanks Andreas, 

            Could you please also provide a link explaining how to add Execution Time, Database Time, and other measures as result measures?

            Thanks again.


            1. Anonymous (login to see details)

              When you edit your Business Transactions you can add result measures; simply pick them from the list of available measures. If you want to learn more about BTs, I suggest watching some of my YouTube videos at http://bit.ly/dttutorials

              There are two about Business Transactions: https://www.youtube.com/watch?v=HdCqPuFCfOQ&index=13&list=PLqt2rd0eew1bmDn54E2_M2uvbhm_WxY_6 & https://www.youtube.com/watch?v=BLz7qc5tstU&index=31&list=PLqt2rd0eew1bmDn54E2_M2uvbhm_WxY_6

              1. Anonymous (login to see details)

                Hi Andreas,

                While drilling down from Splunk to DynaTrace, I am receiving the error below:

                This XML file does not appear to have any style information associated with it. The document tree is shown below.
                <description>ParthIdentifier contained unknown item ''</description>

                Can anyone tell me why this error occurs?


  67. Anonymous (login to see details)

    We use this and it's great.
    Just one comment.

    I looked at this at around 31:41 in the video.
    In my config (flume-conf.properties) I can see the same. I do not think we adjusted these settings.

    # Use a channel which buffers events in memory
    agent1.channels.purepath.type = memory
    agent1.channels.purepath.capacity = 1000000
    agent1.channels.purepath.transactionCapactiy = 1000

    Is this something that should be fixed? "Capactiy" vs. "Capacity"

    Rolf Gunnar

  68. Anonymous (login to see details)

    Is DynaTrace compatible with Splunk 6.5? I heard it's not compatible and has some issues.

    1. Anonymous (login to see details)

      In my environment, AppMon 6.3 seems to work well with Splunk 6.5.

  69. Anonymous (login to see details)

    I have integrated Splunk with DynaTrace as described above and am able to pass some BTs to Splunk. However, when I tried the same thing for UEM, I did not receive any data for user experience. I am using DynaTrace client 6.3.

    Can someone help me?

    1. Anonymous (login to see details)

      Sorry for the confusion; I was checking sourcetype=pp, which is incorrect, and had to check sourcetype=pa instead. Corrected that.

  70. Anonymous (login to see details)


    I have found that for Dynatrace 6.5, the report.xsl file needs to be modified in order for runDashboard.py to run correctly.


    <xsl:apply-templates select="dashboardreport/data/chartdashlet/measures/measure/measurement"/>


    Needs to be changed to:


    <xsl:apply-templates select="dashboardreport/data/chartdashlet/measures/measure/measure/measurement"/>


    For some reason, the XML for the chart dashlet has been updated to include an extra measure tag.
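A quick way to sanity-check the updated XPath. The sample XML below is hand-made to mirror the extra nesting described above, not actual AppMon output:

```python
import xml.etree.ElementTree as ET

sample = """<dashboardreport><data><chartdashlet><measures>
  <measure><measure><measurement value="1.0"/></measure></measure>
</measures></chartdashlet></data></dashboardreport>"""

root = ET.fromstring(sample)  # root IS the dashboardreport element,
                              # so paths below are relative to it
# Pre-6.5 path (one measure level) no longer matches...
old = root.findall("data/chartdashlet/measures/measure/measurement")
# ...while the path with the extra measure level does.
new = root.findall("data/chartdashlet/measures/measure/measure/measurement")
```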

    Michael Villiger


    Jacob Potter



  71. Anonymous (login to see details)

    While drilling down from Splunk to DynaTrace, I am receiving the error below:

    This XML file does not appear to have any style information associated with it. The document tree is shown below.
    <description>ParthIdentifier contained unknown item ''</description>

    Can anyone tell me why this error occurs?

  72. Anonymous (login to see details)

    Hi , 

    I was trying to get dashboard data into Splunk.

    Initially I got an exception, and I also got the exceptions below:

    File "runDashboard.py", line 59, in <module>
    xslt = ET.parse(xsl_file)
    File "lxml.etree.pyx", line 2692, in lxml.etree.parse (src/lxml/lxml.etree.c:49594)
    File "parser.pxi", line 1500, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:71364)
    File "parser.pxi", line 1529, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:71647)
    File "parser.pxi", line 1429, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:70742)
    File "parser.pxi", line 975, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:67740)
    File "parser.pxi", line 539, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:63824)
    File "parser.pxi", line 625, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:64745)
    File "parser.pxi", line 563, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:64060)
    IOError: Error reading file 'bin/report.xsl': failed to load external entity "bin/report.xsl"

    So I edited the script to comment out these lines:

    #ROOT_PATH = os.path.abspath(os.path.dirname(__file__))

    #xsl_file = ROOT_PATH + "/" + "report.xsl"

    Now I am able to run the script without error, but I could not get the data in JSON format.

    I can see the script data & xsl data in Splunk events.


    disabled = false
    interval = 300
    sourcetype = metrics
    index = dynatrace

    Do I need to change anything in inputs.conf or in runDashboard.py?
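For what it's worth, the usual pattern for locating the XSL file independent of the working directory (assuming report.xsl sits next to runDashboard.py, as in the app layout) keeps the absolute-path construction rather than removing it:

```python
import os

# Resolve report.xsl relative to the script file itself rather than the
# process's current working directory, so Splunk's ExecProcessor can
# launch the script from anywhere.
ROOT_PATH = os.path.abspath(os.path.dirname(__file__))
xsl_file = os.path.join(ROOT_PATH, "report.xsl")
```

Commenting the path lines out only works if the script happens to be launched from the directory containing report.xsl.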


    1. Anonymous (login to see details)

      Suganya Nedumaran:

      #1: AppMon does not support JSON as an export format, so the XML parsing functionality in the Python script is a requirement.

      #2: Do you know which version of the Splunk app you are using? runDashboard.py hasn't been updated in quite some time but has no mention of ROOT_PATH, so you might be using a very old version of the app. Perhaps try the latest version, which I have just uploaded.


      hope this helps!


  73. Anonymous (login to see details)

    Hi all,

    We have disabled comments on this plugin page.

    Please use the AppMon & UEM Plugins forum for questions about this plugin.

    Sorry to interrupt ongoing discussions. Please re-post your last question in AppMon & UEM Plugins forum.