
Comments have been closed on this page. Please use the AppMon & UEM Plugins forum for questions about this plugin.

Overview

Previously called "Windows Log File Monitor", but updated to support both Windows and Linux.
The monitor searches a Windows or Linux log file for a text string or regular expression and returns whether a new matching line was found, as well as the number of the last line containing the regex.

The Log File Scraper stores the last result and position in the monitored file in an Oracle, PostgreSQL, or SQL Server database so that it knows where the previous run stopped and does not read the same lines over and over. If the log file rolls over, the monitor detects this and starts from the beginning of the file again. The tables in the database (or AppMon Performance Warehouse) can be created by running the attached scripts, which create the LogFileMonitor and LogRecords tables.
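
As a rough illustration of this bookkeeping, the sketch below shows how the last-read position for a given server/file/search-term combination could be looked up over JDBC. This is only a sketch, not the plugin's actual code: the column names (Server, Directory, Search_Term, Last_Line_Number) are taken from the troubleshooting logs quoted in the comments further down this page, and the connection URL and values are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LastPositionLookup {

    // Returns the line number at which the previous run stopped for this
    // server/file/search-term combination, or 0 if no record exists yet
    // (first run, or the log file rolled over to a new name).
    static int lastLineNumber(Connection con, String server, String directory, String searchTerm)
            throws SQLException {
        String sql = "SELECT Last_Line_Number FROM LogFileMonitor"
                + " WHERE Server = ? AND Directory = ? AND Search_Term = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, server);
            ps.setString(2, directory);
            ps.setString(3, searchTerm);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt("Last_Line_Number") : 0;
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Hypothetical PostgreSQL connection; Oracle or SQL Server URLs work the same way.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/dt_plugins", "user", "password")) {
            int start = lastLineNumber(con, "localhost",
                    "//localhost/myapplication/logs/app.log", ".*Warning.*");
            System.out.println("Resume reading at line " + (start + 1));
        }
    }
}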

Plugin Details

Plug-In Versions

Log File Monitor for Windows and Linux 3.16.4 (for dynaTrace >= 5.5)

Content:

  • com.logfile_3.16.4.jar: Plugin
  • LogFileMonitor.sql: SQL Server Commands to create LogFileMonitor Table
  • LogRecords.sql: SQL Server Commands to create LogRecords Table
  • Oracle_CreateTable.sql: Oracle Commands to create Tables
  • PostGreSQL_CreateTable.sql: PostgreSQL Commands to create Tables

Author

Derek Abing

Joshua Raymond

License

dynaTrace BSD

Support

Not Supported
If you have any questions or suggestions for these plugins, please add a comment to this page, use our forum, or drop us an email at apmcommunity@dynatrace.com!

Known Problems

 

Release History

2013-03-12 Initial Release for Windows Log Files Only

2013-12-18 Log File Monitor for Windows and Linux

2014-10-22 Update for Windows and Linux Monitor - 3.15.5

  • Added support for Oracle and PostgreSQL Databases
  • Improved plugin logging
  • Enhanced insertion of data into LogFileMonitor table

2015-04-09 Patch for Windows and Linux Monitor - 3.15.7

  • Improved logging for FileNotFound exceptions within Windows monitor
  • Improved hints within plugin configuration to indicate Windows monitor requirements

2016-06-19 Update for Windows and Linux Monitor - 3.16.2

  • Added support for SSH Keys
  • Added support for local Collector server logs

2017-03-17 Patches for Windows compatibility

  • Defaulted Windows OS to "Local" connection method (even UNC file share paths are accessed "locally")
  • Fixed issue when using Linux wildcards (regexes)
  • Changed "ls" command parameter order to support Solaris OS

Provided Measures

Line Number: The line number of the last known occurrence of the specified search term.
New Message: Returns 1 if a new entry matching the specified search term was added to the log file.
Number of Messages: The number of lines that matched the Search Term (see the sketch below).
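
To make these measures concrete, here is a minimal, illustrative Java sketch of a per-line regex search over a log file (the file path is hypothetical and the .*Warning.* search term is taken from the configuration example below; the real plugin additionally consults its database so that only lines after the last known position count as new):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;

public class SearchTermDemo {
    public static void main(String[] args) throws IOException {
        Path logFile = Paths.get("/myapplication/logs/app.log"); // hypothetical path
        Pattern searchTerm = Pattern.compile(".*Warning.*");     // per-line search term

        List<String> lines = Files.readAllLines(logFile);
        int numberOfMessages = 0; // "Number of Messages"
        int lineNumber = 0;       // "Line Number" of the last match
        for (int i = 0; i < lines.size(); i++) {
            // matches() must cover the whole line, which is why the leading
            // and trailing .* are needed around the keyword.
            if (searchTerm.matcher(lines.get(i)).matches()) {
                numberOfMessages++;
                lineNumber = i + 1; // line numbers are 1-based
            }
        }
        int newMessage = numberOfMessages > 0 ? 1 : 0; // "New Message"
        System.out.println("Line Number: " + lineNumber);
        System.out.println("New Message: " + newMessage);
        System.out.println("Number of Messages: " + numberOfMessages);
    }
}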

Configuration

  • OS: Dropdown for selecting either Windows or Linux.
  • Connection Type: For Linux connections, determines whether a local log file or a remote host (via SSH) is being monitored, relative to the AppMon Collector.
  • SSH Type: For SSH connections, defines whether a password or a public key is used for the SSH connection. Currently, only .pem keys are supported.
  • Linux Username: Linux username to use.
  • Linux Password: Linux password to use.
  • Directory: The network path to the shared folder on the server that contains the log file. Example: /myapplication/logs/. This entry is combined with the host name to create the network path to the file: //localhost/myapplication/logs/
  • File Regex: If this is selected, the file must be specified as a regular expression. If the directory contains multiple files that match the regex, the newest file is selected (see the file-selection sketch after this list).
  • File: The file that you wish to search. If the File Regex option is checked, this must be a regex. If running on Windows, this file must be shared to the user running the dynaTrace Collector.
  • Search Term: A regex for what you wish to search for within the log file. The search is performed per line. Example: .*Warning.*
  • Database Type: The type of database where the log file scraper entries will be stored.
  • Database Server: Database server that hosts the repository of log file searches.
  • Database Port: Port to connect to on the database server. Default ports are Oracle: 1521, SQL Server: 1433, PostgreSQL: 5432.
  • Database Name: The database to use for the log file repository.
  • Database Username: Username used to connect to the database.
  • Database Password: The password for the username used to connect to the database.
  • Additional Lines: Number of additional lines to include from the file after the log message is found.
  • Skip Additional Records: If this box is checked, records included in the additional lines are skipped for processing.
  • Keep Historical Record: If this box is checked, a historical record of each log message is added to the LogRecords table. This table can then be queried for a list of all log messages matching a specific search and the corresponding timestamps.
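
When the File Regex option is used, the newest-file selection can be pictured with the short sketch below. This is illustrative only, not the plugin's actual code; the directory and regex are hypothetical, and "newest" means most recently modified, as confirmed in the comments later on this page.

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class NewestMatchingFile {

    // Returns the most recently modified file in 'dir' whose name matches
    // 'fileRegex', or null if no file matches.
    static File newestMatch(File dir, String fileRegex) {
        File[] candidates = dir.listFiles((d, name) -> name.matches(fileRegex));
        if (candidates == null || candidates.length == 0) {
            return null;
        }
        return Arrays.stream(candidates)
                .max(Comparator.comparingLong(File::lastModified))
                .orElse(null);
    }

    public static void main(String[] args) {
        File newest = newestMatch(new File("/myapplication/logs"), "logfile.*\\.txt");
        System.out.println(newest != null ? newest.getName() : "No file matching the regex was found");
    }
}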

Installation

Import the plugin into the dynaTrace Server. For details on how to do this, please refer to the dynaTrace documentation.

Contribution

Feel free to contribute any changes on GitHub

  1. Anonymous (login to see details)

    Is the log file monitor plugin capable of monitoring log files on a Linux operating system?

  2. Anonymous (login to see details)

    This version specifically targets Windows only at the moment, but since uploading the plugin we've converted it into a Windows and Linux log file monitor, as well as improving performance with some code tweaks. I'll have the new version uploaded in the coming days.

  3. Anonymous (login to see details)

    Would adding the table to the existing dynaTrace database create any problems?

    1. Anonymous (login to see details)

      Do you mean that you want to add a table to the database used for the Performance Warehouse? It shouldn't be a problem - but - just wondering what you want to do with this table?

      1. Anonymous (login to see details)

        Andreas,

        I think in the above comment, Ben was asking about using the existing dT Performance Warehouse as the log file repository for the Windows Log File Monitoring plugin.

        I am also wondering if this would be possible, as it would prevent us from creating another database on the SQL Server.

        1. Anonymous (login to see details)

          Well - nobody prevents you from using the same database and creating your own tables. As long as you do not alter any of the dynaTrace tables it shouldn't be a problem. But - the plugin has to be changed in a way that it actually writes the data to that database. There is no out-of-the-box option to store output data from a plugin in our database.

          1. Anonymous (login to see details)

            Sorry if I am confused here. Are you saying even if we add these tables to our existing DT PW that the plugin will not function? The plugin screen makes it seem as though we can specify any database to point it at so long as we know the database and host.

             

            1. Anonymous (login to see details)

              Hey Jared,

              The addition to the existing DT PW is optional and should not affect the functionality of the plugin or Dynatrace, as the tables are completely separate from the core Dynatrace schema.  Most users add the tables to their existing database in order to save time by not setting up another database for the additional tables.

              However, the log file scraper plugin does require a SQL Server, Oracle or PostgreSQL DB to work properly.  It will not currently work with DB2.  

              Hopefully this information answers your question.  Please let us know if you have any other questions.

              1. Anonymous (login to see details)

                nope that is fine, we are currently using SQL Server for our performance warehouse so that should work.

  4. Anonymous (login to see details)

    We were looking at the Windows log monitor plug-in, and since this was packaged with it we thought it was necessary. If not, good; if it is, then we were wondering if the table can be created within the existing DT database vs. creating a new one. Let me know if you have questions.

    1. Anonymous (login to see details)

      The monitor plugin doesn't return the actual log text; it just returns a measure that tells you if a certain log entry was found. If you want to extend the monitor to also capture the actual log message and write it to a database table, you can do that. But - there is no way you can show the content of this table with any of the out-of-the-box dashlets we have.
      I would be interested in your use case: why are you interested in the log messages themselves? If you are interested in log entries written by the application you should make use of our Logging Sensor Packs that pick up Log4Net or Log4J calls in your Java/.NET apps. If you use a different logging framework you can create custom sensors for your log methods and capture the log message from the method argument.

  5. Anonymous (login to see details)

    On the most basic side we don't need the actual messages, just a 0/1 for alerting if it shows up. To expand on that, it would be good to have the actual message, time, server and so on to trend problems or developing problems, versus what usually happens, which is an RCA of "network glitch" or whatever.
    Here are a couple of scrubbed examples of the types of errors that are thrown.
    Disconnecting(Faulting) for 'aaaaaa.Server.aaaay.INotifyService' on server 'net.tcp://*Server**:61791/2/Notify/Service'
    Error on initial connection connecting: Cannot connect to any servers in your server list. ConnectionAttemptTimeout ellapsed.

    My ultimate goal is that when specific errors are thrown in the Windows application log we can trigger a recycle of that Windows service to prevent service interruptions.

  6. Anonymous (login to see details)

     

    Did you ever complete the conversion to a Unix log file monitor?  We are a Unix shop, and having the capability to monitor various log files would be very useful. Also, is there any way to return the number of occurrences since the last check?

  7. Anonymous (login to see details)

    Don,

    Try the new updated plugin, now called just Log File Monitor because it supports Windows and Linux.

  8. Anonymous (login to see details)

    I have set up the monitor, but I am getting the following error

    com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'WindowsLogFile'.

    in the log files.

    Can you please help.

    1. Anonymous (login to see details)

      Did you create the tables required by this plugin using the attached SQL scripts? Please see the description of the plugin, which explains that the plugin needs tables in that database.

      1. Anonymous (login to see details)

        Yes, I did set up the table and made sure the name is the same as in the plugin configuration, but the error was still appearing.

  9. Anonymous (login to see details)

    Are you using the Log File Monitor for Windows and Linux plugin or the Windows Log File Monitor 3.14.15 plugin?

    The Windows Log File Monitor 3.14.15 plugin will look for a table called WindowsLogFile.
    The Log File Monitor for Windows and Linux plugin will look for tables called LogFileMonitors and LogRecords.

  10. Anonymous (login to see details)

    Do you have a version of the table creating scripts for ORACLE?

    We don't run on SQLServer.

  11. Anonymous (login to see details)

    Can the plugin only read logfiles from the collector it's running on, or can it read logfiles from remote servers via SMB?

  12. Anonymous (login to see details)

    Rasmus,

    Yes, it connects remotely to other servers via SMB.

  13. Anonymous (login to see details)

    The plugin is not working for me and I am not getting any error messages in the log... 

  14. Anonymous (login to see details)

    Hey Derek,

    I successfully modified the plugin to work with PostgreSQL databases.  However, I am having difficulty modifying it to work with Oracle databases.  Would you have time to quickly review the current code base to see if we can get it working with Oracle DBs?  If so, please email me at Joshua.Raymond@Compuware.com

    Thanks

  15. Anonymous (login to see details)

    I'm having difficulty getting this to work with Linux. The plugin is installed. I'm using a Linux collector to execute the monitor. The plugin says there are no new messages found, but I'm testing it with words that I know are in fact in the log file. I've also tried a non-regex search term and get the same result. (Everything is green on my monitor screen, so the plugin and monitor appear to be working fine.)

    Does anyone have an example of configuration screen that works for you on Linux?

     

    1. Anonymous (login to see details)

      Hey Allan,

      It looks like the monitor is able to access the log; otherwise you would get an error stating that the log file could not be found.  Have you checked your regex?  Keep in mind that the log file monitor looks at one line of the file at a time; therefore, if there are any words before or after the keyword, there needs to be regex to represent that, e.g. .*info.*

      1. Anonymous (login to see details)

        Thanks! It was my regex (which I'm very unfamiliar with). I needed to add .* before the keyword.

  16. Anonymous (login to see details)

    I'm currently trying to configure alerting based on the results. The issue here is that once a flagged word has been found, Dynatrace will continue to alert indefinitely. It looked like "New Message" was intended to reset every polling interval, but, it just stays at "1" once a keyword is found. 

    Is there a way to set up Dynatrace to alert only when something has been found since the last poll?

    1. Anonymous (login to see details)

      Never mind. I had to set up the database. Problem solved. 

  17. Anonymous (login to see details)

    Hello,

    I've installed and configured the plugin. The monitor detail window shows an error, but I can't find the problem. The log file doesn't show any error.

    2014-12-18 11:13:46 INFO [com.logfile.WP@DB_AlertLog_prüfen_0] Connecting to \\Blnhrz503\g$\database\MBUST\diag\rdbms\mbust\mbust\trace\alert_mbust.log on blnhrz503...

    Does anyone have an idea?

     

     

     

    1. Anonymous (login to see details)

      Are you reading a log file on a Windows or Linux server? You need to execute log reading from a Windows collector if reading a log on a Windows server, and a Linux collector if reading a log file on a Linux server. 

      Make sure your collector has permission to read the log file, for example, you can try setting 755 permissions on a log directory in Linux. 

      Make sure you've set up your database for use with this plugin. 

      1. Anonymous (login to see details)

        The log file is located on a Windows server. The collector is also the appropriate Windows-installed one, and it has permission to read the log file.

        Can you tell me how to set up the database for use with this plugin?

        1. Anonymous (login to see details)

          Hi Marcus. The SQL files to create these tables in your database are attached to this plugin page. The authors provided these scripts for SQL Server, Oracle and Postgres. Scroll to the plugin details and you will find these links.

          1. Anonymous (login to see details)

            Thanks, now it works.

            The response from Joshua A. Raymond solved the problem.

  18. Anonymous (login to see details)

    Hey Marcus,

    This is likely an access issue with the log file you are trying to monitor as Allan had mentioned in the above comment.  If the log was successfully reached and analyzed, you would see additional log entries such as the ones below:

    2014-12-02 09:29:37 INFO [com.logfile.WP@Example Log File Monitor_0] Connecting to \Log File Scraper\Test.log on testhost...
    2014-12-02 09:29:37 INFO [com.logfile.WP@Example Log File Monitor_0] Log ID 1: No Record for current search in database. A new record was created
    2014-12-02 09:29:37 INFO [com.logfile.WP@Example Log File Monitor_0] Log ID 1: New Message Found!
    2014-12-02 09:29:37 INFO [com.logfile.WP@Example Log File Monitor_0] Log ID 1: Updated summary data within the LogFileMonitor table

    Also, it looks like you inserted the host name "Blnhrz503" to the front of the directory path.  Please remove this host name and instead add the host to the Hosts section of the monitor configuration. 

    1. Anonymous (login to see details)

      Thanks, now it works.

      "Please remove this host name and instead add the host to the Hosts section of the monitor configuration." was the solution.

  19. Anonymous (login to see details)

    Hi,

     

    Can this plugin cater for log rotation? For example, we have a logfile called logfile00001.txt, and a new logfile gets created every 30 minutes - let's say it is called logfile00002.txt. By using the regex for the file name I can get the plugin to look at all logfiles using the expression logfile.*; however, if it writes to the table that the last line was, for example, line 200, it will still try to read from line 200 on the new logfile called logfile00002.txt.

     

    Sorry, I hope my explanation makes sense.

     

    Thanks for the great plugin though great stuff!

     

    Richard 

  20. Anonymous (login to see details)

    Hey Richard,

    That question makes perfect sense.  The log file monitor will look for files that match the regex and will analyze the file with the most recently modified timestamp, thereby using the newest file.  Additionally, since the name of the file changed, the line at which the search begins should restart at zero. 

    Hopefully this answers your question! Just let me know if you have any further questions/comments.

    1. Anonymous (login to see details)

      The file name changes, but it still logs an entry in the LogFileMonitor table under the "Directory" column with the regex used - it doesn't actually store the filename and the line it was last on. How does it know that the filename has changed and that it needs to start over at line 1?

       

       

      1. Anonymous (login to see details)

        Hey Richard,

        There is logic built into the plugin that will compare the directory column, which includes the file name, in order to determine if that file/search term/server name combination has been checked before.  If it has not, then a new record will be added to the logfilemonitor table and the last line number will restart at 0.  If it has, then it will retrieve the last_line_number entry, indicating the line that was checked last, from the logfilemonitor table in order to resume the search.

        Looking into the code base a bit further, the scenario you are describing where it is possible that it will not start over at line 1 could occur when the results are stored in an Oracle database as opposed to PostgreSQL or SQL Server.  It appears that the directory column is not used in the comparison as described above and only the search term and the server name are used.  Are you using Oracle to store your results?

        Please see an example of the logfilemonitor table below:

        1. Anonymous (login to see details)

          Thanks for the response, Joshua. I am unable to attach an image to my post here, so I cannot show you what the table looks like in my DB. We are using SQL Server, not PostgreSQL or Oracle.

           

          Is it normal for the plugin to store the regular expression used in the Directory column in the DB? I would have assumed that it would store the actual file directory instead.

           

          For one of the entries it's storing [\\servername\Genesys_Logs\logs\.*urs.*] under the Directory column, which is my regular expression, so I don't see anything being stored in the DB that would make the file a unique entry for which to store the Last_Line_Number.

           

          1. Anonymous (login to see details)

            Hey Richard,

            Thanks for the update.  Will you please email me a screenshot of your log file monitor configuration as well as the entries within the logfilemonitor table?  That would certainly help us determine what exactly is going on here.  My email address is Joshua.Raymond@dynatrace.com.

            Thanks,

            Joshua Raymond

            1. Anonymous (login to see details)

              Hi Joshua, I retested this and it's working now. I was testing this by copying and pasting a logfile to try to simulate log rotation, but it looks like your plugin looks at the last modified date of a file, not the newest file by creation date.

              Thanks for all the help!

              Richard

  21. Anonymous (login to see details)

    Hi

    This is ES onsite trying to install the log monitoring plugin for a customer, and I get the following error:

    Schedule Details
    Name: otlt3q5cweb08
    Status: scheduled
    Description: 
    Type: Log File Scraper
    Last Run: 09:00:15 (PST)
    Last Run Result: failed
    Next Run: 13:28:45 (PST)
    Current Run: -
    Duration (last run): -
    Schedule: Every 10 Minutes
    Execution Target: dynaTrace_Collector@otlp8r5ctol04
    Plugin Active: active
    Result Status: failed
    Result: Last polling execution failed
    Reason for failure: Error during execution of plug in occurred
    Error message: Executing monitor caused error

    Please help

    regards

    Ibrahim

     

    1. Anonymous (login to see details)

      Hey Ibrahim,

      Did you ensure that the user who is running the Collector and therefore executing the log file monitor has access to the file?  This will likely be "LOCAL_SERVICE" on Windows and should be changed to a user who has sufficient privileges to access the desired log files.

      Thanks,

      Josh Raymond

  22. Anonymous (login to see details)

    Hi, I have tried a fresh install of Log File Scraper 3.5.17 on DT 6.1 (which otherwise writes Performance Warehouse data into Postgres with no issues).

    For the plugin though I am getting:

    2015-04-16 11:02:20 INFO [com.logfile.WP@APE Monitor_0] Log ID 0: No New Message
    2015-04-16 11:02:20 WARNING [com.logfile.WP@APE Monitor_0] java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:/dt_plugins?user=ZZZ&password=XXX

    meaning that the plugin cannot reach the DB that has been set up for it.

    Am I missing something driver-wise for DT6+ ?

    Regards, Thomas

     

    1. Anonymous (login to see details)

      Hey Thomas,

      The plugin is packaged with its own driver for Postgres and does not use the driver included with dynaTrace.  What version of Postgres are you using in your instance?  It sounds like this is a simple matter of updating the driver within the plugin.

      Thanks,

      Josh Raymond

      1. Anonymous (login to see details)

        Hi Josh, thanks!

        we are on rh-postgresql92, how would I upgrade the driver?

        Regards, Thomas

  23. Anonymous (login to see details)

    Hey Thomas,

    Updating the drivers would require you to check out the plugin and add the necessary libraries.  Since this is a multi-step process, I took the liberty of adding the updated driver for you.  To be more specific, I added the JDBC41 PostgreSQL driver, version 9.4-1201, to the build from the PostgreSQL website.  Can you please try using the updated version of the plugin by downloading it from the link below?

    Log File Monitor Plugin - Version 3.15.8

    Please let me know if you have any issues with this new version or if the new driver does not work for you.

    Thanks,

    Josh Raymond

  24. Anonymous (login to see details)

    Thanks Josh, that was quick! - unfortunately still getting the same:

    I removed the old version, added the new one, restarted the collector, and am getting the trace below. Is it not the same issue as here: http://stackoverflow.com/posts/16696772/revisions ?

    Regards, Thomas

     

    2015-04-16 15:40:49 FINER [Driver@APE Monitor_0] ENTRY Arguments not traced.

    2015-04-16 15:40:49 FINER [Driver@APE Monitor_0] RETURN null

    2015-04-16 15:40:49 WARNING [com.logfile.WP@APE Monitor_0] java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:/dt_plugins?user=un&password=pw

    at java.sql.DriverManager.getConnection(DriverManager.java:596)

    at java.sql.DriverManager.getConnection(DriverManager.java:233)

    at com.logfile.LogFile.updateData(LogFile.java:229)

    at com.logfile.WP.execute(WP.java:162)

    at com.dynatrace.diagnostics.sdk.UserPluginManager.executePlugin(SourceFile:565)

    at com.dynatrace.diagnostics.sdk.MonitorPluginExecutor.execute(SourceFile:51)

    at com.dynatrace.diagnostics.sdk.MonitorPluginExecutor.execute(SourceFile:26)

    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.a(SourceFile:189)

    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.a(SourceFile:412)

    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.execute(SourceFile:336)

    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.a(SourceFile:101)

    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.work(SourceFile:92)

    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.executeJobInfo(SourceFile:241)

    at com.dynatrace.diagnostics.scheduling.impl.QuartzJob.execute(SourceFile:45)

    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)

    at com.dynatrace.diagnostics.scheduling.impl.QuartzThreadPool$WorkerThread.run(SourceFile:788)

  25. Anonymous (login to see details)

    Hey Thomas,

    Looking into the code base, it appears to still be loading the older PostgreSQL driver instead of the updated one.  In order to test out this theory, I plan on removing the old driver and only including the new one within the build.  I will send you an updated copy of the plugin once complete.  Until then, please feel free to email me with any questions / concerns at Joshua.Raymond@dynatrace.com.

  26. Anonymous (login to see details)

    Hey Thomas,

    I have removed and tested the new plugin that only includes the updated driver.  Can you please try this updated plugin out?

    Log File Monitor - Version 3.15.8 Updated

    Please let me know the results of this updated plugin.

    Thanks,

    Josh Raymond

  27. Anonymous (login to see details)

    Hi Josh,

    I eventually found a (stupid) mistake of mine: I had not entered :5432, presuming that the plugin would pick up a 'default' PG port, which was not the case.

    Supplying port 5432 resolves the driver loading problem. The error message is misleading though:

     

    2015-04-16 11:02:20 WARNING [com.logfile.WP@APE Monitor_0] java.sql.SQLException: No suitable driver found for jdbc:postgresql:[port missing]//localhost:/dt_plugins?

     

    Thanks for your help,

    Cheers, Thomas

     

     

    1. Anonymous (login to see details)

      Hey Thomas,

      No problem, I'm happy to hear the issue was resolved!  I agree that the error message doesn't exactly fit the root cause of the issue.  Within the next release of the plugin, I'll work to change the port within the plugin configuration automatically depending upon which database is selected.  Additionally, I'll work to add some logic to handle the misconfiguration of the port so that a better error message is produced.

      Please let me know if you experience any additional issues with the plugin.

      Thanks,

      Josh Raymond

  28. Anonymous (login to see details)

    Hi Josh,

    I am using Dynatrace 6.1 and am unable to get the plugin to work.

    Using Oracle. Created the table in the same DB used by the Dynatrace Performance Warehouse.

    Get the following error.

    External error occurred

    Target service unreachable

    No file found, please ensure that the file is shared and that access to the file is available from the user running the collector.

    I have checked access from the server running the collector. (I tried changing the collector service to use an account that has full access to the remote server.)

     

    Is there a log somewhere that I can look at?

  29. Anonymous (login to see details)

    Hey Nanda,

    It looks like the collector cannot access the log file.  Since you checked the access from the Collector and ensured that the log file was shared, it may be possible that the name or path to the file within the monitor configuration is incorrect.  Please keep in mind that the monitor looks for the network path for the file (//host/shared/directory/file) instead of the local path (C:/shared/directory/file).

    Additionally, there is a log file created for the monitor, and it is available under the Collector running it within System Information in the dynaTrace Client.

    Please let me know if you need more information on how the monitor is configured or the location of the log file.

    Thanks,

    Josh Raymond

  30. Anonymous (login to see details)

    Hi

    We are trying to use the log monitor plug-in to report the existence of a string (for example ORA). We are able to execute the plug-in successfully, but the New Message measure always returns 0.0, even if the text exists.

    Attaching are screen captures of the results, and the plug-in configuration

     

    In the above configuration for Search Term I have tried using ORA, ORA*, .*ORA, *ORA*, etc., but no luck.

    Need your help

  31. Anonymous (login to see details)

    Hey Mohammed,

    The search term looks at the entire line of the log file, therefore you will need to take that into account with the regex.  I would try using the following regex criteria:

    .*ORA.*

    This regex will allow for any text before or after the ORA string, allowing the ORA string to be in the middle of a line within the log file.

    Hopefully this helps!

    Thanks,

    Josh Raymond 

  32. Anonymous (login to see details)

    Hi Josh

    I did try .*ORA.* and also .*ORA*.* but no luck

    not sure how to proceed

    Regards

    Ibrahim

  33. Anonymous (login to see details)

    Hey Mohammed,

    Is the text case sensitive by chance?  Also, feel free to send me the Log File Monitor log files and, if possible, an example log file so that I can take a closer look to see why the string is not being picked up.  My email address is Joshua.Raymond@dynatrace.com.

    Thanks,

    Josh Raymond

  34. Anonymous (login to see details)

    Is the log file monitor plugin capable of monitoring log files on a Solaris operating system?

    Regards

    Ibrahim

  35. Anonymous (login to see details)

    Hey Mohammed,

    Unfortunately, to my knowledge, the log file monitor has not been tested on Solaris systems.  However, I don't foresee any issues with Solaris, as we simply use an SSH connection and then read the file using the java.io.File class.

    Therefore, test it out and let me know if you run into any issues.

    Thanks,

    Josh Raymond

    1. Anonymous (login to see details)

      Hi Josh

      I have tested with Solaris and had no success. I am attaching the plugin log, the monitored text file, and screen captures of the results and configuration for your perusal. Please examine them and let me know if there are any configuration mistakes on my side.

      alert_MFUNDPRD1.log

       

      Regards

      Ibrahim

  36. Anonymous (login to see details)

    Hey Ibrahim,

    Could you please email me the log file monitor log?  That will be the most useful when determining why no information is getting captured.

    Thanks,

    Josh Raymond

  37. Anonymous (login to see details)

    Hi Joshua

    I have sent the file as attachment to your email

    Regards

    Ibrahim

  38. Anonymous (login to see details)

    I am using a Windows collector and the logs are on a Windows server. Is it sufficient if I provide read-only access to the log files? They should then be accessible by the collector, right?

  39. Anonymous (login to see details)

    Hi Keerti,

    Yes, read-only access to the files for the user running the Windows collector should be sufficient.  Additionally, you will need to make sure the files are shared and accessible by the server running the collector.  The shared address of the file will then be used within the Log File Monitor configuration.

    If you encounter any issues with the configuration, please feel free to post to this page (smile)

    Thanks,

    Josh Raymond

    1. Anonymous (login to see details)

      Hi Joshua,

      I am on a Windows system and Windows collector. I am getting the same error as Nanda.

      Get the following error: External error occurred, Target service unreachable, No file found, please ensure that the file is shared and that access to the file is available from the user running the collector.

      Do I have to specify the path like this: \\host\shared\directory\ with the collector name in the hosts? Or like this: \shared\directory\ with the shared hostname in the hosts?

       

      Thanks,

      Keerti

  40. Anonymous (login to see details)

    Hey Keerti,

    The path within the monitor configuration should not include the host name as this is automatically added.  Instead, the configuration should only contain the directory to the shared file as below:

    \shared\directory\

    Also, a good test to make sure the file is accessible from the Collector is to try to navigate to the shared directory within Windows Explorer on the Collector.  If you are not able to access the file through Windows Explorer, then there is likely a sharing issue with the log file.

    Thanks,

    Josh Raymond

  41. Anonymous (login to see details)

    Hi Joshua,

    The plugin worked when I placed a log file locally on the collector. But it is not working when I share the files from different hosts on the network with read permissions to the collector account. I am getting the below error:

    2015-07-01 14:40:14 INFO [com.logfile.WP@sirens: *ERROR*_0] Connecting to \logs\e1root_*.log on sirens...
    2015-07-01 14:40:14 INFO [com.logfile.WP@sirens: An exception has been caught by the Web client_0] Connecting to \logs\e1root_*.log on sirens...

    Can you please help me find the issue?

    Keerti

  42. Anonymous (login to see details)

    Hey Keerti,

    From the log message, it appears that the Collector cannot connect to the log file on the remote server.  Are you able to confirm that the user account running the dynaTrace Collector is able to log into the remote server and access the shared log files?

    Also, testing the access to the remote log files from the Collectors file system is a good test for figuring out if the Collector can see the shared files.

    Please let me know the results of these tests.

    Thanks,

    Josh

  43. Anonymous (login to see details)

    Yes, I am able to access and read the remote logfiles from the Collector's host. 

  44. Anonymous (login to see details)

    Thanks for the information Keerti.  Can you please email me the complete log file for the monitor from the Collector to Joshua.Raymond@dynatrace.com?  Additionally, please include screenshots of your current monitor configuration.

    Thanks,

    Josh Raymond

  45. Anonymous (login to see details)

  46. Anonymous (login to see details)

    Hello Derek Abing and Joshua Raymond,

    Is it possible to modify the plugin so that the user can determine how the connection is made? For example, I am currently onsite at a customer who would like to use the plugin to read logfiles from a server. However, the challenge is that the server does not allow an SSH connection with username and password; it requires a key. On the other hand, one can connect to the server with username and password, but then the port needs to be different (1023) and, instead of SSH, it should use Telnet.

    Second question: I noticed that the log monitor only works with an Oracle, MS SQL or Postgres database. This customer prefers DB2; is it possible to have a database script for DB2? At the moment we are using the Postgres database.

    Kind regards,

    Mark

     

  47. Anonymous (login to see details)

    Hey Mark,

    Modifying the plugin to make a different connection type is possible since, as you've already noticed, only SSH connections are allowed currently.  Public key connections could be an option but would require some additional coding to make it an option within the plugin.

    Additionally, DB2 support is an item that I was working on in the past, but since there was low demand and the work required to add the support was high, DB2 support was never added.  If it is something that is required by the customer, then it is an item I can work to add as time permits.

    What is the timeline for the customer?  Are they more curious about the plugin capabilities or are these items must-haves?

    Please feel free to email me directly at joshua.raymond@dynatrace.com

    Thanks,

    Josh

  48. Anonymous (login to see details)

    Hi Joshua,

    Is there any update on the issue which Ibrahim posted? Do we need to provide any other information on this?

    Thanks,

    Aravindhan

    1. Anonymous (login to see details)

      Hey Aravindhan,

      Are you referring to the Solaris system support?  This issue was resolved by sharing the log file with the user that is running the Collector.

      Thanks,

      Josh

  49. Anonymous (login to see details)

    I have installed the plugin, set up the database, and configured the monitor. But when I execute it I receive the following error:

    Result Status: failed
    Result: 1 host queried, 1 failed or unknown, 0 partially failed and 0 successful.
    Result: Detailed information of failed last task/monitor execution is no longer available. Please trigger a new run.

    I have verified that the plugin is connecting to and entering data in the database. The collector does have access to the share. (I'm just testing it right now by connecting to itself and searching the server.0.0.log file.) I have the log file shared, and have configured the monitor to connect to that share. I then modified the plugin logging to "FINER", and re-ran the test several times. All resulted in the same error. However, I cannot seem to find the log file for the plugin on the dynaTrace server.

    Could you please point me to where the log file would be stored? I have looked in both "\Program Files\dynaTrace\dynaTrace 6.2\log\server" and "Program Files\dynaTrace\dynaTrace 6.2\log\collector\dynaTrace Collector". 

    Thanks

    1. Anonymous (login to see details)

      Hi George

      Log files of plugins should be available through the System Information dashlet. Open that dashlet and then navigate to the Collector that executes your Log File Monitor. Under the Log File section you should find the log file of the logfilemonitor plugin.

      Andi

      1. Anonymous (login to see details)

        Thank you! I found the log file. I still don't know why the run is failing though. Do you see any errors below?

        2015-11-30 14:53:57 INFO [com.logfile.WP@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 INFO [com.logfile.WP@dynaTrace_Log Scraper_0] Connecting to \server\Server.0.0.log on brksvw366...
        2015-11-30 14:53:57 FINER [Driver@dynaTrace_Log Scraper_0] ENTRY Arguments not traced.
        2015-11-30 14:53:57 FINE [SQLServerDriver@dynaTrace_Log Scraper_0] Property:serverName Value:BRKCLW20-DB1
        2015-11-30 14:53:57 FINE [SQLServerDriver@dynaTrace_Log Scraper_0] Property:instanceName Value:BRKSQL01
        2015-11-30 14:53:57 FINE [SQLServerDriver@dynaTrace_Log Scraper_0] Property:portNumber Value:1451
        2015-11-30 14:53:57 FINE [SQLServerDriver@dynaTrace_Log Scraper_0] Property:databaseName Value:dynaTracePluginDB
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 created by (SQLServerDriver:1)
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 Start time: 1448916837156 Time out time: 1448916852156 Timeout Unit Interval: 15000
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 This attempt server name: BRKCLW20-DB1 port: 1451 InstanceName: brksql01 useParallel: false
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 This attempt endtime: 1448916852156
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 This attempt No: 0
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 Connecting with server: BRKCLW20-DB1 port: 1451 Timeout slice: 15000 Timeout Full: 15
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27): Opening TCP socket...
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 Requesting encryption level:OFF
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 ActivityId fef5671b-a3ed-4a97-a73d-51065bb1e15c-7
        2015-11-30 14:53:57 FINE [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 Server returned major version:10
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 Negotiated encryption level:OFF
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) Enabling SSL...
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) SSL handshake will trust any certificate
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) Starting SSL handshake
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) (PermissiveX509TrustManager): Trusting server certificate
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) SSL enabled
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) Disabling SSL...
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) Closing SSL socket
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27) SSL disabled
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 Ignored env change: 2
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 Network packet size is 8000 bytes
        2015-11-30 14:53:57 FINER [SQLServerConnection@dynaTrace_Log Scraper_0] ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912 End of connect
        2015-11-30 14:53:57 FINER [Driver@dynaTrace_Log Scraper_0] RETURN ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] ENTRY 1,003 1,007
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] ENTRY adaptive
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] RETURN
        2015-11-30 14:53:57 FINER [SQLServerStatement@dynaTrace_Log Scraper_0] Properties for SQLServerStatement:28: Result type:1003 (2003) Concurrency:1007 Fetchsize:128 bIsClosed:false useLastUpdateCount:true
        2015-11-30 14:53:57 FINE [SQLServerStatement@dynaTrace_Log Scraper_0] SQLServerStatement:28 created by (ConnectionID:27 ClientConnectionId: ba326cd6-c9ba-4bb5-8837-2e8ad0e6c912)
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] RETURN SQLServerStatement:28
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] ENTRY select * from LogFileMonitor where Server='brksvw366' and Directory='\\brksvw366\server\Server.0.0.log' and Search_Term='*shutdown command*';
        2015-11-30 14:53:57 FINE [SQLServerStatement@dynaTrace_Log Scraper_0] SQLServerStatement:28 Executing (not server cursor) select * from LogFileMonitor where Server='brksvw366' and Directory='\\brksvw366\server\Server.0.0.log' and Search_Term='*shutdown command*';
        2015-11-30 14:53:57 FINE [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 created by (SQLServerStatement:28)
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] RETURN SQLServerResultSet:28
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 currentRow:0 numFetchedRows:0 rowCount:-3
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN true
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY Line_Count
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN 3
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 Getting Column:3
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY Last_Line_Number
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN 4
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 Getting Column:4
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY Last_Line_Number
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN 4
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY LogID
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN 1
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 Getting Column:1
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [SQLServerResultSet@dynaTrace_Log Scraper_0] SQLServerResultSet:28 currentRow:1 numFetchedRows:1 rowCount:-3
        2015-11-30 14:53:57 FINER [InputStream@dynaTrace_Log Scraper_0] com.microsoft.sqlserver.jdbc.PLPInputStreamID:80 closing the adaptive stream.
        2015-11-30 14:53:57 FINER [InputStream@dynaTrace_Log Scraper_0] com.microsoft.sqlserver.jdbc.PLPInputStreamID:81 closing the adaptive stream.
        2015-11-30 14:53:57 FINER [InputStream@dynaTrace_Log Scraper_0] com.microsoft.sqlserver.jdbc.PLPInputStreamID:82 closing the adaptive stream.
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN false
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [ResultSet@dynaTrace_Log Scraper_0] RETURN
        2015-11-30 14:53:57 FINER [Statement@dynaTrace_Log Scraper_0] RETURN
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] Previous message was repeated 1 times.
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] ENTRY
        2015-11-30 14:53:57 FINER [Channel@dynaTrace_Log Scraper_0] TDSChannel (ConnectionID:27): Closing TCP socket...
        2015-11-30 14:53:57 FINER [Connection@dynaTrace_Log Scraper_0] RETURN

  50. Anonymous (login to see details)

    Hi George,

    Looking at the log file entries above, it looks like the monitor is connecting to the monitored log file and performing the initial query to the SQL Server database for its last known location within the log file.  I don't see any issues with the above log entries.  Is there more to the log file than just the above entries?  Also, are you using the latest version of the log file monitor?  We added additional logging to 3.15.7 in order to allow for more effective troubleshooting of issues.

    Thanks,

    Josh

    1. Anonymous (login to see details)

      Yes, it looks like we have the latest version installed. This was the section of the log file for the latest run that I triggered. I could email you the whole file if that would help?

  51. Anonymous (login to see details)

    Hey George, 

    Yes, the entire log file would certainly help.  Can you please email it to me at Joshua.Raymond@dynatrace.com?

    Thanks!

  52. Anonymous (login to see details)

    Hi Joshua,

    We have multiple .log files that keep getting generated and we are looking for strings in *.log files. It is very confusing to work out which log file contains the error.  I printed out all parameters available in the Extended Email Plugin, but none of the parameters printed out the file name. Is there a way I can print out the file name as well in the output?

    Thanks,

    Keerti

  53. Anonymous (login to see details)

    Hey Keerti, 

    Unfortunately the log file name is not printed within the measures section in the current version of the plugin.  However, it may be possible to add the log file name by developing dynamic series for each log file within the plugin, which in turn would allow you to get the same list of measures per file name instead of aggregated.  Unfortunately, this would take some time to develop.

    Also, I could not find a workaround in the meantime.  I'll try to develop this function as soon as possible, time permitting, as I see it as very useful for most users.

    Thanks for the feedback!

    Josh

    1. Anonymous (login to see details)

      Thanks a lot Joshua! Appreciate it. It would be really useful. (smile)

  54. Anonymous (login to see details)

    Wondering if there are any plans to make this plugin available for other unix type systems? In our case AIX.

  55. Anonymous (login to see details)

    Hey John,

    Currently I don't see any limitations with using the plugin for an AIX system.  The plugin works by using SSH to log in to the server to retrieve the log file from the configured directory, then parsing it internally for a specified regex string line by line.  Therefore, I would recommend testing the Log File Monitor within a non-production environment to determine if it works as expected.  If it does not, and you encounter an issue, please let me know and I would be happy to assist.

    Thanks,

     Josh

    1. Anonymous (login to see details)

      Thanks Josh

      I will keep you posted.  Just be patient. May take some time. Priorities and all.

  56. Anonymous (login to see details)

    Hi,

    Does the log file monitor support the csv format?

  57. Anonymous (login to see details)

    Hey Katlego,

    Since the log file monitor works by searching a file line by line and does not specify that the file needs to be a .log file, I don't see any reason why it wouldn't work with a .csv file.  The only caveat from my perspective is that the monitor will not recognize or try to parse the delimited values as individual items to search.  Instead, as stated previously, it will search line by line within the file, not item by item.

    Please let me know if you have any questions and if the log file monitor does not work with a csv format.

    Thanks,

    Josh

  58. Anonymous (login to see details)

    Will this work with files that have no file type on a Windows server? The files can be opened in a text editor and plainly read, but the file itself has no extension, so it just shows as type: file.

    1. Anonymous (login to see details)

      Hi, this will work with any type of file!

  59. Anonymous (login to see details)

    I'm having an issue connecting, but the logs don't seem to be giving me what I need. At Server > Plugins I have the logging set to Finer. I navigate to my collector running the monitor > logfiles > com.logfile.monitor.0.0.log. Does this mean it's not reaching the server, or that the regex is incorrect? The regex is just a basic \d{5}, as the log numbers change to various 5-digit formats.

    I get the following information in the log and nothing more each time:

    2016-04-25 16:22:00 INFO [com.logfile.WP@New Log File Scraper_0] Connecting to \Program Files\XXXXXX\XXXX....on Server Name...

     

    The Plugin shows the following:

     

    Result Status: Failed

    Result: Last polling execution failed

    Reason for failure: External error occurred

    Error message: Target service unreachable

    Detailed error message: No File matching Regex was found

    EDIT: I also tried doing [0-9]{5} in case there was an issue with the \d, but it seems both give me little information in the log.

  60. Anonymous (login to see details)

    Hey Jared, 

    Yes, unfortunately that message means that it is not reaching the server.  Have you ensured that the directory you are trying to reach is shared and you are using the network share path?  If so, what is the exact regex you are using?

    Thanks,

    Josh

    1. Anonymous (login to see details)

      The regex I am using is just \d{5} or [0-9]{5} for the file name; I've tried both. Then for the actual search term I am using: (Failed VM backups: )([1-9]{1,4})

      I feel that it may be a sharing issue; I am going to speak with the file owner regarding access, as I see my account has read access to the folders/files. I am able to log into the server as the service account as well as open the files, but the sharing tab shows that it is not shown. I was mostly unsure what was going on, since the logs were not giving me very descriptive information.

  61. Anonymous (login to see details)

    Sounds good Jared.  Ya, it seems like a sharing issue to me as well.

    Also, it sounds like I'll need to update the log output to be a little more descriptive in future releases.  I've made previous updates to the logs in order to help provide more descriptive troubleshooting output but unfortunately it sounds like there are still gaps. Certainly appreciate the feedback.

    Just let me know what you find out with regard to the sharing issue, and I would be happy to assist further if there are still issues.

  62. Anonymous (login to see details)

    Hi,

    I'm testing the plugin on a Solaris system and had a look at the source code, to make it work in my case :

    • first of all, the filename should not be specified as a RegEx, but as a normal OS wildcard, otherwise the files aren't found.
    • the "-ltc" parameter on the ls command (in source) must be specified BEFORE the filename (this order is important on some Solaris flavor [non-gnu compliant]) - I'll try to upload my source updates to GIT asap
    • I'll need to review the db logging code, as it records the file wildcard name, and not the actual file name.

    I've also noticed in the source file that only the latest/youngest log file is checked? It could happen that important log statements are missed when the log file rotates between 2 monitor polls.

    1. Anonymous (login to see details)

      Hey Jeroen,

      Thanks for the feedback on how to optimize and improve the plugin code base.  Since this is a Community Supported Plugin, please feel free to make any changes necessary to the code base within GitHub.  I'm sure the updates and GIT upload you mentioned concerning the "-ltc" parameter will be beneficial to many other users.

      Thanks,

       Josh

  63. Anonymous (login to see details)

    Hi,

    I am using this plugin to monitor one of the error files on my server. The error file is more than 200 MB, and when the plugin tries to read it, the JVM heap utilization of the Collector jumps to 80% of the 2 GB committed memory. As I understand it, this plugin cannot be used for large files.
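
    For what it's worth, streaming the file line by line would keep the heap usage flat regardless of the file size, since only one line is held in memory at a time. This is just a sketch of the idea with an example path and search term, not the plugin's actual code.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Pattern;

    public class StreamingScanSketch {
        public static void main(String[] args) throws IOException {
            Pattern term = Pattern.compile(".*ERROR.*"); // example search term
            long matches = 0;
            // Read the 200+ MB file one line at a time instead of loading it all into the heap.
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("/var/log/app/error.log"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (term.matcher(line).matches()) {
                        matches++;
                    }
                }
            }
            System.out.println("Matching lines: " + matches);
        }
    }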

     

    Thanks,

    Gopikrishnan

  64. Anonymous (login to see details)

    Hi Joshua, 

    thanks for the plugin. It's working properly. I also set an incident rule and defined an e-mail action for this incident. 

    Is there a simple way to get the log_message in the e-mail body instead of just the count?

    Details

    Time: 2016-11-24 18:16:10 EET
    System Profile: PROD
    Dynatrace Server: logserver

    Violations

    Number of Messages: New Log File Scraper@srv105: Was 1.00 but should be lower than 0.10.

     

    Regards,

    Murat

  65. Anonymous (login to see details)

    Hey Murat,

    My apologies for the late reply here, as I have been having issues with my Community account recently.  Unfortunately there is currently no way to include the message within the incident.  However, it is possible to modify the plugin to do so by adding a splitting to a measure that carries the message.  This has been requested in the past, and I'll be sure to include it in the next version of the plugin.
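
    For anyone who wants to experiment before the next release, here is a rough sketch of the idea.  The method names (getMonitorMeasures, createDynamicMeasure, setValue) are from the AppMon plugin SDK as I recall them, and the metric group, measure, and splitting names are placeholders, so please verify everything against the SDK documentation before relying on it.

    import com.dynatrace.diagnostics.pdk.MonitorEnvironment;
    import com.dynatrace.diagnostics.pdk.MonitorMeasure;

    import java.util.Collection;

    public class MessageSplittingSketch {
        // Report the matched log line as a dynamic (split) measure so the text
        // shows up alongside the count in charts and incidents.
        static void reportMessage(MonitorEnvironment env, String logMessage) {
            Collection<MonitorMeasure> measures =
                    env.getMonitorMeasures("Log File Monitor", "Number of Messages"); // placeholder names
            for (MonitorMeasure measure : measures) {
                MonitorMeasure split = env.createDynamicMeasure(measure, "Log Message", logMessage);
                split.setValue(1);
            }
        }
    }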

    Hopefully this information helps. Just let me know if you have any more questions or concerns.

    Thanks,

    Josh

  66. Anonymous (login to see details)

    Hi,

    I was wondering if I could use this log monitoring to work around my issue with Application Process Unavailable (unexpected) incidents for different agent groups.

    Is it possible to pull a certain keyword from the agent log / Dynatrace Server log that would help identify this?

    The reason I am asking is that the customer wants to be able to create different incidents from the OOTB Application Process Unavailable (unexpected) incident, and I have been unable to find a solution through other means.

    I know that we can use the extended email plugin but the customer uses the SSH action plugin for notification. 

    Thanks!

    Jon

  67. Anonymous (login to see details)

    Hey Jon,

    Although monitoring the Dynatrace Server or Collector logs may be a solution to your issue, I would instead recommend monitoring the process/service directly.  That way, when the process or service goes offline, you will know it is down regardless of whether the Application Process Unavailable (unexpected) incident is triggered.  This monitoring can be done through the following plugins:

    I have two of these plugins implemented to monitor processes and they are working beautifully.  Hopefully they help you as well!

  68. Anonymous (login to see details)

    Hi,

    Is it possible to use this to monitor log files on an AIX server?

    Thanks,

    Derick

  69. Anonymous (login to see details)

    Hi Derick,

    Currently I don't see any limitations with using the plugin on an AIX system.  The plugin uses SSH to log in to the server, retrieves the log file from the configured directory, and then parses it internally, line by line, for the specified regex.  I would therefore recommend testing the Log File Monitor in a non-production environment to determine whether it works as expected.  If you do encounter an issue, please let me know and I would be happy to assist.
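
    If it helps to picture the mechanism, here is a heavily simplified sketch of that flow using the JSch library (http://www.jcraft.com/jsch/) that the plugin relies on.  The host, credentials, file path, and search term are placeholders, and the real plugin adds key authentication, rollover detection, and the database bookkeeping on top of this.

    import com.jcraft.jsch.ChannelExec;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.regex.Pattern;

    public class SshLogScanSketch {
        public static void main(String[] args) throws Exception {
            JSch jsch = new JSch();
            Session session = jsch.getSession("dtmuser", "aixhost.example.com", 22); // placeholders
            session.setPassword("password");
            session.setConfig("StrictHostKeyChecking", "no");
            session.connect();

            // Stream the remote log file over an exec channel and scan it line by line.
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand("cat /myapplication/logs/error.log"); // example path
            BufferedReader reader =
                    new BufferedReader(new InputStreamReader(channel.getInputStream()));
            channel.connect();

            Pattern term = Pattern.compile(".*ERROR.*"); // example search term
            long matches = 0;
            String line;
            while ((line = reader.readLine()) != null) {
                if (term.matcher(line).matches()) {
                    matches++;
                }
            }
            System.out.println("Matching lines: " + matches);

            channel.disconnect();
            session.disconnect();
        }
    }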

    Thanks,

    Josh

    1. Anonymous (login to see details)

      Hi Josh,

      I am experiencing problems, and I'm not sure whether it is plugin-related or an AIX issue. I have created log file monitors and set them to execute on a collector running on an AIX host, which is trying to read a log file on another AIX host. As far as I can see, the plugin is installed on the collector, but when I try to execute it, the following message appears in the collector log file (no plugin log is created):

      2017-02-06 12:25:41 WARNING [Scheduler] Required extensions for Task[OHS1 access_0] have not arrived in time. Execution discarded.

      What required extensions could this be referring to?

      Thanks,

      Derick

      1. Anonymous (login to see details)

        Hey Derick,

        We are using SSH2 through the Java Secure Channel package (http://www.jcraft.com/jsch/) in order to connect to servers.  Can you confirm SSH2 is installed on the server?

        Additionally, are you able to set the plugin logs to FINER and send them to my email (joshua.raymond@dynatrace.com)?  I can look into the issue accordingly.

        Thanks,

        Josh

        1. Anonymous (login to see details)

          Hi Josh,

          As far as I can see, we should have SSH2. The version output is:

          OpenSSH_6.0p1, OpenSSL 1.0.1e

          No log file is being generated by the monitor yet; all I see are the messages in the collector log.

          Regards,

          Derick

          1. Anonymous (login to see details)

            Hey Derick, 

            Is that the output of the "ssh -v localhost" command?  I would expect to see the protocol version before the OpenSSH version number, such as SSH-2.0-OpenSSH_5.3.

            Thanks,

            Josh

  70. Anonymous (login to see details)

    We are trying to use the log monitor plug-in. We are able to execute it successfully, but the retrieved measurement section always returns 0.0, even though the search term exists.  Please advise.

    Thank you.

    Ken

  71. Anonymous (login to see details)

    Hey Ken, 

    Did you ensure that the search term has regex surrounding it, i.e. .*TERM.*?  The .* syntax is important because the log file monitor searches the log line by line to find the term, so the pattern needs to allow for text before and after the term on each line.
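
    As a tiny illustration of the difference (the log line below is made up, and this assumes the monitor matches each full line against the configured term):

    import java.util.regex.Pattern;

    public class SearchTermCheck {
        public static void main(String[] args) {
            String line = "2017-01-10 10:00:00 INFO Failed VM backups: 3"; // example log line

            // Without the surrounding .* the pattern must cover the whole line, so it fails...
            System.out.println(Pattern.matches("(Failed VM backups: )([1-9]{1,4})", line));     // false
            // ...with .* before and after, the surrounding text is allowed and the line matches.
            System.out.println(Pattern.matches(".*(Failed VM backups: )([1-9]{1,4}).*", line)); // true
        }
    }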

    Thanks,

    Josh

  72. Anonymous (login to see details)

    Hi Josh, 

    Is there a version compatibility restriction for the DB? I am asking about the Postgres version in particular.

    Thanks, 

    Rejith 

    1. Anonymous (login to see details)

      Hi Rejith,

      The plugin was tested with Postgres 9.4, but there shouldn't be any issues with later versions as the tables used are relatively simple.  Are you experiencing issues with the plugin writing to the Postgres database?

      Thanks,
      Josh 

      1. Anonymous (login to see details)

        Hi Josh, 

        Nope. Just thought I'd confirm before trying.

        I will test this soon and update you.

        By the way, has anybody tried this plug-in to monitor logs from Elasticsearch nodes?

        Thanks, 

        Rejith 

  73. Anonymous (login to see details)

    Hey Rejith,

     

    Not that I'm aware of.  Have you checked out the Elasticsearch Fastpack yet, though?  It includes a monitor specifically for Elasticsearch nodes, which could be useful unless you have a specific use case where you want to monitor the logs instead.

    Hope this helps.  Just let me know if you have any other questions or concerns.

    Josh

  74. Anonymous (login to see details)

    Hi

    I have configured a log file monitor to check for a text pattern in an OS log file on a remote Linux host, but I am seeing a "permission denied" error in the plugin log when the plugin attempts to access the pem file on the collector host. I have configured the following settings:

    OS: Linux

    Connection Type: SSH

    SSH Type: Public Key

    Linux Username: dtmuser

    Linux Password: ********

    SSH Key: /appl/dynatrace/.ssh/id_rsa.pem

    I am assuming the user that is attempting to access the pem file is dtmuser, but I get access denied even though dtmuser is the owner of the file and has full access to the file (-rwx------. 1 dtmuser dynatrace). I ran the plugin multiple times and still got permission denied. I am able to cat the file when logged on to the collector host as dtmuser.

    If I set the permissions on the pem file to full access for all users, the plugin executes fine. When I then set the permissions back to dtmuser-only access, the plugin again fails to execute due to the permission denied error. I am a little confused by this; is anyone able to explain it? It seems that dtmuser is not the user that is attempting to access the pem file.

    Also, one other question, will this plugin work for Solaris hosts as well as Linux and AIX?

    Thanks

    Clayton.

     

    1. Anonymous (login to see details)

      Hi, what is the RunAsUser UID for the Collector process on the local machine? That is the user that needs access to the key file.
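
      To illustrate the point with a sketch (the host name is a placeholder, and this only shows how a JSch-based connection behaves): the key file is opened locally, inside the Collector JVM, so it is read with the permissions of the OS user running the Collector, while the configured Linux Username is only used for the login on the remote host.

      import com.jcraft.jsch.JSch;
      import com.jcraft.jsch.Session;

      public class KeyAuthSketch {
          public static void main(String[] args) throws Exception {
              JSch jsch = new JSch();
              // This read happens locally, as the OS user that runs the Collector JVM.
              jsch.addIdentity("/appl/dynatrace/.ssh/id_rsa.pem");
              // The configured username is only presented to the remote host during login.
              Session session = jsch.getSession("dtmuser", "remotehost.example.com", 22); // placeholder host
              session.setConfig("StrictHostKeyChecking", "no");
              session.connect();
              session.disconnect();
          }
      }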

       

  75. Anonymous (login to see details)

    Thanks for responding to my query, Jeroen. The Collector user is indeed different from the user I have configured in the plugin settings, which raises the question of what purpose the following fields serve in this plugin:

    Linux Username

    Linux Password

    Can these fields be left blank when SSH Type = Public Key?

  76. Anonymous (login to see details)

    Hi,

    I am able to read the log file successfully, but I always get a 'New Message Found', even for old messages. I created a Postgres database and the two required tables (logrecords and logfilemonitor), but it still did not work.

     

    I looked at the plugin logs and found this error regarding the driver. I was under the impression the driver comes packaged with the plugin. Is that not the case? Do I need to install the driver separately?

     

    2017-05-25 15:54:28 INFO [com.logfile.WP@New Log File Scraper-test_0] Log ID 0: New Message Found!
    2017-05-25 15:54:28 WARNING [com.logfile.WP@New Log File Scraper-test_0] java.lang.ClassNotFoundException: org.postgresql.Driver
    at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:513)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:429)
    at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:417)
    at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at com.logfile.LogFile.updateData(LogFile.java:230)
    at com.logfile.WP.execute(WP.java:356)
    at com.dynatrace.diagnostics.sdk.UserPluginManager.executePlugin(SourceFile:565)
    at com.dynatrace.diagnostics.sdk.MonitorPluginExecutor.execute(SourceFile:51)
    at com.dynatrace.diagnostics.sdk.MonitorPluginExecutor.execute(SourceFile:26)
    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.a(SourceFile:189)
    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.a(SourceFile:412)
    at com.dynatrace.diagnostics.scheduling.impl.ServerJobCenterRegistry.execute(SourceFile:336)
    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.a(SourceFile:101)
    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.work(SourceFile:92)
    at com.dynatrace.diagnostics.scheduling.impl.SchedulerJob.executeJobInfo(SourceFile:241)
    at com.dynatrace.diagnostics.scheduling.impl.QuartzJob.execute(SourceFile:45)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at com.dynatrace.diagnostics.scheduling.impl.QuartzThreadPool$WorkerThread.run(SourceFile:788)

     

    Any inputs appreciated.

     

    Thank you!

    Regards,

    Priyanka

  77. Anonymous (login to see details)

    Hi Priyanka,

    That's correct, the driver should be packaged with the plugin, so it is interesting that you are getting the above exception.  Do you mind emailing the complete log to me at Joshua.Raymond@dynatrace.com so I can investigate accordingly?  Also, can you please let me know which version of Postgres you are using?
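
    In the meantime, a quick standalone check like the following (the connection details are placeholders) will confirm whether the PostgreSQL JDBC driver can be loaded and a connection opened at all.  Inside the plugin, the same Class.forName call is failing because the driver class is not visible on the plugin bundle's classpath, which is what the BundleLoader entries in your stack trace point to.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PostgresDriverCheck {
        public static void main(String[] args) throws Exception {
            // Throws ClassNotFoundException if the PostgreSQL JDBC jar is not on the classpath.
            Class.forName("org.postgresql.Driver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost:5432/dynatrace", "dbuser", "dbpassword")) { // placeholders
                System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }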

    Thanks,

    Josh