26 Nov 2019 04:34 PM
Hi, I need to install the Dynatrace OneAgent on one of my Solaris boxes. I have the parameters ready which I need to include in the startup script of Tomcat.
What I want to understand is: does the position of these parameters in the script file matter? Should I be placing them above or below the JVM parameters, or can they be placed anywhere in the script?
DT_HOME=/opt/dynatrace
export DT_HOME
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64
26 Nov 2019 05:00 PM
Hi Shashank,
DT_HOME and LD_PRELOAD_64 should be set as environment variables; export these variables using the setenv.sh file.
If you don't find setenv.sh in the bin folder of Tomcat (CATALINA_BASE/bin - follow the Tomcat documentation for more details), create a setenv.sh file in the bin folder of Tomcat and add the variables there, for example as shown below.
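A minimal sketch of what that setenv.sh could contain (assuming the agent was extracted to /opt/dynatrace as in the original post; adjust the path to your install):
# CATALINA_BASE/bin/setenv.sh - sourced by catalina.sh at startup
# Location where the OneAgent code module was extracted
DT_HOME=/opt/dynatrace
export DT_HOME
# Preload the 64-bit agent library into the JVM that catalina.sh launches
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64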
28 Nov 2019 01:48 PM
@Saravanakumar P. Hi Saravana, I don't think that's mandatory. On the Dynatrace website it is mentioned that you can place it in your startup script and it will start working the next time Tomcat starts. But what I want to know is where to put it.
I mean, we have JVM parameters in the startup.sh script as well, so should we be placing these DT parameters above the JVM parameters, or can they be placed anywhere?
02 Dec 2019 04:28 PM
@Shashank A. You are right, we can place the above env variables in the application startup script (before java is executed), and also as environment variables in the .profile of the user the Tomcat process runs as. That way of instrumentation was working for us when we started Tomcat manually, but it wasn't working when we tried to start Tomcat through a remote process.
After consulting with Dynatrace support, the ideal way is to place these environment variables in the Tomcat startup environment script. It is also recommended to use either one of these variables, based on your operating system type:
LD_PRELOAD or
LD_PRELOAD_64
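As a sketch, the pairing of variable and library path follows the example later in this thread (which form you need depends on whether the process is 32-bit or 64-bit in your environment):
# 64-bit: preload the agent library from agent/lib64
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64
# 32-bit: preload the agent library from agent/lib
LD_PRELOAD=$DT_HOME/agent/lib/liboneagentproc.so
export LD_PRELOAD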
26 Nov 2019 10:37 PM
Another way to think about it:
LD_PRELOAD_64 must be defined before the java command is executed. So you can define it any way you want, anywhere you want as long as it's defined before java is executed.
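To illustrate, here is a rough sketch only - the paths, JAVA_OPTS, and the actual java launch line are placeholders and will look different in your startup.sh:
#!/bin/sh
# Dynatrace OneAgent variables - anywhere above the java invocation is fine
DT_HOME=/opt/dynatrace
export DT_HOME
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64

# JVM parameters can sit before or after the exports above
JAVA_OPTS="-Xms512m -Xmx1024m"

# By the time java starts, LD_PRELOAD_64 is already in the environment, so the agent gets loaded
exec "$JAVA_HOME/bin/java" $JAVA_OPTS -classpath "$CLASSPATH" org.apache.catalina.startup.Bootstrap start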
27 Nov 2019 09:56 PM
RUN (don't walk) as far away from Solaris as you can! That has been nothing but a nightmare for us when it comes to all the limitations of the OneAgent for Solaris. The memory does not track correctly which has been a huge problem for us. It looks at the LDOM and counts that amount of memory for every single zone on that LDOM. Thankfully we are actively moving 100% away to Linux.
28 Nov 2019 01:45 PM
@Larry R. Hi Larry, I know it's a nightmare, but moving to Linux will take some time. As of now we have to get it working on Solaris. Please let me know if you have any views on my original query about placing the parameters in the startup.sh script.
02 Dec 2019 03:58 PM
@Larry R. Hi Larry, thank you for your help. Is it possible for you to list out the limitations of DT on a Solaris box? We have to do it on Solaris, as right now we are far from moving to Linux, so could you tell me what the pros and cons will be if I go ahead and install on Solaris anyway?
03 Dec 2019 01:41 PM
You're very welcome - Happy to help!
While I can only speak from our perspective within our Solaris environment, hopefully this info will aid you.
Memory does not track correctly
This is a big one. In our case, our Solaris environment consists of WebLogic 12.2. Within that environment we have many domains, and all of them run on 3-node (Solaris Zones) clusters, and this is where the problem comes in. The zones are configured to share the memory of the LDOM they run on. The keyword there is "share". Dynatrace looks at the memory of the LDOM where the zones run and not at the actual memory that the zone is using. Therefore, from a licensing perspective, and from a metrics standpoint as well, Dynatrace thinks each and every zone has the total amount of memory that the LDOM has. So for example, if you have an LDOM with 128GB and, let's say, 3 zones on that LDOM that are configured to share that memory, Dynatrace will say that each of those 3 zones has 128GB, when really that is not true, because all 3 share a total of 128GB between them on the LDOM.
Because of this, you can't really count on any of the memory metrics in Dynatrace because they are flawed. This is a nightmare for licensing and just everything in general. They are aware of it, but I have no idea if a fix is coming or not. They did make it so that if you set the memory limit for a node, it will look at that, but that is really useless, as the entire point of having zones on an LDOM is so they can share the memory. If you put a limit into the configuration, you have essentially killed the benefit of having zones on the LDOM.
OneAgent should really be called something else for Solaris (Legacy Agent, maybe?)
As much as I love Dynatrace, calling the Solaris agent a "OneAgent" is a rather large stretch in my opinion. Nothing about the Solaris version of this screams "OneAgent". In fact, it's deployed the same way you would have deployed any previous vendor's or older-technology agent. It's not like the true OneAgent, which is installed once and picks up anything and everything on said host. This only picks up Java processes where implemented, and that's it. You can't really count on it for anything else. It does pick up some of the OS metrics once implemented into Java, but honestly I do not trust most of it, for the reasons mentioned above about the memory. This is a very barebones agent that is anything but what one would consider a "OneAgent", in my opinion.
No auto-updates
While the OneAgent for Linux, Windows, etc. can auto-update - Solaris cannot. Each agent (not going to call it a OneAgent) must be updated by hand. Yes, you could use something like Puppet to automate this, but again, if you look at what makes the OneAgent shine for Dynatrace - the Solaris agent is the exact opposite of that.
There is one pro
There is actually one pro to the Solaris agent - It does not need root to install! You just drop the files in place, make your calls to them, and you are good to go.
Final thoughts
This is by no means negative towards Dynatrace. I think they have most likely done the best they can when it comes to dealing with Solaris. Solaris has been a challenge for all monitoring vendors for years. It has never been an easy OS to work with in terms of monitoring. From a business perspective I highly doubt they are putting much thought or resources towards improving it as everyone is aware by now that Solaris is effectively a dead OS. I do think they need to fix the memory issues though as they are going to continue to have customers with Solaris for some time as we all know migration takes work and time. Until then, I think it's important to support it as much as possible and this memory issue is a huge problem.
The best bet is to migrate to Linux as soon as possible. We are doing just that ourselves. We are just taking it chunk by chunk. The ironic part is most of the time when we have issues, it's due to something around Solaris and I always get the question - "What is Dynatrace showing?" and my response is always - "Not much... Remember this is Solaris".
Just do not expect anything on Solaris to line up with all the features around the OneAgent because there is a night and day difference. As long as you are aware of that and move forward knowing that, you should be ok. I would recommend using this as more fuel for the fire to motivate your leadership into the migration away from Solaris. Again, this is not just a Dynatrace thing. All vendors have a hard time with Solaris. There is a good reason it's quickly become known as a dead OS.
On a side note, I would agree with the other responses here. You can place it just about anywhere you like, really, as long as it's done before any java is executed.
04 Dec 2019 01:01 PM
@Larry R. Hi Larry. Thank you for your brilliant explanation. This has actually cleared up so many things I had on my mind. We have just installed DT on one of our Solaris boxes/nodes and we are monitoring it now. Hopefully we will soon be moving to Linux to make full use of DT.
Thank you once again. 🙂
04 Dec 2019 02:29 PM
You're very welcome! Happy to help.
17 Dec 2019 12:44 PM
@Larry R. Hi Larry, hope you can help here. I am trying to monitor my Apache httpd, which is also installed on the same host. From the Dynatrace documentation link it looks like the same configuration is required.
I have followed the steps and extracted the same zip file which I downloaded for Tomcat into the folder /opt/dynatrace/oneagent, but I am not able to understand where I should be putting the below parameters in the Apache config to load the OneAgent:
DT_HOME=/opt/dynatrace/oneagent
export DT_HOME
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64
Let me know if you can help here. I have been going around in circles but have not been able to find anything.
Or is there a different process to install the OneAgent on Apache?
06 Jan 2020 03:44 PM
Apache is not one I have had to do yet so I don't think I can be of much help on this one. I would think it could go just about anywhere as long as it's before the actual startup command for Apache.
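If it helps, a rough sketch of what that might look like (unverified for Apache on my side; the bin/envvars file is an assumption - many httpd builds ship an envvars script that apachectl sources, otherwise put the exports in whatever script actually starts httpd):
# bin/envvars (or any script that runs before httpd is started)
DT_HOME=/opt/dynatrace/oneagent
export DT_HOME
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64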
29 Nov 2019 12:16 PM
We have ours placed right towards the start of the startup script. Hope that helps.
DT_HOME=/lcl/prd/apps/dynatrace/oneagent
export DT_HOME
LD_PRELOAD_64=$DT_HOME/agent/lib64/liboneagentproc.so
export LD_PRELOAD_64
LD_PRELOAD=$DT_HOME/agent/lib/liboneagentproc.so
export LD_PRELOAD
24 May 2023 04:32 PM
Basically, the agent attaches to the process you start with these environment variables set. If the process ceases to be active, as a consequence the agent will also cease to be active. It is a big disadvantage that I only recently realized.