@Thomas Doyle: For Java applications there is an OpenShift v2 cartridge available: https://github.com/akirasoft/openshift-cartridge-d... (My apologies, it is on my personal GitHub and still needs to be pulled up into the primary Dynatrace GitHub.) We are in the process of developing cartridges to support OpenShift v3.
The main purpose of the cartridge is to install the agent on the container's filesystem and to populate the JAVA_OPTS_EXT environment variable for the container running your application.
This could also be accomplished manually by including the Dynatrace agent binaries in your deployed war/jar/ear and defining the Java options yourself:
rhc env set JAVA_OPTS_EXT="-agentpath:/some/path/to/dynatrace/agent/lib64/libdtagent.so=name=OpenshiftJava,server=192.168.1.48:9998" -a App_Name
Any updates on the cartridge/Docker image for OSE v3? We have had a lot of success with the v2 cartridge up to this point, but we are standing up v3 now and have an opportunity to set this up. Let me know.
Did you happen to have an updated Docker image for OpenShift v3? We are currently moving toward a microservices infrastructure, and OpenShift is the tool selected for this. It seems you have experience with this type of implementation on the previous OpenShift version; I would appreciate any experiences or documentation you can share on this subject.
I want to keep this conversation going, since we have also started making the switch to OSE v3. Our temporary solution for setting up Dynatrace AppMon in OSE v3 has two approaches (explained briefly below). Hopefully this will help anyone looking for a starting point for setting up AppMon in OSE v3. We primarily run JBoss apps in our OSE deployments, so our approach was geared toward solving that problem. I'm sure there is a better way to set this up in OSE v3, but at this point we are able to deploy agents. One thing I find very nice about OSE v3 versus v2 is that the host field in Dynatrace now appends the pod name, so you can follow the app regardless of which host the service is deployed on.
1. Source-to-Image: we created an assemble script in the .sti/bin directory of the code repo. It first invokes the image's built-in local/s2i/assemble script (very important, because the application build happens in that step; if you don't include it, you will get only the agent and nothing else), and then runs a monitoring script that fetches the dynaTrace.jar file from an Artifactory instance local to our OSE infrastructure (otherwise you would need to include the agent jar in your code repo). We then use a bash command in the assemble script to apply the collector:port and agent name to the standalone.conf file that is placed during the build.
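For anyone wanting a starting point, below is a minimal sketch of what such an assemble wrapper can look like. The Artifactory URL, directories, and collector address are placeholders (not the actual values from our setup), and a real script must also invoke the builder image's own assemble step:

```shell
#!/bin/bash
# Sketch of an S2I assemble wrapper (.sti/bin/assemble). All paths,
# the Artifactory URL, and the collector address are placeholders.

# Run the image's built-in assemble first so the normal build still happens.
# /usr/local/s2i/assemble   # uncomment inside a real builder image

AGENT_DIR="$(mktemp -d)"    # in a real image this would be a fixed install dir

# In a real build you would fetch the agent from a local Artifactory, e.g.:
# curl -fsSL -o "$AGENT_DIR/dynaTrace.jar" \
#   "https://artifactory.example.com/dynatrace/dynaTrace.jar"

# Append the agent settings to standalone.conf so JBoss picks them up.
CONF="$AGENT_DIR/standalone.conf"   # placeholder location for this demo
touch "$CONF"
cat >> "$CONF" <<'EOF'
JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/dynatrace/agent/lib64/libdtagent.so=name=OpenShiftJava,server=dt-collector.example.com:9998"
EOF

grep -c agentpath "$CONF"
```

The quoted heredoc (`<<'EOF'`) matters here: it keeps `$JAVA_OPTS` from being expanded at build time so JBoss expands it at startup instead.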
2. Docker images: we modify the Dockerfile to pull the agent jar from Artifactory and run java -jar to install and configure the agent. Then, on the OSE side, we set the agent path through the environment variable JAVA_OPTS_APPEND=<agentpath>.
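A rough sketch of that Dockerfile approach; the base image, Artifactory URL, and install paths are placeholder assumptions, not our actual values:

```dockerfile
# Sketch only: base image, Artifactory URL, and paths are placeholders.
FROM jboss/wildfly

# Pull the AppMon agent installer from a local Artifactory and run it.
# (Installer options omitted here; consult the AppMon agent install docs.)
RUN curl -fsSL -o /tmp/dynatrace-agent.jar \
        https://artifactory.example.com/dynatrace/dynatrace-agent.jar \
    && java -jar /tmp/dynatrace-agent.jar \
    && rm /tmp/dynatrace-agent.jar
```

On the OSE side the agent is then enabled per deployment, e.g. `oc set env dc/myapp JAVA_OPTS_APPEND=-agentpath:/opt/dynatrace/agent/lib64/libdtagent.so=name=MyApp,server=dt-collector:9998` (names and paths again placeholders).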
Can anyone share their approach regarding the collector architecture on OpenShift?
Background: in our case (we're using OSE v3.4) we would have liked to offer a shared collector infrastructure available to all projects, but we are facing a limitation of the virtual routing: it only supports HTTPS and TLS between project networks, whereas the agent needs to talk plain TCP to the collector. This was the feedback I got from our OpenShift platform team when I confronted them with our requirement:
(...) an OpenShift router only supports HTTPS and TLS, because those are the
only ones by which OpenShift can do "virtual" hosting. Since routers are
shared between many projects, it needs to be able to find out which
project a request belongs to (based on the "virtual" host).
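To make the limitation concrete: the router terminates HTTPS/TLS and picks the target project by hostname, so the agent's plain-TCP connection has to bypass the router entirely. As an illustration only (not a recommendation, and all names and ports below are placeholders), one way to expose a collector for raw TCP is a NodePort service:

```yaml
# Sketch only: one possible way to expose a collector for plain TCP
# without going through the router. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: dt-collector
spec:
  type: NodePort
  selector:
    app: dt-collector
  ports:
  - name: agent
    protocol: TCP
    port: 9998        # collector listen port inside the cluster
    nodePort: 30998   # reachable on any node IP, bypassing the router
```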
IMO possible workarounds would include:
However, all of the above have significant drawbacks.
I'm curious to know if anybody else faced this problem and how you coped with it.