11 May 2020 11:30 AM
I am trying to enable monitoring on a Task running Node.js on AWS Fargate; the base image is node. I followed the steps outlined in Install OneAgent, saw no errors at build time, and after running the task I could see in the logs that it prints the following:
Info: Using DT_HOME: /opt/dynatrace/oneagent
I don't see any other Dynatrace-specific errors or INFO messages. The ECS Fargate task is not listed in my dashboard.
I am also sharing the content of my Dockerfile below. Could I get help with what is missing or which instruction I didn't follow?
FROM node
ARG MASTER_NAME
ARG DT_API_URL="https://myenv.live.dynatrace.com/api"
ARG DT_API_TOKEN
ARG DT_ONEAGENT_OPTIONS="flavor=default&include=nodejs"
ENV DT_HOME="/opt/dynatrace/oneagent"
# Download and unpack the OneAgent PaaS installer
RUN mkdir -p "$DT_HOME" && \
wget -O "$DT_HOME/oneagent.zip" "$DT_API_URL/v1/deployment/installer/agent/unix/paas/latest?Api-Token=$DT_API_TOKEN&$DT_ONEAGENT_OPTIONS" && \
unzip -d "$DT_HOME" "$DT_HOME/oneagent.zip" && \
rm "$DT_HOME/oneagent.zip"
WORKDIR /server
COPY . /server
RUN npm install
EXPOSE 3000
# Start the app through the OneAgent launcher script
ENTRYPOINT [ "/opt/dynatrace/oneagent/dynatrace-agent64.sh" ]
CMD ["npm", "start"]
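For clarity, with this ENTRYPOINT/CMD combination Docker effectively starts the container as:

/opt/dynatrace/oneagent/dynatrace-agent64.sh npm start

so the launcher script is supposed to set up the agent environment and then hand off to the Node process.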
12 May 2020 02:05 PM
Quick question: are you sure you used a PaaS token (for the value of DT_API_TOKEN)? Even though the variable is named DT_API_TOKEN, what is needed is a PaaS token, not an API token. That's a common source of errors, which is why I'm asking.
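A quick way to verify the token against the deployment API is to request the installer with curl (a sketch reusing the URL from your Dockerfile; <PAAS_TOKEN> is a placeholder):

# An HTTP 200 means the token can download the PaaS installer;
# a 401/403 usually points to a wrong token type or missing scope.
curl -sS -o /dev/null -w "%{http_code}\n" \
  "https://myenv.live.dynatrace.com/api/v1/deployment/installer/agent/unix/paas/latest?Api-Token=<PAAS_TOKEN>&flavor=default&include=nodejs"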
12 May 2020 02:42 PM
Thanks for helping with this. Yes, I used one of the two PaaS tokens I have on my account.
Please note that I am able to monitor my app running on EC2; I deployed both OneAgent and ActiveGate for that.
08 Jul 2020 02:31 AM
Hello,
Are you running the container alone, or does the task also deploy any other security container? (I have seen solutions in the past that conflicted with Dynatrace OneAgent.)
The message you see, "Info: Using DT_HOME: /opt/dynatrace/oneagent", comes from the execution of /opt/dynatrace/oneagent/dynatrace-agent64.sh.
By default Dynatrace OneAgent does not print information to CloudWatch, as it can be too chatty, but you can enable that for NGINX with the environment variable DT_NGINX_OPTIONS="loglevelcon=info".
With that option enabled you will see the Dynatrace OneAgent logs in CloudWatch.
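In a Dockerfile that could look like this (just a sketch, using the variable mentioned above):

# Raise the OneAgent NGINX module's console log level so its
# messages end up in the container logs and therefore in CloudWatch.
ENV DT_NGINX_OPTIONS="loglevelcon=info"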
Regards
14 Apr 2021 05:23 AM
Hi @patb23,
I'm facing the exact same issue; do let me know if there's a solution or workaround for it.
Regards,
Rohit U
11 Nov 2021 03:48 PM
Hi, any updates on this error? I have the same problem and message.
11 Nov 2021 04:18 PM
Hi Rodrigo,
Are you using build/auto/runtime injection? https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/amazon-web-s...
Verify whether you are using Alpine Linux or not
(if Alpine, the OneAgent image name will be /linux/oneagent-codemodules-musl:<technology>).
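For example (a sketch assuming the documented registry pattern; <environmentID> and the nginx technology tag are placeholders):

# Pull the musl code-modules image from your environment's registry.
docker pull <environmentID>.live.dynatrace.com/linux/oneagent-codemodules-musl:nginx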
Check the CloudWatch logs for any reference to OneAgent, or add the environment variable DT_LOGLEVELCON=info to get more details.
If you still don't see any information, try running the image locally; you can open a shell in the container and verify whether OneAgent is running inside it.
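For example (a sketch; my-image and dt-test are placeholder names):

# Run the image locally and look for traces of the agent.
docker run -d --name dt-test my-image
docker exec -it dt-test /bin/sh
ps aux | grep -i oneagent                              # any agent processes?
cat /proc/1/environ | tr '\0' '\n' | grep LD_PRELOAD   # injected into the app process (PID 1)?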
A related error I have seen in the past is that the Dynatrace cluster, or the repository where the OneAgent image lives, does not have a valid certificate, so Docker fails when pulling the image.
21 Jul 2022 08:40 AM - edited 21 Jul 2022 08:45 AM
Did you solve your problem? If so, how?
I'm facing the same issue; build-time injection (Docker image) looks fine. The build passes green and OneAgent is downloaded successfully.
We use an Alpine-based image, so the musl flavor is used.
When testing the final image locally (for easy debugging) I can "ssh" into the running container. I can see the entrypoint script; next to it sits an environment-setup script with all the company URLs, tokens, and so on. All looks good.
But launching dynatrace-agent64.sh produces the same "Info: Using DT_HOME: /opt/dynatrace/oneagent" output as in your case.
That is the end: no more logs, errors, etc. No process is started (checked with ps aux), and netstat does not show any extra connections. All the ".so" libs are in place. What is wrong? Deploying to AWS ECS Fargate produces the same result, just that very same "Info:" line in CloudWatch and nothing more. Please share your results.
@rodrigo_alvare1 maybe you came up with some other ideas? In my Docker image I also have verbosity mode set, but ... it seems not to be working.
bash-5.1# env |grep DT
DT_HOME=/opt/dynatrace/oneagent
DT_LOGLEVELSDK=finest
DT_LOGLEVELCON=finest
21 Jul 2022 12:27 PM
How are you connecting to the Dynatrace API? Is there any egress or ingress controller set up for the traffic routes? Also, have you mounted the volume as per the map config path?
21 Jul 2022 01:02 PM
We use Dynatrace SaaS. As I am pretty new to this, I don't fully understand your question. The API token is set via the Dynatrace UI; for the scope I used the PaaS template with the ingress scope added. I did not set up any extra controllers. Map config path, what's that? I wish my local Docker environment would let me see whatever problems there might be and solve them, but if the script outputs just that Info: message, I don't know what to fix. My integration is build-time/classic, as shown here: https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-cloud-platforms/amazon-web-s...
What would you expect to see in the following scenario:
running a local Docker container,
docker exec -e "DT_LOGLEVELSDK=finest" --user root -it our-app /bin/bash
bash-5.1# /opt/dynatrace/oneagent/dynatrace-agent64.sh
Info: Using DT_HOME: /opt/dynatrace/oneagent
bash-5.1# env |grep DT
DT_HOME=/opt/dynatrace/oneagent
DT_LOGLEVELSDK=finest
DT_LOGLEVELCON=finest
No process runs in the background (just our app, no DT visible). This is an isolated WSL localhost test. We use the same image for AWS testing and get the same Info: message, and that is everything we get. The build is okay: with the same scope/token we download oneagent.zip and embed it into the Docker image, all good. As you can see, the Docker image has the agent installed but produces no errors, etc.
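(For reference, the environment of the running app process itself, PID 1 in the container, can be checked like this, since a shell opened with docker exec won't necessarily show what the app inherited:

bash-5.1# cat /proc/1/environ | tr '\0' '\n' | grep -E 'LD_PRELOAD|DT_'
)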
21 Jul 2022 02:08 PM - edited 21 Jul 2022 02:52 PM
Hello,
I'm not aware of the flag DT_LOGLEVELSDK; where did you see that one?
(The one I used before is DT_LOGLEVELCON=info.)
If you don't see logs (besides "Info: Using DT_HOME..."), that means the agent is not being injected at the process level.
Things you could check to see why the injection might fail:
- check the technology to be monitored (Java, .NET, ...)
- you mentioned that you are using the musl version; try the regular version as well, just in case (I remember years ago I had an image that was failing the injection because we thought it was Alpine, but it wasn't)
- try to print the environment variable LD_PRELOAD; it should contain a reference to a Dynatrace library (/opt/dynatrace/oneagent/agent/lib64/liboneagentproc.so). I had a scenario where other tools were overriding LD_PRELOAD, removing the Dynatrace library, so no automatic injection was possible
- you could do a manual injection on the app process to verify that it works, for example:
export JAVA_OPTS="${JAVA_OPTS} -Xshare:off -Djava.net.preferIPv4Stack=true -agentpath:/opt/dynatrace/oneagent/agent/lib64/liboneagentloader.so=server=https://<mydynatraceserver>/e/<tenant>:443,tenant=<tenantID>,tenanttoken=<TOKEN>,loglevelcon=info"
- the last troubleshooting option I had was to modify dynatrace-agent64.sh to print some echos and verify which steps of the script were executed (FOR TROUBLESHOOTING PURPOSES ONLY)
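A lighter-weight alternative to adding echos (my suggestion, not an official procedure) is to run the launcher under shell tracing:

# -x prints every command the script executes, so you can
# see exactly where it stops or which branch it takes.
bash -x /opt/dynatrace/oneagent/dynatrace-agent64.sh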
Regards
22 Jul 2022 03:13 PM - edited 22 Jul 2022 03:18 PM
Hi, thanks for replying!
Let me provide more details. The image I'm modifying is
FROM kong/kong-gateway:2.8.1.1-alpine
So I have used musl (but I also tried default, nginx, and all).
Currently my Dockerfile is:
FROM kong/kong-gateway:2.8.1.1-alpine
USER root
# Dynatrace OneAgent
ARG DT_API_URL
ARG DT_API_TOKEN
ARG DT_ONEAGENT_OPTIONS="flavor=musl&include=nginx"
ENV DT_HOME="/opt/dynatrace/oneagent"
ENV DT_LOGLEVELCON=info
RUN mkdir -p "$DT_HOME" && \
wget --quiet -O "$DT_HOME/oneagent.zip" "$DT_API_URL/v1/deployment/installer/agent/unix/paas/latest?Api-Token=$DT_API_TOKEN&$DT_ONEAGENT_OPTIONS" && \
unzip -d "$DT_HOME" "$DT_HOME/oneagent.zip" && \
rm "$DT_HOME/oneagent.zip"
RUN apk --no-cache add curl jq
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT [ "/opt/dynatrace/oneagent/dynatrace-agent64.sh", "/docker-entrypoint.sh", "kong", "docker-start" ]
USER kong
To be honest there was NO entrypoint in the original Dockerfile; somehow the container knew that the starting point is /docker-entrypoint.sh (which starts the Kong nginx process, probably set in the original "FROM" image). So I knew I had to inject LD_PRELOAD before /docker-entrypoint.sh is magically called, and I came up with the long ENTRYPOINT you can see above. This runs the DT magic first with a set of 3 parameters. And I think this made a change... I have to note that running dynatrace-agent64.sh on its own won't set LD_PRELOAD outside the script; presumably because the script runs as a child process, its exports can't propagate back to the calling shell. The exported var is visible inside the script but not after it finishes:
bash-5.1# /opt/dynatrace/oneagent/dynatrace-agent64.sh
Info: Using DT_HOME: /opt/dynatrace/oneagent
bash-5.1#
bash-5.1# env |grep LD
bash-5.1# env |grep DT
DT_HOME=/opt/dynatrace/oneagent
DT_LOGLEVELCON=info
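Which makes sense as a general shell rule: a child process can never modify its parent's environment. A minimal demo (the /tmp/fake.so path is made up):

bash-5.1# sh -c 'export LD_PRELOAD=/tmp/fake.so'   # the export happens in a child process
bash-5.1# env | grep LD_PRELOAD                    # parent shell: prints nothing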
Also, maybe you will need this: the LD_PRELOAD library does not overwrite any existing one, as nginx does not seem to use the one exported by the DT script?
bash-5.1# ldd /usr/local/openresty/nginx/sbin/nginx
/lib/ld-musl-x86_64.so.1 (0x7f3d2958f000)
libluajit-5.1.so.2 => /usr/local/openresty/luajit/lib/libluajit-5.1.so.2 (0x7f3d28ea4000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x7f3d28d03000)
libopentracing.so.1 => /usr/local/kong/lib/libopentracing.so.1 (0x7f3d28aeb000)
libssl.so.1.1 => /usr/local/kong/lib/libssl.so.1.1 (0x7f3d28857000)
libcrypto.so.1.1 => /usr/local/kong/lib/libcrypto.so.1.1 (0x7f3d28366000)
libz.so.1 => /lib/libz.so.1 (0x7f3d2834c000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f3d28332000)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f3d2958f000)
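(Note: ldd only lists link-time dependencies, so a library injected via LD_PRELOAD would never show up in this output anyway. To see what the running process actually mapped, something like this works; <nginx-pid> is a placeholder:

bash-5.1# grep -i oneagent /proc/<nginx-pid>/maps
)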
Anyway, the long nested ENTRYPOINT made a difference:
bash-5.1# ps
PID USER TIME COMMAND
1 kong 0:00 bash /docker-entrypoint.sh kong docker-start
11 root 0:00 /bin/bash
2136 kong 0:00 perl /usr/local/openresty/bin/resty /usr/local/bin/kong prepare -p /usr/local/kong kong docker-start
2138 kong 0:00 /usr/local/openresty/nginx/sbin/nginx -p /tmp/resty_xxxxxx/ -c conf/nginx.conf
2169 kong 0:00 nginx -v
2185 root 0:00 ps
# and the container exits to my local Linux shell
root@EPPLWROW0270:~/kong# docker logs kong-xxxxxx
Info: Using DT_HOME: /opt/dynatrace/oneagent
*** Welcome to the rocketship ***
Running checks...
Enjoy the flight!
/
2022-07-22 12:17:56.641 UTC [0000085a] info [native] ... last message repeated 3 times ...
xxxxx a lot of lines, removed for security and clarity
2022-07-22 12:18:01.282 UTC [0000085a] info [native] Detected static modules: ndk_http_module (0x5635c723de40) a lot of them nxxxxxxxxxxxxxxx
2022-07-22 12:18:01.282 UTC [0000085a] info [native] Registering agent via core module hook (default; custom array registration viable as well)
2022-07-22 12:18:01.282 UTC [0000085a] info [native] Nginx successfully instrumented
2022-07-22 12:18:01.283 UTC [0000085a] info [native] Agent successfully loaded from '/opt/dynatrace/oneagent/agent/bin/1.243.166.20220701-145555/linux-x86-64/liboneagentnginx.so'
Error: could not prepare Kong prefix at /usr/local/kong: could not find OpenResty 'nginx' executable. Kong requires version 1.19.3.1 to 1.19.9.1
Run with --v (verbose) or --vv (debug) for more details
So the DT agent is UP, but somehow Kong is complaining about the altered nginx and exits, and the container dies with it (I think messing with the .so libraries might have caused that?). Why? If I run the unmodified container (no entrypoint used, so DT won't be started at all):
bash-5.1# nginx -v
nginx version: openresty/1.19.9.1
So the version is within the range supported by Kong, yet it is not starting up... I have no idea what to do next; maybe I'll test the latest source container.
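One more idea worth checking (my own guess, not from the docs): the ps output above shows Kong running nginx -v during kong prepare, and if the preloaded agent writes its console logs into that output, Kong's version detection could fail. A quick test, using the library path the DT script exports via LD_PRELOAD:

# Does the agent's console output pollute `nginx -v`?
bash-5.1# LD_PRELOAD=/opt/dynatrace/oneagent/agent/lib64/liboneagentproc.so nginx -v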