
OneAgent on Fargate


I am trying to enable monitoring on a task running Node.js on AWS Fargate; the base image is node. I followed the steps outlined in Install OneAgent, saw no errors at build time, and after running the task I can see in the logs that it prints the following:

Info: Using DT_HOME: /opt/dynatrace/oneagent

I don't see any other Dynatrace-specific errors or INFO messages, and the ECS Fargate task is not listed in my dashboard.

I am also sharing the contents of the Dockerfile below. Could I get help on what is missing or which instruction I didn't follow?

FROM node
ARG DT_ONEAGENT_OPTIONS="flavor=default&include=nodejs"
ENV DT_HOME="/opt/dynatrace/oneagent"
RUN mkdir -p "$DT_HOME" && \
    wget -O "$DT_HOME/" "$DT_API_URL/v1/deployment/installer/agent/unix/paas/latest?Api-Token=$DT_API_TOKEN&$DT_ONEAGENT_OPTIONS" && \
    unzip -d "$DT_HOME" "$DT_HOME/" && \
    rm "$DT_HOME/"

WORKDIR /server

COPY . /server

RUN npm install

ENTRYPOINT [ "/opt/dynatrace/oneagent/dynatrace-agent64.sh" ]
CMD ["npm", "start"]
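For what it's worth, the ENTRYPOINT/CMD pair above relies on Docker appending the CMD array to the ENTRYPOINT array, so the OneAgent launcher receives the application command as its arguments and is expected to exec it after doing its setup. A minimal sketch of that mechanism, using a hypothetical `wrapper.sh` as a stand-in for the launcher (this is not the real Dynatrace script):

```shell
# wrapper.sh is a stand-in for the OneAgent launcher: Docker appends the CMD
# array to the ENTRYPOINT array, so the launcher receives the app command as
# "$@" and execs it after its own setup.
cat > /tmp/wrapper.sh <<'EOF'
#!/bin/sh
echo "Info: Using DT_HOME: ${DT_HOME:-/opt/dynatrace/oneagent}"
exec "$@"   # hand control to the real application, e.g. npm start
EOF
chmod +x /tmp/wrapper.sh

# Equivalent of ENTRYPOINT ["/tmp/wrapper.sh"] + CMD ["echo", "app started"]:
/tmp/wrapper.sh echo "app started"
# prints:
#   Info: Using DT_HOME: /opt/dynatrace/oneagent
#   app started
```

If the launcher never execs the command, the application starts uninstrumented (or not at all), which is one thing worth ruling out.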

Dynatrace Advisor

Quick question: are you sure you used a PaaS token as the value of DT_API_TOKEN? Even though the variable is named DT_API_TOKEN, what is needed is a PaaS token, not an API token. That's a common source of errors, which is why I'm asking.

Thanks for helping with this. Yes, I used one of the 2 PaaS tokens I had on my account.

Please note that I am able to monitor my app running on EC2 - deployed both OneAgent and ActiveGate for that.

Dynatrace Champion

Are you running the container alone, or does the task also deploy another container, e.g. a security sidecar? (I have seen solutions in the past that conflicted with Dynatrace OneAgent.)

The message you see, "Info: Using DT_HOME: /opt/dynatrace/oneagent", comes from the execution of "/opt/dynatrace/oneagent/dynatrace-agent64.sh".

By default, Dynatrace OneAgent does not print information to CloudWatch, as it can be too chatty, but you can enable that for NGINX with the environment variable DT_NGINX_OPTIONS=loglevelcon=info.

With that option enabled, you will see the Dynatrace OneAgent logs in CloudWatch.
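For reference, a minimal sketch of setting that variable at build time (assuming loglevelcon=info is the option you need; the same variable can instead be set in the ECS task definition's environment section):

```dockerfile
# Dockerfile fragment: make the OneAgent nginx code module log to the console,
# so the output lands in CloudWatch via the container's stdout/stderr
ENV DT_NGINX_OPTIONS="loglevelcon=info"
```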



Hi @patb23,

I'm facing the exact same issue. Do let me know if there's a solution or workaround for it.



Rohit U

Hi, any updates on this error? I have the same problem and message.

Dynatrace Professional

Hi Rodrigo, 
Are you using build-time, automatic, or runtime injection?

Verify whether you are using Alpine Linux
(if Alpine, the OneAgent image name will be: /linux/oneagent-codemodules-musl:<technology>).

Check the CloudWatch logs for any reference to OneAgent, or add the environment variable "DT_LOGLEVELCON=info" to get more details.
If you still don't see any information, try running the image locally; you can then "ssh" into the container and verify whether OneAgent is running inside it.
One related error I have seen in the past: the Dynatrace cluster, or the repo hosting the OneAgent image, does not have a valid certificate, so Docker fails when pulling the image.

Frequent Guest

Did you solve your problem? If so how?

I face the same issue: build-time injection (Docker image) looks fine. The build passes green, and OneAgent is downloaded successfully.
We use an Alpine-based image, so musl is used.

When testing the final image locally (for easy debugging) I can "ssh" into the running container. I can see the entrypoint script; next to it sits the environment-setting script with all the company URLs, tokens and so on. All looks good.

But launching it produces the same "Info: Using DT_HOME: /opt/dynatrace/oneagent" output as in your case.

That is the end: no more logs, errors, etc. No process is started (ps aux), and netstat does not show any extra connections being made. All the ".so" libs are in place. What is wrong? Deploying to AWS ECS Fargate produces the same result, just that very same "Info:" line in CloudWatch and nothing more. Please share your results.
@rodrigo_alvare1, maybe you came up with some other ideas? In my Docker image I also have verbosity mode set, but it seems not to be working.

bash-5.1# env |grep DT

Dynatrace Champion

How are you connecting to the Dynatrace API? Is there any egress or ingress controller set up that handles those traffic routes specially? Also, have you mounted the volume per the mapped config path?


Frequent Guest

We use Dynatrace SaaS. As I am pretty new to this, I don't quite get your question. The API token is set via the Dynatrace cloud UI; for scope I used the PaaS template with the ingress scope added. I did not set up any extra controllers. Map config path, what's that? I wish my local Docker environment would show me whatever problem there might be so I could solve it, but if the script outputs just that "Info:" message, I don't know what to fix. My integration is build-time/classic, as shown here.

What would you expect to see in the following scenario, running a local Docker container?

docker exec -e "DT_LOGLEVELSDK=finest" --user root -it our-app /bin/bash
bash-5.1# /opt/dynatrace/oneagent/
Info: Using DT_HOME: /opt/dynatrace/oneagent
bash-5.1# env |grep DT

No process runs in the background (just our app, no DT visible). This is an isolated WSL localhost test. We use the same image for AWS testing, get the same "Info:" message, and that is everything we get. The build is okay: with the same scope/token we download the OneAgent installer and embed it into the Docker image, all good. As you can see, the Docker image has the agent installed, but it produces no errors, etc.


I'm not aware of the flag DT_LOGLEVELSDK; where did you see that one?
(The one I used before is "DT_LOGLEVELCON=info".)

If you don't see any logs (besides "Info: Using DT_HOME..."), that means the agent is not being injected at the process level.

Things you could check for why the injection might fail:
- Check the technology to be monitored (Java, .NET, ...).
- You mentioned that you are using the musl version; try the regular version as well, just in case (years ago I had an image that was failing injection because we thought it was Alpine, but it wasn't).
- Try to print the environment variable LD_PRELOAD; it should have a reference to a Dynatrace library (/opt/dynatrace/oneagent/agent/lib64/). I had a scenario where other tools were overriding LD_PRELOAD, removing the Dynatrace library, so no automatic injection was possible.
- You could do a manual injection on the app process to verify that it works, for example:
export JAVA_OPTS="${JAVA_OPTS} -Xshare:off -agentpath:/opt/dynatrace/oneagent/agent/lib64/,server=https://<mydynatraceserver>/e/<tenant>:443,tenant=<tenantID>,tenanttoken=<TOKEN>,loglevelcon=info"
- The last troubleshooting option I had to use was to modify the launcher script to print some echoes and verify which steps of the script were executed. (JUST FOR TROUBLESHOOTING PURPOSES)
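To illustrate the LD_PRELOAD check in the list above, here is a small sketch. The `check_preload` helper is hypothetical, and the library path shown is the usual OneAgent location but should be treated as an assumption for your image; in the real container you would simply run `echo "$LD_PRELOAD"` and look for the OneAgent library:

```shell
# Hypothetical helper illustrating the check: does LD_PRELOAD still carry
# the OneAgent process library, or has another tool overwritten it?
check_preload() {
    case "$1" in
        *liboneagentproc*) echo "OneAgent preload present" ;;
        *)                 echo "OneAgent preload missing: no automatic injection" ;;
    esac
}

check_preload "/opt/dynatrace/oneagent/agent/lib64/"
# -> OneAgent preload present
check_preload ""   # e.g. another tool overwrote LD_PRELOAD
# -> OneAgent preload missing: no automatic injection
```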


Hi thanks for replying!

Let me provide more details. The base image I am modifying is:


FROM kong/kong-gateway:


So I have used musl (but I also tried the default flavor, the nginx include, and so on).

Currently my Dockerfile is


FROM kong/kong-gateway:
USER root
# Dynatrace OneAgent
ARG DT_ONEAGENT_OPTIONS="flavor=musl&include=nginx"
ENV DT_HOME="/opt/dynatrace/oneagent"
RUN mkdir -p "$DT_HOME" && \
    wget --quiet -O "$DT_HOME/" "$DT_API_URL/v1/deployment/installer/agent/unix/paas/latest?Api-Token=$DT_API_TOKEN&$DT_ONEAGENT_OPTIONS" && \
    unzip -d "$DT_HOME" "$DT_HOME/" && \
    rm "$DT_HOME/"

RUN apk --no-cache add curl jq

RUN chmod +x /

ENTRYPOINT [ "/opt/dynatrace/oneagent/dynatrace-agent64.sh", "/", "kong", "docker-start" ]

USER kong


To be honest, there was NO ENTRYPOINT in the original Dockerfile; somehow this container knew that the starting point is / (which starts Kong's nginx, probably set in the original "FROM" image). So I knew I had to inject LD_PRELOAD before / is magically called, and I came up with the long ENTRYPOINT you can see above, which runs the DT magic first with a set of 3 parameters. And I think this made a difference. I have to note that running the agent script won't set LD_PRELOAD outside the script; I don't know why. The exported variable is visible inside the script but not after it finishes:


bash-5.1# /opt/dynatrace/oneagent/
Info: Using DT_HOME: /opt/dynatrace/oneagent
bash-5.1# env |grep LD
bash-5.1# env |grep DT
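The variable vanishing after the script finishes is expected shell behaviour rather than a OneAgent bug: the script runs as a child process, and a child cannot modify its parent's environment, which is why the launcher has to exec the application itself instead of preparing the environment for a later command. A minimal sketch (DEMO_PRELOAD and the library path are illustrative assumptions, chosen so we don't touch the real LD_PRELOAD):

```shell
# A script run as a child process can only export into its own environment;
# the parent shell never sees the change.
cat > /tmp/setenv.sh <<'EOF'
export DEMO_PRELOAD="/opt/dynatrace/oneagent/agent/lib64/"
EOF

sh /tmp/setenv.sh                               # child process: export is lost on exit
echo "after child:  ${DEMO_PRELOAD:-unset}"     # -> after child:  unset

. /tmp/setenv.sh                                # sourced into the current shell
echo "after source: ${DEMO_PRELOAD}"            # -> after source: /opt/dynatrace/...
```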


Also, in case it's relevant: the LD_PRELOAD library doesn't have to overwrite any existing one, as nginx does not seem to use the library exported by the DT script:


bash-5.1# ldd /usr/local/openresty/nginx/sbin/nginx
        /lib/ (0x7f3d2958f000)
        => /usr/local/openresty/luajit/lib/ (0x7f3d28ea4000)
        => /usr/lib/ (0x7f3d28d03000)
        => /usr/local/kong/lib/ (0x7f3d28aeb000)
        => /usr/local/kong/lib/ (0x7f3d28857000)
        => /usr/local/kong/lib/ (0x7f3d28366000)
        => /lib/ (0x7f3d2834c000)
        => /usr/lib/ (0x7f3d28332000)
        => /lib/ (0x7f3d2958f000)



Anyway, the nested, long ENTRYPOINT made a difference:


bash-5.1# ps
    1 kong      0:00 bash / kong docker-start
   11 root      0:00 /bin/bash
 2136 kong      0:00 perl /usr/local/openresty/bin/resty /usr/local/bin/kong prepare -p /usr/local/kong kong docker-start
 2138 kong      0:00 /usr/local/openresty/nginx/sbin/nginx -p /tmp/resty_xxxxxx/ -c conf/nginx.conf
 2169 kong      0:00 nginx -v
 2185 root      0:00 ps

# ...and the container exits back to my local Linux shell

root@EPPLWROW0270:~/kong# docker logs kong-xxxxxx
Info: Using DT_HOME: /opt/dynatrace/oneagent
*** Welcome to the rocketship ***
Running checks...
Enjoy the flight!
2022-07-22 12:17:56.641 UTC [0000085a] info    [native] ... last message repeated 3 times ...

xxxxx a lot of lines, removed for security and clarity

2022-07-22 12:18:01.282 UTC [0000085a] info    [native] Detected static modules: ndk_http_module (0x5635c723de40) a lot of them nxxxxxxxxxxxxxxx
2022-07-22 12:18:01.282 UTC [0000085a] info    [native] Registering agent via core module hook (default; custom array registration viable as well)
2022-07-22 12:18:01.282 UTC [0000085a] info    [native] Nginx successfully instrumented

2022-07-22 12:18:01.283 UTC [0000085a] info    [native] Agent successfully loaded from '/opt/dynatrace/oneagent/agent/bin/'

Error: could not prepare Kong prefix at /usr/local/kong: could not find OpenResty 'nginx' executable. Kong requires version to

  Run with --v (verbose) or --vv (debug) for more details



So the DT agent is UP, but somehow Kong is complaining about the altered nginx and exits, and the container dies with it. (I think messing with the .so libraries might have caused that?) Why? If I run the unmodified container (no ENTRYPOINT used, so DT won't be started at all):


bash-5.1# nginx -v
nginx version: openresty/


So the version is within the range of versions supported by Kong; however, it is not starting up... I have no idea what to do next. Maybe test the latest source container.