
1. Installation didn't finish successfully.

Before you begin, make sure that you have all the required tools installed in supported versions. For more details, see the list of tools.

Since the integration requires the deployment of resources in your Google Cloud environment, sometimes prerequisite conditions are not met, or there are misconfigurations. For more details, see the list of prerequisites.


Make sure that you have access to the Kubernetes cluster. For example, execute the following command:

kubectl get pods


If there is no connection, configure the connection.
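If kubectl is not yet configured, credentials for a GKE cluster can typically be fetched with gcloud. This is a sketch; the cluster name, region, and project below are placeholders you need to replace with your own values:

```shell
# Fetch kubeconfig credentials for the GKE cluster
# (placeholders: <my_cluster>, <my_region>, <my_project>)
gcloud container clusters get-credentials <my_cluster> \
  --region <my_region> \
  --project <my_project>

# Confirm the connection works
kubectl get pods -n dynatrace
```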

Then, make sure that there is a connection between the current machine (VM or Cloud console) and your DT environment:

curl -v https://<specified_dynatrace_URL>


Once the connection between your VM and DT is verified, check the connection between your K8s cluster and your DT environment:

kubectl -n dynatrace run --restart=Never --rm -it comm-check --image=curlimages/curl -- -v "https://<specified_dynatrace_URL>"


If the communication between your Google Cloud environment and DT is not the problem, make sure that the deployment script has all the required permissions. To do that, verify whether the dynatrace_monitor.helm_deployment role was created in the proper project and has all the required permissions.

You can do it either:

  • In your Cloud console (IAM service > Roles section) or

  • By executing the following commands:

gcloud iam roles list --project=<my_project>
gcloud iam roles describe <DT_monitoring_role> --project=<my_project>


If the role is created properly, make sure that it is added to your user in the IAM. For more details, see how to grant access.
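One way to check whether the role is bound to your user is to filter the project's IAM policy with gcloud. The user email, role, and project below are placeholders:

```shell
# List all roles granted to a given member in the project
gcloud projects get-iam-policy <my_project> \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:<my_user_email>"
```

The role granted via the installation should appear in the resulting table; if it does not, grant it as described in the linked documentation.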

Other external errors should appear in the console after running the installation script. If the installation went through, you can try these commands to verify it.
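As a quick sanity check after installation, you can verify that the deployed pods are running and review recent cluster events. For example (assuming the default dynatrace namespace):

```shell
# All pods in the dynatrace namespace should be in Running state
kubectl -n dynatrace get pods

# Look for warnings or failures during deployment
kubectl -n dynatrace get events --sort-by=.metadata.creationTimestamp
```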


2. Missing metrics in Dynatrace

Sometimes, the metrics we expect to observe (for a given combination of filters) are not being generated in Google Cloud in the first place, so nothing will be collected and ingested into DT.

Firstly, verify that there are existing data points for a given query. You can do this in your Cloud console:

1. Go to the Monitoring service > Metrics explorer section.

2. Select any desired metric and apply filters.

If there is no data, no chart will be displayed. If there is data for a given query, you can apply the same query (same filtering, grouping, etc.) in Dynatrace's Data Explorer and compare the results.


For more details, see the list of supported metrics per service.


3. Missing logs in Dynatrace

Sometimes, the logs we expect to observe are not being generated or configured to be collected in Google Cloud in the first place, so nothing will be ingested into DT.

Firstly, verify that the resource you want to monitor generates logs. You can do this in your Cloud console:

  • By selecting an existing instance and looking for its Logs section in the chosen service or

  • Through the Logging service. See the Logs explorer section and construct a query.
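You can also verify from the command line that a resource emits logs, using gcloud logging read. The filter below is only an example (Compute Engine instances); adjust it to the resource type you want to check:

```shell
# Read the 10 most recent log entries for a given resource type
gcloud logging read 'resource.type="gce_instance"' \
  --project=<my_project> \
  --limit=10 \
  --format=json
```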

If logs from a certain resource are generated properly, make sure that the Log Routing Sink rules are configured to send them to the PubSub Topic. To configure that, see Google's documentation.
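As a sketch, a Log Routing Sink that forwards matching logs to a Pub/Sub topic can be created with gcloud. The sink name, topic, project, and filter below are placeholders:

```shell
# Create a sink that routes matching log entries to a Pub/Sub topic
gcloud logging sinks create <my_sink> \
  pubsub.googleapis.com/projects/<my_project>/topics/<my_topic> \
  --log-filter='resource.type="gce_instance"' \
  --project=<my_project>

# Inspect the sink; its writer identity (a service account)
# must be allowed to publish to the topic
gcloud logging sinks describe <my_sink> --project=<my_project>
```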


You can check which logs will be collected through the Cloud console (Logging service > Log Router section). Edit the selected sink from the list; in the Choose logs to include in sink section, you can preview the logs that match the applied filter.


4. Missing entities in Dynatrace

There might be instances of resources you have created in Google Cloud that are not appearing in Dynatrace.

Verify that the instances in Google Cloud that you want to monitor have traffic (see sections 2 and 3 above). Without data generated by those instances, no entities will be created in Dynatrace.


5. Delayed logs in Dynatrace

If you go over the stated throughput limits, delays in log ingestion will start to appear.

One way to check if you are dealing with bigger loads is by observing the metrics Oldest unacked message age and Unacked messages of your PubSub Subscription (Cloud console > Monitoring service > Metrics explorer section), filtering by your subscription's instance.

If you notice increasing values in those charts, it means that the messages are being queued and cannot be processed in real time. In that case, follow the scaling guide mentioned above.


6. Other errors

Any error during the collection of data from the Google Cloud environment and ingestion into Dynatrace should appear in the logs from the deployed GKE containers.

You can check them through your Cloud console, in the Kubernetes Engine service. In the Workloads section, select your deployment and go to the Logs tab. There's also a button to view those logs in the Logs Explorer, with the option to download and export them.
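The same container logs can also be fetched with kubectl. The deployment name below is a placeholder; use the workload deployed by the installation:

```shell
# Tail the logs of the deployed workload and filter for errors
kubectl -n dynatrace logs deployment/<my_deployment> --tail=200 | grep -i error
```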

Search there for any error or misconfiguration, or collect evidence from your environment (logs, screenshots, explanations, etc.) and contact Support.

Version history
Last update:
‎07 May 2024 07:44 AM