
Unable to break out Rabbit MQ using process group detection

arun_ramalingam
Participant

Currently, we have multiple environments in our Dynatrace account, and we use both AWS tags and environment variables to isolate the different Java-based applications by their respective environments. Unfortunately, RabbitMQ is an Erlang process, and there is no process group detection option for Erlang processes. Hence, we are unable to break out the RabbitMQ processes by environment. Is there any workaround for this? Or could you please add an option to isolate RabbitMQ processes by environment?

6 REPLIES

Jakub_Mierzewsk
Inactive

Hello Arun, do you have a separate RabbitMQ cluster for each environment?

arun_ramalingam
Participant

Hi Jakub,

Yes. While we can break out services and hosts using AWS tags, we cannot break out processes using those same tags. For processes, Dynatrace relies only on environment names. I think Dynatrace should use AWS tags for breaking out processes too.

Many Thanks,

Arun Thilak

arun_ramalingam
Participant

And FYI - these different environments each live in their own VPC inside a single AWS account, so they are fully isolated in their own virtual private clouds.

Jakub_Mierzewsk
Inactive

In the case of RabbitMQ, we should be able to discover clusters and create a separate process group for each cluster. Would that work for you?

Jakub_Mierzewsk
Inactive

While we are working on a fully automated solution, you should be able to set DT_CLUSTER_ID as an environment variable when deploying RabbitMQ. This should split the processes into separate process groups per cluster.
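As a rough sketch of that suggestion, you could export DT_CLUSTER_ID before starting RabbitMQ in each environment. The value "rabbitmq-staging" below is just a placeholder; pick a distinct value per environment (staging, prod, etc.):

```shell
# Sketch: give each environment's RabbitMQ deployment its own
# DT_CLUSTER_ID so OneAgent assigns it to a separate process group.
# "rabbitmq-staging" is a placeholder; use one value per environment.
export DT_CLUSTER_ID="rabbitmq-staging"

# Then start RabbitMQ as usual in that same environment, e.g.:
#   rabbitmq-server -detached

# Confirm the variable is set for the launched process:
echo "DT_CLUSTER_ID=${DT_CLUSTER_ID}"
```

In containerized deployments the same idea applies: pass DT_CLUSTER_ID through the container's environment (e.g. the `environment:` section of a compose file) so the RabbitMQ process inherits it.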

arun_ramalingam
Participant

@Jakub M.

Hi Jakub,

Thank you for the information. I will let my platform team know. In a support ticket regarding this issue, I was told that a Host Group variable is going to be built soon, which will be used to uniquely identify different environments. So we might end up waiting for that as well.

Many Thanks,

Arun Thilak