Dynatrace version: 5
In all our Solaris paravirtualized zones, we get wrong network information.
Data is reported for a network interface that does not exist: MAC.
This card is also in a warning state: it supposedly transfers more than 2 GB/s on a link with a 1 GB/s maximum.
As a result, all our hosts are in a warning state.
The diagnostic dashboards then become unreadable: warnings everywhere for nothing.
Until this bug is fixed, is it possible to disable network monitoring in host health?
Or, better, to disable just the wrong network interface?
Thanks Reinhard,
I changed <profileName>.profile.xml and it works well.
Another question: do I have to restart the server after editing <profileName>.profile.xml?
I did, but was it necessary?
I have to change it for some other profiles.
Hi Derick, hi David,
This issue has been fixed since 5.0. Unfortunately, if you migrate from a previous version of dynaTrace, the exclusion rules are not updated automatically in your existing System Profiles; only newly created System Profiles contain the new rules.
When editing System Profiles manually, you have to stop the dynaTrace Server first, then edit the profile XML files, and then start the server again. Otherwise the server might overwrite your changes during the restart.
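As an illustration only (I don't have your exact version's schema in front of me, so treat the element and attribute names here as a sketch rather than the literal format of <profileName>.profile.xml), an interface exclusion entry looks along these lines:

```xml
<!-- Sketch: element/attribute names are illustrative, not the exact
     dynaTrace profile schema. The idea is an exclusion entry that
     removes the bogus adapter from host health monitoring, alongside
     the default exclusions (lo0 etc.) already present in the file. -->
<networkmonitoring>
  <excludedinterfaces>
    <interface name="phys"/>
  </excludedinterfaces>
</networkmonitoring>
```

Whatever the exact tag names are in your file, the pattern is the same: locate the existing exclusion list and add an entry for the offending interface while the server is stopped.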
I am using a new installation of dynaTrace 5.5. I applied the exclusion rule as suggested above in order to prevent the incident that was continuously being raised for the network interface on Solaris. No exclusion rules had been applied before that, and there was no version upgrade.
Wesbank recently installed a Java agent on a new Oracle Exalogic server, running Solaris 11. The server has the following network interfaces configured:
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000
fnb_vnic1_man1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 172.18.195.234 netmask ffffff00 broadcast 172.18.195.255
fnb_ib1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 65520 index 3 inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255 groupname fnb_bondib1
fnb_ib2: flags=61000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE> mtu 65520 index 5 inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255 groupname fnb_bondib1
fnb_bondib1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 65520 index 4 inet 192.168.10.210 netmask ffffff00 broadcast 192.168.10.255 groupname fnb_bondib1
fnb_vnic1_cli1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 inet 172.18.194.156 netmask ffffff00 broadcast 172.18.194.255
fnb_vnic2_cli2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7 inet 0.0.0.0 netmask 0
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1 inet6 ::1/128
fnb_vnic1_man1: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 2 inet6 ::/0
fnb_bondib1: flags=28002000840<RUNNING,MULTICAST,IPv6,IPMP> mtu 65520 index 4 inet6 ::/0 groupname fnb_bondib1
fnb_ib1: flags=20002000841<UP,RUNNING,MULTICAST,IPv6> mtu 65520 index 3 inet6 ::/0 groupname fnb_bondib1
fnb_ib2: flags=20062000841<UP,RUNNING,MULTICAST,IPv6,STANDBY,INACTIVE> mtu 65520 index 5 inet6 ::/0 groupname fnb_bondib1
fnb_vnic1_cli1: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 6 inet6 ::/0
In short, you can see the IPv4 and IPv6 interfaces. There is the normal loopback (lo0), as well as two InfiniBand interfaces (fnb_ib1 and fnb_ib2) grouped into fnb_bondib1.
The problem we are seeing is in the dynaTrace Host Health screen for the new host:
dynaTrace constantly shows the network for the host as critical (red), based on the performance of an interface called “phys”. It also lists the interfaces it has detected for the host.
There is obviously something wrong with how dynaTrace is recognising and reporting the interface performance, probably related to the grouping of the InfiniBand interfaces.
@Derick: At the moment we don't have support for InfiniBand. If you need this, please file an RFE.
In the meantime I would suggest excluding the specific "non-present" adapter, phys.
In any case, it would be helpful to have the output of 'kstat -c net' so we can check what's going on.
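If it helps with the triage, here is a minimal sketch that tallies per-interface byte counters from `kstat -c net -p` output. The sample lines and the `link:0:...` module prefix are made up for illustration; the only assumption is the usual `module:instance:name:statistic<TAB>value` layout of `kstat -p`.

```python
from collections import defaultdict

def per_interface_bytes(kstat_p_output: str) -> dict:
    """Return {interface: {'rbytes': n, 'obytes': n}} from `kstat -p` lines.

    Each parseable line has the form module:instance:name:statistic<TAB>value;
    we only keep the rbytes64/obytes64 counters per interface name.
    """
    stats = defaultdict(lambda: {"rbytes": 0, "obytes": 0})
    for line in kstat_p_output.splitlines():
        parts = line.split()
        if len(parts) != 2:
            continue
        key, value = parts
        fields = key.split(":")  # module:instance:name:statistic
        if len(fields) != 4:
            continue
        _, _, name, stat = fields
        if stat in ("rbytes64", "obytes64"):
            # "rbytes64" -> "rbytes", "obytes64" -> "obytes"
            stats[name][stat[0] + "bytes"] += int(value)
    return dict(stats)

# Illustrative sample only, not real Exalogic output.
# On the host you would feed in: kstat -c net -p
sample = """\
link:0:fnb_ib1:rbytes64\t1024
link:0:fnb_ib1:obytes64\t2048
link:0:fnb_bondib1:rbytes64\t4096
"""
print(per_interface_bytes(sample))
```

Comparing these raw counters (sampled twice and differenced over time) against what the Host Health screen reports per adapter should make it obvious whether "phys" corresponds to anything real.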