
Solaris host health: wrong network utilization value

david_beysang
Newcomer

Hi,



Dynatrace version: 5



In all our Solaris paravirtualized zones, we get wrong information about the network.

There is data for a network interface that does not exist: MAC.

And this card is in a warning state: it reports using more than 2GB/s of data against a maximum of 1GB/s.









The result is that all our hosts are in a warning state.

The diagnostic dashboards are then unreadable: warnings everywhere for nothing.



Until this bug is corrected, is it possible to deactivate network monitoring in host health?

Or better, to deactivate just the wrong network interface?

10 REPLIES

r_weber
Pro

David,



we are already aware of this issue and a fix is planned for Solaris Zones.



In the meantime you can exclude this interface by adding an exclusion rule as described here.
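For orientation, the rule ends up as an entry in the System Profile XML (<profileName>.profile.xml). The element and attribute names in this sketch are placeholders rather than the actual dynaTrace schema, so take the exact names from the linked documentation:

<!-- hypothetical sketch only: element and attribute names are illustrative, not the real schema -->
<!-- excludes the non-existent "MAC" interface from host health monitoring -->
<networkinterfaceexclusions>
  <exclusion interface="MAC" />
</networkinterfaceexclusions>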



Reinhard

david_beysang
Newcomer

Thanks Reinhard,



I changed <profileName>.profile.xml and it works well.



Another question: do I have to restart the server after editing <profileName>.profile.xml?



I did it, but was it necessary?

I have to change it for some other profiles.


derick_hewetson
Organizer

Looks like I'm not the only one experiencing this problem :-)

andreas_grabner
Dynatrace Leader

Did your problem get resolved with the description from Reinhard?

david_beysang
Newcomer

yes

derick_hewetson
Organizer

The workaround has stopped the alerts from being generated, but when will the Fixpack be available?

georg_schau2
Inactive

Hi Derick, hi David,

This issue has already been fixed since 5.0. Unfortunately, if you migrate from a previous version of dynaTrace, the exclusion rules are not updated automatically in your existing System Profiles. New System Profiles, however, will contain the new rules.

When editing System Profiles manually, you have to stop the dynaTrace Server first, then edit the profile XML files, and then start the server again. Otherwise the server might overwrite your changes during the restart.
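As a sketch of that sequence on a Unix install (the /opt/dynatrace path and the dynaTraceServer init script name below are assumptions; your installation may differ):

# stop the server first so it cannot overwrite the profile on shutdown
/etc/init.d/dynaTraceServer stop

# edit the System Profile XML (path is installation-specific)
vi /opt/dynatrace/server/conf/profiles/<profileName>.profile.xml

# start the server again; it reads the edited profile on startup
/etc/init.d/dynaTraceServer start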

Regards,
    Georg

derick_hewetson
Organizer

Hi Georg,

I am using a new installation of dynaTrace 5.5. I applied the exclusion rule as suggested above in order to prevent the continuous incident being raised for the network interface on Solaris. No exclusion rules were applied prior to that, and there was no version upgrade.

Wesbank recently installed a Java agent on a new Oracle Exalogic server, running Solaris 11. The server has the following network interfaces configured:

[orawebsp@edqrbfnbdev01]:/export/home/orawebsp>/sbin/ifconfig -a

 

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000

fnb_vnic1_man1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 172.18.195.234 netmask ffffff00 broadcast 172.18.195.255

fnb_ib1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 65520 index 3 inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255 groupname fnb_bondib1

fnb_ib2: flags=61000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,STANDBY,INACTIVE> mtu 65520 index 5 inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255 groupname fnb_bondib1

fnb_bondib1: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 65520 index 4 inet 192.168.10.210 netmask ffffff00 broadcast 192.168.10.255 groupname fnb_bondib1

fnb_vnic1_cli1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6 inet 172.18.194.156 netmask ffffff00 broadcast 172.18.194.255

fnb_vnic2_cli2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7 inet 0.0.0.0 netmask 0

 

lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1 inet6 ::1/128

fnb_vnic1_man1: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 2 inet6 ::/0

fnb_bondib1: flags=28002000840<RUNNING,MULTICAST,IPv6,IPMP> mtu 65520 index 4 inet6 ::/0 groupname fnb_bondib1

fnb_ib1: flags=20002000841<UP,RUNNING,MULTICAST,IPv6> mtu 65520 index 3 inet6 ::/0 groupname fnb_bondib1

fnb_ib2: flags=20062000841<UP,RUNNING,MULTICAST,IPv6,STANDBY,INACTIVE> mtu 65520 index 5 inet6 ::/0 groupname fnb_bondib1

fnb_vnic1_cli1: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 6 inet6 ::/0

 

In short, you can see the IPv4 and IPv6 interfaces. There is the normal loopback (lo0), as well as two Infiniband interfaces (fnb_ib1 and fnb_ib2) grouped into fnb_bondib1.
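For reference, the IPMP grouping itself can be verified on Solaris 11 with ipmpstat (a standard Solaris 11 utility; output columns vary slightly by release):

# list the IPMP groups and the interfaces they contain
ipmpstat -g

# show the state of each underlying interface
ipmpstat -i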

 

The problem we are seeing is in the dynaTrace Host Health screen for the new host:

dynaTrace is constantly showing that the network for the host is critical (red), based on the performance of an interface called “phys”. It also lists the following interfaces for the host:

 

Interface        Speed    In        Out
phys             10Gb/s   6.5Gb/s   27.6Gb/s
fnb_ib1          32Gb/s   0.01Gb/s  0.02Gb/s
fnb_ib2          32Gb/s   0Gb/s     0Gb/s
fnb_vnic1_cli1   10Gb/s   0Gb/s     0Gb/s
fnb_vnic2_cli2   10Gb/s   0Gb/s     0Gb/s
fnb_vnic1_man1   1Gb/s    1Gb/s     0Gb/s

 

There is obviously something wrong with how dynaTrace is recognising and reporting the interface performance, probably related to the grouping of the Infiniband interfaces.

Regards,

Derick

david_beysang
Newcomer

Hello,

 

5.0 or 5.5?

We are on 5.0, and it was not a migration from a previous version.

georg_schau2
Inactive

@Derick: At the moment we don't have support for InfiniBand. If you need this, please file an RFE.
In the meantime I would suggest excluding the specific "non-present" adapter phys.

Anyway, it would be nice to have the output of 'kstat -c net' to check what's going on.
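For anyone collecting that output, a minimal way to capture it (kstat's -c option selects the statistics class, -p prints parseable name/value pairs):

# dump all network-class kernel statistics in parseable form
kstat -c net -p > kstat_net.out

# or filter for the suspicious "phys" link only
kstat -c net -p | grep -i phys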