
Installing the AMD software erased bond network interfaces configuration


While installing the AMD on a new RedHat 7 machine that the infrastructure team prepared for us with a bond interface as the communication port, the connectivity over that bond interface was somehow destroyed after the machine rebooted. We then had to eliminate the bond interface and use only one of the physical interfaces.
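For context, the bond the infrastructure team set up was presumably the standard RHEL 7 ifcfg style. A minimal sketch of what such a configuration looks like (device names, mode, and addresses here are illustrative assumptions, not the actual files from that machine):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (bond master)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one of the slaves)
DEVICE=eth0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

If the installer rewrites or removes these files, the bond does not come back after a reboot, which matches the symptom described above.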

Is there any explanation to this behavior or any remark on it on the documentation that I missed?

The infrastructure guys were quite sure the same thing also happened last time, when we installed the AMD software on the former RedHat 6 machine.

Thanks in advance





I'm pretty sure that when you install the software, rtminst overwrites any network settings you may have had, and I think it even goes as far as removing some libraries. If you check the install log on the AMD, it should mention any modifications it made to Linux packages.

The last time I did an install, I ran across something similar. I had set a static IP, then installed the software, and it broke my remote connection; in the logging I saw that it had disabled or removed the network settings.

Hi Matthew,

I will look at the log.

If this is the case, that the installation removes all the network definitions, I think there should be some kind of remark about this in the documentation... but I didn't notice one.


Dynatrace Pro

My experience has not been the same. Usually when I install the AMD software the interfaces are left alone, bonds and all.

When I launch RTMINST to identify the interfaces, I select the bond as the communication interface.

I can't recall installing a new AMD (High Speed) with a bond, but the classic AMD always seemed to install OK.

I just tried installing an AMD (RHEL 7.2, AMD 12.4.12) in my VM environment. I created 3 interfaces and bonded 2 of them. Once I installed the AMD software the interfaces were as I expected them - a bonded interface and a sniffing interface. The AMD installation did not modify the interface configuration.

Which did you install, a new HS AMD or the Classic AMD, and on which version of RHEL? I wonder if that makes a difference.

Hi John,

This was a classic AMD installation


What version of DCRUM and RHEL?

DCRUM 12.4.12

RedHat EL 7.3


This happened to me with 12.4.7 on RedHat 6, but I understand it was fixed in 12.4.10...

Support gave me a patch to fix this for 12.4.7

You should log a support ticket

Hi Antony ,

You are right, I will open a ticket on that issue.



A long time ago (many years) I installed on a bonded RH box. It had a lot of problems, as the bond also wanted to enable offloading (TOE and other "nasty" things).

Why these things are "nasty"?

The AMD prefers packets as "raw" as possible in order to decode them correctly. Compare it to setting up a SPAN without bringing in the VLAN header: you will be blind to what goes on in the VLANs. The same goes for offloading: you will not see network-specific information (or at least you'll miss some of it) and only get the application payload.
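To make the offloading point concrete: on Linux, offload features can be inspected and switched off per NIC with ethtool. A small sketch that builds the disable command (the feature list here is an assumption; the exact set depends on your driver, and the printed command must be run as root against the sniffing interface):

```shell
#!/bin/sh
# Build the ethtool command that disables common offload features
# (TSO, GSO, GRO, LRO) on a capture NIC, so the AMD sees raw frames
# rather than kernel-reassembled "super packets".
build_offload_off_cmd() {
    ifc="$1"
    echo "ethtool -K $ifc tso off gso off gro off lro off"
}

# Inspect the current state first with:  ethtool -k eth1
# Then print (and run) the disable command:
build_offload_off_cmd eth1
```

Running the block prints `ethtool -K eth1 tso off gso off gro off lro off`, which you would then execute on the AMD itself.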

Hence, I try to have as few things as possible running on the AMD, and I prefer to have as much control of the NIC as possible. It's slightly historic, but our own drivers always gave more control than the generic ones (and better performance). That might have changed altogether in the last couple of releases, and especially with the HS AMD.

But think of it yourself: you don't really know what the bond driver is doing or how it is doing its stuff, right?

You also add one more shim to pass through before getting to the packet and letting the AMD do its analysis (think about why anti-virus scanners are not recommended on AMDs).

Is that good or bad for your analysis?

The AMD is a special kind of box and should perhaps be treated as a "black-box" by the provision people.

My 2 cents of thinking - share what your conclusion becomes 🙂

Hi Ulf

The infrastructure guys looked at the machine while installing it and decided it also needed redundancy in the NICs, as they usually do with all the rest of their installations at this site.

Actually, I didn't check the machine's NIC setup before I started the installation. That was my mistake.

On the other hand, I didn't find any remark in the documentation saying "check the NIC setup before installation, it might be destroyed"...
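Until the documentation carries such a warning, one defensive step is to snapshot the interface configs before running the installer, so a wiped bond can be restored by hand afterwards. A minimal sketch (the backup location is an assumption; the network-scripts path is the RHEL 7 default):

```shell
#!/bin/sh
# Copy all ifcfg-* files from a network-scripts directory into a backup
# directory, preserving attributes, so they can be restored after the
# AMD installer has rewritten them.
backup_ifcfg() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    cp -a "$src"/ifcfg-* "$dest"/
}

# Typical use on RHEL 7, run as root just before launching rtminst:
# backup_ifcfg /etc/sysconfig/network-scripts /root/netcfg-backup
```

Restoring is then just copying the files back and restarting the network service.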

Thanks for sharing your experience and thoughts


Dynatrace Pro

Hello guys,

We acknowledge this is a known issue, with a plan to address it in the upcoming 17.00 release as well as in the next service pack for the 12.4 release, which is 12.4.13. Meanwhile, please use one of the available workarounds as described in DC RUM 12.4 Known Issues.

The fix is coming in 12.4.13, which we plan to release after the Easter holiday.

12.4.13 still breaks the bond interface configuration. No improvement.