I have an app server that uses a custom driver to mount a disk to the host. When you run `df -h`, you can see the mount, but Dynatrace does not show it among the detected disks. Is there any documentation on which disks Dynatrace lists in the host overview that might explain why this one isn't showing?
I suppose this is all related to what the documentation says about disks:
We’ve seen such a situation as well (sometimes disks were detected, but not all metrics were collected properly).
Here is the part of the `df -h` output that includes the missing mount points:
479T 83T 396T 18% /***/gridA
55T 16T 39T 29% /***/gridB
They can also be found in `/etc/fstab` and `/etc/mtab`:
/***/ref-fs/***/***/gridA omfs rw,username=***,password=XXXXXXX 0 2
/***/***-fs /***/gridB omfs rw,username=***,password=XXXXXXXXXX 0 2
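Besides `/etc/mtab`, it can help to cross-check the kernel's live mount table, since `/etc/mtab` can go stale while `/proc/mounts` always reflects what the kernel currently has mounted. A minimal sketch, using the root filesystem for illustration (substitute the gridA/gridB mount points on the affected host):

```shell
# /proc/mounts is the kernel's authoritative, live view of mounted
# filesystems; an entry here means the kernel really has the mount.
# The root filesystem '/' is used as a stand-in for the real mount point.
grep ' / ' /proc/mounts
```

On hosts with util-linux, `findmnt <mountpoint>` shows the same information (source device, mount point, filesystem type, options) in a friendlier tabular form.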
There are some file-system types that Dynatrace may not monitor by default, so I'd suggest opening a ticket with support to see whether your file-system type falls into that category. I'm also not sure whether your directory is autofs-mounted, but if it is, one test you could try is accessing the mount directly to see whether that makes it appear in the Dynatrace UI. Another test would be the following:
1) Access the affected host
2) Run `df -k -a`
3) Run `echo $?` to check the exit code of the previous command
The `echo` command should print 0 to indicate that `df` ran without errors. If it prints 1, that could indicate a problem, such as a stale file handle, which needs to be resolved and may be preventing the disk from appearing in the Dynatrace UI.
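The steps above can be wrapped into a small helper that checks one mount point at a time, which narrows down *which* mount is failing rather than only getting an overall exit code. A minimal sketch; `check_mount` and the `/tmp` example path are illustrative, not part of any Dynatrace tooling:

```shell
#!/bin/sh
# Hypothetical helper: report whether df can stat a given mount point.
# A non-zero exit from df on a path is a common symptom of a stale
# file handle on that mount.
check_mount() {
  df -k "$1" > /dev/null 2>&1
  rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "$1: OK (df exit code $rc)"
  else
    echo "$1: PROBLEM (df exit code $rc)"
  fi
  return "$rc"
}

# Example invocation; replace /tmp with the gridA/gridB mount points.
check_mount /tmp
```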
Hope this helps!
I did not accept this as the answer - someone else must've.
The customer ran the command and the return code was 0, but the disk still isn't appearing. I will recommend they open a support case to dig into this further. Thanks!
df -k -a && echo "return code was : $?"
513569159928 101099555360 412469604568 20% /***/gridA
58336290184 16702970784 41633319400 29% /***/gridB
return code was : 0
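One caveat about that one-liner: with `&&`, the `echo` only runs when `df` succeeds, so the command can never print a non-zero code (it prints nothing at all on failure). Running the two commands sequentially with `;` reports the real exit status either way. A quick sketch using `false` to stand in for a failing `df`:

```shell
# '&&' short-circuits: the right-hand side is skipped when the left fails.
false && echo "never printed: $?"

# ';' runs both unconditionally, so $? reflects the first command's status.
false; echo "return code was : $?"   # prints: return code was : 1

# The same pattern applied to df.
df -k -a > /dev/null 2>&1; echo "df return code was : $?"
```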