Hi, we monitor a large retail web site and the client requires that we filter out traffic from crawler bots such as Googlebot and Yahoo's Slurp. Obviously we could do this by IP address, but there seem to be a lot of IP addresses to add, and these may change. Is there any other way of doing this? We currently use version 11.7.
Tricky question. As you've found, IPs change, and you cannot rely on the crawler to identify itself through the header either.
If it is a requirement, then it has to be balanced against the effort of scavenging through the data and looking for repetitions. Repetition is probably the most apparent common denominator for crawlers: they are all automated and run on a clock, so they will return after a fixed interval.
Sorry to say, but there is no "catch all" for this request.
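To illustrate the repetition idea above: a minimal sketch (not Dynatrace functionality; the function name, thresholds, and timestamps are all hypothetical) that flags a client whose requests arrive at near-constant intervals, which is the "returns after X time" trait of scheduled crawlers:

```python
from statistics import pstdev

def looks_like_scheduled_bot(timestamps, max_jitter=2.0, min_hits=5):
    """Return True if a client's inter-arrival times are suspiciously regular.

    timestamps: request times in seconds for one client (e.g. one IP).
    max_jitter: allowed standard deviation of the gaps, in seconds.
    min_hits:   require enough samples before judging.
    """
    if len(timestamps) < min_hits:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # Near-zero spread in the gaps means the client runs on a clock.
    return pstdev(gaps) <= max_jitter

# A client hitting the site roughly every 60 seconds is flagged:
print(looks_like_scheduled_bot([0, 60, 121, 180, 241, 300]))  # True
# Irregular, human-looking traffic is not:
print(looks_like_scheduled_bot([0, 5, 200, 203, 900, 950]))   # False
```

In practice you would feed this per-IP (or per-session) timestamps exported from your monitoring data; the thresholds would need tuning against real traffic.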
How does Dynatrace identify bots?
Does activating this option also exclude our internal bots?
Our bots have a user agent like this: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; HB-Prerender-Bot/1.0; )
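Dynatrace's exact detection logic isn't documented in this thread, but user-agent token matching is the usual approach, so whether an internal bot is caught depends on whether its token is in the match list. A minimal sketch, assuming a hand-maintained pattern (the token list here is illustrative, with "HB-Prerender-Bot" taken from the user agent above):

```python
import re

# Hypothetical token list: the internal bot's token plus a few
# well-known public crawler tokens. Case-insensitive match.
BOT_PATTERN = re.compile(r"(HB-Prerender-Bot|Googlebot|Slurp|bingbot)", re.I)

def is_bot(user_agent: str) -> bool:
    """Return True if the user agent contains a known bot token."""
    return bool(BOT_PATTERN.search(user_agent))

ua = ("Mozilla/5.0 AppleWebKit/537.36 "
      "(KHTML, like Gecko; compatible; HB-Prerender-Bot/1.0; )")
print(is_bot(ua))  # True
print(is_bot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```

If you do not want internal bots excluded, the corollary is to keep their tokens out of whatever pattern or list the product (or your own filtering) uses.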