I have a customer who is asking a question about the operation time we are seeing for some MQ requests coming into a server, like so:
He is stating that these operation times are unrealistic based on the response times he is gathering internally for the associated brokers, which average about 100 ms. My theory is that the metrics he is grabbing and the metrics DCRUM is reporting represent different sets of measurements. So my question is: from a DCRUM perspective, what does this operation time represent? From my understanding, it represents the time it takes to send the MQ message from server A to request queue B, and it does not take into account any of the broker work that happens once that message is on the request queue. Am I right in my assumption? Let me know if I can clarify anything. Thanks for looking into this.
My understanding is that this is measured at a TCP level (in all message protocol decodes) and at a message level. That is, we check the delivery of the message to a queue and the ACK of that message from the target queue at a message level (not necessarily the final destination). But we don't know whether this is the "end" queue where the message is to be processed, or just a queue "hop" along the message's path. In theory, a message can make an endless number of "hops", and we wouldn't necessarily know which one is the final.
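The distinction above can be sketched with a toy simulation (the timings and names here are hypothetical, purely for illustration): the DCRUM-style operation time covers only delivering the message to the request queue and receiving the ACK, while the broker's internally measured response time only starts once the message is already sitting on that queue.

```python
import queue
import threading
import time

request_queue: "queue.Queue[str]" = queue.Queue()
broker_times: list[float] = []

def broker_worker() -> None:
    # The broker picks the message up from the request queue and processes it.
    # Its internally reported response time starts only HERE.
    msg = request_queue.get()
    start = time.monotonic()
    time.sleep(0.1)  # stand-in for ~100 ms of broker work
    broker_times.append(time.monotonic() - start)
    request_queue.task_done()

t = threading.Thread(target=broker_worker)
t.start()

# DCRUM-style "operation time": deliver the message to the request queue
# and get the (simulated) acknowledgement. It ends as soon as the message
# is safely on the queue, before the broker ever touches it.
op_start = time.monotonic()
request_queue.put("mq message")  # put + ACK
operation_time = time.monotonic() - op_start

t.join()
print(f"operation time (put + ACK): {operation_time * 1000:.2f} ms")
print(f"broker processing time:     {broker_times[0] * 1000:.2f} ms")
```

In this sketch the two numbers are measuring disjoint phases of the same request, which is why comparing them directly is misleading: the put+ACK time can be tiny while the broker still reports ~100 ms of work.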
AFAIK, the internal channel protocol of WMQ isn't freely available, and hence our decode can't parse it.
But it all comes back to what you hint at yourself - what is he comparing the measurements to?