This is a question we get quite a bit. Let me see if I can help.
Response time is the median response time for all executions of a certain statement within the measurement period. So if you have a response time of 40 ms for your analysis period, it means half of your statement's executions ran in 40 ms or less.
Total time is (loosely) calculated as response time * number of executions, which shows the weighted impact of execution counts. So a statement with a low response time that is executed thousands of times in the period can have quite a high total time and consume a large share of resources.
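To make the weighting concrete, here's a minimal sketch with invented numbers (the statement names and figures are hypothetical, not from your data):

```python
# Two hypothetical statements observed over the same period.
fast_but_hot = {"response_ms": 5, "executions": 10_000}
slow_but_rare = {"response_ms": 500, "executions": 10}

def total_time_ms(stmt):
    # The "loose" formula from above: total time ~= response time * executions.
    return stmt["response_ms"] * stmt["executions"]

print(total_time_ms(fast_but_hot))   # 50000 ms
print(total_time_ms(slow_but_rare))  # 5000 ms
```

Even though each execution of the first statement is 100x faster, it ends up consuming 10x the total database time, which is exactly the case the total time column is designed to surface.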
I see your issue. I have attached a screenshot here to illustrate your point.
This is what I meant by the term "loosely". The numbers show the relative relationships, but won't tie out if you try to apply an equation.
These numbers are over a 30-minute sample, so it's tempting to try something like this:

(Total Time * 30) / (Executions * 30) = Response Time (normalized to seconds)

But since these numbers are based on medians over the period, it won't tie out.
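Here's a small sketch of why median-based numbers can't be made to tie out. The per-execution times are invented for illustration; the point is that one slow outlier moves the true total but not the median:

```python
import statistics

# Hypothetical per-execution times (ms) for one statement in the period,
# skewed by a single slow outlier.
exec_times_ms = [10, 10, 10, 10, 1000]

response_time = statistics.median(exec_times_ms)    # reported: 10 ms
actual_total = sum(exec_times_ms)                   # 1040 ms really spent
estimated = response_time * len(exec_times_ms)      # 10 * 5 = 50 ms

# The median hides the outlier, so response time * executions
# lands nowhere near the true total time.
```

The more skewed the execution times within the sample, the further apart the estimate and the true total drift, which is why the columns are best used for relative sorting rather than arithmetic.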
So the best use case for the numbers is relative sorting. You can find the most-executed statement, the statement that takes the longest per execution (Response Time), and the statement that takes the most time in the database (Total Time). Once you find the statement, you can tune it in the DB tool of your choice and test with some binds.
PS: The fetches look unusual too. 50,000 rows fetched one row at a time? Could be the DISTINCT.
The fetch count indicates how often the application requests data from the database. Ideally you have one - or a few - fetches per execution. In your case it seems to be 50,000 per execution. This means you have 50,000 network round trips.