After reading the paragraph below from the documentation on OneAgent and code-module resource requirements, and given some recent experiences, I wonder whether OneAgent full-stack should be enabled at all, or code modules disabled, while heavy load and stress (performance) tests run against an application environment. OneAgent causes some CPU overhead, but the memory impact in particular can be large. I noticed this on IIS application pools.
This may also be relevant to the thread "Solved: Hotspot section in Service details page of Dynatrace does not show data? - Dynatrace Communi..."
OneAgent code modules are optimized to use memory efficiently and to free resources when they're no longer needed, so as to burden application execution as little as possible. Memory demand may therefore vary over the application's execution time.
Depending on the OneAgent code module, memory demand may peak at application startup. This is especially true for .NET technology: preparing .NET assemblies for monitoring causes the memory footprint to spike, because assembly code temporarily resides twice in memory. This is a known characteristic of Microsoft .NET technology and can't be mitigated by Dynatrace OneAgent.
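Since that startup spike is transient, one way to check its size in your own environment is to sample the worker process's resident memory periodically during startup and compare the transient peak with the steady state afterwards. A minimal analysis sketch (the function name, the warm-up window, and the sample values are illustrative assumptions, not anything from Dynatrace):

```python
import statistics

def startup_spike(samples_mb, warmup_seconds, interval_seconds=1):
    """Split a series of periodic RSS samples (in MB) into a startup
    window and a steady-state window, and report the transient peak.

    `warmup_seconds` is how long you allow for agent/assembly
    preparation; the cutoff index is derived from the sampling
    interval. Hypothetical helper for illustration only.
    """
    cutoff = int(warmup_seconds / interval_seconds)
    startup, steady = samples_mb[:cutoff], samples_mb[cutoff:]
    return {
        "startup_peak_mb": max(startup),          # transient spike
        "steady_state_mb": statistics.mean(steady),  # after warm-up
    }

# Example with made-up samples: a spike to 400 MB during assembly
# preparation, settling to ~145 MB afterwards.
print(startup_spike([100, 400, 350, 150, 140, 145], warmup_seconds=3))
```

If the steady-state figure is acceptable for your test hosts, the startup spike alone is rarely a reason to disable the code module; it just means the memory headroom check should be done against the peak, not the average.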
Of course, you should be running OneAgents during performance and stress tests. Yes, there is some overhead (both CPU and memory), but without the agent, how would you analyze your performance test runs? For performance tests you typically need to give the application time to warm up after startup anyway; that is common practice and not specific to observability tools.
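The warm-up point can be made concrete in the analysis step: discard the initial portion of latency samples (JIT compilation, caches, and agent instrumentation all settling) and compute percentiles over the remainder only. A sketch under assumed names, using a simple nearest-rank percentile:

```python
def steady_state_latency(samples, warmup_fraction=0.2):
    """Drop the first `warmup_fraction` of latency samples as warm-up
    and report nearest-rank p50/p95 over the steady-state remainder.

    Hypothetical helper for illustration; real load-test tools
    (JMeter, k6, etc.) offer equivalent warm-up/ramp-up settings.
    """
    cutoff = int(len(samples) * warmup_fraction)
    steady = sorted(samples[cutoff:])

    def pct(p):
        # nearest-rank index, clamped to the last element
        return steady[min(len(steady) - 1, int(p / 100 * len(steady)))]

    return {"p50": pct(50), "p95": pct(95), "count": len(steady)}

# Example: 100 synthetic samples, first 20 discarded as warm-up.
print(steady_state_latency(list(range(100))))
```

The exact warm-up window matters less than applying the same window to the baseline run and the agent-enabled run, so the comparison isolates the agent's overhead rather than startup effects.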
The only danger I see is running a very large number of application pools on one machine. I had a case where a client ran more than 200 application pools on a single machine, and just enabling the agent caused so much overhead that the performance tests couldn't pass.
However, if you stick to Julius' suggestion and keep the architecture properly scaled, it shouldn't affect the tests.