05 Apr 2024 08:57 PM - last edited on 08 Apr 2024 08:41 AM by MaciejNeumann
I started ingesting some JSON, not that big, but with a lot of attributes. As a result, I get a lot of attr_count_trimmed warnings.
I figured out that when this occurs, lots of things start to break, including processing and extraction. Even attributes that I put in myself are lost...
It seems that all of this is due to Dynatrace automatically processing the JSON and creating a lot of attributes that I really don't need. A simple solution would seem to be to stop this automatic attribute creation...
Any ideas on how to get past this attr_count_trimmed?
08 Apr 2024 10:48 AM
The most desirable solution is to switch to Grail; the limit can then be increased.
If that is not an option, put the JSON in the "content" field as a string. Then, in log processing, you can extract only the attributes that are relevant.
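A minimal sketch of that workaround, assuming you build the ingest payload yourself (field names like `log.source` and the event shape below are illustrative, not from this thread): instead of sending the nested JSON as-is, so that every leaf becomes its own log attribute, keep a few deliberate attributes and pack the whole object into `content` as one string.

```python
import json

# Hypothetical nested event with many attributes, the kind that
# triggers attr_count_trimmed when ingested as-is.
event = {
    "timestamp": "2024-04-05T20:57:00Z",
    "request": {"id": "abc-123", "path": "/orders", "headers": {"x-trace": "t1"}},
    "metrics": {"latency_ms": 42, "bytes": 1024},
}

# Keep only a few deliberate top-level attributes and pack the rest
# into "content" as a JSON string, so Dynatrace does not flatten
# every leaf into its own log attribute.
log_record = {
    "timestamp": event["timestamp"],
    "log.source": "my-app",        # illustrative attribute name
    "content": json.dumps(event),  # the full payload, as one string
}

print(isinstance(log_record["content"], str))  # True: content is a plain string
print(len(log_record))                         # 3 top-level attributes only
```

The relevant fields can then be pulled back out of `content` in a log processing rule.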
08 Apr 2024 07:11 PM
Hello,
To add on regarding how to do parsing in log processing, please see the documentation here:
Log processing examples - Dynatrace Docs
It gives many examples of what kinds of information you can extract with parsing rules and how to formulate them.
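Actual rules are written in Dynatrace's log processing language, per the documentation linked above; as a plain-Python analogue only, the idea of "parse the JSON in content and promote just a whitelist of fields" looks like this (field names are illustrative assumptions):

```python
import json

# Whitelist of dot-notation fields worth promoting to log attributes.
RELEVANT = {"request.path", "metrics.latency_ms"}

def flatten(obj, prefix=""):
    """Flatten nested dicts into dot-notation keys, the way leaf values
    would each become an individual log attribute."""
    out = {}
    for key, value in obj.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, dotted + "."))
        else:
            out[dotted] = value
    return out

def process(record):
    """Parse the JSON stored in "content" and keep only whitelisted fields."""
    parsed = json.loads(record["content"])
    extracted = {k: v for k, v in flatten(parsed).items() if k in RELEVANT}
    return {**record, **extracted}

record = {"content": json.dumps({"request": {"path": "/orders"},
                                 "metrics": {"latency_ms": 42, "bytes": 1024}})}
print(process(record)["request.path"])  # /orders
```

Everything not on the whitelist stays inside the `content` string instead of becoming an attribute.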
12 Apr 2024 08:16 PM
This tenant wasn't yet on Grail; one more good reason to migrate.
Still wondering, though, what logic Dynatrace applies for the existing 50-attribute limit. I tried to figure it out, but it seems to be random... Besides "content", I have several other attributes, and which ones survive appears random, which is not good. There should be a clear indication of which attributes are discarded, because that can leave us without data that is really there but won't make it into the query results...
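The trimming order isn't spelled out in this thread, but it's easy to see why the limit trips: once every JSON leaf becomes its own attribute, a modestly nested object blows past 50. A sketch, assuming leaf-level counting (the cap value and counting scheme here are assumptions for illustration):

```python
# Why attr_count_trimmed fires: count the leaf values that would each
# become a separate log attribute after flattening.
LIMIT = 50  # assumed attribute cap, per the thread

def count_leaves(obj):
    """Count leaf values in a nested JSON-like structure."""
    if isinstance(obj, dict):
        return sum(count_leaves(v) for v in obj.values())
    if isinstance(obj, list):
        return sum(count_leaves(v) for v in obj)
    return 1

# 6 sections x 10 fields = 60 leaf attributes, already over the cap
payload = {f"section{i}": {f"field{j}": j for j in range(10)} for i in range(6)}

print(count_leaves(payload))          # 60
print(count_leaves(payload) > LIMIT)  # True: some attributes will be trimmed
```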
12 Apr 2024 08:42 PM
BTW, if anyone wants to see if this is impacting you, check:
https://community.dynatrace.com/t5/Troubleshooting/Why-don-t-ingested-logs-look-as-expected/ta-p/230...