03 Mar 2026 03:29 PM - edited 03 Mar 2026 03:31 PM
Hello community,
I have an array which is built as follows (it contains 2 elements):
[
{
"text": "Brave new world xx.xx.xxxx of xxxxxx and",
"id": "021(anastasia)",
"index": "0"
},
{
"text": "Brave new world but old",
"id": "404(anastasia)",
"index": "1"
}
]
and my goal is to iterate element extraction over the array, so that each element is displayed as its own set of columns.
For instance, if I use the DQL function arrayElement, I can extract one single element, which will be displayed on a single row as follows:
What if I want multiple columns for each element of the previous array?
Hope someone can help with this 🙂 @krzysztof_hoja
Yann
03 Mar 2026 04:20 PM
I am not sure if you need this:
data json:"""{"bla":
[
{
"text": "Brave new world xx.xx.xxxx of xxxxxx and",
"id": "021(anastasia)",
"index": "0"
},
{
"text": "Brave new world but old",
"id": "404(anastasia)",
"index": "1"
}
]}"""
| expand bla
or this (when you know you have just 2 elements):
| fields e1 = arrayElement(bla,0), e2=arrayElement(bla,1)
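As a rough analogy (Python, not DQL; the literal array mirrors the example data from the question), the two extraction styles behave like this:

```python
# Illustrative analogy only; `bla` stands for the parsed array field from the DQL snippet.
bla = [
    {"text": "Brave new world xx.xx.xxxx of xxxxxx and", "id": "021(anastasia)", "index": "0"},
    {"text": "Brave new world but old", "id": "404(anastasia)", "index": "1"},
]

# `| expand bla` -> one row per array element
rows = [elem for elem in bla]

# `| fields e1 = arrayElement(bla, 0), e2 = arrayElement(bla, 1)` -> fixed columns on one row
e1, e2 = bla[0], bla[1]

print(e1["id"], e2["id"])
```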
or this which works for any array:
| fields bla = concat("\"", iIndex(), "\": ", toString(bla[]))
| fields bla=concat("{",arrayToString(bla, delimiter:","),"}")
| parse bla,"JSON:bla"
| fieldsFlatten bla
| fieldsRemove bla
03 Mar 2026 05:49 PM
Hey, this is the solution: the "works for any array" one at the bottom that you provided.
| fields bla = concat("\"", iIndex(), "\": ", toString(bla[]))
| fields bla=concat("{",arrayToString(bla, delimiter:","),"}")
| parse bla,"JSON:bla"
| fieldsFlatten bla
| fieldsRemove bla
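For anyone less familiar with DQL, here is a rough Python sketch of what this pipeline does conceptually (an analogy only, not DQL; the flattened column names such as `bla.0.text` are an assumption about how fieldsFlatten names the output):

```python
import json

# The array from the original question.
records = [
    {"text": "Brave new world xx.xx.xxxx of xxxxxx and", "id": "021(anastasia)", "index": "0"},
    {"text": "Brave new world but old", "id": "404(anastasia)", "index": "1"},
]

# Steps 1-2: key each element by its position and wrap everything in one object,
# which is what the concat(iIndex(), ...) + arrayToString + parse steps build in DQL.
keyed = {str(i): elem for i, elem in enumerate(records)}

# Step 3: flatten the nested object into one column per leaf value (fieldsFlatten).
columns = {
    f"bla.{idx}.{field}": value
    for idx, elem in keyed.items()
    for field, value in elem.items()
}

print(json.dumps(columns, indent=2))
```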
I'm really grateful for your help. You make it look easy 😄
From your other post on the community I understood that I had to build a JSON object, but I had no idea how to do this precisely.
I have a few more questions, if you feel like answering 🙂
1) where did you learn to craft JSON via DQL the way you did?
2) do you think all these transformations will be supported in OpenPipeline?
3) do you think there is a more automated way of reaching this same result via arrayElement()? [The logic would be: for each array element, create a new flattened column.] (I experimented a lot with iAny() / iIndex(), but with no success; searching the docs and the community, I haven't found much.)
Regards,
Yann
03 Mar 2026 10:04 PM
1) It's just my own invention 😉
2) I would ask a different question here. I understood your question more as a visualization need: our tables do not display more complex arrays well. I personally see data stored as an array as superior to the record/object form I used to make this data more readable; it is just more convenient to process data when it is an array. So why would you do such a conversion in OpenPipeline? But I think such a transformation is already allowed in OP.
3) Right now, no. I see a need for object->array and array->object functions. The first class could give you an array of the keys in an object, or an array of key-value pairs. The second could construct an object from an array of key-value pairs. Such an array->object function would be useful for your need. Your array would first have to be enriched with key names, and this is doable today with an iterative expression.
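In Python terms, the two proposed function classes could be sketched like this (illustrative only; none of these exist as DQL functions today):

```python
# Hypothetical object->array and array->object helpers, sketched in Python.
obj = {"id": "021(anastasia)", "index": "0"}

# object -> array: an array of keys, or an array of key-value pairs
keys = list(obj)               # ["id", "index"]
pairs = list(obj.items())      # [("id", "021(anastasia)"), ("index", "0")]

# array -> object: construct an object back from key-value pairs
rebuilt = dict(pairs)

print(keys, rebuilt == obj)
```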
04 Mar 2026 07:17 AM
1) you have surely reached a deeper level of understanding of how these data structures behave. I hope I'll reach that level too.
2) I share your view. For me, the true reason for making this change systematically in OpenPipeline is ease of reading for non-IT / non-technical people who are not into Dynatrace / DQL.
I do agree that the array, once/if well built, is superior and easier to keep in one single field.
Thank you for all the support and feedback,
Yann
04 Mar 2026 11:13 AM
To my surprise, arrayToString is not a function supported by OpenPipeline.
04 Mar 2026 12:14 PM
Surprise indeed. It is a new function, so maybe this is only temporary.
An alternative not requiring arrayToString:
| fields bla = concat( iIndex(), ":", toString(bla[]))
| fields bla = replaceString(toString(bla),"\\\"","\"")
| parse bla,"""'[' KVP{ '"' LD:key ':' JSON:value ('", '|'"]') }:bla """
| fieldsFlatten bla
| fieldsRemove bla
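A small Python sketch of why the replaceString step is needed here (an analogy only; the `raw` string is a hypothetical toString() output with escaped inner quotes):

```python
# toString() serializes each record with escaped inner quotes, e.g. 0:{\"id\": ...};
# the replaceString step un-escapes them so the KVP/JSON parse can succeed.
raw = '0:{\\"id\\": \\"404(anastasia)\\", \\"index\\": \\"1\\"}'

# replaceString(toString(bla), "\\\"", "\"") in DQL ~ str.replace in Python
unescaped = raw.replace('\\"', '"')

print(unescaped)
```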
04 Mar 2026 12:19 PM - edited 04 Mar 2026 12:22 PM
Most probably. After the conversation we had here, I decided to follow what you said in your point 2) [seeing "data stored as array superior"].
I'll keep the field as an array in OpenPipeline, so the original Bizevent will be thinner. I'll later distribute it into as many columns as there are array elements in the downstream apps: Dashboards and Notebooks (and some automated reports via Excel).