<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic 📌 TIP#4: Parsing non-standard timestamps embedded in long log lines using DQL in Dynatrace tips</title>
    <link>https://community.dynatrace.com/t5/Dynatrace-tips/TIP-4-Parsing-non-standard-timestamps-embedded-in-long-log-lines/m-p/298245#M1903</link>
    <description>&lt;H2&gt;The Problem&lt;/H2&gt;
&lt;P&gt;Sometimes your application logs contain timestamps in non-standard formats buried deep inside long, unstructured log lines. The timestamp isn't at the beginning of the line and doesn't follow ISO 8601 with separators — so toTimestamp() can't parse it directly.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; Your logs look like this (simplified):&lt;/P&gt;
&lt;PRE&gt;{hostName: app-server-01.example.com,level: WARN,message: PaymentService#validateAccount input: source=batch account=123456,serverId: node1,userId: svc-user,threadName: default task-1024,contextMap: [{traceid:a1b2c3d4e5f6},{correlationId:f47ac10b-58cc-4372-a567-0e02b2c3d479}],applicationName: payments,timestamp: 20260423T122121.131-0300}&lt;/PRE&gt;
&lt;P&gt;The timestamp field 20260423T122121.131-0300 is:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Buried at the &lt;STRONG&gt;end&lt;/STRONG&gt; of a very long log line&lt;/LI&gt;
&lt;LI&gt;In a &lt;STRONG&gt;compact format&lt;/STRONG&gt; without the usual - and : separators (yyyyMMdd'T'HHmmss.SSS±ZZZZ)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Not ISO 8601 compliant&lt;/STRONG&gt; — toTimestamp() won't parse it directly&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;The Strategy: indexOf + substring + TIMESTAMP pattern&lt;/H2&gt;
&lt;P&gt;Instead of trying to parse the entire log line, we use a &lt;STRONG&gt;surgical extraction&lt;/STRONG&gt; approach:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Locate&lt;/STRONG&gt; the timestamp key using indexOf&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Extract&lt;/STRONG&gt; from that position forward using substring&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Parse&lt;/STRONG&gt; the extracted fragment using DPL's TIMESTAMP matcher with a custom format&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Override&lt;/STRONG&gt; the record's timestamp field with the parsed value&lt;/LI&gt;
&lt;/OL&gt;
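&lt;P&gt;For intuition, the four steps can be sketched in plain Python (using a shortened hypothetical sample line; datetime.strptime stands in for DPL's TIMESTAMP matcher):&lt;/P&gt;

```python
from datetime import datetime

content = ("{hostName: app-server-01.example.com,level: WARN,"
           "applicationName: payments,timestamp: 20260423T122121.131-0300}")

index = content.index("timestamp")        # 1. locate the key
fragment = content[index:]                # 2. extract from that position forward
# strip the "timestamp: " label and the trailing brace, then parse
raw = fragment.removeprefix("timestamp: ").rstrip("}")
timestamp = datetime.strptime(raw, "%Y%m%dT%H%M%S.%f%z")   # 3. parse
record = {"content": content, "timestamp": timestamp}      # 4. override
print(record["timestamp"].isoformat())    # 2026-04-23T12:21:21.131000-03:00
```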
&lt;H2&gt;The Solution&lt;/H2&gt;
&lt;PRE&gt;fetch logs
// ... your filters here ...
| fieldsAdd index = indexOf(content, "timestamp")
| fieldsAdd timestamp_log = substring(content, from:index)
| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"
| fieldsAdd timestamp = parsed_ts
| fieldsRemove index, timestamp_log, parsed_ts&lt;/PRE&gt;
&lt;H3&gt;Step-by-step breakdown&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1 — Find the position&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd index = indexOf(content, "timestamp")&lt;/PRE&gt;
&lt;P&gt;indexOf returns the character position where "timestamp" starts in the content field. This avoids the need to parse the entire log structure.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2 — Cut the string&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd timestamp_log = substring(content, from:index)&lt;/PRE&gt;
&lt;P&gt;This gives us a short fragment like:&lt;BR /&gt;timestamp: 20260423T122121.131-0300}&lt;/P&gt;
&lt;P&gt;Now we have a manageable string to parse.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3 — Parse with TIMESTAMP pattern&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"&lt;/PRE&gt;
&lt;P&gt;Breaking down the DPL pattern:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;LD (line data) — matches the leading timestamp: label (any data up to the next matcher)&lt;/LI&gt;
&lt;LI&gt;TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ') — the custom format:
&lt;UL&gt;
&lt;LI&gt;yyyyMMdd → 20260423 (year, month, day — no separators)&lt;/LI&gt;
&lt;LI&gt;\\'T\\' → literal T (escaped single quotes in DQL string)&lt;/LI&gt;
&lt;LI&gt;HHmmss → 122121 (hour, minute, second — no separators)&lt;/LI&gt;
&lt;LI&gt;.SSS → .131 (milliseconds)&lt;/LI&gt;
&lt;LI&gt;Z → -0300 (timezone offset in RFC 822 format)&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;:parsed_ts — export name for the parsed timestamp&lt;/LI&gt;
&lt;LI&gt;LD — matches any trailing characters (the closing })&lt;/LI&gt;
&lt;/UL&gt;
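&lt;P&gt;As a quick offline sanity check, the same format tokens map to Python's strptime directives; this is just a validation sketch, not part of the DQL query:&lt;/P&gt;

```python
from datetime import datetime, timezone

# yyyyMMdd maps to %Y%m%d, the literal T stays T, HHmmss maps to %H%M%S,
# .SSS maps to .%f (strptime pads to microseconds), Z maps to %z (-0300)
parsed = datetime.strptime("20260423T122121.131-0300", "%Y%m%dT%H%M%S.%f%z")
print(parsed.isoformat())                           # 2026-04-23T12:21:21.131000-03:00
print(parsed.astimezone(timezone.utc).isoformat())  # 2026-04-23T15:21:21.131000+00:00
```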
&lt;P&gt;&lt;STRONG&gt;Step 4 — Override the timestamp&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd timestamp = parsed_ts&lt;/PRE&gt;
&lt;P&gt;This replaces the record's timestamp field with the correctly parsed value. Dynatrace will now use this timestamp for time-based filtering, sorting, and visualization.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5 — Clean up&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsRemove index, timestamp_log, parsed_ts&lt;/PRE&gt;
&lt;P&gt;Remove the auxiliary fields to keep the result clean.&lt;/P&gt;
&lt;H2&gt;Self-contained test query&lt;/H2&gt;
&lt;P&gt;You can validate this approach without any log data by using the data command with record():&lt;/P&gt;
&lt;PRE&gt;data record(content = "{hostName: app-server-01.example.com,level: WARN,message: PaymentService#validateAccount input: source=batch account=123456,serverId: node1,userId: svc-user,threadName: default task-1024,applicationName: payments,timestamp: 20260423T122121.131-0300}")
| fieldsAdd index = indexOf(content, "timestamp")
| fieldsAdd timestamp_log = substring(content, from:index)
| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"
| fieldsAdd timestamp = parsed_ts
| fields timestamp, type(timestamp)&lt;/PRE&gt;
&lt;P&gt;&lt;STRONG&gt;Expected result:&lt;/STRONG&gt;&lt;/P&gt;
&lt;TABLE&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH&gt;timestamp&lt;/TH&gt;
&lt;TH&gt;type(timestamp)&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;2026-04-23T12:21:21.131-03:00&lt;/TD&gt;
&lt;TD&gt;timestamp&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;Note: the -0300 offset is recognized during parsing, so the value represents the correct instant (12:21:21.131 at -03:00, which is 15:21:21.131 UTC).&lt;/P&gt;
&lt;H2&gt;Alternative: If the TIMESTAMP pattern doesn't match the offset&lt;/H2&gt;
&lt;P&gt;If the Z pattern doesn't parse your specific offset format, you can try replacing it with XX (which explicitly matches offsets without colons like -0300):&lt;/P&gt;
&lt;PRE&gt;| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSXX'):parsed_ts LD"&lt;/PRE&gt;
&lt;H2&gt;Alternative: Manual string reconstruction + toTimestamp&lt;/H2&gt;
&lt;P&gt;If the DPL TIMESTAMP matcher gives you trouble, you can manually reconstruct an ISO 8601 string and use toTimestamp():&lt;/P&gt;
&lt;PRE&gt;fetch logs
// ... your filters here ...
| fieldsAdd index = indexOf(content, "timestamp")
// from:index+11 skips past the 11-character "timestamp: " prefix
| fieldsAdd raw_ts = substring(content, from:index+11, to:indexOf(content, "}", from:index))
| fieldsAdd iso_ts = concat(
    substring(raw_ts, from:0, to:4), "-",
    substring(raw_ts, from:4, to:6), "-",
    substring(raw_ts, from:6, to:8), "T",
    substring(raw_ts, from:9, to:11), ":",
    substring(raw_ts, from:11, to:13), ":",
    substring(raw_ts, from:13)
  )
| fieldsAdd timestamp = toTimestamp(iso_ts)
| fieldsRemove index, raw_ts, iso_ts&lt;/PRE&gt;
&lt;P&gt;This transforms 20260423T122121.131-0300 → 2026-04-23T12:21:21.131-0300, which is standard ISO 8601, so toTimestamp() can parse it natively.&lt;/P&gt;
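&lt;P&gt;The index arithmetic behind that reconstruction is easy to verify offline; this Python sketch applies the same slice boundaries to the sample value:&lt;/P&gt;

```python
# Slices mirror the DQL substring calls: 0-4 year, 4-6 month, 6-8 day,
# 9-11 hour, 11-13 minute, 13 onward seconds.millis plus offset
# (index 8 holds the literal T, which is why the time slices start at 9)
raw_ts = "20260423T122121.131-0300"
iso_ts = (raw_ts[0:4] + "-" + raw_ts[4:6] + "-" + raw_ts[6:8] + "T"
          + raw_ts[9:11] + ":" + raw_ts[11:13] + ":" + raw_ts[13:])
print(iso_ts)  # 2026-04-23T12:21:21.131-0300
```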
&lt;H2&gt;When to use this approach&lt;/H2&gt;
&lt;P&gt;This indexOf + substring strategy is useful when:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The log line is &lt;STRONG&gt;very long&lt;/STRONG&gt; and contains many fields&lt;/LI&gt;
&lt;LI&gt;The timestamp is &lt;STRONG&gt;not at a fixed position&lt;/STRONG&gt; in the line&lt;/LI&gt;
&lt;LI&gt;The log format is &lt;STRONG&gt;semi-structured&lt;/STRONG&gt; (key-value pairs without strict JSON)&lt;/LI&gt;
&lt;LI&gt;You need to &lt;STRONG&gt;override the record's timestamp&lt;/STRONG&gt; with the application-level timestamp instead of the ingestion timestamp&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-pattern-language/log-processing-time-date" target="_blank" rel="noopener"&gt;DPL Time and Date&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/dynatrace-query-language/functions/time-functions" target="_blank" rel="noopener"&gt;DQL Time Functions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/dynatrace-query-language/commands/extraction-and-parsing-commands" target="_blank" rel="noopener"&gt;DQL Extraction and Parsing Commands&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html" target="_blank" rel="noopener"&gt;Java DateTimeFormatter patterns&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
    <pubDate>Fri, 24 Apr 2026 06:08:32 GMT</pubDate>
    <dc:creator>tracegazer</dc:creator>
    <dc:date>2026-04-24T06:08:32Z</dc:date>
    <item>
      <title>📌 TIP#4: Parsing non-standard timestamps embedded in long log lines using DQL</title>
      <link>https://community.dynatrace.com/t5/Dynatrace-tips/TIP-4-Parsing-non-standard-timestamps-embedded-in-long-log-lines/m-p/298245#M1903</link>
      <description>&lt;H2&gt;The Problem&lt;/H2&gt;
&lt;P&gt;Sometimes your application logs contain timestamps in non-standard formats buried deep inside long, unstructured log lines. The timestamp isn't at the beginning of the line and doesn't follow ISO 8601 with separators — so toTimestamp() can't parse it directly.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Example:&lt;/STRONG&gt; Your logs look like this (simplified):&lt;/P&gt;
&lt;PRE&gt;{hostName: app-server-01.example.com,level: WARN,message: PaymentService#validateAccount input: source=batch account=123456,serverId: node1,userId: svc-user,threadName: default task-1024,contextMap: [{traceid:a1b2c3d4e5f6},{correlationId:f47ac10b-58cc-4372-a567-0e02b2c3d479}],applicationName: payments,timestamp: 20260423T122121.131-0300}&lt;/PRE&gt;
&lt;P&gt;The timestamp field 20260423T122121.131-0300 is:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Buried at the &lt;STRONG&gt;end&lt;/STRONG&gt; of a very long log line&lt;/LI&gt;
&lt;LI&gt;In a &lt;STRONG&gt;compact format&lt;/STRONG&gt; without the usual - and : separators (yyyyMMdd'T'HHmmss.SSS±ZZZZ)&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Not ISO 8601 compliant&lt;/STRONG&gt; — toTimestamp() won't parse it directly&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;The Strategy: indexOf + substring + TIMESTAMP pattern&lt;/H2&gt;
&lt;P&gt;Instead of trying to parse the entire log line, we use a &lt;STRONG&gt;surgical extraction&lt;/STRONG&gt; approach:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Locate&lt;/STRONG&gt; the timestamp key using indexOf&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Extract&lt;/STRONG&gt; from that position forward using substring&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Parse&lt;/STRONG&gt; the extracted fragment using DPL's TIMESTAMP matcher with a custom format&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Override&lt;/STRONG&gt; the record's timestamp field with the parsed value&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;The Solution&lt;/H2&gt;
&lt;PRE&gt;fetch logs
// ... your filters here ...
| fieldsAdd index = indexOf(content, "timestamp")
| fieldsAdd timestamp_log = substring(content, from:index)
| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"
| fieldsAdd timestamp = parsed_ts
| fieldsRemove index, timestamp_log, parsed_ts&lt;/PRE&gt;
&lt;H3&gt;Step-by-step breakdown&lt;/H3&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1 — Find the position&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd index = indexOf(content, "timestamp")&lt;/PRE&gt;
&lt;P&gt;indexOf returns the character position where "timestamp" starts in the content field. This avoids the need to parse the entire log structure.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2 — Cut the string&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd timestamp_log = substring(content, from:index)&lt;/PRE&gt;
&lt;P&gt;This gives us a short fragment like:&lt;BR /&gt;timestamp: 20260423T122121.131-0300}&lt;/P&gt;
&lt;P&gt;Now we have a manageable string to parse.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 3 — Parse with TIMESTAMP pattern&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"&lt;/PRE&gt;
&lt;P&gt;Breaking down the DPL pattern:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;LD — matches timestamp: (any data up to the next matcher)&lt;/LI&gt;
&lt;LI&gt;TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ') — the custom format:
&lt;UL&gt;
&lt;LI&gt;yyyyMMdd → 20260423 (year, month, day — no separators)&lt;/LI&gt;
&lt;LI&gt;\\'T\\' → literal T (escaped single quotes in DQL string)&lt;/LI&gt;
&lt;LI&gt;HHmmss → 122121 (hour, minute, second — no separators)&lt;/LI&gt;
&lt;LI&gt;.SSS → .131 (milliseconds)&lt;/LI&gt;
&lt;LI&gt;Z → -0300 (timezone offset in RFC 822 format)&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;:parsed_ts — export name for the parsed timestamp&lt;/LI&gt;
&lt;LI&gt;LD — matches any trailing characters (the closing })&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;STRONG&gt;Step 4 — Override the timestamp&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsAdd timestamp = parsed_ts&lt;/PRE&gt;
&lt;P&gt;This replaces the record's timestamp field with the correctly parsed value. Dynatrace will now use this timestamp for time-based filtering, sorting, and visualization.&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 5 — Clean up&lt;/STRONG&gt;&lt;/P&gt;
&lt;PRE&gt;| fieldsRemove index, timestamp_log, parsed_ts&lt;/PRE&gt;
&lt;P&gt;Remove the auxiliary fields to keep the result clean.&lt;/P&gt;
&lt;H2&gt;Self-contained test query&lt;/H2&gt;
&lt;P&gt;You can validate this approach without any log data using data record():&lt;/P&gt;
&lt;PRE&gt;data record(content = "{hostName: app-server-01.example.com,level: WARN,message: PaymentService#validateAccount input: source=batch account=123456,serverId: node1,userId: svc-user,threadName: default task-1024,applicationName: payments,timestamp: 20260423T122121.131-0300}")
| fieldsAdd index = indexOf(content, "timestamp")
| fieldsAdd timestamp_log = substring(content, from:index)
| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSZ'):parsed_ts LD"
| fieldsAdd timestamp = parsed_ts
| fields timestamp, type(timestamp)&lt;/PRE&gt;
&lt;P&gt;&lt;STRONG&gt;Expected result:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;timestamp type(timestamp)&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;2026-04-23T12:21:21.131-03:00&lt;/TD&gt;
&lt;TD&gt;timestamp&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;Note: the original -0300 offset is correctly converted to UTC.&lt;/P&gt;
&lt;H2&gt;Alternative: If the TIMESTAMP pattern doesn't match the offset&lt;/H2&gt;
&lt;P&gt;If the Z pattern doesn't parse your specific offset format, you can try replacing it with XX (which explicitly matches offsets without colons like -0300):&lt;/P&gt;
&lt;PRE&gt;| parse timestamp_log, "LD TIMESTAMP('yyyyMMdd\\'T\\'HHmmss.SSSXX'):parsed_ts LD"&lt;/PRE&gt;
&lt;H2&gt;Alternative: Manual string reconstruction + toTimestamp&lt;/H2&gt;
&lt;P&gt;If the DPL TIMESTAMP matcher gives you trouble, you can manually reconstruct an ISO 8601 string and use toTimestamp():&lt;/P&gt;
&lt;PRE&gt;fetch logs
// ... your filters here ...
| fieldsAdd index = indexOf(content, "timestamp")
| fieldsAdd raw_ts = substring(content, from:index+11, to:indexOf(content, "}", from:index))
| fieldsAdd iso_ts = concat(
    substring(raw_ts, from:0, to:4), "-",
    substring(raw_ts, from:4, to:6), "-",
    substring(raw_ts, from:6, to:8), "T",
    substring(raw_ts, from:9, to:11), ":",
    substring(raw_ts, from:11, to:13), ":",
    substring(raw_ts, from:13)
  )
| fieldsAdd timestamp = toTimestamp(iso_ts)
| fieldsRemove index, raw_ts, iso_ts&lt;/PRE&gt;
&lt;P&gt;This transforms 20260423T122121.131-0300 → 2026-04-23T12:21:21.131-0300, which is standard ISO 8601 and toTimestamp() handles it natively.&lt;/P&gt;
&lt;H2&gt;When to use this approach&lt;/H2&gt;
&lt;P&gt;This indexOf + substring strategy is useful when:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The log line is &lt;STRONG&gt;very long&lt;/STRONG&gt; and contains many fields&lt;/LI&gt;
&lt;LI&gt;The timestamp is &lt;STRONG&gt;not at a fixed position&lt;/STRONG&gt; in the line&lt;/LI&gt;
&lt;LI&gt;The log format is &lt;STRONG&gt;semi-structured&lt;/STRONG&gt; (key-value pairs without strict JSON)&lt;/LI&gt;
&lt;LI&gt;You need to &lt;STRONG&gt;override the record's timestamp&lt;/STRONG&gt; with the application-level timestamp instead of the ingestion timestamp&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;References&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-pattern-language/log-processing-time-date" target="_blank" rel="noopener"&gt;DPL Time and Date&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/dynatrace-query-language/functions/time-functions" target="_blank" rel="noopener"&gt;DQL Time Functions&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/dynatrace-query-language/commands/extraction-and-parsing-commands" target="_blank" rel="noopener"&gt;DQL Extraction and Parsing Commands&lt;/A&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;A href="https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html" target="_blank" rel="noopener"&gt;Java DateTimeFormatter patterns&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Fri, 24 Apr 2026 06:08:32 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/Dynatrace-tips/TIP-4-Parsing-non-standard-timestamps-embedded-in-long-log-lines/m-p/298245#M1903</guid>
      <dc:creator>tracegazer</dc:creator>
      <dc:date>2026-04-24T06:08:32Z</dc:date>
    </item>
  </channel>
</rss>

