
Trace files exceed DNA console limit of 300 MB.

massoud_shamsia
Newcomer

We have a post filtered pcap that is 366 MB. Is there any way to tweak DNA to accept importing this trace?

14 REPLIES

ulf_thornander3
Inactive

Hi Massoud

DNA is a transaction-oriented tool. If you really have a single transaction that is over 300 MB then I have to say I'm impressed 🙂

That said I have 2 questions:

Where is your DNA - Is it standalone on your desktop or is it in your server?

At what stage does it fail?

Hello Ulf. Agreed, although given how useful thread analysis is, it would be nice to have this view for our single large filtered capture.

Our DNA is on a desktop (standalone). The error comes up as soon as we try to import the pcap ("the file format is not recognized"). Thank you.

Aaah - but that's something different; "the file format is not recognized" isn't saying it's too big - right?

Where did you get that file from?

On your workstation, what is the SQL you have?

Just to explain the "File format not recognized" error: when DNA tries to import a trace, it runs it through a number of available parser engines and tries to match one based on file extension and preamble.

To higher-level APIs, the error type returned from the selected parser engine amounts to "Can't read/parse trace file". It is often attributable to a corrupt trace or an unrecognized format, but it can also indicate a memory allocation error (due to a large trace file). Unfortunately, the message displayed to users can be misleading at times.
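To illustrate the kind of matching described above, here is a minimal sketch of picking a parser based on the file preamble. The pcap/pcapng magic numbers are real; the dispatch logic itself is an invented illustration, not DNA's actual implementation.

```python
# Hypothetical sketch: choose a parser engine from the first few
# "preamble" bytes of a trace file. The magic byte sequences for
# pcap and pcapng are real; everything else here is illustrative.

MAGICS = {
    b"\xd4\xc3\xb2\xa1": "pcap (little-endian)",
    b"\xa1\xb2\xc3\xd4": "pcap (big-endian)",
    b"\x0a\x0d\x0d\x0a": "pcapng",
}

def sniff_format(path):
    """Return a format name based on the 4-byte preamble, or 'unrecognized'."""
    with open(path, "rb") as f:
        preamble = f.read(4)
    return MAGICS.get(preamble, "unrecognized")
```

Note that a sniffer like this reports "unrecognized" for any read or match failure, which is consistent with one generic user-facing message covering several distinct underlying errors.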

Tomasz_Szreder
Advisor

Hello Massoud,

We have a tool designed specifically for dealing with large traces – the Trace Trimmer tool, installed together with the DNA console (you can find it in the Start Menu). It allows you to browse trace contents and decide which portion of conversations or protocols you want to analyze further. You can also pick a time range at this point, which makes the output trace smaller and lets you focus on the actual transactions.

While the above recommendation is worth trying anyway, you mentioned “file format not recognized”, which may lead us somewhere else. Where does this trace come from?

Regards

Tomasz

Hello, the trace file is from Wireshark (.pcap), and if it is further reduced, i.e. filtered down to a smaller size, it will import. Only when the size is large do I get the above error (i.e. the deciding factor seems to be size alone).

The existing file is as filtered as can possibly be, short of packet slicing. I searched Google for a tool to packet-slice a post-capture trace, but unfortunately there is nothing obvious out there that can do that.

I’ve just tried that with a 1 GB trace and saw the message you reported when I tried the drag-and-drop path. If I run it via File > Import Trace, however, I see a red warning about trace size and can select Trace Trimmer as the filter engine before importing. Parsing such a trace does take a while, but it does the job.

If you prefer a different route, one of the other tools you can use to cut traces by start/end times is editcap: https://www.wireshark.org/docs/wsug_html_chunked/AppToolseditcap.html. It is distributed with Wireshark, and you can find it in the root program folder.
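As a sketch of how editcap can be driven for this, the helper below assembles an invocation using editcap's `-A`/`-B` flags (keep packets after/before a given time) and its `-s` flag, which truncates each packet to a given snap length – i.e. the packet slicing mentioned earlier. The flags are real editcap options; the wrapper function itself is my own illustration and assumes editcap is on the PATH.

```python
def build_editcap_cmd(infile, outfile, start=None, stop=None, snaplen=None):
    """Assemble an editcap command line (as a list suitable for subprocess.run).

    -A/-B keep only packets after/before the given times ("YYYY-MM-DD hh:mm:ss"),
    -s truncates every packet to `snaplen` bytes (packet slicing).
    """
    cmd = ["editcap"]
    if start is not None:
        cmd += ["-A", start]
    if stop is not None:
        cmd += ["-B", stop]
    if snaplen is not None:
        cmd += ["-s", str(snaplen)]
    cmd += [infile, outfile]
    return cmd
```

For example, `build_editcap_cmd("big.pcap", "small.pcap", snaplen=128)` slices every packet to 128 bytes, which can shrink a payload-heavy capture considerably while keeping headers intact; pass the result to `subprocess.run` to execute it.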

Hi Tomasz

I think this error in the GUI has been there for a while. I have seen it before, and for the last couple of releases I have only used the "File > Import..." path that you point to.

Is this an imposed limit in the GUI, or is it just something in the GUI control that can't be tweaked?

I’ll have to confirm it, but I assume the error message is a generic one and means that the DNA console refuses to import the trace. With large traces, when you import via File > Import Trace you have a chance to apply a filter, whereas with the drag-and-drop variant there is no way to trim the trace, so that’s probably why we see the error.

I’ll look into that and let you know if I can provide an option to accept such a trace via drag and drop as well.

Tomasz_Szreder
Advisor

It took me a while, but I finally dug into the problem, and here’s an interesting thing I’ve learned: DNA manages memory differently depending on the trace format used.

When importing an .opx trace (or another format in the family), DNA tries to map chunks of the file into memory with MapViewOfFile (https://msdn.microsoft.com/en-us/library/windows/desktop/aa366761(v=vs.85).aspx). This means it cannot load a trace larger than its address space (minus RAM used to display the GUI and for other needs). Since this is a 32-bit application, the realistic upper bound for importing an .opx trace is about 2-3 GB. (Performance at that size would probably be unacceptable, so we’re only talking theory.)
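The 2-3 GB figure follows from ordinary 32-bit Windows arithmetic, sketched below. The numbers are standard Win32 facts (4 GiB of virtual addresses, a 2 GiB user-mode share by default, 3 GiB with /LARGEADDRESSAWARE plus the 4GT boot option); applying them to DNA specifically is my reading of the post above.

```python
# Back-of-the-envelope arithmetic behind the "about 2-3 GB" upper bound
# for a 32-bit process mapping a trace with MapViewOfFile.
GiB = 1024 ** 3

address_space = 2 ** 32            # 4 GiB of virtual addresses in total
default_user_space = 2 * GiB       # default user-mode share on 32-bit Windows
large_address_aware = 3 * GiB      # with /LARGEADDRESSAWARE + the 4GT boot option

# Whatever the GUI and heap already consume comes out of the same budget,
# so the practical ceiling for mapped trace data sits below these figures.
print(default_user_space // GiB, large_address_aware // GiB)  # → 2 3
```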

When it comes to third-party formats such as pcap, DNA resorts to other libraries to do the parsing and loads the entire trace into memory as one big chunk using GlobalAlloc (https://msdn.microsoft.com/en-us/library/windows/desktop/aa366574(v=vs.85).aspx). With the ad-hoc instrumentation I did on my machine, I found the actual upper bound to be variable, between 300 and 700 MB, presumably due to memory fragmentation.

What this means for you is: when you have a large non-opx trace and want to import it into the DNA console regardless of the warnings, you can:


  • Convert the trace into .opx first, and then approach the DNA import, or
  • Increase the available contiguous address space by tweaking virtual memory.
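The difference between the two allocation styles described above can be illustrated with a toy model: fragmentation caps the size of the largest *contiguous* free run, so one big GlobalAlloc-style block can fail even when the total free address space is far larger, while chunked MapViewOfFile-style views can still fit. The free-region sizes below are invented for illustration.

```python
# Toy model: why a single contiguous allocation fails around 300-700 MB
# while chunked file mapping keeps working. Region sizes are hypothetical.
free_runs_mb = [300, 150, 250]          # free regions left in the address space

total_free = sum(free_runs_mb)          # 700 MB free in total...
largest_contiguous = max(free_runs_mb)  # ...but only 300 MB in one piece

def can_alloc_one_chunk(size_mb):
    """GlobalAlloc-style: the whole trace must fit one contiguous run."""
    return size_mb <= largest_contiguous

def can_map_in_chunks(size_mb, chunk_mb=64):
    """MapViewOfFile-style: small views can be placed into every free run."""
    return size_mb <= total_free

print(can_alloc_one_chunk(366), can_map_in_chunks(366))  # → False True
```

This is also why converting the pcap to .opx first is the better plan: the .opx path uses the chunked mapping strategy, so the 366 MB trace no longer needs one unbroken run of address space.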

The former definitely sounds like the better plan. We have a small command-line trace conversion utility which can do this without any significant overhead; I guess I could share it if need be. Let me know if this is an acceptable solution.

EDIT: Added the trace conversion utility: makeopx.zip. Extract it to the DNA console installation directory. When run with a path to a trace file as an argument, it loads the trace into memory and then saves an .opx equivalent under the same name with an .opx suffix.

Regards
Tomasz

Sure, if you can please provide the utility. Thank you.

I have uploaded the utility as requested. A few items of note when importing large traces:

- Before attempting a large trace import, it's a good idea to restart the DNA console. That improves the chances of DNA getting all the requested memory chunks from the OS.

- Not all DNA features will be usable with large traces. In particular, I found it chokes on heuristic auto-adjustment, and sometimes also on deleting leading/trailing idle time. I suppose it depends on the actual number of frames, so when your traces are composed of fewer frames with large payloads, it could be more responsive. Views like Packet Trace and Thread Analysis seemed usable, so that's something to begin with.

For some reason I'm getting the error below:

C:\Program Files (x86)\Compuware\Transaction Trace>makeopx remote-lab-sh2_lan0_0
_outlook_probe.cap
log4cxx: No appender could be found for logger (ontrace32).
log4cxx: Please initialize the log4cxx system properly.

I wouldn’t worry about those messages; they are related to early log initialization. One of the libraries attempts to set up its logging system immediately when it’s loaded, before the host application has a chance to configure it. I’ll consider refactoring the tool source code and bundling it with the DNA console, but for now please ignore these log4cxx warnings.
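For readers who would rather silence the warnings than ignore them: log4cxx can usually be quieted with a minimal configuration file that defines at least one appender. Whether this particular tool's static initializer picks up a `log4cxx.properties` file from its working directory is an assumption on my part; the log4j-style property names below are the standard ones log4cxx accepts.

```properties
# Minimal log4cxx configuration (hypothetical for this tool):
# route everything at WARN or above to a console appender so
# "No appender could be found" no longer fires.
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n
```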