Log-processing examples
----------------------------------------------------------------
Another of my favorite use-cases for Miller is ad-hoc processing of log-file data. Here's where DKVP format really shines. First, since the field names and field values are present on every line, every line stands on its own: you can ``grep`` it or what have you. Second, not every line needs to have the same list of field names ("schema").
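
For instance, sketching with made-up field names (the real ``log.txt`` below has its own): since each line carries its own field names, a plain ``grep`` hands you a well-formed subset which you can pipe straight back into Miller::

    # Hypothetical DKVP lines -- each one is self-describing:
    #   time=1472819690,op=cache,type=A9,hit=0
    #   time=1472819691,op=cache,type=A9,hit=1
    #   time=1472819692,batch_size=100,num_filtered=237

    # A plain grep gives a well-formed subset; pipe it back into Miller:
    grep op=cache log.txt | mlr --opprint stats1 -a mean,count -f hit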
Again, all the examples in the CSV section apply here -- just change the input-format flags. But there's more you can do when not all the records have the same shape.
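
For instance (sketching with a hypothetical ``duration`` field), the only change from a CSV invocation is the input-format flag: ``--icsv`` for CSV input, while DKVP is Miller's default and needs no flag at all::

    mlr --icsv --opprint stats1 -a mean,min,max -f duration data.csv
    mlr        --opprint stats1 -a mean,min,max -f duration log.txt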
When you write a program -- in any language whatsoever -- you can have it print out log lines as it goes along, with items for various events jumbled together. After the program has finished running, you can sort it all out, filter it, analyze it, and learn from it.
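
As a minimal sketch in shell (field names again hypothetical), those print statements can be as simple as appending one self-describing line per event, and different event types are free to write different fields::

    echo "time=$(date +%s),op=cache,hit=1"                   >> log.txt
    echo "time=$(date +%s),op=db_write,ms=45"                >> log.txt
    echo "time=$(date +%s),batch_size=100,num_filtered=237"  >> log.txt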
Suppose your program has printed something like this (`log.txt <./log.txt>`_):
POKI_RUN_COMMAND{{cat log.txt}}HERE
Each print statement simply contains local information: the current timestamp, whether a particular cache was hit or not, etc. Then, using either the system ``grep`` command, or Miller's ``having-fields`` verb, or its ``is_present`` DSL function, we can pick out the parts we want and analyze them:
POKI_INCLUDE_AND_RUN_ESCAPED(10-1.sh)HERE
POKI_INCLUDE_AND_RUN_ESCAPED(10-2.sh)HERE
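
In sketch form (field names hypothetical), the three approaches look something like this: plain ``grep``, then ``having-fields``, then a ``filter`` with ``is_present``::

    grep op=cache log.txt | mlr --opprint stats1 -a mean -f hit
    mlr --opprint having-fields --at-least hit then stats1 -a mean -f hit log.txt
    mlr --opprint filter 'is_present($hit)' then stats1 -a mean -f hit log.txt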
Alternatively, we can simply group records of like shape together for a better look:
POKI_RUN_COMMAND{{mlr --opprint group-like log.txt}}HERE
POKI_RUN_COMMAND{{mlr --opprint group-like then sec2gmt time log.txt}}HERE
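
Here ``group-like`` collects records having identical field-name lists into contiguous batches, so under ``--opprint`` each record shape prints as its own table; chaining ``then sec2gmt time`` additionally rewrites the seconds-since-epoch ``time`` field as a human-readable GMT timestamp.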