Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON.

[Badges: Linux and Windows build status, license, docs. Packaging: Ubuntu, Ubuntu 16.04 LTS, Fedora, Debian, Gentoo, NetBSD, FreeBSD, Pro-Linux, Arch Linux, Homebrew/MacOSX.]

With Miller, you get to use named fields without needing to count positional indices, in familiar formats such as CSV, TSV, and JSON, as well as positionally-indexed data.

For example, suppose you have a CSV data file like this:

county,tiv_2011,tiv_2012,line,construction
SEMINOLE,22890.55,20848.71,Residential,Wood
MIAMI DADE,1158674.85,1076001.08,Residential,Masonry
PALM BEACH,1174081.5,1856589.17,Residential,Masonry
MIAMI DADE,2850980.31,2650932.72,Commercial,Reinforced Masonry
HIGHLANDS,23006.41,19757.91,Residential,Wood
HIGHLANDS,49155.16,47362.96,Residential,Wood
DUVAL,1731888.18,2785551.63,Residential,Masonry
ST. JOHNS,29589.12,35207.53,Residential,Wood

Then, on the fly, you can add new fields which are functions of existing fields, drop fields, sort, aggregate statistically, pretty-print, and more:

$ mlr --icsv --opprint --barred \
  put '$tiv_delta = $tiv_2012 - $tiv_2011; unset $tiv_2011, $tiv_2012' \
  then sort -nr tiv_delta flins.csv 
+------------+-------------+----------------+
| county     | line        | tiv_delta      |
+------------+-------------+----------------+
| DUVAL      | Residential | 1053663.450000 |
| PALM BEACH | Residential | 682507.670000  |
| ST. JOHNS  | Residential | 5618.410000    |
| HIGHLANDS  | Residential | -1792.200000   |
| SEMINOLE   | Residential | -2041.840000   |
| HIGHLANDS  | Residential | -3248.500000   |
| MIAMI DADE | Residential | -82673.770000  |
| MIAMI DADE | Commercial  | -200047.590000 |
+------------+-------------+----------------+

This is something the Unix toolkit always could have done, and arguably always should have done. It operates on key-value-pair data while the familiar Unix tools operate on integer-indexed fields: if the natural data structure for the latter is the array, then Miller's natural data structure is the insertion-ordered hash map. This encompasses a variety of data formats, including but not limited to the familiar CSV, TSV, and JSON. (Miller can handle positionally-indexed data as a special case.)
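
To make the key-value-pair model concrete, here is a minimal sketch (a one-off record piped through Miller, with field names taken from the example above) showing Miller reading its default key=value (DKVP) format and writing the same record back out as CSV:

$ echo 'county=SEMINOLE,tiv_2011=22890.55' | mlr --ocsv cat
county,tiv_2011
SEMINOLE,22890.55

The field names travel with the values, which is what lets verbs such as sort, cut, and stats1 refer to fields by name rather than by column position.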

For a few more examples, please see Miller in 10 minutes.

Features:

  • Miller is multi-purpose: it's useful for data cleaning, data reduction, statistical reporting, devops, system administration, log-file processing, format conversion, and database-query post-processing.

  • You can use Miller to snarf and munge log-file data, including selecting out relevant substreams, then produce CSV format and load that into all-in-memory/data-frame utilities for further statistical and/or graphical processing.

  • Miller complements data-analysis tools such as R, pandas, etc.: you can use Miller to clean and prepare your data. While you can do basic statistics entirely in Miller, its streaming-data feature and single-pass algorithms enable you to reduce very large data sets.

  • Miller complements SQL databases: you can slice, dice, and reformat data on the client side on its way into or out of a database. You can also reap some of the benefits of databases for quick, setup-free one-off tasks when you just need to query some data in disk files in a hurry.

  • Miller also goes beyond the classic Unix tools by stepping fully into our modern, NoSQL world: its essential record-heterogeneity property allows Miller to operate on data where records with different schemas (field names) are interleaved.

  • Miller is streaming: most operations need only a single record in memory at a time, rather than ingesting all input before producing any output. For those operations which require deeper retention (sort, tac, stats1), Miller retains only as much data as needed. This means that whenever functionally possible, you can operate on files which are larger than your system's available RAM, and you can use Miller in tail -f contexts.

  • Miller is pipe-friendly and interoperates with the Unix toolkit.

  • Miller's I/O formats include tabular pretty-printing, positionally indexed (Unix-toolkit style), CSV, JSON, and others.

  • Miller does conversion between formats (see the sketches after this list).

  • Miller's processing is format-aware: e.g. CSV sort and tac keep header lines first.

  • Miller has high-throughput performance on par with the Unix toolkit.

  • Not unlike jq (http://stedolan.github.io/jq/) for JSON, Miller is written in portable, modern C, with zero runtime dependencies. You can download or compile a single binary, scp it to a faraway machine, and expect it to work.
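
As a few hedged sketches of the conversion, statistics, and streaming points above (mydata.csv and service.log are hypothetical file names; flins.csv is the sample data shown earlier):

# CSV-to-JSON conversion:
% mlr --icsv --ojson cat mydata.csv
# Per-county statistics on the flins.csv sample shown earlier:
% mlr --icsv --opprint stats1 -a mean,max -f tiv_2011 -g county flins.csv
# Streaming filter on a growing log, in Miller's default key=value format:
% tail -f service.log | mlr filter '$status != "down"'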

More examples:

% mlr --csv cut -f hostname,uptime mydata.csv
% mlr --tsv --rs lf filter '$status != "down" && $upsec >= 10000' *.tsv
% mlr --nidx put '$sum = $7 < 0.0 ? 3.5 : $7 + 2.1*$8' *.dat
% grep -v '^#' /etc/group | mlr --ifs : --nidx --opprint label group,pass,gid,member then sort -f group
% mlr join -j account_id -f accounts.dat then group-by account_name balances.dat
% mlr --json put '$attr = sub($attr, "([0-9]+)_([0-9]+)_.*", "\1:\2")' data/*.json
% mlr stats1 -a min,mean,max,p10,p50,p90 -f flag,u,v data/*
% mlr stats2 -a linreg-pca -f u,v -g shape data/*
% mlr put -q '@sum[$a][$b] += $x; end {emit @sum, "a", "b"}' data/*
% mlr --from estimates.tbl put '
  for (k,v in $*) {
    if (isnumeric(v) && k =~ "^[t-z].*$") {
      $sum += v; $count += 1
    }
  }
  $mean = $sum / $count # no assignment if count unset
'
% mlr --from infile.dat put -f analyze.mlr
% mlr --from infile.dat put 'tee > "./taps/data-".$a."-".$b, $*'
% mlr --from infile.dat put 'tee | "gzip > ./taps/data-".$a."-".$b.".gz", $*'
% mlr --from infile.dat put -q '@v=$*; dump | "jq .[]"'
% mlr --from infile.dat put '(NR % 1000 == 0) { print > stderr, "Checkpoint ".NR}'