Standardize Go-package structure (#746)

John Kerl 2021-11-11 14:15:13 -05:00 committed by GitHub
parent 0d4fc143ad
commit e2b6ec2391
20166 changed files with 6121 additions and 6828 deletions


@@ -30,11 +30,11 @@ jobs:
- name: PrepareArtifactNonWindows
if: matrix.os != 'windows-latest'
-run: mkdir -p bin/${{matrix.os}} && cp go/mlr bin/${{matrix.os}}
+run: mkdir -p bin/${{matrix.os}} && cp mlr bin/${{matrix.os}}
- name: PrepareArtifactWindows
if: matrix.os == 'windows-latest'
-run: mkdir -p bin/${{matrix.os}} && cp go/mlr.exe bin/${{matrix.os}}
+run: mkdir -p bin/${{matrix.os}} && cp mlr.exe bin/${{matrix.os}}
- uses: actions/upload-artifact@v2
with:


@@ -1,14 +1,28 @@
# Please edit Makefile.in rather than Makefile, which is overwritten by ../configure.
PREFIX=/usr/local
INSTALLDIR=$(PREFIX)/bin
build:
	make -C go build
	@echo "Miller executable is: ./mlr, or go\mlr.exe on Windows"
	go build
check:
	make -C go check
	# Unit tests (small number)
	go test -v mlr/internal/pkg/...
	# Regression tests (large number)
	#
	# See ./regression_test.go for information on how to get more details
	# for debugging. TL;DR is for CI jobs, we have 'go test -v'; for
	# interactive use, instead of 'go test -v' simply use 'mlr regtest
	# -vvv' or 'mlr regtest -s 20'. See also src/auxents/regtest.
	go test -v
install:
	make -C go install
install: build
	cp mlr $(INSTALLDIR)
	make -C man install
fmt:
	go fmt ./...
# For developers before pushing to GitHub.
#
# These steps are done in a particular order:
@@ -22,9 +36,9 @@ install:
# * note the man/manpage.txt becomes some of the HTML content
# * turns *.md into docs/site HTML and CSS files
dev:
	make -C go fmt
	make -C go build
	make -C go check
	-make fmt
	make build
	make check
	make -C man build
	make -C docs
	@echo DONE
@@ -38,4 +52,4 @@ release_tarball: build check
	./create-release-tarball
# Go does its own dependency management, outside of make.
-.PHONY: build check install precommit
+.PHONY: build check fmt dev

README-go-port.md Normal file

@@ -0,0 +1,182 @@
# Quickstart for developers
* `go build` -- produces the `mlr` executable
* Miller has tens of unit tests and thousands of regression tests:
* `go test mlr/src/...` runs the unit tests.
* `go test` or `mlr regtest` runs the regression tests in `test/cases/`. Using `mlr regtest -h` you can see more options available than are exposed by `go test`.
# Continuous integration
* The Go implementation is auto-built using GitHub Actions: see [../.github/workflows/go.yml](../.github/workflows/go.yml). This works splendidly on Linux, MacOS, and Windows.
* See also [../README.md](../README.md).
# Benefits of porting to Go
* The [lack of a streaming (record-by-record) JSON reader](http://johnkerl.org/miller/doc/file-formats.html#JSON_non-streaming) in the C implementation ([issue 99](https://github.com/johnkerl/miller/issues/99)) is immediately solved in the Go implementation.
* In the C implementation, [arrays were not supported in the DSL](http://johnkerl.org/miller/doc/file-formats.html#Arrays); in the Go implementation they are.
* [Flattening nested map structures to output records](http://johnkerl.org/miller/doc/file-formats.html#Formatting_JSON_options) was clumsy. Now, Miller will be a JSON-to-JSON processor, if your inputs and outputs are both JSON; JSON input and output will be idiomatic.
* The quoted-DKVP feature from [issue 266](https://github.com/johnkerl/miller/issues/266) will be easily addressed.
* String/number-formatting issues in [issue 211](https://github.com/johnkerl/miller/issues/211), [issue 178](https://github.com/johnkerl/miller/issues/178), [issue 151](https://github.com/johnkerl/miller/issues/151), and [issue 259](https://github.com/johnkerl/miller/issues/259) will be fixed during the Go port.
* I think some DST/timezone issues such as [issue 359](https://github.com/johnkerl/miller/issues/359) will be easier to fix using the Go datetime library than using the C datetime library.
* The code will be easier to read and, I hope, easier for others to contribute to. What this means is it should be quicker and easier to add new features to Miller -- after the development-time cost of the port itself is paid, of course.
# Why Go
* As noted above, multiple Miller issues will benefit from stronger library support.
* Channels/goroutines are an excellent fit for Miller's reader/mapper/mapper/mapper/writer record-stream architecture.
* Go has gotten faster since I did my timing experiments in 2015.
* In terms of CPU-cycle count, Go is a bit slower than C (it does more things, like bounds-checking arrays) -- but by leveraging concurrency across a couple of processors, I find it competitive in terms of wall time.
* Go is an up-and-coming language, with good reason -- it's mature, stable, with few of C's weaknesses and many of C's strengths.
* The source code will be easier to read/maintain/write, by myself and others.
# Efficiency of the Go port
As I wrote [here](http://johnkerl.org/miller/doc/whyc.html) back in 2015, I couldn't get Rust or Go (or any other language I tried) to do some test-case processing as quickly as C, so I stuck with C.
Either Go has improved since 2015, or I'm a better Go programmer than I used to be, or both -- but as of 2020 I can get Go-Miller to process data about as quickly as C-Miller.
Note: in some sense Go-Miller is *less* efficient but in a way that doesn't significantly affect wall time. Namely, doing `mlr cat` on a million-record data file on my bargain-value MacBook Pro, the C version takes about 2.5 seconds and the Go version takes about 3 seconds. So in terms of wall time -- which is what we care most about, how long we have to wait -- it's about the same.
A way to look a little deeper at resource usage is to run `htop` while processing a 10x-larger file, so the job takes 25 or 30 seconds rather than 2.5 or 3, and we can observe steady-state resource consumption. I found that the C version -- which is purely single-threaded -- takes 100% CPU. The Go version, which uses concurrency and channels and `MAXPROCS=4`, with reader/transformer/writer each on their own CPU, takes about 240% CPU. So Go-Miller uses not just a little more CPU, but a lot more -- yet it does more work in parallel, and finishes the job in about the same amount of time.
Even commodity hardware has multiple CPUs these days -- and the Go code is *much* easier to read, extend, and improve than the C code -- so I'll call this a net win for Miller.
# Developer information
## Source-code goals
Donald Knuth famously said: *Programs are meant to be read by humans and only incidentally for computers to execute.*
During the coding of Miller, I've been guided by the following:
* *Miller should be pleasant to read.*
* If you want to fix a bug, you should be able to quickly and confidently find out where and how.
* If you want to learn something about Go channels, or lexing/parsing in Go -- especially if you don't already know much about them -- the comments should help you learn what you want to.
* If you're the kind of person who reads other people's code for fun, well, the code should be fun, as well as readable.
* `README.md` files throughout the directory tree are intended to give you a sense of what is where, what to read first and what doesn't need reading right away, and so on -- so you spend a minimum of time being confused or frustrated.
* Names of files, variables, functions, etc. should be fully spelled out (e.g. `NewEvaluableLeafNode`), except for a small number of most-used names where a longer name would cause unnecessary line-wraps (e.g. `Mlrval` instead of `MillerValue` since this appears very very often).
* Code should not be too clever. This includes tolerating a reasonable amount of code duplication from time to time, to keep logic inline rather than spread lasagna-style across layers.
* Things should be transparent. For example, `mlr -n put -v '$y = 3 + 0.1 * $x'` shows you the abstract syntax tree derived from the DSL expression.
* Comments should be robust with respect to reasonably anticipated changes. For example, one package should cross-link to another in its comments, but I try to avoid mentioning specific filenames too much in the comments and README files since these may change over time. I make an exception for stable points such as [mlr.go](./mlr.go), [mlr.bnf](./src/parsing/mlr.bnf), [stream.go](./src/stream/stream.go), etc.
* *Miller should be pleasant to write.*
* It should be quick to answer the question *Did I just break anything?* -- hence the `build` and `reg_test/run` regression scripts.
* It should be quick to find out what to do next as you iteratively develop -- see for example [cst/README.md](https://github.com/johnkerl/miller/blob/master/go/src/dsl/cst/README.md).
* *The language should be an asset, not a liability.*
* One of the reasons I chose Go is that (personally anyway) I find it to be reasonably efficient, well-supported with standard libraries, straightforward, and fun. I hope you enjoy it as much as I have.
## Directory structure
Information here is for the benefit of anyone reading/using the Miller Go code. To use the Miller tool at the command line, you don't need to know any of this if you don't want to. :)
## Directory-structure overview
Miller is a multi-format record-stream processor, where a **record** is a
sequence of key-value pairs. The basic **stream** operation is:
* **read** records in some specified file format;
* **transform** the input records to output records in some user-specified way, using a **chain** of **transformers** (also sometimes called **verbs**) -- sort, filter, cut, put, etc.;
* **write** the records in some specified file format.
So, in broad overview, the key packages are:
* [src/stream](./src/stream) -- connect input -> transforms -> output via Go channels
* [src/input](./src/input) -- read input records
* [src/transforming](./src/transforming) -- transform input records to output records
* [src/output](./src/output) -- write output records
* The rest are details to support this.
## Directory-structure details
### Dependencies
* Miller dependencies are all in the Go standard library, except two:
* GOCC lexer/parser code-generator from [github.com/goccmack/gocc](https://github.com/goccmack/gocc):
* This package defines the grammar for Miller's domain-specific language (DSL) for the Miller `put` and `filter` verbs. And, GOCC is a joy to use. :)
* It is used on the terms of its open-source license.
* [golang.org/x/term](https://pkg.go.dev/golang.org/x/term):
* Just a one-line Miller callsite for is-a-terminal checking for the [Miller REPL](https://github.com/johnkerl/miller/blob/go-mod/go/src/auxents/repl/README.md).
* It is used on the terms of its open-source license.
* See also [./go.mod](go.mod). Setup:
* `go get github.com/goccmack/gocc`
* `go get golang.org/x/term`
### Miller per se
* The main entry point is [mlr.go](./mlr.go); everything else is in [src](./src).
* [src/entrypoint](./src/entrypoint): All the usual contents of `main()` are here, for ease of testing.
* [src/platform](./src/platform): Platform-dependent code, which as of early 2021 is the command-line parser. Handling single quotes and double quotes is different on Windows unless particular care is taken, which is what this package does.
* [src/lib](./src/lib):
* Implementation of the [`Mlrval`](./src/types/mlrval.go) datatype which includes string/int/float/boolean/void/absent/error types. These are used for record values, as well as expression/variable values in the Miller `put`/`filter` DSL. See also below for more details.
* [`Mlrmap`](./src/types/mlrmap.go) is the sequence of key-value pairs which represents a Miller record. The key-lookup mechanism is optimized for Miller read/write usage patterns -- please see [mlrmap.go](./src/types/mlrmap.go) for more details.
* [`context`](./src/types/context.go) supports AWK-like variables such as `FILENAME`, `NF`, `NR`, and so on.
* [src/cli](./src/cli) is the flag-parsing logic for supporting Miller's command-line interface. When you type something like `mlr --icsv --ojson put '$sum = $a + $b' then filter '$sum > 1000' myfile.csv`, it's the CLI parser which makes it possible for Miller to construct a CSV record-reader, a transformer-chain of `put` then `filter`, and a JSON record-writer.
* [src/cliutil](./src/cliutil) contains datatypes for the CLI-parser, which was split out to avoid a Go package-import cycle.
* [src/stream](./src/stream) is as above -- it uses Go channels to pipe together file-reads, to record-reading/parsing, to a chain of record-transformers, to record-writing/formatting, to terminal standard output.
* [src/input](./src/input) is as above -- one record-reader type per supported input file format, and a factory method.
* [src/output](./src/output) is as above -- one record-writer type per supported output file format, and a factory method.
* [src/transforming](./src/transforming) contains the abstract record-transformer interface datatype, as well as the Go-channel chaining mechanism for piping one transformer into the next.
* [src/transformers](./src/transformers) is all the concrete record-transformers such as `cat`, `tac`, `sort`, `put`, and so on. I put it here, not in `transforming`, so all files in `transformers` would be of the same type.
* [src/parsing](./src/parsing) contains a single source file, `mlr.bnf`, which is the lexical/semantic grammar file for the Miller `put`/`filter` DSL using the GOCC framework. All subdirectories of `src/parsing/` are autogen code created by GOCC's processing of `mlr.bnf`. If you need to edit `mlr.bnf`, please use [tools/build-dsl](./tools/build-dsl) to autogenerate Go code from it (using the GOCC tool). (This takes several minutes to run.)
* [src/dsl](./src/dsl) contains [`ast_types.go`](src/dsl/ast_types.go) which is the abstract syntax tree datatype shared between GOCC and Miller. I didn't use a `src/dsl/ast` naming convention, although that would have been nice, in order to avoid a Go package-dependency cycle.
* [src/dsl/cst](./src/dsl/cst) is the concrete syntax tree, constructed from an AST produced by GOCC. The CST is what is actually executed on every input record when you do things like `$z = $x * 0.3 * $y`. Please see the [src/dsl/cst/README.md](./src/dsl/cst/README.md) for more information.
## Nil-record conventions
Throughout the code, records are passed by reference (as are most things, for
that matter, to reduce unnecessary data copies). In particular, records can be
nil through the reader/transformer/writer sequence.
* Record-readers produce an end-of-stream marker (within the `RecordAndContext` struct) to signify end of input stream.
* Each transformer takes a record-pointer as input and produces a sequence of zero or more record-pointers.
* Many transformers, such as `cat`, `cut`, `rename`, etc. produce one output record per input record.
* The `filter` transformer produces one or zero output records per input record depending on whether the record passed the filter.
* The `nothing` transformer produces zero output records.
* The `sort` and `tac` transformers are *non-streaming* -- they produce zero output records per input record, and instead retain each input record in a list. Then, when the end-of-stream marker is received, they sort/reverse the records and emit them, then they emit the end-of-stream marker.
* Many transformers such as `stats1` and `count` also retain input records, then produce output once there is no more input to them.
* An end-of-stream marker is passed to record-writers so that they may produce final output.
* Most writers produce their output one record at a time.
* The pretty-print writer produces no output until end of stream (or schema change), since it needs to compute the max width down each column.
## Memory management
* Go has garbage collection which immediately simplifies the coding compared to the C port.
* Pointers are used freely for record-processing: record-readers allocate pointed records; pointed records are passed on Go channels from record-readers to record-transformers to record-writers.
* Any transformer which passes an input record through is fine -- be it unmodified as in `mlr cat` or modified as in `mlr cut`.
* If a transformer drops a record (`mlr filter` in false cases, for example, or `mlr nothing`) it will be GCed.
* One caveat is any transformer which produces multiples, e.g. `mlr repeat` -- this needs to explicitly copy records instead of producing multiple pointers to the same record.
* Right-hand-sides of DSL expressions all pass around pointers to records and Mlrvals.
* Lvalue expressions return pointed `*types.Mlrmap` so they can be assigned to; rvalue expressions return non-pointed `types.Mlrval` but these are very shallow copies -- the int/string/etc types are copied but maps/arrays are passed by reference in the rvalue expression-evaluators.
* Copy-on-write is done on map/array put -- for example, in the assignment phase of a DSL statement, where an rvalue is assigned to an lvalue.
## More about mlrvals
[`Mlrval`](./src/types/mlrval.go) is the datatype of record values, as well as expression/variable values in the Miller `put`/`filter` DSL. It includes string/int/float/boolean/void/absent/error types, not unlike PHP's `zval`.
* Miller's `absent` type is like Javascript's `undefined` -- it's for times when there is no such key, as in a DSL expression `$out = $foo` when the input record is `$x=3,y=4` -- there is no `$foo` so `$foo` has `absent` type. Nothing is written to the `$out` field in this case. See also [here](http://johnkerl.org/miller/doc/reference.html#Null_data:_empty_and_absent) for more information.
* Miller's `void` type is like Javascript's `null` -- it's for times when there is a key with no value, as in `$out = $x` when the input record is `$x=,$y=4`. This is an overlap with `string` type, since a void value looks like an empty string. I've gone back and forth on this (including when I was writing the C implementation) -- whether to retain `void` as a distinct type from empty-string, or not. I ended up keeping it as it made the `Mlrval` logic easier to understand.
* Miller's `error` type is for things like doing type-uncoerced addition of strings. Data-dependent errors are intended to result in `(error)`-valued output, rather than crashing Miller. See also [here](http://johnkerl.org/miller/doc/reference.html#Data_types) for more information.
* Miller's number handling makes auto-overflow from int to float transparent, while preserving the possibility of 64-bit bitwise arithmetic.
* This is different from JavaScript, which has only double-precision floats and thus no support for 64-bit numbers (note however that there is now [`BigInt`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt)).
* This is also different from C and Go, wherein casts are necessary -- without which int arithmetic overflows.
* See also [here](http://johnkerl.org/miller/doc/reference.html#Arithmetic) for the semantics of Miller arithmetic, which the [`Mlrval`](./src/types/mlrval.go) class implements.
## Software-testing methodology
See [./regtest/README.md](./regtest/README.md).
## Godoc
As of September 2021, `godoc` support is minimal: package-level synopses exist;
most `func`/`const`/etc content lacks `godoc`-style comments.
To view doc material, you can:
* `go get golang.org/x/tools/cmd/godoc`
* `cd go`
* `godoc -http=:6060 -goroot .`
* Browse to `http://localhost:6060`
* Note: control-C and restart the server, then reload in the browser, to pick up edits to source files.
## Source-code indexing
Please see https://sourcegraph.com/github.com/johnkerl/miller

configure vendored

@@ -23,5 +23,5 @@ else
fi
fi
-sed "s@PREFIX_TEMPLATE_IS_HERE@$prefix@" go/Makefile.in > go/Makefile
-sed "s@PREFIX_TEMPLATE_IS_HERE@$prefix@" man/Makefile.in > man/Makefile
+sed -I .prefix-backup 's@^PREFIX.*@PREFIX='$prefix'@' Makefile
+sed -I .prefix-backup 's@^PREFIX.*@PREFIX='$prefix'@' man/Makefile

go.mod Normal file

@@ -0,0 +1,16 @@
module mlr
// 'module github.com/johnkerl/miller' would be more standard, but it has the
// fatal flaw that 'go build' would produce a file named 'miller', not 'mlr' --
// and this naming goes back many years for Miller with executable named 'mlr',
// predating the Go port, across many platforms.
go 1.15
require (
github.com/goccmack/gocc v0.0.0-20210331093148-09606ea4d4d9 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/lestrrat-go/strftime v1.0.4
github.com/mattn/go-isatty v0.0.12
golang.org/x/sys v0.0.0-20210326220804-49726bf1d181
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf
)


@ -1,28 +0,0 @@
# Please edit Makefile.in rather than Makefile, which is overwritten by ../configure.
PREFIX=/usr/local
INSTALLDIR=$(PREFIX)/bin
# Attempt cp; will fail on Windows but ignore and continue
build:
go build
-cp mlr ..
check:
# Unit tests (small number)
go test -v mlr/src/...
# Regression tests (large number)
#
# See ./regression_test.go for information on how to get more details
# for debugging. TL;DR is for CI jobs, we have 'go test -v'; for
# interactive use, instead of 'go test -v' simply use 'mlr regtest
# -vvv' or 'mlr regtest -s 20'. See also src/auxents/regtest.
go test -v
install: build
cp mlr $(INSTALLDIR)
fmt:
go fmt ./...
# Go does its own dependency management, outside of make.
.PHONY: build fmt


@ -1,28 +0,0 @@
# Please edit Makefile.in rather than Makefile, which is overwritten by ../configure.
PREFIX=PREFIX_TEMPLATE_IS_HERE
INSTALLDIR=$(PREFIX)/bin
# Attempt cp; will fail on Windows but ignore and continue
build:
go build
-cp mlr ..
check:
# Unit tests (small number)
go test -v mlr/src/...
# Regression tests (large number)
#
# See ./regression_test.go for information on how to get more details
# for debugging. TL;DR is for CI jobs, we have 'go test -v'; for
# interactive use, instead of 'go test -v' simply use 'mlr regtest
# -vvv' or 'mlr regtest -s 20'. See also src/auxents/regtest.
go test -v
install: build
cp mlr $(INSTALLDIR)
fmt:
go fmt ./...
# Go does its own dependency management, outside of make.
.PHONY: build fmt

View file

@ -1,182 +0,0 @@
# Quickstart for developers
* `go build` -- produces the `mlr` executable
* Miller has tens of unit tests and thousands of regression tests:
* `go test mlr/src/...` runs the unit tests.
* `go test` or `mlr regtest` runs the regression tests in `regtest/cases/`. Using `mlr regtest -h` you can see more options available than are exposed by `go test`.
# Continuous integration
* The Go implementation is auto-built using GitHub Actions: see [../.github/workflows/go.yml](../.github/workflows/go.yml). This works splendidly on Linux, MacOS, and Windows.
* See also [../README.md](../README.md).
# Benefits of porting to Go
* The [lack of a streaming (record-by-record) JSON reader](http://johnkerl.org/miller/doc/file-formats.html#JSON_non-streaming) in the C implementation ([issue 99](https://github.com/johnkerl/miller/issues/99)) is immediately solved in the Go implementation.
* In the C implementation, [arrays were not supported in the DSL](http://johnkerl.org/miller/doc/file-formats.html#Arrays); in the Go implementation they are.
* [Flattening nested map structures to output records](http://johnkerl.org/miller/doc/file-formats.html#Formatting_JSON_options) was clumsy. Now, Miller will be a JSON-to-JSON processor, if your inputs and outputs are both JSON; JSON input and output will be idiomatic.
* The quoted-DKVP feature from [issue 266](https://github.com/johnkerl/miller/issues/266) will be easily addressed.
* String/number-formatting issues in [issue 211](https://github.com/johnkerl/miller/issues/211), [issue 178](https://github.com/johnkerl/miller/issues/178), [issue 151](https://github.com/johnkerl/miller/issues/151), and [issue 259](https://github.com/johnkerl/miller/issues/259) will be fixed during the Go port.
* I think some DST/timezone issues such as [issue 359](https://github.com/johnkerl/miller/issues/359) will be easier to fix using the Go datetime library than using the C datetime library
* The code will be easier to read and, I hope, easier for others to contribute to. What this means is it should be quicker and easier to add new features to Miller -- after the development-time cost of the port itself is paid, of course.
# Why Go
* As noted above, multiple Miller issues will benefit from stronger library support.
* Channels/goroutines are an excellent for Miller's reader/mapper/mapper/mapper/writer record-stream architecture.
* Since I did timing experiments in 2015, I found Go to be faster than it was then.
* In terms of CPU-cycle-count, Go is a bit slower than C (it does more things, like bounds-checking arrays and so on) -- but by leveraging concurrency over a couple processors, I find that it's competitive in terms of wall-time.
* Go is an up-and-coming language, with good reason -- it's mature, stable, with few of C's weaknesses and many of C's strengths.
* The source code will be easier to read/maintain/write, by myself and others.
# Efficiency of the Go port
As I wrote [here](http://johnkerl.org/miller/doc/whyc.html) back in 2015 I couldn't get Rust or Go (or any other language I tried) to do some test-case processing as quickly as C, so I stuck with C.
Either Go has improved since 2015, or I'm a better Go programmer than I used to be, or both -- but as of 2020 I can get Go-Miller to process data about as quickly as C-Miller.
Note: in some sense Go-Miller is *less* efficient but in a way that doesn't significantly affect wall time. Namely, doing `mlr cat` on a million-record data file on my bargain-value MacBook Pro, the C version takes about 2.5 seconds and the Go version takes about 3 seconds. So in terms of wall time -- which is what we care most about, how long we have to wait -- it's about the same.
A way to look a little deeper at resource usage is to run `htop`, while processing a 10x larger file, so it'll take 25 or 30 seconds rather than 2.5 or 3. This way we can look at the steady-state resource consumption. I found that the C version -- which is purely single-threaded -- is taking 100% CPU. And the Go version, which uses concurrency and channels and `MAXPROCS=4`, with reader/transformer/writer each on their own CPU, is taking about 240% CPU. So Go-Miller is taking up not just a little more CPU, but a lot more -- yet, it does more work in parallel, and finishes the job in about the same amount of time.
Even commodity hardware has multiple CPUs these days -- and the Go code is *much* easier to read, extend, and improve than the C code -- so I'll call this a net win for Miller.
# Developer information
## Source-code goals
Donald Knuth famously said: *Programs are meant to be read by humans and only incidentally for computers to execute.*
During the coding of Miller, I've been guided by the following:
* *Miller should be pleasant to read.*
* If you want to fix a bug, you should be able to quickly and confidently find out where and how.
* If you want to learn something about Go channels, or lexing/parsing in Go -- especially if you don't already know much about them -- the comments should help you learn what you want to.
* If you're the kind of person who reads other people's code for fun, well, the code should be fun, as well as readable.
* `README.md` files throughout the directory tree are intended to give you a sense of what is where, what to read first and and what doesn't need reading right away, and so on -- so you spend a minimum of time being confused or frustrated.
* Names of files, variables, functions, etc. should be fully spelled out (e.g. `NewEvaluableLeafNode`), except for a small number of most-used names where a longer name would cause unnecessary line-wraps (e.g. `Mlrval` instead of `MillerValue` since this appears very very often).
* Code should not be too clever. This includes some reasonable amounts of code duplication from time to time, to keep things inline, rather than lasagna code.
* Things should be transparent. For example, `mlr -n put -v '$y = 3 + 0.1 * $x'` shows you the abstract syntax tree derived from the DSL expression.
* Comments should be robust with respect to reasonably anticipated changes. For example, one package should cross-link to another in its comments, but I try to avoid mentioning specific filenames too much in the comments and README files since these may change over time. I make an exception for stable points such as [mlr.go](./mlr.go), [mlr.bnf](./src/parsing/mlr.bnf), [stream.go](./src/stream/stream.go), etc.
* *Miller should be pleasant to write.*
* It should be quick to answer the question *Did I just break anything?* -- hence the `build` and `reg_test/run` regression scripts.
* It should be quick to find out what to do next as you iteratively develop -- see for example [cst/README.md](https://github.com/johnkerl/miller/blob/master/go/src/dsl/cst/README.md).
* *The language should be an asset, not a liability.*
* One of the reasons I chose Go is that (personally anyway) I find it to be reasonably efficient, well-supported with standard libraries, straightforward, and fun. I hope you enjoy it as much as I have.
## Directory structure
Information here is for the benefit of anyone reading/using the Miller Go code. To use the Miller tool at the command line, you don't need to know any of this if you don't want to. :)
## Directory-structure overview
Miller is a multi-format record-stream processor, where a **record** is a
sequence of key-value pairs. The basic **stream** operation is:
* **read** records in some specified file format;
* **transform** the input records to output records in some user-specified way, using a **chain** of **transformers** (also sometimes called **verbs**) -- sort, filter, cut, put, etc.;
* **write** the records in some specified file format.
So, in broad overview, the key packages are:
* [src/stream](./src/stream) -- connect input -> transforms -> output via Go channels
* [src/input](./src/input) -- read input records
* [src/transforming](./src/transforming) -- transform input records to output records
* [src/output](./src/output) -- write output records
* The rest are details to support this.
## Directory-structure details
### Dependencies
* Miller dependencies are all in the Go standard library, except two:
* GOCC lexer/parser code-generator from [github.com/goccmack/gocc](https://github.com/goccmack/gocc):
* This package defines the grammar for Miller's domain-specific language (DSL) for the Miller `put` and `filter` verbs. And, GOCC is a joy to use. :)
* It is used on the terms of its open-source license.
* [golang.org/x/term](https://pkg.go.dev/golang.org/x/term):
* Just a one-line Miller callsite for is-a-terminal checking for the [Miller REPL](https://github.com/johnkerl/miller/blob/go-mod/go/src/auxents/repl/README.md).
* It is used on the terms of its open-source license.
* See also [./go.mod](go.mod). Setup:
* `go get github.com/goccmack/gocc`
* `go get golang.org/x/term`
### Miller per se
* The main entry point is [mlr.go](./mlr.go); everything else in [src](./src).
* [src/entrypoint](./src/entrypoint): All the usual contents of `main()` are here, for ease of testing.
* [src/platform](./src/platform): Platform-dependent code, which as of early 2021 is the command-line parser. Handling single quotes and double quotes is different on Windows unless particular care is taken, which is what this package does.
* [src/lib](./src/lib):
* Implementation of the [`Mlrval`](./src/types/mlrval.go) datatype which includes string/int/float/boolean/void/absent/error types. These are used for record values, as well as expression/variable values in the Miller `put`/`filter` DSL. See also below for more details.
* [`Mlrmap`](./src/types/mlrmap.go) is the sequence of key-value pairs which represents a Miller record. The key-lookup mechanism is optimized for Miller read/write usage patterns -- please see [mlrmap.go](./src/types/mlrmap.go) for more details.
* [`context`](./src/types/context.go) supports AWK-like variables such as `FILENAME`, `NF`, `NR`, and so on.
* [src/cli](./src/cli) is the flag-parsing logic for supporting Miller's command-line interface. When you type something like `mlr --icsv --ojson put '$sum = $a + $b' then filter '$sum > 1000' myfile.csv`, it's the CLI parser which makes it possible for Miller to construct a CSV record-reader, a transformer-chain of `put` then `filter`, and a JSON record-writer.
* [src/cliutil](./src/cliutil) contains datatypes for the CLI-parser, which was split out to avoid a Go package-import cycle.
* [src/stream](./src/stream) is as above -- it uses Go channels to pipe together file-reads, to record-reading/parsing, to a chain of record-transformers, to record-writing/formatting, to terminal standard output.
* [src/input](./src/input) is as above -- one record-reader type per supported input file format, and a factory method.
* [src/output](./src/output) is as above -- one record-writer type per supported output file format, and a factory method.
* [src/transforming](./src/transforming) contains the abstract record-transformer interface datatype, as well as the Go-channel chaining mechanism for piping one transformer into the next.
* [src/transformers](./src/transformers) contains all the concrete record-transformers such as `cat`, `tac`, `sort`, `put`, and so on. I put them here, rather than in `transforming`, so that every file in `transformers` is a concrete transformer.
* [src/parsing](./src/parsing) contains a single source file, `mlr.bnf`, which is the lexical/semantic grammar file for the Miller `put`/`filter` DSL using the GOCC framework. All subdirectories of `src/parsing/` contain code autogenerated by GOCC's processing of `mlr.bnf`.
* [src/dsl](./src/dsl) contains [`ast_types.go`](src/dsl/ast_types.go) which is the abstract syntax tree datatype shared between GOCC and Miller. I didn't use a `src/dsl/ast` naming convention, although that would have been nice, in order to avoid a Go package-dependency cycle.
* [src/dsl/cst](./src/dsl/cst) is the concrete syntax tree, constructed from an AST produced by GOCC. The CST is what is actually executed on every input record when you do things like `$z = $x * 0.3 * $y`. Please see the [src/dsl/cst/README.md](./src/dsl/cst/README.md) for more information.
## Nil-record conventions
Throughout the code, records are passed by reference (as are most things, for
that matter, to reduce unnecessary data copies). In particular, records can be
nil anywhere in the reader/transformer/writer sequence.
* Record-readers produce an end-of-stream marker (within the `RecordAndContext` struct) to signify end of input stream.
* Each transformer takes a record-pointer as input and produces a sequence of zero or more record-pointers.
* Many transformers, such as `cat`, `cut`, `rename`, etc. produce one output record per input record.
* The `filter` transformer produces one or zero output records per input record depending on whether the record passed the filter.
* The `nothing` transformer produces zero output records.
* The `sort` and `tac` transformers are *non-streaming* -- they produce zero output records per input record, and instead retain each input record in a list. Then, when the end-of-stream marker is received, they sort/reverse the records and emit them, then they emit the end-of-stream marker.
* Many transformers such as `stats1` and `count` also retain input records, then produce output once there is no more input to them.
* An end-of-stream marker is passed to record-writers so that they may produce final output.
* Most writers produce their output one record at a time.
* The pretty-print writer produces no output until end of stream (or schema change), since it needs to compute the max width down each column.
## Memory management
* Go has garbage collection, which immediately simplifies the coding compared to the earlier C implementation.
* Pointers are used freely for record-processing: record-readers allocate pointed records; pointed records are passed on Go channels from record-readers to record-transformers to record-writers.
* Any transformer which passes an input record through is fine -- be it unmodified as in `mlr cat` or modified as in `mlr cut`.
* If a transformer drops a record (`mlr filter` in false cases, for example, or `mlr nothing`) it will be GCed.
* One caveat is any transformer which produces multiple output records per input record, e.g. `mlr repeat` -- such a transformer needs to explicitly copy records instead of emitting multiple pointers to the same record.
* Right-hand-sides of DSL expressions all pass around pointers to records and Mlrvals.
* Lvalue expressions return pointed `*types.Mlrmap` so they can be assigned to; rvalue expressions return non-pointed `types.Mlrval` but these are very shallow copies -- the int/string/etc types are copied but maps/arrays are passed by reference in the rvalue expression-evaluators.
* Copy-on-write is done on map/array put -- for example, in the assignment phase of a DSL statement, where an rvalue is assigned to an lvalue.
## More about mlrvals
[`Mlrval`](./src/types/mlrval.go) is the datatype of record values, as well as expression/variable values in the Miller `put`/`filter` DSL. It includes string/int/float/boolean/void/absent/error types, not unlike PHP's `zval`.
* Miller's `absent` type is like JavaScript's `undefined` -- it's for times when there is no such key, as in a DSL expression `$out = $foo` when the input record is `x=3,y=4` -- there is no `$foo` so `$foo` has `absent` type. Nothing is written to the `$out` field in this case. See also [here](http://johnkerl.org/miller/doc/reference.html#Null_data:_empty_and_absent) for more information.
* Miller's `void` type is like JavaScript's `null` -- it's for times when there is a key with no value, as in `$out = $x` when the input record is `x=,y=4`. This is an overlap with `string` type, since a void value looks like an empty string. I've gone back and forth on this (including when I was writing the C implementation) -- whether to retain `void` as a distinct type from empty-string, or not. I ended up keeping it as it made the `Mlrval` logic easier to understand.
* Miller's `error` type is for things like doing type-uncoerced addition of strings. Data-dependent errors are intended to result in `(error)`-valued output, rather than crashing Miller. See also [here](http://johnkerl.org/miller/doc/reference.html#Data_types) for more information.
* Miller's number handling makes auto-overflow from int to float transparent, while preserving the possibility of 64-bit bitwise arithmetic.
* This is different from JavaScript, which has only double-precision floats and thus no support for 64-bit numbers (note however that there is now [`BigInt`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt)).
* This is also different from C and Go, wherein casts are necessary -- without which int arithmetic overflows.
* See also [here](http://johnkerl.org/miller/doc/reference.html#Arithmetic) for the semantics of Miller arithmetic, which the [`Mlrval`](./src/types/mlrval.go) class implements.
## Software-testing methodology
See [./regtest/README.md](./regtest/README.md).
## Godoc
As of September 2021, `godoc` support is minimal: package-level synopses exist;
most `func`/`const`/etc content lacks `godoc`-style comments.
To view doc material, you can:
* `go get golang.org/x/tools/cmd/godoc`
* `cd go`
* `godoc -http=:6060 -goroot .`
* Browse to `http://localhost:6060`
* Note: control-C and restart the server, then reload in the browser, to pick up edits to source files.
## Source-code indexing
Please see https://sourcegraph.com/github.com/johnkerl/miller
@@ -1,55 +0,0 @@
#!/bin/bash
verbose=""
do_wips="false"
if [ "$1" = "-v" ]; then
shift
set -x
verbose="-v"
fi
if [ "$1" = "-x" ]; then
shift
do_wips="true"
fi
if [ "$do_wips" = "false" ]; then
set -euo pipefail
fi
export TZ=""
echo ================================================================
echo BUILD
go build
echo Compile OK
echo
echo ================================================================
echo UNIT TESTS
go test -v mlr/src/...
# 'go test' (with no arguments) is the same as 'mlr regtest'
echo
echo ================================================================
echo REGRESSION TESTS MAIN
echo
./mlr regtest $verbose regtest/cases
if [ "$do_wips" = "true" ]; then
echo
echo ================================================================
echo REGRESSION TESTS PENDING WINDOWS
echo
./mlr regtest $verbose cases-pending-windows
echo
echo ================================================================
echo REGRESSION TESTS PENDING GO PORT
echo
./mlr regtest $verbose regtest/cases-pending-go-port
fi
echo
# Run the auto-formatter
go fmt ./...
@@ -1,56 +0,0 @@
#!/bin/bash
# ================================================================
# Reads the Miller DSL grammar file and generates Go code.
#
# This is not run on every build / commit / etc.
#
# It's intended to be run manually by the developer, as needed when mlr.bnf
# changes for example.
#
# Resulting auto-generated .go files should then be checked into source control.
#
# With verbose, *.txt files are created with information about LR1 conflicts
# etc. Please don't commit them.
#
# As of mid-2021 this takes easily 5-10 minutes to run.
# ================================================================
set -euo pipefail
verbose="true"
if [ $# -eq 1 ]; then
if [ "$1" == "-v" ]; then
verbose="true"
elif [ "$1" == "-q" ]; then
verbose="false"
fi
fi
# Build the bin/gocc executable:
go get github.com/goccmack/gocc
#go get github.com/johnkerl/gocc
bingocc="$HOME/go/bin/gocc"
if [ ! -x "$bingocc" ]; then
exit 1
fi
rm -f src/parsing/*.txt
if [ "$verbose" = "true" ]; then
lr1="src/parsing/LR1_conflicts.txt"
$bingocc -v -o ./src/parsing -p mlr/src/parsing src/parsing/mlr.bnf || expand -2 $lr1
else
$bingocc -o ./src/parsing -p mlr/src/parsing src/parsing/mlr.bnf
fi
# Code-gen directories:
# src/parsing/errors/
# src/parsing/lexer/
# src/parsing/parser/
# src/parsing/token/
# src/parsing/util/
# Override GOCC codegen with customized error handling
cp src/parsing/errors.go.template src/parsing/errors/errors.go
for x in src/parsing/*/*.go; do gofmt -w $x; done
@@ -1,6 +0,0 @@
go build
go test -v mlr/src/...
# 'go test' (with no arguments) is the same as 'mlr regtest'
mlr regtest regtest/cases
@@ -1,12 +0,0 @@
module mlr
go 1.15
require (
github.com/goccmack/gocc v0.0.0-20210331093148-09606ea4d4d9 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/lestrrat-go/strftime v1.0.4
github.com/mattn/go-isatty v0.0.12
golang.org/x/sys v0.0.0-20210326220804-49726bf1d181
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf
)
@@ -1,42 +0,0 @@
# Miller regression tests
There are a few files unit-tested with Go's `testing` package -- a few dozen cases total.
The vast majority of Miller tests, though -- thousands of cases -- are tested by running scripted invocations of `mlr` with various flags and inputs, comparing against expected output, and checking the exit code back to the shell.
## How to run the regression tests, in brief
*Note: while this `README.md` file is within the `go/regtest/` subdirectory, all paths in this file are written from the perspective of the user being cd'ed into the `go/` directory, i.e. this directory's parent directory.*
* `mlr regtest --help`
* `go test` -- TODO -- also comment
## Items for the duration of the Go port
* `mlr regtest -c ...` runs the C version of Miller from the local checkout
## More details
TODO: needs to be written up
```
alias mr='mlr regtest'
mr
mr regtest/cases/foo
mr -v regtest/cases/foo
mr -cj regtest/cases/foo/0003
mr -gj regtest/cases/foo/0003
mr -gp regtest/cases/foo/0003
...
```
```
mr -gp regtest/cases/foo
git diff
git reset --hard
```
## Creating new cases
TODO: needs to be written up
@@ -1 +0,0 @@
mlr head -n 2 then put 'end{ print "Final NR is ".NR}' regtest/input/abixy-wide

@@ -1 +0,0 @@
mlr cat then head -n 2 then put 'end{ print "Final NR is ".NR}' regtest/input/abixy-wide

@@ -1 +0,0 @@
mlr tac then head -n 2 then put 'end{ print "Final NR is ".NR}' regtest/input/abixy-wide

@@ -1 +0,0 @@
mlr head -n 2 then put 'end{ print "Final NR is ".NR}' regtest/input/abixy-wide regtest/input/abixy-wide regtest/input/abixy-wide

@@ -1 +0,0 @@
mlr --prepipe '.${PATHSEP}mlr cat' --odkvp join -j a -f regtest/input/join-het.dkvp regtest/input/abixy-het

@@ -1 +0,0 @@
mlr --odkvp join --prepipe '.${PATHSEP}mlr cat' -j a -f regtest/input/join-het.dkvp regtest/input/abixy-het

@@ -1 +0,0 @@
mlr --prepipe '.${PATHSEP}mlr cat' --odkvp join --prepipe cat -j a -f regtest/input/join-het.dkvp regtest/input/abixy-het

@@ -1 +0,0 @@
mlr lecat --mono < regtest/input/line-ending-cr.bin

@@ -1 +0,0 @@
mlr lecat --mono < regtest/input/line-ending-lf.bin

@@ -1 +0,0 @@
mlr lecat --mono < regtest/input/line-ending-crlf.bin

@@ -1 +0,0 @@
mlr unhex < regtest/input/256.txt > regtest/input/auxents-hex-unhex/0004.bin

@@ -1 +0,0 @@
mlr unhex < regtest/input/256.txt > regtest/input/auxents-hex-unhex/0002.bin

@@ -1 +0,0 @@
mlr unhex < regtest/input/256-ragged.txt > regtest/input/auxents-hex-unhex/0003.bin

@@ -1 +0,0 @@
mlr unhex < regtest/input/256-ragged.txt > regtest/input/auxents-hex-unhex/0004.bin

@@ -1 +0,0 @@
mlr unhex regtest/input/256.txt > regtest/input/auxents-hex-unhex/0005.bin

@@ -1 +0,0 @@
mlr unhex regtest/input/256.txt > regtest/input/auxents-hex-unhex/0006.bin

@@ -1 +0,0 @@
mlr unhex regtest/input/256-ragged.txt > regtest/input/auxents-hex-unhex/0007.bin

@@ -1 +0,0 @@
mlr unhex regtest/input/256-ragged.txt > regtest/input/auxents-hex-unhex/0008.bin

@@ -1 +0,0 @@
mlr hex regtest/input/auxents-hex-unhex/0001.bin

@@ -1 +0,0 @@
mlr hex -r regtest/input/auxents-hex-unhex/0002.bin

@@ -1 +0,0 @@
mlr hex regtest/input/auxents-hex-unhex/0003.bin

@@ -1 +0,0 @@
mlr hex -r regtest/input/auxents-hex-unhex/0004.bin

@@ -1 +0,0 @@
mlr hex regtest/input/auxents-hex-unhex/0005.bin

@@ -1 +0,0 @@
mlr hex -r regtest/input/auxents-hex-unhex/0006.bin

@@ -1 +0,0 @@
mlr hex regtest/input/auxents-hex-unhex/0007.bin

@@ -1 +0,0 @@
mlr hex -r regtest/input/auxents-hex-unhex/0008.bin

@@ -1 +0,0 @@
mlr termcvt --cr2lf < regtest/input/line-ending-cr.bin

@@ -1 +0,0 @@
mlr termcvt --cr2crlf < regtest/input/line-ending-cr.bin

@@ -1 +0,0 @@
mlr termcvt --lf2cr < regtest/input/line-ending-lf.bin

@@ -1 +0,0 @@
mlr termcvt --lf2crlf < regtest/input/line-ending-lf.bin

@@ -1 +0,0 @@
mlr termcvt --crlf2cr < regtest/input/line-ending-crlf.bin

@@ -1 +0,0 @@
mlr termcvt --crlf2lf < regtest/input/line-ending-crlf.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-1.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-2.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-3.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-4.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-5.bin

@@ -1 +0,0 @@
mlr hex < regtest/input/auxents-hex-unhex/line-ending-temp-6.bin

@@ -1 +0,0 @@
mlr cat then cat regtest/input/short

@@ -1 +0,0 @@
mlr cat then tac regtest/input/short

@@ -1 +0,0 @@
mlr tac then cat regtest/input/short

@@ -1 +0,0 @@
mlr tac then tac regtest/input/short

@@ -1 +0,0 @@
mlr cat then cat then cat regtest/input/short

@@ -1 +0,0 @@
mlr cat then cat then tac regtest/input/short

@@ -1 +0,0 @@
mlr cat then tac then cat regtest/input/short

@@ -1 +0,0 @@
mlr cat then tac then tac regtest/input/short

@@ -1 +0,0 @@
mlr tac then cat then cat regtest/input/short

@@ -1 +0,0 @@
mlr tac then cat then tac regtest/input/short

@@ -1 +0,0 @@
mlr tac then tac then cat regtest/input/short

@@ -1 +0,0 @@
mlr tac then tac then tac regtest/input/short

@@ -1 +0,0 @@
mlr then cat then head -n 2 -g a,b then tac regtest/input/abixy-het

@@ -1 +0,0 @@
mlr --csv cut -f a regtest/input/rfc-csv/simple.csv-crlf

@@ -1 +0,0 @@
mlr --csv cut -f a regtest/input/rfc-csv/simple.csv-crlf

@@ -1 +0,0 @@
mlr --ofs pipe cat regtest/input/abixy

@@ -1 +0,0 @@
mlr --ofs=pipe cat regtest/input/abixy

@@ -1 +0,0 @@
mlr --csv --mfrom regtest/input/s.csv -- cat

@@ -1 +0,0 @@
mlr --csv --mfrom regtest/input/s.csv regtest/input/t.csv -- cat

@@ -1 +0,0 @@
mlr put -q '@sum += $x; end{emitp @sum}' regtest/input/abixy

@@ -1 +0,0 @@
mlr put -q -f ${CASEDIR}/mlr regtest/input/abixy

@@ -1 +0,0 @@
mlr put '$nonesuch = @nonesuch' regtest/input/abixy

@@ -1 +0,0 @@
mlr put -q '@sum += $x; end{emitp @sum}' regtest/input/abixy-het

@@ -1 +0,0 @@
mlr put -q -f ${CASEDIR}/mlr regtest/input/abixy-het

@@ -1 +0,0 @@
mlr put '$nonesuch = @nonesuch' regtest/input/abixy-het

@@ -1 +0,0 @@
mlr put -q '@sum += $x; @sumtype = typeof(@sum); @xtype = typeof($x); emitf @sumtype, @xtype, @sum; end{emitp @sum}' regtest/input/abixy

@@ -1 +0,0 @@
mlr put -q '@sum += $x; @sumtype = typeof(@sum); @xtype = typeof($x); emitf @sumtype, @xtype, @sum; end{emitp @sum}' regtest/input/abixy-het

@@ -1 +0,0 @@
mlr put '$z = $x + $y' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '$z = $x + $u' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '@s = @s + $y; emitp @s' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '$z = $x + $y; $x=typeof($x);$y=typeof($y);$z=typeof($z)' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '$z = $x + $u; $x=typeof($x);$y=typeof($y);$z=typeof($z)' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '@s = @s + $y; $x=typeof($x);$y=typeof($y);$z=typeof($z);$s=typeof(@s)' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '@s = @s + $u; $x=typeof($x);$y=typeof($y);$z=typeof($z);$s=typeof(@s)' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr put '@s = @s + $u; emitp @s' regtest/input/typeof.dkvp

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --from regtest/input/abixy put -f ./${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --oxtab --from regtest/input/abixy head -n 1 then put -f ${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --oxtab --from regtest/input/abixy head -n 1 then put -f ${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --oxtab --from regtest/input/abixy head -n 1 then put -f ${CASEDIR}/mlr

@@ -1 +0,0 @@
mlr --oxtab --from regtest/input/abixy head -n 1 then put -f ${CASEDIR}/mlr
Some files were not shown because too many files have changed in this diff.