Merge: Json benchmark
Added a JSON parser benchmark between different languages and Nit using 3 variants:
* Nit/NitCC: The old parser relying on NitCC, which is slow and memory-consuming (more than 6 Gio of RAM for the 100 Mio escaping-intensive file)
* Nit/Ad-hoc UTF-8 no ropes: The new parser working exclusively on `FlatString`
* Nit/Ad-hoc UTF-8 with ropes: The new parser with a mix of `Concat` and `FlatString`
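For readers unfamiliar with the ropes variant: a `Concat` node just records its two children, so appending a parsed fragment is O(1) instead of a full copy, at the price of slower traversal later. A minimal Python sketch of the idea (illustrative only, not the actual Nit classes; the names merely mirror the ones above):

```python
class Flat:
    """A leaf holding an actual character buffer (akin to `FlatString`)."""
    def __init__(self, s):
        self.s = s
    def __len__(self):
        return len(self.s)
    def __str__(self):
        return self.s

class Concat:
    """An internal rope node: concatenation is O(1), and no characters
    are copied until the rope is flattened with str()."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.length = len(left) + len(right)
    def __len__(self):
        return self.length
    def __str__(self):
        return str(self.left) + str(self.right)
```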
![vr5fa](https://cloud.githubusercontent.com/assets/1444825/11787549/4375a4e6-a25a-11e5-87b3-ac4346dee3bd.jpg)
I hear you all clamouring; well, here are the results (after #1885 and #1887, naturally):
![output](https://cloud.githubusercontent.com/assets/1444825/11787622/b24c0c98-a25a-11e5-8cff-0e0afe03c9d8.png)
So yeah, I guess we could do better when it comes to escaping, since the biggest difference in runtime is in the `large_escaped` benchmark, which unsurprisingly consists mostly of `\uXXXX` escape sequences.
Other than that, we do as well as Go and better than Ruby (and worse than Python, but this does not count), which is nice.
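For context, the hot path in the escape-heavy input is turning a `\uXXXX` escape (possibly a UTF-16 surrogate pair) into a code point and then into UTF-8 bytes. A rough Python sketch of that conversion (illustrative only, not the Nit parser's code; the function names are made up):

```python
def decode_unicode_escape(s, i):
    """Decode a \\uXXXX escape; `i` points at the 'u' in `s`.
    Returns (code point, index after the escape), combining a
    UTF-16 surrogate pair into a single code point."""
    cp = int(s[i + 1:i + 5], 16)
    i += 5
    # High surrogate followed by another \uXXXX low surrogate?
    if 0xD800 <= cp <= 0xDBFF and s[i:i + 2] == "\\u":
        low = int(s[i + 2:i + 6], 16)
        if 0xDC00 <= low <= 0xDFFF:
            cp = 0x10000 + ((cp - 0xD800) << 10) + (low - 0xDC00)
            i += 6
    return cp, i

def utf8_bytes(cp):
    """Encode a single code point as UTF-8 bytes."""
    if cp < 0x80:
        return bytes([cp])
    if cp < 0x800:
        return bytes([0xC0 | cp >> 6, 0x80 | cp & 0x3F])
    if cp < 0x10000:
        return bytes([0xE0 | cp >> 12,
                      0x80 | cp >> 6 & 0x3F,
                      0x80 | cp & 0x3F])
    return bytes([0xF0 | cp >> 18,
                  0x80 | cp >> 12 & 0x3F,
                  0x80 | cp >> 6 & 0x3F,
                  0x80 | cp & 0x3F])
```

For instance, `decode_unicode_escape(r"\ud83d\ude00", 1)` combines the surrogate pair into code point U+1F600, which `utf8_bytes` then expands to four bytes.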
About the inputs:
* `large_escaped` is an unusual file: it contains large strings with lots of Unicode escape sequences, which should highlight the handling of String-to-Int conversions and of Unicode-escape-sequence-to-UTF-8-character decoding; it is also big, as in very big (94.7 Mio)
* `magic`, a normally-formatted 54 Mio JSON file with quite a few Unicode characters
* `gov_data`, a 6.9 Mio JSON file with ASCII characters only
* `twitter`, a 64 kio JSON file with a lot of Japanese characters
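If you want to reproduce an escape-heavy input in the spirit of `large_escaped`, a sketch like this works, since `json.dumps` with its default `ensure_ascii=True` writes every non-ASCII character as a `\uXXXX` escape (sizes and the function name are made up for illustration):

```python
import json
import random

def make_escape_heavy_json(n_strings=100, string_len=1000, seed=42):
    """Build a JSON array of strings made entirely of non-ASCII
    characters; json.dumps (default ensure_ascii=True) then emits
    every character as a \\uXXXX escape."""
    rng = random.Random(seed)
    strings = [
        "".join(chr(rng.randrange(0x80, 0x3000)) for _ in range(string_len))
        for _ in range(n_strings)
    ]
    return json.dumps(strings)
```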
I might add some more files later to better represent the variety of inputs, but right now is a good time to push the benchmark suite. Enjoy!
Note: Since the ad-hoc JSON parser is benched, #1886 will need to be merged before this one if the bench is to work on your machines.
Pull-Request: #1895
Reviewed-by: Jean Privat <jean@pryen.org>