Category Archives: Uncategorized

NativeScript: TypeScript Speed/Memory usage

For those who may have seen some posts of mine on Slack about TypeScript not being as performant as JavaScript: I have finally done the real benchmarks and spent the time totally de-typescriptifying the NativeScript 2.00 runtimes.  And here are my startling results....

[Image: "Everything you know is wrong"]

(c) 2010, Jan-Willem Reusink - https://www.flickr.com/photos/jimmybenson

Oh, wait; that is not right -- I was the one who was wrong.  😉

The actual real numbers do not bear out what I had believed based on some TypeScript tests I had done in Node a while back.  I am still not sure why my initial tests in Node behaved differently; but after spending a couple of days building the tests, using a large JS application and a totally de-typescriptified NativeScript runtime, I can say without a doubt in my mind that TS adds little to no meaningful hit to the runtimes.  On iOS I actually didn't see a memory difference at all; the GC seems to collect the memory so quickly that it wasn't even showing up.  On Android it takes about 40-60 more megabytes of memory for everything; however, after the first GC, all of that memory is reclaimed.  So yes, you do end up with a small amount of wasted memory and some added GC pressure.  However, with it ALL being reclaimed at the first GC, 40mb temporarily wasted really is a drop in the bucket compared to what TS offers you.

The other thing that I was surprised by was that the TS runtimes actually started up faster than the pure JS versions.  Once I had to reason through why TS code was starting faster than raw JS, it makes a lot of sense -- most classes are lazily instantiated.  The amount of JS code actually compiled and run by the V8/JSC engines is a lot smaller in TS-compiled code, because the majority of the code is inside the function that wraps each class.  So if a class isn't needed yet during startup (but is loaded via the require statements), the time spent running it is minuscule compared to a runtime that actually builds all the classes while it is loading each one.  So, even though the difference was literally milliseconds, it was still measurable.  Eventually, when you do have that class instantiated, you pay the cost there; but since each class created later literally takes nanoseconds, the later hit actually "feels" faster to the user, since time to first pixel ended up being faster with the TypeScript code...
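
For reference, this is roughly the pattern the TypeScript compiler emits for a simple class when targeting ES5 -- the class body lives inside the wrapper function I am referring to above.  (This is a generic illustration of tsc output, not code lifted from the NativeScript runtime.)

// TypeScript source:
//   export class Point {
//       constructor(public x: number, public y: number) { }
//       length() { return Math.sqrt(this.x * this.x + this.y * this.y); }
//   }
//
// Roughly what tsc emits when targeting ES5 -- the whole class is assembled
// inside a wrapper function:
var Point = (function () {
    function Point(x, y) {
        this.x = x;
        this.y = y;
    }
    Point.prototype.length = function () {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    };
    return Point;
})();

console.log(new Point(3, 4).length());   // 5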

Actual numbers (best of):

Startup time TS: 393,675,213 (NanoSeconds)
Startup time JS:  399,058,778 (NanoSeconds)

Memory Usage TS: 7,544,460 (Bytes)
Memory Usage JS: 7,488,292 (Bytes)

Memory after GC TS: 4,979,548 (Bytes)
Memory after GC JS: 4,993,268 (Bytes)

Please note the reason the GC'd JS is actually bigger than the GC'd TS: more (all) of the classes have been defined, created, and are sitting in memory as real objects, vs. the raw un-instantiated source code in TS for classes that haven't been used yet.

So, guess what I am going to be using more of....  😉

NativeScript and WebWorkers/Threads

So about 10 months ago, I put in an issue for adding threads/workers to NativeScript.  I realized early on this was a major missing feature.  10 months later, I still have that same opinion; the only real major feature NativeScript is missing is background threads.   For a lot of projects this won't have an effect; but there are those that need to do heavy processing, and this missing feature is a major problem for those types of applications, as there has not been a good way to work around it.

Well, mid-last week a user, x4080, posted a question about whether a WebView's main thread is tied to the NativeScript main thread or whether it could be used as another thread.  A light bulb went on in my head -- awesome thinking outside the box by x4080!   I quickly created a test framework using an existing app I had written and tested it.   The threads are distinct!   Fast forward a couple of days, and today I am happy to announce nativescript-webworkers!   I have wrapped everything up on Android so that it works just like a traditional WebWorker, with extras!  iOS support should be ready later this week.

To install, you just need to do the standard nativescript plugin add nativescript-webworkers; then you can follow my simple sample in the docs or any of the myriad WebWorker examples on the web.   This does obviously increase the RAM required, and there is a startup cost for the WebWorker to be primed and ready to run.    But if you have any processing you don't want on the main thread, we now have a solution that should cover most use cases where more than one thread is needed.
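
If you have never used a WebWorker before, the traditional API looks roughly like the sketch below; see the plugin's docs for the exact setup, and treat the file path and the math inside the worker as placeholders I made up for this example.

// main thread -- traditional WebWorker-style usage
var worker = new Worker("./workers/prime-worker.js");

worker.onmessage = function (event) {
    console.log("Result from the worker:", event.data);
    worker.terminate();              // free the worker (and its RAM) when done
};

worker.postMessage({ limit: 1000000 });


// workers/prime-worker.js -- this file runs OFF the main thread
onmessage = function (event) {
    var limit = event.data.limit, largest = 2;
    for (var n = 3; n < limit; n += 2) {       // crude prime search, just to burn CPU
        var isPrime = true;
        for (var d = 3; d * d <= n; d += 2) {
            if (n % d === 0) { isPrime = false; break; }
        }
        if (isPrime) { largest = n; }
    }
    postMessage(largest);                      // send the answer back to the main thread
};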


Upgrading to NativeScript v.Next (From pre-release nightly masters)

Please note: this is how to install the newest pre-release, based on my experience with the current nightly pre-release masters available at http://nativescript.rocks.

The first thing you MUST do is upgrade your NativeScript command line utility.     The easiest way is to do a: npm remove nativescript -g

Yes, we need to de-install the current version; trust me it is easier this way.

The next thing, if you are doing anything with Android, is to type "gradle" and see if it runs.  If it doesn't run from the command line, you need to either install gradle or set your path so it is found.   If you have Android Studio installed, gradle is included with it, so you don't have to install it again.  For example, on my Windows machine my gradle is located at: C:\Program Files (x86)\Android\android-studio\gradle\gradle-2.4\bin.   If you are using Ubuntu, the version included is really old and you will need to install a ppa from: https://launchpad.net/~cwchien/+archive/ubuntu/gradle -- then you will be able to do a sudo apt-get update && sudo apt-get install gradle and get a much more recent version.   On a Macintosh, it is recommended you install brew and then do a brew install gradle.   You can alternatively download and install it directly from https://gradle.org/

The next thing you need to make sure of is that you have your ANDROID_HOME and JAVA_HOME environment variables set.   If you don't have them set, you will get WEIRD, unrelated errors when trying to do things with the new version of the NativeScript command line tool.

Next, download the latest master nativescript-cli-master.tgz from nativescript.rocks and type:  npm install nativescript-cli-master.tgz -g

If everything worked fine; you should be able to do: tns --version and you will see  the next version number plus "non-ci".

Now that you have your updated command line, you next want to download your platform(s): tns-android-master.tgz and/or tns-ios-master.tgz.   You will then need to do a
tns platform remove android ----- WARNING!!!  THIS WILL DELETE EVERYTHING IN YOUR platforms/android FOLDER.   If you have anything you customized (i.e. the AndroidManifest.xml file), you will want to copy it out first... WARNING!!!!

Then you can run tns platform add android --frameworkPath=tns-android-master.tgz, assuming the tns-android-master.tgz file is in the same folder where you are running the tns command.    Please note the --frameworkPath is case sensitive, and you need to point it to the full path of wherever the tns-android-master.tgz or tns-ios-master.tgz files are located.   I typically put them in the parent folder that contains all my NativeScript project folders, so that I can do tns platform add android --frameworkPath=../tns-android-master.tgz from any of the projects.

The final piece is updating the common core (tns-core-modules-master.tgz).  In some cases you can skip installing the new CLI & runtimes and just use the core.  I haven't tried to see if the new common core library is compatible with the older runtimes; in a lot of cases they are compatible early on, but by the middle of the development cycle the core typically relies on a new feature exposed in the runtimes.   So it is always safer to keep them updated together.

In version 1.3 the tns_modules folder inside the app folder has been deprecated and is no longer used, so you can just delete the app/tns_modules folder.   The new location is in the node_modules folder; so you can now do a npm install tns-core-modules-master.tgz

And finally after everything is all done; you do a:
tns prepare android
and/or
tns prepare ios

And you are now running on the latest masters!

NativeScript Nightly Masters

For those who would like to live on the bleeding edge, I have started the process of having one of my servers build each of the different NativeScript repos nightly from the master branch. You can now download any of these from NativeScript.rocks.

Currently done are:

  • NativeScript Common/Core Library
  • NativeScript Command Line Interface
  • NativeScript TypeScript Declarations
  • NativeScript Android Runtime

The Android runtime does automatically have my LiveSync patches; so you will be able to use any of the masters with my LiveSync plugin.

I am in the process of getting the iOS runtime building (I believe I have pretty much everything I need to make it work). However, I still need to purchase an Apple Developer Key for the server to hopefully complete it. Since I am a contract developer, this last part will have to wait until I have some extra funds for this part of the project (which hopefully will be in the next couple of weeks)...

Update (2015-14-09): I may have a way to build the iOS runtime without a key, thanks to Yavor Georgiev; I will be testing this soon...

Fonter - A Simple NativeScript Font Application for iOS and Android

[[ An updated version of this post has been posted for NativeScript v1.5+ and icon fonts here. ]]

Since the subject of Fonts has been causing issues for multiple people in the NativeScript community I figured I would write up a post on how to do it.

Attached to this post is the completed project.    The first thing you need to do is create the project:
tns create fonter -- then cd fonter and tns platform add android or tns platform add ios.   Now your project is ready to go.  I deleted the app/main-page.js and the app/main-view-model.js in this sample app, since they are not needed.

Next thing we need are some fonts.   So I went to https://Google.com/fonts, looked at several fonts, picked a couple, and downloaded them.  (Please note you actually have to download the fonts from https://github.com/google/fonts.)   To keep things simple, all three fonts are under the SIL Open Font License: http://scripts.sil.org/OFL

The second thing we need to do is navigate to the app folder and create a new "fonts" folder, like so:

[Screenshot: the new "fonts" folder inside the app folder]

This folder MUST be named exactly "fonts" -- all lower case letters, no upper case.


The next thing we will do is copy the fonts you downloaded into your fonts folder.   In my case I downloaded Indie Flower, Josefin Sans, and Lobster, so my folder looks like this:

[Screenshot: the fonts folder containing IndieFlower.ttf, JosefinSans-Regular.ttf, and Lobster-Regular.ttf]

So we have three fonts that we want to add to the CSS.  In my case I want these fonts available globally in the application, so I will open up the app.css file and add the following three classes:


.Lobster {
font-family: Lobster-Regular;
}

.IndieFlower {
font-family: IndieFlower;
}

.JosefinSans {
font-family: JosefinSans-Regular;
}

You might notice the font-family name is the EXACT SAME SPELLING and EXACT SAME CASE as each of the file names; the only thing removed is the .ttf extension.   This is REQUIRED for Android.  Android will automatically load the file (with the .ttf added back) referenced in the font-family from the fonts folder.

On iOS we have to do something slightly different; we have to register the fonts before use.  Open up the app.js file and add the following code to it, just BEFORE the application.start() call.


if (application.ios) {
    var fontModule = require("ui/styling/font");
    fontModule.ios.registerFont("IndieFlower.ttf");
    fontModule.ios.registerFont("JosefinSans-Regular.ttf");
    fontModule.ios.registerFont("Lobster-Regular.ttf");
}
application.start();

Again, you will notice the file name is spelled and cased exactly the same as the file name in your fonts folder.   This is important!    However you name the file, both the iOS registration and the Android CSS declarations MUST match it exactly.

At this moment, the fonts are registered and usable on both iOS and Android.   So let's show them off.   Open up your main-page.xml; here is the code I used:



<Label text="Lobster" class="Lobster" />
<Label text="Indie Flower" class="IndieFlower" />
<Label text="Josefin Sans" class="JosefinSans" />

So now we want to try the code: tns run android --emulator or tns run ios --emulator.  On Android everything should work perfectly and you should see something like this:

[Screenshot: the three labels rendering in their custom fonts on Android]


Unfortunately, on iOS it isn't as simple; when you run this project you will see that Lobster and Indie Flower work properly, but Josefin Sans falls back to the default font...

Here is the "small gotcha" on iOS.   The font is registered under its actual font name, not under the file name.   So there is one more step you need to do, and you can do it on either OS X or Windows.  Go to your fonts folder and double-click on JosefinSans-Regular.ttf, and you should see a window like either of these (depending on your platform).  Notice the part I highlighted in red -- that is the actual font name.

[Screenshots: the OS X and Windows font preview title bars, showing the actual font name "Josefin Sans"]

So, the last piece of this puzzle is to re-open the app.css file and change the .JosefinSans class declaration to:

.JosefinSans {
font-family: JosefinSans-Regular,Josefin Sans;
}

Do you see what I added?  I appended the actual font name to the font-family.  The ORDER is important.   Android, because it auto-registers the fonts, will load and register the FIRST font and then it is happy.   iOS doesn't recognize the first name, so it just ignores it, sees the SECOND name, and then it is happy.

So now when you do a tns run ios --emulator you should see this:

[Screenshot: all three labels rendering in their custom fonts on iOS]


You can download the complete project (including fonts) here: fonter.zip

The fonts/fonts.json file is just the font information for the fonts from the Google fonts project; I wanted to make sure the copyright information was in the same folder as the fonts in case someone finds the sample elsewhere.

[[ An updated version of this post has been posted for NativeScript v1.5+ and icon fonts here. ]]

DLNA Servers with Passwordable Folders

Over the last couple years I have played around with several DLNA servers and other media servers on my own network.      I like to eliminate the physical media and make it as simple as possible to listen to my favorite music or watch a video from anywhere in my dwelling.   Since I have kids and a wife, I need to make it simple for everyone.

However, based on my research, the solutions for protected content have always been very lacking using straight DLNA.    The only solution I've seen to date using DLNA is setting up access groups based on the device playing the media.   However, that approach falls short in that there is no way to know who is actually using the device.     The only other choice I have seen is to not use DLNA, but instead use a custom front end and back end to protect the content. This isn't a horrible option if the front end is available for ALL your devices.

Up until now, neither option was a good fit for me.   So, about a month ago I figured out TWO ways to attack this problem.      The first option I liked the best; but unfortunately, after doing some research and playing with a couple of DLNA clients, I discovered not all DLNA players support searching.     So my first idea, using the search system to allow input of a password, was nixed.   I needed whatever I was going to do to be fully universal, and several of my devices just didn't support searching.

So my second "ingenious" idea became the actual final working implementation.    Basically, I add a new "virtual folder" (a DLNA Container Object) called "Password" to the top level folders.     The rest of the folders remain the same.      In the Password folder I put 10 folders, labeled 0 through 9; in each of those folders are another 10 folders (again 0 through 9), and so on until you have the number of digits you need.    So if you had a password of 1234, you would navigate to "Password", "1", "2", "3", "4" and then use the "Back" or "Top" button to return to the top level.     The DLNA server sees that you have hit 4 digits, and saves that as an entered password.     Any content that is marked with that password then actually shows up in the list of available media.  Pretty simple, and a very handy method to allow password entry!     There is no reason I couldn't do 0-9 and a-z, other than it makes navigation a lot more tedious when you have to scroll through 36 (or more) different options rather than a simple 10 items.
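
To make the bookkeeping concrete, here is a little illustrative sketch of the idea (written in JavaScript purely for readability -- the real implementation is C inside minidlna, and all the names below are mine):

// Tracks digit-folder navigation per client and decides what content is visible.
var PASSWORD_LENGTH = 4;               // mirrors the password_length config option
var sessions = {};                     // clientId -> { digits: '', passwords: [] }

// Called each time a client browses into one of the 0-9 "digit" folders.
function browseDigitFolder(clientId, digit) {
    var s = sessions[clientId] || (sessions[clientId] = { digits: '', passwords: [] });
    s.digits += String(digit);
    if (s.digits.length === PASSWORD_LENGTH) {
        if (/^0+$/.test(s.digits)) {
            s.passwords = [];          // all zeros clears every password entered this session
        } else {
            s.passwords.push(s.digits);
        }
        s.digits = '';
    }
}

// Content tagged with a password only shows up once that password has been entered.
function isVisible(clientId, folderPassword) {
    if (!folderPassword) { return true; }                 // unprotected folder
    var s = sessions[clientId];
    return !!s && s.passwords.indexOf(folderPassword) !== -1;
}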

I have released my modified source code to minidlna on my own github.com account http://github.com/nathanaela/minidlna -- I will be sending a patch to the author, Justin; but there are no guarantees that these changes will be accepted into the mainline, as I do have a couple of potential issues outlined below.

A couple notes:
1. You can enter as many passwords as you want; each time you enter a new password, it is remembered for your entire session, so you can actually have multiple passwords active for different content.   Entering all zeros as a password will clear all passwords you have entered during that session.   When a DLNA client disconnects from the server, the server also forgets any passwords that client entered for that session.   Each client has its own list of passwords; entering a password on Device 1 does not make the content show up on any other device.

2. New minidlna.conf configuration option:
- password_length = 1-10 (defaults to 4); this allows you to set how long you want your passwords to be.

3. New file for password configuration
You need to create a .password file in any directory you want protected.  That directory and ALL sub-directories under it will then be protected.    This is a simple text file.   At this point it is NOT encrypted or hashed; it is raw text, so "technically" this is not very secure.  However, if someone already has access to the folder to read your .password file, then they can already read the media in the folder -- so you already have an insecure setup.      I would recommend you change the permissions on this file to only allow the minidlna server to read it, for better security.    Again, the only content in the .password file is the password you want to use  (ex: echo 1111>.password would create a .password file with 1111 as the password for accessing this folder and all sub-folders).   You can also add another .password file to a sub-folder of an already password-protected folder, and then that sub-folder (and any of its sub-folders) will use the new password.

Gotchas:
1. Changing a password in a .password file currently requires you to rebuild the database, as minidlna has to do a full scan to pick up the new password.

2. If you attempt to use a password of a different length (e.g. 123 or 12345) while the password length is set to 4, you won't be able to enter it, as the required length is 4.

Thanks to Justin Maggard for all his hard work on minidlna, without it I wouldn't have had a base to implement the password code.


Data Compression Revisited

Update: There is a relevant update for this in a new post.

Over a year ago, one of my co-workers benchmarked several compression libraries, and since then we have been using a library called jslzjb by Bear.  This is in an unreleased product, and we currently use it almost constantly on a wide variety of devices and browsers to reduce the amount of data going over websockets.

Interestingly enough, a couple of months ago Colt "mainroach" McAnlis wrote a very interesting blog post, "State Of Web Compression", where he ran quite a few tests on different compression methods.  In that post he referenced compressjs by Dr. C. Scott Ananian (CSA).   CompressJS is a fairly comprehensive JavaScript compression test library with several implementations of different JavaScript compression libraries (and results). So I made a note in our project tracker that someone on our team should, at some point, check out CSA's version of LZJB vs the original we are running, since LZJB was still showing up as the fastest of the bunch in his tests.

So mid-last week we discovered a bug caused by the compression library: if we turned it off, everything worked; if it was on, it caused issues with apparently only a couple of characters.    I was tasked with the bug report, so I also took the opportunity to check out the newer rewrite of LZJB, since I was already working in that area of the system and CSA's version might fix everything and be a fairly drop-in replacement.

But before we did so, we needed to see the speed increase or hit we would take.  To make real-world tests, I took Chrome, connected it to my local instance of our product, turned off compression, and then saved a couple of "HAR with content" captures from the Network tab -> WebSockets, generating about 32 megs of real transmission data while doing a variety of things in our system.   Then I wrote a simple JS program to extract all the actual data packets from the HAR file into separate files, created a couple of additional files with the characters that were actually causing the problems, and then added CSA's test data, which altogether made over 37 megs of test data across 526 different files.

From there, I wrote a very simplistic Node test framework that reads every packet into memory, runs each one through a compression function (using the nanosecond-precision timer), and then runs the result through the decompression function with the same timing.  Then, just to verify, it compares the output buffer with the original to confirm the compression-decompression round trip worked, and records the stats.  (For consistency, I load ALL the data first, run the tests on ONE compression library, and exit with the results for that library -- this should keep the memory footprint the same for every library and eliminate any GC hits beyond what the library itself causes.)
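
Conceptually the harness was no more complicated than the sketch below; the names and structure are illustrative rather than the actual test code, and it assumes the compress/decompress functions take and return Buffers.

// Minimal sketch of the benchmark harness described above.
var fs = require('fs');

function bench(name, compress, decompress, files) {
    // Load ALL the data up front so every library sees the same memory footprint.
    var originals = files.map(function (file) { return fs.readFileSync(file); });

    var compressNs = 0, decompressNs = 0, inBytes = 0, outBytes = 0, failed = 0;

    originals.forEach(function (original, i) {
        inBytes += original.length;

        var t = process.hrtime();                 // nanosecond-precision timer
        var packed = compress(original);
        var d = process.hrtime(t);
        compressNs += d[0] * 1e9 + d[1];
        outBytes += packed.length;

        t = process.hrtime();
        var restored = decompress(packed);
        d = process.hrtime(t);
        decompressNs += d[0] * 1e9 + d[1];

        if (!original.equals(restored)) {         // verify the round trip byte-for-byte
            failed++;
            console.log('FAILED:', files[i]);
        }
    });

    console.log(name + ':',
        (compressNs / 1e9).toFixed(3) + 's compress,',
        (decompressNs / 1e9).toFixed(3) + 's decompress,',
        outBytes + ' / ' + inBytes + ' bytes,',
        failed + ' failures');
}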

My first attempt failed, as Node reads things in as Buffers, and Bear's LZJB only works with Arrays and Strings.   After adding a quick toString() (outside of the timing), I had my first timings -- and a slew of failed files.  37,345,189 bytes of data; compression was ~2.75 seconds, decompression was ~1.29 seconds.  Not bad speed-wise, but a tad over 50 of the files failed, and that isn't good.

Next up was grabbing CSA's version, copying my test suite straight over to it, and running it.  It failed -- it didn't like strings; it wanted Buffers or TypedArrays.   So I removed the ".toString()", re-ran it, and got ~2.78 / ~0.63, with no failures.  Not too bad: a tiny hit on compression, but twice as fast at decompression.    But I knew this test wasn't fair; Bear's does a String -> Array conversion that CSA's doesn't do, and I knew that String -> Array converter is one of the slowest parts (from profiling it a while back).    So to make the test a bit fairer, I removed the .toString() from Bear's harness and modified the code slightly so that it would treat Buffers like Arrays.   My new output was ~0.62 / ~1.25, but the same 50-odd files failed.

I'm like, WOW: ~0.62 seconds compression.  We now know the conversion hit was really killing us, so allowing it to use Buffers makes it considerably faster -- a nice win.  But 50 files failed, which is not good at all, and the decompression was still twice as slow as CSA's version.    So at this point, since I barely understood the routine but I do understand optimization, I decided to attempt to speed something up rather than "fix" something I didn't fully understand.

I grabbed CSA's version, duplicated the "compress" routine, and started messing with it.  I saw a lot of what I considered low-hanging fruit, and a couple of hours later my "new" version of CSA's was clocking in at ~2.47 vs the original ~2.78; much better, but still a far cry from the ~0.62 of Bear's compress.  Disappointing, to say the least.

However, I now had a much better grasp of how the routine works, and realized that I would have to rewrite CSA's version to get any major speed-up.    So I decided to go back to Bear's routine and see if I could fix it.   By looking at the original C source, I could see a couple of issues in Bear's conversion and correct them, and now ALL my files were passing with Bear's routine.   I also noticed that Bear's version is based on an older LZJB version, so I upgraded the routine to use the newer hash (and made a couple of other tweaks).  Then, to make an already long story much shorter, I spent the time to figure out why CSA's version of the decompress is so blasted fast and applied those techniques to Bear's decompression routine.

So, at the end of a couple of days, here are the results using the ENWIKI8 file (100,000,000 bytes):

                   Compression (s)   Decompression (s)   Compressed Size (bytes)
Bears' Original*   OUT OF MEMORY DURING COMPRESSION
Bears' Modified    3.092157          0.556772            68551699
CSAs' Original     10.96466          1.975028            67820737
CSAs' Modified     9.848296          1.975028            67820737


I then took the ENWIKI8 file and split it basically in half (so at least I could get a benchmark with Bears' Original):

                   Compression (s)   Decompression (s)   Compressed Size (bytes)   Original Size (bytes)
Bears' Original*   1.963242          1.904763            38204603                  50,000,896
Bears' Modified    1.849097          0.280587            34332875                  50,000,896
CSAs' Original     5.875302          0.992718            33966678                  50,000,896
CSAs' Modified     5.285134          0.992718            33966678                  50,000,896


All 526 Files:

526 Files (1k to 917k)   Compression (s)   Decompression (s)   Compressed Total (bytes)   Original Total (bytes)
Bears' Original*         0.626847          1.258599            17250904                   37,345,189
Bears' Modified          0.486729          0.177391            15890401                   37,345,189
CSAs' Original           2.782399          0.636517            15709537                   37,345,189
CSAs' Modified           2.413777          0.636517            15709537                   37,345,189

* - Not technically Bears' original; this version supports Buffers and has the bug fix that allows it to compress all the files properly; no other bug fixes, enhancements or changes.

By using the new hash, our compression ratio became similar to CSA's (which also uses the new hash).  In addition, Bears' Modified now uses the same decompression idea as CSA's, so it is now blazing fast at decompression.

So, at the end of a couple of days' work, we went from ~2.75 / ~1.29 down to ~0.49 / ~0.18; a major win!

The funny thing is this didn't actually fix the original bug that we uncovered; it did, however, fix some other bugs we had patched around in our code (so now we can remove those patches).    The original bug was actually caused by converting from an Array back to a String on the decompression side.   On the conversion from a string to an array, we convert UCS-2/UTF-16 into UTF-8 encoded bytes.   However, Bear's code never had any code to convert from UTF-8 back into UCS-2/UTF-16, which is what JavaScript expects.   So all my tests passed if you went Buffer -> Compress -> Decompress -> Buffer.    But the minute you went String -> Compress -> Decompress -> String, your data would be wrong if it had any multi-byte UTF-8 encoded characters.   By adding a UTF-8 -> UCS-2/UTF-16 conversion to the toString path on the decompression side, we now have flawless (and much faster) compression and decompression.
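
Here is the gist of that string-side bug, illustrated with Node's built-in conversions (the library does the encoding by hand; Buffer is only used here to demonstrate the round trip):

// A JavaScript string is UCS-2/UTF-16 internally.
var text = 'Ω and 漢 are multi-byte in UTF-8';

// String -> UTF-8 bytes (this is what the compression side was already doing).
var utf8Bytes = Buffer.from(text, 'utf8');

// ...compress(utf8Bytes) ... decompress(...) hands the same bytes back...

// What the old decompression path effectively did: treat each UTF-8 byte as a character.
var broken = String.fromCharCode.apply(null, utf8Bytes);

// The missing step: decode the UTF-8 bytes back into UCS-2/UTF-16.
var fixed = utf8Bytes.toString('utf8');

console.log(broken === text);   // false -- mojibake for any non-ASCII character
console.log(fixed === text);    // true  -- the round trip is now lossless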

Final Stats:

                   Megabytes Per Second              Percentage of Compression
                   50m        100m       37m         50m      100m     37m
Bears' Original    12.92679   N/A        19.80708    23.6%    N/A      53.8%
Bears' Modified    23.47808   27.4053    56.23253    31.3%    31.4%    57.4%
CSAs' Original     7.280249   7.728159   10.92311    32.1%    32.2%    57.9%
CSAs' Modified     7.96465    8.457858   12.24314    32.1%    32.2%    57.9%

So, as of this moment, LZJB is still winning on speed, and it is now considerably faster than Colt's or CSA's website numbers show...  CSA's still has slightly better compression (even at its default level 1), but I would much, much rather have the extra 46 seconds than the minute 0.5% reduction in file size on our real data.

All these tests and numbers were done using Node.js (10.2) -- running a set of tests in the browser (not as comprehensive, but with several sizes of files) showed similarly improved speed/compression results under Chrome and Firefox.

I want to thank Colt McAnlis for posting his article -- it led me to Dr. Ananian's compressjs and started the ball rolling on what has turned into a copy of LZJB that runs 22% faster during compression and 86% faster during decompression, with an 8% reduction in payload size!   Now our data moves faster to all of our devices, meaning the customer gets their screens up sooner, and that is why Performance Matters!

Updated Compression: LZJBn.js

Update: There is a relevant update for this in a new post.

Transforming JavaScript JSON

Colt McAnlis posted a very interesting blog post (http://mainroach.blogspot.com/2013/08/json-compression-transpose-binary.html) this evening on using Transposing to reduce the JSON data size; his post was right on the money.

We have been using a similar technique for a couple years now.  (Although, we use a different compression method over websocket as gzip is too expensive in pure JavaScript).

However, one thing that I commented on is that he stopped at step one; step two gives even better results -- it actually improves the compression.

I created my own "original dataset" to show this example.   The dataset has spaces and line breaks here in the blog for formatting purposes, to make it easier to read; but all my numbers exclude spaces and returns, as raw JSON wouldn't have those in it.

The original Data (265 Characters):

[{Id: 1, Name: 'Nathan', Address: 'Somewhere', Country: 'USA', City:'Here', State:'OK',Zip:'55555'},
 {Id: 2, Name: 'Colt', Address: 'Elsewhere', Country: 'USA', City: 'There', State: 'CA',Zip:'44444'},
 {Id: 3, Name: 'You', Address: 'Not Sure', Country: 'USA', City: 'Where', State: 'AZ', Zip:'33333'}]

Colt's Transposing (211 Characters):
{'id':[1,2,3],
'Name':['Nathan','Colt','You'],
'Address':['Somewhere','Elsewhere','Not Sure'],
'Country':['USA','USA','USA'],
'City':['Here','There','Where'],
'State':['OK','CA','AZ'],
'Zip': ['55555','44444','33333']}

We transpose it into basically a JSON CSV (206 Characters):
[['Id','Name','Address','Country','City','State','Zip'],
 [1,'Nathan','Somewhere','USA','Here','OK','55555'],
 [2,'Colt','Elsewhere','USA','There','CA','44444'],
 [3,'You','Not Sure','USA','Where','AZ','33333']]

Now for every additional row of data we add with this dataset you add:

Original: 48 Characters of Static unchanging field definitions. (Ouch!)
Colt's: 7 Characters
Ours: 9 Characters

So how do we end up with better compression when, after a dozen or so records, our raw size is actually larger than Colt's?    Well, we only use [] and commas.   He has added more symbols to his data stream: in addition to the [] and commas, he has the {} and the colons.    By having more redundancy in our stream, we compress better.

Wait, there is another easy saving if you think about the data...    Why send the header row at all?  If you already know the layout of what you are requesting, you can eliminate the header row entirely, which shrinks your "raw" data down another 55 characters -- meaning we start out at a small 151 characters.

So if you are dealing with straight raw characters, Colt's method actually is smaller (after about 30 rows).  However, if you are going to compress the stream, the additional redundancy in our transformation appears to be better suited to producing smaller compressed files.
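
If you want to play with this transform yourself, the row/column flip is only a few lines -- a quick sketch that assumes every record has the same keys in the same order:

// Transform an array of objects into a header row plus value rows (a "JSON CSV").
function toRows(records) {
    var header = Object.keys(records[0]);
    var rows = records.map(function (record) {
        return header.map(function (key) { return record[key]; });
    });
    return [header].concat(rows);   // drop the header row if both sides already know the layout
}

var packed = toRows([
    { Id: 1, Name: 'Nathan', City: 'Here' },
    { Id: 2, Name: 'Colt',   City: 'There' }
]);

console.log(JSON.stringify(packed));
// [["Id","Name","City"],[1,"Nathan","Here"],[2,"Colt","There"]]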

Measure everything, and think about how you actually use your data; how you transform and send your data can make all the difference in how fast your app actually responds to requests -- because Performance Matters.