Category Archives: Tips

Optimization Gotchas for for/i and forEach

So I mentioned something in the interview that Alex of NativeScripting.com did with me, and someone asked about it in the comments, so I decided to write a blog article on this specific optimization tip.

I am going to code this for a browser rather than in NativeScript because, well, JS in a browser makes it much easier to show the memory issues: you can plop the code into a new browser tab and step through it instantly. 🙂

<html>
<body><div id="stacklayout"></div></body>
<script type="application/javascript">

// Pretend Color class to be similar to NativeScript
class Color {
    constructor(val) { this._color = val; }
    toString() { return this._color; }
}

const colors = ["#ff0000", "#00ff00", "#0000ff"];
for (let i=0;i<colors.length;i++) {
    const labels=["Wow","Awesome","Interview", "Alex", "Created!"];
    for (let j=0;j<labels.length;j++) {
        const clr = new Color(colors[i]);
        const txt = document.createElement("div");
        txt.innerText = labels[j];
        txt.style.backgroundColor = clr.toString();
        const parent = document.getElementById("stacklayout");
        parent.appendChild(txt);
    }
}
</script>
</html>

And you should see something like this:

We are repeating the labels for each of the primary colors...

Or you might be used to doing something like this instead:

<html>
<body><div id="stacklayout"></div></body>
<script type="application/javascript">

class Color {
  constructor(val) { this._color = val; }
  toString() { return this._color; }
}

["#ff0000", "#00ff00", "#0000ff"].forEach((clr) => {
  ["Wow","Awesome","Interview", "Alex", "Created!"].forEach((word) => {
     const txtColor = new Color(clr);
     const txt = document.createElement("div");
     txt.innerText = word;
     txt.style.backgroundColor = txtColor.toString();
     const parent = document.getElementById("stacklayout");
     parent.appendChild(txt);
  });
});

</script>
</html>

Both have several issues; however, the second one can be vastly worse than the first depending on how many iterations are involved. So let's walk through the issues.

So the Color class is fine; we are only using it to show object creation... Just ignore it for now.

const colors = ["#ff0000", "#00ff00", "#0000ff"];
for (let i=0;i<colors.length;i++) {

These lines are fine: we allocate colors, then loop through it. So far so good. Let's proceed; what about:

const labels=["Wow","Awesome","Interview", "Alex", "Created!"];

Well, here is the first issue. Fortunately for us, the V8 JS engine actually pre-allocates the 5 different strings "Wow" through "Created!" when it parses the JavaScript; however, each pass through the loop creates a brand new labels array and then links those strings into it. In this case it is small overhead, but over the 3 iterations that is still 2 extra sets of allocations and all new GC entries that have to be cleaned up after the loop completes. If you were to move this single line of code outside the loop, your code runs the same, but with less memory usage and less processing.

const clr = new Color(colors[i]);

What about this line? Well, in this case it creates a brand new Color object all 15 times. You can easily make it run only 3 times just by moving it out of the inner for (j) loop into the for (i) loop. In this example the Color class is very lightweight, but many classes you create inside loops will be very heavy, with lots of allocations happening when you construct them. So pay attention: 3 allocations vs 15 allocations, all from putting the line in the wrong spot...

Continuing on, most of the rest of the lines have to be inside the loop; but I tossed another item into the loop that wasn't needed... This line very easily ends up inside the loop by accident, because that is the point where you needed it.

const parent = document.getElementById("stacklayout");

Yep, this has to do an expensive lookup in the DOM to find an element that never changes. It also has to allocate it to that "parent" variable 15 times. If you moved it before/outside of both loops, you would have a single lookup and a single allocation...
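For reference, here is one way the same example could look with all three hoists applied (a sketch of the rearrangement, using the same Color class and markup as above):

const colors = ["#ff0000", "#00ff00", "#0000ff"];
const labels = ["Wow", "Awesome", "Interview", "Alex", "Created!"]; // allocated once
const parent = document.getElementById("stacklayout");              // one DOM lookup
for (let i = 0; i < colors.length; i++) {
    const clr = new Color(colors[i]);                                // 3 allocations instead of 15
    for (let j = 0; j < labels.length; j++) {
        const txt = document.createElement("div");
        txt.innerText = labels[j];
        txt.style.backgroundColor = clr.toString();
        parent.appendChild(txt);
    }
}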

---

Finally, let's look at the last example; it suffers from the same issues. However, in this one I used Array.forEach rather than the simple for (i) loop. Why do I say it can be worse, then? Well, in small numbers the difference is actually pretty tiny, and the extra memory it uses over a for (i) loop is also reasonable. But in larger loops (especially loops inside of loops inside of loops), the difference can end up being dramatically larger.

You're creating and calling functions, and all those function calls add scopes; the engine has to set up call stacks for each and every iteration, do additional error checking, and just plain run a lot more code behind the scenes. So what was a simple for (i) loop in the first example has now become 15 EXTRA function calls, with all the extra CPU and memory overhead that entails. Again, in small chunks they are perfectly fine; but if you want performance, every ms and GC adds up, and they can add up very quickly when dealing with the Array helper functions that use callbacks...

As a better way to phrase this: imagine you have a for (i) loop that takes 5ms and a forEach loop that takes 25ms to finish. Both are still darn fast, hard to even benchmark. But now embed that work in another forEach so the cost multiplies: 25 x 25 = 625ms, more than half a second. If you instead use for (i) loops, each level only costs 5, so 5 x 5 = 25ms. Both grow multiplicatively, but at 25ms vs 625ms the difference starts becoming much clearer. Now add a third loop: 5 x 5 x 5 = 125ms, while 25 x 25 x 25 = 15,625ms (or > 15 seconds!). In small chunks the time and memory used by a forEach are minuscule, but depending on how many times it is called, it can add up very quickly to a major difference.

Please note this goes for all the nice Array helper functions: .map, .reduce, .filter, etc. Every one of them that uses a callback has somewhere between 2 and 50 times as much overhead as a simple for (i) loop. Remember, despite this overhead, these functions are extremely fast. So don't be afraid to use them; just be more wary of using them in deeply nested code that can easily be called in loops, when the datasets they are working against are also large.
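If you want to feel the difference yourself, a rough micro-benchmark sketch like the following can be pasted into a browser console (the numbers vary wildly per engine and device, so treat them as relative only):

const data = [];
for (let n = 0; n < 1000; n++) { data.push(n); }
let total = 0;

console.time("nested for (i)");
for (let a = 0; a < data.length; a++) {
    for (let b = 0; b < data.length; b++) {
        total += data[a] + data[b];
    }
}
console.timeEnd("nested for (i)");

console.time("nested forEach");
data.forEach((x) => {
    data.forEach((y) => {
        total += x + y;
    });
});
console.timeEnd("nested forEach");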

The more items you can move outside of your loop, the better off you are. This includes any expensive calculations! You might be able to do 99% of an expensive calculation outside of the loop, and then finish off the last part of the calculation inside the loop...
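For example (a sketch with made-up lookupTaxRate/getBaseFee helpers and a cart object), the expensive loop-invariant pieces are computed once and only the cheap per-item math stays inside:

// Hypothetical example: taxRate and baseFee do not change per item,
// so compute them once instead of on every iteration.
const taxRate = lookupTaxRate(customer.region);   // expensive, loop-invariant
const baseFee = getBaseFee(cart);                 // expensive, loop-invariant
for (let i = 0; i < cart.items.length; i++) {
    cart.items[i].total = cart.items[i].price * taxRate + baseFee;  // cheap per-item math
}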

I hope this answers the question about loops and performance. Every loop is a golden opportunity to greatly enhance your app.

GitLab Verified Email account issue

I run a privately hosted GitLab instance for a bazillion projects and have a lot of users from around the world. Unfortunately, I have run into a long-standing bug that has never been fixed in many years. It only crops up occasionally now, but it is still a major pain for everyone involved. There are many, many reports on this in the GitLab issue database.

Basically, the end user never sees a "verify this email" email, and without that email they cannot log in. Catch-22. In addition, the "Confirm" button in the administrator account also fails to work in this situation, so neither the user nor the admin can do anything to fix it. Which basically means I have a dead account that no one can do anything with. Until today.

These notes are so that I can easily fix it in the future.

The first thing you do is go to https://yourinstance/users/confirmation/new and request a new verification email. This way the token you will grab later is still valid (tokens expire after a number of hours).

Next, log in to the Docker instance; if you aren't running Docker, then skip this step:

docker exec -it gitlab bash

Then, once inside the Docker bash shell, you launch GitLab's interactive shell -- technically you can just use gitlab-rails runner "command; command; command", but the Docker instance I run doesn't seem to have it present -- so we will launch the full GitLab Rails console.

bundle exec rails console -e production

Once in the Rails console, we search for the user:

u = User.find_by_username('USERNAME')
(or u = User.find_by_any_email("emailaddress"))

Once we have the u variable, we can do a huge number of things with it; but what we really need to do is output the current confirmation token.

u.confirmation_token

And it will print a token that looks like this: "s2vzxVeWYhkzzDsBN8fC"

Type exit to leave the GitLab console, then exit again to leave the Docker instance. Go to your browser and use a modified version of this URL:

https://yourinstance/users/confirmation?confirmation_token=s2vzxVeWYhkzzDsBN8fC

Let your browser load it, and if you did everything correctly you should see GitLab say it verified your email address and that you can now log in.

Please note there are a lot of features/functions/data hanging off the user "u" variable (over 3,000); however, after spending over an hour trying confirmation, verification and other related features, everything "looked" correct on the user record, but the web admin page still showed the account as unverified. I just realized there might be some sort of user save/record call which would have persisted my changes, but I didn't think about it until I was writing this blog.

However, since I could not get any of the other u.* functions to fix the record, I finally figured that the simplest way to fix this was to have GitLab run its OWN verification code: all I needed to do was take the token it generated and feed it to its verification page myself, making GitLab believe the end user got the email and verified it.

VMWare - Shrinking disk size Macintosh/Linux

If you have a Macintosh- or Linux-based image, the tooling to shrink the hard drives from the GUI is basically broken (and/or doesn't work). So to shrink these types of containers, the easiest method is to force the entire empty area to be zeros, then defragment, and finally shrink.

The first thing we need to do is zero all blocks that are no longer used. Please note this will make your container grow to the maximum size it is allowed to reach, so you might need a lot of disk space!

cat /dev/zero > zerofile; rm zerofile

Next you need to shut down the guest; on Linux that might be shutdown -h now, or clicking a couple of buttons to shut down cleanly. After the VM has been shut down you need to run the following commands from the command line.

vmware-vdiskmanager -d <diskimage.vmdk>

This command will defragment the image, moving all the data to the front of the image and all the empty blocks to the end. You obviously need to point <diskimage.vmdk> at wherever the vmdk you are working with is located, and if vmware-vdiskmanager is not on your path you will have to call it from wherever it resides. Finally, you need to call:

vmware-vdiskmanager -k <diskimage.vmdk>

This will shrink the image, and if you are lucky your 280 GB image will shrink to 80 GB.

NativeScript Fonter revisited again (using DuoTone FontAwesome)

In a prior post I discussed using fonts in NativeScript. This morning I saw an issue in the NativeScript github repo about how to use the Font Awesome Duotone fonts.

Since I love puzzles, I know the NativeScript font system fairly well, and I also own a subscription to the commercial Font Awesome -- I figured I could easily see if I could make it work. Immediately, I had an idea of how to solve the issue.

So I downloaded both the WebFonts and the Desktop fonts. Interestingly enough, the Desktop fonts actually come with a warning that links to a post about Duotone fonts and their issues on the desktop... After I read that post, it basically confirmed my understanding of how this font works and exactly how to make it work inside NativeScript.

So first things first, you need to update your app.css file; you want to add something like this to it:

.fad {
   font-family: fa-duotone-900, Font Awesome 5 Duotone;
   font-weight: 900;
}

.fad-op {
   opacity: .4;
}

Why TWO names for font-family?

If you recall from my prior fonter post, you want to create the rule using the name of the actual font file without the .TTF or .OTF extension. The font folder looks like this:

The last file in the example fonts folder is fa-duotone-900.ttf, straight from the web download zip file that I grabbed this morning.

So you see the rule is:

   font-family: fa-duotone-900, Font Awesome 5 Duotone;

This allows NativeScript to find the actual file name; in a lot of cases it can use just the filename to work its magic. However, on iOS specifically, we want to include the actual internal font name. Double-clicking on the font gives us this image under Linux.

Under Macintosh the title bar shows the exact same information. That is why I then typed "Font Awesome 5 Duotone" as the second part of the rule -- this must EXACTLY match. The two names cover all the cases we need: the font file name to load it, and the actual internal font name to reference it later.

The second CSS rule we just added was fad-op; it is there so that we can apply some opacity to whichever section of the font we want.

We can do no opacity, solid/solid, and it would look like this:
Notice both parts are solid.

Now if we add opacity to section two, you can see the beak and legs on the bird are slightly transparent, and the acorn body is partially transparent.

Finally, if we reverse it so that section one has the opacity, it looks like this:

You can also apply any opacity you want; 0.10 looks like this:

And you can apply opacity to both sections and it looks like this:

All of them are using the EXACT same colors; just a difference in opacity. The sky is the limit; you are free to color and apply opacity on whatever amounts to each of the two sections.

So how do we do this magic?

Well, technically the hard part is already done; that awesome CSS rule and dropping the font file into the app/fonts folder does 99% of the work for us. The rest is simple stuff.

So the next thing you need to know is: which duotone icons do you want to use?

First we need to know what is available as duotone, so we go to the Font Awesome icon page and select the duotone filter. Once we have that, we can choose something; let's look at the crow.

The crow at the top of the page lists this information:

Now, at this point we can't really use "fa-crow" because NativeScript doesn't know what that is; but the two pieces of information we do need are f520 and 10f520 -- these are the Unicode values for the two sections of the icon we want.

So our xml needs to look like this:
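Something along these lines (a minimal sketch; the GridLayout wrapper and the blue/red color classes are assumptions on my part, and the f520/10f520 codes are the crow values from above):

<GridLayout rows="auto" columns="auto">
    <Label col="0" text="&#xf520;" class="fad blue"/>
    <Label col="0" text="&#x10f520;" class="fad red fad-op"/>
</GridLayout>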

 
  

This gives us the two characters in the same location on the screen. The first character is using the "fad" css rule; along with the text color blue. The second character is also using the "fad" css rule, and it adds the red text color and the "fad-op" rule for opacity.

Which gives us the duotone bird:

The key item is that you want the characters to print over each other; GridLayout by default allows you to put multiple items in the same row and column. Since the characters are the same size, they will automatically line up and work.

Angular Flavor Issue

Angular's HTML parser has an issue with characters in the upper Unicode range -- it is fine with any Unicode value of 65535 and below. If you look at the first character of each of these duotone pairs, it is < 0xffff (65535). However, the second character starts in the 0x100000 (1,048,576) range, which is well above 0xffff. Unfortunately this means that in Angular, even though you pass in 0x10f520, you actually get 0xf520 assigned. If you are paying really close attention, that is the same value as the very first code. So in Angular you get two labels with the exact same Unicode value. So how do we fix this?

Well, ultimately the underlying Angular parser should be fixed -- so if one of you wants to tackle this, track down where in the parser it breaks and push a PR; that would make a lot of people in the Angular community happy! However, since this is not something I am going to tackle, and I still want you to be able to use Duotone fonts, I will present an alternate way to work around this in Angular.

In Angular we can do it one of two ways: automagically, or by direct assignment. I prefer the automagic way, so this is the HTML that we have:

<Label col="1" text="&#xf6ae;" class="larger fad "></Label>
<Label col="1" text="&#x10f6ae;" class="larger fad blue fad-op" (loaded)="fixAngularParsingIssue($event)"></Label>

As you can see, it is nearly identical to the NS-Core XML above, except I added the (loaded) event. This event fires once in the lifecycle of the Label and runs my handy-dandy "fixAngularParsingIssue" function.

fixAngularParsingIssue(args: any): void {
  let field = args.object;
  let val = field.text.codePointAt(0);

  // Check to see if this is already a valid unicode string above 0xffff.
  if (val > 65535) { return; }

  // We found a character we need to change...
  val = val | 0x100000;
  field.text = String.fromCodePoint(val);
}

The function is straightforward: we grab the current text's code point and verify it is actually still below the value we want (the parser could get fixed in a future version). Then we OR in our missing 0x100000 and re-assign the text. This function could instead be written to take the actual value you want to assign rather than computing it, but I prefer simplicity, as then I can just attach this function to all labels that need it and not think about it.

Angular Directive

I suggested in the issue that someone create a directive, and Pandishpan spent the time and converted my simple code into an Angular directive, making it even easier to use! Thank you for your awesome work! So now all you have to do is apply the "labelDuoTone" directive to the HTML, and it handles the rest of the work for you.
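If you would rather roll your own than grab it from the issue, the directive boils down to the same fix-up logic; a minimal sketch could look like this (the class name and the ngOnInit timing are my own assumptions, not necessarily what Pandishpan's version does):

import { Directive, ElementRef, OnInit } from "@angular/core";

@Directive({ selector: "[labelDuoTone]" })
export class LabelDuoToneDirective implements OnInit {
    constructor(private el: ElementRef) { }

    ngOnInit(): void {
        const label = this.el.nativeElement;                 // the underlying NativeScript Label
        const val = label.text.codePointAt(0);
        if (val > 65535) { return; }                         // already a valid upper-plane code point
        label.text = String.fromCodePoint(val | 0x100000);   // restore the stripped 0x100000
    }
}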

You can grab the code for the angular directive in the github issue here.

NativeScript Core demo application

You can download the NativeScript-Core fonter application from the github repo, https://github.com/NathanaelA/fonter

Please note the fa-duotone-900.ttf font is NOT shipped with the demo; you must download it yourself and copy it into the app/fonts folder.

Android (on the left) and iOS (on the right) look like this:

Android
iOS


PSA: NativeScript 6.2.0 - Is a Breaking Change release!!!

Upgrading to 6.2.1, the latest NativeScript-Angular (if using Angular), and the latest ProPlugins should fix MOST of the issues that 6.2.0 had. However, I still highly recommend the webpack rewrite rules at the bottom of the post to solve ALL the issues.


NativeScript follows the SemVer process, so normally they would have bumped the major version on a breaking change. This break is mostly an unintended side effect of a couple of things, and unfortunately they didn't realize it broke anything, so we now have a 6.2 that breaks things... Oh well, mistakes happen, life will go on. So this post is to make you aware of what you want to change when you upgrade to NS 6.2.0...

To Upgrade or not to Upgrade, that is the question!

I do want to make sure I am VERY clear -- I do NOT think this was a bad change, nor do I think NS 6.2 is a bad release! My apps will be upgraded to it. It offers a lot of good features and fixes. It is just that being aware of the issues will help you avoid them when you do upgrade.

Let's dig into the issue.

So let's get some details. About 3 major versions ago, in 2017, my business partner and buddy Nathan Walker proposed in issue 4041 moving the NativeScript tns-core-modules into their own @nativescript namespace, along with other NativeScript pieces being put into that parent namespace...

So based on this you might guess where the source of the issue lies now.

If you read the above issue, you will notice that Peter (another awesome developer) and I both recommended that they do this in the 6.0 release. I personally felt a hard break would be the safest course of action: just eliminate tns-core-modules totally. Since they were already totally breaking the build tools, and also breaking other plugin things in v6.0, they might as well have added this to the pile so we dealt with all the plugin-breaking changes at one time. Unfortunately, our advice wasn't taken (or it was missed), and so here we are with a very interesting breaking change that I honestly did NOT foresee when I recommended they do it in the initial 6.0.

There are three different issues here that stem from the same cause:

First, if you are using NativeScript-Angular and use any third-party plugins, you might find that they are failing because NativeScript-Angular was moved into the @nativescript/angular namespace. The compatibility imports for the old namespace are apparently missing a couple of redirects, so plugins trying to load them fail. I'm sure these plugins will be updated shortly, but right now they are broken. (This has been fixed in the latest version of NativeScript-Angular.)

I do NOT know yet if NS-Vue, NS-Svelte, or React-NativeScript have any issues; but odds are fairly low, because they are third party and were not moved into the @nativescript namespace, so apps depending on them are probably NOT affected directly (other than by any other plugins).

Second, because of the design of the tns-core-modules redirect, the redirects waste a bit of running memory. If your app or plugin imports tns-core-modules/BLAH (which all apps do before 6.2), webpack will give you Object #1 (tns-core-modules-blah); however, when a plugin or the core modules loads @nativescript/core/BLAH you get Object #2 (nativescript-core-blah) -- they LOOK the same, but they are different objects, each using memory. This increases the parsing and memory that V8 has to deal with, because you basically have more than a single copy of the framework in memory. Some of this is mitigated by how objects are tracked, but some of it isn't. So this creates some other unintended side effects: a bit more GC pressure, more RAM usage, and the nasty pitfall we will discuss later...

Finally, if you are using any plugin that does any sort of class monkey patching, it will now fail -- in this category are things like my very own nativescript-platform-css and nativescript-orientation. The reason is the allocation issue above: Object #1 != Object #2, so monkey patching Object #2 doesn't monkey patch Object #1. This is a corner case, as it requires the plugin to actually replace the class (i.e. the top-level object). So any plugins that rely on nativescript-global-events, which extends and enhances the Page class, were affected.
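To make the monkey-patching failure concrete, here is a tiny sketch (the ui/page path is just for illustration) of the kind of class replacement that silently stops working:

// Both of these LOOK like the same module, but the tns-core-modules one is the copied Object #2.
const tnsPageModule  = require("tns-core-modules/ui/page");    // the compatibility copy
const corePageModule = require("@nativescript/core/ui/page");  // the original the framework actually runs

// A plugin that replaces the exported class only replaces it on the copy:
class MyPage extends tnsPageModule.Page { /* extended behavior */ }
tnsPageModule.Page = MyPage;

console.log(tnsPageModule.Page === corePageModule.Page);  // false -- the running framework never sees MyPage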

As an aside: the good news is virtually every one of my plugins on https://proplugins.org has been updated to have a NS 6.2 version. If you find a plugin that no longer works after you updated to 6.2, let us know and we will attempt to get it fixed. The 6.1 -> 6.2 fixes are very minor, so they can be rapidly put into the plugins.

One final note -- please see the method at the bottom for updating webpack to rewrite tns-core-modules to @nativescript/core. This is still highly recommended, as some plugins will remain broken if you are still using the tns-core-modules redirect version.


The low level nitty gritty details - Proceed with caution.

For those who are interested in the nitty-gritty, we will walk through why #2 and #3 occur in NS 6.2. Please note this is VERY simplified -- the actual TS -> JS code is a bit more complex and wrapped in an object. However, this should give you an idea of the whys...

V8 basically keeps an object map of how an object looks in memory; so say I have a file "Blah" with a class HiYa that looks like this:

export class HiYa {

   hi() { /* do something */ }

}

When exported it has a unique signature. In NativeScript this is compiled down to ES5 code, so it will basically look like this when all is said and done:

function HiYa() {} 

HiYa.prototype.hi = function() { /* do something */}

module.exports = HiYa;

Now, when I do a const HiYaRunnerMaster = require('@nativescript/core/blah'); the blah.js file is loaded into memory, parsed, and a runnable object representation of it is passed back; let's call this HiYaRunnerMaster. So far so good.

Now for the NativeScript tns-core-modules redirects / compatibility layer. This is the exact code that you see in tns-core-modules/* in 6.2... It basically redirects to the @nativescript/core version of the file. Seems simple, and unless you are paying close attention you might not even see why this is actually a problem.

function __export(m) {

    for (var p in m) if (!exports.hasOwnProperty(p)) exports[p] = m[p];

}

Object.defineProperty(exports, "__esModule", { value: true });

__export(require("@nativescript/core/blah"));

So when I do a const HiYaRunnerFromTNS = require('tns-core-modules/blah'); it loads the above code and runs it. It first defines the __export function, then runs that function on what it gets from the require of @nativescript/core/blah. Seems simple enough. Let's break down what the V8/JavaScriptCore engines do at each step.

  1. const tempObject = require('@nativescript/core/blah'); -- THIS object is the same/identical HiYaRunnerMasterObject as we got earlier when we ran it. Every time we do this specific require statement, we get the exact same object. Which is what we want!
  2. __export(tempObject); This runs the export function, passing in our HiYaRunnerMasterObject which so far doesn't seem like an issue.
  3. for (key in HiYaRunnerMasterObject) { we are looping through every property in HiYaRunnerMasterObject
  4. if (!exports.hasOwnProperty(key)) verify the exports variable doesn't have this key property already.
  5. exports[key] = HiYaRunnerMasterObject[key]
  6. And behind the scenes exports is passed to the HiYaRunnerFromTNS at the end.

Did anyone else see what happened in step 5? First, exports is a 100% BRAND NEW OBJECT (our Object #2). We then COPIED all the properties/functions from Object #1 (HiYaRunnerMasterObject) onto this new Object #2 (exports). Each copied key is a new piece of memory that points at the old object's value. So, in simplistic terms, exports[key] = old.value uses two separate memory slots: the "key" and the "value". exports[key] is always a new memory slot (the key) that either points to the original object's value or holds its own copy of an unboxed value. So the extra memory usage can be anywhere from 50% of the original object up to the same memory usage as the original object, depending on whether the values are boxed or unboxed. In NativeScript it should be much closer to adding 50% additional memory rather than the 100% worst case, as almost all values on a NativeScript export are objects.

At the end, in step 6 we used the brand new object to assign to HiYaRunnerFromTNS. Whoops; so we now have TWO different objects in memory with two different sets of memory usage.

This means that if you monkey patch the HiYaRunnerFromTNS copy, you are not actually monkey patching the original, running version. So several of my plugins broke, as the monkey patching basically never occurred as far as the running copy of NativeScript was concerned.

In addition, there is one more potential gotcha; here is a simple example:

let Obj1 = {a: {value: 1}}; // Original Object
let Obj2 = {a: Obj1.a}; // Copied key object

If I do Obj1.a.value = 2; then Obj2.a.value === 2, because both a's point to the exact same object in memory with a "value" key in it. However, if I do Obj1.a = {value: 10} then Obj1.a now points to a brand NEW object, which also has a value key, this time holding 10. Obj2.a continues to point to the OLD original object with value = 1. So even though your objects initially started out pointing to the exact same value in memory, a new object assignment replaces only that object's own slot.

Why does this matter? Well, since the running instance of NativeScript uses the @nativescript/core versions of the objects, and your application's code currently requires the tns-core-modules versions, you DO initially have two objects, but they should all start out pointing to the same memory locations, just like our example above (Obj1.a === Obj2.a). However, if you change a value that is NOT going through a setter, then the value you just changed won't be propagated to the running version; you will have Obj1.a = {value: 10} and Obj2.a = {value: 1}, and you won't have a clue you just did a no-op statement.

I do want to say the above gotcha scenario is very UNLIKELY; all code that I can think of runs through a setter, and a setter should be getting and setting the original object's value, so they shouldn't go out of sync. But corner cases can and do occur. So my recommendations for anyone updating to NS 6.2 are...

Recommendations

  1. Long term, replace ALL tns-core-modules/ with @nativescript/core/ and all nativescript-angular/ with @nativescript/angular/ everywhere in your app. No longer have ANYTHING pointing to tns-core-modules/ or nativescript-angular -- pretend those old namespaces no longer exist and were deleted.
  2. Update to the latest @proplugins if you are using any, as most of them have already made the transition; this will at least save you time on that aspect.

Short term tip (and highly recommended until they have a solid fix):

Richard Smith posted a good tip that can quickly get you working -- you can add a rule to your `webpack.config.js` file. Find the alias section and add:

   
  "nativescript-angular": "@nativescript/angular",
  "tns-core-modules": "@nativescript/core",

So it looks like:

  
alias: {
      "~": appFullPath,
      "nativescript-angular": "@nativescript/angular",
      "tns-core-modules": "@nativescript/core"
},

NativeScript 5.4 Build Failures

People who have been using plugins like Firebase and/or other Google services may have all of a sudden had their apps stop building with an error like this.

+ adding aar plugin dependency: .[some path].\widgets-release.aar
.[somepath].\app\src\main\AndroidManifest.xml:22:18-91 Error:
Attribute application@appComponentFactory value=(android.support.v4.app.CoreComponentFactory) from [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91
is also present at [androidx.core:core:1.0.0] AndroidManifest.xml:22:18-86 value=(androidx.core.app.CoreComponentFactory).
Suggestion: add 'tools:replace="android:appComponentFactory"' to element at AndroidManifest.xml:19:5-37:19 to override.


FAILURE: Build failed with an exception.
What went wrong:
Execution failed for task ':app:processDebugManifest'.
Manifest merger failed : Attribute application@appComponentFactory value=(android.support.v4.app.CoreComponentFactory) from [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91
is also present at [androidx.core:core:1.0.0] AndroidManifest.xml:22:18-86 value=(androidx.core.app.CoreComponentFactory).
Suggestion: add 'tools:replace="android:appComponentFactory"' to element at AndroidManifest.xml:19:5-37:19 to override.


Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 30s

Wonderfully helpful error message, huh? Well, the issue is that Google recently changed their services to use AndroidX-descended components. This change has been in the wings for a while, and even NativeScript has been working towards this goal (NS 6.0 will be fully AndroidX). But in the meantime, unfortunately, android.* and androidx.* are totally incompatible.

There are two things you might have to do to solve this so that you can continue to work...

  1. You create (or modify if it exists) a file in your App/App_Resources/Android/before-plugin.gradle and add the following section:
android {
     project.ext { 
          supportVersion = "28.+"
          googlePlayServicesVersion = "16.+"
     }
} 

This will pin your app to continue using a slightly older version of the Google services, which is still based on the android.support library. (IMPORTANT NOTE: when you later upgrade to NativeScript 6, you will want to delete this, because NS 6 is AndroidX-based and you will want the later versions.)

2. If the above doesn't fully solve the issue, you may have to search your node_modules (specifically the plugins) for any plugin that actually uses Google services in its include.gradle file; they might be hard-coded to use "+" as the version, and you will need to change them to "16.+" as well.

--- UPDATE:

One of my friends, Dick Smith, showed me another solution via Slack this morning: instead, use the app.gradle file (located in the same spot, App/App_Resources/Android/app.gradle):

dependencies {
     configurations.all {
          exclude group: 'commons-logging', module: 'commons-logging'
         resolutionStrategy.eachDependency { DependencyResolveDetails details ->
             def requested = details.requested
             if (requested.group == 'com.google.firebase') {
               details.useVersion '17.+'
             }
             if (requested.group == 'com.google.android.gms') {
                 details.useVersion '16.+'
             }
             if (requested.group == 'com.android.support' && requested.name != 'multidex') {
                 // com.android.support major version should match buildToolsVersion
                 details.useVersion '28.+'
             }           
         }
     }
 }

This will override versions as Gradle looks for the pieces it needs; it runs this code, and even though a plugin's gradle file might state it wants version X of something, it will change it to the version we want... (Please note the same warning applies: when you upgrade to NS 6.0 in a couple of months, you will want to either change these versions to the AndroidX versions or delete this section.)

--- End Update.

Finally, after you make those changes, I highly recommend you do a "tns platform clean android" before you rebuild the app, so everything is clean.

If all else fails and you aren't expecting to release your app for another couple of months, you can actually upgrade to the test version of NS 6.x. This version is currently unstable and you might have other issues (and you will probably want to update to the latest 6.x each day just to make sure you are on the most recent version...).

npm i -g nativescript@next
npm i --save tns-core-modules@next
tns platform remove android
tns platform add android@next

If @next fails, and you need to return back to a solid release you can do the following:

npm i -g nativescript@latest
npm i --save tns-core-modules@latest
tns platform remove android
tns platform add android@latest

Please note: I do NOT recommend you run @next; it is normally the least stable and most buggy version of NativeScript you can run. But sometimes you have to do what you have to do to proceed. 🙂

NativeScript 5.x and HMR Issues

If you are one of the lucky people reading this blog article, your app probably used to work! Then, after upgrading to the latest NativeScript and enabling the new workflow, your app is now broken...

Well I have some good news... It is easy to get back to a working state.

Here is how you can revert back to using the Legacy Workflow..

First, you need to open the nsconfig.json file in the root of your project. This file is where the key parameter is kept.

Inside the nsconfig.json file you will see "useLegacyWorkflow": false; change this to true and save the file.
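So a minimal nsconfig.json (assuming no other options are set in yours) ends up looking like:

{
    "useLegacyWorkflow": true
}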

Then from the CLI; do a
- tns platform clean android (if using Android)
- tns platform clean ios (if using iOS)

Then you can do a tns run android or tns run ios and be back working.

Please note: DO NOT USE the --bundle or --hmr flags, as these will re-enable HMR, and the only reason you are changing this flag is because HMR is already not working for you...

I would recommend you submit a bug report about your issue to the nativescript-dev-webpack GitHub repo so that it can be tracked. Version 6.0 of NativeScript is supposedly going to disable the Legacy Workflow mode, so it is critical they get all the data they can on what issues you have seen so they can iron them all out...

Common HMR issues with fixes:

  • Your page/css/xml/html isn't changing on the device.
    • Make sure the name of the files end with "-page". So "main-page.xml" or "cool-page.css"
  • Your extra files (like sqlite databases) aren't being synced to the device.
    • Edit your webpack.config.js file, and add them to the copy step so that they are bundled up.

Some common broken NativeScript-Core HMR issues:

  • Pure JavaScript apps (i.e. not TypeScript) do not function properly on the second screen or on the first screen if using a SideDrawer as the root view. Click/Tap/Event handlers are not assigned properly.
  • Width/Height Specific files are unused (i.e. main-page.HW600.xml doesn't work; only main-page.xml is picked up.)
  • Shared NativeScript XML files do not link up to proper event handlers. (Similar to first issue; but a different aspect)
  • Some CSS files do not seem to be tracked properly.

NativeScript-Core replacing the root view...

Please note, to my knowledge this only works with NativeScript-Core / PAN (Plain Awesome NativeScript) applications. I do not believe it will work with the Vue or Angular variations of NativeScript. There may be another way to accomplish this with those variations, and if you know how -- please let me know and I'll update this post.

So I was minding my own business again, and I saw this question pop up on the NativeScript GitHub. Take this scenario: you want to have a SideDrawer that is active on all pages.

Seems simple enough: you create the /app-root.xml like this:

<nsDrawer:RadSideDrawer xmlns:nsDrawer="nativescript-ui-sidedrawer">
    <nsDrawer:RadSideDrawer.drawerContent>
        <GridLayout backgroundColor="#ffffff">
            <Label text="Go to Cool Page" tap="page1"/>
            <Label text="Go to Another Cool Page" tap="page2"/>
        </GridLayout>
    </nsDrawer:RadSideDrawer.drawerContent>
    <nsDrawer:RadSideDrawer.mainContent>
        <Frame defaultPage="/pages/main/main-page"></Frame>
    </nsDrawer:RadSideDrawer.mainContent>
</nsDrawer:RadSideDrawer>

But now I need a login page... And my awesome side drawer is now active on the login page. Uh-oh -- it even has a ton of links that go places, and we don't want that if they haven't even logged in. In fact, we probably don't want them to even have access to the side drawer when not logged in...

What is the solution?

My solution is fairly simple: you revert your code back to something close to the default app-root.xml.
Your app-root.xml is changed to this:

<Frame defaultPage="/pages/login/login-page"/>

That is because we want the main frame of the app to load my login page; in this case my login page will force them to log in if they don't already have a valid stored login token... Notice there is no SideDrawer, because it is no longer the root. So part one is working great: no side drawer during login...

So now how do we get our cool SideDrawer to be active on the rest of all the pages...

const Application = require('tns-core-modules/application');

function loginIsValid() {
    Application._resetRootView("/pages/root/root");
}

Yep, look there -- there is a cool feature in the framework that allows you to REPLACE (or RESET) the root view. So by doing this, my /pages/root/root.xml file now contains the SideDrawer root layout that I displayed at the top of this post.
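And the same trick works in reverse; for example, on logout you could reset the root back to a login-only layout (a sketch, where /pages/login/login-root is a hypothetical XML file containing just the Frame shown earlier):

const Application = require('tns-core-modules/application');

function logout() {
    // clear the stored login token here, then swap the root back to the login-only frame
    Application._resetRootView("/pages/login/login-root");
}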

Notes:

My own applications are typically laid out so that I have a "pages" folder, and inside it each separate page's JS, CSS, and XML. This typically allows me to quickly find the page I want to work on.

NativeScript AndroidX support and Plugins

The next major version of NativeScript (version 6) will be switching from the android.support packages to the new androidx namespace. This will be a hard break, as you can only use the support APIs or the androidx APIs, not both. This is something Google has been implementing for a while, and NativeScript is getting on board to eliminate later issues and give us great support. The awesome developers at Progress have already created a fork of NativeScript that uses androidx instead of the support libraries, which you can install to test.

Since one of my plugins is affected by the changes, I took a look into what was required to support both the current versions of NativeScript and the upcoming version.

For any of you who have plugins, this is the code I basically devised:

let androidSupport=null;
if (android.support && android.support.v4) {
  androidSupport = android.support.v4;
} else if (global.androidx && global.androidx.core) {
  androidSupport = global.androidx.core;
}

Then you can use androidSupport to access the namespace, and it works pretty much the same as what you would access via android.support.v4 or the new androidx.core replacement. Because we used feature detection, this allows us to support both namespaces in the same code base.
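For example, since ContextCompat lives at the same relative spot in both namespaces, a permission check written against androidSupport works on either library (a sketch; androidApp is assumed to be the android object from the tns-core-modules/application module):

// Works on both pre-AndroidX and AndroidX builds:
// android.support.v4.content.ContextCompat and androidx.core.content.ContextCompat share this API.
const granted = androidSupport.content.ContextCompat.checkSelfPermission(
    androidApp.foregroundActivity,
    android.Manifest.permission.CAMERA
) === android.content.pm.PackageManager.PERMISSION_GRANTED;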

So if you have JavaScript or TypeScript code that uses android.support.* then you can use this technique to make your plugin forwards compatible with the next major version of NativeScript while retaining support for the prior versions.

Here is a list of classes that are being moved from android.support to androidx: https://developer.android.com/jetpack/androidx/migrate

VMWare Network Hiccups ( sent link down event. )

I ran into this because I was at a hotel whose DHCP renewed every 300 seconds, and trying to download anything inside the VM was getting clobbered...

If you check your syslog or kern.log and see the following entries, then I might have the solution for you:

kernel: [235397.022939] userif-3: sent link down event.
kernel: [235397.022943] userif-3: sent link up event.

The issue is that when your DHCP lease renews, it calls into the kernel with the new information; unfortunately this causes VMWare Workstation & Player (might affect other VMWare products?) to reset its network stack, even though nothing changed...

After a lot of time googling different terms and trying to figure out what exactly was going on, I finally found the solution here (by Jan Just Keijer): https://www.nikhef.nl/~janjust/vmnet/ -- it was for a much older version of VMWare Player, but it still applies to VMWare Workstation 15...

To make sure the information doesn't just disappear in the future (and so I can easily find it myself), I'm going to quote the relevant code and/or modifications I made.

Extract the /usr/lib/vmware/modules/source/vmnet-only.tar tar file; you need to modify the userif.c file. Search it for the VNetUserIfSetUplinkState function, and right after the variable declarations add if (!linkUp) return 0;

// Around line 966
VNetUserIfSetUplinkState(VNetPort *port, uint8 linkUp)
 {
    VNetUserIF *userIf;
    VNetJack *hubJack;
    VNet_LinkStateEvent event;
    int retval;

    /* ADD: never send link down events */   
    if (!linkUp) return 0;
    /* END Added code */
    
    ... rest of code ...

This will eliminate the link-down events, which are what disconnect the network from the VMWare network driver...

After you do that, recreate vmnet-only.tar and replace the original. Then you can run: /usr/bin/vmware-modconfig --console --install-all

And then either
systemctl restart vmware
or
service vmware restart

If you still see "sent link down event." in your logs, then the patch wasn't applied properly.