TIL: NodeJS modules are all Singletons

I am not sure whether this applies to ECMAScript modules, though I suspect it does. But...

I have known all along that NodeJS only loads a module once, no matter how many times it appears in a require() statement. Today I ran into a ramification that I had never realized. It makes perfect sense, but I had never thought about it.

I will say that part of the reason I didn't think about it is that I am rabid in my opposition to global context. As soon as I learned enough, I wrapped everything in a function and only ever pass values into functions and classes. I never use closure in a situation longer than a few lines and regret even that. (The only, only thing I like better about PHP than JS is its 'function() use' statement instead of closure.)

It turns out that every single require()'d module is actually a singleton. Make an object and push crap into it from anywhere in your system, and all the crap from all the sources ends up in that one object, their only connection being a require() statement. It's the same with any side effects. They happen only once.
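A minimal sketch of what I mean (the file names are invented):

// counter.js
module.exports = { hits: 0 };

// a.js
require('./counter').hits += 1;

// b.js
require('./counter').hits += 1;

// main.js
require('./a');
require('./b');
console.log(require('./counter').hits); // 2 -- one shared object, evaluated once

Both a.js and b.js get the exact same object out of the require() cache; counter.js itself runs exactly once.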

Consequently, my modules are always a single function that is exported in one of these ways:
  • module.exports = moduleFunction; //if I want to pass arguments
  • module.exports = moduleFunction();
  • module.exports = args => new moduleFunction(args);
The only other statements are require() statements which, if they are mine, have the same carefully controlled lack of state. I always explicitly initialize everything about a module separate from require()ing it.
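In other words, every module of mine has roughly this shape (a sketch; the names are invented):

// myModule.js
const moduleFunction = ({ databasePath } = {}) => {
    // all state lives inside this function; nothing sits at module scope
    const getPath = () => databasePath;
    return { getPath };
};

module.exports = moduleFunction;

Because the exported thing is a function, each require()er calls it and gets its own explicitly initialized instance, and the singleton nature of the module itself stops mattering.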

But, it is merely good luck that I never got my ass bitten painfully by this. If anyone had ever asked me about it, I would have said, "Yes, you have to watch out for variables that persist between require() actions." But, nobody ever did and I have never given it one single thought. It's embarrassing. Saved only by good luck.

I guess, though, that there is some benefit to being crazed about carefully controlled variable scope and accessibility. Made it so that there wasn't any loose data to be compromised. So, lucky me.

Javascript Template Literal Tagged Templates Variation

The JavaScript phrase

test`That ${person} is a ${age}.`

has never been useful to me and so I have never really understood it. Today I found something that relies on it. As a consequence, I had to figure out how it works. The usually awesome MDN gave working examples but did a terrible job of explaining what was going on. I hereby leap into the breach.

The first thing is that the syntax is a sort of function call with no parentheses. It is a function name followed immediately by a template literal. Weird to look at, but there you have it.

The next thing is that the function execution has a big, complicated process behind it. When it is called, the JS engine does these things...

  1. It splits the template literal on the variable tags into an array of strings. It's as if it did a regex to change each '${var}' to a comma and then split(',').
  2. It collects the values for each variable mentioned and inserts them into the function call in the order they appear.


Eg,

The effective function call is

test(['That ', ' is a ', '.'], person, age);

This allows you to process the components of the literal in any way you want. Assemble the strings backwards? Sure. Put the values in the wrong place? Can do.

Below I constructed an example that acts as if there were no tag function at all. I felt like it helped me understand.

#!/usr/local/bin/node

let person = 'Mike';
let age = 28;

function test(strings, ...keys) {
    let output = '';
    strings.forEach(item => {
        const tmp = keys.shift(); // next interpolated value, if any remain
        output += `${item}${tmp !== undefined ? tmp : ''}`;
    });
    return output;
}

let output = test`That ${person} is a ${age}.`;

console.log(output);
// That Mike is a 28.
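And, to make good on the 'assemble the strings backwards' claim above, a toy tag (continuing the same script):

function backwards(strings, ...values) {
    // interleave the string pieces and values, then reverse the whole thing
    return strings
        .reduce((result, piece, index) =>
            result + piece + (values[index] !== undefined ? values[index] : ''), '')
        .split('')
        .reverse()
        .join('');
}

console.log(backwards`That ${person} is a ${age}.`);
// .82 a si ekiM tahT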






Remembering Details of Programs and Environments

I suffer a great deal of hassle trying to deal with the insane number of details that come along with being a programmer. Partly it is because I am really old and cannot remember a single thing but, really, I have never been able to remember details. I long ago developed practices to accommodate this weakness.

I started by using BASH aliases to remember various commands. I still have a couple of aliases from fifteen years ago for lsof, a command I use rarely and can never remember how to work. Later, I got the idea of having an alias (initProjA) always present in my shell for each project. It executes a script, initDevTerminal, that lives in a management directory I keep in every project. This script creates aliases and also shows notes for things I might forget. EG, I keep a project for my computer that just tells me where the NGINX config files are located, since I can never remember.

For real projects, the initDevTerminal script generates aliases that initialize or execute tests, copy code to production servers, execute the toolchain, start and stop things, whatever is needed. Some projects have a dozen entries. Some fewer. One longstanding and stupidly complicated one actually has pages for the various subsections. The script also initializes environment variables if needed and, in some cases, swaps out launchctl or systemctl jobs. The important thing is that most of the complicated commands are put into aliases or listed as notes when 'initProjA' is executed on the command line for easy reference.

I also manage applications on various client servers. For a long time, I would put such a helper script in my BASH environment on the client systems. Over the years, that became complicated to manage, so I changed the structure. Now I put the scripts in the system-specific config directory, the one where my programs look to find, eg, the actual path or API key or something.

In the directory for each computer, my development one included, there is a directory called terminalAndOperation. Each has a well-known file named 'initTerminal'. Depending on when I last worked on the project, that file might be an old, simple one or a new, cool one that includes, for example, a boilerplate reference to a common file at the root of the configs directory so I do not have to repeat things that are, well, common to all the environments.

The main things I have figured out are:
  1. Use a script to contain all the stuff I know I will not remember.
  2. Structure projects so that there is a script for each environment I have to work on.
  3. Be rigorous in making all that structure the same, because I know I will not remember where the script is if it's different.
  4. Use aliases (also, btw, listed in my .bash_profile) to execute them so I am not confused.
  5. Write all the fussy details in those scripts and keep them up to date and refactored frequently.
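For concreteness, the whole mechanism is only this (a sketch; the paths, aliases, and notes are invented):

# in ~/.bash_profile, one alias per project
alias initProjA='source ~/projects/projA/management/initDevTerminal'

# ~/projects/projA/management/initDevTerminal
# 'source', not execute, so the aliases land in the current shell
alias projATest='npm test'
alias projADeploy='rsync -av dist/ me@prodServer:/var/www/projA/'
echo "NOTES:"
echo "  NGINX configs live in /usr/local/etc/nginx/servers/"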

This has allowed me to keep being a productive programmer long into senility. My wife has to remind me about everything but, when my colleagues need to know fussy details, they ask me and I can easily find the answer.


Javascript Date Format Option Property Values

weekday:    "narrow", "short", "long"
year:    "numeric", "2-digit"
month:    "numeric", "2-digit", "narrow", "short", "long"
day:    "numeric", "2-digit"

hour12:    true, false
hour:    "numeric", "2-digit"
minute:    "numeric", "2-digit"
second:    "numeric", "2-digit"

timeZone:    [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
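These plug into toLocaleDateString/toLocaleTimeString/toLocaleString (and Intl.DateTimeFormat). A quick example:

const options = {
    weekday: 'long',
    year: 'numeric',
    month: 'short',
    day: '2-digit',
    hour: 'numeric',
    minute: '2-digit',
    hour12: true,
    timeZone: 'America/New_York'
};

console.log(new Date().toLocaleString('en-US', options));
// eg, Monday, Jun 09, 2025, 3:05 PM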

systemctl (systemd) Timer OnCalendar Unit Executes Target Service When Started or Restarted

(Yes, this looks familiar. It's the same debugging session as 'runs when started'. Be patient for heaven's sake.)

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

# do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose

I started my debugging with a quick script that just logged hello and it all worked quite nicely.

What I didn't notice initially is that it was running the target service when I started the timer.

Later, I changed it to refer to my real, long-running (hours) process and, as you might imagine, that made it very clear that it was being run an extra time.

The problem is that my boilerplate had a 'Requires' statement in the Unit stanza. When this is present, systemd does a 'start' on that Require'd service when the timer is started.

The Requires statement I copied said...

Requires=myThing.service (just like it had Unit=myThing.service in the Timer stanza)

Turns out that systemd starts the Require'd service if it is not already running in case, I suppose, you forgot to.

The only reason I can imagine for having the target service in the Requires statement is so that it runs once at startup. I don't know but, there you have it. If you are getting extra invocations of your service, get it out of the Requires statement.
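For reference, a working pair without the troublesome line looks something like this (the schedule, descriptions, and path are invented):

# myThing.timer
[Unit]
Description=Schedule myThing

[Timer]
OnCalendar=*-*-* 03:00:00
Unit=myThing.service

[Install]
WantedBy=timers.target

# myThing.service
[Unit]
Description=Do the myThing work
# note: no Requires= here or in the timer

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myThing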



systemctl (systemd) Timer OnCalendar Unit Goes Into Dead State When Target Service Is Stopped

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

# do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose

It showed a nice schedule with 'status' and ran correctly at the right time. I did some debugging and experiments to tune the system and, after seeming to work ok, it started telling me that the timer was dead. I could find no reason for it during a long, harrowing debug session.

Here is the answer.

I had started the process by implementing the timer/service pair pointing at a trivial test script. It ran and quit instantly. I did not notice that it was running and quitting on timer start. I also think that I (incorrectly) did "systemctl start" on the quick test service, which then exited normally.

Later, I wired it up to my real process, one that takes more than an hour to execute. Of course, I don't want to wait that long so I did "systemctl stop".

After that, the timer would not work. It said it was dead. I tried everything.

The problem is that the boilerplate I grabbed included a "Requires" statement in the Unit stanza. It was set to the same name (myThing.service) as the target service, so I did it too. That was an error. I do not know why the boilerplate author included it; requiring the target service as a dependency makes no sense even though it basically worked.

When I did 'systemctl stop' on the long-running target service, that put it into a state where the dependency could not be met. I am not entirely sure what the difference between never-run and stopped is but, I have proven this.

When I removed the extraneous 'Requires' statement, that dependency went away, the timer started correctly, and all was happiness and unicorns.
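Two commands worth knowing while debugging this kind of thing:

systemctl list-timers --all     # shows every timer with its next and last run, dead ones included
systemctl status myThing.timer  # where the dead state actually shows up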

What I still don't know (Hey, Smartypants, that's what comments are for) is why stopping the target service made it violate the Requires constraint.




GITLAB SSH key won't work, asks for password

We are using a locally hosted instance of gitlab. It's a nice program and we are enjoying many things about it.

Most of the people working on it live a fairly simple life. I am not one of them. I work on a zillion servers and SSH all over the place. I have many RSA keys for various purposes. To keep track of them, I use the comment section of the public key to let me know which one is which.

EG, a public key might end

...SSCtQUvSJ5jdfW9YB3w== mySpecialKeyName

Time came to move a repo from github to gitlab. I grabbed the public key I was using for github and pasted it into the settings of gitlab. It accepted it happily.

Also, it did not work. When I tried to push my newly re-origined ("git remote add origin git@...") repo to gitlab, it asked me for a password. There is, of course, no password, nor should it ever ask for one.

After trying EVERYTHING, I noticed instructions on the gitlab page that reminded me to copy my public key completely. It said from ssh-rsa to the "email address".

I changed it to

...SCtQUvSJ5jdfW9YB3w== tq@myGitlabEmailAddress.com

And it worked.
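By the way, a quick way to test the key without touching the repo (the hostname here is made up):

ssh -T git@gitlab.myCompany.com
# Welcome to GitLab, @yourUsername!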

You're welcome.

PS, Gitlab, if you ever hear about this: This is bullshit. There is not a single annotation anywhere. Even if I was in the habit of using the RSA key comment section for an email address, there's no reason to imagine it would be the right email address. If you're not going to make *me* responsible for establishing the connection between the key and my account, at least do me the solid of giving me an error message when it doesn't have the right info.






Use iTerm2 for bbedit 'Go Here in Terminal' function

Obviously, nobody wants to use Apple's Terminal in a world where there is iTerm2.

It's equally obvious that Barebones bbedit is awesome.

If you want the context menu selection 'Go Here in Terminal' to open iTerm2, paste this into the command line:

defaults write com.barebones.bbedit TerminalBundleID "com.googlecode.iterm2"

(Make sure the quotes weren't made curly by your web browser. You can use bbedit->Text->Straighten Quotes to make sure.)
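To confirm the setting took (or see what is currently set), read it back:

defaults read com.barebones.bbedit TerminalBundleID
# com.googlecode.iterm2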



macOS Will Not Save Mail Signatures

I've been going crazy. I need to change my company email signature. I do so and it works fine, but the next time I quit Mail.app, it goes away. I messed around with plists and permissions. Nothing helped.

Then I found an article HERE. It had the answer. I repeat it here for posterity.

The problem arises from some bug between Mail.app and iCloud Drive. The problem has been around at least since 2017. There does not appear to be a fix that will make Mail.app save signatures as long as it is involved with iCloud Drive. Fortunately, you can turn that on and off with no apparent harm.

Go to System Preferences->iCloud, then click the iCloud Drive Options... button. In there, you will see a list that includes Mail. Uncheck Mail.

(Note: Mail appears in the list initially shown when you open the iCloud preference panel. This is not the correct one to uncheck. You need to click into iCloud Drive Options and uncheck Mail there.)

Back in Mail.app, you will find that signatures save correctly. Do your work. Quit Mail and reopen it and you will find your new and changed signatures in place. Win!!

Once done, you can return to the iCloud preference panel->iCloud Drive->Options and recheck Mail. Signatures will remain. Life will be good.

Nodejs 10, Readable Stream (close, end, etc) Events Not Being Called


The context is that I had a request to add a feature to a Nodejs app that I wrote a couple of years ago. The purpose of the app is to grab a bunch of files off of an SFTP server and make zip files out of them. The purpose of this post is to provide a google-friendly summary so nobody has to spend as much time on this as I did.

So, I fired up the app on my development machine. It looked ok at first but then it just stopped processing. Further investigation told me that the downloaded files got to disk and that the problem was that the ‘close’ event was never firing.

Inspiration struck and I realized that the app was written for an older version of Nodejs. More experimentation revealed that the app runs correctly on versions up to 9.11.2 but, at 10.0.0, it fails. For google, I restate: upgrading nodejs to version ten causes a readable stream to never fire its ‘close’ (or ‘end’ or any other) event.

After a full day and a half of research and experimentation, I learned that, starting in nodejs 10, readable streams default to ‘paused’ mode. Previously, they were in ‘flowing’ mode. In ‘paused’ mode, the stream waits for the consumer to take data before it does anything else.

Well, sort of. It does provide the data, somehow. After all, my zip files were created and did work correctly.

In any case, I found a clue someplace that there had been a change to the default for streams. I found another clue that talked about the ‘paused’ behavior. Eventually I found a comment on stackoverflow (here) that mentioned read();
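Here is a stripped-down illustration of the mode switch using nothing but the stream module (this is my sketch, not the app's code):

const { Readable } = require('stream');

const stream = new Readable({
    read() {
        this.push('some data');
        this.push(null); // signal end of data
    }
});

stream.on('end', () => console.log('end fired'));

// With no consumer attached, the stream sits in 'paused' mode and 'end'
// never fires. Any of these will drain it: attaching a 'data' handler,
// pipe(), resume(), or read(). Comment the next line out and nothing happens.
stream.resume();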

So, here’s the code:

sftpControls.sftpConnection
    .get(remoteDirectoryPath + name, false, null)
    .then(zipSourceStream => {
        const filePath = `${zipFileDirPath}/${name}`;
        zipSourceStream
            .pipe(qtools.fs.createWriteStream(filePath, { encoding: null }))
            .on('close', () => {
                zipSourceStream.destroy();
                next('', name);
            });
        zipSourceStream.read(); //tell the stream to be ‘flowing’
    });


That is the entire change. Add one read() and the stream becomes ‘flowing’ and emits a close event when it’s done.



A small warning: I tried to chain the .read() along with the pipe() and on() methods. That failed. The read() method does not return the source object. You will get an error, “no read().pipe()”. Reverse it and you get “no pipe().read()”. You have to run it like it’s shown, directly off of the original stream object.

If anyone reading this actually understands why this is happening or how it works, I am sure future programmers will be grateful to read the comments. I know I will.