Fixing: error: GH001: Large files detected. You may want to try Git Large File Storage


Firstly, within the repository directory:


1) Execute `git log` to display and then copy the commits done since the last push.

2) Remove only the .git folder using `rm -rf .git`.


Subsequently:


1) In a temporary directory, clone the repository from GitHub.

2) Transfer the newly cloned .git folder from the temporary directory to the repository directory.

3) Perform a commit. This should include the commit messages copied earlier, along with an explanation for the discontinuity in the commit messages.

4) Push your changes.


And that's it!
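Put together, the whole dance looks something like this (the path, remote URL and repo name are all invented; this also presumes you have removed or gitignored the offending large file):

cd ~/projects/myRepo                  # the repository directory
git log --oneline                     # copy the messages from the unpushed commits
rm -rf .git                           # delete only the .git folder

cd /tmp
git clone git@github.com:me/myRepo.git
mv /tmp/myRepo/.git ~/projects/myRepo/

cd ~/projects/myRepo
git add -A
git commit                            # paste the copied messages, explain the discontinuity
git push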

Sonicwall Port Forward Example

I found way too many explanations of how to configure Sonicwall to forward an external port to an internal destination. Everyone seems intent on explaining too much, until I came to this old (2017) one by Sonicwall's M. G. Sriram Iyer. It's like a dream come true, just the specifics of creating the capability. I took notes.

My specific need was to forward traffic on port 9000 to port 9000 on a specific internal system.

Create address objects for the internal and external addresses

[on my system, the external one already exists as Default WAN]

[eg, New Internal Address Object points to the internal IP address]


Create a service definition assigning TCP (or whatever) to the internal port number

[eg, New Internal Port Assignment Service points to port 9000]


Create a Firewall -> Access rule to allow traffic, WAN to LAN

service: http

source: any

dest address: address object (eg, Default WAN)

dest service: service object (New Internal Port Assignment Service)

_must be above (higher priority than) any deny rule_


Create a NAT Policy to translate and forward traffic

orig source: any

orig dest: an address object (Default WAN)

orig service: http [for my 9000-to-9000 case, this would be the port 9000 service object, not http]

trans source: Original

trans dest: an address object (New Internal Address Object)

trans service: service object (New Internal Port Assignment Service)

inbound interface: X1
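Once it's in place, a quick sanity check from an outside network (a sketch; 203.0.113.10 stands in for your WAN address):

nc -zv 203.0.113.10 9000
# or, if the thing listening on 9000 speaks HTTP:
curl http://203.0.113.10:9000/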




Using ESM with legacy CommonJS modules in NodeJS

I have been a full-time Javascript programmer since 2010. I have an elaborate library of utilities and snippets and patterns and practices based on CommonJS running in NodeJS. I have absolutely no use for ESM. I understand its benefit in some situations, but none of them are situations I am in. Consequently, I am unfamiliar with it.


So, when I found, as I implemented a new app, that OpenAI has made its new V4 npm module entirely incompatible with my previous CommonJS library, I was annoyed that I could not just pick up previous code. When I found that it is only published as ESM and that you can't just put "import openAi from 'openai'" inline with require(), I was furious.

Trying to figure out how to use ESM in a CJS world, I have read so many idiots saying, "Don't. ESM is great. Rewrite your code base to be cool and modern." Or, convoluted crap that tells you how to do some complicated thing related to this topic. I have to tell you that my last couple of hours have been an epic fail in Google, StackOverflow and everyone else.

It made me so mad that I was forced to the final resort: RTFM. In this case, the NodeJS module documentation.

I'm sure you have as little interest in reading about ESM as I do, so here is the answer:

I wrote an interface module. All it does is use the NodeJS import() function with its promise. (You can obviously use this without the structure but I personally prefer callbacks to async/await and think this is easier to interpret.)

This module only import()'s and is called import-openai.js:

'use strict';

// all this does is wrap the import() promise in a callback
module.exports = async function(callback) {
    const openAi = await import('openai'); // openAi is the module namespace object
    callback('', openAi);
};

This is in my main program:

require('./import-openai')((err, openAi)=>{
    //do cool stuff here
});
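If callbacks aren't your thing, the same trick works inline with async/await. A minimal sketch, assuming the V4 client class is the module's default export and the key lives in an environment variable:

// no wrapper module; import() is perfectly legal inside CommonJS
(async () => {
    const { default: OpenAI } = await import('openai');
    const openAi = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    //do cool stuff here
})();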

I make no claim that this is brilliant insight. In fact, I know it is totally obvious... once you know about it, which you won't until you do, and Google was no help until you got here.

You're welcome.

ps, You might have thought, "Why'd he have to tell me this whole long story about legacy CommonJS, ESM, node modules, looking for import(), etc?" The answer is Google Bait, not that I think you actually care about my journey. It needs the explanation for the index.








Unix Slash or No Slash on mv or cp

I go for long periods of time where I know the answer to this but then, brain fart, I get confused. I have been amazed at how hard it is to find a clear statement about this. Now there is one. You're welcome.

If I want to merge the source directory's files into a target directory, I should use a final slash on the source directory path.

If I want to put the source directory itself inside the target directory, I should not use a slash.

Repeat after me: Final slash means merge the contents. No slash means the directory itself goes inside.

(That's cp -R on macOS/BSD, and rsync follows the same rule. GNU cp on Linux copies the directory itself either way, and mv always moves the directory itself.)

For example...

sourceDir
    fileOne
    fileTwo

targetDir
    someFileOne
    someFileTwo

I get a merge if I use:

cp -R sourceDir/ targetDir # final slash on the source

My targetDir ends up looking like:

targetDir
    someFileOne
    someFileTwo
    fileOne
    fileTwo

I get a new subdirectory if I leave the slash off, eg:

cp -R sourceDir targetDir # no final slash

The result is:

targetDir
    someFileOne
    someFileTwo
    sourceDir

(And targetDir/sourceDir contains fileOne, fileTwo.)

Repeat after me: Slash on the source to merge. No slash to create a new subdirectory.


TIL: NodeJS modules are all Singletons

I am not sure if this applies to ECMAScript modules, though I suspect it does. But...

I have known all along that NodeJS only loads a module once no matter how many times it appears in require() statements. Today I ran into a ramification that I never realized. It makes perfect sense but I never thought about it.

I will say that part of the reason I didn't think about it is that I am rabid in my opposition to global context. As soon as I learned enough, I wrapped everything in a function and only ever pass values into functions and classes. I never use closure in a situation longer than a few lines and regret even that. (The only, only thing I like better about PHP than JS is its 'function() use' statement instead of closure.)

It turns out that every single require()'d module is actually a singleton. Make an object and push crap into it from anywhere in your system and all the crap from all the sources ends up in that one object, their only connection being a require() statement. It's the same with any side effects. They happen only once.
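A tiny demo (file names invented). Every require() hands back the same cached object:

// shared.js - created exactly once per process
module.exports = { items: [] };

// a.js
require('./shared').items.push('from a');

// b.js
require('./shared').items.push('from b');

// main.js
require('./a');
require('./b');
console.log(require('./shared').items); // [ 'from a', 'from b' ] - one object, two writers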

Consequently, my modules are always a single function that is exported in one of these forms:
  • module.exports = moduleFunction; //if I want to pass arguments
  • module.exports = moduleFunction();
  • module.exports = args=>new moduleFunction(args);
The only other statements are require() statements which, if they are mine, have the same carefully controlled lack of state. I always explicitly initialize everything about a module separate from require()ing it.

But, it is merely good luck that I never got my ass bitten painfully by this. If anyone had ever asked me about it, I would have said, "Yes, you have to watch out for variables that persist between require() actions." But, nobody ever did and I have never given it one single thought. It's embarrassing. Saved only by good luck.

I guess, though, that there is some benefit to being crazed about carefully controlled variable scope and accessibility. Made it so that there wasn't any loose data to be compromised. So, lucky me.

Javascript Template Literal Tagged Templates Variation

The javascript phrase

test`That ${person} is a ${age}.`

has never been useful to me and so I have never really understood it. Today I found something that relies on it. As a consequence, I had to figure out how it works. The usually awesome MDN gave working examples but did a terrible job of explaining what was going on. I hereby leap into the breach.

The first thing is that the syntax is a sort of function call with no parentheses. It is a function name followed immediately by a template literal. Weird to look at but there you have it.

The next thing is that the function execution has a big, complicated process behind it. When it is called, the JS engine does these things...

  1. It splits the template literal on the substitutions into an array of strings. It's as if the engine changed each '${var}' to a comma and then did split(','). That strings array always has one more element than there are values, even if some of its strings are empty.
  2. It collects the value of each variable mentioned and passes the values into the function call, in the order they appear, as additional arguments.


Eg, the effective function call is:

test(['That ', ' is a ', '.'], person, age);

This allows you to process the components of the literal in any way you want. Assemble the strings backwards? Sure. Put the values in the wrong place? Can do.
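For instance, here is a tag that deliberately swaps the two values, just to prove the point:

function swap(strings, ...values) {
    // strings is ['That ', ' is a ', '.']; values is ['Mike', 28]
    return strings[0] + values[1] + strings[1] + values[0] + strings[2];
}

let person = 'Mike';
let age = 28;
console.log(swap`That ${person} is a ${age}.`);
// That 28 is a Mike.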

Below I constructed an example that acts as if there were no tag function at all. I felt like it helped me understand.

#!/usr/local/bin/node

let person = 'Mike';
let age = 28;

function test(strings, ...keys) {
    // reassemble the pieces exactly the way plain interpolation would
    let output = '';
    strings.forEach(item => {
        const tmp = keys.shift(); // undefined once the values run out
        output += `${item}${tmp !== undefined ? tmp : ''}`;
    });
    return output;
}

let output = test`That ${person} is a ${age}.`;

console.log(output);
// That Mike is a 28.






Remembering Details of Programs and Environments

I suffer a great deal of hassle trying to deal with the insane number of details that come along with being a programmer. Partly it's because I am really old and cannot remember a single thing but, really, I have never been able to remember details. I long ago developed practices to accommodate this weakness.

I started by using BASH aliases to remember various commands. I still have a couple of aliases from fifteen years ago for lsof, a command I use rarely and can never remember how to work. Later, I got the idea of having an alias (initProjA) always present in my shell for each project, one that executes a script, initDevTerminal, kept in a management directory I put in every project. This script creates aliases and also shows notes for things I might forget. Eg, I keep a project for my computer that just tells me where the NGINX config files are located since I can never remember.

For real projects, the initDevTerminal script generates aliases that initialize or execute tests, copy code to production servers, execute the toolchain, start and stop things, whatever is needed. Some projects have a dozen entries. Some fewer. One longstanding and stupidly complicated one actually has pages for the various subsections. The script also initializes environment variables if needed and, in some cases, swaps out launchctl or systemctl jobs. The important thing is that most of the complicated commands are put into aliases or listed as notes when 'initProjA' is executed on the command line for easy reference.

I also manage applications on various client servers. For a long time, I would put such a helper script in my BASH environment on the client systems. Over the years, that became complicated to manage so I changed the structure. Now I put the scripts in the system-specific config directory, the one where my programs look to find out, eg, the actual path or API key or something.

In the directory for each computer, my development one included, there is a directory called terminalAndOperation. Each has a well-known file named 'initTerminal'. Depending on when I last worked on the project, that file might be an old, simple one or a new, cool one that includes, for example, a boilerplate reference to a common file at the root of the configs directory so I do not have to repeat things that are, well, common to all the environments.

The main things I have figured out are:

1) Use a script to contain all the stuff I know I will not remember.

2) Structure projects so that there is a script for each environment I have to work on.

3) Be rigorous in making all that structure the same because I know I will not remember where the script is if it's different.

4) Use aliases (also, btw, listed in my .bash_profile) to execute them so I am not confused.

5) Make sure to write all the fussy details in those scripts and keep them up to date and refactored frequently.
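For flavor, here is a stripped-down sketch of the kind of thing one of these scripts contains (every name, path and command is invented):

# initDevTerminal - sourced by an alias in my .bash_profile, eg:
# alias initProjA='source ~/projects/projA/management/initDevTerminal'

alias projTest='npm test'
alias projPush='rsync -a ./dist/ deploy@prod.example.com:/var/www/projA/'
export PROJA_CONFIG_PATH="$HOME/configs/projA"

echo "notes:"
echo "  NGINX configs live in /usr/local/etc/nginx/"
echo "  aliases: projTest, projPush"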

This has allowed me to keep being a productive programmer long into senility. My wife has to remind me about everything but, when my colleagues need to know fussy details, they ask me and I can easily find the answer.


Javascript Date Format Option Property Values

weekday:    "narrow", "short", "long"
year:       "numeric", "2-digit"
month:      "numeric", "2-digit", "narrow", "short", "long"
day:        "numeric", "2-digit"

hour12:     true, false
hour:       "numeric", "2-digit"
minute:     "numeric", "2-digit"
second:     "numeric", "2-digit"

timeZone:   any tz database name (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
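These are the option properties for toLocaleString() and friends (Intl.DateTimeFormat). A quick sketch of how they get used:

// format a date with an options object
const options = {
    weekday: 'long', year: 'numeric', month: 'short', day: 'numeric',
    hour: '2-digit', minute: '2-digit', hour12: false,
    timeZone: 'America/New_York'
};
console.log(new Date().toLocaleString('en-US', options));
// eg, Wednesday, Mar 12, 2025, 14:05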

systemctl (systemd) Timer onCalendar Unit Executes Target Service When Started or Restarted

(Yes, this looks familiar. It's the same debugging session as 'runs when started'. Be patient for heaven's sake.)

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

# do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose

I started my debugging with a quick script that just logged hello and it all worked quite nicely.

What I didn't notice initially is that it was running the target service when I started the timer.

Later, I changed it to refer to my real, long-running (hours) process and, as you might imagine, that made it very clear that it was being run an extra time.

The problem is that my boilerplate had a 'Requires' statement in the Unit stanza. When this is around, systemd does a 'start' on that Require'd process when the timer is started.

The Require statement I copied said...

Requires=myThing.service (just like it had Unit=myThing.service in the Timer stanza)

Turns out that systemd starts the Require'd service if it is not already running in case, I suppose, you forgot to.

The only reason I can imagine for having the target process in the Requires statement is so that it runs once at startup. I don't know but, there you have it. If you are getting extra invocations of your service, get it out of the Requires statement.
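For reference, here is roughly what the pair looks like with the troublesome Requires line gone (names, schedule and path invented):

# myThing.timer
[Unit]
Description=Schedule myThing

[Timer]
OnCalendar=*-*-* 03:00:00
Unit=myThing.service

[Install]
WantedBy=timers.target

# myThing.service
[Unit]
Description=The long-running myThing job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myThing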



systemctl (systemd) Timer onCalendar Unit Goes Into Dead State When Target Service is Stopped

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

# do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose

It showed a nice schedule with 'status' and ran correctly at the right time. I did some debugging and experiments to tune the system and, after seeming to work ok, it started telling me that the timer was dead. I could find no reason during a long, harrowing debug session.

Here is the answer.

I had started the process by implementing the timer/service pair pointing at a trivial test script. It ran and quit instantly. I did not notice that it was running and quitting on timer start. I also think that I (incorrectly) did "systemctl start" on the quick test service which then exited normally.

Later, I wired it up to my real process, one that takes more than an hour to execute. Of course, I don't want to wait that long so I did "systemctl stop".

After that, the timer would not work. It said it was dead. I tried everything.

The problem is that the boilerplate I grabbed included a "Requires" statement in the Unit stanza. It was set to the same name (myThing.service) as the target service, so I did it too. That was an error. I do not know why the boilerplate author included it, but requiring the target service as a dependency makes no sense even though it basically worked.

When I did systemctl stop on the long-running target service, that put it into a state where the dependency could not be met. I'm not entirely sure what the difference between never-run and stopped is but, I have proven this.

When I removed the extraneous 'Require' statement, that dependency went away, the timer started correctly and all was happiness and unicorns.

What I still don't know (Hey, Smartypants, that's what comments are for) is why stopping the target service made it violate the Require constraint.