tag:tech.genericwhite.com,2013:/posts I Love Javascript! 2022-05-21T21:08:11Z TQ White II
tag:tech.genericwhite.com,2013:Post/1832046 2022-05-21T20:38:55Z 2022-05-21T21:08:11Z TIL: NodeJS modules are all Singletons
I am not sure if this applies to ECMAScript modules, though I suspect it does. But...

I have known all along that NodeJS only loads a module once no matter how many times it appears in require() statements. Today I ran into a ramification that I never realized. Makes perfect sense but I never thought about it.

I will say that part of the reason I didn't think about it is that I am rabid in my opposition to global context. As soon as I learned enough, I wrapped everything in a function and only ever pass values into functions and classes. I never use closure in a situation longer than a few lines and regret even that. (The only, only thing I like better about PHP than JS is its 'function() use' statement instead of closure.)

It turns out that every single require()'d module is actually a singleton. Make an object and push crap into it from anywhere in your system and all the crap from all the sources is in that one object, their only connection being a require() statement. It's the same with any side effects. They happen only once.
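
A minimal sketch of what that looks like (file names invented for illustration):

// registry.js
module.exports = { items: [] };

// moduleA.js
const registry = require('./registry');
registry.items.push('pushed from moduleA');

// moduleB.js
const registry = require('./registry');
registry.items.push('pushed from moduleB');

// main.js
require('./moduleA');
require('./moduleB');
console.log(require('./registry').items);
// [ 'pushed from moduleA', 'pushed from moduleB' ] <-- one cached object, shared by every require()

Node caches the module the first time it is loaded; every later require() of the same path hands back that same cached exports object.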

Consequently, my modules are always a single function that is exported in one of these ways:
  • module.exports = moduleFunction; //if I want to pass arguments
  • module.exports = moduleFunction();
  • module.exports = args=>new moduleFunction(args);
The only other statements are require() statements which, if they are mine, have the same carefully controlled lack of state. I always explicitly initialize everything about a module separately from require()ing it.
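
A minimal skeleton of that pattern looks something like this (names are placeholders, not code from a real project):

// someModule.js
'use strict';

const moduleFunction = function(args = {}) {
    //all state lives inside this function; nothing sits at module scope
    const config = { ...args };
    const doThing = input => `${config.prefix || ''}${input}`;
    return { doThing };
};

module.exports = moduleFunction; //the caller initializes: require('./someModule')({ prefix: 'x' })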

But, it is merely good luck that I never got my ass bitten painfully by this. If anyone had ever asked me about it, I would have said, "Yes, you have to watch out for variables that persist between require() actions." But, nobody ever did and I have never given it one single thought. It's embarrassing. Saved only by good luck.

I guess, though, that there is some benefit to being crazed about carefully controlled variable scope and accessibility. Made it so that there wasn't any loose data to be compromised. So, lucky me.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1794252 2022-02-11T16:20:14Z 2022-02-11T16:20:14Z Javascript Template Literal Tagged Templates Variation

The javascript phrase

test`That ${person} is a ${age}.`

Has never been useful to me and so I have never really understood it. Today I found something that relies on it. As a consequence, I had to figure out how it works. The usually awesome MDN gave working examples but did a terrible job of explaining what was going on. I hereby leap into the breach.

The first thing is that the syntax is a sort of function call with no parentheses. It is a function name followed immediately by a template literal. Weird to look at but there you have it.

The next thing is that the function execution has a big, complicated process behind it. When it is called, the JS engine does these things...

  1. It splits the template literal on the substitution tags into an array of strings. It's as if it ran a regex to change each '${var}' to a comma and then did a split(',').
  2. It collects the values for each substitution and passes them to the function as additional arguments, in the order they appear.


Eg,

The effective function call is:

test(stringsArray, person, age);

This allows you to process the components of the literal in any way you want. Assemble the strings backwards? Sure. Put the values in the wrong place? Can do.

Below I constructed an example whose tag function just reassembles the string, as if there were no tag at all. I felt like it helped me understand.

#!/usr/local/bin/node

let person = 'Mike';
let age = 28;

function test(strings, ...keys) {
    let output = '';
    strings.forEach(item => {
        const tmp = keys.shift(); //the value that follows this string segment, if any
        output += `${item}${tmp !== undefined ? tmp : ''}`;
    });
    return output;
}

let output = test`That ${person} is a ${age}.`;

console.log(output);
// That Mike is a 28.
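
Since the tag function receives the pieces separately, it can do whatever it wants with them. A purely illustrative variation using the same person and age:

function shout(strings, ...values) {
    return strings.reduce(
        (out, str, index) =>
            out + str + (index < values.length ? String(values[index]).toUpperCase() : ''),
        ''
    );
}

console.log(shout`That ${person} is a ${age}.`);
// That MIKE is a 28.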






]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1788707 2022-01-28T15:10:47Z 2022-01-28T15:10:47Z Remembering Details of Programs and Environments

I suffer a great deal of hassle trying to deal with the insane amount of details that come along with being a programmer. Partly it's because I am really old and cannot remember a single thing but, really, I have never been able to remember details. I long ago developed practices to accommodate this weakness.

I started by using BASH aliases to remember various commands. I still have a couple of aliases from fifteen years ago for lsof, a command I use rarely and can never remember how to work. Later, I got the idea of having an alias (initProjA) always present in my shell for each project that executed a script, initDevTerminal, that was in a management directory I kept in every project. This script created aliases and also showed notes for things I might forget. EG, I keep a project for my computer that just tells me where the NGINX config files are located since I can never remember.

For real projects, the initDevTerminal script generates aliases that initialize or execute tests, copy code to production servers, execute the toolchain, start and stop things, whatever is needed. Some projects have a dozen entries. Some fewer. One longstanding and stupidly complicated one actually has pages for the various subsections. The script also initializes environment variables if needed and, in some cases, swaps out launchctl or systemctl jobs. The important thing is that most of the complicated commands are put into aliases or listed as notes when 'initProjA' is executed on the command line for easy reference.

I also manage applications on various client servers. For a long time, I would put such a helper script in my BASH environment on the client systems. Over the years, that became complicated to manage so I changed the structure. Now I put the scripts in the system-specific config directory, the one where my programs look to find out, eg, the actual path or API key or something.

In the directory for each computer, my development one included, there is a directory called terminalAndOperation. Each has a well-known file named 'initTerminal'. Depending on when I last worked on the project, that file might be an old, simple one or a new, cool one that includes, for example, boilerplate reference to a common file at the root of the configs directory so I do not have to repeat things that are, well, common to all the environments.

The main things I have figured out are, 1) use a script to contain all the stuff I know I will not remember, 2) structure projects so that there is a script for each environment I have to work on, 3) be rigorous in making all that structure the same because I know I will not remember where the script is if it's different, 4) use aliases (also, btw, listed in my .bash_profile) to execute them so I am not confused and 5) make sure to write all the fussy details in those scripts and keep them up to date and refactored frequently.

This has allowed me to keep being a productive programmer long into senility. My wife has to remind me about everything but, when my colleagues need to know fussy details, they ask me and I can easily find the answer.


]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1692319 2021-05-18T18:55:04Z 2021-10-01T19:33:48Z Javascript Date Format Option Property Values

weekday:    "narrow", "short", "long"
year:    "numeric", "2-digit"
month:    "numeric", "2-digit", "narrow", "short", "long"
day:    "numeric", "2-digit"

hour12:    true, false
hour:    "numeric", "2-digit"
minute:    "numeric", "2-digit"
second:    "numeric", "2-digit"

timeZone:    [list](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
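
These are the option properties for Intl.DateTimeFormat and the toLocale*String() methods. A quick usage sketch (locale and values chosen arbitrarily):

const options = {
    weekday: 'long',
    year: 'numeric',
    month: 'short',
    day: '2-digit',
    hour: '2-digit',
    minute: '2-digit',
    hour12: false,
    timeZone: 'America/Chicago'
};

console.log(new Date().toLocaleString('en-US', options));
// something like: Saturday, May 21, 2022, 15:08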
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1622594 2020-11-29T18:13:06Z 2020-11-29T18:13:06Z systemctl (systemd) Timer onCalendar Unit Executes Target Service When Start or Restart

(Yes, this looks familiar. It's the same debugging session as 'runs when started'. Be patient for heaven's sake.)

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

#do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose.

I started my debugging with a quick script that just logged hello and it all worked quite nicely.

What I didn't notice initially is that it was running the target service when I started the timer.

Later, I changed it to refer to my real, long-running (hours) process and, as you might imagine, that made it very clear that it was being run an extra time.

The problem is that my boilerplate had a 'Requires' statement in the Unit stanza. When this is around, systemd does a 'start' on that Require'd process when the timer is started.

The Require statement I copied said...

Requires=myThing.service (just like it had Unit=myThing.service in the Timer stanza)

Turns out that systemd starts the Require'd service if it is not already running in case, I suppose, you forgot to.

The only reason I can imagine having the target process in the Require statement is so that it runs once at startup. I don't know but, there you have it. If you are getting extra invocations of your service, get it out of the Require statement.



]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1622584 2020-11-29T17:48:37Z 2020-11-29T18:14:31Z systemctl (systemd) Timer onCalendar Unit Goes Into Dead State When Target Service is Stopped

I grabbed some boilerplate off the web to make my systemd timer/service pair work. Worked fine.

ie,

systemctl enable myThing.timer

systemctl enable myThing.service

systemctl start myThing.timer

#do not start myThing.service or it will run immediately; that's ok at some other time for a special non-timer purpose.

Showed a nice schedule with 'status' and it ran correctly at the right time. I did some debugging and experiments to tune the system and, after seeming to work ok, it started telling me that the timer was dead. I could find no reason through a long, harrowing debug session.

Here is the answer.

I had started the process by implementing the timer/service pair pointing at a trivial test script. It ran and quit instantly. I did not notice that it was running and quitting on timer start. I also think that I (incorrectly) did "systemctl start" on the quick test service which then exited normally.

Later, I wired it up to my real process, one that takes more than an hour to execute. Of course, I don't want to wait that long so I did "systemctl stop".

After that, the timer would not work. It said it was dead. I tried everything.

The problem is that the boilerplate I grabbed included a "Requires" statement in the Unit stanza. It was set to the same name (myThing.service) as the target service so I did it too. That was an error. I do not know why the boilerplate author included it but requiring the target.service as a dependency makes no sense even though it basically worked.

When I did systemctl stop on the long-running target service, that put it into a state where the dependency was not able to be met. Not entirely sure what the difference between never run and stopped is but, I have proven this.

When I removed the extraneous 'Require' statement, that dependency went away, the timer started correctly and all was happiness and unicorns.

What I still don't know (Hey, Smartypants, that's what comments are for) is why stopping the target service made it violate the Require constraint.




]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1590053 2020-09-02T20:49:18Z 2020-09-02T20:49:19Z GITLAB SSH key won't work, asks for password

We are using a locally hosted instance of gitlab. It's a nice program and we are enjoying many things about it.

Most of the people working on it live a fairly simple life. I am not one of them. I work on a zillion servers and SSH all over the place. I have many RSA keys for various purposes. To keep track of them, I use the comment section of the public key to let me know which one is which.

EG, a public key might end

...SSCtQUvSJ5jdfW9YB3w== mySpecialKeyName

Time came to move a repo from github to gitlab. I grabbed the public key I was using for github and pasted it into the settings of gitlab. It accepted it happily.

Also, it did not work. When I tried to push my newly re-origined ("git remote add origin git@...") repo to gitlab, it asked me for a password. There is, of course, no password, nor should it ever ask for one.

After trying EVERYTHING, I noticed instructions in the gitlab page that reminded me to copy my public key completely. It said from ssh-rsa to the "email address".

I changed it to

...SCtQUvSJ5jdfW9YB3w== tq@myGitlabEmailAddress.com

And it worked.

You're welcome.

PS, Gitlab, if you ever hear about this: This is bullshit. There is not a single annotation anywhere. Even if I was in the habit of using the RSA key comment section for an email address, there's no reason to imagine it would be the right email address. If you're not going to make *me* responsible for establishing the connection between the key and my account, at least do me the solid of giving me an error message when it doesn't have the right info.






]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1460845 2019-09-29T15:39:38Z 2019-09-29T15:39:38Z Use iTerm2 for bbedit 'Go Here in Terminal' function

Obviously, nobody wants to use Apple's Terminal in a world where there is iTerm2.

It's equally obvious that Barebones bbedit is awesome.

If you want the context menu selection, Go Here in Terminal, to open in iTerm2, paste this into the command line:

defaults write com.barebones.bbedit TerminalBundleID "com.googlecode.iterm2"

(Make sure the quotes weren't made curly by your web browser. You can use bbedit->Text->Straighten Quotes to make sure.)



]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1421487 2019-06-18T14:37:05Z 2019-06-18T14:37:05Z macOS Will Not Save Mail Signatures

I've been going crazy. I need to change my company email signature. I do so, it works fine but next time I quit Mail.app it goes away. I messed around with plists and permissions. Nothing helped.

Then I found an article HERE.  It had the answer. I repeat it here for posterity.

The problem arises from some bug between Mail.app and iCloud Drive. The problem has been around at least since 2017. There does not appear to be a fix that will make Mail.app save signatures as long as it is involved with iCloud Drive. Fortunately, you can turn that on and off with no apparent harm.

Go to System Preferences->iCloud, then click the Options... button next to iCloud Drive. In there, you will see a list that includes Mail. Uncheck Mail.

(Note: Mail appears in the list initially shown when you open the iCloud preference panel. This is not the correct one to uncheck. You need to click into iCloud Drive Options and uncheck Mail there.)

Back in Mail.app, you will find that signatures save correctly. Do your work. Quit Mail and reopen and you will find your new and changed signatures in place. Win!!

Once done, you can return to the iCloud Preference Panel->iCloud Drive->options and recheck Mail. Signatures will remain. Life will be good.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1368628 2019-01-30T20:04:53Z 2019-01-30T20:06:37Z Nodejs 10, Readable Stream (close, end, etc) Events Not Being Called


The context is that I had a request to add a feature to a Nodejs app that I wrote a couple of years ago. The purpose of the app is to grab a bunch of files off of an SFTP server and make files out of them. The purpose of this post is to provide a google friendly summary so nobody has to spend as much time on this as I did.

So, I fired up the app on my development machine. It looked ok at first but then it just stopped processing. Further investigation told me that the downloaded files got to disk and that the problem was that the ‘close’ event was never firing.

Inspiration strikes and I realize that the app was written for an older version of Nodejs. More experimentation revealed that the app runs correctly on versions up to 9.11.2 but at 10.0.0, it fails. For google, I restate: upgrading nodejs to version ten causes a readable stream to never fire its 'close' (or 'end' or any other) event.

After a full day and a half of research and experimentation, I learned that, starting in nodejs 10, readable streams default to ‘pause’ mode. Previously, they were in ‘flowing’ mode. In ‘pause’, the stream waits for the consumer to take data before it does anything else.

Well, sort of. It does provide the data, somehow. After all, my zip files were created and did work correctly.

In any case, I found a clue someplace that there had been a change to the default for streams. I found another clue that talked about the ‘paused’ behavior. Eventually I found a comment on stackoverflow (here) that mentioned read();

So, here’s the code:

sftpControls.sftpConnection
    .get(remoteDirectoryPath + name, false, null)
    .then(zipSourceStream => {
        const filePath = `${zipFileDirPath}/${name}`;
        zipSourceStream
            .pipe(qtools.fs.createWriteStream(filePath, { encoding: null }))
            .on('close', () => {
                zipSourceStream.destroy();
                next('', name);
            });
        zipSourceStream.read(); //tell the stream to be 'flowing'
    })


That is the entire change. Add one read() and the stream becomes ‘flowing’ and emits a close event when it’s done.



A small warning. I tried to chain the .read() along with the pipe() and on() methods. That failed. The read() method does not return the source object. You will get an error, “no read().pipe()”. Reverse it and get “no pipe().read()”. You have to run it like it’s shown, directly off of the original stream object.

If anyone reading this actually understands why this is happening or how it works, I am sure future programmers will be grateful to read the comments. I know I will.



]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1308951 2018-08-03T15:01:40Z 2018-08-03T15:01:41Z Using an asynchronous function inside a synchronous one - NOT!!

tl;dr: You can't. At least not in Javascript. (I beg you. Prove me wrong in the comments!)

Here's the problem I face.

I have a template system I use. It relies on a .ini file that has all the parts of (in this case) an email message. Subject, text body, html body, etc. It replaces tags in the elements with properties of a supplied object of the same name. The template also includes a property called 'transformations'.

Transformations contains a set of properties whose values are functions. When processing the replacement, my system runs the transformations against the other replacement properties and adds the result for replacement in the template. It's useful for making sure that names are capitalized or adding dates or changes to html styling based on the data.

The new challenge is that I need to have a transformation that gets a result from the database. The problem is that the database access is asynchronous. The system is set up to run simple string processing synchronously. There is no callback option.
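
A minimal sketch of the problem, with a fake async 'database' lookup standing in for the real one (none of this is the actual template system code):

const fakeDbLookup = key =>
    new Promise(resolve => setTimeout(() => resolve(`value for ${key}`), 10));

const transformations = {
    upperName: data => data.name.toUpperCase(), //synchronous: fine
    dbValue: data => fakeDbLookup(data.name) //asynchronous: returns a Promise, not a string
};

const data = { name: 'tq' };
const replacements = {};
for (const [name, fn] of Object.entries(transformations)) {
    replacements[name] = fn(data); //nothing here can wait for the Promise to settle
}

console.log(`${replacements.upperName} / ${replacements.dbValue}`);
// TQ / [object Promise]  <-- the async result simply is not there yet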

I spent an entire day screwing around with generators/yield, async/await and reading what felt like the entire internet looking for an idea of how to deal with this. I consider putting the database function into code, per se, to be a crime. (I do not mind calling a specific system function but it has to be called from the primary locus of control, ie, the template. I don't want someone to look up later and find that the result is just not there anymore with no clue where it should have come from.)

I ended up adding a property to the template called 'asyncronousTransformations' and having the program that uses the template process them. This required back filling asynchrony several levels up the code. As you might imagine, sending email is asynchronous but the (hitherto) simple string processing was not. Now it is.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1288819 2018-05-29T17:59:50Z 2018-05-29T17:59:50Z How to turn off the Tap to Wake feature on your iPhone X

My iPhone has been a huge pain in the butt since iPhone X made it so that it thought my nipple under my shirt pocket was a finger that required it to do whatever my nipple wanted to do.

No more!! This article explains, settings->general->accessibility->tap to wake!! Disable it and my nipple is no longer empowered to hang up my phone calls.

And, it's harmless because settings->display & brightness->raise to wake makes it so that it almost always automatically wakes up when I want it to.

Salvation.

read more: How to turn off the Tap to Wake feature on your iPhone X

Tue May 29 2018 12:56:23 GMT-0500 (CDT)
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1281113 2018-05-06T20:47:11Z 2018-05-06T20:47:11Z Missing req.body received from nodejs/npm request.post() call

This is almost trivial unless you forget about it and try to research the solution. It's stupidly obscure, probably because it's so trivial.

I am using request to post into some application. I didn't have a convenient boilerplate to reference and the damn internet did not provide a decent example. Here is a working and completely adequate post block:

    request.post(
        {
            url:url,
            headers: {
                authorization: `${userId} ${authToken}`,
                'Content-Type': 'application/json'
            },
            body: JSON.stringify({ propertyName:'propertyValue' })
        },
        (err, response, body) => {
            console.dir({ 'err': err });
            console.dir({ 'response': response });
            console.dir({ 'body': body });
            //callback(err, body);

        }
    );

This works but it didn't at first. On the receiving end (also nodejs, using express), the call was just fine, came back 200, showed its presence on the far end but, absolutely no post body content. The problem? I had forgotten to include the Content-Type header.
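
For context, here is a sketch of the kind of receiving end involved (not the actual application code). The point is that express's JSON body parser only runs when the incoming Content-Type is application/json; without that header it skips the request and req.body stays empty:

const express = require('express');
const app = express();

//only parses bodies whose Content-Type is application/json
app.use(express.json());

app.post('/endpoint', (req, res) => {
    console.dir({ body: req.body }); //empty without the header; { propertyName: 'propertyValue' } with it
    res.sendStatus(200);
});

app.listen(3000);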

Easy but, since I forgot about it, I needed some help. Unfortunately, the internet was not so helpful. Eventually I did find a clue but, hopefully I've said enough here that google was able to help you learn this a lot more quickly than I did.
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1274231 2018-04-18T15:59:39Z 2018-04-18T15:59:39Z SSH issues with Mac OS X High Sierra

macOS sftp "no matching cipher found"

Add this to ~/.ssh/config

Host *
    SendEnv LANG LC_*
    Ciphers +aes256-cbc

Works like a charm.

Thanks to Jason.

read more: SSH issues with Mac OS X High Sierra

Wed Apr 18 2018 10:57:46 GMT-0500 (CDT)
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1224850 2018-01-02T00:25:07Z 2018-01-02T00:31:55Z Public Key Encryption Playground

My interest in public key encryption continues. I wanted to be able to actually use a tool to play with, so I wrote one.

It does these things:

1) Generate a key pair.
2) Extract a public key from a private key.
3) Manually enter public or private key.
4) Create a crypto text string from plain text input.
5) Extract plain text from a crypto string.


You can play with this at: http://genericwhite.com/rsaEncryptionDemo/

Here's a brief video to get started...



Code is available on github.
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1224307 2017-12-31T07:26:11Z 2022-02-11T16:22:56Z Public Key Encryption for NodeJS with node-rsa
When I was trying to make node-rsa work, I felt that the instructions were a little bit
cryptic. It took way too much time to figure out the hyphenated argument structure,
ie, pkcs1-public.

Also, I'm not a huge expert in encryption stuff so it took way too long to figure out that
the key produced by ssh-keygen was wrong and what to do about fixing it.

I decided that the things I learned need to be documented for posterity.

So, when I got it working, I tuned this up for readability and put it in a repo so
that you can find it. It does three things.

1) Encrypt with public key/decrypt with private key, both from files
2) Encrypt with private key/decrypt with private key, both from files
3) Generate keys to use for decryption and print them out

Change the variable testName to try them out.

Just navigate to the directory and run the file:

node testNodeRsa.js
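
For a taste of what the repo exercises, here is a minimal sketch of the round trip with node-rsa (not the repo's code; the key size and strings are arbitrary, and it uses the hyphenated format arguments mentioned above):

const NodeRSA = require('node-rsa');

const key = new NodeRSA({ b: 2048 }); //generate a fresh key pair

//export both halves using the hyphenated format strings, eg, 'pkcs1-public-pem'
const publicPem = key.exportKey('pkcs1-public-pem');
const privatePem = key.exportKey('pkcs1-private-pem');

//import each half into its own key object, as you would after reading them from files
const publicKey = new NodeRSA(publicPem, 'pkcs1-public-pem');
const privateKey = new NodeRSA(privatePem, 'pkcs1-private-pem');

const encrypted = publicKey.encrypt('hello, world', 'base64');
const decrypted = privateKey.decrypt(encrypted, 'utf8');

console.log(decrypted); // hello, world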

Bonus! For your convenience, here is the command to convert the .pub generated by ssh-keygen into a .pem:

ssh-keygen -f keyName.pub -e -m pem > keyName.pem

You're welcome.

CHAPTER TWO

I want to be able to use the keys in a browser. I figured these learnings were worth documenting, too.

I used Browserify.

browserify testNodeRsaBrowser.js -o testNodeRsaBrowserBrowserify.js

If you put this repository someplace you can serve html, it will let you play with it.

You can also play with it at:  http://genericwhite.com/rsaEncryptionDemo

CHAPTER THREE

My interest in public key encryption continues. I wanted to be able to actually use a tool to play so I wrote one.

It does these things:

1) Generate a key pair.
2) Extract a public key from a private key.
3) Manually enter public or private key.
4) Create a crypto text string from plain text input.
5) Extract plain text from a crypto string.


You can play with this at: http://genericwhite.com/rsaEncryptionDemo/

The code is available on github.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1220851 2017-12-23T16:47:19Z 2017-12-23T16:47:19Z Javascript Alert Debug: Coolest thing I ever did

Here's the problem. A co-worker got a bizarre alert() dialog in a web app. It popped up and just said "1". It wasn't her code and she was completely stumped. That's how I got involved.

I looked around the code. Searched for errant /alert\(/, etc. Nothing worked.

Then I hit the developer console. I typed the best single thing I ever typed:

window.alert = msg => { console.log(msg); console.trace(); };

Yeh, I did that. You may enjoy my awesomeness.

console.trace() gives a stack dump at the location from which the alert dialog was called. That was very helpful.

You are welcome.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1215669 2017-12-12T19:52:42Z 2017-12-12T19:54:15Z SIF v3.5 Adds Support for Individualized Education Plans

Access for Learning (a4l.org) is happy to announce a major new addition to the School Interoperability Framework (SIF) data model to support students with special needs. Comprising two major components, xIndividualizedEducationPlan and xIepTransfer, this release is the result of a two year effort led by TQ White II of the Central Minnesota Educational Research and Development Corporation (cmerdc.org) with the help of national experts in special education and data modeling. The effort is motivated by the recognized need to make a student’s individualized education plan (IEP) content available when a student transfers into a new school.

The new data models are intended to support three main use cases. 1) Immediate support for an administrator the very first time a student shows up in a new school. 2) Information to support the special education team as they adapt plans already in place to the resources and strategies of the receiving school. 3) Sufficient information for schools and districts to support reporting and resource management needs. The goal is to ensure that a school has the information needed to provide students having special needs with critical, ongoing services.

This new model is based on a thorough survey of the standard form sets published by nearly every state, as well as the federal government. They were categorized into representative groups for an exhaustive inventory of data and evaluation of documentation strategies. With the input of a workgroup averaging about ten people a structured hierarchy of elements was developed and refined. Once done, the work was passed to Jill Parkes, education data analyst, at CEDS (Common Educational Data Standards) a federal organization that develops a dictionary of education related data definitions.

The CEDS process did two things. First, they evaluated each element in the new, tentative IEP data model and, where appropriate, attached a formal definition to it, either new or in reference to an existing definition. Then it was put into a formal CEDS community review. CEDS stakeholders, especially those with an interest in special education, reviewed the new definitions and approved them. This discussion improved confidence in the data design and made it more complete.

After this information was added to the XML, and with substantially more confidence, the data model was formally moved into the Access for Learning community review process. Though some people looked at the XML and offered comments, the main process involved TQ making presentations to various groups explaining the process and product in detail. Many valuable comments were made that resulted in changes but two were especially valuable.

First was Megan Gangl, a co-worker of TQ’s at cmERDC. Megan has spent her entire career as special education teacher, case manager and leader of case managers. Her decades of experience brought many new details to the model, suggested reorganization of some parts and validated others. She identified missing details, helped to rename elements and refine both their data definitions and the explanations of their meanings. After the initial presentation, she spent several days collaborating on the model in detail. Once done, confidence in the usefulness, correctness and completeness of the model was again tremendously improved.

The day before community review started in October, a new person, Danielle Norton, joined the North American Technical Board. Danielle’s team contributed to the community review with sessions including the detailed overview presentation and discussions with various subgroups of her team. A particularly important contribution was made by Rick Shafer, a long experienced data architect, who noted some problems with normalization in the data model.

The initial motivation for the IEP effort was to support the transfer of students between schools or districts. Throughout the process, the foremost intention was to provide complete information for the receiving educational agency. As a consequence, the data model included data elements that were duplicates of things that were defined elsewhere in SIF. That is, it was badly de-normalized. It made it so that the element would provide a complete picture for a receiving district but was ill-suited for use as a local SIF entity object.

To solve this problem, the data model was split into two elements, xIndividualizedEducationPlan and xTransferIep. The former is completely normalized to serve as a formal entity. No data is represented that is defined elsewhere in SIF but is, instead, referenced with a refId. If a receiving program needs to know those details, it is expected to query the appropriate system for details.

The latter is conceived as a reporting object, i.e., it is intended to wrap information that is defined elsewhere for convenient reference. The xTransferIep includes structures that allow it to contain data referenced in the IEP that would otherwise require a query to a system to which the receiving organization may not have access. The xTransferIep is a complete representation of an IEP containing all details.

In this process, a new concept was added to SIF, the typed refId. Troubled by the fact that refIds inside the IEP provided no information about where the target information referenced by the refId could be found, TQ added several new data types to the data model. Each is a UUID (as is the generic refId) but each also included documentation elements that explain what the UUID refers to and where the data can be found. For example, one of the new types, iepCommonStudentContactRefIdPointerType, explains that it references a contact inside a student object, distinct from iepCommonContactRefIdPointerType, which points to an independent xContactType, e.g., service provider or doctor, somewhere else.

The last thing is that, with the help of Access for Learning’s John Lovell, the new data models were refined to fit the new xPress object strategy. It does not use XML attributes and refIds are only present for elements that need it. This allows easier use of the model in non-Java/.NET systems. xPress is a more recent addition to SIF v3 and has proven to be easier to work with and, consequently, more popular. It is expected that xPress will be the foundation of new infrastructure work to formally bring JSON into the data model.

As with any first effort, it is fully understood by TQ and the entire community that as this data model comes into actual use, shortcomings will be noted and ideas will be conceived. It is intended that the SpecEd/IEP workgroup will reconvene in the future to evaluate the results of implementation. That is to say that, as with the rest of SIF, the new IEP data models being released with SIF v3.5 are not the end of the effort to better support students with special needs. This release is the beginning of an ongoing effort to ensure that SIF is able to help schools, districts and teachers have the information needed to support optimal educational outcomes and to allow students with special needs to have the brightest possible future.

For even more information, a video recording of the IEP Data Model Overview is available HERE. To contact TQ White II, leave word in the comments.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1172543 2017-07-11T19:01:36Z 2017-11-21T06:57:42Z mysql reset root password with fix for Socket problem

I don't reset mysql's root password often enough to remember how to do it. So, I google and go through endless hassle because all the examples are old or incomplete. It's maddening.

The main problem is that nobody includes the stuff below referring to /var/run/mysqld. I don't know why. Perhaps it was not needed in the past. However, it sure is now. You can tell if you need it by seeing this:

mysqld_safe Directory '/var/run/mysqld' for UNIX socket file don't exists (2)

when you try 'mysql -uroot mysql' without it.

The sequence below works on my Ubuntu 16 installation. 100%. I did it a few times because I wanted to make sure I had done it correctly and repeatedly.

#mysql: reset root user password bash commands
sudo service mysql stop
sudo mkdir /var/run/mysqld
sudo chown mysql: /var/run/mysqld
sudo mysqld_safe --skip-grant-tables --skip-networking &
mysql -uroot mysql

#in mysql:
UPDATE mysql.user
SET
  authentication_string=PASSWORD('PUT_NEW_PASSWORD_HERE'),
  plugin='mysql_native_password'
WHERE User='root' AND Host='localhost';
exit;

#and back in bash
sudo mysqladmin -S /var/run/mysqld/mysqld.sock shutdown
sudo service mysql start

mysql -uroot -pPUT_NEW_PASSWORD_HERE

(Of course, mysql will beef that you put your password in the command line. Don't do it if your bash_history or logs could be accessed.)


By the way, I got this information from this website. Obviously this person is a genius. Props.

https://coderwall.com/p/j9btlg/reset-the-mysql-5-7-root-password-in-ubuntu-16-04-lts







]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1172216 2017-07-10T17:58:15Z 2017-07-10T17:58:15Z SIF JSON Response

So, I saw the PDF John posted discussing JSON format ideas. Ian and Jon, you rock. It is a great document with excellent ideas. Most of it makes good sense to me and, I'm sure that, as I reread and understand better, I will love it even more.

That said, there is one fundamental detail I do not love:

    authors: {
        '#contains': 'author',
        '#items': [{ '#value': 'John Smith' }, { '#value': 'Dave Jones' }]
    }

First reason is that the label 'authors' is plural but contains only one thing. In my opinion, things named plural should always be arrays. Second is that I envision a line of code like:

    const firstAuthor = inData.authors["#item"][0]["#value"];

Looks a lot like C# to me, low signal to noise ratio.

In my other Javascript life, we would be inclined to use inflection, i.e., the assumption that 'authors' has elements with an implied name of 'author' and vice versa. I can understand that our XML roots make this difficult to accept.

Consequently, I am inclined toward the everything is an object (if it's not a list) approach. EG,

    authors: [
        { author: 'John Smith', '@type': 'bigshot' },
        { author: 'Dave Jones', '@type': 'contributor' }
    ]

This provides a data structure that mentions the word 'author' the same number of times as does the XML. That it also provides room for attributes is good. This seems nicer to me:

    const firstAuthor = inData.authors[0].author;

I don't know if this can be expressed properly with openAPI or if it violates some other rule of interaction with XML. I do know that, as a Javascript programmer, I would rather use the form I suggest.



]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1152540 2017-05-08T17:36:05Z 2017-05-08T17:45:57Z Comment on Net Neutrality.

Trump's FCC is about to permanently turn the internet over to the corporations. In a few years, your ISP will be like a cable provider. You can only access the sites that make them money. Other sites will be very slow or non-existent. You will pay for packages. Package A: YouTube, Netflix. Package B: Netflix and Hulu. You want to start an internet business, you will have to work a deal with every ISP corporation. It will be bad.

Go to GoFccYourself.com to be forwarded to the correct page for your comment. Do it every day.

When you click on GoFccYourself.com, you will end up at a page that looks like this: 

Click on the 'Express' link. It will take you to the entry form.

Suggested text:

Net neutrality is essential for freedom. Net neutrality requires Title II regulation of internet service providers. ISPs should be completely prevented from influencing the cost or performance of the internet resources and websites I want to use. My bandwidth purchase from my ISP and the site's bandwidth purchase from their ISP should be the only charges.

Mon May 08 2017 12:35:18 GMT-0500 (CDT)
]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1132846 2017-04-19T18:54:08Z 2017-04-19T18:54:08Z Macbook Display Port VGA Adapter Doesn't Work

Google as I might, nobody would tell me that the adapter had firmware that could be out of date. Eventually, I found a reference to the idea in the form of an updater that would not work.

The important thing is that I realized that if you cannot use your Macbook for presentations on a VGA projector or display, it might be that you simply need to buy a new one.

I did and now it works.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1147610 2017-04-19T18:53:09Z 2017-04-19T18:53:09Z MacOS/OS X Dock presentation formatting with Spacers!!
I have a lot of stuff in my dock. I have long wished I could have some sort of grouping mechanism so it was easier to find what I want. Today I learned that you can add spaces to your Dock. Life is good.

Enter:

defaults write com.apple.dock persistent-apps -array-add '{"tile-type"="spacer-tile";}';
killall Dock;

In your Dock, you will see a space that you can drag as you see fit. You can repeat the above as many times as you want. If you want to remove the space, just drag it out like anything else.

I added some to separate my email and web browser from my development tools and those from the rest of the stuff.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1127342 2017-01-30T20:17:49Z 2017-01-30T20:21:23Z Using 'prettier', a Javascript formatter in bbedit

You have to have NodeJS. If you don't have it, google it and make it happen.

Then you need to install prettier. Its NPM page is here.

To install it, type...

npm install prettier -g

This will install it as a command line utility.

In some file (I do a lot of this, so I created a Scripts folder and called the file ~/Scripts/bin/js/runPrettier.js; you can do whatever works for you, just remember that the bash file below has to point to this file), insert this:

#!/usr/local/bin/node
const prettier = require('prettier');

var inString = '';

var writeStuff = function() {
    var outString = inString;
    outString = outString.replace(/^[\s]$/gm, '/*linebreak*/'); //I like to retain linebreaks
    outString = prettier.format(outString);
    outString = outString.replace(/\/\*linebreak\*\//gm, ''); //you can remove these if you don't
    process.stdout.write(outString);
};

//the rest ========================================================
process.stdin.resume();
process.stdin.setEncoding('utf8');
process.stdin.on('data', function(data) {
    inString += data;
});
process.stdin.on('end', writeStuff);

Then, in the bbedit Text Filters directory (~/Library/Application Support/BBEdit/Text Filters), create a file (I called mine 'runPrettier') containing this...

#!/bin/bash
~/Scripts/bin/js/runPrettier.js

In the terminal make the bash script executable...

chmod +x ~/Library/Application\ Support/BBEdit/Text\ Filters/runPrettier

and, voila!, you have an operating formatter for Javascript.

I assigned mine to a command key so I can always make it pretty.


]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1123974 2017-01-17T22:06:54Z 2017-01-17T22:06:54Z NGINX server_name is not working, ignoring config and getting the wrong server including SSL

When NGINX is trying to find something to serve it tries to match all the server names BUT ONLY IF THERE IS A DEFAULT SITE.

I don't understand why it fails even when the server name matches something. However, if you have two separate servers with:

server_name xxx.com

server_name yyy.com

You would expect (assuming that the configs appear in this order) that http://yyy.com would match that server_name. It will not. It will match xxx.com. Why? Because when there is no default, it simply uses the first server. Period.

If you have a default, though...

server_name xxx.com

server_name yyy.com

default_server

It works. yyy.com will match yyy.com.

I came upon this problem because I had a configuration that included the default file that comes with the distribution, and it worked.

Then I added SSL. It did not work. Having long forgotten the issue with default, I debugged like a madman. Then I thought about the default issue (I ran into it sometime in the dark past - it is buried in the docs) and saw, There's a default right there!!!

Eventually (I know. This is the least entertaining punchline in history.), I realized that there was no default for port 443. QED


# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl on;
    ssl_certificate /etc/ssl/PATH/TO/CERT.cer;
    ssl_certificate_key /etc/ssl/PATH/TO/CERT.key;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}






]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1118474 2016-12-27T05:39:02Z 2016-12-27T05:39:03Z Using iTerm2 for the bbedit 'Go here in terminal' command

Turns out that bbedit uses the standard system terminal. Someday I will figure out how to subvert that because I cannot imagine any reason that I would ever want to use Apple's dumb old terminal program when iTerm2 exists. In the meantime, I asked the lovely people at Bare Bones Software how to change bbedit's behavior.

Turns out it's right there in the Expert Preferences list (about a hundred obscure things that I never thought of as I searched the Preferences for some way to control this). To make it easier for future generations, I offer the complete command line:

    defaults write com.barebones.bbedit TerminalBundleID -string "com.googlecode.iterm2"

It works and makes bbedit another fraction better.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1111804 2016-11-29T23:05:24Z 2016-11-29T23:05:24Z nodemon does not watch node_modules

I'm no fan of the node_modules structure. I'd have gone for a dichotomy, node_library and node_modules. node_library would be the place that npm and yarn install stuff from npmjs.org. node_modules would be for my modules and other code. I would have node search up the tree in both folders. Complicated, yes. But having no systematic way of determining which code is mine is simply awful.

In one of the goofiest decisions in the node world, the project monitor, nodemon, does not look inside node_modules to figure out whether to restart a project when files are changed. This is certainly because it would have to monitor a zillion files if it did and the author worries about performance.

The problem is that my projects are comprised of node modules and they are put in node_modules. If there is a better place to put them, I beg you, write some comments and save me and the rest of us from this misery.

So, if you

cannot get nodemon to restart your project or
nodemon won't detect changed files
nodemon will not watch node_modules
(put search phrases in the comments, please)

you can remedy the situation by adding an ignoreRoot key to your nodemon.json file
{
  "ignoreRoot": [".git", ".jpg", ".whatever"]
}


This overrides the default ignore behavior entirely. Choosing to not list node_modules means they can now be watched.

This is, in fact, explained on the github site (here) but, you have to read a lot of stuff to get to it and it doesn't get found by google.

Perhaps this will change that.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1111707 2016-11-29T15:58:08Z 2016-11-29T15:58:08Z Scrolls Bars on Macintosh

Among the worst things that Apple ever did was make it so that scroll bars are only visible when you want to use them. Often it's hard to figure out how to activate them or they go away before I'm done using them.

Of course, Apple is actually awesome and, after all these years, I just realized that they provide a way to make them show all the time. In the two days since I discovered this, I am happy again for the first time.

To accomplish this minor miracle...

System Preferences -> General -> Show Scroll Bars -> Always

It's like being able to breathe again.

]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1093191 2016-09-25T05:46:35Z 2016-11-29T15:58:32Z Semi-colons should be considered mandatory in Javascript

As long as this

var obj={a:'a', b:'b'};
var a = obj
[a].forEach(()=>{})

produces an error, semi-colons are mandatory and any suggestion to the contrary is childish perversity.
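
For contrast, the same three lines with the semi-colon restored run without complaint:

var obj = { a: 'a', b: 'b' };
var a = obj;
[a].forEach(() => {}); //now [a] is its own array literal instead of an index into obj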


]]>
TQ White II
tag:tech.genericwhite.com,2013:Post/1004115 2016-02-29T22:24:21Z 2016-02-29T22:24:32Z rsync error: error in rsync protocol data stream (code 12)

The internet failed to tell me:

This error can result from not having one of the directories present. Yes, I know that rsync creates lots of directories very nicely. Not all of them.

To wit:

rsync someDirectory someUser@1.1.1.1:/home/someUser/system/code

Gave me the data stream error until I created the directory 'system'.

I can't imagine why. The only thing that is distinctive about 'system' is that it is also ~/system, ie, at the top of the user's directory.



]]>
TQ White II