Public Key Encryption for NodeJS with node-rsa


When I was trying to make node-rsa work, I felt that the instructions were a little bit
cryptic. It took way too much time to figure out the hyphenated argument structure,
i.e., pkcs1-public.

Also, I'm not a huge expert in encryption stuff, so it took way too long to figure out that
the key produced by ssh-keygen was in the wrong format and what to do about fixing it.

I decided that the things I learned need to be documented for posterity.

So, when I got it working, I tuned this up for readability and put it in a repo so
that you can find it. It does three things.

1) Encrypt with public key/decrypt with private key, both from files
2) Encrypt with private key/decrypt with public key, both from files
3) Generate keys to use for decryption and print them out

Change the variable testName to try them out.

Just navigate to the directory and run the file:

node testNodeRsa.js
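
For reference, the heart of case 1 looks roughly like this. It's a minimal sketch, not the repo's actual code, and the key file names are made up; the hyphenated format strings are the real node-rsa ones.

const fs = require('fs');
const NodeRSA = require('node-rsa');

// encrypt with a public key read from a file (file names here are hypothetical)
const publicKey = new NodeRSA(fs.readFileSync('keyName.pem', 'utf8'), 'pkcs1-public-pem');
const cryptoText = publicKey.encrypt('some secret text', 'base64');

// decrypt with the matching private key
const privateKey = new NodeRSA(fs.readFileSync('keyName_private.pem', 'utf8'), 'pkcs1-private-pem');
console.log(privateKey.decrypt(cryptoText, 'utf8'));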

Bonus! For your convenience, here is the command to convert the .pub generated by ssh-keygen into a .pem:

ssh-keygen -f keyName.pub -e -m pem > keyName.pem

You're welcome.

CHAPTER TWO

I want to be able to use the keys in a browser. I figured these learnings were worth documenting, too.

I used Browserify.

browserify testNodeRsaBrowser.js -o testNodeRsaBrowserBrowserify.js
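
For context, the browser entry point only needs to require node-rsa and hang something on window for the page to call. This is a hypothetical sketch, not the repo's actual testNodeRsaBrowser.js:

// hypothetical browser entry point (the real testNodeRsaBrowser.js differs)
const NodeRSA = require('node-rsa');

// expose tiny helpers the page can call once the browserified bundle is loaded
window.rsaDemo = {
    encrypt: (publicPem, text) => new NodeRSA(publicPem, 'pkcs1-public-pem').encrypt(text, 'base64'),
    decrypt: (privatePem, cryptoText) => new NodeRSA(privatePem, 'pkcs1-private-pem').decrypt(cryptoText, 'utf8')
};

Browserify resolves the require() calls and produces a single file you can load with a script tag.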

If you put this repository someplace where you can serve HTML, it will let you play with it.

You can also play with it at:  http://genericwhite.com/rsaEncryptionDemo

CHAPTER THREE

My interest in public key encryption continues. I wanted a tool I could actually play with, so I wrote one.

It does these things:

1) Generate a key pair.
2) Extract a public key from a private key. (A sketch of 1 and 2 appears after this list.)
3) Manually enter a public or private key.
4) Create a crypto text string from plain text input.
5) Extract plain text from a crypto string.
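
Here is that sketch for items 1 and 2, using node-rsa. It is illustrative only, not the tool's actual code:

const NodeRSA = require('node-rsa');

// 1) generate a new 2048-bit key pair
const key = new NodeRSA({ b: 2048 });
const privatePem = key.exportKey('pkcs1-private-pem');

// 2) the public key can always be recovered from the private key
const publicPem = new NodeRSA(privatePem, 'pkcs1-private-pem').exportKey('pkcs1-public-pem');
console.log(publicPem);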


You can play with this at: http://genericwhite.com/rsaEncryptionDemo/

The code is available on GitHub.

Javascript Alert Debug: Coolest thing I ever did

Here's the problem. A co-worker got a bizarre alert() dialog in a web app. It popped up and just said "1". It wasn't her code and she was completely stumped. That's how I got involved.

I looked around the code. Searched for errant /alert\(/, etc. Nothing worked.

Then I hit the developer console. I typed the best single thing I ever typed:

window.alert = msg => { console.log(msg); console.trace(); };

Yeh, I did that. You may enjoy my awesomeness.

console.trace() gives a stack trace at the location where the alert dialog was called. That was very helpful.
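
If you want the dialog to keep appearing while you hunt, a variation (just a sketch, not what I typed that day) keeps the original alert around:

// keep a reference to the real alert, log the message with a stack trace, then show the dialog as usual
const realAlert = window.alert.bind(window);
window.alert = msg => {
    console.trace(msg);
    realAlert(msg);
};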

You are welcome.

SIF v3.5 Adds Support for Individualized Education Plans

Access for Learning (a4l.org) is happy to announce a major new addition to the School Interoperability Framework (SIF) data model to support students with special needs. Comprising two major components, xIndividualizedEducationPlan and xTransferIep, this release is the result of a two-year effort led by TQ White II of the Central Minnesota Educational Research and Development Corporation (cmerdc.org) with the help of national experts in special education and data modeling. The effort is motivated by the recognized need to make a student’s individualized education plan (IEP) content available when a student transfers into a new school.

The new data models are intended to support three main use cases. 1) Immediate support for an administrator the very first time a student shows up in a new school. 2) Information to support the special education team as they adapt plans already in place to the resources and strategies of the receiving school. 3) Sufficient information for schools and districts to support reporting and resource management needs. The goal is to ensure that a school has the information needed to provide students having special needs with critical, ongoing services.

This new model is based on a thorough survey of the standard form sets published by nearly every state, as well as the federal government. The forms were categorized into representative groups for an exhaustive inventory of data and evaluation of documentation strategies. With the input of a workgroup averaging about ten people, a structured hierarchy of elements was developed and refined. Once done, the work was passed to Jill Parkes, an education data analyst at CEDS (Common Educational Data Standards), a federal organization that develops a dictionary of education-related data definitions.

The CEDS process did two things. First, they evaluated each element in the new, tentative IEP data model and, where appropriate, attached a formal definition to it, either new or in reference to an existing definition. Then it was put into a formal CEDS community review. CEDS stakeholders, especially those with an interest in special education, reviewed the new definitions and approved them. This discussion improved confidence in the data design and made it more complete.

After this information was added to the XML, and with substantially more confidence, the data model was formally moved into the Access for Learning community review process. Though some people looked at the XML and offered comments, the main process involved TQ making presentations to various groups explaining the process and product in detail. Many valuable comments were made that resulted in changes, but two contributions were especially valuable.

First was Megan Gangl, a co-worker of TQ’s at cmERDC. Megan has spent her entire career as a special education teacher, case manager and leader of case managers. Her decades of experience brought many new details to the model, suggested reorganization of some parts and validated others. She identified missing details, helped to rename elements and refine both their data definitions and the explanations of their meanings. After the initial presentation, she spent several days collaborating on the model in detail. Once done, confidence in the usefulness, correctness and completeness of the model was again tremendously improved.

The day before community review started in October, a new person, Danielle Norton, joined the North American Technical Board. Danielle’s team contributed to the community review with sessions including the detailed overview presentation and discussions with various subgroups of her team. A particularly important contribution was made by Rick Shafer, a highly experienced data architect, who noted some problems with normalization in the data model.

The initial motivation for the IEP effort was to support the transfer of students between schools or districts. Throughout the process, the foremost intention was to provide complete information for the receiving educational agency. As a consequence, the data model included data elements that were duplicates of things defined elsewhere in SIF. That is, it was badly de-normalized. This made the element a complete picture for a receiving district but left it ill-suited for use as a local SIF entity object.

To solve this problem, the data model was split into two elements, xIndividualizedEducationPlan and xTransferIep. The former is completely normalized to serve as a formal entity. Data that is defined elsewhere in SIF is not duplicated; it is referenced with a refId. If a receiving program needs those details, it is expected to query the appropriate system.

The latter is conceived as a reporting object, i.e., it is intended to wrap information that is defined elsewhere for convenient reference. The xTransferIep includes structures that allow it to contain data referenced in the IEP that would otherwise require a query to a system to which the receiving organization may not have access. The xTransferIep is a complete representation of an IEP containing all details.

In this process, a new concept was added to SIF, the typed refId. Troubled by the fact that refIds inside the IEP provided no information about where the target information referenced by the refId could be found, TQ added several new data types to the data model. Each is a UUID (as is the generic refId) but each also includes documentation elements that explain what the UUID refers to and where the data can be found. For example, one of the new types, iepCommonStudentContactRefIdPointerType, explains that it references a contact inside a student object, distinct from iepCommonContactRefIdPointerType, which points to an independent xContactType, e.g., a service provider or doctor, somewhere else.

The last thing is that, with the help of Access for Learning’s John Lovell, the new data models were refined to fit the new xPress object strategy. xPress does not use XML attributes, and refIds are only present for elements that need them. This allows easier use of the model in non-Java/.NET systems. xPress is a more recent addition to SIF v3 and has proven to be easier to work with and, consequently, more popular. It is expected that xPress will be the foundation of new infrastructure work to formally bring JSON into the data model.

As with any first effort, it is fully understood by TQ and the entire community that as this data model comes into actual use, shortcomings will be noted and ideas will be conceived. It is intended that the SpecEd/IEP workgroup will reconvene in the future to evaluate the results of implementation. That is to say that, as with the rest of SIF, the new IEP data models being released with SIF v3.5 are not the end of the effort to better support students with special needs. This release is the beginning of an ongoing effort to ensure that SIF is able to help schools, districts and teachers have the information needed to support optimal educational outcomes and to allow students with special needs to have the brightest possible future.

For even more information, a video recording of the IEP Data Model Overview is available HERE. To contact TQ White II, leave word in the comments.

mysql reset root password with fix for Socket problem

I don't reset mysql's root password often enough to remember how to do it. So, I google and go through endless hassle because all the examples are old or incomplete. It's maddening.

The main problem is that nobody includes the stuff below referring to /var/run/mysqld. I don't know why. Perhaps it was not needed in the past. However, it sure is now. You can tell that you need it by seeing this:

mysqld_safe Directory '/var/run/mysqld' for UNIX socket file don't exists (2)

when you try 'mysql -uroot mysql' without it.

The sequence below works on my Ubuntu 16 installation (MySQL 5.7). 100%. I did it a few times because I wanted to make sure it was correct and repeatable.

#mysql: reset root user password bash commands
sudo service mysql stop
sudo mkdir /var/run/mysqld
sudo chown mysql: /var/run/mysqld
sudo mysqld_safe --skip-grant-tables --skip-networking &
mysql -uroot mysql

#in mysql:
UPDATE mysql.user
SET
  authentication_string=PASSWORD('PUT_NEW_PASSWORD_HERE'),
  plugin='mysql_native_password'
WHERE User='root' AND Host='localhost';
exit;

#and back in bash
sudo mysqladmin -S /var/run/mysqld/mysqld.sock shutdown
sudo service mysql start

mysql -uroot -pPUT_NEW_PASSWORD_HERE

(Of course, mysql will beef that you put your password in the command line. Don't do it if your bash_history or logs could be accessed.)


By the way, I got this information from this website. Obviously this person is a genius. Props.

https://coderwall.com/p/j9btlg/reset-the-mysql-5-7-root-password-in-ubuntu-16-04-lts

SIF JSON Response

So, I saw the PDF John posted discussing JSON format ideas. Ian and Jon, you rock. It is a great document with excellent ideas. Most of it makes good sense to me and, I'm sure, as I reread and understand it better, I will love even more of it.

That said, there is one fundamental detail I do not love:

    authors: {
        '#contains': 'author',
        '#items': [{ '#value': 'John Smith' }, { '#value': 'Dave Jones' }]
    }

The first reason is that the label 'authors' is plural but contains only one thing. In my opinion, things named with a plural should always be arrays. The second is that I envision a line of code like:

    const firstAuthor = inData.authors["#items"][0]["#value"];

That looks a lot like C# to me: a low signal-to-noise ratio.

In my other Javascript life, we would be inclined to use inflection, i.e., the assumption that 'authors' has elements with an implied name of 'author' and vice versa. I can understand that our XML roots make this difficult to accept.

Consequently, I am inclined toward the everything-is-an-object (if it's not a list) approach. E.g.,

    authors: [
        { author: 'John Smith', '@type': 'bigshot' },
        { author: 'Dave Jones', '@type': 'contributor' }
    ]

This provides a data structure that mentions the word 'author' the same number of times as does the XML. That it also provides room for attributes is good. This seems nicer to me:

    const firstAuthor = inData.authors[0].author;

I don't know if this can be expressed properly with openAPI or if it violates some other rule of interaction with XML. I do know that, as a Javascript programmer, I would rather use the form I suggest.



Comment on Net Neutrality.

Trump's FCC is about to permanently turn the internet over to the corporations. In a few years, your ISP will be like a cable provider. You can only access the sites that make them money. Other sites will be very slow or non-existent. You will pay for packages. Package A: YouTube, Netflix. Package B: Netflix and Hulu. You want to start an internet business, you will have to work a deal with every ISP corporation. It will be bad.

Go to GoFccYourself.com to be forwarded to the correct page for your comment. Do it every day.

When you click on GoFccYourself.com, you will end up at the FCC's comment filing page.

Click on the 'Express' link. It will take you to the entry form.

Suggested text:

Net neutrality is essential for freedom. Net neutrality requires Title II regulation of internet service providers. ISPs should be completely prevented from influencing the cost or performance of the internet resources and websites I want to use. My bandwidth purchase from my ISP and the site's bandwidth purchase from their ISP should be the only charges.

Mon May 08 2017 12:35:18 GMT-0500 (CDT)

Macbook Display Port VGA Adapter Doesn't Work

Google as I might, nobody would tell me that the adapter had firmware that could be out of date. Eventually, I found a reference to the idea in the form of an updater that would not work.

The important thing is that I realized that if you cannot use your Macbook for presentations on a VGA projector or display, it might be that you simply need to buy a new adapter.

I did and now it works.

MacOS/OS X Dock presentation formatting with Spacers!!

I have a lot of stuff in my dock. I have long wished I could have some sort of grouping mechanism so it was easier to find what I want. Today I learned that you can add spaces to your Dock. Life is good.

Enter:

defaults write com.apple.dock persistent-apps -array-add '{"tile-type"="spacer-tile";}';
killall Dock;

In your Dock, you will see a space that you can drag as you see fit. You can repeat the above as many times as you want. If you want to remove the space, just drag it out like anything else.

I added some to separate my email and web browser from my development tools and those from the rest of the stuff.

Using 'prettier', a Javascript formatter, in bbedit

You have to have NodeJS. If you don't have it, google it and make it happen.

Then you need to install prettier. Its NPM page is here.

To install it, type...

npm install prettier -g

This will install it as a command line utility.

Insert the following into a file. (I do a lot of this, so I created a Scripts folder and called the file ~/Scripts/bin/js/runPrettier.js. You can do what works for you; just remember that the bash file below has to point to this file.)

#!/usr/local/bin/node
const prettier = require('prettier');

var inString = '';
var writeStuff = function() {
    var outString = inString;
    outString = outString.replace(/^[ \t]*$/gm, "/*linebreak*/"); //I like to retain linebreaks, so mark blank lines so prettier doesn't collapse them
    outString = prettier.format(outString); //newer prettier versions want an explicit parser option and, in v3+, return a Promise
    outString = outString.replace(/\/\*linebreak\*\//gm, ""); //you can remove these if you don't
    process.stdout.write(outString);
};

//the rest: accumulate stdin, then format it on end ===============================
process.stdin.resume();
process.stdin.setEncoding('utf8');
process.stdin.on('data', function(data) {
    inString += data;
});
process.stdin.on('end', writeStuff);

Then, in the bbedit Text Filters directory (~/Library/Application Support/BBEdit/Text Filters), create a file (I called mine 'runPrettier') containing this...

#!/bin/bash
# the node script above must be executable (chmod +x), or call it as: node ~/Scripts/bin/js/runPrettier.js
~/Scripts/bin/js/runPrettier.js

In the terminal, make the bash script executable (note that the spaces in the path need to be escaped)...

chmod +x ~/Library/Application\ Support/BBEdit/Text\ Filters/runPrettier

and, voila!, you have an operating formatter for Javascript.

I assigned mine to a command key so I can always make it pretty.


NGINX server_name is not working, ignoring config and getting the wrong server including SSL

When NGINX is trying to find something to serve, it tries to match all the server names BUT ONLY IF THERE IS A DEFAULT SITE.

I don't understand why it fails even when the server name matches something. However, if you have two separate servers with:

server_name xxx.com

server_name yyy.com

You would expect (assuming that the configs appear in this order) that http://yyy.com would match that server_name. It will not. It will match xxx.com. Why? Because when there is no default, it simply uses the first server. Period.

If you have a default, though...

server_name xxx.com

server_name yyy.com

default_server

It works. yyy.com will match yyy.com.

I came upon this problem because I had a configuration that included the default file that comes with the distribution, and it worked.

Then I added SSL. It did not work. Having long forgotten the issue with the default, I debugged like a madman. Then I thought about the default issue (I ran into it sometime in the dark past - it is buried in the docs) and saw: there's a default right there!!!

Eventually (I know. This is the least entertaining punchline in history.), I realized that there was no default for port 443. QED


# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}
server {
    # the missing piece: a default server for port 443
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl on; # redundant with 'listen ... ssl'; this directive is removed in newer nginx versions
    ssl_certificate /etc/ssl/PATH/TO/CERT.cer;
    ssl_certificate_key /etc/ssl/PATH/TO/CERT.key;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
}