Color and Other Formatting for Fargo.io

Dave Winer is the guru of outlining. He is the person who moved outlines from English class onto the computer in the eighties. He's done a lot of other things, too. But his development tool Frontier was a game changer for me. I eventually had to leave it behind because my brain needs color. I need code coloring when I'm programming and I need color cues to work productively with an outliner (everything else, too; my emails always have colored sections).

His newest outliner, Fargo, is very cool (at http://fargo.io). It runs in a web browser and saves your files into your Dropbox. Awesome. Unfortunately, it's also black and white in a serif typeface. I don't like that and, honestly, I have a hard time working with black type if there is very much of it. I tried it out. Liked it. I continued to use Omni Outliner. (If I could get Frontier to use Omni's formatting, I would be so happy.)

Lately I've been working on a very outline-intensive project and now I want to work on it with other people. But it really needs to be an active outline with expanding and collapsing sections.

I discovered that I could export the outline from Omni into OPML, the basic data structure Dave uses for Fargo. I copied the OPML into the Dropbox file Fargo uses and, boom!, I had Fargo functionality.

To my eye, it was ugly, but it turns out that Dave did a BEAUTIFUL THING. He made it so that you can execute Javascript from the outliner. No kidding. You just type some JS into the outline, hit cmd-/, and it runs.

Here's what I did:

$('body').prepend("<script src='http://static.tqwhite.org/iepProject/formatFargo.js'></script>");


That's right, I loaded a chunk of JS from my static server. That code changes this:

[screenshot: Fargo's default formatting, black serif type]

to look like this:

[screenshot: the same outline, sans-serif and colorized by level]
It took a fair amount of reverse engineering to figure it out, but it works like a charm.


Here's the code:

(I think the colorized picture is easier to read.)

[screenshot: the code, syntax-colorized]

And here it is if you want to do your own colorizing:

// Restyle Fargo's outline: sans-serif type and a different color for each outline level.
var colorize = function() {
    $('.concord .concord-node .concord-wrapper .concord-text').css({'font-family': 'sans-serif'});
    $('.concord-level-1-text').css({'color': 'black'});
    $('.concord-level-2-text').css({'color': '#664F58'});
    $('.concord-level-3-text').css({'color': '#456D72'});
    $('.concord-level-4-text').css({'color': '#AD9470'});
    $('.concord-level-5-text').css({'color': '#D3AF74'});
    $('.concord-level-6-text').css({'color': '#90967E'});
    $('.concord-level-7-text').css({'color': '#778'});
    $('.concord-level-8-text').css({'color': '#788'});

    // White out every row, then tint the selected node and its children.
    $('.concord .concord-node > .concord-wrapper').css({'background': 'white'});
    $('.concord .concord-node.selected > .concord-wrapper').css({'background': 'rgb(245, 250, 250)'});
    $('.concord .concord-node.selected').find('li .concord-wrapper').css('background', 'rgb(245, 250, 250)');
};

// Fargo redraws nodes as you work, so re-apply the styling on every keystroke and click.
$('body').bind('keyup', colorize);
$('body').bind('click', colorize);

colorize();

// Stylesheet rules as a backstop so the selection tint survives redraws.
document.styleSheets[0].insertRule(".selected { background: rgb(245, 250, 250); }", 0);
document.styleSheets[0].insertRule(".selected div { background: rgb(245, 250, 250); }", 0);
document.styleSheets[0].insertRule(".selected div { color: inherit; }", 0); // 'normal' is not a valid color value; 'inherit' keeps the level colors
document.styleSheets[0].insertRule(".selected i { background: rgb(245, 250, 250); }", 0);







Reinstalling NodeJS and npm

Recently, I upgraded to the latest NodeJS/npm and npm stopped working. It turns out that there was a problem with the OS X installer.

After painful amounts of googling, I found that some people had solved it by "tracking down" the node and npm files, removing them, and then reinstalling with the node distribution download (dmg) from nodejs.org.

I tracked down the files. Do this:


sudo rm -rf /usr/local/bin/node
sudo rm /usr/local/bin/npm
sudo rm -rf /usr/local/lib/node_modules/npm


And then hit up http://nodejs.org for a new installer. You will have a fresh working installation.
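
Once the fresh install is in place, a quick sanity check (the version numbers will vary, but both commands should answer without complaint):

node --version
npm --version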


PS, the problem with npm was this:

When I typed

npm init

to start up a node module, I got errors that included

Error: Cannot find module 'github-url-from-git'

Turns out that basically everything I did with npm except --version was broken in this way.

The 'delete before reinstalling' process listed above fixed it.







Visual Studio 2015, .NET 5 rc1, dnu restore, asp.net missing (I can't believe it either)

It's been a half dozen years since I started a new project in Visual Studio. I was a little excited at the prospect. I like learning things and I know a lot about most of the other internet development topics.

I looked up the latest stuff and it turns out that we have a new Visual Studio and a new .NET that have taken a lot of good lessons from the rest of the world of web development. .NET 5 is out of beta and into Release Candidate 1. That's good enough for me. I expect the bugs will be small.

Wait. It's Microsoft and everything they do is stupid.

Problem Zero (which I won't detail)

I actually went through fits trying to get it all installed and looking good but, having done that, I create a new project: ASP.NET Web Application/ASP.NET 5 Web Application.

Problem One:

I have to do this twice. I keep code on an external drive that the file dialog navigates to as //psf.stuff... . It tells me that "UNC paths are not supported." The second time, I typed (not navigated) the drive letter, "X:", and it worked.

Problem Two:

I build the solution. I get a bazillion (well, 204) errors. The first one tells me that "The type or namespace name 'Identity' does not exist in the namespace 'Microsoft.AspNet'". Another: "The type or namespace name 'AspNet' does not exist in the namespace 'Microsoft'". Can you imagine?

The project listed in the first error says "TestProject.DNX 4.5.1, TestProject.DNX Core 5.0" (obviously, the 'TestProject' is my project name). For the second one, it's "DNX 4.5.1" only.

I try using Nuget to add "Identity.Core" and it changes things. I screw around with that for a while as new missing references appear, until I start getting messages telling me that I have duplicate definitions. This is truly awful. (Did I mention that Microsoft always does it stupid? The package manager doesn't make sure the references are correct? Really?)

Problem Three:

I start over and this time I decide that I'm working toward .NET 5.0, so to hell with 4.5.1. I edit the project.json file and remove it. The build takes forever and I pretty much expect everything to blow up, but instead I get a message: "Dependencies in project.json were modified. Please run "dnu restore" to generate a new lock file."

This feels like progress. I right-click on the project, choose Open Command Line and type "dnu restore". It works. I return to VS and build again. It instantly repeats the exact same message. I delete the lock file and restore it. Same thing. A complete, stupid dead end.

THE SOLUTION (and a lesson in the complete depth of Microsoft stupidity)

I reverse the order of the references to 4.5.1 and 5.0 in project.json so that 5.0 comes first. I.e.,

I change from this:

[screenshot: the "frameworks" section with dnx451 listed before dnxcore50]

to this:

[screenshot: the same section with dnxcore50 listed first]

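In text form, the edit amounts to something like this (a reconstruction; "dnx451" and "dnxcore50" are the framework monikers the RC1 scaffold's project.json used, and your entries may differ):

"frameworks": {
    "dnx451": { },
    "dnxcore50": { }
}

becomes

"frameworks": {
    "dnxcore50": { },
    "dnx451": { }
}
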
The build succeeds promptly. Clicking the IIS Express button opens a web browser and shows me the scaffold web page.

This has taken me over 2.5 hours. Certainly, my inexperience with this technology made it slower. Someone better might have done it more quickly. However, this is the scaffold. This is the part that's supposed to save time. This is an epic fail on Microsoft's part. I mean, they put the dependencies in the scaffold in the wrong order!!

The good news is that, if I got enough Google-friendly text in this page, you might have found it well before 2.5 hours elapsed.

Of course, that just means you need to endure Microsoft's next awful surprise. Good luck. I know I need some.



Bash List of Files in Directories with Complete Paths

I don't want to forget this and it took me too much googling to find.

find $PWD -type f | grep xsd

gets:

/..absolute path../Collections.xsd
/..absolute path../Composite/SIFNACompositeObjects.xsd
/..absolute path../Entity/SIFIdentityManagement.xsd
/..absolute path../Report/SIFNAassessmentSummary.xsd
/..absolute path../SIFglobal.xsd

Note that it is looking inside folders.
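
If you want find to do the filtering itself, this should be equivalent for my case (a small variation, not what I originally ran; -name matches only the file name, where grep matched 'xsd' anywhere in the path):

find $PWD -type f -name '*.xsd'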


(Too much googling, but thanks, StackOverflow: http://stackoverflow.com/questions/246215/how-can-i-list-files-with-their-absolute-path-in-linux)

NodeJS Express Body-Parser Post Data Missing Problem

I am amazed that Express, NodeJS's main network server package, does not handle POST data on its own. I just don't get it. It requires a package called Body-Parser.

I copied in the sample code from the Express website

http://expressjs.com/4x/api.html#req.body

got the required packages and made a little form to test it.

It did not work.

The docs explain that Body-Parser builds a request.body that contains the post data, but mine was empty. It did exist, but it was empty.

I did one million things to make sure that I was doing what I thought I was doing. Postman? Check. Curl? Check. Inch by inch inspection of my entry page? Check.

I got to looking at the post on the way into the page in Firebug. I noticed that the encoding that Firefox was using was

application/x-www-form-urlencoded

The Body-Parser docs say that any of their decoders will take a type parameter to specify this. I found an example and tried it out:

{type:'application/x-www-form-urlencoded'}

Nope. I tried this in some decoder called raw(), the urlencoded() one; I even put it into json() just in case. Nada.

At my wits' end, I'm just trying things in Postman. I tell it to encode the post in various ways and, Voila!!, when I choose

multipart/form-data

It works.

WTF? I think. Everything specifically tells me that Body-Parser SPECIFICALLY DOES NOT DO MULTIPART form data.

[screenshot: the Body-Parser docs saying it does not handle multipart bodies]

How much clearer could it be?

Then I realize that the sample code from the Express docs had me install (last night when this nightmare began) something called Multer. Experimentation tells me that Multer is the reason I can do multipart/form-data at all.

I redid everything I had tried before and could not get Body-Parser to work on its own. With Multer, only multipart/form-data works. Without it, nothing does.

If anyone can enter a comment telling me how I got this wrong, I would be grateful.


UPDATE: It was none of the above!!!


It turns out that, in the course of the above screwing around, I moved the assignment of the router so that it followed the assignment of the Body-Parser. Express runs its middleware in the order you register it, so it will execute the router before the parser if you tell it to. I had inserted the new body-parser code after the route. No reason, it just happened.

app.use(bodyParser.urlencoded({ extended: true }))

must precede

app.use('/', router);
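
For the record, here is a minimal sketch of the working arrangement (Express 4 with Body-Parser; the route and port are just illustrative):

var express = require('express');
var bodyParser = require('body-parser');

var app = express();
var router = express.Router();

router.post('/form', function(req, res) {
    // because the parser was registered first, req.body holds the post data
    res.json(req.body);
});

app.use(bodyParser.urlencoded({ extended: true })); // the parser first...
app.use('/', router); // ...the router second, or req.body arrives empty

app.listen(3000);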


“This project is incompatible with the current version of Visual Studio”

Nice error message Microsoft. Why not just say, "Screw off. We don't care about your old projects." This is why I avoid them whenever possible.

However, in this case, it wasn't possible. The project was some reference code for the protocol of a new thing I'm working on. It was done in Visual Studio 2013; I'm using Visual Studio 2015.

After doing a bunch of stuff that was totally useless (I am not very strong in .NET), I happened upon this simple solution...

Open the .csproj file in a text editor.

Change

12

to

14

Done.

Or, if you need more detail:

Change

<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

to

<Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

After that it opened with no problem and I was able to build and use the project.

I have no idea what the ramifications of this could be in the long run. If I find any that are adverse, I will write it down here.




MySQL: Load Data Infile and Select ... Into Outfile

Just so I don't forget...

I did a test of this.

select firstName, lastName from users into outfile 'tempTest9999'

Then, after creating an appropriate table...

load data infile 'tempTest9999' into table tempTest
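
The "appropriate table" just needs columns that line up with the outfile's; something like this is all I mean (a sketch, with the column types assumed):

create table tempTest (firstName varchar(255), lastName varchar(255));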

The thing I don't want to forget is that the file was put into the directory...

/var/lib/mysql/DATABASENAME

The owner and group were both 'mysql'.

I know, it's what one would expect, but I forget these things.

Responsive Image Maps - Generator and jQuery Plugin

Bottom line, I need to do an image map in a responsive website.

First, I found this lovely image map generator. It is simple and works very well. It's good enough that I actually gave him a few bucks.

http://www.image-maps.com

The resulting <map> worked right away.

Then I started working on the real website. The image loads at a size relative to the browser window's, so the image map was out of alignment. I quickly realized the problem and googled "responsive image maps".

I found this:

https://github.com/stowball/jQuery-rwdImageMaps

I groaned. I hate adding dependencies and even more, I hate figuring out how to work new stuff that I'm probably not going to use again (this is the first image map I've used in years and once this project ends, probably the last for a long time). But, I did it.

The learning curve was ZERO.

I added the plugin, copied his sample initialization, and it worked perfectly the first time.
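
The initialization really is just one call (this is my recollection of his sample; check the README if it doesn't take):

$(document).ready(function() {
    $('img[usemap]').rwdImageMaps();
});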

Win!!

Tape Suppresses console.log() and Makes Debugging Difficult

I was seduced (see [1]) by the fact that Tape (the "tap-producing test harness for node and browsers") does less than other unit testing tools. Not only does it not litter your Javascript universe with globals, it does a lot less magic stuff.

I am in the early stages of a new project and decided to do the right thing and test everything from the first moment. Having been irritated by the amount of arcane stuff in Mocha, I drank the Kool-Aid and rewrote my starter tests for Tape.

Bad move.

Tape is simple to use but, when I started doing real development and a new test didn't run, I needed to debug and couldn't.

Turns out that Tape suppresses console.log(), process.exit(), etc. Of course, there are other ways to debug Javascript, but I am a fan of print-trace debugging, i.e., console.log(), and you cannot use console.log() with Tape.

Searching the web, I found that this is not something that is noted very often. I don't know why. If I had known it, I would not have used Tape. In fact, I am going to revert to Mocha. The elimination of testing globals is not sufficiently compelling to make it worth changing my approach to debugging.


[1] Why I use Tape Instead of Mocha & So Should You - http://tqwhite.org?F7A285

Remote Volume Sharing with Ubuntu and fstab

I have two applications that run on my Ubuntu server. They both deposit their output into different directories on the same remote volume. To avoid confusing my tiny brain, I like to isolate separate applications into separate users on my server. (It allows me to log in and have the environment initialized with appropriate, different, management tools.)

So, I set about making it so that the volume is mounted all the time. (That they are Windows volumes has, I believe, no bearing on this subject.) That is, the /etc/fstab file contains a couple of lines like this:

//00.00.00.00/c$ /home/appUserA/volumeName cifs uid=appUserA,rw,user,username=remoteName,password=******* 0 0

//00.00.00.00/c$ /home/appUserB/volumeName cifs uid=appUserB,rw,user,username=remoteName,password=******* 0 0

The important point is that the IP addresses (here shown with zeroes) are the same and volumes (c$) match. 

The mounting worked. I could see the volume in both user directories. I got the application for appUserA to work. Life is good. But, when I got to the other application for appUserB, I could not write to its destination directory. 

After screwing around a long time, I realized that Ubuntu locks the mounted directory for the user that touches it first. This, before I figured it out, made for confusing results. Sometimes A could write. Other times B. In any case, I could always write with sudo.

I will have to find a different way of making these directories accessible (probably a common mount point with aliases into the appropriate directories).
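
Something like this is what I have in mind (an untested sketch; outputDirA and outputDirB stand in for the real destination directories):

# /etc/fstab: one shared mount point instead of two per-user ones
//00.00.00.00/c$ /mnt/sharedVolume cifs rw,user,username=remoteName,password=******* 0 0

# per-user aliases into the shared mount
ln -s /mnt/sharedVolume/outputDirA /home/appUserA/volumeName
ln -s /mnt/sharedVolume/outputDirB /home/appUserB/volumeName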

So will you.