Author Archives: John Munsch

Shipit vs Flightplan for Automated Administration

First off, let me say one really important thing. If you want your next project to succeed, I believe you should automate your server setup and deployment from day one. If you find yourself doing a particular thing even twice, it's time to write down the steps in a form you can run again, not just make a note of it in Evernote or on a piece of paper.

If you do that, you will always be keen to do another deployment whenever you want to make a small improvement or fix a bug, especially when you're returning to the project weeks or months later and the process is no longer fresh in your mind. Any solution at all is better than no solution at that point.

PaperQuik and ClearAndDraw

Last year I threw together a couple of small projects, grabbed a DigitalOcean server (thanks to the $60 credit from ng-conf last year), figured out how to set up my own server, and deployed everything to create some new websites. I did everything from start to finish myself, and I felt duly proud about having built something from scratch and launched it, no matter how small it was.

But there's a siren song that every developer feels when working in a particular language, be it Ruby or Java, JavaScript or Python: the idea that all your tools should be written in your favorite language. You want your web server, build tools, continuous integration server, and sometimes even your editor built using the language you use most of the time. The same things get rewritten over and over to support that idea, and although I hate the wastefulness of it, I certainly understand the feeling. Everything I had been using for the administration tasks on my projects was Bash shell scripts, and I didn't really like them much.

Then I started seeing new deployment and automation tools like Flightplan and Shipit come out specifically for JavaScript developers. Both are focused on the deployment side of things rather than the build and development automation tasks that Grunt or Gulp focus on. So I thought it might be interesting to try to replace my shell scripts with these tools and see how easily they could do the same jobs. The tasks covered by my scripts were: initial machine configuration, updating Ubuntu (mainly to get security fixes), deployment, and creating an SSH shell to the remote server. It's not a lot, but that covered everything I found myself doing repeatedly as I put together my projects.

Flightplan

First I started with Flightplan. It's supposedly similar to Python's Fabric tool; never having used the latter, I can't say anything about that. What I can say is that I was able to cobble together a flightplan.js file which let me do three out of my four tasks (you can run an SSH shell under it but not easily interact with it, so I abandoned that one).

I might not be making the absolute best use of Flightplan because I was trying to import the same commands I had used in my shell scripts into the flightplan.js file to create tasks in it. However, it worked and it was pretty straightforward.

// Running this requires installing flightplan (https://github.com/pstadler/flightplan).
// Then use commands like:
//   fly install:production
//   fly deploy:production
//   fly upgrade:production
var plan = require('flightplan');

plan.target('production', [
  {
    host: 'PocketChange',
    username: 'root',
    agent: process.env.SSH_AUTH_SOCK
  }
]);

var tmpDir = 'PaperQuik-com-' + new Date().getTime();

// Install software on the server necessary to run this application.
// Then ensure that Apache is properly configured to serve the 
// application.
plan.remote('install', function (remote) {
  remote.sudo('apt-get update');
  remote.sudo('apt-get -y install apache2 emacs23 git unzip');
});

plan.local('install', function (local) {
  local.echo("We couldn't copy this file earlier because there isn't a spot for it until after Apache is installed.");
  local.transfer('paperquik.conf', '/etc/apache2/sites-available/');
});

plan.remote('install', function (remote) {
  remote.sudo('a2enmod expires headers rewrite proxy_http');

  remote.sudo('a2dissite 000-default');

  remote.sudo('a2ensite paperquik mdm');

  remote.sudo('service apache2 reload');
});

// Deploy the application.
plan.local('deploy', function(local) {
  local.log('Deploy the current build of PaperQuik.com.');
  local.log('Run build');
  local.exec('grunt build');

  local.log('Copy files to remote hosts');
  local.with('cd dist', function() {
    var filesToCopy = local.exec('find .');

    // rsync files to all the target's remote hosts
    local.transfer(filesToCopy, '/tmp/' + tmpDir);
  });
});

plan.remote('deploy', function(remote) {
  remote.log('Move folder to web root');
  remote.sudo('cp -R /tmp/' + tmpDir + '/*' + ' /var/www/paperquik');
  remote.rm('-rf /tmp/' + tmpDir);
});

// Upgrade Ubuntu to the latest.
plan.remote('upgrade', function (remote) {
  remote.log('Fetches the list of available Ubuntu upgrades.');
  remote.sudo('apt-get update');

  // And then actually does them.
  remote.sudo('apt-get -y dist-upgrade');
});

Notice the way the install task (or the deploy task) switches back and forth between local and remote sections. I don't think I messed that up; it seems to be the normal structure for Flightplan, and it seems odd to me. Doubly so because Flightplan doesn't seem to have a mechanism for task dependencies (I need to do task A before I do task B or task C, and I don't want to repeat that code).
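
One partial workaround I can think of: Flightplan appears to accept an array of task names when you register a flight, so at least a shared step doesn't have to be copy-pasted into every task. Here's a minimal, hypothetical sketch of that idea; the cleanup step and the wildcard path are made up for illustration and aren't in my real flightplan.js:

// Hypothetical sketch: one function registered for both the deploy and
// upgrade tasks so a shared step runs first for either of them.
var plan = require('flightplan');

plan.target('production', [{
  host: 'PocketChange',
  username: 'root',
  agent: process.env.SSH_AUTH_SOCK
}]);

// This flight runs before the rest of the deploy or upgrade flights below.
plan.remote(['deploy', 'upgrade'], function (remote) {
  remote.log('Clean out any old temp directories first.');
  remote.rm('-rf /tmp/PaperQuik-com-*');
});

plan.remote('deploy', function (remote) {
  // ...the actual deploy steps go here...
});

plan.remote('upgrade', function (remote) {
  // ...the actual upgrade steps go here...
});

It's not a real dependency mechanism, but it covers the "do this one thing before several different tasks" case.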

Pros:

  • All the commands within a task execute sequentially so it’s an easier transition from something like a shell script.
  • Allows you to specify multiple servers and will allow you to run tasks simultaneously against all of them. Not anything I need at this time, but you never know when a project could grow from one server to two.

Cons:

  • A given task is broken up into local and remote sections and they run sequentially based upon them all having the same name. There doesn’t seem to be any way for a given task to specify that it has dependencies upon other tasks being executed first (for example, maybe I do a directory cleanup before several different tasks).
  • Although serially executing code is easier to deal with, if you have several actions which would complete more quickly run in parallel, Flightplan doesn't really support that.

Shipit

Then I built the same thing again in Shipit. As with Flightplan, it claims similarity to another tool, in this case the Ruby deployment tool Capistrano. Again I have to claim ignorance here, never having used Capistrano. Here's the same set of commands (install, deploy, and upgrade) using a Shipit file:

// Running this requires installing Shipit (https://github.com/shipitjs/shipit).
// Then use commands like:
//   shipit production install
//   shipit production deploy
//   shipit production upgrade
module.exports = function (shipit) {
  require('shipit-deploy')(shipit);

  shipit.initConfig({
    production: {
      servers: 'root@PocketChange'
    }
  });

  var tmpDir = 'PaperQuik-com-' + new Date().getTime();

  shipit.task('install', function () {
    // Returning the promise chain lets Shipit know when this task has actually finished.
    return shipit.remote('sudo apt-get update').then(function () {
      // We'll wait for the update to complete before installing some software I like to have on the
      // server.
      return shipit.remote('sudo apt-get -y install apache2 emacs23 git unzip').then(function () {
        // We don't need the following set of actions to happen in any particular order. For example,
        // we're good if the disables happen before the enables.
        var promises = [ ];

        // We couldn't copy this file earlier because there isn't a spot for it until after Apache is installed.
        promises.push(shipit.remoteCopy('paperquik.conf', '/etc/apache2/sites-available/'));

        promises.push(shipit.remote('sudo a2enmod expires headers rewrite proxy_http'));

        promises.push(shipit.remote('sudo a2dissite 000-default'));

        promises.push(shipit.remote('sudo a2ensite paperquik mdm'));

        // But we do need this to wait until we've completed all of the above. So we have it wait until
        // all of their promises have resolved.
        return Promise.all(promises).then(function () {
          return shipit.remote('sudo service apache2 reload');
        });
      });
    });
  });

  // This shipit file doesn't yet use the official shipit deploy functionality. It may in the future but
  // this is my old sequence and I know it works. Note: I also know theirs seems like it might be
  // better because it can roll back and I definitely do not have that.
  shipit.task('deploy', function () {
    shipit.log('Deploy the current build of PaperQuik.com.');
    // Returning the promise chain lets Shipit know when the deploy has finished.
    return shipit.local('grunt build')
        .then(function () {
          return shipit.remoteCopy('dist/*', '/tmp/' + tmpDir);
        })
        .then(function () {
          shipit.log('Move folder to web root');
          return shipit.remote('sudo cp -R /tmp/' + tmpDir + '/*' + ' /var/www/paperquik');
        })
        .then(function () {
          return shipit.remote('rm -rf /tmp/' + tmpDir);
        });
  });

  shipit.task('upgrade', function () {
    shipit.log('Fetches the list of available Ubuntu upgrades.');
    return shipit.remote('sudo apt-get update').then(function () {
      shipit.log('Now perform the upgrade.');
      return shipit.remote('sudo apt-get -y dist-upgrade');
    });
  });
};

Here's the original over on Github. The huge and most obvious difference here is that Shipit wants to do all of those Apache configuration commands in parallel. So I let it. I just added a little bit of code to delay restarting the server until all of them have completed (you can see the Promise.all call in the install task above). Likewise, the deploy and upgrade tasks want to execute steps in parallel, and I can't always let them do that. Since all of the asynchronous actions in Shipit return promises, I just added a little bit of code in each task where I need to control the order in which things happen, and it works.

Pros:

  • Executes commands within a task in parallel to achieve maximum speed.
  • Allows you to specify multiple servers and will allow you to run tasks simultaneously against all of them. Not anything I need at this time, but you never know when a project could grow from one server to two.
  • Supports tasks which run other tasks (or which broadcast/sink events), so dependencies between tasks can be handled (there's a sketch of this after the cons list below).

Cons:

  • The documentation. Seriously, come on. I’m going to have to contribute to this project just to fill out the documentation some.
  • Harder to structure serial commands which need to execute in a particular sequence.
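
The event support is worth a quick illustration, since it's the piece that makes task dependencies possible. Shipit tasks are Orchestrator tasks under the hood, so a task can emit an event and a listener can start another task in response. Here's a rough, hypothetical sketch of what I mean; the 'cleanup' task and the 'deployed' event name are mine for illustration, not anything built into Shipit:

// Hypothetical sketch: chaining one task off another using events.
module.exports = function (shipit) {
  shipit.initConfig({
    production: {
      servers: 'root@PocketChange'
    }
  });

  // A made-up housekeeping task.
  shipit.task('cleanup', function () {
    return shipit.remote('rm -rf /tmp/PaperQuik-com-*');
  });

  shipit.task('deploy', function () {
    return shipit.local('grunt build').then(function () {
      // ...copy the build out to the server here...

      // Broadcast that the deploy finished.
      shipit.emit('deployed');
    });
  });

  // Whenever a deploy finishes, kick off the cleanup task.
  shipit.on('deployed', function () {
    shipit.start('cleanup');
  });
};

That's roughly how I'd expect dependencies and post-task housekeeping to be wired up, though given the thin documentation you may have to experiment a bit to confirm the details.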

Thoughts

You see what I mean about people rebuilding the same tools over and over again just using different languages. Both Shipit and Flightplan claim similarity to previous tools for Ruby and Python. However, at the same time I have to confess I don’t find either of those particularly appealing to use when all I use day to day is JavaScript. I used Java for over ten years and I still don’t want to do all of my build and deployment with Ant. When I wanted to control the order of the asynchronous events in Shipit, it was nice that I could easily see how to do that from my experience with JavaScript promises in AngularJS and Node.js.

Both tools allow you to run tasks against multiple servers simultaneously. Both allow you to have multiple sets of servers so you can have staging servers or, if you're just playing around like me, a Vagrant server you bring up and down just for testing purposes. Either could probably do your administration jobs, but I just liked Shipit a little bit better because it seemed more powerful. Going forward, I'm probably going to pull the Flightplan files out of the master branch of my projects and leave them up only for reference from this blog post. Now I just need to see if I can do something about that Shipit documentation.

The Best Part of Any AngularJS Troll Post

Any time I see the latest “I Hate AngularJS and So Should You” article I always skip straight to the end because that’s the very best part of all of them. It’s the fun part where we get to hear what the author of this particular piece is going to advocate you use instead. Here are the usual suspects and my highly uncharitable response to each one:

I’m writing my own framework now

Bonus points for this one if it's accompanied by a link to their new half-formed idea on Github. It should continue getting commits for at least a couple of months.

There are literally dozens of front-end frameworks at this point, but theirs is going to be way better than any of them. Look, there are really only one or two guys who will work on it, but they are stellar programmers. God knows they are going to do a much better job than the programmers at Google and Facebook, or the likes of Yehuda Katz and Tom Dale.

TodoMVC is beginning to look like one of those four-page resumes you get these days, with all of the "frameworks" it has examples for. If you don't believe me, be sure to look at their "Labs" tab. Yes, they have so many they had to put in tabs.

Backbone.js

Ha ha ha ha ha ha ha ha hahahaha haha ha ha ha ha. Oh god. I may have hurt myself. This person is so upset about how "heavyweight" and "complex" AngularJS is. Look for lots of mentions of how things should be "minimal" and "simple" and at least one mention of how many lines of code Backbone.js is vs. the object of their derision. I figure their house looks like the bare, minimalist gable house pictured in the original post.

I did Backbone.js for two years; that's why, when I went somewhere new, I put them on AngularJS instead. I really hope the people who advocate going back to Backbone.js have to work on a large team of mixed-skill-level developers. The unskilled ones will make a hash of any framework, but what they can do with Backbone.js is just amazing.

That New Framework That You Just Heard About on Hacker News Two Weeks Ago

This is the framework from author #1 above. It's going to solve all the ridiculous mistakes that AngularJS made, and probably all of those from the other major frameworks at the same time. Ultimately it won't get any more updates, but that's OK because it only got used on one project before our author realized it not only had as many problems as the major frameworks but many, many more. Plus it gives him/her an opportunity to tweet about the abandonment of this framework and the excitement for the next new one.

The Chinese Menu Framework

This is the idea that sticking together a bunch of different best-of-breed pieces to make your own framework is perfectly viable. Just pick something from columns A/B/C/D and start using it. You’ll find lots of people who can answer your questions, there are many books and videos for that particular combo of tech, and there are developers out there by the hundreds you can hire who will have no problem diving right into your projects.

Ha ha. I’m kidding. It doesn’t really work that way. Pick an arbitrary grab bag of stuff and maybe you’ll make some excellent choices. But you’ll have to live with that decision for quite a while. Even a less popular stack like Ember.js is going to get more third party support than whatever you decide upon for yourself.

Again, I counsel rationality

Above all, please do a quick experiment for me. The next time somebody tells you that AngularJS is a dead end and you can't rely on it for years to come, ask them what they would have recommended back in 2013, just two years ago. What set of stuff would they have advocated then that would be doing so well today and have this long-lived future into 2017+ that wasn't AngularJS? Backbone.js? I don't know of anything.

My point is this: front-end and JavaScript tech is changing at a rate way too high for anyone's predictions about two and three years down the road to have a lot of merit. AngularJS seems like a reasonable bet to have done well, and there's lots of info available about migration from 1.X to 2.0, so at the moment I'm still on that path. In the meantime I hope to learn more about Facebook's stuff to see if it gives me useful ideas or to see if I can incorporate parts of it into AngularJS (Flux seems interesting, for instance, and would likely slide into most of the frameworks). But the people who speak with such certainty about the future… maybe they don't see it as clearly as they think.

AngularJS is not perfect. I'm not about to say that it is. It has problems; over time they've been worked on and reduced. I'm sure if I went and picked up React/Flux/Relay/whatever (come on Facebook, give a name to your stack!) or Ember.js I'd see much the same thing. Lots of great people are working on them and they have thousands of adopters. Most of the time, for most projects, it works pretty well.

If you're having problems with AngularJS, it may be that you need to learn more, look at some open source, maybe even pull in a mentor with more experience. Alternatively, if you're struggling and you think you've put in more than enough effort, look at one of the major alternatives and see if it works better for you. I haven't put in as much time on Ember.js, but I've looked at Facebook's offering and it is very different from what Google put together.

A Very Different Eulogy for RadioShack

Recently a "eulogy" for RadioShack was making the rounds online. Let's ignore for the moment that it's a little harsh to have a eulogy before somebody is even dead; RadioShack is definitely on life support, so I certainly understand why now seemed like the time. This could easily be their last Christmas.

The thing that struck me was how different the experience described was from my own. I grew up in Fort Worth, TX, and RadioShack has been here, well, forever. After I graduated from college in the late 1980s, I went to work for the Tandy Corporation from 1987-1992 (and then a couple more years at AST Computers after they bought Tandy's computer business). So I thought I'd give the company a different eulogy, one from the perspective of a different era and a different part of the business, and one that's perhaps more nostalgic and melancholy and less bitter.

I

It starts with the Texas Employment Commission (TEC). During my summers off from attending Rice I had taken one job making pizzas at Mazzio’s and another working in the Plans & Specs division of the Army Corps of Engineers. Trust me, if you are ever given that choice, pick pizza.

After my job at the Army Corps I was cured of taking any job just because it wasn't food service. I went down to the TEC and told them I wanted something where I would be programming. I figured that after years of BASIC programming on my own and three years of learning languages at school, somebody would want to hire me to do something. But the response from the lady at the TEC was that a) I should forget any idea of doing something like that, or even computer work of any kind, and b) maybe she could find me something that wasn't menial labor, but I shouldn't expect much just because I was almost done with college.

Fortunately, I completely ignored her horrific, depressing advice (and I mean depressing in both senses of the word; she seemed as depressed as the advice she gave) and went down to fill out an application at Tandy. They hired me quickly and told me I could come in and test software. I think I did that for about five days before they realized I knew Pascal, Modula-2, C, some basic Unix commands, and more. I was immediately moved over to start programming in C for Tandy.

II

Varsity Scripsit

The people I had gone to work for in the software division were working on the Varsity Scripsit word processor. It was a pretty good little word processor which ran on MS-DOS PCs, and long before the mantra of "eat your own dog food" became common, most of the team was actually using a stripped-down version of the word processor as a text editor to edit the code for the word processor! The Scripsit word processor line had been fairly successful for the company on previous machines (I think the Xenix-based Model 6000 and others), so this was one of their first forays into PC applications.

Since the core of the project was already pretty solid, most of the team was working on a multitude of expansions for it including:

  • A Calculator
  • Printing graphics on dot matrix printers
  • Dictionary/Thesaurus
  • Macros
  • The list went on and on

However, after adding all of that, memory constraints on real world machines made it clear that it wasn’t going to work with the kitchen sink attached to it, so the dot matrix graphic printing I had worked on and several other features all had to be removed to get it to load and run. C’est la vie.

P.S. There were seven people working on this software, including Kevin (more on him later), who had written the editor/core of the word processor and was one of only two people in the crew who had a hard drive in his machine. Every other machine was floppy only. You've never experienced software development until you're doing all of your editing and compilation off of 5 1/4″ floppy disks.

III

After I went back to school, either I contacted Tandy or they contacted me, I can't remember which, but they told me they would really like me to come in and work even during the brief period I would be home for the Christmas holidays. This was a) enormously flattering and b) a source of serious money for a kid in college. I think I might have given some real gifts that year.

Tandy was working on their Tandy 1000 series, which were actually not clones of the IBM PC but of the IBM PCjr. They had graphics built in (320 x 200 in four colors! Booyah!) and, thanks to some really clever engineering from one of their crew, they were adding digital audio by piggybacking on the existing hardware that supported joysticks. Apparently the digital-to-analog converters had multiple uses, and he figured out how to use them for something which wouldn't be common on other PCs for years to come (think SoundBlaster cards) at only a few cents of additional cost.

As with Varsity Scripsit, the digital audio recording and playback software (DeskMate Sound) was again being written by Kevin (yes, he really was that good). He was also hard at work on a music program (DeskMate Music) which actually used sampled instruments digitized with DeskMate Sound.

The original post embeds a video here; if you don't want to watch it all the way through, skip to ten minutes in and listen to the piano. Kevin was resampling notes from a handful of actual notes which could be loaded into memory for each instrument (there was not nearly enough memory in those days to have a full range of high-quality samples for each instrument, so he was adjusting them on the fly to make the missing notes). I still marvel at it.

IV

My boss for both Varsity Scripsit and the DeskMate Music/Sound work was Jeff. He was a great guy and one of my favorite memories of him was him playing with the Sound/Music app combo. He wanted to wrap both of them with another app which could run in the stores. If you used them in conjunction you could record a simple sound in Sound (say a person saying “Meow” or making a sound with keys) and then Music could load the recorded sound and play Jingle Bells scaling the single “note” up and down the entire scale. It was pretty funny to listen to and seemed like exactly the kind of thing which, if kept clean, would attract people in the stores. Sadly, I don’t think it ever got built. Maybe I should make an online app for it someday.

One thing to note around this time was that Jeff had hiccups continuously. All the time. He saw doctors about it but nothing they tried helped any. It just made him miserable for a long period.

V

After I graduated from school I went straight to work for Tandy. They had made me a good offer, and I worked for them for several years pretty happily. Eventually they built a new "Technology Center" next door to the headquarters and moved us over there. Supposedly they spent $30 million on it, back when $30 million was a whole lot of money.

I tried not to be much of a troublemaker, but I had posters up the entire time I worked at Tandy. In fact, I posted Calvin and Hobbes on the glass of my office every day and people would stop to read it. When I moved to the Technology Center, the word came down that there wasn't going to be any more of that. They had paid good money for the place, it was attractive (not really, it was a big circular cube farm), and it didn't need posters or anything like that. They were going to select some artwork and post it on various walls and halls throughout the place to make it really nice (they never did).

So I decided to parody one of the multitude of memos we got on topics like this every day. It was really easy: I cut the top and bottom off one memo, wrote my own, pasted those sections atop mine, and photocopied the result to produce a new memo from management. It explained that they were very happy with the all white/gray/creme motif and that employees would need to start wearing clothes which matched, and only clothes which matched. Also, the steady stream of vendors we had coming in to sell us stuff (software and hardware) would be given colored ponchos to wear over their clothes so they wouldn't clash. That last part was where I went so ridiculous that I figured everybody would know it was a joke. I don't think people read that far, or if they did, they were humor challenged. Quite a few people took it seriously and several got very pissed off about it. But nobody ever fingered me as the guy behind it.

VI

I've thought about it, and most of the projects I worked on while I was there don't stand out in my mind as particularly interesting until the coming of "multimedia" machines. Tandy had found a source for a CD-ROM drive that didn't cost a fortune, which they could start bundling into their machines and selling as an add-on for existing PCs. Around that one piece, they crafted the idea of the Tandy Sensation! machine (yes, it had the exclamation mark). It was a Windows PC with sound, good graphics, and a CD-ROM drive built in.

Our CD-ROM burner had cost a fortune and was two big boxes hooked to a PC. After burning innumerable useless discs over the course of our work, we eventually figured out that even the slightest amount of work being done on the PC would cause it to screw up the disc. It had to be disconnected from the network and left untouched for the duration of a long burn to produce a disc we could use. That memory pairs with one of Jeff on the phone with a vendor in Hong Kong trying to get CD-ROM blanks for us to use. They were $50 each, and he was trying to figure out how to order 100 of them and get them flown to us in time to be useful.

I did lots of work on graphics and animation for this machine and it was a lot of fun. Plus, Sensation! sold very well for Tandy. I was told that they sold something like 17,000 units fairly early and that was apparently quite good. Unfortunately, our success with Sensation! set us up to be the go-to people to work on the worst mess I ever saw while working for them.

VII

Philips had brought out their CD-i machine, and for some insane reason there were people within Tandy who wanted to copy it. It already seemed to be a clear-cut commercial failure. It was too expensive, it didn't seem to offer any software that people found compelling, and Philips was spending more money marketing each unit than they were making if they actually sold one. Sometimes that happened with video game systems of the time, but those systems actually sold enough software to end up being profitable. CD-i was clearly not doing that.

But none of that dissuaded the people who believed in this project at Tandy. So the Tandy Video Information System (VIS) was born. Here's a link to information about it at Wikipedia, but trust me, it's fairly dry and in no way conveys how much blood, sweat, and tears people poured into it, nor what a crappy boat anchor it was.

Let me just lead off with the video embedded in the original post:

I really hope you watched that all the way to the end. It's hammy, tone deaf, and ridiculous in almost every way. I don't know any engineer who worked on the project, software or hardware side, who did not tell them not to do it. I bought a Sega Genesis to bring in to show them Sonic running on the console. It was blazingly fast, and nothing, absolutely nothing, about the VIS was fast. The VIS was a 286 processor in a box that took forever to start up and run your game or educational program, and if you wanted to boot it into Windows, it took forever times forever to do that.

They did focus groups and spent considerable money polling people about what they wanted from such a machine and what they would pay for it. The answer was that people were largely uninterested, and those who were interested felt it shouldn't cost more than $400. Tandy didn't think they could sell it for less than $800. That should have stopped them cold, but like everything else, it didn't.

For whatever reason, Microsoft was invested in this idea too. They had a stripped-down version of Windows they imagined would start making its appearance in small, appliance-like boxes such as this. However, Windows, even stripped down, was the antithesis of anything you wanted to boot over and over again on cheap processors with no memory. Eventually they licensed it to Tandy for inclusion in the VIS for a quarter ($0.25), at a time when Tandy was probably paying $20 to include Windows with their regular PCs. I say Microsoft was "invested" in this idea, but the truth is I think they were invested in it the same way a chicken is invested in a ham-and-eggs breakfast. The problem is, Tandy was the pig. I was told that Tandy spent somewhere around $75 million developing the VIS, and it sold handfuls of units (after you figure in all the returns). Eventually companies like Tiger started selling bundles which included every software title ever produced for the machine, and I think they were still only selling them at $99.

VIII

I worked for Jeff for many years at Tandy and one day he came by my cube to tell me that he needed to go in and have some surgery. He didn’t make a huge deal about it but it was clear that he was kind of sad. I didn’t think too much about it and I should have asked him to sit down and talk to me. I didn’t.

The next week his boss broke the news that Jeff had pancreatic cancer, and that after they opened him up on the operating table they just closed him back up and sent him to recovery. He died some hours later.

The hiccups he had suffered with years before had actually been one of several symptoms according to an oncologist who diagnosed him.

He definitely deserved a better version of me than he got. I’m sorry Jeff. I really am.

IX

It wasn't that much later that Tandy sold its computer business to AST Research. At the time, AST was among the top five PC manufacturers and doing very well. Pretty much everyone who had worked for Tandy continued to work for AST for the next couple of years, initially in the same Technology Center but later in a commercial area on the north side of Fort Worth. I moved on to Crystal Semiconductor with some of my colleagues, and eventually poor business decisions caught up with AST.

As I said, my account lacks the pathos (with the exception of the VIS) that the other eulogy had, but it's my perspective, and I didn't want the other one to be the only thing everybody heard about Tandy/RadioShack if this is indeed the end for them.

Extra Life

Here's my last-minute appeal for donations for Extra Life. I'm going to be doing a 24-hour gaming marathon tomorrow (board games in my case) to raise money for Children's Miracle Network Hospitals, and specifically Cook Children's Hospital. Cook Children's is a non-profit hospital, and money given will help purchase medicine and equipment and provide treatment for the kids there.

I’ve made it past my original goal ($500), second goal ($750), and third goal ($1000), but that means nothing. Those are artificial numbers I used to encourage myself and others. If you can give, you should. I appreciate it and so do so many other people. Thank you!

http://www.extra-life.org/index.cfm?fuseaction=donorDrive.participant&participantID=89859

Ideas are like Legos

Want to learn how I think? No. No, you don't; but I'm going to tell you anyway. Every day that goes by, I add to a long list of stuff which interests me: things that are neat from a technical standpoint, or that let me do something I didn't know how to do or didn't want to figure out for myself, etc.

Then I let those pieces rattle around in my head until something occurs to me about how I could combine them with what I already know to make something interesting.

So here’s a brain dump of all the stuff rattling around up there right now:

Tools

  • Map/Reduce
    • pjs
  • Flow
    • NodeRed
    • NoFlo
    • dat

Browser Compatible Libraries

  • OAuth
    • hello.js
  • Encryption
    • TweetNaCl
  • QR Codes
    • qrcode.js
    • jquery.qrcode.js
  • Color
    • randomColor
  • Generate Files
    • FileSaver.js
  • Data
    • TingoDB
  • Markdown Editor
    • EpicEditor
  • Communications
    • WebRTC
    • Socket.IO
  • Genetic Algorithms
    • genetic-js
  • Game Development
    • Phaser.js

Server

  • Authentication
    • Passwordless
  • Framework
    • MEAN.IO/MEAN.JS

“That’s a mess of divergent crap you’ve got there John,” you might say; and you’d be right. But it’s kind of like looking at a huge pile of Legos. What do you see to build when you look?

I’m Open Sourcing Two AngularJS Projects

By far, the most successful open source thing I’ve done in years is the project I called airquotes. It was my first project built using AngularJS and I published it early on to give others a chance to see something finished which had been built using it other than a to-do list.

airquotes on Github

Since then I’ve built some other projects outside of my day job using AngularJS and though not particularly profitable they are diverse (to say the least) and I’ve decided I’d like to open source them as well to see if they can help people.

First up is PaperQuik (PaperQuik.com). It's an app which asks a few simple questions and then generates a printable sheet of paper (lined paper, dot paper, graph paper, etc.) in the browser. Unlike most sites like this, it doesn't just have a canned set of PDF files it dispenses, nor does it have a server process building them. Instead it uses the HTML5 canvas to draw an image of the paper and then helps you print that image.

PaperQuik on Github
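
For anyone curious what "draw the paper on a canvas" looks like in practice, here's a minimal sketch of the general idea. This is not PaperQuik's actual code, just the standard canvas API drawing a simple grid and producing an image you could drop into a print-friendly page (the element id is hypothetical):

// A minimal sketch (not PaperQuik's code): draw simple graph paper on a
// canvas, then grab it as an image for printing.
function drawGraphPaper(canvas, spacing) {
  var ctx = canvas.getContext('2d');

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = '#9ec3e6'; // light blue grid lines
  ctx.lineWidth = 1;

  ctx.beginPath();
  for (var x = spacing; x < canvas.width; x += spacing) {
    ctx.moveTo(x, 0);
    ctx.lineTo(x, canvas.height);
  }
  for (var y = spacing; y < canvas.height; y += spacing) {
    ctx.moveTo(0, y);
    ctx.lineTo(canvas.width, y);
  }
  ctx.stroke();
}

var canvas = document.getElementById('paper'); // hypothetical element id
drawGraphPaper(canvas, 20);

// toDataURL() yields a PNG of the drawing which can be placed in an <img>
// on a page styled for printing.
var imageUrl = canvas.toDataURL('image/png');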

The second project is ClearAndDraw (ClearAndDraw.com). It's a simple webapp that I threw together in just a few evenings because I wanted to keep track of my cards and dice for the game Marvel Dice Masters: Avengers vs. X-Men. It's not nearly as complicated as the paper generation in PaperQuik, but it does show real-time filtering using AngularJS, and it stores all of the information you give it in the browser's localStorage so it doesn't forget anything you enter.

ClearAndDraw on Github

Neither of these projects has any back-end at all; they are served up strictly as a set of static HTML, CSS, JavaScript, and images and do all of their work client side. That's not to say that I don't want to build a back-end; ClearAndDraw.com in particular cries out for one so users can enter card/dice information and then retrieve it from any browser on any machine, rather than always having to return to the same place where the previous cataloging was done. But the initial solution was simple and worked as a starting point. It also presents an example of how a site might save data locally even for unregistered users and then later save that to a back-end data store if the user creates an account.
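
To make the localStorage idea concrete, here's a rough sketch (not ClearAndDraw's actual code) of the pattern in AngularJS 1.x: load whatever was saved on a previous visit and write the collection back whenever it changes. The module, controller, and storage key names are all hypothetical:

// Hypothetical sketch of persisting a collection to the browser's localStorage.
angular.module('collectionApp', [])
  .controller('CollectionCtrl', function ($scope) {
    var STORAGE_KEY = 'collection';

    // Load anything saved during a previous visit (or start empty).
    $scope.cards = JSON.parse(localStorage.getItem(STORAGE_KEY) || '[]');

    $scope.addCard = function (name) {
      $scope.cards.push({ name: name });
      save();
    };

    function save() {
      localStorage.setItem(STORAGE_KEY, JSON.stringify($scope.cards));
    }
  });

Real-time filtering in the markup is then just an ng-repeat with a filter, something like <li ng-repeat="card in cards | filter:search">{{card.name}}</li>. If a user later created an account, that same JSON blob could be pushed up to a back-end store instead of (or in addition to) localStorage.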

I also took an evening and updated airquotes to the current version of AngularJS (1.2.25) and deployed it to a GitHub page so people can play with it without having to deploy it locally (like PaperQuik and ClearAndDraw).

Dead(?) MacBook

This morning we encountered a MacBook that was not just dead, it was super-dead. It wouldn't come on, and even holding down the power button for ten seconds wasn't enough to reset it and get it to start back up.

So I learned some new keystrokes to press to get out of a kind of situation I'd never encountered before:

Reset the SMC (http://support.apple.com/kb/HT3964)

  • Hold down the Shift + Control + Option keys (all on the left-hand side of the built-in keyboard) plus the power button.
  • Release, then hit the power button again.

Reset the NVRAM / PRAM (http://support.apple.com/kb/ht1379)

  • Hold down Command + Option + P + R with the machine powered on.

It took the last one to get the machine back to a working state, something I’ve never seen happen with a Mac before, but if I ever see it again, I’ll know what to do.