A Small Rant About Being IDE-Dependent

{Sort of a Rant}

I recently saw this tweet.

I responded with this tweet.

Now I need to describe some context around this response real quick. Here are the key points behind this tweet from my context in the software development industry.

  1. I’m not a .NET developer. I’m a software developer, or application programmer, coder, hacker, or, more accurately, a solutions developer. I don’t tie myself to one stack. I’ve developed with C# & VB.NET in Visual Studio. I’ve done C++ and VB 4. I’ve done VBA and all sorts of other things in the Microsoft space, on Windows and for the web with Windows technology. About 5 years ago I completely dropped that operating system dependency and have been free of it for those 5 years. I doubt I’ll ever want to couple an application to that monstrous operating system again; it is far too limiting and has no significant selling point anymore.
  2. In addition to the .NET code I’ve written over the years, I’ve also built a few dozen Erlang applications (wish I’d known about Elixir at the time), a ton of Node.js based APIs and web applications, and even some Ruby on Rails and Sinatra. These languages and tooling stacks introduced me to the reality I longed for: a clean, fast, understandable reality, with configuration files that are readable and convention that gives me real power to get things done and move on to the next business need. These stacks let me focus on code, business, and research value, and less on building up a tool stack just to get one single deployment out the door. Since then I haven’t looked back, because the heavy-handed stacks of legacy Java, .NET, and related things have only weighed heavily on small businesses and startups trying to add business value. In the lands I live in, that is San Francisco to Vancouver BC and the cities in between, Java and .NET can generally take a hike, because businesses have things to do, namely making money or getting research done, not piddling around building up a tool stack.

So how was this new reality created? It isn’t really new; the new thing was the heavy-handed IDE universe of fat editors that baby a developer through every step. They create laziness in thinking and do NOT encourage a lean, efficient, simple way to get an application built and running. It doesn’t have to be this way. With a small amount of developer discipline, or dare I say craftsmanship, most of these fat IDEs and tightly coupled projects, with their massive, unreadable catastrophofuck of spaghetti XML configuration, can just go away.

Atom, Sublime, and other editors before them harnessed this clean, lean, and fast approach to software development. A Node.js project needs only one file and one command to get started.

npm init

A Ruby on Rails Project is about the same.

rails new path/to/your/new/application

That’s it, and BOOM you’ve got an application project base to start from.
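And the project base itself stays readable. As a sketch, here is the kind of package.json that npm init scaffolds (the field values below are illustrative stand-ins; npm fills them in from your answers to its prompts, or from defaults if you pass -y):

```shell
# Create a project directory and a package.json of the kind `npm init`
# would scaffold; every field value here is an illustrative stand-in.
mkdir -p example-app
cat > example-app/package.json <<'EOF'
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "ISC"
}
EOF
```

One readable JSON file, no XML in sight, and the whole thing is editable in any editor you like.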

Beyond that however, using a tool like Yeoman opens up a very Unix style way of doing something. Yeoman has a purpose, to build scaffolding for applications. That is its job and it does that job well. It has a pluggable architecture to enable you to build all sorts of scaffold packaging that you want. Hell, you can even create projects with the bloated and unreadable XML blight that pervades many project types out there from enterprisey platforms.

Take a look at Yeoman; it is well worth it. This is a tool that keeps the tooling we do use loosely coupled, so we don’t have a massive, bloated (5+ GB) installation of nonsense to put on our machines. Anybody on any platform can load up Yeoman, grab an editor like Atom, Sublime, or Visual Studio Code (not to be confused with the bloated Visual Studio), and just start coding!

Take a look at some of the generators that Yeoman will build projects for you with -> http://yeoman.io/generators/ There are over 1500! No more need to tightly couple this into an IDE. Just add a Yeoman plugin to your editor of choice. Boom, all the features you need without the tight coupling in the editor! Win, win, win, win, and win!
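Getting there is itself just a couple of commands, assuming you have Node.js and npm installed (generator-webapp is one example of the many generators listed on yeoman.io):

```shell
# Install Yeoman and one of its generators globally via npm (one time)
npm install -g yo generator-webapp
# Scaffold a new project in the current directory by answering a few prompts
yo webapp
```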

One more part of my rant about bloated editors, and don’t get me wrong, I love some of the features of the largesse of editors like Visual Studio and WebStorm. But one huge gripe I have is that a community has absolutely zero motivation or vested interest to add features, fix bugs, or do anything in relation to editors like Visual Studio or WebStorm. ZERO reason to help, only to file complaints or maybe a bug report. That’s cool, I’m glad these editors exist, but this model isn’t the future. It’s slowly dying, and tightly coupled, slow, massively bloated editors aren’t going to be sustainable. They don’t adapt to new stacks, languages, or even existing ones.

Meanwhile, Atom, Sublime, and even Visual Studio Code and other lean editors have plugin and various adapter patterns that allow one to add support for new languages, stacks, frameworks, and related tooling without tight coupling to that thing. This gives these editors the ability to add features and capabilities for stacks at a dramatically faster rate than the traditional fat IDEs.

This is the same idea applied to tooling that shows up in patterns a software developer ought to know. Think of patterns like separation of concerns and loose coupling; don’t tie yourself into an untenable and frustrating toolchain. Keep it loosely coupled and I promise, your life will get easier and that evening beer will come a little bit sooner.

{Sort of a Rant::End}

…and this is why I say, do NOT tie project creation into an editor. Instead keep things loosely coupled and let’s move into the future.

Don’t Learn to Code, That’s Just Nonsense… Learn Instead!

I keep reading post after post after post. “Learn to code”, “Everybody Should Learn to Code”, “In the Future Everyone Will Need to Know How to Code, Learn Now!”, and so many more. I’m going to point out a few very important tips for life. These tips are especially important when it comes to programming, or NOT programming.

Then I read something that was like a lovely breath of fresh air. It was as if someone had actually pondered the world for just one more second past “LEARN TO CODE” and thought, “naw, that’s probably a bit much”. I read Yevgeniy Brikman’s blog post “Don’t Learn to Code, Learn to Think”. You might have seen it. It’s been picked up by more than a few other media sources and extensively tweeted.

Now, in Yevgeniy’s article there is a lot of talk about still learning computer science. I agree with that, versus just learning the semantic and pedantic details of programming. However, I’m going a step further and advising this…


In addition to that advice, here’s some serious advice that goes above and beyond what to learn in school. This very short list of tips is about what to learn for life, to do well in life, and not just for the petty demands of school. These tips are for any competent person who wants to excel in anything they do. If you’re about to get into high school, college, or whatever, these are the things you should be focused on and thinking about extensively.

Systems Thinking

Don’t learn to write code just to learn to program. That’s absurd and a waste of time. Learn to think, and not just to think, but to think about systems. Learn to understand how systems work together at a deep level.

As an example, dig into something like the freight system in North America (or Europe) and figure out how it works functionally, from the strategic to the tactical level.

Learn to throw away confirmation bias and cognitive dissonance, and always ask yourself if you have the data you need to draw connections about life in an accurate and meaningful way. If you don’t, keep asking questions and keep learning.


Become an autodidact. You instantly become dramatically more powerful than anyone who requires structured teaching to attain new knowledge. It opens up doors and instantly gives you an advantage in any conversation, any new skill you want, and in visiting anywhere in the world. When you learn to learn, the world changes for you. You’ll be able to understand and deduce conclusions and solutions faster than any counterpart who can’t do this. It also gives you a significant advantage in programming, if you do decide you want to code.

In the process of becoming an autodidact one helpful idea is to actually study learning. Take a course, get a book, find content, and just observe yourself. Get as much information as you can about how people learn and read it or watch videos on it. Whatever the case, get as many ideas and strategies about how you learn and what the most effective methods are for you to gain a deep, systemic, and ordered knowledge of topics you want to understand.


These two skills will give you far more of an advantage in life than merely learning to program, or taking classes, or going to school to learn to program/code/hack/whatever. So do yourself a favor, don’t get suckered by the “Learn programming and get a good job QUICK!” nonsense. If you gain the two skills I mentioned and get your brain sorted, you’ll do much better than by merely learning to program, and you’ll thank yourself in the future!

I can promise you that!  :)

__4 “CD Is Working, Let’s Get a Site Live with Loopback!”

Since it has been more than a few weeks let’s do a quick recap of the previous posts in this series.

  1. “Introducing the Thrashing Code Team & Projects” – Know who’s working on what and what the projects are.
  2. “Getting Started, Kanban & First Steps for a Sharing App” – Getting the kanban put together and the team involved.
  3. “Starting a Basic Loopback API & Continuous Integration” – Getting the skeleton of the API application set up and the continuous integration services running.
  4. “Going the Full Mile, Continuous Delivery” – Here the team got the full continuous delivery process set up for ongoing development efforts.

In this article of the series I work with some of my cohort to get initial parts of the application deployed to production. For the first part of this, let’s get some of the work that Norda has done into the project and get it running on the client side.

Client Side

client directory of the loopback project.


The first thing I needed was to get a static page set up. This page would then be used to submit an email address to the server side, which would handle whatever processing I’d have for the email message confirmation and related actions.

In Loopback, it is very easy to set up a static page. I just needed a simple index.html page, so I created an empty one in the client directory of the project.

The first thing I did was literally put an index.html file with the words “this should work” in the client directory of the project. But after some tweaking and adding in Norda’s work the team ended up with the following.

The next thing to do is set up the Loopback.io framework to host a static site page.

Static Page via Loopback.io

This part of the article is taken pretty much straight out of the static page hosting Strongloop Loopback.io Documentation.

Applications typically need to serve static content such as HTML and CSS files, client JavaScript files, images, and so on.  It’s very easy to do this with the default scaffolded LoopBack application.  You’re going to configure the application to serve any files in the /client directory as static assets.

First, you have to disable the default route handler for the root URL.   Remember back in Create a simple API (you have been following along, haven’t you?) when you loaded the application root URL, http://localhost:3000/, you saw the application respond with a simple status message such as this:

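The status message is a small JSON document along these lines (the timestamp and uptime values will differ per run):

```json
{ "started": "2015-07-26T17:03:51.153Z", "uptime": 42.5 }
```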

This happens because by default the scaffolded application has a boot script named root.js that sets up route-handling middleware for the root route (“/”):


module.exports = function(server) {
  // Install a `/` route that returns server status
  var router = server.loopback.Router();
  router.get('/', server.loopback.status());
  server.use(router);
};
This code says that for any GET request to the root URI (“/”), the application will return the results of loopback.status().

To make your application serve static content you need to disable this script.  Either delete it or just rename it to something without a .js ending (that ensures the application won’t execute it).
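A minimal sketch of that rename, using an empty stand-in file so the commands run anywhere (in a real LoopBack 2.x scaffold the file lives at server/boot/root.js):

```shell
# Stand-in for the scaffolded boot script:
mkdir -p server/boot
touch server/boot/root.js
# Rename it so it no longer ends in .js and won't be executed at boot:
mv server/boot/root.js server/boot/root.js.disabled
```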

Define static middleware

Next, you need to define static middleware to serve files in the /client directory.

Edit server/middleware.json.  Look for the “files” entry:

  "files": {},


Add the following:

  "files": {
    "loopback#static": {
      "params": "$!../client"
    }
  },
These lines define static middleware that makes the application serve files in the /client directory as static content.  The $! characters indicate that the path is relative to the location of middleware.json.

Add an HTML file

Now, the application will serve any files you put in the /client directory as static (client-side) content.  So, to see it in action, add an HTML file to /client.  For example, add a file named index.html with this content:


        Some static content...  but just look at the big HTML file below!  :)

Of course, you can add any static HTML you like; this is just an example.

Graphic Assets

Norda set up the project with a number of assets for the current page creation as well as future pages the site will need. It’s a high quality theme, with the Coder Swap logo added at the respective key locations. The following are the graphics assets included in the project under the client directory.

The key files that are included that will be used for our first site we deploy are the various favicon, logo, and related images. You can see those in the root of the client directory. There are a whole bunch of them because of the funky mobile design requirements.


The first page I’ve setup is a simple sign up page, with no real functionality. Just something to get started with to build off of. The code for that page looks like this.

<!DOCTYPE html>
<html>
<head>
    <title>Code Swap - Notify Me!</title>
    <link href="http://fonts.googleapis.com/css?family=Lato:100,300,400,700" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/bootstrap.min.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/font-awesome.min.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/se7en-font.css" media="all" rel="stylesheet" type="text/css"/>
    <link href="stylesheets/style.css" media="all" rel="stylesheet" type="text/css"/>
    <link rel="shortcut icon" href="favicon.ico">
    <link rel="apple-touch-icon" sizes="57x57" href="apple-icon-57x57.png">
    <link rel="apple-touch-icon" sizes="114x114" href="apple-icon-114x114.png">
    <link rel="apple-touch-icon" sizes="72x72" href="apple-icon-72x72.png">
    <link rel="apple-touch-icon" sizes="144x144" href="apple-icon-144x144.png">
    <link rel="apple-touch-icon" sizes="60x60" href="apple-icon-60x60.png">
    <link rel="apple-touch-icon" sizes="120x120" href="apple-icon-120x120.png">
    <link rel="apple-touch-icon" sizes="76x76" href="apple-icon-76x76.png">
    <link rel="apple-touch-icon" sizes="152x152" href="apple-icon-152x152.png">
    <link rel="apple-touch-icon" sizes="180x180" href="apple-touch-icon-180x180.png">
    <link rel="icon" type="image/png" href="favicon-192x192.png" sizes="192x192">
    <link rel="icon" type="image/png" href="favicon-160x160.png" sizes="160x160">
    <link rel="icon" type="image/png" href="favicon-96x96.png" sizes="96x96">
    <link rel="icon" type="image/png" href="favicon-16x16.png" sizes="16x16">
    <link rel="icon" type="image/png" href="favicon-32x32.png" sizes="32x32">
    <meta name="msapplication-TileColor" content="#ffffff">
    <meta name="msapplication-TileImage" content="mstile-144x144.png">
    <meta name="msapplication-config" content="browserconfig.xml">

    <script src="http://code.jquery.com/jquery-1.10.2.min.js" type="text/javascript"></script>
    <script src="http://code.jquery.com/ui/1.10.3/jquery-ui.js" type="text/javascript"></script>
    <script src="javascripts/bootstrap.min.js" type="text/javascript"></script>
    <script src="javascripts/modernizr.custom.js" type="text/javascript"></script>
    <script src="javascripts/main.js" type="text/javascript"></script>
    <meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" name="viewport">
</head>
<body class="login2">
<!-- Registration Screen -->
<div class="login-wrapper">
    <a href="./">
        <!-- src and alt values assumed; they were truncated in the original -->
        <img width="100" src="logo.png" alt="Coder Swap logo"/>
    </a>
    <!-- form wrapper and input types assumed to make the markup well-formed -->
    <form>
        <div class="form-group">
            <div class="input-group">
                <span class="input-group-addon">
                    <i class="fa fa-envelope"></i>
                </span>
                <input class="form-control" type="text"
                       value="Enter your email address">
            </div>
        </div>
        <input class="btn btn-lg btn-primary btn-block" type="submit"
               value="Register for updates!">
    </form>
</div>
<!-- End Registration Screen -->
</body>
</html>

As you can see, the bulk of the page is favicon and logo related nonsense, while about a third of the markup is the actual HTML for the form itself. What that looks like when rendered is something like this.

The registration page.

Once that page is up we can then commit to the production branch as outlined in the previous blog entry.

btw – If you’re curious (especially if you’ve read the intro blog entry about the mock team), you might be wondering where and how I created this gorgeous theme. Well, I didn’t; I purchased this sweet theme from ThemeForest. The specific theme is se7en.

_____101 |> F# Coding Ecosystem: Paket && Atom w/ Paket

One extremely useful tool to use with F# is Paket. Paket is a package manager that provides a super clean way to manage your dependencies. Paket can handle everything from NuGet dependencies to git or file dependencies. It really opens up your project capabilities, letting you easily pull in and handle dependencies wherever they are located.
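As a sketch of that flexibility, a single paket.dependencies file can mix source types; the github line below is the example from Paket’s own docs:

```
source https://nuget.org/api/v2

nuget FsUnit
nuget FAKE
github forki/FsUnit FsUnit.fs
```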

I cloned the Paket Project first, since I would like to have the very latest and help out if anything came up. For more information on Paket check out the about page.

git clone git@github.com:fsprojects/Paket.git

I built that project with the respective ./build.sh script and all went well.


NOTE – Get That Command Line Action

One thing I didn’t notice immediately in the docs (I’m putting in a PR right after this blog entry) was any way to actually get Paket set up for the command line. On bash, Windows, or whatever, it seemed a pretty fundamental missing piece, so I’m going to doc that right here, but also submit a PR based on the issue I added here. It could be I just missed it, but either way, here’s the next step that will get you set up the rest of the way.


Yeah, that’s all it was. Kind of silly, eh? Maybe that’s why it isn’t documented anywhere that I could see. After the installation script is run, just execute paket and you’ll get the list of the various commands, as shown below.

$ paket
Paket version
Command was:
available commands:

	add: Adds a new package to your paket.dependencies file.
	config: Allows to store global configuration values like NuGet credentials.
	convert-from-nuget: Converts from using NuGet to Paket.
	find-refs: Finds all project files that have the given NuGet packages installed.
	init: Creates an empty paket.dependencies file in the working directory.
	auto-restore: Enables or disables automatic Package Restore in Visual Studio during the build process.
	install: Download the dependencies specified by the paket.dependencies or paket.lock file into the `packages/` directory and update projects.
	outdated: Lists all dependencies that have newer versions available.
	remove: Removes a package from your paket.dependencies file and all paket.references files.
	restore: Download the dependencies specified by the paket.lock file into the `packages/` directory.
	simplify: Simplifies your paket.dependencies file by removing transitive dependencies.
	update: Update one or all dependencies to their latest version and update projects.
	find-packages: EXPERIMENTAL: Allows to search for packages.
	find-package-versions: EXPERIMENTAL: Allows to search for package versions.
	show-installed-packages: EXPERIMENTAL: Shows all installed top-level packages.
	pack: Packs all paket.template files within this repository
	push: Pushes all `.nupkg` files from the given directory.

	--help [-h|/h|/help|/?]: display this list of options.

Paket Elsewhere && Atom

If you’re interested in Paket with Visual Studio I’ll let you dig into that on your own. Some resources are Paket Visual Studio on Github and Paket for Visual Studio. What I was curious about, though, was Paket integration with either Atom or Visual Studio Code.

Krzysztof Cieślak (@k_cieslak) and Stephen Forkmann (@sforkmann) maintain the Paket.Atom Project and Krzysztof Cieślak also handles the atom-fsharp project for Atom. Watch this gif for some of the awesome goodies that Atom gets with the Paket.Atom Plugin.

Click for fullsize image of the gif.


Getting Started and Adding Dependencies

I’m hacking along and want to add some libraries. How do I do that with Paket? Let’s take a look. This is actually super easy, and it doesn’t make the project dependent on peripheral tooling like Visual Studio.

The first thing to do, inside the directory or project where I need the dependency, is to initialize it for Paket.

paket init

The next step is to add the dependency or dependencies that I’ll need. I’ll add a Nuget package that I’ll need shortly. The first package I want to grab for this project is FsUnit, a testing framework project managed and maintained by Dan Mohl @dmohl and Sergey Tihon @sergey_tihon.

paket add nuget FsUnit

When executing this dependency addition the results displayed show what other dependencies were installed and which versions were pegged for this particular dependency.

✔ ~/Codez/sharpPaketsExample
15:33 $ paket add nuget FsUnit
Paket version
Adding FsUnit to /Users/halla/Codez/sharpPaketsExample/paket.dependencies
Resolving packages:
 - FsUnit 1.3.1
 - NUnit 2.6.4
Locked version resolution written to /Users/halla/Codez/sharpPaketsExample/paket.lock
Dependencies files saved to /Users/halla/Codez/sharpPaketsExample/paket.dependencies
Downloading FsUnit 1.3.1 to /Users/halla/.local/share/NuGet/Cache/FsUnit.1.3.1.nupkg
NUnit 2.6.4 unzipped to /Users/halla/Codez/sharpPaketsExample/packages/NUnit
FsUnit 1.3.1 unzipped to /Users/halla/Codez/sharpPaketsExample/packages/FsUnit
3 seconds - ready.

I took a look in the paket.dependencies and paket.lock files to see what was added for me with the paket add nuget command. The paket.dependencies file now looked like this.

source https://nuget.org/api/v2

nuget FsUnit

The paket.lock file looked like this.

  remote: https://nuget.org/api/v2
    FsUnit (1.3.1)
      NUnit (2.6.4)
    NUnit (2.6.4)

There are a few more dependencies that I want, so I went to work adding those. First of this batch was FAKE (more on this in a subsequent blog entry), which is a build tool based on RAKE.

paket add nuget FAKE

Next up was FsCheck.

paket add nuget FsCheck

The paket.dependencies file now had the following content.

source https://nuget.org/api/v2

nuget FAKE
nuget FsCheck
nuget FsUnit

The paket.lock file had the following items added.

  remote: https://nuget.org/api/v2
    FAKE (4.1.4)
    FsCheck (2.0.7)
      FSharp.Core (>=
    FSharp.Core (
    FsUnit (1.3.1)
      NUnit (2.6.4)
    NUnit (2.6.4)

Well, that got me started. The code repository at this state is located on this branch of the sharpSystemExamples repository. So, on to some coding and the next topic. Keep reading, subscribe, or hit me up on twitter @adron.


Linux Containers

Docker Tips n’ Tricks for Devs – #0001 – 3 Seconds to Postgresql

The easiest Docker implementation of Postgresql that I’ve found recently takes just the following two commands to pull and run a Postgresql server.

docker pull postgres:latest
docker run -p 5432:5432 postgres

Then you can just connect to the Postgresql Server by opening up pgadmin with the following connection information:

  • Host: localhost
  • Port: 5432
  • Maintenance DB: postgres
  • Username: postgres
  • Password:

With that information you’ll be able to connect and use this as a development database that only takes about 3 seconds to launch whenever you need it.
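If you’d rather skip pgadmin, the same connection works from the command line with psql, assuming the Postgresql client tools are installed locally:

```shell
# Connect to the containerized server as the postgres superuser; the stock
# postgres image of this vintage uses trust auth, so no password is needed.
psql -h localhost -p 5432 -U postgres -d postgres
```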

TypeScript up in my WebStorm | TypeScript Editor Shootout @ TypeScriptPDX

Recently I joined a panel focused on TypeScript and a shootout between editors that have some pretty sweet TypeScript features. Here is a wrap-up of the key TypeScript-related features of WebStorm.

A quick definition of TypeScript from the TypeScript Site. If you’re interested in the spec of the language, check this out.

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source. TypeScript offers classes, modules, and interfaces to help you build robust components. These features are available at development time for high-confidence application development, but are compiled into simple JavaScript. TypeScript types let you define interfaces between software components and to gain insight into the behavior of existing JavaScript libraries.

Ok, so now, in case you didn’t already, you have an idea of what TypeScript is. If you’re unaware of what WebStorm is, here’s the lowdown on the IDE and a few of its other features, not particularly TypeScript related, directly from the WebStorm site.

WebStorm is a lightweight yet powerful IDE, perfectly equipped for complex client-side development and server-side development with Node.js. Enjoy smart code autocompletion for JavaScript keywords, variables, methods, functions and parameters! WebStorm provides complex code quality analysis for your JavaScript code. On-the-fly error detection and quick-fix options will make the development process more efficient and ensure better code quality – In addition to dozens of built-in inspections, you can use integrated JSHint, ESLint, JSCS, JSLint and Google Closure Linter code quality tools. Smart refactorings for JavaScript will help you keep your code clean and safe:

  • Rename a file, function, variable or parameter everywhere in the project.
  • Extract variable or function: create a new variable or a function from a selected piece of code.
  • Inline Variable or Function: replace a variable usage(s) with its definition, or replace a function call(s) with its body.
  • Move/Copy file and Safe Delete file.
  • Extract inlined script from HTML into a JavaScript file.
  • Surround with and Unwrap code statements.

…and seriously, that’s only the tip of the iceberg. There are a TON of other features in WebStorm. But just check out the site and you’ll see; I don’t need to sell it to you. On to the TypeScript features that I discussed last night at the editor shootout!


WebStorm 10 offers support for TypeScript 1.4 and 1.5. This support is basically enabled out of the box. The minute that you launch WebStorm you will see TypeScript features available. This is the version that was included in the shootout for discussion on the panel at the TypeScript Editor Shootout @TypeScriptPDX.

My Two Cents – i.e. My Favorite TypeScript Features in WebStorm

To see the full shootout, you’d have to have come to the TypeScript PDX meetup. But here are the key features that I enjoy most in my day-to-day coding.

TypeScript Transpiling

First and foremost is the fact that WebStorm builds the TypeScript code files automatically the second you create them. The way to ensure this is turned on is very simple, and there are two avenues. One is to navigate into settings and turn it on in the TypeScript settings screen.

TypeScript Settings / Transpiler Settings (Click for full size image)


The other option is simply to create a new TypeScript file in the project you’re working in.

Creating a new TypeScript File. (Click for full size image)


When the file is created and opened in the WebStorm Editor, a prompt above the file will show up to turn on the transpiler.

Enable (Click for full size image)


This will set up the project and turn on the transpiler for TypeScript. Once this is done, any TypeScript file will automatically be compiled. For instance, I added this basic code to the coder.ts file that I just created above.

/**
 * Created by adron on 7/26/15.
 * Description: A class around the coder in the system.
 */
class Coder {
    name: string;
    constructor(theName: string) { this.name = theName; }
    swapWith(teamGroup: number = 0) {
        alert(this.name + " swapping " + teamGroup + "m.");
    }
}

class SwappingCoder extends Coder {
    constructor(name: string) { super(name); }
    swapWith(meters = 5) {
        super.swapWith(meters);
    }
}

class SwappeeCoder extends Coder {
    constructor(name: string) { super(name); }
    swapWith(meters = 45) {
        super.swapWith(meters);
    }
}
This code, as soon as I saved the file, was immediately transpiled into the following JavaScript and .js.map files, as shown.

First the JavaScript code of the transpilation.

/**
 * Created by adron on 7/26/15.
 * Description: A class around the coder in the system.
 */
var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
var Coder = (function () {
    function Coder(theName) {
        this.name = theName;
    }
    Coder.prototype.swapWith = function (teamGroup) {
        if (teamGroup === void 0) { teamGroup = 0; }
        alert(this.name + " swapping " + teamGroup + "m.");
    };
    return Coder;
})();
var SwappingCoder = (function (_super) {
    __extends(SwappingCoder, _super);
    function SwappingCoder(name) {
        _super.call(this, name);
    }
    SwappingCoder.prototype.swapWith = function (meters) {
        if (meters === void 0) { meters = 5; }
        _super.prototype.swapWith.call(this, meters);
    };
    return SwappingCoder;
})(Coder);
var SwappeeCoder = (function (_super) {
    __extends(SwappeeCoder, _super);
    function SwappeeCoder(name) {
        _super.call(this, name);
    }
    SwappeeCoder.prototype.swapWith = function (meters) {
        if (meters === void 0) { meters = 45; }
        _super.prototype.swapWith.call(this, meters);
    };
    return SwappeeCoder;
})(Coder);
//# sourceMappingURL=coder.js.map

Now the map JSON data that is also transpiled automatically by WebStorm.


This is a great feature, as it removes any need to build these files manually. The moment this is enabled, they’re immediately available to other code files.
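Outside the IDE, the equivalent step is a single compiler invocation, assuming the TypeScript compiler is installed globally (npm install -g typescript); the stand-in source file below just lets the commands run anywhere:

```shell
# A minimal stand-in TypeScript source file:
printf 'class Coder { name: string; constructor(n: string) { this.name = n; } }\n' > coder.ts
# Transpile and emit a source map, mirroring what WebStorm does on save;
# this produces coder.js and coder.js.map next to the source file.
tsc coder.ts --sourceMap
```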

Code Formatting

One of the next features I really like is the code formatting that is available in the TypeScript settings for the language.

TypeScript Code Formatting / Styles (Click for full size image)


Code Completion

  • Basic code completion on ^ Space.
  • Type completion on ^ ⇧ Space.
  • Completing punctuation on Enter.
  • Completing statements with smart Enter.
  • Completing paths in the Select Path dialog.
  • Expanding words with ⌥ Slash.


Refactoring

Out of the top features I like in WebStorm (and the other JetBrains products too), right alongside automatic transpiling, is the ability to do various refactorings on the code base! It’s arguably the more valuable of the two features, but it sits on par in my own interest, since I find manually transpiling annoying.

  • Copy/Clone – The Copy refactoring allows you to copy a class, file, or directory with its entire structure from one directory to another, or clone it within the same directory.
  • Move Refactorings – The Move refactorings allow you to move files and directories within a project. In doing so, WebStorm automatically corrects all references to the moved symbols in the source code.
  • Renaming – Rename refactorings allow you to rename symbols, automatically correcting all references in the code.
  • Safe Delete – The Safe Delete refactoring lets you safely remove files and symbols from the source code.
  • Extract Method – When the Extract Method refactoring is invoked in the JavaScript context, WebStorm analyses the selected block of code and detects the variables that are the input for the selected code fragment and the variables that are output for it.
  • Extract Variable – The Extract Variable refactoring puts the result of the selected expression into a variable. It declares a new variable and uses the expression as an initializer. The original expression is replaced with the new variable.
  • Change Signature – In JavaScript, you can use the Change Signature refactoring to:
    • Change the function name.
    • Add new parameters and remove the existing ones. Note that you can also add a parameter using a dedicated Extract Parameter refactoring.
    • Reorder parameters.
    • Change parameter names.
    • Propagate new parameters through the method call hierarchy.
  • Extract Parameter – The Extract Parameter refactoring is used to add a new parameter to a method declaration and to update the method calls accordingly.
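As a quick sketch of what Extract Method does in practice (this example and its names are mine, not from the WebStorm docs): given a selection like the summing loop below, WebStorm detects that `prices` is the input and `total` is the output, pulls the selection into a new function, and replaces it with a call.

```typescript
// Before: imagine the summing loop is selected and Extract Method is invoked.
function describeCart(prices: number[]): string {
    let total = 0;
    for (const price of prices) {
        total += price;
    }
    return "Cart total: " + total;
}

// After: the loop lives in its own function, and the original
// selection is replaced with a call to it.
function sumPrices(prices: number[]): number {
    let total = 0;
    for (const price of prices) {
        total += price;
    }
    return total;
}

function describeCartRefactored(prices: number[]): string {
    return "Cart total: " + sumPrices(prices);
}
```

Behavior is identical before and after; the point of the refactoring is that the IDE does the input/output analysis and the call-site substitution for you.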

So that’s the skinny on WebStorm and TypeScript. Happy hacking, cheers!

The Latest 5th Generation Dell XPS 13 Developer Edition

About four weeks ago I purchased a Dell XPS 13 Developer Edition directly from Dell. I purchased this laptop because of two needs I have while traveling and writing code.

  1. I wanted something small, compact, that had reasonable power, and…
  2. It needed to run Linux (likely Ubuntu, but I’d have taken whatever) from the factory and have active support.

Here’s my experience with this machine so far. There are lots of good things, and some really lousy things, about this laptop. This is the lowdown on all the pluses and minuses. But before I dive in, it is important to understand the context in which I’m doing this review.

  • Dell didn’t send me a free laptop. I paid $1869 for the laptop. Nobody has paid me to review this laptop. I purchased it and am reviewing it purely out of my own interest.
  • The XPS 13 Developer Edition that I have has 8GB RAM, 512 GB SSD, and the stunningly beautiful 13.3-inch UltraSharp™ QHD+ (3200 x 1800) InfinityEdge Touch Display.
  • Exterior Chassis Materials -> CNC machined aluminum w/ Edge-to-edge Corning® Gorilla® Glass NBT™ on QHD+ w/ Carbon fiber composite palm rest with soft touch paint.
  • Keyboard -> Full size, backlit chiclet keyboard; 1.3mm travel
  • Touchpad -> Precision touchpad, seamless glass integrated button


The Freakin’ Keyboard and Trackpad

Let’s talk about the negatives first. This way, if you’re looking into purchasing, you can get through the decision tree faster. The first and LARGEST negative is the keyboard. Let’s just talk about the keyboard for a moment. When I first tweeted about this laptop, one of the first responses I got was a complaint – and a legitimate one at that – about the blasted keyboard.

There are plenty of complaints and issues listed here, here, and here via the Dell Support site. Twitter is flowing with such too about the keyboard. To summarise, the keyboard sticks. The trackpad, by association, also has some sticky behavior.

Now I’m going to say something that I’m sure some might fuss and hem and haw about. I don’t find the keyboard all that bad, considering it’s not an Apple chiclet keyboard and Apple trackpad, which basically make everything else on the market seem unresponsive and unable to deal with tactile response in a precise way. In that sense, the Dell keyboard is fine. I just have to be precise and understand how it behaves. So far, that seems to resolve the issue for me, same for the trackpad related issues. But if you’re someone who doesn’t type with distinct precision – just forget this laptop right now. It’s not even worth the effort. However, if you are precise, read on.

The Sleeping Issue

When I first received the laptop several weeks ago it had a sleeping issue. Approximately 1 out of every 3-5 times I’d put the computer to sleep, it wouldn’t resume appropriately. It would either hang or fail to wake. This problem, however, has a pretty clean fix available here.

Not Performant

Ok, so it has 8 GB RAM, an SSD, and an i7 processor. However, it does not perform better than my two-year-old MacBook Air (i7, 8 GB RAM, 256 GB SSD). It’s horribly slow compared to my 15″ Retina with 16 GB RAM and an i7. As a matter of fact, it doesn’t measure up well against any of these Apple machines. Linux, however, has a dramatically smaller footprint and generally performs a lot of tasks as well as or better than OS X.

When I loaded Steam and tried a few games out, the machine wasn’t even as performant as my Dell 17″ from 2006. That’s right, I didn’t mistype that – my Dell from 2006. So WTF, you might ask? I can only guess that it’s the integrated video card and shared video memory, or something along those lines. I’m still trying to figure out what the deal is with some of these performance issues.

However… on to the positives, because there are also positives about the performance it does have.


The Packaging

The first thing you’ll notice – a positive, albeit an insignificant one that made for a nice first experience – is the packaging. Dell has really upped its game in this regard; instead of going low-end, Dell seems to have put some real style and design into the packaging.

Dell XPS 13 Developer Edition Box

The box was smooth and seamless in most ways, giving it a very elegant feel. When I opened the box, the entire laptop was wrapped in cut plastic to protect all the surfaces.

Plastic Glimmer from protective plastics

Umm, what is this paper booklet stuff. :-/

Removing the cut plastic is easy enough. It is held together with just some simple stickiness (some type of clean glue).

Removing the Plastic

Once it’s off, the glimmer of the machine really starts to show. The aluminum surface material is really nice.

A Side View of the XPS 13

The beauty of an untainted machine running Ubuntu Linux. Check out that slick carbon fiber mesh too.

Carbon Fiber Mesh

Here it is opened and unwrapped; it isn’t turned on yet, but the glimmer of that glossy screen can already be seen.

A Glimmer of the Screen

Here’s a side-by-side comparison of the glossy hi-res screen against the flat standard-res screen. Both are absolutely gorgeous, regardless of which you get.

XPS 13 Twins

Booting up you can see the glimmer on my XPS 13.

Glimmer on the Bootup

The Screen

Even during a simple bootup and first configuration of Ubuntu like this, it is evident that the screen is stunning. A retina-quality screen on such a small form factor is worth the laptop alone. The working resolution is 1920×1080, but of course the native resolution is 3200×1800. If you want, you could run things at the native resolution – at your own risk of blindness and eye strain – but it is possible.

The crispness of this screen is easily one of the best on the market today and rivals that of the retina screens on any of the 13″ or 15″ Apple machines. The other aspect of the screen, which isn’t super relevant when using Ubuntu, is that it is touch enabled. So you can poke things and certain things will happen, albeit Ubuntu isn’t exactly configured for a touch display. In the end, it’s basically irrelevant that it is a touch screen, except for the impressive fact that they got a touch screen of this depth onto such a small machine!

Booted Up

Here’s a little more of the glimmer, as I download the necessary things to do some F# builds.

Setting up F#

Performance and Boot Time

Boot time is decent. I’m not going to count the seconds, but it’s quick. Also, once you apply the sleep fix, resuming is really quick too. So no issue there at all.

On the performance front, as I mentioned in the negatives there are some issues with performance. However, for many – if not most – everyday developer tasks like building C#, F#, C++, C, Java, and a host of other languages the machine is actually fairly performant.

Other tasks around Ruby, PHP (yes, I wrote a little bit of PHP just to check it out, but I did it safely and deleted it afterwards), JavaScript, Node.js, and related web work were also very smooth, quick, and performant. I installed Atom, Sublime 3, WebStorm, and Visual Studio Code and tried each of them out for most of the above web development. Everything loads really fast on the machine, and after a few loads they get even more responsive – especially WebStorm, since it seems to load Java plus the universe.

Overall, if you do web development or fairly standard compiled-code work, then you’ll be all set with this machine. I’ve been very happy with its performance in these areas; just don’t expect to play any cool games on it.

Weight and Size

I’ll kick this positive feature off with some additional photos of the laptop compared to a MacBook Pro 15″ Retina and a MacBook Air 13″.

First the 13″ Air.

Stacked from the side.

USB, Power, Headphones and Related Ports up close.

Now the MacBook Pro 15″ Retina.

MBP 15″. The XPS 13 is considerably smaller – as it obviously would be.

A top down view of the XPS 13 on top of the 15″ Retina.

…and then on top of the Mac Air 13″.

On top of the MBA 13″

The 13″ sitting on top of the 15″ Retina

Of course there are smaller MacBook Pros and MacBook Airs, but these are the two I had on hand (and still use regularly) for a quick comparison. The 13″ Dell is considerably smaller in overall footprint and is as light as or lighter than both of these laptops. The XPS makes for a great laptop to carry around all the time while barely noticing its presence.

Battery Life

The new XPS 13’s battery life, with Ubuntu, is a solid 6-12 hours depending on activity. I mention Ubuntu because, as anybody knows, the Linux options for conserving battery life are a bit awkward; namely, they don’t always do so well. But by managing the screen brightness, keyboard backlight, and resource-intensive applications, it would be possible to even exceed the 12-hour battery lifespan with Ubuntu. I expect that with Windows the lifespan is probably 10-15% better than under Ubuntu – that is, without any tweaks or manual management of Ubuntu.

So if you’re looking for long battery life, and the Apple options aren’t on the table, this is definitely a great option for working long hours without needing to be plugged in.


Overall, a spectacular laptop in MOST ways. However, that keyboard is a serious problem for most people; I can imagine most people will NOT want to deal with it. I’m ok with it, but I don’t mind typing with my hands up and off the resting points of the laptop. If Dell can fix this I’d give it a 100% buy suggestion, but with the keyboard as buggy and flaky as it is, I give the laptop a 60% buy suggestion. If you’re looking for a machine with Ubuntu out of the box, I’d probably aim for a Lenovo until Dell fixes the keyboard situation. After that, I’d even suggest this machine over the Lenovo options.

…and above all, I’d still suggest running Linux on an MBA or MBP over any of these – those machines are just more solid in manufacturing quality and durability, and the tech (battery, screen, etc.) is still tops in many ways. But if you don’t want to feed the Apple Nation’s piggy bank, dump them and go with this Dell or maybe a Lenovo option.

Happy hacking and cheers!