TypeScript up in my WebStorm | TypeScript Editor Shootout @ TypeScriptPDX

Recently I joined a panel focused on TypeScript and a shootout between editors that have some pretty sweet TypeScript features. Here is a wrap-up of the key TypeScript features in WebStorm.

A quick definition of TypeScript from the TypeScript Site. If you’re interested in the spec of the language, check this out.

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source. TypeScript offers classes, modules, and interfaces to help you build robust components. These features are available at development time for high-confidence application development, but are compiled into simple JavaScript. TypeScript types let you define interfaces between software components and to gain insight into the behavior of existing JavaScript libraries.

Ok, so now you have an idea of what TypeScript is, in case you didn’t already. If you’re unaware of what WebStorm is, here’s the lowdown on the IDE and a few of its other features – not particularly TypeScript related – directly from the WebStorm Site.

WebStorm is a lightweight yet powerful IDE, perfectly equipped for complex client-side development and server-side development with Node.js. Enjoy smart code autocompletion for JavaScript keywords, variables, methods, functions and parameters! WebStorm provides complex code quality analysis for your JavaScript code. On-the-fly error detection and quick-fix options will make the development process more efficient and ensure better code quality – In addition to dozens of built-in inspections, you can use integrated JSHint, ESLint, JSCS, JSLint and Google Closure Linter code quality tools. Smart refactorings for JavaScript will help you keep your code clean and safe:

  • Rename a file, function, variable or parameter everywhere in the project.
  • Extract variable or function: create a new variable or a function from a selected piece of code.
  • Inline Variable or Function: replace a variable usage(s) with its definition, or replace a function call(s) with its body.
  • Move/Copy file and Safe Delete file.
  • Extract inlined script from HTML into a JavaScript file.
  • Surround with and Unwrap code statements.

…and seriously, that’s only the tip of the iceberg. There are a TON of other features that WebStorm has. But just check out the site and you’ll see, I don’t need to sell it to you. On to the TypeScript features that I discussed last night at the editor shootout!

Versions

WebStorm 10 offers support for TypeScript 1.4 and 1.5. This support is basically enabled out of the box. The minute that you launch WebStorm you will see TypeScript features available. This is the version that was included in the shootout for discussion on the panel at the TypeScript Editor Shootout @TypeScriptPDX.

My Two Cents – i.e. My Favorite TypeScript Features in WebStorm

To see the full shootout, you’d have to have come to the TypeScript PDX meetup. But here are the key features that I enjoy the most in my day-to-day coding.

TypeScript Transpiling

First and foremost is the fact that WebStorm compiles TypeScript code files automatically the second you create them. The way to ensure this is turned on is very simple, and there are two avenues. One is to navigate into settings and turn it on in the TypeScript settings screen.

TypeScript Settings / Transpiler Settings (Click for full size image)

The other option is simply to create a new TypeScript file in the project you’re working in.

Creating a new TypeScript File. (Click for full size image)

When the file is created and opened in the WebStorm Editor, a prompt above the file will show up to turn on the transpiler.

Enable (Click for full size image)

This will set up the project and turn on the transpiler for TypeScript. Once this is done any TypeScript file will automatically be compiled. For instance, I added this basic code to the coder.ts file that I just created above.

/**
 * Created by adron on 7/26/15.
 * Description: A class around the coder in the system.
 */

class Coder {
  name:string;
  constructor(theName: string) { this.name = theName; }
  swapWith(teamGroup: number = 0) {
    alert(this.name + " swapping " + teamGroup + "m.");
  }
}

class SwappingCoder extends Coder {
  constructor(name: string) { super(name); }
  swapWith(meters = 5) {
    alert("Slithering...");
    super.swapWith(meters);
  }
}

class SwappeeCoder extends Coder {
  constructor(name: string) { super(name); }
  swapWith(meters = 45) {
    super.swapWith(meters);
  }
}

This code, as soon as I saved the file, was immediately transpiled into the following JavaScript and .js.map files, as shown.

First the JavaScript code of the transpilation.

/**
 * Created by adron on 7/26/15.
 * Description: A class around the coder in the system.
 */
var __extends = this.__extends || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    __.prototype = b.prototype;
    d.prototype = new __();
};
var Coder = (function () {
    function Coder(theName) {
        this.name = theName;
    }
    Coder.prototype.swapWith = function (teamGroup) {
        if (teamGroup === void 0) { teamGroup = 0; }
        alert(this.name + " swapping " + teamGroup + "m.");
    };
    return Coder;
})();
var SwappingCoder = (function (_super) {
    __extends(SwappingCoder, _super);
    function SwappingCoder(name) {
        _super.call(this, name);
    }
    SwappingCoder.prototype.swapWith = function (meters) {
        if (meters === void 0) { meters = 5; }
        alert("Slithering...");
        _super.prototype.swapWith.call(this, meters);
    };
    return SwappingCoder;
})(Coder);
var SwappeeCoder = (function (_super) {
    __extends(SwappeeCoder, _super);
    function SwappeeCoder(name) {
        _super.call(this, name);
    }
    SwappeeCoder.prototype.swapWith = function (meters) {
        if (meters === void 0) { meters = 45; }
        _super.prototype.swapWith.call(this, meters);
    };
    return SwappeeCoder;
})(Coder);
//# sourceMappingURL=coder.js.map

Now the source map JSON data that is also generated automatically by WebStorm.

{"version":3,"file":"coder.js","sourceRoot":"","sources":["coder.ts"],"names":["Coder","Coder.constructor","Coder.swapWith","SwappingCoder","SwappingCoder.constructor","SwappingCoder.swapWith","SwappeeCoder","SwappeeCoder.constructor","SwappeeCoder.swapWith"],"mappings":"AAAA;;;GAGG;;;;;;;AAEH,IAAM,KAAK;IAETA,SAFIA,KAAKA,CAEGA,OAAeA;QAAIC,IAAIA,CAACA,IAAIA,GAAGA,OAAOA,CAACA;IAACA,CAACA;IACrDD,wBAAQA,GAARA,UAASA,SAAqBA;QAArBE,yBAAqBA,GAArBA,aAAqBA;QAC5BA,KAAKA,CAACA,IAAIA,CAACA,IAAIA,GAAGA,YAAYA,GAAGA,SAASA,GAAGA,IAAIA,CAACA,CAACA;IACrDA,CAACA;IACHF,YAACA;AAADA,CAACA,AAND,IAMC;AAED,IAAM,aAAa;IAASG,UAAtBA,aAAaA,UAAcA;IAC/BA,SADIA,aAAaA,CACLA,IAAYA;QAAIC,kBAAMA,IAAIA,CAACA,CAACA;IAACA,CAACA;IAC1CD,gCAAQA,GAARA,UAASA,MAAUA;QAAVE,sBAAUA,GAAVA,UAAUA;QACjBA,KAAKA,CAACA,eAAeA,CAACA,CAACA;QACvBA,gBAAKA,CAACA,QAAQA,YAACA,MAAMA,CAACA,CAACA;IACzBA,CAACA;IACHF,oBAACA;AAADA,CAACA,AAND,EAA4B,KAAK,EAMhC;AAED,IAAM,YAAY;IAASG,UAArBA,YAAYA,UAAcA;IAC9BA,SADIA,YAAYA,CACJA,IAAYA;QAAIC,kBAAMA,IAAIA,CAACA,CAACA;IAACA,CAACA;IAC1CD,+BAAQA,GAARA,UAASA,MAAWA;QAAXE,sBAAWA,GAAXA,WAAWA;QAClBA,gBAAKA,CAACA,QAAQA,YAACA,MAAMA,CAACA,CAACA;IACzBA,CAACA;IACHF,mBAACA;AAADA,CAACA,AALD,EAA2B,KAAK,EAK/B"}

This is a great feature, as it removes any need for manually building these files and such. Immediately they’re available in other code files when this is enabled.
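
If you want to prove the output to yourself outside the browser, here’s a console-based variant of the example above (my own sketch, not from the shootout). It swaps alert for console.log so it runs under Node as well as in a browser:

```typescript
// Console-based variant of the Coder example (illustrative sketch).
class Coder {
  name: string;
  constructor(theName: string) { this.name = theName; }
  swapWith(teamGroup: number = 0): string {
    const msg = this.name + " swapping " + teamGroup + "m.";
    console.log(msg);
    return msg;
  }
}

class SwappingCoder extends Coder {
  swapWith(meters: number = 5): string {
    console.log("Slithering...");
    return super.swapWith(meters);
  }
}

const alice = new SwappingCoder("Alice");
alice.swapWith(); // logs "Slithering..." then "Alice swapping 5m."
```

Dropping this into a .ts file in a WebStorm project with the transpiler enabled produces the same kind of .js and .js.map output shown above.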

Code Formatting

One of the next features I really like is the code formatting that is available in the TypeScript settings for the language.

TypeScript Code Formatting / Styles (Click for full size image)

Code Completion

  • Basic code completion on ^ Space.
  • Type completion on ^ ⇧ Space.
  • Completing punctuation on Enter.
  • Completing statements with smart Enter.
  • Completing paths in the Select Path dialog.
  • Expanding words with ⌥ Slash.

Refactoring

Along with automatic transpiling, one of the top features I like in WebStorm (and the other JetBrains products too) is the ability to do various refactorings on the code base! This one is by far the more valuable of the two, but it’s right on par as far as my own interest goes, since I find manually transpiling annoying.

  • Copy/Clone – The Copy refactoring allows you to copy a class, file, or directory with its entire structure from one directory to another, or clone it within the same directory.
  • Move Refactorings – The Move refactorings allow you to move files and directories within a project. In doing so, WebStorm automatically corrects all references to the moved symbols in the source code.
  • Renaming – Rename refactorings allow you to rename symbols, automatically correcting all references in the code.
  • Safe Delete – The Safe Delete refactoring lets you safely remove files and symbols from the source code.
  • Extract Method – When the Extract Method refactoring is invoked in the JavaScript context, WebStorm analyses the selected block of code and detects variables that are the input for the selected code fragment and the variables that are output for it.
  • Extract Variable – The Extract Variable refactoring puts the result of the selected expression into a variable. It declares a new variable and uses the expression as an initializer. The original expression is replaced with the new variable.
  • Change Signature – In JavaScript, you can use the Change Signature refactoring to:
    • Change the function name.
    • Add new parameters and remove the existing ones. Note that you can also add a parameter using a dedicated Extract Parameter refactoring.
    • Reorder parameters.
    • Change parameter names.
    • Propagate new parameters through the method call hierarchy.
  • Extract Parameter – The Extract Parameter refactoring is used to add a new parameter to a method declaration and to update the method calls accordingly.
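
To make one of these concrete, here’s a hypothetical before-and-after for the Extract Variable refactoring (the function and names here are mine, purely illustrative, not from the shootout):

```typescript
// Before: the same expression appears twice, buried in a condition and a message.
function beforeExtract(price: number, qty: number): string {
  if (price * qty * 1.08 > 100) {
    return "Total " + (price * qty * 1.08) + " is over the limit";
  }
  return "OK";
}

// After: Extract Variable pulls the expression into a new declared variable
// and replaces both usages with it, without changing behavior.
function afterExtract(price: number, qty: number): string {
  const totalWithTax = price * qty * 1.08;
  if (totalWithTax > 100) {
    return "Total " + totalWithTax + " is over the limit";
  }
  return "OK";
}
```

The point of letting the IDE do this is that it finds every usage of the expression for you, so the before and after versions behave identically.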

So that’s the skinny on WebStorm and TypeScript. Happy hacking, cheers!

The Latest 5th Generation Dell XPS 13 Developer Edition

Just about 4 weeks ago now I purchased a Dell XPS 13 Developer Edition directly from Dell. The reason I purchased this laptop is because of two needs I have while traveling and writing code.

  1. I wanted something small, compact, that had reasonable power, and…
  2. It needed to run Linux (likely Ubuntu, but I’d have taken whatever) from the factory and have active support.

Here’s my experience with this machine so far. There are lots of good things, and some really lousy things about this laptop. This is the lowdown on all the plusses and minuses. But before I dive into the plusses and minuses, it is important to understand more of the context in which I’m doing this review.

  • Dell didn’t send me a free laptop. I paid $1869 for the laptop. Nobody has paid me to review this laptop. I purchased it and am reviewing it purely out of my own interest.
  • The XPS 13 Developer Edition that I have has 8GB RAM, 512 GB SSD, and the stunningly beautiful 13.3-inch UltraSharp™ QHD+ (3200 x 1800) InfinityEdge Touch Display.
  • Exterior Chassis Materials -> CNC machined aluminum w/ Edge-to-edge Corning® Gorilla® Glass NBT™ on QHD+ w/ Carbon fiber composite palm rest with soft touch paint.
  • Keyboard -> Full size, backlit chiclet keyboard; 1.3mm travel
  • Touchpad -> Precision touchpad, seamless glass integrated button

Negatives

The Freakin’ Keyboard and Trackpad

Let’s talk about the negatives first. This way, if you’re looking into purchasing, this will be a faster way to go through the decision tree. The first and the LARGEST negative is the keyboard. Let’s just talk about the keyboard for a moment. When I first tweeted about this laptop, one of the first responses I got was a complaint – and a legitimate one at that – about the blasted keyboard.

There are plenty of complaints and issues listed here, here, and here via the Dell Support site. Twitter is flowing with such complaints about the keyboard too. To summarise, the keyboard sticks. The trackpad, by association, also has some sticky behavior.

Now I’m going to say something that I’m sure some might fuss and hem and haw about. I don’t find the keyboard all that bad, considering it’s not an Apple chiclet keyboard and Apple trackpad, which basically make everything else on the market seem unresponsive and unable to deal with tactile response in a precise way. In that sense, the Dell keyboard is fine. I just have to be precise and understand how it behaves. So far, that seems to resolve the issue for me, same for the trackpad related issues. But if you’re someone who doesn’t type with distinct precision – just forget this laptop right now. It’s not even worth the effort. However, if you are precise, read on.

The Sleeping Issue

When I first received the laptop several weeks ago it had a sleeping issue. Approximately 1 out of every 3-5 times I’d put the computer to sleep it wouldn’t resume from sleep appropriately. It would either hang or not resume. This problem however, has a pretty clean fix available here.

Not Performant

Ok, so it has 8GB RAM, an SSD, and an i7 proc. However it does not perform better than my 2 year old Mac Book Air (i7, 8 GB RAM, 256 GB SSD). It’s horribly slow compared to my 15″ Retina w/ 16GB RAM and i7 proc. Matter of fact, it doesn’t measure up well against any of these Apple machines. Linux however has a dramatically smaller footprint and generally performs a lot of tasks as well as or better than OS-X.

When I loaded Steam and tried a few games out, the machine wasn’t even as performant as my Dell 17″ from 2006. That’s right, I didn’t mistype that, my Dell from 2006. So WTF you might ask – I can only guess that it’s the embedded video card and shared video card memory or something. I’m still trying to figure out what the deal is with some of these performance issues.

However… on to the positives. Because there are also positives about the performance it does have.

Positives

The Packaging

Well, the first thing you’ll notice – and I found it to be a positive, albeit an insignificant one that made for a nice first experience – is the packaging. Dell has really upped its game in this regard; instead of playing the low-end game, Dell seems to have put some style and design into the packaging.

Dell XPS 13 Developer Edition Box

The box was smooth and seamless in most ways, giving a very elegant feel. When I opened up the box the entire laptop was in cut plastic wrap to protect all the surfaces.

Plastic Glimmer from protective plastics

Umm, what is this paper booklet stuff. :-/

Removing the cut plastic is easy enough. It is held together with just some simple stickiness (some type of clean glue).

Removing the Plastic

Once it’s off, the glimmer of the machine starts to really show. The aluminum surface material is really, really nice.

A Side View of the XPS 13

The beauty of an untainted machine running Ubuntu Linux. Check out that slick carbon fiber mesh too.

Carbon Fiber Mesh

Here it is opened and unwrapped, not turned on yet and the glimmer of that glossy screen can be seen already.

A Glimmer of the Screen

Here’s a side-by-side comparison of the glossy hi-res screen against the flat standard-res screen. Both are absolutely gorgeous screens, regardless of which you get.

XPS 13 Twins

Booting up you can see the glimmer on my XPS 13.

Glimmer on the Bootup

The Screen

Even during simple bootup and first configuration of Ubuntu like this, it is evident that the screen is stunning. The retina-quality screen on such a small form factor is worth the laptop alone. The working resolution is 1920×1080, but of course the real resolution is 3200×1800. Now, if you want, you could run things at the native resolution at your own risk of blindness and eye strain, but it is possible.

The crispness of this screen is easily one of the best on the market today and rivals that of the retina screens on any of the 13″ or 15″ Apple machines. The other aspect of the screen, which isn’t super relevant when using Ubuntu, is that it is touch enabled. So you can poke things and certain things will happen, albeit Ubuntu isn’t exactly configured for touch display. In the end, it’s basically irrelevant that it is a touch screen, except for the impressive fact that they got a touch screen of this depth on such a small machine!

Booted Up

Here’s a little more of the glimmer, as I download the necessary things to do some F# builds.

Setting up F#

Performance and Boot Time

Boot time is decent. I’m not going to go into the seconds it takes but it’s quick. Also when you get the update for sleep, that’s really quick too. So no issue there at all.

On the performance front, as I mentioned in the negatives there are some issues with performance. However, for many – if not most – everyday developer tasks like building C#, F#, C++, C, Java, and a host of other languages the machine is actually fairly performant.

Other tasks around Ruby, PHP (yes, I wrote a little bit of PHP just to check it out, but I did it safely and deleted it afterwards), JavaScript, Node.js, and related web work were also very smooth, quick, and performant. I installed Atom, Sublime 3, WebStorm, and Visual Studio Code and tried these out for most of the above web development. Everything loads really fast on the machine, and after a few loads they get even more responsive, especially WebStorm since it seems to load Java plus the universe.

Overall, if you do web development or some pretty standard compilable code work then you’ll be all set with this machine. I’ve been very happy with its performance in these areas, just don’t expect to play any cool games with the machine.

Weight and Size

I’ll kick this positive feature off with some additional photos of the laptop compared to a Mac Book Pro 15″ Retina and an Apple Air 13″.

First the 13″ Air.

Stacked from the side.

USB, Power, Headphones and Related Ports up close.

Now the Mac Book Pro 15″ Retina.

MBP 15″. The XPS 13 is considerably smaller – as it obviously would be.

A top down view of the XPS 13 on top of the 15″ Retina.

…and then on top of the Mac Air 13″.

On top of the MBA 13″

The 13″ sitting on top of the 15″ Retina

Of course there are smaller Mac Book Pros and Mac Book Air laptops, but these are the two I had on hand (and still use regularly) to do a quick comparison with. The 13″ Dell is considerably smaller in overall footprint and is as light as or lighter than both of these laptops. The XPS makes for a great laptop for carrying around all the time, and really not even noticing its presence.

Battery Life

The new XPS 13 battery life, with Ubuntu, is a solid 6-12 hours depending on activity. I mention Ubuntu because, as anybody knows, the Linux options for conserving battery life are a bit awkward. Namely, they don’t always do so well. But by managing the screen lighting, back light, and resource-intensive applications it would be possible to even exceed the 12 hour lifespan of the battery with Ubuntu. I expect with Windows the lifespan is probably 10-15% better than under Ubuntu. That is, without any tweaks or manual management of Ubuntu.

So if you’re looking for long battery life, and Apple options aren’t on the table, this is definitely a great option for working long hours without needing to be plugged in.

Summary

Overall, a spectacular laptop in MOST ways. However that keyboard is a serious problem for most people. I can imagine most people will NOT want to deal with the keyboard. I’m ok with it, but I don’t mind typing with hands up and off the resting points on the laptop. If Dell can fix this I’d give it a 100% buy suggestion, but with the keyboard as buggy and flaky as it is, I give the laptop a 60% buy suggestion. If you’re looking for a machine with Ubuntu out of the box, I’d probably aim for a Lenovo until Dell fixes the keyboard situation. Then I’d even suggest this machine over the Lenovo options.

…and among all things, I’d still suggest running Linux on a MBA or MBP over any of these – the machines are just more solid in manufacturing quality, durability, and the tech (i.e. battery, screen, etc) are still tops in many ways. But if you don’t want to feed the Apple Nation’s Piggy Bank, dump them and go with this Dell or maybe a Lenovo option.

Happy hacking and cheers! 

The Question of Docker, The Future of OS Virtualization

In this article I’m going to take a look at Docker and OS virtualization independently of each other. There’s a reason, which will unfold as I dig through some data and provide this look into what is and isn’t happening in the virtualization space.

It’s important to also note what methods were used to attain the information provided in this article. I have obtained information through speaking with Docker employees and key executives, including Ben Golub and founder Solomon Hykes, over the years since the founding of Docker (and its previous incarnation dotCloud, before the pivot and name change to Docker).

Beyond communicating directly with the Docker team and gaining insight from them I have also done a number of interviews over the course of 4 days. These interviews have followed a fairly standard set of questions and conversation about the Docker technology, including but not limited to the following questions.

  • What is your current use of Docker virtualization technologies?
  • What is your future intended use of Docker technologies?
  • What is the general current configuration and setup of your development team(s) and the tooling that they use (i.e. stack: .NET, Java, Python, Node.js, etc.)?
  • Do you find it helps you to move forward faster than without?

The History of OS-Level Virtualization

First, let’s take a look at where virtualization has been, then I’ll dive into where it is now, and then I’ll take a look at where it appears to be going in the future and derive some information from the interviews and discussions that I’ve had with various teams over the last 4 days.

The Short of It

OS-level virtualization allows the installation of software in a complete file system, just like a hypervisor-based virtualization server, but with dramatically faster installation and prospectively faster speed overall, because the virtual clients share the host OS. This cuts down on excess redundancies between the core system and the respective virtual clients on the host.

Virtualization as a concept has been around since the 1960s, with IBM being heavily involved at the Cambridge Scientific Center. Over time developments continued, but the real breakthrough in pushing virtualization into the market was VMware in 1999 with their virtual platform. This hypervisor-level virtualization grew into a huge industry with the help of VMware.

However OS-level virtualization, which is what Docker is based on, didn’t take off immediately when introduced. Many product options came out over time around OS-level virtualization, but nothing made a splash in the industry similar to what Docker has. Fast forward to today: Docker, released in 2013, has seen ever-increasing developer demand and usage.

Timeline of Virtualization Development

Docker really brought OS-level virtualization to the developer community at the right time in regards to demands around web development and new ways to implement effective continuous delivery of applications. Docker has been one of the most extensively used OS-level virtualization tools to implement immutable infrastructure, continuous build, integration, and deployment environments, and to use as a general virtual environment to spool up resources as needed for development.

Where we Are With Virtualization

Currently Docker holds a pretty dominant position in the OS-level virtualization market space. Let’s take a quick review of their community statistics and involvement from just a few days ago.

The Stats: Docker on Github -> https://github.com/docker/docker

Watchers: 2017
Starred: 22941
Forks: 5617

16,472 Commits
3 Branches
102 Releases
983 Contributors

Just from that data we can ascertain that the Docker community is active. We can also take a deep look into the forks and determine pull requests, acceptance rates, and related data to find out that the overall codebase is healthy with involvement. This is good to know, since at one point there were questions about whether Docker had the capability to manage the open source legions pushing the product forward while maintaining the integrity, reputation, and quality of the product.

Now let’s take a look at what that position is based on considering the interviews I’ve had in the last 4 days. Out of the 17 people I spoke with all knew what Docker is. That’s a great position to be in compared to just a few years ago.

Out of the 17 people I spoke with, 15 are working on teams that have implemented Docker, are implementing it, or are somewhere in between in their respective environments.

Of the 17, only 13 said they were concerned in some significant way about Docker Security. All of these individuals were working on teams attempting to figure out a way to use Docker in a production way, instead of only in development or related uses.

The list of uses that the 17 want to use or are using Docker for vary as much as the individual work that each is currently working on. There are however some core similarities in what they’re working on where Docker comes into play.

The most common similarity among Docker uses is simply as a platform to build out development testing environments or test servers. This is routinely a database server or a simple distributed database like Cassandra or Riak that can be built immutably, then destroyed and recreated whenever it is needed again for test and development. Some of the build-outs are done with Docker specifically to work up a mock distributed database environment for testing. Mind you, I’m probably hearing about and seeing this because of my past work with Basho and other distributed systems programmers, companies, and efforts around this type of technology. It’s still interesting and very telling nonetheless.
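
As a minimal sketch of that throwaway-database pattern (the image tag and port here are my own assumptions, not any interviewee’s actual setup), a disposable Cassandra container can be described with a short Docker Compose file:

```yaml
# docker-compose.yml — hypothetical throwaway Cassandra for tests.
cassandra:
  image: cassandra:2.1
  ports:
    - "9042:9042"   # CQL native transport
```

Spin it up with docker-compose up -d, run the tests against port 9042, then destroy it and recreate it at will – the immutable build-destroy-recreate cycle described above.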

The second most common usage is for Docker to be used somewhere in the continuous delivery chain. The push to move the continuous integration and delivery process to a more immutable, repeatable, and reliable process has been a perfect marriage between Docker and these needs. The ability to spin up entire environments in a matter of seconds and destroy them on a whim, creating them again moments later, has made continuous delivery more powerful and more possible than it has ever been.

Some of the less common, yet still key, uses of Docker that came up during the interviews included: in-memory cache servers, network virtualization, and distributed systems.

Virtualization’s Future

Pathing

With the history covered and the core uses of Docker discussed, let’s put those on the table alongside the acquisitions. The acquisitions by Docker have provided some insight into the future direction of the company. The acquisitions so far include: Kitematic, SocketPlane, Koality, and Orchard.

From a high-level strategic play, the path Docker is pushing forward into is a future of continued virtualization around, as the hipsters might say, “all the things”. Their purchases of Kitematic and SocketPlane will both help Docker expand past OS virtualization alone and push more toward systemic virtualization of network environments with programmatic capabilities and more. These are capabilities that are needed to move past the legacy IT environments of yesteryear, which will open up more enterprise possibilities too.

To further their core use that exists today, Docker has purchased Koality. Koality provides parallelizable continuous integration, deployment, and related services. This enables Docker to provide more built-out services around this very important use case.

The other acquisition was Orchard (orchardup.com). This is a startup that provides a Docker host in the cloud, instantly. This is a similar purchase to the Koality one. It bulks up capabilities that Docker already had at some level. It also pushes them forward with two branches of capabilities: SaaS based on the web, and prospectively offering something behind the firewall, in which the Koality acquisition might have some part to play also.

Threat Vectors

Even though the pathways toward the future seem clear for Docker in many ways, in other ways they seem dramatically less clear. For one, there are a number of competitive options in play now, gaining momentum and on the horizon. One big threat is Google, whose lack of interest in Docker has led them to build competing tooling. If they push hard into the OS-level virtualization space they could become a substantial threat.

The other threat vector is the simple unknown of what could become a threat. Something like Mesos might explode in popularity, decide it doesn’t want to use Docker, and focus on another virtualization path. In the same sense, Mesos could commoditize Docker to a point that the value add at that level of virtualization doesn’t retain a business market value that would sustain Docker.

The invisible threat around this area right now is fairly large. There’s no better way to gauge this than to just get into a conversation with some developers about Docker. In one sense they love what it allows them to do, but the laundry list of things they’d like to have would allow a disruptor to come in and steal Docker’s thunder pretty easily. To put it simply, there isn’t a magical allegiance to Docker; developers will pick whatever helps them move the ball forward the fastest and easiest.

Another prospective threat is a massive purchase by a legacy software company like Oracle, Microsoft, or someone else. This could effectively destabilize the OSS aspects of the product and slow down development and progress, yet it could increase corporate adoption many times over what it is now. So this possibility is something that shouldn’t be ruled out.

Summary

Docker faces two major threats: direct competitors, and prospectively being leapfrogged by another level of virtualization. The other prospective threat is acquisition of Docker itself, though that could also mean a huge increase in enterprise penetration. On the path the company and technology are moving along, there will be continued growth in usage and capabilities. That growth will remain strongest among the leading technology startups and companies of this kind, while mid-size and larger corporate environments will continue to adopt and deploy at a slower pace.

A Question for You

I’ve put together what I’ve noticed, and I’d love to see things that you, dear reader, might notice about the Docker momentum machine. Do you see networking as a strength, or other levels of virtualization, deployment of machines, integration or delivery, or some other part of this space as the way forward into the future? Let me know what your thoughts are on Twitter or whatever medium you feel like reaching out on. Of course, I’d also love to know if you think I’m wrong about anything I’ve written here.

_____100 |> F# – Some Troubleshooting on Linux

In the last article I wrote on writing a code kata with F# on OS-X or Windows, I had wanted to use Linux but things just weren’t cooperating with me. Well, since that article I have resolved some of the issues I ran into, and this is the log of those issues.

Issue 1: “How can I resolve the ‘Could not fix timestamps in …’ ‘…Error: The requested feature is not implemented.’”

The first issue I ran into with running the ProjectScaffold build on Linux, I wrote up and posted to Stack Overflow as “How can I resolve the ‘Could not fix timestamps in …’ ‘…Error: The requested feature is not implemented.’”. You can read more about the errors I was receiving in the Stack Overflow post, but below is the immediate fix. This fix should probably be added to any F# installation instructions for Linux as part of the defaults.

First ensure that you have the latest version of mono. If you use the instructions to do a make and make install off of the fsharp.org site you may not actually have the latest version of mono. Instead, here’s a good way to get the latest version of mono using apt-get. More information can be found about this on the mono page here.

apt-get install mono-devel
apt-get install mono-complete

Issue 2: “ProjectScaffold Error on Linux Generating Documentation”

The second issue I ran into I also posted to Stack Overflow, titled “ProjectScaffold Error on Linux Generating Documentation“. This one took a lot more effort. It also spilled over from Stack Overflow to become an actual GitHub issue (323) on the project. So check out those threads in case you run into the same problems.

In the next post, to be published tomorrow, I’ll have some script tricks to use mono more efficiently to run *.exe files and get things done with Paket and FAKE in F# on any operating system.

______10 |> F# – Moar Thinking Functionally (Notes)

More notes on the “Thinking Functionally” series. Previous notes are @ “_______1 |> F# – Getting Started, Thinking Functionally“.

#6 Partial Application

Breaking down functions into single parameter functions is the mathematically correct way of doing it, but that is not the only reason it is done — it also leads to a very powerful technique called partial function application.

For example:

let add42 = (+) 42 // partial application
add42 1
add42 3

[1;2;3] |> List.map add42

let twoIsLessThan = (<) 2 // partial application
twoIsLessThan 1
twoIsLessThan 3

// filter each element with the twoIsLessThan function
[1;2;3] |> List.filter twoIsLessThan

let printer = printfn "printing param=%i"

[1;2;3] |> List.iter printer

In each case above, the partially applied function can then be reused in multiple contexts. Partial application can also be used to fix function parameters.

let add1 = (+) 1
let add1ToEach = List.map add1   // fix the "add1" function

add1ToEach [1;2;3;4]

let filterEvens =
   List.filter (fun i -> i%2 = 0) // fix the filter function

filterEvens [1;2;3;4]

The following then shows pluggable behavior, made transparent by partial application.

let adderWithPluggableLogger logger x y =
    logger "x" x
    logger "y" y
    let result = x + y
    logger "x+y"  result
    result 

let consoleLogger argName argValue =
    printfn "%s=%A" argName argValue 

let addWithConsoleLogger = adderWithPluggableLogger consoleLogger
addWithConsoleLogger  1 2
addWithConsoleLogger  42 99

let popupLogger argName argValue =
    let message = sprintf "%s=%A" argName argValue
    System.Windows.Forms.MessageBox.Show(
                                 text=message,caption="Logger")
      |> ignore

let addWithPopupLogger  = adderWithPluggableLogger popupLogger
addWithPopupLogger  1 2
addWithPopupLogger  42 99

Designing Functions for Partial Application

Sample calls to the list library:

List.map    (fun i -> i+1) [0;1;2;3]
List.filter (fun i -> i>1) [0;1;2;3]
List.sortBy (fun i -> -i ) [0;1;2;3]

Here are the same examples using partial application:

let eachAdd1 = List.map (fun i -> i+1)
eachAdd1 [0;1;2;3]

let excludeOneOrLess = List.filter (fun i -> i>1)
excludeOneOrLess [0;1;2;3]

let sortDesc = List.sortBy (fun i -> -i)
sortDesc [0;1;2;3]

Commonly accepted guidelines for multi-parameter function design:

  1. Put earlier: parameters more likely to be static. The parameters that are most likely to be “fixed” with partial application should come first.
  2. Put last: the data structure or collection (or most varying argument). Makes it easier to pipe a structure or collection from function to function. Like:
    let result =
       [1..10]
       |> List.map (fun i -> i+1)
       |> List.filter (fun i -> i>5)
    
  3. For well-known operations such as “subtract”, put in the expected order.

Wrapping BCL Function for Partial Application

Since in F# the data parameter generally comes last, while most BCL calls have the data parameter first, it’s good to wrap the BCL calls.

let replace oldStr newStr (s:string) =
  s.Replace(oldValue=oldStr, newValue=newStr)

let startsWith lookFor (s:string) =
  s.StartsWith(lookFor)

Then pipes can be used with the BCL call in the expected way.

let result =
     "hello"
     |> replace "h" "j"
     |> startsWith "j"

["the"; "quick"; "brown"; "fox"]
     |> List.filter (startsWith "f")

…or we can use function composition.

let compositeOp = replace "h" "j" >> startsWith "j"
let result = compositeOp "hello"

Understanding the Pipe Function

The pipe function is defined as:

let (|>) x f = f x

It allows us to put the function argument in front of the function instead of after.

let doSomething x y z = x+y+z
doSomething 1 2 3

If the function has multiple parameters, then it appears that the input is the final parameter. Actually what is happening is that the function is partially applied, returning a function that has a single parameter: the input.

let doSomething x y  =
   let intermediateFn z = x+y+z
   intermediateFn        // return intermediateFn

let doSomethingPartial = doSomething 1 2
doSomethingPartial 3
3 |> doSomethingPartial

#7 Function Associativity and Composition

Function Associativity

This…

let F x y z = x y z

…means this…

let F x y z = (x y) z

These three forms are also equivalent to one another.

let F x y z = x (y z)
let F x y z = y z |> x
let F x y z = x <| y z
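As a concrete sketch of those three forms (the negate and twice helpers here are my own illustration, not from the series):

```fsharp
// Two hypothetical helper functions:
let negate x = -x      // int -> int
let twice x = x * 2    // int -> int

// All three equivalent forms produce -6:
let a = negate (twice 3)
let b = twice 3 |> negate
let c = negate <| twice 3
```

The pipe operators just move the argument around; the evaluation order is identical in each case.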

Function Composition

Here’s an example:

let f (x:int) = float x * 3.0  // f is int->float
let g (x:float) = x > 4.0      // g is float->bool

We can create a new function h that takes the output of “f” and uses it as the input for “g”.

let h (x:int) =
    let y = f(x)
    g(y)                   // return output of g

A much more compact way is this:

let h (x:int) = g ( f(x) ) // h is int->bool

//test
h 1
h 2

These are just notes; to read more, check out the “Function Composition” article.

______11 |> F# – Some Hackery – A String Calculator Kata

Now for some F# hacking. The first thing I did was actually go through a Code Kata, which I’ll present here.

The first step I took was to get a project started. For that I used the ProjectScaffold to build a clean project via bash.

First cloned…

git clone git@github.com:fsprojects/ProjectScaffold.git sharpKataStringCalc

…then I navigated into the directory and executed the build.sh script…

cd sharpKataStringCalc/
./build.sh

…then I got prompted for some input.

  #####################################################

# Project Scaffold Init Script
# Please answer a few questions and we will generate
# two files:
#
# build.fsx               This will be your build script
# docs/tools/generate.fsx This script will generate your
#                         documentation
#
# NOTE: Aside from the Project Name, you may leave any
# of these blank, but you will need to change the defaults
# in the generated scripts.
#

  #####################################################

Project Name (used for solution/project files): sharpKataStringCalc
Summary (a short description): A code kata for the string calculator exercise.
Description (longer description used by NuGet): The code kata, kicked off by Roy Osherove, this is my iteration of it (at least my first iteration of it).
Author: Adron Hall
Tags (separated by spaces): fsharp f# code kata stringcalculator
Github User or Organization: adron
Github Project Name (leave blank to use Project Name):

Once I hit enter after entering the information, I got more than a few of these broken builds.

Time Elapsed 00:00:00.1609190
Running build failed.
Error:
Building /Users/adronhall/Coderz/sharpKataStringCalc/sharpKataStringCalc.sln failed with exitcode 1.

---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target         Duration
------         --------
Clean          00:00:00.0019508
AssemblyInfo   00:00:00.0107624
Total:         00:00:00.6460652
Status:        Failure
---------------------------------------------------------------------
  1) Building /Users/adronhall/Coderz/sharpKataStringCalc/sharpKataStringCalc.sln failed with exitcode 1.
  2) : /Users/adronhall/Coderz/sharpKataStringCalc/src/sharpKataStringCalc/sharpKataStringCalc.fsproj(0,0): Target named 'Rebuild' not found in the project.
  3) : /Users/adronhall/Coderz/sharpKataStringCalc/tests/sharpKataStringCalc.Tests/sharpKataStringCalc.Tests.fsproj(0,0): /Users/adronhall/Coderz/sharpKataStringCalc/tests/sharpKataStringCalc.Tests/sharpKataStringCalc.Tests.fsproj: The required attribute "Project" in Import is empty
---------------------------------------------------------------------

I was able to solve this problem once before, based on what I did in a previous blog entry, “That Non-Windows Scaffolding for OS-X and Linux |> I Broke It! But…“ – which made it seem odd that it broke again after I’d already fixed it. To help with the build I actually opened it up in Xamarin Studio. Now, one of the problems with doing this is that Xamarin Studio is only available on Windows & OS-X. I’m however interested in using this stuff on Linux too, but that’s looking a bit more difficult the more I work with the toolchain, unfortunately.

After working through the issue, I found that on one OS-X box I’d installed Mono via make and F# via make, and that combination messes things up. Do one or the other and you should be ok. So on my other two OS-X boxes (I’ve a personal retina and a work retina) the build worked flawlessly, and when it works flawlessly it looks like this toward the end of the build execution.

Finished Target: GenerateReferenceDocs
Starting Target: GenerateDocs (==> GenerateReferenceDocs, GenerateReferenceDocs)
Finished Target: GenerateDocs
Starting Target: All (==> GenerateDocs)
Finished Target: All

---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target                  Duration
------                  --------
Clean                   00:00:00.0035253
AssemblyInfo            00:00:00.0103142
Build                   00:00:04.9369669
CopyBinaries            00:00:00.0052210
RunTests                00:00:00.6568475
CleanDocs               00:00:00.0025772
GenerateHelp            00:00:08.6989318
GenerateReferenceDocs   00:00:11.7627584
GenerateDocs            00:00:00.0003409
All                     00:00:00.0000324
Total:                  00:00:26.1162623
Status:                 Ok
---------------------------------------------------------------------

I’ve gotten this to work on OS-X and Windows just fine using the straight up ProjectScaffold and the ./build.sh. So all is good, I’m going to move forward with writing the kata based on that and loop back around to straighten out the Linux issues.

To run the tests, execute the following script after creating the project scaffold.

./build.sh RunTests

First off, what are the ideas behind the string calculator kata? Well, here’s how Roy Osherove lays out this particular code kata.

Before you start:

  • Try not to read ahead.
  • Do one task at a time. The trick is to learn to work incrementally.
  • Make sure you only test for correct inputs. There is no need to test for invalid inputs for this kata.

String Calculator

  1. Create a simple String calculator with a method int Add(string numbers)
    1. The method can take 0, 1 or 2 numbers, and will return their sum (for an empty string it will return 0) for example “” or “1” or “1 2”
    2. Start with the simplest test case of an empty string and move to 1 and two numbers
    3. Remember to solve things as simply as possible so that you force yourself to write tests you did not think about
    4. Remember to refactor after each passing test
  2. Allow the Add method to handle an unknown amount of numbers.
  3. Allow the Add method to handle new lines between numbers (instead of an empty space).
    1. the following input is ok: “1\n2 3” (will equal 6)
    2. the following input is NOT ok: “1 \n” (no need to prove it – just clarifying)
  4. Support different delimiters
    1. to change a delimiter, the beginning of the string will contain a separate line that looks like this: “//[delimiter]\n[numbers…]”; for example “//;\n1;2” should return 3, where the delimiter is ‘;’.
    2. the first line is optional. all existing scenarios should still be supported
  5. Calling Add with a negative number will throw an exception “negatives not allowed” – and the negative that was passed. If there are multiple negatives, show all of them in the exception message.
  6. Numbers bigger than 1000 should be ignored, so adding 2 + 1001 = 2
  7. Delimiters can be of any length with the following format: “//[delimiter]\n” for example: “//[***]\n1***2***3” should return 6
  8. Allow multiple delimiters like this: “//[delim1][delim2]\n” for example “//[*][%]\n1*2%3” should return 6.
  9. Make sure you can also handle multiple delimiters with length longer than one char.

Ok, so now that we’re clear on the string calculator, I’m going to dig into knocking out the first item, “Create a simple string calculator with a method int Add (string numbers)”

But first, in TDD fashion, let’s write the test and make it fail. I changed the code in the Tests.fs file in the tests directory and tests project to read as follows.

module sharpKataStringCalc.Tests

open System
open sharpKataStringCalc
open NUnit.Framework

[<TestFixture>]
type CalculatorTests() =
  [<Test>]
  member x.add_empty_string() =
    let calculator = Calculator()
    let result = calculator.Add ""
    Assert.That(result, Is.EqualTo 0)

That gets us a failing test, since we don’t even have any implementation yet. So now I’ll add the first part of the implementation code. First I created a Calculator.fs file and deleted the other file that ProjectScaffold put in there in the first place.

namespace sharpKataStringCalc

open System

type Calculator() = 
  member x.Add express = 
    0

Ok, that gives me a passing test for the first phase of all this. Now, since I’m still a total F# newb, I’ve got to dig around and read documentation while I’m working through this. So I’m taking a couple of hours, whereas Roy’s suggestion is to use 30 minutes for this kata. But I figured it is a good way to force myself to learn the syntax and start getting into an F# refactoring practice.

The first thing I started to do was write a test that instantiated Calculator() again, which looked something like this. I didn’t like that, so I tried to pull the instantiation out of the test.

  [<TestCase("1", Result = 1)>]
  member x.Add_single_number_returns_that_number expression =
    let calculator = Calculator()
    calculator.Add expression

I ended up with something like this then.

let calculator = Calculator()

[<TestFixture>]
type CalculatorTests() =
  [<Test>]
  member x.add_empty_string() =
    let result = calculator.Add ""
    Assert.That(result, Is.EqualTo 0)

  [<TestCase("1", Result = 1)>]
  member x.Add_single_number_returns_that_number expression =
    calculator.Add expression

After adding that code with that little refactor I ran it, red light fail, so I then moved on to implementation for this test.

type Calculator() = 
  member x.Add expression = 
    match expression with
    | "" -> 0
    | _ -> 1

Everything passed. So now on to the next scenario: other single-number strings. I add another test case and result condition.

  [<TestCase("1", Result = 1)>]
  [<TestCase("2", Result = 2)>]
  member x.Add_single_number_returns_that_number expression =
    calculator.Add expression

It runs, gets a red light fail, I then implement with this minor addition.

type Calculator() = 
  member x.Add expression = 
    match expression with
    | "" -> 0
    | _ -> Int32.Parse expression

Before moving on, I’m just going to cover some of the syntax I’ve been using. The | delimits individual matches, individual discriminated union cases, and enumeration values. In this particular case I’m just using it to match the empty string or the wildcard. Speaking of which, the _ is a wildcard match, or specifies a generic parameter. To learn more about these in detail, check out match expressions or generics. There are lots of good things in there.

The other syntax is somewhat more self-explanatory, so I’m going to leave it as is for the moment. In the end, when executing the tests, it is at least evident what is going on. Alright, back to the kata. Let’s actually add two numbers. For the test I’m just going to add another TestCase with two actual numbers.

  [<TestCase("1", Result = 1)>]
  [<TestCase("2", Result = 2)>]
  [<TestCase("1 2", Result = 3)>]
  member x.Add_single_number_returns_that_number expression =
    calculator.Add expression

Fails, so on to implementation. I’m just going to do this the cheap “it works” way and do something dumb.

type Calculator() = 
  member x.Add expression = 
    match expression with
    | "" -> 0
    | _ when expression.Contains " " -> 3
    | _ -> Int32.Parse expression

That’ll give me a passing green light, but I’ll add another TestCase attribute to the test and get another failing test.

  [<TestCase("1", Result = 1)>]
  [<TestCase("2", Result = 2)>]
  [<TestCase("1 2", Result = 3)>]
  [<TestCase("2 3", Result = 5)>]
  member x.Add_single_number_returns_that_number expression =
    calculator.Add expression

I’ll add the following code to implement and get a passing test.

type Calculator() = 
  member x.Add expression = 
    match expression with
    | "" -> 0
    | _ when expression.Contains " " -> 
        let numbers = expression.Split [| ' ' |]
        (Int32.Parse numbers.[0]) + (Int32.Parse numbers.[1])
    | _ -> Int32.Parse expression

Ok. So that part of the match looks for an empty space, then takes the two numbers on opposite sides of that space (array items 0 and 1), parses them, and adds them together. Keep in mind that ‘ ‘ signifies a single character and not a string: the Split call takes the char array [| ‘ ‘ |], while Contains, which executes on a string, is passed the string ” “.

For the tests I’m going to do a refactor, breaking them apart a bit and renaming them using the ``xyz`` double-backtick method-naming technique. After the refactor the code looked like this. I got this idea from the “Use F# to write unit tests with readable names” tip.

[<TestFixture>]
type CalculatorTests() =
  [<Test>]
  member x.``should return zero if no string value is passed in.``() =
    let result = calculator.Add ""
    Assert.That(result, Is.EqualTo 0)

  [<TestCase("1", Result = 1)>]
  [<TestCase("2", Result = 2)>]
  member x.``take one number and return that number`` expression =
    calculator.Add expression

  [<TestCase("1 2", Result = 3)>]
  [<TestCase("2 3", Result = 5)>]
  member x.``add single number to single number and return sum`` expression =
    calculator.Add expression

At this point I’m going to take a break, and wrap this up in a subsequent part of this series. It’s been a fun troubleshooting and getting started string calculator kata. So stay tuned and I’ll be slinging some F# math at ya real soon.

Reference:

_______1 |> F# – Getting Started, Thinking Functionally (Notes)

Recently I took the plunge into writing F# on OS-X and Linux (Ubuntu specifically). This is the first of a new series I’m starting (along with my other ongoing series on JavaScript workflow and practices). In this article particularly I’m just going to provide an overview and notes of key paradigms in functional programming. Most of these notes are derived from a great series called “Thinking Functionally“.

#1 Thinking Functionally: Introduction

This is the article that kicks off the series. The emphasis is on realizing that a different way of thinking needs to be applied when using a functional language – the difference between functional and imperative thinking. The key topics of the series include:

  • Mathematical Functions
  • Functions and Values
  • Types
  • Functions with Multiple Parameters
  • Defining Functions
  • Function Signatures
  • Organizing Functions

#2 Mathematical Functions

Functional programming, its paradigms and its origins, are all rooted in mathematics. A mathematical function looks something like this:

Add1(x) = x + 1

  • The set of values that can be used as input to the function is called the domain.
  • The set of possible output values from the function is called the range.
  • The function is said to map the domain to the range.

In F# the definition would look like this.

let add1 x = x + 1

The signature of the function would look like this.

val add1 : int -> int

Key Properties of Mathematical Functions

  • A function always gives the same output value for a given input value.
  • A function has no side effects.

Pure functions

  • They are trivially parallelizable.
  • They can be used lazily.
  • A function only needs to be evaluated once. (i.e. memoization)
  • The function can be evaluated in any order.
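To make the memoization point concrete, here’s a minimal sketch (the memoize helper is my own illustration, not from the series). Caching is only safe because a pure function always gives the same output for a given input:

```fsharp
open System.Collections.Generic

// Cache the results of a pure function by input value.
let memoize (fn: 'a -> 'b) =
    let cache = Dictionary<'a, 'b>()
    fun x ->
        match cache.TryGetValue x with
        | true, result -> result        // already computed: reuse it
        | false, _ ->
            let result = fn x           // compute once, then cache
            cache.[x] <- result
            result

let add1 x = x + 1
let memoizedAdd1 = memoize add1   // evaluated once per distinct input
```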

Prospectively “Unhelpful” properties of mathematical functions

  • The input and output values are immutable.
  • A function always has exactly one input and one output

#3 Function Values and Simple Values

Looking at this function again:

let add1 x = x + 1

The x means:

  • Accept some value from the input domain.
  • Use the name “x” to represent that value so that we can refer to it later.

This is also referred to as “x is bound to the input value.” In this binding it will only ever be that input value. This is not an assignment to a variable. Emphasis on the fact that there are no “variables”, only values. This is a critical part of functional thinking.
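A minimal sketch of the binding-versus-assignment difference (my own example, not from the series):

```fsharp
let demo () =
    let x = 5
    // x <- 6     // compile error: x is not mutable – bindings aren't variables
    let x = 6     // legal, but this is a *new* binding shadowing the old one
    let mutable y = 5
    y <- 6        // mutation requires an explicit opt-in with 'mutable'
    x + y         // 12
```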

Function Values: a value that binds a name to a function.

Simple Values: This is what one might think of as constants.

let c = 5

Simple Values vs. Function Values

Both are bound to names using the keyword let. The key difference is that a function needs to be applied to an argument to get a result, while a simple value doesn’t – it is the value it is.

“Values” vs. “Objects”

In F# most things are referred to as “values”, contrary to “objects” in C# (or other languages like JavaScript for that matter). A value is just a member of a domain, such as the domain of ints, strings, or functions that map ints to strings. In theory a value is immutable and has no behavior attached to it. However, in F# even some primitive values have object-like behavior, such as calling a member or property with dot-notation syntax like theValue.Length.

Naming Values

Most of the naming follows the general ruleset that applies to practically every language. However, there is one odd feature that comes in handy in some scenarios. Let’s say you want to have a name such as “this is my really long and flippantly ridiculous name” for a value. What you can use is double backticks – ``this is my really long and flippantly ridiculous name`` – to have that be the name. It’s a strange feature, but I’ll cover more of it in subsequent articles.
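For example, double backticks let an entire phrase act as an identifier:

```fsharp
let ``this is my really long and flippantly ridiculous name`` = 42

// The backtick-quoted name is used like any other identifier:
let result = ``this is my really long and flippantly ridiculous name`` + 1
```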

In F# it is also practice to name functions and values with camel case instead of pascal case (camelCase vs. PascalCase).

#4 How Types Work with Functions

Function signatures look like this, with the arrow notation.

val functionName : domain -> range

Example functions:

let intToString x = sprintf "x is %i" x  // format int to string
let stringToInt x = System.Int32.Parse(x)

Example function signatures:

val intToString : int -> string
val stringToInt : string -> int

This means:

  • intToString has a domain of int which it maps onto the range string.
  • stringToInt has a domain of string which it maps onto the range int.

Primitive Types

Standard known types you’d expect: string, int, float, bool, char, byte, etc.

Type Annotations

Sometimes the type might not be inferred; if that is the case, the compiler can take annotations to resolve the type.

let stringLength (x:string) = x.Length   
let stringLengthAsInt (x:string) :int = x.Length 

Function Types as Parameters

A function that takes other functions as parameters, or returns a function, is called a higher-order function (sometimes abbreviated as HOF).

Example:

let evalWith5ThenAdd2 fn = fn 5 + 2 

The signature looks like:

val evalWith5ThenAdd2 : (int -> int) -> int

Looking at an example executing:

let times3 x = x * 3
evalWith5ThenAdd2 times3

The signature and result look like:

val times3 : int -> int
val it : int = 17

Also, these are very sensitive to types, so beware.

let times3float x = x * 3.0
evalWith5ThenAdd2 times3float

Gives an error:

error FS0001: Type mismatch. Expecting a int -> int but 
              given a float -> float

Function as Output

A function value can also be the output of a function. For example, the following function will generate an “adder” function that adds using the input value.

let adderGenerator numberToAdd = (+) numberToAdd
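Using the generated adder might look like this (the add5 name is my own, for illustration):

```fsharp
let adderGenerator numberToAdd = (+) numberToAdd  // repeated from above for context

let add5 = adderGenerator 5   // add5 : int -> int
add5 2                        // 7
add5 40                       // 45
```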

Using Type Annotations to Constrain Function Types

This example:

let evalWith5ThenAdd2 fn = fn 5 + 2

Becomes this:

let evalWith5AsInt (fn:int->int) = fn 5

…or maybe this:

let evalWith5AsFloat (fn:int->float) = fn 5

The “unit” Type

When a function returns no output, it still requires a range, and since there is no void function in mathematical functions, something must supplant void as the return output. This special range is called “unit” and has exactly one value, “()”. It is similar to a void type or a null in C#, but also is not, in that it is an actual value, “()”.
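A couple of small sketches (my own, not from the series) showing unit in action:

```fsharp
let whatIsThis = ()                 // val whatIsThis : unit = ()

// A function used only for its side effect has unit as its range:
let printInt x = printfn "%i" x     // val printInt : int -> unit

// Evaluating a unit-returning expression just yields ():
let nothing = printInt 42
```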

Parameterless Functions

For a WAT moment here, parameterless functions come up. In this example we might expect “unit” and “unit”.

let printHello = printf "hello world" 

But what we actually get is:

hello world
val printHello : unit = ()

Like I said, a very WAT kind of moment. To get what we expect, we need to force the definition to have a “unit” argument, as shown.

let printHelloFn () = printf "hello world"

Then we get what we expect.

val printHelloFn : unit -> unit

So why is this? Well, you’ll have to dive in a little deeper and check out the F# for fun and profit “Thinking Functionally” entry on “How Types Work with Functions” for the full lowdown. Suffice it to say I’ve documented how you get what you would expect; the writer on that site goes into the nitty gritty of what is actually happening here. Possibly you might have guessed what is happening already. There is also some strange behavior that takes place with forcing unit types via the ignore function that you may want to read about in that article too. But if you’re itching to move on and get going faster than digging through all of this errata, keep going. I’m going to jump ahead to the last section I want to cover for this blog entry of notes: currying.

#5 Currying

Haskell Curry

Currying is the process of breaking multi-parameter functions down into smaller one-parameter functions – it is the way to write a function with more than one parameter. For more history on this, check out Haskell Curry the mathematician and dive deep into his work (which also covers a LOT of other things besides merely currying, like combinatory logic and Curry’s paradox, for instance).

Basic example:

let printTwoParameters x y = 
   printfn "x=%i y=%i" x y

This of course looks neat enough, right? Well, if we look at the compiler rewrite, it gets rather interesting.

let printTwoParameters x =
   let subFunction y = 
      printfn "x=%i y=%i" x y
   subFunction